Jimmie Lightner

  • Install and trust DoD Certificates on MacOS

    Trying to use your CAC on a Mac? Don’t want to run some sketchy compiled app to install DoD Certs on your box? Check this handy scripty-doo out. It grabs the latest PKI zip, unpacks it, converts the PKCS#7 bundles into individual PEM certificates, and then installs them into the system’s trust store. Be prepared to either type your password a zillion times or use TouchID to modify the trust store – thanks, Apple!

    #!/bin/bash
    set -eu -o pipefail
    
    export CERT_URL='https://dl.dod.cyber.mil/wp-content/uploads/pki-pke/zip/unclass-certificates_pkcs7_DoD.zip'
    
    # Download & extract the DoD root certificate bundle
    cd ~/Downloads/
    /usr/bin/curl -LOJ "${CERT_URL}"
    
    /usr/bin/unzip -o "$(basename "${CERT_URL}")"
    
    cd "$(/usr/bin/zipinfo -1 "$(basename "${CERT_URL}")" | /usr/bin/awk -F/ '{ print $1 }' | head -1)"
    
    # Convert each .p7b bundle to PEM, split it into individual certs, and trust them
    for item in *.p7b; do
      TOPDIR=$(pwd)
      TMPDIR=$(mktemp -d "/tmp/$(basename "${item}" .p7b).XXXXXX") || exit 1
      PEMNAME=$(basename "${item}" .p7b)
      openssl pkcs7 -print_certs -in "${item}" -inform der -out "${TMPDIR}/${PEMNAME}"
      cd "${TMPDIR}"
      # Split the PEM bundle on blank lines; each x?? chunk holds one certificate
      /usr/bin/split -p '^$' "${PEMNAME}"
      # Drop the empty trailer chunk left after the final certificate
      rm "$(ls x* | tail -1)"
      for cert in x??; do
        sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain "${cert}"
      done
    
      cd "${TOPDIR}"
      rm -rf "${TMPDIR}"
    done
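
    If you want to sanity-check the result afterward, something like this should list the freshly trusted roots in the System keychain (grepping for “DoD” in the label is an assumption about the cert names – adjust to taste):

    /usr/bin/security find-certificate -a -c "DoD" /Library/Keychains/System.keychain | grep labl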
    June 14, 2024
  • Fixing janky Spaces behavior on MacOS with Azure Virtual Desktop and Microsoft Remote Desktop App Streaming

    Since joining Microsoft I’ve worked remotely for nearly 9 years. My current setup consists of a MacBook Pro, a Thunderbolt 4 Dock, and an LG 34WK95U-W (an UltraWide 5K2K Nano IPS LED Monitor). I run a Windows 11 ARM VM via Parallels full-screen on the MacBook (basically for Teams and Outlook, so I don’t have to allow the Death Star to manage my physical device via enrollment), and then run other useful apps and workflows on the external monitor.

    With the introduction of Azure Virtual Desktop (AVD hereafter) I’ve come to rely on several systems to get my work done – especially since I’m so tightly integrated with the customer that I access their systems through AVD. I use Teams within their organization as much as, if not more than, I do within Microsoft. 😅

    One of the things I love about AVD is App Streaming. Rather than having to connect to yet another VM Desktop, individual Applications can be published to me and appear almost as if they’re native on my device (despite having FUGLY Windoze styling) by streaming the App window itself. This works GREAT for Teams!

    This has been an amazing boost to my workflow, but one crappy behavior has been nagging at me. Every time I click into one of the AVD Remote Apps, the built-in screen on the Mac switches away from the Space containing the Windoze VM back to the MacOS desktop, no matter which Screen, Space, or Monitor the Remote App is on.

    I was certain Mission Control and Spaces had something to do with this, but nothing I changed in the relevant settings corrected the behavior.

    Luckily, a Kagi search turned up a useful thread of similar complaints from nearly TEN years ago: https://discussions.apple.com/thread/4995042?sortBy=best – the solution was originally for MacOS Leopard, and thankfully it still works on Sonoma 14.5!

    defaults write com.apple.dock workspaces-auto-swoosh -bool NO
    killall Dock

    With this change things operate a bit differently. Before, rapid app switching (like Alt+Tab on Windows, Cmd+Tab on Mac) would automatically bring the selected app to the forefront on an available Space; now it no longer does. I have to manually flip between Spaces, but this is MUCH better than the unexpected and wacky automatic jank from before!
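
    If you decide the trade-off isn’t for you, reverting is just a matter of deleting the key, which restores the default auto-switching behavior:

    defaults delete com.apple.dock workspaces-auto-swoosh
    killall Dock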

    Hopefully this helps someone else! 🙂

    May 23, 2024
  • Making W3-Total-Cache work with Storage Accounts in Azure Government

    So you thought you were going to be slick and save some time by using the Marketplace template to deploy “WordPress on App Service” in Azure Government? You likely even selected the ‘Recommended’ tick-box to offload media content to Azure Blob Storage via the template add-on section and then hit “Review and Create!” But when you finally logged in and created your first post with some media, you noticed an error that content couldn’t be uploaded to Azure Blob Storage using the W3-Total-Cache plugin for WordPress… You poked your way through the WordPress configuration to the Plugin Settings and found this section:

    Yours DOES NOT look like mine above. 😉 (I forgot to get a BEFORE screenshot. Woops.) Despite the “or CNAME” box having the correct “usgovcloudapi.net” base DNS name in it, when you click Test Microsoft Azure Storage upload you receive something to the effect of: Unable to resolve DNS name <storageaccountname>.blob.core.windows.net.

    The problem? The W3-Total-Cache plugin uses the deprecated Azure Storage PHP Client Libraries, which have a hard-coded service URL that only works in Azure Commercial. If you modify line 54 of the Resources.php file from blob.core.windows.net to blob.core.usgovcloudapi.net, you’ll have a much easier time! You’re welcome! 🙂
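
    If you’d rather not hand-edit it, a quick sed from the WordPress root does the same thing – I’m using find here because the vendored path to Resources.php can differ between plugin versions:

    cd /path/to/wordpress   # your WordPress root
    sed -i 's/blob\.core\.windows\.net/blob.core.usgovcloudapi.net/' \
      "$(find wp-content/plugins/w3-total-cache -name Resources.php | head -1)"

    Just remember the plugin ships a fresh copy of that file whenever it updates, so you’ll need to re-apply the change after upgrading W3-Total-Cache.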

    April 12, 2024
  • How to host WordPress without it being a SNITCH

    I hate companies that make users think they’re getting something for free but then have these hidden functions that report back to the mother-ship what users are doing with them. WordPress’s parent company, Automattic, doesn’t have the best reputation as of late (check out the deal they made to sell your data to OpenAI). Just now I was prompted to “Verify your admin email address is still correct” while logging in. That seems innocent enough – and even mildly HELPFUL… but if you knew what Automattic was REEEAAAALLY doing, you would think differently. When you click that cute little blue button to “confirm” your information is correct it’s not just saving your email address locally (you DO host WordPress yourself, right?).

    119 6 veth119i0-OUT 11/Apr/2024:18:06:40 -0400 policy DROP: IN=fwbr119i0 OUT=fwbr119i0 PHYSIN=veth119i0 PHYSOUT=fwln119i0 MAC=bc:24:11:60:5d:f6:22:e5:10:fa:b2:af:08:00 SRC=172.16.13.32 DST=198.143.164.251 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=61509 DF PROTO=TCP SPT=46840 DPT=443 SEQ=1268854965 ACK=0 WINDOW=62720 SYN 

    Above we see the web server being denied a TCP connection to an HTTPS endpoint (thank you, outbound firewall). Let’s reverse lookup that IP to see who it belongs to…
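
    If you’re following along at home, the reverse lookup is a one-liner:

    dig -x 198.143.164.251 +short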

    WordPress, you sneaky little shits! What a clever way to hoover up user data! We can only assume they were going to guzzle that email address and who knows what else.

    In order to keep your data safe from these jackasses, I’d suggest hosting WordPress in a manner where you can control both INBOUND and OUTBOUND traffic flows. I do this with the built-in firewall of a piece of software called Proxmox – an open source virtualization platform.

    When I spin up a Linux container, I like to set some VERY strict rules for how it can access the network. In my case, both Input and Output policy for the firewall are set to DROP. This allows me to be very granular when permitting only the traffic necessary.

    For this container I allow only outbound access to perform DNS lookups and to my local proxy service. Inbound HTTP and HTTPS are only permitted from the Proxy. Anything else not explicitly defined hits those default drop rules. When it’s time to perform software updates, I stop the web server in the container before temporarily toggling the Output Policy to allow traffic so that WordPress doesn’t sneakily send anything while my guard is down.
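
    For reference, here’s roughly what that looks like in the container’s firewall file (/etc/pve/firewall/<vmid>.fw). The proxy address and port below are made-up placeholders – substitute your own:

    [OPTIONS]
    enable: 1
    policy_in: DROP
    policy_out: DROP

    [RULES]
    OUT ACCEPT -p udp -dport 53 # DNS lookups
    OUT ACCEPT -p tcp -dest 172.16.13.2 -dport 3128 # local proxy (placeholder)
    IN ACCEPT -p tcp -source 172.16.13.2 -dport 80 # HTTP from the proxy only
    IN ACCEPT -p tcp -source 172.16.13.2 -dport 443 # HTTPS from the proxy only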

    I hope this has been enlightening!

    April 11, 2024
  • Why Linux?

    Because Windows and MacOS both suck.

    I’ve dabbled with Linux since I was a kid. I remember asking my uncle to buy a copy of Red Hat Linux (I think it was Red Hat 5?) on CD from eBay for me when I was just a middle-school-aged nerd. We had painfully slow dial-up internet at the time – downloading it would have taken a millennium – and I wasn’t old enough to have the means to buy things on the net by myself.

    Thinking back, Red Hat really started it all for me – and got me into a lot of trouble, too. I accidentally wiped the hard drive in our family computer – a fancy Gateway 2000 machine my stepmom’s parents gifted us – while trying to re-partition the drive to dual boot it. I remember my father’s face and that vein on the side of his forehead throbbing like an alien was about to burst out of him. He said something along the lines of “if you can put it back, I won’t beat your ass.” It didn’t take long before I had Linux dual booting with Windows – and escaped, ass unbeaten! The best part of this outcome was that no one else in our house had a clue what Linux was – it was like I had my own computer whenever I rebooted into it!

    My first hurdle was figuring out how to interact with hardware that didn’t work natively in Linux. That first piece of hardware was the modem! Our machine had a WinModem, which worked fine in Windows but appeared almost non-existent in Linux. I remember when I finally figured out how to initialize it and get it to dial out! I had to issue dial strings directly: ATZ to initialize the hardware, ATM0L0 to set the volume to ‘0’ so my parents wouldn’t hear the screeching at 3am when I’d sneak downstairs to tinker, and ATDT <number> to dial. This was also when I stumbled into ‘hacking’ for the first time. When the modem finally connected to our ISP, instead of dropping into PPP and exchanging data automatically like in Windows, the connection stayed in SLIP mode. I remember seeing “Ascend MAX Advanced Pipeline Terminal Server” on my screen, being frustrated that I was “connected” but couldn’t exchange data, not understanding what had just fallen into my lap.

    I remember poking around sheepishly the first few times I dialed in, seeing what commands were available, and then being amused that I could list other user sessions. One day it occurred to me to search for my friend who lived across town… who was connected… until I disconnected him! I messaged him on AOL IM “Check this out” before disconnecting him several times until he realized I was the one doing it and biked over to our house to see how.

    After some harmless fun messing with friends who used the same ISP, I created several accounts in the system that I’d end up using from then all the way through college years when I needed “emergency” connectivity for “reasons.” Those accounts, though, eventually got my father’s computer confiscated and me unable to be alone, unsupervised, in a room with a computer at school for two years.

    In my college years I mostly played with Mandrake and Slackware distributions (my friend Cassie’s computer even saw quite a few iterations of those). Several of my floor-mates were also CSE majors, and friends sometimes jokingly referred to us as “The Linux Fascists.” To this day I’m still not sure what they meant by that. In the time after college I settled down and stuck with Debian for many years. Linux, then, was mostly for utility. I ran it on a NAS, a firewall, and on a couple of old machines that were only ever powered on when I was bored and wanted to tinker. My main machine was a PowerBook “Pismo” I had rescued while dumpster diving. It had a dead LCD panel, but worked fine when plugged into an external monitor. Replacing the panel was a piece of cake! MacOS X was AMAZING back then – sure, it was slow at first in the early days, but it was still way better than the dumpster-fire its descendant has become.

    Linux also got me my first IT job. At age 19 I was working for a “media production company” as a receptionist when the owner learned of my computer science background and offered me double time to come set up the office and storefront of a new business they were opening. I drove across the city on a Saturday morning to collect on that sweet, sweet payche… opportunity. While we were there working on getting the registers and network set up, the media company’s Ad server died (it was running a huge campaign for HBO at the time – basically paying everyone’s paycheck) and he looked at me and said “You said you know Linux, right? Can you go help get that back online? Without it we’re hosed.” Needless to say, I fixed the Ad server. Two weeks later, I took over as the lead of network infrastructure.

    These days I spend a lot of time mucking about with other platforms and abstractions. Most Azure customers I work with run a lot of Windows workloads (or just web / cloud / data stuff). Despite working for Microsoft, my only real interaction with Windows is on my work computer. I absolutely hate it. The sheer amount of telemetry data Microsoft collects is atrocious. Users should revolt!

    My desktop is a custom built gaming PC that used to run Windows before I got fed up and switched it over to Arch. (yes, yes, I run Arch Linux… don’t be toxic about it) For my mobile compute needs I’m still rocking a MacBook Pro, though likely not for much longer. I’ve got a previous-gen 2023 14″ M2 Max with 64 gigs of RAM… that I honestly thought would last me forever. The problem? You guessed it – the dumpster-fire known as OS X. I refuse to upgrade beyond Ventura (it took nearly a year for me to get here from Monterey!) because of the continued dumbing-down of the platform and intrusive telemetry and data collection (thankfully LittleSnitch can easily block all of this crap). I’ve contemplated loading something like Asahi, but it’s still early days and I’m not able to live without Thunderbolt and external displays on my notebook.

    I’m still looking for alternatives. I want a mobile machine with equally sexy hardware and form factor, but with open firmware and hardware that works with Linux. And it needs to be reliable and from a reputable company. Sadly, that seems to be a tall order. Sure, lots of modern, sexy hardware can run Linux, (Hellooooo X1 Carbon) but closed source firmware is a HUGE turn-off for me. Framework laptop is fugly. Malibal is a SKETCHY company (see their review controversy). System76 … ehhhh I just don’t care for. What’s that leave? A StarBook? With a 1920×1080 display? My phone has more pixels!

    Woops. Somehow I managed to get a little off track onto a mobile hardware rant, but the point is – Linux is where I started, it paved the way to where I am, and while I may have taken a brief excursion off the path to see what all the other platforms were about, it’s where I choose to be.

    January 20, 2024
  • Disable Azure CLI Telemetry Collection

    Pfffffffft! Dafuq? No thank you, Microsoft. That’s getting #disabled right TF now. (See Microsoft’s official documentation)

    az config set core.collect_telemetry=no
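
    If you’d rather bake it into dotfiles or CI, the Azure CLI configuration docs describe an environment-variable override of the form AZURE_<SECTION>_<NAME>, which for this knob should be:

    export AZURE_CORE_COLLECT_TELEMETRY=no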
    January 18, 2024
  • Writing systemd User Timers

    Why am I writing this? I omitted installing the cron package on my Linux box since systemd provides a timers function. Who needs two packages to do one thing? Heh… for a moment I thought maybe I did, because some of the documentation out there is a little confusing. Or I’m just dumb. Or both. 😛 So here’s a quick and dirty mash-up of the Arch Linux wiki for User units and systemd Timers (because I want this to run when my user is logged in!).

    Timers require two files – the service file and the timer file. By default a timer activates the service with the same name, so for simplicity I named mine the same thing; if you want different names, point the timer at the service with Unit= in its [Timer] section. User unit files live in the .config directory for systemd within your home dir. Here’s mine:

    ls -l /home/jimmie/.config/systemd/user
    
    drwxr-xr-x 2 jimmie jimmie 4.0K Jan 16 16:51 default.target.wants
    drwxr-xr-x 2 jimmie jimmie 4.0K Jan  4 09:03 sockets.target.wants
    drwxr-xr-x 2 jimmie jimmie 4.0K Jan 17 08:53 timers.target.wants
    -rw-r--r-- 1 jimmie jimmie  179 Jan 16 16:43 update-blocklists.service
    -rw-r--r-- 1 jimmie jimmie  142 Jan 16 16:45 update-blocklists.timer
    

    The timer unit file (update-blocklists.timer) just tells systemd when to do a thing. This one fires 3 minutes after boot, then every 180 minutes after the service last ran.

    [Unit]
    Description=Run "Update Blocklists" every 180 minutes
    
    [Timer]
    OnBootSec=3min
    OnUnitActiveSec=180min
    
    [Install]
    WantedBy=timers.target
    

    The service unit file (update-blocklists.service) tells systemd what to do.

    [Unit]
    Description=Update Blocklists
    After=network.target
    
    [Service]
    Type=oneshot
    WorkingDirectory=%h
    ExecStart=%h/Scripts/update-blocklists.sh
    
    [Install]
    WantedBy=default.target
    

    The main parts here give some basic dependencies – execute only after the network is up, do the work in my homedir, and the location of the script to execute. I’m not going to post the contents of the script because you can go find it yourself here.

    Once you have the unit files in place (and the script you want to execute, too) you can enable this thing. Trigger a daemon-reload first, then make sure you pass BOTH the service and the timer units on the same line or your timer will not work. I know because I tried one, then the other, before using both – when I came back to the machine this morning, nothing had updated since I’d manually triggered the service.

    systemctl --user daemon-reload
    
    systemctl --user enable --now update-blocklists.service update-blocklists.timer

    Now you can list your user timers:

    systemctl --user list-timers --all
    NEXT                         LEFT      LAST                         PASSED   UNIT                     ACTIVATES
    Wed 2024-01-17 11:53:51 EST  2h 59min  Wed 2024-01-17 08:53:51 EST  25s ago  update-blocklists.timer  update-blocklists.service
    
    1 timers listed.
    

    Tada. Easy, right? Lastly, if you need to peep at the logs for your timer, journalctl can filter those! Check it out:

    journalctl --user-unit update-blocklists
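
    And if the firehose is too much, --since narrows it down:

    journalctl --user-unit update-blocklists.service --since today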

    Happy Hump Day!

    January 17, 2024
  • Parallels Desktop VM Isolation

    I’m fascinated by the logic here. Why is this experience not inverted? Why is the user only prompted for confirmation when ENABLING isolation (because: why would I want said VM to be able to read the contents of my clipboard?) but not when disabling it? Isn’t the isolation the entire point of the VM? 🤔

    January 10, 2024
  • Holiday Time Off Shenanigans

    What do you do for time off? Home improvement? Technical skilling? Visiting family? Hang out with friends? I’m off work until January 3rd! I need some things to keep myself busy:

    • Finish the trim in our kitchen
    • Set up a live-stream of the office reef tank
    • Figure out mail delivery for the lab
    • Look at server colocation options
    • Hang out with friends and family
    • Renew my certifications (yuck)

    Seems like enough for a while. It’s a rough work in progress, but check out https://www.communityreeftank.com

    December 15, 2023
  • A&M Landscaping and Lawncare

    While hopping off a work call just now I happened to notice the sound of a mower outside. Odd. When I peered out the window, the neighbor’s yard was being mowed. VERY ODD. Especially today, on the 28th of November. It’s currently 29° Fahrenheit (that’s -1.67° Celsius for you metric weirdos) and the ground felt pretty well frozen just a bit ago when I took the dog out to potty.

    What the hell? If I were paying a REPUTABLE service to care for my lawn, I’d hope they wouldn’t rip me off with a “final mow” on a day when they’d actually do more harm to the grass than good. Then again, A&M are the same folks who damaged and put ruts in my yard while trying to park their truck to mow next door – and argued with me about “rights of way” and easements when I confronted them about it. 🤣 (See: passive aggressive red reflectors)

    Another local company doing shady, weird shit! 🤷🏼‍♂️

    November 28, 2023