Jehan Alvani
  • A Belated Musical Recap of 2023

    I just found this draft in my notes, figured I’d better put it up before I have to start writing 2024’s.


    Back in the day, before streaming services were a thing and when the internet was young and so was I, filled with optimism and the joy of discovery, I looked forward to Jeph Jacques' annual summary of the music he dug that year. It’s how I found so many of the bands I loved then, and a few I still love: Blood Brothers, Liars, and Errors, to name just a few.

    Anyhow, in a rudimentary effort to do some of the same, here’s what I really got into in 2023. I’m organizing into categories, but I’m not ranking the music. Ranking is a pointless exercise.

    New Albums

    Del Water Gap - I Miss You Already + I Haven’t Left Yet

    The album opens with “All We Ever Do Is Talk”, a soft and warm bop seemingly written specifically for moments of sepia-toned intimacy. Every track is solid. In addition to “All We Ever Do Is Talk”, standouts include singles “Losing You” and “Coping on Unemployment”.

    Petey - USA

    I was into Petey as a comedian, and then one day I was listening to my personal station on Apple Music and heard a song that I dug, and it was by “Petey”, and the Petey guy in the picture looked a lot like Petey from the internet, and it turns out it IS the same guy, and he’s not only funny but he writes good music. Standout tracks: “I’ll Wait” and “Did I Mention I’m Sorry”.

    Royal Blood - Back To The Water Below

    Two dudes, two instruments. Maybe a little dancier than previous Royal Blood albums, but the groove is good and it rocks.

    Noah Kahan - Stick Season (We’ll All Be Here Forever)

    I slept on Stick Season when it was originally released, but the “Deluxe” version, which includes a bunch of additional tracks, is fantastic. The title track has gotten plenty of well-deserved attention, and so has “Dial Drunk”. If you haven’t listened to the Song Exploder episode with Noah Kahan on “Stick Season”, do it.

    ††† - Goodnight, God Bless, I Love U, Delete

    A 40-something dude who is into Deftones and Crosses? I’m sure you’re as shocked as I am. Of course I’m 100% here for this album. Chino Moreno and Shaun Lopez worked on this album over the course of something like five years, so it doesn’t capture a specific point in time the way many albums do, but the feel is cohesive.

    Manchester Orchestra - The Valley of Vision

    I’d be lying if I said I’ve been into Manchester Orchestra since the beginning. I dug Mean Everything to Nothing but never followed them after. What a massive miss on my part, and what a joy it is to go back and listen to their whole back catalog now! The Valley of Vision is certainly more mellow, evoking the same frustrated confusion that informs so much of their work, but it reflects a bit more resignation and sadness as opposed to frustration and urgency. A different tone, but one that resonates with our current moment.

    The Beaches - Blame My Ex

    “Blame Brett” caught me off guard with how catchy and bouncy it is - a modern and straightforward fun rock song. The rest of the album lives up to the promise of the first single. Love it.

    Cannons - Heartbeat Highway

    The first song I heard was “Loving You” and it’s easily my favorite, dance-y and sultry. The rest of the album brings stronger disco vibes. I dig it and I can listen to “Loving You” specifically indefinitely.

    The National - First Two Pages of Frankenstein & Laugh Track

    Yes, Dad rock. Sure. But it’s also The National’s best work since Trouble Will Find Me.

    Singles & EPs

    Kenya Grace - Strangers

    This single was all over Car Instagram this year, and I got to be very cool and say “Yeah I was into her before she was big” which was neat. Great song, great voice. Love the vaporwave kind of night-drive feel of the song. It captures that feeling of really not wanting to grow apart from someone but also acknowledging that it’s inevitable.

    IDLES & LCD Soundsystem - Dancer

    Really looking forward to this album; “Dancer” and the IDLES-only tracks “Grace” and “Gift Horse” are just excellent. Can’t wait to hear more.

    15 February 2024
  • For Sale: 2016 Audi allroad

    It’s time for me to part ways with my 2016 allroad 2.0t Premium Plus. Lightly, and if I may say, very tastefully modded 😎. Black on Chestnut leather. Currently sits at 48,681 miles, though I’ll be enjoying it until it sells.

    I’m the second owner. The car doesn’t have an entirely clean title: I was rear-ended by a kid in a Jeep in 2019, just a few weeks after I bought it, and in October of the same year the PPF and ceramic were thoroughly put to use when the car was covered in outdoor paint. Long story, but if you’re interested, you can read about the whole thing in my car’s diary thread. It looks beautiful now, if I do say so myself. I had the bodywork done by Paramount Center in Fife, an Audi-certified shop.

    Email me for more information or to come see it

    Asking: $20,000

    VIN: WA1UFAFL6GA003488

    BadVIN Report - I prefer BadVIN to CarFax

    Mods

    • IE CAI
    • IE Stage 1 tune
    • H&R Coilovers
    • H&R Rear Swaybar
    • Porsche Macan 4-pot front brake calipers
    • RSNAV S4 Head Unit, integrated dashcam and HD backup cam
    • Europrice FBSW w/ paddle shifters
    • Xpel Fusion Paint Protection Film and Ceramic Coating
    • Module to add hatch close by remote

    Details

    Macan/Q5/Sq5 Brembo 4-Pot Front Brakes and 345mm Front Rotors

    • Brake Pads:
      • Brembo: 8R0698151R + 1 Sensor
      • Akebono: EUR1546
    • Front Rotors: OE: 8K0615301M / ZIM-100333252
    • Rear Rotors: OE: 8K0615601B / ZIM: ZIM-100333320

    Maintenance Sticker OCR

    WA1UFA FL 6 GA003488 
    8KH 52A 2811863=3
    A4 Allroad   q.2.0  R4
    162 KW  ABS. 	07/15
    CPMB	KRR 	QCU
    LY9T / LY9T   N1F/VR
    EOA	7D5	4UB	6XL	5SG	5RW
    1KW	J1N	1LA		1AT	1BP
    3FU			5MG 7X7		
    FOA		9G3	0G7	0YM	0JJ
    TL6	3NZ	8EH	U1B		GZ7
    1XW		8Q3	9Q8	8Z6	D60
    7T6	CH9	7K6	4X3	VJ1		
    3L4		VW1	3Y0	4I3	5D2
    1SH		7GB	Q1A		4GQ
    

    Decoded

    Decoded with the VW/Audi/Seat/Skoda Option Code Decoder

    E0A = No special edition
    7D5 = DVD player
    4UB = Air bag for NAR
    6XL = Exterior mirrors: with memory function, automatically dimming, electrically foldable/adjustable/heated
    5SG = Left exterior mirror: flat
    5RW = Right exterior mirror: convex (US) large viewing field
    1KN = Disc brakes, rear
    J1N = Battery 420 A (75 Ah)
    1LA = Disc brakes, front
    1AT = Electronic stabilization program (ESP)
    1BP = Suspension/shock absorption for special rough-road design
    3FU = Big roof system
    5MG = Decorative inserts, burr-walnut
    7X7 = Park distance control rear with rear view camera
    F0A = No special purpose vehicle, standard equipment
    9G3 = Alternator 120-180 A
    0G7 = Tiptronic
    0YM = Weight range 12 installation control only, no requirement forecast
    0JJ = Weight category front axle weight range 9
    TL6 = 4-cylinder gasoline engine 2.0 l unit 06H.H
    3NZ = Rear seat bench unsplit, backrest split folding
    8EH = Bi-functional headlight with gas discharge lamp, for driving on the right (US design)
    U1B = Instrument insert with mph speedometer, clock, tachometer and trip odometer
    GZ7 = Power latching for sliding door right
    1XW = Leather trimmed multi-function sports steering wheel
    8Q3 = Automatic headlight-range adjustment dynamic (self-adjusting while driving)
    9Q8 = Multi-function display/on-board computer
    8Z6 = Hot country
    D60 = 4-cyl. SI engine 2.0 l/162 kW 16V turbo FSI, homogeneous base engine is T61,TW6,TP6,T1P
    7T6 = Navigation system (MID)
    CH9 = Alloy wheels 8J x 18
    7K6 = Flat tire indicator
    4X3 = Side air bag front with curtain air bag
    VJ1 = Reinforced bumpers
    3L4 = Electric seat adjustment for both front seats, drivers seat with memory system
    VW1 = Side windows tinted green, from B-pillar to rear window gray tinted safety glass
    3Y0 = Without roll-up sun screen
    4I3 = Central locking system "Keyless Entry" without deadlock
    5D2 = Carrier frequency 315 MHz
    1SH = Additional engine and transmission guard
    7GB = Emission standard ULEV 2
    Q1A = Standard front seats
    4GQ = Windshield in heat-insulating glass
    

    7 September 2023
  • Block Notification Requests in Safari.

    Ben Werdmuller posted a quick blurb about entirely disabling website notifications in your browser of choice, and I’m right there with him in not ever granting permission for a website to send me notifications. And while he covered Chromium- and Gecko-based browsers, he omitted Safari and Webkit-based browsers.

    Fortunately, Apple doesn’t bury this in advanced settings panels. It’s in Safari Preferences → Websites (tab) → Notifications. Delete any entries you may have previously granted (or don’t), and uncheck the checkbox at the bottom.

    Screenshot of Safari Preferences
    26 July 2023
  • Hi, [coworker],

    Hi, [coworker],

    I hope this email finds you well.
    Wait, no that’s not quite true;
    I hope this email never finds you.
    I hope you sleep the deepest sleep of your life,
    A sleep earned through labor and fresh air. That you smile in the slight fog as you rise to dew
    on the increasingly-reclaimed markers of our once-great society.
    Once-“great” society.

    I hope you look back on the things we built
    And find them quaint in how they misjudged what was important,
    And entirely misguided in how they defined “value”.
    I hope you feel the Earth and the plants in your lungs,
    that you see your breath in the springtime sun.

    I hope you can reflect and reject the techno-industrial, the educational-industrial,
    the capital-industrial complex to which we dedicated so many years,
    And I hope you see smiles and dirt on the faces of your children.
    Hope you smile too, knowing our mistakes won’t be theirs.

    I hope you find satisfaction in the routine,
    reaching into stores to make breakfast for you and yours.
    Tending to others, to plants, to animals. I hope you take less than you give
    And that you teach others do the same.
    I hope you know your worth, our worth
    Is not defined in EBITDA or MAU.
    It’s defined in what we instill, how we inspire, and how we reflect the things we claim to hold dear.
    I think you said that to me.

    I hope that between when I hit send and when this gets delivered to you.
    We, the greater we, face a redefining event.
    That we are forced to reckon with our past prioritization.
    But, I guess, if all that doesn’t happen.
    Maybe, if you could get me the latest quarterly summary?
    We’re supposed to update it with the new KPIs that the Leadership team defined last week.
    Yeah, no the new new KPIs. I know. I told them.

    But, really, I hope this email never finds you.


    I wrote this a little over a year ago, thinking of a friend who I was frustrated on behalf of. It’s been a while, I need to check in on him.

    1 March 2023
  • Reverse Resolutions

    I’ve never been a new year’s resolution kind of guy - if it’s important enough to do, there’s no reason to wait until the end of the year. But the turn of the calendar offers the opportunity to look back on some adjustments I made throughout the year. Indulge me in a little reflection:

    General

    • Restarted my journaling habit
    • Let my work-life take a little less, and keep a little more for my family and friends - I’d say I’ve been successful since January
    • Read more books - semi-successful, getting better again lately
    • Make my kids laugh every day - check
    • Fix more broken stuff myself - check
    • Buy less - check
    • Rely less on news aggregation (Reddit) and more on reading and critiquing journalism - Moving slowly in the right direction

    Work

    • Be intentional with my time - Was very good February through June, let myself get caught in the churn mid-year, back at it over the past couple months
    • Lead with curiosity - I should write more about this, but suffice it to say for now that this has been a big change over time and has gone well. Still need to develop this muscle, though
    • Define concrete individual work goals - Yes, and documented! Sometimes writing them down is the hard part

    There are opportunities, too, as there always are. I want to spend more time getting away both with the kids and just with Linds. But these changes don’t have to be right now.

    Still haven’t spackled and painted that dent in the drywall in our bedroom, though.

    31 December 2022
  • Using a Redirect Rule to Resolve Mastodon's WebFinger requirement on a Subdomain

    I kept running across a problem with my Mastodon instance where I was seemingly unable to follow other accounts. Reviewing Sidekiq logs revealed HTTP 401s for nearly every account I tried to follow.

    After some poking and help from some very kind folks on a Mastodon admin Discord (@[email protected], specifically), as well as my host, I think I’ve resolved the issue. It seems I had botched my webfinger redirect.

    For some context, Mastodon relies on WebFinger as a method for clearly identifying users on remote servers. Since I have my Mastodon instance on a subdomain of alvani.me but want my usernames to be in the @[email protected] format, I have to create a redirect for requests to

    https://alvani.me/.well-known/webfinger
    to be redirected to
    https://mastodon.alvani.me/.well-known/webfinger.

    I used a CloudFlare redirect rule to accomplish this, as per the screenshot below.
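
    Roughly, the rule looks like this (the field labels are approximate; the exact Cloudflare UI wording may differ):

     When incoming requests match (custom filter expression):
       (http.host eq "alvani.me" and starts_with(http.request.uri.path, "/.well-known/webfinger"))

     Then (dynamic redirect):
       Target expression:     concat("https://mastodon.alvani.me", http.request.uri.path)
       Status code:           301
       Preserve query string: checked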

    31 December 2022
  • Hosted Mastodon instance using Cloudplane and Cloudflare

    Despite being aware of Mastodon and following its development since its introduction back in 2015, I never really spent any meaningful time with it. Along with many others, I was motivated to change this for many reasons, including but certainly not limited to Elon Musk’s capricious “leadership” of Twitter.

    I initially signed up with an account on Mastodon.social, the “first-party” instance that’s run by the service’s founder. After poking around a bit, I decided I’d prefer to run my own instance that maybe some friends and family could share if they were interested. After reading over the requirements and officially-supported architectures for the image, and looking into options for self-hosting either on my home lab or in some IaaS provider, I decided that this was a case where I’d prefer to have a host. If friends or family wanted to use it, I didn’t want to be on the hook for keeping it running during or after power outages, etc. Looking for servers and primary administration outside of the US, I found Cloudplane. Reasonably priced for a “small” instance, which is probably all I need.

    Cloudplane makes it fairly easy, although the documentation is sparse. During their signup, they seemed to indicate that the name server the customer uses must support root-level CNAMEs or aliases, and they seemed to recommend Cloudfront1. After a little more discovery, I learned that the root-level CNAME/alias requirement only applies if you intend for the root of your domain to point to the Cloudplane-hosted Mastodon instance. In my case, I intended to use mastodon.alvani.me as the name of the domain, but use Mastodon’s local_domain and web_domain features to make handles take the @alvani.me form, so the requirement didn’t apply to me.

    Thus, once the new Cloudplane instance was deployed, I added alvani.me as the local_domain, and mastodon.alvani.me as the Cloudplane-labeled “Custom Domain”2. Cloudplane informs you of the DNS records to configure - in my case just a CNAME.

    [Update] Since I’m using a subdomain and Cloudplane’s proxy for security features, I also had to set up a root-domain TXT record - the same record that was displayed if I typed “example.com” into the web domain field.

    There’s one more step to take care of: Cloudflare defaults to unencrypted backend connections, and Cloudplane requires encryption. I solved this by creating a Cloudflare configuration rule to capture all requests for the host mastodon.alvani.me and setting the SSL encryption mode to Full. I also overrode the defaults for my account, setting the SSL encryption mode to Full (Strict).

    The rule expression in my case was:

     (http.host eq "mastodon.alvani.me")
    

    Then scroll down to the “SSL (optional)” section and choose “Full”.

    Once configured in Cloudflare and propagated, I could access my hosted instance at the name I preferred.

    The last step is enabling the well-known redirect per Mastodon’s documentation:

    To install Mastodon on mastodon.example.com in such a way it can serve @[email protected], set LOCAL_DOMAIN to example.com and WEB_DOMAIN to mastodon.example.com. This also requires additional configuration on the server hosting example.com to redirect or proxy requests to https://example.com/.well-known/webfinger to https://mastodon.example.com/.well-known/webfinger. For instance, with nginx, the configuration could look like the following:

     location /.well-known/webfinger {
       add_header Access-Control-Allow-Origin '*';
       return 301 https://mastodon.example.com$request_uri;
     }
    

    In Cloudflare, this is accomplished with a Redirect rule. Note that the check mark at the bottom to preserve query parameters is checked. That’s important.
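
    Once the rule is live and propagated, a quick curl is an easy sanity check. A sketch with a hypothetical domain and account; expect a 301 whose Location header points at the mastodon. subdomain with the query string intact:

     curl -sI "https://example.com/.well-known/webfinger?resource=acct:user@example.com" | grep -i '^location'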


    1. I’d been looking for an excuse to play with Cloudfront’s services for a while, so this seemed like the stars aligning. ↩︎

    2. Which seems to map to web_domain described in Mastodon’s docs. ↩︎

    28 December 2022
  • Ran into a problem updating packages on my Ubuntu-based NAS and Plex host: zfs-zed and zfsutils-linux were left unconfigured due to logic failures that occur in certain configurations, which in turn caused apt to fail. In my case, an empty zpool configured in a subdirectory of another pool caused the zfsutils-linux configuration script to fail when it ran.

    dpkg: error processing package zfsutils-linux (--configure):
     installed zfsutils-linux package post-installation script subprocess returned error exit status 1
    dpkg: dependency problems prevent configuration of zfs-zed:
     zfs-zed depends on zfsutils-linux (>= 0.8.3-1ubuntu12.14); however:
      Package zfsutils-linux is not configured yet.
    
    dpkg: error processing package zfs-zed (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    																										  
     Errors were encountered while processing:
         zfsutils-linux
    	 zfs-zed
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    

    To resolve, I stopped all the services that might write to the zpool(s) - NFS and a Plex container - then ran the following (consolidated in the sketch after the list):

    • zfs unmount [root pool name] (replace [root pool name] with -a to unmount all pools if needed)
    • zfs list to list pools
    • zfs destroy [root pool name]/[sub pool] - Be really careful you’re destroying the right pool. Going back isn’t impossible, but it’s not easy.
    • zfs list to confirm the pool isn’t listed
    • zfs mount -a to mount all pools
    • dpkg --configure -a to complete the configuration of unconfigured packages
    • Restart stopped services
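
    Consolidated, with hypothetical pool and dataset names, the whole sequence looks roughly like this:

     # Stop anything writing to the pool first (NFS exports, Plex, etc.)
     sudo zfs unmount tank             # or `zfs unmount -a` to unmount everything
     sudo zfs list                     # find the offending empty dataset
     sudo zfs destroy tank/empty-sub   # destructive: triple-check the name
     sudo zfs list                     # confirm it's gone
     sudo zfs mount -a                 # remount all pools
     sudo dpkg --configure -a          # let dpkg finish configuring zfsutils-linux and zfs-zed
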
    5 October 2022
  • allroad at BSCC Kitsap Cup Event 8 Autocross

    Took the allroad out to BSCC’s last autocross cup event of the season. Running novice class per BSCC rules - you’re a novice until you’ve completed six events. Not sad about it: 8+ years away from the sport has unsurprisingly left me quite rusty.

    Very happy with how the Dad Wagon performed. It’s very heavy and not especially nimble stock. But the H&R Coilovers and sway bars improve the handling considerably.

    All runs were clean, and each run was faster than the last. Very happy with the car’s performance, and pretty pleased with my own!

    Run 1

    Run 2

    Run 3

    Run 4

    25 September 2022
  • The Mariners and Julio Rodríguez Sign a Long Term Extension

    Dan Szymborski has a great analysis of the exceptionally complicated contract that still has wet ink on its pages. Excited Julio will be a Mariner for the foreseeable future, and having Julio will be an enticement for other top players to think hard about making a move northwest.

    Not to mention the structure of the contract seems to make sense for both the team and Julio.

    This paragraph also emphasizes how forthcoming the FO has been with Julio. DiPoto told the press that if Julio played well, he’d break camp with the team. Julio played fantastically, and he’s been with the team all season.

    The Mariners not playing games with J-Rod’s service time clock has paid off handsomely. If Rodríguez had been held in the minors for a few weeks, he’d still likely be in the top two for AL Rookie of the Year voting, which would have resulted in him accruing a full year of service time anyway. And he’s been so good that it might have cost Seattle at least a win, which could have plausibly resulted in the team missing on a Wild Card spot it otherwise would have won. Hopefully, this will encourage other clubs to stop monkeying around with service time quite as much, especially with their ultra-elite prospects.

    26 August 2022
  • Florida Family Vacation ‘22 Vol. 1

    All shot with the Ricoh GR-IIIx

    22 August 2022
  • BSCC Novice School Autocross

    Went out to the BSCC Novice School with the 911 last weekend. Despite the withering heat - well over 110ºF on the course - the car was up to the task, keeping temps low all day and rising only when idling in the grid between runs.

    It’s been years since I last autocrossed, so it was good to get some instruction. Jeremy reminded me that the car can hold much more speed than I’m expecting it to, and that it can brake much harder than I’m accustomed to in street driving. It’s funny, I knew these things but it’s tough to put it into practice when I haven’t been doing it.

    My first run was just getting used to the course. The second was trying to put a little more speed in. In both the first and second videos, I hadn’t yet learned to set the start/finish of the course correctly, so the time and finish on the overlay are off, but if you watch just the speed you can see it makes sense.

    In the third video, Jeremy’s driving and I’m just riding along, seeing how a very competent autocross driver would approach the course. In the fourth video I’m applying what he demonstrated, and I had, by a long shot, the fastest lap of my day. I could have done better if I’d shut up and not overthought & discussed my mistakes in the moment. At least I figured it out a third of the way through.

    Run 1

    Note the finish line isn’t in the right spot on the overlay so the time is wrong.

    Run 2

    Note the finish line isn’t in the right spot on the overlay so the time is wrong.

    Run 3 - Jeremy

    Run 4

    6 August 2022
  • Still using Message-ID to link to specific mail messages

    Nice little throwback. My teams are being proactive and gathering information now for projects they might like to take on in the future. In some cases, I want to make notes to include these in future budget planning sessions. Sometimes it’d be useful to link to some specific details, and often that detail is in a specific email message.

    The Message-ID header is made exactly for this, and you can view the Message-ID header in Mail.app using these instructions. There’s even an old DF post with an AppleScript snippet to make it easy to extract the Message-ID. I didn’t try that, but I suspect it’ll still work just fine.

    Elsewhere on your computer, you can use a message:// prefix followed by the Message-ID to link to that specific message, and macOS still manages to handle it appropriately, fifteen years later.
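
    For example, with a hypothetical Message-ID of <20220806093045.ABC123@example.com>, the link looks like this, with the angle brackets percent-encoded:

     open "message://%3C20220806093045.ABC123@example.com%3E"

    Click that link anywhere on the Mac (or run it through open as above) and the message opens in Mail.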

    Nice when the simple & useful stuff still works.

    3 June 2022
  • A month or so ago I shipped the 911’s clock off to Pelican to be rebuilt. The clock hadn’t worked in some time, but it works gorgeously now, and since it’s the ‘88, it has the quartz movement.

    I also dropped in Rennline’s Exact Fit Phone Mount and Induction Charger. I didn’t realize I wouldn’t be able to find a path from the USB port on the back of the new SQR-46 head unit to the gauge cluster without drilling, which I didn’t want to do1. I ordered this 12v to USB module from Amazon, and made a quick wiring harness to tap the 12v positive and negative from the clock. Boom, clock and MagSafe charger work. Hype.


    1. Also, the SQR-46 kept trying to read the induction charger for files to play, which was disruptive to playing FM or Bluetooth audio, so really using the USB port on the head unit was a no-go. ↩︎

    28 January 2022
  • BBC: US Billionaire forced to relinquish $70M of stolen antiques, banned from antiquities trade

    Strong quote from the Manhattan DA Cy Vance Jr:

    …Mr Steinhardt “displayed a rapacious appetite for plundered artefacts without concern for the legality of his actions, the legitimacy of the pieces he bought and sold, or the grievous cultural damage he wrought across the globe”.

    7 December 2021
  • Replacing Headlights on 911 3.2s and 964s

    911s in ‘88 and ‘89 (and at least some from earlier 3.2 years; I can’t quite find an exact demarcation point) had the same headlights that made it into the 964s. I had to pull them apart to replace my bulbs last night after noticing that my low beams were out. This video was an enormous help. In short:

    1. Unscrew the bottom bolt on the rings which holds the rings in place1.
    2. Lift and pull to remove the ring, which is held in place by a small lip on the body-side above the bucket. If the lights haven’t been opened up in a while (20+ years in my case) it might take some effort to pull the rings. I used a hook tool wrapped in a microfiber towel placed into the bottom bolt’s hole and pulled hard on the right side. Left side lifted with no issue.
    3. Remove the four bolts which hold the light assembly in place (one each at 2:00, 4:00, 8:00, 10:00)2.
    4. Unclip the harness, remove the light assembly, remove the star-shaped retaining clip and replace the bulb. Use conductive grease to prevent connector corrosion.
    5. Assembly is the reverse of the above.

    1. The video says to remove the plugs which cover access to the adjustment screws, but this seems unnecessary to me. ↩︎

    2. The screws at 9:00 and 12:00 are for beam adjustment; don’t mess with those. ↩︎

    31 October 2021
  • A Failed Alternator on the 911

    A black vintage 911 in front of a dramatic mountain vista

    The 911 just got back from Squire’s after dying on me on the way home from a gorgeous PCA tour to Artist’s Point on Mt. Baker a couple weeks ago - the pic above was from Artist’s Point. After pulling over and getting out of the car, I could hear a hissing from the trunk, and opening the luggage compartment revealed a wet, steaming battery and the smell of sulphur or rotten eggs that’s typical of an overcharging event. My hunch was that either the alternator or voltage regulator had failed and the battery was being overcharged. I replaced the battery with a new one from Interstate (MTX-49/H8); the car still wouldn’t start. Now I suspected that the overcharging had cooked the DME relay or the DME ECU itself. I was out of my depth, and got back in touch with Squire’s. Once they got the car, they confirmed that the alternator was failing (outputting 17.8v under load!) and had cooked the DME ECU. They installed a new alternator, swapped in a shop DME, and sent my unit out for repair. I’ll get my DME back in a few weeks.

    In the interim, I wanted to document some obvious signs of overcharging that I saw in case anyone else runs into this.

    1. Interior bulbs getting bright and dimming or flickering, especially indicator lights
    2. Flickering seatbelt light
    3. Radio would die when I pushed the gas
    4. Moisture in the gauges when the blower was switched on1

    1. This was a surprise to me, but it’s because the battery was overheating, boiling, and letting off steam into the luggage compartment. The steamy air was taken in by the blower and… blown into the gauges. I got some big packs of silica gel desiccant to ensure all the moisture is removed now that it’s no longer an issue. ↩︎

    7 October 2021
  • G50 Shift Bushing Refresh

    Spent yesterday morning cleaning up the shift assembly in my ‘88 911. The car was optioned with the G50 transaxle’s short-shifter, but a previous owner had installed a knob that added a few inches of throw. It looked like at some point some water had been spilled onto the knob and down the lever, because there was some surface rust under the knob which I wanted to clean up. Most notably, though, there was a ton of play in the lever when in neutral and some slop between gears, both of which are indications of worn shift bushings. Replacing the shift bushings isn’t too tough a job - half a day on the long side - and I thought I’d tackle the rust cleanup and shift knob replacement at the same time.

    Photo from the sales listing showing the “old” aftermarket knob

    Using Brad Phillips’ G50 refresh article for Hagerty as a reference, I pulled the console, shift lever, and shift housing out of the car. I removed the surface rust on the lever with a smoothing stone on my Dremel, sandpaper, and a wire brush, then taped off the lever and repainted it. I vacuumed the old bushing dust out of the cavity in the floor, revealing a little more surface rust on the floor of the car. I wire-brushed that, then used a sanding block and vacuumed until bare metal was exposed, then used a tacky cloth to remove any remaining dust and painted the exposed surfaces.

    Comparison of the aftermarket knob (right) and the stock knob (left)

    Spraying the lever before replacing the bushings

    Reassembly was a bit of a challenge since the new bushing is much less pliable than the quite-worn original. In addition to using a heat gun to warm the bushing and housing, I found a link to some photos of this old article in “Excellence” magazine in which the author gives some tips for replacing failed G50 bushings. Relevant paragraphs excerpted below:

    Installing the bushing was simple. First, we clamped the shifter in a vice to hold it steady - applying a thin film of grease on the bushing, we installed it in the housing. The bushing goes in from the rear of the housing with the large flared end pointing toward the rear of the car. The bushing is a very snug fit, and we worked it into the hole much like mounting a tire on a rim; first, we pushed half of the bushing into the hole and then pushed the remaining half of the bushing into place with a heavy screwdriver, working in one direction. Once the bushing pops into the housing, it self-centers due to a recess in the bushing.

    We wiped a film of white lithium grease inside the bushing and used the shop vacuum to remove the remains of the old bushing from the floor recess. The shifter shaft was also wiped clean and a thin coat of grease was applied to the shaft. Prior to installation, we decided to take the time to remove the shifter pin, clean it, and apply a thin layer of grease. This is a simple matter of removing a lock clip, sliding out the pin, and cleaning and greasing the pin. We also applied a small amount of grease to the pivot-ball portion of the shift lever.

    At the end of the day I have a nice, tight shift feel, a shorter throw thanks to the removal of the long shift knob, and a cleaned-up shift lever.

    The restored lever and stock knob back in place

    Took the car out for a drive with my neighbor and found a picturesque spot by a mill. Couldn’t ask for more.

    5 September 2021
  • Joining metrics by labels in Prometheus

    I’m using node_exporter to generate host metrics for several of the nodes in my lab. I was reworking one of my thermal graphs today, with the goal of getting good historical temps for my Pis and my Ubuntu-based homebuilt NAS into a single readable graph. node_exporter has two relevant time series:

    1. node_thermal_zone_temp, which was exported by all of the Raspberries Pi
    2. node_hwmon_temp_celsius, which was exported by the NAS and the Raspberries Pi 4. The rPi3 did not export this metric.

    I liked node_hwmon_temp_celsius a lot, and opted to spend some time focusing on getting that to fit as well as I could. It’s an instant vector, and it returned the following with my config:

    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}	29.85
    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}	29.85
    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}	32.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}	52.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}	52.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}	58.85
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}		37.75
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}		37.75
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}		27
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter", sensor="temp0"}	37.485
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter", sensor="temp1"}	37.972
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter", sensor="temp0"}	32.128
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter", sensor="temp1"}	32.128
    

    The class, environment, and hostname labels are added when scraped.

    The chip label looked interesting, but it appears to be an identifier as opposed to a name, and I’m terrible at mentally mapping hard-to-read identifiers to something meaningful. Digging around a little more, I found node_hwmon_chip_names, which when queried returned

    node_hwmon_chip_names{chip="0000:00:01_1_0000:01:00_0", chip_name="nvme", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}					1
    node_hwmon_chip_names{chip="0000:20:00_0_0000:21:00_0", chip_name="nvme", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}					1
    node_hwmon_chip_names{chip="pci0000:00_0000:00:18_3", chip_name="k10temp", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}					1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster0", instance="10.0.1.42:9100", job="node-exporter"}				1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}				1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}				1
    node_hwmon_chip_names{chip="power_supply_hidpp_battery_0", chip_name="hidpp_battery_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}		1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster0", instance="10.0.1.42:9100", job="node-exporter"}		1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}		1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}		1
    node_hwmon_chip_names{chip="thermal_thermal_zone0", chip_name="cpu_thermal", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}				1
    node_hwmon_chip_names{chip="thermal_thermal_zone0", chip_name="cpu_thermal", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}				1
    

    You might notice that the chip label matches in both vectors, which made me think I could cross-reference one against the other. This was way more hack-y than I expected.

    Prometheus only allows for label joining by using the group_right and group_left operations, which are very poorly documented. Fortunately, I came across these two posts by Brian Brazil, which got me started. This answer on Stack Overflow helped me get the rest of the way there.


    I’ll start with my working query and work backwards.

    avg (node_hwmon_temp_celsius) by (chip,type,hostname,instance,class,environemenet,job) *  ignoring(chip_name) group_left(chip_name) avg (node_hwmon_chip_names) by (chip,chip_name,hostname,instance,class,environemt,job)
    

    We’ll break the query above into two parts separated by the operator:

    • the Left side: avg (node_hwmon_temp_celsius) by (chip,type,hostname,instance,class,environemenet,job)
    • the Right side: avg (node_hwmon_chip_names) by (chip,chip_name,hostname,instance,class,environemt,job)
    • the Operator: * ignoring(chip_name) group_left(chip_name)

    Let’s go through each.

    The left side averages the records for every series that has the same chip label. In this case, the output above showed that some chips had multiple series separated by temp1…tempN labels. I don’t really care about those, so I averaged them. Averaging a single series just returns that series’ value, so that’s a good solution.

    The right side returns several series with labels matching chips to chip_names, plus the other requisite labels. The value for each of these series is 1, effectively saying “this chip exists.”

    The operator is where it gets both interesting and hacky.

    1. Arithmetic operations are a type of vector match, which take series with identical labels and perform the operation on their values. I used a * (multiplication) vector match because the right-side value is always 1, so it’s safe to multiply against my left-side values without changing them.
    2. The ignoring() keyword lets us list labels to be ignored when looking for identical label sets. In this case I told the arithmetic operator ignoring(chip_name) because chip_name only exists on the right side.
    3. The grouping modifiers (group_left() and group_right()) let us match many-to-one or one-to-many. That is, the group_left() modifier will take any labels specified and pass them along with the results of the operation. Since I used group_left(chip_name), it returned chip_name in the list of labels after matching.

    Here’s what makes this hacky: as far as I can tell, this is the only way to take matching labels and use them in reference to one another.
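
    The same shape shows up in a common node_exporter idiom: attaching a label from an “info”-style metric (whose value is always 1) onto another series. A generic sketch, not part of my dashboards:

     # Pull the kernel release label from node_uname_info onto a per-instance CPU rate
     sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))
       * on (instance) group_left(release)
       node_uname_info

    Same trick: multiply by a metric that’s always 1, and use group_left() to carry its extra label across.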

    The query returns1

    {chip="0000:00:01_1_0000:01:00_0",chip_name="nvme",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}			28.85
    {chip="0000:20:00_0_0000:21:00_0",chip_name="nvme",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}			54.85
    {chip="pci0000:00_0000:00:18_3",chip_name="k10temp",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}			30.166666666666668
    {chip="thermal_thermal_zone0",chip_name="cpu_thermal",class="raspberry pi",hostname="cluster1",instance="10.0.1.201:9100",job="node-exporter"}		36.998000000000005
    {chip="thermal_thermal_zone0",chip_name="cpu_thermal",class="raspberry pi",hostname="cluster2",instance="10.0.1.252:9100",job="node-exporter"}		32.128
    

    Pretty sweet.


    1. You’ll notice the series for chip="platform_rpi_poe_fan_0" and for hostname=cluster0 were dropped because there’s no series with matching labels on the left-side results. ↩︎

    3 February 2021
  • Passing an nvidia GPU to a container launched via Ansible

    I recently built an addition to my lab that is intended to mostly replace my Synology NAS1 and give a better home to my Plex container than my 2018 Mac mini. The computer is running Ubuntu 20.04 and has an nvidia GeForce GTX 1060. I chose the 1060 after referring to this tool, which gives nice estimates of the Plex-specific capabilities enabled by the card. I wanted something that was available secondhand, had hardware h.265 support, and could handle a fair number of streams. The 1060 ticked the right boxes.

    After rsyncing my media and volumes, I spent some time last night working on the Ansible role for launching the Plex container while passing the GPU to the container. I spent a bunch of time in Ansible’s documentation and with this guide by Samuel Kadolph.

     
       - name: "Deploy Plex container"
        docker_container:
            name: plex
            hostname: plex
            image: plexinc/pms-docker:plexpass
            restart_policy: unless-stopped
            state: started
            ports: 
              - 32400:32400
              - 32400:32400/udp
              - 3005:3005
              - 8324:8324
              - 32469:32469
              - 32469:32469/udp
              - 1900:1900
              - 1900:1900/udp
              - 32410:32410
              - 32410:32410/udp
              - 32412:32412
              - 32412:32412/udp
              - 32413:32413
              - 32413:32413/udp
              - 32414:32414
              - 32414:32414/udp
            mounts:
              - source: /snoqualmie/media
                target: /media
                read_only: no
                type: bind
              - source: /seatac/plex/config
                target: /config
                read_only: no
                type: bind         
              - source: /seatac/plex/transcode
                target: /transcode
                read_only: no
                type: bind
              - source: /seatac/plex/backups
                target: /data/backups
                read_only: no
                type: bind
              - source: /seatac/plex/certs
                target: /data/certs
                read_only: no
                type: bind
            env:
                TZ: "America/Los_Angeles"
                PUID: "1001"
                PGID: "997"
                PLEX_CLAIM: "[claim key]"
                ADVERTISE_IP: "[public URL]"
            device_requests: 
              - device_ids: 0
                driver: nvidia
                capabilities: 
                  - gpu
                  - compute
                  - utility
            comparisons:
                env: strict
    

    This is the part relevant to passing the GPU to the container, and the (lacking) documentation can be found [in the device_requests section, here](https://docs.ansible.com/ansible/latest/collections/community/general/docker_container_module.html#parameter-device_requests).

            device_requests: 
              - device_ids: 0
                driver: nvidia
                capabilities: 
                  - gpu
                  - compute
                  - utility
    

    device_ids is the ID of the GPU, obtained from nvidia-smi -L; capabilities are spelled out in nvidia’s repo, but all doesn’t seem to work.
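
    For completeness, listing the available IDs is a one-liner; the index I used, or the UUID, should both work as device_ids:

     nvidia-smi -L   # prints each GPU with its index and UUID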

    Hope this helps the next poor soul who decides this is a rabbit worth chasing.


    1. I’ll keep Surveillance Station on my Syno for the time being. ↩︎

    27 January 2021
  • Lab Cluster Hardware

    In my last post about my home lab, I mentioned I’d post again about the hardware. The majority of my lab is comprised of 3 Raspberries Pi with PoE Hats, a TP-Link 5-port Gigabit PoE switch, all in a GeekPi Cluster Case. Thanks to the PoE hats, I only need to power the switch and the switch powers the three nodes. I have an extended pass-through 40-pin header on the topmost Pi (the 3B+, currently) which allows the goofy “RGB” fan to be powered, which actually made the temps on the cluster much more consistent.

    Cluster v2 Cluster In the Cabinet

    The topmost Pi is a 3B+, and the bottom two nodes are Raspberry Pi 4s (4 GB models). They’re super competent little nodes, and I’m really pleased with the performance I get from them.

    Here’s a graph of 24 hours of the containers' CPU utilization across all nodes. You can see the only thing making any of the Pis sweat is NZBGet, as I imagine the process of unpacking files is a bit CPU-intensive.

    Cluster 24 hour CPU

    Here’s my “instant” dashboard, which shows point-in-time health of the cluster. I’ll dig into this more at some point in the future.

    Cluster instant DB

    The Plex container is running on my 2018 Mac mini, which I’m not currently monitoring in Grafana. That’s a to-do.

    1 December 2020
  • Fustercluck - Reworked my Raspberry Pi Cluster

    I’ve spent the past couple months' forced down-time1 reworking my Raspberry Pi cluster, which forms a big portion of my home lab. I set out with the goal of better understanding Prometheus, Grafana, and node-exporter to monitor the hardware. I also needed the Grafana and Prometheus data to be persistent if I moved the containers among the nodes. And I needed to deploy and make adjustments via Ansible for consistency and versioning. I’ve put the roles and playbooks on GitHub.

    This wasn’t too hard to achieve; I did the same thing that I’d done with my Plex libraries: created appropriate volumes and exposed them via NFS from my Synology. Synology generally makes this pretty easy, although the lack of detailed controls did occasionally give me a headache that was a challenge to resolve.

    Here’s a diagram of the NFS Mounts per-container.

    NFS Mount Diagram

    The biggest change from my previous configuration was that previously, I had NFS Exports for Downloads/Movies/Series. Sonarr helpfully provided the following explainer in their Docker section.

    Volumes and Paths

    There are two common problems with Docker volumes: Paths that differ between the Sonarr and download client container and paths that prevent fast moves and hard links.

    The first is a problem because the download client will report a download’s path as /torrents/My.Series.S01E01/, but in the Sonarr container that might be at /downloads/My.Series.S01E01/. The second is a performance issue and causes problems for seeding torrents. Both problems can be solved with well planned, consistent paths.

    Most Docker images suggest paths like /tv and /downloads. This causes slow moves and doesn’t allow hard links because they are considered two different file systems inside the container. Some also recommend paths for the download client container that are different from the Sonarr container, like /torrents.

    The best solution is to use a single, common volume inside the containers, such as /data. Your TV shows would be in /data/TV, torrents in /data/downloads/torrents and/or usenet downloads in /data/downloads/usenet.

    As a result, I created /media, which is defined as a named Docker volume, and mounted by the Plex container (on the MacMini), Sonarr, Radarr, and NZBGet2.
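
    For reference, a named NFS-backed volume like that can also be created directly with the Docker CLI; something like the following, with a hypothetical NFS server address and export path (I define mine in an Ansible docker_volume task instead):

     docker volume create media \
       --driver local \
       --opt type=nfs \
       --opt o=addr=192.168.1.10,rw \
       --opt device=:/volume1/media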

    I’ll post again soon with a couple cool dashboards I’ve built and the actual hardware I’m using for the cluster.


    1. Forced because of COVID-19, and also because I had some foot surgery in early September, and I’ve been much less mobile since then. Fortunately, I’m healing up well, and I’ll be back to “normal” after a few more months of Physical Therapy. ↩︎

    2. NZBGet’s files are actually in /media/nzb_downloads, but I left it as /media/downloads for the sake of clarity in the post. ↩︎

    27 November 2020
  • Getting Apple Emoji on the Raspberry Pi

    I talked about getting the HyperPixel 4.0 to work in my last post, but I also wanted an excuse to show it off. I’m building a Grafana-based statusboard for the services I run on my lab, and I wanted some character. AFAIK, Raspbian doesn’t include Emoji fonts, but you can add some with Google’s noto-emoji.

    I wanted Apple emoji, so I zipped the .ttc, copied it to my Pi, and extracted it into /usr/share/fonts/. This would be easy to automate, since Apple regularly adds characters with updates.
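
    Roughly, the manual steps look like this. The Mac-side path is the standard Apple Color Emoji location; the destination directory on the Pi is just my choice:

     # On the Mac: copy the emoji font over (hypothetical hostname)
     scp "/System/Library/Fonts/Apple Color Emoji.ttc" pi@raspberrypi.local:/tmp/

     # On the Pi: install it and rebuild the font cache
     sudo mkdir -p /usr/share/fonts/truetype/apple-emoji
     sudo mv "/tmp/Apple Color Emoji.ttc" /usr/share/fonts/truetype/apple-emoji/
     fc-cache -f -v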

    8 November 2020
  • Pimoroni HyperPixel 4.0 Touch Workaround

    I’m using the gorgeous Pimoroni Hyperpixel 4.0 on a Raspberry Pi 4 for a small project. The display is crazy beautiful, and comes in a touch and non-touch version.

    I ran into issues getting touch working, and opened an issue. After a little poking and a helpful commenter pointing me to related issues, I found a workaround.

    For posterity, I followed the directions above (running Pimoroni’s install script and choosing option 2 for Rectangular with Experimental Pi 4 Touch Fix), then edited /boot/config.txt as follows. Modified lines are commented as such.

    # Enable DRM VC4 V3D driver on top of the dispmanx display stack
    #dtoverlay=vc4-fkms-v3d # Modified
    max_framebuffers=2
    
    [all]
    #dtoverlay=vc4-fkms-v3d
    
    dtoverlay=hyperpixel4
    gpio=0-25=a2
    enable_dpi_lcd=1
    dpi_group=2
    dpi_mode=87
    dpi_output_format=0x7f216
    dpi_timings=480 0 10 16 59 800 0 15 113 15 0 0 0 60 0 32000000 6
    display_rotate=1 # Modified
    

    Very pleased with this, though I’ve heard it won’t work with low-level access (i.e., RetroPie setups). YMMV.

    8 November 2020
  • Providing Data Persistence on Prometheus Containers with NFS on Synology

    Made some progress on one of my distraction projects over the past couple days. I’d been working on creating an Ansible role to deploy a Prometheus container with persistent data backed by NFS on my rPi cluster. Getting the NFS mount to work with Prometheus was a challenge. Relevant stanzas from the role’s main.yml:

    
      - name: "Creates named docker volume for prometheus persistent data"
        docker_volume:
          volume_name: prometheus_persist
          state: present
          driver_options:
            type: nfs
            o: "addr={{ nfs_server }},rw"
            device: ":{{ prometheus_nfs_path }}"

      - name: "Deploy prometheus container"
        docker_container:
          name: prometheus
          hostname: prometheus
          image: prom/prometheus
          restart_policy: always
          state: started
          ports: 9090:9090
          volumes:
            - "{{ prometheus_config_path }}:/etc/prometheus"
          mounts:
            - source: prometheus_persist
              target: /prometheus
              read_only: no
              type: volume
          comparisons:
            env: strict
    
    

    However I was getting permission denied on /prometheus when deploying the container. A redditor pointed me in the direction of the solution. Since NFS is provided by my Synology, I can’t set no_root_squash, but by mapping all users to admin in the share’s squashing settings, I could allow the container to set permissions appropriately. Progress!

    3 November 2020

Follow @jalvani on Micro.blog.