Jehan Alvani

    19 May 2022
  • Interior, Soleil can design department

    BOB: Hey, Dave, What’s a grapefruit look like?

    DAVE: What? You’ve never seen a grapefruit?

    BOB: No, we were more of a pomelo family. What’s it look like?

    DAVE: It’s, uh. You take a shrimp? It’s like half a shrimp.

    BOB: Shrimp, Ok. Legs still on?

    DAVE: Yeah, I mean. Yeah. It’s a shrimp. Oh, but the rind is green

    BOB: Ah! Ok. Thanks! Hadn’t considered the rind. Like this?

    DAVE: Nailed it.

    11 May 2022
  • Caught a Charmander!

    17 April 2022
  • Glass Animals just killing it. So good to see live music again, and hot damn what a show.

    11 March 2022
  • It feels a little more like spring 💥⚾️

    10 March 2022
  • A month or so ago I shipped the 911’s clock off to Pelican to be rebuilt. The clock hadn’t worked in some time, but it works gorgeously now, and since it’s the ‘88, it has the quartz movement.

    I also dropped in Rennline’s Exact Fit Phone Mount and Induction Charger. I didn’t realize I wouldn’t be able to find a path from the USB port on the back of the new SQR-46 head unit to the gauge cluster without drilling, which I didn’t want to do1. I ordered this 12v to USB module from Amazon, and made a quick wiring harness to tap the 12v positive and negative from the clock. Boom, clock and MagSafe charger work. Hype.


    1. Also, the SQR-46 kept trying to read the induction charger for files to play, which was disruptive to playing FM or Bluetooth audio, so really using the USB port on the head unit was a no-go. [return]
    28 January 2022
  • Just a great first time at Crystal! Wow.

    27 January 2022
  • Help the Studebaker Museum Restore Fozzie’s Studebaker

    In a fantastic confluence of my interests, the Studebaker Museum is raising funds to restore the very same car that carried the heroes of the 1979 classic “The Muppet Movie”. A good cause if I’ve ever seen one.

    25 December 2021
  • BBC: US Billionaire forced to relinquish $70M of stolen antiques, banned from antiquities trade

    Strong quote from the Manhattan DA Cy Vance Jr:

    …Mr Steinhardt “displayed a rapacious appetite for plundered artefacts without concern for the legality of his actions, the legitimacy of the pieces he bought and sold, or the grievous cultural damage he wrought across the globe”.

    7 December 2021
  • This immaculate 11k mile ‘79 930 auction is going to be a blast to watch in a few days. It’s already a blast to just look at the gorgeous photos.

    3 December 2021
  • Replacing Headlights on 911 3.2s and 964s

    911s in ‘88 and ‘89 (and at least some from earlier 3.2 years; I can’t quite find an exact demarcation point) had the same headlights that made it into the 964s. I had to pull them apart to replace my bulbs last night after noticing that my low beams were out. This video was an enormous help. In short:

    1. Unscrew the bottom bolt on each ring, which holds the ring in place1.
    2. Lift and pull to remove the ring, which is held in place by a small lip on the body-side above the bucket. If the lights haven’t been opened up in a while (20+ years in my case) it might take some effort to pull the rings. I used a hook tool wrapped in a microfiber towel placed into the bottom bolt’s hole and pulled hard on the right side. Left side lifted with no issue.
    3. Remove the four bolts which hold the light assembly in place (one each at 2:00, 4:00, 8:00, 10:00)2.
    4. Unclip the harness, remove the light assembly, remove the star-shaped retaining clip and replace the bulb. Use conductive grease to prevent connector corrosion.
    5. Assembly is the reverse of the above.

    1. The video says to remove the plugs which cover access to the adjustment screws, but this seems unnecessary to me. [return]
    2. The screws at 9:00 and 12:00 are for beam adjustment; don’t mess with those. [return]
    31 October 2021
  • A Failed Alternator on the 911

    A black vintage 911 in front of a dramatic mountain vista

    The 911 just got back from Squire’s after dying on me on the way home from a gorgeous PCA tour to Artist’s Point on Mt. Baker a couple weeks ago - the pic above was from Artist’s Point. After pulling over and getting out of the car, I could hear a hissing from the trunk, and opening the luggage compartment revealed a wet, steaming battery and the smell of sulphur or rotten eggs that’s typical of an overcharging event. My hunch was that either the alternator or voltage regulator had failed, and the battery was being overcharged. I replaced the battery with a new one from Interstate (MTX-49/H8); the car still wouldn’t start. Now I suspected that the overcharging had cooked the DME relay or the DME ECU itself. I was out of my depth, and got back in touch with Squire’s. Once they got the car, they confirmed that the alternator was failing (outputting 17.8v under load!) and had cooked the DME ECU. They installed a new alternator, swapped in a shop DME, and sent my unit out for repair. I’ll get my DME back in a few weeks.

    In the interim, I wanted to document some obvious signs of overcharging that I saw in case anyone else runs into this.

    1. Interior bulbs getting bright and dimming or flickering, especially indicator lights
    2. Flickering seatbelt light
    3. Radio would die when I pushed the gas
    4. Moisture in the gauges when the blower was switched on1

    1. This was a surprise to me, but it’s because the battery was overheating, boiling, and letting off steam into the luggage compartment. The steamy air was taken in by the blower and… blown into the gauges. I got some big packs of silica gel desiccant to ensure all the moisture is removed now that it’s no longer an issue. [return]
    7 October 2021
  • This photo really deserves its own post.

    6 September 2021
  • G50 Shift Bushing Refresh

    Spent yesterday morning cleaning up the shift assembly in my ‘88 911. The car was optioned with the G50 transaxle’s short-shifter, but a previous owner had installed a knob that added a few inches of throw. It looked like at some point some water had been spilled onto the knob and down the lever, because there was some surface rust under the knob which I wanted to clean up. Most notably, though, there was a ton of play in the lever when in neutral and some slop between gears, both of which are indications of worn shift bushings. Replacing the shift bushings isn’t too tough a job - half a day on the long side - so I thought I’d tackle the rust cleanup and shift knob replacement at the same time.

    Photo from the sales listing showing the “old” aftermarket knob

    Using Brad Phillips’ G50 refresh article for Hagerty as a reference, I pulled the console, shift lever, and shift housing out of the car. I removed the surface rust on the lever with a smoothing stone on my Dremel, sandpaper, and a wire brush, then taped off the lever and repainted it. I vacuumed the old bushing dust out of the cavity in the floor, revealing a little more surface rust on the floor of the car. I wire-brushed that, used a sanding block, and vacuumed until bare metal was exposed, then used a tacky cloth to remove any remaining dust and painted the exposed surfaces.

    Comparison of the aftermarket knob (right) and the stock knob (left)

    Spraying the lever before replacing the bushings

    Reassembly was a bit of a challenge since the new bushing is much less pliable than the quite-worn original. In addition to using a heat gun to warm the bushing and housing, I found a link to some photos of this old article in “Excellence” magazine in which the author gives some tips for replacing failed G50 bushings. Relevant paragraphs excerpted below:

    Installing the bushing was simple. First, we clamped the shifter in a vice to hold it steady - applying a thin film of grease on the bushing, we installed it in the housing. The bushing goes in from the rear of the housing with the large flared end pointing toward the rear of the car. The bushing is a very snug fit, and we worked it into the hole much like mounting a tire on a rim; first, we pushed half of the bushing into the hole and then pushed the remaining half of the bushing into place with a heavy screwdriver, working in one direction. Once the bushing pops into the housing, it self-centers due to a recess in the bushing.

    We wiped a film of white lithium grease inside the bushing and used the shop vacuum to remove the remains of the old bushing from the floor recess. The shifter shaft was also wiped clean and a thin coat of grease was applied to the shaft. Prior to installation, we decided to take the time to remove the shifter pin, clean it, and apply a thin layer of grease. This is a simple matter of removing a lock clip, sliding out the pin, and cleaning and greasing the pin. We also applied a small amount of grease to the pivot-ball portion of the shift lever.

    At the end of the day I have a nice, tight shift feel, a shorter throw thanks to the removal of the long shift knob, and a cleaned-up shift lever.

    The restored lever and stock knob back in place

    Took the car out for a drive with my neighbor and found a picturesque spot by a mill. Couldn’t ask for more.

    5 September 2021
  • Joining metrics by labels in Prometheus

    I’m using node_exporter to generate host metrics for several of the nodes in my lab. I was reworking one of my thermal graphs today, with the goal of getting good historical temps for my Pis and my Ubuntu-based homebuilt NAS into a single readable graph. node_exporter has two relevant time series:

    1. node_thermal_zone_temp, which was exported by all of the Raspberries Pi
    2. node_hwmon_temp_celsius, which was exported by the NAS and the Raspberries Pi 4. The rPi3 did not export this metric.

    I liked node_hwmon_temp_celsius a lot, and opted to spend some time focusing on getting that to fit as well as I could. It’s an instant vector, and it returned the following with my config:

    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}    29.85
    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}    29.85
    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}    32.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}    52.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}    52.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}    58.85
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}      37.75
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}      37.75
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}      27
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter", sensor="temp0"}     37.485
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter", sensor="temp1"}     37.972
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter", sensor="temp0"}     32.128
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter", sensor="temp1"}     32.128
    

    The class, environment, and hostname labels are added when scraped.

    The chip label looked interesting, but it appears to be an identifier as opposed to a name, and I’m terrible at mentally mapping hard-to-read identifiers to something meaningful. Digging around a little more, I found node_hwmon_chip_names, which, when queried, returned:

    node_hwmon_chip_names{chip="0000:00:01_1_0000:01:00_0", chip_name="nvme", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}                    1
    node_hwmon_chip_names{chip="0000:20:00_0_0000:21:00_0", chip_name="nvme", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}                   1
    node_hwmon_chip_names{chip="pci0000:00_0000:00:18_3", chip_name="k10temp", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}                  1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster0", instance="10.0.1.42:9100", job="node-exporter"}               1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}              1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}              1
    node_hwmon_chip_names{chip="power_supply_hidpp_battery_0", chip_name="hidpp_battery_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}     1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster0", instance="10.0.1.42:9100", job="node-exporter"}        1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}       1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}       1
    node_hwmon_chip_names{chip="thermal_thermal_zone0", chip_name="cpu_thermal", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}             1
    node_hwmon_chip_names{chip="thermal_thermal_zone0", chip_name="cpu_thermal", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}             1
    

    You might notice that the chip label matches in both vectors, which made me think I could cross-reference one against the other. This was way more hacky than I expected.

    Prometheus only allows label joining via the group_left and group_right modifiers, which are very poorly documented. Fortunately, I came across these two posts by Brian Brazil, which got me started. This answer on Stack Overflow helped me get the rest of the way there.


    I’ll start with my working query and work backwards.

    avg (node_hwmon_temp_celsius) by (chip,type,hostname,instance,class,environment,job) * ignoring(chip_name) group_left(chip_name) avg (node_hwmon_chip_names) by (chip,chip_name,hostname,instance,class,environment,job)
    

    We’ll break the query above into its two sides and the operator that joins them:

  • the Left side: avg (node_hwmon_temp_celsius) by (chip,type,hostname,instance,class,environment,job)
  • the Right side: avg (node_hwmon_chip_names) by (chip,chip_name,hostname,instance,class,environment,job)
    • the Operator: * ignoring(chip_name) group_left(chip_name)

    Let’s go through each.

    The left side averages the records for every series that has the same chip label. In this case, the output above showed that some chips had multiple series separated by temp1…tempN labels. I don’t really care about those, so I averaged them. Averaging a group with only one series just returns that series’ value, so that’s a good solution.

    The right side returns several series with labels matching chips to chip_names, plus the other requisite labels. The values for these series are all 1, effectively saying “this chip exists.”

    The operator is where it gets both interesting and hacky.

    1. Arithmetic operations are a type of vector match, which takes series with identical label sets and performs the operation on their values. I used a * (multiplication) vector match because the right-side value is always 1, so multiplying my left-side values by it doesn’t change them.
    2. The ignoring() keyword lets us list labels to be ignored when looking for identical label sets. In this case I told the arithmetic operator to use ignoring(chip_name) because that label only exists on the right side.
    3. We can use the grouping modifiers (group_left() and group_right()) to match many-to-one or one-to-many. That is, the group_left() modifier will take any labels specified and pass them along with the results of the operation. Since I used group_left(chip_name), it returned chip_name in the labels of the result (see the sketch below).
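
    To make the shape of the join easier to see outside of my specific labels, here’s a minimal sketch using made-up metrics: a hypothetical fan_speed_rpm gauge and a fan_info metric whose value is always 1 and which carries the fan_name label we want to copy over.

    # Hypothetical series, purely to illustrate the pattern:
    #   fan_speed_rpm{id="f0"}                 1200
    #   fan_info{id="f0", fan_name="intake"}   1
    fan_speed_rpm
      * ignoring(fan_name) group_left(fan_name)
    fan_info
    # Result: {id="f0", fan_name="intake"}     1200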

    Here’s what makes this hacky: as far as I can tell, this is the only way to take matching labels and use them in reference to one another.

    The query returns1

    {chip="0000:00:01_1_0000:01:00_0",chip_name="nvme",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}         28.85
    {chip="0000:20:00_0_0000:21:00_0",chip_name="nvme",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}            54.85
    {chip="pci0000:00_0000:00:18_3",chip_name="k10temp",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}           30.166666666666668
    {chip="thermal_thermal_zone0",chip_name="cpu_thermal",class="raspberry pi",hostname="cluster1",instance="10.0.1.201:9100",job="node-exporter"}      36.998000000000005
    {chip="thermal_thermal_zone0",chip_name="cpu_thermal",class="raspberry pi",hostname="cluster2",instance="10.0.1.252:9100",job="node-exporter"}      32.128
    

    Pretty sweet.


    1. You’ll notice the series for chip="platform_rpi_poe_fan_0" and for hostname=cluster0 were dropped because there’s no series with matching labels on the left-side results. [return]
    3 February 2021
  • Had an ongoing issue with Mac clients constantly popping “Server Disconnected” errors when they’ve mounted an NFS volume. This may be specific to NFSv4; I couldn’t find a way to test.

    Regardless, mounting the volume with mount -o nolocks,nosuid [server]:/path/to/export /local/mountpoint worked. I added this to /etc/nfs.conf as a workaround. I wonder if I could configure this for a specific NFS server, rather than globally.

    #
    # nfs.conf: the NFS configuration file
    #
    nfs.client.mount.options=nolocks,nosuid
    
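    On that per-server question: one avenue I haven’t tried is the macOS automounter, which can attach mount options to a single map entry instead of setting them globally. A rough, untested sketch, with the mount point, server, and export path all placeholders:

    # /etc/auto_master - add a direct map
    /-    auto_nfs    -nosuid

    # /etc/auto_nfs - options here apply only to this one export
    /mnt/export    -fstype=nfs,nolocks,nosuid    server.example.com:/path/to/export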
    29 January 2021
  • Just under 18 days until pitchers and catchers report.

    28 January 2021
  • Passing an nvidia GPU to a container launched via Ansible

    I recently built an addition to my lab that is intended to mostly replace my Synology NAS1, and give a better home to my Plex container than my 2018 Mac mini. The computer is running Ubuntu 20.04 and has an nvidia GeForce GTX 1060. I chose the 1060 after referring to this tool, which gives nice estimates of the Plex-specific capabilities enabled by the card. I wanted something that was available secondhand, had hardware h.265 support, and could handle a fair number of streams. The 1060 ticked the right boxes.

    After rsyncing my media and volumes, I spent some time last night working on the Ansible role for launching the Plex container while passing the GPU through to it. I spent a bunch of time in Ansible’s documentation and with this guide by Samuel Kadolph.

    
      - name: "Deploy Plex container"
        docker_container:
            name: plex
            hostname: plex
            image: plexinc/pms-docker:plexpass
            restart_policy: unless-stopped
            state: started
            ports:
              - 32400:32400
              - 32400:32400/udp
              - 3005:3005
              - 8324:8324
              - 32469:32469
              - 32469:32469/udp
              - 1900:1900
              - 1900:1900/udp
              - 32410:32410
              - 32410:32410/udp
              - 32412:32412
              - 32412:32412/udp
              - 32413:32413
              - 32413:32413/udp
              - 32414:32414
              - 32414:32414/udp
            mounts:
              - source: /snoqualmie/media
                target: /media
                read_only: no
                type: bind
              - source: /seatac/plex/config
                target: /config
                read_only: no
                type: bind
              - source: /seatac/plex/transcode
                target: /transcode
                read_only: no
                type: bind
              - source: /seatac/plex/backups
                target: /data/backups
                read_only: no
                type: bind
              - source: /seatac/plex/certs
                target: /data/certs
                read_only: no
                type: bind
            env:
              TZ: "America/Los_Angeles"
              PUID: "1001"
              PGID: "997"
              PLEX_CLAIM: "[claim key]"
              ADVERTISE_IP: "[public URL]"
            device_requests:
              - device_ids: 0
                driver: nvidia
                capabilities:
                  - gpu
                  - compute
                  - utility
            comparisons:
              env: strict

    This is the part relevant to passing the GPU to the container, and the (lacking) documentation can be found in the device_requests section here: https://docs.ansible.com/ansible/latest/collections/community/general/docker_container_module.html#parameter-device_requests

            device_requests: 
              - device_ids: 0
                driver: nvidia
                capabilities: 
                  - gpu
                  - compute
                  - utility
    

    device_ids is the ID of the GPU, which is obtained from nvidia-smi -L; the capabilities are spelled out in nvidia’s repo, but all doesn’t seem to work.
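
    If it’s helpful, the ID can be double-checked on the host before wiring it into the role. A sketch of what that looks like (the output line below is illustrative; your GPU name and UUID will differ):

    # List the GPUs the driver can see, with their indices and UUIDs
    $ nvidia-smi -L
    GPU 0: GeForce GTX 1060 (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)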

    Hope this helps the next poor soul who decides this is a rabbit worth chasing.


    1. I’ll keep Surveillance Station on my Syno for the time being. [return]
    27 January 2021
  • I’m at that point in my beard growth where my face looks poorly-rendered and slightly out of focus.

    19 January 2021
  • As far as gorgeous mise en place goes, it’s pretty hard to beat yakisoba.

    9 January 2021
  • Lab Cluster Hardware

    In my last post about my home lab, I mentioned I’d post again about the hardware. The majority of my lab is made up of 3 Raspberries Pi with PoE Hats and a TP-Link 5-port Gigabit PoE switch, all in a GeekPi Cluster Case. Thanks to the PoE hats, I only need to power the switch, and the switch powers the three nodes. I have an extended pass-through 40-pin header on the topmost Pi (the 3B+, currently), which allows the goofy “RGB” fan to be powered; that actually made the temps on the cluster much more consistent.

    Cluster v2

    Cluster In the Cabinet

    The topmost Pi is a 3B+, and the bottom two nodes are Raspberry Pi 4s (4 GB models). They’re super competent little nodes, and I’m really pleased with the performance I get from them.

    Here’s a graph of 24 hours of the containers’ CPU utilization across all nodes. You can see the only thing that’s making any of the Pis sweat is NZBGet, as I imagine the process of unpacking files is a bit CPU-intensive.

    Cluster 24 hour CPU

    Here’s my “instant” dashboard, which shows point-in-time health of the cluster. I’ll dig into this more at some point in the future.

    Cluster instant DB

    The Plex container is running on my 2018 Mac mini, which I’m not currently monitoring in Grafana. That’s a to-do.

    1 December 2020
  • The Death/Rebirth scene in Princess Mononoke is almost too beautiful for words. I’ve seen it so many times and it never loses its impact.

    30 November 2020
  • Fustercluck - Reworked my Raspberry Pi Cluster

    I’ve spent the past couple months’ forced down-time1 reworking my Raspberry Pi cluster that forms a big portion of my home lab. I set out with the goal of better understanding Prometheus, Grafana, and node-exporter to monitor the hardware. I also needed the Grafana and Prometheus data to be persistent if I moved the container among the nodes. And I needed to deploy and make adjustments via Ansible for consistency and versioning. I’ve put the roles and playbooks on GitHub.

    This wasn’t too hard to achieve; I did the same thing that I’d done with my Plex libraries: created appropriate volumes and exposed them via NFS from my Synology. Synology generally makes this pretty easy, although the lack of detailed controls did occasionally give me a headache that was a challenge to resolve.

    Here’s a diagram of the NFS Mounts per-container.

    NFS Mount Diagram

    The biggest change from my previous configuration is that I used to have separate NFS exports for Downloads/Movies/Series. Sonarr helpfully provided the following explainer in their Docker section.

    Volumes and Paths

    There are two common problems with Docker volumes: Paths that differ between the Sonarr and download client container and paths that prevent fast moves and hard links.

    The first is a problem because the download client will report a download’s path as /torrents/My.Series.S01E01/, but in the Sonarr container that might be at /downloads/My.Series.S01E01/. The second is a performance issue and causes problems for seeding torrents. Both problems can be solved with well planned, consistent paths.

    Most Docker images suggest paths like /tv and /downloads. This causes slow moves and doesn’t allow hard links because they are considered two different file systems inside the container. Some also recommend paths for the download client container that are different from the Sonarr container, like /torrents.

    The best solution is to use a single, common volume inside the containers, such as /data. Your TV shows would be in /data/TV, torrents in /data/downloads/torrents and/or usenet downloads in /data/downloads/usenet.

    As a result, I created /media, which is defined as a named Docker volume, and mounted by the Plex container (on the MacMini), Sonarr, Radarr, and NZBGet2.
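
    For reference, a named volume like /media can point straight at an NFS export via Docker’s local driver. This is only a sketch of the idea rather than my actual task, and the Synology address and export path below are placeholders:

    - name: "Create the NFS-backed media volume"
      docker_volume:
        name: media
        driver: local
        driver_options:
          type: nfs
          o: "addr=10.0.1.x,rw"
          device: ":/volume1/media"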

    I’ll post again soon with a couple of cool dashboards I’ve built and the actual hardware I’m using for the cluster.


    1. Forced because of COVID-19, and also because I had some foot surgery in early September, and I’ve been much less mobile since then. Fortunately, I’m healing up well, and I’ll be back to “normal” after a few more months of Physical Therapy. [return]
    2. NZBGet’s files are actually in /media/nzb_downloads, but I left it as /media/downloads for the sake of clarity in the post. [return]
    27 November 2020
