KiVar: PCB Assembly Variant Selection for KiCad

My first KiCad Action Plugin was just released: KiVar allows for simple selection of PCB assembly variants.

Variation values and attributes (such as DNP) are defined using simple rules noted in symbol or footprint fields. That is, the variation data is fully contained in the schematic or board, respectively, and no external configuration is required outside the native KiCad design files.

Check out KiVar on GitHub.

Legacy links to VeloAce and Semitone Lighting Controller

I noticed that there are some dangling links on the web that point to my discontinued projects, such as VeloAce (a bike computer for Palm OS®) or the Semitone Lighting Controller project (aka “”).

I have now added landing pages, as well as some legacy folder structures that provide some project-specific information.

If you landed on this blog with a 404 while looking for information about a legacy project, just contact me.

Reshaping a Linux SW RAID 5 to RAID 0

Yesterday I finally converted my RAID 5 array of four WD Red 3TB disks to a simple stripe, aka RAID 0. Because, naturally, I have nightly onsite full backups and offsite partial backups of irretrievable data. And I did not want to give away 3TB of capacity just for availability anymore. I’m talking about a home environment here, which is a Raspberry Pi CM4-PCIe-SATA-driven NAS.

I’m a big fan of Linux Software RAID. It always worked wonderfully for my needs. And with mdadm it has a really nice user interface. And so I expected the conversion (“reshaping”) of my RAID 5 array into a RAID 0 to be simple. And I was not disappointed! It’s really just a matter of a single mdadm call. Well, admittedly, done twice. But that’s it. Yes, OK, you really might want to grow your filesystem afterwards, but that’s really out-of-scope when talking about the RAID structure.

So, let’s assume we have a RAID 5 array at /dev/md0, which reports the following state in /proc/mdstat:

md0 : active raid5 sdc1[3] sdb1[6] sdd1[4] sda1[5]
      8788827648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>

Then all you need to do is the following call in order to reshape your array to a RAID 0 structure:

sudo mdadm --grow /dev/md0 --backup-file=reshape5to0 --level=0 --raid-devices=4

The mdadm tool will back up some critical data, and then the kernel tells you:

md: reshape of RAID array md0

Monitoring the process reveals what the SW RAID implementation is really doing:

md0 : active raid5 sdb1[6] sda1[5] sdc1[3] sdd1[4]
      8788827648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      [>....................]  reshape =  0.9% (27384308/2929609216) finish=1760.9min speed=27468K/sec
unused devices: <none>

And if you think about it, yes, that’s exactly what you’d expect: it’s converting the array to a five-disk RAID 5 array in a degraded state. That is, effectively a four-disk stripe with a missing parity disk.
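By the way, the kernel’s finish estimate can be reproduced from the numbers in the progress line itself: the remaining 1K blocks divided by the reported speed. A quick sketch using the sample line from above (the variable names are just for illustration):

```shell
# Progress line copied from the /proc/mdstat output above
line='[>....................]  reshape =  0.9% (27384308/2929609216) finish=1760.9min speed=27468K/sec'

# Extract done/total block counts and the current speed (1K blocks per second)
done_blocks=$(echo "$line" | sed -n 's/.*(\([0-9]*\)\/[0-9]*).*/\1/p')
total_blocks=$(echo "$line" | sed -n 's/.*([0-9]*\/\([0-9]*\)).*/\1/p')
speed=$(echo "$line" | sed -n 's/.*speed=\([0-9]*\)K\/sec.*/\1/p')

# Remaining blocks divided by speed gives seconds; convert to minutes
eta_min=$(( (total_blocks - done_blocks) / speed / 60 ))
echo "${eta_min} minutes left"
```

This yields 1760 minutes, which matches the finish=1760.9min reported by the kernel.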

Once the reshape completes (in my case the ~1800 minute estimation was pretty accurate), you end up with:

md0 : active raid5 sdb1[6] sda1[5] sdc1[3] sdd1[4]
      11718436864 blocks super 1.2 level 5, 512k chunk, algorithm 5 [5/4] [UUUU_]
unused devices: <none>

The underscore in the disk states is there to make you feel uncomfortable. 😉 Because it’s still a RAID 5, and it’s still in a degraded state. And “degraded” sounds a bit worrying. So to feel better (at least from a purely emotional standpoint; after all, a four-disk stripe is nothing you should ever rely on), just call the above command a second time, now for the actual conversion:

sudo mdadm --grow /dev/md0 --backup-file=convert5to0 --level=0 --raid-devices=4

And now you instantaneously have your RAID 0 stripe:

md0 : active raid0 sdb1[6] sda1[5] sdc1[3] sdd1[4]
      11718436864 blocks super 1.2 512k chunks
unused devices: <none>

Technically, of course, it’s just as unsafe and concerning as a RAID 5 with missing parity disk. But, hey, we do have automated regular full backups, don’t we?
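The capacity numbers confirm the story: a four-disk RAID 5 holds three disks’ worth of data, so converting it into a four-disk stripe should scale the capacity by 4/3. A quick back-of-the-envelope check against the two mdstat outputs above:

```shell
raid5_blocks=8788827648            # size reported by the original RAID 5 array
per_disk=$(( raid5_blocks / 3 ))   # 4 disks minus 1 disk's worth of parity
raid0_blocks=$(( per_disk * 4 ))   # a stripe uses all 4 disks for data
echo "$raid0_blocks"               # 11718436864, exactly the RAID 0 size
```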

And that’s it for the reshape. Of course, your filesystem still has its old size. But, assuming you have an ext2/3/4 filesystem on your md block device, you can simply check and then grow your filesystem (with it unmounted) with:

sudo e2fsck -f /dev/md0
sudo resize2fs /dev/md0

Again, thanks to the Linux SW RAID developers for providing such a great system!

AVR Fuse Calculator online again

I must admit I seem to have underestimated the loyalty of my AVR Fuse Calculator users.

Just a few days after I relaunched my site, clearing any tracks of the past, a user named Alan politely asked if I could get my famous AVR Fuse Calculator back online. He also pointed out that the tool AVRDUDESS contains a link to my site, which was obviously non-functional for the last few days.

So, my apologies for that temporary degradation, and now let’s welcome the good old AVR Fuse Calculator, embedded in my new blog site. It’s the same backend with a little restyling.

Contact me if you find any new usability issues, but please be aware that I do not have the time to fix any AVR part database issues.

And thank you, Alan, for your kind inquiry.

Notes from a nerd’s mind

Hi there!

Welcome to my new blog! If you came here looking for my private website, then yeah, welcome too! Both sites have been merged into this new blog site.

Whenever I get the feeling that someone could benefit from my experience, I might write an article about it.

Expect some stuff about KiCad, hardware engineering, single-board computers, Linux and so on.

Hope you enjoy!