Aeva's Totally Reasonable Synthesizer Build
It was a Dark and Stormy Night
I started playing piano again at the end of 2017, after having given it up in my early teen years. After I expressed an interest in picking it up again, my mom gave me one of her old midi keyboards. I'm very happy with the progress I've been able to make since then.
My wife happened to have a midi/usb adaptor, so pretty much right away I started exploring my options for using my laptop to extend my piano's sound rendering capabilities. And I quickly learned that while I could do some cool things, the latency in the system meant that I couldn't play and listen to the final sound at the same time. I dug a little further into possible ways to make it usable, but the answers were pointing towards "custom kernel" and "jack", and I just don't have any patience anymore for Linux's weird tendency to make you earn the things you want from it. I also looked into synth modules and such, but they were either really expensive and/or didn't have the exact combination of features I was hoping for.
As any reasonable person would do, I gave up and started reading about the midi device protocol, and looked up what was available from sparkfun for me to glue together. The arduinos I had on hand weren't up to the task of audio processing, but this seemed like a fun project to finally learn how to do something with the FPGAs I had lying around.
It turns out, midi is designed to be pretty easy to bolt onto things. The spec provides a simple reference circuit for hooking a port up safely to your instrument. The communication itself is a serial connection over UART, with 8-bit words, 31250 baud, and no parity bit. The high level protocol for the most part amounts to one byte telling you what kind of event you received and the relevant channel, followed by one or two bytes providing parameters for that event. So for example, a "Note On" event on channel 0 for note 69 (A4, 440hz) at a velocity of 127 is just 0x90, 0x45, and 0x7f. Nice.
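Since the event type and channel are packed into that first status byte (high nibble and low nibble, with the top bit marking it as a status byte rather than a data byte), pulling it apart in VHDL is just slicing. A minimal sketch of that decode - the entity and signal names here are made up for illustration:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical decode of a single midi byte, assuming the UART
-- delivers one byte at a time on byte_in.
entity midi_status_decode is
  port (
    byte_in    : in  std_logic_vector(7 downto 0);
    is_status  : out std_logic;                     -- '1' when this byte starts a new event
    event_kind : out std_logic_vector(3 downto 0);  -- e.g. "1001" = Note On
    channel    : out std_logic_vector(3 downto 0)
  );
end entity;

architecture rtl of midi_status_decode is
begin
  is_status  <= byte_in(7);            -- status bytes have the high bit set
  event_kind <= byte_in(7 downto 4);   -- upper nibble: event type
  channel    <= byte_in(3 downto 0);   -- lower nibble: channel
end architecture;
```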
Doing the Thing
With the literature I found on my fpga and some VHDL tutorials, I was able to cobble together a simple blinky light example. I also found an MIT-licensed UART implementation written in VHDL that I could use both for the midi interface and for sending debug signals to my computer. As for the remaining hardware, I ended up buying the components needed to make the midi-in circuit instead of buying an existing arduino shield, and I also got a little i2s dac on a breakout board thing. My friend Mairi also sent me a neat dac she made, though I haven't worked out the logistics of getting it working yet.
From there I wrote a midi byte classifier in VHDL, which I was able to get working without having finished the hardware first. I used GHDL as a simulator, and sent fake packets over USB to exercise and verify it.
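A testbench along these lines is roughly all it takes to poke fake bytes at a decoder under GHDL (reusing the hypothetical midi_status_decode entity from the sketch above):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity tb_midi is
end entity;

architecture sim of tb_midi is
  signal byte_in    : std_logic_vector(7 downto 0);
  signal is_status  : std_logic;
  signal event_kind : std_logic_vector(3 downto 0);
  signal channel    : std_logic_vector(3 downto 0);
begin
  dut : entity work.midi_status_decode
    port map (byte_in => byte_in, is_status => is_status,
              event_kind => event_kind, channel => channel);

  process
  begin
    byte_in <= x"90";  -- Note On, channel 0
    wait for 10 ns;
    assert event_kind = "1001" report "expected Note On" severity error;
    byte_in <= x"45";  -- note number 69 (A4)
    wait for 10 ns;
    byte_in <= x"7F";  -- velocity 127
    wait for 10 ns;
    wait;
  end process;
end architecture;
```

GHDL's analyze/elaborate/run cycle (ghdl -a, ghdl -e, ghdl -r) then runs it and complains about any failed asserts.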
I remember that completing the midi-in circuit gave me some trouble, but I don't remember what it was exactly. It did, however, give me an excuse to finally try out the oscilloscope I picked up a while back. Once I got the circuit working, it was simple to connect it up to my midi byte classifier, resulting in some midi-controlled lights.
So judging from the time stamps, I basically made a lot of progress on this project from September 2017 to October 2017, and then burned out really hard until April 2019. I think the main problem was that I ran out of problems with obvious paths to solve, and needed to take some time to research the best way forward. This is all good fun, but the decision space was too broad and I didn't really know much about the theory behind audio synthesis.
Getting Back Into It
I'm not exactly sure what motivated me to pick this up again. I think it might have been a friend making awesome progress on one of her own projects. Between that and a fit of spring cleaning, I found all the notes I'd written on random sheets of paper and copied them down into my journal. From there I was able to work out what I needed to do next. Finishing wiring up the i2s dac from sparkfun and writing my own i2s implementation was the simplest route to follow. After that, making a 440hz square wave generator was reasonably easy. Then came hard-coding some conversions from midi note numbers to frequencies, then making a frequency calculator, and then figuring out how to add polyphony.
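The square wave generator really is just counting: flip the output every half period's worth of clock ticks. Here's a hedged sketch of the general idea, assuming a 48khz sample clock - the names and generic interface are mine for illustration, not the actual design:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity square_gen is
  generic (
    clk_hz  : positive := 48_000;  -- assumed sample clock rate
    freq_hz : positive := 440      -- output frequency
  );
  port (
    clk  : in  std_logic;
    wave : out std_logic
  );
end entity;

architecture rtl of square_gen is
  constant half_period : positive := clk_hz / (2 * freq_hz);
  signal count : natural range 0 to half_period - 1 := 0;
  signal state : std_logic := '0';
begin
  wave <= state;
  process (clk)
  begin
    if rising_edge(clk) then
      if count = half_period - 1 then
        count <= 0;
        state <= not state;  -- flip every half period
      else
        count <= count + 1;
      end if;
    end if;
  end process;
end architecture;
```

Note the integer division in half_period: at 48khz, 48000 / (2 × 440) truncates to 54, which actually lands the output at about 444.4hz - a cousin of the integer-hz tuning issue in the list below.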
Now What?
I suppose I'm at the really fun part now, where I can start experimenting with making more interesting sounds. Well, not quite - there are some things that still need to be worked out:
- The frequencies calculated for midi note numbers are integer values in hz. The resulting tuning is noticeably off for some notes. The easiest way to fix this is probably to scale everything up by 1000 and work in thousandths of a hz.
- I read a recommendation somewhere that when writing a midi synthesizer, you should keep your frequencies represented as midi note numbers (as floating point values) for as long as possible, so that you can do easy linear transformations between notes and have it sound right. I'm considering doing something like that, but using cents as the intermediary unit (see the formulas after this list).
- I also don't have a floating point math implementation for the fpga at the moment. I could see what's available, roll my own, or use fixed point instead. I mean, I don't have a fixed point math implementation for the fpga either, but I expect it to be a bit simpler to write one (see the sketch after this list).
- The system currently just supports six simultaneous notes. Ideally it should support at least 20 though. Maybe instead of having several parallel sound rendering paths, I could have one and use pipelining.
- My recent push to get the i2s dac working and glue the ends together also resulted in a big pile of quickly hacked-together stuff, born of expedient needs. I'd like to take time to tidy it up, and figure out which parts are going to be expanded on, which patterns might benefit from being pulled out into their own entities, etc.
- It would be fun to be able to use my piano's sustain pedal.
- Something something envelope generators.
- I was reading about squeezeboxes while commuting home the other day, and how they'll sometimes have multiple sets of reeds tuned slightly apart from one another. I noticed that the weird "integer tuning" scheme I have so far sounds kind of rad when layered with the even tempered tuning of my piano's stock instrument voices, which I was not expecting. Or maybe square waves + harpsichord just sounds cool, and it has nothing to do with tuning. Anyways, I just think that's neat.
- Generating fancier waves than square waves - like sine waves!
- Thinking a bit further ahead, using splines to render waves would be pretty neat.
- Thinking way further ahead, some kind of stack machine for programming the sound rendering pipeline would be neat, especially if it could be easily reprogrammed on the fly.
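For reference, the standard equal temperament relationship (A4 = midi note 69 = 440hz) that a cents-based scheme would build on:

```latex
f(n) = 440 \cdot 2^{(n - 69)/12}   % frequency of midi note n
c(n) = 100 \, (n - 69)             % the same pitch in cents relative to A4
f(c) = 440 \cdot 2^{c/1200}        % and back from cents to hz
```

As a sanity check: middle C is note 60, so c = -900 and f = 440 · 2^(-900/1200) ≈ 261.63hz, which matches the usual tables.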
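And since fixed point keeps coming up: here's a minimal sketch of what a fixed point multiply looks like in VHDL, assuming a Q16.16 format (16 integer bits, 16 fraction bits). The format and the names are illustrative, not a design decision:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Multiplying two Q16.16 values gives a Q32.32 result, so slicing
-- out the middle 32 bits rescales the product back to Q16.16.
entity fixed_mul is
  port (
    a, b : in  signed(31 downto 0);  -- Q16.16 operands
    p    : out signed(31 downto 0)   -- Q16.16 product (truncated)
  );
end entity;

architecture rtl of fixed_mul is
  signal full : signed(63 downto 0);
begin
  full <= a * b;              -- Q32.32 intermediate
  p    <= full(47 downto 16); -- drop 16 fraction bits and 16 overflow bits
end architecture;
```

VHDL-2008 also ships ieee.fixed_pkg with sfixed/ufixed types that do this bookkeeping automatically, though tool support for it varies.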
I guess that should keep me busy for a while :)