Sep 21, 2010 Science
I recently came across a query on a music software mailing list about bit depth, what exactly it meant, and why you would choose anything other than 16 bits (given that CDs are mastered at 44.1kHz, 16-bit). I wrote a long, disorganized reply there, and then it occurred to me that I should take those remarks and bundle something up for you folks here.
The first important thing is to understand what bit depth means. Bit depth and sampling rate are actually two related terms, on two different axes, and as such, they’re best expressed with a graph, but before the graph can be meaningful, we should talk a little bit about digital recording.
Sound in the real world is analog. What analog means, in terms of analog vs. digital, is that it’s continuous. If you take a ruler, it has a bunch of markings on it. Let’s say it’s marked at only centimetres. So it has markings of 1cm, 2cm, 3cm, etc. The ruler doesn’t cease to exist between those markings, however, nor does real-world distance. There’s a defined length of ruler between them. Now, you could make more markings so that you had a 1.5cm marking. But if you zoomed in between the 1cm and the 1.5cm marking, there’s still ruler between them. You could make more markings, and if you zoomed in again, there would be ruler between them, and so on and so on down the line. Eventually, since a ruler is a physical object made of atoms, you’d get to the point where you’d have discrete atoms. But there would still be distance between each atom. And that’s where analog vs. digital comes into play. Digital is like the ruler — if you zoom down deeply enough, there’s a point where there is space between the units. Analog is like the distance — no matter how far you zoom down, there’s more distance between the ends. I hope that makes some sense.
Now, sound is a form of energy, which gets transmitted to your eardrums. (It moves as a wave of pressure, and you can imagine a string moving back and forth, pushing the air back and forth as it moves, those compressions and rarefactions being transmitted through the air to your eardrum.) When we measure sound, it really has only one dimension — the amount of energy. All of the things we hear in a sound — its colour, timbre, frequency, tone, etc. — are built out of that one energy measure, and how it changes over time.
So when we make a digital recording, we can plot a representation of a given sound as a measure of its energy over time. The most common representation puts energy level on the vertical axis and time on the horizontal axis. A typical sound displayed this way looks something like this:
For our purposes, however, I’ll be dealing with a much simpler graph. All the same principles apply to both. We’re going to talk about a sound graph that looks like this:
Now, I said that digital recording was like the ruler, where eventually you reach the atomic level and there’s a discrete bit of ruler, and then empty space before the next bit. That’s exactly what digital means — discrete elements. If I have a digital scale that represents only whole numbers, then there is a value for 1, 2, 3, 4, 5, but no value for anything in between them. So if I were to try to plot the graph above in a digital system, it might look something like this:
(Pedant note: Both the graph above and this one are fundamentally digital because the computers and monitors we’re using to make them are digital. Unless I go over to your house and draw a line on your monitor with a Sharpie, I can’t really show an analog graph to you. A certain suspension of disbelief is required here.)
You can’t have that smooth line from our original, because those smooth, continuous values between each value don’t exist. Now, this might seem like a pretty poor way to represent sound, and many audiophiles would agree with you on that, but for the most part there seems to be a point at which we can’t really tell the difference between the straight line and the stair-stepped line, and for most of us, the industry standard known commonly as “CD Quality” meets that criterion. CD Quality is actually 44.1kHz, 16-bit, stereo. What does that mean in terms of our graph? Well, the first part is easy — 44.1kHz means that the graph is divided into 44,100 “samples” (discrete measurements) per second. That’s the horizontal axis, and I can’t draw you a picture of it, because that’s probably well over 40 times the number of vertical lines within your browser’s document window. It’s quite fine. Now, “stereo” is also easy to explain — it just means that there are two separate 44.1kHz, 16-bit recordings on the disc, one for the left speaker and one for the right speaker.
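If it helps to see the idea in code rather than pictures, here is a minimal Python sketch (my own throwaway illustration, not anything from a real audio tool) that snaps a smooth wave onto the discrete values a low-bit-depth system has available:

```python
import math

def quantize(value, bits):
    """Snap a value in [-1.0, 1.0] to the nearest level a bits-deep system can store."""
    levels = 2 ** bits              # e.g. 3 bits -> 8 possible values
    step = 2.0 / (levels - 1)       # spacing between adjacent stored levels
    return round(value / step) * step

# One cycle of a smooth, conceptually "analog" sine, measured 16 times:
smooth = [math.sin(2 * math.pi * i / 16) for i in range(16)]
stairs = [quantize(v, 3) for v in smooth]   # the stair-stepped 3-bit version

for s, q in zip(smooth, stairs):
    print(f"{s:+.3f} -> {q:+.3f}")
```

Every in-between value gets forced to the nearest rung of the ladder; that is all the stair-stepping in the pictures is.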
To understand what 16-bit means, you need to know a little about binary numbers, which is how computers store things. Binary numbers are numbers broken down into just ones and zeros, as I’m sure all of you know. You’ve seen the infamous “10100101001010101101001010101…” representation of binary in jokes about robots or wherever. The bit depth is what controls how many ones and zeroes are used to represent each sample, mentioned above. So the phrase “16-bit” means that sixteen digits, each of which can be a one or a zero, are used to represent each sample, i.e., at each moment in time we detect the amount of energy in the sound and store it as a sixteen-digit binary value.
What’s the implication of that? Well, with a single binary digit, you can store two values, 0 or 1. This lets you count from 0 to 1, obviously. With two binary digits, you can store four values, 00, 01, 10, 11. This lets us count to 3, since we can assign these representations the meanings 0, 1, 2 and 3. Each time we add a digit, the number of values that we can store doubles (which makes sense, since we take all the values we could previously represent, and for each we add the option of having a 1 or a 0 prefixed to it).
That may not sound like a lot, but the progression moves pretty fast. The number of possible values at each step of the way (subtract one to get the maximum number we can represent, given that we’re counting from zero) is:
1 bit: 2
2 bit: 4
3 bit: 8
4 bit: 16
5 bit: 32
6 bit: 64
7 bit: 128
8 bit: 256
9 bit: 512
10 bit: 1024
11 bit: 2048
12 bit: 4096
13 bit: 8192
14 bit: 16384
15 bit: 32768
16 bit: 65536
(It’s worth noting that in actual practice for audio use, these aren’t typically used to denote, say, 0 to 65,535 for the example of 16 bits, but rather -32,768 through +32,767, so that silence sits at zero and the wave can swing both ways.)
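If you want to double-check that table and the signed ranges yourself, a couple of lines of Python will do it; this is just a throwaway sketch, nothing audio-specific:

```python
for bits in range(1, 17):
    values = 2 ** bits                         # total distinct values at this depth
    lo, hi = -(values // 2), values // 2 - 1   # the usual signed split for audio samples
    print(f"{bits:2d} bit: {values:6d}  (signed: {lo} to {hi})")
```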
So with 44.1kHz, 16-bit, stereo sound, or “CD quality”, to display a graph of one second of sound, you’d need to have two pictures (for stereo), each of which would have to have 44,100 dots horizontally and 65,536 (see above) dots vertically. On the monitor that I’m typing on here, which has a horizontal dot resolution of about 107dpi and a vertical resolution of about 114dpi, that pair of pictures would (in total) measure almost 69 feet wide and 48 feet tall. So it’s a really fine measurement of sound. Most importantly, it’s a fine enough measurement of sound that just as we don’t see the ruler as a collection of discrete objects, we don’t hear the sound as a collection of discrete values and moments in time — instead we hear an indistinguishable representation of the original continuous sound.
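That feet-and-inches claim is easy to verify with a quick back-of-the-envelope in Python (using the monitor numbers quoted above; yours will differ):

```python
samples = 44_100      # dots across, per channel, for one second of audio
values = 65_536       # dots tall: one for every level a 16-bit sample can take
h_dpi, v_dpi = 107, 114

width_ft = 2 * samples / h_dpi / 12   # two pictures side by side, for stereo
height_ft = values / v_dpi / 12

print(f"{width_ft:.1f} ft wide x {height_ft:.1f} ft tall")   # roughly 68.7 x 47.9
```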
Okay, that’s all the background material! Fortunately, from here it becomes easy to demonstrate the answer to the question: if your goal is to render sound in 16 bits, why would you ever work with the common higher bit depths (20 and 24)? (This, by extension, can be used to answer the related question of why you’d use higher sampling rates like 48kHz, 96kHz or 192kHz, although those also involve a little delving into Nyquist limits as a side topic, which explains why the possible rates in each scale are so different.)
Well, the answer to why you’d use 20-bit or 24-bit is that we tend to manipulate sound. If your goal were just to make a recording and press it, completely unprocessed, onto a CD, there would be no reason to use anything but 16-bit, but in actuality, most of us run our audio through the wringer and back before it gets even close to a CD, and that poses a problem. This problem is fortunately very easy to illustrate with a couple of pictures.
First, let’s look at a graph of our line as presented in 2-bit sound (over one second at 8 Hz):
Now, let’s bump up the quality of our graph to 3-bit sound (over one second at 8 Hz):
It looks a lot better, right? And it will sound better, accordingly. But we’re talking here about a sound that moves smoothly from the minimum possible loudness (presumably silence) to the maximum possible loudness that our system can record without distortion. Realistically, a lot of sounds don’t neatly maximize their loudness that way. Let’s suppose that the sound had been half as loud. Here’s our 3-bit graph:
Now, suppose that we get that sound, and as is not uncommon, we immediately normalize it (which alters the sound file to scale all values such that the maximum value is at the maximum possible level). The result looks like this:
Notice anything about it? Yes, while it may not look exactly like the 2-bit graph above, it’s more or less functionally equivalent. That’s because the quieter volumes used less of our available dynamic range, and thus were being represented by fewer possible values.
Now, if we’d recorded the half-volume sound initially in 4-bit audio, we would have gotten this:
Then when we normalized it, we would have gotten this:
While this may look slightly different from our 3-bit graph, it’s more or less functionally equivalent. If we’re then exporting it to 3-bit media, we have the best possible recording of it, despite the fact that we had to normalize it. In fact, if we thereafter “downsampled” it to 3-bit audio for output, the graph would be exactly identical to our 3-bit, 8 Hz line plot above.
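If the graphs aren’t enough, the same experiment runs in a few lines of Python. This is a toy sketch of the idea (signals as floats between 0.0 and 1.0, with my own naming throughout), comparing recording our half-volume sound at 3 bits against recording it at 4 bits and only dropping to 3 bits at the end:

```python
def quantize(value, bits):
    """Snap a value in [0.0, 1.0] to the nearest of the 2**bits available levels."""
    levels = 2 ** bits - 1
    return int(value * levels + 0.5) / levels

def normalize(samples):
    """Scale the whole signal so its loudest sample sits at the maximum level."""
    peak = max(samples)
    return [s / peak for s in samples]

ramp = [i / 16 for i in range(17)]   # a smooth rise from silence to full volume
half = [s * 0.5 for s in ramp]       # the same sound recorded at half volume

# Record at 3 bits, then normalize: only about half of the 8 levels were ever used.
low_road = normalize([quantize(s, 3) for s in half])

# Record at 4 bits, normalize, then requantize to the 3-bit target media.
high_road = [quantize(s, 3) for s in normalize([quantize(s, 4) for s in half])]

print(len(set(low_road)), "distinct levels")    # 5: coarse, close to the 2-bit graph
print(len(set(high_road)), "distinct levels")   # 8: the full resolution of 3-bit media
```

The low road can never get its missing in-between values back, no matter how it is scaled afterwards; the high road had spare resolution to spend on the normalization.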
Normalization is the easiest manipulation to explain the effects of graphically, but of course every processing step you take (and I take a lot) risks a loss of some data, so the more “extra” data you had to begin with, the more you can lose without degrading apparent quality on the target media. And remember that while 20-bit or 24-bit may not sound like a lot of extra leeway for a project targeting a 16-bit destination, each bit doubles the number of values available to represent the signal, so four or eight extra bits is lots of room for mangling. Also, a good processing algorithm will take steps to lessen the damage — for example, by interpolating or smoothing values where gaps have been introduced.
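For the curious, the standard “lessen the damage” trick when reducing bit depth is dithering: adding a tiny amount of noise before rounding, so that the quantization error stops correlating with the signal and becomes a faint hiss instead of distortion. A rough sketch of the idea, in my own toy code rather than any particular product’s algorithm:

```python
import random

def requantize(value, bits, dither=True):
    """Reduce a [0.0, 1.0] value to the given bit depth, optionally with triangular dither."""
    levels = 2 ** bits - 1
    # Triangular noise of roughly +/- 1 least-significant bit, centered on zero:
    noise = (random.random() - random.random()) if dither else 0.0
    quantized = int(value * levels + noise + 0.5)
    return min(max(quantized, 0), levels) / levels
```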
Is there a downside, or should you just pick the highest bit depth you possibly can at all times?
Well, the more bits in a chunk of sound, the more horsepower it takes to process, the more memory it takes to hold the sound in active memory, and the larger the file when the sound is written to disk. So there’s a cost there, and if you use large quantities of sound on a modest computer, that may be a significant issue for you. Also, you will want to be sure that all of the apps you use to process the sound can handle the format you chose, and 44.1kHz, 16-bit, stereo is handled by just about everything under the sun. Lastly, the fact that every processing step involves possible data loss is a double-edged sword: format conversions are processing steps too, and while most apps will do their best to give you the best rendering at all times, if you really do plan to just record direct to CD without any manipulation at all, then it’s quite possible that you will get the most accurate rendition by using one format end-to-end, which would be 44.1kHz, 16-bit, stereo.
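The disk cost, at least, is easy to put numbers on. For uncompressed PCM audio (ignoring file headers), size is just samples per second times bytes per sample times channels, times the length; a quick sketch:

```python
def pcm_bytes(seconds, rate=44100, bits=16, channels=2):
    """Size of raw PCM audio on disk, before any file-format overhead."""
    return seconds * rate * (bits // 8) * channels

for bits in (16, 24):
    mb = pcm_bytes(4 * 60, bits=bits) / 1_000_000
    print(f"{bits}-bit: about {mb:.0f} MB for a four-minute stereo track")
```

That works out to roughly 42 MB versus 64 MB per four-minute track, which added up quickly on 2010-era disks.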
(As an aside, while many synthesizers, DAWs and other tools have in the past advertised having x-bit internal processing, where x could be anywhere from 20 to 64, these days most software uses 32- or 64-bit internal math with floating-point numbers, which behave differently from the integer samples described here. A good tool will know when to use each format and will switch between them as needed. Also, playback settings just affect what gets sent to your speakers, not what winds up in the sound file. As such, in the modern era this post mostly applies to making choices about recording external audio sources.)
Sep 21, 2010 Uncategorized
So, I started playing around with Mainstage today, and I have to say — it’s pretty darned sweet! Mapping controls to parameters is very visual and very, very easy. Also, it can get tempo from Nodal and map each incoming MIDI channel from Nodal to a channel strip in Mainstage. The combination basically gives Nodal the ability to not only use any instrument or effect plug-ins that I have on my system, but also the ability to modulate all of the parameters of those instruments or effects via my control surface or commands from Nodal or even both at the same time (!). This makes me want to upgrade to Logic Pro 9, which I have heard includes a lot of improvements to Mainstage, but since my laptop’s battery died today, I’ve been thinking that I should start saving in a much more serious way for a new machine.
Since apparently you can record a stereo mix directly from Mainstage, there’s no huge reason that I couldn’t use this Nodal + Mainstage combination for composition, in a totally bizarre fashion. The big problem would be parameter automation. It would be clunky at best. Also, when I tell Mainstage to get its tempo from Nodal, the tempo seems to be all over the map. In some ways, this creates a weird sort of naturalness to the feel, but in other ways, it feels sloppy and unpredictable.
Alternatively, I did also get Nodal talking to Logic Pro. This was simpler to set up initially, and it offers some flexibility, but I didn’t see an obvious way to have the incoming MIDI data from Nodal trigger instruments whether they were record-armed or not, or to have each instrument respond to the correct channel. This is a really basic task, but it’s been ages since I’ve done it in Logic.
I’ve got the manuals for Logic, Mainstage and Nodal loaded up on the iPad, so either way, I’m sure I’ll be scheming some weirdness on the bus tomorrow and giving it a whirl by tomorrow evening.
Sep 19, 2010 Uncategorized
I’ve posted a number of times before about the upcoming Einstürzende Neubauten 30th anniversary tour (now the ongoing tour, actually, since I believe they’ve already played dates in Europe). It’s a two-evening event during which one evening is a standard EN show and the other evening is art projects, lectures, side projects, etc.
Anyway, I’ve whined various times that they hadn’t announced any Toronto dates, so I thought I should update you to say that now they have! There’s no ticketing or pricing information yet, but the dates are:
Saturday, December 11th, 2010: Phoenix Concert Theatre
Sunday, December 12th, 2010: Lee’s Palace
I’m definitely going to be doing my best to get tickets to these events. You should too!
Sep 16, 2010 Gear
Tonight, I spent some time composing some phat beatz on the Korg iElectribe app on my iPad. I’m sure that I’ve posted about it here before. It really is an amazing tool; all the functionality of a real Electribe (well, okay, most of it, with some significant omissions) in a portable app that only cost me ten bucks. The touch screen is even a fantastic match for the task, allowing generally intuitive control of all the functions, great power scrolling for the preset browser, easy programming of automation, etc.
However, while I had fun, like I always do, I don’t think I’ll ever use anything I came up with. I might surprise myself there. Still, there really is just something about the design of these little boxes that feels really… limiting. My, that sounds haughty. I don’t mean it that way, and in fact, a lot of the rhythms that I compose aren’t any more complex or less loopy than what I’d make in the iElectribe. Perhaps it’s the focus on the rhythm as a beast totally independent from the rest of the music, which is probably fine if you have the sort of structured compositional mind that can visualize the missing parts and figure out where to leave the spaces. Or maybe it really is just that it’s a focused tool for producing a certain kind of result, and I get tired of that result quickly. I’m not sure. Either way, and maybe this will pass with time, I always find myself feeling boxed in.
While I was taking a break and checking my e-mail, I saw that someone is selling a ridiculously gorgeous E-Mu modular system. I could never in a million years afford it, and I probably wouldn’t want to deal with the headaches of maintaining a truly classic synthesizer, but it really was a thing of beauty.
Some years ago, in the mid-’80s let’s say, Tom Ellard of Severed Heads said in an interview that if you bought a synthesizer off the shelf and used it as is, you were in effect letting some board member decide your music — that the architecture of the machine dictated to such a high degree what sounds you would make and what music you would write that you might as well invite that guy to be in your band. Years later, I contacted him and asked if he still felt that way, and he said that no, the technology was now flexible enough that it wasn’t necessary to build your own tools or extensively hack the ones you’d bought to maintain creative control over what you wrote.
And yet, I think it’s still true to an extent and always has been, and probably always will be. Certainly the invention of the pianoforte radically altered how people composed music, and that’s probably true of every instrument that’s ever been introduced. I think this sort of interface-and-capabilities level is not what Mr. Ellard was referring to, though.
So, the architecture. And this gets to a lot of my conflict with the iElectribe, perhaps. I can only have one effect, seriously? You should see my strips in Logic Pro. I use more effects than I can count. But it’s not even that. Every piece of software I’ve used heavily influenced the product that I created with it, and my like or dislike of each tool is more about how I feel about those influences than anything else. I love how getting familiar with Nodal has changed how I think about composition, and some of those changes go hand in hand with what the Audio Damage plug-ins say to me — they certainly push the “Worship the Glitch” theory that Coil made me think about more concretely but which has always been there for me, and they both helped me to trust randomness more. (Nodal more on the random branching, and Audio Damage more on just abusing things and using what you get. I used to do this on my old samplers all the time, swapping disks while loading and the like.)
But then there are the days when I just want to retreat into Reaktor and make everything from scratch, and the days when even the ways that Reaktor pushes me in certain directions chafe, and I daydream about sculpting sound in the air, or I try to see what I can carve out just using a raw waveform editor, although that rarely works out. And this is probably where the Arduino and that sort of DIY thing appeals so much, even though it has its influences as well (Oh, hi there, square wave!).
When I bought the AKAI EWI 4000S, which I sadly no longer have, the idea was exactly the next obvious conclusion here — to use its clear and present influences to not just inspire but channel, cut, sculpt, maybe bludgeon at times. To change radically the way I input my data so that I would start inputting different data. And that worked, to an extent, but I found that continually trying to adapt gets tiring after a while, and I just didn’t want to spend the effort to become adept at another interface.
The modular has been different for me. Part of that is that while it certainly suggests an architecture, it’s flexible and lets you impart your own stamp on it. Part is that I like the influences that it brings to the table, mostly. A big part of it, which is weird to admit, is that it took so long to get used to that by the time I was capable of really evaluating it, I’d become committed, had passed the point of no return. But it has its downsides, too. It encourages me to spend more time making sounds and textures and less time putting them in songs, hence the radical drop in output (although it’s not the only factor). Its ephemeral nature is both its great strength and its great weakness.
I think that this is why so many synth people become gear addicts. Every new synth, no matter how familiar, is a new brush, a new colour, and they become extensions of you and the way you think about sound. I find myself lately looking at an ad for a used Access Virus A, a synth that imposes an architecture and imparts a tone if ever there was one. Maybe I’d regret it, and I don’t have the money anyway. Or maybe it’s because these influences are a kind of dialogue, and in the end, I kind of want to invite that guy in a board room somewhere to join my band.
Sep 15, 2010 Uncategorized
I came across a thread recently asking what our favourite albums released in 2010 have been. It made me realize that while I’ve purchased several albums in 2010, only two were actually released in 2010 — Autechre’s “Oversteps,” and Jónsi’s “Go”.
I don’t feel like I can really speak to the Autechre release, because I haven’t taken the time yet to really sit with it and give it the attention it needs. It’s not an immediately accessible album. That’s okay with me, but lately I haven’t had much time to just sit with music in a respectful and attentive way, so I’ve been mostly avoiding it until I have the time to do that.
That leaves only the Jónsi release, really. I’ve actually listened to it quite a lot. I had tickets to see his much-lauded tour, but the date was cancelled at the last minute due to a problem with the venue and the sets used on the tour. I have to admit that I felt really angry about that; my feeling was that if you and the venue make a mistake about the elaborate sets, and that mistake might deny the people who pre-ordered tickets and have been waiting excitedly the opportunity to see you perform, you really should try your best to work with what you have, perhaps doing a simpler show without the elaborate sets. So I didn’t necessarily approach the album on fair terms for some time after that incident.
Having had time to mostly cool off from that, though, I think that Go is really a remarkable accomplishment. Sigur Rós have always been one of my favourite bands, but mostly because of their ability to capture the rich poignancy of beautiful melancholy. This carried forward pretty much until their last couple of albums, I think, with the penultimate venturing into a lot of energetic, sense-of-wonder territory, and the last having the first track of theirs that I think was just unmitigated happiness: the foot-tappingly catchy Gobbledigook.
What Jónsi seems to have done on Go, for the most part (there are a few exceptions) is to distill out that pure positive energy and optimism and present it undiluted, but somehow make it feel every bit as gorgeous and textured and filigreed as Sigur Rós did with darker hues. I really think that that’s hard to do. I mean, I think that we have a predisposition to thinking that great art must be dark or sorrowful. The idea that artists must suffer to produce great work informs this in part, and many people complain that artists lose their edge when they become successful enough to lose the “starving” designation.
Jónsi somehow manages to show not only that great art can be joyful and celebratory but that it’s possible to create things that sparkle with happiness without ever crossing the line into saccharine territory. It’s easy to find true beauty here, but it’s also next to impossible to not smile widely.
I think that these days it’s so easy to get caught up in turmoil and stress and worry that someone who reminds us what it’s like to just smile is truly making great and worthwhile art.
If you’ve not had the chance to check the album out, here are a couple of good example tracks on YouTube:
Sep 10, 2010 Uncategorized
I’ve got to tell you, if you’re looking for great ways to spend a Friday afternoon, you could do a lot worse than enjoying some Sour Lik-m-Ade Fun Dip and watching sound design videos online.
I’ve wanted to get an inexpensive oscilloscope for ages, as a way to help visualize what’s happening in the synthesizers.com modular system. I do have an inexpensive digital storage oscilloscope, but I’ve never been able to get decent plots from the dotcom system with it, although it’s been very useful for other projects. Also, I want something that reacts very fluidly in realtime. This has mostly meant scouring eBay, but even old scopes from the ’70s with bits missing, remaindered from workshops, can run into the hundreds of dollars.
However, I recently discovered oScope for the iPad. (Obviously if you don’t already own an iPad, this is no longer an inexpensive solution, but I already had one.) It’s an oscilloscope that will plot audio signals coming in from the built-in mic (to be honest, I didn’t even know the iPad had a built-in mic) or from an audio source. The “Lite” version is very workable and free, and there’s a $9.99 version that adds triggering and a simple frequency spectrum analyzer.
But wait, you say — from an audio source? The iPad doesn’t have a line in! Well, it turns out that there is a great and easy way to work around that, and that workaround is something else you might already have if you own an iPad: The iPad camera connection kit. The kit, which runs about $35, comes with two small dongles, one of which has a slot for SD card media and one of which has a USB jack. This allows you to download photos and video directly to your iPad from digital still and video cameras. (In fact, I used the SD reader to transfer and upload the video later in this post, since conversion and uploading via the iPad is so simple and painless.) However, the dongle with the USB jack lets you connect all kinds of other things, from keyboards to audio devices. I discovered this a long time ago when I managed to get Skype working on my iPad over WiFi using a Plantronics USB headset.
That same headset can have the cables to the headset part disconnected and act as a standalone monophonic class-compliant USB audio interface. By plugging a 1/4″ to 1/8″ adapter into it, plugging it into the camera connection kit, and plugging that into the dock connector on the iPad, then downloading oScope, I was ready to rock.
A lot of this won’t be news to anybody in theory, but what really surprised me was how clear and fluid the results were, and how well oScope handles this kind of realtime waveform display. I’ll embed a video here for you to see. It’s a bit rambling, and it covers some of the same ground again. Also, apologies for the shakycam — my tripod is currently broken, so I went handheld, which proved to be “interesting” while juggling cables and jacks and so on.
You can’t hear the audio from the modular, as you get a cleaner plot if you don’t split the signal. However, if you jack the audio into a synthesizers.com Q111 Pan-and-Fade and then pan it to the middle, you can listen to the audio while running it to the oscilloscope. Because this greatly reduces the amplitude and forces you to zoom in much further, it does introduce a bit more noise into the plot.
As a fun note, just playing with this for two seconds found a problem in one of my Q106 oscillators, where the Saw and Ramp jacks were reversed. Very hard to detect by ear, but it showed up clear as day on the plot.
I should also note that I sent the developer a note with some questions and comments, and he responded less than 30 minutes later. Two thumbs up for customer service!
Anyway, here’s the video:
ERRATA: I say in the video that any class-compliant USB audio device will work, but in fact, it seems that there are limitations. The iPad only provides a little power, so if your audio device requires a lot of power, it will have to be powered externally or via a powered hub. Also, I have no idea what it will make of multi-I/O devices. And it only does 44.1kHz/16-bit, so if you try to use something that works above that by default, it won’t work unless you can manually switch it down.
ERRATA 2: (I also say “oscillator” instead of “oscilloscope” at one point. Whoops!)
ERRATA 3: When I say that it doesn’t really provide measurement so you can’t use it for calibration, I should explain what I mean, because after some usage I realize that it’s unclear. While the grid is drawn at a fixed distance on the display, the grid spacing is meaningful because the unit displays in the top left corner adjust accordingly. Where the problem lies is that the grid lines aren’t value-labelled, and the display seems to center vertically or do some other sort of compensation that I may simply be misunderstanding. So although you can say things like, “the waveform has a 5V spread”, you can’t necessarily say things like, “the waveform goes between -2V and +4V” if it’s offset. This impedes certain kinds of measurements that I think are common in performing calibrations.
ERRATA 4: In the video I say that I used the tool to discover a wiring fault in one of my oscillators. After comparing the results against my other oscillators, I’m not so sure if it’s a wiring fault in the oscillators or a display fault in the app. I’ll have to investigate further.
Sep 9, 2010 Uncategorized
I was sorting through some old photos today looking for pictures of a rack that I used to have for rackmount synthesizers so that I could post them to a list where someone had asked about that particular rack. In the process, I’ve made myself nostalgic for every synthesizer I’ve ever owned all over again, which is bad in that I already feel this way all the time.
Actually, I shouldn’t lay all the blame on my own shoulders. A big part of it was spurred earlier, when my good friend David posted this lovely photo of a Kurzweil PC88 belonging to someone I’d love to link, except that I have no idea who they are. If you’re reading this, David, toss an attribution in the comments? I’ve never owned a Kurzweil synth of any kind, but I was considering buying an old K2000S recently and have always appreciated their aesthetics (while also being curious about the VAST architecture).
Now, for a good portion of my musical meanderings I have not been a hardware guy at all, so this is going to be a very short list. I might post about software another time, but I don’t get nostalgic over softsynths in the same way, and maybe that in itself deserves a post. In the meantime, though, here are the synths that I’ve called mine over the years (the photos are not of my actual units):
#1: Korg DSS-1
The first synth that I called my own was a Korg DSS-1. Now, technically it was never really mine. I had it on a rent-to-own plan from the Long & McQuade back home in Windsor. It was used, and I had it for only a couple of months around the summer of 1989. The DSS-1 was a hybrid synth/sampler with lovely-sounding filters and dual delay lines, and it allowed you to process inputs through its filters and effects. It had a tiny amount of memory (256KB), which gave you a whopping 5 seconds of sampling time, but it compensated with both a huge number of waveforms for its synthesis section and the ability to draw your own waveforms using the data slider and the LCD. Prior to getting this beast, I was “composing” strictly by screwing around with tape. I can’t say I did a lot of composition on it, and I only have one recorded song that I can attribute to the unit (“The Last Day,” which heavily overused a sample of an Imam saying, “Let every soul look to what it has put forward for the future,” and otherwise featured a sped-up sample of the opening percussion line of When The Levee Breaks and a sort of lo-fi pad made from a sample of Severance by Dead Can Dance). Still, not only did I love it, but I thought it looked beautiful. (Many people complain that it’s clunky, huge and heavy, but I still love it.) You can find a whole pile of images of it here.
I don’t know if I presently have the room for it, but I’d love to get another one someday.
#2: E-Mu EMAX
What the DSS-1 made up for in style, it lost by not having a manual. Or rather, Long & McQuade didn’t have one. And as it was my first-ever synth, it became clear pretty early on that I was going nowhere fast with it unless I found one. Sure, I could goof off, but I thirsted for something I could really get deeply into. I asked them if they had one lying around, but they didn’t… but they did have an E-Mu EMAX available for rent-to-own that did have a manual, and they’d transfer my earned credit over to it.
Now, I love E-Mu. I was a huge E-Mu junkie for years. However, one thing about them: they make some ugly-as-sin synths. The EMAX wasn’t the ugliest synth ever, but it also wasn’t the prettiest, as you can see here. The grey case colour lacked punch, and the pink highlights were brasher and more obnoxious in person.
However, it sampled like a dream. 12-bit was more than enough for me then, and it could hold 52 seconds of samples. It also came with a whole pile of great sounds, including the now-infamous Arco Strings, which I must admit I loved to death at the time. (They really are kind of horrible.) I had it for only a little over a month, though, most of which was spent learning to use it and going through setup. This was also when I bought a Roland MPU-401 MIDI card and got a copy of Master Tracks Pro sequencing software. Having been heavily influenced by a staccato synth-strings piece called “Ad Astra,” by an Icelandic artist named Höh, I wrote “Flight,” which isn’t on the site but some of you have heard. It’s not cringe-worthy, but it’s not terribly inspiring. However, it marked the beginning of several trends that would play a role over the next couple of years. First, I tried to make it as dense as I could without getting too muddy. Second, I embraced the “Worship the Glitch” ethic, composing the main line by playing a percussion loop I’d written using a strings patch instead, and third, my good friend James Drage was hauled in to save the piece a few times when I’d worked it into a corner. It was the first thing that I bothered actually recording to tape, not just for my own listening but specifically to pass around to other people.
Unfortunately, when school started up again, my parents forbade me from getting a part-time job, saying that I was to focus on my grades. I was devastated. I’d lose my synths, and all the “to-own” credit I’d built up. In return, my parents promised to buy me a unit if I did well. It wouldn’t turn out to be quite that easy, but either way, it marked the end of Studio Setup 1.0.
I’ve noticed that it’s 1:25am, and I have to be up for work in five hours, so I’m going to cut this short. But if any of you are enjoying listening to my nostalgic ramblings, or perhaps even if not, I’ll continue on this subject tomorrow!