So I got an iRig for Christmas! This allows me to answer all the questions everybody had about using oScope on the iPad with the iRig to plot DotCom waves. And the answer is… it doesn’t work.
I know that I’d said it would. A lot of other people reported that it did as well, and I went by that. But I’m not certain that the fault lies with the iRig.
Why? Well, because my Plantronics DSP v4 adapter + Camera Kit + headphone adapter solution doesn’t seem to work anymore either.
(On the other hand, the iRig + Amplitube combination seems to work. The signal from the DotCom runs pretty hot for it, so there’s a lot of distortion if I don’t turn it way down, and a lot of noise if I do and then amp it back up on the iPad, but the result isn’t all bad. I wish the free version included more pedals.)
After fighting with it in frustration for quite a while, I decided to just go chat on Skype and relax. Then my home-made shock mount (depicted in the site banner) self-destructed, and it proved to be one of those situations where it kept *almost* being fixable and then all the elastics would leap off at once.
It was frustrating to say the least. (I’ll need to go to Michael’s for new craft loops soon.)
Looking back on 2010, the biggest and most obvious thing is that I only posted two tracks this year. So that’s going to be the big thing for the new year — to get back into the swing of actually writing, rather than just tinkering. The dotcom has been a lovely beast, but it’s also proven not to be conducive to getting music made. I need to find a way to tame it and get it in line, or figure out when it’s useful and look elsewhere for everything else.
I’ve wanted to get an inexpensive oscilloscope for ages, as a way to help visualize what was happening in the synthesizers.com modular system. I do have an inexpensive digital storage oscilloscope, but I’ve never been able to get decent plots from the dotcom system with it, although it’s been very useful for other projects. Also, I want something that reacts very fluidly in realtime. This has mostly meant scouring eBay, but even old scopes with bits missing, remaindered from 1970s workshops, can run into the hundreds of dollars.
However, I recently discovered oScope for the iPad. (Obviously if you don’t already own an iPad, this is no longer an inexpensive solution, but I already had one.) It’s an oscilloscope that will plot audio signals coming in from the built-in mic (to be honest, I didn’t even know the iPad had a built-in mic) or from an audio source. The “Lite” version is very workable and free, and there’s a $9.99 version that adds triggering and a simple frequency spectrum analyzer.
But wait, you say — from an audio source? The iPad doesn’t have a line in! Well, it turns out that there is a great and easy way to work around that, and that workaround is something else you might already have if you own an iPad: The iPad camera connection kit. The kit, which runs about $35, comes with two small dongles, one of which has a slot for SD card media and one of which has a USB jack. This allows you to download photos and video directly to your iPad from digital still and video cameras. (In fact, I used the SD reader to transfer and upload the video later in this post, since conversion and uploading via the iPad is so simple and painless.) However, the dongle with the USB jack lets you connect all kinds of other things, from keyboards to audio devices. I discovered this a long time ago when I managed to get Skype working on my iPad over WiFi using a Plantronics USB headset.
That same headset can have the cables to the headset part disconnected and act as a standalone monophonic class-compliant USB audio interface. By plugging a 1/4″ to 1/8″ adapter into it, plugging it into the camera connection kit, and plugging that into the dock connector on the iPad, then downloading oScope, I was ready to rock.
A lot of this won’t be news to anybody in theory, but what really surprised me was how clear and fluid the results were, and how well oScope handles this kind of realtime waveform display. I’ll embed a video here for you to see. It’s a bit rambling and covers the same ground again. Also, apologies for the shakycam — my tripod is currently broken, so I went handheld, which proved to be “interesting” while juggling cables and jacks and so on.
You can’t hear the audio from the modular in the video, as you get a cleaner plot if you don’t split the signal. However, if you jack the audio into a synthesizers.com Q111 Pan-and-Fade and then pan it to the middle, you can listen to the audio while running it to the oscilloscope. Because this greatly reduces the amplitude and forces you to zoom in much further, it does introduce a bit more noise into the plot.
As a fun note, just playing with this for two seconds found a problem in one of my Q106 oscillators, where the Saw and Ramp jacks were reversed. Very hard to detect by ear, but it showed up clear as day on the plot.
I should also note that I sent the developer a note with some questions and comments, and he responded in less than 30 minutes. Two thumbs up for customer service!
Anyway, here’s the video:
ERRATA: I say in it that any class-compliant USB audio device will work, but in fact there seem to be limitations. The iPad only provides a little power over the dock connector, so if your audio device draws a lot, it will have to be powered externally or through a powered hub. Also, I have no idea what it will make of multi-I/O devices. And it only does 44.1kHz/16-bit, so a device that defaults to something higher won’t work unless you can manually switch it down.
ERRATA 2: (I also say “oscillator” instead of “oscilloscope” at one point. Whoops!)
ERRATA 3: When I say that it doesn’t really provide measurement, so you can’t use it for calibration, I should explain what I mean, because after some use I realize it’s unclear. While the grid is drawn at a fixed spacing on the display, that spacing is meaningful, because the unit readouts in the top left corner adjust accordingly. The problem is that the grid lines aren’t labelled with values, and the display seems to center the trace vertically, or do some other sort of compensation that I may simply be misunderstanding. So although you can say things like “the waveform has a 5V spread,” you can’t necessarily say things like “the waveform goes between -2V and +4V” if it’s offset. This impedes certain kinds of measurements that I think are common when performing calibrations.
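To make the distinction concrete, here’s a toy sketch (purely illustrative — I don’t know what the app actually does internally, and the mean-subtraction is my assumption about the centering behavior). If a display auto-centers the trace, the peak-to-peak spread survives but the absolute offset is lost:

```python
# A wave that actually swings between -2V and +4V (so a +1V offset).
samples = [-2.0, 4.0, 1.0, -2.0, 4.0, 1.0]

# The spread is recoverable from any centered display: max minus min.
spread = max(samples) - min(samples)  # 6.0 V

# Hypothetical auto-centering: subtract the mean before plotting.
mean = sum(samples) / len(samples)
centered = [s - mean for s in samples]

# After centering, the trace looks symmetric around zero, so you can
# still read off "6V spread" but no longer "goes from -2V to +4V."
offset_visible = max(centered) + min(centered)  # ~0.0: offset info gone
```

This is exactly why relative measurements work fine but absolute-level calibration doesn’t.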
ERRATA 4: In the video I say that I used the tool to discover a wiring fault in one of my oscillators. After comparing the results against my other oscillators, I’m not so sure if it’s a wiring fault in the oscillators or a display fault in the app. I’ll have to investigate further.
In my Synthesizers.com modular system, I use STG Soundlabs’ time modules for my sequencing. I’m pretty sure that I’m abusing them somewhat, and I can’t quite put my finger on why I like them, although I think it’s mostly the flexibility. I like that I can build up a sequencing tool with exactly the functionality I want, when I want it.
However, many of the big integrated sequencers have an option for “third row timing.” Say you have three rows of eight knobs. Typically you’d use them to send voltages, either as three separate sets of eight (to play a sequence of three-note chords, say, or to control a pitch, an amplitude and a filter cutoff for each of eight notes in a sequence) or as one series of 24 voltages. With third row timing, you take the third row — or the last eight steps — and, in the former mode, have that row control the timing with which the sequencer advances, moving forward with more or less speed depending on where each knob is set.
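In software terms, the idea can be sketched like this (a hypothetical analog of third-row timing, not any particular sequencer’s implementation — the knob scaling and base tempo are made-up numbers):

```python
import time

# Rows 1-2 hold the voltages you actually use; row 3 sets how long the
# sequencer dwells on each step. Higher knob value = faster step, as
# described in the text.
pitch_row  = [0.0, 2.0, 4.0, 5.0, 7.0, 9.0, 11.0, 12.0]
filter_row = [1.0, 1.0, 3.0, 3.0, 5.0, 5.0, 7.0, 7.0]
timing_row = [0.5, 0.5, 1.0, 0.25, 0.5, 0.5, 1.0, 2.0]  # knob positions

BASE_SECONDS = 0.05  # the "base tempo" a master clock knob would set

def dwell_time(knob, base=BASE_SECONDS):
    """Higher knob value -> faster step -> shorter dwell on that step."""
    return base / (1.0 + knob)

for step in range(8):
    pitch, cutoff = pitch_row[step], filter_row[step]
    # ...here you'd send pitch/cutoff to the voice...
    time.sleep(dwell_time(timing_row[step]))
```

The hardware patch below does the same thing, except the “dwell time” is set by voltage-controlling an oscillator’s frequency rather than by a sleep.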
I had a need to do this the other day, and discovered that it’s exceedingly simple to do. In fact, my example patch here has more to it than you really even need:
What’s happening here is very simple. You’re using the manual shift inputs on your two STG Soundlabs Voltage Mini-Stores to advance the sequence, both driven by a pulse wave coming from an oscillator. I’m using a Q106 in the diagram here, so I’d have it set to the LOW range, but you could use any voltage-controlled LFO that outputs a pulse.
The VMS shown on the left is the one that controls the timing, so you set its knobs to the speed you want each step to progress at. The higher the value of the knob, the faster the step, and the shorter the sequencer waits on that step. That might be counter-intuitive for some, so you could use a signal processor (or several) to flip that over if you wanted to. Its output gets patched into the Q106′s exponential frequency input. You could use the linear frequency input instead — that might be more intuitive for tracking a knob, but it also alters the available range. The frequency knob on the oscillator sets the base tempo that you’re working with, in a sense.
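The exponential-versus-linear trade-off is easy to see with rough numbers (this assumes a 1V/octave-style exponential response and a made-up linear Hz-per-volt factor — check the Q106 manual for the actual scalings):

```python
BASE_HZ = 2.0  # whatever the oscillator's frequency knob is set to

def exp_rate(cv_volts, base=BASE_HZ):
    # Exponential input: each extra volt doubles the clock rate, so an
    # 8V knob range spans a factor of 256 -- huge, but not knob-linear.
    return base * 2 ** cv_volts

def lin_rate(cv_volts, base=BASE_HZ, hz_per_volt=1.0):
    # Linear input: rate tracks the knob directly, but the same 0-8V
    # only takes you from base to base + 8 Hz.
    return base + hz_per_volt * cv_volts

exp_span = exp_rate(8.0) / exp_rate(0.0)  # 256.0
lin_span = lin_rate(8.0) / lin_rate(0.0)  # 5.0
```

So the linear input gives a more predictable knob feel, at the cost of a much narrower tempo range over the same voltage swing.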
The truly optional thing I have going on here is that between the VMS and the Q106 I put a quantizer — I’m using the Synthesizers.com Q171 Quantizer Bank in this diagram. The reason is that, by and large, you want to choose from a set of timings rather than having the timing of each step be completely fluid: if you set two knobs to about the same value, you generally want the sequencer to pause for exactly the same time on each step. A quantizer does that, although I suppose you’d need to use the aid module and restrict the available choices to get exactly musically useful fractions. It works well enough for me just putting it through raw, as shown.
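What the quantizer buys you can be sketched in a few lines (illustrative only — this isn’t the Q171’s actual algorithm, and the semitone grid is just the grid a 1V/octave pitch quantizer would use):

```python
GRID = 1.0 / 12  # one semitone on a 1V/octave scale

def quantize(volts, grid=GRID):
    """Snap a voltage to the nearest step on a fixed grid."""
    return round(volts / grid) * grid

# Two timing knobs set to "about the same" value...
a = quantize(3.02 / 12)
b = quantize(2.98 / 12)
# ...land on exactly the same quantized voltage, so both steps get
# exactly the same dwell time.
```

Without the quantizer, those two knobs would produce two slightly different tempos for their steps, which is rarely what you want.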
The pulse output of the Q106 needs to go to the shift inputs of both VMS modules, of course, because you want to move them both from step to step in lockstep. You could use a multiple for this (which is actually what I do) or just a Y-splitter as shown. If you have them hooked up to a shift manager, it wouldn’t hurt to also plug the step 1 trigger of one into the reset of the other to keep them in sync.
The output of the VMS shown on the right goes to control whatever you want to control — the pitch of a sequence, for example. You could send the Q106′s pulse to more than two shift managers to control several parameters at once; after the first VMS, all the others would be “output” ones, used to control parameters. If you use a Q962 to string together 2-3 VMS modules into a 16-24 step sequence, you can do that too, although you’d need as many steps of timing control as steps of parameter control (though, again, you could control multiple parameters with no additional timing modules), and you’d need a Q962 for each set. So for 24 steps of custom-timed sequencing of three parameters, you’d wind up needing one Q171, one Q106, two multiples, twelve Voltage Mini-Stores and four Q962s, which, unless you already have them, strikes me as a lot more costly than just doing it all over MIDI -> CV in some fashion. But perhaps more fun, too.
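The module arithmetic scales in a simple way, which a short sketch makes explicit (my own back-of-the-envelope accounting, assuming 8-step VMS modules and step counts that are multiples of eight; the clock, quantizer and multiples are fixed overhead on top):

```python
STEPS_PER_VMS = 8  # each Voltage Mini-Store holds eight steps

def modules_needed(n_steps, n_params):
    """Count the row modules for an n-step, n_params-parameter sequence."""
    rows = 1 + n_params                    # one timing row + one row per parameter
    vms_per_row = n_steps // STEPS_PER_VMS # VMS modules chained to make each row
    return {
        "VMS": rows * vms_per_row,
        # One Q962 sequential switch per row, but only if rows are chained.
        "Q962": rows if vms_per_row > 1 else 0,
    }

counts = modules_needed(24, 3)  # the 24-step, 3-parameter case from the text
```

For 24 steps and three parameters this gives twelve VMS modules and four Q962s, matching the shopping list above — and it makes clear that the cost grows with both step count and parameter count.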