Darwin: Okay. Today, I have the great pleasure of speaking to someone whose history is mind-boggling. His name's Cameron Warner Jones. He is one of the co-inventors of the Synclavier Digital Music System. I don't even know what to call it anymore because in the end, it was much more than a synthesizer. It was a system. It was an environment that people worked with. But the Synclavier, everyone's heard of it. And I'm really excited to hear more about its history and the process of putting it together. So, with no further ado, let's say hello to Cameron. Hey, man. How's it going?
Cameron Warner Jones: Hey, Darwin. Very nice to be here.
Darwin: Yeah. Thank you so much for taking the time out of your schedule. I appreciate you doing this. Why don't we kick this off for... I think there are going to be relatively few people who don't know of you and don't know, at least, some of the history of Synclavier, but I think it would be wise to just do a quick run through of when you started working on digital systems and how you guys started there. And then, also, what you've done in terms of product releases from then until now, because you currently have products available.
Cameron: Certainly, Darwin, and I'll tell you, it all started the same year that I was drafted. This was 1971, when the Vietnam War was raging. I had been interested in sound and music, but this was my first year of college. And I was in an environment where... Well, there was an electronic music studio on campus.
Darwin: This is at Dartmouth, right?
Cameron: At Dartmouth, absolutely. Dartmouth, over the previous four years... What they did is they took this room full of computer equipment - one of the big mainframes, a Honeywell this-or-that - and they made it so you could hook up computer terminals to it. Oh, my God. You could sit in a classroom down the hall and you could be typing on a Model 33 teletypewriter with paper tape loops and stuff like that. They became interested... The topic at the time was computer-assisted instruction in education, which, of course, nowadays is like online learning, right? You watch the training videos. It's all there. But this was back when the interest was in ways that they could incorporate computer technology to help with teaching in the classroom.
And, obviously, I was a first-year kid in the music department. The professor, Jon Appleton, set up a program specifically to explore computer-assisted instruction in music. Dartmouth itself put up some money. The person who I later started New England Digital with, Sydney Alonso, created a little circuit board that actually let the computer make pulse waves, ramp waves, and square waves. And I was the only programmer on the team. So, for the first project, we programmed a bigger-than-a-bread-box computer with four kilobytes of memory.
And I programmed that to play... This was the Ben Wood Workbook for Ear Training. So, the computer would play the excerpts from the workbook, and the student would do the melodic dictation, or that kind of stuff. And then, he'd get the answers, "Oh no, you didn't get the perfect fifth." That's how it started. And Jon was actually interested in applying that technology to electronic music composition, which is slightly independent. Obviously, that's part of the academic environment.
There I was, my first year at college, and I was surrounded by, first of all, people that were interested in this technology. I was a guitar player at the time, a bass player. My jaw was dropping. My ears were perking up. And I found learning to program extremely easy. I excelled at that. That's what I focused on. That's what my contribution was.
And then, boy, didn't it escalate, and here I am still doing it. It wasn't really a product. We developed that system for use within the music department at Dartmouth College. And then, I did a side tour for a couple years as part of that project while we were using a minicomputer. They were called minicomputers, and they were the size of a smaller refrigerator and made a whole lot of noise. But Sydney and I said, "Do you know what? We want to make a portable product."
So, we actually developed our own computer, which is what you had to do in the day. Just around then you could buy... They were called microprocessors - the Intel 8080... They were little pieces of silicon, and the ones that were available just were not up to the task. We needed a 16-bit computer of some kind. So, we actually designed our own computer, and New England Digital was founded to manufacture those. It was sold to the labs at the college to basically perform data collection with. The computer would measure the output of a strip chart recorder, for example, and feed it into the time-sharing system for numerical analysis. That's actually what founded the company. And that, I think, helped us raise money the first time. We had several installations around Dartmouth of the Synclavier system. And then, in 1977, I graduated from college, I believe. I'm not sure I've been back since. But I was looking for work, and Sydney said, "All right. Well, either I'll see you later, or - we've been working together for six years - do we try and commercialize this?"
Well, I've never been one to shy away from an audition, even if I don't get the part. But we said, "All right. We're going to start this company. We're going to try and sell this product." And we did. It was all hand-to-mouth. We started. We sold the computer. We were going to go viral with the computer, but that's back in the day when all the computer companies were folding - there was Orange Micro; it was big, it was in PC Magazine. And do you know what? The computer companies were willing to lose money, hand over fist, to gain market share. You had to have deep pockets to play in that arena. First of all, my interest was music, and I liked computers, and I did like having a job and having a successful small business. We had maybe two employees at the time, or whatever. We grew bit by bit over the years. It wasn't until quite a bit later that there was significant growth.
I've always been passionate about music and sound, and working so many hours on compilers and technical software to help people do scientific experiments... Well, it did pay the bills, but do you know what? I wanted to hear this thing speak. I wanted to hear it make sounds. So, at that point, we developed an updated version of the FM synthesizer, and we said, "Well, gosh darn it, we're going to make a portable musical instrument." We had developed the computer. I developed the whole operating system there, the XPL computer language. That was in 1976. 1977 is when we brainstormed and came up with the trademark name Synclavier - which we pronounced differently at the time. I have a stuttering problem, so the way I say Synclavier now is much easier for me. So, the pronunciation has morphed in those two directions, but now that machine is referred to as the Synclavier 1.
It had one of our computers. It had the FM synthesizer. And gosh darn it, if we didn't sell 13 of those, basically, to college electronic music studios... There's an academic discipline called electronic music. There was one at the University of Washington, one in Delaware, and one at the University of Massachusetts. We sold 13 of those systems. And then, obviously, I took that to the Audio Engineering Society Convention, and that was in 1978. That's where, for example, I met Suzanne Ciani and - who was it? - Herbie Hancock came and looked at the machine. And that was the Synclavier 1. It was a little bit geeky, if I must admit, but you know what? It made sounds that people hadn't heard before. If you were starting out in your career running an advertising business - this was back when you could make a fortune just writing jingles - because, when you look at the sitcom music of the era, it was all done...
You had a little pit orchestra. You'd hire a trumpet. You'd hire a guitar. You'd have a little sax section. You'd have to write out the parts by hand. I mean, to make a 30-second jingle cost 10 grand. That's just the personnel cost. And obviously, there was a terrific interest in the advertising industry in new sounds. Because, of course, when you make a radio or television ad, you're trying to catch people's attention, and new sounds help you do that. So, some of the younger people, the up-and-coming generation, kind of latched onto the machine as a vehicle for promoting their own careers, for making new sounds that they could apply in their business. The company raised money - I believe it was in 1979. We're talking not exactly Wall Street sums here, but it was a small investment company from Boston that put in, I think, 300 grand.
That's what put the company on the map, and obviously, they wanted to take the computer and go national with it. And I think we did try to do that a little bit, but obviously I was excited about the Synclavier and the sound-generating part of it. And that's when Denny Yeager approached us. He ran a very successful advertising business in California. A very successful musician, a synthesizer player. And he said, "Do you know what? You can revolutionize the music industry if you just come out with this stupid product." And I said, "Well, are you sure?" In a way, I was just the programmer. Yes, I had started the company, but there was always so much technical work to do. I didn't keep track of what was going to succeed or whether the computer sales were up to snuff. So, we spent about six months developing it. We introduced it as the Synclavier 2, and that was in May of 1980, when we went to the AES convention in Los Angeles with it. That was an instant success. That's really what propelled things going forward.
Darwin: The Synclavier 2 was the iconic one. The large format, beautiful keyboard, with a rack of gear.
Cameron: Yeah. At that time, it was a box about the size of a small refrigerator. In 1980, it only had the FM synthesizer. It was a non-velocity keyboard - what's referred to as the original keyboard, now. It was 60 notes - was that five octaves? Just a very light-touch electronic keyboard: no velocity sensitivity, no pressure, no mod... We didn't even think of a mod wheel. This is before MIDI. MIDI was absolutely not on the table. It was not on anyone's radar. So, it was a small box. You hooked up the keyboard to it. It was based on floppy disks. We had the five-and-a-quarter-inch floppies, which is what Fairlight was using in the same era. And you would store the FM timbres on there. You'd boot the system from it. That's where the Synclavier 2 started.
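The FM synthesis Cameron describes powering these early machines - a modulator oscillator wobbling the phase of a carrier oscillator - can be sketched in a few lines of Python (a generic two-operator illustration; the parameter names are illustrative, not the Synclavier's actual voice architecture):

```python
import math

def fm_tone(fc, fm, index, rate=8000, dur=0.5):
    """Two-operator FM: a modulator at fm Hz varies the phase of a
    carrier at fc Hz; `index` controls how rich the sidebands are."""
    n = int(rate * dur)
    return [math.sin(2 * math.pi * fc * t / rate
                     + index * math.sin(2 * math.pi * fm * t / rate))
            for t in range(n)]

# A 440 Hz carrier with a 220 Hz modulator gives a bright, clangorous timbre.
tone = fm_tone(440.0, 220.0, index=2.0)
```

Sweeping `index` over the course of a note is what gives classic FM its evolving, brassy character.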
Now, that was 1980. We introduced it very quickly. Obviously, sampling was around the corner. Fairlight was doing 8-bit sampling on their CMI. They had a very neat graphical interface with a light pen and, oh my God, it was geeky. I'm not sure it was very musical, but it was very functional. It was great for doing rhythm entries. They had this page where, by typing and tapping on the screen, you could put together a drum track. We had a memory recorder that was more designed for recording live performances, but they specialized in the rhythmic element of it. But obviously, we could see sampling coming down the pike. There was this product called the Emulator, and people like Patrick Gleeson had one of those. Now, they were using 12-bit sampling.
So, you'd listen to it very closely and say, "Wait a minute, this sounds like an old 78 RPM record." But around 1982, Sydney and I - and obviously our business guru, Brad Naples, who was particularly visionary, persistent, and hardworking - said, "Well, do you know what? High-fidelity sampling is not too far away." So, we began working on that. And the pictures you see of the system from the 1980s, that's what we developed... I call it a synthesizer. It was called a polyphonic synthesizer.
What's the difference between a recorder and a synthesizer? If you're just playing back a sample, maybe it's like a recorder. But our technology actually colored the sound a little, because we wanted variable frequency - which means we didn't reproduce the sampling rate very accurately.
But you know what? It enhanced the sound. It made it sound live. Because, of course, you were using it to play musical notes up and down the keyboard. That was in the '83, '84 timeframe. Brad particularly pushed us. We developed that absolutely world class black keyboard, which was incredibly expensive at the time. We had a piano technician on staff. I think it was the same keyboard - in fact, I believe Dave Smith made that action as part of one of his products. But we had a piano technician actually tune each one. So, it was the best playing synthesizer you could get if you had good keyboard chops - people like Eddie Jobson or people that really play well.
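The variable-frequency playback Cameron is describing - reading a stored sample out at a rate other than the one it was recorded at, so pitch tracks the keyboard - can be sketched like this (a linear-interpolation resampler; a modern illustration, not NED's hardware method):

```python
import math

def resample_playback(sample, ratio):
    """Play `sample` back at `ratio` times its recorded rate using
    linear interpolation; ratio > 1 raises the pitch, < 1 lowers it."""
    out = []
    pos = 0.0
    while pos < len(sample) - 1:
        i = int(pos)
        frac = pos - i
        # blend the two neighboring sample points
        out.append(sample[i] * (1 - frac) + sample[i + 1] * frac)
        pos += ratio
    return out

# A short sine sample, played back a perfect fifth higher (ratio 1.5)
wave = [math.sin(2 * math.pi * 5 * n / 100) for n in range(100)]
shifted = resample_playback(wave, 1.5)
```

The interpolation is approximate, which is exactly the kind of subtle coloration Cameron says made the playback sound live rather than sterile.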
Darwin: There are still a lot of people that use that just because of that keybed...
Cameron: Well, yeah. And that's why they keep it. I get requests, "Can you hook it up to a MIDI keyboard?" "Well, no. You can only hook it up to a whole computer, and you buy the whole rig." And then, of course, the hard disks came out. They were SCSI disk drives, and you could buy a 5-Megabyte hard drive. And Sydney said, "Gee, Cameron, we want to do sampling. Here's the hard drive." And oh my God, my life was a nightmare for about two years because, with those disks, you could just barely do 50-kilohertz sampling. The disks weren't that fast. You could barely get data on and off the hard drive at the rate needed to support real-time audio playback. But anyway, so we branched out into the hard disk recording product. That's when we developed the tapeless studio concept, kind of from a different direction. The big market of the era, if you look at the whole history of media and whatever...
This is when cable TV was skyrocketing. So, instead of having just three networks, all of a sudden everyone had cable and there were 70 channels, and there was a massive demand for content. There weren't just three primetime TV shows and the 18 ads that go along with them. Well, you had ads and music videos. So, the demand for music tracks, for video production, just skyrocketed. And obviously, since we had the sampling, it was very natural to use our machine for sound effect placement - both hard sound effects and Foley effects. So, whether it's a gunshot in a dramatic scene, or it's a scream. But, oh my goodness, you take some of the synthesis techniques that we developed for FM synthesis and sampling, and you apply those techniques to quickly, in real time, combine sound effects.
Well, you're actually designing sound effects to go in the movie or the commercial, or whatever. So, the technology that we had developed for synthesis just became in demand. All the video post-production houses really needed our machine to do competitive work during the '80s, up until 1990, '91, '92. Well, it was the first digital audio workstation. We didn't coin that term. We didn't realize what it was. It started out as... When we registered the trademark, Synclavier, you have to use a noun. "Okay, it is this, and the brand of it is Synclavier." We first called it a performance instrument. It was the Synclavier Branded Performance Instrument. And then, it became a digital musical instrument. And then, it became the digital music system. And then, it became the digital performance system. And somebody somewhere coined the term digital audio workstation, which then I think we did kind of adopt. But that's the history of NED.
And obviously, by '93, when the personal computer came in... Well, this is so hard to imagine, but it used to be, you'd go in there and there is... First of all, you needed a whole room. There's the grand piano and the 24-track tape recorder - two or three of them synced together. That is the kind of equipment that it took to make all your recordings. And, of course, that dated from pre-CD days. That's the technology that was used to create it. And the New England Digital product of the era fit into that business model, where the studios controlled the production. They had big equipment budgets. They could get financing. But obviously, once the personal computer was in, it became more of a democratic process, where it became much more practical for producers and people actually making radio commercial soundtracks, and so forth, to work at home and work in a smaller environment.
And the NED products just didn't fit into that business model. And NED was never... Well, how do you make the transition? 3M and Studer used to make these great big, huge tape recorders. Those companies managed to segue to other products. But companies like Fairlight and New England Digital couldn't make the leap to... You sell software and sell plugins? That was too much of a leap. That era came to an end. And then, I was kind of out of that field for a little bit. I did some work for Mackie Designs. I did nothing for a while, which was a very good thing to do. I mean, I started working on this shit, working around the clock, at age 19.
Darwin: Yeah. When you were still in school, right?
Cameron: Well, absolutely. I remember, I was turning 30 and I had this big light bulb moment. "Oh, my God. Is that middle age?" This is the truth. I got out the dictionary: middle age. "Oh, no. Middle age is 45." So, I felt a sense of relief. Anyways, I kind of took a leave. I went because I wanted to have some musical credentials, or whatever, before my life passed me by. So, I went to music school for two years. I studied double bass and I had an orchestra job, which was really important to me. So, I did that, and I worked for Mackie for a while. But then, there was a successor company to NED, and they didn't do very well. And then, somebody bought it up over here, then the bank foreclosed. I think it was in 1998, I actually approached the bank that foreclosed: "All right. Here, I'll just take it off your hands." So, a person who had worked at NED, Brian George, bought up the hardware pieces. I bought up the software pieces.
So, in 2000, that's when I did the first product that used a modern Mac computer to perform the computational functions. You could hook up the original voice cards and the original tower to a modern Mac. So, you'd use a Mac and your network to store your sound files. That was called Synclavier PowerPC. And that was in the era of the PowerPC computers - the Apple G3 and all that - which was right around 2000. So, Brian and I did that product. We had a hundred installations of that. That carried us through to about 2004. And then, between 2004 and 2014, there really wasn't any development. There was no technical breakthrough that let me do anything else with that existing technology. I was busy pursuing my career as an actor, in musical theater.
I'm talking: I sang with this one professional musical theater company in the province - "Okay. I finally got a gig with them." I'm not talking Los Angeles or New York or Carnegie Hall, but do you know what? It meant so much to me. There was no real technological development that let me do anything else. But by about 2014, 2015, about the time all the computers got 64-bit operating systems, their ability to crunch numbers was on a whole new level. At that point, the computer could actually recreate the audio. Not just control the digital voice cards, but actually model the digital voice cards and create the audio, sounding like the original machine. I had sent out feelers to a couple of the big plugin companies. I had a couple of talks with people. But right away, in 2014, Arturia - they were really getting successful with their V Collection product - approached me and said, "All right. Well, it's time to do this."
That's when I really went back to work on the product full-time. I created the DSP engine that I have, which really models the original hardware and the sound of the original system with all its defects - the 8-bit FM grunge. Its neat sound and the intonation errors are all modeled in there. And do you know what? It's got a sound that you don't hear in a sterile recording-and-sampling environment. The sounds have a character to them. And if you don't want that character, well, you can buy a hard disk recorder and it'll just play back. But does this sound like a Synclavier? Well, it does. That product was successful and continues to be successful. And then, you couldn't buy the parts for the Synclavier PowerPC anymore. The QuickLogic - a little goofy thing - wasn't made anymore. But I kept getting calls, "Can you make more? Can you make more? Can you make more?"
So in 2016, I partnered with another person who worked at New England Digital, Mitch Marcoulier, and we developed kind of our third generation of... It's a digital product. It's a little interface box - a tiny thing, about the size of a cigarette box. In fact, I did that hardware design. I bought a CAD program and I laid out the circuit board, and gosh darn it, if it didn't work. Well, like I said, we're up to unit 65 now. People around the world have gathered pieces of the Synclavier systems that were built over time.
That particular product hooks them up to a Thunderbolt chassis on a current Mac. So, for managing your sound files and your timbres, it all hooks up through there. You're using a modern computer and a big screen. It was very useful in that regard, but it still uses the old... It controls the old hardware. And we're building more of those, while eyeing maybe another new product that we might come out with. I don't know if it will make as big a splash as the original. That is the technology, and those are the events, that have occurred in my lifetime.
Darwin: That's an amazing career. And it's amazing to see you go through things that, now, kind of in retrospect, almost seem like there's a logical path, right? But I'm sure it seemed like anything but logical at the time. It had to seem like every single step was a complete dice shake.
Cameron: Well, it was particularly good that Brad Naples was a very visionary person, and he saw the market needs for the product. I was clueless. I knew how to get that technology to work. We got the sound on and off that 5-Megabyte hard drive, and that was its own challenge. At the time, I just never paid attention to the historical importance of it, or the big market... I'm glad I was working on a team that had someone with that kind of vision. Because they were able to create a successful business, and I was doing the technical part of it. But here we are. That's the story.
Darwin: It's amazing. Now, I have 900 questions related to that stuff. But before we do that, one of the things I want to do is talk a little bit about your personal background - before Dartmouth, really, and before you got involved in this. Because to do what you did... You mentioned that you were a musician, a guitar player, but obviously you had some real technical chops, too. I'm curious, what were the main musical influences that made you gravitate towards music, and what were the technological things that drove you into being a programmer/developer? I mean, you kind of talk about how they merged - they merged maybe by happenstance - but I'm wondering to what extent you were influenced or drawn into it, as well?
Cameron: Sure, Darwin. It's a very specific story and it's all very real to me. I remember things from 1960 much better than I remember things from yesterday. It started out when I was 10 years old - maybe eight years old - in 1960. I was going to summer camp, and the older counselors would play their guitars. My ears would cringe because the guys didn't know how to tune their guitars. I've always been extremely sensitive to intonation. In fact, when I got my first guitar - you play a perfect fifth, you bend the string a little bit, you hear the pitches of the strings go in and out of phase. My eyes and ears were like, "My God, I love that sound." So, my career started when I asked the counselors if they would let me tune their guitars, which they did.
And I got very good at that. Even with the B string - they'd play a C chord and people wouldn't cringe. I was interested in sound in that way, too. But also, going on, the huge situation from 1960 to 1965 was the US Space Program, and there was so much interest in technology. So, also at summer camp - this was at a later edition of summer camp - I bought a book. It was called something like Basic Electricity, and it talked about vacuum tubes. A neighbor down the street had a shortwave radio, and I heard "bee-ooh, bee-ohh". My ears - I went ballistic. I said, "Wait a minute." The Morse code, "dit-dit-dit-dah-dah-dah", whatever it is. I found all that to be extremely musical. I mean, the sounds, the pitches. So, I became interested in shortwave radio.
I built a Heathkit for $25. It was a shortwave radio. And, of course, you could listen to Radio Moscow. There's the Cold War going on, and Sputnik, and you hear Radio Moscow. I mean, it's very interesting for a youngster coming of age in that period of time. There was a lot of interest in technology. That's why I learned about it. Actually, when I was in public high school in 1968... Well, there was something called a programmable calculator. It was this little box that Wang made.
Again, obviously, it was a computer. It was a calculator. It was a little box about the size of this little thing I'm looking at here. But it had a numeric display and a little box that went on the floor and it was programmed...
Well, they called them punch cards. Do you remember the IBM punch cards and the data cards? Well, you'd buy the data cards, which were 80 columns and 12 rows, or something like that. You'd buy the cards and they were perforated. So, you could punch out the little holes in them. Anyway, so I wrote my first piece of software - this was for a high school chemistry class - by poking holes in the punch card. Oh, I made a mistake? I used a little piece of electrical tape to cover up the hole and punched out the right hole. It's like now, you hit the delete key, right?
So, I took the punch card and it goes in the card reader. In chemistry, I think it was the Nernst Equation. It's not E = I*Pi or E = mc². It was something to do with chemistry - how many moles of this, and kilograms, whatever. So, I programmed it and I felt very powerful. I felt like I'd had a hit record. I can write software, and people will notice this, and it will do things. I mean, at that time, this was the folk music era. I started out on guitar, but there were a lot of guitars. So, I migrated to bass. Also, it's only four strings, so you don't have to worry about the stupid B string being tuned to a third instead of a fourth. And then, when I got to college, it was a bluegrass band and we were in great demand. I love to perform. I love sound.
I love music. Actually, my interest - and, of course, what I've done later, what I'm personally motivated by - is the storytelling aspect of music. That's why I like musical theater. I think it's a great form... It's an oral tradition. Every musical theater piece is a great education. It's a way for young people to vicariously experience - like West Side Story - what happens when things go wrong. I was always interested in music. My ears just perk up whenever I hear something like that, and the technology. So, it all came together. I was in the right place at the right time. I mean, I was there and things were happening, and I said, "Do you know what? I like this. I feel at home in this." That's how I got into the field. It was a very emotional, very passionate involvement on my part.
Darwin: Now, I know that Jon Appleton was really one of the guiding lights there at Dartmouth. To what extent did he draw you in? Or did you go and put him up against the wall, and say, "I have to work on this?" Or how did that interaction occur?
Cameron: I'll tell you exactly. As I mentioned earlier, there was a program - a movement within Dartmouth - to explore the use of the computer in education. There was a professor at the engineering school, Fred Hooven. Jon, he graduated from Columbia - it must have been about '66 or '68. I'm embarrassed, I don't remember. This is when colleges were establishing electronic music studios. They have a classics department. They have a voice department. Around the country, colleges were setting up electronic music departments. So, Dartmouth was doing that. Jon Appleton was the professor in charge of the Bregman Electronic Studio there. I never met Mr. Bregman, but I believe he was one of the big donors that helped get that going. The year before I started there, Jon Appleton from the music department and Fred Hooven from the engineering school said, "All right. Let's combine this computer technology and music, and make a little sound box."
So, they had the program going. Jon Appleton and Fred Hooven convinced Dartmouth: "All right. We're going to assign some internal resources to try and make a little thing that the students can use." Basically, they posted a job opening. They needed someone to do the programming. During my first year at college, I was taking a lot of music courses. I had a counterpoint course that actually was taught by Jon - unrelated to his electronic music, there was a counterpoint class, just traditional music theory. And obviously, I heard about that, and basically they needed a student programmer. This is when students got their summer jobs working in the computer lab. Well, I beefed up my resume: hey, I had written things on that stupid Model 33 teletypewriter. It's not that I could get it to sing and dance, but do you know what?
You make the head go back and forth. It draws graphics on the paper. The bell... Oh God, this is funny. It's got this bell, and you could make it do rhythms, right? So, I programmed the Model 33 teletypewriter as if it was a musical instrument, and people noticed that. In the summer of 1972, that's when I was hired as a student programmer. It was really Jon Appleton and Fred Hooven who had the initiative to push, within Dartmouth, the use of electronics and computers in the field of music. They tapped into Sydney at the engineering school and assigned Sydney the task of making a little device so the computer could make the tones, and they hired me as the programmer.
And gosh, darn it, if we didn't get it to work. I didn't realize it was revolutionary or whatever. They were crude sounds, but do you know what? It was the first time that a person could compose music by typing. Nowadays, of course, you use Sibelius or use a composing tool. You enter the notes on a page using software, and you create a MIDI file and you hear it. But this was the first time you could go to a computer, you could use a language. We developed two or three computer languages, at the time. You could actually create a soundtrack by sitting at a computer editing the text file, hitting the play button. And you would render that description of the music into audio.
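The workflow Cameron describes - typing a textual description of the music and rendering it into audio - can be sketched in modern terms. The tiny note language below is invented for illustration; it is not the syntax of the Dartmouth languages:

```python
import math

# Hypothetical note names mapped to MIDI numbers (middle C = 60)
NOTES = {'C': 60, 'D': 62, 'E': 64, 'F': 65, 'G': 67, 'A': 69, 'B': 71}

def render(score, rate=8000, dur=0.25):
    """Render a space-separated note string (e.g. 'C E G') to sine samples."""
    out = []
    for name in score.split():
        freq = 440.0 * 2 ** ((NOTES[name] - 69) / 12)  # equal temperament
        out.extend(math.sin(2 * math.pi * freq * i / rate)
                   for i in range(int(rate * dur)))
    return out

# Edit the text, hit play: the description becomes audio
samples = render("C E G C")
```

A modern notation program like Sibelius does essentially this at much greater sophistication: a symbolic description in, rendered audio out.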
Darwin: I'm a little curious. In that timeframe, were you aware of, or did you interact at all with, the things that Max Mathews was doing, or some of the things they were doing out of Columbia? I mean, now, you'd be on some sort of forum on the internet, exchanging ideas. A clearly different time and different technology. What was the way that you interacted with all the other people that were inventing musical computers - or especially, I'm thinking of those inventing musical computer programming languages? That was a real rich environment, but you had to be talking to each other.
Cameron: In '71, in fact, I did several trips to Bell Labs and met Max Mathews. I would say that was all Jon's initiative. Obviously, Jon did his graduate work at Columbia. And this is where the people of the era, like John Chowning - they were a clique, they were a club, and I don't mean that in a negative way. They were the leading group of intellectuals that were just creating this technology and defining this academic endeavor called electronic music. There was a lot of networking amongst those individuals. There was a trade show. Boy, this goes back. There was a British company - I think it was called EMS - and this was right around 1971. It was a big tabletop thing and you plugged pins into a board to do the patch, like a Moog synthesizer, the original Moog.
Darwin: The EMS Synthi is what you're talking about.
Cameron: Yeah, there we go. With the Moog stuff, it all mounts on a rack and it was quarter-inch phone jacks, and you plug it together that way. There was a trade show, if you'd call it that. There was a convention also that summer of '71 at Dartmouth. And I didn't participate in it. I just saw it. But EMS, there were a bunch of synthesizers being exhibited there. There was a lot of networking within the academic community about where that technology was and how it was going. There were also some publications. Do you remember the Computer Music Journal?
Darwin: I do.
Cameron: People were writing about this... The term digital audio didn't come up. It was all computer. The term was computer music. "Oh, my God. We're going to do computer music." There was a lot of networking there. There were those publications. There were conventions starting to spring up. I went to two in Chicago. Maybe they were affiliated with the Computer Music Journal. There was networking there, but obviously there was no internet. You had to go to the library. Mostly, you visited with other people working in the field. And if it's academia, they aren't quite so protective of their trade secrets. I learned how to program. I had to develop my own computer language several times, actually.
One of the things that propelled NED very successfully... Back in that day, when Apple was doing... Remember the Apple II? They were all programmed in Assembly Language. It was extremely difficult. But there was a movement at Dartmouth to develop higher level languages. There's a language called PL/I. There's a derivative called XPL. It's very much a modern computer language. Obviously, C and C++ are the standard workhorse computer languages. Obviously, there are more languages at a higher level. But there was a movement to get those modern computer languages up and running for the programs at Dartmouth. So, I wrote an XPL compiler that let us use XPL to write software for our own computer. And that allowed us to be vastly more productive than anyone else. That's one of the reasons we were able to succeed.
Darwin: Yeah, that makes a lot of sense. But at the same time, since you're building on computers, you had to... Not only were you having to come up with your own languages and your own compilers, but you had to come up with your own equivalence of operating systems. Probably prior to the sampling, I guess you could have... It was basically just a runtime management system. But by the time you had to get to disk management and stuff, you had to have some sort of operating system, right?
Cameron: Yeah. I wrote the bastard from scratch. That's why I stutter.
Darwin: Yeah, okay.
Cameron: That's why I'm hard of hearing in one ear. No, I did. Absolutely. Actually, one of the Dartmouth systems was based on a Data General minicomputer. It was called the Nova, and it was a 16-bit minicomputer. You could buy 16 kilobytes of memory for it. These were core memories. Iron core memories on a circuit board, 15 inches square. They had a primitive DOS. It was called DOS, which stands for Disk Operating System. Obviously, there was MS-DOS coming along in that era. I used to say, "Do you know what? I can't stand it. This is just so primitive. I wanted something that was fun to use."
So, on the Data General computer, I started with their operating system. But again, I wrote the XPL language to create the object code for the Data General computer. There were enough sources. You could buy the tape drivers and the disk drivers from them or there was enough open source material, but I had to do the scheduler. I had to do the interrupt handlers. I had to do the text editor and it kept me busy. When I wasn't playing in the Bitterroot Mountain Boys, I was programming. That's for sure.
Darwin: It's hard to imagine. It's hard to imagine now with the plethora of tools that we have, to have to grind it out at that level. Now, in terms of the thing that the core Synclavier was written in, was that always based off of this XPL language?
Cameron: Yeah, the whole thing. Absolutely, lock, stock, and barrel. I had developed that. One of these kinds of side shoots that we did at Dartmouth, this was in the '74 timeframe... Sydney and I had developed this little two-card processor. I had developed software that let the local computer take the measurements from the scientific experiment. Like you're doing any kind of experiment where, let's say, you're trying to measure the rate of a chemical reaction or something silly like that. You mix the oil and then the vinegar, or whatever. How long does it take to solidify? Well, you end up with data from a strip chart recorder. You want to get that into the computer system so you could do numerical analysis on it. So, our computer had a little A-to-D converter on it.
You could measure the signal. I wrote the software that would collect that data and transfer it into the time-sharing system. We had 50 installations at Dartmouth by that time, or more, and this is when everything was becoming computerized. So, part of that project, for a whole summer, I spent developing the XPL compiler, of which there was a version running on the Dartmouth time-sharing computer. I could actually program in the Dartmouth version of XPL and have it compile its own language and create the object code for the Data General. And then, of course, for our own machine. So, XPL was the only software language we used. Towards the very end, this is when the Mac IIfx came out. There was the Apple Lisa computer. Pascal was becoming a language. And actually, Pascal and XPL are very similar in terms of capability. And Pascal, I think, wasn't that UCSD?
Cameron: So, Pascal was a language and I think that's one of the things that really helped Apple succeed. The Microsoft people were still stuck in Assembly Language. Apple was doing its work in Pascal and Object Pascal. And then, the C programming language kind of transitioned in there. We started using a Mac as a front-end for Synclavier. During '85, '86, you would use a Mac as a graphical front-end. We had a music printing package that would run on the Mac. And, of course, the sounds would all come out of the machine. There was that computer language there. But within the Synclavier, everything even today, is lock, stock, and barrel. It's all in the XPL computer language, which translates to C extremely well. In other words, it's an if-then-else computer language.
Darwin: Right. But even the Arturia implementation or your iOS implementation, do you have an XPL back... The core of it is still written in XPL?
Cameron: I wrote an XPL to C translator.
Darwin: Oh, fabulous. Amazing.
Cameron: Okay. Again, in my spare time, right when ...
Darwin: In between acting, right? There you go.
Cameron: No. So, I'd done all my arpeggios. I'd finished my practicing. I'd done my vocal warmups. "All right. I'm going to write an XPL to C translator." Well, it was either that or abandon my life's work. I mean, it's not that I'm stuck in the mud, but do you know what? That was my life for 20 years. We're talking from age 20 to age 40. I mean, that's a significant portion of a person's life. I didn't want to let go of that. I said, "I know this is obsolete technology, but the way it works..." Anyway, so I translated some of that software into C. Mostly, for the Arturia product. Of course, it's all written in C++, internally.
They have a huge graphical user interface, which hosts the whole product. My contribution is the DSP engine of it. So, I developed a representation of the Synclavier hardware model to a little bit of an alarming level of detail, with all the shortcomings that we had in the hardware. It recreated the original sounds, which I had off a floppy. I was able to convert the timbres, so they would work in the Arturia product. It was all very authentic.
Darwin: So, going all the way back to the Synclavier 1 and 2. I mean, most of the voice... What was the division of labor between what the voice cards did and what the software did?
Cameron: Well, I'll tell you exactly. Back in that era, the computers were not capable of processing the audio, in any way, shape, or form. I mean, the computer we had had roughly a one microsecond cycle time. It could take a 16-bit... no floating point hardware, floating point took ages. It was a real challenge for the computer to get one channel of digital audio on and off the hard drive. There was no number crunching, real-time number crunching. So, you needed dedicated hardware to actually create the samples and feed them into the digital-to-analog converter. So, you could hook up a speaker and hear the puppy. They were called voice cards. That's a generic term. It's a piece of computer hardware that connects to the computer. We had a little parallel, 16-bit data bus that you could connect devices to.
So, the voice card connects to the computer that way. The computer's like the manager. The computer says, "All right. We want this kind of pulse wave." There was a little ratiometric oscillator. So, it didn't have a lot of pitch resolution. It would make the Western scale. You could do harmonics. You could do intervals. So, the software would, in effect, turn on the voice card and tell a voice card to go. And the voice card would have an accumulator and counter. So, it would start taking samples out of memory and feed them out to the D-to-A. You needed that hardware technology to actually produce the audio. Nowadays, your iPhone, your most bottom of the barrel computer module you get, in nanoseconds, it takes data in memory. And, of course, there's USB interface to get it to the outside world. But the computer, the computer is creating and crunching the audio. That's exactly what the hardware of the era could not do.
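The accumulator-and-counter scheme Cameron describes is essentially what is now called a phase-accumulator wavetable oscillator. Here is a minimal sketch of that idea in Python; the function names, table size, and structure are illustrative, not NED's actual hardware design:

```python
import math

def make_sine_table(size=256):
    # One cycle of a sine wave, standing in for the waveform stored
    # in the voice card's sample memory.
    return [math.sin(2 * math.pi * i / size) for i in range(size)]

def render(table, increment, n_samples):
    # Phase accumulator: advance by `increment` each sample and read the
    # table. A larger increment steps through the table faster, which is
    # how one stored waveform can be played back at different pitches.
    out, phase = [], 0.0
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])
        phase += increment
    return out
```

With an increment of 1.0 the table plays at its base pitch; an increment of 2.0 reads through it twice as fast, raising the pitch an octave, which mirrors the ratiometric behavior Cameron mentions.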
Darwin: Got it. Well, I think too, one of the things that maybe is a little misleading is we talk about it as a voice card, which makes you think, "Oh, it's an implementation of an oscillator." But no, especially since these were FM voices, it had to have multiple oscillators. It had to implement all of the envelopes because again, the envelopes weren't going to come from the software. It had to implement all the amplitude modulation necessary for gain control. So, all of this kind of stuff that... You talk about having this data bus. It seems like with computers at the time, even data bus clogging could have been kind of a problem.
Cameron: That's one of the things that helped NED succeed. In fact, we got a patent on this partial timbre method, and that is exactly what the voice cards were. There was a digital oscillator to create the pitch. You needed two, one for the modulator, one for the carrier, if you're going to do FM. But then yes, you need an envelope generator to give it an attack and a decay, and any kind of musically useful kind of sound. So, the oscillator, the voice card, had those different components to it. We were issued two patents early on. And the envelope generator, it had three sections to it. One was a volume control. You use an 8-bit D-to-A in the FM case to create a reference voltage that fed into the second D-to-A to control the volume output.
And the second D-to-A, again, it was an 8-bit envelope. Boy, you could hear the clicks and pops. There was an envelope generator that would ramp the envelope up and down. For a sharp attack, you need a one millisecond ramp time. And then, it created a reference voltage output that was fed into the wave DAC. So, you have separate hardware. There are really four components. There's the oscillator, which tells you when to move the audio data around. You have the memory that has the samples in it. You have a wave DAC that takes the sample, presents it to the speaker. And then, you have the envelope generator feeding the reference voltage on the wave DAC. The result is a musical instrument. I mean, this is where the voice card just gets the audio data out. But you're absolutely right, you have to gate it intelligently. You have to modulate it in a way that's musically useful, that makes interesting sounds that, for example, you can sell to make radio commercials out of.
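The envelope-as-reference-voltage arrangement amounts to one multiply per sample. This toy model (my own illustration, not the actual circuit) also quantizes the ramp to 8 bits, which is roughly why coarse envelope steps on the early hardware were audible as clicks:

```python
def linear_ramp(n, start=0.0, end=1.0):
    # Envelope generator segment: ramp from start to end over n samples.
    step = (end - start) / max(n - 1, 1)
    return [start + i * step for i in range(n)]

def quantize8(x):
    # The envelope D-to-A had 8-bit resolution: only 256 discrete levels.
    return round(x * 255) / 255

def apply_envelope(samples, envelope):
    # The (quantized) envelope value acts as the reference voltage
    # scaling each audio sample on its way to the wave DAC.
    return [s * quantize8(e) for s, e in zip(samples, envelope)]
```

A fast attack is then just a short ramp, e.g. `linear_ramp(48)` at 48 kHz for the one-millisecond attack Cameron mentions.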
Darwin: Right. Now, the Synclavier was way pre-MIDI, and this whole concept of a tapeless studio was pretty novel prior to MIDI. But MIDI comes traipsing along... Did NED ever have any kind of a MIDI interface for the Synclavier system?
Cameron: Yeah, it did. When MIDI came out, I remember one of our customers... It was called the Linn Drum Machine. Remember, Roger Linn?
Darwin: Sure, yeah.
Cameron: I think he might have done 50 kilohertz sampling. His unit sounded pretty good. Someone brought up one of his drum machines and connected it over MIDI and it really caught the attention of the brass at NED. In fact, maybe it occurred to them, "Wait a minute. This is where the industry is going. You don't have to buy a big, expensive piece of equipment from just one manufacturer. You can get different pieces from different manufacturers and hook them together." Yes, right around then, that would've been. What year was that? That must have been '83, '84? Technically, MIDI was extremely easy to implement. It's basically just an 8-bit serial port.
Cameron: So, we did hook up a MIDI interface. It was always kind of problematical. I mean, the modern MIDI sequencer, where you set your tempo, you talk about your quarter notes. It divides the beats per measure. Our sequencer was a real-time sequencer, but we did take MIDI in and out to generate notes. And then, there were mod wheels and foot pedals. Yes, we also had a guitar interface.
Darwin: Oh, that's right.
Cameron: That wasn't over MIDI. That was a Roland guitar. It had a pickup where every string had its own pickup. So, the strings were fed into a pitch detection mechanism. I remember Pat Metheny and others, it was just... See, I was a guitar player.
Darwin: So that resonated with you.
Cameron: Well, I could get it to work because, oh my God, the transients on a guitar. When you play a MIDI keyboard, you press the key, "Okay. What's the velocity?" But there it is. With a guitar it's a much more elusive, much more vibrant input. You have to guess a little bit but you want the note to come out right away. Maybe not right away but it'll be right within a couple milliseconds. And do you know what? It sounds like a guitar twang. That was actually pre-MIDI. Our guitar interface was pre-MIDI.
Darwin: Yeah. Well, I'm curious now because I remember when Pat Metheny first started using that. I was kind of blown away that he would use it, first of all. But I'm curious, it seems like for someone at that level, he probably came and knocked on your door, and said, "Hey, let's get this right." I mean, did you have a lot of interaction from a developer standpoint? Did you have a lot of interaction with artists who were motivated to have NED get it right?
Cameron: Yeah. I think, that was a very important part of our synergy. I remember working with people... Well Frank Zappa was extremely interested. He had a big Synclavier. In fact, he had the biggest system towards the end of his life. But he had a studio tech who came to NED for several months. And I would work with him, "Okay. What kind of computer language? What kind of changes can we make in how the system works so it would be more useful in his situation?" Obviously, we were on the bleeding edge. People were always saying, "Well, we want this. We want this. Can it do this? We need more storage. It has to be faster. We need more voices. We need more music printing." There were a lot of requests we were getting from the customer for capabilities in that way.
But obviously, behind every great artist or... Behind is the wrong word. Every great artist, when that artist has a successful work, I mean, there's a creative team producing the puppy. Barbra Streisand did the album, but it was Albhy Galuten that did the synth track, and did the design, the sounds. And he said, "Wait a minute, this FM, we need this and we need that." And people like Michael Jackson, for example. Well, Quincy Jones was doing a lot of work, getting the sounds, getting the performers. There was a lot of collaboration. And on the other front, from a pedagogical point of view, Oscar Peterson was a big customer. His use of the system was very pedagogical. He would play a track and then improvise right on top of his own, and he was using that to teach jazz improvisation.
So, he needed extra things in the sequencer to have it be easier and faster to use. There was a lot of collaboration over the years. This went on for a very long time. Nowadays, it's like, "Oh, the internet comes and is gone. Or there's Facebook. Oh, Facebook's gone now." Things change very quickly, but this went on real... It grew, but the fundamental technology was, shall we say, 22, 23 years from 1971 to 1993. And that core technology did not change during that period of time. It grew, but it was all based on the 16-bit sampling out of the poly-memory, and the same operating system, as primitive as it was. There was a lot of collaboration on evolving that body of work over that 22-year period. And to a certain extent, maybe that's why it's still here.
Darwin: Yeah. Well, I want to talk about what we still have because in a way, it kind of - and excuse me for saying this - I'm having trouble wrapping my head around the implementation as it is now. Well, let's talk about the Arturia V version, right? First of all, in the Synclavier system, the hardware did so much of the work. Now, you had to translate all that hardware into a software implementation. How could you get the artistic taste of that hardware translated into software in any kind of direct way?
Cameron: Well, first of all, I was completely familiar with the hardware design. At college, I was a software specialist. But in fact, when I went to college, that was really before computer science was an academic discipline. It was just starting to become an academic discipline, although my graduate thesis was the XPL compiler. That was at the engineering school. I took courses in computer hardware, and the struggle that Sydney and I had... I mean, he and I just put in a lot of long hours. It was, "Okay. There's the hardware. You could buy this data book, right? There's the chip, there're the specs for the needed converter."
"There's the chips for the S184 accumulator register memory chip. Okay. How do you turn this into a musical instrument?" Sydney said, "Well, we can put the chips together this way and this way." And I said, "Wait a minute, that's going to be screwed. It's not going to do this. Can you do this? Can you do that?" So, he and I spent years and multiple iterations, just trying to massage what was available in the computer hardware world, what we could do in software, and what would be a good musical product for the end-user. In terms of translating it into a modern computer language like C, C can model that kind of computer hardware extremely easily. The challenge in the Arturia product is that everything modern is based on a fixed sampling rate.
Darwin: That was going to be my next question. You mentioned that you had a variable sample rate, and I'm like, "How does that translate into something that has a fixed sample rate output?"
Cameron: The secret, Darwin, and this is where... There are some people who are PhDs at this and they talk about Laplace Transforms and the Z-Domain, and I understand that. If I can hear it, I understand about frequency response, and I'm pretty good with numbers. But the oscillators in the Synclavier, and I'm referring to the FM oscillator and the polysynth oscillator... Basically, the polysynth oscillator was a 12-bit version of the FM oscillator, which was an 8-bit version. When you use that oscillator to create a periodic signal or to do sampling, well, there are minute errors in the sampling, right? That era of technology, it wasn't fixed rate. To make a different pitch, you record in a musical note, a C. Oh, you want to play C-D-E on the scale? Well, you play this sample back faster and it goes up in pitch.
Nowadays, a modern sampler, it's a fixed rate system. So, they use a software sample rate conversion algorithm to actually allow that one sample to play back faster at a higher pitch. You do it by decimating the samples, and there are algorithms to do that. And people spend multiple coffee cups, debating, "Okay. This sample rate algorithm is better than that. Oh, I like the sound of this one. This one is better at transients." We didn't have that luxury. We had the computer.
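The fixed-rate sample-rate conversion Cameron contrasts with the varispeed hardware can be sketched with the simplest possible interpolator. This is generic linear interpolation, only a stand-in for the (much debated, much higher-quality) algorithms real samplers use:

```python
def resample(samples, ratio):
    # Read through the source at `ratio` source-samples per output sample:
    # ratio > 1 reads the material faster, i.e. raises the pitch, while the
    # output sample rate itself stays fixed.
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between the two neighboring source samples.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

For instance, playing a recorded C back with `ratio = 2 ** (2 / 12)` shifts it up a whole tone, the same trick the variable-clock hardware did without any interpolation at all.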
The oscillator was actually... Every voice card was operating at its own sampling rate. So, one of the things I developed as part of the Arturia product is a mathematical model that figures out what the spectral impacts of those variable sampling rates are.
If you analyze it, if you run a variable rate sampling system through a spectrum analyzer, or through an audio analysis tool, you can see what it does. It blurs the spectrum a little bit. I'm coining a new name for it because we might use this in a product, someday. I call it Time Domain Dithering. And what it is, dithering... People are familiar with dithering. When you have a 16-bit converter, the quantization error starts to become audible. If you add a little bit of random noise in it, down at the one bit level, you dither the samples plus and minus one, and okay. Instead of sounding distorted, there's a little bit of a white noise floor, but it doesn't bother you. Synclavier, the hardware design that we had, did that at the hardware level in the time domain. The samples would be a little bit early and late. It's very similar to dithering the audio level, the way a D-to-A does... I can see you're glazing over.
Darwin: No. I'm imagining an implementation of this. No, again, this is mother's milk to me. So, just let her run!
Cameron: So, the Synclavier, because it was a very... Well, it was a hybrid system. I mean, it was a digital oscillator. The samples were in memory, but the envelope generator I described, it was all analog. It was vrefs from one D-to-A to the other. Then, we had an analog distributor that would take the 16 voices and route them to 16 analog outputs. So, there was an analog component to its sound, starting with the time domain dithering. So, what I've done is I've developed a mathematical model that allows a fixed rate system to recreate the audio artifacts.
Darwin: Oh, that's beautiful. Yeah.
Cameron: And that's one of the reasons why the sampling, and some of the stuff I'm working on, sounds different than what you hear when you're just using traditional sample rate conversion algorithms. It captures the sound of the variable rate system, the hybrid analog-digital systems. That's what we're trying to do. That's the mission we're on now.
Darwin: That's fantastic. That's really exciting. I love to hear about it. Now, just hearing about this makes me want to ask sort of a flipped question. This is something that I like asking people. Generally, I do it offline, but I'm going to put you in the spotlight on this one. You have had quite a career, primarily with one thing with many implementations. Is there anything that you wish you would've done differently that might have made the whole thing easier or the whole thing better? Do you feel like you lucked into the right path all along the way?
Cameron: In hindsight, I wish I had done my more advanced music education earlier in my career, because I really felt hampered by that up until... I went to IU in 1982. I think one of the challenges of any art technology kind of product is if you're sitting in the middle as the inventor, there are genius creative types that want to use this instrument to make sounds with. And they want to use it in ways that make your jaw drop. "You're going to do what? You combine these sounds, and you're going to do what?" Next thing you know, we have a rocket ship taking off, made from a cat growl or something. Particularly, once I had two years of real professional music education under my belt, at least I had some common terminology with music and sound designers, and things like intonation and articulation and rhythmic accuracy. And I had the right vocabulary.
For the first 10 years, I was always trying to decipher what they were saying, "Listen, you idiot. Can't you do that? Can't you make the machine do this? Why does it have to sound this way?" Seeing the technology develop in front of you... I used to read the Audio Engineering Society papers. One of the big discussions was, again, in 1980 and 1981, this is when the compact disc was just being created. Well, we were pushing for 50 kilohertz sampling because we really felt that made it technically feasible to really do full fidelity sampling. But no, they had to fit it on a CD, or whatever.
They could only fit so much data on a platter that would fit in the car radio. So, they ended up doing 44.1 sampling. But I've always had good collaboration with the creative types. But that's the one thing... I'm glad I did it at age 30 when I was turning middle age. "Oh, my God. I was sweating bullets." I found that really helpful, and I'm able to use that. Like in talking to the Arturia team: they have a great team of sound designers working on their product. I guess that's one thing I'm jealous of - Oh, my God. I wish I could have a sound design staff like that. So, I'm able to talk to them in their language now, and I think that's been very helpful.
Darwin: It's really amazing. Well, Cameron, I want to thank you for having spent this time, for humoring me and wanting to dive into all of these different avenues and these different inlets of your mind. It's fascinating and, again, I feel like I could go for another several hours of this, but we are going to wrap it up. I want to be sensitive about your time. I want to thank you so much for doing this, and I really appreciate it.
Cameron: Well, Darwin, I'm glad I could be here. I look forward to talking again sometime. Well, see you later.
Darwin: All right.
Copyright 2022 by Darwin Grosse. All rights reserved.