

Part One: Developing a Philosophy of Sound 

Andrew Jones

One of the highlights of 2011 for me was the opportunity to interview Andrew Jones, Director and Chief Engineer for TAD Laboratories. Jones not only designed TAD’s state-of-the-art Reference One ($78,000 USD per pair) and Compact Reference ($38,000/pair) loudspeakers, but also Pioneer’s SP-BS41-LR bookshelf model, an outstanding value at $150/pair. Talented and engaging, Jones has had a fascinating personal and professional history in the world of high-performance audio. While our discussions ranged widely, they ultimately revolved around his philosophy of sound and the role it plays in his speaker designs. The exchanges in this segment focus on the studies, experiments, and experiences behind Jones’s formulation and refinement of his philosophy of sound. The final installment will focus on his role at TAD Laboratories, the implementation of his philosophy in TAD’s speaker models, and the systems he builds around those speakers at international audio events. 

Peter Roth: Tell us about your formal training at university in the UK, which I understand was in physics and in sound, and how those studies have contributed to your career in high-fidelity speaker design. 

Andrew Jones: I have been interested in sound since my early teen years. I have an identical twin brother (we are a stereo pair), and we were both interested in hi-fi and electronics. He maintained his interest in electronics, and I morphed into an interest in speakers. I have no recollection why, particularly, but speakers fascinated me rather more than staying with electronics. That was our focus all the way through school, and we then picked universities appropriate to our chosen subjects. I wanted physics, which I felt I needed for sound applications, which included acoustics courses. There were two obvious choices, and looking back, I probably chose the wrong one. I was offered a place at Southampton University’s Institute of Sound and Vibration Research (famous in England for sound and vibration research), but I didn’t understand, quite, the importance of what they were doing. Instead, when I interviewed at Surrey University (which I subsequently attended), it just happened to be sound lab day! I saw all these guys doing sound experiments and thought, this must be the place to go. I don’t regret it, though, because there I made the contacts that got me to where I am. 

After studying physics and acoustics at university, I went to do research with Malcolm Hawksford. He was supervisor for me and my brother, and we both joined Malcolm’s research group. My research was in computer-integrated crossover network design, trying to develop new algorithms. [Professor Malcolm Omar Hawksford is currently Chairman of the AES Technical Committee for High-Resolution Audio.] After that I did three years’ research in active noise control -- making noise to get rid of it. At that time I developed a relationship with Billy Woodman (of ATC), using his bass drivers for building these enormous speakers to cancel out the noise of a ship’s funnel. This required going out on the North Sea with ten 15” drivers in a coupled-cavity arrangement with 6000W of amplification to produce the 135dB at 30Hz needed to cancel the low-frequency noise. Interesting stuff, but the wrong field. I wanted to get into hi-fi. 

I’d known Laurie Fincham from when I’d seen him at lectures and shows. [Fincham joined KEF in 1968, and was instrumental in the development of KEF’s technical expertise throughout the 1970s and ’80s.] I met him when seeking an industry externship between my second and final years at university (KEF always took students for a summer or for a year), but due to a timing snag it didn’t work out, and so I went straight through. Nevertheless, I started this key relationship. After about three years of post-grad research, and having been chatting to Fincham, he decided he needed somebody in his research group and asked me to join KEF in a pure research capacity. I remember him saying at the time, “You are going to be in the research department and I don’t know that you will cross over into products, as we don’t have the time for you to make mistakes.” OK, I thought, KEF is still a cool place. Anyway, I joined, and eventually became Chief Engineer. I don’t know how many mistakes I made along the way, but I was there 11 years. 

KEF was a wonderful place -- a Speaker University, really. We had access to everybody: Stanley Lipshitz and John Vanderkooy, Peter Walker of Quad, Greg Mackie from Celestion, Neville Thiele, Dick Small -- all these wonderful people. We used to meet at Audio Engineering Society lectures and conventions. They would come and give private lectures at KEF, and then Dick Small joined KEF to become head of research. With these relationships and access, if you had questions about anything, you could just pick up the phone and call. Peter Walker was wonderful; we would meet once a month in London at the AES lecture series and all go out for dinner afterwards. I’d say to Peter, “I’ve got this problem about blah blah blah blah,” and he says, “I’ll think about it.” I always knew -- it could be days later -- the phone would ring, and then, “It’s Peter here, I was thinking about what you said . . .” and out would spill just this wonderfully elegant treatise on what you’d been doing. With Peter Baxandall, same thing. We used him as a consultant. It was fantastic training. I think that a problem with a lot of companies these days is the absence of the opportunity to work with all these people to learn the real engineering behind what you are doing. Today it is too often just experimentation through trial and error, or redoing slightly what someone may have done somewhere else. KEF back then was like being at university. 

PR: One of the things that has routinely surprised me is the lack of collaboration and cross-pollination. Rather, it’s a land of silos, with a lot of different people working, seemingly, in relative isolation. What you are describing, by contrast, sounds very exciting. 

AJ: To a degree, that is true. In the old days, it was much more hobbyist. It grew into a bigger business in the 1980s, when hi-fi companies became big, and as it became much more competitive there was less of an opportunity to mix and mingle. I think, as engineers, there will always be some talk, but I don’t know that we really share and learn. There is a reticence to readily discuss ideas being worked on. With the UK being always known for speakers, and with so many personalities moving between the speaker companies, I think the guys still talk ideas in private -- you can’t help but talk engineering. Nobody would directly give away exactly what is being worked on, but you can still discuss concepts and clarify things -- a type of informal peer review. I don’t think people realize just how thorough an understanding of the fundamentals of acoustics and electroacoustics someone like Peter Walker had. People like Walker and Baxandall can turn their thinking around and give you an elegant, reasoned answer to so many questions. Walker often said that if you needed more than A-level maths to answer a question, you are thinking about it in the wrong way. Quad could never rival KEF, because it was such a different speaker and had such a particular market. So there was no problem discussing things with Peter [Walker], and I was always fascinated with electrostatics. I’ve had my own electrostatics, including his. When he was retiring, he offered to let me understudy him and take over, but career-wise I thought that would be a more limited avenue, so I stayed with KEF. I still regret it to some degree, because of my fascination with electrostats, but the point is, I had a fantastic opportunity to discuss things with everybody. 

Andrew Jones at CES 2008

PR: One of the things I find interesting is how different designers approach challenges from different perspectives. You talk about physics, which is really a scientific approach, but you also talk of engineering. I’ve heard others talk of the difference between research and engineering, a scientific contrasted to a practical discipline. 

AJ: You have a hierarchy: a mathematician, a physicist (which is a failed mathematician), and an engineer (which is a failed physicist). My background is thoroughly scientific-research based. I try to apply that methodology everywhere. I see how people do experimental procedure, and mostly it is done badly. They don’t have the training. At KEF it was very pedantic. Raymond Cooke was a real pedant. Laurie Fincham was. When you write something scientific, whether discussing something or undertaking a research study, experimental procedure is important: single-variable change. So many people don’t account for all of the things that could be changing as a result of the intended change they thought they were making (in which case, one can’t be sure that the intended change was actually the cause of the identified result). I still try to follow that. While maybe KEF was too far one way -- in the sense that it was so engineering based that most of the design was just from an engineering point of view, and the sound was intended to come from the engineering -- a lot of other companies almost don’t care about the engineering; they just “listen and tweak,” asking only whether it sounds “good.” Meaning, does it sound accurate (if you know what accurate means) or does it just sound pleasant? They don’t care what the engineering says, so long as their ears tell them something. These are two polar opposites. At KEF we may have been too far the one way, and now I’m trying to marry them together. So I have to maybe accept and listen to things that would have gotten me thrown out of KEF. While I try to reason through it, I strive to remain rigorous in my methodology. 

PR: Are there traps between the theoretical, scientific approach and the more practical, real-world, result-oriented perspectives?

AJ: I’m sure there are. If you look at a lot of the research that Floyd Toole does -- what do loudspeakers sound like, and what choices do people make when they listen to loudspeakers -- he hails from that old school: “I’ve done the research, I’ve looked at off-axis performance, I’ve looked at frequency response, and most of the time, I can rank speakers based upon the measurements of them. I think we know sufficiently about the measurements to rank them with a trained listening panel.” That approach is anathema to a lot of the subjective people. While there is a lot of truth in Toole’s position, is it the whole truth? There still is debate as to controlled listening panels -- a panelist can still be negatively impacted because she is not relaxed enough to hear the things she would at home. Of course, the objective group rails against the subjectivist “Anything Goes” clause. 

PR: Doesn’t there need to be recognition that the brain is split into two sides? One is engaged during “active” listening, while the pleasure or artistic part of the brain engages in a different way.  

AJ: I struggle with those two sides constantly, which is maybe a good thing. I’ve had the thorough training of the “active” and now have a foot in the “engaged” enjoyment camp. I often have times when, in engaged listening, I realize I wouldn’t have achieved the result without tweaking the cables or amplifiers or whatever. At other times, I second-guess. But that struggle is good, because it constantly makes me question what is happening and try to research it more. The tension prevents complacency. Also, I know ways to bring in certain kinds of listening tests where you can start to bring the two sides together: merging a methodical approach to a listening test (the statistically relevant approach) and the “it sounds better to me” test. 

I was involved in some testing utilizing the ABX switch box. As soon as the subject entered a listening-test situation, you got this issue of tension. So we devised the test in a way the subject could accept -- choosing either A, B, or X. Only once the subject could comfortably hear a clear difference between A and B (the listener knowing what they each are, and able to listen to them as many times as needed) would we proceed to the next part of the test. If she didn’t believe she could reliably and repeatedly differentiate A from B and vice versa, we’d stop the test. We could change the equipment to suit, until the subject could be comfortable in hearing the A-and-B distinction. 

The next stage is to listen to X, the “blind” choice. Do you think now you can reliably tell me which one X is? Don’t write down any answers yet, just get comfortable or confident with your ability to do that. However, as soon as you start writing down an answer, you are committing to the test as being sufficiently revealing, sufficiently sensitive, and you are relaxed enough -- whatever it takes, go down and have a coffee, come back the next day, take as long as you want, it’s under your control -- but every time you write down an answer, you are committing to me that you accept my test as being satisfactory to you to reveal differences. If you don’t write down an answer, it could mean you weren’t in the right frame of mind on this or that day, you don’t like the test equipment, whatever. But by writing down an answer, you can’t wiggle out of this concept. 

So I used to try to do it that way and, sometimes, discernible differences were too small to be statistically significant. Now you could then say, statistically, no -- but was there still a difference that matters to somebody? How relevant is it? Am I going to spend money on that part when I know I can make a bigger difference somewhere else that is going to be dramatic? It becomes a real tool for balancing where efforts, resources, and costs are best spent. 
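Once answers are being written down, the scores can be checked against chance. A minimal sketch of that bookkeeping, using the standard one-sided binomial test rather than anything specific from Jones’s procedure:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the probability of getting at least `correct`
    answers right in `trials` ABX trials by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials: p ~= 0.038, so unlikely to be pure chance
print(round(abx_p_value(12, 16), 3))
```

A result that fails to reach significance still leaves open exactly the question Jones raises: the difference may be real but too small to justify spending money on, compared with changes that would be dramatic.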

PR: Could you delve into the massively multi-source Uni-Q experiment you conducted back in your early KEF days? It seems to have informed your thinking about “two-channel” speakers. 

AJ: That project originated as a joint operation between the Technical University of Denmark, KEF, and Bang & Olufsen. The European Union was providing funding for universities and industry to get together on research projects. Søren Bech, from the Technical University of Denmark, led the project, and we thought it would be a good idea to scientifically investigate the importance of room acoustics. The interface of the loudspeaker into a room is a three-dimensional issue, and what you hear in any room is directly affected by room characteristics. What is important about room characteristics? People had theorized, people had done simplified experiments. Our thought was to see just how far we could go to really determine the importance of the characteristics of the room. 

Starting with one of the University’s largest anechoic chambers, we put in an array of 32 loudspeakers surrounding the listener (front, back, and sides, both below and above) on a 5m arc (an imaginary sphere). Curtains were placed around the listener so they couldn’t see the speakers. If you went into the chamber and sat in the listening seat, and if only a stereo pair of speakers (left and right) were programmed to play, it would sound simply awful. (Never listen to music in an anechoic chamber, by the way. This is actually quite interesting when we come to talk about important room characteristics and the significant difference between how you set up for home theater vs. music reproduction.) 

We designed our own modeling program, such that we could model a simplified room (a rectangular room) and calculate the appropriate image. Imagine every surface is a mirror, so every reflection from the walls produces an image of the speaker (i.e., on the other side of the wall, at the appropriate angle, at the appropriate distance). We could calculate exactly where all of these images would appear, how far away (i.e., how attenuated) each was, and at what angle its sound arrived at the listener. If you know the directional characteristics of your “simulated” speaker, you can calculate the frequency response, delay, and strength of every single image, and feed that into a DSP program. We built (this is early 1980s) a 32-channel DSP engine, which simply didn’t exist in those days. Our program simulation could calculate every image -- hundreds of images, if you calculate through the third reflection. We then created an algorithm to group the images from a particular direction (the ear has only a limited acuity to directional cues -- everything within a certain solid angle is fused into one image). We would look for groups to bring it back to 32 images, which we felt was sufficient. For the decay of the sound from the farthest images -- the room reverberation -- the calculations became impractical. Even with all that DSP, there was a limit. So our algorithm included reverberation calculations, the low-level end of the tail for a room of particular dimensions and approximate absorption characteristics. Without adding reverberation, it just never sounded right. 
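The mirror-image bookkeeping Jones describes is what is now usually called the image-source method. A minimal sketch, assuming a rectangular room and leaving out the grouping and reverberation-tail steps (all names and values here are illustrative, not from the original program):

```python
from itertools import product
from math import sqrt

def image_sources(src, room, order):
    """Image-source positions for a rectangular room.
    Each axis combines a reflection flag p (0 or 1) with a translation n,
    giving the image coordinate (1 - 2p)*s + 2n*L. The direct source is
    the n = 0, p = 0 case."""
    images = []
    for n in product(range(-order, order + 1), repeat=3):
        for p in product((0, 1), repeat=3):
            images.append(tuple((1 - 2 * pi) * si + 2 * ni * Li
                                for si, ni, pi, Li in zip(src, n, p, room)))
    return images

def delay_and_gain(image, listener, c=343.0):
    """Propagation delay (s) and 1/r spherical-spreading gain of one image."""
    r = sqrt(sum((a - b) ** 2 for a, b in zip(image, listener)))
    return r / c, 1.0 / r

# 5 m x 4 m x 3 m room, one translation per axis: 27 * 8 = 216 images
imgs = image_sources((1.0, 1.0, 1.0), (5.0, 4.0, 3.0), 1)
print(len(imgs))
```

In a full simulation each image would additionally be attenuated by wall absorption and filtered by the simulated speaker’s directivity toward the listener, which is why knowing the speaker’s directional characteristics mattered.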

With this setup and sitting in “The Chair,” the left and right stereo pair alone sounded dreadful, but when you “switched on” the room, suddenly you felt as though you were sitting in a real room listening to music. It was astonishing! So there was a process of calculating the images for the “imaginary room,” then doing DSP to process the music to provide all those reflected images, then to play those images back. Each speaker was a 5” Uni-Q. I built every single one -- my own personal albatross. They were active speakers, driven by B&O current-dumping amplifier modules. When you drive a loudspeaker with a current source, you reduce the distortion by about 20dB. What you get, however, is a frequency response that is multiplied by the impedance function. Traditionally, the speaker’s frequency response is defined with a voltage source, but if you change to a current source, the impedance controls what voltage is being applied to the speaker, and you have to correct for that (requiring an active speaker, where the variables can be known and corrected). It was a complex design to do, and had to be tweaked for each speaker (this was before the Uni-Q speakers were even in production, so every driver pair was hand built). I had to measure them all. Calculate the active filters. Build the active filters. Match it to each particular driver. It was a ton of work. 
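The correction Jones describes (under current drive the acoustic response picks up a multiplication by |Z(f)|, so the active filter must divide it back out) can be sketched with a toy impedance model; the parameter values below are purely illustrative, not those of the actual drivers:

```python
from math import log10

def driver_impedance(f, Re=6.0, fs=60.0, Qms=3.0, Res=30.0):
    """Toy moving-coil impedance: voice-coil resistance Re plus a motional
    resonance peak of height Res centred at fs (illustrative values)."""
    detune = f / fs - fs / f
    return Re + Res / (1 + 1j * Qms * detune)

def current_drive_eq_db(f, Re=6.0):
    """Gain (dB) the active filter applies under current drive so the
    response matches the voltage-drive case: divide out |Z(f)| / Re."""
    return -20 * log10(abs(driver_impedance(f)) / Re)

# At resonance |Z| = 36 ohms here, so the filter must cut about 15.6 dB
print(round(current_drive_eq_db(60.0), 1))
```

Because the correction depends on the measured impedance of each hand-built driver, it had to be recalculated and rebuilt per pair, which is consistent with the amount of work Jones describes.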

We got them all installed and, on top of this anechoic chamber, we had the DSP engine, 64 amplifiers (32 Uni-Q speakers, therefore 64 individual drivers), and cables feeding down to each of the speakers. It was a hell of a project to set up, but a fascinating experiment. With everything set up and working, the idea was to change the dimensions of the room, or change the absorption characteristics of any particular surface, or turn off particular walls (to see if the sidewall reflection is more important than the floor), etc. The system was left operating for a couple of years for experimentation. Listening panels came in, the results were properly processed, and papers were published on the results. My part was setting it up and building the experimental apparatus. Really interesting. 

Which gets us back to what are, truly, the important speaker characteristics. It confirmed the long-held belief that ceiling reflections are among the worst. Sidewall reflections can be good, adding to a sense of spaciousness, if the stereo speaker possesses well-controlled directivity. This same sidewall phenomenon happens in concert halls, on a different scale. The tall, narrow, long concert halls -- the traditional ones -- were always the best halls. Sidewall reflections are less correlated with the direct sound than ceiling or floor reflections, so they add spaciousness. Correlated signals, by contrast, add coloration. 

Andrew Jones at CEDIA Expo 2010

PR: Since you touched on colorations, what do you think of that segment of the marketplace, often the expensive edge of the hi-fi world, which seeks an enhanced, supersized sound experience filled with excitement and titillation? 

AJ: An aspect of this phenomenon relates to what people think real music sounds like. What are their opportunities for listening to real music? We return to those espousing the closest approach to the original sound, often identified as the sound of live, unamplified music in a real space. But what original sound? If you are going to listen exclusively to large-scale orchestral music, there are very few places to properly experience it. Small-scale orchestral, chamber orchestras, you still have very limited opportunity. I’d say the vast majority of high-performance enthusiasts do not get to listen to live, unamplified performances on a regular basis. Even a lot of orchestral performance is amplified or augmented these days. Everything else is almost always amplified. So what’s the reference point? What are they looking for? Beyond the live event, most music is created in the studio. That is why I now focus on re-creating the original artistic intent, which for most music is formed in the studio, not live. The chance to sit and listen to unamplified instruments rarely exists. 

PR: Even with a live performance, the recording is interpreted by a recording and/or mastering engineer.

AJ: Exactly! You and I were both at the Computer Audio Symposium when Keith Johnson [of Reference Recordings] was doing his recording demonstration. He told us, “I’m not re-creating the performance; rather, I’m creating the performance.” When going out to record a performance, what is he capturing and how does he capture it? When a soloist plays and you are there watching, your brain interprets and focuses on the solo. If you just monitor the flat capture with a pair of microphones, the soloist gets lost in comparison to what you remember hearing when you were there. This is why Johnson (or another recording engineer) brings the soloist up in the mix, at least a bit. Because of the type of microphones, and their placement to capture the essence of the performance, it is not a true, holographic representation being captured, but rather the creation of something approximating the “live” perception had you been there. There is no reality. Even at the Symposium, in a purist setting with high-quality Spectral microphone amplifiers and a live mix, there was a large interpretive element. And the microphones were all specifically selected, rebuilt, and with their own sound character. 

I attended a ribbon-microphone conference in Burbank recently -- fascinating -- surveying the history of ribbon microphones, the differing types and differing sounds. During the Q&A, all these experienced sound engineers were discussing their preferences for one particular ribbon microphone or another. They’d favor one for voice, one for guitar, and another for something else. Already, even with a straight mixing console of high quality and with no EQ or compression, in making the recording the engineers perform EQ via microphone choice. So what is it we listen to? Someone has an idea of what they want to hear from music. How can we tell them they are less right, when there is no clear relationship between what the music was in the original performance and what we can hear from our systems? 

PR: How, then, would you describe your philosophy of sound? 

AJ: A natural, balanced performance, capturing the original artistic intent, across the spectrum of all types of music. 

PR: The obvious follow-up: What are you trying to achieve in your loudspeaker designs to implement that philosophy? 

AJ: A balanced design approach which maximizes, along multiple parameters, real-world performance and original artistic intent across the musical spectrum. 

Check back next month for the second part of my interview with Andrew Jones. 

. . . Peter Roth
peter@soundstagenetwork.com