
Recording Versus Marketing at BSO and Elsewhere


Asked to comment on John Newton’s recent Intelligencer article on recording and archiving at the BSO (here) and on the comments that followed it, BMInt writer David Griesinger responded in extraordinary detail (for a comment). We publish his response as an article.

A reply to the comments after John Newton’s article would have to be book-length to cover the points raised, and in the end I think no one would be convinced. But I might be able to add some water to the fire. To begin with the subject of SACD vs. PCM, I would like to direct readers to the excellent Wikipedia article here. I quote the following paragraphs:

Audible differences compared to PCM/CD

In the audiophile community, the sound from the SACD format is thought to be significantly better compared to older format Red Book CD recordings. However, in September 2007, the Audio Engineering Society published the results of a year-long trial in which a range of subjects including professional recording engineers were asked to discern the difference between SACD and compact disc audio (44.1 kHz/16 bit) under double-blind test conditions. Out of 554 trials, there were 276 correct answers, a 49.8% success rate corresponding almost exactly to the 50% that would have been expected by chance guessing alone. The authors suggested that different mixes for the two formats might be causing perceived differences, and commented:

Now, it is very difficult to use negative results to prove the inaudibility of any given phenomenon or process. There is always the remote possibility that a different system or more finely attuned pair of ears would reveal a difference. But we have gathered enough data, using sufficiently varied and capable systems and listeners, to state that the burden of proof has now shifted. Further claims that careful 16/44.1 encoding audibly degrades high resolution signals must be supported by properly controlled double-blind tests.

This conclusion is contentious among a large segment of audio engineers who work with high resolution material and many within the audiophile community. Some have questioned the basic methodology and the equipment used in the AES study.

Double-blind listening tests in 2004 between DSD and 24-bit, 176.4 kHz PCM recordings reported that among test subjects no significant differences could be heard. DSD advocates and equipment manufacturers continue to assert an improvement in sound quality above PCM 24-bit 176.4 kHz. Despite both formats’ extended frequency responses, it has been shown people cannot distinguish audio with information above 21 kHz from audio without such high-frequency content.
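
For readers who want to check the arithmetic behind the figure quoted above (276 correct answers out of 554 trials, or 49.8%), here is a minimal Python sketch of the exact binomial calculation; the trial counts are taken from the quotation, everything else is illustrative:

```python
# Exact binomial check of the AES result quoted above: 276 correct out of
# 554 forced-choice trials. Under pure guessing the count of correct answers
# is Binomial(554, 0.5); we compute the two-sided probability of landing at
# least as far from the expected 277 as the observed result did.
from math import comb

n, k, p = 554, 276, 0.5
expected = n * p                     # 277 correct expected by chance alone

def binom_pmf(n, i, p):
    return comb(n, i) * p**i * (1 - p)**(n - i)

dev = abs(k - expected)              # observed deviation from the mean (= 1)
p_two_sided = sum(binom_pmf(n, i, p) for i in range(n + 1)
                  if abs(i - expected) >= dev)

print(f"success rate  = {k / n:.1%}")          # ~49.8%
print(f"two-sided p   = {p_two_sided:.2f}")    # ~0.97: indistinguishable from guessing
```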

Readers with an open mind and a sense of humor might enjoy reading a paper I wrote on this subject in 2003. It was intended as a serious but lighthearted look at the possible reasons there might be differences between audio formats with and without frequencies above 20kHz. To quote from my web page:

Being currently over 60, and having in my youth studied information theory, I have a low tolerance for claims that “high definition” recording is anything but a marketing gimmick. I keep, like the Great Randi, trying to find a way to prove it. Well, I got the idea that maybe some of the presumably positive results on the audibility of frequencies above 18,000Hz were due to intermodulation distortion that could convert energy in the ultrasonic range into sonic frequencies. So I started measuring loudspeakers for distortion of different types—and looking at the HF content of current disks. The result is the paper below, which is a HOOT! Anytime you want a good laugh, take a read. Slides from the AES convention in Banff on intermodulation distortion in loudspeakers and its relationship to “high definition” audio.
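
To make the intermodulation mechanism concrete, here is a small Python sketch (not taken from the slides above): two ultrasonic tones are passed through a weak, purely hypothetical 2% quadratic nonlinearity standing in for a loudspeaker, and the difference tone lands squarely in the audible band:

```python
# Two ultrasonic tones through a weakly nonlinear "loudspeaker": the assumed
# 2% quadratic term generates sum and difference products, and the 2 kHz
# difference tone is audible even though neither input tone is.
import numpy as np

fs = 192_000                              # sample rate high enough for ultrasonics
t = np.arange(fs) / fs                    # one second of signal
f1, f2 = 24_000.0, 26_000.0               # both tones above ~21 kHz
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

y = x + 0.02 * x**2                       # hypothetical 2% second-order distortion

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

for f in (f2 - f1, f1, f2, f1 + f2):      # difference tone, fundamentals, sum tone
    i = np.argmin(np.abs(freqs - f))
    print(f"{f:8.0f} Hz  level = {20 * np.log10(spectrum[i] + 1e-12):6.1f} dB")
```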

Single-bit converters are cheap—but their signal-to-noise ratio is approximately 0dB, and the noise spectrum is white, or uniform with frequency. This means that if you run them at 2.8 megahertz, the S/N at 10kHz is 2,800,000/10,000, or about 24dB. To make them work at all, you have to convert the output of the converter to analog, pass it through an extremely complex filter, and feed it back to the input. The feedback reduces the noise at audio frequencies while increasing it at frequencies we hope are inaudible. The result is a bit stream at 2.8MHz that you can convert to PCM with some straightforward digital mathematics. The converter used in SACD equipment was designed by Bob Adams at Analog Devices. The conversion filter is a work of art, fiendishly complex but with decent linearity at audio frequencies. The filter defines the properties of the output signal, which is far from the “purity” often claimed. The converter noise starts to rise dramatically above 20kHz, as can be seen in the paper above. Fortunately, no one can hear it.
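
The noise-shaping behavior described here can be illustrated with a toy first-order delta-sigma modulator; real SACD converters use far more elaborate higher-order loops, so the sketch below is only meant to show quantization noise being pushed out of the audio band and rising steeply above it:

```python
# Toy first-order delta-sigma (single-bit) modulator, only to illustrate the
# noise shaping described above; real SACD converters use much higher-order
# loops and are not modelled here.
import numpy as np

fs = 2_822_400                            # 64 x 44.1 kHz, the DSD sample rate
n = 1 << 18                               # ~93 ms of signal
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)    # a 1 kHz test tone at half scale

u = 0.0                                   # integrator state
y = np.empty(n)
for i in range(n):
    out = 1.0 if u >= 0.0 else -1.0       # one-bit quantizer
    y[i] = out
    u += x[i] - out                       # feed the quantization error back

spec = np.abs(np.fft.rfft(y * np.hanning(n))) ** 2
freqs = np.fft.rfftfreq(n, 1 / fs)

def band_db(lo, hi):
    band = (freqs >= lo) & (freqs < hi)
    return 10 * np.log10(spec[band].mean())

print("mean noise  2-20 kHz  :", round(band_db(2e3, 20e3), 1), "dB")    # audio band
print("mean noise 20-100 kHz :", round(band_db(20e3, 100e3), 1), "dB")  # rising
print("mean noise 0.1-1 MHz  :", round(band_db(100e3, 1e6), 1), "dB")   # much higher
```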

Single-bit converters seem to be a godsend, but they are extremely sensitive to clock jitter. Great care must be taken with the clock signals that drive them, and such care is often, perhaps usually, lacking. But this is another story.

For reasons never explained, Analog Devices decided to bring the raw converter output to a pin on the converter chip, even though the chip was designed for PCM output. Engineers at Philips and Sony—looking for some way to get consumers to junk their CDs and buy some new product—jumped at this so-called “pure” signal. Early demos were either deliberately or ignorantly falsified to make the SACD sound better than PCM. In one case I think there was even a different mix used in the comparisons. So we have SACD. With some imagination, and audio engineers HAVE to be possessed with imagination, SACD sounds infinitely better.

Microphone technique and loudspeaker reproduction

As for the illusion that is audio reproduction, my current research is into physical models of human hearing based on what we know about the mechanics of the ear, the information passed upward to the brain, and the properties of speech and music. Too much to explain here. But a few facts may be helpful.

First, speech and music both depend on sounds that are largely pitched tones rich in upper harmonics. Why pitch? Why many upper harmonics? Imagine that the ear were capable of very high pitch discrimination (and it is). Then it could use pitch discrimination to filter out environmental noise and other signals, and concentrate on a particular sound it found important (it can). A consequence is that the upper harmonics of useful sounds (speech) are where the information is. Take away everything below 1000Hz and we can understand speech very well—sometimes even better than when lower frequencies are present.

This all matters because particularly in the presence of noise and reverberation we localize sounds primarily at frequencies above 1000Hz. Head shadowing is large at these frequencies, and we can detect sound direction to 2 degrees or better, good enough to separate the instruments in a string quartet at a distance of 80 feet or more. In great acoustics we can use these abilities sitting in a concert seat, but they are useless in listening to a typical recording.

In two-channel stereo, sound from the left speaker diffracts around the head to the right ear, and sound from the right speaker diffracts around the head to the left ear. If both speakers are producing the same sound we hear a “phantom center” because the signals in our two ears are identical. But what we hear is NOT the signal the speakers produced. The sound diffracted around the head interferes with the direct sound from each speaker. At low frequencies the two signals add, but at about 1600Hz the time delay is sufficient for the diffracted sound to cancel the direct sound, creating a very audible dip at that frequency. Vocalists panned to the center are always heard with this odd frequency response unless the recording engineer attempts to correct it—which assumes the engineer knows the size of the listener’s head.
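
A back-of-the-envelope sketch of that dip: model the signal at one ear as the direct sound from the near speaker plus a delayed, slightly attenuated copy from the far speaker. The 0.31 ms delay and 0.7 gain below are assumed round numbers for ±30 degree speakers and an average head, not measured data, but they put the first cancellation close to the 1600Hz figure mentioned above:

```python
# Direct sound from the near speaker (gain 1) plus the same signal from the
# far speaker after diffracting around the head (assumed gain 0.7, assumed
# delay 0.31 ms). The magnitude of the sum shows the interference dip.
import numpy as np

tau = 0.31e-3        # assumed cross-head delay for ~+/-30 degree speakers, s
g = 0.7              # assumed head-shadow attenuation of the diffracted path

f = np.linspace(100.0, 5000.0, 2000)
response = 1.0 + g * np.exp(-2j * np.pi * f * tau)
level_db = 20 * np.log10(np.abs(response))

dip_f = f[np.argmin(level_db)]
print(f"deepest dip near {dip_f:.0f} Hz ({level_db.min():.1f} dB re the direct sound)")
print(f"low-frequency boost: {level_db[0]:+.1f} dB")  # the two paths add below the dip
```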

This diffracted sound strongly affects the localization of signals that are NOT panned exactly to the center, because the principal frequencies used for localizing sound in a complex scene are concentrated in the region of these interference dips. The result is that some frequencies are localized in different directions than others. The brain needs to make some kind of average to decide where the image should be—and the data on which to base that decision is continually changing. The result is a high degree of uncertainty as to where any sound between the center and left or right really comes from. These topics are covered in the following paper—check out figure 7:

“Stereo and Surround Panning in Practice” A preprint for the AES convention, May 2002.

The bottom line is that for any sound source not panned to the exact center of a stereo image—and for any listener not at the exact center between the speakers—the precise location of the sound is not stable. It is thus impossible with loudspeaker stereo to have precise localization of sounds between center and left, or between center and right. We can still think we can localize sound—the brain does not like uncertainty—but we really can’t. When there is a small delay between the left and right loudspeaker signals, such as occurs when a pair of microphones is spaced apart by 17 to 24 centimeters, these delays add to the frequency-dependent confusion of localization. The image is widened, but the uncertainty increases. Loudspeaker stereo can NEVER approach our ability to localize in a natural environment. So the recording engineer typically spreads the sound out wider than we would hear it in a concert. That plus ample imagination creates the illusion we call stereo.
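
The size of those spaced-pair delays is easy to estimate: for a capsule spacing d and a source at angle θ off-axis, the path difference is roughly d·sin θ, so the interchannel delay is d·sin θ/c. A small sketch using the 17 and 24 centimeter spacings quoted above (the angles are arbitrary examples):

```python
# Interchannel time difference for a spaced microphone pair: path difference
# of roughly d*sin(theta) for a source theta degrees off-axis. The 17 cm and
# 24 cm spacings come from the text; the angles are arbitrary examples.
from math import sin, radians

c = 343.0                                   # speed of sound, m/s
for d_cm in (17, 24):
    for theta_deg in (15, 30, 60):
        dt_ms = (d_cm / 100) * sin(radians(theta_deg)) / c * 1000
        print(f"d = {d_cm} cm, source {theta_deg:2d} deg off-axis: "
              f"delay ~ {dt_ms:.2f} ms")
# Delays of a few tenths of a millisecond are comparable to natural interaural
# delays, which is why they widen the image while adding to the uncertainty.
```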

But it gets worse. If we assume we are using a point microphone such as a Calrec Soundfield, crossed hypercardioids, etc., the microphone patterns are also less sharp than our ears. The most directive configuration is figure-of-eight patterns at 90 degrees—the Blumlein arrangement. The angle that produces a one dB difference in the channels is 3.3 degrees—not as good as the ~2 degree angle of natural hearing. Other patterns are worse. Two cardioid microphones separated by 120 degrees need a source angle difference of 11.5 degrees to achieve the same 1dB of channel difference.
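
Those directivity figures can be reproduced, at least approximately, with a short calculation: scan source angles away from the center line of a coincident pair and find where the interchannel level difference first reaches 1 dB. With this “offset from center” convention the Blumlein case lands near the 3.3 degrees quoted above; the 120-degree cardioid pair comes out near 5.7 degrees, so the 11.5-degree figure presumably counts the full angle between a pair of sources. This is a sketch, not a claim about the author’s own derivation:

```python
# Scan source angles away from the centre line of a coincident pair and report
# the smallest offset that produces a 1 dB interchannel level difference.
import numpy as np

def offset_for_1dB(pattern, half_angle_deg):
    """Source offset from centre (degrees) giving a 1 dB channel difference."""
    for theta in np.arange(0.0, 90.0, 0.05):
        near = pattern(np.radians(half_angle_deg - theta))   # mic turned toward source
        far = pattern(np.radians(half_angle_deg + theta))    # mic turned away
        if 20 * np.log10(near / far) >= 1.0:
            return theta
    return float("nan")

fig8 = lambda a: abs(np.cos(a))                # figure-of-eight polar pattern
cardioid = lambda a: 0.5 * (1.0 + np.cos(a))   # cardioid polar pattern

print(f"Blumlein, fig-8 at 90 deg : {offset_for_1dB(fig8, 45.0):.2f} deg")
print(f"Cardioids at 120 deg      : {offset_for_1dB(cardioid, 60.0):.2f} deg")
```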

I own three soundfield microphones, including a Calrec Mark IV, and have made several recordings of orchestral and chamber music with them while simultaneously recording with a dummy head that has accurate models of my own pinna, ear canals, and eardrums. When these recordings are played back through special headphones equalized with probe microphones at my own eardrums, the realism is startling. I have compared the binaural recordings to the B-format soundfield recordings many times, using any reproduction technique I can think of. The closest match is when I play the soundfield recordings through headphones. The timbre and the apparent distance of the music are unchanged—but the instruments are all jammed up in the center because the soundfield simply cannot separate them as well as the binaural head.

Playing the soundfield recordings through loudspeakers is hopeless. The sound is invariably muddy and distant. You cannot use any type of first-order microphone without putting the microphones much closer to the music than would be ideal for concert listening. This increases the direct-to-reverberant ratio and widens the image enough that the recording begins to have the clarity of natural hearing.

In my own experience as a recording engineer I find that some variety of multi-miking is essential, although I seldom use more than about 14 microphones. Most ensembles are too large to find a single point at which the sound is both balanced and clear. The fact that a single microphone pair must be closer to the conductor than a perfect concert seat means that instruments in the back of the orchestra invariably sound too soft and too distant. Early recordings moved the orchestras into the middle of a hall precisely to mitigate the early reflections of a stage house that increase the sonic distance of more distant instruments. Under these conditions a simple microphone technique has a chance of working.

Recording a small group is a different matter. I routinely use a miniature soundfield to record the Boston Camerata if a multichannel recording is not wanted. But I always use at least two additional microphones for the hall sound. And nearly always a few accents are helpful.

Two-channel stereo is limiting. It is very helpful to reproduce the sound with more than two loudspeakers. Three speakers give you twice the localization accuracy of two if you fully utilize the center, and the sweet spot is also greatly broadened. With careful multi-miking and a good surround setup an exceedingly musical mix can be made. An accurate illusion of depth can be achieved through careful use of artificial reverberation—a field in which I have both expertise and equipment. See “Recording the Verdi Requiem in Surround and High Definition Video.”

I gave up using a soundfield microphone or a dummy head as a main microphone because you cannot derive a discrete center channel from them. I rely largely on Schoeps supercardioid microphones to capture as much direct sound from a section as possible with as little leakage from other sections and the fewest early reflections. I don’t think I can convince anyone about the virtues of this type of technique, but (nearly) every commercially successful engineer of classical (or pop) music does pretty much the same thing. They would gladly do something else if it worked better.

David Griesinger is a Harvard-trained physicist who is eminent in the field of sound and music. His website is here.


31 Comments [leave a civil comment (others will be removed) and please disclose relevant affiliations]

  1. Oh, David. It might appear to someone that you argue in “extraordinary detail” with John Newton’s statements, but in reality I see just the commentary of the same kind of industry person, who would like to highlight the notions of his own agenda from a different angle. Your presentation of the SACD vs. PCM question is very superficial, and your reference to the Wikipedia article is, I am sorry, laughable. Sure, the use of double-blind listening tests as some kind of guide has always been a life-boat for people who do not know what to listen for while they are listening. I wonder, what is the difference between John Newton’s patronage of SACD and your patronage of the ideas of multichannel sound? Mr. Newton gets paid big bucks from big-name companies to “re-master” old analog tapes to the contemporary SACD garbage. It is not that Mr. Newton is unintelligent and does not know that in 1999 Sony killed Ed Mather’s brilliant and superior 4-bit DSD and introduced to the world the cheap version of a 1-bit surrogate with insufficient support of what SACD was meant to be. It is not that he does not know that SACD in reality does not exist and all processing is done by instant conversion of the DSD stream to PCM, including in the best SACD players of today. For sure he knows all of it, but it is very convenient for him to make statements about the superiority of “his” SACD; probably it helps to pay the mortgage…  Do you think it is much different from you writing multi-channel-glorifying papers for Lexicon, which used to flood the market with multi-channel processors? It is not that you do not know better and do not know all the problems with multi-channel, but I guess you have a mortgage too…

    I think the best response to the John Newton interview was expressed yesterday by the quite witty Robert Everist Greene: “One of the things I have noticed about professional recording engineers is that they tend to stand on unsupported ‘authoritative’ statements. They tend to act as if doing the same thing for a long time has made them experts. But in fact few of them ever do even basic experiments and they have learned very little beyond purely practical experience with few possibilities.”

    Comment by Romy The Cat — October 11, 2011 at 5:24 pm

  2. Writing as one who finds most surround-sound playback to be at best annoying and at worst highly unrealistic, I have to say that Romy’s talk of engineers and physicists never doing basic experiments is, to me, astounding. I would ask Romy to do three things: (1) go to a live concert and wear a blindfold for the entire concert, taking notes about what is actually HEARD — not aided by seeing — for imaging and balance; (2) record a live concert using only two microphones in his preferred arrangement, even if that means putting them somewhere other than at his seat; and (3) listen to the playback of that concert on two loudspeakers, again while blindfolded, allowing other real room sounds (like a telephone or the creak of a chair) to happen once or twice during playback.

    In my experience, doing any of the above — and I’ve done all of the above — proves that imaging in the front hemisphere through hearing alone is mediocre at best, and that one tends to localize the loudspeakers rather than the original sound sources when listening to a two-channel playback over loudspeakers. You have to deceive yourself in order to think anything else.

    Griesinger’s description of the problem of achieving balance for a large ensemble as mikes are moved closer in order to reduce muddiness is also supported by experience. I know some engineers who have used only two mikes, and I’ve heard playback from recordings made from different mike positions (and different mike patterns), and it is either difficult to find a satisfactory tradeoff, or impossible, depending on venue and ensemble.

    All this applies if you must rely on hearing alone to resolve the stereo image. If you use your eyes and your memory, you are no longer talking about the recording, but about extra information or delusion that is not actually in the signal.

    Comment by Mark P. Fishman — October 12, 2011 at 11:24 am

  3. Let me reply to a few of David’s interesting points here:
     
    “Playing the Soundfield recordings through loudspeakers is hopeless. The sound is invariably muddy and distant. You cannot use any type of first-order microphone without putting the microphones much closer to the music than would be ideal for concert listening.”
     
    Nonsense.  I have pictures taken at the Troy Savings Bank Music Hall, where I used an SF Mk. IV mic (serial #13) for a highly regarded chamber orchestra.  The sound off that recording is astounding in its clarity, accuracy of positioning and hall reverb — if it were not for copyright issues I’d make you a copy, but here are the photos:
     
    http://www.ai.sri.com/ajh/family/ny-trip/tmh-photos.html
     
    Besides, in a B-format recording, you can fine-tune the image and fold in the back lobes somewhat to make the stereo picture sharper — sorta like sharpening a digital photo just a tad, to make images a little crisper.  You can also widen the image if you need to.  That’s the theoretical beauty of B-format, that you can go into the original recorded data after the fact and reorient the picture so to speak.  You can’t do that with spaced omnis and cardioid accent mics.
     
    David also writes: “But it gets worse. If we assume we are using a point microphone such as a Calrec Soundfield, crossed hypercardioids, etc. the microphone patterns are also less sharp than our ears.”
     
    I disagree — yes, the live experience may be even more precise, but I have never heard more precise home reproduction than through a precisely calibrated SF mic, as this one was in the early 1980s, while the original SF team was still alive.  Later SF mics are terrible as the present owner of the patents fails to understand the math properly.
     
    David continues: “I gave up using a SF microphone or a dummy head as a main microphone because you cannot derive a discrete center channel from them.”  So, then, have you tried something like the 5.1 surround setup in the Microtech Gefell ensemble?
     
    http://www.musik-service.de/microtech-gefell-surround-ina-5-prx395749333en.aspx
     
    As for sample rates, circumstances once forced me to use a Sony consumer MiniDisc unit to master — yes, MASTER — a piano recital at Union College Memorial Chapel, with one of the leading pianists of our time, Arnaldo Cohen.  By placing my mic as carefully as I could, I mastered a concert recording with data-reduction.  No one could tell the difference between the MD recording and anything prior that I had done with that pianist or others, in the same venue, with the same gear.  That’s why I argue that very high sample rates are ridiculous.  And I _am_ a serious music lover/listener/engineer.  The use of microphones has a far greater impact on the accuracy and quality of sound than sampling rates.
     
    My general points are these:
     
    Simply because American and European engineers uniformly use spaced omnis does not necessarily and inevitably mean you have accurate stereo, as _conceived for the home_ by the late Alan Blumlein.  It also does not mean those engineers are capable of hearing accurately what is placed before their ears, either through loudspeakers or headphones.  The proponents of spaced omni recordings are quasi-religious dogmatists who love to have lots of phase in their recordings, call that phase “space”, and resent anyone who checks up on their work by listening in mono, or over speakers the way one of the inventors of stereo envisioned.  Would you name for me a time when the BSO was miked for broadcast with ORTF, NOS or X-Y techniques?  If so, did you copy those broadcasts off the air, and would you send me copies so that I can verify your claims?
     
    I listened, a few days late, to the webstream of the Newhouse/BSO broadcast from last week.  The sound was what I had predicted — woodwinds louder than the strings, tympani extremely loud, double basses muffled and distant, the overall image unclear and drab.  And I conclude this is the result of using spaced omnis for the principal mics, while using accent cardioids for the winds and tymps.
     
    But there is one thing about single-point that is crucial, and that I think Romy brings out:  Namely, conductors must learn and practice their own craft of balancing an orchestra.  They must do so through experience and their own ears.  If a woodwind passage is not clear in “The Moldau”, you _could_ place an accent mic on the flute and hope to bring it out; or, you could do what Toscanini once did, namely, double that flute part with a trumpet to bring out the line.
     
    The use of multiple mics often makes a mockery of the armor a conductor must have, the ability to balance an orchestra and clarify details on their own.  Proponents of spaced omnis not only deny that fact, they also prevent conductors from having a proper set of recorded options to choose from, namely, NOS/ORTF/X-Y techniques.  Listeners should have those options, too.

    Comment by Don Drewecki — October 12, 2011 at 12:25 pm

  4. Hey let’s have a party!
    Here’s the sort of thing we used to do, decades ago, at my late The Listening Studio: Comparisons of many different kinds and modes of recording and their reproduction. While I had (and still have) only a two-channel system, it was a very good system, but even then the various participants exhibited a spectrum of opinion regarding the results, not unlike the above and on associated threads here. It became my settled view that most people like what they like.
    Myself, I’m not fond of “surround sound” and can mount a fair argument against it, as I know Romy can too, but there’s no accounting for sonic taste. I’ll go even farther and say that mono pleases me well, although I see no reason to play a native stereo recording in mono. And how about *this*? I claim that 78s (properly played) can reproduce certain instruments (think, cello and voice) more realistically than those tinkly LPs and edgy CDs. But how far do you think I get when discussing these matters with today’s recording engineers?
    It has been my privilege to fall into ownership of a small sampling of on-site BSO broadcast recordings from over the last six decades. (I do not refer to the official BSO box set, although that too is revealing.) Granted these are CDs, but well-transferred — and they quite marvelously demonstrate the changes that occurred over the broad interim. But one must listen with blinders of a sort on, as the music-making was so far superior in the olden days — and as I have always asserted, music is a damn distraction from audio.
    PS to Mark: Are you the Mark Fishman I used to know?
     
     

    Comment by Clark Johnsen — October 12, 2011 at 1:04 pm

  5. I love Clark’s comments because microphoning “bakeoffs” are something I have participated in for a long time.  On some of my gigs I have even allowed another local engineer, Brian Peters, to record Chromatics concerts with his own setups (which often involve spaced omnis, or Royer ribbon mics) in comparison to ours.  That was the fun of recording in the Troy/Schenectady area — that I had access to two or more great series and could take the time to do things scientifically, based on the original thinking of people like Blumlein and Gerzon.
     
    Wouldn’t it be wonderful if, for some concerts during the regular season, WGBH/WCRB would say “For the second half of tonight’s concert, you will hear the BSO reproduced solely through a single-point arrangement, so that listeners at home can hear how different mic techniques produce different sounds of the Orchestra.”  The announcer would then say, “For tonight’s performance of the Sibelius Second Symphony, you will hear a pair of Microtech Gefell M930s placed in an NOS arrangement, with no accent mics.  We’ve placed this pair about five feet above and five feet behind the conductor.  Next week, we’ll try a different technique, and in two weeks, we’ll try yet another textbook arrangement.  We will call this project, Science in the Service of Music.  Let us know what you think.”
     

    Comment by Don Drewecki — October 12, 2011 at 1:24 pm

  6. Mark, you are not talking about imaging but about soundstage.  The semi-ridiculous experiments that you’ve proposed have absolutely no relation to the reconstruction of playback imaging; they mostly describe your understanding of soundstage. There is soundstage, which is in a way a byproduct of stereophonic virtualization (BTW, there is MUCH more to it), and there is imaging. They are different things, and I think you are a bit confused about it.

    Anyhow, I do not share your desire to convert Audio into a ceremony of illusionary geometry, and for me the complexity of audio imaging is an order of magnitude more vital than any aspect of contrived soundstage hallucinations. Ironically, no surround playback, multi-channel or any other marketing invention has an impact on imaging; they only help the sales personnel in hi-fi shops to move boxes.

    In fact multi-channeling does destroy imaging, as better imaging has a qualitative space-tone characteristic, and multi-channeling, with its comb-filtration and a few other things, kills playback tone and kills any minute dynamic inflections. I would not even go into the realm of discussing that nowadays no one records to multi-channel properly; people mostly use a two-channel final mix from which they construct the multi-channel surrogate by DSP.

    In any case, multi-channeling is not the subject of this thread. Still, do not be under the impression that only you are familiar with multi-channel. My problem is not multi-channel itself but the absolutely wrong requirements and wrong expectations that people have when they are trying to go for multi-channeling.  By the way, the “dumb” multi-channel idea was reincarnated in audio from the film industry, but it is ironic that the film industry records sound so ignorantly that it is not even fun to mention.

    Comment by Romy The Cat — October 12, 2011 at 1:33 pm

  7. Mark makes one remark I must answer:
     
    “… one tends to localize the loudspeakers rather than the original sound sources when listening to a two-channel playback over loudspeakers. You have to deceive yourself in order to think anything else.”
     
    Actually, Mark, in B-format SF mic recordings, you can fine tune the pickup arrangement so that the loudspeakers really DO disappear.  If you use what I would call “Full Blumlein”  — two figure-8 patterns at right angles to each other — then the loudspeakers really and truly DO disappear.  The question that follows for me is, Does this wear well over the long run, or do you want something a little sharper and closer?  If you do, then simply fold in the back lobes slightly, to a hypercardioid pattern, and you get a little less hall and a little more stage.  That’s my taste.  And if you work in a hall for a long time, you can set up your mic ensemble quickly and still get that repeatable sound.
     
    The problem is, None of this is possible with spaced-omni techniques.  
     
    But to get back to one sentence you wrote a few days ago, about all those arrival time differences at home and in the hall.  If that’s the case, then — I am guessing now as to what your taste is — you can either go for a recording/broadcast that maximizes arrival time differences, and then add to that still more arrival time differences at home (spaced omnis), or for a reproduction that minimizes the arrival time differences from the on-site location, before the home playback adds its own arrival time differences (coincident miking).  My choice is to keep the image clear, and accept the limitations of conventional stereo.
     
    But here’s one more thing:  I am happy and grateful that WGBH/WCRB puts the BSO on live, and I’m especially glad that those of us far away can now hear them, too.  That’s great.  But, as someone who has attended concerts at Symphony Hall probably 12 times in the last 18 years, I myself cannot accept the proposition that the sound we get over the air approximates what we hear live.  It doesn’t.  What we _do_ get may be pleasant and enjoyable and take us away from our troubles for a few hours, but it isn’t an accurate sound-picture.

    Comment by Don Drewecki — October 12, 2011 at 2:00 pm

  8. Mr. Griesinger, thank you for a very clear exposé on important aspects of how we hear sound. As the last living designer of the Soundfield Mk4, I was very interested in your journey.

    May I ask how you normally use your Schoeps supercardioids?  Are they in some form of ORTF?  Do you use an additional pair of spaced omnis for your “hall sound”?

    Are these omnis what you use for more “hall sound” with the Camerata & your miniature soundfield?  Which miniature soundfield?

    Excuse my Nosey Parker questions but I’m trying to confirm a theory why many good recording engineers seldom use coincident microphones.

    Comment by Richard Lee — October 12, 2011 at 7:29 pm

  9. “… Wouldn’t it be wonderful if, for some concerts during the regular season, WGBH/WCRB would say “For the second half of tonight’s concert, you will hear the BSO reproduced solely through a single-point arrangement, so that listeners at home can hear how different mic techniques produce different sounds of the Orchestra.”  The announcer would then say, “For tonight’s performance of the Sibelius Second Symphony, you will hear a pair of Microtech Gefell M930s placed in an NOS arrangement, with no accent mics.  We’ve placed this pair about five feet above and five feet behind the conductor.  Next week, we’ll try a different technique, and in two weeks, we’ll try yet another textbook arrangement.  We will call this project, Science in the Service of Music.  Let us know what you think.”
     
    Comment by Don Drewecki — October 12, 2011 at 1:24 pm” 

    Could they use more than one arrangement simultaneously and record each separately for rebroadcast? Then if they could bring back Robert J. Lurtsema (or even use a substitute), he could play a bit recorded with one arrangement, then another arrangement of the same bit, then another. Repeat with different passages. Then we could really tell which works best on our radios or computers. Of course it might depend on the set up each person is using, but it would still be worthwhile IMO. A consensus might even emerge.
     

    Comment by Joe Whipple — October 12, 2011 at 10:41 pm

  10. Joe W. writes: “Could they use more than one arrangement simultaneously and record each separately for rebroadcast?”
     
    See my reply in  the other thread. That’s a great idea, and something very much suited to the intellectual/scientific/cultural life of Boston.

    Comment by Don Drewecki — October 13, 2011 at 9:35 am

  11. I thank the above people for their comments, although I am surprised and somewhat saddened by Romy’s vitriol.
    I cannot respond to all of these comments – there are too many. But – on playback of soundfield recordings through loudspeakers – I meant playback of soundfield recordings made in a somewhat distant but still very good concert seat. I know such recordings can be good if the microphone is close enough, and I said so. The issue is balance – and the inability to make a true left-center-right channel separation. Although the SF mike does not have the angular resolution of a human head, it is still good when used correctly – and the output is very similar to what you would get with pan-pots.
    It is understandable that multichannel sound reproduction has been disappointing for many of the commenters. Commercial equipment and available recordings do not do the process justice, and two channel stereo is often if not usually superior.
    But multichannel theory is correct – and superior results can be obtained. The front image can be clearer and more accurate, the sweet spot larger, and the hall sound both unobtrusive and believable. To do it right the front speakers need to be identical – or at least have identical phase response from 500Hz to 4000Hz. Trying to image sound between some cheap center speaker and a much larger speaker to the left and right simply does not work.
    And the recording MUST be made such that sounds to the left of center are NOT reproduced through the right loudspeaker, and vice-versa. There is no main microphone array that can record sound in this way. Almost all commercial multichannel recordings either ignore the center channel entirely, or combine sound from the center speaker with the same sound from the left and right speakers. Such a mix overlaps direct sound from the center speaker with a phantom center from the left and right, which causes comb filtering at the listening position and increases the uncertainty of localization for material to the left or right of center. It is better to switch off the center speaker entirely. Which is what happens. The sound mix from Great Performances and the MET broadcasts is stereo with some surround. No attempt is made to broadcast a meaningful center channel.
    There are lots of digital processes that claim to convert stereo to some kind of surround that uses the center speaker. The results are almost uniformly awful. Our team spent thousands of hours switching between stereo playback and digitally derived surround with the goal of keeping the front image at least as good as the original while increasing the size of the sweet spot and making the walls of the room disappear. We think we succeeded, although the process never caught on for home playback – probably because the speaker systems did not have the needed uniformity of phase response. It did catch on in cars – but that is also another story.
    I once had the opportunity to re-mix a 16-track recording made by WGBH of the Cantata Singers’ performance in Symphony Hall of Donald Sur’s Slavery Documents. I dropped the main microphones, which were two omnis separated by 24cm, and constructed a very good left-center-right mix. The result in five channels is absolutely spectacular – although the performance was under-rehearsed. All of my Cantata Singers recordings in the last 10 years have been recorded this way. I wish they could be released in some form, but so far the union has been intransigent.
    We should all get together and have a party – I love playing these recordings, particularly to people willing to listen carefully with an open mind. And I am happy to listen to recordings of others.
    By the way – I do not work for Harman, and when I did it was as a scientist trying to understand precisely how sound interacts with rooms and ears – and as an inventor of algorithms that might improve this process. Recording and playback of live recordings was my laboratory – and was unpaid. I had an opportunity to experiment that is simply not available to most engineers in this field, and I am immensely grateful for it.

    Comment by David Griesinger — October 13, 2011 at 11:06 am

  12. David suggests: “We should all get together and have a party – I love playing these recordings, particularly to people willing to listen carefully with an open mind. And I am happy to listen to recordings of others.”
     
    That’s a great idea!  I’m all for it — a spirited listening session for anyone who cares.  I’m rather far away from Boston but maybe sometime next year.
     
    David also remarks: “The sound mix from Great Performances and the MET broadcasts is stereo with some surround. No attempt is made to broadcast a meaningful center channel.”
     
    Does anyone else besides me notice that on PBS feeds, the audio is really bizarre, with a strange light ripping sound that modulates with high strings, brass and woodwinds?  Is it me or is it my PBS affiliate, or PBS itself?  I occasionally hear this strange artifact on other networks, and live events like football games.  Ten years ago, stereo audio on TV was clean, and now it’s not.
     
    Note to Chris:  I’d also recommend two of the EMI Klemperer Wagner bleeding chunks collections, which also have incredibly natural sound without highlighting.  You can probably find them on Amazon or eBay, and I definitely recommend them.
     
    David also remarks: “Recording and playback of live recordings was my laboratory – and was unpaid. I had an opportunity to experiment that is simply not available to most engineers in this field, and I am immensely grateful for it.”
     
    Partially the same for me:  A friend purchased the SF mic (serial #13) back in 1987, and from 1995 until about two or three years ago, I used it on long-term loan, with great success, and some — but not a lot — of income.  I suspect that a lot of us are doing this for the heck of it, because we learn things, but there’s no financial payoff, because few people really care about these things.

    Comment by Don Drewecki — October 13, 2011 at 11:50 am

  13. *** I thank the above people for their comments, although I am surprised and somewhat saddened by Romi’s vitriol.

    David, I assure you that there was no vitriol on my part. I do possess a strong anti-establishment attitude, and my response was an illustration that your comments about surround sound, very much like John Newton’s comments about the advantage of SACD over PCM, were driven by the convictions of professional association rather than by actual empirical results. If I imported into the US the meat of the Syrian fat-tailed sheep, then you probably would not extend too much credit to my lectures about the evils of vegetarianism, would you?


    *** Our team spent thousands of hours switching between stereo playback and digitally derived surround with the goal of keeping the front image at least as good as the original while increasing the size of the sweet spot and making the walls of the room disappear.

    If increasing the size of the sweet spot and the effect of “walls disappearing” are the ONLY objectives, then for sure it is very possible to accomplish them by multi-channeling and by a few other means. I guess the degradation of the sound by any other assessable criterion does not count if the sweet spot gets larger. Again, it was not vitriol but sarcasm….


    *** We think we succeeded, although the process never caught on for home playback – probably because the speaker systems did not have the needed uniformity of phase response.

    Isn’t it ironic that you advocate the uniformity of phase response (very much as I do) but at the same time you propose to flood listening rooms with phase randomness coming from surround channels?

    Anyhow, I do insist that multi-channeling is a very faulty direction, and not only because it is mostly implemented nowadays by ignorant people and for ignorant people. Multi-channeling has fundamentally faulty objectives. Two-microphone recordings and playback with two properly positioned loudspeakers (with the individual channels in the loudspeakers time-aligned) are all that is necessary for proper sound reproduction. Yes, I have heard good examples of multi-microphone recordings — the same John Newton’s Ravel Daphnis et Chloé, for instance — but they are just fortunate exceptions, and statistics indicate that with two, or a very conservative number of, microphones we have more recording hits than misses.

    Comment by Romy The Cat — October 13, 2011 at 12:36 pm

  14. As one who has heard demonstrations at the homes of both Romy and David, may I be permitted a couple of non-expert observations?

    Romy’s two channel system using horns and triodes is amazing. I’ve never heard more clarity or been able to tease out individual instruments any more clearly than on that system. Yet, if one’s head is more than a few millimeters outside the sweet spot, then the entire edifice crumbles and one is left hearing a very good rather than an amazing sound.

    The sound from David Griesinger’s Logic7 multichannel decoder has the advantage of sounding very good over a very large area though not superb anywhere.

    Each approach has its merits.

    Comment by Lee Eiseman — October 13, 2011 at 1:39 pm

  15. It’s time I suppose for me to say this: No recording or sound reproduction can be properly assessed without due attention paid to getting the acoustic polarity correct. Half the time when I’m present at a demonstration or evaluation (or at a studio playback) I discern that the music is opposite of what’s correct, which inevitably affects the decisions being made. (If someone doesn’t know what I’m talking about, then I’d be glad to explain — polarity is one of my main hobbyhorses.) Establishing correct polarity is the sine qua non of right practice in audio… and it’s free!

    Comment by Clark Johnsen — October 13, 2011 at 2:17 pm

  16. It seems to me that there are several problems in producing natural sounding high fidelity stereo recordings that are the result of using multi-miking techniques.
     
    I hear comments that a recording should sound better than being there – really?
    Better than reality?
     
    The recording process should aim at recording a musical event so that during playback, the listener experiences the music as if he were present at the live session.   The sound should be natural in the sense that the recording has accurately captured the full sound of the musical instruments in an acoustic space as heard in a live performance.  This means that the recording must not only capture the sound of the strings of a violin, but it must also capture the sound radiating from the body of the instrument and the reflections coming from the floor and walls of the hall, i.e. the hall in which it was recorded.
     
    In my opinion, the only way to obtain this result is to use true stereo microphone techniques i.e. Blumlein, ORTF, NOS and X/Y.
     
    It may be helpful in this regard to consider how stereoscopic vision works.  With only two eyes, we humans can tell instantly where something is located, both in direction and distance (depth), and of course we can detect color and light levels as well. This is possible because each eye sees the image from a slightly different angle, and this difference alone can give us a full picture of the scene we are viewing.  Stereo cameras were developed to capture this sense of depth, usually called 3D vision. (Can you envision a photograph that was pieced together from 30 different cameras all placed in different positions?  How realistic would that look?)
     
    Stereoscopic hearing works the same way.  With only two ears we can perceive both direction and depth and of course pitch and tone color.
     
    Placing a stereo pair of microphones in a well chosen position comes closest to emulating our natural stereoscopic hearing just like a stereoscopic camera gives a more natural photographic rendition of a scene.
     
    It is my contention that multi-miking techniques destroy the natural stereo presentation of the music.  In fact, we no longer have stereo at all but rather multi-miked mono.
     
     
    Unfortunately, most recordings made today use this multi-microphone technique.   Sometimes a microphone is placed in front of each instrument and it is not uncommon to see several microphones being used just to record a drum kit, for instance.  As many as twenty or thirty microphones can be used to record a symphony orchestra.  I have seen a piano being recorded with up to four microphones placed just a couple of inches from the piano strings.
     
    When recordings are made this way, it is impossible to capture the correct spatial dimensions of the original event in terms of width, depth, “air” and soundstage, nor is the original spatial relationship between instruments preserved.
     
    A concomitant problem of multi-miking results from the need to place microphones very close to the instruments.  The sound of an instrument miked very close is very, very different from its sound when miked at a more realistic distance.  A listener, even in a small venue, never sits within a few inches of any instruments.  The distance between instrument and listener is necessary for the full development and propagation of the sound wave produced.
     
    I have just recorded a solo harpsichord CD to be released next year.  Before the actual recording session, several venues were considered as well as several microphones.  But we found that microphone placement was by far the most important factor when trying to capture the sound of the instrument.  In the end, we used a pair of Schoeps CMC64’s in NOS configuration.  The microphones were placed five feet up and five feet back.  (This is close for us, but the harpsichord is a rather soft instrument.) The artist I was working with had had other recording experiences where the microphone was placed inside the harpsichord case just a couple of inches above the strings, which resulted in a harsh, irritating sound so unlike the warm, sensuous sound of the instrument.
     
    Another thing we did in prepping for the recording was to listen to other harpsichord recordings which we both found to have been close miked with what sounded like added artificial reverb. They had no “air” or ambience and sounded very clangy (as my wife describes it).  We were unhappy with these recordings and vowed that our project would not sound like that!
     
    A third problem resulting from multi-miking is what happens after the recording has been made.
     
    During the mixing process, the many monophonic tracks are blended together to create “artificial stereo”.  It is artificial because a mixing engineer makes the decisions as to where each instrument is placed on the soundstage.  That is, the engineer decides whether an instrument is placed in the left channel or the right or any place in between.   The placement of the instruments may or may not be close to their original position.  However, with this method, it is impossible to create a natural sense of depth or to recapture correct spatial cues.
     
    In addition to placing each instrument on an artificial soundstage, the mixing engineer can decide to make one instrument louder or softer based on some subjective decision.  In effect, the engineer is changing the dynamics which could very well change the intent of the composer or the musicians.  This might be okay for a rock band where almost everything is loud, but it is not ideal for a small chamber group or a large symphony orchestra.
     
    Now, even more processing can be added by using more gear that simulates the sound of different music halls around the world.  I have always felt that Symphony Hall has a wonderful warm ambience that doesn’t need to be messed with.
     
    The overuse of electronic gadgets can further detract from the original musical event by adding artificial artifacts, and decisions by mixing and mastering engineers can further help to destroy the musical intent of the musicians and the tonal and spatial sound of the live event.  The more we mess around, the less natural the sound gets and the further we get away from the music.
     
    Sometimes simple is better!
     
    Another contributor asked if there are any professional engineers who are NOT using spaced omnis.  Chesky Records uses strictly the Blumlein configuration.  Water Lily Acoustics uses Blumlein and other near-coincident placements, and Mapleshade Records used a baffled pair.
     
    -Walter Klimasewski  Pro Musica Recordings, Bristol, RI

    Comment by Walter Klimasewski — October 13, 2011 at 7:02 pm

  17. “The sound should be natural in the sense that the recording has accurately captured the full sound of the musical instruments in an acoustic space as heard in a live performance. ”

    From which seat?  Orchestra Row K,   Seat 20? Balcony, Row A,  Seat 12?  Five feet above and 10 feet behind the conductor?  

    Comment by duvidl — October 14, 2011 at 12:33 pm

  18. Second Balcony, Row H, center of row (I forget the seat number).  ;)

    Comment by Joe Whipple — October 14, 2011 at 3:14 pm

  19. Thanks to Walter for his comments, especially: “Another contributor asked if any professional engineers who are NOT using spaced Omni’s. Chesky Records uses strictly Blumlein configuration.  Water Lily Acoustics uses Blumlein and other near-coincidence placement, and Mapleshade Records used a baffled pair.”
     
    I second the recommendation of the Water Lily CD of the Philadelphia Orchestra under Sawallisch, recorded in the Academy of Music.  The only small problem with that is ever-so-slight distortion and breakup in the loudest passages.  Otherwise, it is a superb reproduction of the Fab Phils in their former home.  The dry acoustic bothers me not one bit.  (I think you can still get that CD through Loren Lind, a flautist [flutist?] in the orchestra.)

    Comment by Don Drewecki — October 14, 2011 at 4:21 pm

  20. Actually, Walter, let me rephrase what I think you mean by: “It is my contention that multi-miking techniques destroy the natural stereo presentation of the music.  If fact, we no longer have stereo at all but rather, multi-miked mono.”
     
    What you really mean is “multi-miked, multi-channel mono” — and they have to add reverb to add gloss on the spot mics to make it seem that there is a true left-to-right soundstage with no holes.
     
    Anyway, I want to express my deep gratitude for this debate and the chance to express my viewpoints.  To have a forum like this is a wonderful thing, it really is.  Maybe one of these days, WGBH/WCRB/BSO will offer live relays using different types of mic pickups, one per concert, just for the science/knowledge/fun of it.  Of course, you’d have to limit it to pure-orchestra events with no soloists.  The Riccardo Chailly debut in January would make a splendid first attempt.

    Comment by Don Drewecki — October 14, 2011 at 4:27 pm

  21. “‘The sound should be natural in the sense that the recording has accurately captured the full sound of the musical instruments in an acoustic space as heard in a live performance.’ 
     
    From which seat?  Orchestra Row K,   Seat 20? Balcony, Row A,  Seat 12?  Five feet above and 10 feet behind the conductor?  
     
    Comment by duvidl — October 14, 2011 at 12:33 pm
     
    Second Balcony, Row H, center of row (I forget the seat number).  ;)
     
    Comment by Joe Whipple — October 14, 2011 at 3:14 pm”
    Or maybe First Balcony Left, C 37. LOL

    On a more serious note, I must say this has been very interesting, even though I know nothing about the technology involved. 

    Comment by Joe Whipple — October 14, 2011 at 4:42 pm

  22. Let me chime in on Walter’s other comment: “A third problem resulting from multi-miking is what happens after the recording has been made.”
     
    My example would be the 2-CD set of live performances by the TMC Orchestra at Ozawa Hall, which the BSO issued a few years ago.  Since I live in the Capital District area of NYS, Tanglewood is an hour drive away.  I have sat in Ozawa Hall countless times for concerts, since it opened 17 years ago.  I know what Ozawa sounds like.
     
    This CD set sounds nothing like Ozawa Hall.  Again, there is the overall phasiness of the sound; woodwinds are spot-miked and, thus, spot-lit; basses are poorly localized and indistinct; and, finally, there is a halo of reverb decay, clearly added in post.  Ozawa’s decay rate is very quick in real life, not the long, gradual decay in the recording.  Strings and brass are brighter in the hall, while they come across as veiled and muted in the recording.
     
    This is why I decry the Church of the Sacred Spaced Omnis — they take real life sound, alter it according to their hunger for phasy/hazy sound, and don’t let a hall and players speak for themselves.  Ozawa Hall’s acoustics require no apology, and therefore no tampering.  Leave it alone.  This the BSO/Tanglewood engineers refuse to do. 
     
    Further note to Chris and everyone else:  Another great early EMI Klemperer/Philharmonia recording to get on CD is the Brahms Second Symphony, coupled with the Alto Rhapsody.  There’s no added reverb on it, just the natural original acoustics, non-highlighted winds, and OK’s divisi violins.  That’s another keeper.

    Comment by Don Drewecki — October 15, 2011 at 11:07 am

  23. “What you really mean is ‘multi-miked, multi-channel mono’ — and they have to add reverb to add gloss on the spot mics to make it seem that there is a true left-to-right soundstage with no holes.
     
     Comment by Don Drewecki — October 14, 2011 at 4:27 pm”
     
     Don, thank you for this expanded and more accurate description!
     
     “‘The sound should be natural in the sense that the recording has accurately captured the full sound of the musical instruments in an acoustic space as heard in a live performance.’
     
     From which seat? Orchestra Row K, Seat 20? Balcony, Row A, Seat 12? Five feet above and 10 feet behind the conductor?
     
     Comment by duvidl — October 14, 2011 at 12:33 pm
     
     Second Balcony, Row H, center of row (I forget the seat number). ;)
     
     Comment by Joe Whipple — October 14, 2011 at 3:14 pm
     
     Or maybe First Balcony Left, C 37. LOL”
     
    Actually, at Symphony Hall I prefer row X, seats 17 and 18.  I can stretch out my legs and there are no heads in view to avoid. It also puts me in the center of the orchestra from a left/right perspective.
     
    However, the comments (quips?) above sort of miss the point I was trying to make.  What these comments are referring to is perspective.  No matter where one sits in the concert hall, the gestalt of the music event is still extant.  Usually when I work with a client, I try to elicit from them an idea of the perspective that they would like to hear in the recording.  For example, in recording Purcell’s Dido and Aeneas a few years ago with Newport Baroque and Sine Nomine, conducted by Paul Cienniwa, Paul stated he wanted a somewhat distant perspective so that the voices in the chorus blended in a way that there was less emphasis on the individual voice.  But even with that perspective, the musical event happened in a natural acoustic space.
    My contention is that when multi-miking multi-track methods are employed, the natural acoustic space is fragmented and this fragmentation is exacerbated by close miking.  The gestalt of the musical event has been destroyed. 
    -Walter Klimasewski – Pro Musica Recordings

    Comment by Walter Klimasewski — October 15, 2011 at 11:45 am

  24. Walter, they never know where to stop. The complexity of orchestra positioning, or a shortage of recording time, makes them use multiple microphones. Then immediately the problem with directivity comes…  Mixing multiple microphones is tricky, so here is where multi-tracking “saves” the day. In the end everything boils down to a moron sitting in the editing room making the decision that the harps did not sound lush enough and need more reverberation, the snare drum was too jumpy and needs compression, and the trumpets needed some hard limiting. Besides the unfortunate technical fact that the digital format is not editable by topology, we end up with a situation where audio people override what the conductor/musicians were intending to do. Knowing that musicians are not too valuable as audio experts, it is no surprise that we end up with a huge number of “stupidly” engineered recordings. So, the initial intent to have just two microphones was partially an attempt to eliminate audio tech people from the entire ceremony of sound reproduction.

    Comment by Romy the Cat — October 15, 2011 at 12:48 pm

  25. Romy the Cat, you bring up another very important point, and that is how financial considerations override the Art of Recording. Studio time and musician time need to be trimmed to keep costs down, so they just get something recorded and rely on the editor and mastering guy to try to put it together.  The multi-track approach may help the editor too.  

    I really find that most people don’t know what live music sounds like.  They don’t pay attention to what they are hearing.  I find this even among musicians who sit in the band or orchestra, but not so with conductors.  Many of these people have no idea what the ensemble sounds like from the audience’s point of view.  Have you ever noticed that so many musicians have the worst stereo systems?

    I don’t know if you have ever seen or used some of the software programs that are used to record, mix and master a CD.  It is much as you describe it.  Each track can be manipulated separately by applying EQ or using hundreds of “plug-ins” to add the most hideous effects.  The idea of adding reverb to one section, boosting the bass for another, reducing the presence range in another, etc., does not even pass the common-sense test.  (I have occasionally applied EQ, but it was applied to the entire recording.)  This results in a disembodied orchestra with each section in its own phony acoustic space.  This is why I say the gestalt of the musical event is destroyed.

    Also, I agree with you that the editing and mastering engineers are second-guessing what has always been the musicians’ and the conductor’s job, i.e. determining the balance between the sections as well as the dynamics.  I was recently invited to be part of an advisory board for the RI state colleges that want to develop a recording-engineer track.  Most, if not all, of the engineers there only talked about multi-tracking techniques.  When I mentioned to one engineer that I only use two mics, he was astounded and asked, “How can you record with only two mics?”  If you look at the curriculum of most of the colleges that teach the subject, multi-tracking methods are the norm and a guy like me is considered to be very strange indeed.  So when these students get out of school, they accept that there is no other valid method of recording than what they have been taught.  So there is very little hope of much changing soon.

    A good rule that might be helpful when thinking about the number of mics is:  When I add a third mic, what sonic gain do I hope to gain from using it?  When I add a fourth mic, what sonic gain do I hope to gain from it? And so on.   I think most devotees of multi-tracking don’t ask these questions; they just assume you need a lot of mics because that is how they were taught.

    Comment by Walter Klimasewski — October 17, 2011 at 11:05 am

  26.  *** I really find that most people don’t know what live music sounds like.  They don’t pay attention to what they are hearing.  I find this even among musicians who sit in the band or orchestra, but not so with conductors.  Many of these people have no idea what the ensemble sounds like from the audience’s point of view.  Have you ever noticed that so many musicians have the worst stereo systems?

    Walter, I am astonished that you are surprised by the fact that “musicians” and non-musicians hear Sound differently. This is a very well-researched subject in psychoacoustics and among audiologists, and doing what you do, it should not be a “revelation” for you.

    *** I don’t know if you have ever seen or used some of the software programs that are used to record, mix and master a CD.  It is much as you describe it.  Each track can be manipulated separately by applying EQ or using hundreds of “plug-ins” to add the most hideous effects.  The idea of adding reverb to one section, boosting the bass for another, reducing the presence range in another, etc., does not even pass the common-sense test.  (I have occasionally applied EQ, but it was applied to the entire recording.)  This results in a disembodied orchestra with each section in its own phony acoustic space.  This is why I say the gestalt of the musical event is destroyed.

    Yes, sure. There is more to it. If you take 4 members of a quartet, put them in 4 isolated booths and feed them with real-time feedback, then you will very unlikely have harmony in their playing.  The things that shall come along together shall come along together…

    *** When I add a third what sonic gain do I hope to gain from using it…. 

    Walter, I am sure that if things are done properly then multi-miking, even with an enormous number of superfluous microphones, might still be very fine. The problem is that this happens VERY rarely. On the contrary:  if only 2 microphones are used, then no matter how unintelligently the recording was done, the damage is more or less rectifiable.  With multi-miking gone badly, the fabric of the sound becomes terminally fractured.  Since THAT TYPE of fracturing does not have any equivalent in nature, our brain has no mechanism to extrapolate what it hears. As a result we experience discomfort and our process of “getting” into the music gets damaged. With 2 mics we have a much more forgiving environment. For sure we might have a somewhat improperly balanced orchestra, but those types of aberrations are natural and our brains accommodate those discrepancies very easily.

    Comment by Romy The Cat — October 17, 2011 at 12:48 pm

  27. Romy writes: “…Yes, sure. There is more to it. If you take 4 members of a quartet, put them in 4 isolated booths and feed them with real-time feedback, then you will very unlikely have harmony in their playing.  The things that shall come along together shall come along together…”
     
    Here’s an unusual approach that I used to make successful string quartet recordings in the Union College Memorial Chapel, e.g. the Emerson Quartet, Brooklyn Rider, etc.
     
    When we hear a string quartet play in a hall, we are probably 20, 30 or 40 feet away, and so we pretty much hear those instruments at a uniform distance from our ears.  But if we mic quartets for a recording of that concert, usually the front instruments stand out and occasionally cover over the inner ones.  How to solve this dilemma?
     
    Some years ago, Sony Classical reissued some Budapest Quartet historic recordings from Liederkranz Hall, made in about 1940-41, in their Masterworks Heritage series.  Two pictures taken at the recording sessions show that the Columbia team solved the balance problem by placing an RCA 44 on a large boom, right over the players’ heads, facing down at a right angle to the quartet in the middle of their semi-circle.  So I adopted something like that with the SF mic, but about four feet higher, tilted down on a Shure mic stand, aimed at the back instruments but not completely. 
     
    Voila!  The balance of the inner instruments was now far stronger in relation to the outer ones.  And you could differentiate left, center left, center right and right beautifully.  All instruments were precisely balanced, textures were clear, and I still had enough ambience from behind the mic to give the instruments some authentic hall gloss, with no tricks.  I had the best of all worlds — equal balance, soundstage precision, and hall ambience.  And if I needed to electronically steer the image ever so slightly to get a uniform weight of sound, I could do so.  I have adopted this approach ever since.  You could not have accomplished this with spaced omnis.
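     
    To put rough numbers on why the overhead placement evens out the balance, here is a small inverse-square-law sketch (the distances are purely hypothetical, not measurements of the Chapel setup or of any actual session):
     
        # Rough inverse-square-law comparison of two hypothetical mic placements.
        import math

        def level_difference_db(near_ft, far_ft):
            """Direct-sound level difference between two players at different distances from one mic."""
            return 20 * math.log10(far_ft / near_ft)

        # Mic on a low stand in front of the quartet: nearest player ~3 ft away,
        # farthest player ~8 ft away.
        print(f"low stand: far player ~{level_difference_db(3, 8):.1f} dB quieter")

        # Mic on a tall boom ~6 ft above the center of a ~3 ft-radius semicircle:
        # every player is roughly sqrt(6^2 + 3^2) ft from the capsule.
        d = math.hypot(6, 3)
        print(f"overhead boom: each player ~{d:.1f} ft from the mic, "
              f"~{level_difference_db(d, d):.1f} dB difference")
     
    With the low stand the far player’s direct sound arrives about 8.5 dB down; with the boom over the center of the semicircle the path lengths are nearly equal, which is consistent with the evened-out balance described above.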

    Comment by Don Drewecki — October 17, 2011 at 1:07 pm

  28. I would like to clear up a few statements made by Dave regarding the technical background and history of SACD.

    The chip that I and others designed at Analog Devices back in the late 1980’s was one of the early examples of a 1-bit design using higher-order noise shaping. Later I and others introduced the concept of multi-bit noise-shaped converters, which is what most people use today. The addition of more bits inside the noise-shaping loop allows the use of lower-order feedback filters and eliminates the mechanism that causes idle tones (which plagued many early designs).

    The reason that particular early chip provided access to the 1-bit output stream was simply to provide a test mode so that if the output at 48 kHz was not working well, we could debug the chip and find out where the problem was occurring. That mode was not documented in the datasheet, but when the SACD concept was first proposed we let a few people in on our secret test mode, and as a result some of the early mastering equipment used that IC. However, we obsoleted that part sometime in the mid-90’s, so none of the recent SACD encoders use that IC.

    Single-bit noise-shaping converters were used extensively in telecom starting in the 1970’s. I and many others (including Philips, Crystal Semiconductor, Sony, etc.) saw that this type of converter could be used for professional audio if extended to use higher-order filters and/or more than 1-bit quantization, and during the 1980’s and 90’s there were many advances in the field.
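
    For readers who have never seen what “noise shaping” means in practice, here is a minimal illustrative sketch in Python of a first-order 1-bit loop (a toy example only; it is not a description of any Analog Devices part, and modern converters use higher-order, multi-bit loops):

        # Toy first-order 1-bit sigma-delta (noise-shaping) modulator -- illustration only.
        import numpy as np

        def sigma_delta_1bit(x):
            """Turn samples in [-1, 1] into a +/-1 bitstream; the quantization error
            is fed back through an integrator, which pushes it up in frequency."""
            y = np.empty_like(x)
            v = 0.0                                 # integrator state
            for n, s in enumerate(x):
                v += s - (y[n - 1] if n else 0.0)   # input minus previous output
                y[n] = 1.0 if v >= 0.0 else -1.0    # 1-bit quantizer
            return y

        # A 1 kHz tone oversampled at 64 x 44.1 kHz, roughly the DSD rate.
        fs = 64 * 44100
        t = np.arange(fs // 10) / fs                # 0.1 s of signal
        x = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
        bits = sigma_delta_1bit(x)

        # Crude reconstruction: a short moving average stands in for the proper
        # low-pass decimation filter a real DAC would use.
        recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
        print("RMS error after low-pass:", np.sqrt(np.mean((recovered - x) ** 2)))

    Even with only two output levels, the low-passed bitstream follows the tone, because the feedback loop pushes the quantization error up and out of the audio band; the cost is a large amount of shaped noise at ultrasonic frequencies.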

    I always thought SACD was a bad idea, for the following reasons:

    1) The bit-rates are too low, and therefore there is not enough spectral distance between the edge of the extended audio band and the rising noise. If you want to take advantage of the supposed higher bandwidth, you need to let a lot of this noise through to the output, where it can potentially cause trouble in downstream equipment (or even fry your tweeter). Note that the lower the bit-rate, the faster the noise must rise and the harder it is to filter out (the rough sketch after point 2 below shows how steeply this shaped noise climbs above the audio band).

    2) Converter architectures evolve, and the 1-bit variety disappeared in the mid-90’s. In fact most SACD encoders now start with a multi-bit signal and then digitally re-modulate that signal into the SACD format, because there are no more commercial sources of 1-bit converters with a test mode that allows access to the 1-bit stream. So why is it a good idea to standardize on a format that is a snapshot of converter technology that was abandoned in the mid-90’s? With high-sample-rate PCM, the underlying oversampled converter architectures can continue to evolve according to the most recent advances, while keeping the same data format, so if you want to argue that extended bandwidth is important, then it is preferable to just use 96/192K sample-rates.
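
    As a rough illustration of both points, here is a self-contained toy sketch (hypothetical parameters; a real SACD encoder uses higher-order modulators and proper interpolation filters, not this first-order loop): a PCM tone is upsampled by 64, re-modulated to a 1-bit stream, and the spectrum of that stream is compared inside and above the audio band.

        # Re-modulating a PCM signal into a 1-bit (DSD-like) stream, then comparing
        # audio-band and ultrasonic noise.  Illustration only -- toy first-order loop.
        import numpy as np

        def sigma_delta_1bit(x):
            y = np.empty_like(x)
            v = 0.0
            for n, s in enumerate(x):
                v += s - (y[n - 1] if n else 0.0)
                y[n] = 1.0 if v >= 0.0 else -1.0
            return y

        fs_pcm, osr = 44100, 64
        t = np.arange(fs_pcm // 4) / fs_pcm                   # 0.25 s of a 1 kHz tone
        pcm = np.round(0.5 * np.sin(2 * np.pi * 1000.0 * t) * 32767) / 32767

        # Zero-order-hold upsampling (a real encoder would use an interpolation filter).
        bits = sigma_delta_1bit(np.repeat(pcm, osr))

        # Average noise power near the audio band vs. in the ultrasonic region.
        spectrum = np.abs(np.fft.rfft(bits)) ** 2
        freqs = np.fft.rfftfreq(bits.size, d=1.0 / (fs_pcm * osr))
        in_band = spectrum[(freqs > 2e3) & (freqs < 20e3)].mean()
        ultrasonic = spectrum[(freqs > 50e3) & (freqs < 500e3)].mean()
        print(f"ultrasonic noise is ~{10 * np.log10(ultrasonic / in_band):.0f} dB "
              f"above the audio-band noise")

    Run this way, the shaped noise comes out on the order of tens of dB stronger in the ultrasonic region than within the audio band, which is exactly the trade-off in point 1: keep the extra bandwidth and you keep that noise, or remove it with a very steep filter.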

    Note that I stood in front of large audiences at AES on several occasions and expressed my negative opinion of using a 1-bit signal as a recording format. Sony Japan were not too happy about this (as I heard later), but I have to give a lot of credit to the folks at Oxford digital in the UK, who were building the short-lived SACD mixing desk; they were always very open to factual discussions and had doubts themselves about the format.

    I infer from Dave’s comments that he believes some of the following:

    1) 1-bit noise-shaped converters evolved because they are cheap to build, and if we really cared about signal quality we could build much better converters.
        – not true; it’s the only way to get good low-level linearity. The best converters today are all based on the principle of noise shaping, although they are no longer internally 1-bit. Even if you had an unlimited amount of money to spend on a converter, you would still choose a noise-shaped topology for reasons too numerous to go into here.

    2) We claim the 1-bit format has magical “purity”
        – converter manufacturers do not claim this; our customers’ marketing departments do. Don’t forget that converter designers abandoned the 1-bit concept in the mid-90’s. Plus, we regard these formats as intermediate signals inside our converters that were never meant to see the light of day.

      Hopefully this clears up some of the confusion.

    The only good thing I have to say about SACD is the following. Ten thousand years in the future, when some creature discovers an SACD recording during an excavation, they don’t have to know a thing about the format; you just take the bits and feed them into a speaker, and Justin Bieber will once again be famous.

     
     

    Comment by Robert Adams — November 5, 2011 at 7:18 am

  29. Robert, it is all correct, with the exception that when SACD as a concept was initially conceived and rendered in a few experimental processors, it was a 4-bit process. The 4-bit SACD was phenomenal. Then, as SACD hit the public, it became the crappy 1-bit process.

    Comment by Romy the Cat — November 5, 2011 at 11:26 pm

  30. I agree that if it were 4 bits all my concerns would be answered. Too bad it got cut down, but I’m guessing they couldn’t make the discs cheaply enough.

    Comment by Robert Adams — November 7, 2011 at 8:47 pm

  31. Yes, and it was Bob Adams who was responsible for the original pioneering low-bit multibit work, while at dbx. (Makes no difference in any case for home playback; higher-rez does not audibly improve whatsoever on properly done Red Book CD, as has been extensively shown in blind testing.)

    Otherwise, it’s amazing to me how so many pontificate here, baldly asserting this or that without really knowing firsthand what they’re talking about. Impugning someone who has done the work that David Griesinger has done. Not knowing from experience that for a great many concertgoing listeners the most accurate home simulation of their event would indeed be playback in mono with the treble turned all the way down. Etc. Good grief.

    Comment by david moran — November 7, 2011 at 9:12 pm
