Defining Audio Fidelity

Ethan Winer, author of The Audio Expert, on the building blocks of sound quality.

Audio has been around for a long time.

Its first practical application dates back to at least 1876, when Emile Berliner invented an early microphone for use in the newly invented telephone.

Even the basic principles of digital recording were realized as far back as 1928, when Harry Nyquist developed his famous sampling theorem, refined by Claude Shannon in 1949.

Indeed, pretty much everything we know today about how to define audio and fidelity has been understood for well over half a century.

Of course, people will still argue about whether tube gear is “better” than solid state, whether vinyl records are “better” than CDs, or whether there’s a benefit to recording and distributing music at sample rates higher than needed to capture the audible range up to 20 kHz.

Thankfully, these questions become a lot easier to answer once you understand how audio fidelity is defined and measured, and what people mean by “better”.

The Components of Audio Fidelity

Sound, for all the nuance and excitement it offers, operates in only two dimensions: time and intensity. In turn, only four parameters are needed to assess everything that affects audio fidelity: frequency response, distortion, noise, and time-based errors.

These are really parameter categories, and most have several subsets. For example, under noise there is hum and buzz, tape hiss, and the crackles of a vinyl record. There are also a few main types of distortion: harmonic and intermodulation, as well as digital aliasing.

Time-based errors include the wow and flutter from phonograph records and analog tape, and jitter in digital recording. Frequency response can vary depending on intensity, and in the case of microphones and speakers, distance and direction.

These four basic parameters define everything that affects audio fidelity. As powerful as it is, there’s no secret “magic” to sound, no unknown parameters that “science” hasn’t yet learned to identify, as is sometimes claimed. The closer an audio signal is to the source across these four parameters, the higher its fidelity.

How can we be so certain that audio engineers won’t discover a new aspect of fidelity in the future? One answer is the null test.

Isolating Issues with the Null Test

A null test identifies all differences between two audio sources, including differences you might not even think to look for. To perform one, you simply mix the two signals together at equal volumes, with the polarity of one reversed.

For example, you could null the input and output of a microphone preamp to see how it changes the signal. Once you’ve matched the levels, whatever doesn’t cancel out is noise, distortion or frequency coloration added by the preamp.

This kind of nulling has been used to test fidelity since the 1940s, when Hewlett-Packard analyzers pioneered the technique to isolate noise and distortion from an audio test signal. If there really were some other aspect of audio fidelity, it would have been revealed years ago in a null test.
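As a rough illustration, the nulling procedure can be sketched in a few lines of Python. The test tone, the simulated “device” (a tiny gain error plus low-level noise), and all the numbers here are invented for the example, not taken from any real preamp:

```python
import numpy as np

# One second of a 1 kHz test tone at a 48 kHz sample rate.
sr = 48000
t = np.arange(sr) / sr
source = np.sin(2 * np.pi * 1000 * t)

# Simulate a device under test: a 0.1% gain error plus low-level noise.
rng = np.random.default_rng(0)
output = 1.001 * source + 1e-4 * rng.standard_normal(sr)

# The null test: flip the polarity of one signal and sum. Whatever
# remains is everything the "device" changed: noise, distortion,
# and frequency coloration.
residual = output + (-1.0 * source)

def level_db(x):
    """RMS level in dB (relative, since these are unitless samples)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Null depth: how far below the source the residual sits, in dB.
# A deeper null means the device changed the signal less.
depth = level_db(source) - level_db(residual)
```

In practice the two signals must be time-aligned and level-matched very precisely before summing, or the residual will be dominated by the mismatch rather than by what the device actually did to the signal.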

With techniques like these, we can evaluate, with little question, which devices capture sound in a way that is most accurate to the original source. This goal of achieving high fidelity, however, is different from creating sounds that are pleasing to our ears.

An equalizer set to boost the bass or treble might sound better on a particular mix, and a fuzz-tone or flanger effect might improve the sound of an electric guitar track. Here, accuracy to the source is not necessarily the goal. During the creation process, recording engineers deal with both accurately capturing sound sources and adding effects to make the music sound even nicer to the ear.

But once a mix is completed to everyone’s satisfaction, the goal from there on should generally be to change the sound as little as possible until it reaches the consumer’s ears. Otherwise, the artist’s intent is corrupted. With this in mind, I believe the goal of most audio playback equipment should be transparency.

Audio Transparency Defined

In the real world, no audio device outside of a computer can reproduce a signal with 100% fidelity to the original source. But, if the noise and distortion from an audio device is too soft to hear at normal volumes, and the frequency response is flat enough to not notice a difference between engaged and bypassed, then that device can be considered audibly transparent. And today, after nearly 150 years of development, we have access to some of the most transparent audio devices in history.
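To make “too soft to hear at normal volumes” concrete, here is the back-of-the-envelope arithmetic. Both figures are assumptions chosen for illustration (a peak playback level of 85 dB SPL and a device whose combined noise and distortion sit 100 dB below peak), not measurements of any particular device:

```python
# Back-of-the-envelope transparency check; both numbers are assumed
# for illustration, not measured from any real device.
playback_peak_spl = 85.0      # loud but typical peak listening level, dB SPL
device_residual_rel = -100.0  # device noise + distortion, dB below peak

# Absolute level of everything the device adds, at this playback volume.
residual_spl = playback_peak_spl + device_residual_rel

# 0 dB SPL is roughly the threshold of hearing at midrange frequencies,
# so a residual well below it cannot be heard at this level.
is_transparent = residual_spl < 0.0
```

With these numbers the device’s entire contribution lands at -15 dB SPL, well under the threshold of hearing, which is the sense in which such a device is audibly transparent even though it is not mathematically perfect.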

A good understanding of how audio fidelity and measurement work is especially important now, as audible differences between devices have grown increasingly small in many cases. But sometimes, people grow suspicious of the science without really understanding it.

We all know someone who insists they hear an improvement after changing one perfectly good speaker wire for another, or after upgrading a sound card that measurements and listening tests already show to be audibly transparent. Many of us have obsessed over these little changes ourselves, even when the measurements suggest we are chasing phantoms. But why?

There are a few good reasons for this:

One is that our hearing memory is surprisingly short-term. If it takes fifteen minutes to change speaker wires or connect a new converter to your computer, it’s difficult to recall exactly how the system sounded before the change.

Further, every time we hear a piece of music we tend to notice new details. Is the shimmer of that brushed ride cymbal really clearer now, or did we simply not notice this quality before?

And even when we can account for these issues, our brain plays tricks of its own, through the universal human effects of placebo and confirmation bias. As good as our ears may be, our minds are far more powerful.

Some claims, like those regarding a component’s increased bass fullness or brightness, are easy to verify or disprove. They are matters of frequency response, easily measured with simple test signals. Often, the elements that have the most drastic effect on frequency response are physical ones: transducers like microphones and speakers, and even the rooms in which we record and listen to our music.

I’ve long been convinced that one of the main reasons bass quantity and quality can seem to change—even when no change can be measured in the device—is room acoustics.

All rooms have peaks and nulls, even when they’re well treated with bass traps. In many rooms, the response at bass frequencies can change by 10 dB or more across distances as small as a few inches. I once measured a 13 dB difference at 70 Hz between locations only four inches apart in a bedroom-sized space. So unless your head and ears are in precisely the same place when auditioning two devices, the response you hear really can change, even when the change is not in the audio equipment.
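An idealized standing-wave model shows how swings of this size arise. Near a pressure null, level changes very rapidly with position. The listening positions below are chosen for illustration and are not taken from my measurement; a real room’s modal behavior is far messier than a single ideal standing wave:

```python
import math

c = 343.0                  # speed of sound in air, m/s
f = 70.0                   # frequency, Hz
k = 2 * math.pi * f / c    # wavenumber, rad/m

def level_db(x):
    """Relative level of an ideal standing wave, |cos(kx)|, in dB,
    at distance x (meters) from a rigid wall."""
    return 20 * math.log10(abs(math.cos(k * x)))

# Two listening positions 0.10 m (about four inches) apart, near the
# pressure null that sits a quarter wavelength (~1.23 m) from the wall.
diff_db = level_db(1.10) - level_db(1.20)
```

Even this toy model yields a level difference of roughly 14 dB at 70 Hz across a four-inch move, comfortably in the range of the real-room measurement described above.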

Lifting the Wool

Also at issue are the very words we use to describe audio quality. The most subjective among them can obscure meaning, rather than help us get our ideas across.

There are a handful of audio terms that can be very useful, like “full” or “thin” which are almost invariably used to refer to more or less energy in the bass range. But other words such as “harsh” and “warm” can be far less helpful on their own. Depending on who is speaking, “harsh” can mean either boosted treble or added distortion, or both. And “warm” seems to mean almost anything one happens to find pleasing!

Some of the most colorful adjectives we see in audio magazines are more confusing still, because they have no established meaning at all. What exactly is a “musical” top end, and what would an “unmusical” one sound like? How do you define a “sterile” sound? Does it have less distortion, less midrange, or does it just mean a performance you don’t like? What specific aspect of audio reproduction do these terms describe?

Unless the person using them is willing to define what they mean, words like these remain wide open to interpretation, and can impede communication, rather than offer a shortcut to real understanding.

Years ago, during a conservative war on “obscenity”, US Supreme Court Justice Potter Stewart famously said, “I shall not today attempt further to define the kinds of material … but I know it when I see it.” Such vagueness is as meaningless now as it was then.

Ethan Winer has been a professional audio engineer and musician for more than 40 years. Besides designing acoustic products for his company RealTraps, Ethan’s Cello Rondo music video has received more than 1.5 million views on YouTube and other web sites. His new book, The Audio Expert, is now used as the main text for several college recording courses, including those at Notre Dame.
