Digital Show and Tell
Continuing in the "firehose" tradition of Episode 01, Xiph.Org's second video on digital media explores multiple facets of digital audio signals and how they really behave in the real world.
Demonstrations of sampling, quantization, bit-depth, and dither explore digital audio behavior on real audio equipment using both modern digital analysis and vintage analog bench equipment, just in case we can't trust those newfangled digital gizmos. You can download the source code for each demo and try it all for yourself!
If you're having trouble with playback in a modern browser or player, please visit our playback troubleshooting and discussion page.
“A few months ago, I wrote an article on digital audio and why 24bit/192kHz music downloads don't make sense. In the article, I mentioned--almost in passing--that a digital waveform is not a stairstep, and you certainly don't get a stairstep when you convert from digital back to analog.
“Of everything in the entire article, that was the number one thing people wrote about. In fact, more than half the mail I got was questions and comments about basic digital signal behavior. Since there's interest, let's take a little time to play with some simple digital signals. ”
Veritas ex machina
If we pretend for a moment that we have no idea how digital signals really behave, then it doesn't make sense for us to use digital test equipment. Fortunately for this exercise, there's still plenty of working analog lab equipment out there.
We'll observe our analog waveforms on analog oscilloscopes, like this Tektronix 2246 from the mid-80s, one of the last and best analog scopes made.
Finally, we'll inspect the frequency spectrum of our signals using an analog spectrum analyzer, this HP3585 from the same product line as the signal generator. Like the other equipment here, it has a rudimentary (and hilariously large) microcontroller, but the signal path from input to what you see on the screen is completely analog.
All of this equipment is vintage, but the specs are still quite good. We start with the signal generator set to output a 1 kHz sine wave at 1 Volt RMS. We see the sine wave on the oscilloscope and can verify that it is indeed 1 kHz at 1 Volt RMS, which is about 2.8 Volts peak-to-peak (for a sine wave, peak-to-peak amplitude is 2√2 times the RMS value), and that matches the measurement on the spectrum analyzer as well.
The analyzer also shows some low-level white noise and just a bit of harmonic distortion, with the highest distortion peak roughly 70 dB below the fundamental. This doesn't matter for the demos, but it's good to take notice of it now to avoid confusion later.
For digital conversion, we use a boring, consumer-grade, eMagic USB1 audio device. It's more than ten years old at this point, and it's getting obsolete.
A recent converter can easily have an order of magnitude better specs: flatness, linearity, jitter, noise behavior, everything. Not that you'd notice; just because we can measure an improvement doesn't mean we can hear it, and even these old consumer boxes were already at the edge of ideal transparency.
The eMagic connects to my ThinkPad, which displays a digital waveform and spectrum for comparison, then the ThinkPad sends the digital signal right back out to the eMagic for re-conversion to analog and observation on the output scopes.
First demo: We begin by converting an analog signal to digital and then right back to analog again with no other steps.
The signal generator is set to produce a 1 kHz sine wave just like before, and we can see the analog sine wave on the input-side oscilloscope. The eMagic digitizes our signal to 16-bit PCM at 44.1 kHz, same as on a CD. The spectrum of the digitized signal on the ThinkPad matches what we saw earlier and what we see now on the analog spectrum analyzer, aside from its high-impedance input being just a smidge noisier. For now, the waveform display shows our digitized sine wave as a stairstep pattern, one step for each sample.
When we look at the output signal that's been converted from digital back to analog, we see that it's exactly like the original sine wave. No stairsteps.
1 kHz is still a fairly low frequency, so perhaps the stairsteps are just hard to see or they're being smoothed away. Next, we set the signal generator to 15 kHz, which is much closer to Nyquist. Now the sine wave is represented by fewer than three samples per cycle, and the digital waveform looks rather poor! Yet the analog output is still a perfect sine wave, exactly like the original. As we keep increasing the frequency, all the way to 20 kHz, the output waveform remains perfect: no jagged edges, no dropoff, no stairsteps.
So where'd the stairsteps go? It's a trick question; they were never there. Drawing a digital waveform as a stairstep was wrong to begin with.
A stairstep is a continuous-time function. It's jagged, and it's piecewise, but it has a defined value at every point in time. A sampled signal is entirely different. It's discrete-time: it only has a value right at each instantaneous sample point, and it's undefined (there is no value at all) everywhere in between. A discrete-time signal is properly drawn as a lollipop graph. The continuous, analog counterpart of a digital signal passes smoothly through each sample point, and that's just as true for high frequencies as it is for low.
The interesting and non-obvious bit is that there's only one bandlimited signal that passes exactly through each sample point; it's a unique solution. If you sample a bandlimited signal and then convert it back, the original input is also the only possible output. A signal that differs even minutely from the original includes frequency content at or beyond Nyquist, breaks the bandlimiting requirement and isn't a valid solution.
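To make that "unique bandlimited solution" concrete, here is a minimal numpy sketch (mine, not the demo code used in the video) that evaluates the ideal sinc-interpolation reconstruction of a sampled sine wave, using a frequency close to the 15 kHz case above:

```python
# Toy sketch of ideal (sinc) reconstruction -- not the video's demo code.
# The unique bandlimited signal through a set of samples is a sum of one
# sinc function per sample, each scaled by that sample's value.
import numpy as np

fs = 44100.0                          # sample rate, Hz
f = 15000.0                           # tone frequency: under 3 samples per cycle
n = np.arange(64)                     # sample indices
x = np.sin(2 * np.pi * f * n / fs)    # the discrete-time samples (the "lollipops")

t = np.linspace(0, (len(n) - 1) / fs, 4000)               # fine time grid, seconds
y = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

# Away from the edges of this short window, y is a smooth 15 kHz sine wave
# passing exactly through every sample point -- no stairsteps.
```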
So how did everyone get confused and start thinking of digital signals as stairsteps? I can think of two good reasons.
First: it's easy to convert a sampled signal to a true stairstep. Just extend each sample value forward until the next sample period. This is called a zero-order hold, and it's an important part of how some digital-to-analog converters work, especially the simplest ones. As a result, anyone who looks up digital-to-analog converter or digital-to-analog conversion is probably going to see a diagram of a stairstep waveform somewhere, but that's not a finished conversion, and it's not the signal that comes out.
Second, and this is probably the more likely reason, engineers who supposedly know better (yes, even I) draw stairsteps even though they're technically wrong. It's a sort of one-dimensional version of fat bits in an image editor. Pixels aren't squares either, they're samples of a 2-dimensional function space and so they're also, conceptually, infinitely small points. Practically, it's a real pain in the ass to see or manipulate infinitely small anything, so big squares it is.
Digital stairstep drawings are exactly the same thing. It's just a convenient drawing. The stairsteps aren't really there.
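As a quick illustration of the difference, here's a hedged sketch contrasting the true sample points with the zero-order-hold stairstep mentioned above; it isn't how any particular converter is implemented, just the two pictures side by side:

```python
# Samples versus a zero-order-hold stairstep built from the same samples.
import numpy as np

fs = 44100.0
x = np.sin(2 * np.pi * 1000.0 * np.arange(64) / fs)   # 1 kHz tone, sampled

hold = 16                          # sub-steps per sample period, for plotting
stairstep = np.repeat(x, hold)     # each value held flat until the next sample

# `x` is the digital signal: isolated values, one per sample instant.
# `stairstep` is a continuous-time waveform a simple DAC might pass through
# internally; it is not the bandlimited signal the samples represent.
```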
When we convert a digital signal back to analog, the result is also smooth regardless of the bit depth. It doesn't matter if it's 24 bits or 16 bits or 8 bits. So does that mean that the digital bit depth makes no difference at all? Of course not.
Channel 2 is the same sine wave input, but we quantize it with dither down to 8 bits. On the scope, we still see a nice smooth sine wave on channel 2. Look very close, and you'll also see a bit more noise. That's a clue.
If we look at the spectrum of the signal, our sine wave is still there unaffected, but the noise level of the 8-bit signal on the second channel is much higher. And that's the difference, the only difference, the number of bits makes.
When we digitize a signal, first we sample it. The sampling step is perfect; it loses nothing. But then we quantize it, and quantization adds noise. The number of bits determines how much noise and so the level of the noise floor.
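The relationship between word length and noise floor is easy to check numerically. A rough sketch (assuming plain rounding with no dither; the numbers land near the textbook 6 dB-per-bit rule):

```python
# Sketch: quantization noise versus bit depth, for a near-full-scale sine wave.
# Expect a signal-to-noise ratio of roughly 6.02*bits + 1.76 dB.
import numpy as np

def quantize(x, bits):
    steps = 2.0 ** (bits - 1)          # quantization levels per unit amplitude
    return np.round(x * steps) / steps

t = np.arange(44100) / 44100.0
x = 0.99 * np.sin(2 * np.pi * 997.0 * t)   # near-full-scale, non-bin-aligned sine

for bits in (8, 16, 24):
    noise = quantize(x, bits) - x
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
    print(f"{bits:2d} bits: {snr:5.1f} dB")   # roughly 50, 98, 146 dB
```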
What does this dithered quantization noise sound like? Those of you who have used analog recording equipment might think to yourselves, "My goodness! That sounds like tape hiss!" Well, it doesn't just sound like tape hiss, it acts like it too, and if we use a Gaussian dither then it's mathematically equivalent in every way. It is tape hiss.
Intuitively, that means that we can measure tape hiss, and thus the noise floor of magnetic audio tape, in bits instead of decibels, in order to put things in a digital perspective. Compact cassettes, for those of you who are old enough to remember them, could reach as deep as 9 bits in perfect conditions, though 5 to 6 bits was more typical, especially for recordings made on a home tape deck. That's right; your old mix tapes were only about 6 bits deep, if you were lucky!
We've been quantizing with dither. What is dither exactly and, more importantly, what does it do?
The simplest way to quantize a signal is to choose the digital amplitude value closest to the original analog amplitude. Unfortunately, the exact noise that results from this simple quantization scheme depends somewhat on the input signal. It may be inconsistent, cause distortion, or be undesirable in some other way.
Dither is specially constructed noise that substitutes for the noise produced by simple quantization. Dither doesn't drown out or mask quantization noise; it replaces it with noise of our choosing, whose characteristics aren't influenced by the input.
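Here's a small numerical sketch of that substitution, assuming TPDF (triangular) dither of ±1 least-significant step; the video doesn't say exactly which dither its demos use, so treat this as one reasonable choice rather than the method shown on screen:

```python
# Sketch: undithered versus TPDF-dithered quantization of a quiet sine wave.
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0 / 2 ** 7                               # 8-bit step size
x = 0.01 * np.sin(2 * np.pi * 1000.0 * np.arange(65536) / 44100.0)

plain = np.round(x / lsb) * lsb                  # error is correlated with the signal
tpdf = rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)
dithered = np.round(x / lsb + tpdf) * lsb        # error is now signal-independent noise

# A windowed FFT of `plain` shows harmonic spikes above a lumpy floor;
# the same FFT of `dithered` shows only the 1 kHz tone over a flat noise floor.
```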
The signal generator has too much noise for this test, so we produce a mathematically perfect sine wave with the ThinkPad and quantize it to 8 bits with dithering. We see the sine wave on the waveform display and the output scope, and a clean frequency peak with a uniform noise floor on both spectral displays, just like before. Again, this is with dither.
Now I turn dithering off.
The quantization noise that dither had spread out into a nice, flat noise floor now piles up into harmonic distortion peaks. The noise floor is lower, but the level of distortion becomes nonzero, and the distortion peaks sit higher than the dither noise did.
At 8 bits this effect is exaggerated. At 16 bits, even without dither, harmonic distortion is going to be so low as to be completely inaudible. Still, we can use dither to eliminate it completely if we so choose.
Turning the dither off again for a moment, we notice that the absolute level of distortion from undithered quantization stays approximately constant regardless of the input amplitude. But when the signal level drops below half a bit, everything quantizes to zero.
In a sense, everything quantizing to zero is just 100% distortion! Dither eliminates this distortion too. When we reenable dither, we clearly see our signal at 1/4 bit with a nice flat noise floor.
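The "everything quantizes to zero" case is easy to reproduce as well. In this sketch (using the same hypothetical TPDF dither as above), a sine wave at a quarter of one quantization step survives dithered quantization but vanishes entirely without it:

```python
# Sketch: a sine at 1/4 of one quantization step.
import numpy as np

rng = np.random.default_rng(1)
lsb = 1.0 / 2 ** 7                               # 8-bit step size
x = 0.25 * lsb * np.sin(2 * np.pi * 1000.0 * np.arange(65536) / 44100.0)

undithered = np.round(x / lsb) * lsb             # every sample rounds to zero
tpdf = rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)
dithered = np.round(x / lsb + tpdf) * lsb        # the tone is preserved under the noise

print(np.any(undithered))                        # False: the signal is simply gone
```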
Human hearing is most sensitive in the midrange from 2 kHz to 4 kHz; that's where background noise will be most obvious. We can shape the dither noise away from that sensitive band and toward frequencies where hearing is less acute, usually the very highest frequencies.
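A first-order error-feedback loop is the simplest way to see what noise shaping does; this is a bare-bones sketch, not the psychoacoustically tuned shaping a real mastering tool would use:

```python
# Sketch: first-order noise shaping. The quantization error of each sample is
# subtracted from the next one, pushing the noise power toward high frequencies
# (where hearing is least sensitive) at the cost of more total noise power.
import numpy as np

def shaped_quantize(x, bits, rng):
    lsb = 1.0 / 2 ** (bits - 1)
    out = np.empty_like(x)
    err = 0.0
    for i, sample in enumerate(x):
        target = sample - err                                   # error feedback
        dith = rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)  # TPDF, in steps
        out[i] = np.round(target / lsb + dith) * lsb
        err = out[i] - target                                   # error carried forward
    return out
```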
Lastly, dithered quantization noise is higher power overall than undithered quantization noise, even though it often sounds quieter, and you can see that on a VU meter during passages of near-silence. However, dither isn't only an on or off choice. We can reduce the dither's power to balance less noise against a bit of distortion to minimize the overall effect.
For the next test, we also modulate the input signal like this to show how a varying input affects the quantization noise. At full dithering power, the noise is uniform, constant, and featureless just like we expect.
As we reduce the dither's power, the input increasingly affects the amplitude and the character of the quantization noise. Shaped dither behaves similarly, but noise shaping lends one more nice advantage; it can use a somewhat lower dither power before the input has as much effect on the output.
Despite all this text spent on dither, the differences exist 100 decibels or more below full scale. If the CD had been 14 bits as originally designed, dither might be more important. At 16 bits it's mostly a wash. It's reasonable to treat dither as an insurance policy that gives several extra decibels of dynamic range just in case. That said, no one ever ruined a great recording by not dithering the final master.
Bandlimitation and timing
We've been using sine waves. They're the obvious choice when what we want to see is a system's behavior at a given isolated frequency. Now let's look at something a bit more complex. What should we expect to happen when I change the input to a square wave?
The input scope confirms a 1kHz square wave. The output scope shows... exactly what it should.
What is a square wave really?
We can say it's a waveform that's some positive value for half a cycle and then transitions instantaneously to a negative value for the other half.
But that doesn't really tell us anything useful about how that input becomes this output. A more useful description is the square wave's Fourier series: the sum of its fundamental frequency plus an infinite series of odd harmonics of decreasing amplitude.
At first glance, that doesn't seem very useful either; you'd have to sum an infinite number of harmonics to get the answer! However, we don't have an infinite number of harmonics.
We're using a quite sharp anti-aliasing filter that cuts off right above 20kHz, so our signal is bandlimited. Only the first ten terms make it through, and that's exactly what we see on the output scope.
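That claim is easy to check numerically. Here's a sketch (mine, not the demo code) that sums only the ten odd harmonics that fit below a 20 kHz cutoff:

```python
# Sketch: a 1 kHz square wave bandlimited to 20 kHz is just its first ten
# odd harmonics -- 1, 3, 5, ... 19 kHz -- each scaled by 4/(pi*k).
import numpy as np

fs, f0 = 44100.0, 1000.0
t = np.arange(2048) / fs
square = np.zeros_like(t)
for k in range(1, 21, 2):                  # k = 1, 3, ..., 19
    square += (4.0 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k

# Plotting `square` shows the same flattened top and bottom and the same
# ripples near the transitions as the output scope in the demo.
```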
The rippling you see around sharp edges in a bandlimited signal is called the Gibbs effect. It happens whenever you slice off part of the frequency domain in the middle of nonzero energy.
The usual rule of thumb you'll hear is "the sharper the cutoff, the stronger the rippling", which is approximately true, but we have to be careful how we think about it. For example, what would you expect our quite sharp anti-aliasing filter to do if I run our signal through it a second time?
Aside from adding a few fractional cycles of delay, the answer is: Nothing at all. The signal is already bandlimited. Bandlimiting it again doesn't do anything. A second pass can't remove frequencies that we already removed.
That's important. People tend to think of the ripples as a kind of artifact that's added by anti-aliasing and anti-imaging filters, implying that the ripples get worse each time the signal passes through. We saw that in this case it didn't happen, so it wasn't really the filter that added the ripples the first time through. It's a subtle distinction, but Gibbs effect ripples aren't added by filters; they're just part of what a bandlimited signal is.
Even if we synthetically construct what looks like a perfect digital square wave, it's still limited to the channel bandwidth. Remember that the stairstep representation is misleading. What we really have here are instantaneous sample points, and only one bandlimited signal fits those points. All we did when we drew our apparently perfect square wave was line up the sample points just right, so that it appeared there were no ripples if we played connect-the-dots. The original bandlimited signal, complete with ripples, was still there.
That leads us to one more important point. You've probably heard that the timing precision of a digital signal is limited by its sample rate; put another way, that digital signals can't represent anything that falls between the samples, implying that impulses or fast attacks have to align exactly with a sample, or else the timing gets mangled or they just disappear. At this point, we can easily see why that's wrong.
Again, our input signals are bandlimited. And digital signals are samples, not stairsteps, not 'connect-the-dots'. We most certainly can, for example, put the rising edge of our bandlimited square wave anywhere we want between samples.
It's represented perfectly and it's reconstructed perfectly.
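Here's a hedged sketch of that last point, reusing the ten-harmonic square wave from above: shift its rising edge by three tenths of a sample period, sample it, and the samples still pin down the edge position exactly, because only one bandlimited signal fits them.

```python
# Sketch: a bandlimited square wave whose rising edge sits 0.3 of a sample
# period between two sample instants. Nothing about the edge is lost.
import numpy as np

fs, f0 = 44100.0, 1000.0
shift = 0.3 / fs                                     # 0.3 samples, in seconds

def bl_square(t):
    # Ten odd harmonics below 20 kHz, as in the sum above, delayed by `shift`.
    return sum((4.0 / np.pi) * np.sin(2 * np.pi * k * f0 * (t - shift)) / k
               for k in range(1, 21, 2))

n = np.arange(128)
samples = bl_square(n / fs)     # none of these land on the edge itself

# Sinc-interpolating `samples` (as in the reconstruction sketch earlier)
# reproduces bl_square(t) exactly, sub-sample edge position and all.
```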
As in _A Digital Media Primer for Geeks_, we've covered a broad range of topics, and yet barely scratched the surface of each one. If anything, my sins of omission are greater this time around.
Thus I encourage you to dig deeper and experiment. I chose my demos carefully to be simple and give clear results. You can reproduce every one of them on your own if you like, but let's face it: Sometimes we learn the most about a spiffy toy by breaking it open and studying all the pieces that fall out. Play with the demo parameters, hack up the code, set up alternate experiments. The source code for everything, including the little pushbutton demo application, is at the bottom of this page.
In the course of experimentation, you're likely to run into something that you didn't expect and can't explain. Don't worry! My earlier snark aside, Wikipedia is fantastic for exactly this kind of casual research. If you're really serious about understanding signals, several universities have advanced materials online, such as the 6.003 and RES.6-007 Signals and Systems modules at MIT OpenCourseWare. And of course, there's always the community here at Xiph.Org.
Written by: Christopher (Monty) Montgomery and the Xiph.Org Community
Special thanks to:
- Heidi Baumgartner, for the second Tektronix oscilloscope
- Gregory Maxwell and Dr. Timothy Terriberry, for additional technical review
This Video Was Produced Entirely With Free and Open Source Software:
All trademarks are the property of their respective owners.
A Co-Production of Xiph.Org and Red Hat, Inc.
(C) 2012-2013, Some Rights Reserved
Use The Source Luke
As stated in the Epilogue, everything that appears in the video demos is driven by open source software, which means the source is both available for inspection and freely usable by the community. The ThinkPad that appears in the video was running Fedora 17 and GNOME Shell (GNOME 3). The demonstration software does not require Fedora specifically, but it does require GNU/Linux to run in its current form. In all, the video involved just under 50,000 lines of new and custom-purpose code (including contributions to non-Xiph projects such as Cinelerra and Gromit).
The Spectrum and Waveform Viewer
The realtime software spectrum analyzer application that appears in the video was a preexisting application that was dusted off and updated for use in the video. The waveform viewer (effectively a simple software oscilloscope) was written from scratch making use of some of the internals from the spectrum analyzer application. Both are available from Xiph.Org svn:
Spectrum and Waveform both expect an input stream on the command line, either as raw data or as a WAV file.
The touch-controlled application used in the video is named 'gtk-bounce' and was custom-written for the sole purpose of the in-video demonstrations. It is so named because, for the most part, all it does is read input from an audio device and then immediately write the same data back out for playback. It also forwards a copy of this data to up to two external monitoring applications and, in several demos, applies simple filters or generates simple waveforms. In addition, it includes several demos that do not appear in the video.
The application is somewhat hardwired for specific demo parameters, but most of the hardwired settings can be found at the top of each source file. As found in SVN, the application expects an ALSA hardware audio device at hw:1, and if none is found, it will wait for one to appear. Once a sound device is successfully initialized, it expects to find and open two pipes named pipe0 and pipe1 for output in the current directory. In the video, the waveform and spectrum applications are started to take input from pipe0 and pipe1 respectively. The output sent to the two pipes is identical, and in most demos matches the output data sent to the hardware device for conversion to analog. The only exception is the tenth demo panel (which does not appear in the video), where gtk-bounce can be set to monitor the hardware inputs instead, while the outputs are used to produce test waveforms.
Assuming gtk-bounce, spectrum and waveform have been checked out and built, the configuration seen in the video can be started using the following commands:
Gtk-bounce consists of eleven pushbutton panels (numbered zero through ten) that can be selected by scrolling up and down with the arrow buttons on the right side. Each panel is intended for a specific demo or part of a demo.
The animations featured throughout the Episode 2 video were rapid-development spaghetti hack-jobs coded by hand in raw Cairo. Each module generated a series of PNG stills that were then stitched into an animation with Cinelerra or mplayer. In the interest of pointing and laughing at what really bad code looks like...