Analogue vs. digital tutorial

Digital processing can cause a lot of confusion. In this tutorial we'll shed some light on the link between the worlds of analogue and digital.

Analogue world

Analogue devices were used exclusively in the early years of sound processing, simply because there was no other choice. Computers were the size of a small house, programmed by feeding them paper rolls or punched cards, and no sound processing software existed. The only storage medium available was magnetic tape, which does have some positive features. It saturates the audio, for example, and most engineers agree that this kind of saturation sounds pleasing. However, it also brings some disadvantages, not least of which is low resolution. This poor resolution meant that most tracks were recorded with the whole band playing live together and then mixed on the spot, with the complete result recorded onto the tape.

Analogue devices and digital processors work on the same signals; neither works the same way as our ears and brain do. Digital sound, however, is a little harder to understand, and this may explain why many people are not willing to completely switch to the digital realm. In the analogue domain you typically have a physical quantity (in our case voltage), and you use a device (monitors or headphones) to convert it into air motion, which is detected by our ears to produce 'sound'. The more the voltage varies, the more the air pressure changes and the louder the resulting sound is.

If you look at a waveform in a sound editor, you can imagine the voltage changing in exactly the same way. This voltage is carried by electrons, and the good news is that there are 'gazillions' of them, so given the relatively low speed of our hearing system, we can theoretically achieve extremely high accuracy. The issue is how to control and measure these electrons, as there is no such thing as a continuous flow of them. It is akin to an astronaut having to count and measure the speed of meteorites while travelling through space, as they pass by every few kilometres!

We do have some ways to process this type of signal. For example, we are able to change the signal level (amplify or attenuate it) and to control some frequencies within it (we'll cover what frequency is a little later). Engineers have spent a lot of time developing electrical circuits that can do these tasks with minimal distortion. However, we still have a long way to go.

The problem arises because none of the electrical components are perfect. A transistor used in an amplifier does not have a linear response and actually wave-shapes the result, so we need another circuit to 'invert' the transistor's output. To illustrate: a capacitor nominally rated at 10nF is not actually 10nF, but somewhere in the range 8-12nF, and if another transistor located just 1cm away heats up to 30 degrees Celsius, it also heats the nearby air and electrical connections, turning our 10nF capacitor into a 7nF one. Engineers have to deal with this scenario for every piece of hardware. One of the reasons high-end analogue devices are so expensive is that every piece must be tested and measured (well, they are supposed to be; who knows if they actually are, but I hope so). In any case, this also explains why analogue devices are almost always extremely simple compared to digital software. The fewer the components, the less there is to go wrong.

The biggest problem is usually the non-linearity of the active components - transistors and tubes - although there is a positive side: most of these components are fairly linear within certain ranges. So, stick within that range and what you input will be very similar to what comes out (with just a change in level, for example). Push past it and you enter the nonlinear stage, where you get the so-called "saturation", an effect that increases as you get closer to the maximum voltage.

We can demonstrate this by imagining a flood where the water has completely filled the riverbed. It then starts flooding the lowlands around the bed, but there are too many obstacles for the water to move effectively or smoothly. The transistor, like the flood water, starts passing fewer electrons until they either just cannot get through the transistor, or there are no more electrons available. When this occurs, you have just clipped your audio. Guitarists use this effect all the time, and even many audio engineers will say that they actually prefer to record most of the instruments through tube-based amplifiers because the character of the distortion sounds pleasing.
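
To make the idea concrete, here is a minimal sketch in code of the two behaviours just described - gradual saturation and hard clipping. It uses Python with numpy; the tanh curve and the parameter values are illustrative choices, not a model of any particular circuit:

```python
import numpy as np

fs = 44100                          # sample rate in Hz
t = np.arange(fs) / fs              # one second of time values
sine = np.sin(2 * np.pi * 440 * t)  # a clean 440 Hz sine wave

def soft_saturate(x, drive=4.0):
    # tanh bends the waveform gradually as it nears the limit,
    # roughly like a transistor or tube leaving its linear range
    return np.tanh(drive * x) / np.tanh(drive)

def hard_clip(x, limit=1.0):
    # the 'flood' case: no more electrons get through, the top is sliced off
    return np.clip(x, -limit, limit)

saturated = soft_saturate(sine)        # smooth, 'musical' distortion
clipped = hard_clip(3.0 * sine)        # overdriven and clipped flat
print(saturated.max(), clipped.max())  # both peak at 1.0, by very different routes
```

Soft saturation rounds the peaks off gradually, which is where the extra harmonics guitarists like come from; hard clipping slices them off abruptly and sounds much harsher.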

The scenario is similar with magnetic tape. The audio is recorded by magnetizing tiny particles; simply put, the higher the voltage, the more of them must be magnetized. However, increasing the voltage too much leaves fewer and fewer of them available, until eventually we just run out completely.

Conclusion

In the analogue world we are limited only by physics; unfortunately, our capability to control and measure physical quantities is still very low. We can, however, find advantages in some of these imperfections.

Digital world

In the digital domain we can benefit from almost unlimited accuracy of every kind - well, that is, if you are willing to wait a long time for your recording to be processed! We also encounter problems when converting between the analogue and digital realms, but more on this later.

If you don't actually want to listen to the audio but just generate it (i.e. no recording, everything is 'created inside'), then if you want a 1GHz sampling rate, you can have it! If you want 4096 bits of sample precision, no problem! There probably won't be a device capable of actually playing such an audio file for centuries, but it certainly is possible to produce one. Even if we assume such a device exists, we are very far from having enough computing power to feed it. The point is, the limitations of digital signal processing are purely mathematical, as opposed to the physical ones in our analogue world.

Let's get back to reality. We often use a 44.1kHz sampling rate, and it's a reasonable number, since the so-called Nyquist theorem states that we can represent all frequencies up to half the sampling rate this way - in this case 22.05kHz. Since humans are rarely able to hear anything above 20kHz, this should be sufficient.
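
As a quick worked example, here is the fs/2 rule spelled out for the common rates (a trivial sketch, nothing more):

```python
# Nyquist rule of thumb: a sampling rate of fs can represent
# frequencies up to fs / 2.
for fs in (44100, 48000, 96000, 192000):
    print(f"{fs:6d} Hz sampling -> up to {fs / 2:.0f} Hz")
```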

So what is sampling? Sampling is, in this context, measuring and storing. Remember how the voltage was going up and down in our analogue example? Well, if we were to ask a hardware device called an 'analogue-digital (AD) converter' about "the amount and rate of the electrons at a specific interval", it might say "yeah, many of them, and moving fast! So this is obviously 0.87645. Oh, and in the next interval it's 0.7903." :). The result of this sampling process is a set of numbers measured at evenly spaced intervals. The individual samples mean nothing on their own, but together they tightly follow the signal.
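
Here is a minimal sketch of that 'measure and store' idea, with a math function standing in for the continuously varying analogue voltage (the function name and numbers are purely illustrative):

```python
import numpy as np

def analogue_voltage(time_s):
    # stand-in for the continuously varying voltage on the wire
    return 0.8 * np.sin(2 * np.pi * 440 * time_s)

fs = 44100                                   # samples per second
sample_times = np.arange(0, 0.001, 1 / fs)   # the first millisecond
samples = analogue_voltage(sample_times)     # 'measure and store'
print(samples)  # individually meaningless numbers that together trace the wave
```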

With actual samples, simple actions such as amplifying the audio become extremely easy, and you can have any precision you want, if you are willing to wait for the result. You can also look back in time, and when processing offline you can even look forward in time (strictly speaking, the data at one point in time is used to manipulate data at an earlier point, but everything gets delayed - "latency"). This way, implementing delays and reverbs becomes only a matter of quality, as opposed to the analogue world, where it is virtually impossible. You can analyze whole blocks of audio, which allows you to detect and manipulate spectral content and much more. Everything is achieved with incredible accuracy, provided by mathematics.
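
A small sketch of both points, assuming numpy and offline (non-real-time) processing; the look-ahead window of 64 samples is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
audio = rng.uniform(-0.5, 0.5, 44100)   # one second of noise as test audio

louder = audio * 2.0    # a +6 dB gain is literally one multiplication per sample

# Looking forward in time: when processing offline, an output sample may
# depend on input samples that haven't 'happened' yet.
lookahead = 64
padded = np.concatenate([audio, np.zeros(lookahead)])
upcoming_peak = np.array([np.abs(padded[i:i + lookahead]).max()
                          for i in range(len(audio))])
print(louder[:3], upcoming_peak[:3])
```

No analogue circuit can compute `upcoming_peak`, because it would have to react to a voltage that hasn't arrived yet; digitally it is just array indexing.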

There are some drawbacks as well, however. Firstly, although basic features like amplifying and mixing are very simple, more complicated tasks can quickly become a huge mathematical mess. If an engineer developing DSP (digital signal processing) software does not completely understand this area, the result could be a piece of software which is very poor yet is marketed as top end. This makes software selection much more difficult, because while with analogue you can be more or less certain that expensive means good, with digital the situation is much more complicated.

Secondly, digital audio might be too perfect. "Too clinical, no character" - we've all heard those comments. Fortunately, the imperfections of analogue circuits are fairly trivial to simulate, bringing warmth and character into the digital domain. The problem arises when you want to replicate a certain piece of audio equipment completely - but why the hell would you do that? Well, many people do, trying to slow down progress probably :).

Is analogue quality really that ultimate? Many people seem to believe that the audio character of analogue devices is basically unbeatable and that the digital world should focus on recreating those qualities. But this actually makes no sense at all. Analogue device engineers did the best they could with what they had available, with all the inherent problems and limitations. But stating that this is the top of the mountain is just plain brain-washing. It's important to understand that despite the existing qualities of analogue hardware, the results can always be even better! And if you want an improvement, you are probably going to find it in the digital world. Many of the good aspects of the analogue world can be used to inspire evolution in digital processing.

Conclusion

The digital domain itself provides much more power, is easier to use and is cheap to produce. There are problems with conversion to and from analogue, however, and there are also a few advantages of analogue processing that we cannot currently simulate accurately in digital.

Conversion between analogue and digital

Dealing with analogue is inevitable. Whenever you record something, it is originally analogue, and to hear it, you have to convert it back to analogue again. After all, we live in a physical world. Maybe in the future it will be possible to live as a piece of software inside a computer, enjoying audio at any accuracy we want ;).

Let's say you are recording something with a microphone. The microphone either generates a small electrical current (a dynamic microphone) or modifies a voltage you supply to it (a condenser microphone). This signal is fed into a preamplifier (because the voltages produced by microphones are just too small), and then into a device called an analogue-digital (AD) converter, which converts the voltage into a binary number (the sample). Both the preamp and the AD converter are very delicate devices, and they are the most important and expensive parts of your analogue chain.

The best AD converters currently provide 24-bit precision and 192kHz sampling rates, but note that the fact that they claim such parameters does NOT mean they actually deliver them. For example, a converter may really have 22 bits of precision, with the 2 remaining bits being more or less just noise.

Why is this such a problem? Let's say the preamp is extremely good and produces about 1 Volt (similar to what an AA battery delivers) from the original millivolt signal with almost no distortion. The AD converter then generates 24 bits, where the first bit resolves 0.5V, the second 0.25V, the third 0.125V and so on. The 16th bit resolves about 0.015mV and the 24th about 0.06uV (that's 0.00000006V!). From the audio perspective this is about -144dB, far below anything a human can hear. It's like standing next to a jet plane and trying to hear someone whispering a few metres away from you!
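
The arithmetic from that paragraph, worked explicitly (assuming a 1V full-scale signal, as above):

```python
import math

# voltage resolved by each bit, assuming a 1 V full-scale signal
for bit in (1, 2, 3, 16, 24):
    print(f"bit {bit:2d}: {1.0 / 2 ** bit:.9f} V")

# the quietest step of a 24-bit converter, expressed in dB
print(20 * math.log10(2 ** -24))   # about -144 dB
```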

And with a 192kHz sampling rate, the AD converter has to perform this measurement 192,000 times per second! You could argue that computers do billions of operations per second, but that is physically very different: computers deal with just two states, 1 and 0 - there is a voltage, or there is not.

Generally, the need for sampling rates above 96kHz is questionable. 96kHz can represent all frequencies up to 48kHz, which is more than an octave above our hearing limit. However, if you study what happens to a sine wave close to 48kHz when you sample it at this rate, you'll notice that the stored waveform doesn't look much like a sine wave anymore, even if it still sounds like one (which, after all, is what matters).

Comment: Personally, I'd like to know what some animals with better hearing feel when listening to our music. If we take a 20kHz sine sampled at 44.1kHz as an example, the stored samples, joined together naively, trace out not a sine but virtually a triangle, and a triangle contains several higher harmonics. That would sound like very sharp distortion, but fortunately our monitors are far from good enough to play it, otherwise I would never let my dog listen to my music :).
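
You can see how sparse the samples get with a few lines of numpy (a quick sketch; a proper reconstruction filter would still recover the smooth sine from these points):

```python
import numpy as np

fs, f = 44100, 20000              # sample rate and tone frequency
n = np.arange(12)                 # twelve consecutive samples
samples = np.sin(2 * np.pi * f * n / fs)
print(np.round(samples, 2))       # about 2.2 samples per cycle - jagged, not sine-like
```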

A bit resolution of 24 bits provides a range from 0dB down to -144dB, which is more than enough, but there's a catch. The limits are fixed, so when the input exceeds 1V (creating a signal above 0dB) it gets clipped back to 0dB, and when the input is below -144dB it becomes silence. Therefore the only way to actually use all 24 bits of available resolution is to adjust the preamp so that the loudest parts of the waveform get close to, but never exceed, 0dB. Since we can never be sure that the drummer won't play louder this time, because he's in a bad mood and needs to get it out of his system, engineers always employ some headroom so that the audio never gets clipped. Because of this, you often won't actually use the highest bit, so you have just lost 6dB, and now there are only 23 bits left.

You may end up with about 20 reliable bits, which corresponds to a dynamic range of 120dB - still more than you need, but not so generous anymore.

Digital to analogue

After you process your recordings, you have to convert them back to analogue (or rather the listener has to). Let's say you distribute your music in a lossless format, on a CD for example; this gives us a sampling rate of 44.1kHz and 16-bit resolution. We then need a digital-analogue (DA) converter and an amplifier, because the output of a DA converter usually isn't strong enough to drive even a pair of headphones.

DA converters do the exact opposite of AD converters. Some circuitry feeds them a 16-bit number 44,100 times per second, and their task is to generate an output voltage according to these numbers. Once they receive the number 1, they generate, say, a 1V output; when it is 0.66, they generate 0.66V. The accuracy of these devices is usually quite good, although not perfect.
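
A crude model of this first step is the so-called zero-order hold: each sample value is held as a constant voltage until the next one arrives, and the resulting staircase is then smoothed by an analogue reconstruction filter. A sketch (the numbers are illustrative):

```python
import numpy as np

samples = np.array([0.0, 0.66, 1.0, 0.66, 0.0, -0.66, -1.0, -0.66])
hold = 8                               # output points per input sample
staircase = np.repeat(samples, hold)   # each value held as a constant voltage
print(staircase[:16])                  # eight 0.0s, then eight 0.66s, ...
```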

After the DA converter we have our amp, which suffers from similar problems to any other amp or preamp. There are nonlinearities in shape, in the frequency domain, just about everywhere.

Next we have our monitors or headphones or whatever we use, and here the problems get even bigger, because designing a reproducer with a linear output is an art of its own. Generally, no reproducer has a completely flat response, which means it plays different frequencies at different levels. Engineers therefore need to measure the response and try to invert it to make it flat. This can be done with special circuitry (essentially equalizers inside the monitor), and through the shape and materials of the case and interior, and so on.
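
Here is a minimal sketch of the 'measure the response and invert it' idea, using scipy's firwin2 to build a correction filter. The measured response values below are made up for illustration; real monitor correction is considerably more involved:

```python
import numpy as np
from scipy.signal import firwin2

fs = 44100
freqs = [0, 100, 1000, 5000, 15000, fs / 2]   # measurement frequencies in Hz
measured = [0.5, 0.8, 1.0, 1.3, 0.7, 0.4]     # the monitor's (pretend) gain at each
inverse = [1.0 / g for g in measured]         # boost what it cuts, cut what it boosts

# an FIR filter approximating the inverse response; convolving audio with
# it (np.convolve(audio, fir)) would pre-flatten the playback
fir = firwin2(255, freqs, inverse, fs=fs)
print(fir.shape)
```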

Finally, there is your room and your ears. The room affects the audio, producing echoes and resonances. Ears do the same and are even adaptive, changing their own equalization so that you hear as much as possible. For example, if you play your song and attenuate (reduce) the high frequencies with an equalizer, after a while your ears will adjust to the absence of those frequencies and let you hear the remaining highs better. You can check this by simply stopping the playback: the world will immediately sound "brighter", because your hearing is still compensating for the highs that were missing.

Conclusion

Digital audio can do marvellous things, and analogue audio has some advantages as well, but the conversion between them is tricky. So if you are wondering which analogue equipment is worth spending your money on, make sure it is the preamp, the audio interface (AD and DA converters), active monitors and the room. We will always have to deal with analogue equipment to actually enjoy the music, so the small "digital excursion" in the middle isn't such a big problem. And then you should go and train your ears ;).

Which is better, analogue or digital?

Welcome to the capitalist world, where everyone tells you which is better based not on which actually is, but on what they are selling. The truth is, the only remaining advantage of analogue processing is that there is no latency. But customers don't know that, and the companies developing analogue gear are much wealthier than software companies, so they came up with some serious marketing. Now every bad thing about analogue is marketed as an awesome positive feature providing warmth, depth or some other descriptive word. So what are the "advantages" of analogue that these marketing experts and dinosaurs stubbornly insist on feeding you? Let's enumerate...

1) It adds analogue warmth.
What is that? It's just the never-ending nonlinearities. Just use some saturation... ahem, like MSaturatorMB or the saturation knob in many other MeldaProduction plugins. It creates some higher harmonics and, yes, it can make the sound slightly richer, but also somewhat distorted - that's how it works, after all. What the analogue engineers actually tried to do was remove these nonlinearities! But they couldn't, so the PR departments turned the flaw into something awesome using these great words - classic Freudian persuasion, but there's really no big magic behind them.
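
What that 'warmth' looks like in numbers: saturate a pure sine and the FFT shows new odd harmonics that weren't in the input. A small sketch using a generic tanh saturator (not the actual algorithm of any MeldaProduction plugin):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 1000 * t)   # a pure 1 kHz tone
warm = np.tanh(3.0 * sine)            # simple 'warming' saturation

spectrum = np.abs(np.fft.rfft(warm)) / len(warm)
for harmonic in (1, 2, 3, 4, 5):      # bins are 1 Hz apart for a 1 s signal
    print(f"{harmonic} kHz: {spectrum[1000 * harmonic]:.4f}")
# strong levels at 1, 3 and 5 kHz; essentially nothing at 2 and 4 kHz
```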

2) It adds some random imperfections.
First of all, analogue gear is mainly used on acoustic music, where the imperfections are supplied by the musicians themselves, so you really don't need any more from the technology. And if you do, just use the modulators in any of your plugins - you can randomize just about anything, if that's really what you need (see the sketch below). For the record, these imperfections are generally inaudible on high-end gear. It's just another manufactured "positive", inspired by the fact that every time you process sound through analogue with the same settings, the output is slightly different - which isn't really a positive (probably not a negative either, except in scientific work, where it is a problem).
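
A sketch of 'analogue randomness' on demand: let the gain drift slightly under slow random modulation. All the numbers here are illustrative, and real plugin modulators are far more flexible than this:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 44100
audio = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)   # a clean 220 Hz tone

drift = rng.normal(0.0, 0.01, size=len(audio))                 # tiny random wobble
slow_drift = np.convolve(drift, np.ones(2048) / 2048, "same")  # smoothed, so it drifts slowly
imperfect = audio * (1.0 + slow_drift)                         # the gain now wanders a little
print(np.abs(imperfect - audio).max())                         # the 'randomness', quantified
```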

3) It has been used for many decades now, so it must be great.
Well, yes, some people really think that! :) And it's actually the most-used argument for analogue: "The big guys are using it, so it must be awesome...". That's like saying that since my great-great-grandfather travelled in steam-powered trains, I should too. Obviously nobody does that, because it would just be stupid; we have much more powerful and effective technologies today. And the same is true for audio processing. You can stay in the past with steam-engine audio processing, or embrace the solar-powered kind and head into the future.

4) If we don't have analogue gear, we need simulations.
The important question is why we would need analogue gear in the first place, and we summed that up above. Why should it sound exactly like that? Maybe it can be better! Who says that the way a particular piece of analogue equipment sounds is the best it can ever sound? Then you have made-up marketing nonsense such as 'circuit modelling' - please just don't fall for that :). Humans have reached many dead ends in their existence, and this is one of the bigger ones.

Now, are there any disadvantages of digital?
Basically, the digital domain works with sampled data, so you might imagine that not all the information is there, only pieces of it - like when you look closely at a photo on your monitor and see the pixels. That makes certain operations, such as resizing, very hard to do. But unlike vision, human hearing has a fairly low frequency limit. In my experience most people can't even hear 18kHz (sadly including me), so the 20kHz presented as the 'standard' limit may well be above every human. And as the theory tells us, playback then isn't an issue: a 40kHz sampling rate would be enough, and our 44.1k and 48k rates just give us some additional headroom, just in case. So if someone tells you he can hear the difference between the same file played at 48k and 96k, set up a blind test and wish him good luck.

There is an actual problem with processing, though. While analogue causes distortion, digital CAN cause aliasing. But here's the thing: it CAN cause it. Only certain algorithms do, and every one of them can be protected against it, if only by the simplest method - oversampling. The only question is the CPU power needed, which can rise substantially. Anyway, contrary to what many people think, it is rarely a problem in practice.
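
A sketch of the oversampling defence, assuming scipy is available: run the nonlinearity at a higher rate so its harmonics have room to exist, then filter and come back down. The 8x factor and the tanh nonlinearity are illustrative choices:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 15000 * t)   # 15 kHz: its harmonics exceed 22.05 kHz

def distort(x):
    return np.tanh(4.0 * x)            # a nonlinearity that creates harmonics

naive = distort(tone)                  # harmonics above fs/2 fold back as aliasing

oversampled = resample_poly(tone, 8, 1)                # 8x oversample to 352.8 kHz
protected = resample_poly(distort(oversampled), 1, 8)  # distort, filter, downsample
print(naive[:3], protected[:3])
```

The naive version folds the 45 kHz third harmonic back to an unrelated 900 Hz tone; the oversampled version filters it out before it can alias. The cost is simply eight times the work.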

So it's all up to you. The future is digital processing. Everyone will have to embrace it one day, and we guarantee there will come a day when all things analogue are considered retro (like vinyl is these days) and analogue equipment has a single purpose: recording and playback. So how about embracing the future... now? ;)