In the next few months there will be a great kerfuffle about Apple's new delivery format, which they have actually been championing for quite some time but which has not yet truly taken off.
Bono from U2 has been talking about a new artist-driven format on the way, and I'm pretty sure it will be some kind of iTunes LP-style musical product built around this new audio standard in a nifty new package. Currently the AAC files Apple delivers to consumers are not good enough for anything except their Apple ear buds, so something has to get better.
I myself have been a bit perplexed by the level of detail I now need to get into in order to create a great-sounding "Mastered for iTunes" (MFIT) product, so I thought I would share some insights I have learned and am still learning.
Why do we need Mastered for iTunes?
OK, so firstly, don't take everything I say as the de facto fact, as I am not a professional mastering engineer. I am, however, a record producer with 20+ years of recording behind me, I have attended every mastering session, and I have been involved in the mastering of all my music, so I do understand a lot more than just the recording of music. So here goes.
In the music business we have mainly had vinyl and CDs to deliver our musical experiences, and the two deal with the music in different ways. For CD you have to create a file at a sample rate of 44.1kHz so that it plays on a CD player; with vinyl you cut the disc and end up with an analogue reproduction in the form of a record, which raises the famous argument that music sounds better on vinyl. Both formats have been accepted as the standard for listening for many years, but things are changing.
If you were serious about the making of your music, you would hopefully be recording it at a higher sample rate (88.2, 96 or 192kHz) than the final version people would hear (44.1kHz CD), leaving room to downsample to 44.1kHz for the CD. By doing this you could be confident there would be no noticeable loss of sound quality in the downsampled CD version of your studio master.
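To make that workflow concrete, here's a minimal sketch in Python of taking a 96kHz studio signal down to CD's 44.1kHz. Note this uses a naive linear-interpolation resampler purely for illustration; real sample-rate converters use band-limited (anti-aliasing) filters, and all the names here are my own invention:

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    """Naive linear-interpolation resampler (illustration only;
    professional converters use band-limited filtering to avoid aliasing)."""
    n_out = int(round(len(x) * sr_out / sr_in))
    t_in = np.arange(len(x)) / sr_in       # original sample times
    t_out = np.arange(n_out) / sr_out      # target sample times
    return np.interp(t_out, t_in, x)

# one second of a 1kHz tone "recorded" at 96kHz
sr_studio, sr_cd = 96_000, 44_100
t = np.arange(sr_studio) / sr_studio
studio_master = np.sin(2 * np.pi * 1000 * t)

cd_master = resample_linear(studio_master, sr_studio, sr_cd)
print(len(studio_master), len(cd_master))   # 96000 44100
```

The point is simply that the high-rate master contains everything the CD version needs, so going down is safe; as we'll see below, going the other way is not symmetrical.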
The thing is, sample rates above 44.1kHz have generally been treated as a professional safety margin, allowing headroom in sonic quality rather than adding anything noticeable to the human ear (this is a contentious argument). I say this because we have mostly been listening to CDs, which means we have all been listening at 44.1kHz.
Because there are so many different ways of recording your final studio mix, there has never been a true standard for recording higher than 44.1kHz. Some artists record their masters at 44.1kHz and don't need to downsample to make their CD; some record above 44.1kHz just for safety; and those who could afford to do it all at 192kHz might still not use that higher sample rate, because they might be putting their final master onto DAT (Digital Audio Tape), which could only record at 88.2kHz. If you were recording at the top rate of 192kHz you would have to store it as a digital file, which meant bigger files on unreliable disk space, plus you would need very expensive digital converters. So even if you had a 192kHz button, you were most likely recording for DAT at 88.2kHz, as I was, and since DAT was twice what you needed for CD, you would have thought that was great.
There is also an ongoing argument among some audiophiles about the change in sound at 192kHz, as digital can be quite cold and harsh to some people's ears.
All is changing
Mastered for iTunes sets a standard for all artists to work by: the CD is pretty much dead and digital is now the de facto way of listening to music, so music can now be played and stored at rates higher than CD's 44.1kHz.
Most people (I think) would agree that classical music sounds better at sample rates above 44.1kHz.
So in theory you can hear a noticeable difference when using higher-sample-rate converters. It can still be argued whether the conversion is always a positive one on all styles of music, as so many other mixing factors can play a part in the final sound, as I will now try to point out.
What are the current standards?
The problem is that, up until now, masters have not been made to any defined high-resolution standard, since anything recorded above CD quality was treated as protection against quality loss on the way to CD (44.1kHz). But in order to be "Mastered for iTunes" you have to either remix your music again or upscale the masters you already have to this new 192kHz standard Apple has set.
I hope this sounds straightforward so far as it now gets a bit confusing.
In practice, this is what's happening
So you made a few tracks in one studio 10 years ago: your DAT (Digital Audio Tape) master is at 44.1kHz, you have some at 48kHz, and some you just lost and only have on vinyl. What do you do? Most probably you don't have the gear to remix all that old music again, so your DATs and records are all you have.
This will be the dilemma for most artists, and the answer is a little confusing. The rumour is that Apple intends very soon to release a new higher-quality format for the public to download (using the Mastered for iTunes spec). So if you want in, you have to start remastering all your old catalogue, but are you really going to improve it?
The thing is, if you were mixing at brick-wall levels when you made those old tracks, you cannot change those settings now. In fact all you can do is upscale the 44.1kHz or 48kHz sample rate to 192kHz, and doing this won't make any difference; it might actually introduce more artifacts into the sound during the upscaling.
(Imagine trying to put more detail into a picture that is already printed by just rescanning it at a higher resolution: you're still only going to have the same picture, and nothing more will be added to its detail.)
So if you're just recording your music from vinyl at 192kHz, how can this be an improvement on an already fixed master?
Recording at the highest sample rate won't really change a master that is already set in stone; it just records what you already have. So I'm not seeing the benefit for old masters apart from meeting a new standard for delivery to Apple.
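You can actually see this in a few lines of Python (numpy only; the upsampler here is naive linear interpolation, chosen purely for illustration). Upsampling a 44.1kHz signal to 192kHz puts essentially no energy above the original 22.05kHz ceiling; whatever tiny amount does appear up there is interpolation artifact, not recovered detail:

```python
import numpy as np

def upsample_linear(x, sr_in, sr_out):
    """Naive linear-interpolation upsampler (illustration only)."""
    n_out = int(round(len(x) * sr_out / sr_in))
    return np.interp(np.arange(n_out) / sr_out,
                     np.arange(len(x)) / sr_in, x)

sr_old, sr_new = 44_100, 192_000
t = np.arange(sr_old) / sr_old
old_master = np.sin(2 * np.pi * 1000 * t)     # stand-in for a fixed 44.1kHz master
upscaled = upsample_linear(old_master, sr_old, sr_new)

# how much of the 192kHz file's energy sits above the old 22.05kHz limit?
spectrum = np.abs(np.fft.rfft(upscaled)) ** 2
freqs = np.fft.rfftfreq(len(upscaled), d=1 / sr_new)
above_old_nyquist = spectrum[freqs > sr_old / 2].sum() / spectrum.sum()
print(f"energy above 22.05kHz: {above_old_nyquist:.6%}")   # vanishingly small
```

The bigger file buys you nothing: all the musical information still lives below 22.05kHz, exactly like the rescanned photograph.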
Apple has a caveat that covers this dilemma in their documents as they state:
In order to qualify as Mastered for iTunes Remastered content must begin with a high-resolution digitization of the original analog source and must sound noticeably superior to the previously released version.
So once it is recorded at the higher sample rate, you have to tweak it a bit to try and improve on the master you already made (the best you could)?
So what makes Mastered for iTunes different?
My understanding so far is that it's a master with no, or minimal, clipping, created at the highest sample rate possible, 192kHz.
What this means for producers is that if you like 'that wall of sound' you might have to really change the way you work: in the new mastering techniques clipping has got to go, so if you redline your mix it might not pass as a Mastered for iTunes product.
It’s common knowledge that clipping is not good in the digital domain but can sound good on vinyl when you want a loud ‘wall of sound’.
Mastered for iTunes is more about fidelity, and clipping will, by its nature, lop the top off everything and compress it into a wall, and that is not fidelity: most of the nuances, if not all of them, get taken out on impact with the red line of death that is clipping.
This is a learning curve for all producers: take the mix level down at least 2dB and never clip. But this also means you don't have that loud sound anymore, unless you are very on it with compressors on each channel before your stereo mix, as I'm sure any number of mastering pros will tell you. Most dance artists always say "make sure it's loud".
Well, artists will now have to do this before the mastering.
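If you want to sanity-check your own mixes, a tiny Python sketch shows what pulling the level down before full scale actually does. The 1.0 ceiling, the 440Hz tone, and the -2dB figure are just the values discussed above, and the helper names are my own:

```python
import numpy as np

def db_to_gain(db):
    """Convert a decibel change to a linear gain factor."""
    return 10 ** (db / 20)

def clipped_samples(x, ceiling=1.0):
    """Count samples flattened at or above digital full scale."""
    return int(np.sum(np.abs(x) >= ceiling))

t = np.arange(44_100) / 44_100
peaks = 1.2 * np.sin(2 * np.pi * 440 * t)     # a mix whose peaks exceed full scale

hot_mix = np.clip(peaks, -1.0, 1.0)           # the redlined, brick-walled version
print(clipped_samples(hot_mix))               # thousands of flattened samples

safe_mix = db_to_gain(-2.0) * peaks           # same mix, pulled down 2dB first
print(clipped_samples(safe_mix), np.abs(safe_mix).max())
```

A -2dB trim is a linear gain of roughly 0.794, which is enough here to bring every peak back under full scale: the safe version has zero clipped samples, at the cost of the raw loudness the limiter was faking.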
If you're a budding artist and don't want to be left out in the cold when the new iTunes package is revealed in a few months' time, you had better start mixing your masters with no clipping and mastering them for MFIT (Mastered for iTunes), because once this new product Apple is planning is released, I'm pretty sure Mastered for iTunes will be the de facto standard.