Mastering AI Music
An expert audio engineer explains how he transforms Suno tunes into professional songs
Max Genie is the first audio professional to specialize in transforming AI music into analog-sounding masterpieces. Alongside an extensive catalogue of traditional productions, he has mastered a number of AI-produced tracks by Vibe Patrol and Soulsigma.
AI musicians can significantly improve their sound with his surprisingly affordable human analog mastering services at MixGenie.co; the quality of the final product is audibly better than what the best automastering services can produce.
I wanted to share a few thoughts I’ve had about the new Suno feature which separates songs into stems.
First about me and why I’m vaguely qualified to comment on this. I’m a music producer with something like 200 million Spotify streams. I’ve also mixed songs for the likes of The 1975 and Bastille. But my favourite thing to do is work with new artists and people who might not think they’re “that good”. And that’s where Suno comes in.
Historically most of my clients are self-styled home musicians who just love making music and want to share their creations with the world. And it’s my great pleasure to help sculpt their Frankenstein creations into new-age Apollos. At least that’s the idea.
But more and more I’m being contacted by people who have created music using Suno or similar and want to make it better.
My initial reaction to AI creations as a musician was predictable:
“P*ss off you talentless morons and go learn to make some real music…”
That is, until I tried it for myself: and the penny dropped. Some of the songs and lyrics Suno created hit me in the soft spots. And I could, with some thought and gentle nudging, create something myself.
But herein lies the problem. It's an amazing service and an amazing piece of coding, and some of its creations genuinely sound heartfelt. But I can still hear that it's made by a machine. Something in the audio says "I was made synthetically", and as a music professional it's something I can sort of fix. But even with the stems, only sort of, and here's why.
Imagine, if you will, that I asked an AI to make a picture of a chicken and leek pie. It's got this no problem, and it probably even throws in some peas and mash on the side. Incredible stuff.
But now ask it to show you the component parts (stems). You're asking the AI to deconstruct its chicken and leek pie into chicken, leeks, onions and flour.
The AI doesn't actually know what's in the pie, it just knows what it looks like. And it's the same with songs: the AI doesn't know what's in the song, just the end result of it all mixed together. Going back to our pie, even if it did know what was inside and could deconstruct it, your chicken has gravy on it, your leeks are all soggy, and good luck making anything different from the flour, because it's now just mushy crumbled pastry.
And so it is with the Suno stem creator. It's looking at the song and trying to pull apart the pie. As you can imagine, the results are OK (you can probably re-fry that chicken), but it's not the same as creating fresh, original audio parts. You'll probably get some reasonable quality from the vocals and maybe the drums. But that acoustic guitar part behind the vocal? Or the amazing string part it came up with behind the lead guitar? Probably not.
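For readers who like to see the logic spelled out: the reason un-mixing is so hard isn't just implementation quality, it's that a finished mix genuinely discards information about its parts. Here's a toy Python sketch (nothing to do with Suno's actual algorithm) showing that two completely different sets of "stems" can sum to the exact same mix, so there is no unique way to recover the originals from the mixdown alone:

```python
# Toy demonstration: mixing stems is a lossy, non-invertible operation.
# The "stems" here are tiny made-up lists of sample values, not real audio.

def mixdown(*stems):
    """Sum the stems sample by sample, like bouncing tracks to a stereo mix."""
    return [sum(samples) for samples in zip(*stems)]

# Two different pairs of hypothetical stems ("vocal" + "guitar"):
vocal_a, guitar_a = [1.0, 2.0, 3.0], [4.0, 3.0, 2.0]
vocal_b, guitar_b = [2.0, 1.0, 4.0], [3.0, 4.0, 1.0]

mix_a = mixdown(vocal_a, guitar_a)
mix_b = mixdown(vocal_b, guitar_b)

print(mix_a == mix_b)                               # True: identical mixes...
print((vocal_a, guitar_a) == (vocal_b, guitar_b))   # False: ...from different stems
```

Because many different stem sets produce the same mix, any stem separator (AI or otherwise) has to guess which decomposition is the "right" one, and the guess degrades for quiet or heavily blended parts like that acoustic guitar behind the vocal.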
So what can we do to improve the quality of AI songs and what use is the stem function in Suno?
The quickest and simplest way to get better audio quality is to try re-mastering the track with a human mastering engineer (see here). Humans are much more intuitive at fixing problems with audio than machines are, and you don't even need the stem function for that. The results can be surprising, so it's definitely worth checking out.
But what about the stems? I've jotted down a few uses for them here; feel free to comment if you can think of others.
1. Plug a section of the instrumental into the new uploader in Suno and create an entirely different song around your favourite instrument or part.
2. Use the separated lead vocal and remix a new song around it using the audio uploader.
3. Get real musicians to create a new backing track based on the stems and put the AI vocal back on top. You could even do a mix of both: keep parts of the AI track and only replace the instruments that sound bad.
4. Flip the above: use the AI instrumental stems, but put a real human vocalist (or even yourself) on top as the singer.
5. Rearrange the song structure using the stems, then either use that audio directly, or re-upload it to Suno and have the AI recreate the song based on the new structure.
6. Use the stems to attempt a new mix of the song. I've put this last given the audio degradation mentioned above; there's no guarantee you'll get a better result. Most likely you'll need to replace some parts, and at that point you're looking at point 3.
So whilst there are lots of great uses for this feature, until AI starts building the track from the bottom up, creating stems first and mixing them into songs later, we are going to remain in a situation where songs sound just a little bit synthetic. And for me at least, trying to create stems where there were none is a complex solution to a fundamental problem: AI needs to start at the start, rather than working backwards from the end.
Max Genie: Human-Controlled Analog Mastering for AI-Generated Songs.
All 13 songs of The Only Skull by Soulsigma were analog-mastered by Max Genie. You can hear an example of the quality he brings to the music, which is much more rock-oriented than Vibe Patrol's, in Once There Was Sorrow.

“If it's a good song, you get it down, it's a good song.”
- Peggy McCreary, Sound Engineer, Controversy, 1999, Purple Rain
Thanks to your post I used Suno for the first time, and my first observation is that creating songs with Suno is very fun, almost addictive. This is from someone who barely made second trumpet in the school band, but did win a choral award in high school. So I'm definitely not tone deaf, but creating music is far beyond my ability. It is simply an amazing technology.
Just wondering about copyright and the commercial value of the songs you created. Can you monetize them, and do you own the rights? From what I understand, which isn't much, it is very hard to actually own the rights to songs or other media created with AI.
It would be cool if AI could write or edit musical notation scores (Finale, Sibelius) the way it can for prose.