ᗝᑎᗴ ᗷᒪᎥᑎᗪ ᗰꖏᑌᔕᗴ

Un-Musician. Un-Affiliated. Experimental…? Apparently but almost certainly genre-confused.

  • 1 Post
  • 7 Comments
Joined 10M ago
Cake day: Nov 21, 2023


Are you referring to unmaking audio? I use Steinberg’s SpectraLayers Pro; as a Cubase user, whichever version I installed last becomes resident as an extension, though it also runs standalone. I use it for things like audio repair and manipulation.

I think the other ‘big name’ in the field would be iZotope’s RX 10.

None of them are perfect, and it can be quite tricky to isolate to a forensic depth, but I do know that SpectraLayers has better tool customisation and thresholds, and also better layer management.

I suppose, like most audio things, people will tell you that the one they use is best, so I wouldn’t just take my word for it.


AI can definitely ‘see’ things I can’t in a spectral layer. It’s not perfect, none of them are, but mopping up after them is getting easier as they improve. I just know there’s going to be a day when I can’t distinguish between a human tune and an AI one, and I find that terrifying.

Thankfully, neither you nor I are making that kind of modern ‘homogemastered’ mainstream stuff.


The faucet (tap… I’m English) analogy is perfect and yeah… there are so many of us making music now that the arena is absolutely stuffed… maybe AI-generated music has a place…? I dunno. Not for me… yet.


I don’t work in a typical or mainstream genre either. My own mixing methods are unorthodox and I generally master ‘un-loud’, so things like Ozone wouldn’t help me anyway. My guides are still reference tracks, but yes, I can see these tools helping a great deal in some production for some people.



Where do you stand on AI in music production?
A conversation popped up on another platform about the role of AI in music production, generally as it's used in the mastering process. Now I'm not sure how much AI that actually involves, and I see it more as a set of rules that will map your song or music to a contemporary 'good mix'... basically controlling the EQ, RMS peak and LUFS. Things like this are becoming more and more prominent on music hosting sites. I do use AI in some processing, as I use software like Steinberg's SpectraLayers to 'un-layer' and un-mix tonal qualities and so on, but I don't use it in mastering. I do that the old-fashioned way. Your thoughts..? Yay or nay..?
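For what it’s worth, the numbers those auto-mastering services chase are just measurements in the end. A rough pure-Python sketch of peak and RMS in dBFS (a real LUFS meter adds K-weighting and gating, which this deliberately skips):

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS for float samples in the range -1.0..1.0."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """Average (RMS) level in dBFS; loudness meters build on this idea."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# One second of a full-scale 440 Hz sine: peak ~0 dBFS, RMS ~-3 dBFS
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
```

The point being: the ‘AI’ part is mostly deciding what to do to hit targets like these, not the measuring itself.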

When you say 50%, are you referring to the ‘middle’ of the frequency curve…? Try separating: low- and high-pass at about 150-200Hz, then centre the low band and keep it clean, add some kind of saturation to the high band, then pan two copies of that, nothing mad or hard L/R. If the bass conflicts with the kick for space, give the kick priority, using either dynamic EQ or a multiband compressor on a side chain.
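If it helps to see the split written out, here’s a rough sketch of the idea in Python. The one-pole filter and the 180Hz default are illustrative choices, not a recipe; a real crossover in a DAW would use steeper, phase-matched filters:

```python
import math

def split_bands(samples, cutoff_hz=180.0, sample_rate=44100):
    """Split a mono signal into low and high bands with a one-pole low-pass.
    The high band is the complement, so low + high reconstructs the input."""
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    low, high, state = [], [], 0.0
    for x in samples:
        state = (1 - a) * x + a * state  # one-pole low-pass
        low.append(state)
        high.append(x - state)           # roughly everything above the cutoff
    return low, high

def saturate(samples, drive=2.0):
    """Gentle tanh saturation for the high band, normalised so +/-1 stays +/-1."""
    return [math.tanh(drive * x) / math.tanh(drive) for x in samples]
```

Low band down the middle, two saturated copies of the high band panned a little left and right.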

There is no right or wrong here, just ‘what works’ but finding the sweet spot in these strategies might help.
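The ‘kick priority’ idea, reduced to its crudest form, is an envelope follower on the kick that dips the bass gain while the kick is loud. The threshold, depth and release values here are made-up illustrations, not presets:

```python
def duck(bass, kick, threshold=0.2, depth=0.5, release=0.999):
    """Crude sidechain ducker: scale the bass down while the kick is above
    the threshold. Real dynamic EQ / multiband compression would do this
    per band, with smoothed gain changes rather than a hard switch."""
    env, out = 0.0, []
    for b, k in zip(bass, kick):
        env = max(abs(k), env * release)          # fast attack, slow release
        gain = (1.0 - depth) if env > threshold else 1.0
        out.append(b * gain)
    return out
```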

Someone below mentioned double tracking the guitar by replaying. This is a good idea but make sure your timings are hitting, especially on supporting ‘power’ chords, otherwise you’ll also lose punch in the final mix. If you’re double tracking, listen in mono too. You will possibly have phasing issues.
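If you want the mono check as a concrete number, it’s just correlation: +1 means the doubled takes are fully in phase, anything heading toward -1 means they’ll cancel when summed. A quick sketch:

```python
import math

def mono_sum(left, right):
    """What a mono fold-down does: average the two channels."""
    return [(l + r) / 2 for l, r in zip(left, right)]

def phase_correlation(left, right):
    """Normalised correlation: +1 in phase, 0 unrelated, -1 out of phase."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0
```

Most DAW correlation meters are showing you a short-term version of exactly this.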

Enjoy.


Definitely keep things like vocals, bass, kick etc. straight down the middle. You could consider sending the guitar to a separate bus, adding some soft effects and then panning those. Depending on the tone of your bass, you can duplicate it, high-pass one copy and low-pass the other, send the low down the middle and slightly pan the brighter track. You could achieve quite a bit of width just doing this, without recourse to stereo imaging, which of course you can still use.
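For the panning itself, the usual maths behind the knob is an equal-power pan law, so the overall level doesn’t dip or bump as you move a track off centre. A sketch, with position running from -1 (hard left) to +1 (hard right):

```python
import math

def pan(samples, position):
    """Equal-power pan: cos/sin gains keep left^2 + right^2 constant."""
    angle = (position + 1.0) * math.pi / 4.0   # map -1..+1 to 0..pi/2
    gain_l, gain_r = math.cos(angle), math.sin(angle)
    return [s * gain_l for s in samples], [s * gain_r for s in samples]
```

‘Slightly pan the brighter track’ is then just something like pan(bright, 0.2) against pan(low, 0.0).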