
Berlin Series - three tech questions

Discussion in 'Tips, Tricks & Talk' started by Charlotte McMillan, Mar 2, 2018.

  1. You can get good results, I feel, with CineBrass and SM compressed together, but it still feels too large, and working with articulations is messy and annoying to me when the bulk comes out in the performance of SM.

    The setup isn't too tedious, but I feel like it sounds too epic for my taste.
     
  2. Prepare yourself... it's like an hour long.

    BUUUUTTTTT

    I go through a lot of stuff, the basic concept - and a good number of examples, laid out in a way that should help you with any hurdles you might come across. I match HWB + SSS, SSS + Cinebrass, SM + SSS, SM + HWB trumpets, and trombones. One example is how to deal with a situation like Noam described, where one library doesn't go FF, and how to balance around that.

    I even show you how to set up a reasonable SM sound with Kontakt and no external reverbs (just an EQ), as well as "fine tuning" for another library.



    And I didn't pre-setup any EQs, blends, or anything - so you can listen to me tune it in real time, including matching the "tail" of different libraries using mic blending.
     
  3. #23 Kyle Judkins, Mar 4, 2018
    Last edited: Mar 4, 2018
    "heh heh ... awww yeahh..." - Kyle after he manages to repeatedly butcher infamous themes - even when they aren't hard.

    edit: Noam, I pronounced your name "Gnome" and I didn't think for a second that maybe I should have looked up the pronunciation. Forgive me if I sound like an uncultured swine.

    edit: also, sorry about the section with SM reverb through Kontakt... I realized it was off the screen (I try to use a 1920x1080 chunk of my screen to fit everything, so that it doesn't blur the text of the video).

    You can see through my webcam that I'm playing around with convolution - but basically the first "preverb", as I like to call it, is a Kontakt default called Hard Wood Room A.wav. I set the IR size to 50% (so half the length) and blended it at about 60/40, mostly dry. Saved a preset for myself called "Brass Room".
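    For anyone curious what that "half-length IR, mostly dry" move does to the signal, here's a minimal numpy sketch. The function names are my own illustration, not anything from Kontakt: truncate the impulse response to 50% with a short fade-out, then blend the convolved (wet) signal roughly 60/40 in favor of dry.

```python
import numpy as np

def truncate_ir(ir, fraction=0.5, fade_len=64):
    """Keep the first `fraction` of an impulse response, with a short fade-out
    so the truncation doesn't click."""
    n = int(len(ir) * fraction)
    out = ir[:n].astype(float).copy()
    fade = np.linspace(1.0, 0.0, min(fade_len, n))
    out[-len(fade):] *= fade
    return out

def convolve_blend(dry, ir, wet=0.4):
    """Convolve `dry` with `ir`, then mix: (1 - wet) * dry + wet * convolved."""
    wet_sig = np.convolve(dry, ir)
    # pad the dry signal so both arms are the same length
    dry_padded = np.pad(dry, (0, len(ir) - 1))
    return (1.0 - wet) * dry_padded + wet * wet_sig
```

    Halving the IR shortens the audible tail without changing the early room color much, which is why it reads as a tighter "brass room" rather than a different space.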

    Then I saved the convolution preset from the Ensemble multi from Samplemodeling, which has a custom one called "Horny Room.wav", which I've saved as a preset to use on individual patches. I didn't explain how I got this in the video, but you just open one of the multis, look at the convolution on the AUX, and save it as a preset.

    Again - I simply used the Kontakt convolutions to show you that you don't need to drop $9,872,398,723 on a mile-long list of reverbs. I blended it with HWB using Kontakt convolution with the IRs that came with it, a standard EQ, and a free Haas panner.

    I did throw a little QL Spaces on top for the French horn bit - to show you that it's even easier if you use a "glue" reverb sprinkled on top.

    Also, before I added the convolution to the other 2 trumpets, it was pretty funny listening to my confusion... I'm listening for the attacks to sit right with the left "chair", but the right chair seemed so much closer (because I didn't have a convolution on it like the piccolo did), so I would slide it back so the left chair was right, but then the right chair was in outer space XD Like I said, you can see me thinking - even down to trying to separate the chairs in HWB panning.

    Final edit: I adjust the 3 mics from Caspian brass off screen AGAIN. You can see me noodling through the webcam, but at least you know I just sat through the whole video myself. Is it possible to waste more time??? I worked all day, tried to figure out how to get my OBS to record DAW audio, set up my mic, and re-set up a webcam - then recorded an hour-long video, only to sit down and watch it to find the random errors. It's now 3:11; I'm going to sleep haha
     
  4. #24 Kyle Judkins, Mar 4, 2018
    Last edited: Mar 4, 2018
    useful timestamps: 45:15ish, I finish matching SM trumpets and HWB trumpets.

    52:45 I match the SM trumpets to HWB 6 horns.

    just so you can see(hear) the result of matching, so you can decide if the hour long video is worth the time :p

    18:30 you can hear that, using just mic blending between CineBrass and SSS, the tails are the same length (no reverb)

    10:34 combined SSS and HWB trumpets with no reverb, just a little EQ on HWB

    6:05 you can hear HWB and SSS tails matched, again just with mic blending.
     
  5. Kyle, thanks for burning the midnight oil and putting out the video. I've given it a once through and will re-watch to catch a couple of your eq/reverb settings. I don't use the Haas trick (I know Mike uses it from his tutorials), but I cheat with some early reflection/positioning plugins to match libraries. The velocity curve stuff is a tool I need to add to my toolkit to better match expressiveness.
     
  6. I only use the Haas effect on mono instruments, because it adds the feel of a stereo spread... I use a stereo spreader even on my stuff that is recorded in situ, like the Spitfire symphonic stuff... it's just a preference. I like a really wide sound, and even when I mic'd up acoustic drums I much preferred ORTF over XY.

    Just be careful panning without it, because you can easily end up with volume imbalances. The farther something is panned, the more you should bring the distance forward to compensate: so if one instrument is 30 right at distance 20, and the next one is 40 right, you should use distance 19 or 18.
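    To make the Haas idea concrete, here's a tiny numpy sketch (a hypothetical helper, not any plugin's actual code): delaying one channel of a mono source by a few milliseconds pulls the perceived image toward the earlier, undelayed side, without touching level at all.

```python
import numpy as np

SAMPLE_RATE = 48000  # assumed project rate

def haas_pan(mono, left_delay_ms=0.0, right_delay_ms=0.0):
    """Haas-style panning: delay one channel by a few ms; the image
    shifts toward the earlier (undelayed) channel."""
    def delayed(sig, ms):
        d = int(SAMPLE_RATE * ms / 1000.0)
        return np.concatenate([np.zeros(d), sig])
    left = delayed(mono, left_delay_ms)
    right = delayed(mono, right_delay_ms)
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])  # shape (2, n)
```

    Delays in the 1-20 ms range shift the image without reading as an echo; that's also why the level imbalances mentioned above creep in, since both channels stay at full gain.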

    As far as EQ settings, I suggest smaller Qs, about the width of a harmonic cluster, since essentially what you're doing is emphasizing the spectrum of the room and what frequency ranges it favors - so definitely play around moving it left and right, and use your ears.

    I'm glad you found it useful. One of these days, I keep telling myself, I'm going to start making videos for my YouTube channel with guides on this stuff... but I'd really have to plan it out ahead of time, because I'll ramble on and talk for hours.
     
  7. Don't want to hijack the thread (Berlin!), but why not high-pass the trumpets from Hollywood or Cinesamples like the SM ones? It removes that sub-150 Hz room sound that muddies up the mix with large sections. I saw you dipped them, but didn't cut them.
    Also, maybe we should move the topic to the mixing board and resume Berlin.

    Neither Noam nor Kyle prefers Berlin Brass, but the woods and percussion are pretty good in my opinion. I have used their Nocturne solo violin/cello and they're decent, but the CSS2 solo strings and Embertone solo strings are great. I'll see if I can find a YouTube walk-through of the Berlin strings for Charlotte. From the same developer, Metropolis Ark has some great sounds and uses Capsule. Like the Spitfire multi-page Kontakt interface, it isn't hard to use. Spitfire has their UACC control native in their instruments. They all have multi-mic mappable setups (Air vs. Teldex halls).
     
  8. I don't remove them completely, mainly because in a real recording you would never be able to remove them completely... and while I didn't do so there, whenever I EQ, I almost always back off the wet mix of the EQ to bring some of the natural sample back in.

    Of course it's personal preference, but if you listen to a real orchestral recording it's not like they can EQ the trombones but not the cellos...

    Of course I could have added more low-end to match, but I try not to use too much additive EQ and prefer to focus on subtractive.

    I know you don't really need to with modern EQs, but that's what I do.
     
  9. Many thanks for the video, Kyle. I'm sorry I haven't responded up to now -- I'm under the gun with a project! I've watched a bit of the video, and will post when I've seen the whole thing. But I already followed some of your advice from your earlier posts -- about placing by starting with the mic positions. That really helped! I think I've actually got HW Brass gold and Cinematic Strings 2 in the same room now! HW Brass is hella loud though -- had to slam down the faders to get them to not overpower CS2. See my screenshot below. The Gold mic position does seem to put the brass pretty far back. I'm looking forward to getting multiple mic positions for brass (in whatever library I decide on). Wouldn't mind if everyone were a bit closer. Anyway, thanks for doing all this. I'm hearing you talk about EQ and the perception of distance-- look forward to learning more about that.



    Screen Shot 2018-03-04 at 5.15.57 PM.png
     
  10. Mike has a class on template balancing that covers that pretty well... of course that's one of many things the class goes over

    But in general, the upper region of the EQ is the amount of presence, while the low mids are the proximity - which is one way I've always described it.

    In the template balancing video, Mike actually uses a close-up recording of him speaking, and step by step uses EQ and then an impulse response to push it back into the hall, rather than adding reverb to it.
     
  11. Well, it's good that you turned Hollywood Brass down rather than everything else up... it's good practice, especially since you want enough room that you aren't clipping if you have a big unison brass line... You can always turn up the master fader when it comes to rendering, but I always mix extremely low, render, see how many decibels I have left, then just slide the master fader up.
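    That "mix low, render, check what's left, slide the master up" workflow is easy to sketch in numpy. These are hypothetical helpers to illustrate the arithmetic, not any DAW's actual behavior:

```python
import numpy as np

def headroom_db(mix, ceiling=1.0):
    """How many dB of headroom remain before the mix clips at `ceiling`."""
    peak = np.max(np.abs(mix))
    return 20.0 * np.log10(ceiling / peak)

def apply_master_gain(mix, gain_db):
    """Slide the 'master fader' up by gain_db (done once, at render time)."""
    return mix * (10.0 ** (gain_db / 20.0))
```

    A mix peaking at 0.5 has about 6 dB of headroom; raising the whole render by that amount brings the peak to the ceiling without having touched any channel balance.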
     
  12. I used SM Trumpets, Trombones and Tuba with Adventure horns and SSS in my piece here (shameless self-plug):

    WAV Stream



    There's a touch of SM Horns and Adventure Trombones here and there, but the rest of the brass is as specified above. It gets brassy in the climax at the end - you can just skip there and hear how it blends. So there's percussion and strings recorded in a hall, half of woodwinds are recorded on a dry stage, some are modeled, some are recorded in a hall, some brass is recorded on a scoring stage and the rest of it is modeled.

    Generally, the idea for SM stuff is:
    1. Open Altiverb, pick a room, decide whether you're going to be using different IRs for each instrument or you'll just increase the size a bit (make sure the difference is at least 5% within sections - e.g. trumpets are 100%, 105%, 110%, and then you can just copy that over to trombones)
    2. Pick the farthest mic of that room, turn on the positioner and push it all the way back (or in the case of horns just reduce the amount of ER - no direct sound) and add some pre-delay - this is now your ambient mic, route it to channels 3/4 or have this entire instance be on a dedicated track
    3. Pick the closest mic, make sure you select the mono input, pull the positioner all the way to the front - this is now your close mic, route it to channels 5/6 or leave it if it's on a dedicated track
    4. Pick the middle mic, set the positioner so that you have the "tree" sound, drop VSS2 on top of this and place your instrument within the GUI, disable ER (select the empty field), and select your preferred main mic setup (I like Decca Tree). You can also try doing this by having a delay with 3 taps panned left, middle, and right. The middle one should be mono and down by 4-6 dB. Leave the middle mic at 0 delay, and if you're doing this on, let's say, horns, add ~8 ms to the left and ~14 ms to the right (adjust until it sounds good) - the horn hits the middle mic first, then the left one, then the right one. Decrease the volume of the right one until the position of the horn sounds right.
    5. Now have a channel mixer unless each Altiverb has its own track
    6. Dial in your preferred mic mix
    If the ambience is too much, disable the Tail component in all Altiverb instances and drop your preferred reverb onto the bus. It'll work. Or, if you want a full "record" sound, pick a recording space that's not so ambient and then drop reverb on top of that. Altiverb is what I use; I don't have MIR or Spaces, but the principle is the same. You definitely don't want a dry/wet IR for Sample Modeling - the sound needs to be fully "eaten" by the room. You need the IR to color the direct sound too, which will account for the microphone response and air absorption. Altiverb has this (color on the direct component), and you can hear how the distance changes depending on what mic position you choose. A lot of other IRs I used don't - you'll have some EQ work to do before the reverb, if that's your case, in order to simulate the instrument being away. Or you can try with Proximity (disable true gain - no point).
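    The three-tap delay trick from step 4 can be sketched in numpy. The delay times and gains below are just the example numbers from the post, and `three_tap_tree` is a hypothetical helper, not a real plugin:

```python
import numpy as np

SR = 48000  # assumed sample rate

def three_tap_tree(mono, left_ms=8.0, right_ms=14.0,
                   mid_db=-5.0, right_gain=0.8):
    """Rough 'Decca tree' fake: the middle mic hears the source first
    (mono, a few dB down), then the left tap, then the right tap."""
    def delay(sig, ms):
        d = int(SR * ms / 1000.0)
        return np.concatenate([np.zeros(d), sig])
    mid = mono * (10.0 ** (mid_db / 20.0))   # middle tap, ~4-6 dB down
    left = delay(mono, left_ms)
    right = delay(mono, right_ms) * right_gain  # turned down to place the source
    n = max(len(left), len(right))
    pad = lambda s: np.pad(s, (0, n - len(s)))
    # the middle tap is mono, so it feeds both channels equally
    L = pad(mid) + pad(left)
    R = pad(mid) + pad(right)
    return np.stack([L, R])
```

    The arrival-order logic (middle first, then left, then right for a horn on the left side of the stage) is what creates the sense of a real spaced-mic array rather than a pan-pot position.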
     
  13. I don't think 100% wet SM is the only way, it's just an easier way to get something pleasing.

    I think it's just a slippery-slope issue - it's easy to think it sounds good on its own, but if you have to make the rest of the orchestra match the wetness, you'll be swimming in soup and lose all the punch and bite of fast reps.

    That said, what you describe doing in Altiverb is just EQing. The IR color = EQing the direct signal. The main takeaway is that you can manually EQ and get a better sound, rather than picking the "closest to what you want" from a bunch of IRs. Since those IRs aren't going to match the libraries you own (unless you use all SM/Audio Modeling stuff and just use the same IR), it's more important that you develop an ear and skills EQing. This is especially useful when using IR reverbs, because if they have dampening features, you'll be able to hear what frequency ranges a room from a library tends to build up in - and you'll be able to properly dampen an IR to better match the sonic character of the room in question.

    Maybe I'll make a video on how to recreate rooms better tonight :) Is that something interesting?
     
  14. Here was that information you never asked for!



    lol.

    I just made a video, using any ole' IR and some comparison between a close mic and an ambient mic to figure out what the major additions of the room would be - so that it could be applied to a) other libraries, and b) the basic principle behind figuring out what a room's natural dampening features are... Which ranges tend to "hang out longer" and which fade quicker are a key point in nailing that reverb - way more than the name of the IR. As you'll see, I end up using the first IR I could find with a long enough natural tail, and despite sounding nothing like it in the beginning, it shapes up pretty quickly, simply by virtue of sculpting the basic dampening/EQ based on the differences between the close and ambient mics. And trust me, not a single person on the forum has heard of the IR - unless you go to church there, I doubt it's a sought-after gem of an IR.

    As a result, I use 1 instance of Altiverb to turn a close-mic tuba into the ambient-mic tuba. Then I throw it onto the SM trumpet bus from the other video, adjust the distance of the SM trumpets, then blend the reverb to taste to get that distance/tail correct. Of course, if I were trying to really put SM brass into the "ambient", I would have to use a little more EQ, but in this case it sounds pretty close for a copy-paste + blending the wet/dry.

    That said, you already made me list my brass library collection in shame - you'll have to kill me to get me to list the countless reverbs I own. Not the least of which is Altiverb (as you saw), but also every single RoomPack + MIR Pro, B2, Seventh Heaven, Spaces... and the list goes on. I'm just glad I never bought Spat - though it is ALWAYS tempting.
     
  15. 100% wet in this case just means that no sound from the actual instrument is coming through. Which is how it should be IMO. It should go through air absorption and mic. I mean, even the dev says that.

    Just like an IR of a reverb is "just some reverb". Convolution will recreate the response of the microphone and account for the air absorption of the room at a resolution that you will never completely match with just an EQ. Not to mention the IR of a stereo mic image. You can then EQ the final result.

    Po-tay-to - po-tah-to. One way or the other, it's pushing it back in the room that matters. I just like the mic IRs too.

    Pro-Q 2 has spectral matching, so you can just use that instead of guessing where the differences are. It's at the bottom. Create a reference curve, save it, load it into the other instance, and it will adjust automatically as you play. For optimal results, make sure you play the same thing. Obviously it won't change the overtone balance, but it's enough to fool you - since we're already doing fakery of the highest degree.

    upload_2018-3-5_5-52-55.png

    That pretty much settles the tone you're trying to match. Pro-Q makes it easy.
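    For those without Pro-Q 2, the spirit of spectral matching is simple to sketch in numpy. This is a rough illustration of the idea, nothing like FabFilter's actual algorithm: average both signals' magnitude spectra into coarse bands, then read off the per-band gain you'd apply to move the target's tone toward the reference.

```python
import numpy as np

def matching_curve_db(reference, target, n_fft=4096, n_bands=32):
    """Crude spectral match: per-band gain (in dB) to apply to `target`
    so its average tone moves toward `reference`."""
    def avg_spectrum(sig):
        mag = np.abs(np.fft.rfft(sig, n_fft))
        # average into coarse bands: we want overall tone, not fine detail
        bands = np.array_split(mag, n_bands)
        return np.array([b.mean() for b in bands])
    ref, tgt = avg_spectrum(reference), avg_spectrum(target)
    eps = 1e-12  # guard against silent bands
    return 20.0 * np.log10((ref + eps) / (tgt + eps))
```

    Feeding it the same signal gives a flat 0 dB curve, and a quieter copy of the reference comes back as a uniform boost - which is exactly the "play the same thing for optimal results" advice above.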

    What you're missing on the tail is the correct dampening needed to match the tail of Lyndhurst - notice the high end: the 10 kHz area is more dampened than the 12-20 kHz area. You can't EQ that - it's about damping. So far, the only reverb I've found that can do this is B2. And it sounds beautiful.
     
  16. Why not turn down the gain on the channel input and save the fader headroom (something you would do with an analog mixer)? (I don't know the channel strip in DP well enough to point it out, but search "gain staging" for more info.)
    You can also reduce the brass output in the Play mixer or can turn down CC7. This won't affect the timbre of the brass, just the output volume.
    Glad you're finding success with the multi-library integration!
     
  17. I did lower the fader in VE Pro. And yes, that could be done in PLAY as well, though I usually prefer not to touch those faders. Anyway, it's kind of a rough mix right now. Just trying to get everything placed in the room. Still need to refine the sound.
     
  18. Not everyone has a tone-matching EQ, and there is no shortcut to developing basic EQ/mixing skills. With the cost of libraries, not everyone has $200 to spend on a single EQ (even if I don't regret a single penny). An IR is a lot more complicated than an EQ. However, an IR "color" added to a signal is the same thing as an EQ - which, again, you should be able to sculpt on your own if you want to send people finished products. And you shouldn't just use 100% wet because you can... there is not only no rulebook that says SM needs to be 100% wet, but the rulebook probably says "no rules - use your ears".

    If you want to bring forward/push back an instrument that has 1 mic (like Hollywood Brass), you'd better know how to use an EQ. Throwing a far IR from Altiverb at 100% wet isn't going to fix low-end build-up either... If you learn to EQ out some of the proximity effect before it hits a reverb, you can dry/wet blend to taste. If you can't make SM brass sound reasonable before it hits a reverb, you'll be limited to 100% wet only... which isn't how anything sounds in real life... that 2.2-second scoring-stage IR doesn't sound like a scoring stage - it sounds like a cathedral-length tail on SM without being able to cut the blend down. In the same manner, I blend mic signals from different libraries to "match the tail" - it's a tool that keeps you from having the brass-recorded-in-an-empty-warehouse sound as the only reliable option. Sam made great mockups, and everyone kind of just parroted his 100% wet / "dry sound must die" sound.

    Here is a challenge: make SM sound the best you can without any reverb, then use as *little* wetness as you can on a reverb. You'll be able to improve the sound of your mixes on just about every kind of instrument/library.
     
  19. I don't own Gold, but I own the full Hollywood Diamond suite. All of the default Diamond instruments (except percussion, oddly enough) load at +6.2 dB (resave them set to unity, 0 dB, in the Play mixer). If the signal is hot coming into your DAW, then I'd try to lower it as close to the source as possible (in Play); otherwise you're adding in one step (the instrument) and subtracting down the signal flow (in VEP), which is redundant.

    Blakus makes a comment in the following video about this very issue (it is also relevant to this thread as it discusses similar approaches to putting SM instruments in the same space with other libraries like Berlin Winds- skip to 24:12 for the gain commentary, but the whole video is pretty good):


    It's also good practice to make sure your Kontakt instrument volume is at 0db (there's an option in the Options>Engine to set newly loaded instruments to -6db if you prefer). Leaving fader headroom can help with some plugin performance and allow for more control over volume automation.

    Enjoyed the piece you linked at the top of the thread by the way. It would be interesting to see you revisit it with some of your new libraries and techniques!
     
  20. Gain staging is pretty important... Even if I'm balancing to a professional score, I use like 80% volume on that and balance for that. Not least because it gives you room for more - you don't want a huge forte tutti moment ruined by clipping, or worse, slamming into a limiter and getting squashed.
     
