Paavo Jumppanen
Remarkable 7.1 from Live Stereo Recording
This video details the process and the result of transforming a problematic live stereo mix recording from a PA mixing desk into a convincing and thoroughly enjoyable 7.1 surround mix. This is all achieved through the use of Har-Bal Harmonic Balancer to balance the tone and control the dynamics of the raw stereo track, and Har-Bal Spatial Pan and Har-Bal Synthetic Space plugins to provide the 7.1 synthetic representation of the live venue acoustic. The result speaks for itself.
These tools can be purchased from the Har-Bal website ( www.har-bal.com ) with a 75% discount by using the coupon code HB7 on checkout. This coupon is valid until the end of October, 2024.
Chapters:
0:00 Front Matter
0:56 Introduction
2:13 Introducing TSA
3:35 The concert setup
4:50 The source material and processing
8:50 This is a binaural video so use headphones to evaluate
11:29 Cubase comparison project
13:04 Comparative playback
14:11 The reasoning why the raw track doesn't sound convincing
18:35 The Harmonic Balancer treatment
21:09 The mix treatment with Spatial Pan and Synthetic Space
29:04 Complete Playback of the 7.1 Demonstration Track
Views: 91


For Ukraine : A dedication to the people of Ukraine
50 views · a month ago
This is a recording I did of myself playing my 2020 composition "For Ukraine", a piece for classical guitar. Although I can "basically" play it and know how I want it to sound, my articulation has limitations. Much more practice on my part is needed to perfect it. Structurally, the piece is built in three parts as program music. The opening in A major is a representation of "peace and tranqui...
Music I Like On Recordings I Hate #6 - Great Expectations by Living Colour
87 views · a month ago
Here I re-master the track "Great Expectations" from the album "Collidescope", released by Living Colour in 2003. I find this album particularly problematic for its variable tonal balance: a few tracks present really well while others have objectionable tonal balance. "Great Expectations" falls into the latter category, sounding somewhat wooden with a boxy bass guitar sound. Har-Bal Harm...
Music I Love on Recordings I Hate #5 - Heat of the Night by Bryan Adams
128 views · a month ago
In this video I show how I approach re-mastering "Heat of the Night" from the Bryan Adams album "Into the Fire". This Bob Clearmountain-mixed album, whilst good, does not reach the level of quality that he obtained on the previous two Bryan Adams releases. Whilst I find it listenable, in the long term I find it fatiguing. This stems from a range of issues discussed in this video along with the ...
Atmosphere & Loudness Maximisation is an Anathema
310 views · 2 months ago
To make an atmospheric recording requires a sense of atmosphere, which is provided by reverberation. Applying volume maximisation prior to applying reverberation results in the reverberation being masked. Applying reverb prior to volume maximisation results in unnatural modulation of the reverb by the music's volume level. In this video I demonstrate how volume maximisation results in reverb maski...
Masking : What you need to Understand to EQ
472 views · 3 months ago
To understand how to EQ, particularly in the context of using spectrum analysis as a guide, you need to understand masking and how it can negatively affect a mix. In this video I give a physical and quantitative demonstration of masking through a web app I wrote in HTML5 (you can use it yourself by visiting www.har-bal.com/apps/MaskingDemo/). I then illustrate how this relates to re-equalising ...
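The two-tone masking demonstration described above can be roughed out without HTML5 at all. The following Python sketch writes a loud masker tone plus a much quieter probe tone a third of an octave above it into a mono WAV file you can audition on headphones; the frequencies and levels are illustrative assumptions, not the web app's actual values.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION = 2.0  # seconds

def tone(freq_hz, amplitude, n_samples, rate=SAMPLE_RATE):
    """Generate a pure sine tone as a list of float samples in [-1, 1]."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / rate)
            for i in range(n_samples)]

n = int(SAMPLE_RATE * DURATION)
# A loud masker at 1 kHz and a quiet probe 1/3 octave above it (~1260 Hz),
# roughly 28 dB below the masker. Depending on your playback chain you may
# or may not hear the probe -- which is the point of the demo.
masker = tone(1000.0, 0.5, n)
probe = tone(1000.0 * 2 ** (1 / 3), 0.02, n)
mix = [a + b for a, b in zip(masker, probe)]

# Write 16-bit PCM so the result can be played back anywhere.
with wave.open("masking_demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in mix))
```

Toggling the probe's amplitude up and down while listening reproduces the slider experiment from the video in a crude form.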
Pro-Am Mix Comparison
104 views · 3 months ago
I mixed the TSA concert from 2015 in Lodz for Petr Pavlas. This is the 14th concert I have prepared for him, but in this case he also made available to me a professional mix of the same concert, from the same source tracks, for my reference. Here I demonstrate the professional mix alongside my amateur one, with a clear difference between the two. I think you will find that my mix is superior by a wide m...
Perplexing : Why can I create live mixes better than some mix engineers despite not being one?
56 views · 3 months ago
In a side-by-side comparison of a commercial mix of a live recording of TSA Dream Team and the same band less the lead guitarist, who died in 2022, I illustrate how my mix is substantially better, yet I am not, and never have been, a recording engineer, a mixing engineer or a mastering engineer, and I put forward reasons for why that is possible. Chapters 0:00 Introduction 5:10 Mix Comparison 6:55 Straw Man E...
TSA Poznan acoustic, 2009 - mix transformation
21 views · 3 months ago
I have been working on my 10th mix for Petr Pavlas, this time a live acoustic concert by TSA in 2009 in Poznan. Unlike the previous one, this one is constructed from a stereo mix from the mixing desk, recorded on Sony MiniDisc, and a stereo Zoom field recording from the back of the venue. I did a stereo mix first, which sounded satisfying, but have just done a 7.1 conversion of it with the Zoom field recording p...
TSA Karvina, 2016 - mix transformation
27 views · 3 months ago
A Dedication
86 views · 4 months ago
Stereo to 7.1 Mix, Easy!
200 views · 4 months ago
TSA BIELSKO BIALA 2014, JESTEM GLODNY- remix-transformation
54 views · 4 months ago
TSA PROSZOWKI 2010, BIALA SMIERC - remix transformation
32 views · 4 months ago
TSA Bystrice 2011
30 views · 4 months ago
TSA Krakow 2011
61 views · 4 months ago
Can you turn a live 5.1 mix into ATMOS?
96 views · 4 months ago
Mixing Analog Music Is Hard. Really?
188 views · 5 months ago
Equal Temperament Piano Tuning Sucks
156 views · 6 months ago
Recording Classical Guitar
673 views · 6 months ago
Variations No 2
128 views · 6 months ago
Track Splitting EQ : When and How?
288 views · 7 months ago
Movie Dialog Sucks!
118 views · 8 months ago
Extreme Limiting : Why???
322 views · 9 months ago
My Approach to Mixing : Panning and Reverb
110 views · 9 months ago
My Approach to Mixing : Dynamics
124 views · 9 months ago

Comments

  • @miksteduzeltiriz
    @miksteduzeltiriz · 3 days ago

    The stereo version is much better. The 7.1 simply sounds like a room reverb was turned on, without the actual 3D "front to back" aspect. Are you still planning on building a VST3 version of Har-Bal?

    • @paavojumppanen914
      @paavojumppanen914 · 3 days ago

      Your appraisal seems perplexing to me as it is diametrically opposed to how I hear it, but I respect your opinion. Ultimately, the only opinion that counts is Petr Pavlas' given he commissioned me to do it, and in that regard, he is very happy with the 7.1 and stereo remixes that I prepared for him. As I have said before in another video, I am working on an EQ plugin, but it isn't going to be Har-Bal Harmonic Balancer as a plugin. To do so would be a development nightmare.

    • @miksteduzeltiriz
      @miksteduzeltiriz · 2 days ago

      @paavojumppanen914 7.1 or any other binaural processing only makes the sound smudgy for me. I have yet to find a binaural/Atmos/multichannel mix that is better in all respects than the stereo mix. So it's not your processing; it's an "all water is wet" situation, for me at least. Glad the EQ plugin is still in the works. I hope it is close to what Har-Bal does; I've been following Har-Bal ever since it first came out and have yet to find any other EQ that does anything similar to what yours does.

    • @paavojumppanen914
      @paavojumppanen914 · 2 days ago

      @miksteduzeltiriz, OK, I now understand where you are coming from, but there are two clarifications I would make regarding the points you raised. Firstly, binaural is quite different to real surround. Real surround sounds significantly better in my view, but it is obviously much harder to demonstrate. Secondly, the wetness of my mix needn't have been to the extent that I made it; however, I chose that degree of liveliness in an attempt to be authentic to how the live performance would have sounded to an audience member in the venue. The original stereo mix, in that regard, sounds almost completely dead and very unlike how an audience member would have heard it. If I were doing a "studio mix" it wouldn't be that wet.

    • @paavojumppanen914
      @paavojumppanen914 · 2 days ago

      @miksteduzeltiriz, oh, and the EQ will have similar traits to Har-Bal standalone but with the omission of peak taming and probably of real-time spectrum. On the other hand, it will likely have features specifically targeted at a DAW context. It will be quite a while before I get it done though. Life has been busy, my motivation has been lacking, and I've got some Har-Bal standalone development work to finish first.

    • @miksteduzeltiriz
      @miksteduzeltiriz · 2 days ago

      @paavojumppanen914 I agree that real surround with properly placed speakers, like the ones you'd have in a movie theater, is great, and music designed to be played on such a system would be great to listen to. However, most people do not sit in the middle of a room to listen to music for an hour. Most people listen to music while doing other things: while working or doing chores (headphones), at home (maybe stereo speakers, but most probably a boombox or a Bluetooth speaker nowadays), while driving, or in a live setting. Very few people even have a full-range stereo system, let alone a good surround one. So this forces binaural algorithms to do the heavy lifting. Even if no extra reverb were added and the mix were simply modified to take advantage of the added audio paths, the binaural rendering still smudges the mix, and the number of Atmos/binaural mixes I have preferred to the stereo is still zero. I don't doubt the full Atmos mix would sound great in a properly built room, but that kind of listening environment is not a priority for me. Even if I were willing to take on the cost, I'd still probably buy a better stereo system instead of building an Atmos listening environment. Other people might differ in this opinion, and that is fine.

  • @tejobolten1011
    @tejobolten1011 · 8 days ago

    Sounds great! In the metering I see an overpower if this has to go to Atmos. :-)

    • @paavojumppanen914
      @paavojumppanen914 · 8 days ago

      It's not going to Atmos. I was just using the binaural rendering support of Atmos so that I could showcase how it might sound on real 7.1.

  • @yorshoffmix
    @yorshoffmix · a month ago

    Thank you, my dear friend! Touching composition and performance.

    • @paavojumppanen914
      @paavojumppanen914 · a month ago

      I have aspirations of turning this into a larger orchestral piece, though when I'll have the time to learn orchestration, who knows. I certainly have some ideas about it though. Slava Ukraine!

  • @SteveGaddTasmusic
    @SteveGaddTasmusic · a month ago

    such a profound composition.

  • @ckreon
    @ckreon · a month ago

    Very interesting to watch your workflow - I am a fan of Har-Bal, but don't get as detailed in my use of it. It's cool to see what it can really do when used by someone who knows it inside and out. I think there are many things in the song that benefit from your adjustments, certainly some subjective preferences in areas, but one issue I note is that your accentuation of the very top end creates very harsh esses on the vocals. That is probably somewhat addressable via peak taming to avoid dampening the cymbal changes you were looking for. I will say on a personal note that the consequence of a more even frequency response is the loss of some excitement. I think sonically these changes are in most ways an improvement, but the extra midrange, while taming the upper-mid/high of the guitars and the low end of the toms and bass, results in a very "safe" delivery that lacks the punch of the original. I know there's only so much that can be done with a stereo mixdown, but I wonder, if after all your changes you went to the full-song average (as opposed to sectional averages), and lightly tamed the 500 Hz area and lightly boosted 80 Hz and down, whether it wouldn't bring some of that back. I know it's subjective, but with rock and roll, it's all about emotional impact and hyping the right areas. Personally I think the mix itself leaves a lot to be desired, so much of this I would fix there, not in mastering (it sounds like it was mixed by a longtime live musician with fatigued ears: over-scooped mids, over-boosted upper-mids with bombastic low-mids, top end largely ignored because it can't be heard anyway 😅). Anyway, cool video, good results. Curious about your thoughts on the vocal esses.

    • @paavojumppanen914
      @paavojumppanen914 · a month ago

      As I've mentioned before, what I think sounds good is subjective to me, so I'm more than aware that people will listen to my interpretations and think they aren't appropriate. That really isn't the point. The take-home point is what the tool allows you to do: in your own hands, with practice, you could get the results you desire, rather than mine. I think to a large extent what you have said shows you understand this! The issue regarding excitement and punch is influenced by the quality of monitoring. In my case, with my monitors, there is no loss of excitement. I would imagine a lot of people with average monitors will perceive this outcome because of an inherent mid-range weakness, which my monitors do not have. That "excitement" on my monitors just sounds harsh, and as I demonstrated in my video on trust in monitoring, they are neutral. Most monitors for home recording are active 2-way designs with 8 inch woofers and, typically, 1 inch dome tweeters, crossed over at around 2 kHz. That will lead to mid-range weakness despite a flat on-axis response, by virtue of the directionality of the woofer. For an 8 inch woofer the directionality will start at around 500 Hz, leading to a weakness around the crossover point. Contrast that with my 3-way monitors, which use a 5 inch woofer mid-range and a 1 inch dome crossed at 1 kHz. The 5 inch mid becomes directional at around 800 Hz, so there's very little crossover weakness due to polar radiation issues. That's why I so easily pick up on harshness in studio recordings. The other, similarly complicating, factor is bass extension. Most active monitors don't have much output below 40 Hz, which can result in pushing more bass to add excitement, compensating for weakness in the monitoring. The problem is that if you play that back on something capable of producing output down to 20 Hz it often ends up sounding like mud!
      Now you can question whether mastering for the ideal frequency response is in fact ideal, given that most people won't have it, but similarly you can question whether the 2-way monitors most people use in recording are a good representation either. How do I come to that conclusion? Most people will be listening to music on headphones, phone speakers, computer speakers and maybe TV speakers. Pretty much none of those devices involve crossovers, as they most typically use full-range drivers. As such, they are likely to expose the harshness I typically hear on my monitors. That has certainly been my experience from playback through my tablet and a TV as a check. I guess whatever you do involves some sort of compromise! Point taken on the de-essing.
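The "directionality starts at around 500 Hz / 800 Hz" figures quoted above are consistent with the common ka ≈ 1 rule of thumb, where a piston driver starts to beam once its circumference roughly equals the wavelength. A quick sketch (the nominal driver diameters are assumptions; real effective diaphragm diameters are a little smaller):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def beaming_onset_hz(diameter_m):
    """Frequency where ka = 1 for a piston of the given diameter.

    k = 2*pi*f/c and a = diameter/2, so ka = 1 gives f = c / (pi * d).
    Above this frequency the driver becomes increasingly directional.
    """
    return SPEED_OF_SOUND / (math.pi * diameter_m)

# Nominal 8 inch (0.203 m) and 5 inch (0.127 m) drivers:
print(round(beaming_onset_hz(0.203)))  # ~538 Hz, matching the ~500 Hz figure
print(round(beaming_onset_hz(0.127)))  # ~860 Hz, matching the ~800 Hz figure
```

Both numbers land close to the values quoted in the comment, which is why an 8 inch two-way crossed at 2 kHz has a wide band where the woofer is already beaming before the tweeter takes over.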

  • @PaulvonKarnten
    @PaulvonKarnten · a month ago

    Hello Paavo, I first installed & learned how to use Har-Bal to a mid-level many years ago already. I believe you created a VERY valuable tool, for mastering engineers especially - a shame more engineers don't realise this! Since I upgraded my whole chain to high-end hardware, speakers & headphones, I found myself able to discern the overall "production sound" more easily and was able to compare even the same artist across different releases & hear the differences - it sort of became an enjoyable hobby! I didn't look through all your videos, but I would like to see some videos from you analysing some of your favorite "perfect" productions & what you hear (& see) to be the reasons these productions are special. Not my music, but I was listening (& looking, because I always have Smaart running) to Real Groove by Kylie Minogue and believe the production to be very good indeed; I didn't put it into Har-Bal yet but will shortly. All the best and RESPECT to you & your work. Paul

    • @paavojumppanen914
      @paavojumppanen914 · a month ago

      Thanks for the feedback! Actually, I like your suggestion very much. Doing some videos on excellent recordings and how they stand up to scrutiny is an excellent idea. I'd say it is pretty rare to come across a recording that is absolutely perfect, but there are many that come close. I think David Gilmour's "Rattle That Lock" is a prime example. I might need help with suggestions for current material, as I've switched off contemporary music a bit, finding it hard to find music that satisfies me artistically and sonically. I did manage to find stuff from Sylvan that is very good though! Actually, something I recently re-acquainted myself with was the Smiths' final album, "Strangeways, Here We Come", which is also exceptional. I don't often look at great albums in Har-Bal because they will typically reflect what I expect to see. On the other hand, I often try to guess the issues that Har-Bal will show up in problem recordings, in an attempt to better educate my ear. It's a good way to develop that, but as you suggested, it is subject to having monitoring that is both revealing and trustworthy. If you find it difficult to do this with some practice, then it is likely you have a monitoring problem.

  • @yorshoffmix
    @yorshoffmix · a month ago

    Hey Paavo, thanks for the great video. Could you please explain more about why you only focus on fixing the Mid channel and keep the Side original?

    • @paavojumppanen914
      @paavojumppanen914 · a month ago

      Side-channel modification, by definition, will alter the stereo image, either widening or narrowing it. If there is no problem with the imaging, then I really don't want to touch it. Also, I usually avoid narrow-band filtering of the side channel, as it can result in weird shifting imaging effects when instruments change register. Typically, the most I ever do is mono the bass by applying a low-shelf edit, and widen the image by applying a zero-Q boost of 0.7 dB or so. Usually, anything else is unnecessary. I also wanted to avoid side-channel modification so as not to complicate comparison of the original with the modified versions.
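The mid/side operations described above (leave the side untouched, or widen it with a broadband zero-Q boost of around 0.7 dB) come down to simple arithmetic on the two channels. A minimal sketch, assuming plain lists of float samples; the low-shelf step on the side channel is omitted for brevity:

```python
def ms_encode(left, right):
    """Split an L/R stereo pair into mid (sum) and side (difference) signals."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Recombine mid and side back into L/R."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

def widen(side, gain_db=0.7):
    """Broadband (zero-Q) side boost: a small widening of the stereo image."""
    g = 10 ** (gain_db / 20)
    return [s * g for s in side]

# Toy stereo signal: widen the side channel, leave the mid untouched.
left, right = [1.0, 0.5, 0.0], [0.5, 0.5, -0.5]
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, widen(side))
```

Any EQ applied only to `mid` leaves the stereo image alone, which is the reasoning in the reply above; touching `side` is what changes width.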

    • @yorshoffmix
      @yorshoffmix · a month ago

      @@paavojumppanen914 Thanks for the details! I'd love to see you work with the Side Channel in future videos, where appropriate, of course. Thanks again!

    • @paavojumppanen914
      @paavojumppanen914 · a month ago

      I'll try and find a good example; there's one I can think of. I should also try and find an example of how it can go wrong if you are too carefree.

    • @yorshoffmix
      @yorshoffmix · a month ago

      @@paavojumppanen914 brilliant! Thank you!

    • @peterkearnsmusic
      @peterkearnsmusic · a month ago

      That voice was honky alright. Some kind of flange.

  • @beatskool101
    @beatskool101 · 2 months ago

    I'm a big fan of David Sylvian (& his production), but I've never bothered listening to Porcupine Tree, even though it has ex-Japan band members. I hear a lot about Steven Wilson's mixing abilities; maybe he didn't mix or master this live recording? But I'm not really into the style of music. I didn't realize you were behind the Har-Bal software. Not sure I'll ever get my head around it; you have a lot of videos to digest.

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      From the album credits, I think he and Gavin Harrison mixed it, but as I've tried to show with my TSA remixes and the equivalents done by professional mixers, dealing with live recording material without the sophistication of something like Har-Bal Harmonic Balancer is very difficult. Under studio conditions, which are much more controlled and where you have access to much more sophisticated processes, it isn't as hard.

  • @Noiseheads
    @Noiseheads · 2 months ago

    I've always felt as though this album left a lot to be desired sonically compared to any other FF release, so it was the title of your video that brought me in. This is my first exposure to Har-Bal and I am genuinely impressed. Your detailed analysis and modification showcasing the software's power is fascinating. I've watched a lot of videos from people claiming to be mastering engineers attempting to "fix" or "correct" masters they consider to be less-than, or giving their two cents on genre-specific tips that feel more like cheap listicles for views. Thankfully, this is neither of those - there's real meat here! I also appreciate your ability to walk the line between the subjectivity of sound perception and EQ adjustments made within scientific reason (e.g. speaker response, etc.). I am amazed at how much clarity you were able to surgically extract from the original master. Great work and great video!

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      Thanks for your detailed response! I've had enough experience of being flamed, and I am aware of how much variation there is in the frequency response of loudspeakers and headphones, to realise that the way I hear and like it will not be universally appreciated, hence my restraint. On the other hand, the improvement in the level of detail that can be recovered is generally universally heard, even by those who don't like the balance. The take-home point is that once the balance is struck, re-aligning it to anyone's taste is generally a trivial thing! On the extraction of clarity, this all stems from the phenomenon of masking and how, with comprehensive analysis and high-Q filtering, you can substantially reduce masking and bring out otherwise hidden detail. If I restricted myself to low-Q filter edits this would be impossible. Similarly, if I had no analysis to go off it would also be impossible. Doing this through conventional EQ is impossible because we don't have the discrimination capability to hear the necessary tonal anomalies that analysis reveals, so properly adjusting a static EQ is impossible, let alone a dynamic one. On the subject of dynamic EQ, which seems to be used quite a lot these days for correction, my understanding is that the filtering used is rather coarse in comparison to what "peak taming" allows. Whilst dynamic EQ can bring a better balance, it doesn't bring the level of sharpness and focus that my approach does. With it, you are placing trust in the underlying algorithm controlling the filtering to deliver; you don't directly control it. In contrast, with Har-Bal Harmonic Balancer it is all explicitly under your control! This type of improvement in clarity for problem recordings isn't a rarity but the norm, at least in my experience, provided of course that you know what you are doing. I will do more of these so you can see for yourself. Any suggestions for material welcome!
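For readers wondering what "high-Q filtering" buys here: a narrow peaking filter can cut a spectral anomaly while barely touching material a third of an octave away. This sketch uses the standard RBJ "Audio EQ Cookbook" peaking biquad purely as an illustration; it is not Har-Bal's actual filter design, which isn't documented in this thread:

```python
import cmath
import math

def peaking_biquad(f0, gain_db, q, fs=44100.0):
    """RBJ 'Audio EQ Cookbook' peaking EQ coefficients (b, a)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def gain_at(f, b, a, fs=44100.0):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z ** 2) / (a[0] + a[1] / z + a[2] / z ** 2)
    return 20 * math.log10(abs(h))

# A narrow (Q = 8) 6 dB cut at 500 Hz:
b, a = peaking_biquad(500.0, -6.0, q=8.0)
print(round(gain_at(500.0, b, a), 2))  # -6.0 dB at the centre frequency
print(round(gain_at(400.0, b, a), 2))  # well under 1 dB of cut at 400 Hz
```

With a low Q (say 0.7) the same 6 dB cut would drag down more than an octave of surrounding material, which is why narrow edits can reduce masking without audibly re-voicing the whole track.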

    • @Noiseheads
      @Noiseheads · 2 months ago

      @paavojumppanen914 Very cool stuff! I’m definitely looking forward to learning more and will be checking out the other videos you have for sure. If you feel it would be a good candidate, might I suggest something from Soundgarden’s Badmotorfinger? It was remastered in 2016 but I’m curious if there’s anything that can be done to remove some of the boxiness short of doing a proper remix. Aside from that, different genre examples to see some other use cases would be amazing. Thanks for the detailed response, I’ll be keeping an eye out for new content!

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      I actually had a go at "Outshined" on Badmotorfinger a few years ago, but I think I over-tamed it. I did a pretty reasonable job of the entire album with static EQ in an early version of Har-Bal a very long time ago, but I should re-do the whole album, as I like the songs on it but not the presentation. Maybe I can revisit it in a follow-up video, perhaps with a different track from that album.

  • @shardzkaylar
    @shardzkaylar · 2 months ago

    Ironic since Dave Grohl has no honor.

  • @beatskool101
    @beatskool101 · 2 months ago

    If you took the clip from YT it might be 128k & dithered & maybe maximised twice? The first Phil Collins track was very busy, so it was impossible to hear anything well; "I Cannot Believe It's True" is more sparse, so the clarity comes through. I've wrestled with gain staging, master limiting & compression, & my limiter telling me "increase the volume" & "it's too dynamic". I don't want loud; I do want dynamics. Not sure how I would add reverb after the master limiter; the reverbs & stuff are on the FX sends. I'd like you to take a listen to my Electronic Permutations & tell me what you hear & whether I'm on the right track.

    • @beatskool101
      @beatskool101 · 2 months ago

      One thing I will be trying in the future is not throwing everything into the same reverbs, and not having multiple reverbs on the same thing.

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      Yes, good points. I recognise both that the clip quality is lower given the streaming, and that the track in question has a denser arrangement. Ideally, the same track treated differently would be the best comparison, but I don't have access to suitable material, so this was my compromise choice. It's just a thing to be aware of, and something that initially confused me when playing a range of tracks through my reverb and noting how the audibility of that reverb changed considerably across tracks without understanding why. And yes, if you add bus reverb post bus compression and limiting, you will need an extra limiting stage to catch overs. As you said, you'll typically use channel sends to a separate reverb bus, but you will typically have channel compression before the send, so over-compressing everything can give rise to the same effect. If you instead rely on mastering compression and limiting to maximise volume, you'll hear the reverb, but it will no longer sound natural, as its level will be modulated by the track level. By the way, in my mixing of TSA I no longer use a reverb bus, as I don't see a need or benefit for it because I use spatial panning. The spatial panning gives me the contextual early reflections at appropriate levels, so I have no need for reverb sends. Finally, if you'd like my opinion on stuff you are working on, feel free to get in touch. You can contact me via email through the Har-Bal website, or via the Har-Bal Facebook page.

  • @peterkearnsmusic
    @peterkearnsmusic · 2 months ago

    Good advice, Paavo. Har-Bal customer here since 2006.

  • @Phileosophos
    @Phileosophos · 2 months ago

    I'm sorry, but I don't understand the point you're making. Maybe it didn't help that I could always hear both of the tones while you were moving the sliders.

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      Masking will be highly influenced by the frequency response of whatever you are listening with, not to mention intermodulation products. For example, I can always hear both tones myself if I listen through tablet speakers, but on my headphones I can't.

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      Another point to note is that if you are listening in open space (i.e. without headphones), the amplitudes of the pure tones are likely heavily altered by the standing waves and interference patterns present. If you have ever tried to measure a speaker's frequency response in open space with a level meter and a sine-wave oscillator, you will recognise this: the measured amplitude varies widely with frequency. Use headphones and try the web app for yourself.

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      Actually, yet another point to consider: if the device you are listening on adds harmonic distortion to the pure tones, it will likely make what would otherwise have been masked audible. This is why I would have preferred 1/3-octave noise sources rather than pure tones, but HTML5 didn't have an easy way to do that.

    • @Phileosophos
      @Phileosophos · 2 months ago

      @@paavojumppanen914 First, thanks for all the replies. Second, I was listening through the best "cans" I own, which since my treasured Sony 7509's finally died are the inferior 7506's. My home studio isn't exactly acoustically treated (I'm in a rental house at the moment), so I feared the space alone would mess up my ability to appreciate your demo. It's rather ironic that maybe my headphones did instead! Finally, I've been building web sites since the early 90s, and I didn't know you could easily generate tones as you are through HTML. So congratulations to you for accomplishing as much as you did. Cheers!

  • @bartnettle
    @bartnettle · 2 months ago

    Listen to the end before commenting haha. Enlightening.

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      No, I'm not trying to dismiss humans in the process of EQ. For instance, the notion of AI-automated EQ or mastering is complete nonsense as far as I'm concerned. That said, human hearing has peculiarities that make certain things really hard to deal with if all you have to rely on is your ears. Case in point: the times you send a track to an ME and they say they can't do anything with it, or what they do is minimal and unremarkable. Actual mathematical analysis allows you to address most of those issues, provided you understand the cause of the problem.

  • @bartnettle
    @bartnettle · 2 months ago

    Wait a minute, are you saying that an ear in an atmosphere has deficiencies, and a good reason Har-Bal takes these deficiencies out of the equation?

    • @paavojumppanen914
      @paavojumppanen914 · 2 months ago

      Human hearing has limitations, yes; Har-Bal removes those deficiencies, no. It simply allows one to see where potential masking problems can occur in a recording and reduce them, not remove them. Masking will still occur, but we are reducing how much masking occurs.

  • @bartnettle
    @bartnettle · 2 months ago

    By god man, just record with a mic and move it around an axis; there's masking galore. Audiology and the treatment of ear deficiencies is another science.

  • @Luke_Stoltenberg
    @Luke_Stoltenberg · 3 months ago

    Mate that was enlightening. Next to do some experimenting!

  • @Александр-й7б9и

    I can feel there is great info here, but starting with the vertical distribution on the plot I just struggled to figure out the concept you were presenting to us.

    • @paavojumppanen914
      @paavojumppanen914 · 3 months ago

      What I am trying to point out here is that the peak and average spectrum plots as presented in Har-Bal circumscribe the statistical distribution of level at any point in frequency. Hence, I can take any point in frequency and draw a representative distribution of the volume at that frequency. I drew two of those, one at the peak and one at the trough. I chose to say: imagine the 95th percentile of that distribution represents the masking tone. From that I could deduce the potential masking threshold for that tone and project it to the frequency of the trough. Now, if the masking tone were static at that level, you can deduce how much potential masking occurs by looking at how much of the trough distribution lies below the threshold. In the un-EQ'd case it's over sixty percent (the area under the distribution curve below the threshold). Post EQ, that area drops to something much smaller, and hence there is much less masking. This is a representation of how masking (and EQ to reduce it) works. It's not an exact representation, as what is masking and what is masked is not static in music, but this thought experiment clearly illustrates the mechanism at play and why you want to avoid spectrum troughs.
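The thought experiment above (what fraction of the trough's level distribution lies below the masking threshold) can be made concrete with stdlib normal distributions. All of the levels below are invented for illustration, and the fixed 20 dB threshold offset is a stand-in; the real masking threshold depends on frequency separation:

```python
from statistics import NormalDist

# Hypothetical level distributions (in dB) at two frequencies:
# a spectral peak (the masker) and a nearby trough (the masked material).
peak = NormalDist(mu=-12.0, sigma=3.0)
trough = NormalDist(mu=-30.0, sigma=4.0)

# Take the 95th percentile of the peak as the representative masker level,
# then assume the masking threshold sits a fixed 20 dB below it.
masker_level = peak.inv_cdf(0.95)
threshold = masker_level - 20.0

# Fraction of the trough's distribution (area under its pdf) below threshold,
# i.e. the material that is potentially masked.
masked_fraction = trough.cdf(threshold)
print(f"potentially masked before EQ: {masked_fraction:.0%}")

# A corrective +6 dB EQ boost at the trough shifts its whole distribution up,
# leaving far less of it under the threshold.
eq_trough = NormalDist(mu=-24.0, sigma=4.0)
print(f"potentially masked after EQ: {eq_trough.cdf(threshold):.0%}")
```

With these made-up numbers the masked fraction drops from roughly three quarters to roughly a fifth after the boost, mirroring the "over sixty percent before, much smaller after" narrative in the comment.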

    • @Александр-й7б9и
      @Александр-й7б9и Před 3 měsíci

      @@paavojumppanen914 It's much clearer for me now. Thank you for the text version. A frequency with an amplitude difference of about 20 dB or more will mask nearby frequencies, so one should EQ the nearby frequencies as mentioned, and a frequency response spectrum without troughs serves as a validation criterion for the equalization 🙏

  • @andy80sdrums
    @andy80sdrums Před 3 měsíci

    HarBal has been my go-to for everything since 2012. A project is not finished if not HB-lized. Would never go back to not having it. I have analyzed over 4000 tracks with it and mixed/mastered dozens. No need for expensive hardware.

    • @paavojumppanen914
      @paavojumppanen914 Před 3 měsíci

      In this case I only used it for mixing. I've yet to master any of my mixes with it because I don't feel they need it.

  • @HiddenroomStudio
    @HiddenroomStudio Před 4 měsíci

    Why don't you put the mono tracks on stereo audio tracks? They will play fine and you can use the panner. In Cubase 13 you can convert a mono audio track to a stereo track by clicking on that mono circle symbol. Then you don't need the Groups for the 7.1 panner.

    • @paavojumppanen914
      @paavojumppanen914 Před 4 měsíci

      Firstly, I don't have Cubase 13, only 12. I can put mono tracks into a stereo one for sure, but I use the FX track instead of putting it directly on the channel track because when I have to convert from stereo to 7.1 the only option Cubase 12 gives me is to create a new 7.1 track to put the mono track into. I would also have to copy my volume slider levels across as well, which is more work. It would be great if Cubase could just let you change the track bus from stereo to 7.1, but I haven't found a way to do that in Cubase 12.

    • @HiddenroomStudio
      @HiddenroomStudio Před 4 měsíci

      @@paavojumppanen914 Thank you for the fast answer. I will try the whole immersive mixing with my own projects; let's see where it takes me. Thank you!

  • @leukocyte3145
    @leukocyte3145 Před 4 měsíci

    I was there :D

    • @paavojumppanen914
      @paavojumppanen914 Před 4 měsíci

      So does the remix do it justice? Hard to know when I wasn't there.

    • @leukocyte3145
      @leukocyte3145 Před 4 měsíci

      @@paavojumppanen914 Yup, I would even say that it sounds better than when it was live. It was a small festival in a rural area, so the sound wasn't optimal, to say the least :) Great job.

  • @markwilson1446
    @markwilson1446 Před 6 měsíci

    Thank you.

  • @shayneoneill1506
    @shayneoneill1506 Před 6 měsíci

    Thats rather pleasing to the ear.

  • @Jrel
    @Jrel Před 6 měsíci

    Great results! To me, both songs sounded very close to the original except when Dizzy was playing its faster sections. Is it because the faster part of Dizzy has this cloudy reverb on the instrument and that wasn't being picked up well by the mic?

    • @paavojumppanen914
      @paavojumppanen914 Před 6 měsíci

      I think it's largely down to what room reflections are doing to it more than anything else. That, and the fact that the complexity of the "system impulse response" is adding back some dynamics that were lost in hard limiting. If you have a meter running over the audio you'll see what I mean. The direct version has a clearly defined ceiling, but the speaker version has that blur and much bigger transients, despite the average level being about the same.

    • @paavojumppanen914
      @paavojumppanen914 Před 6 měsíci

      Oh, and as the room is adding some reverb itself, that is more than likely modifying the sound of that reverb you mention.

  • @michaelplishka-zenstorming
    @michaelplishka-zenstorming Před 6 měsíci

    I shared this with a tuning friend and he said this: "Interesting. I use an electronic tuner that tempers the scale according to the design of the piano. Pianos sound very full and rich that way." Which actually makes a lot of sense: the whole concept of resonance frequencies of the piano structure itself, as well as what particular piano was sampled for the digital piano. As an aside, the link in the online version of your newsletter isn't taking me to the YouTube video while the newsletter version does.

    • @paavojumppanen914
      @paavojumppanen914 Před 6 měsíci

      Yes, I can see that choosing a tuning to blend with the resonances of the instrument could work well, but it might cause havoc for someone with perfect pitch who is accustomed to equal temperament. That said, mine might do the same! It's a fascinating topic anyhow.

  • @michaelplishka-zenstorming
    @michaelplishka-zenstorming Před 6 měsíci

    I really like the different temperament. The dissonance to me is not harsh and in fact the unequal temperament to me sounds warmer in general. In fact the higher notes in the equal temperament had almost a harsh feel to them when played but that seemed to disappear with your tuning. Very cool experiment, thank you for sharing!

    • @paavojumppanen914
      @paavojumppanen914 Před 6 měsíci

      Sounds similar to my reaction to it. My general assessment is that my temperament makes the dissonant harmonies of classical music more emotive and coherent. On equal temperament some of that just sounds plain wrong to me. It is readily noticeable by playing stacked thirds over two octaves simultaneously, or worse still, playing the eight notes of a given key simultaneously. Doing that with equal temperament sounds harsh; doing it in this temperament does not. It might not be pretty, but the harshness goes away. That harshness comes from the audible rapid flutter (beating) between the notes, which is largely absent in my tuning.

  • @serratusx
    @serratusx Před 6 měsíci

    To me the second version sounds like the piano is going out of tune depending on which notes are playing. It’s almost like listening to a tape or record where the speed is unstable. I guess if you’ve only listened to equal temperament all your life, it could be quite noticeable to some people

    • @paavojumppanen914
      @paavojumppanen914 Před 6 měsíci

      Thanks for your honesty. Do you have perfect pitch? I imagine if you do and you are accustomed to equal temperament, that would be a natural reaction. The point with temperament is not about the absolute pitch, though; the focus is the harmony and how that works. If you listen to the harmony in the equal temperament case, there are many instances where a rapid flutter (beating) is clearly audible. If you then listen to the second case, that flutter largely disappears.

    • @serratusx
      @serratusx Před 6 měsíci

      @@paavojumppanen914 I don't think I have perfect pitch, but I've played piano since the age of 8 and am also now a music producer, so my ears are quite finely tuned, I guess. I know that mathematically equal temperament is not perfect in terms of harmony etc., but I've never thought it sounded dissonant.

  • @ilblues
    @ilblues Před 6 měsíci

    Thank you, Paavo. Your guitar looks similar to an old Lyle classical guitar my parents bought new for me about 1970. It was bass heavy and had a better sound with rectified strings, in particular a wrapped G string. I wish I still had it, but I migrated to a steel string guitar for a number of years before buying another classical. These days, I have greater love for nylon strings as they're more soothing with my worsening tinnitus. I wanted to ask about your recommended order in a master bus FX chain for Spatial Pan and Synthetic Space. I've swapped their order several times, and putting Spatial Pan after Synthetic Space greatly reduces the reverb effect. Are they meant to be used in any particular order? Again, thanks Paavo! You really contribute to my understanding of sound. Jack in Sequim

    • @paavojumppanen914
      @paavojumppanen914 Před 6 měsíci

      Yes, I too actually prefer the more mellow sound of nylons over steels. They're also more comfortable on the fingers, and you are spot on, the guitar I have is bass heavy too. It's actually quite attractive except for being a bit peaky on particular notes. When I'm more competent I'll likely try and find another classical guitar. On the order of things, yes, spatial panning should be before reverb and not after. It is to be expected that the reverberation collapses when fed through Spatial Pan (it gets monofied, collapsed into a single channel). If you look at it from the perspective of the underlying physics of sound in spaces, the reverberation comes from the early reflections being diffused around the room, so in nature the early reflections come first and the reverberation comes after. If you want to experiment with reverb first, you should create a parallel path for the reverb to get to your master bus without going through Spatial Pan (pass-through off). I guess this is equivalent to using the reverb in a separate reverb bus and using channel sends. However, this will have a less rich reverb than the recommended approach, because the input to the reverb will have added richness from Spatial Pan. This augments the character in the same way as recorded room ambience does.

  • @herveplez7170
    @herveplez7170 Před 6 měsíci

    God bless Har-Bal mastering!

  • @mrpog6541
    @mrpog6541 Před 7 měsíci

    thank you this helped me a lot

  • @larrysmith9293
    @larrysmith9293 Před 7 měsíci

    Thank you for sharing your knowledge

  • @waynenunan857
    @waynenunan857 Před 7 měsíci

    Thanks

  • @synthzizer3324
    @synthzizer3324 Před 9 měsíci

    There is a lot involved in getting a mix incredibly loud while retaining punch, dynamics, etc. It is a very, very intricate and specialized field, and if you don't know how, this is what happens. When you know how to do it (push the loudness envelopes to the max), the results are incredible. Your example is an example of a production that has not had the specialized techniques applied in the first place, starting from the actual arrangement in how all the parts interact on a vertical time scale X and a horizontal planar scale Y: X as in the stack of what is being struck at an instant of time, and Y as in what is happening in between the X parts and the transitory plane between. Like crest factor, but not like crest factor.

    • @paavojumppanen914
      @paavojumppanen914 Před 9 měsíci

      That may be so, but I hear very few recordings that are both high in level and sound good to me. To tell you the truth, I cannot think of anything in the music I have listened to that fits your description of incredible. Can you suggest a good example that I might listen to? Although I have heard some recordings with high levels that sound reasonable, they never sound as convincing or emotive as I want them to. Certainly, for me and the way I listen to music, there is zero benefit in pushing super high levels and much to lose, so why do it?

    • @paavojumppanen914
      @paavojumppanen914 Před 9 měsíci

      The other thing that you seem to fail to address is that music, particularly the music I like to listen to, has dynamic structure where part A is meant to be louder than part B, etc. I'm not talking transient content but the music itself: its dynamic markings when written into a score. Any attempt to maximise the volume of such music is always going to be detrimental to the music because you are, in effect, making the performance wrong compared to how it was originally authored.

    • @synthzizer3324
      @synthzizer3324 Před 9 měsíci

      @paavojumppanen914 Yes, I agree there are many releases where the choruses have similar impact to the verses and bridges. But that is not just the fault of mixing. The arrangement and tracking, and probably the song itself, are also weakly written.

    • @synthzizer3324
      @synthzizer3324 Před 9 měsíci

      @paavojumppanen914 But what do you mean by high level? For example, if tracking and mixing decisions lean towards the kick being ultra prominent, then what is wrong with the leading edge of that whack set half a hair below the ceiling? The problem is if undesirable distortion is added because of hard clipping and it is audibly unpleasant; then it was done wrong. Peak vs meat is a very interesting and very important topic for loudness, and if you want loud, clear mixes you need to work on your meat and peak factors very carefully. I will come back with examples.

    • @paavojumppanen914
      @paavojumppanen914 Před 9 měsíci

      The problem is the level of the ensemble. If you have a track of a four piece band where the song structure starts with a solo instrument that you then take up to a level just shy of limiting, then when the other three instruments cut in the level will go up 6dB (assuming the power of each instrument is the same). For that not to result in clipping, the limiter will cut the average level by 6dB to fit the available dynamic range. For tracks like that it is typical that the musical dynamic in the ensemble is louder than in the solo or bare case. Thus, if that solo instrument that started the song was, say, the vocal part, in the super limited case the quiet voice will sound much louder than the loud voice, hence the dynamic inversion. No amount of careful mixing and crest factor trickery can change that, because the power of four instruments will always be greater than one. That is the problem. The only solution that actually preserves the dynamic intent of the music is conservative limiting and/or compression. This growth in level with the number of instruments is what gives orchestral music its power and large dynamic range, but as I pointed out, you only need a change from one instrument to four to create a lift of 6dB, so it applies to contemporary ensemble music as well. Just look at the statistics of a well mastered compilation of tracks where some are loud rock songs and some are less intense ballads and you will find that the ballads are considerably lower in level than the more intense tracks yet have perfectly compatible loudness. That is because what needs to be compatible is the volume of a common instrument (vocal, for example). If you maximised the ballad it would sound way too loud in the compilation owing to fewer instruments playing less intensely.
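
The 6 dB lift described above follows from power summing of uncorrelated sources; a quick sketch:

```python
import math

def ensemble_lift_db(n):
    """Level increase when n equal-power, uncorrelated instruments play
    together, relative to one playing alone: powers add, so 10*log10(n)."""
    return 10.0 * math.log10(n)

lift = ensemble_lift_db(4)   # one instrument -> four: ~6.02 dB
```

So a solo vocal mastered just under the ceiling leaves no headroom for the roughly 6 dB the full band adds, forcing the limiter to pull the ensemble down by that amount.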

  • @CyanideLovesong
    @CyanideLovesong Před 9 měsíci

    Oh you're so right about all the heavy compression & limiting (!), but it's rare in most pop, rock & hip-hop these days to hear anything at reasonable levels. Spotify attempted to popularize a standard but it was loudly rejected by the audio production community who promptly ignored it. If streaming sites would get their act together and standardize levels correctly that would help by reducing the perceived value of squashed mixes. But even Spotify doesn't standardize on their web/tv apps -- and Soundcloud (perhaps the most popular indie music platform) doesn't standardize at all. So they're not helping. :-(

    • @paavojumppanen914
      @paavojumppanen914 Před 9 měsíci

      Yes, it is a pretty vacant space when it comes to standards. They certainly got the idea right in cinema, where there is a standard and it is predominantly adhered to, so it should be feasible for popular music. I guess the problem with the music space is that there is no one out there with respect driving the discussion; it is just a bunch of players doing their own thing. I think the one main impediment for streaming is the fact that the device used to play it back is a phone with phone speakers. In that circumstance you can easily see why louder likely seems better. But if they switch to headphones then it isn't. It really is the playback medium that seems to complicate the issue. For cinema the target is the cinema, so selling quality sound is easy. For phones, quality isn't the typical use case, though Apple have made some attempt to sell quality through pushing Atmos.

  • @STARanoff
    @STARanoff Před 9 měsíci

    How to convince our clients that making the mix louder is not the best idea...

    • @paavojumppanen914
      @paavojumppanen914 Před 9 měsíci

      Yes, that can be a tricky sell. I reckon the best you can possibly do is something along the lines of what I attempted here: play a squashed master against a non-squashed one where they are normalised to the same average level. Only then will they actually hear the damage. If you don't do the normalisation part, they will almost inevitably choose the loudest one as better because of our loudness bias caused by the Fletcher-Munson effect.
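
The normalisation step described here can be sketched as matching RMS levels before the A/B comparison. This is a rough stand-in: a proper loudness match would use an LUFS measurement, and the sample arrays are toy data:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_average_level(samples, target_rms):
    """Scale a track so its RMS matches target_rms; with both versions at
    the same average level, the louder-sounds-better bias drops out."""
    gain = target_rms / rms(samples)
    return [s * gain for s in samples]

reference = [0.5, -0.5, 0.5, -0.5]   # toy stand-in for the non-squashed master
squashed  = [0.9, -0.9, 0.9, -0.9]   # toy stand-in for the loud, squashed master
levelled  = match_average_level(squashed, rms(reference))
```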

  • @evgeniyzhulikov2235
    @evgeniyzhulikov2235 Před 10 měsíci

    That has been a major improvement!

    • @paavojumppanen914
      @paavojumppanen914 Před 10 měsíci

      Yes, that track does show considerable improvement! Very satisfying when that happens.

  • @evgeniyzhulikov2235
    @evgeniyzhulikov2235 Před 10 měsíci

    Great walkthrough, thank you, Paavo. You are being very careful with those frequencies.

  • @testlab6643
    @testlab6643 Před 10 měsíci

    Thanks for the video. Good content. p.s. you might want to prepare a few texts and check technical hurdles beforehand to streamline/shorten your videos.

    • @paavojumppanen914
      @paavojumppanen914 Před 10 měsíci

      I like it being a bit organic, if for no other reason than that I'm hopeless at reading from a script, but certainly having prepared dot points would be helpful. I was actually very tired when I did this series, so given that, it turned out better than I expected. Too many things to do!

  • @HakonAudio
    @HakonAudio Před 11 měsíci

    Great video, Paavo. Question. Would you not normally use early reflection on individual instruments or busses in varying degrees/amount to add depth and width as opposed to the stereo file?

    • @paavojumppanen914
      @paavojumppanen914 Před 11 měsíci

      As this is a stereo recording done without the explicit early reflections given by using spatial panning, I use discrete early reflections. If I were mixing a track (I'll do a demonstration of that) using spatial panning then I would turn discrete early reflections off. If I were mixing old school with conventional panning I would switch it on. If you want to use both spatial panning and old school then I'd consider using two reverb buses, one for spatial panning and one for not. Although conventional recordings typically have attempts at some form of early reflections, with either panned delays or chorus effects, they are typically pretty simple, so adding discrete early reflections will generally make them sound better by adding more complexity. Spatial Pan already has that complexity and it has the benefit of being totally coherent with the geometry. Also note that discrete early reflections in Synthetic Space are synthesised using Spatial Pan internally through the modelling of speaker positions.

  • @Jrel
    @Jrel Před rokem

    That's awesome; it sounds great! I'm not even listening with Atmos enabled.

    • @paavojumppanen914
      @paavojumppanen914 Před rokem

      As it is a binaurally encoded recording you shouldn't have Atmos enabled, just a stereo set of headphones. I find it a pretty clear way of demonstrating the need for early reflections and diffuse reverberation. Without those "room effects" it sounds dull as dishwater. With them, it's easy to forget that it's all fake, apart from the original drum stems.

    • @Jrel
      @Jrel Před rokem

      @@paavojumppanen914 Ahh, I didn't fully read the description. Haha, I just saw the title, and the first sentence mentioned Atmos, so I didn't read the rest and kept listening and watching.

  • @PatAutrey
    @PatAutrey Před rokem

    Congrats! It's no surprise that you are busy working on such an important piece of audio gear, thanks for sharing!

  • @SingularityMedia
    @SingularityMedia Před rokem

    Amazing tool, gets used every day in the mastering studio as the first part in correction.

  • @edwardkenemorales
    @edwardkenemorales Před rokem

    Hello, I am interested to get the Harmonic Balancer but I am on an M1 Mac system and wondering if you support it natively? Thanks!

    • @paavojumppanen914
      @paavojumppanen914 Před rokem

      M1 native support is on my to do list. Don't have an Apple Silicon development machine yet but there are quite a few users running the Intel version with Rosetta on M1 machines.

  • @elmatula
    @elmatula Před rokem

    Own it and use it all the time as the first step in my mastering process. Fantastic results using it with caution.

  • @PatAutrey
    @PatAutrey Před rokem

    The original recording of the first track is honky and brittle; it sounds so much better after HarBal is applied!

  • @cr5721
    @cr5721 Před rokem

    I have used Har-Bal for many years on my last 4 albums and love the process of moving to a "separate" mastering and rebalancing system. I have tried making the sound similar by actually mixing the sound from my stems in the mix itself, and there is definitely a "magic" to using Har-Bal (or a separate mastering process "outside" the DAW). I guess there are many different styles of doing things, and whatever works to get the sound good, but I find that these separate processes (using Har-Bal, and also "Finalizer") jump you out of the "in-the-DAW" mindset and allow you to use your ears for that final listen and mastering process. (If I use Finalizer as the final process, then I don't use the brick wall limiter in that process; I will use it in Har-Bal. Other times I use the limiter in Finalizer and not in Har-Bal, making sure only to balance everything and bring those problematic frequencies down.) It would be great to have it as a plugin for those who would utilize it, but I quite enjoy the separate final mastering using Har-Bal; it's a magic process that I will never replace now that I've found it... Thank you Paavo for your genius and hard work with this and your other unique products!

  • @CoscarelliFan
    @CoscarelliFan Před rokem

    Well said and explained. It is an amazing tool. I run every mix I ever do through it. I don't have as much time anymore to mix and master music, strictly a hobbyist, but when I do, it is a go to tool. Eyes, not just ears, to the music.

  • @Endless_Skyway_Adventures

    Balance improved, dynamics squashed, loss of life. I own Har-Bal. I don't use it on stereo mixes, because it didn't run as a plug-in when I tried it. Yes, it can tame specific peaks and bring up valleys. I don't see the need to "remix" that which I can simply mix in the first place. I do use it on some individual tracks that need help, especially bass. Whilst I don't know everything going on under the hood, I can tame peaks with dynamic EQ, I can raise valleys with parallel compression and EQ, and I can put the low end in mono, without having to export a 2-track and then master it. I prefer to mix and master simultaneously in the box and retain the ability to tweak the mix even at the last minute. I will say that Har-Bal does have tools that don't exist in other tools in quite the same way, although Eventide is interesting. I would absolutely use Har-Bal if it could be a 2-bus plug-in as part of my mastering chain. I would love to hear what your tweaks sound like on a 70's song like "The River" from the "One Live Badger" album. You clearly did a good job with these two mixes and you are obviously better at using your invention than I am. I haven't touched it in a couple of years now, but you do have skills. I'm going to look at it again to see if it is a plug-in yet. It's good, but it breaks the continuity of my particular workflow. I will admit that people advise against my workflow, but I have to mix and master my own demos. It was good to hear from someone who really knows the software.

    • @paavojumppanen914
      @paavojumppanen914 Před rokem

      I think you missed the point, which was: if you only have a stereo source to work with rather than a multitrack (which is typical of live concert recordings), then what do you do? Also, on the dynamics thing, I tamed it to my personal taste. If you want more then you can leave more. The original played at decent levels (75-80dBA SPL) on my monitors sounds positively uncomfortable to my ears. How much of that you would notice would depend entirely on the monitoring you are using. Most 2-way monitors are typically mid-range weak around the crossover frequency because of the dispersion characteristics of the woofer (I'm not talking on-axis frequency response, which may well be flat), which may make it more tolerable. My monitors are crossed low (1kHz) and the driver picking up the mid-range is 5 inches, so it has little to no crossover weakness due to dispersion issues. As such I very clearly hear mid-range stridency and it is often uncomfortable. The frequency response is flat too.

    • @Endless_Skyway_Adventures
      @Endless_Skyway_Adventures Před rokem

      @@paavojumppanen914 not only did I not miss that point, I actually qualified my own use case as different and I acknowledged that you did a great job. I even offered another song that does fit your use case scenario. I also agreed that it is a good tool unlike many other available tools.

    • @paavojumppanen914
      @paavojumppanen914 Před rokem

      Sorry if I misunderstood you, and thanks for the compliment!

    • @miksteduzeltiriz
      @miksteduzeltiriz Před rokem

      @@Endless_Skyway_Adventures I'm in the same spot. I've known about Har-Bal for maybe 10+ years now, and just the fact that you need to go out of the DAW breaks the deal for me. Har-Bal is a great EQ, but so far it's not indispensable enough for me to break my established workflow. Not making this a VST plugin is really wasting the potential. Maybe @paavo should partner up with another developer to implement his code/patent into a more streamlined version for easier use and more outreach.

    • @paavojumppanen914
      @paavojumppanen914 Před rokem

      I am working on creating a plugin EQ that is based on the technology in Har-Bal Harmonic Balancer, but not everything in that product. Part of the reasoning for that is excessive complexity in a plugin environment, and the other is less need for those more advanced aspects of Har-Bal Harmonic Balancer. I'm part way through developing the filter engine for a plugin that has the same resolution but requires less processing power, which is a necessary aspect of implementing a plugin because you'll want to be able to use it on more than a handful of channels simultaneously. I can't give a timeline on release because there is much to be done and plenty of unknowns as far as how I'll implement the user interface.

  • @giancarlopaolini7529
    @giancarlopaolini7529 Před 2 lety

    Just before you rolled your dice I opted for the second option as the most representative, based more on my ears' "memories" than on actually playing an acoustic guitar. I'm much more involved with electric guitars, so everything moves via cables 🙂 Then, as you said, nearfield monitors or headphones have their own peculiarities which are reflected in the perceived sound.

    • @paavojumppanen914
      @paavojumppanen914 Před 2 lety

      So by that are you saying you record your electric guitars DI'd? Certainly in that case it doesn't really apply, and in such a case the EQ used is simply to define or modify the tone of the instrument. On the other hand, I think a sizeable proportion of electric guitar / bass recordings are made by mic'ing cabinets, and in that case the same issues apply. If you mic the cabinet closely you'll pick up driver modes and cabinet resonances that are quite different in colour to what a listener would hear in a live performance.

  • @zoundsic
    @zoundsic Před 2 lety

    You need to get someone to play it for you so you can hear the exact instrument from the mic's position, to get the best idea of its accuracy if working by ear.

    • @paavojumppanen914
      @paavojumppanen914 Před 2 lety

      Yes, that's always an issue, even when mixing your own music, because often you'll be biased by the desire to hear your perceived instrument. It certainly gives a clearer picture if you can have someone else play while you listen, but even in the case of playing myself and acoustically listening to what I play, what I hear is strongly different to what a close mic captures when played back through neutral monitoring.

    • @zoundsic
      @zoundsic Před 2 lety

      @@paavojumppanen914 Sure, but you can record it nearer to what you hear in the room and use that as a reference for other mic positions.

    • @paavojumppanen914
      @paavojumppanen914 Před 2 lety

      @@zoundsic Yes, for sure. In fact, with Har-Bal Harmonic Balancer, equalising to sound like far field is pretty easy. Just record the same performance with near field and far field mics, recording to separate tracks. Then use the far field track as a reference to EQ the near field one. I've thought about doing a demo of this myself, but I more often than not just do things in Har-Bal by ear, although you need reliable monitoring for that.
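
The far-field-as-reference idea can be sketched as deriving a matching curve from the two tracks' average spectra. This is a hypothetical sketch using numpy, not Har-Bal's actual matching algorithm; frame size and windowing are arbitrary choices:

```python
import numpy as np

def matching_eq_curve(near, far, sr, n_fft=4096):
    """Rough per-bin gain (dB) that would make the near-field track's
    average spectrum resemble the far-field reference."""
    def avg_spectrum(x):
        # Hann-windowed, half-overlapped frames; average the magnitude spectra.
        frames = [x[i:i + n_fft] * np.hanning(n_fft)
                  for i in range(0, len(x) - n_fft, n_fft // 2)]
        return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0) + 1e-12

    gain_db = 20.0 * np.log10(avg_spectrum(far) / avg_spectrum(near))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    return freqs, gain_db

# Toy check: if "far" is just the near-field signal 6 dB quieter,
# the derived curve should be flat at about -6 dB.
rng = np.random.default_rng(0)
near = rng.standard_normal(44100)
far = 0.5 * near
freqs, gain_db = matching_eq_curve(near, far, sr=44100)
```

In practice the two recordings would be time-aligned takes of the same performance, and the curve would be smoothed before being applied as EQ.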

    • @zoundsic
      @zoundsic Před 2 lety

      @@paavojumppanen914 Thanks for that tip. Although even the by-ear tweaking needs a good room, or you're still guessing. That's why I value Har-Bal.

    • @zoundsic
      @zoundsic Před 2 lety

      @@paavojumppanen914 I was thinking in terms of accuracy, or near it, with the ear, and although I understand your idea for capturing near and far field responses, I was focusing on getting it mic'ed up to reproduce the instrument accurately in a treated space. If you just used near field and far field, they could both be sufficiently out that any blend is just that, a blend of two mics.