Expectations: How do you measure an EQ in Smaart? [GSwSST24]

  • Added 23 Jul 2024
  • Measuring an EQ is a great next step to examine our expectations in our audio analyzer and gain confidence in our tools.
    ___
    Get Started with Sound System Tuning - • Get Started with Sound...
    ___
    SOUND SYSTEM TUNING ONLINE COURSE - www.proaudioworkshopseeingsou...
    ___
    Books on sound system tuning - www.sounddesignlive.com/audio...
    Podcast for live sound engineers - www.sounddesignlive.com/pro-a...
    ___
    START HERE - www.sounddesignlive.com/start...
    ___
    Be friendly
    Facebook - / sounddesignlive
    Twitter - / nathandofrango
    LinkedIn - / nathanlively
    ___
    I love to geek out about the physics of sound. This channel focuses on the growing opportunity for live sound engineers to improve their confidence and consistency through the understanding of the principles of sound system design and optimization. My goal is to make this channel upfront and honest about my success and failure, so you can learn from both.
    I am always open to suggestions and feedback so please comment on this video or contact me through my site.
  • Science & Technology

Comments • 44

  • @BirdcageTV 5 years ago +2

    Hey dude, thanks for all of the content you put out. I don't pay attention to all of it, but the parts I do catch are always super helpful and insightful. Thanks for sharing the knowledge :)

  • @robbyaliby 6 years ago +2

    Da best explanation... thanks Nathan

  • @marczeebregts647 4 years ago +1

    Hi Nathan,
    Great info! Thanks.
    So, how would you route this on, say, a DiGiCo, or any other console where you couldn't take the measurement signal "pre-everything" on a matrix output? I'm kind of stuck with this.
    Right now I make an extra matrix to be the measurement and gang it with my actual LR matrix, but that's probably not the right way to go.

    • @nathanlively 4 years ago +1

      Hey Marc, I did make that step a little complicated. Let's assume that you are mixing everything to a group, then sending that to the Matrix where you are doing some output processing for the different speakers. When you route that group to the Matrix, also route it to a physical output, therefore avoiding the processing you might do on the matrix. Does that work?

  • @adbayarea 5 years ago +1

    Mind blown!!

  • @Zzzaffle 5 years ago +4

    5:55 HARMONICS!

  • @kidcupid07 5 years ago

    Love it! Finally! Question: did you do a video on what's the best method?

    • @nathanlively 5 years ago

      Hey Kidcupid, what's the best method to do what?

  • @obaroakporovwovwo7169 2 years ago +1

    Thanks for all your content. For the setup you used in this video, can I use a 2-channel sound card? Thanks.

  • @FreeKeenan 8 months ago +1

    Can you explain more clearly how to parallel compress drums without getting comb filtering?
    That would be using two stereo subgroups of drums with compressors inserted in each subgroup, correct? How do you get around the destructive interference from the offset?

    • @nathanlively 8 months ago +1

      Hey FreeKeenan, you need to make sure that both signal paths have the exact same latency. This is usually pretty easy if you just send to both groups and use the exact same processing. You could have the compressor in, just to be sure, but turn the threshold all the way up so it's not doing anything. Send pink noise through, add the second group, see if you hear any comb filtering. The more advanced way to do this is to actually measure the latency of each channel. Robert Scovill has several videos about this.
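Nathan's latency-matching advice can be sanity-checked with a little arithmetic: summing a signal with a copy of itself delayed by d seconds produces a comb filter whose first null lands at 1/(2d) Hz. A minimal sketch; the 1 ms mismatch is an invented example, not a figure from the video:

```python
import cmath
import math

def summed_magnitude_db(freq_hz, delay_s):
    """Level of a signal summed with a copy of itself delayed by delay_s."""
    h = 1 + cmath.exp(-2j * math.pi * freq_hz * delay_s)
    return 20 * math.log10(max(abs(h), 1e-12))

delay = 0.001  # hypothetical 1 ms latency mismatch between the two groups
for f in (100, 250, 500, 1000):
    print(f"{f:5d} Hz: {summed_magnitude_db(f, delay):+7.1f} dB")
# in-phase frequencies gain +6 dB; the first null lands at 1/(2*delay) = 500 Hz
```

With a perfect match (delay of zero) every frequency sums to +6 dB, which is why identical processing on both groups is enough.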

  • @aperez12374 5 years ago +3

    Ken "Pooch" Van Druten once told me to use EQ very sparingly, sort of how a surgeon would use a scalpel.

  • @yaminhameed1524 5 years ago +3

    Hi Nathan, great video. I have always wondered what actually causes the phase shift when boosting or cutting a frequency with an EQ, and why it makes a sort of 'z' shape (in the phase response) across the center frequency. Technically, what is causing that? It's the same for analog EQs and digital EQs, unless it is a linear-phase EQ. How does it affect what we are hearing in the end? Thanks in advance.

    • @nathanlively 5 years ago +1

      Hi Yamin, thanks for checking out the video. I may be wrong, but my understanding of the way an EQ works is that it takes two copies of the signal, adds phase shift to one of them, then adds them back together.
      Hopefully, in the end, it has a balanced effect on what we hear. Why? Because if there is an EQ change caused by one device, accompanied by a relative phase shift, and we correct it with a complementary EQ and phase shift, it should come out to zero. Make sense?
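The magnitude bump and the 'z'-shaped phase trace Yamin asked about can be reproduced by evaluating a standard peaking biquad (the RBJ Audio EQ Cookbook form). This is only an illustration of minimum-phase EQ behavior, not Smaart's or any console's internals; the sample rate, center frequency, gain, and Q below are arbitrary example values:

```python
import cmath
import math

def peaking_eq_response(freq, fs=48000, f0=1000, gain_db=6.0, q=1.4):
    """Magnitude (dB) and phase (degrees) of an RBJ peaking biquad at freq Hz."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = (1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a)
    d = (1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a)
    z = cmath.exp(2j * math.pi * freq / fs)
    h = (b[0] + b[1] / z + b[2] / z ** 2) / (d[0] + d[1] / z + d[2] / z ** 2)
    return 20 * math.log10(abs(h)), math.degrees(cmath.phase(h))

# positive phase below the center frequency, zero at it, negative above:
for f in (250, 500, 1000, 2000, 4000):
    mag, ph = peaking_eq_response(f)
    print(f"{f:5d} Hz: {mag:+5.1f} dB, {ph:+6.1f} deg")
```

The boost and its phase shift travel together because the filter is minimum-phase, which is why a complementary cut can undo both at once, as Nathan describes.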

  • @Jeremy_Reynolds 6 years ago +2

    So with the impulse response having the extra noise from the graphic EQ, is it correct to assume that this is a representation of the lag experienced by the phase-shifted frequency ranges?

    • @nathanlively 6 years ago +3

      Bingo.
      Which is also why a subwoofer's impulse response looks like a long lazy snake.

  • @jourellbacani3569 5 years ago

    Do you EQ the subwoofer too?

  • @livemixpriyan 1 year ago +1

    Hi Nathan, thanks a lot for the video. Everything is understood, BUT one question. Everybody talks about the phase shift caused by a GEQ or PEQ, but why does it matter in a practical situation? We do not send any parallel signals of a channel, like one with EQ and another without. Phase shift matters where there are two identical signals with different timing. Could you please point out a few practical situations (especially in live sound) where the phase shift through the EQ matters? Thanks in advance.

    • @nathanlively 1 year ago

      Hey Priyan, please see if this helps: czcams.com/video/Z-lEyq4sb_k/video.html

  • @robintanj 5 months ago +1

    Hello Nathan, when staging a performance in an indoor venue, what strategies can we employ to address the variations in tonality resulting from the room's unique acoustic properties?

    • @nathanlively 5 months ago +1

      Hey Robin, can you give me an example? Maybe this happened to you recently. Tell me more about what you mean by changes in tonality. From my perspective, the sound system calibration process should account for this.

    • @robintanj 5 months ago

      @@nathanlively Thanks for your reply. In my experience, indoor environments tend to exhibit a pronounced feedback effect, primarily because certain frequencies may be amplified depending on the specific acoustical properties of the room. What strategies should we employ to effectively mitigate such a challenge?
      To what extent are Finite Impulse Response (FIR) filters practically applicable in a live system for the purpose of room correction?

    • @nathanlively 5 months ago +1

      @@robintanj ah, I see. I don't have any good advice. All I know how to do is apply best practices for the system: check every point in the signal chain to maximize GBF. I'm working on an anti-feedback plugin. Would that help?
      Make sure that you know where your alignment positions are. You don't want to be accidentally summing main and sub into an open mic onstage, for example.
      Otherwise it's just EQ.

  • @ghighrolla711 2 years ago +1

    Thanks for the content. How would you route this on a QL5?

    • @nathanlively 1 year ago

      Hey Phil, you could insert a graphic EQ on an input channel.

  • @casadelrin2966 5 years ago +1

    What does the impulse response graph tell us?
    If it looks weird after applying the graphic EQ, does this affect the sound? Does the impulse response affect the sound? How?

    • @nathanlively 5 years ago +2

      Hey Casa, the impulse response tells us amplitude and time. Compare the IR of a microphone cable (a single peak) to that of a subwoofer (long and stretched out).
      Yes, the GEQ will affect the sound and the IR. In most cases, any change to the magnitude response will come with a change to the phase response, and therefore the IR. There's no free lunch. :)

    • @casadelrin2966 5 years ago

      @@nathanlively I understand. Thank you!
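Nathan's "no free lunch" point can be seen directly by feeding a unit impulse through a peaking filter: with a low-frequency boost engaged, the impulse response rings on for many samples, and with the gain at 0 dB the impulse passes through untouched. A sketch using standard RBJ-style biquad math; all filter settings are invented examples:

```python
import math

def peaking_impulse_response(n, fs=48000, f0=100.0, gain_db=6.0, q=2.0):
    """First n samples of a peaking EQ's impulse response (direct form I)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for i in range(n):
        x = 1.0 if i == 0 else 0.0  # unit impulse in
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

ir = peaking_impulse_response(2000)
print(max(abs(s) for s in ir[200:]))  # the low boost still rings 200+ samples in

flat = peaking_impulse_response(10, gain_db=0.0)
print(flat[:3])  # with 0 dB gain the filter is a wire: a single clean spike
```

The lower the boosted frequency and the higher the Q, the longer the ringing, which is the "long lazy snake" shape of a subwoofer IR.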

  • @adbayarea 5 years ago +1

    Those peaks are Harmonics!

  • @casadelrin2966 5 years ago +1

    What is delay tracking for?

    • @nathanlively 5 years ago

      Hey Casa, this is a big question, so I am going to direct you to the Smaart user manual under Chapter 6, Delay Compensation.

    • @casadelrin2966 5 years ago

      @@nathanlively Thank you very much!
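As a toy illustration of the idea behind delay compensation (Smaart's actual method is described in its manual), the core operation is a cross-correlation: slide the reference signal along the measured one and keep the lag where they match best. Everything below is an invented example, not Smaart code:

```python
def find_delay(reference, measured):
    """Lag (in samples) where `measured` best lines up with `reference`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(measured) - len(reference) + 1):
        # dot product of the reference against the measurement at this lag
        score = sum(r * measured[lag + i] for i, r in enumerate(reference))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

ref = [0.0, 1.0, 0.5, -0.3, 0.0]
meas = [0.0] * 6 + ref + [0.0] * 2  # same shape, arriving 6 samples later
print(find_delay(ref, meas))  # → 6
```

An analyzer uses this measured offset to time-align the reference with the mic signal so the transfer function compares like with like.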

  • @markjtapply 6 years ago +2

    Noob question... How do you compensate for filling the room full of people? The EQ curve totally changes.

    • @nathanlively 6 years ago +4

      Hey Mark, thanks for checking out the video. I would argue that that is a common misconception. Yes, the floor reflection might be removed, but you can compensate for that with ground plane measurements. The temperature might change and shift your delay times, but you can recalculate those. The humidity might change, but you can compensate with a high shelf filter.
      All of that aside, the answer is that you keep measuring. After your tuning you take a combined trace. During soundcheck you compare. Once the show starts you compare again.
      Thoughts?

  • @winship7891 5 years ago +4

    harmonics?

  • @MrMelodyCold 4 years ago +1

    That would be harmonic distortion from the electronic equipment. A pure sine wave is created digitally, but when the A/D converters generate the electrical signal, it resonates with the components, creating harmonics. This is measured as THD (total harmonic distortion); lower is better, especially in broadcast, because unintentional harmonics create a ton of problems. Don't misinterpret this as the harmonics in music: these are created as a margin of error of the electronics, and a perfect electronic device would not have them.
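The THD figure mentioned above can be estimated from a spectrum: drive a pure sine through a nonlinearity and compare the level at the harmonic bins with the fundamental. A self-contained sketch using a single-bin DFT; the tanh curve is just a stand-in for converter nonlinearity, not a model of any real device:

```python
import cmath
import math

def bin_magnitude(signal, k):
    """Magnitude of DFT bin k, computed directly for just that bin."""
    n = len(signal)
    return abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                   for i, s in enumerate(signal))) / n

N, k = 1024, 16  # 16 whole cycles per window, so no spectral leakage
sine = [math.sin(2 * math.pi * k * i / N) for i in range(N)]
clipped = [math.tanh(1.5 * s) for s in sine]  # gentle symmetric saturation

fundamental = bin_magnitude(clipped, k)
harmonics = [bin_magnitude(clipped, k * h) for h in range(2, 6)]
thd = math.sqrt(sum(m * m for m in harmonics)) / fundamental
print(f"THD = {100 * thd:.2f}%")  # odd harmonics only: tanh is symmetric
```

The harmonic bins land at exact integer multiples of the fundamental, which is why the peaks in the video line up the way the comments point out.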

  • @balabuyew 2 years ago +1

    Smoothing is implemented badly in both Smaart and SysTune. In 1/1-octave mode you can even see straight line segments, where there should be a continuous smooth curve. Something like (bi)cubic interpolation should be used...

    • @nathanlively 2 years ago

      Hmmm, I had no idea. Good to know. Is there another audio analyzer you prefer that you think has handled this better?

    • @balabuyew 2 years ago

      @@nathanlively I'm not a "pro" in sound measurement, but as a programmer I can see the issue at first glance.
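For context, fractional-octave smoothing itself is a simple averaging operation; how the smoothed points are then drawn (straight segments versus a spline, as balabuyew suggests) is a separate display decision. A toy sketch of the averaging step, unrelated to how Smaart or SysTune actually implement it:

```python
def octave_smooth(freqs, mags_db, fraction=1.0):
    """Average each point over a window fraction-of-an-octave wide."""
    half = 2 ** (1 / (2 * fraction))  # half-window as a frequency ratio
    smoothed = []
    for f in freqs:
        lo, hi = f / half, f * half
        window = [m for fi, m in zip(freqs, mags_db) if lo <= fi <= hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

# a jagged comb of 0 and 6 dB points, spaced 1/48 octave apart
freqs = [100 * 2 ** (i / 48) for i in range(97)]
mags = [0.0 if i % 2 else 6.0 for i in range(97)]
print(octave_smooth(freqs, mags)[48])  # roughly 3 dB: the jags average out
```

Wider fractions (1/1 octave) average more points and flatten more detail, which is why the 1/1 trace can look like sparse connected segments.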

  • @DavidAnthonyFlores 3 years ago +1

    Overtones.