Adding Ha to Red or Luminance Data

  • Published 20 Jul 2024
  • We go back and take a fresh look at a familiar formula for adding Ha (or O3) to broadband Red and/or Luminance (or Blue/Green) filter images. The original formula is based on removing the Red in the Ha bandwidth from the Ha data, but we simplify the process a bit and demonstrate it on M31 (William Optics GT81), M33 (Explore Scientific ED102), and M51 (Celestron C9.25).
  • Science & Technology

Comments • 42

  • @csb0xc4rs • 3 years ago +2

    You've done it again! I love how you are constantly asking many of the right questions that are often ignored. Fantastic work!

  • @rayhodel8069 • 2 years ago

    James - I wanted you to know that this video helped me a lot. The key for me was the explanation and the discussion of the formulas. Thanks, Ray

    • @Aero19612 • 2 years ago

      I’m glad, Ray! I had always seen those equations and thought “what!?”. But when you take them apart, there’s a reason behind the madness. Thanks for watching!

  • @davidharris8037 • 2 years ago

    I have watched quite a few of your videos & always appreciate your logical, analytical, and realistic approach to this hobby. Always helpful info.

    • @Aero19612 • 2 years ago

      Thanks for watching the videos, David! Much appreciated.

  • @dennismichels7194 • 3 years ago

    I can't thank you enough for making the complexities of this endeavor understandable. Please keep up the good work.

    • @Aero19612 • 3 years ago

      Glad you found this vid useful, Dennis!

  • @zelodec • 1 year ago

    Works like a charm. Thanks!

  • @astrounclejoe2572 • 2 years ago

    Once again, thanks James. I recently bought an EdgeHD 8”, a 294MM camera, and an EFW, having used OSC cameras before, so I went back to check this out.
    I was creating a mask and applying Ha to the RGB combined image as a percentage of the new image (between 20-60). It actually worked quite well. Your approach makes more sense and is probably easier (and more accurate): pull out just the Ha data and apply it to the Red channel. Will give it a try. Thanks

    • @Aero19612 • 2 years ago +1

      Thanks for watching! Hey, any method that gives you results you like is a good method.

    • @astrounclejoe2572 • 2 years ago

      @@Aero19612 Just checking, and to be clear: when I said this approach makes more sense, I was referring to yours :)

    • @Aero19612 • 2 years ago +1

      @@astrounclejoe2572 Haha. I know! And when I said that if you have a process that produces good results already it's a good method, I was referring to your approach! Play around and see what you like. Drop me a comment with your conclusions. You might end up liking what you were doing already. There are MANY roads to basically the same place.

  • @MastersofPixInsight • 1 year ago

    Really well done, thank you. FYI, "ViNcent PerEZ" is Vicent Peris. Thanks

    • @Aero19612 • 1 year ago +1

      Thanks for watching and, especially, for the correct pronunciation! Wish I had a time machine to go back and fix all of my errors. Haha.

  • @slzckboy • 2 years ago

    very well presented as usual

  • @chandrainsky • 3 years ago

    Very interesting as always. Hope to use the technique in the M51 data I have more or less captured over the past few weeks.

    • @Aero19612 • 3 years ago

      Excellent! I hope it works for you. Just experiment until you're happy with the result.

  • @kayedsss • 3 years ago

    For sure you are the best!

    • @Aero19612 • 3 years ago

      Haha. Not even close, Kayed. Thanks for watching!

  • @chazparvez4970 • 3 years ago

    Boom! You iza top man - Thank you!

  • @mohammadranjbaran1897 • 3 years ago

    Thank you very much

    • @Aero19612 • 3 years ago

      Thanks for watching, Mohammad! Hope the procedure works well for you. I think Ha makes a big difference to an RGB image. Good luck!

  • @tezza0905 • 3 years ago

    Love your videos as always; the detail and rigour are very pleasing. I am definitely going to give this a try, but one thing I wondered about was whether, in the R-subtracted Ha data, it would be worth using StarNet to remove the residual stars so as not to impact them in the final combination. I know StarNet only works on non-linear images, but I've seen methods where the image is stretched, StarNet is run, and the image is reverse-stretched back to linear. I have some M33 data that I've been meaning to work through and your video has given me the nudge to get on and do it!

    • @Aero19612 • 3 years ago

      Hey Terry. I'm a big fan of StarNet and generally use it to remove stars and then combine starless channels for narrowband or broadband processing. I have also combined Ha with Red in the nonlinear space. That can work too (but I think combining in the linear state is better). I like the idea of going back to the linear state after removing the stars. If you use a single nonlinear expression, it certainly should be possible to reverse the process with PixelMath (I tend to do an iterative stretch, adjust the black point, then repeat, a process which I suspect is irreversible). I'm not claiming the process shown in the vid is "the best" approach; I just wanted to revisit it since many people do use it, show the concept it is based on and why we have to play with the parameters, and then streamline the process by removing unnecessary constants in the equations. Thanks for watching!
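      The "stretch, remove stars, reverse-stretch" idea discussed here works whenever the stretch is a single invertible function. PixInsight's midtones transfer function (MTF) has exactly that property: applying the MTF with midtones balance 1 - m undoes the MTF with balance m. A minimal NumPy sketch of the idea (the value of m is illustrative, not taken from the video):

```python
import numpy as np

def mtf(m, x):
    # Midtones transfer function: maps [0, 1] onto [0, 1], fixing 0 and 1,
    # with midtones balance m controlling the strength of the stretch.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

m = 0.15                                # illustrative midtones balance
linear = np.linspace(0.0, 1.0, 11)      # stand-in for linear pixel values

stretched = mtf(m, linear)              # nonlinear stretch
recovered = mtf(1.0 - m, stretched)     # MTF with 1 - m inverts the stretch

print(np.allclose(recovered, linear))   # round-trips back to linear
```

      An iterative stretch-plus-black-point workflow, by contrast, clips data at each black point adjustment, which is why it cannot be reversed this way.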

    • @tezza0905 • 3 years ago

      Hi James, the video was thought-provoking which I really love. I have spent a lot of time up to now trying to improve my data acquisition, so next I need to improve my processing, and this has given me lots to think about. Thanks again.

  • @billblanshan3021 • 3 years ago

    James, thanks for showing this method! Question: if you perform a linear fit of the Ha to the Red channel, would this eliminate any need to scale the data via a formula? I have never tested this, nor have I investigated using linear fit in this way, but I wonder if this would stretch the extents of the Ha data to be more linear to the Red; then subtract the difference and add it back into Red?

    • @Aero19612 • 3 years ago

      No, linear fit is the opposite of what we would need to apply the original formula. When we image, we're always trying to expose long enough to get the peak of the histogram just off of the left side. In effect, we are doing a (poor) linear fit by setting the exposure time and gain. With a broadband filter, I use a lower gain and shorter exposure because so much light is coming in. When I use a narrowband filter, I have to increase the gain and use a longer exposure to put the peak of the histogram at about the same location. If I want to apply the original [Ha - R*Nha/Hr] formula, I should use the same gain and exposure for the Ha as when shooting Red. In that case, the Ha would be a very low signal indeed (i.e., the Ha peak would be much further to the left than the Red peak). Linear fit aligns the peaks.
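      To see concretely why a linear fit aligns the peaks instead of preserving the faint narrowband scale, here is a small NumPy sketch. The synthetic channel data and the plain least-squares fit are stand-ins for real frames and for PixInsight's LinearFit process (which adds outlier rejection), so treat this as an illustration of the concept only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: a broadband Red channel exposed to put its histogram
# peak well off the left edge, and a much fainter narrowband Ha channel.
red = rng.normal(0.25, 0.05, 10_000)
ha = 0.2 * red + 0.01 + rng.normal(0.0, 0.005, 10_000)

# Least-squares fit of red against ha (red ~ a + b*ha), then rescale ha
# with those coefficients -- the essence of a linear fit between images.
b, a = np.polyfit(ha, red, 1)
ha_fitted = a + b * ha

# After fitting, the Ha mean sits on top of the Red mean: the peaks align,
# which defeats a formula that expects Ha at the same gain/exposure as Red.
print(float(np.mean(red) - np.mean(ha_fitted)))
```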

  • @Lasidar • 3 years ago

    Interesting process. Any idea how this would compare to the existing NBRGBCombination script? I believe it tries to do something similar to what you've outlined.

    • @Aero19612 • 3 years ago

      Agreed. I suspect the script is based on the fundamental equation shown in the video (since it came from Pixinsight in the first place). I have never found a one-size-fits-all solution. They all seem to require some tweaking for each project. Thanks for watching, Kyle!

  • @wanderingquestions7501

    Thanks

  • @sanddollarastro8017 • 2 years ago

    I know this is a little old but... can I use this process to add LUM to my existing M33 image? Thanks.

    • @Aero19612 • 2 years ago +1

      I don't think so. This method is about combining data taken with a narrowband filter with data from a broadband filter that covers the narrowband filter's bandwidth.
      All is not lost, however. There is a good tool in PixInsight for what you want to do: the LRGBCombination tool. Open the tool and uncheck the R, G, and B lines. Then place your Lum data filename in the L slot, and drag and drop the triangle onto your color image (you might want to make a clone of the color image first). For added fun, check the chrominance noise reduction and play with the saturation and lightness sliders. If you want a brighter image, move the lightness slider to the left. If you want more color saturation, move the saturation slider to the left.

  • @apg.7461 • 1 year ago

    Hello James, your PixelMath formula works well in the RGB image. But when I want to add my luminance data to the HaRGB image, all the Ha nebulas turn pinkish. Can I also do this formula with the nonlinear LRGB image? Or is there any other method where I can do it with the combined LRGB image?

    • @Aero19612 • 1 year ago

      There are several approaches. First, once you get your "new Ha" image, you can add it to the Lum data kind of like you add it to the Red channel. This will ensure that any additional detail in the Ha shows up in Lum and in the final detailed LRGB image. As for color, you might try adding the "New Ha" to the Blue channel as well (maybe not at the same strength as in Red. Maybe 60%ish). This will tend to leave you with more of a magenta for the Ha contribution.

    • @apg.7461 • 1 year ago

      Hello James, thank you for answering. But I'm a total beginner in PixInsight and I don't know which formula I have to enter in PixelMath. Can you show me how the formula has to look in PixInsight for adding the NewHa to the luminance? Thanks in advance

    • @Aero19612 • 1 year ago

      Right. So use the expression:
      Ha - R*s1
      to get the NewHa. In this case, your Ha image is called "Ha" in the image identifier tab on the upper left of the window and your red image is labeled "R" in the image identifier. Try different values of s1 (say, between 0.2 and 0.5, per the video).
      Once you have a new image labeled "NewHa", you can add it to other images.
      R + (NewHa - med(NewHa))*s2 for a new red channel with Ha (play with s2) or
      B + (NewHa - med(NewHa))*s3 for a new blue channel with Ha (maybe let s3 = s2*0.6)
      To add Ha to Lum, use
      Lum + (NewHa - med(NewHa))*s4
      Here we assumed that the luminance image is named "Lum" in the image identifier. You can adjust s4 (s4 should be set to something like "s4=1.0;" in the Symbols tab of PixelMath).
      As an alternative to the above, you can "blend" images together. The PixelMath formula to blend Ha and Lum, for example, is:
      1 - (1 - Lum)*(1 - (NewHa-med(NewHa))*s4)
      hope that helps!
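      For anyone who wants to sanity-check these PixelMath expressions outside PixInsight, the same arithmetic can be sketched in NumPy. The array names mirror the image identifiers above, but the data and the scale-factor values are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for linear, normalized [0, 1] channel data.
Ha  = rng.uniform(0.0, 0.3, (100, 100))
R   = rng.uniform(0.0, 0.3, (100, 100))
B   = rng.uniform(0.0, 0.3, (100, 100))
Lum = rng.uniform(0.0, 0.5, (100, 100))

s1, s2, s4 = 0.3, 1.0, 1.0   # tuning constants; adjust per the video
s3 = s2 * 0.6                # weaker Ha contribution into Blue

# NewHa = Ha - R*s1: remove the broadband contribution from the Ha frame.
NewHa = Ha - R * s1

# Median-subtract so only signal above the background gets added.
excess = NewHa - np.median(NewHa)

new_red  = R + excess * s2            # R + (NewHa - med(NewHa))*s2
new_blue = B + excess * s3            # B + (NewHa - med(NewHa))*s3
new_lum  = Lum + excess * s4          # Lum + (NewHa - med(NewHa))*s4

# "Blend" alternative: 1 - (1 - Lum)*(1 - (NewHa - med(NewHa))*s4)
blended = 1.0 - (1.0 - Lum) * (1.0 - excess * s4)

print(new_red.shape, new_lum.shape, blended.shape)
```

      One design note: the blend form expands to Lum + excess*s4 - Lum*excess*s4, so it behaves like the additive form for dark pixels but rolls off as Lum approaches 1, which keeps bright areas from clipping.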

    • @apg.7461 • 1 year ago

      @@Aero19612 Hi James, this is awesome! Thank you so much for helping me out! Subbed to your excellent work!