NINA and Pixinsight Subframe Grading Criteria. And the Winner is?

  • Published 5 Sep 2024
  • NINA can include the Half-Flux Radius (HFR) and/or Number of Detected Stars in the filename for each subframe. Are these NINA-generated image grades better than Pixinsight's grade for the maximum Full Width at Half Maximum (FWHM)? We compare these three grades to see which wins...

Comments • 62

  • @brandonporter4227 • 3 years ago +2

    Another great video. I've had several club members and other astronomy friends ask me questions about multiple topics. Sometimes I can answer no problem and other times I've actually referred them to your channel and specific videos on the topic. I tell them you can explain it better and in more detail than I ever could. Your videos are always so detailed, informative and you have an extremely scientific approach about all of it. Well done!

    • @Aero19612 • 3 years ago

      Thanks for watching, Brandon! Hope I don’t steer your folks off the road.

  • @HeavenlyBackyardAstronomy

    Excellent tutorial James. I added a few more baby steps to my learning curve from your tutorial ... but yet, sooo much more I need to learn.

    • @Aero19612 • 3 years ago

      Thanks, Pat! Don't blink, I may change my mind on this whole topic. The perfect is the enemy of the good.

  • @Lasidar • 3 years ago +3

    Amazing analysis as always James!

  • @yosmith1 • 3 years ago +1

    I'll try to come back and explain my understanding of all this, but right now I've removed the NINA star count from my evaluation process. Great discussion! thx

    • @Aero19612 • 3 years ago

      Dive in with your comments/observations. Lots of room in this pool. I will also remove the star count from the filename, but may keep the HFR as a point of comparison for later if needed.

  • @richardneel6953 • 3 years ago

    Amazing how timely and relevant your subject matter has been lately. Not only do we have the same telescope, but I'm a recent Nina and PixInsight user and have been trying to come up with a more efficient method of grading images. Great video. Much appreciated.

    • @Aero19612 • 3 years ago

      Just remember: the perfect is the enemy of the good as far as image grading is concerned. Frankly, I’m not sure any of this matters. Haha. Thanks for watching, Richard.

    • @richardneel6953 • 3 years ago

      @@Aero19612 Yeah, but it appeals to the engineer/techy in me. Definitely not looking for perfect - just improvement.

  • @davidf9494 • 3 years ago +1

    Another great video James and while everyone has their own method for determining the 'best' images, your analysis has really shed some detailed light into both software packages. I use PI and am using a weighted formula but not always sure it's the best. Your video will get me to re-visit it and see how it might be improved. Thanks again!

    • @Aero19612 • 3 years ago

      Don’t waste too much time “evaluating” (like I tend to do) and forget to get some Astrophotography done! Thanks for watching, David

    • @davidf9494 • 3 years ago

      @@Aero19612 Wise words James! Clear skies!

  • @jimtaylor5802 • 2 years ago

    Excellent comparative analysis…

  • @dilipsharan8699 • 3 years ago +1

    Cool video. It's something I've been mulling over recently so thank you for the analysis. It's good to see the empirical data.
    I guess that in N.I.N.A.'s defence it's doing the calculations on the fly.
    One observation about the fact that NINA can have the same HFR for two subs with different numbers of stars. My understanding is that NINA calculates the average HFR over all the stars it can detect. But the number of stars in a frame is a function of the seeing. Whereas, as you mention, the eccentricity or star shape is primarily impacted by quality of focus, mount mechanics, guiding, and in some cases by the camera not being correctly placed in the focuser. So as long as the focus, mount mechanics, etc. are the same, the HFR will be similar in two images even if the number of stars is different.
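    The averaging described above can be sketched in a few lines. This is a minimal sketch, assuming the common flux-weighted definition of HFR used by autofocus tools; NINA's exact star-detection and weighting details aren't given in this thread.

```python
import numpy as np

def half_flux_radius(cutout: np.ndarray, background: float = 0.0) -> float:
    # Background-subtracted flux in a small cutout around one star
    star = np.clip(cutout.astype(float) - background, 0.0, None)
    total = star.sum()
    ys, xs = np.indices(star.shape)
    cy = (ys * star).sum() / total          # flux-weighted centroid (y)
    cx = (xs * star).sum() / total          # flux-weighted centroid (x)
    r = np.hypot(ys - cy, xs - cx)          # pixel distances from centroid
    return (r * star).sum() / total         # HFR ~ sum(flux * r) / sum(flux)

def mean_hfr(cutouts, background: float = 0.0) -> float:
    # One number per sub: the average HFR over every detected star, which is
    # why two subs with different star counts can still share the same HFR
    return float(np.mean([half_flux_radius(c, background) for c in cutouts]))
```

    Because the per-star HFRs are averaged, losing some faint stars to seeing or clouds barely moves the reported number as long as the remaining stars have the same profile.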

    • @Aero19612 • 3 years ago +1

      Thanks for watching, Dilip! NINA is more selective in the number of stars it selects, whereas PI will detect far more stars. I certainly agree that two images can have a different number of stars but the HFR can be the same. For example, a cloud may obscure a portion of the field of view, but the HFR is based on the stars that can be detected. Star eccentricity probably has no connection to focus. Eccentricity primarily results from mount mechanics issues.

  • @astrophotonics9470 • 3 years ago

    As always, great technical, honest analysis: no shilling, no dumb links. Adds to James' credibility with his technical reviews +10

    • @Aero19612 • 3 years ago

      I appreciate the kind words! Thanks! I'm glad you find these musings useful to some degree.

  • @zaphus • 20 days ago

    It would be interesting to revisit this topic, but now with Hocus Focus FWHM as a comparison

  • @AnakChan • 3 years ago

    This has been a great tutorial. Thank you very much! It's prompted me to look more closely at N.I.N.A.'s calculated HFR vs PixInsight's SubFrame Selector's FWHM and I know they both calculate very different things, but for a given subset of subs (no pun intended) where HFR and FWHM diverge, I actually find that when I pixel-peep, I prefer the subs with lower HFR from N.I.N.A. to those with lower FWHM as calculated by PixInsight. It's a pity PixInsight doesn't incorporate HFR into Subframe Selector. IMHO the ideal would be for both HFR and eccentricity to be accounted for in the 1st run, and thereafter SNR for the 2nd.

    • @Aero19612 • 3 years ago +1

      Agree. I would like to see HFR numbers in Pixinsight as well. The primary objective for NINA is to use the HFR for focusing, so they rely on fewer stars than Pixinsight does when computing the FWHM. You might also look into ASTAP. It has a batch imaging analysis capability that computes HFR. Thanks for watching!

  • @danreind • 1 year ago

    I would love to see a follow-up using the Hocus-focus plug in for FWHM estimation in NINA against pix.

    • @Aero19612 • 1 year ago

      Not a bad idea. I’ll give that some thought as we move back into Galaxy season. Thanks for watching!

  • @shaunjackson6366 • 3 years ago

    Quite interesting - I was using the HFR as a rough detector of picture quality to identify if my focus was drifting - but I was also running a PixInsight session with SubframeSelector to keep a constant graph of the FWHM as the night went on

    • @Aero19612 • 3 years ago +1

      That's interesting, Shaun. I'm wondering if there is a way to automatically run the SubframeSelector as each image comes in, compute your favorite weight, and then append it to the filename. Will have to look into that. Thanks for watching!
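      As a sketch of that idea: a small polling script could rename each new sub with its grade prepended as it lands. Everything here is hypothetical glue, not a real NINA or PixInsight API; the `grade` callable is a placeholder for whatever measurement you trust (e.g. parsing the output of a batch analysis tool such as ASTAP, mentioned elsewhere in this thread).

```python
import time
from pathlib import Path

def tag_filename(path: Path, weight: float) -> Path:
    # Zero-padded grade first, so an ordinary file listing sorts by quality
    return path.with_name(f"{weight:07.3f}_{path.name}")

def watch_and_tag(incoming: Path, grade, poll_s: float = 10.0) -> None:
    # Poll the capture folder and rename each new sub as it arrives.
    # `grade` is any callable returning a quality number for a FITS file.
    seen = set()
    while True:
        for f in sorted(incoming.glob("*.fits")):
            if f.name not in seen:
                seen.add(f.name)
                f.rename(tag_filename(f, grade(f)))
        time.sleep(poll_s)
```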

  • @barrytrudgian4514 • 3 years ago

    Thanks. The video and comments will help me to adapt my current grading system of a visual check in ASI Fitsviewer moving images to folders Round, Roundish, Egg and Yuck.

    • @Aero19612 • 3 years ago

      Thanks for watching, Stanley! Glad you found something to work with in the video/comments

  • @anata5127 • 1 year ago

    I step-by-step changed my approach. Each scope-camera combination has a theoretical FWHM, which also depends on guiding accuracy and seeing. So every frame deviating >20% from the theoretical limit is gone; eccentricity >0.55 is gone; WBPP will take care of the rest. Overall image after integration: FWHM within 5-15% of theoretical; eccentricity 0.45-0.5; no clouds in the background and no satellites.
    This applies only to top-notch refractors. The problem is the SCT. Really nasty scope to deal with. Available cameras have small pixels.
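    That culling rule reads as a simple pair of thresholds. A minimal sketch, using the 20% tolerance and 0.55 limit stated above; `fwhm_theory` is the per-setup theoretical FWHM you would supply yourself:

```python
def keep_frame(fwhm: float, ecc: float, fwhm_theory: float,
               fwhm_tol: float = 0.20, ecc_max: float = 0.55) -> bool:
    # Reject any sub more than fwhm_tol over the theoretical FWHM,
    # or with eccentricity above ecc_max; WBPP weights the survivors.
    return fwhm <= fwhm_theory * (1.0 + fwhm_tol) and ecc <= ecc_max
```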

  • @michaell1473 • 3 years ago

    Fascinating. It makes me want to select frames manually by blinking through them all. The star count might also be varying because the post-alignment cropping gives you a different total area to count from.

    • @Aero19612 • 3 years ago

      That's a point I had meant, but forgot, to bring up. I do all of my image grading with the calibrated frames before registration to avoid stars "falling" into the abyss. Alignment offsets probably won't hurt the FWHM assessment but you will certainly lose stars. There's also another subtle issue in that there is some degree of image interpolation during registration to account for fractional pixel offsets and rotations. Thanks for watching, Michael!

  • @junktrunk909 • 9 months ago

    I just came across this video and found it really helpful to better understand the differences in these metrics. Thanks! One question though -- aside from removing obviously bad images (e.g. your Z streak of stars) before handing it all over to WBPP, if you're using WBPP in the end, doesn't that already perform its own analysis and reject images that are too low in quality before integration? I'm just curious what your rationale was for trying to eliminate the slightly eccentric images by hand rather than relying on WBPP.

    • @Aero19612 • 9 months ago

      Haha. Simple answer. I don't use WBPP. Seems like WBPP works well if you do all of your calibration, grading, and integration in one session. I have master calibration files I use for many nights. I calibrate and grade each night's images the next morning. That way, when I'm done imaging a target (a multi-night process) I can integrate them with the image grade as the first numbers in the filename so they're listed in order of "goodness". I also found WBPP to be very sloooow.

    • @junktrunk909 • 9 months ago

      Oh I see. Yeah, I'm not really a fan of WBPP but I find so many people say it's the best once you learn its idiosyncrasies, so I keep trying. FWIW, WBPP works fine with master files. It's got a really hacky way of integrating multiple nights of data into one image (group keyword, with specially named folder syntax), but apparently that works too. But maybe you're right that self grading and pruning, then giving only the best files to Siril or whatever, is better. I should experiment more.

  • @billblanshan3021 • 3 years ago

    Great video Jay!! So glad you talked about this. A few days ago I contacted the NINA developers about adding eccentricity to their image analysis so that a proper HFR value could be calculated using your FWHM formula and they shot it down, like they do with every other idea presented to them 🙄. I don't know much about Sequence Generator Pro but wonder if it can output information like this which can be formulated for file naming grading purposes? Will check into this

    • @Aero19612 • 3 years ago

      Ha! I've had the same experience with NINA developers. I just have to keep telling myself the program is free.

    • @billblanshan3021 • 3 years ago

      @@Aero19612 Agreed!

    • @billblanshan3021 • 3 years ago

      @@spamwaffles1419 if you can read eccentricity then you can formulate that with the HFR value and create a new HFR value and use that to help grade subs live. It's the same as what Jay does with fwhm in PI

  • @dhkd7411 • 3 years ago +1

    WoW . Thank you

    • @Aero19612 • 3 years ago

      Haha. Maybe not a "Wow". More of a "Hmmm". Thanks for watching?

  • @constantinbaranov • 3 years ago

    The idea of using the long axis FWHM looks interesting. I am going to try it out and see if that single number will agree with my own subjective perception of stars shape.
    James, how do you use PI's Blink for culling obvious outliers? I mean, I do that too, but I still can't comprehend its UI. I scroll through the images, hitting the space bar on bad ones. Then go back and select unchecked lines that fit onto one screen (scrolling messes with selection) and move files out and remove them from the list. Then scroll down to the next screen until the end. It just feels so unnatural that I suspect I might be missing something obvious.

    • @Aero19612 • 3 years ago

      Hey Konstantin,
      You’re not missing anything. The PI Blink UI is TERRIBLE. I do what you do. But when I go back through to highlight the unchecked images, I scroll down by using the mouse in the scroll bar that way it doesn’t cancel what I’ve selected. When I’m done, I move those images to a “Bad Images” folder. For the life of me, I don’t know why you can’t simply move unchecked images to the bad images folder. It’s dumb.

  • @M31glow • 3 years ago

    Good post. Is 2.22 vs 2.64 even statistically significant? What I would want to know is what the error bars are for each measurement. No measurement is perfect and each one has an error. Especially when you are measuring FWHM and dividing by eccentricity, either of those two measurements independently increasing or decreasing relative to the other swings the product of your calculation.
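    One way to put numbers on that question: take the per-frame FWHM values behind each reported mean and compare them with Welch's t statistic, which doesn't assume equal variances. A sketch in plain NumPy; the actual per-frame samples would come from whatever each tool exports.

```python
import numpy as np

def sem(x) -> float:
    # Standard error of the mean: the "error bar" on a reported average
    x = np.asarray(x, dtype=float)
    return float(x.std(ddof=1) / np.sqrt(x.size))

def welch_t(a, b) -> float:
    # Welch's t statistic for two independent samples (unequal variances);
    # |t| well above ~2 suggests the two means genuinely differ
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    var = a.var(ddof=1) / a.size + b.var(ddof=1) / b.size
    return float((a.mean() - b.mean()) / np.sqrt(var))
```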

    • @Aero19612 • 3 years ago +1

      Hey Walter. To pick a nit: I would replace “statistically” with “visually”. I do believe 2.22 looks better than a 2.64 (I don’t remember if those numbers apply to NINA or Pixinsight). If the 2.64 is an elongated or bloated star I consider to be indicative of a “bad” image, then I want to leave it out of the stack whether it’s 1% or 10% of the number of images I have. My sense is that NINA HFR numbers are not as sensitive as I would like. Of course, the primary use of NINA HFR is in autofocus or to trigger an autofocus. Totally agree with your statement that “no measurement is perfect”. The point of my formula is to rank on the basis of the long dimension of an elongated star rather than some form of average between the short side and the long side. So it’s not an arbitrary division of FWHM by eccentricity; it’s actually a physical dimension of the star (which will be very close to the PI FWHM when eccentricity is small). But the key point is “there is no perfect assessment method”. Thanks for watching, Walter!
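      The long-axis idea above can be sketched as a one-liner. The exact formula from the video isn't quoted in this thread, so this is one plausible reconstruction under a stated assumption: that the reported FWHM is the geometric mean of the major (a) and minor (b) axis widths, related through eccentricity by b = a·sqrt(1 − e²).

```python
def long_axis_fwhm(fwhm: float, ecc: float) -> float:
    # If FWHM = sqrt(a * b) and b = a * sqrt(1 - e^2), then
    # a = FWHM / (1 - e^2) ** 0.25: the star's long physical dimension.
    # Reduces to the plain FWHM as eccentricity goes to zero, matching
    # the remark that the two agree when eccentricity is small.
    return fwhm / (1.0 - ecc ** 2) ** 0.25
```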

  • @psuaero100 • 3 years ago

    Great job James. I'm using both NINA and PI and have wondered about this exact thing. Like you, I also approach problems analytically and gather tons of data. I have another question maybe you can answer. What method are you using for PA? I've been using SharpCap Pro (SC) but I have a friend who uses a Polemaster. SC gives you a measurement value while (from what I've seen) the Polemaster doesn't provide this. I did a quick search of your videos and didn't find any for polar alignment and what getting it perfect means for your images. I routinely get down to between 10 and 25 arc seconds with SC and then just stop. Is that really enough or should I put in even more time on the PA each night?

    • @Aero19612 • 3 years ago +1

      Thanks for watching! I use the Polemaster for polar alignment. I had a couple of videos up about using the Polemaster when I first got it (it had just entered the market at that time) but have since taken down some of the older videos that seem "out dated." My thoughts on polar alignment accuracy are varied. If you have a good mount with little to no DEC backlash, go for a "good" polar alignment, say, within 1 arc-min, but I wouldn't spend toooo much time doing it. I have a good bit of DEC backlash and find that my guiding is best when I set the guide algorithm to Guide North or Guide South so it doesn't have to switch back and forth. In this case, I need the mount to drift in one direction so some deliberate error in polar alignment is "good". I haven't worked out yet how much error is appropriate. You do get field rotation which is corrected in the Star Alignment phase, but there's an image quality cost that goes with the image interpolation process needed to align a rotated frame. I would say your 10-25 arc-sec accuracy is awesome and there's no need to push it further. My 2 cents.

    • @psuaero100 • 3 years ago

      @@Aero19612 Thanks James. I think I'll keep doing what I'm doing. I usually only spend 4-5 minutes on PA with SharpCap after a quick manual align to hour angle location using my polar scope. I recently tuned up my Orion EQ-G (HEQ5) mount and removed 80% of my DEC backlash. I think I'll just keep doing what I'm doing and not obsess over getting well below 20-30 arc seconds of PAE. The seeing in the northeast often makes it jump around by ±5-10 arc seconds alone most nights.

  • @neverfox • 3 years ago

    Great video. But, just curious, why not include PI's number of stars metric in the mix? Is it that it is essentially the same as NINA's?

    • @Aero19612 • 3 years ago

      Good question. I have found that the number of stars is highly variable with NINA, even from image to image. Also, PI reports a much higher number of stars for the same image than NINA. Different star detection algorithm, I'm sure. Meanwhile, the FWHM (or HFR) will only apply to stars that are detected and therefore seems to be a more "stable" metric. Thanks for watching!

  • @pamelawhitfield4570 • 1 year ago

    I know the Hocus Focus plugin makes the NINA portion of the video a bit dated after 2 years (not much anyone can do about that!) but I have a quick query/comment about star eccentricity. When using a guidescope I would expect another possible cause for eccentricity would be differential flexure? I've been helping the developer with the new NINA flexure correction plugin and it seems to have a definite positive impact on star eccentricity in narrow-band data (plugin drift calculation is less accurate for short subs). Being able to append Hocus Focus metrics to filenames is quite handy these days too.

    • @Aero19612 • 1 year ago

      Hi Pamela. Interesting. Flexure is one of those things I “believe” could be a problem but I’ve never looked into it on a technical/mathematical basis. I need to do that. The devil in me says people are just blaming eccentric stars on magic “flexure” when there are other more mundane causes like gear harmonics in the RA direction and polar alignment error (field rotation). The less devilish engineer in me says “yeah, I can see how flexure could be a problem for long exposures.” I just haven’t put numbers to it. If you see elongated stars and they are always in the RA direction, I bet it’s not flexure, it’s the RA gear harmonics. This video was done with an off axis guider and my CGEM which has a ton of gear noise. Thanks for the homework assignment. Geez.

    • @pamelawhitfield4570 • 1 year ago

      @@Aero19612 even before doing a full data analysis (I need to check out some recent NINA logs plotting drift rate vs the PI eccentricity and maybe altitude to see improvement as the correction homes in) you can get a feel for it as the gravitational forces in the different axes change through the night with changing target position. I have the carbon fibre version of your Explore Scientific with an Evoguide 50ED so focal length is similar for some of your nebula targets. My mount is well behaved (rebuilt by DarkFrame Optics), I'm fussy about PE and I'm slightly undersampled with my QHY9. The plugin says I don't have a major flexure problem (I worked to minimize it when I put the setup together) but it's interesting to watch how it slowly increases as azimuth decreases. Overall it's probably the thing you look at once everything else has been sorted out to eke out that last bit of improvement!

    • @Aero19612 • 1 year ago

      Yes, that's what should happen. As an experiment to isolate gear harmonics, polar alignment error, and flexure, I wonder if it would be useful to take a short exposure picture through the guide scope and another through the imaging scope. Then plate solve and note the rotation angle for each. Repeat at, say, 15-min intervals. If flexure is present and the plate-solved rotation angle is accurate enough, you should see different rotation angles between the guide scope and the imaging scope as the RA axis moves. Maybe a good experiment for a full Moon night.

    • @pamelawhitfield4570 • 1 year ago

      @@Aero19612 that would be an interesting experiment. The plug-in works slightly differently in taking short exposures before and after a sub (for more accurate centroid determination) through the same filter. Each is plate solved and compares where the scope ended up versus where it should be after that amount of time. The error is then decomposed into RA and Dec component (and I do see a Dec component) drift rates. The drift is then countered using the comet tracking feature in PHD2 - that approach seemed more reliable. From a discussion around the Orbitals plug-in I suspect direct control of the mount tracking rate is being affected by a non-standard behaviour of EQASCOM (what I use), GSS might work as expected. In any case it’s an approach I believe has been taken by professional astronomers in the past - I found some old standalone programs online that did something similar. The heavier something is the greater the forces involved, one reason for the move to segmented mirrors for the ever larger professional telescopes (adaptive optics being the other that comes to mind). My carbon fibre OTA is pretty light as 4” scopes go.

    • @Aero19612 • 1 year ago

      Thanks for the background, Pamela. Very interesting. I’ve never used the comet tracking feature (have enough problems with objects that don’t move). So, after each image, a new “tracking adjustment rate” is handed to PHD2 and continuously updated with each image? Kind of like “macro” guiding on top of “micro” guiding. One of my intuitive (and therefore totally unverified) opinions is that flexure is only an issue for very long exposures. I always assumed my sky glow prevents me from being hit with the full impact of flexure (10 min exposures with SHO at best, 100 sec with RGB, 50 sec with Lum). How far off base am I?

  • @RobB_VK6ES • 3 years ago

    Some of those stars are getting very small and down in the noise James, perhaps leading to less than reliable interpretation. So I wonder what criteria and weighting factors each algorithm uses in its calculation? With all the research going into AI image manipulation these days you might suspect some of that knowledge might be applicable here.

    • @Aero19612 • 3 years ago

      Agree. Pixinsight is much more permissive with its star count (around 2000 for these images) than NINA (around 200 for these images). This is one of those fuzzy logic areas where you can spend all of your time with comparisons at high magnification and never really come to a conclusion: the more you study, the more confusing the result. I just wanted to (quickly) answer a relatively simple question: is NINA giving me as useful an image quality assessment as Pixinsight? Unfortunately, I came to the conclusion that it was not. If we had fast enough computers, we could simply dump all of the images into a routine that would use AI to define the image integration weighting formula for FWHM, Noise, Star Count, Median, etc. to arrive at the "best" image. Of course, that formula would likely be different from target to target. No easy answers. Eye of the beholder kind of stuff.

  • @BrokenPik • 3 years ago

    James? Camera?

    • @Aero19612 • 3 years ago

      Just my trusty ol’ ZWO ASI 1600MM Pro