Temporal Denoising Analysis

  • Published 5 Jul 2024
  • This video is an analysis of techniques to reduce video noise (specifically temporal noise) in Blender.
    I have created a Blender Compositor node-based temporal denoiser, which can be downloaded for free from my Gumroad.
    markstead.gumroad.com/l/Tempo...
    The video shows how to use the node-based temporal denoiser, and the strategies used to reduce noise and overcome problems.
    Finally, I compare the NVIDIA OptiX-based temporal denoiser with my node-based temporal denoiser.
    When you render your animation with the required render passes (a setup sketch follows this description), you can evaluate both denoisers and work out which works best for you.
    This video covers:
    00:00 - Introduction - Why temporal noise is a problem
    02:10 - Frame blending
    03:09 - Motion compensation - Using the compositor node group
    06:01 - How ghosting is prevented
    07:29 - Shadow handling experiments
    08:45 - Temporal Denoising Comparison - OptiX vs Node Group
    09:31 - Summary
    This video uses the following Blender demo scenes:
    - Agent 327: Operation Barbershop, by Blender Studio
    - Spring, by Blender Studio
    - The Junk Shop, by Alex Treviño
    - Monster Under The Bed, by Metin Seven
    All of which can be downloaded from the Blender website: www.blender.org/download/demo...
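    As a rough illustration only, the render passes and output format this workflow relies on can also be enabled from Python. This is a minimal sketch assuming Cycles; the file path is a placeholder:

      import bpy

      scene = bpy.context.scene
      view_layer = bpy.context.view_layer

      # Vector pass is needed for motion compensation (only written when motion blur is off)
      view_layer.use_pass_vector = True
      scene.render.use_motion_blur = False

      # Denoising data passes, so the noisy image is available to the compositor
      view_layer.cycles.denoising_store_passes = True

      # Save every frame as a MultiLayer EXR so all passes are kept together
      scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
      scene.render.filepath = "//frames/frame_####"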
  • Short and animated films

Comments • 44

  • @udbhavshrivastava
    @udbhavshrivastava 8 months ago +5

    This was such a thorough analysis!
    Appreciate the good work, mate.

  • @leonarddoublet1113
    @leonarddoublet1113 a year ago +4

    Thanks for the video Mark - a lot of clear detailed work to explain the process and functions. I appreciate it.

  • @Mioumi
    @Mioumi 12 days ago

    Thank you! That's some really good insight.

  • @Szzachraj
    @Szzachraj a year ago +1

    Cool video, the clear explanation helped me with my decision.

  • @shanekarvin
    @shanekarvin 3 months ago

    Thanks Mark! This was very helpful.

  • @matejivi
    @matejivi 10 months ago +1

    Thank you! A shadow pass would be nice indeed.

  • @BlaBla-sf8pj
    @BlaBla-sf8pj 9 months ago

    thx for your help

  • @blenderheadxyz2418
    @blenderheadxyz2418 2 months ago

    wow thanks a lot

  • @siufa23
    @siufa23 a year ago

    Thanks Mark. This is a great explanation. Do you think it's possible to automate the denoise process with a Python script from the command line, without the need to open Blender?

    • @MarkStead
      @MarkStead  a year ago

      I personally haven't done that.
      Here's the command-line documentation - you can certainly perform rendering and run Python scripts.
      docs.blender.org/manual/en/latest/advanced/command_line/arguments.html
      If you have a Blender file configured for compositing then you could presumably just render that from the command line, with no Python scripting required.
      Perhaps what you could do from a Python script is substitute node parameters for the filenames or the number of frames. You should be able to fully integrate Python with pretty much anything in Blender including adding/manipulating compositing nodes.
      For example in Blender if I modify the frame offset in the compositor, I can see in the Scripting window it has executed this command:
      bpy.data.scenes["Scene"].node_tree.nodes["Image"].frame_offset = 1
      Obviously you have the extra complexity of setting up scripts and all the command-line parameters. However, it makes sense when you're trying to configure an automated rendering pipeline.
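      For example, a minimal sketch of that idea (the .blend name, script name and the "Image" node name are just placeholders) could be a small script run from the command line:

        # set_sequence.py - run with:
        #   blender --background temporal_denoise.blend --python set_sequence.py --render-anim
        import bpy

        scene = bpy.data.scenes["Scene"]
        node = scene.node_tree.nodes["Image"]   # the Image (sequence) node in the compositor
        node.frame_offset = 1                   # same parameter as the line above
        scene.frame_start = 1
        scene.frame_end = 250
        scene.render.filepath = "//denoised/frame_####"

      The script runs first, then --render-anim renders the animation with those settings applied.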
      Does that help?

  • @aulerius
    @aulerius 9 months ago

    Do you know if there is any way to minimize the occlusion masks including edges of objects, even when they are stationary? Does it have something to do with aliasing in the render? I am using your techniques for a different purpose (in projection-mapping textures on moving scenes, to distinguish occluded regions and inpaint them)

    • @MarkStead
      @MarkStead  9 months ago +1

      Have you looked at Cryptomatte?
      At one point I was trying to use the Cryptomatte node to distinguish between different objects. The problem is that it is designed to be used with a Matte selection - so then I tried to understand how the raw Cryptomatte render layer was structured - referring to this document raw.githubusercontent.com/Psyop/Cryptomatte/master/specification/IDmattes_poster.pdf
      However it was an impossible task for me - since there is no unique object ID for a given pixel position. Specifically the Cryptomatte data represents all the source objects that create the pixel (including reflections, anti-aliasing, transparency, motion blur) and a weighting for each.
      If you're able to make a Cryptomatte selection for the occluded region, then this should give you a mask with properly anti-aliased edges.
      However (not that I understand your project exactly), perhaps you could also be looking at the Shader nodes and rendering those faces with emission and everything else transparent (perhaps using material override for the whole scene). You might be able to use Geometry Nodes to calculate the angles to the projector to give you an X/Y coordinate. GN could also calculate the facing angle and therefore the level of illumination falloff (or whether a face is occluded completely).
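      If it helps, a Cryptomatte selection can also be set up from Python. This is only a rough sketch - the object name "Cube" is a placeholder, and it assumes the object-level Cryptomatte pass is enabled:

        import bpy

        view_layer = bpy.context.view_layer
        view_layer.use_pass_cryptomatte_object = True   # write the object-level Cryptomatte layer

        tree = bpy.context.scene.node_tree
        crypto = tree.nodes.new("CompositorNodeCryptomatteV2")
        crypto.source = 'RENDER'       # read Cryptomatte layers from the render result
        crypto.matte_id = "Cube"       # comma-separated object names to select (placeholder)

        # The Matte output is an anti-aliased mask covering the selected objects
        viewer = tree.nodes.new("CompositorNodeViewer")
        tree.links.new(crypto.outputs["Matte"], viewer.inputs["Image"])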

  • @djdog465
    @djdog465 a year ago

    cool video dad

  • @insertanynameyouwant5311
    @insertanynameyouwant5311 3 months ago +2

    A bit of a dilemma: enabling the vector pass only works when there's no motion blur activated. But I need that too.

    • @MarkStead
      @MarkStead  3 months ago +1

      I didn't know that - I just assumed it was possible. You could do something crazy like render out a sequence with one sample, no motion blur, and the vector pass. The problem you've got is that the vector represents the position at a single point in time, and there's no way to get the range of movement for the visible blur (the blur may not be linear). Maybe when the movement is very low, temporal denoising might still make sense, but then the denoising could be automatically disabled in the areas of the image with more movement and blur (where the noise is perhaps less noticeable anyway).
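      Purely as an illustration of that "one sample, no motion blur" idea, the extra vector-only pass could be set up like this (a sketch assuming Cycles; the path is a placeholder):

        import bpy

        scene = bpy.context.scene
        view_layer = bpy.context.view_layer

        # Throwaway pass just to capture motion vectors
        scene.render.use_motion_blur = False    # Vector pass requires motion blur off
        scene.cycles.samples = 1                # noise doesn't matter here
        scene.cycles.use_denoising = False
        view_layer.use_pass_vector = True

        scene.render.filepath = "//vector_pass/frame_####"
        bpy.ops.render.render(animation=True)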

  • @rami22958
    @rami22958 5 months ago

    Now that I have finished creating the node, do I have to convert it to an image again, or can I convert it directly to a video? Please reply.

    • @MarkStead
      @MarkStead  5 months ago +1

      👍 You can output as a video file. Just going from memory, you would (1) connect the denoised image to the Composite node, and then configure the output settings in the normal place, or alternatively (2) use the File Output node and specify the output settings in the node properties (N panel). Output using FFmpeg Video with MPEG-4/AV1/etc.
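      For option (1), the equivalent output settings in Python would be roughly this (a sketch - pick whatever container/codec you prefer):

        import bpy

        scene = bpy.context.scene
        scene.render.image_settings.file_format = 'FFMPEG'
        scene.render.ffmpeg.format = 'MPEG4'    # .mp4 container
        scene.render.ffmpeg.codec = 'H264'
        scene.render.filepath = "//denoised_output"
        bpy.ops.render.render(animation=True)   # writes the compositor result as a video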

  • @Whalester
    @Whalester 8 days ago

    I can't seem to get it to work - there is still more noise in my scene than when simply using a normal denoiser node.

    • @Whalester
      @Whalester 8 days ago

      I noticed that when using the debugger, to get my motion colors to show at the proper exposure I have to change the intensity down from 300 to 5. I don't know how to apply this to the non-debugging denoising node.

  • @user-tp3eq8zf1z
    @user-tp3eq8zf1z 6 months ago

    Thanks, but how do I save the temporally denoised frames after compositing them?

    • @MarkStead
      @MarkStead  6 months ago +2

      Yeah sorry about that - all the screen captures just show the Viewer node.
      You need to add a Composite node and connect its Image input.
      Then set your Render Output settings (presumably now rendering out as H.264 using FFmpeg Video), then activate Render Animation (Ctrl+F12).
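      A rough Python equivalent of those steps (the "Temporal Denoise" node name is just a placeholder for whatever your node group is called):

        import bpy

        scene = bpy.context.scene
        tree = scene.node_tree

        # Add a Composite node and feed it the denoised image
        composite = tree.nodes.new("CompositorNodeComposite")
        denoiser = tree.nodes["Temporal Denoise"]   # placeholder name
        tree.links.new(denoiser.outputs["Image"], composite.inputs["Image"])

        # Set the output format in Render Properties (e.g. FFmpeg Video / H.264),
        # then render the animation (Ctrl+F12):
        bpy.ops.render.render(animation=True)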

  • @M_Lopez_3D_Artist
    @M_Lopez_3D_Artist 10 months ago

    Hey, I've been rendering EXR with Blender and I don't see Vector or Noisy Image, and I have those checked in my render passes. Is there something I'm missing?

    • @MarkStead
      @MarkStead  10 months ago

      Check it's saved as a MultiLayer EXR.

    • @M_Lopez_3D_Artist
      @M_Lopez_3D_Artist 10 months ago

      I will do that right now - hope it works, I'll keep you posted. @MarkStead

    • @M_Lopez_3D_Artist
      @M_Lopez_3D_Artist 10 months ago

      I figured it out - it has to be set to the Layer setting instead of Combined. When I set it to Layer it showed all the inputs I was wanting. Awesome @MarkStead

    • @M_Lopez_3D_Artist
      @M_Lopez_3D_Artist 10 months ago

      It works, but how do I use this for a 250-frame animation? @MarkStead

    • @MarkStead
      @MarkStead  10 months ago

      When rendering you render out your animation as MultiLayer EXR, ending up with 250 separate EXR files.
      Then import all the EXR files into a compositor session - importing as an Image Sequence (what I do is click on the first file, then press A to select them all).
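      From Python, loading the sequence might look something like this (a sketch - the path and frame count are placeholders):

        import bpy

        scene = bpy.context.scene
        scene.use_nodes = True
        tree = scene.node_tree

        # Load the first MultiLayer EXR and treat it as an image sequence
        img = bpy.data.images.load("//frames/frame_0001.exr")
        img.source = 'SEQUENCE'

        seq = tree.nodes.new("CompositorNodeImage")
        seq.image = img
        seq.frame_duration = 250   # number of frames in the sequence
        seq.frame_start = 1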

  • @pablog.511
    @pablog.511 5 months ago

    Does this method work with the PNG rendering method? (I render the frames as PNG first, and then combine them in a video editor.)

    • @MarkStead
      @MarkStead  5 months ago

      That's what I demonstrate in the Frame Blending part of the video. In the parts of the frames where there's movement there will be blurring. I guess you could say it's like an unsophisticated motion blur effect.
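      Just to illustrate that plain frame-blending idea with a PNG sequence (no motion compensation - this is the simple averaging from the Frame Blending section; paths and frame counts are placeholders):

        import bpy

        scene = bpy.context.scene
        scene.use_nodes = True
        tree = scene.node_tree

        img = bpy.data.images.load("//png_frames/frame_0001.png")
        img.source = 'SEQUENCE'

        # Previous, current and next frame of the same sequence
        frames = []
        for offset in (-1, 0, 1):
            n = tree.nodes.new("CompositorNodeImage")
            n.image = img
            n.frame_duration = 250
            n.frame_offset = offset
            frames.append(n)

        # Average the three frames: (prev + cur) / 2, then mix in next at 1/3 weight
        mix1 = tree.nodes.new("CompositorNodeMixRGB")
        mix1.inputs["Fac"].default_value = 0.5
        tree.links.new(frames[0].outputs["Image"], mix1.inputs[1])
        tree.links.new(frames[1].outputs["Image"], mix1.inputs[2])

        mix2 = tree.nodes.new("CompositorNodeMixRGB")
        mix2.inputs["Fac"].default_value = 1.0 / 3.0
        tree.links.new(mix1.outputs["Image"], mix2.inputs[1])
        tree.links.new(frames[2].outputs["Image"], mix2.inputs[2])

      Note that with offsets of -1/+1 the first and last frames of the range have no neighbour on one side, so start and end the blended render one frame inside the sequence.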

  • @MrSofazocker
    @MrSofazocker 9 months ago

    How to get more "free" samples in Blender without blending different frames: simply render the same frame with different seeds and combine those.
    Most of the time you can render only a third or half of the samples, which might even be faster than rendering the image once with full samples.

    • @MarkStead
      @MarkStead  9 months ago +1

      I'm not sure that really helps, though it might seem to. Rendering more samples is effectively giving more seed values because each sample has different random properties that result in light rays bouncing differently throughout the scene. In some cases a ray will randomly hit the diffuse colour, and in other cases it does a specular reflection (with a slightly different random bounce angle).

    • @MrSofazocker
      @MrSofazocker 9 months ago

      @MarkStead Please try it - combining 3 "seed renders" of say 500 samples each will give you a better image than rendering it once with 1500 samples, if you get what I mean.
      (I use MACHIN3tools, and it has that built in as a custom operator in the Render menu.)
      When rendering, each sample uses the same seed. If you have ever rendered an animation with a fixed seed, you will notice that the noise stays the same.
      Taking that to the extreme and rendering with only say 20 samples, you will notice that the same pixels are black (not sampled at all) in the first frame as well as in the second frame.
      Now, using the same logic on a still frame and rendering it with only 20 samples but a different seed, other pixels are black (not rendered).
      Of course this difference gets smaller the more samples you start out with, but since we are not rendering with infinite samples, it will improve the clarity at low sample counts.
      It's the same effect as rendering an image at 200% resolution with half the samples: after denoising and downsampling you get a better image, as you gathered more "spatial samples" - one pixel previously is now 4 pixels to sample.

    • @MrSofazocker
      @MrSofazocker 9 months ago

      This does get a little funky since Blender doesn't let you set the rays per pixel, just an overall sample amount (which is pretty dumb); regardless, it still works.

    • @MarkStead
      @MarkStead  9 months ago +2

      Yeah, in an earlier version of Blender (I guess 2.93 and earlier) there was Branched Path Tracing.
      This allowed you to specify how many sub-samples to use for different ray types (e.g. Diffuse, Glossy, Transmission etc). The benefit is that you can increase the samples where it matters - e.g. Glossy or Transmission. Furthermore, I saw it as a way to avoid recalculating all the light bounces from the camera every time.
      However in my testing way back then, I actually got better results using Branched Path Tracing, and setting the sub-samples to 1 only.
      Anyway, if you're getting good results by modifying the seed value - then go for it.
      This is an excellent technique if you render a scene (particularly for a video) and then decide you should have used more samples. Just render again with a different seed and merge the frames.
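      As a rough sketch of that workflow (sample count and paths are placeholders):

        import bpy

        scene = bpy.context.scene
        scene.cycles.samples = 500              # e.g. a third of the original budget

        for seed in range(3):
            scene.cycles.seed = seed            # different noise pattern per render
            scene.render.filepath = f"//seed_renders/frame_seed{seed:02d}"
            bpy.ops.render.render(write_still=True)

      The three stills can then be averaged in the compositor (or any image tool) to approximate a single higher-sample render.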

  • @LiminalLo-fi
    @LiminalLo-fi 6 months ago

    Hey Mark, it looks like you are looking for median denoising: czcams.com/video/851cEK0Taro/video.html - at about 8:00 minutes in he briefly goes over it, so if you have any deeper knowledge on this guy's setup I would love to know!

    • @MarkStead
      @MarkStead  6 months ago +1

      I did try using a median function, but didn't get better results. There's still a median node group implementation in the debug denoiser that you can hook up and try.
      I ended up focusing on what noisy pixels are like, where they might exhibit a different luminosity or a significant color shift. I tried a fancy (or dodgy) algorithm to apply a weighting to the hue, saturation and luminosity differences and exclude samples where the difference exceeds a threshold.
      I'd appreciate any feedback on where you see an improvement using the median function.
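      For reference, the median idea itself is simple: per pixel and per channel, take the median of the previous, current and next (motion-compensated) frames. A standalone sketch with NumPy arrays standing in for the three frames - not the node group itself:

        import numpy as np

        def median_blend(prev_frame, cur_frame, next_frame):
            """Per-pixel, per-channel median of three aligned H x W x C frames."""
            stack = np.stack([prev_frame, cur_frame, next_frame], axis=0)
            return np.median(stack, axis=0)

        # Random data standing in for three motion-compensated frames
        frames = [np.random.rand(1080, 1920, 3).astype(np.float32) for _ in range(3)]
        result = median_blend(*frames)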

    • @LiminalLo-fi
      @LiminalLo-fi 6 months ago

      @MarkStead Will let you know if I come up with anything useful. I am also looking into Blender to Unreal Engine, for its rendering speed.

    • @LiminalLo-fi
      @LiminalLo-fi 6 months ago +1

      @MarkStead So for my current project I am getting a perfect sequence with just a single-pass denoise on each of the 3 frames - running "next" and "previous" into vector displacement, then running those 2 outputs and the output from the "current frame" into your median group, then out. (Just the utility median blend group, not any other parts from your package.)
      I will have to render it and see what it looks like in Premiere, but it already looks cleaner than the averaged-frame method I tried earlier. I mean, it looks really good!

    • @LiminalLo-fi
      @LiminalLo-fi 6 months ago

      My scene is a pretty simple project, not heavily detailed and with minimal objects, so I'm not sure how much that plays into the final result others may have.

  • @dimigaming6476
    @dimigaming6476 10 months ago +2

    this video is much easier to digest at 1.75 speed

    • @MrKezives
      @MrKezives 9 months ago +5

      That's what you have to say after such great content?

    • @dimigaming6476
      @dimigaming6476 9 months ago +2

      @MrKezives You're coming in with a negative mindset. The content/information is great. All I said is that it's easier to digest at a faster speed. Everyone has different methods of learning things. We're all on the same 3D journey here; you have no enemies, brother.

    • @zonaeksperimen3449
      @zonaeksperimen3449 7 months ago

      Thanks dude