Mark Stead
Back from the dead, Fortnite Underground
Came back from 3% health, and a reboot for the win.
Fortnite Battle Royale, Underground (Chapter 5 Season 1)
In the end, our squad (of 3) didn't really make it far from our landing place in Fencing Fields.
Key moments:
00:00 - Landing in Fencing Fields
00:31 - Boss battle
03:48 - Team knocked down, and my health at 3%
04:36 - Battle near the weather station
05:22 - Killed by the one I didn't see
07:10 - Rebooted
08:14 - Battle while running from the storm
08:52 - Snipers in the distance
10:24 - Final battle for the win
37 views

Videos

Temporal Denoising Analysis
7K views · 1 year ago
This video is an analysis of techniques to reduce video noise (specifically temporal noise) in Blender. I have created a Blender Compositor node-based temporal denoiser, which can be downloaded for free from my Gumroad. markstead.gumroad.com/l/TemporalDenoisingToolkit The video shows how to use the node-based temporal denoiser, and the strategies used to reduce noise and overcome problems. Fina...
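The core idea can be reduced to a short sketch. This is a hedged numpy illustration of the general approach only, not the toolkit's actual node logic (the float frame arrays and the threshold value are assumptions): blend each frame with its neighbours, but keep the current pixel wherever a neighbour differs too much (movement, occlusion).

```python
import numpy as np

def temporal_denoise(prev, cur, nxt, threshold=0.1):
    """Average three aligned frames, rejecting mismatched neighbour pixels."""
    samples = [cur]
    for neighbour in (prev, nxt):
        # Where a neighbour pixel differs strongly from the current frame it
        # is probably movement rather than noise, so keep the current pixel.
        diff = np.abs(neighbour - cur).max(axis=-1, keepdims=True)
        samples.append(np.where(diff > threshold, cur, neighbour))
    return np.mean(samples, axis=0)
```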
Ball vs Blocks, Blender physics simulation
231 views · 2 years ago
This animation is a "rigid body" physics simulation created in the open-source software Blender. You assign properties to your objects like mass, friction and bounciness. Blender then applies forces like gravity and calculates the movement and interaction (e.g. collisions) of objects in the scene.
Video sections:
00:00 - Main animation
00:17 - Slow motion (50%)
Music used in this video is "Light...
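As a hedged illustration of that setup in Blender's Python API (the object names "Ball" and "Floor" are placeholders, not necessarily the ones used in this scene):

```python
import bpy

# Active bodies are moved by the physics solver; passive bodies only collide.
ball = bpy.data.objects["Ball"]
bpy.context.view_layer.objects.active = ball
bpy.ops.rigidbody.object_add(type='ACTIVE')
ball.rigid_body.mass = 2.0          # heavier objects hit harder
ball.rigid_body.friction = 0.5
ball.rigid_body.restitution = 0.8   # "bounciness"

floor = bpy.data.objects["Floor"]
bpy.context.view_layer.objects.active = floor
bpy.ops.rigidbody.object_add(type='PASSIVE')
# Gravity is applied scene-wide; Blender then simulates the collisions.
```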
Nvidia Shield TV Remote Battery Replacement
7K views · 3 years ago
This is a quick and easy guide for replacing the batteries in the Nvidia Shield TV 2019 remote control. Unfortunately it is not obvious how to open the remote control battery cover, and it can be difficult to open even when you know how (or think you know how). Complicating matters, I've discovered that there are two different variants of the remote control. (Which drove me nuts when I couldn't...
Blender Trestle Bridge Usage Tutorial
218 views · 3 years ago
In this tutorial I'll explain how to resize and customise the weathered wooden Trestle Bridge 3D model available for download.
This video covers:
00:00 - Installation of the downloaded files and loading into your own scene
00:39 - Resizing the bridge model
01:54 - Moving/reshaping the bridge model
02:37 - Changing the bridge length (number of segments)
03:00 - Controlling texture repetition ...
Blender Trestle Bridge Demo
167 views · 3 years ago
Demo of a weathered wooden Trestle Bridge model made in Blender. This is rendered in Blender using Eevee with a world created using A.N.T.Landscape. The design is loosely based on photos of the Noojee trestle bridge in Australia. This historic railway bridge was built in 1919, then rebuilt after fire in 1939. www.visitmelbourne.com/regions/Gippsland/Things-to-do/History-and-heritage/Heritage-bu...
Camera FOV Clipping for Blender
3.3K views · 3 years ago
Camera FOV Clip is a free Node Group for Blender 3.0 Geometry Nodes (2.92/2.93 are supported in an earlier version). It is designed for when you are scattering many objects throughout your scene with Geometry Nodes, resulting in a scene that is too demanding. The Camera FOV Clip node group automatically clips or hides objects that are outside the camera's field of view. Move or rotate the ...
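The node group itself isn't reproduced here, but the same culling idea can be sketched at the object level with Blender's Python API. A rough, hedged illustration only (the real node group works on scattered instances inside Geometry Nodes):

```python
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera

for obj in scene.objects:
    if obj.type != 'MESH':
        continue
    # Project the object origin into normalized camera space.
    co = world_to_camera_view(scene, cam, obj.matrix_world.translation)
    # x/y in [0, 1] means inside the frame; z > 0 means in front of the
    # camera. A real setup would pad these bounds so large objects near
    # the frame edge aren't culled.
    outside = not (0.0 <= co.x <= 1.0 and 0.0 <= co.y <= 1.0 and co.z > 0.0)
    obj.hide_render = outside
```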
Rainbow Paper - Marker Pen Chromatography
242 views · 3 years ago
Kids, try this yourself. Draw dots of different colours along one side of a sheet of paper towel. Make sure to use water-based marker pens (not permanent ink). Then dip the edge into water so that it soaks in. The water will spread up the page, carrying the colour along. As the colour rises up the page it will separate into its component colours. The science: the water is drawn up the page by...
Soloshot3: Toby at the dog park
196 views · 6 years ago
Video footage captured with a Soloshot3 (motion-tracking base) and Optic65, at 1080p60, 4K30 and 1080p120. Still photos taken with a Canon 7DmkII. Music "Clouds" by Jason Shaw (audionautix.com), licensed under Creative Commons License 3.0. audionautix.com/Music/Clouds.mp3 creativecommons.org/licenses/by/3.0/deed.en_US
Magic Lantern: Canon 7D raw video test - pets
301 views · 6 years ago
Testing out Magic Lantern raw video on a Canon 7D. MLV files converted to CinemaDNG using raw2cdng. Edited with DaVinci Resolve. Video was shot at 1728x786 resolution (full sensor mode).
Coral Peony Time-lapse
6K views · 7 years ago
My second time-lapse of a peony flower opening. Footage was taken in 2010 on a Canon 20D and presented in 1440p. Music is Prelude No. 2 by Chris Zabriskie.
Snowy River Star Time-lapse
200 views · 8 years ago
The change in lighting is due to the moon rising. Footage was taken over 5 hours before my battery went flat.
Crazy Cart
575 views · 8 years ago
Fun in the Razor Crazy Cart.
Lego 8491 - Ram Rod
989 views · 11 years ago
The footage is shown half speed (50fps reduced to 25fps).
Hologram 1
3.1K views · 13 years ago
My first attempt to create a hand-drawn hologram based on the instructions of William J. Beaty. www.amasci.com/amateur/holo1.html They are created by using a compass to scratch curves into a piece of plastic (a CD case). It really is very easy to do. In this hologram, I have heart and star shapes at different depths, hence the star moves across the front of the heart. It took me less than an hou...
Child growth face morph time-lapse (from birth to almost 4)
238K views · 13 years ago
Lover's Falls, Corinna
615 views · 13 years ago
New chicken coop
8K views · 14 years ago
Moonwalking Chicken
2.9K views · 14 years ago
How to fold a light tent
7K views · 14 years ago
Magic Crystal Tree Time-lapse
33K views · 14 years ago
Automatic Chicken Door
153K views · 14 years ago
Peony Time-lapse
64K views · 15 years ago

Comments

  • @Whalester · 12 days ago

    I can't seem to get it to work; there is still more noise in my scene than when simply using a normal denoiser node.

    • @Whalester · 12 days ago

      I noticed when using the debugger that, to get my motion colors to show at the proper exposure, I have to change the intensity down from 300 to 5. I don't know how to apply this to the non-debugging denoising node.

  • @Mioumi · 16 days ago

    Thank you! That's some really good insight.

  • @Essennz · 1 month ago

    What app did you use?

  • @blenderheadxyz2418 · 2 months ago

    wow thanks a lot

  • @Prateekmunjal97 · 2 months ago

    Mf has been using AI for 14 years.

  • @insertanynameyouwant5311 · 3 months ago

    A bit of a dilemma: enabling the vector pass only works when motion blur isn't activated. But I need that too.

    • @MarkStead · 3 months ago

      I didn't know that. I just assumed it was possible. You could do something crazy like render out a sequence with one sample, no motion blur, and the vector pass. The problem you've got is that the vector represents the position at a single point in time, and there's no way to get the range of movement for the visible blur (the blur may not be linear). Maybe when the movement is very low temporal denoising might still make sense; the denoising could then be automatically disabled in the areas of the image with more movement and blur (where it's perhaps less noticeable anyway).

  • @shanekarvin · 3 months ago

    Thanks Mark! This was very helpful.

  • @unrealone1 · 4 months ago

    Reminds me of 911.

  • @SapereAude1490 · 5 months ago

    A few years back, I was tackling this same problem in Vray for 3ds Max. Vray had a tool back then, a standalone vdenoise.exe, in which you could specify how many frames you want to consider before and after. I downloaded the old documentation and found this description of what it takes into account when denoising, and it's quite a lot:
    - Noise level (named noiseLevel) - the denoiser relies heavily on this render element to provide information used during the denoising operation
    - Defocus amount (named defocusAmount)
    - World positions (named worldPositions or wpp)
    - World normals with bump mapping (named worldNormals)
    - Diffuse filter (named diffuseFilter or VRayDiffuseFilter)
    - Reflection filter (named reflectionFilter or VRayReflectionFilter)
    - Refraction filter (named refractionFilter or VRayRefractionFilter)
    I suspect the devs at Vray tried to (or did) model how all of these affect the denoising, and are using some fitted equation with weights based on all of these channels plus the temporal images.
    I was messing around the other day with OpenImageDenoise (oidn) 2.1 and managed to get it running outside of Blender on my AMD GPU. The speedup was roughly 2x on my RX6950XT; however, it only takes in the noisy image, the normal map and the albedo to do the denoise, so I'm thinking Vray has a more sophisticated algorithm.
    But all of this had me thinking. Perhaps a crazy idea, but what if we render double the number of frames, apply the temporal denoising, and then throw away every second frame to get the same FPS? Basically, "oversampling" in the time domain for the purpose of better estimating the changes from frame to frame, and better denoising the image.
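That "temporal oversampling" idea can be sketched in a few lines. A hedged illustration, not anything from the video: the imageio library, the file paths, the frame count and the naive three-frame average are all assumptions.

```python
import imageio.v3 as iio
import numpy as np

# Load a sequence rendered at double the target frame rate (paths assumed).
frames = [iio.imread(f"render/frame_{i:04d}.png").astype(np.float32)
          for i in range(100)]

# Naive temporal filter: average each frame with its two neighbours.
denoised = [(frames[i - 1] + frames[i] + frames[i + 1]) / 3.0
            for i in range(1, len(frames) - 1)]

# Throw away every second frame to get back to the target FPS.
for n, img in enumerate(denoised[::2]):
    iio.imwrite(f"out/frame_{n:04d}.png", np.clip(img, 0, 255).astype(np.uint8))
```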

  • @rami22958 · 5 months ago

    Now that I have finished creating the node, do I have to convert it to an image again, or can I convert it directly to a video? Please reply.

    • @MarkStead · 5 months ago

      👍 You can output as a video file. Just going from memory, you would (1) connect the denoised image to the Composite node and then configure the output settings in the normal place, or alternatively (2) use the File Output node and specify the output settings in the node properties (N panel). Output using FFmpeg Video with MPEG-4/AV1/etc.
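For option (1), the equivalent output settings via Blender's Python API look roughly like this (a hedged sketch; the output path is a placeholder):

```python
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'   # container
scene.render.ffmpeg.codec = 'H264'     # codec (AV1 etc. also available)
scene.render.filepath = "//denoised_output.mp4"  # placeholder path
bpy.ops.render.render(animation=True)
```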

  • @pablog.511 · 5 months ago

    Does this method work when rendering to PNG? (I render the frames as PNGs first, and then combine them in a video editor.)

    • @MarkStead · 5 months ago

      That's what I demonstrate in the Frame Blending part of the video. In the parts of the frames where there's movement there will be blurring. I guess you could say it's like an unsophisticated motion blur effect.

  • @user-tp3eq8zf1z · 6 months ago

    Thanks, but how do I save the temporally denoised frames after compositing them?

    • @MarkStead · 6 months ago

      Yeah, sorry about that - all the screen captures just show the Viewer node. You need to add a Composite node and connect the Image input. Then set your Render Output settings (presumably now rendering out as H.264 using FFmpeg Video), then activate Render Animation (Ctrl+F12).
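The same wiring, sketched with the Python API (the node group name "Group" is an assumption about the reader's setup):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

denoised = tree.nodes["Group"]  # the temporal denoiser node group
comp = tree.nodes.new("CompositorNodeComposite")
tree.links.new(denoised.outputs["Image"], comp.inputs["Image"])
# Render Animation (Ctrl+F12) then writes the composited frames.
```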

  • @djdog465 · 6 months ago

    wow you are such a pronumeral

  • @LiminalLo-fi · 6 months ago

    Hey Mark, it looks like you are looking for median denoising. czcams.com/video/851cEK0Taro/video.html - at about 8:00 minutes in he briefly goes over it, so if you have any deeper knowledge on this guy's setup I would love to know!

    • @MarkStead · 6 months ago

      I did try using a median function, but didn't get better results. There's still a median node group implementation in the debug denoiser that you can hook up and try. I ended up focusing on what noisy pixels are like, where they might exhibit a different luminosity or a significant color shift. I tried a fancy (or dodgy) algorithm to apply a weighting to the hue, saturation and luminosity differences and exclude samples where the difference exceeds a threshold. I'd appreciate any feedback on where you see an improvement using the median function.
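For readers following along, the median idea under discussion is, in essence, a per-pixel median across neighbouring frames. A minimal sketch (numpy float frame arrays assumed):

```python
import numpy as np

def median_of_three(prev, cur, nxt):
    # The median rejects outlier pixels (e.g. fireflies) that appear in only
    # one of the three frames, at the cost of some temporal smearing.
    return np.median(np.stack([prev, cur, nxt]), axis=0)
```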

    • @LiminalLo-fi · 6 months ago

      @MarkStead Will let you know if I come up with anything useful. I am also looking into Blender to Unreal Engine, for its rendering speed.

    • @LiminalLo-fi · 6 months ago

      @MarkStead So for my current project I am getting a perfect sequence with just a single-pass denoise on each of the 3 frames: running "next" and "previous" into vector displacement, then running those two outputs and the output from the "current frame" into your median group, then out. (Just the utility median blend group, not any other parts from your package.) I will have to render it and see what it looks like in Premiere, but it already looks cleaner than the averaged-frame method I tried earlier. I mean it looks really good!

    • @LiminalLo-fi · 6 months ago

      My scene is a pretty simple project, not heavily detailed and with minimal objects, so I'm not sure how much that plays into the final result others may get.

  • @user-vz2tn8jd7q · 8 months ago

    Awesome work! You deserve more views!

  • @himanshukatariaartist · 8 months ago

    How can I create such videos?

  • @udbhavshrivastava · 8 months ago

    This was such a thorough analysis! Appreciate the good work, mate.

  • @aulerius · 9 months ago

    Do you know if there is any way to minimize the occlusion masks including edges of objects, even when they are stationary? Does it have something to do with aliasing in the render? I am using your techniques for a different purpose (projection-mapping textures onto moving scenes, to distinguish occluded regions and inpaint them).

    • @MarkStead · 9 months ago

      Have you looked at Cryptomatte? At one point I was trying to use the Cryptomatte node to distinguish between different objects. The problem is that it is designed to be used with a Matte selection - so then I tried to understand how the raw Cryptomatte render layer was structured, referring to this document: raw.githubusercontent.com/Psyop/Cryptomatte/master/specification/IDmattes_poster.pdf
      However, it was an impossible task for me, since there is no unique object ID for a given pixel position. Specifically, the Cryptomatte data represents all the source objects that create the pixel (including reflections, anti-aliasing, transparency, motion blur) and a weighting for each. If you're able to make a Cryptomatte selection for the occluded region, then this should give you a mask with properly anti-aliased edges.
      However (not that I understand your project exactly), perhaps you could also be looking at the Shader nodes and rendering those faces with emission and everything else transparent (perhaps using a material override for the whole scene). You might be able to use Geometry Nodes to calculate the angles to the projector to give you an X/Y coordinate. GN could also calculate the facing angle and therefore the level of illumination falloff (or whether a face is occluded completely).

  • @MrSofazocker · 9 months ago

    How to get more "free" samples in Blender without blending different frames: simply render the same frame with a different seed and combine the results. Most of the time you can render with only a third or half the samples, which might even be faster than rendering the image once with full samples.

    • @MarkStead · 9 months ago

      I'm not sure that really helps, though it might seem to. Rendering more samples is effectively giving more seed values because each sample has different random properties that result in light rays bouncing differently throughout the scene. In some cases a ray will randomly hit the diffuse colour, and in other cases it does a specular reflection (with a slightly different random bounce angle).

    • @MrSofazocker · 9 months ago

      @MarkStead Please try it: combining 3 "seed renders" with, say, 500 samples will give you a better image than rendering once with 1500 samples, if you get what I mean. (I use M4CHIN3 tools, and he has that built in as a custom operator in the Render menu.)
      When rendering, each sample uses the same seed. If you have ever rendered an animation with a fixed seed, you will notice that the noise stays the same. Taking that to the extreme and rendering with only, say, 20 samples, you will notice the same pixels are black (not sampled at all) in the first frame as well as in the second frame. Now, using the same logic on a still frame and rendering it with only 20 samples but a different seed, other pixels are now black (not rendered). Of course this difference gets smaller the more samples you start out with, but since we are not rendering to infinite samples, it will improve the clarity at low sample counts.
      It's the same effect as rendering an image at 200% resolution and half the samples: after denoising and downsampling you get a better image, as you gathered more "spatial samples" - what was previously one pixel is now 4 pixels to sample.

    • @MrSofazocker · 9 months ago

      This does get a little funky since Blender doesn't let you set the rays per pixel, just an overall sample amount (which is pretty dumb); regardless, it still works.

    • @MarkStead · 9 months ago

      Yeah, in an earlier version of Blender (I guess 2.93 and earlier) there was Branched Path Tracing. This allowed you to specify how many sub-samples to use for different rays (e.g. Diffuse, Glossy, Transmission etc.), so the benefit is that you can increase the samples where it matters - e.g. Glossy or Transmission. Furthermore, I guess I saw it as a way where you didn't need to recalculate all the light bounces from the camera every time. However, in my testing way back then, I actually got better results using Branched Path Tracing and setting the sub-samples to 1 only. Anyway, if you're getting good results by modifying the seed value, then go for it. This is an excellent technique if you render a scene (particularly for a video) and then decide you should have used more samples: just render again with a different seed and merge the frames.
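A hedged sketch of the seed-merging workflow discussed in this thread, using Blender's Python API. It assumes the Cycles engine; the /tmp paths and seed count are placeholders, and mixing the renders in the compositor would work equally well:

```python
import bpy
import numpy as np

scene = bpy.context.scene
scene.render.image_settings.file_format = 'OPEN_EXR'  # float pixels average cleanly

# Render the same frame three times with different Cycles seeds.
paths = []
for seed in range(3):
    scene.cycles.seed = seed
    scene.render.filepath = f"/tmp/seed_{seed}.exr"
    bpy.ops.render.render(write_still=True)
    paths.append(scene.render.filepath)

# Average the renders pixel by pixel.
imgs = [bpy.data.images.load(p) for p in paths]
merged = np.mean([np.array(img.pixels[:]) for img in imgs], axis=0)

out = bpy.data.images.new("merged", imgs[0].size[0], imgs[0].size[1],
                          float_buffer=True)
out.pixels[:] = merged.tolist()
out.filepath_raw = "/tmp/merged.exr"
out.file_format = 'OPEN_EXR'
out.save()
```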

  • @BlaBla-sf8pj · 9 months ago

    thx for your help

  • @c0nstantin86 · 10 months ago

    I need year and month stamps on each photo. I'm trying to figure out where my first memories before age 4 came from, and in what order.

    • @MarkStead · 10 months ago

      Good suggestion. I've added timestamps to the subtitles. I hope it helps.

    • @c0nstantin86 · 10 months ago

      @MarkStead Thank you... it helped a lot... so unlike my older brother, my earliest photo of me is when I was 1.2 years old, when my grandma tried to show my parents that my arm had regenerated since birth and I had no remaining defects... that's why I have no earlier memories of them... that's why they behave so badly with me... that's why they sent me to the mental hospital when I tried to become a monk... that's why they were so concerned with keeping my mind busy with marriage and a job... so I wouldn't stop to try to remember everything and figure out their lies... 😢

  • @traces2807 · 10 months ago

    These time-lapse video 'diaries' are so beautiful and emotive. Incredible. There is one called 'Portrait of Lotte, age 0 to 20' that is worth watching. Lump in my throat every time. Our babies grow up far too fast. ❤

  • @M_Lopez_3D_Artist · 10 months ago

    Hey, I've been rendering EXR with Blender and I don't see Vector or Noisy Image, and I have those checked in my render passes. Is there something I'm missing?

    • @MarkStead · 10 months ago

      Check it's saved as a MultiLayer EXR.

    • @M_Lopez_3D_Artist · 10 months ago

      @MarkStead I will do that right now, hope it works. I'll keep you posted.

    • @M_Lopez_3D_Artist · 10 months ago

      @MarkStead I figured it out: it has to be set to the Layer setting instead of Combined. When I set it to Layer it showed all the inputs I was wanting. Awesome!

    • @M_Lopez_3D_Artist · 10 months ago

      @MarkStead It works, but how do I use this for a 250-frame animation?

    • @MarkStead · 10 months ago

      When rendering, you render out your animation as MultiLayer EXR, ending up with 250 separate EXR files. Then import all the EXR files into a compositing session, importing them as an Image Sequence (what I do is click on the first file, then press A to select them all).
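The same import can be done from Python. A hedged sketch (the file path and frame count are placeholders):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Load the first file, then treat it as a sequence.
img = bpy.data.images.load("//renders/frame_0001.exr")
img.source = 'SEQUENCE'

node = tree.nodes.new("CompositorNodeImage")
node.image = img
node.frame_duration = 250  # number of frames in the sequence
node.frame_start = 1
```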

  • @matejivi · 10 months ago

    Thank you! A shadow pass would be nice indeed.

  • @dimigaming6476 · 10 months ago

    this video is much easier to digest at 1.75 speed

    • @MrKezives · 10 months ago

      That's what you have to say after such great content?

    • @dimigaming6476 · 9 months ago

      @MrKezives You're coming in with a negative mindset. The content/information is great. All I said is that it's easier to digest at a faster speed. Everyone has different methods of learning things. We're all on the same 3D journey here; you have no enemies, brother.

    • @zonaeksperimen3449 · 7 months ago

      Thanks dude

  • @kriskauf3980 · 1 year ago

    I legit thought this was some high-tech battery-less remote. Thanks!

  • @thesammyjenkinsexperience4996

    Exactly what I needed. Thank you sir!

  • @siufa23 · 1 year ago

    Thanks Mark, this is a great explanation. Do you think it's possible to automate the denoise process with a Python script from the command line, without the need to open Blender?

    • @MarkStead · 1 year ago

      I personally haven't done that. Here's the command line doco - you can certainly perform rendering and run Python scripts: docs.blender.org/manual/en/latest/advanced/command_line/arguments.html
      If you have a Blender file configured for compositing then you could presumably just render that from the command line, with no Python scripting required. Perhaps what you could do from a Python script is substitute node parameters for the filenames or the number of frames. You should be able to fully integrate Python with pretty much anything in Blender, including adding/manipulating compositing nodes. For example, in Blender if I modify the frame offset in the compositor, I can see in the Scripting window that it has executed this command: bpy.data.scenes["Scene"].node_tree.nodes["Image"].frame_offset = 1
      Obviously you have a lot of extra complexity in setting up scripts and all the command line parameters. However, it makes sense when you're trying to configure an automated rendering pipeline. Does that help?
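Putting those pieces together, a hypothetical batch script might look like this. The blend file name, scene name, node name and frame range are all placeholders, not values from the video:

```python
# denoise_batch.py - run headless with:
#   blender -b temporal_denoise.blend -P denoise_batch.py -a
# (-b renders without the UI, -P runs this script first, -a then renders
# the animation through the compositor)
import bpy

scene = bpy.data.scenes["Scene"]
# Substitute per-job parameters before the render starts, e.g. the frame
# offset shown in the comment above:
scene.node_tree.nodes["Image"].frame_offset = 1
scene.frame_start = 1
scene.frame_end = 250
scene.render.filepath = "//denoised/"
```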

  • @Szzachraj · 1 year ago

    Cool video; the clear explanation helped me with my decision.

  • @leonarddoublet1113 · 1 year ago

    Thanks for the video Mark - a lot of clear detailed work to explain the process and functions. I appreciate it.

  • @djdog465 · 1 year ago

    cool video dad

  • @stefyguereschi · 1 year ago

    CORAL PEONY, WHAT A BEAUTIFUL COLOR👏👏👏

  • @stefyguereschi · 1 year ago

    PEONY FLOWERS SO SWEET, BEAUTIFUL 😊😊🎉🎉

  • @michaelfaith · 1 year ago

    Just what I needed. Thanks!

  • @fcweddington · 1 year ago

    Very nice! Just purchased, and it's working well. However, how do I add vertices to that spline?

    • @MarkStead · 1 year ago

      When in edit mode you can extrude from the vertex on either end; alternatively, you can subdivide one or more vertices.

    • @fcweddington · 1 year ago

      @MarkStead Excellent! Got it! Man, this is an absolute genius of a product. Thank you again so much. Have you ever thought about making such tools for Unity 3D? Their asset store is incredible; however, I couldn't find anything with such bridge creation.

  • @Mark01962 · 1 year ago

    Thanks for this video. Other posts only show the solution for one of the remotes, which wasn't mine. Mine was the second one.

  • @nekomander6 · 1 year ago

    I wish we could see her again. Bet she's 16 now, like me!! 😅

  • @garden-22 · 1 year ago

    Awesome

  • @darthslayerbricks · 1 year ago

    you should redo this now! 😂

  • @mamooudagha62 · 1 year ago

    Peace. Beautiful, for gentle and loving hearts.

  • @mamooudagha62 · 1 year ago

    There is no god but You; glory be to You. Truly, I have been among the wrongdoers.

  • @JoanTravels_World · 1 year ago

    Is she 15 now? 😱

  • @rhananane · 1 year ago

    I only like the baby parts. They're so cute.

  • @kerimkoc3538 · 1 year ago

    I was searching for this video. What is the name of the application or program?

  • @fatimacristina1148 · 2 years ago

    I love it so much.

  • @Meowanna420 · 2 years ago

    Not helpful whatsoever. 😒

    • @MarkStead · 2 years ago

      Perhaps if you explain your problem I might be able to help.

  • @eremitamatos8696 · 2 years ago

    Did you see where it is? I couldn't even see properly where it is.

  • @fdss_channel6631 · 2 years ago

    Congratulations on the video! 0 to 7 years old: czcams.com/video/2t6NNmgJAhU/video.html

  • @MichaelOshinsMom · 2 years ago

    Can you tell me what program you used to create this? Is this done in After Effects?

    • @MarkStead · 2 years ago

      It was made more than 10 years ago using WinMorph on a PC. I understand that can also be used as a plugin for Premiere and After Effects. www.debugmode.com/winmorph/ The reason why I used WinMorph is because I could define control edges around features (eyes, nose, lips, face etc) to have direct control over the morph transitions. I imagine there are mobile apps that can do similar - possibly even using AI to automatically match facial features. I have not looked into this, so I don't have any recommendation.

    • @hashirhasankc · 2 years ago

      @MarkStead Time flies, right?! An 11-year-old video.

  • @darkeyezs · 2 years ago

    Nice, quick and to the point. It's a new v2 remote and I couldn't get the back off, so I had to make sure I was doing it correctly.