3D Gaussian Splatting - Why Graphics Will Never Be The Same

  • uploaded 22 May 2024
  • 3D Gaussian Splatting explained
    original research paper: huggingface.co/papers/2308.04079
    twitter (more research): / dylan_ebert_
    tiktok (more games/tutorials): / individualkex
    tags: 3d graphics, rasterization, gaussian splatting, 3d gaussian splatting

Comments • 2.7K

  • @Johnnyjawbone • 8 months ago • +11409

    No nonsense. No filler. No BS. Just pure information. Cheers dude.

    • @NeuralSensei • 8 months ago • +86

      Editing probably took hours

    • @budgetarms • 8 months ago • +11

      Yes yes yes

    • @realtimestatic • 8 months ago • +58

      Not enough explanation for me, but pretty short, cut down, and barebones. Although I’d like to understand the math a bit better

    • @CrizzyEyes • 8 months ago • +50

      Only problem is he seems to ignore the fact that this method requires exhaustive amounts of image/photo input, limiting its application especially for stylized scenes/games, and uh... Doom didn't have shadows so I have no idea what he's smoking.

    • @pazu_513 • 8 months ago • +11

      probably the best-format YouTube video I've ever seen.

  • @william_williams • 7 months ago • +2113

    That "unlimited graphics" company Euclideon was working with tech like this at least a decade ago. I think the biggest pitfall with this tech right now is that none of it is really dynamic: we don't have player models or entities or animations. It's a dollhouse without the dolls. That's why this tech is usually used in architecture and surveying, not video games. I'm excited to see where this technique can go if we have people working to fix its shortcomings.

    • @comeontars • 7 months ago • +58

      Yeah I was gonna say this magic tech sounded familiar

    • @shloop. • 7 months ago • +98

      I wonder how easily it can be combined with a traditional pipeline. It would be kind of like those old games with pre-rendered backgrounds except that the backgrounds can be rendered in real time from any angle.

    • @DuringDark • 7 months ago • +18

      @@shloop. I wondered whether you could convert a splatted scene into traditional 3D, but I realised mapping any of this to actual, individual textures and meshes would probably be a nightmare.
      Maybe you could convert animated models to gaussians realtime for rendering, and manually create scene geometry for physics?
      For lighting, I imagine RT would be impossible as each ray intersection would involve several gaussian probabilities.
      As a dilettante I think it's too radical for game engines and traditional path-tracing is too close to photorealism for it to make an impact here

    • @der.Schtefan • 7 months ago • +11

      Not everything needs to be a game. There are people dying of Cholera RIGHT NOW!

    • @ntz752 • 7 months ago • +191

      @@der.Schtefan This won't help with that though

  • @Jeracraft • 8 months ago • +3889

    I've learned so much, but so little at the same time 😂

    • @jakekeltoncrafts • 8 months ago • +14

      My feelings exactly!

    • @AlexanderHuzar • 8 months ago • +41

      "The more you know, the more you realize how much you don't know." -Einstein

    • @Corvx • 8 months ago • +36

      Yeah, that's the problem with zoomer attention spans. Content/information needs to be shortened nowadays into minuscule clips. There are other, lengthier videos about this topic, but sadly they get fewer views because of said problem.

    • @imatreebelieveme6094 • 8 months ago • +25

      If you want to actually understand this stuff you have to sit down with several papers full of university-level math for a few hours, and that's if you already have an education in this level of math you can draw on. Generally if you feel like a popsci video explains a lot without really explaining anything the reason is that they skipped the math.
      TL;DR: Learn advanced maths if you want to understand sciency shit.

    • @aidanwalter2823 • 8 months ago • +13

      @@imatreebelieveme6094 There’s a spectrum. On one end, there are 2 minute videos like this, and on the other end is what you are talking about. I think there can be a happy medium, with medium to long form videos that explain enough about a topic to understand it at a basic level.

  • @wankertosseroath • 8 months ago • +276

    It could work really well for spatial film making, but for 3D interactive game-engine based applications, it might be an optimisation nightmare to real-time move a bush somewhere when it's made of 3 million gaussians.

    • @the-coop • 5 months ago • +9

      Tools like Blender and After Effects already solve this. Why does the editor need to move 3 million gaussians? It can process that separately; it only needs the instruction.

  • @x1expert1x • 8 months ago • +6417

    this man can condense a 2 hour lecture into 2 minutes. subbed

    • @trombonemunroe • 8 months ago • +3

      Same

    • @ronmka8931 • 8 months ago • +82

      Yeah and i didn’t understand a thing

    • @gigiopincio5006 • 8 months ago • +18

      it's in the description, where it says "original paper"

    • @acasccseea4434 • 8 months ago • +15

      he missed the most important part: you need to train the model for every scene

    • @SolarScion • 8 months ago • +9

      @@acasccseea4434 It's implicit. He started with the fact that you "take a bunch of photos" of a scene. The brevity relies on maximum extrapolation by the viewer. The only reason I understood this (~90+%) was because I'm familiar with graphics rendering pipelines.

  • @Q_20 • 8 months ago • +928

    This requires scene-specific training with precomputed ground truths. If it could be used independently for real-time rasterization, that would be a big breakthrough in the history of computer graphics and light transport.

    • @astronemir • 8 months ago • +50

      Yeah but imagine photorealistic video games, they could be made from real scenes created in a studio, or from miniature scenes..

    • @xormak3935 • 8 months ago • +144

      @@astronemir Miniature scenes ... we improved virtual rendering of scenes to escape the need for physical representations of settings, now we go all the way back around and build our miniature sceneries to virtualize them. Wild.

    • @noobandfriends2420 • 8 months ago • +39

      @@xormak3935 Or train on virtual scenes.

    • @StitchTheFox • 8 months ago • +26

      @@xormak3935 it won't always be like that. We are still developing these technologies. This kind of stuff didn't exist 2 years ago, and many thought AI was just a dream until less than 8 years ago.
      By 2030 I'm sure we will be playing video games that look like real recordings of real life

    • @DoctorMandible • 8 months ago • +13

      Train it on a digital scene

  • @GotYourWallet • 8 months ago • +274

    In the current implementation it reminds me of prerendered backgrounds. They looked great but their use was often limited to scenes that didn't require interaction like in Zelda: OOT or Final Fantasy VII.

    • @robotba89 • 8 months ago • +18

      My thoughts exactly. Ok, well now do the devs have to build a 3D model to lay "under" this "image" so we can interact with stuff? And what happens when you pick up something or walk through a bush? How well can you actually build a game with this tech?

    • @Nerex7 • 8 months ago • +9

      The funniest thing about Ocarina of Time was: there was no background. At least, not really.
      What they did was create a small bubble around the player that shows this background. There is a way to go outside of the bubble and see the world without it. I bet there are some videos on that; it's very fun and interesting.

    • @gumbaholic • 8 months ago • +3

      @@Nerex7 Are you talking about the 3D part of the OOT world or do you refer specifically to the market place in the castle with its fixed camera angles?

    • @Nerex7 • 8 months ago • +1

      I'm talking about the background of the world, outside of the map (as well as the sky). It's all around the player only. @@gumbaholic It's referred to as a skybox, iirc.

    • @gumbaholic • 8 months ago • +6

      @@Nerex7 I see. And sure OOT has a skybox. But that's something different than pre-rendered backgrounds. It's like the backgrounds from the first Resident Evil games. It's the same for the castle court and the front of the Temple of Time. Those are different from the skybox :)

  • @turnipslop3822 • 4 months ago

    Man this was so good, please do more stuff. This content scratches an itch in my brain I didn't know I had. So so good.

  • @MenkoDany • 8 months ago • +1696

    One could say this technique is an evolution of point clouds. What most analyses I've seen/read are missing is that the main reason this exists now is that we finally have GPUs fast enough to do it. It's not like they're the first people who looked at point clouds and thought "hey, why can't we fill the spaces *between* the points?" EDIT: I thought I had watched to the end of the video, but I hadn't; the author addresses this at the end :) It's not just VRAM though! It's rasterization + alpha blending performance
    EDIT2: you know what I realised after reading all the new things coming out about gaussian splatting. I think most likely this technique will first be used as background/skybox in a hybrid approach
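The "rasterization + alpha blending" cost mentioned here is, per pixel, a front-to-back composite over depth-sorted splats. A minimal single-pixel sketch in Python (grayscale color, per-splat depth and opacity assumed already computed; all names are illustrative, not the paper's actual CUDA kernels):

```python
# Front-to-back alpha compositing of depth-sorted splats for one pixel.
# Each splat contributes color weighted by its opacity and by how much
# light still passes through everything in front of it (transmittance).
def composite(splats):
    """splats: iterable of (depth, color, alpha), color/alpha in [0, 1]."""
    color = 0.0
    transmittance = 1.0
    for depth, c, a in sorted(splats, key=lambda s: s[0]):  # near to far
        color += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early exit once effectively opaque
            break
    return color, transmittance
```

The per-frame sort plus this per-pixel blend loop, run over millions of splats, is exactly the workload that only recent GPUs handle at real-time rates.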

    • @zane49er51 • 8 months ago • +31

      I was one of those people and ran into overdraw issues. I can't imagine how vram is the limiting factor rather than the sort and alpha blend.

    • @MenkoDany • 8 months ago • +7

      @@zane49er51 How long do you think before we have sensible techniques for animation, physics, lighting w/ gaussian splatting/NeRF-alikes?

    • @TheRealNightShot • 8 months ago • +55

      Just because we have gpus that are powerful enough now, doesn’t mean devs should go and cram this feature in mindlessly like they are doing for all other features recently, completely forgetting about optimization and completely crippling our frame rate.

    • @imlivinlikelarry6672 • 8 months ago • +18

      How could one fit this kind of rasterization into a game, if possible? This whole gaussian thing is going completely over my head, but would it even be possible for an engine to use this kind of rasterization while having objects in a scene, that can be interactive and dynamic? Or where an object itself can change and evolve? So far everything is static with the requirement of still photos...

    • @zane49er51 • 8 months ago • +38

      @@MenkoDany I have not worked for any AAA companies and was investigating the technique for indie stylized painterly renders (similar to 11-11 Memories Retold)
      If you build the point clouds procedurally instead of training on real images, it is possible to get blotchy brush-stroke like effects from each point. This method is also fantastic for LODs with certain environments because the cloud can be sampled less densely at far away locations, resulting in fewer, larger brush strokes. In the experiment I was working on I got a grass patch and a few flowery bushes looking pretty good before I gave up because of exponential overdraw issues. Culling interior points of the bushes and adding traditional meshes under the grass where everything below was culled helped a bit but then it increased the feasible range to like a 15m sphere around the camera that could be rendered well.
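The LOD idea described above, sampling the cloud less densely far from the camera so distant areas become fewer, larger strokes, can be sketched as a simple random thinning. The thresholds and function names below are made up for illustration:

```python
import random

def lod_sample(points, camera, near=5.0, far=50.0, keep_far=0.1):
    """Keep all points within `near` of the camera; beyond that, keep a
    fraction that falls off linearly with distance, down to `keep_far`
    at `far` and beyond. points/camera are (x, y, z) tuples."""
    kept = []
    for p in points:
        d = sum((a - b) ** 2 for a, b in zip(p, camera)) ** 0.5
        t = min(max((d - near) / (far - near), 0.0), 1.0)
        if random.random() < 1.0 - t * (1.0 - keep_far):
            kept.append(p)
    return kept
```

A real renderer would thin deterministically (e.g. by picking a precomputed level in an octree) so the selection stays stable between frames, but the density falloff is the same idea.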

  • @DonCrafts1 • 8 months ago • +5966

    Concept is cool, but your editing is really great! Reminds me of Bill Wurtz :D

  • @SteveAcomb • 8 months ago • +2

    Perfect video on an absolutely insane topic. I’d love more bite-sized summaries like this on graphics tech news!

  • @sethbyrd7861 • 7 months ago • +1

    Love your editing and humor!

  • @why_i_game • 8 months ago • +448

    Would be great for something like Myst/Riven with premade areas and light levels. It doesn't sound like this method offers much for dynamic interactivity (which is my favourite thing in gaming). It would be great for VR movies/games with a fixed scene that you can look around in.

    • @iamlordstarbuilder5595 • 8 months ago • +16

      I just realized it's also probably not great for rendering unreal scenes like a particular one I have trapped in my head.

    • @ZackMathissa • 8 months ago • +11

      @@iamlordstarbuilder5595 Yeah, no dynamic lighting :(

    • @peoplez129 • 8 months ago • +10

      You don't really need to render different lighting, you can merely shift an area of the gaussian to become brighter or darker or of a different hue, based on a light source. So flashlights would behave more like a photo filter. Since the gaussians are sorted by depth, you already have a simple way to simulate shadows from light sources.

    • @theteddychannel8529 • 8 months ago • +57

      @@peoplez129 hmmm, but that's not how light works. It doesn't just make an area go towards white, there's lots of complex interactions to take into account.

    • @archivethearchives • 8 months ago • +10

      I was just thinking this. Wouldn’t it be nice if computer gaming went full circle with adventure games like this becoming a living genre again?

  • @yahyah.... • 8 months ago • +602

    the highly technical rapid fire is just what I need.
    the editing and the graphic design are just great, keep it coming, I love it!

    • @Igzilee • 7 months ago

      While being way too technical for any normal person to understand. He immediately alienates the majority of people by failing to explain 90% of what is actually happening.

  • @isaacroberts9089 • 6 months ago

    Congrats on making like a solidly informationally dense video, we like this!

  • @tubetomarcato • 6 months ago

    most compelling way to present, my recall is much higher from the way you make it so entertaining. Kudos!

  • @MitchellPontius • 8 months ago • +860

    Wow. Never seen someone fit so much information in such a short timeframe while keeping it accurate and especially easy to take in. Way to go!

    • @pavelkalugin4537 • 8 months ago • +3

      Two minute papers is close

    • @xhbirohx2214 • 8 months ago • +1

      fireship is close

    • @pvic6959 • 8 months ago • +5

      hes the bill wurtz of graphics LOL

    • @philheathslegalteam • 8 months ago • +1

      We have done this for 12 years already. It’s not applicable to games. Everyone trashed Euclideon when this was initially announced, and now because someone wrote a paper everyone thinks it’s a new invention…

    • @MinerDiner • 8 months ago • +1

      Clearly you haven't seen "history of the entire world, i guess" by Bill Wurtz. This video feels heavily inspired by it.

  • @SAGERUNE • 8 months ago • +253

    When people begin to do this on a larger scale, and with animated elements, perhaps video, I'll pay attention. If they can train the trees to move and the grass to sway, that will be extremely impressive; the next step is reactivity, which will blow my mind the most. I don't see it happening for a long time.

    • @yuriythebest • 8 months ago • +74

      exactly. these techniques are great for static scenes/cgi, but these scenes will be 100% static with not even a leaf moving, unless each item is captured individually or some new fancy AI can separate them, but the "AI will just solve it" trope can be said about pretty much anything, so for now it's a cool demo

    • @drdca8263 • 8 months ago • +3

      @@yuriythebestIs there any major obstacle to like, doing this process on two objects separately, and then like, taking the unions of the point clouds from the two objects, and varying the displacements?

    • @somusz159 • 8 months ago • +5

      ​@@yuriythebestYeah, and the wishful thinking exhibited in that cliche is likely really bad for ML. Overshilling always holds AI back at some point, think of the 80s.

    • @theteddychannel8529 • 8 months ago • +12

      @@drdca8263 if i'm thinking about this in my head, one thing i can think of is that the program has no idea which two points are supposed to correspond, so stuff would squeeze and shift while moving.

    • @Rroff2 • 8 months ago • +3

      Yup - as soon as any object or source of light moves you'll need ray tracing (or similar) to correctly light the scene and that is when a lot of the realism starts to break down.

  • @NapalmNarcissus • 6 months ago

    You deserve every view and sub you got from this. Amazing editing, quick and to the point.

  • @northofbrandon • 7 months ago

    Great content, dig your delivery style

  • @rodrigoff7456 • 8 months ago • +16

    I like the calm and paced approach to explaining the technique.

  • @jrodd13 • 8 months ago • +156

    This dude is the pinnacle and culmination of gen z losing their attention span

    • @neocolors • 7 months ago • +15

      I've watched it on double speed

    • @ayebraine • 7 months ago • +9

      I'm 40, and I hate watching videos instead of reading a text. Even if it's a 2 minute video on how to open the battery compartment (which is, frankly, a good use case for video). I really don't want to wait until someone gets to the point, talks through a segue, etc. This is closer to reading, very structured and fast. Wouldn't equate it with short attention span.

    • @wcjerky • 7 months ago • +1

      Ahh yes, another case of the older generation hating on the younger generation. I remember the exact same said about Millennials and Gen Y.
      A missed opportunity for unity and imparting useful lessons. Please see past your hubris and use the experience to create.

    • @jrodd13 • 7 months ago • +8

      @wcjerky bro I am the younger generation hating on the younger generation 💀

    • @Woodside235 • 7 months ago • +3

      I'm not sure I agree. This video to me is like an abstract high level of the concept. It gets to the point.
      This is in stark contrast to a bunch of tech videos that stretch to the 10 minute mark just for ads, barely ever making a point.
      It's a good summarization. Saving time when trying to sift through information does not necessarily equate to short attention span.

  • @spooky6oo • 8 months ago • +1

    Quick and informative and I love your editing style

  • @cosmic_gate476 • 5 months ago

    Has to be the cleanest and most efficient way I got tech news on youtube. Keep it up my dude

  • @Koscum • 8 months ago • +56

    Very much a niche and limited method that will become more practical once it gets integrated into a more traditional rendering pipeline. It's similar to path tracing: still very much impractical for full-scene rendering, but a great tool when some scope limitations are applied and it's used to augment the existing rendering model instead of replacing it.

    • @StarHorder • 8 months ago • +2

      yeah, this doesn't look useful for anything that is stylized.

  • @ThereIsNoRoot • 8 months ago • +244

    Please continue to make videos like this that are engaging but also technical. From a software engineer and math enthusiast.

    • @gigabit6226 • 8 months ago • +2

      I recognize that default apple profile icon!

  • @jacquesbroquard • 20 days ago

    This was amazing. Thanks for the humorous take. Keep going!

  • @AuraMaster7 • 7 months ago • +295

    If this takes off and improves I could see it being used to create VR movies, where you can walk around the scene as it happens

    • @SupHapCak • 7 months ago • +26

      So kind of like eavesdropping but the main characters won’t notice and beat you for it

    • @theragerghost9733 • 7 months ago • +60

      I mean... braindance?

    • @uku4171 • 7 months ago

      @@theragerghost9733 brooo

    • @ianallen738 • 7 months ago • +33

      So porn is going to be revolutionized..

    • @thesenamesaretaken • 7 months ago • +24

      @@ianallen738 what a time to be alive

  • @user-on6uf6om7s • 8 months ago • +30

    At the moment, photogrammetry seems a lot more applicable as the resulting output is a mesh that any engine can use (though optimization/retopology is always a concern) whereas using this in games seems like it requires a lot of fundamental rethinking but has the potential to achieve a higher level of realism.

    • @fusseldieb • 8 months ago • +5

      Like I saw in another video (I believe it was from Corridor?), this technique is best applied by re-rendering a camera-recorded 2D path, giving you new footage without all the shakiness of your real 2D recording. Kinda sucked to explain it, but I hope you got it.

  • @MikeMorrisonPhD • 8 months ago • +48

    So fun to watch. I want all research papers explained this way, even the ones in less visual fields. Subscribed!

  • @anuragparcha4483 • 3 months ago • +1

    This is exactly what I wanted. Keep these videos up, subbing now!

  • @sirens3237 • 5 months ago

    This was amazingly well put together

  • @user-co3nl9co5g • 8 months ago • +198

    This looks a lot like en.wikipedia.org/wiki/Volume_rendering#Splatting from 1991; I wonder if there is any big difference apart from the training part.
    also I know everybody said the same, but your editing is so cool. It's so dynamic, yet it manages to not be exhausting at all

    • @francoislecomte4340 • 8 months ago • +9

      It is close but the new technique optimizes the Gaussians (both the number of gaussians and the parameters) to fit volumetric data while the other one doesn’t, leading to a loss of fidelity.
      Please correct me if I’m wrong, I haven’t actually read the old paper.
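A toy 1-D picture of what "optimizing the Gaussians to fit the data" means: adjust one Gaussian's mean, width, and amplitude by gradient descent on a squared error. The paper does this for millions of anisotropic 3-D Gaussians (while also adaptively adding and pruning them); the hand-derived gradients below are purely illustrative:

```python
import math

def fit_gaussian(xs, ys, steps=2000, lr=0.05):
    """Fit y ~ amp * exp(-0.5 * ((x - mu) / sigma)**2) by gradient descent."""
    mu, sigma, amp = 0.0, 1.0, 1.0  # initial guess
    n = len(xs)
    for _ in range(steps):
        g_mu = g_sigma = g_amp = 0.0
        for x, y in zip(xs, ys):
            z = (x - mu) / sigma
            pred = amp * math.exp(-0.5 * z * z)
            err = pred - y                       # d(loss)/d(pred) = 2 * err
            g_amp += 2 * err * pred / amp        # d(pred)/d(amp)  = pred / amp
            g_mu += 2 * err * pred * z / sigma   # d(pred)/d(mu)   = pred * z / sigma
            g_sigma += 2 * err * pred * z * z / sigma
        mu -= lr * g_mu / n
        sigma -= lr * g_sigma / n
        amp -= lr * g_amp / n
    return mu, sigma, amp
```

The 1991 splatting paper renders fixed volumetric data through Gaussian footprints; the optimization loop over the Gaussians' own parameters is the part that is new.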

    • @user-co3nl9co5g • 8 months ago • +1

      ​@@francoislecomte4340 You are totally correct :)

    • @lunarluxe9832 • 8 months ago • +9

      im impressed youtube let you put a link in the comments

  • @ScibbieGames • 8 months ago • +61

    It's more niche than photogrammetry because there's no 3D model to put into something else.
    But with a bit more work I'd love to see this be a feature on a smartphone.

    • @EvanBoldt • 8 months ago • +5

      Perhaps the process could be repeated with a 360 degree FOV to create an environment map for the inserted 3D model. Casting new shadows seems impossible though.

    • @fusseldieb • 8 months ago • +6

      @@EvanBoldt "Casting new shadows seems impossible though." => It really depends. If your footage already has shadows, it'll be difficult. However, if your footage DOESN'T contain shadows, just add a skybox/HDRI, tweak some objects (holes, etc.) and voilà.

    • @EZhurst • 8 months ago • +11

      photogrammetry really isn’t that niche considering it’s used pretty heavily in both AAA video games and film

    • @noobandfriends2420 • 8 months ago

      You can create depth fields from images which can be used to create 3D objects. So it should be possible to integrate it into a pipeline.

    • @randoguy7488 • 8 months ago • +5

      @@fusseldieb If you think realistic graphics require only "some skybox/HDRi and tweaking some objects" You have a lot to learn, especially when it comes to required textures.

  • @BigJemu • 6 months ago • +1

    this is the content i need in my life, no filler. thanks

  • @isaacgary6801 • 7 months ago

    This guy is too good for this level of subs...
    Subbed right away

  • @WaleighWallace • 8 months ago • +9

    I absolutely love this style of informative video. Usually I have to have videos set to 1.5x because they just drag everything out. But not here! Love it.

    • @Psythik • 7 months ago

      Honestly, this video is *perfect* for people like me with ADHD and irritability. No stupid filler, no "hey guys", no "like and subscribe". Just the facts, stated as quickly and concisely as possible. 10/10 video.

  • @yaelm631 • 8 months ago • +52

    Your video is spot on!
    HTC Vive/The Lab were the reasons why I got into VR.
    I loved the photogrammetry environments so much that it's my hobby to capture scenes.
    Google Light Fields demos were a glimpse of the future, but blurry.
    These high-quality NeRF breakthroughs are coming much earlier than I thought they would.
    We will be able to capture and share memories, places... it's going to be awesome!
    I don't know if Apple Vision Pro can only capture stereoscopic souvenirs or if it can do 6DoF, but I hope it's the latter
    :,D

    • @orangehatmusic225 • 8 months ago

      We all know what you use VR for... you might want to not use a black light around your VR goggles huh.

    • @mixer0014 • 8 months ago • +5

      The best thing about this technique is that it is not a NeRF! It is fully hand-crafted and that's why it beats the best of NeRFs tenfold when it comes to speed.

    • @orangehatmusic225 • 8 months ago

      @@mixer0014 Using AI doesn't make it "hand crafted"... so you are confused.

    • @mixer0014 • 8 months ago

      @@orangehatmusic225
      There is no AI in that new tech, just good ol' maths and human ingenuity. TwoMinutePapers has a great explanation, but if you don't have time to watch it now, I hope a quote from the paper can convince you: "The unstructured, explicit GPU-friendly 3D Gaussians we use achieve faster rendering speed and better quality without neural components."

    • @NickJerrison • 8 months ago

      @@orangehatmusic225 Ah yes, the only and primary reason people throw hundreds of dollars into VR equipment is to jerk off, so viciously in fact that it would splatter the headset itself. For sure, man, for sure.

  • @JohnDoe-yr3lm • 7 months ago

    Love the format. Cool guy

  • @ezdeezytube • 8 months ago

    Video topic aside, your style of delivering info is top notch mate!

  • @agedisnuts • 8 months ago • +42

    welcome back to two minute papers. i’m your host, bill wurtz

  • @grantlauzon5237 • 8 months ago • +501

    This could be used for film reshoots if a set was destroyed, but in a video game the player and NPCs would still need to have some sort of lighting/shading.

    • @kamimaza • 7 months ago • +158

      This is a great application for Google Street View as opposed to the 3D they have now...

    • @fernandojosesampaio9173 • 7 months ago • +4

      An HDRI is a lighting map that can be applied to any object in a virtual space; you just take a 360° photo of the environment.

    • @MrRedstoneready • 7 months ago • +3

      Lighting/shading can be figured out from the environment. Movie CGI is lit using a 360° image sphere of the set

    • @KD-_- • 7 months ago • +6

      Plants also won't be moving so no wind

    • @NoLongo • 7 months ago • +3

      I watched a video about AI that confirmed this is already a thing. Not at this fidelity but tools already exist to reconstruct scenes from existing footage, reconstruct voices, generate dialog. Nothing in the future will be real.

  • @diarserouy • 8 months ago

    I LOVE this style of videos

  • @IgorFranca • a month ago

    Thank you for your objective presentation.
    Very interesting indeed.
    I'll check later for more.

  • @culpritdesign • 8 months ago • +5

    This is the most concise explanation of gaussian splatting I have stumbled across so far. Subscription achieved.

  • @eivarden • 8 months ago • +34

    Photogrammetry is extremely useful and has high potential for games.
    The problem is that we've already fine-tuned the current methods to their maximum potential, so they are more efficient.
    IMO, finding ways to mix the two together is the best way to do it, and the more we use these technologies, the more fine-tuned they become.
    (The 3D-scanning technology that uses "points" instead of polygons (triangles) is even more impressive, but has limits with motion, which is another reason the two technologies work so well together.)

  • @DKLHensen • 7 months ago

    Subscribed: because you can condense quality information in 2 minutes, no bs, just to the point.

  • @mikaelsjodin • 7 months ago

    this is the first video from this guys I've watched and I'm hooked. Like, damn son.

  • @eafortson • 8 months ago • +3

    I cannot overstate how much I appreciate this approach to delivering information concisely. Thank you sir.

  • @porrasm • 8 months ago • +37

    For now it’s niche. I imagine it could be used in games blended with traditional rendering pipelines. E.g. use this new method for certain areas that need a light level of detail.

    • @judahgrayson7953 • 7 months ago

      more than just that niche - video production could utilize this in rendering effects

  • @gg1bbs • 7 months ago

    This was great, I'm really happy it was in my feed. Thanks!

  • @LeonTalksALot • 8 months ago • +1

    10/10 intro, literally perfect in every way.
    I immediately got what I clicked for and found myself interested from 0:02 onwards.

  • @Critters • 8 months ago • +27

    I wonder how we'll get dynamic content into the 'scene'. Will it be like Alone in the Dark, where the environment is one tech (for that game, 2D pre-rendered) and characters are another (3D polygons)? Or will we create some pipeline for injecting/merging these fields so you have pre-computed characters (like Mortal Kombat's video-captured people)? Could look janky. Also, I don't see this working for environments that need to be modified or even react to dynamic light, but this is early days.

    • @edenem • 8 months ago • +4

      well, it could already work in its current state for VFX and 3D work. Even though you can't (to my knowledge) inject models or lighting into the scene, you could still take thousands of screenshots and convert the NeRF into a photoscanned environment, then use that environment as a shadow and light catcher in traditional 3D software. With a depth map from the photoscan you could take advantage of the NeRF for an incredibly realistic 3D render: you can put things behind other things and control the camera and lighting, while still keeping the reflections and realistic lighting NeRFs provide

    • @Danuxsy
      @Danuxsy 8 months ago +8

      Yes, the issue here is that these scenes are not interactable, because the things in them are not 3D objects; they are mere representations from your particular perspective. I don't know how they would solve those problems (which is probably why we won't see it in games anytime soon, if ever).

  • @kunstigsmart
    @kunstigsmart 8 months ago +7

    The man condensed 7 minutes' worth of material into a 2-minute barrage of info. I like it.

  • @danielrock04
    @danielrock04 6 months ago

    Love the narration; this video should be put in the YouTube hall of fame. (Y)

  • @sergeyivanov3607
    @sergeyivanov3607 8 months ago

    Thanks for the video and for including the link to the paper.
    Looks like niche technology: AFAIU, it enables a compact representation of an existing scene, but you need either photos or multiple very high-quality renders of a static scene, so it seems to be a no-go for video games and only partially applicable for engineering.

  • @BluesM18A1
    @BluesM18A1 8 months ago +18

    I'd like to see more research into how to superimpose dynamic objects into a scene like this before it has any practical use in video games. But for VR and other such things, this could have a lot of potential if you want cinema-quality raytraced renders of a scene displayed in realtime. It doesn't have to be limited to real photos.

    • @bentweedle3018
      @bentweedle3018 6 months ago +1

      I mean, it'd be pretty simple to do: use a simplified mesh of the scene to mask out the dynamic objects, then overlay them when they should be visible. The challenge is more making the dynamic objects look like they belong.

  • @Gameboi834
    @Gameboi834 8 months ago +351

    This is the first time I've seen an editing style similar to Bill Wurtz that not only DIDN'T make me wanna gouge my eyes out, but also worked incredibly well and complemented the contents of the video. Nice!

    • @pigmentpeddler5811
      @pigmentpeddler5811 8 months ago +3

      so true bestie

    • @ratastic
      @ratastic 8 months ago

      I did want to kill myself a little bit though, just a little

    • @thefakepie1126
      @thefakepie1126 8 months ago +3

      bill wurtz without the thing that makes bill wurtz bill wurtz

    • @hadrux4643
      @hadrux4643 8 months ago +2

      bill wurtz explaining something complicated, kinda goes in one ear and out the other

    • @pigmentpeddler5811
      @pigmentpeddler5811 8 months ago

      @@thefakepie1126 yeah, without the cringe

  • @skoowy
    @skoowy 8 months ago

    You have a real talent for explaining technical topics.
    Well done!

    • @Igzilee
      @Igzilee 7 months ago

      Well, he would if he actually explained in-depth instead of spouting vocabulary that makes no sense to someone without an extensive understanding of how graphics work.

  • @iRemainNameless
    @iRemainNameless 8 months ago

    Brilliant style of video and flawless presentation. Bravo

  • @Smesp
    @Smesp 8 months ago +3

    I've seen other, much longer videos on this and wondered how it works. Now I know. After 2:11. This is the first time I've paid money to show my gratitude. Great work. Keep it up! Liked and subscribed. THANKS.

  • @thejontao
    @thejontao 8 months ago +27

    Interesting!!!
    I always think back to when I was doing my degree in the 90s, and Ray Tracing was this high end thing PhD students did with super expensive Silicon Graphics servers, and it was always a still image of a very reflective metal sphere on a chess board with some kind of cones or polyhedra thrown in for kicks. It took days and weeks of render time.
    About 25 years passed between when I first heard of ray tracing and when I played a game with ray tracing in it.
    I might not be 70 when the first game using a Gaussian engine is released, but I wouldn’t imagine it happens before I’m 60.
    Still very interesting, though!!!

    • @dddaaa6965
      @dddaaa6965 8 months ago +1

      I don't think it'll be used at all, personally, but I'm stupid so we'll see. I don't see how this is better than digital scanning if you have to add lighting and collision yourself. Someone said it could be used for skyboxes, and I could see that.

    • @richbuckingham
      @richbuckingham 5 months ago +1

      I remember exactly the ray-tracing program you're talking about, in 1993/4 I think a 640x480 image render took about 20 hours.

    • @memitim171
      @memitim171 5 months ago

      Ray tracing was also popular on the Amiga; these chess boards, metal spheres etc. would crop up regularly in Amiga Format (I've no idea how long one took to render on the humble Amiga). Some of them were a bit more imaginative, though, and I remember thinking how cool they looked and wondering if games would ever look like that. TBH I'm a bit surprised it actually happened... I'm not convinced I'm seeing the same thing here, though: how does any of this get animated? It's already kinda sad that it's 2023 and interactivity and physics have hardly moved an inch since Half-Life 2; the last thing we need is more "pre-rendered" backgrounds.

    • @niallrussell7184
      @niallrussell7184 5 months ago

      You didn't need an SGI. I bought a 287 co-processor for an IBM PC to do raytracing in late 80s. Started with Vivid and then POV raytracers. By mid 90s we were using SGI's for VR.

  • @TrueHelpTV
    @TrueHelpTV 5 months ago

    I like your style, sir. Stay with it. It will pay off.

  • @JoshLange3D
    @JoshLange3D 7 months ago

    Love this approach to explaining. Very fun to take in.

  • @myusernamegotstollen
    @myusernamegotstollen 8 months ago +13

    I don’t know much about rendering but this sounds so smart. You take a video, separate the frames, do the other steps, and now you have this video as a 3d environment

    • @redcrafterlppa303
      @redcrafterlppa303 8 months ago +3

      This would be an amazing thing for AR, as you could use that detailed 3D model of whatever room or place you are in and augment it with virtual elements. Or to have photorealistic environments in VR without needing an AAA production team. Imagine taking a house tour abroad in VR where the visuals are rendered photorealistically in real time.

    • @fusseldieb
      @fusseldieb 8 months ago +3

      @@redcrafterlppa303 "Or to have photorealistic environments in VR without needing a AAA production team" -> This already exists. Some people use plain old photogrammetry for that

  • @briannamorgan4313
    @briannamorgan4313 8 months ago +38

    I was in college back when deferred shading was just being talked about as a viable technique for lighting complex scenes in real-time. I even did my dissertation on the technique. Back then GPUs didn't have anywhere close to the memory needed to do it at a playable resolution, but now pretty much every game uses it. I can see the same thing happening with 3D Gaussian Splatting.

  • @ragnarlothbrok6240
    @ragnarlothbrok6240 7 months ago

    I wish everything on YouTube was this concise.

  • @dirtyharry5165
    @dirtyharry5165 7 months ago

    quick and to the point, perfect video

  • @rich1051414
    @rich1051414 8 months ago +3

    Replacing computation with caching. Clever caching, but it is what it is.
    This is the very first solution a programmer reaches for when computations take too long: can we cache it instead? Gaussian splatting is a way to cache rendering for use later. But it takes *all of the RAM* to do it.

    • @DrCranium
      @DrCranium 8 months ago

      So, in a sense, this is an "optimization technique": currently for photogrammetry, but give it some traction and spotlight (like with "async reprojection outside of VR") and it'll find its way into game engines.

    • @rich1051414
      @rich1051414 8 months ago +1

      @DrCranium Imagine the world as being full of 3D pixels of zero size. How large can you actually make those pixels before you start to notice? This is basically what splatting is. The smaller the pixels, the better things look. But since those 'pixels' are in 3D space, you don't have to update them as long as nothing else has changed.
      It's less useful in dynamic scenes, though, which is where the technique starts to fall short. But I suppose filters could be stacked on top, or lower-resolution realtime rendering, with prerendered splatting used as hinting to upscale the fidelity.
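
      The caching idea in this thread can be sketched with ordinary memoization; `render_view` below is a hypothetical stand-in for an expensive renderer, not anything from the splatting code, and the pose tuple plays the role of the cache key:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def render_view(camera_pose):
    # Stand-in for an expensive computation. Like a splat scene,
    # it trades recomputation for stored (precomputed) data.
    x, y, z = camera_pose
    return f"frame at ({x}, {y}, {z})"

render_view((0, 0, 5))  # computed on the first call
render_view((0, 0, 5))  # served from the cache on the second
print(render_view.cache_info().hits)  # prints 1
```

      As the comment notes, the trade-off is memory: with `maxsize=None` the cache, like a splat scene, only ever grows.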

  • @thibaultghesquiere
    @thibaultghesquiere 8 months ago +6

    I love how short and concise this was. Well done mate! No BS, cut to the chase

  • @goldstinger325
    @goldstinger325 8 months ago

    This was... Amazing. The best use of 2 minutes I've ever experienced in life. Thank you for opening my eyes to this

  • @Klaster_1
    @Klaster_1 8 months ago +19

    Great video, do you plan more like this? Terse, technical, about 3D graphics or AI. Basically, 2 minute papers, but with less fluff.

  • @NOVAScOoT
    @NOVAScOoT 8 months ago +3

    Never thought I'd see the video version of an abstract before; very well done, though. It really does feel like I've stepped into a researcher's room right as they're about to finish six years of research and put it all together in one sleepless weekend on 18 cups of extra-caffeinated coffee.

  • @jamieshelley6079
    @jamieshelley6079 6 months ago

    Great video and looking forward to seeing it being used!

  • @zaklesiou
    @zaklesiou 8 months ago

    Absolute banger vid bro

  • @bogsbinny7124
    @bogsbinny7124 8 months ago +46

    This could be done using images from hyperrealistic renders instead of IRL photos too, right? To move around in a Disney-CGI-level environment that wouldn't normally be possible in realtime.

    • @sebastianblatter7718
      @sebastianblatter7718 8 months ago +12

      cool idea.

    • @Lazyguy22
      @Lazyguy22 8 months ago +9

      You'd need a large number of those renders, and wouldn't be able to interact with anything.

    • @chrisallen9743
      @chrisallen9743 8 months ago +5

      So long as the renders basically function as required (enough of them, from enough different points) and can be converted into whatever format is used to create this... I don't see why not.

    • @chrisallen9743
      @chrisallen9743 8 months ago +2

      You could use something that uses ray tracing to create a scene, and once the dots fill in to 100%, you have your screenshot/picture; then you move the camera to the next position and let the dots fill in. Rinse and repeat, and then you'll have RTX fidelity.

    • @jcudejko
      @jcudejko 8 months ago +1

      @Lazyguy22 Yes, but you could add in bits redrawn as polygons with their own textures that could be interactable.
      I'm thinking of something like the composite scenes from the late-90s FMV games.

  • @todayonthebench
    @todayonthebench 7 months ago +10

    I saw something similar years ago with similarly impressive results (also just a point cloud being brute-forced into a picture).
    However, the downside of this type of technique is its memory requirement, making it fairly niche and hard to use in practice.
    For smaller scenes it works fine; anything large starts to fall apart.
    Beyond this, we also have to consider that these renders are of a static world. And given the huge number of "particles" making up even a relatively small object, the challenge of "just moving" a few of them as part of an animation becomes a bit insane in practice. Far from impossible, it's just going to eat into that frame rate by a lot. Most 3D content is far more than just taking world data and turning it into a picture. And the "non-graphics" work (that graphics has to wait for, since otherwise we don't know what to actually render) is not an inconsequential amount as it is. Moving a few tens of thousands of polygons as a character model walks by isn't trivial work. Change those tens of thousands of polygons into millions of points (to get similar visual fidelity) and that animation step is suddenly a lot more compute-intensive.
    So in the end, this is my opinion:
    it works nicely as a 3D picture, but dynamic content is a challenge. The same goes for memory utilization, which makes it infeasible even for 3D pictures.
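
    The memory point above can be made concrete with a back-of-envelope sketch. It assumes the commonly cited per-gaussian parameter layout of the 3D Gaussian Splatting paper (59 floats per gaussian with degree-3 spherical-harmonic color); the 3-million-gaussian scene size is purely illustrative:

```python
def gaussian_bytes(sh_degree=3, float_bytes=4):
    # Per-gaussian parameters: 3 position + 3 scale
    # + 4 rotation (quaternion) + 1 opacity
    # + 3 * (sh_degree + 1)**2 spherical-harmonic color coefficients.
    sh_coeffs = 3 * (sh_degree + 1) ** 2
    return (3 + 3 + 4 + 1 + sh_coeffs) * float_bytes

def scene_megabytes(num_gaussians, sh_degree=3):
    # Raw parameter storage only; training and rendering need more.
    return num_gaussians * gaussian_bytes(sh_degree) / 1e6

print(gaussian_bytes())            # 236 bytes per gaussian
print(scene_megabytes(3_000_000))  # 708.0 MB for 3 million gaussians
```

    Hundreds of megabytes for one static scene, before any dynamic content, is exactly the "all of the RAM" concern raised in these comments.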

  • @Chrislevesque91
    @Chrislevesque91 7 months ago

    instant sub, great video homie

  • @Kazumo
    @Kazumo 6 months ago

    Please be the Two Minute Papers we deserve (without filler, a deliberately weird voice, or exaggerating the papers). Good stuff, really liked the video.

  • @MrEricGuerin
    @MrEricGuerin 8 months ago +4

    The issue is that it has no 'logic': no dynamic light, no way to identify a game object directly, like "hey, apply a Vector3 transform to that bush there". There is no 'information' about such a bush.
    Let's see where it takes us, of course, but it looks more like a fancy way to represent a static scene.
    It could probably be used for FX in movies, where you do something with 3D elements inside a static 'scene', I don't know...

    • @fusseldieb
      @fusseldieb 8 months ago

      Isolate the object, tweak it and export it. Should be doable...

    • @charlotte80389
      @charlotte80389 8 months ago

      @@fusseldieb the lighting wouldn't change when you move it tho

  • @onerimeuse
    @onerimeuse 8 months ago +19

    This is the power thirst of super complex graphics algorithms, and I'm totally here for it.

  • @Ano_Niemand
    @Ano_Niemand 7 months ago +1

    video explanation is packed, couldn't even finish my 3d gaussian splatting on the toilet

  • @allawallabedalla
    @allawallabedalla 7 months ago

    Refreshingly clean explanation. Thanks for keeping it short.

  • @Brian_Sauve
    @Brian_Sauve 7 months ago +6

    I literally don't care about the subject, but the way you did this... possibly the greatest YT video of all time. 2 minutes flat. No nonsense. Somehow hilarious. Kudos!

  • @ToxicNeon
    @ToxicNeon 8 months ago +14

    I wonder if something like this could eventually be combined with some level of animation and interactivity - in games that is. That would be wicked.

    • @sun_beams
      @sun_beams 8 months ago +5

      You could walk through it. It's not actually lit and it's not geo, so there's nothing to rig. It's basically 3D footage. I would think this could be very useful for compositors when they need to fill in background data after VFX builds a brand-new camera angle that wasn't filmed. It's almost useless as CG data because there's just nothing you can do with it except move through it. This isn't like normal maps, which are a way to utilize lighting and simulate new additional detail on stuff that has been created. You can't create these fields, you can only capture them. I take that back: you might be able to create them, but you'd need to have modeled, textured, and lit your scene so that you can capture the field from it. Again, could be useful to compositors but more of a nuisance for CG. It's cool, though.

    • @dddaaa6965
      @dddaaa6965 8 months ago +3

      PROBABLY NOT, that's why I kept thinking this is virtually useless (videogame-wise); there is no interaction besides looking at a static environment.

    • @ddenozor
      @ddenozor 8 months ago +1

      You can maybe do great skyboxes or non-interactable background objects, but is it worth doing for those?

  • @MrZhampi
    @MrZhampi 5 months ago

    I've never learned so much about a subject I'd never heard of before in such a short amount of time. Incredible!

  • @canoodlepoodle7892
    @canoodlepoodle7892 8 months ago +1

    Never knew anything like this existed until just now, but I'm suddenly excited to see where this tech goes in the future and whether it has a role in day-to-day life.

  • @MrGriff305
    @MrGriff305 7 months ago +3

    Perfect graphics will merge with perfect VR goggles at the same time 👍

  • @bud389
    @bud389 8 months ago +107

    It'll be used for specific applications only. If you want to use that with video games you'll still have to give geometry to all the objects and environments, meaning it will still need to render all of that with polygons.

    • @GabrielLima-pi4kw
      @GabrielLima-pi4kw 7 months ago +7

      It could probably be used in most FPS games for things that won't have to move, interact, or have collisions, and for distant or non-interactable scenery.

    • @PuppetMasterdaath144
      @PuppetMasterdaath144 7 months ago +11

      Yes, then it's the same thing as that scam from all those years ago: it only renders a scene, no objects.

    • @DuringDark
      @DuringDark 7 months ago +11

      @GabrielLima-pi4kw It's just not worth it. For a city block a la Dust 2 you'd need a physical set; any changes to the scene would require a reshoot, which would require the same weather or you'd end up with different lighting. It would clash stylistically with traditional 3D assets, there'd be much less VFX or lighting available, you'd need _two_ rendering pipelines (ballooning dev time and render time), you'd need a traditional 3D substitute anyway for users with slower systems, and shooting at gaussians would give no audiovisual feedback...

    • @ZeroX252
      @ZeroX252 7 months ago

      Actually, you can use the point-cloud-rendered objects for the visuals entirely, and rough low-poly objects exclusively for collision mapping and skeletal work. The bigger problem is that this technology needs to know where the camera is and then render the data for that camera. You would have to render extra frames in every direction the camera could move, otherwise the rendered viewport would always be playing "catchup" (you'd get an unclean, not-yet-decided image until the render catches up; think texture pop-in in games that use texture streaming).

    • @ZeroX252
      @ZeroX252 7 months ago +1

      You can see this prominently in this video: look at the lack of depth on the curvature of the vase or the edges of the table. As the camera pans, the "sides" of those objects blit into existence after the camera has moved. It's very obvious on the bicycle tires as well.

  • @CarfDarko
    @CarfDarko 7 months ago

    Love the presentation.

  • @paullanglois3553
    @paullanglois3553 21 days ago

    The editing is perfect.
    That's the definition of "straight to the point".

  • @Posdrums3
    @Posdrums3 8 months ago +7

    very wurtzian

  • @Zangettsu_ZA
    @Zangettsu_ZA 8 months ago +48

    Why can't ALL videos be as informative as this one in such a short span of time? Well done!

    • @LoLaSn
      @LoLaSn 8 months ago +3

      Because they want ad money

    • @womp47
      @womp47 8 months ago +9

      get a longer attention span and you'll be able to watch longer videos like this and actually learn stuff

    • @LoLaSn
      @LoLaSn 8 months ago +2

      @womp47 You think the problem with 20-minute videos where maybe a quarter is about the topic is people's attention span?
      Funniest shit I've read in a while

    • @AndrewARitz
      @AndrewARitz 8 months ago +5

      because this video isn't informative.

    • @womp47
      @womp47 8 months ago +2

      @LoLaSn when did I say videos "where a quarter is about the topic"?

  • @jacotay2827
    @jacotay2827 8 months ago

    insanely underrated. nice. short. super cool.

  • @StashOfCode
    @StashOfCode 2 months ago

    WAY better than the Computerphile video! Excellent work. Keep on teaching, you are talented.

  • @theDragoon007yaboiCJ
    @theDragoon007yaboiCJ 8 months ago +17

    I've always wondered about this, all my life, ever since I started getting into video game graphics as a kid. I knew it had to be possible, and watching this is the biggest closure of my life.

    • @Sc0pee
      @Sc0pee 8 months ago +3

      But this tech is not meant for games/movies/3D printing, because these are not mesh 3D models. These are 100% static, non-interactive models. Games need mesh models that can be interacted with, lit, animated, etc. It's also very demanding on the GPU. It's something that, e.g., Google Maps could use in the future.

    • @Suckassloser
      @Suckassloser 8 months ago

      @Sc0pee Surely there'd be ways to make them non-static? Maybe a lot more computationally demanding and beyond current consumer hardware, but for example you could develop some sort of weighted bone system that acts on the points of a cloud model, similar to what's done with 3D meshes. And I imagine ray/path tracing could be used to simulate lighting, etc. I'll admit I don't have a strong grasp of this technology, so I could be completely off base, but I feel these are only static because the means to animate, light, and add interactability (and the hardware to support them) have yet to be realised, just like they originally were for 3D meshes.

    • @zombeaver69
      @zombeaver69 8 months ago +1

      @Sc0pee If this technique generates 3D objects, that is usually what I would call a mesh. Static meshes and images are both used in games often, and we have editing capabilities for more complex requirements like animation. You're definitely right about the GPU thing, though.

    • @oBCHANo
      @oBCHANo 7 months ago +2

      Somehow, based on this comment, I doubt you know literally anything about computer graphics.

  • @Dodanos1
    @Dodanos1 7 months ago +5

    WHY DOESN'T EVERYBODY MAKE INFORMATIVE VIDEOS IN THIS FORMAT? Fast, clear, to the point, no filler, just pure info in the shortest amount of time.

    • @imveryangryitsnotbutter
      @imveryangryitsnotbutter 7 months ago

      You should watch "history of the entire world, i guess" if you haven't already. It's 20 minutes of this kind of rapid-fire semi-educational delivery.

    • @n_tas
      @n_tas 7 months ago

      5 second clip of something that happens later in the video "HEY WHAT'S GOING ON GUYS today we are doing a thing but first I want to thank this channel's sponsor NordVPN...."

  • @BonifiedGFX
    @BonifiedGFX 8 months ago

    I appreciate straight information. Thanks.

  • @j102h
    @j102h 8 months ago

    You just keep doing whatever this is and we'll keep watching

  • @-The-Golden-God-
    @-The-Golden-God- 8 months ago +3

    As someone with ADHD, I'd like to thank you for editing this video in such a concise manner. I usually have to set videos to 1.25/1.5x playback speed to have any hope of finishing them, and even then I find myself skipping through to avoid filler. This video is a breath of fresh air.

    • @jarrod1687
      @jarrod1687 7 months ago

      You realise that's not a real condition?