Epic's Unreal Optimization Disaster | Why Nanite Tanks Performance!

  • Published 12 Sep 2024

Comments • 1.2K

  • @ThreatInteractive · 22 days ago · +341

    PLEASE READ FOR UPDATES & RESPONSES:
    Thank you so much for your support!
    1. As always, watch our videos in 4K (in your streaming settings) to see our comparison details through YouTube compression.
    2. Please remember to *subscribe* so we can *socially compete* with leading tech influencers who push poor technology onto everyday consumers. Help us spread REAL data, empowering consumers to push back against studios that blame your sufficient hardware!
    * RESPONSE TO COMMUNITY QUESTIONS:
    1. We've repeatedly seen comments attempting to explain how Nanite works, arguing that quad overdraw isn't relevant. When we compare the overdraw view mode to Nanite, that's YOUR cue to understand we're measuring traditional non-Nanite rendering against Nanite.
    Additionally, many have claimed that Nanite has a "large but flat and consistent cost". This is utterly false. Nanite can and does suffer from its own form of overdraw (though not quad related). A major issue people are missing involves Virtual Shadow Maps, which are tied to Nanite.
    Nanite's shadow method not only re-renders your digital scenes at massive resolutions, but these maps are also re-drawn under basic scenarios typical in games, such as moving the CAMERA's position, shifting the SUN/MOON, or having moving objects or characters spread across your scene. Does that SOUND like good performance to you? News flash: it's not.
    Even Epic Games admitted VSMs were terrible for Fortnite, but instead of accepting that they weren't fundamentally a good fit, they "bit the bullet" and used them anyway. Except they didn't really bite anything... consumers did.
    2. To those defending Nanite because it saves on development time: we are fully aware of that. We have stated this constantly in previous videos and comments, and we have also said it is a great thing to work towards.
    What these ignorant people fail to grasp is that Nanite is a FORCED alternative, due to a workflow deficiency in legitimate mesh optimization.
    3. As we stated in our 'Fake Optimization' video, pro-Nanite users fail to recognize the CONTRADICTION Nanite causes in "visual fidelity".
    If a technology has such a massive domino effect on performance that you end up needing a blurry, detail-crushing temporal upscaler to fix it, then you smear away all that detail anyway for a distorted presentation. And if you were to explore CHEAP deferred MSAA options instead, all the subpixel detail Nanite makes possible, plus VSM's gross use of soft shadow sampling, promotes temporal aliasing and reliance on flawed TAA/SS.
    4. The test shown at 3:36 shows a workflow deficiency rather than an implementation issue. Unreal *does support per-instance LOD selection, but the engine defaults to ISMs (Instanced Static Meshes), which don't support LODs.* UE5's HISMs (Hierarchical Instanced Static Meshes) do, but the developers have not made them as accessible, and have not produced a system that combines all these meshes with precomputed asset-separation culling (see the sketch below). Before anyone complains about "duplicated" assets and increased file size, we encourage viewers to research how Spider-Man PS4's open worlds were managed.
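    A minimal UE5 C++ sketch of the HISM point above (ARockField and RockMesh are hypothetical names used for illustration; the component classes are the real engine ones):

      // Sketch: spawn instances through a Hierarchical ISM component, which keeps
      // per-cluster LOD selection, instead of the LOD-less plain ISM default.
      #include "Components/HierarchicalInstancedStaticMeshComponent.h"

      void ARockField::BuildInstances(UStaticMesh* RockMesh)
      {
          UHierarchicalInstancedStaticMeshComponent* Hism =
              NewObject<UHierarchicalInstancedStaticMeshComponent>(this);
          Hism->SetStaticMesh(RockMesh); // the mesh asset must already have authored LODs
          Hism->SetupAttachment(RootComponent);
          Hism->RegisterComponent();

          for (int32 i = 0; i < 1000; ++i)
          {
              FTransform Xform(FVector((i % 40) * 500.f, (i / 40) * 500.f, 0.f));
              Hism->AddInstance(Xform); // one batched draw path; LODs still apply per cluster
          }
      }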

    • @HANAKINskywoker · 22 days ago · +13

      6:16 What do you mean, a non-existent problem?
      The point is that DLSS doesn't hurt performance as much as playing the game at a higher resolution or using SSAA would, yet the quality is still good.
      Please think more before saying stuff like that lmao

    • @QuakeProBro · 22 days ago · +69

      @@HANAKINskywoker "...to a problem that never NEEDED TO EXIST." is what he said. We would not need DLSS that badly if fundamental optimization methods were pushed by the industry instead of upscalers.

    • @QuakeProBro · 22 days ago

      @@HANAKINskywoker Yes, it's true that as games get bigger and bigger, optimization gets harder. But that's exactly where things like the proposed AI tools would really shine.
      As long as Nanite is "You can now render a rock with 10 million tris, and the performance is (in the context of our other new features) better than before... except it really isn't, because the base cost is exponentially higher. But hey, here's the magic fix: upscalers!" rather than "You can now render 10 million rocks with near-infinite draw distance, and the base cost is the same or even less than before.", it creates a problem that shouldn't exist.
      There is much more work that needs to be done, and Epic should provide support for those who want a smooth transition, but instead they only push the features that sound great in marketing.
      My own project went from 120fps in UE4 to about 40fps in UE5 on a map that only has a fucking cube and a floor.
      Upscalers are definitely not evil and can be super useful, but they hurt the visuals too much to be the only answer you get when performance is bad.

    • @V1vil · 22 days ago · +6

      Comparison details were still noticeable even watching on an Xbox 360 in 720p. :)

    • @Navhkrin · 21 days ago

      @@RandoCZcamsAccount Google -> unreal.ShadowCacheInvalidationBehavior
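      For context, that maps to the per-primitive shadow cache hint added in newer UE5 releases. A hedged C++ sketch (property and enum names as I recall them from UE 5.3-era builds; verify in your engine version):

        // Sketch: promise VSM that this mesh won't move, so its cached shadow
        // pages survive instead of being invalidated every frame.
        #include "Components/StaticMeshComponent.h"

        void MarkShadowStatic(UStaticMeshComponent* Mesh)
        {
            Mesh->ShadowCacheInvalidationBehavior = EShadowCacheInvalidationBehavior::Static;
            Mesh->MarkRenderStateDirty(); // rebuild render state so the hint takes effect
        }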

  • @BunkerSquirrel · 12 days ago · +398

    2d artists: worried about ai taking their jobs
    3d artists: worried ai will never be able to perform topology optimization

    • @Tofu3435 · 9 days ago · +12

      Me: want a robot to take my job 😂

    • @GeneralKenobi69420 · 9 days ago · +4

      Who let the furry out of the basement 💀

    • @dimmArtist · 8 days ago · +3

      3d artists: worried that hungry 2d artists will become better 3d artists and take their jobs

    • @mainaccount888 · 8 days ago · +13

      @@dimmArtist said no one ever

    • @roilo8560 · 6 days ago · +4

      @@GeneralKenobi69420 bro has 69420 in his name in 2024 💀

  • @pyrus2814 · 14 days ago · +138

    DLSS was originally conceived as a means to make real-time raytracing possible. It's sad to see how many games today rely on it for stable frames with mere rasterization.

    • @gudeandi · 9 days ago · +7

      tbh 99% of all players (especially in single-player) won't see a difference... and that's the point.
      Get 90% of the result by investing 10%. The last 10% costs you sooo much more and often isn't worth it.
      imo

    • @beetheimmortal · 7 days ago · +24

      I absolutely hate the way modern rendering went. They switched from Forward to Deferred, but then broke anti-aliasing completely, introduced TAA which sucks, then introduced DLSS, which is now MANDATORY in any game to run at a sort-of acceptable framerate. Nothing is optimized, and everything is blurry and low-res. Truly pathetic.

    • @griffin5734 · 6 days ago · +3

      No no no, completely wrong. DLSS brought the BEST AA in decades: DLAA. Best tech in the world.

    • @dawienel1142 · 5 days ago · +10

      @@beetheimmortal agreed, I can't believe high-end RTX 40 series GPUs really struggle to run the latest games at acceptable settings and performance, especially compared to 2010-2015 games, which generally looked good and ran well on the hardware of their time.
      I feel like we are at the point of gaining very slight graphical fidelity for way too much cost these days.

    • @techjunky9863 · 5 days ago · +1

      @@beetheimmortal the only way to play games at proper resolution now is to buy a 4K monitor and render at 4K. Then you get image quality somewhat similar to what we had with forward rendering.

  • @SaltHuman · 22 days ago · +1208

    Threat Interactive did not kill himself

    • @ook_3D · 21 days ago · +115

      For real, dude presents incredible information without bias; gotta piss a lot of AAA companies off

    • @bublybublybubly · 20 days ago · +47

      @@ook_3D Epic and some Twitter nerds may not be happy. I don't think the game companies care 🤷‍♀
      He might just bring them a different solution for saving money on dev time & optimization. Why would they be mad at someone who is this motivated to hand them another option for their corporate oppression toolbox, for free?

    • @168original7 · 20 days ago · +2

      Timmy isn’t that bad lol

    • @mercai · 20 days ago

      @@bublybublybubly Cause this someone acts like a raging asshat, makes lots of factually wrong claims, offers no solution and then tries to crowdfund to "fix" something - all from this place of being a literal nobody with zero actual experience or influence?
      Yeah, it's not maddening, but quite annoying.

    • @gdog8170 · 18 days ago · +9

      I can't lie, this is not funny. Revealing info like this doesn't mean he will be targeted, but hopefully that never happens

  • @sebbbi2 · 18 days ago · +328

    Nanite's software raster solves quad overdraw. The problem is that software raster doesn't have HiZ culling. Nanite must lean purely on cluster culling, and their clusters are over 100 triangles each. This results in significant overdraw to the V-buffer with kitbashed content (such as their own demos). But the V-buffer is just a 64-bit triangle+instance ID. Overdraw doesn't mean shading the pixel many times.
    While the V-buffer is fast to write, it's slow to resolve. Each pixel shader invocation needs to load the triangle and run code equivalent to a full vertex shader 3 times. The material resolve pass also needs to calculate analytic derivatives, and material binning has complexities (which manifest in potential performance cliffs).
    It's definitely possible to beat Nanite with the traditional pipeline if your content doesn't suffer much from overdraw or quad-efficiency issues, and you have good batching techniques for everything you render.
    However, it's worth noting that GPU-driven rendering doesn't mandate a V-buffer, SW rasterizer, or deferred material system like Nanite does. Those techniques have advantages, but they have big performance implications too. When I was working at Ubisoft (almost 10 years ago) we shipped several games with GPU-driven rendering (and virtual shadow mapping): Assassin's Creed Unity with massive crowds in big city streets, Rainbow Six Siege with fully destructible environments, etc. These techniques were already usable on last-gen consoles (1.8 TFLOP/s GPU). Nanite is quite heavy in comparison. But they are targeting single-pixel triangles. We weren't.
    I am glad that we are having this conversation. Also, mesh shaders are a perfect fit for a GPU-driven render pipeline. AFAIK Nanite is using mesh shaders (primitive shaders) on consoles at least, unless they use SW raster for big triangles too these days. It's been a long time since I last analyzed Nanite (UE5 preview). Back then their PC version was using non-indexed geometry for big triangles, which is slow.
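    To make the V-buffer point concrete, here is a rough sketch (my own illustration; the exact bit layout is not Epic's) of why the write is cheap and the resolve is not:

      // A visibility-buffer texel is just packed IDs; rasterizing it is cheap.
      // The resolve pass must use these IDs to re-fetch the triangle and redo
      // vertex-shader-equivalent work per pixel, which is the slow part.
      #include <cstdint>

      uint64_t vbufferPack(uint32_t instanceId, uint32_t clusterId, uint32_t triIdx)
      {
          return (uint64_t(instanceId) << 32) |
                 (uint64_t(clusterId & 0x1FFFFFFu) << 7) |
                 (triIdx & 0x7Fu); // ~128-triangle clusters fit in 7 bits
      }

      void vbufferUnpack(uint64_t v, uint32_t& instanceId, uint32_t& clusterId, uint32_t& triIdx)
      {
          instanceId = uint32_t(v >> 32);
          clusterId  = uint32_t((v >> 7) & 0x1FFFFFFu);
          triIdx     = uint32_t(v & 0x7Fu);
          // The resolve then loads the 3 vertices, interpolates attributes and
          // computes analytic derivatives manually: roughly triple vertex-shader
          // work per pixel, matching the cost described above.
      }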

    • @user-bl1lh1xv1s · 16 days ago · +12

      thanks for the insight

    • @tubaeseries5705 · 16 days ago · +19

      the issue is that quad overdraw is not a big deal; modern GPUs are never limited by the number of triangles they output, it's always shaders, and Nanite adds a lot of additional work to the shader pipeline, which is already occupied as hell.
      for standard graphics with reasonable triangle counts, Nanite just doesn't make any sense: it offers better fidelity than standard methods, but performance is not what it can offer

    • @minotaursgamezone · 16 days ago

      I am confused 💀💀💀

    • @torginus · 16 days ago

      Not an unreal expert, but from what I know of graphics, quad rasterization is unavoidable, since you need the derivatives for the pixel shader varyings that are needed for things like texture sampling. Honestly it might make sense to move beyond triangles to things like implicit surface rendering (think drawing that NURBS stuff directly) for the stuff nanite tries to accomplish.

    • @tubaeseries5705 · 16 days ago · +5

      @@torginus rendering NURBS and other non-primitive types always comes down to rendering primitives anyway; CAD software has always processed NURBS into triangle meshes using various methods that produce a lot of overhead. GPUs are not capable of efficiently rendering anything other than primitives; we would need a new hardware standard to render them, and that's not really reasonable

  • @gandev5285 · 21 days ago · +271

    This may or may not be true, but I actually believe Unreal Engine's quad-overdraw viewmode has been broken for a long time (atleast beyond 4.21). In a Robo Recall talk they discuss the impact of AA on quad overdraw and show that if you enable MSAA, your quad overdraw gets significantly worse. Now if you run the exact same test, quad overdraw IMPROVES significantly. So unless Epic magically optimized AA to produce less overdraw, the overdraw viewmode is busted. I tested the same scenes in 5.2 and 4.21, and the overdraw view was much worse in 4.21, with it actually showing some red from opaque overdraw (all settings the same). I'm not even sure opaque overdraw can get beyond green now. I would suspect the overdraw you show from extremely high-poly meshes should actually be significantly worse and mostly red or white.

    • @tubaeseries5705 · 21 days ago · +26

      MSAA by nature causes more overdraw because it takes multiple samples for each pixel, the amount varying with the settings. So in a case where only 2 pixels are occupied, meaning 50% overdraw, with MSAA it could be for example 6 out of 16, meaning 60% overdraw, etc.
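      For anyone unfamiliar with the term: GPUs shade pixels in 2x2 quads, so a triangle that touches a quad pays for all 4 invocations even if it covers only one pixel. A toy model (my own illustration, unrelated to UE's viewmode internals):

        // Toy quad-overdraw model: covered pixels are rounded up to whole quads.
        int shaderInvocations(int quadsTouched)
        {
            return quadsTouched * 4; // all 4 lanes of each touched quad execute
        }
        // A sliver triangle covering 10 pixels spread over 10 quads costs
        // shaderInvocations(10) == 40 invocations: 75% of the work is wasted.
        // The same 10 pixels packed into 3 quads cost only 12 invocations.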

    • @futuremapper_ · 21 days ago · +5

      I assume Epic cheats the values when MSAA is enabled, as MSAA causes overdraw by its nature

    • @JorgetePanete · 20 days ago · +1

      at least*

    • @neattricks7678 · 19 days ago · +2

      It is true. Unreal sucks and everything about it is fucked

    • @FourtyFifth · 18 days ago · +3

      ​@@neattricks7678 Sure it does buddy

  • @sgredsch · 4 days ago · +12

    I'm a mod/game dev who worked with the Source engine, and looking back at how we optimized for performance manually with every mesh, texture and shader, seeing modern studios deliver the worst garbage that runs like a brick also makes me angry. Now we're throwing upscaling at games that should run 2x as fast at native resolution to begin with.
    We are subsidizing sloppy or nonexistent game optimization by overspending on overbuilt hardware that in return gets choked to death by terrible software, a result of cost-cutting measures / laziness / manipulation. Nvidia is really talented at finding non-issues, blowing them up, and then selling a proprietary solution to an issue that shouldn't be one in the first place. They did it with PhysX, tessellation, GameWorks, ray tracing and upscaling. The best partner in crime is of course the engine vendor with the biggest market share: Epic with their Unreal Engine.
    Fun fact: the overdraw bloat issue goes back to when Nvidia forced tessellation in everyone's face. Nvidia made their GPUs explicitly tolerant of sub-pixel geometry spam (starting with Thermi, I mean Fermi), while GCN couldn't handle that abuse. The Witcher 3's tessellation x64 HairWorks sends its regards, absolutely choking the R9 290X.
    It's a shame what we have come to.

    • @e.s.r5809 · 14 hours ago · +1

      I've gotten tired of seeing "if it's slow, your hardware isn't good enough" for graphics on par with decade-old releases, struggling on the same tech that ran those games like butter. You shouldn't *need* to spend half a year's rent on a gaming PC to compensate for memory leaks and poor optimisation. The only winners are studio execs and tech shareholders. It's convenient to call your customers poor instead of giving your developers the time and resources to ship viable products.

  • @sideswipebl · 23 days ago · +872

    No wonder fidelity development has seemed so slow since around 2016. It's getting harder and harder to tell how old a game is just by looking at it, because we already figured out idealized hyper-realism around 2010 and have just been floundering since.

    • @MrGamelover23 · 22 days ago · +211

      Yeah, imagine the optimization of back then with ray tracing. It might actually be playable then.

    • @metacob · 22 days ago · +156

      GPU performance is still rising exponentially, but I literally can't tell a game made today from one made 5 years ago. As a kid my first console was a SNES, my second one an N64. That was THE generational leap. The closest to that experience we got in the last decade was VR, but that still wasn't quite the same.
      To be honest though, it's fine. I've heard the word "realism" a few too many times in my life. Now it's time for gameplay and style.

    • @Fearzzy · 22 days ago · +47

      @@metacob if you want to look at the potential of today's games, just look at Bodycam; you couldn't make that 5 years ago. But I see your point. RDR2 is still the most beautiful game I've played (other than faces etc.), but that's more down to its style and attention to detail than "raw graphics"

    • @arkgaharandan5881 · 22 days ago · +25

      @@metacob well, I was playing Just Cause 3 recently; the lighting has some ambient occlusion and reflections, so it looks bad by modern standards, you can see the foliage LOD popping in in real time, and unless you're running at 4K, its anti-aliasing (SMAA 2x) isn't good enough to hide the countless jaggies. I'd say a better comparison is with 2018 games. Also, the 4060 is barely better than the 3060 with more RAM; the low-to-mid range needs to improve a lot.

    • @slyseal2091 · 22 days ago

      @@arkgaharandan5881 Just Cause 3 is 9 years old. "5 years ago" is 2019, buddy

  • @doltBmB · 16 days ago · +28

    Key insight every optimizer should know: draw calls are not expensive, context-switching is. The more resources that a draw call shares with adjacent draw calls the cheaper it is to switch to it. If all draw calls share the same resources it's free(!). Don't worry about draw calls, worry about the order of the draw calls.
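    A minimal sketch of what "worry about the order" means in practice (my own illustration): build a sort key from the expensive-to-switch state, sort the draw list, and state changes collapse to the boundaries between runs:

      // Sketch: sorting draws so adjacent calls share state. Only when a field of
      // the key changes do we pay the corresponding bind/switch cost; draws with
      // identical keys are nearly free back-to-back, as the comment above says.
      #include <algorithm>
      #include <cstdint>
      #include <vector>

      struct Draw {
          uint16_t shader;   // most expensive to switch: pipeline/shader program
          uint16_t material; // descriptor/texture bindings
          uint32_t mesh;     // vertex/index buffers
          // Costliest state goes in the most significant bits, so the sort
          // groups by shader first, then material, then mesh.
          uint64_t key() const {
              return (uint64_t(shader) << 48) | (uint64_t(material) << 32) | mesh;
          }
      };

      void submit(std::vector<Draw>& draws)
      {
          std::sort(draws.begin(), draws.end(),
                    [](const Draw& a, const Draw& b) { return a.key() < b.key(); });

          uint64_t lastKey = ~0ull;
          for (const Draw& d : draws) {
              if (d.key() != lastKey) {
                  // bind pipeline/material/buffers only when something changed
                  lastKey = d.key();
              }
              // issue the draw call: cheap now that neighbours share state
          }
      }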

  • @IBMboy · 23 days ago · +294

    I think Nanite is an innovative technology, but it shouldn't replace the old style of rendering for videogames, as it's still experimental in my opinion

    • @matiasinostroza1843 · 22 days ago · +46

      yeah, and it's not made for optimization but for looks. There is almost no transition between LODs: in normal games, when you get far enough away you can see the mesh switch to lower-poly models, but with Nanite this is almost imperceptible, so in that sense it's "better". That's why it's being used for "realistic graphics".
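      That pop comes from discrete LOD switching; a typical scheme (a generic illustration, not UE's exact code) picks an LOD from projected screen size, and the visible jump happens whenever the index changes:

        // Generic discrete LOD pick: project the bounding sphere to screen
        // coverage and bucket it. Each threshold crossing swaps the whole mesh
        // at once, which is the "pop" Nanite's continuous cluster LOD avoids.
        #include <algorithm>
        #include <cmath>

        int pickLod(float boundsRadius, float distance, float fovY, int lodCount)
        {
            // Approximate fraction of the vertical screen covered by the bounds.
            float coverage = boundsRadius / (distance * std::tan(fovY * 0.5f));
            int lod;
            if      (coverage > 0.50f) lod = 0;  // close: full-detail mesh
            else if (coverage > 0.25f) lod = 1;
            else if (coverage > 0.10f) lod = 2;
            else                       lod = 3;  // far: coarsest mesh
            return std::min(lod, lodCount - 1);
        }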

    • @Cloroqx · 22 days ago · +29

      Baseless opinions by non-developers. "Old style of rendering videogames".
      What is your take on the economy's sharp downturn due to a lack of rate cuts by the FED, opinionated one?

    • @vendetta1429 · 21 days ago · +60

      @@Cloroqx You're coming off as the opinionated one, if you didn't know.

    • @pizzaman11 · 21 days ago · +5

      It also has a lower memory footprint, and you don't need to worry about creating LODs, which is perfect for large games with a large number of unique models.

    • @inkoalawetrust · 21 days ago · +31

      @@Cloroqx What are you even talking about.

  • @YoutubePizzer · 21 days ago · +131

    Here's the question though: is this going to save a development team enough resources to let them achieve more with less? If it's not "optimal", that's fine, as long as the dev time it saves is worth it. Ultimately, in an ideal world, we want the technology to improve so development can become easier.

    • @wumi2419 · 19 days ago · +69

      Cost of development doesn't disappear however, it's just transferred to customers. So they will have to either pay for a better GPU, run the game at lower quality, or will just "bounce off" and not purchase the game at all.

    • @blarghblargh · 19 days ago · +2

      ​@wumi2419 they could just make the game look worse instead. The development style being done now is already burning something, and that something is developer talent. And there isn't an infinite pool of that.
      You also ignored that GPU performance increases over time. So it may be a rough tradeoff now, but the tech will continue to get better hardware over time.

    • @insentia8424 · 18 days ago · +7

      I don't understand why you ask whether this allows development teams to be more efficient (achieve more with less) and then say that in an ideal world tech would make things easier. Something becoming more efficient and something becoming easier are not the same thing.
      In fact, an increase in efficiency often causes things to become harder or more complex to do.

    • @snark567 · 18 days ago · +8

      You can always just go for stylized visuals instead of hyper complex geometry and realism. This performance min maxing only becomes an issue of concern when your game is too detailed and complex but you still want it to run on a toaster.

    • @seeibe · 18 days ago · +13

      The problem is when newer games look and perform worse than older games while still costing the same. If that saved developer time will be passed on as price reduction to the user, sure. But that's not what happens for most games.

  • @stephaneduhamel7706 · 20 days ago · +107

    The point of Nanite was never to increase performance compared to a perfectly optimized mesh with LODs. It is made to allow devs to discard LODs for a reasonably low performance hit.

    • @doltBmB · 18 days ago · +60

      if losing 60% fps is "reasonably low" to you

    • @stephaneduhamel7706 · 18 days ago · +40

      @@doltBmB it's a lot less than that in most real-life use cases.

    • @doltBmB · 18 days ago · +25

      @@stephaneduhamel7706 Yeah it might be as low as 40%, real great

    • @Harut.V · 18 days ago · +36

      In other words, they are shifting the cost from devs to hardware (consumers, console producers)

    • @ben.pueschel · 18 days ago · +10

      @@Harut.V that's how progress works, genius.

  • @ThiagoVieira91 · 23 days ago · +329

    Acquiring the same level of rage this guy has for bad optimization, but for putting effort into my SE career, I can single handedly lead us into the the singularity. MORE! MORE! MORE!

    • @JorgetePanete · 20 days ago · +6

      the the

    • @DarkSession6208 · 19 days ago · +13

      I have 100 other topics around Unreal Engine that make me rage about misinformation, just like he rages about Nanite. I posted about the Nanite topic MULTIPLE times on the forums and Reddit to warn people how not to use it and to show how it should be done. I have been researching this topic since version 5.0.2. Nobody cared. Like I said, there are 100 other similar topics (Blueprints, general functions, movement, prediction etc.) which are simply explained wrong by Epic themselves and then accepted by users.
      If you google: does culling really not work with Nanite foliage?
      the first comment is mine. I have been posting there for almost 3 YEARS, repeating myself, and people just don't listen.

  • @TenaciousDilos · 20 days ago · +160

    I'm not disputing what is shown here, but I've had cases where Nanite in UE 5.1.1 increased framerates 4x (low 20s fps to mid 90s fps) in my projects, and the only difference was turning Nanite on. "Well, there's more to it than that!" of course there is. But Nanite took hours of optimization work and turned it into enabling Nanite on a few meshes.

    • @jcdentonunatco · 20 days ago · +54

      none of the devs ever claimed Nanite would boost performance; that was idiots in the forum who said that. The devs were very transparent about the costs and limitations of Nanite. The only case where it would be an optimization is if you were never using LODs to begin with

    • @forasago · 20 days ago · +78

      @@jcdentonunatco This is not true at all. When they showed the first demo with the pseudo Tomb Raider scene, they explicitly said (paraphrasing) that having that many high-poly meshes in a scene would be impossible without Nanite. And they definitely weren't talking about "what if you just stopped using LODs" as the thing Nanite competes with. Nanite was supposed to raise the limits on polycount in scenes, full stop. That amounts to a claim of improved performance.

    • @MrSofazocker · 19 days ago · +73

      ​@@forasago "You can now render way denser meshes" does not equal "so less dense meshes now render faster"

    • @DeltaNovum · 18 days ago · +14

      Turn off virtual shadow maps and redo your test.

    • @lokosstratos7192 · 18 days ago · +1

      @@DeltaNovum YEAH!

  • @GraveUypo · 23 days ago · +326

    This is the type of content I wish people would watch, but they don't. I'm tired of hearing that Nanite is magic, that DLSS is "better than native" and whatnot. When I made a scene in Unreal 5, it ran like trash on my PC with a 6700 XT at the time, and there was almost nothing in the scene, even at 50% render scale. It was absurdly heavy for what it was.

    • @SuperXzm · 22 days ago · +43

      Oh no! Think about shareholders! Please support corporations play pretend!

    • @chengong388 · 22 days ago · +46

      DLSS is better than native because it includes anti-aliasing and native does not…

    • @FlippingLuna · 22 days ago · +12

      Just return to forward shading, or even the mobile forward renderer. It's kind of good, and a pity that Epic optimized the mobile forward renderer well but still hasn't ported it to desktop forward

    • @Jakiyyyyy · 22 days ago · +46

      DLSS is INDEED sometimes better than native because it uses its own anti-aliasing rather than the default, poorly implemented TAA that blurs games. If DLSS makes a game sharper and better, I'd take it over forced TAA. 🤷🏻‍♀️

    • @dingickso4098 · 22 days ago · +55

      DLSS is better than native, they think, because they tend to compare it to the blurfest TAA that is sometimes force-enabled.

  • @yeah7267 · 23 days ago · +270

    Finally someone talking about this... I was sick of people claiming Nanite boosts performance when in reality I was losing frames even in the most basic scenes

    • @cheater00 · 22 days ago · +24

      just another case of tencent timmy being confidently wrong. we've all known unreal engine was way worse than quake 3 back in the day as well, it was ancient by comparison and the performance was abysmal. the fact people kept spinning a yarn that unreal engine was somehow competitive with id tech was always such a funny thing to see.

    • @HankBaxter · 22 days ago · +3

      And it seems nothing's changed.

    • @cheater00 · 22 days ago · +5

      @@randoguy7488 precisely. and in those 25 years, NOTHING has changed. this should make you think.

    • @ClowdyHowdy · 22 days ago · +41

      To be fair, I don't think Nanite is designed to boost performance in the most simple scenes at all, so anybody saying that is either oversimplifying or just wrong. Instead, it was designed to give artists and designers an easier pipeline for building complex levels without an equivalent increase in performance loss or time spent developing LODs.
      It's designed to be a development performance boost.
      If you don't need Nanite for your game, then there's no reason to use it, but I think it's weird to pretend the issue is that it doesn't do something it wasn't designed to do

    • @_Romulodovale · 22 days ago · +6

      All the recent engine features came to make developers' lives easier while affecting performance negatively. It's not a bad thing; in the future those features will be well polished and will help us developers without hurting performance. All good tech takes time to be polished.

  • @florianschmoldt8659 · 22 days ago · +84

    The current state of "next gen visuals" vs fidelity is indeed questionable. Many effects are lower-res, stochastically rendered with low sample counts, dithered, upscaled. Compromises on top of compromises, and frame generation in between. Good enough at 4K and 60fps, but if your hardware can't handle it, you'll have to live with a blurry, pixelated, smeary mess. My guess is that Nvidia is fine with it. As much as I like Lumen & Nanite in theory, I'm not willing to pay the price.
    To be fair, it isn't the worst thing to have next-gen effects available, but there is a huge disconnect between what gamers expect a game to look like based on trailers and how it feels to play at 1080p and 20fps.
    JFZfilms defines himself as a filmmaker and isn't great at communicating that he and his 4090 don't care much about realtime visuals or optimization. Tons of path tracing vs Lumen videos, and a confused game-dev audience when they accidentally learn that both went through the render queue with maxed settings.

    • @snark567 · 18 days ago · +13

      Gamers confuse realism and high fidelity visuals for good graphics. Meanwhile a lot of devs use realism and graphical advancements as a crutch because they lack the imagination to know how to make a game that looks good without these features.

    • @florianschmoldt8659 · 18 days ago · +13

      @@snark567 I'm a graphic artist myself, and as much as I love good art direction, realism definitely has its place. But I get the point; it for sure shouldn't be the only definition of next-gen visuals.
      It became much easier to gather free photoscanned models than to create your own in an interesting style. Devs rely on Nanite over optimized assets, and even games with mostly static light sources use Lumen over "good old" lightmaps, even when lightmaps could look just as good and make the difference between 120 and 30fps.
      And Nvidia is like "Here are some new problems... how about a 4090 to solve them?"

  • @chriszuko · 21 days ago · +39

    As much as I think this is a good topic and the results seem genuinely well done, the solution proposed here, spending the time and money on an AI solution for LOD creation and a much faster, more seamless optimization workflow, is something companies have been trying to build for years, so I personally don't think it's a good way to move forward. To me, it seems possible that Nanite + Lumen can evolve to become much friendlier to meshes authored in an already optimized way and rely far less on lower resolutions. I DO think they are probably pushing it a bit too early, and I also agree that DLSS, TSR, and any other upscaling/reconstruction technology is just not good to lean on. But to your point, companies continue to do so because they can say "look, everything runs faster!", so it is hard to envision a future without needing it.
    A side note: I don't think your particular attitude is warranted, and in my opinion it makes it much harder to have a constructive conversation about how to move forward. I'm not perfect in this regard either, but as this gains traction it's probably good to dial that back. In the first post of the megathread, for example, you show your results and then feel the need to say "Worse without LOD FPS LMAO!". This type of stuff is all over these threads, which to me just looks bad and makes it harder to take you and this topic seriously.

    • @wumi2419 · 19 days ago

      It ends up being a problem of "who needs to spend money", with choices being developers (on optimizing) or customers (on hardware that can run unoptimized software), and it's obvious which choice saves money for the company.
      Granted, it might result in lower sales due to higher hardware requirements, but that is a lot harder to prove than lowered dev costs.

  • @hungryhedgehog4201 · 16 days ago · +15

    So if I understand correctly: Nanite results in a performance gain if you just drop in high poly 3D sculpts without any touchups, but results in a performance loss if you put in models designed with the industry standard and use industry standard workflow regarding performance optimization?

    • @AndyE7 · 13 days ago · +6

      I swear the point of Nanite was to allow the artists and level designers to just focus on high quality assets and scenes because nanite would do the optimisation for them. It was to stop the need for them to think of optimisation, LODs, how much can the console render in a scene etc because it will handle all of that for you allowing you to just focus on the task at hand.
      In theory you could even develop with Nanite and then do proper optimisations afterwards.

    • @hungryhedgehog4201 · 13 days ago · +4

      @@AndyE7 To be fair, it seems to do that, but that means you move from the industry standard to a new approach that works with no other engine's pipeline, which then binds you to the Unreal Engine 5 ecosystem, which obviously benefits Epic. That's why they push it so hard.

    • @Noubers · 9 days ago · +5

      The industry standard is significantly more labor intensive and restrictive. It's not a conspiracy to lock in people to Epic, it's just a better work flow overall that other engines should adopt because it is better.

  • @AlexKolakowski · 17 days ago · +49

    You didn't really touch on Nanite in combination with Lumen. Part of the benefit of this workflow is that runtime GI doesn't require any light baking. The baseline cost is higher, but the dev hours saved by not having to optimize lights are worth leaving older cards behind for some devs.

    • @manoftherainshorts9075 · 7 days ago · +6

      "Why would we work if we can make players work more to afford better hardware for our game?" - Unreal developers circa UE5 release

    • @JensonTM · 7 days ago · +3

      horrible take

    • @gelisob · 4 days ago · +1

      Agreed, very horrible take. Have you not seen game development prices? Do you want more devs left jobless and projects canceled because "there's a way to make it slightly better" at 300% of the development time? I'd say get the card or accept a few fewer frames, and the project actually happens.

  • @FVMods · 4 days ago · +2

    Minimalist, quality, straight-to-the-point narration, tidy video editing with relevant information, engaging content. Great channel so far!

  • @XCanG · 20 days ago · +63

    I'm not a game developer, but I have some questions, and some personal opinions as a gamer.
    1. The main selling point I remember when Nanite was introduced was that you can make models faster by not spending time creating LODs. So for some of your example comparisons: how much time do you need to create the same model with Nanite vs. with LODs? Considering that fresh games biased toward realism require a lot of detail, I think at scale it makes a difference.
    2. Maybe it's because I'm not in this field, but I really haven't heard of anyone rendering models with billions of triangles. The closest examples were Minecraft clones rewritten in C/Rust etc. that tried to achieve large render distances, or offline scenes where one frame takes hours to render, so not realtime. I can imagine you are a senior game developer with at least 6 years of experience, but how many game devs actually know about these optimizations? I can't imagine it's more than 25%.
    3. Let's assume you can optimize better by hand: does UE5 let you handle optimization yourself? I imagine UE4 does, but what about UE5? If the answer is yes, then this is really an argument about defaults: manual optimization at the cost of your time vs. Nanite's auto-optimization with faster creation time.
    4. As a player I want to point out that many recent titles ship with bad optimization, so much so that gamers are starting to hate the studios. My personal struggle was with Cities: Skylines 2. They use Unity, where they definitely have all these optimization abilities, but somehow they released a lagging p* of s* where an air conditioner on a building that you see from flying height (far away) had 14k triangles and some pedestrians have 30k triangles. Considering they can't optimize properly, I believe that if incompetent devs like that just used Nanite, the game wouldn't be this laggy. To me it's realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only a few individuals can do properly.

    • @XCanG · 19 days ago · +1

      @@miskliy1 I see.

    • @MiniGui98 · 18 days ago · +10

      "To me it's realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only a few individuals can do properly"
      Fair point, although the "manual" technique has been used since basically the beginning of fully 3D games and was at some point mastered not by "a few individuals" but by the whole industry. Throwing everything into the Nanite pipeline just because it's simpler and faster is a false excuse once you know that the manual, traditional techniques will get you better performance with little to no difference in visual fidelity. Even better, the extra performance you free up with manual LODs lets you minimize the need for FSR or DLSS, both of which indisputably degrade image fidelity as well.
      Performance tests with FSR/DLSS enabled are basically a lie about the raw performance of the game. Native resolution with more traditional anti-aliasing techniques should still be the norm, as it always has been.
      Big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that.

    • @SioxerNikita · 18 days ago · +18

      @@MiniGui98 No, it's never been "the whole industry" at any point.
      Frankly, good optimizers have been few and far between since the first days of optimization in general (not just 3D). Beyond that, optimizations are not something you just apply every time; if they were, everything would already use them. An optimization that works in one product might not work in another, and might sometimes make things slower. Optimizations are not catch-all solutions, and each project needs different ones, so there is no single "manual" technique that has been "mastered".
      Beyond that, you have a different problem. Optimizations can only really be applied when the product is mostly done and the rendering pipeline is finished; if the pipeline keeps being developed until the end, optimization work might be wasted, or worse, create a buggy mess, because the pipeline changes things that don't interact well with the optimizations.
      And then there's the most important factor: time. Development time. If an automated pipeline is, for example, 30% better than doing no manual optimization at all, then the time you don't spend on manual optimization can be spent on optimizations and improvements elsewhere. Take LOD models: those take time to make, sometimes a SIGNIFICANT amount of time. You create a model, then a lower-poly model, then an even lower-poly one, and if you ever need to change the model, you have to redo the lower-poly versions. That is a LOT of time. Manual LODs may perform better, but if you can offload that work to, say, Nanite and get acceptable performance, you can focus on optimizing the main model instead.
      Performance tests with FSR/DLSS are a lie? If the game is intended to be run with FSR/DLSS, then that is exactly the performance test you should do.
      Frankly, focusing only on manual optimization being better and on NONE of the other factors shows me you don't know much about optimization or game development... as does saying "games relying on super-sampling are signs of a badly optimized game"...
      Do you even know what super-sampling is? Relying on super-sampling is the opposite of optimizing, because it is one of the most computationally expensive features: it renders a LARGER image than you need. You use it either because you've optimized heavily and can afford it, or as an option for people with killer setups. Super-sampling is not an optimization technique at all.
      DLSS is kind of the opposite of super-sampling. It uses deep learning (or, as we'd call it today, "AI") to infer a lot of information, getting some of the effect of super-sampling without actually super-sampling. So saying "Big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that" means you basically have no clue what you are talking about...
      You start out saying "fair point" and then proceed to show you don't understand the point.
      And relying completely on a single automated optimization technique is always bad, and not just for "big games", but almost no game does that. Optimization is hard, very hard; if it weren't, you wouldn't see badly optimized games.

    • @Megalomaniakaal · 18 days ago · +4

      @@MiniGui98 The majority of games out there were rather unoptimized messes that Nvidia and AMD optimized at the "game ready" driver level, back in the DX11 and older days.

    • @Jiffy360 · 18 days ago · +2

      About that final point about CS:2, that was exactly their plan. They were going to use Unity’s competitor to Nanite, but then Unity delayed or cancelled it, so they were stuck creating a brand new LOD system from scratch at the last moment.

  • @nazar7368 · 22 days ago · +114

    Epic's marketers killed the fifth version of the engine. Back in 2018 there were no video cards that supported mesh shaders. Epic took advantage of this and added the illusion of support for old video cards: Lumen and Nanite ran on old cards, but only as software implementations, which cannot match hardware mesh shaders and ray tracing. This led to real problems in the engine core, namely that DX12 and Vulkan do not work correctly and have low efficiency due to old code written for DX11. And I'm not even counting the problems with the engine's layered rendering and compositing algorithms, because everyone already sees the eternal blurring of the picture and the annoying sharpening.
    This will not be fixed until they add hardware support for mesh shaders and the new generation of ray tracing (added literally by the DXR 1.2 library). For example, Nvidia holds many conferences showing their improved ReSTIR algorithms in Unreal Engine 5 and gives access to them, but the developers are firmly set in their ways and continue to feed the gaming industry another Anthem.

    • @JoseDiaz-he1nr · 22 days ago · +4

      idk man Black Myth Wukong seems like a huge success and it was made in UE5

    • @manmansgotmans · 22 days ago · +11

      The push to mesh shaders could have been nvidia's work, as their 2018 gpus supported mesh shaders while amd's gpus did not. And nvidia is known for putting money where it hurts their competitor

    • @Ronaldo-se3ff · 22 days ago · +32

      @@JoseDiaz-he1nr its performance is horrendous, and they reimplemented a lot of tech in-house just so it could run at all.

    • @topraktunca1829 · 22 days ago · +20

      @@JoseDiaz-he1nr yeah, they did a lot of hard internal optimization and in the end it runs at an "ehh, well enough I guess" level. Not to mention the companies that didn't bother with such optimizations, like Immortals of Aveum or Remnant 2. Those games are unplayable on anything but a 4090 or 4080, and even then they still have problems.

    • @nazar7368 · 22 days ago · +11

      @@JoseDiaz-he1nr A really great success. With 2010-era visuals, 30fps on a 4090, and a blurry picture.

  • @rimuruslimeball · 22 days ago · +42

    These videos are amazing but to be honest, a lot of it flies over my head. What should we, as developers (especially indie), do to ensure we're doing good optimization practices? Alot of what your videos discuss seem to require an enormous amount of deep-level technical understanding of GPUs that I don't think many of us can realistically obtain. I'm very interested but not very sure where to start nor where to go from there. I'm sure I'm not the only one.

    • @cadmanfox6874 · 22 days ago · +4

      I think it is worth learning how it all works, there are lots of free resources you can use

    • @marcelenderle4904 · 21 days ago · +3

      As an indie developer I feel it's very important to know the problems and limitations of these techs and the concepts behind good practices. That doesn't necessarily mean you have to apply them. Nanite, Lumen, DLSS etc. can be very efficient as a cheap solution. If it speeds up your game a lot and gets you to the result you want, for me at least, that's what you should aim for. These critiques of Unreal are great for studios and the industry itself.

    • @Vysair · 20 days ago · +2

      I have a diploma in IT, which is really just CS, and I don't have a clue what this guy is talking about either

    • @JorgetePanete · 20 days ago

      A lot*

    • @Vadymaus · 20 days ago · +1

      @@anonymousalexander6005. Bullshit. This is just basic graphical rendering terminology.

  • @GonziHere · 22 days ago · +87

    Interesting video, but isn't Nanite using its own software rasterizer for small triangles exactly because of that issue? I'm pretty sure they said so when they started talking about it; it's in their tech talks, etc.
    This test feels somewhat fishy to me. Why not compare the normal scene instead of a captured frame, for example? That skips part of the work in one case while using the full pipeline in the other...

    • @AdamKiraly_3d · 21 days ago · +49

      I would love to get my hands on that test scene to give it a spin. I've been working with Nanite in AA and AAA settings for almost 3 years now, and while it was an absolute pain to figure out the quirks and the correct workflows, it has been an overall positive for production.
      In my experience Nanite REALLY struggles with anything using masked materials or any sort of Pixel Depth edits.
      I've also seen issues with perf when most meshes were "low poly" in the sense that the triangles are very large on screen, I vaguely remember the siggraph talk where they mention that larger triangles can rasterise slower because it's using a different raster path
      Nanite handling its own instancing and draws moves a lot of the cost off the CPU onto the GPU, so the base cost being higher should surprise no one. It is also very, VERY resolution dependent: the higher you go in res, the steeper the cost of Nanite (and VSM, Lumen) climbs, not to mention the general cost increase of a larger res. I've grown to accept that the only way forward with these new bits of tech is upscaling. I'm not happy about that as a dev, but as a gamer I couldn't care less and use it everywhere.
      VSM has similar issues, since for shadows you effectively re-render the nanite scene, but there are ways to optimise that with CVars, and generally it's been better performing than traditional shadow maps would when used with nanite - there's a caveat there, since if you bring foliage into the mix then nanite can get silly expensive both in view and in shadows.
      Collision memory has also been a major concern since for complex collision UE uses the Nanite fallback mesh by default, so in a completely Nanite scene you can end up with more collision mem than static mesh memory
      I also feel like having a go at Epic for not maintaining "legacy" feature compatibility indefinitely is a bit unfair. Both VSM and Lumen rely on Nanite to render efficiently and were created as a package. Epic decided that is the direction they want to take the engine, an engine that is as complex as it's large, it is almost expected to lose some things along the way. That being said I have run into many things that I wish they cared enough about to fix (static lighting for example has a ton of issues that will never be fixed cause "just use dynamic lighting"), but at the same time I won't start having a go at them for not supporting my very specific usecase that is not following their guidance.
      No part of this tech is perfect and I get the frustration, but it did unlock a level of quality for everyone to use that was only really available in proprietary engines before.
      Also, do we really think CDPR would drop their own insane engine for shits and giggles if they didn't think it was a better investment to switch to UE5 than to upgrade their own tech?
      Same with many other AAA companies, and you can bet your ass they took their sweet time evaluating whether it makes sense (not to mention they will inevitably contribute a TON to the engine that will make it back to main eventually)
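      On the "ways to optimise that with CVars" point above, a hedged sketch (cvar names are from UE 5.x-era builds and may change between versions; verify in your own build):

        // Sketch: tune Virtual Shadow Map cost from code via the console manager.
        #include "HAL/IConsoleManager.h"

        void TuneVirtualShadowMaps()
        {
            // Keep VSM physical pages cached across frames instead of re-rendering them.
            if (IConsoleVariable* Cache =
                    IConsoleManager::Get().FindConsoleVariable(TEXT("r.Shadow.Virtual.Cache")))
            {
                Cache->Set(1);
            }
            // Positive bias lowers directional shadow page resolution: cheaper pages.
            if (IConsoleVariable* Bias = IConsoleManager::Get().FindConsoleVariable(
                    TEXT("r.Shadow.Virtual.ResolutionLodBiasDirectional")))
            {
                Bias->Set(0.5f);
            }
        }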

    • @ThreatInteractive · 19 days ago · +11

      re: Why not compare the normal scene, instead of captured frame, for example?
      You can compare yourself because we already analyzed the "normal" scene, same hardware, same resolution: czcams.com/video/Te9xUNuR-U0/video.html
      Watch the rest of our videos as everything we speak about is interconnected.

    • @SioxerNikita · 18 days ago · +15

      @@AdamKiraly_3d A thing a lot of non-devs forget is time...
      Time is the ultimate decider on quality: the more time spent on optimizing and making things better, the less time is spent on the game. CDPR is dropping their own engine because at this point, upgrades that make it more performant or higher fidelity take exponentially longer with each feature, and they don't have one "specific" kind of game the engine needs to serve. If they only developed FPS games, continuously upgrading a single engine might make sense, but they want to make more than FPS games.
      Every second spent upgrading the engine or doing tedious graphical optimizations on models is time that could have been spent fixing physics bugs, weird interactions, broken map geometry, and other things that in the end make the game feel a lot better... and they can also focus on larger-scale optimizations that might increase performance more overall, because you don't have to do 2-5 LOD models per model; you can instead focus on getting the poly count of the base model significantly down and increasing the perceived fidelity...
      The days of hyper-optimizing games are over. Hardware is becoming ever more complex, and it is becoming infeasible to talk to hardware directly, as there is SOOOO much different hardware to account for. Games are becoming prettier, and even the gamers saying "graphics don't matter" still won't buy a game that looks like a PS1 game, because it gives them a feeling of a shoddy game. Increasing poly counts, increasing player expectations, and an ever more competitive market... yeah, devs need tools to automate a lot of the process.

    • @satibel · 18 days ago · +1

      ​@@SioxerNikita imo graphics don't matter is more of a "yeah realistic graphics are neat, but consistent graphics are better"
      I'll take cartoony graphics like Kirby's epic yarn over battlefield graphics. yes realism looks good, but stylized does too, and a well stylized game can both be way more efficient and age way better, while still looking as good or better than wannabe photorealistic graphics.
      If we're talking about ps1, a game like vib ribbon still looks good nowadays, and for a more well known example, crash bandicoot.
      They look miles better than asset-flip-looking franchise games that have realistic graphics but aren't that cohesive and have a meh game underneath.

    • @lycanthoss · 17 days ago · +3

      ​@@satibelRealistic graphics clearly sell. Just look at Black Myth Wukong.

  • @Gurem · 17 days ago · +7

    I remember Epic saying this would not replace traditional methods but should be used in tandem with them, as it is a way to increase productivity. Tbh this video taught me more about optimization than any optimization video, and it didn't waste my time. As an indie it did more to reinforce my desire to use Nanite, while also teaching me more hands-on techniques that, while requiring more work, may result in better performance, which I can use when I have the free time. Thank you for demystifying the BS; I really couldn't understand the tech from those other YT videos, as they were purely surface-level, quickly churned-out content.

  • @cenkercanbulut3069 · 12 days ago · +4

    Thanks for the video! I appreciate the effort you put into comparing Nanite with traditional optimization methods. However, the full potential of Nanite might not be fully apparent in a test with just a few meshes. Nanite shines when dealing with large-scale environments that have millions of polygons, where it can dynamically optimize the scene in real time. The true strength of Nanite is its ability to manage massive amounts of detail efficiently, which might be less visible in smaller, controlled setups. It would be interesting to see how both approaches perform in a more complex scene with more assets, where Nanite’s real-time optimization could show its advantages. Looking forward to more in-depth comparisons in the future!

  • @GugureSux · 15 days ago · +10

    This explains both why I generally despise the look of modern UE games (so much noise and blur) and why UE5 games especially run like ass, even on decent hardware.
    And since so many devs seem to use lazy upscalers as their "optimization" trick, things only get worse visually.

    • @kanta32100 · 12 days ago · +2

      No visible LOD pop-in is nice though.

    • @bennyboiii1196 · 12 days ago · +2

      The blurriness is due to TAA tbf, which is a scourge. This is why I use Godot lmao

    • @ThreatInteractive · 10 days ago · +1

      @@kanta32100 Not true; you will still get pop-in, and LODs have had various ways of reducing pop visibility (just not in UE5) that don't depend on flawed TAA.

    • @PabloB888 · 9 days ago

      @@bennyboiii1196 The TAA image looks blurry, but a good sharpening mask can help a lot. I use ReShade CAS and Luma filters. I had been using this method for the last couple of years on my old GTX 1080. Now, however, I bought an RTX 4080S and can get much better image quality by using DLSS Balanced and DLDSR 2.25x (80% smoothness) at the same time. I get no performance penalty, and the image quality destroys native TAA. What's more, if the DLSS implementation is very good, I can use DLSS Ultra Performance and still get a sharper image than native TAA, and the framerate is 2x better at that point.

    • @cdmpants
      @cdmpants Před 8 dny

      @@bennyboiii1196 You use godot because you don't want to use TAA? You realize that TAA is optional?

  • @piroman665
    @piroman665 Před 22 dny +38

    It's normal that a new generation of rendering techniques introduces large overheads. It's a good tradeoff as it streamlines development and enables more dynamic games. The major issue is that developers ignore good practices and then blame the software; part of that is also Epic's fault, as they try to sell it as a magic solution for unlimited triangles, which it is not. Nanite might be slower, but it enables scenarios that would be impossible or very hard to achieve with static lighting. Sure, you can render dense meshes with traditional methods, but imagine lightmapping them or making large levels with realistic lighting and dynamic scenarios.

    • @SydGhosh
      @SydGhosh Před 20 dny +13

      Yeah... In terms of all this dude's videos, I find myself technically agreeing, but I don't think he sees the big picture.

    • @thegreendude2086
      @thegreendude2086 Před 20 dny +5

      @@piroman665 I believe Unreal was made to be somewhat artist-friendly: systems you can work with even if you don't have a deep technical understanding. Hitting the "enable Nanite" checkbox so you have to worry less about polycount seems to fit that idea.

    • @lau6438
      @lau6438 Před 16 dny +2

      ​@@SydGhosh The bigger picture being deprecating traditional LODs that perform better, to implement a half-baked solution? Wow, what a nice picture.

    • @DagobertX2
      @DagobertX2 Před 12 dny +2

      @@lau6438 They will make it better with time, just like back in the game-dev stone age they made LODs perform better. Imagine, there was a time when you had to optimize triangle strips for best performance 💀

  • @RiasatSalminSami
    @RiasatSalminSami Před 22 dny +187

    Can't ever take Epic seriously when they can't even be arsed to prioritize shader compilation stutter problems.

    • @USP45Master
      @USP45Master Před 22 dny +28

      Write the shader compiler yourself! It only takes about 30 minutes... put it in a blank level that compiles asynchronously while showing a loading screen :)
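
      A minimal generic sketch of that idea (plain C++, not UE API; `compileShader` and the pipeline list are hypothetical stand-ins): warm the pipelines on a worker thread while the main loop keeps presenting a loading screen.

      ```cpp
      // Hypothetical shader warm-up behind a loading screen; names are stand-ins.
      #include <atomic>
      #include <chrono>
      #include <cstdio>
      #include <thread>
      #include <vector>

      std::atomic<size_t> g_compiled{0};

      void compileShader(int /*psoId*/) { /* stand-in for the real compile call */ }

      int main() {
          std::vector<int> pipelines(500);  // every pipeline state the game can use
          std::thread worker([&] {
              for (int pso : pipelines) {   // compile off the game thread
                  compileShader(pso);
                  g_compiled.fetch_add(1, std::memory_order_relaxed);
              }
          });
          while (g_compiled.load(std::memory_order_relaxed) < pipelines.size()) {
              std::printf("\rCompiling shaders... %zu/%zu",
                          g_compiled.load(std::memory_order_relaxed), pipelines.size());
              std::this_thread::sleep_for(std::chrono::milliseconds(16));  // "render" a frame
          }
          worker.join();
          std::printf("\nDone; the level can start with no first-use hitches.\n");
      }
      ```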

    • @user-ic5nv8lj9d
      @user-ic5nv8lj9d Před 22 dny +2

      maybe provide a solution?

    • @bricaaron3978
      @bricaaron3978 Před 22 dny +6

      Unreal Engine has been a console engine for a long time.

    • @randomcommenter10_
      @randomcommenter10_ Před 22 dny +24

      What's interesting is that UE4 and UE5 actually have a built-in setting in the ini files to precompile shaders, called "r.CreateShadersOnLoad", but the weird thing is that it's set to False by default when it should be True. What's even weirder is that UE3 has a different ini setting called "bInitializeShadersOnDemand", which is False by default, meaning all shaders are precompiled before the game starts. I have no idea why Epic didn't enable shader precompilation by default in UE4 and 5, but at least the setting is there, and more devs should know about it and turn it on for their games.
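
      A minimal sketch of flipping that cvar from game code instead of the ini (UE5 C++; assumes the cvar name quoted above exists with these semantics in your engine version, so verify before relying on it):

      ```cpp
      // Sketch: request shader creation at load time via the cvar named above.
      // Assumption: "r.CreateShadersOnLoad" exists in your UE version.
      #include "HAL/IConsoleManager.h"

      void EnableShaderPrecompilation()
      {
          if (IConsoleVariable* CVar =
                  IConsoleManager::Get().FindConsoleVariable(TEXT("r.CreateShadersOnLoad")))
          {
              CVar->Set(1); // default is false/0 per the comment above
          }
      }
      ```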

    • @RiasatSalminSami
      @RiasatSalminSami Před 22 dny +7

      @@user-ic5nv8lj9d why do I have to provide a solution? If devs need to have their own solution for such a game breaking problem, they might as well make their own engine.

  • @salatwurzel-4388
    @salatwurzel-4388 Před 16 dny +4

    Remember the times when {new technology} was very resource-intensive and dropped your fps, but after a relatively short time it was fine because the hardware got 3x faster in that time?
    Good times :D

  • @PanzerschrekCN
    @PanzerschrekCN Před 21 dnem +81

    The whole point of Nanite's existence is not to make games faster, but to reduce content creation costs. With Nanite it's no longer necessary to spend time creating LODs.

    • @alex15095
      @alex15095 Před 20 dny +20

      Exactly this, it makes things a lot easier if you can just mash an unoptimized 3d scan and a bunch of 150k poly models to make a scene and just make it work. As we know with electron apps, sometimes it's not about what's most efficient but rather what's easier for developers. An AI solution is unlikely as we've not found a good architecture/representation that effectively combines mesh shape, topology, normals, UVs, and textures, it's a much more complicated problem than just image generation

    • @chiboreache
      @chiboreache Před 20 dny

      @@alex15095 you can make a synthetic dataset by using the Blender Sverchok addon and modeling everything procedurally

    • @dbp_pc3500
      @dbp_pc3500 Před 20 dny +9

      Dude, LODs are generated automatically by many tools. It's not time-consuming at all.

    • @Frisbie147
      @Frisbie147 Před 20 dny

      @@dbp_pc3500 and they look like shit a lot of the time; Nanite has better quality

    • @antiRuka
      @antiRuka Před 20 dny +6

      Generate and save LODs for a couple of 100k-poly meshes, please

  • @QuakeProBro
    @QuakeProBro Před 23 dny +59

    Great video, you've talked about many things that really bother me when working with UE5. Extreme ghosting, noise, flickering, a very blurry image, and, compared to Unreal Engine 4, much much worse performance with practically empty scenes (sometimes up to an 80 fps difference on my 2080 Ti). All this fancy tech, while great for cinematics and film, introduces so many unnecessary problems for games, and Epic seems to simply not care.
    If they really want us to focus on art instead of optimizing, give us next-gen-worthy, automated optimization tools instead of upscalers and denoisers that destroy the image for a "better" experience. This is only battling the symptoms.
    And don't get me wrong, I find Lumen and Nanite fascinating, but they just don't keep what was promised (yet).
    Thanks for talking about this!

    • @keatonwastaken
      @keatonwastaken Před 19 dny +1

      UE4 is still used and is capable; UE5 is more for people who want fancier looks early on.

  • @pchris
    @pchris Před 22 dny +11

    I think easy, automatic optimizations that are less effective than manual ones still offer some value. The fewer resources a studio has to dedicate to technical things like these, the faster it can make games, even if they look slightly worse than they could if it wanted to take absolutely full advantage of the hardware.
    Every other app on your phone being over 100 MB for what is basically a glorified web page shows how dirty-but-easy optimization and faster hardware mostly just enable faster, cheaper development by letting developers be a little sloppy.

    • @Frisbie147
      @Frisbie147 Před 20 dny +1

      the thing is, Nanite looks better unless a lot of artist time is spent making good-looking LODs; auto-LOD has an excessive amount of weird issues. Maybe they're alright for this dude, but for me it's not good enough at all

    • @theultimateevil3430
      @theultimateevil3430 Před 15 dny

      it's great in theory, but in practice development is still expensive as hell and the quality of the products is absolute trash. It's the reason we have a volume control in Windows lagging for a whole second before opening up. The same stuff that worked fine on Windows 95 lags now. Dumbasses with cheap technology still make bad products for the same price.

    • @pchris
      @pchris Před 15 dny +2

      @@theultimateevil3430 when you're looking at large products made by massive publicly traded corporations, you should never expect any cost savings to get passed on to the consumer.
      I'm mostly talking about indies. The cheaper and easier it is to make something, the lower the bar of entry is, and the more you'll see small groups stepping in and competing with the massive, selfish corps.

    • @Frisbie147
      @Frisbie147 Před 15 dny

      @@theultimateevil3430 I think something might be wrong with your Windows install, unless you're talking about the animation effects, which you can disable if you want, btw

  • @eslorex
    @eslorex Před 22 dny +35

    You should definitely make a general optimization guide for UE5 as soon as possible. I was barely able to find valid information about optimization in UE5 or how optimized Nanite is in different scenarios. I'm so glad I found someone finally explaining it with reasons.

  • @imphonic
    @imphonic Před 22 dny +39

    This kind of thing has been bugging me for such a long time and it's good to know that I'm not completely insane. I'm currently using Unreal Engine 5 to prototype my game, using it as a temporary game engine (like how Capcom used Unity to build RE7 before the RE Engine was finished). My game will be finished on a custom game engine, which I will open-source when I'm finished. I don't want my debut to be ruined by awful performance and questionable graphics quality. I currently target 120 FPS on PS5/XSX, not sure what resolution yet, but all I know is that we're in 2024 and a smeared & jittery 30 FPS is simply unacceptable.
    I'm not trying to compete with Unreal/Unity/Godot, but I am interested in implementing a lot of old-school techniques which were very effective without destroying performance, while also exploring automated optimization tools rather than pushing the load onto the customers that make my career possible. The neural network LOD system is intriguing, and it might not be perfect, but it might still be a net improvement, so I'll keep that one in mind.

    • @4.0.4
      @4.0.4 Před 20 dny +8

      You're either a genius or insane. Much luck and caffeine to you regardless (we need more rule breakers).

    • @MrSofazocker
      @MrSofazocker Před 19 dny +4

      Starting a game project in UE5... good luck with that.
      They even tell you to stay on UE 4.26.
      Seeing you playing more Minecraft in your recent video uploads, I call cap.
      If you just want an editor, you can still use Unreal Engine without the engine part and write your own render pipeline.
      There's no point in rewriting your own graphical editor, asset system, sound system, tools, and packaging for PS and XSX... especially getting it all to work.
      I imagine you don't even have a dev console for any of those... so good luck getting one as an indie dev.

  • @luminox1806
    @luminox1806 Před 16 dny +14

    It's always been my belief that this whole UE5 performance stuff is a backstage deal between Epic and NVIDIA to make people keep buying the latest and most expensive GPUs

    • @noyb77711
      @noyb77711 Před 15 dny +4

      Straight up

    • @sheeftz
      @sheeftz Před 8 dny

      It's not a big deal to believe that if you have no clue what's actually going on. Nanite is the least of Unreal Engine's problems.

  • @ThePlayerOfGames
    @ThePlayerOfGames Před 18 dny +16

    I have been confused as to why Epic has Nanite turned up so high. In all the demos it turns models into white noise made of triangles, when in reality players will put up with a LOT fewer triangles before they notice them.
    Is there no way to tell Nanite to chill out and stop restoring or adding triangles past a certain point?
    This hand-waving of "AI will save us" isn't going to work; you could have a simple algorithm that removes triangles by simplifying flatter areas while preserving areas with a little curvature. This has to exist already.
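
    There is a knob for this, at least in current engine builds: Nanite's density target is exposed as a scalability cvar. A minimal sketch (UE5 C++; assumes the commonly cited `r.Nanite.MaxPixelsPerEdge` cvar exists in your engine version with these semantics, so verify against your build):

    ```cpp
    // Sketch: relax Nanite's triangle-density target so it stops refining so aggressively.
    // Assumption: r.Nanite.MaxPixelsPerEdge exists in your UE5 build; larger values
    // mean coarser clusters are selected, i.e. fewer rasterized triangles.
    #include "HAL/IConsoleManager.h"

    void RelaxNaniteDensity(float PixelsPerEdge) // e.g. 2.0f instead of the default 1.0f
    {
        if (IConsoleVariable* CVar = IConsoleManager::Get()
                .FindConsoleVariable(TEXT("r.Nanite.MaxPixelsPerEdge")))
        {
            CVar->Set(PixelsPerEdge);
        }
    }
    ```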

    • @doltBmB
      @doltBmB Před 18 dny +4

      It already exists in every 3D software, but it's not even the optimal way; there are plenty of more sophisticated algorithms in dedicated LOD software that prioritize perceptual quality.

  • @Lil.Yahmeaner
    @Lil.Yahmeaner Před 22 dny +54

    This is exactly how I’ve felt about graphics for years now. Especially at 1080p, all the ghosting, dithering, and shimmering of UE5 gets unbearable at times and everyone is using this engine. It’s like you have to play at 4k to mitigate inherent flaws of the engine but that’s so demanding you have to scale it back down which makes no sense. Especially bad when you’re trying to play at 165hz and most developers are still aiming at barely 30-60fps, now exacerbated by dlss/framegen.
    Just like all AI, garbage data in, garbage data out, games are too unique and unpredictable to be creating 30+ frames out of thin air.
    Love the videos, very informative and well spoken. Keep up the good fight!

    • @nikch1
      @nikch1 Před 20 dny +5

      > "all the ghosting, dithering, and shimmering"
      Instant uninstall from me. Bad experience.

    • @Khazar321
      @Khazar321 Před 20 dny +5

      For years now? Yeah I don't think that's UE5 mate. Maybe stop hopping on the misinformation train here...

    • @s1ndrome117
      @s1ndrome117 Před 19 dny +2

      @@Khazar321 you'd understand if you ever use unreal for yourself

    • @MrSofazocker
      @MrSofazocker Před 19 dny +3

      The notion that these defaults cannot be changed, and that it's somehow a systemic issue of Unreal, is insane to me.
      If you just use something you don't even understand, leave everything on default, and press a "make game" button, what kind of optimization do you expect?

    • @Khazar321
      @Khazar321 Před 18 dny +2

      @@s1ndrome117 I did, and I have also seen the train wrecks that lazy devs caused with UE4 and engines from around the same time.
      Horrible stutters, bad AA options, blurry and grey (unless you fix it yourself with HDR/ReShade), shader issues, etc.
      So yeah, tell me how horrible UE5 is and what lazy devs can do wrong with it. I have seen it all in 30 years of gaming.

  • @sorakagami
    @sorakagami Před 11 dny +3

    Thanks for doing an in-depth analysis of Nanite vs LODs.
    As a self-taught solo UE game dev, Nanite has allowed me to focus on making my game without needing to optimise LODs.
    Personally, I find Nanite a fantastic tool/feature, and Epic has been doing an awesome job making everything more efficient and easier to use over time.
    Nanite also allows us devs to use assets like those from MegaScans without some form of decimation done in advance. Just drag and drop.
    As you demonstrated in your video, LOD optimisations will still be more performant, sometimes by quite a bit.
    However, that requires extra time and know-how to implement and optimise for; time solo devs and small teams can put towards getting a game out faster or towards features and bug fixes.
    Your analysis shows that devs with the expertise and experience can make a huge difference to game optimisation.
    Keep up the great work.

  • @dorbie
    @dorbie Před 14 dny +2

    Cool take. It's worth pointing out that altering the micropolygon size threshold can have a dramatic impact on SIMD (overdraw) efficiency here. There was an industry hardware transition over a decade ago from SIMD vector fragment component to SIMD vector fragment quad. This was driven mostly by the easy derivatives, especially as it pertains to dependent read and computed parameter derivatives, but also because it's an easy vec4 even when you have scalar shader math or vec2 & vec3 math. Along with it came the need for program branching & edge execution masks you are discussing as overdraw inefficiency etc. The derivatives usage would seem to make this irreversible, however perhaps there could be some MIMD innovation or a return to component SIMD when derivatives are eschewed by shader code, this could still allow derivatives from attribute rasterization to be directly calculated. A SIMD model that can switch between fragment quad SIMD and fragment component SIMD up to the point in a shader where SIMD derivatives are needed but then execution masks become inefficient might be a worthwhile hardware level innovation.
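
    To make the quad-granularity cost concrete, a back-of-envelope sketch (standalone C++; assumes worst-case quad alignment of an axis-aligned pixel footprint, which overstates the average case):

    ```cpp
    // Why tiny triangles waste SIMD lanes: GPUs shade in 2x2 quads so screen-space
    // derivatives are available, and every touched quad runs all four lanes.
    #include <cstdio>

    void quadCost(int w, int h) {                // w x h pixel footprint
        int pixels = w * h;
        int quadsX = (w + 2) / 2;                // worst-case straddling of quad columns
        int quadsY = (h + 2) / 2;
        int lanes  = quadsX * quadsY * 4;        // lanes actually executed
        std::printf("%dx%d px -> %d lanes (%.0f%% helper-lane waste)\n",
                    w, h, lanes, 100.0 * (lanes - pixels) / lanes);
    }

    int main() {
        quadCost(1, 1);   // pixel-sized triangle: 4 lanes for 1 pixel, 75% waste
        quadCost(8, 8);   // a bigger footprint amortizes the quad overhead
    }
    ```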

  • @ProjectFight
    @ProjectFight Před 20 dny +3

    Okay... a few things. This video was way faster than I could follow, but it was sooo interesting. I love seeing the technical side of how games are properly optimized, what counts and what doesn't. And I will ALWAYS support those who are willing to go the extra mile to properly research these things. Sooo, new sub :)

    • @noyb77711
      @noyb77711 Před 15 dny +1

      XD I know Kevin in real life; in real life he speaks way faster! He's chill in his videos. He is brilliant, probably a bit on the spectrum (in a good way). He is more passionate and even more intense in real life. I have read some of his screenplay for his upcoming game; it is terrifying and edge-of-your-seat exciting!

  • @DipJyotiDeka
    @DipJyotiDeka Před 2 dny +1

    The problem with such an endeavour is that GPU companies want worse performance so that you keep buying their latest and greatest.

  • @hoyteternal
    @hoyteternal Před 16 dny +7

    Nanite is an implementation of a rendering technique called the visibility buffer, which was specifically created to overcome quad-utilization issues. Once triangle density approaches a single triangle per pixel, the better quad utilization of visibility-buffer (Nanite) rendering greatly outweighs the additional cost of interpolating vertex attributes and analytically calculating partial derivatives. You can search for an article called "Visibility Buffer Rendering with Material Graphs"; it's a good read on the Filmic Worlds website, with a lot of testing and illustrations.

    • @ThreatInteractive
      @ThreatInteractive  Před 16 dny +5

      In the original paper on visibility buffers, the main focus was on bandwidth-related performance. Visibility buffers might not be completely out of consideration. We've spoken with some graphics programmers who have stated their implementation can speed up opaque objects, but we are still in the process of exploring the options here.
      While Nanite is a solution to real issues, it's a poor solution regardless, because the cons outweigh the pros.
      We've seen the paper you mentioned, and we also showed other papers by Filmic Worlds in our first video (which discussed more issues with Nanite).
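
      For readers new to the term, a minimal sketch of the visibility-buffer idea discussed above (standalone C++; the 25/7 bit split mirrors a 128-triangle cluster layout but is illustrative only, not engine code):

      ```cpp
      // During rasterization only one packed ID is written per pixel; materials are
      // shaded later in screen-space passes, sidestepping per-triangle quad waste.
      #include <cassert>
      #include <cstdint>

      constexpr uint32_t packVisibility(uint32_t clusterId, uint32_t triIndex) {
          return (clusterId << 7) | (triIndex & 0x7Fu);  // 25 bits cluster, 7 bits triangle
      }
      constexpr uint32_t clusterOf(uint32_t v)  { return v >> 7; }
      constexpr uint32_t triangleOf(uint32_t v) { return v & 0x7Fu; }

      int main() {
          uint32_t v = packVisibility(123456u, 87u);
          assert(clusterOf(v) == 123456u && triangleOf(v) == 87u);
          // A deferred material pass reads v, fetches the triangle's vertices, and
          // reconstructs barycentrics and derivatives analytically.
      }
      ```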

  • @Billy-bc8pk
    @Billy-bc8pk Před 11 dny +2

    Man, I'm so glad I came across this video. I've been talking about the downsides of Nanite since it was touted in the tech demos and fake scenes set up to sell the snake oil to developers. Everyone goes on and on about Nanite... IF they have never used it to optimise their game. Most games only use it in linear, closed-in scenarios because of the optimisation problems pointed out in the video, and even then the games often run like crap without looking all that much better than games that came out ten years ago. The micro-stuttering and load delays due to buffering and poor optimisation are so apparent these days, but everyone keeps ignoring it and just pushing forward in what we now consider to be the era of corposlop.

  • @unrealcreation07
    @unrealcreation07 Před 13 dny +5

    10-year Unreal developer here. I've never really been into deep low-level rendering stuff, but I just discovered your channel, and I'm glad your videos answered some long-standing questions I had, like "why is this always ugly and/or blurry, no matter the options I select?? (or I drop to 20 fps)".
    With time, I have developed a kind of sixth sense about which checkbox will visually destroy my game, or which option I should uncheck to "fix" some horrible glitches... but I still don't know why most of the time. In many cases, I just resign myself and have to choose which scenario I prefer being ugly, as I can never get a nice result in all situations. And it's kind of frustrating to have to constantly choose between very imperfect solutions or workarounds. I really hope you'll make standards change!

  • @mohammedelwardighi9426
    @mohammedelwardighi9426 Před 20 dny +3

    It's great to see someone who cares about optimization and performance, as not many people prioritize these aspects.

  • @maixyt
    @maixyt Před 18 dny +3

    Great video! Thanks for sharing this information and making it easily understandable. TAA and forced anti-aliasing are also a huge problem that gamers haven't had much of a say in. I personally hate any AA method when it's forced; removing options from users just makes everything less accessible and generally worse quality, as people can't customise the experience to their needs or hardware.

  • @p0146
    @p0146 Před 14 dny +2

    You're right about many topics, but it's exciting to see advanced technology in use, and today it's standard for many indie developers and small studios to use tools like Unreal Engine. It simplifies their work, and since clients often have the budget and believe in the 'magic' of Unreal, it saves us artists from needing to heavily optimize. Let others who aren't as experienced create content their way; it's actually beneficial for us. When I freelanced, I encountered many wealthy clients who were new to game development. Having worked for numerous AAA game studios, it was amusing to see those small, inexperienced teams focus on Unreal and all the latest fancy tech. Ultimately, this approach has been profitable for my career.

  • @hoyteternal
    @hoyteternal Před 20 dny +3

    Nanite is specifically made to render non-optimized scenes with ridiculous polycounts and microgeometry, where each pixel might render a separate triangle, and models that don't have LODs. It has a huge initial overhead but way higher scalability in that scenario. Nanite isn't actually a unique technology; it is an implementation of a technique called the visibility buffer.

  • @exciting-burp
    @exciting-burp Před 9 dny +1

    I was a hobbyist game engine developer in a previous life. Rare is the video/article that I can't find a [frustrating] major technical flaw in. I couldn't even nitpick this video, kudos.

  • @ArtofWEZ
    @ArtofWEZ Před 12 dny +4

    I see Nanite like Blueprints. Blueprints run slower than pure C++, and Nanite runs slower than traditional meshes, but both of them are a lot more fun to work with than the traditional ways.

  • @unfa00
    @unfa00 Před 21 dnem +5

    I was wondering how the heck Nanite works around quad overdraw if they aim to have "one triangle per pixel". Then I heard they rasterize in software instead, which made even less sense to me...

    • @jean-claudelemieux8877
      @jean-claudelemieux8877 Před 12 dny

      You are missing the point that before Nanite (at least in the scene they used), you had 8+ triangles per pixel.
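
      A toy software rasterizer shows what "rendering in software" buys here: it visits exactly the covered pixels, with no 2x2-quad helper lanes; the tradeoff is that derivatives must then be computed analytically. A standalone C++ sketch:

      ```cpp
      // Minimal half-space triangle rasterizer: shades only pixels whose centers are
      // inside the (counter-clockwise) triangle; no quad lanes, no helper pixels.
      #include <algorithm>
      #include <cstdio>

      struct V { float x, y; };
      static float edge(V a, V b, V c) { return (b.x-a.x)*(c.y-a.y) - (b.y-a.y)*(c.x-a.x); }

      void rasterize(V a, V b, V c) {
          int x0 = (int)std::min({a.x, b.x, c.x}), x1 = (int)std::max({a.x, b.x, c.x});
          int y0 = (int)std::min({a.y, b.y, c.y}), y1 = (int)std::max({a.y, b.y, c.y});
          float area = edge(a, b, c);
          for (int y = y0; y <= y1; ++y)
              for (int x = x0; x <= x1; ++x) {
                  V p{x + 0.5f, y + 0.5f};
                  float w0 = edge(b, c, p), w1 = edge(c, a, p), w2 = edge(a, b, p);
                  if (w0 >= 0 && w1 >= 0 && w2 >= 0)  // pixel center covered
                      std::printf("shade %d,%d  bary(%.2f %.2f %.2f)\n",
                                  x, y, w0/area, w1/area, w2/area);
              }
      }

      int main() { rasterize({0.f, 0.f}, {4.f, 0.f}, {0.f, 4.f}); }
      ```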

  • @almighty151986
    @almighty151986 Před 13 dny +6

    Nanite is designed for geometry way above what we currently have in games.
    So until games reach the point where their geometry is high enough to require Nanite, Nanite will be slower than traditional methods.
    Maybe next generation will get there.

  • @bits360wastaken
    @bits360wastaken Před 19 dny +33

    10:49, AI isn't some magic silver bullet; an actually good LOD algorithm is needed.

    • @michaelbuckers
      @michaelbuckers Před 17 dny +18

      He's talking about a hypothetical AI based LODder with presumably better performance than algorithmic LODders. Which is a fair guess considering that autoencoding is the bread and butter of generative AI. I hazard a guess that you can adapt a conventional image making AI to inpaint an original mesh with a lower poly color-coded mesh from various angles, and use these pictures to reconstruct a LOD mesh.

    • @yeahbOOOIIÌIIII
      @yeahbOOOIIÌIIII Před 7 dny

      What he is suggesting is amazing. I have to jump through hoops to get programs like reality capture (which is amazing) to simplify photogrammetry meshes in a smart fashion. They often destroy detail. This is an example where machine learning could shine, making micro-optimizations to the LOD that lead to better quality at higher performance. It's a great idea.

  • @lavallion7471
    @lavallion7471 Před 22 dny +4

    A company I recently worked for is set fully on Nanite. I had to create meshes with at times five-digit polycounts. And I agree: Nanite is an interesting technology, but just like AI, its capabilities are way overhyped. That's why you kind of lost me in the AI part. Because on one side, yes, it does take time and money to make scans usable, BUT it's still people who do it. People who need the money. AI is good and well, but there are tools that can do that retopology in seconds already, and I guarantee that they are being used. Good video nonetheless, and I wish I could send it to my former lead...

  • @MondoMurderface
    @MondoMurderface Před 2 dny +2

    Nanite isn't and shouldn't be a game developer tool. It is for movie and TV production. Unreal should be honest about this.

  • @therealvbw
    @therealvbw Před 15 dny +3

    Glad people are talking about these things. You hear lots of chatter about fancy UE features and optimisations, while games get slower and look no better.

  • @ShatteredDreams-pw9pn
    @ShatteredDreams-pw9pn Před 21 dnem +1

    Grateful for this channel. I had a feeling for a long time that games were getting less and less performant as time went on; these videos are helping me understand why that's the case, and what direction engine development needs to go to fix it.

  • @SpookySkeleton738
    @SpookySkeleton738 Před 21 dnem +9

    You can also reduce draw calls using bindless techniques, like they did in idTech 7; they are able to draw the entire scene with just a handful of draw calls.

    • @mitsuhh
      @mitsuhh Před 17 dny +3

      What's a bindless technique?
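
      Roughly: instead of rebinding textures and buffers before every draw, all resources live in one large indexable table and shaders look them up by ID, so one indirect draw can cover many meshes. A conceptual sketch (standalone C++, not a real graphics API; the commented-out helper calls are hypothetical):

      ```cpp
      #include <cstdint>
      #include <vector>

      // Per-instance record the GPU reads; indices replace per-draw state binds.
      struct DrawRecord {
          uint32_t meshIndex;      // slot in a global vertex/index buffer table
          uint32_t materialIndex;  // slot in a global texture/material table
      };

      // Classic path: bind mesh, bind material, draw -- repeated per object (CPU-heavy).
      // Bindless path: upload all records once, then issue a single indirect draw;
      // shaders index the global tables with meshIndex/materialIndex per instance.
      void submitScene(const std::vector<DrawRecord>& records) {
          (void)records;
          // uploadToGpu(records);               // hypothetical helper
          // multiDrawIndirect(records.size());  // one call draws the whole scene
      }

      int main() {
          std::vector<DrawRecord> scene{{0, 4}, {1, 4}, {2, 7}};
          submitScene(scene);  // idTech 7-style: a handful of calls for everything
      }
      ```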

  • @Dom-zy1qy
    @Dom-zy1qy Před 21 dnem +2

    I am so glad I clicked on this video. I'm a noob when it comes to graphics programming, but I've learned quite a lot just hearing you talk about things. I didn't know shaders are run on quads; I thought shaders were per-pixel. Maybe the reason behind this relates to the architecture of the GPUs? Some kind of SIMD action going on?

  • @Madlion
    @Madlion Před 22 dny +11

    Nanite is different because it saves development cost by saving the time needed to create LODs; it's a simple plug-and-play system that just works

  • @drili3771
    @drili3771 Před 13 dny +1

    You have a very clear, and informative voice. Thanks for your insight, and keep up the good work.

  • @_boux
    @_boux Před 18 dny +3

    Damn, I knew BIG UPSCALE was to blame for the recent dip in game performance

  • @Miki19910723
    @Miki19910723 Před 19 dny +6

    Technically, everything you say is right from a certain perspective, but your conclusions are very weird, especially the AI one, given the quality problems AI usually has. I think we should not confuse what some YouTubers say with how the feature works and why it was developed; in case anyone didn't catch it, it's exactly for the workflow. Nanite doesn't render more triangles; it renders them more where they're needed. And yes, it will lose to a very well hand-optimised scene, but the point is you don't have to do that. Also, the examples you showed are actually rather bad for Nanite. The point is not that it renders a single triangle faster; it's about the complex scene and the work required.

  • @tourmaline07
    @tourmaline07 Před 22 dny +25

    Nice to see my £1200 RTX 4080 Super's GPU power being necessary to draw so much badly culled, overdrawn geometry at native 4K while struggling to retain decent fps 🫤
    I did wonder why UE5 games' fps increases so massively with DLSS (more so than in other engines), and this does make sense.
    That is an excellent point you made about Intel and AMD funding optimization tools to make DLSS redundant, not one I've seen elsewhere.

    • @MrSofazocker
      @MrSofazocker Před 19 dny +4

      Yes, with DLSS you decrease the render resolution, and with Nanite this also means rendering fewer polygons in addition to fewer pixels.
      In a non-Nanite game at a lower resolution, that's all you do: shade fewer pixels.
      With Nanite, a lower resolution proportionally affects how many polygons are generated and shaded.
      That game devs don't expose a "geometry detail" slider, like the fractional CVar built into Unreal that adjusts how many polygons are created per pixel, is stunning!

    • @MiniGui98
      @MiniGui98 Před 18 dny +3

      Yeah, and using DLSS as a redundant solution means you have games with worse visual fidelity by default, since DLSS outputs an upscaled image from a lower resolution. You cannot create details by upscaling that didn't show up in the original image. This is killing the "crispness" of game visuals.

    • @tourmaline07
      @tourmaline07 Před 18 dny +1

      @@MrSofazocker thanks for that detail - that does make sense if fewer polys are rendered as well. Agreed, this would be a useful setting to expose to the end user for lots of titles.

    • @tourmaline07
      @tourmaline07 Před 18 dny

      @@MiniGui98 I think that if DLSS upscaling can give similar quality to native, then there is a serious problem with native TAA; you'd think it would be considered broken?

  • @etherweb6796
    @etherweb6796 Před 4 dny +1

    This was probably a long time coming; the claims about Nanite were definitely overpromising. IMO, if you want good and efficient optimization, it needs to be done by hand, considering all the elements that will be on screen.

  • @MrSofazocker
    @MrSofazocker Před 19 dny +16

    What this fails to capture is that Nanite uses a software rasterizer, which doesn't suffer from quad overdraw at all.
    Small enough clusters of vertices are offloaded to a software rasterizer and geometry assembler.
    The performance degradation most likely comes from not using GPU Resizable BAR, or from other fudging of the measurements.
    Epic could do better at providing information about incompatible project settings, etc.
    But what it does is let you put way more polygons on your screen; that's still true.
    Bringing in an 8th-gen game and enabling Nanite won't give you more performance; it could even give you worse performance.
    Also, even today, if you want to start a game project, use 4.26 like they tell you.

  • @CaptainVideoBlaster
    @CaptainVideoBlaster Před 22 dny +20

    This is what I imagine the Unabomber reading his own manifesto would be like if it were about video game development. In other words, I like it.

  • @crownless236
    @crownless236 Před 23 dny +31

    Your videos are amazing. Just don't forget to spam a bunch of Reels with different parts of this video; that way your channel gets into the algorithm.

  • @felixman9691
    @felixman9691 Před dnem +1

    This is SO interesting! Thank you so much for trying to solve this problem by explaining it and spreading the word!

  • @robbyknobby007
    @robbyknobby007 Před 22 dny +27

    I usually find that people who excuse Unreal's performance in titles mention how developers have the choice not to use techniques like Lumen and Nanite, but Unreal 5.0 deprecated TESSELLATION, so where was the developer choice there? You should really do some tests comparing tessellation, parallax occlusion, and Nanite. Because if tessellation doesn't perform better than Nanite, then Epic did the community a service by removing it in favor of a replacement. But if tessellation is faster, then Epic really is to blame for a lot of sh*tty performance.

    • @vitordelima
      @vitordelima Před 22 dny +6

      Tessellation can be faster and better-looking than Nanite, but nobody cared to code methods to export it from 3D modelling software or convert existing meshes to it.

    • @RADkate
      @RADkate Před 20 dny +2

      @@vitordelima tessellation is a gpu feature what are you talking about 💀

    • @vitordelima
      @vitordelima Před 20 dny +1

      @@RADkate There is no easy way to create content that uses it.

  • @delphicdescant
    @delphicdescant Před 2 dny

    What's silly to me is that the "industry standard" practice is to use a different shader for every type of object, merely because the object was made by some artist in some external application who was allowed to make up their own shader for it in whatever gimmicky visual scripting system they had access to.
    A proper in-house engine can easily get by with just 1-3 shaders and 1-3 draw calls for the entire scene, because it doesn't have to be general-purpose. The advent of general-purpose engines, meant to lower the barrier to entry, was the advent of poorly optimized games. Making software general-purpose *necessarily* makes it worse at any single specific task.
    If you, as a developer, choose to use Unreal, Unity, or even Godot, you are choosing to leave a significant margin of performance on the table. And that's fine, as long as it's acknowledged.
    Another side effect of this era of general-purpose engines is that people have this ridiculous notion that creating an in-house engine is a herculean task. It's really not. We're overestimating how much of the heavy lifting Unreal or Unity actually does for us. When we try doing that part ourselves, we find that those general-purpose engines aren't doing the hard part for us. They're doing the easy part, and charging us for it.

  • @gabrielgolzar
    @gabrielgolzar Před 5 dny +5

    I'm not sure what you're arguing against here. The point of Nanite is to eliminate the need to manually create different levels of detail (LODs) for the same mesh. What makes Nanite so powerful is its ability to "procedurally" generate LODs at runtime during development. Then, for the final release, pre-generated LODs can be created, which function like traditional LODs. The transitions between LODs are customizable and can be adjusted based on the application's needs or preferences.
    That said, systems like Nanite will always be slower than traditional LOD switching because of the heuristic system they rely on. If you want a more accurate heuristic, it will take longer to compute, since the code analyzing the context becomes more complex. On the other hand, if you prioritize faster computation, the heuristics will be simpler, creating a trade-off: a kind of cat-and-mouse game.
    Still, Nanite is a remarkable piece of software and an excellent foundation for pushing advancements in computer graphics.

  • @benjaminzysberg4645
    @benjaminzysberg4645 Před 8 dny

    The point of Nanite was never really to optimize scenes. It was to massively cut down on 3D production costs by not having to decimate models and by automating LODs. Developers basically gave up on optimizing when DLSS/FSR were invented.

  • @AzithromY
    @AzithromY Před 18 dny +5

    I am not even close to being a game developer, but from the point of view of a person who loves the art of games (me), this video's conclusions are more than relatable. I notoriously find myself in situations where I tinker with the video settings in the newest games (not just UE5), and I can't find a way to make the game not a messy blur. I blame the upscaling. As much as I love the concept of the tech, I can't stand that the root of it is pure marketing, a solution to a problem manufactured by the same people (Nvidia) and now even the most popular game engine.
    Why I always come back to my "old" beloved games is the lack of choice in most modern titles between looking like a blurry mess (upscalers) OR looking like a blurry pile of shit (TAA) :D
    Edit: It just came back to me when I first noticed some of this crap: NFS 2015. 16-year-old me realised that the TAA used in every video preset above Low looked like shit, so for me the irony of NFS 2015 is that the highest graphics settings looked worse than the lowest, which used FXAA :D

  • @legice
    @legice Před 20 dny +2

    Finally, a video that talks about Nanite!
    Honestly, Nanite saved a project, because we had 100 meshes with 20 mil polys each, and Nanite made it work on machines that had no right to be able to run it, but...
    It is in no way a silver bullet, and for day-to-day use as a quick LOD it is not suitable.
    As a modeler, there are rules to modeling, and if you do it right, you need a day max to optimise a big-ass mesh, which you know how to do, because you made it!
    "Quick" and dirty modeling exists, with optimisation down the road, but when you are making the prop, you KNOW, or at least understand, what to do and how to do it so that it is the least destructive.
    Non-destructive modeling exists, but it brings in different problems, such as time, approach, and workflow, and unless the job requires it, you don't use it, as it's a different beast altogether.
    You can model a gun any way you want, but a trash can, a house, something non-hero, non-changing, with measurements set in stone, you do the old-fashioned way.
    Texture and prop batching is simple, but being good at it is not.
    I love Lumen, but it is clearly still in the early stages and needs additional work to be optimized for non-Nanite, optimized workflows.
    I'm just so happy I wasn't the only one going insane about this.

    • @XCanG
      @XCanG Před 20 dny +1

      I have a comment above with my opinion on this, but I'm not working in game development, so my knowledge is limited. Considering that you are a modeler, I have a few questions for you:
      1. How long does it take to make a model for Nanite vs LODs?
      2. How many years of experience do you have as a modeler? I ask because pros make stuff way quicker, so the difference between one and the other may vary with experience.
      3. How much were you aware of the optimizations mentioned in the video? My guess is that he has at least 6 years of experience and is probably already a senior game dev, but it's hard to imagine that new game devs would have that knowledge.
      4. Do you think Nanite is useful right now? Do you think it will be useful in the future? (maybe with some polishing and fixes)

    • @legice
      @legice Před 18 dny

      @@XCanG Sure, I can answer those
      - nanite is obviously just a click, so nothing to do there really. As I said, when modeling, you ALWAYS plan ahead, so when you are doing retopo, its just a matter of how well you preplanned all your steps beforehand. So as there are rules to modeling, there are good practices in place for a long time and if you follow them, you are going to take a bit longer to make a prop, but when doing retopo/LODs, its going to take only a fraction of the time needed. Cant really time this, because its how you should be modeling regardless, unless you are doing for visualisation or movies only, where they dont need to be optimized.
      - professional, none really, as everything I have done is on personal works and game jams, due to the game dev industry being a bitch to really get in. There are steps or approaches, but in the end it dosent matter, as long as you deem it to be the best approach time wise, modeling wise and within budget, but most have the same workflow, because if you hand it off to somebody or leave the company, other need to be able to take your work and adapt/finish it.
      - very little/barely understood anything, because I learned from doing it and adapted my workflow base on the information I got, while searching for solutions. Honestly, you can basically skip anything he said, because that is all theory in the same way of how light works in games. As a modeler, you dont really need to know exactly how it works, but you get a feeling and slight understanding of how any why. He is going way more into technical stuff, something tech artists and programers deal. As a senior modeler, you touch this, but in the end your job is to do other things, such as modeling well, texture packing, instances, draw calls, modular design... in some areas and studios, this gets mixed in between and I 100% guarantee you, that most dont know it, even seniors, but they compensate in other areas.
      - nanite is already useful, it will be more useful, but limited. The fact is, nanite is a constantly active process going on in the screen, where as LODs are a one and done. LODs will never go away, but their dependency will be reduced, as less and less optimization will be needed to make games work so well, because computers are getting better.
      As I said, nanite straight up saved a project of ours when it was still in alpha/beta, so now if you use it for trying out how something looks in game, stress testing a rig pre optimization or whatever, it has its place, but should not be overused and has limitations. You cant use it on an skeleton/rigged model for example, as it relies on a constistent poly mesh.
      Take everything I said with a grain of salt. The video explained everything BEAUTIFULLY and I understand things I have unknowingly been doing for years, but never really grasped why, but I knew it worked.
      I learned things my way, studios teach their way, opinions clash and in the end, nobody really knows what they are doing, only how they feel they should be done and the final result dictates that.

  • @agd5554
    @agd5554 Před 21 dnem +21

    What? The smallest Nanite elements, the ones that would be affected by the issue you're talking about, have always been drawn using software rasterization to avoid it.
    This video is just incorrect.
    While UE5 started out Nanite-centric, they've been regularly adding back features to better support traditional rendering, like mesh tessellation. Nanite is performance-heavy and nobody has denied that; it's just another solution in the toolset.

  • @average_ms-dos_enjoyer
    @average_ms-dos_enjoyer Před 22 dny +2

    It would be interesting to see similar breakdowns/criticisms of the other big 3D engines' approaches to visual optimization (Unity, CryEngine, maybe even Godot at this point)

  • @ThePCxbox
    @ThePCxbox Před 18 dny +4

    I literally started building my own framework with OpenGL because I'm so sick of how over-engineered engines are becoming. It's not just meshes, either: they dumped the old input system that worked fine just to force us to use the "enhanced" one that takes significantly longer to set up. Lumen is enabled by default, so it's pretty easy to accidentally ship a build that doesn't work on 80% of computers, and I swear they've made it marginally more difficult to find workarounds for more complex problems because of deprecation.
    Sure, I run into issues with OpenGL, but at least I actually *learn* something from those mistakes, and 9 times out of 10 it's 100% my fault, and I don't have to go digging through three decades' worth of abstraction to find the answer.

  • @rockoman100
    @rockoman100 Před 10 hodinami

    Am I the only one who thinks LOD dithering transitions are way more noticeable and distracting than just having the models "pop" instantly between LODs?

  • @noyb6200
    @noyb6200 Před 22 dny +13

    8:33 is a bombshell.

    • @Vanderer11
      @Vanderer11 Před 18 dny +3

      About 2 years ago I spent so much time trying to find out why this crap wasn't working; my bug ticket and forum post went unanswered. Now I see they knew about the problem and refused to fix it. Just great.

  • @azrielsatan8693
    @azrielsatan8693 Před 12 dny

    As a certified UE5 hater since the first few games that released with it, I'm happy to see more discussion about it.
    The VRAM usage also needs to be talked about. Games requiring a minimum of 8 GB of VRAM for 60 fps is ridiculous (though Nvidia refusing to put enough on their cards is also to blame).

  • @thelaw3536
    @thelaw3536 Před 22 dny +5

    Really though, the value of Nanite is that you don't need as many LODs. As an indie dev, if I use a decent-poly model I can Nanite the LODs out of my life, which saves a lot of time. LODs still have to look good enough for the scene, and spending time on that for optimization's sake is really resource-intensive for any big game. There are better places to spend time optimizing, like world partitioning.

    • @ThreatInteractive
      @ThreatInteractive  Před 19 dny +3

      Threat Interactive is an indie studio and we communicate with other indie studios who feel the opposite about Nanite. We want automated systems that take advantage of performance research; not systems that reinvent so much we lose massive amounts of performance, for arguably no improvement to visual fidelity. We believe consumers will thrive better with excellent gameplay, if the game looks clear and performs at standards promoted in the anti-blur community.

  • @DonC876
    @DonC876 Před 22 dny +6

    Overall a pretty good writeup 👍. I really wish the concept of overshading were better known, especially among 3D artists, since they usually get fixated on polygon counts, which is only a small sliver of the story. Anyway, this discussion sounds a lot like the ones we had when deferred renderers and TAA took over the industry. In my opinion this is just a trade-off for getting new systems that enable new workflows and higher-quality graphics (depending on how you define that, at least). I guess the best thing to do if you don't like the way UE5 is going would be to switch to something like Unity's URP and use the Forward+ renderer; then you get the cleaner image you are looking for with MSAA, and you can add multisampling in your shader with an LOD bias of -1 to get sharper textures, as well as use normal LODs. You can just let the people at Epic cook and come back when they've improved the situation and the hardware has picked up in performance.

  • @thesneakysnek
    @thesneakysnek Před 22 dny +13

    Nanite would have been a nightmare anyway. Game file sizes are already huge, just think of how much bigger they will be if devs don't have to care about polygon counts.

    • @darrenstone3406
      @darrenstone3406 Před 22 dny

      I don't, and my game is nearing 800 gigs. I use no Nanite except for the landscape 😂

    • @mcxMakerx
      @mcxMakerx Před 20 dny

      @@darrenstone3406 Are you making a game for NASA computers?

    • @darrenstone3406
      @darrenstone3406 Před 20 dny

      @mcxMakerx most computers have TB drives...

    • @mcxMakerx
      @mcxMakerx Před 20 dny

      @@darrenstone3406 Yeah…. and I’d rather not use almost half my storage on one game

    • @darrenstone3406
      @darrenstone3406 Před 20 dny

      @@mcxMakerx Just FYI, FFXVI comes out soon and it's 200 gigs... get used to it; polygons cost HDD space. The better you want a game to look, the fewer corners you can cut, but hey... you do you

  • @zsigmondforianszabo4698
    @zsigmondforianszabo4698 Před 22 dny +2

    I'd rather think of Nanite as a magic wand for those who don't want to deal with mesh optimization and just want consistent performance across the board without manual optimization. This currently hits us heavily, but as soon as technology evolves and everyone has access to modern hardware that can utilize this system, the ease of the system and its decent performance will overcome these growing pains.
    About the development pace: a 4050 compared to a 1060 is about a 70% performance uplift over 7 years, which is roughly 8% per year compounded, including hardware and software development; in 5 years Nanite may work out really well for fast game development and consistent performance.
    PS: we need to mandate that game devs, when releasing a trailer, publish performance statistics for the in-game scene and the upscaling used :DD

  • @minepro2929
    @minepro2929 Před 23 dny +20

    This is crazy, I wonder how this could be fixed. (I haven't finished the video yet.)

    • @vitordelima
      @vitordelima Před 22 dny +8

      The overdraw problem could be fixed by some sort of A-buffer-based rasterizer (which could also do anti-aliasing, better screen-space effects, better upscaling, ...).

  • @devonjuvinall5409
    @devonjuvinall5409 Před 4 dny

    Great watch!
    I would also recommend Embark's example-based texture synthesis video. They get into photogrammetry and their testing of the software for 3D props. It's just rocks using displacement maps, but I think the whole video could be relevant to this situation. I don't know enough to be confident though, haha; still learning.

  • @Niwaessence
    @Niwaessence Před 22 dny +7

    Funny that I found this video while trying to optimize my Unreal scene by removing nanite.

  • @simpson6700
    @simpson6700 Před 9 dny +2

    I thought the lack of technical information when nanite was first presented was suspicious.

  • @plamen5358
    @plamen5358 Před 22 dny +11

    Ordered dithering looks and feels terrible, can we please not have that 😓 Awesome video otherwise 🙏

    • @vitordelima
      @vitordelima Před 22 dny

      Subdivision surfaces and MIP mapped displacement maps.

  • @mike64_t
    @mike64_t Před 14 dny +2

    Good video, but I disagree that you can currently train an AI model to reduce overdraw.
    There is no architecture currently that can really take an AAA model as an input.

    • @ThreatInteractive
      @ThreatInteractive  Před 14 dny +1

      We are less reliant on AI for implementing max-surface-area topology than most people give us credit for. We just need faster systems for LODs. One of the biggest problems with the LOD workflow in UE vs Nanite is that LOD calculation is extremely slow, while Nanite's is near-instant even with millions of triangles. We also need a polished system that bakes micro detail into normal/depth maps faster.
      The way we see it, it's always going to be an algorithm. Most AIs that get trained enough revert to one anyway.

    • @mike64_t
      @mike64_t Před 14 dny +2

      ​@@ThreatInteractive "Most AIs that get trained enough revert to one anyway"... mhhh, I wouldn't say so. Yes, in a sense it's an algorithm, but the compactness, discreteness, and optimality that you picture when you hear the word "algorithm" are not present in a bunch of matrix multiplications that softly guide the input to its output. Just because it is meaningful computation doesn't make it deserving of the word "algorithm". The LTT video isn't really accurate and makes some dangerous oversimplifications.
      I agree that tooling needs to get better. I would also love for there to be a magic architecture that you could just PPO-minimize render time with, and that invents all of topology theory from that, but that is a long way off... Transformers are constrained by sequence length and have a bias towards discrete tokens; not ideal for continuous vertex data.
      For now it seems like you need to bite the bullet and write actual mesh-rebuilding algorithms.

  • @2strokedesign
    @2strokedesign Před 22 dny +4

    Very interesting tests, thank you for sharing! I was wondering about memory consumption though? I would think a LOD0 Nanite mesh is lighter than let's say LOD0-LOD7.

  • @coleshores
    @coleshores Před 8 dny

    As a software developer, it requires much less time and fewer resources to import an extremely high-poly mesh into UE5 and have it auto-LOD than to prebake the LODs.
    The cost is passed from the developer/publisher on to the consumer.

  • @fluffy_tail4365
    @fluffy_tail4365 Před 22 dny +4

    Aren't mesh shaders still a bit unoptimized and slower compared to a compute shader + indirect draw? I remember reading something about that. The idea of compiling LODs together still holds anyway.

    • @internetexplorer781
      @internetexplorer781 Před 18 dny

      I guess it depends on hardware and API, but on my HW with Vulkan, mesh shaders/meshlets are like 4-5x faster than the traditional pipeline, and with task shaders you can do some pretty interesting culling techniques, etc. I think I read in Epic's roadmap that eventually UE will move to this pipeline and ditch vertex shaders completely. Nanite is, IIRC, already partly using that new pipeline.

  • @kukuricapica
    @kukuricapica Před 20 dny +2

    The latest Northlight engine is pretty great with its geometric density and LODs. Epic was never THE studio for engineering perfect tech. From what I have heard, CD Projekt RED is working on optimization of UE5, and they poked Epic about the "need to build bridges to implement the RED branch of UE5 into the main branch", so we shall see...

  • @letronix6243
    @letronix6243 Před dnem

    Gordon doesn't need to hear all this. He's a highly trained professional.

  • @AdamTechTips27
    @AdamTechTips27 Před 22 dny +8

    I've been feeling and saying the same thing for years; I just didn't have the concrete testing yet. This video proves it all. Thank you.

  • @DiogoManteu
    @DiogoManteu Před 14 dny

    To me, since its inception, Nanite was always marketed as a solution for scenes with too many triangles leading to overdraw, and in that case increasing performance over traditional rendering.
    To some developers, it's seen as automatic LODs, lol.