Fixing Intel's Arc Drivers: "Optimization" & How GPU Drivers Actually Work | Engineering Discussion

  • Published May 5, 2024
  • We sponsored ourselves! Buy one of our metal emblem badge pint glasses! store.gamersnexus.net/product... or our ultra comfortable hoodies! store.gamersnexus.net/product...
    In this discussion, we talk about a number of questions relating to GPU performance in gaming. Beyond covering how GPU drivers actually work, we get into what it means when a game is "compiling shaders" or "caching shaders," what it means for a game to be "unoptimized" versus the drivers or hardware, and more. The key goal is to define drivers and how they interact at both the hardware and software level, giving a better understanding of what it means when Intel, NVIDIA, or AMD talk about "optimizing" performance. Likewise, this helps cover some of what it means for a game developer to have an "unoptimized" game. We're joined by Intel engineer Tom Petersen for this discussion. If you learned from this talk, consider watching these:
    Intel Animation Error discussion: • FPS Benchmarks Are Fla...
    NVIDIA latency technical discussion: • Framerate Isn't Good E...
    Watch our Intel Arc 2024 revisit: • Intel Arc 2024 Revisit...
    To further support our interviews and deep dives, consider grabbing one of our PC building DIY anti-static modmats: store.gamersnexus.net/product...
    Or our soldering mats: store.gamersnexus.net/product...
    Like our content? Please consider becoming our Patron to support us: / gamersnexus
    TIMESTAMPS
    00:00 - No One Actually Knows What "Optimization" Means
    02:05 - The Driver Stack, APIs, & Basics
    04:38 - Shaders & Programs
    05:43 - What Does "Compiling Shaders" Mean?
    06:30 - Optimizing GameDev Shaders
    08:19 - Kernel Mode Driver
    10:08 - Graphics Firmware & Hardware
    11:18 - "OPTIMIZING" Drivers & Games
    16:25 - Types of Optimization
    18:30 - Future Plans
    21:24 - Games Being "Unoptimized"
    23:15 - Common Misconception About Drivers
    ** Please like, comment, and subscribe for more! **
    Links to Amazon and Newegg are typically monetized on our channel (affiliate links) and may return a commission of sales to us from the retailer. This is unrelated to the product manufacturer. Any advertisements or sponsorships are disclosed within the video ("this video is brought to you by") and above the fold in the description. We do not ever produce paid content or "sponsored content" (meaning that the content is our idea and is not funded externally aside from whatever ad placement is in the beginning) and we do not ever charge manufacturers for coverage.
    Follow us in these locations for more gaming and hardware updates:
    t: / gamersnexus
    f: / gamersnexus
    w: www.gamersnexus.net/
    Steve Burke: Host
    Tom Petersen: Guest
    Video: Vitalii Makhnovets
  • Gaming

Comments • 1.2K

  • @GamersNexus
    @GamersNexus  a month ago +190

    If you like this, watch our video discussing the animation error and the flaws of frametime testing as we know it today! czcams.com/video/C_RO8bJop8o/video.html
    To further support our interviews and deep dives, consider grabbing our metal emblem pint glass! store.gamersnexus.net/products/gn-3d-emblem-glasses or our ultra comfortable hoodies! store.gamersnexus.net/products/warm-ultra-soft-fleece-zip-hoodie-tear-down
    NVIDIA latency technical discussion: czcams.com/video/Fj-wZ_KGcsg/video.html
    Or our Intel Arc 2024 revisit: czcams.com/video/w3WSqLEciEw/video.html

    • @olnnn
      @olnnn a month ago +3

      Re: drivers, you should really have on one of the people working on the Mesa Linux graphics drivers; it's one of the things making the Steam Deck possible

    • @robertlawrence9000
      @robertlawrence9000 a month ago +3

      Very interesting! I wonder if it would be possible for Tom to get a video of some engineers actually using these tools to show us what they do. Like maybe a behind-the-scenes video, step by step, of an actual problem they encounter in a game and the process of what they do to fix it using software and testing. Maybe that's a bit intrusive, but it could be an interesting video.
      Thanks for the video Steve and GN team!

    • @realdragonrude
      @realdragonrude a month ago +1

      Would love to pick this guy's brain about the advantages and disadvantages the Vulkan API has over DirectX on the GPU side
      Also
      Thanks Steve

    • @dasiro
      @dasiro a month ago +1

      With Tom having worked at NVIDIA and now at Intel, in this very unique position: how do different choices at the hardware level impact the software and vice versa? And on a personal note: how hard is it to work around previous knowledge that you aren't allowed to use due to patents and NDAs, and could your previous employer find out if you did so?

    • @AwesomeBlackDude
      @AwesomeBlackDude a month ago +1

      Hey Steve, (bro) when are you planning to break this system? We're still waiting for an in-depth investigation into the actual prices for these fast, fancy calculators we call GPUs in mass production.
      Correct me if I'm wrong, but isn't it mind-blowing that the iPhone 14 only costs $10 each in mass production out the door?

  • @beachslap7359
    @beachslap7359 a month ago +1738

    Thanks Steve.

  • @WereCatStudio
    @WereCatStudio a month ago +1053

    I wish all GPU vendors were more willing to talk about this kind of stuff. This is very interesting and informative, and I applaud Intel for being willing to talk about it. Makes me want to pick up an Intel card just to tinker with it.

    • @mikebertolini120
      @mikebertolini120 a month ago +19

      You stole my words

    • @zivzulander
      @zivzulander a month ago +37

      Hopefully they will talk about this stuff more now that they see there is interest. I think they just assumed people aren't interested in the low-level details as much (specifically on the software/drivers side).
      Nvidia and AMD have had engineers on this channel before (and others) to discuss other technical info, though. That recent video on latency Steve did with Guillermo Siman of Nvidia was also very informative, as were the videos recorded in the AMD labs in Austin, Texas.

    • @PrefoX
      @PrefoX a month ago +9

      It's just AMD that isn't visiting GN; Nvidia was there quite often, and Intel too.

    • @Mr.Genesis
      @Mr.Genesis a month ago +33

      @@PrefoX GN went to AMD's own offices and spoke to the engineers there. Guess you missed the video

  • @MrMonday1000percent
    @MrMonday1000percent a month ago +955

    24:56 Thanks for watching Tom!

    • @GamersNexus
      @GamersNexus  a month ago +405

      I considered cutting that but figured so few people watch that long that it'd be a great easter egg!

    • @grievesy83
      @grievesy83 a month ago +56

      @@GamersNexus Really? People switch off from a video with Tom in it?
      In my country, we'd say they "have kangaroos loose in the top paddock".

    • @shaunwhiteley3544
      @shaunwhiteley3544 a month ago +7

      ​@@virtuserable that's what's stopping me from buying Arc 😪

    • @woodmon122
      @woodmon122 a month ago +6

      HAHAHAHA that cracked me up

    • @shaneeslick
      @shaneeslick a month ago

      @@grievesy83 🦘🦘🦘🦘🦘🦘🦘🦘🦘🦘🙃👎

  • @wansnek3997
    @wansnek3997 a month ago +428

    This is amazing. I work as a low-level engineer in the reverse engineering field, and these technical deep dives are AMAZING. It's not an exaggeration to say that the engineering content you are putting out is legitimately one of the things people will be watching for years for knowledge of certain software topics. Thanks Steve & team, and you'll always have my support!

    • @GamersNexus
      @GamersNexus  a month ago +53

      That's a cool job! And thank you for the kind words!

    • @svampebob007
      @svampebob007 a month ago +3

      The software part is always a game of "speaking the right language," but the low level is always so interesting.
      Like having this API have X registers, or reading this text and inferring this information to perform "that" instruction.
      I work in retail, but low-level circuits and programming have been my hobby. I remember building my first dual-core circuit 10 years ago, and it took me 2 years just to figure out how to coordinate sharing the information from RAM and matching it with the program counter register.
      And that was when I was using my own "coding language" - it was a very hardware-specific language.
      Watching these deep dives from giants like Intel is so much fun!

  • @sgredsch
    @sgredsch a month ago +292

    Former mod developer here, working with the Source engine as a 2D/3D artist. I'd like to share some basic views on game optimization. Petersen did a great job explaining deeper driver/engine-level optimizations, but that is only one part of it.
    It's not just the engine or driver, it's also the assets, and this is where some studios just drop the ball.
    You want to load a GPU evenly, but you can absolutely choke a GPU by overloading it with a heck of a lot of one single workload. That might be one specific shader, an absolutely insane geometry load, or stuffing the VRAM.
    1. 3D models. Models are basically a wireframe that forms a body. The possible level of detail of a model is determined by the number of intersecting lines of the grid. Patches formed by the grid are referred to as polygons (there are different types like n-gons and quads; not gonna touch on that). A high polycount gives you a higher-resolution mesh with potentially more detail; HOWEVER, you can have a model with millions of polygons and no detail at all. The polycount is direct geometry load on the GPU: the higher the polycount in the scene, the higher the load on the GPU. Once you overload the geometry pipeline with absolutely insane levels of geometry, GPU performance drops off a cliff. This is basically what tessellation does - it takes a simple wireframe and bloats the polycount, increasing mesh fidelity but blasting the GPU with geometry load. This was NVIDIA's big trick to choke Radeon GPUs in certain games using GameWorks or HairWorks. NVIDIA massively increased the geometry capability of their GPUs starting with Fermi and tried to get studios to use the tessellation libraries found in GameWorks/HairWorks, which would absolutely obliterate Radeon GPUs with stupidly high amounts of geometry load. Notable games are The Witcher 3 with HairWorks and Crysis 2 with environment tessellation that does absolutely nothing except cut Radeon framerates in half - this is why you can find the tessellation factor option in the Radeon and Intel Arc drivers; it limits the mesh complexity scaling of tessellation. Geometry load is the whole visible scene, with all models and map geometry on the screen, so you want to keep the polycount of the rendered scene as low as possible, and you absolutely want to avoid wasting polygons on geometry that doesn't even need it. A cube can be done with a "single digit" number of polygons, or with 2,000,000,000,000 without any visible difference. You can have thousands of the minimal-polygon cubes on screen without breaking a sweat, while just a few of the bloated cubes will make your GPU scream. Modern GPUs can handle a ton of geometry load, but this is not an excuse to just waste it.
    Having a ton of unnecessary polycount puts unnecessary load on the GPU and is a result of lazy mesh optimization. Sure, you want higher-fidelity models for better visuals, but you can assume that studios would rather cut time per model in exchange for worse client-side performance.
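
To get a feel for why driver-side tessellation limits exist, here is a rough Python sketch. The base triangle count is a made-up example, and the factor-squared growth is an approximation of uniform patch subdivision, not any specific GPU's behavior:

```python
# Rough sketch: uniform tessellation multiplies triangle count by ~factor^2.
base_tris = 10_000  # hypothetical mesh submitted to the GPU before tessellation

for factor in (1, 4, 8, 16, 64):
    tris = base_tris * factor * factor
    print(f"tess factor {factor:>2}: ~{tris:>13,} triangles")

# Capping the factor in the driver (as the Radeon / Arc control panels allow)
# caps this blow-up, which is why that setting can claw back performance.
```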
    One technique, usually used in older games, is "backface culling," which here basically means deleting all the parts of the model that the player never sees, cutting geometry load. There are possible artifacts when you can actually see the deleted backface and the model looks hollow. Today this is not done a lot because GPUs are pretty powerful, but there are situations where it should still be done and isn't.
    But don't worry, polycount isn't the only way to have detailed models; there's a way to simulate polycount with textures, which is why we will switch over to...
    2. Textures. Textures in games have three main use cases: 1. giving color and texture to a 3D model and to the map/environment, 2. decals and sprites, and 3. controlling shaders. As you already heard, textures are relatively easy on the GPU; HOWEVER, this is not entirely true across the board. The taxing factors of textures are resolution, file size, and function - hitting the VRAM and the shader pipeline. Resolution gives better clarity but bloats file size, and you can combat file size with compression. You want to set a texture resolution that makes sense: high resolution for textures the player views up close, low resolution where it doesn't matter so much, and compression where possible. This applies to the albedo map, or diffuse map, which is basically the texture that gives you the color information. There are other textures that control shaders, like bump and normal maps, specular maps, Phong shader maps, self-illumination maps, and parallax maps. These textures tell the engine what to do with which part of the texture. They can be greyscale or carry different information on each of the R, G, and B channels, like a normal map (a very fancy bumpmap with accurate "3D" information baked into 2D space; normal maps are either hand-made (legacy) or, more commonly, "baked" in a 3D application from a high-resolution version of the model, to reduce geometry load by using a low-poly model plus a normal map instead of a model with 100-1000 times the polycount). So you can use a 2D texture to reintroduce "3D" detail onto a low-poly model; the normal map acts somewhat like a textured rubber glove you pull over your hand. Normal maps should not be lossy-compressed, because that introduces awful blocky artifacts when light hits the object, and normal maps usually also have an alpha channel (a greyscale channel next to the RGB channels) with a different function, usually controlling a different shader like specularity, which makes the normal map a chonker in file size. You don't want to overdo normal map resolution, as it will eat VRAM for breakfast, and very high-resolution normal maps also put more load on the shader pipeline. A GPU can withstand a considerable number of normal maps, but you can obviously overdo it. Let's briefly talk about how you map texture space to models.
    In the 3D modeling software you "unwrap" or "UV map" the model into 2D space. Imagine taking a roll of toilet paper and cutting it across the long side; now you can lay it flat on the table, and when you're done painting, you can put it back into a roll shape. It's basically the same with 3D models, but there's a twist: with the toilet roll you have a 1:1 representation of the model and the texture, but with 3D models you set a texture size (for example 2048x1024) and then place and scale parts of the model onto it. The bigger the parts on the UV map, the more pixel space they get, hence higher resolution. Now comes the kicker: to preserve texture size, you map important parts big and map unimportant parts, which are barely visible but can still be seen, small. Take a gun, for example - you want the regions close to the camera to be as high resolution as possible, but you also have tons of parts that just need color but aren't usually seen, or are plain black, and those can be smaller. You can also use mirroring to cut the UV map area in half for some parts. By not properly scaling the parts of the UV map you can end up with a very unoptimized UV map that gives worse resolution to the important parts while being twice the size. The impact of one bad texture set for one model isn't big, but consider a scene sometimes containing hundreds or thousands of assets. It adds up.
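
To put rough numbers on the VRAM point above, here is a small Python sketch. The per-pixel sizes (uncompressed RGBA8, BC1/BC3 block compression) and the roughly one-third mip-chain overhead are standard; the specific resolutions and asset mix are made-up examples:

```python
# Rough VRAM footprint of a texture, including a full mip chain (~+1/3).
# Bytes per pixel: RGBA8 uncompressed = 4, BC1 = 0.5, BC3/BC7 = 1.
def texture_bytes(width, height, bytes_per_pixel, with_mips=True):
    base = width * height * bytes_per_pixel
    return base * 4 / 3 if with_mips else base

MIB = 1024 * 1024

# Hypothetical prop: a compressed albedo map plus an uncompressed normal map.
albedo_bc1     = texture_bytes(2048, 2048, 0.5)  # block-compressed color
normal_rgba8   = texture_bytes(2048, 2048, 4)    # uncompressed normal + alpha
normal_halfres = texture_bytes(1024, 1024, 4)    # same normal map at half res

print(f"albedo 2048^2 BC1:   {albedo_bc1 / MIB:5.1f} MiB")
print(f"normal 2048^2 RGBA8: {normal_rgba8 / MIB:5.1f} MiB")
print(f"normal 1024^2 RGBA8: {normal_halfres / MIB:5.1f} MiB")
# One uncompressed 2048^2 normal map is ~21 MiB; multiply by hundreds of
# assets in a scene and "eats VRAM for breakfast" becomes concrete.
```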

    • @sgredsch
      @sgredsch a month ago +123

      3. Bad practice. Now that we have covered the general function and cost of models and textures, and the general need to optimize certain aspects to retain as much performance as possible, we enter the realm of wasted performance due to bad practice. There are shader effects that are very easy on the GPU, or very hard, and usually there are multiple ways to do things. Choosing the right way can make or break a game. Let's talk about reflections for a moment. There are two common, easy ways to add reflectiveness via shaders: 1. cubemaps and 2. screen-space reflections. Screen-space reflections take the actual content rendered in the scene and reapply it, which makes them expensive to run but quite close to the actual scene - but ONLY for what is shown on the screen. Off-screen content is blank. A game notorious for bad screen-space reflections is the Resident Evil 2 remake in the police station, where objects would constantly blank out of the reflection, because to the shader some parts of the scene plainly don't exist. The second way is cubemaps, and they are so basic in function that you can use them all day and it doesn't matter, because a cubemap is basically a plain old texture. What is a cubemap? A texture, or set of textures, that represents what something might reflect in a certain area of a map - basically something like a skybox, but for reflections. These are non-dynamic and prebaked, which makes them insanely easy to run. Now, if you want to give some kind of reflectivity to an object, you can do cubemaps, screen-space reflections, or raytraced reflections, each method being progressively harder to run. Choose wisely. While a rifle scope might look absolutely stunning with raytraced reflections, 600 empty coke cans in an alley don't, and they will absolutely murder your GPU. Raytracing is its own can of worms, and while raytracing the whole scene would have different implications, I just wanted to demonstrate that you can totally use the wrong tool for the job.
      Let's touch on a different kind of technical bad practice that you actually don't see - as a theoretical example: hitboxes. You know why it's called a hitbox - in early games a hitbox was exactly that, a simple box around a character that registered hits instead of the actual "visual" model, because the visual model is too complex. Later hitboxes were a cylinder; now more sophisticated hitboxes are closely modeled after the visual model and look like a stick figure, made out of very basic geometry so they're still easy to run. But what if you have very complicated models that need hitboxes and you run out of time... w..wou... would you just take the mesh of the visual model to use as a hitbox so you don't have to make a separate hitbox mesh? Nooooo, certainly no one would be insane enough to run an 80,000-polygon visual model mesh as a hitbox. Right? RIGHT? Not saying this is a thing; it's just an example of under-the-hood insanity no one can see, but that can make your 3D sidescroller the next Crysis on steroids.
      And we have a lot of other departments that can all do some really shoddy stuff, like animation and rig setup, mappers, coders, general shader setup, etc. Making games is very complicated.
      There are tons of things you can do one way or another. Doing it properly takes more time than taking shortcuts that trade developer time for client-side performance.
      Blaming bad performance on the driver or the engine is only one part of the equation. You can do a lot of bad stuff on the asset and coding side and, despite having near-perfect driver and engine optimization, the whole thing can still run like a brick.
      Looking at games that barely hit 60 FPS at WQHD on a 4090... let's just say I have some serious doubts regarding the technical quality of those games. The last game I took an in-depth "game dev" look at was Warcraft 3 Reforged, and my god is that thing botched. Some units don't even have proper textures, like the frost wyrm - it's a placeholder texture in a finished game that no one cared to fix or finish. Having such assets in the game probably also tells you the shader optimization and coding aren't properly done either. That's the state of the industry, at least partly. Don't be fooled by the hardware hunger of the games; chances are you're buying expensive hardware to cross-finance sloppy development.
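
To make the hitbox example concrete, here is a back-of-the-envelope Python sketch. Every number in it is an assumption for illustration, and it deliberately ignores the broad-phase culling and acceleration structures real engines use:

```python
# Naive cost model (assumed numbers): hit tests per frame if the collision
# mesh is the full visual mesh vs. a simplified hitbox mesh.
VISUAL_MESH_TRIS = 80_000   # dense character model, per the comment above
HITBOX_MESH_TRIS = 300      # rough stick-figure / capsule approximation
PROJECTILES      = 64       # active traces needing a hit test this frame
CHARACTERS       = 20       # characters each trace is tested against

def tests_per_frame(tris_per_character):
    # Worst case with no broad-phase culling: every trace vs. every triangle.
    return PROJECTILES * CHARACTERS * tris_per_character

print(f"visual mesh as hitbox: {tests_per_frame(VISUAL_MESH_TRIS):>12,} tests/frame")
print(f"dedicated hitbox mesh: {tests_per_frame(HITBOX_MESH_TRIS):>12,} tests/frame")
# ~102,400,000 vs ~384,000 intersection tests: roughly a 267x difference
# for work the player never sees on screen.
```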

    • @ezg8448
      @ezg8448 a month ago +16

      Excellent comment. Adds to what was spoken about in the video. One issue I had was that while Tom wouldn't mind any question asked, he still did dance around the Starfield optimization question (it may not actually be the game specified, but the video made it seem so).
      Care to give your take on it if possible?

    • @umblapag
      @umblapag a month ago +13

      Thank you for the information. This comment should be higher up.

    • @KaiSoDaM
      @KaiSoDaM a month ago +20

      Thanks for the info. I remember hearing about a river of tessellation running under the map in Crysis 2 just to slow down Radeons lol. Crazy world...

    • @GamersNexus
      @GamersNexus  a month ago +56

      @@ezg8448 We didn't specify a game. There was no specific game. We put the reference in the video as an example, but it's not like he saw the edit live.

  • @grievesy83
    @grievesy83 a month ago +531

    I'm a simple man. I see Tom Petersen in a GN video, and I watch the entire thing sitting forward in my chair with both index fingers pressed against my lips, brow furrowed, learning intently.
    Love this series. Never stop. There's no such thing as too much Tom in GN's video catalogue.

    • @draigaur9543
      @draigaur9543 a month ago +18

      I'm an aging grease monkey; most of this is way above me in the clouds, but I still enjoy it.

    • @102728
      @102728 a month ago +2

      Oh yeah these are great!

  • @Curios_Cat
    @Curios_Cat a month ago +353

    Can't wait for ARC Battlemage!

    • @IntelArcTesting
      @IntelArcTesting a month ago +20

      Will definitely buy one or multiple

    • @102728
      @102728 a month ago +31

      There's a chance that the lessons learned from Alchemist - especially driver development - came too late in Battlemage's development to be included in the hardware design. No idea how relevant drivers are in designing the hardware, but if they are, Battlemage was probably defined long before all the current development of Alchemist drivers (and the lessons learned there) happened. Of course there will be a bunch of improvements regardless (otherwise why design new hardware), but Celestial is where I'd expect Intel could be competitive enough to sell GPUs with reasonable margins. I'd suggest comparing Alchemist die sizes and nodes to GeForce/Radeon cards with similar performance. IIRC, Alchemist uses around 50% larger dies on a similar node. That's where they're probably losing a lot of money, making Alchemist likely not very profitable, if at all.

    • @operator8014
      @operator8014 a month ago +8

      Get a couple of them in SLI and you'll be ready for ARCs Fatalis.

    • @hololightful
      @hololightful a month ago

      I am really hoping for Battlemage as well

    • @DigitalJedi
      @DigitalJedi a month ago +23

      @@102728 The hardware team doesn't worry too much about driver-level stuff. They are mostly focused on making changes to the structure of the GPU logic to keep things moving through more easily.
      Battlemage will make improvements to things like the dispatch logic Tom mentioned, as well as things like caching subsystems, predictive logic, and specific blocks like TMUs or RT accelerators.
      Battlemage will have several lessons learned from Alchemist under the hood, but you are correct to assume they won't have everything right yet. The Arc team is fighting uphill into decades of industry progression, so I fully expect them to need a few goes at getting everything down.

  • @sword0948
    @sword0948 a month ago +343

    I honestly just love this miniseries. I'm currently studying to become a software engineer, and watching your videos makes the learning process very interesting and fun. Thanks to you and Tom for bringing us these videos, and I hope we'll see more of them!

    • @GamersNexus
      @GamersNexus  a month ago +83

      That's so awesome to hear! That our content can be helpful at all in early stages of education is a big compliment. Keep studying!

  • @Kendop16
    @Kendop16 a month ago +195

    Great video and explainer!
    As someone who bought an Arc A770 last year and has seen huge improvements in its drivers, it gives me great confidence seeing Tom's passion and drive for Intel GPUs, and the future of Intel graphics can only get better with professionals like him at Intel. 👍

    • @Jensen761
      @Jensen761 a month ago +2

      I'm hoping so cause I would love to try them in future!

    • @KingFeraligator
      @KingFeraligator a month ago +1

      Please tell me you didn't buy it as your main GPU...

    • @82_930
      @82_930 a month ago +7

      As somebody who's had two A770s (A770 Titan, A770 LE/Founders), this is true. I'd confidently say this card is faster than the 3060 Ti and comparable to the RTX 4060 Ti

    • @Kendop16
      @Kendop16 a month ago +13

      @@KingFeraligator No,I can't tell you. You'll need to say pretty please. Then I might tell you...

    • @volvot6rdesignawd702
      @volvot6rdesignawd702 a month ago +2

      I'm looking forward to the next-gen Arc. My current Arc A770 16GB is a great little card. My main 7800X3D and 7900 XTX is a great combo, but I've been using my 14600KF and Arc A770 16GB build and it has been chugging along beautifully!!

  • @zivzulander
    @zivzulander a month ago +167

    This graphics miniseries with TAP is great. Excellent explanations and presentation, and Steve is following up with all the right questions.

    • @GamersNexus
      @GamersNexus  a month ago +42

      Thank you! Great educational opportunity for us as well!

    • @RetroPaul6502
      @RetroPaul6502 a month ago +4

      Impressively good info. The register spills are spills to memory, which is why it's bad and slow. The UMD is technically one driver, but yes, it is made up of DX12, OCL, Vulkan, etc. components, and a given component isn't necessarily loaded on every call.

  • @lian2780
    @lian2780 a month ago +72

    12:38 All that's happening per frame. PER FRAME, silicon + electrons = magic.

    • @GamersNexus
      @GamersNexus  a month ago +49

      Magic is definitely the best explanation for it. It is absolutely unbelievable when you think about how much happens to generate a single frame of a video game.

    • @Geo64x
      @Geo64x a month ago +1

      Certainly more electrons than silicon!

    • @KARLOSPCgame
      @KARLOSPCgame a month ago

      Magic is a lot of electric tickles

    • @xchronox0
      @xchronox0 a month ago +10

      @@GamersNexus I mean, it is magic. We're making silicon runes, engraving them with magical sigils, then running energy through it to produce effects that can't be done in nature.

  • @vitormoreno1244
    @vitormoreno1244 a month ago +102

    This made me realize the genius work from Valve on DXVK, the amount of work it needed to be seamless to the games is outstanding. Thank you Steve for the amazing interview.

    • @phoenixrising4995
      @phoenixrising4995 a month ago +16

      That's what their new driver's legacy mode is. It would have been nice for them to just ship their DX9-11 DXVK library as a signed Windows module and focus on making DX12, Vulkan, and OpenGL better. Fewer surfaces to maintain and find improvements for.

    • @MajesticBowler
      @MajesticBowler a month ago

      Most of that work was done by Microsoft. MS created a DirectX HLSL to Vulkan SPIR-V compiler and released it as open source in 2018. HLSL is the DirectX shader language; SPIR-V is the shader language created for Vulkan. MS created this compiler for the Windows Subsystem for Linux (WSL). The compiler can translate between the DirectX and Vulkan shader worlds. DirectXShaderCompiler currently has more than 600 GitHub forks, created by Valve, Sony, Apple, and more
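
For anyone curious what that toolchain looks like in practice, below is a minimal sketch of driving DXC's SPIR-V backend from a script. The file names and shader profile are assumptions for illustration; check your installed dxc for the exact flags it supports:

```python
# Minimal sketch: compile an HLSL pixel shader to SPIR-V with DXC.
# Assumes dxc is on PATH and "shader.hlsl" exists with an entry point "main".
import subprocess

cmd = [
    "dxc",
    "-T", "ps_6_0",       # target profile: pixel shader, shader model 6.0
    "-E", "main",         # entry point name
    "-spirv",             # emit SPIR-V instead of DXIL
    "-Fo", "shader.spv",  # output file
    "shader.hlsl",
]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    print("dxc failed:\n" + result.stderr)
else:
    print("wrote shader.spv (drop -spirv to get a DXIL blob instead)")
```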

  • @spudhead169
    @spudhead169 a month ago +7

    I noticed something Steve said "I didn't feel confident in my understanding to....". THIS is what separates GN from most other channels and I'm not talking about only PC hardware, just technology in general. A lot of others will just do some half a$$ed job of compiling information and present it without a clue about what they're talking about. Steve doesn't treat us like that. Massive respect.

  • @BrownStain_Silver
    @BrownStain_Silver a month ago +12

    This was a great video! Thanks for having Tom on GN. I learned quite a bit in this one. On a separate note, I got an A770 from someone who gave up on Intel early. I let it sit in the box for 6-9 months and it aged like fine wine. It's really cool to see the performance getting unlocked by the hardworking people at Intel.

  • @ShrkBiT
    @ShrkBiT a month ago +59

    I was talking to a friend about drivers and how they work not too long ago, but couldn't really convey the finer details (not that I knew them to this extent anyway). This video explains it so well in "normal people speak." I really enjoy the way these videos and difficult topics are framed and laid out for everyone to understand. Awesome reporting!
    Edit: I really appreciate that Intel is intent on letting people know what they are doing with Arc and really making a solid attempt at the GPU market. More competition = better, and with AMD and NVIDIA so focused on AI now, barely noticing their "ol' reliable" gamer market, we need someone to keep their attention invested there before they get their market share snatched away long term.

    • @Safetytrousers
      @Safetytrousers a month ago

      Nvidia at least have greatly increased their workforce over the past few years. They can easily have people 100% focused on gaming GPUs and software.

    • @SelecaoOfMidas
      @SelecaoOfMidas a month ago

      Only Nvidia has explicitly signaled that gamers are on the back burner, and their projected priorities sell it. Don't see that with AMD pursuing AI, at least not yet.

    • @Safetytrousers
      @Safetytrousers a month ago +1

      ''Number one, RTX was invented for gamers and for RTX, the technology, the most important technology is AI. Without AI, we could not do ray tracing in real time. It was not even possible. And the first AI project in our company-the number one AI focus was Deep Learning Super Sampling (DLSS). Deep learning. That is the pillar of RTX.'' Jensen Huang 2023@@SelecaoOfMidas

  • @TurntableTV
    @TurntableTV a month ago +46

    What I love about GN is that they respect their audience's intelligence and have the guts to dive into more complex subjects such as this one. Thank you, Steve! This stuff is truly fascinating.

  • @Singurarity88
    @Singurarity88 a month ago +72

    I love these engineering insights. Keep it up Steve and thanks!

  • @PlantainSupernova
    @PlantainSupernova a month ago +29

    Just finished watching the video from earlier in the week. Perfect time to jump into this one. *THANKS STEVE!*

  • @zodwraith5745
    @zodwraith5745 a month ago +3

    Since the first time I saw TAP in a video, I've thought he should be the face of Intel. No marketing veil, just honest and open descriptions of the inner workings of a GPU. He genuinely wants you to understand what a GPU is and what it does, like you would expect from a teacher.
    I mean, of course his appearance on certain channels is intended as some level of marketing, but it's not full of the normal bullshit where they ridiculously exaggerate everything about what they're selling. When he appeared on PCWorld, I asked in the Q&A how Arc performed in VR and he straight up said "Do not buy Arc for VR." You would _NEVER_ see that from an AMD or Nvidia rep, and you gotta respect when he shows you respect with a blunt, honest answer like that.

  • @Danger8255
    @Danger8255 a month ago +12

    My brain hurts from this, but I have a better understanding and appreciate how much goes into driver optimizations. Hats off to Tom.

  • @graphicarc
    @graphicarc a month ago +57

    Great! More Intel Arc news on GN - always excited when I see the notification.

    • @IntelArcTesting
      @IntelArcTesting a month ago +4

      You and me both fellow arc user.

    • @1337Superfly
      @1337Superfly a month ago

      So am I. Tom Peterson is just a brilliant communicator!

  • @GamingBits-py1or
    @GamingBits-py1or a month ago +24

    I can only repeat: incredible respect for Intel for working on this stuff even after the initial failure and issues. I hope the current financial losses don't stop you guys from keeping it up and coming out with new hardware at some point.
    I will buy it, no matter what. You guys worked hard to convince us you don't just give up, and you will have earned that money!

    • @_TrueDesire_
      @_TrueDesire_ a month ago

      Unless Intel pays its fines and owns its mistakes, I will never buy anything Intel related...
      “The European Commission has re-imposed a fine of around €376.36 million on Intel for a previously established abuse of dominant position in the market for computer chips called x86 central processing units ('CPUs')” - from September 2023

  • @maxmike181
    @maxmike181 a month ago +4

    This was fantastic, and seeing how invested the Arc team is has me very excited for Battlemage

  • @macgynan
    @macgynan a month ago +8

    I love these in-depth tech talks. Tom does such a fantastic job explaining and is even kind enough to bring slides. Can't wait to see more!

  • @PXAbstraction
    @PXAbstraction a month ago +13

    This was one of the most fascinating deep dives I've watched in quite a while. Tom is an excellent presenter and does a great job of taking really complex stuff and making it understandable.

  • @Lintary
    @Lintary a month ago +12

    I love these sorts of dives. They might not ever make us experts in the field, but they give you a good sense of just how complex all this stuff is, which in turn allows you to get a better perspective on things.

  • @PXAbstraction
    @PXAbstraction a month ago +36

    21:40 That's some subtle shade thrown right there. Nice.

    • @Fluke1x
      @Fluke1x a month ago +3

      What is that sound, for those of us who don't know?

    • @KarlDag
      @KarlDag a month ago +6

      ​@@Fluke1x Starfield's theme song

    • @WayStedYou
      @WayStedYou a month ago +2

      @@Fluke1x They also flickered Starfield on the screen right after

  • @IntelArcTesting
    @IntelArcTesting a month ago +13

    Always love seeing some Arc content. Thanks Steve and Tom for these types of videos

  • @Abrasive-Heat
    @Abrasive-Heat a month ago +8

    This video helps me a lot as a hardware person.
    I don’t do much on the software side. I think I know what it’s doing (educated guessing) but it’s nice to have an explainer of what it’s actually doing.

  • @veganboi3938
    @veganboi3938 a month ago +34

    Saw the notification. Clicked it. Liked it.

  • @snemarch
    @snemarch a month ago +2

    I was *VERY* close to buying an ARC card for my new system build, and Intel letting their engineers participate in videos of this quality and detail increases the likelihood that my next GPU ends up being one of theirs (given reasonable performance, power consumption and driver quality).
    This is really, really good PR, and it's very interesting to watch. Thanks for covering both technical and business/process aspects, and for not dumbing things down. Kudos to both Steve for facilitating a conversation where he's sometimes slightly out of his depth (but still asks relevant questions!), and to Tom for very good explanations of pretty technical topics.

  • @mrwidestrides4802
    @mrwidestrides4802 a month ago +14

    This series is good. Very informative and Mr Intel is a natural teacher. Thanks Steve

  • @kazioo2
    @kazioo2 a month ago +37

    Kudos to GN for refraining from judging whether a game is "optimized," because it's a difficult thing to analyze without knowing the internal details of the game, and it's a term often misunderstood by gamers and misinterpreted even by experts. I remember the Digital Foundry guys, who are usually very knowledgeable, criticizing a completely empty open-world UE5 demo for not utilizing all CPU cores and comparing it to Cyberpunk - with tons of NPCs and various simulations and systems running in the background. That UE5 demo literally had nothing to run on other cores, because there was nothing except graphics and one player entity, so even if their conclusion about the engine was correct, the example used was not and could not be used for that kind of comparison. It was bizarre to see this kind of cluelessness from some of the best "game optimization" nerds in the media.

    • @Mallchad
      @Mallchad a month ago +4

      There is always something to be run on other cores in the vast majority of cases. The question is not IF you can run things on other cores, it's more like _is it worth the complexity?_ - because trying to juggle data flow and data race problems can make a multithreaded program slower than a single-threaded one if you get it wrong. But it's still possible to make it go faster. This is especially true with game engines like Unreal, because the very nature of a game engine shatters the program into many thousands of individualized components, like transform components, which are relatively straightforward to parallelize. So much so that Unreal does it by default...
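
As a toy illustration of that point, the sketch below splits independent per-entity transform updates across CPU cores. The entity data and tick rate are made up, and real engines do this with job systems and careful data layout rather than Python processes:

```python
# Toy sketch: per-entity work with no shared state parallelizes cleanly.
import multiprocessing as mp

DT = 1.0 / 60.0  # one 60 Hz tick

def integrate(entity):
    # entity = ((px, py, pz), (vx, vy, vz)); returns the advanced transform
    (px, py, pz), (vx, vy, vz) = entity
    return ((px + vx * DT, py + vy * DT, pz + vz * DT), (vx, vy, vz))

if __name__ == "__main__":
    entities = [((0.0, float(i), 0.0), (1.0, 2.0, 3.0)) for i in range(100_000)]
    with mp.Pool() as pool:
        # Each worker only touches its own chunk, so there are no data races
        # to juggle; chunksize keeps the per-task overhead small.
        updated = pool.map(integrate, entities, chunksize=4096)
    print(updated[0])
```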

    • @MSquared135
      @MSquared135 a month ago +3

      @@Mallchad Agreed. What really hinders the ambitious UE4 games like Jedi Survivor, Callisto Protocol, etc. is the engine's poor CPU core utilization. It's indicative of the engine's origins in the early-to-mid 2010s, when CPU clock speed was more important than utilization of the available cores.

    • @Mallchad
      @Mallchad a month ago

      @@MSquared135 It's not really the engine's fault. The engine is just a shell you bolt onto what you eventually call your game; it's up to the game programmer to make it go faster.

    • @jcm2606
      @jcm2606 a month ago +2

      @@Mallchad In some cases it is the engine's fault. An engine is more along the lines of the _framework or scaffolding_ that you build your game within. If that framework does not scale properly across multiple threads then that will manifest as a CPU bottleneck when you eventually stress the framework in exactly the right way that it can't scale properly, and the only remedy is to either stop stressing the framework (dumb down your AI behaviours, reduce the number of AI agents, reuse meshes more to take advantage of instanced draw calls more, etc) or replace parts of the framework with your own in-house code (write your own AI system that can scale the number of agents well, write your own GPU-driven rendering pipeline that can handle many instances of different meshes well).

    • @Mallchad
      @Mallchad a month ago +1

      @@jcm2606 Yes, "replace parts of the framework with your own in-house code" is why I don't see it as the engine's fault.
      Unless you're using a completely proprietary engine where you have no ability to access or even read the code, you usually have options, and even highly proprietary engines usually leave you with options for engine tinkering. The game and the engine are one and the same, and must be treated as such.

  • @goldsquadron
    @goldsquadron a month ago +4

    What an amazingly informative video. Tom does a great job of breaking down very complicated topics, and Steve is asking all the right questions.

  • @halko1
    @halko1 a month ago +4

    This is the kind of content I love. Technical, but so well explained and presented that it's easy to follow and I learn new stuff. This is the best kind of content on YouTube. The best.

  • @iogarchi
    @iogarchi a month ago

    Excellent video, thank you for giving us the opportunity to get a general idea of how the relationship between software and hardware is established when it comes to video games and GPUs. Thank you so much to both of you and your teams.

  • @darkoz1692
    @darkoz1692 a month ago +3

    It's refreshing to see someone honestly talk about their products. As long as Intel offers their GPUs, I won't be going back to the other two.

  • @kurgo_
    @kurgo_ a month ago +12

    I was basically the "they're the same image" meme when that before-and-after slide came up; I was puzzled until it was zoomed in, haha.
    Honestly, I got lost pretty early (I understand nothing about these things, and goodness knows a scientific/analytic mind is not one of my strong points), but even with my limited understanding it was really interesting. Another example of "yeah, just optimise it" when the work behind it is truly daunting.
    Thanks for the interview, Steve. Speaking of optimisation, do you think you could have a game developer on and ask them why a game collection that was 3 GB was turned into one that's dozens of GB heavier and doesn't even work properly? :p It might not be as technical as this, but goodness knows I'd like to understand why game devs have decided compression and optimisation aren't necessary; maybe there's a graph for that too. Cheers!

    • @Ruhigengeist
      @Ruhigengeist a month ago +4

      An increase in install size like that is usually because the art assets are higher resolution. For example you might have had 256x256 pixel (width x height) images for textures (the stuff that gives color & depth to 3D models and surfaces), but they might have shipped higher resolution textures like 2048x2048. You might think that's 8x bigger because 256 -> 2048 but actually it's about 64x bigger because it's two dimensions (same idea as 1080p to 4K isn't 2x more pixels, it's 4x more).
      So think: for every texture in the game - and you might have thousands or tens of thousands of them - if all of them are made bigger, then the game size balloons really fast. Higher-resolution textures don't necessarily improve things on their own, though, because the new texture needs to actually have more detail to make use of the extra size.
      If you just resize it and that's it, you still have the same amount of detail as you had before, which won't look any better at all. So you need to have an artist go in and redo the texture to make it more detailed and less blocky. And now there's AI tech that can do some of these upscaling tasks (same idea as DLSS upscaling, but applied by game devs to the actual game assets).
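
The same arithmetic as a tiny Python check (the 256 -> 2048 jump and the uncompressed 4-bytes-per-pixel format are just the example numbers from the comment above):

```python
# Going from 256x256 to 2048x2048 scales both dimensions by 8, so the pixel
# count (and raw storage) grows by 8 * 8 = 64x.
old_w = old_h = 256
new_w = new_h = 2048

scale = (new_w * new_h) / (old_w * old_h)
print(f"pixel count scale factor: {scale:.0f}x")  # 64x

bytes_per_pixel = 4  # uncompressed RGBA8, before any block compression
old_size = old_w * old_h * bytes_per_pixel
new_size = new_w * new_h * bytes_per_pixel
print(f"{old_size / 1024:.0f} KiB -> {new_size / 1024 / 1024:.0f} MiB per texture")
# Multiply that by thousands of textures and the install size balloons.
```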

  • @jacoavo7875
    @jacoavo7875 a month ago +6

    Man, this is the kind of content YouTube was made for. Outstanding and interesting work here, from both Steve and Tom!

  • @anastassiosroumboutsos8288
    @anastassiosroumboutsos8288 a month ago +1

    Fantastic video. Makes you realize, and respect, the amount of research needed to compete.

  • @TehPredrrr
    @TehPredrrr a month ago +5

    Please more videos like these, they are genuinely enjoyable and informative!

  • @notjustforhackers4252
    @notjustforhackers4252 a month ago +56

    Are we going to get a discussion about MESA support?

    • @DodoGTA
      @DodoGTA a month ago +8

      Especially the sparse implementation there (with TR-TT and VM_BIND) 🐸

    • @GamersNexus
      @GamersNexus  a month ago +54

      I don't feel qualified enough yet, but can study it more. Maybe that'd be a good one for Wendell to help me with!

    • @asunavk69
      @asunavk69 a month ago +15

      @@GamersNexus as a linux user, much appreciated and would be content to see a video on that one too :).

    • @olnnn
      @olnnn a month ago +14

      @@GamersNexus An interview with one or more of the people working on MESA (not specifically intel ones) would be awesome

    • @krishnachittur
      @krishnachittur a month ago +8

      +1 to all the Mesa comments, would really appreciate some educational content there!

  • @Akkbar21
    @Akkbar21 a month ago +1

    I LOVE this kind of in-depth and well informed break down of complex concepts like this. Ty to everyone involved.

  • @SToad
    @SToad a month ago +1

    I love this series; I didn't know there were more coming up after the last one. Definitely enjoying Tom's in-depth knowledge here. Thanks for sharing!

  • @TheBlackRogue96
    @TheBlackRogue96 a month ago +4

    Love seeing Tom on camera; he always has something new and interesting to say. I do have a follow-up question about when something is the driver team's responsibility to improve versus the application team's responsibility. Would it not be a more efficient use of your time to identify the reasons people are getting poor performance, e.g. Tom's example of register spilling, and write tools and documentation that tell app developers "Hey, you're doing this wrong"? That way, over time, more people write better-performing code, rather than relying on the driver team to fix the issue for them.

  • @eddiebreeg3885
    @eddiebreeg3885 a month ago +3

    As a game developer myself, I really love to see these conversations; it's not every day I have the opportunity to hear from hardware engineers!
    I do feel it's important to clarify one thing I think wasn't super clear: just as every car uses an engine of some sort, the same goes for games. The engine is the library that provides programmers like me with the tools we need to run the game in the first place. What Tom meant by "a lot of games use an engine" is "a lot of games use *publicly available* engines" (that's your Unreal and Unity and such). It's worth noting that a lot of game studios build their own if they have the resources to do so; there are quite a few advantages to that approach.

  • @envirovore
    @envirovore a month ago +1

    Thoroughly enjoying this series so far - a great in-depth look at what's going on behind the scenes (frames?). Looking forward to the third part, and hopefully more deep dives like this one and the dive into system latency with the NVIDIA rep.
    Great stuff!

  • @jayhsyn
    @jayhsyn a month ago

    Incredible video! I learned a lot.
    Thank you for the hard work Steve and team, and thank you Tom for the in depth explanations!

  • @ryanspencer6778
    @ryanspencer6778 a month ago +8

    As someone who knows just enough about this stuff to understand what TAP is saying at a surface level, this is possibly one of GN's best videos. The average gamer probably won't understand much of this and that's OK, but as a software engineering student and GPU nerd, this is so cool.

  • @Azureskies01
    @Azureskies01 a month ago +37

    It really goes to show how well AMD and NVIDIA have done, given just how complex all this is, and Intel is trying to get into it and having to do it pretty much from scratch.

    • @AvocadoBondage
      @AvocadoBondage a month ago +6

      Or, if anything, it shows how bad they are, considering Intel has basically redone its entire driver multiple times over the past two or so years while AMD and NVIDIA still have issues that have been prevalent for literal years.
      Hopefully Intel kicks them into shape

    • @Azureskies01
      @Azureskies01 a month ago +6

      @@AvocadoBondage Saying Intel doesn't have problems, or that somehow, some way, they will get their drivers to be bug-free, is... funny, to say the least.

    • @PixelatedWolf2077
      @PixelatedWolf2077 a month ago +1

      Well, that's just because they JUST got into consumer GPU hardware. They're VERY new to this, so having a DLSS and FSR competitor in their first card generation is already outstanding. On top of that, they have to catch up on drivers, but it seems like their optimizations have been massive, considering how poorly the cards originally performed when they launched.

    • @Azureskies01
      @Azureskies01 a month ago +1

      @@PixelatedWolf2077 They are not in any way "new" to GPUs as they have been putting out APUs for longer than AMD has. It is however the first time intel has been serious about making their GPUs work well.

    • @PixelatedWolf2077
      @PixelatedWolf2077 a month ago +1

      @Azureskies01 Well, originally the iGPU for Intel was a lot like a GT 1030: its sole purpose was driving a display. Intel, however, realized they could benefit quite a bit by trying to make a proper iGPU.
      That first attempt was Iris, which was a step in the right direction - better than the normal UHD graphics of the time. Then their silly little productivity-only GPU came out and it turned out it wasn't bad, so that's what got the ball rolling toward making Arc. Arc didn't come until 2022, however. So in reality, they haven't had much experience in either driver tech or making a proper gaming GPU.

  • @ddpwe5269
    @ddpwe5269 a month ago

    These are great videos! Even though I may not understand everything that is being said, it at least shows us that not only are the companies doing as much as they can, but they're showing it to someone who understands them well enough to ask questions and confirm what they're being told, to a degree of course. It's also great when you have someone like Tom who is enthusiastic about the work he and his teams do!

  • @Green-Mountainboy
    @Green-Mountainboy a month ago +1

    I have been a gamer almost from the very beginning of gaming back on DOS. I have very little understanding of how things actually work these days, but I still love listening to people like this.

  • @NervousNoodles
    @NervousNoodles a month ago +6

    Videos like this are precisely why I'm a subscriber. Can't wait to see more!

  • @highcue
    @highcue a month ago +5

    So, if I get this right, the drivers are "driving" the cards. Those DLLs and kernels seem to be the instructions on how to handle the "road."
    It's almost as if you had to learn to drive a new car in a new city for every new game. And every time you come into a new city, you need to learn to drive differently - not only because of the car but also because of the roads, signs, rules, temperature, and terrain...
    Sometimes it's not too different from one city to another, but other times it's completely wild. What could be the safest maneuver in one city could lead to your death in another one.

    @AnjanaDharmasiri85 a month ago +1
    @AnjanaDharmasiri85 Před měsícem +1

    Great video guys. Very Informative! Keep them coming!!! 🔥🔥🔥

  • @Moostuffas
    @Moostuffas a month ago +1

    Super interesting. Thank you for taking the time for this content.

  • @aznmarinex2
    @aznmarinex2 a month ago +5

    Thanks for the Selfie in Asia Steve.

  • @SlocusST
    @SlocusST a month ago +9

    I may or may not currently work for the big blue silicon corp, but having someone like Tom on the GPU team is essential. When Raja Koduri left, ARC's future felt way less certain, but I think these types of improvements and attention in the gaming space translate to improvements in the AI compute space, so upper management sees the financial appeal of getting their hands dirty in the gaming space too. I'm actually stoked for the continued improvement to Alchemist and the release of Battlemage.

  • @mostevil1082
    @mostevil1082 a month ago

    This is amazing stuff. Simplifies some very complex subjects for us.
    Great to see them making progress too, more competition in the space.

  • @NextGenGamerss
    @NextGenGamerss a month ago +1

    More of this. We need people from the industries we are deeply invested in to be on this channel talking about the stuff we care about, especially in deep dives.

  • @beardedgaming3741
    @beardedgaming3741 a month ago +9

    I love that Intel is working so hard at this... it's so great to see. For my next budget build, I think I'd be in a place to go Intel now.

    • @MaxUmbra
      @MaxUmbra a month ago +2

      Same thoughts.
      Very optimistic for their future

  • @Deltarious
    @Deltarious a month ago +3

    I personally have actually said "this game is optimised!" about the initial release of Overwatch (2016). For how good it looked it really was pretty damn well optimised and still looked pretty great even on lower settings on very old hardware while still delivering good or even great FPS. I guess the only area where it lost some points from me was it used to have some very odd random CTDs for no apparent reason.

  • @jakephills3013
    @jakephills3013 a month ago

    I loved this series. I left it with so much knowledge about GPU image processing, render times, and optimizations. Amazing. Thanks a lot, Steve and Tom 🥰. Will wait for more exciting stuff like this

  • @haikopaiko
    @haikopaiko a month ago

    Great video! Thanks Steve and the team! Great explanation from Tom 🤙

  • @slumlord2625
    @slumlord2625 a month ago +4

    these videos with tom are great!

  • @EastyyBlogspot
    @EastyyBlogspot a month ago +8

    Thank you for this... the compiling shaders thing has always puzzled me, as while it existed before, nowadays it is everywhere... I just wish it was done behind the scenes rather than as a bar when starting up the game lol

    • @GamersNexus
      @GamersNexus  a month ago +17

      I do like knowing though that the game isn't quite ready yet. Better than waiting and not being sure if it's working or stuck!

    • @EastyyBlogspot
      @EastyyBlogspot a month ago +1

      I did always wonder whether hard drives make a difference to performance and compilation, as I've noticed over the past few years that games on HDDs have more issues than on SSDs

    • @zivzulander
      @zivzulander a month ago

      Yeah I wondered about this as well. I don't think I've even heard it as a phrase until the past few years. That was more a behind the scenes thing in the past.

    • @WereCatStudio
      @WereCatStudio a month ago +9

      To be fair I'd rather have the shader compile happen before I start playing rather than have a stutter fiesta when playing the game... Jedi Survivor for example. I think many games are way more shader heavy than they used to be so that can be part of the reason why this happens often now. AFAIK it has been a problem with UE4 and now with UE5 engine games mostly.

    • @EastyyBlogspot
      @EastyyBlogspot a month ago

      @zivzulander The first place I remember seeing it was the PS3 emulator RPCS3, though some have said it has been around for a while with some older Battlefield games... but now almost every game has it

  • @itsdeonlol
    @itsdeonlol a month ago +2

    Thanks Steve & Tom!

  • @garretthazlett9116
    @garretthazlett9116 a month ago +2

    Love the "deeper dives" here! great stuff!

  • @NuSpirit_
    @NuSpirit_ a month ago +6

    Great video. My only two remaining questions are when Battlemage is coming and whether the drivers will make it shine 😂 (jk jk)
    But seriously, I really want to buy an Intel GPU since the prices are great, but I'm still discouraged by the prevalent issues, and I don't want to risk the next big field-of-stars game not working at launch.

  • @ArthurM1863
    @ArthurM1863 a month ago +5

    Tom Petersen is amazing. I've followed him since he was at Nvidia, when he did a review of one of my first GPUs, the GTX 650 Ti. His great commentary about products and software, mixed with silly jokes here and there, makes him the perfect man for the job lol

  • @TheChodax
    @TheChodax a month ago

    Appreciate you giving your time Tom. Great vid as always Steve.

  • @Leftycpe
    @Leftycpe a month ago

    Always enjoy the videos with Tom. The details at times are well beyond what I understand, but that's just more fuel for Google searches. I've had an A750 for a couple of months now and have been pretty pleased with its performance on my old DX9-based titles. Fantastic deep dive into drivers!

  • @CXDezign
    @CXDezign a month ago +8

    I bought a laptop with an Intel A370M GPU. This was over a year and a half ago, I still experience crashes on Photoshop that halt the entire system to this day and cannot for the life of me reach out to anyone to diagnose the issue...

    • @Codec264
      @Codec264 a month ago

      If your drivers are all up to date, most likely you have bad silicon and need to look for a warranty replacement from the manufacturer

    • @thegeforce6625
      @thegeforce6625 a month ago

      @@Codec264 Or Intel or Adobe hasn't gotten around to fixing the crashing bug yet.

  • @guimello_silva
    @guimello_silva a month ago +1

    That was amazing on several levels. Thank you all for discussing this.

  • @ddthegreat
    @ddthegreat Před měsícem

    This is the video I’ve wanted to see for a long time. Thank you so much Steve and Tom for putting this together!

  • @Wild_Cat
    @Wild_Cat Před měsícem +4

    Another GN x TAP video so quickly?! Thanks Steve! PS. It would be wonderful if you took us behind the scenes at the Intel graphics dev lab office

  • @nielsbishere
    @nielsbishere Před měsícem +7

    5:50, just to add: not all shaders are text when they land in driver land. For DirectX 12 and Vulkan, the text has already been compiled to an intermediary binary format (DXIL or SPIR-V), which is easier for the driver to handle since fewer steps are required to convert it to its own instructions. But it still needs that compilation step. Text is probably easier to explain, though (and it's relevant to OpenGL).
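    (To make that concrete, here is a minimal, hedged sketch of what the driver actually receives in Vulkan: pre-compiled SPIR-V words rather than shader text. The `device` handle and the `shader.spv` file are placeholders for illustration.)

```cpp
// Hedged illustration: a Vulkan driver never sees GLSL text. The engine hands
// it SPIR-V that was compiled offline; the driver still lowers that SPIR-V to
// the GPU's own ISA later, when the pipeline is built.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

VkShaderModule createShaderModule(VkDevice device) {
    // shader.spv produced offline, e.g. `glslangValidator -V shader.frag -o shader.spv`
    std::ifstream in("shader.spv", std::ios::binary);
    std::vector<char> spirv((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());

    VkShaderModuleCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = spirv.size();                                  // size in bytes
    info.pCode = reinterpret_cast<const uint32_t*>(spirv.data());  // SPIR-V is 32-bit words

    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, nullptr, &module);
    return module;  // ISA compilation still happens later, at pipeline creation
}
```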

    • @SimonBuchanNz
      @SimonBuchanNz Před 28 dny

      In contrast, there's WebGL / WebGPU, where the browser parses the shader source (GLSL / WGSL) on demand, then internally translates it into the browser's target API shader language (HLSL, GLSL, whatever Metal uses) and then compiles that again 😢
      It even needs to translate when it's GLSL to GLSL, for safety and for small changes in the flavor of the language.
      Chrome even uses something called ANGLE, which implements OpenGL in terms of DirectX for... reasons. So you can in theory end up translating GLSL to GLSL to HLSL to DXIL 😢
      Hopefully browsers get good at caching all this nonsense!

    • @nielsbishere
      @nielsbishere Před 28 dny

      @SimonBuchanNz Such a giant step back. DXIL and SPIR-V were created specifically to remove the compiler from that part of the process. Now you only have to deal with DXC and glslangValidator directly (you can just file an issue rather than contact IHVs), instead of one compiler per vendor. The web never seems to learn from mistakes people already found out in the real world :/

    • @SimonBuchanNz
      @SimonBuchanNz Před 28 dny

      @@nielsbishere well, there *are* reasons (security, a lot of the time) ... but I wouldn't be surprised if *eventually* there's a binary form of WGSL, and in theory the browsers could write their own optimizing emitters for DXIL and SPIRV (dunno about Metal) to skip another compile - they do way harder stuff all the time.
      A lot of the time, though, this stuff is just politics: I think there's a hint of a suggestion that SPIRV might have been vetoed because it might make Apple look worse due to the extra translation they would need due to not having Vulkan (the obvious response being ... why not?)

    • @nielsbishere
      @nielsbishere Před 28 dny

      @SimonBuchanNz I think in the end it's because Apple wants to control the direction the API can go. With Khronos they have far less influence, which they don't like. Apple seems very determined to cut all ties with Khronos, and I don't really know why. Security might be true in some cases, but I've run into enough driver crashes to know that this probably also increases the likelihood of bigger issues because of the additional layers in between (seeing as binary -> ISA already goes wrong). Now you'll have to deal with every browser having different compilers, and every backend and every vendor having different behavior... with SPIR-V that would have been reduced to one, maybe two compilers (DXC for HLSL and glslangValidator for GLSL).

  • @ZinoAmare
    @ZinoAmare Před měsícem +2

    I always love these videos with Tom giving us info about Arc. I am very excited for Arc and hope to see a Paladin arctype.

  • @kevinmorphet8083
    @kevinmorphet8083 Před měsícem +1

    Wow. This guy is a great communicator. Very informative. What a huge amount of work.

  • @mofstar8683
    @mofstar8683 Před měsícem +7

    Thanks for your continued coverage of Arc! In your most recent Arc video you talked about MSAA being broken for Arc in GTA and how the Intel team responded and said it was a GTA-exclusive issue. Please tell the team this isn't exclusive to GTA: in some older games like Watch Dogs 1 and Assassin's Creed Black Flag, MSAA is completely broken on Arc!

    • @neuronmind
      @neuronmind Před měsícem +4

      The Black Flag AA issue is old and occurred on AMD drivers many years ago. It's bad programming from Ubisoft, not bad drivers.

  • @omniscient9533
    @omniscient9533 Před měsícem +4

    Really thinking of ARC for my next card. This community outreach makes it feel less like a corporation

    • @redslate
      @redslate Před měsícem +1

      The ARC Team seems to be genuinely committed to their product.

  • @svaiskau
    @svaiskau Před měsícem +2

    The Tom videos are an awesome peek under the hood. Thanks for continuing to do these, Steve, it's great content for the community! And I also want to acknowledge that Tom has continued to improve on camera since the early episodes, when he was using a more Arc-centric framing for the discussion, versus the more recent episodes where he discusses the basic concepts in neutral/general terms and then uses Intel's ongoing efforts as a case study. More Tom!

  • @martyfliesdrones
    @martyfliesdrones Před měsícem

    Super cool video, I love hearing this guy talk about these really technical challenges.

  • @brendanconaway
    @brendanconaway Před měsícem +4

    Tom Petersen seems like such a nice guy :)

  • @lassenlautta
    @lassenlautta Před měsícem +6

    Just got an A770 recently... this is interesting

    • @IntelArcTesting
      @IntelArcTesting Před měsícem

      Great choice, hope you will have a great time with it

  • @MalamIbnMalam
    @MalamIbnMalam Před měsícem

    Great review! I learned a lot of this stuff in my undergrad computer science courses. This guy can take difficult concepts and explain them in a simplified fashion.

  • @Psychx_
    @Psychx_ Před měsícem

    I love these technical deep dives! TYVM Steve & Crew + Tom. Please do also occasionally bring in Wendell again.

  • @tek_lynx4225
    @tek_lynx4225 Před měsícem +6

    Please grill him on his company's DX3/5/7 and OpenGL 1.x/2.x emulation/wrapping. They never want to talk about it.

  • @Hobo_X
    @Hobo_X Před měsícem +3

    The fact that Tom is still doing this stuff (and Intel is letting him) makes me think they still have a long-term plan to _eventually_ get there with their graphics division. I truly believe if Intel holds out a bit and doesn't kill Arc too fast, they can carve out a chunk of the market by focusing on price/perf gamers. From that starting position, it can grow further. Nvidia needs a kick in the pants and to lose sales to snap back to reality.

  • @HamBown
    @HamBown Před měsícem

    Super interesting and appropriately nerdy content, love it. It is great to see companies who are willing to give some behind the scenes information.

  • @TYahn91942
    @TYahn91942 Před měsícem

    Thank you Steve, I love these videos. They help a lot with understanding how one piece of hardware talks to the rest of the computer. AWESOME!

  • @nocturneuh
    @nocturneuh Před měsícem +3

    Doom Eternal was OPTIMIZED. (🙏 Thank you ID).

  • @EastyyBlogspot
    @EastyyBlogspot Před měsícem +6

    I do wonder how much the game engine matters. For example, is UE4 more optimised than UE5 purely because it has been around longer and is more mature? I hear so much about games being ported from 4 to 5, and I wonder whether performance takes a hit just from doing that

    • @Cinnamon1080
      @Cinnamon1080 Před měsícem +1

      UE5 games are heavier because the modern features Devs choose to use are heavier.

    • @GamersNexus
      @GamersNexus  Před měsícem +5

      Great question. Without being in a position to directly answer, I'd wager that UE5 is probably more optimized even in spite of UE4's maturity, just because UE5 should be rolling forward all of the optimizations from UE4. I do think there are probably situations where your thought is right though: if a dev has the option of a mature engine versus a totally ground-up build, there are probably things better optimized on the older one.

    • @kazioo2
      @kazioo2 Před měsícem +1

      Another problem many gamers don't understand is that a more flexible modern feature can be more expensive even if visually it looks the same or even worse. A photorealistic room with baked lighting in UE4 will run at 200 FPS and could even look better than the same room in UE5 with Lumen running at 60 FPS, where every light can be dynamically changed and the wall can be demolished. And it's not because UE5 is unoptimized; it's because it does things in real time that used to be baked offline, which allows for more dynamic worlds and gameplay. But when a dev still makes a very static game while using fully dynamic lighting, this new cost doesn't feel justified.

    • @jcm2606
      @jcm2606 Před měsícem +2

      It does matter a lot since the engine dictates the exact calls, shader code, resource usages and such that are fed down into the driver. OpenGL and pre-DX12 drivers can do a lot to wring more performance out of what they've been given by the engine, but at the end of the day it _is_ the engine that dictates what exactly is going on in the frame. DX12 and Vulkan turn this up several notches, too, as they push a lot of stuff out of the driver and back into the engine. Management of API-level command buffers, memory management and frame presentation/buffering under OpenGL and DX11-and-below happened largely in the API runtime itself or the driver based on what the engine fed the driver, but DX12 and Vulkan both push these back up into the engine, making the engine responsible for recording API commands into an API command buffer, performing memory management at the API level, dictating the overall flow of frame presentation and frame buffering, etc. As time goes on this is happening more and more, too, as seen by the Approaching Zero Driver Overhead movement in OpenGL or "bindless" style descriptor management in Vulkan (descriptor indexing, buffer device addresses, etc).
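      (A rough, hedged sketch of what "the engine records the API commands itself" looks like under Vulkan; every handle here is assumed to already exist and none of this is tied to any particular engine. Under DX11/OpenGL, much of this sequencing was handled by the runtime and driver behind the scenes.)

```cpp
// Hedged sketch: engine-side command recording and submission in Vulkan.
// The engine, not the driver, now decides what goes into the command buffer
// and when it is handed to the GPU queue.
#include <vulkan/vulkan.h>

void recordAndSubmit(VkCommandBuffer cmd, VkPipeline pipeline,
                     VkQueue queue, VkFence fence) {
    VkCommandBufferBeginInfo begin{};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;

    vkBeginCommandBuffer(cmd, &begin);
    // Render pass setup, barriers and descriptor binds would go here; under
    // older APIs the driver handled much of this ordering for the application.
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, /*vertexCount*/ 3, /*instanceCount*/ 1,
              /*firstVertex*/ 0, /*firstInstance*/ 0);
    vkEndCommandBuffer(cmd);

    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;

    // Frame pacing and presentation scheduling are also the engine's job now.
    vkQueueSubmit(queue, 1, &submit, fence);
}
```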

    • @EastyyBlogspot
      @EastyyBlogspot Před měsícem +1

      @kazioo2 That's exactly it: when I see a game whose graphics, on the face of it, do not look that great... and yet the performance is nowhere near what I would expect from what I see

  • @LurkingLarper
    @LurkingLarper Před měsícem

    This was some super enlightening stuff. I could only just follow the topic at hand, but that's what made it so interesting in the first place. These companies really should let you guys interview their lead engineers like this more often. Good stuff.

  • @MadsonOnTheWeb
    @MadsonOnTheWeb Před měsícem

    Thanks for the insight Tom. I appreciate your time and effort. Wish the best for him and his GPU team!

  • @filcon12
    @filcon12 Před měsícem +4

    Hi, I'm the tech lead for one of those games that was optimized in the Intel driver recently. I would have a question for the Intel guy: is there any way to find out what you optimized for our game? Maybe we could do more on our side to make the game run faster.

  • @wile123456
    @wile123456 Před měsícem +3

    Wish you had asked him this critical question:
    "What was the point of 2 alpha releases with DG1 and DG2, as well as over a decade of integrated graphics drivers? Why were the drivers not ready when you had 2 years to get them ready before Arc launched, if we only count DG1 and DG2, and why was all the integrated driver work useless for Arc?"

    • @GamersNexus
      @GamersNexus  Před měsícem +6

      This video has been live for literally 5 minutes. You could not have even known what was asked, first of all, and secondly, the topic of this video is the driver stack.

    • @janbenes3165
      @janbenes3165 Před měsícem

      @@GamersNexus On one hand, sure, the commenter did not watch the whole video before commenting; neither did I, so far. On the other... aren't they right? Sure, you say it's not the topic of THIS video, but how many videos have there been with Arc engineers so far? It's great to celebrate that Intel is making progress, but shouldn't they be held accountable for the original mess?

    • @GamersNexus
      @GamersNexus  Před měsícem +4

      @@janbenes3165 do you mean when we held them accountable for 2 straight years when it was all happening actively? Because yes, and we did. We covered DG1 before anyone else, covered DG2's messy launch, and in fact we broke the story on what a trainwreck the drivers were for Arc originally -- so much so that other reporters reported on our findings and on Intel's direct response to them. We covered that story. Receipts:
      Literally called "Worst We've Tested" - czcams.com/video/MjYSeT-T5uk/video.html
      Intel responds: czcams.com/video/znJzozRfJYY/video.html
      DG1 launch: czcams.com/video/HSseaknEv9Q/video.html
      Talking about what a nightmare the A380 was: czcams.com/video/La-dcK4h4ZU/video.html
      Unrelated but while we're at it, holding them accountable for a terrible card design: czcams.com/video/N371iMe_nfA/video.html

    • @janbenes3165
      @janbenes3165 Před měsícem +1

      @@GamersNexus Yeah, you covered the story. I know you did; I was there too. But here you have a person who was close to the top when it was all happening, or who at least should have relevant information about it, and the simple question of "why?" wasn't asked. I'm not saying you should be dragging Arc through the mud or anything like that, but this question of "why did DG1 never amount to functional drivers?" never comes up.

    • @marvinmallette6795
      @marvinmallette6795 Před měsícem +1

      The answers on the surface seem obvious to anyone who has been gaming long enough.
      Why are games releasing in a broken state so often? Sometimes you have to get a product out. You could spend two years with a product working in a lab, and not learn as much as you could learn three months post release.
      Arc is Intel's first attempt at a competitive dedicated GPU, which is a different product from the integrated GPUs. Gaming was likely not a priority, and usually nobody cared about gaming performance on Intel integrated. This is essentially Intel working at "getting in shape".

  • @adomingues00
    @adomingues00 Před měsícem +1

    This is singular and amazing. Seeing a big corporation talk with its customers, and seeing two very smart and nice people talk, is a breath of fresh air for the industry. This is great. Thanks for talking to us.

  • @Game_Rebel
    @Game_Rebel Před měsícem

    incredibly informative... loved this, thank you!