Most people just wait for AMD to launch a product that competes with Nvidia, wait for the inevitable price drop response from Nvidia, and they buy Nvidia's product anyway. This cycle has been repeating for years. Unless people are willing to support AMD with their pocketbooks, nothing will change. Nvidia keeps getting bolder because people keep buying their stuff. Even the people complaining about Nvidia.
@@slylockfox85 well... I was a Nvidia fan. 6 out of the 7 PCs I owned are running on Nvidia cards. But seriously, I'm so disappointed by Nvidia's latest actions that I'm really excited about what AMD will bring in November. For the first time ever, I'm cheering for the underdog here. I really hope AMD will win more market share, just to teach Nvidia a lesson.
I want to see the difference between dlss 2.0 vs 3.0 without frame generation, since according to nvidia on reddit, dlss 3.0 without frame generation is what 3000 and 2000 series cards are getting.
@@iansrven3023 That question is not really smart. There is physically no way it just copies prior frames; that would have to be fraud on their part if it did, since the feature doesn't say or imply it does that. It's probably more AI-generated stuff built on the systems already in place for the upscaling, making use of what DLSS has to work with anyway. I feel like the temporal element of DLSS in particular is going to play a big role in inflating the fps counter without destroying the picture quality. Now, if all of what I said is true, there would be no reason at least 30 series cards couldn't use this, which is REALLY depressing and shitty on their part if the feature is actually any good.
@@iansrven3023 If you look at the latency numbers on their Cyberpunk video the latency for the DLSS2 65ish fps was the same as the DLSS3 100fps. So unless that improves it will look smoother but not play a lot smoother than DLSS2
@@Arxgxmi Uhm, he said "create more frames", which is what it does. It uses the past frame, the next expected frame, and the movement between them to create and insert a middle frame. It isn't copying, it's inferring. As for why the 30 series won't get frame insertion: Nvidia stated it's because they made a 2x improvement in their Optical Flow Accelerator. 20 and 30 series both have these on the chips, so IMO it's just a copout for why they won't bring it to older cards. The real question is whether the new frame insertion tech decreases latency, and by how much. From their Cyberpunk video it didn't seem to decrease latency to where it should be if the engine actually drew the frame: they showed DLSS 2.0 getting 65ish fps at 60ish ms latency, then they switch to DLSS 3.0 getting 100ish fps at the same latency, which then slowly goes down as the scene changes. To me that says if you're used to the smoothness of genuinely high fps in your controls, you're not gonna have the same experience with DLSS 3.0 frame insertion. It will play more like the fps you'd get with DLSS 2.
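The mechanism described in the comment above (warping a past frame along motion vectors toward the next frame to synthesize an in-between frame) can be sketched as a toy. To be clear, this is an illustrative forward-warp only, not Nvidia's actual Optical Flow Accelerator plus neural network pipeline; the function name and the naive splat-and-average scheme are my own:

```python
import numpy as np

def interpolate_midframe(prev_frame, next_frame, flow):
    """Naive midpoint frame interpolation (toy, not DLSS 3).

    prev_frame, next_frame: (H, W) grayscale images.
    flow: (H, W, 2) per-pixel motion vectors (dy, dx) from prev to next.
    Each pixel of prev_frame is warped halfway along its motion vector,
    splatted onto a copy of next_frame, and the result is averaged.
    """
    h, w = prev_frame.shape
    mid = next_frame.astype(float).copy()
    count = np.ones_like(mid)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            ty, tx = int(round(y + dy / 2)), int(round(x + dx / 2))
            if 0 <= ty < h and 0 <= tx < w:
                mid[ty, tx] += prev_frame[y, x]
                count[ty, tx] += 1
    return mid / count

# With zero motion, the mid frame is just the average of the two inputs.
prev = np.zeros((2, 2))
nxt = np.full((2, 2), 2.0)
flow = np.zeros((2, 2, 2))
print(interpolate_midframe(prev, nxt, flow))  # every pixel -> 1.0
```

Even this toy shows why the interpolator needs the *next* frame before it can emit the middle one, which is where the added latency discussed below comes from.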
There is zero difference. There is going to be no improvement to super resolution other than the gradual updates already being released, otherwise they would have mentioned it. All they talked about was performance, and the only reason the performance is better is this frame generation. There is no actual improvement to the reconstruction, or to the performance impact of the reconstruction.
Great video mate, I really hope AMD kill it this gen, Nvidia don't deserve the success. They've proven time and time again they couldn't care less about the general consumer: the 40 series pricing, the fact DLSS 3 is exclusive to the 40 series whereas AMD's version of upscaling, FSR, is agnostic and benefits cards across brands and generations. I have a 3080 and that will be my last Nvidia card.
You need to wake up mate, you think these companies are not working together to establish price points? AMD's new GPUs are also 450W TDP with massive coolers. AMD's new GPUs will be PRICED relative to their performance against other cards, including Nvidia cards.
@@4gbmeans4gb61 Yeah, unfortunately, while we want a hero to come in, this is not going to happen. AMD will be priced in line with Nvidia, minus maybe 5%, to try and win a bit more market share. I mean, if AMD matched the 4080 at the $700 bracket then everyone would buy it, but for some reason they won't do it.
Who would've guessed. Using software to inject fake frames instead of actually rendering frames is easier on the GPU. This is going to look bad and feel even worse.
@kaizer5lock Funny, I get the occasional frame stutter in Elden Ring. I don't think it is as restricted to FPS titles as you implied. Time will tell.
@@garyb7193 Frame stutter can obviously happen for a multitude of reasons. Elden Ring is just poorly coded. Don't get me wrong, great game, but they don't spend enough on technical improvements to the engine.
If you compare the numbers of these two GPUs, the RTX 3090 Ti and the RTX 4090: the RTX 4090 has 52% more shader units and 35% higher clock speed than the RTX 3090 Ti. With all these improvements you would expect at least 2x the raster performance compared to the previous flagship, but that's not the case at all. All you get is a really underwhelming 62% more performance in Cyberpunk at 1440p, and still Nvidia has the audacity to ask even more for their GPU architecture, which apparently shows that IPC went really backwards in order to reach marketable clock speeds. Hail Jensen, the master of deception...
Nvidia IPC has been going backwards since Pascal. If you compare transistors and clock speeds across Pascal-Turing and Turing-Ampere, you will see there has been a regression every generation. The 1080 Ti had 11.8 billion transistors at 1.58 GHz boost; the 3090 Ti has 28.3 billion transistors at 1.86 GHz boost (2.39x × 1.17x = 2.81x expected performance, but actual is only ~2x, for almost 3x the price). And that's only if they had merely reused the Pascal architecture, when the norm is to improve your architecture; NV is actually regressing. Obviously this is because transistors are being wasted on RT/Tensor cores and god knows what other bloat NV is including.
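The scaling math in these two comments can be sanity-checked with a quick script. All figures are the commenters' own, and the model (performance scales linearly with unit/transistor count times clock) is a deliberately naive assumption, which is exactly the point both comments are making:

```python
def expected_speedup(units_ratio, clock_ratio):
    """Naive scaling model: performance ~ units x clock."""
    return units_ratio * clock_ratio

# 3090 Ti -> 4090 (per the comment above): +52% shaders, +35% clock
print(round(expected_speedup(1.52, 1.35), 2))  # 2.05, vs ~1.62x observed in Cyberpunk

# 1080 Ti -> 3090 Ti: transistor ratio x boost clock ratio
print(round(expected_speedup(28.3 / 11.8, 1.86 / 1.58), 2))  # 2.82, vs ~2x observed
```

The gap between the naive expectation and the observed uplift is what the comments attribute to regressing IPC (or, more charitably, to die area spent on RT/Tensor hardware that raster benchmarks don't exercise).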
They push power to get clocks, but it costs them nothing since the GPU die size remains the same; the cost is shifted to the user and his electricity bill, and to the world.
You're comparing shader units in a workload that relies heavily on their ray tracing hardware, so that comparison doesn't really make sense; you're kinda comparing apples to bananas. Now don't get me wrong, I don't think Nvidia's pricing is consumer friendly, but I really admire the amount of research and innovation they put into the advancement of realtime rendering in general. As a game dev myself, that is enough to warrant the prices they've been asking, but I can totally see the point that regular/enthusiast gamers are making. Just ask yourself: if Nvidia weren't pushing the technology forward, there would be no notable advancement at all. Take FSR from AMD, it's just a glorified traditional upscaler. Nothing new, nothing innovative about it. And if you've seen the Digital Foundry interview with the lead developer of FSR: when DF asked whether they would support an open API that would unify XeSS, DLSS and FSR, his answer (no, we won't support that) was all too telling that they'd rather hinder progress than give up their unique selling point that FSR works on any card, precisely because it isn't doing anything really new.
I think it is fair to compare it to the 3090 as there will surely be a 4090ti next year, also those improvements are actually good, even if it is not 2x the perf of the top tier last gen gpu
Honestly not that bad. I had a friend who paid $1800 AUD for a 3070 Ti 12 months ago, so an RTX 4090 for $2400 is cheap in comparison... however, after 10% GST I expect the RTX 4090 to be $3000.
I think a lot of people, including reviewers and tech youtubers, missed this fact when measuring wattage with DLSS in the past. My 3090, and surely every GPU down to the 2060, cuts power draw because the internal resolution is lower, which means the GPU, memory controller, power delivery and memory bandwidth are less stressed than at 4K without DLSS; you can clearly see the GPU reducing clocks in favor of powering other components. I tested this because I thought DLSS would activate the tensor cores so a little more power would be used, but it's not the case.
@@najeebshah. You're pretty delusional if you think the actual performance is 2x in games. It's 2x when the game has DLSS 3 AND RTX. But you can't really compare apples to apples, because the 30 series cannot have DLSS 3, only DLSS 2; that's purposeful gimping of the previous gen by Nvidia to make the newer gen look better. So how many games will actually have DLSS + RTX? Not all that many, less than 10 in a sea of games that have neither tech. Nvidia's own slides show 3 games with no DLSS or RTX... and guess what, the 4080 was slower than or on par with a 3090. You will see 2x performance in like 3-4 games, maybe? The more likely scenario is something in the realm of 20% more performance. I guess you're the kind of person their marketing actually works on. Two-digit IQ.
So this generation fps isn’t going to mean much. The entire point of going above 60 fps is to decrease latency and make the game feel smoother. Our eyes can’t really tell the difference above 60 fps but the game feels way smoother, but if the latency doesn’t scale then it will feel like playing a console game on an old tv at 30 fps.
Me too. These Nvidia benchmarks and tables are total bullshit. Every new gen they talk about 3x more power, and in the end it's 25-40%. I'm sure it's the same 25-40% increase on the 40xx gen, with all the other performance boosts coming from DLSS 3. That's exactly why Nvidia doesn't want to make DLSS 3 compatible with the old gen.
Yep, and I'd also like to see it without ray tracing, to see the actual raw raster performance, because that will be the baseline indicator for all games, since ray tracing implementation varies greatly from game to game. Cyberpunk leans very heavily into ray tracing while other games don't, which may cause massive performance differences in comparisons where ray tracing is enabled. I would also love to see DLSS 2.0 vs DLSS 3.0 without frame generation enabled (you can turn that setting off in DLSS 3), so we get the true performance of DLSS 3.0 without latency-incurring fake frame generation skewing the results.
Great video. Something I think you missed is that the AI-generated image feature can be turned off independently from DLSS 3.0. At least that's how I understand it from the way it's worded.
@@wasd-moves-me Ray tracing is looking a lot more reasonable this generation. There's a lot more ray tracing horsepower in the upcoming generation of cards.
You definitely have a right to be concerned about latency, and it's good to get this information out to help users and potential buyers be informed. But frametimes also contribute latency. Yes, it would increase latency compared to non-DLSS 3, but it shouldn't be by much. Latency matters in esports and the like, so there it's better to just use lower settings; but for those who don't care much and want better fidelity, this is good.
Hi, thanks for the info. You are using an LG C1 as a monitor; can you make a video or say something about your experience with it and how you use it? Did you experience any pixel burn-in? Do you suggest using LG OLED TVs as monitors for gaming and static work in a PC build?
Question, maybe you can answer: 3090 with a 12900K. When HDR is on and I use the Xbox Game Bar or its FPS counter, I get stutter on my PC and games are unplayable. Do you know what's going on with that?
Wow, amazing how much performance they can release through drivers when new cards come out, and how much they can lower performance later when they want you to upgrade from the same series of cards.
I'd like to know what card is outputting the 22fps on the left , is it the same card putting out the 98fps on the right? If so the raw raster isn't looking good.
Btw Daniel, you should try DLDSR at 1.25x (5K downscaled to 4K with deep learning using the tensor cores) with DLSS Performance. It looks significantly better than 4K DLSS Quality. I'm using it in RDR2 on my CX OLED right now and it looks stunning.
DLSS Performance at 5K also runs considerably worse than DLSS Quality at 4K, fwiw, despite being the same internal 1440p resolution. But yeah, it does look better, and you get to control the sharpening more with DLDSR.
I have a 12900K (5.2 GHz) and an MSI 3090 Ti Suprim X, DDR5-5600 Dom Plat. When I ran those settings I got 45-55 fps (not 39 fps like Wccftech); with DLSS on etc. I was able to hit 95-110 fps (Performance mode). My monitor is a Samsung Odyssey G9, 5120x1440 240 Hz. The card is currently on air, boost clock over 2100 MHz the whole time. Not really sold on the 40 series yet. PS: still think these prices will crash hard. Also, all my tests were performed on a test bench (open air GPU, open loop CPU).
Wccftech got 37 fps (not 39) on the oc 3090 ti in an incredibly demanding scene. Go and benchmark there in the exact same location and you would get similar fps. Bang4buckgamer's 3090 ti oc was struggling to maintain 60fps with dlss set to quality at 1440p psycho rt in that area, that scene is more demanding than driving around in the city.
@@dhaumya23gango75 I never said I was driving around, heh. I did everything that was done, 15x over. If I block and OC this card I'm sure it'll do even better. All my settings are at Max/Psycho, just to be clear here. At Quality I was getting 45-55. Sorry to disappoint, the 3090 Ti is still very relevant. I'm sure I'll be picking up a 4090/Ti anyway when the timing is right. (Currently waiting for EK to pack and ship my block, and waiting for my cable from EVGA so I can burn this card.)
@@pastafazool5951 Benchmark in the exact same location, that was my point. I know you were using the same settings and neither did I make any comment about the 3090 ti being irrelevant.
@@dhaumya23gango75 I did bench the exact location haha xD, and then some! No hate here brother! PS: the news about the fragility of the 12-pin adapters, only being rated for 30 cycles, is scary as shit btw.
What's up with the PCIe connectors? Can't these just slot into a normal mobo? I ask because I think I saw it said that the recommended CPU was a 5900X, which is X570/PCIe Gen 4.
What I am interested in is how the 4000 series will perform with PCVR headsets, where just getting the unit to run at its native resolution is important. There are a lot of next-generation headsets coming out, and the 4000 series cards will be important and needed to drive these new high-resolution sets.
Because it's the Crysis of its generation. A game that looks so good (on high details, obv), you have to check twice to make sure it's not a tech demo.
Only appears to reduce latency because it massively boosts frame rate with fake frames. These cards are going to be torn apart when review samples release!
All of this is done in Performance mode, right? I usually play at either Quality or Balanced mode. I'd prefer they show the numbers in those modes instead of Performance.
Cyberpunk has its own benchmark generator, so I imagine they used the game's specific scene, which is the bar/club scene and then the open city cam travel.
Glad we both run the 3080 12 GB as a daily driver lol. Also, I remember you did some VR vids before; I'd be interested in seeing some differences between Nvidia and AMD for VR frametimes.
3090 Ti -> 4090: not even double the performance with ray tracing enabled in Cyberpunk at 1440p maxed? WTH Nvidia! Someone should class-action sue them for false advertising.
62% native performance increase is still the biggest jump we ever had. Maybe it will actually be double performance in other games that are better optimized than Cybertrash 2077. Also keep in mind that performance will only get better with future drivers. It has always been this way.
@@EldenLord. it's just the issue that they are not allowed to mislead their customers. Well, we'll see when the reviews come out. But now it's not looking good.
@@EldenLord. Cyberpunk runs pretty well on a 2060, which is very cheap. The 4090 probably isn't much of an improvement from a raw performance perspective.
Hmmm, 25% less power? Makes me think some internal components of the DLSS pipeline are bottlenecking and leaving other components idle in the meantime. Or maybe even the CPU can't keep up, but that's not likely, as the generated half of the frames isn't computed through the normal pipeline anyway.
When 3090 came out it was very cool too, then later the temps of the memory were added and ppl realized what kinda dumpster fire they had running in their pc.
the new 40xx coolers are massively improved though, I think that's the last part we should be worried about. For now, the only true drawback is the damn price which is more heated than the vrms on the 3090 cards
My take on the article. Turning on DLSS 3 Quality mode takes the rendered frame down to a lower res (no longer native) that is then DLSS upscaled. This increases the frame rate from 59 to 85. Then the Frame generation part will double the frame rate to 170 by inserting a generated frame in between every rendered frame. That's the only way this makes sense to me.
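That reading of the article can be written out as simple arithmetic. The 59/85/170 figures are the ones quoted in the comment, and the 2x multiplier assumes exactly one generated frame per rendered frame, which is this commenter's interpretation rather than anything Nvidia has specified:

```python
def dlss3_pipeline_fps(native_fps, upscale_gain, frames_inserted_per_rendered=1):
    """Model the two stages as read from the article: first the
    super-resolution speedup, then frame generation multiplying the
    rendered frame rate by (1 + inserted frames per rendered frame)."""
    upscaled = native_fps * upscale_gain
    displayed = upscaled * (1 + frames_inserted_per_rendered)
    return upscaled, displayed

upscaled, displayed = dlss3_pipeline_fps(59, 85 / 59)
print(round(upscaled), round(displayed))  # 85 170
```

Note that only the first stage (59 -> 85) reflects frames the engine actually simulates; the doubling to 170 adds no new game-state samples, which is why the latency discussion below matters.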
@@googleslocik at 85 FPS that is 11.76 ms additional delay. The thing is they are comparing reflex off to reflex on which by itself lowers latency by like 30 or 40 ms. It's not an apples to apples comparison but still the increased latency from the generated frames shouldn't be that noticeable.
@@googleslocik Yep, but when the hardware latency chain is good and the engine is not laggy, that latency is very tolerable. I stream PC VR games to my Quest 2 via WiFi, and even with that latency, believe it or not, it's fine if the total motion-to-photon lag is low enough. And if VR doesn't feel weird with that latency, then unless you're playing Counter-Strike you will totally be good. Time will tell.
Really would encourage people to just take it all in and relax until Nov 3rd. I want a little more performance, but the BEST choice is to WAIT for all the REAL data and what AMD has to offer before making a decision. I'll probably go Nvidia again, BUT I would love to try AMD if they make a worthy enough product. ✌️
i never waited tbh, i bought the 1080ti, the 2080ti, the 3090 right at release and AMD never had a card where i said afterwards "oh man why didn't i wait, this card would've been better". so i have no reason to believe that AMDs flagship will make me regret getting the 4090 at launch.
@@kaseyboles30 😆😆 You can also throttle the power down to 250W if you crank down the clocks, since there are rumors about the GPU hitting 3 GHz. Also, looking at the transient figures, I can already say the average will be half that value.
Really interested to hear the 3rd-party reviews of DLSS 3.0. It is certainly a great way to manipulate FPS benchmarks; it will also be interesting to know whether it is great for gameplay.
DLSS 3 is just that shitty "motion interpolation", like "Smooth mode" on TVs: it adds fake frames, making motion look odd (soap opera effect), and it also adds a lot of input lag while inserting frames, which they are pathetically trying to cover with the already existing Nvidia Reflex. The best looking and most responsive experience is and will be DLSS 2 + Nvidia Reflex. Also, the new cards are not much faster in real frames, so they use graphs with DLSS 3 fake frames. Misleading, like other big hypocrite companies.
You're telling me that an RTX 4090 barely gets 60 fps running Cyberpunk at 1440p? I get that it was using the best settings, but that means 4K will be out of the question for true next-gen games. These cards are nowhere near good enough for the prices they are asking.
I would think that a digital TV's motion smoothing is actually just using the MPEG macroblock interpolation vectors to generate extra frames in there, but yeah a full display per-pixel motion vector map will allow for better interpolation. The problem is when there are transparent things, or particles (like smoke). The vector map can only represent the motion of a single surface pixel - not a layer of multiple overlapping things like particles, glass, etc.. That means either the motion of the transparent object will fudge around what's behind it, or the motion of what's behind it will fudge the transparent thing's position. This was visible in DOOM which used a velocity map for global motion blur, if something was behind a window moving around - the window's surface would be smudged around in certain situations. CP2077 running at 170FPS w/ DLSS3 means that it's rendering frames at 85FPS, which is a frametime of 11 milliseconds. The interpolation requires the next frame in time to render the interframe, so 11x1.5 = 17ms frametime latency, which is equivalent to the latency of 56FPS. I think I would be more inclined to play CP2077 at 85FPS with 11ms frametime latency instead of 170FPS with 17ms frametime latency.
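The latency arithmetic in that last paragraph, written out. This mirrors the comment's own model: the interpolator must wait for the next rendered frame before it can emit the in-between frame, which the comment approximates as 1.5 render-frametimes of display latency. The 1.5x factor is that commenter's assumption, not a measured figure:

```python
def interp_latency_ms(render_fps):
    """Latency model for next-frame interpolation: the displayed frame
    lags by roughly one full render frametime (waiting for the next
    frame) plus half a frametime (the interpolated midpoint)."""
    frametime = 1000 / render_fps      # ms per actually-rendered frame
    latency = 1.5 * frametime          # added display latency under this model
    equivalent_fps = 1000 / latency    # fps whose frametime equals that latency
    return frametime, latency, equivalent_fps

ft, lat, eq = interp_latency_ms(85)
print(round(ft, 1), round(lat, 1), round(eq))  # 11.8 17.6 57
```

So 170 fps displayed with DLSS 3 would carry roughly the input latency of a ~57 fps game under this model, which is the trade-off the comment is weighing against plain 85 fps at ~11.8 ms.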
The thing with TVs is there is no input, thus no input lag to worry about. They can afford to delay frames to calculate an intermediate frame of decent quality without the viewer even realizing things are delayed, because once they're put on screen the framerate is smooth, even if every single frame is delayed, they're all delayed equally so it doesn't matter. They can get away with very old ai tech and still get incredible results easily, but the moment latency becomes a point of care - like in pc gaming where you have mouse and keyboard input - that's when the response of every generated frame matters, and you can't just withhold frames for that long while supplementary frames are being generated. I could do it easily with some rudimentary FLASH morph features nearly 10 years ago with only a few weeks of practice in IT class. It's really not difficult to do.
The tensor cores are made to do the calculations used for AI. By offloading math from the RT cores, which are running at max, the load shifts to the tensor cores, which are much better at AI calculations, so it uses less power because the GPU does not have to think as hard. I really wish Nvidia would do tech architecture videos like Intel does so more people could understand how things work under the hood.
The problem is the price... The performance and new tech sounds incredible and I would love to experience it! But how can I make a 1.6k purchase more justifiable to myself? Maybe I should learn blender haha.
I like the detailed videos. I do indeed only compare what I need and what is useful for sim racing. I'm still on a single screen and the GPU runs at 80% load; I do that on purpose to have fewer frame drops when racing online. I see the 40 series as a good step forward, and the 50 series will be worth upgrading to in my case. Settings are mostly maxed out. Rennsport (UE5) is going to change that, I guess lol.
When the fps is already 100, for example, latency is OK for me, and in that case I could use DLSS 3 to get it up to the monitor's refresh rate without using G-Sync. I just think I'll see a lot of people on Facebook asking whether to use a 40 series to bump their 30-40 fps with DLSS 3.
50C on full load? So why on earth did they make the cards so big? They could have made the cooling twice as small and hover around usual 70C. Something isn't adding up here.
I think DLSS 3 shouldn't be called DLSS 3. How about Deep Learning Frame Insertion (DLFI), or Frame Sampling Injection (FSI)? I also think there should be a way of controlling what frame rate triggers the insertion: say, if fps drops below a target, a difficult scene can remain smooth, but the feature doesn't need to be active all the time.
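The target-gated behavior suggested here could be expressed as a tiny policy. This is entirely hypothetical: no such control is exposed by the actual driver, and the class name, threshold, and hysteresis value are all invented for illustration. The hysteresis band is there so the feature doesn't rapidly toggle on and off when fps hovers right at the target:

```python
class FrameInsertionGate:
    """Hypothetical policy: enable frame generation below a target fps,
    disable it again only above target + hysteresis (to avoid flicker)."""

    def __init__(self, target_fps=60, hysteresis=10):
        self.target = target_fps
        self.hysteresis = hysteresis
        self.active = False

    def update(self, rendered_fps):
        if not self.active and rendered_fps < self.target:
            self.active = True   # scene got heavy: start inserting frames
        elif self.active and rendered_fps > self.target + self.hysteresis:
            self.active = False  # scene eased up: stop inserting frames
        return self.active

gate = FrameInsertionGate(target_fps=60)
print([gate.update(f) for f in (90, 55, 62, 75)])  # [False, True, True, False]
```

In the trace above, insertion switches on when fps dips to 55, stays on at 62 (inside the hysteresis band), and switches off once fps recovers past 70.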
There should hopefully be a way to turn off the frame generation and essentially just run DLSS 2. Because not everyone is going to like it and not on every game. It shouldn't be up to the developer either because you know some won't bother.
Wouldn't you know it: Nvidia clock = 2850 MHz, AMD = near 3 GHz; Nvidia cuts power draw by 25%, AMD uses less power. Strange how this comes out just before the AMD cards launch. I sense panic from Nvidia.
If there was panic they wouldn't be selling a 70-class card for $900. It's crystal clear that Nvidia doesn't care about AMD at all. If they did, then this would be the stupidest way of handling it. This is just S-tier hubris on Nvidia's part; they genuinely think they don't have to worry at all, not just about AMD, but about people getting sick of their anti-consumer crap. I think Jensen is gonna get a rude awakening this time around, but I don't think he cares at all what AMD is doing.
@@Angel7black Yeah, you could be right, but I do think he will get a shock, and that will be on the 4080 "2 versions". I'm wondering though: are they now gonna put the 4070 on a different die, or still use the 4080 12GB's die?
keep in mind that AMD and Nvidia use different ways to measure GHZ so you can make a comparison between an Nvidia card to another Nvidia, but not an Nvidia to an AMD
As far as inserting frames goes, the previous video sounded more reasonable (even if that is not what is going on at all in the algorithm). This time, with some future frame buffer and motion vectors, I don't buy it, although I'm prepared to be corrected; it just doesn't make much sense. Is the motion even known to the graphics card? And even if it is, it would still be predicting the future. In the same way, you could take the last two (three...) frames and extrapolate what comes next. It won't solve the problem of responsiveness (since we don't know whether the player will generate input or not).
It sure looks like DLSS 3.0 Fake Frame Generation latency issue could introduce stutter at the very least. Personally, I find random freezing and stutter in games so annoying, almost as bad as lower fps itself.
@@zqzj Yeah, that and the fact DLSS 3.0 leaves behind those who purchased 20 and 30 series cards is a hard pill to swallow, especially when AMD finds ways to include several generations of their cards, and the competition's cards as well. As for the quality difference between FSR and DLSS, I believe we will reach a point of diminishing returns, at which point FSR 2.x or FSR 3.x would be generally accepted as 'good enough'.
@@Chrontard Oh yeah no. The 4090 gets around 70 percent more performance over the 3090 Ti in Division 2 and RE Village, which are games with no RT/DLSS implementation, from the graphs on the website. 50 percent more in AC Valhalla, a game which runs badly on Nvidia cards. So definitely well over 60 percent on average over the 3090 Ti. The 4090 seems to be an impressive card, unlike both 4080s.
It's not worth buying at all. It is a full scam, for two major reasons. The first is that the prices are ridiculously high for what the GPUs are actually capable of. The second is that they are making false performance claims. Nvidia said the 4090 is 2-4x faster than the 3090 Ti, and we all know that's total bullshit. They shouldn't be allowed to say it's 2-4x faster unless the GPU actually is that much faster than its predecessor, apples to apples.
@@jeevan1198 I am saying it's worth a buy as someone who currently doesn't have a GPU and is in the market for one. I agree that it's a poor decision to upgrade from something like a 3090.
@@gonkbous If Nvidia had really pushed the 40 series to its limits and tried to get the most performance possible out of it, they could have made the 4090 around 80% faster than the 3090 Ti in rasterised games and up to 3x faster in ray traced games without DLSS. They chose not to, because Nvidia likes to minimise performance and maximise prices. That's what makes them so fucking selfish, and it proves they don't give a shit about their customers. Nvidia has heavily relied on DLSS to give the 40 series a massive performance boost, but they can't just rely on DLSS to boost performance. I'm not saying DLSS is bad; it's a cool feature, it can be useful in some cases, and I don't have a problem with Nvidia developing their DLSS technology. One problem I do have with DLSS, though, is when it becomes the main selling point of the GPUs. Even if someone had a 1080 Ti, I would recommend they get either a 30 series or an RX 6000 GPU. The other option is waiting for RDNA 3. If it was me, I would skip the 4000 series.
One thing people are overlooking is the lack of input information in the fake frames. Also, if you have to get the frame rate high enough to overcome the input lag, you are not going to notice the increase in smoothness anyway.
DLSS 3 will make any game playable at 8K with Quality mode, that's exciting. I already can't tell the difference at 4K with Performance mode; maybe there's a little bit of aliasing on objects, but it's not noticeable.
I knew it was marketing BS. If your product relies on a feature that not everyone may like in order to hit the performance numbers you advertise, a feature which ALSO works for the previous gen products, then you didn't really make that good a product. AMD, come out swinging. Swing hard and let your presence be known.
Don’t be too worried about buying more gpus. Nice thing about your channel is it could be considered a business, and those as business expenses. Hopefully you get what I’m hinting at 😉
A 62% native performance increase is massive, bigger than any other generation. The craziest part is that this is against the 3090 Ti, so imagine a 4090 Ti! That means it's over 70% faster than the 3090 in native performance without DLSS.
4090 is impressive yes, 4080/4070 not so much given the price. They don't make much sense to buy where 4090 is so far ahead, glad I can skip this generation.
This is abysmal if their own advertising says 2-4x perf. They should be sued. But as far as the pricing goes, Nvidia has no other option because they can't crash unsold 30-series prices. Can't blame them for that, but for all the rest of the bullshit, oh god. At first, when watching the launch live, I was OK with everything, but afterwards, when I found out that the 4080 12GB was really a 4070, I was pissed. That's misleading the customer, big time.
@@Rivexd The 4080 12GB still makes sense, and I will explain why. So far 12GB is enough for 99% of games at 4K; you're getting the 4080 16GB's performance with less VRAM, which results in a 2-3% performance loss (like the previous gen, where the 3090 was only 10% faster than the 3080). This gen, Nvidia said both the 4080 and 4090 are 2x or higher than their previous-gen counterparts. Considering both got the same performance bump, at $899 the 4080 12GB is a lot better value than the other offerings. Who needs the others? The 4080 16GB is for VR users, and the 4090 for content creators/VR users.
It was quite obvious that DLSS 3 would reduce energy use: if "fake" frames are added in (as they are), they will not use much computation. I think Nvidia had to come up with something. It may well work. We will have to see.
I’m excited. Lower power and cooler temps would definitely be a bonus for me. I’m not ultra competitive so the latency won’t bother me and i probably wouldn’t even notice it.
Wake up bro, your games have to support DLSS in the first place for you to experience that high-ass latency. Competitive games are generally not going to support DLSS.
To be more precise: in addition to upscaling, DLSS 3 also uses Optical Multi Frame Generation to generate entire individual frames based on the pre-upscale frame. Optical Multi Frame Generation still operates at reduced resolution; without first cutting the amount of data that needs to be processed, i.e. reducing the resolution, the acceleration would be impossible. In other words, the upscaling in DLSS 3 is better, but the image quality defects DLSS 2 had will still exist; the degree of the defects in DLSS 3 will just decrease.
The AI of DLSS 3 is mainly based on analyzing and predicting the motion of all objects in the game, but as soon as there is an unexpected input, the image defects caused by DLSS 3 will appear. That's why the DLSS 3 footage shown by Nvidia is all moving straight ahead: straight-line inertial motion is unlikely to be predicted wrong. But after holding a straight line for a while and then making a sudden turn, the AI will make mistakes, even though it is perfectly normal for the player to change direction.
I don't really care whether AMD puts out a GPU better than Nvidia's; good performance at a good price will be good enough for me.
Most people just wait for AMD to launch a product that competes with Nvidia, wait for the inevitable price drop response from Nvidia, and they buy Nvidia's product anyway. This cycle has been repeating for years. Unless people are willing to support AMD with their pocketbooks, nothing will change. Nvidia keeps getting bolder because people keep buying their stuff. Even the people complaining about Nvidia.
Well, most people don't buy a 3080 or 3090 Ti anyway; even the 3070 is rare at large scale.
@@slylockfox85 Well... I was an Nvidia fan; 6 out of the 7 PCs I've owned are running an Nvidia card. But seriously, I'm so disappointed by Nvidia's latest actions that I'm really excited about what AMD will bring in November. For the first time ever, I'm cheering for the underdog here. I really hope AMD wins more market share, just to teach Nvidia a lesson.
@@slylockfox85 You're right.
I want to see the difference between dlss 2.0 vs 3.0 without frame generation, since according to nvidia on reddit, dlss 3.0 without frame generation is what 3000 and 2000 series cards are getting.
Yeah, and also: does DLSS 3.0 actually feel smoother, or does it just create more frames?
@@iansrven3023 That question isn't really smart. There's physically no way it just copies prior frames; that would be fraud on their part, since the feature doesn't say or imply that it does. It's probably more AI-generated output built on the upscaling systems already in place, making use of what DLSS has to work with anyway. I feel like the temporal element of DLSS in particular is going to play a big role in inflating the fps counter without destroying picture quality. Now, if all of that is true, there would be no reason at least 30-series cards couldn't use this, which is REALLY depressing and shitty on their part if the feature is actually any good.
@@iansrven3023 If you look at the latency numbers in their Cyberpunk video, the latency for DLSS 2 at ~65 fps was the same as for DLSS 3 at 100 fps. So unless that improves, it will look smoother but not play much smoother than DLSS 2.
@@Arxgxmi Uhm, he said "create more frames," which is what it does. It uses the past frame, the next expected frame, and the movement between them to create and insert a middle frame. It isn't copying, it's inferring.
As for why the 30 series won't get frame insertion: Nvidia stated it's because they made a 2x improvement in their Optical Flow Accelerator. The 20 and 30 series both have these on their chips. IMO it's just a cop-out for why they won't bring it to older cards.
The real question is whether the new frame-insertion tech will decrease latency, and by how much. From their Cyberpunk video it didn't seem to decrease latency to where it should be if the engine had drawn the frame: they showed DLSS 2.0 getting ~65 fps at ~60 ms latency, then switched to DLSS 3.0 getting ~100 fps at the same latency. It then slowly goes down as the scene changes. To me that says if you're used to current high-fps smoothness in your controls, you're not going to have the same experience with DLSS 3.0 frame insertion. It will play more like the fps it would get with DLSS 2.
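The "warp along motion vectors, blend toward the next frame" idea described in this thread can be sketched in a few lines. This is a toy illustration of generic motion-compensated interpolation, not Nvidia's actual Optical Flow Accelerator pipeline; `interpolate_midframe` and every detail of it are invented for the sketch:

```python
# Toy motion-compensated frame interpolation (illustrative only, NOT
# Nvidia's algorithm): warp the previous frame halfway along per-pixel
# motion vectors, then blend with the next frame to fill occlusions.
import numpy as np

def interpolate_midframe(prev_frame, next_frame, flow):
    """flow[y, x] = (dx, dy) motion of each pixel between the frames."""
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # pull each midframe pixel from half a motion vector back in time
    src_y = np.clip(np.round(ys - 0.5 * flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - 0.5 * flow[..., 0]).astype(int), 0, w - 1)
    warped = prev_frame[src_y, src_x]
    return 0.5 * warped + 0.5 * next_frame  # crude occlusion fill

# static scene: zero motion and identical frames
a = np.full((4, 4, 3), 100.0)
b = np.full((4, 4, 3), 100.0)
mid = interpolate_midframe(a, b, np.zeros((4, 4, 2)))
```

With zero motion and identical frames the generated middle frame is identical too; estimating the optical flow itself and handling occlusions and transparency are the hard parts this sketch skips entirely.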
There is zero difference. There will be no improvements to Super Resolution other than the gradual updates already being released; otherwise they would have mentioned it. All they talked about was performance, and the only reason performance is better is this frame generation. There is no actual improvement to the reconstruction, or to the performance cost of the reconstruction.
Great video mate, I really hope AMD kills it this gen; Nvidia doesn't deserve the success. They've proven time and time again they couldn't care less about the general consumer: the 40-series pricing, and the fact that DLSS 3 is exclusive to the 40 series while AMD's upscaler, FSR, is agnostic and benefits cards across brands and generations. I have a 3080, and it will be my last Nvidia card.
It's because they're using more tensor cores.
You need to wake up, mate. You think these companies aren't working together to establish price points? AMD's new GPUs are also 450W TDP with massive coolers. AMD's new GPUs will be PRICED per their performance relative to other cards, including Nvidia's.
@@4gbmeans4gb61 Yeah, unfortunately, while we want a hero to come in, that's not going to happen. AMD will be priced in line with Nvidia, minus maybe 5% to try to win a bit more market share.
I mean, if AMD matched the 4080 at the $700 bracket then everyone would buy it, but for some reason they won't do it.
Leaks suggest FSR 3.0 might be exclusive to RDNA 3 via hardware, just like Nvidia did with DLSS 3.0, but we'll have to wait and see...
@@bombayroll4631 Exactly. With how the economy is looking, I doubt they're even making that much money on these cards at first anyway.
Who would've guessed. Using software to inject fake frames instead of actually rendering frames is easier on the GPU. This is going to look bad and feel even worse.
random stuttering?
We won't know until Gamers Nexus does its review; better to wait before forming an opinion.
@kaizer5lock Funny, I get occasional frame stutter in Elden Ring. I don't think it's as restricted to FPS titles as you implied. Time will tell.
@@garyb7193 Frame stutter can obviously happen for a multitude of reasons. Elden Ring is just poorly coded. Don't get me wrong, great game, but they don't spend enough on technical improvements to the engine.
@@garyb7193 Elden Ring has had that since release; it's normal for that game.
If you compare the numbers of these two GPUs, the RTX 3090 Ti and the RTX 4090, the 4090 has 52% more shader units and a 35% higher clock speed than the 3090 Ti. With all those improvements you would expect at least 2x the raster performance of the previous flagship, but that's not the case at all: all you get is a really underwhelming 62% more performance in Cyberpunk at 1440p. And still Nvidia has the audacity to ask even more for their GPUs, which apparently shows the IPC went backwards in order to hit marketable clock speeds... Hail Jensen, master of deception...
Nvidia IPC has been going backwards since Pascal. If you compare transistors and clock speeds from Pascal to Turing and Turing to Ampere, you'll see there has been a regression every generation.
The 1080 Ti had 11.8 billion transistors at a 1.58 GHz boost.
The 3090 Ti had 28.3 billion transistors at a 1.86 GHz boost (2.39x transistors x 1.17x clock = 2.81x expected performance, but the actual gain is only ~2x, for almost 3x the price).
And that's only if they had merely reused the Pascal architecture; the norm is to improve your architecture, but NV is actually regressing.
Obviously this is because transistors are being wasted on RT/tensor cores and god knows what other bloat NV is including.
They push power to get clocks, but that costs them nothing since the die size stays the same; the cost is shifted to the user and his electricity bill, and to the world.
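The back-of-envelope scaling math in this comment checks out arithmetically (the figures are the commenter's own, not official IPC measurements, and "expected" naively assumes performance scales with transistors times clock):

```python
# Sanity-check the commenter's scaling claim: naive expected uplift is
# transistor ratio x clock ratio; the gap vs. the actual ~2x gain is
# what the comment calls an IPC (per-transistor efficiency) regression.
transistors_1080ti, boost_1080ti = 11.8, 1.58   # billions, GHz
transistors_3090ti, boost_3090ti = 28.3, 1.86

transistor_ratio = transistors_3090ti / transistors_1080ti   # ~2.40x
clock_ratio = boost_3090ti / boost_1080ti                    # ~1.18x
expected = transistor_ratio * clock_ratio                    # ~2.82x
actual = 2.0   # the comment's claimed real-world uplift
print(f"expected {expected:.2f}x, actual {actual:.1f}x, "
      f"per-transistor efficiency {actual / expected:.2f}x")
```

The naive model ignores memory bandwidth and architectural changes, which is exactly the caveat raised in the reply below about RT/tensor workloads.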
You're comparing shader units in a workload that relies heavily on the ray-tracing hardware, so that comparison doesn't really make sense; you're kind of comparing apples to bananas. Now don't get me wrong, I don't think Nvidia's pricing is consumer-friendly, but I really admire the amount of research and innovation they put into advancing realtime rendering in general. As a game dev myself, that's enough to warrant the prices they've been asking, but I can totally see the point that regular/enthusiast gamers are making. But just ask yourself: if Nvidia weren't pushing the technology forward, there would be no notable advancement at all. Take FSR from AMD: it's just a glorified traditional upscaler. Nothing new, nothing innovative about it. And if you've seen the Digital Foundry interview with the lead developer of FSR: when DF asked whether they would support an open API that would unify XeSS, DLSS and FSR, his answer (no, we won't support that) was all too telling that they'd rather hinder progress than give up their unique selling point that FSR works on any card, precisely because it isn't doing anything really new.
I think it's fair to compare it to the 3090, as there will surely be a 4090 Ti next year. Also, those improvements are actually good, even if it's not 2x the performance of last gen's top-tier GPU.
Memory speed is the same; that's why.
Prices here in Australia
24GB RTX 4090 $2390
16GB RTX 4080 $1792
12GB RTX 4080 $1344
🤣
I feel you I think my country will be similar to that
Damn, it's cheaper in India, at 1915 for the 4090 I think.
Yeah, it really sucks living here, doesn't it.
in the netherlands:
24GB RTX 4090 1959 euro
16GB RTX 4080 1469 euro
12GB RTX 4080 1099 euro
Honestly not that bad. I had a friend who paid $1800 AUD for a 3070 Ti 12 months ago, so an RTX 4090 for $2400 is cheap in comparison... however, after 10% GST I expect the RTX 4090 to be $3000.
NVIDIA Marketing 2020
3090: The 8K gaming monster
NVIDIA Marketing 2022:
4090: 1440p running low res with fake frames
I think a lot of people, including reviewers and tech YouTubers, missed this when measuring wattage with DLSS in the past. My 3090, and surely every GPU down to the 2060, cuts power draw because the internal resolution is lower, which means the GPU, memory controller, power delivery and memory bandwidth are less stressed than at 4K without DLSS; you can clearly see the GPU reducing clocks in favor of powering other components.
I tested this because I thought DLSS would activate the tensor cores so a little more power would be used, but it's not.
Nvidia even highlighted DLSS as a way to save battery on gaming laptops; it's no big secret.
Imagine... now your GPU consumes "only" 500W instead of 600W. Great, indeed.
@@jobslolo7387 It's 450W for 2x the performance, same as last gen, so you've got no grounds here lol
@@najeebshah. You're pretty delusional if you think the actual performance is 2x in games.
It's 2x when the game has DLSS 3 AND RTX. But you can't really compare apples to apples, because the 30 series can't have DLSS 3, only DLSS 2; that's purposeful gimping of the previous gen by Nvidia to make the newer gen look better.
So how many games will actually have DLSS + RTX? Not all that many. Fewer than 10, in a sea of games that have neither tech.
Nvidia's own slides show 3 games with no DLSS or RTX... and guess what: the 4080 was slower than or on par with a 3090.
You'll see 2x performance in like... 3-4 games... maybe? The more realistic scenario is something in the realm of 20% more performance.
I guess you're the kind of person their marketing actually works on. Two-digit IQ.
@@najeebshah. lol, didn't you watch the video? It's actually 62% faster than the RTX 3090 Ti.
At least watch the video before smashing your keyboard.
So this generation, fps isn't going to mean much. The entire point of going above 60 fps is to decrease latency and make the game feel smoother. Our eyes can't really tell the difference above 60 fps, but the game feels way smoother; if the latency doesn't scale, it will feel like playing a console game on an old TV at 30 fps.
stfu old pos, people's reactions vary, and at your age you're lucky to perceive anything at all
Power draw from the wall could be from reduced CPU utilization, or other components.
I'd prefer to see the 3090 Ti vs the 4090 with DLSS off; I'd like to see the raw performance comparison.
me too
And these Nvidia benchmarks and tables are complete bullshit.
Every new gen they talk about 3x more performance.
In the end it's 25-40%.
I'm sure it's the same 25-40% increase on the 40xx gen, with all the other performance boosts coming from DLSS 3.
That's exactly why Nvidia doesn't want to make DLSS 3 compatible with the old gen.
Yep, and I'd also like to see it without ray tracing, to see the actual raster performance. That will be the baseline indicator for all games, since ray-tracing implementation varies greatly from game to game. Cyberpunk leans very heavily into ray tracing while other games don't, which can cause massive performance differences in comparisons where ray tracing is enabled.
I would also love to see DLSS 2.0 vs DLSS 3.0 without frame generation enabled (you can turn this setting off independently in DLSS 3), so we get the true performance of DLSS 3.0 without latency-incurring fake frame generation skewing the results.
Great video. Something I think you missed is that the AI-generated-frame feature can be turned off independently of DLSS 3.0; at least that's how I understand it from the way it's worded.
Native resolution 100% for me; at 4K you can see the difference very easily, so… can't wait to see what AMD comes up with. I've had enough of Nvidia…
Same, I game at 4K and don't use ray tracing or DLSS.
@@wasd-moves-me Ray tracing is looking a lot more reasonable this generation; there's a lot more ray-tracing horsepower in the upcoming cards.
Yeah, I bought a 3060 Ti and DLSS 2.0 Quality mode looks like trash at 1080p. After that I never used it again.
Yes, 1080p looks like trash, I agree. (2013 Xbox One, $100 resolution)
Yep, 4K is becoming more and more important with the abysmal, blurry TAA implementations.
You definitely have a right to be concerned about latency, and it's good to get this information out to help users and potential buyers be informed. But frametimes also contribute latency. Yes, it will increase latency compared to non-DLSS-3, but it shouldn't be by much. Latency matters in esports and the like, so there it's better to just use lower settings; but for those who don't care much and want better fidelity, this is good.
Hi, thanks for the info. Are you using an LG C1 as your monitor?
Can you make a video or say something about your experience with the LG C1?
How do you use it, and have you experienced any pixel burn-in? Would you suggest using LG OLED TVs as a monitor for gaming and static work in a PC build?
Damn, I was hoping for 60 fps at native 4K without blurry DLSS.
It can, just not in Cyberpunk. That's 59 fps at 1440p; it would be about 30 fps at 4K native.
Thanks for the video Daniel, quality video as always man
You can't know that, he's just uploaded it.
@@Rivexd Nah, I didn't watch all the way through before commenting, but a few minutes in I'd already liked it.
i love the little screams you make when you move your camera haha
Question, maybe you can answer: 3090 with a 12900K. When HDR is on and I use the Xbox Game Bar or its FPS counter, I get stutter on my PC and games are unplayable. Do you know what's going on with that?
Wow, it's amazing how much performance they can release through drivers when a new series comes out, and how much they can lower performance later when they want you to upgrade off the same series of cards.
I'd like to know which card is outputting the 22 fps on the left; is it the same card putting out the 98 fps on the right? If so, the raw raster isn't looking good.
Btw Daniel, you should try DLDSR at 1.25x (5K downscaled to 4K with deep learning on the tensor cores) with DLSS Performance. It looks significantly better than 4K DLSS Quality. I'm using it in RDR2 on my CX OLED right now and it looks stunning.
DLSS Performance at 5K also runs considerably worse than DLSS Quality, FWIW, despite the same internal 1440p resolution. But yeah, it does look better, and you get more control over the sharpening with DLDSR.
@@Keivz I think the cost is worth it if you have a little performance to spare or can reduce some settings
This DLSS 3 tech will definitely have drawbacks. Nvidia couldn't push raster performance much higher, so they relied heavily on DLSS to claim the "2-4x" BS.
I have a 12900K (5.2 GHz), an MSI Suprim X 3090 Ti and DDR5-5600 Dominator Platinum. When I ran those settings I got 45-55 fps (not 39 fps like WccfTech); with DLSS on etc. I was able to hit 95-110 fps (Performance mode). My monitor is a Samsung Odyssey G9, 5120x1440, 240 Hz. The card is currently on air, boost clock over 2100 MHz the whole time. Not really sold on the 40 series yet.
PS: I still think these prices will crash hard. Also, all my tests were performed on a test bench (open-air GPU, open-loop CPU).
They will, but, just like with the 3000 series, we'll probably have to wait over a year.
WccfTech got 37 fps (not 39) on the OC'd 3090 Ti in an incredibly demanding scene. Go benchmark in the exact same location and you'd get similar fps. Bang4BuckGamer's OC'd 3090 Ti was struggling to maintain 60 fps with DLSS set to Quality at 1440p Psycho RT in that area; that scene is more demanding than driving around the city.
@@dhaumya23gango75 I never said I was driving around, heh. I did everything that was done, 15x over. If I waterblock and OC this card I'm sure it'll do even better. All my settings are at Max/Psycho, just to be clear. At Quality I was getting 45-55. Sorry to disappoint; the 3090 Ti is still very relevant. I'm sure I'll pick up a 4090/Ti anyway when the timing is right. (Currently waiting for EK to pack and ship my block, and for my cable from EVGA, so I can burn this card in.)
@@pastafazool5951 Benchmark in the exact same location, that was my point. I know you were using the same settings, and I didn't make any comment about the 3090 Ti being irrelevant either.
@@dhaumya23gango75 I did bench the exact location haha xD, and then some! No hate here, brother. PS: the news about the fragile 12-pin adapters only being rated for 30 cycles is scary as shit, btw.
Great and informative vid. I appreciate your time and effort.
which subject do you teach?
When I enable DLSS 2.0 on my 3090, power consumption is also lower than native. So why wouldn't the 4090 do the same? There's less to render.
Great analysis. Hope you keep it up!
What's up with the PCIe connectors? Can these just slot into a normal mobo? I ask because I think I saw the recommended CPU was a 5900X, which is X570/PCIe Gen 4.
They are normal pcie
What I'm interested in is how the 4000 series will perform with PCVR headsets, where just getting the unit to run at its native resolution is important.
There are a lot of next-generation headsets coming out, and the 4000-series cards will be important and needed to drive these new high-resolution sets.
Same, I can't wait to see how high I can push the res in VR games with a 4080 16GB.
Not great, unless your VR games use RT and DLSS 3.
@@BillCipher1337 Only one VR game I can think of uses any type of DLSS.
@@BillCipher1337 Latency in VR is not great as it is; DLSS 3 may be a bad idea there.
@@eduardosantiago6948 That's true, and as far as I know most VR headsets already use some kind of interpolation.
Almost 60K subs! Could the power reduction at the wall be the net effect of DLSS unloading the CPU?
I have no idea why they test their GPUs with the most broken game of all time. Even a few months ago I had issues with bugs and glitches.
Because a lot of people like cyberpunk. Pretty simple really
The game has become popular again since Edgerunners released.
Because it's the Crysis of its generation. A game that looks so good (on high details, obv), you have to check twice to make sure it's not a tech demo.
Thank you someone else gets it
@@TekGriffon I'm sorry, it can look OK at times, but it looks poor just as often.
Can you run Cyberpunk on your 3080 at 4K with ray tracing and no DLSS, to compare the framerate to Nvidia's 4090 presentation?
I have, with my 3090. It's 20 fps.
But Nvidia tested at 1440p, not 4K. At 1440p native the 3090 gets 40 fps.
It only appears to reduce latency because it massively boosts the frame rate with fake frames. These cards are going to be torn apart when review samples release!
No, it reduces latency because the frame generator runs together with Reflex. Something like: Reflex >> Reflex+FrameGen >> Off.
@@daemonreaderx Latency reduction is based on interactions with fake frames.
Is all of this done in Performance mode? I usually play in Quality or Balanced mode; I'd prefer they showed the numbers in those modes instead of Performance.
Cyberpunk has its own benchmark generator, so I imagine they used the game's built-in scene: the bar/club scene and then the open-city camera travel.
Glad we both run the 3080 12GB as a daily driver lol
Also, I remember you did some VR vids before; I'd be interested in seeing some differences between Nvidia and AMD for VR frametimes.
I'd love to talk through my current findings with you sometime if you want. I love VR and have accumulated a lot of opinions lol
I too would love to see some benchmarks for vr gaming.
@@SkeleTonHammer Sounds good. I'm just going to hold out until I see the new AMD cards; I've been needing to upgrade my 1060 3GB for a while.
@@SkeleTonHammer I have a 3090, and in No Man's Sky DLSS helps keep the game fluid in VR.
I thought they told us it was supposed to be 4x faster? Maybe I was mistaken.
3090 Ti -> 4090: not even double the performance with ray tracing enabled in Cyberpunk, 1440p maxed? WTH, Nvidia!
Someone please class-action sue them for false advertising.
A 62% native performance increase is still the biggest jump we've ever had. Maybe it will actually be double the performance in other games that are better optimized than Cybertrash 2077. Also keep in mind that performance will only get better with future drivers; it has always been this way.
@@EldenLord. It's just that they're not allowed to mislead their customers. Well, we'll see when the reviews come out, but right now it's not looking good.
@@EldenLord. If there even is 62%... 62% is probably just your hopium at this point.
@@EldenLord. Cyberpunk runs pretty well on a 2060, which is very cheap. The 4090 probably isn't much of an improvement from a raw-performance perspective.
Classic teacher: loses time in the introduction and then rushes to the end haha.
It's so cool seeing old games remixed with RTX; hopefully it becomes mainstream tech someday.
Hmmm, 25% less power? Makes me think some internal components of the DLSS pipeline are bottlenecking and leaving other components idle in the meantime. Or maybe even the CPU can't keep up, though that's not likely, since half the frames aren't computed.
When the 3090 came out it was very cool too; then later memory temps were added to monitoring tools, and people realized what kind of dumpster fire they had running in their PC.
The new 40xx coolers are massively improved though; I think that's the last part we should be worried about. For now, the only true drawback is the damn price, which runs hotter than the VRMs on the 3090 cards.
My take on the article: turning on DLSS 3 Quality mode drops the rendered frame to a lower (no longer native) resolution that is then DLSS-upscaled. This raises the frame rate from 59 to 85. The frame-generation part then doubles it to 170 by inserting a generated frame between every pair of rendered frames. That's the only way this makes sense to me.
Probably, which means you always get a 1-2 frame delay with DLSS 3, i.e. 16-33 ms of extra delay.
@@googleslocik At 85 fps that's 11.76 ms of additional delay. The thing is, they're comparing Reflex off to Reflex on, which by itself lowers latency by 30 or 40 ms. It's not an apples-to-apples comparison, but the added latency from the generated frames still shouldn't be that noticeable.
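The delay figure in this reply follows from simple frametime arithmetic (the fps and Reflex numbers are the thread's claims, not measurements):

```python
# Minimum extra display delay from frame generation, per this thread:
# an interpolated frame can only be shown once the NEXT rendered frame
# exists, so output is held back by about one rendered-frame interval.
rendered_fps = 85                       # real frames behind the 170 "fps"
extra_delay_ms = 1000.0 / rendered_fps  # ~11.76 ms
reflex_saving_ms = (30, 40)             # what Reflex alone removes, per the reply
print(f"+{extra_delay_ms:.2f} ms from interpolation, vs "
      f"-{reflex_saving_ms[0]} to -{reflex_saving_ms[1]} ms from Reflex")
```

Which is the reply's point: Reflex's savings are larger than the interpolation penalty, so the bundled comparison can still show lower latency overall.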
There's also the possibility that they intentionally make native res inefficient in DLSS/FSR titles (overly high graphics quality).
@@googleslocik Yep, but when the hardware latency chain is good and the engine isn't laggy, that latency is very tolerable.
I stream PC VR games to my Quest 2 over WiFi, and believe it or not, even with that latency it's fine, as long as the total motion-to-photon lag is low enough. And if VR doesn't feel weird with that latency, then unless you're playing Counter-Strike you'll be totally fine. Time will tell.
@@googleslocik I wish people would watch the nVidia keynote first before posting BS like this.
I'd really encourage people to just take it all in and relax until Nov 3rd. I want a bit more performance, but the BEST choice is to WAIT for all the REAL data and for what AMD has to offer before making a decision. I'll probably go Nvidia again, BUT I would love to try AMD if they make a worthy enough product. ✌️
I never waited, tbh. I bought the 1080 Ti, the 2080 Ti and the 3090 right at release, and AMD never had a card where I said afterwards "oh man, why didn't I wait, this card would've been better".
So I have no reason to believe AMD's flagship will make me regret getting the 4090 at launch.
I suspect at least some of the 4090 coolers were designed when the 600-watt rumors were still strong.
🙄 Or maybe because it actually outputs 600W..
@@goldnoob6191 They measured less power usage than that.
@@kaseyboles30 😆😆 You can also throttle the power down to 250W if you crank down the clocks.
Since there are rumors about the GPU hitting 3 GHz...
Also, looking at the transient figures, I can already say the average will be half that value.
Really interested to hear the third-party reviews of DLSS 3.0. It's certainly a great way to manipulate FPS benchmarks; it will also be interesting to know whether it's great for gameplay.
DLSS 3 is just that shitty "motion interpolation", like "smooth mode" on TVs: it adds fake frames that make motion look odd (the soap-opera effect) and adds a lot of input lag from holding back frames, which they pathetically try to cover with the already-existing Nvidia Reflex. The best-looking and most responsive experience is and will be DLSS 2 + Nvidia Reflex. The new cards also aren't much faster in real frames, so they use graphs padded with DLSS 3 fake frames. Misleading, like other big hypocrite companies.
Hmmm, the "within 50C temps" could possibly be the room temperature...
You're telling me an RTX 4090 barely gets 60 fps running Cyberpunk at 1440p? I get that it was using the best settings, but that means 4K will be out of the question for true next-gen games. These cards are nowhere near good enough for the prices they're asking.
Yes, because Cyberpunk's RT is crazy. RT Overdrive brings even the 4090 to its knees, didn't you see? 20 fps. You'd have to use DLSS or turn down the RT options.
I would think a digital TV's motion smoothing actually just uses the MPEG macroblock motion vectors to generate extra frames, but yeah, a full per-pixel motion-vector map allows for better interpolation. The problem is transparent things and particles (like smoke). The vector map can only represent the motion of a single surface pixel, not a layer of multiple overlapping things like particles, glass, etc. That means either the motion of the transparent object smears what's behind it, or the motion of what's behind it smears the transparent thing's position. This was visible in DOOM, which used a velocity map for global motion blur: if something behind a window was moving around, the window's surface would get smudged in certain situations.
CP2077 running at 170 fps with DLSS 3 means it's rendering frames at 85 fps, which is a frametime of ~11 ms. The interpolation requires the next frame in time to render the in-between frame, so 11 x 1.5 = ~17 ms of frametime latency, which is equivalent to the latency of 56 fps. I think I'd be more inclined to play CP2077 at 85 fps with 11 ms frametime latency than at 170 fps with 17 ms frametime latency.
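The latency chain in the comment above is easy to reproduce; note that the 1.5x factor is the commenter's own assumption (interpolation waits for the next frame, adding roughly half a frametime on top of the render time):

```python
# Reproduce the comment's arithmetic: real rendered rate, per-frame
# render time, assumed 1.5x interpolation latency, and the frame rate
# whose normal frametime would match that latency ("fps feel").
displayed_fps = 170
rendered_fps = displayed_fps / 2        # 85 real frames per second
frametime_ms = 1000.0 / rendered_fps    # ~11.8 ms per rendered frame
latency_ms = frametime_ms * 1.5         # ~17.6 ms (commenter's 1.5x model)
equivalent_fps = 1000.0 / latency_ms    # responsiveness of ~57 fps
print(f"{frametime_ms:.1f} ms render, {latency_ms:.1f} ms latency, "
      f"~{equivalent_fps:.0f} fps responsiveness")
```

So under this model the 170 fps figure looks smooth but responds roughly like a high-50s-fps game, which is the trade-off the comment is weighing.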
The thing with TVs is there's no input, thus no input lag to worry about. They can afford to delay frames to calculate a decent intermediate frame without the viewer even realizing things are delayed, because once they're put on screen the framerate is smooth; even though every single frame is delayed, they're all delayed equally, so it doesn't matter.
They can get away with very old AI tech and still get incredible results easily. But the moment latency becomes a point of care, like in PC gaming where you have mouse and keyboard input, the response of every generated frame matters, and you can't just withhold frames for that long while supplementary frames are being generated.
I could do something similar with rudimentary Flash morph features nearly 10 years ago with only a few weeks of practice in IT class. It's really not difficult to do.
Imagine drooling over a 4090 when you had to sell your 3090 at a loss to get a 3090 ti. People doing this are a bigger problem than scalpers.
The tensor cores are built for the kind of math used in AI, unlike the RT cores. By offloading work from the RT cores, which are running at max, onto the tensor cores, which are much better at AI calculations, the GPU uses less power because it doesn't have to work as hard.
I really wish Nvidia would do tech architecture videos like Intel does so more people can understand how things are working under the hood.
Ah, now I understand why you are so good in explaining these complex concepts to viewers, you are a teacher!!
The problem is the price... The performance and new tech sound incredible and I would love to experience it! But how can I justify a $1.6k purchase to myself? Maybe I should learn Blender haha.
I'm going to start an OnlyFans to help pay for it. Everybody seems to be at it these days 😅
@@nomadicsouls3290 lol
Jensen will have to personally blow me off for me to pay those prices. Waiting for Nov 3.
I like the detailed videos. I do indeed only compare what I need and what's useful for sim racing. I'm still on a single screen and the GPU runs at 80% load; I do that on purpose to have fewer frame drops when racing online. I see the 40 series as a good step forward, and the 50 series will be worth upgrading to in my case. Settings are mostly maxed out. Rennsport (UE5) is going to change that, I guess lol
When the fps is already 100, for example, the latency is OK for me, and in that case I could use DLSS 3 to get it up to the monitor's refresh rate without using G-Sync. I just think I'll see a lot of people on Facebook asking whether to use a 40-series card to bump their 30-40 fps with DLSS 3.
AMD is faster in raw raster and in some racing games; check them out too. Nvidia is being extra asshatty.
Damn, you're a high school teacher??? I would have loved to have you as a history teacher in high school 😂
Great to hear about less power consumption on 40 cards.
Love your videos, but you need an ad-blocker on your browser 🤣
50C at full load? So why on earth did they make the cards so big? They could have made the cooling half the size and hovered around the usual 70C. Something isn't adding up here.
For that crazy price it had better be extremely good.
I don't think DLSS 3 should be called DLSS 3.
How about Deep Learning Frame Insertion (DLFI), or Frame Sampling Injection (FSI)?
I also think there should be a way to control the frame-rate threshold for insertion: if fps drops below a target, a difficult scene can stay smooth, but the feature doesn't need to be active all the time.
That would just add more latency.
There should hopefully be a way to turn off the frame generation and essentially just run DLSS 2, because not everyone is going to like it, and not in every game. It shouldn't be up to the developer either, because you know some won't bother.
My local Gigabyte distributor has those Newegg prices and... the 6950 XT at US$670, plenty of stock too. The rest haven't followed though... don't know what's up.
What do you teach in high school boss?
Would you know, Nvidia Clock = 2850, Amd = near 3ghz, Nivida cuts power draw by 25%, Amd used less power draw, Strange how this comes out just before the amd cards launch. I feel panic from Nvidia
If there was panic they wouldnt be selling a 70 class for $900. Its crystal clear that Nvidia doesnt care at all about AMD at all. If they did then this would be the stupidest way of handling. This is just S tier hubris on Nvidias part and they genuinely think they dont have to worry at all about not just AMD but about people getting sick of their anti consumer crap. I think Jensen is gonna get a rude awakening this time around but i dont think he cares at all what AMD is doing
@@Angel7black Yea ya could be right, but I do think he will get a shock and that will be on the 4080"2 versions" but Im wondering, are they gonna now put the 4070 on a different dye or still use the 4080 12gb`s dye?
@Salt Maker lol, that bus width. I'd say that's a 4060, the 16GB 4080 is the 4070, and the 4090 is the 4080.
Keep in mind that AMD and Nvidia architectures do different amounts of work per clock, so you can compare clock speeds between one Nvidia card and another, but not between an Nvidia card and an AMD card.
@Salt Maker oops
When it comes to inserting frames, the previous video sounded more reasonable (even if that's not what's actually going on in the algorithm). This time, with some future frame buffer and motion vectors, I don't buy it, although I'm prepared to be corrected. It just doesn't make much sense. Is the motion even known to the graphics card? And even if it is, it would still be predicting the future. By the same logic you could take the last two (or three...) frames and extrapolate what comes next. It won't solve the responsiveness problem, since we don't know whether the player will generate input or not.
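The extrapolation idea mentioned in that comment can be shown with a toy example (the function and coordinates are purely illustrative; real frame generation works on per-pixel or per-block motion vectors, not game objects): given a position in the last two frames, linearly predict the next one.

```python
# Toy illustration of "take the last two frames and extrapolate":
# assume the motion between the last two frames continues unchanged.
# Purely illustrative - real frame generation operates on dense
# motion fields, not individual object positions.

def extrapolate_next(pos_prev, pos_curr):
    """Linear extrapolation of a 2D position one frame forward."""
    vx = pos_curr[0] - pos_prev[0]
    vy = pos_curr[1] - pos_prev[1]
    return (pos_curr[0] + vx, pos_curr[1] + vy)

# Object moved from (10, 5) to (14, 5): predict (18, 5).
print(extrapolate_next((10, 5), (14, 5)))  # (18, 5)
```

A sudden player input (say, a turn) breaks the constant-motion assumption, which is exactly the responsiveness concern raised in the comment: a predicted frame cannot reflect input that hasn't happened yet.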
does anyone know what classes Daniel teaches? Algebra, trig, calculus?
Edit: I found on the internet that he teaches geometry.
I hope they will implement DLSS 3.0 on the 30 series; they said the same about RTX, and after all the hype I ended up using RTX on my 1070.
It sure looks like DLSS 3.0 Fake Frame Generation latency issue could introduce stutter at the very least. Personally, I find random freezing and stutter in games so annoying, almost as bad as lower fps itself.
Not only will it increase latency, but it will have significant negative impact on accuracy.
@@zqzj Yeah, that and the fact that DLSS 3.0 leaves behind those who purchased 20 and 30 series cards is a hard pill to swallow. Especially when AMD finds ways to include several generations of their own cards, and the competition's cards as well.
As far as the quality difference between FSR and DLSS goes, I believe we will reach a point of diminishing returns, at which point FSR 2.x or FSR 3.x will be generally accepted as 'good enough'.
I like this dude…keep up the great work!
can my 3090ti strix do dlss3
Does anyone know the 1440p and 4k benchmarks vs old gen without crappy dlss?
4080 12gb below 3090ti, 4090 in cherry picked games +40% over 3090ti
@@Chrontard Oh yeah, no. The 4090 gets around 70 percent more performance than the 3090 Ti in Division 2 and RE Village, which are games with no RT/DLSS implementation, per the graphs on the website. 50 percent more in AC Valhalla, a game which runs badly on Nvidia cards. So definitely well over 60 percent on average over the 3090 Ti. The 4090 seems to be an impressive card, unlike both 4080s.
His heater kicked on? It's 97f here right now. Are you in the arctic???
Not being able to hit 4K 60fps natively is really disappointing since they claimed 2x raster perf. Still probably worth a buy though.
It's not worth buying at all. It is a full scam, for two major reasons. The first is that the prices are ridiculously high for what the GPUs are actually capable of. The second is that they are making false performance claims. Nvidia said the 4090 is 2-4x faster than the 3090 Ti, and we all know that's total bullshit. They shouldn't be allowed to say it's 2-4x faster unless the GPU is actually that much faster than its predecessor, apples to apples.
@@jeevan1198 I am saying it's worth a buy as someone who currently doesn't have a GPU and is in the market for one. I agree that it's a poor decision to upgrade from something like a 3090.
@@gonkbous If Nvidia had really pushed the 40 series to its limits and gotten the most performance possible out of it, they could have made the 4090 around 80% faster than the 3090 Ti in rasterised games and up to 3x faster in ray-traced games without DLSS. They chose not to, because Nvidia likes to minimise performance and maximise prices. That's what makes them so fucking selfish; it proves they don't give a shit about their customers. Nvidia has relied heavily on DLSS to give the 40 series a massive performance boost, but they can't just rely on DLSS. I'm not saying DLSS is bad. It's a cool feature and it can be useful in some cases, and I don't have a problem with Nvidia developing their DLSS technology. One problem I do have with DLSS, though, is when it becomes the main selling point of the GPUs. Even if someone had a 1080 Ti, I would recommend they get either a 30 series or an RX 6000 GPU. The other option is waiting for RDNA 3. If it were me, I would skip the 4000 series.
There's no way the GPU was at full utilization with 25 percent less wattage.
One thing people are overlooking is the lack of input information in the fake frame. Also, if you get the frame rate high enough to overcome the input lag, you are not going to notice the increase in smoothness.
DLSS 3 will make any game playable at 8K with quality mode. That's exciting. I already can't tell the difference at 4K with performance mode; maybe there's a little bit of aliasing on objects, but it's not noticeable.
Hey Daniel, not to be rude, but will you update your channel picture? It's kind of dark and not so nice. I love your videos though.
I knew it was marketing BS. If your product relies on a feature that not everyone may like in order to hit the performance numbers you advertise, a feature which ALSO works on previous-gen products, then you didn't really make that good a product. AMD, come out swinging. Swing hard and let your presence be known.
"i'm a highschool teacher" yeah i'm not surprised lmao. You sure sound like it.
It's not good that the 4000 series doesn't have DisplayPort 2.0.
Yeah, because nothing says efficiency like a GPU cooler that's over 3 slots thick and weighs over 4 kg...
5950x+3090 here, native 1440p + maximum ray tracing psycho settings = 30~33 fps, at outside of V's apartment.
can't wait for review and test.
Can you go one episode without saying Apples to Apples 😂
I can't wait until it tanks later and I can buy it for under $1000 like the 3090 Ti.
Don't be too worried about buying more GPUs. The nice thing about your channel is it could be considered a business, and those count as business expenses. Hopefully you get what I'm hinting at 😉
It would be incredible to use a RTX 4090 with a high end VR headset.
Wait, it's 59 fps with ultra ray tracing (Psycho settings), DLSS off. At the quality preset I believe the fps should be higher, and I wonder how high.
A 62% native performance increase is massive, bigger than any other generation. The craziest part is this is against the 3090 Ti, so imagine a 4090 Ti! That means it's over 70% faster than the 3090 in native performance, without DLSS.
4090 is impressive yes, 4080/4070 not so much given the price. They don't make much sense to buy where 4090 is so far ahead, glad I can skip this generation.
Still a looong way from the 2-4x they were boasting before these benchmarks came out.
This is abysmal if their own advertising says 2-4X perf. They should be sued.
But as far as the pricing goes, Nvidia has no other option, because they can't crash the price of unsold 30-series stock. Can't blame them for that, but for all the rest of the bullshit, oh god.
At first, when watching the launch live, I was OK with everything, but when I found out that the 4080 12GB was really a 4070, I was pissed. That's misleading the customer, big time.
@@Rivexd The 12GB 4080 still makes sense, and I will explain why. So far, 12GB is enough for 99% of games at 4K; you're getting the 4080 16GB's performance with less VRAM, which results in a 2-3% performance loss (like the previous gen, where the 3090 was only 10% faster than the 3080). This gen, Nvidia said both the 4080 and 4090 are 2x or more over their previous-gen counterparts...
Considering both got the same performance bump, at $899 the 4080 12GB is a lot better value than the other offerings... Who needs the others? The 4080 16GB is for VR users, and the 4090 for content creators/VR users.
@@TRX25EX have you been asleep?
14:35 I think, too, that some frames are made more equal than others.
It was quite obvious that DLSS 3 would reduce energy use. If "fake" frames are added in (as they are), they don't require much computation.
I think Nvidia had to come up with something. It may well work. We will have to see.
You know what else is really fast? Ferraris. Can't afford them either.
Can’t wait to get a rtx 4090 this tech will be excellent!
I’m excited. Lower power and cooler temps would definitely be a bonus for me. I’m not ultra competitive so the latency won’t bother me and i probably wouldn’t even notice it.
50+ ms of latency is defo noticeable
Wake up, bro, your games have to support DLSS in the first place for you to experience that high-ass latency. Competitive games are generally not going to support DLSS.
Ignoring the majority of the player base just because it doesn't affect you is simply ignorant at best. Look around you nobody shares your opinion.
will it benefit my RTX 2060?
Nope
To be more precise: in addition to upscaling, DLSS 3 also uses Optical Multi Frame Generation to generate entire individual frames based on the upscaled frames before them. Optical Multi Frame Generation still operates at reduced resolution; without first reducing the amount of data that needs to be processed (i.e. reducing the resolution), the speedup is impossible. In other words, DLSS 3's upscaling is better, but the image-quality defects that DLSS 2 had in the past will still exist; the degree of those defects in DLSS 3 will just be smaller.
DLSS 3's AI is mainly based on analysing and predicting the motion of everything in the game, but whenever there is an unexpected input, DLSS 3's image defects will appear. That's why the DLSS 3 footage Nvidia has shown is all straight-ahead movement: the inertia of moving straight forward is unlikely to be predicted wrong. But after holding a straight line for a while and then making a sudden turn, the AI will make mistakes, and it's completely normal for a player to change direction.
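The motion-based frame generation described in these two comments can be sketched crudely: warp the previous frame halfway along its motion vectors to synthesize an in-between frame. This is a toy nearest-pixel forward warp for illustration only; DLSS 3's actual pipeline (hardware optical flow, AI blending, occlusion handling) is far more involved, and the function below is entirely made up.

```python
import numpy as np

# Toy sketch of motion-vector frame interpolation: move each pixel of
# the previous frame half of its per-pixel motion to synthesize a
# middle frame. Not DLSS 3's real algorithm - just the core idea.

def interpolate_midframe(prev_frame, flow):
    """prev_frame: HxW intensity array; flow: HxWx2 per-pixel motion
    (dy, dx) from the previous frame to the next frame."""
    h, w = prev_frame.shape
    mid = np.zeros_like(prev_frame)
    for y in range(h):
        for x in range(w):
            # Move each pixel half of its full-frame motion.
            ny = int(round(y + flow[y, x, 0] / 2)) % h
            nx = int(round(x + flow[y, x, 1] / 2)) % w
            # Brightest-wins write, to keep this toy example simple.
            mid[ny, nx] = max(mid[ny, nx], prev_frame[y, x])
    return mid

# A single bright pixel moving 4 px right per frame lands 2 px right
# in the interpolated middle frame.
frame = np.zeros((8, 8)); frame[4, 2] = 1.0
flow = np.zeros((8, 8, 2)); flow[4, 2] = (0, 4)
mid = interpolate_midframe(frame, flow)
print(np.argwhere(mid == 1.0))  # [[4 4]]
```

Note how the warp only uses motion already observed between rendered frames; it carries no new player input, which is why interpolated frames raise the fps counter without improving responsiveness.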
i think strategy games and simulator games will benefit a lot from dlss 3
Need to see DLSS 2 on the 4080 vs DLSS 2 on the 3080.
A function that works in like 1% of games. Woot super excited! LOL
I play CP77 in 2K on 3090 SuprimX and I'm getting 69-100 FPS in the game, all settings maxed out, my CPU is 9900K OC 5.1Ghz