One last thing about the RTX 3090 power management.

  • Uploaded 25 Oct 2021
  • Short version of my thoughts on the RTX 3090 vs New World situation: buildzoid.blogspot.com/2021/1...
    My Patreon: / buildzoid
    Teespring: teespring.com/stores/actually...
    The Twitch: / buildzoid
    The Facebook: / actuallyhardcoreovercl...
    #RTX3090 #NewWorld
  • Science & Technology

Comments • 374

  • @VargVikernes1488 · 2 years ago · +122

    This WILL NOT be the last thing about burnt 3090s.

    • @guycxz · 2 years ago · +6

      What do you mean? And why are you playing New World in a church?

    • @NyangisKhan · 2 years ago · +2

      The 3090 is cursed af. Glad I didn't snatch it when I had the chance and instead went for a 3080.

    • @VargVikernes1488 · 2 years ago

      @@guycxz LMAO

    • @agenericaccount3935 · 2 years ago

      ⛪️ 🔥

    • @nexum9977 · 2 years ago

      A burnt 3090, or a church burnt by a 3090? xD

  • @xvpower · 2 years ago · +133

    I did find it funny that people were calling New World "unoptimized". To me it seemed like the exact opposite: it doesn't leave many transistors unused.

    • @cacheman · 2 years ago · +43

      Using all the transistors just means a lot of resources are used, not that they're put to good use. Something being "optimized" carries some implication of resources being used well, especially in a GPU, where a LOT of parallel work may simply be thrown away if a shader is poorly written.

    • @defeqel6537 · 2 years ago · +10

      @@cacheman I've noticed the same with a lot of Battlefield games maxing out CPU utilization. That doesn't necessarily mean a game is well optimized for multi-core, just that it uses a lot of multi-core resources.

    • @FrantisekPicifuk · 2 years ago · +10

      Not to mention that there is only so much you can do when it comes to optimization for Nvidia cards. Developers have noted that Nvidia drivers and their integration are like a black box, so any optimization is much harder than integrating drivers for Radeon cards, which tend to be more open. So this might actually be on Nvidia and some internal fuckery that's done at a deep level inside their drivers.

    • @Dhaydon75 · 2 years ago · +1

      The game kinda feels unoptimised, but not in how well it uses the hardware; it just scales really badly with many player models/cities and some other effects.

    • @jabroni6199 · 2 years ago · +1

      I'm gonna tell my power company they shouldn't charge me so much once I start using every available amp coming into my home, because my power usage is optimized.

  • @10ghznetburst · 2 years ago · +38

    IIRC AMD was working on a per-WGP clock management system, exactly to deal with stuff like this.
    The reason you get big fluctuations in power is that you get different utilisation factors on the SIMD units. E.g. take a GPU that issues SIMD32 instructions: one instruction operates on up to 32 points of data, but it's unlikely that in an actual scene you will always have 32 points of data to execute one instruction on. You may, for the sake of example, only use around 20 of the 32 execution units on average, but then if you have small portions of your workload where most of the GPU suddenly has close to 32/32 utilisation of the execution units, you end up drawing a lot more power than before (see the toy model below).
    Per-WGP or per-SM clock management would require you to detect at a pretty low level what the utilisation factor is, and then preemptively choose not to issue instructions every clock cycle where you have many high-utilisation instructions back to back, or alternatively change the clock speed before the transient gets out of hand.
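
    A minimal toy model of the occupancy effect described above (all constants are made-up illustrative values, not measurements of any real GPU):

      # Toy model: GPU power as a function of SIMD lane occupancy, showing how
      # a brief stretch of fully occupied wavefronts produces a power transient
      # well above the steady-state average.

      IDLE_W = 100.0      # assumed static/baseline power (W)
      PER_LANE_W = 0.01   # assumed dynamic power per active lane (W)
      N_LANES = 10000     # assumed number of SIMD lanes across the GPU

      def power(occupancy):
          """Power draw for a given average lane occupancy (0.0 to 1.0)."""
          return IDLE_W + PER_LANE_W * N_LANES * occupancy

      steady = power(20 / 32)   # typical scene: ~20 of 32 lanes doing useful work
      burst = power(32 / 32)    # short stretch where nearly every lane is active
      print(f"steady: {steady:.0f} W, burst: {burst:.0f} W "
            f"(+{100 * (burst / steady - 1):.0f}%)")

    Per-SM/WGP clock management, as described, would throttle before the burst case is allowed to run back to back for long.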

    • @JethroRose · 2 years ago · +1

      It's probably in both AMD's and Nvidia's interest to pursue this, as being able to selectively turn off parts of the card (like Ryzen does inside the CPU) means it can run cooler, draw less power, and thus opportunistically boost higher when required because it is cooler.

  • @VargVikernes1488 · 2 years ago · +11

    Chad spiciest donut vs Virgin pretty 3D visuals

  • @Soviet_Elmo · 2 years ago · +65

    Out of curiosity: do you think comparing AMD's boost behaviour to Nvidia's would make for interesting content?

    • @tanishqbhaiji103 · 2 years ago · +10

      Yes, but it wouldn't be very easy to make that video.

    • @NyangisKhan · 2 years ago · +7

      Buildzoid currently hates AMD because of them locking the maximum clock speeds. And his 6900 XT blew up, so I don't think he even *has* a card to test that with even if he wants to.

    • @raze4789 · 2 years ago · +2

      Too true. I got a 6600 XT and after a couple weeks of normal use went to OC. No dice. It won't move past 2600 MHz no matter what. So I just undervolted it and let it do its thing. Still not a bad card.

    • @moritzaufenanger2537 · 2 years ago

      @@raze4789 2600?

    • @raze4789 · 2 years ago

      @@moritzaufenanger2537 Yeah. It doesn't care what I set the core clock to. It'll just boost itself into the 2600s and that's it.

  • @lrmcatspaw1 · 2 years ago · +43

    VRMs: why are we here just to suffer?
    GPU CORE: Unlimited POWER!!!!! Also, Deal with it.

  • @Thirdeyestrappd · 2 years ago · +80

    I heard it's been scientifically proven that if you zoom in on the human iris enough you get the FurMark donut.

    • @SgtRock4445 · 2 years ago · +10

      Same with Uranus

    • @Thirdeyestrappd · 2 years ago · +3

      @@SgtRock4445 I'll have to buy a telescope this weekend 😂

  • @commentaccount7880 · 2 years ago · +8

    "They especially don't last forever when you run them close to their limits"... then I just slide over and turn down my power limit in Afterburner instantly lol

    • @toxy3580 · 2 years ago · +1

      My 980 Ti has been at max limits for 6 years now.

  • @ActuallyHardcoreOverclocking

    second

  • 2 years ago · +1

    Excellent video as usual!

  • @user-ro1cc8tz6d · 2 years ago · +8

    21:44 Don't worry about that. In the future, games are going to be rendered with pure Electron, Rust, JavaScript and soy.

  • @centurion1443 · 2 years ago

    Many thanks for this series of videos! Any suggestions for protecting the GPU? E.g. capping max FPS, undervolting?

  • @tessierrr · 2 years ago · +11

    Weren't 3090s blowing up since launch? New World just opened people's eyes to how shitty the power delivery is 🤣

  • @testynetesty · 2 years ago · +5

    Wait, did Gigabyte really start using 60A DrMOS parts? Every 3080/Ti/90 that I've seen used either AL00 (AOZ5332, 50A) or BLN0 (AOZ5311, which was 50A and then updated to 55A). If so, I wonder whether it's some sort of preventive fix?

  • @ole7736 · 2 years ago

    Great analysis!

  • @mmbr20 · 2 years ago · +4

    Appreciate your analysis, thank you. I actually run mine at 0.850 V / 1800 MHz. Would you say the longevity of the card will be increased?

  • @jazz9fr · 2 years ago · +11

    Would this be as much of an issue on something like a 3090 FE, which uses 70A smart power stages and better controllers?

    • @M1nat0 · 2 years ago

      Yes, because IIRC FEs still use 50A power stages; 3090 Tis are the ones that use 70A power stages.

  • @benjaminchung991 · 2 years ago · +8

    For consumers in the future - and ideally to try and drive manufacturer behavior - how hard would it be to include a look at the protections implemented in the VRM when you do board analyses on GPUs? What additional information would you need to conduct the analysis, beyond what's available from the PCB shots?

    • @10ghznetburst · 2 years ago · +3

      You'd need to actually test the cards to see where they hit the limits, or have board schematics, to fully understand how they are configured. And anyway, I expect GPUs will start to implement more granular power management systems at a low level, to avoid issuing instructions in a way that generates such transients, so it's likely not something that you'd be able to analyse at board level.

    • @linnaea_lavia · 2 years ago · +1

      Publicly available datasheets for the components, which are not a guarantee.

    • @10ghznetburst · 2 years ago · +2

      @@linnaea_lavia Aside from leaked Gigabyte ones, good luck getting board schematics for the GPUs.

    • @linnaea_lavia · 2 years ago · +1

      @@10ghznetburst If there's a full datasheet available for the power controller, one can guess where the monitoring components should be located, and the datasheet would specify how those components should be chosen. The problem is that nowadays even that's not a guarantee: some manufacturers lock up their documentation and only publish a 2-page flyer (they insist on calling it a "brief datasheet", which I very much disagree with).

  • @augustusbeard4528 · 2 years ago · +1

    Could it be that the 8K results are just easier to monitor because the frames are longer, so the transient response might also be longer - i.e. the software polling is fast enough to see the peaks, compared to a faster framerate / shorter frame time? Also, do you have any idea what information GPU-Z uses to monitor power consumption? Because if it's Nvidia's own shunt-resistor power monitoring circuitry, wouldn't that imply that the actual peak power is even higher than reported in software? Lastly, if this is true: I understand that most electrical components are made to withstand pulses on a somewhat regular basis, but wouldn't the long-term effect of these extreme pulses degrade the VRM components themselves? Or aren't pulses that destructive for these components, because of the relatively small sustained temperature increase and the short nature of the pulses?

  • @arthurberggren5618 · 2 years ago

    Hey Buildzoid, completely off topic for this video, but I was wondering if you happen to know which BIOS is best on a Gigabyte Z390 Aorus Pro WiFi, F12k or F11? I actually didn't even want the Aorus Pro. I wanted to buy the Ultra or the Master, but they were not in stock anywhere at the time, and when they were it was at a completely outrageous price. This brings me to a video suggestion: you could potentially make one on the differences between BIOS revisions on whatever motherboard seems appropriate. I know you have an oscilloscope, so maybe measure the differences in transient response, etc. I would also like to thank you for putting out such in-depth videos. I really wish more people would get a little more advanced when it comes to, well, everything really. I appreciate your work and the time you put in. Thank you.
    Side note: I know a lot of people have trouble overclocking RAM on the Z390 Aorus Pro. I got mine to actually POST at 4266 and boot into Windows at 4133, though of course it was not stable. I was able to get 3800 15-15-15-30 stable; 3866 however was not. This was all on F12k. I switched to F11, and 3900 and 3866 are stable at 15-15-15-30, all with really tight secondaries and tertiaries. RAM is G.Skill non-RGB 3200 14-14-14-34. Anything above 3900 I just cannot get stable. If you have any suggestions on how to get 4000+ stable, please pass on the information, and thank you.

  • @andrewmcewan9145 · 2 years ago · +6

    My friend's 1070 would sometimes trip OCP on the power supply whenever you were loading something. Getting a bigger power supply solved the issue, but as you said, it proves that Nvidia ignores short bursts.
    Turing was the first NVIDIA GPU to process a 32-bit integer operation simultaneously with a 32-bit floating-point operation. Ampere increases the superscalar width to 2x FP32 operations per clock, or 1x FP32 + 1x INT32 operation per clock.
    These additional FP32 operations per clock may lead to grosser violations of the power limit compared to previous generations, especially as FP32 probably uses more transistors than INT.
    A Pascal vs Turing vs Ampere comparison of power-spec violations in New World would be interesting to see, but not necessarily feasible.
    Although, as you said, this does make me concerned about using big Ampere long term.

    • @Azerkeux · 2 years ago

      I wonder why the 16-series cards were so locked down. I have both an EVGA 1660 and a 1060; the 1660 does not let you increase the power limit at all in OC software, and I have never seen it go over 125 W, whereas with the 1060 you can.

  • @ShaneCutting · 2 years ago

    Do you have any recommended solutions for this issue that can be done on the user end? I have a Gigabyte 3090 Gaming OC and I would like to not blow it up.

  • @tekjunkie28 · 2 years ago

    So how accurate is that 8-pin voltage reading, Buildzoid? 11.4 V sounds pretty low. I know that may or may not be a factor in the cards dying, but isn't that out of spec?

  • @KaNoMikoProductions · 2 years ago · +3

    Buildzoid, is this as much of a problem for the 3090 TUF? Since its entire shtick is that it's meant to be durable, does it have better VRM protection or any such?

    • @vyor8837 · 2 years ago

      The TUF parts have been garbage for an age.

    • @KaNoMikoProductions · 2 years ago

      @@pcoverthink He did a breakdown of the 3080 TUF, which has the same PCB as the 3090, and he said something along the lines of it being the best 3080.

  • @sythex92 · 2 years ago · +1

    I like how the memory temps are 205 degrees, let me just cook a fuckin pizza on that.

  • @ANiMOSiTYZA · 2 years ago · +1

    The level of depth in your videos is astonishing! Thank you!
    I have a question... maybe.
    I have a Zotac Trinity 3090, which was the only card I could get last year, and I even got it at close to MSRP.
    I flashed it with a VBIOS that has the power target set to 370 W (it's a 350 W card, as you know) and I have the card's limit set to 105% TDP, so it's close to the max allowed limit of 390 W.
    It runs at, or near, 105% TDP in many workloads and has been like this for most of the time I've had it.
    I have a custom water loop covering all the power stages and the VRM, and a passive backplate.
    Is the cooling what is allowing the card to survive?

    • @alexmills1329 · 2 years ago · +2

      Yes. Higher temperature only accelerates degradation of these components, but if they are pushed out of spec they can and will still fail early regardless.

    • @anarcat6653 · 2 years ago · +2

      Buildzoid already answered this question, or a similar one: "it does. If you keep a VRM that's on the edge of it's capabilities at 50C instead of 90C it helps a lot."

  • @Logan_67 · 2 years ago

    Does everything you have gone through with this Vision card also apply to the Aorus 3090 Xtreme?

  • @thorstenschroder7929 · 2 years ago

    I wonder if twice the capacitance on the GPU side of the VRM could take away a little bit of the peak load on the VRM. Or are the peaks so long that you'd need crazy amounts of capacitance to handle these situations?
    Also: are there power stages with higher ratings, or are we getting back into discrete-MOSFET territory at over 60 A continuous?
    So far my SMPS designs peaked at 15 A, with MOSFETs whose continuous current ratings are about the value of the calculated peaks.
    Another thing that just popped into my head: how close are they running the inductors to their saturation limits? As soon as an inductor saturates, its resistive component (less than 1 mOhm) becomes dominant, which effectively looks like shorting the GPU power rail to the 12 V input rail for a few microseconds, before the controller can turn off the power stage due to overvoltage (overcurrent should have tripped here, but doesn't, due to the mentioned design flaws) or the maximum on-time being reached. Rough numbers below.
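
    To put rough numbers on that saturation scenario (all values here are illustrative assumptions, not measurements from the video): in a buck phase the inductor current normally ramps at a rate set by the inductance, but once the inductor saturates, only the parasitic resistances limit the current:

      % Illustrative assumptions: V_in = 12 V, V_out ~ 1 V, DCR ~ 1 mOhm
      \frac{di}{dt}\Big|_{\text{normal}} = \frac{V_{in} - V_{out}}{L},
      \qquad
      I_{\text{sat}} \lesssim \frac{V_{in} - V_{out}}{R_{DCR}}
      \approx \frac{12\,\mathrm{V} - 1\,\mathrm{V}}{1\,\mathrm{m\Omega}} = 11\,\mathrm{kA}

    In practice the FET on-resistance, PCB resistance, and the controller's response time bound the spike far below that, but it shows why a saturated inductor effectively reads as a short.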

  • @riklaunim · 2 years ago · +3

    Xonotic (a free, simple shooter) can cause quite high power draw, at least in Linux benchmarks, so it may also be usable for showcasing this.

  • @Faroghar · 2 years ago

    What is the true safe temperature limit for these components (not just lasting to the end of the warranty)? 90°C? (I don't want to spend 1k+ on water cooling for nothing.)

  • @ianmoone8244 · 2 years ago

    Which PSU did you use for that test? I saw 11.2 V on 8-pin #1! O.o

  • @theprofessor131 · 2 years ago

    Haven't tried it myself, but does renaming or deleting GPUMonitor_x64.dll from inside "C:\...\Superposition Benchmark\bin" resolve GPU-Z's reset issue? I figure it might be caused by some sort of fisticuffs between the two programs both polling the GPU for statistics.

  • @BetteBalterZen · 2 years ago

    Hi AHC,
    I own a ROG STRIX 3080 Ti OC LC model.
    Would you fear playing New World with this card?
    I play New World with this card and am using an undervolt profile: 1800 MHz @ 850 mV.
    Thanks

  • @MatthewKiehl · 2 years ago

    This FurMark donut is used in MSI Kombustor - anyone know how similar it is? I discovered that Path of Exile (using the Vulkan renderer) was giving me higher temps than MSI Kombustor. I thought that this might be a result of full system utilization beyond just the GPU (more heat in the case and on the board in general). I had to use frame caps with that game to keep temperatures in line.

  • @andytroo · 2 years ago · +1

    There's probably an optimal spike frequency, where the weight of the previous spike has just fallen out of the averaging time window, so you can go for another millisecond at >100% power.

  • @fleurdewin7958 · 2 years ago

    Hi Buildzoid, I have 2 questions:
    1. From the GPU-Z monitoring, I can see that the 8-pin #1 voltage sometimes goes as low as 11.3 V. From what I know, the ATX spec calls for a +-5% voltage tolerance on the 12 V rail, which is 11.4 V to 12.6 V. So is your power supply becoming faulty, or is the GPU-Z reading actually wrong?
    2. The memory temps hover at 94 Celsius and look like they might still increase. Is it dangerous to run at these temps if I expect the GPU to last at least 5 years?

  • @michaelpascual2731 · 2 years ago · +4

    What about using higher-quality components in the power delivery system to handle these spikes? Is that even possible; do better-quality components even exist?

    • @D3humaniz3d · 2 years ago

      If you have more VRM phases that can share and distribute the load, they are going to last longer, since the load will be spread out. If you have higher-quality components rated for higher voltages / power draw, of course they are going to last longer, by definition.
      That's why, if you intend to use a GPU for longer, you should basically pick whichever card has the better power delivery components / better design. A good example of this is the POSCAP/MLCC fiasco at the launch of these [3090, 3080] cards. Cards that only had MLCCs (like the Strix and TUF) did not have any stability issues whatsoever. Meanwhile, everyone who went with 5 or more POSCAPs had stability issues - at least from what I remember.
      Why did they use POSCAPs? Cause it's cheaper, nerd.

  • @ThisIsAGoodUserNameToo · 2 years ago · +4

    I was really hoping you'd show us the power usage under New World.

  • @kaoskilo · 2 years ago

    Might be a noob question, but does the 3080 Ti Vision suffer from the same issues?

  • @LegendaryGauntlet · 2 years ago

    Would watercooled cards (with watercooled VRMs, obviously) last a little bit longer? How's the longevity of a VRM at peak load but cooler temps vs the same VRM at high temps?

    • @benjaminoechsli1941 · 2 years ago · +1

      To pull a reply BZ made to another, similar question: "A VRM under full load kept at 50C will last much longer than one that is running at 90C."

  • @futureb1ues · 2 years ago · +5

    Does this apply to the 3090 FE design as well, or just AIB/reference designs?

    • @Squall4Rinoa · 2 years ago

      Just AIBs.

    • @Squall4Rinoa · 2 years ago

      @@pcoverthink Please don't reply if you have no qualifications to your name; the boost to 110% is not a violation, it's part of the boost design and has been the entire time.

    • @Squall4Rinoa · 2 years ago

      @@pcoverthink LMAO, I have thrice the experience and qualifications compared to you, kid; bugger off.

  • @BillyC500 · 2 years ago

    I can see this being a move made knowing the downsides. Is this typical of GPU power management, or was it introduced with the 3090?

  • @Mako-sz4qr · 2 years ago

    Do you recommend undervolting the 3090?

  • @lkuzmanov · 2 years ago

    Hi BZ, I recently went custom water, so I've been staring at GPU power and temperatures a lot while overclocking my 3080 GAMING Z TRIO 10G during Superposition stress tests @ 1440p.
    One odd thing I'm noticing is that the driver downclocks the GPU even in situations where the GPU is < 100% utilized in terms of power. E.g. I'm running a 100 MHz OC on the core, and in games the GPU will usually hover around or beyond 2000 MHz, but during the 1440p stress test I'll often see clocks of < 1950 MHz even at 95% GPU power. As you can imagine, under the custom block temps barely move. Thoughts?
    P.S. The card usually runs at around 365 W when maxed, and I'm seeing the above at GPU power readings around 340 W, so closer to 90% than 95%, which confuses me additionally...

    • @yourhandlehere1 · 2 years ago

      Lyuben Kuzmanov... 1440p doesn't "stress" a 3080 at all.
      My PNY only starts freezing if I go +300 on the core. I never mess with voltage. 1995-2050 MHz out of the box... cruises at 2200 MHz, mid 60s on air. I use an RM1000x to cover any spikes. It does like to pass its 320 W limit.

    • @lkuzmanov · 2 years ago

      @@yourhandlehere1 It's what the test is called: Stress. At 1440p it stays around, and often goes beyond, 100%, which is enough for my purposes - to load the card. I'm not worried about max clocks or stability that much; I've found those points. I'm playing with RPM curves to find the sweet spot in terms of noise and temps. My question was about the odd behavior of Superposition in, for example, scene 8/17. For a while both the power and GPU clock drop at the same time, and I can't make sense of it.

  • @MrPerpixel · 2 years ago

    My research on this with New World points to many failures happening when changing quality settings. Power does spike when doing so.

  • @peteraasa5267 · 2 years ago · +1

    320 W is what a 3080 should draw. I just got my EVGA FTW3 Ultra and it draws way over that if you don't tune it down, and the performance difference is like nothing, so I set it to 85% power, and now it draws 320 to 330 watts. I don't know why they set it so aggressively; I don't want to be in a sauna. Thanks for your input.

  • @wten · 2 years ago · +3

    A modification to FurMark to intentionally create a bursty load would be helpful.

  • @sagerdood · 2 years ago

    Gonna need you to review the new Z690 Master ASAP. I pre-ordered it and it looks reeeealy nice.

  • @renchesandsords · 2 years ago

    Can this sort of issue be mitigated by higher-capacitance output capacitors, to help stabilize the output voltage and reduce some of the feedback to the VRM?

    • @nicholasvinen · 2 years ago

      The capacitors would need to be huge (probably in the farads) to deliver 1000 A for milliseconds without sagging more than tens of millivolts. You can calculate the required capacitance, but I'm too lazy to do it.

    • @renchesandsords · 2 years ago

      @@nicholasvinen Good point. Running the numbers gets a value on the order of farads per capacitor, assuming 700 W of peak consumption and somewhere around 10-20 mV of droop across 16 capacitors (rough math below). That does seem a bit high.
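
      The back-of-envelope calculation behind that estimate, using the thread's own assumed numbers (700 W peak at roughly 1 V on the core, so ~700 A, held for an assumed 1 ms with 20 mV of allowed droop):

        C = \frac{I\,\Delta t}{\Delta V}
          = \frac{700\,\mathrm{A} \times 1\,\mathrm{ms}}{20\,\mathrm{mV}}
          = 35\,\mathrm{F}\ \text{total},\quad \approx 2.2\,\mathrm{F}\ \text{per capacitor across 16}

      Board-level output capacitance is in the millifarads at best, which is why bulk capacitance alone cannot absorb transients of this scale.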

  • @noko59 · 2 years ago

    1080p and 4K have the same geometry/triangles; what's different is that when you shade those triangles at higher resolutions (4K/8K) you have more pixels to shade to color each triangle. Each pixel is, for the most part, one invocation of a pixel shader or compute shader, so more pixels means more processing and more load (more shaders kept busy with work). Well, that is my understanding.

  • @lllllllllllillllllll · 2 years ago · +2

    So running the VRM at high load, close to its limits, is reducing the lifespan of or killing the VRM? How does EVGA shipping out new cards resolve this, then (apart from them blaming the soldering or w/e it was)? It seems untenable to just keep replacing them if high loads keep killing the VRM. Is it maybe also a combination with the high temps the VRM and memory modules run at?
    I would guess if that's the case, then running a single- or even double-sided GPU waterblock would probably help. Or do you think we'll see aftermarket cards running beefier VRMs in future designs to avoid issues like these?

    • @guycxz · 2 years ago

      There probably aren't enough cards dying yet to make a more permanent solution economically viable. Still, New World is unlikely to be the only game that can push the cards hard enough, and the VRM will probably be short-lived regardless; hopefully enough cards die within warranty to actually make it more economical to build a proper card rather than hope it lasts just until the warranty ends.

    • @UruguayOC · 2 years ago

      Read what I posted some minutes ago, bro. All the best, Sergio!

  • @Haos666 · 2 years ago

    @Actually Hardcore Overclocking
    ...so this is why extremely high FPS causes coil whine on some PCBs? Hundreds, or over a thousand, ultra-short power spikes per second?

  • @evenbetterthantherealthing92

    I'm hoping that running at 1440p is putting a little less stress on my 3090 (Strix); I opted for high refresh over 4K gaming. In HWiNFO and GPU-Z I've not seen spikes that exceed the power limits yet; knock on wood.

    • @utubby3730 · 2 years ago

      Well, considering the Strix has a 480 W BIOS out of the box, I would hope it's designed to handle the power draws more typically seen by gamers. I have a modest UV, left the PL alone, and it routinely draws 400 W (gaming at 4-5K).

  • @Mickulty · 2 years ago · +11

    What I'm hearing is that RTX 3090s are future classics destined to be very rare. Truly the Lamborghini of GPUs.

    • @N0N0111 · 2 years ago · +2

      A lot of water-cooled models will survive the 3-year mark.

    • @andersjjensen · 2 years ago · +1

      @@N0N0111 Water cooling doesn't protect your VRM from blowing up.

    • @ActuallyHardcoreOverclocking · 2 years ago · +13

      @@andersjjensen It does. If you keep a VRM that's on the edge of its capabilities at 50C instead of 90C, it helps a lot.

  • @krane9259 · 2 years ago

    What happens when you do it with no power limit?

  • @stevenmarquez4476 · 2 years ago

    Would a 3060 Ti be okay at 100% usage?

  • @Wasmachineman · 2 years ago

    There are a bunch of typos in your blog post, BZ.

  • @Methos_101 · 2 years ago · +1

    Does undervolting and flattening the curve in MSI Afterburner help with this behaviour?

    • @happydawg2663 · 2 years ago · +1

      Yes, it should mostly solve the problem: you get a little less FPS, but you don't end up with a burnt GPU. As BZ said, it mostly happens when all the CUDA cores are under load, e.g. at larger resolutions.

    • @nicholasvinen · 2 years ago · +1

      That's how I stopped my 3090 rebooting my system due to OCP on the 650 W power supply (meaning it was probably peaking close to 1000 W). It had virtually no effect on performance, but dropped average power from over 400 W to about 350 W.
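
      Alongside a curve undervolt, the bluntest user-side mitigation is lowering the board power limit. A minimal sketch via NVML, the same mechanism nvidia-smi -pl uses (assumes the nvidia-ml-py package and admin rights; the 300 W target is purely illustrative). The caveat from the video still applies: the limit is enforced over an averaging window, so millisecond transients can still slip through; a lower limit only drops the baseline they ride on.

        import pynvml

        pynvml.nvmlInit()
        gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

        # NVML reports power in milliwatts.
        lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
        print(f"allowed limit range: {lo // 1000}-{hi // 1000} W")

        target_mw = max(lo, min(hi, 300_000))  # clamp the illustrative 300 W target
        pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target_mw)  # needs admin rights
        print(f"limit set to {target_mw // 1000} W")

        pynvml.nvmlShutdown()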

  • @wewewe2712 · 1 year ago

    In my computer the hot spot reaches 100°C; what is the problem?

  • @squirrel6687 · 2 years ago · +1

    Fan cooled? Maybe the fans at full tilt are eating that last few percent of overhead.

  • @tarfeef_4268 · 2 years ago · +1

    Posting here since Alder Lake is the new hype:
    Can we get some rambling about DDR5 power delivery being on-stick now? Maybe talk about the impact on PSUs and motherboards now that, from what I've heard, memory will run off 5 V, not 12 V? I'm not sure how much the average PSU is specced to handle on 5 V, but I know in some cases that's not a huge number, and it could go up notably if DDR5 power consumption is higher - more so if boards/CPUs allow for higher density on consumer platforms (servers that support LRDIMMs etc. are already specced for insane memory power draw).

  • @anthonyc417 · 2 years ago

    My GB 3080 Ti Gaming OC maxes out at 362 W in SP 8K. So whatever was going on looks more and more like drivers to me personally. Unless the 3090, with the exact same PCB layout (minus one power stage on the 3080 Ti), is that different; but they have the same TDP, so IDK.

  • @apreviousseagle836 · 2 years ago

    I have an AORUS water-cooled 3090. I also stuck a fan on top of the backplate to further enhance cooling. My readings when running FurMark at 4K and 8x MSAA:
    GPU temp: 55C
    Memory junction temp: 72C
    Hot spot: 72C
    The only game I own that punches the card as hard as FurMark is MS Flight Simulator 2020. This game, at 4K with maxed-out graphics, is able to push the card to 99%, and I still only get 54 fps.

  • @billgaudette5524 · 2 years ago · +3

    If I run my EVGA 3080 FTW3 Ultra on the second VBIOS (105% power limit) and then set voltage and power to 100 and 105 respectively, the card will regularly report over 400 watts drawn in New World. I set it to 90/90 when I run the game, just in case.

    • @Cinnabuns2009 · 2 years ago · +1

      I was messing around with my EVGA 3080 Ti FTW3 with power limiting and voltage limiting, and it will boost to 1950 MHz @ 0.9 V pretty much all the time, and it will run happily there for what seems like indefinitely, at 67C. Whereas if I run the card at STOCK, it boosts to over 2 GHz very briefly, then gets to just over 80C and downclocks itself to 1800-1850 MHz, and then runs there pretty much full-time at higher power usage and temperature.
      In other words, my benchmarks are higher by quite a bit if I set the power limit to 85%, or even leave the power limit at 100% but cap the voltage curve at 0.9 V. Boost was taking the card to over 1.1 V, and that is on the default BIOS.
      So if you're NOT undervolting, you're leaving performance on the table and also using excess power, and then you have to run your fans faster to cool down all that power usage.
      It seems like Nvidia was really just pushing everything to the max this gen to lose less ground to AMD, and there will be (I'd bet, as Buildzoid also states) a spate of dead cards come a year or two, which Nvidia happily loves... oh, you need a new GPU? We have just the thing! Failure mode built in.

  • @Seriessify · 2 years ago · +3

    My understanding might be limited, but if FurMark hits the power limit at ~1200 MHz / 718 mV and you, for example, doubled the power limit via shunt modding, how much would FurMark pull in watts?

    • @VargVikernes1488 · 2 years ago

      ALL OF THEM

    • @unlimiteddy5546 · 2 years ago

      A shunt mod only affects the measured power delivery from the connectors. The amount of current drawn is determined by the GPU, by how much it needs to activate all its transistors. With a shunt mod you do not force double the power into the core; you basically tell the core that there's more power available if it needs it. The amount of power the core draws is basically set by voltage and frequency, which a shunt mod does not directly influence. (See the worked numbers after this thread.)

    • @muhschaf · 2 years ago

      Many, like a fuckton...

    • @volodumurkalunyak4651 · 2 years ago

      @@unlimiteddy5546 Actually, a shunt mod will influence frequency, as Nvidia's boost system takes the power limit into account to derive operating frequency and voltage. Less power reported -> higher frequency / voltage, until the card thinks it has used up all the available power headroom.

    • @Seriessify · 2 years ago

      @@unlimiteddy5546 That much I do understand; my point was wondering what freq/voltage the card would run at, and how big the power draw would be, if it were not limited by the power limit it is currently beating against.
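
      For readers unfamiliar with why the mod works as described in this thread: the card infers current from the voltage drop across a known sense (shunt) resistance, so lowering that resistance skews the reading. With hypothetical values (these boards use shunts of a few milliohms; 5 mOhm is only an assumed example, and soldering an equal resistor on top puts it in parallel):

        I_{\text{reported}} = \frac{V_{\text{shunt}}}{R_{\text{assumed}}},\qquad
        5\,\mathrm{m\Omega} \parallel 5\,\mathrm{m\Omega} = 2.5\,\mathrm{m\Omega}
        \;\Rightarrow\; V_{\text{shunt}}\ \text{halves}
        \;\Rightarrow\; P_{\text{reported}} \approx \tfrac{1}{2}\,P_{\text{actual}}

      The boost algorithm then throttles against the halved reading, letting roughly twice the real power through before it intervenes.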

  • @Carfreak226 · 2 years ago

    If your card has dual BIOS and you're running on the lower-TDP BIOS, and have a frame rate limiter in place, wouldn't that mitigate these issues? So even if the card were to spike, it would (hopefully) still be under the higher-wattage BIOS's limit? EVGA 3090 FTW3 for reference.

    • @arthurmoore9488 · 2 years ago · +2

      Unfortunately, the answer is probably no. The reporting tools won't show you the transients, and they may be happening at a lower rate, but they are still happening. Remember, the power monitoring circuitry is before the power smoothing circuitry, which is before the VRM. So a sharp transient will be handled by the smoothing, but the VRM will still see it.
      Now, the good news: those power stages are rated for high transient load, so a lower frame rate means they might have time to recover. Also, since lower voltage equals lower power, undervolting the card can also help.

    • @Carfreak226 · 2 years ago

      @@arthurmoore9488 Appreciate the informative and detailed response, sir.

  • @mortenee88 · 2 years ago · +1

    I have seen New World mostly crashing on the map screen when I don't cap FPS. I play on a lot of different hardware; the latest card I tested that crashed a lot was a Vega 64, where I couldn't get this game stable unless I basically undervolted it or capped the FPS. It's got an Alphacool block on it, so it runs really cool and all, but it crashes no matter what with even a small OC. The card does 1700 MHz in Fire Strike quite consistently, but it wouldn't hold a 1600/1620 OC in New World... I had to step it down a lot.

  • @cdurkinz · 2 years ago · +1

    Just a heads up, updates no longer take hours to do. That’s not a thing anymore.

  • @Baoran · 2 years ago

    I was testing my Asus RTX 3090 in New World for a bit using GPU-Z, like in this video. It seems the card has a higher power limit: it only hits 100% TDP somewhere around 400 W. I have New World limited to 60 fps. When limited to 60 fps at 2560x1440, GPU load is around 40% and power usage is around 270 W. If I change to 5120x1440 with the 60 fps limit, the load is 70% and power is between 380 W and 390 W, with TDP at 95%. When I first ran New World without the 60 fps limit it was doing over 90 fps at 5120x1440, so after seeing those wattage numbers I don't want to find out what the wattage would be at full load.

  • @JohnDoe-sv5jc · 2 years ago

    I wonder if we can get some Nvidia-released software that blows through the power limits. Perhaps Minecraft RTX could get there? (Heaven from Unigine is listed as part of the Nvidia tech demos.)

  • @jacobs9391 · 2 years ago

    Does anyone here know what the 10700K's IMC limits are? I'm considering getting a Z590 Apex to play with some expensive RAM, but I'm not going to do that if my 10700K can't run RAM that fast anyway. So if I'm using a 10700K on a Z590 Apex, will I be able to hit 4000 MHz CL14 without being limited by the memory controller on my CPU?

  • @squibbly_mcgrink7689 · 2 years ago

    Is the 3080 generally susceptible to the same problem(s)?

  • @Bllfrnd · 2 years ago

    For my Gigabyte 3090 Gaming OC, both deaths occurred during GTA Online. The first one was in March; they repaired it, and now it has died again today.

  • @dreamcat4 · 2 years ago · +2

    So, getting to the point here: what you are basically saying is that maybe Nvidia should introduce a more general type of auto-downclocking feature in their future GPUs. You cited Intel's AVX downclocking as an example. However, maybe that is not so easy for Nvidia, because they don't have something simple like AVX instructions to look ahead for.
    But I get your meaning: do not go into such a restricted mode only when detecting the FurMark program, but instead do the downclocking based on the general load being requested. And not to totally gimp it, but just to reduce the clocks a bit further than usual, to ensure a wide enough margin of safety that the card stays well away from any chance of blowing up catastrophically...
    But after repeating that, it strikes me: shouldn't that already be incorporated within the existing firmware / software, as one of the inputs to the Nvidia GPU Boost algorithm, which is what dynamically controls the clocks?
    This goes back to the rumor that either a) Nvidia was to blame, in their driver, for not configuring GPU Boost correctly, or b) Nvidia thought their pre-existing GPU Boost settings were already safe, but then these new 3090s came along and maybe the actual hardware design and power delivery were different than expected. Or, alternatively (just like you mention), maybe there are some questionable components in the supply chain which nobody was aware of.
    The thing is, in retrospect it seems the companies involved did investigate internally (using New World as a test benchmark) and did eventually find the culprit, but this information was never shared with the public, perhaps because going public could force some very expensive product recalls. It's cheaper to fix it in software, push out a new driver update, and get the developer to patch their game. This would explain pretty adequately why we never really got the truth about it, because who wants to recall all those 3090s under current market conditions? It would be a pretty awful situation.
    Still, you have a great point about them dying eventually further down the line. None of these companies would give one cent to protect the product adequately into old age. They would rather they all blew up after 3-5 years, clear of warranty, so that everybody has to buy new cards all over again. What annoys me is that people want to do that anyhow, regardless of whether the old cards still work, because the future cards will be so much faster; there is always high enough demand for the latest product. So it seems mean in that respect: anybody who can realistically afford a brand new card next time will certainly buy one, and those who cannot are precisely the people who cannot afford to keep buying cards all the time. So it's a policy that penalizes only the poor, not the wealthy. That is really where I take issue. It is also pretty bad for the environment for so many expensive products to end up as e-waste. Sadface.
    😿

    • @petrofsko · 2 years ago

      Hi there, I live in Glasgow, Scotland, UK. I preordered a 3XS system from Scan UK on 18/09/21; they start build stage 3 on 8/11/2, and I have up until the day before build stage 3 to cancel. I need a new PC with a top GPU. Initially I was looking at a Scan custom-built PC with an EVGA 3080; I added up the price with a Ryzen 9 5900X and whatever else I needed, and it came to £2850. Then I saw the same kind of setup (CPU etc.) with an EVGA 3090 FTW3 ULTRA GAMING for £3099.98. However, I'm reading and watching videos saying the OCP on the EVGA 3090 and Gigabyte 3090 is not set right, hence their deaths? It's a right pain, as I'm still using an i7 930 @ 4 GHz with 12 GB RAM, an MSI RX 480 8GB, and a Gigabyte X58A-UD3R mobo I got from Overclockers UK in June 2010 for £1500. It still works, but I get BSODs in big Ubisoft open worlds. Although the RX 480 8GB only gives me 30 fps at max settings at 2K, it has lasted 5-6 years, replacing my R9 390 8GB; they are better-built cards, to be honest, which, looking at the situation, is absolutely disgusting and sad, since the EVGA 3090 costs £1500. So I'm seriously considering cancelling in the next few days and seeing how the 4000 series plays out, or looking at a Radeon 6900 XT or whatever.

  • @muaries12 · 2 years ago · +3

    Between JayzTwoCents' experiments in software and BZ's experiments in hardware, I learnt a lot about GPU power management.

    • @87Moonglow · 2 years ago · +6

      Hmm, BZ actually knows what he is doing. JayzTwoCents is just fun to watch, but I don't watch him for the technical know-how.

    • @tobydion3009 · 2 years ago

      @@87Moonglow Exactly.

    • @ij6708 · 2 years ago

      Jay is just for entertainment. Just recently he was using the Heaven DX11 benchmark to look for improvements from a memory OC.

  • @camelCased · 2 years ago · +2

    So, when/if I get my 3060, should I underclock it just to be sure? I'm gonna use it for experiments with neural networks and Unreal Engine, so I'm pretty confident I will accidentally write some clumsy code that uses 100% of the GPU. And also Blender rendering.

    • @jtnachos16 · 2 years ago · +2

      Realistically, the 3060 shouldn't be able to slam the limits that hard, as its overall capabilities are much lower. The issue going on here is that the VRMs are getting slammed with rapidly cycling transients that are outside their capabilities, based on the evidence available.
      The 3060 shouldn't be running quite so hard up against its own hardware limits as the 3090 does with regard to voltages and power.
      Someone can correct me if I'm wrong on that, but I doubt the 3060 is likely to see such issues from violating power limits, just because it has a more conservative power limit and the resulting headroom to begin with.
      If you are truly concerned, I'd start with underVOLTING, not underclocking. Undervolting lowers power consumption, assuming the card doesn't ignore the limits put on it as part of the undervolt, which would head off the issue of violating power limits. Undervolting also doesn't inherently hurt performance as much as underclocking does.

    • @threepe0 · 2 years ago

      @@jtnachos16 because the capabilities are lower, it shouldn't be able to try for capabilities that are higher than it's lower capabilities berf lorgic nurrrrr buuuhhhhhh

    • @jtnachos16 · 2 years ago · +1

      @@threepe0 Not sure what you are trying to do here, short of coming across as a dumbass.
      For the most part, because lower-end parts can't clock as high and have lower transistor counts, they ride the line less on relative power-stage overhead. It's why there's been a relatively consistent pattern of higher-end GPUs (such as top-end and/or late-revision Ti models) being more prone to power-stage issues.

    • @camelCased · 2 years ago

      @@jtnachos16 Thanks, sounds reasonable.
      There's just one problem: waiting until I can get a 3060 12GB for a normal price :D The 12 GB is really attractive for neural network experiments, and I'm not a heavy gamer; *60-series GPUs have always been enough for me.

    • @jtnachos16 · 2 years ago

      @@camelCased I'm on a 2060S at the moment. It still handles everything I've thrown at it @ 1080p without much issue, gaming-wise. Only occasionally do I need to turn down a setting to maintain 60 fps. Or at least it does now that it is in a decent case.
      Go figure that the moment I have the money to actually get a new card to go with my new build is the moment the prices skyrocket.

  • @chapstickbomber · 2 years ago · +8

    Makes Vega 64 peaking look tame.
    I run my Strix 3090 at 480 W for triple 4K, so I suspect my 1 ms peaks are like 700 W. Jesus.

    • @Netsuko · 2 years ago · +2

      At least you're somewhat lucky: the Strix cards seem to be some of the best and most sturdy of the bunch. So there's that.

  • @arjenmiedema8860 · 2 years ago · +3

    I can confirm that both my 980 Tis have died, whilst my gf's 970 is humming along just fine to this day. It is one of the lesser-considered aspects of GPU shopping that the higher power usage will, most of the time, result in the high-end options dying sooner.

    • @tessierrr · 2 years ago

      970 master race 🤣

    • @andersjjensen · 2 years ago · +1

      Only an Nvidia problem... their approach to power management really spells "we hate users who don't upgrade every generation anyway".

  • @jannegrey593 · 2 years ago · +2

    I was always scared to run FurMark. I only did it if someone requested it, and once for 6 hours on my HD 4870 1GB.
    And New World is allegedly even worse (you said spikes, which are worse than continuous current). I'm not even trying to buy this game in the midst of a GPU shortage.

  • @Zfast4y0u · 2 years ago

    FurMark is detected by the Nvidia driver and cards do throttle on it, because Nvidia doesn't want them to blow up. This was in place before the 3000 series rolled out; I can't remember which driver version exactly.

  • @Kerrathul · 2 years ago · +1

    Do you think Nvidia did this in response to pressure from the AMD Radeon 5700 XT, knowing that the 6xxx series might take the crown for fastest GPU?

    • @andersjjensen · 2 years ago · +3

      This is their usual MO with power delivery, and it has been for a long time. They're just running Ampere much closer to the red line (remember the day-one VBIOS update that limited boost clocks?), precisely because they realized too late in the game that going with Samsung 8nm instead of TSMC 7nm gave AMD too much of an in.

    • @astarothmarduk3720 · 2 years ago

      They use GDDR6X memory, which is power hungry, and maintain powerful RT and Tensor cores. Good intentions, but they crossed a red line. I would rather buy an RX 6900 XT, which shows what can be done with a 300 W TDP, or an RX 6800 for the best energy efficiency and performance/price ratio.

  • @jhontavarish4088 · 2 years ago

    My 3080 Strix with the power slider at 121% got up to 440 W sustained for a few seconds while running 3DMark.

  • @user-yc5fq9bv3u · 2 years ago

    Could it basically be solved with more capacitors? How much more capacitance would be enough?

    • @stanimir4197 · 2 years ago

      More capacitors = even worse. If you mean output ones: capacitors are effectively a short until their voltage rises, so even higher transients. Input capacitors (and cleaner input) are also worse, as the power limiting via the 12 V shunts doesn't do anything for transients.

    • @user-yc5fq9bv3u · 2 years ago

      @@stanimir4197 Are you saying that these power requirements can't be satisfied?
      A capacitor does have very low resistance, but if there is enough capacitance, the voltage sag will be comparable to the drop across its internal resistance.

    • @stanimir4197 · 2 years ago · +1

      @@user-yc5fq9bv3u Adding more capacitors causes a high inrush current to charge them; that current depends on their ESR (and ESL). The peak current is what likely kills the power stages, and adding more of it won't help.

    • @user-yc5fq9bv3u · 2 years ago

      @@stanimir4197 Are you talking specifically about startup?

  • @jdoggsgarage4494 · 2 years ago · +1

    So with all of this being said, why don't these cards blow up left and right with the 500 and 1000 W BIOSes? Using the 1000 W BIOS, my FTW3 would pull down 600+ W in Port Royal on water cooling.

    • @guycxz · 2 years ago · +2

      The card's power monitoring measures an average power draw over a period of time. If a spike occurs that is shorter than that period, the power monitoring will only catch it after the fact, and it will be averaged with the power draw over the rest of the period being measured.
      So when the GPU gets hit with a workload that uses all of it, it will try to draw as much power as it needs, limited by the power limit of the card. If the load occurs for a short enough period of time, the card's power monitoring will not catch it until after the fact and will not limit it.
      Additionally, the power spikes in this video may actually be higher than presented, and New World should theoretically induce ones that are higher still.
      Edit: A higher-clocked GPU may spike even higher, though considering that the power-modded cards probably have better cooling, the VRM may suffer as much from it, or perhaps less.

    • @jdoggsgarage4494 · 2 years ago

      @@guycxz I fully understand that, but with a reported power draw of 300 W, do you really think there are transients when playing a game that are higher than what we see when benchmarking with a 1000 W BIOS and seeing reported power draw in the 600 W range?

    • @guycxz · 2 years ago

      @@jdoggsgarage4494 There may be. If the card doesn't downclock and gets nearly fully utilized, the power draw could potentially be huge; so much so that some cards actually tripped OCP before dying. If those cards had OCP set to values similar to those outlined in the previous video, you could potentially see a current spike of 1000 A. Depending on the voltage across the core, we could theoretically see a 1000 W power surge that would be presented as much lower by the power monitoring. If we measure 200 times per second, i.e. every 5 ms, and draw 1000 W for 1 ms then 350 W for 4 ms, the average will be 480 W, and that is what will be presented (sketched in code below).
      This all also depends on whether there is OCL and how it's set up, and on the way the power stages are monitored and balanced.
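
      A minimal sketch of that windowed-average arithmetic (the 5 ms window and the wattage figures come from the comment above and are illustrative, not measured):

        # How a reporting window hides a transient: a 1 ms 1000 W spike plus
        # 4 ms at 350 W is reported as a single 480 W sample.

        def window_average(samples):
            """Average a list of (duration_ms, watts) pairs over their total time."""
            total_ms = sum(d for d, _ in samples)
            energy = sum(d * w for d, w in samples)  # watt-milliseconds
            return energy / total_ms

        window = [(1, 1000.0), (4, 350.0)]  # 1 ms spike, then 4 ms of normal load
        print(f"reported: {window_average(window):.0f} W")  # -> 480 W
        print("peak the VRM actually saw: 1000 W")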

    • @ActuallyHardcoreOverclocking · 2 years ago · +1

      @@jdoggsgarage4494 There's also manufacturing variance at play.

  • @dinxsy8069 · 2 years ago · +10

    Have Nvidia or the board partners addressed this issue? A top-tier card crapping out in this day and age is ridiculous.

    • @vyor8837 · 2 years ago

      They have not.

    • @dinxsy8069 · 2 years ago

      @@vyor8837 Typical behaviour: pass the issues on to the end user with no accountability. I'm glad that I'm not in any position to buy a 3090.

    • @vyor8837 · 2 years ago

      @@dinxsy8069 Meanwhile, Nvidia shills are blaming New World and not Nvidia. Because of course they are.

    • @dinxsy8069 · 2 years ago

      @@vyor8837 When I heard them blaming New World I did have a 'huh' moment. People who believe that are delusional 🤣 a game that "hacks" Nvidia software/components?

  • @puddingsbane3110 · 2 years ago

    Damn, this is the earliest I've ever been on a video.

  • @VargVikernes1488 · 2 years ago · +2

    So does that mean that the fact that New World is badly optimized for AMD GPUs actually saves them from going up in flames? Because I'm pretty sure RDNA2 has the same transient power spikes, especially the always power-hungry 6900 XT. Or does AMD manage power delivery more conservatively?
    Also, do you think it's safe to unlock the power limit through MorePowerTool to something like 360-370 W on an adequately cooled 6900 XT?

    • @SolarianStrike · 2 years ago · +7

      The AMD reference 6900 XT actually uses a 13-phase 70A vcore VRM with TDA21472 power stages. That VRM is almost powerful enough to power a 3080 Ti / 3090.
      Cards like the Nitro+ are just reference-spec PCBs with extra fuses and RGB added. As long as you can keep the VRM cool, you should be fine.
      Also, the thing about Navi 21: it is just a much leaner GPU compared to the 3090.

    • @r3drumg33k3 · 2 years ago · +1

      IDK about safe... lol. But I have drawn over 575 watts on air with my 6900 XT OCF.

    • @SolarianStrike · 2 years ago · +1

      @@r3drumg33k3 575 W on the core alone?

    • @ActuallyHardcoreOverclocking · 2 years ago · +5

      AMD cards don't pull as much power and use somewhat better power delivery components.

    • @andersjjensen · 2 years ago · +2

      @@ActuallyHardcoreOverclocking It would be nice if you could walk us through (from connector to VCORE output) how AMD does things. I know your 6900 XT cracked the die, but if you still have it you can still measure configuration resistors and the like.

  • @kotekzot · 2 years ago · +2

    You'd think NVIDIA would want to make their flagship as reliable as possible, but I guess not. Maybe they've realized the sort of people who buy 3090s for gaming are going to keep buying them regardless, and making the cards fail early just means more sales.

    • @MarshallSambell · 2 years ago · +3

      The flagship cards have always had the highest failure rates, for as long as Nvidia has been making flagships. It's simply because they are pushing the architecture to its limit, with more points of failure.

    • @kotekzot · 2 years ago

      @@MarshallSambell Is it the architecture or the underspecced power delivery that's causing the failures?

  • @drsoraka5632 · 6 months ago

    Hi, I had the same issue. I had a 3090 OC Trinity, but the core ran at 990 MHz in FurMark.

  • @konga382 · 2 years ago

    How applicable is this to the 3080 Ti? Since it's mostly the same as the 3090 but with half as much VRAM, does all of this still apply? It's crazy to me that most third-party 3080 Tis have a 400 W board limit by default, when you're concerned about a 3090 with double the GDDR6X drawing over 350 W. It makes me scared of what's going to happen to my 3080 Ti if I happen to hit these conditions.

  • @markearl7172 · 2 years ago

    Is this GPU 50C at idle?

  • @grempal · 2 years ago · +4

    I always thought that furmark looked more like a furry eyeball than a furry donut. Enjoy that nightmare fuel.

  • @kilroy987 · 2 years ago

    If people have rendering bandwidth to work with, they'll use the game settings to keep upping the resolution, detail and framerate to get the best experience. People will naturally try to drive their GPU toward 100% usage.

  • @Safetytrousers · 2 years ago

    When I had my 2080 Ti FE overclocked to mine and game with, I upped the power limit to max and the TDP reading was often at 123%. I regarded that as the raised limit working. I now run that GPU at 90% power (and play New World with it, no crashes) and it runs as fine as ever.

    • @smokeyninja9920 · 2 years ago

      For mining, core clock tends to make little difference, so the normal mining strategy is: minimum power limit, reduce core clock, raise memory clock.
      I can lower my power consumption to 50% at a ~15% hit to hash rate, which is a 70% increase in profitability.

    • @Safetytrousers · 2 years ago

      @@smokeyninja9920 I pay a fixed amount for my electricity every month, so exactly how much I'm using makes no difference to my mining profits. I try to use as little electricity as possible, so I have lowered the power limit on all my GPUs to the lowest it can go without reducing mining performance.

    • @astarothmarduk3720 · 2 years ago

      @@Safetytrousers You should think about the environment we all need for survival. We still do not have 100% renewable energy, nor enough chips to support gambling on cryptocurrency at a global scale. The principle "the person works for the money, not the machine" should hold. I know it is less convenient to earn money in person, but we all have to reduce resource and energy consumption, and simply not mining is the easiest thing to do.

    • @Safetytrousers · 2 years ago

      @@astarothmarduk3720 I haven't travelled by plane since 1989, I don't own a car, I don't eat meat. I recycle everything I can.
      Giving up making a living in exchange for doing no work is not easy at all.

  • @transparentblue · 2 years ago

    5:44 Two different definitions: "optimization" could be defined as "how much of the silicon is the program actually using" or as "how much render time does the program need per rendered frame". In New World's case it's optimized if you go by the first definition, whereas I've seen a few claims by devs that it does needlessly complicated things on the rendering side, meaning it would be unoptimized by the second definition.
    tl;dr: utilization vs efficiency

  • @Todd_Manus · 2 years ago · +1

    You keep mentioning that no one will use all the "transistors" on an RTX 3090, and you keep going back to games, but 3D rendering actually hammers a GPU just as much as FurMark... Take this for what it's worth, but I imagine most people who buy the RTX 3090 are not gaming; the return on investment over the RTX 3080 is minimal.

  • @No-One.321 · 2 years ago

    Wait, so the 3090 only runs at 1200 MHz while using 350 W in FurMark? Is this normal behavior? Have we seen this in, let's say, the last 5 years in anything else from AMD or Nvidia?

    • @benjaminoechsli1941 · 2 years ago

      Pretty sure that was an intentional kneecapping put into the Nvidia drivers a couple of years back so the cards can't melt themselves when running FurMark, specifically.
      It's like Nvidia realizes that their cards can't be left to their own devices...

  • @PuscH311 · 2 years ago

    You said don't buy New World if you use a 3090?

  • @djwhu77 · 2 years ago · +1

    You sound like a super smart Kermit the frog 😎

  • @thechurchofsupersampling

    The Palit 3080 Ti GameRock has a much better VRM, right?