Why modern GPUs are failing | Explaining high-end graphics card failures | How to prevent them!

  • Uploaded 23 Jul 2024
  • #graphicscard #pcgaming #amd #nvidia #geforce #radeon
    It's no secret that graphics cards, whether from AMD or from Nvidia, tend to fail over time, but on current high-end cards it seems to be more frequent than usual. In this video I go into detail on why this could be the case and how to avoid these problems.
    Timestamps:
    00:00 Intro
    00:40 What GPUs are affected
    01:00 Causes of failure 1: Power consumption
    01:50 Causes of failure 2: Heat
    03:30 Common defects (GPU degradation)
    05:00 Common defects 2: (Interposer connect / BGA failures)
    07:30 Fixes and solutions
    08:40 How to avoid failures
    Business inquiries: wtl@eschos.gr
  • Science & Technology

Comments • 134

  • @sstainlesst
    @sstainlesst 16 days ago +35

    Poor manufacturing and design... and greedy companies. Sony just took 8K off the packaging for the PS5, which can barely do 4K with upscaling. Motherboard manufacturers have temperature and overvoltage protection on some components and the CPU to prevent burnout... not the shoddy video cards, because nobody can break the Nvidia monopoly!

    • @limitless-hardware1866
      @limitless-hardware1866  16 days ago +7

      Absolutely with you on that. I think the consoles are going to suffer most because of poor maintenance, especially with the liquid metal only supposed to last 5 years or so. But it could also be a good thing for repair businesses, I guess.
      In theory the lower-end cards shouldn't have these problems as much as the high-end ones, as long as the designs are SOMEWHAT decent.

    • @Hantu4686
      @Hantu4686 14 days ago

      Nvidia monopoly 😂 blud, stating something when the market now prefers AMD cards over Nvidia for value per performance.
      😂😂

    • @tourmaline07
      @tourmaline07 13 days ago

      I think some corners were cut with some Ampere PCBs to make margins, and a few were dodgy. Consoles have always been marginal on cooling - the Xbox's rings of death spring to mind - so it wouldn't surprise me if this gen also had issues.

    • @mrbabyhugh
      @mrbabyhugh 12 days ago

      Overclocking is the main issue, that very stupid thing. Overclocking was never meant to be permanent, but the majority run an overclock every single second the machine is on.

  • @justinpatterson5291
    @justinpatterson5291 15 days ago +9

    My broseph thought my PC's default was to be as loud as his PS4... He didn't realize I have a choice about making noise. His HAS TO, otherwise it fries itself.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +3

      If the cooling setup is configured correctly, most PCs can be pretty quiet, especially with modern hardware!

    • @justinpatterson5291
      @justinpatterson5291 15 days ago +2

      @@limitless-hardware1866 I know. I just like seeing lower numbers on the temps.

  • @lawrenegummy4736
    @lawrenegummy4736 15 days ago +10

    My 1080 is a tank. Still going strong

    • @ea168
      @ea168 15 days ago +1

      So is my 70. Let's see what the lower-end 50-series cards will offer.

    • @bjarne431
      @bjarne431 15 days ago +1

      So is my GTX 1060 6GB, and I expect my kids will inherit it lol (temps are in the low 60s as far as I remember, and noise is low)

    • @marioloncar2169
      @marioloncar2169 14 days ago

      My Fury X rocks at 45 °C under full load 😉

    • @alexcardosa8079
      @alexcardosa8079 13 days ago

      The 1080 and 1080 Ti were always tanks. It's crazy to think how good they were for their time, and how good they still are.

  • @albertcamus6611
    @albertcamus6611 10 days ago

    Thanks for the info. I have a question about the ASUS ProArt series: they use low-profile coolers, so I assume they would be affected more, right?

  • @inou2222
    @inou2222 15 days ago +3

    Something that wasn't mentioned: bigger cards sag in the PCIe slot when mounted horizontally, which cracks the card's PCB and is super hard to repair.

  • @trailduster6bt
    @trailduster6bt 15 days ago +3

    So is it mainly temperature that’s the issue or temperature and wattage? My 2080ti spikes above 250w in certain games, but almost always is below 60C under full load, and idles around 30C. Is that pretty safe or should I adjust the fans to allow it to idle at a higher temp in order to reduce temperature fluctuation? Is higher wattage always worse for silicon degradation even if temperatures are kept relatively low (below 80C)?

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +2

      It's a combination of both, or one or the other. Silicon degradation happens mostly with high power draw through the GPU, via electromigration. Temperatures accelerate that slightly. What high temps mostly cause are cracks in solder joints, especially when the difference between idle and load is really high, like 30 and 85 degrees or even more.
      In addition to that, very heavy cards can sometimes also be a factor, as the PCB bends, which can lead to cracked joints too.

    • @MrDecessus
      @MrDecessus 14 days ago +1

      Temps, yes, but also physics. Materials will have to change to get more performance, or liquid cooling will need to become the norm.

    • @trailduster6bt
      @trailduster6bt 14 days ago

      @@limitless-hardware1866 Thank you for the added detail. That makes a lot of sense.
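The thermal-cycling mechanism discussed in this thread (big idle-to-load temperature swings cracking solder joints) follows a Coffin-Manson style scaling rule, where cycles-to-failure fall roughly with a power of the temperature swing. A minimal sketch; the exponent and reference swing below are illustrative assumptions, not measured values:

```python
def relative_solder_life(delta_t, delta_t_ref=30.0, exponent=2.0):
    """Coffin-Manson style scaling: cycles-to-failure fall roughly with
    (delta_T) ** -exponent. Returns expected joint life relative to a
    reference temperature swing of delta_t_ref degrees.
    Both constants are illustrative, not from the video."""
    return (delta_t_ref / delta_t) ** exponent

# A card swinging 30 -> 85 C (delta 55) vs one swinging 30 -> 60 C (delta 30):
print(round(relative_solder_life(55), 2))  # -> 0.3
```

Under these assumed constants, nearly doubling the temperature swing cuts the expected joint life to roughly a third, which is why a flatter idle-to-load delta matters more than the absolute peak.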

  • @rabbi619
    @rabbi619 11 days ago

    Can you please say if I should game in turbo mode, silent mode, or performance mode? My laptop has two GPUs, a 4050 and a 780. Also, please say when I should update drivers. I stopped updating because I got scared: once, a white screen appeared while updating the 4050, and after closing it, it said the update was complete; another time a black screen appeared during the 780 update and I had to turn the laptop off and redo the update. So I'm also scared to do the new updates.

  • @LiveType
    @LiveType 14 days ago +3

    Nice summary of the failure modes for GPUs!
    I didn't see a mention of physical damage due to a warped PCB from how heavy the cards are. This causes solder joints to crack and accelerates solder wear. Unleaded solder is particularly prone to this, as it's considerably more brittle than leaded. Can't 100% prove this, but it does seem to be a thing, going by repair channels and what repairs typically get done. It's always the bottom part of the core and memory that have corrosion/ripped pads.

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago +1

      You are correct, I did not mention physical damage, but I think I may do a video about that, in combination with building a GPU support bracket that can be 3D printed.

  • @Koeras16
    @Koeras16 15 days ago +4

    Usually cards tend to fail due to user error, a dying GPU, or indeed prolonged overheating of some of the components on the card, especially when the card manufacturer decides to cheap out on some of them (a "GPU" is much more than just the core).
    Modern cards are pretty smart and will run within specification at all times. I haven't seen any 40-series or RDNA 2/3 card hit junction temperature at stock.
    Like with every product, there is always a failure rate expressed in percent. I highly doubt it is higher these days than before, though. The 40 series had some major (though rare) issues with its new 12-pin power connector (called 12VHPWR).

    • @vladvah77
      @vladvah77 15 days ago +2

      It's not THAT rare a failure, sadly...

    • @Koeras16
      @Koeras16 15 days ago

      @@vladvah77 I have no data, so consequently I can't really comment.
      It might not be that rare... Nvidia might not care much about the GPU division and might not mind a higher-than-normal failure rate, even though it costs them a lot.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago

      This is completely true, although not really user error in that sense, as most users just don't know any better; even if they clean their card, they never change the thermal paste, because they think "it's OK".

    • @Koeras16
      @Koeras16 15 days ago

      @@limitless-hardware1866 Well, user error in the broad sense.
      Here's an example:
      A few years back I was a happy owner of a GTX 770 4GB; that card did tend to get quite hot. Throughout the first years of its lifespan I didn't know much about hardware, and the card worked like a charm until I got into benchmarking and taking note of different metrics.
      As I saw how hot the card got, especially the VRAM, I decided to change the pads and thermal paste. Changing the thermal paste worked like a charm; sadly, I probably messed up with the pads I got (they were probably too thick, even though they shouldn't have been).
      I broke or damaged the VRAM.
      User error can also just be the user's eager willingness to fix what isn't broken.
      Another example would be the case of the 12VHPWR connector, where people seemed not to clip it correctly into the connector.
      Some long-term hardware failure may of course be caused by negligence, because the user doesn't know better. In my personal experience that seems to be the case only in laptops, though; dry thermal paste and clogged-up fans do kill hardware. How often does that happen on a desktop, though (other than after 5-10 years of frequent use)?

    • @someperson1829
      @someperson1829 6 days ago

      You would think that cards are smart, and then you have something like "Silent gaming", which starts the fans only when the temp hits 65 °C or so, which is crazy. The fans are pretty "silent" under 60% of max speed anyway, so why the need to "silence" them? The first thing I did was make my own fan curve, UV + OC. Now my temps are 56-58 °C under full load. But I could cook an omelette on my card with the stock technology called "Silent gaming". So how many of these "smart" technologies are there? lol

  • @matthewxracer
    @matthewxracer 15 days ago +5

    Do not undervolt a 4090. Lower the power limit instead. The 40 series behaves very differently from previous generations: when they run at lower voltages, they will lose performance even with a higher core clock. Lowering the power limit will give you the same effect of reducing power but is much easier to do. The power limit also has the benefit of setting a max power draw, whereas an undervolt can still draw 450-600 W under the right workload. See the Optimum video about undervolting a 4090 for benchmarks.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +2

      I have tested it; it depends heavily. Obviously you have to look at the performance to see how much it loses, but I have tried undervolting a 4090, or rather lowering the power limit and then increasing clocks, and it was still able to achieve 90 percent of its performance at 350 watts in power-heavy games and benchmarks.

    • @someperson1829
      @someperson1829 6 days ago

      I have a 4080S. I did UV (950 mV) + OC (core: +270 MHz, VRAM: +1500 MHz) - it has the same performance as stock settings; I checked the difference in game with RivaTuner. The power draw under full load is 220-230 W and the temp 56-58 °C. So no, I heavily recommend UV + OC. Idk, maybe it's just something wrong with the 4090.
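The power-limit approach debated in this thread can be scripted on GeForce cards via `nvidia-smi`. A minimal sketch, assuming an NVIDIA driver with `nvidia-smi` on the PATH; the 350 W target is just the example value from the discussion above, and actually applying it needs admin rights:

```python
import subprocess

def power_limit_command(watts, gpu_index=0):
    """Build the nvidia-smi call that caps board power draw.
    `-pl` (set power limit) and `-i` (select GPU) are real nvidia-smi
    flags; the wattage must lie within the board's allowed range."""
    return ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)]

cmd = power_limit_command(350)
print(" ".join(cmd))  # -> nvidia-smi -i 0 -pl 350
# Requires admin rights and NVIDIA hardware; uncomment to actually apply:
# subprocess.run(cmd, check=True)
```

The helper only builds the command list, so it can be tested without touching real hardware; note that a power limit set this way does not persist across reboots unless reapplied.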

  • @xtheory
    @xtheory 14 days ago +2

    Let's also not fail to mention that certain manufacturers (looking at you, Gigabyte) do not put any fuses on their GPU boards, which allows the most sensitive components on the board to burn to a crisp if there are voltage anomalies.

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      True, but I don't know if that happens as often. Most of the time those fuses aren't doing anything for users, as most defective cards just get sent in for warranty or thrown away.
      The thing is, fuses would save valuable resources, as repair in general would be easier and more cost-effective.

  • @snakeplissken1754
    @snakeplissken1754 16 days ago +2

    Any info on whether the problem also includes laptop GPUs? (I mean, their power limit is far lower.)

    • @limitless-hardware1866
      @limitless-hardware1866  16 days ago +2

      Power-wise, laptop GPUs are pretty good, but I'm not sure how well they are cooled, especially the higher-end cards. If the laptop is set up well and the card stays below an 85 °C core average, that should be OK, but a laptop 4090 might be a different story. ALTHOUGH laptop components in general seem to have more issues, so I'm thinking the GPU might not be the biggest issue in most cases.

    • @snakeplissken1754
      @snakeplissken1754 16 days ago

      @@limitless-hardware1866 My RTX 4080 laptop is kept pretty cool even under load. Seems that part is well designed on the Scar 17.

    • @snakeplissken1754
      @snakeplissken1754 16 days ago

      @@limitless-hardware1866 The GPU doesn't even reach 70 so far. Usually during gaming sessions it's between 56-65, depending on load.

    • @halycon404
      @halycon404 15 days ago

      Laptop manufacturers historically cheap out on everything, even high-end laptops. There's a floor for how large a company needs to be to get into laptop manufacturing, so it's pretty much medium to large corporations only - the companies that will do things to save 5 cents on a 1000-2000 dollar bill of components and compromise the entire thing. Bad screws, weak hinges, off-brand caps, undersized coolers to shave a few grams off expensive copper. Simple things like not enough glue to hold a bezel in place, because an extra dollop is a cent saved. The works.

    • @snakeplissken1754
      @snakeplissken1754 15 days ago

      @@halycon404 I certainly would agree that there are costs cut. But in general, the last laptops I've had over the years all ended up being reliable, and the only thing I did was clean them out every now and then and, while at it, add some upgrades here and there.
      One of the "worst" cost cuts I've had was the HP Pavilion I own having a pretty useless microSD card slot that is limited to 25 MB/s... which, well, is slow as eff. But hey, if THAT is the bad thing they cheap out on, then fine; I ordered an external one and that worked fine.
      The only laptop that ever broke on me was a 280-buck Asus laptop with that Intel Atom quad-core CPU... heck, that thing was slow, but it did its job until it decided to no longer power up.

  • @natsu78999
    @natsu78999 15 days ago +3

    This is really scary... a 4K IPS monitor + 4090 + 13700KF easily pulls at least 700 W when gaming. At this point, not only the GPU but the whole PC might be dying.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +3

      True that! BUT it's important to look at the power draw of each component. A 4070, for example, only pulls like 220 watts, which is pretty efficient.

  • @JimmiG84
    @JimmiG84 15 days ago +3

    I always adjust the fan curve on my graphics cards. I find the defaults have become too conservative. In the past, GPU fans would sound like a jet taking off, but these days they're very quiet. This is nice but results in very high temperatures. I'd rather deal with a bit more noise if it means my GPU lasts a few more years. Plus, it results in higher boost speeds even without overclocking.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +4

      Totally understand you, but compared to about ten years ago, temperatures today are pretty low. Looking back at 2014, for example, an R9 290X was designed to run at 95 °C under load at all times.
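A custom fan curve like the one described in this thread is just a mapping from temperature to fan duty cycle, usually linearly interpolated between a few anchor points. A minimal sketch; the curve points below are illustrative assumptions, not values from the video (tools like MSI Afterburner or FanControl let you set something similar):

```python
def fan_speed(temp_c, curve=((40, 30), (60, 50), (75, 80), (85, 100))):
    """Return fan duty (%) for a core temperature, by linear
    interpolation over (temperature C, duty %) anchor points.
    The default curve is illustrative only."""
    if temp_c <= curve[0][0]:
        return curve[0][1]          # below the first point: minimum duty
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:            # interpolate inside this segment
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]             # above the last point: full speed

print(fan_speed(70))  # -> 70.0 (ramping between the 60 C and 75 C points)
```

A more aggressive curve simply moves the anchor points left, trading noise for a lower steady-state temperature, which is exactly the trade-off the comment describes.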

  • @oswaldjh
    @oswaldjh 13 days ago +1

    My method of GPU preservation is to limit the FPS to 120 and lower the voltage.
    That is a perfect FPS for my games, and the GPU sits at only 60% to 80% utilization.

    • @limitless-hardware1866
      @limitless-hardware1866  13 days ago

      Correct, that's one way to do it. The other would be to reduce the power limit while increasing the clock offset a bit.
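The FPS cap described above keeps the GPU from rendering flat out: the loop sleeps whatever time is left in each frame budget. In practice you would use the in-game or driver limiter; this sketch just illustrates the mechanism (the 120 fps target is the commenter's value, everything else is an assumption):

```python
import time

def run_capped(frames, target_fps=120):
    """Run `frames` iterations, sleeping so the loop never exceeds
    target_fps. Illustrative only - real frame caps live in the game
    engine or GPU driver, not in Python."""
    frame_time = 1.0 / target_fps
    deadline = time.perf_counter()
    for _ in range(frames):
        # ... a real renderer would draw the frame here ...
        deadline += frame_time
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)   # idle time = power the GPU never burns

run_capped(12, target_fps=120)      # 12 frames at 120 fps takes ~0.1 s
```

Advancing a fixed deadline instead of sleeping a fixed amount keeps the average rate at the target even when individual frames run long.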

  • @NulJern
    @NulJern 13 days ago +2

    Got an XTX Nitro+ and I don't want to talk about power consumption lol.

    • @tourmaline07
      @tourmaline07 13 days ago

      Those cards can't get enough power - if I had one, I'd be throwing 500 W into it 😂, which my case can't deal with, so I got a 4080 Super instead, which I've pushed 370 W into.

  • @MP_7
    @MP_7 13 days ago +1

    Still on a 1080 Ti, water-cooled from day 1, and she's happy :)

  • @raylopez99
    @raylopez99 14 days ago +2

    Wow I didn't know GPUs had 200 temp sensors. For that reason alone I will subscribe.

  • @pranavraval5282
    @pranavraval5282 17 days ago +7

    Oh, so my 3070 Ti and 1080 Ti are chilling for now

    • @limitless-hardware1866
      @limitless-hardware1866  17 days ago +4

      I'd say the 3070 Ti is at less risk than the 1080 Ti due to its lower power consumption. Depending on the cooler, the 1080 Ti could benefit from lower temperatures, as stock coolers in that generation were kind of weak sauce, but at least they had a reasonable temperature limit.

    • @pranavraval5282
      @pranavraval5282 17 days ago

      @@limitless-hardware1866 Honestly, I've only ever gamed at 1080p, and maybe let the 3070 Ti push frames up to 360 for my 360 Hz monitor. But with the 1080 Ti being a blower-style card, I have to adjust the fan curve so it won't start dying so soon, and lock it to 144 FPS in most of my 1080p games. Usually I find that as long as temps don't exceed 65, it hasn't degraded in performance or started the dreaded coil whine yet.

    • @Apalis
      @Apalis 16 days ago +2

      @@pranavraval5282 What 1080 Ti card do you have, is it FE? Regardless, the 1080 Ti is the greatest card; sadly there are no more 600 USD top-of-the-line GPUs. Here I am in Sweden paying 400-450 USD for a 7600 XT, smh.

    • @pranavraval5282
      @pranavraval5282 16 days ago +1

      @@Apalis Dang. Luckily, with the UK market, used GPUs feel a lot cheaper most of the time; I got my 1080 Ti for around 150 a few years back. It isn't FE, it's the MSI blower style. Too plasticky for my taste, but honestly it does the job great in the case it's stuck in.

    • @dark_cobalt
      @dark_cobalt 12 days ago

      @@limitless-hardware1866 My 3070 Ti is drawing 350-380 watts lol

  • @TitelSinistrel
    @TitelSinistrel 15 days ago +15

    I don't think enough people are talking about the HUGE jump in power from the 2000 to the 4000 series; it literally doubled. Performance has stayed pretty linear when you adjust for power: a 250 W 40-series card performs very similarly to the top-of-the-line 250 W 2080 Ti or 3080. And of course, double the power needs double the cooling and mass.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +2

      This is true, as I elaborated in the video. Electromigration because of high power draw is a massive problem; although voltages dropped, due to the high number of cores the amperages drawn are ridiculous a lot of the time.
      BUT! If you undervolt a 4090 to like 300-350 watts, it can still provide 80-90 percent of the performance. Sooo, in theory, when tweaking the card yourself you can gain a LOT from it!

    • @mikewunderkind6795
      @mikewunderkind6795 15 days ago +5

      Just not true. I owned a 3080 and now a 4080 Super.
      The 3080 pulled about 360 watts.
      My 4080 Super pulls about 310 and is about 40 percent faster across the board.
      It's at least 50 percent more efficient watt for watt.
      Although some might find the gains disappointing, they do add up over the generations.

    • @jurivjerdha2467
      @jurivjerdha2467 14 days ago +4

      This makes absolutely no sense. An RTX 4070 will outperform a 3070 Ti by 20% while using like 130 fewer watts.

    • @arenzricodexd4409
      @arenzricodexd4409 14 days ago +1

      That's like saying a 2080 Ti has similar performance to a 4070 Ti; both cards consume around the same power.

    • @tourmaline07
      @tourmaline07 13 days ago +1

      I'd say that's true for Ampere but not Lovelace; a 4070 Ti Super crushes a 2080 Ti at similar power consumption.

  • @petermoesker2977
    @petermoesker2977 14 days ago +2

    They went from leaded solder to unleaded; this is the main problem.

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      When did that transition happen?

    • @petermoesker2977
      @petermoesker2977 14 days ago +1

      July 2006

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      Good to know. Yeah, back then the 8800 series was released, and that series started to have massive issues, although those were also the first cards with relatively high power draw.

  • @amaeyparadkar9632
    @amaeyparadkar9632 15 days ago +7

    Use AMD. Break the Nvidia monopoly.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +4

      AMD right now has better value in most cases too. It's strange that Nvidia still has such a big market share in comparison.

    • @amaeyparadkar9632
      @amaeyparadkar9632 15 days ago +1

      @@limitless-hardware1866 I think it boils down to driver support and optimizations. I am not a gamer; my rendering software is optimized only for the Nvidia platform. Also, history is the witness: AMD hasn't been very supportive with long-lasting driver updates.

    • @takik.2220
      @takik.2220 14 days ago

      Or used Nvidia, your choice

    • @danp9551
      @danp9551 14 days ago +1

      Intel

  • @l.i.archer5379
    @l.i.archer5379 14 days ago +1

    These issues with the 4080 and 4090 are why I went with a 4070 Ti Super.

  • @EmmanuelEmmanuel-zo8sr
    @EmmanuelEmmanuel-zo8sr 15 days ago +3

    It's Aliens...

  • @Deboo-oz2rb
    @Deboo-oz2rb 9 days ago

    Imagine spending 2k on a gpu, and it fails on you. Man, I would crash out

  • @electricfire7
    @electricfire7 14 days ago +1

    Would repasting a GPU help?

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      Anything that reduces temperatures will help. And because manufacturers often use low-quality thermal paste that isn't aimed at a long service life, it's often a good idea.

  • @kdzvocalcovers3516
    @kdzvocalcovers3516 15 days ago +3

    Wow, great vid. Hopefully my 4070 Ti Super is OK... the specs seem to indicate it is power-efficient compared to a 3090, or a 3080 for that matter. My 3070 Ti likes to suck power too. The price of performance is way too high.

  • @paulcrocker7347
    @paulcrocker7347 11 days ago

    I'm seeing more and more YouTubers using these mics in their hand. Is there a genuine reason, or is it just a new image/fad? Legit question! Just curious!

  • @mr.h5566
    @mr.h5566 14 days ago

    That's why I set the power limit of my 4080 from 350 to 250 W. I lost about 5% performance, but the card runs so much cooler and quieter. And I don't have to worry about the burning-cable thing.

  • @tourmaline07
    @tourmaline07 13 days ago

    The electromigration angle I've not come across before. With newer 30/40-series cards I'm a little skeptical - I've not heard much about the cores on these dying outright. If anything, it's the PCBs getting warped and damaged solder balls/traces. On the CPU side, I did think it was a bit much sending VIDs of 1400 mV down a 10 nm process on a 13900K when we did similar with the old Core 2 Quads on 65 nm. But those seem to be holding out, as do Ryzens on 7 nm and Intels on 14 nm. The newest Ryzens are designed for very high operating temps too. So I'm not sure electromigration is going to be a massive problem for those who don't send 900 W down their 4090s (I have seen a video here of someone doing exactly that, and the card dies pretty quickly 😂).
    I do, however, have a very relevant story of a 2080 Ti dying recently, almost exactly 2 years after I bought it second hand.
    The reason was that the memory controller basically cooked itself, with a part of the core near the edge not being cooled enough - but not near the sensors, so it wouldn't have shown in the hotspot reading. I got it second hand towards the end of the Ethereum mining boom, so the previous owner may have mined with it.
    The other thing I noticed is that the default fan curve was extremely conservative and had the hotspot running at 90+ °C (in hindsight I should have repasted it, but it didn't seem too much higher than the core temp). I did overclock and max out the power budget on this card, as I needed the performance. But I imagine this card was hammered with little cooling when mined on - there wouldn't be a fan profile for max memory load and min core load.

  • @Z4d0k
    @Z4d0k 13 days ago

    Most of the performance gains going from a 2080 to a 3080 were from the increased power usage. Thankfully, the 40 series is noticeably more efficient than the 30 series.
    On my 4080 I just set my power limit to 90%, increased the core clock by 80 MHz and the RAM by 800, and it's been brilliant. It uses less than 300 watts under full load and performs better than stock while staying nice and cool.

    • @someperson1829
      @someperson1829 6 days ago

      I have a 4080S. I did UV (950 mV) + OC (core: +270 MHz, VRAM: +1500 MHz) - it has the same performance as stock settings; I checked the difference in game with RivaTuner. The power draw under full load is 220-230 W and the temp 56-58 °C. The Super and non-Super are pretty much the same anyway, so I think you can tweak your card better. Try my settings, via the curve, not via power limiting in %.

  • @Aanonymous88
    @Aanonymous88 14 days ago +2

    What do you expect when they charge ridiculous prices? People will go broke and be full of regrets.

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      Well, the high prices nowadays are just unreasonable. 8-10 years ago the most expensive GPUs were 700 USD, and even that was really expensive. Charging triple that is just unreasonable. And it would be possible, especially for Nvidia, to offer these GPUs at a much lower price tag if they were willing to do so.

  • @an0n1man
    @an0n1man 14 days ago +1

    Hey, nice to see you again after you left PCGH.

  • @marsamatruh5327
    @marsamatruh5327 11 days ago

    The hype for small, micro, smallest, thin, ultra-thin brings this result.

  • @thehimself4056
    @thehimself4056 14 days ago +1

    I’m waiting for the 50 series to drop. I will wait for several months after that before I buy anything

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago +2

      That's good practice, as early issues can be ironed out by then, and a lot of manufacturers will have newer revisions of their cards if issues come up.

  • @napalmarsch
    @napalmarsch 14 days ago +1

    My 3dfx Voodoo 5500 PCI runs like a champion after all this time 🤪

  • @marcelovidal4023
    @marcelovidal4023 15 days ago +2

    Too much power! I have a 13600T and a 4060 Ti 16 GB here... They draw 250 W at the wall, screen included!

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +2

      That's not too bad. The 4060 Ti is relatively efficient and most of its coolers are pretty OK, so I doubt this card would have issues with heat and power. Don't worry!

  • @TheAcadianGuy
    @TheAcadianGuy 15 days ago +1

    My MSI 3090 is doing fine, though I might need to repaste it soon

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago

      Repasting is never a bad idea. I highly recommend a good-quality paste with a long lifespan, such as Noctua's. Thermal Grizzly Kryonaut, which I often see recommended, isn't as durable and doesn't last as long as its description claims.

  • @acardenasjr1340
    @acardenasjr1340 15 days ago +1

    What about laptops?

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago +1

      In theory it's the same thing, but power consumption is much lower, so you shouldn't have to worry in that regard. The only concern is temperature: if the laptop isn't cooled well, it may cause issues.

  • @paulboyce8537
    @paulboyce8537 14 days ago +1

    Simply said, the cards are at their limit. 450 W already seems too much for reliability. Intel has a solution via ReBAR: pairing the Arc with the CPU to double the performance. I would advise looking a little deeper at Intel Arc. If there is 60 FPS, the performance equals 120 FPS. You have two silicons. AMD/Nvidia cards are mostly standalone, and the CPU plays a very small part. If you don't believe me, an A770 at 4K and 60 FPS gives you a better experience than a 4070 Ti at 120 FPS. Less waiting. My guess is Nvidia, looking to make their own CPUs, has this in mind.

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      Absolutely. They shouldn't have gone above like 300 watts; even that was a lot for the smaller coolers. Nowadays the coolers just get bigger and bigger to be able to somewhat keep these GPUs cool. I think it just got out of hand, kinda.

    • @paulboyce8537
      @paulboyce8537 14 days ago

      @@limitless-hardware1866 Intel's approach of splitting the tasks between two silicons seems logical. It doesn't show that much in the FPS count, but given that at 1080p Intel has half the capped FPS in games where there is a gap, while doing a better job at 4K, you have to realize something else is going on; it's clear the Arc just gets more competitive at higher resolutions. Traditional FPS values seem to be wrong - almost as if the calibration that gives the FPS can't see that there are two silicons and only reports the half it can see.
      4070 Ti: 12 GB VRAM, 192-bit bus, 285 W
      A770: 16 GB VRAM, 256-bit bus, 225 W
      The point really is that the Arc's 225 W + an i7-13700K's 253 W = 478 W all up. Wattage is usually a good indication of performance. Something to ponder, I guess.

  • @mikehawk7307
    @mikehawk7307 15 days ago +2

    Hmmm. I have an all-AMD laptop and I have not had any issues.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +1

      It also depends on how hot the cards run. Laptop GPUs don't always suffer from the same issues, as their power draw is much lower, but in some cases they are not as well cooled. Though many manufacturers of thicker notebooks have managed to build very potent cooling systems, so the GPUs stay relatively cool even while gaming.

    • @mikehawk7307
      @mikehawk7307 15 days ago

      @@limitless-hardware1866 Mine usually gets around 85 to 90 degrees, and that is while gaming. Mine is an MSI gaming laptop with a 6700M GPU, and those temps are not too bad. Now, if you have a Dell of any kind, then yes, you will have a GPU bottleneck due to the temps. Even the XPS are now horrible; they keep reusing the same old design with every release, and even the fan is not enough anymore.

  • @davidswanson9269
    @davidswanson9269 14 days ago

    Come on, engineers! Build a complete GPU SoC with 32 GB of embedded VRAM. One product line. Keep the power draw below 70 W. Perhaps split the SoC so DirectX and OpenGL get their own optimized pipelines, shutting down unused logic. Make the bus structure very wide, so there's less need for higher clocks, keeping thermals low. Work in some virtual memory that won't crash the whole system. Build one product that fits all, with the ability to swap out the GPU SoC via some type of optical socket. Use optical bus signaling on the PCB, reducing copper trace usage.

  • @kano326
    @kano326 14 days ago

    I turn off zero-RPM mode and use a custom fan curve to prevent this. Maybe it will shorten the life of the fans, but fans are hundreds of times cheaper than the GPU.

  • @maxstafford4007
    @maxstafford4007 14 days ago +1

    Honestly, the surge of how-to-overclock videos doesn't help.

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      I do videos on both overclocking and undervolting; everybody can decide for themselves what to do with their hardware.

  • @TheSocialGamer
    @TheSocialGamer 15 days ago +1

    Why are you holding a mic? Is this 1980? lol 😂

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago

      The room I'm in unfortunately has very bad echo, so I need to have the mic close to my mouth; otherwise it sounds bad. And yes, it's a good-quality mic lol

  • @paulmoadibe9321
    @paulmoadibe9321 13 days ago +1

    To resolve that problem, the next generation of GPUs will be less powerful...

    • @limitless-hardware1866
      @limitless-hardware1866  13 days ago

      To resolve the issue, it probably wouldn't need to be less powerful: a 4090 can reach about 90 percent of its performance at 350 W, or depending on the game, even at 300 W!

  • @freevideos051
    @freevideos051 14 days ago +3

    A fan replacement is cheaper than a GPU replacement, so turn up those fans!

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      Absolutely correct, but most fans nowadays are of relatively decent quality and would easily outlast a GPU under load 🙂

  • @Just-Ignore-It_88
    @Just-Ignore-It_88 15 days ago +1

    A step backwards for greedy Nvidia. There should be an investigation into these clowns.
    So I should be worried about my 4080 then??? Nvidia are going to get a lot of shouting from me on their support page.

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago

      That issue is not only apparent on Nvidia cards but can also be a problem on the AMD side, although less pronounced, as even their high-end cards tend to pull less power.

  • @fredlakota3595
    @fredlakota3595 12 days ago

    Not a fan of Nvidia. A long time ago I used Nvidia, and I had problems with the cards for as long as I can remember. I never use Nvidia anymore. Same with AMD CPUs; they're cursed. An Intel CPU and an AMD GPU, only from MSI, work perfectly without issues.

  • @crgwal
    @crgwal 10 days ago

    Here if you're still powering on with a 1060 GPU... 💪

  • @goldenstars5181
    @goldenstars5181 12 days ago +1

    Buy Intel, very good reviews.

  • @CodeCube-rv1rm
    @CodeCube-rv1rm 13 days ago

    AMD owners: 💪🗿

  • @Wiksila-co1fg
    @Wiksila-co1fg 9 days ago

    That high power input to GPUs is the worst invention ever made. They need to think of ways to make GPUs run at lower wattage, not how to get more watts in 😬

  • @user-um7pq7uq9w
    @user-um7pq7uq9w 15 days ago +1

    When even Intel shows more reliable GPUs 🤣

    • @limitless-hardware1866
      @limitless-hardware1866  15 days ago +1

      I mean, they don't have high-end units yet, and their sales numbers are lower, so we'll have to see how that turns out haha, but competition is always good! As far as I'm concerned, the Intel GPUs seem to be pretty good.

    • @user-um7pq7uq9w
      @user-um7pq7uq9w 15 days ago

      @@limitless-hardware1866 I think that was a deliberate move by Intel. They could already produce a reasonably high-end card, certainly with the extra efficiency of Battlemage, but I think they had their hands full trying (and mostly succeeding) to get the drivers right without risking problems at the high end. Maybe by the time they get to Celestial they will try their hand at the high end. Now if only the Intel CPU side of the company could work as hard at fixing its problems. Oh well.

  • @HusniArsyah
    @HusniArsyah 14 days ago

    Your analysis was kind of biased and technical... What makes GPUs die faster is people using them at maximum capacity, like playing games at max settings for long sessions, every day.
    The longer a GPU stays in a hot condition (high temperature), the more the silver solder starts to melt and the GPU pin caps turn dark.
    The only logical way to extend a GPU's lifespan is to not play games at the maximum capacity the GPU can handle; by definition, you are then not forcing the GPU into maximum utilization.
    With this method, you give the GPU some room to breathe.
    I know that when people buy an expensive GPU, they expect more performance for the price, and as a result they need to feed their own 'ego' 😬
    That's why modern GPUs tend to die faster: because of gaming culture itself... 😬

    • @limitless-hardware1866
      @limitless-hardware1866  14 days ago

      Those are the problems I actually explained. Holding the components at a higher temperature isn't the huge issue; as I said, it's the fluctuation, because of thermal expansion and the stress it puts on the solder.
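The thermal-cycling mechanism mentioned here can be put into rough numbers: a silicon die expands far less per degree than the board it sits on, so every heat-up/cool-down cycle shears the solder joints between them. A back-of-the-envelope sketch using typical published expansion coefficients (illustrative textbook values, not measurements of any particular card):

```python
# CTE (coefficient of thermal expansion) mismatch between a silicon die
# and an FR-4 board: each thermal cycle displaces the two materials by
# different amounts, shearing the BGA solder balls between them.
CTE_SILICON = 2.6e-6  # 1/K, typical for a silicon die
CTE_FR4 = 16e-6       # 1/K, typical in-plane value for an FR-4 PCB

def shear_displacement_um(half_span_mm: float, delta_t_k: float) -> float:
    """Relative in-plane displacement (µm) at a joint half_span_mm from
    the package centre, for a temperature swing of delta_t_k kelvin."""
    mismatch = CTE_FR4 - CTE_SILICON       # differential expansion per K
    return half_span_mm * 1000.0 * mismatch * delta_t_k

# Corner ball 15 mm from the package centre, cycling 30 °C idle -> 80 °C load:
print(round(shear_displacement_um(15.0, 50.0), 2))  # ≈ 10.05 µm per cycle
```

A displacement of a few microns sounds tiny, but repeated over thousands of idle/load cycles it fatigues the joints, which is why the temperature fluctuation matters more than steady heat.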

  • @mrbabyhugh
    @mrbabyhugh 12 days ago

    Too much stupid overclocking.

  • @reviewforthetube6485
    @reviewforthetube6485 13 days ago

    Completely false lol. They all have these issues, but of course more tech, more advancements, and more parts mean more failures. We also have 10x more GPUs being sold, meaning the number of failures goes up; it's statistics. It's not that GPUs are made worse, it's that more are sold and they have more parts and pieces. It's logical and common sense. They aren't made worse; they are actually made better, regardless of what you think or believe. Also, this high power draw? They deliver more performance and fewer watts per FPS now. The 4000 series is pulling less power than the 3000 series. I mean, 320 watts for a 4080 Super? 200 watts for a 4070? Sure, the 4090 is 450 watts, but what do you expect?
    The 2080 was 215 watts; a 4070 Super is 220 watts. Not too far off, buddy! We are actually getting more efficient while giving much more performance. So much false information in this video.

    • @limitless-hardware1866
      @limitless-hardware1866  13 days ago

      I disagree. You obviously have to compare like for like from each series, so a 2080 to a 4080, for example; looking at that, power consumption jumped 100 watts. And then compare the 2080 Ti to the 4090, because the 4080 is just a very small upgrade, whereas on the 20 series the Ti was a huge upgrade.
      While lower-end cards are getting more FPS per watt, yes, that is true, but it's expected, and they are inherently more expensive.
      Also, I didn't really say anything about them being made worse; rather, the surrounding factors, like high power consumption and energy density, just exacerbate existing problems. Not to mention the physical strain on the cards from the huge coolers.

  • @DaKrawnik
    @DaKrawnik 11 days ago

    😂 Modern GPUs are fine.

  • @Callum-277
    @Callum-277 17 days ago +2

    First comment hehe

  • @TheMusicHeals.kjhjhhg
    @TheMusicHeals.kjhjhhg 13 days ago +1

    You mean mostly high-end Nvidia cards are failing, right?

    • @limitless-hardware1866
      @limitless-hardware1866  13 days ago

      Nvidia and AMD alike, but as AMD's high-end cards tend to draw a little less power, it might be a bit different, although they still get hot!