Cool video for those who are curious about how the two compare but don't realize they're engineered for two very different use cases.
I think the RTX 6000 Ada is built for calculations and deep learning. So engineering, medical models, aerospace engineering, etc. NASA, Pfizer, SpaceX, auto manufacturing, cancer research, bridge builders, and CERN are the types of companies/industries that could use the 6000 Ada.
I think people with less complex needs, like having images rendered, videos edited, or playing games, would never have a use for the 6000, or an understanding of how to utilize it to its potential.
I work at CERN in IT and I agree ; )
I have two PCs: one has a 48 GB RTX 6000 with two additional 8 GB RTX 3050s, which is my workstation, and the other has a 24 GB RTX 4090, which is my personal/gaming PC. If I don't use the RTX 3050s, the RTX 4090 absolutely obliterates my RTX 6000, because the RTX 4090 is designed for games and the RTX 6000 isn't.
The RTX 6000 is like a MacBook: crazy performance, but useless for games.
Most professionals don't use the RTX 6000 either; it's designed for pinpoint accuracy and mostly used by corporations to power their massive workstations.
Bro has a rocket at home
Can we ask what do you do for a living?
Nvidia Quadro and AMD professional cards: it's the drivers that draw the dividing line. The limitation that keeps games from using their power is in the driver.
@@proto_64_xbest guess, either a VFX artist or 3D artist
It makes no sense to test a Quadro on 3D rendering. You should test CAD performance.
One to one for small content creation the Quadros don't shine, but if you have a massive scene that requires a lot of VRAM, that's where they outperform the gaming cards. (The main differences are the type of VRAM and the ability to link several cards to share VRAM.)
That being said, where I don't see the benefit of using Quadros is in the laptop versions, because you can't link those as far as I know.
There's no NVLink on the RTX 6000 Ada; you cannot link several cards to pool VRAM.
NVLink isn't supported on Ada
The RTX Ada series is not for average consumers; it's for corporations and servers.
The reason the 4090 got its NVLink/SLI support removed might be that two or three 4090s could likely beat a single 6000 Ada.
Not a fair comparison really, but considering that the results are directly proportional to the RT cores of each GPU, it is what it is. If you ran less simplistic tests, one for gaming and one for deep learning for example, you could actually compare the difference in performance.
The time difference is 6 seconds, but the points assigned are 4099 vs. 2864. The cards behave almost identically, yet those numbers give a completely wrong impression.
I thought the Quadro series was supposed to be better for workstation tasks than GeForce, even though it's bad at gaming, just like Threadripper vs. Ryzen 9, or Xeon vs. Core i9.
Interesting... RTX 4090 at $1,600 vs. 6000 Ada at $6,800.
Can you do SLI with the RTX 4090?
No more SLI for the RTX 40 series
4090 💪🏼💪🏼
So the 4090 can outdo the RTX 6000 Ada?
It depends on whether ECC mode is turned on or off for the 4090. ECC is always on for workstation cards like the RTX 6000.
These benchmarks aren't the be-all and end-all; you need to test with the intended applications. Typically, you don't buy a workstation card for sheer speed; you buy it for the ISV certification and because, through tuned drivers and ECC memory, it guarantees more accurate results for your work. Consumer cards like the 4090 will always trade accuracy for speed, because speed matters more for gaming. No one's going to get hurt if a polygon/pixel is rendered at a slightly inaccurate position on screen in a game, whereas any inaccuracy in rendered results could be dangerous if you're designing an aeroplane or visualising MRI data.
@@little_fluffy_clouds That's actually really interesting. So if ECC mode were turned off on the 6000, and the TDP increased to 350 W via software, could this card outperform gaming cards such as the 4090?
@@nrbeast6000 If you compare specs, the RTX 6000 has more cores, more ROPs, more TMUs, more tensor cores, more ray-tracing cores and more VRAM than the RTX 4090, and shares the same 384-bit memory bus width, but it has ECC enabled and runs at slower clock speeds, by design. If you could somehow equalise clock speeds through software/firmware overclocking and then disable ECC operation, then yes, the 6000 should outperform the 4090 in gaming, but this is all theoretical. The two cards use the same AD102 GPU chip, tuned for different use cases. The 4090 is effectively an RTX 6000 with some cores disabled, running at a higher clock speed.
The other key difference is that the 6000 has twice as much VRAM as the 4090 (48 GB vs. 24 GB). That makes a big difference on the complex projects workstation cards are typically deployed for.
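To put that VRAM gap in concrete terms, here's a rough back-of-envelope sketch. All scene numbers and byte sizes are made-up assumptions for illustration, not profiler data, but they show how a heavy scene can spill past a 24 GB card while still fitting on a 48 GB one:

```python
# Rough VRAM estimate for a 3D scene: geometry plus textures.
# All numbers here are illustrative assumptions, not profiler data.

def scene_vram_gb(n_triangles, n_textures, tex_res=8192, bytes_per_vertex=32):
    # Worst case: 3 unique vertices per triangle (no vertex sharing).
    geometry = n_triangles * 3 * bytes_per_vertex
    # Uncompressed RGBA8 textures, with ~1/3 extra for mipmap chains.
    textures = n_textures * tex_res * tex_res * 4 * (4 / 3)
    return (geometry + textures) / 1024**3

# A hypothetical heavy scene: too big for 24 GB, fine on 48 GB.
big_scene = scene_vram_gb(n_triangles=120_000_000, n_textures=80)
print(f"estimated footprint: {big_scene:.1f} GB")  # ~37 GB
```

Once a scene no longer fits, the renderer either falls back to out-of-core streaming or fails outright, which is why capacity, not clock speed, often decides which card finishes the job.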
@@little_fluffy_clouds Have you checked out machine learning on the RTX 6000 vs. the M2 Ultra? The M2 Ultra 192 GB version has roughly 4x the memory of the RTX 6000, and they're pretty similar in pricing. I guess the RTX 6000 is faster, but the M2 Ultra 192 GB is better with bigger datasets.
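As a rough illustration of why capacity can trump speed for large models, here's a weights-only fp16 estimate. This ignores activations and KV-cache headroom, which need extra memory on top, so treat it as a lower bound:

```python
# Back-of-envelope check: do a model's weights fit in GPU memory?
# Weights-only estimate; real workloads also need activation/KV-cache headroom.

def weights_gb(params_billion, bytes_per_param=2):  # fp16/bf16 = 2 bytes/param
    return params_billion * bytes_per_param  # 1e9 params * 2 B = 2 GB per billion

for params in (7, 13, 34, 70):
    need = weights_gb(params)
    fits_48 = "fits" if need < 48 else "too big"
    fits_192 = "fits" if need < 192 else "too big"
    print(f"{params:>2}B fp16 ~{need:3.0f} GB | 48 GB card: {fits_48} | 192 GB: {fits_192}")
```

A 70B model at fp16 needs about 140 GB for weights alone, so it can't load on a 48 GB card without quantization or sharding, while 192 GB of unified memory holds it, even if each step runs slower.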
Why use a benchmark if you want to test? Go into 3ds Max or SketchUp and make a billion polygons,
or a billion-polygon curve with textures applied.
wtf💀😱but why?
We have the RTX 6000 but the RTX 5090 is not released
Wow, my 3060 / 12900K / 16 GB scored 1193
Having an RTX 3060 with a 12900K sounds like a very bad decision to me
@@macedonianlad I'm going for a 4070 Super now
@@hardstylboy The CPU is still kind of overkill, I think
@@macedonianlad Maybe
Nobody cares about V-Ray renders etc... we're in the future with real-time renders
Are you sure?
Bruh... films aren't rendered in real time, games are. V-Ray is for people who do VFX professionally.
I don’t see a difference
It's not about the time; even a GTX 1080 will also end up at 1 minute, because the V-Ray benchmark tests every card for 1 minute each. The real deal is the V-Ray score.
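In other words, the benchmark fixes the duration and counts work done, so two cards finish at almost the same wall-clock time with very different scores. A toy model of that (throughput numbers are invented, loosely echoing the 4099 vs. 2864 scores mentioned in this thread):

```python
# Fixed-duration benchmark model: every card runs for the same ~60 s,
# so elapsed time is nearly identical and the score is the work completed.
# Throughput numbers below are invented for illustration.

DURATION_S = 60

def benchmark(samples_per_second):
    completed = samples_per_second * DURATION_S
    return {"time_s": DURATION_S, "score": round(completed)}

fast = benchmark(68.3)  # hypothetical faster card
slow = benchmark(47.7)  # hypothetical slower card
print(fast, slow)       # same time_s, very different scores
```

This is why comparing elapsed times on a fixed-duration benchmark is meaningless: the time is a constant of the test, and only the score reflects the hardware.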
The difference is that the 4090 is like 25% of the price of the RTX 6000.
then you're blind AF
@@lovingit699 It can render two times the objects or more; not faster, but a greater quantity, because textures and polygons take VRAM.
@@chasetrue5635 So which is better for game developers? Gaming or workstation GPUs?
Uhm... ok, whatever. The 7900 XTX is a great card; it performs better and costs less than the 4080. Good value for money, I'm happy with it!
Got one too, with a Ryzen 7800X3D
@@EpicNublet bro it was a joke about praising AMD. the video is about two Nvidia cards
7900 xtx is still better
@@EpicNublet bro I don't care I already have 4090
The 4080 is still good, with better ray tracing and rendering than the 7900 XTX
i have rtx 2070 🙄
lol i got gtx 1650
lol i got a samsung a12s
lol i got gt 210M
I don't get this video. Two very different cards built for very different reasons being compared? Garbage take, sorry.
Who will buy the RTX 6000 Ada?
I just did, waste of money :/
not you
@@serpentes9818 Bro did not spend $10k on an RTX 6000
It's Nvidia's real cash cow
Big companies, 3D studios, ...etc