I've been running multi-GPU setups since the days when GPUs only had a single VGA output, which is WHY you wanted to run a multi-GPU setup: at the time it was the only way to run multiple monitors. And yes, you CAN still run a multi-GPU setup with cards that don't support SLI or CrossFire, they just function as independent cards. The catch is that since Windows Vista, they have to be similar enough to run the same graphics driver package. Before Vista, you could run different cards with different driver packages.
Even without SLI and Crossfire (which you completely failed to mention), you can still use multiple GPUs at the same time. This video is just mostly wrong information. The only difference is that the cards don't extend one another, they operate on their own. Still multiple GPUs. All you need is a motherboard with another x16 PCIe slot far enough away from the first one to fit your 2nd GPU. It's still being done, primarily for applications that need more screens than you otherwise have ports for. But hey, what did I expect from a company that endorses Nvidia, correct information?
I think the kind of user asking the original question doesn't understand the nuance of how GPUs handle workloads, so answering it with a gaming focus is still valid enough. In that era, SLI was kind of the Coke of multi-GPU technologies, so I don't think that's egregious. If you wanted to be pedantic, multi-GPU setups that aren't SLI/Xfire(x) don't even need x16 slots or lanes. Cryptocurrency miners get away with bifurcation down to 1 PCIe lane per GPU, since the workload doesn't require much PCIe bandwidth.
@Kori-ko exactly. Hell, I've run multiple GPUs in my current system, both AMD (5700 XT) and Nvidia (1080 Ti) at the same time, no less. It's doable, they just don't play well with each other. But it's an option in Vulkan and DX12, meaning a dev CAN support it. DCS does at last check, as does X-Plane.
You still can, actually, just without NVLink or SLI; you can't use it for gaming purposes. Slotting in multiple GPUs is still useful for some serious productivity apps like Blender, or even video rendering in some cases. If I remember correctly, it doesn't need to immediately display an image to your screen, so the latency of communication between the PCIe slots is fine, because the end result is faster render times thanks to more resources being available. Edit: Oh, and don't forget cryptocurrency mining too, that doesn't need SLI. But you can't just have stacks of 4090s, because efficiency is also a factor. More 4070s would be more useful for that case IIRC, IDK, I don't mine.
Actually, even games can take advantage of multiple cards, but the developers need to enable mGPU. Ashes of the Singularity is the only game I know of that has the option, though, lol.
@@dennisperry8579 I don't doubt it, but latency and performance can become an issue with limited PCIe Bus Bandwidth. So although gaming should be possible, 99% of the time it wouldn't be optimal.
@@technologicalelite8076 there was zero latency when I was running a fury and my 7970s at the same time, or my 3090 and my fury.... But it's the only game I own that I could test with, lol. Likely something more fast paced would give the possibility of more issues.
@@dennisperry8579 Did you have an NVLink or an SLI connector? Without one, the cards communicate over the PCIe bus, which has limited bandwidth and can cause huge performance issues. In every case? I'm sure cases exist where this isn't true, but you get my point, it's likely to happen.
There's also the old downside of some SLI setups performing WORSE than a single GPU due to poor drivers or software optimization, as stuff like stuttering, screen tearing, and input latency became WAY more noticeable. It was effectively killed by impracticality, since the amount of work that went into supporting it just wasn't worth the potential gains in the vast majority of situations.
Likely what will happen is that SLI comes back as 2 GPUs on the same PCB: one will be mostly dormant, but able to kick in for really heavy applications.
I recall someone managing to modify a driver to enable SLI, and not only did it work, it managed to do so without an SLI connector/bridge. This was after the 2010s, when it stopped being a thing. Not sure if it was LTT or JayzTwoCents who covered it (maybe both?).
"why can't I..." is a question always best answered by "unless there's a law against it, you can, just think your actions through first and be mindful of the consequences"
@@leechanyeong409 No, I'm pretty sure SLI would bottleneck the 4090; it's probably faster for one 4090 to do the work instead of having to communicate with another one, unless there are VRAM restrictions going on.
The biggest reason for SLI or Crossfire back in the day was video outputs, not more power. Graphics cards back then had one DVI output, so only one screen could be run off each card. Then more outputs started being added to graphics cards, so fewer cards were required.
The main issue with SLI/Crossfire is latency. The graphics cards need to communicate with each other really fast for the parallel calculations that gaming needs. You can still run multiple graphics cards, just not for gaming: anything from mining to other heavy calculations.
And then you add DX12 and the rare Vulkan game on top, which just gives the devs far more work to get them running without becoming a stuttering mess.
The main issue is that the SLI bridge didn't have enough bandwidth, and most consumer-grade CPUs don't have enough PCIe lanes to properly utilize multiple cards. Also, the software that used them allocated data poorly, essentially causing both GPUs to render the same thing at the same time, which caused some issues. However, you can't deny quad-SLI water-cooled rigs look dope AF.
Multiple graphics cards is still common in builds setup for machine learning research. It just uses them as two separate cards though for vectorized calculations, nothing like how sli worked.
SLI is really useful for 3D rendering. I have used quad SLI with four 980 Tis, and they perform better than an RTX 3090 Ti, and two RTX 3090 Tis with SLI and CUDA in Blender absolutely roast a 4090. The only thing is it was expensive back then, but now you can find two for a reasonable price on the used market.
A100 and H100 enterprise cards are still run through an NVLink setup for all kinds of applications, especially AI. Nvidia only dropped support for their consumer cards.
@@Anthonyg5005 Yes, you can even use two RTX 4090s with CUDA; I just said that with CUDA enabled it would perform even better. Sometimes people don't even use CUDA and end up with the None option, not getting the full performance of their GPU. I am talking about SLI mainly: even though only a little performance is gained, it is still more performance. And talking about price, you can get a 3090 Ti for $1000, and the price of the 4090 MSI Suprim in my area is like $4000. I can get four RTX 3090 Tis for that price and still get more performance. I don't know if the 3090 Ti even supports quad SLI, because I have only used 980 Tis in quad SLI. Anyway, the 3090 Ti has better price-to-performance than a 4090 these days.
@@aouyiu Those are enterprise cards! The cheapest ones are like 30k. There's no way I'm spending that much on a server GPU. It wouldn't even work in my PC. Those GPUs cost more than top-end PCs these days.
It's not just Crossfire or SLI. There were crazy instances, and maybe some people do it now. Where someone might've had an ATi/AMD graphics card alongside an nVidia card...the nVidia card would've just been used purely for PhysX in certain games, or CUDA.
3dfx also had this in the early days of 3D accelerators. Nowadays it is also common to have a GPU mated to an onboard GPU in laptops for battery savings. There are also systems with multiple GPUs, just not for gaming. It is not impossible, just uncommon.
I think the first thing we'll see is nearly all CPUs having both GPU and AI cores, with lesser tasks handled by the weaker card, and when more power is needed the work goes out to the bigger GPU.
I remember in high school my friend's dad was constantly building PCs. This was around the Witcher 3 release; my guy had it in 4K with, I believe, 2 or more Nvidia Titans SLI'd lol. Had everything liquid cooled in green with green lighting. It was so sick. Witcher 3 in 4K looked insane. Now it's 2024 and I just went from a 2080 to a 4080 S and still haven't bought a 4K monitor or plan on doing so.
You can easily run more than one GPU in a machine. I run one for my main gaming monitor and one for my other three monitors. Then I set GPU affinity so that processes like Firefox and Discord use my alternate card while my games get to use my main card. Works amazing.
What's actually possible nowadays: if you happen to have two Nvidia cards, e.g. an RTX 3060 and a GTX 1050 (or any two Nvidia cards from the GTX 900 series or newer), you can configure the Nvidia driver to primarily use the better one and have the other do only the PhysX calculations. In reality it won't get you any benefit in many games, but in games that do use PhysX, it might get you a significant uplift in performance.
It is actually possible to run multiple GPUs, even now, without using SLI, and this is the standard way of doing it. This way even lets you mix different GPUs, and even different vendors (for example, a game that's set up to do this could use your iGPU and dedicated GPU simultaneously). It just depends on whether whatever you're running is actually built to use multiple devices. It's more complicated, but it is possible. For example, with Vulkan: you can create multiple logical devices from multiple physical devices, create queues for each of them, then offload rendering or compute tasks to all of them. Doing this efficiently is the tricky part (the pixels still need to actually be transferred to the GPU you are going to present the pixel data on). It's generally utilised for compute tasks only, such as AI; for example, Twitter's chatbot uses 8 GPUs to run.
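The "split work across devices, then gather on the presenting device" pattern that comment describes can be sketched as a toy model (plain Python standing in for real Vulkan queue submissions, so the "GPUs" here are just worker threads):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of explicit multi-GPU rendering (NOT real Vulkan code):
# each "device" renders a share of the scanlines, and the results
# are gathered on the device that will present the final frame.

def render_rows(device_id, rows):
    # Stand-in for a real render-queue submission on one GPU.
    return {row: f"gpu{device_id}" for row in rows}

def render_frame(height, num_devices=2):
    # Split scanlines round-robin across the available devices.
    chunks = [range(d, height, num_devices) for d in range(num_devices)]
    frame = {}
    with ThreadPoolExecutor(max_workers=num_devices) as pool:
        futures = [pool.submit(render_rows, d, chunk)
                   for d, chunk in enumerate(chunks)]
        for fut in futures:
            # The "copy back to the presenting GPU" step — the
            # expensive part the comment above is warning about.
            frame.update(fut.result())
    return frame

frame = render_frame(4)
print(frame)  # rows 0 and 2 from gpu0, rows 1 and 3 from gpu1
```

The merge step at the end is exactly where real multi-GPU renderers pay their bandwidth cost.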
I wonder if they could make a new version of SLI where the synchronization happens at the driver/firmware level of the cards themselves. Perhaps that could improve performance while relying less on developers factoring SLI into their software.
I built a PC with SLI back in the day. It was so cool seeing 2 GPUs. I never got into triple or quad SLI, too much. But I remember looking at builds online that featured 2 GPUs. It's nostalgic. I did have difficulty returning to one, but with no option for more...
A lot of people use multi-GPU setups for a variety of reasons. I myself use a 3080 and a 1050 for Linux VMs, so I can have a dGPU for both the host and the VM and don't have to mess around with splitting a single GPU up. Multi-GPU is also good for anything related to rendering, CAD, dedicated encoding, ML/AI workloads, and a whole host of other stuff. Also, some people just have multiple GPUs because they have more than 4 monitors lmao.
99% of devs dropped SLI support years before it was officially dropped; I had to disable it for a good chunk of games, otherwise the games wouldn't launch. Personally I'd love to see the return of SLI, especially since the GTX 900 and 1000 series had the best SLI performance bump of all time! In a lot of cases you'd get at least a 40% boost (if I'm remembering correctly), where older generations had minimal uplift.
SLI would make a little sense for a server build, especially with multiple CPUs on one board for graphics rendering and simulation calculations, but not your regular, consumer grade PC.
I remember running an SLI build. Back at the time I did it, it was very cost-effective for the performance I was looking for, and man, researching to make sure you didn't run into any bottlenecks was part of the fun of it.
In the old days, I had two different cards in my computer: one Matrox Millennium and one Hercules card. The Hercules was a 16-bit ISA card and was used for debugging.
SLI is coming back due to faster connection speeds, faster cards, and the help of AI. Also, the Voodoo line had a very successful SLI setup that could do amazing stuff in the 90s, but it was relegated to military simulators.
Would've been nice to see a mention about Explicit Multi-GPU, something introduced with DX12 that Vulkan apparently has an equivalent for. Unfortunately, it's hard to code for and would've been PERFECT for the lockdowns when prices soared, but hasn't gone anywhere. But the future of multi-GPU is likely renderer/software-based and no longer hardware-linked.
I used 2-way SLI back in the old days. It almost doubled your performance but the huge downside was the micro stuttering. 60FPS felt like ~24 because of that. No idea if it was ever fixed in newer cards.
I had Crossfire 4850s and the bottleneck was the bandwidth. When the CPU was hammered too much, the microstutters were worse than with one 4850. But when the CPU was on easy mode, those two babies could beat a GTX 280. So... thinking myself smart, I overclocked my Q6600 to 4.0 GHz on a Kandalf LCS build (watercooling), and one day the water drained and I burned my Q6600. That was 15 years ago. I kept the Q6600 for sentimental reasons.
I feel like one reason nobody mentions for why SLI stopped was the crypto mining boom back in 2017, when GTX 1070 cards were going for $1200 instead of the $350 they should have been.
You can still run two GPUs in a system, but you need to make sure your system can support it. You no longer need an SLI bridge, and the protocol (specifically for Nvidia cards, IDR what the protocol for AMD cards is) is still called SLI, despite not actually being a traditional SLI setup. What I want to see is motherboards with more PCIe lanes available for expansion purposes, not just specifically for SLI...
Dual gpu is still pretty good. You can offset processes by running a virtual machine with the second gpu that uses half of your cpu cores (since most software barely uses them) and have good multitasking.
The reason SLI/Crossfire died was that it meant manufacturers wouldn't sell future GPUs the way they wanted to. They know exactly why they had to kill it and never really worked on it properly during its early days, especially with the used market around.
The issue isn't that it isn't possible, but that these setups always end up data-transfer bottlenecked. Our current GPUs already use most if not all of the transfer speed the motherboard can give between GPU, CPU and RAM; adding another GPU just adds a ton of overhead transfers to keep them in sync, making the effective transfer rates even slower. So in theory you can do it, but you won't get much out of it. For big servers, supercomputer farms or rendering/mining servers it's another thing: the main difference is that they do asynchronous tasks, where it doesn't matter in what order the results come in or how fast.
Most Modern CPUs and Chipsets only support a limited amount of PCIe lanes, so even if you wanted to run multiple GPUs, you would be limited in bandwidth, which isn't as bad as it sounds. HEDT platforms like Intel's Xeon and AMD's Threadripper support many more connections, making them great for multi-GPU applications.
Multiple GPUs is still possible, but you won't be getting unified performance like with SLI. As long as you have the expansion slots for it, you can slot in a second GPU and use it to connect additional displays, the two GPUs will render separately and only communicate through the CPU.
My guess is that it has more to do with bus transfer rates. The cables required to transfer that amount of data would have to be pretty big so they don't melt. BTW, games "don't support" SLI; that's all on the driver, which is opaque to the process.
I remember when case modding started, when you had to put the window in the case YOURSELF, with a Dremel or a bandsaw. Then you had to paint it yourself if you wanted something other than bland (I used a textured paint on my first one, made it look like a blue rock). It was the era of beige cases and sadness (Windows 98 and 2000).
It's a good idea to have 2 GPUs to this day. You can use your primary one for gaming and another one inside a VM that does more complex graphical stuff, like rendering a CGI video for example. So yeah, it's not the same as it was, but I like having it organized like this.
One of my PCs still has 2 graphics cards in it. Not what you think, though. It's an old HP Z820 workstation that I run a pair of GTX 760s in; the only reason I have them in there is so I can run 4 older monitors that don't have HDMI or DVI inputs. It was all just stuff I had kicking about, so it was the "free" way to have a multi-head PC setup where I could have multiple pages of text open (and viewable) at the same time. It eventually became redundant, though. The PC I'm typing this on has a single GTX 1080 in it, a pair of vertically stacked 24" monitors, and a pair of 20" side monitors.
You can still put multiple graphics cards into your system. I run my second monitor on a separate card. Also, if the game supports it, you can have explicit multi-GPU enabled.
Parallel programming regular programs is hard. I can’t imagine how it would even work on graphics. I think I heard somewhere that SLI didn’t even distribute the load of graphics between the multiple cards because it would slow things down, which, if true, makes sense as to why SLI didn’t increase your FPS much.
One of the nice things about being an electrical and software engineer is that I was able, with much effort, to "hack" my 3090 Tis to be SLI-compatible. It's pretty good; even though it's kinda bottlenecky sometimes, even whilst it bottlenecks itself it still performs better than a single card.
Technology is good enough that we don't need to worry about having the best and most expensive tech anymore. Developers are able to make anything they want with very little limitations.
I actually still use 2 GPUs. You can force Windows (or Linux) to render the majority of your desktop, browser and apps on your "aux" GPU and keep your entire gaming GPU ready for games.
Lol, we used to have Crossfire and SLI. We even had ATI/AMD cards that had two GPUs on one card! It used to be common to have a separate card just to run physics (PhysX). Actually, Nvidia still supports using a second card for PhysX; the latest update, 5.0, was released in 2020.
Optimizing a game for multiple cards is hard. I tried a crossfire setup for a little while and found that a few games did great, but the games I liked weren't optimized and actually ran slower! The big deal that's coming soon, is chiplet GPUs. Multiple GPU chips on one card that are able to communicate (very fast) internally and represent themselves to the OS and applications as one chip. All the power and improved yield, no headache for devs.
Wait a second, if 2 4070 Supers get more total performance than a 4090, but they cost less together AND consume less power... isn't this just a total win!?
What might work would be a dedicated ray tracing + AI add-on card to take over that load from the regular graphics card, rather like 3dfx cards used to back in the day?
I mean it makes sense if you do a lot in virtual machines. For example making a windows VM on Linux so you have a gaming VM. Then having a GPU connected for good performance.
I can see the 2-GPU solution for systems that can utilise 2 graphics cards, such as a handheld and a dock. Hopefully we can have this technology in the near future.
It USED to be a feature before GPU manufacturers got greedy and forced you into just the one GPU; back then you could get 2 mid-level cards for a decent price and outperform the flagship by a big margin… ahh, simpler times.
I would definitely like to see the use of double video cards. But instead of the two cards working in unison on the same process, I see its use case as streaming and/or recording: one card would simply run the game, and the other would run the screen capture and hardware encoding to record and/or stream. OBS now supports multi-card capture, so you don't have to run the program on the same card as your captured environment anymore, leaving room for dispatching the screen capture to one video card and the game render to another. And of course the integrated GPU in your CPU (which is essentially a 3rd card) can handle the desktop environment rendering (2D).
You absolutely can have more than one GPU, different brands even. I used to have a build like this back when I dedicated a physical GPU to a virtual machine whilst the other GPU handled the host system.
I was about to say, “I swear I remember 2 1080s being a thing back in the day”
"back in the day".....
That was only 8 years ago.
I am clearly getting old.
@@ryant2568 8 years is a long time
I had double 1080 Ti super GPUs and holy shit, Minecraft on big shaders went hard. No other game could run SLI so I just played MC 😭
@@ryant2568 it feels like it was yesterday
@@ryant2568 I was 10 then
Don't worry, SLI is making a comeback, because the RTX 5090 is gonna need 2 power supplies to run 💀
That's a good one 🤣🤣🤣
SLI GPU? Nah, SLI PSU
RX5090TiSuper Here I come!!
@@patrickmurphy3 you might aswell just plug a c4 in your computer that's going to be the same thing 🤣🤣🤣
@@coololplay3196 Hey maybe. But I already run 1200watt PDU. We'll have to see if I need a new Mobo and CPU though
oh god, when we're old enough that a staple of our day is being explained like a history lesson
It wasn't even that long ago 😂. Nvlink was still supported on the 30 series last I checked. I know because I have a friend who uses linked 3090s for rendering.
How old are you
You can slot multiple GPUs in a computer without SLI. Your display (and your video game) is only going to use one. However, for 3D artists like me, it can be useful because some programs like Blender support multiple GPUs for rendering.
This 👆
Redshift is wayyyyy better with multiple gpus.
I think it's also useful if you have a boatload of monitors for whatever reason since gpus only support so many simultaneous displays
I was just about to comment this same thing. Blender can take advantage of whatever processing cores are available, and has the ability to process through any amount of CUDA cores loaded on the system
Multiple 1080 Ti was a kids dream build back in the day 😌
I mean well, you might not be able to have dual 4090s in your system but you darn well can put two 3090 ti’s in it using nvlink 😂
more like multiple 8800 GTXs my young padawan
@@DavidAlejandro201990 there never was an 8800 GTX ....
lol
it was 8800GT no X
@@DenverStarkey might wanna hit google friend 🙂
I have built 3-way 8800 GT in SLI.
Yey, 3 of them.
They were slower than a GTX 970 xD
Fun Fact: one of the big features of both DX12 and Vulkan is native support for multi-GPU rendering. This also happened around the same time that consumer cards stopped shipping with SLI/NVLink/Crossfire. So it's still possible, but even more up to the devs than it was before.
If I remember correctly it works even with combinations like RTX + integrated graphics. I just read about it years ago, I don't know how good it is. Also, implementing it is rather easy.
I had dual 1080s, and in I think one of the Sniper Elites, the async compute DX12 multi-GPU mode actually performed waaay better than SLI or a single card. When it worked, it worked pretty well.
AMD moved from CrossFire to AMD MGPU, which has basically the same objective as CrossFire, and doesn't require anything except the Radeon software.
@@Mitsou44 it's very time consuming and not an easy task.
The game has to duplicate VRAM across both GPUs, which means texture/mesh streaming must ensure both GPUs contain all the data.
And the bad thing here is that transferring data between 2 GPUs takes a lot of time, not to say it cripples their bandwidth.
Even 2 Blackwells would have a similar data-transfer issue between the 2 GPUs.
The only case where all of this would be beneficial is rendering for VR, but that has some open questions too.
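The transfer cost described above is easy to estimate back-of-the-envelope. The bandwidth figures below are rough assumptions (roughly PCIe 4.0 x16 vs an NVLink-class link), not measured values:

```python
# Back-of-the-envelope for cross-GPU data transfer cost.
# Assumed bandwidths: PCIe 4.0 x16 ~32 GB/s per direction,
# NVLink on a 3090-class card on the order of ~112 GB/s.

def transfer_ms(megabytes, gb_per_s):
    """Milliseconds to move `megabytes` at `gb_per_s` GB/s."""
    return megabytes / 1024 / gb_per_s * 1000

mirror_mb = 2048  # e.g. 2 GB of textures/buffers to mirror

pcie = transfer_ms(mirror_mb, 32)     # 62.5 ms
nvlink = transfer_ms(mirror_mb, 112)  # ~17.9 ms
print(f"PCIe 4.0 x16: {pcie:.1f} ms, NVLink: {nvlink:.1f} ms")
```

At 60 FPS the frame budget is ~16.7 ms, so even a single 2 GB mirror over PCIe eats several whole frames, which is why naive VRAM duplication kills real-time performance.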
@@SweetieSnowyCelestia The point of this being part of DirectX is to offload that manual work.
Bro really pulled a donut media reference
i saw that too
the donut reference was great
"Mo powaa baby!!"- DONUT MEDIA
Actually it's James Pumphrey
Exactly what i thought
My friend has 2 lower mid-tier GPUs and gets Fortnite max settings at around 150 FPS in game 💀
Yessss
I was searching for this comment
You can run multiple GPUs not in tandem if you're running multiple monitors and/or programs that are GPU heavy. A lot of stock traders, graphic design, CAD, architecture, etc. run multiple-GPU setups. They just don't compound/work together.
That's what I was thinking…
You're gonna need more GPUs if you have way too many monitors, cuz I think one GPU can only handle 4 monitors
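If the four-display limit holds (it varies by card, so treat the number as an assumption), the card count is just a ceiling division:

```python
import math

def gpus_needed(monitors, outputs_per_gpu=4):
    # outputs_per_gpu varies by card; 4 simultaneous displays
    # is a common limit, assumed here.
    return math.ceil(monitors / outputs_per_gpu)

print(gpus_needed(6))  # 2 cards for six displays
print(gpus_needed(4))  # 1
```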
@@taoqlee28 think of an architect or a scientist running some building renderer/simulation software. That stuff is VERY GPU heavy, so it makes sense for each program to use a dedicated GPU.
That's the problem: "They just don't compound/work together."
We already have 2 GPUs in laptops, the iGPU and the discrete GPU. The problem is you CANNOT use both of them at the same time or make them work together; you can only choose one for a workload.
Maybe the reason is that if one GPU is weaker or slower, it bottlenecks the better GPU. Yeah, pretty much like diminishing returns.
Also, because I have a CRT monitor, which is VGA only, I use a spare old AMD GPU for its VGA connector. I could've got some type of splitter, but I had the extra GPU on hand and it consumes barely any power.
I liked when he answered the question.
Vulkan and DX12 were supposed to support using multiple graphics processors (without them even having to be the same make/model) by offloading tasks between the GPUs to process things faster.
They do support that, but almost no software does. I think Ashes of the Singularity is the only "game" supporting DX12 multi-GPU, and it doesn't gain much from it.
My answer: you can install more than one, and you can get a performance boost if you're running multiple 3D applications at the same time. You just need to properly assign a different GPU to each application,
and you don't need SLI for that.
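For compute apps, one common way to pin each application to its own card is a per-process environment variable. The sketch below uses `CUDA_VISIBLE_DEVICES` (Nvidia's convention for hiding all but the listed GPUs from a process); the probe child is just a stand-in for a real workload:

```python
import os
import subprocess
import sys

def launch_on_gpu(cmd, gpu_index):
    # The child process only "sees" the one GPU we expose to it,
    # so two apps can run side by side on different cards.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_index))
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Demo: a child that just reports which GPU index it was given.
probe = [sys.executable, "-c",
         "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"]

print(launch_on_gpu(probe, 0).stdout.strip())  # 0
print(launch_on_gpu(probe, 1).stdout.strip())  # 1
```

In practice you'd replace `probe` with the render or compute command you want pinned to that card.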
I came across this short and facepalmed hard.
The dude could at least have done 5 minutes of homework on the topic.
Clustering GPUs, or even entire PC segments, was possible decades ago, is still possible, and will remain so.
What's more, it has nothing to do with devs; it's all about drivers.
And drivers for clustered GPUs are in the Pro segment, used in professional fields like science, supercomputing, research, etc.
In gaming, i.e. home use, it makes no sense to have and maintain such devices.
That actually isn't a good idea. You "CAN" do that, but PCIe lanes are limited, and two GPUs cut them in half. Though it would only be an issue if we're talking two 4090-tier GPUs.
@@johnmclain250 SLI/CrossFire worked that way - you didn't assign it per app, but you could make the main GPU take care of shaders, shadows, etc. while the 2nd GPU took care of AA and FXAA.
And as you said - the main GPU was on x16 and the 2nd was on x8.
But currently it makes zero sense to maintain such a solution for home users, because that much power wouldn't even be partially used. Clustering makes sense in really big operations like business and science,
not running games or processing videos.
No, I'm pretty sure if your system had the lanes available both cards would run at x16. Oh wait, to do that you would have probably had a team red processor. Bulldozers sure were nifty. @@utawamaru
no
Don't forget crossfire that was AMD's equivalent
How dare they not mentioning the crossfayah..
was even better, because it didn't require that dumb bridge adapter across the top... just ran through the PCI-e lanes.
@@shryko and this wasn't as good, Nvidia's implementation is so good it's still in use today for their enterprise cards, used a lot by AI companies.
@@aouyiu given Nvidia has over 90% market share, I'd say CrossFire wasn't "worse", it was just outnumbered. CrossFire worked out of the box without anything required from the user at all: no extra hardware, no special software.
@@aouyiu CrossFire was definitely better: they alternated which GPU was rendering, which has lower overhead than having 2 GPUs each render half a frame and stitching it together.
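The two scheduling modes being argued about here can be sketched as a toy model. This is illustrative Python only, not driver code; the function names and the two-GPU default are my own assumptions:

```python
# Toy model of the two classic multi-GPU modes discussed above:
# AFR (alternate-frame rendering): whole frames alternate between GPUs.
# SFR (split-frame rendering): every GPU works on a slice of every frame.
# Real drivers schedule this far more dynamically -- this is just the idea.

def afr_assignment(num_frames, num_gpus=2):
    """Return which GPU renders each frame under alternate-frame rendering."""
    return [frame % num_gpus for frame in range(num_frames)]

def sfr_assignment(num_frames, num_gpus=2):
    """Under split-frame rendering, every GPU touches every frame."""
    return [list(range(num_gpus)) for _ in range(num_frames)]

print(afr_assignment(6))   # frames alternate: [0, 1, 0, 1, 0, 1]
print(sfr_assignment(2))   # both GPUs on each frame: [[0, 1], [0, 1]]
```

The lower-overhead claim in the comment comes from AFR needing no mid-frame merge step, at the cost of frame-pacing (micro-stutter) issues other commenters mention.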
I still (sometimes) run dual 3090s in SLI (using NVLink) for rendering; it basically gives you 4090 performance with effectively 48 GB of VRAM. Only NVLink-capable cards can do that, though, plus you need software that supports it.
I've been running multi-GPU setups since the days when GPUs only had a single VGA output. Which is WHY you wanted to run a multi-GPU setup, as it was the only way at the time to run multiple monitors.
And yes, you CAN still run a multi-GPU setup with cards that don't support SLI or CrossFire; they just function as independent cards. The catch is that since Windows Vista, they have to be similar enough to run the same graphics driver package. Before Vista, you could run different cards with different driver packages.
Even without SLI and CrossFire (which you completely failed to mention), you can still use multiple GPUs at the same time. This video is mostly wrong information. The only difference is that the cards don't extend one another; they operate on their own. Still multiple GPUs. All you need is a motherboard with another x16 PCIe slot far enough away from the first to fit your 2nd GPU. It's still being done, primarily for applications that require more screens than you otherwise have ports for. But hey, what did I expect from a company that endorses Nvidia: correct information?
what is wrong with them? i have zero clue.
You just got farmed, my dude. Videos like this leave out info so people like you post in the comments and add more activity to the video as a whole
I think the kind of user asking the original question doesn't understand the nuance of how GPUs handle workloads, so answering it with a gaming focus is still valid enough. In that era, SLI was kind of the Coke of multi-GPU technologies, so the omission isn't egregious.
If you want to be pedantic, multi-GPU setups that aren't SLI/CrossFire don't even need x16 slots or lanes. Cryptocurrency miners get away with bifurcation down to 1 PCIe lane per GPU, since the workload doesn't require much PCIe bandwidth.
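A back-of-envelope calculation shows why 1 lane is enough for mining but not for gaming. The per-lane throughput figures below are approximate ballpark values (after encoding overhead), chosen for illustration:

```python
# Rough one-direction PCIe throughput per lane in GB/s, after link encoding
# overhead. These are approximate illustrative figures, not exact spec values.
PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen, lanes):
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

# A mining GPU bifurcated down to a single Gen3 lane vs. a gaming GPU
# on a full Gen4 x16 slot:
print(round(link_bandwidth(3, 1), 2))    # under 1 GB/s -- fine for mining
print(round(link_bandwidth(4, 16), 1))   # over 30 GB/s -- needed for asset streaming
```

Mining only ships small work units and results over the bus, so the x1 link is rarely the bottleneck; games stream textures and geometry constantly, which is why the lane-splitting concern above matters for them.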
If they are different skus then driver support becomes an issue.
@Kori-ko exactly. Hell, I've run multiple GPUs in my current system, both AMD (5700 XT) and Nvidia (1080 Ti) at the same time, no less. It's doable; they just don't play well with each other.
But it's an option in Vulkan and DX12, meaning a dev CAN support it. DCS does at last check, as does X-Plane.
You still can, actually, just without NVLink or SLI; you can't use it for gaming purposes. Slotting in multiple GPUs is still useful for some serious productivity apps like Blender, or even video rendering in some cases. If I remember correctly, the work doesn't need to be displayed on your screen immediately, so the latency of communication between the PCIe slots is fine: the end result is faster render times thanks to more resources being available.
Edit: Oh, and don't forget cryptocurrency mining too; that doesn't need SLI. But you can't just have stacks of 4090s, because efficiency is also a factor. More 4070s would be more useful for that case IIRC, IDK, I don't mine.
I was thinking about this. I've seen deep learning pc builds using multiple GPUs but most people associate GPUs with gaming.
Actually, even games can take advantage of multiple cards, but the developers need to enable mGPU. Ashes of the Singularity is the only game I know of that has the option, though, lol.
@@dennisperry8579 I don't doubt it, but latency and performance can become an issue with limited PCIe Bus Bandwidth. So although gaming should be possible, 99% of the time it wouldn't be optimal.
@@technologicalelite8076 there was zero latency when I was running a fury and my 7970s at the same time, or my 3090 and my fury.... But it's the only game I own that I could test with, lol.
Likely something more fast paced would give the possibility of more issues.
@@dennisperry8579 Did you have an NVLink or an SLI connector? Those let the cards communicate directly. Otherwise the cards usually talk over the PCIe bus, which has limited bandwidth and can cause huge performance issues. In every case? I'm sure exceptions exist, but you get my point: it's likely to happen.
I have 2 GPU slots on my motherboard; I configured one for graphics processing and the other just helps my CPU out
There's also the old downside that some SLI setups performed WORSE than a single GPU due to poor drivers or software optimization, as stuttering, screen tearing, and input latency became WAY more noticeable. It was effectively killed by impracticality: the amount of work needed to support it just wasn't worth the potential gains in the vast majority of situations.
The 3090 had NVLink, which is a more advanced form of SLI, in 2021. It hasn't been a decade.
I didn't even know it wasn't a thing anymore lol
At that point games were already dropping support for it and Nvidia was well on its way to kill SLI in 2021
3090 used nv-link, and it was useless for games, mostly for machine learning
@@TunsaMcHaggis I forgot that's what it was called. You're totally correct.
It used the NVLink connector; however, it was still just using the SLI protocol. So apart from more throughput, it's just the old SLI.
"A Mo Pawo Baby" - James
Likely what will happen is that SLI will come back as 2 GPUs on the same PCB: 1 will be mostly dormant but able to kick in for really heavy applications
I recall someone managing to modify a driver to enable SLI, and not only did it work, it did so without an SLI connector/bridge. It was after the 2010s, when SLI had stopped being a thing. Not sure if it was LTT or JayzTwoCents who covered it (maybe both?)
Someone been watching a lot of donut media 😂
Thank you for noticing as well
Bro channeled his inner James Pumphrey 😂
"why can't I..." is a question always best answered by "unless there's a law against it, you can, just think your actions through first and be mindful of the consequences"
Mom, I need two RTX 4090s. One for homework and one for my online classes. 😅
Imagine 4 4090's
Average Bitcoin miner
96gb pool of vram💀💀💀
Idk, according to LTT 2 4090s actually perform worse than 1
@@NavarrosGaming probably PSU and CPU limitations
@@leechanyeong409 no, I'm pretty sure SLI would bottleneck the 4090; it's probably faster for 1 4090 to do the work instead of having to communicate with another one, unless there are VRAM restrictions going on.
"Older cards"
3090: am I a joke to you?
The biggest reason for SLI or CrossFire back in the day was video outputs, not more power. Graphics cards back then had one DVI output, so only 1 screen could run off each card. Then more outputs started being added to graphics cards, so fewer cards were required.
The main issue with SLI/CrossFire is latency. The graphics cards need to communicate with each other really fast for the parallel calculations gaming needs. You can still run multiple graphics cards, just not for gaming: anything from mining to other heavy calculations.
And then you add DX12 and the rare Vulkan game on top, which just gives the devs far more work to keep things running without becoming a stuttering mess
When it turned to black and white and said “in 2010s” really shook me
The 3090 Ti was the last Nvidia GPU to support SLI
The main issue is the SLI bridge didn't have enough bandwidth, and most consumer-grade CPUs don't have enough PCIe lanes to properly utilize multiple cards. Also, the software allocated data poorly, essentially causing both GPUs to render the same thing at the same time, which caused some issues. However, you can't deny quad-SLI water-cooled rigs look dope AF
Multiple graphics cards are still common in builds set up for machine learning research. They're just used as separate cards for vectorized calculations, though; nothing like how SLI worked.
SLI is really useful for 3D rendering. I have used quad SLI with 4 980 Tis, and they perform better than an RTX 3090 Ti, and 2 RTX 3090 Tis with SLI and CUDA in Blender absolutely roast a 4090. The only thing is it was expensive back then, but now you can find two for a reasonable price on the used market.
A100 and H100 enterprise cards are still run through an NVLink setup for all kinds of applications, especially AI. Nvidia only dropped support on their consumer cards.
@@aouyiu I thought all setups could use more than one card for CUDA workloads; it's just outputting 3D to a single display that only one can do at a time.
@@Anthonyg5005 Yes, you can even use 2 RTX 4090s with CUDA; I just said that with SLI it performs even better. Sometimes people don't even enable CUDA and end up using the "None" option, missing out on their GPU's full performance. I am talking about SLI mainly; even though only a little performance is gained, it's still more performance.
And talking about price: you can get a 3090 Ti for $1000, and considering the 4090 MSI Supreme in my area is like $4000, I could get 4 RTX 3090 Tis for that price and still get more performance. I don't know if the 3090 Ti even supports quad SLI, because I have only used 980 Tis in quad SLI. Anyway, the 3090 Ti has better price-to-performance than a 4090 these days.
@@aouyiu Those are enterprise cards! The cheapest ones are like 30k. There's no way I'm spending that much on a server GPU. It wouldn't even work in my PC.
Those GPUs cost more than top-end PCs these days.
@@JeffreyPresstonBezos oh okay. Also, isn't OptiX best for RTX cards because it uses the RT cores, and CUDA best for Tesla cards without RT cores?
I was waiting for " Why you bastard " from American Psycho. 😂
It's not just CrossFire or SLI. There were crazy instances, and maybe some people still do it now, where someone might've had an ATI/AMD graphics card alongside an Nvidia card; the Nvidia card would've been used purely for PhysX in certain games, or CUDA.
3dfx also had this in the early days of 3D accelerators. Nowadays it's also common for laptops to pair a discrete GPU with an onboard GPU for battery savings. There are also systems with multiple GPUs, just not for gaming. It's not impossible, just uncommon.
Love that reference 😂 "Mo Powa Baby!"
James Pumphrey?
@@CircuitBird yep
I want 3 rtx 4090s in one pc build fr :(
Get the Orion 8 pack for $45,000 😂
NASA has something like this.
I think the first thing we'll see is that nearly all CPUs will have both GPUs and AI cores and lesser tasks will be handled by the weaker card and when it needs more power it goes out to the bigger GPU
I remember in high school my friend's dad was constantly building PCs. Around the Witcher 3 release, my guy had it running in 4K with, I believe, 2 or more Nvidia Titans SLI'd, lol. Had everything liquid-cooled, green with green lighting. It was so sick; Witcher 3 in 4K looked insane. Now it's 2024 and I just went from a 2080 to a 4080 S and still haven't bought a 4K monitor or plan on doing so.
The replacement for SLI is NVLink, but it's only used for ML, scientific computing, etc. Hence it's not on consumer GPUs. It's on stuff like H100s, though.
You can easily run more than one GPU in a machine. I run one for my main gaming monitor and one for my other three monitors. Then I set GPU affinity so that processes like Firefox and Discord use my alternate card while my games get to use my main card. Works amazing.
What's actually possible nowadays:
If you happen to have two Nvidia cards, e.g. an RTX 3060 and a GTX 1050 (or any two Nvidia cards from the GTX 900 series or newer), you can configure the Nvidia driver to primarily use the better one and have the other do only the PhysX calculations. In reality it won't get you any benefit in most games, but in games that do use PhysX, it might get you a significant uplift in performance.
Back in 2003, I had a CrossFire setup with dual X1200 Pros, and it was actually badass. Running Half-Life 2 at max settings and CS 1.6 at 300+ fps
It is actually possible to run multiple GPUs, even now, without using SLI, and this is the standard way of doing it. This way even lets you mix different GPUs, and even different vendors (for example, a game set up to do this could use your iGPU and dedicated GPU simultaneously). It just depends on whether whatever you're running is actually built to use multiple devices. It's more complicated, but it is possible.
For example with Vulkan: you can create multiple logical devices from multiple physical devices, create queues for each of them, then offload rendering or compute tasks to all of them. Doing this efficiently is the tricky part (the pixels still need to be transferred to the GPU you're actually going to present on). It's generally utilised for compute tasks only, such as AI; for example, Twitter's chatbot uses 8 GPUs to run.
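The one-queue-per-device idea described above can be sketched in plain Python. This is only a scheduling sketch, not Vulkan code (real Vulkan would go through `vkEnumeratePhysicalDevices`, `vkCreateDevice`, and `vkGetDeviceQueue`), and the device names are made up for illustration:

```python
# Sketch: enumerate "devices", give each its own work queue, then
# round-robin compute tasks across them -- the shape of the multi-device
# approach described in the comment above, minus all the real GPU plumbing.
from collections import deque

def build_queues(device_names):
    """One work queue per logical device."""
    return {name: deque() for name in device_names}

def dispatch(queues, tasks):
    """Round-robin tasks across every device's queue."""
    names = list(queues)
    for i, task in enumerate(tasks):
        queues[names[i % len(names)]].append(task)
    return queues

# Hypothetical device names; a real setup would query the driver.
queues = dispatch(build_queues(["dGPU", "iGPU"]), [f"job{n}" for n in range(5)])
print({name: list(q) for name, q in queues.items()})
```

The hard part the comment flags, moving results back to the presenting GPU, is exactly what this sketch leaves out; that transfer cost is why the approach suits compute batches better than real-time rendering.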
I wonder if they could make a new version of SLI where the synchronization happens at the driver/firmware level of the cards themselves. Perhaps that could improve performance while relying less on developers factoring SLI into their software.
I built a PC with SLI back in the day. It was so cool seeing 2 GPUs. I never got into triple or quad SLI; too much. But I remember looking at builds online that featured 2 GPUs. It's nostalgic. I did have difficulty returning to one, but there's no option for more...
A lot of people use multi-GPU setups for a variety of reasons. I myself use a 3080 and a 1050 for Linux VMs, so I can have a dGPU for both the host and the VM and don't have to mess around with splitting a single GPU up. Multi-GPU is also good for anything related to rendering, CAD, dedicated encoding, ML/AI workloads, and a whole host of other stuff. And some people just have multiple GPUs because they have more than 4 monitors lmao.
99% of devs dropped SLI support years before it was officially dropped; I had to disable it for a good chunk of games, otherwise they wouldn't launch. Personally, I'd love to see the return of SLI, especially since the GTX 900 and 1000 series had the best SLI performance bump of all time! In a lot of cases you'd get at least a 40% boost (if I'm remembering correctly), where older generations had minimal uplift.
SLI would make a little sense for a server build, especially with multiple CPUs on one board for graphics rendering and simulation calculations, but not your regular, consumer grade PC.
I remember running an SLI build. Back when I did it, it was very cost-effective for the performance I was looking for, and man, researching to make sure you didn't run into any bottlenecks was part of the fun of it
I'd love to see it make a comeback. I remember running two 980ti's
In the old days, I had two different cards in my computer:
one Matrox Millennium and one Hercules card.
The Hercules was a 16-bit ISA card and was used for debugging.
SLI is coming back due to faster connection speeds, faster cards, and the help of AI. Also, the Voodoo line had a very successful SLI card setup that could do amazing stuff in the 90s, but it was relegated to military simulators
Would've been nice to see a mention of Explicit Multi-GPU, something introduced with DX12 that Vulkan apparently has an equivalent for. Unfortunately, it's hard to code for; it would've been PERFECT for the lockdowns when prices soared, but it hasn't gone anywhere.
But the future of multi-GPU is likely renderer/software-based and no longer hardware-linked.
I used 2-way SLI back in the old days. It almost doubled your performance, but the huge downside was the micro-stuttering: 60 FPS felt like ~24 because of it.
No idea if it was ever fixed on newer cards.
I had CrossFire 4850s, and the bottleneck was the bandwidth.
When the CPU was hammered too much, the micro-stutters were worse than with one 4850.
But when the CPU was on easy mode, those two babies could beat a GTX 280.
So... thinking myself smart, I overclocked my Q6600 to 4.0 GHz on a Kandalf LCS (watercooling) build, and one day the water drained and I burned my Q6600.
That was 15 years ago now. I kept the Q6600 for sentimental reasons.
I feel like one reason nobody mentions for why SLI stopped is the crypto-mining boom back in 2017, when GTX 1070 cards were going for $1200 instead of the $350 they should have been.
You can still run two GPUs in a system, but you need to make sure your system supports it. You no longer need an SLI bridge, and the protocol (specifically for Nvidia cards; idr what the AMD one is called) is still called SLI, despite not being a traditional SLI setup.
What I want to see is motherboards with more PCIe lanes available for expansion purposes, not just specifically for SLI...
Doesn't matter, Gabe Newell made PC gaming a horrific 1984-esque Dystopian Hellscape
As someone who has a 4060 and a 3060 I can tell you that we do utilize two gpus when modeling 3D images or developing games
SLI isn't gone; it's being used in servers that do a lot of calculations. A Dell R760xa can contain up to 4 GPU cards.
Dual GPU is still pretty good. You can offload processes by running a virtual machine on the second GPU that uses half of your CPU cores (since most software barely uses them) and get good multitasking.
In some compute applications (machine learning, Rendering (e.g. Blender), productivity apps that use CUDA), you can still use several GPUs.
You'll probably need to connect your PC directly to the mains like welding machines. The power demand will be insane.
The reason SLI/CrossFire died is that it meant manufacturers couldn't sell future GPUs the way they wanted to. They know exactly why they had to kill it, and they never really worked on it properly even in its early days, especially with the used market in play.
The issue isn't that it isn't possible, but that these setups always end up data-transfer bottlenecked. Our current GPUs already use most, if not all, of the transfer speed the motherboard can provide between GPU, CPU, and RAM.
Adding another GPU just adds a ton of overhead transfers to keep them in sync, making the effective transfer rates even slower.
So in theory you can do it, but you won't get much out of it.
For big servers, supercomputer farms, or rendering/mining servers it's another story: the main difference is that they run asynchronous tasks where it doesn't matter in what order the results come in, or how fast.
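The diminishing-returns argument above can be put into a toy formula. The overhead fraction here is an assumed illustrative number, not a measurement of any real system:

```python
# Toy scaling model for the point above: each extra GPU adds raw
# throughput but also adds synchronization/transfer overhead, so the
# effective speedup flattens out. sync_overhead=0.15 is an assumption
# picked purely for illustration.

def effective_speedup(num_gpus, sync_overhead=0.15):
    """Naive model: ideal N-way speedup divided by a growing sync penalty."""
    penalty = sync_overhead * (num_gpus - 1)
    return num_gpus / (1 + penalty)

for n in (1, 2, 4):
    # 1 GPU -> 1.0x; 2 GPUs -> ~1.74x; 4 GPUs -> ~2.76x under this assumption
    print(n, round(effective_speedup(n), 2))
```

Asynchronous server workloads dodge the penalty term almost entirely because the GPUs never need to stay in lockstep, which matches the comment's point about render/mining farms scaling fine.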
My PC is still running with a pair of GTX 970's in SLI and this video makes me feel like it may be time to update my build...
Most modern CPUs and chipsets only support a limited number of PCIe lanes, so even if you wanted to run multiple GPUs, you'd be limited in bandwidth (which isn't as bad as it sounds). HEDT platforms like Intel's Xeon and AMD's Threadripper support many more lanes, making them great for multi-GPU applications.
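A quick lane-budget sketch makes the consumer-vs-HEDT gap concrete. The lane counts below are typical ballpark figures I'm assuming for illustration, not exact numbers for any specific SKU:

```python
# Back-of-envelope lane budgeting: how many CPU PCIe lanes remain after
# giving each GPU a full x16 link. Negative means the GPUs must fall back
# to narrower links (e.g. x8/x8). Lane counts are illustrative assumptions.

def lanes_left(platform_lanes, gpus, lanes_per_gpu=16):
    """Lanes remaining after allocating lanes_per_gpu to each GPU."""
    return platform_lanes - gpus * lanes_per_gpu

print(lanes_left(20, 2))   # mainstream desktop (~20 lanes): comes up short
print(lanes_left(64, 2))   # HEDT-class (~64 lanes): plenty to spare
```

This is why two x16 GPUs on a mainstream board typically run at x8/x8, while HEDT boards can feed several cards at full width.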
Multiple GPUs is still possible, but you won't be getting unified performance like with SLI. As long as you have the expansion slots for it, you can slot in a second GPU and use it to connect additional displays, the two GPUs will render separately and only communicate through the CPU.
My guess is that it has more to do with bus transfer rates; the cables required to move that amount of data would have to be pretty big so they don't melt. BTW, games "don't support" SLI; that's all on the driver, which is opaque to the process.
I remember when case modding started, when you had to put the window in the case YOURSELF, with a Dremel or a bandsaw. Then you had to paint it yourself if you wanted something other than bland (I used textured paint on my first one; made it look like a blue rock)
It was the era of beige cases and sadness (Windows 98 and 2000).
It's a good idea to have 2 GPUs to this day. You can use your primary one for gaming and another one inside a VM that does more complex graphical work, like rendering a CGI video, for example. So yeah, it's not the same as it was, but I like having it organized like this.
One of my PCs still has 2 graphics cards in it. Not what you think, though: it's an old HP Z820 workstation that I run a separate pair of GTX 760s in. The only reason I have them in there is so I can run 4 older monitors that don't have HDMI or DVI inputs. It was all just stuff I had kicking about, so it was the "free" way to have a multi-head PC setup so I could have multiple pages of text open (and viewable) at the same time.
It eventually became redundant, though. The PC I'm typing this on has a single GTX 1080 in it, a pair of vertically stacked 24" monitors, and a pair of 20" side monitors.
I had two Voodoo 2 cards in my Pentium II. Connecting 2 Voodoo cards not only doubled the VRAM, it enabled the 1024x768 resolution.
They can and do. The SLI link is now handled in the software rather than a cable connection.
You can still put multiple graphics cards into your system. I run my second monitor on a separate card. Also, if the game supports it, you can have explicit multi-GPU enabled.
Parallelizing regular programs is hard; I can't imagine how it would even work for graphics. I think I heard somewhere that SLI didn't even distribute the graphics load between the cards because it would slow things down, which, if true, explains why SLI didn't increase your FPS much.
One of the nice things about being an electrical and software engineer is that, with much effort, I was able to "hack" my 3090 Tis to be "SLI" compatible. It's pretty good; even though it's kinda bottlenecky sometimes, it still performs better than a single card.
Not sure if it's possible, but I'd love to see SLI implemented in DirectX and let it decide how the GPUs are used
Technology is good enough that we don't need to worry about having the best and most expensive tech anymore. Developers are able to make anything they want with very little limitations.
As a guy who just turned 40, turning the 2010s into old-timey black and white was deeply painful.
...And it will happen to you! 👻
I actually still use 2 GPUs. You can force Windows (or Linux) to render the majority of your desktop, browser, and apps on your "aux" GPU and keep your entire gaming GPU ready for games
Even without SLI, you can install 2 or 3 cards and have each one connected to its own monitor, then use Windows to assign each app to a specific card.
Lol, we used to have CrossFire and SLI. We even had ATI/AMD cards with two GPUs on one card! It used to be common to have a separate card just to run physics (PhysX). Actually, Nvidia still supports using a second card for PhysX; the latest update, 5.0, was released in 2020.
Optimizing a game for multiple cards is hard. I tried a CrossFire setup for a little while and found that a few games did great, but the games I liked weren't optimized and actually ran slower! The big thing coming soon is chiplet GPUs: multiple GPU chips on one card that communicate (very fast) internally and present themselves to the OS and applications as one chip. All the power and improved yield, no headache for devs.
Should also mention that mGPU doesn't always mean SLI, because applications like Blender can make use of multi-GPU setups very well.
Wait a second: if 2 4070 Supers get more total performance than a 4090, but cost less together AND consume less power... isn't that just a total win!?
What might work would be a dedicated ray tracing + AI add-on card to take over that load from the regular graphics card, rather like 3dfx cards back in the day?
I mean, it makes sense if you do a lot in virtual machines; for example, making a Windows gaming VM on Linux, with a GPU passed to it for good performance.
I can see the 2-GPU solution for systems that can utilise 2 graphics cards, such as a handheld and a dock. Hopefully we can have this technology in the near future
I wouldn't mind a SLIghtly more complicated build. Especially if the frame rate helps me catch more in the Crossfire.
I used SLI with two GTX 970s. They looked awesome (MSI ME) and I liked that technology. I wish they'd bring it back
It USED to be a feature before GPU manufacturers got greedy and forced you onto just the 1 GPU. Back then you could get 2 mid-range cards for a decent price and outperform the flagship by a big margin… ah, simpler times.
I would definitely like to see the use of double video cards, but instead of the two cards working in unison on the same process, I'd see the use case in streaming and/or recording. One card would simply run the game, and the other would handle the screen capture and hardware encoding to record and/or stream the games being played. OBS now supports multi-card capture, so you don't have to run the program on the same card as your captured environment anymore, leaving room to dispatch the screen capture to one video card and the game render to another. And of course the integrated GPU in your CPU (essentially a 3rd card) can render the desktop environment (2D).
or just use the capture card built into the gpu which just does the same thing.
You absolutely can have more than one GPU, different brands even. I used to have a build like this back when I dedicated a physical GPU to a virtual machine whilst the other GPU handled the host system