PCI Express 6.0 Is A Big Deal!
- Published Jan 10, 2021
- Learn through problem solving, and the first 200 people can save 20% today on Brilliant at brilliant.org/Techquickie/
Is PCI Express 6.0 overkill or will it have real consequences?
Leave a reply with your requests for future episodes, or tweet them here: / jmart604
►GET MERCH: www.LTTStore.com/
►SUPPORT US ON FLOATPLANE: www.floatplane.com/
►LTX EXPO: www.ltxexpo.com/
AFFILIATES & REFERRALS
---------------------------------------------------
►Affiliates, Sponsors & Referrals: lmg.gg/sponsors
►Private Internet Access VPN: lmg.gg/pialinus2
►MK Keyboards: lmg.gg/LyLtl
►Nerd or Die Stream Overlays: lmg.gg/avLlO
►NEEDforSEAT Gaming Chairs: lmg.gg/DJQYb
►Displate Metal Prints: lmg.gg/displateltt
►Epic Games Store (LINUSMEDIAGROUP): lmg.gg/kRTpY
►Official Game Store: www.nexus.gg/ltt
►Amazon Prime: lmg.gg/8KV1v
►Audible Free Trial: lmg.gg/8242J
►Our Gear on Amazon: geni.us/OhmF
FOLLOW US ELSEWHERE
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
Twitch: / linustech
FOLLOW OUR OTHER CHANNELS
---------------------------------------------------
Linus Tech Tips: lmg.gg/linustechtipsyt
TechLinked: lmg.gg/techlinkedyt
ShortCircuit: lmg.gg/shortcircuityt
LMG Clips: lmg.gg/lmgclipsyt
Channel Super Fun: lmg.gg/channelsuperfunyt
Carpool Critics: lmg.gg/carpoolcriticsyt - Science & Technology
4.0 isn't even mainstream. See y'all in 2029!
Do people of night city use 6.0?
@@est495 Considering what the AI was like, their brainchips must be using USB-1
@@dfkfgjfg lul
@@dfkfgjfg lmao
It is in mainframes and data centers
I didn't even know that we had gotten to PCIe 5.0
AFAIK the first consumer products with PCIe 5 are releasing in the second half of this year (2021). I'm certain PCIe 5 is in R&D samples across the tech industry now, and there are probably industrial/data center products already in use.
Intel's Sapphire Rapids has PCIe5 - part of CXL
@@ianvisser7899 for amd
God Slayer
What now
Well, it isn't necessarily "released", at least not for consumers. These are just the specs for future hardware, or some R&D and classified hardware currently in use or being developed
“PCIE 6.0 is a big deal!!!”
“Yeah we'll get it in 20 years”
Y*es*!
It's awesome. Most people will have PCI-E 6 when they actually need PCI-E 5.
More like 5 years
We just got to 4.0 recently even though it had been in development for like 2 years. 5 is nowhere to be found
and it'll be a consumer standard after 5 years
2:45 I actually kind of died because of how much high-pitched Riley sounds like Linus. 😂
They're distantly related and the only tell is their shared hyper-sonic pitch.
Or it's the helium.
Or linus actually voiced over
Yeah I think they're related. Are they cousins? lol
It is Linus. Riley is just an alter ego that Linus created while being bored, stuck at the house.
He really did sound like Linus
Me stuck with 2.0: Interesting.
edit: typo
More like: Inte *loading* *still loading* rsting.
If you aren't 4 generations behind why do you even have tech.
I also have PCIe 2.0, and when monitoring my EVGA 1660s SC under 100% utilization it only occupies 21% of the bus bandwidth
Me too XD
I tip my hat to you sir with my super-bottlenecked RX5700XT and the FX8320 which only does 2.0.
“Best I can do is 3.0 take it or leave it” intel
11th gen is PCIE4
Rocket Lake is 4.0 and Alder Lake (Q4 2021) will be 5.0 IIRC.
Is there any need for more than 3.0 today? I'm running a 3090 on a 10900K. Maybe I could squeeze out 1 more fps with PCIe 4.0. Really, like I care whether I have 600 fps in a vintage game or 601.
All you need. 4 is overkill and only useful for SSDs. SSDs that cost an arm and a leg.
@@lexecomplexe4083 how many people do you know with intel 11th gen
“Although we haven’t gotten PCI-E 5.0 devices in our hot little hands yet...”
How about “Although we haven’t gotten a device that requires the bandwidth PCI-E 4.0 provides...”
There was some rumoured SSD that hit the limit.
The newest GPUs are AFAIK bottlenecked by 3.0...
so you commented before getting 3 minutes into the video huh
@@zamundaaa776 Nah. They aren't even close to bottlenecked. The only difference is maybe 1-2 fps. That's it lol. I have an RTX 3070 and the difference from benchmarks compared to my GPU's performance from PCIE 3.0-->4.0 is so small that it's not worth it. 4.0 is worth it for SSDs tho, but even then they're not fully utilized yet.
@@zamundaaa776 LMAO No - not a single video card can even saturate a PCIe3 x16 slot. NONE.
Woah, the sped up high pitch Riley sounds a lot like Linus!
haha he really does
Listen to him at 0.25x, he sounds really drunk......
**points gun** always has been
@@ianvisser7899
There's a video they made a while ago where they tried to deepfake Linus using James as the actor lol
@@ianvisser7899
Lol yeah maybe,
They probably didn't want to bother going that far for a joke video...
On the other hand, it's not like Linus hasn't dealt with ridiculously expensive stuff in the past for videos
(Like server grade GPU'S and CPU'S)
So I guess they could if they were interested 😅
PCIe is the best example of how far ahead of the consumer sector the data center and enterprise sectors are. Only within the last few years did consumer products finally saturate a 3.0 x16 interface whereas in data centers and enterprise sectors a large part of the time is just waiting for better hardware because any advancements are immediately gobbled up and integrated into workflows.
when I was in high school (2005-ish) my computing teacher went on a long tangent about how using voltage amplitude to define more than just 2 states was super tricky because of how unstable it could be. he spent most of a lesson talking about how much it could improve computing to have that working reliably. I hope he's still around to see it being rolled into a standard
I know this is techquickie, so not the right channel, but it would be neat if you could have full interviews with some of the engineers you have mentioned in these explanations
YES i hope they see this comment
This must be a thing
These are usually interviews over mail. No IT engineer will be comfortable with a one on one interview.
It can be done on their other channel.
Why, you want them to pronounce Nelalojanan?
It’s crazy that pcie is evolving so fast
Seems kinda unnecessary imo
@@austindoud273 For the average Joe and most professionals? Yea kinda unnecessary. But for enterprise, data centers, servers etc? This is excellent and for 1 reason: storage. As memory controllers get better and better, this will be a godsend for anything requiring extremely large amounts of data streaming.
@@austindoud273 Disagree; more data in the same number of lanes is very important. 4 PCIe 6.0 lanes are equivalent to 16 PCIe 4.0 lanes, and GPUs aren't even close to saturating that. So having only 4 lanes for the GPU would free up 12 more lanes for other stuff, like 3 M.2 28 Gbps NVMe SSDs or 6 M.2 14 Gbps NVMe SSDs.
Fast? Seems kind of slow. PCIe has become (soon-ish) 32 times faster. Meanwhile, 5G is 32,000 times faster, and it started later. Roughly.
@@LunaticCharade Yea but wireless has more to gain from improvements year over year. But as we know, 5G is also crippled by range issues. Completely different applications.
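The lane-equivalence math a few comments up (4 lanes of 6.0 matching 16 lanes of 4.0) checks out. A quick back-of-the-envelope sketch, assuming the usual rule of thumb that PCIe 1.0 delivers roughly 0.25 GB/s per lane and every generation doubles it (6.0 gets its doubling from PAM4 signaling rather than a faster clock); real usable throughput is a bit lower once encoding and protocol overhead are counted:

```python
def lane_bw_gb_s(gen: int) -> float:
    """Approximate usable bandwidth of one PCIe lane, in GB/s.

    PCIe 1.0 is ~0.25 GB/s per lane and each generation
    roughly doubles that.
    """
    return 0.25 * 2 ** (gen - 1)

def link_bw_gb_s(gen: int, lanes: int) -> float:
    """Total approximate bandwidth of a link with the given lane count."""
    return lane_bw_gb_s(gen) * lanes

# The claim above: a 6.0 x4 link moves as much data as a 4.0 x16 link.
assert link_bw_gb_s(6, 4) == link_bw_gb_s(4, 16)  # both ~32 GB/s
```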
The laugh after "phatest pipe possible" = I don't smoke. Ever.
He might be the unfunniest person I've ever witnessed.
@@HeadsetHistorian It's not a comedy video; it was an awkward laugh at his own silly joke. Lighten tf up
@@HeadsetHistorian chill out poopy head
I don't get it
I desperately want a video of outtakes from Riley talking about the fattest pipe possible
He has some knowledge, I think ;)
0:59 You VS the guy she tells you not to worry about
Okay Jay
Here I was, feeling smug about having a 4.0 PCIe system.
2021: 4.0 system, 3.0 card
Same lol. By the time you've upgraded the rest of your components to match, 5.0 will be mainstream.....
@@carlemildamsbofrederiksen5665 The only component in my computer that uses PCIe 4 is my NVMe drive.
@@NicolSD did you get a pcie 5.0 system?
@@vinylSummer At that point in time, PCIe 5 mobos did not exist yet. But now, I do have one.
This episode felt like Riley was going through puberty all over again
u2
This has already existed for years with RAM. ThioJoe has a video about tripling RAM capacity using this same technique!
what happens when you have a power outage
Love how you got Linus to voiceover Riley's face at 2:50. Excellent lip syncing.
The actual reason I need such a fast pci slot is for my quantum co-processor, and cocomputer.
Groucho Marx returns to TechQuickie.
But I'm curious, what does Riley look like when he takes off the Groucho Marx mask. Maybe its been Linus in disguise all this time.
oh hi stefan, how’s the pc? did it explode?
2:52 I realize now that I can't tell the difference between someone using helium to talk vs Linus talking.
This was a great learning video; I didn't realize how much data error correction we can do all at once now. I'm still on PCIe 2.0, but it's working well for what I do. I get a lot of hand-me-downs and I clean and rebuild them.
This sounds like it's initially for data centre SSDs
It's needed for NICs in the data center. There are already switches with 400-gigabit ports. Switches with 800-gigabit ports are coming soon. However, servers are stuck with PCIe 4.0, which tops out at 256 Gbps.
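To see why 400-gigabit ports are a problem for today's servers, here's the raw x16 line-rate arithmetic (transfer rate in GT/s times 16 lanes; actual usable throughput is lower once encoding and protocol overhead are counted, so this is only a rough upper bound):

```python
# Per-lane raw transfer rates by PCIe generation, in GT/s.
gt_per_s = {3: 8, 4: 16, 5: 32, 6: 64}

# Raw line rate of a full x16 slot, in Gb/s.
x16_gbps = {gen: rate * 16 for gen, rate in gt_per_s.items()}

assert x16_gbps[4] == 256   # can't keep a 400G NIC port busy
assert x16_gbps[5] == 512   # a 400G port fits
assert x16_gbps[6] == 1024  # headroom for 800G
```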
I feel bad for the one-in-a-billion person who buys a self-driving car in the future only to crash the instant he takes his car out, all because a 0 was supposed to be a 1... 😂
He would have crashed either way, with or without a self-driving car. Each year, 1.35 million people are killed on roadways around the world
Fun fact: It's estimated that there have only ever been 100 billion people. Ever. As long as Earth has been around.
@@leftyelomis1824 We should start a car pandemic! That's awful! 1.35m dead per year, that's like so many coronas 😰😰😰
@@Terrafire123 450 billion+ ( estimate ),
...
Statisticians use 100 billion + to mean that the number is below 1 trillion.
@@julesverne4339
www.bbc.com/news/magazine-16870579
Population Reference Bureau estimates 107 billion, not 450 billion. I'm curious where your number of 450 billion comes from.
So, when’s Linus getting ahold of PCIe 6.0 hardware for play, cuz I wanna SEE that vid😅
I merely wonder if PCIe 6 devices would be faster simply by removing bottlenecks in how data is transmitted. We think data is simply pushed through wires as fast as it can go, but there is a lot more to it than that. I am excited to see that as well!
2 years later, and there isn't a single graphics car that even uses PCIe 5.0 🤣
Car?
You call him Riley, I call him Flanders.
FLANNNNNNDEEERRRRRRSSSSSS
That moustache has GOTTA go.
Showing a picture of a super hot redhead and saying "shoving more stuff through the pipe isn't always such a great idea"... well, I must disagree.
No, it was spot on; he meant more stuff to stuff in the pipe. I personally don't want my redhead tag teamed
Redhead? You need to get your eyes checked (or your monitor calibration). That lady's hair is chestnut.
@@noxious89123 not all of us are into Jeffery Star and James Charles to know these things.
2020: Well, I'm gonna have a new PC when PCIe 5.0 arrives
Now: nah, I think I'll wait for PCIe 6.0
I have bad news for you. They started working on 7.0 before 5.0 was finished
@@kaldo_kaldo then ill wait for it
lol
One of the big advantages is having low cost systems with fewer PCIe lanes without giving up baseline performance and versatility. A PCIe 6.0 SSD on just one lane can perform the same as a current cutting edge x4 device. This also opens up the choice for saying "7GB/s covers every scenario I'm likely to have and game developers are still learning to fully exploit it, so I'd rather have a bunch of M.2 slots I can fill over time for more storage than just a couple slots faster whose benefit is debatable." Being able to start out for a lower cost of entry and easily upgrade over time is appealing for much of the market. We like to have all of our stuff installed and ready to go on a moment's notice, even if it's been months since we last used some of it and it could be downloaded and installed again in minutes. Because we can.
The real question is when we'll see pci-e 6.0 on the consumer market though. 2030? 2035?
For now, I'll just wait for plebeian pci-e 5.0 first.
AM6 most likely
5-7 years per gen: that's 2025 for PCIe gen 4 to be available everywhere, 2030 for gen 5, 2035 for gen 6. But that's in desktops, and only if we get faster SSDs and GPUs that can make use of those.
Laptops could still be using gen 3 or 4 for the next 20 years if next-gen stuff isn't efficient enough.
@@harshivpatel6238 every next-gen CPU supports PCIe 4 already
@@harshivpatel6238 What are u even talking about?? Gen 4 is already widespread, Ryzen 3000 Series was released mid 2019!! Gen 5 will be available by the end of this year.
@@FastSloth87 lul what? Gen 4 is still in premium/server products and has nowhere near the market penetration of gen 3.
It's not common, except on AMD desktops and higher.
If you pick a random mid to low end laptop, chances are it'll be gen 3.
There are barely any consumer use cases that use 16x gen 3 bandwidth at all; only gen 4 SSDs make use of it, and they're too expensive to be commonplace.
It'll be 3-4 more years before gen 4 is common enough that you see it everywhere. That's what "widespread" means.
So it's just faster and has a controlling protocol like CAN-Bus already had in 1994? Does not sound like a big deal to me tbh.
Also with very high bandwidth individual lanes we don't have to dedicate large numbers of lanes to all of our devices, meaning consumer processors with their limited lane counts could still adequately serve a large number of devices, even GPUs and high speed storage.
Couldn't we ditch the big x16 slot, and just have PCI-E 6.0 x4 video cards? More space on the motherboard, and more expansion slots (x16 could divide into 4 x4 slots). Or maybe, hopefully, laptops and APUs won't be bottlenecked when using external/dedicated GPUs
You know what else can use a fast data link? The connection between the CPU and the chipset.
Rog reboot girl would be perfect on this show
This is important because that means that some day, we can have video cards sitting on a x8 or a x4 slot just fine. That leaves the remaining lanes for other devices. So we can finally ditch all the USB 2.0 ports, and just have 8-12 USB 3.2 Gen 2 USB ports, a 10Gb ethernet port, 3x PCIe NVMe ports, and 6-10 Sata ports without having to have a PCIe bridge, or being bottlenecked by PCIe lanes hanging off the chipset.
I wonder if we'll eventually have a different architecture where we only need one type of communication bus instead of many (internal like PCIe, external like USB, Ethernet, HDMI, etc.), and maybe allow all devices to communicate with each other without necessarily having to go through PCIe
seems like it would be heavily influenced by RF interference.
Me: since when is there a 5.0 lol
3:24 I _also_ love to sit on my couch and look at my home security system through an iPad app with the words "Home Security System" taking up 15% of the screen
One major benefit of higher PCIe is what it delivers to x1 connections. Motherboards started shipping with 2.5Gbit Ethernet thanks to PCIe 4, if I'm not mistaken. If PCIe 6 becomes common, maybe 10Gbit Ethernet will follow.
PCIe 2 is more than enough for 2.5 GbE and one lane of PCIe 4 is 16 Gbps. 10GbE motherboards already exist, the issue is NIC price, not PCIe.
The most important thing everybody needs to understand about PCIe speeds is that the bus downclocks... PCIe 3 can run at PCIe 1 speeds, so can 2, and 4... and often they do because that capacity is there specifically to make it possible to manufacture shit hardware and have it still work.
Imagine if they allowed cars to be made that could only do 23 miles per hour and let them on the interstate but then destroyed homes for an eighth of a mile on each side to make sure there were enough lanes for the 23 mile per hour cars to function in... welcome to PCIe!
this is a good balance between rileyness and serious talk.
Normally I kinda skip your videos, but this was great. Thanks for the good info and the nice balance in presenting
PCIe 6 is monumental. CXL will allow parallelism between full-power intelligent cards, not even needing the main CPU once the system is initialized. It's long overdue, really. Just as the development of the ASIC sector enabled the creation of non-CPU solutions (for example, a GPU that was better at graphics than any CPU could ever be), the CXL milestone will allow custom board-level application-specific solutions to play together, even sharing memory space, with less or no CPU overhead and delay. And the step from baseband to broadband (PAM4) is a baby step. Much bigger steps to come. The CPU has really been a choke point.
Ahh yes. This is what Sauron has always been waiting for.
First COVID-20 and now PCIe-6.0, when are we getting Riley 2.0?
First we get Linus 2.0.
A.K.A. Madison. 😜
The bigger advantage in the short term is needing fewer lanes. For example, PCIe 4.0 x4 is the same as PCIe 6.0 x1. Suddenly you can attach four times as many high-end SSDs to your system with the same number of lanes.
Your GPU gets the same from an x4 slot that it currently gets from an x16. So put it on x4, and the CPU still has enough lanes for x4 for a dual-port 10GbE card, x4 for a SAS RAID controller, etc etc.
The avg consumer might still get some use out of PCIe 6 someday: trading a 16x 4.0 slot for a 4x 6.0 saves a lot of space on ITX boards.
Can you guys do more tests with cities skylines,
just saying
Actually, bit errors can be corrected in real time using the Reed-Solomon algorithm. The device doesn't need to ask for the packet again unless there are too many errors to correct. This is how ECC memory works.
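For illustration only: PCIe 6.0's actual FEC is a different, more sophisticated code than this, but the core idea the comment above describes, fixing a flipped bit in place without asking for a resend, can be shown with a classic Hamming(7,4) code:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword
    by adding 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

def hamming74_correct(c):
    """Recompute the parity checks; a nonzero syndrome is the exact
    position of a single flipped bit, which we flip back."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # covers positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # covers positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # covers positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 means no error
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
received = list(word)
received[4] ^= 1                         # one bit error on the wire
assert hamming74_correct(received) == word  # fixed, no retransmit needed
```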
I liked the little car sound effects at 0:12 XD
Why does he intonate his voice exactly like linus ? It's really unnerving
I thought we were only on 4, wtf???
We are
4 different signals, low weight error checking and repair, super low error rate...sounds like DNA and I’ve already had that tech for years
When you said “going beyond” I had dbz flashbacks.
This is pcie ascended also called pcie 4.0
“Is he going to become double accended”
And this is to go even further beyond
PCIE 6.00000000000000000000
Consume product, get excited for next product.
PCIE is a standard, not a product. If you don't care about technological advancement why are you watching a tech youtube channel bud? Most people can't afford to hop on bleeding edge tech but we still find it interesting. Stop being so cynical.
@@Epicloser10391 We just barely utilize PCIe 3 and there's already PCIe 6. I can already see the pc building youtubers in the near future claiming that you absolutely need the $400 motherboards with the latest chipset because it supports PCIe 6 instead of PCIe 5, despite the fact that nobody will need that for the next 10 years at least and by that time there will already be PCIe 10.
Nothing wrong with technological advancement, but it's kind of like building a huge hotel in a city of 3 people. Cool, but useless.
@@yourusernamehere You ever think that maybe it's not for you, the home consumer? These standards matter a lot more for science, production, and high end sever equipment. When you're capturing and processing raw uncompressed data or using multiple GPUs for deep learning the bandwidth matters a lot more. PCIE4 costs a premium at the moment, but will be affordable and more standard soon. The reason why they are talking about PCIE6 is because the people creating it are public about their expectations for the near future. That kind of transparency is super important for developers
Don't mean to sound like an asshole, you just have a short-sighted POV. I have a 3-year-old GPU, but new tech is cool if you're into it. You don't have to care, but don't act all high and mighty about it
The *zoom* sound with every animation totally felt like mosquitos lol
I’m annoyed that risers don’t work with PCIe 4 because of the signal loss. Hoping PCIe 6 will fix that and we can get through 4/5 quickly.
I still use pcie 2.0 !!!
Lol that thumbnail, riley is ready for them meat logs in his mouth
First YouTube video I watched in 16 years that made me want to read the credits. That actually means something.
That was very good, well edited presentation! Nice one!
if it's not in stock i don't care.
@3:00
Get the 6 footer, Riley wants a thick rip
Feel like the point has been missed here somewhat. The point of faster PCIe lanes isn't to satisfy some odd requirement with self-driving cars on snowy roads, it's to reduce the need to bundle so many lanes up in a single slot. The point is that we simply won't *need* x16 slots with PCIe 6.0 for any feasible home use case; we may not even need x8 slots.
I think we haven't even saturated 3.0 yet
The top-end cards do, the 3080 and 3090 to be precise. Not by a lot, but they do
Fun fact: only one person can be first
Thats not what they teach in school... Everyone's a winner!
Is it me, am I first
@@chrisbleakley1444 schools are wrong 😈
Fun fact: first is the worst
WE are first comrades!
1:14 No it's not. A 5 GHz WiFi connection is "unstable at long distances" because its carrier wave can't travel as far without losing too much amplitude. In electronics like PCIe buses, the problem has to do with signal timing and interference, whose solutions you nicely explain later in the vid though!
We're going back to analog eventually. I can feel it.
Also, this is going to be a nightmare for board developers. Signal quality and interference are going to play havoc with this.
They're going to require bus isolation and shielding.
2 comments, 3 watches, 10 likes, 0 dislikes
damn, i'm early!
3 watches............. 3 WATCHES?!?!?!?!
no it's not, cause no one can buy a card with it.
I'm running pcie 2.0 and I don't plan on changing that for a long time
It's crazy that PCIe is evolving so fast. We have PCIe 6, but the mainstream is still 3; Intel is still on PCIe 3 and we don't really use PCIe 4.
first
Yes!!!
E
no way i actually got it
@@dynei., you did.
Nox was second, I was 3rd.
He should really shave the stache and grow a beard.
Dude, the speeds that are coming make so much possible! As he was saying self driving cars could be the norm before we even realize it!
"I don't know, it sounds like school" I almost laughed my brains out at that phrase in that voice!
Fun fact: people who are commenting now, haven't watched the video.
Watched at x10 speed like a boss
How creative
Fattest pipe possible.
Creative....
I think the real win with this will be to send a few lanes of PCIE6 into a switching fabric for connecting to a large number of PCIE4 x4 m.2/u.2 SSDs like the Liqid stuff does.
Also: Optane. I could imagine a full length dual-weight card plastered with Optane chips and a fabric and cooling being an Intel storage beast.
Optane is dead 😭
Love these vids. Top notch stuff man
It's not just "my graphics card doesn't use the full 16x PCIe 4.0"; many CPUs/motherboards share that bandwidth with other devices, like a PCIe card with 20-plus NVMe drives
0:57 I counted and whoever made that graphic made sure that 6.0 literally illustrated 32 times the width of the data stream in 1.0.
At this point in time, the only piece of tech that I can think needs to make a jump is USB4 (get USB5 to 80GBPS for ideal eGPU usage). Other than that, maybe tech companies should focus on lowering energy consumption per thread for a few years? Oh and dumping HDMI in favor of modern USB ports.
3:01 , please, no! LMAO, he killed me there
This PAM4 more-than-2-states signaling thing was actually considered in early electronics; they reckoned they could do about 5 or 6 voltage levels, but for extra reliability they went with the boring, limited 2 states. If they had gone with more than 2 early on, computers would be much faster today.
LMAO - stick to flipping burgers
@@godslayer1415 What? I think you replied to the wrong person haha
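For the curious, this is roughly what the PAM4 idea in the thread above looks like: two bits per symbol instead of one, so the same symbol (clock) rate carries twice the data. A toy sketch only; real transceivers deal with noisy analog levels and equalization, not clean integers. The Gray mapping is the commonly used choice, since a misread of one level off then corrupts only a single bit:

```python
# Gray-coded mapping of bit pairs to the 4 PAM4 levels.
GRAY = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
UNGRAY = {level: bits for bits, level in GRAY.items()}

def pam4_modulate(bits):
    """Pack a bitstream into PAM4 symbols, 2 bits per symbol."""
    assert len(bits) % 2 == 0
    return [GRAY[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_demodulate(levels):
    """Recover the original bitstream from a list of PAM4 levels."""
    out = []
    for level in levels:
        out.extend(UNGRAY[level])
    return out

data = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_modulate(data)       # 4 symbols instead of 8 NRZ symbols
assert len(symbols) == len(data) // 2
assert pam4_demodulate(symbols) == data
```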
So basically this PCIe versioning thing is "future-proof", because most of the devices that use it can't saturate the available bandwidth just yet. Also, 3.0 is still a really fast version; many modern GPUs can't use all the available bandwidth because they don't really need to. Maybe we'll see a Liqid storage card popping up at a whopping 100 GB/s+ in a few years? Because that's probably the only thing I could see for now.
The error correction description is quite wrong, actually. The extra bits allow the bitstream to correct itself. It is "lightweight" because the error correction is embedded in the data itself, using FEC functions.
I want more power through the PCIe slot. It would be dope to power something like a 200W GPU with it.
No but Nvidia's gpu cables are so bEaUtIfUl
I'm sorry but 3:00 is quietly one of the greatest LMG moments ever
Love it Riley. Please keep making videos.
Year 2022 (AND A HALF) and we still haven't gotten any PCIe 5.0 PCs...
AND YET THEY'RE TALKING ABOUT PCIE 7.0 NOW!
Smooth transition to the advertisement. I like this style.
Very nice production quality!
PCIe 6 is probably mostly for composable infrastructure in modern cloud datacenters, something Liqid is already doing atm, but with PCIe 4.0 currently...