The End of Hyper-Threading
- Added Jul 4, 2024
- Get 20% off DeleteMe US consumer plans when you go to joindeleteme.com/techquickie and use promo code Techquickie at checkout.
DeleteMe International Plans: international.joindeleteme.com/
Intel looks to be ditching their long-standing Hyper-Threading feature...but why?
Leave a reply with your requests for future episodes.
► GET MERCH: lttstore.com
► GET A VPN: www.piavpn.com/TechQuickie
► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
► SPONSORS, AFFILIATES, AND PARTNERS: lmg.gg/partners
FOLLOW US ELSEWHERE
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech
my grans sewing machine excels at normal threading
This is the best comment rofl
smart
Your gran is the OG overlocker.
All sewing machines use 2 threads at once. One fed from the top and another from the bottom.
So even your gran uses multithreading.
@@Eoin-B Wait until he finds out some sewing machines support multi-needles.
Hyper Threading wasn't only introduced in a single core CPU, but one that actually really needed it. The Pentium 4 had a massively long pipeline, which made it clock far higher than other CPUs, but also increased branch misprediction penalties. Every time it mispredicted a branch instruction, the P4 would need at the most 20 (Willamette and Northwood cores) to 31 (Prescott and Cedar Mill cores) clock cycles to refill the pipeline. A huge waste of time.
With Hyper Threading, the P4 could have two instruction flows running in the pipeline. If one of them stalled due to mispredictions, it could easily switch to the other and process it while it waits for the stalled flow to load up again.
The performance gains with SMT aren't as big as true parallel computing, but it makes a considerable difference in deeply pipelined architectures. It does in fact increase power consumption, though.
Edit: by the way, this is exactly the reason you don't see HT in Intel's E cores, for instance. Their pipelines are shorter than P cores', so HT wouldn't make much of a difference in performance (sometimes it can actually hurt performance), and would increase its die area and power consumption.
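A toy way to see the deep-pipeline argument above in numbers. Everything here is illustrative: the 0.5 "bubble coverage" factor for SMT is my own assumption, not a measured figure, and the depths are just the Prescott number from the comment vs. a made-up short pipeline.

```python
def wasted_cycles(pipeline_depth, branches, mispredict_rate, smt=False):
    # Each mispredicted branch costs roughly `pipeline_depth` cycles to
    # refill the pipeline. With SMT, assume the sibling thread can cover
    # about half of that bubble (illustrative assumption, not a real number).
    bubbles = branches * mispredict_rate * pipeline_depth
    return bubbles * (0.5 if smt else 1.0)

# 1M branches at a 5% mispredict rate:
deep      = wasted_cycles(31, 1_000_000, 0.05)        # Prescott-like: 1,550,000
deep_smt  = wasted_cycles(31, 1_000_000, 0.05, True)  # SMT recovers 775,000
short     = wasted_cycles(12, 1_000_000, 0.05)        # short pipeline: 600,000
short_smt = wasted_cycles(12, 1_000_000, 0.05, True)  # SMT recovers only 300,000
```

Same coverage factor in both cases, but the absolute cycles recovered shrink with pipeline depth — which is exactly the point about HT mattering less on short-pipeline E-cores.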
About to comment the same thing. Honestly the research effort on this video seems to be poor…
@@k22kk22k I've commented just for historical reasons and to properly explain the usefulness of Hyper Threading. I think that, for the purposes of this video, it's okay. Yes, HT takes power. Yes, Intel thinks having a bunch of E cores can be more beneficial than having P cores with HT. They delivered their point.
I've seen some misconceptions about Hyper Threading in the comments here, so I felt it would be interesting to clarify some technical aspects.
@@yukinagato1573Thanks for replying. I see your point.
What prompted my original post is that the video doesn't head off the typical misconceptions in advance, and only gives rough reasoning for not implementing HT (hence so many people talking about the why).
My intention was just to share my impression, but maybe I should have written it more clearly in the first place!
@@k22kk22k their effort has always been low
Well put. From what I know, the ultimate limiting factor is memory access, so even if the pipeline is short (e.g. 5 cycles) and access to memory is long (e.g. 40 cycles), a missed branch will be stalled until the memory is read (40 cycles in this example), not just the time it takes to refill the pipeline. So I think the main reasons for not including HT in E cores were power and die space, plus the fact that with E cores you should already have enough cores, so no need for the additional complexity.
Hold up, it's a 20% increase in power consumption for a 30% boost in performance... Wouldn't that mean that it's actually *better* to have the feature enabled than not, in places like data centers?
Hence why a) AMD kept SMT even on their dense "c" cores for high-core-count CPUs (Bergamo), and b) Intel will keep HT for their upcoming P-core Xeons as well. Only their high-core-count "Forest" lineup goes without HT, since it's based entirely on E-cores from the get-go and you can't just bolt HT on at will.
It is, that's also why Intel will keep Hyper Threading for their Xeon server CPUs.
Data centers are exactly the kind of place where Hyperthreading hinders performance rather than increasing it since they tend to keep their CPUs at pretty much 100% utilisation all the time. While it can give some performance uplift at a heavy power penalty (14900KS drawing 300 W when running all-core Cinebench) when the CPU is fully loaded, it's really meant for a CPU sitting at less than 50% utilisation where the limiting factor for performance is not raw computation speed but rather how efficiently the different threads can access the actual compute parts of the cores to have their computation needs met in a timely manner.
I think he means that you get that trade-off for CPUs with one kind of core. But now that we have efficiency cores, we can get even better gains by replacing some high-power cores with more efficiency cores. Once we do that, then we can get rid of hyperthreading and let stuff that needs the power get a core all to itself.
Hyperthreading has always been a 1.5-core design, and it only gives an advantage if your task matches that extra 0.5 of a core provided by hyperthreading. This is why multicore-aware operating systems don't use it until they need to. As multicore, and especially asymmetric multicore, has taken off, and power usage and cooling have become more important, the bad tradeoffs don't really work that well anymore.
the failure of the closest cache to the core to scale well makes the cost of halving it for hyperthreading even worse. being able to dump hyperthreading on two cores and get an extra core with full cache on all three cores makes a lot more sense, even before you start underclocking them to get even less power usage.
it has always been a case of marketing hype for the average user, who does not use 100 percent of their processing power anyway.
30% more performance for 20% more power sounds like an amazing deal, or did somebody mess up the numbers? usually you reach very diminishing returns with more power vs performance.
It's 30% better performance for 20% more power in a richly threaded application, but hyperthreading can lead to ever-so-slightly worse performance and power draw in single-threaded applications, and an E-core is roughly half as strong as a P-core while drawing about 25% of the power. So 50/25 > 30/20.
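Taking this thread's rough numbers at face value (napkin figures from the comments, not benchmarks), the perf-per-extra-watt comparison is just:

```python
# +30% performance for +20% power with HT enabled:
ht_gain_per_watt = 30 / 20      # 1.5

# An E-core: ~50% of a P-core's performance at ~25% of its power:
ecore_gain_per_watt = 50 / 25   # 2.0

print(ecore_gain_per_watt > ht_gain_per_watt)  # E-cores win on this metric
```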
Intel isn't ditching SMT fully. Lunar Lake doesn't have it, but the desktop and server CPUs will have SMT. AMD too has sold CPUs with SMT disabled. I own a 4700U laptop which had great light-workload efficiency for its time.
Our great new chocolate recipe has all the same great taste of our original, same great price, just now with 50% LESS fat! **
**original pack@150g/new pack@75g
I'll be interested when these get a thorough testing 😁
@@SirMo I was going to mention them not ditching SMT fully as well. Intel specifically responded to this and said they are not getting rid of it entirely. They are dumping SMT from their server chips, as was mentioned in this video about energy-constrained spaces, which servers generally are now. Intel is actually leaning more into full E-core-only CPUs for data centers.
You can fit 2 E-cores in the space of a P-core, and 4 E-cores in the space of a P-core with SMT. Most server applications benefit more from extra cores to run things in parallel than from a few really fast cores.
Desktop CPUs on the other hand need P-cores for things that really need a fast thread, like games. They can also consume large amounts of power. The hyper-threads still work well here since power consumption isn't as much of a concern. They also work better than in a mobile or server setting, because in those environments the P-cores still have fairly tight power constraints, which also restricts hyperthreading speed.
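Using the area ratios from this comment plus the ~1.3x HT scaling figure mentioned earlier in the thread (all illustrative numbers, none of them measured):

```python
P_PERF = 1.0      # one P-core, HT off (baseline)
P_HT_PERF = 1.3   # one P-core, HT on (~30% uplift per the thread)
E_PERF = 0.5      # one E-core, roughly half a P-core

# "4 E-cores fit in the space of a P-core with SMT":
perf_in_p_ht_area = 4 * E_PERF   # 2.0 vs 1.3 -> E-cores win on throughput/area

# "2 E-cores fit in the space of a P-core (no SMT)":
perf_in_p_area = 2 * E_PERF      # 1.0 vs 1.0 -> a wash, before power savings
```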
@@SirMo Any info I've seen says that Arrow Lake on Desktop will NOT have SMT. Ofc rumors from any source are unreliable, so we don't know for sure.
I've heard that Intel is going to replace the Celeron with a newer, less expensive CPU. It's going to be called the Intel Moron, and it's targeted at consumers that don't know any better.
😂
Yooooooo 💀
😂😂
Spot on
Intel giveth hyper threading, Intel taketh hyper threading away
Yet SMT has been and will continue to be better than HT. Intel couldn't get it working for desktop in time, seeing as their Arrow Lake server chips, which come out after desktop, will have HT.
They didn't get rid of it on purpose, they are just bad at making CPUs now that everyone has left the sinking ship.
@@Azureskies01 dude, AMD's SMT is based on a licensed version of HT; the only reason they're called different things is that Hyper-Threading is trademarked
@@Azureskies01 they're the same 💀
SMT/HT implementations are pretty dependent on the underlying hardware.
I could break the first-gen HT (in the Pentium 4). Then it only showed up again in the 1st-gen i3s, then 2nd gen, which was also a different kind of Hyper-Threading. Then it was updated again in Skylake, and the same has kept going all the way up till now.
Zen SMT is definitely different
@@Azureskies01 What is your IQ?
So... 30% more compute power for 20% more electric power consumption. It's at least a 10% win. And Intel is downplaying it, because they want to sell the "we got rid of it" story. It's more like 40% more compute power in many programs for ~15-18% more power consumption. Even Intel told you that a few years back, because they were proud of the efficiency of hyperthreading. But now they lie about it, because now it's inconvenient to admit. And no, the E-cores won't give back any of that. Don't get me wrong, the new chips may perform well overall. But they could perform even better in some tasks.
That sewing machine joke took me longer to get than it should have, lol 😂
After upgrading to a 14700K (I wanted the experience of upgrading a CPU on a platform that was ending), I did a bit of Cinebench-marking to undervolt the CPU and limit its temperatures to a reasonable level. During that process, I noticed that, by disabling hyperthreading, my Cinebench runs lost only 1000 points, but my CPU used 80 fewer watts. Meanwhile, in terms of real-world performance, I have noticed no change. If nothing else, there's a grain of truth to Intel's hyperthreading claim. I'm not about to speculate how much truth there is, though.
Interesting, as I only tried disabling it at the OS level (as my 14700K is in my home server, so I didn't want to shut it down). I will have to try doing it in the BIOS properly.
Cheap GPU u used ?
SMT's primary purpose is to max out the arithmetic units on a CPU, which a single thread is unable to do even if out-of-order execution is very sophisticated. But doing so contributes to increased power density, and we know Intel has a problem with that. They are probably trying to reach higher frequencies by decreasing power density.
Actually, the reason is more so deeply pipelined architectures. Maxing out arithmetic units is not the cause, but consequence. If you have a long pipeline, you want to keep it as busy as possible, even when it stalls. With HT, when it does stall, you can occupy all the functional units that would be otherwise idle, waiting for the pipeline to fill up again.
Not using all the functional units is inefficient, of course, especially in Intel's case where they have like four of each (in a quadruple-issue architecture). But it's generally not a problem if your pipeline is short, since they're going to be occupied again soon. It does become a problem when the pipeline is like 18 stages long, though.
@@yukinagato1573 I think it is both. One core may not utilize all the units even if the pipeline is filled up. Many CPUs try to prevent that by running instructions out of order. But I can well imagine that running instructions out of order also increases the risk of losing progress on jump mispredictions.
Interesting perspective on Intel ditching hyperthreading in favor of its hybrid chip design. I'm curious to see how Intel performs without one of its hallmark features.
But not all their chips, just the high-end ones with a lot of E-cores.
2:08 skip ad
"delete me .." says hyper threading to Intel on his dying breath
Just an FYI: for SMT (hyperthreading) to work, each of the 2 threads needs to keep the data it works on in cache, but that cache is shared between the 2 threads. Thus, with this technology, the working cache size for 1 thread is smaller than the one stated in the specifications. This is one of the reasons why disabling it might boost performance in some applications like games; as seen with X3D, cache is important for games.
But the e cores also share L3 cache with the p cores
Cache limitations can be a problem, but with HT you still have the benefit of switching threads if one of them stalls in the pipeline. One other reason why having HT enabled can lead to lower performance is overhead. If you end up switching threads too much, the CPU spends its time on thread-switching instructions instead of doing actual work. Especially in poorly optimized implementations, this can eat up a lot of performance.
@@guiorgy X3D is important because AMD's shitty chiplet design imposes high-latency access to RAM
@@bigben3019 L3 cache is always shared, I was mainly talking about L1 and L2, which is separate for each core, but shared between 2 threads if SMT is enabled
@@overlord10104 The chiplet design does increase latency, though they have managed to reduce the penalty quite a bit. More importantly, more cache would help Intel in games just as much, just check the videos by Hardware Unboxed, where they concluded that the main performance improvement between an i3, i5, i7 and i9 is the increased cache.
HyperThreading is also part of the Spectre/Meltdown nightmare vulnerabilities...
Speculative Execution was an issue that wasn’t exclusive to hyper threading. It was a vulnerability in ALL multi threaded CPUs. Hence the name Spectre (for Intel CPUs) & Meltdown (for AMD CPUs).
@@creeperz12345 Wrong. Spectre affects both Intel and AMD, while Meltdown was just for Intel (and some ARM).
@@iiisaac1312 Yea you’re right, that’s my bad. Still was right about the speculative execution not being exclusive to hyper threading though.
Bud, that's why I said _part of._ HT going out doesn't mitigate Spectre/Meltdown by itself, but it removes one _huge_ headache of that, because you were running two instruction pipelines through the same bloody core.
@@stephan553 slick comment edit but nt
You forgot to mention that removing HT also mitigates vulnerabilities like Spectre and Meltdown, allowing Intel to remove some of the mitigation circuitry.
The way the video frames the story is just wrong:
You first show that HT is power-efficient, especially for datacentres, and then claim it is a problem for those very same centres? No, that is just not the case, which is also the reason why Intel is NOT getting rid of HT for those sectors.
The explanation of OS thread schedulers is also wrong: tasks are not scheduled onto the same core for power-efficiency reasons but for performance. If you have 2 cores with HT, you can either run 2 threads on one core and get 130% performance, or run 1 thread on each of the 2 cores and get 200% - which is a looooot more.
HT is here to stay because it is better at handling different situations. With HT, when a thread is stalled it does not automatically stall the core. So for branch-heavy or data-dependent programs it can offer significant benefits in terms of throughput; we have seen as high as 60% scaling with HT. Of course, having more cores that are more efficient is in many scenarios what you want, but a simpler, uniform core architecture also has its benefits.
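The spread-then-pack scheduler policy described above can be sketched as a toy throughput model. The 1.3x figure is the thread's rough number, and the model ignores frequency, cache, and memory effects entirely:

```python
def throughput(threads, cores, ht_scaling=1.3):
    # Toy model: a physical core running one thread contributes 1.0 units;
    # a core running two SMT threads contributes `ht_scaling` units total.
    # Assumes the scheduler spreads across physical cores before doubling up.
    single = min(threads, cores)                   # cores with at least 1 thread
    doubled = max(0, min(threads - cores, cores))  # cores carrying 2 threads
    return (single - doubled) * 1.0 + doubled * ht_scaling

print(throughput(2, 2))  # spread across both cores: 2.0 (the "200%" case)
print(throughput(4, 2))  # every core doubled up: 2.6
```

Packing both threads onto one core would only give 1.3 here, which is why schedulers prefer idle physical cores over hyper-threads.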
Meanwhile, AMD will continue to make SMT cores in their processors, and will continue beating intel while intel and microsoft mutually struggle with core scheduling.
From what I remember, Hyperthreading was made to combat a design shortcoming of the Pentium 4.
It was, but it also made the chips run hot.
Ridiculously long pipeline, that is.
SMT has been around for a lot longer than "Hyperthreading". Hyperthreading is just Intel's marketing name for SMT when they implemented it in the Pentium 4.
AMD & ARM both forced Intel to become competitive after decades of being a monopoly.
Disabling HT is not a competitive choice
Maybe it's "less extra threads and more cores" thing.
Or just an efficiency thing that will only benefit battery-powered devices.
Either or.
@@BlueEyedVibeChecker Disabling HT is just a way to lose performance, and anyway, disabling HT is pointless. The whole point of HT is to utilize the core more efficiently. You know, at any given time a program can't utilize all parts of a core, so giving those parts of the core to a second thread is a clever idea. And it takes less space on the die than adding a small core.
Overall it's a very bad decision
ARM CPU PCs are gonna fail again (and they deserve it)
Arm did nothing 😂
2077 called. They want their hyperthreading sewing machine back.
It won't be missed. Hyper-threading mattered a lot when there were only two or four cores. Now that there are 16 cores, I would rather have consistent performance per thread.
Those Intel "efficiency" cores aren't really for battery life but just for multi core performance. The latest Intel mobile chip does seem to actually do well low power with the tile system, so it can actually save battery life, but it is just for a specific tile and not all e-cores.
Hyperthreading is also a serious security problem
The most worrying part should be "while consuming the same amount of power" 💀
It's not like they didn't have ways to lower power consumption without E-cores. I have a CPU from 2017, and it underclocks itself when not in use. Very efficient; at 1-2% utilization it runs at 0.8 GHz even though base is 3.9 GHz.
Hyperthreading was a workaround for the piss poor Pentium 4 design.
Pentium 4 is legendary along with Intel core duo
@@hendrx
Pentium 4 was so bad that Intel had to base its successor on an older model.
@@kjakobsen Pentium 4 was so bad that I had a 2.4Ghz P4 and an 800Mhz P3 actually felt faster for general OS responsiveness and web browsing.
Removing HyperThreading? The 4c4t CPU sloppening is back on the menu boys!
But... if the scheduler only turns HT on upon saturation, we don't really incur the energy cost unless necessary, and by then we are trading +20% power draw for +30% throughput, per the very numbers given. Furthermore, what is the upfront cost of having HT vs the equivalent compute in E-cores? We can't determine whether trading it for more E-cores is actually better if we don't know that.
Actually, we do. HT is still there, implemented; the transistors are working just like everything else. It might not consume the full 20% extra power draw, but there's still an "idle power" cost. Also, implementing HT makes your processor bigger, so it's kind of wasted silicon if you don't use it.
This was a great piece of content, great job guys. Covering info in a way to inform consumers is such a good thing.
Thanks for the news!
Biggest problem: most goddamn developers still don't know how to program in threaded environments properly, even today. Basic SMP coding is also rarely done well.
It also is freakishly hard to do really correctly - having it readable and maintainable, performant, and bug-free is rather hard. But at least going to 4-8 threads for games is usually easy, as there are clearly separated tasks (like resource loading, input handling, AI, etc).
I'd settle for just Epic learning multithreading from id or CDPR (or The Coalition?), since half of AAA is going UE5 now anyway.
Well, my friend, in a real user's computer we have MANY programs running in parallel: web browser, calendar, email, games, video, etc.
For specialized operations such as video streaming, decoding, and encoding, your program has to think in parallel or delegate the work to the GPU.
Most software is single threaded simply because for most tasks, the order of execution matters. You can't eat bread without first walking to the store, buying bread and taking it home. It is impossible to do all 3 at once.
@@roboko6618 "Most software is single threaded simply because for most tasks, the order of execution matters"
No, because most software is not in the tiny group that needs to be purely sequential.
And just to show that your analogy falls flat on its face:
To prepare a nice sandwich you can go to the store and buy bread, salad, tomatoes, cheese and onions. And for preparations you can cut the onions, tomatoes and bread at the same time while also washing the salad. Heck, you could have 5 people do everything in parallel and the only 2 points where it is sequential is when you start the whole thing, and at the end when everything is combined.
That is how parallelism works.
The reason most software is not multithreaded is the same reason as for making a sandwich: coordinating all the stuff takes work and effort. There is little reason for a simple word processor to be multithreaded given that it is interfaced by a human. You won't be writing your email any faster just because the mail program is using 127 threads.
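The sandwich example maps directly onto a thread pool: the independent prep steps fan out, and only the final assembly is sequential. (The ingredient names and the `prep` helper are obviously made up for illustration.)

```python
from concurrent.futures import ThreadPoolExecutor

def prep(ingredient: str) -> str:
    # Stand-in for an independent unit of work (cutting, washing, ...).
    return f"prepped {ingredient}"

ingredients = ["bread", "salad", "tomatoes", "cheese", "onions"]

# Fan out: each prep step can run on its own worker thread.
with ThreadPoolExecutor(max_workers=5) as pool:
    prepped = list(pool.map(prep, ingredients))  # map preserves input order

# The one truly sequential step: combining everything at the end.
sandwich = " + ".join(prepped)
```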
How do I know Intcel is trash? Pix4D Mapper...
While an i9-14900K crashes with mere hundreds of images, a humble 5600X didn't crash AND completed the same task...
What about virtualization?
What's better: giving a VM 4 E-cores (because you have a lot of them), or giving it only 1 P-core with HT?
In practice, Intel wants to reduce p-core space to put more e-cores on the chip.
It took me too long to understand the sewing machine joke.
Hyper threading doesn’t run two threads at once. It has two sets of registers so when it switches tasks it doesn’t waste any time loading up the registers because it can load it up while the other task is running.
It technically does run two threads simultaneously because of the way that the back end of the CPU works. With out-of-order execution, the CPU is running several instructions at once and splitting up the input stream into multiple instructions it can run in parallel without breaking instruction dependencies. Often, the backend of the CPU can’t be completely filled with just one thread, so pulling instructions from two threads simultaneously reduces resulting pipeline bubbles.
Modern CPUs can decode 4-8 instructions in parallel per cycle (depending on the architecture), and can usually dispatch even more than that when the instruction flow permits. How full the pipeline actually gets depends on how many instructions the CPU can find to dispatch while maintaining instruction dependencies. Modern CPU designs try to utilize all of the resources of the core as much as possible, but of course, not all instruction streams are ideal in that regard (hence tricks like this to exploit a little more performance).
@photoniccannon2117 one core can only do one calculation at a time. The CPU have multiple components that run in parallel to each others, and that can include multiple cores. Hyper-threading allows one core to become 2 virtual cores by switching between sets of instructions.
@@tonymouannes They don’t switch, they’re interleaving instructions. Both threads are in fact running instructions simultaneously.
Cores on x86 have been able to run multiple instructions in parallel since the 1990s. They aren't just executing one instruction per cycle, they're loading up a whole bunch of instructions in a queue, figuring out which ones can be run in parallel without breaking instruction dependencies, and then dispatching several at once. It's incredibly sophisticated (and is a large part of what allows modern cores to be so much faster than older designs.)
@@tonymouannes I think this is just a confusion in the terminology. "Running two threads at once" doesn't mean doing two calculations at once.
1:14 So Intel's idea is to improve battery life by 20% while decreasing performance to 70%?
Now I may just be a humble country PC enthusiast, but I say, I SAY, it sounds to me like Intel is leaving an additional 10% gain on the table by not implementing hyperthreading on the P+E core design.
Don't look up, that ain't rain dripping onto your head
Just make a toggleable switch (in BIOS) or tie it to the power modes:
Power save: no hyperthreading
Default: hyperthreading on E-cores
Gaming: hyperthreading on P-cores
My P4 3.2 Prescott has HT and still works today.
the space heater chip
Wow, I always thought the Prescott line didn't have HT. I still have my Pentium 4 Northwood with HT and it works fine as well.
found a pentium 4 3.0 prescott ht desktop from the recycling centre and it works lol
@@BeautifulAngelBlossom it substituted for the heating in my college suite back in the day 😀
I thought Intel was ditching CPUs, seeing how well they destroy them.
So it's a question of whether they can produce FUNCTIONAL cpus, rather than hyperthreading vs not.
At some point, with so many real cores on chips these days, hyperthreading offers diminishing returns. The process scheduling becomes an issue in itself.
RAM is getting faster and cache is getting cheaper, so having a "standby" thread in case of a memory-access stall is less rewarding. Also, when the pipeline stalls, its ALU/SIMD units stop clocking in data and thus stop generating heat. With modern processors vastly power-limited, saving that power, letting the core stall, and allocating the power budget to other cores doesn't negatively impact performance that much.
IBM seeing this justification while Power is more efficient for being RISC and has up to 8 threads per core: 🤣
Me with the core 2 duo laptop
wholesome
Nice
Ha, me too but I don't use it. It's actually a 'mobile workstation' from a time when a core2 duo was screaming quick. It's so heavy that I keep it under the bed instead of a baseball bat in case of intruders 😂
My core 2 duo from college died a few years back.
I can’t even get the Core 2 Duo in my Mid 2007 MacBook to do video acceleration under Linux 😭
Man, I haven’t heard or thought about hyperthreading since the mid-2000s when I upgraded my CPU. It was a marvel. Then getting a Core 2 Duo with hyperthreading. It was truly the future 😂😂😂
Windows Server Subscriptions 📈
great info....
When are we getting Backside Power Delivery?
Intel's shift from hyperthreading to a hybrid chip-focused design seems like a strategic play in the long-term game. Streamlining energy efficiency while not compromising the overall performance presents a win-win scenario.
HT doesn't process two threads at one time
Sounds more like a gimmick for the sake of matching ARM performance per watt. Given that Snapdragons have shown a far more stable product in their first iteration, Intel has to think of such gimmicks now to stay relevant.
With modern CPUs Core count and Pipeline length, I can understand why we don't need it on desktop. But I think in servers it may still be useful.
Never been a fan of SMT/HT. It made sense at the time, because it required less die space than a second CPU core, and the deep pipelines on the P4 often left various execution units unused, so HT made additional use of that hardware.
But its time has passed.
I clicked on the video not due to the thumbnail change, but due to finally having time to watch it.
Just wanted you to know ❤
Pentium 4 seems like recent dayz... the HEAT IS ON
2:20 "It doesn't invoke hyper-threading unti ALL cores, both P and E cores, have been populated">> I don't think this is true for AMD's SMT.
Going from 1 to 2 threads was a game changer, like the difference between an HDD and an SSD. RIP HT.
If Intel is going to be removing hyperthreading it actually creates a market for Core-X to return. Right now we have Xeon 2400, which I think is now going to become a must for anybody who needs multi threaded workloads.
There is one use of hyperthreading that's barely talked about but has a massive performance impact: terribly made software that busy-waits on disk or some other slow resource, tying up a core while actually doing nothing. Hyperthreading can make those wasted resources available to other programs, and that is not a rare occurrence at all, because non-performance-critical code is often terribly made and can hog resources.
If it's hogging resources then it's by definition performance critical.
@@shanent5793 Sadly, most developers don't see it that way. It is hammered into their minds that "premature optimization is the root of all evil", and most will make it their mantra without understanding the full context of the quote. In practice it means most won't even think about performance in the slightest until it becomes a huge problem, since whatever is being developed only ever runs alone on the developer's computer (which is often on the powerful side of things). Then in the real world, when you try to run several of those poorly optimized things at the same time (and likely some of them leave resident crap running all the time), it gets quite noticeable.
It's not that I like that things are done this way, but they often are (I would know; I was just scolded for making something too optimized and too versatile by an incompetent boss who can't wrap their head around the fact that if a test takes me 3 minutes instead of 3 hours, optimization saves me time instead of wasting it).
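A quick way to see why a core tied up in slow waits wastes capacity that other threads could soak up. Here `time.sleep` stands in for the slow disk wait (a true busy-wait would spin and burn the core, which is exactly the case where HT's spare execution slots help); the point is just that independent waits overlap instead of adding up:

```python
import threading
import time

def slow_io():
    # Stand-in for waiting on a slow disk or other blocked resource.
    time.sleep(0.2)

start = time.monotonic()
workers = [threading.Thread(target=slow_io) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
elapsed = time.monotonic() - start
# Four 0.2 s waits overlap: wall time is ~0.2 s, not 0.8 s.
```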
I often wondered whether hyperthreading could be much more effective if applications were specifically designed for it. (This would probably also require new APIs in the operating system to let the application control whether to use it.) Theoretically, two threads on the same core can communicate orders of magnitude more efficiently than two other threads because they share the same L1 cache.
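On Linux you can already experiment with this idea by pinning a process to two specific logical CPUs. Whether CPUs 0 and 1 are actually SMT siblings of the same physical core depends on the machine (check /sys/devices/system/cpu/cpu0/topology/thread_siblings_list), so treat this as a sketch, not a recipe:

```python
import os

# os.sched_setaffinity is Linux-only; guard so the sketch degrades
# gracefully elsewhere, and only pin if CPUs 0 and 1 are available to us.
if hasattr(os, "sched_setaffinity") and {0, 1} <= os.sched_getaffinity(0):
    os.sched_setaffinity(0, {0, 1})  # pin this process to logical CPUs 0 and 1
    print(os.sched_getaffinity(0))   # the new CPU set for this process
```

If 0 and 1 are siblings, two threads of this process share that core's L1/L2, which is what would make the same-core communication cheap.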
Considering that 1-2 cores was still common when it was introduced, it was a cheap way to increase thread count. Now chips have a lot more actual cores, so it makes sense.
Always love a quickie
How can I trust DeleteMe if I have to give it my personal data first?
I'm a bit curious: is it possible that there are programs/games that were made to rely on hyperthreading to such an extent that they'd have problems with this?
Am I the only one who finds it weird hearing Intel is concerned about power efficiency?
As an automations dev, performance tends to matter more for my case, but the average consumer often cares more about efficiency, and battery life.
Interesting idea, but too bad that so many of the latest gen Intels are failing...
And honestly, I think ARM is going to kick them in the ass with that "do more with less power/heat" thing anyway.
Haven't we been here before with Intel ditching hyperthreading, only to bring it back a generation or two later?
20% more power usage versus 200% big brain move.
I built my first desktop with a 9700K, a processor that didn't have hyperthreading. The only game I play that seems to use it a lot is Elite Dangerous at surface settlements, which makes my current 5900X stretch its legs. I agree that just adding more E-Cores is probably better.
Also, DeleteMe doesn't delete you, it requests the current owner of the dataset featuring some of your data to no longer index it for only so long as they're the current owner, and it resets upon ownership transfer.
Who will delete the data you submit to DeleteMe :D?
Basically you feed another database with personal data; not very privacy-friendly.
Desperate move to compete better with amd on power. Will be interesting to see where overall performance ends up.
The last joke was quite a stretch... My yarn broke Badum-Tss
... And securing hyper threads is a performance hit
Awwww power cores and efficiency cores. Intel finally caught up to Apple and Qualcomm.
I wish desktop CPUs were still used in modern gaming laptops like back in 2019 with the MSI GT76 Titan, Alienware Area 51m, and Origin PC EON-17X. The MSI GT76 Titan and the Alienware Area 51m both used the 9th-gen Intel Core i9-9900K desktop CPU, and the Origin PC EON-17X used the 11th-gen Intel Core i9-11900K desktop CPU. Linus even made a video about the Origin PC EON-17X, since it had the i9-11900K, a 4K display, the 16 GB VRAM version of the 3080, 2 user-replaceable and user-removable batteries (each battery being 280 watts with a 99 watt power supply), and 4 NVMe SSD slots. That EON-17X has all the useful features that are missing in modern laptops.
I wish Framework laptops used a latch to access the battery compartment, like the EON-17X's latch on the bottom of the laptop. You used to be able to buy spare batteries for the 2019 EON-17X models; the newer, more modern EON-17X gaming laptops don't have user-replaceable, user-removable batteries, and they don't sell spare batteries. I've gone through laptop batteries like food at a buffet. User-replaceable, removable batteries make laptops last longer, instead of the soldered garbage in newer laptops that also have fewer ports.
The more modern something becomes, the more planned obsolescence takes over. It's already been normalized, like non-removable batteries in Bluetooth earbuds and headphones, and smartphones used to have removable batteries like the Samsung Galaxy Note 4 - even the Galaxy S5 and the Galaxy Active line of phones managed water resistance with a removable battery.
All the corporations and manufacturers are money-hungry leeches that enjoy ripping off consumers while lining their own pockets, like they're the human versions of ATM machines.
Hybrid chip design has coexisted with hyper-threading for a couple of years now. I'd guess the real reason is instability when you mix cores running two threads with cores running one at the same time, and that's why they're going all single-thread.
When you have 32 cores and 16 of them are idling under most normal workloads, what's the point of HT?
Well, if you have 16 cores, you're supposed to run workloads that use them as 32 threads (or else you're losing money).
And if you watched the video, HT gives us up to 30% more performance, so you spend roughly a quarter less time doing that workload. Time is money.
Went from a 7100U laptop to a 7940HS laptop. I think I'll be good for another decade.
Makes sense to me. Simpler architecture and the opportunity to put power where it's needed. As long as the CPU's smart enough to keep tasks like the OS and web browsers on E-cores, more intensive tasks could be assigned their very own P-cores without being bottlenecked by some random process stealing performance on another thread. Looking forward to seeing how well their CPUs run in the future.
Core scheduling is completely controlled by the OS. The only thing the CPU can do is report relevant information that the OS can use to optimize scheduling.
There are also significant security issues that have been discovered as a consequence of supporting hyperthreading in the core architecture
NO. You guys are mixing up HT with out-of-order execution. Disabling HT on a processor with the Meltdown and Spectre flaws does not protect you, because your processor still does out-of-order execution and other code can still access that data.
@@AlexeiDimitri didn't expect to be nitpicked on the enabling word; is this more to your satisfaction?
@@David-oc8yt Oh, did you hurt your soft heart, or is it that difficult to be exposed as wrong?
Before saying ANYTHING on the net, it's wise to search first so you don't talk bullshit and get exposed like you did.
@@David-oc8yt Oh, the hurt one is you.
Go back to school.
Did you just put an ad in a less than 4 min video???
Are you guys going to cover the Samsung scandal of technicians deliberately slashing screens to void warranties?
I'm sure this has nothing to do with all the security vulnerabilities (Spectre, Meltdown, etc.) that, while hyper-threading itself didn't enable them, were enabled by some of the underlying technologies it relied on, like speculative execution.
Basically, as I predicted at age 12, RISC architecture won. It was just a matter of time. CISC, just piling on more and more power-hungry cores and threads, lost a race it had already lost before it started.
Efficiency wins the long game every time.
3:15 So no performance-only increase over Raptor Lake? Just performance per watt?
Yes. But keep in mind they tried to increase only performance without looking into power in the past. That didn't end well...
As someone who builds rigs, my very first personal rig having been a P4 w/HT (which I still possess 😉), I feel the lower power consumption per performance gained is worth it. I've used Intel exclusively since the P4, but this just puts a bad taste in my mouth for their "development." At least offer an option for enthusiasts and run it on a trial basis, versus just 86'ing HT.
This is a bad idea.
So big little config which is what phones have.
Not a fan, tbh. I can't justify this to myself, especially when they're doing it just to lower the temps.
Time will show it was the wrong choice.
30% performance boost for 20% extra power sounds like a good deal, but that's just me.
That's with the old technology on the performance cores. But E-cores do better without hyper-threading.
HT was amazing on the single-core Pentium 4. Not necessarily for performance, but because it gave hardware-accelerated multitasking. Without HT, a single thread could hang the system by consuming all CPU cycles. With HT, the CPU still remained accessible to other threads, giving much smoother multitasking without lockups.
I mean yeah, I'm not surprised. I remember it was an advantage to turn off hyper threading to get higher clocks and even FPS in some games.
I accidentally turned off my PC while it was resetting, and now when I turn it on, it loads, then turns off, then on again, over and over. What do I do?
With HT/SMT it's firstly because of speculation attacks like Meltdown and Spectre.
The CPU can get an instruction partially processed before it recognises "oops, I shouldn't have done that," but by then it's too late: the shady data is in the cache.
No, speculation attacks don't have ANYTHING to do with HT.
Speculation attacks are based on out-of-order (speculative) execution, a characteristic separate from HT.
Of course, SMT was really created for the never released DEC Alpha EV8 (which had 4-way SMT), before the HP/Compaq merger killed off that amazing architecture for good in favor of Itanium which HP was co-developing with Intel.
Honestly, the only good thing about the death of Alpha was that Intel acquired the rights to it, which got them Dean Tullsen's SMT research, and Intel was savvy enough to add it to the first P4 shrink (aka Northwood), birthing Hyper-Threading (aka 2-way SMT).
Hertz big - performance good.
All this could be solved by limiting the GHz while on battery... why does my laptop need to reach 4GHz to open my daily apps? Absurd...
On a PC, the best use of E-cores is... to turn them off!
And HT was mostly negligible, a marketing ploy, a marginal improvement. Turning HT off allowed a MORE STABLE overclock to higher frequencies.
And on a PC (not a data center), you want the BEST SINGLE-core performance you can get.
I wonder if it's possible to make a core process 3 or 4 threads at a time. That would be amazing.
There are PowerPC designs that do exactly this (up to 8 threads/core actually), but they’re for data center use only these days.
There are diminishing returns for trying to do this for the most part. There is only really so much extra performance that you can salvage when the core itself has limited resources to utilize. (Though it’s neat that PowerPC figured out a way to make it useful in the data center world).