I’m Embarrassed I Didn’t Think of This.. - Asynchronous Reprojection
- Published 23 Apr 2024
- Get up to 25% Off Pulseway's IT Management Software at lmg.gg/PulsewayLTT
Save up to *40% off and get Free Worldwide Shipping until Dec. 22nd at www.ridge.com/LINUS
What if you didn’t need the best frame rate to reduce input latency? What if your display’s refresh rate was enough all on its own? With Async reprojection, anything is possible… Even turning 30 FPS into 240.
Discuss on the forum: linustechtips.com/topic/14713...
Comrade Stinger's original video + download links: • Async Reprojection out...
2kliksphilip's video about async reprojection: • The future of upscaling?
Purchases made through some store links may provide some compensation to Linus Media Group.
► GET MERCH: lttstore.com
► SUPPORT US ON FLOATPLANE: www.floatplane.com/ltt
► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/sponsors
► PODCAST GEAR: lmg.gg/podcastgear
FOLLOW US
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech
MUSIC CREDIT
---------------------------------------------------
Intro: Laszlo - Supernova
Video Link: • [Electro] - Laszlo - S...
iTunes Download Link: itunes.apple.com/us/album/sup...
Artist Link: / laszlomusic
Outro: Approaching Nirvana - Sugar High
Video Link: • Sugar High - Approachi...
Listen on Spotify: spoti.fi/UxWkUw
Artist Link: / approachingnirvana
Intro animation by MBarek Abdelwassaa / mbarek_abdel
Monitor And Keyboard by vadimmihalkevich / CC BY 4.0 geni.us/PgGWp
Mechanical RGB Keyboard by BigBrotherECE / CC BY 4.0 geni.us/mj6pHk4
Mouse Gamer free Model By Oscar Creativo / CC BY 4.0 geni.us/Ps3XfE
CHAPTERS
---------------------------------------------------
0:00 Intro
1:36 Play along at home!
1:57 I'm sorry, what is this? Demo time!
3:40 That looks bad, can it be improved?
5:12 Why haven't we been using this??
6:38 Blind frame rate tests
9:44 If frame rate is so important, why can't they tell?
11:52 Showing how the sausage is made
12:31 This could be a game-changer! - Science & Technology
Thanks to Ridge for sponsoring today's video! Save up to *40% off and get Free Worldwide Shipping until Dec. 22nd at www.ridge.com/LINUS
thank you ridge
ridge sucks!!!!!!!!
Why does it look like inside of a vagania
6:43 "Top 5 - 10%", me who's top 0.1% & still hasn't become a pro player...
if you render a little bit off screen you might not need stretching
I asked in a VR subreddit about a year ago why nobody is making Async for computer games and people gave me shit about it like "wouldn't work that way, the idea is stupid, just not possible, etc." so I gave up. Glad I asked the right people
There are a lot of people who like to do the seemingly safe bet of saying "it won't work" without actually knowing, because they aren't the experts they want to pretend they are.
If a person speaks in absolutes without even trying to explain why, chances are they are not truly experts. They might know some things and even genuinely think of themselves as experts, but in reality, they have much more to learn.
People that reply to things on the internet tend to respond that way to new ideas.
Maybe you got this answer because it was discussed and tested like a hundred times since John Carmack invented it in 2012. There are serious issues with this that are much less problematic in VR.
The right idea is not asking redditors anything.
@@kristmadsen It was the same when the first computer mouse was invented. The higher-ups said it's useless, why would anyone need this? And bam! Everyone has a mouse or a trackpad.
"He owns a display" - that's gotta haunt him forever, like the "you're fired" for Colton 😂
Love it 😁
which video is "he owns a display" from again? having a hard time finding it
@@fsendventd He's been doing a lot of monitor unboxing videos on ShortCircuit, I think it's from one of the Alienware monitor videos
@@fsendventd it's from the 8k gaming video
@@Jeffrey_Wong yeah, it's also a direct quote, he says "I own a display" in the dlss 3.0 video
Yeah, I got a really good laugh out of those 4 words under his name.
2kliksphilip and LTT is a crossover I never knew I needed. Make it happen.
Bump lol
Wonder if it can be used with CSGO, whether Valve allows it or you brute-force it.
They won't
They just use him and his ideas with not even 1 full second of credit
@@morfgo its not malicious
@@morfgo Dude, they literally credited him and his video in the description!
Not mentioned in the video: you can render frames at a slightly higher FOV and resolution than the screen, so that there's some information "behind" the monitor corner.
Won't save you from turning 180 degrees, but it will fix most of the pop-up for a very slight hit to performance.
this is not what pc gaming is supposed to be about. using vr handmedown techs. and vr and pc scene shouldn't be segregated and minding their own scene either if yall sudden mindlessly hurrah at this crossover weirdofiesta
@@lyrilljackson What do you mean?
@@martinkrauser4029 asmh
@@lyrilljackson Brainlet take
@@JustSomeDinosaurPerson Careful... thats a [Lvl. 163] PC Master-Supremacist, the bane of mobile, console, and vr gamers.....
Plouffe's "He owns a display" gag is always going to crack me up.
Don't all of them own displays? It's a tech media company, I'd hope they do.
There was a video a couple of weeks ago about his _display._
@@Lu-db1uf You know.. that's the joke
@@Lu-db1uf But his display is.... *special*
@@Lu-db1uf he bought the alienware miniled one and hes proud that he was one of the first to get it and now its a meme
I'm so happy Phil put a spotlight on this concept, and I'm even happier that a channel like LTT is carrying that torch forwards.
I tried to build something like that demo a few years ago, but I was trying to use motion vectors + depth to reproject my rendered frame, which I never got to work correctly. In my engine I rendered a viewport larger than the screen to handle the issue with the blackness on the edges, and then was going to use tier 2 variable rate shading to lower the render cost of the parts beyond the screen bounds. But VRS was not supported in any way on my build of Monogame, which is what my engine was built upon, so that was another killer for the project.
I am so glad that Phil popularised the idea and its awesome that someone else managed to get something like this working, how he did it in one day I will never know, I spent like 3 weeks on it and still failed to get it working correctly. I should find my old demo and see if I can get it compiling again.
You might be able to hide a lot of the edge warping by basically implementing overscan where the game renders at a resolution that's like 5-10% higher than the display resolution, but crops the view to the display resolution. It should in theory be only a very minor frame rate hit since you're just adding a relatively thin border of extra resolution.
The size of border you would need to eliminate the edge warping would probably impact performance more than just using a higher refresh rate to lower the amount of warping in the first place.
The magic combo there would be foveated rendering alongside the async reproj with overscan. The games that would make sense for will inevitably be a case-by-case thing for but the performance gains would be massive.
@@carlo6953 That assumes you'd need the same resolution for the overscan. If the game is rendered at a 45deg FOV at 1440p, render an overscanned area between 45 and 90deg FOV at 360p. You don't need a lot of detail, just something to make valid guesstimates within that motion blur until the proper frame fills up the screen.
Yes, definitely. Surprised they don’t do this.
I’m glad I wasn’t the only one thinking this
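As a rough back-of-the-envelope for the overscan idea in this thread, the guard border only needs to cover however far the camera can turn between two real frames. This sketch puts numbers on that (a hypothetical sizing rule of my own, not from the video or demo):

```python
def overscan_margin_deg(turn_speed_dps, render_fps):
    """Degrees of extra FOV needed (per side, along the turn direction)
    so the reprojection warp never runs off the rendered image before
    the next real frame arrives. Hypothetical rule: cover one full
    render interval of rotation at the given turn speed."""
    return turn_speed_dps / render_fps

# Turning at 180 deg/s with real frames at 30 fps needs a 6-degree
# guard band; the same turn speed at 60 fps needs only 3 degrees:
# overscan_margin_deg(180, 30) -> 6.0
# overscan_margin_deg(180, 60) -> 3.0
```

This is also why the border can stay thin: it only has to hide one render interval's worth of rotation, not an arbitrary turn.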
As someone who plays VR constantly, it’s nice to see this brought up for non-VR stuff
I am so happy that Philip managed to get the message THIS far out. I do fear that this tech might have issues with particles, moving objects and the like, but when you mentioned that we could use DLSS to ONLY FILL IN THE GAPS, my jaw dropped. That's so genius! I really hope that this is one of those missed-opportunity oversights in gaming, and there isn't some major issue behind it not being adopted yet.
Exactly. On Linux this exact setup has been available for the last year. It makes a massive difference
You don't need to worry about particles, just render them later.
The whole idea behind this solution is to split rendering into two phases:
1. Render the scene (expensive 3d phase)
2. Render the final frame from pictures of the scene (cheap 2d rendering)
Just move all particle and HUD rendering to phase 2
To be honest I would suggest going even further and adding a phase `1.1` where you use DLSS to draw the less important background stuff. This way you can render the important objects in 4K and background objects (buildings, grass, trees etc.) in 720p or lower and just upscale with DLSS.
Or go even further and render each layer in different framerate.
Background in 30fps, while objects in 60fps and final image in 120fps
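The two-phase split described above can be sketched as a toy loop. This is a simplified model for illustration only (the `AsyncReprojector` class and its tick-based timing are invented here, not code from any real engine; real implementations warp the frame on the GPU, this just tracks the angles involved):

```python
class AsyncReprojector:
    """Toy model of the two-phase split: an expensive scene render that
    only finishes every N display refreshes (phase 1), plus a cheap
    per-refresh warp that re-aims the last finished frame at the
    camera's CURRENT orientation (phase 2)."""

    def __init__(self, render_interval_ticks):
        self.render_interval = render_interval_ticks
        self.rendered_yaw = 0.0  # camera yaw baked into the last real frame
        self.tick = 0

    def display_tick(self, current_yaw):
        # Phase 1 (slow, 3D): a new real frame lands only every N ticks.
        if self.tick % self.render_interval == 0:
            self.rendered_yaw = current_yaw
        self.tick += 1
        # Phase 2 (fast, 2D): warp the old frame by however far the
        # camera has turned since that frame was rendered.
        return current_yaw - self.rendered_yaw

# Camera turns 1 degree per refresh; real frames land every 4 refreshes.
r = AsyncReprojector(render_interval_ticks=4)
offsets = [r.display_tick(current_yaw=float(t)) for t in range(8)]
# -> [0.0, 1.0, 2.0, 3.0, 0.0, 1.0, 2.0, 3.0]
# Every refresh reflects the newest input (the warp offset is always
# computed from the current yaw), even though a real frame only
# arrives every fourth refresh.
```

Phase 2 is where particles and the HUD would be drawn in the scheme suggested above, since they are cheap enough to redo every refresh.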
This was my first thought, too. Combining Async with DLSS/FSR could potentially be the actual magic bullet we're looking for.
@@hubertnnn
I mean, cheap is relative; if all animated and moving objects and particles have to be rendered later, it won't be that cheap.
Especially if it means that transparent objects have to be rendered after that.
Also, screen-space reflections of animated objects will disappear from surfaces that aren't themselves part of an animated object.
Not saying it's not interesting, but it's definitely not a solution without compromises.
@@antikz3731 ??? Explain. I use Linux and there's no such thing.
2kliksphilip is an unsung hero; his DLSS coverage is also some of his best content
His upscaling content is the best ;)
Never seen either of those but I agree
Personally super excited to see 2klicksphilip's video referred to in a LTT video, a lot of Philip's content is really high quality, especially the ones where he covers DLSS and upscaling as mentioned earlier. Can't recommend checking it out enough!
2kliksphilip had a good idea, but 3kliksphilip is more advanced in every way!
@@elise3455 3klicksphilip is just more work. Both will be _automatically_ obsolete when 0clicksphilip releases.
Take this as a compliment: I love how LTT has now transformed more into a Computer Science/Electronics-for-beginners channel than just another "Hey we got a NEW GPU [REVIEW]" channel.
Well... They had covered everything on that aisle...
It's why I keep watching them, I got tired of watching reviews of hardware I can't afford/don't really need yet. Though my VR rig is getting very tired.
Philip is revolutionising the way we think about gaming and game dev just with common sense
what? nothing here is new
@@Neurotik51 using technology for VR with conventional monitors? I haven’t heard of that before
I know philip will see this and I know he will feel awesome.
You have come a long way Philip. I am proud to be part of your community since your first tutorial videos.
Here's to Philip, love his videos on all 3 of his channels
Love him. His tutorials laid the base for my environment-artist gamedev job.
kliki boy i love you
@@charliegroves more like 14 lol
One thing that Philip's video covers that this one does not, and which I'm personally really excited about: combining this with a low shading rate border around the viewport (the fully rendered frame). Since peripheral vision is more trained on movement than detail, this is fine quality wise, and it means the screen doesn't have to guess what's at the edges - the information is already there, just in lower quality than the main viewpoint would have. That would, if not eliminate, significantly reduce the stretching artifacts.
like doing actual FOVeated rendering, where the "sharp part" is the whole normal viewport, while the low resolution is just around it, like extra 5-10% or so
I haven't seen Philip's video but I'm guessing you'd need eye tracking as well. It'd be pointless to render the fringes of the "screen" at a lower quality if you can point your eyeball directly at it...
@@ffsireallydontcare what if the lower quality rendered parts are actually outside your screen? You would trade a bit of framerate for more accurate projection predictions which would recoup the lost performance and give you a better experience
@justathought No because it will be fixed in 1/30th of a second. It's obviously not perfect, but that's what this technique is about, compromises. Lower resolution fringes would be way better than stretching.
@@gabrielenitti3243 Ahh ok, yeh that makes more sense.
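The cost argument for a low-shading-rate border can be put in numbers. This sketch is my own arithmetic (the function and its parameters are illustrative, not from Philip's video), comparing a full-resolution guard border against the same border at a reduced shading rate:

```python
def border_overhead(border_frac, shade_scale):
    """Extra shading cost, as a fraction of the on-screen pixel count,
    for an off-screen guard border around the viewport.

    border_frac: border width per edge, as a fraction of screen size
                 (0.05 = 5% on each side).
    shade_scale: pixel density of the border relative to the viewport
                 (1.0 = full resolution, 0.25 = quarter resolution).
    """
    overscanned_area = (1 + 2 * border_frac) ** 2  # screen area = 1.0
    return (overscanned_area - 1.0) * shade_scale

# A 5%-per-side border at full resolution costs ~21% extra shading:
# border_overhead(0.05, 1.0) ≈ 0.21
# The same border at quarter resolution costs only ~5.25%:
# border_overhead(0.05, 0.25) ≈ 0.0525
```

That is the appeal of the idea in this thread: a low-detail fringe makes the guard band cheap enough that the warp almost never has to stretch or guess.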
Just think about that: you can see the difference in a YouTube video! Granted it's 60FPS, but it's still compressed video streamed from YouTube. I can only imagine how much of a difference you can see live, running it yourself. This makes it even more amazing!
Y’all have done a stupid good job recently researching and explaining difficult concepts. Between this video and the recent windows sleep/battery video, my (already high) respect for LMG’s tech knowledge has gone through the roof! And y’all didn’t even discover this hack! Thanks for sharing (and explaining)
older videos were more technical now they suck up to the chump who doesnt know how to navigate a settings menu
This explains the weirdest thing I've felt in VR. The game itself lagged for some random reason, but my head tracking and the responsiveness of the controls weren't affected. I remember thinking that if the head tracking had lagged along with the other lag, I would have had severe motion sickness.
Yeah it was honestly a very amazing thing when my quest 2 froze, I was like "oh no please no motion sickness" for the first time but it was so normal
Yep, it's pretty standard and required for HMD-based VR (at least some sort of reprojection or time warp). There are a lot of different variations.
This is great, it really explains some odd behavior I've noticed while playing VR games, and using it for flat games sounds like an awesome idea, especially for consoles.
Handhelds too. This would make any game on the Switch or Steam Deck run near perfectly without having to tap into too much hardware power.
Why are we not funding this? GPUs are the size of a gaming console nowadays but they couldn't bother to solve those issues with much simpler and cheaper solutions?
Yes ! Sometimes when the game is stuttery, you can still move freely but you can see the black screen ! Such a cool tech. It works really well, input latency is really important.
@@nktslp3650 yeah, when he showed the black borders I had a strong feeling of "i have seen this before", but i couldn't put a finger on it, until he mentioned VR.
I actually was thinking about writing an injector to apply this to existing games a few years ago, when I saw the effect on the HoloLens. A few limitations though: camera movement with a static scene can look near perfect, but if an animated object moves, depth reprojection cannot fix it properly; you would need motion vectors to guess where objects will go, and that will cause artifacts near object edges.
couldn't you just zoom in a little bit so you would never see the artifact from the edges of screen and then use a higher resolution or AI to compensate for the crop?
@@ambassador.to.Christ I think the problem they're talking about is that this works great when objects are holding still, because the algorithm knows where the object should be in the next frame. But for a moving object, it has to adjust not only for the altered perspective of the object but also its altered position, and since many moving objects in games are random or player-controlled, there's no way to know for sure where the object will be on the next frame. So the information the player is getting is not necessarily the most up-to-date, accurate information, which could mean the result is actually worse than a low frame rate. Because slow information that's always correct is better than fast information that's sometimes wrong.
Yeah, the parallax effect is a big issue to address.
This technique can be used only for the background or environment, while the additional frames for the subjects of a scene can be rendered through the GPU. This way you get the best of both techniques.
Not if you can somehow use light ray data from sectors in an environment to determine depth (or lack thereof)
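The limitation discussed in this thread (exact for geometry that held still, wrong for anything that moved) falls straight out of the math of depth reprojection. Here is a minimal pinhole-camera sketch; `project` and `reproject` are illustrative toys of my own, not any engine's API:

```python
import math

def project(point, cam_yaw):
    """Pinhole-project a world-space point for a camera at the origin,
    rotated by cam_yaw radians about the vertical axis."""
    x, y, z = point
    # world -> camera space: rotate by -cam_yaw about the y axis
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    cx = c * x + s * z
    cz = -s * x + c * z
    return (cx / cz, y / cz)  # normalized screen coordinates

def reproject(screen_xy, depth, old_yaw, new_yaw):
    """Depth-based reprojection: unproject a pixel from the old frame
    using its depth, then re-project it with the new camera yaw.
    Exact for static geometry; an object that moved between frames is
    still drawn where the OLD frame saw it (the artifact source
    motion vectors are meant to patch)."""
    sx, sy = screen_xy
    # screen -> old camera space (depth is camera-space z)
    cx, cy, cz = sx * depth, sy * depth, depth
    # old camera space -> world: rotate by +old_yaw
    c, s = math.cos(old_yaw), math.sin(old_yaw)
    wx = c * cx - s * cz
    wz = s * cx + c * cz
    return project((wx, cy, wz), new_yaw)

# Reprojecting a static point from the old frame lands exactly where a
# fresh render with the new camera yaw would put it.
old_screen = project((1.0, 0.5, 5.0), cam_yaw=0.0)   # pixel + depth 5.0
warped = reproject(old_screen, 5.0, old_yaw=0.0, new_yaw=0.1)
fresh = project((1.0, 0.5, 5.0), cam_yaw=0.1)
```

For a point that moved between the two frames, `warped` would still show the old position, which is exactly the edge-artifact problem described above.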
This technology on handhelds will ABSOLUTELY be a game changer: not only will it "look" better, it will be even harder to spot artifacts on a way smaller screen.
We need this integrated into Steam Deck OS!
3kliks and LTT collabing is what the INTERNET NEEDS!!!!
2kliks but either way Yes.
no it isnt
@@Qimchiy its the same guy
@@cora2887 Erm, if you paid attention you would know they were brothers 🙄
@@Qimchiy might as well get kliksphillip in here too ;)
This is probably my favorite type of video from LTT. Highlighting and explaining interesting technology is fascinating.
oh wait, time for another balls to the wall computer build! only the third this week. /s
But for real, they've been doing a great job with not doing what I just said
it's up there for sure
Realistically, if you render some percentage outside the visible FOV, you would have enough scene overshoot for it not to be a problem unless you have extremely low frame rates and incredibly fast movements.
An LTT video at 60fps?! My god, the little animations they put in, like the outro card, look so good 👍
yeah, doesn't the dummy know 60fps is better than 8k uploads
As a hobby game-engine/GFX developer, I developed this technique with some tweaks: static geometry would just be rendered every few frames, but characters, grass or particles get rendered every frame. With the depth sort and extended viewport, it feels like native rendering, and you can really aim precisely at a target, since that is always up to date. As mentioned in the video, DLSS uses motion vectors but has to guess the motion and static geometry. With a proper implementation, this guess is not required and can be calculated by the same hardware as the AI.
Does this end up looking like motion blur?
What happens if rendering your static geometry takes 20ms on the GPU? How do you schedule the reprojection to ensure it's executed in time?
Also, which graphics API did you use to implement this?
What I'm wondering is, does the GPU in any way know what it wouldn't need to render, sections of the screen that can persist using this tech & only re-rendering additional frames for the sections that require more updates?
Does this make sense? It's hard to put into words.
I want it, and i want it now
@@Winston-1984 That's what I'm currently working on, since this is now a common technique for ray tracers. Currently I'm trying to derive the formulas I need and prove them for small movements. But with this fixed splitting, it works for first-person shooters or something like that with a lot of static geometry. Static geometry is really fast to render nowadays.
Yay, 2kliksphilip and his brother 3kliksphilip finally get some well deserved attention!
The inventor already suggested using it for normal games in 2012. Then many people made experiments and demos over the last decade. This one finally got some traction, so kudos for that, but it's nothing new.
@@kazioo2 truth to be told, I'm a long time klik empire supporter, and I'm always happy if anything good happens to him like getting mentioned by another creator I like.
The technology is interesting, and it needs traction to take off, but i actually care more about Philip than the tech.
Brother?
@@semick4729 yeah he has two brothers kliksphilip and 3kliksphilip
@@MrALjo0oker Not sure if that is necessarily true, someone should get to the bottom of that. Valve, please fix.
This feels like a slightly hacky optimisation you would see in older games, and I personally find that really cool. I always admired hearing about the clever ways game devs overcame the limitations of hardware.
Whereas these days it feels like we rely on an abundance of processing power. That abundance is generally a good thing, but it feels like these sorts of optimisations are becoming a lost art.
cough cough Gotham Knight
I love how much labs has instantly matured this channel. I have watched LTT for a long time but recently its really boosted its level.
But… they are not even done?!
It's been far from instantaneous, but we are starting to see the returns and it is definitely nice.
Seems like interesting tech. Two immediate thoughts:
1. What about moving objects? Seems like the illusion falls apart there as this only really simulates fake frames of camera tilt, not any changes to things already in your FOV.
2. What if you just slightly over-rendered the FOV? Then you actually have some buffer when tilting the camera where you have an actual rendered image to display before you need to start stretching things at the edges of the screen. Now obviously since you're rendering more geometry, you are going to take a further FPS hit, but is there a point where the tradeoff is a net gain?
2. You can do that I think.
1. They will move at the actual framerate. All asynchronous reprojection does is make the game feel more responsive.
I mean vr uses it and it runs pretty well
In 2kliksphilip's video he mentioned how interesting it would be if, instead of stretching the borders or showing the void left by not-yet-rendered areas, we rendered a bit beyond the display area (quasi like overscan) but at low resolution, to impact performance as little as possible. Since our peripheral vision is not great, we'd barely notice during fast movement that a small area at the corners is momentarily lower resolution.
So yes, we would have plenty of ways to improve the illusion. For example, you could boost the on-display area with DLSS or FSR, and maybe even the extra area (I don't think that would always be a good idea: depending on your main resolution, having the extra area at 480p upscaled from 240p with DLSS is not the same as having it at 240p upscaled from 120p; the latter is probably a bad idea). And if the resolution of the extra area is not suitable for DLSS or FSR, you could instead apply only the frame generation of DLSS 3.0 (and the future FSR 3.0) to the extra area, filling the gaps with mostly generated frames by predicting what your movement is about to reveal.
moving objects still have poor framerates, that's how it is in vr as well, your hands are much more jittery feeling than the rest of the game when your fps drops... in my experience anyway.
1. Yes, moving objects are still noticeably 30fps, but coming from someone who has spent time with VR reprojection in a game like Skyrim, with lots of moving actors, you don't notice that nearly as much when your own actions are still so instant, as shown in the demo. It's crazy how much you can find yourself forgiving if your head and hand movement is still smooth as butter.
2. That is another technique that VR absolutely uses that works very well to solve that issue. Easily implementable and workable.
As per usual: John Carmack is the king of optimizing rendering in games. He first implemented this tech for the Oculus Rift and has a long history of coming up with awesome solutions for problems like this. This is the man that made Doom, he knows his stuff.
He's probably laughing right now and having a big "I told you so" moment.
John Carmack is responsible for asynchronous reprojection!?!?!?!?!?
This living god never ceases to amaze the world of technology!
@@Felipemelazzi I thought JC was the one who had seen it somewhere and wanted to bring it to Oculus, but, I don't think he was responsible for its actual creation. Anyone know?
He basically mimicked how your eye works in real life. I thought of this too, but I assumed it was already implemented.
Meta improved on this tech, now it is called Asynchronous Spacewarp and bundled with Oculus Quest 2. And let me tell you, it is really cool.
2KP is such an amazing channel, he always has very interesting, out of the box ideas, and I love to see more of his wacky stuff being picked up!
This is really cool! I play shooter games a lot and the most annoying thing about low fps in games is the input lag. Slow visual information is more of an annoyance as long as it's above 30, but the slow input response times at anything below 60 fps drives me insane.
I'm so happy to see phillip reach this far out of the csgo bubble with this
'valve please fix'
This actually reminds me of the input delay reduction setting that Capcom added for Street Fighter 6. The game itself still runs at 60fps, but the refresh rate is 120Hz for the sake of decreasing input latency.
Good point. That's one of the added benefits of a high refresh rate monitor. Even though you might not reach a high fps, having a high refresh rate monitor can still benefit you from a reduced input latency.
that's not how any of this works...
this tech won't really be useful for fighting games specifically, and I think it would be more counterproductive tbh.
What does that even mean... Async reprojection, as I understand it, essentially shifts your point of view before the GPU produces a new frame. But for a fighting game, it would have to render the new frame no matter what to show your input turning into a move.
The *effect* (not reality, which is a bit different) also reminds me a little bit of QuakeWorld (and to a lesser extent, Quake and Doom). Even when the framerate is high the models use low-FPS animations, and with QuakeWorld I seem to recall objects in motion skipping frames based on your network settings. Meanwhile the movement was still buttery.
I stumbled upon 2kliksphilip’s channels when I was researching how to make maps in Hammer. So glad you guys have mentioned him in multiple videos now!
Wow, not sure who wrote this one but such a good explanation. So clear and well presented, good job!
I was amazed watching Philip's video when it came out. I'm happy that it has reached you now! Hopefully the game developpers will get the message, I'd be really happy to see this implemented in actual games, because at the moment unless you have the most recent hardware, you have to choose between high resolution and very high framerate...
I wanna see the video in question. 2kliksphilip is the channel, right? What's the video? I'm guessing around trying to find this guy/video; what's the title so I can show him some love?
@@SendFoodz it's linked in this video's description my man (if you haven't found it yet)
I literally did a spit take at 6:01 Now I have coffee all over my keyboard 😂😂
This is so so incredible! I hope this will be the next-gen image helper in all upcoming and older games!
THIS IS INSANE. I use this already on Assetto Corsa in VR, so I play at 120Hz but it renders 60fps. Such a light-bulb moment at the start. I really wish this would catch on, because I've already seen first-hand how great it is.
I always had a feeling that tech like this is actually the real future of gaming / VR performance. And not just raw rtx 4090 performance.
This tech has been a part of VR for years and it's awful. They need to take a new approach and have developers actually implement it at the game level rather than it being an after effect because as it stands now it doesn't work worth a shit. Awful.
@@Thezuule1 I use it in RE8 vr so I can run rtx while in vr and it doesn't feel great but it feels better than native.
@@Thezuule1 On quest, they've built support for it in-engine, it's called SSW. It's actually better than ASW on PC because it has motion data for the image, so the interpolation is quite good. Sure, real frames are still better, but the tech is getting better
@@possamei you've got that a little twisted up but yeah. SSW is the Virtual Desktop version, AppSW is the native Quest version. It works better but still not well enough to have picked up support from any real number of devs. Step in the right direction though.
@@Thezuule1 But what if DLSS and FSR only had to correct the flaws of this instead of making whole frames. DLSS and FSR might get you even more performance.
I'd imagine that a lot of the edge stretching could be mitigated by rendering slightly more than is displayed on the screen, so there's a bit extra to use when turning before having to guess
that's an interesting thought. this would bring us back to the age of overscan i feel lol
@@Blancdaddyreject dlss return to overscan
I think I remember 2kliksphilip talking / showing this in his video, just have the part just outside of your fov rendered at a lower resolution and use that instead of most of the stretching because you can't see the detail anyway
I just commented something similar. Just have it render out 10% extra which is cropped off by your display anyway, so whatever "stitching" it's doing is outside your view. Would love to see this. Combine that with foveated rendering, so the additional areas rendered outside view is lower resolution.
imagine the longevity of GPUs if this technology ever becomes standardized. man.
That really explains some quirks of the Quest and streaming, especially how, when something's loading, you can move your head freely and the VR picture stays still in space as a single frame, just like you showed here.
Really cool stuff. When I’m in VR and the frames drop during loading or something, it does exactly what you showed in the 10fps demo. You can see the abyss behind the projected image on the edges, with the location of the image updating to return right in front of you with each new frame. I had no idea that that’s what it was for.
ive been using vr for 3 years now- and I had no idea what it was until today either! Super cool to learn more about that stuff
Oooohhh. You're absolutely right and not once did that occur to me! Imagine if that didn't happen and everywhere you looked was the same loading screen...
@@MrScorpianwarrior how to get motion sickness lol.
Glad seeing Philip getting bigger every day. He's amazing
One thing I wondered about when I first saw that video is whether the PERCEIVED improvement is good enough that you could lose a couple more frames in exchange for rendering a bit further outside the actual FOV, but at a really low resolution. Basically like a really wide foveated rendering. It would mean the warp would have a little more wiggle room before things started having to stretch.
This was just fascinating to see. I love it when a tech solution is used in a new way.
been watching 3kliks for years. I'm glad he's getting some recognition.
2kliks
@@TehF0cus kliks
@@TehF0cus same person
@@TehF0cus but don't confuse him with his evil brother kliksphilip
@@veganssuck2155 Its a joke....
Having just started getting into VR, I only recently learned what asynchronous reprojection is. Really cool to see it getting mentioned, because when I heard about it, it seemed like what DLSS 3 wanted to do, only it's been here for quite some time already.
Your description of how it decouples player input from the rendering makes me think of rollback netcode for fighting games and how that also decouples player input and game logic, and I'm really excited about what that means for the player experience.
7:06 I love how editors put: "he owns a display"
I’ve never understood why this hasn’t been done before. I’ve thought it should be done since 2016 when I got my VR headset. Like you said, extremely obvious!
I feel like I just had my mind blown wide open at the possibilities. This is one of my favorite LTT videos. It's hard to find such a technical concept explained so well. Well done.
I think we just witnessed one of those rare moments when an elegant solution clicks and starts a revolution
This video is poorly researched. Timewarp was invented by John Carmack and described in his post "Latency Mitigation Strategies" back in early 2012, more than 10 years ago. His original article already mentioned games other than VR. I remember seeing normal desktop demos many years ago, but it never gained traction despite that.
I'd be very interested in a follow-up on this in the form of interviews/queries with the likes of Nvidia/AMD/Nintendo/Steam as to whether this is something they're aware of/considering/etc.! With ridiculous power draw for graphics cards being accepted as necessary, this seems like a gigantic sidestep, with machine learning assistance, toward amazing benefits for end users!
Whoever's PC y'all used to show the games list at the start: nice to see you are a man of culture. 0:55
Holy shit, Philip MADE IT
Your demo should really have included an animated object. That would have shown some serious limitations that would also be present in nearly all games.
Seriously though.
I keep seeing this concern, and while it might be true at low FPS, I don’t think that’s really where it would be aimed. I’d imagine most would still aim for 60+ rendering.
Keep in mind that VR headsets are already using this method and are they experiencing these problems? I legitimately don’t know.
It probably wouldn't have been as obvious as you might think, at least not at 30 fps. The reason 60 or even 120 fps is so obviously faster has to do with the persistence of light in the eye when you move the mouse, but you can't see that with animation. Have you ever been to the cinema? 24 fps... yes, 24. Do you think cinema is choppy? IMAX can do 48 fps.
@@matsv201 Movies don’t look or feel choppy because there is natural motion blur to everything and you also don’t control anything in them. It’s an apples to oranges comparison. Also IMAX doesn’t “have” 48 fps, IMAX is simply a format. A movie has to be shot at that framerate to display in that framerate. If it’s shot at 24 then it will be 24 in IMAX.
@@MrPhillian Depends on the game and what sort of post-processing is going on, from my experience. It can work well, but if there's a bunch of fancy effects going on it can be very noticeable that the smoothness is being faked.
I would be very curious whether a hybrid solution would be possible, such as, in an FPS game, drawing the environment synchronously but the players asynchronously. I'm sure there are some limitations involved, but it does sound intriguing.
Would love to see some analytics on image scaling, i.e. software vs hardware (NIS vs DLSS), various resolutions, % variance between GPU-generated resolutions vs monitor native, how effective they are, etc. Comparisons between Nvidia, AMD, and Intel tech for all of the above.
On the other side:
this is a static scene: no animated textures, no characters moving around, no post-process effects, no particles, etc. Porting this to a modern game would be similar to what Assassin's Creed Syndicate (or a later one, I don't remember) did with clothing physics: capped it at 30 fps while the game ran at 60. The effect would look similar to what modern games do with animations when characters are too far away for the engine to update them as frequently as the game's current fps. So I'm skeptical.
Also, nice GPU you got there, can’t wait for the review ;)
It is effective in VR games designed with it in mind, so it will probably be effective in normal PC games too, if it's kept in mind during development.
Yeah, this isn't new tech. Pretty much every web browser does something similar when scrolling or zooming, where most content is static, and it looks terrible when a heavy webpage tries to do parallax scrolling on an underpowered system. The whole "nobody thought about it" angle in this video is strange and patronising.
Cloud gaming would be great with this!
You handle the reprojection locally and use the delayed frames as a source.
It will basically eliminate the input lag.
This is a sick idea
You would still need to send a depth buffer and probably other information to the computer playing the game. That means more load on the internet connection, but it still sounds interesting.
Not that it will do nothing, but it will do less than you think.
Even if it works, and I'm not sure it does, at least not in this form: the time until you receive a frame that fills the stretched gaps you just created by moving the camera is much higher than on a local computer, which fills that gap within the next ~33 ms if you are running at 30 fps.
You might get super smooth camera turning, but the time to shoot, jump, etc. will still be the same.
Heck, because of the higher disparity between the camera and everything else, it might even worsen the experience instead of making it better.
@@khhnator You're right, but the latency for cloud gaming is already not that high. The most noticeable effect at latency is moving the mouse to look around.
This is already done with VR cloud gaming services, when you use oculus air link, you’re basically doing the same thing but over LAN. If you drop a frame, you can still move your head around and it’s perfectly playable all the way down to 30 fps for most games.
Now that's a public interest video! Raising awareness of this technique will certainly go a long way, especially in open source. I hope the manufacturers don't shy away from it out of fear that it would diminish interest in their high-end GPUs.
High-end?
What are you talking about? As if they can make a AAA game run at 8K 240 fps without a 6-slot GPU.
This is pretty awesome. I can totally imagine this working together with something like DLSS in the future. Exciting.
Something worth noting is that Comrade Stinger's demo does not really do what they say it does (mostly because of how Unity works, which is pretty incompatible with this sort of demo). The GPU draws the entire frame during the previous frame, so work is NOT split up over several frames. Doing that would be a pretty complicated task in existing game engines like Unity.
This!
The demo only distorts the *simulated* bad framerate from the slider. If you ran the demo with an actual bad framerate, it would just lag like normal.
To actually implement it properly is much harder than what I did; in Unity's case it might require some severe shenanigans, or straight-up engine modification.
@@comradestinger Do you think that something like that could be a driver feature like the DLSS2 Stuff. Where the GPU gets some motion vectors and shifts existing objects more or less like sprites arround until a new real frame got created?
@@TrackmaniaKaiser I think both could work, though I lean towards it being done by the devs themselves rather than by the driver. Since games vary so much, different scenes and camera modes would benefit/suffer from the effect in different ways.
To be honest, it's all very complicated.
Wonder if using DOTS and the scriptable render pipeline would allow for it; can't imagine figuring all that out in an evening though. I wouldn't trust a solution that leverages Unity's under-supported APIs to be that stable anyway...
@@comradestinger Good work man
I knew about this because of my Oculus Rift, and as you mentioned, in racing games asynchronous spacewarp (as Oculus calls it) is quite noticeable: moving your head around while driving at 100 mph can be quite jarring. But Oculus updated the feature and the visual bugs aren't as noticeable anymore. It's quite interesting to see how this works. Excellent video, guys.
With the latest "Application Spacewarp" on the Quest 2, games can now send motion vectors, so the extrapolated frames no longer have to rely on so much guesswork.
Yeah, the visual artifacts can actually cause MORE issues in VR than not, at least in some very specific games. It's not super noticeable in VRChat, but in the Vivecraft mod for Minecraft the screen turns into a wavy, smeary mess. I actually hated that WORSE than running at 40 fps natively, which is what my system could do with my settings at the time.
I'm very interested in using a form of overscan where you're viewing a cropped-in portion of the whole rendered image, so that panning your screen around doesn't have the issue of stretching unless you pan outside of the rendered frame.
Well, that's as simple as rendering a little more outside of the screen area.
Asynchronous reprojection is great in VRChat (in PCVR on my relatively old PC), where I usually get 15 fps and often far less than that! I don't really mind the black borders that much in that case, especially since the rendered view usually extends a bit beyond my FOV, so they only appear when things are going _extremely_ slowly, like a stutter or any other time I'm getting over 0.25 seconds per frame. So perhaps another way to make the black bars less obvious would be to simply increase the FOV of the rendered frames a little so that there is more margin. It would lower frame rates a bit, but it might be worth it in any case where the frame rates would be terrible anyway.
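For what it's worth, the angular margin that overscan idea needs is easy to budget: render the base FOV plus the largest turn you expect between two real frames, on each side. A back-of-the-envelope sketch (the numbers and the `overscan_fov` helper are made up for illustration, not from any engine):

```python
import math

# Render the base FOV plus the biggest expected turn on each side, so a
# reprojection of up to max_turn_deg never exposes the frame edge.
def overscan_fov(base_fov_deg, max_turn_deg):
    return base_fov_deg + 2 * max_turn_deg

# Example: 30 fps real rendering while the player turns at 180 deg/s
# means up to 6 degrees of rotation between real frames.
turn_per_frame = 180 / 30
fov = overscan_fov(90, turn_per_frame)
print(fov)                                   # 102.0

# The pixel cost grows with tan(fov/2), not linearly with the angle:
cost = math.tan(math.radians(fov / 2)) / math.tan(math.radians(90 / 2))
print(round(cost, 2))                        # ~1.23x wider image at the same pixel density
```

So a 12-degree total margin costs roughly 23% more horizontal pixels at the same density, which is where the "lose a couple more frames" trade-off comes from.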
I have said for a very long time that when it comes to refresh rate, I don't mind lower frame rates from a visual standpoint, but the input delay is more what I love about high refresh rate gaming. I'm excited to see where this technology goes.
I remember watching 2klik's video last month and saying wow this is amazing and mind blowing, but thought I was just excited for it because I'm a programmer, guess not
I was also excited for it but thought nothing would come of it since I've only ever seen him talk about it. Now perhaps there's a chance of this actually becoming popular and coming into games.
It would be interesting to explore the increased rendering cost of running a higher internal resolution with increased fov, then hiding the edges of the rendered resolution to eliminate the screen edge artifacting.
It really is amazing how fast game technology is improving now that we've hit diminishing returns on fidelity: we're starting to see a wider range of techniques, ones others have used for a while in different contexts, being reached for to make an experience better.
The main issue with these workarounds is that they depend on the Z-buffer; they break down pretty quickly whenever you have superimposed objects, like something behind glass, volumetric effects, or screen-space effects.
Ya, that sounds like it could be a big issue...
You technically only need the depth buffer for positional reprojection (e.g. stepping side to side). Rotational reprojection (e.g. turning your head while standing still) can be done just fine without depth, and this is how most VR reprojection already works, as well as electronic image stabilization in phone cameras (they reproject the image to render it from a steadier perspective).
It might sound like a major compromise but try doing both motions, and you'll notice that your perspective changes a lot more from the rotational movement than the positional one, which is why rotational reprojection is much more important (although having both is ideal).
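The point about rotation needing no depth can be shown in a few lines: for a pure rotation, warping the old frame is just a homography built from the camera intrinsics and the rotation matrix, and the scene's depth never enters the formula. A toy sketch (made-up intrinsics, not any engine's actual code):

```python
import numpy as np

# Rotation-only reprojection: the warp is the homography H = K @ R @ K^-1,
# where K is the camera intrinsics and R the rotation since the last real
# frame. No Z-buffer appears anywhere, which is why rotational timewarp is
# cheap; positional reprojection is what needs per-pixel depth.

w, h = 1920, 1080
f = (w / 2) / np.tan(np.radians(90) / 2)    # focal length for a 90-degree horizontal FOV
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1.0]])

yaw = np.radians(1.0)                        # 1 degree of turn since the last rendered frame
R = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
              [ 0,           1, 0          ],
              [-np.sin(yaw), 0, np.cos(yaw)]])

H = K @ R @ np.linalg.inv(K)

# Warp the centre pixel: it shifts horizontally by f * tan(yaw) pixels
# (about 17 px here), identically for near and far geometry.
p = H @ np.array([w / 2, h / 2, 1.0])
print(p[:2] / p[2])
```

In a real renderer you would apply `H` to the whole frame (e.g. as a full-screen quad in a shader) right before scan-out, using the latest input sample.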
I absolutely love the recognition that kliksphilip and his brothers have been getting. It really is an amazing idea and would make everything so much better!
I have been in VR for almost 3 years, and as soon as Meta implemented ASW and ATW, I wanted this for PC games... I have been waiting for it for years.
Things like particle FX and transparency are where the issues really show up, not just at the edge. A spinning rocket, for instance, may warp in the inner sections where the fins were in the previous frame, especially if you pause the game, since the predicted velocity stays the same but the object has stopped moving. Oculus solved this for spacewarp by calculating velocities per object for better prediction, but that requires the game engine to support it on a per-object basis for all materials and edge cases. We wish we could have used this on Mothergunship: Forge, but had to disable it due to the visible warping. It has the potential to be an absolute savior though, since trying to hit 72 or 90 fps on what is basically a phone attached to your face is a huge performance challenge.
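The paused-game artifact described above is easy to model: extrapolating with the last-known per-object velocity looks great while motion continues, but overshoots the moment the object stops. A purely illustrative toy (made-up numbers, not Oculus' actual AppSW math):

```python
# Constant-velocity extrapolation, the simplest per-object prediction a
# warp can do between real frames.
def extrapolate(pos, vel, dt):
    # predict where the object will be dt seconds after the last real frame
    return pos + vel * dt

last_pos, last_vel = 5.0, 10.0   # units and units/sec from the last real frame
dt = 1 / 60                      # one warped frame later

predicted = extrapolate(last_pos, last_vel, dt)
print(predicted)   # overshoots to ~5.167, but if the object just stopped, the truth is 5.0
```

That gap between prediction and truth is exactly the inner-frame warping the comment mentions; per-object motion vectors from the engine shrink it but can't remove it for sudden stops.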
This is interesting. I'd love to see this feature compared to actually higher fps to see whether the perceived gain translates into an actual competitive advantage.
Whoa 2kliksphilip getting a shoutout on LTT before any of his brothers, imagine.
It would have been really nice to also do a very basic shooting accuracy test in each case with each candidate. I know the demo app itself unfortunately didn't contain such a feature, but it would have been REALLY interesting to see actual hard numbers.
"In the early days of VR" - damn, that one hits hard to hear
5:51
This actually seems interesting to try out. Some games I play, even on a 2060, may struggle to hold a stable fps, so perhaps this kind of thing will help in those situations.
Since you explored this topic, you should also do a video about foveated rendering.
If async timewarp becomes popular on desktop, I wonder if it makes sense to spend the extra time rendering the z-layers behind some of the foreground geometry, or behind special models that are 'tagged' to have render-behind turned on.
So, for example, you might want to render behind a fast-moving player model or a narrow light pole, but you wouldn't want to render behind a wall. Or maybe you only need to render behind up to a small pixel distance. I'm not sure how easy that would be to add to game engines?
I'm curious if you'd still get the same results if you add moving objects into the scene. Since the objects update their position at the true frame rate, I bet they would look super choppy.
Yeah, you're right.
If you've played the Teardown lidar mod, this is kind of the same (at least the mod has the same downsides as this).
Honestly though, it can't get any choppier, because the frame rate stays the same.
The examples were with really low frame rates, but if you had your normal 120 fps and a 360 Hz monitor, this technology would make a big difference.
Some games already separate physics fps from rendering fps.
The thing is, you always want consistent input no matter what. The alternative in your scenario is the objects are still choppy, but your mouse movement is also sluggish
I'm actually surprised LTT got a demo version without the moving objects (or at least didn't show them). Cus yes, animation still looks choppy according to the fps cap, but moving the camera feels smoother than butter 0-0
The latest version of the demo has moving objects, so you can see for yourself. (they look laggy, moving at the true framerate, as expected) x)
This could have insane potential for handhelds. The steam deck can usually pull at least 30 fps on new triple A titles, which seems to be perfect for making the picture much smoother
The coffee guy, the tech news guy, hi, he owns a display, Mark. Such incredible descriptions of the people and their roles
As a huge VR fanatic, seeing the tech that makes standalone VR possible put to use on a flat screen game is amazing!
I think one caveat here, which has not been mentioned, is that dynamic objects in the focus/center of the screen will still only update at whatever frame rate your GPU allows. I wonder how to handle those scenarios. Still a very worthwhile improvement for a lot of games, for sure!
From my experience with VR, while looking around is perfectly smooth, animated characters on screen update more slowly. But that really isn't much of a problem: there is no input lag from your HMD, you can look around perfectly fine, and the 45 fps the headset produces when reprojecting is still smooth enough to track targets.
The only real caveat is the input lag from your controllers. Moving your hands will feel less responsive when reprojecting than when running native framerate.
I wonder how this will carry over in desktop reprojection.
Linus: It's for free
Also linus: Makes no difference
Your point is?
@@SelecaoOfMidas Makes no difference
It can make games smoother for low-end PC gamers. The PC market will discourage it, because if the tech were developed further for PC gaming, high-end GPUs would become irrelevant and mid-tier GPUs would be enough.
Man, this is a really interesting and intriguing technology! I hope it gets noticed by GPU manufacturers and game developers and integrated into regular games too!
You guys were so young and fresh faced back in the early VR days!
Plouff: owns a display
This kind of thing is something I've actually thought about since way back when motion interpolation became common in TVs, plus I'm familiar with 3D (not real-time rendering, only offline). My thinking was that having motion data, depth, etc. should be enough for some kind of in-game motion interpolation, except not really interpolation but for a future frame. Even without taking controller input into account, creating that extra frame from the previous frame's data should give an extra feeling of visual smoothness (you'd end up with roughly the same latency as the original fps). And since it works directly within the game, we should be able to account for controller input, AI, physics, etc. when creating the fake frames, for an actual latency benefit: basically the game engine runs at double the rendering fps so the extra data can be used to generate the fake frames.
For the screen-edge problem, the simple solution is to overscan the rendering (or just zoom the rendered image a bit) so the game has extra data to work with. Related to this is the main problem with motion interpolation and frame generation in general: disocclusion, where something that was not in view in the previous frame becomes visible in the current one. How can the game fill that gap when there is no data for it? Nvidia, I believe, uses AI to fill those gaps, and even with AI it can still look bad. But as people using DLSS 3 have said, in motion you don't really see it, which is actually good news for non-AI solutions: if people don't notice those defects in motion, a non-AI fill (a simple warp or something) should be good enough in most situations. You also wouldn't need the optical flow accelerator; Nvidia uses optical flow to get motion data for elements that aren't represented in the game's motion vectors (like shadow movement), but that's not that important, since most people probably won't notice the shadow moving with the surface (rather than on its own) in an in-between fake frame.
For a more advanced application, I'm thinking of a hybrid approach where most things are rendered at, say, half the fps and the other half reuses previous frame data to lessen the rendering burden. So unlike motion interpolation or frame generation, this approach would still render the in-between frame, just render less of it: maybe the disoccluded parts, and maybe decouple the screen-space stuff and shadows so they render at full fps instead of half. The game would end up alternating between high-cost and low-cost frames.
When I first thought about this, AI wasn't a thing, so I didn't include any AI in the process. Now that it is, some steps could probably be done better with AI. For the disocclusion problem, for example, rather than rendering the disoccluded part normally, you could render it with flat textures as a simple guide for the AI to match that flat look to the surrounding image, which might be faster.
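The disocclusion problem described above is easy to see in a toy 1-D forward warp: near pixels shift more than far ones (parallax), and the region the near object vacates has no data behind it. This is just an illustration, not how any real engine or DLSS does it:

```python
# Minimal 1-D forward warp showing depth-dependent parallax and the
# disocclusion holes it leaves behind. Positions nothing lands on are
# holes (-1) that a later pass (AI or otherwise) has to invent data for.
def forward_warp_1d(colors, depths, shift_near_px):
    n = len(colors)
    out = [-1] * n                                 # -1 marks a disoccluded hole
    zbuf = [float("inf")] * n
    for x in range(n):
        shift = round(shift_near_px / depths[x])   # parallax shrinks with depth
        nx = x + shift
        if 0 <= nx < n and depths[x] < zbuf[nx]:   # nearest surface wins
            out[nx] = colors[x]
            zbuf[nx] = depths[x]
    return out

# A near object (color 7, depth 1) in front of a far wall (color 0, depth 4):
colors = [0, 0, 7, 7, 7, 0, 0, 0]
depths = [4, 4, 1, 1, 1, 4, 4, 4]
warped = forward_warp_1d(colors, depths, shift_near_px=2)
print(warped)   # → [0, 0, -1, -1, 7, 7, 7, 0]  (holes where the object used to be)
```

The near object moves two pixels, the wall barely moves, and the two `-1` cells are exactly the gap the previous frame has no information to fill.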
Interpolation for the future is called extrapolation
Whenever VR is mentioned I get happy... just super sad LTT does not do more VR.
I thought this was VR when I first clicked because VR has had this since 2017 at least. Awesome to see this finally make its way to the flatscreen :D
I honestly think this might be really great for ultrawide and super-ultrawide gaming, as even more of the rendered frame is already in your peripheral vision, so the suboptimal edges will be even less noticeable.
As a dude with a 32:9 monitor, I'd settle for games actually supporting my monitor resolution AND aspect ratio. Most games I have to play windowed mode, because even if they allow me to go full screen (and don't add any black bars), usually the camera zoom is completely fucked and/or the UI elements are not properly positioned.
Heck, even a newer game such as Elden Ring just give me two 25% black bars on each side, but with mods it actually supports 32:9. Wouldn't have been that much work to support it by default. A checkbox for disabling black bars and vignette, and an option to push the UI elements out to the sides and all would be fine. But for some reason, most developers don't even care to support newer monitors with weird resolutions.
@@Imevul it did support it by default but fromsoftware disabled it intentionally because of "competitive advantage" or some bullshit
@@Imevul I also run a 32:9 display and feel you, but don't be surprised: with Elden Ring, FromSoftware doesn't care about proper PC support. About the UI thing, I actually can't think of a game off the top of my head that supports 32:9 without also having an option to adjust UI elements; in my experience that's very common and has been a thing since consoles 10 years ago.
The issue with asynchronous reprojection is that with complex scenes or fast action it creates visible artifacts and weirdness. This is where AI comes in, like DLSS 3 frame generation: by using deep learning, it can insert additional frames more accurately and realistically. That's really the way I see the future going, along with AI upscaling. It has to be; otherwise we're going to need a nuclear reactor to power the future RTX 7090 or whatever.
An even bigger issue is that in VR games (where the camera can just move through walls if you move your head in the wrong place), the only thing async timewarp has to do is take the latest headset position and reproject there.
In a regular game, however, you can't just take keyboard/controller input and reproject to a new position based on it, or your character would go through the floor, walls, or obstacles in the map. Instead you would have to run full collision detection and physics simulation to tell where the camera is supposed to be in the reprojected frame.
This not only makes it massively harder to implement than in VR, where it can be automatic, but also increases the chance of hitting a CPU bottleneck and not gaining much performance anyway. Combine that with the visual artifacts and other issues and you start to see why game developers haven't spent their time implementing this before.
Oculus/Meta is already doing that with their 3rd-generation reprojection tech, Application Space Warp (AppSW). It meshes reprojection with game-engine motion vector data, with a splash of machine learning, to generate the best-looking VR reprojection yet. The recently released Among Us VR on the Quest 2 is a good example of a game using the latest AppSW techniques. All thanks to John Carmack, the granddaddy of Asynchronous Time Warp (ATW, the first mainstream asynchronous reprojection).
@@meowmix705 I don't play enough Quest 2 games to be able to speak on AppSW (I play PC through Link, and admittedly I almost always disable regular ASW because of the inherent visual ghosting). Carmack may be the GOAT... I also get a kick out of the fact that he's probably the most reluctant Meta employee ever, but he sticks around because he just loves VR that much.
@@shawn2780 Yup, Carmack is the GOAT. As for PC-ASW, the Rift mostly uses ASW 1.0 (with the typical ghosting artifacts). ASW 2.0 is only used for a handful of games (it greatly diminished the ghosting but required depth data from the game, which many games did not supply). ASW 3.0 is unfortunately a Quest exclusive (for now?); they've rebranded it as AppSW to avoid confusing it with PC-ASW.
An idea: If you force the 'game' to render at a wider FOV (Like +20 degrees each way) you could potentially reduce the 'edge weirdness' and make a more smooth experience? Obviously with 90-degree plus flicks it'll still be obvious, but I feel like just a little tinkering could drastically improve the user experience.
I want to say it was either Doom or Quake that used a similar technique with its raycasting, intentionally drawing a little beyond the player's FOV to prevent black spots at the edges of the screen in fast-paced play.
Lol. Don't worry Mark the camera confidence will come. You just gotta be in a few more videos and you'll be a regular Riley.