The future of upscaling?
- added 30 Oct 2022
- DLSS 3's framerate doubling feature is all well and good, but that lag is painful. But... what if there was a way to reduce it? Welcome to the wonders of Framerate Independent Mouse Movement.
Download Comrade Stinger's demo here: drive.google.com/file/d/1lSON...
It took him one evening to make, and another few hours to implement the position reprojection.
YES! THIS IS THE THING I HAVE WANTED! Ever since Oculus demoed reprojection with a dev kit prototype years ago, I have been begging the universe to make developers realize the brilliance of this!
Exactly this technology has been around for years in VR. DLSS 3, though, is hardware accelerated, which is nice, as there was always an initial penalty using Asynchronous Timewarp (Meta) or Asynchronous Reprojection (SteamVR) before the extra frames were noticeable, which would sometimes mean lowering graphical settings. However, you know what, even hardware-accelerated reprojection has been around before DLSS 3. Jeri Ellsworth, CEO of Tilt Five, has hardware reprojection in her product of the same name, which allows the AR projectors in each pair of glasses to run at an incredible 180 Hz.
so funny seeing you here! I remember listening to your agile intrepid demo songs on sevenstring over 10 years ago hahah.
It's actually not easy to implement: you need to integrate it with the camera controller in the game, with physics, etc.
In VR it's easier because you already get all player movements from outside the game, via tracking.
Reprojection is not in any way connected to framerates. You need to know how rendering works to understand it.
@@sguploads9601 isn't reprojection just some fancy algebra?
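In essence, yes. A toy sketch of that algebra in Python, assuming a pinhole camera and yaw-only rotation (all function names and conventions here are illustrative, not from any real engine):

```python
import math

def rotate_yaw(p, yaw):
    # Rotate a 3D point around the vertical (Y) axis.
    x, y, z = p
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z, y, -s * x + c * z)

def project(p_cam, f=1.0):
    # Pinhole projection: camera-space point -> normalized screen coords.
    x, y, z = p_cam
    return (f * x / z, f * y / z)

def reproject(screen_xy, depth, old_yaw, new_yaw, f=1.0):
    # Un-project a pixel using its stored depth, then re-project it
    # with the new camera rotation: the "fancy algebra" in question.
    sx, sy = screen_xy
    p_cam = (sx * depth / f, sy * depth / f, depth)
    p_world = rotate_yaw(p_cam, old_yaw)       # undo the old camera rotation
    return project(rotate_yaw(p_world, -new_yaw), f)
```

With identical yaws the pixel maps back to itself; with a changed yaw it slides across the screen, which is exactly the warp being discussed.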
What I fear most is that companies will just target the same hardware requirements to run things at 60fps, for example, and use all these tricks to simply save themselves the bother of optimizing the game
With how things are going, making it run at least 60fps is actually quite good. But I get your point. At the end of the day, what matters is the actual experience. Everything in 3D graphics is a lie. Most shadows are baked and are not real time. If you take your clothes off in game, it's just an empty void.
If lettuce tasted 100% like pizza, I'd be fine with lettuce
The only issue is if they make lettuce taste like wet pizza, and most people don't give a shit except me. THEN it becomes an issue.
You could say the same about every technology or hardware upgrade ever
@@zarwil And I do
@@zarwil On that note, actually, it's proof of Astra's argument. Ever play an old game on modern hardware and see how high the FPS flies (if it's not CPU bound)?
The old games run 10 times faster even though they don't look 10 times worse.
It's what they're doing now with DLSS and the like.
Why hadn't I thought of this? Reprojection was such a good idea, but I never even thought of putting it into normal games. Cool idea, undoubtedly will be coming in the future as gpus become more reliant on tricks and software to provide improvements, rather than ever diminishing chip speed improvements.
Sinky-sama.
And the cool thing is rendering as we know it is just the culmination of software tricks upon software tricks to bodge our 3D world into something that can be reasonably calculated 10s-100s of times a second
Down the line looking back at tech like this once it's adopted, iterated upon, and eventually matured, it'll be hard to imagine a time when games didn't make use of it
Why are wee sooo incrediable stuuoooopedd?
@@TheoHiggins You can simulate mouse input in a separate thread (Diabotical, GoldSrc) at a fixed frequency. I don't see why the same couldn't work for VR.
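A minimal sketch of that separate-thread approach in Python (the class name, the 0.001 sensitivity, and the injected mouse source are all made up for illustration; the games named above do this natively in their engines):

```python
import threading
import time

class InputPoller:
    # Polls an input source at a fixed frequency on its own thread,
    # independent of however slowly the render loop is running.
    def __init__(self, read_mouse, hz=1000):
        self._read = read_mouse          # callable returning (dx, dy) deltas
        self._dt = 1.0 / hz
        self._lock = threading.Lock()
        self._yaw = 0.0
        self._running = False

    def _loop(self):
        while self._running:
            dx, _dy = self._read()
            with self._lock:
                self._yaw += dx * 0.001  # arbitrary sensitivity factor
            time.sleep(self._dt)

    def start(self):
        self._running = True
        threading.Thread(target=self._loop, daemon=True).start()

    def stop(self):
        self._running = False

    def latest_yaw(self):
        # The render thread reads the freshest camera yaw each frame.
        with self._lock:
            return self._yaw
```

The render loop, however slow, just calls `latest_yaw()` every frame and always gets an up-to-date camera, which is the whole trick.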
The combination of all these technologies is the perfect mix for gaming
Casual gaming
@@Systox25 I think some of this can also benefit competitive games. If you have a 240Hz monitor and run a game at 144Hz, input lag could be reduced by polling your movements at 240Hz.
@@Dionyzos the whole "can't benefit comp games" thing is dumb. This is going to enable 1000fps@1000Hz (yes, they are actually working on 1080p@1000Hz monitors), which will give superior mouse feel that's perfectly consistent. It will be better for muscle memory with little to no sacrifice to base framerate.
I prefer it for baking.
@@Dionyzos Yep, I find CSGO utterly unplayable since it drops below 40 fps on my 60Hz PC every now and again. But it's not because the image is any more or less stuttery - my eyes can't perceive that difference - but my muscle memory can. Tracking in that game is near impossible for me without weird flicks interspersed through otherwise smooth movement. Meanwhile in Quake Champions, a game I can run at a solid 60fps, I have no issues with tracking. It's all down to how responsive the input is, and if the input matches what I see, boom, framedrops and hitches become a thing of the past.
I yearn for the day something like DLSS or FSR 2.0 is implemented on a hardware level. And even more so for the day something like this spatial warping is implemented. Especially as a developer. Not having to worry about optimisation as much would be great so I can focus on things like gameplay, sound, UI, etc. Y'know, the actually fun stuff.
That's so clever. After testing out the demo, I can easily say that I cannot see or feel any difference between uncapped 750fps and 60fps capped with both tweaks. Even if I move the mouse super fast, it's not noticeable. If we added a slight motion blur to both, it would literally be the same. Also, the timewarp 3D screenshot is such a brilliant idea to interpolate between frames. Now I'm wondering why these weren't a thing for all these years. Finally, I did try out 30fps; input lag is definitely a lot better but the ghosting is very visible and distracting.
It's noticeable, but it's up to each player to decide whether they want ghosting or laggy 30fps.
I'm sure this same technique could be improved even further to hide reprojection artifacts. This would be great to have for console games because they run at 30fps a lot of the time.
per-pixel mb
do not add motion blur to anything. ever.
@@AriinPHD motion blur is great for racing games
This is one of your most important videos in recent times. Thanks a lot for bringing this to attention! I remember how I HATED the idea of VR because of "the lag that would be there if you just moved your head", which was before I knew that it was already a solved issue. I was so shocked (positively) when I got my first Oculus in early 2019 to see how responsive it felt after all.
Yeah, this was one of the things they were laser-focused on when they were developing the Rift and Vive. They obviously knew from the VR experiments in the 90s that it was going to be ESSENTIAL to have zero lag when turning your head, otherwise motion sickness was going to be as rampant an issue as, well, people who have never tried VR think it is.
@@fireaza motion sickness still is an issue for many VR users. It’s why the Zuck is wrong and it’ll never be fully adopted by the general public
@@Junya01 nah it's gonna be fully adopted. There are people addicted to this shit like people were addicted to social media. That's exactly why Meta, a social media company, is investing so heavily into this. Also the tech has already found its place in industrial settings. It's already adopted by the military and some police forces because of how versatile it is. It's just in its infancy. With the costs of everything rising, it's easier to have pretend virtual things than real ones.
I like VR, but at the same time I know for sure it's going to be hell. Imagine Ready Player One but without the good ending. I started realising all the little details after actually using VR. It's fucking annoying me how I didn't notice so much shit in that movie. Smaller IRL houses since everyone lives online.
@@Derpynewb motion sickness, dude. 3/4 of VR users already experience it in some form or another, you think that shit is gonna go away just cuz the tech is getting better? People aren't gonna pay extra to do shit in VR when they can do it cheaper, faster, and without feeling nauseous by just doing it IRL or through a screen.
@@Junya01 there are a few things you cannot replicate from VR to desktop. I've encountered some people that had horrible, and I mean horrible, motion sickness. They bruteforced their way through it till they no longer had it. You can lose motion sickness. I initially got a bit motion sick and personally lost it too.
The other thing is the motion sickness is caused by a disconnect from visual senses and your biological accelerometers conflicting. And there's research being done on that too.
Vr does give you motion sickness but you can get used to it. Its just right now there's no compelling content to convince the masses to.
FPS games give some people motion sickness and people got used to that too. However, the sample size isn't big enough to say if most people would be able to get over VR motion sickness, since there are some people who can't get used to desktop motion sickness.
Everything I've said is anecdotal and the sample sizes aren't big enough to really give a concrete conclusion. Personal opinion is it will take off.
hey, the name for that "3d screenshot" thingy for frame smoothing is "temporal reprojection", I think. I used this technique in a raytracing shader for Minecraft once; it's actually really cool and not demanding at all either, just a little bit of matrix math
Correct!
@@bluescorpian lol it's actually not that much math to do this, but there's of course more to it than that, like discarding pixels so you don't accidentally try to reproject when you shouldn't (think a pixel being occluded or something)
@@bluescorpian Matrix based calculations aren't that complex
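The occlusion handling mentioned above usually boils down to a depth comparison. A toy version of the standard depth-based history rejection (the 5% threshold and all names are illustrative, not from any particular renderer):

```python
def accept_history(reproj_depth, stored_depth, eps=0.05):
    # Reject the history sample when the reprojected depth disagrees
    # with the depth stored at the target pixel (a disocclusion).
    return abs(reproj_depth - stored_depth) <= eps * stored_depth

def resolve_pixel(history_color, history_ok, current_color, blend=0.9):
    # Typical temporal blend: lean on history when it is valid,
    # fall back to the freshly shaded sample when it is not.
    if not history_ok:
        return current_color
    return tuple(blend * h + (1 - blend) * c
                 for h, c in zip(history_color, current_color))
```

Without this rejection step, pixels that were hidden last frame smear their wrong history across newly revealed geometry, which is one source of the ghosting seen in the demo.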
It's also similar to the framerate smoothing done in Oculus VR
If a game happens to hang up, you'll see the same kind of weird artifacts as seen at 6:25
This is not correct. Temporal reprojection is to take an element from the current frame and transform it into the space of the last frame (using the inverse of the current projection and the last frame's projection). But this is all completely different. This is rather using something called ray marching, which can be expensive, but there are tricks to speed it up, and the closer the projections are, the larger ray marching steps you can take and the cheaper it gets. Honestly, the implementation here looks a bit rough.
If you want to compare this technique to something common in games, compare it to screen space reflections. It's the same technique, but the rays are generated from the camera near plane and in the projection direction. So all the tricks you use to get smoother results and not miss details in SSR can be used here too.
But SSR is a relatively expensive technique. Many games do it at quarter res to save performance. Of course, it will be cheaper here if we can take larger steps. But that is also why we see the big slices that break up, as you can see in the video.
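The SSR-style hit search being described can be reduced to a 1-D toy: step a ray across a depth buffer until it falls behind the stored surface (purely a sketch of the idea, not the demo's code):

```python
def screen_space_march(depth, x, ray_depth, dx, d_step, max_steps=64):
    # depth: per-column scene depth, a 1-D stand-in for a depth buffer.
    # March until the ray dips behind the stored surface -> a "hit".
    for _ in range(max_steps):
        x += dx
        ray_depth += d_step
        xi = int(x)
        if xi < 0 or xi >= len(depth):
            return None          # ray left the screen: no data to reuse
        if ray_depth >= depth[xi]:
            return xi            # hit: reuse the colour at this column
    return None
```

Larger `dx`/`d_step` values are cheaper but skip thin details, which is exactly the quality/cost trade-off (and the visible slicing) described above.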
The rhythm game "osu!" has this feature! Audio, graphics and input are all on separate threads, meaning drawing can run at 240Hz, while input is polled at 1000Hz
this is only true for the ‘lazer’ version I believe. I haven’t really noticed much of a difference as the game runs fast on my computer anyways (it’s not a very demanding game)
I don't think that actually does anything, because the screen displays the incorrect position of your cursor, meaning you cannot adjust its movement path as fast as if it was actually native. It doesn't matter if the input is at 240Hz because you can't see where anything accurately is, and hence you cannot react accordingly, IMO.
@@Derpynewb True! However, that means the actual cursor position is much more accurately mapped to real movement. For example, if your game runs too slow, one frame your cursor is outside the hit object, and the next it's out the other side, even though you did aim correctly :p
Older arena game called Reflex Arena also has this. Hitreg is processed at 1000hz.
@@Derpynewb Our own brains have significant processing latency. Even the people with fastest reaction time do not beat 100ms on visual stimuli. (We can respond faster to audio cues.)
If you played a game like osu! by moving your cursor, observing the new position of the cursor, and then clicking. You'd be much too late.
Similarly in shooters, you don't aim by moving your cursor, then seeing if it's on top of the enemy, and then clicking, even if your click-to-photon latency was 0ms, it'd be much too slow.
Point is, it matters a lot.
This was actually quite eye-opening. It makes sense now why turning my head with my Rift headset still feels smooth despite a game's framerate sitting around 25-40 FPS, but when I move around it feels juddery and laggy with visual artifacts (because of Asynchronous Space Warp).
I remember being blown away by the concept of Foveated Rendering back when I saw the Oculus presentation. This is just as mindblowing to me right now. Just imagine the visual fidelity we can achieve and make playable with this.
Congrats on the LTT mention Phillip! Glad two worlds collide
Its like RTS games draw mouse cursor outside of game render loop to keep it responsive, just now we want it for whole screen :)
they have been using hardware cursors in these kinds of games for eons... That is why they feel responsive, because the mouse input is direct. But this works for the cursor and nothing else; the game will still feel unresponsive if it is running at 10fps, especially if you move the camera around.
@@Trikipum Well of course at 10fps, but without a hardware cursor even 30fps would be horrible
Kliks empire expands to Canada
Big papa Linus feature
Holy fuck LTT straight up shouted out Philip, what a timeline
You have possibly inspired a real use-case for this thanks to Comrade Singer and LTT pushing your vision Philip
Incredible! I just tried it with a render target of 60 with a 280hz (yeah, don't ask) monitor, and aside from the little bit of weird ghosting on objects close to the camera while moving the character, it's amazing.
I would have little-to-no reason not to turn this tech on on literally every game, which as others have pointed out could be a problem when you think about the progress of computing power and game optimisation in general.
Why do you have a 280hz monitor?
@@starlight_garden I did say don't ask.
(It was available for around the same price as a 240hz one so why not.)
@@RandomBruh AW2723DF?
Your video was mentioned on LTT (and properly credited). Cool stuff
You made it to LTT
Congratulations on making it into and inspiring the newest LTT video with this one :)
The Unity game engine already has this feature ("frame independent input"), but only if you use the "New Input System" (Unity devs will know what that means). Unity introduced a new way to read user input in the last ~2 years that implements this feature. VR games made in Unity (which is most of them) use the new system.
Just remember that everything shown here will only work reasonably well in an FPS where the camera just rotates around a fixed point when you move the mouse. Any third-person camera would need the last technique shown here, a lot, and would break much more easily
YUP, still cool though
i hate the games where your view orbits slightly off-center from your character and does not stay centered, and stuff that is very close to your view moves in a very strange way while you look around
Developers can tweak the peripheral resolution and size of the image to a point where it's less noticeable or even give the option to tweak it yourself. This would of course come with a bigger performance impact but as there is a net benefit with Frame Generation it's an overall improvement. Let's say we're going to the extreme without reduced peripheral resolution and render a 4K Image but only display it on a 1440p display. This should be enough to even compensate for quick camera movements while still giving an overall Framerate increase with Frame Generation as 4K does not halve the framerate compared to 1440p while FG doubles it. Add Foveated rendering to this method, render the peripheral pixels at DLSS Ultra Performance or something, and the results will be even better as noticeability decreases with increased camera speed.
@@Dionyzos you do know that scaling resolution does not grow the pixel count linearly, right? 1440p is about 3.7 million pixels; 4K is about 8.3 million. So you will pay over double the cost for rasterization and all post effects. The gains you speak of will not be as big as you think. There is a reason why no game with SSR renders outside the frame to improve edge reflections: it is just not that cheap.
And no matter your buffer sizes, the TPS camera issue will not be solved. Depth peeling might be stronger for that, like a low-resolution, wider-FOV depth peel. But I can see a lot of issues with that too. Still, for a standard FPS that rotates around a fixed spot, this technique has interesting potential. One more thing to remember: with DLSS 3, no new depth buffer is generated as far as I know, so the depth reprojection mentioned here would not work without changing DLSS 3 significantly and making it more expensive. And we can easily see that all transparency will have big issues. Like, imagine a muzzle flash.
So what would probably be best to do is to not render the weapon and weapon effects in to the regular scene buffer. But render those parts at 100% frame rate and overlay on top after reprojecting.
That would solve other issues too. Like making feedback from the weapon have a consistent timing with your click, and not depend on if you clicked on a generated frame or not.
@@Dionyzos oh, and also, claiming 4K would be enough buffer for 1440p is just bogus. Maybe for gamepad players, but a mouse can easily turn 90 or 180 degrees in one frame, especially when we talk competitive players.
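A back-of-envelope sketch of that trade-off (the flat-projection cost model and all numbers are simplifications, not from any engine):

```python
import math

def overscan_fov(base_fov_deg, turn_rate_deg_s, real_frame_hz):
    # Extra horizontal FOV needed so a reprojected frame still has pixels
    # after turning at turn_rate_deg_s for one real-frame interval.
    margin = turn_rate_deg_s / real_frame_hz   # degrees turned between real frames
    return base_fov_deg + 2 * margin           # margin on both sides

def render_cost_ratio(base_fov_deg, padded_fov_deg):
    # Rough pixel-cost ratio of widening the frustum: flat-projection
    # width grows with tan(fov/2); assume height is padded by the same factor.
    w = (math.tan(math.radians(padded_fov_deg) / 2)
         / math.tan(math.radians(base_fov_deg) / 2))
    return w * w
```

A 180°/s turn at 30 real frames per second needs about 6° of margin per side, and even that modest widening already costs roughly 50% more pixels at a 90° base FOV; covering true mouse flicks this way gets expensive fast, which is the point being made here.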
Holy. That demo is actually super impressive!! I'm mostly interested in that skewing effect thing. AMD DLSS 3 ALTERNATIVE COMING SOON?!!! I'm shocked that hardly any (if any) real games actually use mouse-independent movement; honestly, in hindsight it seems so obvious
It's amazing how combining fairly simple tricks can give such an impressive result!
Stormworks: Build & Rescue implements something like this. When a big/laggy object or vehicle is loaded, the physics engine slows down, but the camera movement keeps locked at 60fps. It is quite obvious when the physics are running at 15 fps, but it helps reduce camera stutters and makes it easier to control
It's really good
The Source engine has had this all this time. Physics in multiplayer runs way slower than in singleplayer, yet is independent of the scene rendering frame rate
KSP also did this, but nowhere near the same level as Stormworks.
this has nothing to do with what the video talks about. What you are talking about is called time dilation and has been done for decades in certain games. The engine simply goes into slow motion so it has time to process everything and keep a decent frame rate.
It's kind of not the same thing. By that logic you could say Minecraft does the same thing, but the real reason is that the world you play in runs on an internal server and is disconnected from the render you are seeing.
Overscan (the concept of rendering stuff slightly off screen) is already used infrequently as a solution to certain screenspace artifacts from ssr etc.
Would certainly be a good idea for this kinda thing
Great video! I'm hoping that we don't have to wait too long for more advancements like this, I think a game engine can combine all of these technologies into 1 smooth and configurable experience that would advance gaming graphics and reduce the barrier to entry at the same time. Nice callout to Foveated rendering too.
Congrats on being featured in LTT video!
who would have thought this would be featured on LTT im stoked this technology is getting the attention it deserves
It's amazing to see how your very unique perspective on tech topics gets appreciated more and more in the scene.
Beginning of this video was very confusing but I am glad I was there to see this demo, really impressive stuff
Congrats on being showcased and mentioned on LTT! Really love the new content and approach to tech you've been doing.
Which LTT video was it? I'd like to hear what they said.
This feels like the kind of thing I go down a rabbit hole of when implementing mouse movement in a fresh project lol, good job
Seeing DLSS 2.x and FSR 2.x simultaneously in a game always cheers me up. Being able to enjoy "4K" and a big boost to fps (especially for high refresh rate panels) is just amazing. Zero or minimal loss to image quality and the game feels much smoother. It really looks like magic at times.
CSGO major starting and a 2kliks video. Oh boy we're spoiled
This was amazing to witness even just as a demonstration, thank you for posting this video, let's hope the future of budget options in gaming looks better like this.
Really like this kind of content, these advancements have their uses and make sense. Super interesting topics lately!
Love this video! Absolutely love VR and never even considered its uses for flatscreen gaming. Great job! ❤
I think this is it. This video clearly states my exact frustration with low framerates and offers a clear solution that already works. I hope someone at Nvidia sees this because this is a fantastic idea.
That is fascinating, and that there are already examples of the idea in action is really cool.
Thanks dude, rlly helped me understand this concept, wasnt like most videos where i cannot ever keep up with what is being said.
Great video, I had been thinking about reprojection outside VR and it's great to see that demo!
After seeing this I almost instantly wanted to try to implement this into the game im working on right now, but after thinking about it there are a few big issues with implementing this into an existing project made in a exiting game engine like unity:
1. Unity doesn't support async rendering, and I don't think most modern game engines do either. This means that you can't really push multiple frames per frame like that in a good way.
2. One possible approach around that could be to split the work of "1 real frame" into parts, IE if you're doing 4 "fake" frames per "real" frame you'd maybe do the cpu calculations on the first 2 frames and then the actual rendering on the next 2 (you could even do this in parallel, IE while you're rendering frame #1 you're also doing the cpu work of frame #2 in parallel). This would still probably be slower than if you could actually do async rendering properly though.
3. Even if you did that, most existing games rely on doing operations "per frame", all of which you'd have to redo for this new system, which is A LOT of work.
So you'd really need to write your own engine to do this effectively, which I mean you can 100% do, but is suddenly a huge project sadly. Sounds like a really fun side project and im 100% gonna try it out haha.
(also, the demo you showed off is made in Unity, but they just render the whole "real" frame in the last "fake" frame possible, meaning you don't really get anything extra out of it sadly (except maybe lower GPU usage on more powerful GPUs, which is nice I guess). It certainly shows off how cool the technology could be if engines properly supported it though!)
yeah, this should be supported at the engine level. The only thing you have to separate from the game logic is camera movement, so that the engine can take the "3d screenshot". Imagine the next Unreal adding a checkbox for this; then you just have to implement camera movement in this specific way, boom. Unlocked.
It would be WILD, especially for consoles
Interesting idea. IIRC, the RTS Supreme Commander did something similar, decoupling inputs and graphics processing from simulation processing. This was done to allow updates to process in .1s intervals to prevent the simulation from lagging under load, but the high refresh still let the game be smooth for the time and inputs feel responsive (since clicks and ui still responded as fast as the engine could render).
What you are talking about has been done since the dawn of video games. What the guy in the video is talking about is something different.
@@mrfatuchi Well, I get that different items are often split off from the main gameplay loop into their own threads, and often update at their own frequencies, but I was under the impression this specific proposal was a decoupling of the engine's simulation from the graphics and UI specifically? So responses to player inputs are tied specifically to the graphics refresh rate?
I guess I may be confused...?😅
@@TheFirstObserver What you're describing is doing game logic in a slower thread, and interpolating between states in the render thread while rendering as fast as possible. The video's proposal is to slap on another thread and transform the previously rendered frame to the new camera transform whenever the render thread hasn't produced a frame in time.
@@alex15095 Oh, OK. I think I get it now. Thanks for the clarification. 🙂
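The scheduling decision described in this thread fits in a few lines. A toy vsync loop (times are in arbitrary units; "warped" stands in for the reprojection step, which is not modelled here, and it assumes the first real frame arrives by the first vsync):

```python
def run_display_loop(render_times, vsync_dt):
    # Simulate a vsync-locked display. render_times lists how long each
    # real frame takes to render; when a frame misses the vsync deadline
    # we "reproject" the previous real frame instead of stalling.
    shown = []
    render_started_at = 0.0
    frame_idx = 0
    t = 0.0
    while frame_idx < len(render_times):
        t += vsync_dt
        if t >= render_started_at + render_times[frame_idx]:
            shown.append(("real", frame_idx))   # renderer made the deadline
            render_started_at = t               # next real frame starts now
            frame_idx += 1
        else:
            shown.append(("warped", frame_idx - 1))  # missed: warp the old one
    return shown
```

With `render_times=[1.0, 3.0]` and a vsync every 1.0, the display shows real frame 0, two warped copies of it, then real frame 1: motion stays at display rate even though the renderer hitched.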
Hey glad people get to see the brother of 3clicksphillip
Fantastic video, really interesting stuff, perhaps because I studied real-time 3D technologies at University. This is really well researched, and cleanly worded. Great work!
That looks nice ! I look forward to see technologies like this in games !
I think Stinger's onto something with that 3D Projection tech.
I've been thinking about this as a possibility for a while now, but having someone with this much charisma explain it convinces me far more than I ever convinced myself :)
This is similar to how a phone’s refresh rate can be 60Hz but its touch sample rate can be triple that, which makes 60Hz feel much smoother and more responsive than it would if touch detection was pinned to screen refresh rate
I don't think that's a good comparison. That would be similar to how mouse sampling rates are usually 500 or 1000 Hz.
Johnathan is correct. This is already how our inputs work.
Can't believe you convinced me, but yeah, this is a great idea. Nicely done; the demo really did it.
Wow the demo is really great
I have wanted something like this forever. For a very long time I felt like something like this should have existed.
It could be my mind remembering games as better than they were, but I feel like older games (can't put an exact date on it, but maybe around the PS1/PS2 era or earlier?) were more responsive at low frame rates than most games today. But I could be wrong. Either way, I really want this. It reminds me of playing at 120-240fps on a 60Hz screen. You can't see the extra frames, but the game felt drastically quicker/smoother because of the low frametimes.
But if you can make inputs a separate thing polling at a higher rate then you don't have to draw a lot more frames to feel this effect.
LTT didn't bring me here but you getting a shoutout once more proves your in depth videos are quality content!
I think we will always need these tricks no matter how good the hardware becomes because the artists become more ambitious and push it to the limits always
Linus Tech Tips redirected me here. This makes even more sense in cloud gaming, where images are rendered and streamed and your inputs clearly lag from the network round trip. There's something called GGPO rollback that dealt with this for fighting games; put together with this, it could improve things even further.
What an awesome demo. I'm amazed this hasn't taken over gaming
Ooooh, this has cleared up a few things about synthesized frames in VR for me. I always assumed it was like motion interpolation in TVs, where it's generating a "fake" frame, but it's actually just moving the viewpoint on the current frame? That would explain why head movement always feels smooth, even when you're getting a bad framerate. This would also explain why when the FPS gets really low, that you can sometimes turn your head and see the edges of the image. This must be one of the tricks Valve came up with when they were developing the Vive and reached the point where none of their playtesters were getting motion sick anymore
This already exists with SpecialK and it's Latent Sync + Input offset.
EDIT: The input not being tied to framerate for games that is
err no specialk does not have async timewarp wtf 17 people upvoted this
@@brett20000000009 read the EDIT (that was made seconds after the comment was made)
I'd never heard of this concept before I love it
This, this is what we need!
Great video. Never heard of these approaches to smoothing out the 30 FPS experience. But now I'm very interested.
very cool idea for sure. responsive mouse movement really is the most important
Someone got the Linus bump
Oh my god, this is genius! I actually didn't even realise the game was running at 15 FPS when you had 144Hz mouse movement, and at a glance the extended rendering Comrade Stinger's demo comes with is good enough. The only time I realised was when you moved the character itself.
Really fascinating!
Well, for the first third of the video I was like "you can't do that...", but having watched the whole thing: these are great ideas and tricks that can make gaming better!
excellent video! very interesting! thanks!
wow, I want this in all games. Truly revolutionary, and I've thought about this stuff for years because VR makes it work so well.
Actually, now that I recall it, I've seen this reprojection thing years ago in some VR demo
You could pilot a Mars rover while basically being located in a dark room, with a small screen of what the rover sees floating in front of you. And you could use your controller to steer the camera, updating that view as if it were a flashlight of sorts
At least that's how I remember it
The problem is that regardless of how you try to uncouple the mouse input from the gameplay, input lag will be the same. You are still stuck waiting for your input to manifest in the game. UNLESS you want the game to calculate stuff without it appearing on screen, which introduces a whole lot of other issues: clicks won't register on screen at the correct visual timings. (It will just end up feeling like lag compensation and how you can get kills or get killed after moving behind a wall.)
It’ll be very small though, so not much of an issue.
Just make raycasting updated on the same thread as the mouse movement, problem solved
There's no reason that a game can't process mouse clicks faster than the base framerate. There might be a delay in seeing the physical bullet, but the position on screen will be correct. It shouldn't be too difficult to run the game engine faster than the graphics, since we're already doing that with the interpolation anyways.
As he said, this should not be used for competitive games. The "click problem" would nearly only appear when you make an inhuman flick and don't even know whether you should have hit or not.
I love videos like these.
When I wrote the renderer for JanusVR (which did have a 2D KBM driven mode) I went with a system that prioritized mouse/head movement latency over keyboard or mouse click latency. Rather than being pipelined like most renders it would try to always render to vsync and would re-render a previous frame's content at the new camera position if the CPU "frame builder" thread was unable to provide a new frame in time. Our scene content was user controlled so badly behaving scripts or poorly built avatars (per triangle unique materials......) was common and this kept motion-to-photon latency more stable. It shouldn't be too difficult to find or modify one of the time warp or spacewarp implementations to work in a purely mouse driven scene. I'd love to try out a prototype of that.
I was super excited when Unity released the decoupled input-from-rendering feature. I loved developing mobile apps with Unity, but the biggest drawback was always that they killed your battery suuuper quickly, even if you were only rendering the main menu. This feature allowed you to significantly reduce the frame rate in menus when there was no input from the user, and only raise it again if the user was scrolling or anything was animating. I mean, it was possible before, but you always had to wait for the next frame for the user input to be detected, which introduced lag.
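A minimal sketch of that on-demand rendering pattern (invented names, not Unity's actual API): mark the screen dirty on input or animation, and only redraw dirty frames:

```python
def frames_rendered(events):
    """Count how many per-frame slots actually redraw: only the first frame
    and frames where input arrived or something was animating."""
    dirty = True  # always draw the very first frame
    rendered = 0
    for ev in events:
        if ev in ("input", "animation"):
            dirty = True
        if dirty:
            rendered += 1
            dirty = False  # nothing changed; skip redraws until next event
    return rendered
```

A static menu with ten idle frames redraws once; a menu that sees one input and one animation event redraws three times, which is where the battery savings come from.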
Didn't think about FPS at the time. Great use of that feature!
holy shit, I'm pretty much unaware of VR tech since I'm not really interested in VR gaming, but this is some next-level stuff lol. game optimization is such a work of art
Lots of good ideas. Anything to make games FEEL good is what I want.
This is a fascinating idea. Games already generally have multiple internal refresh rates.
I cant wait for this to become standard, this is so sick!!!
The best part is that this is the worst-case scenario, in what is essentially a tech demo.
Imagine this stuff at 60 fps, or 72 fps.
While you probably wouldn't get a competitive advantage from the frame updates themselves, with 144 Hz mouse movement this would largely bypass the reaction-time penalty of a game running poorly.
I hope this catches on
This video should blow up and go viral so we can get some response from actual AAA game developers. Great vid!
This goes off the topic of upscaling, but I've recently found out about DXVK (a DirectX-to-Vulkan translation layer), which as far as I know helps stabilise and/or increase framerate, mostly in older games. I reckon people would be thankful to learn they can finally run GTA 4 on max settings with a smooth framerate, which DXVK helped me achieve, so a video on this would be awesome.
Excellent video 👍🏻
It actually makes low-framerate gaming much more appealing, like on laptops and old computers with old graphics cards.
This kind of tangentially reminds me, Overwatch 2 has an interesting approach for mouse, they allow you to shoot in between frames and it actually calculates where your camera *would* have been when you shot, even if it never spent a single frame there, it's very interesting stuff. That tech + this visual interpolation could help too.
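Roughly, that sub-frame shot idea could look like this (a hypothetical sketch, not Overwatch's actual code): interpolate the camera yaw at the click's timestamp between the two surrounding simulation frames, and fire the hitscan ray from that pose:

```python
def camera_yaw_at(t, t0, yaw0, t1, yaw1):
    """Linearly interpolate the camera yaw at click timestamp t, which falls
    between the two surrounding simulation frames at t0 and t1."""
    a = (t - t0) / (t1 - t0)
    return yaw0 + a * (yaw1 - yaw0)

# A click lands 25% of the way between two frames during a flick from
# 10 degrees to 50 degrees; the hitscan ray would use the in-between yaw,
# even though the camera never spent a rendered frame at that angle.
yaw = camera_yaw_at(t=0.0025, t0=0.0, yaw0=10.0, t1=0.01, yaw1=50.0)
```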
This is a fantastic idea! Game designers please take notes
this would be great for old games that have fps limited physics
very cool thank you big man phillip
As a game developer, one important issue comes to mind. What happens to the game elements which are tied directly to the camera rotation, like your gun model? VR games can freely use warp because there is never anything of value attached to your face, and your arms' fluidity is not as important as the camera's. The problem with warp is that it's fairly simple, in the sense that it just takes the last frame as a plane with a texture and transforms it. It means that your gun on the screen, which is usually tied directly to your camera movement, will feel rubbery and laggy. And I don't see a simple way of addressing that, except for trying to reduce the time before the next frame as much as possible by using frame interpolation.
With velocity vectors you would know not to move those pixels anywhere. Of course, it would have to be a "smart" warp.
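One way such a "smart" warp could work, as a toy 1D sketch (everything here is invented): pixels flagged as camera-locked (gun, HUD) stay put, while world pixels get shifted by the reprojection offset:

```python
def warp_row(row, locked, shift):
    """Warp one row of pixels: world pixels move by `shift`, camera-locked
    pixels (gun, HUD) stay where they are; exposed holes become 'bg'."""
    out = ["bg"] * len(row)
    for x, pix in enumerate(row):      # pass 1: shift world pixels
        if not locked[x]:
            nx = x + shift
            if 0 <= nx < len(row):
                out[nx] = pix
    for x, pix in enumerate(row):      # pass 2: locked pixels always win
        if locked[x]:
            out[x] = pix
    return out

# The 'gun' pixel ignores the camera shift instead of rubber-banding:
warped = warp_row(["w0", "w1", "gun", "w3"], [False, False, True, False], 1)
```

The two-pass order makes the locked pixels win when a shifted world pixel lands on the same spot, which is roughly what a mask or stencil would do in a real renderer.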
this would be amazing
This video just got a shoutout on the LinusTechTips channel, at 13:50.
Sounds like a great idea, I'm nowhere near informed enough to know if it's viable. But as always, you bring something interesting to the table.
One thing that I really want people to know is that on top of all of this, there are AI-generated facial animations, AI-generated voices, and AI-generated text that can adhere to storylines and world rules. Combine all of this together and we will get an amazing VR experience like none before. You could genuinely create an entire VR MMO with continuous questing that is random and follows the world's laws, while also being unique and generating animations for the specific body types of characters in the world.
We are so close to achieving these types of games. I hope that in the next decade something close to an alpha version, combining all of this to test these foundations, will come out.
I love this idea.
stinger released a new version with improved player reprojection and it works even better.
I need this in my veins right now
this is actually genius!!
This stuff would almost certainly require some wizardry at the engine level to allow interaction between the player and the world if they're being rendered separately.
God I love this channel
To address some criticism: the scene and the user interface (the minimap, stats, etc.) should be rendered separately and composited in response to mouse input.
The video card should render the 3D scene, modify it as shown in the demo, and then superimpose the user interface on top. This would address a lot of the critical comments. The gun or the hands in a first-person game could also be treated as part of the user interface, so they don't end up in a different position.
A further enhancement of the technique could be to render different scene planes at different fps.
- Long distance objects, like mountains, sky, etc, could be rendered at 5 fps.
- Medium distance objects, rendered at 15 fps.
- Near objects, at 60 fps.
The fps numbers are invented, just an example.
Those planes would move according to the camera perspective immediately after the user input.
The final image would be a combination of long distance + medium + near + user interface. Each one updated at different frame rates and combined to give a new frame after the user input.
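The layered idea above could be sketched like this (the rates are the commenter's invented example numbers; the names are made up too): each layer re-renders at its own rate, while every displayed frame still composites all of them at the current camera pose:

```python
# Re-render rates per depth layer (the commenter's example numbers):
RATES = {"far": 5, "mid": 15, "near": 60}
DISPLAY_HZ = 60

def renders_per_second():
    """Count how often each layer re-renders during one second of display
    frames; every displayed frame still composites far + mid + near + UI,
    with each layer repositioned to the current camera perspective."""
    counts = {name: 0 for name in RATES}
    for frame in range(DISPLAY_HZ):
        for name, hz in RATES.items():
            if frame % (DISPLAY_HZ // hz) == 0:
                counts[name] += 1
    return counts
```

Only the near layer pays the full 60 fps cost; the distant layers are refreshed a handful of times per second but still move with the camera on every frame.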
For the first time ever, for anything or any opinion you have ever explained, I agree absolutely and without limits. Pretty much any modern graphics chip can get a more than reasonable frame rate in the grand majority of games, except Apex Legends or Cyberpunk. It really only makes sense to look at game optimization as a function of input-to-frame consistency now. This is the true factor for game performance now, in my opinion. Now I need to find a way to make this same optimization with my body and my brain, as it's the ultimate skill in life I think.