Radiance Cascades Rendered Directly
- published 21 Aug 2024
- In this video we explore data stored in radiance cascades by observing it directly. This is equivalent to precalculating a scene, storing a cross-section of its radiance field and then rendering it from any viewpoint and any angle in O(1).
How did y'all find this video? I'm seeing a lot of Half-Life-themed visitors coming from somewhere, and I've no idea where from.
This was on my recommended page and had an interesting thumbnail. I watched it until the end and I still don't know what I am looking at.
@@porttek0oficial sorry for that..
Don't be sorry for it! This was a wonderful demo, and you don't need to be a graphics programmer to find it interesting, fun, enjoyable, and visually appealing!
Radiance Cascade sounds like Resonance Cascade; combined with a cool thumbnail, it's intriguing.
"What is a Radiance Cascade? I wanna know."
This was recommended to me on my homepage. I have watched videos on radiance fields and tutorials on Gaussian Splatting. The thumbnail also looks like something that could be rendered in Source 2.
Probably getting recommended because of SimonDev’s recent video on Radiance Cascades heh.
Gordon doesn't need to hear all of this, he's a highly trained professional.
No no that's resonance cascade.
@@minecraftermad same thing
Gordon doesn't need to *see* all of this, he's a highly trained *game dev.*
My dumbass thinking this was a half life video
There's no chance of a half life video.
resonance cascade🔥🔥🔥🔥🔥🔥🗣🗣🗣🗣
@@elpanatv2537 It's time to choose Mr. Freeman...
same lol i thought it was a high quality render or something
It was the weird thumbnail that tricked me
Never change your mic. It's somehow almost perfect
It's like a voice recording/voice report from games and movies.
It has a lofi feel, like a cassette recording.
The slightly nasal voice with its matter-of-fact intonation also contributes to the effect
it's a mad scientist mic in the making
this is a sickass way to render cosmic shadow people
I never thought I'd live to see a radiance cascade
let alone create one...
☝️🤓 actually it was a resonance cascade
@@Masonova1 I like to do this thing sometimes where I notice that one word sounds a bit like another and then I make a joke out of that
@@Masonova1 whoooosh
One thing that I forgot to mention in the video is the, um, sparkles? These are path tracing fireflies that made their way into the radiance fields -- they go away the more time you give path tracing to converge, but I did not bother waiting more than half an hour and I thought they looked kind of cool anyway. They don't exist when using this data structure for calculating actual global illumination, because that needs much lower resolution to be resolved and so it converges much faster.
Would denoising help?
@@johnsherfey3675 I don't think so, given how little resolution there is
You should pin this as the top comment
thanks. the video is missing a brief assurance that this "ghost skull" asset is presented exactly as it would appear in a game. It is not a human head in debug mode.
it's a fascinating effect, actually- a volume that sparkles isn't something i've seen much before. i wonder if it has any actual use.
The cascades look like the raw output from a light field camera! Very cool!
There are only so many ways you can encode a light field..
@@Alexander_Sannikov so it is used in light field cameras then?
“Carmack doesn’t need to hear all this he’s a highly trained professional”
No idea what I've just listened to but the imagery is very fascinating.
My brain, watching this and hearing how it's done, is doing the sparkly bits of the model. Thanks for the vid
Absolutely blown away by all of your work. Thank you for sharing!!
like a digital hologram. crazy that this has been doable for such a long time and only now has been found. And just by someone on yt.
As Wave Function Collapse terrain generation has proven, cool names inspired by physics are generally better for game development.
But wave function collapse is an abysmally terrible name. It has nothing to do with waves or functions. It gives the wrong impression of both the essence and the complexity of the algorithm.
2 years ago I was experimenting with directional lightmaps, trying to achieve both diffuse and sharp specular lighting. I was messing with "plenoptic textures" that look very similar to this demo. It's really interesting how one can come up with a similar concept while trying to achieve a different goal. The whole idea with the cascades and using this technique to calculate screen-space global illumination... Just wow!
Absolutely gorgeous rendering
Thanks Alexander for your ExileCon presentation! It was a joy to watch and learn.
This is the first time I have ever comprehended these, because everyone else basically just called it a black box. Thank you!!!
Really clever stuff! I know PoE 2 will be shipping with this lighting technique, and I can't wait to see it in action!
This is very reminiscent of lightfield rendering (originally "image based rendering" 20 years ago) of the sort that OTOY and Lytro were working on a decade ago, except here you have multiple resolutions for multiple depths? I'll have to look at your paper to understand the cascade aspect.
each cascade is encoded in a way that's similar to the good ol' image based rendering. but the most powerful property of RC is how information is distributed across multiple cascades.
The Half-Life refs are clearly due to your phenomenal HL voice and production quality
Nice video, and great explanation of the cross-sections and their relationship to the spatial and angular resolutions!
im so confuseddddd BUT THAT LOOKS cool and i love seeing new stuff!!!
would you mind doing more of these? Not just for Cascade rendering, but in general.
I quite appreciate your PoE presentations, and every time I rewatch them, I wish they gave you more time.
Just think how this could replace individual props like tables in corners, or show massive events like large cutscenes or animated backgrounds. I also think it would work very well as a way to manage surface texturing.
This is amazing!!!!! Really excited to see what comes out of this
What I gather:
We're "storing" in some way how the 3d model looks like if viewed through a plane from any angle, without keeping the 3d model. The most obvious way to do that is to store for each pixel of the plane what the pixel would look like if viewed from every angle. We could store, say, 200 different angles, which would mean that we have to store for _every_ pixel 200 different colors. Then, when rendering, we could check what angle we're looking at a pixel, then linarly interpolate between the colors associated with similar angles.
What this paper shows is that this is not necessary, and we can make the image still look decent while storing much less information. The key idea is that while we want to store a _few_ angles for _every_ pixel, we only need a _few_ pixels that store a _lot_ of angles. So for example, we could store 4 angles for _every_ pixel - then have a separate map that stores 8 angles for every 16th pixel - then another map that stores 16 angles but only every 36 pixels, and so on. By cleverly interpolating this information, we get a really life-like image while only storing a fraction of the information (and, conversely, while only having to _calculate_ a fraction of the information, if running in real time).
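A minimal sketch of that storage tradeoff (C++; the resolutions and growth factors below are hypothetical, chosen only to illustrate the scaling, not the values the actual technique uses):

    #include <cstdio>

    int main() {
        // Hypothetical probe plane of 1024x1024 pixels.
        const long basePixels = 1024L * 1024L;

        // Naive light field: 200 directions stored for every pixel.
        const long naive = basePixels * 200;

        // Cascaded: each level keeps 4x fewer probes (2x coarser per
        // axis) but stores 2x more directions per probe.
        long cascaded = 0;
        long dirs = 4;
        for (int level = 0; level < 5; ++level) {
            cascaded += (basePixels >> (2 * level)) * dirs;
            dirs *= 2;
        }

        printf("naive:    %ld stored values\n", naive);
        printf("cascaded: %ld stored values (%.1f%% of naive)\n",
               cascaded, 100.0 * cascaded / naive);
    }

With these made-up factors the cascade hierarchy stores roughly 4% of what the naive encoding needs, which is exactly the point of the comment above.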
the really important part is that the tradeoff of spatial:angular density is only possible for a given distance range. that's why RC stores radiance intervals, because they capture light coming from a certain distance range.
@@Alexander_Sannikov Distance... from the object being rendered? If you moved away from the plane the quality would decrease?
babe wake up they just dropped virtual holograms
Subbed - found your channel via the SimonDev video on radiance cascades then your ExileCon video :)
How.. illuminating. Very cool.
It came from recommendations and I have no idea what this is. But it's damn cool! Now I'll go down that rabbit hole to find out what it is and how you managed to make it work
A lot of similarities in the atlas to lightfield photography. Highly interesting work. Thank you!
it is capturing the same data, so yeah
Good thought. Like the Lytro light field camera.
digital hologram, finally
great work! keep it up
i have a video on actual simulated holograms. check my community page to see what it looks like.
@@Alexander_Sannikov I will
Cool video (Game Dev)... YT sees my interest in "digital volumes" and recommended this.
Hey Alex thanks for the video - Really cool to see a little more insight into this Radiance Cascades tech you've developed! Must be fun to play around with various little projects like this now that the tech has proven itself! How are you going with potentially open sourcing some form of it in the near future? Is it still a possibility? It seems like the only viable GI option for real-time procedural games like ARPGs or sim/management games, and, well I want to get my greedy little mitts on it! :D
I'm close to publishing an article first, then I'm going to open-source it.
@@Alexander_Sannikov amazing mate! I'm super excited and hope you get the recognition from it! Where might the article land and how do I get notified?
@@DJDDstrich I'm probably going to make a short announcement video on this channel with a link to something like a Google Drive PDF before sending it to a journal.
this genuinely might change everything. good luck. I hope this actually pays off for you monetarily.
Looks pretty! If it's too memory-intensive I can imagine it being used sparingly, like for smoke grenades or effects on weapons
such a cool technique
Totally a coincidence that it looks like an interdimensional ghost celebrating Cinco de Mayo.
Super cool! Show us what this same model looks like illuminated by the final technique!
Really cool stuff man good work
no idea what you're talking about or what any of this means, but it looks wicked cool dude
have you tried that with a billboard explosion? it may allow high speed rendering with some semi-3d effect.
The title sounds like a sleeper agent phrase haha
If you ever happen to have enough free time on your hands, I would love a video explanation of your radiance cascades for people who don't know much about graphics programming :D SimonDev's video was a good start for it though; now I understand the things you do a bit more. Very fascinating. I still can't really imagine what those probes look like. As far as I understand now, it sounds like you have those probes with different properties in the whole... err... view? volume? of the camera, and they are scanning everything and sending the averaged data to the camera, and this is what you see in the end?
Cool research. So, does that mean the angle of view is quite limited? Does the quality degrade with the "depth" of the object from the camera?
the angle is not limited, but the quality does degrade with depth with this encoding
Awesome stuff, well explained.
Thanks for showing the atlas view.
I'm curious why you chose parabolic encoding?
the profiler is legit. 👍
yea i dont know what any of this is but it looks cool
Your mention of the accuracy of penumbras makes me wonder if this same technique could be adapted for dose calculation in radiation therapy. Basically we do a bunch of complex ray tracing or kernel convolutions to perform dose calculations and compute the interactions based on fluence maps from particular angles. For us, however, reflectivity isn't so important. Mostly we just have attenuation (a simple calculation) and a couple of different types of scattering interactions (much more complex, requiring Monte Carlo methods). However, our accuracy in dose calculation tends to decrease at the edge of beams (at the penumbra), as the calculation becomes most difficult there, with scattering interactions beginning to dominate over the attenuation interactions.
@@skicreature people are already applying RC to a bunch of non-light radiative transfer processes. any process where energy is propagated in rays is suitable.
Looks cool; it would be interesting to see if NeRFs could be baked with this cascade technique
yo this is really cool
so basically you bake point clouds to an atlas of cube maps and use it to render imposter pixels
Interesting effect, somewhat like glitter immersed in plastic. How much memory does this demo use? Seems too resource intensive for complex real-time scenes.
this is so cool
Great video, thank you. Please upgrade your microphone!
I like it, it sounds like a recording from the 1960s 😁
Sorry for that. That's why I usually don't voice my videos. I should get an actual mic, but I normally much prefer programming something to wasting time setting one up.
UPD: ok, anyway, ordered a mic. Again, sorry for the quality on this one.
Never seen radiance fields rendered so fast - on the CPU, too! Quick question: do you have angular resolution inconsistencies at different layers due to how the resolution changes at each depth layer?
cool, this is a super cool video (not that I understand a single thing)
Sparkles
0:07 global elimination technique 💀
This is cool!
Cool stuff! Needs more high quality examples
i'm curious, would this help with more realistic sun sparkles on a body of water?
Looks great!
But it is hard to understand how it works.
As I understand this, you just render the scene into a tiny cubemap for each point on a regular grid. You have 4 variants of this grid (cascades), from less spatial detail to more: the less detailed grid contains more detailed cubemaps, and the more detailed grid contains less detailed cubemaps. It is still unclear to me how fetching and mixing from these arrays of cubemaps works.
That's right. The blending part is kind of hard to explain in a video, which is why I'm writing an article with a proper explanation. The idea is actually very simple, though: each cubemap has an alpha channel, and since each cascade encodes its own depth range, the cascades are always sorted front-to-back, so you just blend them using their alpha channels.
@@Alexander_Sannikov How do you choose these depth ranges? Are they manually chosen or do they "emerge" from the data (due to information not being representable at some angular resolution for example)?
@@JannikVogel the discretization scheme is completely scene-independent. That means the exact same radiance cascades can be used to capture the radiance of an arbitrary scene. In the paper I explain in great detail why depth ranges need to increase exponentially for subsequent cascades. If you're on the "graphics programming" Discord, you can have a look at the draft of my paper that people are reviewing right now.
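For intuition, a tiny sketch of exponentially growing, gap-free depth intervals (C++; the base length t0 and growth factor f are illustrative values, not the paper's):

    #include <cstdio>

    int main() {
        // Each cascade captures radiance along one interval of a ray.
        // Intervals tile the ray back-to-back and grow by a factor f,
        // so a handful of cascades covers a huge depth range.
        const float t0 = 1.0f;  // length of cascade 0's interval
        const float f  = 4.0f;  // per-cascade growth factor

        float start = 0.0f, length = t0;
        for (int i = 0; i < 4; ++i) {
            printf("cascade %d: depth interval [%g, %g)\n",
                   i, start, start + length);
            start += length;
            length *= f;
        }
    }

Because cascade 0's interval is always the one nearest the camera, iterating the cascades in index order is automatically front-to-back, which is what makes the simple alpha blending described above work.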
IT'S A HOLOGRAM!!!!!
This is why computers were built.
so cool!!
how does the sampling work then with the cascades? can you also make a video on that? or is there a paper?
Saw this video on my homepage recommendations, no relation to Half-life at all (haven't looked at source/half-life content in forever).
To be honest I have no clue what this even means, but I'm very intrigued and I often enjoy watching people passionately explain something that to me is very niche.
Thank you for sharing!
Damn, what mic are you using? It sounds rad af. I seriously wanna know
P.S. Why do I have a feeling that I'm close to being the only one who came here *not* because I thought the video was somehow involved with Half-Life? :D
Boys, looks like we have a non-half-life person here. I repeat, a person who didn't come to joke about resonance cascades.
How do you style ImGui like that? Is there a theme or did you manually change backgrounds and border radius?
it's open source, you can check: legitengine by raikiri
@@Alexander_Sannikov Thanks!
Half-Life + Hollow Knight = Radiance Cascade
Bro sounds like Posy
yeah
Does it mean that we can look inside 3D models and see them more realistically from the inside? Not like backfaces, but the volume of the mesh, idk
So from what I understand from this and the presentation, you take an array of textures, and for each pixel in that texture, look at its 8 neighbors, and do some short-distance (1px) raytracing, and repeat that for the number of cascades you have?
Then to get the final value, you read the pixels in the final cascade for the direction you want to trace in?
Wouldn't this make it O(n) for the number of cascades instead of having some fixed upper bound?
Nope, after the precalculation is done, there's no raycasting in this demo at all. Rendering the radiance field of one cascade in this case is equivalent to just reading a cubemap texture for every pixel, which is just a hardware bilinear interpolation. So each of the 4 cascades is looked up this way and then they're merged. See at 4:23: each cascade literally stores tiny cubemaps. You don't raymarch them, you interpolate them.
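A sketch of that per-pixel merge (C++; sampleCascade() is a hypothetical stand-in for the bilinear cubemap fetch, and the RGBA layout is assumed, not taken from the demo):

    struct Vec3 { float x, y, z; };
    struct Rgba { float r, g, b, a; };

    // Hypothetical stand-in for the hardware-interpolated read of
    // cascade i's tiny cubemaps at this position/direction. No
    // raymarching happens here -- it's just a texture lookup.
    Rgba sampleCascade(int i, Vec3 pos, Vec3 dir);

    // Merge all cascades for one pixel. Each cascade covers its own
    // depth interval and the intervals are sorted front-to-back, so
    // ordinary front-to-back alpha compositing is enough.
    Rgba shadePixel(Vec3 pos, Vec3 dir, int numCascades) {
        Rgba out = {0, 0, 0, 0};
        float transmittance = 1.0f;  // light not yet absorbed
        for (int i = 0; i < numCascades; ++i) {  // fixed count: O(1)
            Rgba c = sampleCascade(i, pos, dir);
            out.r += transmittance * c.a * c.r;
            out.g += transmittance * c.a * c.g;
            out.b += transmittance * c.a * c.b;
            transmittance *= 1.0f - c.a;
        }
        out.a = 1.0f - transmittance;
        return out;
    }

The loop runs a fixed number of times (4 cascades in this demo), so the per-pixel cost does not grow with scene complexity, which addresses the O(n) question above.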
Great work! I'm really interested to find out how to encode the radiance cascade data into a texture, can you give me some pointers?
this is explained in the paper
A digital hologram.
@@vanillagorilla8696 i have a video about actual digital wavefield holograms if you're interested in that
@@Alexander_Sannikov I'd love that.
mirrors
Ah! So there’s no sphere harmonics, you used cube maps. Still seems like a trickery. I’m looking forward for the paper. Really, O(1) looks like magic.
I use spherical harmonics to gather diffuse GI (not in this demo). This demo does not need to gather irradiance, so it only uses parabolic mapping (basically, cubemaps).
@@Alexander_Sannikov I see. Thank you.
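For anyone curious what "parabolic mapping" means concretely, here is the textbook dual-paraboloid mapping (C++; this is the standard formulation, and the demo's exact variant may differ):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    // Classic dual-paraboloid mapping: a unit direction d lands in
    // the front map (d.z >= 0) or the back map, at
    // uv = d.xy / (1 + |d.z|), remapped from [-1,1] into [0,1].
    // Like a cubemap it covers the full sphere of directions, but
    // with 2 textures instead of 6 faces.
    Vec2 directionToParaboloidUV(Vec3 d, bool& frontMap) {
        frontMap = d.z >= 0.0f;
        const float denom = 1.0f + std::fabs(d.z);
        return { d.x / denom * 0.5f + 0.5f,
                 d.y / denom * 0.5f + 0.5f };
    }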
thank you for the video... veeery impressive... please upgrade your mic
hell yeah
is that Gaussian splatting?
so it's like deep shadow maps but for irradiance?
Ghost skeletons
you didn't even touch the roughness slider :( please go into more technical detail about how this works! The graphics programmer in me is amazed
now I also wonder what that slider did, because it makes no sense :D
neat!!!!!
It's funny, I do something like that to render GI on a Mali 400 GPU in real time lol, but instead of storing the results, I store the UVs of the points, which allows real-time updates by simple texture feedback, i.e. the texture samples itself to resolve GI. I had the cascade idea but didn't implement it; I didn't know it would be that efficient. It's on Unity's forum under the name "exploration of custom diffuse RTGI"; the technique is called MAGIC, for Mapping Approximation of GI Compute 😂. I resolve slowly because I haven't tested how much I can render per frame, so it's one ray per pixel per frame. My computer is dead now lol, it hasn't finished.
can you link a video?
I dont think this is half life guys
Oh, turns out this was a teaser for Affliction in PoE)
How did you even manage to connect the two?
@@Alexander_Sannikov Answering this question will embarrass me. I myself understand little about engine development or designing/modeling objects. But I watch your ExileCon panels and these videos purely out of curiosity. The presentation of the material is excellent.
Here I saw a similar pattern of particles layering onto an object, if you can put it that way, to the current league in the game... even the colors match )
I stopped by after your podcast with CARDIFF. By the way, it would be great if you could periodically organize such podcasts with him or other streamers. At least once every six months.
The foreign audience has Chris and Jonathan, and we'll have you.
Can you share those 8k images you have used in this demo for people who want to reproduce this (without having to capture their own scene first), or could you even upload the entire demo code somewhere?
If you want to try this, I really recommend replacing the volume rendering part with some really simple SDF fractal raymarcher or a sphere. The only reason I used volume data is that it makes it obvious I'm not rendering it in real time (that'd be much slower).
That being said, at some point I will publish the sources.
So what would be the use case of this?
A use case for global illumination with O(1) time complexity? Practically every graphical application for the foreseeable future.
Well, this is different because it is NOT real-time like the path tracing we see in games; this is computed offline and so cannot change afterwards. It's similar to splatting: looks like real life but not very usable in games (as of now anyway), me sleep.
@@Danuxsy it's a stretch to say path tracing is real-time.
i also don't know where or why you have the impression this isn't real-time, nor why it wouldn't be real-time, given how efficient it is.
@@NightmareCourtPictures I watched the ExileCon 2023 talk about their GI implementation using radiance cascades, so I see that it does have good use cases, yes, it's cool!
@@Danuxsy it's usable in real time. Path of Exile 2 is using Radiance Cascades. IIRC their dev team published a white paper on the technique.
Is there a paper or something?
there is now!
cool
Didn't understand a damn thing, but very interesting)
I'm still a bit too stupid to understand but I will keep trying until I implement it
Just read the full paper again, I can understand it much better now.
understood nothing, but very interesting =)
where's the SIGGRAPH paper, my man. I'm tryna read that!!!
well now you have it!
Audio is pretty bad. You need to compress.