Next Level Triplanar Mapping: Assets, Skeletals and Dithered [UE5]
- Published 25. 07. 2024
- This video is the continuation of my previous one, in which we go through a correct implementation of the triplanar projection mapping technique for locally mapping textures in our shaders.
Previous Video: • Your Triplanar is wron...
Download: jumpeter.gumroad.com/l/tri-pe...
Discord: / discord
Patreon: / visualtechart
Awesome - to prevent the "blurriness" at the blends of a single-sample dithered triplanar UV set, make sure the sampler it's plugged into has its DDX and DDY inputs manually set - if you set them both to "1" you'll get a crisp, non-blurry blend. :)
Also - to reduce the shader complexity, consider passing "0" to the "Random" input of the Temporal AA Dither - that keeps it from sampling a texture to get the dither and uses a simple Bayer matrix instead.
@@HowieDay82 w0w
I need to try the Dither without the noise :D
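The Bayer-matrix tip above can be sketched outside the material graph. This is an illustrative Python sketch of ordered dithering, not the engine's actual implementation; the 4x4 matrix and the `pick_uv` helper are assumptions for demonstration:

```python
# Classic 4x4 Bayer matrix; dividing by 16 gives per-pixel
# thresholds in [0, 1) that tile across the screen.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def bayer_threshold(px: int, py: int) -> float:
    """Ordered-dither threshold for a given pixel position."""
    return BAYER_4X4[py % 4][px % 4] / 16.0

def pick_uv(uv_a, uv_b, blend: float, px: int, py: int):
    """Select one UV set per pixel; averaged over the 4x4 tile (or over
    frames under TAA) this approximates a smooth blend of weight `blend`
    while paying for only a single texture fetch."""
    return uv_b if blend > bayer_threshold(px, py) else uv_a
```

Because the threshold pattern is a fixed matrix lookup, no noise texture sample is needed, which is where the instruction-count saving comes from.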
I thought about how to implement this for more than two weeks, or rather there were a lot of problems, but in short, I cry when I watch this video, I can move on, thank you, lol...
3:59
I can't thank you enough man, hope your channel grows more, appreciate your dedication!
Thanks for another amazing video! Your videos helped me a lot when I created my own world-aligned material with 6 axes and 2 samplers (one sampler for world X, Y, Z and one sampler for a "45 degree world projection", then blending between them at the edges). It finally got rid of all the annoying stretch artifacts I had with my existing world-aligned materials.
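One plausible way to drive the blend the commenter describes is to measure how "diagonal" the surface normal is. This is a hypothetical sketch, not the commenter's actual material; both helper functions are assumptions for illustration:

```python
def axis_weights(n):
    """Normalized per-axis weights from a surface normal (x, y, z)."""
    ax = [abs(c) for c in n]
    s = sum(ax) or 1.0  # guard against a zero vector
    return [a / s for a in ax]

def diagonal_blend(n):
    """Returns 0 when the normal hugs a single world axis (use the
    regular world projection) and approaches 1 near 45-degree
    diagonals (use the rotated projection), so stretch zones of one
    projection fall where the other is well-behaved."""
    w = sorted(axis_weights(n), reverse=True)
    return 1.0 - (w[0] - w[1])
```

The idea is simply that each projection stretches worst exactly where the other one is most face-on, so a weight derived from the two largest axis components lets them cover for each other.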
I was literally thinking about this yesterday. And today YouTube recommended this video to me. Thanks a lot! Now I can use triplanar for animated objects.
Amazing! Thanks for sharing, this is so useful!
Ur channel is golden!
Amazing thank you!
OH MY GOD IT'S HERE!
Oh neat I'm in this one :)
Nice work as ever.. Out of interest, for skinned meshes that are triplanar, what state of the model is it applying that mapping to - its bind pose? As this would differ greatly from the first frame of the anim.. And it also might not work with Alembic geo caches, as they don't really have a bind pose, just a weird array of point-cloud data stored when no animation / blend shape is being applied?
Since we're using the "Pre-Skin" data, we're referring to the bind pose :)
About the Blend Shapes... As long as they're applied on the GPU this approach should still work - not sure though, I would need to test. No idea about Alembic, but since we're talking about an array of different meshes I feel like it probably wouldn't work
Hi, thanks for this time-saving material, I've bought it already. Apparently, to get correct normals, we are supposed to disable Tangent Space Normals in the Material graph. But then how would you blend a regular untiled normal map with the triplanar normals? Thanks again for the advice!
Depends on the order of blending :)
If you use the triplanar one as "base" to add on top, you can use the BlendAngleCorrectedNormal function and they should come out nice and clean :D
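The math commonly attributed to BlendAngleCorrectedNormals is the reoriented normal mapping (RNM) blend, which rotates the detail normal onto the base normal rather than naively adding components. A sketch in Python, assuming unit-length inputs (the function name and vector layout here are illustrative, not engine code):

```python
import math

def blend_angle_corrected(base, detail):
    """RNM-style blend: reorients `detail` so its bumps follow the
    `base` surface. Both inputs are (x, y, z) tangent-space normals."""
    t = (base[0], base[1], base[2] + 1.0)
    u = (-detail[0], -detail[1], detail[2])
    d = t[0] * u[0] + t[1] * u[1] + t[2] * u[2]
    r = tuple(t[i] * d / t[2] - u[i] for i in range(3))
    length = math.sqrt(sum(c * c for c in r)) or 1.0
    return tuple(c / length for c in r)
```

A useful sanity check: blending any detail normal onto a flat base (0, 0, 1) returns the detail normal unchanged, which is exactly the property a plain component-wise add lacks.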
First off, this is just brilliant!
Secondly, I'm struggling to figure out how to make the Texture(T2d) input accept a texture parameter and not just a texture object. So I can change the texture in the material instance. Am I overlooking something obvious or how do I fix this?
It's the TextureObjectParameter node, but you can just right click on your TextureObject one and convert it to a parameter :)
@@VisualTechArt Thanks mate. Didn't even think of that after it wouldn't let me hook up a texture parameter to begin with. Just using a Texture Object and then converting is working fine :) Thanks
I feel a bit foolish for not realizing I could have used dithering to blend the UVs like this. I often used dithering while working on VR titles, because using translucent materials significantly impacts performance, especially in VR.
Does this triplanar material function not support the new Substrate material system? What does the "missing link" error mean?
This MF relates only to how you map a texture in the shader, it shouldn't care whether you're using Substrate or not :)
Hmm.. In my testing your method for WA textures on Skeletal Meshes still gives artifacts on the X and Z planes (only on the exact plane, when previewing on a cube). It happens both with your 'Normalize' method before the transform and with the original. What gives? Do you have the same?
Mmm I'll have to double check... It shouldn't happen anymore as with the Derive Tangent Basis there shouldn't be a case where you get null vectors, I remember checking for that. But maybe I missed some exceptions
Hello! I'm wondering about the normal maps being attached to the surface in a way that shadows are cast in multiple different directions. If you look at the thumbnail you will notice the shadows are not consistent in relation to the source light... am I missing something or is this the intended result? I wonder if we could get the normal maps interacting in a way that they auto-correct the expected direction of the projected shadows regarding the orientation and deformations of the models... 🤔 Thank you for this master class!!
Not sure I understood your comment tbh xD can you elaborate again? Sorry
@@VisualTechArt I've made 2 images elaborating my point. I posted them here but they disappeared for some reason... how can I send them to you so you can understand better what I mean? Thanks!!
I wonder though, now that you have added the dithering to allow for smoother transitions between the projections in the "Simple Fetch" version, what additional advantage does the "Triple Fetch" retain?
It still has better visual quality, and depending on the texture set, transition and render size, the dithering might be noticeable or cause ghosting artefacts :)
@@VisualTechArt Ok, thank you for your answer!
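For context on the "Triple Fetch" being discussed: the classic three-sample triplanar blends one fetch per world plane by weights derived from the surface normal. A minimal Python sketch, assuming a scalar `sample_tex(uv)` callback standing in for a texture read (names and the `sharpness` default are illustrative):

```python
def triplanar_weights(n, sharpness=4.0):
    """Per-plane blend weights from a surface normal; a higher
    sharpness narrows the blend zone at the cost of harder seams."""
    w = [abs(c) ** sharpness for c in n]
    s = sum(w) or 1.0  # guard against a zero vector
    return [x / s for x in w]

def triplanar_sample(sample_tex, n, p, sharpness=4.0):
    """Three fetches of `sample_tex(uv)` projected along X, Y and Z,
    blended by the weights above - this is the 'Triple Fetch' cost."""
    wx, wy, wz = triplanar_weights(n, sharpness)
    cx = sample_tex((p[1], p[2]))  # YZ plane, projected along X
    cy = sample_tex((p[0], p[2]))  # XZ plane, projected along Y
    cz = sample_tex((p[0], p[1]))  # XY plane, projected along Z
    return wx * cx + wy * cy + wz * cz
```

The single-fetch variant replaces the weighted sum with picking one projection (dithered, per the tips above), trading the smooth analytic blend here for one third of the texture bandwidth.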
Is this skeletal mesh triplanar optimized enough for games?
If your frame renders in less than 16.6ms, yes :D
This triplanar material function has been updated, and the texture can now be fixed to the object. How do I link the nodes? The original method seems to be ineffective...
What do you mean?
@@VisualTechArt How do I operate this material function and link it into the material?
If you download it by following the link, you get a sample project that shows it :)
@@VisualTechArt I've opened the example project, but it is still the same as before and has not been updated.... Following the tutorial link, the first step fails
Thanks! That's probably the most complete tutorial/explanation on triplanar I've seen, and I will definitely borrow some ideas and methods for my own shaders.
An interesting thing that would have been nice to talk about is the performance cost compared to other approaches. I will try by myself to see if it can be, in addition to being more accurate, also less instruction-intensive than the shader I currently use (which is quite demanding, especially as I use layered materials...). For example, this guy proposes a way to reduce the sample count: czcams.com/video/dFApswlwsRc/video.htmlsi=Vsmh4ioF9JTo8qOg
Maybe it would be interesting to do some research and talk about triplanar shader optimization in a future video?
It could be interesting to measure with RenderDoc and see if there's any measurable difference between different approaches :)
The single-sampling thing the guy in the link is doing is pretty much the same stuff I do with my Single Sample Mode (which I go through in my previous video) + the Dithering I add at the end of this one :D
@@VisualTechArt Oh yes, I wasn't sure anymore if it was in your previous video, and indeed your dithering addition reminded me of the video I linked ;)
@@VisualTechArt this is an interesting topic simply because of the unlimited ways people use Unreal these days. For example, I use it simply as a renderer, using MRQ and Sequencer to build scenes and render them out - no real-time rendering, in fact there is no logic at all except on scene render. I am therefore less concerned by performance and simply want the best visual approach (within reason) so my renders complete in a timely manner.. whereas other folks who are using it for real-time purposes should be greatly concerned by performance :D
Have you tested this inside Packed Level Actors?
Something tells me it will break, because a PLA has a different local pivot position and scale, and all the transformations will be broken.
It shouldn't have any problems :)
The Packed Level Actor is just a container, so it doesn't overwrite the original transforms of every mesh it references, it just stores what it contains and where.
But let me know if you test and discover that it doesn't!
Well, I have a different version of triplanar and yeah, it works differently in the world and in a PLA.
And for example, when you compensate the scale of an object inside a PLA it will work fine, but when you exit the editing mode it will have the same scaling as the PLA (which is equal to one)
I believe this is because local transforms are now calculated for the PLA, not for its components.
Also, a PLA is not a container, it's a Blueprint that converts SMs to HISMs. And for Unreal, I believe it's the same as a single object with its own transformations.
I will take a look at your version though, maybe it's just badly done math in my version. But I tested your way of calculating local scale in a new material with a debug node, and inside the PLA it shows 2 2 2, but when I exit it shows 1 1 1.
how many days? or weeks?
For what?
Very impressive series! I am experimenting with triplanar projection for a dirt layer in a layered material for a destructible mesh. After fracture, the dirt layer starts "sliding" along the pieces. Is it even possible? I think it is because each piece is a kind of instance of the original object, and the engine is trying to reproject the texture for every piece. Thank you!
Not sure, maybe the fragments move but are still considered part of one mesh? So even though you map locally, they slide around