Change Your Understanding of Normals In Eight Minutes
- Published 15 Jul 2024
- If you haven't got a clue WTF a normal actually is, this video is for you.
My latest video - • Can You Recreate An Ol...
The first part of this series - • Change Your Understand...
Get access to exclusive materials, assets, and tutorials on my Patreon - / decoded
Get DECODED merch here - teespring.com/stores/decoded-3
Follow me on Facebook here - / blenderdecoded
Or on Twitter here - / decodedvfx
#Blender #DECODED #B3d
Yo! Thanks for watching. Leave your video requests here.
Maybe a video reviewing subs' games or channels? But I'm totally biased as I have a channel and a game, and I'm a committed sub oO
Possibly a more in depth video on how the weighted normal modifier works? Assuming you know of course. I’m finding it hard to figure out and you have a way of explaining things. Thanks!
Why does the bend modifier always go in the wrong direction? Seriously, it never starts out of the box the way you want.
Compositing videos are always appreciated, there are so few of those. Given that Blender recently moved the alpha compositing step to after rendering, it would be a nice opportunity to shed some more light on this topic.
Displacement. Thanks for sharing your knowledge!
What about object vs. tangent space in normal maps? Maybe you could make a video about this?
"All the modern 3d softwares, even Maya" lmao the shade
What's so funny about it I don't understand ?
@@roswarmth lowkey maya is old
😂😂
@@MrFastsone yeah I know that but what's the funny part in it ?
@@roswarmth Probably the Blender cult trying to be comedians. Don't get me wrong, Blender is great, but it's not the second coming of Jesus. Not yet, anyway.
Haha, we love it! 01:08
Ayy!
@@DECODEDVFX ily
Thank you, finally I can understand what normals are. Most tutorials tell you to "flip the normals" but don't explain why.
just flip the damn normals lool, or even better, recalculate normals
Nice! Some more technical details:
The RGB colors of the normalmap represent vector positions in X, Y and Z. A normalmap without any details has all its normals pointing straight upwards. The vector would look like (0 0 1), which if encoded in 24bit looks like (128 128 255), which is exactly the typical normalmap blue!
Something to be aware of is: these normalmap vectors should be normalized! A normalized vector always has a length of 1, which means not all colors represent a correct vector. This is usually not an issue when you render normals from Substance or Blender, but back in the dark days we had to paint out rendering issues in Photoshop, and as such it was important to normalize the map again...
Also I used Photoshop layers to generate normal map details... Once you understand how that tech works, you can do freaky stuff with it... Fun!
Footnote for readers: "Up" in this context means perpendicular to the uv plane. Up becomes a bit foggier with tangent space and world space.
So, painting the channels rgb in separate, allows full control of normals?
@@pbonfanti I guess so
@@pbonfanti Yep one channel per axis. No height though, you need another map or channel for that.
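The mapping this thread describes, (0 0 1) becoming the typical (128 128 255) blue, can be sketched in a few lines of Python. This is just an illustration of the encoding idea; the helper names are my own, not from Blender or Substance:

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / length, v[1] / length, v[2] / length)

def encode_normal(n):
    """Map a unit normal from [-1, 1] per axis to 8-bit RGB [0, 255]."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def decode_normal(rgb):
    """Map 8-bit RGB back to a unit vector, re-normalizing to fix length."""
    v = tuple(c / 255 * 2 - 1 for c in rgb)
    return normalize(v)

# A flat, "no detail" tangent-space normal pointing straight up:
print(encode_normal((0.0, 0.0, 1.0)))  # -> (128, 128, 255), normal-map blue
```

The `decode_normal` step re-normalizes on read, which is exactly the fix-up the comment above describes having to do by hand in Photoshop.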
Before the video: I understand how normals work.
Watching the video: Oh, that's interesting... Ha... Ohh....
It was nice to hear someone mention Phong Shading after all these years, I thought I might be the only person who still calls it that.
I often deleted phong tags and added subdivision lol, and then i was like: 'computer stupid'
DECODED: Tell me which other areas of 3D you want me to explain.
ME: Yes.
In case anyone is interested in how the maths work with 3d graphics and normals I thought I'd ramble on a bit (Ok, a lot). This is totally unnecessary to know to use blender but I've got a glass of wine, it's lockdown and I feel like it. If you value your time, ignore this comment. Ok, you've been warned...
I'll use the example of ray marching (Eevee) as it's simpler than ray tracing (Cycles) but the basic concepts apply.
First you need to make a camera. All you have to do is give it a location in 3d space and for that you use a vector (a coordinate). Let's set ours at vec3(0, -3, 1). So that's one unit above the plane and three back looking forward. You then need to cast a ray from the camera, through the viewport to the object. Well, your GPU comes with a fragment shader which will run a calculation for each pixel in your viewport (screen). This is run for each pixel every frame - GPUs programme on a "wave front" (like a wave crashing on the beach) running your instruction for every pixel on the screen every frame. The name of the game here is to find out what colour that pixel should be. Coding a shader is running a programme a million times simultaneously, what GPUs do is amazing... It's like each pixel on your screen has its own CPU.
You know which pixel is running your shader by a UV value the shader gives you which is really just the x and y coordinate of the pixel. Generally you normalise this from -0.5 to 0.5 which you can do easily by dividing it by the resolution and subtracting 0.5 to put it in the middle (for convenience).
Next step is getting the angle from the camera to the pixel. Dead easy, just subtract your camera position from your target position. Let's put the viewport on zero on the Y so the vector would be vec3(U, 0, V) - say you wanted the pixel at the top right of the screen. That would be vec3(0.5, 0.0, 0.5) for example, from which you subtract your camera position vec3(0, -3, 1). You then normalise this result: that means doing a bit of Pythagoras and making it a unit vector (with a length of 1) so you just have the direction. Right, now you have an arrow pointing from the camera towards your pixel with a length of 1. Remember, this is run for every pixel on the screen at the same time. Want to make it 60 long? Just multiply it by 60. The "60" is called a "scalar" for obvious reasons.
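The camera-to-pixel step above can be sketched in Python (names like `ray_direction` are mine, purely for illustration; a real shader would do this per pixel on the GPU):

```python
import math

def normalize(v):
    """Scale a vector to unit length (the bit of Pythagoras above)."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def ray_direction(camera, uv):
    """Direction from the camera through one screen pixel.

    uv is the pixel coordinate already remapped to [-0.5, 0.5];
    the viewport sits on the Y = 0 plane, as in the comment above.
    """
    pixel = (uv[0], 0.0, uv[1])
    # Subtract camera position from the pixel position, then normalize.
    return normalize(tuple(p - c for p, c in zip(pixel, camera)))

camera = (0.0, -3.0, 1.0)
d = ray_direction(camera, (0.5, 0.5))  # top-right pixel
print(d)  # a unit vector pointing from the camera into the scene
```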
Ok, let's put an object in our scene because right now we've only got a viewport and a camera. The simplest is a sphere. Let's put two of them just for fun. One is at vec3(2, 3, 2) and the other is to the left and back a bit at vec3(-2, 5, 2) and they both have a radius of 1 unit. Ok, so the question is: does this one particular ray hit a sphere? Well, if we send our ray too far we'll miss the target which would be a fail and the computer gods will chide you. How about, to keep things simple, we ignore direction and just look at distances?
But it's easy enough to work out the distance to the centre of the sphere. I'll skip the maths on that but it's very basic and, anyway, shaders have a length() function that does it for you. Then we just need to deduct the radius and we've got the distance to the surface of the object. But we've got two objects and we're running this same programme for every pixel on the screen so how far do we extend the ray? By the length of the closest one, that way we know we won't overshoot. Cool, now we check the distance to the surface again (within the same frame) and if it's very close then we call it a hit and we set the colour of that pixel to white. If not, we move on, how far? Well, we're in a loop, checking the distance to all the surfaces and then moving the ray forward by the minimum distance. Then we tell the loop to stop after a certain distance in case it didn't hit anything and to return black as the colour.
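The marching loop just described might look roughly like this in Python. It's a simplified toy sketch of the idea, not how Eevee or any real renderer is implemented, and all the names are my own:

```python
import math

def length(v):
    return math.sqrt(sum(c * c for c in v))

def march(origin, direction, spheres, max_dist=60.0, eps=0.001):
    """Step a ray forward by the distance to the nearest sphere surface."""
    t = 0.0
    while t < max_dist:
        p = tuple(o + d * t for o, d in zip(origin, direction))
        # Distance to the closest surface: |p - centre| - radius, minimized
        # over every sphere so we never overshoot any of them.
        dist = min(length(tuple(pc - cc for pc, cc in zip(p, centre))) - radius
                   for centre, radius in spheres)
        if dist < eps:
            return t          # a hit: t is the distance travelled
        t += dist             # safe to advance by the closest distance
    return None               # missed everything: the pixel stays black

# The two spheres from the comment above, each with radius 1:
spheres = [((2.0, 3.0, 2.0), 1.0), ((-2.0, 5.0, 2.0), 1.0)]
```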
That's it, we have an image. And it's a black background and we have two white circles. Which looks completely crap. :) Ok, what about adding a light source? Ok, let's say our light source is at vec3(0, 2, 7) above the spheres. Now [finally] we get to normals. We've got the location that the ray struck the object, and we've got the position of the light. Well, like we did before we can subtract one from the other and normalise and get the direction unit vector to the light source. Now we need another vector, the normal. Think about it, if it's facing away from the light then it'll be in shade, if it's on the top of the sphere then we can return a light value. So everything depends on whether the face is facing the light.
Working out the normal is a bit of a pain tbh, it's all a bit manual. You have to cast a ray a tiny bit to the right/left of where you hit and another a tiny bit up/down, and then you subtract these vectors to get two tangent vectors. Imagine the sphere is a football. You grab a marker and put a dot where you hit. You put dots left and right (by repeating what you did to get the first dot) and then you subtract these new offset vectors and you've got two that lie flat against the surface of the ball.
You can then do a bit of maths called the cross product to get the vector perpendicular to these vectors: the normal. I won't explain the maths behind the cross product but it's quite cool and, again, shaders have a function that does it for you. Ok, so now we have the ray direction, the normal direction and the light direction all in unit vectors. The next bit of maths magic is called the dot product. If two vectors are pointing in the same direction the dot product will give you a result of 1. If they're in opposite directions -1 and if they're at right angles then 0. So, we can use this value to determine how bright the pixel is. Less than 0, make it black. Greater than zero (so pointing towards the light source) then the pixel has that much brightness.
Tah-dah! Run this and there are two shaded spheres in your scene. Better yet, move the camera back and forth and it zooms. Move the camera with the viewport and you change the perspective. Move the light and the lighting changes on the spheres. We can easily add a plane or cubes by using the same principle. But what about shadows? Dead easy, just take the point of collision, run the vector towards the light source using the min distance loop and if it hits something then make it black.
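The tangent/cross/dot shading steps above can be sketched like this (again a toy version with hypothetical helper names, standing in for the shader built-ins `cross` and `dot`):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def cross(a, b):
    """The vector perpendicular to both a and b: the normal."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def brightness(hit, tangent_u, tangent_v, light_pos):
    """Lambert-style shading from two surface tangents and a light position."""
    normal = normalize(cross(tangent_u, tangent_v))
    to_light = normalize(tuple(l - h for l, h in zip(light_pos, hit)))
    # Facing the light -> up to 1.0; facing away -> clamp to 0 (black).
    return max(0.0, dot(normal, to_light))
```

Called at the top of a unit sphere with the light directly overhead, this returns full brightness; with the light underneath, it clamps to black.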
Now, you'll notice that these are geometric shapes and not meshes. Well, a mesh is an array: a series of vectors each indicating a vertex. There's also an index. So, it's a long list of triangles looping round and round. So now to work out if you hit, you need to do a bit of maths, involving the dot product again, to work out if you hit a face. It's a bit more complicated but it's pretty much the same thing as our sphere examples. Now, the GREAT thing is as well as vertex location you can send a normal vector in this array. So no more messing around sending more rays working it out, it's in the data! Whoop! Now you run the maths and bang, Suzanne the Monkey is in your scene. You can give it a colour too, and just use the dot product calculation to decide how bright that is. You can give the mesh a UV of its own and send a texture in so now you've got a photo.
If anyone got to the end of this extended edition of War and Peace (sorry), I hope you see how key normals are to 3d rendering, how simple the basics of it are (obviously, this is the simplest version I could make, blender is much more involved) and why giving them to your GPU in an array is a great idea.
If anyone has a question, if anything wasn't clear, I'd be more than happy to respond btw. If your question is on first principles, great, that's actually the more interesting end of things.
YOU IS BIG BIG SMART!!!
Clicked read more and my jaw dropped.
U are magic. Thanks.
But are normal map color values translated to angular values? I don't understand why nobody mentioned this :(
As I struggled with understanding normals for quite a long time as well, I'd like to add some information that can be pretty valuable for anyone trying to understand this topic:
1. There's actually not only face normals but also vertex normals. Face normals are what you've shown in the video and as they determine the direction of a face they are also important for operations like extrusions or modifiers like solidify since they use face normals to calculate the direction of the operation.
2. There's also vertex normals which are (who would have thought that) the normals of vertices. Vertex normals are actually responsible for the shading instead of the face normals shown in the video. I highly recommend testing this in your 3D program of choice to really understand this. In Blender for example you find it in edit mode at the bottom of the viewport overlay settings (actually check vertex-split-normals, not vertex normals).
3. With flat shading, each vertex has a normal for each connected face, so if it's connected to 4 faces, it has 4 vertex normals which each follow the face normal direction of their related face. If two faces have an angle to each other, you can see the vertex normals separating from each other and pointing in different directions. That's why the edges between faces appear sharp.
With smooth shading however, these vertex normals are averaged out to represent a mix of all related face normals. You can now see that they all follow the same direction and it looks like only one normal now. With auto-smooth in Blender or soften/harden in Maya you can now determine at which angle the vertex normals are averaged and appear smooth and when they are left at flat shading.
4. Make use of the normal orientation for transformations in edit mode, it can help a lot!
5. Backface Culling is a technique primarily used in game engines for not rendering the backside of a face. So a plane would be visible from one side and invisible from the other one. In Blender you can enable it in the viewport shading settings. You might want to check that if you experience visibility issues.
6. The backfacing output of the geometry node in the shading editor can be immensely helpful if you're trying to texture an object without thickness like a plane or a leaf. It gives you a white value for backfaces and a dark value for frontfaces. Use this to drive a mix-node and you can add materials for each side of a face.
Although normals are pretty fundamental for 3D, they often get overlooked at the beginning and it helped me a lot to understand what's actually going on with them! It's best to test everything out yourself as you can learn a lot and it's also pretty fun (at least if you're a little nerdy like me haha).
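The flat-vs-smooth point above can be demonstrated with a tiny sketch: under smooth shading the face normals sharing a vertex are summed and re-normalized into one. This is a toy example of mine, not Blender's actual code:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def smooth_vertex_normal(face_normals):
    """Average the normals of all faces sharing a vertex (smooth shading).

    With flat shading each face keeps its own normal at the vertex;
    with smooth shading they are summed and re-normalized into one.
    """
    summed = tuple(sum(axis) for axis in zip(*face_normals))
    return normalize(summed)

# Two faces meeting at a 90-degree edge:
faces = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
print(smooth_vertex_normal(faces))  # points 45 degrees between them
```

Auto-smooth is then just a threshold: average the normals only where the angle between the faces is below the chosen limit, otherwise keep them separate (flat).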
Yeah, I actually have an old video about how to texture objects like leaves using the negative normals of the face.
Just wanted to add to this, seeing as vertex normals in a vertex-shaded system were already mentioned. The normal maps in the video were tangent space normal maps; they don't override the surface normals but alter them relative to the original vertex normal direction. Because the map only adjusts the shading, it can be used on deforming shapes. Object and world space normal maps (the one shown in the render debug) can be used to actually override the surface normals entirely, most likely the reason you don't see them around that often, but they are still used as an intermediary for textures in Substance Painter for example. As you said, nothing can ever be easy in 3d. Keep em coming tho, good video
Thanks for additional info! That’s a lot )) So, I made a screenshot to get back to some points later. Another nerd, ha ha
First video that actually explained the colors of the normal map. I was trying to find out the difference between Normal and Bump maps. Putting 2 and 2 together, this explains it. Thank you.
Absolutely amazing explanation! I always viewed normals as something that just comes with your downloaded texture and didn’t give it any more thought. And I didn’t even know that flipped normals is a thing to worry about. Honestly great job on this one
Yeah, that's why I made this video. Normals rarely get mentioned in tutorials, so it's not something a lot of artists really understand very well.
@@DECODEDVFX Once I learned about flipped normals I started inspecting my face orientation in my projects and was shocked how many normals were just wrong.
@@winstonlloyd1090 there's a Recalculate Normals option that ~generally~ gets them all pointed outwards, but depending on what you've been up to (usually naughty things) you might have to fix some yourself!
Yeah, I dread to look at my old projects sometimes. Flipped normals and amateur mistakes everywhere.
These are really, really awesome. Love tutorials, but understanding the principles behind the different tools, can be infinitely more valuable.
"Give a man a fish, he eats for a day; Teach a man to fish, he eats for a lifetime."
But you need to give him a fish before teaching him or he will be hungry and cannot learn properly.
@@ZackMathissa I've never heard that extension of the phrase before, absolutely love that.
I knew how normals work without knowing exactly what they were. Thanks to your video I learned everything I was missing. Thanks!
I think the simple definition of what a normal actually is was very helpful for me to understand it. I already knew mostly how they worked, but was never officially taught them.
Even though I already knew all this, it added value when it's explained in such an effective way, same with your other video in this style, and it helps me think of ways to solve more problems with these tools. Thank you!
Just started learning blender last week and this 8 min video taught me more than most of the other videos I've seen so far, definitely do more short in depth explanations!
Thanks, will do!
one of the best invested 8 mins of my life
Flipping the green channel for DX normal maps and the explanation of what OpenGL / DX do differently in processing is a massive gold nugget, thank you! I haven't heard anyone on YouTube even mention that, super good to know
7:02 nothing can ever be easy in 3D. lol! Thanks for the great teaching.
Thaaank yooooou. Love your videos so far, you're so clear and concise with the information, audio quality is good and your demonstrations to go with it really help. Helping me no end with my 3D journey and understanding what the heck it all is XD
thank you!
What a sweet in-depth video, I love how you kept it simple while going in depth into normals.
A bit of information about image formats for storing Normal-Maps:
When sampling (reading) normal values from textures (images), the value contained within the texture needs to be within 1% of the value that was recorded into the texture to properly reproduce the normal that we want. If you are using 8 bits per color channel, then there are a lot of angles where normals will have an error that's greater than 1%. This is the reason why some programs will just default to 16 bits per color channel. Using R16G16B16 will give you a nice bake while a lot of people will get a crappy bake if they use R8G8B8. Now, it turns out that the 1% max error gets resolved at 10 bits per channel, but 30 (or 40) bits per pixel lower the performance of most computers, so R11G11B10 would be preferable.
So, when baking normals onto a texture, R11G11B10 should be the minimum color format that you select. Selecting more memory-intensive formats like R16G16B16 is perfectly fine. But going lower, like R8G8B8, will give you a lot of visible artifacts if the surface is curved.
Think of it this way. You are trying to arrange a chair inside a scene, but you can only turn the chair by multiples of 30 degrees. Let's say that you need to set the chair to 45 degrees. Well, that's not possible because you can only turn it by multiples of 30 degrees. You can try 30, 60, 90, etc. If you only turn the chair by 30 degrees, it might not be noticeable that it's off by 15 degrees. Yet, if you place that chair next to a table that has a 45 degree rotation, then it becomes noticeable that the chair's rotation is off. You play with the editor settings so that you can rotate things by 20 degrees. Well, that would let you rotate the chair to 40 degrees, making it 5 degrees off the target value. Being off by 5 degrees is a lot less noticeable than being off by 15 degrees, and for this use case it won't be noticeable at all, so close enough.
Switching from 30 to 20 degree rotations is a lot like switching from R8G8B8 to R11G11B10; it won't recreate all normal values perfectly, but it will be close enough. We just need the value in the image to be within 1% of the target value.
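The bit-depth point can be illustrated with a small quantization sketch (this assumes a simple round-to-nearest unsigned encoding per channel; real GPU formats like R11G11B10_FLOAT differ in detail):

```python
import math

def quantize(value, bits):
    """Round a [-1, 1] component to the nearest step representable in `bits`."""
    levels = (1 << bits) - 1
    step = round((value * 0.5 + 0.5) * levels)
    return step / levels * 2 - 1

# A normal component that falls between the representable steps:
target = math.cos(math.radians(37.0))
for bits in (8, 11, 16):
    err = abs(quantize(target, bits) - target)
    print(bits, err)  # the error shrinks as bit depth grows
```

Just like the 30-vs-20-degree chair: more bits means finer rotation steps, so the stored normal lands closer to the one you baked.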
I loved the video and don’t want to miss any of your future ones. This fundamental level was super useful for me, thank you!
I knew normal maps, but the way you added phong shader info, that was amazing. 👍🏻
This was incredibly helpful! Understanding something makes it a lot easier to work with creatively.
Wow, thank you so so much, now I finally know why some normal maps produce such strange shading issues! My workaround usually produced okayish results but this is a much cleaner method!
thank you for this! that certainly clears up a topic I never thought I'd fully understand about 3d but makes total sense now
I've taken 3d classes and seen other videos explaining what normals are, but I never really understood normal maps until this video! thanks!
As always, your video was concise and very easy to understand. Thank you for the insight!
Even if I felt like I understood it, I still watched the video. And I actually did end up learning something new! (The green channel thing). Will come in handy later on once I start using normal maps!
Coming across this as a game artist it's strange, as the knowledge is like breathing and so you don't ever really consciously think about it. It's nice to see a breakdown I can send to someone if I ever feel the need to explain how we make bad looking things look nice in engine :)
what games did you work on bro
@@5ld734 Dirty Bomb, Gears of War and Outcasters
This is great I would love a breakdown of flow / tangent maps used in realtime engines and such
The ACTUAL cameo from Flipped Normals had me rofl! Luv it!
Notification squad, keep up the great work . Really informative stuff
It has been a wonderful journey learning Blender... I try every day to expand my knowledge... I do not like following instructions blindly... I want to know why I am doing what I am doing... You have given me a better understanding... Than what I thought I knew before this video... Thanks for taking the time to share...
Amazing video! I've been struggling with understanding normals for months! And finally I get it! Thank you!
Man this is just pure uncut freebase information, I really wish other instruction was as good as this, but am very grateful for it here. subbed.
You're in luck. The next video in this series will be released in the next day or so.
wow I learned so much from this video. I'm a cinema 4d user and I always thought normals maps were just magic. now it all makes so much more sense thank you.
I've been watching shader tutorials trying to explain what normals are for a while not but this is the first one i truly understood everything in
Thank you so much. You should be really proud
Awesome, thank you!
Thought that this was gonna be a philosophical video for a sec.
Luckily I still need to learn how to use Blender so I'm glad I found this 😎
That "Flipped normals" reference is genius 1:08 lol
keep it up brother, good stuff!
1:24 That's actually an unfortunate example, because that square is made up of TWO triangular faces and the normal it shows is the average of those two faces' normals. It's maybe not a big deal but it can be confusing if you're new to this concept.
This was extremely insightful on this topic. Thank you !
Outstanding overview :) You made it look normal!
"They have flipped normals."
And dat photo filled with awesome humor and sarcastic background.
Definitely like && subscription.
Your sense of humour is just awesome.
Standing ovation!!!
Very informative! Thanks for the video. Also "because nothing can ever be easy in 3d" I laughed SOOO MUCH! hahaha
First of all, awesome explanation! I would love to see an explanation of the benefits of using a normal map compared to a bump map or vice versa, and what their limitations are, for example.
Did not expect to see my face in the middle of a video haha!
Great video, very well explained
This video was truly amazing. Thanks a lot.
this was superb, really enjoyed learning and understanding the logic behind what we do
Wow, that was very well explained!
This is the video I'll send to people when I need to explain Normals. Cheers!
VERY informative. Many thanks for taking the time to make this video good sir.
First video where i've actually understood Normals, thank you sir :)
It is amazing what you can learn in 8 minutes!
1:08 Why did I laugh so hard at this?
Regarding normal maps, there are 2 main types of maps called "Tangent Space Normal Maps", which are the blue/purple-ish textures, and "Object Space Normal Maps", which are a more accurate solution but cannot be tiled or used on animated meshes.
Interesting… do you know where I can learn more about this?
@@jessekendrick6553 Try Google, not sure what papers or sites explain this properly but I'm sure you can find something depending on the topic you're looking for. Tangent space maps are very common so there's tons of info about them. Object space maps are not widely used even though they are superior in every way. Might need to do some digging
I thought this video was going to be about enlightenment, but it turned out to be a blender tutorial... Still stuck around, this stuff is really cool!
Wow! Great explanation!
I never understood Normals until I watched this video. Thanks for posting!
This was extremely helpful, thank you
amazing how normals can affect the model render appearance!
It's magic.
indeed!
"If you extrude down the normals are flipped" this explains some things haha
Great video about the normal map 👍👍
Also the last part was very important, and you properly explained the two different systems of normal maps, viz. DirectX and OpenGL
Thanks bro keep it up💯💯
Thanks for that!
Don't use Blender, but your explanation was easy to follow and I learned something today. Thanks.
Fantastic video!!!!
amazing I learned so much !
such good videos! thank you
Seeing you using my addon makes me happy :3
The temporal denoising is a nice addition. I'll be sure to give it a mention next time I make a video focused on different addons.
Excellent explanation!
Thanks for the awesome explanation
Great video! 💯
Amazing video, thank you for sharing your brilliant knowledge.
Nicely done :)
Great video! Thanks for sharing. :)
Wow I didn't know there were two types of normals, open gl and direct x, great video.
Good video!
I used to know where that Auto smooth setting was and then 2.8 happened and I lost track of it. Thank you so much! I thought it was just removed because low poly was fading out of popularity!
Great video!
great content! thanks!
Great video, thanks!
Finally a new video 😍
Thank you. This really helped !😘
You could consider making a video about tangent normals vs. world or local space normals and why the tangent basis matters. It would be a natural, though somewhat twisty, continuation.
Seconded. I would be very interested in this video.
Super explanation!
Thank you for the video. What I was missing in the end was a simple example of using a normal map in blender. But I guess the video was supposed to be universal. Cheers.
Thanks for the NVidia x OpenGL part, otherwise I would've never noticed I've been using my normal textures incorrectly this whole time
Seems like magic still. Great explanation. Thank you.
Damn, I've known the basic idea of normal maps since like 2003, but this is the first time I actually connected the idea with normals.
Thanks, this really helped a lot as I am self-teaching 3d.
*A subscription earned*
Thanks for the sub!
Really good explanation. Thanks.
You should mention soft and hard edges. This works great in Wings3D. You have the auto-smooth feature, but all it does is set soft/hard edges based on angle. Most of the time such automation is good, but in almost every model there are at least a few edges that you need to manually set soft/hard. For example, metal parts should smooth at a much lower angle than organic ones.
Awesome vid! Subscribed!! Can you do a video explaining texture coordinates & mapping like this?
Since the thumbnail looked kinda deep fried I was half expecting a meme guide to understanding normies... Funny how my brain works these days 🤔
im glad im not alone
Thank You!
I remember watching this when I first started using blender, it felt like those trigonometry videos
Informative.
Really interesting thank you
Nice video, and great summarization :)!
at around 0:50 you speak about positive and negative normals, but as far as i know there is no such thing! it has to do with triangle winding order, which decides whether the side you are looking at is a front face or a back face.
and that is used when it comes to culling for performance, or when you want different materials or appearances depending on the "inside"/"outside" of a mesh.
just wanted to clear that up :) but great videos !!
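The winding-order test described above can be sketched as a signed-area check after projecting a triangle to 2D screen space. This is a toy version of mine, not any engine's actual culling code:

```python
def is_front_facing(v0, v1, v2):
    """True when a 2D-projected triangle winds counter-clockwise.

    After projection to screen space, the sign of the triangle's signed
    area tells the rasterizer which side of the face it is seeing;
    back faces can then be culled or given a different material.
    """
    signed_area = ((v1[0] - v0[0]) * (v2[1] - v0[1])
                   - (v2[0] - v0[0]) * (v1[1] - v0[1]))
    return signed_area > 0

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]      # counter-clockwise order
print(is_front_facing(*tri))                    # front face
print(is_front_facing(tri[0], tri[2], tri[1]))  # reversed winding: back face
```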