How to Make Seamless Textures with AI & Blender (Free and Easy) - Stable Diffusion Tutorial 2022
- Published 13 Sep 2022
- UPDATE: Check out my newer, easier video with more tips! • OUTDATED | How to Make...
How to Install and Use Stable Diffusion (June 2023) - Basic Tutorial
• How to Install and Use...
Other options for using Stable Diffusion: / dreamers_guide_to_gett...
Normal Map Generator: www.smart-page.net/smartnormal/
Get Blender here: blender.org/
----------------------------------------------
Did you like this vid? Like & Subscribe to this Channel!
Follow me on Twitter: / albertbozesan
When using the displacement output and node, you don't need to plug anything into the Normal input. Just plug the material's heightmap into the Height input of the Displacement node. Using a normal map in the Normal input tells each vertex to displace in the direction the normal map is facing; leaving the Normal input empty and plugging into the Height input makes it displace in the direction of the object's surface normal. The Normal input is intended for displacement textures designed specifically to displace the vertices in many different directions to create overhangs.
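The Height-input behavior described above can be sketched outside Blender: each vertex simply moves along its own surface normal by the scaled height value. This is an illustrative NumPy sketch, not Blender's actual code; the `midlevel` and `scale` names mirror the Displacement node's settings.

```python
import numpy as np

def displace(verts, normals, heights, midlevel=0.5, scale=1.0):
    """Displace each vertex along its surface normal, like Blender's
    Displacement node with only the Height input connected."""
    offset = (heights - midlevel) * scale     # signed displacement amount
    return verts + normals * offset[:, None]  # move along per-vertex normal

# one vertex at the origin whose normal points up, height 1.0
verts = np.array([[0.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]])
heights = np.array([1.0])
print(displace(verts, normals, heights))  # vertex moves 0.5 up the normal
```

Plugging a normal map into the Normal input would amount to swapping `normals` for the map's per-pixel directions, which is why it only makes sense for maps authored for that purpose.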
This was actually one of the first implementations I tried out with ai. Works pretty well!
I love that the tiling feature is built in now! Saves so much time, and it's a little harder to do in the paid AIs ;)
This was such a great video. You explain things so clearly, without being too fast or too slow. This is how blender tutorials should be!! Thank you!
Thanks so much! Glad you liked it 😄
Definitely also learned something for Blender textures
Been loving your content with stable diffusion!! I am following along and watching all your videos. Keep it up!!
Awesome! Thank you, there's much more to come :)
Loving these tutorials! Can't wait to see your channel explode, these are top notch videos.
Thank you so much! 😄
Really good video, lots of useful tips and info thanks!
This helped a lot thank you
You can use a Bump node to turn a heightmap (or a color-ramped albedo) into a normal map, btw; you don't need to use an external program.
That *is* essentially what the external program is doing anyways. Only benefit is that you don't have to concern yourself with the settings and can just leave that up to the program developer, which can be a lot of help if you're new to it.
True. I did kind of a weird mix of techniques in this video - the online normal map method might be helpful if you’re not going into Blender, for example, and perhaps need a map straight for a game engine.
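For reference, what the Bump node and tools like SmartNormap are doing is roughly this: take the heightmap's x/y gradients and pack them into a tangent-space normal. A minimal NumPy sketch; the `strength` parameter is a made-up stand-in for those tools' intensity sliders.

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a 2D heightmap (floats 0..1) to a tangent-space normal map."""
    dy, dx = np.gradient(height)               # per-pixel slopes
    nz = np.ones_like(height) / strength       # steeper normals as strength rises
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5                       # remap -1..1 to 0..1 colors

flat = np.full((4, 4), 0.5)                    # a flat surface...
nm = height_to_normal(flat)
print(nm[0, 0])  # ...yields the classic "flat blue" normal color
```

A perfectly flat heightmap comes out as the familiar uniform light-blue normal map, which is a handy sanity check for any converter.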
You don't actually need the tiling option. One of the first things I tried, even before downloading SD to run locally, was to put in "bark texture seamless", and it gave me some interesting, perfectly seamless textures.
Oh, and you don't have to write "top down"; it understands "texture" (or possibly "texturemap") and gives you the proper output. At least the times I've tried it. If you complicate the prompt, it might stop working.
Yeah, I’ve tried just using texture. It’s worked less well than “top down” in my experience, but it’s worth playing around with!
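Whichever prompt you use, it's worth sanity-checking whether a result actually tiles: roll the image by half its size so the former edges meet in the middle and eyeball the seam, or measure the jump across the wrap-around edges. A sketch in NumPy, with a periodic and a non-periodic test pattern standing in for generated textures:

```python
import numpy as np

def seam_error(tex):
    """Mean absolute jump across the wrap-around edges of a texture."""
    lr = np.abs(tex[:, 0] - tex[:, -1]).mean()  # left vs right edge
    tb = np.abs(tex[0, :] - tex[-1, :]).mean()  # top vs bottom edge
    return (lr + tb) / 2

n = 64
x, y = np.meshgrid(np.arange(n), np.arange(n))
periodic = np.sin(2 * np.pi * x / n) * np.sin(2 * np.pi * y / n)  # tiles cleanly
ramp = x / n                                                      # hard vertical seam
print(seam_error(periodic), seam_error(ramp))

# for a visual check: move the seam into the middle of the image
shifted = np.roll(ramp, (n // 2, n // 2), axis=(0, 1))
```

A low seam error doesn't guarantee the texture looks good tiled (repeating features can still be obvious), but a high one reliably flags a hard edge.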
There is a free app called Materialize, which can generate all the other maps (normal, bump, roughness, etc.) from an image.
You can also adjust it.
Awesomely awesome! :)
Is there a quick way to generate a height map, similar to SmartNormap? That would be super helpful for displacement. Whether it's generated from the texture or from the normal map, either would work if the results are good.
Yay! Thank you very much for making this video! Very helpful!
Cool, I didn't know there was a tiling variant of Stable Diffusion; I already tried putting "seamless" into the prompt and it didn't help. You can select a Principled node and press Ctrl+Shift+T to select all the textures at once, and then it'll automatically link them up to the correct inputs and a single Mapping node.
Really interesting what you conjure up there. I want to get started myself right away. Greetings from Switzerland.
Greetings back!
concepts finally line up in my brain and... well, who knows? Maybe I'll be able to make something now.
"Step 1 is deleting the default cube" - how every perfect blender tutorial should start.
Awesome tutorial! Thank you very much. Would you be able to make a video demonstrating Carson Katri's Dream Textures in Blender? I'm having trouble trying to figure out how to use it. If so, it would be a big help!
I will take a closer look at it 😄 It's amazing that this vid is already pretty much outdated, just days after I made it 😅 what an awesome community
thanks
I would just replace the normal website with Materialize for a more effective pipeline. Very nice tutorial. Thanks.
Try the prompt "surface texture" as well; that can work well. Not all subjects react well to certain prompts. There is no universal prompt that gives the same effect across all subjects, so you have to experiment with different prompts. Everyone should have a personal prompt list where you save prompts and their potential effects; it's what I do in a spreadsheet.
Also, here is another freebie: try the prompt "photogrammetry with textures", it works very nicely. Combine this with camera details, say a lens type (80mm, 35mm, etc.), a shutter speed of 2000, even camera names.
These models are made using billions of images, 2.3 billion in fact with Stable Diffusion. So the issue is that when you ask it to make something, there are a lot of options for it to pick from, and you need to provide extra details to narrow it down. If you want a realistic texture, you need to add details that guide the model to that section of its 2.3 billion images. If it's more surreal or artificial, then you need to provide the prompts that guide it in that direction.
I am going to let everyone who clicked "read more" and read down here in on a little secret of image generation.
You can have the perfect prompts and set the perfect settings, but what really matters in image generation is the seed. No matter the prompts or settings, if you get a bad seed the image will not turn out good. There are potentially billions of seeds you can use, and each one will produce a different image. Not only that: if you change the settings, such as the image size or init strength, or the prompt itself, the same seed can produce entirely different images. So think of all the different prompts, settings, and seeds you can use, multiply them together, and you can see there is an almost infinite number of images that can be produced. Now how many of them are bad and how many are good?
The point is: don't settle on a single generated image and think the prompt or setting is bad. You have to generate again and again to see if it really doesn't work for what you're looking for.
The truth is, those amazing images people post are usually the one good result out of hundreds of bad or mediocre images they have generated.
It's like playing a gacha game with images; not every one is legendary.
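The seed point above is worth internalizing: a seed just initializes the random noise the model starts from, so the same seed plus identical settings reproduces the same starting noise, and any change fans out into a different image. The principle in plain NumPy, with `starting_noise` as a stand-in for the latent noise an SD build would draw:

```python
import numpy as np

def starting_noise(seed, shape=(4, 4)):
    """Deterministic stand-in for the 'latent noise' behind a given seed."""
    return np.random.default_rng(seed).standard_normal(shape)

a = starting_noise(1234)
b = starting_noise(1234)  # same seed: identical starting noise
c = starting_noise(1235)  # neighboring seed: completely different noise
print(np.array_equal(a, b), np.array_equal(a, c))
```

Note that this determinism only holds within one implementation: different builds sample and schedule their noise differently, which is why the same seed can give different images across UIs.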
Great comment with lots of tips. Thank you!
Thanks for the "top down" tip. Good stuff. Looks good, but you're using specular wrong. Do some research on PBR specular and you'll see you really don't need to adjust it, only in specific circumstances. Thanks again for sharing.
I really hope someone spins up an equivalent web ui for the amd version ASAP :D
Hey Albert, your content is fresh, smart and empowering because it contains relevant knowledge, no BS ! Thank you
Are you DE born and raised? Your name has an Eastern European feel to it :D
Thank you! Glad you like the vids. I'm DE born, US raised :) German/Romanian ancestors.
I have been following your guides and made some great assets for a game I'm developing. Now the only issue is that the styles of these don't look alike. Do you have any tricks for matching styles and moving images closer to one another?
Glad to hear you’ve made some good assets! Have you been using similar styles in your prompts?
To move your existing assets closer to each other, you could put them back into img2img and change the prompt slightly, with a low denoising strength. If you repeat that a few times you could adjust the style.
All your videos are awesome. Thanks for sharing. Would you mind making a video on how to animate the image (made by the AI) in Blender?
I’m working on a way to turn images into 3D models in Blender. It’s a little complex but sub and keep an eye out for that video! 😄
@@albertbozesan Thanks much!
Are there system requirements for this? I've tried installing it, apparently successfully, at v0.6, but it simply would not generate an image without an error. I came across a mention that Stable Diffusion needs 6 GB of VRAM, and it also mentions Blender 3.3. I only have a card with 4 GB, and having an older system, I cannot upgrade from Blender 3.1 to 3.3. Am I wasting my time looking into this? Any thoughts appreciated.
"Pray to the AI gods."…
I worship you, soft god!
Soft God is but a minor deity in the pantheon of the Open Source.
I loved your texture solutions; unfortunately the displacement wasn't enough for me. The bricks would pop more in a realistic setting. Is there any way to prevent the extreme crumpling that happened as soon as the displacement was cranked up?
I think if you cleaned up the normal/bump map by hand in photoshop, yes. Make sure the dark spots on the bricks aren’t as dark as the black in-between areas, for example.
cool video
I like using materialize for creating maps from a diffuse. it's free
Good tip, thanks!
I'm waiting for an AMD tutorial, because that part is really confusing for me, but the title sucks so I'm not re-trying it. Thank you so much for your video.
Just use Bounding Box Materialize to avoid all these steps after creating the diffuse map in Stable Diffusion.
Thank you for this video, I was running the old version of this. You should look into integrating Materialize (boundingboxsoftware), which is free, in your PBR pipeline. The workflow is much, much faster for material building. I am now running batches of 15 1024s.
Great tip, thank you.
yeah, Materialize works super well to create additional maps based on the diffuse. great tool
Albert hello! Why do you think I get different images in different SD builds with the same parameters?
I use the AUTOMATIC1111 (Voldy) build, the GRISK build, and DreamStudio (online), and the results are extremely different everywhere. I tried changing the model to a larger version, but nothing changes.
Huh, interesting. I’m not deep enough into the actual functioning of the AI to know for sure - are you using the same seeds as well?
@@albertbozesan Yes my friend, I used the same seeds and settings in all cases. The results were similar on the output of WEBGUI (AUTOMATIC1111 build) and the online version of DreamStudio, but there were still some minor differences. The other builds gave a completely different result
On the whole, it doesn't make much difference. But the very fact that with the same settings and seed - we can have different output. It made me very sad when I tried to repeat one of your works.
@@michaeldenisov4815 You understand that the images are always generated as new, right? How do you expect it to be the same every time?
@@owlmaster1528 You are wrong: if you use the same parameters and seed, you will always get the same result (on the same build, at least).
For some reason I run out of video ram if I try to make a 1024 texture. I have an rtx 2070 super so you'd think that would be enough. Is there a way around that error?
I don’t recommend going up to 1024. The AI was trained on 512x images, so the results won’t necessarily be better. It’s easier to upscale with ESRGAN.
That's simply too much. Upscale the image if you want a bigger picture, but generate at 512 only.
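The "generate at 512, then upscale" advice is best followed with a learned upscaler like ESRGAN, as mentioned above. Purely to illustrate what upscaling means (and why a naive one adds no detail), here is the simplest possible nearest-neighbor version in NumPy; ESRGAN replaces this pixel repetition with a network that hallucinates plausible detail:

```python
import numpy as np

def upscale_nearest(img, factor=2):
    """Nearest-neighbor upscale: repeat each pixel factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

tex = np.arange(4).reshape(2, 2)   # a tiny 2x2 stand-in 'texture'
big = upscale_nearest(tex, 256)    # 512x512, same content, no new detail
print(big.shape)
```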
Can you please direct me to a video that follows the Voldy install guide step by step? I am sure if someone already knows how to do this, or is familiar with this, the instructions seem straightforward. Step 2 (note to update, all you need to do is type 'git pull' within the newly created folder) is throwing me for a loop. I do not know how to do this. I looked for a video to assist me but did not find one that appears to follow these steps. Any assistance you can provide would be greatly appreciated. Thanks for sharing information about this application.
This one looks good and covers the basics, so it should apply to voldy: czcams.com/video/5dkHkWc5vN0/video.html
But don't fret: I'm not a coder or similar and got it working. You need to have a very basic knowledge of Git, and that's about all "advanced stuff" that's necessary. Just keep at it and read each step carefully :)
@@albertbozesan Thanks. I sincerely appreciate your response. I actually got it installed and working yesterday. Now I just have to learn the software. Will be watching your channel to learn this. I like your style and content. Thank you for sharing.
Me: okay, okay… deleting the default cube. A standard blender tutorial style…
Tutorial: and shift A another cube into it. It is the Blender Gods will.
Me:… what? 😂
HEY! Can you please put it into the desc that the stable diffusion download requires an Nvidia GPU. I spent like the last 5 hours trying to get this to work only to realize that it required Nvidia. So much time wasted. (You got my hopes up too TwT)
Will do. It does say in the installation instructions, though 😅 but if you have an AMD, there are possibilities, too.
i wish it could be made as an addon within blender
Good news, it was! Check the link in the description :)
Hi, my guess is it's not able to receive a top-down texture from the user and create a tileable, seamless version of it yet...
the img2img algorithm could help you there :) check it out, it also has a "tiling" option.
@@albertbozesan Hello, I have not found proper time to set up the software to try, and tbh my good old GTX GPU might just be incompatible. Can I give you a link to a texture sample, so we can try and see how it handles the seamless generation?
@@I3ordo you can check out one of the many cloud services to try it out
@@albertbozesan Ah, can you recommend me any? I have not found anything that can create seamless textures with the StyleGAN-type tools.
Inpaint still doesn't work? I tried but it didn't work.
I don’t like the WebUI Inpaint. It seems buggy.
@@albertbozesan OK, I solved it. If the image has an alpha channel, the inpaint has no effect.
I got it working with "2d ,texture" instead of "top down"
Good tip! I've heard that works, too, haven't gotten great results myself.
@@albertbozesan I tried with asphalt, it loved to spawn tiny cars in the picture when I put "top down" so it depends on the type of the texture.
automatic is everywhere :D
you sound like maxor and I keep laughing a little
This is literally the hardest thing, my brain can't comprehend it. Anything else I do takes a few minutes and this is just... it's just so confusing
What the fuck. The stone texture at the beginning already is prepared to be tiled. What the fuck?
What do you mean? I show my final results at the beginning, then go through the process.
software.
It is indeed.
Good video until you deleted the cube and created another one. People need to stop with this foolishness.
Are you afraid the cubes will come back from the dead for revenge?
Empty promises, directions are all scrambled around, can't get it installed. Better tutorials out there.
Sorry. I'm confused. My Stable Diffusion doesn't have the option for "tiling".
Do you have the Automatic1111 UI installed?
I made songs on GarageBand and thought it would be easier in this software. Nope.
This isn't a tut for audio software, I don't understand...
@@albertbozesan AI bots
Nice tutorial as always, downloading the latest GUI right now :D What I was wondering, since the 1.5 model came out, and soon for the public I guess(?). It would be cool to compare some seeds and generate images using 1.4 vs 1.5 :D
You can be sure 1.5 is much better! I'm looking forward to the public release.