How to Make Seamless Textures with AI & Blender (Free and Easy) - Stable Diffusion Tutorial 2022

  • Published 13 Sep 2022
  • UPDATE: Check out my newer, easier video with more tips! • OUTDATED | How to Make...
    How to Install and Use Stable Diffusion (June 2023) - Basic Tutorial
    • How to Install and Use...
    Other options for using Stable Diffusion: / dreamers_guide_to_gett...
    Normal Map Generator: www.smart-page.net/smartnormal/
    Get Blender here: blender.org/
    ----------------------------------------------
    Did you like this vid? Like & Subscribe to this Channel!
    Follow me on Twitter: / albertbozesan

Comments • 100

  • @marshmallow_fellow
    @marshmallow_fellow 1 year ago +16

    When using the displacement output and node, you don't need to plug anything into the Normal input; just plug the material's heightmap into the Height input of the Displacement node. Using a normal map in the Normal input tells each vert to displace in the direction the normal map is facing. Leaving the Normal input empty and plugging into the Height input will cause it to displace in the direction of the object's surface normal. The Normal input is intended for displacement textures designed specifically to displace the verts in many different directions, to create overhangs.
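
The wiring described above can be sketched with Blender's Python API. This is a minimal sketch, not a definitive setup: the material name "BrickMat" and image name "brick_height.png" are placeholder assumptions; adjust them to your scene and run it in Blender's Python console.

```python
import bpy

# Assumes a node-based material named "BrickMat" already exists (placeholder name).
mat = bpy.data.materials["BrickMat"]
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Image texture holding the heightmap (hypothetical image name).
height_tex = nodes.new("ShaderNodeTexImage")
height_tex.image = bpy.data.images["brick_height.png"]

# Displacement node: plug the heightmap into Height, leave Normal empty.
disp = nodes.new("ShaderNodeDisplacement")
links.new(height_tex.outputs["Color"], disp.inputs["Height"])

# Route into the Material Output's Displacement socket.
out = nodes["Material Output"]
links.new(disp.outputs["Displacement"], out.inputs["Displacement"])

# For true vertex displacement in Cycles (not just bump shading),
# the material's displacement method must be set accordingly.
mat.cycles.displacement_method = 'DISPLACEMENT'
```

Note the last line: with the default 'BUMP' method, the Displacement socket only shades as bump and no verts actually move.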

  • @Reforge3d
    @Reforge3d 1 year ago +19

    This was actually one of the first implementations I tried out with ai. Works pretty well!

    • @albertbozesan
      @albertbozesan  1 year ago +2

      I love that the tiling feature is built in now! Saves so much time - and it's a little harder to do in the paid AIs ;)

  • @pianoatthirty
    @pianoatthirty 1 year ago +9

    This was such a great video. You explain things so clearly, without being too fast or too slow. This is how blender tutorials should be!! Thank you!

  • @lionkingmerlin
    @lionkingmerlin 1 year ago +1

    Definitely also learned something for Blender textures

  • @briannaalejo9226
    @briannaalejo9226 1 year ago +2

    Been loving your content with stable diffusion!! I am following along and watching all your videos. Keep it up!!

    • @albertbozesan
      @albertbozesan  1 year ago

      Awesome! Thank you, there's much more to come :)

  • @MitchWilcoxen
    @MitchWilcoxen 1 year ago +4

    Loving these tutorials! Can't wait to see your channel explode, these are top notch videos.

  • @Kalyptic
    @Kalyptic 1 year ago +2

    Really good video, lots of useful tips and info thanks!

  • @arturaronov5676
    @arturaronov5676 1 year ago +1

    This helped a lot thank you

  • @Alex-nl5cy
    @Alex-nl5cy 1 year ago +11

    You can use a Bump node to turn a heightmap (or a color ramp of your albedo) into a normal map, by the way; you don't need to use an external program.

    • @Sylfa
      @Sylfa 1 year ago +2

      That *is* essentially what the external program is doing anyways. Only benefit is that you don't have to concern yourself with the settings and can just leave that up to the program developer, which can be a lot of help if you're new to it.

    • @albertbozesan
      @albertbozesan  1 year ago +5

      True. I did kind of a weird mix of techniques in this video - the online normal map method might be helpful if you’re not going into Blender, for example, and perhaps need a map straight for a game engine.
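
For intuition on what both the Bump node and tools like SmartNormap are doing under the hood: they take finite differences of the heightmap and pack the resulting surface slopes into tangent-space RGB. A rough, dependency-free sketch (the `strength` parameter is an assumption, mirroring the sliders those tools expose):

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D heightmap (list of rows, values 0..1) to per-pixel
    normals encoded as 8-bit RGB in the usual tangent-space convention."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences with edge clamping: slope in x and y.
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            # Normal points "up" out of the surface, tilted by the slope.
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            nx, ny, nz = nx / length, ny / length, nz / length
            # Map [-1, 1] -> [0, 255]; a flat area becomes that familiar
            # normal-map blue, roughly (128, 128, 255).
            row.append((int((nx * 0.5 + 0.5) * 255),
                        int((ny * 0.5 + 0.5) * 255),
                        int((nz * 0.5 + 0.5) * 255)))
        out.append(row)
    return out

# A completely flat heightmap yields flat normals everywhere.
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat)[0][0])  # (127, 127, 255)
```

The external tools mostly differ in blur radius, inversion, and strength defaults layered on top of this same idea.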

  • @Sylfa
    @Sylfa 1 year ago +6

    You don't actually need the tiling option. One of the first things I tried, even before downloading SD to run locally, was to put in "bark texture seamless", and it gave me some interesting, perfectly seamless textures.
    Oh, and you don't have to write "top down"; it understands "texture" (or possibly "texturemap") and gives you the proper output, at least the times I've tried it. If you complicate the prompt it might stop working.

    • @albertbozesan
      @albertbozesan  1 year ago +3

      Yeah, I’ve tried just using texture. It’s worked less well than “top down” in my experience, but it’s worth playing around with!

  • @RBN64
    @RBN64 1 year ago +2

    There is a free app called Materialize, which can generate all the other maps (normal, bump, roughness, etc.) from an image.
    You can also adjust them.

  • @blendercomp
    @blendercomp 1 year ago +1

    Awesomely awesome! :)

  • @LumberingTroll
    @LumberingTroll 1 year ago

    Is there a quick way to generate a height map, similar to SmartNormap? That would be super helpful for displacement. If it's generated from the texture or the normal map, either would work if the results are good.

  • @VonsterBR
    @VonsterBR 1 year ago +1

    Yay! Thank you very much for making this video! Very helpful!

  • @phischphood
    @phischphood 1 year ago +1

    Cool, I didn't know there was a tiling variant of Stable Diffusion; I already tried putting "seamless" into the prompt and it didn't help. You can select a Principled node and press Ctrl+Shift+T to select all the textures at once, and then it'll automatically link them up to the correct inputs and a single Mapping input.

  • @kevinstefanovic
    @kevinstefanovic 1 year ago +1

    Really interesting what you conjure up there. I want to get started myself right away. Greetings from Switzerland.

  • @RiscoDavinciResolve
    @RiscoDavinciResolve 1 year ago +1

    Concepts finally line up in my brain and... well, who knows? Maybe I'll be able to make something now.

  • @jtmcdole
    @jtmcdole 1 year ago +1

    "Step 1 is deleting the default cube" - how every perfect blender tutorial should start.

  • @stufffromthing5988
    @stufffromthing5988 1 year ago +1

    Awesome tutorial! Thank you very much. Would you be able to make a video demonstrating Carson Katri's Dream Textures in Blender? I'm having trouble figuring out how to use it. If so, it would be a big help!

    • @albertbozesan
      @albertbozesan  1 year ago +2

      I will take a closer look at it 😄 It's amazing that this vid is pretty much outdated now, just days after I made it 😅 what an awesome community

  • @blenderdream
    @blenderdream 1 year ago +1

    thanks

  • @JSena-ff8we
    @JSena-ff8we 1 year ago +1

    I would just replace the normal website with Materialize for a more effective pipeline. Very nice tutorial. Thanks.

  • @Darkjayson82
    @Darkjayson82 1 year ago +3

    Try the prompt "surface texture" as well; that can work well. Not all subjects react well to certain prompts, and there is no universal prompt that gives the same effect across all subjects; you have to experiment with different prompts. Everyone should have a personal prompt list where you save prompts and their potential effects. It's what I do, in a spreadsheet.
    Also, here is another freebie: try the prompt "photogrammetry" with textures; it works very nicely. Combine this with camera details, say lens type (80mm, 35mm, etc.), shutter speed 2000, even camera names.
    These models are made using billions of images, 2.3 billion in fact with Stable Diffusion. So the issue is that when you ask it to make something, there are a lot of options for it to pick from, so you need to provide extra details to narrow it down. If you want a realistic texture, you need to add details that guide the model to that section of its 2.3 billion images. If it's more surreal or artificial, then you need to provide the prompts that guide it in that direction.
    I am going to let everyone who clicked "read more" and read down here in on a little secret in image generation.
    You can have the perfect prompts and set the perfect settings, but what really matters in image generation is the seed. No matter the prompts or settings, if you get a bad seed, the image will not turn out well. There are potentially billions of seeds you can use, and each one will produce a different image. Not only that: if you change the settings, such as the image size or init strength, or the prompt itself, the same seed can produce entirely different images. So think of all the different prompts, settings, and seeds you can use and multiply them together, and you can see there is an almost infinite number of images that can be produced. Now, how many of them are bad and how many are good?
    The point is, don't settle on a single generated image and decide the prompt or setting is bad; you have to generate again and again to see whether it works for what you're looking for.
    The truth is, those amazing images people post are usually the good ones out of hundreds of bad or not-so-good images they have generated.
    It's like playing a gacha game with images; not every one is legendary.
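
The seed point above is easy to demonstrate: diffusion models use the seed to initialize the starting noise, so the same seed with the same settings reproduces the run exactly, just like any seeded random generator. A plain-Python analogy (the `generate` function is a stand-in, not a real SD API):

```python
import random

def generate(seed, steps=4):
    """Stand-in for an image sampler: a seeded RNG drives every 'step',
    so identical seed + settings produce an identical 'image'."""
    rng = random.Random(seed)
    return [round(rng.random(), 6) for _ in range(steps)]

a = generate(seed=42)
b = generate(seed=42)  # same seed, same settings
c = generate(seed=43)  # one seed over

print(a == b)  # True: fully reproducible
print(a == c)  # False: a different seed is effectively a different image
```

This is also why sharing seed + prompt + settings lets others reproduce a result, and why browsing many seeds is part of the workflow.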

  • @ChippWalters
    @ChippWalters 1 year ago +3

    Thanks for the "top down" tip. Good stuff. Looks good, but you're using specular wrong. Do some research on PBR specular and you'll see you really don't need to adjust it, only in specific circumstances. Thanks again for sharing.

  • @autogenes
    @autogenes 1 year ago +2

    I really hope someone spins up an equivalent web ui for the amd version ASAP :D

  • @rhbrolotek
    @rhbrolotek 1 year ago +1

    Hey Albert, your content is fresh, smart, and empowering because it contains relevant knowledge, no BS! Thank you.
    Are you DE born and raised? Your name has an Eastern European feeling to it :D

    • @albertbozesan
      @albertbozesan  1 year ago

      Thank you! Glad you like the vids. I'm DE born, US raised :) German/Romanian ancestors.

  • @wendten2
    @wendten2 1 year ago +2

    I have been following your guides and made some great assets for a game I'm developing. Now the only issue is that the styles of these don't look alike. Do you have any tricks for matching styles and moving images closer to one another?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      Glad to hear you’ve made some good assets! Have you been using similar styles in your prompts?
      To move your existing assets closer to each other, you could put them back into img2img and change the prompt slightly, with a low denoising strength. If you repeat that a few times you could adjust the style.
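
Albert's tip maps directly onto the img2img denoising strength parameter. In common implementations (e.g. AUTOMATIC1111, diffusers), strength works by noising the input image partway and then running only the final fraction of the sampling schedule, so low strength preserves most of the input. A tiny sketch of that scheduling logic (the function name is illustrative, not a real API):

```python
def img2img_steps(total_steps, denoising_strength):
    """How img2img pipelines typically derive the work to do: noise the
    input partway in, then denoise only the last strength-fraction of steps."""
    if not 0.0 <= denoising_strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    steps_to_run = int(total_steps * denoising_strength)
    start_step = total_steps - steps_to_run
    return start_step, steps_to_run

# Low strength = few denoising steps = output stays close to the input,
# which is why repeated low-strength passes nudge style without losing content.
print(img2img_steps(50, 0.3))  # (35, 15)
print(img2img_steps(50, 1.0))  # (0, 50): full-strength img2img acts like txt2img
```

Iterating the low-strength pass, as suggested above, accumulates small style shifts while each pass keeps the composition intact.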

  • @HN-ks9td
    @HN-ks9td 1 year ago +1

    All your videos are awesome. Thanks for sharing. Would you mind making a video on how to animate the image (made by the AI) in Blender?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      I’m working on a way to turn images into 3D models in Blender. It’s a little complex but sub and keep an eye out for that video! 😄

    • @HN-ks9td
      @HN-ks9td 1 year ago

      @@albertbozesan Thanks much!

  • @tdsdave
    @tdsdave 1 year ago

    Are there system requirements for this? I've tried installing it, apparently successfully, at v0.6, but it simply would not generate an image without an error. I came across a mention that Stable Diffusion needs 6 GB of VRAM, and it also makes mention of Blender 3.3. I only have a card with 4 GB, and having an older system, I cannot upgrade from Blender 3.1 to 3.3. Am I wasting my time looking into this? Any thoughts appreciated.

  • @stuffyangel
    @stuffyangel 1 year ago +2

    "Pray to the AI gods."...
    I worship you, soft god!

    • @albertbozesan
      @albertbozesan  1 year ago

      Soft God is but a minor deity in the pantheon of the Open Source.

  • @Lou-li5mv
    @Lou-li5mv 1 year ago

    I loved your texture solutions; unfortunately, the displacement wasn't enough for me... the bricks would pop more in a realistic setting. Is there any way to prevent the extreme crumpling that happened as soon as the displacement was cranked up?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      I think if you cleaned up the normal/bump map by hand in photoshop, yes. Make sure the dark spots on the bricks aren’t as dark as the black in-between areas, for example.

  • @Ldmlchkl
    @Ldmlchkl 3 months ago

    cool video

  • @StephenWebb1980
    @StephenWebb1980 1 year ago +1

    I like using materialize for creating maps from a diffuse. it's free

  • @Amor2point0
    @Amor2point0 1 year ago +1

    I'm waiting for an AMD tutorial, because that part is really confusing for me. Thank you so much for your video.

  • @IndianCitizen04
    @IndianCitizen04 1 year ago +1

    Just use Bounding Box's Materialize to avoid all these steps after creating the diffuse map in Stable Diffusion.

  • @guardianofnorth
    @guardianofnorth 1 year ago

    Thank you for this video; I was running the old version of this. You should look into integrating Materialize (Bounding Box Software, free) in your PBR pipeline. The workflow is much, much faster for material building; I am now running batches of 15 1024s.

    • @albertbozesan
      @albertbozesan  1 year ago

      Great tip, thank you.

    • @ergohash2517
      @ergohash2517 1 year ago

      Yeah, Materialize works super well to create additional maps based on the diffuse. Great tool.

  • @michaeldenisov4815
    @michaeldenisov4815 1 year ago +1

    Albert, hello! Why do you think I get different images with the same parameters in different SD builds?
    I use the AUTOMATIC1111 (Voldy) build, the GRISK build, and DreamStudio (online), and everywhere the results are extremely different. I tried changing the model to a larger version, but nothing changes.

    • @albertbozesan
      @albertbozesan  1 year ago +1

      Huh, interesting. I’m not deep enough into the actual functioning of the AI to know for sure - are you using the same seeds as well?

    • @michaeldenisov4815
      @michaeldenisov4815 1 year ago

      @@albertbozesan Yes, my friend, I used the same seeds and settings in all cases. The results were similar between the WebGUI (AUTOMATIC1111 build) and the online version of DreamStudio, but there were still some minor differences. The other builds gave a completely different result.

    • @michaeldenisov4815
      @michaeldenisov4815 1 year ago

      On the whole, it doesn't make much difference. But the very fact that with the same settings and seed we can get different output... it made me very sad when I tried to reproduce one of your works.

    • @owlmaster1528
      @owlmaster1528 1 year ago

      @@michaeldenisov4815 You understand that the images are always generated anew, right? How do you expect them to be the same every time?

    • @michaeldenisov4815
      @michaeldenisov4815 1 year ago

      @@owlmaster1528 You are wrong; if you use the same parameters and seed, you will always get the same result.

  • @adventuresinportland3032

    For some reason I run out of video RAM if I try to make a 1024 texture. I have an RTX 2070 Super, so you'd think that would be enough. Is there a way around that error?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      I don’t recommend going up to 1024. The AI was trained on 512x images, so the results won’t necessarily be better. It’s easier to upscale with ESRGAN.

    • @nolimit3281
      @nolimit3281 1 year ago

      That's simply too much; upscale the image if you want a bigger picture, but generate at 512 only.

  • @knowyourjoe8826
    @knowyourjoe8826 1 year ago +1

    Can you please direct me to a video that follows the Voldy install guide step by step? I am sure that for someone who already knows how to do this, or is familiar with it, the instructions seem straightforward. Step 2 (note: to update, all you need to do is type 'git pull' within the newly created folder) is throwing me for a loop. I do not know how to do this. I looked for a video to assist me but did not find one that appears to follow these steps. Any assistance you can provide would be greatly appreciated. Thanks for sharing information about this application.

    • @albertbozesan
      @albertbozesan  1 year ago

      This one looks good and covers the basics, so it should apply to voldy: czcams.com/video/5dkHkWc5vN0/video.html
      But don't fret: I'm not a coder or similar and got it working. You need to have a very basic knowledge of Git, and that's about all "advanced stuff" that's necessary. Just keep at it and read each step carefully :)

    • @knowyourjoe8826
      @knowyourjoe8826 1 year ago

      @@albertbozesan Thanks. I sincerely appreciate your response. I actually got it installed and working yesterday. Now I just have to learn the software. Will be watching your channel to learn this. I like your style and content. Thank you for sharing.

  • @chickishot8172
    @chickishot8172 1 year ago +2

    Me: okay, okay... deleting the default cube. A standard Blender tutorial style...
    Tutorial: and Shift+A another cube into it. It is the Blender gods' will.
    Me: ...what? 😂

  • @lilyofluck371
    @lilyofluck371 1 year ago +1

    HEY! Can you please put it in the description that the Stable Diffusion download requires an Nvidia GPU? I spent like the last 5 hours trying to get this to work, only to realize that it required Nvidia. So much time wasted. (You got my hopes up too TwT)

    • @albertbozesan
      @albertbozesan  1 year ago

      Will do. It does say in the installation instructions, though 😅 but if you have an AMD, there are possibilities, too.

  • @samtunji
    @samtunji 1 year ago +1

    I wish it could be made as an addon within Blender.

    • @albertbozesan
      @albertbozesan  1 year ago

      Good news, it was! Check the link in the description :)

  • @I3ordo
    @I3ordo 1 year ago

    Hi, my guess is that it is not yet able to receive a top-down texture from the user and create a tileable, seamless version of it...

    • @albertbozesan
      @albertbozesan  1 year ago

      the img2img algorithm could help you there :) check it out, it also has a "tiling" option.

    • @I3ordo
      @I3ordo 1 year ago

      @@albertbozesan Hello, I have not found proper time to set up the software, and tbh my good old GTX GPU might just be incompatible. Can I give you a link to a texture sample, so we can try and see how it handles the seamless generation?

    • @albertbozesan
      @albertbozesan  1 year ago

      @@I3ordo you can check out one of the many cloud services to try it out

    • @I3ordo
      @I3ordo 1 year ago

      @@albertbozesan Ah, can you recommend any? I haven't found anything that can create seamless textures with StyleGAN-type tools.

  • @rmumof
    @rmumof 1 year ago

    Inpaint still doesn't work? I tried, but it didn't work.

    • @albertbozesan
      @albertbozesan  1 year ago +1

      I don’t like the WebUI Inpaint. It seems buggy.

    • @rmumof
      @rmumof 1 year ago

      @@albertbozesan OK, I solved it. If the image has an alpha channel, the inpaint has no effect.

  • @nizzinesworkshop2636
    @nizzinesworkshop2636 1 year ago +1

    I got it working with "2d, texture" instead of "top down".

    • @albertbozesan
      @albertbozesan  1 year ago

      Good tip! I've heard that works, too, haven't gotten great results myself.

    • @nizzinesworkshop2636
      @nizzinesworkshop2636 1 year ago

      @@albertbozesan I tried with asphalt; it loved to spawn tiny cars in the picture when I put "top down", so it depends on the type of the texture.

  • @synthoelectro
    @synthoelectro 1 year ago

    automatic is everywhere :D

  • @localfriendlycloud7720
    @localfriendlycloud7720 1 year ago +1

    You sound like maxor and I keep laughing a little.

  • @iquittv1915
    @iquittv1915 1 year ago

    This is literally the hardest thing; my brain can't comprehend it. Anything else I do takes a few minutes, and this is just... it's just so confusing.

  • @IschbindeKalle
    @IschbindeKalle 1 year ago

    What the fuck. The stone texture at the beginning already is prepared to be tiled. What the fuck?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      What do you mean? I show my final results at the beginning, then go through the process.

  • @fehrleon5627
    @fehrleon5627 1 year ago

    software.

  • @massivetree7937
    @massivetree7937 1 year ago

    Good video until you deleted the cube and created another one. People need to stop with this foolishness.

    • @albertbozesan
      @albertbozesan  1 year ago

      Are you afraid the cubes will come back from the dead for revenge?

  • @reachingeric
    @reachingeric 1 year ago

    Empty promises; directions are all scrambled around, can't get it installed. Better tutorials out there.

  • @dustinrolstad752
    @dustinrolstad752 1 year ago

    Sorry. I'm confused. My Stable Diffusion doesn't have the option for "tiling".

  • @hseworldkochi
    @hseworldkochi 1 year ago

    too I made like s on garage band and thought it be easier in softsoft. nope

    • @albertbozesan
      @albertbozesan  1 year ago

      This isn't a tut for audio software, I don't understand...

    • @NebMotion
      @NebMotion 1 year ago +1

      @@albertbozesan AI bots

  • @Null-byte
    @Null-byte 1 year ago

    Nice tutorial as always; downloading the latest GUI right now :D What I was wondering: since the 1.5 model came out, and soon for the public I guess(?), it would be cool to compare some seeds and generate images using 1.4 vs 1.5 :D

    • @albertbozesan
      @albertbozesan  1 year ago +1

      You can be sure 1.5 is much better! I'm looking forward to the public release.