Stable Diffusion Inpainting Tutorial

  • Published 2 Jun 2024
  • Stable Diffusion Inpainting Tutorial! If you're keen on learning how to fix mistakes and enhance your images with Stable Diffusion, you're in the right place.
    In this concise tutorial I'll guide you through enhancing images using the Stable Diffusion Forge interface, focusing on Juggernaut XL version 9. Learn how to correct and refine a cinematic geisha photo in a futuristic setting, using the Image to Image and Inpaint features for precise alterations.
    Key steps include adjusting denoise strength, using random seeds for varied results, and mastering the Inpaint tool to improve specific areas. Understand the nuances of settings like Mask Blur, Mask Mode, and the Fill option to achieve the perfect look, whether you're modifying, removing, or adding elements to your image.
    I'll also demonstrate advanced techniques like expanding the bounding box for better context understanding and blending. Whether you're fixing a hand, altering a face, or transforming a bunny into a sci-fi creature, this tutorial covers it all.
    Join me in exploring the endless possibilities of Stable Diffusion inpainting, perfect for both beginners and seasoned users. Don't forget to like the video if you find it helpful. Happy image editing!
    Chapters:
    00:00 Stable Diffusion Intro
    01:28 Inpaint Geisha Face and Hand
    05:53 Inpaint Bunny Head
    06:59 Remove Subjects with Inpaint
    07:59 Add Objects with Inpaint
    09:49 Change Clothes with Inpaint
    #StableDiffusion #InpaintingTutorial #ImageEditing #forge
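The Mask Blur setting discussed in the video can be pictured as feathered compositing: the mask is softened and then used as an alpha channel when blending the inpainted patch back into the original image. Below is a minimal Pillow/NumPy sketch of that idea; the Gaussian kernel and the helper name `feathered_composite` are assumptions for illustration, not Forge's actual implementation.

```python
import numpy as np
from PIL import Image, ImageFilter

def feathered_composite(original, inpainted, mask, mask_blur=4):
    """Blend an inpainted result back into the original image.

    `mask` is a black/white PIL image (white = area to replace);
    `mask_blur` mimics what a "Mask Blur" slider does: higher values
    soften the seam between the inpainted patch and the original.
    """
    # Soften the hard mask edge, then use it as a per-pixel alpha.
    soft = mask.convert("L").filter(ImageFilter.GaussianBlur(mask_blur))
    alpha = np.asarray(soft, dtype=np.float32)[..., None] / 255.0
    orig = np.asarray(original.convert("RGB"), dtype=np.float32)
    patch = np.asarray(inpainted.convert("RGB"), dtype=np.float32)
    out = orig * (1.0 - alpha) + patch * alpha
    return Image.fromarray(out.astype(np.uint8))
```

With `mask_blur=0` the patch is pasted with a hard edge; larger values trade seam visibility for a slightly wider transition band around the masked area.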

Comments • 69

  • @ronaldp7573 · 2 months ago · +12

    I have watched 40min videos on inpainting that did not have as much valuable information as your 11min video. You are killing it brother.

  • @fishpickles1377 · 1 day ago

    Very helpful video!

  • @rext7554 · 6 days ago

    Very informative video, thanks 👍🏻

  • @mastertouchMT · 2 days ago · +1

    Great to find such good tutorials for Forge. Would love to see a deep dive into the Integrated ControlNet tabs some time

    • @pixaroma · 2 days ago · +1

      I use SDXL a lot, and its ControlNet models aren't as good as the v1.5 ones, so I haven't used them much, aside from Canny, which I use all the time. I was hoping they'd improve, or that SD3 would come out.

    • @mastertouchMT · 2 days ago · +1

      @@pixaroma I noticed the same... Wasn't sure if it was just me lol

  • @ArchitectureTokyo · 2 months ago · +1

    Best vid on inpainting I have come across yet.

  • @carolineito9312 · 3 months ago · +4

    Omg this was seriously useful thank you!!!

  • 2 months ago · +1

    Duuude! I had no idea I had this much power with SD 1111.
    THANK YOU for taking the time to share your knowledge 🙏 This helps me with my AI art adventures immeasurably.

  • @marcialjimenez5223 · 9 days ago

    12 mins of awesomeness, thanks a bunch!

  • @matsnilsson7922 · 3 months ago

    Keep doing what you're doing! Nice work 👍

  • @jorgeluismontoyasolis9800 · 3 months ago

    Discovered your channel yesterday! Already a big fan. Thank you!!!

  • @59aml · 3 months ago

    Thank you so much, that was excellent.

  • @tiffanyw3794 · 3 months ago · +3

    This is awesome. I just saw that the tiling issue is fixed. I hope you met your goal!

  • @TheEiyashou · 3 months ago

    Thank you for the very useful guide! I always get amazing pictures with just a few small details that are ugly; until now I struggled to patch these up, but this guide helped a lot! Will share it with my friends~

  • @EssamSoliman · 17 days ago

    Man, you're the best!

  • @faredit-cq2xl · 1 month ago

    Thanks a lot, very useful.

  • @johnyoung4409 · 3 months ago

    Very detailed tutorial, thanks for your work!

    • @johnyoung4409 · 3 months ago

      What if I'd like to remove a tattoo from someone's body without distortion? Any ideas?

    • @pixaroma · 3 months ago

      It's more difficult; it depends on how big the tattoo is and how much tattoo-free skin there is around it. You can do a combination: use masked content "fill" first to fill the area with a color, then use masked content "original". But it's probably quicker with the Remove tool in Photoshop.

    • @johnyoung4409 · 3 months ago

      @@pixaroma Many thanks for your reply. I've tried several times, but the results so far are not so good.

    • @pixaroma · 3 months ago

      @@johnyoung4409 You can also try painting over the tattoo with a soft brush in Photoshop, picking the skin color with the eyedropper, and then running it through img2img or Inpaint at different denoise strengths to reconstruct the skin. But so far the Remove tool in Photoshop, or Generative Fill with the word "remove", has done a better job.

    • @johnyoung4409 · 3 months ago

      @@pixaroma Thanks for the info! Yes, I watched a YouTube video that uses Generative Fill in Photoshop to remove a tattoo (czcams.com/video/YPEBymT_lz0/video.html), which is very impressive. He just uses the words "remove tattoo" and Photoshop automatically does a great job. I'm curious why SD can't do the same thing?
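The manual step pixaroma describes in the thread above (pick a nearby skin color with the eyedropper, paint over the tattoo, then run img2img or Inpaint to reconstruct realistic skin) can be sketched in code. This Pillow snippet is only the pre-processing half; the function name `paint_over` and the rectangular-box simplification are illustrative assumptions, not from the video.

```python
from PIL import Image, ImageDraw

def paint_over(image, box, sample_xy):
    """Rough stand-in for hand-painting before an img2img pass:
    sample the color at `sample_xy` (the "eyedropper") and fill the
    region `box` (left, top, right, bottom) with that flat color.
    The flat patch then gives img2img/Inpaint a plausible base to
    turn into textured skin at a moderate denoise strength.
    """
    color = image.getpixel(sample_xy)
    out = image.copy()
    ImageDraw.Draw(out).rectangle(box, fill=color)
    return out
```

In practice you would paint with a soft brush rather than a hard rectangle, exactly so the later denoising pass has less of a seam to hide.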

  • @SumoBundle · 3 months ago

    Very useful tutorial

  • @danieldrutu7484 · 3 months ago

    Thanks!

  • @konnstantinc · 3 months ago

    Nice intro

  • @NicoSeymore · 10 days ago · +1

    You're the best on YouTube for Forge! Thanks a lot! Do you know any way to get two character LoRAs working together properly in Forge? I struggle with the known methods from A1111, and Forge Couple doesn't seem to work either. Inpaint works, but not direct generation.

    • @pixaroma · 10 days ago · +1

      I didn't find a good way. I usually just use Inpaint, or I make a selection of the image in Photoshop and use that in img2img to get the right character, playing with denoise until it's somewhat similar; then I blend it back in Photoshop with masking so it fits.

    • @NicoSeymore · 9 days ago · +1

      @@pixaroma Thanks for answering so fast! Yesterday I tried the "not masked" area option you talked about, and it's way faster and more reliable than just using the masked area for couple pics.

  • @LexChan · 2 months ago · +1

    Great video. Can you create a video on how to insert a certain object (with an image of the object) and blend it into the current working file, in a specific area of the picture? For example, on the floor, in a corner of the pic, or in a hand.

    • @pixaroma · 2 months ago · +1

      It's harder to control an actual object or image. What I do is use Photoshop to place it and then make it blend better with img2img; you can look at the video on AI mockups.

  • @TheRealPuddin · 3 months ago

    This tutorial was super helpful, thank you so much. Question... normally I use the 'Hires. fix' option to upscale images I like and then pass them to Inpaint. In your example, how do you go about upscaling your edited image after Inpainting it? Which tab do you send it to, or do you just use the 'Resize' slider on the Img2Img tab?

    • @pixaroma · 3 months ago · +1

      I send it to Extras and upscale it there, or I use Topaz Gigapixel AI.

    • @TheRealPuddin · 3 months ago

      @@pixaroma Thank you for the speedy response!

  • @CristianLeandroCampagna

    Hey! Thanks for the incredible video. Do you know how I could embed the Stable Diffusion model into my app?

    • @pixaroma · 29 days ago

      I don't, but maybe you can talk with the creator of the model; on Civitai there must be a way to contact them.

  • @AndyHTu · 1 month ago

    When you created the "latent noise" did you have to send the new image with the latent noise to Inpaint? Or can you just continue to generate without clicking "send to"?

    • @pixaroma · 1 month ago · +1

      In the video I just increased the denoise strength, so the noise isn't visible; I showed it so you can see how it looks. It's useful when there's nothing in the image and you want to add something there: it adds noise so the model can create anything in that spot.

    • @AndyHTu · 1 month ago

      @@pixaroma Thanks for answering. I was a bit confused because I was adding the latent noise and then sending it to the working section, but I didn't see you use the "send to" function, so I thought I'd missed a step. It's nice to know that you don't have to do that! You have the best inpainting video on YouTube, btw. :)
      In fact, your whole channel is underrated. You have a lot of great tutorials. I'm going to go through it!

  • @ZeroCool22 · 15 days ago

    I remember in the 1.5 days DDIM was the recommended sampler for inpainting. Has that changed for SDXL?

    • @pixaroma · 15 days ago · +1

      You can try what works best; I usually just use the recommended settings for the model.

  • @sonbaz · 22 days ago

    Hi, great video but... help, please, lol. I'm trying to remove an item and replace it with the wall behind it. I use the "only masked" area and tried both "original" and "latent noise". With low denoising nothing changes; with high denoising the whole image changes. It's like it's not seeing the masked area? Any ideas? Thanks in advance.

    • @pixaroma · 22 days ago

      If you have Photoshop you can remove that with the Content-Aware tools; some things are just not so simple to do with AI. You can also paint over it in any editor with a color similar to your background and then use Inpaint, or maybe img2img, to generate the missing part. Even if something works for one of my images, it might not work for yours, because the AI tries to guess what is there and sometimes it guesses something else :)

    • @sonbaz · 22 days ago · +1

      @@pixaroma Thank you for the reply. For context, I wanted an image of a cat cooking, but the AI gave me a cat cooking a smaller cat, which was on fire. AI can be pretty dark... haha

  • @TomaszJura · 3 months ago

    To change a colour and keep the original item, use more denoising strength plus ControlNet Canny or Depth.

    • @pixaroma · 3 months ago

      Yeah, I usually work with Canny :) Even if it doesn't keep things perfectly, at least it keeps contours and composition.

  • @awholi · 3 months ago · +1

    Wow, this is fast! Is it a 4090? I only get 3.2 it/s with a 4070.

    • @pixaroma · 3 months ago

      I sped it up so you don't have to wait. It's fast, but not that fast; I think 4-5 seconds per image. Yes, it's a 4090.

  • @hindimovies60fps5 · 3 months ago

    Nice, bro... but can you load LoRAs?

    • @pixaroma · 3 months ago · +1

      Not sure if it works with every LoRA, but I tested some and it worked for me.

  • @QuangTran-pp1vj · 21 days ago

    I don't see how to expand that bounding box, can you help me?

    • @pixaroma · 21 days ago

      There isn't one, sorry for the misunderstanding. It's just an overlay added in video editing, meant to visualize what area the model can see and how it would look. I should have mentioned that in the video, sorry about that.

  • @YSNReview · 23 days ago

    Is that SD 1.5?

  • @Maeve472 · 1 month ago

    How do you expand the bounding box?

    • @pixaroma · 1 month ago

      You don't actually have a bounding box; I put it there to give an idea of the area. You just add a tiny dot so that the area seen is bigger; since the dot is tiny it won't affect the outcome, but the model will see more of the image and so understand better how to inpaint.

    • @Maeve472 · 1 month ago

      @@pixaroma Actually you are doing something at 4:16, manually editing the bounding box for the "only masked" area. What is the shortcut for that? :D

    • @pixaroma · 1 month ago · +1

      Sorry for misleading you, I should have been clearer. That is just a square I added in post-processing to draw attention to how much area it sees before and, once I add dots, how much area it will see after. There is no such square in Stable Diffusion; it's just a video overlay in CapCut :) animated to resize to show before and after.

    • @Maeve472 · 1 month ago · +1

      @@pixaroma Oh, no problem. Actually that would be a good extension for "only masked".
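The "tiny dot" trick discussed in this thread works because "Inpaint only masked" processes the bounding box of all masked pixels (plus some padding), so a distant one-pixel dot stretches that box and gives the model more surrounding context. Forge's exact implementation isn't shown in the video; this NumPy sketch, with the illustrative helper `only_masked_crop_box` and a `padding` parameter standing in for the "Only masked padding, pixels" setting, just demonstrates the geometry.

```python
import numpy as np

def only_masked_crop_box(mask, padding=32):
    """Compute the rectangle an "inpaint only masked" pass would work on:
    the bounding box of all masked pixels, grown by `padding` pixels and
    clipped to the image. `mask` is a 2D boolean array (True = masked)."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    h, w = mask.shape
    return (max(y0 - padding, 0), min(y1 + padding, h),
            max(x0 - padding, 0), min(x1 + padding, w))

# A tiny extra dot far from the real mask stretches the bounding box,
# so the model sees much more surrounding context.
mask = np.zeros((512, 512), dtype=bool)
mask[100:120, 100:120] = True            # the area actually being fixed
small = only_masked_crop_box(mask, padding=0)
mask[400, 400] = True                    # the "tiny dot" trick
large = only_masked_crop_box(mask, padding=0)
```

Because the dot itself is a single pixel, it barely changes the generated result, while the crop the model reasons over grows from a 20x20 patch to a 301x301 region.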

  • @Maker_of_Creation · 2 months ago

    ai photo movie
    czcams.com/video/uP04emczDi8/video.html

    • @pixaroma · 2 months ago

      Cute, but it didn't say how it was made.