Comic Characters With Stable Diffusion SDXL

  • Published 29 Aug 2024
  • In this comprehensive tutorial, learn how to harness the power of Stable Diffusion AI to produce stunning and visually consistent comic book characters. Whether you're a seasoned artist or just starting, I’ll guide you through the step-by-step process of generating characters that maintain a consistent style from image to image.
    You’ll learn how to prepare custom character datasets, a crucial step in creating your own Stable Diffusion AI model for comic book character generation.
    Discover valuable tips, techniques, and tools to elevate your comic book artistry.
    Want to advance your AI animation skills? Check out my Patreon:
    / sebastiantorresvfx
    www.sebastianto...
    Install Stable Diffusion: • Stable Diffusion In Mi...
    Consistent faces : • Consistent Faces in St...
    Links from the Video:
    SDXL Models: civitai.com/
    Random Name Generator: www.behindthen...

Comments • 84

  • @kanavwastaken
    @kanavwastaken 11 months ago +9

    This video is a gem, really. I'm so sick and tired of most tutorials being so long and complicated; truly, your explanations made me learn. Thank you, for real. We need more! ❤

    • @sebastiantorresvfx
      @sebastiantorresvfx  11 months ago

      I have more coming soon, it’s been a busy month unfortunately but I’m back on track now.

    • @Mr.Sinister_666
      @Mr.Sinister_666 10 months ago +1

      Quick, clear and concise. You are right on point here. The video is a damn gem! ANNNNNDDDDD thanks for being awesome @sebastiantorresvfx

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago

      @Mr.Sinister_666, Made my day 😎 good to know I’m doing it right 😄

  • @shallmow
    @shallmow 10 months ago +2

    Damn, use of actual names is so smart lol. Previously people had to make models with reference photos to get consistent characters.

  • @teamozOFFICIAL
    @teamozOFFICIAL 11 months ago +3

    This tutorial is exactly what I want in tutorials: giving us the information quickly and not being too heavy on the memes. I've happily hit the sub and bell button.

  • @kenny_numbers
    @kenny_numbers 8 months ago

    Thanks so much for creating these videos, Sebastian. I'm in the early stages of the learning curve in trying to get consistent characters and the kinds of images I need for a graphic novel. I spent September and October generating images for a different graphic novel, which I published through Amazon KDP, but I did it by generating loads and loads of images and picking only those I could work with. I also spent at least 150 hours fixing problems and deformities (hands, eyes, limbs, clothing, etc.) in nearly every image. I basically brute-forced my way through, didn't get the results I wanted, and published it anyway. The end result was deficient character consistency, not the most dynamic posing, and inadequate interaction between characters.
    I cannot go through a process like that again. I need a high degree of character consistency and images that work as generated, requiring little or no redrawing. I have generated a single image of a character with a design I like for the new graphic novel. However, SDXL produces a completely different-looking image every time I click generate, even with the same text prompt. I cannot build a dataset of consistent character images when I cannot even generate a second image that looks like the first. What am I missing? Do you have any idea what I'm doing wrong? Any help or advice would be greatly appreciated. Thanks.

  • @trumpsaloser
    @trumpsaloser 10 months ago +1

    still waiting on the 2nd part to this amazing video! great work!

  • @meritorioustechnate9455
    @meritorioustechnate9455 11 months ago +2

    Tutorial is great. I'm using Midjourney for consistent characters and exploring new styles. But the main issue with AI for me is the jagged line art and the proportions. I sketch over the AI art and draw my own line art, adding a unique style.

    • @sebastiantorresvfx
      @sebastiantorresvfx  11 months ago +1

      I've been playing with re-inking after generating. Another method I've found is to upscale the images and inpaint the sections that need sharper line art. I'll then downscale as needed and the quality of the line art will be superior. It's basically how traditional comics are done: original art is drawn oversized and scaled down to roughly 65% for print.
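[Editor's note] The shrink step described above is easy to script with Pillow; the 65% factor, file names and function name here are just placeholders, not part of the video:

```python
from PIL import Image

SHRINK = 0.65  # traditional comic art is drawn oversized, then printed at ~65%

def downscale_lineart(src_path: str, dst_path: str, factor: float = SHRINK) -> None:
    """Downscale an upscaled/inpainted page so the line art tightens up."""
    img = Image.open(src_path)
    new_size = (round(img.width * factor), round(img.height * factor))
    # LANCZOS resampling keeps thin ink lines crisp when shrinking
    img.resize(new_size, Image.LANCZOS).save(dst_path)
```

For example, a 1600x2400 upscaled page comes out at 1040x1560 after the shrink.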

  • @roymathew7956
    @roymathew7956 11 months ago +3

    Love the explanations and the wisdom. Would love to see a video where you work through a few panels for a comic strip, also possibly showing how you add the blurbs. I imagine you’d do that in Photoshop, but wondering if there’s a lora or something in stable diffusion that also works for that

    • @sebastiantorresvfx
      @sebastiantorresvfx  11 months ago

      As for how to put the pages together, we'll get there for sure.
      The word balloons and captions are best done in a photo editor, the best for it being Clip Studio, formerly known as Manga Studio. I love Photoshop but it's not made for that, whereas Clip Studio is more directed towards comic books. And once a year you can outright buy it for like $50-$60 for a permanent license. Can't say the same for Photoshop 😆

    • @roymathew7956
      @roymathew7956 11 months ago +1

      @sebastiantorresvfx Thanks for that.

  • @luozhan
    @luozhan 9 months ago

    Love your channel! ❤
    Thank you for creating this tutorial. It will be great if you could also show us how to create TWO or more consistent characters in the SAME scene. I am looking forward to it. Thanks again for the great work.

  • @gatotboediman9680
    @gatotboediman9680 9 months ago +1

    Love your style and tutorials. Subscribed already.

  • @hairy7653
    @hairy7653 11 months ago +2

    great tutorial

  • @TeluguNarrativeHub
    @TeluguNarrativeHub 11 months ago +2

    Thanks for sharing your knowledge. good job.

  • @ConwayBrew
    @ConwayBrew 9 months ago +1

    Which checkpoint were you using? I didn't see it in the video but really liked the output. Your videos have really helped me dive back into Stable Diffusion and catch up. Thanks!

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago +2

      Thank you so much for your message, means a lot to know it's helping you. I'm using Realities Edge Anime XL; you can find the direct link to it in the description of my latest video on comic book line art. Have fun 😁

  • @Greensacks
    @Greensacks 10 months ago +1

    Really great video! So much more straightforward than others lol. Using this process, how might you handle multiple characters? Say, instead of a superhero, I'm working on two brothers and a dog in a fantasy setting. Would you train a LoRA for each character? And then how would you bring something like that together?

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +1

      I’d prefer to have an individual Lora for each character and the dog so I have more consistency with the look and the clothing.
      As for combining them in automatic 1111 there’s a number of different methods but it’s a little long for a comment to cover. Perhaps a livestream 🙂

  • @user-ui2on4ll9v
    @user-ui2on4ll9v 11 months ago +2

    Thanks for the tutorial. For me, the main problem is the background. I can't draw comics, for now, because I just can't get the same background (for example, the same classroom or the same street in the city) without using a 3D model. And, in my point of view, it is vitally necessary to be able to generate the same background from different angles (and at different distances) to draw action scenes in comics. Could you please tell me, if you know, how to solve this problem? How can I get the same background to draw comics (without a 3D model)?

    • @sebastiantorresvfx
      @sebastiantorresvfx  11 months ago +2

      Unfortunately SD isn't reliable for consistent backgrounds at different angles. My workaround would be to generate the backgrounds, then project them onto some rudimentary 3D geometry. The Archer TV show does a similar process so they can render out a different angle when needed.
      If you're projecting an SD generation onto the 3D model you'll get the same look and have more control. There are ways to change the lighting and light sources too, which can be useful.

  • @DanielSchweinert
    @DanielSchweinert 11 months ago +2

    Thanks! Straight to the point!

    • @sebastiantorresvfx
      @sebastiantorresvfx  11 months ago +2

      Glad to see you back Daniel. 😁

    • @DanielSchweinert
      @DanielSchweinert 11 months ago

      @sebastiantorresvfx I released a new tutorial and a node workflow on Civitai

    • @sebastiantorresvfx
      @sebastiantorresvfx  11 months ago +1

      Taking a couple days to play on stable, I’ll check it out 😃

  • @michaelcarnevale5620
    @michaelcarnevale5620 10 months ago +1

    so informative - i subbed

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago

      Thanks for the sub! Glad you liked it. Good timing, follow up video is coming this week 😁

  • @arnabroy2193
    @arnabroy2193 10 months ago +1

    Thank u so much for sharing

  • @g-aram1405
    @g-aram1405 9 months ago +1

    Hi mate, great tutorial. Can you recommend a model/LoRA that looks simple, like manhua or webtoons? The models I see are mostly for anime illustration.
    Thank you

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Try Counterfeit-V3.0 from civitai. And for the painted look I’d suggest using style selector extension and setting it to painting or something of that sort to push the image in that direction.

  • @DeanCassady
    @DeanCassady 11 months ago +2

    Nice vid, good content

  • @jeffreychung7307
    @jeffreychung7307 10 months ago +1

    Great video. If I want to make a consistent character for a pet, how can I do it? Should I still use the Random Name Generator to name the pet?

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +1

      For pets, depending on your situation, I would suggest either getting a LoRA that's pre-trained on a specific animal, or training your own with photos of just one animal; that way SD won't mix other animals into it.
      Unfortunately, when it comes to side characters (and pets) in comics, if they're going to be showing up consistently, then you'll need a way to make sure they come out looking the same, even if only for a couple of panels. LoRAs are your best bet.

  • @SolveForX
    @SolveForX 9 months ago +1

    Could you do a video on how to train on our own artwork? So that the images come out in our specific style? Is that possible?

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      If you go through the process in the LoRA video you can switch that out for your own art. Just make sure the images are around 1024px or bigger, but don't go too crazy or it will take a while to train.
      But yeah the process is the same no matter what your source images are.
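[Editor's note] A quick way to bring a folder of artwork up to the ~1024 px size suggested above is a small Pillow batch script; the folder layout, function name and 1024 target here are illustrative assumptions, not from the video:

```python
from pathlib import Path
from PIL import Image

TARGET = 1024  # aim for ~1024px on the shorter side for SDXL LoRA training

def prep_dataset(src_dir: str, dst_dir: str, target: int = TARGET) -> int:
    """Resize every image so its shorter side is `target` px; returns count."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for p in Path(src_dir).iterdir():
        if p.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue  # skip captions and other non-image files
        img = Image.open(p)
        scale = target / min(img.size)
        new_size = (round(img.width * scale), round(img.height * scale))
        img.resize(new_size, Image.LANCZOS).save(out / p.name)
        count += 1
    return count
```

Scaling by the shorter side keeps each image's aspect ratio intact, which matters if the trainer uses bucketed resolutions.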

  • @Carmidian
    @Carmidian 9 months ago +1

    This was so helpful, thank you so much! One quick question: what is the SDXL style you're using to get that superhero look? It was awesome!

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago

      Thank you 😁
      The style itself uses the SDXL style selector extension, which you can find in the extensions tab, set to comic. As for the model, it's the Realities Edge Anime XL checkpoint from Civitai.

    • @Carmidian
      @Carmidian 8 months ago +1

      @sebastiantorresvfx Sorry for bothering you, one more question: when it comes to making the LoRA, how many pictures should I generate?

    • @sebastiantorresvfx
      @sebastiantorresvfx  8 months ago

      No worries at all; that's a complicated question. Technically you could get away with 15 images, but you run the risk of it not having enough flexibility for what you require later on. I'd say it's probably best to go with something like 30-50 good all-round images to cover yourself.

    • @Carmidian
      @Carmidian 8 months ago

      @sebastiantorresvfx Thank you, once again. Your videos are incredibly helpful and easy to understand.

  • @iamnow8
    @iamnow8 10 months ago

    Amazing! Waiting on the next video, sir Torres. Do you know how to create low-file-size LoRAs (possibly with faster training)?

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +1

      Wait no more, just went live.
      Network rank and network alpha will keep the files smaller if you choose lower values. As for training times 😬 it can take a couple of hours depending on the number of images in your dataset.
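[Editor's note] For readers using kohya-ss's sd-scripts, the knobs mentioned above correspond to the `--network_dim` (rank) and `--network_alpha` flags; this is a rough config sketch only, with paths, values, and the exact script name as placeholder assumptions:

```shell
# Hypothetical kohya-ss sd-scripts LoRA training invocation (paths/values are placeholders).
# A lower --network_dim shrinks the saved LoRA file roughly in proportion to the rank.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="./models/base_model.safetensors" \
  --train_data_dir="./dataset" \
  --output_dir="./output" \
  --network_module=networks.lora \
  --network_dim=16 \
  --network_alpha=8
```

As a rule of thumb, halving the rank roughly halves the file size; alpha is commonly set to half the rank or equal to it.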

    • @iamnow8
      @iamnow8 10 months ago +1

      @sebastiantorresvfx WOOH :D

  • @kentuckeytom
    @kentuckeytom 10 months ago +1

    Hi, would you mind sharing what video card you are using? Mine is a 1070 Ti 8GB and it takes 3 minutes to generate an image with the same prompt 😪

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +1

      Hello, I'm using a Gigabyte RTX 3090 Turbo. It's a few years old now but still does the job.
      Make sure you have --medvram in the command arguments line of your webui-user.bat, and it might be a good idea to turn off live previews in your A1111 settings. Might give you a slight boost.
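[Editor's note] Concretely, the flag goes on the COMMANDLINE_ARGS line of webui-user.bat; a minimal sketch of that file (the --xformers flag is optional and assumes an NVIDIA card):

```shell
:: webui-user.bat (Automatic1111) -- reduce VRAM usage on smaller cards
set COMMANDLINE_ARGS=--medvram --xformers
call webui.bat
```

For cards with very little VRAM there is also a more aggressive --lowvram flag, at a further speed cost.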

    • @kentuckeytom
      @kentuckeytom 10 months ago +1

      @sebastiantorresvfx It's much better now with --medvram, thanks!

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago

      Awesome! Glad to hear it. 🙂

  • @Kelticfury
    @Kelticfury 9 months ago +1

    Is Automatic1111 handling SDXL properly now? I switched to ComfyUI because it was pretty bad at it.

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago +1

      I believe it is; I've been using SDXL exclusively for the last couple of months. I believe its only shortcoming at the moment is the implementation of ControlNet. It isn't as consistent as it was with 1.5 models, but that might be more to do with the ControlNet models than with Automatic1111. In terms of image quality, though, the potential is definitely greater.

    • @Kelticfury
      @Kelticfury 9 months ago +1

      @sebastiantorresvfx Hey, that is good news. Thanks for the fast reply at an ungodly hour :)

    • @sebastiantorresvfx
      @sebastiantorresvfx  9 months ago +1

      I guess that depends on where you are in the world 😂

  • @anaversary-
    @anaversary- 10 months ago +2

    Very informative video! I love the Star Wars style you added to the prompts at 2:04 lol

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +1

      lol, only took a month for someone to mention the Star Wars crawl 😂 😂 I got a good chuckle making it so I refused to cut it 😂

  • @LouisGedo
    @LouisGedo 11 months ago +2

    👋

  • @lastlight05
    @lastlight05 2 months ago

    How about ComfyUI?

  • @matthewanacleto7885
    @matthewanacleto7885 10 months ago +1

    Another great video. How can we help getting you more subscribers?

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago

      You're awesome! Share them on any forums, groups and Discords where you think the videos could be helpful. Unfortunately I've never been good at keeping up with forums; definitely something I need to get on board with.
      Perhaps I should do live videos too? The only thing keeping me from doing that so far is that I like the fast pace of the videos. Can't really do that in a live video.

    • @matthewanacleto7885
      @matthewanacleto7885 10 months ago +1

      @sebastiantorresvfx Find out the common problems, like the repeatability issue, and solve them too.

  • @100k-subs-target
    @100k-subs-target 10 months ago +2

    Free?

  • @zhoua0571
    @zhoua0571 10 months ago +1

    Why can't I comment?

  • @saierwe
    @saierwe 10 months ago +8

    THE WORST PART ABOUT AI IMAGES is that YOU NEVER OWN THE RIGHTS TO THEM, and there's no way you can get the rights, no matter if you pay for premium apps or talk to the companies; they own the copyrights to all of the images, which for the moment makes AI completely obsolete. If copyright policies change, artists will finally be able to use AI images; for now they are all just obsolete.

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +5

      This has been an interesting conversation for a while now. Though it makes me wonder: what if you designed and copyrighted a character without the use of AI?
      Another workaround is for a comic book artist to train a model on their own style. No one can claim ownership over the artist's style.
      Obviously this isn't possible if you're using anything other than Stable Diffusion. With Midjourney, for instance, you can't train your own model on that platform.

    • @saierwe
      @saierwe 10 months ago +2

      @sebastiantorresvfx I think in that case, yeah, of course you own the character, but the images used to make your "own" comic are images from Stable Diffusion, so they have the rights, I guess. It's very sad; I would pay anything for an AI that creates images and gives you all the rights to them.

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +4

      Seeing as the SDXL 1.0 model is under an open-source license which gives commercial rights to the generated images, there's nothing stopping someone from publishing a comic book and selling it. If the characters are trademarked, then that does stop anyone from legally copying the book and reprinting it.

    • @saierwe
      @saierwe 10 months ago +2

      @sebastiantorresvfx Wait, so there's actually a royalty-free model??? The SDXL?

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago +4

      I think they specifically didn't use the term 'royalty free' because there are no pre-existing images to call royalty free. So instead they used an 'open source' license, which means anything generated by the original SDXL model gives you the commercial rights to that image.
      In saying that, I am specifically talking about the original SDXL model, or any model you create using legally sourced images; say, if you're an artist and you use 30-50 or more of your own artworks to train a model and/or LoRA file.
      With the variant models on Civitai you unfortunately have no point of reference as to what images were used to train them, so those are risky to use.
      I'm also wondering if the original argument about not being able to copyright an AI comic came down to the fact that you can't copyright a style of art, and that the comic book which started the whole issue was trying to copyright a comic with the exact likeness of Zendaya. Which is insane to begin with.

  • @ledesseinduneidee
    @ledesseinduneidee 7 months ago

    inkreadible

  • @jeffreychung7307
    @jeffreychung7307 10 months ago +1

    I get this error:
    "NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(1, 4096, 1, 512) (torch.float32)
    key : shape=(1, 4096, 1, 512) (torch.float32)
    value : shape=(1, 4096, 1, 512) (torch.float32)
    attn_bias : p : 0.0
    `cutlassF` is not supported because: device=cpu (supported: {'cuda'}); Operator wasn't built - see `python -m xformers.info` for more info
    `flshattF` is not supported because: device=cpu (supported: {'cuda'}); dtype=torch.float32 (supported: {torch.float16, torch.bfloat16}); max(query.shape[-1] != value.shape[-1]) > 128; Operator wasn't built - see `python -m xformers.info` for more info
    `tritonflashattF` is not supported because: device=cpu (supported: {'cuda'}); dtype=torch.float32 (supported: {torch.float16, torch.bfloat16}); max(query.shape[-1] != value.shape[-1]) > 128; Operator wasn't built - see `python -m xformers.info` for more info; triton is not available
    `smallkF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 32; Operator wasn't built - see `python -m xformers.info` for more info; unsupported embed per head: 512"
    I guess the reason is that I am using a laptop with no GPU. Is there any way I can fix it using my existing potato? I have tried googling fixes and a bunch of tricks but am still not able to generate my first image. I keep the resolution at 512 × 512 and the sampling method at DDIM (seems the fastest), but I still can't generate my first artwork.

    • @sebastiantorresvfx
      @sebastiantorresvfx  10 months ago

      Hey Jeffrey, without knowing your specs it'll be difficult to say. But if you have an Nvidia GPU, make sure you have the right CUDA toolkit installed; I believe the latest is 11.8.
      Also make sure you have the latest versions of torch and xformers installed. You can install xformers automatically by adding "--xformers" to the command arguments in your webui-user.bat.

    • @jeffreychung7307
      @jeffreychung7307 10 months ago

      @sebastiantorresvfx I had already installed the latest versions of pip, xformers and torch, but still got the same result. I solved it by temporarily removing the --xformers flag. Is the only impact slower generation?

    • @musicwelikemang
      @musicwelikemang 8 months ago

      You need a GPU to run a local model of SD; integrated laptop graphics just won't cut it.
      Try looking into Stable Horde. It's kind of like a peer-to-peer compute net: people with higher-power cards donate them in downtime to other users without the hardware to run SD.
      It uses a credit system and has a pretty good community willing to help teach people.