CONVERT ANY IMAGE TO LINEART Using ControlNet! SO INCREDIBLY COOL!

  • Published 8 Mar 2023
  • Transform any image into lineart using ControlNet inside Stable Diffusion! In this video I will show you how to easily convert any previously generated color image into a black-and-white lineart version with ControlNet, without leaving the Automatic1111 Stable Diffusion webui, and then how to refine it even further in a free image-manipulation website like Photopea (a rough API sketch of this workflow follows at the end of the description). So let's go!
    Did you manage to convert your image into lineart? Let me know in the comments!
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    SOCIAL MEDIA LINKS!
    ✨ Support my work on Patreon: / aitrepreneur
    ⚔️ Join the Discord server: bit.ly/aitdiscord
    🧠 My Second Channel THE MAKER LAIR: bit.ly/themakerlair
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Runpod: bit.ly/runpodAi
    Photopea: www.photopea.com/
    a line drawing lineart linework
    Thanks to SergioPAL for inspiring this video: / how_to_turn_any_image_...
    All ControlNet Videos: • ControlNet
    My previous ControlNet video: • TRANSFER STYLE FROM An...
    3D POSE & HANDS INSIDE Stable Diffusion! Posex & Depth Map: • 3D POSE & HANDS INSIDE...
    Multiple Characters With LATENT COUPLE: • MULTIPLE CHARACTERS In...
    GET PERFECT HANDS With MULTI-CONTROLNET & 3D BLENDER: • GET PERFECT HANDS With...
    NEXT-GEN MULTI-CONTROLNET INPAINTING: • NEXT-GEN MULTI-CONTROL...
    CHARACTER TURNAROUND In Stable Diffusion: • CHARACTER TURNAROUND I...
    EASY POSING FOR CONTROLNET : • EASY POSING FOR CONTRO...
    3D Posing With ControlNet: • 3D POSING For PERFECT ...
    My first ControlNet video: • NEXT-GEN NEW IMG2IMG I...
    Special thanks to Royal Emperor:
    - Merlin Kauffman
    - Totoro
    Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
    #stablediffusion #controlnet #lineart #stablediffusiontutorial
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    WATCH MY MOST POPULAR VIDEOS:
    RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
    ►► bit.ly/stablediffusion
    RECOMMENDED WATCHING - My "Tutorial" Playlist:
    ►► bit.ly/TuTPlaylist
    Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.
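    For anyone who wants to script the same img2img + ControlNet (Canny) workflow instead of clicking through the webui, here is a minimal sketch against the Automatic1111 API. It assumes the webui is running locally with the --api flag and the ControlNet extension installed; the ControlNet unit keys (input_image, guidance_start, ...) and the model name vary between extension versions, and the prompt simply reuses the keyword line from this description, so treat every value as a starting point rather than the exact settings used in the video.

```python
# Hedged sketch: turn a color image into lineart via img2img + ControlNet Canny.
# Assumes Automatic1111 running locally with --api and the ControlNet extension;
# adjust field names and the model name to whatever your installation exposes.
import base64
import requests

API = "http://127.0.0.1:7860"

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

source = b64("color_image.png")  # hypothetical input file

payload = {
    "init_images": [source],
    "prompt": "a line drawing, lineart, linework, white background",
    "negative_prompt": "color, shading, photo",
    "denoising_strength": 0.95,   # high, so mostly the ControlNet edges survive
    "steps": 20,
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": source,            # may be named "image" in newer versions
                "module": "canny",
                "model": "control_sd15_canny",    # use the exact name shown in your dropdown
                "weight": 1.0,
                "guidance_start": 0.0,
                "guidance_end": 1.0,
            }]
        }
    },
}

result = requests.post(f"{API}/sdapi/v1/img2img", json=payload).json()
with open("lineart.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```

    The lineart that comes back can then be cleaned up in Photopea exactly as shown in the video.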

Comments • 186

  • @ryry9780
    @ryry9780 1 year ago +1

    Just binged your entire playlist on ControlNet. That and Inpainting are truly like magic. Thank you so much!

  • @arvinds2300
    @arvinds2300 1 year ago +52

    Simple yet so effective. Controlnet is seriously magic.

  • @iamYork_
    @iamYork_ 1 year ago +2

    I haven't had much time to dabble with ControlNet, but one of my first thoughts was making images into sketches, as opposed to everyone turning sketches into amazing generated art... Great job as always...

  • @winkletter
    @winkletter 1 year ago +23

    I find mixing DepthMap and Canny lets you specify how abstract you want it to be. Pure DepthMap looks more like illustrated vector line art, but adding Canny makes it more and more like a sketch.
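    A hedged sketch of what that depth-plus-Canny mix can look like as two ControlNet units in a single img2img call through the Automatic1111 API. The unit keys and model names are assumptions that depend on your ControlNet extension version, and the weights are just illustrative knobs for how sketchy the result gets:

```python
# Hedged sketch: stack a depth unit and a canny unit on one generation.
# Assumes Automatic1111 with --api, Multi-ControlNet enabled, and both models
# installed; key names such as "input_image" differ between extension versions.
import base64, requests

img = base64.b64encode(open("input.png", "rb").read()).decode()

units = [
    {"input_image": img, "module": "depth", "model": "control_sd15_depth", "weight": 1.0},
    {"input_image": img, "module": "canny", "model": "control_sd15_canny", "weight": 0.6},  # raise for a sketchier look
]

payload = {
    "init_images": [img],
    "prompt": "a line drawing, lineart, linework, white background",
    "denoising_strength": 0.95,
    "alwayson_scripts": {"controlnet": {"args": units}},
}

out = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload).json()
with open("depth_plus_canny.png", "wb") as f:
    f.write(base64.b64decode(out["images"][0]))
```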

  • @tripleheadedmonkey6613
    @tripleheadedmonkey6613 1 year ago +10

    I love this. Now we just need someone to change-up the ControlNet IMG2IMG pipeline so that you can use a batch folder in ControlNet specifically.
    That way you could use a white blank background to make animated line art in batches instead of having to do it frame by frame again with this new process.
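    Until the extension supports a batch folder directly, one workaround is to loop the frames through the API yourself. A minimal sketch; convert_to_lineart is a hypothetical stand-in for whatever single-image call you already use (for example the img2img + ControlNet payload sketched near the top of this page):

```python
# Hedged sketch: batch-convert a folder of frames to lineart one frame at a time.
# convert_to_lineart is hypothetical -- plug in your own img2img + ControlNet call.
from pathlib import Path

def convert_to_lineart(src: Path, dst: Path) -> None:
    ...  # send one frame through img2img + ControlNet and write the result to dst

frames_in = Path("frames")            # e.g. extracted with: ffmpeg -i clip.mp4 frames/%05d.png
frames_out = Path("frames_lineart")
frames_out.mkdir(exist_ok=True)

for frame in sorted(frames_in.glob("*.png")):
    convert_to_lineart(frame, frames_out / frame.name)
    print("done:", frame.name)

# Reassemble afterwards with something like: ffmpeg -i frames_lineart/%05d.png lineart.mp4
```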

  • @GS-ef5ht
    @GS-ef5ht 1 year ago +1

    Exactly what I was looking for, thank you!

  • @visualdestination
    @visualdestination 1 year ago +3

    SD came out and was amazing. Then dreambooth. Now Controlnet. Can't wait to see what's the next big leap.

  • @titanitis
    @titanitis 6 months ago +1

    Would be awesome to have an updated edition of this video now, as there are so many new options with ComfyUI. Thank you for the video Aitrepreneur!

  • @alexandrmalafeev7182
    @alexandrmalafeev7182 1 year ago

    Very nice technique, thank you! You can also tune Canny's low/high thresholds to control the lines and fills.
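    The Canny preprocessor is essentially OpenCV's edge detector, so you can preview what those thresholds do without involving Stable Diffusion at all. A small sketch; the threshold pairs are only starting points:

```python
# Hedged sketch: preview how Canny low/high thresholds change the edge map.
# Lower thresholds keep more (and noisier) lines; higher ones keep only strong edges.
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

for low, high in [(50, 100), (100, 200), (200, 300)]:
    edges = cv2.Canny(img, low, high)
    cv2.imwrite(f"canny_{low}_{high}.png", 255 - edges)  # invert: black lines on white paper
```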

  • @Aitrepreneur
    @Aitrepreneur  1 year ago +11

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +2

      Wow! This is not working for me at all! I get a barely recognizable blob even though the standard canny line art at the end is fine. So I switched to your DreamShaper model. No good. Then I gave it ACTUAL LINE ART and it still filled a bunch of the white areas in with black. I also removed negative prompts that might be making a problem. No good. Then all negs. No good. I'm either doing something wrong or there's some other variable that needs to be changed like clip skip or something else. If it's just me... ignore it. If you hear from others you might want to look into it.

    • @hugoruix_yt995
      @hugoruix_yt995 1 year ago

      @@thanksfernuthin It is working for me. Maybe try this LoRA with the prompt: /models/16014/anime-lineart-style (on civitai).
      Maybe it's a version issue or a negative prompt issue.

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +2

      @@hugoruix_yt995 Thanks friend.

    • @nottyverseOfficial
      @nottyverseOfficial 1 year ago

      Hey there, big fan of your videos. I got your channel recommendation from another YT channel and I thank him a thousand times that I came here. Love all your videos and the way you simplify things so they're easy to understand ❤❤❤

    • @MarkKaminari00
      @MarkKaminari00 1 year ago

      Hello humans? Lol

  • @DimsOfBeauty
    @DimsOfBeauty 1 year ago +1

    Love it! Can you show us how this would be used to convert lineart into a realistic image or painting? :)

  • @KolTregaskes
    @KolTregaskes 1 year ago

    Another amazing tip, thank you.

  • @thanksfernuthin
    @thanksfernuthin 1 year ago +6

    I need to test this of course but this might be another game changer. For someone with a little bit of artistic ability changing a line art image to what you want is A LOT easier than changing a photo. So I can do this, edit the line art and load it back into canny. Pretty cool.

  • @joywritr
    @joywritr 1 year ago

    Is keeping the denoising strength very low while inpainting with Only Masked the key to preventing it from trying to recreate the entire scene in the masked area? I've seen people keep it high and have that not happen, but it happens EVERY TIME I use a denoising strength more than .4 or so. Thanks in advance.
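    Not an authoritative answer, but both settings are exposed as plain fields on the img2img endpoint, so they are easy to A/B from a script. A hedged sketch of the relevant payload fields; the names follow common Automatic1111 API builds and may differ in yours:

```python
# Hedged sketch: the inpainting-related img2img fields to experiment with.
# Field names follow typical /sdapi/v1/img2img payloads; values are illustrative.
payload = {
    "init_images": ["<base64 image>"],
    "mask": "<base64 mask, white = area to repaint>",
    "denoising_strength": 0.4,       # lower = stays closer to the original pixels
    "inpaint_full_res": True,        # roughly the "Only masked" switch in the UI
    "inpaint_full_res_padding": 32,  # context pixels taken around the mask
    "inpainting_fill": 1,            # 1 = start from the original content
    "mask_blur": 4,
}
```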

  • @amj2048
    @amj2048 1 year ago

    so cool!, thanks for sharing!

  • @alexwsshin
    @alexwsshin 1 year ago +2

    Wow, it's amazing! But I have a question: my lineart isn't black, it's very light. Is there any way to make it black?

  • @coda514
    @coda514 1 year ago

    Amazing. Thank you. Sincerely, your loyal subject.

  • @swannschilling474
    @swannschilling474 1 year ago

    My god, this is crazy good!!!!!!!!!! 😱😱😱

  • @iseahosbourne9064
    @iseahosbourne9064 1 year ago

    Hey K, my AI overlord, how do you use OpenPose for objects? Like, say I wanted to generate a spoon but have it mid-air at 90°?
    Also, does it work for animals?

  • @Knittely
    @Knittely 1 year ago

    Hey Aitrepreneur,
    thanks for this vid! I recently read about TensorRT to speed up image generation, but couldn't find a good guide on how to use it. Would you be willing to make a tutorial for it? (Or other techniques to speed up image generation, if any.)

  • @IceTeaEdwin
    @IceTeaEdwin 1 year ago +32

    This is exactly what artists are going to be using to speed up their workflow. Get a draft line art and work their style from there. Infinite drafts for people who struggle with sketch ideas.

  • @segunda_parte
    @segunda_parte 1 year ago

    Awesome Awesome Awesome!!!!!!!!!!!!! You are the BOSS!!!

  • @nierinmath7678
    @nierinmath7678 1 year ago

    I like it. Your vids are great

  • @jpgamestudio
    @jpgamestudio 1 year ago

    WOW, great!

  • @flonixcorn
    @flonixcorn 1 year ago

    Great Video

  • @coulterjb22
    @coulterjb22 9 months ago

    Very helpful! I'm interested in creating vector art for my laser engraving business. This is the closest thing I've seen that helps. Anything else you might suggest?
    Thank you = subbed!

  • @MissChelle
    @MissChelle 1 year ago +1

    Wow, this is exactly what I've been trying to do for weeks! It looks so simple; however, I only have an iPad, so I need to do it in a web app. Any suggestions? ❤️🇦🇺

  • @Snafu2346
    @Snafu2346 1 year ago +2

    I .. I haven't learned the last 10 videos yet. I need a full time job just to learn all these Stable Diffusion features.

  • @CaritasGothKaraoke
    @CaritasGothKaraoke 1 year ago +9

    I am noticing you have a set seed. Is this the seed from the generated image before?
    If so, does that explain why this is much harder to get it to work well on existing images that were NOT generated in SD? Because I'm struggling to get something that doesn't look like a weird woodcut.

    • @MrMadmaggot
      @MrMadmaggot 1 year ago

      Dude, where did you download the DreamShaper model?

  • @ribertfranhanreagen9821

    Dang, using this with Illustrator will be a big time saver.

  • @paulsheriff
    @paulsheriff 1 year ago

    Would there be a way to batch video frames like this?

  • @stedocli6387
    @stedocli6387 1 year ago

    way supercool!

  • @r4nd0mth0_ghts5
    @r4nd0mth0_ghts5 1 year ago

    Is there any way to create one-line art using ControlNet? I hope the next version will be bundled with this feature.

  • @angelicafoster670
    @angelicafoster670 1 year ago

    Very cool. I'm trying to get a "one line art" drawing, do you happen to know how?

  • @nemanjapetrovic4761
    @nemanjapetrovic4761 1 year ago

    I still get some color in my image when I try to turn it into a sketch, is there a fix for that?

  • @Semi-Cyclops
    @Semi-Cyclops 1 year ago +2

    Man, ControlNet is awesome, I use it to colorize my drawings.

    • @Semi-Cyclops
      @Semi-Cyclops 1 year ago

      @Captain Reason I use the Canny model, it preserves the sketch, then I describe my character or scene. My sketch goes into ControlNet, and if I draw a rough sketch I add contrast. The Scribble model doesn't work well for me at least, it creates its own thing from the sketch.
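      A hedged Pillow sketch of that contrast-boost step, for anyone who wants to do it outside a paint program before the sketch reaches the Canny preprocessor (the enhancement factor is just a starting value):

```python
# Hedged sketch: punch up a rough pencil sketch so Canny picks up its lines.
from PIL import Image, ImageEnhance, ImageOps

sketch = Image.open("rough_sketch.png").convert("L")  # grayscale
sketch = ImageOps.autocontrast(sketch)                # stretch levels to the full range
sketch = ImageEnhance.Contrast(sketch).enhance(2.0)   # then boost contrast further
sketch.save("sketch_high_contrast.png")
```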

  • @Argentuza
    @Argentuza 1 year ago

    What graphics card are you using? Thanks.

  • @user-fn9dn1co5o
    @user-fn9dn1co5o 11 months ago

    hello I met an error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
    Is there any way to solve this? Thanks

  • @Mirko_ai
    @Mirko_ai 1 year ago

    Hey, I would like to use Runpod with your affiliate link.
    If I do seed traveling I have to wait about 1-3 hours on my laptop. That's long^^
    So one question: if I've found some good prompts with some good seeds,
    can I copy the prompts and seeds to Runpod once I'm happy with them and just do the seed travel there?
    Will I get the exact same images this way?

  • @jackmyntan
    @jackmyntan 1 year ago +4

    I think the models have changed, because I followed this video to the letter and all I get is very, very faint line drawings. I even took a screenshot of the example image used here and got exactly the same issue. There are more controls in the more recent iteration of ControlNet, but everything I try results in ultra-faint line images.

    • @Argentuza
      @Argentuza 1 year ago

      If you want to get the same results use the same model: dreamshaper_331BakedVae

    • @hildalei7881
      @hildalei7881 1 year ago

      I have the same problem. The line is not as clear as his.

  • @ratatattattat
    @ratatattattat 1 year ago

    I'm using Automatic1111 and installed ControlNet, but the Canny model isn't available, how come?

  • @copyright24
    @copyright24 1 year ago

    That looks amazing, but I have an issue: I recently installed ControlNet and in the folder I have the model control_v11p_sd15_lineart, but it's not showing in the model list?

    • @klawsklaws
      @klawsklaws 11 months ago

      I had the same issue; I downloaded the control_sd15_canny.pth file and put it in the models folder.

  • @desu38
    @desu38 1 year ago +2

    3:33 For that it's better to pick a "solid color" adjustment layer.

  • @PlainsAyu
    @PlainsAyu 1 year ago

    I don't have the Guidance Start setting, what is wrong with mine?

  • @cahitsarsn5607
    @cahitsarsn5607 1 year ago

    Can the opposite be done? Sketch / line art to image?

  • @loogatv
    @loogatv 1 year ago

    Thanks! I looked for a good way for hours and hours... and all I needed to do was a quick search on YouTube...

  • @fantastart2078
    @fantastart2078 1 year ago

    Can you tell me what I have to install to use this?

  • @NyaruSunako
    @NyaruSunako 1 year ago

    I swear, when I try these, mine just doesn't want to listen to me lol. Mine might be broken. Granted, I just use it to improve my workflow now; even what I have at the moment is still strong, just not to the level of this lol. Knowing this, I can make my line art even better by learning to use different brushes. This makes it even more fun and easier for me to test out different line art brushes. Always enjoyable to see new stuff evolving, just so fascinating.

  • @sumitdubey6478
    @sumitdubey6478 1 year ago +1

    I really love your work. Can you please make a video on "how to train a LoRA on Google Colab"? Some of us have cards with only 4 GB of VRAM. It would be really helpful.

  • @edwardwilliams2564
    @edwardwilliams2564 6 months ago

    Any idea how to do this in ComfyUI? Auto1111 is really slow.

  • @OnlineTabletopcom
    @OnlineTabletopcom 1 year ago +8

    Mine turns out quite light/grayish. The lines are also quite thin. Any tips?

    • @Argentuza
      @Argentuza 1 year ago +1

      Same here, there is no way I can obtain the same results! Why is this happening?

    • @archael18
      @archael18 4 months ago

      You can try using the same seed he does in his img2img tab or changing it to see which lineart style you prefer. Every seed will make a different lineart style.

  • @MrMadmaggot
    @MrMadmaggot 1 year ago

    Where did you get that Canny model?

  • @vishalchouhan07
    @vishalchouhan07 1 year ago

    Hey, I'm not able to achieve the quality of linework you achieve in this video. Is it a good idea to experiment with different models?

    • @Argentuza
      @Argentuza 1 year ago +1

      If you want to get the same results use the same model: dreamshaper_331BakedVae

  • @dinchigo
    @dinchigo 1 year ago

    Can anyone assist me? I've installed Stable Diffusion but it gives me RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`. Not sure what to do, as my PC meets the necessary requirements.

  • @sestep09
    @sestep09 1 year ago +2

    Can't get this to work, it just results in a changed, still-colored image. I followed it step by step and have triple-checked my settings; I've only got it to work with one image, no others. They all just end up being changed images from the high denoising, and still colored.

  • @mattmustarde5582
    @mattmustarde5582 1 year ago +5

    Any way to boost the contrast of the linework itself? I'm getting good details but the lines are near-white or very pale gray. Tried adding "high contrast" to my prompt but not much improvement.

    • @bustedd66
      @bustedd66 1 year ago

      I am getting the same thing.

    • @bustedd66
      @bustedd66 1 year ago

      Raise the denoising strength, I missed that step :)

    • @zendao7967
      @zendao7967 1 year ago +3

      There's always photoshop.

    • @randomscandinavian6094
      @randomscandinavian6094 1 year ago +2

      The model you use seems to affect the outcome. I haven't tried the one he is using. And of course the input image you choose. Luck may be a factor as well. All of my attempts so far have looked absolutely horrible and nothing like the example here. Fun technique but nothing that I could use for anything if the results are going to look this bad. Anyway, it was interesting but now on to something else.

    • @TheMediaMachine
      @TheMediaMachine 1 year ago

      I just save it, bang it into Photoshop, and use adjustment layers, i.e. contrast and curves. Until I get good with Stable Diffusion I'm doing this for now. For colored lines, try a color adjustment layer, then set the layer's blend mode to Screen.
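      For anyone without Photoshop, roughly the same curves fix can be scripted. A hedged Pillow sketch that pushes pale gray lines toward black while leaving the near-white paper alone; the 230 cutoff and the 0.4 factor are assumptions to tune per image:

```python
# Hedged sketch: darken pale gray linework, keep near-white pixels white.
from PIL import Image

art = Image.open("pale_lineart.png").convert("L")

def curve(value):
    if value >= 230:                  # leave the paper alone
        return 255
    return max(0, int(value * 0.4))   # push grays toward black

art.point(curve).save("dark_lineart.png")
```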

  • @theStarterPlan
    @theStarterPlan 9 months ago

    When I do it, I just get an error message (with no generated image) saying: AttributeError: 'ControlNet' object has no attribute 'label_emb'. Does anybody have any idea what I could be doing wrong? Please help!

  • @grillodon
    @grillodon 1 year ago

    It's all OK until the inpaint procedure. When I click Generate after all the settings and black paint on the face, the webui tells me: ValueError: Coordinate 'right' is less than 'left'

    • @grillodon
      @grillodon 1 year ago

      Solved. It was Firefox. But the inpaint "new detail" only works if I select Whole Picture.

  • @St.MichaelsXMother
    @St.MichaelsXMother 1 year ago

    How do I get ControlNet? Or is it a website?

  • @vi6ddarkking
    @vi6ddarkking 1 year ago +3

    So Instant Manga Panels? Nice!

  • @proyectorealidad9904
    @proyectorealidad9904 1 year ago

    How can I do this in batch?

  • @MaxKrovenOfficial
    @MaxKrovenOfficial 1 year ago

    In theory, could we use this same method, with slight variations, to get full-color characters on white backgrounds, so we can then delete said background in Photoshop and end up with characters on transparent backgrounds?
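    That should work; here is a hedged Pillow sketch of the delete-the-white-background step for anyone without Photoshop. The 240 threshold is an assumption to tune per image, and note that it also knocks out white areas inside the character, which is where a lineart-based mask (see the comment about background erasing further down) can do better:

```python
# Hedged sketch: make near-white background pixels transparent.
from PIL import Image

img = Image.open("character_on_white.png").convert("RGBA")

out = [
    (r, g, b, 0) if r > 240 and g > 240 and b > 240 else (r, g, b, a)
    for (r, g, b, a) in img.getdata()
]
img.putdata(out)
img.save("character_transparent.png")
```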

  • @user-vo7rv1mv3m
    @user-vo7rv1mv3m 3 months ago

    I followed the same settings as the video tutorial; the checkpoint and ControlNet are set up, but the image I generated is still white, with no black and white lines.

  • @Bra2ha1
    @Bra2ha1 1 year ago

    Where can I get this canny model?

  • @user-ly7to7yt5d
    @user-ly7to7yt5d 6 months ago +1

    What's this program called?!

  • @kushis4ever
    @kushis4ever 1 year ago +1

    Hi, I replicated the steps on an image, but it came out with blurred lines, like brush marks, with no distinguishable outline. BTW, it took nearly 4-5 minutes to generate on a MacBook Pro (i9, 32 GB RAM).

  • @kamransayah
    @kamransayah 1 year ago

    Hey K, what happened? Did they delete your video again?

  • @brandonvanderheat
    @brandonvanderheat 1 year ago +1

    Haven't tried this yet but this might make it easier to cut (some) images from their background. Convert original image to line-art. Put both the original image and line art into photoshop (or equivalent) and use the magic background eraser to delete the background from the line art layer. Select layer pixels and invert selection. Swap to the layer with the original color image, add feather, and delete.

  • @edmatrariel
    @edmatrariel 1 year ago +1

    Is the reverse possible? line art to painting?

    • @kevinscales
      @kevinscales 1 year ago

      Sure, just put the line art into ControlNet and use Canny (txt2img), write a prompt, etc.
      Wait, does this make colorizing manga really easy? I never thought of that before.
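      A hedged sketch of that direction (lineart in, colored image out) through the Automatic1111 API, along the lines the reply describes: txt2img plus a ControlNet Canny unit fed with the lineart. The prompt is a made-up example and, as elsewhere, the unit keys and model name depend on your extension version:

```python
# Hedged sketch: txt2img guided by existing lineart via a ControlNet canny unit.
# Assumes Automatic1111 with --api and the ControlNet extension installed.
import base64, requests

lineart = base64.b64encode(open("lineart.png", "rb").read()).decode()

payload = {
    "prompt": "full color illustration, detailed, vibrant",   # example prompt only
    "negative_prompt": "monochrome, grayscale",
    "steps": 25,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": lineart,          # may be named "image" in newer versions
                "module": "canny",
                "model": "control_sd15_canny",   # use the exact name from your dropdown
                "weight": 1.0,
            }]
        }
    },
}

out = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload).json()
with open("colorized.png", "wb") as f:
    f.write(base64.b64decode(out["images"][0]))
```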

  • @theStarterPlan
    @theStarterPlan 9 months ago

    What does the seed value say?

  • @tails8806
    @tails8806 1 year ago

    I only get a black image from the canny model... any ideas?

  • @welovesummernz
    @welovesummernz 1 year ago +1

    The title says any image; how can I apply this style to one of my own photos? Please

    • @OliNorwell
      @OliNorwell 1 year ago

      Yeah exactly, I tried with one of my own photos and it wasn't as good

  • @solomonkok1539
    @solomonkok1539 1 year ago

    Which app?

  • @andu896
    @andu896 1 year ago +1

    I followed this tutorial to the letter, but all I get is random lines, which I assume is related to Denoise Strength being so high. Can you try with a different model and see if this still works? Anybody got it to work?

    • @Argentuza
      @Argentuza 1 year ago

      If you want to get the same results use the same model: dreamshaper_331BakedVae

    • @sudhan129
      @sudhan129 11 months ago

      @@Argentuza Hi, I found the only link about dreamshaper_331BakedVae. It's on Hugging Face, but it doesn't seem to be a downloadable file. Where can I find a usable dreamshaper_331BakedVae file?

  • @maedeer5190
    @maedeer5190 1 year ago +1

    I keep getting a completely different image, can someone help me?

  • @vaneaph
    @vaneaph 1 year ago +2

    This is way more effective than anything I have tried with Photoshop.

    • @krystiankrysti1396
      @krystiankrysti1396 1 year ago

      Which means you didn't try it, because it's not as good as the video makes it out to be; he cherry-picked the example image.

    • @vaneaph
      @vaneaph 1 year ago +1

      @@krystiankrysti1396 Not sure what your point is here, but AI does not mean magic! You still need to edit the picture in Photoshop to ENHANCE the result to your liking.
      Using ControlNet indeed saves me a hell of a lot of time.
      (Don't forget, the burgers in the pictures NEVER look like what you really ordered!)

    • @krystiankrysti1396
      @krystiankrysti1396 1 year ago

      @@vaneaph Well, I got it working better with HED than Canny. It's just that if I were making new feature vids, I'd pre-make a couple of examples to show more than one, so people can also see the failure cases.

  • @kiillabytez
    @kiillabytez 1 year ago

    So, it requires a WHITE background?
    I guess using it for comic book art is a little more involved, or is it?

  • @goldenshark6272
    @goldenshark6272 10 months ago

    Please, how do I download ControlNet?!

  • @ssj3mohan
    @ssj3mohan 1 year ago +2

    Not Working for me.

  • @global_ganesh
    @global_ganesh 5 days ago

    Which website?

  • @hildalei7881
    @hildalei7881 1 year ago

    It looks great, but I followed your steps and it doesn't work anymore... Maybe it's because of the different versions of the webUI and ControlNet.

  • @bustedd66
    @bustedd66 1 year ago +1

    I tried using it on images I had already created and they came out not so great. Does it only work with the same seed? Is that why you have to create the image first and then send it to img2img?

    • @kylehessling2679
      @kylehessling2679 1 year ago +1

      This is a great observation, I think you might be right because I'm having a rough go with real photography

    • @bobbyboe
      @bobbyboe 1 year ago

      Interesting idea... I thought it was because he is using a different model than I do... In my case it also doesn't turn out well; there are no wider lines like in the video and it looks bad.

    • @bobbyboe
      @bobbyboe 1 year ago +1

      I tried it with results from txt2img... and the same seed, it does not work. I'm downloading the model he used now... to check that out.

    • @bustedd66
      @bustedd66 1 year ago +1

      @@bobbyboe please let us know.

    • @bobbyboe
      @bobbyboe 1 year ago +1

      Update: I used his model and it is still not good, only thin, ugly outlines. I updated all extensions and used the negative prompt like he did (a user further down in the comments has achieved better results using it; I didn't). I used rendered photorealistic figures... I still wonder if there is some deeper reason for only using a picture generated in txt2img and transferring that to img2img... which would make the whole thing useless for me.

  • @aliuzun2220
    @aliuzun2220 1 year ago

    It'll keep getting better and better.

  • @diegomaldonado7491
    @diegomaldonado7491 1 year ago

    Where is the link to this AI tool??

  • @tetsuooshima832
    @tetsuooshima832 1 year ago +2

    I found the first step unnecessary. What's the point of sending it to img2img if you delete the whole prompt later on? Just start from img2img directly, then tweak any gen you have, or any pic really.

    • @TheDocPixel
      @TheDocPixel 11 months ago

      Don't forget that it's good to have the seed

    • @tetsuooshima832
      @tetsuooshima832 11 months ago +1

      @@TheDocPixel I think the seed becomes irrelevant with a denoising strength of 0.95. Besides, if your source is AI-generated then the seed is in the metadata; if it's an image from somewhere else there's no meta = no seed. So I don't get your point here.
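      On the seed-is-in-the-metadata point: Automatic1111 writes the generation parameters, seed included, into a PNG text chunk that Pillow can read back. A hedged sketch; the "parameters" key is what recent webui builds use, and re-saved or non-A1111 images will not have it:

```python
# Hedged sketch: read the generation settings A1111 embeds in its PNG output.
from PIL import Image

img = Image.open("00042-1234567890.png")   # hypothetical A1111-style filename
params = img.info.get("parameters")         # prompt, negative prompt, seed, sampler, ...
if params:
    print(params)
else:
    print("No embedded parameters (not an A1111 PNG, or the metadata was stripped).")
```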

  • @OtakuDoctor
    @OtakuDoctor 1 year ago

    I wonder why I only have one CFG scale, not start and end like you; my ControlNet should be up to date.
    Edit: nvm, it needed an update.

  • @serjaoberranteiro4914
    @serjaoberranteiro4914 1 year ago +2

    It doesn't work, I got a totally different result.

  • @TheAiConqueror
    @TheAiConqueror 1 year ago

    💪💪💪

  • @Niiwastaken
    @Niiwastaken 13 days ago +1

    It just seems to make a white image. I've triple-checked that I got every step right :/

  • @sojoba3521
    @sojoba3521 1 year ago

    Hi, do you do personal tutoring? I'd like to pay you for a private session.

  • @apt13tpa
    @apt13tpa 1 year ago

    I don't know why, but this isn't working for me at all.

  • @LouisGedo
    @LouisGedo 1 year ago

    Hi

  • @RealitySlipTV
    @RealitySlipTV 1 year ago +2

    And here I trained 2 embeddings all night long to do the same thing...

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      Ah.. well😅 sorry

    • @RealitySlipTV
      @RealitySlipTV 1 year ago

      @@Aitrepreneur no no, this will be excellent! Right after I get done with this Patrick Bateman scene...

    • @RealitySlipTV
      @RealitySlipTV 1 year ago

      @@Aitrepreneur Just tried to do this and I don't have a Guidance Start slider, only weight and strength.

  • @davidbecker4206
    @davidbecker4206 1 year ago +2

    Tattoo artists... Ohh I hate AI art! ... oh wait this fits into my workflow quite well.

  • @UnderstandingCode
    @UnderstandingCode 1 year ago

    a line drawing lineart linework

  • @dylangrove3214
    @dylangrove3214 1 year ago

    Has anyone tried this on a building/architecture photo?

  • @ZeroXjokerXofXtime
    @ZeroXjokerXofXtime 1 year ago

    First

  • @MonologueMusicals
    @MonologueMusicals 1 year ago +1

    Ain't working for me, chief.
    Edit: I figured it out, the denoising is key.

  • @takezosensei
    @takezosensei 1 year ago +9

    As a lineart artist, I am deeply saddened...

  • @user-dn1dv8ui9p
    @user-dn1dv8ui9p 7 months ago

    Mine always looks like grayscale, or a fluffy model with shading. It's never lineart.