How to make an AI Instagram Model Girl on ComfyUI (AI Consistent Character)

  • Published 25 Jun 2024
  • 🌟 Visit for the latest AI digital model workflows: www.aiconomist.cc
    How to make AI Instagram Model Girl on ComfyUI (AI Consistent Character)
    🔥 New method for AI digital model • The "Secret Sauce" to ... 🔥
    Learn how to make an AI model girl using Stable Diffusion ComfyUI. In this tutorial you'll learn how to create an Aitana AI model lookalike. You'll have full control over your model: a consistent face, clothing, and environment for your AI Instagram model.
    🌟 All Useful Links & Workflow Visit: aiconomist.cc/ai-model-workflow
    #aigirl #aimodel #stablediffusion
    🖥️ my PC setup:
    GPU - amzn.to/3PAbjP6
    CPU - amzn.to/4cngN9A
    RAM - amzn.to/494I5ig
    SSD Storage - amzn.to/3x739r4
    Prebuilt PC for Generative AI - amzn.to/3TQDTOQ
    For Business: info@aiconomist.cc
  • Science & Technology

Comments • 243

  • @Aiconomist
    @Aiconomist  1 month ago

    🔥 New method for AI digital model czcams.com/video/nVaHinkGnDA/video.html 🔥

    • @gingercholo
      @gingercholo 1 month ago

      maybe it'll look decent this time

  • @BobDoyleMedia
    @BobDoyleMedia 6 months ago +19

    This was excellent. Thank you for showing people there are other and better options than the Midjourney face swap method which of course limits your flexibility tremendously. I'm actually in the process of doing this exact thing and was building my ideal workflow, but what you did with the outfit and masking is really fantastic. Very helpful.

  • @rishabjain6076
    @rishabjain6076 5 months ago +1

    Perfect video, the only consistent and detailed video I was looking for. This is a gem, thank you so much.

  • @r2Facts
    @r2Facts 3 months ago

    This was absolutely AMAZING! I've watched countless videos and spent a good amount, but nothing as detailed and as great as this. Definitely subscribing to this channel.

  • @terrorcuda1832
    @terrorcuda1832 6 months ago +8

    Fantastic video. Simple, straightforward and well explained.

  • @Thomas_Leo
    @Thomas_Leo 4 months ago +1

    Great video. I find the prompt "staring directly into the camera" works well for portrait shots, or just using "portrait". I'm also glad you're using low-quality prompts instead of high-quality, cinematic photography. Most average phone users don't have access to high-quality cameras. 😁

  • @spiritform111
    @spiritform111 3 months ago

    great tutorial... very easy to follow. thank you!

  • @NicholasLaDieu
    @NicholasLaDieu 5 months ago

    this is wild! Thanks.

  • @mysticmango-fl3ej
    @mysticmango-fl3ej 3 months ago

    The psychology and systems that go into making that much money are actually really incredible.

  • @alexmattheis
    @alexmattheis 6 months ago

    Love it! 🤠

  • @Blakerblass
    @Blakerblass 6 months ago +3

    Excellent video, brother, quite interesting.

  • @AnjarMoslem
    @AnjarMoslem 25 days ago

    thanks for making this video, I just bought your workflow

  • @temporallabsol9531
    @temporallabsol9531 6 months ago

    This is great stuff.

  • @mikesalomon2695
    @mikesalomon2695 5 months ago

    Wow, it seems very difficult but great at the same time. I will try it tomorrow.

  • @ejro3063
    @ejro3063 6 months ago +88

    There's nothing comfy about ComfyUI

    • @xviovx
      @xviovx 6 months ago +1

      This 🤣🤣

    • @EpochEmerge
      @EpochEmerge 6 months ago

      @xviovx then you should do it manually on A1111 to get an idea of why it's called comfy

    • @AdrianArgentina-nd7rg
      @AdrianArgentina-nd7rg 6 months ago +1

      Agree😂

    • @FudduSawal
      @FudduSawal 6 months ago

      The flexibility it gives us over other tools is worth it

    • @otherrings2887
      @otherrings2887 6 months ago

      😂😂😂

  • @elleelle6351
    @elleelle6351 5 months ago

    God bless you! Tysm

  • @DarioToledo
    @DarioToledo 6 months ago +5

    I wish I'd watched your video sooner. Well, if only you'd posted it earlier 😂😂 Just today I came to a similar solution to achieve this. And now I still have a question: if you want to switch to a full-body figure, or want a profile or rear view of your model, do the masks and the images going into the IPAdapters remain unchanged? Or do you have to switch the IPAdapter's image and mask accordingly?

  • @Specialfx999
    @Specialfx999 6 months ago +2

    Amazing content. Thanks for sharing.
    Is it possible to place her in a specific environment by providing a reference image of the environment?

  • @crow-mag4827
    @crow-mag4827 6 months ago +1

    Excellent video!

  • @fabioespositovlog
    @fabioespositovlog 6 months ago +2

    Great video, one question: why does the whole process stop working with a matrix size mismatch if I change the initial checkpoint, whether I keep or remove the LoRA and connect the model directly to the IPAdapter?

  • @Benny-or7fl
    @Benny-or7fl 5 months ago

    Amazing! Any recommendations on how to optimize it for SDXL? For a reason I can't explain, if I update all the models I'm getting worse results with SDXL compared to 1.5… 🤔

  • @ewzxyhh6180
    @ewzxyhh6180 5 months ago +2

    It worked, thanks. Where do I find the OpenPose images to download?

  • @DYYGITAL
    @DYYGITAL 5 months ago +7

    where do you get all the images for clothing and poses from?

  • @RichardMJr
    @RichardMJr 5 months ago +3

    Hey, thanks for the video! I cannot find the Ultimate SD Upscaler. Has it been removed? If so, is there something else you would suggest in its place?

  • @myemail1668
    @myemail1668 3 months ago

    Perfect tutorial, thank you.

  • @user-ef4df8xp8p
    @user-ef4df8xp8p 4 months ago

    Awesome......

  • @favrumo.design
    @favrumo.design 2 months ago

    You're great!

  • @Aiconomist
    @Aiconomist  1 month ago +4

    Hey everyone! 😊
    I'm planning to create a comprehensive course on creating a virtual influencer from scratch and growing an Instagram account. It'll be a long and detailed course, so I'm thinking of making it a paid course, but at a reasonable price. What do you think? Would you be interested in something like that?

  • @ehteshamdanish000
    @ehteshamdanish000 5 months ago

    You have shared everything, thank you for that. If you could also share the OpenPose image and the character dress image, it would be appreciated.

  • @Anynak69
    @Anynak69 5 months ago +1

    Cool, but what about a different perspective view of the face? It seems like the face always keeps the same perspective regardless of OpenPose settings. Is there any way to fix this?

  • @JpresValknut
    @JpresValknut 6 months ago

    How can I use multiple prompts at the same time? Say, put 5 into the queue and then move on to the next one? The same thing that can be done in the default Stable Diffusion UI using the text file or the textbox at the very bottom?

  • @truck.-kun.
    @truck.-kun. 5 months ago

    This is a good tutorial. "Face swap" might be a more fitting title in general.

  • @shinkotsu6559
    @shinkotsu6559 5 months ago +5

    Load CLIP Vision model: what model do I load? Where do I find this model.safetensors?

  • @lilillllii246
    @lilillllii246 5 months ago

    I'm always thankful. Rather than first creating a female model from a prompt, is it possible to import a photo of a female model instead?

  • @EvanKrom
    @EvanKrom 4 months ago

    Cool video!
    My KSampler is blocking (stuck at 0% over 25 steps) after the IPAdapter; I don't know why, but maybe it's related to the AMD graphics card I am using?
    Image generation and upscaling were working...

  • @thedawncourt
    @thedawncourt 3 months ago +2

    IPAdapterApply node fails every time I try to use the workflow. I'm a noob help please :(

  • @breathandrelax4367
    @breathandrelax4367 6 months ago

    Hi, thanks for your content!
    Great stuff and effort.
    I have a little request: would it be possible to make a video exploring AI models with multi-area conditioning?
    It would be a great asset.
    Thanks and regards!

  • @ilyesdz6360
    @ilyesdz6360 6 months ago

    Thank you for your content. Please make a video on how to use Stable Diffusion to create designs for Merch by Amazon.

  • @jonnym4670
    @jonnym4670 5 months ago

    Any idea what kind of graphics card you can pull this off with?
    I have an RX 6700 XT and an old laptop with a GTX 1050, so it's clear I will need to upgrade, but what's the minimum card so I don't have to spend a lot?

  • @ryutaroosafune8756
    @ryutaroosafune8756 6 months ago +1

    Thanks for the great tutorial and the sample JSON files! I was able to do almost the same thing using the sample JSON, but for some reason the face is broken and doesn't come out as beautiful as in your tutorial. For Automatic1111, ADetailer can be used to beautifully redraw faces as well, but currently ADetailer is not available for ComfyUI. Is there something I should do?

    • @toonleap
      @toonleap 5 months ago +2

      There is a plugin called FaceDetailer, but of course it needs more nodes and connections, making the workflow more complicated.

  • @AmeliaIsabella_x
    @AmeliaIsabella_x 2 months ago

    How can you increase the accuracy of retaining the face? Although the differences are quite subtle, the face was noticeably different in each generation, which a follower would notice, as I did just in this video. Thanks!

  • @TalZac
    @TalZac 6 months ago

    Please make a video about the posing and where to get the poses, and how you make the outfits.

  • @johnjd9640
    @johnjd9640 5 months ago

    Wow, this is nice. I wish there were an easier way to do this; it's too complicated for me :(

  • @SH-lh9ow
    @SH-lh9ow 8 days ago

    Thanks for this video! Amazing! No matter what I try, I don't get the option to add the Apply IPAdapter node. What am I missing? Would be thankful for any help!

  • @Mauriziotarricone
    @Mauriziotarricone 5 months ago

    I have an issue: the Load CLIP Vision node doesn't load the safetensors file.

  • @user-nd7hk6vp6q
    @user-nd7hk6vp6q 3 months ago +1

    What if I just wanted to change the pose alone, with no change of clothes or anything else? How do I go about that, please?

  • @CostinVladimir
    @CostinVladimir 5 months ago

    I am going to ignore that you took MKBHD's voice and thank you for the tutorial :P

  • @ChristianKozyge
    @ChristianKozyge 6 months ago +4

    What's your CLIP Vision model?

  • @Lenovicc
    @Lenovicc 2 months ago

    Where can I download the model for clip vision?

  • @maxdeniel
    @maxdeniel 22 days ago

    Hi friend, a couple of questions here:
    1) How do I get the KSampler with the image preview? I just have the normal one with no image preview.
    2) I searched for the "Ultimate SD Upscaler" but did not find it. Is it something I have to install? If so, where can I download it from?
    3) Now, I did find the Image Upscale Loader node, but did not find the 4x_foolhardy.pth option. Is that something I have to download from somewhere else? In which folder do I drop it so it appears on the node as an option next time?
    Generally speaking, there are some tools and features you are using in your video that we don't know where they came from. Another option is to buy your workflow, which is not a problem because you are doing an amazing job and we can support you that way; however, if I purchase the workflow, I will have the same problem because of the missing tools, and in the end the workflow won't work as expected.
    I will do some research on how to get those tools and then come back to this video. Thanks bro!
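
Editor's note on questions 2) and 3) above: Ultimate SD Upscale is a custom node rather than part of base ComfyUI, so it usually has to be installed separately (for example through ComfyUI Manager) before it shows up in the node search, and upscaler weights such as 4x ESRGAN .pth files normally go into ComfyUI's models/upscale_models folder. The snippet below is a minimal sketch under those assumptions; COMFYUI_DIR is a hypothetical install path, not something taken from the video.

```python
# Sketch only (assumed default ComfyUI folder layout, not from the video):
# check that the standard model folders exist so downloaded files have a clear home.
from pathlib import Path

COMFYUI_DIR = Path.home() / "ComfyUI"   # hypothetical install location; adjust to yours

expected = {
    "checkpoints":    COMFYUI_DIR / "models" / "checkpoints",
    "loras":          COMFYUI_DIR / "models" / "loras",
    "controlnet":     COMFYUI_DIR / "models" / "controlnet",
    "clip_vision":    COMFYUI_DIR / "models" / "clip_vision",
    "upscale_models": COMFYUI_DIR / "models" / "upscale_models",  # 4x ESRGAN .pth files go here
}

for name, path in expected.items():
    print(f"{name:<15} {'OK' if path.is_dir() else 'missing':<8} {path}")
```

After dropping a .pth into upscale_models, restart ComfyUI (or refresh the page) so the loader node's dropdown picks it up.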

  • @Stellalife706
    @Stellalife706 5 months ago

    If I get a MacBook M3 Pro, can I easily install ComfyUI and work with it?

  • @davidsik2402
    @davidsik2402 4 months ago

    Hey, I have a question: how can I do all of this if I already have a generated model in Stable Diffusion?

  • @Crysteps
    @Crysteps 6 months ago +6

    Hey, thanks for the video, but where do you download all the ControlNets from, how do you install them into ComfyUI, and where did you get the CLIP Vision model from?

  • @TalZac
    @TalZac 6 months ago +1

    New to this, why not combine with faceswap?

  • @otaviokina22
    @otaviokina22 2 months ago

    My model is coming out with two heads all the time; do you know how I can solve it? I've tried several negative prompts but it doesn't help.

  • @zr_xirconio__3577
    @zr_xirconio__3577 5 months ago +6

    Hey, nice tutorial, really well explained in detail. I am getting an error when running the KSampler and I was wondering if you could help me with that:
    "It seems that models and clips are mixed and interconnected between SDXL Base, SDXL Refiner, SD1.x, and SD2.x. Please verify."
    One thing that is different between my workflow and yours is that in the CLIP Vision loader I am using "clip_vision_vit_h.safetensors" instead of "model.safetensors", because I couldn't find the file you are using on the web. Any chance you could post a link to that file or help me resolve this error?
    Thanks in advance

  • @povang
    @povang 5 months ago +2

    Still using A1111, I love the simplicity.

    • @pavi013
      @pavi013 1 month ago

      You can do more with comfy.

  • @stasatanasov4263
    @stasatanasov4263 1 month ago

    I am trying to find a course on creating a virtual influencer, so if you can tell me when you make it, that would be great!

  • @lilillllii246
    @lilillllii246 5 months ago

    When I use the same photo of clothes, very similar characters appear, but when the image of the clothes changes, a completely different character appears. What should I fix?

  • @sculptrise
    @sculptrise 6 months ago

    Can you do an advanced tutorial for Automatic1111?

  • @Draig1999
    @Draig1999 6 months ago

    Can you please do the same for A1111. PLEASE!!

  • @szachgr43
    @szachgr43 6 months ago

    Does that work on MacBook machines?

  • @AnjarMoslem
    @AnjarMoslem 24 days ago

    Where should I put the IPAdapter Plus models? I put them in "custom_modules->ComfyUI_IPAdapter_plus\models" but it didn't detect the model.

  • @guillemgonzalosilveira2277
    @guillemgonzalosilveira2277 5 months ago

    How do you get the KSampler and upscaler progress to show?
    I'm searching for it and I can't find it.
    PLEASE!

  • @Rodinrodario
    @Rodinrodario 5 months ago

    Can you help me? Why is the face sometimes ugly and sometimes perfect, depending on which pose I use?

  • @VaibhavShewale
    @VaibhavShewale 5 months ago +1

    So what is the minimum system requirement?

  • @caluxkonde
    @caluxkonde 1 month ago

    How do I keep up to 3 or more characters consistent, changing just the style with the prompt?

  • @ehteshamdanish000
    @ehteshamdanish000 5 months ago

    So I tried it and everything works. But the next time I open ComfyUI the character's face changes. How do I fix that? Can you make a video on this?

  • @rangorts
    @rangorts 6 months ago

    Please do a tutorial for Automatic1111.

  • @rezasaremi4090
    @rezasaremi4090 5 months ago

    How can I make a fashion model?! I mean the same person with different outfits that I choose. Thanks

  • @MrPupone27
    @MrPupone27 2 months ago

    It looks so complicated to start. Is there an easier way to learn, with all those arrows connected?

  • @artisans8521
    @artisans8521 5 months ago +1

    Nothing in ComfyUI is beginner friendly. There is pain, there is frustration, there is lost sleep, and then more pain. I hope Lawrence won't take this unkindly, but these arrows looked rather painful 😂.

  • @Rodinrodario
    @Rodinrodario 5 months ago +3

    Where did you get the IPAdapter, how did you configure and install it, which CLIP Vision model did you use, and where did you get it? Where did you get the OpenPose image? I tried to find it all by myself, but the end result is that my face swapping looks like dogshit. Can you help?

    • @ramondiaz5796
      @ramondiaz5796 3 months ago

      I was the same when I saw the video, frustrated because it doesn't explain many details. But I was able to make it work on my own: I took the time to look for everything you mention separately, watched videos, and got what was missing.

  • @prashanthravichandhran5688
    @prashanthravichandhran5688 5 months ago +1

    How do I add custom clothing from my own brand?

  • @alexalex9511
    @alexalex9511 5 months ago +8

    Hey! Thank you for the video. Can you advise on one problem? I used your workflow, but I get this error: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]). I don't understand where it comes from.

    • @gerardoperez6787
      @gerardoperez6787 5 months ago

      Were you able to solve it? I have the same issue and don't understand how to proceed.

    • @tiggy4591
      @tiggy4591 5 months ago +10

      So for that I had to get the correct CLIP Vision model.
      I don't know if it will let me post a link in the comments, so here is how to find it:
      Go to the "All Useful Links & Workflow" link in his description.
      Go to "IPAdapter plus Models HuggingFace Link".
      Go to the main directory, then the "models" folder, then the "image_encoder" folder, and download "model.safetensors".
      From there, put the model you downloaded into your ComfyUI clip_vision model folder (see the folder sketch after this thread).
      That same place also has alternative IPAdapter models.

    • @sunnyandharia907
      @sunnyandharia907 5 months ago

      @tiggy4591 thank you very much, it worked like a charm

    • @tiggy4591
      @tiggy4591 5 months ago

      @sunnyandharia907 Awesome, I struggled with it a bit last night. I'm glad it helped.

    • @xReLoaDKryPoKz
      @xReLoaDKryPoKz 5 months ago

      @tiggy4591 I love you! You are the GOAT
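
A minimal sketch following @tiggy4591's steps above, for anyone unsure where the downloaded file goes: it copies the image-encoder weights into the clip_vision folder that a default ComfyUI install scans. The paths and the renamed filename are illustrative assumptions, not taken from the video; adjust them to your own setup.

```python
# Sketch only: place the downloaded CLIP Vision weights where ComfyUI's
# "Load CLIP Vision" node looks for them (assumed default folder layout).
import shutil
from pathlib import Path

COMFYUI_DIR = Path.home() / "ComfyUI"   # hypothetical install location
DOWNLOADS = Path.home() / "Downloads"   # wherever model.safetensors was saved

src = DOWNLOADS / "model.safetensors"
dst_dir = COMFYUI_DIR / "models" / "clip_vision"
dst_dir.mkdir(parents=True, exist_ok=True)

# Rename on copy so the file is easy to recognize in the node's dropdown.
dst = dst_dir / "ip_adapter_image_encoder_sd15.safetensors"
shutil.copy2(src, dst)
print(f"Copied to {dst}; restart ComfyUI or refresh so the loader lists it.")
```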

  • @DarioToledo
    @DarioToledo 6 months ago +2

    And another question: what's the purpose of setting the denoise value to 0.75 in the KSampler at 7:45 with an empty latent?

    • @JustFeral
      @JustFeral 6 months ago

      There is none, because an empty latent image is just pure noise. He likely meant to convert an image into latent space or something.

    • @ehsanrt
      @ehsanrt 6 months ago

      Partially right. However, I think the latent isn't totally empty: there is OpenPose, and each generation after the first gets seeds and CLIP inputs from the last one. Less denoise = fewer changes (see the sketch after this thread).

    • @DarioToledo
      @DarioToledo 6 months ago

      @JustFeral indeed, this must be related to the IPAdapter in some way, or I can't see the point, as he's starting from an empty latent. What's the point of partially denoising the noise?
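
To make the denoise discussion above concrete: the usual convention is that denoise < 1.0 skips the earliest, noisiest portion of the schedule, so the sampler only partially reworks whatever latent it is given. The sketch below illustrates that idea only; it is not ComfyUI's actual KSampler code.

```python
# Simplified illustration of the common "denoise" convention (not ComfyUI's code):
# denoise=1.0 runs the whole schedule; lower values skip the earliest steps and
# therefore preserve more of whatever structure the input latent already has.
def sampled_steps(total_steps: int, denoise: float) -> int:
    """Number of schedule steps actually run for a given denoise setting."""
    skipped = round(total_steps * (1.0 - denoise))  # high-noise steps skipped
    return total_steps - skipped

print(sampled_steps(25, 1.00))  # 25 -> full generation from pure noise
print(sampled_steps(25, 0.75))  # 19 -> the first ~25% of the schedule is skipped
```

On a truly empty latent this mostly just leaves the sampler fewer steps to converge (in line with @JustFeral's point); it only becomes useful once the latent already carries structure, for example from a previous pass.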

  • @chiptaylor1124
    @chiptaylor1124 6 months ago

    Did anyone happen to figure out what CLIP Vision model was used?⁉

  • @avidlearner8117
    @avidlearner8117 5 months ago

    Maybe you address this in another video, but to me, the faces were vastly different from one another, evolving on every pass and getting away from the original one (before adding the SMILE in the prompt).

  • @conxrl
    @conxrl 5 months ago

    Can't get my character's eyes to look normal :( any tips?

  • @elleelle6351
    @elleelle6351 5 months ago +3

    I have a question: if we have a clothing or jewelry brand deal, how can we make the model wear that product?

  • @user-yb5es8qm3k
    @user-yb5es8qm3k 5 months ago

    Very good, but the generated face doesn't look much like the reference, and the clothes don't carry over exactly; they come out somewhat similar but still very different. What is a good way to fix this?

  • @parthsit
    @parthsit 2 months ago

    "InsightFace must be provided for FaceID models." Anyone getting this error?

  • @DeeprajChanda-vt7kc
    @DeeprajChanda-vt7kc 2 months ago +1

    The IPAdapter Apply node is not found; I can't figure out how to fix this. Any solutions?

    • @gammingtoch259
      @gammingtoch259 1 month ago

      I have the same problem; I tried using others but they don't work.

  • @brandonyork9924
    @brandonyork9924 6 months ago

    Why won't it show the preview image on my screen?

  • @techvishnuyt
    @techvishnuyt 2 months ago

    Please do one for Automatic1111.

  • @ramondiaz5796
    @ramondiaz5796 4 months ago

    Good video, but some things were left unexplained, for example installing the models and which folder to install them in.

  • @Halil-fi3pq
    @Halil-fi3pq 5 months ago

    How do I download the CLIP Vision model?

  • @AnjarMoslem
    @AnjarMoslem 24 days ago

    I get this error while using your workflow from Gumroad: "ClipVision model not found". Help me please.

  • @artisans8521
    @artisans8521 5 months ago

    Thanks, makes the pain a bit less.

  • @MarcJordan-zn5wn
    @MarcJordan-zn5wn 5 months ago

    Can you run this in chrome?😅

  • @AnjarMoslem
    @AnjarMoslem 24 days ago

    Where do I download the OpenPose model?

  • @bordignonjunior
    @bordignonjunior 5 months ago +1

    The video is great, but you did not provide links to download the models.
    I have downloaded your workflow and tried to install all the models the same as you have, and I always get an error.
    I'm sure the problem is on my side, but you could take more time to explain the details.

  • @amrshbaitah
    @amrshbaitah 20 hours ago

    Someone help me please, I didn't find the Load IPAdapter node in my nodes and models.

  • @user-is8hm2zs7c
    @user-is8hm2zs7c 6 months ago +1

    The "model.safetensors" file at 5:07 is too large for the file system to install, what can I do?

    • @rpharbaugh
      @rpharbaugh 5 months ago

      I've been trying to find this.

  • @beastemp627
    @beastemp627 2 months ago +1

    I can't find the IPAdapter Apply nodes, what should I do?

    • @Aiconomist
      @Aiconomist  2 months ago +3

      I'm updating this workflow because a lot has changed since then. IPAdapter version 2 is even more advanced now. Be sure to check out my latest videos for all the updates.

    • @gammingtoch259
      @gammingtoch259 1 month ago

      @Aiconomist Please update this. I tried using other nodes and adapting something similar to your .json file, but nothing works for me :( An error appears: "copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664])"
      Please help us and update the .json file on the web, thank you very much.

  • @dcell7037
    @dcell7037 5 months ago +1

    It's so interesting and amazing how this is now. I don't know what many of the terms he says mean, though, like dpmpp_sde and that kind of stuff. I feel like a total idiot watching this. There is no way I can learn this now and keep up with it. It's really so cool, but it sure makes me feel stupid.

  • @matyourin
    @matyourin 5 months ago

    Hm... I finally got all the needed models and nodes, but it still does not work as shown... I get an error message: "Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664])." And then it goes on with execution.py line 85, line 78, ... but I think the cause is something to do with the resolution of the images I put in? Is that relevant? Do all input images have to be the same resolution?

    • @christophsch7839
      @christophsch7839 5 months ago

      Read the comment from @alexalex9511; someone posted an answer.

  • @TheAexchile
    @TheAexchile 7 days ago

    What about the identity of a profile used on Instagram; can you get banned? Best

  • @noobicorn_gamer
    @noobicorn_gamer 6 months ago +1

    NGL, I already have an account with 6k followers, steadily growing. Can't share the secret sauce though, but making the images is only 1/5 of the process ;)

    • @Rubberglass
      @Rubberglass 5 months ago

      Link the account?

    • @tiggy4591
      @tiggy4591 5 months ago

      @Rubberglass If he's pretending not to be an AI, any link to his account could potentially compromise the whole operation.

  • @ewzxyhh6180
    @ewzxyhh6180 5 months ago

    How do I install UltimateSDUpscale?