Reposer = Consistent Stable Diffusion Generated Characters in ANY pose from 1 image!
- added 11 Oct 2023
- Stable Diffusion Reposer allows you to create a character in any pose - from a SINGLE face image using ComfyUI and a Stable Diffusion 1.5 model! Highly consistent generation comes thanks to IPAdapter, which allows for easy, prompt-free image generation.
No finetuning needed or LoRA training required = massive time savings.
No need for Roop, ReActor or any other face swap software which can’t be used commercially. On top of that, any face can be used - not just “realistic” ones. Want a comic art style face? No problem! Can’t install roop? Not an issue 😉
All you need is 1 image and this FREE, ready-to-use ComfyUI workflow to keep both the face and the image style in your generations! Prompts can be used too for those extra little details, should you wish.
Enjoy :)
Get the very latest workflow versions via Patreon!
/ nerdyrodent
Example with clothing too:
Stable Diffusion - Face + Pose + Clothing - NO training required!
• Stable Diffusion - Fac...
Workflow + extra docs:
github.com/nerdyrodent/AVeryC...
How to install ComfyUI:
• How to Install ComfyUI...
Need even more help? No worries - here is a whole playlist!
• ComfyUI Tutorials and ...
== More Stable Diffusion Stuff! ==
* ControlNet Extension - github.com/Mikubill/sd-webui-...
* ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
* How do I create an animated SD avatar? - • Create your own animat...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
If you are just starting ComfyUI, WATCH THIS VIDEO!
This answered so many questions. I've been dragging my feet for weeks and this solved so many problems.
Thanks so much!
I spent weeks in search of such techniques. I'm fortunate to have found them here. Thank you very much.
Bro, I only learnt about Stable Diffusion a couple of days ago and came across your tutorial. It's just other-worldly stuff you're doing. I'll forever be grateful to you for your efforts. I tried several times in vain after watching this tutorial, but then realized I wasn't using the OpenPose model. Once I did that, the output image that came before me almost took my breath away. Outrageously good, and thanks from the bottom of my heart. I can't thank you enough for this video and the references ❤
Glad you like the things 😊 It’s amazing what you can make with Comfy!
@@NerdyRodent Thanks for the response ❤️ I even tried making a workflow of my own in ComfyUI to get face expression from a reference image and apply to any character. I used MediaPipe FaceMeshProcessor but it isn't really working out😅. Too much to learn I guess before I start making workflows. Do you have a video for the same by any chance so I can look up and get some insight on the facial expression aspect?
Amazing ! Will try for sure !
Excellent! I always wanted characters to be consistent, and now it's possible. Thank you :)
Me too! Is there a setting for the number of images you want generated in a batch somewhere, or am I just missing something?
Cool stuff, thanks Nerdy.
😉
The Nerdy Rodent is becoming the ComfyUI Workflow master of the Internet!
Lol. Just playing 😉
you are like a magician..thanks for everything
It's my pleasure. Thank you for watching!
Absolutely Amazing... Just what I've been looking for. Thank you so much!!!
Glad it was helpful!
EPIC tutorial, man! Thanks VERY much. 💪
Glad you liked it!
Thank you, this is a game changer. Roop struggles with non-realistic faces, so this workflow is an awesome addition!
Glad you like it!
@@NerdyRodent Is there a setting for output numbers or batch? I see the image or batch node, but that's it.
WOW, great work Nerdy Rodent! This really seems like a useful comfyui workflow for storytelling... can't wait to try it out. Thanks for everything you do! 🤓🐀
Have fun!
😮This is the biggest incentive to install that spaghetti interface.
😂
Omnomnom spaghetti!
Not! That interface looks even worse than Automatic1111.
Ahem, it only took me 6.78 hours to install this. All the custom nodes were about as fun to download and install as a root canal treatment. When I first loaded it, my screen was as red as the bridge of the NCC-1701 under red alert. Still not working optimally.
@@artisans8521 lol, same here. I've been trying to get this bogus workflow to work for weeks. For this and his "Reposer Plus" he told me to DM him on Patreon, which required I pay the fee - which I had no issue doing - but it has been zero help so far 😅
This is incredible
This is fantastic!
That character is beautiful!
Yes, she is, isn’t she 😃
Thanks for your inspiration and workflow. I have tested it to create an animation character, and I am combining this workflow with AnimateDiff to play around. :)
Sounds great!
Wow this is epic, many thanks!!
Seeing all these nodes and things, of which I know nothing, I wonder how Insight AI works. I would imagine it's a similar process but with different parameters and models, but just visualizing it in the way you showed has sparked my curiosity about how these AI things work. Great video, thanks.
Thanks for your hard work. This is amazing =D
Glad you enjoy it!
Hello Nerdy,
many greetings from Berlin, Germany. Thank you very much for your great work, which helped me a lot with the realisation of my ideas. Do you see a possibility to create two characters - for example in the Reposer? You would then have one pose, but with two people who are then replaced.
Great Video. Thank you !
You are welcome!
Hey, awesome video, tysm! Is there any shortcut to find all the checkpoints and safetensors to test this, or is it highly dependent on the use case and I have to manually download and import them?
Well you've done it. I hope you're happy with yourself. I'm trying to figure out the spaghetti-hell that Comfy UI looks like to me! WELL PLAYED.
Most sincerely, well done and thank you.
Heh 😆 Yay! As long as something something, success is inevitable!
Finally we can create comics! Wow!❤
I use the comfyui extension for A1111, and it keeps everything in one place, super practical for that.
Wait, what? Can you use comfyUI inside of Automatic1111?? I'm confused O_O
SPEAK PERSON!!!! How... WHERE...
I don't think I can post a link here apparently. Last time I tried my comment got deleted. Search "model surge a1111", or "sd-webui-comfyui"
You've changed my perspective on everything.
I'm glad I am researching so much before diving in.
What generative model are you using?
1.5? SDXL? SDXL-Turbo?
Any thoughts on what you would recommend for someone that is just starting out learning?
Is it any good?
Very cool!
Thanks for the video. Would it be possible to combine this approach with something like "Instant LoRA" (yt vid) to be able to load multiple angles of one face?
I am getting an error that is actually driving me nuts:
Error occurred when executing IPAdapter:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
That's really useful and amazing. Thanks and blessings from the Pope :D
Glad it was helpful!
This seems like it would be incredible for keeping videos consistent
Seems like an idea 😉
Yeah really. If a script could keep feeding each frame from the reference video back in, it should be amazing. It might not be able to keep the backgrounds consistent though.
@@NerdyRodent can you try creating a video?? ❤❤
Excellent work and tutorial!
Thanks for this! It works great in 1.5, but I'm having the damnedest time figuring out what is dependent on 1.5. When I load it up with SDXL, the first KSampler throws "Error occurred when executing KSampler: The size of tensor a (1024) must match the size of tensor b (1280) at non-singleton dimension 1". Anyone know the cause?
I lovee uuu NERRRRDYYYY! :)
Oh it not XL when XL? :')
When someone releases an SDXL face model 😐
Thank you, Nerdy Rodent! You should turn on YouTube membership 👍🔥🔥
I’ll have to take a look at that 😆
@@NerdyRodent maybe we can have “Raton Laveur”, “Honey badger” badges 😜
@@banzai316 lol. Honey Badges 😆
This is Fn awesome 😎
Ikr! 😉
thanks for this!
No problem!
Thanks for uploading this, it's exactly what I'm looking for! Issue: it keeps saying it's missing CR Batch Process Switch even though I installed Comfyroll Custom Nodes. Pressing Queue Prompt yields an "unknown error". I'm new to this world, do you have any suggestions as to how I can troubleshoot? (I do see Comfyroll custom nodes in my nodes folder and have located CR Batch Process Switch in logic.py, just not sure why ComfyUI can't seem to find it.) I've also updated Python to 3.11.6.
wanted to see more examples with the side angle thingy
Can you please make a tutorial on how to do this in Automatic1111? 🙏
Great work nerdy! Does it run on Krita sd plugin?
comfyUi more and more becoming the standard it seems
It’s fun to experiment with stuff, for sure!
Mate, this looks absolutely amazing!!! Can't wait to try it. One question....
Is it able to copy clothing as well? If I have a character I want to remain consistent, can I use that full-body character and then have it come out in a new pose, or is it just for faces?
This one is consistent faces, though it will use clothing influences from the face image also. For clothing swaps, see Stable Diffusion - Face + Pose + Clothing - NO training required!
czcams.com/video/ZcCfwTkYSz8/video.html
Truly amazing! Thanks for the fast reply. However, I'm stuck with an error: NNLatentUpscale missing, and it doesn't seem to be working. Is there a fix to this that I'm missing? @@NerdyRodent
Amazing workflow! I'm trying to achieve this result with SDXL, but the quality is not even close to SD 1.5. Do you know if it has to do with the specific IP adapters for SDXL?
Nothing yet with face for SDXL that I’m aware of. Do let me know if you find anything!
Great stuff. Thank you so much. I guess for photorealistic images it would be even better if you could include Roop on the target image. Would love to see that.
For photorealistic images, you can use a photorealistic image input, a photorealistic model and photorealistic prompts
@@NerdyRodent I know, but it does not keep the face
@@christianblinde mine does ☹️
do you have any suggestions on fixing the IP adapter not being found?
Is there a way to generate a pic from SD and then programmatically generate another pose for the character using the seed or something? Not using the UI.
Holy shit bro i would read the comic book you had going in the initial shots.
I think she may have to face… Cthulhu!
amazing vid
Thanks! 😌
This is awesome :D
Would appreciate it if you made an updated tutorial for consistent faces, considering there's a new IP Adapter v2 and InstantID now. Also, I'm not able to get the workflows from the links.
Unfortunately there isn't actually an IP Adapter v2 out as yet - it's been a while since we had any new models! I imagine you're probably just getting tricked / fooled / confused by names, but that's a thing on yt apparently! Also, things like InstantID are for research use only, hence I avoid them. Check the links in the video description for loads of workflows and a bunch more too. As another option, you can support the channel and get the very latest workflows via Patreon!
What do I need to start doing this? I’d want to start a comic strip using this software but have no idea where to start. Do I download Stable Diffusion to my laptop? If so, How do I even do that? Is Reposer like a preset? So many questions 😔
Ok, as a newb this one was a lot more difficult! Still working the kinks out; I've had some interesting crashes. I basically make it as far as the HD image, but end up with errors before the final upscale. The rig I'm working on is 9 years old, so I'm wondering if the errors could be related to outdated hardware. Wondering about your hardware configuration vs. minimum requirements?
Yeah, 9 years old will likely run out of VRAM when going to higher resolutions
Thanks for creating this awesome tutorial, but after installing all the custom nodes step by step I have some problems; I'd appreciate your help.
1. When I first open this workflow file, the browser window pops up this information:
When loading the graph, the following node types were not found:
CR Batch Process Switch
Nodes that have failed to load will show as red on the graph.
2. After I click the "Queue Prompt" button in the browser, a message pops up: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
3. And my terminal shows this error:
File "/home/young/Downloads/ComfyUI/ComfyUI/execution.py", line 598, in validate_prompt
class_ = nodes.NODE_CLASS_MAPPINGS[prompt[x]['class_type']]
KeyError: 'class_type'
If I add a skinny person as the input image and set a fat person for the ControlNet pose, will the output be a fat person, or will it just detect and adjust the pose?
Hi there, thanks a lot for the video. I'm completely new to this and am still finding out how everything works. I'm getting an error trying to import the Allor plugin. I'm using a Mac and wanted to know if it has something to do with that. Hope you can help me out.
Dumb question, but I can't find the IPAdapter Image (FACE) node anywhere; the closest one is "Apply IPAdapter FaceID", but that one doesn't let me upload an image under it.
Thank you for your hard work! I'm a big fan of your talent, but I'm having this issue, and many people say that's up to the devs...
Conflicted nodes: Image Overlay [comfy_KepListStuff], Latent Upscale [comfyui latent upscale], Latent Upscaler [sd-latent-upscaler]
How do I install ControlNet for ComfyUI, please?
Can you use it, when you have more than one character?
Pls do one for A1111, thank you.
Hi!
Can you do this tutorial without the spaghetti-chaos UI?
The workflow looks robust, although when I tried implementing it, the pre-scale KSampler kept giving this error: "Error occurred when executing KSamplerAdvanced: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead". How do I resolve this?
I think I'm so close to get this working but I keep getting this: Error occurred when executing ControlNetApplyAdvanced: 'NoneType' object has no attribute 'copy' , is there any way to fix this?
how much GB should the ComfyUI folder take up...?
70's detective, Christy Love comes to mind.
Had never heard of them before, but yes!
Great video!! Please provide the JSON file along with the image to make the import process easier. Sometimes images don't work, so there is nothing better than a JSON file. Many thanks in advance, amigo!!
Do you know how I can find the name of, and uninstall, the old version of NNLatentUpscale? This workflow refuses to work unless I do.
Let's say I have an image of a specific chair and want SD to be able to create this chair from a specific angle. Can I do that with this type of workflow?
I’ve only focused on faces here, but it may indeed be somewhat possible using a similar approach!
Can you do an Automatic1111 version?
I installed ComfyUI in a virtual env by cloning the repo, and set all directory paths to the A1111 ControlNet models, checkpoints, etc. If I drag and drop this workflow I get the error: [When loading the graph, the following node types were not found:
CR Batch Process Switch
DWPreprocessor]
I have also installed ComfyUI's ControlNet Auxiliary Preprocessors, and DWPreprocessor Provider (SEGS) //Inspire shows in the list. How do I install the missing preprocessor and CR Batch Process Switch?
ComfyUI Manager is your friend!
I'm sure this is an easy answer, but I dragged in your handy workflow and installed all the missing nodes, yet when I engage the prompt I get an error, "SyntaxError: Unexpected non-whitespace character after JSON", with a column and line designation. I would deduce that means a typo, but in that fascinatingly complex workflow I don't know what's a column vs. a line, etc. Any help would be great, but I'll keep tugging at it.
Did you ever figure it out? Same thing happening for me
Hi! Your videos seem to show that you separated your checkpoints into subfolders within the comfyui structure. I can't figure out how to do this. It would be great to have sd15 and sdxl subfolders for checkpoints, loras and embeddings. If you haven't covered this already can you explain how to do this? If you already have, just a point to the video where you explain it would be great, too!
You can open the graphical file manager for your operating system, and then from the context menu create a new directory
@@NerdyRodent I must have labeled them poorly in the past. This time it worked!
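For anyone else wanting this layout: ComfyUI scans its model folders recursively, so subfolders just need to exist and be populated. A minimal sketch of creating the split described above - the `~/ComfyUI` install path is an assumption; adjust it to your own setup:

```python
import os

# Hypothetical default install location - change to wherever your ComfyUI lives
comfy_models = os.path.expanduser("~/ComfyUI/models")

# Create sd15/ and sdxl/ subfolders under each model category;
# ComfyUI lists files in subdirectories of these folders recursively,
# so moved models show up with their subfolder as a path prefix
for category in ("checkpoints", "loras", "embeddings"):
    for family in ("sd15", "sdxl"):
        os.makedirs(os.path.join(comfy_models, category, family), exist_ok=True)
```

After moving the model files into the new subfolders, restart ComfyUI (or refresh the browser page) so the loader dropdowns pick up the new paths.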
Any chance on doing a video on Deep Floyd? Not much out there.
Sure, here you go - Deep Floyd - AI Generated Text In Images!?
czcams.com/video/139f-gbj9ko/video.html
Can we get an updated version of this that uses the new IP Adapter Advanced node, since the IPAdapterApply node is deprecated? I can't figure out how to get the Advanced node to work in this workflow. I'd also appreciate explicit links to the models that must be used together for IPA and CLIP Vision. The troubleshooting page for IPAAdvanced is not clear enough to be helpful.
I’ve swapped the node from the old new one to the new, new one 😉 Direct model links are in the “description” column so all ready to go!
@@NerdyRodent I appreciate the swift reply! However, I think I forgot to mention that I'm using SDXL. The SDXL reposer image in your github repo still produces a workflow with the old node. It shows up bright red and labeled "undefined" - I have the latest versions of all custom nodes. There are also no links describing any models for the SDXL reposer. Are you referring exclusively to the SD1.5 version of the reposer workflow?
Yup, I’m referring to the sd 1.5 version this video covers. Same as I did in Reposer2, any workflow with ip adapter apply simply needs it replaced!
I'm still confused about which models are required just to run Reposer. Please help out - it's kind of urgent 😅😅
Is this to be installed locally on our home system, or accessed via a cloud/matrix? I'm sorry, but I'm a total beginner.
You can run ComfyUI anywhere, but best run at home!
Woah! I started with the first video and got that rodent druid to work, but now I am trying to make those poser workflows work and somehow I end up getting errors like this:
"Error occurred when executing IPAdapterApply:
'NoneType' object has no attribute 'patcher'"
I downloaded at least 5 different IP Adapter things - some by hand, some via the ComfyUI Manager; some are .bin, some are .safetensors.... I am so confused by now, and I feel like I need an in-between video that explains all the different kinds of models, checkpoints, and IPAdapters, and what these errors even mean. Where can I get some help?
Same deal. Did you ever figure it out? On a deadline and getting desperate.
I wish I knew how to set this up step by step - I mean the base install. Does it work on Mac?
How to install ComfyUI:
czcams.com/video/2r3uM_b3zA8/video.html - mac is practically the same as Linux 😀
Great! Where is your reposer workflow json file?
Links are in the video description 😀
Excellent as always, but how are we determining the size of the original image before the upscale? I can't find any resolution inputs.
It's 0.5 megapixels
Yep, found that, thanks. But can we not set the width/height of the first image generated from the face/pose inputs? @@NerdyRodent
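For anyone wondering what "0.5 megapixels" works out to in practice: the pre-upscale width and height follow from the megapixel budget and the pose image's aspect ratio. Here is a rough sketch; the 1 MP = 1024×1024 px convention matches ComfyUI's ImageScaleToTotalPixels node, while the snap-to-multiples-of-8 step is an assumption based on SD's latent-space downscaling, not something the workflow itself does:

```python
import math

def size_for_megapixels(megapixels: float, aspect_ratio: float) -> tuple[int, int]:
    """Return (width, height) totalling ~`megapixels` MP at the given
    width/height aspect ratio, snapped to multiples of 8 for SD latents."""
    total = megapixels * 1024 * 1024  # ComfyUI counts 1 MP as 1024*1024 pixels
    width = math.sqrt(total * aspect_ratio)
    height = width / aspect_ratio

    def snap(v: float) -> int:
        # Latent space is 1/8 of pixel resolution, so keep dims divisible by 8
        return max(8, int(round(v / 8)) * 8)

    return snap(width), snap(height)

# A 2:3 portrait pose at 0.5 MP comes out around 592x888
w, h = size_for_megapixels(0.5, 2 / 3)
```

This is why there is no explicit width/height input: the pose image's shape plus the megapixel setting fully determine the first generation's size.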
Looking great. Very sexy renders. 👍
I wonder who’s face could be used? 😉
Can you make a workflow for overlaying images with effects, something similar to IG filters - like objects on fire, glowing eyes, objects that change color randomly, etc.? Pic2vid/gif or vid2vid. Great stuff so far, Rody; much appreciated.
ip2p may help there ;)
@@NerdyRodent 8GB of VRAM is struggling with this. What else can I change/lower besides ImageScaleToTotalPixels?
Do you have the workflow JSON? I tried copying the visuals by hand for learning, but I get loop errors, as there might be a bad node or something. I know that if I load the .json I can use the Manager to find the missing nodes.
You can drop me a dm via patreon for help!
@@NerdyRodent I have! Thanks!
I keep getting this error: "When loading the graph, the following node types were not found:
DWPreprocessor
GetImageSize
Nodes that have failed to load will show as red on the graph." I used the install manager and it says they are installed under the "Install Custom Nodes" tab, so I'm not sure what the issue is.
Make sure to use the latest version of ComfyUI
Does this wf still hold up with all the recent changes?
Yup! The plus face one is still the best that isn't for research-only use :)
We are working with Txt2Img here.... Would it be possible to implement this technique in Img2Img to have control over the face? Like a kind of CNet? For example, when upscaling an image, I want to inject noise in the process to improve the textures, but I don't want to lose the actual face of the source image. Would this be possible with the ip-adapter-plus-face?
Possibly. Just working on adding extra clothing support so you can pick any outfit too! So many things on the go 😆
Incredibly flexible solution! Congratulations, Rodent! I changed the latent_image input from EmptyLatentImage to LoadImage and voilà - works like a charm! 🙏 @@NerdyRodent 💪
Hi @NerdyRodent, I haven't found the JSON file for the workflow; the only thing provided is a PNG image, which is a little confusing when I am recreating your workflow. Can you please provide the working JSON file for this workflow? I really want to try this....
Thank you for creating such amazing video tutorials.
You can load the PNG workflow, and then click save if you want it in JSON format instead!
@@NerdyRodent Thank you, I am new to ComfyUI and still learning.
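The load-the-PNG trick works because ComfyUI embeds the full workflow as a JSON string in the image's metadata (a tEXt chunk named "workflow"). If you'd rather extract the JSON file directly, a small sketch using Pillow - the filenames here are placeholders:

```python
import json
from PIL import Image

def extract_workflow(png_path: str, out_path: str):
    """Pull the ComfyUI node graph out of a workflow PNG's metadata.

    ComfyUI writes the graph into a PNG text chunk named "workflow";
    Pillow exposes text chunks via Image.info. Returns the parsed graph
    dict, or None if the image carries no embedded workflow.
    """
    workflow_json = Image.open(png_path).info.get("workflow")
    if workflow_json is None:
        return None  # not a ComfyUI workflow image
    graph = json.loads(workflow_json)
    with open(out_path, "w") as f:
        json.dump(graph, f, indent=2)
    return graph

# Hypothetical usage: extract_workflow("reposer.png", "reposer_workflow.json")
```

Note that resaving or re-encoding the PNG (e.g. via screenshots or some image hosts) strips this metadata, which is why a downloaded-then-recompressed image sometimes refuses to load as a workflow.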
Error occurred when executing IPAdapter:
'ClipVisionModel' object has no attribute 'processor'
Can someone please help me with this error?
For some reason I dragged in the image but my workflow looks way different than yours, any idea why that would be?? Did they update it?
I made a few versions. This is version 1
No matter what I try, it always comes back to missing certain models or nodes. Is there a place where I can look this up, from beginning to end?
You can drop me a dm on www.patreon.com/NerdyRodent if you need more help!
Love your videos! But in this one, once I got the workflow up and running, I got lost at where you got the "Image Strength (IPAdapter)" node from. I have IPAdapter and IPAdapter Plus installed but still can't find that node. Help!! I tried searching and it doesn't come up.
Strength = weight
It made me wonder at the beginning too. It's nothing more than a PrimitiveNode with the corresponding value, renamed to "Image Strength (IPAdapter)".
So is comfyui the interface I need for this?
Yup, this is a workflow for ComfyUI! You can drop me a dm on patreon if you need more help 😀
Can you please tell us how to get a consistent outfit or clothes? How can I maintain the same outfit? Please, it could be really useful. Thank you.
Stay tuned 😉
Hello, thank you for the video. As I understand it, I can't write in the positive prompt things like . How can I add that to this workflow?
Check out my beginner video for the workflow basics - ComfyUI for first-time users! SDXL special
czcams.com/video/2r3uM_b3zA8/video.html
@@NerdyRodent thank you!
Hi sir, could you have another look at the SDXL version of this?
I'm getting an issue with the SDXL version of this workflow (the SDXL version of Reposer using the SDXL "IPAdapter Plus Face" model):
ERROR: IPAdapterApply: 'NoneType' object has no attribute 'encode_image'
I have a feeling it is an issue with the model used in the IP Adapter Model loader, maybe the Load CLIP Vision too.
After changing the model and CLIP Vision to 'ip-adapter-plus-face_sdxl_vit-h' and 'CLIP-ViT-H-14-laion2B-s32B-b79K',
I now get:
Error occurred when executing KSamplerAdvanced:
Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
It’s best to use the suggested models - I can’t say if it will work with any others. You can drop me a dm on www.patreon.com/NerdyRodent for more info!
@@NerdyRodent Hi, I managed to fix it. I tried to use the models in the workflow but it didn't work, so I downloaded the models found on the IP Adapter GitHub (named in my above comment) and then fixed the next error by using the --flat 16 thing in the executable (I'm not at my PC; I don't remember the name), as my 1080 Ti processes things in a different way to newer cards, I guess.
I would love to learn how to make this workflow step by step; I just don't wanna copy-paste.
If you prefer to make workflows (rather than have them ready made for you), then check the links in the video description!
ComfyUI is somewhat of a challenge! I can't get these working, as none of them will recognise the IPAdapter node, despite Manager being happy that all nodes are loaded. I wondered if it was just not seeing it and tried adding an IPAdapter, but I can't find one with the same inputs and outputs, so I'm lost. I've not used Comfy before but have used node-based stuff in Unreal and Blender, so I'm no stranger to the idea... @NerdyRodent any chance of a pointer or two? (This is a fresh install of Comfy, though it is also being used by the Krita plugin so has all the plugins for that too.)
Drop me a dm on patreon and I can see what needs fixing for your install! www.patreon.com/NerdyRodent
@@NerdyRodent Figured out that it needed the original IPAdapter, despite that page now saying to go to the IPAdapter Plus page.
@@tre4B sounds like you somehow got a really old version of the workflow as the current version on github uses ipadapter plus
Dear Mr. Rodent, IP Adapter has been updated and the workflow does not work anymore. Are you planning to update this one? I am still a noob and now need to figure it out 🙂
Yup! Reposer2 was updated a while back already :)
How do I load the workflow? Where shall I find the .json file in order to load your workflow? Please tell me how I should load your exact workflow into my ComfyUI.
Check the video description for info!
@@NerdyRodent So basically I have to load the image you provided on your GitHub?
I'm new, so kinda overwhelmed 😅
Hope you can give a preview of your setup for ComfyUI, and also recommended models - which do you prefer, and more importantly, how much space does it all take for you? Can I run it from a hard drive? Sadly it's not an SSD.
Hope to hear from you soon 🤞
I love running sd from my ssd as it helps reduce that initial load time for the models 😊 Overall, including controlnets and other models, it’s about 60GB
@@NerdyRodent Really appreciate the reply. Sadly mine isn't an SSD, so will it perform worse, or will it be manageable?
I also wanted to ask - I'm a bit confused about the setup for Reposer. What is the basic setup for a noob like me, if you don't mind? Sorry for the bother.
The basic setup is to first install comfy, then download the models and setup like in the video. If you’re new to Comfy, I’d suggest going through all the videos in my ComfyUI playlist as they start basic and work up to workflows like this
@@NerdyRodent thank you 👍
@@NerdyRodent You seem extremely responsive and I appreciate that.
Would you recommend SDXL, SD 1.5, or SD-Turbo?
What generative model do you think works best?