This will be fun to experiment with. Thank you!
Excellent vid. Could you do a deep dive into Hallo? I find that to be a beast to set up with its millions of dependencies and tentacle-like extra nodes.
Will do!
Quick tip for Abigail: If you use the middle mouse button (press the scroll wheel) instead of the left mouse button to move around the canvas, you won't move any nodes around. Took me forever to change to this and I still catch myself panning with the left button sometimes.
Interesting workflow, thanks for sharing. The first time I ran it, it got stuck during VAE decode after the upscale and was showing 99% VRAM on my 4090. Had to stop and restart ComfyUI. I wondered if it was maybe because of the two load checkpoints, so I ended up bypassing the 2nd load checkpoint and just used reroutes to connect the upscale section to the 1st load checkpoint, and it ran through OK that time. Still a pretty lengthy process, but fairly solid results!
Yes! It is a beast on your system.
Would love to try this workflow, but for the life of me the ReActor node won't load. Getting these errors, any ideas?
When loading the graph, the following node types were not found:
ReActorFaceSwap
LayerColor: Brightness Contrast
Nodes that have failed to load will show as red on the graph.
So we would make sure ReActor face swap is installed and is the latest version from their GitHub. Also update ComfyUI and its dependencies. For the layer brightness and contrast, you can bypass that node for now.
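In case it helps anyone debugging the same red nodes: a quick way to see which custom node packs are actually installed is to list the `custom_nodes` folder. This is a minimal sketch assuming the default ComfyUI layout, and the pack folder names below are my guesses — verify them against the packs' GitHub repos:

```python
from pathlib import Path

def find_missing_packs(comfyui_dir, required):
    """Return required custom-node pack folders that are not installed.

    Assumes the default layout where packs are cloned into
    ComfyUI/custom_nodes/<pack-name>. Comparison is case-insensitive.
    """
    custom_nodes = Path(comfyui_dir) / "custom_nodes"
    if not custom_nodes.exists():
        return list(required)
    installed = {p.name.lower() for p in custom_nodes.iterdir() if p.is_dir()}
    return [pack for pack in required if pack.lower() not in installed]

# Hypothetical folder names -- check the exact repo names before relying on this.
required_packs = ["comfyui-reactor-node", "ComfyUI_LayerStyle"]
print(find_missing_packs("ComfyUI", required_packs))
```

Anything the function returns is a pack you still need to clone (or that was cloned under a different folder name).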
@AIFuzz59 Tried that and still the same error with the "LayerColor: Brightness Contrast"
Looks cool
Thanks! You are cool!
This is great, but I'm looking for ways to restore without changing the faces so much. This is a huge issue everywhere.
For keeping to the original, you can lower the sampler's denoise down to 0.2 to avoid changing details like the face too much, or raise the ControlNet values to keep the original image/face as much as possible. Another option is to use IPAdapter with a face you like to influence the face (controlling the structure, ethnicity, colors, etc.). For example, you can use the exact same image to force the same face from the original image.
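For anyone who prefers tweaking these values by script rather than in the UI, here is a minimal sketch that edits an exported ComfyUI API-format workflow dict. The node IDs and the two-node fragment are hypothetical; your exported graph will have different IDs:

```python
import json

def tune_for_face_preservation(workflow, denoise=0.2, cn_strength=0.9):
    """Lower KSampler denoise and raise ControlNet strength in an
    API-format workflow dict, to stay closer to the original image."""
    for node in workflow.values():
        if node.get("class_type") == "KSampler":
            node["inputs"]["denoise"] = denoise
        elif node.get("class_type") == "ControlNetApply":
            node["inputs"]["strength"] = cn_strength
    return workflow

# Hypothetical two-node fragment of an exported workflow:
wf = {
    "3": {"class_type": "KSampler", "inputs": {"denoise": 0.75, "steps": 20}},
    "7": {"class_type": "ControlNetApply", "inputs": {"strength": 0.5}},
}
print(json.dumps(tune_for_face_preservation(wf), indent=2))
```

The same pattern works for any widget value you want to batch-adjust across runs.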
Thank you for sharing this workflow with us!
Glad you like it!
Many thanks for the great workflow!! Have you tried comparing it with SUPIR?
No, we haven't. We used to use the SUPIR workflow at the end of all of our workflows, but once we created our own, we just use that.
@AIFuzz59 Thanks for the reply. What are the best adjustments to make it work with SDXL? I tried, and the image output is noisy and distorted. Thanks in advance for guidance.
Error occurred when executing KSampler:
'NoneType' object has no attribute 'shape'
So the error is happening with one of the nodes going into the KSampler. We would check all the input nodes and see if there are any missing values.
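One way to hunt for that kind of `NoneType` failure is to scan the exported API-format workflow for inputs that are unset or that point at a node missing from the graph, since an unconnected input usually surfaces as `None` at sampling time. The small graph below is a simplified assumption, not an actual export:

```python
def find_unset_inputs(workflow):
    """Return (node_id, input_name) pairs whose value is None, or whose
    link references a node id that is missing from the graph."""
    problems = []
    for node_id, node in workflow.items():
        for name, value in node.get("inputs", {}).items():
            if value is None:
                problems.append((node_id, name))
            elif isinstance(value, list) and value and str(value[0]) not in workflow:
                # Links are [source_node_id, output_index] in API format.
                problems.append((node_id, name))
    return problems

# Hypothetical broken graph: node "1" was deleted, latent never connected.
wf = {
    "4": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "latent_image": None, "seed": 42}},
}
print(find_unset_inputs(wf))  # → [('4', 'model'), ('4', 'latent_image')]
```

Every pair it reports is a node input worth checking in the UI before re-queuing.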
How do you make the ComfyUI background black?
amazing content, subscribed!
Welcome!
I love your voice, it is soothing
Thank you baby 😊
Lol, I am learning how to upscale and enhance images and this is a great vid, but the narrator comments crack me up. No, I'm not a Swiftie.
Hello, I am a Korean learning Comfy UI.
I tried downloading the json file and running it.
However, I'm running into a problem with the dwpose estimator node.
I copied the error message and asked chatgpt.
I don't understand everything, but roughly, the problem seems to be happening because the DWPose estimator's bbox_detector model and pose_estimator model are missing.
Where can I get the models for these two widgets?
And please understand if my English is awkward.
I rely on Google Translator.
Bori, let's keep at it! You're good at English, so don't lose heart!
Your English is very good! Are you missing the models?
@AIFuzz59 There is a model for the basic OpenPose.
It seems to me that there is no dedicated model for DWPose.
Rather than saying it's missing, it seems like it wasn't there from the beginning.
@user-eh7vz4de4q Bori, cheer up! Cheer up!!!!!!!!
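For anyone hitting the same DWPose model question: the two files the node's widgets usually expect are `yolox_l.onnx` (bbox_detector) and `dw-ll_ucoco_384.onnx` (pose_estimator), and the comfyui_controlnet_aux pack normally downloads them on first run. If that auto-download fails, both files are published on Hugging Face (the yzd-v/DWPose repo, to the best of my knowledge — verify before downloading). The cache path below is an assumption about a default install; this sketch just reports which file is absent:

```python
from pathlib import Path

# Filenames the DWPose estimator widgets expect (bbox detector + pose model).
BBOX_MODEL = "yolox_l.onnx"
POSE_MODEL = "dw-ll_ucoco_384.onnx"

def missing_dwpose_models(ckpt_dir):
    """Report which DWPose model files are absent from a checkpoints folder."""
    ckpt_dir = Path(ckpt_dir)
    return [f for f in (BBOX_MODEL, POSE_MODEL) if not (ckpt_dir / f).exists()]

# Hypothetical cache location -- comfyui_controlnet_aux keeps models under its
# own ckpts folder inside custom_nodes; adjust to your install.
print(missing_dwpose_models("ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts"))
```

If both filenames come back, drop the downloaded `.onnx` files into that folder and restart ComfyUI.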
Thanks for this amazing workflow. Do you think it can be adapted for old video? Would it still work by adding AnimateDiff?
It should work! It may take time as it will do frame by frame
Make one for photo to painting
Will do!
I will try to add LLaVA or some LLM to detect age and use it with IPAdapter and embeddings... This WF is a great start, thank you!
Thanks! Let us know how it works out!
Awesome workflow as always, the only thing I think is missing is a style and subject selector to make it even simpler.
Great suggestion!
What if I just want to enhance the overall features (the background, texture, plants, etc.) in the image without affecting the facial structure. Can you create a tutorial on that? Thank you so very much!
You got it!
Try multiple takes on the voice over.
Congrats on the second example, you added 20 years... for some reason. On the next children example, you added cleavage, questionable move but OK.
Thanks for your support 😎