Using Set Area Conditioning in ComfyUI
- added 3 Apr 2024
- Set Area Conditioning is a way of allowing different parts of your image to have individual prompts. I cover the basics and then show a more complex example. I assume a basic knowledge of ComfyUI and Photoshop, so this isn't a beginner's video.
Workflow for first section:
drive.google.com/file/d/16zPB...
Workflow for wide image:
drive.google.com/file/d/1raz2...
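For readers who want the gist of what Set Area Conditioning does conceptually, here is a minimal NumPy sketch. This is not ComfyUI's actual implementation; the blending rule and the `apply_area_conditioning` helper are simplified assumptions. The idea: each prompt embedding is paired with a rectangular region and a strength, and each pixel ends up conditioned by the strength-weighted mix of the prompts whose rectangles cover it.

```python
import numpy as np

def apply_area_conditioning(regions, height, width, dim=4):
    """Toy version of per-region prompt conditioning.

    regions: list of (embedding, x, y, w, h, strength) tuples,
             where `embedding` is a 1-D vector standing in for a
             text-encoder output.  Coordinates are in pixels.
    Returns a (height, width, dim) array: for every pixel, the
    strength-weighted average of the embeddings whose rectangle
    covers it (a simplification of how samplers mix area prompts).
    """
    acc = np.zeros((height, width, dim))
    weight = np.zeros((height, width, 1))
    for emb, x, y, w, h, strength in regions:
        acc[y:y + h, x:x + w] += strength * np.asarray(emb)
        weight[y:y + h, x:x + w] += strength
    # Pixels not covered by any region keep a zero embedding here;
    # in a real workflow you would give a base prompt full coverage.
    return np.divide(acc, weight, out=np.zeros_like(acc),
                     where=weight > 0)

# Example: a "sky" prompt on the top half, a "city" prompt below.
sky = np.array([1.0, 0.0, 0.0, 0.0])
city = np.array([0.0, 1.0, 0.0, 0.0])
cond = apply_area_conditioning(
    [(sky, 0, 0, 8, 4, 1.0), (city, 0, 4, 8, 4, 1.0)],
    height=8, width=8)
```

This is also why the video stresses planning the layout in advance: the rectangles and strengths are fixed inputs to the sampler, not something it discovers on its own.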
You always provide great info and inspiration to create in new ways. thank you.
This is a neat technique in ComfyUI. You have to plan and set every detail in advance. It might be too complicated for a beginner, but it is very useful for intermediate and pro users.
Thanks, I do put a warning that it's not for beginners.
Every second of this was worth my time. ⌛👌
I use this method quite often to mask out a bg from the subject and get better control of the bg generation! Great vid.
Super cool! What a fun workflow!
pure Content. thanks.
thx for sharing...quality vid as usual
Very useful, thank you.
Spotted that Syd Mead from a mile away... Good stuff. You know you can mix styles as well to form hybrid styles incorporating elements of each. I've done a lot of "in the style of syd mead and " and they usually turn out phenomenal.
Yes, some styles mix well. I quite often prompt a style and use a Lora with a conflicting style. This was a Syd Mead Lora, which is quite nice. The style in the prompt was Retro-Futurism, as I recall.
love your tuts! Especially when there's a bit of Syd Mead involved!
QU: (more to clarify workflow) When you refine the girl's face do you use photoshop to cut and paste her face (from the final upscaled image) and then run that through image to image with something like face detailer - using the same stylistic prompts or is there a step I am missing? Thanks again!
Yes, it's still easier in Photoshop; the subject select is very good at selecting figures. I find the refined face hardly ever needs any work to make it fit. I'm sure this will appear in Comfy, but at the moment face refiners etc. aren't that great.
Might it also be possible in this example that the resize (performed on the pixel space) degrades the latent version when you re-encode it? I heard that going in and out of latent space (pixel-to-latent and vice-versa) is lossy. So my guideline was to stick to latent manipulation unless it was necessary. I could be over-generalising, though.
The transition from latent to pixel space helps the process (yes, I was surprised to find that was so). Latent upscaling results in a loss of quality by comparison, so the latent produced by re-encoding refines better than one scaled in latent space. I have no idea why!
@robadams2451 It depends on the algorithm you're using for the latent upscaling. There's now the bislerp algorithm, which improves the quality of latent upscaling a lot.
But in general, upscaling in pixel space and then re-encoding introduces less noise into the latent space than a simple latent upscale, even using bislerp. That isn't inherently bad though; latent upscaling into a KSampler will always be the best way to improve quality, add new details and fix issues. The "problem" is that latent upscaling requires higher denoise and more steps.
Also, depending on your scheduler and sampler, you will need more or fewer steps and less or more denoise to fully recover the upscaled latent.
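To make the two routes in this thread concrete: "latent upscaling" just interpolates the small latent tensor, while the pixel route decodes with the VAE, resizes the image, and re-encodes. Below is a toy NumPy sketch of the latent route, using nearest-neighbour for simplicity (ComfyUI also offers bilinear and bislerp modes); the shapes assume SD-style latents, 4 channels at 1/8 image resolution.

```python
import numpy as np

def latent_upscale_nearest(latent, scale=2):
    """Nearest-neighbour upscale of a (C, H, W) latent tensor.

    This is the core of "latent upscaling": no new information is
    created, each latent cell is simply repeated, which is why a
    KSampler pass with fairly high denoise is needed afterwards to
    invent the missing detail.
    """
    return latent.repeat(scale, axis=1).repeat(scale, axis=2)

# A 4-channel latent for a 512x512 image is 4 x 64 x 64.
small = np.random.default_rng(0).normal(size=(4, 64, 64))
big = latent_upscale_nearest(small)  # shape becomes (4, 128, 128)

# The pixel-space route discussed above would instead be:
#   image = vae.decode(small)    # latent -> pixels
#   image = resize(image, 2.0)   # smooth interpolation in pixel space
#   big   = vae.encode(image)    # pixels -> fresh latent
# (vae.decode / vae.encode stand in for the VAE Decode / VAE Encode
#  nodes in the workflow; they are placeholders, not a real API.)
```

The pixel route is lossy per round trip through the VAE, but the decoded image gives the resize algorithm real pixel detail to interpolate, which may be why re-encoding often refines better in practice.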
FWIW, I'd recommend using 'anything everywhere' to remove all those Model, Clip and VAE edges.
Thanks, I sometimes do on my own workflow, but for instruction videos I think the plain noodles work best.
Anything Everywhere + Big Context + set/get + your own custom node component built out of a bunch of reroutes connected one into the other (because not every type of input/output can be passed through Big Context).
Also, turning your groups into a single node component is really good.
@@PamellaCardoso-pp5tr I use various pipes etc but not for a demonstration video. I try and keep everything standard and not use too many unusual nodes.
Using MultiAreaConditioning gives me kinda bad results (poor quality).