LATENT Tricks - Amazing ways to use ComfyUI

  • Published 19 Mar 2023
  • Here are amazing ways to use ComfyUI. This node-based UI can do a lot more than you might think. Latent images in particular can be used in very creative ways. You can inject prompt changes. You can combine latent images into new results. You can stop the render after a number of steps and finish the rendering after changing the prompt, sampler, and settings. A world of possibilities.
    #### Links from the Video ####
    Join my Discord: / discord
    ComfyUI Projects ZIP: drive.google.com/file/d/1MnLn...
    ComfyUI Install Guide: • ComfyUI - Node Based S...
    Support my Channel:
    / @oliviosarikas
    Subscribe to my Newsletter for FREE: oliviotutorials.podia.com/new...
    How to get started with Midjourney: • Midjourney AI - FIRST ...
    Midjourney Settings explained: • Midjourney Settings Ex...
    Best Midjourney Resources: • 😍 Midjourney BEST Reso...
    Make better Midjourney Prompts: • Make BETTER Prompts - ...
    My Facebook PHOTOGRAPHY group: / oliviotutorials.superfan
    My Affinity Photo Creative Packs: gumroad.com/sarikasat
    My Patreon Page: / sarikas
    All my Social Media Accounts: linktr.ee/oliviotutorials
  • Howto & Style

Comments • 168

  • @DJVARAO
    @DJVARAO 1 year ago +12

    Man, you are a wizard. This is a very advanced use of SD.

  • @andresz1606
    @andresz1606 10 months ago +3

    This video is now #1 in my ComfyUI playlist. Your explanation at 17:50 of the LatentComposite node inputs (samples_to, samples_from) is priceless, as is the rest of the video. Looking forward to asking some questions in your Discord channel.
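
    For readers curious what samples_to and samples_from do: a minimal Python sketch, assuming a torch-style latent tensor (batch, channels, height/8, width/8), of what a latent composite conceptually amounts to. ComfyUI's actual node also supports feathering, so this is illustrative only:

        import torch

        def latent_composite(samples_to, samples_from, x, y):
            # Pixel coordinates map to latent coordinates at 1/8 scale.
            x, y = x // 8, y // 8
            out = samples_to.clone()
            _, _, h, w = samples_from.shape
            # Paste the source latent over the destination at (x, y);
            # assumes the source fits inside the destination bounds.
            out[:, :, y:y + h, x:x + w] = samples_from
            return out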

  • @bjornskivids
    @bjornskivids 11 months ago +4

    Ok, this is awesome. You inspired me to make a 4-sampler comparison-bench which lets me get 4 example pics from one prompt when exploring different engines. It makes sampler/settings comparisons simple and I can crank out sample pics at a blistering pace now. Thank you :)

  • @jorgeantao28
    @jorgeantao28 1 year ago +20

    This is an amazing tool for professional artists. The level of detail you can achieve reminds me of Photoshop... AI art is not a threat to artists, but rather a complement to their work.

  • @JimmyGunawan
    @JimmyGunawan 1 year ago

    Great tutorial on ComfyUI! Thanks Olivio~ I just started using this today; reloading the "workflow" really helps with efficiency.

  • @mrjonlor
    @mrjonlor 1 year ago +18

    Very cool! I’ve been playing with latent composition in ComfyUI for the past couple days. It gets really fun when you start mixing different art styles within the same image. You start getting some really wild effects!

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +3

      Thank you. That's a great idea too. I was thinking about using different models in the same image, but then thought that might be too complex for this video.

  • @lovol2
    @lovol2 1 year ago +1

    Okay, I'm convinced. I will be trying this out. Fantastic demo!

  • @JonathanScruggs
    @JonathanScruggs 1 year ago +3

    The more I play with it, the more I'm convinced that this is the most powerful UI to Stable Diffusion there is.

    • @user-hz4fz5qy7l
      @user-hz4fz5qy7l 10 months ago

      The moment Houdini MLOPs is updated to be able to use LoRAs and LyCORIS, it's going to be the most powerful.

  • @___x__x_r___xa__x_____f______

    Found this particular course super inspiring. Makes me keen to experiment

  • @AllYouWantAndMore
    @AllYouWantAndMore 1 year ago

    I asked for examples, and you delivered. Thank you.

  • @LICHTVII
    @LICHTVII 10 months ago

    Thank you! Hard to find a no-BS explanation of what does what; this helps a lot!

  • @MrGTAmodsgerman
    @MrGTAmodsgerman 1 year ago +14

    The node system can make things complicated, but it really empowers the potential of a lot of stuff. Seeing this applied to AI pictures gives them more meaning and control, so they could be considered artistic again, since with ComfyUI the human takes huge control.

    • @KyleandPrieteni
      @KyleandPrieteni 1 year ago +1

      YES, have you seen the custom nodes on Civitai? They are nuts, and you get even more control.

    • @MrGTAmodsgerman
      @MrGTAmodsgerman 1 year ago +1

      @@KyleandPrieteni Actually no, I haven't. Thanks for the info.

  • @andrewstraeker4020
    @andrewstraeker4020 1 year ago

    Thank you for your excellent explanations. I especially appreciate your excellent English, which is understandable even for non-native speakers.😸
    Every time I watch your videos, I want to run and experiment. New ideas and possibilities every time. 😺👍👍👍

  • @ColePatterson-mw2gy
    @ColePatterson-mw2gy 7 months ago +1

    Whoa! Jeez! This looks complicated. All I searched for was how to use prompt weights. I can handle anything: algebra, calculus, etc., but when it comes to node editors, I check out ASAP.

  • @caiubymenezesdacosta5711

    Amazing, I will try it this weekend. As always, thanks for sharing with us.

  • @Dizek
    @Dizek 1 year ago

    Wow, discovered Comfy just recently, but it is more than it looks. You can even feed the same prompt into all the available samplers to test which ones work best with the style you are going for.

  • @workflowinmind
    @workflowinmind 1 year ago +12

    Great examples. In the first one you should pipe the primary latent into the subsequent ones, as you are over-stepping at each image (the last image has all the previous steps in your example).

    • @bonecast6294
      @bonecast6294 7 months ago

      Could you possibly explain it in more detail or provide a node setup? Is his node setup not correct?

  • @remzouzz
    @remzouzz 1 year ago +3

    Amazing video! Could you also make a video where you go more in depth on how to install and use ControlNets in ComfyUI?

  • @Spartan117KC
    @Spartan117KC 7 months ago

    Great video as always, Olivio. I have one question: you say at 9:28 that "you can do all of this in just one go". Were you referring to the 4x upscale with less detail that you had already mentioned, or were you referring to another way to do the latent upscale workflow with better results and fewer steps?

  • @METALSKINMETAL
    @METALSKINMETAL 1 year ago

    Excellent, thanks so much for this video!

  • @stephancam91
    @stephancam91 1 year ago +10

    Awesome video - very educational - thank you! I've been meaning to get ComfyUI installed - just have to find the time. (I swear, I'm having to update my AI skills weekly - it's nearly as time consuming as keeping up with Unreal Engine, lol).

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +3

      Thank you very much. ComfyUI is a blast to play with. This will suck up your hours like nothing 😅

    • @jeremykothe2847
      @jeremykothe2847 1 year ago +1

      The good news is it's easy to install. The "bad" news is that it really needs more functionality to be useful, but it has a lot of promise if it's extended. If they managed to get the community to write nodes for them...

    • @Mimeniia
      @Mimeniia 1 year ago +2

      Waaaaaaay easier and quicker than Auto1111 to install...but a bit more intimidating to use on an advanced level.

    • @stephancam91
      @stephancam91 1 year ago

      @@Mimeniia Thanks so much. I'm used to using node based programs (DaVinci Resolve + Red Shift). Hopefully, I'll be able to pick it up quickly! Just a matter of finding the time.

  • @wolfganggriewatz3522
    @wolfganggriewatz3522 10 months ago

    I love it.
    Do you have plans for more of this?

  • @CrimsonDX
    @CrimsonDX 10 months ago

    That last example was insane O_O

  • @rsunghun
    @rsunghun 10 months ago

    you are so smart and amazing!

  • @digitalfly73
    @digitalfly73 9 months ago

    Amazing!

  • @rakly3473
    @rakly3473 6 months ago

    This UI needs some Factorio influence, it's so chaotic!

  • @panzerswineflu
    @panzerswineflu 1 year ago

    In a sea of AI videos I started skimming through, this one got my subscribe. Now if only I had a rig to play with this stuff.

  • @TSCspeedruns
    @TSCspeedruns 1 year ago +1

    ComfyUI is amazing, I love it

  • @enriqueicm7341
    @enriqueicm7341 5 months ago

    It was useful!

  • @darmok072
    @darmok072 1 year ago +1

    How did you keep the image consistent when you did the latent upscale? When I try your wiring, the face of the upscaled image is quite different.

  • @alexlindgren1
    @alexlindgren1 7 months ago

    Nice one! I'm wondering if it's possible to use ComfyUI to change the tint of an image. Let's say I have an image of a living room, and I want to change the tint of the floor in the living room based on an image I have of another floor; how would you do that?

  • @petec737
    @petec737 6 months ago +2

    The latent upscaler is not adding more details as you mentioned; it's using the nearest pixels to double the size (as you picked), similar to how you'd resize an image in Photoshop. The KSampler is the one that adds more details. That's a confusion I see many people making. For best quality you don't upscale the latent; you upscale the image with the UpscaleModelLoader, then pass it through the KSampler.

    • @bobbyboe
      @bobbyboe 5 months ago

      I wonder, then, what latent upscaling is useful for?
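
      To illustrate the point above with a minimal sketch (assuming a torch-style latent tensor): the nearest-neighbor resize itself invents nothing; any added detail in the final image comes from the KSampler denoising the enlarged latent afterwards:

          import torch
          import torch.nn.functional as F

          # A 1x4x64x64 latent, i.e. a 512x512 image in latent space.
          latent = torch.randn(1, 4, 64, 64)

          # "Upscale Latent" with the nearest-neighbor method just repeats values;
          # no new information is created by the resize itself.
          upscaled = F.interpolate(latent, scale_factor=2, mode="nearest")
          print(upscaled.shape)  # torch.Size([1, 4, 128, 128]) -> a 1024x1024 image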

  • @roroororo7088
    @roroororo7088 1 year ago

    I like videos about this UI. Can you do examples of clothes changing, please? (It's harder and inpaint-like, but more friendly to use.)

  • @pratikkalani6289
    @pratikkalani6289 1 year ago

    I love ComfyUI; this has so many use cases. I'm a VFX compositor by profession, so I'm very comfortable with node-based UIs (I work in Nuke). I wanted to know: if we want to use ComfyUI as a backend for a website, can I run this on a serverless GPU?

  • @silentwindyou
    @silentwindyou 1 year ago

    This method seems similar to a sequence of [from:to:when] prompts in the WebUI, with the steps added up and an image output after each prompt's custom steps finish. Nice process!

    • @Mirko_ai
      @Mirko_ai 1 year ago

      Never heard about that in the WebUI. Is this possible? o.o

    • @silentwindyou
      @silentwindyou 1 year ago

      @@Mirko_ai Because [from:to:when] is also applied in latent space, the same logic applies, but the WebUI outputs the result from the last step, not after each [from:to:when] prompt, by default.
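
      For anyone unfamiliar with the syntax being discussed: a minimal sketch of how A1111-style [from:to:when] prompt editing behaves (the example prompt and step count are made up):

          # With the prompt "a [mountain:lake:0.5] landscape" and 30 sampling steps,
          # the text swaps halfway through; a "when" below 1 is a fraction of total steps.
          steps, when = 30, 0.5
          switch_at = int(steps * when)
          for step in range(steps):
              prompt = "a mountain landscape" if step < switch_at else "a lake landscape"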

  • @Kyvarus
    @Kyvarus 1 year ago +6

    The only thing I wish Comfy had is the ability to sequentially take frames from a video in order to use them as an OpenPose mask for each generation over time. Video generation would be amazing.

    • @Dizek
      @Dizek 1 year ago

      I'm new, but can you select a folder of images? You could pre-split the images and use them.

    • @Kyvarus
      @Kyvarus 1 year ago +1

      @@Dizek There is no way within ComfyUI to control the selection of images in sequential order, which means you can only have a static reference image; no one has bothered to program a way for us to load multiple images from a folder in order yet. Honestly, if I get the time this week I'll throw the script together. The add-ons for ComfyUI are very powerful, so it's likely not a big issue. The main issue is that we need the end-of-image-generation event to call the next image to load, which will require someone to go learn the API for the software.
      So even if you have some pre-split images in a folder, there is no way to call the next image in the folder by index.

    • @anuroop345
      @anuroop345 10 months ago

      @@Kyvarus We can save the workflow in API format, then use a Python script to input the images in sequence, save the output images, and later combine them.

    • @Kyvarus
      @Kyvarus 10 months ago

      @@anuroop345 Never heard of using the saved workflow files as an API format for Python scripts, but that sounds really quite nice. Something along the lines of: "Break the loaded video down into input frames, standardize the input frame size, decide the fps of the final render, then pick an appropriate number of frames; load up the workflow API, enter the input picture, model selection, LoRAs, prompt, etc., and run per image in a for loop over the number of images. Recompile an mp4 from the folder's image sequence; done?" I guess this could also be used to compile OpenPose videos from standardized characters acting in natural video, which would be great, allowing more natural posing without the artifacts over video of other ControlNet types.
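
      A minimal sketch of that idea, assuming a workflow exported in API format as workflow_api.json, frames already split into a folder that ComfyUI can read from its input directory, and a LoadImage node whose id here ("10") is hypothetical:

          import json
          import pathlib
          import urllib.request

          COMFY_URL = "http://127.0.0.1:8188/prompt"  # ComfyUI's default local endpoint
          workflow = json.loads(pathlib.Path("workflow_api.json").read_text())
          LOAD_IMAGE_NODE = "10"  # hypothetical: the id of the LoadImage node in your graph

          for frame in sorted(pathlib.Path("frames").glob("*.png")):
              # Point the LoadImage node at the next frame and queue the graph.
              workflow[LOAD_IMAGE_NODE]["inputs"]["image"] = frame.name
              req = urllib.request.Request(
                  COMFY_URL,
                  data=json.dumps({"prompt": workflow}).encode(),
                  headers={"Content-Type": "application/json"},
              )
              urllib.request.urlopen(req)
          # Finished frames land in ComfyUI's output folder, ready to recombine into a video.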

  • @MAKdotCZ
    @MAKdotCZ 9 months ago

    Hi Olivio, I wanted to ask if you could give me some advice. I have been using SD AUTOMATIC1111 so far, and now I am trying ComfyUI.
    My question: is there any way to push a prompt and settings to ComfyUI from the images generated by SD A1111?
    In SD A1111 I use PNG INFO and then "Send to TXT2IMG". Is there a similar way to do this in ComfyUI, but from an image I generated in SD A1111?
    Thank you very much, MAK

  • @HN-br1ud
    @HN-br1ud 1 year ago

    I enjoyed watching ~ thank you ^^

  • @MishaJAX_TADC5
    @MishaJAX_TADC5 1 year ago

    @OlivioSarikas Hi, can you explain: when I use Latent Upscale, my smaller image is converted to a different image. Do you have any idea how to fix it, or is there something wrong with what I'm doing?

  • @amva3455
    @amva3455 1 year ago

    With ComfyUI, is it possible to train my custom models, like DreamBooth? Or is it just for generating images?

  • @void2258
    @void2258 1 year ago

    Any way to make this variable? I ask because a well-known issue with this kind of repetition is accidental forgotten/mistaken edit breakage. When you have to edit in a bunch of different places, you can forget one or more, or make mistakes between them and break the symmetry. Being able to feed it "raw portrait...of a X year old Y woman..." and write the rest of the prompt once would make this easier to handle. Also, in theory, you could produce the latent WITHOUT the X and Y filled in and add them at each step, feeding them all from a single latent instead of chaining, though I'm not sure that would work. Similar to the second thing you did, but more automatic.
    I am speaking from a coder's perspective and am not sure if any of this is sensible or not.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      ComfyUI is still in early development. Most nodes need more inputs/outputs, and more nodes are needed. So for now things are rather complex, and you need duplicate nodes for every new step you want to do, instead of being able to route things through the same node several times. I'm not sure how you imagine combining different latent images without the x/y setting: if the latent image you provide is smaller, it will stick to the top left corner; if it is not smaller, it will simply replace the latent image you put it on top of. So it needs to be smaller, as there is no latent image weight that can be used to mix the strength, and no mask to mask it out - that would be a different process (the one I showed before).
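
      A minimal sketch of the templating idea from the question above (the template text and values are made up; in ComfyUI each finished string would feed one CLIPTextEncode node):

          # Write the shared prompt once and fill in the variable parts per image.
          TEMPLATE = "raw portrait photo of a {age} year old {ethnicity} woman, high quality"

          for age, ethnicity in [(25, "Kenyan"), (30, "Japanese"), (40, "Mexican")]:
              # One finished prompt per image, edited in a single place.
              print(TEMPLATE.format(age=age, ethnicity=ethnicity))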

  • @lisavento7474
    @lisavento7474 5 months ago

    Anything yet to fix wonky faces in DALL-E crowds? I have groups of monsters! I've tried prompts like "asymmetrical, detailed faces" and it did a little better, but I have perfect images except for the crowds in the background that I need to fix.

  • @ryanhowell4492
    @ryanhowell4492 1 year ago

    Cool Tools

  • @MaximusProxi
    @MaximusProxi 1 year ago

    Hey Olivio, hope your new PC is up and running now!

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Yes, it is. It really was the USB stick that was needed. Didn't connect the front RGB yet though ;)

    • @MaximusProxi
      @MaximusProxi 1 year ago

      Glad to hear! Enjoy the faster rendering :)

  • @benjamininkorea7016
    @benjamininkorea7016 1 year ago

    Having a lineup of beautiful girls of different races like this is going to make me fall in love about 10 times per hour I think. Fantastic work as always!

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Thank you very much. Yes this is great to show the beauty of different ethnicities :)

  • @Avalon19511
    @Avalon19511 1 year ago

    Olivio, a question: how would I go about putting my face on an image without training (besides Photoshop, of course), or is training the only way?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Why not do the LoRA training? It's very easy and fast.

    • @Avalon19511
      @Avalon19511 1 year ago

      @@OlivioSarikas Does A1111 recognize image links like midjourney?

  • @NoPr0gress
    @NoPr0gress 10 months ago

    thx

  • @kennylex
    @kennylex 1 year ago +2

    I see that you use things like "RAW Photo", "8k uhd DSLR" and "High quality", which I often say are useless prompts that do not do what folk think they will do. RAW is just uncompressed data that can later be converted, so you do not want that in an image, since it gives flat colors; what folk often want is a style like "Portrait Photo", which is often a color setting in cameras. BUT!
    My idea is: could you use the nodes to make side-by-side images where "RAW photo" is compared with an image that does not have that prompt, or replaces it with other prompts like "Portrait photo", "warm colors" and "natural color range"? With nodes you can make sure you get the same seed and that the results are made at almost the same time.
    And when you write "high quality", what do you want? The AI cannot make higher graphical quality than it is capable of, but I guess it changes something, since so many use that prompt. So could you do some tests to see what the most popular prompts do, like: is "Trending on Artstation" better than "Trending on Flickr" or "Trending on e621"?
    Edit: This is a tip for all: rather than writing "African woman", use a nationality like "Kenyan woman" to get that nice skin tone and great-looking females. If you take nations down south, you get that rounder face on males that can give a rather cool look; nations in north Africa have a lighter skin tone and often an Arabic or ancient Roman look.

  • @LeKhang98
    @LeKhang98 1 year ago

    Awesome channel. I have 2 questions, please help:
    - Is there any way to import real-life images of objects (such as cloth, a watch, a hat, a knife, etc.) into SD?
    - Do you know how to keep these objects consistent? I know about making consistent characters, but that works for face and hair only, while I want to know how to apply it to objects. (Example: 1 knife with multiple different girls and different poses.)

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Thank you :)
      - Yes, you can do that in ComfyUI with the image loader.
      - If you want a model that is trained on an object, you would need to create a LoRA or DreamBooth model.

    • @krz9000
      @krz9000 1 year ago +1

      Create a LoRA of the thing you want to bring into your shot.

    • @LeKhang98
      @LeKhang98 1 year ago

      @@OlivioSarikas @Chris Hofmann Thank you. I'm not sure if it can work with clothes, though. I have some t-shirts and pants with logos, letters, or images on the front. Depending on the pose of different characters, the t-shirt, pants, and their images will change accordingly. That's why I'm hesitant to learn how to use AI tools, since I don't know if I could do it or if I should just hire a professional photographer and model to do it the traditional way. Anyway, I do believe that in the near future everyone will be able to do it easily. This is so scary & exciting.

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +1

    #### Links from the Video ####
    Join my Discord: discord.gg/XKAk7GUzAW
    ComfyUI Projects ZIP: drive.google.com/file/d/1MnLnP9-a0Pif7CZHXrFo-pAettc7KAM3/view?usp=share_link
    ComfyUI Install Guide: czcams.com/video/vUTV85D51yk/video.html

  • @DezorianGuy
    @DezorianGuy 1 year ago +3

    I appreciate your work, but can you make a video in which you share the basic working process? I literally mean a step-by-step guide. In your 2 released videos about ComfyUI, I barely understood what you were talking about or which nodes were connected to which (it looks like spaghetti world to me).
    If you could just create single projects from the start.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Hm... that could be an interesting idea. In the meantime, the best way to go about this is to look at A1111 and compare the individual parts to the nodes in ComfyUI, because they are often similar or the same. The Empty Latent Image, for example, is simply the size setting you have in A1111, and the KSampler is just the render settings in A1111, but with some more options.

    • @DezorianGuy
      @DezorianGuy 1 year ago

      @@OlivioSarikas I finally managed to replicate your project now; it was a bit confusing at first. Do those checkpoint files one can choose from provide different art styles?

    • @lovol2
      @lovol2 1 year ago

      I think if you've not used Automatic1111 before looking at this, your head will explode!
      It will be worth the time and effort to install Automatic1111; then you will be familiar with all of the terms he is using here, and also see the power in all the mess and chaos of the little lines flying all over the place.

  • @miasik1000
    @miasik1000 1 year ago

    Is there a way to set the upscale factor? 1.5, 2, ...

  • @benjamininkorea7016
    @benjamininkorea7016 1 year ago

    I have a question-- in A1111, I can inpaint masked only. I like this, because I can inpaint on a huge image (4K) and get a small detail added but it doesn't explode my GPU.
    Can you think of any way to do this in ComfyUI?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      I'm not sure if ComfyUI has "mask-only" inpainting yet.

    • @Max-sq4li
      @Max-sq4li 1 year ago

      You can do it in auto1111 with (only mask) feature

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      @@Max-sq4li That's what he said, but the question was how to do it in ComfyUI.

    • @benjamininkorea7016
      @benjamininkorea7016 1 year ago

      @@OlivioSarikas Since I watched this video and started using ComfyUI more, I figured you'd have to make the mask in Photoshop (or something) anyway, so it probably wouldn't be worth it until they can integrate a UI mask painter.
      So I tried working with a 4K image and using the slice tool in Photoshop instead of a mask, just exporting the exact section I want to work on. Then I can inpaint what I want, but with the full benefit of the entire render area.
      Working on just a face in 1024x1024 makes things look so amazing, and the output image snaps perfectly back into place in Photoshop. At that resolution, I can redo each eye, or even parts of the eye, with very high accuracy.

  • @jeffg4686
    @jeffg4686 3 months ago

    Trying to understand what a latent consists of for a previous image.
    Like, I can see that somehow it's still using the seed or something.
    Is the seed itself stored in the latent or something?
    Any thoughts?
    Update: never mind on this, actually. I see that it likely just holds that as part of the "graph", and the next one has access to it because it's part of the branch that led up to it (guessing).

  • @DemonPlasma
    @DemonPlasma 10 months ago

    Where do I get the RealESRGAN upscaler models?

  • @PaulFidika
    @PaulFidika 8 months ago

    Olivio woke up this morning and chose violence lol

  • @VisualWebsolutions
    @VisualWebsolutions 1 year ago

    :) looks familiar :D

  • @teslainvestah5003
    @teslainvestah5003 1 year ago

    pixel upscale: the upscaler knows that it's upscaling white rounded rectangles.
    latent upscale: the upscaler knows that it's upscaling teeth.

  • @paulopma
    @paulopma 1 year ago

    How do you resize the SaveImage nodes?

  • @maadmaat
    @maadmaat 1 year ago +1

    I love this UI.
    Can you also do batch processing and use scripts with this already?
    Creating animations with this workflow would be really convenient.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Thank you. Not yet, unless you build a series of nodes. I really hope batch processing and looping are coming soon.

  • @Ibian666
    @Ibian666 1 year ago

    How is this different than just rendering the same image with a single word changed? What's the benefit?

  • @beardedbhais4637
    @beardedbhais4637 1 year ago

    Is there a way to add Face restoration to it?

  • @im5341
    @im5341 10 months ago

    5:30 I used the same flow, but instead of KSampler I put KSampler Advanced at the second and third stages. 1st KSampler: steps: 12 | 2nd KSampler Advanced: start_at_step: 12, steps: 20 | 3rd KSampler Advanced: start_at_step: 20, steps: 30
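
    A minimal sketch of how those staged settings divide one denoising schedule (illustrative arithmetic only, not ComfyUI code):

        # Each stage resumes where the previous one stopped, so the three samplers
        # together behave like a single 30-step run with two intermediate outputs.
        stages = [
            ("KSampler",          0, 12),   # stage 1: steps 0-12
            ("KSampler Advanced", 12, 20),  # stage 2: start_at_step 12, end at 20
            ("KSampler Advanced", 20, 30),  # stage 3: start_at_step 20, end at 30
        ]
        for name, start, end in stages:
            print(f"{name}: denoises steps {start}-{end} of 30")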

  • @BrandinoTheFilipino
    @BrandinoTheFilipino 6 months ago

    Where can I get the Deliberate_v2 model?

  • @ajartaslife
    @ajartaslife 1 year ago

    Can ComfyUI batch img2img for animation?

  • @matthewjmiller07
    @matthewjmiller07 9 months ago

    How can I set up these same flows?

  • @dxnxz53
    @dxnxz53 28 days ago

    You're the best!

  • @Silversith
    @Silversith 1 year ago

    The latent upscale randomised the output too much from the original for me, especially if it's a full-body picture. I've output the latent upscale before sending it through the model again, and it basically just reduces the quality more before reprocessing it. I ended up just passing it through the model twice to upscale it.

    • @Silversith
      @Silversith 1 year ago

      Tomorrow I'm gonna try tweaking the code a bit or including some custom nodes to pass the seed from one to the next so it stays consistent and does a proper resize fix

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      In latent upscale you have different upscale methods. Give them a try and see if that changes your result to what you need.

    • @Silversith
      @Silversith 1 year ago

      @@OlivioSarikas I submitted a pull request that passes the seed value through to the next sampler. Seems to work well 🙂

    • @Dizek
      @Dizek 1 year ago

      @@OlivioSarikas Or better, create different nodes with all the available upscale methods and try them all at once.

  • @arnaudcaplier7909
    @arnaudcaplier7909 1 year ago

    Hi @OlivioSarikas, let me share what I think: I have been working in the domain of creative intelligence (originally CNN-based) since 2017, and your insights are solving problems that I have been facing for years... just crazy stuff ❤‍🔥, you are an absolute genius!
    Great respect for your work. Thank you for the insane value you share with us 🙏

  • @chinico68
    @chinico68 1 year ago

    Will it run on Mac??

  • @redregar2522
    @redregar2522 9 months ago

    For the 4-girls example I have the issue that the face in the first image is always messed up (the rest of the images are fine). Anyone have an idea, or the same issue?

    • @OlivioSarikas
      @OlivioSarikas  9 months ago +1

      Might be because you render it low-res. If you upscale it, it should be fine. Or try more steps on the first image, or a loop on the first image to render it twice.

  • @mb0133
    @mb0133 1 year ago

    Have you figured out how to redirect the models folder to your existing Automatic1111 model folder? That's way too many GB of duplicate files.

    • @benjaminmiddaugh2729
      @benjaminmiddaugh2729 1 year ago

      I don't remember what Windows calls it, but the Linux term you want is "symlink." You can make a virtual file or folder that points to an existing one (a "soft" link), or you can make it so the same file/folder is in multiple places at once (a "hard" link); soft links are usually what you want, though.
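
      A minimal sketch of the soft-link approach in Python (the paths are hypothetical; on Windows, creating symlinks may require admin rights or Developer Mode, and the cmd equivalent is mklink /D; newer ComfyUI builds also ship an extra_model_paths.yaml example for the same purpose):

          import os

          # Point ComfyUI's checkpoint folder at the existing A1111 models so the
          # multi-GB files exist only once on disk. The link path must not exist yet.
          a1111_models = r"C:\stable-diffusion-webui\models\Stable-diffusion"
          comfy_models = r"C:\ComfyUI\models\checkpoints"

          os.symlink(a1111_models, comfy_models, target_is_directory=True)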

  • @toixco1798
    @toixco1798 1 year ago

    It's the best UI, but I don't think its creator is the kind of person to seriously maintain it; I think he did it more for fun or curiosity before surely moving on.

  • @linhsdfsdfsdfds4947
    @linhsdfsdfsdfds4947 1 year ago

    Can you share this workflow?

  • @mickeytjr3067
    @mickeytjr3067 1 year ago

    One of the things I read in the tutorial is that "bad hands" doesn't work, while (hands) in the negative will remove bad hands.

  • @Vestu
    @Vestu 8 months ago

    I love how your ComfyUI setup is not overly OCD but a "controlled noodle chaos" like mine are :)

  • @digwillhachi
    @digwillhachi 1 year ago

    Not sure what I'm doing wrong, as I can only generate 1 image; the others don't generate 🤷🏻‍♂

  • @animelover5093
    @animelover5093 1 year ago

    sigh .. not available on Mac at the moment : ((

  • @dax_prime1053
    @dax_prime1053 1 year ago

    This looks ridiculously complex and intimidating.

  • @LouisGedo
    @LouisGedo 1 year ago

    👋

  • @GiggaVega
    @GiggaVega 1 year ago

    Hey Olivio, this was an interesting tool, but I really don't like the layout; it's too all over the place. Sorry to spam you, but I tagged you in a video I just uploaded to YouTube about why I don't feel real artists have anything to worry about regarding AI art replacing them. Feel free to leave your thoughts on that topic. Maybe a future video?
    Cheers from Canada, bro.

  • @blisterfingers8169
    @blisterfingers8169 1 year ago +1

    So there's no tools for organizing the nodes yet, I take it? xD

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Not sure what you mean by that. You can move them and group them if you want.

    • @jurandfantom
      @jurandfantom 1 year ago +1

      Even a simple spread-out would help. I think that distinguishes people who have worked with node-based systems from those without experience.
      But I see you have a midpoint to spread the connector, so it's not that bad.

    • @blisterfingers8169
      @blisterfingers8169 1 year ago

      @@OlivioSarikas I use node systems like this a ton and I've never seen a messier example. No big deal, just makes me assume the organisation tools aren't quite there yet.

  • @MisakaMikotoLuv
    @MisakaMikotoLuv 1 year ago

    tfw you accidentally put the bad tags into the positive input

  • @dsamh
    @dsamh 1 year ago

    Olivio, try Bantu, Somali, or another specific culture or people rather than referring to races by color. It gives much better results.

  • @kallamamran
    @kallamamran 1 year ago

    More like UnComfyUI ;)

  • @sdjkwoo
    @sdjkwoo 9 months ago

    24,000 STEPS? MY PC STARTED FLYING, IS THAT NORMAL??

  • @zengrath
    @zengrath 1 year ago +1

    Ugh, another app that works on Nvidia or CPU only. My 7900 XTX would really like to try some of these new things.

    • @scriptingkata6923
      @scriptingkata6923 1 year ago

      Why should new stuff be using AMD, lol?

    • @jeremykothe2847
      @jeremykothe2847 1 year ago +2

      When you bought your 7900 xtx, were you aware that nvidia cards were the only ones supported by ML?

    • @zengrath
      @zengrath 1 year ago

      @@jeremykothe2847 Everything I read when doing my research indicated it also worked with AMD, at least on Linux, with support coming to Windows. Even if that weren't the case, I still wouldn't support Nvidia, with how they are treating their business partners the same way Apple does these days: forcing them to say only good things about them or withholding review samples, which they have already done over and over, not to mention the things they are doing to their manufacturing partners as well. What I didn't know before buying the card is that the 7900 XTX doesn't work even on Linux, and it appears AMD could be months or more away from updating ROCm for RDNA3. All the AMD fanboys acted like it wasn't an issue at all; I've even had long arguments with AMD users claiming I just don't know what I am doing, yet I've now spoken with several developers trying to walk me through getting their stuff working on AMD on Linux, and sadly they confirm we have to wait.

      At least on Windows, a program called Shark is making incredible strides in tasks like image generation and even language models, and hopefully it's only a short time before most common features work and can compete with platforms that only support Nvidia. It makes me wonder: if they can do it, why can't others, and why do they continue to use only protocols that support Nvidia? Any time something comes out that uses more open platforms for AMD, Nvidia users can also use it with no issue. How is it fair that AMD consumers can't touch products made exclusively for Nvidia, but Nvidia users can go the other way?

      It's the same stuff with Steam's Index/Oculus vs Meta: Meta buys up all the major VR devs and kills the VR market by segmenting it to death. They lied when they bought the crowd-sourced open Oculus tech, saying they would keep it open and not require Facebook accounts, but they did anyway, and the Kickstarter backers can't do anything about it now; Facebook has too much money and can do whatever they want. Yet when games come out on Steam only, people with Meta or any other headset can come to Steam and play them with no issue. It's incredibly unfair, and the only reason this keeps happening is that the public allows it. It's the public's fault when these horrific companies end up forming monopolies and taking over the world one day, as described in most sci-fi novels.

    • @GyroO7
      @GyroO7 1 year ago +4

      Sell it and buy an Nvidia one.
      AMD is useless in anything other than gaming (and even there it has poor ray tracing and no DLSS).

    • @zengrath
      @zengrath 1 year ago

      @@GyroO7 Not true at all; I really hate fanboys on both sides who lie. You're no different from Republicans and Democrats who fight over bullshit and constantly lie and twist facts. I have been using ray tracing with no issue, and AMD doesn't have DLSS; they have FSR, which works very well, with FSR 3.0 coming soon that will work very similarly to DLSS. And I get to enjoy the fact that I am not part of the crowd ignorantly supporting Nvidia's hateful practices. I was an Nvidia fan for about 20 years until what they have done in just the past few years; clearly you haven't been keeping up. Let me guess: you probably also love Facebook's Meta and love Apple products too. You like companies who tell you how to think and how to use their products, and if you don't like it, they tell you you're stupid and put any reviewers who don't praise them like gods on their ban lists.

  • @arturabizgeldin9890
    @arturabizgeldin9890 8 months ago

    I'll tell you what: you're a natural born tutor!

  • @michaelphilps
    @michaelphilps 9 months ago

    Yes indeed!

  • @blacksage81
    @blacksage81 9 months ago +1

    Yeah, it isn't easy to get Black people by calling it that way. I've found that using chocolate- or mocha-colored skin and other brown colors will get the skin; in my limited testing, the darker colors help the characters gain more African features.

  • @str84wardAction
    @str84wardAction 1 year ago

    This is way too advanced to process what's going on here.

  • @jasonl2860
    @jasonl2860 1 year ago +1

    Seems like img2img; what is the difference? Thanks.

  • @spider853
    @spider853 1 year ago

    I don't really understand how LatentComposite works without a mask.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      It combines the two noises, and since they are both still just noise, they can melt into a final image in the later render. However, because the noise of your character has a different background, you will often see that the background around the character differs a bit from the background of the rest of the image.

    • @spider853
      @spider853 1 year ago

      @@OlivioSarikas I see, it's kind of an average; it would benefit from a mask.

  • @Noum77
    @Noum77 1 year ago

    This is too complicated

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Take the things I showed in the video and simplify them. Start by just rendering a simple AI image with a prompt, and then you can add things to that.

  • @akratlapidus2390
    @akratlapidus2390 1 year ago

    In Midjourney you won't be able to show a black woman, because the word "black" is banned. It's one of the reasons why I pay so much attention to your advice about Stable Diffusion. Thanks!

    • @hfycentral
      @hfycentral 1 year ago

      That's not entirely true. I use it all the time.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Stop spreading misinformation. I just tried "black woman --v 5" and it worked perfectly.

  • @Kaiya134
    @Kaiya134 1 year ago

    No disrespect to your work, but the concept itself is just sickening. These pictures are basically a window into the future of webcam filters. Our life is rapidly becoming a digital shitshow.

  • @user-kt7uz9xc5m
    @user-kt7uz9xc5m 1 year ago

    Can you download another picture, not connected to cyberpunk, let's say a "fatima diame" photo, and make a kind of 50% correlation so your character changes in some rational way: becomes a Black woman athlete with a fantastic body, but in a cyberpunk view?

  • @user-kt7uz9xc5m
    @user-kt7uz9xc5m 1 year ago

    They can do all these changes to videos too, right? Changing faces, emotions, etc. 😂 In the CIA etc., as media wars.

  • @user-kt7uz9xc5m
    @user-kt7uz9xc5m 1 year ago

    That is why Putin is always so unhappy on YouTube 😂

  • @nikolesfrances1532
    @nikolesfrances1532 6 months ago

    What's your Discord?