Intelligent Image
United States
Joined March 6, 2024
Welcome to Intelligent Image where I am exploring the intersection between AI art and digital painting. My goal is to fuse the AI generation process with traditional 2D and 3D image creation methods. I wish to solve the major issues that prevent AI from being integrated into a creative workflow, mainly issues of creative control and image authorship.
Struggling with PONY DIFFUSION? Here's Why
Today, we will be looking at how to get the best quality images from models based on Pony Diffusion V6 XL. I am demonstrating in ComfyUI, but these tips apply to all Stable Diffusion interfaces.
Pony Diffusion V6 XL: civitai.com/models/257749/pony-diffusion-v6-xl
Scores:
Enter this entire string of text for every model based on Pony Diffusion V6 XL:
score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up,
This is not as important on many merged models, but still has an effect.
Style Source:
source_pony
source_furry
source_cartoon
source_anime
Content Rating:
rating_safe
rating_questionable
rating_explicit
Write your prompt like this:
score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, Describe the stuff you want to see, source_anime, rating_safe
You can change the source and rating of course.
Try these settings:
Resolution: 1024 x 1024 px (or other supported SDXL resolutions)
Clip Skip: 2 or -2 depending on the software (“CLIP Set Last Layer” node in ComfyUI)
VAE: sdxl_vae.safetensors huggingface.co/stabilityai/sdxl-vae
Sampler: Euler A
Scheduler: Karras
Steps: 25
CFG: 6 or 7
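If you script your generations (for example through ComfyUI's API), the prompt template above is easy to wrap in a small helper. This is a sketch with hypothetical names, not part of any official tool; it just composes the string in the order described above (score tags, subject, source, rating):

```python
# Hypothetical helper: builds a Pony Diffusion V6 XL prompt following the
# template "score tags, subject, source_<x>, rating_<y>" described above.
SCORE_TAGS = "score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up"

SOURCES = {"pony", "furry", "cartoon", "anime"}
RATINGS = {"safe", "questionable", "explicit"}

def pony_prompt(subject: str, source: str = "anime", rating: str = "safe") -> str:
    """Return the full prompt string for a Pony-based model."""
    if source not in SOURCES:
        raise ValueError(f"unknown source: {source}")
    if rating not in RATINGS:
        raise ValueError(f"unknown rating: {rating}")
    return f"{SCORE_TAGS}, {subject}, source_{source}, rating_{rating}"

print(pony_prompt("a knight standing on a cliff at sunset"))
```

The resulting string goes into the positive prompt of whichever interface you use; you would still set the sampler, steps, and CFG in that interface as listed above.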
------------------------------------------------------------------------------------------------------------------------
Music by CreatorMix.com
Views: 2,204
Videos
(KRITA AI) BEGINNERS guide to KRITA
7K views · 1 month ago
Hey everybody! Today I'm going to show you the tools and options within Krita you are most likely to use when creating images with the Stable Diffusion Generative AI plugin. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Music by CreatorMix.com 0:00 Intro 0:25 Interface 8:25 Inpainting/Making Selections 14:27 Upscale 15:55 Face Refine/Transform/Transparency Masks 23:21 ...
(KRITA AI) STEP-BY-STEP Using Stable Diffusion in Krita
7K views · 2 months ago
Hey everybody! Today I am going to go over the process of actually creating something with the Generative AI plugin for Krita. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Models: VXP: civitai.com/models/311157/vxp-xl-hyper Music by CreatorMix.com
(KRITA AI) The NEW CONTROLNETS are a BIG DEAL!
7K views · 3 months ago
Update to the Stable Diffusion Plugin for Krita. New ControlNets and settings. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Update Instructions: github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#how-do-i-update-to-a-new-version-of-the-plugin Installation Instructions: www.interstice.cloud/plugin Samplers Documentation: github.com/Acly/krita-ai-diffusion/wiki/Sampl...
(KRITA AI) INTRO to Stable Diffusion for Krita PART 2
14K views · 3 months ago
This is part two of my complete introduction to the Generative AI for Krita plugin where I go over the tools and features. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Installation Instructions: www.interstice.cloud/plugin Required Models and Nodes: github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup CivitAI: civitai.com/ Music by CreatorMix.com
(KRITA AI) INTRO to Stable Diffusion for Krita PART 1
43K views · 3 months ago
This is part one of my complete introduction to the Generative AI for Krita plugin. PART 2 HERE: czcams.com/video/ziXTE6mC_38/video.html Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Installation Instructions: www.interstice.cloud/plugin Required Models and Nodes: github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup CivitAI: civitai.com/ Music by CreatorMix.com
Painting with Stable Diffusion (Speed Painting in Krita)
671 views · 5 months ago
Welcome to Intelligent Image where I am exploring the intersection between AI art and digital painting. This is a speedpaint combining traditional digital painting techniques and render passes using Stable Diffusion with ComfyUI. Please see my other videos for a more in-depth look into the process of rendering your digital paintings with AI and incorporating AI into your digital painting workfl...
Painting with Stable Diffusion (Speed Painting in Krita)
911 views · 5 months ago
Welcome to Intelligent Image where I am exploring the intersection between AI art and digital painting. This is a speedpaint combining traditional digital painting techniques and render passes using Stable Diffusion with ComfyUI. Please see my other videos for a more in-depth look into the process of rendering your digital paintings with AI and incorporating AI into your digital painting workfl...
Fixing hands in Krita pls
thank you for the help, now I understand why you really need the VAE XD
Glad it was helpful!
Thank you for the video! The first minute literally described my situation and confusion :D
The same thing happened to me, so I knew it was probably a common experience 😆
Brilliant tutorial, much indebted!
Thanks! Glad it was helpful!
exactly what i needed, thanks good content
Glad it helped!
I haven't commented on a youtube video in ages...but your video was both informative and got me to laugh a few time. Thanks :)
Thanks! I really appreciate that!
If someone leaves nasty comments, don't listen to them. We definitely need MORE vids, if possible in real time <3
Thanks for coming to my defense! I have actually learned to enjoy the haters 😆 They just sound silly to me. I have a lot more videos in the works including more process videos!
@@IntelligentImage-sl7uf <3 I'm sorry, what do you mean, in the works? :D I've checked the links, no videos :DD
Sorry, I mean I am working on them and will post them soon :)
@@IntelligentImage-sl7uf Cool, I'm waiting for them. Dunno if an RTX 3060 will be good for AI Krita. I had an AMD card, but it's hard to configure, so I ordered a budget one <3 I've been trying to learn CG drawing for a long time, but it's hard. Maybe this will help me :DD
Hi, good job, congrats!!! I have a question about creating animations with Pony models in ComfyUI using AnimateDiff. No problem creating good images, but it seems that Pony models are not well suited to producing video or animation. Results are poor, often blurry, using Clip Skip 2, CFG at least 7 or 8, and DPM or Euler A. Do you have any solution or workflow to suggest? Results seem better with realistic models than anime ones. Many thanks in advance for your answer.
Thanks! I'm not sure about the answer to your question. Each Pony based model will be different. Some might work better than others. I haven't done much with Animatediff, but thinking about your question has made me want to try it again. I have a few ideas on how to make it work. I'll see if I can work in making a tutorial about it if I can figure it out.
I can confirm. Some models like idéalpony or cinematicpony give correct results, but others, like autism for example, are bad. I'll try with other models and add LoRAs. Thanks again.
Your channel is amazing mate! Keep up with the good work! Im loving the way you are integrating AI in the workflow!
Thanks! I really appreciate it!
hey, mine's not greyed out.. and I checked the "AI Image Diffusion" in [settings> Configure Krita> Python Plugin Manager], yet I still couldn't find the AI in the dockers. could you please help me?
nevermind, I got it done.. haha
Do you have a direct download link for generative AI in Krita? Can you show me how? I'm having trouble connecting my generative AI. Thanks.
Region prompt is tough right ?😅
I'm working on a regional prompt tutorial right now and so far I haven't had trouble getting Pony based models to work with it. I have been having a little trouble wrapping my head around setting up the whole regional prompting thing in general though.
This was both helpful *and* really funny, great vid
Thanks!
And who exactly were you trying to impress? Any grandma, including you, can do this now. But in the community of illustrators and artists you are nobody. And you will stay nobody until you prove you can produce a good drawing yourself, without crutches.
I'm sorry they did this to you
Great, then throw out all your gear and go draw with charcoal in a cave. Show us you're a real man.
@@osieman What gear, exactly: paper and pencil? Let me tell the "know-nothing" a secret: in any craft you just have to try more than once, and of course the first attempt doesn't work out. If you're interested, we can pick apart any great artist together. A hundred to one they can draw decently with charcoal on a wall and with pencil on paper. But people like you teach others a "shortcut" instead of teaching them to learn properly.
@@Vlasteyob Not everyone has that gift (and don't start with "there is no talent, only hard work"; the work is there, and if the gift isn't and you keep torturing yourself, you'll end up in a psych ward). And this "shortcut", speaking as a programmer, is the only way. A coder who doesn't use ChatGPT is simply a fool. Tell Kim Jung Gi there's no such thing as talent; he simply died of overwork... Want the same? Go right ahead, together with your aRtists. I won't even start on the eyes: ask anyone who does this for a living how many retina operations they've had?!
@@Vlasteyob You're just not even 25 yet and you don't understand the price of success. If you have a chance to protect your health, don't be stupid, use it. If you're gifted and trained, it will only help you.
Wow, awesome, I will use this for my drawings too lul. BTW, if you are drawing many characters (for a manga cover with at least 4 characters at different angles), do you need to use the AI on each one in different layers? What do you recommend?
The "Regional Prompting" recently added to the plugin would probably be best for that. I am working on a tutorial about that. In the meantime, the developer has made a video demonstrating it: czcams.com/video/PPxOE9YH57E/video.html
Where can I see the generate panel? Can you show me its location?
I am assuming you mean you are having trouble getting the AI image generation docker to show up. If so: Restart Krita. Go to Settings ▸ Configure Krita ▸ Python Plugin Manager and make sure "AI Image Diffusion" is checked there. Open a new document. Enable the plugin docker from the menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere. If it is greyed out in the Python Plugin Manager, you might try downloading a release package; it seems to be a known issue: github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#plugin-is-grayed-out-in-python-plugins-manager Also make sure you have the latest version of Krita.
great video! Have you tried using embedding for the images? I'm not sure what I should put in prompt to activate the embedding.
Thanks! You can use embeddings like you normally would in ComfyUI. You would put the embeddings files in the embeddings directory of the ComfyUI installation Krita is using. In the prompt you would just put "embedding:filename" without the quotes to use the embedding. So for example, embedding:EasyNegative would go in the negative prompt.
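The `embedding:filename` syntax above is just a plain-text token in the prompt. As a hedged sketch (the filename `EasyNegative.safetensors` is only an example), a tiny helper could prepend that token to a negative prompt like this:

```python
# Sketch: reference a textual inversion embedding by filename using
# ComfyUI's "embedding:" prompt syntax (extension is dropped).
def with_embedding(negative_prompt: str, embedding_file: str) -> str:
    # "EasyNegative.safetensors" -> "embedding:EasyNegative"
    name = embedding_file.rsplit(".", 1)[0]
    token = f"embedding:{name}"
    return f"{token}, {negative_prompt}" if negative_prompt else token

print(with_embedding("blurry, low quality", "EasyNegative.safetensors"))
```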
Downright prophetic video. Just picked up Pony 2 days ago and then this drops lol. Thanks a lot.
Glad! I was afraid I was a little late to the game with this.
That rating explicit big L is hilarious editing!
Oh boy! you are doing it! can't wait to see a Krita + Pony
Yes, next is regional prompting plus pony 😃
@@IntelligentImage-sl7uf Can't wait to see that then! (subbed) I have already installed the standalone ComfyUI and have my models in there. Is it necessary to install ComfyUI again through the Krita plugin? Does the .yaml code method work to save space and clutter? I hope you will address this in your Krita tutorial.
You can connect it to your preexisting ComfyUI installation, but you will have to install the necessary components. Here is a link to what is required. github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup Sorry, I don't know what the .yaml code method is.
@@IntelligentImage-sl7uf no problem about not knowing the .yaml file method (check it out though it might help you. I had tried it when I installed Krita AI before to save space, but didn’t work out because I may have made some mistakes or something). Thanks for your reply!
@@IntelligentImage-sl7uf I don't have a local installation of ComfyUI besides the one that Krita installed using the add-on. I never cared to learn how to use Comfy; I jumped directly from Stable Diffusion to the Krita add-on.
2.6M images? The majority of those are NSFW images, I bet 😂😂😂
I should have specified that 20k images were hand scored while the actual dataset was much larger. I don't envy that task. I had to delete my browser history after making this video...
Exactly 50%. If you don't prompt against it or use img2img you're in for a wild unsafe ride.
v6 was trained on ~2.6M images, just as a quick note. Maybe one of the earlier versions was 20,000.
According to this article, around 20 thousand images were manually given quality scores. But yes, I should have been more specific that this wasn't the whole data set. civitai.com/articles/4248
@@IntelligentImage-sl7uf Ooohh, 20k were given quality scores. Interesting, learned something new, thank you!
The damn folder options don't show up for the Model Checkpoint? WHY? You didn't explain it!
Thanks a ton for going through the tools, I think this is the best integration of stable diffusion in any painting app I've seen. How would you outpaint with this though?
I mentioned outpainting very briefly in a previous video. Basically, expand the canvas out in the direction you want. Select the expanded area and choose "Expand" from the generation options. czcams.com/video/ziXTE6mC_38/video.htmlsi=UY_XBDKmOuGNbvxH&t=103
@@IntelligentImage-sl7uf cool ill try this. Again much appreciated for the work you put into your videos they've been really helpful. Keep it up!
Hi, thanks for making this video. Currently I'm having an issue with downloading the local server; I can't download it 100%. Is there any way to download all the files for this server separately? Then I will put those files in the right folders.
Yes, here is a link to a page with the required nodes and models. You can download them separately and put them in the appropriate folders. github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup
regionnal prompt tutorial soon :) ?
Yes! Will be working on that soon :)
thanks for sharing, great help.
Glad it was helpful!
Really useful tutorial. First time I watched it, I was overwhelmed. Then I played around with Krita, coming back to the video for specific advice. Works like a charm. Thanks. Need to point out that looking at the metadata of other AI artists on civitai is a gamechanger as well.
Thanks! I make things fast in my videos with the idea that this is a format that can be paused and rewound and the last thing you want to do when you rewatch something is try and find information amongst a bunch of hemming and hawing. And yes, looking at the metadata of particularly good generations is extremely helpful!
Well, unless it's a Krita-made, AI-assisted composition. Good luck finding metadata on those X)
Why do you have "refine" as an option?
When you lower the strength (denoising strength) below 100% the option changes from "Generate" to "Refine" because it is now resampling the current image instead of creating an entirely new one.
@@IntelligentImage-sl7uf Oh, thanks man :)
I love these videos. My content creation has been taken to a new level. Thank you!
Glad they have been helpful! That means a lot to me!
This is great, thank you!
Oh yeah! You're the one who gave me the idea to do this video! I meant to go back to your comment and tell you I had actually made it 😅
It's actually good for creating some stock images, which honestly I don't mind, and for art it's more like a patch tool. I still won't trust it to create art, so I use it for small adjustment tasks and the rest I still draw myself.
Yes, I think creating images for reference is probably the best use.
Great video and plugin :) One question: Is it possible to use a different network for the depth map creation? I would like to use DepthAnythingV2 or Midas for it, as they seem to be able to create better and more detailed depth maps.
Thanks! There is probably a way to change the depth map model, but it would probably require changing some code and I don't know how to do it.
@@IntelligentImage-sl7uf Thanks :) Turns out, after upgrading from my older version 1.16 to the newest one, they also improved the depth map creation. The 1.20 depth maps are more detailed by default in comparison to the 1.16 ones.
"military tactical kimono" lmao
I forgot about that. I'll have to use that idea again in the future. 🤔
Please, please, please make a video on post-processing and raw images. Without altering the raw photograph's subject or background, I only want to post-process it by transferring its style, colour grading, lighting, retouching, etc. from another edited image.
What it sounds like you want to do isn't really possible with AI image generation right now. There is no way to have the AI change the colors or lighting of an image without noticeably altering the subject. Also, Stable Diffusion only works with standard raster formats like PNG, not RAW. It sounds like you might be looking for something like this that would copy the settings from a preexisting image: light.princeton.edu/publication/neural-photo-finishing/ Something like that may already exist in photo editing software, or it probably will soon.
@@IntelligentImage-sl7uf But we can convert RAW images into PNG, or I can shoot in JPEG format. There is a colour transfer feature in Photoshop, but it's not exactly image style transfer.
Really enjoying your tutorials, very much indebted to your efforts!
Glad they are helpful!
Hey, just curious if you know what the difference between the "reference" and "composition" controlnets do? I've been messing around and they both seem to bring similar results.
Honestly, I'm not sure exactly. The Style, Composition, Reference and Face controlnets use different IP Adapter models. I am not sure which IP Adapter model it is using for "Reference" as opposed to "Composition". Actually, come to think of it, they could be using the same model with different settings. I wish there were better documentation about what is going on in the background.
@@IntelligentImage-sl7uf I did a bunch of testing and really the only thing I'm certain of is that reference tries to recreate the image. eg: If my reference image is a woman sitting in a car and I prompt something generic like "a 20 year old woman" then the results will be a woman sitting in a car. I'm guessing composition is meant to capture the look and feel as far as color, tone, shadows/highlights etc and styles is more for vector art, clay, photography, paper art etc. Much like if you were to use a1111 or comfy you can add a style that adds additional words to the prompt like "vector art, flat shading, cell shading" etc. Only an image to image version of that, idk.
Any idea how to use other model types in Krita, like upscalers, LoRAs with weights, and LyCORIS?
You should be able to add models by adding them to the appropriate directories in the ComfyUI installation you are connecting to. You can add loras directly in the text prompt or in the settings. Lycoris should work the same, but I'm not sure. github.com/Acly/krita-ai-diffusion/releases/tag/v1.10.0
@@IntelligentImage-sl7uf OIC, that answers my question. Thank you so much. I can finally use LoRA weights now.
Glad I could help!
Hi. After two hours of download time, it has gotten stuck, not installing Net Stencil, Hand Refiner, Adapter Face (SD1.5), and Control Extensions for SD XL. Nothing is moving beyond this!! What should I do now?? I followed your installation process completely!!!
Unfortunately it is a large download. You may want to try only downloading the basic components first and avoid SDXL. You should be able to go back and select more options to download later.
@@IntelligentImage-sl7uf Finally I was able to download the whole thing, but as you said it's a mammoth 40 GB plus download. As you suggested in your video, I checked all the boxes and downloaded everything. Now the problem is what exactly to omit, and how. Also, if possible, could you do a video based on the newer version, as some of the options have changed completely? For example, Juggernaut XL and Zavy Chroma XL are mammoth downloads at 7 GB each, and Zavy Chroma XL does not feature in your version. I only want to use this software at a very basic level to start with, and then upgrade if I excel at it. The question is how do I eliminate the stuff I shouldn't have at the moment? It's also very confusing from the GitHub documentation which Stable Diffusion to use, 1.5 or XL? The default is 1.5. I have unfortunately downloaded both!! I think there needs to be a very basic tutorial that guides complete beginners like me, sort of handholding through the basic requirements before moving to the next level. We are talking mammoth requirements, both in terms of hard disk space and, let's not forget, the graphics card. My process is taking too long at the moment; I'm probably using the wrong combination!!! I hope you understand what I am talking about?
If you only want to use the software minimally, you should probably just stick to SD 1.5 for the time being. You should be able to delete the SDXL models by going to the directory where the plugin installed ComfyUI. Going to the folder ComfyUI>models>checkpoints. There you should be able to delete those large SDXL models. You could do the same with the SDXL controlnets in the ComfyUI>models>controlnet directory. Delete the control nets labeled "XL". If you want, you could also just delete the entire server installation and reinstall it with minimal features if you are willing to re-download.
@@IntelligentImage-sl7uf Thanks So Much.
I’m a traditional artist and digital artist, and while I mostly disagree with AI art, I find this intriguing! Especially if you create art that you’ve drawn yourself. If you could draw 90% of the art and AI helps, then so be it! AI is the future and it’s a hard pill to swallow for some.
I agree! I am also a traditional/digital artist and I'm not at all sold on the idea of the legitimacy of AI art being created from prompts alone. Right now I'm thinking that AI is best put to use at the very beginning of the creative process for concepting, or at the very end for finishing.
@@IntelligentImage-sl7uf that’s actually wise! I’ll try this out in the future!
I drew a circle the first thing that popped up for me was a fully naked woman, and no I didn't have a prompt...
Is that a feature or a bug? 🤔
@@IntelligentImage-sl7uf I think the problem comes from the saturation of sexualized AI content. It's the only problem I see with this tech: it's basically no longer learning from real art, but just from whatever it saturates the net with.
It's interesting that the live mode works as a Rorschach test for the generative model. The more sexualized data the model has been exposed to, the more likely it is to interpret an ambiguous stimulus in a sexualized manner, just like with humans.
@IntelligentImage-sl7uf At least I did see that there's an NSFW filter. I still like my drawing, so on live mode lowering the strength to 30 works well, and I'm kind of just using the AI as a source of reference. But I couldn't get the posing mode to work, so I will have to go back to that, because I want to experiment with animation as much as possible.
Oh great, I missed this one. You made me switch over to Krita with part 1. I like ComfyUI's hardware efficiency, but I fiddle around with the nodes too much; too much time wasted. Forge is dead, A1111 is hecking slow, SDNext is confusing, InvokeAI I don't like... I'm very happy with the extension's flexibility and active development (I started donating :D). Keep making these videos please 🤗! I would like a comprehensive guide on region prompts too. From what I can see, it makes a transparency mask with color to assign regions. I can make it work, but I have to try and fail 4/5 times.
Glad these have been useful! I'm working on a regional prompt tutorial right now. I've had some trouble getting it to work properly too, mainly with errors. I'm still getting it all worked out.
Thank you for this comment. I was wondering for a long while why Forge wasn't getting updates. I poked around and found stable-diffusion-webui-reForge which looks promising.
@@Daeca Indeed, I've heard about it too, and it supports DoRA where Forge doesn't. However, there is still the cautious fear of running a script from a nobody. A1111 and Illyasviel are great, secure sources. Panchovix? Well, I don't know... only 209 stars as of today.
Very comprehensive guide, thank you for these!
Glad it was helpful!
very helpful, thanks!
Glad it was helpful!
Does it require a lot of GPU memory that way? The same as installing Automatic1111 on a PC?
It will require a pretty fast GPU for it to run quickly.
Depending on which SD model you use, the optimal format is limited by the size of the images used to train that model (512x512 for SD1.5, 1024x1024 for SDXL and SD3). Then you may use some specific secondary formats determined by SD standards. But it can already be a pain; for instance, you may get duplications and anatomy mistakes by going for a non-square format. At the very least, to mitigate calculation issues if you don't go for one of these standards, you should only use sizes in steps of 64 (64x64, 64x128, etc.). So you shouldn't use 1000x1000, but 1024x1024.
You're right, I try to at least stick to power-of-two numbers. I don't know why I didn't here.
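The steps-of-64 rule above is easy to automate. As a small sketch (the helper names are my own, not from any SD tool), this snaps an arbitrary size to the nearest multiple of 64, so 1000x1000 becomes 1024x1024:

```python
# Sketch: snap a resolution to the nearest multiple of 64, per the
# comment above (Stable Diffusion latents work in 64-pixel steps).
def snap64(value: int, minimum: int = 64) -> int:
    # Round to the nearest multiple of 64, never below the minimum.
    return max(minimum, round(value / 64) * 64)

def snap_resolution(width: int, height: int) -> tuple[int, int]:
    return snap64(width), snap64(height)

print(snap_resolution(1000, 1000))  # -> (1024, 1024)
```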
how to istall zluda directly to local?
Sorry, I don't really know what that is.
I have had this installed for a couple of days on an i9/4090 machine. A lot of fun, but you have to be a good prompter. I work at 2048 x 2048.
Prompting is something I'm still working on 😅