How to Make Concept Art with AI (Free and Easy) - Stable Diffusion Tutorial 2022
- Published 10 May 2024
- ATTENTION! Lots has changed for the better since I made this video! Here’s my guide on how to install and use Stable Diffusion in June 2023: • How to Install and Use...
Other options for using Stable Diffusion: / dreamers_guide_to_gett...
Sampling Method differences: i.ibb.co/vm4fm7L/166144002711...
CHAPTERS
0:00 - Installing Stable Diffusion
1:47 - Painting the Rough Input Image
2:25 - Your 1st AI Images
9:00 - Mixing Results in Photoshop
10:36 - Refining in Stable Diffusion & Photoshop
14:39 - Changing Parts of the Image with Crop
17:21 - Final Steps
----------------------------------------------
Did you like this vid? Like & Subscribe to this Channel!
Follow me on Twitter: / albertbozesan
Timer icon I used in the thumbnail: www.iconsdb.com/white-icons/t... - Science & Technology
Now this is way closer to using AI art generators as tools and not as a lazy replacement for artistic work
Pioneers of the beginning
Yessss
This feels like Bob Ross time traveled. Way to go!
Blasphemy.
I get why you start to speed up the tutorial after you have shown the basics. But somehow I would love to see it at normal speed till the end; it's super satisfying to watch you create this piece. You are one of the first people I found using AI really as an artist. Most just use trial and error with the prompt, but you have a clear image of what to produce and work to control the AI, not the other way around :)
Thank you! Maybe I'll post a full process when I'm working on some "personal" art without explaining what I'm doing.
@@albertbozesan I agree it would be nice to see a thorough walkthrough but I will say you give the right amount of detail before you go into speed painting mode.
@@albertbozesan you could actually just stream it on twitch.
This is actually good news for concept artists. The more people think this is concept art, the more the quality and authenticity of portfolios will drop and the value of actual artists will rise. Concept art is not about making pictures, it's a very involved process with a lot of iteration due to feedback from an art director to achieve visual solutions to production and narrative problems. Communication and teamwork is key, so a tool like this will do no good, even if you know what you're doing, it will just be a hassle for you and a waste of time for the whole team.
The future is now old man
Humans do it best, right? It must be really painful for people working in fields thought to be the exclusive playground of human beings, now that science is only beginning to show that creativity on demand is nothing more than tying the right dots together in a vast (for humans) incomprehensible and ever-expanding network of source material and machine-made algorithmic approaches.
@@bartlx The source material making this possible is human, though? It is an exclusive playground for human beings (excluding any theoretical alien life). If you took the creations of other Earth animals and created a similar network, you wouldn't get these results.
I've worked in game studios where the 'concept' artwork was photos ripped out of magazines and newspapers glued to a sheet of paper. The game was a hit on Steam and it was a big-budget AAA game. Other times we just made blockout shapes in Blender and the developers would tell us what else to do. No need for pretentious concept artists.
@@odakyuodakyu6650 Which game was the AAA hit?! I really want to know
I was a bit skeptical at first but after watching a few of your videos I think I understand a lot better now. In the beginning I thought this was going to be a very lazy “let the AI do the work” kind of thing but now I see that it’s a lot more hands on and isn’t completely negating the need for an artist with a vision
Thank you! I’m glad you appreciate that some thought needs to go into it. I’m from a creative industry, so I know that you need to have input to get any sort of professional result.
this stuff is wild, thank you for the video! please keep them coming, subscribed and ready for more.
Glad you liked it and thanks for the sub! I'm planning more vids now, anything specific you would be interested in?
As a concept artist this looks like a very cool tool, I’ll try to use it in my work flow, with the real knowledge of concept art this could be a really powerful tool
Yes! I don’t think this tech will replace skilled artists at all, unlike what others say. It is so important to bring art knowledge into the workflow to get great results. This will give you superpowers!
@@albertbozesan True, although the clients may want cheaper, faster results within an ever tightening time frame.
@Iamwolf134 always, always…
@@Iamwolf134 But, that will always be true, regardless of tools. How many people have tried to commission you with "exposure"?
@@Iamwolf134 Do you know anybody who doesn't want it cheaper? The client is not the problem. The problem is the artist who can leverage these tools to do it cheaper and quicker than you.
This is extremely awesome. Im decent at art and already do a lot of photo bashed work that I just blend and paint on top of. This would greatly speed up the process and save me a lot of effort and on occasion come up with unique elements better than my initial imagining. Very cool can't wait to give it a try.
I have no words to express how much I appreciate the quality training you provide here. I mean it. Thank you so much from the bottom of my heart.
What a kind thing to write! Thank you, I’m very glad I could help.
Thank you for sharing your workflow. It's an eye-opener :)
You’re very welcome!! Hope it’s useful to you :)
I only recently heard about "Stable Diffusion" and this is something I never knew existed. It's really clever stuff!
Many thanks for this - was never really sure how Stable Diffusion could be used in practical terms, and now I know! Great tut! Extremely informative and nicely paced. Looking forward to the next one!
Thank you! There’s plenty more videos coming :) please leave a sub if you haven’t already
that was a brilliant tutorial Albert. ty
Hey Albert (nicest tutorial guy on yt!). I've just used your tutorial to artwork some killer shots for a pitch. Absolutely could not have done it without your tutorial. Massive thanks.
Thank you so much for sharing that!! Best of luck with your project!
This video was the very first video I watched about three or four weeks ago. OMG has so much changed, but some of the basics here are still useful, and I still keep coming back to reference the workflow.
I should do an update, it really has changed, and I’ve gained way more experience. Thanks for coming back! 😄
@@albertbozesan Mind you, this was also where I got my first taste of how stupid fast this tech was advancing. "Wait, my UI didn't have that option. It's phrased completely differently."
Crazy how when YouTube suggests a video about SD or AI art that I hadn't seen, if the video is older than 3-4 weeks, I have to pick through it to glean what hasn't been updated/made easier since then.
FINALLY!! thank you for making this video!!
Been searching high and low, for a month, trying to find a tutorial about Stable Diffusion that I could actually follow (as I have zero coding experience)... but I am a pretty good illustrator (if I do say so myself - lol). But thank you, thank you... NOW I will try an install and knuckle down!!!
Great instruction method. Great voice. Great tutorial!
This is going to make the work of concept artists and illustrators much easier. I just hope they don't get overworked simply because the AI made everything easy and they are therefore expected to work twice as hard.
But at some point I think this would be the case
I think that without major cultural changes this will always be the case. We will always find a way to squeeze things within a hair's width of the breaking point. Automation hasn't helped any industry see working conditions improve. Either productivity went up without any benefit to the workers, or people were laid off and productivity stayed the same.
No other words than one : EXCELLENT !
Fascinating, great video!
Definitely check out my newer ones!! This is way old!
I actually predicted that there would be a workflow like this when Nvidia released their GauGAN AI tool 3 years ago. Seeing this come true now, just 3 years later, scares me and makes me happy at the same time. I just can't imagine how extremely easy it will be to realize any visual idea in the next 1-5 years.
What’s your next prediction? 😄 you have a good eye.
This is your first video?! You have only 100 subs?! But it's so good!
Thanks so much! I have a little experience making tutorials on other topics and giving uni workshops :)
Sir, you've just made my day! Huge thanks for this great tutorial!
You're very welcome! Glad I could help. Do you have any wishes for future videos?
Well, I'm very interested in a free method for outpainting. DALL-E2 has it, and it's great, but people have to use up lots of precious credits for that feature. I hope Stable Diffusion will have a UI version with a similar (and free) technique.
This is great, Albert. you've definintely won the 'nice guy who really knows his stuff' award! thank you.
What a nice thing to say! Thank you 😊
My new favorite channel
Awesome tutorial! Thanks for that! I was disappointed by Stable Diffusion at first; now I know exactly how I can make it work better.
Thank you! Yeah, I feel like it needs more text and careful settings than other AI. But the results can be amazing.
it's impressive the way you used it, very clever!
Thankyou Thankyou Thankyou. You made it all make perfect sense!
I’m so glad 😄
Not all heroes wear capes; some make amazing tutorials.
With great GPU power comes great responsibility 💪
This is really a good way to showcase how a real artist can use AI to speed up the workflow! Thank you
Really cool! I would have kept the blue handlebars for the motorcycle. Amazing and inspiring video. Thank you.
This is just amazing! Inspired me to try it out as well! Very nice :3
impressed, thank you
incredible process and tutorial
Thank you!
I think your technique is quite interesting. Your results were good. Thanks for sharing.
I have a hard question
can this AI fix a low quality Image from another AI Generator?
I have some very good anime image but the quality is bad
so I'm looking for AI to Redraw or fix it somehow
I even try to upscale it but no use
It’s probably going to look a little different, but if you don’t mind that then sure! But check out “waifu diffusion”, I hear it’s a version of Stable Diffusion trained more on anime. That should get you better results.
@@albertbozesan thanks a lot! I'll check it now
This is actually a cool and interesting use of AI
Awesome content!
Thanks Albert, this was really interesting.
You’re very welcome!
Very cool tutorial, Bravo !😀
BTW - in Photoshop, using Image > Adjustments > Match Color can be quite nice in this process.
crikey, good stuff.. going to have to brush up on my photoshop skills
Incredible workflow. Thanks for the video.
installed gimp to try this. never had a clue how to use it, but starting to get the idea. great technique, thanks
Thanks for sharing, this is great!
I’m glad to hear that! Thanks for watching 😄
fantastic tut! thank you!
Nice!
This is super interesting. Thanks!
Thanks and you’re very welcome! 😊
Just a really small detail, but I always throw in Filter->Camera Raw Filter (not sure which version of PS you have) and play with Basic and HSL tabs. Even if something is just a quick concept art, but you like pretty colors, it's always worth spending a couple extra minutes 👍
Great tip, thanks!
I tried AI a few times but I always got results that looked so off, but seeing this makes me wanna actually try it again
You must be trying a shitty AI. Try Midjourney or DALL-E 2.
It's really fun just piecing everything together in Photoshop
Cool way to use AI with Photoshop! Will try it, thanks
This is so cool and exactly what I needed! I've been trying to find a good Stable Diffusion UI for days, and it's a plus that you just showed me the steps it takes to really utilize this tool to the max and make my ideas come true. Thank you!
I’m happy to help!! Best of luck with your projects :)
Amazing!
Thanks!! So much fun to play with, too 😄
Very cool, I can definitely use this to improve my workflow. A good AI plug-in for Photoshop/GIMP should do the trick as well. I think NVIDIA is actually close. Thanks for sharing!
There are already photoshop plug-ins! They just don’t run locally and use paid cloud services, so I haven’t used them.
@@albertbozesan Yes, but they all want us to commit to a monthly/yearly plan. I still do photo bashing. Good work!
Which repo GUI are you using?
Great stuff. Isn't it faster to do a rough composite in Photoshop with images/stock first, then run the same process in SD?
And please do a series on that, it's so good!
Also, a final question: I'm struggling to make great faces in full-body portraits in SD. Is it possible with this technique to change that, or should I photobash a face onto the result and then process it again in SD, I guess?
I've done this as well, check out my tweet: twitter.com/AlbertBozesan/status/1563605096407019520?s=20&t=K8ndZLmFcMto6WQ8MXRrxA
The disadvantage is less creativity on the side of the AI. You give it a lot more hard details to work with from the beginning. Definitely worth a shot, of course :) experimentation is key!
To your 2nd question: I would suggest getting a good general full body shot first, then cropping and editing the face. The third tab with "GFPGAN" can then be used to perfect the face in a final step!
Thank you for your support, I'm glad you liked the video! A new one is coming later today.
Full-body portraits can be hard for the AI because of the millions of full-body portraits on the internet; the AI tries to follow the prompt, so it ends up bashing all these people together, sort of creating a blob human.
Apparently, researchers from Adobe are working on something called Pix2Pix-Zero using Stable Diffusion; basically Img2Img, but retaining your input almost exactly while only changing its contents.
If that's going where I think it's going, Photoshop's gonna jump to a new level. Basically like this video, but without an external Stable Diffusion UI.
Instruct Pix2Pix is available in the auto WebUI and an incredible feature that I’ve already used for work. Video coming soon!
It's super great. Thanks for sharing your knowledge. Do you have any ideas on how to do this for non-square images? Resizing in Photoshop, or generating an environment that isn't really connected to the image, doesn't give the best results.
Awesome!!...Awesome!!
Awesome. So many useful things in one video.
thx so much for the video
That's awesome, thanks for your step-by-step video. By the way, are we allowed to upload these image results to other microstock sites?
Are you asking if you’re allowed to upload AI art to stock photo websites? That depends on their Terms and Conditions, different for every site. I don’t know about Microstock.
@@albertbozesan OK, thanks for your reply.
Great video thanks for sharing .
Love your tutorial! So helpful, and also more controlled than the other fancy AI generators!❤️❤️❤️
It works like keywords for SEO. The more keywords you use, the more data it knows to scan through. But if you go too far with the keywords, it will counter itself and be left with fewer and fewer data sets to pull from. It is an "art" form in itself and changes based on the data set model we use.
Great video, thank you 🙏 I was just checking the link, and from what I understand this does not work on MacBook computers...?
You also use your own computer's GPU?
thanks in advance.
Yes, you need a Windows PC with an NVIDIA GPU for best results. I'm using a local installation on my RTX 2070S.
Thanks for the video. The tutorial for installation asks for Python v3.10.6 but 3.10.7 is out. Must I use .6 or is it ok to get the most recent version?
I don’t know in this case, but I strongly suggest you install the exact version that’s asked for. Unexpected problems might occur otherwise.
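Since version mismatches like this keep coming up, here is a minimal sketch of a pre-install sanity check. It assumes the guide pins Python 3.10.x; the constant and function names are my own illustration, not part of any install script:

```python
import sys

# The install guide pins Python 3.10.x; other minor versions can break
# the webui's dependencies in unexpected ways, so check before installing.
REQUIRED_MINOR = (3, 10)

def version_ok(info=None):
    """Return True if the interpreter matches the pinned minor version."""
    if info is None:
        info = sys.version_info
    return (info[0], info[1]) == REQUIRED_MINOR

if __name__ == "__main__":
    print("Python version OK:", version_ok())
```

Run it with the same interpreter you plan to use for the webui; if it prints False, install the pinned version alongside your current one rather than upgrading in place.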
Very soft!
I've found that using line art and colored backgrounds improves the results. Color blocks alone make it more challenging for the AI to understand.
You can use color blocks and then add noise in Photoshop, too! I’ve found that to work.
@@albertbozesan Noise? Cool! I'll try it.
Great video man. How can I get this webui on MacOS?
I don’t know if that’s possible. Not many are working with SD on Mac because it’s a lot slower. And I say that with a Mac as my main computer.
Hello, I have a question. It pulls image information from the internet (what it is training on for instance), but I was wondering if it is possible to tell it to pull inspiration from a folder on my computer?
There is a new feature which lets you feed it images so it learns a style! Check out this video: czcams.com/video/7OnZ_I5dYgw/video.html
I hate drawing backgrounds; this will help me a lot with my concept art drawings! Thank you
I checked - everything is clean
Thanks for the video, it's so amazing!
In the setup, appreciate the videos! Maybe a dumb question but is there a way to export content in mono in soft20?
Not sure what you mean…
Wow, looking at this UI version in March 2023 shows so much evolution over these months
Yes!! It’s so much better now, as is our experience and knowledge.
I can foresee the future regarding this: YouTube challenge videos where the challenge is to generate nice-looking AI art, but WITHOUT using Greg Rutkowski's name 😁. Poor Greg.
In any case, fantastic video!
Haha! Yeah, I’ve stopped using artists’ names, it’s not cool. And it’s entirely possible to get great results without names!
Still can't crop like in the video, even on the version of the UI linked in the description.
very nice , thank you
I installed following the guide in the description, but I am missing a lot of the options, in particular the denoising strength and upscaler. Any ideas?
Denoising strength should be in any version that has img2img…did you follow the Voldy guide or one of the others?
@@albertbozesan Yeah, the Voldy guide. There were a few steps I was a little unsure about, but I must have figured it out because it all works. I do seem to have different sampling methods, too.
Awesome
What is the UI you are using?
Just a random comment, but I like your voice lol, it's very relaxing to hear you doing stuff. I feel like I'm watching AI art Bob Ross.
That’s a big compliment, thank you! 😳Means a lot to me 😄
very nice work. much appreciated. xyxy
5:56 Thanks for the chart. Now I know what is what
I'm so freaking excited! What happens when we use a quick n dirty 3D scene? *rubs hands together*
I’m sure great things will happen! 😄
The world is a better place because of people like you man. Thank you for sharing this. I am sure, this will turn "NO AI" bashing to something constructive.
Wow. Thank you for the valuable information about AI art. I am an artist myself and curious about people fearing that AI will take artists' jobs. AI at this stage is not scary, but it will one day take over artists' jobs. Even Elon Musk feared AI. Thank you so much for the simple tutorial video, it's very simple. I will try it and make my first concept art. =)
I don’t think AI will take artists’ jobs. It’s quite obvious to any professional that you need an artist’s eye and skillset to really get controlled and reliable results - super important for real work in any creative industry. Best of luck with your projects!
I saw your newest video where you recommended the viewer watch your AUTOMATIC1111 installation guides, but I haven't figured out which video you were referring to?
I suppose my couple tips in here are super old…in that case check this out, it looks easy and super helpful: www.reddit.com/r/StableDiffusion/comments/zpansd/automatic1111s_stable_diffusion_webui_easy/?
@@albertbozesan Thank you!
Also, I have a question that may sound dumb, but I know nothing about GPUs and all. I think I have up to 6GB of GPU memory and the AI can't run images bigger than 320x320 pixels, which is disgusting. Is there any way to make 1920x1080 images with this GPU???
To my knowledge, there is no mid- or low-price GPU that can get beyond 1024 right now. You also lose quality when you go higher - the AI itself is only trained on 512x512, so weird things start happening above that. I don’t use anything above 512 in my work. I upscale with ESRGAN.
@@albertbozesan Thanks a lot for the quick answer! I'll try this, sounds cool.
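As a rough illustration of the generate-small-then-upscale advice, here is a hypothetical helper that clamps a requested size toward the 512-pixel range mentioned above and rounds down to a multiple of 64 (a common latent-size constraint for SD 1.x models; the exact cap depends on your model and VRAM, so treat these numbers as a sketch):

```python
def nearest_valid_size(width, height, multiple=64, cap=512):
    """Clamp a requested size to the model's native range and round
    down to a multiple of 64, which SD 1.x latents expect."""
    def fix(x):
        x = min(x, cap)  # stay near the 512px training resolution
        return max(multiple, (x // multiple) * multiple)
    return fix(width), fix(height)

# Generate near 512x512 first, then upscale the result to
# 1920x1080 with ESRGAN as a separate step.
print(nearest_valid_size(1920, 1080))
```

The point is to let the diffusion model work at the resolution it was trained on and leave final resolution to a dedicated upscaler.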
I guess it goes like this: real humans label some images with words like 'detailed' and 'realistic', while others like '4k' are just automatic. Images with these labels tend to have a certain style: '4k' might imply a high level of per-pixel detail, while 'detailed' may imply a high number of objects in a single shot, so maybe these two together will produce images with a lot of objects that have high-res textures.
Also, resolution doesn't matter as much with terrible drawings anyway. Like, having a 4k image of your child's cat-cow-dog thing isn't very important.
It is unlikely that real humans are doing the labelling. There might be some checks and initial hand-curated data, but AI models are well past the point of doing image recognition and element abstraction.
I installed everything as instructed, but I get an error when I try to use Image-to-Image.
TypeError: img2img() missing 1 required positional argument: 'fp'
Hmm...I'm afraid I'm too much on the artist side to be able to help directly. It looks like this is a known issue in the newest version, though, so hopefully someone will fix it soon! github.com/hlky/stable-diffusion-webui/issues/463
Where can I find this Stable Diffusion model? I did find it on Hugging Face, but I didn't find a download button there.
I think you need an account on Hugging Face, but it's free.
Soon a plugin will appear that does all of this inside apps like Photoshop and Krita :D
Amazing
Yes! I hope it will work with a local installation. I don’t need to pay for a cloud service if I already paid for a GPU 😅
Think of the prompt as flicking through a filing cabinet of image descriptions. Find the things in common that you want applied to your image. I call this stuff word vomit, and I have it saved out in various forms to copy-paste in for differing styles, but it is very important to the outcome of the image.
For example, if I want an animated, clean rendered style, I'll add tags like 'Official art by Disney' or 'Official art by Pixar'. Now this won't work for everything; this style of tagging works in this example because it is how they tag their own official releases. And that's the key: wording things the way the specific image owners/critics describe their images.
The best tip I give to fellow AI-tists is this: learn jargon for everything as best you can, and get/use a thesaurus.
First, jargon. Remember every image is generally categorized/described by the wording of where it was found/scraped. You want high art? Use high-art jargon. You want photography? Use photography, and perhaps cinema and camera-operator jargon as well.
Learning the terminology for different types of camera shot, for example, is fundamentally helpful, e.g. 'a pulled-back shot of a', 'a distance shot of a', 'panoramic', 'fish-eye lens'. These will help you play with the scale and style of your image, for instance.
Art terminology helps greatly as well. It's good to have your own Discord channel to save all this stuff to if you're a Midjourney peep. I've linked some terms below for copy-paste (sorry, there were descriptions for each but YouTube wouldn't let me dump that much text... soz, beans). (Trompe l'oeil < this one is one of my personal favorites; look it up if you haven't heard it before :)
Next, the thesaurus is crucial. Say you want 'Prompt: a london street with flying cars floating down the street'. This is most likely going to give you flooded London streets. Find other words for 'floating' to help eliminate the alternate meaning. Read your prompts carefully and make sure you are not accidentally getting the wrong meaning out of your words.
On top of that, using terminology like 8K is great, but think about which images are actually likely to have that in their description; 4K is probably more helpful, as there is a much larger pool of art uploaded at 4K with that as part of the description. Add both :)
People tend to think of these prompts like a mystical language, and yeah, kinda, but think of it more like a rolodex of image descriptions: you're not cobbling together pictures so much as cobbling together people's descriptions of them to make a new image.
Prompt ;A deep pulled back shot of A vast tundra with a futuristic city encased in a vast thick glass dome with yellow mist. ships flying around on dedicated flying lanes. glow on the out side, Inception style shot, hyper realism, crisp details, Landscape shot, vast 50 square kilometer. HD. OLED. Leading Lines. Golden Ratio. Rule of thirds.
This prompt above made these images. Sorry for the plug, but it's kinda hard to show an example without it?? :P...... (.....> Please like and subscribe hehe)
instagram.com/p/Chx6eqlofAv/
Some extra things that are nice to add
:Color theme, Teal, green, Lime and Yellow. (Add your own colors and research color theory just a little bit)
:High contrast in Hue
:More Value than Hue (These two can really make the colors pop, sometimes.)
:OLED - this should be pulling from TV and monitor advert images, which are some of the most expensive, high-quality, HDR-contrasty images you can find. They gotta sell TVs after all :P
:HDR similar to above
:Unreal Engine
:Octane Render | Renderman Render | Vray | Redshift | Mental Ray < These are all rendering engines and images with this as part of their description should be mostly Clean AF and thus will help bring some of the clean to your piece.
:Symmetry < Obvious but handy
I'm still learning, but this should help.
Also Capcut is fantastic for making animated reels from a mobile device :D
Abstraction
Alla prima
Allegory
Appropriation
Avant-garde
Brushwork
Chiaroscuro
Color Theory
Composition
Contrapposto
Distortion
Figurative Art
Genre
Glazing
Impressionism
Mixed media
Motif
Narrative
Perspective
Photorealism
Plein air
Proportion
Realism
Scale
Sfumato
Symbolism
Texture
Theme
Trompe l’oeil
Underpainting
Value
FORESHORTENING
FOREGROUND
AERIAL PERSPECTIVE
ASSEMBLAGE
BIOMORPHIC
CONCEPTUAL
CONTOUR
ICONOGRAPHY
IMPASTO
MEDIUM
MODERN
PENTIMENTO
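The save-and-reuse workflow described above can be sketched as a tiny script. Everything here (the tag sets, the `build_prompt` name) is hypothetical, just one way to keep "word vomit" fragments organized for copy-paste instead of retyping them per prompt:

```python
# Hypothetical saved style fragments, grouped the way the comment
# above suggests saving them for different target styles.
STYLE_TAGS = {
    "render": ["Octane Render", "Unreal Engine", "crisp details"],
    "photo":  ["4K", "HDR", "golden ratio", "rule of thirds"],
    "paint":  ["chiaroscuro", "impasto", "plein air"],
}

def build_prompt(subject, *styles):
    """Join a base description with the saved fragments for each style."""
    tags = [tag for style in styles for tag in STYLE_TAGS.get(style, [])]
    return ", ".join([subject] + tags)

print(build_prompt("a futuristic city under a glass dome", "photo"))
```

Swapping one style key for another then regenerates the whole tag block, which makes it easy to A/B test how each fragment set steers the image.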
Love this. Thank you for the detailed suggestions!
@@albertbozesan You are most welcome fellow human :)
Nice
Thanks
Do you know by any chance how to solve CUDA out of memory problem?
When I get it occasionally, a reboot fixes it. Try not to have any other graphics-intensive apps open while running stable, it really needs all of your GPU. Photoshop seems to be okay.
14:58 I can't crop like this. The newest a1111 is probably broken :(. Is there any way to downgrade it to a good version?
It appears the cropping is broken…but nowadays I suggest switching to inpainting and “inpaint masked area” instead. That way you don’t have to mask and photoshop anymore. This tutorial is pretty old, check out my newer ones!
@@albertbozesan thanks
@@albertbozesan Which one of your new tutorials is best to start watching?
@@bradcasper4823 my ControlNet one should be a good place. czcams.com/video/dLM2Gz7GR44/video.html
Very good one, thank you. This process is more or less explained in Reddit posts, but it's always good to see it live.
Now I see why artists are so angry XD.
Now anyone with a little knowledge of Photoshop and an understanding of AI prompts can make very cool images that were impossible before.
But evolve or die... you can hate it, or if you're an artist you can learn to use it to improve your process. Same thing as when Photoshop and Illustrator appeared and people went from pencils to digital painting.
Can you do that with Midjourney?
The new Midjourney remix feature might be worth experimenting with. But I think Stable Diffusion is the best option out there with the most creative freedom.