Noise Styling is the NEXT LEVEL of AI Image Generation

  • Published 11. 12. 2023
  • Noise Styling is the NEXT Dimension of AI Image Generation. This new Method by Akatsuzi creates incredible new Styles and AI Designs. Go far beyond what your AI Model can do. Explore new artistic Expressions. Become more versatile with AI Noise Styling.
    #### Links from the Video ####
    My Workflow + Noise Map Bundles: drive.google.com/file/d/1D0f5...
    Akatsuzi Workflows: openart.ai/workflows/L2orhP8C...
    Akatsuzi Noise Maps: drive.google.com/drive/folder...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
    AI Newsletter: oliviotutorials.podia.com/new...
    Support me on Patreon: / sarikas
  • Howto & Style

Comments • 136

  • @uni0ue87 • 5 months ago +47

    Hmmm, maybe I didn't get it, but it seems like a very complicated way to get a tiny bit of control over colors and shapes.

    • @OlivioSarikas • 5 months ago +5

      you get a lot of creative outputs that the model on its own couldn't create. so there are endless ways of experimentation with this

    • @ImmacHn • 5 months ago +4

      This is more of an exploratory method than anything, which sometimes you want for inspiration.

    • @uni0ue87 • 5 months ago +1

      I see, makes sense now, thanks.

    • @alecubudulecu • 5 months ago +1

      You should try it. It’s pretty fun.

    • @jeanrenaudviers • 4 months ago +1

      Blender 3D has nodes too, and it's totally stunning. Even for 3D elements, shading, compositing; finally you make your very own modules, and it's non-destructive.

  • @Foolsjoker • 5 months ago +6

    As always, love your walkthroughs; you don't miss a node and you explain the flow. Keeps it simple and on track. Hope you are having fun on your trip!

    • @OlivioSarikas • 5 months ago +1

      thank you very much. i forgot to include new shots from my Bangkok stay this time

    • @Foolsjoker • 5 months ago

      @@OlivioSarikas No worries. I was there last year. Beautiful country.

  • @TimothyMusson • 5 months ago +7

    This reminds me: I've found that plain old image-to-image can be "teased" in a similar way, for really surprising/unusual results. The trick is to add "noise" to the input image in advance, using an image editor. And by "adding noise", I mean superimposing/blending the source image (e.g. a face) with another image (e.g. a pattern: maybe a piece of fabric, some wallpaper, some text... something random), using an interesting blend mode, so the resulting image looks quite psychedelic and messy, perhaps even a bit negative/colour-inverted looking. Then use that as the source image for image-to-image, with a prompt to help bring out the original face (or whatever it was). The results can be pretty awesome.
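A minimal sketch of that pre-noising step, using Pillow. The file names, the difference blend mode, and the 0.5 mix here are illustrative assumptions, not anything the commenter specified:

```python
# Sketch of the "pre-noising" step described above, using Pillow.
# Assumptions: source.png (e.g. a face) and pattern.png (fabric/wallpaper/text)
# are hypothetical local files; the blend mode and 0.5 mix are arbitrary picks.
from PIL import Image, ImageChops

source = Image.open("source.png").convert("RGB")
pattern = Image.open("pattern.png").convert("RGB").resize(source.size)

# A "difference" blend gives the psychedelic, partly colour-inverted look;
# mixing it back with the source keeps the face recoverable by img2img.
mixed = Image.blend(source, ImageChops.difference(source, pattern), 0.5)
mixed.save("img2img_input.png")  # feed this to img2img with a descriptive prompt
```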

    • @syndon7052 • 4 months ago +1

      amazing tip, thank you

    • @ProzacgodAI • 2 months ago +1

      Hey, we stumbled upon a similar technique. I've been using random photos I find on Flickr, making them noisy, then using them at around 0.85 denoise strength so they "somewhat" influence the output. It's working well for portraits and stylized photos, or just to get something way out there.

  • @BoolitMagnet • 5 months ago +2

    The outputs really are artistic; can't wait to play around with this. Thanks for another great video on a really useful technique.

    • @OlivioSarikas • 5 months ago

      you are welcome. i love this creative approach and the results that akatsuzi came up with

  • @subhralaya_clothing • 5 months ago +10

    Sir, please make an Automatic1111 tutorial as well

    • @CoreyJohnson193 • 5 months ago +2

      A1111 is dead, bro 😂

    • @pedrogorilla483 • 5 months ago +1

      Can’t do it there.

    • @ciphermkiii • 5 months ago

      @CoreyJohnson193 I'm a little out of the loop. What's the better alternative to A1111? Counting out ComfyUI.

    • @CoreyJohnson193 • 5 months ago

      @ciphermkiii SwarmUI, Fooocus... Check them out. A1111 is "old hat" now. Swarm is Stability's own revamped UI, and I think those two are much better. I'd also look into Aegis workflows for ComfyUI that make it more professional to use.

    • @jakesalmon4982 • 5 months ago

      @ciphermkiii There isn't one; A1111 is the best at what it is. He was saying it's dead because Comfy exists. I disagree for some use cases.

  • @jeffbull8781 • 5 months ago +4

    I have been using a similar self-made workflow for a while on text2image, but it requires no image inputs: it creates weird noise inputs and cycles them through various samplers to generate a range of different images from the same prompt. The idea was based on someone else's workflow and iterated on. You can do it by creating noise outputs with the 'image to noise' node on a low-step sample, blending that with Perlin or plasma noise, and then having the step count start at a number above 10.
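A rough illustration of that idea outside ComfyUI. This NumPy/Pillow sketch only approximates the nodes the commenter names: the pixel-shuffle standing in for 'image to noise', the blurred value noise standing in for Perlin/plasma, and the 50/50 blend weights are all assumptions:

```python
# Sketch: "image to noise" blended with procedural noise, in NumPy/Pillow.
# input.png is a hypothetical file; both noise sources below are
# approximations of the ComfyUI nodes, not their actual implementations.
import numpy as np
from PIL import Image, ImageFilter

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32)

# "Image to noise": shuffle the pixels so only the colour palette survives.
flat = img.reshape(-1, 3).copy()
np.random.default_rng(0).shuffle(flat, axis=0)
palette_noise = flat.reshape(img.shape)

# Cheap stand-in for Perlin/plasma: low-res random values, upscaled and blurred.
small = np.random.default_rng(1).random((16, 16, 3)) * 255
plasma = Image.fromarray(small.astype(np.uint8)).resize(
    (img.shape[1], img.shape[0]), Image.BICUBIC
).filter(ImageFilter.GaussianBlur(8))

# Blend the two noise sources 50/50 and save as an img2img seed image.
blended = 0.5 * palette_noise + 0.5 * np.asarray(plasma, dtype=np.float32)
Image.fromarray(blended.astype(np.uint8)).save("noise_seed.png")
```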

    • @OlivioSarikas • 5 months ago +1

      that's awesome! akatsuzi also has different pattern and noise generator nodes. in this video i wanted to show that you can also create them yourself, and the effect the different shapes you paint into them have. you can see in the images that the circle or triangle and the colors have a strong impact on the resulting composition

  • @manticoraLN-p2p-bitcoin • 5 months ago

    This is so 80s... I liked it!

  • @blisterfingers8169 • 5 months ago +5

    Fun stuff, Olivio. Thanks for the workflows. FYI, the workflows are way off from the default starting area, meaning newbs might think it didn't work. ♥
    Thanks for going over how you make the inputs too. Makes me wanna train a LoRA for them.

    • @weirdscix • 5 months ago

      I'm glad I saw this comment, as I thought the workflow was bugged. I never thought of looking that far from the start area.

    • @TheSickness • 5 months ago +1

      Thanks, that got me haha
      Scroll out ftw^^

    • @OlivioSarikas • 5 months ago +2

      thank you, i will look into that

  • @Clupea101 • 5 months ago

    Great Guide

  • @webraptor007 • 5 months ago

    Thank you...

  • @Jan-jf4th • 5 months ago

    Awesome video

  • @gameswithoutfrontears416 • 5 months ago

    Really cool

  • @frankiesomeone • 5 months ago +6

    Couldn't you do this in Automatic1111 using the colour image as the img2img input and the black & white image as ControlNet depth?

    • @eskindmitry • 5 months ago +3

      Just did it, looks awesome! I've actually replaced the first step of creating a white frame by using an inner-glow layer style. I mean, we are already in Affinity, so why not just make the pictures at the right size and with the white border to begin with...

    • @OlivioSarikas • 5 months ago +2

      actually a good point, yes that should work. however you don't have the flexibility of manipulating the images inside the workflow like ComfyUI gives you. I show a somewhat basic build here, but you can do a lot more: blending noise images together, changing their color and more, all with different nodes.

    • @xn4pl • 5 months ago +1

      @OlivioSarikas With the Photopea (web-based Photoshop clone) extension in Automatic1111 you can just paint any splotches or even silhouettes and import them into img2img with a single button, then export them back into Photopea with another button, and iterate back and forth all you like. And stuff like blending images, changing colors, and much more is far easier done in Photopea than in Comfy.

  • @mick7727 • 3 months ago

    Nice results! Would this be achievable with multiple IPAdapter references? I feel like it would in practice; I just haven't thought of trying it yet.

  • @sb6934 • 5 months ago

    Thanks!

  • @summerofsais • 5 months ago

    Hey, I'm in Bangkok right now. I have a casual interest in AI, not as in-depth as you, but we could grab a quick coffee.

  • @minecraftuser8900 • 5 months ago

    When are you making some more A1111 tutorials? I really liked them!

  • @MrMustachio43 • 5 months ago +2

    Question: what's the biggest difference between this and image-to-image? Easier to colour? Asking because I feel you could get the same pose easily with image-to-image.

  • @AndyHTu • 5 months ago

    This feature is actually built into Invoke AI. It's very easy to use as well, if you guys haven't played with it. It just works as a reference to be used as a texture.

  • @kazioo2 • 5 months ago +17

    Remember when AI gen was about writing a prompt?

    • @DivinityIsPurity • 5 months ago

      A1111 reminds me every time I use it.

    • @jakesalmon4982 • 5 months ago +2

      Much more interesting this way :) a depth map is worth 1000 words

    • @OlivioSarikas • 5 months ago +2

      it still is on Midjourney ;)

  • @Herman_HMS • 5 months ago +3

    To me it just seems like you could have used img2img with high denoising to get the same effect?

    • @rbscli • 5 months ago

      Didn't really get it either.

  • @EddieGoldenberg • 5 months ago

    Hi, beautiful flow. I tried to run it on SDXL (with SDXL ControlNet depth) but got weird results. It seems only 1.5 checkpoints work. Is that true?

  • @veteranxt4481 • 5 months ago

    @OlivioSarikas What would be useful for an RX 6600 XT AMD GPU?

  • @petec737 • 5 months ago +1

    Looks like soon enough we're going to recreate the entire Photoshop interface inside a ComfyUI workflow :))

  • @programista15k22 • 5 months ago

    What hardware do you use? What graphics card?

  • @KDawg5000 • 5 months ago

    Might be fun to use this with SDXL Turbo and do live painting.

  • @pedroserapio8075 • 5 months ago +1

    Interesting, but I don't get it. At 05:15, where did the blue go? Into the background? Or did the blue you were talking about turn into yellow?

    • @OlivioSarikas • 5 months ago +2

      Yes, i meant to say her outfit is yellow now

  • @ivoxx_ • 5 months ago +1

    This is amazing, you're the boss Olivio!

    • @user-zi6rz4op5l • 5 months ago

      He is basically ripping off other people's workflows and pasting them on his channel.

    • @ivoxx_ • 5 months ago

      @user-zi6rz4op5l Unless he charges for such workflows or doesn't share them, I don't see the issue.
      Maybe he could at least say where he got them from.
      I end up using third-party workflows as a base or to learn a process, then I make my own or customize them as needed.

  • @geraldhewes • 5 months ago +1

    I tried your workflow but just get a blank screen. I did update for missing nodes, updated everything, and restarted. Akatsuzi's workflow does load for me, but I don't have a model for CR Upscale Image and am not sure where to get it. The GitHub repo for this node isn't clear about where to get the models.

    • @geraldhewes • 5 months ago

      The v2 update fixed this issue. 🙏

  • @Dachiko007 • 5 months ago +24

    I don't think you have to go this far to get this kind of effect. Just take those abstract images you generated and go i2i on them. It's an old technique, proposed like a year ago, and it gives very much the same creative and colorful results.

    • @c0dexus • 5 months ago

      Yeah, the clickbait title made it seem like it's some new technique, but it's just using img2img and ControlNet to get interesting results.

    • @vintagegenious • 5 months ago

      That's exactly what he is doing: 75% denoise with an initial image is just i2i.

    • @vuongnh0607l • 5 months ago

      @vintagegenious You can go 100% denoise and still get some benefit too.

    • @vintagegenious • 5 months ago

      @vuongnh0607l I didn't know; isn't that just txt2img (if we ignore the ControlNet)?

    • @AliTanUcer • 5 months ago

      I do agree, I don't see anything revolutionary here. I have been doing this since the beginning. :)
      Also, feeding in weird depth maps. I think he just discovered it, I guess :)

  • @kamillatocha • 5 months ago +5

    soon AI artists will actually have to draw their prompts

    • @UmbraPsi • 5 months ago +2

      Already getting there. I started with AI prompting and have slowly gotten better at digital drawing using img2img; I figured visual control translates better to visual output. I wonder how strange my art style will be, being essentially AI-trained rather than classically trained.

  • @alekxsander • 5 months ago

    I thought I was the only human being to have 10,000 tabs open at the same time! hahahaha

  • @hleet • 5 months ago +2

    I would prefer to inject more noise (resolution) in order to get more complex scenes. Anyway, it's a nice workflow. Got to check out that FaceDetailer node next :)

    • @OlivioSarikas • 5 months ago

      you can actually blend this noise with a normal empty latent noise or any other noise you create to get both :) - also you can inject more noise on the second render step too ;)

    • @sznikers • 5 months ago

      Wouldn't an add-detail LoRA during the upscaling part of the workflow do the job too?

  • @Shingo_AI_Art • 5 months ago +2

    The result looks pretty random, but the artistic touch is wonderful.

  • @windstar2006 • 5 months ago

    Can A1111 use this?

  • @gatwick127 • 5 months ago +2

    can you do this in Automatic1111?

  • @mistraelify • 5 months ago

    Wasn't ControlNet segmentation doing the same thing for recoloring pictures using masks, except this time it's kind of all-in-one? I'd like a little explanation of that.

  • @jibcot8541 • 5 months ago

    I like it. It would be easier if there were a drawing node in ComfyUI, but it might not be as controllable as using a Photoshop-type application.

    • @blisterfingers8169 • 5 months ago +1

      There's a Krita plugin that uses Comfy as its backend, but it's really finicky to use, it seems.

    • @TheDocPixel • 5 months ago

      Try using the canvas node for live Turbo gens, and connect it to depth or any other ControlNet. Experiment!

    • @SylvainSangla • 5 months ago +1

      You can use Photoshop: when you save a file into your ComfyUI input folder and you are using Auto Queue mode, the input picture is reloaded by ComfyUI.
      The only difference from an integrated canvas is that you have to save your changes manually, but it's way more flexible.

  • @keepitshort4208 • 5 months ago

    My Python crashed while running Stable Diffusion.
    What could be the issue?

  • @aymericrichard6931 • 5 months ago +1

    I probably don't understand. I have the impression we replace one noise with another noise whose effect we still don't control either.

    • @filigrif • 5 months ago

      I completely agree with that :) It's not giving "more control" but the opposite: more lack of control, so that Stable Diffusion can digress from the most common poses and image compositions... which it has obviously been overtrained on. It's still something that can be more simply controlled via OpenPose (for more special poses) and img2img (if you need more colorful outputs). Much more satisfying when you need to use SD for work.
      Still, fun experiments!

    • @aruak321 • 3 months ago

      @filigrif What he showed was essentially an img2img workflow (with a depth-map ControlNet) with some extra nodes to pre-condition the image, along with a very high denoise. So I'm not sure what you mean that he could have just used img2img. Also, this absolutely does provide an additional level of control over a completely empty latent.
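For reference, a minimal sketch of what that combination (img2img seed image + depth ControlNet + high denoise) looks like outside ComfyUI, using Hugging Face diffusers. The input file names, model IDs, and the 0.75 strength are illustrative assumptions, not the exact workflow from the video:

```python
# Sketch: img2img + depth ControlNet with high denoise, via diffusers.
# noise_map.png (painted colour splotches) and depth_map.png (painted
# black & white shapes) are hypothetical inputs; the checkpoints are
# common public ones, not necessarily those used in the video.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="portrait of a woman, colorful fashion photography",
    image=Image.open("noise_map.png"),          # img2img seed: the colour noise
    control_image=Image.open("depth_map.png"),  # painted shapes as fake depth
    strength=0.75,  # high denoise: the model mostly repaints, the noise only guides
    num_inference_steps=30,
).images[0]
result.save("styled.png")
```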

  • @HasanAslan • 5 months ago

    The workflow doesn't load. It doesn't give any errors; just nothing happens in ComfyUI. Maybe post the image you produced, even the non-upscaled version?

  • @PostmetaArchitect • 5 months ago

    You can also just use prompt travel to achieve the same result.

  • @jhnbtr • 5 months ago +1

    How is it AI if you have to do all the work? You may as well draw it at this point. Could AI get any more complicated?

  • @pedxing • 5 months ago

    Prooobably going to need to see this with a turbo or latent model for near-real-time wonderment. Also... any way to load a moving (or at least periodically changing / auto-queuing) set of images into the noise channel for some video-effect styling? Thanks for the great video, as always!

    • @pedxing • 5 months ago

      also... how about an actual oscilloscope to create the noise channel from actual NOISE? =)

  • @Soshi2k • 5 months ago

    Going to need GPT to break this down 😂

  • @LouisGedo • 5 months ago

    👋

  • @MrSongib • 5 months ago

    So it's a depth map + custom img2img with high denoise. OK.

  • @kanall103 • 5 months ago

    nothing changes in this world

  • @TeamPhlegmatisch • 5 months ago +1

    That looks nice, but totally random to me.

  • @xn4pl • 5 months ago

    A man at his wits' end for content reinvents img2img but calls it something different to make it seem like a novelty. Bravo.

  • @patfish3291 • 5 months ago +1

    The point is, we need to make AI images way more controllable in an artistic way: painting noise/strokes/lines etc. for the base composition, then refining the detail in a second or third pass, and the color pass afterwards... All of that has to be in a simple interface like Photoshop. This would bring the artistic part back to AI imagery and take it to a completely different level.

  • @Artazar777 • 5 months ago

    The ideas are interesting, but I'm lazy. Anyone have ideas on how to make a lot of noise pictures without spending a lot of time on it?

    • @blisterfingers8169 • 5 months ago +1

      ComfyRoll has a bunch of nodes for generating patterns like halftone, Perlin noise, gradients, etc. Blend a bunch of those together with an image blend node.
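The same batch idea can also be scripted outside ComfyUI. A minimal sketch, assuming a simple gradient plus random blurred shapes is enough of a noise map; every size, count, and colour choice here is an arbitrary illustration:

```python
# Sketch: batch-generate simple noise maps (gradient base + random blurred shapes).
# All parameters (sizes, shape counts, colours, blur radius) are arbitrary picks.
import random
from PIL import Image, ImageDraw, ImageFilter

def make_noise_map(w=768, h=768, shapes=6, seed=None):
    rng = random.Random(seed)
    img = Image.new("RGB", (w, h))
    draw = ImageDraw.Draw(img)
    # Vertical gradient between two random colours as the base layer.
    top = [rng.randrange(256) for _ in range(3)]
    bottom = [rng.randrange(256) for _ in range(3)]
    for y in range(h):
        t = y / h
        draw.line([(0, y), (w, y)], fill=tuple(
            int(a + (b - a) * t) for a, b in zip(top, bottom)))
    # Scatter random filled ellipses/rectangles in random colours.
    for _ in range(shapes):
        xs, ys = sorted(rng.sample(range(w), 2)), sorted(rng.sample(range(h), 2))
        xy = [xs[0], ys[0], xs[1], ys[1]]
        colour = tuple(rng.randrange(256) for _ in range(3))
        (draw.ellipse if rng.random() < 0.5 else draw.rectangle)(xy, fill=colour)
    # Blur so the shapes read as soft colour regions, not hard edges.
    return img.filter(ImageFilter.GaussianBlur(12))

for i in range(20):  # twenty noise maps in a couple of seconds
    make_noise_map(seed=i).save(f"noise_map_{i:02d}.png")
```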

  • @AlexsForestAdventureChannel • 5 months ago

    Thank you for always being a great source of inspiration and admiration; I look forward to watching your videos. Also, thank you for not putting these workflows and tips behind a paid page. I understand why they do it; I'm so glad you're not one of them.

  • @HyperGalaxyEntertainment • 5 months ago

    are you a fan of aespa?

  • @sxonesx • 5 months ago +2

    It's cool, but it's unpredictable. And if it's unpredictable, then it's unusable.

    • @vuongnh0607l • 5 months ago

      This is for when you want just a little bit of control but still let the model hallucinate. If you need stronger control, use the various controlnet models.

  • @robotron07 • 5 months ago +1

    way too convoluted

  • @simonmcdonald446 • 5 months ago

    Interesting. Not really sure why the AI art world has so many anime-girl artworks. Oh well...

  • @lazydogfilms30 • 5 months ago

    Have you given up doing tutorials for proper photography, or are you going down this AI route?

    • @sirflimflam • 5 months ago +6

      I think you're about 12 months late asking that question.

  • @jiggishplays6781 • 5 months ago

    I don't like this because there are way too many errors for someone who is just starting and gets confused by all this stuff. Other workflows have no issues, though.

  • @chirojanee • 5 months ago

    It's cool, but not new...
    I have used gradients generated in ComfyUI in the past, injecting them into a previous image, and can change day to night and a few other things with it.
    The process is almost identical.
    I do like the addition of the depth map; I tend to use monster instead.

  • @artisans8521 • 4 months ago

    What I see are a lot of unbalanced compositions. The poor girl's center of mass is not above her feet, so she would drop to the floor.

  • @NotThatOlivia • 5 months ago

    First

  • @Danny2k34 • 5 months ago +4

    I get why Comfy was created: Gradio is trash, and A1111 doesn't update as fast as it should for something at the cutting edge of AI. Still, I feel like it was really created because "real" artists kept complaining that AI artists just write some text and click generate, which requires no skill and is lazy. So, behold, ComfyUI: an interface that'll give you Blender flashbacks and overcomplicates the whole process of generating a simple image.

    • @blisterfingers8169 • 5 months ago +1

      Node systems have been gaining prevalence in all sorts of rendering areas, including shaders for games, 3D software, etc. The SD ecosystem just lends itself to it.
      Also, check out Invoke for a more artist-focused UI.

    • @dvanyukov • 5 months ago +4

      I think you are missing the point of ComfyUI. It wasn't meant to compete with 1111; it was specifically designed to be a highly modular backend application. When you need to create something you'll call over and over again, it's fantastic, and you can make that workflow very complex. However, if you are experimenting or doing miscellaneous work, 1111 should be your go-to. Personally, I switch between the two depending on the type of work, but I like Comfy more because it gives me more control and re-usability.

    • @dinkledankle • 5 months ago

      It is only as complex as you need it to be; it takes only a few nodes to generate. I don't know why people take such personal offense at a GUI that simply allows essentially endless workflow customization. You're pointlessly hyperbolizing. A potato could learn to use ComfyUI.

  • 5 months ago

    Please don't leave A1111! Comfy is used by very few; A1111 is used by many.

  • @user-kr1jp3qr6q • 5 months ago +1

    Got excited but clicked off after seeing ComfyUI.

    • @vuongnh0607l • 5 months ago

      Missing all the fun stuff

    • @vintagegenious • 5 months ago +1

      Basically use noisy colorful images to do img2img

  • @IDSbrands • 2 months ago

    Makes no practical sense... It's like spinning a wheel: you never know what the outcome is going to be. At best, we look at the results for entertainment, then exit the app and go do some real work.

  • @Slav4o911 • 5 months ago +1

    Using that unComfyUI again... I just don't like it... I'll wait for an Automatic1111 video.

  • @GoodEggGuy • 5 months ago

    Sadly, ComfyUI is so intimidating and so much like programming that it's terrifying. As a new/casual person, this is so very technical that I have given up all hope of using AI art. It's disheartening to watch your videos of the last couple of months, knowing that it would take me years to understand any of this, by which time the tech will have moved on, so it will be of no value :-(

    • @dinkledankle • 5 months ago +1

      It took me less than a month to get comfortable with ComfyUI, and I have zero programming experience; really, it takes only a few days to understand the node flow. It's not intimidating or difficult; you're just putting yourself down for no reason. You can generate images with fewer than five nodes, even fewer with efficiency nodes.

    • @rbscli • 5 months ago

      Come on. I didn't love ComfyUI from the get-go either, but it is not that difficult. There are a ton of dumb-proof tutorials out there. Just do some experimentation and in minutes you will get a grip. If you are that uncomfortable with learning difficult things, I don't even know how you got to SD instead of, for example, Midjourney.

    • @GoodEggGuy • 5 months ago

      @rbscli Olivio recommended Fooocus and I have been using that.

    • @aruak321 • 3 months ago

      @GoodEggGuy ComfyUI actually looks and works like a lot of modern artist tools and workflows that artists (not programmers) are already used to. These types of tools exist to give programming-like control to non-programmers; programmers could do this far more simply with code.

  • @T-Bone54 • 5 months ago

    An overblown overreaction to a basic background texture achievable in any photo editor. 'Noise'? Really? The Emperor's New Clothes, anyone?

    • @aruak321 • 3 months ago

      I think the point is to use specific noise patterns to guide your image, as opposed to the completely random noise of an empty latent. Just another way of experimenting.