Create Realistic Render from Sketch Using AI (you should know this...)

  • Published 23 Jun 2024
  • AI for Architects: learn.designinputstudio.com/a...
    Okay, let me show you how you can turn these quick drawings into realistic renders like these using AI in just seconds.
    You can find all the resources here: designinputstudio.com/create-...
    Use Stable Diffusion & ControlNet in 6 Clicks For FREE: • Use Stable Diffusion &...
    Why to Pay an Architect? (99.7% AI-Designed Project): • Why to Pay an Architec...
    How I Created This Render From Google Maps with AI (sketch into photo): • How I Created This Ren...
    I totally agree that it is not good to rely on these new AI tools to do specific tasks for us, but why not use them to make our lives easier?
    In the early days of a project, it can be used to speed up the workflow in the conceptual phase, especially when you have to present your ideas to others but don't have anything other than many quick conceptual sketches.
    You can use AI to help you improve the overall quality of your presentation. At the same time, I think it is also possible to draw some inspiration from the results: from different forms, materials, etc.
    Let me know what you think about the new developments in the AI world related to Architecture!
    Timeline
    0:00 - Intro
    0:32 - Start Stable Diffusion
    1:00 - First View - Interior
    4:10 - Interior Render Results
    4:22 - Second View - Exterior
    4:57 - Exterior Render Results
    5:19 - Recap
    Join my Patreon: / designinput
    Free PNG Packs: designinputstudio.gumroad.com
    Newsletter: newsletter.designinputstudio....
    Instagram: / design.input
    ControlNet Paper: arxiv.org/pdf/2302.05543.pdf
    ControlNet Models: huggingface.co/lllyasviel/Con...
    Realistic Vision V2.0: civitai.com/models/4201/reali...
    Install Stable Diffusion Locally (Quick Setup Guide): • Install Stable Diffusi...
    Tools I Use
    My Computer: amzn.to/3mwVZr3
    My Mouse: amzn.to/3zPfyxS
    Free Notion Template: designinputstudio.com/freebies/
    Website Hosting: bluehost.sjv.io/rQgeVQ
  • How-to & style

Comments • 313

  • @Fabi_terra · 1 year ago · +18

    Thank you for taking the time to show us this fantastic tool and these very inspiring ideas. I believe AI resources are here to stay; all we have to do is figure out the best way to work with them. We are just starting out and still have a lot to learn, including improving our writing skills to make better prompts.

    • @designinput · 1 year ago

      Hi, thanks for your comment and lovely feedback. Totally agree; soon we will have more ideas on how to use it in a more user-friendly way.
      Regarding prompting, I believe it will have less impact on the overall result in the future. We will be able to explain what we want in plain text without needing any special keywords or phrases.

    • @Fabi_terra · 1 year ago

      🧡

  • @perrymaizon · 1 year ago · +61

    My younger clients will be able to do all my previous work in arch visualisation within a year. GAME OVER!!!

    • @simonperry5990 · 1 year ago · +17

      All the fulfilling human jobs are on the way out! Really depressing.

    • @Tepalus · 1 year ago · +23

      No, they won't, assuming you're an actual architect and not just someone who does visualisations. AI has an understanding of architecture, and its output looks very good, BUT it doesn't know what makes the most sense physically, what the regulations are, or what does and doesn't work. You can still lead projects and oversee the building process in general.
      I also specialised in visualisations, but I am shifting to a more technical level at the moment. Keep up with technology or it owns you.

    • @perrymaizon · 1 year ago · +9

      @Tepalus AI has, in a way, just been born, and it will already be on a totally new level within a year!!! Beast mode in two years... What about 10 years?

    • @wahedasamsuri9248 · 1 year ago · +2

      In third-world countries, this is already affecting the job market. People without even a nuance of design ability can call themselves designers. They say, "We don't need these people to do images of our product now; we can do that with AI." I'm waiting for the day when people left and right start suing over common design interests.

    • @ribertfranhanreagen9821 · 1 year ago · +4

      If this is enough for you to be replaced, I question what you do as an architect. This just helps you make renders more easily.

  • @ThoughtFission · 1 year ago · +9

    Thank you so much for sharing this. I am trying to figure out how to do something similar with portraits: keeping the original face while changing the clothes, background, focal length, etc. This is a great starting point.

    • @designinput · 1 year ago · +1

      Hey, thanks for your lovely feedback and comment! Hmm, interesting idea. I will definitely try it out. Please share your results and experience with us!

    • @krissstoyanoff8853 · 1 year ago · +3

      @designinput Consider making a video in which you show us how to create a 3D render from a SketchUp JPEG without any changes to the composition and the placement of the objects. It would be really helpful.

  • @adetibakayode1332 · 1 year ago · +5

    PERFECT !!!! That's all I can say about it. Nice work bro 👍

  • @tomcarroll6744 · 1 year ago · +1

    Nice work. This is clearly the direction of how concepts will be generated. Probably within another 4 weeks this capability will be available on numerous web apps for free.

    • @designinput · 1 year ago

      Hey Tom, thanks for your comment! Totally agree! We will start to see this workflow integrated into many different applications soon.

  • @IDArch26 · 11 months ago · +1

    Exactly what I was looking for, thank you!

    • @designinput · 11 months ago

      Great to hear! You are very welcome :)

  • @m.a.a.1442 · 1 year ago · +1

    It is almost exactly what I was searching for, thank you for your help.

    • @designinput · 1 year ago · +1

      You are very welcome, thanks for your comment!

  • @designinput · 1 year ago · +4

    You can find all the resources here: designinputstudio.com/create-realistic-render-from-sketch-using-ai-you-should-know-this/
    ControlNet Paper: arxiv.org/pdf/2302.05543.pdf
    ControlNet Models: huggingface.co/lllyasviel/ControlNet/tree/main/models
    Realistic Vision V2.0: civitai.com/models/4201/reali...
    Install Stable Diffusion Locally (Quick Setup Guide): czcams.com/video/Po-ykkCLE6M/video.html
    Instagram: instagram.com/design.input/

    • @fc5130 · 1 year ago

      Do you use Realistic Vision V2.0 or V1.4 like all the tutorials? Thank you!

    • @designinput · 1 year ago · +1

      @fc5130 Hey, in this video I used V1.4, because at that time Realistic Vision V2.0 wasn't available yet. I am using V2.0 at the moment.
      You are very welcome :)

    • @fc5130 · 1 year ago

      @designinput Thank you :)

  • @chantalzwingli5698 · 8 months ago

    WOW, it worked!!! THANKS A LOT!!! I had to download some important stuff like .pth files and then drag them to the right place,
    just to find them afterwards under ControlNet / Model, like in your example. YOU ARE AMAZING WITH THESE TUTORIALS!!! THANKS

    • @designinput · 8 months ago

      Hi, you are right, it's a bit detailed and a long process for an architect, but I'm super happy to hear it worked 🧡 Thanks for the lovely comment!

  • @andreaognyanova · 1 year ago

    Very clear explanation, selamlar ("greetings")!

  • @panzerswineflu · 1 year ago · +1

    I didn't know such a thing was possible, from napkin sketch to render. Thanks!

    • @designinput · 1 year ago

      Hi, thanks for your comment. You are very welcome, happy to hear it was helpful!

  • @tatianagavrilova2252 · 1 year ago

    It is fantastic!!! Thank you so much for sharing.

  • @alexanderburbitskiy4382

    looks amazing!

  • @petera4813 · 1 year ago · +54

    This channel will grow so fast if you can show, with either Stable Diffusion or Midjourney 5.1, how to render the exterior of a building from a SketchUp or 3ds Max file (JPEG) into the render we want, without a lot of distortion, using prompts.
    There is no such video online. And I am positive that if people are not searching for it now, they will very soon!

    • @adoyer04 · 1 year ago · +3

      How do you know that? Maybe every architect and other creative person has heard about AI by now and is following this topic / using it?

    • @petera4813 · 1 year ago · +1

      @adoyer04 Maybe... but maybe I am a wizard 🤷🏻‍♂️

    • @michaelbooth90 · 1 year ago · +9

      @petera4813 I'm an architect in a firm, and we want it but can't find it.

    • @pappathescooper · 1 year ago · +1

      @michaelbooth90 If you find it... let me know...!!!! ;)

    • @designinput · 1 year ago · +11

      Hey, thanks a lot for your nice comment! I totally agree; that's where we are headed. It is not quite possible yet to have a simple one-click render solution without lots of settings and a "trial and error" prompting process. However, I am working on a video for Midjourney and how to use it to render from a sketch or a simple base image. I will share it as soon as I figure out a nice, straightforward workflow.

  • @tuyenguru · 1 month ago

    Great. Thank you very much.

  • @fabrizioc7644 · 3 months ago

    Thank you for the tips! ;-)

  • @Ramb0li · 1 year ago · +4

    Hey, I am an architect from Switzerland, and it really amazes me how far we have come. I already gave a presentation in my architectural office, and I am about to implement this in our design workflow... After using Midjourney a lot, I ran into the problem of not having the control to change just one specific thing... I am now trying a combination of Stable Diffusion and MJ. Thank you for your informative video!

    • @Ramb0li · 1 year ago

      One question I do have: what computer do you use (graphics card and memory), and how long does it take you to create a picture (AI render process)? I am working on a recent MacBook Pro, and it takes me up to 10 min to get a picture.

    • @designinput · 1 year ago · +2

      Hey, thank you for your comment :) That's great to hear, because I think our industry is usually not the fastest at adapting to new technologies :|
      Thanks for your kind words, I really appreciate it ❤

    • @designinput · 1 year ago · +3

      @Ramb0li I am using a laptop with an RTX 3060 and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process the most important part is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average.
      I usually test my prompt and settings at a low resolution with fewer sampling steps to make the process faster. Once I find a nice prompt combination and the correct settings, I render a final version at a higher resolution. Maybe that can help speed up the process.
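The draft-then-final loop described in this reply can be sketched as a small helper that builds request settings for the AUTOMATIC1111 web UI's `/sdapi/v1/txt2img` JSON API. The field names follow that API; the prompt, sizes, and step counts below are illustrative assumptions, not the video's exact settings.

```python
# Sketch of the "iterate at low resolution, render the final at high
# resolution" workflow. Field names follow AUTOMATIC1111's
# /sdapi/v1/txt2img JSON API; the prompt and sizes are assumptions.

def render_settings(prompt: str, final: bool = False) -> dict:
    """Draft runs use a small image and few steps so iteration is fast;
    the final render bumps resolution and sampling steps."""
    return {
        "prompt": prompt,
        "negative_prompt": "blurry, distorted, low quality",
        "width": 960 if final else 512,
        "height": 640 if final else 384,
        "steps": 30 if final else 15,
        "sampler_name": "Euler a",
        "cfg_scale": 7,
    }

draft = render_settings("modern living room, photorealistic, dslr")
final = render_settings("modern living room, photorealistic, dslr", final=True)
print(draft["steps"], final["steps"])  # fewer steps while iterating
```

Once a draft prompt looks right, the same payload with `final=True` can be posted to the API (or the settings copied into the web UI) for the high-resolution render.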

  • @leslie5815 · 1 year ago

    The renderings are niceeeeeeeeeee!

    • @designinput · 1 year ago

      Hi, thanks a lot for your lovely feedback!

  • @davidlaska8248 · 1 year ago

    That is really impressive.

  • @mukondeleliratshilavhi5634

    Love that you frame it as only helping you come up with more ideas. It is only a tool; we are still the masters and still need to match what the image shows to what the client needs... Yes, make more detailed videos!

    • @designinput · 1 year ago · +1

      Exactly! Thank you very much for your comment!
      I will share a detailed step-by-step tutorial about it very soon.

  • @Exindecor · 5 months ago

    Very inspirational.

  • @romneyshipway7161 · 1 year ago

    Thank you for your time.

    • @designinput · 1 year ago

      Hey, you are very welcome! Thanks a lot for your comment, happy to hear that!

  • @RogerioDec · 1 year ago · +1

    THIS is a game changer.

    • @designinput · 1 year ago

      Hi, it really is... Thanks for the comment!

  • @rasoolrahmani1585 · 1 year ago

    Thanks, this was so helpful for me!

    • @designinput · 1 year ago · +1

      Hey, thank you for your comment. So happy to hear that, you are very welcome ❤️

  • @PaoloBhA · 1 year ago

    Hi! Thanks for the video, very interesting. How did you convert the safetensors file for Realistic Vision V2.0 to ckpt?
    Thanks, and keep up the good work!

    • @designinput · 1 year ago

      Hey, thanks for your comment! You can download Realistic Vision V2.0 here: civitai.com/models/4201/realistic-vision-v20
      You should place it in the models folder inside the Stable Diffusion folder.
      Thanks for your support!

  • @tailopezbutnolamborghini4862

    What settings do you use to make it look exactly like my kitchen model in SketchUp? I tried keeping CFG at 8 and matching the height/width, but the AI keeps generating my cabinets/refrigerator all over the place. My model has the refrigerator on the right side, and it generates one on the left. How do I fix this? Can you show me a video tutorial on this?

  • @LorenceLiu · 1 year ago · +2

    Wow, this looks amazing! Here is what I am thinking: is it possible to turn an image into a sketch with AI, then use AI on that sketch to produce designs that actually fit the real-life object?

    • @motassem85 · 1 year ago

      There are a lot of programs that can help turn it into a sketch, but it will not be as clear as rendering it.

  • @antongerasymovich4876 · 1 year ago · +2

    Thanks for these great instructions! I couldn't figure out how to add "models" in the ControlNet tab; I now have only "none" in the "Model" dropdown, but you have options with names like "control_sd15_canny/normal/seg", etc. Thanks!

    • @designinput · 1 year ago

      Hi Anton, thanks for your great feedback! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: czcams.com/video/Uq9N0nqUYqc/video.html

  • @cgimadesimple · 9 months ago

    Impressive!

  • @ilaydakaratas1957 · 1 year ago · +13

    I had never heard of Stable Diffusion before, and it looks really helpful!! Please make a tutorial on how to install it!!

    • @designinput · 1 year ago · +3

      Thank you for your comment! Definitely, I will make one soon.

    • @CoolBreeze39 · 1 year ago · +4

      I agree, this would be helpful!

    • @designinput · 1 year ago

      @H M 😂😂

    • @knight32d · 1 year ago · +1

      Not only is it helpful, it'll save us lots of money and time.

    • @designinput · 1 year ago

      @knight32d Haha :) Totally agree! Thanks for your comment!

  • @systemmusic6830 · 1 year ago

    Thanks a lot ❤

    • @designinput · 1 year ago

      Thanks for your kind comment! Glad to hear that you liked it :)

  • @user-ok2wi2fl9k · 1 year ago

    Thank you! Now you are my teacher!

    • @designinput · 1 year ago

      Hey, glad to hear you liked it :)
      Haha, thanks a lot for your lovely comment!

  • @Constantinesis · 1 year ago · +2

    I wish some of the prompting could be replaced by inputting additional images and tagging or labeling through sketching, perhaps like in DALL-E. For example, instead of describing the modern-styled green sofa with geometric patterns that I want, I should be able to drop in a reference photo of such a sofa, or of any other object in my project. I am sure these kinds of features will come sooner rather than later, but what makes Stable Diffusion amazing is that it is also free and open source.

    • @TJ-ki3gp · 1 year ago · +2

      Just give it time, and everything you described will be possible.

  • @nopnop6274 · 1 year ago

    Wow! Fascinating, thank you for making this video.

    • @designinput · 1 year ago

      Hey, thanks a lot for your lovely feedback and comment!

  • @user-zd8tj4ll9j · 1 year ago

    Hi there! Amazing info. I've been trying this for the past few days. At first I had a problem with CUDA and VRAM. I thought it was because of my GPU (I have an Nvidia GTX 1050 with 4 GB), so I made a few adjustments following another video I'd seen about this (adding medvram or xformers), but they usually change the results from the AI a bit.
    Did you have any problems with CUDA when you tried to generate images? Is there a way to solve this without changing too many parameters?
    Thanks a lot for the info!

    • @designinput · 1 year ago

      Hey, thanks for the comment! I have a GPU with 6 GB VRAM, so I had issues with that too. As far as I know, xformers can change the result slightly, but I had better results with only medvram or lowvram. They use less VRAM but increase the generation time.
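For reference, the flags discussed here are passed to the web UI at launch. A minimal sketch (the Linux/macOS `webui-user.sh` form is shown as an assumption; on Windows the same flags go after `set COMMANDLINE_ARGS=` in `webui-user.bat`):

```shell
# webui-user.sh: launch flags for low-VRAM GPUs (AUTOMATIC1111 web UI).
# --medvram   offloads parts of the model from VRAM; slower, fits ~4-6 GB cards
# --lowvram   even more aggressive offloading; slowest, for very small cards
# --xformers  memory-efficient attention; may change outputs very slightly
export COMMANDLINE_ARGS="--medvram --xformers"
```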

    • @user-zd8tj4ll9j · 1 year ago · +1

      @designinput That's right, I did that too. Just from testing some results, the best ones came from using only medvram. Also, I've seen another option called "Token Merging", but that's in case the other things (xformers, medvram, or lowvram) don't work.
      Thanks a lot again!

  • @gelione · 1 year ago

    Superb.

    • @designinput · 1 year ago

      Hey Berk, thanks a lot for the feedback! ❤

  • @SpinnerPen · 1 year ago · +2

    Could you please tell me your computer's specs? What graphics card are you using, and does it take a long time to generate each image?

    • @designinput · 1 year ago · +2

      Hey, I am using a laptop with an RTX 3060 and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process the most important part is the GPU. Depending on the image resolution, the number of sampling steps, and the sampling method, it takes 2-5 minutes on average.
      I usually test my prompt and settings at a low resolution with fewer sampling steps to make the process faster. Once I find a nice prompt combination and the correct settings, I render a final version at a higher resolution.

  • @kasali2739 · 1 year ago · +2

    Not sure, but I believe you don't need to choose anything from the preprocessor menu; just leave it at "none", because otherwise you let SD create a sketch from a sketch as input.

    • @designinput · 1 year ago

      Yes, you are absolutely right. I didn't realize that at the time. Thank you for letting us know about it!

  • @yuyuyu9948 · 1 year ago

    Hi, thanks for your video! Quick question: which LoRA model did you use, and where can I download it?

    • @designinput · 1 year ago · +1

      Hey, thank you! I used the Realistic Vision V2.0 model together with epi_noiseoffset. You can find their links here:
      civitai.com/models/4201/realistic-vision-v20
      civitai.com/models/13941/epinoiseoffset

    • @yuyuyu9948 · 1 year ago

      @designinput Thank you so much! Really appreciate it!

  • @anagraciela534 · 3 months ago

    Is there a way we can incorporate specific furniture we might see in an online store?

  • @Ssquire11 · 6 months ago · +1

    Thanks a lot, but it would also have helped to show how to install ControlNet.

  • @moizzasaeed5132 · 1 year ago

    I can't figure out how to install it. When I open the webui-user batch file, the console tells me to press any key to continue, and when I do, it just closes the window. I have restarted the PC, and it's still not working properly.

  • @gergelybodnar6002 · 1 year ago

    Hi, everything is fine, but in the ControlNet dropdown under Models it says I have none. Where do I get the ones you have?

    • @designinput · 1 year ago · +1

      Hi, you need to download the ControlNet models separately and then put them in the ControlNet folder: C:\stable-diffusion-webui\models\ControlNet
      You can find all the models here:
      huggingface.co/lllyasviel/ControlNet/tree/main/models
      You don't need all of them; if you want to follow this video, you can download only the Scribble model, but feel free to experiment with all of them :)
      Thanks for your comments!

  • @deborasouza3897 · 2 months ago

    Is this AI paid or free? Can I use it online, or do I need to download a program?
    Thanks 😊

  • @ovidiupatraus-ub8uq · 1 year ago · +3

    Hello, my problem with this is that I can't find "scribble" when I open the preprocessor dropdown, and my generated images are very different from the sketch I upload. Can you help me with that, please? I appreciate your work.

    • @jolopukkii · 1 year ago

      I also have that problem! The images it generates are very different (different shapes, window sizes, roof angles, etc.). I also have Realistic Vision V1.4 and ControlNet with MLSD on... but the results are far from what is shown in the video.

  • @atlasmimarlik · 1 year ago

    Hi dude, thanks for sharing ❤

    • @designinput · 1 year ago · +1

      Hey, thank you for the feedback ❤ Happy to hear that you liked it!

    • @atlasmimarlik · 1 year ago

      @designinput Where are you from?

  • @Amir_Ferdos · 1 year ago

    Thank you 🙏🙏🙏🙏🙏🙏

    • @designinput · 1 year ago

      Thank you, glad that you liked it! ❤

  • @007vivek11 · 1 year ago

    Hey bro, I followed along and got some crazy results, thanks! But in this process I couldn't figure out the ControlNet 1.1 preprocessor; I only got ControlNet itself running! If you can help, that would be great!!

  • @user-wb8ne7fk7t · 1 year ago · +2

    Great video, and I'd like to repeat the steps you demonstrate. The link to "Realistic Vision V1.4" appears broken, but I did find a similar download on Hugging Face. However, I do not have the ControlNet option visible when I go to Stable Diffusion after following all of the steps. What am I missing?

    • @designinput · 1 year ago · +3

      Hey, thanks for letting me know; I replaced it with the updated Realistic Vision V2.0. At the moment, ControlNet doesn't come directly with Stable Diffusion; you need to download it separately and then put it in the ControlNet folder inside the Stable Diffusion folder on your computer.
      You can download the ControlNet models here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      And then you should move them here: C:\SD\stable-diffusion-webui\models\ControlNet
      After you place the files, restart Stable Diffusion, and you should see the ControlNet section. I will upload a detailed step-by-step tutorial about this in the following days.
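The folder layout this reply describes can be sketched as follows. Linux/macOS commands are shown as an assumption (the Windows equivalent is the `C:\...\stable-diffusion-webui` path above); a scratch directory stands in for the real install root so the commands run anywhere.

```shell
# Assumed AUTOMATIC1111 install layout. On a real install, SD_ROOT would be
# something like "$HOME/stable-diffusion-webui"; a scratch dir is used here
# purely to illustrate the structure.
SD_ROOT="$(mktemp -d)/stable-diffusion-webui"

# ControlNet weights (*.pth) go under models/ControlNet:
mkdir -p "$SD_ROOT/models/ControlNet"
# mv ~/Downloads/control_sd15_scribble.pth "$SD_ROOT/models/ControlNet/"

# Checkpoints such as Realistic Vision (*.ckpt / *.safetensors) go under
# models/Stable-diffusion:
mkdir -p "$SD_ROOT/models/Stable-diffusion"

ls "$SD_ROOT/models"
```

After the files are in place, restarting the web UI should make them appear in the respective dropdowns.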

    • @robwest1830 · 1 year ago · +1

      @designinput Do we need all of the ControlNet files? There are eight 4.71 GB files.

    • @designinput · 1 year ago · +1

      @robwest1830 Hey, no, we don't need all of them. If you want to use only your sketches as input, you can download just the scribble model (which is the best for sketches).
      Or you can try the depth model if you want to use views from your 3D model or photos.

    • @Just.Dad.Things · 1 year ago

      @designinput I'm very impressed and would like to try it out myself, but I ran into the same problem: the ControlNet option is missing in Stable Diffusion.
      I created the folder ControlNet in stable-diffusion-webui\models\
      Then I restarted webui-user.bat, but Stable Diffusion doesn't show ControlNet at all. Am I missing something? I downloaded the scribble model and put it in the ControlNet folder.

  • @jojustchilling · 1 year ago

    I'm so happy. Omg

    • @mmkamalraj8931 · 1 year ago

      Nice room decor video

    • @designinput · 1 year ago · +1

      Hey, thanks for your comment :) Happy to hear that you liked it!

  • @votrongkhiem2777 · 1 year ago

    Hi, is the Stable Diffusion checkpoint important for getting that result? I tried using the same settings with the same sketch (your sketch) but couldn't get the same result.

    • @designinput · 1 year ago

      Hey, yes, which model you use has a significant impact on the final image. My current favorite model is Realistic Vision V2.0. You can download it from the link in the video description.
      Thanks for your comment!

    • @votrongkhiem2777 · 1 year ago

      @designinput I tried; however, the result still didn't follow your sketch. I am using the Google Colab version, though.

  • @user-rf2so1fv2r · 11 months ago

    Is there any way we can use the API to create our own app that does this in a more "one-click" kind of way, with the correct prompts?

    • @designinput · 9 months ago

      Hey, there are many web applications that do that right now. You can get the API directly from Stability AI, or just install it on a cloud computing service (like AWS) and run it there.
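As a sketch of the "one-click" idea (not a production app): when the AUTOMATIC1111 web UI is launched with its `--api` flag, its `/sdapi/v1/txt2img` endpoint accepts a JSON payload, and the ControlNet extension adds an `alwayson_scripts` section. The helper below only builds that payload; the endpoint URL, the ControlNet model name, and the style template are illustrative assumptions.

```python
import base64
import json

# Hypothetical "one-click" wrapper: a fixed prompt template plus ControlNet
# scribble settings, so the user only supplies a sketch image and a subject.
# Field names follow the AUTOMATIC1111 /sdapi/v1/txt2img API and its
# ControlNet extension; model name and template are assumptions.

STYLE = "photorealistic architectural render, {subject}, natural light, dslr"

def one_click_payload(sketch_png: bytes, subject: str) -> dict:
    return {
        "prompt": STYLE.format(subject=subject),
        "negative_prompt": "blurry, distorted, low quality",
        "width": 768,
        "height": 512,
        "steps": 25,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": base64.b64encode(sketch_png).decode(),
                    "module": "none",  # the sketch is already line art
                    "model": "control_sd15_scribble",  # assumed model name
                }]
            }
        },
    }

payload = one_click_payload(b"\x89PNG...", "living room with large windows")
# To send it (untested sketch, assumes a local web UI started with --api):
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(json.dumps(payload)[:60])
```

A thin front end on top of a builder like this is essentially what the hosted sketch-to-render web apps do.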

  • @7ckngsane354 · 1 year ago · +2

    This is amazing. I have a question: what does the LoRA tag in your keywords mean? And what does "dslr" mean? Much appreciated!

    • @designinput · 1 year ago · +1

      Hey, thanks for your nice comment! It is an additional LoRA model to improve the overall quality of the image, but it is not necessary to use it. You can learn more about it here: civitai.com/models/13941/epinoiseoffset

    • @7ckngsane354 · 1 year ago

      @designinput Thank you! Increasing the image quality is an important task for me. Could you be so kind as to explain what "dslr" in your keywords means?

    • @designinput · 1 year ago · +1

      @7ckngsane354 Hey, "dslr" refers to DSLR cameras. It is a common keyword in Stable Diffusion prompting, but it is hard to judge the effect of a keyword like this on the overall image quality. Even though it can sometimes help, I don't think it has a huge impact. Feel free to experiment with and without it to see the difference, and share the results with us :)
      You are very welcome ❤

    • @7ckngsane354 · 1 year ago

      @designinput 👍

  • @hyalimy3150 · 8 months ago · +1

    Well done, the videos are very nice and informative. (I think you're Turkish; I said to myself "what nice English, I understood it so well," and then I realized.) I guess there won't be Turkish content after this point :)

  • @RodriguezRenderings · 1 year ago

    Is there a way for it to reference real-world materials? For example, if I provide a link to a backsplash, can it use that?

    • @designinput · 1 year ago · +1

      Hey, unfortunately, not really :/ You can primarily describe it with text; additionally, you can add similar textures to your sketch to mimic a similar material.
      Thanks for your comment!

  • @motassem85 · 1 year ago · +1

    Thanks for the tutorial, bro!
    Can you add the link for the Realistic Vision 1.4 .ckpt you used in the video, please? And one more thing: I can't find ControlNet to add a picture. What's my issue?

  • @michelearchitecturestudent1938

    Great video! I have a question...how do I activate controlnet in the text to image prompt? I don't see this option in my realistic_vision_1.4

    • @designinput · 1 year ago · +2

      Hi Michele, thanks for your comment. You need to download the ControlNet models separately; you can find them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      I will upload a step-by-step tutorial about the whole process soon; I hope that will be helpful for you.

    • @michelearchitecturestudent1938 · 1 year ago

      @designinput Thanks for the reply. I found the video... but I still have problems.

  • @wangshuyen · 1 year ago · +9

    Great video. You should do one with the same sketches but using Midjourney, as a comparison, please.

    • @designinput · 1 year ago · +3

      Hi, thanks for your lovely comment and suggestion! I am currently working on that; I will upload a video about it soon!

  • @islandersean2213 · 1 year ago

    How do I load control_sd15_scribble into the Model dropdown? Thank you!

  • @user-ik2to2hu3y · 10 months ago · +1

    Thank you!

  • @michawalkowiak1464 · 1 year ago

    Is it possible to generate an interior using a 3D model of the lamp?

  • @cador1624 · 1 year ago

    Thanks for sharing.
    I have a problem making my designs look as realistic as possible because I don't have the budget to buy a good-performance PC (I can't even open D5 Render, and I get 0 to 5 fps when using Lumion). If only I could master this and somehow use it to render my design images, it would be really helpful for my future!

    • @designinput · 1 year ago

      Hey, you are very welcome; thanks for your comment! Ah, I feel your pain... Well, local Stable Diffusion is not a good option in this case, but you can try cloud-based platforms to use Stable Diffusion; for just a couple of bucks, you can use it without any issues. I plan to make a video to share some options for these platforms.

    • @cador1624 · 1 year ago

      @designinput Ah, thanks for your insight, I'm going to look into that! This video gave me a glimpse of hope that maybe free AI can render our designs into realistic images and let us adjust the materials/colors too! I think it will hit many high-budget rendering programs, and their very high-spec PCs, hard! 🤣

  • @aceheart5828 · 1 year ago · +28

    So this needs to be developed with an interactive user interface.
    The word prompts need to become labels. Architects want to be able to draw lines from objects and label them, feeding specific information into the AI generation.
    The architect does not care about multiple options as much as he cares about creating the specific option he desires.
    He must be enabled, through the interface, to engage in an interactive back and forth: erasing parts and redrawing them, developing parts of the drawing, adding more specific labels... all in an endeavour to produce a vision as close as possible to what he sees in his mind's eye.
    This is of utmost importance.
    All said and done, on a positive note, this is the only sphere I have seen thus far, among all the AI-related attempts, that is useful for architects, and which I think they may use and be willing to pay for.
    It would be idiotic not to take it forward to fruition.

    • @designinput · 1 year ago · +5

      Hey, you are right, and we will soon see more user-friendly interfaces integrated with other software for sure.
      I totally agree; in the case of architecture, accuracy and quality are way more important than the number of alternatives you have. But even a couple of months ago, having this much control over the whole generation process was impossible. And it is getting better every day. I am sure you will be able to fine-tune your final result very soon.
      Thank you very much for your comment!

    • @Constantinesis · a year ago +1

      I agree with you. Some of the drawing/erasing features of DALL-E would be amazing! You can already use DALL-E to replace parts of an image, but you can't use it for the entire image2image process.

    • @StringBanger · a year ago +1

      Before you know it, AI could take over the entire AEC industry. It could be smart enough to pull code sets from UpCodes, NFPA, etc., and all relevant code models applicable by state and jurisdiction to construct an entire BIM model that is fully code-compliant based on best engineering practices, all while creating multiple models for clients within minutes.

    • @user-cn9kk8bj4e · a year ago +1

      I certainly agree with you that text prompts need to become labels. Great idea!

  • @METTI1986LA · a year ago +2

    I actually don't want random results in my designs... and it's really not that hard to texture or model a 3D scene... but it may be useful for finding ideas.

    • @williambrady-is8bd · a year ago +8

      But with the technology available, people will start using it, and it may become the industry norm to have this quality of rendering early in the design stage. It might become less about what we want and more about what the client/market expects. We are already facing similar things, with clients expecting renders early on so they can visualise the thinking. They don't understand sketches and drawings like we do. The majority don't actually understand the work we do beyond what colour the kitchen bench should be, which is often how they want to express some control/knowledge in the design process. I also would never be able to produce so many variations at this level of detail in the time it would take to sketch five solid ideas, model them in SketchUp or Rhino, and then render them while dealing with V-Ray crashing all the time or too many trees and details slowing things down. I also think this will change architecture schools dramatically in terms of pin-ups. Students who don't have that critical and analytical depth to their thinking will flock to this aesthetic-driven approach to ideation.

  • @B-water · a year ago +1

    Ammmmmaaaaaaaazzzzing

  • @dalegas76 · a year ago

    I have not seen any mention anywhere of the resolution of the rendered images. How big, or what size, can you get from this? Thanks 😊

    • @designinput · a year ago +1

      Hi, by default it generates 512x512 images, but you can enter custom values up to 2048x2048. I think that's the limit.
      Thanks for your comment :)

    • @dalegas76 · a year ago

      @@designinput Thanks for your answer; I got the info I needed. These AI tools are developing fast, and I believe better ones, more accurate for the architecture branch, will be developed soon.😊👍

    • @designinput · a year ago +1

      @@dalegas76 you are very welcome, that's great! Totally agree, I believe it will be very soon :)

  • @adoyer04 · a year ago

    Can I upload a floor plan to create scenery for every angle of visualisation that is needed? The views would have to match from angle to angle and be correct with respect to the reality around them. Give it some years and you will just place points on a 3D model to do so: keywords for every surface and a hierarchy for the post-production look. From 3D to prompts, to avoid fine-tuning in specific programs you may not understand.

  • @trevorpearson1702 · 2 months ago

    How can I convert a 2D DWG file into a 3D render using AI?

  • @marcschipperheyn4526 · a year ago +5

    I would like to see a video that uses both a floor plan and 2D designs, for example a kitchen seen from the front. It would be interesting to see if people like me, with limited drawing and no 3D skills, could use tools like Figma to create 2D arrangements of cabinets and floor plans to produce an effective rendering of the environment.

    • @designinput · a year ago +3

      Hi, thanks for your comment :) There is no such tool that allows us to use both floor plans and side views as input to create 3D models or renders. But the whole industry is moving and improving incredibly fast, and I am pretty sure someone is working on this right now :)
      When I see something related, I will definitely share it!

    • @amagro9495 · a year ago +1

      @@designinput Congrats on the video. Do you know if it is possible to generate, from a single image/design, several others with different perspectives?

    • @designinput · a year ago +1

      @@amagro9495 Thanks for your comment! Hmm, good question. Changing perspective for the same space can be challenging if you are only using text-to-image or image-to-image modes. But if you have a basic 3D model that you can work on, you can manage to do it. I just uploaded a video about creating renders from the 3D model; feel free to check that out.
      But I will definitely test and experiment with the perspective change!

    • @fervo1991 · a year ago

      @@designinput I think he means using a floorplan in SD to generate a "3D rendering"

  • @phgduo3256 · a year ago

    Hi there, thanks for inspiring tutorial; what does " " mean?

    • @designinput · a year ago +1

      Hey, thanks a lot for your comment, much appreciated

  • @Darkcrimefiles9 · a year ago

    Hey, I need your help. Could you please render one image for my college project? I don't have a laptop.

    • @designinput · a year ago +1

      Hey, thanks for the comments! How can I help? Let me know please, thanks :)

    • @Darkcrimefiles9 · a year ago

      I can send you a sketch; could you please convert it into a colour image?

    • @Darkcrimefiles9 · a year ago

      Please reply as soon as possible.

  • @user-kt4kh8he7e · 9 months ago

    Directly from SketchUp to AI, for testing different looks.

  • @mlee9049 · 10 months ago

    Hi, do you know of any A.I. that allows you to change the camera views for interiors and exteriors?

    • @designinput · 9 months ago +1

      Hey, unfortunately it's not possible yet, so some manual work is needed. But maybe in the near future, why not?

    • @mlee9049 · 9 months ago

      @@designinput Thank you for your reply. That will be a game changer.

    • @designinput · 9 months ago

      @@mlee9049 Absolutely!

  • @danielummenhofer6120 · a year ago +1

    I followed your steps, but for some reason it won't use the image/sketch and makes a completely new image instead. How do you get Stable Diffusion to use the sketch as the base to create the CGI on?

    • @designinput · a year ago

      Hey Daniel, thanks for your comment. It is probably related to ControlNet. Did you enable it before you generated the new image?

    • @danielummenhofer6120 · a year ago

      @@designinput Thank you for your reply. Yes, after reading through the comments I saw someone mention turning it on, and I did. It still didn't solve the issue. I'm following your new video now to see if that works.

  • @tusharpandey858 · a year ago

    Can I install Stable Diffusion on my home PC? It has an RTX 2060 graphics card, a 10th-gen i7, and 16 GB of RAM. Will it work?

    • @designinput · a year ago

      Hey, I believe you can. It mostly depends on your GPU and the amount of VRAM it has. I am using an RTX 3060 with 6 GB of VRAM, so feel free to test it out.
      If you can't, you can check out this video to use it on Google Colab: czcams.com/video/Uq9N0nqUYqc/video.html&lc=Ugxw1pFnOcldtEnPEAt4AaABAg
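      For readers wondering whether their own GPU clears the bar, here is a minimal sketch for checking what PyTorch can see. It assumes PyTorch is installed (as in a typical local Stable Diffusion setup) and degrades gracefully if it isn't; the 6 GB figure from the reply is just a reference point, not a hard requirement.

```python
def available_vram_gb():
    """Return the total VRAM of GPU 0 in GiB, or None if PyTorch/CUDA is unavailable."""
    try:
        import torch  # assumed present in a Stable Diffusion install
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    # total_memory is reported in bytes; convert to GiB.
    return torch.cuda.get_device_properties(0).total_memory / 1024**3

vram = available_vram_gb()
if vram is None:
    print("No CUDA GPU visible to PyTorch.")
else:
    print(f"GPU 0 reports {vram:.1f} GiB of VRAM.")
```

      If the reported figure is well below 6 GiB, the cloud options mentioned elsewhere in this thread are likely the better route.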

  • @maiyadagamal8142 · 9 months ago

    Can you give examples of input text that works?

    • @designinput · 9 months ago

      Hey, there is no special formula for the text input. I mostly try to follow the structure from the checkpoint I am using, but you can just freely describe the scene you would like to create in your prompt.

  • @moodoo3001 · a year ago +1

    I can't find the scribble preprocessor even though I downloaded the scribble model. Other scribble preprocessors like scribble_hed and pidinet are available, so what is the problem?

    • @designinput · a year ago +1

      Hey, if you upload your drawing to ControlNet, you don't need to use a preprocessor. Just choose "none" for the preprocessor and the "scribble" model. Thanks for your comment!
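      For anyone driving the web UI programmatically, the same choice (preprocessor "none", scribble model) can be sketched as an API payload. This is a sketch only: it assumes the AUTOMATIC1111 web UI is running with the --api flag and the sd-webui-controlnet extension installed, and the exact field names can vary between extension versions.

```python
# Builds a txt2img request that conditions on an existing scribble sketch,
# mirroring the UI settings from the reply above: module "none" because the
# drawing is already a scribble, plus the scribble ControlNet model.
import base64

def build_payload(sketch_path: str, prompt: str) -> dict:
    """Build an AUTOMATIC1111-style txt2img payload with a ControlNet scribble input."""
    with open(sketch_path, "rb") as f:
        sketch_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "prompt": prompt,
        "width": 512,   # default generation size mentioned elsewhere in this thread
        "height": 512,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": sketch_b64,
                    "module": "none",                  # no preprocessing of the sketch
                    "model": "control_sd15_scribble",  # model name as installed locally
                }],
            },
        },
    }

# The payload would then be POSTed to the local web UI, e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=build_payload("sketch.png", "..."))
```

      Everything here besides the "none"/"scribble" pairing itself (endpoint, port, key names) should be checked against the version of the extension you have installed.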

    • @moodoo3001 · a year ago +1

      @@designinput Ok 👍 thanks for your help

  • @crisislab · a year ago

    Forgive my ignorance: how do you install ControlNet?

    • @designinput · a year ago

      Hey, thanks for your comment! You must download them separately and place them in the ControlNet folder under the models folder. You can download them here: huggingface.co/lllyasviel/ControlNet/tree/main/models
      Also, you can check this video to use it easily: Use Stable Diffusion & ControlNet in 6 Clicks For FREE: czcams.com/video/Uq9N0nqUYqc/video.html
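      "Place them in the ControlNet folder under the models folder" can be sketched like this, assuming a default stable-diffusion-webui checkout in your home directory (the install path and the model filename are illustrative, not prescriptive):

```python
# Illustrative folder layout only -- adjust webui_dir to your own install.
from pathlib import Path

webui_dir = Path.home() / "stable-diffusion-webui"

# ControlNet models go in their own folder under models/:
controlnet_dir = webui_dir / "models" / "ControlNet"
controlnet_dir.mkdir(parents=True, exist_ok=True)

# After downloading e.g. control_sd15_scribble.pth from the Hugging Face repo
# linked above, move it into that folder (uncomment once the file exists):
# import shutil
# shutil.move("control_sd15_scribble.pth", controlnet_dir / "control_sd15_scribble.pth")

print(f"ControlNet models belong in: {controlnet_dir}")
```

      Restart the web UI afterwards so the new models appear in the ControlNet model dropdown.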

  • @marcinooooo · 11 months ago +1

    Hey, thank you soooo much for this video! Your results are amazing, but mine, well, they s*ck haha...
    I think the problem is that control_sd15_scribble does not load for me. Can you give links to all of the files (models) we need to download? I am using RunPod, maybe you could help me with that?

    • @marcinooooo · 11 months ago

      Hey, so I see I have a problem in the "preprocessor x model" section, since I don't see '....Scribble' but this: 'control_v11p_sd15_canny [d14c016b]'.
      I have uploaded it to workspace/stable-diffusion-webui/models/Stable-diffusion/control_sd15_scribble.pth.
      Or should I put it somewhere else?
      Thank you

    • @designinput · 9 months ago

      Hey, sorry for the late response :( That path points to the Stable Diffusion checkpoint folder; the ControlNet models belong in the ControlNet folder under models instead (workspace/stable-diffusion-webui/models/ControlNet/control_sd15_scribble.pth), which would explain why the scribble model doesn't show up. Let me know if it still doesn't work and we can take a look together.

  • @victorfeinstein1815 · a year ago +1

    Teşekkürler (Thank you!)

  • @pilardicio7266 · a year ago

    Hi there! I have a Mac. How can I install stable diffusion?

    • @designinput · a year ago

      Hey, unfortunately, I don't have much experience with how to use it on a Mac but you can follow this tutorial to install it. Hopefully, it will help, thanks :)
      czcams.com/video/Jh-clc4jEvk/video.html

  • @epelfeld · 11 months ago

    Is there any difference between sketch and scribble models?

    • @designinput · 9 months ago

      Hi, no, there is only one model for sketch inputs, but with different preprocessor options. However, if you upload a sketch directly, you don't need to use any preprocessor.

  • @michelearchitecturestudent1938

    I found out how to install ControlNet, but I can only select the preprocessor and not the model in the tab. In the video you have multiple options; my only one is "none".
    Do you know how to fix it?

    • @designinput · a year ago

      Hey, did you download the ControlNet models and place them into the ControlNet folder under the models folder?

    • @michelearchitecturestudent1938 · a year ago

      @@designinput Thanks for the reply again. Now it works ❤️

    • @designinput · a year ago

      @@michelearchitecturestudent1938 you are very welcome ❤

  • @DonVitoCS2workshop · a year ago

    Until we're able to change specific materials on specific objects, I don't see a huge point in this.
    The sketch would be enough to let your agency or even the client imagine the result, and the AI render could be very misleading compared to a handmade render of the sketch.
    Just a couple of papers down the line, though, this will be the new process for how it's done.

  • @vaskodrogriski2697 · a year ago

    How the hell did you get Stable Diffusion to install? I've watched dozens of videos with instructions on how to install it after I saw your video, but not one of them has worked. I've installed Git, Python, and everything that is instructed, but nothing seems to work.

    • @designinput · a year ago

      Hey, yes, I have used a similar process. What is the problem for you? What error do you get? I will upload a new video today to show how you can use it without downloading it to your computer, I hope that can help.

    • @vaskodrogriski2697 · a year ago

      @@designinput Hi, essentially I run into problems when I launch the webui-user file: it tells me it can't install torch. I therefore cannot get past that point to get the URL returned.

  • @schenier · a year ago +1

    Your image is already a scribble, so you don't need to set the preprocessor to scribble; it can be left at none. Use the preprocessor only if you want to convert your image into a scribble.

    • @designinput · a year ago

      Yes, you are absolutely right. I didn't realize that at that time. Thank you for letting us know about it!

  • @user-qi3bm8nm4r · 3 months ago

    I still think it's hard to control and fine-tune the AI image; it's still better to handle it with 3D software.

  • @alpahetmk · a year ago +1

    Nice! By the way, are you Turkish?

  • @nguyenthithuhieu9501 · a year ago

    Why is my ControlNet not showing?

  • @pedrodeelizalde7812 · a year ago

    Hi, thanks, but how do I install Realistic Vision V2.0?

    • @designinput · a year ago +1

      Hi, you can download it from here:
      civitai.com/models/4201/realistic-vision-v20

  • @user-dj9xn2qi3v · a year ago

    Peace be upon you. Could you design some of the images for me? My account has stopped working.

  • @nevergiveuptrader · a year ago +1

    Can you tell me how to install the "Realistic Vision V1.4" or "Realistic Vision V2.0" model after downloading it? Thank you ^^

    • @designinput · a year ago +2

      Hey, sure, after you download the Realistic Vision model, all you need to do is drop that file to the "C:\stable-diffusion-webui\models\Stable-diffusion" folder. After that, if you start Stable Diffusion again, you can find it in the available model's menu.
      Let me know if you need any help. Thank you :)
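      The same step as a small sketch, mirroring the Windows path from the reply in a portable way (the install location and the checkpoint filename are assumptions; yours may differ):

```python
# Illustrative only: mirrors C:\stable-diffusion-webui\models\Stable-diffusion
# from the reply above; adjust webui_dir to your own install location.
from pathlib import Path

webui_dir = Path.home() / "stable-diffusion-webui"
checkpoint_dir = webui_dir / "models" / "Stable-diffusion"
checkpoint_dir.mkdir(parents=True, exist_ok=True)

# After downloading the Realistic Vision checkpoint from the Civitai page
# (filename may vary), move it here, then restart the web UI:
# import shutil
# shutil.move("realisticVisionV20.safetensors", checkpoint_dir / "realisticVisionV20.safetensors")

print(f"Checkpoint files belong in: {checkpoint_dir}")
```

      Once the file is in place and the web UI has restarted, the model appears in the checkpoint dropdown at the top of the interface.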

    • @nevergiveuptrader · a year ago

      @@designinput Yes, I did that, thank you so much!

    • @designinput · a year ago

      @@nevergiveuptrader great, happy to hear it worked. You are very welcome :)

    • @robwest1830 · a year ago

      I don't even get how to download it :D Please tell me.

    • @designinput · a year ago

      Hi @@robwest1830, you can find all the necessary resources in the link in the video description.
      For installation, I will share a quick tutorial, but until then, feel free to follow this one:
      czcams.com/video/hnJh1tk1DQM/video.html
      He clearly explains everything you need to install to start using it.

  • @Ssquire11 · 6 months ago

    The scribble model isn't in my dropdown.

  • @kebsriad · 11 months ago

    What is AI, please?

  • @jp5862 · a year ago

    I am an architectural designer. I've been doing this for 15 years. I don't think I need it anymore.

  • @tuynsgoing789 · a year ago

    Why does it keep generating a different image, not the same as the one I uploaded?

    • @tuynsgoing789 · a year ago

      Even though I checked Enable on ControlNet.

    • @designinput · a year ago

      Hey, what do you mean exactly by different images?
      It is possible to have a certain level of control over the process with ControlNet but up to some level. Even if you keep the seed number the same, the final images probably will be very different from each other.
      I am sure we will have more control over it soon with all the new developments, but it is not quite possible to generate exactly the same image multiple times.
      Thanks for your comment!

  • @tynnon · 11 months ago

    great tool for visualization, but architecture is not just visual, anyway cool stuff!!!

    • @designinput · 11 months ago

      Hi, totally agree! Thanks for the comment :)

  • @cr4723 · a year ago

    I tried it. Constructing / modeling takes up most of the time. Assigning the materials in the render program is quick. In the AI you have to try a lot of prompts and generate a lot of images. That takes longer. And it's inferior in quality.

    • @designinput · 9 months ago

      Hey, if the goal is to create a final-quality render, you are absolutely right: it can easily become more time-consuming than actually modeling everything and creating renders. But if the goal is creating something more conceptual for the early phases of the design process, it can be really beneficial and time-saving.

  • @phily708 · a year ago

    No need for a preprocessor in the ControlNet tab when you already have an image in the form the ControlNet model expects.

    • @designinput · a year ago +1

      Yes, you are absolutely right. I didn't realize that at that time. Thank you for letting us know about it!

  • @1insp3ru16 · a year ago +1

    Will a gaming laptop be able to run it?

    • @designinput · a year ago +1

      It mostly depends on your GPU and the amount of VRAM it has. But you don't need something crazy; my laptop has an RTX3060 with 6GB VRAM, and I can use it without any issues.

    • @1insp3ru16 · a year ago

      @@designinput I have an ASUS ROG laptop with a Ryzen 9, RX 6800 X graphics, and 16 GB of RAM; the graphics are roughly equivalent to an RTX 3080.