Thanks for this! I followed the text doc, since the commands there could be copy-pasted. The only step I couldn't do was "To update, run git pull in the ~/stable-diffusion-webui folder," but I still ended up with a URL and managed to generate an image. I guess that's fine, right?
At 3:40, after typing "cd stable-diffusion-webui" you say to press enter, but then two letters or something get typed and you don't mention it! I can't continue the install because of it. What is it I have to type?
Hi, I finished all the tutorial, but once I have to copy the URL provided in terminal, it won't open on the browser. Any suggestions? Thank you for the video
Thank you very much, brother. For now everything works for me on my M1 Max. One thing though: do I need to do something specific when closing it, like the Control+C you mentioned, or can I just close the terminal and browser?
Ctrl+C stops a process in the terminal. If you close the terminal, it's the same result. You can just close the terminal; nothing will break. He did Ctrl+C because he wanted to stop the process but not the terminal, so he could continue using it. But for you, at the end of your session just close the terminal, you don't need it anymore.
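For the curious: Ctrl+C works by sending the SIGINT signal to the foreground process, which that process can catch or act on. A tiny sketch of the mechanism:

```shell
# Ctrl+C sends SIGINT to the foreground process; a trap handler can catch it.
trap 'echo "caught SIGINT"' INT
# Simulate pressing Ctrl+C by signalling this shell ($$ is its own PID).
kill -INT $$
```

A process without a handler simply terminates, which is why the webui stops cleanly.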
Hi! Thanks for the video! But I have a problem with generating images: TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead. How do I fix it? Please help.
After ups and downs, with most of the time spent fixing error messages, it took me about 5h to make it work. Great vid, easy to follow.
Learning IT is so much fun when there's a brilliant voice explaining everything clearly instead of an accent so thick you can barely follow it.
I hardly ever comment but you are a legend my friend. This saved me hours, thank you!
After installing Homebrew, the terminal output will include instructions to add brew to your PATH. It's not shown in this video because he has already installed brew, but you need to do it.
Thank you
I don't understand how to do it. Care to explain? I'm writing in the commands it's telling me to run, but get the error message "-bash: syntax error near unexpected token `)'"
@@mohammedsarmadawy362 There are two strings under "==> Next steps:
- Run these two commands in your terminal to add Homebrew to your PATH:". Copy the first one hit enter, and then copy the second one and hit enter. After that, you can continue to enter: brew install cmake....
@@TienW626 where do the commands begin and end
This video didn't work for me. It says "error: can't generate metadata" at the end. I don't know if I did something wrong; I followed the video exactly and it didn't work.
Very timely. We're doing an artist residency using AI generated videos. Exactly what I needed. Thank you so much!
nice can i have the info of the residency? curious
Outstanding tutorial, thank you. Installs and runs on MacBook Pro M1 with stable-diffusion-v1-5.
THAAAAANK You!!!
I tried to install for 4 hours until I found your video!!
Hero!
what type of mac do you have?
@@digitalpabs 2020 Intel
Hi, Im getting this error RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
yes me too what can done??
me too.
Same
i have been struggling with installing SD, THANK YOU VERY MUCH , I DID IT
Thank you so much ! Those are very clear instructions, I was able to do it. Hopefully it will become simpler in the future, but I guess we're still early adopters
To create a public link, set `share=True` in `launch()`.
Startup time: 120.1s (import torch: 3.8s, import gradio: 3.8s, import ldm: 0.7s, other imports: 3.4s, setup codeformer: 0.2s, load scripts: 1.5s, load SD checkpoint: 105.1s, create ui: 1.1s, gradio launch: 0.3s).
Error completing request
Arguments: ('task(eprn50rcg0itid4)', 'pool', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 625, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
return self.text_model(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 378, in forward
hidden_states = self.layer_norm1(hidden_states)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 189, in forward
return F.layer_norm(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2503, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
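The "not implemented for 'Half'" errors reported in this thread (LayerNormKernelImpl, upsample_nearest2d_channels_last) typically mean the active backend lacks half-precision (float16) support. The common workaround is launching with full precision; a sketch, assuming the tutorial's default clone location:

```shell
# Force full precision; addresses "... not implemented for 'Half'" on backends
# without float16 support. Path assumes the tutorial's default install folder.
cd ~/stable-diffusion-webui
echo 'export COMMANDLINE_ARGS="--no-half --no-half-vae"' >> webui-user.sh
./webui.sh
```

Full precision roughly doubles memory use, so generation may be slower, but it should stop the crash.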
I have a Mac and had an issue setting up Stable Diffusion. I finally did it from the terminal and got there, but the first time I tried to generate an image I got this error:
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
Time taken: 0.74s
I even edited the launch command for Stable Diffusion to use the no-half option and updated the Python version, but no luck. Is there any other way?
Got the same problem
I have an error, can you advise how to fix it? NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
did u figure this out?
When I put the command: "brew install cmake protobuf rust python@3.10 git wget", it says "zsh: command not found: brew". Any idea how to fix that?
cd
touch .zshrc
PATH="$PATH:/opt/homebrew/bin"
echo 'export PATH="$PATH:/opt/homebrew/bin"' >> .zshrc
Run the commands in that order in the terminal: you'll add Homebrew's folder to the PATH for the current session and create the missing .zshrc file with an export so future sessions pick it up. (Note the single quotes around the echo argument; without them, your entire current PATH gets baked into the file.)
Now you should be able to use brew.
@@MayurGawkar you're my hero thanks
Hi! I have this error when pressing generate image: AttributeError: 'NoneType' object has no attribute 'lowvram'. Do you know how to fix it? Thanks!!
I am stuck at a stage where I had used the browser link, and then something caused the terminal to get stuck at "Model loaded in 5.5s (calculate hash:..... "
Do you have a tutorial on how to install deforum? Thanks so much for this video!
Dear TroubleChute, the terminal doesn't go ahead when I write this: "brew install cmake protobuf rust python@3.10 git wget". Can you help me? Thank you
I'm having the same problem.
Can I use Radeon Vega eGpu with it?
When I try to generate something, it pops up a window saying Python closed unexpectedly, and the program aborts in the terminal.
gave an error at the stage of launching the web interface:
Installing torch and torchvision
ERROR: Could not find a version that satisfies the requirement torch==2.0.1 (from versions: none)
ERROR: No matching distribution found for torch==2.0.1
WARNING: You are using pip version 20.2.3; however, version 23.2.1 is available.
You should consider upgrading via the '/Users/kupyasha/stable-diffusion-webui/venv/bin/python3 -m pip install --upgrade pip' command.
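"from versions: none" together with pip 20.2.3 usually means the venv was created with a Python too old to have torch==2.0.1 wheels. A sketch of a fix, assuming Homebrew's python@3.10 from earlier in the video, the default install path, and that webui.sh still reads the python_cmd variable for its interpreter (recent versions do):

```shell
# Rebuild the webui's venv against Homebrew's Python 3.10, which has
# torch==2.0.1 wheels available. Paths assume the tutorial's defaults.
cd ~/stable-diffusion-webui
rm -rf venv                                         # drop the venv built with the old Python
python_cmd=/opt/homebrew/bin/python3.10 ./webui.sh  # webui.sh recreates the venv
```

Running `python3 --version` first tells you whether an old system Python was being picked up.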
Sorry mate, I tried to use the command "brew install cmake protobuf rust python@3.10 git wget", but it came back as command not found. Do you kindly have any clues how I can solve that? Thanks
Install brew first.
Hi help? not working getting an error: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Time taken: 0.49s
add "export PATH=/opt/homebrew/bin:$PATH" if you get this error while installing brew "zsh: command not found: brew"
thanks
Just what I needed, thanks!
It says error metadata generation failed at the end. Any idea how I can fix this? Using Mac M1
@@quackyman796 same!
What about if you get "Error completing request"? Thoughts?
I can't install python@3.10. Command not found. Mac pro M1, Ventura.
me too
me too
It said error can’t generate metadata for me at the last step
- Run these two commands in your terminal to add Homebrew to your PATH:
(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/vilijam/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
Great vid but, when i want to generate an image this happens:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
What does this mean and how can I fix it?
def what I'm searching for. Tkx so much bro
thank you so much!
this is amazing
Thanks mate! Such a nice tutorial. Im gone check all your videos now, and of course donate!
Got a message saying "command not found: brew" what should I do about that? Thanks in advance!
same here, can you fix it?
did you figure it out bud?
@@saideeprai29 Nooope lol, not super savvy with this kinda stuff so I had to set it aside
idk why my SD won't generate :( it just says RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Everything worked so far but when I try to generate the pool it says
"RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'"
can somebody tell me what the problem is?
I'm getting this error after installing and running a prompt: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. I am a super beginner and don't know how to fix this.
I have the same error
@@Luxeduardo same thing
I appreciate your video. I had no clue what I was doing, but your video helped me install everything. My only question is: how do I know I'm running the latest version, which is 1.0? I've been looking for how to update this on a Mac, but so far I've only found PC instructions.
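Updating works the same way on macOS as anywhere else, since the webui is a git checkout (the video's text doc mentions git pull too). A sketch, assuming the default clone location:

```shell
# Pull the latest release and relaunch; the running version number is
# shown in the footer of the web UI page.
cd ~/stable-diffusion-webui
git pull
./webui.sh
```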
Hi, thank you for this! Very helpful! I got through everything smoothly, but when I tried to generate the image, it said ""LayerNormKernelImpl" not implemented for 'Half'" in the terminal and failed. How can I fix this?
i have the same problems like you , i don't know how to fix
me too, please anyone..
Me too
hi there me too.
@@jeremieclimaco9946 does one of you find the issue ?
Well, I got stuck at brew install cmake protobuf rust python@3.10 git wget. It says command not found.
Solved my own problem. Copy and paste:
1) nano ~/.zshrc, then Enter
2) export PATH="/opt/homebrew/bin:$PATH", then Enter, then Control+X, then Y, then Enter to save
3) source ~/.zshrc, then Enter
4) brew --version (check version), then Enter
5) brew install cmake protobuf rust python@3.10 git wget, then Enter
@@PHUKU thx from thailand!!
I tried, but it didn't work..
@@PHUKU thank you sooo much brother!!!!!!!
This was super helpful! Thanks for sharing!
It keeps saying "Stable diffusion model failed to load" at the very last step. I did everything the same as you. What am I doing wrong??
When I tried to run webui.bat, this error comes up:
ERROR: Could not find a version that satisfies the requirement torch==1.12.1 (from versions: 2.0.0)
ERROR: No matching distribution found for torch==1.12.1
fr
me too (((( HELP
did you find how to solve this problem?
Genius, thanks for the tutorial
Does everything work?
@@gerychdo yes
How fast is Stable Diffusion on a Mac M2? Which PC GPU is it equivalent to in terms of speed?
Thanks for your very clear tutorial. It seems the commandline_args lowvram or medvram options don't change too much.
Thank you for the video. I am using an Apple M2 Max and Deforum in Stable Diffusion is not working; maybe you can help? I get this report: "'NoneType' object has no attribute 'sd_checkpoint_info'. Before reporting, please check your schedules/init values. Full error message is in your terminal/cli."
Got through, but in the web browser UI I get an error (RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'). Is there a particular problem with that? I have an M1 Pro...
Great tutorial; the only thing is it's missing ControlNet.
Hi! Thank you for your video, it was great! However I am encountering an issue:
RuntimeError: MPS backend out of memory (MPS allocated: 4.14 GB, other allocations: 2.33 GB, max allowed: 6.80 GB). Tried to allocate 1012.50 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
No idea where I have to change the value to 0, do you know?
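The variable named in the error is an environment variable read by PyTorch's MPS allocator, so it can be set just for the launch rather than edited in a file. A sketch, assuming the default install path; note the error's own warning that 0.0 removes the safety limit:

```shell
# Lift the MPS memory cap for this run only (per the error message's hint).
# May destabilize the system if memory truly runs out.
cd ~/stable-diffusion-webui
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 ./webui.sh
```

Reducing the image size or batch count is the safer first step before disabling the limit.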
After completing the install process I don't get the local URL. The last line is "Model loaded in 64.6s (calculate hash: 32.3s, load weights from disk: 12.1s, create model: 6.5s, apply weights to model: 11.2s, apply half(): 2.3s)." Anyone with the same issue?
I have the same problem, did you solve it?
I too have the same problem :(
@@dias8837 mine showed up before these lines for some reason.
You made my life much easier!!
Not working for 3D. Is there a certain setup for it?
At 3 minutes 33 seconds you say to go back to the terminal and type "cd stable-diffusion-webui/".
When I run the command I get this back:
"cd: no such file or directory: stable-diffusion-webui/". I've followed every step up to this point, so I'm not sure what I'm doing wrong. Could you please advise?
did you solve it?
super helpful, thanks for it :))
You solved my problem
Thank youuu 🎉
I can not get past the brew install.... section, keep getting an error message. I am on an M2 MacBook Air.
Is this still the most up to date version?
Thank you for making this tut 🙌
Received this error when entering the URL in the browser and adding a prompt, after completing everything:
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
Can someone advise?
Same I have the same problem :( Is yours an M1 Mac too?
Mine is an iMac
Thanks for the video. Another question: to install additional models, can I add them to the models folder while the Terminal and/or browser UI is still running, or should I quit out of it, add the models to the folder, and then restart?
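Dropping files in while it runs is generally fine; the UI has a refresh button next to the checkpoint dropdown that rescans the folder. A sketch, with a placeholder filename and the tutorial's default paths:

```shell
# Copy a downloaded checkpoint into the webui's model folder
# ("model.safetensors" is a placeholder; use your actual filename).
cp ~/Downloads/model.safetensors ~/stable-diffusion-webui/models/Stable-diffusion/
# Then click the refresh icon beside the checkpoint selector in the browser UI;
# no restart needed.
```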
Thanks a lot. But my M1 Pro 14" runs faster without the optimization part.
Thanks so much for this. I was able to follow all the way through. I've since downloaded 2 new models and dropped them into the Stable-diffusion folder. What are the steps, or is there a video, for adding new models?
Nevermind. I figured it out.
@@whiplashtv what did you do lol we're doing the same thing rn
@@bossmachine Dragged the file from the desktop into that folder; it took a while but it stuck.
When I tried to hit "generate" after installation, Python quits unexpectedly. Any way to solve this issue? I'm using macOS Ventura 13.2.1.
Following, for anyone that can resolve this. I am having the same issue: a runtime error, unfortunately, even though I have plenty of RAM.
Compared to PlaygroundAI, on the Mac MiniStudio Max the images take around 30 seconds to generate, which means it is just shy of 2 to 3 times slower than the web version. I don't think this is super slow.
Way faster now with the Mac optimised models
@@someghosts which models do you mean ?
Great Install tutorial
This help a lot, thank you very much~!
Thanks!
Warning: /opt/homebrew/bin is not in your PATH.
Instructions on how to configure your shell for Homebrew
can be found in the 'Next steps' section below.
hello, please help!
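The 'Next steps' text the warning refers to boils down to two commands; a sketch for the default zsh shell on Apple Silicon (the username in your own output will differ):

```shell
# Persist Homebrew's environment for future shells, then load it now and verify.
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
brew --version
```

If `brew --version` prints a version number afterwards, the PATH is set up correctly.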
Thank you so F**king much for this. You've saved me so much time and headaches.
When I try to download the Stable Diffusion 1.5 model from Hugging Face, the download speed does not get above 200 bits and gives an ETA of 6 hours; unfortunately it stops downloading after about 60 MB and says there was a timeout. This is crazy as I have super fast broadband. Looks like I won't be using this!! The Homebrew bit all went swimmingly.
Please help me
I have problems here, at this part at 3:42. For some reason I can't do anything; just % and nothing happens.
I also tried manually writing 1s
I have the same issue, let me know if you figure it out!
@@ThriveVX sorry bro, i still can't fix this problem(((
have the same issue
I figured it out! It's an L not a 1. Type ls, not 1s.
@@ThriveVX Type ls, not 1s. It's an L, not a 1.
Hello, can this work on an intel mac as well?
Did you discover if it works?
I tried on a 2018 Intel Mac Mini, i5, 32 GB. It worked, but too slow, at around 22 s/it. One option though is to install Windows with Boot Camp and use an eGPU.
Lora list problem / models list problem
Problem: the Lora model list is empty.
Solution: check the folder paths the program searches for files; by default it's username/stable-diffusion-webui-master. Even if you put all the files in your own folder and run the program from there, the program may still look for files in the root directory.
My story
I followed this tutorial through steps where I didn't fully understand what I was doing, and fell into the trap due to partial understanding.
At the step where you show how to launch the webui with the terminal, I understood the path can be different, so I used my own path, downloads/stable-diffusion-webui-master, and everything started perfectly. But there was a problem: I didn't see the Lora models that I had downloaded and added to the folder, and no instructions on the internet helped, until I found out through the browser's inspect option that the folder where the program searches for Lora files is different from the folder where I saved them. It turns out there was another stable-diffusion folder in the root directory, and the program didn't care where I launched it from; it looked for the files there.
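A quick way to catch the duplicate-folder trap described above is to list the candidate install locations and the default Lora folder; the paths here are guesses based on the story, so adjust them to yours:

```shell
# List both likely copies of the install, then the default Lora directory,
# to see which folder the program is actually reading from.
ls -d ~/stable-diffusion-webui ~/Downloads/stable-diffusion-webui-master 2>/dev/null
ls ~/stable-diffusion-webui/models/Lora 2>/dev/null
```

Whichever copy the webui was launched from, the Lora files need to be in that copy's models/Lora folder.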
I accidentally closed the WEB UI url, where can we get the url to run it again?
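A hedged note on the question above: you don't need to recover the old URL. Each time you launch the webui, the local URL is printed again in the terminal output (by default it's the same local address every run). A command fragment, assuming the default folder name from the guide:

```shell
# Relaunch from wherever you cloned it (adjust the path to your setup)
cd ~/stable-diffusion-webui
./webui.sh

# Watch the terminal output for a line like:
#   Running on local URL:  http://127.0.0.1:7860
# and open that address in your browser.
```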
4:45 I'm stuck at this stage. There's an error that goes:
"[notice] A new release of pip is available: 23.0.1 -> 23.1.2
[notice] To update, run: pip install --upgrade pip"
and then:
"raise RuntimeError(message)
RuntimeError: Couldn't clone Taming Transformers."
How do I fix this??
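A sketch of a commonly reported workaround (paths assume the default install folder): on first run the launcher clones its dependencies into a `repositories/` subfolder, and a flaky connection can make that clone fail. Cloning the repository manually and relaunching often gets past it:

```shell
# Adjust the path if you installed somewhere else
cd ~/stable-diffusion-webui
mkdir -p repositories && cd repositories

# Fetch the repository the launcher couldn't clone, then relaunch
git clone https://github.com/CompVis/taming-transformers.git taming-transformers
cd .. && ./webui.sh
```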
Me too, any ideas?
@@joel33famara61 I'm about to try this technique, hope @troublechute can help us.
When I type cd stable-diffusion-webui and press Enter, nothing happens... is this normal?
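Yes, that's normal: `cd` prints nothing when it succeeds; you would only see a message like "No such file or directory" if it failed. You can confirm it worked with `pwd`. A safe sketch using a throwaway folder under /tmp:

```shell
# Recreate the folder under /tmp so this is safe to try anywhere
mkdir -p /tmp/demo/stable-diffusion-webui
cd /tmp/demo

# 'cd' prints nothing when it succeeds -- silence means it worked
cd stable-diffusion-webui

# 'pwd' confirms where you ended up
pwd    # prints: /tmp/demo/stable-diffusion-webui
```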
Brilliant, thanks! Can Dreambooth be used with this as an extension, or in any other way on Apple Silicon?
Are you using TensorFlow, or do you have an Nvidia GPU?
great clear instructions.
I'm on an M1 Max 64GB and getting around 5.94it/s while rendering out a 1920x1080 frame with 2x upscaling. I used the two suggested optimization commands, which doubled the it/s.
Is there a command to allocate more RAM? During rendering, Activity Monitor shows around 93-94% GPU usage and around 13/64GB RAM usage; I'd like to know if there's a way to leverage a little more RAM for rendering.
The model is only around 7GB, so it will use 7GB of GPU memory; you'd need a bigger model if you want more RAM used.
ERROR: No matching distribution found for numpy==1.26.2 (from -r requirements_versions.txt (line 16))
WARNING: You are using pip version 19.2.3, however version 24.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
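A hedged guess at the error above: pip 19.2.3 is very old, which suggests the launcher picked up an outdated Python, and numpy 1.26 only ships builds for recent Python versions. The usual remedy is installing a current Python with Homebrew (as the video does) and relaunching. A command fragment:

```shell
# Check which Python the shell finds -- an old version is the usual culprit,
# since numpy 1.26 has no packages for older Pythons
python3 --version

# Install a current Python via Homebrew (as in the video) and relaunch
brew install python@3.10
./webui.sh
```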
A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
WHAT DOES THIS MEAN?? How do I do this
i had the same problem, cannot proceed
@@avocadopictures9706 Do you have a link you can share?
Settings > Stable Diffusion > Enable option "Upcast cross attention layer to float32".
This worked for me.
@@kyni87 Thank you!! life saver
@@kyni87 Where do I type that, and how do I get to Settings? I don't see it.
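If you can't find the Settings toggle, the error message itself suggests command-line flags as an alternative. A launch-command sketch, assuming the default install folder from the guide:

```shell
cd ~/stable-diffusion-webui

# --no-half runs the model at full precision instead of half (float16);
# --disable-nan-check skips the NaN check entirely, per the error message
./webui.sh --no-half --disable-nan-check
```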
I have trouble with the Deforum extension, especially with FFmpeg. Can you do a video about it? I think several users on Mac will have this issue.
Does it take a long time to install the whole thing? Mine was stuck at "Textual inversions loading".
Thanks for this. I followed the text doc, as the commands there worked for copy-paste. I only couldn't do the step "To update, run git pull in the ~/stable-diffusion-webui folder", but I still ended up with a URL and managed to generate an image. I guess that's fine, right?
I got an error that says "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'".
Me too
Did you find the fix for this?
@@zachorsinelli2142 No, I tried a lot of variations, but none of them worked.
I'm on macOS on an Intel Mac.
Did you find a fix? I'm having the same error.
Did anybody figure out this error: "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'"
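A hedged suggestion for the "LayerNormKernelImpl not implemented for 'Half'" threads above: this error means the backend (common on Intel Macs running on CPU) has no half-precision (float16) implementation of that operation, and the commonly reported fix is forcing full precision at launch. A command fragment, assuming the default install folder:

```shell
cd ~/stable-diffusion-webui

# Force full (float32) precision so no 'Half' (float16) kernels are needed
./webui.sh --precision full --no-half
```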
Thank you, thank you and thank you.
I have installed a depth map extension, but the tab is not showing up in the UI. Any idea why that is?
At 3:40, after typing "cd stable-diffusion-webui" you say to press Enter, but then two letters or something get typed and you don't mention it! I can't continue the install because of it. What is it I have to type?
ls, short for "list".
Thanks for the guide. I get the error "LayerNormKernelImpl" not implemented for 'Half'. How would I fix this?
Did you fix it? Help!
@@digitalpabs I still have this error.
Thanks brother, I did it.
Hi, I finished the whole tutorial, but when I copy the URL provided in the terminal, it won't open in the browser. Any suggestions?
Thank you for the video
Thank you very much, brother. For now everything works for me on my M1 Max. One thing though: do I need to do something specific when closing it, like the Ctrl+C you mentioned, or can I just close the terminal and browser?
Ctrl+C stops a process in the terminal. If you close the terminal it's the same result. You can just close the terminal; nothing will break.
He did Ctrl+C because he wanted to stop the process but not the terminal, so he could keep using it. But for you, at the end of your session just close the terminal; you don't need it anymore.
Thanks for this unique program! Now I can generate my anime characters!
Thank you, the video helped a lot.
Hi! Thanks for the video! But I have a problem generating images: "TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead." How do I fix it? Please help.
After downloading, I typed "Pool" and it shows "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'". What happened? QAQ
Great guide! Worked for me!
Can I ask what Mac you are using?
@@bhanuwongnachiangmai4412 M1 Max with 32GB
@@bhanuwongnachiangmai4412 And it's always the latest version of the software; I do a software update every time before I run it.
I have a lot of libraries to download. Can I set all of this up on my external hard drive?
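A hedged sketch for the question above: the install lives entirely inside its own folder, so moving that folder to an external drive and launching from there generally works (the volume name below is a hypothetical stand-in for your own drive):

```shell
# Hypothetical volume name -- adjust to your own external drive
mv ~/stable-diffusion-webui /Volumes/MyExternal/stable-diffusion-webui

# Launch from the new location
cd /Volumes/MyExternal/stable-diffusion-webui
./webui.sh
```

Note that an external drive may be slower than the internal SSD, which can affect model load times.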
You're a god. Cheers mate.
Thank you so much!
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. What can I do?
Thank you for the video.
But how can I install ControlNet?
How do I uninstall all of these files? I no longer want this and I'm not sure how to delete everything.
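A hedged sketch of a clean removal: the webui, its models, and its generated images all live inside the one folder you cloned, so deleting that folder removes the program itself. The Homebrew packages installed for it can be removed separately (the package list below assumes the set the guide installs; skip any your system still needs):

```shell
# Removes the program, downloaded models, and generated images in one go
# (assumes the default folder name from the guide)
rm -rf ~/stable-diffusion-webui

# Optionally remove the Homebrew packages installed for it --
# only if nothing else on your system depends on them
brew uninstall cmake protobuf rust python@3.10 git wget
```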