This tutorial really comes at the right time for beginners 👍👍
These are all great features, thanks for sharing. Ollama is so convenient, thank you!
So detailed, thanks!
Amazing!
You finally posted! I spent all of yesterday trying to get InstantID running and couldn't, especially the CUDA environment setup, which kept erroring out.....
Good luck!
My GPU is an AMD RX 580; does that mean I can't run this?
Nice! Remember to post a copy on Bilibili too.
Hi, the pip install insightface step failed for me in the virtual environment. What could be the reason?
Did you install Visual Studio?
@@Datou1977 Yes, it's already installed.
@@Datou1977 I had closed the earlier CMD window, so I ran "cd .venv cd Scripts activate.bat" to get back into the virtual environment, then ran pip install insightface again, but it reported insufficient permissions.
It's all sorted now, hehe. But Datou, I think my GPU might not be up to it. For the Ollama model, can that llava-7 one be replaced with something smaller?
@@user-sm3yd7zr1c You can give it a try; a smaller one will be a bit worse, but not by a huge margin.
Can it be installed on Colab? My Windows machine is a bit too old.
I'm not familiar with Colab, so I can't really say.
I met zho yesterday; he uses Colab and usually runs it from an iPad.
I followed the steps, but I'm running into these errors. How can I fix them?
Error occurred when executing OllamaVision:
must provide a model
File "J:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "J:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "J:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "J:\ComfyUI\custom_nodes\ComfyUi-Ollama-YN\CompfyuiOllama.py", line 81, in ollama_vision
response = client.generate(model=model, prompt=query, keep_alive=keep_alive, options=options, images=images_b64)
File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 124, in generate
raise RequestError('must provide a model')
"must provide a model": did you forget to select a vision model?
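For reference, this first traceback is raised entirely client-side: the ollama Python client refuses an empty model name before it ever contacts the server. A minimal mirror of that guard (a hypothetical helper written for illustration; the real check lives in ollama/_client.py):

```python
def check_vision_inputs(model: str, images_b64: list) -> None:
    """Illustrative stand-in for the client-side validation in ollama/_client.py.

    An empty model name fails before any request is sent, which is exactly
    the RequestError('must provide a model') seen in the traceback above.
    """
    if not model:
        raise ValueError("must provide a model")
    if not images_b64:
        raise ValueError("no images were passed to the vision node")


# Leaving the node's model dropdown unset sends an empty string downstream:
try:
    check_vision_inputs("", ["<base64 image>"])
except ValueError as e:
    print(e)  # must provide a model
```

So whenever this exact message appears, the fix is in the node's model dropdown, not in the server or the workflow wiring.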
@@Datou1977
I hadn't selected a model before. Now that it's selected, a new problem has appeared. I've watched the video many times but can't figure out where it's going wrong, so I'm bothering you again.
Error occurred when executing OllamaVision:
File "J:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "J:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "J:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "J:\ComfyUI\custom_nodes\ComfyUi-Ollama-YN\CompfyuiOllama.py", line 81, in ollama_vision
response = client.generate(model=model, prompt=query, keep_alive=keep_alive, options=options, images=images_b64)
File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 126, in generate
return self._request_stream(
File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 97, in _request_stream
return self._stream(*args, **kwargs) if stream else self._request(*args, **kwargs).json()
File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 73, in _request
raise ResponseError(e.response.text, e.response.status_code) from None
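Unlike the first error, this ResponseError is raised after the server replied: ollama/_client.py wraps the HTTP error body and status code from the Ollama server. In a setup like this, a common culprit is a model that was selected in the node but never downloaded with ollama pull. A rough diagnostic sketch (a hypothetical helper and message table, not an official error map):

```python
def diagnose_ollama_error(status_code: int, error_text: str) -> str:
    """Turn the status code wrapped by ollama.ResponseError into a hint.

    Heuristic only; check the Ollama server log for the authoritative cause.
    """
    if status_code == 404:
        # typical when the selected model was never pulled onto the server
        return "model not found: pull it first, e.g. `ollama pull <model>`"
    if status_code == 400:
        return f"bad request ({error_text}): check the prompt/image inputs"
    return f"server returned {status_code}: {error_text}"


print(diagnose_ollama_error(404, "model 'llava' not found"))
```

Running ollama list on the server to confirm the model name matches the node's dropdown exactly is usually the fastest first check.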
So AMD cards can only watch from the sidelines?
AMD cards can run it.
@@Datou1977 Last time with MJ you also said NVIDIA cards are best. Poor 5700 of mine. Thanks; I'll keep going with the install.
@@user-uc6yo4qk7k AMD can run it; I just don't have one, so I can't demonstrate.
Hey, what should I do when it says the following nodes are missing?
MuseVPredictor V1 (comfyui_musev_evolved)
MuseVImg2Vid V1 (comfyui_musev_evolved)
But I'm not even using these nodes 🤨