Kevin Hu
Joined 29. 09. 2013
Video
Flux Review & Tips
87 views · 1 month ago
Some small tests and insights on Flux models, presented here in an informal manner without strict rigor.
Mimic Substance Designer
82 views · 2 months ago
Mimicking Substance Designer in ComfyUI for the sake of fun and to get familiar with some interesting Custom Nodes, without any other purpose.
Brainstorm Everything V3
101 views · 2 months ago
Simplified the Image Batch workflow and removed the limit on the number of reference images. github.com/BoosterCore/ChaosFlow github.com/BoosterCore/ComfyUI-BC-Experimental
Integration of ComfyUI with Obsidian
136 views · 2 months ago
With the help of some experts, I finally connected ComfyUI and Obsidian through a single node. It feels great to use!
Story From ComfyUI & Ollama
177 views · 2 months ago
Borrowing the idea from the open-source MoA project, multiple open-source models co-write a script together: an unconventional way to play with ComfyUI. The output is still rather naive, but the workflow is in place, waiting for open-source LLMs to iterate rapidly!
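The MoA (Mixture-of-Agents) pattern described above, where several open-source models each draft a script and one model aggregates them, can be sketched against Ollama's REST API. This is only an illustration: the model names, the prompt wording, and the helper names are my own assumptions, not the workflow's actual nodes, and the request payloads are built but not sent here.

```python
import json

# Hypothetical proposer models; any locally pulled Ollama models would do.
PROPOSERS = ["llama3", "qwen2", "mistral"]
AGGREGATOR = "qwen2"

def ollama_payload(model, prompt):
    """Build the JSON body for a POST to Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def aggregation_prompt(task, drafts):
    """Mixture-of-Agents aggregation step: show every proposer's draft as a
    reference and ask the aggregator model to synthesize a final script."""
    refs = "\n\n".join(f"Draft {i + 1}:\n{d}" for i, d in enumerate(drafts))
    return (
        f"Task: {task}\n\n"
        f"Below are drafts from several models:\n\n{refs}\n\n"
        "Synthesize the best elements of these drafts into one final script."
    )

# Example: in a real run, each draft would come from one POST per proposer.
drafts = ["A robot wakes up...", "Two moons rise...", "The last library..."]
prompt = aggregation_prompt("Write a short sci-fi screenplay scene.", drafts)
body = ollama_payload(AGGREGATOR, prompt)
```

Abstracting the backend into a plain payload builder keeps the aggregation logic testable without a running Ollama server.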
Mad Dream of AI
82 views · 3 months ago
As open-source large language models continue to evolve, I attempted to utilize the Ollama platform by invoking the LLaVA model to interpret content across multiple images. Following this, I leveraged the Qwen2 model to structure these insights into narratives and prompts, ultimately feeding them into SD for creation of random outputs. This endeavor was solely driven by the joy of exploration a...
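The first stage of the pipeline above, asking LLaVA via Ollama to interpret an image, can be sketched as a request builder: Ollama's /api/generate endpoint accepts base64-encoded images in an "images" field. The function names and prompt text are illustrative assumptions, and no request is actually sent in this sketch.

```python
import base64
import json

def llava_payload(image_bytes, question="Describe this image."):
    """Build the request body for Ollama's /api/generate endpoint;
    LLaVA takes base64-encoded images in the "images" field."""
    return json.dumps({
        "model": "llava",
        "prompt": question,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    })

def story_prompt(descriptions):
    """Second stage of the described pipeline: hand the per-image
    descriptions to a text model (Qwen2 in the video) to weave a story."""
    joined = "\n".join(f"- {d}" for d in descriptions)
    return f"Weave these image descriptions into one short story:\n{joined}"
```

In a full run, the story text would then be condensed into an SD prompt and fed to the sampler.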
Brainstorm Everything V2
93 views · 4 months ago
After the IPAdapter update, this workflow was delayed for a long time and only updated today. IPAdapter is already quite powerful, so this time the Revision module was removed and the whole workflow was streamlined a bit.
Play With Gemini & IPAdapter
74 views · 5 months ago
Added one more Gemini API call to the previous workflow, used to simplify the prompt, apply a stylized template to the simplified version, and average its weight with the previous prompt. Also added the latest IPAdapter module to control style and composition separately; the IPA style can blend with the style of the earlier prompt template.
Play With Gemini
48 views · 5 months ago
The workflow calls the Google Gemini API twice. The first call translates the user's input, in whatever language, into English and polishes the wording into more natural English text. The second call asks Gemini, based on the translated English text, to use its imagination and write an SD prompt for the user.
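A minimal sketch of this two-call flow, with the LLM backend abstracted into a `call` function so any API (Gemini included) can be plugged in. The prompt wording and function names are my own illustration, not the workflow's actual Gemini node.

```python
def translate_prompt(user_text):
    """First call: translate arbitrary-language input into polished,
    natural English."""
    return (
        "Translate the following text into natural, polished English. "
        "Return only the translation:\n\n" + user_text
    )

def sd_prompt_request(english_text):
    """Second call: expand the English text into an imaginative Stable
    Diffusion prompt written as comma-separated descriptive tags."""
    return (
        "Based on the following description, write an imaginative Stable "
        "Diffusion prompt as comma-separated English tags:\n\n" + english_text
    )

def two_stage(user_text, call):
    """Run the two-call flow; `call` maps a prompt string to a response
    string (e.g. a thin wrapper around the Gemini API)."""
    english = call(translate_prompt(user_text))
    return call(sd_prompt_request(english))
```

Keeping the backend behind `call` lets the flow be tested with a stub before wiring in a real API key.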
Happy Lottery With Gemini
78 views · 5 months ago
Mainly used ZHO ZHO ZHO's Gemini API node to help write prompts, then combined it with the fast generation of the LightingXL model and the ease of use of the Preview Chooser node, achieving quick preview and selection.
Seamless Texture Generator V3
134 views · 5 months ago
Because IPAdapter Plus received a major V2 update, the old workflow broke and had to be updated. This version tries a more streamlined approach and reorganizes the user-interface section, hopefully for a somewhat better experience.
University of Canterbury Mt John Observatory
126 views · 6 months ago
Why does KSampler (Efficient) keep throwing: Error occurred when executing KSampler (Efficient): 'NoneType' object has no attribute 'shape'?
Not sure; try updating ComfyUI and the plugins?
Respect! Could I join your Discord server to learn? The invite code from an earlier comment has expired.
discord.com/invite/sBHP57BP It's pretty quiet though, not much discussion. Suit yourself! 😊
@@kevinhu1136 Thanks, I've joined ☺
I don't quite get it, but it seems impressive.
0:07 Bruh, mid-transformation he just turns back.
It looks excellent! I haven't tried it yet, I'll tell you once I do. I really liked the image at 0:05 (it's like a factory) do you remember the prompt? Thank you very much, I am subscribing right now.
Thanks, the prompt is "futuristic heavy industrial plant interior, blade runner style"
@@kevinhu1136, Thank you very much for your answer! I don't have much experience with this. My tests in V2 fail due to a missing node, even after updating with the Manager. The message when loading the flow is: "When loading the graph, the following node types were not found: ClipTextEncodeBC. Nodes that have failed to load will show as red on the graph." Two nodes appear in red, the ones your tutorial calls CLIP Text Encode (BC.... I'll understand if you don't have the opportunity to answer, given how limited my knowledge is. In any case, congratulations on your work.
Hello again, I managed to get it working by replacing the nodes with ones I downloaded from chaosaiart-nodes. It produces magnificent images for me. Now I have two problems: 1- the four preview images are very blurred. 2- It doesn't produce the high-resolution image, only the low one. As before, thank you very much even if you can't answer.
AI is so weird-
This idea is interesting; could you share the workflow diagram?
This workflow is still rough and I haven't polished it myself yet; I'll work on it when I have time.
…why?
Ew
Thanks a lot for your videos on InstaMAT, they're an invaluable source of knowledge for this tool. I wonder, though, whether Nvidia RTX cards are supported in InstaMAT; my own 3080 is flagged as "uncertified" for raytracing. Thanks again!
I'm not very clear on this point either, but logically speaking, Nvidia's graphics cards should not encounter such issues. Moreover, the 3080 model isn't considered outdated by any means.
The Discord channel link has expired.
discord.gg/UjfbxKzZ
Text Box is missing; where can I find it?
The Derfuu nodes have been updated; you can replace it with any other text-input node.
Hi. Could this be used to generate tiling textures ? Thanks
I think if you change the KSampler and the VAE, it will.
@@kevinhu1136 Thank you. Do you think you can add a sketch as a third input in this setup with Controlnet? So you would go from some image references, some descriptive words and a sketch to a fully generated image.
The results of this workflow are quite interesting; could you share it for reference? Thanks!
Oh, this workflow was a spur-of-the-moment thing made for fun, so it hasn't been cleaned up yet.
Boss! Why is there no box for writing the prompt in String Function? 😇
The Derfuu plugin has been upgraded; you can replace it with any other text-input node.
Please never let bro cook again
That is bumblebee from temu
Bro bumblebee but if it were made by artificial intelligence cool!
Awesome!
someone must have put cocaine in my tea
Never watching transformers on acid again
good
I made something simple from the older one that worked for us, but I am definitely going to check this new one too. Thank you for sharing. :)
How to make an image seamless? Now I can see the difference between the far right and the far left of the picture, and in 360 view the seam is visible. I use the sd_xl_base model, I also deleted one of the samplers because it didn’t work for me.
You can refer to this workflow: czcams.com/video/ubbyk8Ys8DY/video.html&lc=UgxjJRbPAhcW5xVVCpt4AaABAg
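A common trick for inspecting and fixing tiling seams, independent of the linked workflow and offered only as a sketch, is to roll the texture by half its size so the seams land in the center of the image (where they can be inpainted), and to measure the mismatch between opposite edges directly:

```python
import numpy as np

def roll_to_center(img):
    """Shift an (H, W, C) texture by half in both axes so the tile seams
    land in the middle of the image, where they can be inspected or
    inpainted before rolling back."""
    h, w = img.shape[:2]
    return np.roll(img, (h // 2, w // 2), axis=(0, 1))

def seam_error(img):
    """Mean absolute difference between opposite edges; 0 for a texture
    that tiles perfectly."""
    lr = np.abs(img[:, 0].astype(float) - img[:, -1].astype(float)).mean()
    tb = np.abs(img[0].astype(float) - img[-1].astype(float)).mean()
    return (lr + tb) / 2
```

If `seam_error` is noticeably above zero, the texture will show a visible line in a 360 view, exactly as described in the question.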
Honestly the transformation sequences aren’t bad… For ai
New Camaro concepts
Where’s the Beetle?
Could you share this workflow with me?
Search for ChaosFlow on GitHub.
Awesome!
Great work as always :)
Thanks 😊
Is there a workflow you can share?
Sorry, this workflow is too old and was never properly organized. On top of that, the related technology iterates very fast, so I haven't shared this one. The workflows for this channel's more recent videos are all shared at github.com/BoosterCore/ChaosFlow
@@kevinhu1136 Thank you very much~
Bless you, kind soul.
Thanks for sharing the workflow
You are most welcome
workflow plz
github.com/BoosterCore/ChaosFlow
hi! so wait the main idea is that gemini here helps you with prompting right? As it enhances your prompt?
Yes, that's absolutely right. English can indeed be a significant barrier for non-native speakers when it comes to writing SD prompts. This approach might help alleviate some of those challenges. I was also considering the possibility of using Gemini instead of Google Translate in the first step, given that Gemini's translation capabilities appear to be superior to Google Translate's.
@@kevinhu1136 wonderful work. As an italian, i would really love to try this out at some point
Incredible
Thank you for your amazing network! It's been very helpful. I tested both of your models, and I found that the original version (before the remaster) gave me more creative and interesting results for the 'image to image' function. I tried it by painting four blue squares in Photoshop and asking it to generate an ice tile, and it worked perfectly. The new version, however, is very faithful to the reference image, even when I adjust the IP adapter weight and add more noise. 🤔
Thank you for your test. IPAdapter Plus has recently undergone a major update. Many previous workflows are no longer usable. I will try to update and improve it in the future.
@@kevinhu1136 thank you for all of your contributions 🙏
Damn bumblebee found Megatrons dark energon stash💀
Amazing work! Would you like to share this workflow?
Already shared on GitHub; you can search for "ChaosFlow"
Awesome work, boss!
It's nothing, really. 😊
A bunch of nodes have disappeared QQ
Check whether the plugin nodes were installed correctly.
How do you keep the Google Translate Text Node working? Do you need a custom API Key? Thanks for the advice.
No need; as long as it isn't blocked by the firewall, it just works.
Thanks for the reply @@kevinhu1136
Very cool, workflow for it please?
search ChaosFlow on GitHub
Why are my results so poor?
Now, let's take a moment to discuss your poor performance, shall we? It appears that your efforts haven't quite met the mark. You see, achieving excellence requires a combination of diligence and attention to detail, qualities I'm sure you possess. However, in this instance, it seems that your execution fell short of expectations. But fear not, dear student, for every setback is an opportunity for growth. With a bit of extra effort and focus, I have no doubt that you'll soon excel in your endeavors.
It seems that TripoSR can be used to help create three-view drawings more easily and precisely.
Huh?
😍
Some of these modules and their nodes have changed, and now the workflow does not work. What a pity that workflows aren't pinned to specific versions of the custom modules and of ComfyUI itself...
Thanks for the heads-up. This workflow has been upgraded today and now matches the latest version of ComfyUI and the WAS Node Suite.
@@kevinhu1136 Thank you! Now it works well! Question: You use SD1.5/pytorch_model.bin in Clip Vision in your workflow - but what is this file (what is clip vision)? Because the pytorch_model.bin that I found is incompatible with SDXL models (an error about the resolution of the generated images not matching). Anyway, using other Clip Vision works great. Thanks for your research and work! =)
How do you install epiCRealism into ComfyUI?
Just put it in the models directory.
Hi Kevin, can you post a link to your Discord please? Great workflow :)
Thanks, discord.gg/rdFKeknq
Bloody psychedelic Animation art