Different types of people on different platforms. I for certain hate shorts and clips and am probably not alone. I want the full phat codesplaining. :)
Having made a few AI-generated images myself (my profile pic is AI-generated, except that I turned the face into darkness, because the AI didn't want to cooperate for that part): you have to be specific. Like what type of TV you want, what you want to have on the TV, clothes, colors of the clothes, what kind of scenery, and optionally maybe some effects, like darkness, mist, lights etc., maybe a perspective you want: front view, back view, wide angle etc. Then generate images multiple times until it makes something you like. Sometimes the AI just doesn't know the meanings; like, it doesn't know what Hogwarts is. Or sometimes it gets confused and completely ignores a prompt, like short hair and hair slicked back. The AI art website I use just refuses to make slicked-back hair if there's short hair in the prompt; it just gives short hair. And even then, it messes things up, like, as you saw: instead of a TV AS a head, it put a head IN the TV. So just generate multiple images. After a few images, you can give more prompts, more specific and more defined descriptions.
3:10 Image to image: either using your avatar, or pasting a TV on top of some other model and then letting the AI clean it up. That would be my guess, at least.
The key to good AI art is to always specify that you want tits in the prompt since that will give it much more reference materials to work with.
LMAOOOOOOOOOOO
_porn! porn everywhere!_
Specify you want it to be 34 years old. Look up rule 34 for details.
Name checks out…
most out of pocket comment here lol
Not sure if anyone has noticed, but in his open tabs is an Etsy listing for a "custom monitor head", so I think the man himself is gonna show up lore-accurate.
@@zorkman777 😉
Noticed that too. Honestly, if he's trying to make a cosplay head, I'd suggest minibitt's video to him.
1:01
Man really said "no horny" to himself, on his female version fanart.
Straight up refusing the whole genre of selfcest.
I think that was directed to the fans
If you have a kid can you please name him Code Pellet? Cheers
Code BB
Kid Bullet
CP
@@XplosivDS this is funny to me as a Swede
At first, it would need to be named Bone Bullet
Show up to a panel
All the content creators sitting at a table
Everyone goes down the line
Says a recognizable line/intro/whatever
Gets to one guy with a notable lack of TV on his head
Dead silence
He backs away from the mic
Screams F*CK
Crowd bursts into applause
Someone throws baby onstage
Elected president on the spot
Immediately assassinated
Survives
Hacksmith builds IRL TV helmet for him
Really underrated comment
I hope my recognizable line will be chair related.
Ngl. I like the way you think 😂😂😂
id like the head bit
Cool story bro
Evan: We all use Windows here, we're all friends...
Mac and Linux users:
Same, I was just wondering how he got that idea.
BSD users: am I a joke to you
i use arch btw
Linux users are not friends
Yeah, I'm on linux right now.
CB out here really trying to avoid the "Evan as a chick" stuff.
Come on, mate, you know you'd hit that, don't deny it.
"go fuck yourself"
"don't mind if i do"
Isn't that selfcest?
@@baimhakaniyeah
What about the 3d walking part 1 video?
Honestly same
8:46 how did he not notice the shirt has "CBT" written on it lmao
it's technically a correct acronym though
What is Minecraft?
@@NoGoatsNoGlory. it's a small indie game, you should try it, it's really fun (the graphics are pretty bad though)
Broke: cognitive behavioral therapy
Woke: cock and ball torture
I'm not the one who made the AI art, but I assume he made it with Stable Diffusion and added images of your character to the model so it knows Code Bullet. Then he just types in the rest of the prompt, like "Code Bullet in a room with old monitors, realistic". Corridor Crew has a cool video on something like this, where they turned themselves into an anime.
Not necessarily; some good models might have some fan-art of him directly in their training data.
As for the prompt, that's not really ideal prompt formatting. It's fine for DALL-E, but not really for Stable Diffusion. A Stable Diffusion prompt would be more like this:
(code_bullet:1.3), (old_monitors:1.2), realistic
It's not an easy subject, but at least you tried giving him an answer.
@@serzaknightcore5208 The very limited amount of fan-art will probably not do much; it's very likely the person created a LoRA (low-rank adaptation), as OP described. It takes only 10-20 images of the character you want to recreate and produces fairly good stuff. Plus it can still be combined with styles afterwards. Also, the fact that it clearly likes black with green characters on it tells us that it learned those are heavily related to CB, and its abundant usage of them really hints at a LoRA. It has also learned that CB wears hoodies, but just prompting base Stable Diffusion would not give nearly the same hoodies as depicted here.
@@tristansteens Yeah, I just saw that when I tried to generate it
Yeah, he trained a LoRA; it's also written in a post: "I made a Code bullet lora"
@@gaggix7095 Yup, I just saw; I hadn't gotten to that point before I wrote my comment ;)
Could you imagine the chaotic energies of Code Bullet and I Did A Thing teaming up to make some sort of abomination?! Don't know how that'd work, but I'd be scared.
NO
YES
ai-controlled beyblade
AI art is more complicated to set up than to use:
- First, the model. You would usually start with Stable Diffusion 1.4 or 1.5, but depending on what you want, you might need specific ones; for realistic images I like CyberRealistic.
- Then you need a nice interface; working in a shell is just torture. I use EasyDiffusion, but there are a dozen of them, so take the one you prefer.
- If you see a pattern you don't like in the generated images, just use the negative prompt to remove those features.
- If you want even more specific features, you might want to train your own LoRA; that you'll have to research yourself.
- Start with a general idea, then modify the parts you're not happy with using inpainting. This way you don't need super-specific prompts, or the stars to align in a once-in-a-million-years pattern, for a good final result.
- Don't be scared of editing images manually. You can use MS Paint to modify a result you're half happy with, then use the edited image as the input for the next step. That's how I usually do it: quick and dirty, but functionally the same final result.
- Don't get frustrated, just get more RAM. Yeah, this tech uses infinite mountains of RAM; if you want good, detailed, high-quality images, you need to get some. I was almost forced to go from 16 to 32 GB, and that's for 1024x1024 images.
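The workflow in the list above can be sketched in a few lines with the Hugging Face diffusers library. This is a minimal sketch, not what any commenter actually ran; the model name and the default settings are illustrative assumptions.

```python
# Minimal txt2img sketch with Hugging Face diffusers.
# The model name and the defaults below are illustrative assumptions.

DEFAULTS = {
    "num_inference_steps": 30,  # sampling steps
    "guidance_scale": 7.5,      # how strongly the prompt is followed
    "width": 512,
    "height": 512,
}

def merged_settings(**overrides):
    """Combine the defaults above with per-call overrides."""
    settings = dict(DEFAULTS)
    settings.update(overrides)
    return settings

def generate(prompt, negative_prompt="blurry, deformed, extra limbs", **overrides):
    """Generate one image with Stable Diffusion 1.5 (needs a GPU to run)."""
    import torch  # heavy imports kept local so the helpers above stay light
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    result = pipe(prompt, negative_prompt=negative_prompt,
                  **merged_settings(**overrides))
    return result.images[0]
```

The negative prompt is where the "remove features you don't like" step from the list goes.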
I've run out of VRAM before I ran out of RAM, personally. Though I do only have 6 GB…
people: **Tired of Waiting**
Code Bullet: **Doesn't give a...**
One way to get the AI to generate the image you want is to start out with an image similar enough to what you want: for example, by drawing the exact image you want, then telling the AI to use that as a start image with a high weight, and making the prompt essentially just a description of the start image.
If you want quick, good results with AI art, use image-to-image instead of just a text prompt. You can quickly and crudely edit your character into the right pose, add some basic doodles in Paint for additional stuff you want in the image, and then just describe to the AI what you were trying to draw. Mess around with some sliders for better results, I guess.
This should work with free online Stable Diffusion tools too, as long as they have img2img support.
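That sketch-then-describe workflow maps onto diffusers' img2img pipeline roughly like this. A hedged sketch: the model name, the file name, and the default strength are assumptions.

```python
def refine_sketch(sketch_path, prompt, strength=0.6):
    """Turn a crude Paint doodle into a finished image with img2img.

    A low strength (~0.3) keeps the doodle's layout; a high strength
    (~0.8) lets the model repaint almost everything. Needs a GPU.
    """
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    init = Image.open(sketch_path).convert("RGB").resize((512, 512))
    return pipe(prompt=prompt, image=init, strength=strength).images[0]
```

The `strength` slider here is the same "how much it changes the image" control the online tools expose.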
0:39 I heard "it cuts kids in half" and for a moment I was like wtf before I realized
For CB and the AI-literate: Raev apparently posted their Lora over on CivitAI. The screenshots they posted of the Lora in action do include the prompts used if you click on them. Love the description for the Lora too. XD
7:29 aka. “Code Bullet hasn't opted in to a privacy nightmare”. There's no shame in not upgrading, CB.
You gotta use textual embedding (or Hugging Face "diffusers") to get good AI art of your avatar. You've already got hundreds of source drawings, and really it should do well with only 5-10 of your model poses. It basically just teaches the AI what the avatar looks like. Look up Hugging Face diffusers, or Stable Diffusion embeddings; both do similar things, I believe.
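For the diffusers route, attaching an already-trained textual-inversion embedding looks roughly like this. The file name and the placeholder token are made-up examples, and training the embedding itself is a separate step.

```python
def load_avatar_embedding(pipe, embedding_path, token="<code-bullet>"):
    """Attach a trained textual-inversion embedding to a pipeline.

    Afterwards the placeholder token can be used inside prompts to
    mean whatever the embedding was trained on (here, the avatar).
    """
    pipe.load_textual_inversion(embedding_path, token=token)
    return token

# Usage (assumed file name):
#   token = load_avatar_embedding(pipe, "code_bullet_embedding.pt")
#   image = pipe(f"{token} standing in a server room").images[0]
```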
William Osman and CB should do a collab video of William making a TV with a hole in the bottom for CB to wear to Open Sauce, and post it as one of their weekly posts
How did it take YouTube this long to suggest your second channel? I love your content!
"this shits awesome it like cuts cans in half and stuff"
surely I'm not the only one who heard "kids" instead of "cans"
I love the tabs
"TV helmet - Etsy"
"Custom TV Head/Monitor..." (Also Etsy)
wait, really? you will be at Open Sauce?
he will, hoping he will have a CRT monitor mask
@@adelalatawi3363 yeah. I never want to see his face. It would ruin my life.
Wait hes gonna be in open sus???
Just in case you're still wondering how to generate the Code Bullet art: on Midjourney, you can upload the current Code Bullet image, ask in the prompt to use it as a reference and generate a new image, and also tell it in the prompt how you want the new image to look.
Another week, another CB day off. Love it.🔥
8:35 "as one trained in the force, you know true coincidences are rare" -Kreia
mhmmmm definitely a Coincidence
"This subreddit is you guys showing how much more talented you are"
*proceeds to show the subreddit being full of AI Art*
I agree, but this is a programmer's channel so what can you expect, they're definitely 100% on board with art theft lmao
@@freetousebyjtc I would agree but since I'm a programmer myself who also loves to do artsy shit and is 100% against AI being trained on stolen art, I won't.
2:48 I would assume he is using Stable Diffusion with AUTOMATIC1111. I'm not sure, but it sure looks like it. Even if he isn't, I'm sure that with the right model you definitely could do this with AUTOMATIC1111 :)
One of the AI image posts mentioned that they trained a "LoRA".
A LoRA is a sort of refinement model you add on top of an existing model. They usually fit in megabytes instead of the gigabytes for a full model. I'm guessing they trained the LoRA on either images from you, or fan art, or probably both.
CivitAI is the place to search for custom-trained models, LoRAs, or other techniques for refining/changing a model.
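With diffusers, dropping a downloaded LoRA file on top of a base model looks roughly like this. The file name and scale are made-up examples, not the actual LoRA from the post.

```python
def apply_lora(pipe, lora_path, scale=0.8):
    """Load a small LoRA file on top of a full base model.

    A LoRA only stores low-rank weight deltas, which is why the file
    is megabytes while the base checkpoint is gigabytes.
    """
    pipe.load_lora_weights(lora_path)
    # The scale controls how strongly the LoRA overrides the base
    # model at generation time (0 = ignore it, 1 = full effect).
    return {"cross_attention_kwargs": {"scale": scale}}

# Usage (assumed file name):
#   kwargs = apply_lora(pipe, "code_bullet_lora.safetensors")
#   image = pipe("code bullet, realistic", **kwargs).images[0]
```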
i had your video full screen and your time on the bottom scared the shit out of me bc i thought "friiick i am late!!"
For the AI art, it's probably made with Stable Diffusion.
For the model, I can't tell, so I will just name the ones I find best: Waifu Diffusion and Anything V5. The model heavily influences the quality. Anything V5 has much better quality, but tends not to follow your prompt. Waifu Diffusion is trained on Danbooru images, so there is almost everything.
As for the prompt, they may be good prompts for DALL-E, but not for Stable Diffusion. Here is how I would do it:
(code_bullet:1.4), (cathodic television:1.2), green characters, (black hoodie), ((1 man))
As for some explanation: the brackets are used to tell the AI how much it should prioritize something. You can put multiple brackets to incentivize it even more (that's what I did with "1 man"). You can also add ":(some number)" if you don't want to put a thousand brackets (note that 1 bracket has a value of 1.1, 2 brackets are 1.2, 3 are 1.3, etc.)
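That bracket arithmetic can be written out as a tiny helper. This follows the additive rule stated above; note that some UIs, like AUTOMATIC1111, actually compound multiplicatively, so ((x)) there is 1.1 ** 2 = 1.21.

```python
def bracket_weight(brackets: int) -> float:
    """Weight implied by n nested brackets, per the rule above:
    1 bracket -> 1.1, 2 -> 1.2, 3 -> 1.3, and so on."""
    return round(1.0 + 0.1 * brackets, 2)

def emphasize(term: str, weight: float) -> str:
    """Write a term with an explicit weight, e.g. (code_bullet:1.4)."""
    return f"({term}:{weight})"
```

For example, `emphasize("code_bullet", 1.4)` reproduces the first term of the example prompt.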
You can also add styles to the prompt, which are essentially some general prompts that upgrade the quality. You can merge multiple from the internet; that's what I'm doing. Generally it can't make the image worse, but it may make the AI follow your prompt less.
You can note that I didn't specify the bullet in the middle of the screen, because it's really, really hard to place it precisely without the AI freaking out. If the model has enough Code Bullet in its training data, it will know what to do. If it doesn't, your best bet would be to make a fast drawing and use img2img. (AIs like Stable Diffusion and DALL-E are just glorified denoisers. In txt2img, it generates random noise and tries to denoise it following your prompt. In img2img, it adds noise to your image and then denoises it, which gives a similar result depending on the denoising strength: 1 is completely random noise, 0 is no noise.)
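The denoising-strength idea in that aside can be illustrated with a toy blend. This is just the intuition; real samplers add noise over a whole schedule, not in one step.

```python
import random

def noise_for_img2img(pixels, strength, seed=0):
    """Blend pixel values toward random noise before denoising.

    strength=0 leaves the image untouched and strength=1 gives pure
    noise, mirroring the denoising-strength slider described above.
    """
    rng = random.Random(seed)
    return [(1 - strength) * p + strength * rng.random() for p in pixels]
```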
I have thought about making a video summarizing all the knowledge I've gained in a year. It's not hard to generate an image, but getting a good image is where it starts getting complicated. I can't summarize all my knowledge in a single YouTube comment, but I can tell you the most important things. If I wanted, I could talk about resolution, CFG, choosing non-mainstream AIs, model merging, seeds, and much more.
Holy shit, pal, this is possibly one of the most informative comments I have ever seen. I mean, legit, I think this single comment gave me more than three hours of watched videos on this topic.
@@VasiliyOgniov Yeah, the thing is most videos actually say more or less the same things, sometimes without really knowing why it matters. I've actually used it for a long time, always experimenting, trying to see what upgrades the quality, looking things up when I didn't know what they did. I'm at a point where setting up parameters only takes me seconds. If I want a portrait, I put height at 960, width at 540, steps at 80-100 (lower doesn't give good results, and higher the difference is barely noticeable), batch usually at 3 (it lets you choose which image is best while not overloading your brain with too much choice), and then I write prompts, in a way that for me is almost like a third language (yeah, I'm French, writing prompts in English, in a way that doesn't make sense in traditional English). Don't take my parameters as universal, though. Sometimes you may want other resolutions (I'm on a 1660 Ti, so I can't generate 1080p images), more or fewer steps depending on your time, or other models depending on what you need.
Also, if you don't generate at full resolution, use Real-ESRGAN. It's an AI that upscales images, and when the image is 2x smaller than what you want, it gives near-perfect results. I think it's integrated into Stable Diffusion, but maybe not with your model (I never used that integration; I always used the original). If you are using the original, you can find a .bat script online to automate the whole process and not go through cmd.
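The generate-small-then-upscale trick above amounts to a little calculation. The 2x factor matches the Real-ESRGAN advice; rounding down to multiples of 8 is a Stable Diffusion latent-size requirement.

```python
def render_resolution(target_w, target_h, factor=2):
    """Resolution to generate at before a `factor`x upscale.

    E.g. for a 1920x1080 target with a 2x upscaler, generate at
    roughly 960x540 (small enough for a modest GPU), then upscale.
    Dimensions are rounded down to multiples of 8, which Stable
    Diffusion's latent space requires.
    """
    w, h = target_w // factor, target_h // factor
    return (w - w % 8, h - h % 8)
```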
I made a Code Bullet LoRA trained on video stills and fan art, and on some of them I used ControlNet to fix the Code Bullet icon.
The one speed clip may do the same thing but fast,
but that’s no fun!
We’re here for the times where you remind us how much of a headache it is to get programs to do intermediate processes properly!
I got an idea for a video.
Make two AIs, one with one AI creation algorithm, and one with another, and pit them against each other in a chess battle.
You can upload pictures to some AI tools and they will use that as the base to make more images based on the prompt you enter. That is likely how they are making those.
HOW HAVE I ONLY NOW DISCOVERED YOUR SECOND CHANNEL
The sudden Code Bullet in my Backyard Scientist video blew my mind. Never expected it.
Try saying "TV for a head that has Matrix code on the screen with a black bullet in the center", as well as saying "faceless". The other option is using reference images to help guide the AI.
3:17 It looks like one minigame from the game "Watch Dogs", where people in suits with red ties and cameras for heads hunted you around town in the darkness. If they saw you for too long, their head would turn from green to yellow to red, then they'd run at you and kill you. It was horrifying.
Evan, you knew what you were doing with that thumbnail
Ha ha ha nice job confirming the bets with Will xD
"No-one likes the clips channel!! But Tiktok and Instagram are doing so much better!"
It's almost like different userbases gravitated towards different platforms because of different content format or something. :)
If I could set youtube to ignore shorts entirely and permanently, I would.
On Android, YouTube Vanced will let you hide them completely. For browsers, I use AdGuard, but most other ad blockers will probably have a rule available, too.
I'm an Apple user :(. I wish there was an option to turn off Shorts built into YouTube.
Video idea: Use AI to respond to hate comments for you. That way you get to laugh at that, instead of feeling down from reading the comment itself. :)
I absolutely knew that was a bullet on the monitor. I never thought it was a battery at all.
For AI generation: using Midjourney, you have the option to use reference images for the AI to generate something new off of.
His chrome tab reads "Tv Helmet - Etsy" lol
Stable Diffusion,
you can get it on your PC and give it an image as a prompt
Alright, so the reasons the clip channel isn't doing as well: 1) You have a separate channel for Shorts rather than using your other two higher-subscribed channels. Since the Instagram and TikTok channels are your only channels on those platforms, the algorithm preferences those. 2) This is more of a conspiracy thing, but I believe YouTube might use data from the other short-video platforms to push down those who cross-post, as opposed to those who create unique YouTube Shorts, to drive more people to YouTube for new content they can't get on TikTok etc.
Another thing you may want to do is more interaction on your short videos (liking, favoriting replies, and replying) and part 2/3 etc., as the algorithm likes to feed the second part of a Short to a person.
When i saw the thumbnail my brain did neuron activation
Midjourney has an add picture feature that allows you to redraw images. That is probably how he did it.
"Let's not be elitist about the version of Windows, we're all using Windows, we're all friends here..."
*runs uname -s*
Linux
Interesting...
Code Bullet, man.. STABLE DIFFUSION FOR AI ART GO BRRR
Can't wait to see a guy with a giant TV on his head at Open Sauce
7:42 Code Bullet: Thank you!!
10 seconds later: oh...
The AI I use for images is Dezgo, and it has an image-to-image mode where you can use an existing image you have and make edits if you can't get it to work well with just prompts. You can also adjust how much it changes the image.
Thumbnail photo: Mommy bullet
Me:Cool
7:42 "we're all using windows"
Me Linux user: ahuh
There are dozens of us
It would be cool if you had a machine learning program running live during the whole event
You know, Bullet, your Reddit wouldn't need to do the whole coding thing if you were doing it XD
>we're all using windows
*sweats in linux* Yes of course, friends, we're all friends
bruw
i didn't even know you have this channel
Does anyone know what video they're talking about at 1:16 ? I can't find anything searching "code bullet bookworm" or "code bullet spelling game"
You can often have a reference image, but i don’t know more haha
I didn't even know code bullet day off before this dinner
For AI imagery, he mentioned using ControlNet, which is a Stable Diffusion extension.
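For reference, a hedged sketch of how ControlNet is typically wired up with Hugging Face `diffusers` (this is a generic canny-edge example, not his actual setup; ControlNet conditions Stable Diffusion on a control image so the output keeps the pose and layout of the input):

```python
def controlnet_generate(edge_image_path: str, prompt: str):
    """Heavy: needs a GPU and model weights, so it only runs if called.
    `edge_image_path` is a hypothetical pre-computed edge map of the source."""
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    control = Image.open(edge_image_path)
    # The control image constrains composition; the prompt controls content.
    return pipe(prompt=prompt, image=control, num_inference_steps=30).images[0]
```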
Cheers to those still not on Windows 10!
Upgrading is nice and all, but it's always annoying learning a new system, and... 10 is apparently pretty much a scam.
Insert "is this a butterfly" meme... Is this the main channel?
I'm thinking the AI art was done with Stable Diffusion, and what probably happened is they made a photobashed image as a base and asked the model to reimagine it in a different style
You definitely knew what you were doing with that thumbnail
He's using Stable Diffusion and a custom-trained LoRA model to generate the Code Bullet images.
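A sketch of why a LoRA works from only a small set of training images, plus how one is typically loaded with Hugging Face `diffusers` (the LoRA file name here is hypothetical; the real one isn't public):

```python
def lora_params(d_out: int, d_in: int, rank: int) -> int:
    """A LoRA freezes the original d_out x d_in weight and learns only two
    low-rank factors: B (d_out x rank) and A (rank x d_in)."""
    return rank * (d_out + d_in)


def full_params(d_out: int, d_in: int) -> int:
    """Parameter count of fine-tuning the full weight matrix instead."""
    return d_out * d_in


def generate_with_lora(prompt: str):
    """Heavy part: needs a GPU and model weights, so it only runs if called."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("code_bullet_lora.safetensors")  # hypothetical file
    return pipe(prompt=prompt, num_inference_steps=30).images[0]
```

For a typical SD attention projection (768 x 768) at rank 8, the LoRA trains 12,288 parameters instead of 589,824, which is why 10-20 images can be enough and the resulting files stay small.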
You can change the Code Bullet mod for Opera GX a bit; I use it and disabled everything but the background
"We're just gonna scroll past that"
- the man who put 'that' in his thumbnail
"we're all using windows" nuh uh I use linuck
Prompt engineering and using reference images as part of the input are part of it.
AI is smart, but not magic, and sometimes you need to explain what you want in a way the AI can understand.
I saw the thumbnail and immediately wondered when he turned down that path
I think the secret is using the Midjourney AI, that stuff is just so good
At least now we know he lives in Oslo, Norway 6:11
At least now we know Norway is a capital of Australia
It's really weird how Opera GX Weather defaults to Oslo, it doesn't use your location
@@Alpha_0ne276 It defaults to Oslo because that's where Opera's headquarters are located.
9:15 code Bullet just being there is good enough for me
You can get your name on social media easily if you contact the platform itself and show them that the other account is pretending to be you; it will get banned and you can take the account name
You just need to do the VR Pac-Man thing at OPEN SAUCE
from code bullet to code bomb
The theme for Opera looks like it came straight from the early 2000s.
0:37 *laughing* "that shit's awesome, it, like, cuts kids in half and stuff"
Man, you should absolutely have a TV-shaped LED mask made for Open Sauce, that would be amazing.
of course you scroll past the clone that has jiggle melon
Wren's gonna be at Open Sauce ❤
Fun fact! At least one of the lasers backyard scientist uses comes from the company I work at!
Different types of people on different platforms. I for certain hate shorts and clips and am probably not alone. I want the full phat codesplaining. :)
2:36 Maybe with stable diffusion or midjourney?
I would go to your thing if it weren't so far away
Having made a few AI-generated images myself (my profile pic is AI generated, except that I turned the face into darkness because the AI didn't want to cooperate on that part), you have to be specific: what type of TV you want, what you want on the TV, clothes, colors of the clothes, what kind of scenery, and optionally some effects like darkness, mist, lights, etc., maybe a perspective you want (front view, back view, wide angle, etc.), and then generate images multiple times until it produces something you like. Sometimes the AI just doesn't know the meanings; for example, it doesn't know what Hogwarts is. Or sometimes it gets confused and completely ignores a prompt, like "short hair" with "hair slicked back": the AI art website I use just refuses to make slicked-back hair if there's short hair in the prompt, it only gives short hair. And even then it messes things up, like, as you saw: instead of a TV AS a head, it put a head IN the TV. So just generate multiple images, and after a few, give more prompts with more specific and more defined descriptions.
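That trial-and-error loop usually ends with a weighted prompt. Here's a small helper for the `(text:weight)` attention syntax used by AUTOMATIC1111-style Stable Diffusion front ends (weights above 1.0 emphasize a token, below 1.0 de-emphasize it); the tokens in the example are just illustrative:

```python
def build_prompt(parts) -> str:
    """parts: iterable of (text, weight) pairs; weight 1.0 emits bare text,
    anything else uses the (text:weight) emphasis syntax."""
    chunks = []
    for text, weight in parts:
        chunks.append(text if weight == 1.0 else f"({text}:{weight})")
    return ", ".join(chunks)
```

For example, `build_prompt([("tv_head", 1.3), ("green hoodie", 1.2), ("wide angle", 1.0)])` gives `"(tv_head:1.3), (green hoodie:1.2), wide angle"`.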
6:22
ambatukam
I like the Code Bullet Clips on YouTube... The DK barrel one makes me laugh every time. "Where is your god now"... Lol
How dare you congrats 👏🎉🎉🎉
3:10 Image to image, either using your avatar or pasting a TV on top of some other model and then letting the AI clean it up. That would be my guess, at least.
AUTOMATIC1111's UI is the best for AI art, Mr. Bullet
I’ll be at open sauce my guy
Joined the sub, followed on TikTok but I will sub to the clip channel when we get more than three main vids a year (averaged over four years).
"We're all using windows, is all good"
Me watching on my Fedora Linux: 👀
You know “projectile programming” does sound like an apt description of your style
This is the secret code bullet video