OpenAI's DALL-E 3 has made it easier to achieve character and art-style consistency. Here is how to make AI art with DALL-E 3 while keeping characters and the art style consistent.
Very helpful! I added this to my Custom Instructions: "When outputting DALL-E 3 images, always display the GEN_ID." So now it automatically displays the ID every time.
Thanks for sharing Eric, I'm actually going to do the same right now lol!!
Amazing insights!
Is there a direct link to DALL-E 3? I have ChatGPT with paid features, but it says DALL-E 3 doesn't exist and there is only DALL-E 2. Even the website refers me to ChatGPT, which only lets me work with DALL-E 2, not 3.
I'm messing with using it alongside seed numbers. Seed is how it is made. Gen ID is for the image itself. I think of it as epigenetic memory, and DNA.
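The distinction this comment draws can be sketched with a toy example (plain Python; the "generator" and IDs here are made up for illustration and are not OpenAI's actual internals): a seed determines *how* the output is produced, while a gen ID merely labels an output that already exists.

```python
import random
import uuid

def generate_image(prompt, seed):
    """Toy stand-in for an image generator: the seed fully determines
    the output, so the same (prompt, seed) pair always reproduces the
    same "image" (here, just a short list of pixel values)."""
    rng = random.Random(f"{prompt}:{seed}")
    return [rng.randint(0, 255) for _ in range(8)]

# Same seed -> identical output: this is what a seed buys you.
a = generate_image("a red fox", seed=42)
b = generate_image("a red fox", seed=42)
assert a == b

# A gen ID, by contrast, is a label minted *after* generation;
# it references an existing result rather than recreating it.
gen_id = uuid.uuid4().hex[:16]
catalog = {gen_id: a}
assert catalog[gen_id] == a
```

Under this analogy, losing seed access (as several comments below report) removes reproducibility, while the gen ID still lets you point back at an image you already made.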
I use this Gen_ID technique with GPT-V. I generate images, adjust them until I'm happy, and ask for the Gen_ID. Then I take the one I like and create a new chat. I input my image and ask it to 1) describe it in painful detail, 2) generate a concise prompt from this description, 3) then I add the Gen_ID + prompt + action to generate (like running). After that I just change the action, and it works slightly better than just the Gen_ID or just a prefixed prompt. There are still detail changes, but they are minimal.
Thanks K8, gave you a shoutout in my new video; used these tactics in a scenario.
@@JoseNajarroAI 🥰
Great stuff. One thing to consider is that I noticed differences if you use the prompt: blend, mix, combine, merge, meld, fuse, hybridize, synergize, incorporate, infuse. And the terms: style, aesthetics, element, design. Resulting in a prompt like “blend the aesthetics of rhinos with elements of eagles” which would be different from “Combine elements from rhinos and eagles”
Wow this is amazing!!!!!! I've always wanted this conversational style for edits and lo and behold! Just amazing
Glad you like it! Agree conversational style edits are my favorite
Didn't hear about it anywhere else and your video is so professional, subbed :)
Thank you!
Congratulations!! You over 1k now before the end of the year. Excellent video very helpful, thank you.
Yes! Thank you! Can't believe I got to 1k way faster than anticipated
Very helpful! Thank you for a great video and explanation/guidance!
Only 1000 subs? I hit the like, subscribe and comment button because you provided us with unique insights! Keep up the incredible work man 👏🏼
Welcome aboard! If you have any topic ideas feel free to lmk. and thank you for the support!!
Essential information Jose, thanks. Definitely worth a like and subscribe.
Thank you so much for this dude. It has helped my work flow immensely. 🙌🏻
I’m happy it helped!!
Nice vid and great info. Thanks
This is gold man. Great work! I had no idea.
Glad you liked it! Just dropped an updated version with more tips and tricks
Subscribed and liked you bro because of your teaching style. Very impressive, hope to see this content further as Dall-E grows
Thanks for the sub and for the kind words!!
This is what I needed and been looking for. In the animation industry this is known as staying on model.
Exploring the realms of storytelling and creative videos, and VideoGPT subtly stepped in, effortlessly refining my content with its professional touch.
The GEN_ID was a great tip - thanks!
Glad it was helpful Suzanne
demystified a lot of things, thank you. subscribed! 🤩
Glad it was helpful and thanks for the sub!
Hey man, I hope you keep making these videos, they're really helpful!
Thanks, will do! Any topics you want to see next?
4:30 I can't stop laughing, all the male characters in the right picture look like you! :D
Lol, I'm a little self-centered, I made the prompt to reflect me haha: "hispanic male, with glasses, black wavy curly hair, beard"
Thank you for this!
Thank you very much! Game changer for me!
Great video
Great tutorial Jose!!!!
Thank you Kevin!! Glad it was helpful
This is a great video with a lot of added value
Glad you think so!
Just casually dropping some awesome stuff
Glad you liked it!
Nice! Thanks for the video. Helped a lot.
No problem glad you enjoyed it
amazing info!! thanks so much!!!
Glad it was helpful!
Thank you. Liked and already subscribed! 😊
Thanks for the support!!
Fantastic tips bro! Now subscribed
Thanks for the sub!
Great video, thanks!
Glad you liked it!
Thank you! 😊
great stuff! i’m using your tips to generate some video game level designs!
Glad you like them!
Very cool! Thank you very much.
Glad you liked it!
Great help thanks
Wow great video thanks I didn't know that
Wow I'm so glad I watched this video, new sub and I slapped that like button! Great info thanks so much for sharing
Glad it was helpful!
That's pretty cool thanks for the video !
Glad you liked it!
Useful stuff. Thanks!
Glad it was helpful!
superb
😙👌🏻 amazing
Thank You!!
Thanks A lot!!
Great video. Thank you
Thanks for the kind words
Subscribed! Hello from Texas = )
Appreciate it Matt! Jersey man here lol
exactly what I needed
Glad it was helpful
very useful content in this video. Keep producing videos in the same quality level
I will try my best!
Excellent, thanks
Glad you liked it!
Nice thanks for this.
Thank you for the support
Your video was great! Stylar AI, a web-based tool for logo, 3D, and interior design, supports image uploading and provides auto prompts. Though still developing, it's a promising tool for enhancing your projects.
Hi, is this changing? I asked ChatGPT a similar question and it answered: "No, the Generation ID of an image created by DALL-E cannot be used to make small changes or edits to that specific image. The Generation ID is a unique identifier for the image generation session and serves primarily for reference and record-keeping purposes. It doesn't enable revisiting or modifying the generated image.
If you want to make changes to an image, you would need to describe the desired changes in a new image generation request. It's important to provide a detailed description of what you're looking to adjust or add to the new image based on the original. However, keep in mind that due to the nature of AI image generation, the new image might not be an exact edit of the original but rather a new creation based on your modified instructions.”
amazing thx!
Glad you like it!
Thank you for the tips. I will try them out. But I am wondering why there is no master listing put out by OpenAI of these commands, instead of forcing us all to watch videos to discover this information? Or maybe there is a master list. Where can we find the latest commands? OpenAI must know by now that everyone is trying their best to manipulate images in various ways. Why are we having to spend hours on trial and error?
Wow great video thanks
Glad you enjoyed it Alex!!
Just an idea, can you ask it to morph one image to another and give several steps of morphing, or even several frames so that it is possible to merge images into video? I tried to generate a matrix of frames of animation in a single image in Bing and it sometimes worked. It also may generate stereo pairs, which means it stores inside the whole 3d scene and it may be retrieved somehow.
i need this and will try in 30 minutes. i maxed out! lol
lol maxxing out always sucks
I read that GEN_ID is sessional, so make sure you get all of the images you need during that run.
Hey Jim, where did you read this?
@@JoseNajarroAI Hi Jose, search for "Important : generating by gen_id only works in the same session, since this super-global is not stored inbetween sessions"
I'd also like to know if this is true, I asked ChatGPT about it in a session I had directly after asking for a GenID and it said the following :
GenIDs, like [redacted], are unique identifiers assigned to each image created by DALL·E. They aren't limited to each session; they're permanent identifiers for the images. You can use them to refer back to a specific image generation at any time, even in future conversations or sessions. This helps to retrieve or reference the generated image without having to regenerate or describe it again.
1. Is there a way to get the gen_id of older prompt outputs?
2. Can I share a gen_id with others? If we have a perfect image, can two people both reference that one ID for newer images?
You just don't know how badly I needed to see this tutorial! Thank you.
Good video bud but I have since learned that using "Seed" instead of "GEN_ID" is better to replicate the image. GPT says "In summary, the seed dictates how an image is generated, ensuring the ability to replicate the exact same image if desired. In contrast, the GEN_ID is a unique identifier for an image that has already been generated, used for referencing and organizing images."
Hey Marx, good suggestion, but I used to use seed and prompt consistency to keep the image the same; that was a very old move used in various AI tools. As I recorded this video, GPT had made an update where you could no longer keep the seed the same. Not sure if they updated it back.
I don't think seed works anymore, annoyingly.
I've seen the technique about asking for the seed, and I can confirm that when I tried it out a couple of days ago, ChatGPT did not provide it or seem capable of doing so.
Amazing tip! Can I change the image to high resolution for printing?
Hey Wana I think you have to use some form of photo upscaler to do that.
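Before reaching for an upscaler, it helps to know how much resolution print actually demands. A common rule of thumb is around 300 DPI; the quick check below (plain Python, editor's sketch, numbers purely illustrative) shows how far a 1024x1024 DALL-E 3 image falls short for a given print size:

```python
def required_pixels(print_inches, dpi=300):
    """Pixel dimensions needed to print at the given size and DPI."""
    w_in, h_in = print_inches
    return (round(w_in * dpi), round(h_in * dpi))

def upscale_factor(image_px, print_inches, dpi=300):
    """How many times larger the image must become for that print."""
    req_w, req_h = required_pixels(print_inches, dpi)
    w, h = image_px
    return max(req_w / w, req_h / h)

# A 1024x1024 DALL-E 3 image printed at 8x8 inches, 300 DPI,
# needs 2400x2400 px, i.e. roughly a 2.34x upscale (2400 / 1024).
factor = upscale_factor((1024, 1024), (8, 8))
```

An AI upscaler (or even a plain LANCZOS resize in an image editor, at some quality cost) can then bring the image up by that factor.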
Have you tried including an instruction like "always consider subsequent prompts to be in reference to the original gen_id"? I'm about to.
I was just wondering if it's possible to reference a previous image to generate more in the same style. Thank you!
Yes you can! Dropped a new video showing examples.
useful tips! thanks! I try to work on art like you but soon after I get "You've reached the current usage cap for GPT-4, please try again"
They keep changing the usage cap.
Do you think it would be possible to do this via the api or just inside the chat interface?
Thank you Jose. Do you know how I can make seamless patterns with DALL-E? Your video was very, very helpful.
Hey Gisela, at the moment not sure but when I find out I’ll make a quick short and show it here
@@JoseNajarroAI I wait for this. Thank you
Very interesting video.❤
I tried to get the gen_id with Bing Chat but it doesn't work. Is there any way to get consistent characters using Bing Image Creator or Bing AI chat?
Hmmm I’ll try to come up with a method for bing chat, thanks for the suggestion
I've talked to Bing Chat about its workings in a very limited way, and it doesn't even realize it's hooked up to DALL-E 3. I think, due to security and whatnot, trying to prevent misuse of and confusion with the model, it only knows it has a tool called img_gen (which it told me very hesitantly) and does not see any of the workings; it can't even retrieve prompts or see the output. If you look up the flowcharts of how LLMs engage with APIs and where the different entry points are, it kind of makes sense. TL;DR: Bing deliberately does not know about, or have higher-level access to, the OpenAI tools it is using.
This is cool, but I wish it had even more consistency, so that if you just change a shirt color or the composition, the face remains identical.
subbed
Thank you so much
Dude, you saved my fucking job. Thank you!!!!!!!!
Lol, enjoy. Now if only Dalle can stop reducing the amount of images
That's great! I tried it in Bing Chat and unfortunately it didn't work, even though it uses DALL-E 3 too.
That’s lame!! Thanks for letting me know
is this also possible in the free version of bing dalle image creator?
I'm not too sure about that
HELP! Could it be that this doesn't work anymore since the introduction of custom GPTs? I get this message when I try to adjust an image with its gen_id:
"I don't have the capability to modify or edit existing images, including removing elements from an image that was generated. Each image created by DALL-E is final and cannot be altered post-generation within this environment.
If you have specific adjustments or a different scene in mind, I can generate a new image based on your revised instructions. Please let me know how you would like to proceed!"
In the past few days, since OpenAI combined DALL-E and other functionality like image recognition into a single ChatGPT-4 chat window, are you still getting "DALL-E 3"? Since these functionalities were combined, I am no longer able to specifically choose DALL-E 3 from the top drop-down menu. It no longer says DALL-E "3", and it only produces one image at a time, so I'm unable to compare or see variations. I asked ChatGPT-4 whether it's using DALL-E "3" and it said it's now using OpenAI's own version of DALL-E (which at first glance seems much worse than when it was DALL-E "3"). Have you experienced this same thing in the past few days since OpenAI combined all the functions into a single chat window?
I don't have ChatGPT+; I just wonder, if I upload my own image and ask for adjustments, is it possible to use the image itself instead of an image description, which usually generates a different character? Will it give a GEN_ID for uploaded images?
Unfortunately it won’t be able to update your image
Gen ID does not work for me to keep an image consistent. It varies wildly and basically just randomly generates new pictures based on my primary prompts, sometimes adjusting them how I asked, but resulting in completely new pictures.
My biggest gripe is that after creating a nice image and wanting to work with it, maybe using a cartoon character for scenes, the character always kind of mutates every time.
Just dropped a new video covering these points.
Same here. The 'cross reference' prompt doesn't work either, it gives me a text comparison of the images until I ask it to generate a 'cross reference image of' then it produces images but they're not even close to the originals.
Is gen_id different from seed?
It worked pretty well, but unfortunately they have already modified it and now it doesn't work anymore. ☹ Now you get this message: "I'm unable to use the specific generation ID "XXMpHdfDDQtAt4zi" to create a new image. The generation process in DALL-E doesn't support referencing or modifying previous images based on their generation IDs. Each request for image creation is independent and generates a new, unique image."
Weird, Jimmy; I just redid it again and it worked fine. Even dropped an updated video with new tips.
Hey, is it possible to use this principle while using API? I would be grateful if you could help me :)
Not sure how it can be implemented in an API
great video man. somehow it's not working so great for me; it's doing WILD variations on the theme instead of just changing a simple shirt color.
Glad it helped; dropped a new video that hopefully explains everything in even more detail.
For me it doesn't allow modification of a certain gen_ID, this is the message that i get: The gen_id "JXC9mLEquCaTDMEO" corresponds to a specific image and cannot be used to modify or generate variations of that image, such as changing the color of an object within it.
Hmm interesting, I just tried it and it worked on my end.
Just out of curiosity, are you USA-based? Also, are you using the same chat where the original image was created? And are you using ChatGPT? (The only reason I ask is that numerous people are trying this in Bing.)
Are image seeds different than genID?
Yes, for some reason DALL-E no longer has the ability to hold the seed (it used to); now they just have an ID for the image so you can reference it.
@@JoseNajarroAI How long can you reference the gen ID?
My DALL-E is refusing to cross-reference the images... it says I should use image-editing software... :((((
🎯 Key Takeaways for quick navigation:
00:00 Introduction to achieving art style consistency and creating character variants in DALL-E 3.
00:13 Overview of generating multiple images with a single prompt.
00:28 Explanation of DALL-E's Generation ID, used for referencing and adjusting images.
00:54 Importance of sharing the Gen ID for each image to create variations and maintain consistency.
01:09 Capabilities of DALL-E with Gen ID: variations, theme iteration, adjustments, and art style consistency.
01:36 Demonstration of creating variations while maintaining art style.
02:04 Examples of art style consistency across different images.
02:33 Using DALL-E to iterate on themes and create related images.
03:00 Showcasing theme iteration with varied yet related images.
03:14 Adjusting images for close-ups and adding elements like fireflies.
03:44 Example of making small adjustments, like changing shirt color.
03:56 Effect of adjustments on future prompts and maintaining character consistency.
04:25 Cross-referencing images using Gen IDs for combined art styles.
04:52 Using Gen ID for evolving images, like showing different life stages.
05:20 Gen ID's role in ensuring art style and character consistency.
05:49 Detailed explanation of creating prompts for specific character traits.
06:14 Adjusting images using Gen ID for specific changes like shirt color.
06:41 Requesting full body shots and close-ups for character consistency.
07:07 Creating thematic art styles with Gen ID referencing.
07:34 Combining different Gen IDs for new, cross-referenced images.
08:01 Generating variations and adjustments based on user prompts.
08:27 Creating multiple images from a single prompt for efficiency.
09:07 Conclusion emphasizing the benefits of using DALL-E 3 for consistent art styles.
Made with HARPA AI
You know, I asked it for the GEN_ID and it told me it couldn't give it to me. Maybe this is because it's one of those customized GPTs, but I'm super bummed. It doesn't work in my custom GPT.
Sometimes it gives me the same error, but normally if I restart the chat I can get it to work.
So polymath Oracle, the first-ever AI animation, is incredibly consistent and was made without using this method
much props to them
My experience is the opposite. Sometimes I prompt D-3 for a photo of a family and multiple family members look the same, but with the kids having a smaller version of the father's head... sometimes with a 5 o'clock shadow :) I also have an example where a male head was placed on a female body... no offense to hairy kids, but I think D-3 was hallucinating :)
Great idea, just didn't work out very well for me
Mind sharing the prompts you used?
I can't get DALL-E to draw multiple images; it will just draw all 6 images in one drawing.
I've just tried the technique you suggest with the gen_id and the results are not as spectacular as in your video. As with many YouTube videos on the subject of ChatGPT, the results are not always as conclusive as one might hope.
Hey Philippe, hmm, are you being very descriptive with the prompt? I just tried it again to see if anything had changed, but everything is working as intended.
@@JoseNajarroAI doesn't work for me either.
@@JoseNajarroAI It does not work for me either. Maybe there are some adjustments we need to make before working with the gen_id?
@@nutime2018 No adjustments when I try it; I might follow up and do another video. What is your original prompt, if you don't mind sharing? I'll test it out on my end.
ChatGPT told me it can't get the ID and is making me use specific prompts.. wtf
Usually starting a new chat fixes it
Two words: ai pokemon
It seems that by telling DALL-E to create this, THEN that, THEN that, it creates a sequence.
is this still working ?!!!
I can't.. it seems that the Gen_ID doesn't work anymore.. What have your experiences been?
The easiest way to get consistent characters: Draw them yourself
Don't think that's the easiest; if it were, everyone would do it lol
All this is meaningless without the prompts. You gotta show us the prompts. How do we use this "ID"? Now I have to go ask DALL-E.
Lol I literally show the prompts
He did, in the second half of the video.
@@vainezaiven6677 My bad. I didn't make it that far.
Pretty useless. It does not generate consistency at all. It works the same as using a seed, and the same as just prompting a name for the avatar or character in the prompt. Same results, same quality. ChatGPT is just not ready for professional use; maybe in a year. I also ask for images in labeled stages. It still messes up overlaps and I end up throwing hands with my GPT.
Damn, sucks it didn't work
Thank you!!
No problem
Asking for the seed works for me, thanks for the comment @marxdrive