Read more in my latest blog post: www.builder.io/blog/train-ai
Hi Steve, I am building a project on digitizing catalogs for a hackathon. In your recent video you said you trained a model to detect images on webpages. If you could open source it, that would mean a lot, because we have very little time to complete the project.
We created a number of GPT-powered personal AI assistants for businesses in law, media, and other corporate sectors, where they can securely integrate internal corporate resources and data to retrieve useful information and cut costs by 70%.
@saadshajy3849 That's nice, could you explain the steps that were taken to train these personal models?
Hi Steve,
How can I reach you?
This channel just feels like it’s on another level than the rest dude, in every way. The way you turn intimidating concepts into very easy to follow videos is invaluable. Quickly becoming my absolute favorite dev channel.
Dude, you’re soothing to listen to and it’s nice to hear your chain of thought for approaching a problem. Subbed!
for real, the moment i open other videos and hear an indian accent its an instant nope for me
I was getting stuck training a chatbot agent using off-the-shelf LLMs. This video came out at the right time and saved me at least a few days of monkeying around. I really like the thought-process breakdown instead of just showing a wall of code like a typical tutorial channel. You got yourself a sub!
This is awesome dude! I watched your last video about developing proper AI apps. Your content on AI is top notch; you have opened my eyes to a whole new world of AI development. Much appreciated, keep up the good work! Thank you :)
Thank you for putting out these types of videos. Each video has covered an advanced topic in a simple manner and made me curious about building something similar!
This channel is combination of business, technology and sales training anyone needs to build a start up. Kudos to Steve.
I appreciate this a ton. Since seeing a nice, unique AI project someone built, I had only thought about stringing multiple AIs together to get what I need. Now I have some new insight on what I just might be able to do by myself, with my own skills.
Simple video, complex concept explained beautifully. Thanks Steve!
Man Steve, you are QUICKLY becoming my new favorite person to learn advanced coding techniques from. Please keep up the amazing work!
So quick and clean, thank you for taking the time to do this; I was trying to understand "Training", and now I know it is a button
First time seeing a video by you! Instant follow, really well broken down.
Would be happy to pay for a complete course on this explain each step in detail
I'm interested in the layout hierarchy approach too. What type of model did you use for the layout hierarchy? Do you give it an image and get it to return some HTML / JSON Structure? Or do you transform the existing Figma nodes into some text representation? How did you generate the training data for this step?
My initial thoughts would be to provide the model an image (the same screenshot used for the object identification step), and get it to return HTML. Then for the training data, when you take the screenshots of the webpage, you also save off the HTML and provide that as training data for the hierarchy model (maybe after some processing).
If you can share, I'd love to learn more!
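One way to produce the HTML side of those proposed training pairs is to strip each saved page down to a structural skeleton before using it as a target. A minimal sketch of that preprocessing step; the tag whitelist and output shape here are my own assumptions, not Builder's actual pipeline:

```python
from html.parser import HTMLParser

# Hypothetical whitelist: keep only tags that describe layout structure.
STRUCTURAL = {"div", "section", "header", "footer", "nav", "main",
              "h1", "h2", "p", "ul", "li", "a", "button"}

class Simplifier(HTMLParser):
    """Collapse a saved page's HTML down to an indented structural
    skeleton; a compact target a hierarchy model could learn to emit."""
    def __init__(self):
        super().__init__()
        self.lines, self.depth = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in STRUCTURAL:
            self.lines.append("  " * self.depth + f"<{tag}>")
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in STRUCTURAL:
            self.depth -= 1

def skeleton(html: str) -> str:
    parser = Simplifier()
    parser.feed(html)
    return "\n".join(parser.lines)
```

Pairing each screenshot with this kind of reduced skeleton (rather than the full raw HTML, with its attributes and text) keeps the target sequence short enough for a model to learn layout hierarchy from.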
Thank you for simplifying and proposing a best solution - great stuff! Subscribed 🤘🤘🤘
Exactly what I’d like to learn! Great content! Subbed
Amazing, I was looking for exactly this info; I was super overwhelmed.
Amazing video, can’t express enough the amount of value it has. Thank you🙇♂️😊
Very interesting! The area of opportunity within the AI sector now is niche AIs that do one specific task very well, as it is impossible to compete with general models from OpenAI, Anthropic, Google, and other large companies that have almost infinite money and resources. This means that the quality of the data is, as you say, essential and one of the best ways to build a moat against competitors. I will definitely keep in mind that I should just write normal code to solve as much as possible before creating my own model. Do you use any open-source models or have you made one from scratch? I've been fiddling with fine-tuning LLMs such as Llama 2 7B myself, and I am now considering finding a model I can train with my own data instead.
Everything you build is very inspirational, Steve.
Your tutorials are great. I also make trainings on a text analysis tool I created so I know when I see good stuff! Thank you! 💪🏼🙏🏼
Thank you for this Steve, you helped me rethink my initial solutiion to a problem I planned to use AI to solve.
Between turning to basic code and utilizing an LLM (a large "super" model) for generating customized code, I'd suggest storing both the input and output. Over time, you can train your own smaller LLM with fewer parameters, perhaps 7 or 13 billion, specifically tailored for the language/framework. By collecting user feedback through edits to the code and storing this feedback, you can achieve better results than relying on an expensive "super" model like GPT-4.
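The logging half of that suggestion is cheap to start doing today. A minimal sketch, assuming a JSONL file as the store and treating the user's edited version as the preferred training target (the field names are hypothetical):

```python
import json
import time
from pathlib import Path

def log_pair(prompt, completion, user_edit=None,
             path=Path("training_pairs.jsonl")):
    """Append one prompt/completion pair, plus any user correction,
    in a JSONL shape most fine-tuning pipelines can be adapted to read."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "completion": completion,
        # the user's edited version, when present, is the better target
        "preferred": user_edit if user_edit is not None else completion,
    }
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")
```

Once enough corrected pairs accumulate, the `preferred` column becomes exactly the kind of high-quality, task-specific data the video argues is the real moat.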
I see Steve post.... I click...
I was copying other projects built as OpenAI wrappers, and it didn't feel right to me, but I didn't know what to do instead, until I came across your channel. Now I have a whole new perspective on creating a SaaS AI product. Thanks Steve!
Great stuff Steve, thanks a lot!
- Explore pre-existing models for solutions at 0:53.
- Break down complex problems into smaller parts at 0:55.
- Test general models to understand their limitations at 1:19.
- Consider the costs and feasibility of training large models at 2:17.
- Solve as much of the problem as possible without AI at 3:13.
- Identify the right model type for your needs at 4:00.
- Generate high-quality example data for training at 4:44.
- Use tools like Google's Vertex AI for model training at 5:54.
- Deploy and test your model with real-world data at 6:46.
- Combine specialized AI models and plain code for best results at 8:35.
- Utilize LLMs for final customization steps when necessary at 8:47.
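Taken together, these steps describe a hybrid pipeline: plain code wherever possible, a specialized model where needed, and an LLM only for the final customization. A minimal sketch of that shape, where every function body is a hypothetical stub rather than the video's actual implementation:

```python
def detect_objects(screenshot):
    """Stub for the specialized detector: returns (label, normalized box) pairs."""
    return [("image", (0.1, 0.1, 0.4, 0.3))]

def build_layout(objects):
    """Plain code: a deterministic transformation that needs no AI at all."""
    return [{"kind": kind, "box": box} for kind, box in objects]

def customize_with_llm(layout, instructions):
    """Stub for the final LLM step; a real version would call a model API."""
    return {"layout": layout, "note": f"customized per: {instructions}"}

def pipeline(screenshot, instructions):
    # specialized model -> plain code -> LLM, in that order
    return customize_with_llm(build_layout(detect_objects(screenshot)), instructions)
```

The ordering is the point: the expensive, unpredictable component (the LLM) only sees a small, well-structured problem at the very end.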
I appreciate your content and your efforts
Love the bullet points and the breakdown of how each task was implemented. I would also add: to get things up and running quicker, see if there is an API for certain steps in your workflow, i.e. dynamic scraping or object detection. Eden AI is expensive and slow, but if you haven't nailed down the workflow, it can help you try tools and compare services. Great vid!
I'm so grateful for that type of content
Thanks a lot for posting this. Super interesting and valuable.
Sick dude, agree 100% thanks for the video!
I really liked this video, I’m commenting to boost algo and hopefully get more vids like this
Thanks, really cool video!! If you don't mind me asking, what software are you using for the blackboard presentation? Sleek!!
Super Helpful. Thank you very much
Thank you for sharing this valuable information with us.
Very cool! I was wondering, how were you guys able to make your layout hierarchy model, and how did the image detection model help with that? Did you just use the code of the test data and train a prebuilt model on it?
Amazing work thank you very much
Great! Thanks for the video. One question: how do you train a help desk with this?
My friend, this was a very clear and easy to comprehend video! I know nothing about coding and would like to learn more. Hypothetically, if I wanted to develop an AI app, what route would you lay out based on your experience?
Well done, thank you!
YT please recommend to me these kinds of videos.
We're building an AI solution at my company. And this is the same conclusion we arrived at!
Glad to see another company illustrating the same approach!
Thanks, that showed that training your own AI model is simpler than we might think. We at JetSoftPro, a software development service, also think that custom AI development can achieve faster, cheaper, and better results than using off-the-shelf large models like OpenAI's GPT-3 or GPT-4.
Thanks for sharing that! Actually, even more interesting is the other specialized model, the layout-generating one. Can you share something about it as well? 🙂
Thanks a lot for this great advice! Cool to see, that the good old "divide and conquer" still is true in the brave new world of AI 😊
Sure, here's a table summarizing the key differences between using off-the-shelf large language models (LLMs) and training your own specialized AI model:
| **Characteristic** | **Off-the-Shelf LLMs** | **Specialized AI Model** |
|---|---|---|
| **Cost** | Generally more expensive due to high training and usage costs | Can be more cost-effective, especially for specific use cases |
| **Speed** | Can be slower due to model complexity and inference time | Can be faster due to smaller size and focused training |
| **Accuracy & Predictability** | May be less accurate and predictable for specialized tasks | Can be more accurate and predictable for specific use cases |
| **Customization** | Limited customization options | Highly customizable to specific requirements |
| **Data Requirements** | Requires large amounts of data for training | May require less data, especially if using transfer learning |
| **Training Time** | Can take days or weeks | Can be faster, especially for smaller models |
| **Expertise Required** | Requires specialized expertise in AI and ML | Can be more accessible with basic development skills |
| **Use Cases** | Suitable for general-purpose tasks | Ideal for specific, well-defined problems |
In general, off-the-shelf LLMs are a good starting point for exploring AI solutions and for tasks that require general knowledge and understanding. However, if you have a specific problem that requires high accuracy, predictability, and customization, training your own specialized AI model may be a better option.
It's important to carefully consider the trade-offs and requirements of your project before deciding which approach to take.
world class. top tier video. spot on.
So if you had a couple of computers with, say, some 3080s sitting around, is it terribly difficult to use those to train on, even if it takes a few days or more? I'd be interested in playing around with my one GPU just to see what it can do, since it's just sitting around doing nothing. Is Python the only language that can be used to train models with data? As a Go developer, I am rather surprised a much faster runtime language like Go isn't used for something this CPU (and GPU) intensive. With that in mind, can you use CPU only if your GPU is low end, or does training require GPUs? Subbed btw, this video was very easy to follow, so I look forward to more.
I would say training will always be much faster with a GPU, as GPUs are made for parallel computing.
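To the questions above: mainstream frameworks like PyTorch and TensorFlow do run on CPU, just far more slowly, and Python is mostly a thin driver over C++/CUDA kernels underneath, which is why a faster language like Go wouldn't buy much here. Before kicking off a local run, a quick stdlib-only probe for a visible NVIDIA GPU might look like this (a best-effort check, not a substitute for the framework's own device detection):

```python
import shutil
import subprocess

def gpu_available() -> bool:
    """Best-effort check: is nvidia-smi on PATH and does it run cleanly?"""
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        subprocess.run(["nvidia-smi"], capture_output=True, check=True)
        return True
    except (OSError, subprocess.CalledProcessError):
        return False
```

If this returns False, training is still possible on CPU for small models; it just turns hours into days.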
very good video, learned a lot
Perfect explanation depth and usage guidance for noobs like me!
Not sure it's possible, because I don't know much about Figma, but thinking about Figma Mirror, with its ability to mirror the design in the browser, I'd imagine you could take the generated HTML via the inspection tools and feed that to the model instead, which would remove the image detection model training step. What you'd really end up with is an HTML-to-React-component translator. It's still along the same lines as your end goal, which was taking the initial Figma design and translating it into code. Anyway, just food for thought from one dev to another! Great vid!
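The HTML-to-React-component translation that comment imagines can be started with plain string handling before any model gets involved. A toy sketch that only renames a couple of attributes whose JSX spellings differ; this is a naive substring replace purely for illustration, and real JSX translation needs a proper parser:

```python
def html_to_component(name, html):
    """Wrap raw HTML in a React function component, renaming two
    well-known attributes whose JSX spellings differ. Naive on purpose:
    a substring replace like this would mangle attributes such as 'data-for='."""
    jsx = html.replace("class=", "className=").replace(" for=", " htmlFor=")
    return (
        f"export function {name}() {{\n"
        f"  return (\n"
        f"    {jsx}\n"
        f"  );\n"
        f"}}\n"
    )
```

The interesting (and hard) part a model would add is inferring component boundaries and props, which no string rewrite can do.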
Hi, Steve. This is really helpful. Once you use Vertex AI for your specialized model, is it still stored at Google, or are you able to download and run it locally? Are you constantly making API calls for this one use case, for example?
Did you ever find out? I can't get a straight answer either.
@@n111254789 I did not. I guess I'm going to have to build one and find out myself. Wanna partner up?
Tried Vertex AI a few times for different projects. The issue is that the endpoint always returns the same prediction irrespective of the input. This happened to me on 3 different projects over the course of the last 1.5 years, after which I entirely gave up on GCP.
In the broader context, investing in large language models (LLMs) for natural language understanding may be inefficient and resource-intensive if such capabilities are not a primary requirement. A conventional coding strategy that breaks workflows down into modular functions with standard algorithms, coupled with smaller machine learning models where needed, is more maintainable and easier to evolve. It is also more realistic and cost-effective than relying on large LLMs.
Very cool Steve!
Amazing insights. Can you share more on steps 3 and 4?
Thanks for sharing. Question: how much was the total cost to build your models?
Any thoughts on tinygrad?
Also, do you think that eventually the commercial models will be so much better than an open source model that training your data on your own will be for very niche cases?
what do you use to create the flowcharts?
golden boy, golden content!
Steve loved it
all of this journey is just really a *wow🤯💥*
What model should I use to train on my own data for conversational purposes?
what headphones are you wearing? do they have a bluetooth microphone that you're using in the video?
Steve is one of the great brains in next stage technology ❤️
I have some questions.
After training the model in Google Vertex you have a working object detector. Nice!
But how do you connect the output of the model (the coordinates of the bounding boxes) to the code of the website?
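One common way to bridge that gap is pure code: convert the detector's normalized boxes into absolutely positioned elements, then sort them into reading order before generating any markup. A sketch under the assumption that boxes arrive as (label, [x_min, y_min, x_max, y_max]) with coordinates in the 0-1 range; the exact response shape depends on the Vertex AI endpoint:

```python
def boxes_to_layout(boxes, page_w, page_h):
    """Map normalized detector boxes to pixel-positioned elements,
    sorted top-to-bottom then left-to-right for code generation."""
    elems = []
    for label, (x0, y0, x1, y1) in boxes:
        elems.append({
            "tag": "img" if label == "image" else "div",
            "left": round(x0 * page_w),
            "top": round(y0 * page_h),
            "width": round((x1 - x0) * page_w),
            "height": round((y1 - y0) * page_h),
        })
    return sorted(elems, key=lambda e: (e["top"], e["left"]))
```

From this list, emitting absolutely positioned HTML (or handing the structure to a layout-hierarchy model) is a deterministic, non-AI step.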
Awesome!!
awesome!
this was awesome
this is SICK!!
Is this ideal for studying, especially in an engineering field (specifically electrical)? I noticed even Gemini Advanced has difficulty understanding and solving intermediate problems.
Hello Steve, I am a big fan of your content and it really helps me improve my AI knowledge. I am working on a solution that uses an OpenAI knowledge base with data from a company (RAG). I am using the OpenAI API as the brain of the system, but it's pretty costly and doesn't work perfectly even with good instructions; the answers are pretty random and inconsistent. Can you guide me on this? Thank you.
really like your videos, keep going
If you were to add an extra network that routes between the smaller ones, it suddenly starts looking (from a high level, of course) like the larger-scale networks in the brain. The most obvious example would be the brain's Default Mode Network.
cool video... subscribed.
Very interesting !
Something like...using a dumptruck to deliver a small package is okay if you're doing it once or twice, but if you're starting a delivery service, a little effort to reduce cost is a good idea.
So what would a solution like this be called or marketed as, if one were offering it as a service?
what scraper did you use
Can we use the final product?
Steve, you mentioned that it's easy to use some Python library to run a free image model locally on a PC. Can you show it?
I don't understand. Why would you need to run image detection for importing Figma designs? They already have node/element structure. Image block detection is not needed for an importer plugin like this at all. It would be useful if you wanted to build a webpage from mock-ups, not Figma.
the most sensible comment
I've been wondering about this; there are so many general models that are just too big, and I would love to have smaller models that do a specific task.
Steve, you rock, dude!!! Please create a tool for legacy Java projects lol
hmm how accurate/good are the results?
I was able to train YOLOv8s much cheaper than $60. Less than 5. And it was very good.
Hey there! It seems like you've put a lot of effort into learning and applying AI models to solve real-world problems. Your dedication and determination are truly commendable. Keep up the great work and continue exploring new possibilities in the field of AI and machine learning. Your insights and experiences are valuable and can inspire others in the community. Great job!
I'm new to this kind of thing, but what does LLM mean??
Sorry for asking the dumbest question, but can someone recommend more videos like this so I can get a better grasp of what's going on in the video?
LLM stands for large language model. I presume it's self-explanatory what that means, but a popular example is ChatGPT 3.5 / 4. It runs off of an LLM that takes text input and returns output based on predicting the next text, i.e. what text should follow.
@user-db9bw5cl1e It also means that it can only chat, right? It is not good for logical tasks like creating code?
Is there an option similar to Vertex AI in Microsoft Azure?
Thanks for this video. I'm sure you're saving us devs time, headaches and expensive bills from using LLMs
Wait, did I miss the part where you show how you got the text and other Figma objects?
Thank you. How can I do that with GPT4All? I'm looking forward to instructions on how I can work with GPT4All and my own documents and information, and how to train and fine-tune it on my data, if that's possible. Thanks.
What company do you work for? My employer has been looking for a way to generate basic sites from Figma designs. A vendor solution is just fine with them.
Builder.io
@@Steve8708 Excellent. Hugely appreciated!!
Yep, a clean dataset is the raw-ingredient quality of a three-star Michelin recipe; architecture comes second.
That's so trueeee. This is the video I was looking for. Would you mind giving breakdowns? Lol, I know it needs a huge investment of time, but I think we need a community talking about this.
You messed up your general formula a bit: quality of the model = data quality / censorship.
I remember AI Dungeon going from good-enough quality to nearly unusable, even on the paid plan, in one update that added censorship.
is there alternative for Google's Vertex AI ?
That's the cleanest AI-generated code from a design (Figma) I've seen.
Can anyone tell me what tool is used to draw the flows and diagrams in the video?
us! ❤️