Deep Learning on Cloud using Amazon AWS | EC2 GPU Instance
- Added 7 Sep 2024
- Easy Steps to Train CNN Models on Amazon EC2 GPU Instance
In this video, we will learn how to use cloud EC2 GPU instances of Amazon AWS.
This is Part 2 of my video on Transfer Learning using PyTorch.
Link: • Transfer Learning | Im... .
Part 1 explains the concept of transfer learning, discusses custom DataLoaders, and implements transfer learning in PyTorch.
Link for the codes: github.com/sgs...
Link for dataset: www.kaggle.com...
#AWS #EC2 #GPU #Classification #Tutorial
Another, faster way of getting the dataset to the server:
right-click the download button > Copy link address > in the SSH terminal run: wget "the-link" > it downloads straight to the EC2 instance
Thanks George for the comment. Your method is more convenient. Pinned the comment :)
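George's steps above can be sketched as a minimal shell session. This is only an illustration: the URL is a placeholder for the link you actually copy from the download button, and the archive name and unzip destination are assumptions.

```shell
# On the EC2 instance, inside the SSH session.
# DATASET_URL is a placeholder -- paste the link copied via
# right-click on the download button > "Copy link address".
DATASET_URL="https://example.com/dataset.zip"

# Download straight to the instance, then unpack for training.
wget -O dataset.zip "$DATASET_URL"
unzip dataset.zip -d data/
```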
This is the only video covering these things, thank you very much
Great work, thank you. AWS is so simple yet so complicated at the same time haha
Thank you so much, Sagar. Exactly what I was looking for. Please do more such videos.
Felt so helpful for my college minor project
Thank you so much Sagar, it's very useful
Hi Sagar, this was a great video. Please create more videos like this that are detailed and simple. One doubt: it says 1 hour is $0.71, so within this hour I can train a model, but what should be done if we need extra space? Any solution? Please do guide.
repost.aws/knowledge-center/ec2-instance-hour-billing
Thanks a lot! This was very helpful!
Thanks, this helped a lot
Hi, what if I want to use multiple EC2 instances for training? Will they be able to communicate with each other?
thank you so much
Thanks for the video,
I have a question. If the data is about 2 GB or even bigger, would uploading the data to EC2 for training be a good practice, or would you have another solution?
Thanks for the comment. For data around 1-2 GB or less, I think it is better to upload the data to EC2 if you have good internet speed. However, there is another way: for much larger data sizes, or if there are bandwidth constraints, you can consider storing your dataset directly in an Amazon S3 bucket. EC2 instances can access data stored in S3, which can be more efficient for large datasets. Hope that helps
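A rough sketch of the S3 route described above, using the AWS CLI. The bucket name and file names are made up, and it assumes the CLI is installed and the EC2 instance has credentials or an IAM role with read access to the bucket.

```shell
# One-time, from your local machine: upload the dataset to S3.
# "my-training-bucket" is a placeholder bucket name.
aws s3 cp ./dataset.zip s3://my-training-bucket/dataset.zip

# On the EC2 instance: pull the dataset down from S3 and unpack it.
aws s3 cp s3://my-training-bucket/dataset.zip .
unzip dataset.zip -d data/
```

Transfers between S3 and EC2 in the same region avoid your home upload bandwidth, which is the main win for large datasets.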
Great @sagarGS. Thank you so much for the quick response.
May I have some more questions about good practice?
1. Do you usually set up a mechanism to let you know when the training is done? (e.g., one that sends you an email automatically when it's done)
2. In real-world projects, do you usually arrange a mechanism to save the trained model results, then download the results to your local machine and terminate the EC2 instance? And can that mechanism send you an email or some other notification to inform you that the training is done?
Well, I am not sure about email because I do not use such a mechanism to track training status. Regarding point 2, yes, it can be good to write a code/script to do that and automate it for large and multiple projects. I do it manually though, to be frank.
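The manual save-and-shut-down routine mentioned above could be scripted roughly like this. Everything here is a placeholder (key file, instance IP, instance ID, model path), and it assumes the training script saves weights to a known location on the instance.

```shell
# From your local machine: copy the trained weights off the instance.
# Replace the key file, user, IP, and remote path with your own values.
scp -i my-key.pem ubuntu@<instance-public-ip>:~/model.pth ./model.pth

# Then terminate the instance so billing stops.
# Terminating deletes the root volume by default, so copy results first.
aws ec2 terminate-instances --instance-ids <instance-id>
```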
Thank you @@sagarGS
Based on your experience, what level of AWS knowledge is enough to be job-ready? I'm exploring EC2 and S3.
Hi, it depends on the job profile. I would suggest checking out various job descriptions (related to AWS, if that is what you are aiming for) on LinkedIn or other job portals. This will give you an idea of what more one needs to do. All the best :)
Terminal means which terminal, Windows terminal?
No, the terminal is the utility in Linux which can be used to SSH. If using Windows, one can use software such as MobaXterm or PuTTY to SSH
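For reference, connecting from a Linux terminal looks something like this. The key filename and hostname are placeholders from your own instance; the username depends on the AMI (Ubuntu images use "ubuntu", Amazon Linux uses "ec2-user").

```shell
# Restrict key permissions once, or ssh will refuse to use the key.
chmod 400 my-key.pem

# Connect to the instance (replace key and public DNS with your own).
ssh -i my-key.pem ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com
```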