TensorFlow 2.0 Complete Course - Python Neural Networks for Beginners Tutorial

  • Date added: September 8, 2024
  • Learn how to use TensorFlow 2.0 in this full tutorial course for beginners. This course is designed for Python programmers looking to enhance their knowledge and skills in machine learning and artificial intelligence.
    Throughout the 8 modules in this course, you will learn about fundamental concepts and methods in ML and AI, such as core learning algorithms, deep learning with neural networks, computer vision with convolutional neural networks, natural language processing with recurrent neural networks, and reinforcement learning.
    Each of these modules includes in-depth explanations and a variety of coding examples. After completing this course you will have a thorough knowledge of the core techniques in machine learning and AI, and the skills necessary to apply these techniques to your own datasets and unique problems.
    ⭐️ Google Colaboratory Notebooks ⭐️
    📕 Module 2: Introduction to TensorFlow - colab.research...
    📗 Module 3: Core Learning Algorithms - colab.research...
    📘 Module 4: Neural Networks with TensorFlow - colab.research...
    📙 Module 5: Deep Computer Vision - colab.research...
    📔 Module 6: Natural Language Processing with RNNs - colab.research...
    📒 Module 7: Reinforcement Learning - colab.research...
    ⭐️ Course Contents ⭐️
    ⌨️ (00:03:25) Module 1: Machine Learning Fundamentals
    ⌨️ (00:30:08) Module 2: Introduction to TensorFlow
    ⌨️ (01:00:00) Module 3: Core Learning Algorithms
    ⌨️ (02:45:39) Module 4: Neural Networks with TensorFlow
    ⌨️ (03:43:10) Module 5: Deep Computer Vision - Convolutional Neural Networks
    ⌨️ (04:40:44) Module 6: Natural Language Processing with RNNs
    ⌨️ (06:08:00) Module 7: Reinforcement Learning with Q-Learning
    ⌨️ (06:48:24) Module 8: Conclusion and Next Steps
    ⭐️ About the Author ⭐️
    The author of this course is Tim Ruscica, otherwise known as “Tech With Tim” from his educational programming YouTube channel. Tim has a passion for teaching and loves to teach about the world of machine learning and artificial intelligence. Learn more about Tim from the links below:
    🔗 YouTube: / @techwithtim
    🔗 LinkedIn: / tim-ruscica
    --
    Learn to code for free and get a developer job: www.freecodeca...
    Read hundreds of articles on programming: freecodecamp.o...

Comments • 1.9K

  • @TechWithTim
    @TechWithTim 4 years ago +5716

    Let me know what you guys think of the course?! Took a lot of preparation and work to get this out for you guys. Hope you all enjoy and get a solid foundation in the world of machine learning :)

    • @powergladius
      @powergladius 4 years ago +38

      Eyyyyy, ur amazing

    • @zyzzbodybuilding
      @zyzzbodybuilding 4 years ago +91

      About 40 minutes in. Loving it dude! You have no idea how much I appreciate it. Not a fan of that haircut tho.

    • @FlorinPop
      @FlorinPop 4 years ago +15

      Thank you for this course Tim! I can't wait to get into it! 😃

    • @shanalishams1
      @shanalishams1 4 years ago +7

      Just started the course; I will share my feedback once I complete it. Thank you for uploading this.

    • @SanataniAryavrat
      @SanataniAryavrat 4 years ago +7

      very extensive and damn good one so far...

  • @techstuff7568
    @techstuff7568 3 years ago +1545

    'I'm sorry I'm talking a lot but...'
    Bro, it's a 7 hour TensorFlow tutorial, I didn't expect anything less! Awesome tutorial, thanks man

  • @phenomadit1821
    @phenomadit1821 4 months ago +43

    00:05 Introduction to TensorFlow 2.0 course for beginners.
    02:26 Introduction to Google Colaboratory for easy machine learning setup
    07:07 AI encompasses machine learning and deep learning
    09:35 Neural networks use layered representation of data in machine learning.
    14:12 Data is crucial in machine learning and neural networks
    16:37 Features are input information and labels are output information.
    21:07 Supervised learning involves guiding the model to make accurate predictions by comparing them to the actual labels
    23:21 Unsupervised machine learning involves clustering data points without specific output data.
    27:57 Training reinforcement models to maximize rewards in an environment.
    30:00 Introduction to TensorFlow and its importance
    34:36 Understanding the relation between computations and sessions in TensorFlow
    36:52 Google Colaboratory allows easy access to pre-installed modules and server connection.
    41:11 Importing TensorFlow in Google Colaboratory for TensorFlow 2.0
    43:17 Tensors are fundamental in TensorFlow 2.0
    47:58 Explanation of tensors and ranks
    50:12 Understanding TensorFlow tensor shapes and ranks
    54:41 Reshaping Tensors in TensorFlow
    56:47 Using TF session to evaluate tensor objects
    1:01:16 Different categories of machine learning algorithms
    1:03:07 Linear regression for data prediction
    1:07:22 Calculating the slope of a line using a triangle and dividing distances
    1:09:29 Predicting values using the line of best fit
    1:13:31 Overview of important Python modules like NumPy, pandas, and matplotlib
    1:15:43 Predicting survival on the Titanic using TensorFlow 2.0
    1:19:40 Splitting data into training and testing sets is crucial for model accuracy.
    1:21:48 Separating the data for classification
    1:26:09 Exploring dataset statistics and shape attributes
    1:28:12 Understanding the data insights from the analysis
    1:32:21 Handling categorical and numeric data in TensorFlow
    1:34:39 Creating feature columns for TensorFlow model training
    1:38:42 Epochs are used to feed data multiple times for better model training
    1:40:55 Creating an input function for TensorFlow data set objects
    1:45:19 Creating an estimator and training the model in TensorFlow
    1:47:21 Explanation on how to access and interpret statistical values from a neural network model.
    1:51:46 Exploring survival probabilities based on indices
    1:53:52 Introduction to classification in TensorFlow 2.0
    1:58:01 Data frames in TensorFlow 2.0 contain encoded species already, simplifying data preprocessing.
    2:00:08 Creating input function and feature columns in TensorFlow 2.0
    2:04:26 Setting up the neural network and defining the number of nodes and classes.
    2:06:35 Using lambda functions to create chained functions
    2:10:44 Creating a prediction function for specific flowers
    2:12:46 Explaining the process of predicting on a single value
    2:17:25 Clustering helps find clusters of like data points
    2:19:50 Data points are assigned to clusters based on distance to centroids.
    2:24:02 Understanding K means clustering
    2:26:09 Hidden Markov model uses states and observations with associated probabilities.
    2:30:36 Defining transition and observation probabilities in two states
    2:32:56 Hidden Markov Model predicts future events based on past events
    2:37:22 Explanation of transition probabilities and observation distribution in a Hidden Markov Model
    2:39:31 Mismatch between TensorFlow versions
    2:43:45 Hidden Markov models are used for probability-based predictions.
    2:45:35 Introduction to neural networks and their working principle.
    2:50:00 Designing the output layer for neural networks
    2:52:19 Neural networks make predictions based on probability distributions for each class.
    2:56:39 Introduction to biases as trainable parameters in neural networks
    2:58:53 Neural network nodes determine values using weighted sums of connected nodes.
    3:03:21 Explanation of different activation functions in neural networks
    3:05:38 Sigmoid function is chosen for output neuron activation
    3:10:00 Loss function measures the deviation of the neural network output from the expected output.
    3:12:25 Understanding the concept of cost function and gradient descent
    3:17:01 Neural networks update weights and biases to make better predictions with more data.
    3:19:17 Loading and exploring the Fashion MNIST dataset for training and testing neural networks.
    3:23:54 Data pre-processing is crucial for neural networks
    3:25:54 Pre-processing images is crucial for training and testing in neural networks
    3:30:26 Selecting optimizer, loss, and metrics for model compilation
    3:32:33 Training and testing a neural network model in TensorFlow 2.0
    3:36:51 Training with less epochs can lead to better model performance
    3:39:00 Understanding predictions and probability distribution
    3:43:34 TensorFlow deep learning model used for computer vision and classification tasks.
    3:45:42 Images are represented by three color channels: red, green, and blue
    3:50:09 Convolutional neural networks analyze features and patterns in images.
    3:52:19 Convolutional neural networks use filters to identify patterns in images
    3:56:49 Quantifying presence of filters using dot product
    3:58:52 Understanding filter similarity in TensorFlow 2.0
    4:03:09 Padding, Stride, and Pooling Operations in Convolutional Neural Networks
    4:05:17 Pooling operations reduce feature map size
    4:09:30 Loading and normalizing image data for neural networks
    4:11:41 Understanding the input shape and layer breakdown
    4:15:58 Optimizing model performance with key training strategies
    4:17:59 Data augmentation is crucial for training convolutional neural networks with small datasets.
    4:22:12 Utilizing pre-trained models for efficient neural network training
    4:24:19 Modifying last layers of a neural network for classifying
    4:28:24 Using pre-trained model, MobileNet v2, built into TensorFlow
    4:30:31 Freezing the base model to prevent retraining
    4:34:45 Evaluation of model with random weights before training.
    4:36:58 Saving and loading models in TensorFlow
    4:41:00 Natural Language Processing (NLP) is about understanding human languages through computing.
    4:43:19 Sentiment analysis and text generation using natural language processing model
    4:47:46 Introduction to bag of words technique in neural networks
    4:49:54 Bag of words technique encodes sentences with the same representation, losing their meaning.
    4:54:13 Word embeddings aim to represent similar words with similar numbers to address issues with arbitrary mappings.
    4:56:25 Introduction to word embeddings in a 3D space
    5:00:59 Difference between feed forward and recurrent neural networks
    5:03:22 Explanation of processing words sequentially in a neural network
    5:08:01 Introduction to Simple RNN and LSTM layers
    5:10:29 Long Short Term Memory (LSTM) allows access to output from any previous state.
    5:14:53 Padding sequences to ensure equal length for neural network input
    5:17:02 Creating a neural network model for sentiment analysis
    5:21:24 Evaluating model accuracy and preparing for predictions
    5:23:49 Explanation of padding and sequence processing in TensorFlow 2.0
    5:28:20 Analyzing sentiment impact on prediction accuracy
    5:30:27 Training neural network to generate text sequences
    5:34:48 Creating mapping from characters to indices
    5:37:09 Creating training examples for TensorFlow neural network model
    5:41:53 Batching and model building process in TensorFlow 2.0
    5:44:07 Setting model parameters and layers in TensorFlow 2.0
    5:49:05 Explaining model predictions for each element in batch and sequence length
    5:51:26 The model outputs a tensor for each training example, and we need to create our own loss function to determine its performance.
    5:56:05 Training neural networks with varying epochs for performance evaluation
    5:58:29 Generating output sequences using TensorFlow model
    6:02:53 Processing steps for text data in TensorFlow 2.0
    6:05:05 Building and training the model with different batch sizes and checkpoints
    6:09:25 Reinforcement learning involves an agent exploring an environment to achieve objectives.
    6:11:43 States, Actions, and Rewards in Reinforcement Learning
    6:16:24 Q matrix represents predicted rewards for actions in states.
    6:18:43 Maximize agent's reward in the environment
    6:23:21 Introducing exploration in reinforcement learning
    6:25:26 Balancing Q table and random actions in Q learning algorithm
    6:30:03 Discount factor helps in factoring future rewards into the equation for finding the best action in the next state.
    6:32:16 Introduction to OpenAI Gym for training reinforcement learning models
    6:36:46 Introduction to navigating a frozen lake environment using Q-learning.
    6:38:54 Max steps and learning rate in reinforcement learning
    6:43:05 Training the agent using Q-learning algorithm
    6:45:18 Training process involves adjusting epsilon and monitoring reward progress.
    6:49:39 Focus on a specific area in machine learning or AI for deeper learning.
    6:51:47 Largest open source machine learning course in the world focused on TensorFlow and Python.
    Crafted by Merlin AI.

  • @thesral96
    @thesral96 3 years ago +407

    Please consider adding chapters to the YouTube progress bar so that the information is easier to find later on.

    • @nickfiction5507
      @nickfiction5507 2 years ago +87

      ⌨️ Module 1: Machine Learning Fundamentals (00:03:25)
      ⌨️ Module 2: Introduction to TensorFlow (00:30:08)
      ⌨️ Module 3: Core Learning Algorithms (01:00:00)
      ⌨️ Module 4: Neural Networks with TensorFlow (02:45:39)
      ⌨️ Module 5: Deep Computer Vision - Convolutional Neural Networks (03:43:10)
      ⌨️ Module 6: Natural Language Processing with RNNs (04:40:44)
      ⌨️ Module 7: Reinforcement Learning with Q-Learning (06:08:00)
      ⌨️ Module 8: Conclusion and Next Steps (06:48:24)

    • @acosmic7841
      @acosmic7841 1 year ago

      Thanks

    • @aridorjoskowich7283
      @aridorjoskowich7283 4 months ago

      ​@@nickfiction5507 doing God's work

  • @aaronpaul2550
    @aaronpaul2550 2 years ago +82

    I think this course gives a chance to anyone who wants to learn machine learning in a fast and free way.
    And it saves a bunch of time digging through papers and library documentation.
    This course is gradual.
    There is a clear understanding of everything from linear regression to reinforcement learning, and even the example programs are fully described and annotated. The people who made and designed this course are thoughtful and selfless in their sharing, and they deserve huge applause.
    Thank you very much.

    • @bohaning
      @bohaning 7 months ago

      Hey, I'd like to introduce you to my AI learning tool, Coursnap, designed for YouTube courses! It provides course outlines and shorts, allowing you to grasp the essence of a 1-hour video in just 5 minutes. Give it a try and supercharge your learning efficiency!

  • @SomeshB
    @SomeshB 5 months ago +39

    CAUTION: The video is outdated. You can use it for the concepts, but code-wise TensorFlow has deprecated many of the modules used in the code he mentioned.

    • @gdfra4733
      @gdfra4733 5 months ago +3

      I'm at 2 hours in and the only thing I had a problem with was tensorflow.compat, which is now tensorflow._api.v2.compat.v2

    • @vjndr32
      @vjndr32 2 months ago +1

      @@gdfra4733 You might be using an older version of TensorFlow. I'm on 2.16.1 on my local machine and not able to run even the linear regression.

    • @EJYIEI
      @EJYIEI 1 month ago +1

      @@vjndr32 In my case I decided to use Keras models instead of estimators, since the official TensorFlow page itself has a tutorial on migrating.

    • @KhalidKassim-gc6mj
      @KhalidKassim-gc6mj 1 month ago +1

      That’s y he said to use the same 2.0 version 🤦‍♂️

    • @LandonCummings1
      @LandonCummings1 1 month ago

      @@EJYIEI Hey, I'm at the point where I want to use Keras models instead of estimators, but I can't figure out the tutorial for a linear regression model.
      Any chance you could share your code for that section?

  • @net.5503
    @net.5503 4 years ago +1144

    ⌨️ Module 1: Machine Learning Fundamentals (00:03:25)
    ⌨️ Module 2: Introduction to TensorFlow (00:30:08)
    ⌨️ Module 3: Core Learning Algorithms (01:00:00)
    ⌨️ Module 4: Neural Networks with TensorFlow (02:45:39)
    ⌨️ Module 5: Deep Computer Vision - Convolutional Neural Networks (03:43:10)
    ⌨️ Module 6: Natural Language Processing with RNNs (04:40:44)
    ⌨️ Module 7: Reinforcement Learning with Q-Learning (06:08:00)
    ⌨️ Module 8: Conclusion and Next Steps (06:48:24)

  • @UthacalthingTymbrimi
    @UthacalthingTymbrimi 1 year ago +15

    1:19:30 In case anyone is curious, in the Titanic dataset, "parch" is "Parents/Children" (i.e., was this person travelling with other family members), and "fare" is the price paid for their ticket (which may include travelling costs for other people they were travelling with, family or not).
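
    A quick way to inspect those two columns yourself (a minimal sketch, not from the video; it assumes the Titanic training CSV that the course notebook loads from Google's storage bucket):

    import pandas as pd

    # Titanic training split used in the course notebook (assumed URL)
    url = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
    df = pd.read_csv(url)
    # parch = number of parents/children aboard, fare = ticket price paid
    print(df[["parch", "fare"]].describe())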

    • @SandSeppel
      @SandSeppel 1 year ago +2

      thank you so much

    • @user-ze7sj4qy6q
      @user-ze7sj4qy6q 1 year ago

      thank u i figured what fare was but i had no idea abt parch n it was bothering me lol

  • @toihirhalim
    @toihirhalim 3 years ago +105

    this made me understand what I've been learning for 2 semesters

  • @NikhilYadav-ji8rm
    @NikhilYadav-ji8rm 3 years ago +774

    Timestamps for all the different core learning algorithms,
    Linear Regression (01:00:00)
    Classification (01:54:00)
    K-Means Clustering (02:17:07)
    Hidden Markov Models (02:24:56)

    • @emberleona6671
      @emberleona6671 3 years ago +2

      @User Account Karen!

    • @filipo4114
      @filipo4114 3 years ago +5

      03:43:10 - Convolutional Neural Networks

    • @goksuceylan8844
      @goksuceylan8844 3 years ago

      @User Account ur mom

    • @prasaddalavi9683
      @prasaddalavi9683 3 years ago +2

      Hey, just to confirm: are you sure 1:00:00 is linear regression and not linear classification? I am not able to get this; we are classifying whether the passenger survived or not based on the input data. Can someone please help with this?

    • @leonardodalcegio4763
      @leonardodalcegio4763 3 years ago

      @@prasaddalavi9683 I am also in doubt about that

  • @henryly213
    @henryly213 2 months ago +3

    Things I needed to update as I went through the course (will update as I go):
    1:13:12
    - Package renamed to scikit-learn: !pip install -q scikit-learn
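
    For reference, a small sketch of that rename (the PyPI package is now scikit-learn, while the import name stays sklearn):

    # In a Colab cell:
    # !pip install -q scikit-learn   (the old "sklearn" name on PyPI is deprecated)
    import sklearn                    # the import name is unchanged
    print(sklearn.__version__)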

  • @gioannguyen4213
    @gioannguyen4213 6 days ago +1

    I've been in this course for over 2 hours.
    I think this is a good point to start with, as it can equip beginners with the big picture, together with quick (but sufficient) explanations of terms such as ML, layer, etc.
    Although some of the code provided in the Google Colab notebooks couldn't run properly in 2024 (right now), I suggest watching the video to get a grasp of what should happen and practicing later (maybe in another course, or once you figure out how to execute the code).
    Happy learning!

  • @ramazad1363
    @ramazad1363 2 years ago +56

    48:00 rank
    50:00 shape
    52:00 change in shape
    55:10 types of tensors
    56:30 evaluating Tensors
    57:25 sources
    57:40 practice
    1:00:00 tensorflow core learning algorithms
    1:02:40 linear regression
    1:13:00 setup and import
    1:15:40 data

  • @yungrabobank4691
    @yungrabobank4691 3 years ago +97

    For people wanting to understand the basic idea behind neural networks, 3Blue1Brown's video is a nice addition to this introduction! It helped me understand the topics and code Tim discussed a lot better.

    • @danielleivy8180
      @danielleivy8180 1 year ago +3

      Also Stanford has their full CS229 course online as well - along with lecture notes. :)

    • @pabloa.2586
      @pabloa.2586 11 months ago

      @@danielleivy8180 where can i find that course? thanks in advance

    • @danielleivy8180
      @danielleivy8180 11 months ago

      @@pabloa.2586 czcams.com/video/aircAruvnKk/video.htmlsi=02lMoL958AjkXkyy

  • @damascenoalisson
    @damascenoalisson 4 years ago +53

    Just a comment, on 3:36:44, when you train the network again it's not re-training from scratch but instead using the weights it already had. Unless you manually reset the graph, you'll be training for the sum of all epochs you used the fit function (like 10 + 8 + 1 epochs)
    To avoid this problem you should use something like keras.backend.clear_session() or tf.reset_default_graph() between tests with hyperparameters 😉
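
    A minimal sketch of that reset between hyperparameter runs (assuming tf.keras and the Fashion MNIST setup from the module; in TF 2.x the old graph-reset call lives under tf.compat.v1):

    from tensorflow import keras

    # Fashion MNIST, as in the module
    (train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()
    train_images, test_images = train_images / 255.0, test_images / 255.0

    def build_model():
        # Rebuilding the architecture re-initializes all weights
        model = keras.Sequential([
            keras.layers.Flatten(input_shape=(28, 28)),
            keras.layers.Dense(128, activation="relu"),
            keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    for epochs in (10, 8, 1):
        keras.backend.clear_session()   # drop state left over from the previous run
        model = build_model()           # fresh random weights, so runs are comparable
        model.fit(train_images, train_labels, epochs=epochs, verbose=0)
        print(epochs, model.evaluate(test_images, test_labels, verbose=0))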

    • @cwlrs4944
      @cwlrs4944 4 years ago +1

      Mm thought that was the case. Wasn't starting from the ~80% accuracy from the first epoch of the latter training runs.

    • @lawrencegranda7759
      @lawrencegranda7759 2 years ago +2

      Another way is just to rebuild the model.

    • @ufukdemiray6176
      @ufukdemiray6176 2 years ago +10

      This was painful to watch, yeah... I know he's doing his best to show stuff, but he's pretty much a beginner too.

  • @DrRussell
    @DrRussell 3 years ago +63

    Just started. Know nothing so can’t contribute yet but wanted to thank you for advancing humanity. You may have just given my life a purpose

    • @benlaurent3102
      @benlaurent3102 3 years ago +2

      How’s it been going? Are you still doing machine learning?

    • @cutyoursoul4398
      @cutyoursoul4398 3 years ago +1

      life has no purpose

    • @thesickbeat
      @thesickbeat 3 years ago

      @@cutyoursoul4398 Said the atheist.

    • @cutyoursoul4398
      @cutyoursoul4398 3 years ago +1

      @@thesickbeat not atheist, that's just the Truth

    • @thesickbeat
      @thesickbeat 3 years ago

      @@cutyoursoul4398 Its your truth. Not the truth.

  • @andyh964
    @andyh964 6 months ago +1

    This video is gold. I am an MSc student in AI and I literally use this video as a reference to understand some topics that are poorly explained in the modules. I've watched 5 of the 7 hours.

  • @marufm8195
    @marufm8195 3 years ago +25

    Just finished the tutorial; it's really well made and an amazing intro to ML concepts. I'm really excited to explore this further. Thank you so much, Tim.

  • @manikandans2030
    @manikandans2030 4 years ago +15

    Run time 3:37:00 - I think we have to compile the model every time before we do a fit. Otherwise it just memorizes the previous epochs and uses them for the next iterations. In this case I believe the 92% accuracy after 1 epoch is really the result of all the previous epochs added together, i.e. 10+8+1 = 19 epochs.

    • @WalkerSuper900
      @WalkerSuper900 8 months ago

      I agree 100%. He was overfitting the model even more.

  • @thesultan1212
    @thesultan1212 4 years ago +41

    This video is pure gold; the guy explains really well. Learned more from this than paid courses. Thanks so much, keep it up!

    • @CivilSurveyor
      @CivilSurveyor 4 years ago +3

      Hello Sir,
      I need a program, VBA macro, Excel sheet, or anything else in which I can add my data in numeric form (1, 2, 3, etc.) and then have that data plotted or drawn in AutoCAD with one click.

    • @Trixz-the
      @Trixz-the 3 years ago +1

      @Dario Argies relax pal

  • @gadi800
    @gadi800 2 years ago +27

    Despite being lost in the RNN part haha, this tutorial was great! I really appreciate your hard work and you've done great in simplifying explanations. Well done! It's programmers like yourself that make it possible for anybody to learn programming and that is a great thing. I hope to see more courses from you in the future!

  • @someatuffs
    @someatuffs 1 year ago +13

    ⌨ (00:03:25) Module 1: Machine Learning Fundamentals
    ⌨ (00:30:08) Module 2: Introduction to TensorFlow
    ⌨ (01:00:00) Module 3: Core Learning Algorithms
    ⌨ (02:45:39) Module 4: Neural Networks with TensorFlow
    ⌨ (03:43:10) Module 5: Deep Computer Vision - Convolutional Neural Networks
    ⌨ (04:40:44) Module 6: Natural Language Processing with RNNs
    ⌨ (06:08:00) Module 7: Reinforcement Learning with Q-Learning
    ⌨ (06:48:24) Module 8: Conclusion and Next Steps
    Progress: 04:00

  • @redwings5576
    @redwings5576 4 years ago +6

    I've been going back and forth between Python and game development... but I haven't actually learned anything, and now I'm here for machine learning; before this I started a hacking course. Everything is unfinished, and just the thought of what I would be able to do when I get good at any of these is what makes me want to learn them, but I can't really stick to one.

    • @abhinavyadav789
      @abhinavyadav789 4 years ago

      Happens with everyone. I went with Flask to build a website, then an API, left it, did a bit of NumPy, now here. I'm just looking for something that interests me enough to make me stick with it for a longer time; you should keep looking for something that might interest you so that you stick with it longer!

    • @redwings5576
      @redwings5576 4 years ago

      @@abhinavyadav789 The thing I really want to do is the one I don't have any kind of support for. So it's like I'm just trying to find something in places where I know I won't find it.

  • @leixun
    @leixun 4 years ago +68

    *My takeaways:*
    1. TensorFlow has two main components: graph and session 33:05
    2. We can rebuild a model and change its original parameters 5:56:31
    3. Reinforcement learning with Q-Learning 6:08:00

  • @networkserpent5155
    @networkserpent5155 1 year ago +2

    I only watched up to the 20-minute mark today, but I can say this really helped me get a theoretical grasp on machine learning in general. I just thought machine learning was only about predicting data, but today I learned that in order to do that it uses an algorithm that makes rules, then follows them and gives back the label (output). Thank you so much!!

  • @mariuspopovici4296
    @mariuspopovici4296 4 years ago +86

    Fare would be the amount they paid for the trip / ticket price. Parch is # of Parents/Children aboard.

    • @vovin8132
      @vovin8132 3 years ago +1

      Yeah I was thinking that fare was a function of cabin class (base value) and destination (length on board).

  • @owusukwakumoses99
    @owusukwakumoses99 2 years ago +2

    It's 2022 and this video is as relevant as ever. I didn't really follow up in the latter parts but you really did a good job explaining stuff. Thanks!!!

  • @skviknesh
    @skviknesh 4 years ago +43

    1:02:44 "Do not memorize, just understand" - that made my mind stay calm. Felt like saying thanks at that time frame... "Thank you!"

    • @Pinocciochannel
      @Pinocciochannel 4 years ago +2

      Well, I'm not the best at Python, but it's my favorite out of all the languages I know. So whenever I don't understand something I'm like, chill, it's just Python. That helps, at least to some extent.

    • @GabrielAzevedo_11
      @GabrielAzevedo_11 3 years ago

      I thought the same, it gave me a relief.

  • @DhruvPatel11
    @DhruvPatel11 2 years ago +6

    Thanks, man, I was having difficulties learning core concepts of ML for a long time but this video cleared all my queries and now I understand everything which you've explained. Thanks again for making this video. It helped a lot

  • @GeekTutorials1
    @GeekTutorials1 4 years ago +135

    Mate, this was very cool. I'd never heard of it before, but coming from a Python background, I found this very helpful. Keep up the great work! Looking forward to what else you have on offer.

    • @garzj
      @garzj 4 years ago +5

      The only thing that bothers me is the way that he draws pacman...

    • @raspberrypi4970
      @raspberrypi4970 4 years ago

      Try OceanSDK/Leap2 from D-Wave

  • @johnsonamodu77
    @johnsonamodu77 1 year ago +1

    Parch represents the number of parents and/or children the passenger was onboard the ship with (1:17:49).
    Fare represents the fare price, i.e. the ticket price (1:18:05).

  • @pallavijog912
    @pallavijog912 4 years ago +20

    At 3:37:00, you said that with fewer epochs you are getting better test results, which is actually not the case. You first ran for 10 epochs and your weights got updated. Then you ran 8 more epochs, and the weights kept improving from their previous values, which effectively makes 18 epochs. Then you ran for 1 epoch, which makes it 19 epochs. So in this case it is after the 19th epoch that your accuracy on the test data increased.

    • @yashvander-bamel
      @yashvander-bamel 3 years ago +2

      I was about to write the same thing...seems like I'm not the only one who noticed :)

  • @bobmimiaga
    @bobmimiaga 2 years ago +11

    Tim, I know you've heard this before, but this was a very well done course on Tensorflow basics and Machine Learning. This is my first online course I've taken and glad it was yours! Thanks for the time you put into this course which will help countless programmers and adventurers interested in this fascinating field.

  • @porterneon
    @porterneon 4 years ago +98

    parch: The dataset defines family relations in this way...
    Parent = mother, father
    Child = daughter, son, stepdaughter, stepson
    Some children travelled only with a nanny, therefore parch=0 for them.

  • @waiitwhaat
    @waiitwhaat 1 year ago +1

    3:36:25 The accuracy keeps going up because every time you execute the code block, it takes the already-trained model and runs it for the specified number of epochs, since .fit() is being called on the same variable 'model'. To start training with a fresh model, just re-initialize the variable 'model' by executing the code block with 'model = keras.Sequential([...])'.
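
    In notebook terms, something like this (a sketch; the layer sizes assume the Fashion MNIST module, and train_images/train_labels are the arrays prepared earlier in that notebook):

    from tensorflow import keras

    # Re-running this cell gives 'model' fresh random weights again
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Now a single fit() call really is one epoch from scratch
    model.fit(train_images, train_labels, epochs=1)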

  • @rishabhgarg1445
    @rishabhgarg1445 4 years ago +31

    Just wanted to add one small detail: in Module 4, when training the model for 1 epoch after it had already been trained for a while, what actually happens in IPython notebooks is that training continues on top of the previously trained model. That is why we got pretty high accuracy for one epoch; technically that accuracy did not come from just one epoch.

    • @eric9964
      @eric9964 2 years ago

      Are you sure? Adding on one epoch to that model made a significant jump from its previous accuracy. I don't believe this is the case.

    • @lawrencegranda7759
      @lawrencegranda7759 2 years ago +3

      I agree. He did not restart/rebuild the model, so it just kept training using the previous weights

    • @RodrigoLobatorodrigo
      @RodrigoLobatorodrigo 2 years ago

      @@lawrencegranda7759 I was watching this and even though I am a total newbie, I also noticed that the training was simply continuing, not starting from scratch.

  • @ScriptureFirst
    @ScriptureFirst 3 years ago +7

    I typically dislike unscripted narration and prefer scripted tutorials, but you speak very clearly and concisely even extemporaneously. Very well done! 🙏🏼

  • @mohdabdulrahman4210
    @mohdabdulrahman4210 4 years ago +14

    it's only 30 minutes and I'm already loving it

  • @yousefwaelsalehelsaidkhalil
    @yousefwaelsalehelsaidkhalil 10 months ago +1

    Wow, amazing. At some point I lost the drive to continue in machine learning, but you have just given me a road to run on for a long time!!

  • @jedi4ever
    @jedi4ever 2 years ago +13

    I really, really enjoyed this tutorial . It takes the time to explain soo many aspects and has a great build up. Well done!

  • @gusinthecloud
    @gusinthecloud 3 years ago +1

    I had read 3 books about AI before this video. You made a very clear course and it helps me a lot. Thank you very much indeed.

  • @ci9vt
    @ci9vt 4 years ago +21

    The second argument to tf.Variable() is trainable not dtype, so when you set string = tf.Variable('some string', tf.string), you set string.trainable to tf.string. You can verify it by printing string.trainable.
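
    A small sketch of the difference (passing dtype by keyword avoids landing in the trainable slot):

    import tensorflow as tf

    # Positional: the 2nd parameter of tf.Variable is `trainable`, not dtype,
    # so tf.string ends up stored as the trainable flag here (per the comment above).
    s1 = tf.Variable("some string", tf.string)

    # Keyword: what the course snippet presumably intends.
    s2 = tf.Variable("some string", dtype=tf.string)
    print(s2.dtype, s2.trainable)   # <dtype: 'string'> True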

    • @mom4839
      @mom4839 4 years ago +2

      Where is the subtitle??

    • @masudulalam2515
      @masudulalam2515 4 years ago +2

      What is string.trainable? What is the purpose of it? I'm a real noob here, help me out!!

    • @sangramjitchakraborty7845
      @sangramjitchakraborty7845 4 years ago

      @@masudulalam2515 it sets the variable as trainable or not. Trainable variables are updated during training. Like weights and biases.

    • @sandeshadhikari2889
      @sandeshadhikari2889 4 years ago

      Can I learn machine learning without a laptop with a dedicated graphics card?? Please help (I am going to buy a laptop on a low budget).

  • @dannloloy
    @dannloloy 3 years ago +8

    He may not be the greatest teacher, tbh (but he is among those who are great), and his commitment to teaching is undeniable! Thank you, sir.

  • @humanbeing2282
    @humanbeing2282 1 year ago

    1:19:41 Someone else must have commented on this already, but fare refers to the price you paid for a ticket, as in the amount paid to board the ship for the voyage. It's a general term that broadly means the amount you contributed economically to partake in or embark on an activity. You could feasibly replace fare with "ticket price" or "entrance fee" and it would mean the same thing. Fare has a slightly different connotation, but for any practical purpose it's a synonym for the cost of a thing. It's notably not a TensorFlow-specific term.

  • @snackbob100
    @snackbob100 4 years ago +43

    Dude, this is fantastic! Thank you. How anyone can dislike this, I don't know!

    • @11hamma
      @11hamma 4 years ago +4

      He presents a lot of wrong info; non-beginners would notice readily.

    • @LA-eq4mm
      @LA-eq4mm 3 years ago +1

      @@11hamma like what

    • @itjustmemyselfandi
      @itjustmemyselfandi 3 years ago

      Can I ask how long it took to learn and watch this video?

  • @3T-InfoTinker
    @3T-InfoTinker 3 years ago +2

    Learning is something different from just giving opinions. Tim, you are such a good teacher, man.

  • @devloper_hs
    @devloper_hs 4 years ago +4

    FOR TENSORFLOW 2.0
    For running sessions at 57:03:
    with tf.compat.v1.Session() as sess:
        print(tensor0.eval())
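
    In eager mode (the TF 2.x default) you typically don't need a session at all; a minimal sketch, with a stand-in tensor:

    import tensorflow as tf

    tensor0 = tf.zeros([2, 3])    # stand-in tensor; use your own here
    print(tensor0)                # eager tensors print their values directly
    print(tensor0.numpy())        # or pull the value out as a NumPy array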

  • @anglewyrm3849
    @anglewyrm3849 2 years ago

    Here's a view of the atom of intelligence: NAND logic expresses the notion of contrasting two things; the light comes on when two things are different. NOR logic expresses absence; the light comes on in the dark. Each of these logic operations can individually express all of logic, entirely replacing AND/OR/NOT. So there are three interchangeable systems.
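
    A tiny sketch of that universality claim, building the usual gates from NAND alone (hypothetical helper names):

    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def not_(a: bool) -> bool:
        return nand(a, a)               # NAND of a thing with itself is negation

    def and_(a: bool, b: bool) -> bool:
        return not_(nand(a, b))

    def or_(a: bool, b: bool) -> bool:
        return nand(not_(a), not_(b))   # De Morgan: NOT(NOT a AND NOT b)

    # quick truth-table check
    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)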

  • @puspamadak
    @puspamadak 3 years ago +18

    This video is a must-watch for beginners getting into machine learning. I wish I had seen this video before. I have never got a better understanding of these topics and the differences between the terms AI, machine learning, etc. Thank you, sir, for your efforts.
    I am in class 12, and there is linear regression in Mathematics, but I hadn't even thought that it could be used in ML as well.

    • @shdnas6695
      @shdnas6695 2 years ago +2

      Just curious to know, what are u doing now dude? i mean in programming area

  • @beastkidoooo
    @beastkidoooo 4 years ago +1

    Very good course, Tim. I am 12 and have finished your course up to neural networks. You're a great teacher; ignore the bad comments, because the people who post those comments are failures who are jealous of you, so never ever give up!!

  • @janicesmyth2183
    @janicesmyth2183 1 year ago +6

    thank you so much Tim! I wish this was around when I was much younger! I was always very curious about learning about programming!

  • @ScriptureFirst
    @ScriptureFirst 3 years ago

    Thank you for putting comments in each line. Many people skip this level of detail. I love that you’ve wrapped this in comments. 🙏🏼

  • @acidnynex
    @acidnynex 4 years ago +5

    Good work, I appreciate you trying to teach the masses. However, the first example is not linear regression; it is binomial logistic regression, and it doesn't really represent what you explain earlier in the first part of the video. Perhaps the housing price dataset or another dataset would be a good example for this, with binomial logistic regression as a second step that then leads into multinomial logistic regression.

  • @fernandogamdev
    @fernandogamdev 3 years ago +1

    I just don't know how to thank you for ALL YOUR EFFORT in making that video! All the content and the explanation! It's just mind-blowing! I am eternally grateful!

  • @jamesmuthama1750
    @jamesmuthama1750 1 year ago +3

    If you're a complete beginner, ChatGPT explains the difficult concepts so well.

  • @sirakovich1
    @sirakovich1 1 year ago +1

    Parch is the number of parents or children of a specific passenger. By the way, this is an amazing tutorial, thank you so much!!! 01:18:00

  • @TemisBall
    @TemisBall 3 years ago +14

    Hmm, very deep. A y label called 'x' and an x label called 'y'. LOL, I really loved the video btw, I watched it until the end!

    • @badboogl8529
      @badboogl8529 3 years ago

      Yo, this tripped me up too lol
      P.S. Are you Korean? Asking because of your name.

    • @tidtechnologyindepth6337
      @tidtechnologyindepth6337 3 years ago +1

      I didn't understand that -1 thing at 54:25, can anyone help me out! 😭

    • @AlenaShomanova
      @AlenaShomanova 3 years ago +1

      @@tidtechnologyindepth6337 This is basically when you're telling your machine "idk, I already gave you one number, count the rest yourself".
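
      Concretely, a minimal sketch:

      import tensorflow as tf

      t = tf.ones([2, 3, 4])        # 24 elements in total
      r = tf.reshape(t, [6, -1])    # -1 tells TF to infer this dimension: 24 / 6 = 4
      print(r.shape)                # (6, 4)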

  • @vierminus
    @vierminus 1 year ago +1

    For those who stumble upon the error "AttributeError: module 'keras.preprocessing.sequence' has no attribute 'pad_sequences'" at 5:16:32:
    the pad_sequences function has been moved, you can do it like this now:
    train_data = keras.utils.pad_sequences(train_data, MAXLEN)
    test_data = keras.utils.pad_sequences(test_data, MAXLEN)

  • @vanishingentropy6488
    @vanishingentropy6488 4 years ago +13

    Loved it! Great tutorial covering a lot of areas. TechWithTim's explanation and the epic examples help open up the field to beginners like me, and the 7 hours were super-interesting!

  • @dlerner97
    @dlerner97 3 years ago

    Okay, I'm not completely sure about this, so take it with a grain of salt, but I don't think your hyperparameter/epoch tuning at 3:37:00 is doing what you expect. With Jupyter notebooks, the model is saved, and each time you run fit it continues tuning the previous weights. In order to really compare epoch counts, you need to restart the runtime and repeat the process. If you notice, each time you run the code, the "first epoch accuracy" increases significantly. The first time you ran it, the accuracy was 83% after the first epoch. After the 10th, it was 90.6%. Then, for the next iteration (8 epochs), the accuracy was 91.2% after the first epoch. Then, when running on just a single epoch, it started at 93%. Likely this is because the model continued to train for additional epochs. So, in fact, the single-epoch run is ironically quite overfit.

  • @jsmammen6775
    @jsmammen6775 4 years ago +6

    Thank you for this video. This is the most thorough and simple introduction to Tensorflow and AI in general.

  • @ScriptureFirst
    @ScriptureFirst 3 years ago

    I LOVE this firehose format of SPRINT-crawl-walk. Everyone thinks they need to crawl-walk-run & that’s crap. I like your style dude.

  • @saiteja7170
    @saiteja7170 4 years ago +10

    Another video need to be saved :) Thank you so much Tim!! ❤️

  • @rizalpurnawan3796
    @rizalpurnawan3796 2 years ago

    I had heard of TensorFlow several times, but I never expected that it literally uses tensors from math's multilinear algebra. Wow, it's cool. So now I am learning it with Tim. Thanks Tim!

  • @nadiakacem24
    @nadiakacem24 4 years ago +9

    ⭐️ Course Contents ⭐️
    ⌨️ Module 1: Machine Learning Fundamentals (00:03:25)
    ⌨️ Module 2: Introduction to TensorFlow (00:30:08)
    ⌨️ Module 3: Core Learning Algorithms (01:00:00)
    ⌨️ Module 4: Neural Networks with TensorFlow (02:45:39)
    ⌨️ Module 5: Deep Computer Vision - Convolutional Neural Networks (03:43:10)
    ⌨️ Module 6: Natural Language Processing with RNNs (04:40:44)
    ⌨️ Module 7: Reinforcement Learning with Q-Learning (06:08:00)
    ⌨️ Module 8: Conclusion and Next Steps (06:48:24)

  • @satishrapol3650
    @satishrapol3650 1 year ago +1

    As the video title says, it's a good course for getting familiar with TensorFlow functionality, but it's not so recommended for beginners who want to understand the core concepts of the algorithms.

    • @Shitmeet
      @Shitmeet 1 year ago

      What do you recommend instead for beginners?

  • @MarcelinoSileoni
    @MarcelinoSileoni 2 years ago +2

    Tim, I think you've done a good job of introducing each of the topics you've touched on. I must say that in each topic, especially the later and most complex ones, you have left many gaps to be covered by each of us. For the next video I recommend speaking more slowly and explaining in greater depth the fundamental concepts, the foundations that later serve to understand the practical examples. I also recommend preparing presentations instead of using a basic graphics application. Still, congratulations for the courage to make the video without being an expert in the field.

  • @rahulbhardwaj4568
    @rahulbhardwaj4568 4 years ago +6

    This is pure GOLD!!!!

  • @WahranRai
    @WahranRai 2 years ago +1

    5:17 The 32 in tf.keras.layers.LSTM(32) is not the size of the word embedding! It is the size of the LSTM's output.
    You could have tf.keras.layers.LSTM(64), for example.
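
    For example (a sketch; the sizes are arbitrary, and 250 is an assumed padded sequence length):

    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Embedding(input_dim=10000, output_dim=32),  # 32 = embedding size
        keras.layers.LSTM(64),            # 64 = LSTM output size, independent of the 32 above
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.build(input_shape=(None, 250))  # 250 = padded sequence length (assumed)
    model.summary()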

  • @Jorvanius
    @Jorvanius 4 years ago +7

    Dude, this course is amazing. I've only been through a third of it, but I know that I will watch it completely. Thanks you so much for sharing it

  • @cashmoneyhustler
    @cashmoneyhustler 2 years ago

    Thank you so much for explaining TensorFlow. I built my own GPT-2 text-generating AI that I am going to hook up to a Twitter developer account that tweets AI-generated nonsense at regular intervals. Part of setting that up involved TF configs. I was able to copy-paste my way to success, but after watching your video I have a far greater understanding of what exactly I was changing, so next time I can do it myself. Great work here!

  • @guitarockdude
    @guitarockdude 4 years ago +6

    Great Tutorial!
    Just a heads up, there was a mistake at 3:35:00 - you forgot to reinitialize the "model"!

  • @vierminus
    @vierminus 1 year ago

    Thank you so much! After finishing some high-level AI courses I was searching for a hands-on tutorial, and this course was exactly the right "depth" I was looking for.
    Nice to see that it's not necessary to understand every confusing math formula in depth to get started using AI.

  • @ferozabraham9401
    @ferozabraham9401 3 years ago +6

    Wonderful Job dear. God Bless!

  • @kawsydaisy
    @kawsydaisy 2 years ago +1

    Only 25 mins and already so good! Your videos never disappoint, Tim!

  • @alexg2890
    @alexg2890 4 years ago +9

    3:37:01 Training for one epoch in this case builds on the already-existing model that was created using many epochs. You need to recreate the model to demonstrate this.

    •  4 years ago +1

      I was thinking about this, and how tf he could actually get .9x in just one epoch

  • @michaelmarinos
    @michaelmarinos 3 years ago

    Great video (even if I haven't finished it yet)! The example at 1:00:10 is NOT regression, it is classification. You have 2 classes (survived or not) and you try to classify the passengers. In other words, the result will always be a probability; you cannot use the same methodology to predict age, for example.

  • @ninaddesai5105
    @ninaddesai5105 1 year ago

    This is awesome. I paid for a course online but could not understand lots of things... this video cleared up all the confusing concepts for me. Thanks, mate.

  • @AgentRex42
    @AgentRex42 4 years ago +5

    Great video! It would be cool to have more videos about reinforcement learning.

  • @something451
    @something451 2 months ago +2

    Thanks!

  • @kuravasic
    @kuravasic 4 years ago +7

    OMG dude you're lit. I've just watched all 7 hours, great course!

  • @odunayokomolafe9485
    @odunayokomolafe9485 2 years ago

    Alright! This is the most amazing TensorFlow tutorial I have seen! I can't believe I watched a tutorial for almost 7 hours and was hooked the whole time. Thanks a lot!

  • @ADNANAHMED-eo5xx
    @ADNANAHMED-eo5xx 4 years ago +265

    People : netflix DARK is so confusing
    ML Algorithms : Hold my beer

  • @bigdhav
    @bigdhav 4 years ago +9

    Tim is gonna be a CEO of a tech or education company in the future. What a legend.

  • @enx1214
    @enx1214 2 years ago

    New to NNs and TensorFlow. I had searched and read a lot before. Now I understand the different architectures - RNN vs. CNN vs. a simple NN - and their usage.

  • @nether-mobiletopc4293
    @nether-mobiletopc4293 1 year ago +1

    If you had a problem at 4:20:25, change image.img_to_array to tf.keras.utils.img_to_array().
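
    For example (a sketch; 'my_image.jpg' and the 160x160 size are placeholders):

    import tensorflow as tf

    img = tf.keras.utils.load_img("my_image.jpg", target_size=(160, 160))  # placeholder path
    arr = tf.keras.utils.img_to_array(img)   # replacement for the old image.img_to_array helper
    print(arr.shape)                         # (160, 160, 3)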

  • @abcdxx1059
    @abcdxx1059 4 years ago +9

    There are a lot of tutorials like this already available, but there is much less content about cleaning data or building pipelines; it would be really helpful if you could make tutorials on that.

    • @RoboticusMusic
      @RoboticusMusic 4 years ago +1

      Yep, or managing an automated 1D convnet with attention designed for time-series prediction. This is both a desired topic at the moment, and he could fundamentally demonstrate all the basics of building an automated pipeline where new data comes in, the model updates and self-optimizes, then outputs a prediction, and the process repeats upon completion or arrival of new data. This is something I and everyone else getting into time-series forecasting ML want to see, and it is not too ungodly complex like some other automated ML processes.

    • @abcdxx1059
      @abcdxx1059 4 years ago

      @@RoboticusMusic All I am saying is that no one wants to make videos on the complex stuff; almost all the content in this video has like 50 similar videos or blog posts already.

    • @RoboticusMusic
      @RoboticusMusic 4 years ago +1

      @@abcdxx1059Very true. In the financial time series forecasting ML sphere I've only met one guy (Tom Starke) who has said anything rational. Everyone else is more or less Siraj. ML is not inherently bad but the industry is a huge elaborate scam like cryptocurrency. I haven't seen anyone build better ML models than the hand tuned hands-on real time manual adjustment algorithms I've built. I only need ML as icing on the cake to extract any last edge, and nobody seems to understand the basic principles of building a predictive model. For example none of the tutorials explain one-shot methods. That means everything out there overfits and it worse than useless! If a model can't learn in one episode it fundamentally is performing a very expensive database memorization hallucination.

  • @chhhhh2768
    @chhhhh2768 2 years ago +1

    Good intro, but hands down PyTorch > TensorFlow. I worked with TensorFlow, moved to PyTorch, and never looked back. It's just easier to develop, test, and explore, and to know what you're actually doing.
    PyTorch almost feels like you're writing regular Python, while TensorFlow feels like you are rearranging your living room but have to do it through a locked door, via the keyhole, with a long wire.
    Serving is a bit worse, but not by much, and it has improved.

  • @evanhagen7084
    @evanhagen7084 4 years ago +7

    Every video I've watched on Machine learning assumes we're in 1st grade and a college math major at the same time.
    "A tensor is a generalization of multidimensional vectors or matrices blah blah blah"
    5 mins later
    "Slope is rise over run."

    • @sangramjitchakraborty7845
      @sangramjitchakraborty7845 4 years ago +1

      It's easier to explain what slope is than what a tensor is. It boggles your mind. A vector already represents multiple dimensions, and a tensor is a multidimensional vector? That's pretty much impossible to visualise and get an intuition for. Slope, on the other hand, is rise over run.

    •  4 years ago

      I get what you mean, either you end up in some multivariable calculus explanaition or a lame ass "just a weighted sum bro aka Siraj Raval"

  • @aadam7459
    @aadam7459 3 years ago +1

    Just finished the entire video, your explanations were great and I got all the examples to work on my local machine, so kudos it was an amazing course :)

  • @thecodingkid9755
    @thecodingkid9755 3 years ago +7

    great course even 1 year later

  • @mateolarralde1649
    @mateolarralde1649 2 years ago +2

    -Tim writing “enviornment”
    -Also Tim: Yeah I think I spelled that correctly

  • @Xarderrr
    @Xarderrr 4 years ago +6

    What timing! I've just finished a theoretical ML course and it's time for some practice :D

  • @sureshkumar-kx2xz
    @sureshkumar-kx2xz 2 years ago

    This course is not just amazing, it is a great course -- very informative, detailed, and easy to follow. I love the way this cool guy explains things, even for non-computer scientists. Great work!

  • @brokenvectors
    @brokenvectors 4 years ago +19

    46:55
    don't mind me, just reminding myself

  • @jdcrunchman999
    @jdcrunchman999 1 year ago

    You should NOT expect us to "look up" some of these parameters; instead you should explain them, or at least give a reference for where we should look them up. I'm referring to the part of the video that ended around 3:31. Once you started explaining the math, I started to understand, but I know your viewers haven't all taken 2nd-year calculus. Still, so far this is the best video out there that explains TensorFlow.

  • @yahyafati
    @yahyafati 4 years ago +7

    Wow. Did you just release this? Lucky me

  • @Someone2233
    @Someone2233 2 years ago

    Amazing!!! I did a Udemy course but this is a lot better!! Btw, anyone here who watched the whole 7 hours of video in one go or in one day?

  • @marcel4366
    @marcel4366 2 years ago +3

    Great tutorial and also great explanations! Thanks for that.
    Just one remark: you actually use TensorFlow 2.1, but since you use a lot of tf.compat.v1 (e.g. for Session), this is more of a TensorFlow 1.x-2.1 tutorial, as Sessions are not part of the official workflow anymore (as can be guessed from ".compat" -> just for compatibility).

    • @juliosouto9659
      @juliosouto9659 2 years ago

      Does tf.print(tensor) replace the session eval?

  • @yairfox2153
    @yairfox2153 2 years ago +1

    the best course to get into machine learning!
    thank you so much!