When M1 DESTROYS a RTX card for Machine Learning | MacBook Pro vs Dell XPS 15

  • Published 22. 06. 2022
  • Testing the M1 Max GPU with a machine learning training session and comparing it to an NVIDIA RTX 3050 Ti and RTX 3070. Cases where Apple Silicon might be better than a discrete graphics card in a laptop.
    Get TG Pro: www.tunabellysoftware.com/tgp... (affiliate)
    ▶️My recent tests of M1 Pro/Max MacBooks for Developers - • M1 Pro/Max
    ▶️ Is M1 Ultra enough for MACHINE LEARNING? vs RTX 3080ti - • Is M1 Ultra enough for...
    ▶️ GPU battle with Tensorflow and Apple Silicon - • GPU battle with Tensor...
    ▶️ Python Environment setup on Apple Silicon - • python environment set...
    ▶️ Apple M1 JavaScript Development Environment Setup - • M1 MacBook JavaScript ...
    ▶️ Apple M1 and VSCode Performance - • Apple M1 and VSCode Pe...
    #m1 #m1max #ml #pytorch #rtx3070 #macbookpro #intel12thgen #rtx3050ti #dellxps15
    ML code:
    sebastianraschka.com/blog/202...
    💻NativeScript training courses - nativescripting.com
    (Take 15% off any premium NativeScript course by using the coupon code YT2020)
    👕👚iScriptNative Gear - nuvio.us/isn
    - - - - - - - - -
    ❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
    Click here to subscribe: / alexanderziskind
    - - - - - - - - -
    🏫 FREE COURSES
    NativeScript Core Getting Started Guide (Free Course) - nativescripting.com/course/na...
    NativeScript with Angular Getting Started Guide (Free Course) - nativescripting.com/course/na...
    Upgrading Cordova Applications to NativeScript (Free Course) - nativescripting.com/course/up...
    - - - - - - - - -
    📱LET'S CONNECT ON SOCIAL MEDIA
    ALEX ON TWITTER: / digitalix
    NATIVESCRIPTING ON TWITTER: / nativescripting
  • Science & Technology

Comments • 198

  • @AZisk
    @AZisk  2 years ago +18

    How to properly say NVIDIA's RTX 3050 Ti (for this video I chose the second pronunciation): czcams.com/video/XhvspQSAKiU/video.html

    • @remigoldbach9608
      @remigoldbach9608 2 years ago +2

      I'm used to hearing "T I".

    • @DocuFlow
      @DocuFlow 11 months ago

      Have you considered redoing this using Llama.cpp? You may have already and I missed it. But by golly that would be interesting, especially the model sizes that could be run with lots of shared RAM. On my AMD3960/2090 I'm limited to the 7B model, quantized to 4-bit.
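      For reference, a minimal sketch of that kind of run using the llama-cpp-python bindings (the model path, context size, and offload settings below are hypothetical placeholders, not from the video):

          # pip install llama-cpp-python  (built with Metal on Apple Silicon, CUDA on NVIDIA)
          from llama_cpp import Llama

          llm = Llama(
              model_path="models/llama-7b.Q4_K_M.gguf",  # hypothetical 4-bit quantized model file
              n_ctx=2048,                                # context window
              n_gpu_layers=-1,                           # offload all layers to the GPU if they fit
          )

          out = llm("Explain unified memory in one sentence.", max_tokens=64)
          print(out["choices"][0]["text"])

      On a 64 GB unified-memory machine the whole quantized model can sit in memory the GPU can see, which is the point the comment is making.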

  • @hugobarros6095
    @hugobarros6095 1 year ago +38

    I would still like to see a speed comparison with a lower batch size, because memory is just one aspect of a GPU. If it is still slower, then it's not better.

  • @PedroTeixeira
    @PedroTeixeira 2 years ago +6

    Would love to see the other longer ML comparisons, thank you!

  • @georgioszampoukis1966
    @georgioszampoukis1966 1 year ago +81

    Having access to 64 GB of GPU memory is just insane at this price. Theoretically you can even train large GAN models on this. Sure, it will take a very long time, but the fact that you can still do it at that price and with this efficiency is just madness. The unified approach is brilliant, and it seems that both Intel and AMD are slowly moving down this path.

    • @p.z.6712
      @p.z.6712 1 year ago +5

      I agree with your point. Laptops should primarily be used for local development and functionality testing. Running fewer than 5 epochs on a Mac serves this purpose well. If it passes the functionality test, we can then push the model to remote servers for long training runs. In contrast, most NVIDIA RTX graphics cards have extremely limited VRAM, so you can only test small models on them, though they are brutally fast.

    • @mrinmoybanik5598
      @mrinmoybanik5598 1 year ago +4

      The M1 Max with the 32-core GPU has a whopping 10.4 TFLOPS of computing power, which is in the same order of magnitude as a mobile RTX 3070 Ti with 17.8 TFLOPS. It's insane how Apple is progressing in its efficiency. I hope the upcoming M2 Max will be able to compete with the mighty NVIDIA cards in terms of raw computing power.😮

    • @trubetskoy4395
      @trubetskoy4395 1 year ago +1

      @@p.z.6712 I can run yolov8x inference on a mobile 3060 both plugged in and unplugged, and both of those times will be faster than Apple.

    • @trubetskoy4395
      @trubetskoy4395 1 year ago +6

      @@mrinmoybanik5598 How is 10.4 the same as 17.8? It is on par with a lower-end 3060, which costs a third of what the M1 Max does.

    • @mrinmoybanik5598
      @mrinmoybanik5598 1 year ago +1

      @TRUBETSKOY I said they are of the same order of magnitude, i.e., they both can perform 10^13 times some constant number of floating point operations per second. Sure, that constant is 1.04 in the case of the M1 Max and 1.78 in the case of the 3070 Ti mobile. And looking at the current pace of development this is just a generation's worth of gap; the RTX 2070 Ti mobile also had a similar 10.7 TFLOPS of raw power in its highest-TGP variant.

  • @dotinsideacircle
    @dotinsideacircle 1 year ago +10

    Are TF and PT now optimized for silicon or still sketchy? If not, any tentative time frame? What about external Nvidia GPU solutions for Macs? Is that possible?
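    Both frameworks do ship Apple Silicon backends now (TensorFlow via the tensorflow-metal plugin, PyTorch via MPS). A quick hedged check for the PyTorch side, assuming PyTorch 1.12+ on macOS 12.3+:

        import torch

        # Was this build compiled with the Metal Performance Shaders backend,
        # and can this machine actually use it?
        print("MPS built:    ", torch.backends.mps.is_built())
        print("MPS available:", torch.backends.mps.is_available())

    External NVIDIA eGPUs are a separate question; this check only covers the built-in GPU via Metal.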

  • @gabrigamer00skyrim
    @gabrigamer00skyrim 1 year ago +9

    It would be interesting to see the performance with limited batch size on the RTX GPUs versus the M1 max.

  • @bruno_master9844
    @bruno_master9844 2 years ago +2

    Hi, if the test you ran at 3:12 was CPU-based, does that mean that the M1 Pro with the 10-core CPU would finish at the same time as the M1 Max?

  • @TheDevarshiShah
    @TheDevarshiShah 1 year ago +2

    Is it a single-threaded benchmark? If so, will the results be more or less the same on other M1 chips, such as the M1 or M1 Pro?

  • @planetnicky11
    @planetnicky11 2 years ago +2

    Yes, please make a video on that!! Can't wait to install PyTorch with Metal. :)

  • @noone-dc4uh
    @noone-dc4uh 2 years ago +5

    But in reality, every production-grade ML task is done in a distributed manner in the cloud using Spark, because it's impossible to fit real-time data on a single computer's storage. So it doesn't matter which computer you have locally, Apple or non-Apple; it is only used for initial development and prototypes.

  • @lehattori
    @lehattori 1 year ago +2

    Great videos, they help me a lot! Thanks!

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +4

    Nice. Some of the PyTorch Lightning code doesn't seem to run, but the other benchmarks do. I'm on the 16 GB Mac mini, and CIFAR-10 runs. I'm up to just under 16 GB being used, and it's not grabbing a bunch of swap. It may take forever to finish, but I think it will get to the end. I'll leave it running for half an hour or so. Two years ago I bought a K80 because I was running out of memory, but the power draw is significant, and mostly I use models rather than train them, so I suspect this M1 will be good enough.

  • @wynegs.rhuntar8859
    @wynegs.rhuntar8859 2 years ago +6

    Now shared memory for the GPU makes sense, good comment ;)

  • @ThomazMartinez
    @ThomazMartinez 2 years ago +4

    I love that on PCs you are testing on Linux rather than Windows; please continue with more Linux vs macOS tests.

  • @youneslaidoudi8214
    @youneslaidoudi8214 1 year ago +2

    I trained VGG16 on a fully loaded MacBook Pro 14" 2023 (M2 Max / 96 GB of unified memory) in 16.65 min total time.

  •  2 years ago +12

    Those M1 Max laptops are beasts.

  • @inwedavid6919
    @inwedavid6919 2 years ago +1

    Of course it is better on the $5,000 top-spec M1 Max, but will it do the same on the base model?

  • @motivsto
    @motivsto 3 months ago

    Should I buy a Mac mini M2 or a PC? Which one is better?

  • @joseluisvalerio4006
    @joseluisvalerio4006 1 year ago

    Wow, interesting to test on my M1 Pro. Thanks a lot.

  • @somebrains5431
    @somebrains5431 2 years ago +7

    It's fine for learning, but the VRAM limitations when you start dealing with production-quality algorithms will make you offload your workloads to something that has multiple A100s. Training time on rigs with dual 3090s is worth a look to see how GPU RAM gets loaded.

  • @MachielGroeneveld
    @MachielGroeneveld 2 years ago +1

    Could you get the actual GPU memory usage? Maybe something like iStats

  • @47shashank47
    @47shashank47 2 years ago +29

    Thanks a lot Alex for your videos... because of your videos I purchased an M1-based MacBook, which has made my work really smooth. Now I can use VS Code with many other useful Chrome extensions simultaneously, making my web development work much easier. I think Apple should keep you on their marketing team 😀😀. You are doing better than their whole expensive marketing campaign. I had no reason to purchase a MacBook, then I saw your videos, which really helped me out.

  • @gokul.s49ibcomgs22
    @gokul.s49ibcomgs22 1 year ago

    Which laptop should I prefer for machine learning, DL, and data science: MacBook Air M2 or ROG Strix G15!?

  • @ptruskovsky
    @ptruskovsky 2 years ago +1

    Where is the Parallels with Windows on ARM and ARM Visual Studio video, man? Waiting for it!

  • @slothgirl2022
    @slothgirl2022 2 years ago +6

    Is there any word on whether PyTorch will ever take advantage of the M-series' Neural Engine? That might well boost the numbers further.

    • @AZisk
      @AZisk  2 years ago +3

      PyTorch is still new to this. Perhaps one day it will be optimized, but for now I suppose we should be happy it works.

    • @FrankBao
      @FrankBao 2 years ago +2

      I don't see it happening soon, as the official TensorFlow implementation doesn't support it either. I've found the ANE is used by some apps for inference tasks. I'm hoping for XLA support.

  • @MHamzaMughal
    @MHamzaMughal 2 years ago +6

    Loved the video! Please compare the RTX 3080 Ti mobile with the M1 Max or M1 Pro if you can. That would be a good comparison, considering those RTX cards have more memory.

    • @emeukal7683
      @emeukal7683 1 year ago

      No, it's a bad comparison because Apple can't compete then. He is there to get clicks, typical influencer. Watched some videos now to be sure. If you are serious about machine learning then you need the best. The best is a desktop with a 4090 and a Threadripper, for example. Even the biggest Mac, 29-whatever-core, can't compete. They can compete well in the low-budget segment and on laptops. So buy whatever, but don't buy a Mac for your scientific work.

  • @kevinsasso1405
    @kevinsasso1405 1 year ago +4

    This isn't actually the case if your data loaders are memory intensive (audio loading, etc.). Ultimately you'll want your own pool of dedicated RAM so that your CPU isn't bottlenecked.
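    A hedged sketch of the knobs this is about: in PyTorch, the DataLoader can move loading into worker processes and use pinned host memory so CPU-side decoding doesn't starve the GPU (the dataset and sizes below are placeholders):

        import torch
        from torch.utils.data import DataLoader, TensorDataset

        # Placeholder dataset; in practice this would be an audio or image dataset.
        dataset = TensorDataset(torch.randn(10_000, 3, 32, 32),
                                torch.randint(0, 10, (10_000,)))

        loader = DataLoader(
            dataset,
            batch_size=64,
            shuffle=True,
            num_workers=4,    # decode in separate processes so the training loop isn't blocked
            pin_memory=True,  # page-locked host RAM speeds up host-to-GPU copies on CUDA
        )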

  • @mohamedouassimbourzama5524

    What about the MacBook Air M2 8/256 for machine learning? And which is faster, the Air M2 or a T4 GPU on Google Colab? Thanks.

  • @keancabigao1461
    @keancabigao1461 2 years ago +46

    Great video! Currently I'm actually quite interested in how well the M1 (base M1)/M2 chip would perform on basic machine learning tasks implemented in R.

    • @datmesay
      @datmesay 2 years ago +2

      I'm wondering what the spread is for the same exercise between the M1 and M2 MacBook Air.

    • @terrordisco2944
      @terrordisco2944 1 year ago +4

      I don't remember the GPU core counts of the M1 vs the M1 Max, but it's a multiple, 8 vs 32 or something, so GPU-based performance varies wildly. And extra memory too: the OS and the programs occupy the same amount of memory, so if that's 10 GB, the difference between 6 GB spare and 54 GB spare is tremendous. There's much less difference between the CPUs. But the base M2 is a cheap computer; it will compare favorably with other cheap/mid-price computers.
      So if your tasks are CPU computation and not wildly memory intensive, the difference is little, and the difference is greatest… well, you're doing stuff in R. You can figure it out :)

    • @keancabigao1461
      @keancabigao1461 1 year ago

      @@terrordisco2944 Huge thanks for this!

    • @tonglu3699
      @tonglu3699 1 year ago +2

      To my knowledge, R cannot do multi-core computing natively. There are R packages out there that allow you to manually manage your computing tasks and send them to different CPU cores. I've never done any GPU acceleration in R, so cannot really speak to that. I switched to an M1 machine earlier this year and noticed a significant performance improvement in R, but I'm pretty sure that's because M1 has great single core performance compared to my old machine. It does allow me to leave R running in the background and multi-task on other things with abandon, knowing how much computing capacity the CPU still has.

  • @cfaf-ct9xl
    @cfaf-ct9xl 2 years ago +1

    That's an amazing result for deep learning on the M1. But I think no DL engineers will use laptops... I do expect the new Mac Pro to become a DL server, though.

  • @awumsuri
    @awumsuri 1 year ago

    Yes, please make the video on your project.

  • @dorianhill2480
    @dorianhill2480 2 years ago +2

    I'd run the training in the cloud. Frees up the laptop. Also means you don't need such an expensive laptop.

  • @mikekaylor1226
    @mikekaylor1226 2 years ago

    Great stuff!

  • @stevenhe3462
    @stevenhe3462 2 years ago +2

    If PyTorch could use those Neural Engines, it would be much faster.
    For now, you can only do that in Swift, I guess…

  • @chen0rama
    @chen0rama 2 years ago +3

    Love your videos.

  • @edmondhung6097
    @edmondhung6097 2 years ago

    Sure. Love those real project videos

  • @TheCaesarChris
    @TheCaesarChris 8 hours ago

    Potentially dumb question, but why didn't you use an Intel/AMD laptop with a 4070/4080/4090 and 32/64 GB of RAM?

  • @jalexromero
    @jalexromero 2 years ago +1

    Would be great to see a video on clustering M1 or M2 Mac minis to crunch very large CNN projects... Great videos! Looking forward to the PyTorch one!

  • @hhuseyinbaykal
    @hhuseyinbaykal 7 months ago

    We need an updated version of this video; please do 4080/4090 laptops vs the M3 series.

  • @hozayfakhleef1223
    @hozayfakhleef1223 10 months ago +58

    This comparison doesn't even make sense. You are comparing a $5,000 laptop to two laptops that cost only a fraction of what this 64 GB RAM monster costs.

    • @Jonathan-ff8tl
      @Jonathan-ff8tl 6 months ago +2

      I'm seeing them used for $2,000. But you're right, you could definitely get a better machine for AI under $2k. Also consider that this MBP is a great portable machine for everything else too.

    • @MalamIbnMalam
      @MalamIbnMalam 3 months ago

      The new ASUS Zephyrus G14 with an RTX 4070 comes to mind.

    • @Marc-mp6lf
      @Marc-mp6lf 2 months ago +3

      I agree, but the GPU and CPU sharing memory is what he was getting at.

    • @WildSenpai
      @WildSenpai 19 days ago

      Exactly what I said, it's like a tank vs a pistol.

    • @shahmeercr
      @shahmeercr 8 days ago +1

      @@WildSenpai lol

  • @csmac3144a
    @csmac3144a 2 years ago +3

    I have multiple machines for different purposes. Two things I do absolutely require a Mac, so it's not even a question for me: iOS development with Xcode, and Final Cut Pro.

  • @thedownwardmachine
    @thedownwardmachine 2 years ago +9

    I'm interested in seeing your personal project benchmarked across systems! But some friendly advice: I think you should be consistent with your use of significant digits across measurements. 0.1m doesn't mean the same thing as 0.10m.

    • @1nspa
      @1nspa 2 years ago

      They are both the same

    • @arunvignesh7015
      @arunvignesh7015 2 years ago +5

      @@1nspa No they aren't. 0.1 can mean anywhere from 0 to 0.2 with a 0.1-unit tolerance, while 0.10 means 0.09 to 0.11 with a 0.01-unit tolerance. If your tolerance is 0.01 you wouldn't represent that case with 0.1; you'd use 0.10 instead. This has caused so much confusion and so many errors in engineering over the years, which is exactly why we came up with measurement standards.

  • @bikidas2718
    @bikidas2718 1 year ago

    Which MacBook Pro is this?

  • @SinistralEpoch
    @SinistralEpoch 1 year ago +1

    Dumb question, but did you notice that you'd typed "cudo" instead of "cuda" in the PyTorch test?

  • @trongnguyenkim3617
    @trongnguyenkim3617 10 months ago

    Please do more comparisons between multiple graphics cards (3060 12 GB, 3070, 4060) and the Apple M1 and M2!
    We do need more information on this comparison, like:
    total time to process a test, total computation within a time span,
    average speed,
    pros and cons.
    Thanks so much, sir!

  • @tsizzle
    @tsizzle 3 months ago

    But don't you need CUDA to use most of the ML Python libraries? In that respect, don't you have to use NVIDIA hardware? What if you're mostly working from the DevOps perspective, trying to set up the proper Conda and pip environment, and simply want to test functionality on simple/smaller datasets and small training sets, and then move your code to the cloud later to run the full training and inference on AWS NVIDIA A100 or DGX A100 resources?
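    The usual answer is to keep the training script device-agnostic, so the same code runs on MPS locally and on CUDA in the cloud; a minimal sketch (the model and batch below are placeholders):

        import torch
        import torch.nn as nn

        # Prefer CUDA (cloud A100), then Apple's MPS backend, then plain CPU.
        device = (
            torch.device("cuda") if torch.cuda.is_available()
            else torch.device("mps") if torch.backends.mps.is_available()
            else torch.device("cpu")
        )

        model = nn.Linear(128, 10).to(device)     # placeholder model
        x = torch.randn(32, 128, device=device)   # placeholder batch
        print(model(x).shape, "on", device)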

  • @MrRhee16
    @MrRhee16 2 years ago +1

    Good vid. Wanna see whether PyTorch on the M1 Max would be faster than Windows machines with an NVIDIA GPU in a similar price range.

  • @zatchbell8112
    @zatchbell8112 10 months ago

    Can you show how to do basic ML on an ASUS G15 Advantage Edition with Windows and ROCm, please?

  • @nasirusanigaladima
    @nasirusanigaladima 1 year ago +3

    I love your machine learning Mac videos

  • @kevinmesto608
    @kevinmesto608 2 years ago

    Can't wait for macOS Ventura for the multitasking benefit of Stage Manager.

  • @AliZamaniam
    @AliZamaniam 2 years ago +3

    I'm really torn between the 14 and 16 MacBook Pro. Also between Windows and macOS. Plus between silver and space gray 😔🤔

    • @AZisk
      @AZisk  2 years ago +16

      I'm confused sometimes too, sometimes between the kitchen and the bedroom

    • @AliZamaniam
      @AliZamaniam 2 years ago +2

      @@AZisk 😂😂👌🏻

  • @akibjawad7447
    @akibjawad7447 1 year ago

    I don't understand. How is this happening? How 64 GB?

  • @datmesay
    @datmesay 2 years ago

    Just for clarification, 0.1 of a minute is 6 seconds, so the Mac found a solution in 6 seconds?

  • @FOOTYAS
    @FOOTYAS 2 years ago

    I died when it took a screenshot 😭

  • @lalpremi
    @lalpremi 1 year ago

    Thanks 🙂

  • @Ricardo_B.M.
    @Ricardo_B.M. 1 year ago +1

    Hi Alex, I want to get a laptop with an i7-1260P, 64 GB of RAM, and Intel Iris Xe; would it work fine for machine learning?

    • @HDRPC
      @HDRPC 1 year ago

      Wait for Meteor Lake laptops (coming in September or October 2023), as they have 50% better efficiency than 13th-gen laptops and a dedicated VPU (AI accelerator), which will be super fast for AI and machine learning. They will also support 7400 MHz LPDDR5 RAM up to 96 GB,
      and an iGPU two times faster than the best 13th-gen Intel iGPU.

  • @woolfel
    @woolfel 2 years ago +20

    CIFAR-10 is considered a small test, but for a YouTube video it's large. Truly large models have datasets of over 10 million images :)
    On an NVIDIA video card with 8 GB or less, you really have to keep the batch sizes small to train on the CIFAR-10 dataset. With the CIFAR-100 dataset, you have to decrease the batch size to avoid running out of memory. You can also change your model in TensorFlow to use mixed precision.
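    For reference, the mixed-precision switch mentioned above is a one-liner in Keras; a hedged sketch (model layers are placeholders, TF 2.4+ assumed):

        import tensorflow as tf

        # Compute in float16 but keep variables in float32; this roughly halves
        # activation memory, which is what lets batch sizes stay larger on 8 GB cards.
        tf.keras.mixed_precision.set_global_policy("mixed_float16")

        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
            tf.keras.layers.GlobalAveragePooling2D(),
            # Keep the final softmax in float32 for numerical stability.
            tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
        ])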

  • @bipinkoirala2962
    @bipinkoirala2962 2 years ago

    I'll trim down the batch size and use buffers, but I still refuse to get a MacBook.

  • @droidtang
    @droidtang 2 years ago +4

    The RAM size is still the bottleneck though. Some of my projects easily require far over 64 GB for data wrangling alone, even before any training. But yeah, normally you just don't do this on a laptop, except maybe on mobile workstations like the ThinkPad P series, where you can have a Xeon CPU with up to 128 GB of RAM and an NVIDIA RTX A5000 GPU.

    • @laden6675
      @laden6675 2 years ago

      Can't wait for the Mac Pro on Apple Silicon. Imagine, 2 TB of memory for the GPU.

    • @droidtang
      @droidtang 2 years ago +1

      @Laden Yes, could be interesting. @Alex Ziskind More interesting would be a comparison of the M1 Max GPU vs the RTX A5000 on larger data sets, where the GPU is more efficient and faster than the CPU.

    • @woolfel
      @woolfel 2 years ago

      I'm curious how hot that ThinkPad gets running the A5000 at full load :) I'm guessing it's enough to fry up some eggs for lunch.

    • @davout5775
      @davout5775 2 years ago

      It could be very interesting if the SSD can fill that purpose. MacBook Pros with the M1 Max use some of the fastest SSDs on the market, and I believe they could be used for such operations.

    • @user-wj1ru2xn6q
      @user-wj1ru2xn6q 11 months ago

      @@laden6675 sadly didn't happen.

  • @mikapeltokorpi7671
    @mikapeltokorpi7671 2 years ago

    64 GB of shared memory is a huge benefit, but you should redesign your AI code to handle smaller lots.

  • @James-hb8qu
    @James-hb8qu 1 year ago +2

    First, a very, very short test that is basically measuring setup time. The shared-memory system doesn't have the serial delay of loading the GPU, so it comes out ahead. Then you swing to the other extreme to find a test that will only run with substantial memory. That seems... engineered to give a result. Honestly, it appears less than upfront.
    How about testing some real-world benchmarks that run on RTX machines with 8-12 GB and comparing the performance to the M1? If the M1 comes out ahead, then cool.

  • @blackpepper2610
    @blackpepper2610 2 years ago

    Do a test of the Neural Engine on the M1 chip next, please.

  • @depescrystalline3392
    @depescrystalline3392 2 years ago

    Can the same test be run using the integrated GPU on the i9-12900? Would be interesting to see since the iGPU also shares system memory, so it might not have the same restrictions as a "discrete" GPU.

    • @RunForPeace-hk1cu
      @RunForPeace-hk1cu 2 years ago

      No iGPU is gonna use all of the RAM in your system. That's not how the iGPU architecture works on Intel.

    • @brightboxstudio
      @brightboxstudio 2 years ago

      I'm not sure if I found the best source, because not many mention how much system memory integrated graphics can use on 12th-gen Intel Core, but one source says up to 4 GB. If your Intel Mac or PC is more than a few years old, integrated graphics are limited to grabbing 1.5 GB of system memory, and only if more than a certain amount of RAM is installed.
      The difference with Apple unified memory is that it is completely dynamic; there are no walls. Any system memory not used by macOS, apps, and background processes is available to graphics, so if your Apple Silicon Mac is using 30 GB out of 64 GB of system memory, the graphics can use all the rest if it wants, which is what Alex showed.

    • @davout5775
      @davout5775 2 years ago

      The problem with that is it's extremely limited, to about 4 GB at best. Furthermore, the unified memory on M1 has significantly more channels and there is no such limitation; the limit is basically your total RAM, which can be all 64 GB if you have that configuration. You also don't have to worry as much about RAM because MacBooks use swap, effectively extending RAM from the SSD, which on the M1 Max is between 5 and 7+ GB/s.

  • @inwedavid6919
    @inwedavid6919 1 year ago

    RAM is relevant on the M1 Max if you buy it with plenty of RAM, as it is shared. More RAM makes the price explode.

  • @arhanahmed8123
    @arhanahmed8123 1 year ago +1

    Does it mean that the MacBook M1 would be better for machine learning tasks? Should I buy a MacBook over a Dell XPS for ML and coding tasks?

    • @djoanna9606
      @djoanna9606 1 year ago +1

      Same question here. Any advice would be appreciated! Thanks!

    • @aniketainapur3315
      @aniketainapur3315 9 months ago

      @@djoanna9606 Did you get any answer? I'm still confused.

    • @matus9787
      @matus9787 2 months ago

      For CPU tasks, of course; for GPU tasks, possibly not.

  • @AndriiKuftachov
    @AndriiKuftachov 2 years ago +1

    Who will do real ML tasks for business on a laptop?

  •  2 years ago +8

    Where is the Schwarzenegger?!?!

    • @AZisk
      @AZisk  2 years ago +13

      He'll be back

    • @ooppitz1
      @ooppitz1 2 years ago

      @@AZisk 😂👍🏻

    • @jay8412
      @jay8412 2 years ago +1

      Great video as always, would love to see more such tests relevant to software developers on Apple Silicon.

  • @aimanyounis8387
    @aimanyounis8387 1 year ago +1

    I tried to fine-tune a pretrained model using the MPS device, but I realised that training is faster on the CPU than on MPS, which doesn't seem to make sense to me.

    • @vinayak1998th
      @vinayak1998th 6 months ago

      There are a bunch of odd things here. Especially the 3050 Ti outperforming a 3070.

  • @muntakim.data.scientist
    @muntakim.data.scientist 2 years ago +9

    Conclusion: I'm gonna buy an M1 Max 🥴

  • @paraggupta3099
    @paraggupta3099 2 years ago +1

    It would be great to know in which tests the M1 Max beats the i9 and in which the i9 beats the M1 Max,
    so please do make a video on that.

  • @wynegs.rhuntar8859
    @wynegs.rhuntar8859 2 years ago +1

    The Ziskind-net AI, soon, xD

  • @EdwardFlores
    @EdwardFlores 3 months ago

    With Apple you get a lot more RAM right away... For LLMs, Apple machines are better than any AMD/NVIDIA solution for home computing.
    For some of the things I do I need more than 40 GB of RAM... there is no video card I can buy that gives me that.

  • @manikmd2888
    @manikmd2888 2 years ago

    Machine learning doesn't require an NVIDIA RTX or AMD Radeon. You only need statistics, an R book for example, and a desktop/laptop.

  • @whyimustusemyrealname3801

    m1 vs desktop GPU pls

  • @MarkTrudgeonRulez
    @MarkTrudgeonRulez 2 years ago

    To be honest, RISC vs CISC is no comparison. ARM is a RISC-based CPU; look back to the Archimedes, same CPU family. We have gone through similar scenarios before, except this time it is multi-core architecture. Who knows: today RISC is winning, tomorrow CISC will win... again, who knows. Disclosure: I ordered an M1 Pro to check it out, and yes, I had the Apple IIe as my first computer, and no, I'm not an Apple fanboy!!! My favorite computer by far was an Amiga!!!

  • @PsycosisIncarnated
    @PsycosisIncarnated 2 years ago +1

    Why won't Apple make some durable damn laptops with loads of ports??
    I just want a ThinkPad with a roll cage, the M1 chip, and Linux, ffs :((((((

  • @arnauddebroissia8964
    @arnauddebroissia8964 1 year ago +1

    So you used an incorrect batch size; nice story, but it means nothing. The important metric is the number of samples per second... You can have a batch size of 12: if you do 10 batches a second, it will be better than a batch size of 64 doing 1 batch a second.
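    A rough way to measure what the comment describes, samples per second rather than wall-clock per run, with batch size as the variable under test (placeholder model; timings are approximate since this sketch doesn't explicitly synchronize the device):

        import time
        import torch
        import torch.nn as nn

        device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)  # placeholder

        def samples_per_second(batch_size: int, steps: int = 50) -> float:
            x = torch.randn(batch_size, 3, 32, 32, device=device)
            start = time.perf_counter()
            for _ in range(steps):
                model(x).sum().backward()  # forward + backward; gradients accumulate harmlessly
            return batch_size * steps / (time.perf_counter() - start)

        for bs in (12, 64, 256):
            print(bs, round(samples_per_second(bs)))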

  • @angrysob7962
    @angrysob7962 2 years ago +2

    I'm a bit disappointed that The Schwarzenegger did not make an appearance.

    • @AZisk
      @AZisk  2 years ago +2

      he’s on vacation. he’ll be back

  • @davidtindell950
    @davidtindell950 2 years ago +2

    Thank you - yet again!
    ….

    • @davidtindell950
      @davidtindell950 2 years ago +1

      It is impressive what a difference the integrated M1 memory makes vs. mobile NVIDIA laptop components.

  • @Tech_Publica
    @Tech_Publica 1 year ago

    If the MacBook Pro is that good for ML, then why did you say that normally you use the Asus?

    • @AZisk
      @AZisk  1 year ago

      in what circumstances is key here - that’s what the vid is about

  • @underatedgamer9939
    @underatedgamer9939 2 years ago +2

    The RTX 3050 is not 90 watts; the 90-watt TGP 3050 is almost 2x better.

  • @2dapoint424
    @2dapoint424 2 years ago

    Make a video using the M1 and run your project.

  • @prakhars962
    @prakhars962 10 months ago

    RTX GPUs are designed for gaming, not machine learning. Of course it will be slower. It's cool that they baked the RAM in very close to both the CPU and GPU.

    • @DennisBolanos
      @DennisBolanos 8 months ago +1

      I think it depends on which RTX card it is. To my knowledge, RTX GeForce cards are meant for gaming while RTX Quadro cards are meant for professional use.

  • @mi7chy
    @mi7chy 1 year ago +1

    Isn't that just demonstrating poor coding? ML workloads like Stable Diffusion are significantly faster on NVIDIA, and even with chunking larger generated images to fit within VRAM it's still faster.

  • @javascriptes
    @javascriptes 5 months ago

    I miss the Schwarzenegger 😂

  • @MaxwellHay
    @MaxwellHay 1 year ago +1

    $4,000 vs $1,500??? How about a 3080 laptop?

  • @WhatsInTheName.
    @WhatsInTheName. 9 months ago

    Better to buy a Windows laptop for everything other than ML, as it's faster, i.e. programming, virtual machines, Docker containers, non-Apple video-editing software, and gaming.
    And for ML models, run those in the cloud at a fraction of the cost.

  • @insenjojo1839
    @insenjojo1839 1 year ago +1

    Thank you for pronouncing SILICON and not SILIKEN like most reviewers do.

  • @WildSenpai
    @WildSenpai 19 days ago

    But you do understand that those Windows ones are cheaper; for 64 GB of RAM in an Apple I think I would have to sell my house.

  • @puticvrtic
    @puticvrtic 2 years ago +1

    Schwarzenegger has a leg day unfortunately

  • @iCore7Gaming
    @iCore7Gaming 1 year ago

    But who is really going to be doing machine learning "on the go"? If you were, you could just remote into a server that had the correct hardware anyway.

  • @phucnguyen0110
    @phucnguyen0110 2 years ago +2

    It's called "3050 Ti-ai" not "3050 Tie" Alex :D

  • @ranam
    @ranam 1 year ago

    If apples could help Newton (i.e. a natural scientist) discover gravity, why can't an Apple computer help a data scientist do machine learning properly?

  • @rishikeshdubey8823
    @rishikeshdubey8823 7 months ago

    Why not just buy a Windows laptop and then train models on an external GPU?

  • @eyeshezzy
    @eyeshezzy 2 years ago +1

    You look much more cheerful

  • @DavidGillemo
    @DavidGillemo 1 year ago +2

    Feels weird that a 3070 wouldn't beat the 3050 Ti, and doesn't the 3070 have 8 GB of VRAM?

    •  1 year ago +1

      Yes, the 3070 has 8 GiB; if his notebook has 6 GiB then it's the 3060. It's weird.

    • @cypher5317
      @cypher5317 1 year ago

      So it's a 3060, not a 3070, but it should do better than the 3050 Ti with 4 GB of VRAM, no? Confusing to me.

    • @iCore7Gaming
      @iCore7Gaming 1 year ago

      A 3070 in a laptop is literally just a 3060. Crazy how NVIDIA gets away with that.

  • @vit.c.195
    @vit.c.195 5 months ago

    Actually you are not comparing the M1 to an RTX, but macOS to Shitbuntu. Just because you were not able to get Linux set up on the Intel laptop for this dedicated purpose. Actually you can do that, but you weren't able to... whatever the reason.

  • @felipe367
    @felipe367 2 years ago

    3:58 erm, that's not quite half a minute, more like 3/4 of a minute.

    • @AZisk
      @AZisk  2 years ago

      0.46 is almost half a minute, half a minute being 0.5.

    • @felipe367
      @felipe367 2 years ago

      @@AZisk How many seconds in a minute, Alex? 60? Thus half a minute would be 30 seconds. 0.5 is 50 seconds.

    • @AZisk
      @AZisk  2 years ago

      @@felipe367 Should have been more clear: 0.5 is half. There is a decimal point there, not a colon. If it looked like this, 0:50, then that would be 50 seconds.

    • @felipe367
      @felipe367 2 years ago

      @@AZisk 0.5 is half of 1 BUT NOT half a minute, as that is 0.30, no matter if you put a decimal or a colon.

    • @AZisk
      @AZisk  2 years ago

      @@felipe367 😂
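      For reference, the arithmetic being argued over: 0.46 min × 60 s/min ≈ 27.6 s, and half a minute is 0.5 min = 30 s, so "0.46 minutes" really is just under half a minute.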

  • @egnerozo1160
    @egnerozo1160 2 years ago

    Once more… power consumption… the MacBook Pro is excellent…

  • @arianashtari5433
    @arianashtari5433 2 years ago

    I Want That Video 😃