Gemini 1.5 Pro (Experimental 0801) - Google's Model Outperforms Everyone

  • Published 9 Sep 2024
  • This video demonstrates Google's Gemini 1.5 Pro (Experimental 0801) which uses a Mixture-of-Experts (MoE) architecture.
    🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahd...
    🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
    bit.ly/fahd-mirza
    Coupon code: FahdMirza
    ▶ Become a Patron 🔥 - / fahdmirza
    #gemini #geminipro
    PLEASE FOLLOW ME:
    ▶ LinkedIn: / fahdmirza
    ▶ YouTube: / @fahdmirza
    ▶ Blog: www.fahdmirza.com
    RELATED VIDEOS:
    ▶ Resource aistudio.googl...
    All rights reserved © 2021 Fahd Mirza

Comments • 7

  • @pfswilliams • a month ago

    I am not sure if you have covered it in a previous video, but it has code execution under Advanced Settings. That allows a prompt like "Calculate the sum of the first 50 prime numbers" to create and run Python code and work out the answer, which the model would otherwise get wrong or hallucinate. This also works with other Google AI models, including Gemini 1.5 Flash, and in the API.
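    A minimal sketch of the kind of Python a code-execution tool might generate for that prompt (hypothetical code, not taken from the video; function names are my own):

    ```python
    def is_prime(n: int) -> bool:
        """Trial-division primality check."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def first_n_primes(n: int) -> list[int]:
        """Collect the first n primes by scanning the integers upward."""
        primes = []
        candidate = 2
        while len(primes) < n:
            if is_prime(candidate):
                primes.append(candidate)
            candidate += 1
        return primes

    primes = first_n_primes(50)
    total = sum(primes)
    print(total)  # 5117 (the 50th prime is 229)
    ```

    Running the script gives an exact arithmetic result, which is why a code-execution tool avoids the hallucinated sums a model can produce when it answers from token prediction alone.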

  • @Cingku • a month ago

    It still failed on one of my chemistry math problems. No model has ever been able to solve it reliably, not even closed ones like ChatGPT and Claude, except for Gemma27b, which nails it every time. Closed models like GPT-4 and Claude can still do it, but only after maybe more than five tries. Gemma27b solves it correctly every time, zero-shot. It's weird that this latest model, supposedly better than GPT-4, is less capable than Gemma27b, which is their open-source model.

  • @sergeziehi4816 • a month ago

    So never sleep 😂!!!

  • @AndreasJansson2010 • a month ago

    Do you find it more censored than GPT and Claude?