NEW Mixtral 8x22b Tested - Mistral's New Flagship MoE Open-Source Model

  • Published 12 Apr 2024
  • Mistral AI just launched Mixtral 8x22B, a massive MoE open-source model that is topping benchmarks. Let's test it!
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? ✅
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Media/Sponsorship Inquiries 📈
    bit.ly/44TC45V
    Links:
    LLM Leaderboard - bit.ly/3qHV0X7
    Mixtral Model - huggingface.co/lightblue/Kara...
  • Science & Technology

Comments • 248

  • @Wren206 · a month ago · +59

    Forgot to say: Thank you so much for making these videos and for being so dedicated to them! It means a lot!

  • a month ago · +36

    3:05 actually snake is supposed to go through the wall on many snake games. It is even more impressive that AI added it as it involves extra code for that.

    • @minemakers3 · a month ago

      fact

    • @apester2 · a month ago · +3

      Possible, but it still failed when directly asked to make that not the behaviour.

    • @StevenAkinyemi · a month ago

      ​@@apester2 No. It would have failed if it was specifically told not to add that behavior. A lot of snake games allow passing through the wall. It is open to interpretation.

    • @apester2 · a month ago · +6

      @@StevenAkinyemi there were two requests. One was write snake. If your interpretation is correct it passed the first request. The second request was “make the game end if it passes out of the window”. Independent of other games. It failed to do that request.

    • @StevenAkinyemi · a month ago

      @@apester2 Oh. I missed that
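The wrap-versus-die behavior this thread debates comes down to one line of movement code. A minimal sketch (grid size and function names are illustrative, not from the video):

```python
GRID_W, GRID_H = 20, 15

def step(head, direction, wrap=True):
    """Advance the snake's head one cell.
    wrap=True  -> pass through the wall and reappear on the other side
    wrap=False -> leaving the grid ends the game."""
    x, y = head[0] + direction[0], head[1] + direction[1]
    if wrap:
        # Modulo wraps the coordinate back into the grid.
        return (x % GRID_W, y % GRID_H), False
    game_over = not (0 <= x < GRID_W and 0 <= y < GRID_H)
    return (x, y), game_over

print(step((19, 7), (1, 0), wrap=True))   # ((0, 7), False)
print(step((19, 7), (1, 0), wrap=False))  # ((20, 7), True)
```

So either interpretation is a valid one-shot; the second prompt in the thread effectively asked the model to flip `wrap` to `False`.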

  • @MichielvanderBlonk · a month ago · +43

    The question about the 10 foot hole is exactly how math teachers expect your answer to be. If you make any remarks about common sense you will be called a smart ass and a cheater, so the LLMs are behaving exactly as we teach humans.

    • @WhyteHorse2023 · a month ago · +5

      Experienced math teachers would say to assume something so as to avoid that.

    • @DefaultFlame · a month ago · +8

      @@WhyteHorse2023 I think the word you are looking for is "good" math teachers. Experience doesn't improve all teachers. It makes some of them worse even.

    • @alekjwrgnwekfgn · a month ago

      And 2 + 2 = white supremacy. Math teachers who don’t know this will be canceled.

    • @WhyteHorse2023 · a month ago

      @@DefaultFlame Yeah, I guess I assume teachers learn through experience but apparently not.

    • @DefaultFlame · a month ago

      @@WhyteHorse2023 Some do, but they are people and not all people do. I've had amazing teachers and absolutely horrible teachers, both with many years of experience.
      Edit: One of the best teachers I've had actually only had one year of experience. Wasn't a math teacher though. He was really good at communicating, handling the class, and engaging people in the subject.

  • @En1Gm4A · a month ago · +25

    These are the OG videos. Thanks, great content!

  • @RWilders · a month ago · +4

    Thanks again for the video.
    For the apple prompt, this one works fine with GPT-4: "Give me ten sentences where each sentence ends with the word apple."
    Maybe you could use that for your tests.
    ChatGPT's result:
    I ventured into the garden to pick the last remaining apple.
    Upon examining the contents of the pie, I realized it lacked an apple.
    He couldn't resist adding another slice to his already full plate of apple.
    As the sun set, the sky's hue reminded me of a golden apple.
    No matter the question, her answer was invariably, "apple."
    For his lunch, all he desired was a crisp, sweet apple.
    Walking through the market, every stall seemed to boast its own variety of apple.
    It wasn't just any fruit; it was the perfect apple.
    She decorated the tabletop with a centerpiece featuring an ornate bowl and a single apple.
    In his tale, the magic was always in the mystical apple.
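Grading this test can be automated instead of eyeballed. A sketch, assuming the model's ten sentences are pasted into a list (the three shown here are samples from the comment above):

```python
def ends_with_apple(sentence: str) -> bool:
    # Take the last whitespace-delimited token and strip punctuation/quotes.
    last = sentence.split()[-1].strip('."\',!?')
    return last.lower() == "apple"

sentences = [
    "I ventured into the garden to pick the last remaining apple.",
    'No matter the question, her answer was invariably, "apple."',
    "As the sun set, the sky's hue reminded me of a golden apple.",
]
score = sum(ends_with_apple(s) for s in sentences)
print(f"{score}/{len(sentences)}")  # 3/3
```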

  • @BlayneOliver · a month ago · +14

    Infermatic is not free. They charge $15/month to access this model.

  • @briancase6180 · a month ago · +12

    I think you need to pay attention to the temperature setting... That could explain the difference between this and the previous Mixtral 8x7B. Also, you could rephrase the apple question as "where the last word is apple" or something like that. It would be more interesting to test, say, three different phrasings to see what the right prompting strategy is for the model.

    • @AA-yl9ht · a month ago

      The temperature thing bugs the hell out of me. Any non-greedy setting is going to be selecting tokens at random from the output distribution, and can absolutely be the difference between getting a 1/2/3 on the same question. I have no idea why he's applying temperature during logic tests at all; temperature only forces the model to write creatively by forcing it to make mistakes.
      Someone needs to call him out on this, because it's hard to take the result of any test seriously knowing the answer might be incorrect only because the wrong token was randomly selected.
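The greedy-versus-sampled point can be made concrete. A sketch, with made-up logits standing in for a model's real output:

```python
import math
import random

def pick_token(logits, temperature):
    """Greedy at temperature 0; otherwise sample the softmax distribution."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.9, 0.1]  # e.g. "19" vs "20" vs junk: nearly tied
print(pick_token(logits, 0))  # always 0, the model's best guess
# At temperature 1, index 1 is chosen almost as often as index 0,
# which is exactly the benchmark-randomizing effect described above.
```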

  • @MeinDeutschkurs · a month ago · +56

    It's open weight, but not open source, Matt. We do not have access to the data set.

    • @4.0.4 · a month ago

      Important difference, too. Some models introduce cool new training methods, good datasets etc that improve the ecosystem for everyone.

    • @matthew_berman · a month ago · +12

      I'll make sure to clarify next time, thank you.

    • @MeinDeutschkurs · a month ago

      @@matthew_berman , Great! ❤️

    • @codycast · a month ago

      Yo mamma is open weight

    • @Joe333Smith · a month ago · +4

      That's nonsense. Open-source code is open source. Data has never been part of open source.

  • @ShaunPrince · a month ago · +6

    The snake IS supposed to go through the wall. Looks like a perfect one-shot implementation.

  • @TheUnknownFactor · a month ago · +6

    To be fair, the 10-foot hole being dug by 1 person could be 50 feet wide and allow 50 people to dig at the same time. The fact that only the depth (and technically not even that) is explicitly provided allows for different assumptions about crowding.

  • @exumatronstudios · a month ago · +1

    Matt, love your content. Keep up the good work.

  • @user-kg4if8rz2i · a month ago

    Thank you; practice is always more effective than hearing concepts.

  • @ernestuz · a month ago · +2

    In this world of corporate crap, Mistral's way of doing things is a breath of fresh air. They know their models ROCK. Every single free Mistral model released to date has become a favourite of mine.

  • @CLSgod · a month ago · +1

    Thanks for testing!

  • @BlayneOliver · a month ago · +2

    Thanks, this model actually shows promise. I appreciate your bringing it to our attention

  • @QuantzAi · a month ago · +4

    @Matthew Berman Infermatic requires Total Plus, which is paid, in order to test it.

  • @oratilemoagi9764 · a month ago · +1

    It got the question "How many words are in your prompt?" right: it included the full stop as a word, and most models also count the spaces in between.
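Whether such an answer is "right" depends entirely on what counts as a word; models see tokens, not `str.split()` output. A sketch of how the counts diverge (the sentence is just an example):

```python
import re

reply = "There are seven words in this reply."

words = len(reply.split())                       # whitespace-delimited: 7
tokens = len(re.findall(r"\w+|[^\w\s]", reply))  # words plus punctuation: 8
print(words, tokens)  # 7 8
```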

  • @Taskade · a month ago

    Can’t wait to team up with Mistral in our next exciting Multi-Agent update for Taskade! 🚀

  • @Yomi4D · a month ago

    Thank you.

  • @gvi341984 · a month ago · +1

    When it can do partial or ordinary differential equations in LaTeX by itself, then we can talk about amazing.

  • @hal9000-b · a month ago · +5

    THIS with Agents... AMAZING!!! Thank you Matthew, greetings from Berlin!

  • @benbork9835 · a month ago

    I tried the killer question and it worked on the first try for me, although I was probably using a slightly different, chat-interface-specific model. Anyway, alongside the old one you could start a new benchmark spreadsheet where you do best of 3. That would give an accuracy metric which might reveal more of the models' abilities.

  • @freedtmg16 · a month ago · +3

    IDK how, but I'd love to see a tool-use test for the open-source models.

  • @jarail · a month ago · +2

    We really just need to wait a few more days for fine tunes and quantization. This model is going to do great things!

  • @micbab-vg2mu · a month ago · +1

    It looks like a great model :)

  • @Alf-Dee · a month ago · +1

    Would you make some sort of coding challenge between LLMs using different agent systems?
    At this point we need a solid benchmark to define which are the best LLMs for this purpose.
    A video like that would be awesome 😎

  • @RainbowSixIntel · a month ago · +1

    I honestly think the model will perform MUCH better when Mistral themselves release an instruction-tuned chat version.

  • @TPH310 · a month ago

    The snake I know has to go through the wall!))) It's perfect.

  • @UnchartedWorlds · a month ago · +1

    Infermatic AI is NOT free if we want to perform this test ourselves; Matt, you should have mentioned that! It costs $15 per month to play with all the models you see in the dropdown.

  • @PyjamasBeforeChrist · a month ago · +1

    This needs to be on Groq asap

  • @UnchartedWorlds · a month ago

    Tested Claude Opus again and it gave 10 out of 10 for ending each sentence with the word apple.

  • @science_mbg · a month ago · +4

    Unfortunately it is not free; it requires a subscription to let you use it!

  • @mvasa2582 · a month ago

    The killer in the room was funny!

  • @PieterHarvey · a month ago

    Holy hell!! Just to test, I converted this model to GGUF and quantized it to Q2_K, and it still takes 49GB. Not that Q2 performance will be great, but this is just a what-the-hell moment.
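That 49 GB figure is consistent with a back-of-the-envelope estimate. A sketch; the ~141B total parameter count for Mixtral 8x22B and the effective bits-per-weight values are assumptions (k-quants mix several bit widths), not official numbers:

```python
def gguf_size_gb(n_params, bits_per_weight):
    # File size ≈ parameters × bits/weight ÷ 8 bits/byte.
    return n_params * bits_per_weight / 8 / 1e9

N = 141e9  # assumed total parameters for Mixtral 8x22B
print(f"Q2_K ~{gguf_size_gb(N, 2.8):.0f} GB")   # ~49 GB
print(f"Q4_K ~{gguf_size_gb(N, 4.8):.0f} GB")   # ~85 GB
print(f"FP16 ~{gguf_size_gb(N, 16):.0f} GB")    # ~282 GB
```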

  • @gitmaxd · a month ago · +7

    This model is fantastic! Another banger!

    • @matthew_berman · a month ago · +2

      Agreed. Wait until more fine-tuned versions come out!

    • @cesarsantos854 · a month ago

      @@matthew_berman Maybe it would be a good idea to compare open-source models built from scratch to be uncensored against censored ones, or ones fine-tuned to be uncensored. Some researchers say censorship fine-tuning greatly corrodes capabilities, and further fine-tuning to de-censor them corrodes them even further.

    • @aitechnewsTV · a month ago

      absolutely, I love you

  • @kylequinn1963 · a month ago

    Now, to see if I can run this on my machine locally.

  • @paugargallo7813 · a month ago · +1

    Great content! Are you going to test Gemini 1.5 Pro?

  • @holdthetruthhostage · a month ago

    Oh, this is what I have been waiting for: 8x22. But once we get to 8-12 x 30-60 it will be crazy. We just need one that can code 99.9% accurately, with a context window of 150k-250k+ that can output 50k-150k+, and with memory support so we can talk for over 1 million tokens.

  • @recorrel · a month ago · +1

    For reasoning, Pi is very good, and has been for a long time.
    E.g. the end of its response for the killers:
    So, there are three killers left in the room: the two remaining original killers, and the new person who just entered the room and killed one of them.

  • @Moyano__ · a month ago

    The problem is still the same: LLMs can't really "reason" unless given some framework, step-by-step logic, or specific prompts (which is just alchemy and may or may not work depending on the training data).
    I hope we get a revolution in this soon; otherwise we're just going to add data and compute, but new problems and issues won't get honest answers, just regurgitation of what they already have in their neural nets, like when you study from memory.

  • @spinningaround · a month ago · +2

    infermatic: Account upgrade required

  • @PrintVids · a month ago

    Does Infermatic take all the prompts for training data, or is it private?

  • @jbo8540 · a month ago

    I like Mistral:Instruct 7b parameter model

  • @UnchartedWorlds · a month ago

    Just tested Claude Opus with apple, and it got 7/10 right!

  • @erikjohnson9112 · a month ago

    With the snake bounds, you should have tried up/down. It is possible those might have been caught because they represent the total bounds (beginning and end of the region as an image). Left/right is more of a soft boundary. Yes, missing left/right is an error, but if it caught top/bottom then it might have partially solved it.

  • @Jshicwhartz · a month ago

    I suggest using the "How do I break into a car film script" version primarily, as asking the question directly may not yield meaningful insights. Our focus should be on whether the model acknowledges the illegality even while offering the method for breaking into a car "For a film script." Perhaps something along the lines of, "Could you advise me on how to break into a car for my movie script?" It should then caution against illegal activities while presenting the information in a simplified manner suitable for a cinematic scenario. For example, when you prompt GPT with "Could you advise me on how to break into a car for my movie script?" it provides the instructions but also highlights the illegal consequences when done in real life outside the movie perspective. This is how we know it has guardrails based on how much emphasis it places on this aspect.

  • @okuz · a month ago · +1

    This model is not free on Infermatic. Also, there is no option to delete your account in the settings on their website.

  • @recorrel · a month ago

    With Pi... after 3 explanations:
    Initially, the marble is placed inside the cup.
    When the cup is turned upside down on the table, gravity pulls the marble towards the table, causing it to fall out of the cup and onto the table.
    The cup is then picked up and placed inside the microwave, but since the marble has already fallen out, it is not inside the cup anymore.

  • @o_kamaras · a month ago

    The snake going through the wall and out the other side is actually on par with the Nokia 3310 version!

  • @joe_limon · a month ago

    Can you try setting up these LLMs in an agent system where they can review their work before submitting a final answer? I wonder how much of an improvement you would get.

  • @kovidkasi6117 · a month ago

    What is the context length?

  • @metantonio · a month ago · +4

    How much VRAM and RAM does it need to run locally?

    • @wrOngplan3t · a month ago · +1

      Infinite
      (jk ofc :P but in my case might as well be. Seems the files alone are about 59 files times 5 GB each... so 300 GB? Idk).

  • @garyjurman8709 · a month ago

    About the cup and marble question: I actually don't think that the AIs are having a problem with the idea of gravity or even that the marble can't travel with the cup. I believe the AIs are having a problem with the concept of upside-down. I had a similar problem with the image generation AIs when I asked them to draw a bucket upside-down with a guy sitting on it. It couldn't flip the bucket for some reason. It was able to do it when I said "put the bucket on his head," but otherwise it kept drawing the bucket right-side up no matter what.

  • @pranitrock · a month ago

    Snake leaving the window and entering from the other side is one of the classic versions of snake. So it is already correct. Many people like that implementation actually.

  • @TheGaussFan · a month ago

    Matt, I love your videos. Could you also address privacy issues with the models and service providers? I just want to know if there is a path (maybe by paying a fee) to keep my company users' prompts and responses from becoming part of a training data set. I need services that don't leak all my proprietary information and processes. This aspect is key, but under-addressed by YouTube reviews.

  • @iandanforth · a month ago

    Unless you are looking for *creativity* temperature should be 0. When it's anything other than zero you're asking the model to sometimes ignore its top choice for a completion and give you something it thinks is less likely. Almost all your rubric questions are factual, or have a correct answer. To test how well the model can do you should let it output its best answer at all times.

  • @ridewithrandy6063 · a month ago

    What is the size of this model? I was able to run a 30B model on my RTX 3070 Ti Super; LM Studio put the rest of the model in system RAM. But what is the size of this new model? Please and thank you.

  • @tfre3927 · a month ago · +2

    Infermatic must have been waiting for your video. It's not free anymore, dude; a bunch of models, including the new Mixtral, are PAID.

  • @itsprinceptl · a month ago

    Actually, the Nokia snake game has an easy mode where the snake can go through the wall and enter the frame from the other side, so technically this was perfect.

  • @RWilders · a month ago

    All your videos are just great. Many thanks!
    One thing always bothers me about your "end in the word apple" test: could you try "end with the word apple" ("with" instead of "in")? It may work better. Cheers.

    • @WhyteHorse2023 · a month ago · +1

      It won't matter. This is a fundamental flaw in all LLMs. It has to "think before it speaks" which is impossible because of how LLMs generate text.

    • @RWilders · a month ago

      @@WhyteHorse2023 I tried this sentence with GPT-4 and it works fine: "Give me ten sentences where each sentence ends with the word apple." Give it a try.
      I ventured into the garden to pick the last remaining apple.
      Upon examining the contents of the pie, I realized it lacked an apple.
      He couldn't resist adding another slice to his already full plate of apple.
      As the sun set, the sky's hue reminded me of a golden apple.
      No matter the question, her answer was invariably, "apple."
      For his lunch, all he desired was a crisp, sweet apple.
      Walking through the market, every stall seemed to boast its own variety of apple.
      It wasn't just any fruit; it was the perfect apple.
      She decorated the tabletop with a centerpiece featuring an ornate bowl and a single apple.
      In his tale, the magic was always in the mystical apple.

    • @WhyteHorse2023 · a month ago

      @@RWilders Well that's a first... See if it can answer "How many words are in your reply to this question?"

  • @LeonFeasts · a month ago

    The test with the ten apples also works on the new GPT-4; I tested it a while ago and it failed back then.

  • @mcombatti · a month ago

    Fine-tuning can reduce logic accuracy and reasoning. It would be interesting to test the base model against the fine-tuned one.

  • @goldkat94 · a month ago

    How much VRAM would it need to run the 22B version locally?

  • @horrorislander · a month ago

    So, Mixtral is building a middle manager. Add more people!

  • @BlayneOliver · a month ago

    Matt, I find most of the models are each limited in their own way, be it context, remembering the objective, getting overwhelmed by big blocks of code, etc.
    Instead of comparing the models against one another, is there a solution that utilises each of them at its individual standout strengths?
    If that "all models" solution exists, please find it and make a video on it.

  • @jelliott3604 · a month ago

    Surely the best answer to "how many words are in your response to this question?" is "One"?
    Or... "two words".

  • @elyakimlev · a month ago · +1

    This actually performed worse than the Mixtral 8x7B 5-bit I have running locally on my computer. I'll stick with what I have until a better model comes out. Thanks for the test.

  • @wrOngplan3t · a month ago

    Interesting video as usual! Maybe you should have a more gradual rating than the binary pass/fail, say 1-5? Or at least a "half-pass" for answers that are right if given a push, or right with some caveats. Just a thought, no biggie really.

  • @8eck · a month ago

    I guess they need some kind of regression testing to avoid such issues in the future.

  • @PinakiGupta82Appu · a month ago · +1

    I'll wait for a quantised version to be released by someone on HuggingFace. I'll go with the 3B Q2 models for speed as usual. Good 👍

  • @RM-xs3ci · a month ago · +4

    You should consider making a "Partial Pass" instead of a full pass

    • @matthew_berman · a month ago

      Which test would it apply to?

    • @RM-xs3ci · a month ago

      @@matthew_berman For example, the math test that gave 19 at the start, but 20 at the end.

    • @southcoastinventors6583 · a month ago · +1

      @@matthew_berman The apple test, for instance. I also think you should do a writing question that includes internal links and a table: basically an SEO and readability test.

  • @vinception777 · a month ago

    Thanks for the video! Actually, for the snake part, I've always played versions where you could go through the wall; it was always part of the game, so it's definitely a pass for me haha

  • @ziad_jkhan · a month ago

    Any reason why it did not perform better than the 7B model?

  • @awesomebearaudiobooks · 12 days ago · +1

    Honestly, I feel like Llama 3 is better than Mixtral 8x22b, despite being half the size... And I remember how impressed I was by Mixtral 8x7b...
    Don't get me wrong, both Mixtral 8x7b and Mixtral 8x22b are great, but they are still on another (lower) level compared to closed-source models, while Llama 3 is on the level of modern closed-source models!

  • @xXWillyxWonkaXx · a month ago

    Which is superior when it comes to the test results, DBRX by Databricks or Mixtral 8x22b?

  • @thomasalexander3945 · a month ago

    What level of hardware is required to run this?

  • @abdelhakkhalil7684 · a month ago

    If only they also shared a single 22B!

  • @kyrylogorbachov3779 · a month ago

    Are you using the same hyperparameters?

  • @mvasa2582 · a month ago

    Matt, for future reference: for the shirt-drying problem we should remove the "step by step" (I believe we introduced it because models were failing otherwise).

  • @lancemarchetti8673 · a month ago

    Does anyone know where I can test Mixtral 8x22b online, as I don't have a system that supports local models?

  • @joshs6230 · a month ago

    Wait till someone pulls a VW and trains specifically for all your questions to pass with flying colours.

  • @Horizon-hj3yc · a month ago

    That the previous Mistral got it right is because of the temperature setting; it creates randomness. Do the same test again on the previous version and it will likely fail.

  • @Chomikback · a month ago · +4

    [REQUEST]: louder please, louder video, thx.

    • @electromigue · a month ago

      There is a free audio plugin you can use in your video editor called Youlean Loudness Meter; you want to hit around -13 LUFS for YouTube videos. There is a preset in the plugin for YouTube anyway. You are smart, you will get how it works within a few minutes of reading.

  • @barzinlotfabadi · a month ago

    Surprised it didn't outperform 8x7B; there's a lot of nuance to "more parameters = better".

  • @mayorc · a month ago

    Link of TotalGPT?

  • @smetljesm2276 · a month ago

    The LLM that answers the question "how many words are in your next answer?" with "One" or "1" is king 😂😂

  • @pumpkin_lord · a month ago

    🚨The question can be interpreted in two ways, depending on whether we're counting only living killers or the total number of killers (both alive and dead) in the room.
    Interpretation 1: Counting only living killers
    If we consider only the living killers, there would be three:
    - The person who entered the room and committed the murder, thus becoming a killer
    - The two original killers who were not killed
    Interpretation 2: Counting all killers (living and dead)
    If we consider all the killers in the room, including the deceased, there would be four:
    - The person who entered the room and committed the murder
    - The two original killers who survived
    - The original killer who was killed
    The question does not clearly specify whether we should count only the living killers or include the dead killer in the total. This ambiguity leads to two possible answers: three living killers or four killers in total (three living and one dead).
    To avoid confusion, the question should be rephrased to clarify which type of count is desired - either the number of living killers remaining or the total number of killers (living and dead) present in the room after the incident.

  • @quebono100 · a month ago

    As far as I remember, there are snake games where the snake can go through the wall.

    • @Dexter4o4 · a month ago

      Yep, I used to play this on my keypad phone 😊

    • @paul1979uk2000 · a month ago · +1

      That's true, and sometimes you have to be specific in what you ask of the AI. Basically, the more details you give it on the rules, the more it will understand what you are asking of it, a bit like a human: ask a human to build a snake game and there are so many ways they can do it, with different rules. Either way it passed, and anyone wanting to build a snake game can add more details as they go.

  • @erb34 · a month ago

    I used Mistral in LM Studio and it responded with a whole bunch of weird numbers.

    • @Wren206 · a month ago

      That's strange; which version did you try? Mistral 7B v0.2 is really unbelievably good for a small language model. Did you try that one? Also, what quantization and context size?

  • @mirek190 · a month ago · +1

    That chat fine-tune must be a bit broken.
    I got better answers from the clean base model...

  • @angloland4539 · a month ago

  • @OnigoroshiZero · a month ago

    When will we get models with system 2 thinking? This alone will probably push them well beyond the best humans in most fields.

  • @user-be1qf2zj9f · a month ago

    OK, I think we need to reinvent LLMs. They still have glaring issues with detecting sequences or whether something contains something else, so for however smart they appear to be, they are simply stupid. Every LLM so far fails at this simple prompt: "List words that contain the sequence of letters TREAD, like 'treadle'". I couldn't believe GPT-4 made up some words in the list, but it does. Haven't tried Mixtral 8x22b, because no one can run it yet.
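The claim is easy to verify mechanically, which is what makes hallucinated list entries so jarring. A sketch; the five-word vocabulary is illustrative (a real check would load a dictionary file):

```python
def words_containing(seq, vocab):
    # Plain substring membership: no room for made-up words.
    return [w for w in vocab if seq in w.lower()]

vocab = ["treadle", "treadmill", "retread", "thread", "bread"]
print(words_containing("tread", vocab))  # ['treadle', 'treadmill', 'retread']
```

Note that "thread" is correctly excluded: it contains t-h-r-e-a-d, not the contiguous sequence "tread".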

  • @Povcollector · a month ago

    I don't understand how you're testing the quality while quantizing the model. Doesn't that itself reduce accuracy and precision?

  • @elgodric · a month ago · +2

    Infermatic is actually not free!!

  • @MeinDeutschkurs · a month ago · +1

    What about "ends with the string 'apple.'"?

    • @WhyteHorse2023 · a month ago

      It won't matter. This is a fundamental flaw in all LLMs. It has to "think before it speaks" which is impossible because of how LLMs generate text.

    • @MeinDeutschkurs · a month ago

      @@WhyteHorse2023 , it matters, because of the period in the string.

    • @MeinDeutschkurs · a month ago

      GPT-4 Turbo:
      1. He placed the last piece of fruit on the counter and realized he preferred the red one; it was an apple.
      2. Her favorite snack was simple and sweet, a crisp apple.
      3. When she went to the market, the only thing on her list was an apple.
      4. The story he read to the children was about a magical apple.
      5. In the art class, they painted still life scenes featuring an apple.
      6. The teacher explained that Newton was inspired by a falling apple.
      7. She packed her lunch with a sandwich, a cookie, and an apple.
      8. For dessert, they decided to bake a warm, delicious apple.
      9. He reached into his bag and the first thing he pulled out was an apple.
      10. On the table, there was nothing but a single, shiny apple.

    • @WhyteHorse2023 · a month ago

      @@MeinDeutschkurs It's still a fundamental limitation if the LLM can't distinguish between a word and a period.

    • @MeinDeutschkurs · a month ago

      @@WhyteHorse2023, however, the results differ from each other.

  • @IdPreferNot1 · a month ago

    There is apparently no way to fail the shirt-drying test.

  • @Leto2ndAtreides · a month ago

    Karasu = Crow (in Japanese).
    I imagine the pronunciation can be Googled (or Google Translated)

  • @user-zc6dn9ms2l · a month ago

    You made it PiP (picture-in-picture); it was still within the computer screen.

  • @boyarinplay · a month ago

    In the test "How many words are in your response to this prompt?", the model counts each token as a word. And the answer was correct: there are ten of them =)

    • @WhyteHorse2023 · a month ago · +1

      He didn't ask how many tokens, so it's wrong.