Chris Hay
X AI fooled me with grok 2 and sus-column-r...
a new model called sus-column-r appeared alongside an anonymous model in the lmsys chatbot arena. in this video chris shows how to use a model's knowledge cutoff, and its knowledge of taylor swift and beyonce, to work out who probably created the model. he gets it wrong, as grok 2 from xAI wasn't in his testing, but his method and reasoning are sound
chris also explores the math and reasoning capabilities of this model, speculates about its size and whether it's a project strawberry or q* model, by pitting it against other models, especially gpt-4o, gpt-4o-mini, gemini and claude
1,769 views

Video

what’s underneath the mystery gemini 2 models?
2K views · 1 day ago
in this video, chris looks at the new google gemini-test, mystery-gemini-1 and mystery-gemini-2 models in the lmsys chatbot arena, and speculates about what the new gemini-2 models are like and how they have access to re...
I built an AI Math Compiler that emits synthetic datasets rather than code
791 views · 14 days ago
One of the big challenges in AI is synthetically generating math data, including the reasoning steps, to train large language models such as GPT, Llama 3.1 and Mistral. Only with reasoning can you truly train the model. In this video Chris shows how to generate a synthetic math dataset using generative AI with his new AI math compiler, which accurately returns questions, answers and step-by-ste...
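The idea described above can be sketched as a tiny standalone generator. This is a hypothetical illustration of emitting question/steps/answer records programmatically (the function name and record layout are invented here), not the actual AI math compiler from the video:

```python
import random

def make_addition_sample(rng: random.Random) -> dict:
    """Emit one synthetic addition question with step-by-step reasoning.

    Hypothetical sketch: question + reasoning steps + answer, generated
    programmatically so every record is correct by construction.
    """
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    steps = [
        f"Add the ones: {a % 10} + {b % 10} = {a % 10 + b % 10}",
        f"Add the tens: {a // 10 % 10 * 10} + {b // 10 % 10 * 10} = {(a // 10 % 10 + b // 10 % 10) * 10}",
        f"Add the hundreds: {a // 100 * 100} + {b // 100 * 100} = {(a // 100 + b // 100) * 100}",
        f"Combine the partial sums: {a} + {b} = {a + b}",
    ]
    return {"question": f"What is {a} + {b}?", "steps": steps, "answer": str(a + b)}

# a seeded generator makes the dataset reproducible
rng = random.Random(42)
dataset = [make_addition_sample(rng) for _ in range(1000)]
```

Because the answers come from arithmetic rather than a model, the dataset never needs a human checker; that is the main appeal of compiling data instead of sampling it.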
Understanding STaR and how it powers Claude and Gemini/Gemma 2 (and maybe OpenAI Q* or Strawberry)
8K views · 1 month ago
Understanding STaR and how it powers Claude and Gemini/Gemma 2B (and maybe Q* or Strawberry). STaR is short for Self-Taught Reasoner and is rumored to power OpenAI's Q* (now Strawberry), but definitely powers Claude 3.5 Sonnet and the Gemma/Gemini models. In this video Chris breaks down how self-taught reasoning works and how it is used in the fine-tuning phases of a model to improve training. Ch...
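The core STaR loop (sample rationales, keep only the ones that reach the correct answer, fine-tune on the survivors) can be sketched with a toy stand-in for the model. Everything below (`toy_model`, `star_round`, the 50% error rate) is invented for illustration, not taken from the video or the paper's code:

```python
import random

def toy_model(question: str, rng: random.Random) -> tuple[str, int]:
    """Stand-in for an LLM: returns (rationale, final answer), wrong ~50% of the time."""
    a, b = map(int, question.split("+"))
    answer = a + b if rng.random() < 0.5 else a + b + rng.randint(1, 3)
    return f"{a} plus {b} gives {answer}", answer

def star_round(problems, rng):
    """One STaR round: sample rationales, keep only those whose final answer
    matches the ground truth; the survivors become fine-tuning data."""
    finetune_set = []
    for question, gold in problems:
        for _ in range(4):                      # a few sampled attempts per problem
            rationale, answer = toy_model(question, rng)
            if answer == gold:                  # filter on answer correctness
                finetune_set.append((question, rationale))
                break
    return finetune_set

problems = [(f"{a}+{b}", a + b) for a, b in [(2, 3), (10, 7), (40, 2)]]
verified = star_round(problems, random.Random(0))
```

The full STaR recipe adds a "rationalization" step this sketch omits: for problems the model keeps failing, it is shown the correct answer as a hint and asked to produce a rationale justifying it.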
Multi-Head vs Grouped Query Attention. Claude AI, Llama-3, Gemma are choosing speed over quality?
1.1K views · 1 month ago
Multi-Head vs Grouped Query Attention. Are Claude, Llama-3 and Gemma choosing speed over quality? Frontier model providers such as Anthropic (Claude 3.5 Sonnet), Google (Gemini/Gemma 2B) and Meta (Llama-3) are trending towards grouped query attention over traditional multi-head attention as the attention mechanism in their transformer models. Interestingly, OpenAI with GPT-4o doesn't seem to be...
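The speed/memory side of that trade-off shows up directly in KV-cache arithmetic. A rough sketch, where the shapes approximate a Llama-3-8B-style config (32 layers, head dimension 128, 8 vs 32 KV heads, fp16 cache) and are assumptions of the sketch rather than quoted from the video:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value=2):
    """KV cache size: K and V (hence the 2) per layer, per KV head, per position."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

# MHA keeps one KV head per query head; GQA shares each KV head across a group.
mha = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=8192)  # 32 KV heads
gqa = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=8192)   # 8 KV heads
print(f"MHA: {mha / 2**30:.1f} GiB, GQA: {gqa / 2**30:.1f} GiB")  # MHA: 4.0 GiB, GQA: 1.0 GiB
```

Cutting the cache 4x per sequence is why GQA helps serving throughput so much at long context; the open question the video raises is how much answer quality it costs.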
NVIDIA's Nemotron-4 is totally insane for synthetic data generation
1.6K views · 1 month ago
the nvidia nemotron-4 340 billion parameter models are brand new open source models from nvidia that you can use to generate your own synthetic data. in this video chris shows you how to get started with nvidia cloud and nvidia nim services. using nvidia cloud you can use all the major open source and commercial models such as llama3, mistral, google and ibm granite models. in this video we fo...
i really want to say goodbye to copilot...
2.4K views · 2 months ago
in this tutorial chris looks at how open source copilots have moved on with the advent of the continue vscode extension and the new mistral 22 billion parameter codestral model, and compares it against starcoder-2 and llama-3. he shows what the difference between chat and "fill in the middle" models is, why you need both in today's ai coding assistants, and how they are used in continue do po...
The future of AI agents is WebAssembly (get started now)
1.7K views · 2 months ago
The future of AI Agents is WebAssembly. In this video, we look at how AI Agents can call WebAssembly functions dynamically using LlamaIndex, AssemblyScript and RAG. WebAssembly is probably the future of AI function calling due to its secure sandboxing. In this video, Chris breaks down how to use function calling with AI Agents and LlamaIndex, how to build WebAssembly functions and how to call ...
getting started with typespec
1.1K views · 2 months ago
typespec is a new language for designing your API specifications upfront, programmatically. typespec is a typescript-style language that dramatically simplifies designing APIs through support for inheritance, templates, interfaces and operations. using typespec you can take your simple api model design and generate OpenAPI 3.0 (Swagger) definitions from it. In the future i see this being a ...
Creating ReAct AI Agents with Mistral-7B/Mixtral and Ollama using Recipes | Chris Hay
3.3K views · 3 months ago
In this video, Chris shows how you can build effective ReAct AI agents with the Mixtral and Mistral-7B models using LangChain, Ollama and Recipes. Chris gives a brief overview of how the ReAct pattern works and how smaller models such as ChatGPT 3.5 and Mistral 7B struggle to perform the pattern due to lack of...
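For readers new to the pattern, the Thought → Action → Observation loop can be sketched without any LLM at all, using a scripted stand-in for the model's turns. All names here are invented for the sketch; the video itself drives the loop with LangChain and Ollama:

```python
# A scripted stand-in for the model's turns: one Thought/Action, then a final answer.
SCRIPT = [
    "Thought: I need to multiply.\nAction: calculator[21*2]",
    "Final Answer: 42",
]

# The only tool in this toy setup: a sandboxed arithmetic evaluator.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def react(script, tools):
    """Drive the Thought -> Action -> Observation loop until a final answer."""
    transcript = []
    for turn in script:
        transcript.append(turn)
        if turn.startswith("Final Answer:"):
            return turn.split(":", 1)[1].strip(), transcript
        # parse "Action: tool[input]" and feed the tool result back as an observation
        name, arg = turn.split("Action:")[1].strip().split("[", 1)
        transcript.append(f"Observation: {tools[name](arg.rstrip(']'))}")
    return None, transcript

answer, transcript = react(SCRIPT, TOOLS)
```

The hard part for small models, as the video notes, is not the loop itself but reliably emitting the `Action: tool[input]` format turn after turn; that is what the extra prompting guidance in Recipes helps with.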
Fine-Tune Llama3 using Synthetic Data
2.8K views · 3 months ago
In this tutorial Chris shows how to fine-tune the Llama-3 model in Google Colab using synthetically generated data. He not only shows you how to fine-tune the model but also shares his lessons learned, such as diversity of data, why the system prompt makes a difference, generalization, and fine-tuning to a particular format. You will not only learn how to fine-tune a model but also how to ge...
why llama-3-8B is 8 billion parameters instead of 7?
3.4K views · 4 months ago
llama-3 has ditched its tokenizer and has instead opted to use the same tokenizer as gpt-4 (tiktoken, created by openai); it's even using the same first 100K token vocabulary. in this video chris walks through why Meta switched tokenizer and the implications for model size, the embeddings layer and multi-lingual tokenization. he also runs his tokenizer benchmark and shows how it's more eff...
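The effect of the bigger vocabulary on parameter count is simple arithmetic. A back-of-the-envelope sketch, using the publicly reported vocab sizes (32,000 for Llama-2, 128,256 for Llama-3); the 4096 hidden size and the untied output head are assumptions of the sketch:

```python
def embed_params(vocab, dim):
    # input embedding matrix + untied output projection, each vocab x dim
    return 2 * vocab * dim

hidden = 4096
print(f"Llama-2-style embeddings: {embed_params(32_000, hidden) / 1e9:.2f}B params")
print(f"Llama-3-style embeddings: {embed_params(128_256, hidden) / 1e9:.2f}B params")
```

On these numbers the vocabulary change alone accounts for roughly 0.8B extra parameters, which is a large part of (though not all of) the jump from a "7B" to an "8B" model at a similar transformer backbone.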
Getting Started with ReAct AI Agents using LangChain
7K views · 4 months ago
Getting Started with ReAct AI Agents using LangChain
Inside the LLM: Visualizing the Embeddings Layer of Mistral-7B and Gemma-2B
6K views · 5 months ago
Inside the LLM: Visualizing the Embeddings Layer of Mistral-7B and Gemma-2B
How the Gemma/Gemini Tokenizer Works - Gemma/Gemini vs GPT-4 vs Mistral
1.6K views · 5 months ago
How the Gemma/Gemini Tokenizer Works - Gemma/Gemini vs GPT-4 vs Mistral
HuggingFace Fundamentals with LLMs such as TinyLlama and Mistral 7B
7K views · 6 months ago
HuggingFace Fundamentals with LLMs such as TinyLlama and Mistral 7B
Getting Started with OLLAMA - the docker of ai!!!
11K views · 6 months ago
Getting Started with OLLAMA - the docker of ai!!!
how the tokenizer for gpt-4 (tiktoken) works and why it can't reverse strings
2.1K views · 7 months ago
how the tokenizer for gpt-4 (tiktoken) works and why it can't reverse strings
Natural Language Processing (NLP) is still a thing
502 views · 7 months ago
Natural Language Processing (NLP) is still a thing
What is Retrieval Augmented Generation (RAG) and JinaAI?
3.2K views · 7 months ago
What is Retrieval Augmented Generation (RAG) and JinaAI?
abstract syntax trees are gonna be IMPORTANT in 2024
2.1K views · 7 months ago
abstract syntax trees are gonna be IMPORTANT in 2024
Real-Time Rust: Building WebSockets with Tokio Tungstenite
7K views · 8 months ago
Real-Time Rust: Building WebSockets with Tokio Tungstenite
superduperdb supercharges your database for AI
1.6K views · 8 months ago
superduperdb supercharges your database for AI
Mistral-7B: Text Classification Thoroughbred or Doddling Donkey?
1.6K views · 8 months ago
Mistral-7B: Text Classification Thoroughbred or Doddling Donkey?
Bun Web Sockets are really kinda awesome (bun.js tutorial)
5K views · 9 months ago
Bun Web Sockets are really kinda awesome (bun.js tutorial)
mistral 7b dominates llama-2 on node.js
4.4K views · 10 months ago
mistral 7b dominates llama-2 on node.js
functional programming with nim language
3.2K views · 11 months ago
functional programming with nim language
nim language - arrays, sequences and stacks.
669 views · 11 months ago
nim language - arrays, sequences and stacks.
fine tuning llama-2 to code
13K views · 1 year ago
fine tuning llama-2 to code
conditionals and loops in nim language
865 views · 1 year ago
conditionals and loops in nim language

Comments

  • @raihanrafi3665 · 1 day ago

    Please reverse engineer the ransomware in Rust

  • @greghayes9118 · 1 day ago

    Don't use your fingerprint. Swipe using a knuckle.

  • @Winter_Sand · 5 days ago

    With the final running of the code, running the assembly hello world, the code still works if I don't link the SDK and libraries during the "ld" function (I can just do "ld hello.o -o hello -e _start"), and doing ./hello still works. Does that mean that the rest of the function linking the libraries and defining the main function is unnecessary? Genuine question, just trying to reduce the amount of complex code I'm not entirely sure I understand

  • @cagataydemirbas7259

    Hi, how can I find the data template of the llama 3.1 base model? How can I prepare research papers and books for fine-tuning the base model in the right data format?

  • @foreignconta · 8 days ago

    Very nice test!! Subscribed!

  • @chrishayuk · 8 days ago

    turns out xAI created this model, which explains the issues i had with the math and reasoning parts.

    • @Redgta6 · 8 days ago

      good job for fixing the title lol

    • @chrishayuk · 8 days ago

      wasn't a massive change looool

    • @danielhenderson7050 · 5 days ago

      Tbh I didn't even consider Grok in the possible models!

  • @BlessBeing · 8 days ago

    just to let you know, you were wrong. it is confirmed to be by xAI. lol elon got you

  • @shuntera · 9 days ago

    You need a wee edit at 0:53 :-)

    • @chrishayuk · 9 days ago

      hahaha, i missed this, was a quick edit last night

    • @chrishayuk · 9 days ago

      fixed, and updated, thanks for the heads up

    • @danielhenderson7050 · 9 days ago

      I love your videos. You should be way more popular than someone I won't mention 😅

    • @chrishayuk · 9 days ago

      very kind, but honestly not about popularity, this channel is really just about getting thoughts out my head

  • @LombardyKozack · 9 days ago

    LLaMA2-70b uses GQA (only its 7b version used MHA)

  • @iliuqian · 10 days ago

    Thank you Chris. Could you show us how to create a ReAct agent using LangGraph?

  • @everyhandletaken · 10 days ago

    Nice one Chris, interesting!

  • @QrzejProductions · 10 days ago

    Great explanation, great content. Keep up the good work, your channel's worth much more reach and subs :)

    • @chrishayuk · 9 days ago

      very kind but I’m actually kinda cool with the reach

  • @PseudoProphet · 11 days ago

    Probably coming with the Pixel 9 pro.

  • @arindam1 · 11 days ago

    This is epic. You got a sub!

  • @reza2kn · 11 days ago

    Nice find Chris! enjoyed the video!❤

    • @chrishayuk · 9 days ago

      glad you enjoyed my weird ai detective show

  • @Mercury1234 · 14 days ago

    Someone please correct me if I'm wrong here. I think that neither of the examples you showed comes from reasoning. The order is flipped, they should first provide the reasoning and then the answer, not the other way around as in your examples. The models take all the tokens into account from the input and the output (generated up to that point). What is giving the right answer a better chance is if the previously generated tokens contain the reasoning steps. In your examples the previous tokens did not contain the reasoning steps as those were generated after the answer.

  • @BlunderMunchkin · 16 days ago

    When you said "math" I thought you meant symbolic math, not arithmetic. Using an LLM to do arithmetic is pointless; a calculator does a far better job.

    • @chrishayuk · 15 days ago

      @@BlunderMunchkin symbolic math is coming but you have to start with a foundation…. but in order to do symbolic math, the llm still needs to know how to count

  • @ErfanShayegani · 16 days ago

    Great content as always! Thank you!

  • @hosseinbred1061 · 17 days ago

    Great explanation

  • @billfrug · 17 days ago

    does seem a bit verbose for ADTs: you need to define the tag type and use a case-of on it to get the different member variables (similar to a record case in Pascal)

    • @chrishayuk · 17 days ago

      totally agree, it's heavily heavily pascal influenced

  • @novantha1 · 17 days ago

    I don't have time to go through the whole video at this specific moment, but it seems to me that you came to a fairly similar answer to myself: LLMs are pretty strong at presenting data and handling noisy inputs, while traditional computer programs are pretty good at doing the math (numerical instability notwithstanding).

    One obvious opportunity that I'm not seeing in the first ten minutes (though I'll certainly allow it's possible that I'll have egg on my face after finishing) is that this seems like an efficient way to embed agentic function calling into a model; if the steps to solve the problem contain a call to a remote system with the equation as an argument, and the remote system can solve the equation, that seems a lot like a function call to my eyes. Beyond that, there's also probably some room to reverse engineer a problem with the synthetic generator LLM based on the equation and answer, in order to encourage semantic problem solving, as seen in certain benchmarks which have word problems encoding ideas best solved with mathematics.

    Overall, this is a super cool project, and is probably going to be very beneficial for people doing continued pre-training or experimenting with certain ideas like grokking. I'm pretty excited to have a hack at it myself.

    • @chrishayuk · 17 days ago

      Absolutely spot on, I cover later in the video that the same technique can be used for function calling for complex expressions and can also be used for teaching code generation etc

  • @asimabusallam3147 · 18 days ago

  • @tiympc · 19 days ago

    Tremendous explanation. Thank you so much Chris!

  • @poochum4595 · 21 days ago

    Great vid! Any chance we can get a link to the repo with this code?

  • @jimmyporter8941 · 21 days ago

    Rust doesn't have "object orientation".

  • @ilkkalehto8507 · 21 days ago

    Brilliant!

  • @gandalfgrey91 · 24 days ago

    I honestly forgot that Nim has func

  • @rmschindler144 · 24 days ago

    note that you don’t need the `-o` flag when calling `wat2wasm`, and it will simply use the filename

  • @rmschindler144 · 25 days ago

    installing WABT: on macOS, with Homebrew: `brew install wabt`

  • @joseluisbeltramone599 · 1 month ago

    Tremendous video. Thank you very much!

  • @_Spartan-107_ · 1 month ago

    These videos are insanely awesome. I LOVE the verbosity. Most of the internet videos are a high level abstraction of "what is programming". This breakdown of "What is happening when we program" is what's lacking in engineering these days! Well done :)

  • @omarei · 1 month ago

    Great content 👍😁

  • @ckpioo · 1 month ago

    so this is why gpt-4o is so much better at maths

  • @venim1103 · 1 month ago

    You have to check about the Claude 3.5 sonnet system prompt leak and all the talk about “artifacts” and persisting data with LLMs.

    • @chrishayuk · 1 month ago

      Oooh persisting with llms sounds interesting, I’ll find out about that

    • @venim1103 · 1 month ago

      @@chrishayuk it seemed to me they are using clever prompt engineering with their "artifact" system, in a way that resembles memory management and tool usage, with the help of the massive context window. They must have also fine-tuned their models to support this syntax. Just crazy to think how the system message itself is able to help the AI with coherence and task management. All this seems fascinating as I'm trying to figure out why Claude 3.5 Sonnet is so good at code-related tasks, especially re-editing and updating code, compared to most other models. I can't wait to see some open source models reach this level! Maybe fine-tuning and clever prompt engineering is all that is needed for now 👍

    • @chrishayuk · 1 month ago

      @@venim1103 i'll check out their system prompt... but i'm convinced they're using STaR backed by a reinforcement learning policy. the new mistral nemo model has followed this approach also. not checked out how they implemented artifacts yet, but i'm convinced this is all now in the fine-tune phase, hence these videos

  • @marilynlucas5128 · 1 month ago

    If you put Gemma in your title, you'll get low views. Gemma is absolutely disgusting. One of the dumbest models out there

  • @theklue · 1 month ago

    Very good content, thanks! I was comparing models manually, and I'll integrate Nemotron into the eval. One off-topic question: is the superimposed screen on top of your video a post-prod edit, or is there software that lets you record the video like this? Thanks!

    • @chrishayuk · 1 month ago

      awesome, glad it was useful. the superimposed screen effect is a post-prod edit that i do. the way i set the lights and screen backdrop, combined with Lumetri settings and use of opacity, allows me to achieve the effect

    • @theklue · 1 month ago

      @@chrishayuk Thank you! it looks very good

    • @chrishayuk · 1 month ago

      Thank you, I like to think it’s one of the techniques that give a little uniqueness, glad you like it

  • @user-rs4sg2tz6k · 1 month ago

    I believe 4o's judges only 90%

    • @chrishayuk · 1 month ago

      interesting, where did you get that info from?

  • @kusanagi2501 · 1 month ago

    I really liked the video. it was a mystery for me for a while.

  • @testales · 1 month ago

    I don't like that very much. Why? I absolutely hate getting walls of text and code thrown at me for simple yes/no questions all the time! Both ChatGPT and Claude have this issue. So in the end it's just that you hardcode a system prompt like "think step by step" into your model, and it's then very hard to make it give quick and short answers again.

    A hidden scratchpad is a good compromise, but it still slows down responses and could be achieved with a system prompt too. The system-prompt method could also include multiple agents or personas with different strengths to provide input. The best would be to also train the model to estimate the complexity of a question and then decide whether to do additional thinking or not. Also, I've seen open-weight models answer harder questions correctly with just one or very few words where others generated a text wall and still came to the wrong result. So whether explicit step-by-step thinking is really required remains debatable. Obviously the chances of a correct answer increase the more relevant (!) information is in the context, and that's all that CoT etc. actually does: pull more information into context.

    Another similar thing that I see Claude do quite often, and which I like, is that it does summarizations before responding. If the problem is complex and there was a lot of back and forth, perceptions of it may diverge. Summarizations greatly help to create a synchronization point between the LLM and the user, and then focus on the established and relevant intermediate results.

    • @chrishayuk · 1 month ago

      I agree, it’s a balance and a trade off, and I think this is where RL can be used to bring this down to a more succinct response.

  • @raymond_luxury_yacht · 1 month ago

    That explains why Claude 200k context is more like 50k for me. So much taken up with the scratchpad

  • @mrpocock · 1 month ago

    The private scratchpad in Claude 3.5 explains why it seems to behave as if it had private state in addition to the text visible in the conversation.

    • @chrishayuk · 1 month ago

      Yeah really nice technique for giving readable answers but not losing chain of thought reasoning

  • @rodneyericjohnson · 1 month ago

    How can a full grown adult hide behind some decorations?

  • @tommy516 · 1 month ago

    Claude is NOT as good as GPT, sorry, it is not. When you ask it to update code, at least the way it works for me, it sends only the new block of code that it changed, not the whole class, to keep the response size down; it is really limited in the length of its response compared to ChatGPT. With all these Claude videos, I have to believe YouTubers are getting paid to shill for it. Also, Artifacts only works on a very small sample of types of code, so all this selling it like it's a viable thing is disingenuous.

    • @chrishayuk · 1 month ago

      GPT is better at many things especially narrative, q&a, summarization and generative content (see my video on multiheaded attention) but Claude is definitely better on code. Not being paid by anyone, you will notice that I switch off ads for all my vids and I never take sponsorships

  • @Leo-ph7ow · 1 month ago

    Great great content! Please, make a local finetune tutorial. Thanks again!

    • @chrishayuk · 1 month ago

      it's on the list, i promise

  • @bamh1re318 · 1 month ago

    Can you please give a tutorial on how to load private data, train/RAG/evaluate, and deploy an open-source model on watsonx or another online platform (AWS, Azure or Hugging Face)? Many thanks! BTW, Nemotron-4 broke down this noon (PST), maybe due to too many users. I was in line 771 with a simple question, and it gave out some sort of communication problem after two minutes of waiting.

    • @chrishayuk · 1 month ago

      Sure, will add to the backlog

  • @leeme179 · 1 month ago

    I believe you are correct that both Claude and Llama 3 are fine-tuned using a STaR-generated dataset, but this method still needs a ranker or a human to mark the correct answers; whereas from what I have read online, OpenAI's Q* is a combination of the "A* search" algorithm and Q-learning from reinforcement learning to self-improve, where the model generates 100 different answers and picks the best one, similar to AlphaCode 2 from DeepMind.

    • @spoonikle · 1 month ago

      It does not; there is no human marker needed. For example, you can use a series of prompts plus the dataset to judge aspects of the answers with really well-trained fine-tuners. You can even train a model to predict the human evaluation; then you just need human evals in a given domain until an evaluator model is ready. In addition, this incentivizes further investment in synthetic datasets. Finally, the best argument for this: a big model prunes the dataset to make a small model, which prunes the dataset for the next big model, and so on ad infinitum. The smaller model is cheaper and faster, which means you can prompt more data for the next big one, which will make the next improved small model.

    • @chrishayuk · 1 month ago

      Some folks use human feedback with RL and some folks use synthetic feedback. At the end of the video I talk about how it could be done with a mixture of judges, and I show how you could use Nemotron for your reward model. I will do a video on RL for this soon to cover the Q part

    • @testales · 1 month ago

      I still don't get how a pathfinding algorithm like A* can be utilized to find the best answer. I mean, it's not like navigating some terrain with exactly known properties. Maybe it's a thing in the latent space? So the explanation that this is a modified version of the STaR approach seems more plausible, but if so, then again it doesn't seem to be such a big thing.

    • @chrishayuk · 1 month ago

      I’m only covering the star part for now. I’ll cover the RL part in a later video

    • @GodbornNoven · 1 month ago

      @@testales Q* (Q-star) is a concept from reinforcement learning, a type of machine learning. In simple terms, it's a way to measure the best possible future rewards an agent can expect if it follows the optimal strategy from any given state. Think of it like a guide that tells you the best move to make in a game to maximize your chances of winning, based on all the possible future outcomes. Kinda like in chess.

  • @msssouza2 · 1 month ago

    Thanks for another great video, Chris. I've been through some LLM courses on Udemy but your channel is helping me to clear many doubts I have on the whole thing. I'm glad I found your channel. It's really the best on this subject. Congratulations. Marcelo.

    • @chrishayuk · 1 month ago

      Very kind, my rule is to try and always go one level below. It means that my vids are never short, glad the content is useful

  • @msssouza2 · 1 month ago

    Hi. I was looking for dozens of videos on how to make ReAct work on 7B models (to make a low cost Text to Sql solution) and the only video the answer my question so far is yours. Thank you. By the way, I'm from Rio and the current time is 09:42 AM

    • @chrishayuk · 1 month ago

      Lol, hello Rio, glad you like the example. My rule is to always go one level below and unveil the magic; glad it helped in this case

  • @srirammanda9697 · 1 month ago

    Thanks for making this video, wonderful explanation 👏 I followed your steps, but in my case the ReAct agent goes into a loop and times out. Kindly let me know how to handle this looping case. Thanks in advance!

    • @chrishayuk · 1 month ago

      The less powerful models can loop. More guidance on the patterns usually fixes it

  • @oleholgerson3416 · 1 month ago

    when I run the final example it prints the text but then segfaults. The code is identical to the example. What could that be?