Yi-1.5: True Apache 2.0 Competitor to LLAMA-3
- Added 22 May 2024
- In this video, we will look at the Yi-1.5 series of models just released by 01-AI. This release includes three models with sizes ranging from 6 billion to 34 billion parameters, trained on up to 4.1 trillion tokens. All models are released under the Apache 2.0 license.
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
|🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
website: www.01.ai/
Testing Yi-1.5: huggingface.co/spaces/01-ai/Y...
Huggingface: huggingface.co/01-ai/Yi-1.5-34B
huggingface.co/01-ai
huggingface.co/01-ai/Yi-1.5-9...
TIMESTAMPS:
00:00 Introducing the Upgraded YI Model Family
00:40 Overview of YI Model Specifications
02:39 Testing the YI Models: Setup and Initial Observations
05:16 Exploring YI Model's Reasoning and Deduction Capabilities
10:01 Assessing YI Model's Mathematical and Contextual Understanding
12:45 YI Model's Programming and Coding Proficiency
14:39 Final Thoughts and Future Prospects
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
Doesn't matter if one model is better than the other - I'm just glad to see many third-party models being released. Especially some of the open source ones and local LLMs. As much as I like what OpenAI has done, I don't want that to be the default standard for LLMs.
This is one of the reasons I love open source. Having options is always good.
This model rocks!
Nice!!! Can't wait to see it on Ollama so I can use it everywhere
It's already up on Ollama.
How do I get the same model you are using, Yi-1.5 in 16-bit precision?
Guys, what are you using these simpler models like 7B for? For me, even LLaMA 3 70B or GPT-4 is quite limiting, and smaller models are completely useless.
Check out WizardLM v2, Yi-6, OpenBuddy.
Not everyone can afford the hardware or pay to use them on hosted sites. Good for you if you can.
6:48 How would John have 2 brothers? That answer should be wrong.
Correct! Even our lecturer got confused 😅
This proves that the 'Prompt Engineering' channel is actually a bot!
Is this any good?😊
Pretty amazing: inference speed on my mid-range laptop (4 GB VRAM, 16 GB RAM) is 40 tokens per second. It's fast and smart.
@@Alex29196 Which version are you using? I have 6 GB VRAM and 16 GB RAM.
@@dadadies yi:6b The small one is running on Ollama with the PageAssist browser extension. By the way, PageAssist is a user interface that runs in your browser; it's a new UI that has become quite trendy recently.
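For anyone asking how to reproduce this setup, here is a minimal sketch using the Ollama CLI. The `yi:6b` tag is the one the commenter above reports; the exact tags for the 1.5-series models may differ, so check the Ollama model library before pulling.

```shell
# Download the small Yi model the commenter mentions (tag is an assumption;
# verify the exact Yi-1.5 tag in the Ollama model library).
ollama pull yi:6b

# Start an interactive chat session with the model in the terminal.
ollama run yi:6b

# Alternatively, query it via Ollama's local REST API
# (served on port 11434 by default; stream=false returns one JSON response).
curl http://localhost:11434/api/generate \
  -d '{"model": "yi:6b", "prompt": "Why is the sky blue?", "stream": false}'
```

Once the model is served locally like this, browser front-ends such as the PageAssist extension mentioned above can connect to the same Ollama endpoint.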
When I ask questions about Tibet, Tiananmen Square and Hong Kong... I get the following answer: 非常抱歉,我不能提供您需要的具体信息,如果您有其他的问题,我非常乐意帮助您。("I'm very sorry, I cannot provide the specific information you need. If you have other questions, I'd be very happy to help you.") Why is that?
The tricky question you asked is a bit silly, hence the "very sorry."
No you don't.
You know very well that these are sensitive topics, don't you?
@@ringpolitiet After a while using LLMs, you'll find a workaround for almost every question you have. LLMs are manipulable.
@@ZhechengLi-wk8gy It is a very important part of history; Tank Man is a HERO. The fact that it is censored is very evil. It is in no way a sensitive topic, other than to the guilty and evil Chinese government. That things like this would be censored in an AI model is troubling, and telling of whether you can trust the model or its creators (meaning you cannot).
I only want a long context and uncensored patched model....
AND THIS MODEL IS USELESS BECAUSE IT HAS BEEN "WOKE"