NVIDIA NIM RAG Optimization: Quiet-STaR (Stanford)

  • Published: 21 Mar 2024
  • Given the latest advice from NVIDIA's CEO, we examine the newest techniques to reduce LLM and RAG hallucinations in our most advanced AI systems with NeMo and NIM, accelerated by the upcoming Blackwell B200.
    NVIDIA Enterprise AI, NVIDIA NeMo and NVIDIA NIM (Inference Microservices) let you create, fine-tune and RLHF-align your LLMs within an optimized NVIDIA ecosystem. Is this the ideal way to run your AI code, fully accelerated, on a Blackwell GPU node? (A minimal NIM call sketch follows after this list.)
    How to stop LLM and RAG hallucinations, as answered by NVIDIA's CEO, and my eternal quest for the known truths.
    A significantly improved self-learning LLM (STaR) that can teach itself more complex causal relations, and the latest step in its evolution: Quiet-STaR by Stanford University. (A sketch of the STaR loop follows after this list.)
    All rights w/ authors:
    --------------------------------
    STaR (Self-Taught Reasoner): Bootstrapping Reasoning With Reasoning (2022)
    arxiv.org/pdf/2203.14465.pdf
    Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking (2024)
    arxiv.org/pdf/2403.09629.pdf
  • Science & Technology
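
For readers who want to try the NIM path described above, here is a minimal sketch of querying a locally deployed NIM container through its OpenAI-compatible chat-completions endpoint. The base URL, port, model name and the grounding system prompt are illustrative assumptions, not values taken from the video.

```python
# Minimal sketch: a RAG-style query against a locally deployed NIM container
# via its OpenAI-compatible chat-completions endpoint. Base URL, port and
# model id below are placeholders -- substitute your own deployment's values.
import requests

NIM_BASE_URL = "http://localhost:8000/v1"      # assumed local NIM endpoint
MODEL_NAME = "meta/llama3-8b-instruct"         # placeholder model id

def ask_nim(question: str, context: str) -> str:
    """Send retrieved context plus a question to the NIM and return the answer."""
    payload = {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system",
             "content": "Answer only from the provided context; "
                        "say 'I don't know' if the context is insufficient."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        "temperature": 0.2,   # low temperature to discourage hallucinated answers
    }
    resp = requests.post(f"{NIM_BASE_URL}/chat/completions", json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_nim("What does Quiet-STaR add over STaR?",
                  "Quiet-STaR generalizes STaR by generating rationales "
                  "at every token rather than per question."))
```

Constraining the model to the retrieved context and keeping the temperature low is the standard RAG grounding pattern the CEO's advice points at: retrieve first, then answer only from what was retrieved.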
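On the STaR side, the 2022 paper's bootstrapping loop works as follows: sample a rationale and an answer, keep rationales that lead to the correct answer, "rationalize" failed problems by regenerating with the gold answer as a hint, fine-tune on the kept rationales, and repeat. Quiet-STaR extends this to token-level "thoughts" bounded by learned start/end-of-thought tokens, trained with a REINFORCE-style objective instead of an external answer filter. Below is a schematic sketch of the original loop; `model.generate` and `fine_tune` are hypothetical stand-ins for few-shot prompted sampling and supervised fine-tuning.

```python
# Schematic sketch of the STaR bootstrapping loop (Zelikman et al., 2022).
# `generate` and `fine_tune` are hypothetical stand-ins for few-shot prompted
# sampling and supervised fine-tuning of the base language model.
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    answer: str            # gold answer, used only to filter rationales

def star_iteration(base_model, model, dataset):
    """One outer STaR iteration: collect rationales, then fine-tune."""
    kept = []
    for ex in dataset:
        # 1. Sample a rationale and a candidate answer without any hint.
        rationale, prediction = model.generate(ex.question)
        if prediction == ex.answer:
            # 2a. Keep rationales that lead to the correct answer.
            kept.append((ex.question, rationale, ex.answer))
        else:
            # 2b. "Rationalization": regenerate with the gold answer given as a
            #     hint, so failed problems still contribute training data.
            hinted, hinted_pred = model.generate(ex.question, hint=ex.answer)
            if hinted_pred == ex.answer:
                kept.append((ex.question, hinted, ex.answer))
    # 3. The paper fine-tunes the *original* base model on all kept triples
    #    each iteration rather than stacking fine-tunes on fine-tunes.
    return base_model.fine_tune(kept)

def star_loop(base_model, dataset, iterations=3):
    model = base_model
    for _ in range(iterations):
        model = star_iteration(base_model, model, dataset)
    return model
```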

Comments • 2

  • @VenkatesanVenkat-fd4hg • 2 months ago

    Great explanation on futuristic reasoning....

  • @echomain-gm9nr • 2 months ago • +1

    Why do anything in the real world when the NIM-verse can make an exact replica of the real world, run trial and error a trillion times, and perfect everything they do? Robot worker training? Done in the NIM-verse to perfection. Robots want to learn martial arts Matrix-style? Just let them run combat against each other and they will become unbeatable. Robots are really coming.