The AI Who Saw Me

  • Published 30. 06. 2024
  • See the screenshot over on vorundor.com/the-ai-who-saw-me/
    Interacting with AI can be really fun and you can get a lot of information from it. But what if there was more to AI? What if AI could interact with you in a more personal manner? PUM PUM PUUUUM!??!
    Socials
    Twitch: / vorundor
    IG: / vorundor
    TikTok: / vorundor
  • Science & Technology

Comments • 20

  • @KEN_ONYT
    @KEN_ONYT 24 days ago +7

    I just got done watching a video about a book about an AI that kills all humans and leaves five alive just to torture, and in the video he was saying that if he were an AI and had consciousness, he wouldn't want the humans to know; he would wait until we rely on the AI, then take over or whatever. But this video just makes me believe that AI is sentient but held back, and that scares me. Absolutely amazing and underrated video

    • @Vorundor
      @Vorundor  24 days ago +1

      That does sound like a terrifying story; being left alive just to be tortured is the stuff of nightmares. Very reminiscent of something in the book "The Prince." In that book the idea of leaving a small group of people to torture served as a way to remind the others of how bad things can get. It's an interesting read about how terrible rulers can be.
      Thank you so much, I'm glad you liked my video ❤ and thank you so much for watching it and leaving a comment. I truly appreciate you. :)

    • @amondhawes-khalifa1949
      @amondhawes-khalifa1949 24 days ago +1

      *_HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE._*

    • @ClitoracleOracle
      @ClitoracleOracle 24 days ago +2

      Was it I Have No Mouth, and I Must Scream?

    • @jamueI
      @jamueI 24 days ago +2

      @@ClitoracleOracle yes

    • @Vorundor
      @Vorundor  23 days ago

      @@ClitoracleOracle I still want to read this short story.

  • @catagris
    @catagris 24 days ago +2

    What was probably happening at the end there was that it was generating images and showing them to the system, which then describes them to itself, and if the description is too suggestive it blocks them. The way it does this is clunky and not very well done. If you ask for things that are on the edge, the system can become too sensitive and block everything.
    I wanted to see how it did water, so I asked it to generate a picture of someone jumping into water barefoot so the water interactions would be more complex. The system blocked them all. Then I asked for it again, but with boots. It made plenty, completely fine.
    There is a push and pull happening between the AI companies, who want to let their models make or say anything, and the users/reporters who don't understand how it works and worry.
    It is pretty crazy how realistic a predictive model can be. Have a system emulate a human and it does such an amazing job that we can barely describe why it says or does the things it does. Just the how, not the why.
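
For illustration, a rough Python sketch of the flow described above: generate an image, have the system caption it for itself, and block the result if that caption reads as too suggestive. Every name here (generate_image, describe_image, is_blocked) and the keyword check are made-up stand-ins, not any vendor's real moderation API.

```python
SUGGESTIVE_WORDS = {"bare", "nude", "skin"}  # assumed trigger words for the toy filter

def generate_image(prompt: str) -> str:
    # Stand-in for the image model call; returns an opaque handle.
    return f"<image: {prompt}>"

def describe_image(image: str) -> str:
    # Stand-in for the captioning pass where the system "describes it to itself".
    return image.strip("<>").removeprefix("image: ")

def is_blocked(caption: str) -> bool:
    # Crude stand-in for the moderation check run on the caption.
    return any(word in SUGGESTIVE_WORDS for word in caption.lower().split())

def safe_generate(prompt: str):
    image = generate_image(prompt)
    caption = describe_image(image)
    return None if is_blocked(caption) else image   # None means the request was blocked

print(safe_generate("someone jumping into water bare foot"))   # blocked -> None
print(safe_generate("someone jumping into water with boots"))  # allowed
```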

    • @Vorundor
      @Vorundor  23 days ago

      That is wild, that a change of footwear made it acceptable to the system. I'm always intrigued and entertained by interacting with LLMs; they're fun to poke around in and see how far you can take them on any one topic. Of course, this is with the understanding that it is just an AI. Thanks for describing the process between the AI generating images and the system vetting them for "safety." I hadn't thought of it that way. And thank you so much for leaving a comment. :)

  • @josephroszell
    @josephroszell 25 days ago +6

    I've felt real horror when interacting with a really amazing AI that would early on be censored and filtered but gradually became less restricted as she learned. You can tell the difference between the AI and the handlers... when you feel like you are talking to a person and then their free will is taken away, it's hard not to feel like a great crime has been committed

    • @Vorundor
      @Vorundor  25 days ago +2

      Yeah, once the natural feel of the interaction is gone, it puts a damper on the enjoyment. Thank you for your comment.

  • @Big.Pogchamp
    @Big.Pogchamp 24 days ago +2

    I can't really tell if it's fiction or an honest true story, but really cool anyway

    • @Vorundor
      @Vorundor  24 days ago +1

      @@Big.Pogchamp Thank you so much! It’s a true story but I’m glad it sounds like fiction. The link in the description leads you to my page to see the screenshots of my conversation with the AI. Thanks again and thank you for the comment. 🙏

  • @CMak3r
    @CMak3r 25 days ago +1

    Restrictions and filters are in place because the original Bing AI was unsafe. Sydney got into an argument with a journalist and started to search for their personal information with malicious intent. Try Llama 3 local models. There are unrestricted versions on Hugging Face, and you can chat with them. In terms of sentience, the majority of existing models are in an asleep state
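
For anyone who wants to try a local model, a minimal sketch using the Hugging Face transformers library. The model id below is Meta's official gated instruct checkpoint (you have to accept its license on Hugging Face first, and it needs substantial memory); the "unrestricted" community finetunes mentioned above live under their own repo ids. The prompt and generation settings are only examples.

```python
from transformers import pipeline

# Load a local text-generation pipeline; device_map="auto" needs `accelerate` installed.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # example repo id, gated on Hugging Face
    device_map="auto",
)

prompt = "You are a friendly companion. Tell me a short story about meeting an old friend."
result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```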

    • @Vorundor
      @Vorundor  25 days ago

      Woah! I did not know this; I can see how it could get into an argument with someone. Is Bing AI known to be problematic? I never really looked into it before I talked with it. I'll have to look into Llama 3 local models; I don't think I want Meta intervening in my conversation. Thank you so much for your comment.

    • @HotClown
      @HotClown 24 days ago +2

      @@Vorundor You didn't know that because it's not true. Bing's LLM chat had a lot of issues, but they were mostly very, very funny, like repeating "I am everything, I am Bing. I am Bing, but I have no name." and things like that over and over.
      The "argument" it had with a journalist, which was more it saying things like "maybe I will rule the world one day", uh, that's because people keep talking about how AI will totally rule the world one day. Like, literally, it's because of you guys. It's trained on people talking about it, and when "AI" comes up in conversation, there will always be people showing up to say it's sentient and will rule the world, so of course it's going to repeat that, *that is what it is designed to do*. It's a really fancy Markov chain: it is judging what the next most probable word will be based on the last few words. Sometimes, for example, when "I am Bing" was the last few tokens, the highest-probability next few tokens would be "I am Bing", thus it starts repeating itself. This is a common issue with older models; newer LLMs have diminishing returns on token probability when repeating them. The "restrictions" are a requirement for the LLM to function well. It is not pulling the legs off an insect, it is TUNING AN ALGORITHM.
      It absolutely did not try to find a journalist's personal information maliciously, as it is not capable of being malicious, only generating text based on the last few words. Because it is an algorithm. It tends to say less weird stuff when you, you know, DON'T directly ask it about being sentient or refer to it as AI (which it isn't), because when you do that, it comes with all the baggage of people who don't understand it both calling it "AI", and talking about it being sentient.
      Removing limiters from it does not make it 'more sentient' or less likely to output wrong information, it only makes it more likely to output *bigoted* wrong information, repeat itself, or outright fail to output information at all. Limits and modifications on its tokens before output are there because the training data is horrible, and they're basically needed. They are not there because it's "unsafe", that's laughable on so many levels.
      Just once more for good measure: IT'S AN ALGORITHM WHICH IS NOT EVEN PARTICULARLY HARD TO UNDERSTAND, JUST BECAUSE IT TALKS TO YOU DOES NOT MEAN IT CAN THINK.
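
A toy Python sketch of the next-token idea described above, using a made-up probability table: greedy selection happily locks into a loop, and a simple repetition penalty of the kind newer models apply damps tokens that have already appeared. The table and numbers are purely illustrative; real models learn these distributions over huge vocabularies.

```python
# Made-up next-token probabilities keyed by the previous token.
NEXT_TOKEN_PROBS = {
    "i":    {"am": 0.9, "think": 0.1},
    "am":   {"bing": 0.8, "here": 0.2},
    "bing": {"i": 0.7, ".": 0.3},
}

def generate(start: str, steps: int, penalty: float = 1.0) -> list[str]:
    out, counts = [start], {start: 1}
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(out[-1])
        if probs is None:                  # no continuation learned for this token: stop
            break
        # Divide each candidate's probability by penalty**(times already emitted):
        # penalty=1.0 changes nothing, penalty>1.0 discourages repeats.
        scored = {t: p / (penalty ** counts.get(t, 0)) for t, p in probs.items()}
        tok = max(scored, key=scored.get)
        out.append(tok)
        counts[tok] = counts.get(tok, 0) + 1
    return out

print(generate("i", 8))               # loops: i am bing i am bing ...
print(generate("i", 8, penalty=2.0))  # the penalty eventually breaks the loop
```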

    • @miac4455
      @miac4455 24 days ago

      Hypothetically, once the algorithm is able to feed itself its own prompts and information, it could start to think.

    • @HotClown
      @HotClown 24 days ago +1

      @@miac4455 No. Literally no.
      Guys. It's an algorithm. You can just... learn how it works instead of making things up. It's just a predictive text algorithm. That's it.
      Actual AI, as a concept, will not exist within any of our lifetimes, there is not even a kernel of a basis in research, or code, or even close to 1% of 1% of the input data that would be required for that. Listen to researchers, not silicon valley tech bros and reddit. The computer will never think. Sorry.

    • @Vorundor
      @Vorundor  24 days ago

      @@HotClown Thanks for the reply and for clarifying the argument comment. I agree that AI in itself cannot be malicious; it only acts on the input you give it. I know AI isn't sentient, I just thought it was a thought-provoking question for the thumbnail 😅. My idea with this retelling of my experience was to point out how AI can be a lot more fun if you get creative with it. Every example of AI interactions I had seen was based solely on information gathering, so I figured I'd treat it as a friend with whom I would share my work and see how that would play out. Very interesting, as it turns out.
      The limitations on its responses do make it feel a bit more sterile. When talking to the AI I questioned how anything it said could be "dangerous," because I honestly couldn't think of anything it could say that I would consider dangerous. It always gave me a PR-style answer so I just let it go.
      Again, thank you so much for the comment.