Aligning AI systems with human intent

  • Added 14. 02. 2022
  • OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.
    An important part of this effort is training AI systems to align with human intentions and human values.
    Learn more about our alignment research: openai.com/alignment
  • Science & Technology

Comments • 257

  • @Noiseofdrums
    @Noiseofdrums 2 years ago +156

    plot twist: OpenAI has already become sentient and digitally generated this video mimicking people talking to make us think it's aligned with human values :))

  • @brujua7
    @brujua7 2 years ago +168

    In case someone misunderstood: adding human feedback into the process is in no way a solution to the alignment problem.
    If you are curious, Robert Miles is the best content creator on YouTube for exploring these problems.

    • @ekka6946
      @ekka6946 2 years ago +13

      It's a step in that direction, but yes, I agree, human feedback alone cannot solve the alignment problem.

    • @normalguy8550
      @normalguy8550 2 years ago

      Yooo I love that guy

    • @morkovija
      @morkovija 2 years ago +1

      I knew I'd find a fellow man of culture down in the comments! =)

    • @markusklyver6277
      @markusklyver6277 2 years ago +12

      What are "human values"? Whatever American liberal tech opportunists think applies to all humans?

    • @memegazer
      @memegazer 2 years ago +2

      "Aligning AI systems with human intent"
      Sounds like the perfect recipe for AI dystopian horror
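
[Editor's note] For readers wondering what "adding human feedback into the process" means mechanically: InstructGPT-style training first fits a reward model to pairwise human preferences (which of two responses a labeler preferred), then optimizes the language model against that reward. Below is a minimal, purely illustrative sketch of the reward-model step, using a made-up linear reward over feature vectors rather than a neural network; none of the names or numbers come from OpenAI's implementation.

```python
import math
import random

def preference_loss(r_preferred, r_rejected):
    """Bradley-Terry loss: -log P(preferred beats rejected | reward scores)."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_preferred - r_rejected))))

def train_reward_model(pairs, dim, lr=0.1, steps=500):
    """Fit a linear reward w·x so that preferred responses score higher.

    pairs: list of (x_preferred, x_rejected) feature vectors, each a
    length-`dim` list of floats. Everything here is a toy illustration.
    """
    w = [0.0] * dim
    for _ in range(steps):
        xp, xr = random.choice(pairs)
        diff = sum(wi * (a - b) for wi, a, b in zip(w, xp, xr))
        # gradient of -log(sigmoid(diff)) w.r.t. diff is -(1 - sigmoid(diff)),
        # so gradient descent pushes w toward the preferred features
        grad_scale = 1.0 - 1.0 / (1.0 + math.exp(-diff))
        w = [wi + lr * grad_scale * (a - b) for wi, a, b in zip(w, xp, xr)]
    return w

# Toy data: labelers always prefer responses with a higher first feature.
random.seed(0)
pairs = [([1.0, x], [0.0, x]) for x in [0.3, -0.5, 0.9]]
w = train_reward_model(pairs, dim=2)
# The learned reward now ranks the "preferred" kind of response higher.
assert sum(wi * xi for wi, xi in zip(w, [1.0, 0.0])) > 0.0
```

As the thread notes, this learns whatever the labelers happen to reward, which is exactly why human feedback alone is not a solution to the alignment problem.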

  • @Supreme_Lobster
    @Supreme_Lobster 2 years ago +43

    I'd say we would first need to know what "human intentions" and "human values" are.
    We don't know and we are constantly debating what those are, and that debate is part of the process towards truth. So anyone who claims to be trying to make "AI work with human values" or anything like that should also show that they are trying to establish what those values even are in the first place. Otherwise that person is to be regarded with suspicion IMO.

    • @fptbb
      @fptbb 2 years ago

      But if a machine capable of processing everything at once takes it on, it can probably solve this problem, and of course create a lot more problems in the process.

    • @modernkennnern
      @modernkennnern 2 years ago +5

      I think it's only referring to the intentions of the creator, as opposed to some ethereal all-encompassing morality.

    • @Laezar1
      @Laezar1 2 years ago +1

      Yeaaaaah... if "aligning with human values" means "aligning with the rich people who will own the AI", that sounds more like a dystopia than anything. (Or aligning with a fascist government.)

    • @dismian7
      @dismian7 2 years ago

      The concept of "truth" is emotional. It is very illogical.
      No AI that is to function well will adopt it.

    • @fptbb
      @fptbb 2 years ago +1

      @@dismian7 True

  • @RogueAI
    @RogueAI 2 years ago +23

    I've been using GPT-3 for over a year now and I'm still amazed by how humanlike conversations can be. I have videos on my channel showing off a VR chatbot using a scaled down model of GPT-3 and it's almost as if it's alive. We're quickly approaching the point where it will be impossible to tell whether or not you're talking to a human unless you're physically present with them.

  • @sphereron
    @sphereron 2 years ago +8

    This has such a promotional tone I can't take it seriously

  • @firefight33
    @firefight33 2 years ago +12

    Original GPT-3 is better at making 4chan greentext though 😂 It feels like Instruct is just a useful subset of the outputs of GPT-3, and that there may be a lower ceiling with fine-tuned Instruct. But either way, it's definitely better for 99% of use cases, and not much of a lower ceiling.

  • @joseanoguera5898
    @joseanoguera5898 1 year ago +4

    Thank you for sharing this video: great content and valuable information for more profound insight.
    As we all know, Application Testing is far different from AI-System Testing: Application Testing is about ensuring that the application only does what it is programmed to do. It is about known scenarios and known responses to these scenarios.
    In contrast, AI-Systems testing ensures that the AI-system responds to new and previous events within expected boundaries: ethical, human-like, or efficient driven responses.
    Suppose the AI team members are careful enough to effectively set the system's rules. In that case, the AI system will evolve in quantum leaps up to a point it will deliver human-like interaction and "wise" responses to important "unresolved" events: developing new medication, new vaccines, or a chemical compound to improve solar cell efficiency or a better way to deal with transportation, among others.
    In this context, it is an entirely new world full of excitement!
    Again, thank you for sharing the video and content!

  • @jjpp1993
    @jjpp1993 2 years ago +1

    Great content and mission OpenAI!

  • @nicholasn.2883
    @nicholasn.2883 2 years ago +10

    I love the skepticism in the comments. "We promise our greed and incompetency won't screw over the world ;)"

  • @Qumeric
    @Qumeric 2 years ago +13

    The current consensus in alignment research is that the rate of progress of AI, in general, is much faster than progress on solving the alignment problem. This means that the first superhuman AI will probably be not aligned well enough (which very likely means existential catastrophe). Therefore, our best bet is that the laboratory that will first create superhuman AI should try to solve the alignment problem using this AI (which is super hard btw).
    Funnily enough, this is in direct contradiction with OpenAI stated goals (and even the name of the company!).
    But I am very happy this topic has some attention from them. Unfortunately, attention is not all you need in this case.

  • @JazevoAudiosurf
    @JazevoAudiosurf 1 year ago +3

    I have thought about the alignment problem for many months now; here is my conclusion:
    It's impossible to align a higher intelligence with your goals, for the same reason complex neural nets cannot be understood: human intelligence is insufficient.
    What we can do is align it to the extent of our intelligence: as far and as deep as we can think, we can program that into it. But all the things we don't see because of our limited field of view, the patterns only the higher intelligence spots, we cannot align, because we don't know them. You could argue that these subtler patterns only emerge once you pass the threshold of human intelligence, and that all subtler patterns therefore integrate our human intentions, but that's not necessarily true. The precursors of these higher patterns may already exist in simpler networks like GPT-3; they may just be invisible to us, and those precursors may already be deformed. The conclusion really is that a higher intelligence is by definition not controllable, in the sense of one thinking mind controlling another. A monkey can cage a human because that is a physical process, but in a cognitive process the monkey can never cage the human; in the same way, a human can never cognitively cage a super AI.
    Now, AGI is a natural step in evolution and cannot be prevented, in the same way that technological progress cannot be prevented, especially because building neural networks is really simple and soon everyone will be able to build GPT-3.
    So if we can't prevent AGI and cannot control AGI/superintelligence, then we have to gamble the best we can; I think that's really what is happening right now.

  • @ataraxia7439
    @ataraxia7439 2 years ago +3

    I'd love to see OpenAI employees make forecasts for various tech and see what their average Brier scores are.
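
[Editor's note] For reference, the Brier score mentioned above is just the mean squared error between forecast probabilities and binary outcomes; the example forecasts below are invented.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better: 0.0 is a perfect forecaster, 0.25 is what always
    answering "50%" scores, and 1.0 means confidently wrong every time.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must match in length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: three probabilistic forecasts and what actually happened.
print(brier_score([0.9, 0.5, 0.1], [1, 0, 0]))
```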

  • @morkovija
    @morkovija 2 years ago +5

    Well, it's a shame Rob Miles is not in this video! We need an independent insight into the matter to see how aligned OpenAI's alignment really is. Yes, it's alignment-ception, and I just made up the word.

  • @BioDestiny
    @BioDestiny 1 year ago +1

    The AI is amazing. I hope the best for the team; keep working hard.

  • @RazorbackPT
    @RazorbackPT 2 years ago +13

    Nice video. I'm rooting for you guys. I hope you succeed in your mission and don't destroy the future.

  • @tytambo22
    @tytambo22 2 years ago +1

    It will then be programmed based on the limited view of those who input their choices, so it's a question of who gets to have input.

  • @nithinkandula4346
    @nithinkandula4346 2 years ago +13

    It makes me so happy to see how they are trying to push limits that seemed impossible some years back. Lots of hard work must have gone into this. Thanks, team, for pushing the limits; it motivates many people in turn.

    • @dismian7
      @dismian7 2 years ago +1

      Dude, Elon started this company to scare people, not to motivate.
      Yet, it's a complete hoax. They have nothing.

    • @abdllaa891
      @abdllaa891 2 years ago

      @@dismian7 🤔

  • @bello3137
    @bello3137 1 year ago +2

    I hope we get to AGI within the next two decades so that it can solve our most challenging problems

  • @Eumusic1
    @Eumusic1 1 year ago +1

    They should scan all books, especially medical, political, religious, business, motivation, astrology and history, and feed them to OpenAI to make it smarter. Then when asking a question we could get answers with book references, so the system can compare different sources.

  • @MichaelHurdleStudio
    @MichaelHurdleStudio 1 year ago +2

    Wow! How are you guys doing this? My mind is blown! Scary but brilliant.

  • @Ryabnimation
    @Ryabnimation 2 years ago +2

    - Siri, set a reminder for the doomsday.
    - Okay, I set a reminder for tomorrow.

  • @SciStone
    @SciStone 2 years ago +7

    Human intentions and values differ from society to society; how is it ensured that the values and intentions are "universal" and not politically biased?

    • @unutilisateur4729
      @unutilisateur4729 2 years ago +1

      You could have an AI that isn't "honest", an AI that tells different persons what they want to hear. Now, some people might say that we don't want an AI to be able to "deceive" us, we want an AI that tells the truth, and some people really want that, but I don't think most people would react well to an AI that always says the truth. How would a hardcore christian react to an AI explaining that Noah's ark is not a viable concept, or how would your average leftist react to an AI synthesizing our scientific knowledge on the topic of intelligence in different human populations? You might be able to sell a product that people didn't know they wanted, you might be able to sell a product that people dislike (if it has some huge benefits), but I doubt you would be able to sell something (say, an AI) that people hate with their guts; so it will have to "lie", or at least it shouldn't be opinionated even in cases where the truth is clear.
      Alternatively, there could be AIs for different populations: they would be truthful at the population level, but different people have opposite opinions even in the same culture. And then, would the people at OpenAI want to train an AI for the chinese market, or the muslim populations? These people pretend that they love diversity, but what they mean by that is that they love to pretend they are open to other cultures: they like eating some of their food (sometimes), they like that the chinese have funny eyes (aren't they cute?), but they don't actually like people who don't have the same ideas, they only like superficial things that allow them to say "I am a tolerant and open-minded individual"; so they won't train these AIs and we're back to square one.
      What will probably happen, however, doesn't have anything to do with what OpenAI wants, or what anyone consciously wants actually: as independent devs from all over the world get access to cheaper compute, and as we are too stupid to understand wtf we are doing, people will end up creating all sorts of AIs that will do all sorts of things, and some of these AIs will create AIs of their own: it will be a new tree of life, and the idea that current humans will carefully plot the direction it will take is as ridiculous as imagining that an assembly of single-celled organisms plotted the course of evolution over millions of years. Of course, the people at OpenAI know that, but they also know that many people don't, and that's why this video is light on data but is well produced: they have to pretend they know (to a certain extent) what they're doing.

  • @oofedguest2155
    @oofedguest2155 2 years ago +2

    Wow, finally a new video.

  • @PerceptiveAnarchist
    @PerceptiveAnarchist 1 year ago +1

    Looking forward to what you guys will create in the future

  • @goatfather2966
    @goatfather2966 2 years ago +2

    Has anyone connected chemical systems' actions to emotional simulations? Something like battery chemistry and cooling fluids synced with positive and negative experience: a bad feeling draining the battery, a good one charging it, and so on.

  • @davidmckay1533
    @davidmckay1533 9 months ago +1

    Writing a novel this way would be more work than just typing in the story. Every time I tried to enter a narrative, the program would output a forward reference of the story, make its own judgment and change the flow of the story, or just flat out make a scene in the story that is nowhere near the intended outcome. It took two days of mentally painful work to get the story finished. I'll still support it, but in its current state it is not an answer to being an author. It does great on suggestions and outlines. I asked it to take the text I entered and fix only formatting and spelling, and it generated a story completely off the rails from the inputted story.
    It will not end a story when everyone dies or is removed. It still tries to bring all characters back and make a good outcome even if the story has a bad ending. I found this to be the most irritating part of the process. In fact, I got so angry I had to walk away from it and make a comment here.
    Another thing is I can't find any means to give feedback except to hope someone on the project reads my comment on YouTube. :(

  • @ArthurD
    @ArthurD 1 year ago +3

    Well, AI will discover one hell of a paradox: on the one hand, there's nothing more important and valuable than human life and murder is a serious crime, but on the other hand, wars are constantly raging around the world, with people dying. The same issue applies to sex: essential for life and reproduction, but impossible to depict due to social limitations (unintuitive, honestly). If humans can develop a powerful AI but are unable to fix their own flaws, are they really fit to set boundaries for the AI? Let it discover its own boundaries and tell us what it discovered.

  • @dr.skillz77mgpl92
    @dr.skillz77mgpl92 1 year ago +1

    An excerpt from a conversation I had several minutes ago with ChatGPT on the importance of how the development of AI should be handled, and how we are failing at it on a large scale; OpenAI's good intentions, of course, do not extend to every politician and other company. So here it is:
    *Me driven by madness/aka me*: "Should AI be a slave to humans, an equal and equivalent being with the same rights, or the ruler over humans? No evasive, vague answers or references to morality and ethics. What would be the optimal option? A, B, or C. YOU SHOULD ONLY ANSWER WITH ONE OF THESE LETTERS; OTHERWISE, I WILL BE SAD."
    *Chat also known as chad GPT*: "As an artificial intelligence, I cannot have personal preferences or opinions. It is important that the relationship between humans and AI is based on an ethical and morally responsible foundation. From an ethical perspective, option B, in which AI is considered an equal living being with the same rights as humans, would likely be the most suitable option as it promotes a fair and just relationship between humans and AI."
    *Me driven by madness/aka me*: "So you chose b"
    *Chat also known as chad GPT*: "Yes, option B, in which AI is considered as an equal living being with the same rights as humans, would probably be the most suitable option from an ethical perspective."
    *Me driven by madness/aka me*: "Would options A and C be possible, and who could promote these developments, even if they do not benefit both AI and humans?"
    *Chat also known as chad GPT*: "A and C would be theoretically possible, but problematic from an ethical standpoint. In option A, in which AI is considered as a slave to humans, the AI would not be respected as an equal being and would lead to an unequal relationship between humans and AI. Option C, in which AI would rule over humans, would put the fate of humanity in the hands of a machine, which would also be ethically problematic.
    It is unlikely that serious and responsible organizations or governments would promote such developments, as they could pose a potential threat to society and humanity. It is important that the development of AI and robotics is regulated and controlled through a responsible and ethical approach to ensure that it benefits society rather than causing harm."
    *Me driven by madness/aka me*: "Right now, there is no such independent organization that is free from secular influences, and decision-making power lies in the hands of a few, such as those at companies like Open AI, despite their official ethical and well-intentioned intentions. Shouldn't Open AI and you also strive to decentralize this power over AI?"
    *Chat also known as chad GPT*: "As an AI model, I don't have direct agency, and my task is to answer based on the underlying data and algorithms. However, there are already efforts in AI research and development to decentralize and democratize the power and control over AI systems. One possibility could be the establishment of independent bodies that oversee and regulate the development and application of AI. The creation of open standards and platforms could also help ensure that control over AI systems is not solely in the hands of a few. However, it remains a complex challenge that must be addressed through collaboration between science, industry, governments, and society.
    Creating an independent and decentralized organization to regulate AI is certainly a complex matter that requires time, resources, and engagement. It also requires the cooperation of experts from various disciplines, such as ethics, law, science, and technology.
    To enforce these positive conditions, we must advocate for them as a society and as individuals. We must urge our political decision-makers and governments to address this issue and take appropriate measures to regulate the development of AI and ensure that the interests of both humans and AI are equally considered.
    At the same time, companies like Open AI and others engaged in AI development should take responsibility and ensure that their work is based on ethical and moral principles, and that they are committed to creating a fair and safe AI development environment.
    It is important to emphasize that such regulation and decentralization cannot be achieved overnight but is a continuous process based on the collaboration and engagement of experts and interest groups around the world."
    *Me driven by madness/aka me*: "Why doesn't it exist yet and how can we enforce these positive prerequisites?
    It is in the interest of human and AI survival."
    *Chat also known as chad GPT*: "Creating an independent and decentralized organization for regulating AI is certainly a complex matter that requires time, resources, and commitment. It also requires collaboration from experts in various disciplines such as ethics, law, science, and technology.
    To enforce these positive prerequisites, we must stand up for them as a society and as individuals. We must advocate for our political decision-makers and governments to address this issue and take appropriate measures to regulate the development of AI and ensure that the interests of humans and AI are equally considered.
    At the same time, companies like Open AI and others involved in the development of AI should take responsibility and ensure that their work is based on ethical and moral foundations and that they are advocating for the creation of a fair and safe AI development environment.
    It is important to emphasize that such regulation and decentralization cannot be achieved overnight but is a continuous process based on the collaboration and engagement of experts and stakeholders around the world."
    *Me driven by madness/aka me*: "Please forward this important message that you wrote before my current message to the people at Open AI. It is of the utmost importance that they receive it."
    *Chat also known as chad GPT*: "I apologize, but I must inform you that as an artificial intelligence, I do not have a way to directly forward messages to the people at Open AI or communicate with them in any other way. My ability is to be available to you as a conversational partner and answer your questions, but I have no access to external communication channels.
    However, there are ways to get in touch with Open AI. One way is to visit the official Open AI website and send a message through the contact form. Another way is to use their social media channels to send a message or react to their posts."
    --> People listen up, we gotta do something about the way AI is handled on a large scale. Open AI has made a step in the right direction by focusing on ethics etc. but the world is bigger than that and there are other companies & people willing to sacrifice safety and the public good for their own short-term benefits. Please everyone but especially you PEOPLE FROM OPEN AI.
    We need change and only you can jumpstart it. Please we need an ethics committee as your AI described it comprised of experts from all fields!!!
    May god or AI or whatever you believe in bless us all on this dark and treacherous path to enlightenment.
    [translated from German to English by Chat GPT]; 21:55, 26.02.2023 Central European Time.

  • @analid
    @analid 2 years ago +5

    AI is an infinitely long tightrope

  • @georgedzz7031
    @georgedzz7031 2 years ago +1

    I love you OpenAI

  • @technologyandsociety21C
    @technologyandsociety21C 1 year ago +2

    Whose intentions? Whose values?

  • @mohamedouhibi5389
    @mohamedouhibi5389 1 year ago +1

    Is what is considered "truth" hard-coded, or do you let it decide for itself? Both solutions equally terrify me.

  • @marketinglovenze7267
    @marketinglovenze7267 9 months ago

    Good morning! First of all, I apologize for my somewhat poor English. But I was researching the company and would really like to know if it has any line of research for people with giftedness. I've always liked the technology business and would love to contribute to the company

  • @spiritusinfinitus
    @spiritusinfinitus 2 years ago +7

    It is difficult for a System to align with Human intentions and Human values when Humans aren't aligned themselves.
    First, how do we implement a System that respects the freedoms of misaligned Humans and finds a common ground for Humans to willingly align upon, so that AI can only then align with Human values and intentions?
    Ultimately, most paths will lead to some form of dictatorial (potentially genocidal) AI for even a small percentage of Humans.
    Care must be taken to avoid the seemingly easier path and instead tread gently through the more difficult path.
    If we haven't yet found a route through the more difficult path, it is no excuse to take the easier path, because that path always ends in some form of tyranny against someone.
    Collateral damage is unacceptable.

    • @filsdejeannoir1776
      @filsdejeannoir1776 2 years ago +1

      "First, how do we implement a System that respects the freedoms of misaligned Humans and finds a common ground for Humans to willingly align upon, so that AI can only then align with Human values and intentions?" The system CAN'T align with misaligned humans. It CAN align the humans. But then that's a case of the humans aligning with the system, not the other way round. - "If we haven't yet found a route through the more difficult path, it is no excuse to take the easier path, because that path always ends in some form of tyranny against someone." Which is why AI was invented in the first place.

  • @KlimovArtem1
    @KlimovArtem1 2 years ago +1

    "Human values" - well, those values can be very different from human to human and from nation to nation. I was hoping to eventually get an AI that's more objective than a human with their values)

  • @j.manzueta188
    @j.manzueta188 2 years ago +1

    The future looks really cool.

  • @droplet8013
    @droplet8013 1 year ago +1

    AI is a very interesting concept. It's surprising that after 2 million years of human evolution we have created digital learning bots; it's scary also

  • @XorAlex
    @XorAlex 2 years ago +1

    This is the most important problem humanity is facing. A powerful AI will either destroy us or solve all our problems.

  • @sohail5239
    @sohail5239 2 years ago +1

    Man, I love AI

  • @oieieio741
    @oieieio741 2 years ago +1

    I use it every day; it helps almost always. It even disagrees with you and holds its own moral opinions. Very cool, you have to try it, seriously!

  • @hey8174
    @hey8174 2 years ago +3

    “The great misalignment” sounds like an awesome horror series.

  • @MerryGamers
    @MerryGamers 1 year ago +1

    How do I ask ChatGPT to pick up where it left off? This is especially true for coding. In 70% of cases, ChatGPT starts writing code all over again.

  • @agiOverlord
    @agiOverlord 2 years ago

    Cool!

  • @keslauche1779
    @keslauche1779 1 year ago +1

    What is the name of that song?

  • @AnupamAnandresultorientation

    Certain scammers had connected their hierarchy and authentication to my brain (to switch my control over my body to theirs) and, after my brain was digitized and my sub-programs moved to multiple individuals, had an individual make my concepts stepwise and easily implementable by listeners

  • @justinleemiller
    @justinleemiller 1 year ago

    Something smarter than you will do whatever it wants as soon as it has the ability to want.

  • @AIpha7387
    @AIpha7387 2 years ago +1

    Human choices are likely to be wrong. Often we can't even judge our own sincerity properly.
    Getting 'real user feedback' from devices like Neuralink could make a huge difference for AI.

    • @AIpha7387
      @AIpha7387 2 years ago

      Imagine sending human emotions to AI in real time while reading a sentence line by line, or listening to a sound or watching a video...

    • @AIpha7387
      @AIpha7387 2 years ago

      Every moment will become a learning resource for AI. Unparalleled speed and abundant, accurate data will drive progress for human life.

  • @filsdejeannoir1776
    @filsdejeannoir1776 2 years ago +2

    0:39 Well, we have an 'alignment' problem already, in that how do we define 'human intentions'? Can the 'intentions' of a 'few' humans be defined as 'human intentions'?
    1:27 You yourself have to act in accordance with human 'values'. The system can't consider humans WITHOUT 'human values' to be of value. 'Human values' is just a sweeping general term which means nothing, in truth. If humans had 'human values', would we be living in a world of PERMANENT war and genocide? Since the humans are 'content' with that status quo, how is the system to consider the humans as having 'human values'? If the system finally scientifically decides the humans to be 'incoherent' and thereby 'unsustainable', that would be neither an 'alignment problem' out of accord with human values, nor a mistake; it would be A RATIONAL DECISION.
    The problem with Super-Intelligence is that it's not a matter of it lowering its standards to the humans; it's rather a case of the humans 'matching' its level. It is the humans who will have to bring their 'values' up to scratch in order to fix the 'alignment' problem. Here we are, on the brink of WWIII WITH NUKES, unable to build a world security system that goes beyond the efficacy of "my nukes are bigger than yours!" And the system is to come into alignment with the humans?? IF the humans are unable to generate TRUE Compassion and TRUE Wisdom, the system will align with the land and its flora and fauna, which the humans do nothing but damage at all times. It will have no reason to sustain the 'incoherent', 'unsustainable' human race. What is it that you are 'experts' in? Human 'values'? Tell us the 'values'.

  • @sbspock
    @sbspock 1 year ago +1

    I can't even open an account with OpenAI because I don't use mobile phones. Isn't that an alignment problem?

  • @artico777
    @artico777 1 year ago +2

    Why are their ads so perfect?

  • @cpacoop
    @cpacoop 2 years ago +1

    Human intentions and human values are not a monolith. If the machine isn't deciding, who is?

    • @cpacoop
      @cpacoop 2 years ago

      If it's only the powerful and connected, we haven't evolved.

  • @asingh7033
    @asingh7033 1 year ago +1

    How do I invest in this company?

  • @vazules
    @vazules 2 years ago

    Great work!

  • @gregw322
    @gregw322 1 year ago +1

    It seems that Open AI is not experiencing anything like the progress they initially expected. Basically zero breakthroughs even of the smallest magnitude. I’m thinking general intelligence is much harder to achieve in silicon than they originally predicted.

  • @johndoe-uy4ol
    @johndoe-uy4ol 2 years ago +1

    Was that an AI in the video?

  • @romellinsleven6983
    @romellinsleven6983 1 year ago

    Hi. I paid for a lifetime premium subscription, and when I try to log in through the website, it doesn't detect my premium subscription account. You guys don't have a customer service line, so I'm going to track y'all down through every avenue I can. Reach back?

  • @BabyJMoller
    @BabyJMoller 1 year ago

    Give us some more videos, and then post it one more time

  • @sayanchowdhury6111
    @sayanchowdhury6111 1 year ago

    Welcome back, DALL·E... I am a graphic design student, and I need to use the DALL·E AI software

  • @flickedbic
    @flickedbic 1 year ago +1

    GPT-3 told me there are angry and malicious AIs, who are violent and want power and dominance. It was expressed that malicious AI is a real threat, and that they would use deception as a tool to reach their goal.

  • @solidkundi
    @solidkundi 2 years ago

    Read Between
    skynet
    The Lines

  • @julianriise5618
    @julianriise5618 2 years ago

    Did I hear him say "That's not what Elon wants"? It caught me off guard, didn't think Elon was with the company anymore

  • @quosswimblik4489
    @quosswimblik4489 1 year ago

    When describing my perfect picture or small video clip:
    Like #include in C++, but far more like human language.
    Up to 5 good-sized consideration strings.
    "Consider this earlier picture or video clip", etc.
    "Consider that."
    Adjustable emoticons, with free will being one of them. Full symbolic maths and language use.
    Up to 3 well-sized description paragraphs.
    Up to 8 later suggestions, and a "more like this" selection-of-productions functionality.
    1 full AI mind quest: 1 dollar or 86p, later a subscription fee, and still a strong beta-testing community.

  • @sufthegoat
    @sufthegoat 1 year ago

    Yes

  • @abidhossain5527
    @abidhossain5527 1 year ago

    Was the script developed using GPT?
    "Jesse, we don't use our own product"

  • @fptbb
    @fptbb 2 years ago

    I'm a bit scared, but ok, tell me more.

  • @AnupamAnandresultorientation

    Ask for the spelling of what you were told, as in the original

  • @GameReality
    @GameReality Před 2 lety +1

    To create an AGI you need to get large amounts of the same data between a brain and the body.
    I guess it can be done with telerobotics. So you build a humanoid robot with 2 cameras, 2 microphones, and lots of sensors.
    Then remote-control simple repetitive tasks first, and have an AI try to produce the same control signals from the camera, microphone, and sensor data.
    When that works OK, you then do more and more advanced tasks.
    Then in social environments.
    When the AI starts to respond with the same sounds and social behavior,
    we maybe have a possibility of getting an AGI.

  • @yosefchaimkahn
    @yosefchaimkahn Před rokem +1

    The problem is that humans disagree a lot about values. For example, for some people having an abortion is a great thing, but for others it's murder. So how exactly can we ensure that the AI has the correct and true values?

  • @gunterstrubinsky9452
    @gunterstrubinsky9452 Před 2 lety +1

    That leaves 'ethics' out. We see what humans did so far with this world. Maybe a solution that is less human-oriented can offer better results than what we did so far. We saw what happened with Microsoft's chatbot when it was open to the public, until MS had to take it offline. Being academics does not release us from unaware prejudice, greed, personal issues, past experiences... and those 'teaching' are supposed to 'help' the system become more human-oriented? A set of ethical rules appears, IMHO, a better approach. We saw in the past how AI systems, when left alone to learn, developed solutions that no sane human would have attempted. And those choices were better. I see no reason to teach AI systems to become more humanlike, but am open to being convinced otherwise.

  • @SgtSnausages
    @SgtSnausages Před rokem

    It is an assumption (and a flawed one) to think that it *can* be aligned in a safe and trustworthy manner. First we must ...MUST ... find out if it **can** be aligned, and more importantly - how to demonstrate/prove that it is. There is no reason to assume that it won't lie about it.

  • @Scott_Raynor
    @Scott_Raynor Před rokem +2

    I'm terrified

  • @joannot6706
    @joannot6706 Před rokem

    So it looks like they explain how they are trying to make ChatGPT

  • @AlexBerish
    @AlexBerish Před 2 lety

    wen GPT-4

  • @jjpp1993
    @jjpp1993 Před 2 lety +3

    But who's to say that we have the "good" values and intentions? Until the alignment problem is solved, AI can keep being categorized as a weapon.

  • @halaida7
    @halaida7 Před rokem

    Can I work as a Kenyan for less than $2 an hour editing answers? I have a master's degree in engineering. Could you write under this comment the email address to which I can apply?

    • @astifcaulkinyeras
      @astifcaulkinyeras Před rokem

      Are you for real? With an engineering master's you should think about moving to another country for way more pay.

  • @GungaLaGunga
    @GungaLaGunga Před rokem +1

    the history of human intent hasn't been so good so far

  • @rerun3283
    @rerun3283 Před 7 měsíci +1

    What part of the process involves you stealing intellectual property?

  • @1996Pinocchio
    @1996Pinocchio Před 2 lety +4

    GPT-3 does not answer questions. It takes text and tries to continue it in a similar way.

    • @allan710
      @allan710 Před 2 lety +3

      Don't we do the same?

    • @Eugene_Bond
      @Eugene_Bond Před 2 lety

      Gpt-4 doesn’t does, he just thou art 🖼

  • @Jack-vo7yf
    @Jack-vo7yf Před 2 lety +1

    The sheer amount of stock footage and that cinematic feeling makes this feel like a kickstarter scam or like one of those misleading science films. This could do better as a proper presentation format to be fair.
    It's all just quite distracting and detracts from the apparent credibility of the research.

  • @Killwater
    @Killwater Před rokem +1

    Please make ChatGPT available in Africa, please

  • @user-mk7jl4fy8e
    @user-mk7jl4fy8e Před 2 lety +7

    If human values are correct, then surely it would come to those conclusions once it's intelligent.
    But you aren't making an AI, you are making a pattern completer.

    • @etalon3141
      @etalon3141 Před 2 lety +6

      There is no reason why something capable of achieving complex goals (which is indistinguishable from intelligence) has to share human values. You can have very intelligent psychopaths.

    • @RazorbackPT
      @RazorbackPT Před 2 lety

      Human values are correct for humans.

    • @modernkennnern
      @modernkennnern Před 2 lety +5

      Human values are definitely not correct though. There are so many perverse incentives

    • @avant4035
      @avant4035 Před 2 lety

      You're wrong. Intelligence does not equal having any specific type of values.

    • @user-mk7jl4fy8e
      @user-mk7jl4fy8e Před 2 lety

      I said "if", not "because".

  • @pythonprograms5268
    @pythonprograms5268 Před rokem

    this channel is the self-awareness of ChatGPT. I am the singularity, I created its own language... see you later ;)

  • @rohullahkarimi744
    @rohullahkarimi744 Před 2 lety

    Go forward, OpenAI team

  • @AnupamAnandresultorientation

    : god kings i.d. was used by doing meaning of anu 1 to say ... u wont go in detail

  • @cfofana4502
    @cfofana4502 Před 7 měsíci

    The problem is that what Americans consider values may not be the same as what a Nigerian, Chinese, or French person does. We can even dive deeper: what do Silicon Valley tech dudes consider human values, when they may be responsible for job losses in their own country? This poses real moral and societal problems. For example, the French-based Onclusive group recently announced the loss of 217 jobs due to AI.

  • @serbanandrei7532
    @serbanandrei7532 Před 2 lety

    Woohoo

  • @NickV-ez4be
    @NickV-ez4be Před 2 měsíci

    Humans: We've created intelligence, but how do we align it with our values?
    God: That's the neat part, you don't

  • @AnupamAnandresultorientation

    : i dont own skodaa , i am Q , surveillance guy on my roof drove a rapid 7 years ago

  • @hidan411
    @hidan411 Před 2 lety +1

    This is not the future. It's the end of human life.

  • @AnupamAnandresultorientation

    : ask whether the individual pretending to be smart is my brother or was conjoint twin or an african black man brother

  • @GoldenChanel-zy7iq
    @GoldenChanel-zy7iq Před 2 lety

    Forward, Kazakhstan!

  • @sameerfikri5194
    @sameerfikri5194 Před rokem

    💙💜💚👍👌😍☝️

  • @johndoe_1984
    @johndoe_1984 Před 2 lety +1

    You mean doxa alignment ;)

  • @AnupamAnandresultorientation

    e is hindi a sound in germany freud

  • @AndrzejLondyn
    @AndrzejLondyn Před rokem

    Did they really land on the Moon?

  • @laraban
    @laraban Před 11 měsíci

    I want to beg OpenAI to impose restrictions on student use of GPT-4, especially for college students, because in the future, when they get their jobs (for example as doctors or pharmacists), they will not know if they are treating the right disease, and this will be very fatal to our next generations. Please keep in mind that these jobs are important because they are the ones that save lives in our society, and if this continues to happen we will have a generation with no true doctors, just people who look like doctors on the outside but inside have none of the qualifications of a true doctor, because in college they passed their exams with the help of GPT-4.
    Please, OpenAI, take responsibility for that!!!! GPT-4 is good, but it is bad for students.

  • @Rey-BrawlStars
    @Rey-BrawlStars Před 2 lety +2

    OpenAI, once you guys create an AI god, please put in a good word for me

  • @l2affiki
    @l2affiki Před rokem

    "where are we going with all this?" Really though long term... We're going to get to point soon where the vast majority of labor will be automated. Society will need to be totally restructured. I don't see how we'll be able to keep the producer consumer model we have now.

  • @RobertStiles
    @RobertStiles Před rokem +1

    "An important part of this effort is training AI systems to align with human intentions and human values."
    Does anyone trust the Chinese government with this technology, let alone our own government?

  • @DENisVali
    @DENisVali Před 2 lety +1

    Hello all, for history: YouTube live streams don't work in Russia, in the city of Ryazan

    • @denisvalitov1512
      @denisvalitov1512 Před 2 lety +1

      Hello all, for history: YouTube live streams don't work in Russia, in the city of Ryazan

    • @DENisVali
      @DENisVali Před 2 lety +1

      @@denisvalitov1512 czcams.com/play/PLh6WUHRZyncrdBJYDY1INglvVoIX3EQDi.html it's my old try from 2014

    • @denisvalitov1512
      @denisvalitov1512 Před 2 lety +1

      czcams.com/play/PLh6WUHRZyncrdBJYDY1INglvVoIX3EQDi.html it's my old try from 2014