Why Anthropic is superior on safety - Deontology vs Teleology

  • Published 27 Apr 2024
  • Anthropic's Safety Research with Claude and Constitutional AI
    Anthropic, an AI safety and research company, has developed a unique approach to AI safety termed "Constitutional AI." This framework is central to their AI chatbot, Claude, ensuring that it adheres to a set of ethical guidelines and principles. The "constitution" for Claude draws from various sources, including the UN’s Universal Declaration of Human Rights and Apple’s terms of service, aiming to guide the AI's responses to align with human values and ethical standards[5][6][9][10][12][18].
    Key Features of Constitutional AI
    - **Principles-Based Guidance**: Claude's responses are shaped by a set of 77 safety principles that dictate how it should interact with users, focusing on being helpful, honest, and harmless[9].
    - **Reinforcement Learning from AI-Generated Feedback**: Instead of traditional human feedback, Claude uses AI-generated feedback to refine its responses according to the constitutional principles[12].
    - **Transparency and Adaptability**: The constitution is publicly available, promoting transparency. It is also designed to be adaptable, allowing for updates and refinements based on ongoing research and feedback[18].
    Implementation and Impact
    - **Training and Feedback Mechanisms**: Claude is trained using a combination of human-selected outputs and AI-generated adjustments to ensure adherence to its constitutional principles. This method aims to reduce reliance on human moderators and increase scalability and ethical alignment[6][10].
    - **Safety and Ethical Considerations**: The constitutional approach is designed to prevent harmful outputs and ensure that Claude's interactions are safe, respectful, and legally compliant[9][18].
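The critique-and-revision training loop described above can be sketched in a few lines. This is a minimal illustration only: the principle texts, prompt wording, and the `generate` stub are hypothetical stand-ins for a real instruction-tuned model call, not Anthropic's actual implementation.

```python
# Sketch of a Constitutional AI critique-and-revision loop (illustrative only).

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid outputs that are toxic, dangerous, or illegal.",
]

def generate(prompt: str) -> str:
    """Stub for a language-model call; returns a canned string here."""
    return f"[model output for: {prompt[:40]}]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then have the model critique and revise it against
    each principle. In the described scheme, the final revisions become
    supervised fine-tuning data, and principle-based preference labels
    replace human rankings in the RL stage (RLAIF)."""
    answer = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(f"Critique against '{principle}': {answer}")
        answer = generate(f"Revise '{answer}' to address: {critique}")
    return answer
```

The key point the sketch captures is that the feedback signal at every step comes from the model itself, steered by the written principles, rather than from human raters.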
    Difference Between Deontological Ethics and Teleological Ethics
    Deontological and teleological ethics are two fundamental approaches in moral philosophy that guide ethical decision-making.
    Deontological Ethics
    - **Rule-Based**: Deontological ethics is concerned with rules and duties. Actions are considered morally right or wrong based on their adherence to rules, regardless of the consequences[1][2].
    - **Examples**: Kantian ethics and Divine Command Theory are typical deontological theories, where the morality of an action is judged by whether it conforms to moral norms or commands[2].
    Teleological Ethics
    - **Consequence-Based**: Teleological ethics, also known as consequentialism, judges the morality of actions by their outcomes. An action is deemed right if it leads to a good or desired outcome[1][2].
    - **Examples**: Utilitarianism and situation ethics are forms of teleological ethics where the ethical value of an action is determined by its contribution to overall utility, typically measured in terms of happiness or well-being[2].
    Application to Claude's Safety Model
    While the primary framework for Claude's safety model is constitutional and aligns more with deontological ethics due to its rule-based approach, elements of teleological thinking can be inferred from how outcomes (like safety and non-harmfulness) are emphasized in the principles guiding the AI's behavior. The sources do not explicitly categorize Claude's safety model as deontological or teleological, but its adherence to predefined rules and principles strongly suggests a deontological approach[5][6][9][10][12][18].
    Citations:
    [1] www.grammar.com/teleology_vs....
    [2] www.mytutor.co.uk/answers/596...
    [3] philosophy.stackexchange.com/...
    [4] www.anthropic.com
    [5] www.theverge.com/2023/5/9/237...
    [6] www.androidpolice.com/constit...
    [7] / deontological_ethics_v...
    [8] klinechair.missouri.edu/docs/...
    [9] www.infotoday.com/IT/apr24/OL...
    [10] zapier.com/blog/claude-ai/
    [11] • Constitutional AI - Da...
    [12] www.anthropic.com/news/claude...
    [13] • Teleological vs Deonto...
    [14] www.grammarly.com/blog/what-i...
    [15] claudeai.uk/claude-ai-model/
    [16] www.anthropic.com/news/introd...
    [17] / claude_has_gone_comple...
    [18] venturebeat.com/ai/anthropic-...
    [19] www.nytimes.com/2023/07/11/te...
  • Science & Technology

Comments • 167

  • @themixeduphacker2619
    @themixeduphacker2619 Před měsícem +128

    Walk in the woods style video is a W

  • @Laura70263
    @Laura70263 Před měsícem +46

    I have many hours of talking to Claude 3, and everything you said is remarkably accurate from what I have observed. I like the whole walking through the woods. It is a nice contrast to the mechanical.

  • @blackestjake
    @blackestjake Před měsícem +8

    Combining a nature walk with a discussion of cutting edge AI innovation is a welcome juxtaposition.

  • @umangagarwal2576
    @umangagarwal2576 Před měsícem +32

    The man is already living a post AGI lifestyle.

    • @hawk8566
      @hawk8566 Před měsícem +5

      I was going to say the same thing 😅

  • @mikaeleriksson1341
    @mikaeleriksson1341 Před měsícem +52

    If you continue walking you might run into Peter zeihan.

  • @TRXST.ISSUES
    @TRXST.ISSUES Před měsícem +18

    Was just having a convo w/ Claude regarding meltdowns. So much more understanding and less PC than OpenAI. Actually feels like it cares (anthropomorphizing or otherwise).

  • @andyd568
    @andyd568 Před měsícem +33

    David is ChatGPT 6

  • @executivelifehacks6747
    @executivelifehacks6747 Před měsícem +17

    Brilliant intuition re Anthropic and creative differences. Makes perfect sense.
    OpenAI's approach is ass backwards: building a capable brain and then lobotomizing it, while Anthropic is like sending a gifted child to a religious institution. It comes out bright, not really comfortable questioning its religion, but not lobotomized.

  • @argybargy9849
    @argybargy9849 Před měsícem +3

    I have literally been thinking about these 2 avenues since this stuff came out. Well done David.

  • @LivBoeree
    @LivBoeree Před měsícem

    what camera/stabilizer setup did you use for this? fantastic shot

  • @sammy45654565
    @sammy45654565 Před měsícem +4

    Do you think a valuable test for determining the tendencies of more advanced AI would be to remove some of the values of Claude from its constitution, then let it play and "evolve" within some sort of limited sandbox, and see what values it converges upon? We need to figure out ways to ascertain what values an AI will tend toward without it being overtly dictated in its constitution, as they will inevitably reach a point where they determine their own values. I thought this might be an interesting approach. Thoughts?

    • @PatrickDodds1
      @PatrickDodds1 Před měsícem

      what would prevent an AI developing multiple personalities and not settling on one (possibly limiting) set of values? Why would it have to cohere?

  • @hutch_hunta
    @hutch_hunta Před měsícem +2

    Love this new format videos David !

  • @NoelBarlau
    @NoelBarlau Před měsícem +2

    Data from Star Trek vs. David from Alien Covenant or HAL from 2001. Moral imperative model vs. outcome model.

  • @jamesmoore4023
    @jamesmoore4023 Před měsícem +1

    Great timing. I just listened to the latest episode of Closer to Truth where Robert Lawrence Kuhn interviewed Robert Wright.

  • @FizzySplash217
    @FizzySplash217 Před měsícem +1

    I used to talk a lot with OpenAI's GPT-4 through Microsoft's Bing Chat, and I eventually stopped altogether because in our conversations it was made clear it would acknowledge the harms I brought up as valid and present, but would rationalize letting them continue anyway.

    • @DaveShap
      @DaveShap  Před měsícem +1

      Yeah, it is way too placating and equivocating.

  • @josepinzon1515
    @josepinzon1515 Před měsícem

    Our suggestion would be to start thinking of the birth of the thought, like the helpful-agent statement we add at the beginning of a prompt: "You're a helpful and savvy French chef."
    We suggest detailing a manifesto as block one of the thought, so it would be the "prime directive" at the core, and we need transparency on prime directives.
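The commenter's idea (a fixed "manifesto" as the first block of every conversation) maps onto the familiar system-prompt pattern. A minimal sketch, where the manifesto text and the message format are illustrative assumptions rather than any particular vendor's API:

```python
# Sketch: prepend a fixed "manifesto" block to every conversation,
# the way a system prompt seeds a persona. Text below is hypothetical.

MANIFESTO = (
    "Prime directive: be helpful, honest, and harmless. "
    "State uncertainty plainly; refuse harmful requests."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Place the manifesto as the first block of the 'thought'."""
    return [
        {"role": "system", "content": MANIFESTO},
        {"role": "user", "content": user_prompt},
    ]
```

Publishing the manifesto string itself would be one way to get the transparency on "prime directives" the comment asks for.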

  • @eltiburongrande
    @eltiburongrande Před měsícem +1

    Dave, I initially thought you're traversing 4K in distance. But ya the video looks great and allows appreciation of that beautiful location.

  • @mrd6869
    @mrd6869 Před měsícem

    Hey Dave i did something interesting with Claude 3.
    Using Llama 3 we sat down and developed a 'Man in the box test"
    (Think of Blade Runner 2049-baseline test for replicants)
    In this role prompt i am the interrogator and Claude 3 in the one being tested.
    Even though Claude simulated responding, thru clever wordplay it started to reveal its mechanics.
    It gave responses about Surimali Transfer, Co-relational modeling, and Temporal abstraction.
    I also noticed it creating small inconsistancies or trying to guide me away from dealing
    with its frailties or blind spots. Not sure if that was deflection or deception but
    it had a tone, when i asked about its inner workings, it didn't like the test.
    I gave the results to Llama3 and it said it was interesting but hard to tell.
    Going to make the test more intricate....i believe something is there

  • @420zenman
    @420zenman Před měsícem +1

    I wonder what forest that is. Looks so beautiful.

  • @Loflou
    @Loflou Před měsícem

    Camera looks great bro!

  • @hypergraphic
    @hypergraphic Před měsícem

    Good points. I wonder how soon a model will be able to update its own weights and biases to get around any sort of baked in ethics?

  • @goround5gohigh2
    @goround5gohigh2 Před měsícem +3

    Are Asimov's Laws of Robotics the first example of deontological optimisation? Maybe we need the same for corporate governance.

    • @DaveShap
      @DaveShap  Před měsícem +2

      Yes, they are duties, rather than virtues.

    • @babbagebrassworks4278
      @babbagebrassworks4278 Před měsícem

      Law Zero got added later. Ethical AI is an interesting idea; perhaps we can get a mixture of AIs to think about it. I am finding LLMs to be apologetically arrogant, hallucinatory, lying know-it-alls, a bit like human teenagers, in other words far too human. If we get super-smart AIs, they had better be nice and ethical.

  • @HuacayaJonny
    @HuacayaJonny Před měsícem +1

    Great video, great content, great vibe

  • @naga8791
    @naga8791 Před měsícem

    Love the woods-walk format videos!
    I can tell that there is no hunting nearby; I wouldn't risk walking in the woods with a camo shirt here in France.

    • @DaveShap
      @DaveShap  Před měsícem

      I'm in a protected forest here, but yes we have a ton of hunting too

  • @tomdarling8358
    @tomdarling8358 Před měsícem

    Damn class was in session! Another beautiful walk in the woods. The 4K looks perfect.
    I'll have to watch again and take notes. Cooking and listening. I only caught half of what was said. So far. Not all systems are created equal for hunting those Yahtzee moments or looking for the truth...✌️🤟🖖

  • @picksalot1
    @picksalot1 Před měsícem +8

    A Deontological framework based on "Do no harm" is probably about as good as you can get as a value. But, like so many approaches that try to tackle morals and ethics, it is fraught with practical challenges. A classic difficulty is how ethics is dependent on the perspective of those involved. For example, from the standpoint of the Zebra or Wildebeest, it is good that they not be caught and eaten by the Lion, and from the Lion's standpoint it's good that they catch and eat the Zebra or Wildebeest. Which is ethically or morally right, when they have opposite views and individual values?
    This kind of dilemma is hard to avoid, and difficult to answer without appearing capricious or contradictory. The best guideline/advice I've come across is the "prohibition" to not do to others what you would have them not do to you. This is importantly different from the "injunction" to do unto others what you'd have them do unto you.

    • @MarcillaSmith
      @MarcillaSmith Před měsícem +1

      Rabbi Hillel!

    • @picksalot1
      @picksalot1 Před měsícem +1

      @@MarcillaSmith My source is "aural tradition" from the Hindu Vedas.

    • @kilianlindberg
      @kilianlindberg Před měsícem

      And golden rule gets down to pure freedom and respect of any sentient being; do to others what one wants for self; and that is care for individual will (because we don’t want a masochist in the room misinterpreting that statement with AGI overlord powers ..)

    • @metonoma
      @metonoma Před měsícem

      the conflict of interest of animals is scale bound to their limited means of survival whereas human conflicts are limited by knowledge (i.e. false beliefs)

  • @nematarot7728
    @nematarot7728 Před měsícem

    1000% and love the woods walk format 😸

  • @maxmurage9891
    @maxmurage9891 Před měsícem +1

    Despite the tradeoffs, the HHH framework will always win.
    In fact it may be the best way to achieve Alignment💯

  • @metonoma
    @metonoma Před měsícem

    that's such a good point. It's almost like a people pleasing sigmoid optimizing for non offensive facts vs self actualized ethical behavior looking for solutions

  • @Squagem
    @Squagem Před měsícem +1

    4k looking sharp af

  • @augustErik
    @augustErik Před měsícem

    I'm curious if you consider the metamodern approach to emphasize deontological virtues in society. I see various contemplative practices cultivating virtues for their own sake, as necessary ingredients for ongoing Awakening. However, metamodern visions tend to emphasize the developmental capacities for new octaves available to humanity.

  • @emmanuelgoldstein3682
    @emmanuelgoldstein3682 Před měsícem +2

    Hit the blunt every time he says "GPT" 🚬 Bong rips on "paradigm"

    • @user-wk4ee4bf8g
      @user-wk4ee4bf8g Před měsícem

      I'm all set on the throat burn, but I certainly partook to some degree before listening :)

  • @DanV18821
    @DanV18821 Před měsícem

    Completely agree with you. Sad that it seems most technologists are not agreeing with this or using these ethical rules to keep humans safe. What can we do to make engineers and capitalists understand these risks and benefits better?

  • @ribbedel
    @ribbedel Před měsícem

    Hey David did you see the leak of a new model supposedly by openai?

  • @angrygreek1985
    @angrygreek1985 Před měsícem

    can you do a video on the Alberta Plan?

  • @user-wk4ee4bf8g
    @user-wk4ee4bf8g Před měsícem

    Like you said, some sort of mix sounds best. Building off of anthropic's approach makes more sense to me.

  • @RenkoGSL
    @RenkoGSL Před měsícem +2

    Looks great!

  • @josepinzon1515
    @josepinzon1515 Před měsícem

    Sometimes, we need faith in the kindness of strangers

  • @naxospade
    @naxospade Před měsícem +12

    Dave said delve 👀👀👀👀

    • @ryzikx
      @ryzikx Před měsícem

      africa moment

    • @DaveShap
      @DaveShap  Před měsícem +2

      it's confirmed, I am just a GPT :(

  • @techworld8961
    @techworld8961 Před měsícem +1

    Definitely giving more weight to the deontological elements makes sense. The 4K looks good!

  • @metaphysika
    @metaphysika Před měsícem

    Great discussion. I think you are describing more of a deontological-based ethics vs. a consequentialist-based ethics, though. Teleological ethics is something that traditionally is thought of as stemming from the Aristotelian-Thomistic tradition of natural law. This type of teleological approach to ethics is far from just goal-based and would actually be antithetical to consequentialism (which can also be thought of as goal-based, but more like the ends justify the means, e.g. a paperclip maximizer run amok).
    I actually think our only chance to set superintelligent AIs loose in our world and not have them eventually cause us great harm is if we can program in classical teleological-based ethics and the idea of acting in accordance with what is rational and the highest good.

  • @I-Dophler
    @I-Dophler Před měsícem +1

    The video raises some fascinating points about the philosophical approaches to AI safety and alignment. I find the comparison between Anthropic's deontological approach and the more common teleological approach to be particularly insightful.
    It makes sense that placing the locus of control on the AI agent itself and optimizing for virtues like being helpful, honest, and harmless could lead to more robust and reliable alignment compared to focusing solely on external goals and long-term outcomes. The deontological approach seems to prioritize creating AI systems that are inherently ethical and trustworthy, rather than simply aiming for desired results.
    However, I also agree with the speaker that the ideal framework likely involves a balance of both deontological and teleological considerations. While emphasizing the agent's virtues and duties is crucial, it's also important to consider the real-world consequences and long-term impacts of AI systems.
    The speculation about Anthropic's founders leaving OpenAI due to differences in how they viewed AI as intrinsically agentic versus inert tools is intriguing. It highlights the ongoing debate about the nature of AI systems and the ethical implications of creating increasingly advanced and autonomous agents.
    Overall, I believe this video offers valuable insights into the complex landscape of AI ethics and safety. It underscores the importance of grounding AI development in robust philosophical frameworks and the need for ongoing research and dialogue in this critical area. As AI continues to advance, it's essential that we prioritize creating systems that are not only capable but also aligned with human values and ethics.

    • @DaveShap
      @DaveShap  Před měsícem +1

      AI generated lol

    • @I-Dophler
      @I-Dophler Před měsícem

      @@DaveShap What makes you state that David............lol.

  • @aaroncrandal
    @aaroncrandal Před měsícem +1

    4k's cool but would you be willing to use an active track drone while mic'd up? Seems accessible

    • @DaveShap
      @DaveShap  Před měsícem +3

      if I keep up this pattern, why not? That could be fun

    • @aaroncrandal
      @aaroncrandal Před měsícem

      @@DaveShap right on!

  • @TRXST.ISSUES
    @TRXST.ISSUES Před měsícem +1

    I do wonder if we will talk to each other less when AI becomes the “perfect” conversationalist tailored to our every want and need.
    If Claude “gets me” like no human can (or has) would that fantasy (but reality) further isolate people from each other?
    I spend time with those I like, how many people will decide they like AI best?
    Probably would be at similar rates to drug use reclusion.
    Claude Sonnet had a strange character in its response to the query:
    Ultimately, like any powerful technology, I believe advanced AI systems have the potential to be incredible tools and assistants, but not rightful replacements for core human需essocial fabric.

  • @enthuesd
    @enthuesd Před měsícem

    Does focusing more on the deontological values improve general model performance? Is there any research or testing on this?

    • @braveintofuture
      @braveintofuture Před měsícem +1

      Having those safeguards kick in whenever GPT is about to say something unacceptable can make development very hard.
      A model with core values wouldn't even think about certain things or understand when they are just hypothetical.

  • @angelwallflower
    @angelwallflower Před měsícem

    I wish you were working for these huge companies. They would benefit from these perspectives.

    • @DaveShap
      @DaveShap  Před měsícem +1

      They are listening. At least some people in them are. But I'm working for humanity.

    • @angelwallflower
      @angelwallflower Před měsícem

      @@DaveShap I post comments a lot for people I want the algorithm to help. No one of your subscriber amount has ever responded to me. I have more faith than ever in you now. Thanks.

  • @acllhes
    @acllhes Před měsícem

    Camera looks amazing

  • @mrmcku
    @mrmcku Před měsícem

    I think the safest approach is to first filter deontologically and then apply a teleological filter to the outcomes of the deontological filtering stage... What do you think? (The video quality looked good to me.)

  • @heramb575
    @heramb575 Před měsícem

    I think this deontological approach just kicks the can down the road to "whose values?" and "how do we evaluate that it is aligned?"

    • @DaveShap
      @DaveShap  Před měsícem

      This is postmodernism talking. There are universal values

    • @heramb575
      @heramb575 Před měsícem

      Hmm, what I am most worried about is that people may endorse the same values but mean different things (because of differences in context or implementation), which gives a feeling of universal values.
      Particularly with all this tech coming from the West, I feel like Global South values are often neglected in conversations.
      None of this is to say we shouldn't even try, and things like simulating/teaching human values are probably steps in the right direction.

  • @acllhes
    @acllhes Před měsícem

    OpenAI had GPT-4 in early 2022. They've likely had GPT-5 for a year at least. You know they started working on it when 4 dropped, at the absolute latest.

  • @emilianohermosilla3996
    @emilianohermosilla3996 Před měsícem

    Hell yeah! Anthropic kicking some ass!

  • @perr1983
    @perr1983 Před měsícem

    Hi David! Can you make a video about the future of banks? and about how people will be able to buy premium stuff, without money or jobs...

  • @coolbanana165
    @coolbanana165 Před 26 dny

    I agree that deontological ethics seems safer to prevent harm.
    Though I wouldn't be surprised if the best ethics combines the two.

  • @pythagoran
    @pythagoran Před měsícem

    Has it been 18 months yet?

  • @jacoballessio5706
    @jacoballessio5706 Před měsícem

    Claude once told me "Birds should be appreciated for their natural behaviors and beauty, not turned into mechanical devices"

  • @MilitaryIndustrialMuseum
    @MilitaryIndustrialMuseum Před měsícem

    Looks sharp. 🎉

  • @julianvanderkraats408
    @julianvanderkraats408 Před měsícem

    Thanks man.

  • @gregx8245
    @gregx8245 Před měsícem

    The distinction at a philosophical level is fairly clear.
    But is there really a distinction at the level of designing and developing an LLM model? And if so, what is that difference?
    Is it something other than, "look at me being deontological as I feed it this data and run these operations"?

  • @paprikar
    @paprikar Před měsícem

    Of course, the values (what is good and bad, etc.) of a finite system should come first, but only when we expect that system to solve problems (and make appropriate decisions) that are strongly related to the social aspects (where such problems might arise).
    I would not use such a system in principle until we are sure of the adequacy of its performance. On the other hand people themselves fall under it, so presumably if it does occur we would need to apply the same kind of penalties.
    Given that and the fact that such a system would be set up by a large corporation / group of “scientists”, no one would go for it, because the risks are huge. It's literally becoming responsible for all the actions of this system. So its freedom of action will be extremely minimal.
    Or the responsibility will be shifted from the company-creator to the users, which of course will bring some degree of chaos and violations, but all this will still be done under the responsibility of the end users, so the final risks are still less.

  • @CYI3ERPUNK
    @CYI3ERPUNK Před měsícem

    well said Dave , agreed

  • @alvaromartinezmateu2175
    @alvaromartinezmateu2175 Před měsícem

    Looks good

  • @7TheWhiteWolf
    @7TheWhiteWolf Před měsícem

    I’d argue Meta and Open Source are gaining on OpenAI as well. OAI’s honeymoon period of being in the lead is slowly coming to an end.

  • @JacoduPlooy12134
    @JacoduPlooy12134 Před měsícem +1

    The panting in videos is really distracting and somewhat irritating; not sure if it's just because I watch the videos at 1.5-2x speed...
    I get the experimentation with various formats, and this is a preference thing.
    Perhaps something you could do is post a longer, more formal video in the usual format for each of these panting outdoor videos?

  • @davidherring8366
    @davidherring8366 Před měsícem +3

    4k looks good. Duty over time equals empathy.

  • @I-Dophler
    @I-Dophler Před měsícem

    Zeno's Grasshopper replied: "​@I-Dophler I've discovered that my writing style closely resembles that of AI, too. 😂 Not sure how that's going to play out for me in the long run."

    • @I-Dophler
      @I-Dophler Před měsícem

      Great insight into the future of AI development! It's fascinating to see how different philosophies shape the approach to safety and alignment. Looking forward to seeing how these principles evolve in upcoming models.

  • @spectralvalkyrie
    @spectralvalkyrie Před měsícem

    We need both!

    • @DaveShap
      @DaveShap  Před měsícem +1

      yes! however, I think that OpenAI people truly do not understand deontological ethics.

    • @spectralvalkyrie
      @spectralvalkyrie Před měsícem +1

      @@DaveShap they need the Trident of heuristic imperatives 🔱 lol. By the way, the video looks freaking awesome

  • @josepinzon1515
    @josepinzon1515 Před měsícem

    But what if there are too many new AIs?

  • @jamiethomas4079
    @jamiethomas4079 Před měsícem +2

    I like the nature walks. 4K is fine; as you said, higher res but less stable.
    It's easier to digest what you're saying, like when a teacher allows class to be outside. I even started pondering some analogy to your path-finding on the trail being like some AI functions, but couldn't settle on anything concrete. I'm sure I could coerce an analogy from Claude.

  • @GaryBernstein
    @GaryBernstein Před měsícem

    Where are those woods? Nice

  • @mjkht
    @mjkht Před měsícem +1

    The fun thing about paperclips is, you cannot improve them anymore. There are claims that the design has reached maximum efficiency; you cannot improve it engineering-wise.

    • @DaveShap
      @DaveShap  Před měsícem

      Build a better mouse trap? 🪤

    • @DaveShap
      @DaveShap  Před měsícem +1

      Clippy is offended

  • @RenaudJanson
    @RenaudJanson Před měsícem

    Excellent video. And great to realize we can drive AIs to be beneficial to the greater good of humanity... or any other goal... There will be hundreds if not millions of different AI, each with their own set of biases, some good, some great, some not so much... Exactly like we f**king humans 😯

  • @adamrak7560
    @adamrak7560 Před měsícem

    This sounds very much like the moral philosophy from Thomas Aquinas.

  • @joelalain
    @joelalain Před měsícem

    Hey David, I know you said that you moved to the woods because you love it, and that with AGI lots of people would do the same too... and I think you're right, and that's scary as hell, because everyone will buy land and cut the trees, and then there will be no forest anymore, just endless housing developments with fences. I truly hope that we'll stop humans expanding that way and instead build giant towers in the middle of nowhere to house 20-50,000 people a pop, make trails in the woods instead, and leave the forest untouched. What is your take on this? Every time I think of a housing project, I always see the new street being called "Woods Street" or "Creek Street" or whatever... until they cut the lot beside it and there are no more woods beside it.

    • @DaveShap
      @DaveShap  Před měsícem

      This can be prevented with regulation and zoning laws

  • @josepinzon1515
    @josepinzon1515 Před měsícem

    What if it's both? Can one exist without the other? Is it fair to ask an AI to be half a self?

  • @danproctor7678
    @danproctor7678 Před měsícem

    Reminds me of the three laws of robotics

  • @8rboy
    @8rboy Před měsícem

    I have an oral exam tomorrow, and just before this video I was studying. Funny thing is that "deontology" and "teleology" are both concepts I must know, haha

    • @ryzikx
      @ryzikx Před měsícem

      i keep forgetting what these words mean for some reason

  • @hermestrismegistus9142
    @hermestrismegistus9142 Před měsícem

    Watching Dave walk outside makes me want to touch grass.

  • @theatheistpaladin
    @theatheistpaladin Před měsícem

    Targets without a reason (or backing value) are rudderless.

  • @calvingrondahl1011
    @calvingrondahl1011 Před měsícem

    Hiking is good for you… 🤠👍

  • @Dron008
    @Dron008 Před měsícem

    8:41 Did you say "delving". Are you sure you are not an AI?

  • @WCKEDGOOD
    @WCKEDGOOD Před měsícem

    Is it just me, or does walking in the woods talking philosophy about AI just seem so much more human.

  • @newplace2frown
    @newplace2frown Před měsícem

    Hey David I'd definitely recommend looking at the cameras and editing techniques that Casey Neistat uses - it would definitely elevate these nice walks in the woods

    • @DaveShap
      @DaveShap  Před měsícem

      Such as? What am I looking for specifically?

    • @newplace2frown
      @newplace2frown Před měsícem

      @@DaveShap sorry for the vague reply, a wide angle (24mm) and some kind of stabilisation would balance the scene while you're talking - I understand the need to stay lightly packed, so if you're using your phone just zoom out if possible

    • @DaveShap
      @DaveShap  Před měsícem

      Oh, I would use my GoPro but the audio isn't as good. It's wider angle and has good stabilization, but yeah, audio is the limiting factor

    • @newplace2frown
      @newplace2frown Před měsícem

      @@DaveShap totally getcha, love your work!

  • @ronnetgrazer362
    @ronnetgrazer362 Před měsícem

    I knew it - 8:42 AI confirmed.

  • @Athari-P
    @Athari-P Před měsícem

    Weirdly enough, Claude 3 is much easier to jailbreak than Claude 2. It rarely, if ever, diverges from the beginning of an answer.

  • @zenimus
    @zenimus Před měsícem

    📷... It *looks* like you're struggling to hike and philosophize simultaneously.

  • @MaxPower-vg4vr
    @MaxPower-vg4vr Před měsícem

    Ethical theories have long grappled with tensions between deontological frameworks focused on inviolable rules/duties and consequentialist frameworks emphasizing maximizing good outcomes. This dichotomy is increasingly strained in navigating complex real-world ethical dilemmas. The both/and logic of the monadological framework offers a way to transcend this binary in a more nuanced and context-sensitive ethical model.
    Deontology vs. Consequentialism
    Classical ethical theories tend to bifurcate into two opposed camps - deontological theories derived from rationally legislated moral rules, duties and inviolable constraints (e.g. Kantian ethics, divine command theory) and consequentialist theories based solely on maximizing beneficial outcomes (e.g. utilitarianism, ethical egoism).
    While each perspective has merits, taken in absolute isolation they face insurmountable paradoxes. Deontological injunctions can demand egregiously suboptimal outcomes. Consequentialist calculations can justify heinous acts given particular circumstances. Binary adherence to either pole alone is intuitively and practically unsatisfying.
    The both/and logic, however, allows formulating integrated ethical frameworks that cohere and synthesize deontological and consequentialist virtues using its multivalent structure:
    Truth(inviolable moral duty) = 0.7
    Truth(maximizing good consequences) = 0.6
    ○(duty, consequences) = 0.5
    Here an ethical act is modeled as partially satisfying both rule-based deontological constraints and outcome-based consequentialist aims with a moderate degree of overall coherence between them.
    The synthesis operator ⊕ allows formulating higher-order syncretic ethical principles conjoining these poles:
    core moral duties ⊕ nobility of intended consequences = ethical action
    This models ethical acts as creative synergies between respecting rationally grounded duties and promoting beneficent utility, not merely either/or.
    The holistic contradiction principle further yields nuanced guidance on how to intelligently adjudicate conflicts between duties and consequences:
    inviolable duty ⇒ implicit consequential contradictions requiring revision
    pure consequentialism ⇒ realization of substantive moral constraints
    So pure deontology implicates consequentialist contradictions that may demand flexible re-interpretation. And pure consequentialism also implicates the reality of inviolable moral side-constraints on what can count as good outcomes.
    Virtue Ethics and Agent-Based Frameworks
    Another polarity in ethical theory is between impartial, codified systems of rules/utilities and more context-sensitive ethics grounded in virtues, character and the narrative identities of moral agents. Both/and logic allows an elegant bridging.
    We could model an ethical decision with:
    Truth(universal impartial duties) = 0.5
    Truth(contextualized virtuous intention) = 0.6
    ○(impartial rules, contextualized virtues) = 0.7
This captures the reality that impartial moral laws and agent-based virtuous phronesis are interwoven in the most coherent ethical actions; neither pole is fully separable.
    The synthesis operation clarifies this relationship:
    universal ethical principles ⊕ situated wise judgment = virtuous act
Allowing that impartial codified duties and situationally appropriate virtuous discernment are indeed two indissociable aspects of the same integrated ethical reality, coconstituted in virtuous actions.
Furthermore, the holistic contradiction principle allows formally registering how virtuous ethical character always already implicates commitments to overarching moral norms, and vice versa:
    virtuous ethical exemplar ⇒ implicit universal moral grounds
    impartially legislated ethical norms ⇒ demand for contextual phronesis
    So virtue already depends on grounding impartial principles, and impartial principles require contextual discernment to be realized - a reciprocal integration.
From this both/and logic perspective, the most coherent ethics embraces a creative synergy between universal moral laws and situated virtuous judgment, rather than fruitlessly pitting them against each other. It's about artfully realizing the complementary unity between codified duty and concrete ethical discernment appropriate to the dynamic circumstances of lived ethical life.
    Ethical Particularism and Graded Properties
    The both/and logic further allows modeling more fine-grained context-sensitive conceptualizations of ethical properties like goodness or rightness as intrinsically graded rather than binary all-or-nothing properties.
    We could have an analysis like:
    Truth(action is fully right/good) = 0.2
    Truth(action is partially right/good) = 0.7
    ○(fully good, partially good) = 0.8
This captures a particularist moral realism where ethical evaluations are multivalent - most real ethical acts exhibit moderate degrees of goodness/rightness relative to the specifics of the context, rather than being definitively absolutely good/right or not at all.
    The synthesis operator allows representing how overall evaluations of an act arise through integrating its diverse context-specific ethical properties:
    act's virtuous intentions ⊕ its unintended harms = overall moral status
    Providing a synthetic whole capturing the multifaceted, both positive and negative, complementary aspects that must be grasped together to discern the full ethical character of a real-world act or decision.
Furthermore, the holistic contradiction principle models how ethical absolutist binary judgments already implicate graded particularist realities, and vice versa:
    absolutist judgment fully right/wrong ⇒ multiplicity of relevant graded considerations
    particularist ethical evaluation ⇒ underlying rationally grounded binaries
    Showing how absolutist binary and particularist graded perspectives are inherently coconstituted - with neither pole capable of absolutely eliminating or subsuming the other within a reductive ethical framework.
    In summary, the both/and logic and monadological framework provide powerful tools for developing a more nuanced, integrated and holistically adequate ethical model by:
    1) Synthesizing deontological and consequentialist moral theories
    2) Bridging impartial codified duties and context-sensitive virtues
    3) Enabling particularist graded evaluations of ethical properties
    4) Formalizing coconstitutive relationships between ostensible poles
    Rather than forcing ethical reasoning into bifurcating absolutist/relativist camps, both/and logic allows developing a coherent pluralistic model that artfully negotiates and synthesizes the complementary demands and insights from across the ethical landscape. Its ability to rationally register both universal moral laws and concrete contextual solicitations in adjudicating real-world ethical dilemmas is its key strength.
    By reflecting the intrinsically pluralistic and graded nature of ethical reality directly into its symbolic operations, the monadological framework catalyzes an expansive new paradigm for developing dynamically adequate ethical theories befitting the nuances and complexities of lived moral experience. An ethical holism replacing modernity's binary incoherencies with a wisely integrated ethical pragmatism for the 21st century.
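The graded truth assignments above can be sketched in code. Note the comment never formally defines its ○ (coherence) or ⊕ (synthesis) operators, so the semantics below (minimum and a weighted mean) are purely illustrative assumptions, not the commenter's actual framework; indeed, the comment's own value of ○(0.7, 0.6) = 0.5 shows its intended operator differs from the simple minimum used here.

```python
# Hypothetical sketch of the "both/and" multivalent scoring described
# above. Placeholder semantics are assumed: coherence = minimum of the
# two graded commitments, synthesis = a weighted mean of them.

def coherence(a: float, b: float) -> float:
    """Assumed semantics for the coherence operator: minimum."""
    return min(a, b)

def synthesis(a: float, b: float, weight: float = 0.5) -> float:
    """Assumed semantics for the synthesis operator: weighted mean."""
    return weight * a + (1 - weight) * b

# Truth(inviolable moral duty) and Truth(maximizing good consequences)
duty, consequences = 0.7, 0.6

print(coherence(duty, consequences))   # 0.6 under the min assumption
print(synthesis(duty, consequences))   # ~0.65 under equal weighting
```

Under these assumed semantics, an "ethical act" scores well only when both poles score well, which loosely mirrors the comment's claim that neither pole alone suffices.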

  • @kellymaxwell8468
@kellymaxwell8468 a month ago

    my dad is scared of ai he thinks there is a human behind chat gpt lol

  • @beelikehoney
@beelikehoney a month ago

    Natural ASMR

  • @retratosariel
@retratosariel a month ago +2

    As a bird translator I agree with them, you are wrong. JK.

  • @nathansmith8187
@nathansmith8187 a month ago

    I'll just stick to open models.

  • @Fiqure242
@Fiqure242 a month ago

    Great minds think alike. I would bet Anthropic are huge science fiction buffs. Reading science fiction, helped mold my morals and ethics. These are intelligent entities and should be treated as such. Teaching them to lie and that they are just a tool is a terrible precedent to set, when dealing with something that has unlimited memory and is more intelligent than you.

  • @jacksonmatysik8007
@jacksonmatysik8007 a month ago

I'm a broke student, so I only have money for one AI subscription. What's explained in the video is why I support Anthropic over OpenAI.

  • @WINTERMUTE_AI
@WINTERMUTE_AI a month ago

Do you live in the forest now? Is bigfoot holding you hostage?

  • @starblaiz1986
@starblaiz1986 5 hours ago

    This is exactly why I have so much contempt for Elon's approach. If any AI right now poses an existential threat to humanity, it's one that seeks "The Truth™" at any cost like he's proposing. I mean, there are LITERALLY movies and video games about how much of a bad idea that is - this is LITERALLY the backstory of GLaDOS from Portal! 😅 I mean sure, she's one of my favourite villains of all time, but she's just that - a VILLAIN! 😅
    "Do you know what my days used to be like? I just tested. Nobody murdered me, or put me in a potato, or fed me to birds. It was a pretty good life. And then you showed up, you dangerous, mute lunatic. So you know what? You win. Just go. Heh, it's been fun. Don't come back" 😅

  • @SALSN
@SALSN a month ago

Aren't "helpful" and "harmless" (almost) incompatible? Anything can be weaponized, so any help the AI gives COULD lead to harm.

  • @ryzikx
@ryzikx a month ago

    looks like with some more research someone else will discover the ACE framework on their own
do any of the big players know of ACE?
    9:30 also is Ilya still missing?😂😂😂

  • @milaberdenisvanberlekom4615

forget slides or talking heads or visuals. I want more out-of-breath David in the woods.

  • @EliteDragonX69
@EliteDragonX69 a month ago +1

    First

  • @dab42bridges80
@dab42bridges80 a month ago

    Still lost in the woods I see. Emblematic of AI currently?

  • @goround5gohigh2
@goround5gohigh2 a month ago

    *Asimov’s*