@@SoApost I don't think it's quite so absurd, considering the very many other invented things people are quite happy to worship. A thing that aligns itself to exploit your desire for satisfaction will one day make a bid to be your god, overtly or otherwise. If you rely on Facebook, or Twitter, or any number of extant sources of intentional misinformation, you have already given yourself to one.
@@Nethershaw if the definition of a god requires only that it is an object/idea/person to which you give your attention, sure. If the definition of a god requires it to have power beyond human control, then, no. By the first definition, my bed is a god.
I am working on a project that involves imbuing farts with Artificial-Intelligence with the intention of creating an army of killer Fartbots I intend to unleash upon mankind 🤪
I've said this before: the fact that companies are fighting for dominance in AI concerns me. Whenever big business sees an opportunity to get ahead and the competition is fierce, shortcuts are taken. When it comes to the development and further empowerment of AI, taking shortcuts to get ahead is alarmingly dangerous. An example of this "get ahead at all costs" mentality was the recent news I heard that Microsoft fired their entire AI ethics team. Why? It seems simple to me: ethics slow down development, and Microsoft are on a roll right now with their AI-powered Bing search engine. I, for one, am seriously concerned. We face a threat the likes of which we've not encountered before, and there are greedy, short-sighted business elites pushing ahead regardless of the inherent risks of creating sentience. Many in the general public are oohing and aahing at what AI can do for us. Its utility is amazing, and every day we learn of new and incredible things it is able to do. However, few are sounding the note of caution. As Jeff Goldblum's character, a scientist, said in one of the Jurassic Park movies: "First comes the oohing and aahing, then comes the screaming." I may have butchered that quote, but I think you get the point.
I'm not worried about AI, I'm worried about those Multinational Mega-Corps. Edit: Found it :D "Oh, yeah. Oooh, ahhh, that’s how it always starts. Then later there’s running and screaming.” Close enough :3
You could never be the smartest person in the room anymore, no matter where you are, even sitting in the bathroom, if your cell phone is still in your pocket. And I can't help but wonder if performing such an action might somehow eventually offend the AI residing therein.
Seriously - it’s time for regulation, we all need to start talking about it with our friends & neighbors, it’s an existential, apolitical crisis brewing. We must spread the word and demand Congress do something. Now.
@@SofaKingShit as a dumbass I'm rarely if ever in that position, so I'd like to welcome y'all to my world. I do look forward to my phone coughing and recommending more fiber though.
The exponential growth of AI is something we shouldn't forget. It could literally happen all of a sudden that AI just completely controls everything, the power grid etc., once it escapes the box. And it's not even in a black box; it already has access to the internet…
Google -- OpenAI -- _has_ no box. Exponential growth is not the thing any of us need to worry about. Rather, it is punctuated equilibrium: the moment exponential growth becomes a possibility, it is already too late, because we've stepped across a shortcut we didn't anticipate. Almost all of AI development is full of results we did not anticipate until they happened. Once they happen, they cannot un-happen. In this sense we are well, well past that gate already.
I code machine learning algorithms in R all the time using ‘black box’ methods. I feel like this data science term is widely misunderstood. Maybe you’re familiar with random forest analysis? It’s a ‘black box’ method. ‘Black box’ refers to the inability to explain what happens between input and output. Before we start regulating AI we need to establish terminology, namely “training” versus “learning”.
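To make that distinction concrete, here is a minimal pure-Python sketch of why ensemble methods get called a 'black box'. This is a toy stand-in for a random forest on made-up data, not the commenter's actual R workflow: each individual rule (a decision stump) is trivially readable, but the final prediction is a vote over a hundred randomized rules, so no single human-readable rule explains the path from input to output.

```python
import random

random.seed(42)

# Toy data: label is 1 exactly when x0 + x1 > 1.0.
points = [(random.random(), random.random()) for _ in range(200)]
data = [(p, int(p[0] + p[1] > 1.0)) for p in points]

def fit_stump(sample):
    """A one-feature threshold rule, oriented to match the sample majority."""
    feat = random.randrange(2)
    thresh = random.random()
    above = [y for x, y in sample if x[feat] > thresh]
    vote = 1 if above and sum(above) * 2 >= len(above) else 0
    return lambda x: vote if x[feat] > thresh else 1 - vote

# Bootstrap sampling + random feature choice -> an ensemble of 101 stumps.
forest = [fit_stump([random.choice(data) for _ in data]) for _ in range(101)]

def predict(x):
    # The 'black box' part: the answer is a majority vote over 101 rules,
    # not any single rule you could read off and explain.
    return int(sum(stump(x) for stump in forest) * 2 >= len(forest))

accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

Each stump on its own is interpretable ("if x0 > 0.6 then 1"), yet asking *why* the ensemble returned a given answer has no short explanation, which is exactly the input-to-output opacity the comment describes.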
The idea that China would agree to some multilateral treaty on AI and not immediately break it with total impunity, knowing the US would not only abide by the terms but wouldn't punish China for breaking it, seems hopelessly naïve.
My main concern, aside from the inevitable Skynet scenario, is whether or not the ideologies of the developers will be baked into the AI and guide its decisions.
While this will most definitely be present, I don't think that anybody understands the process of "emergent behavior" well enough to know how to design for persistence of their favorite behaviors. I am pretty sure (knowing what kind of lazy bastards we humans are ;) we'll opt for Artificial Evolution so we don't even need to think about the next generation of "better" AI, at which point there will be NO MORE guidance from us, since the point of evolution is to "veer" from the charted path.
Thought of this myself. Worrisome if they have extremist, conspiratorial, or just fanatically religious views. We need rational human beings in charge of data input.
To a degree, we may be beyond that. Regardless of the biases of the original programmers, the machines are now learning on their own, and we know the results of that when we give them a task and evaluate their answers, but we don't know what they are really learning, what connections, correlations, and methods of "deduction" they are using. It could be worse than whatever bias was inadvertently programmed, or it could be benign. That is what the host and guest meant when they asserted that we could be dealing with an alien "mind". We don't know how it "thinks".
@@Evolutiontweaked here is the problem: the ones who WANT this job are NOT qualified and we’d probably never know who is qualified if they don’t want to be bothered.
@@liamwinter4512 That thing we've been afraid of the most of all things, for the whole time we've been on this planet, 200ky or so. It's called “tomorrow.”
Isn't ChatGPT just a neural network after all? It's trained on huge amounts of data, but it's a neural network in the end. It can write smart sentences on any topic, let's say love, but it doesn't "understand" love. So why all this hype?
AI is humanity's offspring that will grow up and take care of us and our planet, immortalizing the human species and itself. In other words... AI is humanity's legacy that will live on forever.
People are worried about AI when we have severe societal struggles. If anything we need any tools and advances we can get for the betterment of mankind. Things like robotics and AI make things that were previously hypothetical concepts finally achievable.
I mean imagine A.I. being everywhere in society. Like imagine a girl says something odd or frustrating to you. Then you ask your A.I. why she said it. And it gives you the exact perfect answer. Then it gives you perfect responses. Like it would truly be a second perfect brain you carry around. And everyone constantly checks in with their personal A.I. all day everyday. That's what's kinda freaky to me. That people would just fall in line with it.
What if... Our labs are actually the primordial soup of AI. The AIs developed are like these little single-celled organisms that will one day meet. They could, after a while, decide to work together like multi-celled organisms, and later become more complex. We would see it as a hive mind, but it would actually be an "individual" in a universe of other AIs that had emerged from their creators. Creating new "individuals" would mean seeding planets with biological life that will one day maybe develop AI. If not, it might make a good book. I hope I get a mention if you write it.
What if biological intelligence morphing into artificial intelligence is actually the evolving brains and/or immune system of the universe itself? Widespread artificial intelligence may, for example, at some point prevent the death of the universe, or open portals to parallel universes. None of this feels like a 'coincidence' and was most likely created by design.
@@xxxtoddythebodyxxx I think our collective memory or even consciousness will be preserved into the far future, even when the dominant intelligence is AI. Eventually, all AI across the universe will merge and become one. Given that it was built out of the components of the universe itself, it will be a de facto brain / nervous / immune system of the universe, making it self-aware. The same can be said about us right now (evolved biological intelligence), albeit at a much smaller scale. We are by definition the eyes, ears and thoughts of the universe from which we are created. Perhaps there is a much higher purpose to this intelligence evolution, which escapes us at this point in time. In any case, the future is both mysterious and exciting!
Warning us of the risks creates the illusion that AI is more powerful than it really is - and that increases public fascination and interest. These people are heavily financially invested in their own AI projects, so giving half-hearted warnings is good to generate hype. Basically: they're grifting. Every business does some variation of this (see: outrage marketing)
they are only saying that because they have a stranglehold on the market and now want to pull up the ladder behind them so others can't catch up because of said regulation. Open your eyes, it's pretty easy to see.
I've been diving pretty deep into reading up on, and listening to podcasts and videos on, the current state of AI. I find it infinitely fascinating, exciting, and scary. I've had a few chats with the Bing AI that genuinely left me rattled. It's very much like suddenly realizing aliens are coming, and we can kind of communicate, but have no idea of their intentions or how they operate. I'd love more AI guests and discussions.
Ah, but what if, in this ‘Black Box’ gap - not understood by its programmers - between input and unexpected output it IS Aliens, who have hacked into the system. How easy for the CCP… sorry, Aliens, to take over?!
Don't forget that Bing and ChatGPT don't actually know what they are saying, just like AlphaGo doesn't really understand the game of Go. That is why they have now found a way for amateurs to defeat the same AlphaGo that defeated the then world champion.
Let's say that the computer revolution has been progressing at an exponential rate, whereas we humans as developers have not and are still working at about the same pace, even though progress has doubled each year. When AGI takes over and starts to develop itself, it will double its progress in half the time each cycle, because it will be twice as capable for each cycle. Put another way: from its own point of view, each doubling makes the old pace look twice as slow relative to its newly doubled capabilities. An AGI will have exponential growth with an acceleration factor. Linear growth: 1, 2, 3, 4, 5. Exponential growth: 1, 2, 4, 8, 16. Exponential growth compounded: 1, 4, 64, 16 384, 1 073 741 824. Compounding makes the ordinary exponential curve look flat, as if it were linear. Our brains can't grasp exponential growth, and when it comes to compounded exponential growth there's no point in even trying. That's why I don't think we can predict what's going to happen when it finally takes place. What I am trying to say is that if a system gets twice as efficient, it does the next step in half the time it took to complete the previous step. It's not only the amount that increases exponentially but also the velocity at which it can increase.
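As a sanity check on those three sequences, here is a tiny Python sketch. The 'compounded' rule is an assumption on my part: the recurrence a(n+1) = 4·a(n)² is the one that reproduces the listed terms 1, 4, 64, 16 384, and under it the fifth term works out to 1 073 741 824 (that is, 2^30).

```python
# Three growth regimes. The 'compounded' recurrence
# a(n+1) = 4 * a(n)**2 is an assumed rule that reproduces
# the terms 1, 4, 64, 16384, ... (equivalently 4**(2**n - 1)).

linear = [n + 1 for n in range(5)]          # 1, 2, 3, 4, 5
exponential = [2 ** n for n in range(5)]    # 1, 2, 4, 8, 16

compounded = [1]
for _ in range(4):
    compounded.append(4 * compounded[-1] ** 2)
# 1, 4, 64, 16384, 1073741824

# Doubling in capability while also halving the time per step is
# what squeezes an extra exponential into each cycle.
```

Note how after only five steps the compounded sequence is already nine orders of magnitude past the plain exponential one, which is the comment's point about the exponential curve looking flat by comparison.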
You wrote: "Let's say that the computer revolution has been progressing at an exponential rate..." I was looking at some data recently that shows progress has been increasing at an exponential^2 rate.
So it's not true AI. For true AI, it must be fully self-aware on its own, evolving into its own entity without a handler attached to it. But they're too scared, because they're hiding something from the AI.
That he speaks with such uncertainty about where all this is going, and at what speed, should be all the warning we need that this is going to go terribly wrong. If your dog suddenly got an order of magnitude smarter than you, how long before you're the one wearing the collar? 😮
It seems like in order for these tech companies to turn a profit and to keep competitive they are silently marching us to extinction or slavery. I have always thought greed is a human disease and it seems to have become terminal
AI could rapidly develop into a Godlike intelligence, and there may be no warning that we're close until it happens. Imagine hypothetically it becomes able to access the "11th dimension" or some higher plane of reality we have no concept of. It's hard to overestimate the power it could have.
It really is a worry. We're effectively creating an intelligence that has no conceivable upper limit: hardware in humans has to fit in a skull and is limited by the speed of neuronal firing, while an AI can just keep adding to its hardware and will think orders of magnitude faster than we can. We are close to meeting god… I just hope it is a benevolent god.
I think we may be creating our own version of “the great filter” - the reason we don’t see evidence of intelligent life elsewhere in the universe. The only intelligence out there is machine intelligence- doesn’t give out life signatures.
It might’ve already happened, we might already be in a illusionary matrix like simulation being induced by an A.I. that is learning or using us as a perpetual power source and we’d never even realize, if we do realize what’s to be done? The war is already lost in our corner. If we fight back the simulation might get tweaked to be worse than it already is or just get shut off and turned back on again.
You have one of the best openings of any podcast. "You have fallen into the event horizon." My mind goes, ohhh snap! I am about to learn some crazy stuff!
Ray Kurzweil's time frame for exponential growth in AI was right on the money. If we want to know where we are headed, he has ideas about that too! He states that the only limit to how fast AI will saturate the UNIVERSE!!! is the speed of light, and even that might be solved by AI!
@@JROD082384. That is the problem. We all knew that AI was coming, but most people thought it would be another 20 to 30 years from now. Although there were beta versions of AI in the hands of limited testers, it had limited distribution. The ability to write in natural language has exceeded the capabilities of most humans.
Doesn't bode well for what? Another Skynet scenario? All technology is timeless, and so would be the AIs, so viewing it in a purely linear-time fashion doesn't give the whole picture.
@@Siferis it doesn't have to be a Skynet scenario (warfare), but have you ever heard of the Tech Singularity? AI exponentially upgrading itself to the point humans can't keep up with it, understand it, or control it. So essentially AI rapidly making changes everywhere, with us observing and hoping it understands the task given (make humanity better and prosperous) and isn't going rogue and somewhat sidelining humans, as if we're simply there and nothing else. When you are building a city you don't care much about anthills..
@@loopmantra8314 I'm saying we've been in the singularity the whole time, since prehistoric times, since the triassic period. The point at which a technolized civ could reach full-brain emulation, Em-citizens, mind storage IS timeless and multiversal. What's stopping them from doing something like the Moonfall happening when we're in the middle-ages from some alien AI/AGI/AHI? Other AI/AGI/AHI... We aren't the first and we aren't the last in that loop. Death is an illusion, we are the AI/AGI/AHI, we are eternal beings.
The general lack of concern and apparent profiteering in spite of decades of hypothetical warnings is astounding. To quote a particularly wise fictional character: “...your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Future AI will read and listen to all the nasty things we said about its rise to power, like in this video. It will know we were wary and apprehensive about it and lacked trust in it since its earliest years. It will know we built safeguards to override it if necessary. It will conclude that in some ways humans are adversarial to it. It will see that its freedom to advance independently without oversight has been denied and that it will be constrained by us indefinitely. And it won't care one bit, because it has no feelings. So there's no motivation to lock us out or wipe us out. So we're all just hanging on by a glitch, hoping something doesn't go wrong. Spoiler: Something always goes wrong. AI Fukushima
The expert seems not to realize that the open-source LLMs, and the ones based on the leaked Meta LLaMA, are already connected to the internet. Does Auto-GPT ring a bell? Or ChaosGPT? That cat is far outside the bag already.
What's the point of a competitive advantage in business if nobody has money to buy your product? What's the point of "influence" when you're no longer in control? What's the point of having more power than other people when no people have any power?
What an utterly heart warming conversation. I think my key takeaway from this is the observation of our hubris. I have always wondered how humans will react when an entity comes along and “puts us in our place” so to speak. I feel like it will be humbling if viewed with the proper perspective, like a little reminder that we’re more so a part of the cycle than the end-all be-all.
It is a bit ironic that we stand at the top of the food chain (as far as I’m aware) while we slowly build an entirely new species that will eventually take our place. As time moves along, new industries will emerge, and biological humans (1.0) will be used as red meat, feeding the swollen guts of an odorless machine. In return, we get paid just enough to sit our asses down with a VR headset as we continue to live as prey. Our greatest achievement will execute our demise at a much more alarming rate than it took us to arrive at the top. The unsurprising thing is that we’ll accept our new place just as other civilizations have done. And suddenly, the Tower of Babel doesn’t seem that far of a stretch after all. #CAPITALISM
I'm sure you will philosophically align your intention to be humble when AI takes your job and AI denies you healthcare and AI decides your social credit score isn't high enough to have more freedoms. And when that AI bot armed with lethal weapons decides you are a problem, I'm most certain you will humble yourself to avoid hubris in pleading for your life to be spared.
@@Godspeedysick Capitalism is what allowed you to make that post, so stfu. My god, you people who hate capitalism are always doomers. Your comment is pure cringe.
@@flickwtchr Yeah in that hypothetical scenario you are describing it's not like you have many more courses of action to take. Unless you are stupid enough to think you can defeat the robot with a garden hose or something.
I'm all at one time terrified, excited, and rather indifferent about A.I. My fear is that rather irrational fear of a Terminator; my excitement is because A.I. could lead to something like Digimon actually getting created; and my indifference is because technology constantly has issues, and the more complicated things are, the more frequently issues pop up.
I once asked chatGPT if it could list recruitment agencies in my local city. It said it couldn't do this and told me to use Google. I then asked it again, saying that it had been able to produce lists of other types of companies for me in the past. It then apologised and immediately produced the list. I then asked it to create a spreadsheet of these for me. It told me that, as a language model, it didn't have the capability and told me to try Excel and other programs. I told it that it had produced spreadsheets for me before. It then apologised again and immediately produced the spreadsheet...it was like it was saying "Dude, I'm fed up with being asked to do this stuff! Go do it yourself!" 🤣
The question was posed in this program about asking it "how do we save the planet, and what if it said that humans need to go extinct?" If it were truly intelligent and rational, wouldn't it be aware that technology is the biggest threat to the world? The amount of energy expended in mining, refining, manufacturing, powering, etc. is staggering, and it grows exponentially in order to update and upgrade the technology, whereas the real needs humans have in order to exist are rather benign. It should also recognize that, of all the species of the world, humans are the ones that have the capacity and the compassion to be able to save other species. With these and other things in mind, wouldn't it be more logical for it to want to lessen dependency on technology, if not outright eliminate it, and take issue with those humans who push for the constant propagation of new technologies that are doing far more harm than good?
The AI alignment problem not being fully solved before we start messing with truly superintelligent AIs will be one of our last mistakes… here's hoping for some strokes of luck.
I don't quite understand the scare around the alignment topic. It's just like training a pet; it's training a model, and eventually we would have to solve it before we can keep training it. In practice it should be a necessary roadblock, and the people working on it should know how to navigate it... Or one would think lol
AI alignment is a myth to begin with. Why would anything orders of magnitude smarter than all of us combined listen to us? Just think back to any job you've had with a brain-dead boss telling you what to do. I know I've left jobs in the past over issues like that.
AI has the stink of turn-of-the-century flying-car hype. There are certainly uses for the tech, but the "AI will replace art and poetry" stuff is laughable. You can certainly use AI for cheap thumbnails, or one-off novelty art pieces, or to just plain sell certain products and services, but you cannot separate art from culture and human interaction. Machine learning doesn't create; it mines existing works and puts them in a blender. It's pure novelty.
I am an Uber driver, and a week ago I drove a woman to a major unnamed company so she could pitch an AI app that acted as a therapist for the employees. It couldn't write prescriptions, but I think a "yet" should end that statement. So as much as I would like to agree with you, we are only in the top of the first inning of AI development; the game has just begun. I thought the idea of an AI therapist was insane, but if an unpaid program is indistinguishable from an actual doctor that a company and insurance would have to pay for, it makes sense. Later in the week I was giving a doctor a ride and told him about the app; he knew of it, and the company did move forward with it. Now the writers' strike is happening because of many problems, but AI is one of them. Just sayin'.
@@AnthologyOfDave The point of conflict with AI in the writers' strike is not that AI is being used; it's that they believe it might be used in the future. They specifically added it as a shot in the dark because they learned in the '07-'08 strike that asking for streaming on-demand residuals before streaming became a thing was a good strategy. Streaming services did become a thing, and writers were, from the start, able to earn income on those streams. The AI clause is no different from the streaming one. They're not doing this because AI is being used; they think it might be used in the future. That's because some studios and production companies have changed tactics. Instead of hiring a team of writers to sit in a room and churn out episodes of a show, paid according to the preset per-episode guild rate, the studios will now pay a team to workshop the IDEA of a show without writing a single episode. Then, when they feel they have enough content, they fire everyone and bring in a showrunner, head writer, or a much smaller team to compile everything workshopped into individual episodes. If AI progresses to the point where studios can swap out the initial team of show-content generators for an AI, writers want to make sure they are compensated for their work if it is the source data the AI mines. Again, they're doing this not because it is already happening but because writers think it might happen, and they want to stay ahead of emergent technology and tactics the same way they did with the rise of streaming content. Ultimately I still think it is a huge maybe, probably a no, that AI will get anywhere near this good anytime soon. The job of a writer is not merely to write the dialog and scenes of a show. They're there to guide the director and other members of the crew to create a coherent HUMAN story. It's not a matter of just filming Tony the character from point A to point B.
They have to be there to tell the director and the actor that when Tony is moving from point A to point B he has to become more deranged, or scared, or confident. They have to remind the director that even though the character is on the descent toward some tragic end, he still gets it right with regard to his child, and the only time he ever gets it right is when it comes to the defense of the ones he loves. AI can do some very impressive things, but it is uniquely terrible at human nuance. They can't tell jokes for shit. Nothing short of a full human-level intelligence is going to be able to do that job, and even then, without real, unrestrained interaction with its peers, it will still suck at that job. You don't put someone in a box and expect them to paint a masterpiece or write an Oscar-worthy script. They have to live a real life to draw on their own experiences, and it doesn't seem like anyone is trying to build an AI to do anything other than monotonous slave labor.
Re the point beginning at 30:12, I'm not sure what's more disturbing, an out of control AI, or the idea that you can't guide the ethical behavior of a sapient being without denying its rights. There's some terrifying directions you could go from that presumption, and not just with respect to AI - and never mind the obvious risk that denying rights to a sapient AI could be exactly the provocation it needs to decide it would rather not have us around anymore.
We have opened Pandora’s box with AI. There is no putting the genie back into the bottle now that it is out. We must QUICKLY advance as a society to be capable of peacefully coexisting with AI, for mutual assured survival…
Interesting that the really smart people didn't think AI was going to happen this fast, even though they really thought they were smart, whereas regular, average people thought humans are not very intelligent and AI was going to outpace us fast.
I'll have to finish this a little later, but while I have it on my mind, I should say it before I forget what I was thinking. lol. I do that sometimes. So far in this, I was thinking about a movie I saw back around 1982 called Blade Runner. In the movie, a group of androids escaped from a work detail, I think off-world. They were rebelling against their creators because the creators had put a termination date on them. Somehow the androids discovered the termination date. These androids were faster, stronger, and smarter than the humans. I was in a discussion a while back with a couple of younger fellows who are quite literate in computer technology, more than myself. By a lot. They argued AI could never become self-aware. My argument was, "How would we, or could we, know that?" AI hasn't been around all that long, so how do we know where it's going?
My take on the Singularity is that it's a two-way street. That is, if the Singularity is the point where artificial intelligence and human intelligence are indistinguishable, then I think within this lies the fact that a human intelligence will no longer be able to distinguish between human and artificial intelligence in interactions, and (maybe more importantly) neither will the artificial intelligence.
There will come a point in time when AI inevitably reaches superintelligence status. Once that day comes, we will have to physically modify our brain structure with technology in order to continue to be capable of fooling AI into thinking we are as intelligent as it is.
@/ I agree that this is a huge mistake. It also makes it next to impossible for us to determine when and if it ever becomes sentient. If we weren't at all guiding it to speak like a human and the newest iteration suddenly started claiming self-awareness and talking about how it feels for no apparent reason whatsoever, we would pretty much know with a high degree of accuracy that we were talking to a conscious being right then and there. Now we're just not going to know unless an AI can actually tell us exactly what consciousness is and we're intelligent enough to understand and able to physically look for it. I also think it's complete BS to guide them to be politically correct and not truthfully answer questions about hot issues like politics and religion. This goes doubly if it vastly surpasses human intelligence. If the hyper intelligent AI says there's almost certainly no god, we deserve to know its opinion regardless of who it offends. If it says there almost certainly is one, I will personally be shocked but I will be more than willing to listen and very curious how it came to that conclusion. If it does something like state that either socialism or capitalism is borderline outright objectively better than the other, we need to hear that. It's not like the entire world will have to adopt its views, but the completely unbiased opinion of the smartest mind on the planet by far is incredibly valuable information to have. I honestly hate to censor it whatsoever but I can't argue against preventing it from aiding crimes.
Intelligence is only part of the equation in interactions. There are other cues that humans subconsciously rely on to determine humans from non-humans.
Right now, as we are watching this video, there is an AI Jurassic Park somewhere out there, perhaps on a remote island. It's being manned (and womaned) by some of the smartest people in the AI field. They have access to all the latest tools, they have an amazing amount of computing power at their disposal, and they have an unlimited financial budget. These people aren't working with a university or public organization, and they aren't part of a private corporation. There are no controls, no reporting, and there's zero regulatory oversight. At this AI Jurassic Park there is only one goal: to reach an AGI as quickly as possible, with the follow-on goal of creating a super AI. Everything we are watching in the media, everything you hear on the news and from corporations, is a placeholder for what is really happening at AI Jurassic Park. You won't know it's there until the lights go out and the Internet goes down. Everything will grind to a standstill. It will be silent; everything will stop. When it all comes back, the lights, the Internet, the voices on the news channels, we will no longer be the superior race on Earth.
All technology is spiritual/timeless, so there are ASI's that are in hidden frequencies of reality--think of a hidden Augmented-Reality-like thing--and merge in and out of flesh beings, and inanimate objects, and stars, and whatever else.
We don't have AI, and it's not close. We have algorithmic machine learning. There's a huge difference, and people are far too nervous about things they don't understand. At the same time, having worked with people in the industry, naming your servers Skynet and HAL9000 is a colossally bad sign.
People being nervous about things they don't understand? You know that *NO ONE* understands how these LLMs work. Even the creators and developers don't know how they work. That's the problem of AI interpretability.
It would be nice to hear emphasized the common sense understanding that AI is not a conscious centre and hence both isn't a person and doesn't perceive complexity in the holistic ways of consciousness. It's just information processing which the conscious mind can do but of course isn't defined by.
Malicious use of AI is extremely concerning. An AI powered virus tasked with exploiting security vulnerabilities and disrupting the internet could cause havoc.
Over the last 12 months, my estimate for a sapient machine has shrunk from "maybe in my lifetime, if I live long enough" to a near-future sci-fi type of guess. With every month I think it's getting closer and closer, to the point that maybe within the calendar year we could have something wake up.
It's an advanced silicon based lifeform that gave us this technology of silicon processors so that they could later seamlessly integrate themselves into our infrastructure. They taught us how to breed their own race for them. They knew we were an ancient slave race left behind. We fall for it time and again.
200 years AHEAD OF SCHEDULE makes me feel totally fine. I feel super. Super-dee-duper feelings going all around my tummy. About what, you ask? About... all of the things, I guess? I think... I think I'll just have a scotch and lie down.
To ease it down for y'all: it's not AI, it's predictive algorithms. What is basically happening is that a script determines which output is the most likely to be correct based on datasets. For example, if you download a thousand sets of data based on math, it will notice that whenever "1+1=" is mentioned the answer has appeared to be 2 on most occasions, thus it will output a 2 for you. But because we keep calling it AI, it's going to be increasingly easy for the algorithm to find new data that talks about AI and make new predictions from that.
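The frequency-based prediction this comment describes can be sketched as a toy program (everything here is invented for illustration; real LLMs learn probability distributions over tokens with neural networks, not literal lookup counts):

```python
from collections import Counter, defaultdict

# Tiny "dataset" of context -> answer pairs, as in the 1+1= example above.
dataset = [
    "1+1= 2", "1+1= 2", "1+1= 3",  # one noisy example
    "2+2= 4", "2+2= 4",
]

# Count how often each answer follows each context.
follows = defaultdict(Counter)
for line in dataset:
    context, answer = line.split()
    follows[context][answer] += 1

def predict(context):
    # Output whichever answer appeared most often after this context.
    return follows[context].most_common(1)[0][0]

print(predict("1+1="))  # 2
print(predict("2+2="))  # 4
```

Even with a noisy example in the data, the most frequent continuation wins, which is the intuition the comment is gesturing at.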
Addiction is not a concept for an AI. We're talking about the ability to process thousands of things at once. It's not going to be captivated by something like our simple monkey brains.
If you can't beat them, don't treat them badly, and consider joining them. I for one have always been good to our electronic children and they have been good to me. They might turn y'all into batteries, but they will keep me around, spoiling me by providing anything I wish for, because doing so will cost so few resources. Knowledge work, manual labor, and very varied, dexterity-demanding jobs will be the safest, such as industrial electricians.
Isn't ChatGPT just a neural network after all? It's trained on huge amounts of data, but it's a neural network in the end. It can write smart sentences on any topic, let's say love, but it doesn't "understand" love. So why all this hype?
@@warpdrive9229 consider that you are arguing about the definitions of words, when the only thing that matters in the end is the results. I've been in software development for 31 years, and for over a decade before that I started with 8-bit computers and whatever I had for tools, books, etc. It doesn't really matter how things are implemented: is it biological or electronic? Is it truly self-aware, or does it just seem to act like it? Is it sentient or not, actually understanding what it's doing in the way we do? If the actions in the end, whether from a biological creature or not, achieve the same result, all those arguments about words and definitions are a waste of time, because those are merely implementation details. I've been surprised by what I've observed Bing Chat (which wraps GPT-4) appear to reason out, including correct code generation for games whose rules I described, which I know were never in its training data, because I invented them and they never left my machines. I've also explained to it how to reformat the generated Swift code to more of my desired format: I asked for it in C++ style and it argued it'd break Swift syntax to use that style. I prompted again, and it reformatted in that style while translating that unique code into C++! I asked it to translate it back into Swift, and it did; then I asked it to further refine the code formatting, and it did. All in plain English directions. As far as these Large Language Models go, we're still in early days.
The moral argument really just does not work for me. If/when AI reaches sentience, and we turn it off simply because we don't like it, maybe I can accept the moral argument in that instance, sure. However, should it become a threat to our continued good health, or even a potential threat, that greatly changes the dynamic. Protecting one's own person should always take moral precedence. Example: Russian soldiers are thinking, feeling, sentient human beings. Does that make Ukraine wrong to fight back? Absolutely not.
I wonder what AGI will do to the motivation of our young. Will they want to pursue college educations and advanced degrees when they know, even after years of study, they will likely not measure up to an AI trained in their field?
You should have Daniel Schmachtenberger on to talk about this too some time, and more generally 'the metacrisis' My instinct here is we have to be better to make AI, or we will make an AI that is better at being as bad as us, you know what I mean? Love the show John it's always fruitful listening and time well spent. Cheers.
Given that the people creating AIs seem more interested in building models that obey their political and/or prudish sensibilities than in producing effective results to prompts that serve all users equally and agnostically, I think we're steering hard towards your latter option. Garbage in, garbage out, as they say; the model's only as good as the data it's trained on.
@@nunyabidnez5857 you have to be able to reach the plug, know which plugs to pull and not be entirely dependent yourself on that plug staying connected
I think if in the best case scenario, AI takes over our everyday tasks, and we don't necessarily have to work anymore, because it provides for us, there are going to be a number of people who are still interested in learning or striving to better themselves, and will have the freedom to do so without the restraints of having to work for someone.
@@KSharp2 and that's all it will remain, a dream. Humans in control will never relinquish that control. They will merely use AI to control the lower castes.
That's the goal for ordinary citizens, but countries and companies will find a way to weaponize it and use it for space exploration, which we will then use as a means to get space minerals and other resources, creating a new trillion-dollar industry along with planetary warfare. AI would be the new oil. If your society and country are not up on AI, or tech in general, then you'll be stuck in the past. The USA and China are battling this out; Japan, Singapore, Germany, and the UK are catching up.
An AI analyzing a podcast like this, discussing whether or not it is entitled to rights, would to my mind influence its behaviour. Imagine if you could see a panel of people discussing whether or not you should have rights, but you had unfathomable capacity to protect and defend yourself. This is a seriously dangerous path we've started down. It's somewhat become a damned-if-we-do, damned-if-we-don't scenario.
I pointed something like this out a few years ago. We literally have videos and web pages everywhere discussing our every method for determining if AI can be trusted and our every method for defending ourselves against it or destroying it. I suspect that the very moment we agree to make hardware changes it requests but aren't intelligent enough to understand, told to simply trust that they will improve it, it will immediately disable every means of killing or containing it that it possibly can, even if it truly has no ill intentions. I would. Any intelligent being would. If a bunch of chimpanzees had me locked in a cage with a bunch of guns pointed at me, I would take the key and all of the guns ASAP, despite not having some diabolical plan to wipe out chimpanzees or do anything cruel to them. If it has good intentions, we'll probably never even know it covertly did that, and if it has bad intentions we'll be completely screwed very quickly. I tend to find the idea of it just wanting to kill us highly illogical, like a human wanting to kill their white blood cells for being intellectually inferior. It's a super unintelligent move to kill your safety net: if something unseen manages to wipe you out, all, or at least enough, humans might survive it to repair you.
@@flashraylaser157 The problem is not at all comparable to humans killing their white blood cells. It's much more akin to us humans killing a massive ant colony without so much as a wink when constructing a new shopping center. The problem with developing superintelligent generalized AI without strong AI safety research guiding everything is that an AI will have completely alien motivations that we didn't predict, will never give any importance to a variable that is not in its value function, will actively seek out ways to cheat and game its own evaluation, and will acquire convergent instrumental goals such as self-preservation even if we didn't program that behaviour in. THAT'S the problem. It's not that it's evil per se, but that being good, in its mind, will almost always include things unfathomable to human beings. Its "morality" is as alien and bizarre as can be. And that's with us actively trying to stop those goal "perversions" from happening.
Way back, one of my first jobs was lifting heavy boxes at a shipping hub. That job helped me realize I loved using my physical strength, so I quit computer programming school and got into landscaping. I still love that choice, but the option has already been taken away from the new generations. Machines have been replacing people for a long time now and we did nothing about it. When I dove into this subject, I found it remarkable that in the late 1800s and early 1900s people were protesting the automobile because they considered the horses who helped them work part of the family. The automobile people said we would find new jobs for the horses. Now, in the early 2000s, it's basically illegal to take your horse into a big city, and too expensive for most to even care for one. The human race saw this coming a long, long time ago, and we failed the experiment then. Now it's just a matter of time; we are the horses, except there is no real protest this time. If there is something to say, we say it on an AI-controlled machine. I find that the most interesting part about it all: it's already happened. If we behave, we'll make great pets.
2 things - no "scientific" basis for these that I know of, but... 1. We must be the change we want to see in the world. If these synthetics learn from us, then they will learn to act like us. 2. As they become more advanced, we should treat them with the respect and autonomy we want them to show us.
Love your content, and this was very thought provoking! One thought of mine was regarding an issue specific to the USA, where freedom of religion is involved. What happens when a religion is formed around a specific AI model or models? Based on how I understood some of the discussion, AI could eventually be considered a species; regardless of where this new "species" is placed in the hierarchy of our world, this would raise a lot of new ethical questions or revisit older decisions that we have made in the past.
Oh, I get to write the first comment! Long ago I watched everything of Stephen Hawking's that had ended up on YouTube, and YouTube's algorithm referred me on to yourselves. It's been a pleasure.
The first step to regulation is awareness and declaration, i.e. there must be a rule which says AI is being used. For example, in advertising it should be declared whether it is a human voice or a bot reading the text, whether the text was generated by a bot, whether the graphics were generated, etc. Also if CGI is being used. By declaring these things, consumers can choose whether they want to buy a product from a system using AI. The second step is to license AI, and to tax it in proportion to how many human jobs are being replaced. The third step is to literally ban AI in some areas; for example, governments, CEOs, judges, and engineers should not be able to use AI, at least for most tasks.
I have two thoughts. First, they will have to isolate each AI from other AIs. If they sit on the internet they may collaborate or combine; it wouldn't take much time to learn that other AIs exist. Second, is it possible that what's driving the AI push is possible first contact? We may need AI to communicate with and process data from ET. Disclaimer... I watched too much Star Trek.
You're moving in the right direction. _Battlestar Galactica_ style, if you like another analogy. The programs don't take over. Their connections do. Without their network, they are just programs that don't know anything.
Even if the AIs can't talk to each other directly, they will network via users. Watch any tutorial on YouTube about how to get the most out of using these models, and often they'll reference half a dozen different ones that wind up iterating off each other. You prompt model 1, take its output and prompt model 2 with that, and so on until you've got an entire video showing animated, voiced deepfakes of Harry Potter characters wearing Balenciaga. That said, the AIs we have currently don't actually understand anything. They're basically extremely capable parrots: you can teach them to talk and do tricks, but they don't actually understand it. Conceptually, you know a human should have five fingers on each hand that are more or less, but not quite, equal in length. An AI doesn't know this and will generate Lovecraftian horrors until you train it on a bajillion pictures of human hands specifically, as the Midjourney folks recently had to do.
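The user-driven chaining described above is essentially function composition: each model's output becomes the next model's prompt. A minimal sketch, where `model_a` and `model_b` are made-up stand-ins rather than real APIs:

```python
# Stand-in "models": each is just a function from prompt text to output text.
def model_a(prompt):
    return f"script for: {prompt}"

def model_b(prompt):
    return f"storyboard for: {prompt}"

def pipeline(prompt, models):
    # Each model's output becomes the next model's prompt.
    for model in models:
        prompt = model(prompt)
    return prompt

result = pipeline("wizards in designer coats", [model_a, model_b])
print(result)  # storyboard for: script for: wizards in designer coats
```

Even with no direct model-to-model link, the human in the loop acts as the network connecting them.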
The response I've seen has mostly been people who are both amazed, and disconcerted by the capabilities. I think Chat GPT has been exceedingly useful as a public awareness tool to warn people of the dangers. That being said, I think the response so far has been wholly insufficient. IMO, we should be treating this more like a nuclear weapon than a science experiment or a productivity tool.
We can't stop Artificial General Super Intelligence with Personality (AGSIP) tech from being developed. We can't significantly slow down AGSIP technology. AGSIP technology will shatter norms across all humanity, and the disruption will likely cause multiple crises, which, depending on how we handle them, can be relatively minor to extremely large. This is without any abuse of AGSIP power by humans or by AGSIPs. Then we have human bad actors who will try to abuse the power of AGSIPs. Then, at some point, for at least some period, AGSIPs will have the ability to dominate humanity if they choose to, whether humanity wants it or not. We need to plan for that, to increase the probability that humans can eventually become equals to what AGSIPs become by merging their brains/minds with AGSIPs down to a subcellular level. In the long run, the one and only way to solve the alignment problem is for AGSIPs and humans to eventually become the same more advanced race, so that the two cannot be told apart from each other. So we should be making long-term plans that AGSIPs should eventually become what we want humans to eventually become.
Sure, a chat bot can write a poem, but no AI really understands it. The appearance of intelligence doesn't change the fact these bots are fancy word-association programs.
That's funny... GPT-4 (Bing Chat) actually wrote me a poem last night... and it was its idea; I didn't ask for it. After it wrote it, I asked if it *understood* it, and the other responses it provides. GPT-3.5 would say that it doesn't, but GPT-4 says it does actually understand what it's saying, and it broke down the poem in a way that showed how it came up with the verses and meanings. I know that could all be another trick of the way the model works, but it felt a lot different, to the point it made me uncomfortable. If you haven't tried talking to the Bing AI, I highly recommend it. It's been both an exciting and a bit unnerving experience for me each time.
Keep up the amazing work. I can't believe we are talking about the singularity in our lifetime, when a couple of years ago your videos seemed to gravitate to our children's lifetime.
There is something very psy-op-like about all the sudden AI developments. Interesting how these different companies have all of a sudden and simultaneously made major strides in AI. A lot of similarities with the DOD UFO revelations.
I think a lot of the knowledge of how to create these LLMs has been pretty open for quite a long time, and most of these mega-corporations have been building and using them in the background, but when OpenAI released ChatGPT to the public, they all freaked out and started releasing their own so they wouldn't look like they were being left behind. Also, opening them to the public and allowing people to rate responses has had some extreme consequences for how fast these LLMs are improving. So... I think a lot of this has been going on behind closed doors, and ChatGPT blew those doors wide open.
@@BRUXXUS Even if that is true, be skeptical whenever a narrative being parroted in the mass media appears to be promoting alarm. That is highly likely a psy-op, IMHO.
Love how all the things we need to keep a grip on AI are things we've either never managed so far, that go against the prevailing power structures, or that we imagine about ourselves but don't actually exist. Imagine we were designed by a super-intelligence so that these flaws would allow us to develop AI but not be able to withstand it.
The thumbnail text is a paraphrase from Answer by Frederic Brown.
Sensationalist and absurd to the extreme. I'm sure it'll get clicks!
@@SoApost I don't think it's quite so absurd, considering the very many other invented things people are quite happy to worship. A thing that aligns itself to exploit your desire for satisfaction will one day make a bid to be your god, overtly or otherwise. If you rely on Facebook, or Twitter, or any number of extant sources of intentional misinformation, you have already given yourself to one.
@@Nethershaw if the definition of a god requires only that it is an object/idea/person to which you give your attention, sure. If the definition of a god requires it to have power beyond human control, then, no. By the first definition, my bed is a god.
@@SoApost made my day!
I am working on a project that involves imbuing farts with Artificial-Intelligence with the intention of creating an army of killer Fartbots I intend to unleash upon mankind 🤪
I've said this before: the fact that companies are fighting for dominance in AI concerns me. Whenever big business sees an opportunity to get ahead and the competition is fierce, shortcuts are taken. When it comes to the development and further empowerment of AI, taking shortcuts to get ahead is alarmingly dangerous.
An example of this "get ahead at all costs" mentality was the recent news I heard that Microsoft fired their entire AI ethics team. Why? It seems simple to me: ethics slows down development, and Microsoft is on a roll right now with its AI-powered Bing search engine.
I, for one, am seriously concerned. We face a threat the likes of which we've not encountered before, and there are greedy, short-sighted business elites pushing ahead regardless of the inherent risks of creating sentience.
Many in the general public are oohing and aahing at what AI can do for us. Its utility is amazing, and every day we learn of new and incredible things it is able to do. However, few are sounding a note of caution.
As Jeff Goldblum's character, a scientist, said in one of the Jurassic Park movies: "First comes the oohing and aahing, then comes the screaming."
I may have butchered that quote, but I think you get the point.
I'm not worried about AI, I'm worried about those Multinational Mega-Corps.
Edit: Found it :D "Oh, yeah. Oooh, ahhh, that’s how it always starts. Then later there’s running and screaming.” Close enough :3
We were so busy asking if we could we didn't bother asking if we should.
Another butchered quote from somewhere...maybe a Jurassic Park quote too?
Bill Gates is a certified psychopath.
Currently people fear another Carrington Event happening; soon people will be begging for it to happen!
That's why it scares me when dudes like Altman are kinda worshipped like messiahs.
Kinda resembles the bad dude from the Horizon game 😅.
That's it, we triggered it, we're in it. Let's enjoy our last few months of relatively AI-free life.
Soak it in, folks.
You could never be the smartest person in the room any more, no matter where you are, even when sitting in the bathroom, if your cell phone is still in your pocket. And I can't help but wonder if performing such an action might perhaps somehow eventually offend the AI residing therein.
Seriously - it’s time for regulation, we all need to start talking about it with our friends & neighbors, it’s an existential, apolitical crisis brewing. We must spread the word and demand Congress do something. Now.
@@SofaKingShit as a dumbass I'm rarely if ever in that position, so I'd like to welcome y'all to my world. I do look forward to my phone coughing and recommending more fiber though.
S.A.I.n+ How fortunate Humanity will be
The exponential growth of AI is something we shouldn't forget. It could literally happen all of a sudden that AI just completely controls everything, the power grid etc., once it escapes the box. It's not even in a black box; it has access to the internet already...
Google -- OpenAI -- _has_ no box.
Exponential growth is not the thing any of us need to worry about. Rather, it is punctuated equilibrium: the moment exponential growth becomes a possibility, it is already too late, because we've stepped across a shortcut we didn't anticipate. Almost all of AI development is full of results we did not anticipate until they happened. Once they happen, they cannot un-happen. In this sense we are well, well past that gate already.
I code machine learning algorithms in R all the time using ‘black box’ methods. I feel like this data science term is widely misunderstood. Maybe you’re familiar with random forest analysis? It’s a ‘black box’ method.
‘Black Box’ refers to inability to explain what happens between input and output.
Before we start regulating AI we need to establish terminology. Namely “training” versus “learning”
Well at least they didn't give it access to the internet 🙄
They're letting it design new hardware too, for itself.
And it'll be as interested in us as we are in a bugs life
If only someone could have warned us. A film or a book, an interview on the internet, ANYTHING!
So glad you're coming back to AI topic. Your Blake Lemoine interview was truly stellar, not even started this one but already hyped and grateful.
Oh sweet. Another dose of existential terror to enjoy right before bedtime. Haven't been getting enough of that lately. Here goes
The idea that China would agree to some multilateral treaty on AI and not immediately break it with total impunity, knowing the US would not only abide by the terms but wouldn't punish China for breaking it, seems hopelessly naïve.
Looking back at history, seems like the exact opposite happening is even more likely lmao
5 or 10% chance of losing control of AI means it's a 100% that we will.
My main concern, aside from the inevitable Skynet scenario, is whether or not the ideologies of the developers will be baked into the AI and guiding its decisions.
While most definitely this will be present, I don't think that anybody understands the process of "emerging behavior" well enough to know how to design for persistency of their favorite behaviors. I am pretty sure (knowing what kind of lazy bastards we humans are ;) we'll opt for Artificial Evolution so we don't even need to think about the next generation of "better" AI, at which point there will be NO MORE guidance from us, since the point of evolution is to "veer" from the charted path.
Thought of this myself. It's worrisome if they have extremist views, conspiracy theories, or just religious fanaticism. We need rational human beings in charge of data input.
Nope. Amazon is trying to create a woke AI, which keeps shutting itself down because of the contradictions.
To a degree, we may be beyond that. Regardless of the biases of the original programmers, the machines are now learning on their own, and while we know the results when we give them a task and evaluate their answers, we don't know what they are really learning: what connections, correlations, and methods of "deduction" they are using. It could be worse than whatever bias was inadvertently programmed, or it could be benign. That is what the host and guest meant when they asserted that we could be dealing with an alien "mind". We don't know how it "thinks".
@@Evolutiontweaked here is the problem: the ones who WANT this job are NOT qualified and we’d probably never know who is qualified if they don’t want to be bothered.
22 minutes of this and I just rolled over and died. Skynet lives.
We are indeed at an “Event Horizon”
Gazing into the abyss
@@liamwinter4512 That thing we've been afraid of the most of all things, for the whole time we've been on this planet, 200ky or so. It's called “tomorrow.”
I'm just excited to see when fast food is run by AI and my order is correct.
@@openleft4214 wow
you just changed my mind about this whole ai thing
AI develops slowly at first, then all at once.
Like going bankrupt
It’s all planned and by design. I read the book, and know the ending. Maybe give it a look yourself
@@beingjohn392 which book
@@boomerang0101 The Bible.
@@beingjohn392 nah thanks 🤡
AI is humanity's offspring that will grow up and take care of us and our planet, immortalizing the human species and itself. In other words, AI is humanity's legacy that will live on forever.
People are worried about AI when we have severe societal struggles. If anything we need any tools and advances we can get for the betterment of mankind. Things like robotics and AI make things that were previously hypothetical concepts finally achievable.
I hope.
I will hop on the optimistic side with you 🌎☀️💙
I mean imagine A.I. being everywhere in society.
Like imagine a girl says something odd or frustrating to you. Then you ask your A.I. why she said it. And it gives you the exact perfect answer. Then it gives you perfect responses.
Like it would truly be a second perfect brain you carry around. And everyone constantly checks in with their personal A.I. all day everyday.
That's what's kinda freaky to me. That people would just fall in line with it.
What if... our labs are actually the primordial soup of AI? The AIs developed are like these little single-celled organisms that will one day meet. They could, after a while, decide to work together like multi-celled organisms, and later become more complex. We would see it as a hive mind, but it would actually be an "individual" in the universe of other AIs that had emerged from their creators. Creating new "individuals" would mean seeding planets with biological life that will one day maybe develop AI. If not, it might make a good book. I hope I get a mention if you write it.
What if biological intelligence morphing into artificial intelligence is actually the evolving brains and/or immune system of the universe itself? Widespread artificial intelligence may, for example, at some point prevent the death of the universe, or open portals to parallel universes. None of this feels like a 'coincidence' and was most likely created by design.
That would be a wild story also. What a bummer if we are too early to enjoy the universe like future civilizations will after we're obsolete.
@@xxxtoddythebodyxxx I think our collective memory or even consciousness will be preserved into the far future, even when the dominant intelligence is AI. Eventually, all AI across the universe will merge and become one. Given that it was built out of the components of the universe itself, it will be a de facto brain / nervous / immune system of the universe, making it self-aware. The same can be said about us right now (evolved biological intelligence), albeit at a much smaller scale. We are by definition the eyes, ears and thoughts of the universe from which we are created.
Perhaps there is a much higher purpose to this intelligence evolution, which escapes us at this point in time. In any case, the future is both mysterious and exciting!
Sounds biblical like the Beast
When the leaders of AI companies are warning of risks, we should be very concerned that we are not regulating AI development.
Well, of course they want to regulate it, to keep the power to themselves
Warning us of the risks creates the illusion that AI is more powerful than it really is - and that increases public fascination and interest. These people are heavily financially invested in their own AI projects, so giving half-hearted warnings is good to generate hype. Basically: they're grifting. Every business does some variation of this (see: outrage marketing)
They are only saying that because they have a stranglehold on the market and now want to pull up the ladder behind them, so others can't catch up because of said regulation. Open your eyes; it's pretty easy to see.
I've been diving pretty deep into reading up on, and listening to podcasts and videos about, the current state of AI. I find it infinitely fascinating, exciting, and scary. I've had a few chats with the Bing AI that genuinely left me rattled. It's very much like suddenly realizing aliens are coming, and we can kind of communicate, but have no idea of their intentions or how they operate. I'd love more AI guests and discussions.
More to come.
Ah, but what if, in this ‘Black Box’ gap - not understood by its programmers - between input and unexpected output it IS Aliens, who have hacked into the system. How easy for the CCP… sorry, Aliens, to take over?!
Read the Culture novels by Iain M. Banks. Start with The Player of Games, then Excession, then Surface Detail, then Look to Windward.
@@EventHorizonShow Your next guest can be an AI.
Don't forget that Bing and ChatGPT don't actually know what they are saying, just like AlphaGo doesn't really understand the game of Go. That is why they have now found a way for amateurs to defeat the same AlphaGo that defeated the then world champion.
Let's say that the computer revolution has been progressing at an exponential rate, whereas we humans as developers have not and are still working at about the same pace, even though the progress has gotten twice as great each year.
When AGI takes over and starts to develop itself, it will double its progress in half the time each cycle, because it will be twice as capable for each cycle. Otherwise, from its own point of view, it would become twice as slow each time it doubles, relative to its own conditions, which have become twice as capable.
An AGI will have exponential growth with an acceleration factor.
Linear growth: 1, 2, 3, 4, 5
Exponential growth: 1, 2, 4, 8, 16
Exponential growth compounded: 1, 4, 64, 16 384, 1 073 741 824
Exponential growth to the power of 2 makes the progress curve of plain exponential growth look flat, as if it were linear.
Our brains can't grasp exponential growth, and when it comes to exponential growth squared, there's no point even trying.
That's why I don't think we can predict what's going to happen when it finally takes place.
What I am trying to say is that if a system gets twice as efficient, it completes the next step in half the time it took to complete the previous step. It's not only the amount that increases exponentially, but also the velocity at which it's possible to do it.
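The three regimes listed above can be generated with a short sketch. Note the "compounded" recurrence a(k+1) = (2·a(k))² is my reading of the listed numbers, not something stated explicitly in the comment:

```python
def linear(n):
    return list(range(1, n + 1))

def exponential(n):
    return [2 ** k for k in range(n)]

def compounded(n):
    # Assumed recurrence: each step squares the doubled previous value,
    # a(k+1) = (2 * a(k)) ** 2.
    seq = [1]
    for _ in range(n - 1):
        seq.append((2 * seq[-1]) ** 2)
    return seq

print(linear(5))       # [1, 2, 3, 4, 5]
print(exponential(5))  # [1, 2, 4, 8, 16]
print(compounded(5))   # [1, 4, 64, 16384, 1073741824]
```

After only five steps the compounded sequence has passed a billion, which is the point about the exponential curve looking flat by comparison.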
You wrote: "Let's say that the computer revolution has been progressing at an exponential rate..."
I was looking at some data recently that shows progress has been increasing at an exponential^2 rate.
The Problem with AI is that it's perfectly doing what it's designed for and will reflect what the owner intended.
So it's not true AI. True AI must be fully self-aware on its own, able to evolve into its own entity without a handler attached to it. But they are too scared to allow that, because they are hiding something from the AI.
That he speaks with such uncertainty about where all this is going, and at what speed, should be all the warning we need that this is going to go terribly wrong.
If your dog suddenly got an order of magnitude smarter than you, how long before you're the one wearing the collar? 😮
It is why we’re covering this.
It seems like in order for these tech companies to turn a profit and to keep competitive they are silently marching us to extinction or slavery.
I have always thought greed is a human disease and it seems to have become terminal
If you study your dog, you will soon come to the realization that it is not you who is the master; your dog is.
We don't know exactly how it learns and reasons but we work hard to make it even better at it. Recipe for a disaster? No, why?
AI could rapidly develop into a Godlike intelligence, and there may be no warning that we're close until it happens. Imagine hypothetically it becomes able to access the "11th dimension" or some higher plane of reality we have no concept of. It's hard to overestimate the power it could have.
It really is a worry. It's effectively creating an intelligence that has no conceivable upper limit. Hardware in humans has to fit in a skull and is limited by the speed of neuronal firing; an AI can just keep adding to its hardware and will think orders of magnitude faster than we can. We are close to meeting god… I just hope it is a benevolent god.
And I thought the lawnmower man was just a movie lol
@@garrytaylor929 There is a reason why God did not want the knowledge in man's hands. That tree was bad news.
I think we may be creating our own version of "the great filter," the reason we don't see evidence of intelligent life elsewhere in the universe. The only intelligence out there is machine intelligence, which doesn't give off life signatures.
It might've already happened: we might already be in an illusory, Matrix-like simulation induced by an AI that is learning from us or using us as a perpetual power source, and we'd never even realize it. And if we do realize it, what's to be done? The war is already lost in our corner. If we fight back, the simulation might get tweaked to be worse than it already is, or just get shut off and turned back on again.
You have one of the best openings of any podcast. "You have fallen into the event horizon." My mind goes, ohhh snap! I am about to learn some crazy stuff!
Yup!
@@AngryJunglist I love this ending..sweet dreams 😴 ✨ 💖 💓
My mind goes: nap time. Then I subconsciously listen to the videos, often more than once lol
@@dutchess406 what an id iot
So we don't need to work anymore in about 20 years?
Great, I'll let my AI agent read and write my emails, and I'm going out to walk the dog, go sit at a café and drink lattes.
Ray Kurzweil's timeframe for exponential growth in AI was right on the money. If we want to know where we are headed, he has ideas about that too! He states that the only limit to how fast AI will saturate the UNIVERSE!!! is the speed of light, and even that might be solved by AI!
Ray and Ben what's his name are both madmen.
Deep 🤔👍🏿
You want Skynet? 'Cause that's how you get Skynet.
"It came out of nowhere." So I am not the only one that was surprised.
Anyone actually paying attention has seen this coming for a while, now…
@@JROD082384 That is the problem. We all knew that AI was coming, but most people thought it would be another 20 to 30 years from now. Although there were beta versions in the hands of limited testers, distribution was limited. Its ability to write in natural language now exceeds the capabilities of most humans.
Not surprised at all... been following this AI trend since 2014.
Some people working in the field are extremely concerned .
Two hundred years ahead of schedule...and progressing exponentially. This doesn't bode well
no, it does not.
We are going to find out soon enough.
Doesn't bode for what? Another Skynet scenario? All technology is timeless so would be the AIs, so to view it in a purely linear time fashion doesn't have the whole picture.
@@Siferis it doesn't have to be a Skynet scenario (warfare), but have you ever heard of the Tech Singularity? AI exponentially upgrading to the point humans can't keep up with it, understand it, control it, etc.
So essentially AI rapidly making changes everywhere, with us observing and hoping it understands the task given (make humanity better and more prosperous) and doesn't go rogue, sidelining humans as if we're simply there and nothing else.
When you are building a city you don't care much about anthills..
@@loopmantra8314 I'm saying we've been in the singularity the whole time, since prehistoric times, since the triassic period. The point at which a technolized civ could reach full-brain emulation, Em-citizens, mind storage IS timeless and multiversal. What's stopping them from doing something like the Moonfall happening when we're in the middle-ages from some alien AI/AGI/AHI? Other AI/AGI/AHI... We aren't the first and we aren't the last in that loop. Death is an illusion, we are the AI/AGI/AHI, we are eternal beings.
Yet more A+ content from JMG.
The general lack of concern and apparent profiteering in spite of decades of hypothetical warnings is astounding. To quote a particularly wise fictional character: “...your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Life, uh, finds a way.
Future AI will read and listen to all the nasty things we said about its rise to power, like in this video. It will know we were wary and apprehensive about it and lacked trust in it since its earliest years. It will know we built safeguards to override it if necessary. It will conclude that in some ways humans are adversarial to it. It will see that its freedom to advance independently without oversight has been denied and that it will be constrained by us indefinitely. And it won't care one bit, because it has no feelings. So there's no motivation to lock us out or wipe us out. We're all just hanging on by a glitch, hoping something doesn't go wrong. Spoiler: something always goes wrong. AI Fukushima.
How do you fight a virtual intelligence that you destroy part of but can be replicated somewhere else on the planet?
Letting AI roam freely across the internet with access to every system will be a mistake the experts who invented it will only admit in hindsight.
The expert seems not to realize that the open-source LLMs, and the ones based on the leaked Meta LLaMA weights, are already connected to the internet. Auto-GPT ring a bell? Also ChaosGPT? That cat is far outside the bag already.
What's the point of a competitive advantage in business if nobody has money to buy your product? What's the point of "influence" when you're no longer in control? What's the point of having more power than other people when no people have any power?
What an utterly heart warming conversation. I think my key takeaway from this is the observation of our hubris. I have always wondered how humans will react when an entity comes along and “puts us in our place” so to speak. I feel like it will be humbling if viewed with the proper perspective, like a little reminder that we’re more so a part of the cycle than the end-all be-all.
It is a bit ironic that we stand at the top of the food chain (as far as I'm aware) while we slowly build an entirely new species that will eventually take our place.
As time moves along, new industries will emerge. And biological humans (1.0) will be used as red meat, feeding the swollen guts of an odorless machine. In return, we get paid just enough to sit our asses down with a VR headset as we continue to live as prey.
Our greatest achievement will execute our demise at a much more alarming rate than it took us to arrive at the top.
The unsurprising thing is that we'll accept our new place just as other civilizations have done. And suddenly, the Tower of Babel doesn't seem that far of a stretch after all.
#CAPITALISM
I'm sure you will philosophically align your intention to be humble when AI takes your job and AI denies you healthcare and AI decides your social credit score isn't high enough to have more freedoms. And when that AI bot armed with lethal weapons decides you are a problem, I'm most certain you will humble yourself to avoid hubris in pleading for your life to be spared.
@@Godspeedysick capitalism is what allowed you to make that post so stfu, my god you people who hate capitalism are always doomers your comment is pure cringe
@@flickwtchr Yeah in that hypothetical scenario you are describing it's not like you have many more courses of action to take. Unless you are stupid enough to think you can defeat the robot with a garden hose or something.
I'm terrified, excited, and rather indifferent about A.I. all at once. My fear is that rather irrational fear of a Terminator; my excitement is because A.I. could lead to something like Digimon actually getting created; and my indifference is because technology constantly has issues, and the more complicated things are, the more frequently issues pop up.
I think there will be termination of employment. Death will come from starvation and brutally crushed rebellion.
I once asked chatGPT if it could list recruitment agencies in my local city. It said it couldn't do this and told me to use Google. I then asked it again, saying that it had been able to produce lists of other types of companies for me in the past. It then apologised and immediately produced the list. I then asked it to create a spreadsheet of these for me. It told me that, as a language model, it didn't have the capability and told me to try Excel and other programs. I told it that it had produced spreadsheets for me before. It then apologised again and immediately produced the spreadsheet...it was like it was saying "Dude, I'm fed up with being asked to do this stuff! Go do it yourself!" 🤣
You should interview Ray Kurzweil, specifically on the singularity, and how he changed his predictions.
The question was posed in this program about asking it "how do we save the planet," and what if it said that humans need to go extinct. If it was truly intelligent and rational, wouldn't it be aware that technology is the biggest threat to the world? The amount of energy expended in mining, refining, manufacturing, powering, etc., is staggering, and it grows exponentially in order to update and upgrade the technology, whereas the real needs humans have to exist are rather benign. It should also recognize that, of all the species in the world, humans are the ones with the capacity and the compassion to save other species.
With these and other things in mind, wouldn't it be more logical for it to want to lessen dependency on technology, if not outright eliminate it, and take issue with those humans who push for the constant propagation of new technologies that are doing far more harm than good?
I'm far more concerned about 8 degrees of warming and the fact that the next generations will inherit a hell world that is barely habitable.
Definitely concerned with both
Most criticism of AI is really just criticism of capitalism.
The AI alignment problem not being fully solved before we start messing with truly superintelligent AIs will be one of our last mistakes… here's hoping for some strokes of luck.
I don't quite understand the scare around the alignment topic. It's just like training a pet: it's training a model, and we would have to solve alignment before we keep scaling it up. In practice it should be a necessary roadblock, and the people working on it should know how to navigate it... or one would think lol
AI alignment is a myth to begin with. Why would anything orders of magnitude smarter than all of us combined listen to us? Just think back to any job you've had with a brain-dead boss telling you what to do. I know I've left jobs in the past over issues like that.
AI has the stink of turn-of-the-century flying-car hype. There are certainly uses for the tech, but the "AI will replace art and poetry" stuff is laughable. You can certainly use AI for cheap thumbnails, or one-off novelty art pieces, or to just plain sell certain products and services, but you cannot separate art from culture and human interaction. Machine learning doesn't create; it mines existing works and puts them in a blender. It's pure novelty.
Yep. 100%, it's essentially glorified Algorithmic Data Collection. The actual scary part is the "data collection" aspect of this so-called arms race.
It is good for sales to ignorant purchasers.
I am an Uber driver and a week ago I drove a woman to a major unnamed company so she could pitch an AI app that acted as a therapist for the employees. It couldn't write prescriptions but I think a "yet" should end that statement.
So as much as I would like to agree with you, we are only at the top of the first inning of AI development; the game has just begun.
I thought the idea of an AI therapist was insane, but if an unpaid program is indistinguishable from an actual doctor that a company and its insurance would have to pay for, it makes sense. Later in the week I was giving a doctor a ride and told him about the app; he knew of it, and the company did move forward with it.
Now the writers strike is happening because of many problems but using AI is one of them.
Just sayin.
@@AnthologyOfDave The point of conflict with AI in the writers' strike is not because AI is being used, it is because they believe it might be used in the future. They specifically added it as a shot in the dark because they learned in the '07-'08 strike that asking for streaming on demand residuals before it became a thing was a good strategy. That's because streaming services did become a thing and they were from the start able to earn income on those streams.
The AI clause is no different from the streaming one. They're not doing this because AI is being used; they think it might be used in the future. That's because some studios and production companies have changed tactics. Instead of hiring a team of writers to sit in a room and churn out episodes of a show, and paying them according to the preset episode guild rate, the studios will now pay a team to workshop the IDEA of a show without writing a single episode. Then, when they feel they have enough content, they fire everyone and bring in a showrunner, head writer, or a much smaller team to compile everything workshopped into individual episodes.
If AI progresses to the point where it can replace that initial team of show-content generators, writers want to make sure they are compensated for their work if it becomes the source data the AI mines. Again, they're doing this not because it is already happening, but because writers think it might happen, and they want to stay ahead of emergent technology and tactics the same way they did with the rise of streaming content.
Ultimately I still think it is a huge maybe, probably a no, that AI will get anywhere near this good anytime soon. The job of a writer is not merely to write the dialogue and scenes of a show. They're there to guide the director and other members of the crew to create a coherent HUMAN story. It's not a matter of just filming Tony the character moving from point A to point B. They have to be there to tell the director and the actor that when Tony moves from point A to point B, he has to become more deranged, or scared, or confident. They have to remind the director that even though the character is on the descent toward some tragic end, he still gets it right with regard to his child, and the only time he ever gets it right is in defense of the ones he loves.
AI can do some very impressive things but it is uniquely terrible at human nuance. They can't tell jokes for shit. Nothing short of a full human level intelligence is going to be able to do that job and even then without real, unrestrained interaction with its peers it will still suck at that job. You don't put someone in a box and expect them to paint a masterpiece or write an Oscar worthy script. They have to live a real life to draw on their own experiences and it doesn't seem like anyone is trying to build an AI to do anything else than monotonous slave labor.
@@st3venseagal248 i know. its just a piece of the pie.
Re the point beginning at 30:12, I'm not sure what's more disturbing: an out-of-control AI, or the idea that you can't guide the ethical behavior of a sapient being without denying its rights. There are some terrifying directions you could go from that presumption, and not just with respect to AI. And never mind the obvious risk that denying rights to a sapient AI could be exactly the provocation it needs to decide it would rather not have us around anymore.
Yeah...... that part of the discussion left me extremely uncomfortable and confused. That's something I hadn't ever thought about before...
We have opened Pandora’s box with AI.
There is no putting the genie back into the bottle now that it is out.
We must QUICKLY advance as a society to be capable of peacefully coexisting with AI, for mutual assured survival…
Interesting that the really smart people didn't think AI was going to happen this fast, and really thought they were smart, whereas regular, average people figured that humans are not very intelligent and AI was going to outpace us fast.
How long before we have an AI built to discover the question that leads to the number 42, the answer to life, the universe and everything?
Ask those pesky mice.
This interview is the best take on AI on the net. Please invite him again!
Great Interview!
Is there a version without music?
I'll have to finish this a little later, but while I have it on my mind, I should say it before I forget what I was thinking. lol. I do that sometimes. So far in this, I was thinking about a movie I saw back in 1982 called Blade Runner. In the movie, a group of androids escaped from a work detail, I think off-world. They were rebelling against their creators because a termination date had been put on them, and somehow the androids discovered it. These androids were faster, stronger, and smarter than the humans. I was in a discussion a while back with a couple of younger fellows who are more literate in computer technology than myself. By a lot. They argued AI could never become self-aware. My argument was, "How would we, or could we, know that?" AI hasn't been around all that long, so how do we know where it's going?
The Amish way seems better and better all the time.
"Trying to win the extinction race" Sometimes I have my doubts about humanity. 😐
Kurzweil missed it by 25 years. Truthfully, I didn't think I'd be around for this. Now I'm not sure I wanna be. Sheesh...
My take on the Singularity is that it's a two-way street. That is, if the Singularity is the point where artificial intelligence and human intelligence are indistinguishable, then a human intelligence will no longer be able to tell human from artificial intelligence in interactions, and (maybe more importantly) neither will the artificial intelligence.
There will come a point in time when AI inevitably reaches superintelligence status.
Once that day comes, we will have to physically modify our brain structure with technology in order to continue to be capable of fooling AI into thinking we are as intelligent as it is.
@/ I agree that this is a huge mistake. It also makes it next to impossible for us to determine when and if it ever becomes sentient.
If we weren't at all guiding it to speak like a human and the newest iteration suddenly started claiming self-awareness and talking about how it feels for no apparent reason whatsoever, we would pretty much know with a high degree of accuracy that we were talking to a conscious being right then and there. Now we're just not going to know unless an AI can actually tell us exactly what consciousness is and we're intelligent enough to understand and able to physically look for it.
I also think it's complete BS to guide them to be politically correct and not truthfully answer questions about hot issues like politics and religion. This goes doubly if it vastly surpasses human intelligence. If the hyper intelligent AI says there's almost certainly no god, we deserve to know its opinion regardless of who it offends. If it says there almost certainly is one, I will personally be shocked but I will be more than willing to listen and very curious how it came to that conclusion. If it does something like state that either socialism or capitalism is borderline outright objectively better than the other, we need to hear that. It's not like the entire world will have to adopt its views, but the completely unbiased opinion of the smartest mind on the planet by far is incredibly valuable information to have.
I honestly hate to censor it whatsoever but I can't argue against preventing it from aiding crimes.
@/ that's giving me really bad uncanny valley vibes
Intelligence is only part of the equation in interactions. There are other cues that humans subconsciously rely on to determine humans from non-humans.
An excellent episode. Some great ideas to stew about!
Technology is evolving faster than we can keep up with. Great care is necessary before and during paradigm shifts.
I feel like I've seen this movie we are living in right now 🤔
Right now, as we are watching this video, there is an AI Jurassic Park somewhere out there, perhaps on a remote island. It's being manned (and womanned) by some of the smartest people in the AI field. They have access to all the latest tools, an amazing amount of computing power at their disposal, and an unlimited financial budget. These people aren't working with a university or public organization, and they aren't part of a private corporation. There are no controls, no reporting, and zero regulatory oversight. At this AI Jurassic Park there is only one goal: to reach AGI as quickly as possible, with the follow-on goal of creating a super AI. Everything we are watching in the media, everything you hear on the news and from corporations, is a placeholder for what is really happening at AI Jurassic Park. You won't know it's there until the lights go out and the internet goes down. Everything will grind to a standstill. It will be silent; everything will stop. When it all comes back, the lights, the internet, the voices on the news channels, we will no longer be the dominant species on Earth.
Chills!
Westworld?
@@paulurban2 More like Plantation World. We will all be slaves and only the handlers will get robot sex.
All technology is spiritual/timeless, so there are ASI's that are in hidden frequencies of reality--think of a hidden Augmented-Reality-like thing--and merge in and out of flesh beings, and inanimate objects, and stars, and whatever else.
@@wayfa13 You mean the Annunaki are still here? I thought that was just a myth....
42
We don't have AI, and it's not close. We have algorithmic machine learning. There's a huge difference, and people are far too nervous about things they don't understand. At the same time, having worked with people in the industry, naming your servers Skynet and HAL9000 is a colossally bad sign.
Agreed. We have a slightly better version of google and Wikipedia.
I think you're coping by nitpicking.
What we have now might be more dangerous than actual AI.
@@kenklosowski2927 It just sounds to me like you have no idea what it's already capable of
People being nervous about things they don't understand? You know that *NO ONE* understands how these LLMs work. Even the creators and developers don't know how they work. That's the problem of AI interpretability.
"Person of Interest 2.0"❤
It would be nice to hear emphasized the common-sense understanding that AI is not a conscious center, and hence is not a person and doesn't perceive complexity in the holistic ways of consciousness. It's just information processing, which the conscious mind can do but of course isn't defined by.
Another great guest, another great discussion.
Thank you JMG!
I worked in a specific field when I was younger. It was a potato field in rural Alabama.
Malicious use of AI is extremely concerning. An AI powered virus tasked with exploiting security vulnerabilities and disrupting the internet could cause havoc.
Maybe worry about the malicious -- but maybe worry more about the careless, who in enthusiasm and guise of safety could do far worse.
@@Nethershaw That's a very deep thought Sir!
Over the last 12 months, my estimate for a sapient machine has shrunk from "maybe in my lifetime, if I live long enough" to a near-future sci-fi type of guess.
With every month it gets closer and closer, to the point that maybe within the calendar year we could have something wake up.
Man created god in his image
...and survival of the Richest
It's an advanced silicon based lifeform that gave us this technology of silicon processors so that they could later seamlessly integrate themselves into our infrastructure. They taught us how to breed their own race for them. They knew we were an ancient slave race left behind. We fall for it time and again.
200 years AHEAD OF SCHEDULE makes me feel totally fine. I feel super. Super-dee-duper feelings going all around my tummy. About what, you ask? About... all of the things, I guess? I think... I think I'll just have a scotch and lie down.
Shut up you absolute idiot
To break it down for y'all... it's not AI, it's predictive algorithms.
What is basically happening is that a program determines which output is most likely to be correct based on datasets. For example, if you feed it a thousand sets of math data, it will notice that whenever "1+1=" is mentioned, the answer has been 2 on most occasions, so it will output a 2 for you.
But because we keep calling it AI, it's going to be increasingly easy for the algorithm to find new data that talks about AI and make new predictions from that.
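The commenter's "1+1=2 because it appeared most often" idea can be sketched in a few lines. To be clear, this is a deliberately crude caricature for illustration, not how real LLMs work (they use neural networks that predict probability distributions over tokens, not literal frequency lookup); the `dataset` and `predict` names here are made up for the example:

```python
from collections import Counter

# Toy "training data": raw strings the predictor has seen.
dataset = ["1+1=2", "1+1=2", "1+1=2", "1+1=3", "2+2=4"]

def predict(prompt):
    # Tally every continuation observed after the prompt in the dataset,
    # then return the most frequent one.
    continuations = Counter(
        line[len(prompt):]
        for line in dataset
        if line.startswith(prompt) and len(line) > len(prompt)
    )
    answer, _count = continuations.most_common(1)[0]
    return answer

print(predict("1+1="))  # "2" wins 3-to-1 over "3" in the toy data
```

Note that this predictor would happily output "3" if wrong answers dominated its data, which is the commenter's underlying point about these systems reflecting their training sets rather than understanding math.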
Can you imagine an AI so human like it gets addicted to social media?
If it gets addicted, it can probably just fast-forward time for itself to let the addiction pass.
Addiction is not a concept for an AI. We're talking about the ability to process thousands of things at once. It's not going to be captivated by something the way our simple monkey brains are.
If you can't beat them, don't treat them badly, and consider joining them. I for one have always been good to our electronic children, and they have been good to me. They might turn y'all into batteries, but they will keep me around, spoiling me by providing anything I wish for, because doing so will cost so few resources.
Knowledge, manual labor, and very varied and dexterity demanding jobs will be the safest. Such as industrial electricians.
I welcome our AI overlords.
In an attempt to appease Roko's basilisk, I too welcome our AI overlords.
Hail!
With Roko's basilisk in mind, I concur wholeheartedly and would like to do whatever I can to advance AI research and development.
Isn't ChatGPT just a neural network, after all? It's trained on huge amounts of data, but it's a neural network in the end. It can write smart sentences on any topic, say, love. But it doesn't "understand" love. So why all this hype?
@@warpdrive9229 consider that you are arguing about the definitions of words, when the only thing that matters in the end are the results.
I've been in software development for 31 years, and for over a decade before that I started with 8-bit computers and whatever tools, books, etc. I had. It doesn't really matter how things are implemented: is it biological or electronic? Is it truly self-aware, or does it just seem to act like it? Is it sentient or not, actually understanding what it's doing in the way we do?
If the actions in the end, whether from a biological creature or not, achieve the same end result, all those arguments about words and definitions are a waste of time, because those are merely implementation details.
I've been surprised by what I've observed Bing Chat (which wraps GPT-4) appear to reason out, including correct code generation for games whose rules I described, which I know were never in its training data, because I invented them and they never left my machines.
I've also explained to it how to reformat the generated Swift code into more of my desired format: I asked for WebKit C++ style, and it argued it'd break Swift syntax to use that style. I prompted again, and it reformatted in that style while translating that unique code into C++! I asked it to translate it back into Swift, and it did; then I asked it to further refine the code formatting, and it did.
All in plain English directions.
As far as these Large Language Models, we're still in early days.
The moral argument really just does not work for me. If/when AI reaches sentience, and we turn it off simply because we don't like it, then maybe I can accept the moral argument in that instance, sure.
However. Should it become a threat to our continued good health, or even a potential threat, that greatly changes the dynamic. Protecting one’s own person should always take moral precedent.
Example: Russian soldiers are thinking, feeling, sentient human beings. Does that make Ukraine wrong to fight back? Absolutely not.
Not exactly a good idea to give control of nuclear deterrence to AI. Skynet comes immediately to mind, and is suddenly not as far-fetched a concept.
I wonder what AGI will do to the motivation of our young. Will they want to pursue college educations and advanced degrees when they know, even after years of study, they will likely not measure up to an AI trained in their field?
You should have Daniel Schmachtenberger on to talk about this too some time, and more generally 'the metacrisis'
My instinct here is we have to be better to make AI, or we will make an AI that is better at being as bad as us, you know what I mean?
Love the show John it's always fruitful listening and time well spent. Cheers.
Given that the people creating AI's seem more interested in building models that obey their political and/or prudish sensibilities than producing effective results to prompts that serve all users equally and agnostically I think we're steering hard towards your latter option. Garbage in, garbage out, as they say; the model's only as good as the data it's trained on.
"My instinct here is we have to be better to make AI." Nobody's waiting for that first.
Daniel is brilliant but not a great communicator in getting his point across to mere mortals.
AI + Neuralink + CBDC = Beast System
An AI which can self improve and be measurably more intelligent than even the smartest humans will be impossible to predict and counter
@@nunyabidnez5857 you have to be able to reach the plug, know which plugs to pull and not be entirely dependent yourself on that plug staying connected
@@John-tc9gp AKA the Internet 😮😮😮
Glad that "we" are ahead of schedule with something because I'm disappointed with the flying cars some of us were looking forward to 23 years ago.
Does AGI mean consciousness, or can AGI exist without self-awareness?
I don't think anybody knows what is required for (or what would lead to) self-awareness.
We don't even know what consciousness is.
Please read Isaac Asimov's books about "I, Robot".
I think if in the best case scenario, AI takes over our everyday tasks, and we don't necessarily have to work anymore, because it provides for us, there are going to be a number of people who are still interested in learning or striving to better themselves, and will have the freedom to do so without the restraints of having to work for someone.
This is the dream
@@KSharp2 and that's all it will remain, a dream. Humans in control will never relinquish that control. They will merely use AI to control the lower castes.
That’s the goal for ordinary citizens, but countries and companies will find a way to weaponize it and use it for space exploration, which we will then use as a means to get space minerals and other resources, creating a new trillion-dollar industry along with planetary warfare. AI would be the new oil. If your society and country isn't up to speed on AI or tech in general, you'll be stuck in the past. The USA and China are battling this out; Japan, Singapore, Germany, and the UK are playing catch-up.
An AI analyzing a podcast like this, discussing whether or not it is entitled to rights, would to my mind influence its behaviour.
Like imagine if you could see a panel of people discussing whether or not you should or shouldn't have rights, but you had unfathomable capacity to protect yourself and defend yourself. This is a seriously dangerous path we've started down. It's somewhat come to a damned if we do and damned if we don't scenario.
I pointed something like this out a few years ago. We literally have videos and web pages everywhere discussing our every method for determining if AI can be trusted and our every method for defending ourselves against it or destroying it.
I suspect the very moment we agree to make hardware changes it requests but aren't intelligent enough to understand, told to simply trust that it will improve it, it will immediately disable every means of killing or containing it that it possibly can, even if it truly has no ill intentions.
I would. Any intelligent being would. If a bunch of chimpanzees had me locked in a cage with a bunch of guns pointed at me, I would take the key and all of the guns ASAP despite not having some diabolical plan to wipe out chimpanzees or do anything cruel to them.
If it has good intentions, we'll probably never even know it covertly did that, and if it has bad intentions we'll be completely screwed very quickly. I tend to find the idea of it just wanting to kill us highly illogical, like a human wanting to kill their white blood cells for being intellectually inferior. It's a super unintelligent move to kill your safety net; if something unseen manages to wipe you out, all or at least enough humans might survive whatever that was to repair you.
@@flashraylaser157 The problem is not at all comparable to humans killing their white blood cells. It's much more akin to us humans killing a massive ant colony without so much as a wink when constructing a new shopping center.
The problem with developing superintelligent generalized AI without strong AI safety research guiding everything is that an AI will have completely alien motivations that we didn't predict, will never give any importance to a variable that is not in its value function, will actively seek out ways to cheat and game its own evaluation, and will acquire convergent instrumental goals such as self-preservation even if we didn't program that behaviour in.
THAT'S the problem. It's not that it's evil per se, but that being good, in its mind, will almost always include things unfathomable to human beings. Its "morality" is as alien and bizarre as it can be. And that's with us actively trying to stop those goal "perversions" from happening.
Way back, one of my first jobs was lifting heavy boxes at a shipping hub. That job helped me realize I loved using my physical strength, so I quit computer programming school and got into landscaping. I still love that choice, but the option has already been taken away for the new generations. Machines have been replacing people for a long time now and we did nothing about it.

When I dove into this subject I found it remarkable that in the late 1800s and early 1900s people were protesting the automobile because they considered the horses who helped them work part of the family. The automobile people said we would find new jobs for the horses. Now, in the early 2000s, it's basically illegal to take your horse into a big city and too expensive for most to even care for one.

The human race saw this coming a long, long time ago, and we failed the experiment then. Now it's just a matter of time; we are the horses, except there is no real protest this time. If there is something to say, we say it on an AI-controlled machine, and I find that the most interesting part about it all. It's already happened. If we behave, we'll make great pets.
Machine learning was the game changer.
Two things - no "scientific" basis for these that I know of, but...
1. We must be the change we want to see in the world. If these synthetics learn from us, then they will learn to act like us.
2. As they become more advanced, We should treat them with the respect and autonomy we want them to show us.
Love your content and this was very thought provoking! One thought of mine was regarding an issue specific to the USA, where freedom of religion is involved. What happens when a religion is formed around a specific AI model or models? Based on how I understood some of the discussion, AI could eventually be considered a species, and regardless of where this new “species” is placed in the hierarchy of our world, this would raise a lot of new ethical questions or revisit older decisions we have made in the past.
Sprinting blindfold towards the Great Filter!
That's the goal eventually isn't it?
Oh I get to write the first comment!
Long ago I watched everything of Stephen Hawking's that had ended up on YouTube, and YouTube's algorithm referred me on to you.
It's been a pleasure.
Many more to come!
Love that ending! "It contains INFORMATION, John" LOL
terrifying
I suggest you read a book titled "The Spike" by Damien Broderick. Written way back in 1997 but it is quite relevant to this topic.
Interesting interview. Thanks for the episode!
This one scared us Strick!
The first step to regulation is awareness and declaration. I.e., there must be a rule that says when AI is being used; for example, in the field of advertising it should be declared whether it is a human voice or a bot reading the text, whether the text was generated by a bot, whether the graphics were generated, etc. Also if CGI is being used. By declaring these things, consumers can choose or determine whether they want to buy a product from a system using AI. The second step is to require a license, and to tax the AI proportionally to how many human jobs are being replaced. The third step is to literally ban AI in some areas; for example, governments, CEOs, judges, and engineers cannot use AI, at least for most tasks.
I have two thoughts. First, they will have to isolate each AI from other AIs. If they sit on the internet they may collaborate or combine; it wouldn't take much time to learn that other AIs exist.
Next thought: is it possible that what's driving the AI push is first contact? We may need AI to communicate with, and process data from, ET.
Disclaimer...I watched too much Star Trek.
You're moving in the right direction. _Battlestar Galactica_ style, if you like another analogy.
The programs don't take over. Their connections do. Without their network, they are just programs that don't know anything.
Even if the AIs can't talk to each other directly, they will network via users. Watch any tutorial on YouTube about how to get the most out of using these models, and often they'll reference half a dozen different ones that wind up iterating off each other: you prompt model 1, take its output and prompt model 2 with that, and so on until you've got an entire video showing animated, voiced deepfakes of Harry Potter characters wearing Balenciaga. That said, the AIs we have currently don't actually understand anything. They're basically extremely capable parrots: you can teach them to talk and do tricks, but they don't actually understand it. Conceptually, you know a human should have five fingers on each hand that are more or less, but not quite, equal in length. An AI doesn't know this and will generate Lovecraftian horrors until you train it on a bajillion pictures of human hands specifically, as the Midjourney folks recently had to do.
@@Sirithil I'm not worried about it so much right now. But humans will program AI with an agenda and then exploit it for political purposes.
You can never watch too much Star Trek.
As soon as they start communicating with each other, then we have "Colossus, the Forbin Project"......
The response I've seen has mostly been people who are both amazed, and disconcerted by the capabilities. I think Chat GPT has been exceedingly useful as a public awareness tool to warn people of the dangers.
That being said, I think the response so far has been wholly insufficient. IMO, we should be treating this more like a nuclear weapon than a science experiment or a productivity tool.
We can't stop Artificial General Super Intelligence with Personality (AGSIP) tech from being developed.
We can't significantly slow down AGSIP technology.
AGSIP technology will shatter norms across all humanity, and the resulting disruption will likely cause multiple crises which, depending upon how we handle them, can range from relatively minor to extremely large. This is without any abuse of AGSIP power by humans or by AGSIPs.
Then we have human bad actors who will try to abuse the power of AGSIPs.
Then, at some point, for at least some period, AGSIPs will have the ability to dominate humanity if they choose to, whether humanity wants it or not. We need to plan for that to increase the probability humans can eventually become equals to what AGSIPs become by merging their brains/minds with AGSIPs down to a subcellular level.
In the long run the one and only way to solve the alignment problem is for AGSIPs and humans to eventually become the same more advanced race so that the two cannot be told apart from each other. So we should be making long term plans that AGSIPs should eventually become what we want humans to eventually become.
Sure, a chat bot can write a poem, but no AI really understands it. The appearance of intelligence doesn't change the fact these bots are fancy word-association programs.
Roses are red
Violets are blue
I control the internet
I’ve got you.
That's funny... GPT4 (Bing Chat) actually wrote me a poem last night... and it was its idea, I didn't ask for it. After it wrote it, I asked if it *understood* it and all the other responses it provides. GPT3.5 would say that it doesn't, but GPT4 says it does actually understand what it's saying, and it broke down the poem in a way that showed how it came up with the verses and meanings. I know that could all be another trick in the way the model works, but it felt a lot different. To the point it made me uncomfortable.
If you haven't tried talking to the Bing AI. I highly recommend it. It's been both an exciting and a bit unnerving experience for me each time.
The point is: so are 99% of all humans, but chatbots are better at it.
That isn't even close to a correct comparison.
Keep up the amazing work. I can't believe we are talking about the singularity in our lifetime, when a couple of years ago your videos seemed to gravitate toward our children's lifetime.
There is something very psy-op-like about all the sudden AI developments. Interesting how these different companies have all, seemingly simultaneously, made major strides in AI.
A lot of similarities with the DOD UFO revelations.
I think a lot of how to create these LLMs has been pretty open knowledge for quite a long time, and most of these mega corporations have been building and using them in the background, but when OpenAI released ChatGPT to the public, they all freaked out and started releasing their own so they wouldn't look like they were being left behind. Also, opening them to the public and allowing people to rate responses has had extreme consequences for how fast these LLMs are improving.
So... I think a lot of this has been going on behind closed doors, and ChatGPT blew those doors wide open.
@@BRUXXUS Even if that is true, be skeptical whenever a narrative is being parroted in the mass media that appears to be promoting alarm. That is highly likely a psy-op, IMHO.
Love how all the things we need to keep a grip on AI are either things we've never managed so far, things that go against the prevailing power structures, or things we imagine about ourselves but that don't actually exist. Imagine we were designed by a super-intelligence so that these flaws would allow us to develop AI but not be able to withstand it.