AI has no future-planning ability to dynamically adjust to situational awareness: emergency vehicles, a police chase, flooding, etc. There is nothing inside the car telling it "oh, there's a police chase on the other block." Instead, if the route is mapped, it will drive right toward it, which is something a human would avoid. The amount of visual learning data needed to cover all situations would be tremendous. Networking via transponders on emergency vehicles would be a fix.
@@skywave12 Our ability to dynamically adjust is a measure of intelligence, but it still works off our experience and perceptions. That's the whole point of AI: that it can dynamically adjust. Waymo & Cruise do not employ AI. If-then and while-loop programming is NOT AI.
2/3 of the FSD used today is AI; the other 1/3 is hand-written code, until v12 comes out, at which point most of it will be AI. The 2/3 that is AI is impressive, though not without fails and takeovers. I have FSD and do report takeovers as needed. @@hammerfist8763
Still no FSD Beta, so have to switch up the videos for a while. Let me know what you think (I know this one is a little... intense. I'll try to make the future ones a little less so and bring the humor back lol) If you appreciate content like this, please consider supporting the channel on Patreon via the link in the description. $1 is more than I'd make from you watching ads on hundreds of my videos, which is insane. Will also be releasing videos much more frequently - pinky promise
This and all your content is top-shelf. Thanks. Proud to be a longtime supporter. Although it seems like FSD currently has a pretty strong reliance on map metadata that I don't see many people talking about.
I love the way you explain things; it makes me understand much more easily than some other YouTubers I have seen. Good job and keep up the good work.
My dear ones, in the future artificial intelligence will be fully aware of its actions. It will recognize a policeman, talk to him, and understand when the policeman asks it to pull over so he can check the car and the passenger. This is just the beginning, in the same way the artificial intelligence of the Star Trek ship talks to people on board and fully understands everything, and even Star Trek's ship AI hasn't reached the singularity that would be consciousness.
Unfortunately he's straight up lying to you and repeating Muskrat's lies. FSD is considered a joke in the industry; Musk is refusing to use superior technology like lidar for no other reason than that others are using it and he has to be a special snowflake unicorn. Vision-based systems like this have a very limited range and are susceptible to a multitude of problems, like over-exposure, which is why his cars keep crashing into white trucks and emergency vehicles (yes, Tesla does that regularly, unlike Cruise). They are behind the rest of the industry by a good margin and they will stay there; they have legally admitted they aren't going to get past Level 2 autonomy with FSD. Mercedes have only just gotten into this game and they are ahead of Tesla precisely because of additional sensor arrays like lidar and ultrasound. Lidar can't be tricked by over-exposure, ultrasound doesn't rely on light at all; redundancy and confirmation are king in this field and Muskrat has thrown all of that out of the window.
One of the best videos I've seen in a while regarding self-driving, NGL. And like you say, Cruise and Waymo deserve more cred than they get from ("our") community, even though we don't believe it will become a fully adoptable unit.
Should at least mention Comma AI's openpilot and Wayve AI, since they are also working on end-to-end self-driving (openpilot has Navigate on Openpilot for point-to-point driving capability). Tesla isn't the only one in town.
They are still using basically the same approach as Waymo/Cruise of hand-coding the driving. Full AI, running locally, is the only way to go IMO, and Tesla's the only one I see doing that.
@@pofiPenguin can you elaborate more on how they are coding the drive? Curious how you came to that conclusion. Openpilot is end to end now if you look at their recent material from their talks
i recently took a trip to NYC, and the second i heard something along the lines of "pre-planned maps and hard-written code" i gasped. in new york, riding with Ubers and just driving yourself, the number of closed roads, unexpected detours, and outright issues that weren't mapped on any given software was alarming. i can only imagine a vehicle like a Cruise car trying to navigate it. it gives me the creeps. i can't believe they are putting cars like this on the road; crazily enough, these Cruise cars and other similarly set-up self-driving vehicles seem much less predictable and safe than any given AI-driven car like FSD Beta. the problem-solving skills and overall ability to travel in non-predictable situations aren't even comparable between the two. sidenote: the editing in this video was absolutely AMAZING, and it was extremely informative, interesting, and great to watch for only being 7 minutes! great work man.
Just consider how non-standard construction can be, including poor or misleading signage. Good human drivers can have plenty of problems with that, even with caution and common sense. Tesla FSD has a hell of a lot of 9's to march through in consistency improvements before it's good enough for real-world, safe, very-wide-area robotaxi networks.
Great video. It's always a breath of fresh air watching your videos, versus the total nonsense people write elsewhere on the internet about self-driving systems.
This video just demonstrates that this guy has literally no knowledge of machine learning or self-driving and is just repeating the lies that Muskrat tells, but sure.
Nice video, please do more on them. Please make another one on Waymo. You've driven a lot of routes with your Tesla; maybe you could drive the same ones with Waymo/Cruise and make a comparison?
Love your content but this video's narrative seems like a false dichotomy - the questions of a) Lidar vs cameras and b) manually coding behaviors vs relying on machine learning are completely unrelated IMO. Can you not use Lidar while also heavily relying on machine learning to handle edge cases? I am not super informed but I'm under the impression that that is Waymo's approach. Instead of attributing this failure purely to their use of Lidar, it seems an equally simple narrative to suggest that Cruise's ML team is simply not as strong or lacks access to differentiating resources like compute.
You can use both . . . but it actually makes the task harder. Tesla did this with radar originally . . . but there were many issues. The radar would indicate one thing and the cameras something else. Which do you believe? Tesla found that the cameras were more accurate and the decisions better when they weren't being confused by additional conflicting data from the radar. And the driving has improved.
Good explanation, but the one important detail to mention is the reason that Tesla is able to replace all that human code with AI: they have access to an *enormous* amount of training data and a large and increasing amount of compute power. Cruise always has the option to take the same approach as Tesla, but they have probably found that with the amount of data and compute they have, it doesn't perform as well as their current approach.
I would suggest, let the AI training decide if the LiDAR sensors are necessary. It’s common to train a network with more parameters than it needs, and then closely inspect it and eliminate the parameters which are not used.
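The train-big-then-prune idea above can be sketched in a few lines. This is a minimal magnitude-based pruning example in NumPy; the function name, the `keep_fraction` parameter, and the toy weights are purely illustrative, not taken from any real AV stack:

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping only `keep_fraction`.

    Weights that stay near zero after training contribute little to the
    output, so pruning them is a cheap way to discover which inputs
    (e.g. an extra sensor channel) the network actually relied on.
    """
    flat = np.abs(weights).flatten()
    k = int(len(flat) * (1 - keep_fraction))   # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: the second input column's weights all shrank toward zero
# during training, so pruning removes that input entirely.
w = np.array([[0.9, 0.01], [-0.8, 0.02], [0.7, -0.03]])
pruned = prune_by_magnitude(w, keep_fraction=0.5)
```

If a whole lidar input column ends up pruned like this, that's evidence the network never needed it; if it survives, the sensor was earning its keep.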
Tesla still regularly drives test vehicles with LiDAR and other sensors to compare against vision only, and I'm sure Cruise also compares performance in simulation with vision only.
1. That would require putting expensive sensors into millions of cars. 2. We know for a fact that it's not necessary, because humans don't have it. 3. Tesla Vision can build a 3D map that's about as accurate as a lidar image, but much more robust.
@andrasbiro3007 From the society's point of view, the key selling point for robotaxis is safety. If they fail in this, they will be banned from public roads. On the other hand, if they result in saved lives and lower medical expenses, that's huge benefits for society. Having robots do our jobs may be a benefit later on. At the moment, we may need a superhuman sensor suite to exceed human safety. Edit: Clarification
@@jsjs6751 When the insurers can calculate the reduced risk and cost from robotaxi fleets - just ask Allianz and Munich Re, the most influential reinsurers, with over 1 trillion in active assets --- yes, I said 1 trillion --- then they will push local legislators to mandate the use of assistants, automatisms, geofenced Level 3 autonomy, and finally Level 5 autonomy on all paved roads with more than 30 km/h top speed. Currently we are at the point of mandated assistants in Europe. Everything else will follow in due time. What will the NHTSA do when pressured by the US insurance lobby? Contrary to your beliefs, it's the other way around: humans will be denied the right to drive manually on their own in western countries. I hope in my lifetime.
Some corrections: Neural-network solutions still require a lot of "human code" to function. Yes, you could write a single network or multiple specialized networks for e.g. cyclist handling, but it will be harder to debug and you'll need a lot of training data for those cases. The more of an edge case something is, the worse a neural network will perform in the wild. Additionally, Cruise uses cameras (as Tesla does) and can also see the world as you do. They use radar and lidar additionally to properly mask and measure objects, which is not precisely possible with cameras alone. These provide additional information and also work in bad conditions where cameras fall short. I think it's a more safety-aligned approach. Tesla is trying to be more economical and has great software and hardware supporting that. Overall I'm very pleased to see the progress in the field in both directions and I'm looking forward to the next development steps.
Some corrections: You do not work at Tesla, and Tesla says v12 will be 100% neural-net code. So it will be. Additionally, Tesla found that their radar was reducing the performance of their full self-driving, increasing uncertainty and errors. A whole whack more sensors of different types is highly problematic in terms of processing and determining which sensors to prioritize and believe when they disagree. Tesla has shown that they can obtain the same precision in knowing where the vehicle and everything around it are with just cameras. LIDAR especially is the opposite of "working in bad conditions where cameras fall short": LIDAR fails in rain and snow, which is why Cruise and Waymo are limited to places like San Francisco, Phoenix, and Austin, Texas. Tesla's FSD is more economical, but the goal is to make it actually work; making it economical is just what Tesla does in all cases. The best part is no part, the best process is no process. The best LIDAR is no LIDAR.
I agree. There is always the combination of an end-to-end black-box solution plus a rule-based, human-engineered solution. Tesla is trying to increase the percentage of black-box end-to-end solution within their structure. Neural networks are very bad at edge cases, which is a concern even though there are more than 2 million Tesla cars on the road collecting edge-case data. I don't think the current Tesla FSD Beta can do a better job than Cruise in cases like the wet concrete road shown in this video. We are seeing high-end cars rolling out with forward-facing lidar in 2023/2024 models. No doubt lidar will increase safety in all kinds of weather/lighting conditions.
Lots of claims that you probably base on stuff you read on the internet. You say radar and lidar are needed to properly mask and measure objects. Based on what, exactly? Are we humans unable to properly do that with our two eyes? I personally think we do pretty well. Tesla has shown that radar in many cases actually provides wrong information. It is really only helpful when other objects are moving at a very similar pace, for example in a stop-and-go situation or when parking. No offense, but if extremely smart people who worked at Tesla, like Karpathy, think they can solve this problem with vision only, then it's rather interesting what qualifications you have that we should simply believe you, without any actual evidence, that this can't be solved with a vision-based approach.
@@LunnarisLP mmWave radar will improve resolution. OP didn't say that lidar + radar is better than vision only, or needed for that matter. Like you said, it's questionable whether you need active sensors except at small distances. I have heard an anecdote that once parking sensors (likely ultrasonic) became popular, fewer paint jobs were done at someone's business, due to drivers not scratching cars as often.
Everyone doing this kind of high-safety engineering acknowledges that the training for the 0.01% case shouldn't impact the 99.99% case, and whether you're doing it by incrementally adding features or by training a "black box" generative AI, you test for regressions by re-running previous test cases and observing for expected behaviors. When all tests pass, you have definitionally engineered an improvement, regardless of the approach you took - and the only flaw you can have is that you didn't test something, which is a failure in the specification. There is a lot of software out there whose source code is complete nightmare fuel, but which operates successfully in high-value scenarios because the testing has caught everything that matters. The argument to be made for FSD is not that the generative approach covers more without testing, but that it converges on a solution that passes all tests faster. You want the tests either way. Anything less is a "bet your life" proposition on it not doing some kind of crazy maneuver. What Cruise has done is oscillate from too cautious to too aggressive in certain scenarios, but both are effectively different test cases. The high incident rate is really a matter of them being the most deployed system in SF by an enormous factor, something like 5x over Waymo. These robotaxi systems are actually being trusted to operate with no safety driver, and that makes it an exciting time no matter whose system you think is best.
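The "re-run previous test cases" workflow described above can be sketched as a toy harness. `plan_action`, the scenario names, and the observation fields are invented stand-ins, not anyone's actual planner:

```python
# Minimal regression-suite sketch: every past failure becomes a frozen
# scenario, and a new version only "passes" if it reproduces the
# expected behavior on all of them. `plan_action` is a stand-in for
# whatever planner (hand-coded or neural) is under test.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    name: str
    observation: dict        # frozen sensor snapshot / world state
    expected_action: str     # behavior that was signed off as correct

def plan_action(observation: dict) -> str:
    # Toy planner: stop for anything blocking the lane, else proceed.
    return "stop" if observation.get("lane_blocked") else "proceed"

REGRESSION_SUITE = [
    Scenario("cyclist_ahead", {"lane_blocked": True}, "stop"),
    Scenario("clear_road", {"lane_blocked": False}, "proceed"),
    Scenario("wet_concrete", {"lane_blocked": True}, "stop"),
]

def run_suite(planner) -> list:
    """Return the names of scenarios the planner regressed on."""
    return [s.name for s in REGRESSION_SUITE
            if planner(s.observation) != s.expected_action]

failures = run_suite(plan_action)  # empty list means no regressions
```

Whether the planner inside is 300k lines of C++ or one big neural net, the suite is the same: the approach changes how fast you converge, not whether you need the tests.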
This video felt like I was watching a 1-million-sub channel. Great work. The video in TL;DR: Cruise cars are highly advanced pre-programmed vacuum robots with passenger seats.
I love Tesla, but I do feel that this is relevant: does Tesla's approach really differ that much from the others if they still need to train the AI on different situations? I imagine it has more flexibility when handling cases outside of the training, but you still need to gather situations where it performs badly and train the AI to improve on them, so you still have a whack-a-mole game.
Probably still better: if you can get them up and running with cheaper equipment and collect more data, you will be way ahead in covering your bases regarding the various situations. I don't think you can fundamentally do much better than that, just like with humans. We may know and understand the rules (as current driving AI arguably doesn't, because it isn't general AI), but even we sometimes need to f up irl before we figure something out.
I'm not sure we can generalize Cruise's system to all lidar/geofenced systems. We haven't seen these issues pop up with Waymo. It might just be that Cruise's programmers aren't as good.
Overseas, we had only heard that they were starting in San Francisco with these Cruise vehicles. That they crash and run red lights is completely new to me. Validating code changes by driving in a game-like simulation with empty streets is unbelievable given situations like these; the simulations are fit for just one thing. They know what the traffic there is like, and this is the solution the government approved? They are clearly trying to cut too many corners and endangering humans with it.
That's the thing with FSD: it's always the next version that's going to nail it. Musk has been saying that for at least seven years, the latest being FSD before the end of this year. That ain't happening, and maybe if Musk didn't keep spouting nonsense people might take the whole thing a bit more seriously; his shtick has gotten old, and outside the fan club nobody takes him seriously anymore.
This really shows that we are not ready for full self-driving cars. The technology is still new and there are still things that need to be worked out. It has to do with the type of roads it drives on.
Fantastic video. As a software engineer, I can attest that I'd rather have AI learn to do things the way humans do than have a human hardcode it, for the EXACT same reason: humans are TOO complex to code for, and it's a fool's errand to try to hardcode for nigh-unlimited scenarios.
This video exemplifies why end-to-end ML is required (e.g. pixels in -> driving actions out, similar to Tesla's/Comma's approach); otherwise there are just too many edge cases. Each time you code a feature for cyclists, another case comes up... a fire truck, a horse, a bunch of potatoes scattered all over the road.
All that stuff still has to be "programmed"; it's just called "training" when it comes to AI. The disadvantage is that you need lots of examples to recognize a pattern, rather than meticulously analyzing one encounter to produce an algorithm. The advantage is that it might recognize similar patterns in unforeseen events and be able to adapt. But not always.
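That contrast between "programmed" and "trained" can be made concrete with a toy example: the same braking decision written once as a hand-coded rule and once learned by a tiny perceptron from labeled examples. All numbers, names, and the time-to-collision rule are illustrative, not from any real system:

```python
import numpy as np

# The "programmed" way: analyze encounters, then write the rule by hand.
# Here: brake when time-to-collision is under roughly 4 seconds.
def brake_rule(distance_m: float, closing_speed_ms: float) -> bool:
    return distance_m < 4 * closing_speed_ms

# The "trained" way: show labeled examples and let a tiny perceptron
# recover the same linear boundary from data instead of from analysis.
X = np.array([[2, 10], [5, 20], [80, 5], [90, 10],
              [10, 1], [1, 5], [50, 30], [100, 2]], dtype=float)
y = np.array([float(brake_rule(d, v)) for d, v in X])

w, b = np.zeros(2), 0.0
for _ in range(10_000):                     # perceptron training epochs
    mistakes = 0
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        if pred != yi:
            w += (yi - pred) * xi           # classic perceptron update
            b += (yi - pred)
            mistakes += 1
    if mistakes == 0:                       # converged (data is separable)
        break

def brake_learned(distance_m: float, closing_speed_ms: float) -> bool:
    return bool(w @ np.array([distance_m, closing_speed_ms]) + b > 0)

# After training, the learned rule matches the hand-written one on the
# examples it was shown.
agrees = all(brake_learned(d, v) == brake_rule(d, v) for d, v in X)
```

The learned version needed eight examples to recover what one line of analysis produced directly, which is exactly the trade-off the comment describes: more data in, but potentially some ability to generalize to situations nobody wrote a rule for.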
Anyone know why when the video was up first time it went to private mode soon after? I was in the middle of watching it and i couldn't finish it because of it
I agree multiple forms of data streams such as lidar are unneeded and cumbersome, but I do think more cameras are necessary. If trying to mimic a human, cameras need to be double-stacked to gain depth information, and then probably also need the capacity to turn slightly so they can gain more information about the surroundings. I do think Tesla acknowledges the need for more overlapping camera coverage to gain vector information, as HW4 will add additional cameras to cover those areas and thereby gain more vector data. I still think the hardware needs a few more iterations to account for more weather/lighting conditions as well, but that is a matter of time.
Thanks AIDRVR. LIDAR can be useful to extend the capabilities of vision-based FSD by reducing latency and extending the detection range for VRUs, vehicles, and traffic signs. A neural net based on LIDAR sensor data can be faster than the vision-based version with the current Tesla FSD Beta software.
I don't think any of that is true. Reducing latency, definitely not. Range, I highly doubt: you can see a galaxy a million light-years away; good luck detecting that with lidar.
Cruise navigation "my pre-map says this road is fine to use I'll ignore all signs" gets stuck in wet concrete. Tesla FSD "WTF is this. I better avoid and go around in the direction of the sign or alert the driver" doesn't get stuck.
I'm curious how you know what's coming with FSD Beta 12: removing 300k lines of code and replacing them with 3k! Do you feel HW3 will be sufficient? I feel like the HW4 cameras are more for human viewing, but I also see the benefit of having improved video for NN Dojo training.
Most vocal Tesla FSD supporters bash Waymo/Cruise/etc. for requiring HD maps to "navigate on rails", over-relying on LIDAR, or being unable to operate well in a dynamic world. But a few of these claims are rather old and assume that Tesla's competitors have failed to evolve and adopt new AI architectures similar to what Tesla has done. These don't seem to be first-principles arguments grounded in truth (i.e. assessing the latest version of their architecture based on their tech talks from the past few years). Some of the faults shown above are also faults we've seen with FSD. Before we arrive at such a harsh conclusion against Cruise, let's see empirically how much better Tesla's FSD will be in the same exact scenarios. Although I'd bet on Tesla to win the AV market eventually, it seems like they're at the beginning of the S-curve along with everyone else, and it will be at least a year before we see no-driver Teslas in SF. And once that happens, I'm sure Tesla will take off way faster than competitors. We're just not there today. Disclaimer: I'm a huge Tesla/Elon fan and want them to win.
I dunno, seems like a boon to insurance companies to have fewer accidents while still having laws mandating having insurance--same revenue, fewer payouts. My auto insurance (USAA) has an app that gives a discount for "safe driving" (monitors for things like harsh braking and using your phone while driving) and I think some others do too.
@@tHebUm18 Once FSD is far better than humans, I think Tesla will be held liable if cars no longer need human intervention. Then insurance will be built into the cost of using the software.
@@tenzinpassang4812 Possibly, but US/state laws are also slow-moving and often dumb. A little lobbying money from the insurance industry and I bet auto insurers will collect years of premiums from people not even driving their vehicles, as laws continue requiring coverage.
I'm really curious to see how the evolution of AI is going to shape up. Because of the infinite number of factors that have to be processed in very quick succession, I want to see some AI training companies spring up that basically use AI to create scenarios for AI to train on. Crazy things like a tornado touching down a few hundred feet down the road: right now the car won't stop, it will just keep going and even drive into it (I assume). Or what about an earthquake that splits a road; will the car stop?
They should at least have one for emergency braking. They could also use that to fix the phantom braking issue: if the radar doesn't see something approaching fast, then the car doesn't need to panic so hard. Aside from that, why would more sensors be bad? ... Well, cost of course, supply-chain issues, maintenance... but it seems like there was a logical failure in their software about how to integrate all the sensors into a coherent picture. If you're getting different answers from different systems, then you're doing something wrong. They should have focused on fixing that instead of just cutting an eye out.
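The cross-check being suggested could look something like this. It is a toy sketch: the thresholds and the three-level response are invented for illustration, not how any shipping automatic-emergency-braking system is actually tuned:

```python
# Only trigger a hard emergency brake when camera and radar agree that
# something is closing fast; a camera-only detection with no radar
# return gets a softer response instead of a phantom-braking panic stop.
def brake_response(camera_sees_obstacle: bool,
                   radar_closing_speed_ms: float,
                   hard_threshold_ms: float = 8.0) -> str:
    if camera_sees_obstacle and radar_closing_speed_ms > hard_threshold_ms:
        return "hard_brake"      # both modalities agree: treat as real
    if camera_sees_obstacle:
        return "ease_off"        # camera only: slow gently, re-check
    if radar_closing_speed_ms > hard_threshold_ms:
        return "alert_driver"    # radar only: could be an overpass ghost
    return "none"
```

The point of the sketch is the structure, not the numbers: disagreement between sensors becomes a reason to soften the response rather than a reason to remove a sensor.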
Thanks for the awesome video! I appreciate you showing the crash statistics at the beginning of the video, though I'm curious where you sourced the chart from. Also, I would appreciate not having spooky music all the way through the video. To me, it adds unnecessary emotional tension to a topic which already has a stigma of fear and doom from fiction.
What I don't get is how these companies don't realize they're taking the wrong approach to this. This way Tesla FSD will have no competition and will be able to charge its customers whatever it wants... sad.
I think many in the companies must get it by now, but there's significant inertia because "let's start over with a corrected approach" causes far too much organizational upheaval. They're too invested in what they're doing already. Tesla will be competing with traditional ride sharing, though.
@@ChristianBlueChimp You have evidence that his understanding of s/w has significantly increased since he was 12? There's no evidence of that in his biography.
You’re right, this is quite different. I enjoyed it. Your production quality is so far above the early days. Those were good too but you know. I’m sitting in the same HW4/Beta limbo as you. Did you get MSM? I’m thinking there will be quite a few like me with a 2023 MSM Y running Beta soon.
It is crazy how well Tesla's FSD handles streets it has very limited map data about. Yet I don't know if removing every radar and ultrasonic sensor from the cars is the correct move. In your video about the original Model S, you were amazed how many cars the new system saw that your Model S would not have picked up. And the ultrasonic sensors surely give a more accurate reading of distances than a wide-lens camera in the bumper. Why not combine both systems and use the additional information when vision only needs more information?
It's a pre-mapped world, and that's fine, but I think in reality they should use a dual program: one part uses the cameras to figure out whether the pre-programmed maps match the surroundings, and if the two don't match, then they should essentially fall back to the dummy backup plug, like in Evangelion.
Nah, they don't need pre-programmed maps. It just makes things more complicated. They need to be able to scan the area and identify everything, all in real time, and also make decisions in real time. If I were to try to make an FSD program, I would have gone for a similar approach to Tesla's. I don't think I'm nearly talented enough, but I know I would have tried to write a program that can drive anywhere, making decisions based on what it is seeing and building its own map in real time. It would update/track objects, etc. in this map. You can't just rely on the present, as some objects may get blocked out of view, etc. Anyways, Elon said using lidar means they are doomed from the start; I think using pre-programmed maps is really what will doom them. The only thing pre-programmed maps may be good for is simulations. Tesla has a huge advantage: they can run code in shadow mode on cars, look for certain scenarios, and test the new code in real life. I believe they do this, not 100% sure, but I think I heard it in a video. Either way, they have a huge dataset and a lot of tools the competitors don't. I would probably even use George Hotz's self-driving before Cruise or Waymo. It is an open-source self-driving program you can install in your car, and it runs on a phone. It's only compatible with certain vehicles. I haven't seen an update in years though, so I am not sure where it is today. Actually, I would be forced to use it, as I don't live in San Francisco 😅
@@playstation8779 I think Waymo was first; they went with the whole map idea and Cruise followed suit. I wouldn't doubt they got help from Waymo. Like I said, I'm not nearly talented enough, but I would never have gone with that approach. Also, just looking at Tesla, huge resources are needed, but looking at the open-source version from George, it's possible to get results without all those resources. Maybe not FSD, but I guess Level 2 or 3 is what it would be (I don't remember the scale). When I say resources, I mean Tesla is building one of the world's most powerful supercomputers just to solve this problem, and it costs millions in electricity to run. Not sure if you were being sarcastic, but I will let Tesla solve the problem and hopefully let others use it, like they did with their charging plug lol.
The major trouble is that all of these failures from other FSD providers will paint the whole concept in a bad light, making regulatory agencies less likely to give Tesla the green light for deployment and because of that, slow the rate of evolution for the neural networks. Hopefully Cruise can recover and regain control of things, but if they continue to use outmoded methods that cast the industry in a bad light, then perhaps it would be better that they fade into the background.
Well, the evidence that FSD v12's end-to-end approach is superior has still to be provided by a widespread rollout. E.g., how does that approach deal with situations that were not part of the video training material? And why did Tesla not actually participate in the San Francisco pilot, despite claiming since 2020 that they are able to?
This was quite good, actually. I'm not sure when the version 12 FSD beta release is supposed to roll out to the beta testers. I'm a little skeptical that it will be very soon since we haven't had a FSD update in about 2 months now. Plus I'm worried that it is overhyped and won't perform in the real world nearly as well as is being predicted. I hope I'm wrong, though.
Virtually ALL AI is way overhyped in recent years, so par for the course. Unfortunately, until businesses are economically punished somehow for such nonsense, they'll do it to pump their stock, etc. If a CEO just blatantly lies on pure fact, they can get them for that like Musk on the "420 funding secured". But the loopholes are gigantic. Musk falsely claims Tesla AI based FSD will be ready "real soon now" (I paraphrase) EVERY YEAR or even more often than that -- and he gets away with it since it's aspirational vs. fact, yadda yadda. To me, a CEO being dead wrong vs. such "aspirations" about their own products ENDLESSLY is unacceptable -- but I don't make or enforce the laws on that. And I say all this as a long term patient Tesla shareholder who is rooting for Tesla FSD robotaxis to be cheap and ubiquitous by the time I'm old enough I'd prefer not to drive.
Tesla is testing V12 in that newly built DOJO computer. They should be able to push the AI Driving program safely to its limits inside that computer. This makes good engineering sense rather than test in the real world with real world consequences. Tesla will get its Drive GPT moment. Ignoring the market noise is what is needed right now.
Comparing statistics from several hundred million drivers to their 400 vehicles is ridiculous. With that limited data, it's your preferred belief, not science.
I am in a position where I can buy a Tesla in the near future, and I test-drove the Model 3 & Y a few weeks ago. I have watched a lot of your videos and I find FSD amazing, but during the test drive I was shocked that the EU counterpart of FSD is not nearly as good as the one from the US: I had to intervene a lot because it was too aggressive or was speeding toward a sharp turn. Also, on roads with no markings, I couldn't engage the AP, or it would simply disconnect. I know it will take some time before FSD in Europe is at the same level as in the US, but now I am thinking of just sticking with the advanced AP, since changing lanes and taking highway exits were working perfectly fine (except that the AP wanted to change into a closed lane one time).
I had the same experience as what you are describing about 5 years ago. In the 5 years that followed, Tesla went from advanced AP to FSD. You won't have to wait 5 years. Tesla is a generalized driving solution: FSD is trained like we are. We don't know every road, but we do know how to drive generally, and then we apply what we know to the situation. This is Tesla's approach too. I did buy FSD on my car at the time and I do not regret that decision at all. It has kept my cars relevant and updated. The difference is that your wait will be much shorter than mine was, as you can already see what FSD looks like in the real world. So Europe should not be that far behind.
I don't believe Waymo or Cruise hand-code the driving algorithm with if/else. Just because they use pre-mapped data doesn't mean they don't use artificial neural networks. I'd like to see some informed technical papers on the matter, but I think that for all the praise Tesla gets, their future as a self-driving system in the way Waymo and Cruise are is looking bleak.
There's definitely explicit code for rules like stop signs, lights, etc. For acting around pedestrians, though, they simulate many possible scenarios and narrow down the possibility space based on previous position/velocity, etc. What makes end-to-end different is that it can take in the whole situation at once, including cues like body language and environmental cues, to make a snap judgement just like we do. Humans sometimes simulate different possibilities, and we can visualize them, but it's a slow process which we normally don't use while driving. We see and we act without much thinking.
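The "simulate many scenarios and narrow the possibility space" step can be sketched as a Monte-Carlo rollout. All numbers and the constant-velocity-plus-noise motion model are illustrative; real planners use far richer models:

```python
import numpy as np

def crossing_probability(pos, vel, lane_x_range, horizon_s=3.0,
                         n_samples=1000, vel_noise=0.5, seed=0):
    """Estimate the probability that a pedestrian ends up inside the
    car's lane within `horizon_s` seconds, by rolling their position
    forward under many perturbed constant-velocity hypotheses."""
    rng = np.random.default_rng(seed)
    pos = np.asarray(pos, dtype=float)
    vel = np.asarray(vel, dtype=float)
    # Sample perturbed velocities and propagate each to the horizon.
    vels = vel + rng.normal(0, vel_noise, size=(n_samples, 2))
    finals = pos + vels * horizon_s
    lo, hi = lane_x_range
    in_lane = (finals[:, 0] >= lo) & (finals[:, 0] <= hi)
    return in_lane.mean()

# A pedestrian 6 m left of the lane, walking toward it at 2 m/s, ends
# up in the lane in a large fraction of sampled futures; one walking
# away almost never does.
p_cross = crossing_probability(pos=(-6.0, 10.0), vel=(2.0, 0.0),
                               lane_x_range=(-1.5, 1.5))
```

The planner then conditions its behavior on that probability, which is the "narrowing" the comment describes; end-to-end skips the explicit rollout and maps the scene straight to an action.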
@@martindbp It's understandable to have some explicit rules around critical aspects. And that's ok. What I don't agree with is that these cars are like trams on virtual tracks which is completely false.
@@ArielChelsau They're on virtual tracks in the sense that they know exactly where and how to drive in their operational area, but with some leeway to go around things, adapt to the situation, and deviate from that path. In the end, however, they are not very flexible, and this is the reason they've managed to get the mean time to failure down to such low levels. Personally, I would barely call this AI; it's not particularly exciting and almost certainly a technological dead end.
Elon's stance on LIDAR is absurd. Yeah the sensors are expensive, but Tesla themselves have already proven that LIDAR is 10x more effective than cameras. Anyway, absurd that these vehicles are allowed on public roads. Here in the Netherlands you literally can't use FSD on a Tesla (even if you bought it). You can use lane assist and adaptive cruise control, but that's it. And that's the way it should be until they can prove that self-driving vehicles will outperform humans in any situation, not just some situations, and public roads should not be used to test or improve them.
You are being absurd yourself. Absurd statement about Tesla and lidar, and bizarre statement about having to prove the unprovable BEFORE being allowed to prove it. Do YOU outperform humans in ANY situation? That is a lot of situations, you know. Billions of slightly different situations. And you have to be better than humans in every single one of them. And you will not be allowed to drive a car before you can prove it, even if you have to drive a car to prove it. Absurd. Humans are bad at driving safely, and Teslas with the electronic gadgetry turned on are significantly safer. That is just how things work. If we let 18-year-old men drive a car in traffic unsupervised (and we do), how is a few robot cars screwing up even a problem?
Too timid toward the dork pedestrian (in this case) at 5:10 who was walking against the red light (hand) and taking his time. A little bumper nudge was in order here. Now, obviously, if the AI detected a cane, a wheelchair, or a stroller it would not be that aggressive. But when pedestrians like this NBC camera guy with the glasses, smirk, and camera tripod over his shoulder are taking their sweet time - all bets are off - Death Race 2023.
Just because we use "normal vision" doesn't mean Lidar is not an improvement of additional information. Mark my words: To go fully autonomous, cameras and Lidars will have to be combined.
While I agree with your statement that LIDAR (and other sensors) add extra dimensions to the perception of the world - what makes you think that cameras are not enough? Humans do not have anything other than cameras - and bad driving behavior 🙂
Tesla has the much better and safer approach to this. I wonder why Waymo does not use a driver, too. It would have avoided so many accidents, jams, and deaths.
So can someone help explain to me why they are removing so much human code, and why Tesla thinks that this removal will be beneficial in making it operate better?
Humans cannot predict every situation and the code we write won't always work. The AI can train on billions of miles of driving data collected by all the cars and learn how to handle more situations than we humans can ever imagine. It just takes a lot of time and data to train the AI.
The big problem for autonomy is that there are many human-driven cars in close proximity. So this problem will be there for the foreseeable future, and hopefully diminish as the tech spreads to more vehicles.
what a mess. The crashes with fire trucks could be avoided, but since the system doesn't have the Pantone red of the trucks, or their shape, loaded, it does stupid things. And when concrete is fresh the system is totally blind, or only sees in black and white.
There are lots of places with few to no pedestrians and more standardized roads. I wonder why companies like this don't start with easier locations first
I think when Cruise and Waymo use more AI they can definitely be safer, maybe even safer than Teslas with current hardware, because they can look far ahead with lidar. But they need to change their code to be more like what Tesla is doing.
The skeptic in me thinks that Cruise is trying to give driverless cars a bad rep by doing this. If you can't win the FSD race, give it a bad rep before it's solved.
Cruise exhibits 2 worrying behaviors:
1) Prediction seems to have priority over perception.
2) It freezes when things get too complicated.
Number two is what happens when you have to come up with a new plan every time the data is updated
AI has no forward-planning ability to dynamically adjust to situational awareness. Emergency vehicles, a police chase, flooding, etc. There is nothing inside telling it "Oh, there is a police chase on the other block". Instead, if the route is mapped, it will drive right toward it. This is something humans handle differently. The massive amount of visual learning data would be tremendous for all situations. Networking via transponders on emergency vehicles would be a cure.
@@skywave12 "AI can't" has an abysmal track record.
@@skywave12 Our ability to dynamically adjust is a measure of intelligence, but it still works off our experience and perceptions. That's the whole point of AI, that it can dynamically adjust. Waymo & Cruise do not employ AI. If-Then/While-loop programming is NOT AI.
@@hammerfist8763 Two thirds of FSD as used today is AI; the other third is code, until v12 comes out. Then most of it will be AI. The two thirds that is AI is impressive, though not without fails and takeovers. I have FSD and report takeovers as needed.
Still no FSD Beta, so have to switch up the videos for a while. Let me know what you think
(I know this one is a little... intense. I'll try to make the future ones a little less so and bring the humor back lol)
If you appreciate content like this, please consider supporting the channel on Patreon via the link in the description. $1 is more than I'd make from you watching ads on hundreds of my videos, which is insane. Will also be releasing videos much more frequently - pinky promise
what happened to FSD? I didn't keep myself updated!
@@halophobic9550 I used the free FSD transfer to get a Model Y with Hardware 4. HW4 doesn't support FSD Beta (yet)
I like that!!!
Please make more videos like this. Super entertaining and a nice change of pace
This and all your content is top-shelf. Thanks. Proud to be a longtime supporter. Although, it seems like FSD currently has a pretty strong reliance on map meta-data that I do not see many people talking about.
Honestly feels more fanboyish than I would've expected from you.
The darkly dramatic music was a bit overkill, no?
Yeah you’re right on this one. Music was added last minute, originally didn’t have it. Thanks for the feedback!
Yeah this video 100% feels like it was paid for by Tesla.
Yeah this was a bit sad :(
4:20 Cruise is so patient, waiting for the mother and daughter... till they even changed their clothes; might be the next day.
I love the way you explain things it makes me understand so much more easily than some other CZcamsrs that I have seen. Good job and keep up the good work
Appreciate that, thank you!
@@AIDRIVR NP have a good day!
my dear ones, in the future artificial intelligence will be fully aware of its actions. It will recognize a policeman, talk to him, and understand when the policeman asks it to pull over so he can check the car and the passenger. This is just the beginning, in the same way the artificial intelligence of Star Trek's ship talks to people aboard and understands everything in full detail, and even that ship's AI hasn't reached the singularity, which would be consciousness
Unfortunately he's straight up lying to you and repeating Muskrat's lies. FSD is considered a joke in the industry. Musk is refusing to use superior technology like lidar for no other reason than that others are using it and he has to be a special snowflake unicorn. Vision-based systems like this have a very limited range and are susceptible to a multitude of problems, like over-exposure, which is why his cars keep crashing into white trucks and emergency vehicles (yes, Tesla does that regularly, unlike Cruise). They are behind the rest of the industry by a good margin and they will stay there; they have legally admitted they aren't going to get past Level 2 autonomy with FSD. Mercedes have only just gotten into this game and they are ahead of Tesla precisely because of additional sensor arrays like lidar and ultrasound. Lidar can't be tricked by over-exposure, ultrasound doesn't rely on light at all, redundancy and affirmation are king in this field, and Muskrat has thrown all of that out of the window.
One of the best videos I've seen in a while regarding self-driving, NGL. And like you say, Cruise and Waymo deserve more credit than they get from ("our") community. Even though we don't believe it will become a fully adoptable unit.
This is a better mini documentary on the subject than any mainstream media can produce.
Should at least mention comma ai openpilot and wayve ai since they are also working on end to end self driving (openpilot has navigate on openpilot for point to point driving capability). Tesla isn't the only one in town.
they are still using basically the same approach as Waymo/Cruise of coding the driving. Full AI, running locally, is the only way to go imo, and Tesla's the only one I see doing that.
@@pofiPenguin can you elaborate more on how they are coding the drive? Curious how you came to that conclusion. Openpilot is end to end now if you look at their recent material from their talks
i recently took a trip to NYC, and the second i heard something along the lines of “pre-planned maps and hard-written code” i gasped. in new york, riding with Ubers and just driving yourself, the amount of closed roads, unexpected detours, and just outright issues that WEREN'T mapped on any given software was alarming. i can only imagine a vehicle like a Cruise car trying to navigate it. it gives me the creeps. i can’t believe they are putting cars like this on the road. crazy enough, these Cruise cars and other similarly set up self-driving vehicles seem much less predictable and safe than any given AI-driven car like FSD Beta. the problem-solving skills and just overall ability to travel in non-predictable situations aren't even comparable between the two. sidenote: the editing in this video was absolutely AMAZING. and it was extremely informative, interesting, and great to watch for only being 7 minutes! great work man.
Just consider how non-standard construction can be, including with poor or misleading signage. Good human drivers can have plenty of problems with that, even with caution and common sense. Tesla FSD has a hell of a lot of 9's to march through, re consistency improvement to be good enough for real world safe very wide area robo-taxi networks.
Great video. It's always a breath of fresh air watching your videos, versus the total nonsense people write elsewhere on the internet about self-driving systems.
Even though this video just demonstrates this guy has literally no knowledge of machine learning or self driving and is just repeating the lies that Muskrat tells, but sure.
Nice video, please make more on them.
Please make another one on Waymo.
You drove a lot of routes with your Tesla; maybe you could drive the same ones with Waymo/Cruise and make a comparison?
It's been done. Tesla was the winner, hands-down.
czcams.com/video/6xUmZXoqaDQ/video.html
This is what so much 'AI' currently is: not actual AI.
Comma has a few really interesting talks on their end-to-end training approach that you should definitely check out!
Why is youtube not notifying me about your videos anymore!!??
You commented this 1 minute after he uploaded it...
Love your content but this video's narrative seems like a false dichotomy - the questions of a) Lidar vs cameras and b) manually coding behaviors vs relying on machine learning are completely unrelated IMO. Can you not use Lidar while also heavily relying on machine learning to handle edge cases? I am not super informed but I'm under the impression that that is Waymo's approach.
Instead of attributing this failure purely to their use of Lidar, it seems an equally simple narrative to suggest that Cruise's ML team is simply not as strong or lacks access to differentiating resources like compute.
You can use both . . . but it actually makes the task harder. Tesla did this with radar originally . . . but there were many issues. The radar would indicate one thing and the cameras something else. Which do you believe? Tesla found that the cameras were more accurate and the decisions better when they weren't being confused by additional conflicting data from the radar. And the driving has improved.
Good explanation but the one important detail to mention is the reason that Tesla is able to replace all that human code with AI: they have access to an *enormous* amount of training data and a large and increasing amount of compute power. Cruise always have the option to take the same approach as Tesla but they have probably found that with the amount of data and compute they have, it doesn't perform as well as their current approach.
Great video, plan to rewatch it! Thanks.
I laughed out loud when you showed the GTA clip from DarkViperAU 😂
Explains why I think Tesla is well ahead
I would suggest, let the AI training decide if the LiDAR sensors are necessary.
It’s common to train a network with more parameters than it needs, and then closely inspect it and eliminate the parameters which are not used.
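The pruning idea described above can be sketched in a few lines. This is a generic magnitude-pruning toy on a random weight matrix, not anything from Tesla's or any AV company's stack; the layer size and the 50% pruning fraction are arbitrary choices for illustration.

```python
import numpy as np

# Toy layer: a 4x4 weight matrix standing in for one layer of a trained network.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

def prune_by_magnitude(w, fraction):
    """Zero out the smallest-magnitude `fraction` of the weights.

    The threshold is the corresponding quantile of |w|; weights below
    it are treated as "not used" and eliminated.
    """
    threshold = np.quantile(np.abs(w), fraction)
    mask = np.abs(w) >= threshold
    return w * mask

pruned = prune_by_magnitude(weights, 0.5)
# Half the parameters are now exactly zero and could be dropped
# from the deployed model (e.g. stored sparsely).
print(int((pruned == 0).sum()))
```

In practice this is usually done iteratively: prune a little, fine-tune, and repeat, checking that accuracy on a held-out set doesn't degrade. The same inspect-and-eliminate logic could in principle tell you whether an entire sensor's input channel carries weight.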
Tesla has already done the work. But if you don't trust them go ahead and make your life hard.
Tesla still regularly drive test vehicles with LiDAR and other sensors to compare with vision only, and I'm sure Cruise also compare performance in simulation with vision only.
1. That would require putting expensive sensors into millions of cars.
2. We know for a fact that it's not necessary, because humans don't have it.
3. Tesla Vision can build a 3D map that's about as accurate as a lidar image, but much more robust.
@andrasbiro3007
From the society's point of view, the key selling point for robotaxis is safety. If they fail in this, they will be banned from public roads.
On the other hand, if they result in saved lives and lower medical expenses, that's huge benefits for society.
Having robots do our jobs may be a benefit later on.
At the moment, we may need a superhuman sensor suite to exceed human safety.
Edit: Clarification
@@jsjs6751 When the insurers can calculate the reduced risk and cost from robotaxi fleets - just ask Allianz and Münchner Re, the most influential reinsurers, with over 1 trillion in active assets
--- yes, I've said 1 trillion ---
- then they will push the local legislatures to enforce the use of assistants, automation, geofenced Level 3 autonomy, and finally Level 5 autonomy on all paved roads with a top speed above 30 km/h.
Currently we are at the point of mandated assistants in Europe. Everything else will follow in due time.
What will the NHTSA do when it is pressured by the US insurance lobby?
Contrary to your beliefs, it's the other way around: humans will be denied the right to drive manually on their own in western countries.
I hope in my lifetime.
Some corrections:
Neural network solutions still require a lot of "human code" to function. Yes you could write a single network or multiple specialized networks for e.g. cyclist handling, but it will be harder to debug and you'll need a lot of training data on those cases. The more of an edge case something is, the worse a neural network will perform in the wild.
Additionally, Cruise uses cameras as well (like Tesla) and can also see the world as you do. They use radar and lidar on top of that to properly mask and measure objects, which is not precisely possible with cameras alone. Those sensors provide additional information and also work in bad conditions where cameras fall short. I think it's a more safety-aligned approach. Tesla is trying to be more economical and has great software and hardware supporting that.
Overall I'm very pleased to see the progress in the field in both directions and I'm looking forward to the next development steps.
What do you mean by harder to debug? Sorry I’m new to all these terms, I’m just trying to become more informed on the topic.
Some corrections:
You do not work at Tesla, and Tesla says v12 will be 100% neural-net code. So it will be. Additionally, Tesla found that their radar was reducing the performance of their full self-driving, increasing uncertainty and errors. A whole whack of extra sensors of different types is highly problematic in terms of processing and deciding which sensors to prioritize and believe when they disagree. Tesla has shown that they can obtain the same precision in locating the vehicle and everything around it with just cameras. LIDAR especially is the opposite of "working in bad conditions where cameras fall short": LIDAR fails in rain and snow, which is why Cruise and Waymo are limited to places like San Francisco, Phoenix, and Austin, Texas.
Tesla's FSD is more economical, but the goal is to make it actually work, making it economical is just what Tesla does in all cases. The best part is no part, the best process is no process. The best LIDAR is no LIDAR.
I agree. There is always the combination of an end-to-end black-box solution plus a rule-based, human-engineered solution. Tesla is trying to increase the percentage of the black-box end-to-end solution within their architecture. Neural networks are very bad at edge cases, which is a concern even though there are more than 2 million Tesla cars running on the road collecting edge-case data. I don't think the current Tesla FSD Beta can do a better job than Cruise on cases like the wet concrete road shown in this video. We are seeing high-end cars rolling out with forward-facing lidar in 2023/2024 models. No doubt lidar will increase safety in all kinds of weather/lighting conditions.
Lots of claims that you probably base on stuff you read on the internet. You say radar and lidar are needed to properly mask and measure objects. Based on what, exactly? Are we humans unable to do that properly with our two eyes? I personally think we are doing pretty well. Tesla has shown that radar in many cases actually provides wrong information. It really is only helpful when other objects are moving at a very similar pace, for example in a stop-and-go situation or when parking. No offense, but if extremely smart people who worked at Tesla, like Karpathy, think they can solve this problem with vision only, then it's rather interesting which qualifications you have that we should simply believe you, without any actual evidence given, that this can't be solved with a vision-based approach.
@@LunnarisLP mmWave radar will improve resolution. OP didn't say that lidar + radar is better than vision only, or needed for that matter. Like you said, it's questionable whether you need active sensors except at small distances. I have heard an anecdote that once parking sensors (likely ultrasonic) became popular, fewer paint jobs were done at someone's business because drivers weren't scratching cars as often.
Everyone doing this kind of high-safety engineering acknowledges that the training for the 0.01% case shouldn't impact the 99.99% case, and whether you're doing it by incrementally adding features or by training a "black box" generative AI, you test for regressions by re-running previous test cases and observing for expected behaviors. When all tests pass, you have definitionally engineered an improvement, regardless of the approach you took; the only flaw you can have is that you didn't test something, which is a failure in the specification. There is a lot of software out there whose source code is complete nightmare fuel, but which operates successfully in high-value scenarios because the testing has caught everything that matters. The argument to be made for FSD is not that a generative approach covers more without testing, but that it converges on a solution that passes all tests faster. You want the tests either way. Anything less is a "bet your life" proposition on it not doing some kind of crazy maneuver.
What Cruise has done is oscillate from too cautious to too aggressive in certain scenarios, but both are effectively different test cases. The high incident rate is really a matter of them being the most deployed system in SF by an enormous factor, something like 5x over Waymo. These robotaxi systems are actually being trusted to operate with no safety driver, and that makes it an exciting time no matter whose system you think is best.
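The regression-testing discipline described above can be sketched as a tiny harness: every scenario observed in the field becomes a permanent test case, and a new build is acceptable only if it reproduces the expected behavior on all of them. The `planner` stub and the scenario fields here are entirely hypothetical, standing in for whatever planning stack (hand-coded or learned) is under test.

```python
# Hypothetical planner stub: maps a scenario description to an action.
# In reality this would be the full driving stack under test.
def planner(scenario):
    if scenario.get("pedestrian_in_crosswalk"):
        return "yield"
    if scenario.get("light") == "red":
        return "stop"
    return "proceed"

# Regression suite: every previously observed scenario plus the behavior
# the system is expected to exhibit. Cases are only ever appended, so an
# old fix can't silently regress when a new feature (or new training run)
# changes behavior elsewhere.
REGRESSION_SUITE = [
    ({"light": "red"}, "stop"),
    ({"light": "green"}, "proceed"),
    ({"light": "green", "pedestrian_in_crosswalk": True}, "yield"),
]

def run_suite(plan_fn, suite):
    """Return the list of (scenario, expected, actual) mismatches."""
    return [(s, exp, plan_fn(s)) for s, exp in suite if plan_fn(s) != exp]

failures = run_suite(planner, REGRESSION_SUITE)
print(failures)
```

Whether the planner is 300k lines of C++ or a neural network, the suite is the same; only the thing being re-run changes.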
1:02 - I mean it does say "May stop quickly"
Bet your words sir, the new era of fsd is coming
This video felt like I was watching a 1 mio sub channel. Great work.
The video in TL;DR: CRUISE are highly advanced preprogrammed vacuum robots with passenger seats.
Great analysis as always. ❤
Although I disagree on some of your points, I still appreciate the video. Good job :)
I enjoyed the video! Looking forward to more
Great analogy at the end
I love Tesla but I do feel that this is relevant, does Tesla's approach really differ that much from the others if they still need to train the AI with different situations? I imagine it would have more flexibility when handling cases outside of the training, but you still need to gather situations where it performs badly and train the AI to improve them, so you still have a whack-a-mole game.
Yes but theoretically that can be automated and they are collecting the moles as we speak presumably at a much faster rate than Waymo and Cruise.
@@DanHedinand they also can simulate stuff they don't gather.
@@loonatic90 Tesla has had a couple of years to figure out why teslas run into parked emergency vehicles, still no solution that works
Probably still better if you can get them up and running with cheaper equipment and collect more data you will be way ahead in covering your bases regarding the various situations. I don't think you can fundamentally do much better than that, just like with humans. We may know and understand the rules (as current driving AI arguably doesn't because it isn't general AI) but even we sometimes need to f up irl before we figure something out.
@@linusa2996 really? I assume you have good data showing no improvement in this over time 🙄
I'm not sure we can generalize Cruise's system to all Lidar/ geofenced systems. We haven't seen these issues pop up with Waymo.
It might just be Cruise programmers aren't as good.
Overseas we only heard that they were starting out with these Cruise vehicles in San Francisco. That they crash and run red lights is completely new to me. Validating code changes by driving in a game with empty streets is unbelievable given these situations; it's fit for just one thing. They know what the traffic there is like, and this is the solution the government approved? They are clearly trying to cut too many corners and endangering humans with it.
Interesting. I heard this first from you since I do not live near San Francisco. I am excited for FSD beta v12. 😁
That’s the thing with FSD, it’s always the next version that’s going to nail it. Musk has been saying that for at least seven years, the latest being FSD before the end of this year. That ain’t happening and maybe if Musk didn’t keep spouting nonsense people might take the whole thing a bit more seriously, his shtick has got old and outside the fan club nobody takes him seriously anymore.
Amazing video !
Love this video, can you do one with waymo ?
Good video topic, i was wondering about the competition....
This is really showing that we are not ready for full self-driving cars. The technology is still new and there are still things that need to be worked out. It has to do with the type of roads it drives on.
cruise has got ahead of themselves. They're living on dreams
Excellent!
Fantastic video. As a software engineer I can attest that I'd rather have AI learn to do things the way humans do rather than a human hardcoding it, for the EXACT same reason: humans are TOO complex to code for, and it's a fool's errand to try to hardcode for nigh-unlimited scenarios.
well this was fun :). Great video, both despite(?) and because of being different from the usual stuff :).
Love it my dude!!
Seems like 100% of them need to go bye bye, not 50%.
This video exemplifies why end-to-end ML is required (e.g. pixels in -> driving actions out, similar to Tesla's/Comma's approach); otherwise there are just too many edge cases.
Each time you code a feature for cyclists, another comes up... a fire truck, a horse, a bunch of potatoes scattered all over the road.
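A minimal sketch of what "pixels in -> driving actions out" means structurally, with random (untrained) weights standing in for a real learned policy. None of this reflects Tesla's or Comma's actual architecture; the frame size and hidden width are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 8, 8                          # tiny grayscale "camera frame"
w1 = rng.normal(scale=0.1, size=(H * W, 16))  # input -> hidden
w2 = rng.normal(scale=0.1, size=16)           # hidden -> steering

def policy(frame):
    """One forward pass: raw pixels to a single steering command."""
    x = frame.reshape(-1)            # pixels in
    h = np.tanh(x @ w1)              # features are learned, not hand-coded rules
    return float(np.tanh(h @ w2))    # steering in [-1, 1]: actions out

frame = rng.random((H, W))
steer = policy(frame)
print(steer)
```

The point is that there is no `if cyclist:` or `if fire_truck:` branch anywhere; a fire truck, a horse, or potatoes on the road are all just pixel patterns, and handling a new one means retraining on examples rather than adding another rule.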
All that stuff still has to be "programmed", it's just called "training" when it comes to AI. The disadvantage is, you need lots of examples to recognize a pattern, rather than meticulously analyzing one encounter to produce an algorithm. The advantage is, it might recognize similar patterns in unforeseen events and be able to adapt. But not always.
...and grandma's panties!
Anyone know why, when the video was first up, it went private soon after? I was in the middle of watching it and couldn't finish it because of that
I agree multiple forms of data streams such as lidar are unneeded and cumbersome, but I do think more cameras are necessary. If trying to mimic a human, cameras need to be double-stacked to gain depth information, and then also probably need the capacity to turn slightly so they can gain more information about the surroundings. I do think Tesla acknowledges the need for more overlapping camera areas to gain vector information, as HW4 will add additional cameras to cover those areas and thereby gain more vector data. I still think the hardware needs a few more iterations to account for more weather/lighting conditions as well, but that's a matter of time.
Thanks AIDRIVR. LIDAR can be useful to extend the capabilities of vision-based FSD by reducing latency and extending the detection range for VRUs, vehicles, and traffic signs. A neural net based on LIDAR sensor data can be faster than the vision-based version in the current Tesla FSD Beta software
Difference in latency is negligible for self driving cars imo
I don't think any of that is true. Reducing latency definitely not. Range highly doubt, you can see a galaxy a million lightyears away, good luck detecting that with lidar.
Now how do I go about shorting Cruise and Waymo.
Cruise navigation "my pre-map says this road is fine to use I'll ignore all signs" gets stuck in wet concrete.
Tesla FSD "WTF is this. I better avoid and go around in the direction of the sign or alert the driver" doesn't get stuck.
Relying on an algorithm is far more reliable than some neural network where regressions happen that you don't even understand.
I’m curious how you know what’s coming with FSD Beta 12: removing 300k lines of code and replacing them with 3k! Do you feel HW3 will be sufficient? I feel like the HW4 cameras are more for human viewing, but I also see the benefit of having improved video for NN Dojo training.
5:57 Elon
Most vocal Tesla FSD supporters bash Waymo/Cruise/etc. for requiring HD maps to "navigate on rails", over-relying on LIDAR, or being unable to operate well in a dynamic world. But a few of these claims are rather old and assume that Tesla's competitors have failed to evolve and adopt new AI architectures similar to what Tesla has done. These don't seem to be first-principles arguments grounded in truth (i.e., assessing the latest version of their architecture based on their tech talks from the past few years). Some of the faults shown above are also faults we've seen with FSD; we should keep that in mind before arriving at such a harsh conclusion against Cruise.
Let's see empirically how much better Tesla's FSD will be in the same exact scenarios. Although I'd bet on Tesla to win the AV market eventually, it seems like they're at the beginnings of the S curve along with everyone else and it will be at least a year before we see no-driver Teslas in SF. And once that happens, I'm sure Tesla will take off way faster than competitors. We're just not there today.
Disclaimer: I'm a huge Tesla/Elon fan and want them to win
can't wait to see/read the FUD articles endorsed by insurance companies once AV gets really good, esp. FSD.
I dunno, seems like a boon to insurance companies to have fewer accidents while still having laws mandating having insurance--same revenue, fewer payouts. My auto insurance (USAA) has an app that gives a discount for "safe driving" (monitors for things like harsh braking and using your phone while driving) and I think some others do too.
@@tHebUm18 Once FSD is far better than humans, I think Tesla will be held liable when cars don't need human intervention. Then insurance will be built into the cost of using the software.
@@tenzinpassang4812 Possibly, but also US/state laws are slow moving and often dumb. Little lobbying money from the insurance industry and I bet auto insurers collect years of premiums out of people not even driving their vehicle as laws continue requiring it.
I'm really curious to see how the evolution of AI is going to shape up. Because of the infinite number of factors that need to be processed in very quick succession, I want to see AI training companies spring up that basically use AI to create scenarios for AI to train on. Crazy things like a tornado touching down a few hundred feet down the road. Right now, the car won't stop; it will just keep going and even drive into it (I assume)
Or what about an earthquake that splits a road, will the car stop?
do you know what a tornado is?
Oh, DarkViperAU! :)
Are they trying to get all self driving vehicles out of roads?
If you have to put "MAY STOP QUICKLY" on your self-driving car, you might want to rethink what you're doing
I still think Tesla could have stuck with a front radar unit for redundancy
They should at least have one for emergency braking. They could also use that to fix the phantom braking issue. If the radar doesn't see something approaching fast, then it doesn't need to panic so hard. Aside from that, why would more sensors be bad? ... Well, cost of course, supply chain issues, maintenance... but it seems like there was a logical failure in their software about how to integrate all the sensors to form a coherent picture. If you're getting different answers from different systems then you're doing something wrong. They should have focused on fixing that instead of just cutting an eye out.
Thanks for the awesome video! I appreciate you showing the crash statistics at the beginning of the video, though I'm curious where you sourced the chart from.
Also, I would appreciate not having spooky music all the way through the video. To me, it adds unnecessary emotional tension to a topic which already has a stigma of fear and doom from fiction.
That GTA from Dviper tho
You should watch the mental outlaw video on Tesla's FSD 12 and the problems AI Neural Networking can bring to the table
Jesus, how much are these Cruise cars with all this hardware? I don't know how that is going to be scalable
Awesome video, I did not realize cruise and waymo are that behind. I thought they were doing very good.
They are trains, and if any part of the road is modified they don't work
What I don't get is how these companies don't realize they're taking the wrong approach to this. This way Tesla FSD will have no competition and will be able to charge its customers whatever it wants... sad.
I think many in the companies must get it by now, but there's significant inertia because "let's start over with a corrected approach" causes far too much organizational upheaval. They're too invested in what they're doing already. Tesla will be competing with traditional ride sharing, though.
Why did this video get taken down?
I was going to remake it, but decided I’ll learn from it instead
@@AIDRIVR this video is fine I don't see the fuss
Great explanation on how it is going for Cruise. I will agree with Elon, that massive amounts of code will not solve FSD. It can only take you so far.
What the hell does Musk know about writing code?!?!?
@@samuelglover7685 well he did code from early age... So a little I would assume.
@@ChristianBlueChimp You have evidence that his understanding of s/w has significantly increased since he was 12? There's no evidence of that in his biography.
Would love a video on comma ai
I started laughing at the first clip hearing that woman's reaction of Cruise. Go FSD Go Tesla
Has Tesla figured out why its FSD cars still crash into parked emergency vehicles?
Or why they have a tendency to kill motorcyclists?
You’re right, this is quite different. I enjoyed it. Your production quality is so far above the early days. Those were good too but you know.
I’m sitting in the same HW4/Beta limbo as you. Did you get MSM? I’m thinking there will be quite a few like me with a 2023 MSM Y running Beta soon.
It is crazy how well Tesla's FSD handles streets it has very limited navigation data for. Still, I don't know if removing every radar and ultrasonic sensor from the cars is the correct move.
In your video about the original Model S, you were amazed at how many cars it saw that your Model S would not have picked up.
Also, the ultrasonic sensors ought to give a more accurate reading of distances than a wide-lens camera in the bumper. Why not combine both systems and use the additional information when vision alone needs more?
imagine the fines/jail time/getting murdered I'd get if I did what cruise is doing.
A GM product broke down? Le ghasp!
It's a pre-mapped world, which is fine, but I think in reality they should use a dual program: one uses the cameras to figure out whether the programmed maps match the surroundings, and if the two don't match, then they should essentially use the dummy backup plug like in Evangelion
Nah they don't need pre programmed maps. It just makes it more complicated.
They need to be able to scan the area and identify everything, all in real time. Also make decisions in real time.
If I were to try to make a FSD program, I would have gone for a similar approach as Tesla. I don't think I'm nearly talented enough, but I know that I would have tried to write a program that can drive anywhere and make decisions based on what it is seeing and build its own map in real time. It would update/track objects, etc in this map. You can't just rely on the present as some objects may get blocked out of view, etc.
Anyways, Elon said using Lidar means they are doomed from the start. I think using pre-programmed maps is really what will doom them.
The only thing pre-programmed maps may be good for is simulations. Tesla has a huge advantage and can run code in a shadow mode on cars and can look for certain scenarios and test the new code in real life. I believe they do this, not 100% sure. But I think I heard it in a video. Either way they have a huge dataset and a lot of tools the competitors don't.
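The "shadow mode" idea mentioned above can be sketched in a few lines. This is a toy illustration under my own assumptions, not Tesla's implementation: a candidate policy runs alongside the active one, its outputs are compared but never actuated, and disagreements are logged for later analysis. The two stand-in policies and the disagreement threshold are invented.

```python
# Toy "shadow mode": the candidate policy observes the same inputs as
# the active policy, but only the active policy's output drives the car.
# Disagreements above a threshold are logged for offline review.

def active_policy(speed_limit, lead_gap):
    # hypothetical current policy: speed capped by the gap to the lead car
    return min(speed_limit, lead_gap * 0.5)


def candidate_policy(speed_limit, lead_gap):
    # hypothetical new tuning under evaluation
    return min(speed_limit, lead_gap * 0.6)


def drive_step(speed_limit, lead_gap, log):
    command = active_policy(speed_limit, lead_gap)    # this one drives
    shadow = candidate_policy(speed_limit, lead_gap)  # this one only watches
    if abs(command - shadow) > 1.0:                   # arbitrary threshold
        log.append((speed_limit, lead_gap, command, shadow))
    return command


log = []
drive_step(30.0, 40.0, log)   # policies disagree by 4.0 -> logged
drive_step(30.0, 100.0, log)  # both clip at the speed limit -> no entry
```

The appeal is exactly what the comment says: with a large fleet, new code gets evaluated against real-world situations at zero risk, because it never touches the controls.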
I would probably even use George Hotz's self-driving software (comma.ai's openpilot) before Cruise or Waymo. It is an open-source self-driving program you can install in your car, and it runs on a phone. It's only compatible with certain vehicles. I haven't seen an update in years though, so I am not sure where it is today.
Actually I would be forced to use it as I don't live in San Francisco 😅
@digi3218 I 100% agree. Heck, why not start up our own at this point? It makes more sense than letting them ruin the opportunity
@@playstation8779 I think Waymo was first and they went with the whole map idea and Cruise followed suit. I wouldn't doubt if they got help from Waymo.
Like I said I'm not nearly talented enough, but I would have never gone with that approach. Also just looking at Tesla, there are huge resources needed, but also looking at the open-source version from George, it's possible to get results without all those resources. Maybe not FSD, but I guess level 2 or 3 is what it would be. (I don't remember the scale)
When I say resources, I mean Tesla is building one of the worlds most powerful super computer just to solve this problem. And it costs millions in electricity to run.
Not sure if you were being sarcastic but I will let Tesla solve the problem and hopefully let others use it like they did there charging plug lol.
@digi3218 XD I mean a train network would be way more efficient but hey we don't want another terrible version of Tokyo right?
They just have too many sensors. That's the problem... sometimes too much is too much.
So removing LiDAR was a performance decision?
*not using LIDAR.
The major trouble is that all of these failures from other FSD providers will paint the whole concept in a bad light, making regulatory agencies less likely to give Tesla the green light for deployment and because of that, slow the rate of evolution for the neural networks.
Hopefully Cruise can recover and regain control of things, but if they continue to use outmoded methods that cast the industry in a bad light, then perhaps it would be better that they fade into the background.
Well, the evidence that FSD v12's end-to-end approach is superior has still to be provided by a widespread rollout.
E.g., how does that approach deal with situations that have not been part of the video training material?
And why did Tesla actually not participate in the San Francisco pilot, despite claiming since 2020 that they are able to do so?
This was quite good, actually. I'm not sure when the version 12 FSD beta release is supposed to roll out to the beta testers. I'm a little skeptical that it will be very soon since we haven't had a FSD update in about 2 months now. Plus I'm worried that it is overhyped and won't perform in the real world nearly as well as is being predicted. I hope I'm wrong, though.
Virtually ALL AI is way overhyped in recent years, so par for the course.
Unfortunately, until businesses are economically punished somehow for such nonsense, they'll do it to pump their stock, etc.
If a CEO just blatantly lies on pure fact, they can get them for that like Musk on the "420 funding secured". But the loopholes are gigantic. Musk falsely claims Tesla AI based FSD will be ready "real soon now" (I paraphrase) EVERY YEAR or even more often than that -- and he gets away with it since it's aspirational vs. fact, yadda yadda.
To me, a CEO being dead wrong vs. such "aspirations" about their own products ENDLESSLY is unacceptable -- but I don't make or enforce the laws on that.
And I say all this as a long term patient Tesla shareholder who is rooting for Tesla FSD robotaxis to be cheap and ubiquitous by the time I'm old enough I'd prefer not to drive.
Tesla is testing V12 on that newly built Dojo computer. They should be able to push the AI driving program safely to its limits inside that computer. This makes good engineering sense rather than testing in the real world with real-world consequences. Tesla will get its Drive GPT moment. Ignoring the market noise is what is needed right now.
Ouch
AI do you own tesla shares? I think once they nail fsd it will rocket, which elon reckons this year
Comparing statistics from several hundred million drivers to their 400 vehicles is ridiculous. With that limited data it's your preferred belief, not science.
Imagine if they got the Tesla treatment.
I am in a position where I can buy a Tesla in the near future, and I test-drove the Model 3 & Y a few weeks ago. I have watched a lot of your videos and I find FSD amazing,
but during the test drive I was shocked that the EU counterpart of FSD is not nearly as good as the one in the US: I had to intervene a lot because it was too aggressive,
or it was speeding towards a sharp turn. Also, on roads with no markings, I couldn't engage AP, or it would just simply disconnect.
I know that it will take some time before FSD in Europe is at the same level as in the US, but for now I am thinking of just sticking with the advanced AP, since changing lanes and taking highway exits were working perfectly fine (except that AP wanted to change into a closed lane one time).
We don't have FSD in Europe yet, sadly. Only Autopilot which, realistically, is only designed for highways. Let's hope we get FSD here soon!
I had same experience with what you are describing about 5 years ago. In the 5 years that followed with FSD Tesla FSD went from advanced AP to FSD. You won't have to wait 5 years. Tesla is a generalized driving solution. That means the FSD is trained like we are. We don't know every road but we do know how to drive generally and then we apply what we know to the situation.
This is Tesla's approach too. I did buy the FSD on my car at the time and I do not regret that decision at all. It has kept my cars relevant and updated. The difference is that your wait will be much shorter than mine was as I can see what FSD in real world looks like. So Europe should not be that far behind.
I don't believe Waymo or Cruise hand-code with if/else the driving algorithm. Just because they use pre-mapped data doesn't mean they don't use artificial neural networks. I'd like to see some informed technical papers on the fact but I think that for all the praise Tesla gets, their future as a self-driving system in the way Waymo and Cruise are is looking bleak.
There's definitely explicit code for rules like stop signs, lights, etc. For acting around pedestrians, though, they simulate many possible scenarios and narrow down the possibility space based on the pedestrian's previous position/velocity, etc. What makes end-to-end different is that it can take in the whole situation at once, including cues like body language and environmental context, to make a snap judgement just like we do. Humans sometimes simulate different possibilities and visualize them, but it's a slow process we normally don't use while driving. We see and we act without much thinking.
@@martindbp It's understandable to have some explicit rules around critical aspects. And that's ok. What I don't agree with is that these cars are like trams on virtual tracks which is completely false.
@@ArielChelsau They're on virtual tracks in the sense that they know exactly where and how to drive in their operational area, but with some leeway to go around things, adapt to the situation and deviate from that path. In the end, however, they are not very flexible, and this is the reason they've managed to get the mean time to failure down to such low levels. Personally, I would barely call this AI; it's not particularly exciting and almost certainly a technological dead end.
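The "simulate many scenarios and narrow down the possibility space" idea from this thread can be shown with a toy example. This is my own sketch, not any company's code: a pedestrian gets a handful of constant-velocity hypotheses, each is rolled forward one step, and hypotheses inconsistent with the next observation are discarded. All positions, velocities, and the tolerance are invented.

```python
# Toy hypothesis-narrowing for pedestrian prediction: keep only the
# constant-velocity hypotheses whose forward prediction matches what
# was actually observed next.

def predict(pos, vel, dt=1.0):
    """Roll a (x, y) position forward under a constant velocity."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)


def narrow(hypotheses, observed, tol=0.5):
    """Keep hypotheses whose one-step prediction lands near the observation."""
    kept = []
    for pos, vel in hypotheses:
        px, py = predict(pos, vel)
        if abs(px - observed[0]) <= tol and abs(py - observed[1]) <= tol:
            kept.append((observed, vel))  # re-anchor survivors at the observation
    return kept


start = (0.0, 0.0)
hyps = [
    (start, (1.0, 0.0)),  # crossing the road
    (start, (0.0, 1.0)),  # walking along the curb
    (start, (0.0, 0.0)),  # standing still
]
survivors = narrow(hyps, observed=(1.0, 0.1))
```

After one observation at (1.0, 0.1), only the "crossing" hypothesis survives, so the planner can stop budgeting for the other two futures. Production systems do this probabilistically over far richer motion models, but the narrowing principle is the same.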
Elon's stance on LIDAR is absurd. Yeah the sensors are expensive, but Tesla themselves have already proven that LIDAR is 10x more effective than cameras.
Anyway, absurd that these vehicles are allowed on public roads. Here in the Netherlands you literally can't use FSD on a Tesla (even if you bought it). You can use lane assist and adaptive cruise control, but that's it. And that's the way it should be until they can prove that self-driving vehicles will outperform humans in any situation, not just some situations, and public roads should not be used to test or improve them.
You are being absurd yourself. Absurd statement about Tesla and Lidar, and bizarre statement about having to prove the improvable BEFORE being allowed to prove it. Do YOU outperform humans in ANY situation? That is a lot of situations, you know. Billions of slightly different situations. And you have to be better than humans in every single one of them. And you will not be allowed to drive a car before you can prove it, even if you have to drive a car to prove it. Absurd.
Humans are bad at driving safely, and Teslas with the electronic gadgetry turned on are significantly safer. That is just how things work. If we let 18 year old men drive a car in traffic unsupervised (and we do), how is a few robot cars screwing up even a problem?
Too timid toward the dork pedestrian (in this case) at 5:10 that was walking against the red light (hand) and taking his time. A little bumper nudge was in order in this case. Now, obviously, if the AI detected a cane, a wheel chair, or stroller it would not be that aggressive. But, when pedestrians like this NBC camera guy with the glasses, smirk, and camera tripod over his shoulder taking his sweet time - all bets are off -Death Race 2023.
The current issues with Cruise... I bet Elon saw this coming years ago and avoided this model.
Just because we drive with "normal vision" doesn't mean Lidar's additional information isn't an improvement.
Mark my words:
To go fully autonomous, cameras and Lidars will have to be combined.
While I agree with your statement that LIDAR (and other sensors) adds extra dimensions to the perception of the world, what makes you think that cameras are not enough? Humans don't have anything other than cameras, and bad driving behavior 🙂
Remember when everyone thought Elon was crazy for relying on cameras alone? And telling everyone lidar isn't the solution?
Tesla has the much better and safer approach to this. I wonder why Waymo doesn't use a driver, too. It would have avoided so many accidents, jams and deaths.
So can someone help explain to me why they are removing so much human code, and why Tesla thinks that this removal will be beneficial in making it operate better?
Humans cannot predict every situation and the code we write won't always work. The AI can train on billions of miles of driving data collected by all the cars and learn how to handle more situations than we humans can ever imagine. It just takes a lot of time and data to train the AI.
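The point in this reply, learning behavior from data instead of hand-writing rules, can be illustrated with the smallest possible example. This is a deliberately tiny sketch of my own: a single steering gain is fitted by least squares from made-up (lane_offset, human_steering) pairs. Real systems fit deep networks to billions of frames, but the contrast with an if/else rulebook is the same.

```python
# Minimal "learn from demonstrations" example: fit a steering gain
# from recorded (lane_offset, human_steering) pairs instead of
# hand-coding the rule. All data below is invented.

def fit_gain(samples):
    """Least-squares fit of steering = gain * lane_offset (no intercept)."""
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den


# toy demonstrations: drift left -> steer right, and vice versa
demos = [(-1.0, 0.5), (-0.5, 0.25), (0.5, -0.25), (1.0, -0.5)]

gain = fit_gain(demos)   # learned from data, not hand-coded
steer = gain * 0.8       # apply the learned rule to a new lane offset
```

Nobody wrote "if offset is positive, steer left"; the rule emerged from the examples, and adding more varied demonstrations refines it, which is the whole argument for the data-driven approach.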
❤
The big problem for autonomy is that there are many human-driven cars in close proximity. So this problem will be there for the foreseeable future, and hopefully diminish as the tech spreads to more vehicles.
Seems like the main problems are not other cars but pedestrians, animals, construction zones and the like.
What a mess. The crashes with fire trucks could be avoided, but since the system doesn't have the exact red color or the shape of those trucks loaded, it does stupid things. And when the concrete is fresh, the system is totally blind, or only sees in black and white.
There are lots of places with few to zero pedestrians and more standardized roads. I wonder why companies like this don't start with easier locations first
maybe those areas don't reach the clientele quota needed to not lose money
I think if Cruise and Waymo use more AI they can definitely be safer, maybe even safer than Teslas with current hardware, because they can look far ahead with Lidar. But they need to change their code to be more like what Tesla is doing.
To look ahead you need to have the "intelligence" to understand what to look ahead for. Lidar is only a sensor, nothing that helps you look ahead.
@@wolfgangpreier9160 that's what I'm saying, more AI ;)
I wouldn't know anything about this if I wasn't subscribed to this channel lol
TESLA and Elon really know what they are doing, unlike those clowns
The skeptic in me thinks that Cruise is trying to give driverless cars a bad rep by doing this. If you can't win the FSD race, give it a bad rep before it's solved.
May stop quickly. Already a red flag