Robocars.com interview with Amnon Shashua of Mobileye on 4 key issues
- Added July 25, 2024
- Text article: www.forbes.com/sites/bradtemp...
Mobileye (Intel) has risen to be one of the potential robocar leaders. This is a 27-minute interview with Amnon Shashua, founder and CEO of Mobileye and one of the most astute players in the field. I dig into 4 technical issues, listed below. If you want an introduction to Mobileye's strategy, with a contrast to Tesla's, see the text article or Shashua's own videos. They seem to be doing a lot of things right. Can they deliver?
At one point I describe Tesla FSD as having an MTBF of minutes, sometimes seconds. There I mean any problem; Amnon then clarifies that by MTBF he means time between accidents. Tesla isn't quite that bad by his definition.
Shashua video: • CES 2022 Under the Hoo... (1 hour)
Summary: • CES 2022 Under the Hoo...
0:00 Video Introduction
1:16 New Channel Plans
1:47 Sensor/Perception redundancy and fusion
10:52 REM and Mapping efficiency
12:19 Do we know the algorithms we will need?
21:09 Mobileye Robotaxi plans
29:35 What's Next
MTBF - Mean Time Between Failures
REM - Road Experience Management: building maps from inputs
ADAS - Advanced Driver Assistance Systems: step changes in ADAS capability don't necessarily get you to "Full Self-Driving"
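Since the MTBF definition above is central to the interview's disagreement, here is a toy calculation (all numbers invented for illustration) showing how the choice of what counts as a "failure" changes the answer by orders of magnitude:

```python
# Toy MTBF sketch with made-up numbers (not real fleet data).
# MTBF = total operating time / number of failures. The definition of
# "failure" matters enormously: counting every intervention gives minutes,
# counting only accidents gives thousands of hours.

def mtbf_hours(total_hours: float, failures: int) -> float:
    """Mean time between failures, in hours."""
    return total_hours / failures if failures else float("inf")

# Hypothetical fleet: 10,000 hours of driving.
interventions = 30_000   # every driver takeover counted as a failure
accidents = 2            # only crashes counted as a failure

print(mtbf_hours(10_000, interventions))  # ~0.33 h, about 20 minutes
print(mtbf_hours(10_000, accidents))      # 5000.0 h between accidents
```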
Can't believe I've not come across your YT channel before now. Subscribed, and thanks.
I’m glad I found your channel, great way to stay updated about the latest in self driving. Keep up the great work!
I just got around to watching the entire video. Great video & appreciate that you politely push for answers.
I've never seen an interview where Elon has allowed himself to be scrutinized like this.
Great interview!
Excellent interview. I loved how you challenged him even though I am a big fan of Mobileye.
Good interview Brad.
Frequent content like this covering the AV space and you'll get to 1000 subs.
Interesting interview. On the fusion question: it seems to me to be a matter of complexity. Building two systems capable of driving the car with relatively uncorrelated failure modes is likely easier than building one fused system. Why? Because the engineering at the meta level is the same. Same evaluation metrics. Same iterative improvement process. You can have specialized engineering teams working relatively independently on each system, without one gating the other. It may be that you can get even better performance fusing the sensor modes earlier in the pipeline, but that's likely a second-order improvement. The first-order improvement comes from (relatively) uncorrelated independent systems.
So it's more of a practical issue than what is theoretically superior in an idealized world.
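The "two relatively independent systems" argument in the comment above can be pictured with a small arbitration sketch. This is a hypothetical illustration, not Mobileye's actual architecture; the Verdict fields and the policy strings are invented:

```python
# Sketch of independent-subsystem redundancy (hypothetical illustration).
# Each perception subsystem produces its own drivability verdict plus a
# self-check; the vehicle proceeds only when all healthy subsystems agree,
# and falls back to a minimal-risk maneuver otherwise.

from dataclasses import dataclass

@dataclass
class Verdict:
    ok_to_proceed: bool   # subsystem says the planned path is clear
    healthy: bool         # subsystem's self-diagnostics passed

def arbitrate(camera: Verdict, lidar_radar: Verdict) -> str:
    healthy = [v for v in (camera, lidar_radar) if v.healthy]
    if not healthy:
        return "emergency_stop"       # total perception loss
    if len(healthy) == 1:
        # One channel down: drive on the survivor, but only to a safe stop.
        return "degraded_pull_over"
    if all(v.ok_to_proceed for v in healthy):
        return "proceed"
    return "slow_and_reassess"        # disagreement: be conservative

print(arbitrate(Verdict(True, True), Verdict(True, True)))   # proceed
print(arbitrate(Verdict(True, True), Verdict(False, True)))  # slow_and_reassess
```

The point of the sketch matches the comment: each subsystem can be built, measured, and improved by an independent team, and the arbitration layer stays trivial.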
He said that if he shut his cameras down, the redundant system would still work robustly. Clarifications: can LIDAR or 4D imaging radar identify which traffic light is on (red, yellow, or green)? Since they are color-blind, can they identify the position of the lit lamp and infer red, yellow, or green from it? Or is it impossible to navigate through traffic lights without cameras?
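On the color-blindness question: if (and it is a big if) a color-blind sensor could localize which lamp in the signal head is lit, the state could in principle be inferred from position alone. A toy sketch, assuming a standard vertical head with red on top, and image-style coordinates where y grows downward; whether LIDAR or radar can detect a lit lamp at all, and horizontal signal heads, are left open, which is the commenter's real question:

```python
# Toy position-based traffic-light inference for a color-blind sensor
# (a thought experiment, not a claim about any real stack). Given the
# vertical extent of the signal head and the centroid of the lit lamp,
# infer the state from where the lamp sits in the head.
# Assumes a vertical three-lamp head: red on top, green at the bottom.

def infer_state(head_top: float, head_bottom: float, lit_y: float) -> str:
    """y grows downward, as in image coordinates."""
    frac = (lit_y - head_top) / (head_bottom - head_top)
    if frac < 1 / 3:
        return "red"
    if frac < 2 / 3:
        return "yellow"
    return "green"

print(infer_state(100, 160, 110))  # red (lamp near the top)
print(infer_state(100, 160, 150))  # green (lamp near the bottom)
```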
Super informative video! Looking forward to one with Sterling Anderson. (Glad to be subscriber 555)
I don't really know him, but I worked with Chris Urmson for a couple of years and he might be easier to invite. Aurora has said so little, it's hard to dig deep into what they have, though.
@bradtem You can make videos about companies like Aurora, Argo.ai, and others, where you do your best to lay out where they are, their strategy, history, etc.
I think making regular videos to educate the general public, whether with a big interview or not, will grow your audience substantially.
Nobody else is regularly covering autonomy on YouTube. There are plenty of fanboys, but the fanboys don't understand the rest of the field, so you can put yourself in the center of the most exciting automotive field.
Great interview, but please don't interrupt your guest.
Really interesting, especially the topic of low- vs. high-level fusion. Low-level fusion feels much more powerful in theory (less code and fewer hand-written rules, more neural-network optimization), but I agree with Prof. Amnon Shashua that you need to solve the problem of a single sensor breaking down. Can you augment the data during training, e.g. with artificially covered cameras, to cover for this? Even if that is solved, there is still the question of verifying the system.
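The augmentation idea raised here, artificially covering a camera during training, can be sketched as random sensor dropout. A minimal illustration with invented names and hyperparameters, not a claim about any production training pipeline:

```python
# Sketch of sensor-dropout augmentation (hypothetical illustration).
# During training, randomly blank out one sensor's input so a fused
# network learns not to depend on any single modality.

import random

def sensor_dropout(batch: dict, p_drop: float = 0.1) -> dict:
    """batch maps sensor name -> list of feature values."""
    out = {}
    for name, feats in batch.items():
        if random.random() < p_drop:
            out[name] = [0.0] * len(feats)   # simulate a dead/covered sensor
        else:
            out[name] = list(feats)
    return out

augmented = sensor_dropout({"camera": [0.5, 0.7], "radar": [1.2]}, p_drop=0.5)
```

With this in place, a low-level-fused network sees sensor failure during training, which addresses part of the robustness concern, though it does nothing for the verification question.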
If a sensor fails in a taxi, it's sufficient to pull over and dispatch another taxi. In a private car, the passenger can take the wheel when ready.
@bradtem I was thinking more about the transition time (from identified failure to safely stopped at the road edge). Sometimes this period can be quite long, e.g. if you are in the middle of a busy three-lane highway or in a narrow construction-site lane with concrete barriers on both sides. You need to be able to drive with a broken sensor for a while. During this time the car must continue to drive, with reduced safety of course, but it must make it to a safe spot on its own.
@lightinspace2963 Yes, and with the broken sensor it will have somewhat degraded performance. But you can have a performance level which is not sufficient for all-day operations yet presents low risk when only used in emergencies. All driving consists of regions of higher and lower risk. An emergency pull-over on limited sensors will be higher risk, but within reason it's a perfectly acceptable solution to this problem.
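The exchange above amounts to a small fallback state machine: normal driving, then degraded driving on the surviving sensors until a safe spot is reached. A sketch with illustrative state names and transitions, not taken from any real system:

```python
# Minimal-risk-maneuver state machine sketched from the discussion above
# (states and transitions are illustrative only).

NORMAL, DEGRADED, STOPPED = "normal", "degraded_driving", "stopped_safely"

def step(state: str, sensor_failed: bool, safe_spot_reached: bool) -> str:
    if state == NORMAL:
        return DEGRADED if sensor_failed else NORMAL
    if state == DEGRADED:
        # Keep driving on the remaining sensors, accepting temporarily
        # elevated risk, until a safe stopping spot is reached.
        return STOPPED if safe_spot_reached else DEGRADED
    return STOPPED  # terminal: wait for rescue / another taxi

s = NORMAL
s = step(s, sensor_failed=True, safe_spot_reached=False)  # degraded_driving
s = step(s, sensor_failed=True, safe_spot_reached=False)  # still degraded
s = step(s, sensor_failed=True, safe_spot_reached=True)   # stopped_safely
```

The interesting engineering is hidden inside the DEGRADED state: how long the car can safely remain there, as the three-lane-highway and construction-zone examples show, is exactly the transition-time concern raised above.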
Everyone has low-level fusion, even MBY.