TSLA

3,779 Views | 76 Replies | Last: 3 days ago by hph6203
LOYAL AG
AG
Medaggie said:

Waymo is not a viable solution and not scalable. I doubt Elon will want anything to do with Uber and what does Uber offer that Tsla needs?

If and when Tsla solves FSD, they could easily undercut both Uber and Waymo by 50% and still break even.

I don't think Uber will exist in its current form in 5 yrs. I have never shorted a stock, but I would short UBER if I did.


Don't disagree, except to say that what Uber offers is the customer base and infrastructure for Robotaxi to generate revenue without Tesla having to build those themselves.
_lefraud_
AG
Where does food delivery factor in? Looks like Waymo has the capability in some specific areas, but it's still dependent on the customer selecting that option.

I think it will take a pretty large shift in society for people to get on board with having their food/groceries delivered without a human, so Uber and their customer base will still have a lot of value for the next decade plus.
LOYAL AG
AG
_lefraud_ said:

Where does food delivery factor in? Looks like Waymo has the capability in some specific areas, but it's still dependent on the customer selecting that option.

I think it will take a pretty large shift in society for people to get on board with having their food/groceries delivered without a human, so Uber and their customer base will still have a lot of value for the next decade plus.


I would think people would be far more willing to let a Waymo deliver their groceries than they would to actually get in one and go somewhere. I would love to send my Tesla to HEB to pick up the groceries. It's capable right now if Tesla were to enable FSD without supervision. The front camera can see the spot I'm in so I could text HEB where to take them and I can open and close the trunk from my phone. There's even a megaphone I could use to thank the person loading them up. The moment FSD becomes autonomous people will be doing that.
Caliber
AG
Regular, albeit complex, code written to handle situations will only ever be able to handle the situations it was programmed for.

Driving will always have one-off situations, and that is the hardest part of FSD. AI has to be the answer, and the limitation today is that it still behaves a lot like complex code and can only handle what the data has taught it.

I'm not saying it needs AGI to do it; I do think those inferential gaps can be closed in the near future so it can process odd situations at least as well as the average human driver could.

A curiosity question for anyone in the know... How would FSD handle road construction on a country road today with a flagger and one-way traffic where you wait for the pilot car to go from either side? Can it do something like that yet?
MAS444
AG
Yeah, construction zones are one of the issues I'm most curious about. Similarly, recent accidents or other things that unexpectedly change traffic routes/patterns. Sometimes it's difficult for a human driver to figure out what the hell to do (I know, I know... the technology is better than a human).
Medaggie
Caliber said:

Regular, albeit complex, code written to handle situations will only ever be able to handle the situations it was programmed for.

Driving will always have one-off situations, and that is the hardest part of FSD. AI has to be the answer, and the limitation today is that it still behaves a lot like complex code and can only handle what the data has taught it.

I'm not saying it needs AGI to do it; I do think those inferential gaps can be closed in the near future so it can process odd situations at least as well as the average human driver could.

A curiosity question for anyone in the know... How would FSD handle road construction on a country road today with a flagger and one-way traffic where you wait for the pilot car to go from either side? Can it do something like that yet?

I have gone through construction where they diverted traffic with cones and it handled it well. I have not done a flagger, but you can probably find it on YouTube; I bet it would handle it well.

FSD is vision-only and AI-based, and it actually follows/mimics what the car in front of you does, which is similar to a human.

FSD drives with the flow of traffic. If it's a 55 mph zone and everyone is going 70 mph, FSD will drive at 70 mph. You can get a ticket because it will go 15+ mph above the speed limit, but this is no different than a human.

If they allowed FSD without supervision, I would trust it to drive from Austin to Dallas without concerns. It is this good.
LOYAL AG
AG
Caliber said:

Regular, albeit complex, code written to handle situations will only ever be able to handle the situations it was programmed for.

Driving will always have one-off situations, and that is the hardest part of FSD. AI has to be the answer, and the limitation today is that it still behaves a lot like complex code and can only handle what the data has taught it.

I'm not saying it needs AGI to do it; I do think those inferential gaps can be closed in the near future so it can process odd situations at least as well as the average human driver could.

A curiosity question for anyone in the know... How would FSD handle road construction on a country road today with a flagger and one-way traffic where you wait for the pilot car to go from either side? Can it do something like that yet?


My experience with construction is mixed and I'm not quite as enthusiastic as med is. There have been several times where it's pushed toward a forced merge where cones or barrels are closing my lane and it's not done anything to adjust speed to accommodate the car even with or slightly ahead of me. In other words, if we were passing that car but running out of lane, it kept going to pass. I'm certain the math told the car we were going to pass before the merge, but it was uncomfortable for me so I took over and either slowed down or sped up depending on what was needed. In fairness to the car, humans are pretty poor at that too, so maybe it was just mimicking what it sees us do. lol. I have also had it misread a temporary double yellow line made of plastic rivets, where it tried to change into oncoming traffic to pass the slow-moving cars in front of me.
hph6203
AG
I've been in the pilot car scenario before and it couldn't handle it yet, but I have no doubt it eventually will be able to. FSD isn't programmed, it's trained. They sort through billions of miles of driving data, collect the data they need, and then include it in training to improve performance.



GeorgiAg
AG
This morning, I was using FSD (Full Self-Driving) on the way to work (brand new Model 3, Hardware 4, FSD 13). A car one car ahead stopped suddenly. The car directly in front of me slammed on its brakes but was still going to hit them, so they had to swerve right into a turn lane with a cement pork chop island type deal. The Tesla did the same and threaded the needle as well. Everything worked perfectly, but it was crazy there for a second.

I read FSD 14 is already rolling out to employees and we'll get the update any day now.
GeorgiAg
AG
I also bought TSLA at $340 and I'm gonna HODL. I am really impressed with the car.
rlb28
AG
Test drove a 2026 Model Y with my 82 y/o MIL in the driver's seat last month. On a Friday with 5 p.m. traffic from The Woodlands to Angleton thru Houston while it was raining. She never touched the steering wheel, and it went off without a hitch. I didn't realize we were "there yet", but it was amazing.
YouBet
AG
rlb28 said:

Test drove a 2026 Model Y with my 82 y/o MIL in the driver's seat last month. On a Friday with 5 p.m. traffic from The Woodlands to Angleton thru Houston while it was raining. She never touched the steering wheel, and it went off without a hitch. I didn't realize we were "there yet", but it was amazing.


That's ballsy.
rlb28
AG
Yeah, they bought it the next day.

It will allow them to stay in their home and keep their own mobility, rather than having the car keys taken away and the whole slew of issues that comes with that.
GeorgiAg
AG
rlb28 said:

Test drove a 2026 Model Y with my 82 y/o MIL in the driver's seat last month. On a Friday with 5 p.m. traffic from The Woodlands to Angleton thru Houston while it was raining. She never touched the steering wheel, and it went off without a hitch. I didn't realize we were "there yet", but it was amazing.

I dunno man. I've had mine 2 weeks and it already took a wrong turn. Not a big deal, but it just ignored its own GPS directions. I'm sure it would have rerouted, but it was kinda unnerving. Another time it saw a phantom green light. There were two red lights, which it saw, but then it saw a green one way off to the left that did not exist.

When you press the brakes in the middle of FSD, the Tesla will ask for feedback on why you did it. You press the microphone button and tell it why. I only did it for the "errors." That way the engineers can review and correct it.

My friend has a physician coworker who is a Tesla nut. He told me she had a wreck with FSD so bad it put her in the hospital for a while. Not sure we are completely "there yet." Maybe FSD 14 will be closer to it.
rlb28
AG
Agree. After they made it to my house I took it out, and there is a place that is confusing to people when it comes to merging, and it effed up at that spot.

But for the safety of the citizens of Conroe it will work waaaaayyyy better than them driving. LOL!
ABATTBQ11
AG
hph6203 said:

You might be correct, but you also might be the guy in 1912 telling Henry Ford that it will take him 15+ years to sell his 2 millionth vehicle, because he had only sold 150,000 vehicles in the last 4 years, not recognizing that for the first 4 years he had been developing the moving assembly line manufacturing process and in actuality it only took him just over 4 years to sell the 2 millionth vehicle.



Quick clarification...

Ford didn't really develop the moving assembly line for manufacturing. What he developed were the jigs and processes to manufacture parts consistently enough to make it possible. Plenty of people had the idea before him, but it was unworkable due to the variation in manufactured parts. Cars were basically hand built up to that point because parts had to be individually fit to each other. Ford introduced jigs and other processes so that parts could easily be brought within tolerance and mate to any other part quickly and easily. Only then could the assembly line work.


That said, these are 2 completely different problems. Ford's issue was finding a way to quickly and consistently repeat a finite and relatively small set of operations. That problem had a boundary. Tesla's issue is how to teach a car to drive under all conditions and in all situations. That's an unbounded problem. You could look at Ford's production and point to specific parts of the production process that would need to be optimized and by how much to reach a production rate at a target cost to sell 2 million vehicles. With Tesla, there are potentially limitless edge cases that can't be known until they're encountered.

Consider the traveling salesman problem. A salesman has to travel to different cities in his territory. He wants to choose the shortest overall path to minimize time on the road and only visit each city once. Sounds easy enough, and solving it is easy until you start adding more cities. You quickly get to a point where checking all of the possible solutions would take longer than the life of the universe, so you have to start using strategies to approximate the solution. While there are some really good approximations, there is no known general solution that will allow you to find the optimal path for any n number of cities, despite some of the greatest minds in the history of mathematics trying. There were even some very smart people who thought that computers would finally be able to help solve the problem and all we needed was more computational power to tackle it, but that's never happened.
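To make "approximating the solution" concrete, here is a minimal sketch of one such strategy, a greedy nearest-neighbor heuristic. The cities and coordinates are made up for illustration, and the tour it returns is usually reasonable but not guaranteed to be optimal, which is exactly the trade-off being described.

```python
# Minimal sketch of "approximating the solution": a greedy nearest-neighbor
# heuristic for the traveling salesman problem. The city coordinates are made up.
# The tour it returns is usually reasonable but is not guaranteed to be optimal.
import math

cities = {
    "A": (0, 0), "B": (3, 1), "C": (1, 4),
    "D": (6, 2), "E": (5, 5), "F": (2, 7),
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_tour(start="A"):
    unvisited = set(cities) - {start}
    tour, current = [start], start
    while unvisited:
        # Greedy step: always hop to the closest unvisited city.
        nxt = min(unvisited, key=lambda c: dist(cities[current], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    tour.append(start)  # return home
    length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
    return tour, length

tour, length = nearest_neighbor_tour()
print(tour, round(length, 2))
```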

This is the territory Tesla is in. They need a general solution to a nearly endless problem. Maintaining a lane, stopping in traffic, changing lanes, waiting for a green light, recognizing a stop sign, etc. are all relatively simple special cases that represent the well-bounded, small version of the problem. When you start adding in understanding construction signs, estimating potential collisions with obstacles, classifying obstacles, predicting the behavior of other vehicles or pedestrians, operating in low visibility or slick conditions, flat tires/blowouts, atypical intersections, etc., you quickly get to a point where there is too much and you must now approximate solutions. Maybe 90% of driving is those low-hanging special cases, so it seems super close after getting them, but the other 10% is an explosion of endless edge cases. You could solve 99% of the problem and the remaining 1% could still represent decades of work.


To address some other points...

Yeah they launched the robotaxi service, but there were (and I believe still are) a lot of bugs in it. Austin and San Francisco also have highly mapped streets. You don't have that everywhere. There's still a safety driver in the car for a reason.

Faster and better computers have always been the proposed solution to lots of problems. They very rarely are. Some problems are simply intractable. I think this may be one of them. FSD may end up working like autopilot in aircraft. There are a lot of things that pilots have to know and do, but between takeoff and landing autopilot can do the majority. Yes, autopilot systems can often handle takeoff and landing too, but that's usually with a lot of ground systems like an ILS or by using GPS with mapped and stored airports. I don't know of any that could land on a dirt airstrip, make an emergency landing, or handle any of the complex scenarios a human pilot is trained for. That said, pilots are always in the cockpit and trained to be constantly ready in case they need to take over. Your typical driver, if given FSD, would fall asleep in the back seat or get on their phone and be completely oblivious.
BucketofBalls99
rlb28 said:

Yeah, they bought it the next day.

It will allow them to stay in their home and have their own mobility rather than the car keys taken away and a whole slew of issues that comes with that.

That is really interesting to me. I would have never thought of that angle!
Diggity
AG
My parents have trouble scheduling an Uber...so I don't think FSD would be for them
GeorgiAg
AG
ABATTBQ11 said:

hph6203 said:

You might be correct, but you also might be the guy in 1912 telling Henry Ford that it will take him 15+ years to sell his 2 millionth vehicle, because he had only sold 150,000 vehicles in the last 4 years, not recognizing that for the first 4 years he had been developing the moving assembly line manufacturing process and in actuality it only took him just over 4 years to sell the 2 millionth vehicle.



Quick clarification...

Ford didn't really develop the moving assembly line for manufacturing. What he developed were the jigs and processes to manufacture parts consistently enough to make it possible. Plenty of people had the idea before him, but it was unworkable due to the variation in manufactured parts. Cars were basically hand built up to that point because parts had to be individually fit to each other. Ford introduced jigs and other processes so that parts could easily be brought within tolerance and mate to any other part quickly and easily. Only then could the assembly line work.


That said, these are 2 completely different problems. Ford's issue was finding a way to quickly and consistently repeat a finite and relatively small set of operations. That problem had a boundary. Tesla's issue is how to teach a car to drive under all conditions and in all situations. That's an unbounded problem. You could look at Ford's production and point to specific parts of the production process that would need to be optimized and by how much to reach a production rate at a target cost to sell 2 million vehicles. With Tesla, there are potentially limitless edge cases that can't be known until they're encountered.

Consider the traveling salesman problem. A salesman has to travel to different cities in his territory. He wants to choose the shortest overall path to minimize time on the road and only visit each city once. Sounds easy enough, and solving it is easy until you start adding more cities. You quickly get to a point where checking all of the possible solutions would take longer than the life of the universe, so you have to start using strategies to approximate the solution. While there are some really good approximations, there is no known general solution that will allow you to find the optimal path for any n number of cities, despite some of the greatest minds in the history of mathematics trying. There were even some very smart people who thought that computers would finally be able to help solve the problem and all we needed was more computational power to tackle it, but that's never happened.

This is the territory Tesla is in. They need a general solution to a nearly endless problem. Maintaining a lane, stopping in traffic, changing lanes, waiting for a green light, recognizing a stop sign, etc. are all relatively simple special cases that represent the well-bounded, small version of the problem. When you start adding in understanding construction signs, estimating potential collisions with obstacles, classifying obstacles, predicting the behavior of other vehicles or pedestrians, operating in low visibility or slick conditions, flat tires/blowouts, atypical intersections, etc., you quickly get to a point where there is too much and you must now approximate solutions. Maybe 90% of driving is those low-hanging special cases, so it seems super close after getting them, but the other 10% is an explosion of endless edge cases. You could solve 99% of the problem and the remaining 1% could still represent decades of work.


To address some other points...

Yeah they launched the robotaxi service, but there were (and I believe still are) a lot of bugs in it. Austin and San Francisco also have highly mapped streets. You don't have that everywhere. There's still a safety driver in the car for a reason.

Faster and better computers have always been the proposed solution to lots of problems. They very rarely are. Some problems are simply intractable. I think this may be one of them. FSD may end up working like autopilot in aircraft. There are a lot of things that pilots have to know and do, but between takeoff and landing autopilot can do the majority. Yes, autopilot systems can often handle takeoff and landing too, but that's usually with a lot of ground systems like an ILS or by using GPS with mapped and stored airports. I don't know of any that could land on a dirt airstrip, make an emergency landing, or handle any of the complex scenarios a human pilot is trained for. That said, pilots are always in the cockpit and trained to be constantly ready in case they need to take over. Your typical driver, if given FSD, would fall asleep in the back seat or get on their phone and be completely oblivious.

The solution to your problem is to have the roads be "autonomous ready." DOT would monitor lane markings, signs, etc. to make sure there is nothing that is an unusual situation for the AI. You would also have the cars communicate with DOT or other cars if something out of standard was encountered. Then DOT would be dispatched, or the other cars would not drive autonomously through that area, etc.

I think at first only stretches of rural interstate would be like that, and then it would progress to more complex areas and situations.

There could always still be a problem, but it would be rare and then corrected for other drivers.
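As a rough illustration of that reporting idea, here is a hypothetical sketch of the kind of structured message a car might publish when it encounters an out-of-standard condition. Every field name, and the notion of a shared DOT feed, is an assumption for illustration, not any real V2I standard.

```python
# Hypothetical sketch of the reporting idea above: a car that encounters an
# out-of-standard condition (missing lane marking, downed sign, etc.) publishes a
# structured report that a DOT service or nearby cars could consume. The field
# names and the idea of a shared feed are assumptions for illustration only.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RoadAnomalyReport:
    lat: float
    lon: float
    anomaly_type: str   # e.g. "missing_lane_marking", "obscured_sign"
    confidence: float   # the car's confidence that the anomaly is real
    observed_at: str    # UTC timestamp

def build_report(lat, lon, anomaly_type, confidence):
    return RoadAnomalyReport(
        lat=lat, lon=lon, anomaly_type=anomaly_type, confidence=confidence,
        observed_at=datetime.now(timezone.utc).isoformat(),
    )

# A receiving service could flag the road segment so other cars fall back to
# manual driving there until a crew verifies and clears it.
report = build_report(30.2672, -97.7431, "missing_lane_marking", 0.87)
print(json.dumps(asdict(report), indent=2))
```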
GeorgiAg
AG
Man, them computers is smart!
hph6203
AG
You just argued against yourself and didn't realize it.
hph6203
AG
1) Totally sidestepped the point. The point was you can't extrapolate future advancement based upon past advancement, because fundamental changes can occur that rapidly improve the rate of advancement.

Ford was the first to implement the moving assembly line; the fact that others had conceived of and failed to implement it is irrelevant. Even if someone had conceived of it and implemented it prior to Ford, it wouldn't change the fact that it wasn't in use by Ford prior to 1913, and an uninformed observer would presume the future rate of production would be comparable to historical rates of production, but in reality a fundamental change to the production process radically improved the rate of production.

That's what Tesla did in 2022/2023. They increased the rate of data ingestion by automating video labeling, increased the rate of training by changing their vehicle control methodology (utilizing human driver responses married to visual data, removing hand-coded instruction), and increased the quantity of data processed and the rate of iteration by increasing the amount of compute.

2) This is where you are arguing against yourself. You are conceptualizing a perfect driving system rather than an approximation of a perfect driving system. No human is exact in their driving decisions, and neither will the autonomous driving systems be. The threshold they have to exceed is the average human driver, not perfection.

As you correctly pointed out, while you cannot reliably calculate the optimal path for the traveling salesman to travel, you can approximate perfection fairly reliably so that the variation is relatively inconsequential. Traveling salesmen do travel and human drivers do drive, even though neither does so perfectly and neither human nor computer will ever do it perfectly.

3) Waymo just released their safety impact report last week: a 75% reduction in airbag deployments over their 100 million miles of data when comparing to comparable driving scenarios (i.e. comparing city driving in San Francisco to San Francisco, Phoenix to Phoenix, under the same or similar road conditions/weather, etc.), despite the fact that Waymo's system is decidedly not perfect nor is it trained to handle every edge case. How? Because the vast majority of driving is not edge cases, and the vast majority of accidents do not arise from edge case scenarios; they arise from predictable/common scenarios that human drivers fail to appropriately react to. They are overwhelmingly caused by driver error (an estimated 94% of accidents), whether that's because they're speeding, following too closely, drunk, looking at their phone, zoning out, jamming too hard to Chumbawamba, etc. Merely using good driving habits and paying attention cuts out a ridiculous amount of avoidable errors.

4) Tesla's self-driving model is not deterministic. It is probabilistic. They monitor/record billions of miles of driving data across millions of different drivers, query those billions of miles for scenarios that they want to improve training on, select the best responses by those drivers from the data set, and use those examples to train the model on the visual data and control response. The model then builds essentially a familiarity with similar, but not exactly the same, scenarios so that in the future it can estimate the scenario it's in and the appropriate response, rapidly updating its probabilities of accurate perception and of appropriate response.
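A rough sketch of what that query-and-curate loop could look like, under the assumption that logged clips carry auto-generated scenario tags and a few quality signals. The fields and filtering rules here are invented for illustration; the actual pipeline is not public.

```python
# Rough sketch of a query-and-curate loop: pull clips that match a target scenario,
# keep only the ones where the human response was clean, and add them to training.
# The fields and the filtering rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Clip:
    scenario_tags: set   # auto-labels, e.g. {"flagger", "lane_closure"}
    hard_braking: bool   # proxy signals for a sloppy human response
    disengagement: bool
    collision: bool

def curate(clips, target_tag, max_examples=10_000):
    """Select clean human examples of a target scenario for the next training run."""
    selected = []
    for clip in clips:
        if target_tag not in clip.scenario_tags:
            continue
        # Keep only clips where the human handled the scenario smoothly.
        if clip.collision or clip.disengagement or clip.hard_braking:
            continue
        selected.append(clip)
        if len(selected) >= max_examples:
            break
    return selected

clips = [
    Clip({"flagger"}, hard_braking=False, disengagement=False, collision=False),
    Clip({"flagger"}, hard_braking=True, disengagement=False, collision=False),
    Clip({"lane_closure"}, hard_braking=False, disengagement=False, collision=False),
]
print(len(curate(clips, "flagger")))  # -> 1
```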


What is a rare experience for you, both as a proportion of your driving and in raw count, is a rare experience as a proportion of the fleet's data but not a rare occurrence in raw count. What may take a single driver a lifetime to see while driving is occurring approximately 500 times per day, 150,000 times per year, across their fleet, and the quantity of occurrences grows as they sell more vehicles (~2 million per year; the current fleet is ~8.5 million), while the quality of the data improves as they go from relying on customer vehicle data to company-owned fleet vehicles. There are no unreported fender benders in a company-owned car.
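A quick back-of-the-envelope check of those fleet numbers, assuming an edge case a typical driver sees roughly once in 50 years of driving and a fleet of about 8.5 million vehicles on the road daily. The inputs are assumptions, not Tesla data, but they land close to the figures quoted above.

```python
# Back-of-the-envelope check of the fleet-scale claim: assume an edge case a typical
# driver sees roughly once in ~50 years of driving and a fleet of ~8.5 million
# vehicles driven daily. Inputs are assumptions, not Tesla data.
fleet_size = 8_500_000
years_per_driver_sighting = 50   # "once in a driving lifetime"
days_per_year = 365

per_vehicle_daily_rate = 1 / (years_per_driver_sighting * days_per_year)
fleet_per_day = fleet_size * per_vehicle_daily_rate
print(round(fleet_per_day))                   # ~466 occurrences per day
print(round(fleet_per_day * days_per_year))   # ~170,000 per year
```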

5) Improvements in the onboard computer increase the number of parameters the model can account for while driving, which improves the accuracy of its prediction of the scenario it's in, both in any given instant and across time. Meaning it can more fully analyze the data in each of the 8 frames of video it processes 24 times a second, and more fully analyze changes in those frames over time.
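Taking those figures at face value (8 camera frames per step, 24 steps per second), the per-step compute window works out as follows; this is just arithmetic on the numbers above, not a spec.

```python
# Arithmetic on the numbers above (8 camera frames per step, 24 steps per second):
# the per-step window the onboard computer has to fit perception and planning into.
cameras = 8
steps_per_second = 24

frames_per_second = cameras * steps_per_second   # 192 frames ingested per second
budget_ms_per_step = 1000 / steps_per_second     # ~41.7 ms to process all 8 frames
print(frames_per_second, round(budget_ms_per_step, 1))
```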

The current model being used in vehicles is a half step between the previous hardware and the current hardware. The first software version that fully exploits the current hardware is expected in the next 3 months and is currently in use on employee vehicles.

6) Waymo released a white paper validating that the scaling laws that apply to LLMs also apply to autonomous driving: more data and more training capacity lead to improved performance.
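For anyone unfamiliar with the term, a "scaling law" just means performance improves as a smooth power law in data or compute, so each 10x of data buys a predictable (but shrinking) improvement. A toy illustration with made-up constants (not numbers from Waymo's paper):

```python
# Toy illustration of a scaling law: error falls roughly as a power law in training
# data. Constants are made up and are not from Waymo's paper.
def scaling_law_error(data_miles, a=100.0, alpha=0.3):
    """The usual power-law form: error ~ a * N^(-alpha)."""
    return a * data_miles ** (-alpha)

for miles in (1e6, 1e7, 1e8, 1e9):
    print(f"{miles:.0e} miles -> relative error {scaling_law_error(miles):.3f}")
```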

Waymo has also developed their own internal vision-only model, suggesting they are not totally discounting the possibility that a vision-only system will someday be capable of matching or exceeding their performance; they are just currently data-deficient by comparison to Tesla. They can't compete in the same arena as Tesla on data, so they opt for improved sensors. I don't think it's a coincidence that Waymo is scaling their fleet faster now that Tesla is testing driverless operation (it's a bit inaccurate to call what they have in Austin safety drivers; they sit in the passenger seat without access to vehicle controls).
Medaggie
If this video doesn't convince someone that Tsla is so far ahead of everyone else, nothing will. Drop Waymo on these Chinese roads and it would probably never move.

I drove home last night and construction had the lanes blocked except for the shoulder. It took the shoulder without issues.

I will say it is not perfect and there are fringe scenarios that it is not good at, but with their data and AI this will be fixed. No different than a human having to pause when they see something strange; eventually they will figure it out the next time.
LOYAL AG
AG
Medaggie said:

No different than a human having to pause when they see something strange; eventually they will figure it out the next time.


I find myself sitting back in situations where I don't know what the car is going to do, just to see what it figures out. Two come to mind that were amusing.

The first involved a construction zone where the road was technically closed but everyone drove on it. We were leaving Saddle Creek in College Station and there was a barricade in our lane at the entrance to the neighborhood. The car approached the barricade and came to a complete stop for probably 10 seconds. Then it drove around the barricade before returning to the correct lane and continuing the trip. We chuckled at the idea that the car was studying the barricade trying to decide how to proceed.

The second was actually one where the car's decision was much better than the typical human driver's. We were in Fredericksburg and the car was driving to a winery. It was in the left lane and waited too long to get into the right lane to turn into the winery we were going to. It ended up stuck next to a tractor trailer in dense traffic and had no way to move to the right. We both know the typical human stops the entire world until they are able to turn right from the left lane. Instead the car kept going until it could move right, then turned into a gas station that was separated from the winery by an open field. The car then drove to the edge of the field and stopped for probably a minute, where we legit thought it was about to drive across the field to the winery. Instead it turned around, exited the gas station, and drove a five-mile loop of country roads to get past the winery so it could take the turn it missed the first time. It was impressive how it handled this one.
TxAG#2011
My neighbor works for TI and I swear she was telling me they make lidar products for Tesla.

Is it confirmed it only uses cameras?
hph6203
AG
Tesla only uses LiDAR for model validation. They put what looks like a heavy-duty bike rack holding LiDAR on top of some of their vehicles and compare the LiDAR results to the vision model's perception of the depth/shape of objects to make sure it's accurately perceiving the world. The production vehicles only have cameras/microphones/accelerometers/normal vehicle computer analytics.
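A minimal sketch of what that kind of validation amounts to, assuming you have per-point depth estimates from the vision model and matching LiDAR ranges: treat LiDAR as ground truth and score the vision depths against it. The numbers and the 5% threshold are invented; this is the shape of the check, not Tesla's actual tooling.

```python
# Minimal sketch of validating vision depth against LiDAR: treat LiDAR as ground
# truth and report error statistics. Numbers and threshold are invented.
import numpy as np

def depth_validation(vision_depth_m, lidar_depth_m, rel_err_threshold=0.05):
    """Return mean absolute error and the fraction of points within 5% of LiDAR."""
    vision = np.asarray(vision_depth_m, dtype=float)
    lidar = np.asarray(lidar_depth_m, dtype=float)
    abs_err = np.abs(vision - lidar)
    rel_err = abs_err / lidar
    return abs_err.mean(), float((rel_err <= rel_err_threshold).mean())

mae, frac_ok = depth_validation([10.2, 25.5, 49.0], [10.0, 25.0, 50.0])
print(round(mae, 2), frac_ok)  # small error, all points within tolerance
```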

harge57
AG
The lidar system is not a significant cost factor in my opinion and ultimately provides better results. I think at scale the FSD systems in the future will all have a lidar component.

Try defending a FSD accident in court when not using lidar.
LOYAL AG
AG
harge57 said:

The lidar system is not a significant cost factor in my opinion and ultimately provides better results. I think at scale the FSD systems in the future will all have a lidar component.

Try defending a FSD accident in court when not using lidar.


Who knows what tech eventually wins but in your hypothetical lawsuit you're going to have video evidence from all around your Tesla. Not sure how much more persuasive it can get.
harge57
AG
https://share.google/aimode/L8RznhxK2E8KbGtki

It has already been used as an argument in their cases.

Tesla basically argued they should have known it did not have full FSD because it did not have LIDAR.

Tesla Admits in Federal Court that Self-Driving Requires Lidar : r/MVIS https://share.google/5dKX3HpxFEwtUWDFt
harge57
AG
Just to clarify: I am not saying TSLA can't do it, but their approach is definitely a gamble at this point. They have significant regulatory, legal, and public perception hurdles that they will have to overcome, and their vision-only approach makes those hurdles even harder to overcome.
LOYAL AG
AG
harge57 said:

https://share.google/aimode/L8RznhxK2E8KbGtki

It has already been used as an argument in their cases.

Tesla basically argued they should have known it did not have full FSD because it did not have LIDAR.

Tesla Admits in Federal Court that Self-Driving Requires Lidar : r/MVIS https://share.google/5dKX3HpxFEwtUWDFt


Maybe I can't read too good but I think that says someone who believes Lidar is a requirement for self driving bought a Tesla knowing it didn't have Lidar then relied on a system he "knew" was incapable of doing what he wanted it to do. That feels like a gotcha moment rather than an honest lawsuit.

I don't see anything in there indicating Tesla believes Lidar is necessary for full autonomy. There were references to "experts" and they may end up being right in the end. What we know right now is that Tesla's vision only system is way, way, way ahead of everyone else even those using Lidar. Maybe one day a Lidar based system will catch up and we can make a real comparison. Until then Tesla is proving themselves while the Lidar argument is theoretical.

My $.02. Either way it's a fun thing to watch unfold.
Caliber
AG
Based on the learning I've had in industrial systems, computer vision wins out over lidar in complex, moving scenarios. Lidar works for static modeling and more predictable locales.

I would be interested to see the future arguments as AI advances. We use binocular vision backed by reasoning. At some point, AI will be able to simulate that reasoning for 99.9%+ of use cases, which would probably exceed humans' average capability.
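For what it's worth, the "binocular vision" part is essentially stereo triangulation: two cameras a known distance apart recover depth from how far a feature shifts between the two images. A tiny sketch of the standard pinhole-stereo relation, with made-up camera numbers for illustration:

```python
# Stereo triangulation sketch: depth Z = f * B / d, where f is the focal length in
# pixels, B the camera baseline, and d the disparity. Camera numbers are made up.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Z = f * B / d  (depth in meters)
    return focal_px * baseline_m / disparity_px

focal_px = 1400.0    # focal length in pixels (assumed)
baseline_m = 0.12    # distance between the two cameras in meters (assumed)
for disparity in (40.0, 20.0, 10.0):   # fewer pixels of shift = farther away
    print(disparity, "px ->", round(depth_from_disparity(focal_px, baseline_m, disparity), 1), "m")
```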

If you say that AI/computer vision cannot do it, then how can humans? What about the lawsuit where human error hits an FSD car and causes a serious injury? The lawsuits will get interesting at some point...
hph6203
AG
harge57 said:

The lidar system is not a significant cost factor in my opinion and ultimately provides better results. I think at scale the FSD systems in the future will all have a lidar component.

Try defending a FSD accident in court when not using lidar.

That assumes LiDAR will durably provide an improvement over an advanced vision-only system, which is not necessarily true. In the short term Waymo is targeting a vehicle in the ~$80,000 range, Tesla in the <$20,000 range. LiDAR is not the only thing Waymo is relying upon to create their self-driving technology that contributes to its increased expense over Tesla's approach.
AustinScubaAg
AG
Caliber said:

Based on the learning I've had in industrial systems, computer vision wins out over lidar in complex, moving scenarios. Lidar works for static modeling and more predictable locales.

I would be interested to see the future arguments as AI advances. We use binocular vision backed by reasoning. At some point, AI will be able to simulate that reasoning for 99.9%+ of use cases, which would probably exceed humans' average capability.

If you say that AI/computer vision cannot do it, then how can humans? What about the lawsuit where human error hits an FSD car and causes a serious injury? The lawsuits will get interesting at some point...

There are several obvious answers to your questions:

1. You cannot systematically check that AI does the right thing. You can create tests, etc., but it is easy to train AI to pass the test. There is a ton of research going into how you systematically validate an AI algorithm for safety applications.
2. Autonomous vehicles that meet ASIL standards require diverse redundancy for FSD. This means checking the results via at least two different methods. LIDAR or RADAR is used as a checker when detecting the presence of an object, and classical CV as a check for object recognition/classification (see the sketch at the end of this post).
3. AI is only as good as the data it is trained on.
  • AI is being trained to do what a human would do in many cases, including doing things humans are not supposed to do, like turning into a left turn lane to merge into traffic. I have personally witnessed both a Tesla test car and a Waymo test car do this.
  • Continuous training of AI models with data created by AI (the FSD driving data in this case) can result in model collapse. This is a known issue in AI research.
4. Cameras can be obscured for various reasons; non-camera redundancy is there as a backup. This backup does not need to be as capable, but it has to be able to at least pull over in the case of a failure.

It is not that CV + AI can't do the job; it can be the primary control element of FSD, it simply cannot be the only solution.
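A rough sketch of the "diverse redundancy" idea from point 2: a camera detection is only treated as confirmed when an independent sensor corroborates it, and disagreement degrades the system to a safe fallback. The field names and the threshold are invented for illustration, not taken from any ASIL standard text.

```python
# Sketch of a diverse-redundancy cross-check: act on a camera detection only when an
# independent sensor (radar or LiDAR) agrees; otherwise degrade to a safe fallback.
# Field names and the disagreement threshold are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    distance_m: float
    confidence: float

def cross_check(camera: Optional[Detection], radar: Optional[Detection],
                max_range_disagreement_m: float = 2.0) -> str:
    if camera is None and radar is None:
        return "clear"                 # both channels agree nothing is there
    if camera and radar:
        if abs(camera.distance_m - radar.distance_m) <= max_range_disagreement_m:
            return "confirmed_object"  # independent channels agree -> act on it
        return "degrade"               # channels disagree -> slow down / hand back control
    return "degrade"                   # only one channel sees something -> don't trust it alone

print(cross_check(Detection(31.0, 0.92), Detection(30.2, 0.88)))  # confirmed_object
print(cross_check(Detection(31.0, 0.92), None))                   # degrade
```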