Tesla's Neural Behemoth: an AI supercomputer 800x faster than anything ever seen
By Rahul Sonnad
Learn more about features and services available to Tesla Owners through Tesloop’s Carmiq Network.
At Autonomy Investor Day, Elon Musk and his team showed a lot of figures. They talked at length about fleet growth and the performance of the new FSD neural-network chip. However, one figure was missing: the product of those two factors, the total processing power of the fleet.
If you look at TOP500.org, which maintains a list of the 500 most powerful supercomputer systems in the world, you will see a lot of fast machines. One common measure of these systems is PFlops: a quadrillion (thousand trillion) floating-point operations per second.
The fastest supercomputer on the list is Summit, at the Department of Energy’s Oak Ridge National Laboratory in Tennessee. It has a capacity of about 144 PFlops. And if you add up all of the processing power in the Top 500 machines, together they offer about 1,415 PFlops.
In about a year, Tesla will have about 1 million vehicles on the road that support their new “Full Self Driving” chip. About half of these will ship with the chip preinstalled, and the other half, already on the road today, can be upgraded.
Across the fleet, this will give Tesla about 117,000 POPs (peta-operations per second) of processing power, roughly 117 TOPs per car. Now, supercomputer performance is typically measured in floating-point operations, whereas Tesla is only concerned with integer math for autonomy. So let’s assume that FLOPs are 10x more “powerful” than OPs (not an exact conversion, but it should get us within an order of magnitude).
This would mean that Tesla’s combined vehicle network has a couple orders of magnitude more processing power than the world’s fastest supercomputer, and an order of magnitude more than all of the 500 fastest computers combined. Now, when comparing supercomputers, PFlops is not the only thing that matters, but it’s a nice, easy yardstick. Clearly, with the very limited bandwidth available from the cars to the cloud, the application’s architecture must be very different from a typical supercomputer’s, which gives it both advantages and disadvantages.
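The comparison above can be sanity-checked with a quick back-of-envelope calculation. This is only a sketch: the per-car figure of 117 TOPs of usable integer throughput and the 10:1 FLOPs-to-OPs discount are the article’s rough assumptions, not measured values.

```python
# Back-of-envelope fleet-vs-supercomputer comparison, using the
# article's figures. All numbers are rough, order-of-magnitude inputs.

CARS = 1_000_000          # FSD-capable vehicles in ~1 year
TOPS_PER_CAR = 117        # assumed usable integer throughput per car

fleet_pops = CARS * TOPS_PER_CAR / 1000   # fleet total in peta-ops/s

SUMMIT_PFLOPS = 144       # Summit at Oak Ridge, TOP500 #1
TOP500_PFLOPS = 1415      # all 500 TOP500 systems combined

DISCOUNT = 10             # article's assumption: 1 FLOP ~ 10 OPs of "power"
fleet_pflops_equiv = fleet_pops / DISCOUNT

print(fleet_pops)                            # 117,000 POPs fleet-wide
print(fleet_pflops_equiv / SUMMIT_PFLOPS)    # ~81x Summit
print(fleet_pflops_equiv / TOP500_PFLOPS)    # ~8x the whole TOP500 list
```

Even after the 10:1 discount, the fleet comes out roughly two orders of magnitude above Summit and an order of magnitude above the entire TOP500 list, matching the claims above; without the discount, the raw ratio against Summit is the ~800x in the title.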
Tesla’s distributed machine will have some very interesting characteristics.
It will be spread across the world
It will be extremely mobile
It will be connected to an array of sensors (vision, radar, ultrasonic)
It will run the world’s biggest neural network based on deep learning AI that is focused on image recognition
It will continually increase in size (and at an exponential rate for the next few years)
It will be optimized for low power, and increasingly be powered by solar energy
It will automatically and dynamically update itself
The goal of this computing behemoth will be limited to one task: driving vehicles with evolving “artificial intelligence”. This AI will be based on deep learning by neural networks that process billions of miles of visual vehicle data a month, along with other sensor data. The system’s goal is to perfect “self-driving” for cars at a level that no human could hope to achieve.
While there are other distributed applications with massive numbers of “nodes” (bitcoin mining, Xbox, WhatsApp clients, etc.), the Tesla Self Driving behemoth is different in that it is a system that gets better as the number of nodes increases, and over time it will increasingly do so in an automated manner, owing to its growing reliance on neural nets rather than hand-engineered software algorithms.
As Stewart Bowers outlined during his Autonomy Investor Day presentation: “Not only can we look at what is happening around the vehicle, we can look at how humans chose to interact with that environment. We start with a single neural network, detections around it. We then build all that together into multiple neural networks and multiple detections. We then bring in all the other sensors and convert that into what Elon calls the ‘vector space’ — an understanding of the world around us. As we continue to get better and better at this, we’re moving more and more of this logic into the neural networks themselves. And the obvious endgame here is that the neural network looks across all the cars, brings in all the information together, and just ultimately outputs a source of truth for the world around us.”
So effectively, the Tesla Neural Net is creating both a model of the world as it is and a probabilistic model of how various things in this world behave based on what they look like. Consider the case of a pothole versus a bag in the road. If it’s a pothole, the car will see it and notice that it never moves; and if you run over it, your suspension will register some impact. If it’s a bag, it might be moving (and if it’s moving, it’s not a pothole), and if you hit it, whether on the ground or in the air, there is no major impact to the vehicle.
He continues: “We have a neural network running on our wide fisheye camera. That neural network is not making one prediction about the world, it’s making many separate predictions, some of which actually audit each other. […] These together combine to give us an increased sense of what we can and can’t do in front of the vehicle and how to plan for that. […] We can use that both to learn future behaviors that are very accurate, but we can also build very accurate predictions of how things will continue to happen in front of us.”
After Tesla’s autonomous driving deployment, the next biggest “dedicated” and naturally evolving application for a computer system might be NOAA’s recently upgraded weather prediction system, which runs on a pair of supercomputers with a combined 8.4 PFlops and predicts weather for the entire United States. However, while NOAA’s last upgrade added about 50% more power over three years, Tesla’s Self Driving Artificial Intelligence system is evolving on a much steeper curve. As Elon notes: “When things are on an exponential rate of improvement it’s very difficult to wrap one’s mind around it, because we’re used to extrapolating on a linear basis. But when you’ve got massive amounts of hardware on the road, the cumulative data is increasing exponentially and the software is getting better at an exponential rate.”
In 4 years, on top of their new 3x-faster chip architecture now in development, Tesla should have over 5 million cars on the road, with a combined processing power of something close to 2 million POPs (roughly 22,500 times the processing power of the current NOAA pair, after the FLOPs-to-OPs discount).
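The 4-year projection is straight multiplication of the article’s stated assumptions: 5 million cars, a chip assumed to be 3x faster, 117 TOPs per car today, and the 10:1 FLOPs-to-OPs discount. The small gap between this result and the ~22,500x figure comes from rounding in the inputs.

```python
# Projecting the fleet's compute 4 years out, using the article's
# assumptions (all rough): 5M cars, 3x faster next-gen chip, 117 TOPs
# per car today, and 1 FLOP ~ 10 OPs of "power".

cars = 5_000_000
tops_per_car = 117 * 3                     # assumed 3x faster chip
fleet_pops = cars * tops_per_car / 1000    # peta-ops/s, ~"2 million POPs"

NOAA_PFLOPS = 8.4                          # NOAA's upgraded pair
noaa_pops_equiv = NOAA_PFLOPS * 10         # 84 POPs at the 10:1 discount

print(fleet_pops)                          # 1,755,000 POPs
print(fleet_pops / noaa_pops_equiv)        # ~21,000x the NOAA pair
```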
It is also possible that in the next few years, Tesla will be able to use high-speed satellite internet (at least outside of dense urban areas) to provide high bandwidth to the network, which is currently constrained by inconsistent 4G connectivity, opening up even more potential for rapid improvement.
However, in that 4-year timeframe, competition could begin to close in on Tesla. By then, it is very likely that some other large carmakers will have figured out how to update the software in the majority of their new vehicles.