Online Class Notes (Mark)

Today we focused on:

A brief review of last class's notes, then on to the writing homework and reading comprehension.

  • Grammar
      • take out
  • Pronunciation
      • luxury [lug zhery]
      • halted
      • diversify
      • energy [en er jee]
      • creativity [kree ay tiv ity]
  • Vocab
      • Champion

Writing exercise

Self-driving vehicles currently rely on a mix of cameras, radar, and LiDAR to ensure they have all the data they need to safely navigate. However, Tesla intends to rely solely on cameras eventually by using a neural network to achieve vision-only autonomous driving.
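
As a rough illustration of that difference, here is a minimal sketch in Python (with invented field names such as images, radar_points, and lidar_points; nothing here comes from Tesla's actual software) contrasting what a fused sensor frame and a camera-only frame might look like as data structures.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sensor-frame structures, for illustration only.
# Field names are assumptions, not anything from a real autonomy stack.

@dataclass
class FusedFrame:
    images: List[bytes]        # raw frames from several cameras
    radar_points: List[tuple]  # e.g. (range_m, azimuth_rad, velocity_mps)
    lidar_points: List[tuple]  # e.g. (x, y, z) returns from a LiDAR sweep

@dataclass
class VisionOnlyFrame:
    images: List[bytes]        # the same cameras, but nothing else:
                               # depth and velocity must be inferred by the
                               # neural network from pixels alone
```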

Such a system is highly desirable for a number of reasons. The most obvious is the fact that it cuts down on the amount of technology required per vehicle, which reduces both cost and weight. And as Tesla CEO Elon Musk pointed out back in April on Twitter, “Vision has much more precision, so better to double down on vision than do sensor fusion.”

Using vision-only requires a lot of training, however, and that’s where Dojo comes in. As TechCrunch reports, Dojo is a neural network training computer Tesla intends to use to process the “vast amounts of video data” required to train such a self-driving system. The problem is, Dojo doesn’t exist yet, but Tesla just revealed the supercomputer it intends to use as Dojo’s prototype. According to Andrej Karpathy, director of AI at Tesla, the supercomputer consists of 5,760 GPUs offering 1.8 EFLOPS (exaFLOPS) of performance and supported by 10 petabytes of NVMe storage with a connection speed of 1.6 TBps.
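
To put those numbers in perspective, here is a quick back-of-the-envelope check in Python, assuming decimal SI prefixes for exa, peta, and tera: it divides the stated 1.8 EFLOPS across the 5,760 GPUs and estimates how long streaming the full 10 petabytes would take at 1.6 TBps.

```python
# Rough check of the quoted specs (decimal SI units assumed).
total_flops = 1.8e18      # 1.8 exaFLOPS
num_gpus = 5760
per_gpu_tflops = total_flops / num_gpus / 1e12
print(f"~{per_gpu_tflops:.1f} TFLOPS per GPU")  # ~312.5 TFLOPS

storage_bytes = 10e15     # 10 petabytes
bandwidth_bps = 1.6e12    # 1.6 terabytes per second
hours = storage_bytes / bandwidth_bps / 3600
print(f"~{hours:.1f} hours to stream the full 10 PB")  # ~1.7 hours
```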

During a workshop talk on Autonomous Driving at CVPR 2021, Karpathy explained how the LiDAR approach relies on an HD map being created beforehand and then localizing to that map when driving around. Tesla’s approach does everything locally, relying on the video feed from eight cameras mounted on the vehicle. Karpathy says it’s the much more difficult approach, but it’s also much more scalable than the LiDAR + HD map alternative because you simply can’t update the map data quickly enough.
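
As a loose toy of the contrast Karpathy draws (every function name, field, and threshold below is invented for illustration and does not correspond to Tesla's or any LiDAR vendor's real code), the sketch shows how the map-based loop depends on the map staying fresh, while the vision-only loop works from the on-board cameras alone.

```python
# Schematic, self-contained toy contrasting the two approaches.
# All names and numbers are made up for illustration only.

def lidar_hd_map_step(lidar_sweep: list, hd_map: dict) -> str:
    # The car localizes its LiDAR returns against a pre-built HD map,
    # so that map must already exist and be current for this road.
    if hd_map.get("last_updated_days_ago", 999) > 30:
        return "map is stale: fall back or disengage"
    return "follow the lane geometry stored in the map"

def vision_only_step(camera_images: list) -> str:
    # Everything is inferred on the car from the eight camera feeds;
    # there is no external map that has to be kept in sync.
    if len(camera_images) < 8:
        return "camera dropout: degrade gracefully"
    return "drive on lanes predicted by the neural network"

print(lidar_hd_map_step(lidar_sweep=[], hd_map={"last_updated_days_ago": 90}))
print(vision_only_step(camera_images=["frame"] * 8))
```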

Tesla’s solution is already advanced enough that the cameras do most of the heavy lifting, and Karpathy confirmed cars started shipping three weeks ago without radar. The workshop video (start watching at the eight-hour mark) shows you footage of the eight cameras Tesla relies on for the autonomous system. Now all Tesla needs to do is record lots of driving videos, store petabytes of data, and train its system to be competent and safe enough for all vehicles.

Summary
Most of the carmakers set up a self-driving function based on a complex system include cameras, radar, and LiDAR. Tesla intends to use cameras to grab the vision the need, and send them to network which will calculate the useful result in a supercomputer and feedback to the car.

The advantage of this method is that the structure of the vehicle is relatively simple. Network and supercomputer do the heavy job. So, cars can be designed simple which are easy to build and update. CEO Elon Musk believe that vision has much more precision like the human visual system better than vision-sensor hybrid system.

The only question is how to make this vision smart enough to identify useful information and learn itself. Tesla need a special AI system on the supercomputer. This computer, for example, has the capacity for supporting by 10 petabytes of NVMe storage with a connection speed of 1.6 TBps.
A new car without radar has been sent to market since three weeks ago.

Corrected Summary

Most of the carmakers set up a self-driving function based on a complex system which includes cameras, radar, and LiDAR. Tesla intends to use cameras to grab the vision the supercomputer needs and send it to the network, which will calculate the useful result in that supercomputer and feed it back to the car.

The advantage of this method is that the structure of the vehicle is relatively simple. The network and the supercomputer do the heavy job, so cars can be designed simply and would be easy to build and update. CEO Elon Musk believes that vision, like the human visual system, has much more precision than the vision-sensor hybrid system.

The only question is how to make this vision smart enough to identify useful information and learn by itself. Tesla needs a special AI system in the supercomputer. This computer, for example, is supported by 10 petabytes of NVMe storage with a connection speed of 1.6 TBps.
Cars without radar started shipping to the market three weeks ago.

Grammar


“that’s where ___ comes in” – that’s where ___ is going to help

Pronunciation


rely – [ree lie]
neural – [noo rul]
obvious – [ob vee us]
revealed – [ree veeld]
petabytes – [pet a bites]
autonomous – [aw tahn a muss]
scalable – [skayl a bull]

Vocabulary


LiDAR – light detection and ranging
precision – exactness