The technologies and capabilities of today's self-driving cars are only part of the overall vision for the transport of the future. Before the world can fully move to autonomous transport, engineers and scientists have a great deal of work to do.
The main challenge they are tackling today is teaching self-driving cars to cope with emergency situations.
Self-driving cars can already travel with little or no supervision on well-maintained roads in regions where harsh weather is ruled out. But while a route can be mapped out to the smallest detail, driving conditions cannot.
An autonomous vehicle must accurately identify pedestrians and cyclists on the road, account for surrounding traffic, distinguish puddles and stains from potholes, negotiate obstacles, and keep moving in rain, snow, and fog. All of this is possible only if the car carries a system that collects data about its environment in real time and makes decisions based on that data.
Such technologies already exist.
Light Detection and Ranging (LiDAR) systems for cars have been developed by Velodyne LiDAR, based in Silicon Valley. The idea is to equip the car with rotating lasers that emit short pulses of infrared light, measure how long each pulse takes to return, and use that data to build a detailed 3D map of the environment.
Velodyne LiDAR is used by Google. Before Google's cars set off on their own, engineers drive the route several times to collect the necessary data. This approach is effective but time-consuming. To make it practical at scale, a full 3D map of the entire road infrastructure is required. That is a tremendous job, and it is exactly what Google employees are working on today.
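The time-of-flight principle behind LiDAR can be sketched in a few lines: a pulse's round-trip time gives the distance (distance = c · t / 2), and the beam's known angles place the echo as a 3D point. The helper below is a hypothetical illustration, not Velodyne's actual processing pipeline.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lidar_point(return_time_s, azimuth_deg, elevation_deg):
    """Convert one pulse's round-trip time and beam angles to a 3D point.

    Illustrative only: distance = c * t / 2, since the pulse travels
    out to the target and back; the angles then fix the direction.
    """
    r = C * return_time_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A pulse returning after ~200 nanoseconds means a target ~30 m away.
x, y, z = lidar_point(200e-9, azimuth_deg=0.0, elevation_deg=0.0)
print(round(x, 2))  # -> 29.98
```

A rotating 64-laser unit repeats this calculation millions of times per second, which is how the dense 3D point cloud of the surroundings is assembled.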
A three-dimensional map is an essential piece of infrastructure; without it, driverless cars are hard to imagine. Several major carmakers, such as Ford and Audi, have integrated LiDAR into their experimental self-driving cars. Not everyone, however, sees LiDAR as a promising technology: Tesla CEO Elon Musk takes a different approach.
Safe movement in constantly changing conditions can be ensured by recognizing the images that fall into a camera's field of view. This is the only way to capture variable cues such as traffic lights, turn signals, road signs, and anything else that cannot be entered into a general database.
Pattern recognition can be divided into two categories: machine vision and computer vision. Machine vision is somewhat easier to implement. It picks out objects by their edges and corners, detects motion and its direction, and estimates distance. All it needs is several high-resolution cameras installed in the car.
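The edge-based object detection just described can be illustrated with a classic Sobel gradient filter: edges appear wherever brightness changes sharply. This is a minimal sketch in plain NumPy, assuming a grayscale image as a 2D array, not a production machine-vision pipeline.

```python
import numpy as np

def sobel_edges(img):
    """Return the gradient magnitude of a grayscale image.

    Minimal machine-vision sketch: large gradient magnitudes mark
    edges, which is enough to outline objects by their contours.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# A vertical brightness step yields a strong response along the boundary
# and zero response in the flat regions on either side.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
edges = sobel_edges(img)
```

Real systems use optimized convolution routines and further steps (thresholding, contour grouping), but the underlying signal is the same gradient information.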
Computer vision is a more complex undertaking. The task is not only to recognize the objects in a camera's view, but also to understand what each object is doing and to predict its future actions. So far, pattern-recognition software does not reach the level of accuracy an autonomous vehicle requires.
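The "predict its future actions" step can be illustrated with the simplest possible motion model: assume an object keeps its velocity between frames. This toy constant-velocity predictor is my own sketch; real systems use far richer models, such as Kalman filters or learned trajectory predictors.

```python
def predict_next(positions, dt=1.0):
    """Predict an object's next position from its last two observations.

    Toy constant-velocity model: estimate velocity from the last two
    tracked positions and extrapolate one time step dt ahead.
    """
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx = (x1 - x0) / dt
    vy = (y1 - y0) / dt
    return (x1 + vx * dt, y1 + vy * dt)

# A cyclist tracked at (0, 0) and then (2, 1) is expected near (4, 2).
print(predict_next([(0, 0), (2, 1)]))  # -> (4, 2)
```

Even this crude extrapolation shows why tracking matters: a single frame tells you where a cyclist is, but only a sequence of frames tells you where they are going.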
LiDAR vs. Computer Vision
A key advantage of LiDAR is that it is independent of ambient lighting, so it is effective in virtually any environment.
On the other hand, LiDAR systems are very expensive. For example, a 64-laser LiDAR costs about $70,000, roughly the price of a new business-class car. Equipping a car with high-quality image-recognition cameras, by contrast, is about ten times cheaper.
It seems that all the technologies needed to create a safe self-driving car already exist, and for the most part that is true. Arguably, the ideal driverless system would combine LiDAR and computer vision: despite Musk's skepticism, the former is excellent for surveying the overall environment, while computer vision fills in the fine detail.
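One common way to combine the two sensors is to project LiDAR's 3D points onto the camera image, so each pixel can be paired with a measured depth. Below is a hedged sketch using a standard pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) are made-up example values, not those of any real sensor rig.

```python
def project_to_image(point_xyz, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Project a 3D point (camera coordinates, z pointing forward)
    onto the image plane with a pinhole camera model.

    Fusion sketch: once LiDAR points land on camera pixels, geometric
    depth can be combined with the camera's appearance information.
    Intrinsics here are illustrative, not calibrated values.
    """
    x, y, z = point_xyz
    if z <= 0:
        return None  # point is behind the camera, not visible
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A point 10 m ahead and 1 m to the right lands right of image center.
print(project_to_image((1.0, 0.0, 10.0)))  # -> (740.0, 360.0)
```

In a real fusion stack this projection uses a calibrated extrinsic transform between the LiDAR and camera frames, but the geometric idea is the same.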
Topic of the article: How do self-driving cars “see” the road?