Ford tests Fusion Hybrid autonomous research vehicles at night, in complete darkness, as part of LiDAR sensor development – demonstrating the capability to perform beyond the limits of human drivers.

Autonomous logistics: sit back and enjoy the ride? Part 2: core technology


In this new series of articles, we will be exploring the past successes and failures, the present state of the art and the future potential of driving automation in logistics. We will look at the technologies, organisations and people driving the momentum in autonomous vehicle development.

This part covers the core technology of autonomous driving. We will discuss how autonomous vehicles sense and navigate the surrounding world, and which technologies help them make decisions on their own and react quickly to dynamic situations.

Driver Assistance vs Autonomy

Since we are still in the early development stages of autonomous vehicles, the use of safety drivers as a fallback is commonplace and often blurs the thin line between autonomous driving and advanced driver assistance systems (ADAS). There is also very little clarity in autonomous driving terminology at the present stage, with stakeholders like the Society of Automotive Engineers (SAE International) promoting a concept of phased automation and defining levels from SAE Level 0 (no automation) to SAE Level 5 (full vehicle autonomy).

For simplicity, we must acknowledge that in a true autonomous driving situation there is no human fallback: the vehicle itself is in control at all times, even in case of an emergency. Staying true to this, a number of autonomous cars and trucks in development exclude the option of manual control; their manufacturers remove steering wheels or driver cabs entirely.

Key Tasks of Autonomous Driving

To successfully fulfil its function, an autonomous vehicle has to carry out several essential tasks simultaneously: localisation, navigation, decision making and control.

Localisation describes the use of various types of sensors and other data inputs (such as global positioning system data and high definition maps) to determine the vehicle's precise current location in the surrounding world.

Navigation is the process of working out how to travel from point A to point B via appropriate and legal paths, using the most efficient route, and reacting to changing traffic conditions.

Decision making describes the software equivalent of predicting the outcomes of driving situations and applying a driver's knowledge and experience to react adequately and resolve these situations safely. Such tasks require significant computing power and drive the development of specialised autonomous driving chipsets.

Control relates to all the outputs that the decision-making process produces: acceleration and deceleration, motion stability, and adaptive driving dynamics such as active suspension and adaptive power/torque distribution to axles and individual wheels.
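To see how these four tasks fit together, here is a minimal, deliberately toy control loop. Every function, class and value in it is an illustrative placeholder, not part of any real autonomy stack:

```python
# A toy loop showing how localisation, navigation, decision making and
# control interlock. All names here are illustrative placeholders.

import time

def read_sensors():
    # In a real vehicle this would poll LIDAR, radar, cameras, GPS and IMU.
    return {"gps": (52.52, 13.40), "obstacles": []}

def localise(sensor_data):
    # Fuse GPS, IMU and map data into a precise pose estimate.
    return {"position": sensor_data["gps"], "heading": 0.0}

def plan(pose, destination):
    # Choose a legal, efficient path and adapt it to current traffic.
    return {"target_speed": 15.0, "target_heading": 0.0}

def decide_and_control(plan_step, sensor_data):
    # Translate the plan into actuator commands, yielding to obstacles.
    if sensor_data["obstacles"]:
        return {"throttle": 0.0, "brake": 1.0, "steering": 0.0}
    return {"throttle": 0.4, "brake": 0.0,
            "steering": plan_step["target_heading"]}

destination = (52.53, 13.41)
for _ in range(3):                  # a real loop runs many times per second
    sensors = read_sensors()
    pose = localise(sensors)
    step = plan(pose, destination)
    commands = decide_and_control(step, sensors)
    print(commands)
    time.sleep(0.1)
```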

Sensor and Imaging Technology

Sensors are the tools that autonomous vehicles use to “see”: to collect real-time data from the outside world and feed it into the essential tasks above. As each technology performs best in certain environments and conditions, it is important to combine different sensor types into complex solutions, an approach known as sensor fusion, to ensure the safety of autonomous driving.
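A minimal sketch of the idea behind sensor fusion: two noisy range readings of the same obstacle are combined by inverse-variance weighting, so the less noisy sensor gets more say. The variances below are invented for illustration, not real sensor specifications:

```python
# Illustrative only: fusing a radar range and a LIDAR range of the same
# obstacle. The variance values are made-up numbers.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two estimates; the less noisy sensor gets more weight."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

radar_range, radar_var = 49.2, 0.5   # radar: noisier range estimate
lidar_range, lidar_var = 50.1, 0.05  # LIDAR: precise range in clear weather

distance, variance = fuse(radar_range, radar_var, lidar_range, lidar_var)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```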

LIDAR is probably the most ubiquitous sensing technology in the world of autonomous vehicles. It stands for ‘Laser Imaging Detection and Ranging’. LIDAR sensors are commonly used for precise medium-range mapping, object detection and identification. Unlike most camera-based sensing technologies, LIDAR operates perfectly in pitch darkness. However, it struggles in inclement weather such as rain, fog or snow, as well as in the presence of airborne particles like dust or smoke.

There is currently another drawback to LIDARs: they are very expensive, some reaching $75,000 per unit. Long-time manufacturers of LIDAR technology such as Sick AG, mavericks like Velodyne, whose history we covered in the previous article, and newcomers such as Luminar and Google’s Waymo are all hard at work to improve the technology and to lower its cost for users. There are many types of LIDAR sensors in development. The more common scanning LIDARs employ moving arrays of dozens to hundreds of lasers to quickly scan the environment around the vehicle. Solid-state LIDARs like the one built by Innoviz Technologies of Israel provide a less costly solution at the expense of resolution.
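The raw output of a scanning LIDAR is essentially a list of bearings and measured ranges. A minimal sketch of turning one such sweep into 2D points around the vehicle; the scan values are fabricated for illustration:

```python
# Converting one revolution of (bearing, range) LIDAR returns into
# Cartesian points in the vehicle frame. Fabricated example scan.

import math

scan = [(math.radians(a), 10.0 + 0.5 * math.sin(math.radians(a)))
        for a in range(0, 360, 10)]    # (bearing rad, range in metres)

points = [(r * math.cos(theta), r * math.sin(theta)) for theta, r in scan]

for x, y in points[:3]:
    print(f"obstacle point at x={x:.2f} m, y={y:.2f} m")
```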

RADAR is another long-established technology used heavily in autonomous vehicle development. Radars are efficient at measuring the speed of moving objects very quickly, even at distances of several hundred metres. This is useful when an autonomous vehicle approaches an intersection, for example. However, radars produce a lot of interference, and communication authorities legally restrict the power of radar units. Due to these power restrictions, long-range radars are limited to a very narrow field of view. To work around the limitations, developers employ arrays of short- and long-range radars, as well as articulated radars that move and track objects.
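Radar measures speed through the Doppler effect: a moving target shifts the frequency of the reflected signal, and the shift maps directly to relative speed. A sketch of the standard relation, using the common 77 GHz automotive radar band and an invented echo shift:

```python
# Relative speed from Doppler shift: v = f_d * c / (2 * f0).
# 77 GHz matches common automotive radar; the shift is an example value.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # automotive radar carrier frequency, Hz

def radial_speed(doppler_shift_hz):
    """Radial (closing) speed of the target implied by the echo shift."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

shift = 5133.0  # Hz, example echo shift
print(f"target closing at {radial_speed(shift):.1f} m/s")  # ~10 m/s
```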

Time of Flight cameras are ranging cameras that calculate distance by measuring the round-trip time of an artificial light signal produced by a laser or an LED. Scannerless LIDAR is an example of a more expensive type of time-of-flight sensor.
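The underlying arithmetic is simple: the light travels to the object and back, so the distance is half the round trip. The timing value below is an example:

```python
# The core time-of-flight calculation. Example timing value.

C = 3.0e8  # speed of light, m/s

def range_from_round_trip(seconds):
    # Half the round trip: out to the object, then back to the sensor.
    return C * seconds / 2.0

echo = 66.7e-9  # a round trip of ~66.7 nanoseconds
print(f"object is roughly {range_from_round_trip(echo):.1f} m away")  # ~10 m
```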

Computer Vision uses a multitude of cameras to capture views of the surrounding world, and software algorithms to detect and identify people and objects such as traffic lights and their state. It commonly uses regular visible-light high definition cameras as well as stereo camera setups for depth perception, and infrared and thermal imaging cameras for night vision. Due to limitations of computing power and memory storage, the collected images are often heavily compressed, and their analysis requires complex algorithms to improve clarity and quality. Computer vision technologies are relatively cheaper than other sensing technologies and are seen by many as the cornerstone of autonomous driving.
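Stereo depth perception rests on a simple geometric fact: a feature seen by both cameras shifts horizontally between the two images (the disparity), and depth follows from similar triangles as depth = focal length × baseline / disparity. A sketch with invented camera parameters:

```python
# Depth from a stereo pair: Z = f * B / disparity.
# The camera parameters below are invented for illustration.

FOCAL_PX = 800.0    # focal length expressed in pixels
BASELINE_M = 0.12   # distance between the two cameras, metres

def depth_from_disparity(disparity_px):
    if disparity_px <= 0:
        return float("inf")   # no shift: feature effectively at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

for d in (48.0, 12.0, 3.0):
    print(f"disparity {d:>4.1f} px -> depth {depth_from_disparity(d):.1f} m")
```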

Ultrasonic Sensors, normally used in car parking assistance systems, are employed by autonomous vehicles for close-range obstacle detection.

Navigation

Global positioning system data is highly inaccurate in its civilian applications, and on its own is inadequate for autonomous driving tasks. The margin of error in civilian GPS navigation varies from five metres to hundreds of metres. It also requires a view of the open sky to connect to the satellites and thus is not available in tunnels and unreliable in heavily built-up locations. GPS satellites are also supported by an expensive infrastructure of ground stations around the world. Autonomous vehicle developers supplement GPS data with inertial measurement units (IMU) and odometry to calculate and record the precise kinetic track of the vehicle's movement and increase accuracy.
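Between GPS fixes, the vehicle can dead-reckon: integrating wheel odometry speed and the IMU's yaw rate propagates the pose forward in time. A miniature sketch with invented values:

```python
# Dead reckoning in miniature: integrating wheel speed and gyro yaw rate
# to track pose between (or without) GPS fixes. Invented sample values.

import math

x, y, heading = 0.0, 0.0, 0.0   # start pose: metres, metres, radians
DT = 0.1                        # update interval, seconds

# (speed m/s from wheel odometry, yaw rate rad/s from the IMU gyro)
samples = [(10.0, 0.0)] * 10 + [(10.0, 0.2)] * 10

for speed, yaw_rate in samples:
    x += speed * math.cos(heading) * DT
    y += speed * math.sin(heading) * DT
    heading += yaw_rate * DT

print(f"pose after 2 s: x={x:.1f} m, y={y:.1f} m, "
      f"heading={math.degrees(heading):.1f} deg")
```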

The need for increased accuracy in localisation and mapping for autonomous driving has led to a recent boom in the high definition mapping industry. Navigation companies Here Technologies, DeepMap and TomTom, the mapping division of Chinese giant Baidu, and autonomous driving players like Lyft all dedicate enormous resources to producing 3D maps with an accuracy of 2-3 centimetres or better.

Many autonomous driving developers apply a technique called Simultaneous Localisation and Mapping, or SLAM. It allows them to construct or update a map of an unknown environment while simultaneously keeping track of the autonomous vehicle's location within it during real-world testing.
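SLAM is a chicken-and-egg problem: the map is built from an uncertain pose, while the pose is corrected against the map. The toy below (one dimension, no uncertainty modelling, invented measurements) captures only that flavour; real SLAM systems are vastly more sophisticated:

```python
# Toy SLAM flavour: dead-reckon the pose, map new landmarks, and when a
# known landmark is re-observed, split the disagreement between pose and
# map. A drastic simplification of real SLAM.

landmarks = {}    # landmark id -> estimated position in the world frame
pose_x = 0.0      # 1D pose for brevity

# (odometry step in metres, observed landmark id, measured range to it)
log = [(1.0, "tree", 9.1), (1.0, "tree", 8.0), (1.0, "tree", 7.2)]

for step, lm_id, measured_range in log:
    pose_x += step                                   # predict from odometry
    if lm_id not in landmarks:
        landmarks[lm_id] = pose_x + measured_range   # map a new landmark
    else:
        expected = landmarks[lm_id] - pose_x
        error = measured_range - expected
        pose_x -= error / 2          # nudge the pose estimate...
        landmarks[lm_id] += error / 2  # ...and the map toward agreement
    rounded = {k: round(v, 2) for k, v in landmarks.items()}
    print(f"pose={pose_x:.2f}  map={rounded}")
```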

Decision Making and Motion Planning

Technologies that facilitate decision-making are often casually referred to as Artificial Intelligence, or AI. However, it is more productive to think of these as very advanced, complex, interconnected and self-improving computing systems and algorithms. The main techniques for developing these systems are Machine Learning and Deep Learning. Both are generally seen as subsets of AI.

These two techniques lay the groundwork for the deployment of complex computing systems known as Artificial Neural Networks. These artificial networks are inspired by, and serve the same purpose as, the biological neural networks constituting the human brain. They can learn to complete tasks by considering examples, generally without task-specific programming or guidance from a human. This autonomy makes artificial neural networks very handy when one has to parse an extraordinarily large amount of data, such as hundreds of thousands of recorded driving situations. You may have already used a neural network, for example by looking up an image on a search engine.

Multilayer versions of these neural networks, Deep Learning Networks, and in particular Convolutional Neural Networks, are specifically used for autonomous driving learning and decision-making development because of their internal hierarchy: it resembles the organisation of visual nervous systems.
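To make that concrete, here is a deliberately tiny convolutional network in PyTorch, of the kind used for camera-based perception. The architecture and the three classes (say, red / amber / green traffic lights) are invented for the sketch; production driving networks are far larger:

```python
# A minimal CNN classifying a small camera crop into three illustrative
# classes. Invented architecture, purely for illustration.

import torch
import torch.nn as nn

class TinyTrafficLightNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # shapes and colours
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = TinyTrafficLightNet()
dummy_image = torch.randn(1, 3, 64, 64)   # one fake 64x64 RGB camera crop
print(net(dummy_image).shape)             # -> torch.Size([1, 3])
```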

Artificial Neural Networks build up knowledge of autonomous driving just as a human driver builds up experience over time. The huge difference is that knowledge accumulated by an artificial network and stored on a computer system can propagate through the entire fleet of autonomous vehicles near instantly, akin to downloading a new software patch. As soon as one vehicle in the fleet learns something, all vehicles can use that knowledge and experience.

Another factor is that this learning process can run just as easily across multiple instances of a virtual environment (and thus faster) as it does on real roads. Many companies, like London-based FiveAI, specialise in developing road simulations that are realistic in every detail, allowing them to prototype and train their autonomous driving algorithms around the clock, at low cost and in a lower-risk, controlled environment that is as good as the real thing.

Machine learning requires significant computing power. Many companies, from computing and electronics giants like Nvidia and LG to car manufacturers like Tesla, focus on developing and manufacturing specialised chipsets for autonomous driving. Complex solutions for instant control minimise the reaction time of autonomous vehicles to unpredictable changes in driving situations. Established car safety technology manufacturers like Continental and startups like Realtime Robotics aim to produce ultrafast motion-planning chips that analyse thousands of possible vehicle trajectories simultaneously, in a matter of milliseconds.
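The flavour of computation those chips accelerate looks something like the toy planner below: sample many candidate trajectories, discard those that would collide, and score the rest. Every number and the scoring rule are invented for illustration:

```python
# Toy sampling-based motion planner: roll out constant-curvature arcs,
# reject colliding ones, pick the best-scoring survivor. Invented values.

import math

OBSTACLE = (12.0, 1.0)   # an obstacle 12 m ahead, 1 m to the left
SAFETY_RADIUS = 1.5      # metres

def rollout(curvature, speed=10.0, steps=20, dt=0.1):
    """Roll a constant-curvature arc forward; return its (x, y) points."""
    x = y = heading = 0.0
    points = []
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += curvature * speed * dt
        points.append((x, y))
    return points

def collides(points):
    return any(math.dist(p, OBSTACLE) < SAFETY_RADIUS for p in points)

def score(curvature, points):
    # Prefer gentle steering and ending up far down the road.
    return points[-1][0] - 5.0 * abs(curvature)

candidates = [c / 100.0 for c in range(-10, 11)]   # curvatures in 1/m
trajectories = [(c, rollout(c)) for c in candidates]
safe = [(c, pts) for c, pts in trajectories if not collides(pts)]
best, _ = max(safe, key=lambda cp: score(cp[0], cp[1]))
print(f"chosen curvature: {best:+.2f} 1/m")
```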

Driving Control Input Systems

Most of the road vehicles produced today use drive-by-wire systems, where the steering wheel or accelerator pedal is not physically connected to the front axle or the throttle of the car. Autonomous vehicles are no different. In fact, the widespread use of drive-by-wire is what made the development of autonomous systems practical. Replacing analogue controls with end-to-end digital systems improves the accuracy of steering and acceleration inputs.
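With drive-by-wire, "press the accelerator" is just a number on a data bus. A toy proportional-integral (PI) speed controller turning a target speed into a normalised throttle command; the gains and the crude vehicle response model are invented:

```python
# A toy PI speed controller producing a 0..1 throttle command, the kind
# of digital signal a drive-by-wire system consumes. Invented gains.

KP, KI = 0.08, 0.02   # illustrative controller gains
DT = 0.1              # control period, seconds

def make_controller():
    integral = 0.0
    def control(target_speed, current_speed):
        nonlocal integral
        error = target_speed - current_speed
        integral += error * DT
        command = KP * error + KI * integral
        return max(0.0, min(1.0, command))   # clamp to the 0..1 range
    return control

throttle = make_controller()
speed = 0.0
for _ in range(5):
    cmd = throttle(target_speed=15.0, current_speed=speed)
    speed += cmd * 3.0 * DT    # crude, invented vehicle response model
    print(f"throttle={cmd:.2f} -> speed={speed:.2f} m/s")
```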

Similarly, the continued development of advanced stability control systems and in-hub electric motors allows autonomous vehicles to use adaptive power and torque distribution algorithms, where each wheel of the vehicle can revolve at an exact number of rotations per minute, achieving extremely precise control.
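A simple illustration of why per-wheel speed control matters: in a turn, the outer wheels travel a longer arc than the inner ones, so each must spin at its own rate. The geometry is simplified (rigid axle, flat ground) and the dimensions are invented:

```python
# Per-side wheel RPM in a turn, from basic turning-circle kinematics.
# Simplified geometry; track width and wheel radius are invented.

import math

TRACK_WIDTH = 1.6    # metres between left and right wheels
WHEEL_RADIUS = 0.35  # metres

def to_rpm(v):
    return v / (2 * math.pi * WHEEL_RADIUS) * 60

def wheel_rpms(speed, turn_radius):
    """Inner and outer wheel RPM around a turn of the given radius."""
    yaw_rate = speed / turn_radius   # rad/s around the turn centre
    v_inner = yaw_rate * (turn_radius - TRACK_WIDTH / 2)
    v_outer = yaw_rate * (turn_radius + TRACK_WIDTH / 2)
    return to_rpm(v_inner), to_rpm(v_outer)

inner, outer = wheel_rpms(speed=10.0, turn_radius=20.0)
print(f"inner wheels: {inner:.0f} rpm, outer wheels: {outer:.0f} rpm")
```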

Did you enjoy this article? Read the first part of the series, A Trip Down Memory Lane.

Photo: Ford
