Senior C++ Software Engineer
January 27, 2020
 

Last Mile Delivery Robots


 

An increasing number of large international companies and startups are competing to develop robotic mobile units capable of delivering small goods, post or groceries autonomously and quickly. These units, called Last Mile Delivery (LMD) robots, come in different sizes, shapes and modes of propulsion: flying drones, unmanned ground vehicles, legged robots or even humanoids. They are developed to handle the last logistic step of the delivery process; that is, to navigate the final few kilometers to the end customer. They can be deployed from a static hub, such as a local shop, or from a mobile hub, such as a van that carries the robot units and releases them outside the congested residential area, and they can operate during both day and night. The main reason behind this global investment trend is that over 50% of the total logistic delivery cost of goods can be attributed to the LMD process. As a report from Honeywell indicates, this cost is compounded by several practical issues, including hard-to-find locations, invalid or incorrect recipient addresses, lack of nearby parking for the delivery truck and congestion in urban areas. Surprisingly, more than 30% of parcels do not reach their destination; some are returned to the warehouse or shop, whilst others are not delivered due to damage incurred during transportation, theft or simply because the customer was not available for collection.

Even excluding the costs of hauling, sorting and collecting the goods, and considering only the process of moving parcels to end customers, the LMD cost is currently estimated to amount to over €70 billion annually. This is expected to increase with e-commerce and direct-to-consumer shipments growing every year. As a result, third-party logistics (3PL) and e-tailers like Amazon are pouring millions into perfecting the LMD process. It is an area that has seen little technical progress in the past decades. However, a report from McKinsey predicts that 80% of LMD vehicles will be autonomous by 2025. Reduced logistics costs, faster deliveries and happier customers will result in higher margins for 3PL and e-tailer companies. These gains will be most prominent in large and densely populated areas.  

“The small delivery robots can be deployed from a moving hub or from local shops”

Besides costs, there are several other compelling arguments that motivate the development of such robots. These include: 

Security & Convenience – the robots can be equipped with a secure log-in system (e.g. temporary authentication code) that grants access only to the correct recipient. The customer will be able to select the most convenient delivery time and have access to a tracking system that monitors the position of the delivery unit in real time.

Speed & Efficiency – the ability to avoid traffic by sharing sidewalks with pedestrians or using bike lanes allows LMD Robots to shorten the path to the destination and reduce delivery time. Customers may also select specific delivery times (even outside normal working hours). This reduces the likelihood of a failed delivery attempt, which would result in the package being returned to the distribution hub.

Environment – LMD Robots tend to be small. This means they require considerably less energy than larger delivery vans. They do not use fossil fuels for power, so their CO2 footprint is significantly lower than that of today’s delivery vehicles. They can also avoid traffic by using bike lanes or the sidewalk, and can optimize the path to the destination. Governments are beginning to introduce environmental legislation which is likely to include tariffs on harmful emissions. This will weigh on logistics costs, but LMD Robots are likely to be exempt.

Quality of delivery – products that require specialized transportation, such as food or delicate items, will benefit from the introduction of customizable delivery units. LMD Robots are small and unmanned and can be equipped with ad-hoc storage compartments (e.g. cooled or padded containers) which increase the probability that items are maintained in good condition during transit. LMD Robots provide retailers with an opportunity to improve customer service and differentiate themselves from their competition.

Development challenges

One of the main challenges in the development of LMD Robots is the design of robust AI-based perception components. Unmanned ground robots, in particular, face similar problems as self-driving cars. They need to localize themselves relative to their surroundings with high accuracy and robustness; they need to detect lane markings, pedestrians and other vehicles; and they need to drive safely. Motion planning and vision-based perception are the most difficult aspects of such technology.

Some companies are already successfully making active deliveries using autonomous mobile robots. However, their operations are limited to small, controlled environments (e.g. school campuses), and only on specific roads, in daytime and in mild weather conditions. Robots that ferry parcels from urban distribution points must be capable of dealing with environments that are significantly more complex.

“Unmanned ground robots face similar
problems as self-driving cars”

In a more realistic scenario, an LMD Robot needs to traverse a large and possibly unexplored area in order to reach a customer. The robot starts from a known position in a city, such as a local shop, and computes a series of waypoints which define the optimal route to the intended recipient(s). Once the path is computed, the robot needs to navigate safely through this set of waypoints. To achieve this, the robot needs an accurate and reliable estimate of its position at all times. Positioning is one of the core components of the navigation pipeline of any autonomous mobile robot. In large outdoor areas, positioning is often done using classic global or local tracking sensors such as GPS or wheel odometers. However, these sensing modalities do not provide a reliable estimate of the robot's absolute position in urban environments. The localization accuracy of GPS is often poor because tall buildings in the robot's surroundings occlude parts of the sky, a phenomenon known as signal shadowing. Wheel odometers cannot measure wheel slip, so the derived position can drift significantly over time.
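To see why wheel odometry alone drifts, consider the standard dead-reckoning update for a differential-drive robot. The sketch below is illustrative only (the struct and function names are assumptions, not any vendor's code): any per-step error in the measured wheel distances, such as slip, is integrated into the pose and never removed.

```cpp
#include <cmath>

// Minimal differential-drive dead reckoning: integrate per-step wheel
// travel into a planar pose. Names and parameters are illustrative.
struct Pose2D {
    double x = 0.0, y = 0.0, theta = 0.0;  // meters, meters, radians
};

// Advance the pose given the distances travelled by the left and right
// wheels over one step; 'track' is the distance between the wheels.
Pose2D integrateOdometry(Pose2D p, double dLeft, double dRight, double track) {
    const double d = 0.5 * (dLeft + dRight);        // forward distance
    const double dTheta = (dRight - dLeft) / track; // heading change
    // Midpoint integration: rotate by half the heading change first.
    p.x += d * std::cos(p.theta + 0.5 * dTheta);
    p.y += d * std::sin(p.theta + 0.5 * dTheta);
    p.theta += dTheta;
    return p;
}
```

Because each step adds to the previous estimate, a small unmeasured slip in `dLeft` or `dRight` biases every subsequent pose, which is why an absolute correction source is needed.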

To solve these challenges and make localization possible, an approach based on vision and sensor fusion can be adopted. This means that the robot uses cameras in combination with other sensors to increase the confidence level in the estimated absolute position in the world. Vision sensors can also be used to detect obstacles or anomalies (e.g. work sites that force the robot to change paths). Modern vision-based algorithms are capable of measuring the underlying structure and geometry of the environment from a set of features and landmarks extracted from a camera image stream. Such structures can be generalized and aggregated into a visual map that is used to estimate the correct position of the robot with high precision. This methodology is well recognized and has been widely adopted in related fields such as self-driving cars or mining robots. However, in those applications SWaP (Size, Weight and Power) constraints are relatively relaxed. For LMD Robots, tight SWaP constraints are a major problem.
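The core idea of fusing a drifting relative estimate with occasional absolute fixes can be sketched with a one-dimensional Kalman-style filter. This is a didactic toy, not the actual fusion used by any particular product: the odometry "predict" step grows the uncertainty, and an absolute measurement (e.g. a visual-map match) shrinks it again.

```cpp
// One-dimensional sketch of sensor fusion: a scalar Kalman filter that
// combines dead-reckoned motion (predict) with absolute position fixes
// (update). All variable names and noise values are illustrative.
struct Filter1D {
    double x = 0.0;  // position estimate
    double p = 1.0;  // variance of the estimate

    // Predict: apply an odometry increment dx whose noise q inflates
    // the variance -- this is the drift accumulating.
    void predict(double dx, double q) { x += dx; p += q; }

    // Update: fuse an absolute measurement z with variance r. The
    // Kalman gain weights the correction by relative confidence.
    void update(double z, double r) {
        const double k = p / (p + r);  // Kalman gain in [0, 1)
        x += k * (z - x);
        p *= (1.0 - k);                // variance shrinks after a fix
    }
};
```

The same predict/update pattern generalizes to full 6-DoF state estimation, where the update can come from GNSS, a visual landmark match or any other absolute source.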

Why Univrses

Univrses’ clients include several leading companies developing autonomous systems, such as Zenuity (autonomous cars), ABB (industrial robots) and Husqvarna (mobile robots). The focus of our work has been the development of perception solutions that enable systems to operate with a degree of autonomy in a range of environments. We have addressed and solved many challenging robotics and computer vision problems using components from 3DAI™ Engine. Perception in urban scenarios is a particular strength of Univrses. These environments are characterized by static, well-structured elements (e.g. buildings) that are helpful for stable camera tracking, but also include many dynamic elements (e.g. cars or pedestrians) that challenge a positioning system’s robustness. Components from 3DAI™ Engine are designed to ensure efficient localization and tracking in such situations, as well as providing efficient object detection and classification. The 3DAI™ Localization module, in particular, is well suited for deployment in an LMD robotic application. It is designed to combine multiple sensor signals (e.g. IMU, wheel odometer and RGB cameras or LIDARs) in order to ensure robust navigation.

Univrses’ localization system handles typical sources of cumulative error (e.g. wheel slippage) and implicitly assumes that GNSS updates will rarely be available. Whenever a GNSS signal is available, the system can globally correct for these errors. The framework is also capable of building a map of the area traversed by the LMD Robot. Mapping enables other functionality, such as loop closure and visual relocalization, which assist recovery from failure or recognition that an LMD Robot has been moved to a location it has visited before. Multi-robot mapping is also supported: multiple LMD robotic units can operate simultaneously and share live information about the environment to refine the map and improve navigation.

The system has been designed to operate on low-cost, readily available processing hardware and sensors. A high-level API exposes high-frequency 6-DoF poses, useful for planning and control, as well as other information such as object instances.
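For readers unfamiliar with the term, a 6-DoF pose bundles 3D position with 3D orientation. The struct below is a plausible shape for such an API result; the field names are assumptions for illustration and are not the actual 3DAI™ Engine interface.

```cpp
// Hypothetical shape of a timestamped 6-DoF pose as a high-level
// localization API might expose it: position plus an orientation
// quaternion. Field names are illustrative, not a real API.
struct Pose6DoF {
    double t;               // timestamp, seconds
    double x, y, z;         // position, meters
    double qw, qx, qy, qz;  // orientation, unit quaternion
};

// Sanity check before using a pose downstream: a rotation quaternion
// must be (approximately) unit-norm.
bool isValid(const Pose6DoF& p) {
    const double n2 = p.qw * p.qw + p.qx * p.qx + p.qy * p.qy + p.qz * p.qz;
    return n2 > 0.999 && n2 < 1.001;
}
```

A planner or controller would typically consume such poses at sensor rate, interpolating between them when actuation runs on a faster clock.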

Univrses has conducted tests of components in the 3DAI™ Engine that have been adapted for scenarios similar to that of LMD Robots. In particular, tests were done using a single camera (monocular vision), simulating cases where other sensors might become less available or reliable. The image in the figure below shows the position (trajectories in pink) of a vehicle as it moved through Stockholm. These trajectories are robustly estimated using only vision, i.e. NO wheel odometry, NO robot motion model, NO GNSS/RTK signal and NO IMU. Many pedestrians and cars were present in the scene during the tests. The blue dots form the static map, with each point representing a feature in the 3D structure of the environment. The orange arrows were added manually for clarity and indicate arrival/start positions.

Examples of multiple trajectories around a large building in Stockholm, collected using software components from 3DAI™ Engine.

For the most challenging urban environments, researchers and engineers at Univrses are constantly exploring new technologies to enhance the autonomous capabilities of mobile robots. For more details of our latest research, check out our Research page.

 
WRITTEN BY: SERGIO CACCAMO
 

Get in contact to find out more.