TECHNOLOGY
Enabling the next generation of autonomous systems

Over the past four years, our team of expert scientists and experienced engineers has developed Univrses’ technologies. We work in a variety of industries but are particularly focused on mobile robotics, smart cities and automotive applications. Our core areas of expertise include 3D Computer Vision and 3D Artificial Intelligence. We excel at providing software solutions and components that enable detailed and accurate analysis of the surrounding environment. We work with data from a variety of sensors, particularly mobile cameras. Today, our Computer Vision and Machine Learning component technologies are amongst the most advanced in the world.

AUTONOMOUS SYSTEMS – UNIVRSES’ TECHNOLOGIES

WHAT IS POSITIONING?

Adding a camera to an autonomous mobile platform (such as a car or a robot) enables Positioning algorithms to track how that platform is moving. Software analyses data from the camera (in real time) to show how the position (and the pose, or orientation) of the platform is changing relative to its surroundings. The position of the platform is a crucial building block for enabling robust higher levels of autonomy in robotic systems.
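The idea of tracking a moving platform from per-frame motion estimates can be sketched in a few lines. This is a minimal planar (2D) illustration, not Univrses' implementation: it assumes each camera frame pair already yields an incremental motion estimate, which is then composed into a running pose.

```python
import math

def compose(pose, delta):
    """Compose a platform pose (x, y, theta) with an incremental
    motion (dx, dy, dtheta) estimated between two camera frames."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the frame-to-frame translation into the world frame
    # before adding it to the current position.
    wx = x + dx * math.cos(th) - dy * math.sin(th)
    wy = y + dx * math.sin(th) + dy * math.cos(th)
    return (wx, wy, th + dth)

# Integrate a sequence of per-frame motion estimates:
# four steps of "move 1 m forward, turn 90 degrees" trace a square.
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, math.pi / 2)] * 4:
    pose = compose(pose, delta)
```

After the four steps the platform is back at the origin, which is exactly the kind of relative-motion bookkeeping a positioning pipeline performs (real systems work in full 6-DoF and must also manage drift).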

WHAT IS MAPPING?

A camera can be used to create a high-resolution map of the space around the platform. The map shows the 3D structure of the space and is extended as the camera view changes to see more of the platform’s surroundings. A map can be built of very small spaces (such as a room in a factory), larger spaces (such as a warehouse) and even outdoor environments. The 3D map of the space around the platform is an essential part of enabling a system to navigate autonomously.
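Extending a map as the camera view changes amounts to transforming each new observation into a common world frame and merging it with what is already there. The sketch below is a simplified planar version under assumed known poses; it shows how two views of the same landmark collapse into a single map point.

```python
import math

def to_world(pose, point_cam):
    """Transform a 3D point observed in the camera frame into the
    world frame, given a camera pose (x, y, theta) on the ground
    plane. Real mapping uses full 6-DoF poses; this is a sketch."""
    x, y, th = pose
    px, py, pz = point_cam
    wx = x + px * math.cos(th) - py * math.sin(th)
    wy = y + px * math.sin(th) + py * math.cos(th)
    return (wx, wy, pz)

# Accumulate observations from two camera poses into one map.
map_points = set()
observations = [
    ((0.0, 0.0, 0.0), [(2.0, 1.0, 0.5)]),           # first view
    ((1.0, 0.0, math.pi / 2), [(1.0, -1.0, 0.5)]),  # after moving and turning
]
for pose, points in observations:
    for p in points:
        map_points.add(tuple(round(c, 3) for c in to_world(pose, p)))
```

Both observations are the same physical landmark seen from different places, so the map ends up with one point rather than two; in practice this merging step is where loop closure and uncertainty handling do the heavy lifting.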

TECHNOLOGIES
Visual odometry
Visual inertial odometry
Multi-camera odometry
SLAM
VI-SLAM
Lidar-SLAM
Multiagent Mapping
Cloud Mapping
3D Reconstruction
Monocular Vision
Stereo Vision
RGBD Vision

SPATIAL DEEP LEARNING

WHAT IS LOCALISING?

The addition of a camera and 3D mapping software to an autonomous mobile platform enables Localisation algorithms to identify the platform’s location relative to the map. Localising the position of the platform within a map complements the Positioning algorithms; it is another crucial building block for enabling robust higher levels of autonomy in robotic systems.
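The core of the idea can be shown with a toy example: if mapped landmarks are re-observed relative to the platform, the platform's position falls out of the difference. This assumes known data association and no rotation error, so it is an illustration of the principle rather than a localisation pipeline.

```python
def localise(map_landmarks, observed, matches):
    """Estimate the platform's 2D position from landmarks observed
    relative to the platform and their known positions in the map.
    `matches` pairs map indices with observation indices (assumed
    known here; finding them is a hard problem in practice)."""
    dx = dy = 0.0
    for i, j in matches:
        mx, my = map_landmarks[i]
        ox, oy = observed[j]
        # If the platform is at p, then map = p + observation,
        # so p = map - observation; average over all matches.
        dx += mx - ox
        dy += my - oy
    n = len(matches)
    return (dx / n, dy / n)

# Three mapped landmarks, re-observed from an unknown position.
landmarks = [(5.0, 5.0), (8.0, 2.0), (3.0, 9.0)]
relative = [(2.0, 1.0), (5.0, -2.0), (0.0, 5.0)]  # as seen from (3, 4)
position = localise(landmarks, relative, [(0, 0), (1, 1), (2, 2)])
```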

WHAT IS SPATIAL DEEP LEARNING?

Spatial deep learning can be used to make sense of the system’s surroundings. The data in the 3D map of the space around the platform is “translated” to information that can be readily interpreted by a human. For example, an object within the map can be identified as something specific, like a table, a park bench or a road sign. Spatial deep learning plays a very important role in autonomous driving, making it possible for a vehicle to identify key landmarks within the 3D map being used for navigation.
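The "translation" step can be pictured as attaching a class label to each map point and grouping points by class. The classifier below is a stand-in dictionary, not a trained network; the structure of the resulting semantic map is the point of the sketch.

```python
def label_map_points(points, classify):
    """Build a semantic map: attach a class name to each 3D map
    point via a classifier function. `classify` stands in for a
    trained deep network (hypothetical here), mapping a point's
    appearance descriptor to a label such as 'road sign'."""
    semantic_map = {}
    for point, descriptor in points:
        semantic_map.setdefault(classify(descriptor), []).append(point)
    return semantic_map

# Toy stand-in for a neural network's per-point prediction.
classes = {0: "road sign", 1: "park bench"}
points = [((1.0, 2.0, 0.5), 0), ((4.0, 1.0, 0.4), 1), ((6.0, 2.5, 0.5), 0)]
semantic_map = label_map_points(points, classes.get)
```

A vehicle querying such a map can then ask for all points labelled "road sign" near its current position, which is the landmark-identification use case described above.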

TECHNOLOGIES
Semantic Mapping
Mono-Depth
Semantic Segmentation
Object Detection
Object Tracking
Domain Adaptation

SENSOR FUSION

WHAT IS SENSOR FUSION?

Sensor fusion is the process of combining sensory data from numerous sources into a single optimized result. The single result obtained from this process will have less uncertainty than any of the individual sources of sensory data. Combining sensor data in this way is an integral part of solving robust Localisation and a major part of generating accurate 3D maps.
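The simplest instance of this is fusing two noisy measurements of the same quantity by inverse-variance weighting (the static form of the Kalman update): the more certain measurement gets the larger weight, and the fused variance is smaller than either input's.

```python
def fuse(x1, var1, x2, var2):
    """Fuse two noisy measurements of the same quantity by
    inverse-variance weighting. The fused estimate always has
    lower variance than either individual measurement."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# A coarse position fix (variance 4.0) fused with a tighter
# camera-based estimate (variance 1.0); illustrative numbers.
mean, var = fuse(10.0, 4.0, 12.0, 1.0)
```

The fused estimate lands much closer to the low-variance measurement, and its variance (0.8) is below both inputs, which is exactly the "less uncertainty than any individual source" property described above.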

WHAT IS 3D RECONSTRUCTION?

A camera mounted on a mobile platform will capture images of a scene from different perspectives as the platform moves. Similarly, a stationary camera will capture images of moving objects from different perspectives. The 3D structure of the object can be determined by comparing these different perspectives. This is useful for understanding the nature of objects around a camera. For example, objects moving past a camera on a conveyor belt can be rigorously examined for faults or defects.
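For the special case of two rectified views separated by a known baseline, "comparing perspectives" reduces to a one-line triangulation: depth is focal length times baseline divided by the pixel disparity between the images. The numbers below are illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Recover depth from two perspectives of the same point.
    For a rectified stereo pair, Z = f * B / d, where f is the
    focal length in pixels, B the distance between the viewpoints
    in metres, and d the disparity (pixel shift) between views."""
    return focal_px * baseline_m / disparity_px

# A feature shifted 14 px between views 0.12 m apart (f = 700 px).
z = depth_from_disparity(700.0, 0.12, 14.0)
```

The same relation explains why nearby objects (large disparity) are reconstructed more precisely than distant ones, whether the perspectives come from a moving platform or from objects moving past a fixed camera.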

 
Do you want to know how we implement our technology?