Autonomous Robots in Mining: The Differences Between Online and Offline SLAM
Written by Raffi Jabrayan, Exyn Technologies
There are many ways we can empower robots to navigate their environment autonomously. We can equip them with GPS, use an array of cameras and sensors (including smell), or even get them to talk with each other and strategize.
But if you want a robot to do all of that on its own, without relying on any infrastructure outside the robot itself, you’ll most likely have to use a process known as SLAM, or Simultaneous Localization and Mapping.
SLAM is a remarkable technique that enables robots to safely navigate an environment they have no prior knowledge of. However, SLAM has its weak spots: it relies heavily on environments rich in unique geometric features (though it still has some tricks up its sleeve).
Let’s look at how SLAM operates so you can capitalize on this revolutionary method of 3D data capture.
How does SLAM work?
To power a SLAM algorithm, you need at least two sensors: one to track the robot’s movements and orientation, and another to “see” the environment it’s trying to navigate through. Most modern-day robots come equipped with the first sensor, called an inertial measurement unit, or IMU. An IMU usually combines an accelerometer and a gyroscope, which together track the robot’s movement in 3D space along six axes (three of translation, three of rotation). This is the sensor in your smartphone that understands whether it’s in portrait or landscape mode.
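To make that concrete, here is a minimal sketch of IMU dead reckoning: integrating gyroscope and accelerometer samples into an orientation, velocity, and position estimate. The sample rate and names are assumptions for illustration, not Exyn’s implementation, and a real pipeline fuses this with LiDAR because pure integration drifts quickly.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

DT = 0.01                              # 100 Hz IMU sample period (assumed)
GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity, z pointing up

def imu_step(orientation, velocity, position, gyro, accel):
    """Integrate one gyro (rad/s) and accelerometer (m/s^2) sample.
    Returns the updated (orientation, velocity, position) state."""
    # Rotate the attitude estimate by the gyro rate over one time step.
    orientation = orientation * R.from_rotvec(gyro * DT)
    # The accelerometer measures specific force; express it in the world
    # frame and add gravity back to recover true acceleration.
    accel_world = orientation.apply(accel) + GRAVITY
    # Integrate twice: acceleration -> velocity -> position.
    velocity = velocity + accel_world * DT
    position = position + velocity * DT
    return orientation, velocity, position

# Starting at rest at the origin; feed samples in as they arrive.
state = (R.identity(), np.zeros(3), np.zeros(3))
```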
Once the robot understands its orientation and trajectory, it needs to understand its environment and how to navigate through it safely and efficiently.
Enter the robot’s ‘eyes.’ Our robots use a gimballed LiDAR sensor to see the world around them. The sensor fires eye-safe laser beams, slightly angled on a horizontal plane, and rotates through 360º to give the robot an almost complete view of its environment (occluded only by the robot itself). The LiDAR puck fires millions of beams every second and captures a return for each point, with a timestamp, an intensity rating, and an x, y, z coordinate attached. The robot uses this data to create a real-time 3D map of its surrounding environment that we call a point cloud.
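To picture the data, here is a minimal sketch of what a single return might look like in code, based only on the fields named above; the class and field names are illustrative, not Exyn’s actual data format.

```python
from dataclasses import dataclass

@dataclass
class LidarReturn:
    timestamp: float   # seconds since the start of the flight
    intensity: float   # reflectivity of the surface that was hit
    x: float           # position of the hit in metres
    y: float
    z: float

# A point cloud is simply a very large collection of these returns,
# accumulated as the gimballed sensor sweeps the environment.
point_cloud: list[LidarReturn] = []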
That’s SLAM in a nutshell. You’re using onboard sensors to build a map of an unknown environment while estimating where the robot is inside that map. But this is where SLAM forks into two separate branches: online and offline. Online SLAM, as you can likely guess, is active while the robot is in flight and prioritizes object detection and avoidance over the quality of the map. Offline SLAM takes place after the robot has landed and prioritizes map quality and loop closure, now that we don’t have to worry about robot control.
Online SLAM is sloppy because it needs to be fast. Offline SLAM can take its time, running complex algorithms that align the map to very specific geometric features.
Online SLAM: object detection and avoidance
For the robot to localize its position in the map and maintain its state, we use a feature-based LiDAR Odometry and Mapping (LOAM) pipeline. And this pipeline needs to be fast to keep up with a robot moving at two meters per second! As our gimballed LiDAR scans the environment, the LOAM algorithm aggregates this data into sweeps (two sweeps per gimbal rotation) that it uses to build a fast local map with pose estimates of the robot’s probable location. With each sweep, LOAM updates the map and motion-corrects the sweep using the IMU data.
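As a rough illustration of that motion correction (often called “deskewing”), the sketch below interpolates the sensor pose across a sweep and moves every point into a common frame. The function name, the linear interpolation, and the pose format are all assumptions for illustration, not the actual LOAM implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_sweep(points, times, pose_start, pose_end, t0, t1):
    """points: (N,3) array; times: (N,) per-point timestamps in [t0, t1].
    pose_*: (Rotation, translation) of the sensor at the sweep boundaries."""
    rot_start, trans_start = pose_start
    rot_end, trans_end = pose_end
    # Interpolate orientation spherically, translation linearly.
    slerp = Slerp([t0, t1], Rotation.concatenate([rot_start, rot_end]))
    alpha = (times - t0) / (t1 - t0)          # 0..1 progress within the sweep
    corrected = np.empty_like(points)
    for i, p in enumerate(points):
        rot_i = slerp(times[i])                   # orientation at capture time
        trans_i = (1 - alpha[i]) * trans_start + alpha[i] * trans_end
        corrected[i] = rot_i.apply(p) + trans_i   # point into the common frame
    return corrected
```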
Our robots are actually building two maps on the fly (see figure below): one for edges, the other for surfaces. ExynAI runs complex equations on every LiDAR hit to classify it, while also excluding duplicates, stray points (like dust), and unstable edges.
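Here is a hedged sketch of how LOAM-style pipelines typically make that edge-versus-surface call: each point gets a local “smoothness” score computed from its neighbours along the scan line, with high scores marking edges and low scores marking flat surfaces. The thresholds and neighbourhood size are illustrative, not Exyn’s tuned values.

```python
import numpy as np

def classify_points(scan, k=5, edge_thresh=0.5, plane_thresh=0.05):
    """scan: (N,3) points ordered along a single scan line.
    Returns indices of edge points and planar (surface) points."""
    edges, planes = [], []
    for i in range(k, len(scan) - k):
        neighbours = np.vstack([scan[i - k:i], scan[i + 1:i + k + 1]])
        # Sum of displacement vectors to the 2k neighbours: near zero on a
        # smooth surface, large near a depth discontinuity (an edge).
        diff = (neighbours - scan[i]).sum(axis=0)
        c = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]))
        if c > edge_thresh:
            edges.append(i)
        elif c < plane_thresh:
            planes.append(i)
    return edges, planes
```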
ExynAI then converts all these points into voxel grids that the robot uses to path plan through the map it’s generating. You can think of a voxel like a 3D pixel, or a cube in Minecraft. We have multiple grids for the robot to understand solid objects, a safe flight corridor, and even explorable space. And our pipeline also implements algorithmic decay in these grids so that our robots can detect changes to their environment in real time. This decay can be tuned for more or less dynamic environments.
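Here’s a minimal sketch of what such a decaying occupancy grid could look like: each LiDAR hit “charges” a voxel, and every update tick all voxels fade, so obstacles that disappear from view fade out of the map. The voxel size, decay rate, and thresholds are hypothetical knobs for illustration, not Exyn’s actual parameters.

```python
from collections import defaultdict

VOXEL_SIZE = 0.25   # metres per voxel (assumed)
DECAY = 0.95        # per-tick multiplier; lower = faster forgetting
OCCUPIED = 0.5      # confidence threshold for "solid"

occupancy = defaultdict(float)   # voxel index -> occupancy confidence

def voxel_index(x, y, z):
    return (int(x // VOXEL_SIZE), int(y // VOXEL_SIZE), int(z // VOXEL_SIZE))

def integrate_hits(points):
    """Raise confidence for every voxel containing a LiDAR hit."""
    for x, y, z in points:
        v = voxel_index(x, y, z)
        occupancy[v] = min(1.0, occupancy[v] + 0.2)

def decay_tick():
    """Apply decay so stale voxels fade; prune the ones near zero."""
    for v in list(occupancy):
        occupancy[v] *= DECAY
        if occupancy[v] < 0.01:
            del occupancy[v]

def is_solid(x, y, z):
    return occupancy.get(voxel_index(x, y, z), 0.0) > OCCUPIED
```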
Lastly, our robots ‘cull’ the boundaries of the map they’re generating so they can focus only on their immediate surroundings. This keeps online mapping fast and efficient. You can see an example of that in the GIF above: the blue area of the map is what’s being prioritized, while the rest of the point cloud is culled once it’s a certain distance away from the sensor.
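In its simplest form, that culling could be a radius test like the sketch below; the radius value is illustrative.

```python
import numpy as np

def cull_local_map(points, sensor_pos, radius=30.0):
    """points: (N,3) array; keep only points within `radius` metres
    of the sensor so online updates stay cheap."""
    dist = np.linalg.norm(points - sensor_pos, axis=1)
    return points[dist <= radius]
```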
This all might sound relatively simple, but there’s lots of complex math happening in the background to compute surfaces, eliminate stray points, fuse pose estimates with batches of LiDAR sweeps, and run millions of other calculations per second to keep the robot safely flying to its next objective. That’s how online SLAM can be a little “sloppy”, but once the robot’s landed we can refocus all the power of ExynAI on refining and constraining a high-density point cloud map.
Offline SLAM: refined map quality and loop closure
Now that we’ve examined how our robots use SLAM to fly autonomously in GPS-denied environments, let’s look at how offline SLAM is used to refine overall map quality.
All sensor data captured during each flight is logged and stored onboard the robot, and these logs can get quite large depending on the length of the flight and the complexity of the environment. Our customers generally want the raw point cloud data gathered from our gimballed LiDAR sensor. That’s where our post-processing pipeline, ExSLAM, comes in: it extracts the raw cloud from our logs and refines it for third-party software.
ExSLAM uses a factor graph optimization algorithm to create low-drift point cloud maps. This algorithm takes a series of LiDAR sweeps and stores them as “keyframes,” each associated with a specific pose state. We then search each keyframe and its neighbors for similar geometric features and use those matches as loop-closure constraints; the resulting network of keyframes and constraints is called a pose graph. This is also why it can be difficult to produce accurate maps of featureless environments: with nothing unique to match against, there are few constraints to optimize.
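To make pose-graph optimization concrete, here is an illustrative example built with the open-source GTSAM library (in 2D for brevity). This is not Exyn’s ExSLAM code; it just shows the idea: keyframe poses become variables, odometry and loop-closure matches become “between” factors, and the optimizer pulls every pose into agreement with all constraints at once.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))

# Anchor the first keyframe at the origin.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))

# Odometry factors: drive a square, 2 m per side, turning 90 degrees.
for i in range(4):
    graph.add(gtsam.BetweenFactorPose2(
        i, i + 1, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

# Loop closure: keyframe 4 recognises the geometry around keyframe 0.
graph.add(gtsam.BetweenFactorPose2(4, 0, gtsam.Pose2(0, 0, 0), loop_noise))

# Deliberately drifted initial guesses, as accumulated odometry would give.
initial = gtsam.Values()
guesses = [(0, 0, 0), (2.1, 0.1, 1.6), (2.2, 2.1, 3.2),
           (0.1, 2.2, -1.5), (0.2, 0.3, 0.1)]
for i, (x, y, th) in enumerate(guesses):
    initial.insert(i, gtsam.Pose2(x, y, th))

# Optimisation distributes the loop-closure correction over every pose.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```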
Why is it important for SLAM to ‘close the loop’? The GIF below is a great representation of a feature-based SLAM/LOAM algorithm complete with loop closure. The red dots represent the robot’s pose estimates, and the green dots represent unique geometric features in the environment. You can see that the farther the robot travels, the more uncertain it becomes about its pose, until it ‘closes the loop’ and recognizes where it started. Then you can watch the algorithm constrain those uncertain pose estimates and landmarks with that new knowledge.
The end result is a high-density, low-drift point cloud map that can be easily exported. And the best part? This entire process can be done directly through the same tablet you’d use to control Nexys. No need to connect to WiFi or send the data to a third party for processing. Maps can also be georeferenced, smoothed, and down-sampled for easy file transfer.
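As a point of reference for that last step, down-sampling an exported cloud is a one-liner in open-source tools like Open3D; the filenames and voxel size below are placeholders, and this is not Exyn’s tooling.

```python
import open3d as o3d

cloud = o3d.io.read_point_cloud("flight_export.ply")    # exported map
smaller = cloud.voxel_down_sample(voxel_size=0.05)      # 5 cm voxel grid
o3d.io.write_point_cloud("flight_export_small.ply", smaller)
```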
The future of SLAM mapping
At Exyn Technologies, we’re always testing new and interesting ways to create more accurate and visually captivating 3D models from our data sets, from colorizing point clouds in real time on the Nexys to capturing photospheres that provide even more visual information for accurate BIM models. And as we continually improve the robustness of our autonomous SLAM pipeline, we can begin to introduce even more sensors to help ExynAI navigate in demanding environments where customers need to capture accurate 3D models quickly and efficiently.
Interested in seeing our 3D SLAM pipeline in action? You can request a personalized demo today.
About the Author
Raffi Jabrayan is the Vice President, Commercial Sales and Business Development for Exyn Technologies. He oversees the expansion of the business internationally in the mining and construction sectors, as well as penetration into other industries.
A large part of his role at Exyn is to help miners leverage the data produced by Exyn’s autonomous aerial robots to streamline underground inspections, enhance operational efficiency and reduce risk. Prior to joining Exyn, Raffi managed digital and technology innovation projects for Dundee Precious Metals and was intimately involved with operationalizing new technologies into Dundee’s workflow. Raffi oversaw the scouting, due diligence, implementation and post integration assessment of Dundee’s digital and technology projects.
Raffi is a seasoned mining professional with practical experience at both the plant and corporate level in various capacities and has completed the Digital Business Strategy Program at MIT Sloan as well as Driving Strategic Impact from Columbia Business School.
About the Company
Exyn Technologies is pioneering multi-platform robotic autonomy for complex, GPS-denied environments. For the first time, industries like mining, logistics, and construction can benefit from a single, integrated solution to capture critical and time-sensitive data in a safer, more affordable, and more efficient way. Exyn is powered by a team of experts in autonomous systems, robotics, and industrial engineering, and has drawn talent from Penn’s world-renowned GRASP Laboratory as well as other storied research institutions. The company is VC-backed and privately held, with headquarters in Philadelphia. For more information, please visit www.exyn.com or contact us through our website.