Key Technologies Defining Robotics – Positioning and Navigation

24-12-2020 | By Mark Patrick

What will the series cover?

In this series of six blogs, we take a look at the key technologies defining the way robots are being designed and used today, and how that may evolve in the future. It will cover developments at the hardware and software level, and how innovations such as AI are already shaping the future of robotics.

Blog 1: Key Technologies Defining Robotics – From Static Arms to AMRs

Blog 2: Key Technologies Defining Robotics – Mobility and Dexterity

Blog 3: Key Technologies Defining Robotics – Positioning and Navigation 

Blog 4: Key Technologies Defining Robotics – Robot Operating Systems

Blog 5: Key Technologies Defining Robotics – CoBots and AI

Blog 6: The Future of Robotics 

Positioning and Navigation

For almost 70 years, industrial robots have been highly mobile machines. Even the very first industrial robot (see the first blog in this series for more details) had 9 degrees of freedom in its central 'arm', as well as roll and yaw in its 'hand'. There are various ways of keeping track of the relative position of all these moving parts.

The simplest is probably 'dead reckoning', where position is measured relative to a known starting point and the amount of movement is inferred from how long the driving effort is applied. This method relies on the articulation of the parts involved behaving the same way under all conditions. The use of sensors can improve this further.

By counting the revolutions of a motor as it runs, the distance moved can be calculated directly from the number of rotations. Combining these measurements for each motor controlling part of a robot arm positions it in 3D space, enabling the robot to understand where it is in its workspace (a minimal sketch of this follows the list below). The types of sensor used for this kind of position sensing include:

  • Potentiometers

  • Optical encoders 

  • Hall Effect sensors
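
To make the idea concrete, the following minimal Python sketch turns accumulated encoder counts into joint angles and then into an end-effector position using two-link planar forward kinematics. The encoder resolution, gear ratio and link lengths are illustrative assumptions, not values from any particular robot.

# Minimal dead-reckoning sketch (illustrative only): convert optical-encoder
# counts into joint angles, then use two-link planar forward kinematics to
# place the end of a robot arm in its workspace. Counts-per-revolution and
# link lengths are assumed values, not from any specific robot.
import math

COUNTS_PER_REV = 2048        # assumed encoder resolution
GEAR_RATIO = 100             # assumed motor-to-joint reduction
L1, L2 = 0.40, 0.30          # assumed link lengths in metres

def counts_to_joint_angle(counts: int) -> float:
    """Joint angle in radians from accumulated encoder counts."""
    motor_revs = counts / COUNTS_PER_REV
    return (motor_revs / GEAR_RATIO) * 2.0 * math.pi

def forward_kinematics(theta1: float, theta2: float) -> tuple[float, float]:
    """End-effector (x, y) position for a two-link planar arm."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

# Example: counts accumulated since the known starting pose
theta1 = counts_to_joint_angle(51200)   # shoulder joint
theta2 = counts_to_joint_angle(-25600)  # elbow joint
print(forward_kinematics(theta1, theta2))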

This approach to position sensing works well in an environment where nothing 'unexpected' happens, but it doesn't provide the level of spatial awareness needed by robots expected to work in a more open environment. Even for static robots, the ability to sense and understand their surroundings brings increased capabilities.

Miniature Movement Sensors Provide the Most Significant Insights

One of the most significant innovations to impact this area was the development of MEMS-based motion sensors. MEMS is a technology that provides electromechanical functionality at the scale of integrated circuits. The result is a sensor measuring just millimetres on each side that can nevertheless detect movement in 9 degrees of freedom, bringing a new level of insight to using dead reckoning for position sensing. MEMS sensors help control systems to accurately measure the angular position of each of a robot's moving parts.

The SCL3300 inclination sensor from Murata is an excellent example of this technology applied to robotics. It measures inclination along 3 axes using capacitive 3D MEMS technology. The iSensor MEMS gyroscope from Analog Devices measures the angular rate of change and provides an output based on a relative angle of displacement. Control systems can use this information to monitor the speed of movement.
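
As an illustration of how a control system might combine these readings, the short Python sketch below blends an inclinometer's tilt angle with a gyroscope's angular-rate output using a complementary filter, a common way to get an angle estimate that is both stable and drift-free. The sample rate, blend factor and sample values are assumptions for illustration, not the API of either device mentioned above.

# Minimal sketch of fusing an inclination sensor with a gyroscope to track a
# joint angle. A gyro's angular-rate output drifts when integrated, while an
# accelerometer-based inclinometer is noisy but drift-free; a complementary
# filter blends the two. The 0.98 blend factor and 100 Hz rate are assumed.
ALPHA = 0.98        # trust the gyro short-term, the inclinometer long-term
DT = 0.01           # assumed 100 Hz sample rate, in seconds

def update_angle(prev_angle_deg: float,
                 gyro_rate_dps: float,
                 incl_angle_deg: float) -> float:
    """One complementary-filter step: integrate the gyro, correct with tilt."""
    gyro_estimate = prev_angle_deg + gyro_rate_dps * DT
    return ALPHA * gyro_estimate + (1.0 - ALPHA) * incl_angle_deg

angle = 0.0
for gyro_rate, incl_angle in [(5.0, 0.1), (5.0, 0.2), (4.8, 0.3)]:  # fake samples
    angle = update_angle(angle, gyro_rate, incl_angle)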

Robots are Using Light to See in 3D

Other sensor technologies now being used in robotics include Time of Flight (ToF), a technique that accurately measures the distance between an emitter and a receiver based on the time it takes light to travel between the two points. It may sound unlikely, but auto-focus cameras have used ToF for some time, so it is a proven technology. Several manufacturers offer ToF sensors for applications that need accurate distance measurement. As well as proximity detection, ToF can provide very detailed 3D relief maps of a surface, meaning robots can also use it to detect and even identify objects.
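
The underlying arithmetic is simple: the distance is half the round-trip time multiplied by the speed of light. A minimal sketch, using an assumed timing value:

# The core ToF relationship: distance = (speed of light x round-trip time) / 2.
# The timing value below is an assumed example, not a real sensor reading.
C = 299_792_458.0            # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to the target from the measured round-trip time."""
    return C * round_trip_s / 2.0

# Light returning after ~6.67 nanoseconds implies a target ~1 m away
print(tof_distance(6.67e-9))   # ~1.0 m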

For engineers looking to use ToF in a robotic application, an excellent place to start is the Basler Blaze 101 video module. This ToF camera achieves close to millimetre accuracy over a distance of several metres and provides a real-time stream of pre-processed images that include 3D points and 2D intensity. Because the ToF methodology detects reflected light, it can only 'see' objects that are directly in line with that light. Even so, that provides enough data to create 'point clouds', the term used to describe the 3D images generated by ToF technologies.
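
As a rough illustration of how a depth image becomes a point cloud, the sketch below back-projects each pixel through an assumed pinhole camera model. The intrinsic parameters here are made-up values; a real ToF camera would supply its own calibrated parameters.

# Minimal sketch of turning a ToF depth image into a point cloud using the
# standard pinhole-camera back-projection. The intrinsics (FX, FY, CX, CY)
# are assumed values for a 640x480 sensor, not from any specific device.
import numpy as np

FX, FY = 500.0, 500.0        # assumed focal lengths in pixels
CX, CY = 320.0, 240.0        # assumed principal point

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an HxW depth map (metres) into an Nx3 array of XYZ points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no return

cloud = depth_to_point_cloud(np.full((480, 640), 2.0))  # fake flat wall at 2 m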

These point clouds can then be further processed using AI techniques to recognise shapes and determine how far away they are. This is of significant benefit to robots that need to recognise objects before manipulating them, even when those objects are moving on a conveyor belt.

How Robots use LiDAR to Navigate

Another technology finding its way into robotics, particularly robots designed to move, is LiDAR, or Light Detection and Ranging. Because it uses light, the technology is similar to ToF in many ways; indeed, ToF can be considered a subset of LiDAR.

It generally works by measuring the change in either the amplitude or the phase of the reflected light. The controlled (typically pulsed) light source is steered in scanning LiDAR, or fixed if the system doesn't need to provide a 3D image.

The amount of shift in the amplitude or phase of the reflected light gives the range of the scanned object. LiDAR systems are now small and accurate enough for use in robotics. Intel's RealSense LiDAR camera, the L515, is a solid-state LiDAR depth camera that uses a MEMS-based mirror to scan its field of view.
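
For the phase-shift approach, the range follows directly from the measured phase difference and the modulation frequency. A minimal sketch, with an assumed 20 MHz modulation frequency:

# Minimal sketch of phase-shift ranging as used by continuous-wave LiDAR:
# the emitted light is amplitude-modulated at F_MOD, and the phase difference
# between emitted and reflected signals encodes the range. The frequency and
# phase values are assumed examples.
import math

C = 299_792_458.0            # speed of light in m/s
F_MOD = 20e6                 # assumed 20 MHz modulation frequency

def phase_to_range(phase_rad: float) -> float:
    """Range in metres from the measured phase shift (0..2*pi)."""
    return (C * phase_rad) / (4.0 * math.pi * F_MOD)

# A full 2*pi of phase wraps at the 'ambiguity distance' c / (2 * F_MOD):
print(phase_to_range(math.pi))          # ~3.75 m
print(C / (2 * F_MOD))                  # ~7.5 m unambiguous range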

The Next Big Challenge for Robots that Move

These technologies are now being used in robotics to address one of the next significant challenges for developers and manufacturers: how to enable robots to move around in an unfamiliar environment. This problem isn't unique to the field of robotics; it is also occupying the minds of computer scientists and engineers working on augmented and virtual reality.

One developing technique is Simultaneous Localisation and Mapping, or SLAM. It involves scanning an area and mapping the objects in the field of view while simultaneously compensating for the position of the scanning device as it moves through that space. Autonomous vehicles will use SLAM to navigate, and robots are already using the same technology to improve their autonomy.
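
A full SLAM implementation is a substantial piece of software, but the sketch below gives a feel for the mapping half of the problem: updating an occupancy grid from a range reading taken at a known pose. The grid size, resolution and example values are illustrative assumptions; a real system must also estimate the pose itself, which is the 'localisation' half.

# Minimal sketch of the mapping half of SLAM: marking an occupancy grid from
# a single LiDAR-style range reading taken at a known robot pose. Grid size
# and resolution are illustrative assumptions.
import math
import numpy as np

RESOLUTION = 0.05                           # metres per grid cell
grid = np.zeros((200, 200), dtype=np.int8)  # 10 m x 10 m map, 0 = unknown

def mark_hit(x_m: float, y_m: float, heading_rad: float, range_m: float) -> None:
    """Mark the cell where a beam from (x, y) at `heading` hit an obstacle."""
    hit_x = x_m + range_m * math.cos(heading_rad)
    hit_y = y_m + range_m * math.sin(heading_rad)
    col = int(hit_x / RESOLUTION)
    row = int(hit_y / RESOLUTION)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        grid[row, col] = 1                  # 1 = occupied

# Robot at (5 m, 5 m) facing east sees an obstacle 2 m away
mark_hit(5.0, 5.0, 0.0, 2.0)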

Semiconductor manufacturers developing solutions for robotics also provide software development kits that support the integration of hardware, such as position and navigation sensors, with the software algorithms that implement SLAM. To find out more about the software and operating systems used in robotics, check out the next blog in this series.

Meanwhile, why not take a look at Mouser's additional resources to discover more about the world of robotics?

Read More

Key Technologies Defining Robotics – From Static Arms to AMRs

Key Technologies Defining Robotics – Mobility and Dexterity



Mouser Electronics

Authorised Distributor

www.mouser.com



By Mark Patrick

Mark joined Mouser Electronics in July 2014, having previously held senior marketing roles at RS Components. Prior to RS, Mark worked at Texas Instruments in applications support and technical sales roles. He holds a first-class Honours degree in Electronic Engineering from Coventry University.