Touchpad Robotics - XI (p. 154)

Localisation (Knowing Where I Am)
• Principle: The robot must constantly determine its own precise position and orientation within its environment.
• Key Sensors:
  - Global Positioning System (GPS): For outdoor environments, provides geographical coordinates (latitude, longitude, altitude). Highly useful, but can be inaccurate indoors or in urban canyons.
  - Inertial Measurement Units (IMUs): Comprising gyroscopes (measuring rotation rate) and accelerometers (measuring acceleration), these help track the robot's changes in position and orientation over short periods. However, errors accumulate over time (drift).
  - Wheel Encoders: For wheeled robots, these sensors measure the rotation of the wheels, allowing the robot to estimate how far it has travelled (odometry). Errors can accumulate due to wheel slippage.
  - LIDAR and Cameras: Used for visual odometry or LIDAR odometry, where the robot tracks its own movement by observing features in the environment.
• Sensor Fusion: Data from GPS, IMUs, and wheel encoders are often fused (e.g., using a Kalman filter or Extended Kalman filter, which are mathematical algorithms) to get a more accurate and robust estimate of the robot's position and orientation, compensating for the weaknesses of individual sensors.
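The wheel-encoder odometry described above can be sketched for a two-wheeled (differential-drive) robot. The wheel radius, wheel base, and encoder resolution below are illustrative values, not figures from the text:

```python
import math

def update_odometry(x, y, theta, left_ticks, right_ticks,
                    ticks_per_rev=360, wheel_radius=0.05, wheel_base=0.30):
    """Update a differential-drive pose estimate from encoder tick counts.

    ticks_per_rev, wheel_radius (m) and wheel_base (m) are illustrative
    assumptions for this sketch.
    """
    # Convert encoder ticks to distance travelled by each wheel
    per_tick = 2 * math.pi * wheel_radius / ticks_per_rev
    d_left = left_ticks * per_tick
    d_right = right_ticks * per_tick

    # Average forward motion and change in heading
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base

    # Dead-reckon the new pose (errors accumulate, e.g. from wheel slip)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

With equal tick counts on both wheels the robot moves straight ahead, so x grows while y and the heading stay unchanged; unequal counts produce a turn.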
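A minimal sketch of the sensor-fusion idea, using a one-dimensional Kalman filter: odometry supplies the motion prediction and GPS supplies the correction. The noise values Q (process) and R (GPS measurement) are illustrative assumptions, not values from the text:

```python
def fuse_position(gps_measurements, odometry_steps, P=1.0, Q=0.04, R=4.0):
    """Fuse 1D odometry displacements with noisy GPS position readings.

    P is the initial position uncertainty; Q and R are assumed noise
    variances for odometry and GPS respectively.
    """
    x = 0.0
    estimates = []
    for z, u in zip(gps_measurements, odometry_steps):
        # Predict: apply the odometry displacement; uncertainty grows
        x = x + u
        P = P + Q
        # Update: blend in the GPS reading, weighted by the Kalman gain
        K = P / (P + R)
        x = x + K * (z - x)
        P = (1 - K) * P
        estimates.append(x)
    return estimates
```

The gain K automatically trusts odometry more when GPS is noisy (large R) and GPS more when odometry has drifted (large P), which is exactly the "compensating for weaknesses" behaviour described above.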
              Mapping (Understanding My Surroundings)

• Principle: The robot needs to build or use a map of its environment to plan paths and avoid obstacles.
• Key Sensors:
  - LIDAR (Light Detection and Ranging): Provides highly accurate 2D or 3D point clouds, ideal for creating precise maps of indoor and outdoor environments. It's excellent for detecting structural features like walls and furniture.
  - Cameras (Vision Sensors): Used to capture visual information. With techniques like Structure from Motion or Visual Simultaneous Localisation and Mapping, cameras can also create 3D maps, capturing the texture and colour information that LIDAR lacks.
  - Ultrasonic Sensors: Used for simpler, lower-resolution mapping of the immediate surroundings or for detecting large obstacles.
• Simultaneous Localisation and Mapping (SLAM): A sophisticated technique where the robot simultaneously builds a map of an unknown environment while determining its own location within that map. This is often achieved by fusing LIDAR or camera data with IMU and odometry data.
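One common way to represent the map a robot builds is an occupancy grid: space is divided into cells, and cells where range readings end are marked occupied. The sketch below assumes the robot's pose is already known (a full SLAM system would estimate it at the same time), and the cell size and scan geometry are illustrative:

```python
import math

def mark_scan(grid, pose, ranges, angle_step=math.radians(10), cell=0.5):
    """Mark LIDAR returns in a 2D occupancy grid (a dict of cell coords).

    pose is (x, y, heading); ranges are beam distances, one beam every
    angle_step radians. cell is the grid resolution in metres. All
    parameter values here are illustrative assumptions.
    """
    x, y, heading = pose
    for i, r in enumerate(ranges):
        a = heading + i * angle_step
        # The endpoint of each beam is where the beam hit an obstacle
        ox = x + r * math.cos(a)
        oy = y + r * math.sin(a)
        grid[(int(ox // cell), int(oy // cell))] = 1  # mark cell occupied
    return grid
```

Repeating this for every scan as the robot moves gradually fills in walls and furniture as occupied cells, which is the map the path planner later consults.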

              Perception (Detecting Objects and Obstacles)
• Principle: The robot must identify and classify objects, obstacles, and other moving entities (like people or other robots) in its path.
• Key Sensors:
  - LIDAR: Excellent for detecting obstacles and their distances, regardless of lighting conditions.
  - Radar (Radio Detection and Ranging): Uses radio waves to detect the presence, range, and speed of objects; it is particularly effective in adverse weather conditions (rain, fog) where LIDAR and cameras may struggle.
  - Cameras (Computer Vision): Essential for identifying the type of object (e.g., pedestrian, car, traffic light, road sign), understanding semantic information (e.g., "this is a road," "this is a sidewalk"), and reading text. Advanced Artificial Intelligence (Deep Learning) is crucial here.
  - Ultrasonic and Infrared (IR) Sensors: Used for short-range proximity detection and collision avoidance.
• Sensor Fusion: Data from all these perception sensors are continuously fused and processed by Artificial Intelligence algorithms to create a robust and comprehensive real-time understanding of the dynamic environment.
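As a toy illustration of perception-level fusion, the sketch below combines the two strengths mentioned above: the camera says *what* an object is, the LIDAR says *how far away* it is. The data layout (detections as label/bearing pairs, one range per bearing) and the danger threshold are assumptions made for the example:

```python
def fuse_detections(camera_objects, lidar_ranges, danger_distance=2.0):
    """Attach a LIDAR distance to each camera detection and flag close ones.

    camera_objects: list of (label, bearing_index) pairs from vision.
    lidar_ranges: distance in metres for each bearing index.
    danger_distance: illustrative threshold for collision warnings.
    """
    fused = []
    for label, bearing in camera_objects:
        distance = lidar_ranges[bearing]
        fused.append({
            "label": label,          # what it is (from the camera)
            "distance": distance,    # how far it is (from the LIDAR)
            "danger": distance < danger_distance,
        })
    return fused
```

A real pipeline must also align the sensors in time and space (calibration) and handle disagreements between them, but the principle of combining complementary measurements per object is the same.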
              Navigation and Path Planning
• Principle: Once the robot knows where it is and what's around it, it needs to plan a path to its destination and execute that path safely.
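Path planning on a map can be sketched with breadth-first search over a grid of free and blocked cells. Practical planners usually use A* or similar informed searches, but this minimal version shows the core idea of exploring outward from the start until the goal is reached:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 4-connected grid (0 = free, 1 = obstacle).

    Returns the list of (row, col) cells from start to goal, or None if
    the goal cannot be reached. This is an illustrative sketch, not a
    production planner.
    """
    queue = deque([start])
    came_from = {start: None}  # records each cell's predecessor
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through predecessors to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable
```

Because BFS expands cells in order of distance from the start, the first time it reaches the goal the recovered path has the fewest possible steps around the obstacles.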
