The software uses the power of CUDA-enabled consumer graphics cards to perform collision detection and motion planning at high resolution and in real time.
The following figures illustrate the main ideas behind the collision checker and the planner.

Image was published in [4].
This diagram gives an overview of the main software structure: red components run on a host computer, blue components run on the GPU.
The core components are the data structures that represent the environment (GPU-Octree) and the densely covered planning maps, e.g. for Swept-Volumes (GPU-Voxel-Map) and Motion-Primitives (GPU-Voxel-Lists).
In addition, an independent visualization program connects to the main program via shared GPU memory and can transform large amounts of these datatypes into OpenGL renderings in real time, so the user can monitor sensor data, the robot and, of course, the Swept-Volumes of planning results.
On the host side, there are interfaces for inserting 3D point cloud data and a bidirectional robot interface.

Image was published in [1].
The main idea is the interleaved planning and execution of motions. As the voxel-based collision check handles Swept-Volumes very efficiently, the volume required by a planned motion can be monitored constantly for newly appearing obstacles. If a new obstacle penetrates the planned trajectory, the robot can be stopped, or replanning can be triggered if the collision would occur far enough in the future.

Image was published in [4].
Our service robot HoLLiE with a motion Swept-Volume (red) within a pointcloud environment (green) recorded with an external LIDAR system.

Image was published in [3].
This image illustrates the offline generation of the robot model. First, a CAD model of each robot part is sampled equidistantly with rays. A ray generates points not only where it intersects the triangle mesh model but also inside the volume of the robot part. The generated point cloud is kept read-only in GPU memory. At runtime, the cloud is transformed into the workspace with the forward kinematics model according to the pose of the robot. The transformed cloud is finally inserted into the Robot-Voxel-Map. To avoid sampling errors, the point cloud is sampled at more than twice the resolution of the Voxel-Maps.

Image was published in [2].
Here you can see the Swept-Volume of a planned 10-DOF trajectory of a mobile manipulator.
The grey box is a dynamic obstacle that intersects with the planned motion. Therefore, the invalidated section of the Swept-Volume is highlighted in red and a replanning process is triggered. You can see this demo in our Demos section.

Image was published in [2].
These images show Swept-Volumes of a rotating robot, some with a retracted arm and some with an outstretched arm. Since the robot cannot rotate through the full 360 degrees without collision, a small section is left out. The red volume is in collision with a dynamic obstacle.

Image was published in [2].
As our collision checker is optimized for evaluating whole motions rather than single robot poses, we developed a mobile platform planner that exploits this by sampling the workspace with Swept-Volumes of a rotating robot, as shown above.
Because the volumes carry additional information, the planner can check the whole rotational volume at once while still distinguishing which parts of the rotation are collision free and which are not.
From this information a continuous path can be planned in which neighbouring configurations are chosen so as to minimize platform rotation during execution.

Image was published in [2].
This image shows the connections within the equidistant sampling grid that ensures resolution completeness. The nodes in the centre visualize continuous sections of collision-free rotations. These sections are transformed into a graph in which only nodes with overlapping rotations are connected; the actual planning happens on this graph.
A cost function ensures paths with minimal rotation but still allows complicated omnidirectional manoeuvres when necessary.

Image was published in [4].
If the rotational Swept-Volumes are exchanged for sweeps of a moving arm that is also rotated about the TCP pose, the planner can evaluate poses for manipulation actions with a single collision check.
Typical arm motions are shown in the upper images. These are rotated around the TCP, as shown in the first image of the lower row. The whole volume is then evaluated around the red object on the table that is to be grasped.
The platform planner takes the first valid manipulation pose as its goal but has to replan when an obstacle on the table is encountered.

Image was published in [1].
Another system overview. Blue components are on the GPU.


Our ongoing research has been published at the following conferences:

  • [1] A. Hermann, S. Klemm, Z. Xue, A. Roennau, and R. Dillmann,
    "GPU-based Real-Time Collision Detection for Motion Execution in Mobile Manipulation Planning,"
    in 16th International Conference on Advanced Robotics (ICAR 2013), 2013.
    DOI: 10.1109/ICAR.2013.6766473
  • [2] A. Hermann, J. Bauer, S. Klemm, and R. Dillmann,
    "Mobile manipulation planning optimized for GPGPU Voxel-Collision detection in high resolution live 3d-maps,"
    in Conference ISR ROBOTIK 2014 (ISR – ROBOTIK 2014), Munich, Germany, June 2014. Print ISBN: 978-3-8007-3601-0
  • [3] A. Hermann and F. Drews,
    "Tutorial: GPU based Voxel-Collision-Detection for Robot Motion Planning,"
    in 13th International Conference on Intelligent Autonomous Systems (IAS-13), July 2014.
  • [4] A. Hermann, F. Drews, J. Bauer, S. Klemm, A. Roennau, and R. Dillmann,
    "Unified GPU Voxel Collision Detection for Mobile Manipulation Planning,"
    in Intelligent Robots and Systems (IROS 2014), Chicago, September 2014.
    DOI: 10.1109/IROS.2014.6943148
  • [5] A. Hermann, F. Mauch, K. Fischnaller, S. Klemm, A. Roennau, and R. Dillmann,
    "Anticipate your surroundings: Predictive collision detection between dynamic obstacles and planned robot trajectories on the GPU,"
    in European Conference on Mobile Robotics (ECMR 2015), Lincoln, September 2015.

Research Funding

The research leading to the presented results was conducted at the FZI Forschungszentrum Informatik Karlsruhe. It has received funding from multiple research projects:

  • By the German Federal Ministry of Education and Research (BMBF) under grant agreement no. 01IM12006C (ISABEL).


  • By the European Union's Horizon 2020 research and innovation programme under grant agreement No. 680734 (HORSE research project).
  • By the Baden-Württemberg Stiftung under the research project KolRob – Kooperativer, intelligenter Roboterkollege für den Facharbeiter des Mittelstands (a cooperative, intelligent robot colleague for skilled workers in small and medium-sized businesses).