This page collects additional information on the tutorials and talks we give:


GPU Talk 2014, Universidad Nacional Autónoma de México
I will give a talk at the Universidad Nacional Autónoma de México on the general use of GPUs in robotics applications, especially real-time collision detection with GPU-Voxels.
Hope to see you there!

Addendum: Slides of the talk: 2014_10_10_GPU-Talk_Mexico_0.3


IAS Conference 2013 Padova
We will give an introduction and a tutorial on our
GPU-based Voxel Collision Detection for Robot Motion Planning
at the 13th International Conference on Intelligent Autonomous Systems
and hope to see you there!
An overview of all tutorials can be found here.

Here is a description of what we will present:

Motion planning has been an essential part of robotics research for more than a decade. Planning problems have become more and more complex, while planning times have been reduced by new algorithms and growing computational power. As collision checking is the most time-intensive component of heuristic planning algorithms, our work focuses on solving this problem efficiently. To this end we developed data structures and algorithms that are optimized for execution on massively parallel hardware, namely General Purpose Graphics Processing Units (GPGPUs).
Instead of using mesh-based representations of the environment and the robot, we rely on voxel structures. This allows us to take live point cloud data into account without the need for a triangulation step. Another advantage is the ease of generating swept volumes of motions during the planning phase, which can then be monitored for penetrating dynamic obstacles during execution.
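To make the voxel idea concrete, here is a minimal CPU-side sketch (not the GPU-Voxels API; the class name and map layout are illustrative): points are inserted into a dense occupancy grid, and several robot poses are OR-ed together to form a swept volume.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative dense voxel map over a cubic workspace.
// GPU-Voxels performs these steps in CUDA kernels, one thread per point/voxel.
struct VoxelMap {
    int dim;                // voxels per axis
    float side;             // voxel edge length in meters
    std::vector<bool> occ;  // flat occupancy grid with dim^3 cells

    VoxelMap(int dim_, float side_)
        : dim(dim_), side(side_),
          occ(static_cast<size_t>(dim_) * dim_ * dim_, false) {}

    // Map a metric point to its flat voxel index.
    size_t index(float x, float y, float z) const {
        int ix = static_cast<int>(std::floor(x / side));
        int iy = static_cast<int>(std::floor(y / side));
        int iz = static_cast<int>(std::floor(z / side));
        return (static_cast<size_t>(iz) * dim + iy) * dim + ix;
    }

    // Insert one point of a (e.g. Kinect) point cloud -- no triangulation needed.
    void insertPoint(float x, float y, float z) { occ[index(x, y, z)] = true; }

    // OR another map (e.g. the robot at one trajectory sample) into this one;
    // repeating this over a whole trajectory yields a swept volume.
    void unite(const VoxelMap& other) {
        for (size_t i = 0; i < occ.size(); ++i)
            occ[i] = occ[i] || other.occ[i];
    }
};
```

Because every point insertion and every cell of the union is independent, these loops parallelize naturally, which is what makes the GPU implementation fast.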
Our software is written in CUDA and easily outperforms current CPU-based approaches when motions have to be collision-checked. We will therefore also present a specialized D*-Lite planner that exploits swept-volume planning.

In the tutorial we will give a brief introduction to CUDA programming paradigms before introducing our collision-detection software stack with its underlying data structures (a GPU octree offering almost 50 GB/s throughput, voxel maps, and voxel lists) and algorithms. In a real-world scenario we will then process live point cloud data from a Kinect camera and check for collisions between it and a voxelized robot model. Building on that, we will introduce a motion planner that generates swept volumes of motion trajectories, which can be continuously evaluated against disturbances from dynamic obstacles during execution. We will also demonstrate an OpenGL shared-memory visualization (running on the same GPU) to view the live calculations. As an outlook we will show how OMPL planners can use our collision detector, and how new planners have to be designed to fully exploit the capabilities of our approach. If time permits, we would like to close with a discussion on the general usability of GPUs in robotics.
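The core collision query between the voxelized robot model and the environment map can be sketched as follows (again a simplified CPU stand-in; the function name is illustrative, and in GPU-Voxels this is a CUDA kernel over all voxels followed by a parallel reduction):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count voxels occupied in both grids. A result > 0 means the voxelized
// robot (or its swept volume) and the live point cloud map are in collision.
// Both grids are assumed to share the same resolution and origin.
size_t collideVoxelMaps(const std::vector<bool>& robot,
                        const std::vector<bool>& environment) {
    size_t collisions = 0;
    for (size_t i = 0; i < robot.size() && i < environment.size(); ++i)
        if (robot[i] && environment[i])
            ++collisions;
    return collisions;
}
```

Checking a whole trajectory against a new sensor frame then reduces to one such query against the precomputed swept volume instead of one query per trajectory sample.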

We will provide a powerful workstation for demonstrations, but participants who bring a laptop with a CUDA-capable GPU (compute capability >= 2.0) can take part in the hands-on session. For that we will bring all needed (Linux) software. As preparation, it would be helpful if participants installed the latest CUDA driver and SDK as well as the ROS desktop environment in advance.

About the organizers:
Andreas Hermann has 4 years of experience in service robotics research, specializing in opportunistic hierarchical motion planning.
Florian Drews has 1 year of experience in designing CUDA algorithms.

Both work at the FZI Research Center for Information Technology in Karlsruhe, Germany.

Addendum: Slides of the talk: http://www.fzi.de/wir-fuer-sie/mediacenter/veroeffentlichungen/?fd=2015-07-15_IAS_Tutorial_V4.0_final.pdf&no_cache=1