Hi everyone,
I just wanted to announce two new videos:
The YouTube video shows an application scenario with a shared workspace in which the motions of the robot (represented as temporally encoded Swept-Volumes) are monitored via two Kinect cameras. As soon as an obstacle is detected in the upcoming trajectory, the robot stops and chooses another (collision-free) subtask to execute. GPU-Voxels acts as a relay between ROS MoveIt and the robot controller, so that planned motions can be monitored online.
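To give a feeling for the monitoring step: if you treat a swept volume as a set of occupied voxels, checking the live camera data against the upcoming trajectory reduces to a set-intersection test. The following is a rough sketch only, not the GPU-Voxels API; the function names and the voxel size are my own illustrative assumptions.

```python
# Illustrative sketch -- not the GPU-Voxels API. A swept volume is modeled
# as a set of integer voxel coordinates; online monitoring reduces to a
# set-intersection test against voxelized obstacle points from the cameras.

def voxelize(points, size=0.05):
    """Snap metric 3D points to integer voxel coordinates (size in meters)."""
    return {(round(x / size), round(y / size), round(z / size))
            for (x, y, z) in points}

def trajectory_is_clear(swept_volume, obstacle_points, size=0.05):
    """True if no observed obstacle voxel lies inside the swept volume."""
    return swept_volume.isdisjoint(voxelize(obstacle_points, size))

# Swept volume of a motion along the x-axis:
swept = voxelize([(i * 0.05, 0.0, 0.0) for i in range(10)])
print(trajectory_is_clear(swept, [(1.0, 1.0, 0.0)]))  # obstacle far away: True
print(trajectory_is_clear(swept, [(0.2, 0.0, 0.0)]))  # obstacle on the path: False
```

In the real system this intersection runs on the GPU over full voxel maps every camera frame; when it fails, the robot stops and switches to an alternative subtask.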
This video (the third one on the linked demos page) shows our work on predictive collision detection: we analyze motion in RGB-D video input and project object movements into the future. This generates a Swept-Volume that can be intersected with the Swept-Volumes of planned robot motions to anticipate probable upcoming collisions. Big thanks to Felix Mauch for his great work on this topic! For further information, please read our paper or contact me. (A. Hermann, F. Mauch, K. Fischnaller, S. Klemm, A. Roennau, R. Dillmann)
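The predictive idea can be sketched by making the swept volumes time-stamped: a collision is only anticipated where the robot and the extrapolated obstacle occupy the same voxel in the same time window. Again, this is a minimal illustration under assumed names, voxel size, and 0.1 s time bins, not the actual implementation.

```python
# Illustrative sketch -- not the GPU-Voxels API. Both the planned robot
# motion and the extrapolated obstacle motion become *time-stamped* swept
# volumes; a collision is predicted only where they share a voxel in the
# same time bin. Voxel size and 0.1 s time slices are assumptions.
from collections import defaultdict

def swept_volume(samples, size=0.05):
    """Map (t, x, y, z) trajectory samples to voxel -> set of time bins."""
    vol = defaultdict(set)
    for t, x, y, z in samples:
        voxel = (round(x / size), round(y / size), round(z / size))
        vol[voxel].add(round(t * 10))  # 0.1 s time slices
    return vol

def predicted_collisions(robot_vol, obstacle_vol):
    """Voxels occupied by robot and obstacle in overlapping time bins."""
    return {v for v in robot_vol.keys() & obstacle_vol.keys()
            if robot_vol[v] & obstacle_vol[v]}

# Robot moves along x, one voxel per 0.1 s; an obstacle is projected to
# reach voxel (5, 0, 0) either while the robot is there or afterwards:
robot = swept_volume([(0.1 * i, 0.05 * i, 0.0, 0.0) for i in range(10)])
early = swept_volume([(0.5, 0.25, 0.0, 0.0)])  # arrives while robot is there
late = swept_volume([(0.9, 0.25, 0.0, 0.0)])   # arrives after the robot passed
print(predicted_collisions(robot, early))  # {(5, 0, 0)}
print(predicted_collisions(robot, late))   # set()
```

The temporal encoding is what lets the planner ignore obstacles that merely cross the path at a harmless time, instead of treating the whole swept volume as permanently blocked.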
We are also making progress on bringing GPU-Voxels to a flying-drone scenario, to mobile platforms (motion-primitive planning in simulation is already up and running), and into real-time grasp planning. So stay tuned!
Cheers,
Andreas