Better Point Clouds with the Kinect

The response to my blog post was amazing, and many people have suggested some really great ideas for improving the data collected. For a long while I have been wanting to play with ROS, and I think this video has put me over the edge:

That is a very impressive SLAM implementation using SURF descriptors and the RANSAC algorithm. I believe it uses the OpenCV implementations of those algorithms, which I know to be very good. There is more information about it here.
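For anyone unfamiliar with RANSAC, the core idea is simple: repeatedly fit a model to a minimal random sample of the data and keep whichever model agrees with the most points. The video uses it for pose estimation between SURF matches; here is a minimal pure-Python sketch of the same idea applied to line fitting (not their code — function names and parameters are mine):

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=1.0, seed=0):
    """Fit y = a*x + b with RANSAC: fit a model to a minimal random
    sample, count inliers, and keep the model with the most support."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample; skip for this simple parameterisation
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus a few gross outliers (bad matches)
data = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -15), (12, 90)]
(a, b), inliers = ransac_line(data)
print(a, b, len(inliers))  # → 2.0 1.0 20
```

A least-squares fit would be dragged off by the three outliers; RANSAC simply never rewards models that pass through them, which is exactly why it works so well for pruning bad feature matches.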

I implemented SURF myself as a personal project last year, but I think at this point the best thing to do is use the tools available and build on the amazing work that has gone into ROS and OpenCV.

I’m going to see what can be done outdoors using some of these techniques; there is a lot of potential for some really improved 3D maps.

We live in very exciting times, with so many people sharing their code and ideas. I’m in awe of some of the things that have been done, and I hope I can find a way to contribute to some of the projects that are making the real progress and allowing people like me to hack on small projects.

By the way – unless I say otherwise all code (not including any external libraries or code not written by me) on my GitHub is MIT/X11 licensed.

A few people have requested point cloud data or raw Kinect data; you can find it here. I hope it’s helpful, but I fear that the 1 fps capture rate will not be enough for many of the things that have been suggested. Once I get a ROS-based capture process working I will capture more.


6 thoughts on “Better Point Clouds with the Kinect”

  1. Nikolas says:

    Hello Martin

    Thanks for featuring our video on your blog 🙂 I like your work on the outdoor scene; I hadn’t expected the Kinect to work that well outdoors. We are still working on our approach. The ROS site will be updated in about three weeks, and it would be great to get it working with your initial pose estimates from GPS.

    Could you say something about the quality of your GPS measurements?


    • Hi Nikolas!

      I love your work and am very impressed with what you’ve been able to achieve. Everyone is very lucky that you share your results 🙂

      I haven’t had much time to spend on this stuff recently; I’m working on the MakerBot more at the moment.
      Unfortunately the GPS measurements from the Nexus One aren’t really sufficient for any sort of real-world mapping. Especially in an urban-canyon environment, both the absolute accuracy of each position and the variability of successive positions are really not up to the task.
      To do such a thing properly you really need a differential GPS receiver and a 6-DOF inertial pack; postprocessing with a bi-directional Kalman filter is probably good practice too. I have played with the idea of using a Nexus S/iPhone 4 for the gyros, and maybe a dedicated GPS receiver with a separate antenna, but it may be easier to use one of the sensor packs available on SparkFun and be done with it.
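To make the Kalman filter idea concrete: the filter alternates between a predict step (uncertainty grows as the platform drifts) and an update step (the noisy fix is blended in, weighted by the Kalman gain). A minimal 1D random-walk sketch of the forward pass, with made-up noise parameters (a bi-directional smoother would run this again backwards and combine the two):

```python
def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1e6):
    """Minimal 1D random-walk Kalman filter: predict inflates the
    variance p by process_var, then the update blends in measurement
    z weighted by the Kalman gain k."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += process_var            # predict: position may have drifted
        k = p / (p + meas_var)      # Kalman gain: trust in the measurement
        x += k * (z - x)            # update toward the measurement
        p *= (1 - k)                # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# noisy GPS fixes scattered around a true position of 10 m
fixes = [9.2, 10.8, 10.1, 9.6, 10.4, 9.9]
est = kalman_1d(fixes, meas_var=4.0, process_var=0.01)
```

Because the process variance is small relative to the measurement variance, the filter averages across fixes and the estimate settles near 10 m even though individual fixes are off by nearly a metre.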

  2. […] commodity hardware (including the Kinect!). Some of the projects referenced in the video have been previously linked on this blog. It’s great to see hobbyist hackers recognised. He announced rosjava, a port of […]

  3. Gerhard Bax says:

    Hello Martin!
    I really loved your approach of using the Kinect for real-world mapping! One of my Ph.D. students from Uganda is establishing a road maintenance system for her country, and we have already documented the roads of her study area in VIS and NIR HD video combined with GPS readings. For some weeks now we have experimented with the Kinect, as we think this “low-cost gadget” would be perfect for scanning the roads for potholes, at least during hours without disturbing sunshine.
    Now I have started looking at the MS SDK for the Kinect, but none of us has the necessary programming skills. Maybe you or somebody else has already done something that suits our needs. It would be marvelous to cooperate with you 🙂
    Gerhard @ Lydia

    • Hi Gerhard!

      Thanks for the comment 🙂
      Your application for the Kinect is interesting – I think it is worth investigating further.
      I didn’t use the official MS SDK for the Kinect (it wasn’t available when I did this), but if you need some example code for logging depth data along with GPS, you could have a look at the code in my repo here:

      I’m afraid I don’t have a lot of time to work on it further but hopefully it is helpful to you!
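The repo isn’t reproduced here, but the basic idea — tag each captured depth frame with the nearest GPS fix by timestamp — can be sketched roughly like this (all file names, fields, and coordinates below are hypothetical, not from my actual code):

```python
import bisect
import csv
import io

def nearest_fix(gps_log, t):
    """Return the GPS fix whose timestamp is closest to t.
    gps_log is a list of (timestamp, lat, lon) sorted by timestamp."""
    times = [row[0] for row in gps_log]
    i = bisect.bisect_left(times, t)
    candidates = gps_log[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda row: abs(row[0] - t))

def log_frames(frames, gps_log, out):
    """Write one CSV row per depth frame, tagged with the nearest GPS fix."""
    w = csv.writer(out)
    w.writerow(["frame_file", "frame_time", "lat", "lon"])
    for fname, t in frames:
        _, lat, lon = nearest_fix(gps_log, t)
        w.writerow([fname, t, lat, lon])

# hypothetical capture: depth frames at ~1 fps, GPS fixes once a second
frames = [("depth_0001.pgm", 100.2), ("depth_0002.pgm", 101.3)]
gps = [(100.0, 59.8586, 17.6389), (101.0, 59.8587, 17.6390), (102.0, 59.8588, 17.6391)]
buf = io.StringIO()
log_frames(frames, gps, buf)
```

The same pairing-by-timestamp step is what any logging pipeline needs before the pothole scans can be georeferenced, whichever SDK ends up doing the capture.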
