Playing with Kinect and ZeroMQ Part 1


I was looking for something fun to play with in order to start experimenting with sending “binary” (non string) data over zeromq. I realized I had a Microsoft Kinect lying around that no one was really using anymore, so I grabbed it and spent a day reading up on the available open source libraries for accessing it.

The Kinect is a really nifty little device.  You can pull data streams off of it containing rgb frame info (the video camera), depth information (the ir camera), four audio streams, and accelerometer data.  In addition, you can adjust the tilt of the device using the motorized base.
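As an aside, the tilt can be driven from the same sync API I use below. A minimal sketch (freenect_sync_set_tilt_degs is part of libfreenect_sync; the angle and device index 0 here are just assumptions for illustration):

```c
#include <libfreenect_sync.h>

int main (void)
{
    /*  tilt device 0 up 15 degrees;
     *  returns nonzero on failure */
    if (freenect_sync_set_tilt_degs (15, 0) != 0)
        return 1;
    return 0;
}
```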

I now have some working test code that pulls both the rgb and depth data from the kinect and broadcasts it over zeromq using a pub socket, and a small receiver program that receives the data, parses out the rgb frame and displays it.

To accomplish this I’m using the following libraries:

- libfreenect (via its synchronous wrapper, libfreenect_sync)
- czmq (the high level C binding for zeromq)
- OpenCV (in the receiver, to display the rgb frames)

First, we’ll look at the broadcast code. The includes are simply the libfreenect_sync wrapper, the czmq library, and stdlib / stdio. I’m using the sync wrapper for libfreenect to start because it is a simpler interface than the asynchronous one. I plan to move to the asynchronous interface soon, as its event driven / callback model would be a nice fit with czmq’s zloop.

#include <stdlib.h>
#include <stdio.h>
#include <libfreenect_sync.h>
#include <czmq.h>

So first I set up a zeromq publish socket. I’ve set a send high water mark of 1000 messages, as I’d rather drop frames than run my laptop out of ram if the receivers can’t process fast enough:

        /*  set up zmq pub socket */
        zctx_t *ctx = zctx_new ();
        void *broadcast = zsocket_new (ctx, ZMQ_PUB );
        zsocket_set_sndhwm ( broadcast, 1000 );
        zsocket_bind ( broadcast, "tcp://192.168.1.113:9999" );

Since I want to send both the rgb and depth buffers, the next thing I do is get the sizes I will need for each buffer. To do this, I use freenect_find_video_mode and freenect_find_depth_mode, which are part of the openkinect “low level” API ( see http://openkinect.org/wiki/Low_Level ):

        size_t rgb_buffer_size = freenect_find_video_mode(
            FREENECT_RESOLUTION_MEDIUM, FREENECT_VIDEO_RGB).bytes;
        size_t depth_buffer_size = freenect_find_depth_mode(
            FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT).bytes;

Next, I’ll create an empty zeromq message using czmq’s zmsg api ( http://czmq.zeromq.org/manual:zmsg ):

        zmsg_t *msg = zmsg_new ();

Now, I’ll get the rgb data, put it into a buffer, put that buffer into a zeromq frame, and push the frame into my empty message. Note that freenect_sync_get_video also expects a pointer to an unsigned int, into which it will place the timestamp for the frame. I’m currently not doing anything with the timestamp, but it would be easy enough to include in the message as well.

        /*  get rgb frame and timestamp
         *  and add rgb frame to msg */
        char *rgb_buffer;
        unsigned int rgb_timestamp;

        freenect_sync_get_video (
            (void**) (&rgb_buffer), &rgb_timestamp,
            0, FREENECT_VIDEO_RGB );
        zframe_t *rgb_frame = zframe_new ( rgb_buffer, rgb_buffer_size );
        zmsg_push ( msg, rgb_frame );

Now, I’ll do the same thing for the depth buffer. One thing to keep in mind: zmsg_push adds frames to the front of the message, so after both pushes the depth frame will be first and the rgb frame second, and the receiver needs to pop them in that order:

        /*  get depth frame and timestamp
         *  and add depth frame to msg */
        char *depth_buffer;
        unsigned int depth_timestamp;

        freenect_sync_get_depth (
            (void**) (&depth_buffer), &depth_timestamp,
            0, FREENECT_DEPTH_11BIT );
        zframe_t *depth_frame = zframe_new ( depth_buffer, depth_buffer_size );
        zmsg_push ( msg, depth_frame );

All that’s left to do at this point is send the message and clean up:

        int rc = zmsg_send ( &msg, broadcast );
        assert ( rc == 0 );

        /*  cleanup */
        zmsg_destroy ( &msg );

I’ve been using czmq for quite a while now. I’m pleased by the balance the library strikes between providing some nice higher level abstractions while still allowing low level control when you need it. Hopefully this post demonstrates how simple it is to create multi frame messages from buffers using the library.

I’ll post about the receiver in a follow up post.  It currently receives the messages over zeromq, pulls out the frame with the rgb buffer, and uses opencv to construct and display the images as a video.
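In the meantime, here’s a rough sketch of what the zeromq side of that receiver looks like. This is a minimal version of my own, assuming the same czmq API as above and the bind address from the broadcaster, with the opencv display left out:

```c
#include <stdio.h>
#include <czmq.h>

int main (void)
{
    zctx_t *ctx = zctx_new ();
    void *subscriber = zsocket_new (ctx, ZMQ_SUB);

    /*  subscribe to everything; address must match the broadcaster's bind */
    zsocket_set_subscribe (subscriber, "");
    zsocket_connect (subscriber, "tcp://192.168.1.113:9999");

    while (true) {
        zmsg_t *msg = zmsg_recv (subscriber);
        if (!msg)
            break;  /*  interrupted */

        /*  the broadcaster pushed rgb then depth onto the front,
         *  so depth pops first and rgb second */
        zframe_t *depth_frame = zmsg_pop (msg);
        zframe_t *rgb_frame = zmsg_pop (msg);

        printf ("depth: %zu bytes, rgb: %zu bytes\n",
            zframe_size (depth_frame), zframe_size (rgb_frame));

        /*  ...hand zframe_data (rgb_frame) to opencv here... */

        zframe_destroy (&depth_frame);
        zframe_destroy (&rgb_frame);
        zmsg_destroy (&msg);
    }
    zctx_destroy (&ctx);
    return 0;
}
```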

About taotetek

Sometimes stereotypical but never ironic. You can't stop the signal, Mal. All my opinions are my own, unless I stole them from you.