Blog moved to http://oldschool.systems.

Just a heads up that I’m moving my blogging to my new blog at http://oldschool.systems/.  I’m now using Hugo to manage my blog, and I find its workflow much more comfortable than here.


Revisit.Link

For the last couple of weeks I’ve been spending some of my free time hacking on gorevisit, a golang microframework for making services for Revisit.link.  Revisit.link is an image mutation service started by my friend Edna Piranha in order to “encourage developers of all skills to build small, focused services with only two API calls”.

Basically, you go to the revisit.link hub and upload an image like this…

[Image: the original photo I uploaded]

… then choose four revisit services, all of which are written and hosted by anyone who wants to create one…

[Image: choosing the glitch services]

… and out pops something like this:

[Image: the glitched result]

It is totally unpredictable, and the result of multiple services created by multiple people all communicating through a shared coordination hub.  A real-time feed of glitched images is available at http://ws.revisit.link/ and some images are posted to the revisitron twitter account as well.

There are a lot of different angles I could address concerning this project, and perhaps I’ll cover more of them in subsequent posts.  Today I just wanted to address one, and that’s the question Angelina Fabbro asked on twitter yesterday:

[Image: Angelina Fabbro’s tweet]

Why do I love this so much?  I’ve put a lot of hours into my service micro-framework that I could have spent doing a lot of other things.  I code every day for work – spending nights and weekends coding on a project is a sign that it has seriously inspired me.  The most succinct answer is:

It feels like the old internet.

To me, Revisit.link is late nights listening to a modem sing accompanied by the light of blinking pixels.  It’s finding treasure troves of weird text files.  It’s a stack of devoured issues of Mondo 2000 next to the computer while catching up on USENET posts.  It’s ANSI art and the demo scene and How To Mutate and Take Over The World and Hackers and Tron.

It’s a group of happy mutants pointing at the things being created by the thing they’ve made, and laughing and smiling and being surprised.

So thank you, Edna Piranha, for seeding this idea… and thank you, all my wonderful crazy playmates collaborating together on this.  Revisiting is refreshing.


The Edge Net and Why Projects Like This Matter

My original intention with this blog was to focus just on the technical.  However, the recent revelations about the scope of the intrusion of various nations’ intelligence agencies into the fabric of the internet itself make it impossible to be an internet technologist and not be political.  Continuing on as if everything is normal is just as much of a political decision as speaking out.

I was blessed with my first home computer in 1981 when I was 10 years old.  My father worked for Sperry Univac, so I was privileged to have access to quite a bit of technology at a young age.   I was involved in the BBS scene very early on… a recording of a modem dialing still brings me back to that magical feeling that computer communications had then, before this was all just simply a fact of life.

In 1994, my wife and I got our first home internet connection to our apartment.  The first time I dialed in to my local ISP (on a 486 running OS/2, for those who like technical details) was pure magic.  On irc, I was speaking with people by text, in real-time, who lived in other countries!  People who are younger than I am probably can’t really grasp just what a big deal this was – back when the web was very young, and we still received almost all of our information about the world from television and newspapers.

Directly speaking with whole groups of people, all over the world, was a completely transformative experience.  Maybe “people are the same all over the world” is a big “duh” to some people now.  I came of age in the 80s in the United States, being bombarded by propaganda about “evil empires” and “enemies”: Nothing was a better antidote to the state and corporate manufactured view of the world than direct communication with people in other places.

This is under attack.  Really, it has been from the start – but now we are starting to understand the full scope of the control systems that have been put in place.  Some of us may be fortunate enough to live in countries where “nothing bad” happens to us (mostly) as a result of this surveillance.  However, no matter how passive it may seem, there is one very real effect:  When we worry that our words may be used against us, and we worry that who we speak with may be used against us, we internally monitor and censor our own communications.  As a concrete example, I was speaking online with a software engineer from Pakistan yesterday.  Can there be any doubt that this communication was monitored?  It certainly was an intrusive fact in my mind as we spoke… and having to think about such things is stifling, and wrong.

Freedom of association and the free exchange of ideas are mandatory ingredients for humanity to learn, grow, and mature.

So… about The Edge Net.  Pieter Hintjens, the CEO of iMatix, the company behind the open source messaging library ZeroMQ, and author of the upcoming book “Culture & Empire”, has started a new project: The Edge Net:

We built the Internet to be a space for freedom and opportunity. Instead it has become the greatest surveillance system ever. My name is Pieter Hintjens, CEO of iMatix, and I want you to help me fix that.

Without privacy and anonymity, we lose our freedom of speech. And without that, we become slaves to a narrative where the powerful run amok, without oversight or regulation. I truly believe we’re facing the extinction of our digital freedoms, and then our real world freedoms.

By joining in this project and contributing, you help turn back the tide. – Pieter Hintjens

I wanted to write about this project here, because the idea behind the project is very important to me.  It’s important to the 10 year old kid I used to be, listening to a modem sing magical tones that put me in contact with communities of people.  It’s important to the 23 year old I was the first time I spoke with people from other countries on irc, and was overwhelmed with the experience.  It’s important to my children, who I want to leave a free and open internet to, so they can have the same experiences.

No particular project is going to be “the one” – it’ll take many people working on many projects, many of us trying and failing.  However, the idea is right, and it needs to be promoted.  I’m a huge fan of ZeroMQ as a technology (I contribute to the CZMQ API for it), and Pieter has a wonderful knack for building communities.  So, I wanted to use some space on my little pulpit here to promote it.  I want to leave the internet a better place than it was when I first found it – not a worse place.

If you actually read this far, thanks.

Brian

Ethical System Administration

I started out in this industry in the mid 90s.  My first system / network administration job was at a medium-sized local insurance company.  For a non-tech company in the mid 90s, we had a pretty impressive infrastructure.  We had two in-house datacenters with a large variety of operating systems and protocols.  By the time I left in 1998, we had NetWare, HP-UX, UnixWare, Windows NT, and Linux servers, and were running IPX/SPX, AppleTalk, DECnet, SMB, and TCP/IP on our network.  It was a great place to learn – and not just for the technology environment, but for the people who worked there.

I mentored under a network engineer who believed in high standards of professionalism.  We had actual, physical log books in our datacenters.  When people did work on servers, they were required to hand write a synopsis of what they’d done and why into the log.  I can’t imagine what that might seem like to people coming up in the age of “the cloud”.

There was something else important that was drilled into me along with a sense of professionalism and craft – the idea that there were ethics around our profession.  As system administrators with access to every server, switch, and router in the company, we in theory had access to every bit of electronic information in the company.

It was drilled into me that how we treated that information was important, and that our respect for the privacy and confidentiality of communications we did not need to access as part of our job was a point of pride.  Doing something like reading another employee’s email would be a huge breach of the trust that was placed in us.  Having administrative privileges on multi user servers was a big deal.  It meant that you were a steward, and that role was to be treated with respect for the people sharing the domain you had stewardship over.

I know how compartmentalized large tech companies get – I worked at one for 6 years.  However, somewhere within Google, Microsoft, Facebook, Verizon and the other companies who are colluding with the NSA there are system administrators and network engineers who knew what was going on and helped it happen.

I’d ask those of you who work in this field to set aside a little time today to read (or, if you’ve got a few grey hairs on your head like me, probably re-read) the System Administrator’s Code of Ethics, and then think about the things we, as technologists, are helping to enable, for good or bad.


I love velcro

This weekend I spent some time neatening up my home office. I covered the top of a shelf with velcro, and then put velcro on the bottoms of various small electronics (Raspberry Pis, my Intel NUC, USB hubs, etc.). It’s working great.


Playing with Kinect and ZeroMQ Part 2

In the previous post I briefly outlined writing a simple tool to pull data off of a kinect, and pass the buffers into zeromq frames for transmission.  In this post I’m going to go over the other side of the equation – a simple receiver that subscribes to the data stream and displays it. I’ll be using opencv for image creation and display:

#include <stdio.h>
#include <stdlib.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <czmq.h>

So, first I create my zeromq SUB socket, subscribe (in this instance I’m not using a topic so I subscribe to “”), and connect to the broadcaster:

zctx_t *ctx = zctx_new ();
void *receive = zsocket_new ( ctx, ZMQ_SUB );
zsocket_set_subscribe ( receive, "" );
int rc = zsocket_connect ( receive, "tcp://192.168.1.113:9999" );

Next, I’ll create an image header using opencv’s cvCreateImageHeader call:

IplImage *image = cvCreateImageHeader ( cvSize (640, 480), 8, 3 );

Within my while loop, I will first do a blocking zmsg_recv call. I could instead use the zmq poller, or czmq’s zloop, but I kept it simple for this example. After I receive the message, I then pop off the rgb and depth frames, then copy the rgb frame to a buffer:

/* receive message */
zmsg_t *msg = zmsg_recv ( receive );

/* pop frames and then copy rgb data */
zframe_t *depth_frame = zmsg_pop ( msg );
zframe_t *rgb_frame = zmsg_pop ( msg );

char *rgb_buffer = zframe_strdup ( rgb_frame );

Note that I could use zero copy techniques instead of duplicating the frame, but once again for this example I kept things simple.
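
The simplest version of that would be to skip the extra copy and point opencv directly at the frame’s data with zframe_data, as long as the frame isn’t destroyed until after the image has been shown. Roughly, as a sketch (this isn’t what my example code does):

/*  zero copy-ish alternative: use the frame's own data instead of duplicating it,
 *  and keep rgb_frame alive until after cvShowImage */
char *rgb_data = (char *) zframe_data ( rgb_frame );
cvSetData ( image, rgb_data, 640*3 );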

Next is the code to display the image:

cvSetData ( image, rgb_buffer, 640*3 );
cvCvtColor ( image, image, CV_RGB2BGR );
cvShowImage ( "RGB", image );
cvWaitKey ( 1 );  /* give highgui a chance to pump its event loop so the window actually updates */

Then all that’s left is cleanup:

free ( rgb_buffer );  /* zframe_strdup allocated a copy, so free it once we're done displaying */
zframe_destroy ( &depth_frame );
zframe_destroy ( &rgb_frame );
zmsg_destroy ( &msg );
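
Earlier I mentioned that czmq’s zloop could drive this instead of a bare while loop. For the curious, this is roughly the shape it would take. Treat it as a sketch, assuming the zloop_poller / zmq_pollitem_t style of API, and s_handle_stream is just a name I made up for illustration:

static int
s_handle_stream ( zloop_t *loop, zmq_pollitem_t *item, void *arg )
{
    /*  a message is ready on the SUB socket */
    zmsg_t *msg = zmsg_recv ( item->socket );
    if ( !msg )
        return -1;  /*  interrupted, stop the reactor */

    zframe_t *depth_frame = zmsg_pop ( msg );
    zframe_t *rgb_frame = zmsg_pop ( msg );

    /*  ... convert and display the rgb frame here, as above ... */

    zframe_destroy ( &depth_frame );
    zframe_destroy ( &rgb_frame );
    zmsg_destroy ( &msg );
    return 0;
}

/*  register the SUB socket with the reactor and run it */
zloop_t *loop = zloop_new ();
zmq_pollitem_t poll_sub = { receive, 0, ZMQ_POLLIN, 0 };
zloop_poller ( loop, &poll_sub, s_handle_stream, NULL );
zloop_start ( loop );
zloop_destroy ( &loop );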

Here’s a little video showing 6 receivers running, all pulling from the same broadcaster. Oh, the music in the background is a track that happened to be up on spotify when I shot the video – it’s Marley Carroll’s “Meaning Leaving”, and you might want to check him out, he’s fantastic!

Have Fun!


Playing with Kinect and ZeroMQ Part 1

I was looking for something fun to play with in order to start experimenting with sending “binary” (non string) data over zeromq.  I realized I had a Microsoft Kinect lying around that no one was really using anymore, so I grabbed it and spent a day reading up on the available open source libraries for accessing it.

The Kinect is a really nifty little device.  You can pull data streams off of it containing rgb frame info (the video camera), depth information (the ir camera), four audio streams, and accelerometer data.  In addition, you can adjust the tilt of the device using the motorized base.
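
As a tiny example of how approachable it is, adjusting the tilt from code is essentially one call through libfreenect’s sync wrapper. This is just a sketch, not part of my test code, and the error handling is minimal:

#include <stdio.h>
#include <libfreenect_sync.h>

int main ( void )
{
    /*  tilt the first kinect (device index 0) up to 15 degrees */
    if ( freenect_sync_set_tilt_degs ( 15, 0 ) != 0 ) {
        fprintf ( stderr, "couldn't talk to the kinect\n" );
        return 1;
    }
    return 0;
}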

I now have some working test code that pulls both the rgb and depth data from the kinect and broadcasts it over zeromq using a pub socket, and a small receiver program that receives the data, parses out the rgb frame and displays it.

To accomplish this I’m using the following libraries: libfreenect (via its sync wrapper) to talk to the Kinect, czmq and zeromq for the messaging, and opencv on the receiver side for constructing and displaying the images.

First, we’ll look at the broadcast code.  The includes are simply the libfreenect_sync wrapper, the czmq library, and stdlib / stdio. I’m using the sync wrapper for libfreenect to start because it is a simpler interface than the asynchronous interface. I plan to move to the asynchronous interface soon, as its event driven / callback model would be a nice fit with czmq’s zloop.

#include <stdlib.h>
#include <stdio.h>
#include <libfreenect_sync.h>
#include <czmq.h>

So first I set up a zeromq publish socket. I’ve set a send high water mark of 1000 as I’d rather drop frames than run my laptop out of ram if the receivers can’t process fast enough:

        /*  set up zmq pub socket */
        zctx_t *ctx = zctx_new ();
        void *broadcast = zsocket_new (ctx, ZMQ_PUB );
        zsocket_set_sndhwm ( broadcast, 1000 );
        zsocket_bind ( broadcast, "tcp://192.168.1.113:9999" );

Since I want to send both the rgb and depth buffers, the next thing I do is get the sizes I will need for each buffer. To do this, I use freenect_find_video_mode and freenect_find_depth_mode, which are part of the openkinect “low level” API ( see http://openkinect.org/wiki/Low_Level ):

        size_t rgb_buffer_size = freenect_find_video_mode(
            FREENECT_RESOLUTION_MEDIUM, FREENECT_VIDEO_RGB).bytes;
        size_t depth_buffer_size = freenect_find_depth_mode(
            FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT).bytes;

Next, I’ll create an empty zeromq message using czmq’s zmsg api ( http://czmq.zeromq.org/manual:zmsg ):

        zmsg_t *msg = zmsg_new ();

Now, I’ll get the rgb data, put it into a buffer, put that buffer into a zeromq frame, and push the frame into my empty message. Note that the freenect_sync_get_video call also expects an unsigned int, into which it will place the timestamp for the frame. I’m currently not doing anything with the timestamp, but it would be easy enough to include in the message as well.

        /*  get rgb frame and timestamp
         *  and add rgb frame to msg */
        char *rgb_buffer;
        unsigned int rgb_timestamp;

        freenect_sync_get_video (
            (void**) (&rgb_buffer), &rgb_timestamp,
            0, FREENECT_VIDEO_RGB );
        zframe_t *rgb_frame = zframe_new ( rgb_buffer, rgb_buffer_size );
        zmsg_push ( msg, rgb_frame );
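
As an aside, if I did want to ship that timestamp along as well, appending it as its own frame would be about one line. Something like this sketch, using czmq’s zmsg_addmem (untested here, and the receiver would then need to pop the extra frame too):

        /*  append the raw rgb timestamp as a trailing frame */
        zmsg_addmem ( msg, &rgb_timestamp, sizeof ( rgb_timestamp ) );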

Now, I’ll do the same thing for the depth buffer:

        /*  get depth frame and timestamp
         *  and add depth frame to msg */
        char *depth_buffer;
        unsigned int depth_timestamp;

        freenect_sync_get_depth (
            (void**) (&depth_buffer), &depth_timestamp,
            0, FREENECT_DEPTH_11BIT );
        zframe_t *depth_frame = zframe_new ( depth_buffer, depth_buffer_size );
        zmsg_push ( msg, depth_frame );

All that’s left to do at this point is send the message and clean up:

        int rc = zmsg_send ( &msg, broadcast );
        assert ( rc == 0 );

        /*  cleanup */
        zmsg_destroy ( &msg );

I’ve been using czmq for quite a while now. I’m pleased by the balance the library strikes between providing some nice higher level abstractions and still allowing low level control when you need it.  Hopefully this post demonstrates how simple it is to create multi frame messages from buffers using the library.

I’ll post about the receiver in a follow up post.  It currently receives the messages over zeromq, pulls out the frame with the rgb buffer, and uses opencv to construct and display the images as a video.
