Sunday, August 23, 2015

How to get the Mindstorms Ev3 brick to read coordinates from a file

12:19 PM Posted by Anton Vanhoucke
In order to plot drawings, I needed to read coordinates from a file while running a program on the Ev3 brick. This turned out to be a very tricky task that is not well documented. These were the limitations I knew about up front:

  • Ev3 can only read a whole line per file access
  • There are no text manipulation blocks that allow me to split lines by commas
  • There is no way to detect the end of a file
  • I will have to put x coordinates and y coordinates in separate files

What is the file format used by the Ev3 for writing numbers?

I started by generating a file full of numbers from an Ev3 program so I could reverse engineer the file format. I made a simple program that divides 1 by 2 about 20 times, writing each result to file.
Next I opened the file in vi to see what was in there:
Aha! This is very enlightening. My conclusions from this little test:
  • It seems that numbers have 4 decimal places
  • Numbers are separated by ASCII 13 (a carriage return)
  • The file extension is .rtf
  • ...but the file format is plain text. It has nothing to do with the Rich Text Format.
TextEdit on the Mac will not open the file; only vi and Sublime Text worked for me. On to the next phase.
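To double-check the format, a few lines of Python can read such a file back: split on the carriage returns and parse each token as a float. (`read_ev3_numbers` is my own helper name, not anything the Ev3 provides.)

```python
def read_ev3_numbers(path):
    """Read a number file as the Ev3 writes it: ASCII text,
    4-decimal numbers separated by ASCII 13 (carriage return)."""
    with open(path, 'rb') as f:
        raw = f.read().decode('ascii')
    return [float(token) for token in raw.split('\r') if token.strip()]
```

For example, a file containing `0.5000` and `0.2500` separated by carriage returns parses to `[0.5, 0.25]`.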

How to generate number files that the Ev3 can read

Now it's just a matter of writing my list of coordinates to file. In Python that looks like this:
xfile = open('x.rtf', 'w')
yfile = open('y.rtf', 'w')

for x, y in pointlist:
    # Write each number on its own line: 4 decimal places,
    # separated by ASCII 13, just like the Ev3 does
    xfile.write('%.4f\r' % x)
    yfile.write('%.4f\r' % y)

xfile.close()
yfile.close()


Test if the Ev3 can read the numbers

Finally I wrote a little program that reads from the file, performs some math and plots the result to the screen. And behold, when I compared the output to the original text file, it worked!

Building a Mindstorms plotter with two ropes

11:44 AM Posted by Anton Vanhoucke
Today I finally succeeded in building a Mindstorms robot that plots pictures, suspended by two ropes. Check the video. I ran into so much trouble building this, that I'll share what I learned.

These are the problems I ran in to:

  • Making the robot lay flat on the door. Gravity can be a bitch. (coming soon)
  • Selecting rope and pulleys for the robot (coming soon)
  • Generating the coordinates for the portrait (coming soon)
  • Getting the Ev3 to read coordinates from a file
  • Building a PID controller that does not reset the motor sensor (coming soon)
  • The math behind a two-rope plotter (coming soon)
  • Making the pen go up and down (coming soon)
Enjoy! I hope you can avoid my mistakes and build an even better plotter.

Wednesday, December 24, 2014

Modifying the original BrickPi case to fit a Raspberry Pi model B+

8:40 AM Posted by Anton Vanhoucke
The standard acrylic plates that come with a BrickPi do not accommodate a Raspberry Pi model B+: the screws don't fit and the holes are in the wrong locations. You need to do some modding to make it work. But then you get 4 USB ports on your Lego creation, and a nice, compact micro SD!

What you'll need

  • A 3mm drill
  • An M3 screw with a small head

Step 1: The extra hole

The first thing to do is to drill a 3mm hole in the corner of the bottom acrylic plate, the one without the BrickPi logo. It doesn't have to be super accurate, as we'll be using a large hole for the other screw. Just put your B+ on the acrylic plate to mark the hole.

Step 2: Enlarge the holes on your Raspberry Pi

Somehow the new B+ has smaller mounting holes than the older models. Carefully enlarge the holes on the circuit board with a 3mm drill.

Step 3: Mount the RPi on the acrylic plate

In the top left corner you'll need that smaller M3 screw. The heads of the screws that come with the BrickPi are too large: they cover the micro USB port and the black 4R7 thingy, so you can't tighten them.
The bottom right screw goes into a larger hole in the acrylic plate that is meant for a Lego peg. So you have some play there.

Step 4: Slide on the BrickPi and assemble the rest

Here's a completed assembly in an unfinished Lego robot.

Optional: Add a bevel to the holes in the acrylic plates

It's hard to insert Lego pegs in the acrylic plates that come with the BrickPi. Using a large drill you can manually add a little bevel so they go in more smoothly. You can also use a slow-turning Dremel tool.

Thursday, December 11, 2014

Writing blog posts on blogger with code snippets made easy

1:40 PM Posted by Anton Vanhoucke

In my Mindstorms hacking projects I use quite a lot of code, and I want to blog about it. But the default editor in Blogger doesn’t have a button to mark text as code. You can abuse the blockquote, or add the <code> tag by hand, but it’s quite a hassle. The solution proved to be the brilliant StackEdit web app! It’s so amazing, I’m going to pay the little fee they ask.

With it you can write your blog post in Markdown, with code blocks just like GitHub has them. I actually got the idea to use Markdown for blogging while writing a readme file. Markdown is so much easier to use than a WYSIWYG editor or typing all the HTML tags manually.

Here’s how to use it on your blog.

1. Edit the html of your blogger blog

In the Blogger dashboard go to ‘Template’ and click the ‘Edit HTML’ button. Then, just above the closing </head> tag, insert this:

<link href='' rel='stylesheet'/>
<script src=''/>

2. Go to StackEdit and write something interesting about code

StackEdit is mostly self-explanatory, so start writing, then click the hash icon on the top left and choose Publish > Blogger. And there you are: a great blog post with minimal typing and layout effort!

Realtime video stream with a Raspberry Pi and PiCam

1:20 PM Posted by Anton Vanhoucke
I want to build remote-controlled Lego robots with an onboard camera, so I can drive them around without having to see them. I did a lot of research to get a lagless video stream from the Raspberry Pi to my computer. It proved to be quite a challenge, but I found a way that works!
Actually there are two methods that work: gstreamer and netcat. Both are detailed below. VLC and mjpg-streamer are alternative methods that I didn’t get to work, at least not lagless. My favorite method is gstreamer.


Gstreamer

This proved the most stable, lag-free and flexible solution to me, because gstreamer has nice Python bindings and the order in which you start the sender and the receiver doesn’t matter. Gstreamer installation should be really easy, both on the RPi and on your Mac. I will assume you already installed and enabled the PiCamera. On your RPi just do:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install gstreamer1.0
On the Mac, the easiest way to install gstreamer is using Homebrew. I prefer it over MacPorts. Just do:
$ brew install gstreamer gst-plugins-base gst-plugins-good
Easy as Pi. On Windows I wasn’t able to get gstreamer to work. If you know a good installation tutorial, let me know.
Now it’s time to stream. These are the commands you need.
On Raspberry Pi do (change the IP address to the address of your target computer):
$ raspivid -t 999999 -b 2000000 -o - | gst-launch-1.0 -e -vvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host= port=5000
On your Mac do (the port must match the sender's):
$ gst-launch-1.0 udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
On my setup I had a near realtime stream over wifi. I didn’t measure it exactly but the lag was below 300ms.


Netcat

The alternative to gstreamer is using netcat to dump the camera data over a network pipe. This requires installing mplayer on your Mac or PC; again, it’s easy with brew. The trick is to read at a higher framerate than the Pi is sending. This way the buffer stays empty and the video is real-time.
Here the order in which you execute the commands is important. First do this on the mac:
$ nc -l 5001 | mplayer -fps 31 -cache 1024 -
Then, do this on the RPi - insert the correct IP address, of course.
$ raspivid -t 999999 -w 640 -h 480 -fps 20 -o - | nc 5001
It’s also possible to do this on Windows. For this you have to download netcat and mplayer and put them in the same directory. Go to that directory using the command prompt and execute this:
> nc -l -p 5001 | mplayer -cache 32 -demuxer lavf -


VLC

Streaming with VLC from the Raspberry Pi is fairly straightforward. I was unable to make it lagless, but the cool thing is that you can pick up the stream on an iPad with VLC installed, or on a Mac using just the VLC app. No need for brewing.
First install VLC on the RPi
$ sudo apt-get install vlc
Then start streaming on the RPi
$ raspivid -o - -t 0 -hf -b 1000000 -w 640 -h 480 -fps 24 |cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8160}' :demux=h264
To pick up the stream, open the VLC app and pick up the stream with a URL like this: Here insert the name or IP address of the RPi.


Mjpg-streamer

Mjpg-streamer is also sometimes cited as an alternative, but I haven’t gotten it to work. The installation instructions are arcane and require v4l drivers.
Written with StackEdit.

Thursday, January 16, 2014

Gamepad-controlled Mindstorms NXT robots

8:50 AM Posted by Anton Vanhoucke
Yeah, yeah, the EV3 has a fancy iPad remote control. Infrared too. But I happen to have an NXT in perfect working condition, and I had an old gamepad in the attic. So why not try and connect all of that gear? It's a nice evening project.

Step 1. Get the Mac (and python) to read gamepad input

I connected the generic USB gamepad to my Mac, but not much happened. So I tried connecting a PS3 Sixaxis controller, which did show up in the list of Bluetooth devices. OK, now Python. Some googling gave me three options for libraries: jaraco.input, PyGame and Py-SDL2. There were some more, but I couldn't get them to install. It turned out that jaraco.input only works on Linux and Windows. PyGame took a very long time to compile, and in the meantime I tried SDL2. Installing SDL2 via Homebrew was really fast and the coding examples worked right away. It turns out I could read both the cheap generic USB controller and the nice PS3 Sixaxis controller. Sweet!

Once you get it installed with Homebrew and pip, it's as simple as this:
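A minimal sketch of reading the sticks with PySDL2. The raw SDL calls are the real PySDL2 bindings; the `normalize` helper, the device index 0 and the axis numbers are my own choices and may differ per controller.

```python
def normalize(raw):
    """Map SDL's raw 16-bit axis value (-32768..32767) onto -1.0..1.0."""
    return max(-1.0, raw / 32767.0)

def read_sticks():
    import sdl2  # PySDL2 bindings; SDL2 itself installed via Homebrew

    sdl2.SDL_Init(sdl2.SDL_INIT_JOYSTICK)
    if sdl2.SDL_NumJoysticks() == 0:
        raise SystemExit('No gamepad found')
    pad = sdl2.SDL_JoystickOpen(0)      # first connected controller
    while True:
        sdl2.SDL_JoystickUpdate()       # poll fresh axis values
        x = normalize(sdl2.SDL_JoystickGetAxis(pad, 0))
        y = normalize(sdl2.SDL_JoystickGetAxis(pad, 1))
        print(x, y)
```

Call `read_sticks()` with a controller plugged in and wiggle the left stick; Ctrl-C to stop.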

Step 2. Send motor commands to the NXT via bluetooth

For sending motor commands I had already found the jaraco.nxt library; I used it for sending Bluetooth messages in the Sumo project, and I saw that it contained motor commands too. As it turned out, sending the motor commands wasn't as easy as I thought. Jaraco comes with pretty meager documentation and examples, so I had to do some digging in the source code. It turned out that you have to explicitly turn motor regulation on, otherwise the motors only seem to run at half power. Turning that on was an undocumented feature.

So here's the code for sending a simple motor command, turning the motor on with a certain speed.
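Something along these lines. I'm reconstructing the jaraco.nxt call from memory, so treat the message and enum names as assumptions to check against the library's source; the Bluetooth serial port name is just an example, and `clamp_power` is my own helper.

```python
def clamp_power(power):
    """NXT motor power runs from -100 to 100; clamp anything outside that."""
    return max(-100, min(100, power))

def run_motor(power):
    # Assumed jaraco.nxt API; verify these names against the library source
    from jaraco.nxt import Connection, messages

    conn = Connection('/dev/tty.NXT-DevB')   # example Bluetooth serial port
    cmd = messages.SetOutputState(
        messages.OutputPort.a,
        power=clamp_power(power),
        motor_on=True,
        use_regulation=True,                 # the undocumented bit: regulation on
        regulation_mode=messages.RegulationMode.motor_speed,
        run_state=messages.RunState.running,
    )
    conn.send(cmd)
```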

Step 3. Converting Gamepad input into motor input that works with the robot configuration

As I was driving around an omnibot, the great HTRotabot, I still needed to convert the (normalised) joystick input into speeds for the three motors. A simple sin() distribution of the power over the motors does the trick, since the motors are all at 120 degrees (2/3*PI radians) to each other. Here's the code:
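A sketch of that distribution in Python. `motor_speeds` is my own helper name, and the sign conventions depend on how the motors are mounted, so you may need to flip them.

```python
import math

def motor_speeds(x, y):
    """Distribute a normalized joystick vector (x, y) over three
    omniwheels mounted 120 degrees (2/3*pi radians) apart."""
    angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    # Each motor gets the component of (x, y) along its wheel's drive direction
    return [x * math.cos(a) + y * math.sin(a) for a in angles]
```

Note that the three speeds always sum to zero for a pure translation, which is what keeps the robot from spinning while it drives.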

Abandoned routes for the lego sumo project

5:27 AM Posted by Anton Vanhoucke
During the development of the Sumo Arena with camera tracking I also tried a lot of approaches that DIDN'T work out. I think they are just as interesting as the final result, so I'll detail them here.

My first attempt was with Processing. I went through great pains to extend the great open source library NXTComm. I got it working, and contributed to the open source project. But then I abandoned the route for two reasons: Java and OpenCV.
A while ago I discovered Python, and now I'm spoiled: I dislike Java's semicolons, curly braces, variable types, etc. And I was curious about OpenCV, which - incidentally - plays very well with Python and its number-crunching libraries SciPy and NumPy. In retrospect dropping Processing is a bit of a shame, as it is easy to install and plays well cross-platform.

OpenCV - Haarcascades

I first played around with facial recognition and the algorithms behind it. What if I could train the computer to recognize robots instead of faces? After lots of code experiments it turned out that the haarcascades needed for facial recognition don't work very well on rotated faces. Stand on your head and it doesn't work anymore. For the sumo match I needed to calculate rotation from each frame, so this route was a dead end.

OpenCV - Feature recognition

Next I tried feature recognition. Computers can read QR codes, and computers can detect transformed images inside other images. So wouldn't they be able to recognize two Mindstorms robots? Yes they could, but... the process was too slow. I achieved a frame rate of 15fps while detecting one robot, and I needed two robots at 30fps. Another 16 hours of coding wasted.

OpenCV - Blob recognition

First I figured that blob recognition would work best if the robots carried a color that really stood out, and I thought that little LED lights would work nicely. They would make for two spots much brighter than their surroundings, and they look cool on robots too. Again I was wrong. Because of the autogain on most webcams, the LEDs came in as white pixels, devoid of any color. Even with gain control at the lowest setting they stayed white.

The final solution was a couple of simple post-its. They stand out without triggering the camera's autogain. They are like very small 'green screens': easy to filter out.
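The 'green screen' idea can be illustrated with plain Python, without OpenCV: convert each pixel to HSV and keep only saturated pixels in the post-it's hue band. `is_marker_pixel` and the thresholds are my own guesses for a green post-it; tune them for your camera.

```python
import colorsys

def is_marker_pixel(r, g, b):
    """True if an 8-bit RGB pixel looks like a green post-it:
    hue in the green band, with enough saturation and brightness."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return 0.20 < h < 0.45 and s > 0.3 and v > 0.3
```

This also shows why the LEDs failed: a blown-out white pixel has saturation near zero, so no hue filter can pick it up.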