This year, I helped a local high school FIRST robotics team get computer vision working on a Raspberry Pi. Two students worked on the problem; at the start both worked on computer vision, and later one specialized in the Pi while the other specialized in CV. I learned a lot from a book and from playing with the Pi myself. We encountered some interesting and memorable problems along the way.
Recognizing a target
We started with sample code from another team from last year. This helped us learn how to write the code and understand the fundamentals, and it also gave us the “plumbing” code. As hard as it was to recognize a target, that didn’t prove to be the most frustrating part: the Pi itself presented a number of challenges.
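For flavor, here is a minimal sketch of the kind of recognition loop this leads to, written against the javacv 0.2 API we ended up using. The HSV thresholds, the field-of-view constant, and the class name are all illustrative, not our actual values:

// Sketch of a target-recognition pass with javacv 0.2.
// Thresholds and field of view below are placeholders.
import com.googlecode.javacpp.Loader;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;

public class TargetFinder {
    static final double FIELD_OF_VIEW_DEGREES = 47.0; // illustrative value

    public static void main(String[] args) {
        IplImage src = cvLoadImage("frame.jpg");
        IplImage hsv = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 3);
        cvCvtColor(src, hsv, CV_BGR2HSV);

        // Keep only pixels in the target's color range (values are made up).
        IplImage mask = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
        cvInRangeS(hsv, cvScalar(40, 100, 100, 0), cvScalar(80, 255, 255, 0), mask);

        // Find the blobs that survived the threshold.
        CvMemStorage storage = CvMemStorage.create();
        CvSeq contour = new CvSeq(null);
        cvFindContours(mask, storage, contour, Loader.sizeof(CvContour.class),
                CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        for (; contour != null && !contour.isNull(); contour = contour.h_next()) {
            CvRect box = cvBoundingRect(contour, 0);
            double targetCenterX = box.x() + box.width() / 2.0;
            double degreesOff = (targetCenterX - src.width() / 2.0)
                    * FIELD_OF_VIEW_DEGREES / src.width();
            System.out.println("degrees off center: " + degreesOff);
        }
    }
}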
Parts for the Pi
We bought or scavenged parts for the Pi. A USB keyboard, a USB mouse, and a cell phone micro-USB charger were donated. We needed to buy an HDMI-to-DVI cable, and we borrowed a computer monitor and an Ethernet cable.
Finding the right jars
The Pi is built on ARM, so we needed javacv-0.2-linux-arm.jar. It turned out there is no Linux ARM version in the latest javacv (0.3), so we used the one from 0.2, which in turn was incompatible with the versions of other tools (see the next problem).
Setting up the Pi
Compiling OpenCV on the Pi takes 4 hours. Since that’s as long as one of our meetings, it meant running the compile overnight. Having to wait overnight to find out if something worked was like a taste of what punch-card programmers had to go through!
Then it turned out we couldn’t even use our compile: it was missing the libjniopencv_core.so file. We spent a few days trying to solve this and wound up using a version pre-compiled for the Pi, which is also how we finally got compatible versions of everything.
Updating a NetBeans Ant script
Since speed matters in a competition, we wanted to change the build’s run target to not compile first. NetBeans comes with an empty-looking build.xml and a useful build-impl.xml file. (This is actually my favorite feature of NetBeans: the build can easily be run outside of NetBeans.) We easily found the run target in build-impl.xml, copied it to build.xml, renamed it, and removed the compile dependency, roughly as sketched below. This wasn’t actually a problem, but it was interesting to see how NetBeans sets up the build file.
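A simplified sketch of what the result looks like in build.xml. The target name is ours; ${main.class} and ${run.classpath} are standard NetBeans project properties:

<!-- Sketch of a run target with the compile dependency removed. -->
<target name="run-only" depends="init" description="Run without compiling first.">
    <java classname="${main.class}" classpath="${run.classpath}" fork="true"/>
</target>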
Starting a script on startup
We wanted the Pi to start the computer vision script automatically on boot. We created a file in /etc/init.d since this is a Linux (Debian) install. Then we made a fatal error: we forgot to add the & to run the script in the background. So when we tested rebooting, the Pi hung, and we couldn’t SSH to it because it hadn’t finished booting. The solution was to take the Pi’s SD card to another computer and edit the boot configuration to use single-user mode. We could then log in and edit the startup script to add the missing &.
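For illustration, a stripped-down version of that kind of init script. The file name, paths, and jar name are hypothetical, but the fateful & is the real lesson:

#!/bin/sh
# /etc/init.d/vision -- stripped-down sketch; names and paths are hypothetical.
case "$1" in
  start)
    # The trailing & runs the program in the background. Without it, the
    # boot sequence blocks here forever, which is exactly how our Pi hung.
    java -jar /home/pi/vision.jar &
    ;;
esac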
Networking
We used Java sockets to transfer the “answer” from the Pi to the robot, the answer being a single number representing how many degrees off we were from the center of the target. We made the mistake of testing this with both ends on a regular computer. When we moved it to the robot, it didn’t compile because the robot uses J2ME. We then refactored to use the mobile version (code here).
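The Pi side is plain java.net code. A minimal sketch, with the port number and the one-number-per-line protocol as illustrative choices rather than what our final code used:

import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Pi-side sketch: serve the latest "answer" to whatever connects.
public class AnswerServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(1180); // port is illustrative
        while (true) {
            Socket robot = server.accept();
            PrintWriter out = new PrintWriter(robot.getOutputStream(), true);
            out.println(latestDegreesOffCenter()); // the single-number "answer"
            robot.close();
        }
    }

    private static double latestDegreesOffCenter() {
        return 0.0; // placeholder for the vision result
    }
}

On the robot side, J2ME has no java.net.Socket; its equivalent goes through the Generic Connection Framework (javax.microedition.io.Connector), which is the sort of change the refactoring involved.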
Performance – CPU
Success: computer vision works. The problem is that it took 3 seconds per image. We reduced that to 1.3 seconds per image by reducing the resolution to the smallest one the camera supports. We shaved off another 0.1-0.2 seconds by turning off file caching in ImageIO. We learned the bottleneck was full CPU usage when calling ImageIO.read.
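Both tweaks together look roughly like this. The camera URL is illustrative; Axis cameras serve JPEG snapshots over HTTP:

import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;

public class Capture {
    public static void main(String[] args) throws Exception {
        // Disable ImageIO's temp-file cache; this saved us 0.1-0.2s per frame.
        ImageIO.setUseCache(false);

        // ImageIO.read was where the CPU time went.
        BufferedImage frame =
                ImageIO.read(new URL("http://192.168.0.90/axis-cgi/jpg/image.cgi"));
        System.out.println(frame.getWidth() + "x" + frame.getHeight());
    }
}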
I found an interesting thread showing that the “old way” of creating an image, using ImageIcon, was much faster. We tried it and the thread was right. It even created an image we could open in a photo editor. The problem is that it didn’t work with our image-processing code and libraries, and we don’t know why; evidently ImageIO applies some default we are unaware of. A 1 second time is acceptable, so this isn’t a problem, but it’s an interesting conundrum.
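For reference, the ImageIcon route is something like this sketch (URL illustrative). ImageIcon decodes through the old AWT Toolkit path rather than through ImageIO, which may be where both the speed difference and the incompatibility come from:

import java.awt.Image;
import java.awt.image.BufferedImage;
import java.net.URL;
import javax.swing.ImageIcon;

// Sketch of the "old way". new ImageIcon(url) blocks until the image is
// fully loaded, so getImage() returns a complete frame.
public class OldWayCapture {
    public static void main(String[] args) throws Exception {
        Image img = new ImageIcon(
                new URL("http://192.168.0.90/axis-cgi/jpg/image.cgi")).getImage();
        // Copy into a BufferedImage so pixel data is accessible for processing.
        BufferedImage frame = new BufferedImage(
                img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_RGB);
        frame.getGraphics().drawImage(img, 0, 0, null);
    }
}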
Another interesting CPU note: we tried compiling ImageMagick. It took over 3 hours on the Pi; by contrast, it took 2.5 minutes on a Mac.
Question from Facebook: Minuk Mark Choi wrote, “This is cool – sorta glanced at the linked article, but how did you capture the image? Was it a USB webcam? Or were you able to get the highly anticipated Raspberry Pi Camera module?”
Answer: We used the Axis M1011 camera, which came in the robot’s kit of parts: http://www.axis.com/products/cam_m1011/index.htm
Interesting: the code doesn’t take advantage of the GPU.