getting computer vision to work on a raspberry pi

This year, I helped a local high school FIRST robotics team get computer vision working on a Raspberry Pi. Two students worked on the problem. At the beginning, they both worked on computer vision; later, one specialized in the Pi and the other in CV. I learned a lot from a book on the topic and from experimenting myself. We encountered some interesting and memorable problems along the way.

Recognizing a target

We started with another team's sample code from last year. This helped us learn how to write the code and understand the fundamentals. It also gave us the "plumbing" code. As hard as recognizing a target was, it didn't prove to be the most frustrating part. The Pi itself presented a number of challenges.
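To give a flavor of what the recognition code does, here is a simplified sketch in the javacv 0.2 style we used (not our team's exact code; the HSV range and the field-of-view constant are made-up illustration values). It thresholds the image for the brightly lit target, takes the centroid of the resulting blob, and converts the horizontal offset into degrees:

    import static com.googlecode.javacv.cpp.opencv_core.*;
    import static com.googlecode.javacv.cpp.opencv_highgui.*;
    import static com.googlecode.javacv.cpp.opencv_imgproc.*;

    public class TargetFinder {
        // Assumed horizontal field of view; a real value comes from the camera spec.
        private static final double CAMERA_FOV_DEGREES = 47.0;

        public static void main(String[] args) {
            IplImage image = cvLoadImage("frame.jpg");
            IplImage hsv = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 3);
            IplImage mask = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);

            // Isolate the retroreflective target by color.
            cvCvtColor(image, hsv, CV_BGR2HSV);
            cvInRangeS(hsv, cvScalar(40, 80, 80, 0), cvScalar(90, 255, 255, 0), mask);

            // Centroid of the thresholded pixels via image moments.
            // (A real version would check for zero area, i.e. no target found.)
            CvMoments moments = new CvMoments();
            cvMoments(mask, moments, 1);
            double centerX = cvGetSpatialMoment(moments, 1, 0)
                           / cvGetSpatialMoment(moments, 0, 0);

            // The single number the robot needs: degrees off the image center.
            double degreesOff = (centerX - image.width() / 2.0)
                              * CAMERA_FOV_DEGREES / image.width();
            System.out.println("degrees off center: " + degreesOff);
        }
    }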

Parts for the Pi

We bought or scavenged parts for the Pi. A USB keyboard, a USB mouse, and a cell phone micro-USB charger were donated. We needed to buy an HDMI-to-DVI cable. We borrowed a computer monitor and an Ethernet cable.

Finding the right jars

The Pi is built on ARM, so we needed javacv-0.2-linux-arm.jar. It turned out there is no Linux ARM version in the latest javacv (0.3). There is one in 0.2, which we used, and which was in turn incompatible with the versions of other tools (see the next problem).

Setting up the Pi

Compiling OpenCV on the Pi takes four hours. Since that's the length of an entire meeting, this meant running the compile overnight. Having to wait overnight to find out whether something worked was a taste of what punch-card programmers went through!

Then it turned out we couldn't even use our build: we were missing the libjniopencv_core.so file. We spent a few days trying to solve this and wound up using a version pre-compiled for the Pi, which is also how we ended up with compatible versions of everything.
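A tiny smoke test would have surfaced the problem much faster: forcing javacv's JNI bindings to load up front turns a missing libjniopencv_core.so into an immediate UnsatisfiedLinkError instead of a failure buried deep in the vision code. A sketch, using the javacpp Loader that javacv 0.2 is built on:

    import com.googlecode.javacpp.Loader;
    import com.googlecode.javacv.cpp.opencv_core;

    public class NativeCheck {
        public static void main(String[] args) {
            // Forces the native library to load; throws UnsatisfiedLinkError
            // if libjniopencv_core.so is missing from java.library.path.
            Loader.load(opencv_core.class);
            System.out.println("javacv natives loaded");
        }
    }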

Updating a NetBeans Ant script

Since speed matters in a competition, we wanted to change the build's run target so it wouldn't compile first. NetBeans comes with an empty-looking build.xml and a useful build-impl.xml file. (This is actually my favorite feature of NetBeans: the build can easily be run outside the IDE.) We easily found the run target in build-impl.xml, copied it to build.xml, renamed it, and removed the compile dependency. This wasn't actually a problem, but it was interesting to see how NetBeans sets up the build file.
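For reference, the result looked roughly like this (a sketch, not our exact build file; the target and property names follow the conventions of the generated build-impl.xml and vary by project):

    <!-- Copy of the generated run target with the compile dependency
         removed, so running this target skips compilation. -->
    <target name="run-no-compile" depends="init">
        <java classname="${main.class}" fork="true">
            <classpath path="${run.classpath}"/>
        </java>
    </target>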

Starting a script on startup

We wanted the Pi to start the computer vision script automatically on boot. Since this is a Linux (Debian) install, we created a file in /etc/init.d. Then we made a fatal error: we forgot to add the & to run the script in the background. So when we tested rebooting, the Pi hung, and we couldn't SSH to it because it hadn't finished booting. The solution was to take the Pi's SD card to another computer and edit the boot settings to start in single-user mode. We could then log in and edit the startup script to add the missing &.

Networking

We used Java sockets to transfer the "answer" from the Pi to the robot, the answer being a single number representing how many degrees off from the center of the target we were. We made the mistake of testing this with both ends on a regular computer. When we moved to the robot, the code didn't compile because the robot uses J2ME. We then refactored to use the mobile version (code here).
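The Pi side is plain J2SE sockets; here is a minimal sketch of the sender (the host, port, and sample value are made up for illustration):

    import java.io.DataOutputStream;
    import java.net.Socket;

    public class AnswerSender {
        public static void main(String[] args) throws Exception {
            double degreesOffCenter = -4.2;  // the single-number "answer"
            Socket socket = new Socket("10.0.0.2", 1180);  // robot address (made up)
            try {
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                out.writeDouble(degreesOffCenter);
                out.flush();
            } finally {
                socket.close();
            }
        }
    }

On the robot, J2ME has no java.net.Socket; its Generic Connection Framework equivalent is javax.microedition.io.Connector, opening a "socket://host:port" URL and casting the result to a SocketConnection. That's the shape of the refactoring.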

Performance – CPU

Success: computer vision worked. The problem was that it took 3 seconds per image. We reduced that to 1.3 seconds per image by dropping the resolution to the smallest one the camera supports. We shaved off another 0.1-0.2 seconds by turning off file caching in ImageIO. We learned the bottleneck was full CPU usage when calling ImageIO.read.
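The caching tweak is a one-liner. A sketch of the core of the read step (the camera URL is made up; many IP cameras expose a snapshot at a URL like this):

    import java.awt.image.BufferedImage;
    import java.net.URL;
    import javax.imageio.ImageIO;

    public class GrabFrame {
        public static void main(String[] args) throws Exception {
            // Keep decoding in memory instead of using a temp file on the
            // SD card; this is the setting that saved 0.1-0.2 seconds per image.
            ImageIO.setUseCache(false);

            long start = System.currentTimeMillis();
            BufferedImage image = ImageIO.read(new URL("http://10.0.0.11/jpg/image.jpg"));
            System.out.println("read took " + (System.currentTimeMillis() - start) + " ms");
        }
    }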

I found an interesting thread showing that the "old way" of loading a JPEG, using ImageIcon, was much faster. We tried it, and the thread was right. It even produced an image we could open in a photo editor. The problem is that it didn't work with our image-processing code and libraries, and we don't know why; evidently ImageIO applies some default we are unaware of. A one-second time is acceptable, so this isn't a blocker, but it's an interesting conundrum.
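For the curious, the "old way" looks like this (a sketch; ImageIcon decodes through the AWT Toolkit rather than ImageIO, which is what the thread credited for the speedup, and the file name is made up):

    import java.awt.Graphics2D;
    import java.awt.Image;
    import java.awt.image.BufferedImage;
    import javax.swing.ImageIcon;

    public class OldWayLoad {
        public static void main(String[] args) {
            // ImageIcon's constructor blocks until the image is loaded.
            Image image = new ImageIcon("frame.jpg").getImage();

            // Our libraries wanted a BufferedImage, so the pixels get copied
            // over; differences introduced here (e.g. the color model) are one
            // guess at why the result didn't work with our processing code.
            BufferedImage buffered = new BufferedImage(
                image.getWidth(null), image.getHeight(null),
                BufferedImage.TYPE_INT_RGB);
            Graphics2D g = buffered.createGraphics();
            g.drawImage(image, 0, 0, null);
            g.dispose();
        }
    }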

Another interesting CPU note: we tried compiling ImageMagick. It took over three hours on the Pi; by contrast, it took 2.5 minutes on a Mac.

maker faire vs toastmasters – co-running an event

Over the past week and a half, I co-ran two events. One was the NYC FIRST robotics booth at Maker Faire (see picture). The other was an area contest for Toastmasters.

Since these two events were so close to each other, I spent some time comparing the two experiences.

Overview of the events

Maker Faire: Norm Sutaria and I have run the NYC FIRST robotics booth at Maker Faire for the past three years.  Our booth is 20 x 30 feet and we coordinate robotics teams from elementary school through high school to show their robots.

Toastmasters contest: Toastmasters holds speech contests where contestants from multiple clubs square off.  Since our area only has four clubs, I ran the contest with another area governor to make it a bigger event.

Ribbon

Our booth won an Editor's Choice award at Maker Faire. That is really cool and exciting. Toastmasters gives out ribbons for every speech and certificates of appreciation for helping at a contest. I don't find those motivating because you get them frequently, regardless of whether you did a good job. The Maker Faire ribbon actually feels special. I don't know how many were given out, but we didn't win one the last two years, which makes this one special for us!

Delegating

For Maker Faire, Norm and I get to practice delegating, both to each other (playing to the other person's strengths) and to the teams. We are both good at delegating to students and not trying to do everything ourselves. For planning, it was more a matter of each claiming the work we wanted to do or were best at than of delegating. There was some level of needing to trust the other person at the event, though. It's important to eat and take breaks knowing the other is in charge! And for me, it meant remembering to ask for help lifting things when I need it and not worrying about "pulling my weight" in that specific area. I do lots of other things!

For Toastmasters, we split the planning by event. I did the lion's share of the planning, organizing, and running this time, and the other area governor will next time. As a result, less delegating went on. There was some, of course, since a contest has a number of contest officials, and I did seek volunteers for certain parts.

Contingency planning

The first two years, our Maker Faire rain plan was "it better not rain." This year, rain was likely, so we came up with an actual rain plan. We announced it both mornings and used it Sunday. (The plan was to cover the electronics with tarps and painter's plastic, have some students stay under the tent, and encourage most to go into the Hall of Science.)

For Toastmasters, I thought about the things likely to go wrong and had an extra judge on hand, along with a plan for how to get more if needed. For the most part, the plan was "figure out how to run with it" if things went wrong, which worked out just fine. I think that's because a Toastmasters speech contest is a lot more predictable.

Interestingly, this shows a difference between volunteer work and paid work.  In the business world, I don’t typically plan to wing it if something goes wrong!

Google Forms

On the tech side, I used Google Forms to organize both events: for attendance lists in both cases, and for collecting data, listing teams, etc. for Maker Faire.

Where I grew the most

Both FIRST robotics and Toastmasters help with soft skills and leadership skills. Toastmasters does it more obviously; it's in their mission. FIRST robotics "tricks" students into learning about teamwork and leadership through the "distraction" of a robot. It does the same for the mentors! I definitely stretch myself and learn the most at the Maker Faire event: partially because I talk to a lot of people I don't know and give the "FIRST pitch," partially because running the booth is a lot more work, and partially because a public event is a lot less predictable than a Toastmasters contest.

Both work, though. If you tell a techie he or she will spend a weekend practicing soft skills, you aren't likely to get a good reaction. Throw in robots, or the need to get something done, and it changes the picture completely.

cropping video fast for dummies on a mac

Two years ago, I wrote about cropping video fast for dummies on Windows. I now need to do basically the same thing on the Mac. This time is a little simpler, as I only need one continuous segment cropped. And I have more experience: I've done it once before 🙂 on a different operating system. However, I still don't have any special software.

Where I started

The original video is 2 minutes and 44 seconds long. I want a clip of about 5 seconds showing the robot shooting a basket.

How I did it

  1. Learned that I do have video editing software (iMovie) that came with the Mac.
  2. Used ClipNabber to download the YouTube video. I had to click "ClipNabber Classic" to get to the download screen, as the first screen promotes some Mac software to download; since I don't do this often, I didn't feel the need to install anything. This downloaded the clip as an .mp4 file.
  3. Downloaded Squared to convert from .mp4 to something iMovie can import. (The Squared beta lets you download directly from YouTube, but I had already downloaded the video.) Opened the .mp4 in it and chose Export to DV. The conversion took less than a minute.
  4. In iMovie, chose File > Import > Movies.
  5. iMovie automatically splits the video into short thumbnails. Drag the one(s) you want to the top. It's cool because you can select a range, so this serves as a rough crop. You can also join clips that way.
  6. Clicked on the point where I wanted the subclip to start and chose Split. Repeated for the end of the subclip.
  7. Right-clicked the video and chose Detach Audio, then selected the purple audio track and chose Cut.
  8. Chose Share > Export Movie.

Converting to Flash

I was asked to provide a Flash version of my 4 seconds of video. There is software you can download to do this, but I didn't want to install something (a trial version) that I'd only use once. Another option was to upload the clip to YouTube, which is what I did, then go back to ClipNabber to download it as an .flv (Flash) file.

How did it work?

This process was better than the Windows way (which lacked a real editor). iMovie is impressive.

The final product

The completed video shows what I wanted. It was also easier to get rid of the sound this time, which is good because I won't control the viewers' machines.