Project Log

Software / Hardware / Music Projects.

AI for Robotics Final Project

AI4R's final project was a bit different than you might expect... In fact, we were asked to track and predict the motion of an object that can barely be called a "robot" at all - a HEXBUG (Example on YouTube). Somebody at GT (presumably) constructed a wooden box for the robot, and then put a circular candle in the center as an obstacle. A video camera was mounted above the scene, and they put the hexbug in and recorded the results. Our job was to accept 60 seconds of this motion and then predict the hexbug's path over the following 2 seconds.

Hexbug Environment Setup

We were given both the video files and a CSV of extracted coordinates of the hexbug. This was real-world data, and the extracted coordinates were very messy, so teams were free to do their own analysis of the video. Also, given that this was a real physical experiment in the physical world, there might be irregularities in the environment that teams could exploit. Maybe there's a dip in the wood base that causes a right turn at some point more frequently than a left, for example. There were many such rabbit holes one could venture down...

There are two different tasks involved in this project. At first, it is a tracking problem with noisy input. Then, it becomes a prediction problem with no feedback loop. So our team tackled each problem with a different approach.

Tracking the bot wasn't actually too hard - a standard Kalman filter can do the job, as the motion can be modeled as linear. The only trick is handling collisions - if you don't model these, your KF will average out over time to a nearly-zero dx/dy - no good! We approached the problem by applying some pretty naive "billiards-style" physics to the bot, and using that to generate a u (external force or motion control) vector whenever a collision was predicted to occur. This worked pretty well, even though our model of the physics was not great compared to what the hexbug actually did: it never went straight, got stuck in corners for an indeterminate amount of time, was modeled as a circle when it was really a rectangle, etc. Still, we were tracking the object well enough to move on, thanks to the Kalman filter incorporating these errors well.

Our predictions were what we would actually be graded on, so it made sense to focus more effort in this area. We entered the prediction stage with a good estimate of the hexbug's state, as output from the Kalman filter. As a baseline, we iteratively ran the predict step of that same KF, handling collisions along the way. This didn't do badly, but it was sensitive: small variations in state before a bounce led to big errors in course after the bounce.

Enter "Probabilistic Robotics," the textbook by Sebastian Thrun, Wolfram Burgard, and Dieter Fox that this course was based around. The common thread to all the ideas in the book is to admit that there is uncertainty in measurement and in the robot's belief about its state, and to incorporate that uncertainty into the models and algorithms used to compute or predict state. Work with it, not against it, in other words. Chapter five deals with robot motion, and suggests the idea of using a particle filter to predict future motion. Since the motion has some uncertainty associated with it, each particle moves to a slightly different place after each motion step:

A figure from Probabilistic Robotics illustrating the dispersal of the position beliefs

Especially when there are obstacles in the environment, some of your particles may interact with them when others don't - and this possibility of interaction is then modeled. Neat trick. So once again we could throw our particles at that same bounce code from before. We produced our predicted locations by simply averaging the locations of the particle cloud.

The belief particles interacting with an obstacle
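
To make that concrete, here is a minimal sketch of the prediction loop - not our actual project code. It seeds a particle cloud from the Kalman filter's state estimate, propagates each particle with noisy constant-velocity motion plus the naive billiards-style bounces, and averages the cloud at each step. The arena bounds, noise level, and [x, y, dx, dy] state layout are illustrative assumptions, and only the walls are handled here (the circular obstacle would get similar treatment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed arena bounds in pixels; the real values came from the video.
X_MIN, X_MAX, Y_MIN, Y_MAX = 0.0, 640.0, 0.0, 480.0
MOTION_NOISE = 2.0  # std dev in px/frame - deliberately generous

def bounce(p):
    """Naive billiards-style reflection off the walls, in place."""
    if p[0] < X_MIN or p[0] > X_MAX:
        p[2] = -p[2]                         # flip dx
        p[0] = min(max(p[0], X_MIN), X_MAX)
    if p[1] < Y_MIN or p[1] > Y_MAX:
        p[3] = -p[3]                         # flip dy
        p[1] = min(max(p[1], Y_MIN), Y_MAX)

def predict_path(kf_state, n_steps=60, n_particles=500):
    """kf_state = [x, y, dx, dy], the tracking Kalman filter's estimate."""
    particles = np.tile(np.asarray(kf_state, dtype=float), (n_particles, 1))
    path = []
    for _ in range(n_steps):
        # Constant-velocity motion with per-particle noise, so the cloud disperses.
        particles[:, 0] += particles[:, 2] + rng.normal(0, MOTION_NOISE, n_particles)
        particles[:, 1] += particles[:, 3] + rng.normal(0, MOTION_NOISE, n_particles)
        for p in particles:  # rows are views, so bounce() mutates the cloud
            bounce(p)
        # The predicted location is just the mean of the cloud.
        path.append(particles[:, :2].mean(axis=0))
    return np.array(path)

predicted = predict_path([320.0, 240.0, 4.0, -3.0])  # hypothetical KF output
```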

How much uncertainty should we use for the motion? We developed the algorithm with a completely naive guess, then tried to get fancy and analyzed the training data to get a better one. The analysis gave a lower number; we plugged it in and, to my great surprise, got a worse error rate from our predictions. This was totally baffling to me until I read the following quote from the intro to chapter five of Probabilistic Robotics:

"In theory, the goal of a proper probabilistic model may appear to accurately model the specific types of uncertainty that exist in robot actuation and perception. In practice, the exact shape of the model often seems to be less important that the fact that some provisions for uncertain outcomes are provided in the first place. In fact, many of the models that have proven most successful in practical applications vastly overestimate the amount of uncertainty. By doing so, the resulting algorithms are more robust to violations of the Markov assumptions, such as unmodeled state and the effect of algorithmic approximations." (p. 118)

I love it when stuff like that happens.
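
For the curious, the training-data analysis itself is simple. Here's a sketch of one way to estimate the motion noise from the extracted coordinates - the filename, column layout, and 2x inflation factor are all assumptions for illustration:

```python
import numpy as np

# Assumed: a CSV with one (x, y) centroid per frame, like the one we were given.
xy = np.loadtxt("hexbug_training.csv", delimiter=",")

vel = np.diff(xy, axis=0)      # per-frame velocity
accel = np.diff(vel, axis=0)   # per-frame change in velocity
sigma = accel.std()            # a naive motion-noise estimate, px/frame

print(f"estimated motion noise: {sigma:.2f}")
print(f"deliberately inflated:  {2.0 * sigma:.2f}")  # the version that wins
```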

I'd like to share my project report here, but I'm not actually sure I can for academic integrity reasons. They are some serious folks over at Georgia Tech. I am fairly certain that just talking about approaches in the general sense on this blog is a-okay, however, so that might be all you get.

   SCHOOL    SOFTWARE    AI4R    ROBOTICS   

AI4R - The Robot Platform

My autonomous robot project will be based around an old toy Radioshack RC car I had as a kid. This thing was pretty good for a toy, though: it had adjustable (proportional) throttle and steering, instead of the cheaper and more common "all or nothing" approach. The motor looked pretty beefy and I had an RC battery and charger for it, so I thought things would be smooth sailing.

Radioshack RC car base

Not exactly...

The battery was old and needed to be replaced. No problem, Amazon to the rescue.

The RC control circuit board is too confusing for me to re-use any of it. Oh well, no big deal, I'll get a motor driver board.

The steering servo has 6 wires. Modern servos have 3 wires - Vcc, Ground, and a PWM signal for the rotation target. The 6-wire version is apparently an old-style "brainless" servo, so you have to handle the control and feedback (via potentiometer) yourself. Oh, and did I mention it's not a standard size? And that the rather nice, intricate steering mechanics that attach to it won't mount to a standard servo without modification? Well, that's true.

There are two different ways to solve this problem, and I think which one you choose is indicative of what kind of engineer you are (or should be). You can modify the hardware and mechanical linkage and just swap in a new, modern servo, or you can take the old servo apart, figure out the wiring, put it back together, cut the wires, cannibalize a modern servo for its control circuitry, and (assuming the motor requirements and potentiometer resistance are the same) wire the old servo up to the new circuit. The mechanical engineer modifies the hardware and puts a new servo in, while the electrical engineer keeps the mechanics the same but swaps out the electronics. While it does seem like more work, I went with the EE solution, because I am definitely NOT a mechanical engineer.
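
Once the transplant is done, the rebuilt servo takes commands like any modern 3-wire unit: a 50 Hz PWM signal whose 1 to 2 ms pulse width sets the target angle. A minimal sketch, assuming (purely for illustration - I'm not committing to a controller here) a Raspberry Pi with the RPi.GPIO library and an arbitrary pin:

```python
import time
import RPi.GPIO as GPIO

SERVO_PIN = 18  # assumed wiring, BCM numbering

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # 50 Hz -> 20 ms period
pwm.start(7.5)                 # 1.5 ms pulse, roughly centered

def set_angle(deg):
    """Map 0-180 degrees to a 1-2 ms pulse (5-10% duty at 50 Hz)."""
    pwm.ChangeDutyCycle(5.0 + (deg / 180.0) * 5.0)

set_angle(45)    # steer one way
time.sleep(0.5)
set_angle(90)    # recenter
pwm.stop()
GPIO.cleanup()
```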

Now I just need to pick out a compatible motor driver. But what are my motor requirements? I don't know - it's not like it has a datasheet or part number. To find out, I cut one of the wires running to the motor and measured the current on my multimeter. It is important to measure the current under load and at a stall, as well as to watch for any spikes in current (I think). Under no load, my motor pulls about 1 amp, but under normal driving, I expect around 3 amps, and under a stall, it pulls about 7 amps. So I think that means I need a very beefy motor driver. This may not save me all that much money in the long run.
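
Back-of-the-envelope, sizing the driver from those measurements looks something like this - the 1.5x safety margin is just a rule of thumb I'm assuming, not anything from a datasheet:

```python
# Rough motor driver sizing from the multimeter measurements above.
driving_amps = 3.0  # expected under normal driving
stall_amps = 7.0    # measured at stall

MARGIN = 1.5        # rule-of-thumb headroom, not a spec
print(f"continuous rating wanted: >= {driving_amps * MARGIN:.1f} A")  # 4.5 A
print(f"peak/stall rating wanted: >= {stall_amps * MARGIN:.1f} A")    # 10.5 A
```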

And that's as far as I got tonight. I'll post more about the overall goals and project plan in a few days... Oh, I guess I did learn one more thing:

Very messy.

I really need to clean my desk.

   SCHOOL    SOFTWARE    AI4R    ROBOTICS    MAKE    BUILD LOG   

Artificial Intelligence for Robotics

It's officially the Fall 2016 semester of the GT OMS CS program, and this is a big one - Artificial Intelligence for Robotics. I've been looking forward to this class for years. The course is taught by Sebastian Thrun, who headed up the Stanford team that won the DARPA Grand Challenge and came in second in the DARPA Urban Challenge. The Grand Challenge robot "Stanley" found its way into the Smithsonian National Air and Space Museum for that, and Thrun himself went on to head up the Google self-driving car project. So the course pedigree is solid, to say the least.

I've been fascinated by autonomous robots since I watched the PBS Nova episode "The Great Robot Race." Looking back on it, this show is partly responsible for my being in graduate school at all right now. Programming had become less exciting - it seemed more like plumbing than like problem-solving, and I was wondering "what next?" (No offense to plumbers, actually. It's just not what I find exciting.) And then there was this inspiring show about solving these crazy fuzzy problems with this insane, wonderful fusion of hardware, software, and math and, well, I loved it. It confirmed to me that there was more to learn, and that's always good.

The course focuses on software, on the AI side - things like Bayes' Rule and Kalman filters. There's a reason I work on the software side of the world and didn't become a mechanical engineer instead. But... There's no way I'm getting out of this class without building a physical robot. That's not my style. So I've got a couple of projects ahead of me, it seems. Stay tuned for discussions of the hardware chosen, the project goals, and progress updates.

   SCHOOL    SOFTWARE    AI4R    ROBOTICS   

Computational Photography Portfolio

It's all over but the grading for my Computational Photography class, so I thought I would share the final portfolio PDF I put together. The document showcases all the assignments from the course, and summarizes the goal for each, so even if you don't know much about CP, it should at least be a little bit interesting.

Download the PDF (1.7 MB) if you're interested.

(Academic Integrity Note: Since the portfolio is a personal "highlight reel" of individual projects, each of which contained its own code and report write-up, and since those individual projects are not themselves shared, the portfolio alone is not useful for potential cheating. Given that, I made the determination that it is allowed to share.)

   SCHOOL    SOFTWARE    COMPUTATIONAL PHOTOGRAPHY   

Computational Photography - Depth from Stereo Images

I am just about at the end of my 4th class in the GT OMS CS program - Computational Photography. This was my first summer semester; I took a two-week vacation in the middle of it and had some storm damage to my house along the way, so I was not able to give the class as much concentration as it probably deserved. I don't think I'm going to do a summer course again - I just can't devote the time to it that I would like.

For an independent project in the class, I implemented a depth-mapping algorithm to extract depth from a stereo pair of images. I actually had three implementations: one from scratch, one from an OpenCV book, and one from the OpenCV library itself. I spent a good deal of time comparing and tuning them to build up an intuition of what the parameter values should be for various scenes.

As an example, this is one of the input images in a stereo pair:

Middlebury Adirondack Image

And here is the depth map (technically disparity, not depth) I computed for it, using a normalized cross-correlation-based method:

NCC Disparity of the Adirondack Image
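
The core of the NCC approach (following the block-matching idea from Solem's book, credited below) is simple: for each pixel in the left image, slide a window along the same row of the right image and keep the disparity whose window correlates best. A simplified, brute-force sketch - the window size and disparity range are illustrative parameters, not the tuned values from my project:

```python
import numpy as np

def ncc_disparity(left, right, max_disp=64, win=7):
    """Block matching on a rectified grayscale pair: the disparity with the
    highest normalized cross-correlation along the same (epipolar) row wins."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
            p = patch - patch.mean()
            p_norm = np.linalg.norm(p)
            best_score, best_d = -1.0, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float64)
                c = cand - cand.mean()
                denom = p_norm * np.linalg.norm(c)
                if denom == 0:
                    continue  # flat patch, correlation undefined
                score = (p * c).sum() / denom
                if score > best_score:
                    best_score, best_d = score, d
            disp[y, x] = best_d
    return disp
```

(This version is painfully slow - OpenCV's StereoBM is orders of magnitude faster - but writing the brute-force loop first is what built my intuition for the parameters.)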

I created some videos of the output as I cycled through each algorithm and parameter value. They are kind of fun to watch, and more than a little bit trippy.

There are many more details and images in the full presentation PDF, which I'm not sharing here for academic integrity reasons. (Sorry!)

Credit for the stereo dataset shown goes to: D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nesic, X. Wang, and P. Westling. High-resolution stereo datasets with subpixel-accurate ground truth. In German Conference on Pattern Recognition (GCPR 2014), Münster, Germany, September 2014. Found at http://vision.middlebury.edu/stereo/data/scenes2014/

And credit for the normalized cross-correlation algorithm goes to: Solem, Jan Erik. Programming Computer Vision with Python. Sebastopol, CA: O'Reilly, 2012. Print.

   SCHOOL    SOFTWARE    COMPUTATIONAL PHOTOGRAPHY    3D