Project Log

Software / Hardware / Music Projects.

IMU Software Tutorial [YouTube]

I recently ran across this short (13:15) video tutorial describing the process and math of figuring out your position and orientation from an IMU. It was a quick overview at just the right level of detail to connect a lot of the different concepts that I have been thinking about for the rover project.

For the rover, simply getting orientation information out of an AHRS algorithm isn't going to be enough. I also need acceleration values (in the inertial frame) with gravity subtracted out, to be able to plug into the Kalman filter. In essence, since the rover doesn't have wheel encoders, I'm using the IMU's acceleration to help do dead reckoning.
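As a sketch of that gravity-subtraction step (assuming the AHRS hands you a unit quaternion and a z-up inertial frame; the function names here are mine, not from any library):

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v from the body frame to the inertial frame
    using unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # Standard rotation matrix built from the quaternion (body -> inertial)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ v

def linear_accel(q, accel_body, g=9.81):
    """Inertial-frame acceleration with gravity subtracted.
    q comes from the AHRS (e.g. Madgwick); accel_body from the IMU.
    At rest a z-up accelerometer measures +g, so this returns ~zero."""
    accel_inertial = quat_rotate(q, accel_body)
    return accel_inertial - np.array([0.0, 0.0, g])
```

The output of `linear_accel` is what would feed the Kalman filter's motion update in place of wheel-encoder odometry.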

If you put that video together with an AHRS algorithm like Madgwick or Mahony, and then take a look at the coordinate transformation sections in the rather famous book "The Global Positioning System & Inertial Navigation," you'll get a pretty good overview of how my robot will know where it is and where it's pointing.


SparkFun GP-20U7 GPS PPS Note

I have this SparkFun GP-20U7 GPS Receiver. It seems to work pretty well. I'm changing around my approach to the autonomous robot, and now I need access to the PPS (Pulse-Per-Second) functionality of the unit. The PPS pad isn't broken out on this device - it's under a pile of red goop you have to scrape off, and is actually part of an unpopulated LED circuit. It's not immediately clear (to me, anyway) how to wire it up to a microcontroller to get the PPS signal.

I've referenced this blog post, which noted that "PPS is open-drain so a pull-up is required. If the LED is installed it is the pull-up."

I think that means the following hastily-drawn diagram applies: PPS diagram

At any rate, I was able to verify that the correct solder pad does some kind of pulse every second (when the device has a fix) with my world's-cheapest-eBay-kit oscilloscope. I don't really want to populate the LED/resistors (I don't have the SMD parts), but I believe I can just connect that "left" solder pad to a microcontroller input with an internal pull-up resistor, set an interrupt, and be good to go. I'm leaving this post here for reference, since there isn't much about this device online for electronics newbies like me.
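For reference, here's roughly how I expect that hookup to look in software on a Raspberry Pi. The pin number is a hypothetical choice, and the edge polarity assumes the open-drain line idles high through the pull-up and is dragged low for each pulse - verify that on your own unit with a scope:

```python
import time

PPS_PIN = 4  # BCM numbering; hypothetical -- use whichever pin you wire the pad to

def pps_edge(prev_level, level):
    """The PPS line is open-drain: with a pull-up it idles high and the
    pulse pulls it low, so the falling edge marks the top of the second."""
    return prev_level == 1 and level == 0

if __name__ == "__main__":
    # Hardware part -- only runs on a Raspberry Pi with RPi.GPIO installed.
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PPS_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # internal pull-up
    GPIO.add_event_detect(PPS_PIN, GPIO.FALLING,
                          callback=lambda ch: print("PPS", time.time()))
    while True:
        time.sleep(1)
```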

(Side note: PPS only pulses when the GPS has a good fix. This makes sense in retrospect, but it's a pain if you have to haul your junk to an open window on the second floor, for example, just to figure out which pad to use. Hypothetically, that is.)


GPS is weird and awesome and terrible

A very after-the-fact robot update from late 2016: GPS works. This is perhaps more awesome than it sounds.

First, if you want a good rant about how GPS communication is terrible, check out this article. It's a magnificent rant about how you never really get your true state or know what time a particular state refers to with GPS, due to a pretty nonstandard standard (NMEA), which itself was due to the "N" in "NMEA" standing for "Nautical." Boats just don't need the same data as the rest of us, it turns out.

Other fine folks have attempted to solve this problem, and one approach is a GPS/NMEA processing daemon for Linux called GPSD. This monitors the serial communication, processes it to standardize vendor differences and sentence-order differences, and then makes it all available as JSON over a local TCP socket. Very cool: you can poll this whenever you want. If you want to be notified the instant new data comes in, I think you may still be out of luck, however. I'm anticipating that my GPS error range will be larger than my timing error range, so it shouldn't matter too much. In my robot, GPS is not designed to be a precision sensor (yet).
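As a sketch of what consuming gpsd looks like, here's a minimal parser for gpsd's JSON TPV (time-position-velocity) reports. In practice you'd connect to gpsd on localhost:2947 and send a `?WATCH={"enable":true,"json":true}` command to start the stream, then feed each line through something like this:

```python
import json

def parse_tpv(line):
    """Parse one JSON sentence from gpsd. Return (lat, lon, time) for
    TPV reports that carry a position fix, else None (gpsd also emits
    SKY, VERSION, etc. on the same stream)."""
    msg = json.loads(line)
    if msg.get("class") != "TPV" or "lat" not in msg or "lon" not in msg:
        return None
    return msg["lat"], msg["lon"], msg.get("time")
```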

But GPS isn't just terrible, it's also awesome. I have code that runs on my robot pi to listen to the GPS, and convert that lat/lng info into a local x/y plane, and then run a Kalman filter on it, and push each successive state out to the telemetry web application (also served by the robot) over a Redis backplane. Eventually this will host a maps view of the current estimated position, error ranges, etc.
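The lat/lng-to-local-plane step can be done with a simple equirectangular projection around a fixed origin. This is a sketch of the idea under that assumption, not my exact rover code:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def latlng_to_xy(lat, lng, origin_lat, origin_lng):
    """Project lat/lng onto a local tangent plane (meters, x east / y north)
    around an origin, using the equirectangular approximation -- fine for
    the small areas a rover covers."""
    x = math.radians(lng - origin_lng) * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return x, y
```

The x/y output is what goes into the Kalman filter and out to the telemetry app.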

Next steps are to read from the Inertial Measurement Unit (IMU) and do sensor fusion to integrate that with the GPS data (in progress), which has its own set of fun challenges. Stay tuned.


AI for Robotics Final Project

AI4R's final project was a bit different from what you might expect... In fact, we were asked to track and predict the motion of an object that can barely be called a "robot" at all - a HEXBUG (Example on YouTube). Somebody at GT (presumably) constructed a wooden box for the robot, and then put a circular candle in the center as an obstacle. A video camera was mounted above the scene, and they put the hexbug in and recorded the results. Our job was to accept 60 seconds of this motion and then predict the hexbug's path over the following 2 seconds.

Hexbug Environment Setup

We were given both the video files and a CSV of extracted coordinates of the hexbug. This was real-world data, and the extracted coordinates were very messy, so teams were free to do their own analysis of the video. Also, given that this was a real experiment in the physical world, there might be irregularities in the environment that teams could exploit. Maybe there's a dip in the wood base that causes a right turn at some point more frequently than a left, for example. There were many such rabbit holes one could venture down...

There are two different tasks involved in this project. At first, it is a tracking problem with noisy input. Then, it becomes a prediction problem with no feedback loop. So on our team, we tackled each problem with a different approach.

Tracking the bot wasn't actually too hard - a standard Kalman filter can do the job, as the motion can be modeled as linear. The only trick is handling collisions - if you don't model these, your KF will average out over time to a nearly-zero dx/dy - no good! We approached the problem by applying some pretty naive "billiards-style" physics to the bot, and using that to generate a u (external force or motion control) vector if a collision was predicted to occur. This worked pretty well, even though our model of the physics was not great compared to what the hexbug actually did: it never went straight, got stuck in corners for an indeterminate amount of time, was modeled as a circle when it was really a rectangle, etc. Still, we were tracking the object well enough to move on, thanks to the Kalman filter incorporating these errors well.

Our predictions were going to be what we were graded on, so it made sense to focus more effort in this area. We entered the prediction stage with a good estimate of the hexbug's state, as output from the Kalman filter. Our first idea was to iteratively predict and handle collisions from that same KF. This served as our baseline prediction algorithm.
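The baseline predictor amounts to a constant-velocity propagation with naive wall bounces. A rough sketch (the box bounds and state layout here are simplified stand-ins, not our actual project code):

```python
def predict_path(x, y, dx, dy, steps, xmin, xmax, ymin, ymax):
    """Baseline predictor: iterate a constant-velocity motion model and
    apply naive billiards physics at the box walls (reflect velocity,
    clamp position back inside the box)."""
    path = []
    for _ in range(steps):
        x, y = x + dx, y + dy
        if x < xmin or x > xmax:   # bounce off a vertical wall
            dx = -dx
            x = min(max(x, xmin), xmax)
        if y < ymin or y > ymax:   # bounce off a horizontal wall
            dy = -dy
            y = min(max(y, ymin), ymax)
        path.append((x, y))
    return path
```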
It didn't do badly, but it was sensitive to small variations in state before a bounce, leading to big possible errors in course after the bounce.

Enter "Probabilistic Robotics," the textbook by Sebastian Thrun, Wolfram Burgard, and Dieter Fox that this course was based around. The common thread to all the ideas in the book is to admit that there is uncertainty in measurement and in the robot's belief about its state, and to incorporate that into the model and algorithms used to compute or predict state. Work with it, not against it, in other words.

Chapter five deals with robot motion, and suggests the idea of using a particle filter to predict future motion. Since the motion has some uncertainty associated with it, each particle moves to a slightly-different place after each motion step:

A figure from Probabilistic Robotics illustrating the dispersal of the position beliefs

Especially when there are obstacles in the environment, some of your particles may interact with them when others don't - and this possibility of interaction is then modeled. Neat trick. So again we can use that same bounce code from before to throw our particles at. We produced our predicted locations by simply averaging the locations of the particle cloud.

The belief particles interacting with an obstacle
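A rough sketch of the particle-based prediction, reusing the same naive bounce idea. The particle count, noise level, and box bounds are made-up placeholders, not our tuned values:

```python
import random

def predict_with_particles(x, y, dx, dy, steps, n=500, motion_noise=2.0,
                           xmin=0, xmax=640, ymin=0, ymax=480):
    """Spread n particles around the KF state estimate, move each with
    noisy constant-velocity motion, bounce them off the box walls, and
    average the cloud for each predicted position."""
    particles = [[x, y, dx, dy] for _ in range(n)]
    predictions = []
    for _ in range(steps):
        for p in particles:
            p[0] += p[2] + random.gauss(0, motion_noise)
            p[1] += p[3] + random.gauss(0, motion_noise)
            if not (xmin <= p[0] <= xmax):   # some particles hit a wall...
                p[2] = -p[2]                 # ...and bounce; others don't
                p[0] = min(max(p[0], xmin), xmax)
            if not (ymin <= p[1] <= ymax):
                p[3] = -p[3]
                p[1] = min(max(p[1], ymin), ymax)
        mx = sum(p[0] for p in particles) / n
        my = sum(p[1] for p in particles) / n
        predictions.append((mx, my))
    return predictions
```

Near a wall, part of the cloud bounces while the rest keeps going, so the averaged prediction reflects both possibilities instead of committing to one.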

How much uncertainty should we use for the motion? We developed the algorithm with a completely naive guess, then tried to get fancy and analyze the training data to get a better guess. We got a lower number, and plugged this in, and to my great surprise, got a worse error rate out from our predictions. This was totally baffling to me until I read the following quote from the intro to chapter five in Probabilistic Robotics:

"In theory, the goal of a proper probabilistic model may appear to accurately model the specific types of uncertainty that exist in robot actuation and perception. In practice, the exact shape of the model often seems to be less important that the fact that some provisions for uncertain outcomes are provided in the first place. In fact, many of the models that have proven most successful in practical applications vastly overestimate the amount of uncertainty. By doing so, the resulting algorithms are more robust to violations of the Markov assumptions, such as unmodeled state and the effect of algorithmic approximations." (p. 118)

I love it when stuff like that happens.

I'd like to share my project report here, but I'm not actually sure I can, for academic integrity reasons. They are some serious folks over at Georgia Tech. I'm fairly certain that just talking about approaches in the general sense on this blog is a-okay, however, so that might be all you get.


AI4R - The Robot Platform

My autonomous robot project will be based around an old toy RadioShack RC car I had as a kid. This thing was pretty good for a toy: it had adjustable (proportional) throttle and steering, instead of the cheaper and more common "all or nothing" approach. The motor looked pretty beefy and I had an RC battery and charger for it, so I thought things would be pretty smooth sailing.

Radioshack RC car base

Not exactly...

The battery was old and needed to be replaced. No problem, Amazon to the rescue.

The RC control circuit board is too confusing for me to re-use any of it. Oh well, no big deal, I'll get a motor driver board.

The steering servo has 6 wires. Modern servos have 3 wires - Vcc, Ground, and a PWM signal for the rotation target. The 6-wire version is apparently an old-style "brainless" servo, so you have to handle the control and feedback (via potentiometer) yourself. Oh, and did I mention it's not a standard size? And that the rather nice, intricate steering mechanics that attach to it won't mount to a standard servo without modification? Well, that's true.

There are two different ways to solve this problem, and I think which one you choose is indicative of what kind of engineer you are (or should be). You can modify the hardware and mechanical linkage and just swap in a new, modern servo, or you can take the old servo apart, figure out the wiring, put it back together, cut the wires, cannibalize a modern servo for the control circuitry, and (assuming the motor requirements and potentiometer resistance are the same) wire the old servo up to the new circuit. The mechanical engineer modifies the hardware and puts a new servo in, while the electrical engineer keeps the mechanics the same but swaps out the electronics. While it does seem like more work, I went with the EE solution, because I am definitely NOT a mechanical engineer.

Now I just need to pick out a compatible motor driver. But what are my motor requirements? I don't know - it's not like it has a datasheet or part number. To find out, I cut one of the wires running to the motor and measured the current with my multimeter. It is important to measure the current under load and at a stall, as well as to watch for any spikes in current (I think). Under no load, my motor pulls about 1 amp; under normal driving, I expect around 3 amps; and at a stall, it pulls about 7 amps. So I think that means I need a very beefy motor driver. This may not save me all that much money in the long run.

And that's as far as I got tonight. I'll post more about the overall goals and project plan in a few days... Oh, I guess I did learn one more thing:

Very messy.

I really need to clean my desk.


Artificial Intelligence for Robotics

It's officially the Fall 2016 semester of the GT OMS CS program, and this is a big one - Artificial Intelligence for Robotics. I've been looking forward to this class for years. The course is taught by Sebastian Thrun, who headed up the Stanford team that won the DARPA Grand Challenge and came in second in the DARPA Urban Challenge. The Grand Challenge robot "Stanley" found its way into the Smithsonian National Air and Space Museum for that, and Thrun himself went on to head up the Google self-driving car project. So the course pedigree is solid, to say the least.

I've been fascinated by autonomous robots since I watched the PBS Nova episode "The Great Robot Race." Looking back on it, this show is partly responsible for my being in graduate school at all right now. Programming had become less exciting - it seemed more like plumbing than like problem-solving, and I was wondering "what next?" (No offense to plumbers, actually. It's just not what I find exciting.) And then there was this inspiring show about solving these crazy fuzzy problems with this insane, wonderful fusion of hardware, software, and math and, well, I loved it. It confirmed to me that there was more to learn, and that's always good.

The course focuses on software, on the AI side - things like Bayes Rule and Kalman filters. There's a reason I work on the software side of the world and didn't become a mechanical engineer instead. But... There's no way I'm getting out of this class without building a physical robot. That's not my style. So I've got a couple projects ahead of me, it seems. Stay tuned for discussions of the hardware chosen, the project goals, and progress updates.