If you’re a software person, sometimes it can seem like hardware is the source of all the world’s problems. Take, for example, the problem of getting a robot to navigate by dead reckoning.
Dead reckoning is a very simple navigation technique. It’s a lot like reading a pirate map: “Walk 50 paces, turn left, walk 10 paces, turn right, walk 30 more paces, and ‘X’ marks the spot. Arrr!”
In a software world, dead reckoning like that works really well. I once wrote a set of virtual reality programs for Iowa State’s C6 that included some “virtual robots” that used the technique. In software, you can program in a complex set of maneuvers, but the robot will always know its correct position and orientation.
You’ll often see dead reckoning techniques used in networked software – multiplayer games and so forth – to animate the avatars of remote players. The software thinks, “This character was going in such-and-such direction at such-and-such speed, so we’ll keep doing that until we get another update.” If the updates are laggy and the implementation not very sophisticated, you might see a character jump from place to place while moving, as the software corrects from its dead-reckoning guess about where the character is to the “actual” position received across the network. The stream of updates means that there’s no accumulation of error beyond the next update.
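The extrapolate-until-the-next-update idea can be sketched in a few lines. This is a minimal illustration, not any particular game engine's API; the struct and function names are mine, and a real client would smooth the correction rather than snapping.

```cpp
#include <cassert>
#include <cmath>

// Minimal dead-reckoning sketch for a remote player's avatar.
// Between network updates we extrapolate from the last known
// position and velocity; each update snaps the state back to
// the "actual" position, resetting any accumulated error.
struct AvatarState {
    double x, y;    // last reported position
    double vx, vy;  // last reported velocity (units per second)
    double t;       // timestamp of that report
};

// Guess where the avatar is at time `now`, assuming it kept
// moving in a straight line since the last update.
void predict(const AvatarState& s, double now, double& px, double& py) {
    double dt = now - s.t;
    px = s.x + s.vx * dt;
    py = s.y + s.vy * dt;
}

// A fresh update replaces the guess outright. This is the
// unsophisticated version that makes characters visibly jump;
// a smoother client would interpolate toward the new state.
void applyUpdate(AvatarState& s, double x, double y,
                 double vx, double vy, double t) {
    s = {x, y, vx, vy, t};
}
```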
Using dead reckoning in the real world is another matter, as I’ve seen with the BoE-Bot robot. I’ve been skimming through the online tutorials (they started with a lot of basic Arduino programming that I didn’t need), and the first substantive examples use dead reckoning to move the robot around. This is trickier than it sounds, for a variety of reasons.
The first reason is the servos that drive the wheels. Each of the servos is controlled by a digital pin on the Arduino. But it’s not just an on-or-off signal. The control signal is a pulse on that line, and the duration of that pulse tells the servo which direction to spin in, and how fast to spin. For example, a pulse lasting 1500 μs tells the servo to stay still (if it’s properly calibrated). In my testing so far, I’ve been using values between 1400 and 1600 μs, because that’s the range where the response is fairly linear.
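A helper that maps a signed speed onto that pulse range might look like the sketch below. The 1500 μs neutral point and the 1400–1600 μs band come from the post; the function name and the -100..+100 speed convention are my own invention, and on the robot the returned value would feed the servo pulse (for example via the Arduino Servo library's writeMicroseconds()).

```cpp
#include <cassert>

// Map a signed speed (-100 = full reverse ... +100 = full forward)
// onto the pulse widths described above: 1500 us holds the servo
// still, and roughly 1400-1600 us covers the usably linear range.
int speedToPulseUs(int speed) {
    if (speed > 100) speed = 100;    // clamp to the linear band
    if (speed < -100) speed = -100;
    return 1500 + speed;             // 1 us of pulse per unit of speed
}
```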
I say “fairly” linear because it’s an analog world out there. The servo response is really a curve, and the curve isn’t exactly symmetrical. Also, the two physical servos have slightly different response curves. It’s also possible that the strength of the robot’s batteries will affect performance, but I haven’t measured by how much.
Then there’s the environment. I don’t have a lot of large flat surfaces in my apartment. I don’t want the robot driving on tabletops until it’s smart enough not to drive off the edge. The kitchen linoleum has a square pattern to it that can catch wheels. The carpet is unpredictable and makes turning more difficult. All very tricky.
The last and worst problem with dead reckoning in the real world is that error accumulates. With no sensors hooked up (yet), the robot is driving blind, so there’s no way to receive a correction to the calculated position. With each movement and each turn, the gap between where the robot is, and where it thinks it is, grows. Orientation errors are especially pernicious – if the robot turns just 85° instead of 90°, it can quickly find itself wildly off course.
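The 85° example is easy to put numbers on with a quick simulation. This sketch (my own, not from the tutorials) drives an imaginary square lap of four 1-meter legs: with perfect 90° turns the path closes on itself, but with 85° turns the robot ends up roughly a quarter of a meter from where it started, after just one lap.

```cpp
#include <cassert>
#include <cmath>

// How far off does a "square" lap end up if every intended
// 90-degree turn actually comes out as `turnDeg` degrees?
// Dead reckoning has no way to notice, so the heading error
// compounds with each corner.
double lapError(double turnDeg, int legs, double legLength) {
    const double kDegToRad = 3.14159265358979323846 / 180.0;
    double x = 0.0, y = 0.0, heading = 0.0;  // start at origin, facing +x
    for (int i = 0; i < legs; ++i) {
        x += legLength * std::cos(heading);  // drive one leg
        y += legLength * std::sin(heading);
        heading += turnDeg * kDegToRad;      // then turn
    }
    return std::hypot(x, y);  // distance from the starting point
}
```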
Still, I felt like I wanted at least a basic demonstration, so I did a few things to try to make the dead reckoning performance of the robot more predictable. For example, I rewrote the subroutines for my individual maneuvers so that they gradually get the wheels up to speed, instead of just setting a flat value for a set period of time. I’m still undecided on whether that actually helped.
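The ramp-up idea boils down to stepping the pulse width toward its target a little at a time instead of jumping straight there. This is a hypothetical sketch of that logic, not the actual subroutines; the maneuver code would call something like this once per control tick.

```cpp
#include <cassert>

// Step the current pulse width toward the target, limiting how
// much it can change per tick so the wheels spin up gradually
// instead of lurching to full speed.
int rampToward(int currentUs, int targetUs, int maxStepUs) {
    int diff = targetUs - currentUs;
    if (diff > maxStepUs) diff = maxStepUs;    // cap acceleration
    if (diff < -maxStepUs) diff = -maxStepUs;  // cap deceleration
    return currentUs + diff;
}
```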
I also took advantage of the breadboard space on top of the BoE-Bot to add a pair of buttons to increment and decrement the amount of turning. That way I could adjust the turns on-the-fly, without bringing the robot back to the computer to reprogram it. This helped a lot, though I still had to do some manual adjustments because my left turns seem overall to be faster than my right turns.
Generally, I found that the robot’s right wheel moves slightly faster than its left, when both are set to the same value. Ultimately, I think I’m going to need some kind of speed mapping function in order to get really reliable navigation. As it is, the robot skews slightly to the left during its straightaways, and I adjusted the turns to compensate for that.
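A first cut at that speed-mapping function could be a simple per-wheel trim factor. The factor below is a placeholder I made up for illustration; a real value would come from measuring each servo's actual response, and a full mapping would account for the non-linear, asymmetric curves mentioned earlier.

```cpp
#include <cassert>
#include <cmath>

// Scale one wheel's speed command so both wheels actually move
// at the same rate. `trim` near 1.0 means the wheel is close to
// nominal; e.g. if the right wheel runs ~4% fast, drive it with
// trim = 0.96 while the left wheel gets the full command.
int trimmedPulseUs(int speed, double trim) {
    int offset = (int)std::lround(speed * trim);  // scaled speed
    if (offset > 100) offset = 100;    // stay in the linear band
    if (offset < -100) offset = -100;
    return 1500 + offset;
}
```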
The video shows the robot performing a squared-off figure-eight driving pattern, using both left and right turns. It’s never going to be perfect – the orientation accumulates a little bit of error with each turn – but for one lap it returns the robot pretty close to its starting point.
If you look closely, you can see that I’ve added the “whiskers” to the front of the robot. They aren’t hooked up to any code yet, but I’m definitely ready to try something with a little more feedback involved than this simple dead reckoning exercise.