Setting up a Simple Machine Learning Experiment for the Artificial Neural Network

The basic back propagation algorithm demonstrated in the Arduino Neural Network Tutorial does a pretty effective job of learning to recognize patterns. The training portion of the demo cycles through a series of potential inputs and the desired outputs, and the network converges on a solution fairly quickly. Once solved, you can feed the network any of the input sets from the training data and it will give you the correct outputs. (I should probably say the network "generally" converges on a solution, because with some of the training sets I've been working with, it can still occasionally go into oscillation and never solve the set.)

I've arrived at a very simple concept for getting started applying the network to a robot machine learning scenario. The test robot has three infra-red (IR) reflective proximity sensors and two bump switches. For the experiment, the robot will use the bump switches to register collisions, and based on those collisions will learn to interpret the proximity sensors and avoid obstacles in the future.

As simple as the experimental setup sounds, right out of the gate the homebrew reflective sensors introduce some interesting nuances. These sensors work by emitting IR light and measuring the intensity of the light reflected back. Unlike ultrasonic sensors and prism-based IR detectors, which give predictable readings that correlate to distance, the readings from simple reflective sensors will vary greatly depending on background IR light, and on the color and other physical characteristics of the obstacle.

To account for these variations, the general method is to take two readings, one with the IR emitter on and the other with the emitter off, and then calculate the difference. The greater the difference, the stronger the reflection and therefore the closer the object. However, even with this method there is considerable variation across the spectrum of possible conditions, and calibrating the system so that it can reliably detect both the presence of an object and the absence of an object takes some doing. So in this regard, the IR sensors actually make for a great experiment. Neural networks are supposed to be useful for sorting through complex, sometimes noisy inputs where all of the possible scenarios aren't necessarily known in advance - and that's pretty much what we've got here.
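
As a concrete illustration, a minimal reading routine might look something like the sketch below. The pin assignments and settling delay are placeholders rather than the robot's actual wiring, and depending on how the phototransistor is wired the analog reading may fall rather than rise with stronger reflection, in which case the sign of the difference would need to be flipped.

// Differential reading for one homebrew reflective IR sensor.
// The pin numbers here are placeholders, not the robot's actual wiring.
const int IR_EMITTER_PIN = 7;    // digital pin driving the IR LED
const int IR_SENSE_PIN   = A0;   // analog pin reading the phototransistor

void setup() {
  pinMode(IR_EMITTER_PIN, OUTPUT);
  Serial.begin(9600);
}

// Take one reading with the emitter off and one with it on, and return
// the difference. A larger difference means a stronger reflection and
// therefore a closer object.
int readIrDifferential() {
  digitalWrite(IR_EMITTER_PIN, LOW);        // emitter off: background IR only
  delayMicroseconds(500);                   // let the sensor settle
  int ambient = analogRead(IR_SENSE_PIN);

  digitalWrite(IR_EMITTER_PIN, HIGH);       // emitter on: background plus reflection
  delayMicroseconds(500);
  int lit = analogRead(IR_SENSE_PIN);

  digitalWrite(IR_EMITTER_PIN, LOW);        // leave the emitter off between readings
  return lit - ambient;
}

void loop() {
  Serial.println(readIrDifferential());     // watch the raw differences while calibrating
  delay(100);
}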

The desired behavior for the robot is standard, beginner's obstacle avoidance. In this construct, the default behavior is to drive forward. If an obstacle is detected, the robot will execute one of three possible avoidance routines. If the obstacle is on the left, the robot will reverse slightly and turn slightly to the right. If the obstacle is on the right, the robot will reverse slightly and turn slightly to the left. If the obstacle is dead center, the robot will reverse further than when the obstacle is to one side or the other, and then make a hard left turn (turning left is arbitrary here - it could just as easily be right). After the avoidance routine is executed, the robot will resume its default behavior of driving forward unless and until another obstacle is detected.
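
In conventional, hand-coded form, that branching might look something like the sketch below. The motor helpers and the millisecond values are illustrative stand-ins for whatever the actual drive code turns out to be.

// Possible positions of a detected obstacle.
enum Obstacle { NONE, LEFT, CENTER, RIGHT };

// Hypothetical motor helpers - stand-ins for the real motor driver code.
void driveForward()    { /* set both motors forward */ }
void reverse(int ms)   { /* back up for ms milliseconds */ }
void turnLeft(int ms)  { /* pivot left for ms milliseconds */ }
void turnRight(int ms) { /* pivot right for ms milliseconds */ }

void avoid(Obstacle where) {
  switch (where) {
    case LEFT:             // obstacle on the left:
      reverse(300);        //   reverse slightly
      turnRight(200);      //   and turn slightly to the right
      break;
    case RIGHT:            // obstacle on the right:
      reverse(300);        //   reverse slightly
      turnLeft(200);       //   and turn slightly to the left
      break;
    case CENTER:           // obstacle dead center:
      reverse(600);        //   reverse further
      turnLeft(500);       //   then a hard left (left is arbitrary)
      break;
    case NONE:
      break;               // nothing to avoid
  }
  driveForward();          // resume the default behavior
}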

For version 1.0 of the neural network robot, the learning process will be very structured and constrained. The environment will have large, regularly shaped walls and objects. The untrained robot will wander about bumping into things for a set period of time, probably on the order of five minutes. During this time the bump switches will trigger the built-in avoidance routines and all sensor data will be saved. After the allotted time has passed, the robot will stop, and using the saved data the neural network will be trained to recognize when the IR sensors indicate an obstacle. The robot will then return to its wandering behavior, but now the IR inputs will be fed through the network, and when an obstacle is detected the network will trigger the appropriate avoidance routine.
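
Stripped of the details, the version 1.0 run breaks down into three phases, roughly like the outline below. The function names and the five-minute constant are placeholders; the interesting work happens inside the training and avoidance routines.

// Rough outline of the version 1.0 run. The helpers are placeholders
// standing in for the real routines.
void wanderAndRecord()         { /* bump-triggered avoidance, save all sensor readings */ }
void stopMotors()              { /* halt the robot */ }
void trainNetworkOnSavedData() { /* run back propagation on the saved readings */ }
void wanderWithNetwork()       { /* feed IR readings through the trained network */ }

const unsigned long COLLECTION_PERIOD_MS = 5UL * 60UL * 1000UL;  // roughly five minutes
bool trained = false;

void setup() { }

void loop() {
  if (!trained) {
    if (millis() < COLLECTION_PERIOD_MS) {
      // Phase 1: untrained wandering - collisions trigger the built-in
      // avoidance routines and every set of sensor readings is saved.
      wanderAndRecord();
    } else {
      // Phase 2: stop and train the network on the collected data.
      stopMotors();
      trainNetworkOnSavedData();
      trained = true;
    }
  } else {
    // Phase 3: resume wandering, now detecting obstacles with the
    // trained network instead of waiting for the bump switches.
    wanderWithNetwork();
  }
}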

Once this major milestone is achieved, the goal of future work will be to extend the learning process indefinitely such that after the initial training the robot continues to gather data, learn from mistakes, and refine its behaviors accordingly. The environment will also be broadened to be less regular and predictable.

Alongside the application of the back propagation algorithm itself, which here is truly just a repackaging of the standard routines for the Arduino platform, developing strategies for converting sensor data to training sets is one of the primary tasks at hand.

In version 1.0 the concept is straightforward and the execution is manageable.

For inputs, the saved IR sensor readings can be copied directly to the input array used in the network training routines. (As a technical note, the Arduino's analog inputs return a number between 0 and 1023, which I'm dividing by 1024 to give a floating point number between 0 and 1.)
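
That copying step amounts to little more than a loop, something along these lines (the array names are just for illustration):

const int NUM_SENSORS = 3;           // three IR proximity sensors
int   rawReadings[NUM_SENSORS];      // saved analogRead() values, 0-1023
float networkInputs[NUM_SENSORS];    // inputs to the network, 0.0-1.0

// Copy one set of saved readings into the network's input array,
// scaling the 10-bit ADC values down to the 0-1 range.
void loadInputs() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    networkInputs[i] = rawReadings[i] / 1024.0;
  }
}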

Capturing the desired outputs is only slightly more complicated. Because the goal is to avoid collisions, by the time the bump switches have registered a collision it is "too late." Therefore the algorithm needs to be backward-looking and train the network to trigger avoidance based on sensor readings just prior to the collision. So in version 1.0, when there is a collision in the current set of sensor readings, an avoidance trigger is placed in the training data for both the current set and the prior set.
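
One simple way to implement that backward-looking labeling is to keep the samples in collection order and, on a collision, flag both the newest sample and the one before it. The structure and buffer size below are illustrative only:

const int MAX_SAMPLES = 100;   // illustrative buffer size

// One row of training data: the normalized IR readings plus the target output.
struct Sample {
  float ir[3];         // the three normalized IR readings
  bool  avoidTarget;   // desired output: should these readings trigger avoidance?
};

Sample samples[MAX_SAMPLES];
int sampleCount = 0;

// Record one set of readings. If the bump switches registered a collision,
// flag the current row AND the previous row as avoidance triggers, so the
// network learns from the readings taken just before the impact.
void recordSample(float ir0, float ir1, float ir2, bool collided) {
  if (sampleCount >= MAX_SAMPLES) return;   // buffer full

  samples[sampleCount].ir[0] = ir0;
  samples[sampleCount].ir[1] = ir1;
  samples[sampleCount].ir[2] = ir2;
  samples[sampleCount].avoidTarget = collided;

  if (collided && sampleCount > 0) {
    samples[sampleCount - 1].avoidTarget = true;   // the backward-looking label
  }
  sampleCount++;
}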

Looking to future versions and the goal of perpetual, unsupervised learning in an unconstrained environment, things get considerably more complicated. One problem will be that as time passes it will no longer be practical to have a training set that includes every set of sensor readings collected over time. Another problem will be that in a less constrained environment, there are more likely to be confusing or conflicting sensor readings. For example, a near miss of an object might give an IR reading nearly identical to that of an actual collision, or a low object might hit the bumper without showing up on the IR sensors at all.

Identifying the problem areas with unsupervised learning and developing solutions will obviously benefit from data collected once the robot platform is fully operational. The hope will be to arrive at universal, modular strategies that can be applied not only to this robot in less constrained environments, but also to more capable robots with additional sensors and more complex behaviors.

November 2, 2013



