I'm interested in how our genes construct the brain to guide our behavior. What better way to study this than by simulating "behavioral genetics" in robots?
To construct an intelligent robot, you need to provide it with the following:
- basic senses and prime directives
- ability to recognize patterns in its senses (environment)
- ability to predict future patterns, and use prediction to drive its own behavior
The robot's senses represent what's happening both outside and inside itself. If it sees food, that's external. If it moves its muscles or feels full after eating, that's internal. The robot's prime directive tells it "what's important", and we'll assume "stomach fullness" is an important outcome of its behavior.
First, the robot must recognize patterns in its senses. Stacked neural networks (and eventually memristors) can be used for this purpose. The robot's senses train the network, and, once trained, the network recognizes patterns, even those not identical with the training set.
Second, the robot must predict future patterns. This is not as hard as it seems. When trained in the previous step, the stacked neural network should be presented not only with the present state of the senses, but also with several past states, simultaneously. In other words, the neural network makes no distinction between past and present, as it associates inputs from its external senses and its internal senses (e.g. muscle tension and how full its stomach is).
We can simulate all this virtually in a computer program (no need to build a physical robot):
- Let's assume that "food" falls randomly from the sky, and obeys certain simplified laws of physics, such as the steady rate of fall (one row per clock tick). A single piece of food appears at a time, and it is dropped at a random angle (diagonal left, right, or straight) from the top row. Food bounces off the sides but never bounces upward. The robot's mouth can only move one space at a time (left or right, on the bottom row) per clock tick. Only if the mouth arrives at the same position as the food (when it reaches the bottom row) can the robot eat it.
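The rules above can be sketched as a small simulation. This is a minimal illustration, not a definitive implementation: the class and method names (`FoodWorld`, `tick`) are my own, and the grid size follows the 8 x 8 field described below.

```python
import random

GRID = 8  # the 8 x 8 visual field


class FoodWorld:
    """Minimal sketch of the falling-food world (names are illustrative)."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.food_row = 0
        self.food_col = self.rng.randrange(GRID)          # appears on the top row
        self.food_dx = self.rng.choice([-1, 0, 1])        # diagonal left, straight, or right
        self.mouth = GRID // 2                            # mouth on the bottom row

    def tick(self, move):
        """Advance one clock tick; `move` is -1 (left), 0 (stay), or +1 (right).
        Returns True if the robot eats the food on this tick."""
        # The mouth moves at most one space per tick and stays on the grid.
        self.mouth = max(0, min(GRID - 1, self.mouth + move))
        # Food falls one row per tick and drifts diagonally.
        self.food_row += 1
        self.food_col += self.food_dx
        # Food bounces off the sides but never upward.
        if self.food_col < 0:
            self.food_col, self.food_dx = 0, 1
        elif self.food_col > GRID - 1:
            self.food_col, self.food_dx = GRID - 1, -1
        # Food reaching the bottom row is eaten only if the mouth is there;
        # either way, a new piece then drops from the top row.
        if self.food_row == GRID - 1:
            eaten = (self.food_col == self.mouth)
            self.food_row = 0
            self.food_col = self.rng.randrange(GRID)
            self.food_dx = self.rng.choice([-1, 0, 1])
            return eaten
        return False
```

One piece of food exists at a time, matching the rule above; a new piece spawns only after the previous one reaches the bottom row.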
- From the diagram, you can see that the robot has (8 x 8) + 8 = 72 inputs. The external inputs (the visual sense, an 8 x 8 grid) take up most of the inputs, but there are also internal inputs (shown on the bottom row) such as: the position of the mouth (used to catch falling food), incremental mouth movement (whether the left or right mouth muscle is contracted), and how full the robot's stomach is.
How can we make the robot learn? By training its neural network brain using the sensory inputs. But not only current sensory inputs: we must combine the present inputs and past inputs into a single neural network. If we assume the robot can remember the inputs from each of the last 8 clock ticks, the total number of inputs presented to the network will be 72 x 8 = 576 inputs. Past and present are thus combined into a single network. At the next clock tick, the oldest 72 "remembered inputs" are dropped, and the current 72 are added to the new training set.
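The sliding window of remembered inputs can be sketched as follows. This is an assumed data structure, not part of the original text; it just makes the 72 x 8 = 576 arithmetic concrete.

```python
from collections import deque

INPUTS_PER_TICK = 72   # (8 x 8 visual grid) + 8 internal inputs
HISTORY = 8            # remember the last 8 clock ticks


class SenseHistory:
    """Sketch of the sliding window that merges past and present senses."""

    def __init__(self):
        # Start with 8 ticks of blank senses (all zeros).
        self.window = deque(([0.0] * INPUTS_PER_TICK for _ in range(HISTORY)),
                            maxlen=HISTORY)

    def observe(self, senses):
        """Add the current 72 inputs; the oldest 72 fall off automatically."""
        assert len(senses) == INPUTS_PER_TICK
        self.window.append(list(senses))

    def network_input(self):
        """Flatten the window into the 72 x 8 = 576 inputs fed to the network."""
        return [x for tick in self.window for x in tick]
```

At each clock tick, `observe` drops the oldest 72 "remembered inputs" and adds the current 72, exactly as described above; `network_input` is what would be presented to the neural network for training.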
The prime directive is hard-wired into the robot. It simply states that the robot will be "happier" if its stomach is full. We should assume a certain rate of digestion, whereby the stomach becomes less full over time unless it consumes more food. Obviously, if the robot can be trained to move its mouth muscles back and forth to catch food from the sky, it will achieve greater "happiness". If it doesn't learn this quickly enough, it will starve.
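The digestion rule can be expressed as a one-line update to the stomach-fullness input. The specific rates here are illustrative assumptions, not values from the text.

```python
def update_stomach(fullness, ate, digestion_rate=0.05, meal_size=0.4):
    """Sketch of the prime directive's input signal: stomach fullness decays
    at a steady digestion rate each tick, and rises (capped at 1.0) when
    food is eaten. The rate and meal size are illustrative assumptions."""
    fullness = max(0.0, fullness - digestion_rate)  # digestion never goes below empty
    if ate:
        fullness = min(1.0, fullness + meal_size)   # eating fills the stomach
    return fullness
```

If the robot fails to catch food for long enough, fullness decays toward zero, which is the "starvation" outcome described above.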
At first, the robot moves its mouth muscles randomly, back and forth. But eventually, given the high weight (importance) assigned to the prime directive input (i.e. stomach fullness), the robot learns to predict how to move its mouth left or right (one space per clock tick) to anticipate where the food will land a few ticks in the future.
Most interestingly, the robot uses prediction (or anticipation) to guide its own muscles. It predicts where its mouth will be in the future, and this very act of prediction "causes" its muscles to contract and its mouth to move to catch the food (or play ping pong).
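One way to sketch prediction-driven behavior: score each candidate muscle movement by the network's predicted future stomach fullness, and contract the muscle whose prediction scores highest. The `predict(history, move)` interface is a hypothetical stand-in for the trained network, not something specified in the text.

```python
def choose_move(predict, history, candidates=(-1, 0, 1)):
    """Sketch of prediction guiding action: for each candidate mouth movement
    (left, stay, right), ask the hypothetical trained network `predict` for
    the expected stomach fullness a few ticks ahead, then pick the movement
    with the best predicted outcome."""
    return max(candidates, key=lambda move: predict(history, move))
```

In this framing, the prediction itself "causes" the muscle contraction: the action taken is simply the one whose predicted future best satisfies the prime directive.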
This simulated robot, although simplistic, demonstrates a primitive form of behavioral genetics. Humans are more complex, and our hard-wired "prime directives" are subtle (and no two people have exactly the same prime directives), but the principle should be the same.