PICList Thread
'[OT] : Robot learned behavior'
2000\12\30@145104 by Dave W


Hi, this is Dave again.   If you read my previous message, I was looking for
info on servo control.  Thanks to all who helped me!   I am now using JAL to
control them, but I find JAL creates enormous code, and it could be much
smaller in assembler.

A0 & A1 - active-low feelers
B0 - turns servos on/off
B1 & B2 - direction of the two servos
B0-B2 go to another chip for servo timing

two feelers, two motors...adding 1 ir soon
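For reference, here is one way the current hit-wall-backup-turn reflex might map onto that pin assignment, written as a plain C function so it can be tried on a PC before going to JAL or assembler. The exact bit meanings (B1/B2 as per-servo reverse) and the chosen reactions are assumptions for illustration, not Dave's actual code:

```c
#include <stdint.h>

/* Bit positions on port B, per Dave's wiring notes. */
#define SERVOS_ON 0x01  /* B0: enable both servos  */
#define LEFT_REV  0x02  /* B1: left servo reverse  */
#define RIGHT_REV 0x04  /* B2: right servo reverse */

/* A0/A1 feelers are active low: a 0 bit means "pressed". */
uint8_t reflex(uint8_t porta)
{
    int left_hit  = !(porta & 0x01);
    int right_hit = !(porta & 0x02);

    if (left_hit && right_hit)          /* head-on: back straight up      */
        return SERVOS_ON | LEFT_REV | RIGHT_REV;
    if (left_hit)                       /* left bump: reverse left side   */
        return SERVOS_ON | LEFT_REV;
    if (right_hit)                      /* right bump: reverse right side */
        return SERVOS_ON | RIGHT_REV;
    return SERVOS_ON;                   /* no contact: both forward       */
}
```

Pulling the reaction into one pure function like this also makes it easy to swap in a learned lookup table later without touching the servo-timing side.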

My question is this: I recently found this page
and found it very interesting. (I think it might be down now due to some
bandwidth thing...)   I was wondering how hard it would be to port the code
for this experiment from the PIC16C54 chip he uses to a PIC16F84.
Specifically, what instructions are different, and so on.

I would just use the '84's internal EEPROM, as the environment of mine would
be less than 6 bits.

Also, does anyone else know of any other 'learned behavior' type of info?
I'm looking to make my robot smarter than hit-wall-backup-turnleft
intelligence.   Thanks again!

MSN ID:TenorDave
Yahoo ID:RydrDave


2000\12\30@173736 by Chris Carr

It would appear that everyone on PicList has accessed this URL as the
bandwidth limitation has been exceeded for this month. However, if you are
in the UK you have no doubt been watching the Royal Institution Christmas
Lectures which this year were delivered by Professor Kevin Warwick of
Reading University (He of the Bio-implant Controversy). I believe that
Lecture 5 delivered today (Saturday) actually demonstrated the behaviour
that you are seeking to attain.

Robot runs around banging into walls, develops behaviour which stops it
banging into walls, then communicates this knowledge to a second robot which
then powers up and proceeds to run around the area with the knowledge gained
from the first robot, i.e. it does not bang into the walls.

The basic sensors are Bumper Bars and Ultrasonics

The downside: the basic technology used is a neural network with around 50
nodes, which you have no chance of implementing on a (current) PIC.

You may wish to look at

(Yes, another prestigious programme lost to the BBC, well it is only
engineering and technology and therefore unimportant compared to the Arts
and the promotion of the EU )

For those of you in the UK who are interested and missed the programmes,
they are being repeated starting on the 2nd of January at 0430 hours GMT and
on the subsequent 4 days.



{Original Message removed}

2000\12\31@151019 by Peter L. Peres

>I'm looking to make my robot smarter than hit-wall-backup-turnleft

A simple neural network? Here is an outline of one version:

You have 4 input sensors (or combined states) I0..3, and 9 output states
O0..8 (2 motors, each forward/backward/stop). This could be, say,
four collision switches (forward, left, right, back), and two simple
motor drivers, each able to do forward/stop/reverse at constant speed.

Now for the code: Build a vector of length 10 with 1 data point in each:

unsigned char brain[10];

(it must live in RAM). (Homework: why 10 and not 16? Because certain
combinations cannot be present at the same time unless you lock the robot
in a tight box - you can detect this easily and make it turn off by
itself before the motor drivers fry).

Your robot code will have two distinct concurrent functions: 'run' and
'learn'. In 'run', the inputs I0..3 are combined into an index I and used to
look up an answer in the table: the output is O = brain[I];

Now, at the beginning you fill the array with random data. Start the robot
(run). Every time an input state change happens, the robot will do
something random. If this is bad, you get to press a 'bad' button. Then the
code will change the action of the last input state to something else
(brain[Ilast]++; undo(Ilast); will work, wrapping the entry at the number
of actions). The robot assumes that if nothing is sent, it did well. If
something is sent, it undoes the last action and performs the new one. If
you proceed like this, after a while the brain will have 'learned' (from
you) what is good and what is bad, and it will start doing whatever you
were teaching it (like not falling off the table) reasonably well.
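The run/learn scheme above can be sketched in portable C so it can be tested on a PC first; the state/action counts and the names brain_init, brain_run and brain_punish are illustrative assumptions, not a fixed API:

```c
#include <stdlib.h>

#define N_STATES  10  /* reachable input combinations ("why 10, not 16") */
#define N_ACTIONS 9   /* 2 motors x forward/stop/reverse                 */

static unsigned char brain[N_STATES];

/* Fill the table with random actions before the first run. */
void brain_init(unsigned int seed)
{
    int i;
    srand(seed);
    for (i = 0; i < N_STATES; i++)
        brain[i] = (unsigned char)(rand() % N_ACTIONS);
}

/* 'run': look up the action for the current input state. */
unsigned char brain_run(unsigned char state)
{
    return brain[state % N_STATES];
}

/* 'learn': the trainer pressed the 'bad' button, so step the entry
 * for the state we just reacted to on to the next action (on the
 * robot, the punished motion would also be undone here). */
void brain_punish(unsigned char last_state)
{
    unsigned char s = last_state % N_STATES;
    brain[s] = (unsigned char)((brain[s] + 1) % N_ACTIONS);
}
```

On the real robot, brain_punish() would be wired to the 'bad' button (or, for the maze variant, to a collision sensor), and brain_run() would be called on every input state change.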

This is a very simple neural network that can only deal with first order
events (it has no 'state memory'). It can be expanded easily. You can
input the desired 'good' state directly if you use a keyboard. The simple
version requires only one button, preferably on a wire so you can push it
without bending down or disturbing the robot. Give yourself time to push.
Insert a pause after each action.

It can be shown that such a robot can learn to traverse a maze if the
'bad boy' button is replaced with a collision sensor and if the actions
selected for a bad response are randomly chosen. However, don't hold your
breath ;-)


PS: If you want to get into this, read an intro on neural networks and
study some higher-level languages like Prolog, which are in some ways
closely related to NNs.


2001\01\01@005541 by Nikolai Golovchenko
> The down side, the basic technology used is a Neural Network with around 50
> nodes which you have no chance implementing on a  (current) PIC.

How about a neuron on just 2 (two) transistors ??!!

Just watched a programme about analog robots on TV a couple of days ago.
They look like animals - spiders, ants, snakes and so on. They
move very similarly to real animals. They even learn how to move.
One of the tests was to bend a critter's wire legs; it still
struggled and learnt how to walk on the crippled legs. None of
the robots has a processor inside, just transistors!

For example, a 5-motor walker is built on 12 transistors. It looks
like they are actually using an inverter or buffer gate for each
pair of transistors, so the robot probably has a 6-gate
chip (or 2-3 in parallel) controlling the whole thing!

Here are some links:
- The basic principles are explained; see also Mark Tilden's patent on the circuits.
- Some examples and schematics.
- Mark Tilden's robot pictures (which I saw on TV).
- An interesting article (1.5 MB).

Amazing, isn't it?


