A little background first - I've developed a robot that works pretty well at roaming around my house and not getting stuck. Primarily it uses the US sensor on a servo to sweep the area in front of it. If it sees an obstacle it stops, looks left & right, and then turns in the direction with the most "freedom". Also, as the US sensor is sweeping, there's a PID loop to steer the robot along the path of least resistance. This limits the number of times it stops, because it's constantly correcting its course. Finally, as a backup there is a front bumper in case the US sensor doesn't see something and the robot actually runs into it. In that case it will stop and back up, then look left & right and move in the path of least resistance.
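A rough sketch of what I mean by the sweep steering, in plain C (NXC is close enough that it should translate directly); the function name, the gain KP, and comparing left/right halves of the sweep are all just my illustration:

```c
#include <assert.h>

/* Illustrative steering bias: compare the total ultrasonic distance
 * on the left half of the sweep with the right half and return a
 * proportional correction (positive = steer right). KP and the
 * function name are made up. */
#define KP 2  /* proportional gain (integer math, NXT-style) */

int steer_correction(const int *sweep, int n)
{
    int left = 0, right = 0, half = n / 2;
    for (int i = 0; i < half; i++)
        left += sweep[i];
    for (int i = n - half; i < n; i++)
        right += sweep[i];
    /* more room on the right -> positive error -> steer right */
    return KP * (right - left) / half;
}
```
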
It's been pretty cool seeing how well the robot works so far. Sometimes (and as I keep tweaking, this happens less and less) it does eventually get stuck: either it gets into a place where it turns left, then right, then left, etc., or it gets physically stuck on something.
One thing I'm wondering about is whether there is some way to program intelligence into the robot so that, in a sense, it can map out its path, and if it ever finds itself really stuck it would know where to back up to start again. I've looked around a bit and I found the post about the A* algorithm. I thought there could be something I could use from A*, but from reading Wikipedia it sounds like it is best suited to a set-up maze with a defined target to reach. I'm trying not to tell my robot where to go (or how to get there, lol), so I don't know if this would work.
Does anyone have any insight into what I might try? Should I try to implement the A* algorithm, and if I do, what would I do about not having a defined goal/target destination?
Thanks!
legofoote
Wandering Mapping Robot
Re: Wandering Mapping Robot
What you're trying to build is called a "map-and-explore robot".
This is certainly possible; I tried it myself 4 or 5 years ago.
The problem is: odometry and navigation must be 100% exact to get true, reliable, and undistorted maps, both while mapping and afterwards when following a calculated path on your map.
Odometry alone is quite unreliable;
the same goes for combining it with the HiTechnic Gyro, because of its bad drift;
the same for the HiTechnic Compass, because of bad magnetic deviation indoors;
the same for the HiTechnic Accelerometer, because of bad sensor noise;
the same for combining all of them without a smart stochastic filter;
and even the best stochastic filters (Kalman, Monte Carlo particle filter) need well-defined external room references in addition to the internal robot sensor readings.
You might have more luck if you had a 100% inertial gyro (without any drift) to refer to; then you could calculate the direction of your moves from the gyro values alone.
The calculation of the traveled distance still has to be based on odometry (maybe adding an accelerometer), which will also never be 100% exact because of drift, slip, and sensor noise.
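Just to illustrate the fusion idea in its simplest form: a complementary filter is a much cruder stand-in for the Kalman or particle filters mentioned above, but it shows the principle of trusting the gyro on short time scales while letting odometry correct the long-term drift. ALPHA and all names here are illustrative, sketched in plain C rather than tested NXC:

```c
#include <assert.h>
#include <math.h>

/* Complementary filter step: integrate the gyro rate for a short-term
 * heading prediction, then pull it slightly toward the odometry
 * heading so gyro drift cannot accumulate forever. */
#define ALPHA 0.98  /* weight given to the gyro-integrated heading */

double fuse_step(double heading, double gyro_rate, double dt,
                 double odo_heading)
{
    double predicted = heading + gyro_rate * dt;  /* gyro integration */
    return ALPHA * predicted + (1.0 - ALPHA) * odo_heading;
}
```
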
For a perfect experimental, artificial environment this might be set up comparatively fast with satisfactory results, but unfortunately not in large environments in the "real world" (e.g., a whole floor with many rooms in a house: what if some obstacles like a chair or a box have been moved in the meantime, or a door has been opened or closed? Will the robot still know where it is and what to do?)
So you will need something like taking bearings on firmly positioned external light or ultrasonic beacons, or optical landmarks, to refer to. Link
Otherwise the errors in the internal readings will grow out of bounds within 15 or 20 minutes at the latest.
That external navigation/bearing system would require more additional sensors.
And a particle filter needs insanely quick calculation speed - I'm not sure NXC could manage this even running the filter single-threaded.
I doubt that you'll manage to run the whole I/O control, an odometric navigation task, a bearing task, an A* planner, and a particle filter altogether simultaneously, multithreaded, on 1 NXT.
But even if you use 2 NXTs and split up some tasks between them, you must be able to pass your map, e.g.
char map[40][40]
quickly enough between your cooperating NXTs so that both can work simultaneously with the latest updated data sets of your environment and your calculated position.
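For illustration, such a `char map[40][40]` could serve as a simple occupancy grid; the cell size, value codes, and centered origin below are assumptions of mine, sketched in plain C:

```c
#include <assert.h>

/* Illustrative occupancy grid matching char map[40][40]:
 * 0 = unknown, 1 = free, 2 = occupied. */
#define GRID 40
#define CELL_CM 10  /* assumed: each cell covers 10 cm x 10 cm */

char map[GRID][GRID];

/* Mark the cell containing world point (x_cm, y_cm); the robot
 * starts at cell (GRID/2, GRID/2) in the middle of the grid.
 * Returns 0 if the point falls outside the mapped area. */
int mark_cell(int x_cm, int y_cm, char value)
{
    int col = GRID / 2 + x_cm / CELL_CM;
    int row = GRID / 2 + y_cm / CELL_CM;
    if (col < 0 || col >= GRID || row < 0 || row >= GRID)
        return 0;
    map[row][col] = value;
    return 1;
}
```
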
Such a quick failsafe network for multithreaded sensor control and data transfer unfortunately does not exist for NXC, so I abandoned this project after 3 years of unsuccessful attempts. Maybe you'll have more luck with leJOS, if it already has such a network protocol.
Or you could start with nxtOSEK, because it generates genuine ARM machine code, which is very fast, and it also has the big advantage of using the whole 320k of memory (64k RAM plus 256k flash) for both code and variables - leaving just one big problem common to all NXTs: far too few I/Os for sensors and motors on a single NXT.
Once you have set all this up and got it running, you may command your robot to go somewhere and back, or to search, grab, and fetch something, via commands from a BT remote-controlling NXT or via speech recognition, or let it automatically drive to the charging station when the battery level gets too low.
Re: Wandering Mapping Robot
Like Doc said, entering a world of mapping is entering a world of uncertainties. Using statistics a robot can survive in such a world, but it is hard to get something working.
I think a more successful approach would be to add some extra behavior to your robot, and to think up some strategies to escape from stuck situations. Evaluate the strategies and start by implementing one that does not need to know (much) about the environment and history. It is fun to test different strategies with your robot.
You might want to know more about behavior. Search for subsumption architecture.
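A minimal sketch of the subsumption idea in plain C, with made-up behaviors, thresholds, and command codes: each behavior either claims control or passes, and the highest-priority claimer wins.

```c
#include <assert.h>

/* Toy subsumption arbiter: each behavior returns a command, or
 * CMD_NONE to pass control to the next lower priority. */
enum { CMD_NONE = 0, CMD_CRUISE, CMD_AVOID, CMD_ESCAPE };

int bumper_escape(int bumped)  { return bumped ? CMD_ESCAPE : CMD_NONE; }
int sonar_avoid(int dist_cm)   { return dist_cm < 25 ? CMD_AVOID : CMD_NONE; }
int cruise(void)               { return CMD_CRUISE; }  /* always wants control */

int arbitrate(int bumped, int dist_cm)
{
    int cmd;
    if ((cmd = bumper_escape(bumped)) != CMD_NONE) return cmd;  /* top priority */
    if ((cmd = sonar_avoid(dist_cm))  != CMD_NONE) return cmd;
    return cruise();  /* default lowest-priority behavior */
}
```
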
My blog: nxttime.wordpress.com
Re: Wandering Mapping Robot
Thanks guys for your input; this has given me a few good ideas that lead, of course, to more questions.
In regards to using odometry to position my robot - I don't think this will work well for my current incarnation. Currently my robot is a tracked vehicle. The Lego tracks don't provide much traction, so there's a lot of slipping if it happens to be driving over a small obstacle. Therefore, if I rely on rotations from the drive motors, I'll be way off pretty quickly.
As a result I'm thinking that I should develop some kind of "reliable" inertial navigation system. To this end, I think I should invest in the IMU sensor from Dexter Industries. I seem to recall reading that one way to compensate for drift is to pair the sensors up and position them on the robot so that the drift in one is in the opposite direction of the drift in the other, and then average the readings from both. So I'm thinking I should get two of the IMU sensors for my robot. Does this sound reasonable? I should check before shelling out the cash.
As for the mapping matrix: it sounds to me, Doc, that you're recommending some kind of matrix as the map of the robot's space. I think, and it seems your post alluded to this as well, that this would soon eat up all the memory in the robot. So, perhaps as a hybrid between the complete mapping ability and Aswin's comments, maybe I set up an array of the last 20 or so sensor sweeps. With the US sensor constantly sweeping the area in front of the robot, I could store the average distance from each sweep in the array. Once it's full, I'll just bump the oldest value. I'll have to find some way of coordinating the displacement from each place to the entry in the array... hmm, I'll have to think about this. Once I have it working, then if it gets stuck (say 4 turns in less than X rotations) it backs up to the last place with the most freedom.
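The "bump the oldest value" idea is basically a ring buffer; a rough sketch in plain C (the names and the "most freedom = largest average distance" criterion are just my illustration):

```c
#include <assert.h>

/* Ring buffer of the last N_SWEEPS average sweep distances: once it
 * is full, each new reading overwrites the oldest one. */
#define N_SWEEPS 20

typedef struct {
    int vals[N_SWEEPS];
    int head;   /* next slot to write */
    int count;  /* valid entries so far (<= N_SWEEPS) */
} SweepLog;

void sweep_push(SweepLog *log, int avg_dist)
{
    log->vals[log->head] = avg_dist;
    log->head = (log->head + 1) % N_SWEEPS;
    if (log->count < N_SWEEPS)
        log->count++;
}

/* Index of the entry with the most "freedom": the largest average
 * distance currently in the buffer. */
int sweep_best(const SweepLog *log)
{
    int best = 0;
    for (int i = 1; i < log->count; i++)
        if (log->vals[i] > log->vals[best])
            best = i;
    return best;
}
```
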
I think in this situation I could use Aswin's idea of states. The states would be something like this:
1. Move towards the direction of least resistance.
2. Got too close to something: scan left and right for a way out, turn in that direction, then go to state 1.
3. Bumped into something: back up and go to state 2.
4. Been in state 2 Y number of times in the past X rotations: back up to the best spot, then go to state 2.
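The four states above might look something like this as a switch-based machine (plain C; the event names, codes, and transition details are just illustrative):

```c
#include <assert.h>

/* The four states from the list above; events are what the sensors
 * report each control cycle. */
enum State { ROAM = 1, SCAN = 2, BACKUP = 3, RETREAT = 4 };
enum Event { EV_NONE, EV_TOO_CLOSE, EV_BUMP, EV_STUCK };

enum State next_state(enum State s, enum Event ev)
{
    if (ev == EV_BUMP)      return BACKUP;   /* bumper overrides everything */
    if (ev == EV_STUCK)     return RETREAT;  /* state 2 hit too often: retreat */
    if (ev == EV_TOO_CLOSE) return SCAN;

    switch (s) {
    case SCAN:    return ROAM;  /* scanned: drive toward the way out */
    case BACKUP:  return SCAN;  /* backed up: look left & right */
    case RETREAT: return SCAN;  /* at the best spot: scan again */
    default:      return ROAM;  /* keep moving to least resistance */
    }
}
```
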
Aswin, I haven't used the subsumption architecture method, does this sound right?
Thanks for the ideas!