Monday, August 31, 2015

Monday Video: Robot Cheetah -- Now Even Cooler

From the good people at MIT:

Now the robot can “see,” with the use of onboard LIDAR, a visual system that uses reflections from a laser to map terrain. The team developed a three-part algorithm to plan the robot’s path based on the LIDAR data. Both the vision and path-planning systems are onboard the robot, giving it complete autonomous control.
The algorithm’s first component enables the robot to detect an obstacle and estimate its size and distance. The researchers devised a formula to simplify a visual scene, representing the ground as a straight line and any obstacles as deviations from that line. With this formula, the robot can estimate an obstacle’s height and its distance from the robot.
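To make that concrete, here is a minimal sketch of the kind of ground-line fit described above, written in Python. The point format, the detect_obstacle name, the thresholds, and the assumption that the ground immediately in front of the robot is clear are all illustrative choices of mine; the article doesn't give MIT's actual formula.

```python
import numpy as np

def detect_obstacle(scan, ground_fraction=0.3, height_threshold=0.1):
    """scan: (N, 2) array of (distance_ahead, height) LIDAR returns,
    ordered from nearest to farthest. Units in meters."""
    x, z = scan[:, 0], scan[:, 1]

    # Represent the ground as a straight line z = a*x + b, fitted to the
    # nearest returns (assumes the ground right in front of the robot is
    # clear; the real system is presumably more robust than this).
    n_ground = max(2, int(len(x) * ground_fraction))
    a, b = np.polyfit(x[:n_ground], z[:n_ground], 1)

    # Any return rising well above that line is treated as an obstacle.
    deviation = z - (a * x + b)
    hits = deviation > height_threshold
    if not hits.any():
        return None  # nothing sticks out of the ground line

    distance = x[hits].min()        # range to the nearest deviating return
    height = deviation[hits].max()  # tallest deviation above the ground line
    return distance, height

# Toy scan: flat ground, then a 0.4 m hurdle about 3 m ahead.
ground = [(d, 0.0) for d in np.arange(0.5, 3.0, 0.1)]
hurdle = [(d, 0.4) for d in np.arange(3.0, 3.5, 0.1)]
print(detect_obstacle(np.array(ground + hurdle)))  # roughly (3.0, 0.4)
```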
Once the robot has detected an obstacle, the second component of the algorithm kicks in, allowing the robot to adjust its approach while nearing the obstacle. Based on the obstacle’s distance, the algorithm predicts the best position from which to jump in order to safely clear it, then backtracks from there to space out the robot’s remaining strides, speeding up or slowing down in order to reach the optimal jumping-off point.
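A rough sketch of that backtracking step might look like the following. The plan_approach name, the standoff rule (takeoff point proportional to obstacle height), and the nominal stride length are stand-ins of my own; the article doesn't spell out the real jump criterion.

```python
def plan_approach(obstacle_distance, obstacle_height,
                  nominal_stride=0.35, standoff_per_height=1.5):
    # Best takeoff point: far enough back from the obstacle to clear it.
    # (Here the standoff simply scales with obstacle height, as a stand-in
    # for whatever criterion the real planner uses.)
    takeoff_distance = obstacle_distance - standoff_per_height * obstacle_height

    # Backtrack from the takeoff point: fit a whole number of strides into
    # the remaining run-up, then re-space them evenly so the last footfall
    # lands exactly at the takeoff point (i.e. speed up or slow down).
    n_strides = max(1, round(takeoff_distance / nominal_stride))
    stride = takeoff_distance / n_strides
    return takeoff_distance, n_strides, stride

# A 0.4 m obstacle detected 3.0 m ahead (e.g. the output of detect_obstacle).
takeoff, n, stride = plan_approach(3.0, 0.4)
print(f"take off in {takeoff:.2f} m, using {n} strides of {stride:.2f} m")
# -> take off in 2.40 m, using 7 strides of 0.34 m
```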