
MIT’s Cheetah 3 robot doesn’t need sight to navigate stairs

Vision-free MIT Cheetah

When we first met the Cheetah, a four-legged robot built by engineers at the Massachusetts Institute of Technology (MIT), the machine was, frankly, not all that fascinating. Sure, it could run pretty quickly for a robot. But at 10 mph, the Cheetah was hardly as impressive as Boston Dynamics’ line of robo-dogs, nor could it keep up with its living, feline counterpart.


Four years on, the Cheetah has made some progress thanks to its MIT engineers. Now dubbed the Cheetah 3, the robot’s current version can leap onto tables, handle rough terrain, and even use blind locomotion to navigate. By developing the machine to get around without the use of cameras, the engineers hope to create a robot that can “feel” its way through a room, no matter how dark an environment may be. In the real world, this ability could make the robot suited for reconnaissance and search and rescue missions.

“Blind locomotion is [locomotion] without vision,” Sangbae Kim, a mechanical engineer at MIT and the robot’s designer, told Digital Trends. Vision-oriented movement obviously seems natural to most humans, but it’s data intensive and noisy for machines. In comparison, movement can also be oriented by proprioception (a sense of one’s own body in relation to the world) and vestibular organs (those which help maintain balance). “Most research robots rely too much on vision and don’t utilize the other two enough,” Kim said. “Before we integrate the vision, we want to have robust behaviors first.”

The Cheetah 3 needs some help from humans to navigate blindly (it relies on manual commands for direction and speed) but it’s capable of tackling obstacles, such as stairs, autonomously. It does so by using sensors and algorithms to orient its body to its environment. For example, the robot uses a contact detection algorithm (which helps the robot determine when to move a particular leg) and a model-predictive control algorithm (which helps predict how much force the robot should apply to a given leg).
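In rough strokes, contact detection of this kind can be framed as fusing two cues: what the gait schedule expects (when a foot *should* be on the ground) and what the leg's force sensing actually reports. The sketch below is purely illustrative — the function, thresholds, and fusion rule are assumptions for demonstration, not MIT's implementation:

```python
# Illustrative sketch (not MIT's actual code): fusing a gait-phase prior
# with a force-sensor likelihood to decide whether a leg is in contact.

def contact_probability(phase: float, measured_force: float,
                        force_threshold: float = 30.0) -> float:
    """Estimate the probability that a leg is touching the ground.

    phase: fraction of the gait cycle (0.0 to 1.0); here we assume the leg
           is scheduled to be in stance during the first half of the cycle.
    measured_force: normal force sensed at the foot, in newtons.
    """
    # Prior from the gait schedule: high during planned stance, low in swing.
    prior = 0.9 if phase < 0.5 else 0.1
    # Likelihood from the force reading: saturating ramp around the threshold.
    likelihood = min(max(measured_force / force_threshold, 0.0), 1.0)
    # Simple Bayesian-style fusion of the two cues.
    p_contact = prior * likelihood
    p_no_contact = (1.0 - prior) * (1.0 - likelihood)
    return p_contact / (p_contact + p_no_contact + 1e-9)

# A leg in scheduled stance with a firm force reading is confidently grounded;
# a leg in scheduled swing with almost no force is confidently airborne.
print(contact_probability(phase=0.2, measured_force=45.0))  # close to 1
print(contact_probability(phase=0.8, measured_force=2.0))   # close to 0
```

An estimate like this feeds the second piece: once the robot believes a foot is planted, a model-predictive controller can decide how much force to push through that leg over the next fraction of a second.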

The Cheetah 3 weighs in at about 90 pounds, with four legs, a determined gait, and exposed wires and circuitry. Each of its knee joints is invertible, meaning it can flex and bend in the opposite direction, which lets the robot adjust its stance for better balance and perform other double-jointed tricks. The Cheetah 3 can also jump onto a 30-inch desk, recover from being pushed, and twist.

In the near term, Kim and his colleagues are developing the Cheetah for use in disaster relief situations or for tasks that are difficult or dangerous for humans to perform. They plan to add an arm to allow the robot to manipulate objects around it.

The researchers will demonstrate their robot in October at the International Conference on Intelligent Robots and Systems (IROS) in Madrid, Spain.

Dyllan Furness
Former Digital Trends Contributor