
Carnegie Mellon’s smart projector blurs the line between physical and digital desktops

Desktopography: Supporting Responsive Cohabitation Between Virtual Interfaces and Physical Objects
Back in the 1980s, the people bringing the graphical user interface to PCs adopted the desktop metaphor to help users understand how to use their personal computers. Here in 2017, the technology exists to expand our computer interfaces beyond our machines and onto our real-life desks.
That’s the basis for a new project developed by Carnegie Mellon’s ever-interesting Future Interfaces Group (FIG). They’ve created a nifty interface concept that allows your regular work desk to transform into one giant touchscreen — and even compensates for your desk being cluttered. If you thought the iPad Pro was big, you ain’t seen nothing yet!

“We believe the time has come to re-imagine the light bulb as a 21st century computational appliance, illuminating our days not just with light, but information,” Professor Chris Harrison, head of FIG, told Digital Trends. “Instead of simply emitting light when a switch is flipped, why can we not emit structured light, more akin to a digital projector? Further, wherever the light may fall could become an interactive surface, infused with rich communication, creation and information retrieval capabilities. This is a low-cost, self-contained device that fits existing light fixtures: simply screw in your new ‘info-bulb’ and now your office desk or kitchen counter is an expansive multi-touch surface, able to respond to and augment your daily activities.”

The work debuted this week at the Symposium on Engineering Interactive Computing Systems. A few neat elements make it particularly cool. For one, the projected augmented reality “info-bulb” manages to pull off touch sensing while also retrofitting into standard flood bulb (BR30 LED) fittings, so it can be installed in most homes. In addition, the surfaces you use it on don’t have to be flat, although Harrison notes that flat ones are the most comfortable for prolonged use. Finally, it makes some smart concessions to the fact that most desks aren’t simply large empty spaces, rearranging projected interface elements so they don’t overlap with your coffee mug, and even letting them “snap” to objects.
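The article doesn’t detail how Desktopography decides where to put things, but the behavior described above amounts to a constrained placement problem: keep projected windows on the desk while steering them away from detected physical objects. The sketch below illustrates that general idea in Python; the bounding-box representation, the grid of candidate positions, and the nearest-valid-spot scoring are all illustrative assumptions, not FIG’s actual algorithm.

```python
# Minimal sketch of the occlusion-avoidance idea described above.
# Everything here (Rect, the candidate grid, the distance scoring) is an
# illustrative assumption, not the Desktopography placement algorithm.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Rect") -> bool:
        # Axis-aligned rectangle intersection test.
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)


def place_element(element: Rect, obstacles: List[Rect],
                  desk_w: float, desk_h: float,
                  step: float = 20.0) -> Optional[Rect]:
    """Return the candidate position closest to the element's current spot
    that fits on the desk and avoids every detected physical object."""
    best: Optional[Rect] = None
    best_dist = float("inf")
    y = 0.0
    while y + element.h <= desk_h:
        x = 0.0
        while x + element.w <= desk_w:
            candidate = Rect(x, y, element.w, element.h)
            if not any(candidate.overlaps(o) for o in obstacles):
                dist = (x - element.x) ** 2 + (y - element.y) ** 2
                if dist < best_dist:
                    best, best_dist = candidate, dist
            x += step
        y += step
    return best  # None means no clear spot was found


# Example: move a projected window away from a coffee mug on a 1000x600 desk.
window = Rect(300, 200, 200, 150)
mug = Rect(350, 250, 90, 90)
print(place_element(window, [mug], desk_w=1000, desk_h=600))
```

A real system would also have to weigh factors the demo shows off, such as keeping elements within the user’s reach or snapping them to object edges, rather than just minimizing how far a window moves.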

As can be seen in the video demonstration, there’s still work to be done in areas like touch tracking to reduce latency and improve accuracy, but this is definitely one heck of an impressive demo.

“We next plan to add a regular, visible light camera to our bulb so that we can better see information in the scene, things like written text, newspaper articles, and objects,” Harrison said. “This will let us digitize information and augment it — like Google searching right on a magazine article, digitizing a hand-written shopping list, or popping up photos from your phone.”
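To give a sense of what that pipeline could look like, here is a minimal sketch under stated assumptions: a frame from the bulb’s planned visible-light camera is run through OCR and turned into a web search link that could be projected next to the printed page. The use of Tesseract via pytesseract, the file name, and the search step are illustrative choices; Harrison’s quote doesn’t specify which components FIG will actually use.

```python
# Minimal sketch of the "digitize and augment" idea Harrison describes.
# The camera frame, Tesseract OCR, and the search-link augmentation are
# all assumptions for illustration, not FIG's implementation.

import urllib.parse

from PIL import Image          # pip install pillow
import pytesseract             # pip install pytesseract (requires Tesseract)


def text_from_desk_photo(path: str) -> str:
    """OCR a photo of the desk surface and return the recognized text."""
    return pytesseract.image_to_string(Image.open(path)).strip()


def search_url_for(text: str) -> str:
    """Build a web search URL for the recognized text (hypothetical augmentation)."""
    return "https://www.google.com/search?q=" + urllib.parse.quote_plus(text)


if __name__ == "__main__":
    snippet = text_from_desk_photo("desk_frame.jpg")  # hypothetical captured frame
    print(search_url_for(snippet[:100]))              # link to project beside the page
```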

A paper describing the work can be read here.

Luke Dormehl