How can you tell how affluent an area is? Penny does it from space

It’s not just the Great Wall of China or the Great Pyramids of Giza that you can see from space — thanks to artificial intelligence and Penny, we can now discern things that seem … imperceptible, too.

While you may not be able to tell how much money the denizens of a neighborhood make simply by walking through it, you may be able to do so by flying over said neighborhood (that is, if you’re flying in a satellite). It’s all thanks to a new venture from satellite mapping company DigitalGlobe, which partnered with San Francisco design studio Stamen Design to create a machine learning-powered mapping tool that combines income data and satellite imagery to estimate the average income of neighborhoods.

The program is called Penny, and it takes into account the shapes, colors, and lines that make up a satellite image, explains Fast Company. By pairing that analysis with the corresponding Census income data, the program looks for observable patterns between urban features and income levels. From there, Penny “learns” what affluence looks like from above, and its algorithm can predict the income level of the area it’s looking at.
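
In machine-learning terms, this is a supervised image-classification problem: satellite tiles are the inputs and Census-derived income brackets are the labels. The sketch below is only an illustration of that general recipe, not DigitalGlobe’s or Stamen’s actual code; the tile files, bracket names, and hyperparameters are all placeholder assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision import models, transforms
from PIL import Image

INCOME_BRACKETS = ["low", "medium", "high"]  # assumed coarse labels derived from Census data

class TileDataset(Dataset):
    """Pairs satellite image tiles with an income-bracket label for the same census tract."""
    def __init__(self, samples):
        # samples: list of (path_to_tile_image, bracket_index) pairs -- hypothetical inputs
        self.samples = samples
        self.tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        path, label = self.samples[i]
        return self.tf(Image.open(path).convert("RGB")), label

def build_model(num_classes: int = len(INCOME_BRACKETS)) -> nn.Module:
    # Start from an ImageNet-pretrained backbone and swap in a new classification head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train(model: nn.Module, loader: DataLoader, epochs: int = 3, lr: float = 1e-4) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

# Hypothetical usage, given a list of (tile_path, bracket_index) samples:
# train(build_model(), DataLoader(TileDataset(samples), batch_size=32, shuffle=True))
```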

So what are the telltale signs that Penny has identified thus far? According to the project’s website, “Different types of objects and shapes seem to be highly correlated with different income levels. For example, lower income areas tend to have baseball diamonds, parking lots, and large similarly shaped buildings (such as housing projects).”

Conversely, in higher-income areas, satellite imagery shows greener spaces, tall buildings, and single-family homes with backyards. In between are the middle-income areas, where a smaller number of single-family homes can be observed alongside apartment buildings.

Perhaps one of Penny’s coolest features is that it lets you manipulate an area by adding elements, so you can see, for example, how dropping the Empire State Building into an otherwise low-income neighborhood would affect the prediction. “Every feature that you add affects Penny’s predictions differently. The same feature will have a different effect depending on where it’s placed,” the team explained.
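
Purely as an illustration of how such a “what if” edit could be scored (and not how Penny itself is implemented), the snippet below assumes a trained tile classifier along the lines of the earlier sketch, pastes a feature crop into a tile, and compares the model’s income-bracket probabilities before and after.

```python
import torch
from PIL import Image
from torchvision import transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def predict(model: torch.nn.Module, tile: Image.Image) -> torch.Tensor:
    """Return the model's probability for each income bracket for one tile."""
    model.eval()
    with torch.no_grad():
        return torch.softmax(model(tf(tile).unsqueeze(0)), dim=1).squeeze(0)

def add_feature(tile: Image.Image, feature: Image.Image, xy: tuple[int, int]) -> Image.Image:
    """Paste a feature crop (say, a tall building) onto a copy of the tile at pixel xy."""
    edited = tile.copy()
    edited.paste(feature, xy)
    return edited

# Hypothetical usage with a trained model and two image files on disk:
# tile = Image.open("tile.png").convert("RGB")
# skyscraper = Image.open("skyscraper_crop.png").convert("RGB")
# before = predict(model, tile)
# after = predict(model, add_feature(tile, skyscraper, (120, 80)))
# print(after - before)  # how the predicted income-bracket probabilities shift
```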

Penny is already quite confident in some of its calls: for the New York area, it is 86 percent sure that the Financial District is a high-income area and 99 percent sure that Harlem is a low-income neighborhood. So start hovering; you may be surprised by what you learn.

Lulu Chang
Former Digital Trends Contributor
The BigSleep A.I. is like Google Image Search for pictures that don’t exist yet
Eternity

In case you’re wondering, the picture above is "an intricate drawing of eternity." But it’s not the work of a human artist; it’s the creation of BigSleep, the latest amazing example of generative artificial intelligence (A.I.) in action.

A bit like a visual version of the text-generating A.I. model GPT-3, BigSleep is capable of taking any text prompt and visualizing an image to fit the words. That could be something esoteric like eternity, or it could be a bowl of cherries or a beautiful house. Think of it like a Google Images search, only for pictures that have never previously existed.
How BigSleep works
“At a high level, BigSleep works by combining two neural networks: BigGAN and CLIP,” Ryan Murdock, BigSleep’s 23-year-old creator, a student studying cognitive neuroscience at the University of Utah, told Digital Trends.
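
Murdock has documented the approach in more detail elsewhere; the sketch below is only a simplified illustration of that combination, assuming the pytorch-pretrained-biggan and OpenAI clip packages. The idea is to adjust BigGAN’s latent inputs by gradient descent so the generated image scores as highly as possible against the text prompt under CLIP; the prompt, step count, and learning rate here are arbitrary choices.

```python
import torch
import torch.nn.functional as F
import clip                                   # https://github.com/openai/CLIP
from pytorch_pretrained_biggan import BigGAN  # pip install pytorch-pretrained-biggan

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
clip_model, _ = clip.load("ViT-B/32", device=device, jit=False)
clip_model = clip_model.float()  # keep everything in fp32 for a simple gradient path
for p in list(gan.parameters()) + list(clip_model.parameters()):
    p.requires_grad_(False)      # only the latent inputs below are optimized

prompt = "an intricate drawing of eternity"
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))

# The trainable parameters: BigGAN's noise vector and a (soft) class vector.
z = torch.randn(1, 128, device=device, requires_grad=True)
y = torch.zeros(1, 1000, device=device, requires_grad=True)
opt = torch.optim.Adam([z, y], lr=0.05)

# CLIP's expected input normalization.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(200):
    img = gan(z, torch.softmax(y, dim=1), truncation=0.4)          # (1, 3, 256, 256) in [-1, 1]
    img = F.interpolate((img + 1) / 2, size=224, mode="bilinear")  # rescale for CLIP's encoder
    img_feat = clip_model.encode_image((img - mean) / std)
    loss = -F.cosine_similarity(img_feat, text_feat).mean()        # maximize text/image agreement
    opt.zero_grad()
    loss.backward()
    opt.step()
```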

Clever new A.I. system promises to train your dog while you’re away from home

One of the few good things about lockdown and working from home has been having more time to spend with pets. But when the world returns to normal, people are going to go back to the office, and in some cases that means leaving dogs at home for a large part of the day, hopefully with someone coming into your house to let them out at the midday point.

What if it was possible for an A.I. device, like a next-generation Amazon Echo, to give your pooch a dog-training class while you were away? That’s the basis for a project carried out by researchers at Colorado State University. Initially spotted by Chris Stokel-Walker, author of YouTubers: How YouTube Shook Up TV and Created a New Generation of Stars, and reported by New Scientist, the work involves a prototype device that’s able to give out canine commands, check to see if they’re being obeyed, and then provide a treat as a reward when they are.
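
Purely as an illustration of that command-check-reward loop (and not the Colorado State team’s code), a sketch might look like the following, where the command list, the posture check, and the treat dispenser are all hypothetical stand-ins for the speaker, camera model, and hardware such a prototype would use.

```python
import random
import time

COMMANDS = ["sit", "down", "stay"]  # assumed command set

def speak(command: str) -> None:
    # Stand-in for a smart-speaker voice prompt.
    print(f"[speaker] {command}!")

def posture_matches(command: str) -> bool:
    # Stand-in for a camera plus a pose-classification model verifying the behavior.
    return random.random() > 0.3

def dispense_treat() -> None:
    # Stand-in for the hardware that releases a treat.
    print("[dispenser] treat released")

def training_session(rounds: int = 5, wait_seconds: float = 2.0) -> None:
    for _ in range(rounds):
        command = random.choice(COMMANDS)
        speak(command)
        time.sleep(wait_seconds)          # give the dog time to respond
        if posture_matches(command):
            dispense_treat()              # reward only verified responses

if __name__ == "__main__":
    training_session()
```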

This is how Google’s internet-serving Loon balloons can float for nearly a year

Only Google could think that the way to improve the flight of giant, helium-filled balloons is by coming up with better algorithms. And to be fair to the Mountain View-based search leviathan, it seems to have worked.

For the past couple of years, Project Loon, a subsidiary of Google’s parent company Alphabet, has been working to provide internet access in rural and remote parts of the world by using high-altitude balloons in the stratosphere to create aerial wireless networks. Last year, Loon announced that it had reached 1 million hours of stratospheric flight with its combined balloon fleet. Then, at the end of October, Loon set a new record for longest stratospheric flight by remaining airborne for a whopping 312 days, covering a distance of some 135,000 miles.
