
Google Inceptionism may be cooler than the real thing

It may not be quite the same thing as planting an idea in a dreaming mind, but one could argue that this form of inception is even cooler. In a fascinating leap forward in the realm of artificial intelligence, Google’s research lab has effectively “trained” artificial neural networks by showing them millions of images, whose features are recognized by successive layers of artificial neurons. Each layer recognizes an additional aspect of the image until the final output is reached. Taken all at once, the process allows an artificially intelligent system to recognize a picture, but Google wanted to know what was happening at each individual stage. And that’s where things got cool.
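To peek inside a trained network like this, researchers examine the intermediate activations each layer produces for a given image. As a rough illustration of the idea, the sketch below registers forward hooks on a few layers of a pretrained GoogLeNet (the Inception-era network) in PyTorch; the specific layer names and the random stand-in input are illustrative assumptions, not Google’s actual setup.

```python
# A minimal sketch of inspecting what individual layers of a trained image
# classifier respond to. Model, layer names, and input are stand-ins.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an early, a middle, and a late layer of the network.
for name, module in model.named_modules():
    if name in {"conv1", "inception3a", "inception5b"}:
        module.register_forward_hook(make_hook(name))

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed photo
with torch.no_grad():
    model(image)

for name, act in activations.items():
    print(f"{name}: activation shape {tuple(act.shape)}")
```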

When Google researchers decided to partition out the recognition process, allowing just one aspect of the entire analysis to enhance a certain image, they created some particularly groovy pictures. Calling it inceptionism, Google’s Alexander Mordvintsev explained, “Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.”


Essentially, this pinpointing of one particular recognition layer magnified whatever an image somewhat resembled. Wrote Mordvintsev, “We ask the network: ‘Whatever you see there, I want more of it!’ This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.”
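That feedback loop is straightforward to sketch in code: choose a layer, run the image forward, then nudge the image’s pixels by gradient ascent so the layer’s activations grow stronger. The sketch below is a minimal PyTorch version of the idea; the model, the chosen layer (“inception4c”), the step size, and the iteration count are assumptions for illustration, not the published DeepDream settings (which also use tricks like multi-scale “octaves” and gradient smoothing).

```python
# A minimal sketch of the feedback loop described above: amplify whatever
# a chosen layer already detects in the input image. Hyperparameters are
# illustrative assumptions, not the published DeepDream settings.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image gets updated, not the network

# Capture the chosen layer's activations on each forward pass.
target = {}
layer = dict(model.named_modules())["inception4c"]
layer.register_forward_hook(lambda mod, inp, out: target.update(act=out))

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo

for step in range(50):
    model(image)
    # "Whatever you see there, I want more of it": maximize activation energy.
    loss = target["act"].norm()
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.clamp_(0.0, 1.0)
        image.grad.zero_()
```

Each pass makes whatever the layer faintly detected a little stronger, which in turn makes the network detect it more confidently on the next pass, exactly the bird-from-a-cloud loop Mordvintsev describes.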

Beyond creating incredibly trippy images, Google believes the possibilities unlocked by this new, deconstructed process are limitless. Concluded the research team, “The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training. It also makes us wonder whether neural networks could become a tool for artists — a new way to remix visual concepts — or perhaps even shed a little light on the roots of the creative process in general.”

Lulu Chang
Former Digital Trends Contributor
Can A.I. beat human engineers at designing microchips? Google thinks so

Could artificial intelligence be better at designing chips than human experts? A group of researchers from Google's Brain Team attempted to answer this question and came back with interesting findings. It turns out that a well-trained A.I. is capable of designing computer microchips -- and with great results. So great, in fact, that Google's next generation of A.I. computer systems will include microchips created with the help of this experiment.

Azalia Mirhoseini, one of the computer scientists on Google Research's Brain Team, explained the approach with several colleagues in the journal Nature. Artificial intelligence usually has an easy time beating a human mind when it comes to games such as chess. Some might say that A.I. can't think like a human, but in the case of microchip design, that proved to be an advantage: not thinking like a human was the key to finding out-of-the-box solutions.
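The Nature paper frames chip floorplanning as a game: a reinforcement-learning agent places circuit blocks one at a time and is rewarded for layouts with short wiring and low congestion. The toy sketch below is not the paper's method, just a way to make the objective concrete: it scores placements of made-up blocks on a grid by total Manhattan wirelength and keeps the best of many random tries, where the real system substitutes a trained deep-RL policy for the random search.

```python
# A toy illustration of the optimization behind automated floorplanning:
# place blocks so connected blocks end up close together. All names and
# numbers are made up; the real system uses a learned placement policy.
import random

GRID = 8                       # 8x8 placement grid
BLOCKS = ["cpu", "cache", "dma", "phy", "mem"]
NETS = [("cpu", "cache"), ("cpu", "dma"), ("dma", "phy"), ("cache", "mem")]

def wirelength(placement):
    # Sum of Manhattan distances between connected blocks.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def random_placement():
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          len(BLOCKS))
    return dict(zip(BLOCKS, cells))

best = min((random_placement() for _ in range(10_000)), key=wirelength)
print("best wirelength:", wirelength(best))
print("placement:", best)
```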

Google’s LaMDA is a smart language A.I. for better understanding conversation

Artificial intelligence has made extraordinary advances when it comes to understanding words and even being able to translate them into other languages. Google has helped pave the way here with amazing tools like Google Translate and, recently, with its development of Transformer machine learning models. But language is tricky -- and there’s still plenty more work to be done to build A.I. that truly understands us.
Language Model for Dialogue Applications
At Tuesday’s Google I/O, the search giant announced a significant advance in this area with a new language model it calls LaMDA. Short for Language Model for Dialogue Applications, it’s a sophisticated A.I. language tool that Google claims is superior when it comes to understanding context in conversation. As Google CEO Sundar Pichai noted, this might mean intelligently parsing an exchange like “What’s the weather today?” “It’s starting to feel like summer. I might eat lunch outside.” That makes perfect sense as a human dialogue, but it would befuddle many A.I. systems looking for more literal answers.

LaMDA also has a broad knowledge of learned concepts, which it is able to synthesize from its training data. Pichai noted that its responses never follow the same path twice, so conversations feel less scripted and more naturally responsive.
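LaMDA itself isn’t publicly available, but the general pattern of context-aware dialogue, feeding the whole exchange back into a language model so each reply is conditioned on everything said so far, can be sketched with any off-the-shelf causal language model. The example below uses GPT-2 from Hugging Face purely as a stand-in; the prompt format and sampling settings are assumptions.

```python
# LaMDA is not publicly available; GPT-2 here is an illustrative stand-in
# for the general pattern of conditioning replies on conversation history.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The whole exchange is fed in as context, so "It's starting to feel like
# summer" can be read as an answer about the weather, not a non sequitur.
history = [
    "User: What's the weather today?",
    "Bot: It's starting to feel like summer. I might eat lunch outside.",
    "User: Sounds nice. Any plans after lunch?",
    "Bot:",
]
inputs = tokenizer("\n".join(history), return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,  # sampling varies replies instead of greedy repetition
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Sampling rather than greedy decoding is one simple way to keep replies from following the same path twice, loosely analogous to the non-scripted behavior Pichai described.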

Facebook’s new image-recognition A.I. is trained on 1 billion Instagram photos

If Facebook has an unofficial slogan, an equivalent to Google’s “Don’t Be Evil” or Apple’s “Think Different,” it is “Move Fast and Break Things.” It means, at least in theory, that one should iterate quickly to try new things and not be afraid of the possibility of failure. In 2021, however, with social media being blamed for a plethora of societal ills, the phrase should, perhaps, be modified to: “Move Fast and Fix Things.”

One of the many things social media platforms, not just Facebook, have been pilloried for is the spread of certain images online. It’s a challenging problem by any stretch of the imagination: Some 4,000 photos are uploaded to Facebook every single second. That equates to 14.58 million images per hour, or 350 million photos each day. Handling this job manually would require every single Facebook employee to work 12-hour shifts, approving or vetoing an uploaded image every nine seconds.
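Those figures are easy to sanity-check. The short sketch below redoes the arithmetic; working backward, the nine-second claim implies a workforce of roughly 73,000 people.

```python
# Checking the moderation-scale arithmetic above.
uploads_per_day = 350_000_000

print(uploads_per_day / 24)      # ~14.58 million images per hour
print(uploads_per_day / 86_400)  # ~4,051 images per second ("some 4,000")

# Working backward from "one image every nine seconds on a 12-hour shift":
shift_seconds = 12 * 3600                   # 43,200 seconds per shift
images_per_worker = shift_seconds / 9       # 4,800 images per shift
print(uploads_per_day / images_per_worker)  # ~72,917 people required
```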
