
A.I. hit some major milestones in 2020. Here’s a recap

[Image: brain network on veins illustration. Chris DeGraw/Digital Trends, Getty Images]

Tens of thousands of papers involving A.I. are published each year, but it will take some time before many of them make their real-world impact clear. Meanwhile, the top funders of A.I. (the Alphabets, Apples, Facebooks, Baidus, and other giants of this world) continue to hone much of their most exciting technology behind closed doors.

In other words, when it comes to artificial intelligence, it’s impossible to do a rundown of the year’s most important developments in the way that, say, you might list the 10 most listened-to tracks on Spotify.


But A.I. has undoubtedly played an enormous role in 2020 in all sorts of ways. Here are six of the main developments and emerging themes seen in artificial intelligence during 2020.

It’s all about language understanding

In an average year, a text-generating tool probably wouldn’t rank as one of the most exciting new A.I. developments. But 2020 hasn’t been an average year, and GPT-3 isn’t an average text-generating tool. The sequel to GPT-2, which was labeled the world’s most “dangerous” algorithm, GPT-3 is a cutting-edge autoregressive natural-language-processing neural network created by the research lab OpenAI. Seeded with a few sentences, such as the beginning of a news story, GPT-3 can generate impressively convincing text matching the style and content of the initial few lines, even down to fabricating quotes. GPT-3 boasts an astonishing 175 billion parameters (the weights of the connections that are tuned in order to achieve performance) and reportedly cost around $12 million to train.

[Image: GPT-2 AI Text Generator. OpenAI]
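
GPT-3 itself is only reachable through OpenAI’s private API, but anyone can get a feel for this kind of seeded text generation with its freely available predecessor, GPT-2. The short sketch below is purely illustrative and assumes the Hugging Face transformers library is installed; the prompt and generation settings are our own, not anything from OpenAI.

```python
# Illustrative sketch only: seeding an autoregressive language model with the
# opening of a "news story" and letting it continue in the same style.
# Uses GPT-2 (publicly available) as a stand-in for GPT-3, via the Hugging
# Face transformers library -- an assumption, not part of the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced on Tuesday that a long-lost spacecraft"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

# The model continues the prompt token by token, each new word conditioned on
# everything generated so far.
print(outputs[0]["generated_text"])
```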

GPT-3 isn’t alone in being an impressive A.I. language model spawned in 2020. While it was quickly overtaken in the hype cycle by GPT-3, Microsoft’s Turing Natural Language Generation (T-NLG) made waves in February 2020. At 17 billion parameters, it was, upon release, the largest language model yet published. A Transformer-based generative language model, T-NLG can generate the words needed to complete unfinished sentences, as well as produce direct answers to questions and summarize documents.
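
T-NLG itself hasn’t been released publicly, so the rough sketch below uses small, openly available Transformer models from the Hugging Face hub as stand-ins to show the same kinds of tasks, summarization and direct question answering; none of this code comes from Microsoft.

```python
# Rough sketch of the tasks described above, using openly available models
# as stand-ins for T-NLG (which has not been publicly released).
from transformers import pipeline

summarizer = pipeline("summarization")      # default model: a DistilBART variant
answerer = pipeline("question-answering")   # default model: a DistilBERT variant

passage = (
    "Turing Natural Language Generation (T-NLG) was announced by Microsoft in "
    "February 2020. At 17 billion parameters, it was the largest published "
    "language model at the time of its release."
)

print(summarizer(passage, max_length=30, min_length=10)[0]["summary_text"])
print(answerer(question="How many parameters does T-NLG have?", context=passage)["answer"])
```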

First introduced by Google in 2017, Transformers — a new type of deep learning model — have helped revolutionize natural language processing. A.I. has been focused on language at least as far back as Alan Turing’s famous hypothetical test of machine intelligence. But thanks to some of these recent advances, machines are only now getting astonishingly good at understanding language. This will have some profound impacts and applications as the decade continues.
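
For the curious, the heart of a Transformer is a surprisingly compact operation called scaled dot-product attention, in which every word in a sequence weighs its relationship to every other word. The toy NumPy sketch below shows just that core calculation; real models wrap it in learned projections, multiple attention heads, and many stacked layers.

```python
# Toy illustration of scaled dot-product attention, the core operation of the
# Transformer architecture ("Attention Is All You Need", Google, 2017).
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Each token attends to every other token, weighting the values by how
    well its query matches the other tokens' keys."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)              # pairwise similarity
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over rows
    return weights @ values                               # blended representations

# A "sentence" of 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # -> (4, 8)
```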

Models are getting bigger

GPT-3 and T-NLG represented another milestone, or at least a significant trend, in A.I. While there’s no shortage of startups, small university labs, and individuals using A.I. tools, the presence of major players on the scene means some serious resources are being thrown around. Increasingly, enormous models with huge training costs are dominating the cutting edge of A.I. research. Neural networks with upward of a billion parameters are fast becoming the norm.

GPT-3’s 175 billion parameters remain a wild outlier, but newer models such as Meena, Turing-NLG, and BST 9.4B have all surpassed 1 billion parameters. More parameters don’t necessarily mean better performance in every case. However, they do mean that a network can accurately model a wider range of functions. If we’re going to replicate brainlike artificial intelligence, more parameters are a must. This also means that major players will continue to rule the A.I. roost when it comes to the biggest models. It reportedly costs $1 per 1,000 parameters to train a network. Extrapolate that to a billion parameters and, well, you do the math.
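
For the curious, here is that back-of-the-envelope math spelled out, using only the figure quoted above; treat it as a rough rule of thumb rather than a real costing.

```python
# Back-of-the-envelope extrapolation of the reported ~$1 per 1,000 parameters
# training cost. Rough illustrative numbers, not real price quotes.
COST_PER_1000_PARAMS = 1.0  # dollars

for params in (1_000_000_000, 17_000_000_000, 175_000_000_000):
    cost = params / 1_000 * COST_PER_1000_PARAMS
    print(f"{params:>16,d} parameters -> roughly ${cost:,.0f} to train")

# A billion parameters works out to about $1 million by this rule of thumb.
# (Actual reported estimates for GPT-3, at 175 billion parameters, are lower,
# around $12 million, so the rule of thumb should be taken loosely.)
```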

A.I. for the good of humankind

As A.I. tools advance, it’s not just computer scientists who benefit from them. Researchers from other disciplines jump on board, often with innovative ideas about the ways machine learning can be used. Whether it’s A.I. that can diagnose tinnitus from brain scans; mind-reading headsets that use machine learning to turn thoughts into spoken words for vocally impaired wearers; DeepMind’s AlphaFold, which can accurately predict the shape of proteins based on their sequence, potentially helping to develop new, more effective therapies rapidly; or any number of other demonstrations, it’s clear that A.I. opened up some exciting new avenues for research in 2020.

The robocalypse is not here (yet)

The polarization of many aspects of life in 2020 discourages the idea of nuance. But it is becoming increasingly evident that nuance is exactly what’s needed when it comes to the takeover of jobs by robots. This year has seen enormous job losses around the world. However, these have been brought on by the pandemic and its impacts, rather than any sinister Skynet-style assault on human jobs.

[Image: Flippy removing chicken tenders from the fryer. Miso Robotics]

While there have certainly been examples of A.I. and robotics carrying out human tasks (see Flippy the burger-flipping robot, for instance), these have typically been deployed to augment human abilities or to assist in areas where there isn’t a consistent enough workforce. In fact, the companies that are hiring the most people right now are those that are simultaneously investing in advanced technologies (read: big tech giants).

This isn’t to say that the robocalypse was an erroneous prediction. The hollowing out of the middle classes is a trend that will continue, although it’s one that’s far more complex than a few tech companies introducing new smart software tools. If 2020 has had one thing to say about A.I. and employment, it’s that things are complicated.

Deepfakes

There’s no denying that 2020 has been a strange year for blurring the edges of reality in all sorts of weird ways. At the start of the year, COVID-19 plunged much of the world into a lockdown like something out of a contagion-themed blockbuster movie. (How did people escape the reality of this “new normal”? By seeking out pandemic-themed entertainment, of course.) The year then ended with the U.S. election presenting your choice of two versions of reality, depending on party (and leadership) affiliation.

A.I. has played a part in this Baudrillardian assault on reality in the form of deepfake technologies. Deepfakes aren’t an invention of 2020, but they have seen some significant developments this year. In July, researchers from the Center for Advanced Virtuality at the Massachusetts Institute of Technology put together a compelling, high-production-value deepfake video depicting President Richard Nixon delivering an alternate address about the moon landing: the contingency speech written in case the Apollo 11 mission went terribly wrong.

Along with more convincing visual deepfakes, researchers have also created some astonishingly accurate audio deepfakes. One recent example? An Eminem vocal deepfake that launches a blistering diss against Facebook CEO Mark Zuckerberg. It sounded convincingly lifelike — even if it wasn’t quite up to Em’s usual lyrical standards.

Regulation of A.I.

A.I.-powered tools are, well, powerful. And that applies not just to abstract proof-of-concept demonstrations, but to real-world deployments, ranging from systems that screen applicants for job interviews to the facial-recognition and parole-decision tools employed by law enforcement and other authorities.

Over the past few years, awareness of these tools, and of the way that bias can be coded into them, has led to growing concern about their usage. In January, police in Detroit wrongly arrested a man named Robert Williams after an algorithm erroneously matched the photo on his driver’s license with blurry CCTV footage. Shortly thereafter, IBM, Amazon, and Microsoft all announced that they were rethinking the use of their facial-recognition technologies in this capacity.

The aforementioned deepfakes have whipped up plenty of fear in particular, perhaps because they so obviously demonstrate how their misuse could be harmful. California’s passing of AB-730, a law designed to criminalize the use of deepfakes to give false impressions of politicians’ words or actions, was one clear-cut attempt to regulate the use of A.I. Consistent rules on how best to develop A.I. tools for good remain a work in progress.

This focus on A.I. ethics makes it feel like the subject is starting to go mainstream for the first time. Much of the credit must go to researchers like Caroline Criado Perez and Safiya Umoja Noble, whose tireless work to highlight algorithmic bias and the importance of accountability has clearly struck a chord.
