
17-year-old uses deep learning to program AI cars that race around in your browser

German software engineer Jan Hünermann watches two autonomous cars — one colored pink, the other turquoise — race around a track. There are various obstacles set up to confound them, but thanks to the brain-inspired neural networks that provide them with their intelligence, the cars smoothly navigate these obstacles with the confidence of seasoned pros.

From time to time, Hünermann throws a new obstacle in their path, and then watches with satisfaction as the cars dodge this new impediment. Best of all? The longer he watches them, the smarter the cars become: learning from their mistakes until they can handle just about any scenario that comes their way.

There are a couple of unusual things about the scenario. The first is that Hünermann is only 17 years old, impressively young to be coding autonomous cars. The second is that the cars don’t actually exist. Or at least they don’t exist outside of a couple of crudely-rendered sprites in a web browser.

This is Hünermann’s “Self-Driving Cars In A Browser” project; one which… well, does what it says on the tin, really. It’s a web app designed to “create a fully self-learning agent” that’s able to navigate a pair of cars through an ever-changing 2D environment. The “ever-changing” bit comes down to the individual users, who are able to use their mouse to click and drag new items onto the preexisting map.

Picture a solid vector suddenly appearing in the middle of the freeway on your commute to work, and you’ll have some sympathy for what Hünermann’s long-suffering cars are faced with!

The idea for the project hit Hünermann a couple of years ago, when he was a high school sophomore. Like everyone else who follows tech, he marveled at the news coming out of Google DeepMind, showing how the cutting-edge research team there had used a combination of reinforcement learning (a type of AI that works toward specific goals through trial and error) and deep learning neural networks to build bots that could work out how to play old Atari games. Unlike the intelligent agents that make up non-player characters (NPCs) in video games, these bots were able to learn video games without anyone explicitly telling them what to do.
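
If you’ve never seen reinforcement learning up close, the core trick fits in a few lines. Below is a minimal, purely illustrative Q-learning update in TypeScript (a sketch of the general technique, not code from DeepMind or from Hünermann’s project), in which an agent nudges its estimate of how good an action is toward the reward it actually received:

```typescript
// Minimal tabular Q-learning sketch (illustrative only; not DeepMind's
// or Hünermann's actual code). The agent nudges its value estimate for
// a state/action pair toward the reward it just observed, plus the best
// value it expects from wherever it ended up.
type QTable = Map<string, number[]>; // state key -> one value per action

const ALPHA = 0.1; // learning rate: how far to move toward new evidence
const GAMMA = 0.9; // discount factor: how much future reward matters

function qUpdate(
  q: QTable,
  state: string,
  action: number,
  reward: number,
  nextState: string,
  numActions: number,
): void {
  const row = q.get(state) ?? new Array(numActions).fill(0);
  const nextRow = q.get(nextState) ?? new Array(numActions).fill(0);
  const bestNext = Math.max(...nextRow);
  // Trial and error in one line: shift the old estimate toward
  // (observed reward + discounted best future estimate).
  row[action] += ALPHA * (reward + GAMMA * bestNext - row[action]);
  q.set(state, row);
}
```

DeepMind’s breakthrough was to swap the lookup table for a deep neural network, which is what let the same trial-and-error loop scale up to raw Atari screens, and, in Hünermann’s case, to a car’s sensor readings.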

At the time, Hünermann was busy building iOS apps and websites as extracurricular coding projects. Even without DeepMind’s resources, he decided to follow Google’s example: he downloaded the team’s paper, read it, and had a go at coding a project of his own.

“I was really interested in this field of deep learning and wanted to get to know it,” Hünermann told Digital Trends. “I thought that one possible way to do that would be to create a self-driving car project. I didn’t actually have a car, so I decided to do it in the browser.”

The virtual cars themselves boast 19 distance sensors, which radiate out from the car in different directions. You can picture these like torch beams, with each beam starting out strong and then getting fainter the further it extends from the vehicle. The shorter the beam, the higher the input the agent receives when it comes into contact with something, similar to parking sensors that beep more rapidly the closer you get to a wall. When taken in conjunction with the speed of the car and knowledge of the action it is taking, this gives the cars 158 dimensions of information.
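
In code, that sensor encoding might look something like the following sketch (an illustration based on the article’s description, not Hünermann’s actual implementation; the beam range, the normalization, and the exact route to 158 dimensions are all assumptions):

```typescript
// Hypothetical encoding of the 19 distance sensors as network inputs,
// based on the article's description rather than the project's source.
// Each beam reports a stronger signal the closer an obstacle is, like
// a parking sensor that beeps faster as you approach a wall.
const NUM_SENSORS = 19;
const MAX_RANGE = 100; // beam length in world units (assumed)

function sensorInputs(distances: number[]): number[] {
  if (distances.length !== NUM_SENSORS) {
    throw new Error(`expected ${NUM_SENSORS} beam readings`);
  }
  // distances[i] = distance to the nearest obstacle along beam i,
  // capped at MAX_RANGE when the beam hits nothing at all.
  return distances.map((d) => 1 - Math.min(d, MAX_RANGE) / MAX_RANGE);
}

// One observation combines the beam readings with the car's speed and
// its last action; stacking a short history of such frames is one
// plausible way to arrive at the 158 input dimensions mentioned above.
function observation(
  distances: number[],
  speed: number,
  lastAction: number,
): number[] {
  return [...sensorInputs(distances), speed, lastAction];
}
```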

This data is then fed into a multi-layer neural network. The more the cars drive and crash, the more the “weights” connecting the network’s different nodes are adjusted, so that the network learns what to do. The result is that, like any human skill, the longer the cars practice driving, the better they get.
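
That weight adjustment is the heart of the whole thing. Here is a toy, single-neuron version of the idea (again, a sketch rather than the project’s real multi-layer training loop):

```typescript
// Toy illustration of "adjusting the weights" (not the project's real
// training loop): one gradient step for a single linear neuron with a
// squared-error loss. Deep Q-learning does the same thing at scale,
// across many layers, with the reward signal supplying the target.
function trainStep(
  weights: number[],
  inputs: number[],
  target: number,
  learningRate: number,
): number {
  // Forward pass: the neuron's prediction is a weighted sum of inputs.
  const prediction = weights.reduce((sum, w, i) => sum + w * inputs[i], 0);
  const error = prediction - target;
  // Backward pass: nudge each weight opposite the error it contributed.
  for (let i = 0; i < weights.length; i++) {
    weights[i] -= learningRate * error * inputs[i];
  }
  return error * error; // squared error, handy for tracking progress
}
```

Repeat that little correction millions of times, across thousands of weights, and “practice makes perfect” stops being a metaphor.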

They’re not perfect, of course. In particular, the cars tend to be a bit optimistic when it comes to the size of gap they can squeeze through, since the sensor positioned at the front of the car spots open road without always taking the car’s width into account. Still, it’s impressive stuff — and the point is that it’s getting more impressive all the time.

“One thing I’d like to add is more intelligence so that the cars can realize that they’re stuck, and back up and try another route,” Hünermann continued. “It would also be really interesting to add traffic, and maybe even lanes as well. The idea is to get it to reflect, as closely as possible, the real world.”

If you want to follow what he’s doing with the project, Hünermann has made the code for the demo, along with the entire JavaScript library, available on GitHub. Given that real-life self-driving cars are based on the same kinds of neural networks used here, Hünermann’s creation is a great way to get to grips with a simplified version of the tech that’s (no pun intended) driving real-world autonomous car projects.

As to what’s next for himself, Hünermann is off to study Computer Science at university in England this year. “I’d like to do this as a job,” he said. “I’m absolutely fascinated by this area of research.”

Who knows: by the time he arrives in the U.K., he may even be legally old enough to drive himself!
