
A modular supercomputer built to birth AGI could be online by next year

AI startup SingularityNet is set to deploy a “multi-level cognitive computing network” in the coming months. The network is designed to host and train the models that will form the basis of an artificial general intelligence (AGI) capable of matching, and potentially exceeding, human cognition, the company announced on Monday.

Achieving AGI is widely viewed as the next major milestone in artificial intelligence development. While today’s cutting-edge models like GPT-4o and Gemini 1.5 Pro are immensely powerful and can perform specific tasks at superhuman levels, they’re incapable of applying those skills across disciplines. AGI, though still theoretical at this point, would be free of those limitations and able to reason and learn on its own, regardless of the task.


SingularityNet is working to build the compute base necessary to train and deploy such a system using some of the most advanced components currently on the market. Per a report from LiveScience, the startup’s modular supercomputer will sport Nvidia L40S GPUs, AMD Instinct and Genoa processors, Tenstorrent Wormhole server racks running Nvidia H200 GPUs, as well as Nvidia’s 1,500W-plus GB200 Blackwell systems.

Supercomputer architectures differ from a conventional desktop setup in that they run multiple sets of processors (both CPUs and GPUs) assembled into individual nodes. Those nodes are then daisy-chained together by the tens of thousands into the larger arrays that make up the overarching supercomputer.
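
As a rough illustration only (this is not SingularityNet's actual configuration, and the component counts below are made up), you can picture the machine as many identical nodes, each bundling a few CPUs with a rack of GPU accelerators, linked together into one cluster:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One supercomputer node: a small set of CPUs plus attached GPU accelerators."""
    cpus: list[str] = field(default_factory=lambda: ["AMD Genoa"] * 2)
    gpus: list[str] = field(default_factory=lambda: ["Nvidia L40S"] * 8)

@dataclass
class Cluster:
    """Many identical nodes chained together over a high-speed interconnect."""
    nodes: list[Node]

    @property
    def total_gpus(self) -> int:
        return sum(len(n.gpus) for n in self.nodes)

# Tens of thousands of nodes make up the overall machine.
cluster = Cluster(nodes=[Node() for _ in range(20_000)])
print(cluster.total_gpus)  # 160,000 GPUs in this hypothetical layout
```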

“This supercomputer in itself will be a breakthrough in the transition to AGI. While the novel neural-symbolic AI approaches developed by the SingularityNET AI team decrease the need for data, processing and energy somewhat relative to standard deep neural nets, we still need significant supercomputing facilities,” SingularityNet CEO Ben Goertzel told LiveScience. “The mission of the computing machine we are creating is to ensure a phase transition from learning on big data and subsequent reproduction of contexts from the semantic memory of the neural network to non-imitative machine thinking based on multi-step reasoning algorithms and dynamic world modeling.”

“Before our eyes, a paradigmatic shift is taking place towards continuous learning, seamless generalisation and reflexive AI self-modification,” he continued.

The company plans to grant public access to the supercomputer, once it comes fully online in late 2024 or early 2025, using a token system. Users will purchase tokens, as they would at an old-school arcade, and then spend those tokens for a certain number of opportunities to interact with the system. The data generated by those interactions will then be fed back into the system for further AGI experimentation and development.
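
SingularityNet hasn't published the technical details of that token system, but as a purely illustrative sketch (the class and method names here are assumptions, not the company's API), metered access boils down to a balance that is topped up in advance and debited per compute job:

```python
class TokenAccount:
    """Illustrative token-metered access: buy tokens up front, spend them per compute job."""

    def __init__(self, balance: int = 0):
        self.balance = balance

    def purchase(self, tokens: int) -> None:
        """Add purchased tokens to the account."""
        self.balance += tokens

    def run_job(self, cost: int) -> bool:
        """Debit the job's token cost; refuse the job if the balance is too low."""
        if cost > self.balance:
            return False
        self.balance -= cost
        return True

account = TokenAccount()
account.purchase(100)          # buy tokens in advance, arcade-style
ok = account.run_job(cost=25)  # each interaction with the system spends some of them
print(ok, account.balance)     # True 75
```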

SingularityNet is far from the only company racing to build and deploy the first AGI. The pursuit of such systems is one of OpenAI’s founding tenets, while Meta’s Mark Zuckerberg has earmarked more than $10 billion for his company’s AGI R&D.

Andrew Tarantola