
From OpenAI to hacked smart glasses, here are the 5 biggest AI headlines this week

Ray-Ban Meta smart glasses in the Headliner style are worn by a model.
Meta

We officially transitioned into Spooky Season this week and, between OpenAI’s $6.6 billion funding round, Nvidia’s surprise LLM, and some privacy-invading Meta smart glasses, we saw a scary number of developments in the AI space. Here are five of the biggest announcements.

OpenAI secures $6.6 billion in latest funding round

OpenAI CEO Sam Altman standing on stage at a product event.
Andrew Martonik / Digital Trends

Sam Altman’s charmed existence continues apace with news this week that OpenAI has secured an additional $6.6 billion in investment as part of its most recent funding round. Existing investors like Microsoft and Khosla Ventures were joined by newcomers SoftBank and Nvidia. The AI company is now valued at a whopping $157 billion, making it one of the wealthiest private enterprises on Earth.


And, should OpenAI’s proposed for-profit restructuring plan go through, that valuation would grant Altman more than $150 billion in equity, rocketing him onto the list of the top 10 richest people on the planet. Following the funding news, OpenAI rolled out Canvas, its take on Anthropic’s Artifacts collaborative feature.


Nvidia just released an open-source LLM to rival GPT-4

Nvidia CEO Jensen Huang in front of a background.
Nvidia

Nvidia is making the leap from AI hardware to AI software with this week’s release of NVLM 1.0, a truly open-source large language model that excels at a variety of vision and language tasks. The company claims that the new model family, led by the 72 billion-parameter NVLM-D-72B, can rival GPT-4o. However, Nvidia is positioning NVLM not as a direct competitor to other frontier-class LLMs, but as a platform on which other developers can create their own chatbots and applications.

Google’s Gemini Live now speaks nearly four dozen languages

A demonstration of Gemini Live on a Google Pixel 9.
Joe Maring / Digital Trends

Seems like being able to speak directly with your chatbot is the new must-have feature. Google announced this week that it is expanding Gemini Live to converse in nearly four dozen languages beyond English, starting with French, German, Portuguese, Hindi, and Spanish. Microsoft also revealed a similar feature for Copilot, dubbed Copilot Voice, that the company claims is “the most intuitive and natural way to brainstorm on the go.” They join ChatGPT’s Advanced Voice Mode and Meta’s Natural Voice Interactions in allowing users to talk with their phones, not just to them.

California governor vetoes expansive AI safety bill

California Gov. Gavin Newsom speaking at a lectern.
Gage Skidmore / Flickr

All the fighting over SB 1047, California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was for naught as Gov. Gavin Newsom vetoed the AI safety bill this week. In a letter to lawmakers, he argued that the bill focused myopically on the largest of language models and that “smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.”

Hackers turn Meta smart glasses into automatic doxing machine

The Ray-Ban Meta smart glasses next to a pool.
Phil Nickinson / Digital Trends

A pair of Harvard computer science students managed to modify a pair of commercially available Meta smart glasses so they can identify and look up any person who walks into their field of vision, 404 Media reported this week. The glasses, part of the I-XRAY experiment, were designed to capture images of strangers on the street, run those images through PimEyes image recognition software to identify the subject, then use that basic information to search for their personal information (i.e., their phone number and home address) on commercial data brokerage sites.

“To use it, you just put the glasses on, and then as you walk by people, the glasses will detect when somebody’s face is in frame,” the pair explained in a video demo posted to X. “After a few seconds, their personal information pops up on your phone.” The privacy implications of such a system are terrifying. The duo have no intention of publicly releasing the source code, but now that they’ve shown it can be done, there is little to prevent others from reverse engineering it.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…