
Google TV will soon get Gemini’s AI smarts

Using the Google TV Streamer. (Digital Trends)

Starting later in 2025, yelling at your TV will finally accomplish something thanks to a new Google initiative announced Monday ahead of CES 2025. The company plans to incorporate its Gemini AI models into the Google TV experience as a means to “make interacting with your TV more intuitive and helpful.”

This "will make searching through your media easier than ever, and you will be able to ask questions about travel, health, space, history, and more, with videos in the results for added context," the company wrote in its announcement blog post. Google previously lost a significant chunk of its market value after Gemini's predecessor, Bard, flubbed an answer about the James Webb Space Telescope during the model's first public demo in 2023. Google also had to pause the AI's image-generation feature in early 2024, after it began producing historically inaccurate and offensive depictions of people.


“The Gemini model on Google TV also enables you to do other things like create customized artwork with the family, control your smart home devices while your TV is in ambient mode, and even get an overview of the day’s news,” the company continued. You’d think that people would prefer being able to control their smart home devices without having to put their TV into ambient mode, but the company seems confident in its offering. “These features will begin rolling out later this year on select Google TV devices,” Google added.

Google has been investing heavily in its Gemini model and app since 2023 and looks to continue doing so in 2025. The AI has spread steadily throughout Google’s product ecosystem, from its mobile, laptop, and tablet offerings to Workspace apps like Calendar and Gmail and, per another Monday announcement, the Wear OS environment. What’s more, the company plans to make Gemini’s expansion its “biggest focus” of the new year.

“I think 2025 will be critical,” Google CEO Sundar Pichai told employees during a companywide strategy session held in December. “I think it’s really important we internalize the urgency of this moment, and need to move faster as a company. The stakes are high. These are disruptive moments. In 2025, we need to be relentlessly focused on unlocking the benefits of this technology and solving real user problems.”

Andrew Tarantola
ChatGPT just got a bump to its coding powers

For its penultimate 12 Days of OpenAI announcement, the company revealed a trio of updates to ChatGPT's app integration on Thursday, which should make using the AI in conjunction with other programs on your desktop less of a chore.

OpenAI unveiled ChatGPT's ability to collaborate with select developer-focused macOS apps, specifically VS Code, Xcode, TextEdit, Terminal, and iTerm2, back in November. Rather than requiring you to copy and paste code into ChatGPT, the feature lets the chatbot pull the specified content from the coding app as you enter your text prompt. ChatGPT, however, cannot write code directly into the app the way Cursor or GitHub Copilot can.

Generative-AI-powered video editing is coming to Instagram

Editing your Instagram videos will soon be as simple as typing out a text prompt, thanks to a new generative AI tool the company hopes to release in 2025, CEO Adam Mosseri announced Thursday.

The upcoming tool, which leverages Meta's Movie Gen model, will enable users to "change nearly any aspect of your videos," Mosseri said during his preview demonstration. Those changes range from subtle modifications, like adding a gold chain to his outfit or placing a hippo in the background, to wholesale alterations, like swapping his wardrobe or giving himself a felt, Muppet-like appearance.

Ray-Ban Meta Smart Glasses get real-time visual AI and translation

Meta is rolling out two long-awaited features to its popular Ray-Ban Meta smart glasses: real-time visual AI and live translation. The features are limited to testers for now, but the plan is that, eventually, anyone who owns a pair will get a live assistant that can see, hear, and translate Spanish, French, and Italian.

They arrive as part of the v11 update, which covers the upgrades Meta described at its Connect 2024 event and also includes Shazam integration for music recognition. It all happens via the camera, speakers, and microphones built into the glasses, so you don’t need to hold up your phone.
