
DuckDuckGo’s new AI service keeps your chatbot conversations private


DuckDuckGo released its new AI Chat service on Thursday, letting users anonymously access popular chatbots like GPT-3.5 and Claude 3 Haiku without sharing their personal information, while also preventing the companies behind those models from training their AIs on the conversations. AI Chat essentially works by inserting itself between the user and the model, like a high-tech game of telephone.

From the AI Chat home screen, users can select which chat model they want to use — Meta’s Llama 3 70B model and Mixtral 8x7B are available in addition to GPT-3.5 and Claude — then begin conversing with it as they normally would. DuckDuckGo connects to that chat model as an intermediary, replacing the user’s IP address with one of its own. “This way it looks like the requests are coming from us and not you,” the company wrote in a blog post.
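DuckDuckGo has not published its proxy code, but the general idea of an anonymizing intermediary can be sketched in a few lines. In this hypothetical Python sketch (the header names and proxy identity are illustrative assumptions, not DuckDuckGo's actual implementation), the intermediary strips identifying request headers before forwarding the chat payload, so the upstream provider sees only the proxy's own generic identity:

```python
# Hypothetical sketch of an anonymizing chat proxy — NOT DuckDuckGo's
# actual code. The proxy drops headers that could identify the user
# before forwarding the chat payload to the upstream model provider.

IDENTIFYING_HEADERS = {
    "cookie", "authorization", "user-agent",
    "x-forwarded-for", "x-real-ip", "referer",
}

def sanitize_headers(headers: dict) -> dict:
    """Remove identifying headers; keep only what the model API needs."""
    clean = {k: v for k, v in headers.items()
             if k.lower() not in IDENTIFYING_HEADERS}
    # The proxy presents its own generic identity to the provider.
    clean["User-Agent"] = "anonymizing-proxy/1.0"
    return clean

def forward_chat(user_headers: dict, payload: dict) -> dict:
    """Build the request the upstream provider actually sees."""
    return {
        "headers": sanitize_headers(user_headers),
        "body": payload,  # the prompt itself passes through unchanged
    }

request = forward_chat(
    {"Cookie": "session=abc123", "User-Agent": "Firefox/126",
     "Content-Type": "application/json"},
    {"model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Hello"}]},
)
print(sorted(request["headers"]))  # cookie and browser fingerprint are gone
```

In a real deployment the provider would additionally see the proxy's IP address rather than the user's, which is the substitution the blog post describes.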


As with the company’s anonymized search feature, all metadata is stripped from user queries, so even though DuckDuckGo warns that “the underlying model providers may store chats temporarily,” there’s no way to personally identify users from those chats. And, as The Verge notes, DuckDuckGo also has agreements in place with those AI companies that prevent them from using chat prompts and outputs to train their models and require them to delete any saved data within 30 days.

Data privacy is a growing concern in the AI community, even as the number of people using AI both individually and at work continues to rise. A Pew Research study from October found that roughly eight in 10 “of those familiar with AI say its use by companies will lead to people’s personal information being used in ways they won’t be comfortable with.” While most chatbots already allow users to opt out of having their data collected, those options are often buried in layers of menus, with the onus on the user to find and select them.

AI Chat is available at both duck.ai and duckduckgo.com/chat. It’s free to use “within a daily limit,” though the company is currently considering a more expansive paid option with higher usage limits and access to more advanced models. This new service follows last year’s release of DuckDuckGo’s DuckAssist, which provides anonymized, AI-generated synopses of search results, akin to Google’s SGE.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
ChatGPT is violating your privacy, says major GDPR complaint

Ever since the first generative artificial intelligence (AI) tools exploded onto the tech scene, there have been questions over where they’re getting their data and whether they’re harvesting your private data to train their products. Now, ChatGPT maker OpenAI could be in hot water for exactly these reasons.

According to TechCrunch, a complaint has been filed with the Polish Office for Personal Data Protection alleging that ChatGPT violates a large number of rules found in the European Union’s General Data Protection Regulation (GDPR). It suggests that OpenAI’s tool has been scooping up user data in all sorts of questionable ways.

Read more
Google Bard could soon become your new AI life coach

Generative artificial intelligence (AI) tools like ChatGPT have gotten a bad rep recently, but Google is apparently trying to serve up something more positive with its next project: an AI that can offer helpful life advice to people going through tough times.

If a fresh report from The New York Times is to be believed, Google has been testing its AI tech with at least 21 different assignments, including “life advice, ideas, planning instructions and tutoring tips.” The work spans both professional and personal scenarios that users might encounter.

Read more
AI can now steal your passwords with almost 100% accuracy — here’s how

Researchers at Cornell University have discovered a new way for AI tools to steal your data: keystrokes. A new research paper details an AI-driven attack that can steal passwords with up to 95% accuracy by listening to what you type on your keyboard.

The researchers accomplished this by training an AI model on the sound of keystrokes and deploying it on a nearby phone. The phone’s integrated microphone listened for keystrokes on a MacBook Pro and reproduced them with 95% accuracy, the highest the researchers have seen without the use of a large language model.
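The paper's actual pipeline trains a deep model on spectrograms of real keystroke recordings; the core idea — that each key's sound has a distinguishable acoustic signature — can be illustrated with a much simpler, entirely hypothetical sketch. Here synthetic "keystroke" waveforms (a toy stand-in for real audio) are classified by nearest-centroid matching on FFT magnitude features:

```python
import numpy as np

# Toy illustration of acoustic keystroke classification — NOT the Cornell
# pipeline, which trains a deep model on spectrograms of real recordings.
# Each "key" gets a distinct synthetic sound; a nearest-centroid classifier
# on FFT magnitude features then identifies keys from noisy recordings.

rng = np.random.default_rng(0)
KEYS = list("abcd")
SAMPLES = 256

def key_sound(key: str, noise: float = 0.05) -> np.ndarray:
    """Synthesize a 'click' whose dominant frequency depends on the key."""
    t = np.arange(SAMPLES)
    freq = 10 + 7 * KEYS.index(key)  # each key gets its own pitch
    return (np.sin(2 * np.pi * freq * t / SAMPLES)
            + noise * rng.standard_normal(SAMPLES))

def features(wave: np.ndarray) -> np.ndarray:
    """FFT magnitude spectrum as a crude acoustic fingerprint."""
    return np.abs(np.fft.rfft(wave))

# "Train": average fingerprint per key over several noisy recordings.
centroids = {k: np.mean([features(key_sound(k)) for _ in range(20)], axis=0)
             for k in KEYS}

def classify(wave: np.ndarray) -> str:
    """Assign a recording to the key with the nearest fingerprint."""
    f = features(wave)
    return min(KEYS, key=lambda k: np.linalg.norm(f - centroids[k]))

# "Attack": recover a typed sequence from fresh noisy recordings.
typed = "badcab"
recovered = "".join(classify(key_sound(c)) for c in typed)
print(recovered)
```

Real keyboards are far harder targets — keys on one keyboard sound much more alike than these synthetic tones — which is why the researchers needed a trained deep model and careful feature extraction to reach 95% accuracy.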

Read more