
DuckDuckGo’s new AI service keeps your chatbot conversations private

DuckDuckGo

DuckDuckGo released its new AI Chat service on Thursday, letting users anonymously access popular chatbots like GPT-3.5 and Claude 3 Haiku without sharing their personal information, while also preventing the companies behind those models from training their AIs on the conversations. AI Chat essentially works by inserting itself between the user and the model, like a high-tech game of telephone.

From the AI Chat home screen, users can select which chat model they want to use — Meta’s Llama 3 70B model and Mixtral 8x7B are available in addition to GPT-3.5 and Claude — then begin conversing with it as they normally would. DuckDuckGo will connect to that chat model as an intermediary, substituting the user’s IP address with one of their own. “This way it looks like the requests are coming from us and not you,” the company wrote in a blog post.
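The relay pattern described above can be illustrated with a short sketch. This is purely hypothetical — DuckDuckGo has not published its implementation, and every name here (the header list, the `anonymize_request` function, the example IP addresses) is invented for illustration. The idea is simply that the intermediary substitutes its own IP for the user's and strips identifying metadata before forwarding the prompt:

```python
# Illustrative sketch only; not DuckDuckGo's actual code.
# Models an anonymizing relay: the provider sees the proxy's IP and a
# scrubbed set of headers, while the chat prompt passes through unchanged.

IDENTIFYING_HEADERS = {"x-forwarded-for", "user-agent", "cookie", "referer"}

def anonymize_request(user_request: dict, proxy_ip: str) -> dict:
    """Return the request a model provider would see after the relay."""
    headers = {
        k: v for k, v in user_request.get("headers", {}).items()
        if k.lower() not in IDENTIFYING_HEADERS
    }
    return {
        "source_ip": proxy_ip,             # provider sees the proxy, not the user
        "headers": headers,                # identifying metadata stripped
        "prompt": user_request["prompt"],  # chat content forwarded as-is
    }

request = {
    "source_ip": "203.0.113.7",  # the user's real IP (documentation address)
    "headers": {"User-Agent": "Firefox/126", "Accept": "application/json"},
    "prompt": "What is the capital of France?",
}
relayed = anonymize_request(request, proxy_ip="198.51.100.1")
```

In this toy version the provider receives only the proxy's address and the prompt itself, which mirrors the company's claim that "it looks like the requests are coming from us and not you."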


As with the company’s anonymized search feature, all metadata is stripped from user queries, so even though DuckDuckGo warns that “the underlying model providers may store chats temporarily,” there’s no way to personally identify users based on those chats. And, as The Verge notes, DuckDuckGo also has agreements in place with those AI companies that prevent them from using chat prompts and outputs to train their models and require them to delete any saved data within 30 days.

Data privacy is a growing concern within the AI community, even as the number of people using AI tools, both personally and at work, continues to rise. A Pew Research study from October found that roughly eight in 10 “of those familiar with AI say its use by companies will lead to people’s personal information being used in ways they won’t be comfortable with.” While most chatbots already allow their users to opt out of having their data collected, those options are often buried in layers of menus, with the onus on the user to find and select them.

AI Chat is available at both duck.ai and duckduckgo.com/chat. It’s free to use “within a daily limit,” though the company is currently considering a more expansive paid option with higher usage limits and access to more advanced models. This new service follows last year’s release of DuckDuckGo’s DuckAssist, which provides anonymized, AI-generated synopses of search results, akin to Google’s SGE.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…