Hacker honeypot could help secure networks everywhere

One of the biggest problems with major hacks like those that hit Sony and Target last year is that they often go undetected for a long time. That gives those responsible plenty of time to use their position within the network to sniff or phish out more credentials, moving them higher up the chain to where the really valuable data is. But what if a trap were set for intrepid hackers that tipped off the admins to their presence?

That’s the idea being proposed by South African digital security company Thinkst. It wants to add a honeypot to enterprise networks that represents too valuable a target for hackers to pass up. When they attempt to read its contents or bypass its lax security, network admins, and potentially even the authorities, can be alerted.

Of course, this isn’t some brand-new technique. The problem with a traditional honeypot is that it requires regular management and a lot of technical know-how to keep it consistently tempting to hackers without looking too good to be true. Where Thinkst comes in is that it has created a piece of hardware that can sit on a network and reliably report intrusions with minimal maintenance.

The piece of kit is called Canary, after the poor avians that were taken into coal mines back in the day. Setup is simple: press a single button, then connect to the device over Bluetooth to adjust how it appears on the network, choosing from several OS profiles. Admins can also add tempting-looking files that sound like they’re related to valuable data.

If any are ever accessed, an alert is sent out.
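The core idea is simple enough to sketch: a canary advertises a tempting service on the network, and the mere act of touching it is treated as an alert. The snippet below is a minimal illustration of that principle in Python, not Thinkst’s actual implementation; the FTP-style banner and the in-memory alert list are assumptions for the sake of the example.

```python
import socket
import threading

def start_canary(host="127.0.0.1", port=0, banner=b"220 backup-server FTP ready\r\n"):
    """Listen on a TCP port posing as a tempting service and record any touch.

    Returns (bound_port, alerts). `alerts` collects the source address of every
    client that connects -- in a real deployment this is where an admin would
    be notified, since legitimate traffic has no reason to reach this port.
    """
    alerts = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(5)
    bound_port = srv.getsockname()[1]

    def serve():
        while True:
            try:
                conn, addr = srv.accept()
            except OSError:
                return  # listening socket closed; shut down quietly
            alerts.append(addr)   # any connection at all counts as suspicious
            conn.sendall(banner)  # keep up appearances for the intruder
            conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port, alerts
```

The point of the design is that false positives are nearly zero: nothing on the network is supposed to talk to the decoy, so every hit is worth investigating.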

Installation of two honeypots and their annual management from Thinkst costs $5,000. While unlikely to be perfect, they sound like a solid way to augment other security measures.

Jon Martindale