New genre of AI programs takes computer hacking to another level

08 August 2018 - 14:20 By Reuters
Researchers warned on Wednesday of a new genre of AI-driven programs that can bypass even the most sophisticated defences.
Image: iStock

The nightmare scenario for computer security - artificial intelligence (AI) programs that can learn how to evade even the best defences - may already have arrived.

That warning from security researchers is driven home by a team from IBM that has used the AI technique known as machine learning to build hacking programs capable of slipping past top-tier defensive measures. The group will unveil details of its experiment at the Black Hat security conference in Las Vegas on Wednesday.

State-of-the-art defences generally rely on examining what the attack software is doing, rather than the more commonplace technique of analysing software code for danger signs. But the new genre of AI-driven programs can be trained to stay dormant until they reach a very specific target, making them exceptionally hard to stop.

No one has yet boasted of catching any malicious software that clearly relied on machine learning or other variants of artificial intelligence, but that may just be because the attack programs are too good to be caught.

Researchers say that, at best, it's only a matter of time. Free AI building blocks for training programs are readily available from Alphabet's Google and others, and the ideas work all too well in practice.

“I absolutely do believe we’re going there,” said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec. “It’s going to make it a lot harder to detect.”

The most advanced nation-state hackers have already shown that they can build attack programs that activate only when they have reached a target. The best-known example is Stuxnet, which was deployed by US and Israeli intelligence agencies against a uranium enrichment facility in Iran.

The IBM effort, named DeepLocker, showed that a similar level of precision can be available to those with far fewer resources than a national government.

In a demonstration using publicly available photos of a sample target, the team used a hacked version of video-conferencing software that swung into action only when it detected the face of a target.
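The concealment technique IBM describes can be sketched in a deliberately benign form: the payload is encrypted with a key derived from an attribute of the intended target, so an analyst who never observes the trigger cannot recover, or even identify, what the program would do. This is an illustrative assumption-laden sketch, not IBM's actual DeepLocker code; in the real demo the key came from a neural network's recognition of a face, whereas here a hash of a stand-in attribute plays that role.

```python
import hashlib

def derive_key(observed_attribute: bytes) -> bytes:
    # Stand-in for DeepLocker's trick: in the IBM demo the key emerges
    # from a neural network's output on the target's face; here a hash
    # of an observed attribute serves the same illustrative purpose.
    return hashlib.sha256(observed_attribute).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Simple XOR stream "encryption", enough to show the concept.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Attacker side": conceal a harmless stand-in payload against the target.
target_attribute = b"face-embedding-of-intended-target"  # hypothetical
payload = b"print('benign demo payload')"
concealed = xor_bytes(payload, derive_key(target_attribute))

# "Victim side": the payload decrypts only when the trigger attribute
# is actually observed; any other input yields unusable bytes.
assert xor_bytes(concealed, derive_key(target_attribute)) == payload
assert xor_bytes(concealed, derive_key(b"someone-else")) != payload
```

Because the key never appears in the program itself, static analysis of the concealed bytes reveals nothing about the payload, which is why researchers say such samples are exceptionally hard to catch before they fire.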

“We have a lot of reason to believe this is the next big thing,” said lead IBM researcher Marc Stoecklin. “This may have happened already, and we will see it two or three years from now.”

At a recent New York conference, Hackers on Planet Earth, defence researcher Kevin Hodges showed off an "entry-level" automated program he made with open-source training tools that tried multiple attack approaches in succession.

"We need to start looking at this stuff now," said Hodges. "Whoever you personally consider evil is already working on this." 

