Artificial intelligence will become a weapon for hackers


Machine learning is often defined as the “ability to learn without being explicitly programmed,” and it stands to have a huge impact on the information security industry. It is a technology with the potential to help security analysts work through malware and logs so that vulnerabilities are identified and fixed earlier. It may also improve endpoint security, automate repetitive tasks, and even reduce the likelihood of attacks that lead to data exfiltration.
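As a rough illustration of that defensive use case, the sketch below trains scikit-learn’s IsolationForest on synthetic log-derived features and flags outlying sessions. The feature set (request rate, bytes sent, distinct ports) and every number in it are invented for illustration; none of it comes from a particular product or from the article’s sources.

```python
# Minimal sketch: anomaly detection over log-derived features with scikit-learn.
# The feature set (requests/min, bytes out, distinct ports) is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" activity: modest request rates, small transfers, few ports.
normal = np.column_stack([
    rng.normal(60, 10, 1000),      # requests per minute
    rng.normal(5e4, 1e4, 1000),    # bytes sent per minute
    rng.poisson(3, 1000),          # distinct destination ports
])

# A handful of suspicious sessions: high volume, many ports (scanning/exfiltration-like).
suspicious = np.column_stack([
    rng.normal(600, 50, 5),
    rng.normal(5e6, 5e5, 5),
    rng.poisson(40, 5),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# -1 marks an outlier, 1 marks an inlier.
print(model.predict(suspicious))   # expected: mostly -1
print(model.predict(normal[:5]))   # expected: mostly 1
```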

The problem is that hackers know this too, and they are expected to build their own artificial intelligence and machine learning tools to launch attacks.


These criminals, increasingly well organized and offering an ever broader range of services for hire on the web, may eventually out-innovate the defenders, especially given the untapped potential of technologies such as machine learning and deep learning.

“We must recognize that although technologies such as machine learning, deep learning, and artificial intelligence will be cornerstones of future cyber defenses, our adversaries are working just as hard to implement and innovate around them,” McAfee CTO Steve Grobman said in a media comment. “As is often the case in cybersecurity, intelligence amplified by technology will be the winning factor in the arms race between attackers and defenders.”

Machine-learning-based attacks may still be rare for the moment, but some of these techniques have already begun to be adopted by criminal groups.

1. Malware that evades detection

For cybercriminals, the creation of malware is still largely a manual process. They write scripts to build computer viruses and Trojans, and use rootkits, password grabbers, and other tools to help distribute and execute them.

But what if they could speed this process up? Could machine learning help create malware?

The first publicly known example of machine learning being used to create malware appeared in a 2017 paper titled “Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN.” In it, the authors describe how they built a generative adversarial network (GAN) to generate adversarial malware samples whose key property is that they can bypass machine-learning-based detection systems.
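The paper’s approach (often referred to as MalGAN) trains a generator against a substitute detector. The toy PyTorch sketch below keeps only the core idea under heavy simplifying assumptions: the “black-box” detector is a frozen random scorer rather than a trained substitute, the feature vectors are random bits, and the full adversarial training loop is collapsed into training the generator alone.

```python
# Toy sketch of the MalGAN idea: a generator learns to perturb malware feature
# vectors so that a (frozen) stand-in detector scores them as benign.
# Feature dimension, detector, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES = 128

# Stand-in "black-box" detector: in the paper this is a trained substitute model;
# here it is just a fixed random linear scorer with frozen weights.
detector = nn.Sequential(nn.Linear(N_FEATURES, 1), nn.Sigmoid())
for p in detector.parameters():
    p.requires_grad_(False)

# Generator: takes a malware feature vector plus noise, outputs features to add.
generator = nn.Sequential(
    nn.Linear(N_FEATURES * 2, 256), nn.ReLU(),
    nn.Linear(256, N_FEATURES), nn.Sigmoid(),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

malware = (torch.rand(512, N_FEATURES) > 0.5).float()   # fake binary feature vectors

for step in range(300):
    noise = torch.rand_like(malware)
    added = generator(torch.cat([malware, noise], dim=1))
    adversarial = torch.clamp(malware + added, 0.0, 1.0)  # only add features, never remove
    detection_prob = detector(adversarial)
    loss = detection_prob.mean()        # push the detector's score toward "benign" (0)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean detection score after training:", detection_prob.mean().item())
```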

In another example, at DEF CON 2017, security company Endgame revealed how it used Elon Musk’s OpenAI framework to create custom malware that security engines could not detect. Endgame’s research involved taking binaries that appeared malicious and altering a few parts so that the code appeared benign and trustworthy to antivirus engines.
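A much-simplified stand-in for that kind of evasion loop is sketched below: instead of a reinforcement-learning agent mutating real binaries (Endgame’s actual setup), it randomly toggles bits in a toy feature vector and keeps any mutation that lowers the score of a stand-in classifier. The feature rule, the classifier, and the scores are all invented for illustration.

```python
# Much-simplified stand-in for detection evasion: random single-bit mutations of
# a toy feature vector are kept whenever they lower a stand-in classifier's
# maliciousness score. This is not Endgame's reinforcement-learning approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N = 64

# Train a toy "antivirus" scorer on random labelled feature vectors.
X = rng.integers(0, 2, size=(2000, N))
y = (X[:, :8].sum(axis=1) > 4).astype(int)            # arbitrary "malicious" rule
clf = LogisticRegression(max_iter=1000).fit(X, y)

sample = np.ones(N, dtype=int)                          # a "malicious" sample
score = clf.predict_proba(sample.reshape(1, -1))[0, 1]

for _ in range(500):
    candidate = sample.copy()
    candidate[rng.integers(N)] ^= 1                     # flip one random feature
    cand_score = clf.predict_proba(candidate.reshape(1, -1))[0, 1]
    if cand_score < score:                              # keep mutations that look more benign
        sample, score = candidate, cand_score

print("final maliciousness score:", round(float(score), 3))  # typically close to 0
```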

In the meantime, other researchers predict that machine learning will eventually be used to “modify code based on detection in the lab,” an extension of polymorphic malware.

2. Intelligent botnets for scalable attacks

Security firm Fortinet sees 2018 as the year of “hivenets” and “swarmbots,” essentially marking that “smart” IoT devices can be commandeered to attack vulnerable systems at scale. “They will be able to talk to each other and take action based on shared local information,” said Derek Manky, global security strategist at Fortinet. “In addition, ‘zombies’ will become clever enough to act without instructions from a botnet herder. As a result, hivenets will be able to grow exponentially, widening their ability to attack multiple victims simultaneously and significantly hindering mitigation and response.”

Interestingly, Manky says these attacks do not yet use swarm technology, which would allow hivenets to learn from their past behavior. A branch of artificial intelligence, swarm technology is defined as “the collective behavior of decentralized, self-organized systems, natural or artificial,” and is already used in drones and emerging robotic devices.

3. Advanced spear phishing gets smarter

A more obvious offensive application of machine learning is the use of algorithms such as text-to-speech, speech recognition, and natural language processing (NLP) for smarter social engineering. After all, recurrent neural networks can already be taught a writing style, so in theory phishing emails could become far more sophisticated and credible.

In particular, machine learning could make advanced spear phishing of high-profile targets easier while automating the process as a whole. Systems could be trained on genuine emails and learn to produce messages that look convincing.
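To make the “learning a writing style” point concrete, here is a minimal sketch that fits a tiny word-level Markov chain to a few invented example sentences and samples new text in the same register. Real systems would rely on neural language models trained on far larger corpora; nothing below reflects actual attack tooling.

```python
# Toy word-level Markov chain: learns next-word statistics from a small corpus
# and generates text in a similar register. The corpus is invented.
import random
from collections import defaultdict

corpus = (
    "please review the attached invoice and confirm receipt by friday . "
    "please find the updated schedule attached and confirm your availability . "
    "kindly review the attached report and reply with your comments by monday ."
)

words = corpus.split()
transitions = defaultdict(list)
for a, b in zip(words, words[1:]):
    transitions[a].append(b)

random.seed(3)
word, output = "please", ["please"]
for _ in range(15):
    choices = transitions.get(word)
    if not choices:
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))
```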

In its McAfee Labs predictions for 2017, the company said criminals would increasingly use machine learning to analyze large volumes of stolen records in order to identify potential victims and build contextual details that make emails targeting them more effective.

In addition, at Black Hat USA 2016, John Seymour and Philip Tully presented a paper entitled “Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter,” which described a recurrent neural network that learns to tweet phishing posts at specific users. In the paper, they present SNAP_R, a neural network trained on spear phishing test data that is dynamically seeded with topics extracted from the timeline posts of target users (and of the users they tweet at or follow) in order to make a click more likely.

The resulting system proved remarkably effective: in tests involving 90 users, the framework’s success rate ranged from 30% to 60%, a considerable improvement over manual spear phishing and bulk phishing campaigns.
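The topic-seeding idea can be illustrated in a few lines: count the most frequent terms in a handful of (invented) timeline posts and slot the top one into a message template. This is only a sketch of the concept described in the paper, not the SNAP_R code, and the posts and stop-word list are made up.

```python
# Sketch of the "seed the lure with the target's own topics" idea: count
# frequent terms in invented timeline posts and slot the top topic into a
# template. Illustrative only; this is not the SNAP_R implementation.
from collections import Counter

timeline = [
    "great run at the marathon expo today",
    "training plan for the spring marathon is finally done",
    "new running shoes arrived, marathon prep continues",
]

stopwords = {"the", "at", "for", "is", "a", "new", "today", "finally", "done"}
terms = Counter(
    word for post in timeline for word in post.lower().split()
    if word not in stopwords
)

topic = terms.most_common(1)[0][0]
print(f"Thought you'd like this {topic} article: <link>")   # topic == "marathon"
```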

4. Threat intelligence out of control

Where machine learning is concerned, threat intelligence is arguably a mixed blessing. On the one hand, it is generally accepted that machine learning systems will help analysts identify the real threats coming from multiple systems.

On the other hand, there is also the view that cybercriminals will adapt once again by simply overloading those alerts. McAfee’s Grobman has previously pointed to a technique known as “raising the noise floor.” Hackers can use it to “bombard” an environment in a way that generates large numbers of false positives that ordinary machine learning models come to treat as normal. Once the target recalibrates its system to filter out the false alarms, the attacker can launch a real attack that slips past the machine learning system.
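A minimal sketch of that dynamic, with entirely invented scores and thresholds, is shown below: a threshold-based alerter is flooded with harmless events crafted to trip it, the operator recalibrates to silence the noise, and a genuinely malicious event then scores below the new cut-off.

```python
# Sketch of "raising the noise floor": harmless events are crafted to trigger
# alerts; once the operator recalibrates the threshold to silence them, a real
# attack scores below the new cut-off. All numbers are invented.
import numpy as np

rng = np.random.default_rng(7)

baseline_threshold = 0.7                       # original alerting cut-off
attacker_noise = rng.uniform(0.7, 0.85, 500)   # benign activity scoring just above it
real_attack_score = 0.8

print("alerts during noise campaign:", int((attacker_noise > baseline_threshold).sum()))

# Operator recalibrates so that ~99% of the recent "benign" noise no longer alerts.
new_threshold = float(np.quantile(attacker_noise, 0.99))
print("recalibrated threshold:", round(new_threshold, 3))
print("real attack detected?", real_attack_score > new_threshold)   # now False
```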

5. Unauthorized access

An early example came in 2012, when researchers Claudia Cruz, Fernando Uceda, and Leobardo Reyes published a machine learning security attack in which they used support vector machines (SVMs) to break a system running on reCAPTCHA images with an accuracy of up to 82%. The CAPTCHA mechanisms were all improved afterwards, only for researchers to break them again with deep learning: a paper published in 2016 detailed how a simple CAPTCHA could be cracked with 92% accuracy using deep learning.
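As a rough analogue of that kind of attack, the sketch below trains a support vector machine on scikit-learn’s bundled digits dataset; classic CAPTCHA breaking segments the image into characters and runs a classifier like this on each one. The dataset and the resulting accuracy are not those of the cited studies.

```python
# Rough analogue of SVM-based CAPTCHA breaking: train an SVM to classify small
# character images (scikit-learn's bundled digits dataset). Classic attacks
# segment the CAPTCHA and run a classifier like this on each character.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

clf = svm.SVC(gamma=0.001).fit(X_train, y_train)
print("character classification accuracy:",
      round(accuracy_score(y_test, clf.predict(X_test)), 3))
```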

In addition, the “I Am Robot” research presented at Black Hat last year revealed how researchers broke the latest semantic image CAPTCHAs, comparing various machine learning algorithms along the way. The paper reports 98% accuracy against Google’s CAPTCHA system.

6. Poisoning the machine learning engine

A simpler but highly effective technique is to poison the machine learning engine used to detect malware, rendering it ineffective, much as cybercriminals have done with antivirus engines in the past. The idea sounds simple: a machine learning model learns from input data, and if that data pool is poisoned, the output is poisoned too. Researchers from New York University demonstrated how convolutional neural networks (CNNs) can be backdoored to produce these false (but controlled) results when models are built on outsourced CNN services such as those offered by Google, Microsoft, and AWS.
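The “poisoned input, poisoned output” point is easy to demonstrate with a toy label-flipping experiment, sketched below: the same scikit-learn classifier is trained once on clean labels and once after most “malicious” training samples have been relabelled “benign,” and the poisoned model catches noticeably fewer of the malicious test samples. This is not the NYU backdooring technique, and all data is synthetic.

```python
# Simple demonstration of training-data poisoning via label flipping: the same
# classifier is trained on clean labels and on labels where most "malicious"
# examples were relabelled "benign". Not the NYU CNN-backdoor technique.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison: relabel 80% of the "malicious" (class 1) training samples as "benign".
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
malicious_idx = np.where(poisoned_labels == 1)[0]
flip = rng.choice(malicious_idx, size=int(0.8 * len(malicious_idx)), replace=False)
poisoned_labels[flip] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

# Fraction of truly malicious test samples each model still flags as malicious.
malicious_test = X_test[y_test == 1]
print("clean model catches:   ", round(float(clean.predict(malicious_test).mean()), 3))
print("poisoned model catches:", round(float(poisoned.predict(malicious_test).mean()), 3))
```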