AI will shape both cyber attack and defense in 2018
Artificial intelligence (AI), and in particular its ability to learn, is predicted to become key to both network defense and cyber attacks.
In the summer of 2016, hackers traveled to Las Vegas, United States, with seven machines to compete in the Cyber Grand Challenge (CGC) sponsored by the Defense Advanced Research Projects Agency (DARPA). Over dozens of rounds, each machine raced to find software vulnerabilities, "attempt" to exploit them against its rivals, and patch its own software before the other machines could use the same tactics against it. The participating teams supplied these machines with processing power, software-analysis algorithms, and exploitation tools.
Image: CNBC
The CGC is so far the only all-machine hacking contest. Its winner was a machine called "Mayhem", which is now displayed as the first "non-human entity" in the Smithsonian National Museum of Natural History. The machine also won a black badge at the DEFCON conference, the highest honor hackers dream of.
Mayhem took part in another contest against human hackers in August 2017 and was ultimately defeated. Although it could work day and night without tiring, it lost to the humans because the machine lacked their creativity, intuition, and determination.
What will happen in 2018?
However, this situation will change in 2018. Advances in computing power, in the theory and practice of AI research, and in breakthrough cybersecurity technologies will make machine learning algorithms and techniques key components of cyber defense, and they may influence attacks as well. Hackers are already refining their tooling and learning to work alongside machines to meet these new competitive challenges. For example, the Shellphish team has released angr, an open-source framework for binary analysis and automated exploitation.
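Angr exposes this kind of automated vulnerability hunting as a Python library. The sketch below is illustrative only and not taken from the article: it shows the typical pattern of symbolically executing a binary until execution reaches a target address, then asking the solver for an input that gets there. The binary path and the two addresses are placeholders.

```python
import angr

# Hypothetical example: "./target" and the two addresses below are placeholders,
# not details from the article.
FIND_ADDR = 0x400A00   # e.g. a basic block inside a vulnerable function
AVOID_ADDR = 0x400B00  # e.g. an error handler we want to skip

proj = angr.Project("./target", auto_load_libs=False)

# Start symbolic execution from the program entry point.
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Explore program paths until one reaches FIND_ADDR while avoiding AVOID_ADDR.
simgr.explore(find=FIND_ADDR, avoid=AVOID_ADDR)

if simgr.found:
    found_state = simgr.found[0]
    # Ask the constraint solver for concrete stdin bytes that drive
    # execution to the target address.
    print(found_state.posix.dumps(0))
```

In Shellphish's CGC system this style of analysis was combined with fuzzing and automatic patching; the snippet only shows the path-finding step.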
From a defensive standpoint, cybersecurity experts already rely heavily on automation and machine-driven analysis. Automation, however, is increasingly being turned to offensive use as well. In a survey at the 2017 Black Hat conference, the U.S. cybersecurity company Cylance found that 62% of information security experts believe hackers will weaponize AI in 2018 and begin using it to launch attacks.
At DEFCON 2017, a data scientist at the U.S. endpoint security company Endgame demonstrated and released a malware-manipulation environment for OpenAI Gym, the open-source toolkit for developing learning algorithms. Built on it, Endgame's automated tool learns how to modify bytes in a binary so that malicious files stay hidden and evade detection by antivirus engines.
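The Gym framing makes the attack loop easy to picture. Below is a minimal, hypothetical sketch of such an environment loop; the environment id, reward semantics, and the use of random actions are assumptions for illustration, not details released by Endgame.

```python
import gym

# Placeholder environment id, assumed to follow the classic Gym interface:
# each action applies one byte-level mutation to a malware sample, and the
# reward reflects whether a static antivirus model still flags the file.
env = gym.make("malware-evasion-v0")

for episode in range(10):
    observation = env.reset()               # features of a fresh malware sample
    done = False
    while not done:
        action = env.action_space.sample()  # random mutation; a trained RL agent would choose here
        observation, reward, done, info = env.step(action)
        # reward is positive once the mutated sample evades the detector
```

A reinforcement learning agent trained in such a loop effectively automates the trial-and-error that a human malware author would otherwise perform by hand.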
As such tools mature and competition drives further innovation, it is not hard to imagine this trajectory leading to automated systems that adapt, learn new environments, and identify vulnerabilities in them, although such systems may themselves be hacked in turn.