The Real Threat of Artificial Intelligence in Three Major Security Areas

Real Threat of Artificial Intelligence

The growing popularity of artificial intelligence (AI) has been accompanied by growing public concern. While AI advances the cybersecurity industry, it can also be exploited by malicious actors. Recently, AI experts from industry and academia released a 100-page report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, which sends a clear message to the general public: every advance in AI technology is also an advance for wrongdoers.

Experts say that AI has a “dual purpose”: its ability to make thousands of complex decisions per second may help or harm humans, depending on who designed the AI system. These AI experts have categorized the malicious uses of AI that are present today or likely to emerge in the next five years into three categories: digital security, physical security, and political security.

  • Digital security.
    The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing tradeoff between the scale and efficacy of attacks. This may expand the threat associated with labor-intensive cyberattacks (such as spear phishing). We also expect novel attacks that exploit human vulnerabilities (e.g. through the use of speech synthesis for impersonation), existing software vulnerabilities (e.g. through automated hacking), or the vulnerabilities of AI systems (e.g. through adversarial examples and data poisoning).
  • Physical security.
    The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyberphysical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones).
  • Political security.
    The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.
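To make the "adversarial examples" threat from the digital-security category concrete, here is a minimal sketch in the style of the fast gradient sign method (FGSM), applied to a toy linear classifier. Everything here (the weights, the input, the epsilon value) is hypothetical and chosen for illustration; real attacks of this kind target neural networks, but the core idea is the same: nudge each input feature in the direction that most changes the model's score.

```python
import numpy as np

# Toy binary classifier: score(x) = w . x + b, predict 1 if score > 0.
# Weights and inputs below are made up purely for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A benign input that the classifier labels as class 1.
x = np.array([2.0, 0.5, 1.0])

# FGSM-style perturbation: for a linear score, the gradient of the
# score with respect to x is simply w, so stepping each feature
# opposite to sign(w) lowers the score as fast as possible per unit
# of perturbation, and can flip the predicted class.
epsilon = 1.2
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

The perturbation is bounded per-feature by epsilon, which is what makes such attacks dangerous in practice: an input can remain nearly indistinguishable from the original to a human while the model's decision changes completely.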