Malicious Use of AI Complicates US Cybersecurity Posture

Fortify Security Team
Oct 26, 2022

Malicious cyber actors are adopting increasingly advanced artificial intelligence (AI) and machine learning technologies that, if the technology progresses at its current rate, will likely outpace US network defenders’ ability to counter attacks. Exponential growth in digital data and computing power, together with recent advances in machine learning algorithms, has driven significant adoption of AI by both benign and malicious actors, although we have not yet observed widespread use of AI in malicious cyber operations. AI systems and machine learning, a subset of AI, can use large datasets to make decisions independent of human interaction and to automate complex processes. These capabilities could streamline cyber operations and multiply the challenges that traditional network defenses already face from the rapid proliferation of devices, networks, and interfaces and the growing volume of cyber attacks.

  • Developments in AI and machine learning have the potential to enable larger-scale, faster, more efficient, and more evasive cyber attacks capable of countering traditional, rule-based cybersecurity tools, according to the National Security Commission on AI and a body of US cybersecurity blogs. Additionally, AI and machine learning can solve some authentication challenges, generate password lists for brute-force attacks, automatically identify users likely to be vulnerable to large-scale phishing attacks (see the sketch after this list), and identify vulnerable attack surfaces from large-scale scans of perimeter systems, according to US cybersecurity blogs.
  • As AI continues to advance, cyber actors who have already used AI in a limited capacity could increase the frequency and efficacy of their attacks to overwhelm defenders and improve their chances of success, according to an Israeli university report. The most recent notable example of such activity was a disruptive botnet in 2018 that purportedly used AI. Over the next few years, integrating AI into offensive cyber operations will likely yield the greatest improvements in social engineering, exploit and tool development, and the speed of information gathering, judging from an Israeli university report. In 2016, cybersecurity researchers conducted an experiment in which an AI system sent spear-phishing tweets faster and with a higher victimization rate than a human operator, according to a US cybersecurity firm.
  • While AI and machine learning research has advanced beyond theoretical discussions over the last decade, we have seen limited offensive cyber applications of AI outside of test environments, probably because of the resources required to develop the technology. The need for salient data and the complexity of deployment currently limit the benefits of integrating AI and machine learning into offensive cyber operations. At present, AI and machine learning may offer little more than traditional automation for indiscriminate propagation techniques such as spear-phishing or distributed denial-of-service (DDoS) attacks, according to a US think tank. Although US technology companies have demonstrated that AI software can avoid anti-virus detection and effectively identify vulnerabilities in a system, we have not observed these tactics, techniques, and procedures (TTPs) deployed outside of test environments.
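As a minimal illustration of the user-susceptibility use case mentioned in the first bullet above: a simple classifier can rank users by predicted phishing risk, which is the same ranking logic an attacker could repurpose to choose targets and a defender could use to prioritize awareness training. The features, synthetic data, and model below are invented for illustration and are not drawn from any cited report.

```python
# Minimal sketch: ranking users by predicted phishing susceptibility.
# Feature names and data are hypothetical; a real deployment would use
# an organization's own telemetry and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-user features: [past_simulation_clicks,
# external_emails_per_day, months_of_tenure], standardized.
X = rng.normal(size=(500, 3))
# Synthetic labels: users who clicked past phishing simulations are
# marked "susceptible" more often, with some noise.
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score every user and surface the ten highest-risk users for
# targeted awareness training or additional controls.
scores = model.predict_proba(X)[:, 1]
top_at_risk = np.argsort(scores)[::-1][:10]
print("Highest-risk user indices:", top_at_risk)
```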

China, Russia, and Iran will likely incorporate advanced AI capabilities, once sufficiently developed and operationalized, to augment malicious cyber operations targeting US networks. All three countries are strategically developing AI and machine learning technologies with the potential to enhance targeting and exploitation of US networks through improved vulnerability discovery, spear-phishing, and delivery of malicious code. We assume that, as the technology becomes more accessible, other malicious cyber actors will seek to incorporate it into their operations.

  • Beijing aspires to be the world leader in AI by 2030 and has specifically identified AI technology and research as a focal point of China’s long-term cybersecurity and defense planning, according to a 2017 Chinese Government AI development plan. As of 2021, six Chinese universities with ties to Beijing’s security services and advanced persistent threat groups were researching the integration of AI into cyber operations, according to an emerging technology-focused think tank.
  • In 2019, Russia released its national AI strategy, which outlines plans to increase AI research and development efforts through 2030. As of 2020, state-owned corporation Rostec and its subsidiaries were actively developing military applications of AI, according to a US think tank. Cyber attacks and disinformation campaigns are key tools in Russia’s foreign policy strategy toward the United States and Europe, and AI has the potential to improve Russia’s cyber capabilities through AI-driven, asymmetric warfare, according to a US university report. However, technology sanctions imposed since the conflict in Ukraine began, along with an exodus of foreign technology companies and skilled workers from Russia, could strain its AI development, according to a US media outlet.
  • In January 2022, Iran’s Research Institute of Information and Communication Technology published a national strategy on AI, but it lacked details on the potential incorporation of AI into Iran’s cybersecurity and defense efforts and did not include a specific timeframe, according to Iranian state media and government officials. However, Iran has demonstrated the expertise and willingness to conduct aggressive cyber attacks and cyber-enabled influence activity, according to a collection of CISA alerts. Iranian Government-supported cyber actors conducted a spear-phishing campaign targeting US and foreign universities that compromised over 8,000 e-mail addresses, according to a 2018 DOJ indictment. AI capabilities could enhance the speed and scale of similar spear-phishing operations.

Malicious cyber actors will likely seek to exploit unsecured, AI-augmented cybersecurity systems, potentially enabling follow-on cyber attacks. The integration of advanced AI capabilities into cybersecurity systems offers substantial benefits, but these systems can be exploited if not adequately secured. To address this growing threat as organizations become more dependent on AI technology, US Government and private sector organizations are currently developing standard AI design, deployment, and configuration processes to counter malicious cyber threats (see Appendix A). US organizations can implement safe practices to secure AI models and training data as identified by the National Institute of Standards and Technology (NIST), the DHS Science and Technology Directorate (S&T), and MITRE.

  • The data and information an AI system needs to function can be manipulated by adversaries, causing AI-supported cybersecurity systems to behave in unintended ways. For example, malicious cyber actors can use AI model theft and training-data poisoning to corrupt a model’s behavior, reconstruct the AI system’s training data to launch social engineering attacks, or subvert systems that rely on AI-based analysis to function, according to a software development company blog, DHS S&T’s strategic plan for AI, and a US cybersecurity blog.
  • Malicious cyber actors have already undermined AI systems to enable successful cyber operations. For instance, attackers countered AI systems responsible for filtering spam e-mails via data-poisoning campaigns that sent millions of e-mails designed to mis-train the spam filters, allowing the attackers to send undetected, malicious e-mails containing malware, according to a US financial news publication (see the poisoning sketch after this list). In addition, software development, including AI development, relies heavily on external software libraries, which often contain vulnerabilities and can be compromised through supply-chain attacks similar to the SolarWinds attack, according to a US cybersecurity blog.
  • Machine learning systems, as a subset of AI, have been the target of cyber attacks and can be similarly poisoned or corrupted. In 2020, a US technology company observed a notable increase in attacks on commercial machine learning systems compared with the previous four years. In a survey of 28 organizations, including Fortune 500 companies, governments, non-profits, and small organizations, 25 indicated that they did not have the right tools in place to secure their machine learning systems and sought guidance on how to do so, according to a US technology company.
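To make the poisoning mechanism described above concrete, the following is a minimal sketch of label-flipping data poisoning against a toy spam classifier. The synthetic features, 30 percent flip rate, and naive Bayes model are all illustrative assumptions, not a reconstruction of the reported attack.

```python
# Minimal sketch: label-flipping poisoning against a toy spam filter.
# All data is synthetic; features, flip rate, and model are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)

# Synthetic "message features": spam clusters away from legitimate mail.
X = np.vstack([rng.normal(loc=0.0, size=(500, 5)),   # ham
               rng.normal(loc=2.0, size=(500, 5))])  # spam
y = np.array([0] * 500 + [1] * 500)  # 0 = ham, 1 = spam

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clean = GaussianNB().fit(X_tr, y_tr)

# Poison the training set: flip 30% of spam labels to "ham", mimicking
# a campaign of mis-labeled mail designed to mis-train the filter.
y_poisoned = y_tr.copy()
spam_idx = np.where(y_tr == 1)[0]
flip = rng.choice(spam_idx, size=int(0.3 * len(spam_idx)), replace=False)
y_poisoned[flip] = 0

poisoned = GaussianNB().fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```

In this toy setting the poisoned model tends to misclassify more spam as legitimate, which is the same failure mode the reported campaign exploited at scale.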

Appendix A

US Government and private sector organizations are currently developing standard AI design, deployment, and configuration processes to counter malicious cyber threats. These guidelines can be used to implement safe practices that secure AI models and training data. Below are freely available resources from three standards-setting organizations, NIST, DHS S&T, and MITRE, to assist organizations in implementing and integrating AI into their cybersecurity practices. NIST conducts research, collaborates with industry, and offers technical assistance to set standards and advance technology applications. DHS S&T acts as the science advisor to the DHS Secretary and serves as the research and development arm of DHS. MITRE, developer of the MITRE ATT&CK Framework, is commonly cited across government and private sector reporting on cybersecurity issues and malicious cyber actors’ TTPs.

National Institute of Standards and Technology

In response to Executive Order 13859, NIST published a “Plan for Federal Engagement in Developing Technical Standards and Related Tools” to address potential threats to machine learning and AI systems. NIST is currently producing training data and an AI Risk Management Framework for managing the risks that AI poses to individuals and organizations. These resources provide guidance on the risk management, safety, security, and reliability of AI systems and further recommend that US Government agencies articulate how AI’s integration into their operations will affect stakeholders and communities.

Resource: https://www.nist.gov/itl/ai-risk-management-framework
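One concrete example of the “secure the training data” practices this kind of guidance describes is pinning cryptographic digests of approved dataset files and verifying them before every training run, so that silent tampering (for example, a poisoning attempt) is detected. The sketch below is an illustration under stated assumptions: the file names and digests are hypothetical placeholders, not part of any NIST resource.

```python
# Minimal sketch: verify SHA-256 digests of training data before use.
# File names and digests are hypothetical placeholders.
import hashlib
from pathlib import Path

EXPECTED_DIGESTS = {
    # Recorded when the dataset was first reviewed and approved.
    "train.csv": "9f2c...",   # placeholder digest
    "labels.csv": "a41b...",  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: Path) -> None:
    """Refuse to proceed if any dataset file has been altered."""
    for name, expected in EXPECTED_DIGESTS.items():
        actual = sha256_of(data_dir / name)
        if actual != expected:
            raise RuntimeError(f"{name}: digest mismatch; refusing to train")

# Example call (hypothetical path):
# verify_training_data(Path("datasets/approved"))
```

Digest pinning does not stop poisoning that occurs before a dataset is approved, but it provides a cheap tripwire against tampering afterward.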

DHS Science and Technology Directorate

DHS S&T published an “Artificial Intelligence Strategy” in December 2020, outlining objectives to integrate AI systems with the DHS workforce. The strategy highlights impacts to US critical infrastructure and opportunities for use guided by the principles outlined in Executive Order 13960. The principles of EO 13960 promote the innovation and use of AI, where appropriate, to improve government operations and services. Implementation of this strategy overlaps with NIST guidance and can be tailored to US organizations looking to add AI capabilities to their operational duties, including cybersecurity.

Resource: https://www.dhs.gov/publication/us-department-homeland-securityartificial-intelligence-strategy?topic=intelligence-and-analysis

Adversarial Machine Learning Threat Matrix

In partnership with MITRE and several technology and cybersecurity firms, Microsoft released the open source “Adversarial Machine Learning Threat Matrix,” which systematically organizes the techniques malicious adversaries use to subvert machine learning systems. MITRE has also released the “Adversarial Threat Landscape for Artificial-Intelligence Systems” (ATLAS), which is modeled after the MITRE ATT&CK Framework.

Resource: https://atlas.mitre.org/
