
Cybersecurity’s Rising Significance in the World of Artificial Intelligence 


According to a 2023 business survey, 62 percent of enterprises have fully implemented artificial intelligence (AI) for cybersecurity or are exploring additional uses for the technology. With advancements in AI technologies, however, come more ways for sensitive information to be misused.

Globally, organizations are leveraging AI and building automated security measures into their infrastructure to reduce vulnerabilities. As AI matures, threats continue to take on new forms. A recent IBM report puts the average cost of a data breach at a staggering $4.45 million. The proliferation of generative AI (GAI) will likely consumerize AI-enabled automated attacks, adding a level of personalization that humans would find difficult to detect without GAI assistance.

While AI is the more general term for intelligence-based technology, GAI is a subspecialty that extends the concept to generating new content spanning various modalities, and even combining them. The primary cybersecurity concern stems from GAI’s ability to “mutate,” which includes self-modifying code: when a model-driven attack fails to infiltrate a system, it alters its own behavior until it succeeds.

The growing risk of cyberattacks coincides with the wider availability of AI and GAI through ChatGPT, Bard, and a range of open-source options. Cybercrime tools like WormGPT and PoisonGPT are suspected to have been built on the open-source GPT-J language model. Some GAI language models, particularly ChatGPT and Bard, have anti-abuse restrictions, yet the sophistication GAI offers in devising attacks, generating new exploits, and bypassing security controls, combined with clever prompt engineering, will likely continue to pose a threat.

Issues like these feed the overarching problem of determining what is real and what is fake. As the lines between truth and hoax blur, it is important to ensure the accuracy and credibility of GAI models used in cybersecurity to detect fraudulent information. Capitalizing on AI and GAI algorithms to protect against attacks generated by these same technologies offers a promising way forward.

Standards and Initiatives To Use AI in Cybersecurity

According to a recent Cloud Security Alliance (CSA) report, “generative AI models can be used to significantly enhance the scanning and filtering of security vulnerabilities.” In the report, the CSA demonstrates how OpenAI’s models and other large language models (LLMs) can serve as effective vulnerability scanners for potential threats and risks. A primary example is an AI scanner built to quickly detect insecure code patterns, letting developers eliminate potential holes or weaknesses before they become a significant risk.
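
To make that idea concrete, here is a minimal sketch of an LLM-assisted insecure-code scanner. It assumes the OpenAI Python SDK with an API key in the environment; the prompt, model name, and output format are illustrative choices, not the CSA’s actual tooling.

```python
# Minimal sketch, not the CSA's tooling: ask a chat model to flag insecure
# code patterns. Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security reviewer. List insecure patterns in the submitted "
    "code (e.g., SQL built by string concatenation, hardcoded secrets, "
    "use of eval), one finding per line, or reply 'NO FINDINGS'."
)

def scan_snippet(code: str) -> str:
    """Return the model's findings for one code snippet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": code},
        ],
        temperature=0,  # favor repeatable review output
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
    print(scan_snippet(snippet))
```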

Earlier this year, the National Institute of Standards and Technology launched its Trustworthy and Responsible AI Resource Center, which includes the AI Risk Management Framework (RMF). The RMF helps AI users and developers understand and address the common risks of AI systems while providing best practices for reducing them. Despite its positive intentions, the framework alone remains insufficient. This past June, the Biden-Harris administration announced that a group of developers will begin creating guidance to help organizations assess and tackle the risks associated with GAI.

Cyberattacks will become cheaper as barriers to entry fall, which makes frameworks like these useful guiding mechanisms. Still, an increasing rate of AI- and GAI-driven attacks will require developers and organizations to rapidly build on these foundations.

The Benefits of GAI in Cybersecurity

With GAI reducing detection and response times so that holes and vulnerabilities are patched efficiently, using GAI to counter AI-generated attacks is inevitable. Some of the benefits of this approach include:

  • Detection and response. AI algorithms can be designed to analyze large, diverse datasets and capture the behavior of users in a system to detect unusual activities. Extending that further, GAI can now generate a coordinated defense, or a decoy, against those activities in a timely manner. Infiltrations that would otherwise sit in an organization’s IT systems for days, or even months, can be avoided. (A minimal detection sketch appears after this list.)
  • Threat simulation and training. Models can simulate threat scenarios and generate synthetic datasets. Realistic generated cyberattack scenarios, including malware code and phishing emails, can radically improve the quality of response. Because AI and GAI learn adaptively, the scenarios become progressively more complex and difficult to resolve, building a more robust internal system. AI and GAI can operate efficiently in dynamic situations, supporting cybersecurity exercises intended primarily for training, such as Quantum Dawn. (A training-data sketch also follows this list.)
  • Predictive capabilities. Organizations’ composite IT/IS networks require predictive capabilities to assess potential vulnerabilities, which continuously evolve and shift over time. Consistent risk assessment and threat intelligence support and sustain proactive measures.
  • Human-machine and machine-machine collaborations. AI and GAI do not guarantee a completely automated system that excludes the need for human input. Their pattern recognition and generation capabilities may be more advanced, but organizations still need human creativity and intervention. In this context, human-machine collaboration reduces the overrides and clogged networks caused by false positives (an AI-flagged attack that is not really an attack), while machine-machine collaboration reduces false negatives across organizations, given their strong combined pattern recognition capabilities.
  • Collaborative defense and cooperative approaches. Human-machine and machine-machine collaborations can ensure cooperative defense when implemented among disparate or even competing organizations. Because defense is not a zero-sum situation, it calls for cooperative game theory, an approach in which groups of entities (organizations) form “coalitions” and act as primary, independent decision-making units. By modeling various cyberattack scenarios as games, it is possible to predict the attacker’s actions and identify optimal defense strategies. This technique has been shown to support collaboration and cooperative behavior, and the result provides a foundation for cybersecurity policies and valuation. AI systems designed to cooperate with the AI models of competing organizations could provide an extremely stable cooperative equilibrium. Currently, such “coalitions” are mostly driven through information exchanges; AI-to-AI cooperation can enable more complex detection and response mechanisms. (A worked coalition example appears below as well.)
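
To make the detection-and-response idea concrete, the sketch below trains an isolation forest on synthetic “normal” login sessions and flags off-hours, high-volume sessions as anomalies. The features and numbers are illustrative assumptions, not a production design.

```python
# Minimal sketch of behavior-based anomaly detection on synthetic session
# features (hour of day, MB transferred, failed logins). Illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" sessions: business-hours logins, modest transfers.
normal = np.column_stack([
    rng.normal(13, 2, 500),    # hour of day
    rng.normal(50, 15, 500),   # MB transferred
    rng.poisson(0.2, 500),     # failed login attempts
])

# Suspicious sessions: 3 a.m. logins with exfiltration-scale transfers.
suspicious = np.array([[3, 900, 6], [2, 750, 4]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies, 1 for inliers.
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # mostly 1s
```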
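
Similarly, for threat simulation and training, the sketch below fits a deliberately simple phishing detector on a handful of synthetic messages. A real pipeline would use far larger GAI-generated corpora; the samples and model choice here are placeholders.

```python
# Minimal sketch of training a detector on synthetic threat data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phishing = [
    "Urgent: verify your account now or it will be suspended",
    "Your payroll update failed, re-enter your credentials here",
    "Invoice overdue, click the attached link to avoid penalties",
]
benign = [
    "Agenda attached for Monday's project sync",
    "Quarterly report draft is ready for your review",
    "Reminder: team lunch moved to noon on Friday",
]

X = phishing + benign
y = [1] * len(phishing) + [0] * len(benign)  # 1 = phishing, 0 = benign

# TF-IDF features plus logistic regression: a deliberately simple baseline.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(X, y)

print(detector.predict(["Verify your credentials immediately via this link"]))
```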
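
Finally, a worked example of the cooperative game-theoretic view: three hypothetical organizations pool threat intelligence, and the Shapley value splits the coalition’s total “prevented-loss” payoff into a fair share for each. The characteristic-function payoffs below are made-up numbers.

```python
# Worked sketch of a cooperative defense game with made-up payoffs.
from itertools import permutations

players = ["A", "B", "C"]

# v(S): value (say, $M of losses prevented) each coalition achieves.
v = {
    frozenset(): 0,
    frozenset("A"): 2, frozenset("B"): 3, frozenset("C"): 1,
    frozenset("AB"): 7, frozenset("AC"): 5, frozenset("BC"): 6,
    frozenset("ABC"): 12,  # superadditive: together they prevent the most
}

def shapley(player):
    """Average marginal contribution of `player` over all join orders."""
    total = 0.0
    orders = list(permutations(players))
    for order in orders:
        before = frozenset(order[: order.index(player)])
        total += v[before | {player}] - v[before]
    return total / len(orders)

for p in players:
    print(p, shapley(p))
# Prints A 4.0, B 5.0, C 3.0, summing to v(ABC) = 12: a defensible basis
# for sharing the costs and benefits of the joint defense.
```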

These benefits contribute to GAI’s overall impact on cybersecurity, but it is the collaborative effort between developers and the AI they deploy that optimizes cyber defense.

A Modern Approach to Cybersecurity

The global market for AI-enabled cybersecurity technologies is expected to grow at a compound annual growth rate of 23.6 percent through 2027. While it is impossible to fully predict where generative AI and its role in cybersecurity will go from here, it is safe to say that AI need not be feared or viewed only as a threat. A modern approach to cybersecurity centers on standardized AI modeling with the potential for continuous innovation and development.

About the Author

Shivani Shukla specializes in operations research, statistics, and AI with several years of experience in academic and industry research. She currently serves as the director of undergraduate programs in business analytics as well as an associate professor in business analytics and IS. For more information, contact [email protected].
