Artificial intelligence (AI) is arguably the most significant innovation transforming information technology right now, and it has applications across nearly every industry. It is also playing an increasingly vital role in legacy modernization.
One of the main benefits of AI for businesses is its ability to streamline operations, automate routine tasks, and supercharge data analysis. However, this assistance comes at a price. Namely, AI opens the door to greater cybersecurity risks by introducing new vulnerabilities. Understanding these vulnerabilities is critically important for ensuring your organization is taking all the necessary steps to protect sensitive information. The key cybersecurity vulnerabilities introduced by AI include:
1. Adversarial attacks:
AI systems rely on algorithms that serve as instructions for “thinking”: an algorithm determines how an AI system processes information and reaches conclusions without human intervention. One of the vulnerabilities unique to AI involves manipulating the decision-making processes of these algorithms.
Adversarial attacks alter input data to cause errors and misclassifications, or to bypass security controls entirely. There are several types of adversarial attacks; evasion and model extraction are two of the most common.
Evasion attacks craft inputs that go undetected by the AI’s defenses, so no alerts are generated even though the outputs are unexpected or incorrect. Model extraction involves stealing an organization’s existing AI model and using it for unintended purposes. Some industries are more vulnerable to adversarial attacks than others: autonomous vehicles can be manipulated into causing accidents, and medical algorithms can be tricked into making incorrect diagnoses. In such cases, these attacks can have fatal consequences.
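To make the evasion idea concrete, here is a minimal sketch of how a small, deliberate perturbation can flip the verdict of a toy logistic-regression classifier. The weights, input values, and perturbation budget below are invented for illustration; real attacks target far more complex models, but the principle is the same.

```python
import numpy as np

# Toy classifier: logistic regression with fixed ("pretrained") weights.
# The weights and the input vector are invented for illustration.
w = np.array([2.0, -1.5, 0.5])   # feature weights
b = -0.2                          # bias term

def predict_proba(x):
    """Probability the input is classified as malicious (class 1)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([1.2, 0.3, 0.8])            # original input: flagged as malicious
print(f"original score:    {predict_proba(x):.3f}")   # ~0.90, above threshold

# FGSM-style evasion: nudge each feature against the gradient of the score.
# For logistic regression the gradient with respect to the input is just w,
# so the attacker steps in the direction of -sign(w) within a budget epsilon.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(f"adversarial score: {predict_proba(x_adv):.3f}")  # ~0.44, now "benign"
```

The perturbed input differs from the original by at most 0.6 per feature, yet it crosses the 0.5 decision threshold and evades detection without triggering any alert.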
2. Model poisoning:
While adversarial attacks target AI models in their production environments, model poisoning focuses on systems still in development, or “training.” During training, AI models are fed extremely large, curated datasets so they can learn to make predictions or decisions. In a model poisoning attack, malicious data is added to the training dataset, skewing the AI’s outputs and sometimes significantly altering the model’s behavior.
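The following sketch illustrates the mechanism with a deliberately simple label-flipping attack on a scikit-learn logistic regression. The dataset, flip fraction, and probe point are all synthetic and invented for illustration; real poisoning attacks are subtler, but the effect on the model is the same in kind.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic two-class training set (invented for illustration):
# class 0 clusters around -1, class 1 clusters around +1.
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(1, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

model_clean = LogisticRegression().fit(X, y)

# Poisoning: the attacker relabels 40% of class-1 training rows as class 0,
# biasing the trained model against ever predicting class 1.
y_poisoned = y.copy()
flip = rng.choice(np.where(y == 1)[0], size=80, replace=False)
y_poisoned[flip] = 0
model_poisoned = LogisticRegression().fit(X, y_poisoned)

# Compare the two models' confidence on a clearly class-1 input.
probe = np.array([[1.0, 1.0]])
print(f"clean    P(class 1): {model_clean.predict_proba(probe)[0, 1]:.2f}")
print(f"poisoned P(class 1): {model_poisoned.predict_proba(probe)[0, 1]:.2f}")
```

The poisoned model is markedly less confident about inputs it should classify correctly, even though the features themselves were never touched.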
Having been trained on corrupted data, the AI may offer biased or incorrect predictions that lead to flawed decision making. This becomes especially problematic when organizations invest in large language models (LLMs) for closed environments. These models can help address internal problems and handle specialized data, but without proper security controls they are uniquely vulnerable to attack.
The manipulated outputs can have far-reaching consequences for companies. Furthermore, because poisoned data is not always detectable by humans, these attacks are especially hard to identify and can go unrecognized for a considerable amount of time.
3. Data breaches:
Another challenge presented by AI integration involves data breaches, especially for systems that store and process large amounts of confidential and sensitive data, such as health records or financial and personal information. AI algorithms that process and analyze this data may have weak security protocols or insufficient encryption, leaving them vulnerable.
In addition, these algorithms may not have adequate monitoring to detect when a breach has occurred. When AI deals with sensitive data and creates logs or stores of this information, the system becomes a prime target for hackers.
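One basic mitigation is to encrypt sensitive fields before they ever reach an AI system’s logs or data stores. The sketch below uses the Fernet interface from Python’s widely used cryptography package; the record structure and field names are hypothetical, and in production the key would come from a secrets manager rather than being generated in code.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative only: a real system would load this key from a secrets manager.
key = Fernet.generate_key()
fernet = Fernet(key)

def log_record(record: dict, sensitive_fields: set[str]) -> dict:
    """Encrypt sensitive fields before the record is written to AI logs."""
    safe = {}
    for field, value in record.items():
        if field in sensitive_fields:
            safe[field] = fernet.encrypt(str(value).encode()).decode()
        else:
            safe[field] = value
    return safe

# Hypothetical inference log entry containing a patient identifier.
entry = {"model": "triage-v2", "prediction": 0.87, "patient_id": "MRN-10293"}
print(log_record(entry, sensitive_fields={"patient_id"}))
```

Even if an attacker exfiltrates the logs, the sensitive fields are unreadable without the key.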
Using the many publicly available generative AI tools can also increase the risk of a data breach. Employees, especially those who haven’t been educated about cybersecurity risks, may turn to these tools for help with their work, or simply out of curiosity about what they can do. However, feeding any kind of sensitive or proprietary information into these applications is a bad idea: ChatGPT, for example, may retain data included in a prompt and use it for model training. Organizations should create policies around acceptable use of generative AI applications.
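Alongside policy, a lightweight technical control is to redact obvious sensitive patterns from prompts before they leave the network. The sketch below is a minimal example using regular expressions; the patterns are illustrative and far from exhaustive, and a production deployment would pair this with proper data loss prevention tooling.

```python
import re

# Simple redaction patterns (illustrative, not exhaustive): email addresses,
# US Social Security numbers, and credit-card-like digit runs.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings before the prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

text = "Summarize this: client jane.doe@example.com, SSN 123-45-6789."
print(redact(text))
# Summarize this: client [REDACTED-EMAIL], SSN [REDACTED-SSN].
```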
4. Malware and ransomware:
Both malware and ransomware have been problems for information technology experts for more than a decade, and AI systems are not immune to attacks of this variety. Worse, AI makes malware easier and cheaper to generate, allowing less skilled attackers to deploy new variants more quickly.
Malware and ransomware can disrupt AI systems in several ways. They may encrypt data to make it inaccessible, or overload the network to prevent legitimate use of the AI service. Some forms of malware exploit public AI platforms to invade a network and cause harm; others target AI systems to steal their enormous computing power, hijacking them for cryptocurrency mining, for example.
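A simple way to catch ransomware-style mass encryption early is file-integrity monitoring: hash the files in a model or data directory periodically and alert when an unusually large share of them change at once. The sketch below is a minimal illustration; the directory path and alert threshold are invented.

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Record a SHA-256 hash for every file under the given directory."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def changed_fraction(before: dict[str, str], after: dict[str, str]) -> float:
    """Fraction of previously seen files whose contents changed."""
    common = before.keys() & after.keys()
    if not common:
        return 0.0
    return sum(before[f] != after[f] for f in common) / len(common)

# Hypothetical usage: snapshot the model/data directory on a schedule and
# alert if an abnormally large share of files changed between snapshots.
baseline = snapshot("/var/lib/ai-models")      # path is illustrative
current = snapshot("/var/lib/ai-models")
if changed_fraction(baseline, current) > 0.5:  # threshold is illustrative
    print("ALERT: mass file modification; possible ransomware encryption")
```

Legitimate retraining also rewrites many files at once, so in practice the threshold would be tuned against normal operational churn.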
5. AI infrastructure:
The unique infrastructure of AI systems introduces new vulnerabilities that attackers can exploit. For example, AI workloads often run on graphics processing units (GPUs) or tensor processing units (TPUs) to accelerate training and inference. These specialized processors can handle enormous amounts of data in parallel, but they also present new attack vectors.
Design flaws in these processors and other hardware can become targets for malicious actors, and identifying and fixing such flaws can take significant time and resources. One example of an infrastructure-level risk is the Rowhammer flaw in the dynamic random-access memory (DRAM) chips found in many devices, including smartphones. Attackers can leverage this flaw, together with memory deduplication in virtualized environments, to gain access to sensitive data.
IT experts should also keep in mind that AI solutions are often built on, and integrated with, several other technologies and applications that can themselves be targeted. These more familiar cybersecurity threats need to be considered as well.
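As a simple illustration of auditing that broader stack, the sketch below checks installed Python packages against a deny-list of vulnerable versions. The deny-list entry is invented; in practice, a maintained tool such as pip-audit would compare the environment against a real vulnerability database.

```python
from importlib import metadata

# Invented deny-list for illustration; a real audit would query a maintained
# vulnerability database (for example, via a tool like pip-audit) instead.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),   # hypothetical package and version
}

findings = []
for dist in metadata.distributions():
    name = (dist.metadata["Name"] or "").lower()
    if (name, dist.version) in KNOWN_VULNERABLE:
        findings.append(f"{name}=={dist.version}")

if findings:
    print("Vulnerable dependencies found:", ", ".join(findings))
else:
    print("No deny-listed dependencies found.")
```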