Mitigating the Risks of Generative AI

Generative artificial intelligence (AI) holds tremendous promise across many industries and disciplines. However, as with any powerful new technology, it also brings new security risks. Let’s take a few moments to dive into the emerging generative AI threat landscape, focusing specifically on areas of data and system security. This blog post will also highlight how organizations can securely adopt these tools, even with these risks.

How is generative AI different?

To grasp how generative AI changes the threat landscape, we must first consider how these new systems differ from the traditional systems that have served as the backbone of supply chains for the past 50 years. The top five differences are:

  • Security tools and practices for generative AI are still maturing, compared to technologies already available for databases. Database security vulnerabilities like SQL injection are well understood, following decades of focus. Developers are extensively trained on these threats, and robust auditing tools are integrated into CI/CD pipelines. However, the generative AI journey is just beginning, with threat modeling and tools still emerging.
  • Generative AI delivers novel insights, rather than merely retrieving records. While databases return data that they’ve previously stored, possibly with transformations or calculations, generative AI synthesizes novel data based on its training. This is analogous to an analyst generating insights versus a clerk fetching records.
  • Formal programming languages are predictable and unambiguous, unlike the nuances and ambiguity present in natural language used by generative AI. Databases utilize formal languages, such as SQL, which leverage a formal, understood syntax to access data. A given SQL statement, taken in the context of the already stored data, will always produce the same result. However, generative AI utilizes natural “everyday” language — with all its nuance and ambiguity — for all inputs and outputs. Like two people negotiating a contract, misunderstandings can occur between humans and AI applications. In addition, generative AI’s outputs are non-deterministic — which means identical inputs can yield distinct results in phrasing, wording or meaning.
  • Generative AI may lack traceability and auditing capabilities, versus databases with tighter controls. With databases, authorized users can easily audit stored data and trace its origin. In contrast, generative AI models store knowledge in a neural network, in a form that’s incomprehensible to most people. In addition, there are currently no robust techniques available to audit the generative AI models’ acquired “knowledge,” or the potential biases from its training data.
  • Generative AI currently has fewer built-in data access controls than databases. Databases have robust authorization controls that govern data access. However, generative AI currently lacks such built-in controls: once a user is authenticated, the model itself cannot restrict which parts of its learned knowledge that user can reach.
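To see why identical inputs can yield different outputs, consider how a language model picks its next word: it samples from a probability distribution rather than following a fixed rule. The Python sketch below is a toy illustration of temperature-based sampling, not any specific model's implementation; the token names and logit values are made up.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution; a higher
    temperature flattens the distribution, increasing variability."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0, rng=random):
    """Pick one token at random, weighted by softmax probabilities.
    The same inputs can produce a different token on each call."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# The same "prompt" (here, the same logits) can yield different tokens:
tokens = ["ship", "delay", "reroute"]
logits = [2.0, 1.5, 1.0]
samples = [sample_token(tokens, logits, temperature=1.0) for _ in range(20)]
print(samples)  # e.g. a mix of 'ship', 'delay' and 'reroute'
```

This is why two identical queries to a generative AI application can come back with different phrasing, or even different conclusions, unlike a SQL query over fixed data.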

 

Examining the differences between traditional systems and generative AI reveals new security vulnerabilities and necessary mitigations, which can be categorized into three key domains: Protecting sensitive data, securing systems and data from malicious use, and properly governing AI agents and plug-ins.
 


Understand the risk factors and how to manage them

When a company entrusts its software system with sensitive data, there’s an expectation that all information will be fully protected from unauthorized access, modification or exfiltration. While traditional vulnerabilities remain a concern, the unique nature of generative AI introduces additional risks that must be guarded against.

In addition to protecting sensitive data, it’s also important that generative AI meets its service level agreements (SLAs) — including availability, scalability, performance, reliability and disaster recovery. Generative AI must also be proven not to negatively affect the SLAs of downstream systems. Understanding these vulnerabilities, and preventing them from creating security exposures, paves the way for realizing the tremendous promise of generative AI.

Some key vulnerabilities to look out for include:

  • Prompt injection. Well-crafted inputs can trick generative AI applications into revealing confidential data or executing harmful actions.
  • Insecure output handling. Blindly using AI outputs without scrutiny opens the door for system exploits like unauthorized data access.
  • Training data poisoning. Manipulated training data can corrupt AI components, introducing dangerous biases or back doors.
  • Model denial of service. Attackers can overwhelm generative AI applications with complex requests, degrading or disabling service.
  • Excessive agency. Giving AI components uncontrolled autonomy may allow them to make damaging decisions based on faulty reasoning.
  • Insecure plug-in design. Third-party AI components can introduce severe vulnerabilities through unsafe data handling.
  • Supply chain compromise. If any third-party tool, model or data source is compromised, that breach flows downstream into the generative AI application that depends on it.
  • Sensitive data leakage. Generative AI may reveal sensitive customer or business data it was exposed to during its training.
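To make prompt injection and insecure output handling concrete, here is a deliberately naive Python sketch: a deny-list screen on inputs, and escaping of outputs before they are displayed. The patterns and function names are illustrative only; real defenses layer many controls and do not rely on keyword matching alone.

```python
import html
import re

# Naive deny-list of phrases often seen in prompt-injection attempts.
# A real defense layers many controls; this only illustrates the idea.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

def screen_prompt(user_input: str) -> str:
    """Reject inputs that match known injection phrasing."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("possible prompt injection detected")
    return user_input

def sanitize_output(model_output: str) -> str:
    """Treat model output as untrusted data: escape it before rendering,
    and never pass it to eval/exec or a shell."""
    return html.escape(model_output)

safe = screen_prompt("What is the lead time for SKU 1234?")
rendered = sanitize_output("<script>alert('x')</script> lead time: 5 days")
print(rendered)  # the script tag is escaped, so it cannot execute
```

The key habit the sketch encodes: inputs to the model and outputs from the model are both untrusted, and each crosses a validation boundary before the rest of the system touches it.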

 

Fortunately, preventive measures may mitigate multiple types of AI vulnerabilities. For example, securing against prompt injection and training data poisoning also helps reduce the chance of sensitive information disclosure. A robust identity and access framework, with a well-thought-out access control implementation, is a prerequisite for protecting against excessive agency attacks. And the traditional security measures that we’ve been practicing since the dawn of computing provide the foundation on which generative AI protections are built.
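As a sketch of what an access control prerequisite for excessive agency might look like, the following Python snippet gates an AI agent's tool calls behind a role-based allow-list. The roles, tool names and dispatch table are hypothetical examples, not a real product API.

```python
# Minimal allow-list gate: an AI agent may only invoke the tools its
# role permits. All names here are illustrative, not a real API.
ROLE_PERMISSIONS = {
    "read_only_agent": {"lookup_order"},
    "ops_agent": {"lookup_order", "reschedule_shipment"},
}

def invoke_tool(role: str, tool: str, dispatch: dict, **kwargs):
    """Check the agent's role before executing any tool on its behalf."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    if tool not in allowed:
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return dispatch[tool](**kwargs)

dispatch = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "in transit"},
    "reschedule_shipment": lambda order_id, date: {"order_id": order_id, "new_date": date},
}

print(invoke_tool("read_only_agent", "lookup_order", dispatch, order_id="A1"))
# A read-only agent attempting a write raises PermissionError instead.
```

The design point is that the permission check lives outside the model: even if an attacker manipulates the agent's reasoning, the gate limits what actions it can actually take.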

With a vigilant security posture and defense-in-depth measures in place, companies can realize the tremendous potential of generative AI while safeguarding systems and sensitive information. Securing generative AI necessitates a multi-layered approach encompassing data, model training and fine-tuning, infrastructure, identities, access control and, importantly, diligence when evaluating vendors. Companies also need to implement comprehensive governance, rigorous access control, input and output controls, monitoring, sandboxing and well-defined development and operations protocols.


Assess your generative AI security position before diving in

Whether companies are incorporating generative AI directly into in-house built solutions, or acquiring these capabilities from vendors, asking the right questions is critical to ensure stringent security. The right questions can help guide conversations to determine if adequate protections have been implemented. Consider covering the following topic areas:

  • Supply chain security. Companies should request third-party audits, penetration testing and code reviews to ensure supply chain security. They need to understand how third-party providers are evaluated, both initially and on an ongoing basis.
  • Data security. Organizations need to understand how data is classified and protected based on sensitivity, including personal and proprietary business data. How are user permissions managed, and what safeguards are in place?
  • Access control. Are privilege-based access controls enforced for both human users and AI components? Least-privilege access limits what any single compromised account or agent can reach.
  • Training pipeline security. Rigorous control around training data governance, pipelines, models and algorithms is essential. What safeguards are in place against data poisoning?
  • Input and output security. Before implementing generative AI, organizations should evaluate input validation methods — as well as how outputs are filtered, sanitized and approved.
  • Infrastructure security. How often does the vendor perform resilience testing? What are their SLAs in terms of availability, scalability and performance? This is critical to assessing infrastructure security and stability.
  • Monitoring and response. Companies need to fully understand how workflows, monitoring and responses are automated, logged and audited. Any audit records must be secure, especially if they’re likely to contain confidential or personal information.
  • Compliance. Enterprises should confirm that the vendor complies with regulations like GDPR and CCPA, and that certifications like SOC 2 and ISO 27001 have been achieved. They must understand where data will be collected, stored and used to ensure that country-specific or state-specific requirements are met.
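On the monitoring point above, one way to keep audit records from themselves leaking confidential content is to log hashes of prompts and responses rather than the raw text. This Python sketch is one possible approach, with made-up field names:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id: str, prompt: str, response: str) -> dict:
    """Record who interacted with the model and when, storing SHA-256
    hashes instead of raw text so the audit trail never contains the
    confidential content itself."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

record = audit_record("u-42", "Where is order A1?", "Order A1 is in transit.")
print(json.dumps(record))
```

Hashes still let auditors correlate identical requests and verify integrity, while the record stays safe to retain and replicate. Where raw text must be kept, it belongs in encrypted storage with its own access controls.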

 

Realize the promise of generative AI, securely

Generative AI has immense potential, with new applications being discovered almost daily. While current capabilities are already profound, even greater potential lies ahead.

However, with this promise come risks that require prudent, ongoing governance. 

Security establishes trust and enables progress — and the guidance in this blog post provides a starting point for organizations to assess and address these risks. With diligence, companies can adopt generative AI early, and securely, to get a head start on realizing generative AI’s benefits now and in the future. The key is balancing innovation with governance through continuous collaboration between security and AI teams.

Blue Yonder is applying the industry’s gold standard for generative AI security, OWASP Top 10 for Large Language Models, to safeguard our solutions. This means our customers can confidently take full advantage of the latest technology innovations that keep their businesses running faster and smarter. Contact us to discuss the potential for secure generative AI in your supply chain. 
