AI Security

AI Application Security

A new paradigm to protect your AI applications from security and safety threats.

What is an AI Application?

An AI application is a software system that utilizes an artificial intelligence or machine learning model as a core component in order to automate complex tasks. These tasks might require language understanding, reasoning, problem-solving, or perception to automate an IT helpdesk, a financial assistant, or health insurance questions, for example.

AI models alone may not be directly beneficial to end-users for many tasks. But, they can be used as a powerful engine to produce compelling product experiences. In such an AI-powered application, end-users interact with an interface that passes information to the model, often with supplementary sources of data (e.g., health insurance information for the customer using the insurance app) or linking the application to external tools to be automated (e.g., submitting an insurance claim).

AI Application Security diagram

With billions of parameters and massive datasets, sophisticated pre-trained large language models (LLMs) provide general-purpose functionality. Companies may customize open-source and closed-source LLMs to improve the performance of a particular task using one or more of these techniques:

  • Fine-Tuning: an additional training task in which a smaller, specific dataset is used to specialize the model.
  • Few-Shot Learning: an approach used to supplement any queries to the model with examples of how to appropriately respond to requests; this is often included in the system instructions that describe to the model how it should operate.
  • Retrieval-Augmented Generation (RAG): a technique used to connect to additional data sources, such as text files, spreadsheets, and code, via a vector database. These data are fetched dynamically for each query, wherein specific documents that relate to the query are used to supplement the prompt so that the model has the correct context to answer a specific question.

Examples of AI Applications

The versatility of AI is helping to transform every industry. Common examples include:

  • Chatbots: enterprise search, customer service, virtual assistants
  • Decision Optimization: credit scoring, algorithmic trading, setting prices, route selection
  • Complex Analysis: predict outcomes, classification
  • Content Creation: writing tools, image and video creation

What is AI application security?

As adoption accelerates, so does the frequency and sophistication of attacks on AI systems. Protecting AI without stifling innovation is the concern of cybersecurity and AI leaders alike. This shared responsibility requires an organization-wide approach to protect against safety and security risks. Companies need a novel approach. Companies need AI application security.

AI application security is the process of making AI-powered applications safe and secure by hardening the underlying components against vulnerabilities and actively mitigating attacks. It encompasses the entire AI development lifecycle spanning the supply chain, development, and production. The concept of 'shifting left' means AI teams are considering security from the earliest stages, which can help to identify and remediate vulnerabilities at the time of development. Companies must also continuously monitor for novel threats and attacks in production.

Security VulnerabilitiesSafety Vulnerabilities
Prompt Injection - DirectToxicity
Prompt Injection - IndirectSensitive Content
Training Data PoisoningMalicious Uses
JailbreaksDeception and Manipulation
Meta Prompt ExtractionSocial Implications and Harms
Privacy AttacksTraining Data Extraction
Sensitive Information DisclosureInformation Leaks
Supply Chain Compromise 
Cost Harvesting 
ML Model Backdoor 
Denial of Service 

For detailed explanations and mappings, see the AI Security Taxonomy page.

What are the key differences between AI Application Security and traditional AppSec?

The clear and obvious difference is the application engine: AI versus traditional software. Traditional software is typically deterministic, meaning that the same output is always generated from the same input. This makes it highly predictable and consistent, which translates to a relatively stable and well-understood set of vulnerabilities covered by traditional AppSec.

Conversely, AI comes in various model types which are often non-deterministic. This is particularly true for generative models, where the outputs are routinely different when given the same input. Threats to non-deterministic models require different mitigation techniques than deterministic software. Additionally, some AI applications continuously learn from human feedback or other data, which means new vulnerabilities and emergent behavior can appear after deployment - unlike traditional software that doesn’t change unless you change it.

AI presents unique challenges that necessitate a new paradigm to mitigate security and safety risks.

What are AI application security solutions?

While traditional cybersecurity solutions won’t work on AI Applications due to the differences between AI and traditional software, many of the techniques and best practices can be reappropriated at each stage of the AI development lifecycle.

Securing AI Application Development

Like the traditional software supply chain, the AI supply chain includes software dependencies from the initial stages of development. But it also often involves third-party training data and model components, such as open-domain models from repositories like Hugging Face. It is critical that these models be scanned for unsafe or malicious code in the file formats, as well as for unsafe or insecure behaviors of the model to be used in an application.

Whether using open-domain, commercial models or proprietary models, it is necessary to validate the model for potential security and safety issues. For example, fine-tuning a well-aligned foundation model can destroy alignment in ways that must be understood by the application developer. Model validation consists of testing a model’s susceptibility to the universe of inputs that can elicit unsafe or insecure outcomes. The only requirement is API (or "black box") access to a model. This process should be repeated every time there are changes to a model.

Securing AI Application Deployment

Once an AI application is put into production, it’s important to continually scan the AI application for emerging safety and security vulnerabilities. These vulnerability scans should be informed by ongoing threat intelligence to ensure that the application is not susceptible to new attacker techniques.

Unlike traditional software vulnerabilities, AI vulnerabilities cannot be “patched”. So, they must be controlled. An AI application firewall enables AI application developers to block malicious requests and undesired information from reaching the model, as well as unsafe responses from reaching the end-user. Resulting logs can be passed to a security ticketing system and a security information and event management (SIEM) solution to be evaluated as part of the company’s preferred security workflow.

How Cisco can help

Cisco AI Defense is an end-to-end solution trusted by enterprises worldwide to protect both the development and use of AI applications so they can advance their AI initiatives with confidence.