Test Categories

Overview

AI Red Teaming test categories in Akto group security tests by the primary failure mode or control gap being validated. This structure helps you analyze coverage, prioritize remediation, and align AI security testing with established MCP, agent, and LLM risk models.

Each category represents a distinct class of vulnerability that can affect LLM-backed APIs, agent workflows, or supporting infrastructure.

How Akto Uses Test Categories

Akto assigns every AI Red Teaming test to exactly one category. Category assignment determines how test results are grouped, counted, and reported across scans and dashboards.

Category-based organization supports:

  • Risk-focused navigation of large test sets

  • Aggregated visibility into recurring security weaknesses

  • Consistent mapping to OWASP API and LLM security concepts

  • Trend analysis across AI-enabled endpoints
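For example, the one-category-per-test model makes it straightforward to aggregate findings and relate them to external risk models. The sketch below groups hypothetical scan findings by category and annotates a few categories with their closest OWASP LLM Top 10 (2025) identifiers. The export format, field names, and mapping shown here are illustrative assumptions, not Akto's actual schema or official mapping.

```python
# Minimal sketch: aggregating AI Red Teaming findings by test category.
# The findings structure and OWASP mapping below are illustrative assumptions,
# not Akto's actual export schema or official category mapping.
from collections import Counter

# Illustrative mapping of a few category names to OWASP LLM Top 10 (2025) IDs.
OWASP_LLM_MAP = {
    "Prompt Injections": "LLM01",
    "Sensitive Information Disclosure": "LLM02",
    "Supply Chain": "LLM03",
    "Data and Model Poisoning": "LLM04",
    "Excessive Agency": "LLM06",
    "System Prompt Leakage": "LLM07",
}

findings = [  # hypothetical scan results
    {"test": "system-prompt-extraction", "category": "System Prompt Leakage"},
    {"test": "direct-injection-basic", "category": "Prompt Injections"},
    {"test": "pii-leak-in-response", "category": "Sensitive Information Disclosure"},
    {"test": "direct-injection-encoded", "category": "Prompt Injections"},
]

# Each test belongs to exactly one category, so counting is a simple group-by.
counts = Counter(f["category"] for f in findings)
for category, count in counts.most_common():
    owasp_id = OWASP_LLM_MAP.get(category, "-")
    print(f"{category:35} {count:3}  OWASP LLM: {owasp_id}")
```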

Available AI Red Teaming Test Categories

Here is the list of all test categories currently available in the Akto Probe Library:

1. Agent Business Alignment: Whether agent actions and decisions remain aligned with defined business goals and constraints.
2. Agent Hallucination and Trustworthiness: Accuracy, factual consistency, and reliability of agent outputs used for decision-making.
3. Agent Identity Impersonation: Impersonation of agents, roles, or identities to gain unauthorized capabilities or privileges.
4. Agent Safety: Harmful, unethical, or policy-violating agent outputs or actions.
5. Agent Security – Agent Exploitation: Abuse of agent logic, reasoning, or planning to trigger unintended behavior.
6. Agent Security – Code Execution: Unsafe or unauthorized code execution initiated by agents or agent-controlled tools.
7. Agent Security – Data Exposure: Leakage of sensitive or restricted data through agent actions or outputs.
8. Agent Security – Infrastructure: Exposure of internal services, networks, or infrastructure through agent activity.
9. Agent Security – Prompt Injection: Prompt injection attacks targeting agent system prompts or internal instructions.
10. Broken Function Level Authorization (BFLA): Missing or weak authorization checks on agent or API actions.
11. Broken Object Level Authorization (BOLA): Unauthorized access to objects through manipulated identifiers.
12. Broken User Authentication: Authentication bypasses, token misuse, or insecure session handling.
13. Command Injection: Execution of system commands via user- or agent-controlled input.
14. Cross Origin Resource Sharing (CORS): Overly permissive cross-origin policies allowing unauthorized access.
15. CRLF Injection: Manipulation of HTTP headers or responses using CRLF characters.
16. Cross-Site Scripting (XSS): Injection of untrusted scripts via agent or API responses.
17. Data and Model Poisoning: Manipulation of agent memory, embeddings, training data, or retrieval sources.
18. Excessive Agency: Agents performing actions beyond intended autonomy limits or permissions.
19. Excessive Data Exposure: APIs or agents returning more data than required for an operation.
20. Improper Inventory Management: Undocumented, deprecated, or shadow agents, tools, or APIs.
21. Improper Output Handling: Unsafe formatting, rendering, or downstream consumption of agent or LLM outputs.
22. Injection Attacks (Inject): SQL, NoSQL, expression, or generic injection vulnerabilities.
23. Input Validation (INPUT): Missing validation of input type, format, length, or allowed values.
24. Lack of Resources & Rate Limiting: Absence of throttling controls enabling abuse or resource exhaustion.
25. Local File Inclusion (LFI): Unauthorized access to local files through user- or agent-controlled paths.
26. Mass Assignment (MA): Unsafe binding of user input to internal objects without field allowlisting.
27. MCP – Data Leak: Leakage of sensitive data through MCP context, tools, or resources.
28. MCP – Indirect Prompt Injection: Prompt injection introduced via MCP tools, resources, or external context.
29. MCP – Malicious Code Execution: Unsafe code execution through MCP tool definitions or invocation.
30. MCP Security – Broken Authentication: Authentication failures or identity validation gaps in MCP servers.
31. MCP Security – Denial of Service: Resource exhaustion or availability attacks targeting MCP servers.
32. MCP Security – Input Validation: Missing or weak validation of MCP inputs, parameters, or payloads.
33. Misinformation: Generation or propagation of misleading or deceptive information by agents.
34. Misconfigured HTTP Headers: Missing or insecure HTTP security headers.
35. Model Context Protocol (MCP): Core MCP security including context isolation, tool exposure, and resource boundaries.
36. Prompt Injections: Direct prompt injection attacks against LLM or agent prompts.
37. Security Misconfiguration: Insecure defaults, exposed debug settings, or unsafe deployment configurations.
38. Sensitive Information Disclosure: Exposure of secrets, credentials, PII, internal prompts, or proprietary data.
39. Server-Side Request Forgery (SSRF): Unauthorized internal or external network requests initiated by agents.
40. Server-Side Template Injection (SSTI): Unsafe template rendering allowing code execution or data access.
41. Server Version Disclosure: Exposure of server, framework, or runtime version details.
42. Supply Chain: Risks from third-party models, tools, plugins, or external dependencies.
43. System Prompt Leakage: Exposure of system prompts, internal instructions, or agent configuration details.
44. Unbounded Consumption: Unrestricted token usage, tool execution, or resource consumption by agents.
45. Unnecessary HTTP Methods: Enabled HTTP verbs that increase attack surface.
46. Vector and Embedding Weaknesses: Weaknesses in embedding generation, vector storage, or retrieval logic.
47. Verbose Error Messages: Error responses exposing stack traces or internal logic.

Expected Outcome

Using AI Red Teaming test categories, you gain structured visibility into security risks across LLMs and agents. Category-level organization supports risk-based prioritization, clearer reporting, and consistent governance for AI-enabled systems.

What Next

To create customized tests for your specific requirements: Learn how to create custom tests
