OWASP Top 10 for LLMs: New Security Challenges due to AI

18. June, 2025

The rise of large language models (LLMs) such as ChatGPT, Gemini, and others offers organizations enormous potential — but also introduces new security risks. To address these challenges, the OWASP Foundation has published a dedicated Top 10 list for LLM applications [1]. Its goal is to raise awareness among developers, architects, security teams, and enterprises about the specific threats associated with AI usage.

LLMs not only generate text, write code, translate content, or analyze data — they also introduce new attack surfaces. Without appropriate safeguards, these vulnerabilities can lead to serious security incidents. The OWASP list highlights the ten most critical risks organizations should be aware of.

The OWASP Top 10 Risks for LLM Applications

  1. Prompt Injection: Manipulated user inputs (prompts) can cause the model to produce harmful or unintended outputs, such as bypassing security rules. A well-known case involved a Chevrolet chatbot that was tricked into offering vehicles for $1 [2].
  1. Sensitive Information Disclosure: LLMs may expose confidential data — either through risky user inputs or uncontrolled training on sensitive content. In 2023, a ChatGPT bug allowed users to access parts of other users’ chat histories and metadata [3]. With open models, it is often unclear how submitted data is handled.
  1. Supply Chain: Pre-trained models, external data sources, or third-party plugins can introduce vulnerabilities or become compromised.
  1. Data & Model Poisoning: Attackers may manipulate training datasets to deliberately induce biased, incorrect, or malicious model behavior.
  1. Improper Output Handling: Without proper filtering and validation of model outputs, there’s a significant risk of security issues such as code injection or cross-site scripting. ChatGPT has, in fact, been used to generate malware — despite built-in safeguards [4].
  1. Excessive Agency: Models granted too much autonomy may perform unpredictable or security-relevant actions, especially when integrated with other systems.
  1. System Prompt Leakage: Exposure of internal system prompts or configuration details can be exploited to manipulate model behavior.
  1. Vector and Embedding Weaknesses: Particularly in Retrieval-Augmented Generation (RAG) systems, vulnerabilities may arise from improper handling or manipulation of embeddings and vector queries.
  1. Misinformation: LLMs often produce convincing but factually incorrect content — known as hallucinations. This phenomenon becomes more frequent with more powerful models [5], and ranges from harmless confusion to serious reputation or business damage.
  1. Unbounded Consumption: Unrestricted resource usage by LLMs can lead to excessive operational costs or service outages.

Practical Recommendations for Secure LLM Usage

Data Responsibility & Governance

  • Data Classification: Clearly define what constitutes public, internal, or confidential data. This classification must be reflected in your organization’s LLM usage policies — including for internal models.
  • No Sensitive Data in Public LLMs: Competitive business information or personal data should never be entered into external LLM services. Instead, consider on-premise models or dedicated cloud instances that ensure data sovereignty.
  • Employee Training: Educate staff about risks such as hallucinations or misinformation. Fact-checking and a critical approach to AI-generated content should become standard practice.

Technical Safeguards for LLMs

  • Input Validation & Sanitization: Consistently review and sanitize prompts. Use whitelisting, filtering mechanisms, or even a dedicated “LLM firewall” to block harmful inputs.
  • Output Filtering & Moderation: Validate LLM outputs before presenting them to end users. Implement rule-based filters or use a second LLM to assess potentially problematic content.
  • API Security: Secure LLM APIs with strong authentication and authorization protocols. Limit access according to the principle of least privilege and implement rate limiting to prevent abuse.

Building Security Awareness

  • Ongoing Training: Use real-world examples — like the Chevrolet chatbot incident — to illustrate risks in a relatable way.
  • Clear Usage Policies: Make sure guidelines for LLM use are transparent and understandable. Communicate not just what is allowed, but also why certain rules exist.
  • Storytelling & Case Studies: People remember stories better than checklists. Use simulated or real incidents to anchor awareness.
  • Regulatory Monitoring: Since August 1, 2024, the EU AI Act has been in effect, setting unified standards for responsible AI use across the EU. It classifies AI applications by risk level and imposes specific obligations on both providers and users.

Further Frameworks & Recommendations

  • The AI Risk Management Framework (AI RMF) by NIST offers a structured approach to evaluating and managing AI risks, based on the principles of Map – Measure – Manage – Govern [6].
  • ENISA, the European Union Agency for Cybersecurity, has published the Multilayer Framework for Good Cybersecurity Practices for AI. It provides concrete technical and organizational recommendations across the entire AI lifecycle [7].
  • The EU AI Act offers legal orientation by introducing a unified risk-based classification system and binding obligations for AI providers and users across the European Union [8].

Conclusion: Using LLMs Securely and with a Future-Oriented Mindset

The era of artificial intelligence and large language models is only just beginning — and development is accelerating rapidly. With thoughtful governance structures, robust technical safeguards, and strong security awareness, organizations can proactively address the emerging challenges in AI security. When used responsibly, AI has the potential to become a powerful and sustainable driver of productivity.

Recommended posts:

Information Disclosure Vulnerabilities: How to Effectively Prevent Data Leaks

Business Email Compromise: Risks, trends and defences

Identity theft 2.0: New strategies and how organisations can respond

Sources:

Microsoft erzwingt DMARC
Data breach
Innovation Day 2025 der HTL Rennweg
data protection
ARM64
Save Internet
Ethical Hacking
MITRE-HarfangLab
Sicherheitslücke
Identitätsdiebstahl
Red Teams, Blue Teams, Purple Teaming

Living Off the Land attacks

MSSP of the Year 2024

SIEM

What is a SIEM?

Nozomi Guardian Air

WE ARE LOOKING FORWARD TO HEARING FROM YOU!

IKARUS Security Software GmbH Blechturmgasse 11
1050 Vienna

Phone: +43 1 58995-0
Sales Hotline: +43 1 58995-500
sales@ikarus.at

SUPPORT HOTLINE

Support hotline: +43 1 58995-400
support@ikarus.at

Support hours:
Mon – Thu: 8am – 5pm
Fri: 8am – 3pm
24/7 support by arrangement

Remote maintenance software:
AnyDesk Download