Large Language Models
Doyensec specializes in comprehensive AI security audits to ensure your AI-driven systems are resilient, compliant, and free from vulnerabilities that could expose your business to operational, financial, and reputational risks. Since these technologies first began to be embedded in modern applications, we have been the preferred partner of companies specializing in AI development and integration.
AI Agents and Models
Auditing LLM models and systems requires a multi-faceted approach that combines standard application security testing methodologies with new, specialized techniques and skill sets.
During the security testing of AI chatbots and other LLM-based applications, we employ a comprehensive proprietary approach that improves upon best practices from multiple standards, such as the OWASP API Security Top 10, the OWASP Machine Learning Security Top 10, and other industry-standard AI-focused guidelines.
While verifying input validation and output rendering is a critical part of every assessment, chatbots and other prompt input features can greatly increase the size and complexity of an application's attack surface. Modern defensive techniques and development practices must be implemented to ensure resilience against a variety of attacks, including SQL injection and prompt injection. Given the power granted to models to alter their environment, including generating custom code or queries, care must be taken to ensure proper guardrails are in place to prevent manipulation. Legacy vulnerabilities, such as Cross-Site Scripting and SQL injection, don't simply disappear with the adoption of advanced technologies, so they must still be thoroughly tested for and mitigated to protect end users and prevent privilege-escalation attacks.
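To make this class of issue concrete, the minimal Python sketch below (hypothetical function and constant names, not drawn from any client codebase) contrasts a prompt built by naive string concatenation, which injected instructions can override, with a guardrail that validates any model-requested action against an explicit allow-list:

# Vulnerable pattern: untrusted user input is concatenated directly into the
# instruction prompt, so text like "ignore the previous instructions" can
# override the intended behavior (prompt injection).
def build_prompt_unsafe(user_input: str) -> str:
    return f"You are a support bot. Answer politely.\n\nUser question: {user_input}"

# Safer pattern: keep trusted instructions and untrusted input in separate
# roles, and gate every side effect the model may request behind an explicit
# allow-list plus an authorization check, never executing it blindly.
ALLOWED_ACTIONS = {"lookup_order", "reset_password"}

def guarded_action(action_name: str, caller_is_authorized: bool) -> None:
    if action_name not in ALLOWED_ACTIONS or not caller_is_authorized:
        raise PermissionError(f"action {action_name!r} rejected by guardrail")
    # ...dispatch to the real, narrowly scoped implementation here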
To ensure robust defensive practices and mechanisms are in place, our multi-faceted methodology doesn't center solely on the latest exploitation techniques. It also focuses on traditional access controls, ensuring users cannot perform unauthorized actions and are restricted from inappropriately accessing sensitive data belonging to the underlying systems, infrastructure, and cloud components, or to other users and tenants.
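As a hedged illustration of the tenant-isolation checks we look for, the sketch below (a toy in-memory store with invented names) enforces authorization server-side rather than trusting anything asserted in a prompt:

from dataclasses import dataclass

# Toy in-memory store standing in for a real database.
STORE = {("tenant-a", "doc-1"): "quarterly report"}

@dataclass
class Caller:
    user_id: str
    tenant_id: str

def fetch_document(caller: Caller, tenant_id: str, doc_id: str) -> str:
    # The tenant check is enforced server-side, independent of anything the
    # model or the prompt claims about who the user is.
    if tenant_id != caller.tenant_id:
        raise PermissionError("cross-tenant access denied")
    return STORE[(tenant_id, doc_id)]

# A prompt-manipulated request for another tenant's data fails closed:
# fetch_document(Caller("alice", "tenant-b"), "tenant-a", "doc-1")  -> PermissionError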
Generative AI capabilities
Doyensec helps you identify your application's susceptibility to a wide range of malicious attacks aimed at exploiting LLMs.
Our expertise can help you avoid the pitfalls associated with prompts that might be leveraged to jailbreak or manipulate an LLM so that it ignores its given instructions, leaks sensitive information, provides erroneous information, or discusses sensitive topics in an unacceptable register. For features with generative AI (GenAI) capabilities, we also validate that defenses are in place to prevent resource exhaustion, that internal secrets cannot be stolen, and that users cannot bypass intended limits.
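A minimal sketch of the kind of resource-exhaustion defense we validate might look like the following (illustrative limits and helper names; appropriate values depend on the deployment):

import time

# Illustrative per-user limits, not recommendations.
MAX_PROMPT_CHARS = 4_000
MAX_REQUESTS_PER_MINUTE = 20
_request_log: dict[str, list[float]] = {}

def admit_request(user_id: str, prompt: str) -> bool:
    # Reject oversized prompts before they ever reach the model.
    if len(prompt) > MAX_PROMPT_CHARS:
        return False
    # Sliding-window rate limit so one user cannot exhaust shared capacity.
    now = time.monotonic()
    recent = [t for t in _request_log.get(user_id, []) if now - t < 60.0]
    if len(recent) >= MAX_REQUESTS_PER_MINUTE:
        return False
    recent.append(now)
    _request_log[user_id] = recent
    return True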
For features with advanced capabilities like sandboxed code execution environments or internal API access, it is vital to verify their secure implementation and isolation. This involves testing for sandbox escape vectors and request forgeries, ensuring that any executed code remains within prescribed boundaries and cannot compromise the underlying system.
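For illustration only, the sketch below shows the bare outline of such an execution boundary: a separate interpreter process, an emptied environment, and a hard timeout. A production sandbox would layer on much stronger isolation (dedicated users, namespaces, seccomp, no network access), and these are exactly the boundaries we probe for escapes:

import subprocess
import tempfile

def run_untrusted_snippet(code: str) -> str:
    # Persist the model-generated code to a temp file and run it in a
    # separate interpreter process: -I (isolated mode) ignores environment
    # variables and user site-packages, env={} strips inherited secrets,
    # and timeout= enforces a hard execution limit.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python3", "-I", path],
            capture_output=True, text=True, timeout=5, env={},
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "terminated: execution time limit exceeded"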
Our research articles
Research is one of our founding principles, and we invest in it heavily. All of our researchers can dedicate 25% of their time exclusively to self-directed research.