Synapsed at NetEye Conference 2025: Building Trustworthy AI from Day One

At the NetEye Conference 2025 in Verona, one of Europe’s leading events on observability, cybersecurity, and IT operations, Synapsed delivered a keynote that addressed one of the most critical challenges of the decade:
how to build AI systems that are responsible and operable from the very beginning.

Matteo Meucci, CEO of Synapsed and long-standing OWASP leader, presented the keynote “AI Inside: Handle with Care.”
The session explored the growing gap between AI adoption and AI readiness, and outlined a practical roadmap for organizations moving toward Trustworthy AI, aligned with OWASP guidance, the EU AI Act, and international standards.


Why AI Needs to Be Handled With Care

Organizations are accelerating AI adoption—but often repeating the same mistakes made in the early days of software security.

Key insights from the keynote revealed that:

  • AI is being integrated without proper testing methodologies
  • Teams are relying on tools built for deterministic code, not adaptive models
  • Open models and libraries are assumed safe by default
  • Model fixes, retraining, and drift management are poorly understood and rarely operationalized
  • Developers are being held accountable for risks that require organizational governance, not individual effort

The result is the emergence of a new category of challenges:
AI Insecure Software Development.


The Four Systemic Errors in AI Development Today

Matteo described four recurring issues that Synapsed encounters across industries:

1. Using the Wrong Tools

Conventional DevSecOps tools cannot evaluate:

  • model robustness
  • bias and fairness
  • data drift
  • adaptive, non-deterministic behavior

Without AI-specific testing frameworks, models enter production unverified—creating compliance and operational risks under regulations such as the EU AI Act and NIST AI RMF.

2. Trusting LLM Models by Default

Synapsed’s research—summarized in the OWASP Top 10 LLM 2025 Research Study—found that 100% of the major LLMs tested exposed at least one vulnerability.

Organizations routinely build products on top of untested, unassessed models.
This is a major risk multiplier.

3. No Time (and No Process) for Fixing

AI systems:

  • adapt
  • drift
  • change over time

Traditional “one-off” assessments are ineffective. Continuous testing and AI drift monitoring must become the standard.
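To make the idea of continuous drift monitoring concrete, here is a minimal sketch using the Population Stability Index (PSI), a common drift metric. The bin count, threshold values, and synthetic data are illustrative assumptions, not part of any OWASP or Synapsed specification.

```python
# Minimal data-drift check using the Population Stability Index (PSI).
# Bin count, epsilon, and the rule-of-thumb thresholds below are
# illustrative assumptions, not a prescribed standard.
import math
import random

def psi(baseline, current, bins=10):
    """PSI = sum((q_i - p_i) * ln(q_i / p_i)) over shared bins,
    where p_i / q_i are the baseline / current bin frequencies."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0

    def hist(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        n = len(sample)
        # Small epsilon avoids log(0) on empty bins.
        return [max(c / n, 1e-6) for c in counts]

    p, q = hist(baseline), hist(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

random.seed(42)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
stable   = [random.gauss(0.0, 1.0) for _ in range(5000)]
drifted  = [random.gauss(0.8, 1.0) for _ in range(5000)]  # mean shift

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate, > 0.25 major drift.
print(f"stable  PSI: {psi(baseline, stable):.3f}")
print(f"drifted PSI: {psi(baseline, drifted):.3f}")
```

Run on a schedule against production inputs, a check like this turns a one-off assessment into an ongoing alerting signal.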

4. Blaming Developers Instead of Building Governance

AI trustworthiness is not a developer problem—it is a company-wide responsibility that spans:

  • governance
  • design
  • data quality
  • privacy
  • security
  • operational readiness

This shift—from Secure SDLC to Trustworthy SDLC—is essential for building reliable AI systems.


The OWASP Standards That Enable Trustworthy AI

Synapsed is directly involved in defining two global standards that were highlighted during the keynote:


OWASP AI Testing Guide (AITG)

A comprehensive framework to test AI applications, models, infrastructure, and data.
The AITG provides more than 30 structured tests covering:

  • adversarial robustness
  • prompt injection
  • model extraction
  • sensitive data leakage
  • fairness and bias evaluation
  • continuous monitoring

The objective is clear:
give organizations evidence-based assurance that their AI systems are safe, reliable, and compliant.


OWASP AI Maturity Assessment (AIMA)

AIMA is a structured maturity model that evaluates nine practice areas, including:

  • Responsible AI
  • Governance
  • Data Management
  • Privacy
  • Secure Design
  • Implementation
  • Verification
  • Operations

It enables organizations to:

  • measure their AI governance maturity
  • benchmark suppliers and outsourcers
  • align with EU AI Act readiness
  • create improvement roadmaps
  • build internal cross-functional accountability
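A supplier benchmark or improvement roadmap can be as simple as scoring each practice area and surfacing the gaps. The 0-3 level scale, example scores, and target threshold below are assumptions for illustration; AIMA defines its own levels and criteria.

```python
# Hypothetical maturity-gap calculation across the AIMA practice areas.
# The 0-3 scale, target level, and sample scores are illustrative only.

PRACTICE_AREAS = [
    "Responsible AI", "Governance", "Data Management", "Privacy",
    "Secure Design", "Implementation", "Verification", "Operations",
]

def maturity_gaps(scores, target=2):
    """Return areas scoring below the target level, with the size of each gap."""
    return {a: target - scores[a] for a in PRACTICE_AREAS if scores[a] < target}

supplier = {
    "Responsible AI": 1, "Governance": 2, "Data Management": 3, "Privacy": 2,
    "Secure Design": 1, "Implementation": 2, "Verification": 0, "Operations": 2,
}

gaps = maturity_gaps(supplier)
print("Improvement roadmap (area -> levels to gain):")
for area, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"  {area}: +{gap}")
```

The same scoring applied across several suppliers yields a like-for-like comparison for procurement decisions.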


Key Takeaway for 2025: Trustworthy AI Is Not Optional

Matteo closed the keynote with a message that captures Synapsed’s mission:

AI introduces new threats and new risks.
To build reliable systems, we must integrate governance, testing, and continuous monitoring from Day One.

This means:

  • adopting AI-specific testing frameworks → OWASP AITG
  • building organization-wide governance → OWASP AIMA
  • ensuring continuous operational monitoring for AI systems
  • aligning with global standards (EU AI Act, ISO 42001, ISO 5338, NIST AI RMF)

Trustworthy AI is the foundation for safe adoption, regulatory compliance, and long-term value.


Synapsed Thanks the NetEye Community

We are honored to have contributed to the NetEye Conference 2025 and to the vibrant community built by Würth Phoenix.

Synapsed will continue collaborating with partners, enterprises, and industry leaders to define the future of Responsible AI.

References:

NetEye Conference 2025

LinkedIn Post

Synapsed LLM Study White Paper

OWASP AI Testing Guide

OWASP AI Maturity Assessment

OWASP GenAI Security