Taming the Risks of AI to Unleash Innovation with Confidence
Raffaella del Prete tackled a crucial, often overlooked question in the race to adopt AI: what are the operational risks, and how can they be managed without slowing innovation? Her keynote explored the realities of AI implementation, where the speed of experimentation often outpaces governance. She pointed out that innovation only works when organisations are confident that the systems they build will not undermine them from within.
To illustrate her point, she described a startup that saw an AI usage bill balloon from seven dollars to more than seventy thousand overnight. The cause was not malicious activity but the absence of usage controls and basic oversight. Another example involved insecure APIs that exposed sensitive data. These were not exotic threats, but familiar missteps made under pressure.
Del Prete argued that AI adoption cannot be separated from infrastructure readiness. Organisations often treat development environments as low-risk, yet real customer data is routinely used in testing, which makes those environments vulnerable by default. Her warning was blunt: if you are using real data, you are already operating in production, whether or not the system is public-facing.
She also spoke about architecture decisions that enable safe experimentation. Using real-world healthcare examples, she showed how companies can deploy and test large language models in flexible ways without compromising compliance. What mattered most was not which AI tool was used, but whether the platform had monitoring, access control, and automated guardrails in place.
Her message was practical and direct. Confidence in AI does not come from hype. It comes from knowing that the infrastructure underneath is capable of absorbing mistakes, managing risk, and allowing teams to work quickly without exposing the business to unnecessary harm.
Takeaways
Costs can escalate without warning if teams do not have spending controls
AI workloads, especially those involving generative models, can consume large amounts of compute in a short time. Without usage limits, teams may unintentionally trigger huge bills, particularly during testing.
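A minimal sketch of what such a control can look like in code: a hard per-environment cap checked before each model call, so a runaway test loop fails fast instead of billing quietly. Every name and the cost model here are illustrative assumptions, not any provider's real API.

```python
class BudgetExceededError(RuntimeError):
    """Raised when a call would push spend past the hard cap."""


class SpendGuard:
    """Tracks cumulative spend and blocks calls once a hard cap is reached."""

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, estimated_cost_usd: float) -> None:
        # Enforce *before* the call runs, not after the bill arrives.
        if self.spent_usd + estimated_cost_usd > self.cap_usd:
            raise BudgetExceededError(
                f"blocked: would exceed the {self.cap_usd:.2f} USD cap"
            )
        self.spent_usd += estimated_cost_usd


def call_model(prompt: str) -> str:
    """Stub standing in for the real provider call."""
    return f"response to: {prompt[:20]}..."


guard = SpendGuard(cap_usd=50.0)  # e.g. a per-day budget for a test environment

def guarded_completion(prompt: str) -> str:
    est_usd = 0.002 * (len(prompt) / 4)  # crude token-based cost estimate
    guard.charge(est_usd)                # raises before any money is spent
    return call_model(prompt)

print(guarded_completion("Summarise this contract"))
```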
Security incidents are more often caused by internal missteps than by external attackers
Common mistakes like open APIs or public storage buckets continue to expose sensitive data. These issues usually occur when businesses rush to experiment without putting standard controls in place first.
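Misconfigurations like these are often catchable with a few lines of routine tooling. As a hedged sketch (assuming boto3 and configured AWS credentials; a real audit would also need to cover bucket policies, ACLs, and object-level access), a script can flag buckets that lack a public-access block:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        if not all(cfg.values()):  # any of the four block flags left off
            print(f"{name}: public access only partially blocked: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured at all")
        else:
            raise
```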
Governance should not be manual or after-the-fact
Traditional compliance methods like policy documents and audits are too slow for the pace of AI development. Organisations need automated governance tools that can enforce rules in real time.
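One concrete reading of "automated governance" is policy-as-code: rules written as executable checks that run on every request rather than in a quarterly audit. A minimal sketch, with invented roles and rules:

```python
from dataclasses import dataclass


@dataclass
class Request:
    user_role: str
    environment: str    # "dev", "staging", or "prod"
    uses_real_data: bool


# Each policy is a human-readable name plus a predicate evaluated per request.
POLICIES = [
    ("real data requires production-grade controls",
     lambda r: not (r.uses_real_data and r.environment == "dev")),
    ("only approved roles may deploy to prod",
     lambda r: r.environment != "prod" or r.user_role in {"ml-engineer", "sre"}),
]


def enforce(request: Request) -> None:
    """Reject the request immediately if any policy fails."""
    violations = [name for name, rule in POLICIES if not rule(request)]
    if violations:
        raise PermissionError("blocked by policy: " + "; ".join(violations))


# A dev-environment job on live data is stopped on the spot,
# rather than surfacing months later in an audit finding.
try:
    enforce(Request(user_role="analyst", environment="dev", uses_real_data=True))
except PermissionError as err:
    print(err)
```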
Development environments need production-level protection when real data is used
Del Prete stressed that many businesses treat test environments casually, despite using live data. This creates a false sense of security. If the data is sensitive, the environment needs to be treated as critical.
AI development includes the full stack, not just the model
Infrastructure, access policies, data management, and logging are all part of the AI environment. Weakness in any of these layers can cause significant operational or reputational damage.
Dashboards are not enough if they only provide information after problems occur
Businesses need real-time enforcement tools, such as spend caps and permission restrictions, rather than relying solely on alerts that arrive after the fact.
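To make the enforcement-versus-alerting distinction concrete, here is a small sketch of a permission restriction applied at call time; the roles, permissions, and function are all hypothetical. The denied action never runs, instead of merely generating an alert after the data has already moved.

```python
from functools import wraps

ROLE_PERMISSIONS = {
    "analyst":     {"read_masked_data"},
    "ml-engineer": {"read_masked_data", "deploy_model"},
    "admin":       {"read_masked_data", "deploy_model", "read_raw_data"},
}


def requires(permission: str):
    """Deny the action up front instead of alerting after it ran."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(role, set()):
                raise PermissionError(f"{role!r} lacks {permission!r}")
            return fn(role, *args, **kwargs)
        return wrapper
    return decorator


@requires("read_raw_data")
def export_training_set(role: str) -> str:
    return "raw customer records"  # only reachable with the right permission


try:
    export_training_set("analyst")  # blocked before any data moves
except PermissionError as err:
    print(err)
```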
Flexible experimentation requires pre-built safety nets
When teams can explore and deploy models with confidence that risks are contained, innovation moves faster. These safety nets include cost controls, compliance automation, and clear access boundaries.
AI adoption is an organisational issue, not just a technical one
Successful integration requires collaboration across teams. Engineering, operations, finance, and leadership all need to be involved early in the process to ensure risk is managed while progress continues.

