The API key buffet
With the birth of vibe coding, we get the yin and yang of AI: the speed to prototype in minutes, rapid change and re-test, amazing. But… as ever, there's a dark side, where API keys are littered across vibe coders' machines, sitting there just waiting for a malicious actor to gain access and feast on your organisation's data!
So, what's an API key? Here's a nice succinct definition from ChatGPT:
A unique string of characters used to identify and authenticate an application when it makes requests to a service, such as OpenAI or Stripe. It acts like a password for software, allowing the service to track usage, enforce permissions, and control access.
Not bad, but I prefer...
The keys to the castle
This is an obvious attack vector, especially with a little inside information on which department or person is leaning into vibe coding.
Cleaning up after the buffet
To resolve the API key buffet, we must first assess the situation to understand where keys are being used, before putting processes in place to mitigate the risk of their use rather than, importantly, banning them. The Scail AI Risk Value Index will expose these AI vulnerabilities and help direct resources where required.
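That first assessment step can be partly automated with a simple codebase scan. The sketch below is illustrative only: the regex patterns are a tiny, assumed subset of real key formats, and a production scan would use a dedicated tool such as gitleaks or truffleHog with a far richer rule set.

```python
import re
from pathlib import Path

# Illustrative patterns only; real secret scanners ship far more
# comprehensive (and more precise) rule sets than these.
KEY_PATTERNS = {
    "OpenAI": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Stripe": re.compile(r"sk_live_[A-Za-z0-9]{16,}"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any key patterns found in the given text."""
    return [name for name, pattern in KEY_PATTERNS.items()
            if pattern.search(text)]

def scan_tree(root: str) -> dict[str, list[str]]:
    """Walk a source tree and report files containing likely API keys."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            hits = scan_text(path.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file; skip it
        if hits:
            findings[str(path)] = hits
    return findings
```

Run `scan_tree(".")` over a project checkout and you have a starting map of which machines and repos are hosting the buffet.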
In the case of the API key buffet, we need build pipelines for all software. A pipeline in software development is a centralised set of processes that takes our source code and builds it into a deployable form, such as an .apk file for an Android app.
Once we have a pipeline, a step within that pipeline performs two sub-steps. Sub-step 1: query a cloud-based secrets repository, such as AWS Secrets Manager, to obtain the API key without any human intervention. Sub-step 2: search the codebase, replacing instances of agreed tags, such as <OPENAI_APIKEY>, with the appropriate API key / secret.
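The two sub-steps above can be sketched as follows. This is a minimal sketch, not a definitive implementation: it assumes the pipeline runs under an IAM role with access to AWS Secrets Manager (fetched here via boto3's `get_secret_value`), and the tag-naming convention is simply the one used in the article.

```python
import re

# Tags follow the article's <OPENAI_APIKEY> convention; the exact
# naming scheme is an assumption to be agreed with your team.
TAG_PATTERN = re.compile(r"<([A-Z0-9_]+)>")

def fetch_secret(secret_id: str) -> str:
    """Sub-step 1: fetch a secret from AWS Secrets Manager.

    Runs inside the pipeline under an IAM role, so no human ever
    sees the key. boto3 is assumed to be available on the build agent.
    """
    import boto3  # imported lazily so the injection logic is testable offline
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id)["SecretString"]

def inject_secrets(source: str, secrets: dict[str, str]) -> str:
    """Sub-step 2: replace each agreed tag with its secret value."""
    def replace(match: re.Match) -> str:
        tag = match.group(1)
        if tag not in secrets:
            raise KeyError(f"No secret configured for tag <{tag}>")
        return secrets[tag]
    return TAG_PATTERN.sub(replace, source)
```

A pipeline step would then read each source file, call `inject_secrets(contents, {"OPENAI_APIKEY": fetch_secret("openai-api-key")})`, and write the result into the build output, never into the repo.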
The security principle here is to take the human out of the loop. This is a classic case of security being very much an adoption issue. Only when we all adopt the rules for AI can it be done safely: use the damn pipeline!
What boards need to see now
Most businesses are already using AI. Very few can clearly answer where AI is creating value, where it is increasing risk, and which initiatives should be scaled, fixed, or stopped.
Our scorecard gives leaders a structured way to continuously measure AI across the areas that actually determine performance:
Governance & Risk
Strategy & Prioritisation
Commercial Alignment & Value Design
Technology & Data
Culture & Capability
Execution & Delivery
Adoption & Integration
Measurement & Value Realisation
This isn’t a one-off assessment. It’s a board-level view of what’s working, what’s drifting, what’s risky, what’s valuable, and what needs action now.
AI is no longer just a technology issue. It is a growth issue, a governance issue, a trust issue, a margin issue, and a board issue.
The winners will not be the businesses doing the most AI. They will be the businesses that build the strongest AI capability.
Read more about our AI Risk & Value Scorecard.