The API Guys

Your Google API Keys Just Got More Dangerous Without You Doing Anything

Security · API Security · API Development

For over a decade, Google told developers that Google Cloud API keys are not secrets. Firebase's own security checklist says so explicitly. The Maps JavaScript documentation instructs developers to paste their key directly into HTML. This was accurate advice - these keys were designed as project identifiers for billing, not as authentication credentials. Embedding them in client-side code was normal, sanctioned practice.

That changed when Google introduced Gemini. And according to researchers at TruffleSecurity, nearly 3,000 organisations have not been told.

What Changed

Google Cloud uses a single API key format (keys starting with AIza...) across all of its services. When you enable the Gemini API (listed in Google Cloud as the Generative Language API) on a project, every existing API key on that project - including ones you created years ago for Maps embeds or YouTube integrations - silently gains the ability to authenticate against Gemini endpoints. No warning. No confirmation. No email.
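A quick first pass is to search your own codebase and built front-end assets for strings shaped like these keys. The pattern below (AIza plus 35 URL-safe characters) is the commonly observed shape of Google API keys, not an officially documented format, so treat matches as candidates to verify rather than confirmed keys:

```shell
# Search the current directory tree for strings shaped like Google API
# keys. The 39-character AIza... pattern is an observed convention, not
# a documented guarantee, so expect occasional false positives.
grep -rnoE 'AIza[0-9A-Za-z_-]{35}' . || echo "no key-shaped strings found"
```

Running this against the directory that holds your deployed JavaScript (not just your source tree) catches keys that were injected at build time.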

TruffleSecurity describes this as retroactive privilege expansion. The sequence is:

  1. A developer creates an API key and embeds it in a website for Google Maps, following Google's own guidance.
  2. At some point later, a developer on the same team enables the Gemini API for an internal prototype or experiment.
  3. The public Maps key now authenticates to Gemini. The developer who created it is never notified.

There is also an insecure default problem. When you create a new API key in Google Cloud Console, it defaults to "Unrestricted" - meaning it is immediately valid for every API enabled on the project, Gemini included. The interface shows a warning about unauthorised use, but the default posture is wide open.

How Trivial the Attack Is

An attacker does not need to compromise your infrastructure. They visit your website, open the browser developer tools, and look at your page source or network requests. They copy your AIza... key from wherever it appears - a Maps embed, a Firebase initialisation block, a YouTube player. Then they run:

curl "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"

If your project has Gemini enabled, instead of a 403 Forbidden they get a 200 OK and a list of your files. From there they can:

  • Access private data: the /files/ and /cachedContents/ endpoints can contain uploaded datasets, documents, and cached context stored through the Gemini API.
  • Run up your bill: Gemini API usage is not free. TruffleSecurity estimates a threat actor maxing out API calls on a single account could generate thousands of pounds in charges per day.
  • Exhaust your quotas: this can shut down your legitimate Gemini services entirely.
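Both of those data endpoints answer a plain GET with the key passed as a query parameter, so checking your own exposure takes seconds. A sketch, assuming the key is in an `API_KEY` environment variable (these calls hit live Google endpoints, so only run them against keys you own):

```shell
# Probe the two Gemini data endpoints with a key you control.
# 200 means the project has the Generative Language API enabled
# and the key reaches it; 403 means the key is blocked.
curl -s -o /dev/null -w 'files: %{http_code}\n' \
  "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"
curl -s -o /dev/null -w 'cachedContents: %{http_code}\n' \
  "https://generativelanguage.googleapis.com/v1beta/cachedContents?key=$API_KEY"
```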

The attacker never touches your servers. They use your own credential, taken from your own public webpage.

The Scale of the Problem

TruffleSecurity scanned the November 2025 Common Crawl dataset - a large archive of publicly scraped websites. They found 2,863 live Google API keys vulnerable to this vector. These were not obscure side projects: the affected organisations included major financial institutions, security companies, global recruiting firms, and Google itself.

One key found in the page source of a public-facing Google product had been sitting there since at least February 2023. TruffleSecurity tested it against the Gemini API's /models endpoint and received a valid response. They reported the issue to Google on 21 November 2025. Google classified it as "single-service privilege escalation" on 13 January 2026.

If Google's own engineering teams embedded a key that turned out to have Gemini access, the expectation that every developer will navigate this correctly is unrealistic.

Why This Is an API Design Problem

This is not a developer error in the traditional sense. The developers who embedded these keys followed Google's explicit guidance. The problem is architectural: a single credential format serving two fundamentally different purposes, with no separation between public identifiers and sensitive authentication tokens.

Secure API design distinguishes between publishable keys (safe to expose, limited to read-only or identification functions) and secret keys (never exposed client-side, used for privileged operations). Stripe does this well: your publishable key can be embedded in a payment form; your secret key must never leave your server. The Google Cloud model conflated these two categories for years, and Gemini's arrival exposed the assumption that had been baked in.

The broader lesson applies well beyond Google. Any credential can gain new capabilities when the service it belongs to expands. If you granted an API key access to a platform three years ago and that platform has launched new features or integrations since, the blast radius of that credential leaking may be larger than you originally assessed. This is a routine audit question that most teams never ask.

We covered a related pattern in our post on npm supply chain attacks targeting API keys. The threat vector is different, but the underlying issue is the same: credentials that were adequately scoped when they were created can become significantly more dangerous over time.

What to Do

Check whether the Generative Language API is enabled on your Google Cloud projects. In Google Cloud Console, go to APIs and Services and look for the Generative Language API. If it is enabled on any project that also has API keys used in client-side code, those keys now have Gemini access.
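If you prefer the command line, a sketch using the gcloud CLI (assuming it is installed and authenticated; `PROJECT_ID` is a placeholder for your own project):

```shell
# Print the Generative Language API row if, and only if, it is enabled
# on the project. Empty output means Gemini is not reachable here.
gcloud services list --enabled \
  --project "$PROJECT_ID" \
  --filter="config.name=generativelanguage.googleapis.com"
```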

Audit all API keys across your Google Cloud projects. For each key, check what services it is restricted to. Any key set to "Unrestricted" is potentially dangerous if Gemini is enabled on the same project. Restrict keys to only the APIs they actually need.
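The gcloud CLI can drive this audit too. A sketch, where `KEY_ID` stands in for a key's identifier from the list output, and the Maps service name is an example of an API target you might restrict to:

```shell
# Enumerate the keys on a project, then inspect one key in detail.
# A key whose description contains no "restrictions" block is
# unrestricted and valid for every enabled API, Gemini included.
gcloud services api-keys list --project "$PROJECT_ID"
gcloud services api-keys describe "$KEY_ID" --project "$PROJECT_ID"

# Lock a key down to a single service (Maps JavaScript API here).
gcloud services api-keys update "$KEY_ID" --project "$PROJECT_ID" \
  --api-target=service=maps-backend.googleapis.com
```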

Rotate any key that has been embedded in public code. If you have ever embedded a Google API key in JavaScript that is served to users, treat it as potentially compromised for Gemini access. Create a new restricted key, deploy it, and revoke the old one.
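Once the new key is deployed, it is worth confirming that the old one really is dead. A hedged sketch, with the retired key in a hypothetical `OLD_KEY` variable:

```shell
# Verify the retired key no longer reaches Gemini. Expect 403 once
# revocation has propagated; a 200 means rotation is not yet complete.
curl -s -o /dev/null -w '%{http_code}\n' \
  "https://generativelanguage.googleapis.com/v1beta/models?key=$OLD_KEY"
```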

Use separate Google Cloud projects for public-facing and private services. If your Maps embed and your Gemini integration live on the same project, a Maps key found in your page source is a Gemini key. Isolating them by project prevents this entirely.

Scan your repositories for exposed keys. TruffleSecurity's open-source TruffleHog tool can detect live, exposed keys in code and repositories. Running it against your codebase will surface any Google API keys that are currently valid.
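A typical invocation looks like the following; flag names vary between TruffleHog releases (`--only-verified` is from the v3 CLI), so check `trufflehog --help` against your installed version:

```shell
# Scan the local repository's full git history, then the directory of
# built assets that is actually served to browsers. Verified-only
# output suppresses matches that are no longer live credentials.
trufflehog git file://. --only-verified
trufflehog filesystem ./public --only-verified
```

Scanning git history matters: a key deleted from the current tree but still valid in Google Cloud remains exploitable.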

Apply this thinking to all credentials, not just Google ones. Do a periodic review of what each API key and service token in your environment can actually access today, not just what it could access when it was created. Platforms evolve, permissions expand, and the credential you issued for a narrow purpose two years ago may have a much larger footprint now.

Google's Response

Google has stated that it is aware of the report and has worked with TruffleSecurity to address the issue. Their announced mitigations are: new AI Studio keys will default to Gemini-only scope rather than being unrestricted, leaked API keys will be blocked from accessing the Gemini API, and proactive notifications will be sent when leaks are detected. These are sensible changes, but they do not retroactively protect the 2,863 keys that were already exposed before the fix.

If you want help auditing the API key posture across your Google Cloud projects or reviewing how credentials are scoped and managed in your application, get in touch. Getting this right is simpler than it sounds, and the risk of not doing it has just gone up considerably.

Ready to Start Your Project?

Get in touch with our Leeds-based team to discuss your Laravel or API development needs.