Ask any AI coding assistant to connect your app to a third-party service and there is a reasonable chance it will write the API key directly into the source code. This is one of the most common vulnerabilities in AI-generated code, and it is one of the easiest ways for a startup to lose thousands of dollars or expose user data overnight.
This guide covers why AI assistants hardcode secrets, what can go wrong when they do, and how to build a workflow that prevents secrets from ever reaching your repository.
🛡️ SecurePilot found these exact patterns, and 165+ more
Hardcoded secrets are SecurePilot's single most common finding in AI-generated codebases. Our scanner includes 40+ rules specifically for credential patterns: OpenAI keys, Stripe tokens, AWS credentials, JWT secrets, database URLs, and high-entropy strings. This is the vulnerability AI assistants introduce most reliably. Catch it before it reaches your repository.
Why AI Assistants Hardcode Secrets
AI models learn from public code. A large proportion of tutorial code, blog examples, and Stack Overflow answers hardcode API keys for convenience. The model has seen this pattern far more than the correct environment variable pattern, so it reproduces it.
The model also has no awareness of what will be committed to Git, who has access to the repository, or whether the repository is public. It generates code that works for the prompt it received. Version control and deployment context are outside its knowledge.
1. API Keys in Source Code
The most common form. You ask the AI to add OpenAI, Stripe, Twilio, or another service. It generates working code with the key inline.
// Vulnerable - AI-generated, key in source
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'sk-proj-aBcDeFgHiJkLmNoPqRsTuVwXyZ...',
});

If this file reaches a public GitHub repository, automated scanners will find the key within minutes. These are not humans browsing your repo. They are bots running 24/7 specifically to harvest leaked credentials and either use them directly or sell them.
Even in a private repository, every developer with access can see the key. If any of them has their laptop stolen, their GitHub account compromised, or leaves the company, the key is at risk.
// Safe - environment variable
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});

2. Database Credentials in Config Files
AI assistants generating database connection code will often produce a full connection string with username, password, and host hardcoded.
// Vulnerable - credentials in connection string
import { Pool } from 'pg';
const pool = new Pool({
connectionString: 'postgresql://admin:mypassword123@db.example.com:5432/production',
});

A leaked database password is more dangerous than a leaked API key. An API key can often be rotated quickly with a known blast radius. A database compromise can mean full access to every user record, transaction, and piece of sensitive data you have ever stored.
// Safe - from environment
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
});

3. JWT Secrets and Encryption Keys
When generating authentication code, AI assistants frequently hardcode the JWT signing secret or encryption key used to protect session tokens.
// Vulnerable - hardcoded JWT secret
const token = jwt.sign(
{ userId: user.id, role: user.role },
'my-super-secret-key-123', // anyone who sees this can forge tokens
{ expiresIn: '7d' }
);

If an attacker obtains your JWT signing secret, they can forge tokens for any user, including admins. This effectively bypasses your entire authentication system.
// Safe - from environment
const token = jwt.sign(
{ userId: user.id, role: user.role },
process.env.JWT_SECRET,
{ expiresIn: '7d' }
);

4. Secrets in Frontend Code
This is a separate but related problem. Developers building React, Next.js, or Vite apps sometimes put secrets in client-side environment variables, which get bundled into the JavaScript sent to every user's browser.
// Vulnerable - Next.js: NEXT_PUBLIC_ prefix exposes to browser
// .env.local
NEXT_PUBLIC_OPENAI_API_KEY=sk-proj-aBcDeFg...
// In component - this key ships to every visitor's browser
const openai = new OpenAI({ apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY });

Any secret prefixed with NEXT_PUBLIC_ (or VITE_ in Vite) is embedded in your JavaScript bundle and visible to anyone who opens DevTools. API calls to OpenAI, Stripe, or any third party must go through your server, never directly from the browser.
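One way to keep the key server-side is a small API route that the browser calls instead of calling OpenAI directly. The sketch below is framework-agnostic plain JavaScript; the function name buildUpstreamRequest, the route concept (/api/chat), and the model name are illustrative assumptions, not part of any specific framework's API.

```javascript
// Sketch: build the upstream OpenAI request on the server only.
// The browser POSTs the user's message to your own route (e.g. /api/chat);
// the route builds this request, so the key never reaches the client.
function buildUpstreamRequest(userMessage) {
  // Server-side env var - note there is no NEXT_PUBLIC_ / VITE_ prefix,
  // so it is never bundled into client JavaScript.
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) throw new Error('OPENAI_API_KEY is not set');
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`, // stays on the server
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini', // illustrative model name
        messages: [{ role: 'user', content: userMessage }],
      }),
    },
  };
}
```

In a Next.js route handler or Express route you would pass the returned url and options to fetch and relay the response back to the browser, so the client only ever sees your own endpoint.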
Prevention Checklist
A simple workflow to ensure secrets never reach your repository:
- Add a .gitignore entry for .env, .env.local, and .env.production before your first commit
- Commit a .env.example file with placeholder values so other developers know what variables are needed
- Use GitHub secret scanning alerts (enabled by default on public repos)
- Rotate any key that you are not 100% certain has never been committed
- Run a static analysis scan on every AI-generated file before committing
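The first two checklist items take a minute from the shell. The file names follow the checklist above; the variable names in .env.example are placeholders you would replace with your project's own.

```shell
# Keep real secret files out of Git before the first commit
echo ".env" >> .gitignore
echo ".env.local" >> .gitignore
echo ".env.production" >> .gitignore

# Commit a template with placeholder values so teammates
# know which variables to set (no real secrets in here)
cat > .env.example <<'EOF'
OPENAI_API_KEY=your-key-here
DATABASE_URL=postgresql://user:password@localhost:5432/dev
JWT_SECRET=generate-a-long-random-string
EOF
```

Commit .gitignore and .env.example; each developer then creates their own untracked .env with real values.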
Detect Hardcoded Secrets Before They Ship
Human code review catches many things, but it consistently misses hardcoded secrets. Your eye skims past a string that looks like a configuration value. A static scanner does not.
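To make that concrete, here is a deliberately simplified sketch of what pattern-plus-entropy scanning looks like: a few illustrative regexes and a Shannon-entropy check. These are not SecurePilot's actual rules, just the general shape of the technique.

```javascript
// Illustrative secret-scanning sketch: known key patterns plus a
// generic high-entropy check for random-looking string literals.
const SECRET_PATTERNS = [
  { name: 'OpenAI API key', regex: /sk-[A-Za-z0-9_-]{20,}/ },
  { name: 'AWS access key ID', regex: /AKIA[0-9A-Z]{16}/ },
  { name: 'Postgres URL with password', regex: /postgres(?:ql)?:\/\/\w+:[^@\s]+@/ },
];

// Shannon entropy in bits per character; random keys score near
// log2(alphabet size), while ordinary config values score much lower.
function entropyBitsPerChar(s) {
  const counts = new Map();
  for (const ch of s) counts.set(ch, (counts.get(ch) || 0) + 1);
  let bits = 0;
  for (const c of counts.values()) {
    const p = c / s.length;
    bits -= p * Math.log2(p);
  }
  return bits;
}

function scanForSecrets(source) {
  const findings = [];
  for (const { name, regex } of SECRET_PATTERNS) {
    if (regex.test(source)) findings.push(name);
  }
  // Flag long quoted literals that look random even when no rule matched.
  for (const [, literal] of source.matchAll(/['"]([A-Za-z0-9+/_-]{24,})['"]/g)) {
    if (entropyBitsPerChar(literal) > 4) findings.push('high-entropy string');
  }
  return findings;
}
```

A real scanner adds many more patterns, per-service key formats, and tuning to keep false positives down, but the two-layer idea, known formats backed by an entropy net, is the core of it.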
SecurePilot includes 40+ rules specifically for hardcoded credentials: API key patterns for OpenAI, Stripe, AWS, Twilio, SendGrid, and many more, plus generic high-entropy string detection. Paste your AI-generated code and get a full secrets report in under a second, before it ever touches your repository.