ChatGPT is now one of the most widely used coding assistants on the planet. Developers use it to scaffold APIs, write authentication flows, build database queries, and ship entire features in minutes. But speed comes at a cost: ChatGPT is optimized to produce code that runs, not code that is secure.
This guide walks through the most common security flaws found in ChatGPT-generated code, with real examples of what they look like and practical steps for catching them before they reach production.
🛡️ SecurePilot found these exact patterns, and 165+ more
String-concatenated SQL, unverified JWTs, reflected XSS, and exposed stack traces appear in ChatGPT-generated code that ships to production without review. SecurePilot's 165+ rules detect every vulnerability class in this article, in under a second, no account required.
Why ChatGPT Generates Vulnerable Code
ChatGPT learns from a massive corpus of public code. That corpus includes millions of tutorials, Stack Overflow answers, blog posts, and open source projects, many of which were written without security in mind. The model reproduces patterns it has seen frequently, and insecure patterns are extremely common in training data.
The model also has no awareness of your deployment context. It does not know whether your API is public, whether users are authenticated, or whether the database it is querying contains sensitive data. It generates plausible code for the prompt you gave it, nothing more.
1. String-Concatenated SQL Queries
Ask ChatGPT to build a search feature or login endpoint and it will frequently produce raw SQL with user input concatenated directly into the query string.
```javascript
// Vulnerable - ChatGPT output
app.get('/search', (req, res) => {
  const q = req.query.q;
  db.query("SELECT * FROM products WHERE name LIKE '%" + q + "%'", callback);
});
```

An attacker can terminate the string early and append arbitrary SQL. The fix is parameterized queries, which every modern database library supports.
```javascript
// Safe - parameterized
db.query("SELECT * FROM products WHERE name LIKE ?", ['%' + q + '%'], callback);
```

2. Reflected XSS in Server-Rendered Output
When generating Express or similar server-rendered routes, ChatGPT often injects request parameters directly into HTML responses without escaping.
```javascript
// Vulnerable - unescaped user input in HTML
app.get('/greet', (req, res) => {
  res.send('<h1>Hello, ' + req.query.name + '</h1>');
});
```

A crafted URL like `?name=<script>alert(1)</script>` executes arbitrary JavaScript in the victim's browser. Always escape output, or use a templating engine that escapes automatically.
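The fix is a one-line change in the route: escape user-controlled values before interpolating them into HTML. A minimal sketch, using a hypothetical `escapeHtml` helper (real projects usually get this for free from a templating engine like EJS, Pug, or Handlebars that escapes by default):

```javascript
// Safe - escape user-controlled values before building HTML.
// Minimal helper covering the five characters that matter in HTML context.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')   // must run first, or later entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// In the Express route, the only change is wrapping the input:
//   res.send('<h1>Hello, ' + escapeHtml(req.query.name) + '</h1>');
console.log(escapeHtml('<script>alert(1)</script>'));
// → &lt;script&gt;alert(1)&lt;/script&gt;
```

With the payload neutralized into entities, the browser renders it as inert text instead of executing it.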
3. JWT Verification Disabled or Skipped
ChatGPT-generated JWT authentication is frequently wrong in subtle ways. A common pattern is accepting tokens without verifying the signature, or using the none algorithm.
```javascript
// Vulnerable - no signature verification
const decoded = jwt.decode(token); // decode only, no verify
if (decoded.role === 'admin') { ... }
```

The difference between `jwt.decode()` and `jwt.verify()` is the difference between zero security and actual security. Always use `verify()` with a secret or public key.
4. Exposed Internal Stack Traces
Error handling in ChatGPT-generated code almost always leaks full stack traces to the client in production.
```javascript
app.use((err, req, res, next) => {
  res.status(500).json({ error: err.message, stack: err.stack });
});
```

Stack traces reveal your file structure, library versions, and internal logic to attackers. Log the full error server-side and send the client only a generic message.
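The safe version splits the two audiences: full detail to your logs, nothing internal to the client. A minimal sketch, with the handler written as a named function (a hypothetical name, chosen so it can be tested outside a running server):

```javascript
// Safe - log internally, respond generically.
function errorHandler(err, req, res, next) {
  console.error(err.stack || err);  // full detail stays in server-side logs
  res.status(500).json({ error: 'Internal server error' });  // no paths, no versions
}

// Registered in Express as the last middleware:
//   app.use(errorHandler);
```

If you need to correlate a client report with a log entry, include a random request ID in both rather than any internal detail.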
5. Missing Rate Limiting on Auth Endpoints
ChatGPT generates login and password reset endpoints without any brute-force protection. An attacker can make unlimited attempts against your auth endpoint to guess credentials.
Add rate limiting to every authentication endpoint using a library like express-rate-limit. Even a simple limit of 10 requests per minute per IP stops the vast majority of brute-force attacks.
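With `express-rate-limit` this is a single middleware on the route. For intuition, the underlying idea is just a counter per IP per time window — a minimal fixed-window sketch (illustrative only; use the library, and a shared store like Redis if you run multiple instances):

```javascript
// Minimal fixed-window rate limiter: at most `max` requests per
// `windowMs` milliseconds per client IP. In-memory, single-process only.
function rateLimit(max, windowMs) {
  const hits = new Map(); // ip -> { count, windowStart }
  return (req, res, next) => {
    const now = Date.now();
    const entry = hits.get(req.ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(req.ip, { count: 1, windowStart: now }); // start a fresh window
      return next();
    }
    if (++entry.count > max) {
      return res.status(429).json({ error: 'Too many requests' });
    }
    next();
  };
}

// Applied to an auth route (loginHandler is a placeholder):
//   app.post('/login', rateLimit(10, 60 * 1000), loginHandler);
```

Fixed windows are coarse (an attacker gets a burst at each window boundary), which is why production libraries offer sliding windows and shared stores, but even this simple gate turns an unlimited credential-guessing attack into a few hundred tries per hour.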
How to Audit ChatGPT-Generated Code
Reading AI-generated code carefully is the right instinct, but human review misses things, especially under time pressure. A static analysis scanner gives you a second opinion in under a second.
SecurePilot runs 165+ deterministic rules against your code, no AI involved, no false confidence from hallucinations. Paste your ChatGPT output and get a full vulnerability report before it ships. Free, no account required.