OpenAI API Security Best Practices for Internal Tools

Whether you are building a custom OpenAI-powered tool or evaluating a vendor, these are the eight security controls that separate a production-ready internal AI system from a prototype with an API key duct-taped to it.

Keys belong on the serverNever in frontend code, env vars checked into git, or client-side bundles
Secret managers are non-negotiableHashiCorp Vault, AWS Secrets Manager, or Azure Key Vault - not .env files
Scope keys to the minimum requiredPer-department isolation limits blast radius of any single key compromise
Audit every queryCompliance and incident response require a user-linked log of every model call
The Eight Controls

OpenAI API security best practices for internal tools

1. Backend-only key usage

OpenAI API calls must originate from your server, never from client-side code. Any API key embedded in JavaScript, a mobile app, or a browser extension is effectively public. Run a quick search of your frontend bundles for "sk-" to check if you have leaked keys already.

2. Secret manager storage

Store API keys in HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager. Never in .env files that may be checked into version control, never in application configuration files, never hardcoded. The secret manager provides encryption at rest, access logging, and centralized rotation.

3. Scoped API keys using OpenAI projects

OpenAI's project API keys let you create separate keys for each use case with individual spending limits and usage tracking. Create one project per deployment environment (dev, staging, prod) and ideally one per department. A key scoped to the HR bot cannot be used to query the Finance bot's model deployment.

4. Per-department key separation

Give each department (HR, IT, Finance, Onboarding) its own API key in a separate OpenAI project. If the HR key is compromised, an attacker can only make API calls within the HR key's project limits. They cannot access Finance data or exhaust the full organization's API budget.

5. Audit logging per user query

Log every API call with: timestamp, user ID, department, query (or a hash if PII concerns apply), token count, and model used. OpenAI's usage dashboard shows aggregate token consumption but does not link calls to individual employees. Your application must create this link. Retain logs for at least 90 days.

6. Rate limiting per user

Enforce a daily token budget per user and a request rate per minute. Without this, a single compromised account or a curious employee can drain hundreds of dollars of API budget in an afternoon. Set limits in OpenAI's project settings AND in your application layer for defense in depth.

7. Key rotation policy

Rotate API keys at least quarterly. Rotate immediately when a team member with key access leaves the company or when a system that stores the key is decommissioned. OpenAI supports multiple active keys per project - create the new key, update your secret manager, verify the new key works, then delete the old one. Zero downtime rotation is straightforward.

8. Input validation against prompt injection

Prompt injection is an attack where a user crafts a message to override system instructions. Basic mitigations: treat user input as untrusted data in your prompt assembly, use delimiters to separate user content from system instructions, and design system prompts that explicitly scope what the model can reveal. Full elimination is hard; layered defense is achievable.

When Evaluating Vendors

Questions to ask an AI tool vendor about security

If you are buying rather than building, you need to verify that the vendor applies these controls on your behalf. A vendor who deflects or gives vague answers to these questions is a red flag:

  • Where exactly is my BYOK API key stored? Is it in an encrypted secret manager?
  • Can your engineers access my API key through normal deployment tooling?
  • Is there an audit log I can export? What fields does it include?
  • What per-user rate limits exist and can I configure them?
  • How are documents separated between departments? Is it row-level filtering or separate stores?
  • Do you have a responsible disclosure program and recent penetration test results?
ChatGridAI answers to these

BYOK keys are stored in encrypted vault storage. Department documents are in fully separate vector stores. Audit logs are available for export. Rate limits are configurable. Security reviews are part of ChatGridAI's development cycle.

FAQ

OpenAI API security - common questions

Environment variables are acceptable as a fallback for local development but are not appropriate for production. Environment variables are often visible in deployment dashboards, log outputs, and process listings. They are also frequently included in infrastructure-as-code repositories. Production deployments should use a proper secret manager. The key should be loaded from the secret manager at runtime, not baked into the environment at deploy time.
OpenAI project API keys are scoped to a specific project within your organization's OpenAI account. Each project has its own usage tracking, rate limits, and billing visibility. Using project keys means a compromised key has limited blast radius - it can only be used within that project's scope. Organization-level keys have no such scoping and should be avoided for application deployments.
Set a monthly usage alert in your OpenAI account. Unusual spikes in API usage are the clearest signal of a compromised or abused key. GitHub's secret scanning will alert you if a key is pushed to a public repository. Tools like TruffleHog can scan your repositories for leaked secrets retroactively. OpenAI also automatically revokes keys it detects in public GitHub repositories.

All eight security controls, built into ChatGridAI.

BYOK with vault storage, per-department key isolation, audit logging, and rate limiting out of the box.

$5/seat/month - 14-day free trial - no credit card required