OpenAI API Security Best Practices for Internal Tools
Whether you are building a custom OpenAI-powered tool or evaluating a vendor, these are the eight security controls that separate a production-ready internal AI system from a prototype with an API key duct-taped to it.
OpenAI API security best practices for internal tools
OpenAI API calls must originate from your server, never from client-side code. Any API key embedded in JavaScript, a mobile app, or a browser extension is effectively public. Run a quick search of your frontend bundles for "sk-" to check if you have leaked keys already.
Store API keys in HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager. Never in .env files that may be checked into version control, never in application configuration files, never hardcoded. The secret manager provides encryption at rest, access logging, and centralized rotation.
OpenAI's project API keys let you create separate keys for each use case with individual spending limits and usage tracking. Create one project per deployment environment (dev, staging, prod) and ideally one per department. A key scoped to the HR bot cannot be used to query the Finance bot's model deployment.
Give each department (HR, IT, Finance, Onboarding) its own API key in a separate OpenAI project. If the HR key is compromised, an attacker can only make API calls within the HR key's project limits. They cannot access Finance data or exhaust the full organization's API budget.
Log every API call with: timestamp, user ID, department, query (or a hash if PII concerns apply), token count, and model used. OpenAI's usage dashboard shows aggregate token consumption but does not link calls to individual employees. Your application must create this link. Retain logs for at least 90 days.
Enforce a daily token budget per user and a request rate per minute. Without this, a single compromised account or a curious employee can drain hundreds of dollars of API budget in an afternoon. Set limits in OpenAI's project settings AND in your application layer for defense in depth.
Rotate API keys at least quarterly. Rotate immediately when a team member with key access leaves the company or when a system that stores the key is decommissioned. OpenAI supports multiple active keys per project - create the new key, update your secret manager, verify the new key works, then delete the old one. Zero downtime rotation is straightforward.
Prompt injection is an attack where a user crafts a message to override system instructions. Basic mitigations: treat user input as untrusted data in your prompt assembly, use delimiters to separate user content from system instructions, and design system prompts that explicitly scope what the model can reveal. Full elimination is hard; layered defense is achievable.
Questions to ask an AI tool vendor about security
If you are buying rather than building, you need to verify that the vendor applies these controls on your behalf. A vendor who deflects or gives vague answers to these questions is a red flag:
- Where exactly is my BYOK API key stored? Is it in an encrypted secret manager?
- Can your engineers access my API key through normal deployment tooling?
- Is there an audit log I can export? What fields does it include?
- What per-user rate limits exist and can I configure them?
- How are documents separated between departments? Is it row-level filtering or separate stores?
- Do you have a responsible disclosure program and recent penetration test results?
BYOK keys are stored in encrypted vault storage. Department documents are in fully separate vector stores. Audit logs are available for export. Rate limits are configurable. Security reviews are part of ChatGridAI's development cycle.
OpenAI API security - common questions
All eight security controls, built into ChatGridAI.
BYOK with vault storage, per-department key isolation, audit logging, and rate limiting out of the box.
$5/seat/month - 14-day free trial - no credit card required