Why Source Permissions Matter in Enterprise AI Search
The most dangerous failure mode in enterprise AI is not a wrong answer. It is a correct answer sourced from a document the user should never have seen. This is the "answering correctly from the wrong document" problem - and it is entirely preventable.
Answering correctly from the wrong document
Imagine an enterprise with a single shared AI knowledge base. HR has uploaded its handbook, which includes severance policy, executive compensation ranges, and individual employee performance improvement plans (PIPs). Finance has uploaded budget forecasts and pending acquisition targets. IT has uploaded network diagrams and system credential procedures.
Now a software engineer asks: "What is the severance package for a senior director?"
If the AI retrieves and cites HR's confidential severance document to answer this question, it has not made an error. It has answered correctly. But it answered using a document that was never intended to be accessible to someone outside HR. The problem is not the model - it is the retrieval layer having no awareness of document-level permissions.
When an AI gives a factually wrong answer, someone notices and reports it. When an AI gives a factually correct answer sourced from a document the user should not have accessed, no alarm sounds. The user gets the information, closes the chat, and moves on. The breach may never be discovered.
What can go wrong without source permissions
If compensation bands are in the shared knowledge base, any employee can retrieve this information by asking the right question. The AI answers helpfully and accurately - from a document that should be restricted to HR leadership and executives.
If a deal memo or strategic plan is in the knowledge base, a curious employee can extract deal targets that are material non-public information. The AI has no awareness of insider trading implications - only the permission layer can prevent this retrieval.
IT runbooks sometimes include credential patterns, key vault paths, or configuration details. Without source restrictions, a non-IT employee asking operational questions may receive infrastructure details that should be restricted to the IT team.
PIPs for specific employees are often stored in HR document libraries. Without document-level restrictions, a manager asking general questions about the performance process could inadvertently retrieve the PIP of an employee outside their reporting line.
Two approaches to source-level permissions
The first is architectural isolation: each department has an entirely separate vector database. HR documents are ingested only into the HR store, Finance documents only into the Finance store. A query from the HR bot physically cannot touch the Finance vector store. This is the approach ChatGridAI uses - isolation is architectural, not just access-controlled.
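A minimal sketch of the isolated-store design. The names here (`DeptStore`, `answer`) are illustrative, not ChatGridAI's actual API, and keyword matching stands in for embedding similarity so the example stays self-contained:

```python
class DeptStore:
    """Tiny stand-in for a per-department vector store."""
    def __init__(self, department):
        self.department = department
        self.docs = []

    def ingest(self, doc_id, text):
        self.docs.append((doc_id, text))

    def search(self, query):
        # Real systems rank by embedding similarity; substring match
        # keeps the sketch runnable without any dependencies.
        return [d for d in self.docs if query.lower() in d[1].lower()]

# Each bot is constructed with exactly one store; there is simply no
# code path from a Finance-wired bot to the HR store.
hr_store = DeptStore("HR")
finance_store = DeptStore("Finance")

hr_store.ingest("hr-001", "Severance policy for senior directors ...")
finance_store.ingest("fin-001", "Q3 budget forecast ...")

def answer(bot_store, query):
    return bot_store.search(query)

# An engineer's bot wired to the Finance store cannot retrieve the
# severance document, regardless of how the question is phrased.
print(answer(finance_store, "severance"))  # -> []
```

The safety property falls out of construction rather than configuration: the restricted document is unreachable, not merely filtered out.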
The second is a shared store with metadata filtering: all documents live in one vector store, but each document carries permission metadata (department, role, classification level). At retrieval time, the system filters results down to the chunks the querying user is allowed to see. This is more flexible but higher risk - a single filter misconfiguration can expose every document.
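The filtered-store design, and its failure mode, can be sketched in a few lines. The store layout and field names below are assumptions for illustration; the point is that the permission check is just a predicate evaluated on every query:

```python
# Shared store: one collection, permission metadata on each document.
SHARED_STORE = [
    {"id": "hr-001",  "dept": "HR",      "text": "Severance policy ..."},
    {"id": "fin-001", "dept": "Finance", "text": "Acquisition targets ..."},
    {"id": "it-001",  "dept": "IT",      "text": "Key vault paths ..."},
]

def search(query, user_dept):
    # Correct filter: department check AND relevance check.
    return [d for d in SHARED_STORE
            if d["dept"] == user_dept and query.lower() in d["text"].lower()]

def buggy_search(query, user_dept):
    # One misplaced operator - `or` instead of `and` - and the
    # department check becomes decorative: any matching document
    # from any department comes back.
    return [d for d in SHARED_STORE
            if d["dept"] == user_dept or query.lower() in d["text"].lower()]

print([d["id"] for d in search("severance", "Finance")])        # -> []
print([d["id"] for d in buggy_search("severance", "Finance")])  # leaks hr-001
```

Nothing crashes and no error is logged when the buggy filter runs - the query simply succeeds with a document it should never have touched, which is exactly why this class of bug tends to go unnoticed.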
Both approaches can work, but architectural isolation is more robust. A filter applied at query time can be misconfigured. A separate vector store that is physically unreachable from another bot cannot be misconfigured into leaking.
When evaluating enterprise AI vendors, ask specifically: "How is document access controlled between departments?" If the answer involves only row-level metadata filtering in a shared store, ask what happens if a filter condition contains a bug. If the answer involves separate stores, ask how documents are assigned to stores and whether that assignment is enforced at ingestion time.
Source permissions - common questions
How is document access controlled between departments?
Separate vector stores per department. Architectural isolation, not filtering.
Can one department's documents leak into another department's answers?
No. ChatGridAI gives each department its own bot and its own knowledge base. HR documents never enter the Finance query path.