"But where does my disclosure actually go?"
I have been asked this question in every single demo I have done. Sometimes it comes up in the first five minutes; sometimes it comes right at the end, almost as an afterthought; but it always comes. And I am glad it does.
The attorneys who ask this are the ones thinking clearly. Because the moment you upload a client's invention disclosure to any cloud-based AI tool, you have made a decision that has professional responsibility implications, whether you realised it or not.
So let me answer it properly. Not with a policy link. With an actual explanation.
The problem with "we don't train on your data"
I have read the privacy pages of a lot of AI tools. Most of them say something like: "We don't use your data to train our models by default." When I first read sentences like this, I found them reassuring. Then I started paying attention to the words.
"By default" is not the same as "never". Default settings can be changed, by you, by them, by an enterprise tier you didn't know existed. And "we don't" in a privacy policy is a statement of current intent; it is not a contractual commitment. The day they decide to change it, they update the policy, you get an email you will never read, and you have agreed.
I have seen this happen. Not with invention disclosures specifically; but the pattern is the same. A free or low-cost service changes its data practices; the users find out months later. By then, the data has already been used.
"Think about the difference between a colleague who promises to keep something confidential and a colleague who has signed an NDA. Same words, completely different enforceability. That is the difference between a privacy policy and a Data Processing Agreement."
What creates a real obligation is a contract. Specifically, a Data Processing Agreement (DPA): a bilateral legal document in which the AI provider explicitly commits to specific data handling obligations, with indemnification provisions if those obligations are breached. Not a toggle. Not a policy page. A contract.
eety.ai operates under enterprise-tier DPAs with its AI providers, including Google. These are not the default consumer terms. These are negotiated agreements that explicitly and contractually prohibit using API-submitted content for model training.
But we went a step further than that
Even with a watertight DPA in place, there is a subtler risk that most people do not think about: the provider still receives the raw disclosure. So we asked a different question: what if you never send the raw disclosure to the AI at all?
When eety.ai drafts a section of your patent, the AI does not receive your client's disclosure document. It receives a structured, scoped invention summary, the output of our Brain model. Your client's name, the context of their specific problem, the identifying details of their technology… none of that leaves our secure environment. The AI sees only what it needs to write the section in front of it.
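To make that concrete, here is a rough sketch of the boundary. Every name in it, the types, the fields, the function, is invented for this post; it illustrates the pattern, not our actual code:

```typescript
// Hypothetical illustration only: these names are invented for this
// post and are not eety.ai's actual implementation.

// What lives in our secure environment and never leaves it.
interface RawDisclosure {
  clientName: string;      // never sent to the AI provider
  inventorNames: string[]; // never sent to the AI provider
  fullText: string;        // stays in our secure storage
}

// What the Brain model produces from the raw disclosure: an
// abstracted summary, scoped to the section being drafted.
interface ScopedSummary {
  technicalField: string;   // e.g. "wireless power transfer"
  problemStatement: string; // abstracted, no identifying context
  solutionOutline: string;  // only what the current section needs
}

// Only the scoped summary ever crosses the boundary to the AI provider.
function buildDraftRequest(summary: ScopedSummary, section: string) {
  return {
    prompt:
      `Draft the ${section} section for this invention:\n` +
      JSON.stringify(summary),
  };
}
```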
Layer 1 (Contractual): Enterprise DPA Coverage
Explicit contractual prohibition on training with API-submitted content. Bilateral obligation with indemnification. Not a settings page, but a legal commitment.
Layer 2 (Architectural): Raw Disclosures Never Leave Our Environment
The AI receives a scoped invention summary, not the original file. Your client's name, full context, and identifying details stay in our secure storage. The AI only gets what it needs to complete the task in front of it.
Layer 3 (Stateless): Every Request Starts Fresh
Every AI generation request is completely stateless. Once the response comes back to eety.ai, the context window is discarded. There is no persistent memory of your content sitting on any AI provider's infrastructure.
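If you think in code, a stateless call looks something like the sketch below. It reuses the invented types from the earlier sketch, and the endpoint URL is a placeholder, not any real provider's API:

```typescript
// Generic sketch of a stateless generation call: no conversation ID,
// no stored history. Everything the model needs travels in this one
// request, and the provider keeps nothing once the response returns.
async function generateSection(
  summary: ScopedSummary,
  section: string,
): Promise<string> {
  const response = await fetch("https://ai-provider.example/v1/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildDraftRequest(summary, section)),
  });
  const { text } = (await response.json()) as { text: string };
  return text; // once this returns, the context window is discarded
}
```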
"It is like calling a specialist consultant and briefing them verbally on only the specific question you need answered, instead of handing them the entire client file. They can do the work; they just don't need to know everything to do it."
And then the encryption
The AI privacy question gets most of the attention; but the data security question matters just as much. Here is what happens to your disclosures when they are at rest and in transit:
Your disclosures land in a firm-specific private storage bucket. There are no public URLs. Cross-firm data access is blocked at the database query level, not just at the interface. Even if a bug existed somewhere in the UI, another firm's session could not retrieve your data; the query itself would return nothing.
Each firm's data exists in its own sealed environment. Other firms using eety.ai cannot see in; your firm cannot see their data either. The isolation is architectural, not just access-controlled.
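Concretely, query-level isolation means something like this. The types and the table here are invented for illustration; the point is where the firm ID comes from:

```typescript
// Illustrative types; the real schema and session handling differ.
interface AuthenticatedSession { firmId: string }
interface Database {
  query(sql: string, params: unknown[]): Promise<unknown[]>;
}

// The firm ID is taken from the authenticated session on the server,
// never from anything the browser sends. Even a buggy UI cannot
// produce a query that reaches another firm's rows, because the
// filter is part of the query itself.
function disclosuresForFirm(db: Database, session: AuthenticatedSession) {
  return db.query(
    "SELECT id, title, created_at FROM disclosures WHERE firm_id = $1",
    [session.firmId],
  );
}
```

Database-level row security can enforce the same rule one layer deeper, so the filter holds even if application code somewhere forgets to add it.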
The question you should really be asking every AI tool
There is a question I encourage every attorney to ask before using any AI tool for patent work. Not "is my data safe?", because that question is too easy to answer with a reassuring sentence. The real question is:
"If something goes wrong with my client's disclosure: can you prove what happened, and what is your liability?"
A privacy policy cannot answer that question. A DPA can. An audit trail can. A firm-isolated architecture can.
I built eety.ai because I wanted to use it myself, and I could not use a tool that I could not answer that question about. As it turns out, most attorneys feel the same way :)
Ready to Draft With Confidence?
eety.ai is built for the security standard your clients expect from you.
Start Free — No Credit Card →