๐Ÿ” Security ยท 9 min read

Why Security is the DNA of the Entire eety.ai Application

Security in eety.ai wasn't something we bolted on after the product was built. It was the first architectural decision we made, before a single line of product code existed. Here's what that actually means.


Tabrez Alam

April 7, 2026 · Founder, eety.ai

eety.ai: security woven into every layer of the application architecture, from the database and API to AI inference and the frontend

There is a version of security that is a checkbox. And then there is ours.

I have seen how most software products approach security. They build the product. They launch the product. And then at some point, usually when a customer asks a difficult question or when the compliance team gets involved, they go back and 'add security'. A checkbox on the marketing page. An SSL badge in the footer. A privacy policy written by a lawyer who had not read the codebase.

That is not an unfair thing to do. That is, honestly, how most SaaS tools are built. Ship fast; harden later.

I could not do that with eety.ai. Not because I am particularly principled about software methodology, but because the moment I thought clearly about what this product handles, I realised there was simply no responsible way to do it in that order.

eety.ai processes invention disclosures. These are, by definition, the most commercially sensitive documents your clients will ever produce. Before a patent is filed, before any public record exists, that disclosure is the only thing standing between your client's idea and the rest of the world. Breaching it doesn't just embarrass a firm. It can destroy a patent, obliterate a business, and land an attorney in front of a bar committee.

So when I sat down to design eety.ai, the question was not "how do we add security?" It was: "what kind of system is even acceptable to build here?"

"Think of it like designing a building. You can add a lock to any door. But if you want the building itself to be a vault (the walls, the floor, the foundation), you have to decide that before you pour the concrete. That is the decision we made."

The architectural choices that most users never see

Security decisions that are made during architecture, not after, are invisible when they work. You never see them. You just notice, over time, that nothing has gone wrong.

Here are the ones we made, and why each one matters.

🗄️

Layer 1: Firm-Isolated Data Architecture, Not Just Access Control

Every firm that uses eety.ai operates inside a completely isolated data environment. This is not a row-level filter applied to a shared table. It is a structural separation enforced at the database query layer. A bug in our UI, even a catastrophic one, cannot expose Firm A's data to Firm B's session, because the query itself would return nothing. The isolation is in the architecture, not in the interface on top of it.
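The idea of isolation enforced at the query layer can be sketched in a few lines. The TypeScript below is an illustration with hypothetical names (`FirmContext`, `getDisclosure`, an in-memory array in place of the real database), not eety.ai's actual code: the point is that every read is built from a firm context, so the firm filter cannot be forgotten by any caller.

```typescript
// Sketch of query-layer tenant isolation (illustrative names, not real code).
// Every query is constructed from a FirmContext, so there is no code path
// that can read rows outside the calling firm's partition.

type FirmContext = { firmId: string };

interface Disclosure {
  firmId: string;
  docId: string;
  title: string;
}

// In a real system this would be a database whose query builder injects the
// firm filter unconditionally; an in-memory array stands in here.
const table: Disclosure[] = [
  { firmId: "firm-a", docId: "d1", title: "Widget patent" },
  { firmId: "firm-b", docId: "d2", title: "Gadget patent" },
];

// The only way to read a disclosure: the firm filter is applied on every
// lookup, so a buggy caller passing another firm's docId gets nothing back,
// never another firm's row.
function getDisclosure(ctx: FirmContext, docId: string): Disclosure | undefined {
  return table.find((row) => row.firmId === ctx.firmId && row.docId === docId);
}
```

Asking for `d2` from Firm A's context returns `undefined` even though the row exists, which is exactly the "the query itself would return nothing" property described above.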

🔑

Layer 2: Two Distinct Phases, Analysis and Drafting, With Different AI Exposure

This is the distinction that matters most. When your disclosure is first uploaded, the Brain engine (our 15-point extraction system) does read the extracted text of your document and sends it to Gemini to build a structured understanding of the invention. That is unavoidable: the AI has to read the document to understand it. What happens after that is where our architecture makes a real difference. When eety.ai drafts every single section of your patent (claims, description, abstract), the AI does not see your original disclosure at all. It operates entirely from the structured understanding the Brain produced. The raw document stays in our secure, encrypted storage. It is never re-sent for any drafting task.
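One way to picture the two-phase separation is at the type level. In this hypothetical TypeScript sketch (the names and the trivial extractor are my illustration, not the real Brain engine), drafting functions accept only the structured model, so there is no parameter through which raw disclosure text could reach a drafting call:

```typescript
// Sketch of the two-phase separation enforced by types (hypothetical names).
// Phase 1 is the only place raw disclosure text appears; Phase 2 functions
// are typed to accept only the structured model.

interface InventionModel {
  title: string;
  problem: string;
  solution: string;
  noveltyPoints: string[]; // abridged stand-in for the 15-point understanding
}

// Phase 1: the single function that ever sees raw disclosure text. In
// production this step would call the AI provider; a trivial extractor
// stands in so the sketch runs on its own.
function extractModel(rawDisclosure: string): InventionModel {
  const firstLine = rawDisclosure.split("\n")[0] ?? "";
  return {
    title: firstLine,
    problem: "(extracted problem statement)",
    solution: "(extracted solution summary)",
    noveltyPoints: [],
  };
}

// Phase 2: drafting works from the model alone. There is no argument
// through which the original document could arrive.
function draftAbstract(model: InventionModel): string {
  return `An invention titled "${model.title}" addressing ${model.problem}.`;
}
```

The separation is structural rather than procedural: you cannot accidentally pipe the raw file into `draftAbstract`, because its signature has nowhere to put it.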

⚡

Layer 3: Gemini Has No Memory. We Control What Context It Sees.

Gemini's API is stateless by design: it has no persistent memory of your conversations between sessions. When eety.ai sends a request to Gemini, it includes the relevant chat history from our own database, assembled, scoped, and sent by us on every call. This is an important distinction. Your conversation history is not stored on Google's infrastructure. It lives in our own firm-isolated, encrypted database on our servers, and we decide exactly what gets included in each request. The moment a session ends, Gemini has no record of it. We do, and we keep it under the same security model as everything else.
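Per-call context assembly can be sketched like this. The shapes and names below are hypothetical, but the pattern is the one described: conversation history lives in our own firm-scoped store, and each stateless API call carries only the slice we choose to include.

```typescript
// Sketch of stateless, firm-scoped context assembly (illustrative names).
// The provider keeps nothing between calls; every request payload is built
// from our own store, filtered to one firm and one session.

interface ChatTurn {
  firmId: string;
  sessionId: string;
  role: "user" | "model";
  text: string;
}

const history: ChatTurn[] = []; // stands in for the firm-isolated database

function recordTurn(turn: ChatTurn): void {
  history.push(turn);
}

// Build the payload for one stateless call: only this firm's and this
// session's turns, capped to a recent window. Nothing else ever leaves.
function buildRequest(firmId: string, sessionId: string, prompt: string, maxTurns = 10) {
  const scoped = history
    .filter((t) => t.firmId === firmId && t.sessionId === sessionId)
    .slice(-maxTurns)
    .map((t) => ({ role: t.role, text: t.text }));
  return { contents: [...scoped, { role: "user", text: prompt }] };
}
```

Because the payload is assembled fresh on every call, "what the AI sees" is an explicit, auditable function of our own database rather than hidden provider-side state.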

📋

Layer 4: Enterprise-Grade DPA, Not a Toggle in Settings

eety.ai operates under enterprise-tier Data Processing Agreements with its AI providers, including Google. These are not the standard consumer or free-tier terms. They are negotiated bilateral contracts that explicitly and legally prohibit using API-submitted content for model training. Not a settings page. Not a privacy policy sentence that starts with "by default". A contract, with indemnification provisions if it is ever breached.

🔒

Layer 5: Encryption at Every Transit Point

Your disclosures are encrypted at rest with AES-256 and in transit with TLS 1.3, the same standards used by financial institutions handling payment data. There are no public URLs to your stored documents. No unauthenticated endpoints exposed to the public internet. Every API route requires a valid, scoped session token that expires on logout.
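For the at-rest side, authenticated encryption with AES-256-GCM looks roughly like this using Node's built-in `crypto` module. This is a minimal sketch; real key management, rotation, and storage are deliberately out of scope here.

```typescript
// Minimal AES-256-GCM at-rest sketch using Node's built-in crypto module.
// GCM both encrypts and authenticates, so tampered ciphertext fails to decrypt.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

function encrypt(plaintext: string, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // a fresh, unique nonce per document is essential for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(box: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // verification fails if ciphertext or tag was altered
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}
```

The 32-byte key gives AES-256; the 12-byte IV is the GCM-recommended nonce size, and it must never be reused with the same key.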

🧹

Layer 6: Sanitised Error Surfaces. Keys Never Touch Logs.

This one is less visible, but it matters a great deal in practice. Every error path in eety.ai, from streaming AI responses to image generation to background job failures, is sanitised before anything is logged or returned to the client. API keys, private URLs, internal endpoint paths: none of them appear in our logs or in any error messages surfaced to the browser. This is enforced at the code level, not by hoping that exceptions are worded nicely.
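A sanitisation pass of this kind is easy to sketch. The redaction patterns below are illustrative stand-ins, not our actual rule set: anything shaped like a credential or an internal URL is replaced before the message goes anywhere.

```typescript
// Sketch of an error-sanitisation pass (illustrative patterns, not the real
// rule set). Secrets are redacted before a message is logged or returned.

const SECRET_PATTERNS: RegExp[] = [
  /AIza[0-9A-Za-z\-_]{35}/g, // Google-style API key shape
  /sk-[0-9A-Za-z]{20,}/g,    // generic secret-key shape
  /https?:\/\/[^\s"']+/g,    // internal endpoints and signed URLs
];

function sanitizeError(err: unknown): string {
  let msg = err instanceof Error ? err.message : String(err);
  for (const pattern of SECRET_PATTERNS) {
    msg = msg.replace(pattern, "[redacted]");
  }
  return msg;
}
```

Routing every log statement and client-facing error through one function like this is what "enforced at the code level" means: there is a single choke point to review, rather than a convention each developer must remember.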

AES-256 at rest · TLS 1.3 in transit · 1× (the AI reads your disclosure exactly once)

Why "we don't train on your data" is not enough

I keep coming back to this, because I think it is the most important thing for attorneys to understand when evaluating any AI tool for legal work.

Most AI privacy pages say something technically accurate but structurally weak: "We don't use your data to train our models by default." When I read that sentence carefully, here is what I notice: "by default" is not "never". Default settings change. Enterprise tiers may differ from what you signed up for. And "we don't" is a statement of current intent, not a legal commitment. The day they change their mind, they update the policy, send an email that goes unread, and consider the matter settled.

What creates a real obligation is a contract. A DPA. A bilateral legal instrument with defined obligations, audit rights, and, crucially, liability provisions if those obligations are breached.

"The difference between a privacy policy that says 'we won't do that' and a DPA that says 'we can't do that' is the same as the difference between a colleague who promises confidentiality and a colleague who has signed an NDA. Same words. Completely different enforceability."

eety.ai is covered by the latter. And because I operate as a developer on enterprise API tiers, not the consumer-grade tiers that most free tools use, the contractual baseline is fundamentally different.

The part that was actually hard to build

The encryption, the DPAs, the isolated storage buckets: those are table stakes. They are important, but they are also straightforward: you make the right vendor choices, you configure things correctly, you sign the right agreements.

The genuinely difficult part was the two-phase architecture. During the understanding phase, yes: the extracted text of your disclosure goes to Gemini. That is the price of actually comprehending the invention. What I was not willing to accept was re-sending that raw disclosure for every subsequent operation: every drafted claim, every section of the description, every revision.

Building the Brain (the 15-point extraction engine that produces a structured, scoped invention model) meant that every drafting call that follows works from that model, not from your original document. Your file is read exactly once for understanding. After that, it stays in our encrypted storage and does not travel anywhere. This required significantly more engineering than the alternative. The alternative was: pipe the full disclosure into every prompt, forever. A lot of AI patent tools do exactly that.

That was not a trade-off I was willing to make.

Security as product design, not product policy

Here is the thing I want attorneys to take away from this. When security is designed in from the start, not added as a feature, it stops being a claim you have to evaluate on trust. It becomes something you can reason about structurally.

You can ask: "Even if something went wrong in your system, what is the worst that could happen?" And if the architecture is right, the answer is constrained in a way that a policy statement could never constrain it. A bug in our interface cannot hand one firm's trade secrets to another firm's session. The database query itself would return nothing. That is not a promise we are making. It is a consequence of how the system was built.

I also want to be transparent about what we are not claiming. Your disclosure is read by Gemini during the Brain extraction phase. That is how the understanding is built. What we control is: what happens to it after that, who it sits next to, whether it gets re-sent unnecessarily, and, crucially, whether the AI provider is bound by a contract that prohibits using it for training. The answer to the last one is yes, under our enterprise DPA. Not a settings toggle. A legal commitment.

I built eety.ai because I wanted to use it myself. And I could not use a tool for this kind of work unless I could clearly reason about what happened to my client's data, at every step, in every failure mode, under every edge case I could think of. That meant being honest about what the system actually does, not just what sounds reassuring.

As it turns out, most attorneys feel the same way. The question comes up in every single demo :)

Built for the Standard Your Clients Expect

eety.ai is the only patent drafting tool where security was the first line of the architecture, not the last.

Start Free · No Credit Card →

Tabrez Alam

Founder of eety.ai. The first thing I asked myself when designing this product wasn't "what features do we build?" but "what kind of system is it even responsible to build?" Security is the answer I kept arriving at. This article is my attempt to show the working.
