
"Can you please remove
the 90% restriction?"

A user asked me this. I asked three questions back. What happened next became the most honest conversation I have had about what patent drafting actually requires.


Tabrez Alam

April 15, 2026 · Founder, eety.ai

[Image: A user staring at their screen, puzzled by eety.ai asking questions before it will draft]

It came in on a Tuesday. A message from a user who had been using eety.ai for a couple of weeks. He was a smart person: a patent attorney at a firm that handles a fair bit of technology work. Not a beginner. Not someone easily confused by software.

The message said, roughly, this: eety.ai kept stopping him from generating the first draft. The system had been asking clarifying questions about the invention, flagging that its confidence in its understanding was still below the threshold. He had answered some questions. The system had more. He was getting impatient. He wanted to get the draft out.

So he asked me: "Can you please remove this restriction? I just want to start drafting."

I read it twice. Then I typed three questions. Not because I was being difficult. Because those three questions were the entire story.

The conversation

Three questions. One long pause.

Q1. I asked him: "Were the questions the system was asking actually relevant to the invention?"

His answer: Yes.

Q2. I asked him: "Were those questions critical to the invention; would knowing the answers change how the claims should be written?"

His answer: Yes, he admitted.

Q3. Then I asked the slow one: "So if the questions were relevant… and the answers would change the claims… why were you planning to draft the application before knowing them?"

There was a pause. A fairly long one.

I want to be clear: this user was not doing anything wrong. He was doing exactly what two years of using AI tools had trained him to do. And that training, it turns out, is the entire problem.

The Analogy

Allow me to introduce
THE CONFIDENT TAILOR

THE CONFIDENT TAILOR has never once asked a customer their measurements before cutting fabric. Why would he? He is excellent. He has made ten thousand suits. He knows what a suit looks like. You give him a description of what you want; he picks up his scissors and starts cutting. Immediately. Beautifully. Without a single hesitation.

The suit looks spectacular when it is done. The stitching is immaculate. The lapels are perfect. The lining is silk.

It just does not fit you.

Not because he made a bad suit. Because he made a suit. The suit you needed was a 38, short, with an extra inch on the left shoulder because of that old cricket injury from 1998. He did not ask. You did not think to tell him. And now you have a beautiful piece of fabric that belongs to someone slightly different. :)

ChatGPT is THE CONFIDENT TAILOR. You give it your invention disclosure; it picks up its scissors. No measurements. No questions. No hesitation. Just beautiful, confident, immediately available output.

[Image: A confident tailor cutting fabric while the customer stands in the background, not yet measured]

He has made ten thousand suits.
He has never once taken your measurements.

The real problem

ChatGPT never blocked him. But it also never asked.

This is the thing my user had not yet fully considered. And I say "not yet" because once I asked the third question, he genuinely engaged. He is a thoughtful person; he just had not been given a reason to think about this before.

ChatGPT never blocked him. It also never asked whether the inventive step he described on page 2 actually works the way he thinks it does. It never asked whether the embodiment he mentioned in passing accidentally limits claim 1. It never asked who told him what on the phone last Thursday; and whether that verbal conversation changed everything about how the independent claims should be framed.

It just wrote. Fluently. Confidently. At speed.

The thing nobody talks about

The system that felt comfortable gave him a draft without understanding the invention.

The system that felt uncomfortable was the one that knew it did not fully understand; and refused to guess at legal language it was not certain about.

Which of those behaviours is actually the bug?

There is a term for what ChatGPT does in this situation. It is called confident ignorance. Not because the model is poorly designed. Because the model was not designed to know what it does not know. It was designed to generate plausible text from context. And generating plausible text from incomplete context is exactly what large language models do best.

For a blog post, this is fine. For a marketing email, this is fine. For claim 1 of a patent application that will survive examination, prosecution, licensing negotiations, and potentially court… this is where treating "plausible" and "correct" as synonyms starts getting very expensive.

[Image: A genuine conversation about what the system needs before it will draft]
What the user said after

"So your questions are basically
making me do my own homework."

I told him all of this. He thought about it. Then he said something I was not quite expecting.

"OK, so the questions are basically your way of making sure I have done my own homework."

Not quite; the questions are the system's way of making sure it has done its homework. But I understood what he meant. And I think he had just figured out, in his own words, what the 90% threshold actually is.

It is not a gate. It is not an obstacle the product team added to slow people down. It is a mirror.

The system holds it up and says: here is what I know about your invention; here is what I still do not know; and here is why I am not comfortable writing legal language about things I am currently guessing at.

The user answered the remaining questions. The system hit 91%. It opened the drafting pipeline. The first draft came out in minutes.

He sent me a follow-up that afternoon. "The claims actually look right this time," he said. Not "the claims are done." The claims look right. That word. That is the one that matters. :)

I think about this conversation every time someone asks me why eety.ai has a threshold at all. Why not just let people draft when they want to? Why impose this friction?

The answer is: ChatGPT does not have friction. And that is exactly why its patent drafts come out looking great and reading like someone else's invention.

The 90% requirement is not a technical limitation. We could set it to 0% tomorrow morning; the system would draft immediately upon receiving any disclosure, with full confidence, regardless of what it does not yet know. Some users would love that.

This pattern of rushing to produce output before truly understanding the invention shows up long before anyone opens eety.ai. I see it most commonly at the provisional application stage. A startup files a vague provisional to "lock in the date," treats the real drafting as a problem for future them, and then discovers twelve months later that the provisional did not actually describe what the non-provisional needs to claim. The story and the lesson are the same at every stage. I wrote about this in The Provisional Patent Is the Most Misunderstood Tool in IP. The confidence gate in eety.ai and the advice in that article solve the same problem: do not commit words to paper before the invention is understood.

But then eety.ai would just be THE CONFIDENT TAILOR. Making beautiful suits that belong to someone slightly different.

"The AI that blocks you before it drafts respects your invention more than the one that drafts without ever asking."

I did not remove the 90% restriction. I do not plan to. :)

P.S. For the attorneys reading this who have used ChatGPT for patent drafts and are now slightly uncomfortable: you are not alone. And you are probably right to be slightly uncomfortable. The good news is that the discomfort is the beginning of the right question. Everything after that gets easier. :)

Want to see the 90% threshold in action?

Upload a real disclosure.
See what eety.ai asks before it drafts.

The questions it asks are not obstacles. They are the invention, mapped. Most users say the questions alone are worth the session.
