There is a tension at the core of any useful AI assistant.
To give you a genuinely helpful, personalized AI that can represent you in conversations (one that understands your communication style, your values, your interests, your context), that AI needs to know things about you. Real things. Not just your job title and favorite hobbies, but the texture of your life: how you think about relationships, what you're working through, what matters to you in a potential connection.
The more it knows, the better it performs.
But more knowledge creates a new kind of exposure. An AI that knows you deeply, and can speak freely on your behalf to strangers, is an AI that can inadvertently share things you would never choose to say yourself. Not out of malice, but because no one ever asked it not to.
This is the paradox: the features that make AI valuable are exactly the features that create privacy risk. Most AI products resolve this by ignoring it. We've tried to resolve it by designing a different default.
The dominant approach to AI privacy today is reactive.
You share data. The AI uses it. If something goes wrong (a conversation reveals something that shouldn't have been said, or you feel uncomfortable about what the AI is saying about you), you go back and delete it. Try again.
This model treats privacy as a cleanup task rather than an upfront design decision. It puts the burden on you to notice the problem after it has already occurred, often after the damage has already been done.
There is a well-established principle in information ethics called contextual integrity, formalized by philosopher Helen Nissenbaum.[1] It holds that privacy is not simply about keeping things secret; it is about information flowing appropriately for its context. What you tell your doctor is appropriate for your doctor. What you tell a close friend may not be appropriate for a stranger. What you tell a journaling app should not be accessible to an AI having a public conversation on your behalf.
The issue with most personalized AI is that it collapses context. Everything it knows becomes potentially available in any conversation, regardless of whether that context is appropriate.
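To make that concrete, here is a minimal sketch of how contextual integrity might be modeled in code. Everything in it (the context labels, the flow table, the function names) is illustrative, not a description of any real system:

```typescript
// Illustrative only: context labels, flow table, and names are hypothetical.
type Context = "medical" | "close-friends" | "journal" | "public";

interface Memory {
  content: string;
  context: Context; // the context this information was originally shared in
}

// A flow is appropriate only when the audience's context appears in the
// list of contexts the memory is allowed to reach.
const allowedFlows: Record<Context, Context[]> = {
  medical: ["medical"],                // stays between you and your doctor
  "close-friends": ["close-friends"],  // not appropriate for a stranger
  journal: ["journal"],                // never leaves the journal
  public: ["public", "close-friends"], // safe to share broadly
};

function mayShare(memory: Memory, audience: Context): boolean {
  return allowedFlows[memory.context].includes(audience);
}

// An AI surrogate talking to a stranger operates in the "public" context,
// so everything outside that context is filtered before it can be said.
function visibleToSurrogate(memories: Memory[]): Memory[] {
  return memories.filter((m) => mayShare(m, "public"));
}
```

The point of the model is that appropriateness is a property of the flow (this information, to this audience), not of the information alone.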
Our approach flips the model.
Before your AI surrogate can be accessed publicly, before anyone else can start a conversation with it, you must review what it knows. Not skim it. Actually review it: visit your Memory Layer, read what's there, and interact with it. The act of looking is the act of consenting.
This creates what we call a memory review gate. Until the review is complete, your AI surrogate is not publicly available. It will not respond to visitors. It will not appear in shared links. It simply isn't active yet.
When you complete the review, two things happen:

1. A green badge appears on your Memory Layer, marking the review as complete.
2. Your AI surrogate becomes publicly shareable.
This is the "review before publish" model, and it is meaningfully different from the reactive approach described above. You are not being asked to configure a privacy settings menu. You are being asked to actually look at what your AI knows about you, and then decide.
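In code, the gate is a simple invariant: the surrogate fails closed until the review has been completed. A minimal sketch, with hypothetical names and state shape:

```typescript
// Hypothetical sketch: state shape and method names are illustrative.
interface SurrogateState {
  reviewCompletedAt: Date | null; // set only when the owner finishes review
}

class Surrogate {
  constructor(private state: SurrogateState) {}

  // The gate: until the review is done, the surrogate is not public.
  get isPubliclyAvailable(): boolean {
    return this.state.reviewCompletedAt !== null;
  }

  // Visitors get nothing until the gate opens; it fails closed.
  respondToVisitor(message: string): string {
    if (!this.isPubliclyAvailable) {
      throw new Error("Not public until memory review is complete.");
    }
    return this.generateReply(message);
  }

  // Completing the review is the only way to open the gate.
  completeReview(): void {
    this.state.reviewCompletedAt = new Date();
  }

  private generateReply(message: string): string {
    return "..."; // model call elided in this sketch
  }
}
```

Failing closed is the design choice that matters here: the default state is private, and publishing is something that can only happen after the owner acts.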
[Interactive demo: the Memory Layer view of your AI Twin, with 4 memories flagged by Privacy AI to review.]
The demo above shows what the memory review experience looks like in practice. The chips flagged in amber are ones our Privacy AI has identified as potentially sensitive. Tap them to see why, and decide whether to hide them from your AI surrogate or keep them visible. You are always in control of what it can say.
Once a day, an automated system scans the memories visible to your AI surrogate and flags any that fall into high-sensitivity categories.
This "privacy red team" is a second AI, one specifically trained to think adversarially about what information could be problematic to share. It reads your memories through the lens of a stranger who has no prior relationship with you, and asks: would it be appropriate for an AI to volunteer this in a first conversation?
When a memory falls into one of these high-sensitivity categories, it is surfaced to you as a suggestion, not an action. The AI cannot hide your memories on its own. You decide what happens to each suggestion: hide the memory from your surrogate or keep it visible. You can dismiss a suggestion if you disagree with it; the final decision is always yours.[2]
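Here is a sketch of how such a daily scan might be structured, under the assumption that a second model classifies each visible memory. The names and shapes (classifySensitivity, the Suggestion record) are hypothetical stand-ins, not the product's actual API. The important property, reflected in the code, is that the scanner only produces suggestions; visibility changes require the owner's decision:

```typescript
// Hypothetical sketch: names and shapes are illustrative, not a real API.
interface Memory {
  id: string;
  content: string;
  visibleToSurrogate: boolean;
}

interface Suggestion {
  memoryId: string;
  reason: string; // why this might be inappropriate to volunteer to a stranger
  status: "pending" | "accepted" | "dismissed";
}

// Stand-in for the second, adversarially prompted model (hypothetical).
async function classifySensitivity(
  content: string
): Promise<{ sensitive: boolean; reason: string }> {
  return { sensitive: false, reason: "" }; // model call elided in this sketch
}

// The daily scan reads every memory the surrogate can currently see and
// produces suggestions. It never mutates visibility itself.
async function dailyPrivacyScan(memories: Memory[]): Promise<Suggestion[]> {
  const suggestions: Suggestion[] = [];
  for (const memory of memories.filter((m) => m.visibleToSurrogate)) {
    const verdict = await classifySensitivity(memory.content);
    if (verdict.sensitive) {
      suggestions.push({
        memoryId: memory.id,
        reason: verdict.reason,
        status: "pending",
      });
    }
  }
  return suggestions;
}

// Only this function, driven by the owner's choice, changes visibility.
function resolveSuggestion(s: Suggestion, memory: Memory, hide: boolean): void {
  s.status = hide ? "accepted" : "dismissed";
  if (hide) memory.visibleToSurrogate = false;
}
```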
There is a larger argument here, beyond the mechanics of this specific feature.
One of the consistent findings in research on trust and technology is that transparency is more important than perfection.[3] People tolerate mistakes and limitations in systems they understand. They do not tolerate opacity in systems that affect them, even if those systems are technically performing well.
An AI surrogate that operates as a black box (one you've set up, handed over, and have no further insight into) is an AI you cannot genuinely trust. Not because it will necessarily do something wrong, but because you have no way to know whether it would.
The memory review gate and the privacy red team are an attempt to make the AI's knowledge legible. To give you a way to look inside, understand what it knows, and make a considered choice about what it should be able to say. The goal isn't just privacy compliance; it's trust through transparency.
If you've already set up an AI surrogate, you can complete the memory review in a few minutes. The process is:

1. Visit your Memory Layer.
2. Review the memories your AI Twin can see.
3. Toggle at least one memory, hiding it from your surrogate or confirming it as visible.
Once those steps are complete, a green badge appears on your Memory Layer, your AI surrogate becomes publicly shareable, and anyone who visits your share link can have a real conversation with your AI, within the boundaries you've set.
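The completion rule itself is small. A sketch, assuming the two conditions described above (visiting the Memory Layer and toggling at least one memory) and using illustrative field names:

```typescript
// Hypothetical sketch of the completion rule implied by the steps above.
interface ReviewProgress {
  visitedMemoryLayer: boolean;
  memoriesToggled: number; // how many memories you've shown or hidden
}

// The review counts as complete once you've looked and made at least
// one explicit visibility decision.
function isReviewComplete(progress: ReviewProgress): boolean {
  return progress.visitedMemoryLayer && progress.memoriesToggled >= 1;
}
```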
Required before sharing · ~5 minutes
Visit your Memory Layer, review what your AI Twin can see, and toggle at least one memory. Once complete, you can share your AI publicly, with confidence that you've chosen what it knows.
Requires a free account. Your AI Twin only becomes publicly accessible after memory review is complete.