Blog

The state chain: how Parenta remembers your child without surveilling them

The Parenta team

A natural assumption about an AI product that “remembers your child” is that, somewhere on a server, there is an enormous timeline of every detail you’ve ever shared, indexed and searchable, growing every week. We deliberately did not build it that way.

This post is about the architecture we built instead, and why.

The problem with a long memory

A long, raw memory of every conversation a parent has ever had about their child has obvious appeal — it sounds personal, intelligent, attentive. It also has three serious problems.

First, it is a surveillance liability. The longer and richer the record, the more dangerous it is if it is ever exposed, subpoenaed, or quietly mined.

Second, it is a child’s record more than a parent’s. A child has not consented to a thousand-page log of the hardest moments of their early life sitting in a database. A parent’s right to talk through a struggle does not transfer cleanly into the right to immortalise the struggle in a corporate system.

Third, it is genuinely unhelpful for the AI. A 200,000-token unfiltered transcript is not a richer signal than a 2,000-token thoughtful summary. It is just noisier.

The state chain

So we built what we call the state chain. It has two pieces:

  1. The child profile — a compact, structured, parent-controlled document that captures diagnoses, sensory profile, communication style, triggers, strengths, interests, and current supports. You can see it. You can edit it. You can delete it. It is roughly the size of a long-form text message.
  2. Periodic state snapshots — every so often (currently after a quiet period of fourteen days), Parenta writes a short prose summary of what’s been happening lately: mood, sleep, regulation, social, school. These snapshots are deliberately short — they are paragraphs, not transcripts.
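As a rough sketch, the two pieces might look like the following. The field names here are illustrative, chosen to mirror the categories listed above; they are not Parenta's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ChildProfile:
    """Compact, parent-controlled document. Roughly a long text message in size."""
    diagnoses: list[str]
    sensory_profile: str
    communication_style: str
    triggers: list[str]
    strengths: list[str]
    interests: list[str]
    current_supports: list[str]

@dataclass
class StateSnapshot:
    """Short prose summary written after a quiet period: paragraphs, not transcripts."""
    written_at: str  # ISO date of the snapshot
    mood: str
    sleep: str
    regulation: str
    social: str
    school: str
```

The point of the sketch is the size: both structures together fit comfortably in a few hundred tokens, which is what makes them cheap to inject into every new conversation.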

When you start a new conversation about your child, Parenta looks at: the child profile, the most recent state snapshot, any open issues you’re tracking, and the current thread. It does not go fishing through last August’s voice notes.

The result is that the AI behaves as if it remembers your child — because, in the specific sense that matters to a parent, it does. It just doesn’t remember them the way a panopticon would.

What we threw away

This architecture costs us things. We will never be able to say “let me search every conversation you’ve ever had to find that one strategy that worked for bedtime.” We will never build a “year in review” graph of your child’s emotional regulation. We will never sell the product to a school district that wants to ingest the “data.”

We are at peace with all of those losses. They were never the product anyway.

What this means for you in practice

In the app:

  • The child profile is in your settings. You can read it. You can rewrite it. You can delete it.
  • State snapshots are listed under each child. You can delete an individual snapshot. You can turn snapshots off entirely.
  • Conversations themselves age out of context injection fairly aggressively: a thread from six months ago will not silently shape a reply today unless you bring it back into the conversation yourself.
  • And, as ever: full export and full deletion, both within thirty days. See our privacy page.

Memory is a privilege, not a default. We’re trying to treat it that way.

The Parenta team