Sorry for another long post!
Good that the mobile app is open source; that addresses the code transparency issue.
My core concern is a bit more architectural though, not about whether Self specifically is trustworthy.
From what I understand of Self's architecture, during registration a nullifier (a hash of the passport data) is stored on-chain permanently to prevent double-registration. This hash is deterministic: anyone with access to the same passport data can compute the same hash.
Governments have passport databases. Airlines have passenger records. Hotels have check-in records, and lease companies have copies of your IDs (often your passport and/or driver's license). Any of these could compute nullifier hashes and match them against the on-chain registry, linking an AT Protocol DID to a real-world identity, and they could do so retroactively.
This isn't a bug in Self's implementation; it's a structural property of any system that needs to prevent double-registration from the same document.
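To make the structural point concrete: if the nullifier is any deterministic function of the passport data alone (a simplification on my part; I don't know Self's exact derivation, which presumably happens inside the ZK circuit), then anyone who already holds that data can rebuild the mapping offline. A rough Python sketch, with all data and identifiers made up:

```python
import hashlib

def nullifier(passport_mrz: str) -> str:
    # Hypothetical deterministic nullifier. The point is structural: any
    # fixed function of the document data behaves this way, regardless of
    # which hash or circuit actually produces it.
    return hashlib.sha256(passport_mrz.encode("utf-8")).hexdigest()

# A party that already holds passport records (a government, an airline,
# a hotel chain) precomputes nullifiers for everyone in its database...
passport_db = {
    "P<NLDDE<JONG<<JAN<<<<<<<<<<<": "Jan de Jong",    # illustrative MRZ strings
    "P<USASMITH<<ALICE<<<<<<<<<<<": "Alice Smith",
}
lookup = {nullifier(mrz): name for mrz, name in passport_db.items()}

# ...then scans the public on-chain registry and links DIDs to people,
# retroactively, without breaking any cryptography.
onchain_registry = {
    nullifier("P<NLDDE<JONG<<JAN<<<<<<<<<<<"): "did:plc:abc123xyz",  # made-up entry
}
for nf, did in onchain_registry.items():
    if nf in lookup:
        print(f"{did} -> {lookup[nf]}")  # deanonymized
```

The obvious mitigation would be deriving the nullifier with a secret only the system holds, but then the double-registration guarantee depends on whoever holds that secret, which just moves the trust problem rather than solving it.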
Then there's the TEE trust chain: Self delegates proof generation to Google Cloud Confidential Space (AMD SEV). The claim is that even Self and Google can't access the data inside the enclave. But this trust chain depends on:
- AMD hardware attestation being uncompromised
- Google's infrastructure operating as documented
- No government compelling either party to modify the enclave code or sign a malicious attestation
Under the US CLOUD Act, a court order could compromise these guarantees. For users in jurisdictions with adversarial governments, "trust Google's hardware enclave" is a big ask. And I can't say that the US government is on a very trustworthy track right now. As for building this into protocol-level infrastructure… I don't think we should.
I honestly don't know the answer to this (I'm not a cryptography specialist in any way), but is there an architecture that can verify "this is a unique real person" or "this person is 18+" without any party, not even a hardware enclave, ever having access to both the online identity and the government document simultaneously? Maybe there is, but I haven't seen it yet.
If the answer is "no, that moment of linkage is unavoidable," then the question becomes how we minimize what persists after that moment, and whether any residual artifacts (nullifiers, attestation logs, metadata) create acceptable risk.
For ATProto, all content sits on public relays and is indexable by anyone. Linking a DID to a government identity, even through a well-designed ZK system, creates permanent deanonymization risk. A future government or adversary doesn't need to break the cryptography; they just need the passport database they already have plus the public nullifier registry.
IDK, but I don't really find that a comforting solution.
Until there is a true zero-knowledge system, I'm inclined to see behavioral and graph-based trust models (like TrustNet, EigenTrust, or the contextual trust @psyverson.bsky.social mentioned) as the "imperfect next best thing": building trust from observed network participation rather than identity documents.
These sidestep the linking problem entirely because the link is never created. The tradeoff is no hard proof-of-personhood, but for most use cases on a social protocol, "the network of people I trust considers this person trustworthy" might be enough.
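For anyone unfamiliar with EigenTrust: the core of it is just a power iteration over peer-reported trust, biased toward a small set of pre-trusted seeds so fake accounts can't endorse each other into credibility, and it never needs to know who anyone is off-network. A minimal sketch (parameter names and toy data are mine, not from any particular implementation):

```python
import numpy as np

def eigentrust(local_trust: np.ndarray, pretrusted: np.ndarray,
               alpha: float = 0.15, iters: int = 50) -> np.ndarray:
    """Minimal EigenTrust-style power iteration.

    local_trust[i, j] = how much peer i trusts peer j, based on observed
    interactions (non-negative). pretrusted is a probability vector over
    seed peers. Returns a global trust score for every peer.
    """
    row_sums = local_trust.sum(axis=1, keepdims=True)
    # Row-normalize; peers who trust nobody fall back to the seed distribution.
    c = np.where(row_sums > 0, local_trust / np.maximum(row_sums, 1e-12), pretrusted)
    t = pretrusted.copy()
    for _ in range(iters):
        t = (1 - alpha) * (c.T @ t) + alpha * pretrusted  # blend with seeds each step
    return t

# Toy network: peer 0 is a pre-trusted seed, peer 3 has earned no trust from anyone.
local = np.array([[0, 3, 1, 0],
                  [2, 0, 2, 0],
                  [1, 2, 0, 0],
                  [0, 0, 0, 0]], dtype=float)
seed = np.array([1.0, 0.0, 0.0, 0.0])
print(eigentrust(local, seed).round(3))  # peer 3 ends up with ~zero global trust
```

The input here is purely behavioral (who interacted well with whom), so there is no document, no nullifier, and nothing to retroactively match against a passport database.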
I could see ZK-verified attributes working as an optional signal for specific high-assurance contexts though, as long as users clearly understand what persists and what the residual risks are. And for those cases: link it to a new account just for that purpose, never link it to your main online account. The concern is when it becomes infrastructure rather than choice.
Would love to hear if I'm missing something in how the nullifier/TEE architecture works… happy to be corrected on any of this.