Trust Infrastructure on ATproto

I don't think this is a problem that needs solving.

If a user wants to use a separate did:plc account for purposes they don't wish to link to their primary did:plc account, they will simply keep the accounts separate and won't need a method of linking them together.

Also, there are other ways of linking identities together; e.g. the (at)bsky.team and (at)name.bsky.team handle pattern is a clear way of verifying org members.

I personally don't care about sybil resistance for the sake of one-human-one-account (I think that's backwards; humans can have multiple identities to express themselves across different social contexts). I care about clearly identifying bots vs. humans.

1 Like

@gui.do Thanks for the response. I should have pointed to @ngerakines.me's code for sattestations (both the sattestor and the sattestee side), though it is definitely a work in progress. Amongst other things, he plans to rewrite all the Rust in Python; Nick, please correct me if I misrepresent you. (To answer a possible implicit question: for all relevant purposes I basically cannot code, so the code for this is all Nick, so far.) Also, by "sattestation" I mean a specific claim that the sattestor has checked that a meaningful identifier (so far a handle and/or registered domain) and a self-certifying identifier (so far a DID and/or onion address) are bound together under the control of the same entity, signed by a private key of the sattestor (self-authenticating as associated with the sattestor's identifier). This binding may or may not also carry a contextual label that the sattestor asserts: medical doctor, entity local to the town of Springfield, Bluesky employee, atproto-developer-site, atproto-event, cheese monger, etc.

And this works as an implementation of what you said in your reply to @snarfed.org: if Alice and Bob each want to attest to the identity of the other, as well as to their being collaborators on project foo, they can each sign a sattestation for the other. For Alice's endorsement of Bob, she signs a CID that can either be placed (signed) in Bob's PDS or posted in Alice's PDS to be picked up by the firehose. There are advantages either way. A sattestation should only be accepted if it is also asserted by the sattestee (e.g. by the sattestee signing it and putting it in the sattestee's PDS). This provides an element of what I call authority independence: nobody else can make claims about you that you don't accept or that you cannot retract.
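To make the shape of this concrete, here is a minimal sketch of the scheme described above. The record fields are my illustrative assumptions, not Nick's actual lexicon, and an HMAC stands in for a real ed25519 signature:

```python
import hashlib
import hmac
import json

def make_sattestation(sattestor_key: bytes, meaningful_id: str,
                      self_certifying_id: str, context=None) -> dict:
    """Sattestor signs a claim that a meaningful identifier and a
    self-certifying identifier are controlled by the same entity,
    optionally with a contextual label. HMAC is a stand-in for a
    real signature scheme."""
    claim = {
        "meaningful_id": meaningful_id,        # e.g. handle or domain
        "self_certifying_id": self_certifying_id,  # e.g. DID or onion address
        "context": context,                    # e.g. "collaborator-on-foo"
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(sattestor_key, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sattestor_sig": sig}

def accept(record: dict, sattestee_key: bytes) -> dict:
    """Authority independence: the sattestation only counts once the
    sattestee co-signs it (and they can retract by deleting it)."""
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    record["sattestee_sig"] = hmac.new(sattestee_key, payload,
                                       hashlib.sha256).hexdigest()
    return record

# Alice endorses Bob; Bob accepts by co-signing into his own PDS.
rec = make_sattestation(b"alice-signing-key", "bob.example.com",
                        "did:plc:abc123", context="collaborator-on-foo")
rec = accept(rec, b"bob-signing-key")
```

A verifier would check both signatures against the same claim payload; a record carrying only the sattestor's signature would be ignored.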

2 Likes

Arggh. And here’s the link that I again failed to provide to Nick’s code: GitHub - ngerakines/atonion · GitHub

1 Like

I like seeing this. I wrote some notes about trust + verification here: GitHub - mycelial-systems/cawg: Creator Assertions Working Group · GitHub

btw, trust on the internet was a big part of SSB culture, see trustnet — alexander cobleigh / cblgh.org

2 Likes

@gui.do regarding the self app in regard to Private Age Assurance for ATP

Self was audited by zkSecurity; you can find the audit reports when you search for “self” at https://reports.zksecurity.xyz/

but I’ll also link the audit reports here:

(Aadhaar is the Indian national ID system, afaik)

They also told me that their mobile app is open source (GitHub - selfxyz/self: Prove your self · GitHub). I also want to point you to this whitepaper authored by Google about how Self works: https://services.google.com/fh/files/misc/self_case_study.pdf

1 Like

Sorry for another long post 😅

Good that the mobile app is open source :flexed_biceps: That addresses the code transparency issue.

My core concern is a bit more architectural though, not about whether Self specifically is trustworthy.

From what I understand of Self’s architecture, during registration a nullifier (hash of passport data) is stored on-chain permanently to prevent double-registration. This hash is deterministic: anyone with access to the same passport data can compute the same hash.

Governments have passport databases. Airlines have passenger records. Hotels have check-in records, and lease companies have copies of your IDs (often your passport and/or driver’s license). Any of these could compute nullifier hashes and match them against the on-chain registry, linking an (AT Protocol) DID to a real-world identity, and do so retroactively.
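The retroactive linkage worry above can be sketched in a few lines. The hash construction and the passport string here are simplified illustrative stand-ins, not Self’s actual nullifier derivation:

```python
import hashlib

def registration_nullifier(passport_data: str) -> str:
    # Deterministic: anyone holding the same passport data
    # derives the exact same value.
    return hashlib.sha256(passport_data.encode()).hexdigest()

# A user registers; their nullifier lands in a public on-chain registry.
onchain_registry = {registration_nullifier("P<NLDDOE<<JANE<<EXAMPLE")}

# Later, any party with a copy of the passport data (a government,
# airline, or hotel) can recompute the hash and test membership:
probe = registration_nullifier("P<NLDDOE<<JANE<<EXAMPLE")
was_registered = probe in onchain_registry  # True: registration is linkable
```

No cryptography needs to be broken for this check; the adversary only replays the public derivation over data it already holds.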

This isn’t a bug in Self’s implementation, it’s a structural property of any system that needs to prevent double-registration from the same document.

Then there’s the TEE trust chain: Self delegates proof generation to Google Cloud Confidential Space (AMD SEV). The claim is that even Self and Google can’t access the data inside the enclave. But this trust chain depends on:

  • AMD hardware attestation being uncompromised
  • Google’s infrastructure operating as documented
  • No government compelling either party to modify the enclave code or sign a malicious attestation

Under the US CLOUD Act, a court order could compromise these guarantees. For users in jurisdictions with adversarial governments, “trust Google’s hardware enclave” is a big ask. And I can’t say that the US government is on a very trustworthy track right now. As for building this into protocol-level infrastructure… I don’t think we should.

I honestly don’t know the answer to this (I’m not a cryptography specialist in any way), but is there an architecture that can verify “this is a unique real person” or “this person is 18+” without any party, not even a hardware enclave, ever having access to both the online identity and the government document simultaneously? Maybe there is, but I haven’t seen it yet.

If the answer is “no, that moment of linkage is unavoidable,” then the question becomes: how do we minimize what persists after that moment, and do any residual artifacts (nullifiers, attestation logs, metadata) create acceptable risk?

For ATProto, all content sits on public relays and is indexable by anyone. Linking a DID to a government identity, even through a well-designed ZK system, creates a permanent deanonymization risk. A future government or adversary doesn’t need to break the cryptography; they just need the passport database they already have plus the public nullifier registry.

IDK, but I don’t really find that a comforting solution.

Until there is a true zero-knowledge system, I’m inclined to see behavioral and graph-based trust models (like TrustNet, EigenTrust, or the contextual trust @psyverson.bsky.social mentioned) as an “imperfect next best thing”, building trust from observed network participation rather than identity documents.

These sidestep the linking problem entirely because the link is never created. The tradeoff is no hard proof-of-personhood, but for most use cases on a social protocol, “the network of people I trust considers this person trustworthy” might be enough.

I could see ZK-verified attributes working as an optional signal for specific high-assurance contexts though, as long as users clearly understand what persists and what the residual risks are. And for those cases: link it to a new account just for that purpose; never link it to your main online account. The concern is when it becomes infrastructure rather than choice.

Would love to hear if I’m missing something in how the nullifier/TEE architecture works… happy to be corrected on any of this.

Then build something that is 100% trustless.

The alternative is a picture of your ID + a selfie sits in a database.

I’d rather have this than that.

To further answer your concerns:

Re: the attestation nullifier: an attacker obtaining the passport’s attestation could identify whether you registered, but not what you did afterward.

And importantly: when using attps.social, the disclosure proof that is put on your did:plc account does not contain the attestation nullifier.

Plus, you can always delete lexicon records, even from the attps.social webapp. I’m also not going to go into your concerns regarding AMD/Google/government security; that applies to everything and anything.

This is an alternative option suitable for some use cases. I don’t know why a forum would ever want to KYC any user in the first place, even for some weird trust score. We can all use this forum right now without issues, right?

The two types of systems you’re discussing now are orthogonal.

On the one hand we have the web-of-trust systems originally discussed. On the other there’s age-gating and verifiable credentials, which is increasingly a state-imposed requirement that platforms and hosters need to decide for themselves whether to comply with in order to continue operating in a given jurisdiction.

Self shouldn’t be compared to trustnets. The point of Self and attpslabs is to be a more transparent alternative to the egregiously data-harvesting systems in use by the incumbents. That discussion can be kept to the Private Age Assurance for ATP topic.

3 Likes

I agree that they are related but have different purposes, use cases, compliance obligations, and needs. I very much appreciate Guido looking into attpssocial/self and pointing out their concerns. I’m happy to receive more feedback on the respective discussion post for further concerns/questions/feedback.

I’ll be unfollowing this thread and leaving you all to focus on the web-of-trust side of the discussion.

1 Like

hi @gui.do I trust you are well! I got some more feedback from self re your concerns:

Hi! While it is true that airports and governments can see your nullifier on-chain, they can’t link your activity. Self has a two-step process: registration and disclosure. The registration nullifier is deterministic on your passport data, but the only thing governments can learn is whether a person has registered on chain. The disclosure nullifier is deterministic on your secret (passkey), and there is no way for the government to track your activity.
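The two-nullifier separation described here can be sketched as follows. The hash constructions and inputs are illustrative stand-ins for Self’s actual circuits:

```python
import hashlib

def registration_nullifier(passport_data: str) -> str:
    # Derived from passport data: reveals only *that* this passport
    # registered once on chain, nothing about later activity.
    return hashlib.sha256(b"reg:" + passport_data.encode()).hexdigest()

def disclosure_nullifier(user_secret: str, scope: str) -> str:
    # Derived from the user's secret (passkey), not the passport, so
    # disclosures cannot be recomputed from a passport database.
    return hashlib.sha256(f"disc:{user_secret}:{scope}".encode()).hexdigest()

passport = "P<NLDDOE<<JANE<<EXAMPLE"
reg = registration_nullifier(passport)
disc = disclosure_nullifier("user-passkey-secret", "attps.social")

# A government holding the passport data can recompute `reg` and
# confirm registration, but cannot derive `disc` without the secret.
```

The privacy argument rests on the two derivations sharing no input: linking them would require both the passport data and the user’s secret.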

As for the second concern: that is valid. The TEE is a trust assumption in our protocol. However, I’d like to point out that we’ll be moving to client-side proving in the future: creating proofs over PII (like creating commitments) on the user’s device, then creating the proof on a server for parts like signature verification, and then combining both proofs. As for the timeline for moving to client-side proving, we’re waiting on Noir to get audited so that we can make that move.

Also, thank you for poking these questions into the trust model; it’s a great learning experience! PS: I got this reply two days ago, sorry for being late lol

Oh, interesting. I thought it was client side ZK proof.

Honestly I don’t trust TEEs, but it’s just a personal feeling I’ve had that they can’t actually be secure.

Then I saw this post, which kind of helps confirm something spooky about them:

so they tried to get cloud providers to certify that they would never hand over the keys to a client. Providers could not do this.

3 Likes

yeah, I hope that when they move away from TEEs it becomes more trustless, although I am not familiar with Noir or its level of trustlessness. :sweat_smile:

The mutual endorsement is interesting to me in that it is similar to the vouch feature that Bluesky was once considering. I hope this will help you.

They later came up with verification instead and eventually adopted that.

1 Like

I imagine @scottlanoue.com might have thoughts here too? Lots of prior art for trust-networking in email!

1 Like