While building out Barazo I keep running into the question behind EigenTrust, sybil detection, and trust propagation: how do you figure out, cross-network, whether an account is real (i.e. not a bot) without building a reputation score that inevitably gets gamed, weaponized, or both?
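For context on why I keep coming back to EigenTrust: its core mechanic is a damped power iteration over a matrix of local trust, with a handful of pre-trusted accounts as the anchor. Here's a toy sketch (my own illustration, not anyone's production code; the matrix and account names are made up):

```python
# Minimal EigenTrust-style power iteration (toy sketch).
# C[i][j] = normalized trust account i places in account j;
# each row sums to 1 for accounts that trust anyone at all.
def eigentrust(C, pretrusted, alpha=0.15, iters=50):
    """Propagate local trust into a global score vector, damped toward
    the pre-trusted distribution p so isolated cliques can't dominate."""
    n = len(C)
    total = sum(pretrusted)
    p = [x / total for x in pretrusted]
    t = p[:]
    for _ in range(iters):
        # t <- (1 - alpha) * C^T t + alpha * p
        t = [(1 - alpha) * sum(C[i][j] * t[i] for i in range(n)) + alpha * p[j]
             for j in range(n)]
    return t

# Accounts 0 and 1 trust each other; account 2 (a sybil) only self-asserts.
C = [[0.0, 1.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0]]
scores = eigentrust(C, pretrusted=[1, 1, 0])
# The self-asserting account ends up with zero global trust.
```

The catch, and the reason for the rest of this post: computing that matrix honestly requires seeing essentially the whole interaction graph.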
I wrote about the AI slop angle recently for background on where I’m coming from. But I think there are a couple harder unsolved questions underneath. One privacy constraint I’m working with is “no government ID or financial identity verification”. Pseudonymous accounts should be first-class citizens.
What makes ATproto exciting for this is that it might be the first real opportunity to solve trust across multiple apps and content types at once since the identity layer and data model are shared.
I’d love to think through these with people who are actually building on it. So here I am!
Feel free to point me elsewhere if this discussion has already happened somewhere; I’m relatively new to the community.
Sybil detection without a surveillance system
Effective sybil detection wants the full network graph. But “technically public” and “actively surveilling” are different things: ingesting every interaction across every ATproto app to compute trust scores about people who never used your service… that’s a panopticon (and probably a GDPR headache).
The alternative could be user-initiated trust building. Users choose what feeds their trust profile (Keytrace verifications, community activity, endorsements) and then the app can verify those specific claims against signed records in other users’ PDS rather than surveilling everything. Only positive signals, so opting in more always helps.
(And using negative signals probably just makes bots reset by creating a new account whenever something negative is attached to their profile, so my guess is that this wouldn’t work?)
So the app gives up proactive detection across the full network. But fake accounts that never show up aren’t a threat, and fake accounts that do show up have to leave traces (spam, fake endorsements, interactions with real users) that are visible. The bet is that the sybils that actually matter will reveal themselves through their behavior. Not sure that bet always holds though…
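To make the user-initiated approach concrete, here's one way the verification step could look. Everything here is a hypothetical sketch: `fetch_record` stands in for a real PDS lookup (something like `com.atproto.repo.getRecord`), and the claim shape and record keys are invented:

```python
# Opt-in trust signals: the user lists claims on their own profile, and
# the AppView verifies each claim against the specific record it points
# at, instead of ingesting the whole firehose.
from dataclasses import dataclass

@dataclass
class Claim:
    kind: str        # e.g. "endorsement", "keytrace-verification"
    source_did: str  # whose PDS holds the supporting record
    record_key: str

def verify_profile(claims, fetch_record):
    """Return only the claims whose supporting record actually exists and
    matches. Only positive signals: a failed lookup drops the claim but
    never subtracts from anything, so opting in more always helps."""
    verified = []
    for c in claims:
        record = fetch_record(c.source_did, c.record_key)
        if record is not None and record.get("kind") == c.kind:
            verified.append(c)
    return verified

# Toy stand-in for other users' PDSs: one endorsement record exists.
pds = {("did:plc:alice", "3k2a"): {"kind": "endorsement",
                                   "subject": "did:plc:bob"}}
claims = [Claim("endorsement", "did:plc:alice", "3k2a"),
          Claim("endorsement", "did:plc:mallory", "9zzz")]  # no such record
verified = verify_profile(claims, lambda did, key: pds.get((did, key)))
```

The point of the shape: the app only ever fetches records the user explicitly pointed at, so there's no standing index of people who never opted in.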
I very much assume this is a shared ecosystem challenge. Should each app build its own detection, or is there a way to coordinate across labelers without creating a centralized clearinghouse?
Endorsement consent gap
Endorsements live in the endorser’s PDS. The endorsed user can’t delete them, can’t reject them, has no recourse. Without a consent model at the application layer, endorsement spam is trivially easy and there’s nothing the target can do about it.
Endorsements should only appear on someone’s profile once they’ve accepted them, but that’s an AppView-level decision, so every app would end up solving it independently. Has anyone thought about a shared convention for this?
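One possible shape for such a convention (assumed, not an existing lexicon): the endorsement stays in the endorser's PDS, but the AppView only surfaces it once the subject has published a matching acceptance record in their own PDS. A minimal sketch:

```python
# Accept-before-display: an endorsement is only shown if its subject
# has published a matching acceptance record. The spam endorsement
# still exists in the spammer's repo, but it never gets an audience.
def visible_endorsements(endorsements, acceptances):
    """endorsements: list of (endorser_did, subject_did, rkey) tuples;
    acceptances: set of (endorser_did, rkey) pairs the subject accepted."""
    return [e for e in endorsements if (e[0], e[2]) in acceptances]

spam = ("did:plc:spammer", "did:plc:bob", "aaaa")
real = ("did:plc:alice", "did:plc:bob", "3k2a")
shown = visible_endorsements([spam, real],
                             acceptances={("did:plc:alice", "3k2a")})
```

If the acceptance record schema were shared as a lexicon, every AppView could apply the same filter without any coordination beyond agreeing on the record type.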
Newcomer problem
Another fun challenge is that graph-based trust inherently favors incumbents. A newcomer with zero history is indistinguishable from a sybil to the algorithm. Cross-platform identity (Keytrace) helps: proving ownership of a GitHub account and a domain is a real signal. But not everyone has those… and requiring them would be exclusionary.
How can we design a system where someone joining in 2030 still has a realistic path to being trusted within a reasonable time, without hindering their participation in the meantime?
So some discussion points I’d love to get your view on:
- Sybil detection that doesn’t require ingesting the full firehose. Inter-labeler coordination? Community-maintained heuristics? Something else?
- Endorsement consent gap: a shared convention, or does every AppView just handle it?
- Trust bootstrapping for newcomers without cross-platform identity. What works that isn’t just “wait and contribute”?
PS: the Glowrm thread covers related ground from the centralized clearinghouse angle.
