Building a trust & reputation clearinghouse for atproto niche networks

Glowrm is an experiment in a shared trust/provenance layer for smaller ATProto-style networks.

The idea is to make it easier to run niche or micro-communities by letting accounts, reputation, and verification move across services instead of each network bootstrapping its own trust stack from scratch.

I've built specs for a dating app and a networking site (heypbj.xyz, leafroll.fun) as testing tools for how Glowrm connects services and how identity and reputation can travel between spaces. Each has its own unique value prop, built around a matching mechanic I've been working on for years.

My overarching interest is understanding delegated decision-making: the algo era makes judgment more opaque and often impossible to revert, so building trust infra through a larger suite of tools is where my head is these days.

4 Likes

Very cool. We’re also thinking about this in the context of Roomy and we’ve collected a bunch of resources on the topic of Web Reputation Systems.

See also Ringspace by @taggart-tech.com

Will Glowrm be open source? Hard to commit one’s app to such a critical piece of infrastructure without full system transparency.

4 Likes

Yes it’s all open source! I’ll check out Ringspace, this has been a fun rabbit hole to be part of

1 Like

There’s a growing urgency to have a tiered trust-levels system at the level of the PDS:

Good behavior on a dating app means you start with that same reputation on a professional network.

Yeah because I love signing up with my grindr account on linkedin.

When someone gets reported on one app, it affects their reputation across the network.

Great now when an app dev decides to block my account I get rugged from everywhere. Even if the block was unjustified at the discretion of the app.

We already have a trust system on atproto, and it’s who people follow and who follows them.

You could have a single use account.

But also, the idea here is to support niche apps through shared trust rails.

People already use social accounts to auth so this isn’t some invented use case, just exploring alternatives.

Trust scores might be useful for apps, I agree on this. There is also the Neynar User Score which has done this for Farcaster. And they have updated their scoring algorithm over time.

I just have some concerns as I pointed out.

Also note that Neynar did not develop this score for the sake of a score. They had thousands of users using multiple apps and several app devs building, and then the need emerged. I don’t know if any app dev here has asked for this or has ever faced issues regarding “low quality” users on their apps. Would be nice if other app devs could chime in on this.

1 Like

These are reasonable concerns, but the proposal has been shared here in good faith exactly so we can address any shortcomings together in transparency. No need to be so dismissive.

Speaking of transparency…

Like, it’s already available in a repo somewhere?

1 Like

Yeah this can easily get out of hand. I think the safest place to start is purely granting access, not revoking it.

Higher reputation users could have various rate limits lifted that are otherwise applied to brand new untrusted users. This is in essence how the Trust Levels of Discourse work:
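To make the "grant, never revoke" idea concrete, here's a minimal sketch of tier-based rate limiting. The tier names, limits, and values are all illustrative assumptions, not from Discourse or any Glowrm spec; the point is that every tier only layers extra grants on top of an untouched baseline, so losing reputation can never drop someone below what a brand-new account gets.

```python
# Hypothetical sketch: trust tiers only grant access on top of a fixed
# baseline. All tier names and limit values are made up for illustration.

BASELINE = {"posts_per_hour": 3, "links_per_post": 1, "dm_allowed": False}

TIER_GRANTS = {
    0: {},  # brand-new, untrusted account: baseline only
    1: {"posts_per_hour": 10},
    2: {"posts_per_hour": 30, "links_per_post": 5},
    3: {"posts_per_hour": 100, "links_per_post": 10, "dm_allowed": True},
}

def limits_for(tier: int) -> dict:
    """Start from the baseline and layer on every grant up to the user's tier.

    Because grants only overwrite baseline values upward, the worst case
    (tier 0, or a tier lowered later) is always exactly the baseline.
    """
    limits = dict(BASELINE)
    for t in range(tier + 1):
        limits.update(TIER_GRANTS.get(t, {}))
    return limits
```

A demoted user simply falls back toward the baseline rather than being locked out, which sidesteps the "rugged from everywhere" failure mode raised earlier in the thread.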

1 Like

Hi, I’m building Barazo, a forum AppView on ATProto (lexicon thread).

Going back and forth on this same question from the forum side. My current implementation: reputation stays local to each community, computed AppView-side from typed reaction records (“helpful”, “agree”, “insightful”). Nothing stored on the PDS, because bad actors could fabricate their own scores. Different communities might weight the same reactions differently, because “helpful” in a Rust programming forum shouldn’t carry the same weight as in a meme community. Of course memes are way more reputation building, but not everyone may agree and that is fine :wink:
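A rough sketch of that AppView-side scoring, under my assumptions: the reaction kinds ("helpful", "agree", "insightful") are the ones named above, but the weight tables and community names are invented for illustration. Nothing here touches the PDS; the score is recomputed locally from reaction records.

```python
from collections import Counter

# Hypothetical sketch: each community defines its own weights for typed
# reaction records, so the same reactions score differently per community.
# Weight values and community names are illustrative only.

COMMUNITY_WEIGHTS = {
    "rust-forum": {"helpful": 3, "insightful": 2, "agree": 1},
    "meme-den":   {"helpful": 1, "insightful": 1, "agree": 1},
}

def local_score(community: str, reactions: list[str]) -> int:
    """Additive, non-negative reputation local to one community.

    Computed AppView-side from reaction records; unknown reaction
    kinds are ignored rather than penalized.
    """
    weights = COMMUNITY_WEIGHTS[community]
    counts = Counter(reactions)
    return sum(weights.get(kind, 0) * n for kind, n in counts.items())
```

For example, two "helpful" plus one "agree" would score 7 in the Rust forum but only 3 in the meme community, which is exactly the per-community weighting described above.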

The Discourse trust levels comparison earlier in the thread is close to what I’m doing. Additive, local, earned through participation in that community. No negative scores, no cross-community spillover.

I think the two approaches can coexist if the cross-network layer is limited to positive, opt-in signals published as labels. Something like “sustained quality participation across N communities” that AppViews can subscribe to or ignore. The consuming app stays in control of how much weight to give it, and a bad moderation decision in one community doesn’t cascade into others.
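A sketch of what the consuming side of that could look like. The label names and the shape of the subscription are assumptions of mine, not an existing ATProto lexicon; what matters is that only positive, recognized labels can add weight, and each app sets that weight itself (including zero to opt out).

```python
# Hypothetical sketch: an AppView consumes cross-network labels but
# stays in control of their weight. Only labels on a positive allowlist
# can contribute, so a bad moderation call elsewhere cannot subtract
# locally. Label names are illustrative, not a real lexicon.

POSITIVE_LABELS = {
    "sustained-quality-participation",
    "verified-organizer",
}

def boost_from_labels(labels: list[str], app_weights: dict[str, int]) -> int:
    """Sum the consuming app's own weights for recognized positive labels.

    Unrecognized or negative labels are ignored entirely; an app that
    assigns no weights (or zero weights) effectively opts out.
    """
    return sum(
        app_weights.get(label, 0)
        for label in labels
        if label in POSITIVE_LABELS
    )
```

So a "spam-flag" style label from another network simply contributes nothing here, rather than cascading into a penalty.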

2 Likes

Good stuff, going to check it out

1 Like