Glowrm is an experiment in a shared trust/provenance layer for smaller ATProto-style networks.
The idea is to make it easier to run niche or micro-communities by letting accounts, reputation, and verification move across services instead of each network bootstrapping its own trust stack from scratch.
I'm building specs for a dating app and a networking site (heypbj.xyz, leafroll.fun) as testing tools for how Glowrm connects and how identity and reputation can travel between spaces. They each have their own unique value prop, built on a matching mechanic I've been working on for years.
My overarching interest is in understanding delegated decision-making: the algorithm era makes judgment more opaque and often impossible to revert. Building that trust infrastructure through a larger suite of tools is where my head is these days.
I agree that trust scores might be useful for apps. There is also the Neynar User Score, which has done this for Farcaster, and they have updated their scoring algorithm over time.
I just have some concerns as I pointed out.
Also note that Neynar has not developed this score for the sake of a score. They had thousands of users using multiple apps and several app devs building, and then the need emerged. I don't know if any app dev has asked for this or ever faced issues regarding "low quality" users on their apps. Would be nice if other app devs could chime in on this.
These are reasonable concerns, but the proposal has been shared here in good faith exactly so we can address any shortcomings together in transparency. No need to be so dismissive.
Speaking of transparency…
Like, it’s already available in a repo somewhere?
Yeah this can easily get out of hand. I think the safest place to start is purely granting access, not revoking it.
Higher reputation users could have various rate limits lifted that are otherwise applied to brand new untrusted users. This is in essence how the Trust Levels of Discourse work:
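The access-granting idea above can be sketched as a simple tiered lookup, where limits only ever loosen as trust grows. The level names, thresholds, and limit fields below are hypothetical illustrations, not Discourse's actual values:

```typescript
// Sketch of additive trust levels gating rate limits. All thresholds
// and limit values here are made up for illustration.
type TrustLevel = 0 | 1 | 2 | 3;

interface RateLimits {
  postsPerDay: number;
  linksPerPost: number;
  canSendDMs: boolean;
}

// Limits only ever loosen as trust grows; nothing is ever revoked,
// which keeps the system purely access-granting.
const LIMITS: Record<TrustLevel, RateLimits> = {
  0: { postsPerDay: 3, linksPerPost: 0, canSendDMs: false },
  1: { postsPerDay: 10, linksPerPost: 2, canSendDMs: false },
  2: { postsPerDay: 50, linksPerPost: 10, canSendDMs: true },
  3: { postsPerDay: Infinity, linksPerPost: Infinity, canSendDMs: true },
};

function limitsFor(level: TrustLevel): RateLimits {
  return LIMITS[level];
}
```

The key design property: a new untrusted user starts at level 0 with tight limits, and earning reputation can only lift restrictions, never impose new ones.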
Going back and forth on this same question from the forum side. My current implementation: reputation stays local to each community, computed AppView-side from typed reaction records ("helpful", "agree", "insightful"). Nothing is stored on the PDS, because bad actors could fabricate their own scores. Different communities might weight the same reactions differently, because "helpful" in a Rust programming forum shouldn't carry the same weight as in a meme community. (Personally I think memes build way more reputation, but not everyone will agree, and that's fine.)
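To make the per-community weighting concrete, here is a minimal sketch of the AppView-side computation. The record shape, community names, and weight values are all hypothetical, not my actual schema:

```typescript
// Sketch: AppView-side reputation from typed reaction records, with
// per-community weights. Shapes and weights are illustrative only.
type ReactionKind = "helpful" | "agree" | "insightful";

interface ReactionRecord {
  community: string; // community where the reaction happened
  subject: string;   // DID of the account being reacted to
  kind: ReactionKind;
}

// Each community chooses its own weights for the same reaction types.
const WEIGHTS: Record<string, Record<ReactionKind, number>> = {
  "rust-forum": { helpful: 3, insightful: 2, agree: 1 },
  "meme-zone":  { helpful: 1, insightful: 1, agree: 2 },
};

// Reputation is local and additive: only the consuming community's own
// weights apply, and the score can never go negative.
function localReputation(
  community: string,
  subject: string,
  reactions: ReactionRecord[],
): number {
  const weights = WEIGHTS[community];
  if (!weights) return 0;
  return reactions
    .filter((r) => r.community === community && r.subject === subject)
    .reduce((score, r) => score + weights[r.kind], 0);
}
```

Because the score is derived entirely from reactions the AppView itself indexed, there is nothing on the PDS for a bad actor to forge.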
The Discourse trust levels comparison earlier in the thread is close to what I’m doing. Additive, local, earned through participation in that community. No negative scores, no cross-community spillover.
I think the two approaches can coexist if the cross-network layer is limited to positive, opt-in signals published as labels. Something like “sustained quality participation across N communities” that AppViews can subscribe to or ignore. The consuming app stays in control of how much weight to give it, and a bad moderation decision in one community doesn’t cascade into others.
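A rough sketch of what "the consuming app stays in control" could look like: each AppView opts in to specific labelers and assigns them its own weight, and unrecognized labelers contribute nothing. The label value and labeler DIDs below are hypothetical:

```typescript
// Sketch: a consuming AppView deciding how much weight to give a
// positive cross-network label. Names and weights are hypothetical.
interface TrustLabel {
  val: string; // e.g. "sustained-quality-participation"
  src: string; // DID of the labeler that emitted it
}

// Each AppView opts in to specific labelers and sets its own weight;
// labelers it hasn't opted into contribute zero, so one community's
// bad moderation decision can't cascade into others.
const LABELER_WEIGHTS: Record<string, number> = {
  "did:plc:labeler-a": 1.0,
  "did:plc:labeler-b": 0.25,
};

function boostFromLabels(labels: TrustLabel[]): number {
  return labels
    .filter((l) => l.val === "sustained-quality-participation")
    .reduce((sum, l) => sum + (LABELER_WEIGHTS[l.src] ?? 0), 0);
}
```

Because the signal is positive-only and additive, ignoring a labeler entirely (weight absent) is always a safe default.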