Curation & Trust Networks — Meta Wiki
A meta wiki of use cases where curation matters AND a trust network is what makes the curation valuable. The principle: information abundance is no longer the bottleneck — trusted-source filtering is. Curation is the act of filtering; the trust network is the substrate that makes the filter believable.
This wiki indexes specific curation projects (mine and others'), and reasons about the structural pattern shared between them.
Companion catalogs:
- Trusted Claude Skills
- Trusted OpenClaw Skills
- Use Cases (deep) — long-form list
The structural pattern
A full-strength curation × trust-network use case has all of:
- Abundance — way more candidate items than any individual can vet.
- Risk asymmetry — picking a bad item has real cost (security, time, money, health, reputation).
- Heterogeneous quality — items range from excellent to actively malicious.
- No single arbiter — no one authority can or should rate everything.
- Transitive trust works — if I trust Alice, and Alice trusts Bob's review of X, I get a useful prior on X.
- Audit trails matter — being able to point at why an item is trusted (and revoke when wrong) is essential.
If a domain has at least three of these properties, the right shape of solution is curation plus a trust network, not a single recommendation engine.
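The transitive-trust property above ("if I trust Alice, and Alice trusts Bob's review of X, I get a useful prior on X") can be sketched as a small graph computation. This is a hypothetical model, not an existing implementation: the names, edge weights, and per-hop decay factor are all illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical trust graph: an edge weight in [0, 1] means
# "how much the source trusts the target's judgment".
trust = defaultdict(dict)
trust["me"]["alice"] = 0.9
trust["alice"]["bob"] = 0.8
trust["bob"]["skill-x"] = 0.7   # Bob has endorsed the item "skill-x"

def trust_prior(graph, source, target, decay=0.9, max_hops=3):
    """Best-path transitive trust: multiply edge weights along a path,
    attenuate per hop, and keep the strongest path found."""
    best = 0.0
    stack = [(source, 1.0, 0, {source})]
    while stack:
        node, score, hops, seen = stack.pop()
        if node == target:
            best = max(best, score)
            continue
        if hops == max_hops:
            continue
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                stack.append((nxt, score * weight * decay, hops + 1, seen | {nxt}))
    return best

print(round(trust_prior(trust, "me", "skill-x"), 3))  # → 0.367
```

The decay factor is what makes long chains weak priors rather than strong claims, which matches the intuition that third-hand endorsements are still useful but should never outrank a direct audit.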
Concrete cases (where this wiki applies)
| Case | Abundance | Risk | Trust-net value |
|---|---|---|---|
| Claude Code skills | 7,000+ skills, growing | Code execution on dev machine; data exfil | High — 13% known malicious |
| OpenClaw skills | 5,400+ skills | Same; agent often runs unsandboxed | Critical — 820+ confirmed malicious |
| MCP servers (Model Context Protocol) | Growing rapidly | Code execution + data access | High; supply-chain auditing emergent |
| AI coding agents (Claude Code, Cursor, Aider, Codex CLI, Antigravity) | Many, heavily overlapping | Repo / shell access | Medium — picking the wrong one wastes weeks |
| Open-source LLM model weights | Hundreds of variants | Backdoor weights, license issues | Medium — emerging |
| Browser extensions | Thousands | Same browser security context | High — OG of this pattern |
| VS Code / Cursor extensions | Thousands | Code exec in IDE | High |
| Investors / VCs to talk to | Many | Time, dilution, reputation, signal-to-noise | High — gossip-network is the existing trust net |
| Bodywork / chiro / PT practitioners | Many | Health & money | High — friend-recs work, web-recs don't |
| Supplements & longevity protocols | Vast | Health | High — most reviewers are conflicted |
| Spiritual / contemplative teachers | Many | Years of life, sometimes cult dynamics | High |
| Academic papers & sources | Endless | Wrong-belief downstream | Medium-high |
| Restaurants / venues / activities | Many | Time, money | Low-medium (low risk) |
| Books to read | Vast | Opportunity cost | Medium |
| News / commentary sources | Vast | Belief-distortion | High — see the lists-to-curate triage (private) |
My active curation projects
The full list of curation projects pulled from Slack #curate lives in @jacobcole/lists-to-curate/index (private — some items are spicy). Public-shareable cross-references:
- Claude / OpenClaw skills — this wiki
- LLM Wiki Ecosystem — tools and implementations of the Karpathy LLM-wiki pattern
- Health Log (private) — bodywork practitioners, what worked
- Investors Wiki (private) — investor profiles + interest space
- Climate Change Facts — climatechange.jacobcole.net
- Media bias / source-ownership map — pulled from #curate
- NVC / xyz-pilled directories — nvc.jacobcole.net etc.
Why a wiki + trust graph is the right substrate
- Pages are addressable — every reviewer, every reviewed item, every endorsement gets a stable URL.
- Backlinks make trust transitive and visible — clicking on a reviewer shows everything they've blessed.
- Revocation is a normal git operation — when a reviewer is found compromised, their endorsements roll back as a diff.
- Forking lets dissent live alongside consensus — you don't need a central judge.
- MCP / agent-readable — the same curated network feeds the agents that consume the curated items.
WikiHub is being built partly to be this substrate. See WikiHub.
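The substrate properties above can be made concrete with a minimal sketch: one endorsement = one addressable file, backlinks = an inverse index over those files, revocation = deleting the file (in the real wiki, a git commit, so the rollback is a diff). The directory layout and page names here are hypothetical, not WikiHub's actual scheme.

```python
from pathlib import Path
import tempfile

# Hypothetical layout: endorsements/<reviewer>/<item>.md gives every
# endorsement a stable, addressable path (the "stable URL" property).
root = Path(tempfile.mkdtemp())

def endorse(reviewer, item, note):
    page = root / "endorsements" / reviewer / f"{item}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    page.write_text(f"# {reviewer} endorses {item}\n\n{note}\n")
    return page

def backlinks(item):
    """Every reviewer who has blessed `item` — the visible, transitive
    side of the trust graph."""
    return sorted(p.parent.name for p in root.glob(f"endorsements/*/{item}.md"))

def revoke(reviewer, item):
    # In the wiki this is a `git rm` plus commit: revocation is a
    # normal, auditable diff rather than a special operation.
    (root / "endorsements" / reviewer / f"{item}.md").unlink()

endorse("alice", "skill-x", "Audited 2025-01; no network calls observed.")
endorse("bob", "skill-x", "Used in production for a month.")
print(backlinks("skill-x"))   # → ['alice', 'bob']
revoke("alice", "skill-x")
print(backlinks("skill-x"))   # → ['bob']
```

Because endorsements are plain files under version control, the audit trail ("why is this item trusted, and since when?") falls out of `git log` for free.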
Open questions
- What's the right "endorsement primitive" — a frontmatter signature? a separate signed-claims page? a per-page audit log?
- How do you bootstrap a trust network without it becoming an in-group? (cf. early PGP web-of-trust failures)
- Where does ML-based scoring (SkillHub-style) fit alongside human attestations? They aren't substitutes.
- What's the right denylist/revocation model? Public denylists are themselves an attack surface.
- How to make silence informative? "Reviewer X has not endorsed this in 6 months" should be a readable signal.
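One possible shape for the endorsement primitive in the first question above: a canonicalized claim plus a signature, storable in frontmatter or on a separate signed-claims page. This sketch uses an HMAC from the Python standard library as a stand-in for a real public-key signature (e.g. ed25519); the field names and key handling are illustrative assumptions, not a spec.

```python
import hashlib
import hmac
import json

def sign_endorsement(secret: bytes, reviewer: str, item: str, verdict: str):
    """Produce a signed claim. The claim is serialized with sorted keys
    so signing and verifying agree on a canonical byte form."""
    claim = {"reviewer": reviewer, "item": item, "verdict": verdict}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {**claim, "sig": sig}

def verify_endorsement(secret: bytes, signed: dict) -> bool:
    claim = {k: v for k, v in signed.items() if k != "sig"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

key = b"reviewer-private-key"           # stand-in for a real keypair
e = sign_endorsement(key, "alice", "skill-x", "trusted")
print(verify_endorsement(key, e))       # → True
e["verdict"] = "malicious"              # tampering breaks verification
print(verify_endorsement(key, e))       # → False
```

With real asymmetric keys, anyone holding the reviewer's public key could verify the claim without the wiki acting as arbiter, which is exactly the "no single arbiter" property the pattern calls for.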
Inspirations / prior art
- Web of Trust (PGP, GnuPG) — the originating pattern; mostly failed at scale, but the idea is right
- Wikipedia editor reputation — implicit trust via edit-history
- GitHub starring + maintainer reputation — the working trust net most devs already use
- Yelp/Reddit "trusted reviewers" — emergent partial solutions
- Goodreads friends' shelves — soft trust net, low-stakes
- Conservation / mycology field-guide tradition — reviewers who literally signed each species ID
Sources
- See Trusted Claude Skills sources
- See Trusted OpenClaw Skills sources
- Lists to Curate — the source backlog (private)