
Four years of fathom

I started working at ConsenSys four years ago this week. Fathom actually started a little bit earlier than that, but that's when it became something real and not just a collection of ideas in people's heads.

After I left ConsenSys I experimented with a bunch of different things, one of which, hyperlink.academy, turned into the work I'm doing today. One of the most useful things I did during that transition was write up a retrospective of the first prototype of the website, while we were getting ready to embark on the next version.

I've been wanting to do something similar for Fathom. For multiple years I was convinced that a decentralized assessment protocol was the key to transforming the global education system. Now, I'm working on something different, though parts of the vision still remain, and I need to reconcile what changed. What did I learn?

I've been putting this off for a while because it's kinda scary. There are a lot of assumptions and personal failings to unpack, but it's also just so big. I want to do the project and the time I spent on it justice, but that's going to take time. I can't just sit down and unpack half a decade.

So instead, I think I'll take a couple issues of this newsletter (perhaps non-consecutively) to tackle it piece by piece.


The simplest place to start is from the perspective of my past self at the start of the project. If I were trying to make a decentralized assessment system, what would I want to know?

I'm going to ignore any operational or organizational decisions here, so we're not even going to touch the blockchain question, at least not head on. We'll get to that some day. For now, let's just assume we want to design a general-purpose protocol and have some way of implementing it.

Also, full warning: some of this might not make sense without a deeper understanding of the Fathom protocol. If you have any questions, please just ask!

Okay so, learnings!

Keep the ontology as simple as possible

The ontology seems like a pretty critical part of a system that cares about people's knowledge and skills, so it's tempting to spend a lot of time trying to get it "right". But there are many, many other problems to figure out, so it's extremely useful to just stick to a simple ontology until you really start running into problems.

Keeping it simple is pretty generally useful advice, but it really pays off here, because many other things will depend on this.
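
To make this concrete, here's roughly the distinction I mean, sketched in TypeScript. These names are mine for illustration, not Fathom's actual data model:

    // A deliberately flat ontology: a "subject" is just a named tag owned
    // by a community. No hierarchy, no prerequisite graph, no weights.
    type SubjectId = string;

    interface Subject {
      id: SubjectId;
      name: string;       // e.g. "smart-contract-auditing"
      community: string;  // the group that defines and assesses it
    }

    // The tempting "rich" version. Every downstream mechanism (assessment,
    // trust propagation, discovery) now has to handle the whole graph,
    // which is exactly the complexity worth deferring.
    interface RichSubject extends Subject {
      parents: SubjectId[];        // "is a kind of" edges
      prerequisites: SubjectId[];  // "requires" edges
      weight: number;              // relative importance
    }

Everything the flat version can't express can be layered on later; migrating back in the other direction is much harder.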

Just assume an initial set

A huge amount of the complexity came from trying to create credentials out of existing ones. The idea was that this would make it easier to bootstrap a new community, and give similar communities a mechanism to coordinate over time.

But, in the immediate term, you're always dealing with just an arbitrarily defined initial set anyway, and trying to abstract that away just ends in pain.
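
In code, the difference looks something like this (hypothetical shapes, not Fathom's actual API): skip the recursive "credentials from credentials" base case, declare a genesis set, and only define the forward step.

    interface Credential {
      subject: string;
      holders: string[];  // identities that have earned it
    }

    // An arbitrarily chosen initial set; everything else grows from it.
    const genesis: Credential = {
      subject: "fathom-assessor",
      holders: ["alice", "bob", "carol"],
    };

    // Forward step: existing holders assess a candidate into the set.
    function admit(c: Credential, assessors: string[], candidate: string): Credential {
      if (!assessors.every((a) => c.holders.includes(a))) {
        throw new Error("assessors must already hold the credential");
      }
      return { ...c, holders: [...c.holders, candidate] };
    }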

Progressively tighten security assumptions

For something that wants to be trustworthy, security matters a lot. The system should maintain the properties it says it has. But, it doesn't make any sense to start with the strictest set of security assumptions. This created a ton of headaches for us, and more importantly biased us strongly against actually testing things out in the real world, as we had theoretical security models to satisfy first.

Instead we should've focused on getting the minimum security properties needed to do something interesting, and slowly tightened things up as we got more certain of our choices.

Cater to different security and trust levels

People have a wide range of needs from credentials. We focused on the "global" level, i.e. the kinds of credentials that are easy to communicate and verify for anyone in the world: high-trust, expensive-to-create credentials.

The thought process was that these were the ones that influence all other credentials downstream. The "design" of a high-school diploma is influenced by college degrees.

While I still believe this holds true to a certain extent, I think it's useful for a credentialing system to have an explicit mechanism for modulating trust, so that it can handle "casual" credentials as well as heavy ones. This means people can get experience with the system in lower-stakes scenarios, and those credentials can then provide evidence for more resource-intensive processes.
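
As a sketch of what an "explicit mechanism for modulating trust" could mean (the levels and numbers here are made up): the cost of an assessment scales with the trust level the credential claims, and lighter credentials can feed into heavier ones as evidence.

    // Each credential declares the trust level it was assessed at.
    type TrustLevel = "casual" | "community" | "global";

    // The cost of creating a credential scales with its claimed level.
    const assessorsRequired: Record<TrustLevel, number> = {
      casual: 1,     // one peer vouches; cheap, low stakes
      community: 5,  // a small committee of community members
      global: 25,    // slow and expensive, but verifiable by anyone
    };

    interface Credential {
      subject: string;
      holder: string;
      level: TrustLevel;
      assessors: string[];  // length must be >= assessorsRequired[level]
    }

    // Casual credentials count as evidence when applying for heavier ones.
    function evidenceWeight(c: Credential): number {
      return assessorsRequired[c.level];
    }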

Credentials are tied to Proof of Work

This was one of the biggest aha moments that we never got to fully realize. Initially we had our assessments structured solely around proving "facts" about people. However, we ran into a problem: we had a pseudonymous identity requirement, which meant people could "spam"-earn credentials in a community.

While that identity requirement itself may have been the root cause, this prompted us to think about ways the proof could be resilient to these kinds of attacks. Ultimately, what we came up with was making the process of assessment socially useful to the community.

This way, in order to earn a credential, you have to create something the community values. This not only tackles the spam problem without creating arbitrarily expensive roadblocks, but also maps nicely to how reputation in a community actually works.
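
A minimal sketch of that idea (again, illustrative names only): the unit of assessment is an artifact the candidate produced for the community, not a bare claim about the candidate.

    // Earning a credential means producing something the community values.
    interface Artifact {
      author: string;
      description: string;  // e.g. "wrote the community onboarding guide"
      url: string;
    }

    interface Assessment {
      subject: string;
      candidate: string;
      artifact: Artifact;      // spamming now costs real, useful work
      endorsements: string[];  // members who found the artifact valuable
    }

    // The credential is granted once enough of the community signs off.
    function earned(a: Assessment, quorum: number): boolean {
      return a.endorsements.length >= quorum;
    }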

Many, many more

Writing these down now, I'm struck by both how much there is to unpack and how little of it is already unpacked in my head. I feel a little bit like I should have easier answers by now for what I learned in four years of work, but it looks like it's going to take a lot more thinking yet. So many of these things seem to boil down to: figure out something small that works, try it out, then build on top of that. But alas, it takes time to really let that sink in.

I'm planning on returning to this topic over time, and eventually compiling it all into a single Fathom retrospective document.

Thanks for listening to me ramble about it!
