by Thursday Bram
I worked in the housing office when I was in college. My boss had worked for the university I attended for well over twenty years by the time my youthful idealism and I came along. During my junior year, the housing office experienced a shortage of available housing, resulting in lots of angry students, columns in the campus paper, and town halls with university officials.
My boss didn’t care one bit. He told me that, in four years, no one would remember this kerfuffle had happened, and he wasn’t about to waste any sleep on it. After all, even the freshmen feeling the pinch from this housing shortage would almost all leave campus in a few short years — so even if there was another housing shortage five years out, no one would notice that it wasn’t the first housing shortage we’d dealt with.
This take is extremely cynical, and yet, it’s one of the best explanations I’ve ever found for how institutional memory works.
What is Institutional Memory?
Institutional memory is usually discussed in terms of knowledge management: it’s all the experience, information, and details held among a group of people, typically with no written or recorded element.
- Knowing that you need to thump the vending machine to get a snack to fall.
- Remembering that you have to ask someone in the accounts payable office to approve an invoice before it will actually be paid.
- Going through a mental checklist before deploying a new site.
As people cycle in and out of an organization — or an open source project — the institutional knowledge they hold leaves with them. And while open source communities don’t have the built-in expiration dates of school-based communities, the loss of accumulated knowledge hurts us and our efforts to build a more sustainable open source ecosystem.
While institutional memory is surprisingly universal, you want to keep as little in it as possible. Institutional memory is dangerous at the best of times: a project can grind to a halt simply because a core contributor went away for the weekend. Leaving information in institutional memory also offers an easy way to exclude people who would otherwise be able to make valuable contributions. Assuming you want other people to contribute to your open source project, you’re just speeding up your own burnout when you let institutional memory accumulate.
Open source projects have higher-than-average levels of institutional memory. Few projects have the resources to invest in knowledge management, and even if they did, open source projects tend to have no path to contribute anything but code. As long as some effort is being made to share at least a part of that information with all contributors, an open source community can get by. But if enough institutional memory accumulates, a whole host of problems will appear.
Symptoms that your open source project has too much stored in institutional memory:
- ‘Doc rot’, also known as outdated documentation.
- Any documentation at all that tells you to call a specific person (e.g., “See Alice for an explanation of how to reboot the server.”)
- A complete lack of contact information for core contributors.
- One person can control key decisions because they are the only person with relevant information (or because they know ‘where the bodies are buried’.)
Identifying Institutional Memory Issues in Open Source
Within open source, there are some underlying issues that make knowledge management particularly gnarly. One is the continuing failure to acknowledge contributions that aren’t code: whether those contributions are as simple as tracking a bug or as complicated as project-managing a major release, they are crucial to the success of any open source project. Knowledge management will never be a code-heavy task, even for those optimists wanting to write their own knowledge management system from scratch. (Don’t do that! Use one of the many open source knowledge management tools already out there, like GitHub’s built-in wikis or Read the Docs.)
Similarly, the cults of personality around certain projects are a distinctly open source problem: too often we decide that a particular person is too instrumental to a project to be removed, and so some people can behave badly with impunity because the knowledge locked inside their heads is too valuable to lose. Even if that sort of bad behavior didn’t make an open source project unwelcoming, it would be dangerous to the viability of the project.
Management consultants consistently tell enterprise-level organizations that they cannot afford to have too much information locked inside one person’s head. As a pure business case, having one individual in a position to make demands of a company is a very bad thing. But depending on how all that knowledge wound up with one person, there can be even greater dangers. If, for instance, a company keeps an employee on past retirement or through burnout because of their knowledge, the resulting stagnation can ripple through the organization. And if someone is stuck in a job with no room for advancement, because ‘they already know where everything is’ or because ‘training a replacement would be so hard,’ both the organization and the employee lose out. The employee loses opportunities for advancement, pay increases, and everything else that makes up a successful career. The company loses the chance to fully exploit a human resource who has already proven capable of learning new skills and doing good work.
Open source consultants see similar problems. Sumana Harihareswara, the founder of a consultancy for maintaining open source projects, notes, “implicit knowledge, if it stays in experts’ heads and doesn’t get turned into explicit explanation somewhere, means that new contributors and users can’t get up to speed as quickly, so it slows improvements and uptake. And it leads to a long-term attitude of learned helplessness in a community. The less expert people learn that their new ideas are likely to run into previously unarticulated reasons why their ideas won’t work, and they get discouraged about their potential to change architectures and to co-lead.” This situation prevents new contributors from joining a project, adding to a cycle of burnout for existing contributors.
Creating paths for preserving knowledge gives us leverage to improve our communities, whether or not there is a financial incentive to change. Allowing a core contributor to take a vacation (or even step back!) is only the first opportunity that comes with good knowledge management practices.
Preventing Knowledge Loss
There’s a pretty obvious option for most open source projects to preserve institutional memory: write it down. But writing down everything and anything may not get you where you need to be. You need your documentation to be usable and useful.
Ron Ashkenas, a management consultant, wrote an article for the Harvard Business Review in 2013 listing three steps enterprise organizations need to take to slow or stop the loss of institutional knowledge:
- Build an explicit strategy for maintaining crucial knowledge.
- Identify the few key details every team member must know.
- Use technology to create a process to capture and organize institutional memory.
These are roughly the steps you’ll hear in any seminar on knowledge management, whether you’re in academia or in a big corporation. Open source requires a few twists to these rules, though.
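Ashkenas’s third step — using technology to capture and organize what people know — can start as small as a script in your build pipeline. The sketch below is one illustrative way to flag possible doc rot, assuming documentation lives under docs/ and code under src/; the paths, the 90-day grace period, and the function names are all assumptions for the sake of example, not something any particular tool prescribes.

```python
# Illustrative doc-rot check: flag documentation files that haven't changed
# since well before the most recent code change. The directory layout and
# grace period are assumptions for illustration.
import os


def newest_mtime(root):
    """Return the most recent modification time of any file under root."""
    newest = 0.0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            newest = max(newest, os.path.getmtime(os.path.join(dirpath, name)))
    return newest


def stale_docs(docs_dir="docs", code_dir="src", grace_days=90):
    """List doc files more than grace_days older than the newest code change."""
    cutoff = newest_mtime(code_dir) - grace_days * 86400
    stale = []
    for dirpath, _, filenames in os.walk(docs_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return sorted(stale)
```

A check like this can run in CI and fail the build (or open an issue) when it finds candidates for review; a file that trips it isn’t necessarily wrong, just worth a look.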
Different types of open source projects have different needs when it comes to knowledge management, and nearly all of them face an utter lack of resources, at least compared to the sorts of organizations that can afford to hire management consultants. But most projects can start from the same basis:
- Commit to a culture of documentation. Reward documentation contributions on equal footing with code contributions. Note gaps in documentation as bugs.
- Choose a single source of truth (or at least decide what is upstream). Don’t ask people to look in more than one place for your documentation, even if that means bringing some weird stuff into your wiki or repository.
- Create an onboarding and offboarding strategy. In fact, documentation expert (and Recompiler contributor!) Heidi Waterhouse suggests that developer onboarding should be the first piece of documentation written for any technical project.
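If you treat gaps in documentation as bugs, you can also make the machine file them for you. Below is a minimal, hypothetical check that reports which basic onboarding documents are missing; the file names are assumptions for illustration, and every project will have its own list.

```python
# Hypothetical CI check that treats missing onboarding documents as bugs.
# The required file names are assumptions for illustration, not a standard.
import os

REQUIRED_DOCS = ["README.md", "CONTRIBUTING.md", "docs/onboarding.md"]


def missing_onboarding_docs(root="."):
    """Return the required onboarding documents that don't exist under root."""
    return [doc for doc in REQUIRED_DOCS
            if not os.path.isfile(os.path.join(root, doc))]
```

Wired into CI, a nonempty result can fail the build or open an issue, which keeps “write the onboarding docs” from silently slipping off everyone’s to-do list.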
Getting a fresh look at the situation can also be useful. Harihareswara, through Changeset Consulting, focuses on making open source projects sustainable. She often starts with bringing documentation up to date: “I usually need to read the existing docs and skim the issue tracker and mailing list traffic, and try out the project as a user, a sysadmin, and a developer. That helps me see what the big gaps and inaccuracies are, and especially helps me see where the old documentation has gotten obsolete. This also provides an opportunity for me to iteratively suggest improvements to installation and introductory documentation and to ask useful questions of the existing community, which builds my credibility as a contributor. I also take this opportunity to ask the existing user and developer and sysadmin community: what do they often find confusing? The answers become part of my TODO list.”
A community needs to keep fighting the entropy of institutional memory. What works for a small project won’t support thousands of contributors: after a couple of months, you can’t ask a new person to read all the mailing list archives before participating. Build a process for recording the institutional memory in your open source projects, before you find yourself doomed to repeat the history you don’t know.
Thursday Bram writes about technology for publications ranging from Bitch Magazine to Entrepreneur. You can find Thursday at her website, ThursdayBram.com.
Photo credit: NASA, taken at Langley Research Center, 1957