by Federico Mena Quintero

Legacy systems have a bad reputation among computer people. Pretty much no one wants to work on COBOL-based mainframe software. All the typing that goes on when you buy a plane ticket in person is because of SABRE, and one wonders whether the fee that travel agents charge is just the price of battling that inscrutable software. We may know horror stories of a sad company that simply cannot move away from Windows 95, or Windows 3.1, because they have an old and unmaintained chunk of critical infrastructure that runs there, and that for $reasons has not been replaced with something newer.

It is hard to imagine having the ability to control the outcome of those systems. They are Too Big To Fail, and most definitely outside of the experience of “everyday people”.

But what about legacy systems that are closer to us? What, specifically, about free software with a long history?

About 20 years ago I read a wonderful little book which was even then called “20 Years of Unix”, and it wasn’t exactly fresh off the press, either. At that time I was involved in the creation of the GNOME Project, which almost 20 years later can boast of being more or less ubiquitous in the free software scene, as a core part of Linux desktop systems. You can see what GNOME looks like in —there is a general screenshot of the desktop environment on that page.

The GIMP (GNU Image Manipulation Program) is from 1995, and even then the GUI toolkit which it used (Motif) was considered legacy software, and it was proprietary. The GIMP’s authors wrote another GUI toolkit from scratch, a free one (GTK) and it begat GNOME, which in turn is from 1997. In 1998 or 1999 one of the GNOME contributors, Raph Levien, started writing libart, a library for anti-aliased vector drawing. Later, through Lauris Kaplinski, the library begat Sodipodi (a vector illustration program), which again through many other people begat Inkscape, and this completes the GIMP’s dynamic duo.

There is a long history of software which is loved enough, or useful enough, that people maintain it and slowly upgrade it to newer infrastructure.


Now, let me draw your attention to something more tangible: cities. There are of course cities with thousands of years of history: Rome, Beijing, Paris, Cairo, Mexico City, Istanbul. And with hundreds of years? Veracruz, Seville, Tokyo, Berlin. I smiled when I read an article that talked about how “Vancouver (Canada) is only 300 years old…”, indicating that it’s special precisely because it is “new”, whereas being old is the norm. I don’t know if there is actual snobbery around “my city is older than yours”, but sometimes it feels that way.

And yet, nobody would call those places decrepit and unwanted, or a bad legacy. People live there! They maintain their own little part of the city as best they can, even if it is just their home. Their governments do a uniformly good job, or a uniformly mediocre one, of maintaining the infrastructure. Some cities have markedly better infrastructure of some kinds than others. It always filled me with a little perverse pride that tap water in Mexico City, where I used to live, is perfectly clear and potable, while the water in Paris is cloudy and of dubious drinkability… at least to my third-world sensibilities. (Of course, Parisians would retort that their subway system and their suburban rail is much, much better than Mexico’s, and they would be completely right.)

Going back to software—in GNU/Linux-land we used to sneer at Windows for the primitive way in which it installs and manages packages, versus our own RPM and DEB systems. But these days we play catch-up with Android and iOS, which have perfectly working sandboxed bundles (and a payment infrastructure for creators, even if it is rigged in Apple’s or Google’s favor!), compared to which our own RPM/DEB seem technologically ancient and monetarily unfair.

There has been a lot of analysis on the effects of “urban renewal” projects of the 20th century. Jane Jacobs, in The Death and Life of Great American Cities, writes in detail about historic districts which were demolished to build apartment complexes, and how those projects were uniformly disastrous. She also talks about places where the residents refused to be displaced, doing the upkeep by themselves, now better off and living in a highly desirable part of town. She also talks about cases where money comes into a city too fast, and vibrant and unique parts are gentrified out of existence into uniformly dull, globalized tourist traps.

On a smaller scale, there is a fantastic book, How Buildings Learn, by Stewart Brand. It describes what happens to buildings after they are built, after people get settled in and start adapting the building to their needs (we need to rewire everything! We need to fit in network cables!), after buildings change tenants or purpose (this factory is now apartment lofts! This abandoned train station is now a shopping mall!), after they get remodeled or expanded or shrunk (this house now has a rental unit in what was the garage! This restaurant is now a cafe plus a bookshop!).

One of the most interesting concepts from that book is how buildings have different “shearing layers”, and each layer changes at a different speed: Site, Structure, Skin, Services, Space Plan, and Stuff.

The Site is the hardest to change; it is pretty much defined by the land itself and the legally defined lots.

The Structure remains in place from the time the building is constructed—the foundations, and the main load-bearing elements like columns and beams. It is possible to change the Structure, but usually only at great expense, and it takes time and a lot of skill. Good structures, when maintained, have a lifetime of centuries. Bad structures, only a few decades.

The Skin is the exterior surfaces. Paint, shingles, fake or real bricks. Those have a lifetime of a couple of decades and need regular maintenance or replacement.

Services are the electrical wiring, communications wiring, ducts, plumbing, air conditioning, and moving parts like elevators or escalators. They all need regular maintenance or replacement.

The Space Plan is the interior layout—walls and windows, floors, ceilings, doors. People regularly change them. A good structure allows them to be changed easily.

Finally, Stuff is the things we interact with daily: furniture, things hung on the walls, and all the paraphernalia in our living, working, and leisure spaces. It’s easy to move a bookcase or a table to a different place: it is very feasible to experiment with various configurations in a short time until you end up with one you like. That kind of experimentation is hard to do with Skin if you are just a dweller (Do you have the skills to lay tile?), and much harder to do with Structure (Do you have the money to tear down a wall and build a new one? Do you have permission to do it?).

Software’s shearing layers

I like to think that software is pretty much the same. It may not be possible to draw a perfect parallel to Site/Structure/Skin/Services/Space Plan/Stuff, but let’s try.

A computer program assumes a certain environment: it can be the operating system in which it is designed to run, a set of APIs from which it is constructed, and even the programming language in which it is written. We have all sorts of tricks and libraries to aid portability, but changing operating systems is always a major undertaking. Changing programming languages is practically as hard as changing the Site of a building: a full or partial rewrite is pretty much akin to a demolition and rebuild.

And the Structure? We are quite familiar with the structure of computer programs. It happens through the interplay of design and construction, and it depends on the purpose of the program and its main data flows. A game needs a robust structure for concurrency to manage graphics and sounds and behaviors all at the same time. A drawing program needs to store large amounts of graphical data and needs to draw it fast. A web server needs to read data quickly off a data store, and to push it quickly to the network. Changing the large-scale Structure of a program is hard work, and it may involve a refactoring of a lot of the small-scale parts before changes to the main structure are even possible.

And the Skin? Changing minor visual details of the user interface may be as simple as changing some constants for colors or a string of text, or it may require infrastructure work—to do animations where none were possible before, to load vector icons where only bitmaps were available.

Maybe changing Stuff is when we don’t change the program’s flow or structure at all, and we just shuffle widgets around in the user interface.

The parallel is not perfect, but I think the core lesson we can extract from the trades of building is that things can and will change at different speeds, and life will be much easier for dwellers/users/developers if the design of the system takes this into account. Fortunately, with software, it is easier to refactor things into modify-ability than with buildings… and there is always “undo”.


I want to talk a bit about “legacy software” in the GNOME project, which is where I mainly work.

When GNOME started, back in 1997, one of the very first things we started writing was Gnome-panel. This was the horizontal bar at the bottom of the screen, similar to the one in Windows 95, that contained the program launcher, the list of open windows, and the system’s clock. At least, that was the intention. Our panel, like the KDE Project’s, was pluggable and supported “applets”. The result, now that we actually know something about usability and how free software evolves, was a rather unusable mess: dozens of possible applets, about six different clock widgets, and no default configuration. We gave the user a box of parts, and they had to put them together into something useful before using the desktop.

Back then we did not have a default window manager—the software that draws window frames and title bars, so that you can move and resize them. We used whatever “traditional” window managers there were for X11, and some ended up being preferred due to being easier to configure for GNOME’s particular needs. These needs were definitely different from X11’s default blank-desktop-with-user-windows. We had a panel (or more than one) that demanded being left uncovered by other windows. We had dialog boxes that wanted to be centered over their parent windows. We wanted to pass mouse clicks on the desktop background to the file manager, so the user could manipulate file icons on the desktop… and even this wasn’t supported by default, since it was traditionally Not Done in X11 systems.

Like many things, real integration instead of haphazard development started to happen once money was put into GNOME. Red Hat formed a team to work exclusively on GNOME. Carsten “Rasterman” Haitzler worked there on the Enlightenment window manager, and took it from being quirky eye candy to a pretty reasonably integrated solution for GNOME. In conjunction with KDE we developed the “Window Manager Specification”, which made it easier to evaluate or implement window managers that actually worked for modern graphical desktops—with icons for files, windows that actually got centered, multiple workspaces, and all that.

Years later, Enlightenment started showing its age and limitations. It was getting dangerously into “legacy software” territory in people’s minds. The code was fragile and it was hard to change. Auxiliary controls for windows—things like the minimize/maximize/close buttons—did not match the visual style of the rest of our widgets. Enlightenment supported being a desktop shell on its own, and this sometimes clashed with GNOME’s wishes to be the desktop shell by itself.

We replaced Enlightenment with Sawfish, which was a large and painful change. Sawfish wasn’t a perfect window manager, but it was easier to change, and easier to develop, since most of it was written in a high-level language (Lisp) instead of C. It used GTK+, GNOME’s widget toolkit, for things like the menu that appears when you right-click a window’s title bar. It made a good effort at accommodating GNOME’s quirks, such as its requirements for areas of the desktop being left untouched.

Sawfish was still too configurable. There were combinations of options that did not lead to reasonable behavior, even when taking people’s preferences into account. Quirks in how it integrated with GTK+ made it slow in certain situations.

A few years later, we replaced the window manager again. Metacity was a new window manager, written from scratch in C (unfortunately), but it was an “opinionated” window manager with good-by-default behavior and as little configurability as possible. It worked really well! It was small, fast, and although not as easy to develop or debug as Sawfish, it matched GNOME’s goals better.

…and of course, then we replaced it for GNOME 3.

In fact, we changed both the window manager and the panel together: Gnome-shell, which is what runs the main desktop up to this day, is a combination of a window manager and, well, a desktop shell. It provides the “Site” for the rest of GNOME to be built upon, at least in terms of the screen’s real estate.

Gnome-shell was not a full rewrite of what we had before, fortunately. Metacity had been evolving slowly, along with the capabilities of X11. First, it gained an experimental branch to render window contents using OpenGL textures. Then, the X11-specific drawing engine was replaced with a generic, animatable, OpenGL-driven one. This is called Mutter—short for “Metacity plus Clutter” (the OpenGL library for animations and pseudo-3D layers)—and it is not a complete window manager program, but a library that implements the core of a window manager.

Gnome-shell is a bunch of JavaScript code that wraps Mutter’s functionality for window management, and adds all the window decorations, the panel (which is now at the top of the screen, and has a pretty much fixed configuration), and special modes like the Overview of all open windows (similar to Apple’s Exposé).

Very little of Gnome-panel’s code was reused. It had tons of code to manage screen real estate for applets within the panel’s bar, tons of code for drawing the panel in different styles and making it have different behaviors… and all of that just got discarded (but not thrown away—read on). Gnome-shell draws its “panel” in a well-defined fashion, without you having to configure it first, and with a well-defined set of widgets—the Activities button, a clock, and a couple of drop-down menus with options for accessibility, keyboard layout, and network/volume/logout. And that’s it.

Gnome-shell supports extensions so that people can write code to customize the default shell. The point is still that the defaults work well, and there is the ability to personalize a few things, but any extra functionality is left up to third parties. We’ve moved from the “we give you a box of parts” mentality to the “we give you something in good working order, and an infrastructure for further personalization” model.

At this point you may have an objection: I’m not telling you about legacy software, I’m just telling you about how GNOME evolved! But if you zoom into the timeline of development, as if you were looking closer at a fractal, you would see that indeed, at some points, software was getting legacy-ish: unwanted and hard to change. We did some demolitions and rebuilds, but none of the results were 100% loved in the end.

It was not until we started refactoring Metacity to support OpenGL, and later to turn it into a library, and then wrapping that core-that-was-known-to-be-good into a higher-level language, that things started rolling along more smoothly.

The pattern here was: refactor to modernize; then refactor to turn a program into a library; then move up one level of abstraction and use a better language to implement the remaining part, which is not as hard to get right as the core.
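A toy sketch of that pattern, in Python with entirely invented names (this is not GNOME or Mutter code): the bookkeeping that is known to be good lives in a small “core” library, and the policy and presentation—the parts that change often—live in a thin, higher-level wrapper on top of it.

```python
# Hypothetical illustration of the "core library + high-level shell" split.

class WindowManagerCore:
    """The extracted core: low-level bookkeeping of windows and focus.
    This is the part that is hard to get right and changes slowly."""

    def __init__(self):
        self.windows = []
        self.focused = None

    def map_window(self, title):
        self.windows.append(title)
        self.focused = title  # newly mapped windows take focus

    def unmap_window(self, title):
        self.windows.remove(title)
        self.focused = self.windows[-1] if self.windows else None


class Shell:
    """The thin, high-level layer: policy and presentation live here,
    where they are cheap to change or even rewrite."""

    def __init__(self, core):
        self.core = core

    def open(self, title):
        self.core.map_window(title)
        return f"[{title}] opened; focus is now {self.core.focused}"

    def close(self, title):
        self.core.unmap_window(title)
        return f"[{title}] closed; focus is now {self.core.focused}"


shell = Shell(WindowManagerCore())
print(shell.open("Terminal"))
print(shell.open("Editor"))
print(shell.close("Editor"))
```

The point of the split is that the Shell can be rewritten in a different style—or, as with Gnome-shell, in a different language—without disturbing the core underneath it.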

What about Gnome-panel, then? GNOME stopped developing it right before the switch to GNOME 3.0. However, it was good GNOME 2 code, used by many people. We handed it over to the MATE Desktop project, which wanted to keep the GNOME 2 infrastructure running; they later took it in their own direction. It is fun to see how that code has evolved, and it’s definitely heart-warming that it didn’t die: unlike proprietary software, which is lost when its parent company evaporates, our code kept evolving, under different maintainership and a different name.

I like to think of buildings that are not torn down when technology changes, and they just get adapted. People didn’t tear down houses and big public buildings when networked computers came in; they just made a few holes, put in some cable ducts, some cabling… some may have ended up with good old zip-ties and duct tape to hold a cable just so, but the whole building remains. People didn’t demolish things to replace old toilets with newer water-saving models; they just changed them (a messy and scary process on a small scale, but totally doable in an afternoon by a skilled plumber). Software can learn from that!

So, how can we evolve things in general?

Back in 1961, before complexity theory and nonlinear analysis were even well-developed things, Jane Jacobs published her monumental “The Death and Life of Great American Cities”. It’s a great tour through the systemic workings of cities and their people, of architecture and urbanism and politics and individuals. The final chapter, titled “The kind of problem a city is”, has an extremely lucid explanation of the kind of mental tools we need to analyze something as complex as a city.

Jane Jacobs writes:

Among the many revolutionary changes of this century, perhaps those that go deepest are the changes in the mental methods we can use for probing the world. I do not mean new mechanical brains, but methods of analysis and discovery that have gotten into human brains: new strategies for thinking. […] To understand what these changes in strategies of thought have to do with cities, it is necessary to understand a little about the history of scientific thought.

She goes on to summarize the history of scientific thought, in three stages: first, the ability to deal with problems of simplicity (linear equations, problems of one or two variables, classical physics); second, the ability to deal with problems of disorganized complexity (probability theory, statistics, statistical mechanics); and third, the ability to deal with problems of organized complexity.

These last ones are the kinds of problems that appear in systems. A lot of these methods came from life sciences, where living organisms are essentially bags of interconnected subsystems that maintain metabolism in very subtle ways. One cannot just “solve for a single variable” and gain insight into the system. The same happens with cities, and on a smaller scale with single buildings over time; and the same happens with how people build, interact, and think about software systems.

Jane Jacobs’ book ends with:

Being human is itself difficult, and therefore all kinds of settlements […] have problems. Big cities have difficulties in abundance, because they have people in abundance. But vital cities are not helpless to combat even the most difficult of problems. They are not passive victims of chains of circumstances, any more than they are the malignant opposite of nature.

Vital cities have marvelous innate abilities for understanding, communicating, contriving and inventing what is required to combat their difficulties. […]

Dull, inert cities, it is true, do contain the seeds of their own destruction and little else. But lively, diverse, intense cities contain the seeds of their own regeneration, with energy enough to carry over for problems and needs outside themselves.

A few years after her book, Christopher Alexander, a mathematician and architect, was trying to mathematize design. Specifically, he and his team were looking into the question of why some human-built places are very pleasant to be in—comfortable, livable, and nice—while others are not. The pleasant places were present in all of the traditional architectures of the world—European, African, Asian, American—which pointed to the idea of being able to extract common factors from all of them.

Eventually they came up with a series of architectural and urbanistic patterns, distilled into the book A Pattern Language, and much later into an actual process for building these patterns, which is described in the collection of books called The Nature of Order.

Jane Jacobs figured out the narrative of how cities work, out of empirical evidence; Christopher Alexander et al figured out the geometric properties of such cities and the processes that are required to build them. But what about software?

A Pattern Language is famous for inspiring the programming book Design Patterns. Not so famous is the idea, explored by Richard P. Gabriel in Patterns of Software, that the generative process from The Nature of Order is more or less akin to the ideas of Martin Fowler’s Refactoring. Rather than considering refactoring as just miscellaneous code cleanups, these ideas validate the process of refactoring as a gradual reorganization of the small-scale details of a program, so as to make it easy, even natural, to improve and change the large-scale structure in the end. I have written a little summary of all of this in “Software that has the Quality Without a Name”.
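A tiny, made-up example of that idea (not from any of those books): a small-scale refactoring—simply extracting a well-named function—is what later makes a change to the larger structure easy and natural.

```python
# Before: parsing and formatting are tangled together, so changing the
# report's overall shape means editing every line of the loop.
def report_v1(lines):
    out = []
    for line in lines:
        name, value = line.split("=")
        out.append(name.strip() + ": " + value.strip())
    return "\n".join(out)


# Small-scale refactor: give the parsing step a name of its own...
def parse_entry(line):
    name, value = line.split("=")
    return name.strip(), value.strip()


# ...after which the large-scale structure is easy to change — here,
# sorting the entries and switching to a different layout.
def report_v2(lines):
    entries = sorted(parse_entry(line) for line in lines)
    return "\n".join(f"{name} = {value}" for name, value in entries)


print(report_v1(["b = 2", "a= 1"]))
print(report_v2(["b = 2", "a= 1"]))
```

Nothing about `report_v1` was wrong; it was just arranged so that the structural change would have been painful. The refactoring is the preparation, not the change itself.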


I want to make the point that “legacy” software carries a stigma. But what if we as software developers leave, well, a good legacy instead of a bad one? What if we write software that can be maintained, split apart, and re-glued into other systems, even by other people, so that our work is not wasted if we decide or are forced to stop writing that software? Free software—specifically, the explicit and irrevocable permission we can grant to other people to modify and redistribute our software—is practically a necessity if this is to happen.

It’s a bit sad to see old, proprietary, and well-loved software that has to be corralled in emulators or virtual machines—effectively, a life-support system to keep the software stuck in its preferred environment. Free software lets us avoid building a corral altogether because we can evolve it and its environment together. I think that’s a better legacy.

Further reading

Architecture and Urbanism

Jane Jacobs, “The Death and Life of Great American Cities”, Modern Library.

I am not aware of a downloadable English version of this book. However, as part of a campaign of urbanism activists in Mexico (“Leer la Ciudad” — reading the city), there is a PDF of the book in Spanish at

A tiny summary of the book in Spanish:

Christopher Alexander et al, A Pattern Language, Oxford University Press.

Christopher Alexander, The Nature of Order, books 1 to 4, The Center for Environmental Structure.

Stewart Brand, How Buildings Learn, Penguin Books. There is also a six-part, three-hour TV series of it, which is absolutely fantastic. Start at for the first part.

Nikos Salingaros, “Twelve Lectures on Architecture”, — A summary of Alexander’s monumental theory, and a further mathematization of it. Hand-drawn diagrams!


Katrina Owen’s talks on refactoring and test-driven development, at — these are deeply amazing. Katrina starts with an undocumented, or incorrectly documented, code base with no tests. There is no specification that says exactly what the code is supposed to do. And yet, through bog-standard refactoring and a special trick to write the first tests, she is able to write tests for the code, make it readable, make it more robust, and generally turn it from a little swamp into a beautiful garden. The “special trick” is just assuming that if the code was working (even mysteriously), you can capture its current behavior in a test assertion, and that gives you the starting point for refactoring. In the “Therapeutic Refactoring” talk, wait for the moment where she says, “… and now we have a license to go to town on this code” when the tests finally work—and then watch a masterful lesson on refactoring.
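Katrina’s talks use Ruby, but the “special trick”—a characterization test—translates to any language. A minimal sketch in Python, with an invented stand-in for the legacy function:

```python
# A "characterization test": we don't know what mystery() is *supposed*
# to do, so we first capture what it *currently* does and pin that down.

def mystery(s):
    # Stand-in for the inherited, undocumented legacy function.
    return "".join(c for c in s if c.isalnum()).lower()

# Step 1: run it once with a representative input and record the output.
observed = mystery("Hello, World! 42")
print(repr(observed))

# Step 2: turn the observation into an assertion. The test documents the
# current behavior, right or wrong — and now refactoring is safe, because
# any accidental change in behavior will make the assertion fail.
def test_mystery_characterization():
    assert mystery("Hello, World! 42") == "helloworld42"

test_mystery_characterization()
```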

Richard P. Gabriel, “Patterns of Software: Tales from the Software Community”. Available from Richard’s page at

Federico Mena Quintero, “Software that has the Quality Without A Name”,

Federico is one of the founders of the GNOME Project, a graphical desktop for free software systems. He procrastinates with bicycling,
woodworking, cooking, OpenStreetMap, and vegetable gardening. His blog
is at

We have a print edition too! Find this issue in the shop.