Everything Easy is Hard Again
I wonder if I have twenty years of experience making websites, or if it is really five years of experience, repeated four times.
The great Cheops pyramid took around a decade and a half to build, with a peak workforce of 40k workers, maybe 1 billion man hours. The Parthenon, around 36 million. The Colosseum, maybe 100 million, the same as Notre Dame, that gorgeous cathedral.
However, Facebook: how many man hours do you think that took? ...The rather crazy answer is that it probably took close to 200 million man hours.
...It’s an achievement on par with the pyramids of Giza. Actually more, given that it must keep running just to stand still! Creating software is some of the most work we’ve put into building almost anything, far beyond the wonders in Rome or Luxor. The International Space Station probably still leads, at around 6 billion man hours, but man, it’s getting close.
...We knew what the pyramid would be when we started building it. The act of creation was not also, simultaneously, a journey of discovery. That is not the case in software. Is the time spent on Facebook's current application the right measure, or is all the time we spent learning what to build, and building the underlying infrastructure, part of it too? Maybe one and not the other? Technical debt isn’t debt; it’s the commit history of the decisions made in the past.
This book addresses the topic of software design: how to decompose complex software systems into modules (such as classes and methods) that can be implemented relatively independently. The book first introduces the fundamental problem in software design, which is managing complexity. It then discusses philosophical issues about how to approach the software design process and it presents a collection of design principles to apply during software design. The book also introduces a set of red flags that identify design problems. You can apply the ideas in this book to minimize the complexity of large software systems, so that you can write software more quickly and cheaply.
The degree of complexity of a system is tied to who we are and what we’re doing over time. When we buy back some complexity by using better tools or picking a simpler environment, we’re going to spend it again eventually.
...We can’t beat complexity, but we can be beat by it.
No winners, no losers, and no end — the Game of Life, also known simply as Life, is no ordinary computer game. Created by British mathematician John Horton Conway in 1970, Life debuted in Scientific American, where it was hailed as the key to a new area of mathematical research, the field of cellular automata. Less of a game than a demonstration of logical possibilities, Life is based on simple rules and produces patterns of light and dark on computer screens that reflect the unpredictability, complexity, and beauty of the universe.
This fascinating popular science journey explores Life's relationship to concepts in information theory, explaining the application of natural law to random systems and demonstrating the necessity of limits. Other topics include the paradox of complexity, Maxwell's demon, Big Bang theory, and much more.
How do we even start to talk about complexity? You might have heard the term coupling in software. Ideally, systems prefer loose coupling over tight coupling; that is, we try to reduce the interdependencies between any two parts of a system so that if we need to swap out a component, it is easier to do. This matters because software must change over time, so reducing the complexity between parts also reduces the burden of change. But have you heard of connascence? This term helps us measure how intertwined the parts of your system might be.
Two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. — Meilir Page-Jones
...Connascence ranges from weak to strong and from static to dynamic. The weaker and more static the connascence, the easier the system is to change; the stronger and more dynamic the connascence, the harder it is to change.
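To make this concrete, here is a minimal TypeScript sketch (the names are hypothetical) contrasting a stronger form, connascence of position, with a weaker one, connascence of name:

```ts
// Connascence of position (stronger): every caller must know the argument
// order, so reordering the parameters silently breaks call sites.
function notifyPositional(subject: string, body: string, retries: number) {
  console.log(subject, body, retries);
}
notifyPositional("Build failed", "See CI logs", 3);

// Connascence of name (weaker): callers depend only on property names,
// which the compiler checks and refactoring tools can rename safely.
interface Notice { subject: string; body: string; retries: number; }
function notify({ subject, body, retries }: Notice) {
  console.log(subject, body, retries);
}
notify({ subject: "Build failed", body: "See CI logs", retries: 3 });
```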
Aaron: You know that story about how NASA spent millions of dollars developing this pen that writes in Zero G? And how Russia solved the problem?
Abe: Yeah, they used a pencil.
Aaron: Right, a normal wooden pencil... It just seems like Philip takes the NASA route almost every time.
The first thing to note is just how delayed the feedback is from writing software to rewriting software, if that feedback requires releasing the software. If the handoff of specification from product to engineer goes awry, it may take weeks to detect the issue. This is even more profound in “high cardinality” problem domains where there’s a great deal of divergence in user usage and user data: it may take months or quarters for the feedback to reach the developer about something they did wrong, at which point they have, at best, forgotten much of their original intentions.
I’ll try to convince you of two things:
- Creating quality is context specific. There are different techniques for solving essential domain complexity, scaling complexity, and accidental complexity.
- Quality is created both within the development loop and across iterations of the development loop. Feedback within the loop, as opposed to across iterations, creates quality more quickly. Feedback across iterations tends to measure quality, which informs future quality, but does not necessarily create it.
That computers should be easy to learn and use is a rarely-questioned tenet of user interface design. But what do we gain from prioritising usability and learnability, and what do we lose? I explore how simplicity is not an inevitable truth of user interface design, but rather contingent on a series of events in the evolution of software. Not only does a rigid adherence to this doctrine place an artificial ceiling on the power and flexibility of software, but it is also culturally relative, privileging certain information cultures over others. I propose that for feature-rich software, negotiated complexity is a better target than simplicity, and we must revisit the ill-regarded relationship between learning, documentation, and software.
I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity.
When I polled my community about their attitudes toward busywork — a ruse to figure out what some of my nearest and dearests actually do for work — most at least saw value, if not joy, in occasional busywork. A web designer told me busywork serves as “productive procrastination” when she’s avoiding more complex tasks. A woman in sales and marketing said she values the solitude of rote tasks, and retreats into spreadsheets “when everybody’s annoying and I’m peopled out and my bullshit meter is filled.” A senior research program manager at a nonprofit explained that she values how data cleaning — combing through a dataset for errors, duplicates, and other issues — creates an intimacy with the information she’s processing. Cleaning data manually makes the phenomena she studies less abstract: “It connects you to a different way of working or being, or creates opportunities to see things in a different way.”
All in all, we are likely looking at 50 million+ lines of code active to open a garage door, running several operating system images on multiple servers.
The more living patterns there are in a thing—a room, a building, or a town—the more it comes to life as an entirety, the more it glows, the more it has this self-maintaining fire, which is the quality without a name.
The rule of least power is one of the core principles of web development: choose the least powerful language suitable for a given purpose. On the web, this means preferring HTML over CSS, and CSS over JS.
The Great Divide really resonated with me. I keep coming back to it and I do think it continues to accurately describe what feels like two very distinct and separate camps of web developer.
The question I keep asking though: is the divide borne from a healthy specialization of skills or a symptom of unnecessary tooling complexity?
...I want to be a web developer, not a JavaScript developer.
If you live in a different world too, we should be friends.
Soylent embodies the hubris and pitfalls of tech culture’s impulse to reduce the irreducible. The same naive confidence that code can optimize every aspect of our lives — the mindset that produced the mantra “everything should be as easy as ordering an Uber” — convinced Rhinehart and his team that they could engineer a better human fuel in defiance of millions of years of evolution. It’s an attractive illusion that technology can neatly solve the messy realities of existing in a body, of being a biological creature instead of a computer.
But again and again, biology proves more complex and unpredictable than software, with squishy and distinctive needs that aren’t so easy to generalize. Soylent is hardly the first or last startup to run aground chasing the seductive vision of human perfectibility through technology. But it offers a vivid object lesson in the limits of this worldview.
...My takeaway from Soylent is this: You can’t simply hack humanity into a more optimized version of itself. Our needs and drives have been shaped by millions of years of co-evolution and won’t be engineered away by a coterie of coders — no matter how much pedigreed venture funding they secure.
Because, in the end, even the most powerful code can’t reprogram the squishy, gloriously inefficient realities of the flesh.
One of my favourite laws – one of those principles that seem to be universally applicable – is Gall’s Law. People who have worked with me in the past are probably very tired of hearing about it.
...In my experience, this is a universal rule.
I have never seen anybody manage to break John Gall’s Law. It applies to every single aspect of software development, in that we are fond of planning and organising and building complex systems from scratch. We design these intricate structures of objects and classes that interact. We build things that have no chance of ever working.
Instead, the alternative that I always advocate is to try to bake evolution into your design. It’s fine to have a complex system design that you’re aiming for in the long term, but you need to have a clear idea of how it will evolve from a simple system that works to a complex system that works.
You cannot leap straight into the complex system.
You have to start small.
Niklaus Wirth, of Pascal fame, wrote a famous paper in 1995 called A Plea for Lean Software. His take is that “a primary cause for the complexity is that software vendors uncritically adopt almost any feature that users want”, and that “when a system’s power is measured by the number of its features, quantity becomes more important than quality”.
A complex system that works is invariably found to have evolved from a simple system that worked.
A complex system designed from scratch never works and cannot be patched up to make it work.
You have to start over with a working simple system.
A good design is not just a solution to a problem; it is a gift to others. It embodies a clarity of thought and a perspective that eases burdens on others rather than adding to their complexities. It is a prosocial act to invest extra time in your design, as it allows you to give this valuable gift to others. Even if you are the sole audience for your design, it is a thoughtful gift to your future self.
My favorite technique in the book is “Design it twice.” Here, Ousterhout argues that when designing a system, you should take a step back and design it a second time, making an effort to take a fresh approach. Then compare your two designs, weighing the pros and cons of each, so that your final design emerges from the output of the original two. Even if you select one of the designs unaltered, the exercise immediately increases your confidence in it. More likely, though, it will help you identify flaws in your original design, leading to immediate improvements.
Technology is seeing a little return to complexity. Dreamweaver gave way to hand-coding websites, which is now leading into Webflow, which is a lot like Dreamweaver. Evernote gave way to minimal Markdown notes, which are now becoming Notion, Coda, or Craft. Visual Studio was “disrupted” by Sublime Text and TextMate, which are now getting replaced by Visual Studio Code. JIRA was replaced by GitHub issues, which is getting outmoded by Linear. The pendulum swings back and forth, which isn’t a bad thing.
It’s kind of funny when you think about it: what’s worse than the technical problem of building and maintaining multiple native applications? The people problem of building and maintaining multiple native applications.
To belabor the point, I am reminded of Dave’s post about states. He enumerates a large (but not comprehensive) list of the many dimensions that affect your application. I’m not good at math, but to Peter’s point about how these compound, imagine the math on Dave’s list of states. That’s got to be a very, very large number.
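For a back-of-the-envelope illustration (the dimension count and cardinalities below are assumptions, not taken from Dave’s actual list), the combinations multiply rather than add:

```ts
// Assume 10 independent dimensions of UI state (loading, error, empty,
// offline, viewport, ...) averaging 4 possible values each. These numbers
// are hypothetical; the point is that the totals multiply.
const stateDimensions = new Array<number>(10).fill(4);
const totalStates = stateDimensions.reduce((product, n) => product * n, 1);
console.log(totalStates.toLocaleString()); // "1,048,576" combined states
```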
How can you know that you have correctly functioning code?
This is what kills you with trying to control complexity. All the small variances play off of each other to create an unknowably complicated environment.
Sometimes, you have to learn to let go of control.
The real test is the question “what are you willing to sacrifice to achieve simplicity?” If the answer is “nothing”, then you don’t actually love simplicity at all, it’s your lowest priority.
Designers, limited as they must be by the capacity of the mind to form intuitively accessible structures, cannot achieve the complexity of the semilattice in a single mental act. The mind has an overwhelming predisposition to see trees wherever it looks and cannot escape the tree conception.
Experiments suggest strongly that people have an underlying tendency, when faced by a complex organization, to reorganize it mentally in terms of non-overlapping units. The complexity of the semilattice is replaced by the simpler and more easily grasped tree form.
Dr. Weaver lists three stages of development in the history of scientific thought: (1) ability to deal with problems of simplicity; (2) ability to deal with problems of disorganized complexity; and (3) ability to deal with problems of organized complexity.
The history of modern thought about cities is unfortunately very different from the history of modern thought about the life sciences. The theorists of conventional modern city planning have consistently mistaken cities as problems of simplicity and of disorganized complexity, and have tried to analyze and treat them thus.
It’s counter-intuitive, but maintaining a smaller, high-quality team of developers is more efficient at scaling software than having many less skilled individuals. Every person introduces mistakes and complexity, some much more than others. Less experienced or less caring developers create more bugs and more complicated code. The best developers guard simplicity with every change. The best teams have these great developers supervise the learning of less experienced but eager-to-learn ones.
"How can it be done correctly" is hard to answer in one email, but I like this idea from Kent Beck:
“First make the change easy, then make the easy change”.
The idea is to always make your next code change easy to make. This requires refactoring your code to maintain simplicity as you work. As you do this, you’ll sometimes find that you over-abstract things or go down blind alleys. This is when growth happens. Each of these moments makes you a better developer.
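Here is a rough TypeScript sketch of Beck’s advice in practice (the discount example is invented, not his):

```ts
// Before: a 10% discount rule is baked into the calculation, so changing
// the rule means editing every place it was copied to.
function invoiceTotal(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p * 0.9, 0);
}

// Step 1, "make the change easy": extract the rule. This is a pure
// refactor; behavior is unchanged.
const discounted = (price: number): number => price * 0.9;

function invoiceTotalRefactored(prices: number[]): number {
  return prices.reduce((sum, p) => sum + discounted(p), 0);
}

// Step 2, "make the easy change": the rule now lives in exactly one place,
// so introducing, say, tiered discounts is a one-line edit:
// const discounted = (price: number) => (price > 100 ? price * 0.85 : price * 0.9);

console.log(invoiceTotal([10, 20]), invoiceTotalRefactored([10, 20])); // 27 27
```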
Go is often described as a simple language. It is not, it just seems that way. Rob explains how Go's simplicity hides a great deal of complexity, and that both the simplicity and complexity are part of the design.
…the story is not so much about whether it’s true, or what the story means, as about what the story is trying to move. What does it do over time, and to time itself? Basically, if a story motivates a more respectful relationship with the land, then it has a powerful “truth” that will bear out in the fullness of time.
I really like this, and especially the clash with the narrow Western mentality that truth is absolute and forever. “Sometimes true” just does not compute, to the scientific mind, and might blow a fuse—usefully.
AI and automation are often promoted as ways of handling complexity. But handling complexity isn't the same as reducing it.
In fact, by getting better at handling complexity we're increasing our tolerance for it. And if we become more tolerant of it we're likely to see it grow, not shrink.
...But something that can genuinely reduce complexity, rather than just mask it, is good design.
Good, user-centered design is how we can deliver services with just enough complexity: enough to model the rich diversity of people's circumstances, but no more.
Given the complexity of the web and how we build it nowadays, sometimes it helps to remember that a single file can be capable of doing incredible things. Create an HTML file, add your markup, maybe augment it with some style and script tags, and now you have a self-contained and coherent site. How cool is that? The Single File Philosophy focuses on fighting against unnecessary Complexity when it's applicable. Of course, if you're collaborating on applications at Scale or sites that have various Business Requirements, then this philosophy most likely doesn't – and probably shouldn't – apply. However, there are cases where we introduce Complexity to build our single-use web apps or side projects for the sake of Convention. If you take a step back and see the project for What It Is, you may realize that you don't need to follow Convention. Everything you wrote can realistically fit into a lone readable file and nothing more.
The basic idea of legibility is that the act of making something comprehensible enough to control is itself an act that shapes the thing being controlled, often with far greater consequences than the control itself. Making a system legible removes complexity deemed irrelevant because it gets in the way of control, and that complexity may be in some way essential to the health of the system.
- Feature flags give PMs an excuse to not make hard decisions, such as completely removing a feature.
- The codebase becomes more complex and harder to maintain.
- Testing becomes harder (and lower quality): figuring out which combinations of feature flags need to be supported, as the sketch below illustrates.
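A small hypothetical TypeScript sketch of that last point (the flag names are invented): each boolean flag doubles the configuration space a test suite would need to cover exhaustively.

```ts
const flags = ["newCheckout", "darkMode", "betaSearch"]; // hypothetical flags

// Enumerate every on/off combination: 2^n configurations for n flags.
const combinations = flags.reduce<Record<string, boolean>[]>(
  (combos, flag) =>
    combos.flatMap((combo) => [
      { ...combo, [flag]: false },
      { ...combo, [flag]: true },
    ]),
  [{}],
);
console.log(combinations.length); // 8 configurations for just 3 flags
```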
A primary cause of complexity is that software vendors uncritically adopt almost any feature that users want. Any incompatibility with the original system concept is either ignored or passes unrecognized, which renders the design more complicated and its use more cumbersome. When a system's power is measured by the number of its features, quantity becomes more important than quality. Every new release must offer additional features, even if some don't add functionality.
Nothing so fundamental lies in the realm of concern to us aggregate humans, where the need is, now, for the study of real complexity, not idealized simplicity. In every field except high-energy physics on one hand, and cosmology on the other, one hears the same. The immense understanding that has come from digging deeper to atomic explanations has been followed by a realization that this leaves out something essential. In its rapid advance, science has had to ignore the fact that a whole is more than the sum of its parts.
Many have become so focused on the process and methodologies that they’ve forgotten the fundamentals of why we started focusing on the user and what we hope to achieve with that focus.
The idea of overlap, ambiguity, multiplicity of aspect, and the semilattice are not less orderly than the right tree, but more so. They represent a thicker, tougher, more subtle and more complex view of structure.
The people who’ve proven that they can make very good individual products with the radical focus of a spotlight seem to be pushed ever further from making good ecosystems.
Products are being made “consistent” with the application of so-called “design patterns,” and rather than bringing coherence to these various touch-points, the painting-on of interface standards and interaction patterns has done something far less valuable.
Rote consistency, in the way many seem to be going about it (Material Design being just one example), is at odds with making things be good. It simplifies what needs to remain complex.
Always, when simplification is underway, meaning is being lost.