We Climbed the Trees — Then Handed the Forest to Landlords
- Jodson Graves
- Mar 6
- 7 min read

How the original vision of computing as a public utility was technically achieved and structurally betrayed
In 1964, a team at MIT began work on Project MAC's Multics — the Multiplexed Information and Computing Service. The founding metaphor was explicit: computation would be delivered like electricity from a wall socket. You wouldn't own a generator. You'd plug in, use what you needed, and pay a metered rate. Fernando Corbató and his colleagues at MIT, joined by researchers from Bell Telephone Laboratories and General Electric, set out to build a computing utility that would serve hundreds of simultaneous users, democratizing access to a resource that had been locked inside air-conditioned rooms and controlled by priesthoods of operators.
The project was staggering in ambition. Multics pioneered hierarchical file systems, virtual memory with segmentation and paging, dynamic linking, ring-based security, and the radical idea of writing an operating system in a high-level language. Every one of these innovations would eventually become standard. But in the 1960s, the hardware couldn't keep up with the vision. The custom GE-645 mainframe arrived late. Costs ballooned. The system groaned under its own complexity.
Sam Morgan, who directed Computing Science Research at Bell Labs, offered the definitive postmortem: Multics was "an attempt to climb too many trees at once." In April 1969, Bell Labs withdrew from the project, and the researchers who had poured years into it — Ken Thompson, Dennis Ritchie, Doug McIlroy, Joe Ossanna, Rudd Canaday — suddenly found themselves without the interactive computing environment they'd grown to depend on.
What happened next is the most consequential accident in the history of technology.
Climbing one tree at a time
Thompson found a PDP-7 minicomputer in a neighboring department, already obsolete, gathering dust after a circuit-analysis project wrapped up. Over the summer of 1969, while his wife took their infant son to visit family in California, Thompson wrote the core of a new operating system in three weeks — an editor, an assembler, and a kernel, one per week.
The system that emerged was everything Multics was not. Where Multics tried to solve every problem simultaneously, Unix solved one problem at a time. Where Multics required custom hardware, Unix ran on cast-off machines nobody wanted. Where Multics was designed by committee across three institutions, Unix was built by a handful of people who sat in adjacent offices and could shout across the hall.
Brian Kernighan named it with a pun: UNICS, the Uniplexed Information and Computing Service — a deliberate contrast to the Multiplexed ambition of its predecessor. Whether the pronunciation as "eunuchs" was intentional remains a matter of genial dispute.
The philosophy that crystallized around Unix was articulated by Doug McIlroy: make each program do one thing well, expect every program's output to become another program's input, and design software to be tried early and rebuilt without hesitation. The pipe — McIlroy's 1964 concept of connecting programs "like garden hose" — became the mechanism that made this philosophy operational. Small, composable tools connected by text streams could solve problems that monolithic applications couldn't even articulate.
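McIlroy's principle is easiest to see in a working pipeline. The sketch below is a minimal illustration, assuming a standard Unix toolset (`printf`, `tr`, `sort`, `uniq`): it counts word frequencies by chaining four single-purpose programs, none of which understands the whole problem.

```shell
# Count word frequencies with four single-purpose tools chained by pipes.
# Each program's output becomes the next program's input.
printf 'to be or not to be\n' |
  tr ' ' '\n' |   # split: one word per line
  sort |          # group identical words together
  uniq -c |       # count each group
  sort -rn        # most frequent first
```

Each stage can be replaced or extended independently — swap `tr` for a smarter tokenizer, insert a filter before the count — without touching the others. That independence is exactly what a monolithic word-counting application cannot offer.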
And it worked. Not just technically, but culturally. Unix spread through universities like a benevolent virus, carried by source-code tapes shipped for the cost of postage. By 1983, it ran on at least sixteen different processor architectures from roughly sixty vendors. The Berkeley Software Distribution added virtual memory, TCP/IP networking, and the fast file system. Bill Joy left to co-found Sun Microsystems. The internet was built on Unix machines.
Here is the critical point: we have now climbed nearly every tree that Multics tried to climb simultaneously. Hierarchical file systems, virtual memory, dynamic linking, ring-based security, high-level-language operating systems, multi-user time-sharing — all of these are so thoroughly achieved that we barely think about them. They're plumbing. We've moved past them into containerization and orchestration, where thousands of isolated environments share hardware seamlessly. We debate whether to rewrite kernels in Rust for stronger safety guarantees.
The Multics team saw the destination correctly. They underestimated the journey. Unix demonstrated that the journey had to be taken one tree at a time.
The destination arrived — under the wrong ownership
Corbató's computing utility exists. It is called Amazon Web Services.
You can provision a virtual machine in any region on Earth in under a minute. You pay by the millisecond. You scale up and down on demand. Computation flows like electricity from a wall socket — metered, abstracted, always available. The technical vision of 1964 has been fully realized.
But the ownership structure went in exactly the opposite direction from what the original metaphor implied. When Corbató compared computing to electricity, the model he was referencing was the regulated public utility — infrastructure governed in the public interest, with rates set by commissions and service obligations enforced by law. What we got instead was a handful of corporations renting back compute capacity from centralized data centers, governed by terms of service that can change overnight, extracting economic rent from every transaction that flows through their platforms.
This was not a technical failure. It was a structural choice — or more precisely, the absence of a structural choice that should have been made and wasn't.
There is no technical reason a neighborhood can't pool compute resources the way a housing cooperative pools purchasing power. Mesh networking works. Distributed storage works. Container orchestration works at scales from data centers to single-board computers. The pieces exist. They have existed for years.
But building that configuration doesn't serve the companies that currently intermediate every transaction. Amazon doesn't benefit from communities discovering they can federate their own infrastructure. Google doesn't benefit from locally hosted search and discovery. The platforms that dominate computing today are architecturally capable of decentralization but economically motivated toward concentration. The technology points one way. The incentive gradient points the other.
So it doesn't get built — not because it's hard, but because the entities with the resources to build it would be undermining their own business models.
The consent decree did it once by accident
The one time community-owned computing infrastructure emerged at scale, it required the federal government to force a monopoly's hand.
In 1956, AT&T settled a Department of Justice antitrust case by agreeing to a consent decree that confined the company to common carrier communications and required it to license all patents to any applicant. AT&T could not enter the computer business. This meant Unix could not be a product. AT&T's lawyers determined the company couldn't commercialize it, so Ken Thompson quietly began shipping tapes for nominal fees — educational licenses cost perhaps a few hundred dollars, and they included complete source code.
The consequence was the largest unplanned technology transfer in history. Thousands of programmers at hundreds of universities received Unix's source code, studied it, modified it, and shared their improvements. An entire generation of computer scientists learned systems programming by reading the same codebase. The BSD tradition at Berkeley, the GNU project, and ultimately Linux all trace their lineage to code that was shared because a court order prevented it from being locked up.
Economists Grindley and Teece called AT&T's resulting licensing policy "one of the most unheralded contributions to economic development — possibly far exceeding the Marshall Plan." The Electronic Frontier Foundation observed that without the consent decree, AT&T would never have allowed this Unix culture to flourish.
When the consent decree was lifted in 1984 as part of the AT&T breakup, the company immediately began commercializing Unix. The "Unix Wars" between the AT&T-led System V camp and the rival Open Software Foundation fragmented the ecosystem. The USL v. BSDi lawsuit cast a legal cloud over freely distributable BSD for two years — and in that window, Linus Torvalds' Linux, built from scratch with no AT&T code, captured the momentum that BSD might otherwise have claimed.
The pattern is clear. Open, shared computing infrastructure emerged as a side effect of legal constraint on a monopoly. When the constraint was removed, the monopoly immediately moved to extract rent from what had been a commons. And the community had to rebuild from scratch — this time with Linux — to preserve what the consent decree had accidentally created.
The lamp is built. The socket exists. The wiring is wrong.
We are not waiting for the technology to catch up to the Multics vision. We are waiting for the infrastructure to be configured correctly.
A Raspberry Pi has more computing power than the PDP-11 that ran the first production Unix. A mesh of commodity hardware running open-source orchestration software can provide the same fundamental services as a cloud region — compute, storage, networking, identity, service discovery — at the neighborhood level. Solar-powered edge nodes can maintain operation independent of centralized grid and network infrastructure. The protocol layers for federation — the ability of autonomous nodes to discover each other, negotiate trust, and share resources — are well-understood engineering problems, not research problems.
What's missing is not capability but configuration. The technology to deliver computing as a genuine public utility — owned by the communities it serves, governed by cooperative principles, federated across jurisdictions rather than concentrated in corporate data centers — exists today. It has existed for years. It simply hasn't been assembled, because the entities with the capital and engineering capacity to assemble it profit more from the current arrangement.
This is the gap. Not between vision and technology, but between technology and governance. Between what can be built and what the prevailing incentive structures allow to be built.
Building intentionally what the consent decree did by accident
The consent decree demonstrated that when structural conditions prevent monopolistic capture, computing infrastructure naturally evolves toward shared, community-governed models. Unix spread through universities not because AT&T was generous, but because AT&T was legally prohibited from being proprietary. The result was the most productive period of collaborative infrastructure development in computing history.
The question is whether that result can be achieved intentionally — without waiting for a court to compel it.
This means building community-owned compute cooperatives from the ground up. It means designing federation protocols that allow autonomous nodes to interoperate without requiring a central authority. It means licensing infrastructure software under copyleft terms that prevent the commons from being enclosed — the same structural function the consent decree served, implemented through intellectual property law rather than antitrust enforcement. It means treating multilingual accessibility not as a feature but as a foundational requirement, because the communities most underserved by centralized platforms are precisely the ones that need localized, cooperatively governed alternatives.
The Multics team was right about the destination. The Unix team was right about the method — build simply, iterate quickly, compose small pieces into larger systems. The consent decree was right about the structure — shared infrastructure produces more innovation than proprietary capture.
We climbed all the trees. The question now is: who owns the forest?
