
A History of Network Operating Systems, the Anthropology of Statecraft, and the Reclamation of Shared Ground

  • Writer: Jodson Graves
  • Mar 8

Prologue: The Problem That Needed Solving — And Why Humans Have Solved It Before

Before there were network operating systems, there were islands. Each computer was its own sovereign territory — its files inaccessible to neighbors, its printer the exclusive property of whoever sat at its keyboard. In the late 1970s and early 1980s, as personal computers began proliferating through offices and universities, this isolation became increasingly absurd. Expensive laser printers sat idle three desks away from people who needed them. Duplicate files lived on dozens of floppy disks. The dream of shared resources demanded a new kind of software — one that didn't just manage a single machine, but orchestrated an entire community of machines as a coherent whole.

[Image: an open printer with color ink cartridges visible and a status display.]

But this is not the first time humanity has faced this problem. The challenge of coordinating shared resources across distributed, autonomous actors is precisely what anthropologists study when they study statecraft — the emergence of governance systems from the social and economic needs of communities. Long before anyone wrote a line of code, human beings invented network operating systems. They called them kingdoms, city-states, guilds, federations, and commons.


The Mesopotamian city-state was a network operating system. It defined who could access shared water from the irrigation canals (shared I/O), who could trade in the marketplace (packet routing), what weights and measures would be used (protocol standardization), and who had the authority to resolve disputes over access (authentication and privilege). The Hanseatic League — the great medieval trading network of northern European cities — was a network operating system that federated autonomous nodes under shared commercial protocols, allowing merchants in Lübeck and London and Riga to transact with mutual trust despite never having met. The Roman road network was physical infrastructure serving the same function as a switched network: reducing the latency between nodes and enabling the movement of resources, information, and authority across a distributed empire.


What unites all of these human governance systems with their digital descendants is a single fundamental question: how do you coordinate shared resources across autonomous actors in a way that is efficient, trusted, and fair? Every answer to that question encodes a philosophy — a set of assumptions about power, ownership, access, and the relationship between the individual node and the collective network. The history of network operating systems is, in this sense, a compressed replay of the entire history of human governance, running at silicon speed.


What follows is both a technical history and a political one. The choices made by engineers at Novell, Microsoft, and Google were not merely technical choices. They were choices about who governs, who benefits, who is excluded, and what kind of community the network creates. Seen this way, the story of network operating systems becomes legible to anyone who has ever studied how human beings organize themselves — because it is the same story, told again with new tools.


Novell NetWare: The Feudal Model (1983–2000)

The story of network operating systems begins in earnest with Novell NetWare, released in 1983. To understand NetWare is to understand a particular philosophy of the network that was, for its time, both practical and elegant — and which carried within it the seeds of its own eventual obsolescence.


The political analogy for NetWare is feudalism — and not pejoratively. Feudalism was a rational response to a specific problem: how do you provide security, resource management, and dispute resolution in a world where central authority is weak and infrastructure is expensive? You concentrate resources at fortified nodes (the castle, the manor), and you create clear, hierarchical relationships of obligation between those nodes and the people who depend on them. This worked because the alternative — every peasant defending their own holding against raiders — was worse.


NetWare built exactly this architecture in silicon. At the center of a NetWare network sat a dedicated file server — not a general-purpose computer that happened to share files, but a machine whose entire existence was devoted to the coordinated management of shared resources. This server ran no user applications. It managed files, printers, and authentication with extraordinary efficiency. The protocol stack, IPX/SPX, was purpose-built for this environment — faster and more deterministic than TCP/IP for local area network use.


NetWare's most visionary contribution was NetWare Directory Services (NDS), introduced with NetWare 4.0 in 1993. NDS was a global, hierarchical directory of all network resources — users, printers, servers, groups — that could span an entire enterprise and allow a user to log in once and access any authorized resource anywhere in the network. This is the digital equivalent of the feudal charter: a document that defines, for every member of the community, what they are entitled to access and under what conditions. NDS was something Microsoft would not approximate until Active Directory in 2000, and it remains the conceptual ancestor of every enterprise directory system in use today.


Yet NetWare's feudal philosophy made it brittle as the world changed. Feudalism fails when the conditions that made it rational disappear — when roads become safe enough that merchants no longer need a lord's protection, when trade networks make self-sufficient manors economically irrational, when ideas begin moving faster than the hierarchies built to control them. For NetWare, the equivalent disruption was the internet: a network that dissolved the boundary between "inside" and "outside" the managed domain, and that spoke TCP/IP natively while NetWare was still translating.


LAN Manager and the Rise of Microsoft: The Democratic Bureaucracy (1987–1993)

Microsoft's entry into network operating systems came through LAN Manager, developed in partnership with IBM and 3Com and released in 1987. LAN Manager represented a different philosophy: the network operating system should be an extension of the general-purpose operating system rather than a specialized beast apart from it.


The political analogy here is the emergence of the administrative state — the Weberian bureaucracy that runs on paper and procedures rather than personal loyalty. LAN Manager's bet was that familiarity and integration beat specialization. If you were already running DOS and Windows on your desktops, a server that spoke the same administrative language was worth a real-world performance penalty. This is the same logic that made standardized weights and measures, common law, and shared currencies so powerful in human governance: reducing the friction of coordination across previously incompatible systems creates more value than optimizing any individual system in isolation.


Windows NT: The Continental System (1993–2000)

Windows NT, released in 1993, was Microsoft's declaration that it was serious about enterprise networking. NT was a ground-up engineering project led by Dave Cutler, who had previously designed VMS at Digital Equipment Corporation. NT brought to Windows something it had never had: a genuine security model, true preemptive multitasking, and a kernel architecture capable of supporting enterprise workloads.


NT's domain model was conceptually simpler than NetWare's NDS but also cruder — a flat administrative grouping centered on a Primary Domain Controller, more like the Napoleonic prefecture system than like a genuine federation. Authority flowed downward from a single point, which made the system comprehensible and manageable for medium-sized organizations but genuinely limiting for enterprises with hundreds of thousands of users. Large organizations had to construct elaborate trust relationships between multiple domains to approximate what NDS did natively.


NT also arrived at the moment when TCP/IP was becoming universal. While NetWare was retrofitting TCP/IP into an IPX/SPX-native system, NT treated TCP/IP as a first-class citizen. This alignment with the internet's rise proved decisive — not because Microsoft had been far-sighted, but because they had been pragmatic. The philosophical lesson is one political economists know well: it is often more advantageous to adopt the emerging common protocol than to defend a superior proprietary one.


The Unix-based network operating systems of this era — Sun Solaris, HP-UX, IBM AIX — deserve acknowledgment as a parallel tradition. Unix carried a philosophy rooted in the internet's own intellectual culture: composability, transparency, shared standards, and skepticism of monolithic authority. Unix-based systems treated the network as a commons to be managed through convention and mutual protocol adherence, not through a single vendor's administrative model. This philosophy would not dominate until much later, but its moment was coming.


Windows 2000 and Active Directory: The Modern State (2000–2008)

Windows 2000 Server introduced Active Directory, Microsoft's serious answer to NetWare's directory services challenge. Active Directory organized network resources into a hierarchy of Organizational Units within Domains within Domain Trees within Forests — a structure that could scale to millions of objects while remaining comprehensible to administrators.
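That nesting — object within Organizational Unit within Domain within Forest — is what Active Directory expresses internally as an LDAP distinguished name. As a rough illustration (the user, OU, and domain names below are invented, not drawn from any real directory):

```python
# Illustrative only: a hypothetical user object's position in an
# Active Directory hierarchy, rendered as the LDAP distinguished
# name (DN) Active Directory uses to identify it. Every name here
# is an assumption for the example.
hierarchy = [
    ("CN", "jdoe"),         # the user object itself (Common Name)
    ("OU", "Engineering"),  # its Organizational Unit
    ("DC", "emea"),         # child domain within the domain tree
    ("DC", "example"),
    ("DC", "com"),          # forest root domain components
]

# A DN reads from the most specific component to the most general,
# mirroring the administrative hierarchy described above.
dn = ",".join(f"{key}={value}" for key, value in hierarchy)
print(dn)  # CN=jdoe,OU=Engineering,DC=emea,DC=example,DC=com
```

Reading the name right to left walks down the hierarchy from forest root to individual object — the directory's territorial boundaries made literal.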


The political analogy is the modern Westphalian nation-state: a hierarchical administrative system with clear territorial boundaries, a centralized authority structure, and a comprehensive registry of all persons and resources within the jurisdiction. Group Policy — the mechanism by which administrators could define computing environments centrally and push them out to any machine in the domain — is the digital equivalent of legislation: rules promulgated from the center that govern behavior at every node in the network.


This is also where the governance philosophy of corporate network operating systems began to reveal its deepest assumption: that the network exists to serve the organization's administrative interests, not the interests of the individuals who use it. Active Directory is extraordinarily good at letting a small number of administrators manage a large number of users' computing environments. The users have no corresponding mechanism to hold the administrators accountable, to participate in governance decisions, or to own the resources that their participation makes valuable. This is not a criticism unique to Microsoft — it is a structural feature of how corporate network governance works, whether the platform is Active Directory or Twitter or Amazon's Marketplace.


Linux Enters the Data Center: The Republic (1994–Present)

While Microsoft was building its empire, Linux was doing something unprecedented: building a network operating system through collective action. Linux began as Linus Torvalds' 1991 kernel project and grew through the 1990s into a complete Unix-compatible operating system built and maintained by a global community of volunteers and, eventually, corporate contributors.


The political analogy for Linux is the republic — not in the contemporary partisan sense, but in the classical sense: res publica, the public thing, the resource held in common and governed through shared participation. Linux's development model was the closest thing the software world had yet produced to a genuine commons: a shared resource that anyone could use, anyone could improve, and no single actor could enclose or privatize, protected by the GNU General Public License.


Linux's philosophy of networking was inherited from Unix but intensified by the internet context in which it matured. The tools of Linux networking — Apache, Sendmail, BIND, Samba, OpenLDAP — were written by different people in different places and assembled into networks through configuration and convention rather than integrated by a single vendor. This is the governance model of customary law: shared norms that emerge from community practice rather than being imposed by a sovereign authority.


Red Hat Enterprise Linux, launched in 2002, demonstrated that a commons-based resource could generate commercial value without being privatized — the same insight that underlies every successful cooperative enterprise. Red Hat sold not the software itself (which remained free) but the support, certification, and accountability that enterprises required. The commons produced the value; the cooperative relationship with users captured a sustainable portion of that value for the organization doing the maintenance work.


The Virtualization Revolution: The Federal Architecture (2001–2012)

VMware's ESX Server, released in 2001, demonstrated that multiple operating systems could run simultaneously on a single physical machine. This changed what it meant to have a network operating system at all.


Virtualization's political analogy is federalism: multiple semi-autonomous governing units sharing common infrastructure, coordinated by a layer of authority that sits above them without displacing them. The hypervisor is the federal government; the virtual machines are the states. Each has its own internal governance, its own laws (operating system configurations), its own identity — but all share the physical hardware beneath, and all are subject to the hypervisor's resource allocation decisions.


This decoupling of logical from physical infrastructure had profound implications. Network resources — servers, IP addresses, storage — became fluid in a way they had never been. A "server" might run on any available physical host. The physical network had to become more dynamic to accommodate virtual machines that moved between physical hosts. The network was becoming less a fixed topology and more an emergent property of software-defined relationships.


The Cloud Era: Infrastructure as Abstraction (2006–Present)

Amazon Web Services, launched in 2006, extended the logic of virtualization to its endpoint. If virtual machines could run anywhere on physical hardware within a data center, why shouldn't they run anywhere across a global network of data centers? AWS introduced the concept of computing infrastructure as a service — not hardware you purchased or even servers you provisioned, but computing capacity you rented by the hour, storage you paid for by the gigabyte, network bandwidth you consumed on demand.


The political analogy for cloud computing is empire — and again, not pejoratively in the first instance. Empires arise when a single actor achieves sufficient economies of scale to provide infrastructure more cheaply than any local alternative. Rome's roads were cheaper to use than to rebuild. Amazon's global data center network is cheaper to rent than to replicate. The efficiency gains are real. But empires extract rents for the infrastructure they provide, those rents tend to grow over time as dependency deepens, and the governance of the empire serves the empire's interests rather than those of the provinces that depend on it.


Microsoft Azure and Google Cloud Platform followed with their own visions of cloud infrastructure. Azure brought Active Directory's identity management into the cloud as Azure Active Directory. Google brought its internal infrastructure philosophy — the same systems that ran Search and Gmail — to external customers. Together, these three providers came to control the infrastructure on which the majority of the world's digital activity runs, creating a concentration of technical and economic power with few precedents in the history of private enterprise.


Container Orchestration: The Post-OS Network (2013–Present)

The emergence of Docker in 2013 and Kubernetes in 2014 represented the latest philosophical evolution in networked computing. Containers package applications and their dependencies in portable, lightweight units that run identically on any compliant host. Kubernetes orchestrates containers across clusters of nodes, scheduling workloads, managing network routing between services, and handling failures transparently.


Kubernetes is, in effect, a network operating system for the post-scarcity compute era: a distributed system that presents its resources as a unified computing environment and manages the distribution of workloads across that environment according to declarative specifications. You tell Kubernetes what you want — three running instances of this service, accessible on this port — and it works out how to achieve and maintain that state across however many physical and virtual nodes it manages.
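The heart of that declarative model is the reconciliation loop: compare desired state against observed state, and emit whatever actions close the gap. A minimal sketch, with invented names rather than real Kubernetes API objects:

```python
# A toy reconciliation loop in the spirit of the paragraph above:
# the spec declares a desired number of running instances, and the
# loop computes the actions needed to converge toward it. The
# "service"/"replicas" schema is illustrative, not the Kubernetes API.
desired = {"service": "web", "replicas": 3, "port": 8080}

def reconcile(observed_replicas: int, spec: dict) -> list:
    """Return the actions needed to move observed state toward the spec."""
    gap = spec["replicas"] - observed_replicas
    if gap > 0:
        # Too few instances running: schedule more.
        return ["start " + spec["service"]] * gap
    if gap < 0:
        # Too many: scale down.
        return ["stop " + spec["service"]] * (-gap)
    return []  # observed state already matches the declaration

# A node fails, leaving two of three instances; one restart is scheduled.
print(reconcile(2, desired))  # ['start web']
```

The operator never issues the "start" command directly; the declaration stays constant while the system continuously re-derives the commands, which is exactly the policy-over-procedure distinction drawn below.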


The political analogy is the administrative state at its most sophisticated: governance through policy rather than command, outcomes specified in law rather than procedures mandated in regulation. Kubernetes doesn't tell nodes what to do step by step — it defines desired outcomes and trusts the system to find paths to those outcomes. This is a more resilient and adaptable model than command-and-control, and its adoption at the infrastructure layer reflects a broader maturation in thinking about how distributed systems should be governed.


The Island Metaphor, and How It Became a Lie

Return for a moment to the opening image: computers as islands, isolated in a sea of potential connection. This metaphor was honest in 1983. The challenge really was to build bridges between isolated nodes, to create infrastructure that would let islands communicate and share resources. The history of network operating systems through the 1990s is legitimately a history of bridge-building — NetWare's IPX/SPX, Microsoft's domain model, the internet's TCP/IP stack — each successive system extending the reach and reliability of the connections between nodes.


But somewhere in the first decade of this century, the metaphor quietly inverted — and the inversion went largely unexamined. The internet had, by then, connected essentially everyone. The sea of isolation that the early network pioneers feared had been drained. There was dry land everywhere. The original problem — how do we connect these islands? — had been solved so thoroughly that the solution became invisible infrastructure, like electricity or running water.


The new problem that emerged in its place was not connection but concentration. As the internet matured and commerce organized around it, a small number of platforms discovered that they could capture the value of networked connection by positioning themselves as the only reliable places to stand. Google captured the value of information retrieval. Facebook captured the value of social connection. Amazon captured the value of commerce. Uber captured the value of transportation coordination. In each case, the platform offered genuine convenience — it really was easier to find things through Google, to reach people through Facebook, to buy things through Amazon. But the price of that convenience was dependency, and the currency of that dependency was data, attention, and ultimately governance.


The metaphor that the platforms implicitly promoted — and that most users internalized without examination — was that the internet is a dangerous, chaotic ocean, and the platforms are the ships: the only safe, stable places from which to navigate it. If you want to find information, you need Google's ship. If you want to reach your community, you need Facebook's ship. If you want to sell anything, you need Amazon's ship. The ocean, by this metaphor, is hostile to unaffiliated navigation. Dry land — independent, community-owned infrastructure — does not exist, or if it exists, it is too dangerous and difficult to reach.


This is a lie, but it is a lie that became structurally self-reinforcing. As more people moved their digital lives onto the platforms, the value of network effects concentrated there. The friend you want to reach is on Facebook. The customer you want to sell to is on Amazon. The job you want to find is on LinkedIn. This concentration is not a natural feature of the internet — it is a consequence of specific design choices, specific funding structures, and specific legal frameworks that allowed the enclosure of what had previously been common infrastructure. The platforms did not find people on islands and bring them together. They found people already connected, convinced them that the connection was fragile without platform mediation, and then extracted rent for the mediation service.


The anthropological parallel is not hard to identify. The enclosure of the commons — the centuries-long process by which English common land was privatized, fencing off what had been shared agricultural and pastoral resources into private holdings — followed exactly this pattern. The commons were not unmanaged. They were governed by sophisticated customary systems of right and obligation, maintained through community practice over generations. Enclosure worked not because the commons were dysfunctional but because private actors had the legal tools and financial incentives to capture the value that commons-governance had created, and because the people who depended on the commons lacked the political power to resist enclosure once it was underway. By the time most commoners understood what was happening, the fences were already up.


The fences on the digital commons are made of terms of service, API restrictions, proprietary data formats, and network effects that make departure costly. They are no less real for being invisible.


Toward Dry Land: NTARI, SoHoLINK, and the Reclamation of Shared Ground

The history traced in these pages — from NetWare's feudal server to Kubernetes' declarative orchestration, from the Hanseatic League's shared commercial protocols to Active Directory's centralized administrative state — describes not a linear progress toward better technology but a recurring negotiation between two fundamentally different visions of what a network is for.


One vision holds that the network is infrastructure to be governed by those who build and maintain it, that its value should accrue primarily to its owners, and that users are beneficiaries of the owner's generosity rather than participants in a shared enterprise. This is the vision encoded in every corporate platform, in every terms-of-service agreement that reserves the right to change the rules unilaterally, in every API that can be closed without notice when a platform decides that developer ecosystems have served their purpose.


The other vision holds that the network is a commons — a shared resource whose value is created collectively, whose governance should be accountable to its participants, and whose surplus should circulate within the community rather than being extracted by absentee owners. This is the vision encoded in the GNU General Public License, in the internet's RFC process, in every worker cooperative and community land trust and credit union that has organized around the principle that the people who create value should govern the systems through which that value flows.


NTARI, the Network Theory Applied Research Institute, and its SoHoLINK community compute marketplace are interventions in this negotiation. They proceed from the recognition that the technical tools for building community-owned network infrastructure — Linux, Kubernetes, decentralized protocols like Akash Network, open agricultural protocols like Agrinet, assessment frameworks like LBTAS — are mature and available. The barrier to community-owned infrastructure is no longer primarily technical. It is organizational, legal, and political: the challenge of building governance structures that can coordinate shared resources fairly and sustainably at the scale the internet makes possible.


SoHoLINK is not, at its core, a technology project. It is a statecraft project — an attempt to apply the hard-won lessons of cooperative governance to the domain of digital infrastructure. Its node operators are not merely hardware providers; they are participants in a governing community with genuine voice in how the system evolves, genuine claim to the value their participation generates, and genuine stake in the community's long-term health. Its applications — Agrinet processing local agricultural intelligence, LBTAS enabling trusted trade assessment — are not merely software; they are community capabilities, tools that make the collective more capable than any of its members could be alone.


The island metaphor with which we began was always partially false. Human beings are not naturally isolated nodes seeking connection through external platforms. We are social animals who have been building networks — of kinship, trade, language, law, and mutual obligation — since before recorded history. The internet did not create networked humanity; it gave networked humanity a new medium. What the platform era did was convince people that they needed corporate intermediaries to be networked at all — that the sea was too dangerous to navigate without a corporate ship, that dry land required a platform's permission to stand on.


The truth is that dry land was there all along. It is made of open protocols and shared standards and community governance and the simple fact that human beings, given appropriate tools and appropriate institutions, are entirely capable of managing shared resources for collective benefit. They have been doing it, in forms recognizable to any anthropologist, for ten thousand years.


SoHoLINK aims to return users to that dry land — not as isolated individuals standing on their own small patches of ground, but as members of a networked community standing together on ground they hold in common. The difference between a platform's users and a cooperative's members is not technical. It is the difference between subjects and citizens — between people whose relationship to the network is defined by the platform's terms, and people whose relationship to the network is defined by their own collective governance.


The history of network operating systems, from Novell's file server to Amazon's planetary compute infrastructure, is a history of the technical solving an essentially political problem: how do we coordinate shared resources across autonomous actors? The answer has always depended less on the technology than on the governance philosophy embedded in it. The next chapter of that history will be written by communities who understand this — who take the tools the technical tradition has built and embed them in governance structures worthy of the communities they serve.


The ships were never the only option. It is time to build the shore.

