How Pacifists Go to War
- the Institute
- Jan 9

In 1994, Linus Pauling died believing vitamin C cured cancer. He was brilliant—a two-time Nobel Prize winner, quantum chemistry pioneer, and peace activist. He was also catastrophically wrong about vitamin C. His conviction overpowered his methodology. He cherry-picked studies, dismissed contradictory evidence, and used his scientific reputation to promote claims that rigorous trials repeatedly disproved. Pauling's error wasn't stupidity or malice. It was certainty untempered by adversarial scrutiny.
NTARI researches network topology, cooperative infrastructure, and economic rent extraction. We believe platform monopolies harm communities through information asymmetries. We believe cooperative alternatives can succeed structurally, not just ethically. We believe the internet's centralized architecture contradicts its distributed potential. These aren't neutral observations—they're positions with consequences. Communities might build infrastructure based on our research. Policies might shift. Resources might redirect.
Which means we could be wrong in ways that matter.
So we go to war—not against people, but against our own conclusions. We treat certainty as the enemy and methodology as sacred. Three weapons form our arsenal: Maximum Observational Diversity (forcing contradictory perspectives into collision), Minimum Sustainable Projection (refusing claims that outrun evidence), and Scientific Method as Ritual (treating verification as non-negotiable practice). Not because we lack conviction, but because conviction without rigor is just noise that happens to align with our values.
This is how pacifists fight. Not to defeat opponents, but to defeat error.
Maximum Observational Diversity: The Adversarial Swarm
In 1964, the RAND Corporation convened physicists to estimate Soviet missile accuracy. The group converged quickly on a consensus estimate. Then one analyst, Albert Wohlstetter, forced a split: half the physicists argued for their original estimate, half argued the opposite. Within hours, both groups found flaws in their reasoning that the original consensus had buried. The final estimate changed by an order of magnitude.
This is adversarial collaboration—forcing multiple perspectives to attack the same question from incompatible positions. It works because blind spots aren't random. They correlate with worldview, training, and the frameworks we use to parse reality. A single perspective can't see its own edges.
NTARI employs methodological context switching: we deliberately fragment research questions across multiple AI instances with different system prompts, then force those perspectives into confrontation. Not because AI instances are conscious adversaries, but because different prompt architectures create different observational boundaries. One instance optimizes for parsimony, another for comprehensiveness. One privileges economic analysis, another network topology, another historical precedent.
When analyzing municipal broadband, for instance, we don't ask "does municipal broadband work better than private ISPs?" We ask:
Economic instance: "Model the cost structures that make municipal broadband 35-50% cheaper—what assumptions underlie those savings?"
Network topology instance: "How does last-mile infrastructure ownership affect network graph centralization?"
Historical instance: "What patterns from rural electrification and telephone cooperatives predict municipal broadband outcomes?"
Adversarial instance: "Find the steelmanned argument for why private ISPs structurally outperform municipal alternatives."
Then we collide the results. Not to synthesize a comfortable middle ground, but to map where frameworks disagree, where evidence contradicts theory, where our assumptions appear in the gaps between perspectives.
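To make that concrete, here is a minimal sketch of the dispatch-and-collide loop in Python. The prompts are condensed stand-ins for full system prompts, and `query_model` is a placeholder for whatever LLM backend you wire in; nothing here is production code.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import Callable

@dataclass
class Perspective:
    name: str
    system_prompt: str

# Condensed stand-ins for full system prompts (illustrative only).
PERSPECTIVES = [
    Perspective("economic", "Model the cost structures; surface every assumption."),
    Perspective("topology", "Analyze ownership and routing as a network graph."),
    Perspective("historical", "Argue only from documented precedent."),
    Perspective("adversarial", "Steelman the opposite conclusion."),
]

def collide(question: str, query_model: Callable[[str, str], str]) -> dict:
    """Send one question through every perspective, then pair the answers
    so a human coordinator can inspect where frameworks fracture."""
    answers = {p.name: query_model(p.system_prompt, question) for p in PERSPECTIVES}
    # No synthesis step: the product is a map of disagreements, not consensus.
    return {
        "answers": answers,
        "fractures": {(a, b): (answers[a], answers[b])
                      for a, b in combinations(answers, 2)},
    }
```

Note what the sketch refuses to do: there is no function that merges the answers. Pairing them is where the human coordinator's work begins.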
The goal isn't consensus. The goal is observational diversity—accumulating incompatible viewpoints until blind spots become visible through their absence. When five analytical frameworks agree, we're probably missing the sixth that would reveal why all five are wrong.
Humans coordinate this process. AI instances can't orchestrate their own adversarial structure—they lack the meta-level awareness of which perspectives are missing. Human researchers design the prompt architecture, identify which viewpoints need representation, and force confrontation at the fracture points where frameworks diverge. The AI instances are observational instruments; humans are the experimenters who know which instruments to deploy and when their readings contradict.
This mirrors how particle physics uses multiple detector geometries to reconstruct particle collisions. No single detector sees the complete event. But incompatible detector perspectives, properly coordinated, reveal structure that any single observation would miss.
Maximum Observational Diversity is the first weapon: multiply viewpoints until blind spots emerge.
Minimum Sustainable Projection: The Precision Constraint
In 2015, the replication crisis hit psychology. Researchers attempted to replicate 100 published studies. Only 39% succeeded. The failures weren't fraud—they were overreach. Researchers extracted claims from data that couldn't support the weight. Small sample sizes generating sweeping theories. Statistical significance mistaken for practical importance. Correlation elevated to causation.
The problem wasn't lying. The problem was projection—extending claims beyond what evidence justifies.
NTARI practices Minimum Sustainable Projection: we state the smallest claim the evidence supports, refusing the temptation to extract more meaning than exists. Not because larger claims are false, but because they're unverifiable with current data.
When analyzing why platform monopolies extract 25-70% of transaction value, we could claim:
Maximum projection: "Platform monopolies are inherently parasitic and must be abolished"
Medium projection: "Platform monopolies extract excessive rents that harm economic efficiency"
Minimum sustainable projection: "Information asymmetries and switching costs allow platform intermediaries to capture 25-70% of transaction value in documented markets including ride-sharing, food delivery, and agricultural commodities. This exceeds coordination costs in alternative models."
The minimum claim is boring. It's also defensible. Every element rests on verifiable data: information asymmetries are measurable, switching costs are documentable, transaction percentages come from financial disclosures and market analysis. The claim makes no prediction about inevitability, morality, or solutions—just structure and measurement.
This constraint frustrates advocacy instincts. NTARI believes cooperative infrastructure can replace extractive platforms. But "can replace" and "will replace" and "should replace" are different claims requiring different evidence. We limit ourselves to what network topology, economic structure, and historical precedent demonstrate—not what our mission hopes to achieve.
Minimum Sustainable Projection creates a strange kind of warfare: attacking your own conclusions to find the irreducible core. You start with the full argument, then strip away every claim that rests on assumption rather than evidence, every logical leap that skips intermediate steps, every generalization that outpaces data. What survives is sparse, precise, and maddeningly limited.
But it's also true in ways that survive adversarial scrutiny.
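The stripping step can even be caricatured in code. Below is a toy sketch, assuming a `Claim` record we invented for this example; real projection review is human judgment, not keyword matching.

```python
from dataclasses import dataclass, field

# Scope words that signal a claim outrunning its evidence (toy list).
UNBOUNDED = ("inherently", "must", "always", "never", "will ")

@dataclass
class Claim:
    statement: str
    evidence: list = field(default_factory=list)  # links to sources

def minimum_projection(claims):
    """Keep only claims that cite evidence and avoid unbounded scope words."""
    return [c for c in claims
            if c.evidence and not any(m in c.statement.lower() for m in UNBOUNDED)]

ladder = [
    Claim("Platform monopolies are inherently parasitic and must be abolished"),
    Claim("Platform intermediaries capture 25-70% of transaction value "
          "in documented markets",
          evidence=["financial disclosures", "market analyses"]),
]
print([c.statement for c in minimum_projection(ladder)])
# Only the bounded, evidenced claim survives the strip.
```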
This is the second weapon: refuse claims that outrun evidence, even when—especially when—those claims align with your mission.
Scientific Method as Ritual: The Sacred Verification
In 1620, Francis Bacon published Novum Organum, outlining systematic observation, hypothesis testing, and empirical verification. He wasn't describing how science happened to work—he was prescribing ritual practice. Observation precedes hypothesis. Hypothesis precedes experiment. Experiment precedes conclusion. The sequence matters because human minds default to belief first, verification later.
Four centuries later, we still struggle with this. Confirmation bias isn't a failure to understand the scientific method—it's the default state of human cognition. We believe, then we seek evidence for belief, then we mistake that search for objective inquiry.
NTARI treats scientific method as ritual—not metaphorically, but structurally. Ritual creates behavioral invariance: you perform the steps regardless of emotional state, conviction, or desired outcome. You don't perform ritual because you feel like it. You perform ritual because the practice matters more than the practitioner's momentary state.
Our verification ritual follows an explicit sequence:
1. Network Topology Before Dynamics: Map structure before analyzing behavior. You cannot understand how information flows without first understanding the graph it flows through. This isn't optional—it's architectural. When examining platform monopolies, we map ownership networks, infrastructure dependencies, API coupling, and data flow graphs before analyzing market dynamics.
2. Historical Validation as Empirical Proof: Every structural claim must find precedent in the historical record. Not because history repeats, but because historical pattern is our only laboratory for social systems. When claiming AGPL-3 prevents corporate enclosure, we don't theorize—we examine software fork histories, GPL compliance patterns, and corporate behavior around copyleft licenses.
3. Visualization as Understanding Test: If you cannot draw it, you do not understand it. Visual representation forces precision—you cannot handwave topology in a graph diagram. When analyzing mesh network resilience, we don't describe "distributed redundancy." We draw the network graph, mark single points of failure, trace routing paths, and calculate minimum cuts. Imprecise understanding cannot survive visual translation.
4. Source Transparency as Default: Every factual claim links to a verifiable source. Every historical event cites documentation. Every statistic traces to origin data. Not as defensive citation, but as infrastructure for verification. We default to Wikipedia for accessible grounding, academic sources for authoritative depth, and primary documents for specifications.
These steps execute in sequence, each time, regardless of whether we "know" the answer. The ritual defeats the human tendency to skip verification when conclusions feel obvious.
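To make the sequencing concrete, here is a minimal sketch of the ritual as an ordered pipeline, with each step a function that raises on failure. The step bodies are placeholder checks for illustration, not our actual verification logic.

```python
# Placeholder checks: each step halts the ritual if its evidence is missing.
def topology(claim):
    if "graph" not in claim:
        raise ValueError("map the structure before analyzing dynamics")

def historical(claim):
    if not claim.get("precedents"):
        raise ValueError("no documented precedent")

def visualization(claim):
    if not claim.get("diagram"):
        raise ValueError("claim has not survived visual translation")

def sources(claim):
    if not claim.get("citations"):
        raise ValueError("statistics must trace to origin data")

RITUAL = [topology, historical, visualization, sources]

def verify(claim: dict) -> dict:
    """Run every step, in order, no matter how obvious the conclusion feels."""
    for step in RITUAL:
        step(claim)
    return claim
```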
Consider NTARI's claim that municipal broadband achieves 35-50% cost savings versus private ISPs:
Without ritual: "Municipal broadband is cheaper because it's nonprofit."
With ritual:
Topology: Map last-mile infrastructure ownership in 47 municipal networks versus comparable private ISP territories
Historical validation: Document cost structures from Chattanooga, Wilson NC, Lafayette LA, spanning 15+ years
Visualization: Graph capital costs, operational costs, and price-per-megabit across municipal/private comparison sets
Source transparency: Link to municipal financial disclosures, FCC broadband pricing data, academic cost studies
The ritual produces the 35-50% savings claim, bounded by specific documented cases, with transparent methodology for verification or contradiction.
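Expressed against the pipeline sketch above, that claim might look like the following record; every field value is shorthand for the underlying work.

```python
claim = {
    "statement": "Municipal broadband achieves 35-50% cost savings",
    "graph": "last-mile ownership map, 47 municipal networks",
    "precedents": ["Chattanooga", "Wilson NC", "Lafayette LA"],
    "diagram": "cost-per-megabit comparison across municipal/private sets",
    "citations": ["municipal financial disclosures", "FCC pricing data"],
}
verify(claim)  # passes only because every ritual step is satisfied
```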
This is the third weapon: treat verification as non-negotiable ritual, performed whether you believe the conclusion or not.
The War Against Certainty
These three methods—Maximum Observational Diversity, Minimum Sustainable Projection, Scientific Method as Ritual—share common structure. They all attack conviction. They all slow down conclusion. They all privilege verification over belief.
This creates friction with advocacy. NTARI argues for cooperative internet infrastructure, challenges platform monopolies, and advocates for community ownership. We have positions. We take sides. How does methodological warfare against our own conclusions serve that mission?
Because the internet is full of confident bullshit. Thought leaders with sweeping theories. Advocacy organizations cherry-picking evidence. Corporate whitepapers masquerading as research. We compete in an information ecosystem where certainty is performance and nuance is weakness.
Our advantage isn't better rhetoric. Our advantage is representational adequacy—our models actually capture how systems work. Network topology determines propagation behavior. Information asymmetries enable rent extraction. Cooperative ownership alters incentive structures. These aren't slogans. They're verifiable claims about measurable systems.
When NTARI says "AGPL-3 prevents corporate enclosure of open-source improvements," we can show the licensing structure, demonstrate historical precedent from GPL enforcement cases, map the game theory of contribution versus forking, and point to specific corporate decisions that avoided AGPL-3 software specifically because of those constraints.
When we say "mesh networks survived Hurricane Sandy while cellular infrastructure failed," we can link to Red Hook WiFi documentation, interview participants, map the network topology that enabled resilience, and compare against centralized infrastructure failure modes.
When we say "platform monopolies capture 25-70% of transaction value," we can cite Uber driver earnings studies, food delivery commission structures, agricultural marketplace documentation, and trace that value extraction to specific information asymmetries and switching costs.
This precision is a competitive advantage. Most advocacy operates on vibes and values. We operate on structure and measurement. When adversaries attack our claims, they collide with verification infrastructure we've already built. We've already fought ourselves—adversarial scrutiny from outsiders just encounters defenses we erected during internal warfare.
Pacifism isn't passivity. Pacifism is refusing the easy violence of unexamined certainty. The war we fight is against our own tendency to believe what serves our mission. The weapons we deploy are methodological rituals that force truth above convenience.
This is how we earn the right to advocacy. Not by moderating our positions, but by ensuring those positions survive adversarial scrutiny.
Methodology as Infrastructure
NTARI builds cooperative internet infrastructure. Most organizations focus on technical infrastructure—fiber optic cables, mesh network hardware, federated servers. That matters. But infrastructure includes methodology: the repeatable processes that generate reliable knowledge.
Maximum Observational Diversity creates knowledge infrastructure that survives single-perspective blind spots. Minimum Sustainable Projection creates knowledge infrastructure that withstands adversarial testing. Scientific Method as Ritual creates knowledge infrastructure that operates independently of practitioner conviction.
These methods propagate. Other researchers can adopt adversarial collaboration frameworks. Other organizations can implement minimum projection constraints. Other communities can treat verification as ritual. The methodology becomes commons—shareable infrastructure for truth-seeking that isn't bounded by NTARI's specific mission.
This mirrors how NTARI approaches technical infrastructure. We don't build mesh networks for NTARI—we build AGPL-3 specifications that any community can implement. We don't create proprietary agricultural protocols—we publish Agrinet under licenses that enable cooperative forking and improvement.
Methodological infrastructure follows the same pattern. These aren't NTARI's private research techniques. They're coordination protocols for knowledge generation—tools any organization can deploy, adapt, or improve.
Which means this article isn't just explanation. It's specification. Take these methods. Fork them. Test whether they improve your organization's relationship to truth. Report back what breaks, what survives, what needs modification.
That's the point of infrastructure: not to own it, but to make it available for use.
The Consequence of Error
In 1986, the Space Shuttle Challenger exploded during launch. Seven astronauts died. The failure traced to O-ring seals that became brittle in cold temperatures—a risk engineers had documented but management dismissed. Richard Feynman, investigating the disaster, demonstrated the failure mode in televised hearings by clamping an O-ring sample in ice water and showing how it lost elasticity.
The tragedy wasn't lack of information. The tragedy was methodology failure. Engineers knew the risk. Management wanted the launch. Confirmation bias and institutional pressure overpowered verification ritual. The consequence was seven lives and a program nearly destroyed.
NTARI doesn't launch spacecraft. But communities might build infrastructure based on our research. Municipalities might adopt broadband strategies we recommend. Cooperatives might implement protocols we specify. Agricultural networks might trust coordination systems we design.
If we're wrong—if our network topology analysis misses critical failure modes, if our economic models overestimate cooperative viability, if our historical precedents don't actually predict digital system behavior—communities waste resources, infrastructure fails, and trust in cooperative alternatives erodes.
The consequence of error isn't abstract. It's municipal budgets redirected toward failed infrastructure. It's farmer cooperatives investing in coordination systems that don't deliver value. It's communities choosing to trust NTARI's analysis over actual evidence.
Which is why we go to war against our own conclusions. Not because we lack conviction in cooperative infrastructure, but because communities deserve analysis that survives adversarial scrutiny. They deserve the smallest sustainable claims, built from maximum observational diversity, verified through ritual practice.
Certainty is easy. Precision is hard. We choose hard.
Join the Methodology
Maximum Observational Diversity, Minimum Sustainable Projection, and Scientific Method as Ritual aren't complete. They're frameworks we're testing, refining, and occasionally discovering we've gotten wrong.
The methodological war continues. New research questions reveal new blind spots. New evidence contradicts conclusions we thought were stable. New adversarial perspectives emerge that our current observational diversity framework doesn't capture.
If you research network topology, cooperative economics, platform monopolies, or infrastructure governance—if you care about knowledge infrastructure that serves communities rather than confirming priors—join the methodological development in NTARI's Slack workspace. Contribute adversarial perspectives. Challenge our minimum projections. Propose ritual modifications.
Or support the research financially at ntari.org. Methodological infrastructure requires sustained organizational capacity—time to design adversarial prompt architectures, resources to perform multi-perspective analysis, space to reject conclusions that fail verification even when they feel correct.
This is how pacifists fight. Not to defeat opponents, but to ensure our analysis deserves the trust communities place in it.
The war against certainty never ends. But every verified claim is territory held against the entropy of bullshit.