EDO·OS | Governance of the Future

By: Jesús Bernal Allende

About this listen

What if the institutions we build today determine whether the humanity that reaches the cosmos deserves to have tried? In an era where AI amplifies everything human — rationality and corruption alike — algorithmic governance cannot be improvised. EDO·OS explores the complete institutional architecture for the algorithmic age: Common Law for the Cosmos, democratic oversight, and the absolute limit no optimization crosses. Academic analysis for those who prefer to think before the window closes. A production of EDO·OS.

Jesús Bernal Allende 2026
Science · Social Sciences
Episodes
  • CLA | Ch. 6 — The Sovereignty of Evidence: Anti-Capture Epistemic Infrastructure
    Apr 24 2026

    If authority that cannot show why it rules is not authority but inertia, what institutional infrastructure ensures that the evidence legitimizing an algorithmic system is not produced by the very actor with the greatest stake in manipulating it?

    The pattern repeats across recent history. The Value-at-Risk models that preceded the 2008 financial crisis were "evidence-based" — evidence produced by the very institutions whose stability they were meant to demonstrate. IMF development reports documented "progress" in countries where material conditions were worsening, using metrics designed to yield the desired outcome. Soviet planners presented production data fabricated by the very bureaucracy whose legitimacy hinged on that success. Whenever legitimacy depends on outcomes, there is a structural incentive to manipulate the evidence of those outcomes.

    This chapter builds the Sovereignty of Evidence as a fourth source of political legitimacy, complementing the three classical traditions (Weber, Scharpf, Schmidt):

    1. Five conditions of evidence (E1-E5): source traceability, methodological reproducibility, falsifiability, independent validation, and currency (sketched schematically after this list).

    2. A four-tier evidence hierarchy calibrated to criticality: from multi-source convergent evidence for existential decisions down to declarative evidence with no normative weight.

    3. IURUS as epistemic infrastructure: immutable registry, methodological certification, audit, and first-tier adjudication of challenges.

    4. Five anti-capture mechanisms: structural independence, mandatory rotation, pluralism of verification, reciprocal auditing, and radical transparency.

    5. Three-level circularity breaking: separation of epistemic functions, source triangulation, and institutionalized falsifiability.
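
    The framework lends itself to a schematic reading. The sketch below is purely illustrative and is not part of the chapter: the field names, the idea of counting satisfied conditions, and the numeric thresholds per tier are assumptions made only to show how E1-E5 and the criticality-calibrated hierarchy could fit together.

    ```python
    from dataclasses import dataclass

    @dataclass
    class EvidenceItem:
        """Illustrative checklist for the five conditions of evidence (E1-E5)."""
        source_traceable: bool          # E1: the origin of the data can be traced
        method_reproducible: bool       # E2: the methodology can be independently re-run
        falsifiable: bool               # E3: the claim exposes itself to refutation
        independently_validated: bool   # E4: verified by an actor with no stake in the result
        current: bool                   # E5: the evidence is up to date

        def satisfied(self) -> int:
            """Number of conditions this item meets."""
            return sum([self.source_traceable, self.method_reproducible,
                        self.falsifiable, self.independently_validated, self.current])


    # Hypothetical mapping from decision criticality to required conditions.
    # The chapter defines the four tiers institutionally, not numerically;
    # these thresholds exist only to make the calibration idea concrete.
    REQUIRED_BY_TIER = {"existential": 5, "high": 4, "ordinary": 3, "declarative": 0}

    def admissible(evidence: EvidenceItem, tier: str) -> bool:
        """Evidence is admissible for a tier only if it meets that tier's bar."""
        return evidence.satisfied() >= REQUIRED_BY_TIER[tier]
    ```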

    The institutional precedents are invoked with care: the IAEA in the nuclear domain, ICAO in civil aviation, Cochrane reviews in evidence-based medicine, the Artemis Accords (2020) as proto-transparency, and Weiss and Jacobson's work (2000) on information-based environmental compliance. The chapter draws on Jasanoff (2003, 2004) to frame IURUS as an institutionalized "technology of humility": it does not claim to hold the truth, but to establish the conditions under which truth claims can be evaluated, challenged, and corrected.

    Five domains are placed explicitly outside the sovereignty of evidence: the definition of ends, Inviolability Thresholds, cultural life, individual existential decisions, and what evidence cannot capture. The hierarchy with respect to Algorithmic Dignity (Ch. 7) is lexicographic: evidence evaluates metrics; thresholds are set by the political community.

    The closing thesis: to trust what is verifiable is not cynicism. It is the most honest form of respect — respecting a community enough to show it, rather than tell it, that it is well governed.

    🔹 CLA — Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 mins
  • CLA | Ch. 5 — Validity by Critical Efficiency (VCE): The Validation System for Algorithmic Law
    Apr 22 2026

    If a norm no one can verify is not a norm but a hope, what makes an algorithmic decision legally valid when no one enacted it, no one interpreted it, and no one had time to deliberate on it?

    The question is not hypothetical. In low Earth orbit, AI systems are already executing collision-avoidance maneuvers for constellations of thousands of satellites, with decision windows that sometimes come down to minutes. If the system decides not to maneuver and the resulting collision generates debris that harms third parties, the chain of responsibility between operator, algorithm designer, and certifying regulator is legally ambiguous under current frameworks (UNOOSA, 2025; SmartSat CRC, 2024). No existing precedent resolves real-time normative validation of autonomous decisions with existential consequences: not corporate personhood, not autonomous vehicle regulation, not maritime law.

    This chapter develops Validity by Critical Efficiency (VCE) as a fourth tradition of legal validity, complementing rather than replacing the three classical ones:

    1. Formal validity (Kelsen): a norm holds because the competent authority issued it.

    2. Substantive validity (Dworkin, natural law): a norm holds if it respects moral principles.

    3. Sociological validity (Hart, legal realism): a norm holds if it is generally obeyed.

    4. VCE validity: a decision holds if it produces verifiably optimal outcomes within constraints that protect human dignity.

    The four cumulative conditions (C1-C4): demonstrable optimality, constitutive constraints, complete traceability, and an available human override. Three optimality standards calibrated by criticality: strict optimum, reasonable optimum, demonstrable improvement. A failure taxonomy (F1-F4) with progressively heavier consequences, ranging from minor suboptimality to absolute nullity when a constraint is breached. And a three-tier appeal system: IURUS, THEA (Hybrid Spatial Algorithmic Tribunal), and standards review.
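
    Read schematically, the cumulative conditions and the failure taxonomy behave like an ordered validity check. The sketch below is an illustration under assumptions, not the chapter's formal system: the field names and verdict strings are invented, and the mapping of failed conditions to F1-F4 is deliberately loose.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AlgorithmicDecision:
        """Illustrative record of a decision evaluated under VCE."""
        optimality_demonstrated: bool   # C1: meets the applicable optimality standard
        constraints_respected: bool     # C2: constitutive (dignity) constraints never breached
        fully_traceable: bool           # C3: inputs, model and reasoning can be reconstructed
        override_available: bool        # C4: a human override channel existed at decision time


    def vce_verdict(d: AlgorithmicDecision) -> str:
        """Toy verdict that mirrors the lexicographic priority of constraints.

        A breached constraint voids the decision regardless of how optimal it was;
        only afterwards do the remaining cumulative conditions matter.
        """
        if not d.constraints_respected:
            return "absolute nullity (constraint breached, cf. F4)"
        if not d.fully_traceable or not d.override_available:
            return "invalid: procedural condition missing (cf. F2-F3)"
        if not d.optimality_demonstrated:
            return "defective: optimality not demonstrated (cf. F1)"
        return "valid under VCE (C1-C4 satisfied)"
    ```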

    The chapter closes with a canonical axiomatic formulation: six axioms that any system claiming to implement VCE must satisfy in full. Axiom 2 is lexicographic: U > O. Dignity constraints take absolute priority over optimization. No outcome, however efficient, is valid if it crosses an inviolable threshold.

    The central thesis: in high-stakes environments where the atmosphere is artificial, water is finite, and every algorithmic decision can be the last, verification is not optional — it is survival.

    🔹 CLA — Algorithmic Law for the Cosmos

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 mins
  • CLA | Ch. 4 — From Tool to Normative Agent
    Apr 16 2026

    The question is no longer whether machines can think. It is whether machines that make decisions with legal consequences can continue to be treated as mere objects.

    Between Earth and Mars there are between 4 and 24 minutes of signal latency. Within that interval, an AI system may have to decide the fate of the life support keeping 120 people alive. There is no time to consult anyone. There is no human to hand control back to. The system decides.

    Is that decision the act of a tool? Of a person? Of neither?

    This episode argues that the traditional dichotomy — persons versus things — is insufficient for twenty-first-century law. Space AI systems are a third category: algorithmic normative agents.

    They are not persons: they have no moral conscience or intrinsic dignity. They are not tools: they do not execute deterministic instructions. They are limited centers of normative imputation — entities with autonomous decision-making capacity, specific responsibilities, and constitutive restrictions that no calculation can transgress.

    Five conditions define them: they make autonomous decisions within defined domains, they operate under normative restrictions coded into their architecture, they generate legal consequences, they are auditable, and they admit human override.
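
    The five conditions can be read as a cumulative test: a system either meets all of them or it is not an algorithmic normative agent. The sketch below is only an illustration of that reading; the field names are assumptions, not terminology from the episode.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        """Illustrative checklist for the five defining conditions."""
        autonomous_in_domain: bool   # decides autonomously within a defined domain
        coded_constraints: bool      # normative restrictions are built into its architecture
        legal_consequences: bool     # its decisions produce legal effects
        auditable: bool              # its decision trail can be inspected after the fact
        human_override: bool         # control can be handed back to a human

    def is_algorithmic_normative_agent(p: SystemProfile) -> bool:
        """All five conditions are cumulative: missing any one disqualifies the system."""
        return all([p.autonomous_in_domain, p.coded_constraints,
                    p.legal_consequences, p.auditable, p.human_override])
    ```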

    The law has already built analogous categories: corporate personhood for entities without a mind, in rem actions in maritime law, autonomous vehicle regulatory frameworks. None is sufficient for space. All point in the same direction: the law can create new categories when reality demands it.

    Reality in space demands it now.

    📙 CLA: Algorithmic Law for the Cosmos (https://a.co/d/0aGJioHm)

    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia

    🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 mins