CLA | Ch. 5 — Validity by Critical Efficiency (VCE): The Validation System for Algorithmic Law
About this listen
If a norm no one can verify is not a norm but a hope, what makes an algorithmic decision legally valid when no one enacted it, no one interpreted it, and no one had time to deliberate on it?
The question is not hypothetical. In low Earth orbit, AI systems are already executing collision-avoidance maneuvers for constellations of thousands of satellites, with decision windows that sometimes come down to minutes. If the system decides not to maneuver and the resulting collision generates debris that harms third parties, the chain of responsibility between operator, algorithm designer, and certifying regulator is legally ambiguous under current frameworks (UNOOSA, 2025; SmartSat CRC, 2024). No existing precedent (not corporate personhood, not autonomous vehicle regulation, not maritime law) resolves real-time normative validation of autonomous decisions with existential consequences.
This chapter develops Validity by Critical Efficiency (VCE) as a fourth tradition of legal validity, complementing rather than replacing the three classical ones:
1. Formal validity (Kelsen): a norm holds because the competent authority issued it.
2. Substantive validity (Dworkin, natural law): a norm holds if it respects moral principles.
3. Sociological validity (Hart, legal realism): a norm holds if it is generally obeyed.
4. VCE validity: a decision holds if it produces verifiably optimal outcomes within constraints that protect human dignity.
The framework rests on four cumulative conditions (C1-C4): demonstrable optimality, constitutive constraints, complete traceability, and an available human override. It calibrates three optimality standards by criticality: strict optimum, reasonable optimum, and demonstrable improvement. A failure taxonomy (F1-F4) carries progressively heavier consequences, from minor suboptimality to absolute nullity when a constraint is breached. A three-tier appeal system completes the design: IURUS, THEA (Hybrid Spatial Algorithmic Tribunal), and standards review.
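The cumulative logic of C1-C4 and the F1-F4 ladder can be sketched in code. This is an illustrative sketch only: the condition labels follow the chapter's taxonomy, but every identifier, the mapping of conditions to failure classes, and the ordering of checks are hypothetical stand-ins, not the book's specification.

```python
# Hypothetical sketch of a VCE validity check. Field names and the
# condition-to-failure mapping are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Decision:
    optimality_proof: bool       # C1: demonstrable optimality
    constraints_respected: bool  # C2: constitutive (dignity) constraints hold
    trace_complete: bool         # C3: complete traceability
    override_available: bool     # C4: human override was available


def vce_valid(d: Decision) -> bool:
    """The four conditions are cumulative: failing any one voids validity."""
    return (d.optimality_proof and d.constraints_respected
            and d.trace_complete and d.override_available)


def failure_class(d: Decision) -> str:
    """Map a decision onto the F1-F4 ladder (ordering assumed, not sourced).

    A breached constitutive constraint is checked first and dominates:
    absolute nullity, no matter how the other conditions fared.
    """
    if not d.constraints_respected:
        return "F4: absolute nullity (constraint breached)"
    if not d.trace_complete:
        return "F3: unverifiable decision"
    if not d.override_available:
        return "F2: no human override"
    if not d.optimality_proof:
        return "F1: minor suboptimality"
    return "valid"
```

Note how the check order in `failure_class` already encodes the lexicographic priority the chapter insists on: a constraint breach short-circuits everything else, so no degree of optimality can rescue the decision.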
The chapter closes with a canonical axiomatic formulation: six axioms that any system claiming to implement VCE must satisfy in full. Axiom 2 is lexicographic: U > O. Dignity constraints take absolute priority over optimization. No outcome, however efficient, is valid if it crosses an inviolable threshold.
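The lexicographic reading of Axiom 2 admits a compact formalization. The symbols U (dignity constraints) and O (optimization) come from the passage; the formula itself is my sketch, not the book's canonical axiomatics: a decision d is VCE-valid only if every inviolable constraint holds, and, among the decisions that satisfy all of U, it is one that maximizes O.

```latex
\text{VCE-valid}(d) \iff
\underbrace{\bigl(\forall u \in U:\; u(d)\bigr)}_{\text{inviolable constraints first}}
\;\wedge\;
\underbrace{d \in \arg\max_{\,d'\,:\;\forall u \in U,\; u(d')} O(d')}_{\text{optimize only within } U}
```

Because the constraint clause is conjoined, not traded off, no value of O(d) can compensate for a violated u: this is exactly the U > O priority.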
The central thesis: in high-stakes environments where the atmosphere is artificial, water is finite, and every algorithmic decision can be the last, verification is not optional — it is survival.
—
🔹 CLA — Algorithmic Law for the Cosmos
Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia
🌐 https://edo-os.com 🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795