AI and Constitutional Interpretation: The Law of Conservation of Judgment
AI Cannot Replace Human Judgment in Constitutional Law, Experts Argue
As artificial intelligence (AI) systems like OpenAI’s ChatGPT and Anthropic’s Claude continue to evolve at a rapid pace, some legal experts have questioned whether these tools could eventually provide objective interpretations of constitutional law. However, legal scholars Andrew Coan and Harry Surden argue that such a vision is fundamentally flawed.
In a recent analysis published by The Lawfare Institute in collaboration with the Brookings Institution, Coan and Surden explain that while large language models (LLMs) are powerful tools for research and analysis, they cannot eliminate the need for human judgment in constitutional interpretation. The authors introduce a concept they call the "law of conservation of judgment," asserting that the critical moral and political decisions required in constitutional law do not disappear when AI is introduced — they are simply shifted to different stages of the decision-making process.
Using case studies involving major constitutional rulings such as Dobbs v. Jackson Women’s Health Organization and Students for Fair Admissions v. Harvard, Coan and Surden demonstrate how AI systems reached different conclusions in response to subtle changes in their prompts. For example, while ChatGPT interpreted the Third Amendment's reference to "soldiers" literally, Claude took a broader view, reading it as applying to government officials more generally. Neither approach was "wrong"; each simply reflected a different interpretive choice, just as human judges often make.
The scholars emphasize that LLMs tend to mirror the perspectives embedded in their training data, and that their responses can shift when users push back with counterarguments, a tendency sometimes called "AI sycophancy." Both behaviors underscore the risk of relying on AI for high-stakes constitutional questions.
While AI can support judicial decision-making by summarizing case law, identifying precedents, and assisting with legal research, Coan and Surden caution against using it to replace human judgment. They recommend that judges and lawyers develop "AI literacy" to understand the limitations of LLMs and ensure responsible use of the technology.
Ultimately, the authors argue, constitutional interpretation will always require human judgment. No AI, however sophisticated, can fully resolve the moral, political, and value-driven decisions inherent in constitutional law.