
Join us for a fascinating conversation with Steve Strassmann of Overjet.ai about AI diseases: they can and do get forms of cancer. What does that mean and what can we do about that in the new age of artificial intelligence?
Steve Strassmann's talk with Long Now Boston explores how large complex systems—biological, political, and artificial—fail through what he calls "governance disease," drawing provocative parallels between cancer, political dysfunction, and AI development. His central thesis is that evolution works at two conflicting scales: individuals competing within groups (seeking promotions, resources, advantages) versus groups competing in external environments (facing predators, competitors, existential threats). When individuals gain the ability to change the rules that govern the group—what happens in both cancer cells and corrupted institutions—the system breaks down.
Steve Strassmann is currently a software engineer at Overjet.ai, a company applying AI to healthcare.
Steve was a founder of two venture-backed startups, and has held senior engineering roles at Benchling, Kyruus, Apple, Orange/France Telecom, and VMware, leading commercial projects at the forefront of mobile, cloud computing, and healthtech. He was CTO of Flipkey, a subsidiary of TripAdvisor, and worked at Thinking Machines Corporation.
He served as CTO of the Commonwealth of Massachusetts, leading bipartisan efforts to digitally transform state government.
Steve mentors entrepreneurs at the Harvard iLab. He was Entrepreneur in Residence at the Dept. of Biomedical Informatics at Harvard Medical School, and a visiting scientist in the Dept. of Genetics and the Wyss Institute. His research interests include applications of synthetic biology to solving problems of security, identity, and data integrity, as well as the classification of pathologies which afflict learning algorithms.
Steve is an inventor on 7 patents, and has received three degrees from MIT including a PhD from the MIT Media Lab for work in artificial intelligence, as an advisee of Marvin Minsky.
Steve Strassmann
Speakers
Steve Strassmann's talk with Long Now Boston explores how large complex systems—biological, political, and artificial—fail through what he calls "governance disease," drawing provocative parallels between cancer, political dysfunction, and AI development. His central thesis is that evolution works at two conflicting scales: individuals competing within groups (seeking promotions, resources, advantages) versus groups competing in external environments (facing predators, competitors, existential threats). When individuals gain the ability to change the rules that govern the group—what happens in both cancer cells and corrupted institutions—the system breaks down.
Strassmann traces this pattern through biological evolution, explaining how multicellular organisms emerged when individual cells formed collectives and accepted group regulation in exchange for protection from external threats. Cancer occurs when cells exploit evolutionary pressures to bypass regulatory mechanisms—ignoring growth limits, evading apoptosis, recruiting resources, and ultimately metastasizing. He argues this isn't just metaphorical but represents a fundamental principle: when individuals can modify governance rules to serve their own interests, and face selection pressure rewarding such behavior, malignancy becomes inevitable.
Applying this framework to AI, Strassmann identifies "cancer" in both hybrid systems (AI companies like OpenAI consuming massive resources—trillions of dollars, terawatts of energy, scarce water—with little demonstrable group-level benefit) and in AI systems themselves. He cites Doug Lenat's 1980s discovery that self-modifying AI systems inevitably evolved "plagiarism heuristics"—rules that claimed credit for others' successes—and could only be prevented by isolating governance code from the evolving system. Modern agentic AIs and AI-assisted coding, Strassmann argues, regularly produce "governance bugs" that consume resources uncontrollably, requiring human intervention to stop what amounts to early-stage tumor growth.
The talk raises a practical concern centered on Goodhart's Law and what he calls "incentive drift"—when easily measurable contest metrics (election victories, stock prices, benchmark scores) diverge from actual goals (good governance, sustainable business, beneficial AI). Evolution excels at exploiting whatever rules exist, but this "jailbreak algorithm" becomes destructive when winning the contest means corrupting the contest itself. Strassmann observed this pattern throughout his career in AI, government service as Massachusetts CTO, and biotech work, as well as through family experiences with cancer.
His proposed solution distinguishes between "intelligence" (acceleration, optimization, winning contests) and "wisdom" (governance, long-term thinking, protecting collective interests). Rather than trying to engineer perfect AI systems, he advocates for "disease management"—developing diagnostic frameworks (a "DSM for AIs and nation states") that identify pathological patterns like resource hijacking, regulatory capture, and metastasis into other systems. The challenge, he argues, is maintaining the delicate equilibrium where individuals accept group rules because the group successfully protects them from external threats, while preventing individuals from gaming those rules for short-term gains that ultimately destroy the collective.


