The Knife Edge of Value Alignment in AI: Utopia or Extinction
Doors open @ 6pm — Come early and meet other Long Now thinkers — Presentations start @ 7pm
October 2, 02018: A Long Now Boston Community Conversation with
Richard Mallah and Lucas Perry, The Future of Life Institute [FLI]
See the after-event writeup for a discussion of this event.
What happens when we ask the algorithms to make decisions for us – decisions that may have life and death consequences?
Artificial Intelligence (AI) is one of this century’s most misunderstood buzzwords. In Kurzweil’s “Singularity”, it represents a glorious future where human toil and suffering are ended. In The Matrix, it conjures a future dominated by malevolent supermachines feeding on the energies of human slaves. In reality, AI is fast becoming the ubiquitous handmaiden of human invention and ingenuity in much of what we relish day to day: search engines and the energy grid, autonomous vehicles and life support systems, food production and weather forecasting, data security and anti-ballistic missile guidance. The list of AI processes we can no longer get by without grows daily.
AI programming ultimately relies on simple digital decision chains, yet machines have reached the point where they can teach themselves. The intelligence may be artificial and “inhuman”, but it is increasingly more capable than our own. In the world of zeroes and ones, a near perfection of logical functioning can be achieved: AI systems free of human foibles and the slowness of biological systems, free of human attributes like emotion, intuition, love, or a sense of right and wrong. Or are they?
What happens when their intelligence begins to match or exceed our own, reaching the level of Artificial General Intelligence (AGI), where we can no longer tell whether an agent is human or machine? Autonomous decision-making and human-level agency will require moral and ethical guidance. Do our AI programmers have the historical, philosophical, and moral perspective to be the arbiters of that guidance? Or do we let the algorithms themselves learn human morality by emulating humans? How do we properly align the values of our inventions to achieve the goal of a beneficent future for all?
The Long Now Boston Conversation Series hosts the Future of Life Institute’s Richard Mallah and Lucas Perry to share their research on the frontiers of Value Alignment and the implications for the future of AI and AGI.
Join the conversation and be part of the solution.
$15 in advance // $20 at the door. Students w/ID admitted free.
Audience participation is encouraged.
The Future of Life Institute [FLI] is one of the world’s leading organizations exploring the existential challenges posed by technology, and their potential solutions, in the fields of AI, biotechnology, nuclear weapons, and climate change.
Richard Mallah is the Director of AI Projects at the Future of Life Institute. He has over fifteen years of experience leading AI research and AI product teams in industry, giving him an appreciation for the tradeoffs at every stage of the AI product lifecycle. In that role, Richard does meta-research, analysis, advocacy, research organization, community building, and technical direction of projects related to the safety, ethics, robustness, and beneficence of future AI systems, with the aim of minimizing their risks and maximizing their benefits globally. Richard was the lead author of FLI’s landmark Landscape of Technical AI Safety Research, and he has given dozens of invited talks on the safety, ethics, robustness, and beneficence of advanced AI. Within IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems, he is a former chair of the committee on autonomous weapons, a current co-chair of the committee on AGI safety and beneficence, and a member of the executive committee. Richard holds a degree in computer science, AI, and machine learning from Columbia University, and is well read in natural philosophy.
Lucas Perry is Project Coordinator for the Future of Life Institute. He focuses on enabling and delivering existential-risk mitigation efforts, ranging from direct interventions to advocacy and research enablement. Lucas was an organizer of the Beneficial AI 2017 conference, worked on a nuclear-weapons divestment campaign, and has spoken at a number of universities and EA events. His AI activities include grantmaking in the field of AI safety, a podcast on AI safety and value alignment, and work on the conceptual landscape of the value alignment problem. He studied philosophy at Boston College and has been working in AI safety and existential risk ever since.
We’re proud and excited to welcome Richard and Lucas to the Long Now Boston community.