Long Now Boston Conversation Series
November 4, 02019, at CIC, 1 Broadway, Cambridge MA, with James (“J”) Hughes (IEET) and Nir Eisikovits (UMAEC).
Synopsis: Humans have co-evolved with technology for hundreds of thousands of years. Fire and stone tools were once the killer apps, giving humans immense advantages – but human physiology and society also evolved with them. It is no different today, but the stakes are higher, as they include global existential risks, and the pace of change is faster by many orders of magnitude. It is impossible to plan or to predict the future, but we can shape its trajectory by better understanding the risks and tradeoffs and by seeking equity in how we govern technology.
According to Nir Eisikovits, the world we live in today would be unimaginable to our ancestors just a few generations back. Likewise, we would likely find the world of our descendants unimaginable, which is what makes visualizing the far future so difficult. We can, however, look at the present and extrapolate the types of changes we are seeing in order to forecast what may be coming. As he looks at current technological changes and their potential impacts on human lives, Nir is fundamentally ambivalent. The same technology that allows us to converse with and stay close to loved ones on the far side of the world also allows us to sit at the dinner table with our children and have no conversation.
Nir sees technological change as an inevitable project of humanity, one that always has upsides and downsides. Overall, the upsides have far outweighed the downsides, but there is often a price to be paid. Artificial Intelligence (AI), for example, is giving us algorithms that can replace fallible human judgements with far more precise, accurate and reliable ones, and it is being deployed today for hiring decisions, for evaluating mortgage applications and for deploying police. On balance, perhaps, these decisions are better – and yet they can be badly biased and inaccurate in ways that human judgement would more easily identify.
Nir believes that we are capable of fixing the deficiencies of AI decision-making (perhaps assisted by a different set of AI tools). That would give us consistently better outcomes, but it would not eliminate all of the downsides. One unavoidable downside is that humans are decision-making entities, and we find value, meaning and purpose in making decisions. What will it be like for humans if the decisions involved in the vast majority of occupations, including those of middle managers, are delegated to AI? Aristotle argued that judgement is like a muscle: if you don't exercise it, you lose it. What happens to humans who no longer need to make decisions? Do our modern, high-tech, hugely profitable industrial workplaces become, to their workers, just another kind of button factory?
A similar ambivalence arises from the algorithmically tuned curation process by which products, services, images and ideas are selected for us. Yes, it weeds out a lot of nonsense in the vast universe of information, and it saves a lot of time. But do we also lose the option of finding our own experiences, of being spontaneous, of following a crooked path and seeing or hearing things we would never have discovered? Whose desires and impulses are we following? As John Stuart Mill noted in On Liberty: “One whose desires and impulses are not his own, has no character, no more than a steam-engine has a character.”
James (“J”) Hughes pointed out that the technology for “transhumanism” – the melding of humans with their technology – is already here. Indeed, technologies have changed us from the very beginning. Some human species were able to master fire hundreds of thousands of years ago, and the pre-digestion of cooking vastly increased the ability of the body to absorb calories. This enabled the evolution of shorter digestive tracts and larger brains, an adaptive advantage that helped catapult those humans to global dominance.
Humans today are, as a consequence of technology, better, smarter and happier than ever before, and that trend is going to continue. Technology is advancing, like it or not, and it is changing our bodies, our minds, our culture and even our ethics. The Enlightenment test of being human, and the moral value it assigns, is based on suffering and agency. These were taken to be the defining qualities of being human, and they marked the category of moral superiority. It was thought at the time that all animals (and, sadly, many humans) were not aware enough to suffer and did not have the capacity for independent choices. Hence, they were morally inferior.
Science has over time invalidated both tests, and enabling technologies are increasing the moral possibilities. The boundaries of moral value have expanded. We feel differently now about physical and mental disabilities: they are, at least potentially, fixable with technology. We also have the technology to give smart genes to other species. How smart does a mouse have to be, or a marine mammal, or an octopus, to be admitted into a category of superior moral value? Does it even matter anymore whether the entity has a biological substrate? If we upload a human brain to a neural net, or if a complex set of circuits suddenly “awakens”, will it be considered morally special, like humans?
In this context, ethical considerations relating to new technology are complex. Some of the ethical dilemmas are insoluble, as they sit at the root of one’s worldview. If we were to apply literally the principle of “first, do no harm,” embracing the precautionary principle for science and technology as a litmus test – convince me that nothing will go wrong – then science would stop, and the human race would never advance. At the same time, there are things we do not want anyone to do. We do not want people selling the modern equivalent of snake oil. Reportedly a third of women in rural areas in the 19th century were addicted to the opiate laudanum – is the opiate crisis today really any different? We do not want prosthetics or implants installed in people to be subject to repossession for nonpayment (the premise of the movie Repo Men).
We do want safety, efficacy and informed consent. These require forms of regulation and oversight. We also want, J Hughes argues, equity in a broader sense: equal access to information and services and equal treatment for everyone. These require that oversight, regulation and, yes, taxation, be developed and applied in a fair and impartial manner with the highest consideration being the public interest. J Hughes believes this can only be accomplished and secured in a governance system that embodies democratic principles — leaders can be thrown out when they fail to meet the public interest test.
How do we think about and plan for a future that is impossible to predict? In the field of ecology, “resilience” is the ability to resist or recover from negative events. How do we apply this to technology and human thriving? We want the positive outcomes and we want to avoid or mitigate the negatives, but we often do not know the negatives until years, decades or centuries after the fact. The answer may lie in the antithesis of the precautionary principle – the proactionary principle. It is OK to do things, and we have to take risks. But we do want to be careful and experimental: let’s watch the outcomes closely and be prepared for rapid correction and remediation. This cannot be done without oversight and participation by institutions whose mission is our collective best interest.
James Hughes, Ph.D., is a bioethicist and sociologist, and the Executive Director of the Institute for Ethics and Emerging Technologies (IEET), which he co-founded with philosopher Nick Bostrom in 2004. James also serves as the Associate Provost for Institutional Research, Assessment and Planning for the University of Massachusetts Boston. He holds a doctorate in sociology from the University of Chicago, where he also taught bioethics at the MacLean Center for Clinical Medical Ethics. He is the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future, and from 1999 to 2011 he produced the syndicated weekly radio program Changesurfer Radio.
Nir Eisikovits is an associate professor of philosophy and founding director of the Applied Ethics Center at UMass Boston, and has recently been working on the impact of AI on our everyday experiences. He was previously an associate professor of legal and political philosophy at Suffolk University, where he co-founded the Graduate Program in Ethics and Public Policy. Nir is the author of A Theory of Truces (Palgrave Macmillan) and Sympathizing with the Enemy (Brill), and the guest editor of a recent issue of Theoria on The Idea of Peace in the Age of Asymmetrical Warfare. In addition to his scholarly work, he advises several NGOs focused on conflict resolution and comments frequently on the Middle East conflict for American newspapers and magazines.
Long Now Boston is a 501(c)(3) non-profit organization that is independent from but philosophically aligned with the Long Now Foundation. Long Now Boston provides a forum for discussing, investigating and engaging in issues that have long-term implications for our global cultures. Long Now Boston hosts a monthly Community Conversation series in Cambridge, MA. Please sign up on our website for notices.
Cambridge Innovation Center is an in-kind sponsor of the Long Now Boston Conversation Series. We are very grateful for their support.
Our next Community Conversation will be on December 2, 02019, when Professor Avi Loeb takes Long Now Boston to the frontiers of cosmic discovery and exobiology in a talk on Searching for Life in Deep Space. Professor Loeb is the Frank B. Baird, Jr. Professor of Science and Chair of Astronomy at Harvard, Director of the Institute for Theory and Computation, Founding Director of the Black Hole Initiative, and Chair of both the Breakthrough Starshot Advisory Committee and the Board on Physics and Astronomy of the National Academies. In 2012, TIME magazine selected Loeb as one of the 25 most influential people in space science.
On January 6, 02020, Long Now Boston will hold its 2nd annual FLASH TALKS at the CIC, titled Envisioning the Future. Members are encouraged to submit FLASH TALK proposals on issues of interest. The proposals will be reviewed, and up to six presenters will be selected to give a FLASH TALK. A prize valued at $100 will be given to the best presentation, as selected by the audience.