
Letter: AI Safety and AI Policy

Many presentations on superintelligence begin with a caveat, “what I am about to discuss is not like the robots in Terminator…”, or with a reference to science fiction, “superintelligence is often depicted as the enemy of humanity…”. When discussing the future, researchers often employ literary devices like metaphor and analogy to make difficult ideas comprehensible. Even Nick Bostrom’s works, such as Superintelligence or “The Fable of the Dragon-Tyrant”, lean on fables, stories that help illuminate often counter-intuitive subject matter. Marshaling a global response to the serious problem of superintelligence requires the ability to communicate difficult ideas effectively; the various ways these ideas are communicated are worth investigating.

Paying attention to how literary devices shape popular understanding of AI issues is essential to the future of humanity. A metaphor successfully applied can enhance understanding of the problem, while a failed metaphor can make people think there is no problem at all. Unfortunately, many members of the public think that the danger of AI lies in the vicinity of a nude time-traveling Arnold Schwarzenegger. It is important to combat these misguided metaphors, ideally by replacing them with more accurate comparisons. Effective communication about AI safety is also context-sensitive: the audience determines the manner of communication, and the problem should be presented differently to a crowd of machine learning researchers than to an assembly of the general public.

We need to understand how to communicate effectively with two groups: policy makers and the general public. These two groups determine how the problem of AI safety is addressed. The future of humanity relies on our ability to communicate, agree with one another, and create a united response to this global existential risk. We must learn to make these issues relevant to diverse groups of people. If communication with these two groups is not taken seriously, a united response to AI safety will remain out of reach. Effective cooperation relies on effective communication. In China, for example, AI research often happens behind closed doors; this lack of communication makes cooperation far more difficult.

A rigorous analysis of how AI safety is communicated to policy makers and the general public is badly needed. There are some very important questions to be asked: How do policy makers understand the problem of superintelligence? How does the general public understand superintelligence, and which metaphors do they rely on? Do various demographics perceive AI differently: as a business asset, as a weapon, as an exciting new consumer technology? How is AI understood in the business market: as a boon or a double-edged sword? How does communication shape the purchase of AI technologies by businesses? There are also questions of metaphor: what are the common stories and narratives that people use to understand AI? How might those metaphors circumscribe our predictions about superintelligence? Which metaphors aid our understanding of the problem? These questions need to be asked in the West, but, most importantly, they need to be asked of policy makers, researchers, and the general public in China. Responses to these questions can be gathered through surveys and examined with statistical analysis before policy decisions are made.
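
To make that last step concrete, here is a minimal sketch in Python of how such survey responses might be analyzed. It runs a chi-square test of independence asking whether the metaphor a respondent reaches for depends on their demographic group. The group labels, metaphor categories, and counts below are illustrative assumptions, not data from any real survey.

```python
# A minimal sketch (not a real study): suppose respondents pick the metaphor
# they most associate with AI and we record their demographic group. A
# chi-square test of independence asks whether metaphor choice varies by group.
# All counts are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.DataFrame({
    "group":    ["public"] * 3 + ["policy_makers"] * 3 + ["ml_researchers"] * 3,
    "metaphor": ["terminator", "business_tool", "consumer_gadget"] * 3,
    "count":    [120, 45, 80,    # general public (hypothetical)
                 30, 70, 25,     # policy makers (hypothetical)
                 10, 60, 40],    # ML researchers (hypothetical)
})

# Pivot into a contingency table: rows = groups, columns = metaphors.
table = responses.pivot(index="group", columns="metaphor", values="count")

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
# A small p-value would suggest that the dominant metaphor differs by
# audience -- exactly the kind of finding that should shape how AI safety
# is communicated to each group.
```

This is only one possible analysis; the same survey data could support richer models, but even a simple test like this would tell policy makers whether different audiences are hearing different stories about AI.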

Effective AI policy decisions can lead to greater cooperation between researchers and nations. It is important that communication around AI safety be analyzed with both quantitative and qualitative measures. Such a comprehensive analysis can contribute to the important goal of a responsible, united policy on AI safety.