Last Wednesday we sat down with interested fellows and members of the public for a discussion on the ethics of artificial intelligence. After a lively introduction by Dr. Joanna Bryson, professor of computer science at the University of Bath and lecturer on ethics and AI, we discussed the transformative and sometimes dangerous power of these new technologies.
1. Not all AIs are created equal
In her opening speech, Joanna Bryson was keen to tease apart the difference between ‘general’ intelligence and more task-specific forms of artificial intelligence. The creation of a ‘singularity’ is still the stuff of science fiction. For now, our concern should be how we regulate specific applications of AI – for instance in transport, healthcare, law and customer service. Dr. Bryson was keen to stress that we should be looking at how these new technologies will be applied in discrete situations, rather than worrying about longer-term doomsday scenarios.
2. AI presents new opportunities
Almost everyone at the event seemed to agree that artificial intelligence had many revolutionary applications. In certain areas of medicine, for instance, many were enthused by the precision and accuracy that could come with new developments in AI. Machine learning could be vital in new radiology systems, and studies have shown that its use could lead to more accurate diagnoses. The participants of our salon saw its applications as myriad. While moral dilemmas and questions of regulation dominated the discussion, there was also excitement over the opportunities it could bring.
3. New technologies need to benefit everyone
A problem raised by many was how we can make new technologies serve the interests of society as a whole. When asked about potential trade-offs in the development of AI, a clear problem came out on top: what will the effects of artificial intelligence be on the labour market? Our fellows have clearly been keeping in tune with recent RSA work on Universal Basic Income and the 4-day week. Automation is changing the face of work, and we need to be prepared.
4. We don’t trust tech giants to regulate AI
The largest technology companies command a massive proportion of web traffic – some estimates put Facebook and Google’s combined share as high as 70%. Distrust of these corporations hasn’t been helped by their blasé attitude to scrutiny – just this week Mark Zuckerberg refused to appear before an international panel on internet regulation led by the UK and Canada. There was a general feeling amongst our participants that these organisations cannot be trusted to regulate themselves. Dr. Bryson was keen to praise legislation such as GDPR for its role in regulating big data – we will likely need similar legislation for the use of highly intelligent systems in the future.
5. We are keen to keep the ‘human in the loop’
This phrase was often brought up by our participants. Oversight and control mechanisms will become more crucial as AI systems take on a broader range of roles. Humans still have the upper hand in dealing with certain types of inaccuracies, and human control is essential in situations where the cost of error is very large. US think-tank AI Now has called for every public agency to conduct Algorithmic Impact Assessments of their automated decision systems, which would include measuring their impact over time.
6. We want to see more transparency
As Cathy O’Neil discussed in a talk at the RSA last year, we can’t assume that algorithms act objectively. Rather, they are designed to carry out the particular ends of whoever programmed them, and are shaped by the datasets on which they are trained. So-called ‘black box’ algorithms are highly opaque, and in certain cases we can only view the inputs and outputs of such a system. A common theme at last week’s event was the notion of responsibility, and the question of who is culpable when an autonomous system acts malevolently. This was perhaps the greatest takeaway from this event: transparency and education will be key if AI is to benefit society as a whole.
Interested in RSA events and our research on artificial intelligence? You can read our report on the ethical use of AI.
Keep updated on the RSA’s Events page and the work of our Economy, Enterprise and Manufacturing Team.
Will Grimond is part of the Media and Communications team at the RSA. He has a History & Politics degree from Oxford University, and writes on UK and international politics.
Comments
Last week I gave a talk to A-level students at Woodhouse College (North London) on technology and the labour market. The younger generation is naturally worried about their future and the job opportunities they will have access to in a few years’ time. We ended with a brief discussion of whether new technologies will promote social mobility. I think this is a potential positive outcome of the fourth industrial revolution that perhaps has not been as widely discussed as other issues.
The practicality of AI is a key issue, including what we can and cannot do. This is also valid for blockchain. What we cannot do, or should not do, is use AI for anything which depreciates freedom or human integrity. China, in the latter respect, has made a headstart with an algorithm that recognises individuals on the street and adjusts their credit scores depending on their ‘good’ or ‘bad’ connections. This nightmarish Orwellian situation is now a reality. We should not, of course, let this happen here. As to the potential benefits of AI, they are innumerable.