The Ethical Challenge of Artificial Intelligence

Posted on 27th April 2018

Policy-makers and regulators around the world are becoming increasingly fixated on the rapid growth of artificial intelligence (AI). Some experts (including the eminent physicist and cosmologist Stephen Hawking and the entrepreneur Elon Musk) have made alarming predictions about the potential for AI to lead to human alienation, suffering, or worse, the actual destruction of human life. However, even without taking such an extreme view, it’s evident that the adoption of AI tools will pose new ethical challenges, which may require a regulatory response.

On 16 April the House of Lords Select Committee on AI set out its thoughts on the matter, following a comprehensive inquiry that had been underway since June 2017. Having taken written comments from over 200 organisations and individuals, and heard testimony from a variety of industry, academic and regulatory bodies, peers concluded that a light-touch, industry-led regulatory model was preferable.

The committee report does envision an important role for government in ensuring that AI is deployed in a responsible and ethical way, for example through the creation of “data trusts” to facilitate the ethical sharing of data. The report sees data trusts as a way for UK-based SMEs to compete with large, mostly US-based technology companies that are close to holding “data monopolies”. It also argues that the public sector should lead in the procurement of AI solutions, as a key way to build public trust and confidence in the use of AI.

At the EU level, the European Commission will next week set forth its own position on the topic of AI regulation, with the publication of a communication which is expected to touch on accountability, transparency and liability in the context of AI tools and services. Early indications are that the Commission will press companies developing AI solutions to explain in a clear and transparent way how decisions made using AI can avoid perpetuating entrenched bias, and to clarify who should be liable when an AI product or service causes harm.

Industry will push back strongly on any attempt by regulators to compel disclosure of proprietary information, such as the algorithms underpinning machine learning systems. While supporting the aims of transparency and accountability, the prevailing logic in the tech industry is that creating a new regulatory framework specifically for AI today would be premature, as the way this market will develop is still highly uncertain.

There is a degree of truth to this assertion. But given that EU regulators will soon be armed with strong new enforcement powers in data protection (through the General Data Protection Regulation) and cybersecurity (through the NIS Directive), it is entirely appropriate for regulators to consider how these new powers can be deployed to address the important ethical and normative concerns associated with AI. Without strong, demonstrable public oversight, trust in AI among the general population will be slow to develop, and AI adoption rates will suffer as a result.


Author: Matt Allison, Manager, Public Policy, Access Partnership

This article was originally published on Tech UK on 26 April as part of AI Week.
