The Ethical Challenge of Artificial Intelligence

In a guest blog for Tech UK, Matt Allison asks whether artificial intelligence presents a fundamental ethical challenge requiring a new regulatory framework.

Policy-makers and regulators around the world are becoming increasingly fixated on the rapid growth of artificial intelligence (AI). Some experts (including the eminent physicist and cosmologist Stephen Hawking and the entrepreneur Elon Musk) have made alarming predictions about the potential for AI to lead to human alienation, suffering, or worse, the actual destruction of human life. However, even without taking such an extreme view, it’s evident that the adoption of AI tools will pose new ethical challenges, which may need a regulatory response.

On 16 April the House of Lords Select Committee on AI set out its thoughts on the matter, following a comprehensive inquiry that had been underway since June 2017. Having taken written comments from over 200 organisations and individuals, and heard testimony from a variety of industry, academic and regulatory bodies, peers concluded that a light-touch, industry-led regulatory model was preferable.

The committee report does envision an important role for government in ensuring that AI is deployed in a responsible and ethical way, for example through the creation of “data trusts” to facilitate the ethical sharing of data, seeing this as a way for UK-based SMEs to compete with large, mostly US-based technology companies that are close to holding “data monopolies”. The report also points out the need for the public sector to lead in procurement of AI solutions as a key way to build public trust and confidence in the use of AI.

At the EU level, the European Commission will next week set forth its own position on AI regulation, with the publication of a communication expected to touch on accountability, transparency and liability in the context of AI tools and services. Early indications are that the Commission will press companies developing AI solutions to explain clearly and transparently how AI-driven decisions can avoid perpetuating entrenched bias, and to clarify who should be liable when an AI product or service causes harm.

Industry will push back strongly on any attempt by regulators to compel disclosure of proprietary information, such as the algorithms underpinning machine learning models. While supporting the aims of transparency and accountability, the prevailing logic in the tech industry is that creating a new regulatory framework specifically for AI today would be premature, as the way this market will develop is still highly uncertain.

There is a degree of truth to this assertion, but given that EU regulators will soon be armed with strong new enforcement powers in data protection (through the General Data Protection Regulation) and cybersecurity (through the NIS Directive), it is entirely appropriate for regulators to consider how these new powers can be deployed to address the important ethical and normative concerns associated with AI. Without strong, demonstrable public oversight, trust in AI among the general population will be slow to develop, and AI adoption rates will suffer as a result.

Author: Matt Allison, Manager, Public Policy, Access Partnership

The article was originally published on Tech UK on 26 April as part of AI Week.
