AI Regulation: Where Is It and Where Should It Go?
Posted on 7th June 2019
As of late, a number of hot topics have arisen in data policy, notably: how to ensure data privacy for individuals; the role of government in the regulation of technology; and how best to effectively and ethically leverage big data. At the forefront of these discussions is regulation of artificial intelligence (AI). As governments race to regulate AI, they should proceed with caution and seek to balance the needs of society and the private sector.
Despite its recent prevalence in public discussion, AI is not a new topic. Leading scholars have long commented on its implications: Jonathan Zittrain has written on the generative Internet and how such systems facilitate new kinds of control, while Timnit Gebru, co-founder of Black in AI, has discussed the diversity crisis facing AI systems. Indeed, a recent study by the AI Now Institute demonstrates how the lack of diversity in the AI workforce is reflected in AI systems themselves. Cass Sunstein, a legal scholar and former Administrator of the White House Office of Information and Regulatory Affairs, has spoken of the impact of social technologies on governance and society, as well as the potential for AI algorithms to counteract the harmful effects of cognitive biases.
Previously, AI systems have been subject to sector- or issue-specific laws and guidelines on a piecemeal basis, such as data protection, cybersecurity, and anti-discrimination regulation. Large regulatory gaps have emerged as a result of this haphazard approach. Now, the EU, US, and countries in Asia and the Middle East are exploring AI-specific guidelines and regulations, fuelled by concerns regarding ethical implications.
In April 2019, the EU released the “Ethics Guidelines for Trustworthy AI”, which set out four ethical principles: respect for human autonomy, prevention of harm, fairness, and explicability. These guidelines link ethical considerations to the broader discussion surrounding data protection and privacy. To realise these principles, seven requirements must be satisfied: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
Under President Trump’s Executive Order on AI, the National Institute of Standards and Technology (NIST) has been tasked with producing federal standards for deploying AI technologies by 10 August 2019. To that end, NIST is soliciting public comment and consulting with the private sector and academia. We expect NIST’s guidance to focus on trustworthiness and to align with that of the EU.
Singapore has released a framework relating to the ethical and responsible use of AI. It is intended to be a living document, evolving in parallel with industry perspectives and as new challenges emerge. Certain articles reveal a nuanced understanding of the challenges of aggregating data and preserving human autonomy. On the one hand, Article 3.6 states that organisations operating in multiple countries should consider differences in societal norms and values, given that individuals live in unique societal contexts. On the other hand, Article 3.7 states that some risks to individuals may only manifest at the group level. We caution against adopting the stance that ethics and norms are overly subjective, and we question the document’s allowance for corporations to decide their own ethics. This level of subjectivity could prevent the establishment of shared ethical norms surrounding AI.
The Smart Dubai Office has created a new Ethical AI Toolkit to advise individuals and organisations offering AI services. The toolkit attempts to address the “black box” problem, calling for consideration of whether decision-making processes introduce bias. While the document offers some useful guidelines, it falls short in describing how they should be technically implemented.
International organisations are beginning to introduce AI-related themes into their agendas, work plans, and research. Most notably, the Organisation for Economic Co-operation and Development (OECD) adopted a set of principles on AI on 22 May 2019. While these are not legally binding, they are highly influential and help governments shape their own national laws. The International Telecommunication Union (ITU) held its third AI for Good Global Summit in May, providing a platform from which future norms and acceptable parameters will be established.
Governments continue to struggle with the development of cybersecurity and data privacy norms that promote both security and growth. In addition, they now also face the challenge of advancing norms surrounding the ethical use of AI.
The mere fact that “ethics in AI” has become such a widely debated issue does not warrant hasty action by governments to produce manifestos committing to vague ethical ideals. Not only could this be ineffective, but it could also be harmful for companies, which need specific guidance regarding their obligations.
Instead, governments should aim to create regulations that promote ethics by design, incorporating checks and balances into the systems that use AI. Similar to security by design, this would involve explicitly specifying what capabilities systems must include, how the development process should be structured, and how concepts such as human agency and bias map onto the design of the underlying algorithms.
However, requirements of this nature will significantly impact companies and potentially stifle innovation if done haphazardly. To avoid these issues, governments should consider the following as they attempt to regulate AI:
Don’t get caught up in lingo – Just because something is labelled “ethical” doesn’t mean it is inherently good. It is necessary to connect terms like ethics, trust, and fairness with protections that already exist, such as consumer rights and data protection.
Collaborate with all sectors – Policy should be well informed and aligned with societal needs, values, and a holistic vision for a better future. AI regulation will invariably intersect with data privacy, big tech, data regulation, consumer rights, ethics, social justice, and law. Developing perspectives on these issues, and on how best to regulate the sector, should involve multiple actors, including technology companies, academia, civil society groups, and governments.
Recognise underrepresented communities – AI systems can reach conclusions in ways that mask their underlying logic. All stages and elements of AI applications, such as training data, algorithms, and real-world performance, should be examined to ensure they fairly serve underrepresented and historically marginalised communities.
Author: Halak Mehta-Shrivastava, International Public Policy Manager, Access Partnership