The Global Artificial Intelligence (AI) Race and Data Ethics
Chemmalar. S
16 Jun 2022
Does embracing AI accelerate data privacy concerns?
Artificial Intelligence (AI) has evolved rapidly, and human dependence on the technology has increased drastically in recent years. Because AI relies entirely on data, countries seeking to accelerate its deployment concentrate on securing high-quality data. In the words of Russian President Vladimir Putin, “AI is the future, and whichever country becomes the leader in this sphere will become the ruler of the world”. In pursuit of winning the AI race, the United States and China have invested heavily in AI research, data sharing and start-ups. Some countries have voiced their desire to develop human-centric AI that mirrors ethical values, giving due consideration to transparency and human rights in their strategic approach. For instance, the European Union’s approach is value-based AI oriented towards socio-economic change [1]; Germany focuses on work-life balance; South Korea and Canada are committed to research and development; Japan strives to improve productivity through robotics; India’s “AI for All” signals inclusive technology; France intends to make AI environment-friendly; and the United Arab Emirates (UAE) seeks to boost government performance through AI [2]. However, no two governance strategies for digital technologies are the same, and differences in strategic approach are already surfacing.
Big Data – Raw material of AI
Data is the key asset of AI technology, and international “datafication” has increased in recent years. The strategies of most countries involve collecting and processing massive amounts of data to accelerate AI. Making individual data available to governments and researchers facilitates responsive governance and enriches individuals’ lives. AI requires large volumes of data to function effectively, and its efficiency depends heavily on how it is designed and what kind of data it uses.
As for the privacy rights of individuals, a clear legal mechanism to protect humans in their interactions with machines has yet to be put in place. Many strategies refer to the need for ethical frameworks and human-centric approaches to AI, and the right to privacy is the aspect most nations highlight. A few countries assert that AI itself can detect privacy issues and protect humans without human intervention, and some national strategies claim that cyber-security issues can be controlled with AI technology.
UK and Data Privacy
The UK government has established the Centre for Data Ethics and Innovation, a think tank that provides guidance on the procurement of responsible AI and the ethical use of data in order to maximize the benefits of the technology [3]. Data sharing is one of the core ideas behind building the digital infrastructure needed to scale up AI and the data-driven economy. In this regard, the UK has joined hands with the Open Data Institute (ODI) to explore a legal framework called a “data trust” to facilitate seamless sharing of data, and the ODI has launched three pilot projects to study the efficacy of data trusts. The Data Protection Act 2018 was enacted by the UK government to implement the European Union’s General Data Protection Regulation (GDPR), regulating how the personal information of UK citizens is handled by organizations and businesses across the globe. Accordingly, it lays down data protection principles stipulating stringent legal protection for more sensitive information such as religious beliefs, genetics, trade union membership, sex life, health and race. The act protects, among others, the right to be informed, the right to erasure (the “right to be forgotten”) and the right to data portability.
USA and Data Privacy
The strategic approach of the USA is to enhance data sharing so as to increase the value of resources for research and development (R&D), while protecting the safety, security and privacy of data within a comprehensive framework. The Algorithmic Accountability Act (S. 1108, H.R. 2231) was introduced in Congress on April 10, 2019. The act requires “companies to regularly evaluate their tools for accuracy, fairness, bias, and discrimination”, and mandates that companies conduct impact assessments through external technology experts and independent auditors and reasonably address issues in a time-bound manner [4]. The Algorithmic Accountability Act explicitly states that it would not supersede state law. It is the first federal legislative effort reflecting the European Union’s General Data Protection Regulation (GDPR) in regulating AI across businesses. Most importantly, the act does not provide a private right of action but empowers the Federal Trade Commission (FTC) to bring civil suits on the grounds of deceptive and unfair acts in the industries concerned.
Similarly, the Commercial Facial Recognition Privacy Act of 2019 (S. 847) prohibits certain entities from using facial recognition technology to identify or track an end user without obtaining the affirmative consent of that end user. Section 2(5) of the act defines facial recognition as technology that analyses facial features in still or video images, or technology used to assign unique identifiers to, or to identify, a specific individual. The act forbids a controller from using facial recognition technology without the consent of the end user, discriminating against an end user in violation of federal or state law, repurposing facial recognition data, or sharing facial recognition data with an unaffiliated third party without the end user’s consent. Under section 4(c)(1), if a state attorney general has reason to believe that the interests of the residents of the state have been threatened or adversely affected, he or she may, as parens patriae, bring a civil action for relief on behalf of the residents against the violators in the relevant district court. Before bringing such an action, the attorney general must notify the Federal Trade Commission of the intention to do so, along with a copy of the complaint to be filed.
India and Data Privacy
NITI Aayog has charted a way forward towards the sustainable and inclusive development of the country. Its policy paper was designed with the ethical motive of leveraging AI for all [5]. As the name suggests, the “AI for All” strategy reveals the intention and effort of the Indian government to enhance and improve people’s living conditions by providing smart infrastructure facilities. Regarding legal protection and cyber security, India, with the intent of providing a comprehensive data protection law in line with the GDPR (the EU law on data protection), introduced the Personal Data Protection Bill in the Lok Sabha in December 2019; at present it is being analysed by a joint parliamentary committee.
Ethical, Inclusive and Equitable AI
The weak definition and transnational character of AI make it difficult for regulatory authorities to devise a comprehensive mechanism for global governance of the technology. While the differences between regulatory regimes and jurisdictions are considerably wide, countries share the crucial goal of placing themselves at the top of this emerging field. As more automated decision systems are used by public agencies, experts and policymakers worldwide have begun to debate their credibility. Emerging technologies are increasingly cross-border, and significant opportunities could be lost without some level of alignment in the regulations and norms that guide technological development and implementation across jurisdictions [6]. The challenges for equitable and inclusive AI implementation are numerous. It is not yet clear how to assess AI’s effects or whether algorithms can fully cope with complex social and historical settings. Algorithms are human creations and, as such, are subject to the same biases people have. Many proposals have emerged from international organizations in the past few years, as geopolitical entities such as the UN, the EU and the OECD have begun to encourage the discussion on AI regulation [7]. The goal behind many of these recommendations is to generate a human-centred approach to the development of AI, reducing differences among countries and ensuring a minimum level of protection for individuals. To conclude, regardless of how robust the mechanisms to control AI may be, stakeholders should be guided by ethical principles in innovating, using, deploying and implementing AI.
References
[1] European Commission 2018, High-Level Expert Group on Artificial Intelligence, viewed 16 January 2021, from https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence.
[2] Samir S, Nikhila & Madhulika S 2018, In Pursuit of Autonomy: AI and National Strategies, Observer Research Foundation.
[3] Centre for Data Ethics and Innovation (CDEI), Introduction to the Centre for Data Ethics and Innovation, viewed 20 February 2020, from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/787205/CDEI_Introduction-booklet.pdf.
[4] Yoon Chae 2020, ‘U.S. AI Regulation Guide: Legislative Overview and Practical Considerations’, Journal of Robotics, Artificial Intelligence & Law, vol. 3, no. 1, pp. 17-40.
[5] NITI Aayog 2018, National Strategy for Artificial Intelligence, viewed 20 February 2019, from https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf.
[6] ITU 2018, Assessing the Economic Impact of Artificial Intelligence, viewed 01 February 2021, from https://www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-ISSUEPAPER-2018-1-PDF-E.pdf.
[7] OECD 2019, ‘Artificial Intelligence in Society’, viewed 06 January 2020, from https://ec.europa.eu/jrc/communities/sites/jrccties/files/eedfee77-en.pdf.