Social Sector Hotline: Can States delegate responsibilities to Artificial Intelligence?


October 21, 2021

CAN STATES DELEGATE RESPONSIBILITIES TO ARTIFICIAL INTELLIGENCE?

 


  • No universally accepted definition of Artificial Intelligence (“AI”) despite the advancements in the technology and the growth of its ambit
  • Obligations on States under International Human Rights Law must be respected while using AI technologies in State functions
  • Deployment of AI in State functions may pose challenges to: (a) the Right to Life; (b) the Right to Privacy; and (c) the Right against Discrimination
  • Need for a regulatory framework governing the use of AI in law-and-order governance

INTRODUCTION

Recently, the White House’s Office of Science and Technology Policy expressed the need for a new “bill of rights” to guard against powerful and uncharted uses of artificial intelligence technology in day-to-day life. Concerns about the misuse of this technology, which can potentially infringe upon the basic rights of individuals, were raised by Mr. Eric Lander, chief science advisor to the Biden Administration.1

Artificial Intelligence (AI) is a machine’s capacity to duplicate or replicate intelligent human behavior. It is an umbrella term that encompasses multiple technologies, including machine learning, neural computing, deep learning, computer vision, natural language processing (NLP), machine reasoning, and strong AI.2 However, there is no universally accepted definition of AI, and with growing technology and advanced data science, its ambit is ever increasing. The use of AI in day-to-day life is not a new phenomenon, but its active use by State Governments for maintaining law and order in society has come with its own challenges. Regulating the use of AI in a State’s internal governance functions becomes imperative, as the powerful technologies created by AI pose many ethical and legal challenges. Despite the rampant use of AI technology, there has been no regulatory framework prescribing the method, extent and conditions of its use in governing law and order situations.

CHALLENGES POSED BY AI

  • Right to Life

    There is a positive obligation placed upon States to protect the ‘right to life’ of their citizens. International Human Rights Law (IHRL), which governs the obligations of States towards citizens and other individuals within their jurisdiction, imposes positive duties on Governments to protect individuals from human rights violations, including infringements of the ‘right to life’.3

    Deprivation of life is permitted only if it happens within a legal framework, keeping in mind the principles of necessity, proportionality and legality.4 The right to life is the grundnorm of all evolved legal systems; it encompasses not merely animal existence but a right to a dignified life.5 If machines are given the power to take policing decisions on their own, those decisions are most likely to be made on the basis of automated processes. For such decision-making, data will be collected, stored, analysed and used through algorithms, and the AI’s decisions would rely upon software that predicts the likelihood of a given scenario. AI cannot be expected to understand the complexities of structural societal problems, as it lacks the basic elements of empathy, pain, guilt, feeling, emotion, love and care that are exclusive to human beings. It is thus reasonable to doubt whether a machine would be able to assess the necessity, proportionality and legality of any action. Moreover, the State cannot delegate its obligation to protect the life and dignity of its individuals to a machine, no matter how advanced it may be.

  • Right to Privacy

    The Right to Privacy is protected under IHRL. The International Covenant on Civil and Political Rights (ICCPR), a multilateral treaty adopted by United Nations General Assembly Resolution 2200A (XXI) on 16 December 1966,6 prohibits ‘arbitrary or unlawful interference with his privacy’ under Article 17, thereby obligating State parties to avoid unwarranted interference with the privacy of an individual.7 Constant surveillance by the State would subject its citizens to continuous monitoring, carried out effectively through the collection of data on every activity of an individual. This data would be analysed by a set of recognised algorithms (which may or may not be accurate for every situation), which would then lay down a resolution plan. This is likely to result in a ‘one size fits all’ solution to certain complex societal problems. Such constant monitoring of individual activities would be a serious interference with the liberty of the individual, and safeguards are therefore needed against such harmful uses of AI. Recently, in India, the Right to Privacy was recognised as a fundamental right,8 meaning that this right is placed on an equal pedestal with the ‘right to life’ and the ‘right against discrimination’. The protection of privacy has become an inevitable duty that States must perform diligently.

  • Right against discrimination

    The Council of Europe recommendation has defined profiling as ‘an automatic data processing technique that consists of applying a “profile” to an individual, particularly in order to take decisions concerning her or him or for analysing or predicting her or his personal preferences, behaviours and attitudes.’9 Constant surveillance and collection of data by the State with the use of AI can result in a ‘profiling’ activity, and profiling of personal data can infringe the right against discrimination. The probability of algorithm-based decision-making being biased on grounds of colour, caste, gender, etc. cannot be entirely denied. States with legal systems prohibiting discrimination would then have the responsibility to avoid any such event of discrimination, and that responsibility cannot be delegated to machinery run by AI. The likelihood of AI systems being tampered with also cannot be ignored: if such sophisticated systems, or the information gathered by them, end up being misused, the repercussions can be severe for the State as well as for individuals. According to a report by Reuters, Amazon had used AI to automate the resume-review process for engineers and coders. The team that had trained the AI was male dominated, and as a result the system learned to disqualify anyone who attended a women’s college or who listed women’s organizations on their resume.10 Such an act of discrimination by one of the world’s largest corporations could have opened the floodgates for gender-discrimination lawsuits, but Amazon withdrew the software. The risk of indirect discrimination arising from apparently neutral assessments was recognised by the European Court of Human Rights in D.H. & Ors v. The Czech Republic, where the court struck down an apparently neutral set of statutory rules implementing tests designed to evaluate the intellectual capability of children: an excessively high proportion of minority Roma children scored poorly and were consequently sent to special schools, probably because the tests could not navigate cultural and linguistic differences.11

CONCLUSION

With the development of AI, there is a need to revisit the applicable rules so that they remain in tune with international human rights law. It is important to determine the extent and manner of use of AI, to regulate the conduct of States and individuals, and to analyse the due diligence that a State must exercise before taking the aid of AI technology for governance functions. Both the right to life and the right to privacy demand that the use of AI in domestic law enforcement be regulated to the standards of the law of the land of the State. Governmental actions would have to be predictable and provide adequate and effective guarantees against abuse of the power that AI technology offers, and transparency in the use of AI in governance is of utmost importance. States taking the aid of AI cannot delegate responsibility for law and order control to AI entirely. The possibility of AI technology being misused cannot be set aside, and an effective regulatory framework covering the legal use, the method of use and the rationale behind the use of AI in governance therefore needs to be developed. Issues of liability and responsibility for the use or misuse of AI must be addressed by such a framework in order to enable the individuals of a State to take necessary action against misuse of the AI technology.

 

– Aanchal Singh & Rahul Rishi

You can direct your queries or comments to the authors.


1 See AP News, White House proposes tech ‘bill of rights’ to limit AI harms, available at https://apnews.com/article/joe-biden-science-technology-business-biometrics-b9dbf5fee3bf0e407b988b31e21f5300

2 See PR Newswire, Artificial Intelligence Market Forecasts, available at http://www.prnewswire.com/news-releases/artificial-intelligence-market-forecasts-300359550.html.

3 See Human Rights Committee, General Comment No 31, The Nature of the General Legal Obligation Imposed on States Parties to the Covenant, UN Doc CCPR/C/21/Rev.1/Add.13 (26 May 2004), para 8, available at https://undocs.org/CCPR/C/21/Rev.1/Add.13

4 See Spagnolo Andrea, ‘Human rights implications of autonomous weapon systems in domestic law enforcement: sci-fi reflections on a lo-fi reality’, available at http://www.qil-qdi.org/human-rights-implications-autonomous-weapon-systems-domestic-law-enforcement-sci-fi-reflections-lo-fi-reality/

5 Maneka Gandhi v. Union of India, 1978 AIR 597

6 See International Covenant on Civil and Political Rights, available at https://treaties.un.org/Pages/ViewDetails.aspx?chapter=4&clang=_en&mtdsg_no=IV-4&src=IND

7 See International Covenant on Civil and Political Rights, ‘Article 17 1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation. 2. Everyone has the right to the protection of the law against such interference or attacks.’, available at https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx

8 K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1

9 See Recommendation CM/Rec(2010)13 of the Committee of Ministers to member States on the protection of individuals with regard to automatic processing of personal data in the context of profiling, Adopted by the Committee of Ministers on 23 November 2010 at the 1099th meeting of the Ministers’, available at https://rm.coe.int/16807096c3

10 See Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

11 D.H. and Ors. v. The Czech Republic, Application No. 57325/00, Judgment dated 13.11.2007, available in English at http://www.errc.org/uploads/upload_en/file/02/D1/m000002D1.pdf




DISCLAIMER

The contents of this hotline should not be construed as legal opinion.

This Hotline provides general information existing at the time of preparation. The Hotline is intended as a news update and Nishith Desai Associates neither assumes nor accepts any responsibility for any loss arising to any person acting or refraining from acting as a result of any material contained in this Hotline. It is recommended that professional advice be taken based on the specific facts and circumstances. This Hotline does not substitute the need to refer to the original pronouncements.


