Author : Anulekha Nandi

Published on Mar 04, 2024

Questions around legal personhood for AI systems loom large, given their potentially adverse consequences and the need to determine liability and remedial measures

Artificial intelligence and personhood: Interplay of agency and liability

Artificial intelligence (AI) is an umbrella term covering a host of computational approaches, from machine learning to natural language processing and computer vision. These approaches perform a diverse range of functions for integrating and analysing data and information through complex interplays of logic, probability, mathematics, perception, reasoning, learning, and action. As a result of its wide-ranging functions and general-purpose application, AI has potentially transformative impacts across sectors such as finance, national security, healthcare, criminal justice, transportation, smart cities, and labour markets. It has become a pervasive aspect of daily life, with applications in speech recognition, customer service, image classification, and recommendation engines used across commercial mobile applications and products ranging from Apple’s Siri and Amazon’s Alexa to autonomous vehicles.

However, the more recent rise of generative AI raises critical questions about the implications of its transformative impacts across issues such as copyright and intellectual property, data privacy and consumer protection, the future of work, and product safety, among others. These, in turn, raise the question of legal personhood for AI, i.e., whether AI should be considered a separate legal entity for regulation.


Determining the civil liability of AI has become an important aspect of AI standard-setting and innovation, particularly because of concerns around potential biases, misrepresentation, ‘hallucinations’ (the generation of false information), and the complex ethical conundrums that AI systems might have to navigate. An illustrative example was provided by MIT’s Moral Machine project, an online software tool designed to collect public feedback on the moral decisions that autonomous driverless cars should make. These included moral dilemmas such as whom the car should prioritise in the event of an impending accident: its passengers or pedestrians? Should it determine the outcome by ranking people based on age (such as children over adults), on number (saving more people over one person), or on other relevant criteria?

Artificial agency and regulatory approaches

Implementing traditional approaches to liability becomes complicated in the case of AI on account of its unpredictability and its causal agency without legal agency. This stems from the opacity or inexplicability of AI algorithms and the inability to trace the causal chains behind AI outputs in order to determine where liability lies in the event of adverse consequences. It becomes particularly contentious when issues are framed around whether the choices made by AI systems and models are agent-dependent, which raises the question of who the agent is when determining liability for a generative AI output: the developer, the deployer, or the AI system itself? These issues are further compounded in the case of multi-model and multimodal AI systems, which combine foundational AI models such as Large Language Models (LLMs) with broader AI capabilities and can draw insights from multiple types of data, including text, images, sound, and video.


Legal subjects include both natural and legal persons, where legal persons refer to legal subjects that are not natural persons, e.g., corporations. The notion of personhood becomes important for an entity to act in law, be held liable, or act on behalf of another, as in a principal-agent relationship. The last assumes a specific legal relationship between the principal and the agent, where the agent acts on behalf of the principal and where it becomes crucial to establish the authority of the agent, or the extent to which the agent is allowed to act on behalf of the principal. However, conceptions of legal agency and notions of artificial agency surface key questions about where responsibility and liability are located.

Artificial agency derives from a set of interrelated elements within a system that respond to their environment and perform functions for other agents, with autonomy stemming from the system’s self-steering and self-governing nature. Within this conception, artificial agents would qualify as legal objects or tools rather than legal subjects. This has led to calls for regulating such systems within a product liability regime through a risk management approach, wherein the party best capable of controlling or managing a technology-related risk is held strictly liable as a single entry point for litigation. This resonates with the wider European approach and the Expert Group’s recommendations on liability for AI and other emerging technologies, which advise against granting legal personhood to AI. It holds developers and deployers accountable for the risk of harm or damage arising from these systems, while making it incumbent upon users to operate and maintain the technology safely.

However, there are objections to this position. Most AI systems tend to be the product of complex cross-border agreements between many developers, deployers, and users before they reach the end-user, which can make it impossible to identify a liable party within this international web of interdependent relationships. Moreover, because of the opacity and self-learning nature of some AI systems, harm or damage can arise in ways that are unforeseeable to those responsible for their development and deployment, making it difficult to pin liability on a particular entity.


However, granting AI legal personhood would also need to confront thorny legal questions across different dimensions of law, i.e., whether AI can own property, enter into contracts, file or be named in a lawsuit, hold special rights, or exercise other legal competencies such as buying and selling as a commercial actor or owning intellectual property. Further, granting AI legal personhood would absolve owners, manufacturers, or developers of responsibility for their creations. While the law can confer legal personhood on an entity if the legislature finds it important for protecting the rights and interests of citizens, legal personhood involving criminal law or the violation of constitutional rights such as privacy and non-discrimination seems to require entities that can be held accountable for their actions. When legal persons like corporations are made liable under such conditions, even though they cannot be put in prison, other forms of punishment apply, such as the imposition of fines, cessation of operations, or organisational closure. Given the rapid pace of technological development and its consequent deep inroads into society and the economy, it remains to be seen how these issues will be resolved, whether through legislation and/or legal or commercial innovation.

Way forward 

Legal personhood for AI remains a contentious topic, with equally compelling arguments on both sides of the debate and as-yet unresolved ethical and legal dilemmas. Some commentators have argued for a customised, restricted form of legal personhood for AI systems tailored to their specific characteristics. On the other hand, it could be argued that questions of legal personhood should be deferred until artificial general intelligence or sentient AI becomes a reality. In the meantime, existing AI computational approaches could be dealt with by updating existing laws, drafting new legislation, or introducing licensing regimes that pre-authorise acceptable applications or models, akin to the operations of the Food and Drug Administration in the US. AI legal personhood thus remains an open question, with no jurisdiction in the world currently ascribing legal rights or responsibilities to AI.


However, as concerns around copyright and privacy infringement, particularly in the era of deepfakes, increasingly take centre stage, so do questions around rights, responsibilities, and liabilities. Risks of harm are no longer confined to theoretical moral dilemmas but have acquired real-world implications for citizens. As India works to balance AI-led transformation with mitigating harm from its adverse fallouts, it has acknowledged the need to establish standards for the civil liability of AI by building on learnings from different jurisdictions. The draft report by the Committee on Cybersecurity, Safety, Legal and Ethical Issues convened by MeitY (Ministry of Electronics and Information Technology) highlights the need for stakeholders to deliberate on the question of legal personhood for AI systems. However, it cautions that the granting of such personhood should be accompanied by an insurance scheme or compensation fund to cover damages. Going forward, regulating AI will require identifying a mode of balancing innovation, rights, and responsibilities in order to determine the legal or regulatory approach within which liabilities and remedies can be clearly apportioned and delineated. This would involve conceptualising AI systems’ agentic capabilities, along with pragmatic consideration of processes to align autonomic capacity, causal agency, and liability, which would help identify and design appropriate legal instruments or regulatory frameworks.


Anulekha Nandi is a Fellow at the Observer Research Foundation.

 

The views expressed above belong to the author(s).
