
An Algorithm on Trial: Legal Personhood and the Ethics of Artificial Intelligence

  • Writer: Sankalp Suri
  • Apr 30
  • 7 min read

Updated: May 3

This article explores the emerging complexities of granting legal personhood to artificial intelligence (AI) entities, analysing the challenges and potential consequences of such recognition. It situates the question within the current global discourse while examining the evolving autonomy of AI, the limitations of existing legal frameworks, and the ethical dilemmas surrounding AI agency and liability.


While refraining from definitive conclusions, the author emphasizes the need for balanced legal paradigms that accommodate technological innovation while safeguarding human interests. The article aims to stimulate further discourse on whether—and to what extent—AI should be granted legal personhood in an increasingly AI-integrated world.

 

The principle that actions have consequences is foundational to legal systems worldwide. Law ensures order by assigning rights and responsibilities to individuals: traditionally, natural persons or artificial entities acting through them. However, rapid advancements in artificial intelligence (AI) challenge these conventional boundaries. AI systems, defined by the European Union (EU) as autonomous, adaptive machine-based systems, now operate with minimal human oversight. This raises a critical question: can, and should, such systems themselves be recognized as legal persons?

 

The concept of artificial intelligence (AI) has long captured human imagination. However, until recent developments, machines exhibiting cognitive abilities comparable to human intelligence were largely absent from everyday life. Early forms of AI demonstrated only rudimentary capabilities, heavily reliant on a predetermined set of human-issued commands.

 

Defining the ambit of artificial intelligence remains a formidable challenge due to the absence of a universally accepted definition of ‘intelligence’ itself. In this context, any attempt to define AI is analogous to pursuing a receding horizon—appearing closer, yet perpetually out of reach.

 

Without a clear and coherent understanding of the subject matter, efforts to regulate AI are likely to prove ineffective. For a legal framework to operate successfully, it must be grounded in a precise and accessible comprehension of the phenomenon it seeks to govern. Individuals cannot reasonably be expected to adhere to rules that are ambiguous or incomprehensible. Where the law is so opaque or uncertain that it cannot be known or anticipated in advance, its capacity to guide human behaviour is significantly undermined, if not entirely nullified.

 

Contemporary AI systems are generally classified into two broad categories: narrow AI and general AI. Narrow AI, often referred to as ‘weak’ AI, encompasses systems engineered to perform specific, well-defined tasks using computational intelligence. Examples include natural language processing tools used for translation and the navigation systems of autonomous vehicles. Such systems are limited to their designated functions and lack the capacity to operate beyond their original programming. The vast majority of AI technologies currently in use fall within this category.

 

In contrast, general AI, or ‘strong’ AI, refers to systems capable of performing a wide range of cognitive tasks, including the autonomous generation of new objectives. This category seeks to emulate the breadth and depth of human intelligence and is frequently portrayed in popular culture through depictions of sentient robots and autonomous AI entities. However, current technological capabilities have yet to achieve a form of general AI comparable to human cognition, leading some commentators to question whether true general AI is attainable at all.

 

 

This article evaluates the feasibility and implications of recognizing AI as a legal person. It examines different forms of legal personhood, the evolving nature of AI autonomy, and the ethical and legal dilemmas posed by AI decision-making. Rather than advocating for immediate legal recognition, the discussion seeks to foster informed deliberation on how legal systems might adapt to AI’s unique challenges.

 

Defining a Legal Person


A "legal person" is an entity recognized by law as having rights and obligations. Unlike natural persons (humans), legal personhood can be extended to corporations, animals, and even natural entities like rivers. Legal personality comprises two key aspects:


(A) Legal Subjecthood – The capacity to hold rights and duties.

(B) Legal Agency – The ability to exercise those rights and duties.

 

While all legal agents are legal subjects, not all subjects possess agency. Historically, only humans have been granted full legal agency. However, as AI systems gain autonomy, this exclusivity may need re-evaluation.
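To make this distinction concrete, it can be sketched as a small data model. The sketch below is purely illustrative: the class and field names are invented for exposition, not drawn from any statute, and the only rule it encodes is the one stated above, that agency presupposes subjecthood.

```python
from dataclasses import dataclass

@dataclass
class LegalPerson:
    """Toy model of legal personhood:
    subjecthood = capacity to hold rights and duties;
    agency      = capacity to exercise them oneself."""
    name: str
    is_subject: bool = True
    has_agency: bool = False

    def __post_init__(self) -> None:
        # Encodes the rule above: all legal agents are legal subjects.
        if self.has_agency and not self.is_subject:
            raise ValueError(f"{self.name}: an agent must also be a subject")

# Hypothetical instances mirroring the categories discussed below:
adult = LegalPerson("adult human", has_agency=True)  # full legal capacity
minor = LegalPerson("minor")   # subject; a guardian exercises agency
river = LegalPerson("river")   # subject with limited rights, no agency
```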

 

Legal Personhood as a Construct

Legal personhood is not inherent but conferred by legal systems. It is an institutional status, meaning legislatures and courts determine which entities qualify. This flexibility allows legal systems to adapt—raising the possibility of extending personhood to AI under certain conditions.

 

Types of Legal Persons and Their Liabilities


Legal persons can be categorized as natural (humans) or non-natural (corporations, animals, etc.). The extent of their rights and liabilities varies:


(A) Natural Persons

Minors have legal rights but lack full agency (their guardians act on their behalf).

Adults possess full legal capacity unless restricted by law.


(B) Non-Natural Persons

Corporations act through human representatives.

Animals/Rivers have limited rights (e.g., protection from harm) but no agency.

Idols/Deities are legal persons in some jurisdictions but lack independent agency.

 

AI does not fit neatly into these categories. Unlike corporations, AI can make unsupervised decisions, complicating liability attribution.

 

AI Autonomy


AI’s defining feature is its adaptive decision-making, which evolves beyond initial programming. For example, Generative AI (e.g., ChatGPT) produces unpredictable outputs, sometimes generating false or harmful content ("AI hallucinations").

If AI operates independently, can it be considered a legal agent? Or should it remain a legal subject, with liability falling on developers, users, or manufacturers?

 

Legal and Ethical Dilemmas

 

(A) Liability in AI-Caused Harm

- If a self-driving car causes an accident, who is responsible—the AI, the manufacturer, or the user?

- Current laws struggle to assign blame when AI acts unpredictably.

 

(B) Criminal Liability and Intent

- Criminal law requires mens rea (guilty mind). Can AI possess intent?

- AI-driven crimes (e.g., deepfake fraud, algorithmic collusion) complicate accountability.

 

(C) Ethical Concerns

- AI lacks consciousness or moral reasoning. Granting personhood risks anthropomorphizing machines.

- Yet, denying personhood may leave victims of AI harm without recourse.

 

Existing legal practices cannot provide conclusive answers to these questions. AI is not like a corporation, in which ultimate agency remains with humans; nor is it an idol or a river, for its nature is far more dynamic and its capacity to affect humans exceeds ordinary human foresight.

 

The Path Ahead: Regulating Artificial Intelligence Without Rushing to Personhood


The European Union’s Risk-Based Regulatory Model

The EU AI Act adopts a tiered approach to regulation, categorizing AI systems by risk level and imposing proportionate obligations. High-risk applications, such as those in healthcare and law enforcement, are subject to more stringent requirements. Notably, the legislation refrains from addressing the question of AI personhood, instead emphasizing the necessity of human oversight and maintaining clear lines of accountability.
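As a rough sketch of how such a tiered scheme operates, the example below maps illustrative application domains to risk tiers and proportionate obligations. The four tier names follow the Act’s general structure (unacceptable, high, limited, minimal risk), but the domain-to-tier mapping and the obligation descriptions are simplified assumptions for exposition, not a restatement of the Regulation’s actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, human oversight, logging"
    LIMITED = "transparency duties (e.g., disclosing that a user faces an AI)"
    MINIMAL = "no additional obligations"

# Hypothetical, simplified domain-to-tier mapping for illustration only;
# the Act itself assigns tiers through detailed annexes and use-case criteria.
DOMAIN_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "law-enforcement risk assessment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(domain: str) -> str:
    """Look up a domain's tier (defaulting to minimal) and its obligations."""
    tier = DOMAIN_TIERS.get(domain, RiskTier.MINIMAL)
    return f"{domain}: {tier.name} risk -> {tier.value}"

for domain in DOMAIN_TIERS:
    print(obligations(domain))
```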


Exploring Legal Frameworks for AI Personhood

As AI systems evolve in complexity and autonomy, legal scholars and policymakers have proposed several potential models for integrating AI into existing legal frameworks:


(A) Limited Legal Personhood: AI entities could be granted limited legal capacities, such as holding property or entering into contracts, while ultimate liability would rest with identifiable human stakeholders.

(B) "Electronic Personhood" as a New Legal Category: This model proposes a distinct, hybrid legal identity for AI, conferring limited rights while ensuring that control and accountability remain firmly with human actors.

(C) Strict Liability for Developers and Users: Under this framework, all responsibility for AI actions would rest with human developers, operators, or users, thereby precluding the possibility of AI being used as a legal shield or scapegoat.


Cautious Advancement Is Crucial

Premature recognition of AI as a full legal person risks destabilizing established legal doctrines and accountability structures. Conversely, failure to address AI's growing operational autonomy may create significant gaps in legal responsibility. A phased, evidence-driven approach is essential to ensure that regulation keeps pace with technological development without undermining foundational principles of law and ethics.


AI is reshaping society, compelling legal systems to evolve. While granting AI legal personhood remains contentious, the discussion must continue, involving policymakers, technologists, and ethicists. The goal should be a framework that:

- Encourages AI innovation

- Ensures human accountability

- Adapts to AI’s evolving capabilities

 

Rather than fearing AI’s rise, we must develop adaptive legal mechanisms that balance technological progress with ethical safeguards. The question is not whether AI should have legal rights, but how much autonomy we are willing to grant, to what extent, and, most importantly, at what cost.

 

Before hastily concluding that a future dominated by AI is inevitable, it is essential to develop mechanisms that preserve human agency within the decision-making processes of machines and establish frameworks for the appropriate assignment of liability. The granting of legal personhood to AI entities should not be interpreted as an acknowledgment of their intrinsic moral worth. Rather, it can serve instrumental purposes, such as promoting economic efficiency or facilitating risk management. Nevertheless, this issue demands careful, inclusive deliberation. Discussions surrounding the legal status of AI must not be unilateral but should engage all relevant stakeholders, including developers, regulators, and end-users.

 

References:

1. Article 3: Definitions. In Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act).

2. Brownsword, R. (2017). From Erewhon to AlphaGo: For the sake of human dignity, should we destroy the machines? Law, Innovation and Technology, 9(1), 117–153. https://doi.org/10.1080/17579961.2017.1303927.

3. Economics Observatory. (n.d.). AI cartels: What does artificial intelligence mean for competition policy? Retrieved April 13, 2025, from https://www.economicsobservatory.com/ai-cartels-what-does-artificial-intelligence-mean-for-competition-policy.

4. Kurki, Visa A.J. ‘Who or What Can be a Legal Person?’, A Theory of Legal Personhood (Oxford, 2019; online edn, Oxford Academic, 19 Sept. 2019), Chapter 4. https://doi.org/10.1093/oso/9780198844037.003.0005.

5. Kurki, Visa A.J. ‘Who or What Can be a Legal Person?’, A Theory of Legal Personhood, Chapter 5. Via: https://www.cnn.com/2017/03/15/asia/river-personhood-trnd/index.html.

6. Suri, M. (2017, March 23). India becomes second country to give rivers human status. CNN. https://edition.cnn.com/2017/03/22/asia/india-river-human/index.html.

7. Solum, Lawrence B. (1992). Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231. Available at: https://scholarship.law.unc.edu/nclr/vol70/iss4/4.

8. Turner, J. (2019). Robot Rules: Regulating Artificial Intelligence (1st ed.).


