Legal Accountability of AI in Corporate Decision-Making: Who Bears the Risk?

Why in News?

As artificial intelligence (AI) continues to evolve and integrate into the corporate decision-making process, a critical debate emerges regarding accountability and legal oversight. With AI now playing significant roles in hiring, investment decisions, risk assessments, and even regulatory compliance, the question is no longer just technological — it is legal: Who is responsible when AI makes a mistake?

A recent opinion piece by Pankaj Chhuttani sheds light on this vital and timely issue, outlining the emerging legal vacuum in India and emphasizing the urgent need for a robust regulatory framework for AI in corporate governance.

Introduction

AI’s integration into the boardroom has been rapid, driven by its ability to increase efficiency, reduce operational costs, minimize human error, and offer data-driven insights. However, with increased dependence comes increased risk and ambiguity. Unlike traditional tools, AI systems “learn” and adapt, making their decision-making processes more complex and often non-transparent.

This poses a serious challenge: If an AI system takes a decision that causes financial loss, regulatory violation, or reputational harm, who should be held responsible — the AI tool, its developers, or the corporate board?

Key Issues and Institutional Concerns

1. The Legal Vacuum in Indian Corporate Law

India’s Companies Act, 2013 does not currently provide explicit governance for AI tools or their usage in corporate functioning. While the law continues to impose fiduciary duties on directors — such as diligence, good faith, care, and loyalty — it does not directly address the unique nature of AI-based decisions.

In other words, even if a company relies heavily on AI, directors and officers cannot escape liability by simply blaming the algorithm. The accountability, as per current Indian law, remains human.

This gap is particularly problematic given the widespread adoption of AI across start-ups and larger corporations in India.

2. AI in the Boardroom: Benefits vs. Risks

AI is increasingly performing tasks that were traditionally under the purview of senior executives:

  • Drafting contracts

  • Conducting risk assessments

  • Screening job applicants

  • Flagging compliance issues

  • Monitoring financial irregularities

While this brings enormous benefits in terms of speed and efficiency, it also introduces legal ambiguity. AI systems can:

  • Act on biased or outdated data

  • Make opaque decisions

  • Miss the nuances of human context

  • Be vulnerable to cyberattacks or manipulation

Moreover, when an AI system fails — whether in recruitment (bias), compliance (error), or investment (poor forecasting) — the consequences are real, ranging from financial loss to lawsuits.

3. Board Responsibility and Fiduciary Duties

The use of AI doesn’t eliminate the duty of oversight by directors. In fact, it increases the complexity of this responsibility. Directors must:

  • Understand the logic, limitations, and risks of AI tools

  • Oversee the design and deployment of AI systems

  • Avoid blind trust in algorithmic outputs

Saurabh Bhatnagar, a corporate lawyer, notes that “directors are increasingly expected to exercise oversight over AI systems.” This includes questioning the rationale behind AI-driven decisions, especially in financial, legal, or strategic matters.

Using AI as a “defense” is not legally viable — boards remain responsible for decisions, even if they were AI-assisted.

4. Emerging Data Protection Regulations

The introduction of the Digital Personal Data Protection Act, 2023 has added another critical layer of responsibility. Companies now face obligations to:

  • Ensure AI-based data processing systems are lawful, accountable, and consent-driven

  • Prevent autonomous AI-driven breaches, which may still attract penalties

  • Disclose AI use in public filings and compliance reports

This raises the stakes for boards and compliance teams — ignorance is no longer an excuse. Firms must proactively monitor how AI collects, stores, and uses data.
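As a rough illustration of what "lawful, accountable, and consent-driven" processing could look like in practice, the minimal sketch below gates an AI processing step on recorded consent and keeps an audit trail of every attempt. The ConsentRegistry class, the purpose labels, and the log format are hypothetical assumptions for illustration only; they are not requirements prescribed by the DPDP Act, 2023.

# Illustrative sketch only: a consent-and-audit gate for AI data processing.
# All names and structures here are hypothetical assumptions, not DPDP Act rules.

import datetime

class ConsentRegistry:
    """Tracks which processing purposes each data principal has consented to."""
    def __init__(self):
        self._consents = {}  # user_id -> set of consented purposes

    def grant(self, user_id, purpose):
        self._consents.setdefault(user_id, set()).add(purpose)

    def has_consent(self, user_id, purpose):
        return purpose in self._consents.get(user_id, set())

def process_with_consent(registry, audit_log, user_id, purpose, processor, data):
    """Run an AI processing step only if consent exists, and record the attempt."""
    allowed = registry.has_consent(user_id, purpose)
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "purpose": purpose,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"No consent from {user_id} for purpose '{purpose}'")
    return processor(data)

if __name__ == "__main__":
    registry, log = ConsentRegistry(), []
    registry.grant("user-42", "credit_scoring")
    score = process_with_consent(registry, log, "user-42", "credit_scoring",
                                 lambda d: sum(d) / len(d), [650, 700, 720])
    print(score, log[-1]["allowed"])

The point of such a gate is accountability: every processing attempt, permitted or refused, leaves a record that a compliance team or board committee can later review.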

5. Discrimination, Profiling, and Ethical Risks

AI’s biggest threat lies not just in technical failure but in ethical lapses:

  • Discriminatory hiring algorithms

  • Unfair credit scoring

  • Biased facial recognition or surveillance

  • Invasive profiling of customers or employees

Such uses — even if unintentional — can lead to reputational ruin, lawsuits, and regulatory penalties.

Therefore, human accountability must remain at the center of corporate AI use. Companies must treat AI as a strategic asset that is actively risk-managed, not as a black-box oracle.
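To make the idea of a bias audit concrete, the sketch below applies one widely used heuristic, the "four-fifths" (80%) adverse-impact ratio, to hypothetical outputs of an AI screening tool. The groups, numbers, and threshold are illustrative assumptions only; they are not drawn from the article or from any Indian statute.

# Illustrative sketch only: a minimal bias-audit check on an AI hiring tool's
# outcomes using the "four-fifths" adverse-impact heuristic. Data is hypothetical.

from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool) -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, is_selected in decisions:
        totals[group] += 1
        if is_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes produced by an AI hiring tool.
    outcomes = (
        [("A", True)] * 40 + [("A", False)] * 60
        + [("B", True)] * 25 + [("B", False)] * 75
    )
    rates = selection_rates(outcomes)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: selection rate {rates[group]:.2f}, "
              f"impact ratio {ratio:.2f} [{flag}]")

An internal audit team could run a check of this kind periodically on real screening data and escalate any flagged group to the board's risk or ethics committee.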

The Way Forward: Regulatory Reform and Corporate Governance

A systemic solution lies in:

A. Robust Legal Framework for AI

India urgently needs sector-specific and national-level AI governance laws. Regulatory bodies such as SEBI, the RBI, and the MCA must issue clear, enforceable standards for:

  • AI decision-making

  • Risk auditing

  • Bias detection

  • Consent protocols

B. AI Use Disclosure

Companies should be required to:

  • Disclose AI use in hiring, investment, and risk management

  • File transparency reports about AI systems

  • Conduct regular AI risk audits

C. Board-Level Oversight

Boards must be trained to understand AI tools and engage with AI ethics, transparency, and bias prevention. AI literacy among board members is now as essential as financial literacy.

Conclusion

Artificial intelligence may be fast, efficient, and smart, but it lacks the one thing corporations cannot do without — legal accountability. In the absence of clear laws, the burden of responsibility remains on human directors and officers.

For India to truly embrace AI without compromising on ethical standards, it must:

  • Create a robust legal framework

  • Enforce human-centric accountability

  • Promote transparency and fairness in algorithmic decisions

Until then, companies should remember: AI is a tool, not a shield. It assists, but does not replace, human responsibility.

Q&A Section

Q1. Why is there a legal vacuum regarding AI use in Indian corporate governance?
Answer: The Companies Act, 2013 does not explicitly govern AI tools. While it imposes fiduciary duties on directors, such as good faith and diligence, it does not address algorithmic decision-making, leaving a gap in accountability when AI tools are used.

Q2. Are corporate directors liable if an AI system makes a flawed decision?
Answer: Yes, under current law, directors and officers cannot shift blame to AI systems. They must exercise due diligence in overseeing AI tools and cannot use them as a defense in case of failure or loss.

Q3. What are the major risks of relying on AI in corporate decisions?
Answer: Risks include:

  • Biased or outdated data causing discrimination

  • Opaque decision-making

  • Legal penalties for data breaches

  • Reputational damage due to ethical violations

Q4. How does the Digital Personal Data Protection Act, 2023 impact AI use?
Answer: It mandates that companies ensure AI systems process data lawfully, with accountability and user consent. Breaches, even if caused by autonomous AI, can result in penalties. The law demands transparency and responsible data handling.

Q5. What steps should Indian companies take to ensure safe AI deployment?
Answer: Companies must:

  • Audit AI tools regularly

  • Train directors in AI oversight

  • Disclose AI usage in key decisions

  • Align AI use with regulatory compliance and ethical standards
