AI Oversight Is Becoming a Board Issue

6 April 2022

Key takeaways: 

  • As more businesses adopt artificial intelligence (AI), directors on many corporate boards are starting to consider their oversight obligations for AI, including as they relate to Environmental, Social and Governance (“ESG”) issues such as carbon emissions, the perpetuation of discrimination, and the need for appropriate corporate governance for AI initiatives. 
  • Against this backdrop, regulators are also beginning to underscore the importance of board and senior management accountability for AI-driven decisions. Board-level oversight of AI risks may also be important for companies that have invested heavily in AI, in light of potential Caremark claims related to directors’ failure to oversee corporate compliance risks. 
  • For companies where AI has become (or is likely to become in the near future) a mission-critical regulatory compliance risk, directors may wish to consider implementing board-level oversight of key AI investments, risks, and compliance structures.

As more businesses adopt artificial intelligence (AI), directors on many corporate boards are starting to consider their oversight obligations. Part of this interest is related to directors’ increasing focus on Environmental, Social and Governance (“ESG”) issues. There is a growing recognition that, for all its promise, AI can present serious risks to society, including invasion of privacy, increased surveillance, carbon emissions and perpetuation of discrimination. But there is also a more traditional basis for the recent interest of corporate directors in AI: as algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention.

The promise of AI is evident from recent corporate spending. According to Stanford University’s 2022 AI Index Report, private investment in AI in 2021 totaled approximately $93.5 billion—more than double the previous year. But balanced against this promise are significant business risks. For example, the real estate company Zillow made headlines in 2021 when it decided to shut down its “Zillow Offers” business and lay off 25% of its workforce, due in part to the failure of its house-buying algorithm to price homes accurately. In addition, public scrutiny of facial recognition, credit algorithms, hiring tools and other AI systems is creating substantial regulatory and reputational risk for companies, especially with respect to bias.

Where AI Overlaps with ESG

Both AI and ESG encompass a wide breadth of corporate issues, with considerable overlap, including:

  • Environmental—As AI models grow in size and complexity, so does the necessary computer processing power, which can carry a very large carbon footprint.
  • Social—Companies that deploy AI for hiring, lending, housing or insurance decisions need to consider ways to assess and, if necessary, remediate potential discrimination associated with those initiatives. Some AI applications have also been criticized for exacerbating income inequality, displacing large numbers of jobs, facilitating human rights abuses, and manipulating individuals’ behavior.
  • Governance—For AI programs to meet increasing regulatory requirements, as well as emerging ethical standards, the risks described above must be identified and mitigated through appropriate corporate governance, including policies, procedures, training and oversight.

The Rapidly Evolving Regulatory Landscape

Over the past several years, regulators across the globe have started passing legislation or providing regulatory guidance on AI systems. The European Commission is widely viewed as leading these efforts through its attempt to pass a comprehensive, cross-sectoral AI regulation. In addition, regulators in Hong Kong, Singapore, the Netherlands and the United States—among many others—have been outspoken on the need for appropriate corporate governance to address AI-related risks, including risks relating to bias, model drift, privacy, cybersecurity, transparency and operational failures.

One notable feature of several emerging regulatory pronouncements, particularly in the financial sector, is their express focus on the importance of board-level oversight of AI risks. For example:

  • The UK Financial Conduct Authority and the Bank for International Settlements have both recently underscored that boards and senior management will have to tackle some of the major issues emerging from AI because that is where ultimate responsibility for AI risk will reside.
  • The Monetary Authority of Singapore has suggested that firms should set approval levels for highly material AI decisions at the Chief Executive Officer or board level, and should periodically update the board on the use of AI within the company so that the board maintains a central view of all material AI-driven decisions.
  • The Hong Kong Monetary Authority has issued principles stating that the board and senior management remain accountable for AI-driven decisions and therefore should work to ensure that appropriate AI governance, oversight and accountability frameworks are implemented, and that AI-driven activities are subject to appropriate risk-mitigating controls.
  • The New York Department of Financial Services recently required each New York domestic insurer to designate one or more members of its board and its senior management to be responsible for oversight of the insurer’s management of climate risks, and it is likely that similar regulatory requirements for AI risks are coming.

These are just some of the recent examples of the coming wave of board-level responsibility for overseeing the regulatory, operational and reputational risks of AI.

AI Oversight and Caremark

Even in the absence of specific regulatory oversight obligations, board-level oversight of AI risks may be important for companies that have invested heavily in AI in light of potential Caremark claims, which focus on directors’ failure to oversee corporate compliance risks. Although Caremark has been called “the most difficult theory in corporation law” on which to prevail in litigation, several recent Caremark claims have survived motions to dismiss—underscoring the continued importance of this claim for directors overseeing important company compliance operations.

For example, in Marchand v. Barnhill, the Delaware Supreme Court allowed Caremark claims to proceed against the defendant directors of an ice cream company, finding that they failed to implement any structure to oversee food safety and sanitation risks. Despite the clear importance of food safety to the company’s operations, the Delaware Supreme Court found that the complaint sufficiently pled that the board (i) did not have any committee charged with monitoring food safety; (ii) did not devote a portion of the full board’s meetings to food safety compliance; (iii) did not have any board-level processes or protocols to ensure that the board was advised by management of food safety risks or developments on a consistent basis, even after a major listeria outbreak; (iv) did not receive reports from management concerning potential yellow or red flags about health safety risks, including reports from regulators or third-party laboratories; and (v) instead received information about food safety from management that was incomplete and misleading by omission. Thus, even though the defendant company had complied with its regulatory obligations, this did not foreclose claims against the directors based on their lack of attentiveness to significant food safety risks at the board level. As the Delaware Supreme Court noted, “Caremark does have a bottom-line requirement that is important: the board must make a good faith effort—i.e., try—to put in place a reasonable board-level system of monitoring and reporting.”

Key Considerations for Boards on AI

Accordingly, for companies where AI has become (or is likely to become in the near future) a mission-critical regulatory compliance risk, there are several issues that directors may wish to consider:

  • Board Responsibility: Consider having AI as a periodic board agenda item. As with ESG, board oversight of AI can reside with the full board, an existing committee (e.g., audit or technology), or a newly formed committee dedicated to AI. Some companies have decided to place responsibility for AI with whichever committee is responsible for cybersecurity. If the board is concerned that it does not have the necessary expertise to oversee AI opportunities and risks, it should consider adding one or more directors with that experience or having some board members receive AI training.
  • Awareness of Critical AI Uses and Risks: Consider making sure that at least some directors are aware of the most critical AI systems that the company employs, the nature of the data used to train and operate those systems and the associated risks to the company (including possible operational, regulatory and reputational risks), as well as any steps taken to mitigate those risks.
  • Understanding Resource Allocation: Consider requiring periodic updates on the resources devoted to AI development and operations and how much of that is dedicated to regulatory compliance and risk mitigation.
  • Senior Management Responsibility: Consider assigning management responsibility over AI risk and regulatory compliance (including the company’s regulatory risk disclosures relating to AI, if any) to a particular member of management or a management committee.
  • Compliance Structures: Boards should consider making sure that effective management-level AI compliance and reporting structures are in place to facilitate board oversight, which may include periodic AI risk assessments and monitoring of high-risk AI systems, as well as written AI policies, procedures and training. Such policies may include procedures for responding to a material AI-related incident, responding to AI-related whistleblower complaints and risk management for any vendors that supply the company with critical AI-related resources.
  • Board Briefings on Material AI Incidents: Boards should consider ensuring that they are appropriately briefed on the company’s response to serious AI incidents and related impacts, the status of any material investigations and whether the company’s response was effective.
  • Board Minutes and Materials: Directors should ensure that their AI oversight activities, as well as management’s compliance efforts, are well documented in board minutes and in supporting materials.

Conclusion

Many directors may be uncomfortable with responsibility for overseeing AI risk because of their lack of expertise in this area. But as the SEC has made clear with respect to cybersecurity, boards need to find a way to exercise their supervision obligations even in technical areas when those areas present enterprise risk, as AI already does at some companies. That does not mean that directors must become AI experts, or that they should be involved in day-to-day AI operations or risk management. But directors at companies with significant AI programs should consider how they will ensure effective board-level oversight of the growing opportunities and risks presented by AI.