Artificial intelligence has a long history, but OpenAI’s release of ChatGPT in late 2022 drew far wider attention to the technology’s potential. Now, shareholders are preparing to scrutinize how AI affects a company’s ecosystem, including customers and workers. For clues about the future, we looked at the handful of AI-related shareholder proposals in the most recent proxy season.

Among other things, expect boards to be in the crosshairs as investors take a close look at the risks related to AI. For example, IT and business consulting firm CGI GIB received a shareholder proposal asking its board to examine the ethics of its use of AI.

Says Jackie Cook, director, stewardship at Morningstar Sustainalytics: “In recent years, shareholders have brought proposals on the societal impacts of specific AI-enabled applications. But, as companies race to exploit rapidly evolving AI models, the governance of AI may not be keeping pace. So, we’ll likely see a growing number of shareholder proposals, such as the one voted at CGI earlier this year, that focus on AI governance as a core board responsibility.”

Why ESG is critical to AI investing

AI’s scope extends well beyond chatbots, virtual assistants, and factory automation, and many applications of AI software have come under scrutiny in recent years. Investors have grounds for concern about how data is being processed, sold, and analyzed. So, what does AI mean for investors, and how are they addressing their concerns?

Certainly, AI is changing the landscape of investing. Robo-advisors are available across the financial industry from companies like Charles Schwab, Vanguard, and Wells Fargo. Investing apps like Q.ai make investing tools previously reserved for high-net-worth individuals accessible to everyday investors, according to app founder Stephen Mathai-Davis.

At the U.S. Morningstar Investment Conference in April 2023, Morningstar CEO Kunal Kapoor noted, “If we apply AI as an industry, it puts [advisors] in a position to provide a much better investing experience and do that at scale.” AI also has countless applications for institutional investors.

Meanwhile, environmental, social, and governance approaches are important tools for investors concerned about risk at the issuers they invest in. AI is critical to sustainable investing; ESG is also critical to AI.

For the former, it’s because sustainable investing currently relies on a vast array of nonfinancial metrics that aren’t necessarily reported in standardized ways. Many ESG tools, like social impact disclosures, can be voluntary or mandatory, depending on the regulations and standards that apply to the company. AI can help corral and find signals in this data.

From a risk-management perspective, rating agencies use satellites to look at companies’ exposure to increased physical risk from wildfires and other potential disasters. Companies can also use the same data to predict the potential effects of climate change across their own assets and to see how well they’re doing on their promises to achieve carbon neutrality.

“Directionally, climate management is set up for this,” said Gabriel Presler, Morningstar’s head of sustainability, in an interview.

Indeed, the World Economic Forum has said that ESG currently isn’t making a difference for climate change fast enough and that AI is critical to addressing it.

Another use of AI in sustainable investing is models that make assumptions and fill in the gaps where disclosures are missing—systematic guesswork, you might say. Yet another is natural-language processing to gauge tone and sentiment—in consumer attitudes, for example.

All this can help with the perceived defects of sustainable investing, such as inaccurate ratings and greenwashing.

“The integration of AI into sustainable investing could mark a profound turning point in investors’ ability to navigate the complex web of ESG factors,” said Matthew Slovik, head of global sustainable finance at Morgan Stanley, in a statement. “By harnessing AI’s analytical capabilities, investors can identify companies with strong ESG performance, mitigate risks and shape portfolios that better align with sustainability objectives.”

Why AI poses ESG risks

All that sounds great for ESG and sustainable investing, which have been under attack in the past year in the U.S. culture wars. But AI can also pose ESG risks, which are connected to financial risk. Security and privacy are at risk as vast amounts of data are shoveled into algorithms. Data centers are massive carbon emitters. Ethics are also an issue. While the World Economic Forum found that AI has the potential to reduce the presence of human biases in decision-making, new algorithms can reflect programmers’ biases. For example, there is evidence that facial recognition technology can worsen racial inequities in policing.
ESG practices, particularly shareholder participation, will be needed to address the risks associated with the race to build AI infrastructure.

In 2023, members of the Investor Alliance for Human Rights filed a series of 15 human and digital rights-related proposals at tech companies Alphabet GOOGL, Meta META, and Amazon.com AMZN.

Harrington Investments, a member of the alliance, filed a proposal in April 2023 suggesting that a performance review of Alphabet’s Audit and Compliance Committee is necessary to effectively oversee the company’s foray into AI. Such a novel space, says Harrington, “against the backdrop of grave legal, regulatory, and social challenges,” demands thorough oversight.

Citing “technology luminaries” like Apple co-founder Steve Wozniak; Tom Gruber, who led the team that created Apple’s Siri; and Max Tegmark, MIT professor and president of the Future of Life Institute, the Harrington shareholder proposal argued that “AI should be paused because of the lack of ‘planning and management’ while AI labs are ‘locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.’”

Trillium Asset Management, another alliance member, presented a shareholder proposal in June 2023 “seeking better social impact disclosures regarding the company’s algorithms and related technologies.” The proposal won support from 43% of outside investors at Alphabet, indicating strong investor backing for recommendations to outline safe and consistent practices around AI.

In a statement, the Investor Alliance for Human Rights said, “the issues raised in the proposals speak to the power and influence these tech giants wield over society and highlight how a lack of adequate oversight structures to mitigate potential harms raises risks for all stakeholders.”

Planning to Invest in AI? Investors Want Companies to Study the Long-Term Effects

You’ll see more calls for companies to document and study the effects of their adoption of AI. To that end, Just Capital, an independent nonprofit that measures stakeholder performance across U.S. companies, is attempting to define what AI usage looks like not just for companies but also for their stakeholders: workers, customers, communities, shareholders, the environment, and society.

Eventually, Just Capital will start tracking and ranking corporate usage of AI and give companies incentives to use AI responsibly. Just Capital routinely surveys Americans on what they believe U.S. companies should prioritize most when it comes to just business behavior. In the coming year, it will also poll subjects on their attitudes toward AI, Just Capital CEO Martin Whittaker said in an interview.

So far, Whittaker says, “The biggest beneficiaries have been shareholders” of companies like Nvidia NVDA that are perceived as winners of the AI trend.