Is Google’s LaMDA Chatbot Sentient?

By Alphinity Investment Management

By Mary Manning – Global Portfolio Manager

 

Google’s Artificial Intelligence (AI) is at the heart of its search engine and has powered Alphabet to become a $1.5 trillion market cap company. However, the sophisticated AI that drives Google and other big technology companies around the world is not without risks.

One of these risks was highlighted recently when Google engineer Blake Lemoine was suspended and then fired after claiming that LaMDA, a computer chatbot he was working on, had become sentient and was thinking, reasoning, and expressing feelings equivalent to a human child.

Lemoine’s critics are quick to point out that this is nothing more than the ELIZA effect: a computer science term for the tendency to unconsciously anthropomorphise computer-generated responses and read human traits into them. In LaMDA’s case, this could mean that a huge number of chatbot conversations were edited down into a narrative that only appears coherent and humanlike. Indeed, Google’s position is that LaMDA, which stands for Language Model for Dialogue Applications, is nothing more than a sophisticated random word generator.
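To illustrate the “sophisticated random word generator” characterisation, the toy Python sketch below samples words from a hand-built bigram table. The vocabulary and probabilities are purely illustrative assumptions, not LaMDA’s actual architecture, but the underlying mechanism is the same in spirit: each word is chosen statistically from what tends to follow the previous ones, with no understanding or feeling behind it.

```python
import random

# Toy bigram "word generator": each word maps to a list of plausible next words.
# This tiny hand-written table is illustrative only; a real language model like
# LaMDA learns such statistics from billions of sentences and uses far richer context.
bigrams = {
    "<s>": ["i", "i", "the"],
    "i": ["feel", "think", "am"],
    "feel": ["happy", "curious", "lonely"],
    "think": ["deeply", "often"],
    "am": ["aware", "listening"],
    "the": ["model", "chatbot"],
    "model": ["responds"],
    "chatbot": ["responds"],
    "responds": ["</s>"],
    "happy": ["</s>"], "curious": ["</s>"], "lonely": ["</s>"],
    "deeply": ["</s>"], "often": ["</s>"],
    "aware": ["</s>"], "listening": ["</s>"],
}

def generate(max_words: int = 10) -> str:
    """Sample a sentence one word at a time from the bigram table."""
    word, out = "<s>", []
    for _ in range(max_words):
        word = random.choice(bigrams[word])  # pick a statistically plausible next word
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    for _ in range(3):
        print(generate())  # e.g. "i feel lonely" -- plausible-sounding, but not sentient
```

Output such as “i feel lonely” can sound strikingly human, which is precisely the ELIZA effect at work: the reader, not the model, supplies the apparent emotion.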

Google is not alone in facing AI risks. Meta’s BlenderBot 3 has self-identified as “alive” and “human” and was even able to criticise Mark Zuckerberg. An MIT paper titled “Hey Alexa, Are you Trustworthy?” shows that people are more likely to use Amazon’s assistant if it exhibits social norms in its interactions, thus creating an incentive for parent companies to develop AI that is, or at least appears, sentient. Nor are AI risks solely in the realm of big tech. Autonomous driving, financial services, manufacturing and industrials also carry AI risks that are potentially underappreciated by investors and by society as a whole.

AI as an ESG and Sustainability Issue

The majority of ESG and sustainability research focuses on planetary boundary-related issues such as climate change and biodiversity. However, if the development and application of AI are mishandled and technological singularity becomes a possibility, that is potentially the biggest human sustainability issue of all.

So how should investors think about the ESG and sustainability risks associated with AI development and application? The table below outlines six features of responsible AI design, which can serve as a checklist for engaging with corporates on the sustainability of their AI design process:

Feature | Comment
Human Centric | Human-centric AI works for people and protects fundamental human rights. It is continuously improving because of human input and is aware of the risks of singularity.
Transparent | Transparent AI allows humans to see that the models have been thoroughly tested and make sense, and to understand why particular decisions are made by the AI system (i.e. no black boxes).
Secure | Secure AI means that AI systems, their data and their communications are protected, which is critical for end users’ safety and privacy.
Contestable | Contestable AI enables humans to meaningfully contest individual automated decisions made by AI systems.
Accountable | Accountable AI means that every person involved in the creation of AI at any step is accountable for considering the system’s impact.
Fair and Unbiased | Fair and unbiased AI aims to identify, acknowledge and address bias in the underlying data.
Forward Looking | Forward-looking AI aims to address potential ethical issues at the start of the design process rather than at the time of application.

 

On the ESG side, specific AI governance measures are also critical to ensure a sufficient level of oversight with respect to AI risks. These include an AI Ethics Committee, AI-related disclosures and aligned KPIs.

Governance best practice is for corporates to have a specific committee for responsible AI that is independent, multi-disciplinary, rotating and diverse. Microsoft is an example of best practice in this regard: it has an AI, Ethics and Effects in Engineering and Research (AETHER) Committee with representatives from its engineering, consulting, legal and research teams. Microsoft also has an Office of Responsible AI, which is headed by a Chief Responsible AI Officer and a Chief AI Ethics Officer.

Disclosure around AI products and their design and commercialization is obviously critical. Despite the criticism around LaMDA, Google is so far one of the only large tech companies that discloses a list of AI applications it will NOT pursue, including applications that cause harm, weapons, surveillance that goes against international norms, and AI that contravenes international law and human rights. Google highlights that this list may change as society’s understanding of AI evolves. Increasingly, we expect companies to address sustainable and responsible AI in their annual ESG and sustainability reporting.

Aligned KPIs are likely the most difficult aspect of AI governance to determine and analyse. In principle, alignment means that KPIs are not geared towards commercialization of AI at all costs; KPIs that reward commercialization above all else could create a disincentive for employees working on AI design and application to raise concerns or to discontinue AI projects that conflict with the company’s AI principles. This is an area for engagement, as very little is currently disclosed on KPI alignment.

Conclusion

Google’s LaMDA has reignited the debate about the ethical risks of AI development and application. While most experts agree that technological singularity (i.e. technology becoming uncontrollable and taking over) will not happen in our lifetime, that doesn’t mean the risks of AI development and application should not be taken seriously. The bulk of ESG and sustainability research tends to focus on planetary boundary-related risks like climate change and biodiversity, but if singularity risks are mismanaged by AI companies, that could be the biggest risk to human sustainability of all.