Responsible Business in an emerging world of AI. What are the risks?

April 09, 2024

By guest blogger, Mastane Williamson, Senior Associate, Pinsent Masons

We were delighted to hear from Mastane Williamson of Pinsent Masons at Verity’s latest Legal Mastermind group event. We discussed both the opportunities and risks around the rise of AI within a responsible business context. Here are the key takeaways.

Ignore AI, and you could miss out on the benefits it can bring. Embrace it but get it wrong, and you jeopardise your responsible business credibility.

AI – the risks for any responsible business  

Artificial Intelligence. No longer the stuff of science fiction. Even now at its earliest stages, 79% of us have already had some kind of exposure to generative AI in our work or personal lives[1]. At the forefront of public debate and top of the boardroom agenda, technological developments are happening at pace.

The international legal and regulatory landscape is still developing. For now, it’s up to individual organisations to ensure that AI use aligns with ethical principles, whilst also looking for ways to harness it as a force for good in support of ESG goals.

At Pinsent Masons, we’ve been considering the risks of AI in the responsible business space within two areas: human and environmental.

The human connection

It is, of course, we humans who interact with and are impacted by AI technology within the workplace. The effects can be far-reaching: the technology will impact a vast number of people, at speed. Considering the risks to people now will ensure you protect both them and your organisation into the future. These are the four most immediate risks to consider in the human sphere.

  • Algorithmic Bias and Discrimination

Algorithms can embed bias that leads to discrimination, prejudicing underrepresented groups on a massive scale. It ultimately comes down to the data that trains the AI model and how the algorithm operates. For example, facial recognition technology has been shown to be more accurate at recognising lighter-skinned men than darker-skinned women[2].

This can also be an issue when AI is used for predictive decision-making in all sorts of areas, including financial lending and credit scoring, recruitment, visa applications, exam results, and access to benefits and healthcare.

  • Human Rights

We’ve already seen breaches of the European Convention on Human Rights upheld in relation to certain AI usage, with cases going before the European Court of Human Rights.   

Algorithms spreading harmful content relating to child self-harm, suicide and genocide have been held to violate Article Two (the right to life). AI used to create manipulated, non-consensual sexual images of women for cyberbullying and harassment (deepfakes) has been held to violate both Article Three (inhuman and degrading treatment) and Article Eight (private life).

  • Workforce Impact

AI may impact job satisfaction and employee wellbeing.  It may also cause job losses.  Studies have already shown that women are being disproportionately displaced by AI.

  • Mental Health and User Diversity

There’s a convincing case that AI may impact employee or end user mental health. AI may lead to increasing workplace pressure to become more productive. It could cause isolation through less human interaction. It may also become problematic for, and discriminatory towards, certain end users, such as those with neurodivergence or additional needs, who may have difficulty interacting with AI tools.

Environmental risk

Artificial intelligence is benefitting the environment in many ways, and it’s now at the forefront of the fight against climate change. Yet the use of AI itself has a potentially overwhelming negative impact.

  • Data Storage

AI requires vast amounts of physical space to house the data it relies on, but data centres have a substantial carbon footprint. As AI use becomes more widespread, there will be an increasing need for data storage space.

  • Infrastructure

Physical infrastructure is also required to support AI activity. This means using physical materials, for example lithium and silicon, whose extraction and processing create emissions. The more AI we use, the more digital infrastructure will be built.

  • Energy Consumption

Training an AI model involves processing huge volumes of data, which means using a lot of energy. Training a single generative AI model can consume as much as 284,000 litres of water: the average person’s water consumption over 27 years[3]. It can also emit more than 626,000 pounds of carbon dioxide, the same as 63 gasoline-powered cars driven for a year, or five times the lifetime emissions of an average car[4].

Running an AI system means continuous energy use. Widespread use of AI by individuals will mean increasing collective energy consumption.

So, what are the opportunities?

AI isn’t something to be feared. It’s something we should understand and embrace: a technology to be used conscientiously and well.

Responsible AI is about creating ethical frameworks and guidance that support the development and use of AI in a way that is legally, ethically and socially trustworthy. In short, using AI responsibly.

If your organisation is using AI – whether internally or outward facing – it needs to be mindful of Responsible AI. Some of your initial activities may include implementing organisational measures, revisiting company policies and making practical, technical modifications.

There are also responsible business opportunities involving AI, irrespective of whether a business is actually using it. This is still an immature and emerging space, with scope for innovation. At Pinsent Masons, we are:

  • Supporting the Society for Computers and Law’s AI for Schools programme[5], educating young people from diverse backgrounds on the social and ethical implications of AI
  • Hosting professional networking events that address AI from a diversity and ethical perspective, such as an upcoming Women in Tech Event: “Will Artificial Intelligence Make Women’s Lives Better?”
  • Enabling lawyers to provide pro bono legal advice, which can be extended to advising on AI related issues.

How can AI help you as a responsible business leader?

AI tools can make your life easier and support responsible business efforts. Forbes describes how, “by automating tasks, identifying patterns, and making predictions, AI is helping businesses to reduce their environmental impact, improve their social responsibility, and strengthen their governance”.

Here are our Top Five ways to use AI tools in the responsible business space:

  • Data Analysis / Sustainability Analysis

AI can help collect and analyse vast amounts of ESG data, which can help businesses identify trends, track performance and make more informed decisions.

  • Compliance Monitoring / Reporting Support

AI can support activities that ensure compliance with environmental and social regulations, automating tasks such as reviewing regulatory filings and identifying compliance gaps. This may help organisations reduce costs, improve efficiency and reduce the risk of oversights.

  • Ethical Decision Support

AI can help leaders make ethical decisions by providing insights into the potential social, environmental and ethical implications of their choices. For example, it can identify sustainable investment opportunities.

  • Energy Management

AI can support and automate energy-efficient practices to reduce environmental impact, for example by analysing usage patterns and adjusting energy consumption to minimise waste.
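To make this concrete, here is a minimal, hypothetical sketch (not from the article, and far simpler than a real AI system) of the kind of pattern analysis described above: given hourly energy readings, it flags the hours whose average usage sits well above the overall baseline, so flexible workloads could be shifted away from them. The function name, data shape and 1.2x threshold are all illustrative assumptions.

```python
# Illustrative sketch: flag peak-usage hours from hourly energy readings
# so that flexible workloads can be moved to quieter periods.
from collections import defaultdict

def peak_hours(readings, threshold=1.2):
    """readings: list of (hour_of_day, kwh) tuples.
    Returns the hours whose average usage exceeds threshold x overall mean."""
    by_hour = defaultdict(list)
    for hour, kwh in readings:
        by_hour[hour].append(kwh)
    overall_mean = sum(kwh for _, kwh in readings) / len(readings)
    return sorted(
        hour for hour, values in by_hour.items()
        if sum(values) / len(values) > threshold * overall_mean
    )

readings = [(9, 6.0), (9, 5.8), (13, 2.0), (13, 2.2), (18, 6.0), (18, 6.4)]
print(peak_hours(readings))  # → [9, 18]
```

A production system would of course use far richer data and forecasting models, but the principle is the same: find the pattern, then adjust usage around it.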

  • Risk Assessment

AI tools can help process large volumes of data to analyse a company’s exposure to ESG risks.

At our firm, we’re using and trialling numerous AI tools to support responsible business. We’re actively using AI for research, project plans, communication and ideas, proactively embracing it to become a more responsible business.

“AI is still a young and evolving area, with many challenges and opportunities ahead; however, we believe that AI can be a force for good, and we are excited to be part of the journey. Our AI is already helping us in the Responsible Business Team, in ways I never thought possible even a few months ago. It’s enabling us to deliver high-quality legal services to our clients, whilst also supporting our social and environmental ambitions. We are committed to learning from our experiences, collaborating with our partners, and engaging with our stakeholders, to ensure that AI is used responsibly to the benefit of our clients, our people, our communities, and our planet.”

Mike Harvey, Head of Responsible Business, Pinsent Masons LLP

“We’re finding more clients bringing AI into the mix when working with us on responsible business and sustainability challenges. We use our expertise, planning and strategic approach to help clients talk about and embed their responsible business activities. Our programmes consider artificial intelligence, and how it can be used as a force for good, supporting the responsible business aims of our clients and encouraging business growth.”

Debra Sobel, CEO, Verity London

[1] McKinsey & Company report: The state of AI in 2023: Generative AI’s breakout year.

[2] This is termed “the coded gaze” by computer scientist Joy Buolamwini: Mission, Team and Story – The Algorithmic Justice League (ajl.org)

[3] Green marketing vs generative AI: An environmental dilemma, Green Business Journal.

[4] Training a single AI model can emit as much carbon as five cars in their lifetimes, MIT Technology Review.

[5] SCL AI for Schools Programme, Society for Computers and Law.
