
Artificial intelligence: A human matter above all

Jun 07, 2023
By Solange Sobral

Artificial intelligence is such a huge leap forward that we must all carry responsibility for its supervision and impact. It’s the only way to make sure it works for the benefit of our society.
 
From smartphones to social media, the 21st century has brought an explosion of technology that has forever changed our everyday lives. Now another technology, generative AI, looks set to become perhaps the most transformative tool the world has ever seen.
 
In January, ChatGPT became the fastest-growing consumer application in history, hitting 100 million users in just two months. Microsoft soon integrated AI into its Bing search engine, and Google even declared an internal ‘Code Red’ as it raced to release its own AI service. As AI advances, its market will grow with it: Statista researchers project a near twenty-fold jump in value over the coming decade, from around $208 billion in 2023 to almost $2 trillion by 2030. It’s an incredibly exciting time. But how will it all impact our day-to-day lives?
 
As we watch AI expand into society, fears are growing about its use, accuracy, and implications. Sam Altman, CEO of ChatGPT and DALL-E developer OpenAI, has acknowledged the risks: “I’m particularly worried that these models could be used for large-scale disinformation,” Altman said in an interview with ABC News. “The model will confidently state things as if they were facts that are entirely made up.” He added, “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
 
Before AI is widely accepted, we must put in place regulation that ensures the technology remains safe to use. The EU has recently proposed far-reaching legislation that strengthens supervision of the development and use of AI. The UK has also published its own whitepaper on regulation, although questions remain on whether these approaches are ambitious enough to match the incredible pace of AI advancements.
 
Equally, the next stage of AI development must consider the concepts of ‘right’ and ‘wrong’. For the technology to be used securely and extensively, we all need to prioritise ethics as a key part of our mission when building AI-powered products.

AI’s ethical implications

Across history and around the world today, we can find countless examples of humanity’s failures to build an inclusive society that values diversity and is guided by ethics. Unfortunately, these errors and faults are now contaminating the outputs of AI technologies. AI models are trained on distorted, or not fully representative, datasets selected by the groups of people they serve, and those datasets naturally carry their own biases.
 
“Part of the problem is that companies haven’t built controls for AI bias into their software-development life cycles, the same way they have started to do with cybersecurity,” Todd Lohr, Technology Consulting Leader at KPMG, told the Wall Street Journal. Flavio Villanustre, Chief Information Security Officer at LexisNexis Risk Solutions, added that once a tool shows biases, discovering their origin is “absolutely difficult, and in some cases impossible—unless you can go back to square one and redesign it correctly with the right training data and the right architecture behind it.”
 
The real-world impacts of these biases are a huge challenge for society to tackle, alongside other AI issues such as plagiarism and copyright infringement. There are concerns that deploying AI too quickly could deepen inequality, infringe on privacy rights, increase unemployment, and enable malicious or unethical activities. In a technical report accompanying the release of GPT-4, researchers highlighted examples of “risky emergent behaviours” shown by the AI itself, including an incident in which the model pretended to be a blind person to convince a human user to help it bypass a CAPTCHA security check.
 
Fortunately, developers are already working on fixes. OpenAI itself has announced it’s building a tool that identifies whether a piece of text was produced by AI, to help fight issues like plagiarism, misinformation, and human impersonation. If we can safely navigate past these problems, we can finally focus on the technology’s benefits.

It’s important to remember that other technological disruptions have brought similar threats to light, such as the internet, smartphones, cloud computing, and data science applied to understanding customer behaviour. As a society, we have been learning how to harness each of these revolutions for good.

The positive power of AI

Our society is becoming increasingly complex, so we need equally sophisticated tools to manage it. When used correctly, AI will become a key accelerator of improvements to public life worldwide.
 
According to a new report from McKinsey and Harvard University, AI could improve the healthcare industry’s clinical operations and boost quality and safety by crafting personalised treatment and medication plans that transform patient care. Plus, AI-powered efficiencies could even help the industry save hundreds of billions of dollars per year in healthcare spending.
 
Meanwhile, Insider Intelligence has estimated that AI in the financial sector can reduce industry costs by $447 billion in 2023 through task automation, fraud detection, and personalised wealth-management insights. And though some are worried about AI’s possible role in academic plagiarism, AI algorithms in education can also analyse students’ data and adapt to their learning styles, giving them feedback and recommendations that are tailored to individual needs and help them reach their potential. “We can have that for every profession, and we can have a much higher quality of life,” OpenAI CEO Sam Altman has said. “But we can also have new things we can’t even imagine today—so that’s the promise.”

Our collective responsibility

We’re living through the exciting early days of a technology that can bring spectacular improvements to how we live and work. But we’re also standing on the edge of a precipice. Now is the time to ensure that humans are the guarantors of AI, not the other way around.
 
As Deloitte AI Institute Global Leader and Humans For AI Founder Beena Ammanath said, “What we need is more participation from the entire constellation of people interested in the future of humanity — which is all of us. The potential of AI is enormous, and what is needed is not just intent, but imagination and collaboration. Businesses today can help drive more momentum for social betterment by leading cross-industry conversations and pursuing AI deployments for the public good.”
 
Ultimately, generative AI is the latest addition to a vast arsenal of technologies ready to help humans solve problems. However, it’s also incumbent upon us—business leaders, industries, governments, and more—to take proactive measures that reduce the risks of misuse and unintended consequences. When this technology is used fairly and responsibly, we have the power to change people’s lives for good. What better business mission to uphold than that?


Solange Sobral

EVP & Partner