March 27, 2024 | Podcast

Can AI Be Ethical?

The Active Share Podcast

As artificial intelligence (AI) continues to evolve, how do we ethically approach a technology with such wide-ranging implications? In this episode of The Active Share, Hugo talks with Olivia Gambelin, founder and CEO of Ethical Intelligence, about AI ethics, responsible AI, and how our current systems—legal, social, economic, and political—adopt, and adapt to, new technologies.

Comments are edited excerpts from our podcast, which you can listen to in full below.

How do you define AI ethics and responsible AI?

Olivia Gambelin: I define AI ethics and responsible AI as two different things. AI ethics is the practice of implementing human values into our technology, specifically in AI. It’s a design-based approach that looks at technology and determines if it needs value protection or value alignment.

Responsible AI is an umbrella term that includes different topics such as AI ethics, regulation, governance, and safety. It’s the practice of developing AI in a responsible manner and is more focused on the operations and development of AI.

Do these definitions inform your individual working framework for AI?

Olivia: The split between AI ethics and responsible AI is commonly accepted, but I think my framework comes into play on the responsible AI side; it focuses on how to strategically implement responsible AI and what kind of gaps exist within an organization.

When you advise companies, what are the most frequently asked questions?

Olivia: Companies in high-risk industries—financial, health, or media—are focused on risk. The questions I usually get are, “Are we compliant with the law? What kind of regulation do we need to be watching for? What kind of risks are associated with our specific use cases?” These companies have a risk-based mindset and are focused on protecting their company and making sure they are not intentionally doing harm.

I also work with companies in more creative fields that are looking to take an innovation-based approach to AI ethics and responsible AI. These companies ask, “How do we make AI a competitive edge? How do we turn something like privacy, fairness, or transparency into a competitive edge that helps us stand out from the competition?”

Are companies beginning to set standards as AI capabilities evolve? Or are they interpreting legislation?

Olivia: This is a huge debate occurring within the European Union (EU). Major players are concerned that the proposed regulation, the EU AI Act, is too strict, and that policy and regulation will dictate AI best practice rather than companies having the space to shape it. Should companies influence the pace of innovation? Or is that the responsibility of legal bodies?

Ethics is a grey space; it requires finding balance, and the answers depend on context. There aren’t black-and-white answers. However, black-and-white answers are what pave the way to laws and regulations, which then become the baseline. And while the baseline is what we must be doing, that doesn’t mean it’s all we should be doing.

Who should have a seat at the table when it comes to determining the best approach to AI?

Olivia: We need both the public and private sectors. We need to have the public interest in mind, but people in the private sector know the technology best. The balance between public and private is incredibly important. Public brings in social good, while private brings in expertise.

I would also love to see more ethicists at the table. Another challenge we’re facing right now when it comes to AI is how we measure success. The tools we’re using aren’t necessarily in tune with what we as a culture, as a society, as global citizens want them to be. And if there isn’t someone at the table, like an ethicist, focused on the long-term impact and using that as a success marker, we will start to see an imbalance.

A lot of these questions exist in the grey, and they’re difficult to deal with. We need people with different mindsets around the table.

Does AI accelerate the need for the legal system to change?

Olivia: In my opinion, yes. We’re at a point in time where we need to adapt and grow. When the EU AI Act was first drafted, there was no mention of generative AI. But then ChatGPT was released, which stalled the development of the EU AI Act because legislators had to figure out how to account for a new kind of AI model.

While generative AI did exist before ChatGPT, it wasn’t a widely used type of architecture. The fact that AI development is outpacing AI regulation is a huge indicator that we need to rethink how some legal systems work. We either need to make regulations adaptable so they can grow alongside the development of AI, or we need to shorten feedback loops to keep pace with technological development.

Technology moving faster than governments is a longstanding problem. You call this a democratic deficit. Can you explain?

Olivia: One of the challenges companies face is a lack of feedback loops in their development processes, such as talking to users and experts in the field.

For example, a healthcare start-up may develop software for nurse practitioners, but the practice of talking to nurse practitioners to understand their needs is missing. Just because a company designs the software doesn’t mean it knows the best solutions for a certain profession.

Companies must start talking to field experts and putting feedback loops and democratic input in place. That input is what should shape what a software platform or AI system looks like.

Does there need to be global coordination around AI?

Olivia: I think one of the unique challenges of AI is that these systems can reach a global scale. But the way that we interact with technology is heavily influenced by different cultures.

Although we need global communication to work on the main risks of AI, when it comes to specific risks or specific applications, there still needs to be cultural sensitivity. This makes global collaboration difficult. Take China and the United States. Each country has a very different approach to AI. How do we account for that if we’re supposed to have global cooperation?

Are there countries ahead of the curve on global coordination?

Olivia: We’re just now seeing countries catch up and recognize that they need to play a more active role in AI development.

So far, AI has been driven by the private sector. Now, we’re seeing executive orders coming out of the United States, and other powerhouses are starting to play a bigger role. But there are differences in approach. For example, the United Kingdom has an innovation-based approach, while the EU has a risk-based approach.

Let’s move on to the idea of accountability. Are we capable of holding AI accountable?

Olivia: That’s still a big question. Speaking as an ethicist, there will always be blame to assign, and it will always need to be assigned to a person, not a system.

But at the end of the day, we must look to our legal systems. We can’t prosecute an AI system; we must prosecute a person or a company. Even though it may feel like we can hide behind these systems, we can’t. There will always be legal ramifications.

Should users of AI-generated decisions or owners of AI systems be held accountable?

Olivia: It can be incredibly difficult to pinpoint whether harm is occurring. As a user, you can say, “I think something feels off,” or “I don’t know if I should be experiencing this technology in a different way.”

Research in the space of responsible AI and AI ethics allows us to preemptively and accurately catch a breakdown in the system. We’re moving away from a time where you can skirt accountability by saying, “We didn’t know.”

In hindsight, could social media have been managed better from an ethical perspective?

Olivia: There could have been tighter feedback loops in terms of ethics, where we may have been able to catch negative consequences and change the core structure of how we approach social media. But we’ve now been using social media for so long that it would be difficult to go back and make changes.

For example, when Facebook first launched the Like button, it didn’t necessarily have the right controls set up to understand the effects and then feed that back into product and feature design.

The Like button is now ingrained in how we use social media. Instagram even launched a feature that hides the number of likes a post gets to combat negative side effects, but it has resulted in a drop in engagement. We know the Like button causes these adverse effects, but we can’t leave it behind.

If there had been an ethics feedback loop in place earlier on, we would’ve been able to adapt. We also must have the humility to say, “We broke something we weren’t supposed to, but we’re going to try and fix it.”

Is that too great a responsibility to expect from a group of entrepreneurs?

Olivia: Life is a balance of both good and bad, and we’re never going to get past that. This is where I will differ from a lot of ethicists—I understand I will not be able to reach every company or individual with a different mindset.

In my work, I’m finding a growing sentiment for something different, for having work be more value-driven, and for technology to serve some type of greater purpose beyond just what the marketing team is putting out.

We’re always going to have bad actors. But I believe there is a shift happening, especially in Silicon Valley, toward, “Why don’t we change the world for the better instead of just changing it for change’s sake?”

Overall, are you optimistic?

Olivia: I have been lovingly nicknamed the optimistic ethicist because of the work that I’ve done and the change that I’ve seen. I’m working with people who want to achieve success. And you can hold ethics and success in the same hand; they don’t need to be in opposition. I’ve seen that in practice, and I’ve seen the results.

When you have a value-driven approach to business, it can result in stronger technology, products, companies, and people behind the scenes. I’m optimistic because I’ve seen the change that is already happening. And the more success stories we have, the more momentum is going to build.

As investors, we think about risk in many ways. Does responsible AI help reduce technological risk?

Olivia: Recently, the Massachusetts Institute of Technology (MIT) and Boston Consulting Group (BCG) released a report that put a beautiful number on the work I do. It found that companies that engage in responsible AI practices reduce their AI failure rates by 28%, which is huge, as AI failure rates usually run between 86% and 93%.
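
To put those numbers in context, here is a rough back-of-the-envelope calculation (a sketch only; it assumes the 28% is a relative reduction applied to the cited baseline range, which the excerpt does not specify):

    # Assumption: the 28% reduction is relative to the baseline failure rate;
    # the report excerpt does not say whether it is relative or absolute.
    baseline_rates = (0.86, 0.93)  # cited typical AI failure rates
    reduction = 0.28               # cited reduction from responsible AI practices
    for baseline in baseline_rates:
        print(f"{baseline:.0%} baseline -> about {baseline * (1 - reduction):.0%}")

Under that reading, failure rates would drop from roughly 86%–93% to roughly 62%–67%.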

Responsible AI is good business practice and helps de-risk development processes. Combined with an ethics layer, it gives companies the potential to establish themselves as leaders in their industries. And it is becoming riskier not to practice responsible AI than to invest in these practices.
