'Botshit' is an example of how AI is making customer service worse (2024)

When Jake Moffatt's grandmother died in 2022, he booked a flight to her funeral on Air Canada, hoping to use the airline's bereavement travel discount.

Air Canada's customer service chatbot told Moffatt he could claim the discount after the flight. Yet the company later denied his request, saying claims had to be filed before the flight.

In February, Canada's Civil Resolution Tribunal — an online platform to resolve disputes — ruled that Air Canada's chatbot had misled Moffatt and ordered the airline to compensate him for the discount.

The false information the chatbot gave Moffatt is an example of "botshit": incorrect or fabricated content produced by chatbots that humans then rely on to complete tasks.


Researchers Ian P. McCarthy, Timothy R. Hannigan, and André Spicer coined the term in a paper published in January and expanded on it in a July 17 Harvard Business Review article.

Botshit is one example of how AI could worsen companies' customer service. As businesses adopt generative AI, the researchers said, employees must be more critical in managing chatbot-generated responses.

AI is here to stay, but chatbots keep spewing botshit

The use of generative AI in the workplace has nearly doubled in the past six months, according to a survey of 31,000 global workers conducted between February and March by the research firm Edelman Data and Intelligence. The results were published on May 8 by Microsoft and LinkedIn.

What's more, 79% of business leaders said their companies must adopt AI to remain competitive.


According to economist Dan Davies, companies employ technology like AI and chatbots to streamline decision-making and optimize efficiency. Over the past decade, chatbots have proliferated as a customer service feature for businesses.

However, that can also lead to situations where no single employee is accountable when an algorithm goes awry.


For example, researchers in 2023 found that roughly 75% of ChatGPT's responses to drug-related questions were inaccurate or incomplete. When asked for sources, ChatGPT also generated fake citations to support some of its inaccurate responses.

In January, a UK parcel company removed its new AI customer service chatbot after it swore at a customer.


And when Google rolled out its AI chatbot Gemini earlier this year, it produced historically inaccurate images of people of color. The company paused and then relaunched the chatbot's image-generation tool after public backlash.

In a February memo to employees, Google CEO Sundar Pichai said the chatbot's responses were "unacceptable" and the company had "got it wrong" when trying to use new AI.

McCarthy, Hannigan, and Spicer wrote in the July 17 article that businesses that carelessly use AI-generated information jeopardize their customer experience and reputation, and even expose themselves to legal liability.

"Managers and organizations are beginning to see an increasing array of new risks based on expectations and professional standards around the accuracy of information," the researchers wrote.


Still, they wrote that they believe AI provides opportunities for useful application "as long as the related epistemic risks are also understood and mitigated."

Black boxing and the risks of using AI

The biggest challenge associated with using AI chatbots for customer service is "black boxing," in which it becomes difficult to discern why an AI technology operates a certain way, according to the researchers.

McCarthy, Hannigan, and Spicer wrote that AI customer service chatbots can be improved through more rigorous and specific guidelines, guardrails, and restrictions on the available range of vocabulary and response topics.

However, the researchers argued that customer service is the least risky use of AI for businesses.


Using AI chatbots for tasks like safety procedures in healthcare, complicated financial budgeting, or legal judgments is riskiest, according to the researchers: those are the cases where accuracy matters most, yet is hardest to verify in real time.

In 2023, a New York law firm was fined $5,000 after lawyers submitted a court brief containing false references produced by ChatGPT.

While general AI chatbots like ChatGPT are more susceptible to botshit, practice-specific chatbots that use retrieval-augmented generation (RAG), a technique that grounds responses in a curated set of documents, are more promising, according to the researchers.
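To sketch what retrieval-augmented generation means in practice: instead of letting the model answer from memory, the system first retrieves relevant passages from a trusted document store and instructs the model to answer only from them. The toy policy text and word-overlap scoring below are stand-ins; a real system would use embeddings and an actual language model.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation (RAG).
# DOCS is a toy policy store; the scoring is simple word overlap.
DOCS = [
    "Bereavement fare claims must be submitted before the flight departs.",
    "Checked baggage allowance is one bag up to 23 kg on economy fares.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved policy text, not its own guesses."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using only this policy text:\n{context}\n\nQuestion: {query}"
```

Because the prompt carries the actual policy wording, a fabricated answer like the one Moffatt received becomes easier to prevent and to audit.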

The researchers said that rigorous checking and calibration of the AI's output over time, including expert fact-checking of its responses, can mitigate risks.


"Chatbots and other tools which draw on generative AI have great potential to significantly improve many work processes," the researchers wrote. "Like any important new technology, they also come with risks. With careful management, however, these risks can be contained while benefits are exploited."
