Dyson Bot x IBM.

Dyson Bot information
ROLE
UX/UI Designer | UX Researcher
INSTITUTION
University of Waterloo & IBM
PROJECT TYPE
Capstone Group Project - Group of 4
TIMEFRAME
2 Weeks

TOOLS

CONTRIBUTIONS

  • 💡 Identified major problems with the existing chatbot on Dyson's website
  • 💡 Led prototyping and UI design
  • 💡 Participated in usability test preparation and facilitated two test sessions
  • 💡 Took full responsibility for privacy-related design
About Dyson Bot 🤖️
UX/UI design & AI ethics

With the rapid development of technology, AI has been integrated into our lives in many ways, and advances in machine learning and natural language processing have made these systems smart. But is this trend purely good for humans? Can we trust AI with our data? In this capstone group project, carried out with two IBM mentors, we aimed to address AI trust issues, specifically trust in chatbots. We chose the chatbot on Dyson's official website for a redesign and ultimately delivered a conversational AI chatbot focused on privacy and other trust-building areas.

Award ✨

I'm super excited to share that our design solution won first place in the MDEI program's capstone project competition! I really appreciate all the hard work of my team members and the support of our instructor Tobias Thielen and our IBM mentors Gord Davison and Will Ragan.

rank 1

AI, can we trust you?

Artificial intelligence systems are rapidly coming to dominate our apps and tech services. But it's important to remember that an AI is only as intelligent, fair, and ethical as the data it is fed and the values of its creators.

Designers and developers of AI systems have a responsibility to understand the impact of the systems they build and to consider the ethics of their work.

Only through transparency and ethical consideration can designers create the trust users need to adopt and use the systems and applications they build.

how to trust chatbot

What's this project about?

This is an intensive 14-day project aimed at providing solutions for AI trustworthiness and ethics. The project focuses on solving the business prompt below with our IBM mentors:

How might we help our users build trust in an AI-infused tool or application?

By analyzing the ethical problems of AI technologies and investigating various chatbots on the market, we set our project scope: redesigning the chatbot on Dyson's website to build user trust and address AI ethics.

Research & problem identification.

We started by looking into the various types of AI technologies and decided to narrow our scope to chatbots, as research shows a strong need for reliable chatbot services in the market.

  • 80% of customer interactions can be resolved by well-designed bots
  • 60% of customers want easier access to self-serve solutions for customer service
  • 50% of enterprises will spend more on bots than on traditional mobile app development by 2021

We initially identified general ethical issues with chatbots; the main ones include:

  • Personal data collection without informed consent
  • Users failing to realize they are communicating with a bot
  • Bot personas that may carry gender bias
  • Users trusting chatbots only with simple, convenient tasks
Chatbot Ethical Problems

We then narrowed our scope down to e-commerce chatbots and identified problems that prevent user trust in e-commerce chatbots.

  • 👉🏻 Mistrust of automation
    • When using AI chatbots, people would still rather be connected to a real person because they trust humans more to answer their questions reliably.
  • 👉🏻 Data privacy
    • People have concerns about entering sensitive personal information into AI-infused applications.
    • Chatbots do not appear secure.
    • Customers do not know how the AI tool will use their information or how susceptible it is to a breach.
  • 👉🏻 Impersonal qualities
    • Human beings are naturally inclined to trust others, which is key to relationship building; our brains are hard-wired to assess the trustworthiness of others. Yet many e-commerce chatbots fail to build trust through natural, conversational methods. In other words, these chatbots are very mechanical, making users click through a series of prompts instead of having a genuine conversation.

Finally, we surveyed the various chatbots on the market and identified their general problems. We found Dyson's chatbot to be the most problematic, as shown in the figure below, so we chose it for the redesign.

Dyson Chatbot Problems

Pain points.

  • Have to click through a long list of choices
  • System defaults to connecting users to a human advisor
  • Conversation context is lost when the user reaches a human advisor
  • Lack of an easily accessible privacy policy

These pain points all trace back to one underlying problem: the chatbot does not use conversational AI.

How could we make it trustworthy?

A conversational AI can better help with trust issues by supporting:

  • 👉🏻 Natural Language Understanding: Make real conversation
  • 👉🏻 24 Hour Customer Service: Accessible anytime
  • 👉🏻 API Connecting to Other Systems: Provide relevant and reliable services
  • 👉🏻 Sentiment Analysis: Develop a stronger relationship
What is conversational AI?

Conversational AI refers to AI tools that users can talk to. These tools use data, machine learning, and natural language processing to recognize speech and text inputs and imitate human interactions.
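
The recognition step described above can be sketched as a toy intent classifier. This is a minimal illustration, not the actual Dyson Bot implementation: the intents, keyword lists, and `fallback` label are all invented, and a real conversational AI would use a trained NLP model rather than bag-of-words overlap.

```python
import re

# Toy intent recognition: match a user message against keyword sets.
# Illustrative only; real conversational AI uses trained NLP models.
INTENTS = {
    "recommend_product": {"which", "recommend", "choose", "best", "vacuum"},
    "privacy_policy": {"privacy", "data", "policy", "collect"},
    "live_agent": {"human", "agent", "person", "representative"},
}

def classify(message: str) -> str:
    """Return the intent whose keywords overlap the message the most."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    # With no keyword hits, fall back to asking the user to clarify.
    return best if scores[best] > 0 else "fallback"

print(classify("Which vacuum would you recommend?"))  # recommend_product
```

A mechanical chatbot stops at a fixed menu of prompts; a conversational AI layers entity extraction, dialogue state, and sentiment on top of a recognition step like this.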

IBM's five ethical focal areas.

We categorized the issues into IBM's five areas of ethical focus: accountability, value alignment, explainability, fairness, and user data rights. The areas provide an intentional framework for establishing an ethical foundation for building and using AI systems.


01
Accountability

AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes

accountability problems and solutions


02
Value Alignment

AI should be designed to align with the norms and values of your user group

value alignment problems and solutions


03
Explainability

AI should be designed for humans to easily perceive, detect, and understand its decision process

explainability problems and solutions


04
Fairness

AI must be designed to minimize bias and promote inclusive representation

fairness problems and solutions


05
User Data Rights

AI must be designed to protect user data and preserve the user's power over access and uses

user data rights problems and solutions

User persona.

Let's meet our user, Julie, a sales director who values quality and efficiency. She wants to buy a vacuum cleaner, and Dyson is her first choice, but with so many options, it is difficult for her to know which one is best for her.

persona

User flow.

We then mapped out the user flow for purchasing with the chatbot, which helped us figure out how the chatbot should behave to meet user expectations. Initially, we placed the entire checkout process inside the chatbot.

user flow

Low-fidelity prototype.

We then designed a low-fidelity prototype with five flows: product recommendation, checkout with an existing account, checkout with a new account, privacy policy, and end-chat survey.

low-fidelity prototype

Usability testing & findings.

Participants
  • 2 rounds of testing
  • 4 participants per round
Research Methods
  • A/B testing
  • Rating of overall trust
  • Short interview
Data Collection
  • Screen recording on Zoom
  • Task completion and number of errors
1st Round Findings & Changes 👇🏻
01
No checkout or login within the chatbot

"Username and password is like a scam."

"Perhaps not asking the user to type in personal information in the chat box."

"I would say it's better to send me the link to the website and I can do the check out by myself."

Action: Move the entire checkout flow to Dyson's external secure checkout link instead of having it in the chat

change1
02
Highlight & adjust privacy options

"Better to provide privacy-related options ahead or highlight those options."

"I can't close the 'cookies settings' on the top."

Action: Add a popup at the beginning to allow changes to user data collection settings and inform user how to access the privacy policy during chatbot use

change2
03
Highlight the live agent option

"I wasn't able to get in touch with a human throughout the experience."

Action: Color the "live agent" button yellow and add a text label in addition to the icon

change3
04
'Help me choose' instead of 'Make a purchase'

"I think 80% of customers won't purchase a product using a chatbot."

"'Make a purchase' makes me feel uneasy, because it feels like they want me to input credit card and payment info in the chat."

Action: Reword "Make a purchase" in the chatbot menu to "Help me choose a product"

change4
2nd Round Findings & Changes 👇🏻
01
Explain questions in the end-chat survey

"I think you could also add some description to 'Anything we can do better.'"

Action: Add description to 'Anything we can do better' in the end-chat survey

change1
02
Add 'Skip question' options

"Honestly, I don't know my answer of this question."

Action: Add "Skip question" options to handle cases where the chatbot asks a question that may not apply to the user or that they would rather leave unanswered

change2
03
Make the purchase entrance more obvious

"I don't know what 'How to make a purchase' means."

"I don't know how to checkout after receiving a recommendation."

Action: Reword 'How to make a purchase' to 'I'd like to make a purchase' and add a shopping cart icon on the bottom right corner of the screen

change3
04
Add sentiment analysis before connecting to live agents

"The chatbot connected me to a live agent without any explanation."

Action: Add a "Troubleshoot or Maintain My Dyson" flow and apply sentiment analysis, enabling the chatbot to analyze the user's sentiment and hand the user over to a live agent with a proper explanation when appropriate

change4
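
The sentiment-based handoff above can be sketched roughly as follows. This is a hypothetical sketch: the negative-word list, the per-message scoring, and the escalation threshold are all invented stand-ins for a trained sentiment model.

```python
# Hypothetical sentiment-based handoff: score each user message and
# escalate to a live agent once frustration accumulates. A production
# bot would use a trained sentiment model, not a word list.
NEGATIVE_WORDS = {"useless", "annoying", "frustrated", "broken", "terrible", "worse"}

def sentiment(message: str) -> int:
    # A lower (more negative) score means an angrier message.
    return -sum(word.strip(".,!?") in NEGATIVE_WORDS
                for word in message.lower().split())

def should_escalate(history: list[str], threshold: int = -2) -> bool:
    """Escalate when cumulative sentiment across the chat hits the threshold."""
    return sum(sentiment(message) for message in history) <= threshold

chat = ["My vacuum is broken.",
        "These steps are useless, I'm only getting more frustrated!"]
if should_escalate(chat):
    # Hand over with a proper explanation, as the redesign recommends.
    print("It looks like these steps haven't solved your issue. "
          "Let me connect you to a live agent.")
```

Explaining the handoff ("it looks like these steps haven't solved your issue") is what distinguishes this flow from the silent transfer participants complained about.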

Colors & typography.

color scheme
typography

Chatbot character design.

We designed a gender-neutral chatbot character named 'Dyson Bot'. The design is inspired by the image of Sir James Dyson, the billionaire entrepreneur who founded Dyson Ltd.

Responsive · Friendly · Knowledgeable · Gender-Neutral
Sir James Dyson
chatbot character design

High-fidelity prototype.

The final high-fidelity prototype consisted of five flows: product recommendation, vacuum checkout, vacuum troubleshooting, privacy policy, and end-chat survey.

How does our solution address AI trust issues? 👇🏻

There are eight design highlights, each of which helps address trust issues.

01
Anytime access to user data collection settings

Ensure user data rights: Users can access data collection settings through the pop-up window at the beginning, via "Privacy Policy" in the menu, by typing "Privacy Policy" in the chat, or from the end-chat survey.

02
User-entered data usage instructions

Keep transparency and fairness: Users are informed about what data is collected and how it will be used whenever they enter data (address, payment info, etc.). A summary of the collected data also appears in the end-chat survey.

03
Feedback option for bot's responses

Minimize bias and continuously train the AI: A feedback option allows users to evaluate whether each of the bot's responses is helpful.

04
User response templates

Quick and smart replies: Multiple user response templates appear throughout the flow; when applicable, users can simply click a response in the template.

05
Product recommendations and comparisons

Offer explainability: The chatbot asks users a series of questions in order to recommend a product, and when a recommendation is given, it also explains the reasoning behind it.

06
Sentiment analysis

Empathize better with users: Sentiment analysis is applied in the troubleshooting flow, letting the chatbot analyze the user's sentiment and decide whether to connect them to a human advisor.

07
Survey to collect feedback and disclose collected data

Ensure user data rights, fairness, transparency, and minimize bias: There is a survey at the end to ask users for feedback on improvement opportunities and biases. This section also summarizes the data collected.

08
Chatbot character

Give the bot personality and empathize better with users: A gender-neutral chatbot character with a name and an image was developed so that users sense more personality while chatting with the bot.

Next steps.

  1. Implement sentiment analysis and voice access.
  2. Apply the redesign changes to other industries, such as financial services and health care.
