Abstract

Article Title

Experimental study on the effect of adopting humanized and non-humanized chatbots on the perception of user trust in the Yellow September campaign

Keywords

chatbot
experimental study
trust

Area

Information Technology

Theme

Behavioral and Decision-Making Aspects of IT

Authors

Name
1 - Fernanda Silva De Gois
Escola Paulista de Política, Economia e Negócios - Universidade Federal de São Paulo - EPPEN/Unifesp - Osasco
2 - Luis Hernan Contreras Pinochet
Escola Paulista de Política, Economia e Negócios - Universidade Federal de São Paulo - EPPEN/Unifesp - Campus Osasco
3 - Vanessa Itacaramby Pardim
UNIVERSIDADE DE SÃO PAULO (USP) - FEA
4 - Luciana Massaro Onusic
Escola Paulista de Política, Economia e Negócios - Universidade Federal de São Paulo - EPPEN/Unifesp - EPPEN

Abstract

Chatbots, also known as virtual assistants, are consistent in quality, since responses to the same stimuli are always the same, and new responses and functionalities are added as new information is fed into them. Chatbots also do not have mood swings, bad days, tiredness, or other human states that can harm customer service (Luo et al., 2019). They can relieve customer service centers by providing uninterrupted self-service and reducing the cost of call center operations (Dennis et al., 2020).
Some research has suggested that chatbots can positively influence people’s lives, especially those facing mental health issues (Khan et al., 2021; Nordheim et al., 2019; Toader et al., 2019; Zhang et al., 2020). In this study, the interaction with the chatbot was designed to inform users and answer the main questions about the theme of the campaign, which focuses on mental health care and suicide prevention. The theme addressed in the chatbot interaction is “Yellow September” (“Setembro Amarelo”), a Brazilian national campaign to raise awareness of suicide prevention.
Chatbots are an artificial intelligence tool with a wide range of applications. Artificial intelligence mimics human behavior and is developed and implemented with the customer at the center (Toader et al., 2019). Although a chatbot, not being human, inherently lacks full understanding of the user, interactions with artificial intelligence encourage more intimate disclosure by the interlocutor because no judgment is perceived (Ho et al., 2018).
The research uses the experimental method. The chatbots were built with the online tool Chatfuel, and the experiments were conducted via Facebook Messenger with a sample of 511 participants. A follow-up questionnaire was applied for data collection, and the data were submitted to four stages of analysis: exploratory factor analysis, analysis of the experiment, moderation analysis, and analysis of covariance and regression. The moderation analyses were performed in three steps.
The exploratory factor analysis showed that all analyzed constructs had a high significance level when the difference between the two groups was examined. The analysis of the experiment showed that similarity to human, perceived competence, and satisfaction scored higher in the group that interacted with the humanized chatbot than in the group that interacted with the non-humanized chatbot. In the final step, the analysis of covariance and regression, high levels of trust were associated with high levels of similarity to human, perceived competence, and satisfaction.
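The final regression step can be illustrated with a minimal sketch: regressing trust on the three predictors via ordinary least squares. Everything below is an assumption for illustration only — the variable names, effect sizes, and synthetic data are hypothetical; only the sample size (n = 511) comes from the study.

```python
import numpy as np

# Hypothetical illustration of regressing trust on the three predictors
# reported in the study. The data are simulated, not the study's data.
rng = np.random.default_rng(42)
n = 511  # sample size reported in the study

similarity = rng.normal(0.0, 1.0, n)    # similarity to human
competence = rng.normal(0.0, 1.0, n)    # perceived competence
satisfaction = rng.normal(0.0, 1.0, n)  # satisfaction

# Assumed positive effects, consistent with the reported finding that
# higher levels of all three predictors go with higher trust.
trust = (0.5 * similarity + 0.3 * competence
         + 0.4 * satisfaction + rng.normal(0.0, 0.5, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), similarity, competence, satisfaction])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
print(beta)  # [intercept, b_similarity, b_competence, b_satisfaction]
```

Under this simulation, all three estimated slopes come out positive, mirroring the direction of the study's reported result.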
Users who interacted with the humanized chatbot perceived greater similarity to human, perceived competence, and satisfaction than users who interacted with the non-humanized chatbot, and higher levels of these three variables resulted in higher levels of user trust in the service. This research therefore helps clarify which characteristics a chatbot needs in order to earn more trust, one of the biggest challenges this technology faces (Skjuve et al., 2021).
Nordheim, C. B., Følstad, A., & Bjørkli, C. A. (2019). An Initial Model of Trust in Chatbots for Customer Service—Findings from a Questionnaire Study. Interacting with Computers, 31(3), 317–335.
Zhang, J., Oh, Y. J., Lange, P., Yu, Z., & Fukuoka, Y. (2020). Artificial Intelligence Chatbot Behavior Change Model for Designing Artificial Intelligence Chatbots to Promote Physical Activity and a Healthy Diet: Viewpoint. Journal of Medical Internet Research, 22(9), e22845.