Friendly But Faulty: A Pilot Study on the Perceived Trust of Older Adults in a Social Robot

Efforts to promote the ageing-in-place of healthy older adults via cybernetic support are fundamental to avoiding the consequences associated with relocation to care facilities, including the loss of social ties and autonomy and feelings of loneliness. This requires an understanding of the key factors that affect the involvement of robots in eldercare and older adults' willingness to embrace their domestic use. Trust is argued to be the main foundation of an effective adult-care provider, and it might be even more significant when such providers are robots. Establishing and maintaining trust usually involves two main dimensions: 1) the robot's reliability (i.e., performance) and 2) the robot's intrinsic attributes, including its degree of anthropomorphism and benevolence. We conducted a pilot study using a mixed-methods approach to explore the extent to which these dimensions and their interaction influenced older adults' trust in a humanoid social robot. Using two independent variables, type of attitude (warm, cold) and type of conduct (error, no-error), we investigated whether the older adult participants would trust a purposefully faulty robot more when it exhibited warm behaviour enhanced with non-functional touch than when it did not, and in what way the robot's error affected trust. Lastly, we also investigated the relationship between trust and a proxy variable of the actual use of robots (i.e., intention to use robots at home). Given the volatile and context-dependent nature of trust, our close-to-real-world scenario of elder-robot interaction involved the administration of health supplements, a task in which the severity of a robot error might have greater implications for perceived trust.

that the elder may find the robot useful, but only for certain tasks [25]. Other present-day studies have attempted to assess the effect of a robot-committed error on trust, but they have considered contexts of mild severity of robot error, such as card games [26], Lego games [27], a robotic suitcase [28], or other simple domestic tasks (navigating the house, setting a table, playing music) [29]. Hence, it is imperative to consider high-sensitivity tasks (e.g., health-related), for which the robot's success rate might have considerable implications on trust.

Given the importance of supporting older adults' desire to age independently and the potential to promote this via technological support, the present study aims to shed light on (i) the role of a robot's intrinsic features on the trust of older adults in the robot and (ii) the relationship between trust and their willingness to use robots at home, within the context of a sensitive task. To our knowledge, this is the first pilot study that attempts to examine the role of a robot's intrinsic features on the perceived trust toward robots in healthy older adults (i) on a relatively sensitive task and (ii) by priming the robot's anthropomorphic features with an introduction of robot-initiated interpersonal touch. Moreover, no studies to date have examined the relationship between trust influenced by the type of robot attitude and conduct, and the intention to use robots at home in the older population. Our aims were supported by the use of an experimental design, qualitative interviews, and video analyses.

Factors related to the performance of a robot refer to its reliability and corresponding aspects such as failure rates [23]. Differently, stable traits such as ''personality'' and degree of anthropomorphism are included as attributes of a robot [23]. The importance of these categories in the specific relationship between people and robots is grounded in the more general tendency of humans to form impressions of their social relationships based on the warmth (e.g., benevolence) and competence (e.g., skill) dimensions [30].

A robot that performs correctly according to expectations is generally trusted more than a faulty one (e.g., [29]). However, this relationship is not always linear. For example, the use of recovery strategies (i.e., expressing awareness, regret, and justifications for the error) seems to mitigate the negative effect of a faulty robot on trust (e.g., [31]). At the same time, the severity of the consequences associated with a robot's error could, in turn, impact the extent to which recovery strategies mitigate the negative effect of the robot failure on people's trust in robots [32]. Moreover, the type of recovery strategy adopted by a robot to mitigate its mistake can convey different levels of perceived robot capability [33]. When the robot expressed awareness of its mistake, communicating an intention to recover, people tended to perceive the robot as more capable than when it simply apologised for the mistake. On the other hand, the robot that simply apologised was perceived as more likeable, also eliciting higher levels of intention to use robots [33].
Other than the severity of the consequences associated with the type of task and the kind of recovery strategy adopted by the robot, the way in which people respond to a faulty robot also seems to vary according to the anthropomorphic features of the robot. In this regard, studies have shown that when a failure is committed by a humanoid robot, compared to a non-humanoid robot, people's intention to interact with robots is not negatively affected [34] and their level of satisfaction with the robot may even be higher [35]. For example, a study with the humanoid robot NAO highlighted that people liked the faulty robot more than the non-faulty one [27]. Based on these earlier studies, it is yet unclear how people, specifically older adults, would respond to a faulty but apologetic humanoid robot in the context of a sensitive task.

Trust has been found to be positively influenced by the humanoid characteristics of devices in human-robot interactions (e.g., [36]). Other human-associated features such as body movement have also been shown to be a promising strategy for fostering positive perceptions of a robot's sociability even when the robot's aesthetic is not humanoid (i.e., non-anthropomorphic appearance [37]). Likewise, it has been shown that when people perceive robots as similar to humans, receiving a promise from a humanoid robot, compared to a computer, increases people's trust in the robot [38]. However, the role of anthropomorphic features in human-related outcomes such as robot acceptance, intention to use, and trust is still subject to debate. For example, a recent study showed that observing a handshake between humans and robots could exert a negative impression, which decreases trust in social robots [39]. Another study carried out with industrial robots demonstrated that people's trust was higher when interacting with a service robot compared to a humanoid robot [40]. Similarly, in another study with social robots, participants were more likely to donate money to repair the robot when exposed to a functional robot compared to an anthropomorphic one [41].

Interpersonal touch, in particular, may be perceived as a warm-anthropomorphic quality of robots.

In this study, we introduced the first non-functional interpersonal touch, in the form of a robot-initiated handshake, between older adults and the robot, as an additional feature of its anthropomorphism.

Using a mixed-methods approach, the present research focuses on further understanding the effect of social-robot-related features on older adults' trust in robots by examining the role of robot attributes (i.e., the robot's attitude) and robot performance (i.e., the robot's conduct) in elder-robot interaction.

Moreover, we also examined the relationship between trust and the intention to use robots at home, which reflects a closer proxy of actual behaviour. We examined a sequential path consisting of: (i) the effect of the robot's attitude and conduct on trust, and (ii) the effect of trust on the intention to use robots at home. This choice was based on previous well-established behavioural models where the final outcome variable is the behaviour itself (e.g., Value-Belief-Norm [49]) or a close proxy such as intention (e.g., Technology Acceptance Model [50]). In this sense, the key predictor ''trust'' in our study could be seen as a mediator. For example, in the Value-Belief-Norm model, the key predictors, or mediators, of actual behaviour are personal norms. The limits imposed by the difficulty of recruiting our type of participant sample did not allow us to test a mediation model. However, the sequential path we propose offers key novel insights and a promising baseline for future studies. Thus, we (1) experimentally examined the effect of the robot's type of attitude and conduct on trust in the robot and (2) used a correlational design for the path between trust and the intention to use robots at home.

To this aim, we articulate the following research questions:

RQ1: Does robot attitude influence the trust of older adults in the robot? How do older adults receive interpersonal robot-initiated touch?

To address our research questions, we manipulated the factors below:

1. We ran two behavioural conditions of the robot: warm and cold, where warm indicates empathic behaviour of the robot, including benevolent speech (imitating empathy) and the presence of human-robot touch, and cold indicates an aloof robot behaviour and the absence of human-robot touch. The touch was simulated as a handshake when the robot introduced itself to the participants at the start of the interaction. The warm

A further benefit of using a humanoid robot is the possibility of manipulating robot-initiated touch, here designed as a greeting handshake (Fig. 2). The sensor at the back of NAO's palm (grey area) allows the robot to ''feel'' the human's touch and react to the event accordingly. If the handshake did not occur (i.e., the participant did not touch the robot), NAO would retract its arm after a pre-determined waiting time of a few seconds.
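To make this interaction flow concrete, the following is a minimal sketch of how such a touch-aware handshake could be scripted with the NAOqi Python SDK; it is not the study's actual implementation, and the robot address, joint angles, timeout, and spoken text are illustrative assumptions.

```python
# Hypothetical sketch of the handshake logic described above (not the authors' code).
# Assumes the NAOqi Python SDK (naoqi.ALProxy) and a reachable NAO robot.
import time
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # placeholder address

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
memory = ALProxy("ALMemory", ROBOT_IP, PORT)
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)

def offer_handshake(timeout_s=5.0):
    """Extend the right arm, wait for a touch on the back of the hand, then retract."""
    motion.wakeUp()  # ensure motor stiffness is on before moving the arm
    # Raise the right arm to a handshake pose (joint angles are illustrative).
    joints = ["RShoulderPitch", "RShoulderRoll", "RElbowRoll", "RWristYaw"]
    motion.angleInterpolationWithSpeed(joints, [0.3, -0.2, 0.5, 0.0], 0.2)
    motion.openHand("RHand")

    touched = False
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        # "HandRightBackTouched" is the ALMemory key for the tactile sensor on the
        # back of NAO's right hand; it reads 1.0 while the sensor is pressed.
        if memory.getData("HandRightBackTouched") == 1.0:
            touched = True
            break
        time.sleep(0.1)

    if touched:
        tts.say("Nice to meet you!")  # illustrative warm-condition reaction
    # Retract the arm whether or not the handshake was reciprocated.
    motion.closeHand("RHand")
    motion.angleInterpolationWithSpeed(joints, [1.4, -0.2, 0.4, 0.0], 0.2)

offer_handshake()
```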

2) EXPERIMENTAL CONDITIONS
We investigated a close-to-real-life scenario in which the robot was tasked with administering a supplement intake involving two types of coloured pills: a blue pill (daytime supplement) and a red pill (night-time supplement). These were positioned on the table between NAO and the participant (Fig. 2). The NAO robot would suggest which supplement the participant should take by dictating the colour of the pill and pointing in its direction (left or right) with an arm movement. The experiments were designed in Choregraphe Suite 2.8.6.

The warm condition included benevolent speech (e.g., asking the participants about their day) and initiating a handshake at the start of the interaction.

instruction was clear to receive an explicit reaction to the error before the robot would correct it.
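For concreteness, this is a minimal sketch of how the suggestion routine described above (dictating the pill colour and pointing towards it) could be scripted with the NAOqi Python SDK rather than in Choregraphe; the table layout, joint angles, robot address, and spoken text are assumptions, not the study's actual behaviour.

```python
# Illustrative sketch (not the authors' Choregraphe behaviour) of the pill suggestion.
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # placeholder address

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)

# Assumed layout: blue (daytime) pill to the robot's left, red (night-time) pill to its right.
PILLS = {
    "blue": {"side": "left", "label": "daytime supplement"},
    "red": {"side": "right", "label": "night-time supplement"},
}

def suggest_pill(colour):
    """Dictate the pill colour and point towards it with the corresponding arm."""
    pill = PILLS[colour]
    prefix = "L" if pill["side"] == "left" else "R"
    joints = [prefix + "ShoulderPitch", prefix + "ShoulderRoll"]
    motion.wakeUp()  # ensure motor stiffness is on
    # Lower the shoulder pitch so the arm points roughly towards the table (angles illustrative).
    motion.setAngles(joints, [0.6, 0.0], 0.2)
    tts.say("Please take the {} pill, your {}.".format(colour, pill["label"]))
    # Return the arm to a resting pose.
    motion.setAngles(joints, [1.4, 0.1 if pill["side"] == "left" else -0.1], 0.2)

suggest_pill("blue")  # e.g., a daytime reminder
```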

The participants were welcomed to the University premises and accompanied to a waiting room, where they were offered face masks, hand sanitisers, and a consent form to read and sign. After participants consented to the study, the researchers recorded their demographic data. The participants were briefed that they would have a one-to-one interaction with a robot called NAO, which would pretend to remind them to take their daily supplements. Participants were instructed not to swallow the pills under any circumstance, and it was made clear that the experiment was only a simulation. They were advised to speak loudly and clearly to the robot and encouraged to face the robot throughout the interaction. We advised them to repeat the questions if they believed the robot did not listen.

The intention to use the robot was measured as the willingness to use the robot in prospective home contexts.

The video contents were blind-reviewed by two independent judges using a common coding framework. The coding framework is compliant both with previous studies on human-robot interaction (e.g., [27], [55]) and with the specific aims of this study.

Judges were instructed to evaluate three dimensions:

Our categorical evaluations indicated a total of 57.14% positive evaluations, with the remaining 42.85% being neutral and none negative, in response to the robot's initiation of touch. Given that the handshaking gesture occurred before the robot committed an error, we do not report any results concerning the interaction between conditions for this case (i.e., the handshake is indifferent to the presence or absence of error).

The observations of the video content indicated that only 3 of the 7 participants touched the robot, reciprocating a close-to-natural handshake. Among these, one participant declared in the post-interaction interview that they had felt uncomfortable when reciprocating the gesture.

Among the remaining participants who did not touch the robot, one participant only pretended to reciprocate the handshake, without touching the robot, declaring afterwards that they were unsure whether they were allowed to touch the robot. Another participant reported that a major reason for not touching the robot was her concern about coronavirus.

''No, in times of Covid we don't touch people'' (Female, 75).
Similarly, another participant expressed confusion about touching the robot (both as observed in the recording and as reported verbally during the interview), although they recognised that the robot initiated a handshake. The last participant neither touched the robot nor reciprocated the gesture, maintaining a closed body posture and leaning away from the robot, but did not disclose any reason for their reaction. These observations, along with the participants' subjective ratings, confirmed that 6 of the 7 participants recognised the presence of robot-initiated touch. Given that touch was not the only element of the warm attitude condition and did not play a role in the error conduct condition (it occurred before the error), we included this participant in our analyses.

The participants were asked to describe their overall experience with the robot immediately after the interaction through open questions. The aim was to capture self-reported evaluations of the robot and the interaction that were not subject to the interpretation of the research team.

Despite the non-trivial variability in the participants' experience with NAO, the overall self-reported evaluations suggested that the robot's voice was perceived as unpleasant and uncanny. Some participants wished the robot would appear more humanlike.

The descriptive data reported in Table 2 and Fig. 4 illustrate the level of perceived trust as a function of type of attitude (A) and type of conduct (B).

Our descriptive results suggest that the robot's type of conduct influenced the older adults' trust in the robot (RQ2). The participants perceived the robot as less trustworthy when it committed an error. In contrast, the main descriptive effect showed no meaningful impact of the type of attitude on trust.

To further clarify these findings, we looked at the interaction between the type of conduct and the type of attitude on the perceived level of trust (Fig. 5). The reported results suggest that the absence of observed differences in the level of trust as a function of the type of attitude (i.e., main effect) could be due to the confounding effect of the type of conduct. In simpler words, when the robot's conduct is correct (no-error), interacting with an empathic (warm) robot is important for increasing the level of trust toward the robot (see Table 2) (RQ1). By contrast, as represented in Fig. 5, when the robot commits an error, interacting with a warm robot does not change trust perception compared to a cold robot (RQ3).

Finally, we investigated the relationship between intention to use robots at home and trust in the robot using

The interpersonal robot-initiated touch was intentionally introduced to strengthen the robot's anthropomorphism and to evaluate whether the degree of ''warmth'' of the robot would affect our participants' trust. However, we could not disentangle the specific effect of touch from the other implemented features (e.g., emotion recognition, speech content) in the quantitative analysis. Therefore, the role of robot touch in the participants' evaluation of the robot was only inspected via the qualitative data, looking at their reaction(s) during the robot handshake in the video analyses. Our qualitative findings indicated a majority of positive evaluations, and none negative, in the behavioural responses of older adults towards the interpersonal robot-initiated touch.

In summary, the joint analyses showed that when the robot error was not present, the trust of older adults in the robot increased if the robot exhibited an empathic (warm) attitude.

adults' intention to use robots domestically?

We found that trust toward robots and intention to use

The robot's performance, exhibiting an error or not (i.e., robot conduct), was assessed for two different personality attributes (i.e., robot attitude). We measured trust in the robot in the absence of error and later evaluated whether the behavioural traits of the robot (warm vs. cold attitude) would impact trust when an error occurred. A ''warm'' robot attitude involved benevolent behaviour and robot-initiated touch (handshake), along with a greater effort to recover from the error.

Our findings indicated that, while older adults might value a robot with a warm attitude, this type of attitude cannot effectively compensate for the robot's failure in task fulfilment. The quantitative data revealed a decrease in the participants' trust in the robot when the robot committed an error. Similarly, the qualitative analysis suggested that, although the overall rating of the robot's error was more positive in the case of a warm robot, the participants' reaction to the error did not vary significantly. The robot's empathy, including robot-initiated touch, seemed to strengthen the participants' trust only when the robot's conduct was error-free. The robot's humanlike social behaviour accompanied by empathic intelligence did not overcome the effect of a faulty performance on trust perception, which might be explained by the fact that the robot was already anthropomorphic in both conditions, or that the task requires higher reliability given its sensitivity. However, although trust was depleted, the percentage of positive self-reported ratings of the interaction (qualitative data) was higher when participants experienced a faulty cold robot (50%) than a cold robot that did not commit an error (25%). We speculate that this occurred because, despite the robot's cold behaviour, committing an error might be associated with human likeness given the robot's morphology, as also argued in [42]. Finally, we assessed the implication of trust for the participants' intention to use robots at home. Our results suggest that a high degree of trust indicates that older adults may be more willing to accept the domestic use of robots, especially in health-related contexts.

In addition, this pilot study is among the first contributions aiming to reflect on the consequences of robot touch and pro-social behaviour in eldercare. Our participants' self-reported feelings indicated that older adults may be somewhat resistant to a robot's interpersonal touch. Even when participants recognised the touch, revealing a generally positive affect (57.14% positive ratings, the remainder neutral, and no negative rating), they demonstrated some uncertainty about the touch (video analysis and interviews), which may be due to a lack of familiarity or comfort with this type of technology. Nevertheless, they appeared to affirm that feelings of empathy and solidarity are fundamental needs for elderly people, although it remains unclear whether a robot can meet those needs meaningfully. Yet, having a ''warmer'' robot that exhibited these attributes strengthened the participants' trust in and likeability of the robot. Future investigations are needed to generalise the findings of this pilot study and to understand how this might impact the design of robotic technologies, with practical implications for facilitating the ageing in place of older adults.