Media Effects Research Lab - Research Archive

The impact of anthropomorphic cues on personalized health assessment users’ privacy concern and intention to disclose personal information

Student Researcher(s)

Yerheen Han (Master's Candidate);

Jessica Ruiz (Master's Candidate);

Tara Traeder (Master's Candidate);

Faculty Supervisor

INTRODUCTION

Scholars suggest that privacy concern and hesitance to disclose personal information may be an inevitable consequence of using personalization systems, despite technological and legal efforts to protect users' privacy (Chellappa & Sin, 2005). The purpose of our study is to examine whether inserting anthropomorphic cues (e.g., facial features, gender, audible human voices; Kim & Sundar, 2012) into the interface of an online personalization system decreases or increases users' privacy concern and intention to disclose. Such cues are often used in domains such as e-commerce, where agents are designed to evoke the same experience as interacting with a salesperson at a "brick and mortar" store (Qiu & Benbasat, 2009).

This paper examines the impact of anthropomorphic cues displayed in personalization systems on users' privacy concern and intention to disclose personal information. Theoretically, (a) our study has the potential to identify anthropomorphic cues as a factor that should be investigated in research on users' privacy concern and intention to disclose, and (b) it may contribute to advancing knowledge of users' responses to anthropomorphic cues. Practically, our study may offer guidance for designing personalization systems that lower users' privacy concerns and increase their willingness to disclose personal information.

RESEARCH QUESTION / HYPOTHESES:

H1: Embarrassment will be higher in the anthropomorphic cues condition than the machine cues condition.

H2: Embarrassment will be positively related to privacy concern.

H3: Embarrassment will be negatively related to intention to disclose personal information.

H4: Embarrassment will mediate the relationship between anthropomorphic cues (vs. machine cues) and privacy concern.

H5: Embarrassment will mediate the relationship between anthropomorphic cues (vs. machine cues) and intention to disclose personal information.

RQ1: Is there a relationship between anthropomorphic cues (vs. machine cues) and trust in an assessment agent?

H6: Trust in an assessment agent will be negatively related to privacy concern.

H7: Trust in an assessment agent will be positively related to intention to disclose personal information.

RQ2: Does trust in an assessment agent mediate the relationship between anthropomorphic cues (vs. machine cues) and privacy concern?

RQ3: Does trust in an assessment agent mediate the relationship between anthropomorphic cues (vs. machine cues) and intention to disclose personal information?

METHOD

Participants (N = 197) in a between-subjects experiment were exposed to one of four versions of a personalized health assessment; each version contained equivalent content but differed in the cues displayed by the agent guiding the assessment. An online personalized health assessment of alcohol consumption behaviors was constructed as the stimulus material for this study. The questions selected for the health assessment were intended to vary in the level of embarrassment they elicited from participants. Participants received personalized information in response to 12 questions.

The information was equivalent across conditions in word count and sentence structure. Responses from the assessment were not truly personalized, but they were intended to be perceived as such. The versions of the health assessment were equivalent in all respects except the cues displayed by the assessment agent. Two types of cues were manipulated: visual cues and text-based cues.

In all conditions, visual cues were displayed to the left of a static gray banner that read "Health Assessment." All visual cues were the same size and appeared in the same position on screen. The visual cues for the anthropomorphic conditions included an image of a virtual agent (i.e., a female face for female-anthropomorphic; a male face for male-anthropomorphic) and a nametag (i.e., "Laura" for female-anthropomorphic; "Alex" for male-anthropomorphic). The visual cues for the machine condition included an image of a computer and the label "System." The control condition had no image or label. Text cues were displayed in all conditions while participants waited to receive personalized responses (i.e., "Please wait, Laura is working…" for female-anthropomorphic; "Please wait, Alex is working…" for male-anthropomorphic; "The system is processing" for machine; "The agent is processing" for control).
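For illustration, the four-condition manipulation described above can be summarized as a simple mapping from condition to the visual and text cues shown to participants. The sketch below is not the original stimulus code; the image filenames are hypothetical placeholders.

```python
# Minimal sketch of the cue manipulation: each of the four conditions pairs
# a visual cue (shown beside the "Health Assessment" banner) with a text cue
# (shown while personalized responses loaded). Image filenames are hypothetical.
CONDITIONS = {
    "female_anthropomorphic": {
        "visual_cue": {"image": "female_face.png", "label": "Laura"},
        "text_cue": "Please wait, Laura is working…",
    },
    "male_anthropomorphic": {
        "visual_cue": {"image": "male_face.png", "label": "Alex"},
        "text_cue": "Please wait, Alex is working…",
    },
    "machine": {
        "visual_cue": {"image": "computer.png", "label": "System"},
        "text_cue": "The system is processing",
    },
    "control": {
        "visual_cue": None,  # no image or label in the control condition
        "text_cue": "The agent is processing",
    },
}
```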

All participants completed the study online. They received an email with information about the study, instructions on how to participate, and a link leading to the survey. The link directed participants to a consent form where they clicked a link to give consent and were subsequently randomized into conditions. Health assessment questions were followed directly by the study questionnaire.

RESULTS

Among female participants, the agent with male-anthropomorphic cues (M = 4.08, SD = 2.83, n = 24) was perceived as less human-like than the agent with machine cues (M = 6.29, SD = 2.45, n = 42; t(116) = 3.17, p < .05), but the agent with female-anthropomorphic cues (M = 5.29, SD = 3.14, n = 12) was perceived as no different in humanness from the agent with machine cues (t(116) = 1.12, n.s.). Male participants, however, perceived both the agent with male-anthropomorphic cues (M = 5.38, SD = 2.59, n = 13) and the agent with female-anthropomorphic cues (M = 5.57, SD = 3.51, n = 14) as no different in humanness from the agent with machine cues (M = 5.46, SD = 3.03, n = 29; |t|s < .11).

Among female participants, the agent with machine cues (M = 7.80, SD = 2.23, n = 42) was trusted significantly more than the agent with male-anthropomorphic cues (M = 6.33, SD = 2.51, n = 24; t(116) = 2.55, p < .05); the comparison was not significant among male participants (t(73) = .99, n.s.). Neither female nor male participants trusted the agent with female-anthropomorphic cues significantly more or less than the agent with machine cues (|t|s < 1.27). Although female participants (but not male participants) trusted the agent with machine cues more than the agent with male-anthropomorphic cues, overall, across all participants, the agent with anthropomorphic cues was not trusted significantly more or less than the agent with machine cues. Therefore, no evidence was found for the relationship between anthropomorphic cues and trust in the assessment agent proposed in RQ1.
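The comparisons above are pairwise contrasts between conditions, split by participant sex. A minimal sketch of this kind of two-group comparison is shown below, assuming per-participant ratings in a data frame with hypothetical columns "participant_sex", "condition", "humanness", and "trust". The reported degrees of freedom (e.g., 116) suggest the original analysis used planned contrasts within a larger model rather than isolated two-sample t-tests, so this is an illustration of the general technique, not a reproduction of the original analysis.

```python
import pandas as pd
from scipy import stats

def compare_conditions(df, outcome, sex, cond_a, cond_b):
    """Independent-samples t-test on `outcome` between two conditions,
    restricted to participants of the given sex (hypothetical column names)."""
    sub = df[df["participant_sex"] == sex]
    a = sub.loc[sub["condition"] == cond_a, outcome].dropna()
    b = sub.loc[sub["condition"] == cond_b, outcome].dropna()
    t, p = stats.ttest_ind(a, b)
    return t, p

# e.g., perceived humanness: male-anthropomorphic vs. machine, female participants
# t, p = compare_conditions(data, "humanness", "female",
#                           "male_anthropomorphic", "machine")
```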

The prerequisite for testing the indirect impact of anthropomorphic cues on intention to disclose via trust in the assessment agent was met among female participants only: female participants trusted the assessment agent with male-anthropomorphic cues (coded 1) significantly less than the agent with machine cues (coded 0; unstandardized beta = -1.47, SE = .58, p < .05), and they were more likely to provide personal information to an agent that they trusted more (unstandardized beta = .36, SE = .11, p < .05).

Among female participants, the serial indirect effect of male-anthropomorphic cues (vs. machine cues) on intention to disclose, running through perceived humanness and trust in the assessment agent, was significant (95% CI: [-.5021, -.1817]; 5,000 bootstrap samples). In other words, male-anthropomorphic cues negatively influenced perceived humanness (unstandardized beta = -2.20, SE = .69, p < .05), which, in turn, positively influenced trust in the assessment agent (unstandardized beta = .22, SE = .07, p < .05), which, in turn, positively influenced intention to disclose (unstandardized beta = .37, SE = .11, p < .05) among female participants.
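Serial indirect effects of this kind (cue → perceived humanness → trust → intention to disclose) are typically estimated by taking the product of the path coefficients and building a percentile bootstrap confidence interval around it, as in Hayes's PROCESS approach. The sketch below illustrates that general technique under assumed, hypothetical column names ("cue" coded 0 = machine / 1 = male-anthropomorphic, "humanness", "trust", "intent_disclose"); it is not the original analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def serial_indirect_effect(df):
    """Product of path coefficients for cue -> humanness -> trust -> intent."""
    a = sm.OLS(df["humanness"], sm.add_constant(df["cue"])).fit().params["cue"]
    b = sm.OLS(df["trust"],
               sm.add_constant(df[["cue", "humanness"]])).fit().params["humanness"]
    c = sm.OLS(df["intent_disclose"],
               sm.add_constant(df[["cue", "humanness", "trust"]])).fit().params["trust"]
    return a * b * c

def bootstrap_ci(df, n_boot=5000, seed=0):
    """Percentile bootstrap 95% CI for the serial indirect effect."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        resampled = df.sample(n=len(df), replace=True,
                              random_state=int(rng.integers(1 << 31)))
        estimates.append(serial_indirect_effect(resampled))
    return np.percentile(estimates, [2.5, 97.5])
```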

DISCUSSION / CONCLUSION

Among female participants only, male-anthropomorphic cues (but not female-anthropomorphic cues), compared to machine cues, decreased trust in the assessment agent, which, in turn, positively influenced intention to disclose. However, privacy concern was not indirectly influenced by anthropomorphic cues (vs. machine cues) via trust in the assessment agent. Moreover, the indirect impact of anthropomorphic cues (vs. machine cues) on either privacy concern or intention to disclose via embarrassment was not significant.

One theoretical implication of our study is that anthropomorphic cues displayed in personalization systems can be considered a factor influencing users' intention to disclose. Our study suggests that cues that signal the humanness of a system may indirectly increase users' intention to disclose, because users place more trust in systems that are perceived as human-like.

Another theoretical implication of our study is that anthropomorphic cues displayed in personalization systems do not always increase users' perception of the system's humanness. It is intuitive to assume that more anthropomorphic cues would increase perceived humanness. However, the results of our study suggest that the relationship between anthropomorphic cues and perceived humanness is not that simple; for example, the combination of the user's gender and the gender of the anthropomorphic cues might influence the degree to which the system is perceived as human-like.

The practical implication of our study is that designers of personalization systems should consider inserting cues that are likely to increase users' intention to disclose personal information. The results suggest that displaying male-anthropomorphic cues in personalization systems might not be a good idea; inserting female-anthropomorphic cues, or perhaps cues that emphasize the artificial nature of the computer (e.g., a non-humanoid robot), might be a better choice.

For more details regarding the study, contact

Dr. S. Shyam Sundar by e-mail at sss12@psu.edu or by telephone at (814) 865-2173
