Socially-Aware Assistant (SARA): Rapport-Building, Social Explanations for Recommendations
COMPANY: YAHOO! & CARNEGIE MELLON ARTICULAB
PUBLISHED: ACM-HAI
RELEVANT TO: PERSONAL ROBOTS, CYBORG PSYCHOLOGY
Building Rapport Between Human and Machine
THE CHALLENGE
Explore how conversational agents can build rapport through socially-aware explanation strategies in human–AI dialogue.
ROLE
Conversational AI Designer, User Researcher, Psychology Researcher
FUNDING
Part of a large-scale, multi-year industry–academic research collaboration between Yahoo! and Carnegie Mellon University.
THE OUTCOME
A socially-aware robot assistant (SARA) that uses human-like explanation strategies in movie recommendations. Experimental results showed that social explanations increased perceived system quality and recommendation acceptance, independent of recommendation accuracy.
WHY THIS MATTERS
This work laid early foundations for studying rapport, explanation, and trust in conversational agents—questions that continue to shape research in human–AI interaction, emotional AI, and responsible agent design.
PUBLICATION
ACM Human-Agent Interaction Conference (HAI), 2019
AWARDS
Best Paper Award at the Human-Agent Interaction Conference in Kyoto, Japan, 2019
RELEVANCE TO PERSONAL ROBOTS: SARA directly aligns with the Personal Robots group’s focus on social intelligence and natural human–robot interaction. By modeling how humans explain recommendations to one another, this work contributes empirical insight into how socially grounded explanation strategies can foster rapport, trust, and engagement—capabilities that are essential for personified technologies designed to collaborate with people in everyday settings.
RELEVANCE TO CYBORG PSYCHOLOGY: SARA examines how humans interpret and relate to AI systems when explanations are framed socially rather than technically. The findings show that explanation style alone can meaningfully shape perceived trust, quality, and relational dynamics, offering insight into how human cognition and social expectations adapt when AI systems become embedded in decision-making and interpersonal-like interactions.
SARA is a conversational movie recommendation agent designed to explore how social explanations—the kinds humans naturally use—shape trust, rapport, and perceived recommendation quality in human–AI interaction.
In 2018, most conversational agents explained recommendations using technical or item-based rationales (e.g., genres, ratings, similarity metrics). Our research asked a different question: How do humans explain recommendations to one another—and what happens if AI does the same? To answer this, we analyzed a corpus of dyadic human–human movie recommendation dialogues and developed a computational model of explanation strategies grounded in social and psychological theory, including personal experiences, opinions, and relational framing.
I contributed to the theoretical framing and empirical analysis by developing and refining a coding manual for social explanation strategies, annotating dozens of human–human dialogue transcripts, establishing inter-rater reliability, and incorporating early user interviews to inform both the research questions and the agent’s conversational behavior.
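To make the reliability step concrete, the minimal sketch below computes Cohen's kappa between two hypothetical annotators assigning explanation-strategy codes to dialogue turns. The label set, the toy data, and the function are illustrative assumptions for this page, not the project's actual coding manual or analysis tooling.

```python
from collections import Counter

# Hypothetical strategy codes for illustration (not the paper's actual label set).
LABELS = ["personal_experience", "opinion", "relational", "item_feature"]

# Two annotators' codes for the same dialogue turns (toy data).
annotator_a = ["opinion", "personal_experience", "relational", "opinion", "item_feature", "opinion"]
annotator_b = ["opinion", "personal_experience", "opinion", "opinion", "item_feature", "relational"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n          # raw agreement
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[label] * counts_b[label]          # agreement expected by chance
                   for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"Cohen's kappa: {cohens_kappa(annotator_a, annotator_b):.2f}")
```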
We integrated this model into a conversational agent architecture and evaluated it through a controlled user study. Results showed that socially grounded explanations significantly improved users’ perceived quality of both the recommendations and the interaction itself—even when the underlying recommendation quality was held constant. This demonstrated that explanation style alone can meaningfully shape user trust and experience.
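As a rough illustration of the contrast the study manipulated, the sketch below produces an item-based (technical) explanation and a socially framed one from simple templates. The strategy names, templates, and data fields are hypothetical and only loosely inspired by the strategy categories described above; this is not the SARA architecture or its actual dialogue generation.

```python
import random

# Toy recommendation record; fields are illustrative, not SARA's data model.
movie = {"title": "Arrival", "genre": "sci-fi", "rating": 8.0}

# Hypothetical social explanation strategies, loosely modeled on the kinds of
# rationales people use with one another (personal experience, opinion,
# relational framing). Templates are invented for illustration.
SOCIAL_STRATEGIES = {
    "personal_experience": "I watched {title} last weekend and couldn't stop thinking about it.",
    "opinion": "Honestly, I think {title} is one of the smartest {genre} films out there.",
    "relational": "Since you said you like thoughtful {genre}, I think {title} would really click with you.",
}

def technical_explanation(item):
    """Item-based rationale: the style most 2018-era recommenders used."""
    return f"Recommended because it is a {item['genre']} film rated {item['rating']}/10."

def social_explanation(item, strategy=None):
    """Pick (or randomly sample) a social strategy and fill its template."""
    strategy = strategy or random.choice(list(SOCIAL_STRATEGIES))
    return SOCIAL_STRATEGIES[strategy].format(**item)

print(technical_explanation(movie))
print(social_explanation(movie, strategy="relational"))
```

In a study like the one described here, the recommendation itself would be held constant while only the explanation style varies between conditions.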
Extended documentation: Research process, conversational strategies, and implementation details → CLICK HERE
Why this mattered: At a time when conversational AI research largely focused on task success and algorithmic performance, this work highlighted the importance of relational intelligence—showing that how AI communicates can be as impactful as what it recommends. The project contributed early empirical evidence that social explanation strategies are not just “nice to have,” but central to effective, human-centered AI systems.
Publication:
Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, and Justine Cassell. 2019. A Model of Social Explanations for a Conversational Movie Recommendation System. In Proceedings of the 7th International Conference on Human-Agent Interaction (HAI ’19), October 6–10, 2019, Kyoto, Japan. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3349537.3351899