[Animated cat robot GIF: "It's time to take your daily medications!"]

Phase 2: UX Project Objectives

To develop an inclusive design for the cat's nonverbal communication signals, using system hardware components like the LED light, e-ink display, and in-body speaker.

These signals will enable the cat robot to express a range of communicative intentions to older adult users who have varying levels of hearing and visual abilities.

Research Plan

  1. Desk research to identify target users and stakeholders

  2. Expert interview to define use scenarios 

  3. Brainstorming, prototyping, and heuristic evaluation to come up with the initial design

  4. Online evaluation study

  5. Iterative design process

  6. Experimentally controlled evaluation study to test the updated design and its robustness to missed signals due to vision or hearing difficulties

  7. Focus group to get in-depth feedback

  8. Large-sample online evaluation study that collects various quantitative and qualitative data

  9. Coding and analysis of qualitative data; statistical analysis of quantitative data

Publications

Chi, V. B.*, Rebola, C. B., Malle, B. F. (in prep). Designing Nonverbal Communication in Companion Cat Robot

Malle, B. F., Chi, V. B.*, & Rebola, C. B. (2023). Older Adults’ Understanding of Signals Communicated by Robot Companions for Caregiving. The Gerontological Society of America (GSA) 2023 Annual Scientific Meeting

Next Generation Robotic Intelligence Providing Psycho-Social Support for Older Adults

Topic background

Approximately 27% of adults aged 60 and over live alone, a figure that has been steadily rising over the past decades. In this context, companion robots emerge as a promising solution, offering interactive and responsive engagement to potentially alleviate feelings of loneliness. 

Project background

I'm leading a team to design and evaluate the next generation of the Joy for All companion robot cat, with added functionality, intuitive design, and an affordable price point.

Phase 1. System Design → *Phase 2. Nonverbal Communication Design* → Phase 3. Longitudinal User Study


UX outcomes

  • Successfully validated the nonverbal communication designs across 8 distinct user scenarios, enhancing interpretability, perceived safety, user trust, and enjoyment

  • Implemented these designs into 30 advanced cat robot models, ready for longitudinal user evaluations

Learnings

  • On inclusive design: The importance of using redundant signals via multiple modalities

  • On color psychology: Older adults, particularly those in assisted living or who frequently interact with medical environments, show emotional responses and color associations that differ from younger demographics. It is crucial to tailor the design process to these findings so that these users are actively considered and included.

  • On delightful design: While adding sound to cat robots doesn't necessarily improve interpretability, and may even cause slight distraction, people significantly enjoy the experience more when the cat robot communicates with 'meow' sounds.

  • On marketing: The primary purchaser (often a caregiver) differs from the end user (the older adult). We therefore engaged both groups in the design process and assessed their impressions, crafting a design that resonates with both groups and enhances the product's overall impact and appeal.


Publications

Chi, V. B.* & Malle, B. F. (2024). Interactive Human-Robot Teaching Recovers and Builds Trust, Even With Imperfect Learners. ACM/IEEE International Conference on Human-Robot Interaction (HRI) [24.9% acceptance rate, oral presentation]

Chi, V. B.* & Malle, B. F. (2023). People Dynamically Update Trust When Interactively Teaching Robots. ACM/IEEE International Conference on Human-Robot Interaction (HRI) [25.2% acceptance rate, oral presentation]

Impact

The findings enhance our understanding of user trust in AI and offer a comprehensive approach to measuring evolving trust throughout a user's journey.

In the future development of reliable, user-centric AI technologies, researchers should carefully manage or experimentally control the key factors influencing trust updating identified in this study.

Measuring the Dynamically Updating Human Trust in Technology

Challenge

  • Emerging AI technologies need to forge a robust trust relationship with human users.

  • Recognizing that AI products, while initially imperfect, evolve through interaction, this research explores the dynamic journey of user trust – from initial skepticism to growing confidence as users experience the technology's advancements.

Project summary

How can researchers measure trust as a multifaceted, evolving concept?

Leveraging insights from human psychology and machine learning, I developed a novel, three-layered approach to trust measurement:

   I. the momentary feelings of trust 

   II. the dispositional perceptions of trustworthiness

   III. the intended reliance on future tasks

This methodology was then rigorously validated through an extensive online user study.
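As an illustrative sketch only, the three layers could be recorded per trial and tracked over time. The rating items, scales, and function names below are assumptions for illustration, not the study's actual instrument:

```python
from statistics import mean

def trust_profile(momentary, trustworthiness_items, intended_reliance):
    """One per-trial record across the three measurement layers (hypothetical):
    I.   momentary feeling of trust (single 0-100 slider, assumed)
    II.  dispositional trustworthiness (mean of several trait ratings, assumed)
    III. intended reliance on the next task (single 0-100 rating, assumed)
    """
    return {
        "momentary": momentary,
        "trustworthiness": mean(trustworthiness_items),
        "reliance": intended_reliance,
    }

# Measuring trust repeatedly captures its dynamic updating across trials.
trials = [
    trust_profile(40, [35, 45, 50], 30),  # early: initial trust loss
    trust_profile(55, [50, 60, 55], 50),  # mid: local evidence of improvement
    trust_profile(75, [70, 80, 75], 80),  # late: cumulative improvement trend
]
trajectory = [t["momentary"] for t in trials]  # rising momentary trust
```

Keeping the three layers as separate fields, rather than collapsing them into one score, is what allows them to be analyzed for discriminant validity.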

Key findings

  1. The three layers of trust do not compete but rather integrate at different levels, which demonstrates the discriminant validity of the approach.

  2. Users dynamically update their trust according to a) current task difficulty, b) local evidence, and c) the broader trend of performance improvement.

  3. Initial trust loss is common and may be inevitable, but users can recover and continue to build trust if they are engaged through an interactive role and given both local and cumulative evidence of improvement.

Key findings

  • 89% of human teachers naturally adopt mixed teaching strategies. 

  • Human teachers dynamically update their perceptions of the robot throughout a teaching session. These evolving impressions, influenced by the robot's current actions and cumulative evidence of learning, are key in shaping people's teaching strategies.

Publications

 

Chi, V. B.* & Malle, B. F. (2023). Calibrated Human-Robot Teaching: What People Do When Teaching Norms to Robots. 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) [oral presentation]

Chi, V. B.* & Malle, B. F. (2022). Instruct or Evaluate: How People Choose to Teach Norms to Social Robots. ACM/IEEE International Conference on Human-Robot Interaction (HRI)

User Study: How People Choose to Teach Norms to Robots

Background

  • Autonomous artificial agents deployed in social communities are expected to follow the relevant social and moral norms whilst completing tasks.

  • Leveraging interactive human teaching is crucial for robots to effectively learn and evolve within constantly changing social environments.​

  • How could robot designers harness the power of human teaching? And importantly, how would people naturally teach robots?

Methods

I developed a novel paradigm in which participants interactively teach a simulated robot to behave appropriately in a healthcare setting, choosing to either instruct the robot or evaluate its proposed actions.

  • Literature review

  • Behavioral experiment design

  • Behavioral task design

  • Web design (JavaScript)

  • Statistical analysis
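The two teaching actions in this paradigm can be sketched as a minimal decision function. The action names and the feedback encoding below are illustrative assumptions, not the study's actual implementation:

```python
def teaching_trial(teacher_choice, robot_proposal, correct_action):
    """One hypothetical teaching trial: the teacher either instructs
    (demonstrates the correct action) or evaluates (approves or disapproves
    the robot's proposed action). Action labels are illustrative only."""
    if teacher_choice == "instruct":
        # Instruction conveys the correct action directly.
        return {"mode": "instruct", "signal": correct_action}
    if teacher_choice == "evaluate":
        # Evaluation conveys only whether the robot's proposal was acceptable.
        approved = (robot_proposal == correct_action)
        return {"mode": "evaluate", "signal": "approve" if approved else "disapprove"}
    raise ValueError("teacher_choice must be 'instruct' or 'evaluate'")

# A teacher mixing both strategies across two trials:
t1 = teaching_trial("instruct", robot_proposal="wait", correct_action="knock first")
t2 = teaching_trial("evaluate", robot_proposal="knock first", correct_action="knock first")
```

The asymmetry in the two return values reflects why the choice matters: instruction carries richer information per trial, while evaluation lets the teacher respond to the robot's own initiative.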

Observational Learning of Prosocial Behaviors in Future Hybrid Mobility

Ongoing work: contact me for details

Independently designed, executed, and analyzed a Human Factors study investigating the role of observational learning in future hybrid mobility


Role

Lead Researcher

Method

  • Stimuli Design: Brainstorming, Heuristic Evaluation, Feedback Sessions, Unreal Engine Animation

  • User Study: VR, Web-based experiment (JavaScript), Statistical Data Analysis

Publications

Chi, V. B., Mehrotra, S., Teruhisa, M., Kumar, A. (2024, in press). Should I Help a Delivery Robot? Cultivating Prosocial Norms through Observations. ACM Conference on Human Factors in Computing Systems (CHI’24)

Chi, V. B., Ulwelling, E., Salubre, K., Mehrotra, S., Teruhisa, M., Kumar, A. (2024, in press). More than just a Tool: People’s Perception and Acceptance of Prosocial Delivery Robots as Fellow Road Users. In Proceedings of the Workshop on Robo-Identity: Designing for Identity in the Shared World (HRI ’24).

Effects of Interactive Teaching on Human Users’ Trust Development for the Spot Robot

Designed and conducted a Human-Robot Interaction experiment with the Boston Dynamics Spot robot to investigate the impact of teaching engagement on the development of human trust in the robot

Methods

  • Training task design: Brainstorming, Sketching, Prototyping, Pre-tests

  • Behavioral experiment design

  • Physical robot manipulation (Spot SDK)

Dos and Don'ts in Social Contexts:
Diverging Cognitive Activation of Prescription and Prohibition Norms

Publications

Chi, V. B.* & Malle, B. F. (In Prep) Towards a Model of Social Norm Activation

Chi, V. B.* (Master’s Thesis) Social Norm Network Activation Pattern in Visual Scene Processing

Malle, B. F., Austerweil, J. L., Chi, V. B.*, Kenett, Y., Beck, E. D., Thapa, S., & Allaham, M. (2021). Cognitive Properties of Norm Representations. Proceedings of the Annual Meeting of the Cognitive Science Society

How do people represent and activate prescription ("Do") vs. prohibition ("Don't") norms in concrete contexts?

Methods

  • Visual Stimuli Creation (Photoshop)

  • Statistical Analysis

  • Surveys

  • Behavioral Task Design

  • Web-based experiment design (JavaScript)

  • Qualitative data coding

Research presented at:

  • Princeton University (October 2022)

  • 22nd meeting of the European Society for Cognitive Psychology (August 2022)

  • Association for Psychological Science Virtual Convention  (May 2021)

Moral Justifications to Foster Human-Machine Trust

Designed and conducted human-subjects research on autonomous agent moral dilemmas and the use of justifications to mitigate moral disapproval and repair lost trust.

Methods

  • Surveys

  • Feedback sessions

  • Design of robot dilemma tasks

Publication (in review)

Phillips, E., Malle, B. F., Rosero, A., Kim, M. J., Kim, B., Melles, L., & Chi, V. B.* (2023). Systematic Methods for Moral HRI: Studying Human Responses to Robot Norm Conflicts

Publications

Malle, B. F., Rosen, E., Chi, V. B.*, Ramesh, D. (2023). What Properties of Norms can we Implement in Robots? 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Malle, B. F., Rosen, E., Chi, V. B.*, Ramesh, D. (2023). Robots Capable of Norm-Appropriate Action (not Moral Agency). ACM/IEEE International Conference on Human-Robot Interaction (HRI) Workshop “Perspectives on Moral Agency in Human–Robot Interaction.”

Rosen, E., Hsiung, E., Chi, V. B.* & Malle, B. F. (2022). Norm Learning with Reward Models from Instructive and Evaluative Feedback. 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Hsiung, E., Rosen, E., Chi, V. B.* & Malle, B. F. (2022). Learning Reward Functions from a Combination of Demonstration and Evaluative Feedback. ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Malle, B. F., Rosen, E., Chi, V. B.*, Berg, M., & Haas, P. (2020). A General Methodology for Teaching Norms to Social Robots. 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Computational Norm Learning for Robots

Robots are entering new social spaces, assuming new roles, and increasingly joining human communities. If we expect them to become effective, safe, and trustworthy collaborators and contributors to society, they must learn norms as well.

What would it mean to implement norms in robots?

A computational implementation must realize the relevant cognitive and social properties of norms.

Methods

  • Interactive Inverse Reinforcement Learning

  • Behavior Tree

  • Literature Review

  • Web-based experiment design (JavaScript)

  • Statistical Data Analysis
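The behavior-tree method listed above can be sketched minimally. Everything here, including the node design and the example norm, is an illustrative assumption rather than the project's actual implementation:

```python
# Minimal behavior-tree sketch: a Sequence node gates a task behind a norm check.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Condition:
    """Leaf node that succeeds only when its predicate holds."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self):
        return SUCCESS if self.predicate() else FAILURE

class Action:
    """Leaf node that performs a side effect and reports success."""
    def __init__(self, effect):
        self.effect = effect
    def tick(self):
        self.effect()
        return SUCCESS

class Sequence:
    """Ticks children left to right; fails as soon as any child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() != SUCCESS:
                return FAILURE
        return SUCCESS

# Hypothetical example: the robot may enter and deliver only if a norm holds.
log = []
tree = Sequence([
    Condition(lambda: True),               # norm satisfied (e.g., knocked first)
    Action(lambda: log.append("enter")),   # enter the room
    Action(lambda: log.append("deliver")), # deliver the item
])
status = tree.tick()
```

Placing the norm check first in the sequence means a violated norm short-circuits the task entirely, which is one simple way a tree can realize norm-appropriate action.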
