Trust between Humans & AI
Verifying UX ideas contributing to a better human-AI collaboration for future vehicle design
User Research, Discovery Research, Design Strategy, Experience Prototype, Experiment Design
A systematic framework examining factors affecting trust between humans and AI systems; design recommendations for future automated driving development
An HMI design team in a global automotive OEM
The perception of safety, fun, peace of mind, and every other positive feeling about driving depends on trust. Trust becomes even more critical in automated driving, which requires genuine human-AI collaboration. An HMI design team in a global automotive OEM approached us to examine the factors affecting human trust in AI systems and to verify UX ideas for strengthening it.
A supporting argument for the HMI concept for level 3-4 automated driving
The factors we identified and the experiments we ran revealed the key elements in building and maintaining human trust in an automated system, especially across different representations of the AI agent. These results became the backbone of our client's level 3-4 automated driving design development, aiming to offer greater peace of mind and a more human-centered experience.
A more systematic view highlighting untapped potential
Human trust in automated systems is an extremely complex topic, given the many factors concerning the person, the system, and the environment that vary across contexts. Our synthesis provided a framework built on decades of research in this domain, identifying the most influential and the least understood factors. The framework helped our client prioritize areas of interest, set directions, and uncover opportunities for future development.
An introduction to the value of a more collaborative process
Throughout the project, we worked closely and collaboratively with our client, from developing the research focus to interpreting the results. While this approach was new to the client and required more of their involvement, it kept both teams fully synchronized and ensured the research insights stayed relevant to their work. It also eliminated the surprises and last-minute major revisions that had often occurred in their previous projects.
Practical framework integrating multidisciplinary knowledge
How might we enhance trust between a person and an AI system? While trust research has a long history and the area is attracting growing attention, little effort has gone into systematically summarizing the space, especially with practical implications for designers working in industry. Considering this, ICI surveyed multidisciplinary sources, including studies and design experiments on interpersonal trust, organizational trust, trust in automation, and trust in human-AI and human-robot interactions.
We synthesized the results into a framework that provides a more systematic view of the space. The many factors concerning the person, the system, and the environment are organized into layers and dimensions, noting their relative importance before, during, and after an interaction takes place. The framework also revealed how critical individual differences are, such as cultural background, personality traits, and thinking styles, which our client had not previously considered much.
Deliberately designed experiments
Working with the client and building on the framework, we narrowed the scope to three studies concerning some of the least understood yet hypothetically most critical factors in our client's areas of interest, such as perceived safety, user personality, AI agent appearance, and communication style. In each study, participants were presented with an interactive experience simulating scenarios under varying conditions. ICI designed and implemented these interactive experiences in-house, balancing experimental effectiveness with production efficiency.
These interactive experiences took the form of prototypes of common web applications, gamified experiences featuring different AI agent characters, and backstories and visual prompts simulating hypothetical scenarios. We deliberately chose the context of each scenario, the format of each experiment, and the platform for engaging participants to maximize the useful information obtained from each study, especially given the COVID-19 constraints at the time.