Two Ways in Which AI and Machine Learning Alter the User Experience
By Alexa Polidora
MFA, Media Arts & Technology (Interactive Design focus)
2/14/2024
Many researchers and members of the human-computer interaction (HCI) and UX Design communities have long taken a human-centered design (HCD) approach. Such a focus entails many elements, but at its core, HCD’s mission is to design products that solve real problems for real people.
However, with the advent of AI, HCD becomes a bit more complicated. Now, HCI researchers, UX Designers, and other involved parties are confronted with intelligent systems that interact with human users, make decisions, and affect the design process. Some systems even require little to no human involvement! What, then, should HCI and UX Design professionals do when confronted with such realities?
Through an exploration of the work of Philip van Allen, and of Mikael Wiberg and Erik Stolterman Bergqvist, this article focuses on two scholarly assessments of how AI and machine learning (ML) alter the user experience.
Welcome the Non-Human User
What happens when the user experience involves intelligent systems and machines whose goals go beyond “just getting stuff done”? As described by van Allen (2017), “ML/AI systems are often non-visual and focused on complex behaviors and extended interactions with multiple people and digital systems, balancing goals through a collaborative approach that is not only focused on task completion” (p. 431). Additionally, who or what are these intelligent systems? Extrapolating on this point, van Allen (2017) provides the example of an intelligent autonomous vehicle. This vehicle’s system involves many competing and differing processes and requires interaction with humans and other smart devices (van Allen, 2017).
Further complicating things, van Allen (2017) explains that the requirements of such a system necessitate a user design process that differs from the traditional methods. The author writes that this “design context” is “an evolving, negotiated, inconsistent, improvised, serendipitous interaction that does not easily resolve to task accomplishment, efficiency, certainty, ROI,
customer expectations, or for that matter, one user’s experience” (van Allen, 2017, p. 431). In other words, such an intelligent system focuses on, and involves, more than just completing tasks to make “The Business” happy. Furthermore, such systems radically alter the user design process. As van Allen (2017) later explains, “When ML/AI systems are constantly learning, adapting, and renegotiating in a context of other evolving autonomous systems and humans, the design constraints and goals are different from conventional UX” (p. 431).
These smart devices introduce a new perspective to the UX design process: that of the intelligent “machine” (van Allen, 2017). Van Allen (2017) questions how traditional UX design methods might work when many machine learning/AI systems are involved in addition to the traditional human user. Van Allen (2017) further asks, “Who is the ‘user,’ or is ‘user’ even an appropriate way to understand the problem?” (p. 431). Van Allen (2017) states that the existence of “autonomous things” and the way they “behave, interact, communicate” and “embody a ‘lived’ history, evolve, and thrive” will change the nature of UX design (p. 431). Such change will require “new design methods and patterns” (van Allen, 2017, p. 431).
For instance, van Allen (2017) notes that ML/AI systems require characteristics that are typically ascribed to humans (such as ethics and personality). Such requirements, van Allen (2017) points out, create conditions in which “the concept of Human-Centered Design (HCD) starts to break down” (p. 432). As van Allen (2017) notes:
When digital participants have their own goals, needs, intentions, ethics, moods and methods, an organic, unpredictable and evolving system is created. The human is no longer the center. Instead, the center of design becomes the system and its outcomes. Design moves towards building emergent ecologies (p. 432).
In short, the existence of a digital “Other” with its own perspectives and goals adds another layer of complexity to a user design process that was, until recently, solely focused on human users.
Yet, despite this movement away from human-centered design, van Allen (2017) remains optimistic about the use of ML/AI systems in the UX Design process. Van Allen (2017) hopes that, rather than replacing humans, such systems will enable humans to focus more on creative processes. Additionally, van Allen (2017) hopes that humans will come to see ML/AI systems as “peers that collaborate across common and competing goals” (p. 432). Rather than supporting a movement away from human-centered design, van Allen (2017) instead advocates that HCD methods “be secondary to newly imagined approaches that fully embrace the potentials of ML/AI” (p. 432).
He proposes the notion of “Animistic Design,” which he describes as follows:
Animistic Design proposes that smart digital entities adopt distinct personalities that inform their perceived sense of aliveness. And rather than having people work with a single, authoritative system, this approach has people engage with multiple smart systems, where each entity has its own intentions, expertise, moods, goals, data sources
and methods. These are not . . . cute anthropomorphic dolls. Instead, Animistic Design strives for a more “native” digital animism, that embodies (metaphorically at least) the inherent characteristics of computational/mechanical systems (p. 432).
The outcome of such design is ultimately the creation of a design “ecology” that nurtures and encourages conversations between human designers and ML/AI systems (van Allen, 2017). One benefit of such ecologies, along with acknowledging the limitations of ML/AI systems, is that it “allows designers to move away from trying to provide single, correct answers” (van Allen, 2017, p. 432). Van Allen (2017) holds that the existence of multiple problem solutions better trains ML/AI systems and fosters greater design capabilities. Additionally, van Allen (2017) postulates that Animistic Design encourages and enables distributed cognition. This perspective recognizes that, in addition to their physical brains, humans think using the environment with which they interact (van Allen, 2017).
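To make this ecology idea concrete, the following is a minimal sketch of my own (not code from van Allen's paper, and all names are hypothetical): several “animistic” entities, each with its own expertise, mood, and goal, respond differently to the same design prompt, yielding multiple answers rather than one authoritative one.

```python
import random

class AnimisticEntity:
    """Toy 'animistic' agent (illustrative only): each entity carries its
    own expertise, mood, and goal, and answers a design prompt in its own way."""
    def __init__(self, name, expertise, goal):
        self.name = name
        self.expertise = expertise
        self.goal = goal
        self.mood = random.choice(["playful", "cautious", "contrarian"])

    def respond(self, prompt):
        # Each entity frames the same prompt through its own goal and mood,
        # so the ecology yields multiple perspectives, not one "correct" answer.
        return (f"{self.name} ({self.mood}, {self.expertise}): "
                f"on '{prompt}', I would optimize for {self.goal}")

# An ecology of peers with common and competing goals
ecology = [
    AnimisticEntity("Archivist", "data sources", "provenance"),
    AnimisticEntity("Sprinter", "prototyping", "speed"),
    AnimisticEntity("Skeptic", "evaluation", "robustness"),
]
for entity in ecology:
    print(entity.respond("onboarding flow"))
```

The design choice here mirrors the passage above: the human is not querying a single oracle but conversing with several peers whose answers can conflict, which is precisely what moves the designer away from seeking a single correct answer.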
The “Automation of Interaction”
When automated AI systems become involved, the user design process transforms once again.
Wiberg and Bergqvist (2023) focus their paper on the ways in which the combination of automated systems and user experiences impacts the nature of user interactions. They examine AI and UX Design from a perspective that, while analyzing AI and its impact on the UX Design process, still “suggests a need to understand human–machine interactions as a foundation for the design of engaging interactions” (p. 2281). Focusing on the growing relationship between UX Design and automation, they explore the “automation of interaction” (p. 2281), arguing that the principle of “interaction” is the joining force between user experience and artificial intelligence. Additionally, Wiberg and Bergqvist (2023) note the ways in which the increased focus on automation changes the entire user experience. In some cases, such automation removes the need for user interaction altogether. A shift also occurs in the notion of the user being in control of the experience; instead, autonomous machines now control the process (Wiberg and Bergqvist, 2023).
Wiberg and Bergqvist are well aware of the transformative era in which HCI finds itself. They recognize “that HCI is at this crossroad between UX and AI from the viewpoint of designing for engaging interactions versus designing for automation” (Wiberg and Bergqvist, 2023, p. 2283). They then discuss the idea of an “automation space” in which discussions take place about which aspects of digital processes should be automated and which should remain manual (p. 2283). Yet, the authors raise a point that makes such a space harder to create: attempting to automate a process can reveal additional issues that make that process hard to automate (Wiberg and Bergqvist, 2023). Additionally, the authors discuss how automating processes might deprive users of the feeling of “being in control,” and with it, a sense of comfort (Wiberg and Bergqvist, 2023). Also, while automation is beneficial process-wise, it becomes costly if it fails and then requires manual human intervention to fix the problem (Wiberg and Bergqvist, 2023).
Another such complexity discussed is the way in which automation puts the human interaction piece in the “foreground” of experience (Wiberg and Bergqvist, 2023, p. 2284). As Wiberg and Bergqvist (2023) further explain:
As computing is increasingly designed along an aesthetics of disappearance, rendered invisible, and fundamentally entangled with our everyday lives, it becomes less clear what is suitable for automation and what requires true user control. In fact, this seems to be almost an interaction design paradox, and it has accordingly received some attention from HCI and interaction design researchers who have tried to resolve this tension between automation and user experience (p. 2284).
As computing becomes more “invisible” to the user experience, the authors explore the need for the human user to be involved in, or at least kept aware of, the automated processes taking place in the background (Wiberg and Bergqvist, 2023). Additionally, Wiberg and Bergqvist (2023) note that much research seeks to understand the interplay between human user interaction and the “behind-the-scenes” machine automation taking place. However, the authors propose the need for “a deeper understanding of what interaction is” (p. 2284). As they state:
We argue that we need a framework that allows us to focus on and examine the details of how the interaction unfolds (what is happening in the foreground) and that allows us to see aspects of automation (what is happening in the background), while at the same time relate to the user experience (Wiberg & Bergqvist, 2023, p. 2284).
To establish such a framework, the authors begin by exploring definitions integral to an understanding of the interaction process. Building on the definition established by earlier researchers Janlert and Stolterman, Wiberg and Bergqvist (2023) agree that “interaction” is “an operation by a user, and the responding ‘move’ from the artifact” (p. 2284). They then present the definitions used in the model created by those researchers (Wiberg and Bergqvist, 2023). Those definitions include:
- “Internal states, or i-states for short, are the functionally critical interior states of the artifact or system” (Wiberg and Bergqvist, 2023, p. 2284).
- “External states, or e-states for short, are the operationally or functionally relevant, user-observable states of the interface, the exterior of the artifact or system” (Wiberg and Bergqvist, 2023, p. 2284).
- “World states, or w-states for short, are states in the world outside the artifact or system causally connected with its functioning” (Wiberg and Bergqvist, 2023, p. 2284).
Additionally, the authors note:
The model also details the activity on both the artifact and user sides. For instance, states change as a result of an operation triggered by a user action or by the move (action) by the artifact. These moves appear as a cue for the user. These cues come to the user either as e-state changes or w-state changes (Wiberg and Bergqvist, 2023, p. 2284).
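The state vocabulary and the action-move-cue loop described above can be sketched in code. The following is my own illustration under assumed, hypothetical names (a toy thermostat; nothing here comes from the paper itself): a user action triggers an operation, and the artifact's responding move surfaces as a cue via an e-state change or a w-state change.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """Toy artifact using the Janlert/Stolterman state vocabulary:
    i-states are interior states, e-states are user-observable interface
    states, and w-states are states of the surrounding world."""
    i_state: dict = field(default_factory=dict)  # functionally critical interior states
    e_state: dict = field(default_factory=dict)  # user-observable interface states
    cues: list = field(default_factory=list)     # cues emitted toward the user

    def operation(self, name, world):
        """A user action triggers an operation; the artifact responds with a
        'move' that the user perceives as an e-state or w-state change."""
        if name == "press_power":
            self.i_state["on"] = not self.i_state.get("on", False)
            self.e_state["light"] = "green" if self.i_state["on"] else "off"
            self.cues.append(f"light:{self.e_state['light']}")  # cue via e-state change
        elif name == "open_valve":
            world["w_temperature"] = world.get("w_temperature", 20) + 5
            self.cues.append("room warms up")                   # cue via w-state change

# Two full interaction pairs: user action -> artifact move -> cue
world = {"w_temperature": 20}
thermostat = Artifact()
thermostat.operation("press_power", world)  # move surfaces as an e-state change
thermostat.operation("open_valve", world)   # move surfaces as a w-state change
print(thermostat.cues)        # ['light:green', 'room warms up']
print(world["w_temperature"])  # 25
```

The point of the sketch is only to separate the three state types: the `on` flag is interior and invisible, the light is the observable interface, and the room temperature is the causally connected world outside the artifact.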
The authors then state:
Based on the model, we can now define any form of “automation of interaction” as removing a pair of actions and moves from an interaction while leading to the same or similar outcome (Wiberg and Bergqvist, 2023, p. 2284 – 2285).
Wiberg and Bergqvist (2023) then examine two “automation of interaction” relationships: “no automation (full interaction)” and “no interaction (full automation)” (p. 2285). They define “no automation” as meaning
…that the artifact does not perform any operations and moves other than those triggered explicitly by an action of the user. This means that the user has complete control of all activities and outcomes, which requires intimate knowledge and skill. It also means that the user needs to understand the artifact and the relationship between user actions and artifact moves (Wiberg and Bergqvist, 2023, p. 2285).
Conversely,
The extreme form of full automation of interaction means that the artifact performs all its operations and moves without being triggered by any actions from the user. Instead, the artifact moves are based on its i-states or changes in the w-states. This means that the user has no control over activities and outcomes. It also means that the user does not need any particular knowledge or skills since the artifact performs all actions (Wiberg and Bergqvist, 2023, p. 2285).
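Continuing the earlier toy-thermostat framing, the two extremes can be contrasted in a short sketch of my own (hypothetical names and devices, not code from the paper): in the manual case every move follows from an explicit user action, while in the fully automated case moves are driven by the artifact's i-states and by w-state changes alone.

```python
class ManualHeater:
    """'No automation': the artifact performs no moves other than those
    triggered explicitly by a user action."""
    def __init__(self):
        self.on = False  # the only state, fully under user control

    def user_action_toggle(self):
        self.on = not self.on  # every state change requires a user action
        return "heating" if self.on else "idle"

class AutomatedHeater:
    """'Full automation': moves are based on i-states and w-state changes;
    the user triggers nothing and needs no particular knowledge or skill."""
    def __init__(self, setpoint):
        self.setpoint = setpoint  # an i-state the user never manipulates

    def observe_world(self, w_temperature):
        # The artifact's move is triggered by a w-state change, not a user action.
        return "heating" if w_temperature < self.setpoint else "idle"

manual = ManualHeater()
print(manual.user_action_toggle())  # 'heating' — only because the user acted

auto = AutomatedHeater(setpoint=21)
print(auto.observe_world(18))       # 'heating' — no user action involved
print(auto.observe_world(23))       # 'idle'
```

Read against the definition quoted earlier, moving from `ManualHeater` to `AutomatedHeater` removes the action-move pair (the user's toggle and the artifact's response) while leading to a similar outcome, which is exactly what the authors mean by "automation of interaction."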
Concluding their exposition of definitions, the authors write:
We can now see that “automation of interaction” through AI means that we substitute man–machine interaction with AI support that can automate complex relationships between actions, operations, moves, and/or cues as the basic model of interaction shows (Wiberg and Bergqvist, 2023, p. 2285).
Regarding the “automation of interaction,” the authors conclude:
In many cases, the reduction of interaction will, for the user, lead to a loss of control and precision, but maybe with a gain in functionality, performance, and of course, a lesser need to focus on interaction (Wiberg and Bergqvist, 2023, p. 2285).
Ultimately, Wiberg and Bergqvist (2023) withhold judgment on whether such “automation of interaction” is positive or negative. Instead, the authors state plainly that their “automation of interaction” model is meant to be descriptive rather than judgmental (Wiberg and Bergqvist, 2023). In fact, they note that “Whether a certain combination is ‘good’ or not can only be determined in relation to the purpose of the interaction and how users experience it, and the value and quality of its outcome” (p. 2288). They note that fully automated systems can lead to either good or bad experiences, depending upon the users involved (Wiberg and Bergqvist, 2023).
A Transhuman Design Process?
As described in the articles explored here, the user design process is certainly affected and complicated by the addition of AI and ML. As proposed by van Allen’s Animistic Design, HCI researchers and UX Designers find themselves forced to consider the user perspective of a non-human Other. Additionally, as Wiberg and Bergqvist show, designers are confronted with the need to reconsider the notion of “interaction” itself. Is such transhumanism the future of the user design process? Perhaps, or perhaps not. Either way, the use of AI and ML promises to transform the user design process in the immediate future.
Bibliography
Human-Centered Design (HCD). (n.d.). Interaction Design Foundation. Retrieved February 14, 2024, from https://www.interaction-design.org/literature/topics/human-centered-design
van Allen, P. (2017). Reimagining the goals and methods of UX for ML/AI. In Designing the User Experience of Machine Learning Systems: The AAAI 2017 Spring Symposium (Technical Report SS-17-04). Retrieved from https://cdn.aaai.org/ocs/15338/15338-68263-1-PB.pdf
Wiberg, M., & Bergqvist, E. S. (2023). Automation of interaction—interaction design at the crossroads of user experience (UX) and artificial intelligence (AI). Personal and Ubiquitous Computing, 27, 2281–2290. https://link.springer.com/article/10.1007/s00779-023-01779-0