How AI Changes the User Experience of UX Designers

By Alexa Polidora, MFA (Interactive Design)

 

Machine learning (ML) and artificial intelligence (AI) are likely to radically change user experience (UX) design, particularly the design process. These changes are also transforming the role of UX designers. What happens when these changes affect the experiences of the very people who craft user experiences?

UX Design Without AI

First, let's establish a "baseline" of how UX design functioned before AI. User experience (UX) design encompasses all aspects of the end user's interaction with the company, its services, and its products (Norman & Nielsen, 1998; Salazar, 2023). What sorts of activities do UX designers engage in? A major focus of UX design, or experience design, is understanding people: their thinking, behaviors, goals, and needs, among other things. According to UX designer Nick Babich (associated with Adobe), UX design typically involves the following activities:

  • Product research and user research
  • Creating personas and scenarios
  • Information architecture
  • Creating wireframes
  • Prototyping
  • Product testing (Rousi, 2023)

Granted, as Rousi (2023) notes, these activities may change “depending on the product and service offering, business model and organization of the business” (p. 2). However, the above activities are (and were) pretty standard for the average UX designer.

A Quick Overview of the UX Design Process

Like any process, UX design follows a set of standard protocols. The Double Diamond (see Figure 1) is a common design process framework often used in UX design, with the major stages of Discover, Define, Develop, and Deliver (Design Council, 2024).

Designers first conduct research to understand the problem and its scope during the “Discover” phase (Design Council, 2024). For instance, UX designers might interview people to gain greater understanding of the problem, the people impacted, and the context in which the problem occurs. As shown in Figure 1, design researchers engage in divergent thinking and associated activities in attempts to understand the problem. They conduct field research including observations, contextual inquiry or interviews, and usability testing as well as secondary research (e.g., literature review, data analytics, competitor analyses) to better understand the problem, people, and context.

Figure 1. The Double Diamond framework (Design Council, 2024). This work by the Design Council is licensed under a CC BY 4.0 license.

In the Define stage, design researchers organize, analyze, and interpret the collected data. They engage in convergent thinking to make sense of the information collected and narrow down and define the problem scope. Ideally, by this stage, designers aim to have a clear idea about the problem space and a good understanding of the target audience and the context.

Next comes the Develop stage, which is the initial step in determining a solution to the problem identified in the first two phases, and during this stage, ideation is important (Design Council, 2024; Schrock, 2022). Designers again engage in divergent thinking as they ideate or come up with many possible or divergent solutions to the problem. They evaluate possible solutions and refine them iteratively. During the Develop stage, design teams might generate ideas through diverse activities such as team brainstorming, sketching, prototyping (rapid prototyping), usability testing design scenarios, and user journey mapping.

Finally, during the Deliver stage, prototypes continue to be evaluated and improved. Designers engage in convergent thinking as they narrow down possible design solutions. As designers advance in this stage, they refine prototypes into high-fidelity versions that look and function like a finished product. They deliver a near-final design solution but continue to evaluate it in the field through user feedback and testing.

AI Use in UX

Having established what UX involves without AI, let’s now delve into what UX may be like with AI. We begin by examining how AI tools influence the UX design process. Then, we look at how UX designers talk about AI tools. Finally, we’ll examine some pros and cons of AI use in UX.

The UX Design Process with AI

Bouschery et al. (2023) propose a Double Diamond design model integrated with AI focused on AI and ML methods that can greatly expand knowledge-gathering capabilities during design.

Regarding ML methods, Bouschery et al. (2023) write

ML algorithms perform well on a variety of pattern recognition tasks that are relevant for knowledge extraction. These can range from detecting patterns in visual data, for example, for quality control or analyzing technical samples of an experiment, to identifying novel ideas in online communities or customers with lead user characteristics (p. 143).
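To ground the quotation above, here is a minimal sketch (not from Bouschery et al.) of the kind of pattern-recognition task they describe: clustering free-text user comments so that recurring themes surface for a design team. The sample comments and the choice of TF-IDF with k-means in scikit-learn are illustrative assumptions.

```python
# A minimal sketch (not from Bouschery et al.): clustering user comments
# with TF-IDF + k-means so recurring themes surface for the design team.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical comments collected during Discover-phase research.
comments = [
    "The checkout flow asks for my address twice.",
    "I love the dark mode, but the font is tiny.",
    "Entering shipping info again at payment is annoying.",
    "Please make the text bigger in dark mode.",
    "Search never finds the product I typed.",
    "Search results seem unrelated to my query.",
]

# Turn each comment into a TF-IDF vector, then group similar comments.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

# Print which comments landed in which cluster (theme).
for cluster_id in range(3):
    print(f"Theme {cluster_id}:")
    for comment, label in zip(comments, kmeans.labels_):
        if label == cluster_id:
            print("  -", comment)
```

Even a toy clustering like this hints at how algorithmic pattern detection can compress hours of manual affinity mapping into a first-pass grouping that designers then refine.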

Transformer models like GPT-3 are the next evolutionary step after ML methods. These models are helpful because,

Their flexibility and generative capabilities provide ample opportunity for different knowledge extraction practices, allowing NPD [new product development] teams to apply one model for a large variety of tasks. Their context awareness plays a critical role in understanding important connections within a given text and extracting relevant information and knowledge (Bouschery et al., 2023, p. 143).

Text summarization, sentiment analysis, and customer insight generation are three applications of transformer models that can benefit UX professionals by reducing "knowledge-extraction efforts" and by surfacing information that a person might otherwise not have encountered (Bouschery et al., 2023). Additionally, these models can help increase understanding of customer needs and, ultimately, allow design teams to translate insights obtained in the Discover and Define phases into tangible ideas for design solutions (Bouschery et al., 2023).
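As one illustration of these applications (our own sketch, not an example from the paper), the Hugging Face transformers library ships pretrained pipelines for summarization and sentiment analysis that could be pointed at interview excerpts; the excerpt below and the default model choices are assumptions.

```python
# Illustrative sketch: applying off-the-shelf transformer pipelines
# (summarization and sentiment analysis) to a user-research excerpt.
from transformers import pipeline

excerpt = (
    "The participant said onboarding felt endless. She liked the visual "
    "style but abandoned the signup twice because the form kept rejecting "
    "her password without explaining why, which left her frustrated."
)

# Default models are downloaded on first use; both tasks are standard pipelines.
summarizer = pipeline("summarization")
sentiment = pipeline("sentiment-analysis")

print(summarizer(excerpt, max_length=40, min_length=10)[0]["summary_text"])
print(sentiment(excerpt)[0])  # e.g. {'label': 'NEGATIVE', 'score': ...}
```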

Because of their few-shot learning capabilities, they can generate adequate responses to a given problem statement and come up with original and useful ideas when prompted with just a few examples of what typical brainstorming results look like. In addition, users of such models can precisely tune them to produce more creative (radical) or more deterministic (incremental) responses–an ability rarely imaginable for humans (Bouschery et al., 2023, p. 145).
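The few-shot, temperature-tuned brainstorming described in the quotation might look something like the following sketch, here using the OpenAI Python client as one possible interface; the model name, prompt wording, and example ideas are all illustrative assumptions.

```python
# Illustrative sketch: few-shot brainstorming with a tunable temperature.
# Lower temperature tends toward incremental ideas; higher toward more radical ones.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

few_shot_prompt = """You are brainstorming features for a grocery app.
Example ideas from past sessions:
- Shared household shopping lists that sync in real time
- Aisle-level in-store navigation
Problem statement: reduce food waste for single-person households.
Propose three new ideas in the same style."""

for temperature in (0.2, 1.0):  # incremental vs. more creative settings
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": few_shot_prompt}],
        temperature=temperature,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```

Running the same prompt at a low and a high temperature is a rough analogue of the incremental-versus-radical dial the authors describe.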

As Bouschery et al. (2023, p. 147) note, downsides to transformer-based language models include questionable accuracy, insufficient elaboration of some generated ideas, and the fact that models are only as good as the knowledge on which they were trained.

An important limitation to consider here is that the original training data for language models has a cut-off point after which new knowledge is no longer contained in the training set used for unsupervised learning. Hence, critical information might not be included in the model’s knowledge base. Users have to be conscious of this aspect when interacting with a model. Generally, this natural data cut-off point calls for a continued re-training of models in use. While this aspect might not be critical for applications such as lyric composition or the writing of novels, it is especially relevant in the innovation sphere as innovators should base their decisions on the newest knowledge available. This aspect is even more pronounced in research fields where this knowledge stock expands rapidly. At the same time, the re-training of these models is very easy. These models can very effortlessly acquire new knowledge that can then be incorporated into innovation processes, as long as the information is available in a machine-readable form.

Transformer-based language models can also be prone to biases such as stereotype biases, confirmation biases, and cultural biases because the data on which models are trained have biases (Bouschery et al., 2023; Lawton, 2023).

How UX Designers Discuss AI

Next, let's better understand how UX designers discuss AI. Feng et al. (2023) examined "… how UXPs [UX practitioners] communicate AI concepts when given hands-on experience training and experimenting with AI models" (p. 2263) and found that UX designers tend to struggle when describing AI. Designers often lack sufficient understanding of AI capabilities and limitations. Feng et al. (2023) note,

. . .prior work has shown that UXPs encounter numerous novel challenges when designing with AI that emerge from issues including understanding AI models’ capabilities and limitations, calibrating user trust, mitigating potentially harmful model outputs, a lack of model explainability, and unfamiliarity with data science concepts (p. 2263).

Researchers have proposed different solutions to these difficulties, such as "human-AI guidelines to offer both cognitive scaffolding and educational materials when designers work with AI" (Feng et al., 2023, p. 2264). Additionally, "researchers have also proposed tools that combine UI prototyping with AI model exploration, process models and boundary representations for human-centered AI teams, and metaphors for generative probing of models" (Feng et al., 2023, p. 2264). In Feng et al.'s study, allowing UX designers to create their own AI models helped them better understand AI systems and bridged the gap between AI experts and UX designers.

But, Productivity!

Despite such difficulties discussing AI, UX designers can benefit from AI . . . right? Productivity is great . . . isn't it? Well, it depends on whom you ask. Gonçalves and Oliveira (2023) begin their paper by exploring the benefits of AI. First, they view AI as a new form of human creativity:

. . .the evolution of Artificial Intelligence has revolutionized design workflows, offering new possibilities and targeted approaches to computational tools, which play a crucial role in various stages of User Experience (UX) and User Interface (UI) projects. These stages range from automated content generation to advanced data analysis and market insights, enhancing creative and production processes, as well as interaction with the audience through chatbots and virtual assistants (Gonçalves and Oliveira, 2023, p. 2).

Are these developments positive or negative? The answer depends. To help answer that question, Gonçalves and Oliveira (2023) make use of “the advantages, challenges, and potential drawbacks of computational algorithms, as referenced by Madhugiri” (p. 3).

Here are the pros:

  • High accuracy and reduction of human error.
  • Allows you to automate repetitive tasks in different industries.
  • Efficient Big Data Processing.
  • Fast decision making.
  • Improved interaction with customers.
  • Discovery of trends and patterns.
  • Organizes the management of processes and workflows (Gonçalves and Oliveira, 2023, p. 3).

Conversely, here are the negatives:

  • Over-reliance on machines, diminishing human abilities and autonomy.
  • Need for investment in infrastructure and training, making the application of AI more expensive.
  • Data privacy and security concerns.
  • Creative limitations in challenging situations that require innovative thinking.
  • Lack of emotional understanding.
  • Misleading conclusions due to bias in data interpretation and limitations in the models.
  • Lack of flexibility or adaptability of systems (Gonçalves and Oliveira, 2023, p. 3).

"Computational algorithms" have certainly had a positive effect on UX and UI design, but using AI in UX/UI design raises ethical considerations alongside its usefulness (Gonçalves and Oliveira, 2023, p. 3). For instance, the authors examine how AI's impact on UX/UI design has created both a demand for workflow-optimizing tools such as ChatGPT and Midjourney and a need to understand how those tools affect "the evolution and adaptation of professionals in the face of these transformations in creative processes" (Gonçalves and Oliveira, 2023, p. 4).

Where Do We Go From Here?

Gonçalves and Oliveira (2023, p. 6) point out,

…the integration of computational algorithms in UX/UI design is seen as a revolution in user experience, providing improvements in usability, efficiency, and satisfaction through dynamic personalization, predictive and automated interactions, as well as advanced features. However, this technological transition raises questions about the impact on creative processes and the activities of designers.

So much effort is spent on understanding how AI will increase productivity, profit margins, and user satisfaction. However, how do we acknowledge and utilize the usefulness of AI systems while honoring the humanity of UX designers?

AI Systems as Coworkers, Not Replacements

For starters, AI doesn’t have to exile humans from the design process. Rather, AI systems should be used as “assistants” rather than as replacements for humans in the UX/UI design process.

To help establish a positive, working union between UX/UI designers and AI systems, Gonçalves and Oliveira (2023) suggest a few actions.

  1. Join Human-Computer Interaction (HCI) research with Knowledge Discovery/Data Mining (KDD) for Machine Learning (ML), although many UX/UI designers are not experienced with such ML tools.
  2. UX/UI designers should familiarize themselves with the responsible use of AI. The authors recommend that designers review documents such as the “Google AI Principles” and the “Beijing AI Principles” [links added for reader benefit].

Gonçalves and Oliveira (2023) note the more optimistic views of designers like Sam Anderson (Intuit) and Andrew Hogan (Figma), who "consider that the involvement of intelligent systems in UX/UI tasks will not replace the role of human designers but rather complement it" (Gonçalves and Oliveira, 2023, p. 7). Additionally, they point to new fields created by AI's inclusion in the design process, such as Human-Centered Machine Learning (HCML) and Design for Machine Learning User Experiences (DLUX), which foster collaboration between intelligent systems and human designers.

Acknowledging the Digital “Other”

As humans explore the use of AI in UX Design, we will inevitably be forced to address some uncomfortable issues. For instance, in their paper, Sciannamè and Spallazo (2023) aptly describe the implementation of AI in design as “a move from the paradigm of embodiment to alterity” (p. 2). Such an understanding truly changes the nature of human interaction with AI. Along such lines, they add, “AI-infused artifacts may be understood as counterparts, as suggested in Hassenzahl and colleagues’ definition of these products as otherware” (p. 2). They call for the establishment of “new interaction paradigms” so that humans “… see interactive items as other entities rather than users’ extensions” (p. 2). They even go so far as to write

AI-enhanced systems can be proactive and visibly demonstrate their agency to end users by learning, reflecting, and conversing. These systems go beyond delegated agency (Kaptelinin & Nardi, 2009). They can disappoint users, act independently, or – even better – select the ideal solution to the issue at hand (Sciannamè and Spallazo, 2023, p. 2).

If AI systems have agency and are more than mere tools, then such a realization changes human-AI interaction entirely. Yet, is this idea truly the reality of things, or simply more of the "theater of the absurd" that Rousi (2023) describes? While this article will not attempt to answer such questions (such philosophical and ethical considerations deserve a paper unto themselves), the fact remains that questions of agency, intelligence, and consciousness are relevant to how, why, and where AI is used, even beyond such tools' implementation in the UX design process. The time is likely fast approaching when we, as humans, will be forced to address AI's nature and ethical implications and what such considerations mean for consciousness and agency. Eventually, we will face this digital "Other," whether or not we wish to do so.

 

Bibliography

Babich, N. (2017) “What does a UX designer actually do?”. https://theblog.adobe.com/what-does-a-ux-designeractually-do/

Beijing Artificial Intelligence Principles. International Research Center for AI Ethics and Governance. (n.d.). https://ai-ethics-and-governance.institute/beijing-artificial-intelligence-principles/

Bouschery, S. G., Blazevic, V., & Piller, F. T. (2023). Augmenting human innovation teams with artificial intelligence: Exploring transformer-based language models. Journal of Product Innovation Management, 139–153. https://onlinelibrary.wiley.com/doi/epdf/10.1111/jpim.12656

Design Council. (2024). The framework is fundamental to our work. Framework for Innovation. https://www.designcouncil.org.uk/our-resources/framework-for-innovation/

Feng, K. J. K., Coppock, M. J., & McDonald, D. W. (2023). How Do UX Practitioners Communicate AI as a Design Material? Artifacts, Conceptions, and Propositions. In Designing Interactive Systems 2023 (pp. 2263–2280). Retrieved February 15, 2024, from https://dl.acm.org/doi/abs/10.1145/3563657.3596101.

Gonçalves, M., & Oliveira, A. G. N. A. (2023). IX International Symposium on Innovation and Technology. In Blucher.com. Retrieved February 14, 2024, from https://pdf.blucher.com.br/engineeringproceedings/siintec2023/305955.pdf.

Google. (n.d.). Google AI Principles. https://ai.google/responsibility/principles/

Hassenzahl, M., Borchers, J., Boll, S., Pütten, A. R. der, & Wulf, V. (2020). Otherware: How to best interact with autonomous systems. Interactions, 28(1), 54–57. https://doi.org/10.1145/3436942

Hassenzahl, M., Eckoldt, K., Diefenbach, S., Laschke, M., Len, E., & Kim, J. (2013). Designing Moments of Meaning and Pleasure. Experience Design and Happiness. International Journal of Design, 7(3), 21–31.

Humble, J. (7AD). What is the Double Diamond Design Process?. The Fountain Institute. https://www.thefountaininstitute.com/blog/what-is-the-double-diamond-design-process

Kaptelinin, V., & Nardi, B. A. (2009). Acting with Technology. Activity Theory and Interaction Design. MIT Press

Lawton, G. (2023, December 5). Transformer neural networks are shaking up AI. Tech Target. https://www.techtarget.com/searchenterpriseai/feature/Transformer-neural-networks-are-shaking-up-AI

Merritt, R. (2022, March 25). What Is a Transformer Model? Nvidia Blog. Retrieved February 19, 2024, from https://blogs.nvidia.com/blog/what-is-a-transformer-model/

Norman, D. and Nielsen, J. (1998). The Definition of User Experience (UX). Nielsen Norman Group. Retrieved June 1, 2023, from https://www.nngroup.com/articles/definition-user-experience/

Rousi, R. (2023). Nordic Network for Research on Communicative Product Design (Nordcode) Seminar 2019. arXiv. Retrieved February 14, 2024, from https://arxiv.org/abs/2304.10878

Salazar, K. (2023). User Experience vs. Customer Experience: What's The Difference? Nielsen Norman Group. Retrieved February 2024, from https://www.nngroup.com/articles/ux-vs-cx/

Schrock, D. (2022, February 22). A step-by-step guide for conducting better product discovery. Productboard. https://www.productboard.com/blog/step-by-step-framework-for-better-product-discovery/

Sciannamè, M., & Spallazo, D. (2023). International Association of Societies of Design Research Congress 2023. Design Research Society. Retrieved February 15, 2024, from https://dl.designresearchsociety.org/cgi/viewcontent.cgi?article=1127&context=iasdr.

Tone Tagged Commenting

Team Harmony Hacktivists: Alex McElravy, Emily Brozeski and Tessa Datte

 

Hacking4Humanity is a hybrid tech and policy hackathon focused on countering online hate. This year's areas of focus included the disproportionate impact of online hate on marginalized communities and the creation of online social cohesion. Our presentation was the runner-up for the Common Good Award!

Our team, the Harmony Hacktivists (Emily Brozeski, Alex McElravy, and Tessa Datte), designed a comment feature that focused on expressing empathy by enabling a person to add a tone tag to convey their intent within each comment.

Tone tags are short abbreviations appended to the end of an online message that express the delivery of the message in a digital space where it is hard to explain cadence or inflection.  

We aimed to tackle this question: how might we promote social cohesion through understanding others without disrupting the current patterns of use in social media?

As UX designers, we want our designs to be informed by people. To do this effectively, we conducted research that gathered secondary reviews, experiential feedback, and an ethical perspective. First, we found that online hate involves a participating audience, prompting us to explore human-computer interaction in how we convey social approval, affirmation, and sociability to our online peers. Second, most people acknowledged the importance of speaking up, yet many admitted they struggled to report or speak up themselves, saying that they feel powerless in online echo chambers. Third, we took into account that speech is emotionally driven and can cause intentional or unintentional harm, so it is ethically important to consider how a variety of speech impacts users.

From our research, we were able to dig deeper into the problem of online hate, which reflects our real-world environment. We defined the problem as:

 

Social media users need a reminder that comments are opinions with varying delivery because people who comment with misunderstandings can perpetuate harmful and uneducated debates.

 

In understanding the scope of the problem, we recognized an opportunity to let users express empathy by introducing a tool that supports and empowers understanding of individual communities. Our hope was that a tool that expresses intent could help defuse hateful speech or, at the very least, provide background on a person's stance in an interaction.

Our prototype allowed the user to write a comment and select a tone tag from a pop-up that helped convey the delivery of their message to other commenters. Once the comment was sent, an abbreviated tone tag appeared next to the other interaction elements of the comment section. When the abbreviation icon was clicked, a drop-down about the size of a comment displayed the meaning of each tone tag that the sender chose to attach. The drop-down also included a short blurb that challenged the user to consider their role in online interaction and the use of comments.

[Prototype stills: three frames from the team's video, filmed in Duquesne University's Gumberg Library, showing the tone tag pop-up and tagged comments. The on-screen text explains that a tone tag indicates the attitude or delivery of a comment and defines the available tags: /gen for an authentic statement or genuine question; /pos for a positive comment or connotation; /s for saying the opposite of what the words literally mean; /nm for affirming a person's emotional stance; /rh for dramatics or making a point rather than seeking an answer; /srs for earnest consideration. Example comments such as "Isn't Gumberg the best!? /rh" appear alongside a reminder that online comments are opinions used to create social interaction and reinforcement.]
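To make the interaction concrete, here is a minimal sketch of how the prototype's tone tags might be represented in code; the tag set mirrors the prototype screens above, but the function names and data structure are our own illustrative assumptions rather than anything from the Figma file.

```python
# Illustrative sketch: representing the prototype's tone tags and
# attaching one to a comment. Names and structure are hypothetical.
TONE_TAGS = {
    "/gen": "an authentic statement or a genuine question",
    "/pos": "a positive comment or positive connotation",
    "/s":   "the opposite of what the words literally mean (sarcasm)",
    "/nm":  "affirming a person's emotional stance",
    "/rh":  "a rhetorical or dramatic point, not seeking an answer",
    "/srs": "earnest, serious consideration",
}

def tag_comment(text: str, tag: str) -> str:
    """Append a chosen tone tag to a comment, validating it first."""
    if tag not in TONE_TAGS:
        raise ValueError(f"Unknown tone tag: {tag}")
    return f"{text}{tag}"

def explain_tag(tagged_comment: str) -> str:
    """Return the meaning shown in the drop-down for a tagged comment."""
    for tag, meaning in TONE_TAGS.items():
        if tagged_comment.endswith(tag):
            return f"{tag} means {meaning}."
    return "No tone tag attached."

print(tag_comment("Isn't Gumberg the best!?", "/rh"))
print(explain_tag("Love the colorful markers! They are too cute./pos"))
```

Even this toy model shows how little data the feature needs: a fixed tag vocabulary and a lookup that powers the drop-down explanation.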

Our design attempted to be inclusive of our communities and the roles our social platforms require us to take on. For commenters, mitigating dialogues that trigger negative responses is an active step toward countering online hate. For observers, empathy that helps them better understand a person's viewpoint allows them to keep participating and creating in this shared space despite differing perspectives. For communities, tone tags empower people to participate in environments that may otherwise have been polarizing without a shared understanding of a message's inflection. And finally, for outside stakeholders, tone tags are an opportunity for large business partners to give weight to the seriousness of the hate speech that surrounds us.

We understand that our initial iteration leaves significant considerations unexplored in its current state. With more time, we would tackle this by gathering user feedback on the wording of each tag's explanation as well as the wording of the drop-down reminder. In doing so, we hope to account for some of the unintended consequences that arise from emotionally driven statements.

Hate speech and its impact stifle speech within a community, influencing user participation and exposure. Our goal was to bring awareness to the fact that our online experiences reflect real-life interactions. Our tone-tagged comments aimed to create empathetic understanding between commenters in order to foster unity and authentic curiosity in digital spaces.

 

Figma Prototype: https://www.figma.com/proto/yxCCjKzjnywcvbuPfrei28/Hackathon-2024?node-id=123-5469&starting-point-node-id=72%3A2665&scaling=scale-down&mode=design&t=FnPDuHDJ2QmujTbZ-1 

Presentation: https://www.canva.com/design/DAF8y3kZo0M/Nh-kDjxQwxcRgsstJ2m1uw/edit?utm_content=DAF8y3kZo0M&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton 

Two Ways in Which AI and Machine Learning Alter the User Experience

By Alexa Polidora

MFA, Media Arts & Technology (Interactive Design focus)

2/14/2024

 

Many researchers and members of the human-computer interaction (HCI) and UX Design communities have long taken a human-centered design (HCD) approach. Such a focus entails many elements, but mainly, HCD’s mission is to design products that solve real human problems for real human people.

However, with the advent of AI, HCD becomes a bit more complicated. Now, HCI researchers, UX Designers, and other involved parties are confronted with an intelligent system (or systems) that interact with human users, make decisions, and affect the design process. Some systems even require little to no human involvement! What, then, should HCI and UX Design professionals do when confronted with such realities?

Through an exploration of the work of Philip van Allen and of Mikael Wiberg and Erik Stolterman Bergqvist, this article focuses on two scholarly assessments of how AI and machine learning (ML) alter the user experience.

Welcome the Non-Human User

What happens when the user experience involves intelligent systems and machines whose goals go beyond "just getting stuff done"? As described by van Allen (2017), "ML/AI systems are often non-visual and focused on complex behaviors and extended interactions with multiple people and digital systems, balancing goals through a collaborative approach that is not only focused on task completion" (p. 431). Additionally, who, or what, are these intelligent systems? Expanding on this point, van Allen (2017) provides the example of an intelligent autonomous vehicle. This vehicle's system involves many competing and differing processes and requires interaction with humans and other smart devices (van Allen, 2017).

Further complicating things, van Allen (2017) explains that the requirements of such a system necessitate a user design process that differs from the traditional methods. The author writes that this "design context" is "an evolving, negotiated, inconsistent, improvised, serendipitous interaction that does not easily resolve to task accomplishment, efficiency, certainty, ROI, customer expectations, or for that matter, one user's experience" (van Allen, 2017, p. 431). In other words, such an intelligent system focuses on more, involves more, than just completing tasks to make "The Business" happy. Furthermore, such systems radically alter the user design process. As van Allen (2017) later explains, "When ML/AI systems are constantly learning, adapting, and renegotiating in a context of other evolving autonomous systems and humans, the design constraints and goals are different from conventional UX" (p. 431).

These smart devices introduce a new perspective to the UX design process: that of the intelligent "machine" (van Allen, 2017). Van Allen (2017) questions how traditional UX design methods might work when many machine learning/AI systems are involved in addition to the traditional human user. Van Allen (2017) further asks, "Who is the 'user,' or is 'user' even an appropriate way to understand the problem?" (p. 431). Van Allen (2017) states that the existence of "autonomous things" and the way they "behave, interact, communicate" and "embody a 'lived' history, evolve, and thrive" will change the nature of UX design (p. 431). Such change will require "new design methods and patterns" (van Allen, 2017, p. 431).

For instance, van Allen (2017) notes that ML/AI systems require characteristics that are typically ascribed to humans (such as ethics and personality). Such requirements, van Allen (2017) points out, create conditions in which “the concept of Human-Centered Design (HCD) starts to break down” (p. 432). As van Allen (2017) notes

When digital participants have their own goals, needs, intentions, ethics, moods and methods, an organic, unpredictable and evolving system is created. The human is no longer the center. Instead, the center of design becomes the system and its outcomes. Design moves towards building emergent ecologies (p. 432).

In short, the existence of a digital “Other” with its own perspectives and goals adds another layer of complexity to a user design process that was, until recently, solely focused on human users.

Yet, despite this movement away from human-centered design, van Allen (2017) remains optimistic about the use of ML/AI systems in the UX Design process. Van Allen (2017) hopes that, rather than replacing humans, such systems will enable humans to focus more on creative processes. Additionally, van Allen (2017) hopes that humans will come to see ML/AI systems as “peers that collaborate across common and competing goals” (p. 432). Rather than supporting a movement away from human-centered design, van Allen (2017) instead advocates that HCD methods “ . . . be secondary to newly imagined approaches that fully embrace the potentials of ML/AI” (van Allen, 2017, p. 432).

He proposes the notion of “Animistic Design,” which van Allen describes as

Animistic Design proposes that smart digital entities adopt distinct personalities that inform their perceived sense of aliveness. And rather than having people work with a single, authoritative system, this approach has people engage with multiple smart systems, where each entity has its own intentions, expertise, moods, goals, data sources and methods. These are not . . . cute anthropomorphic dolls. Instead, Animistic Design strives for a more "native" digital animism, that embodies (metaphorically at least) the inherent characteristics of computational/mechanical systems (p. 432).

The outcome of such design is ultimately the creation of a design "ecology" that nurtures and encourages conversations between human designers and ML/AI systems (van Allen, 2017). One benefit of such ecologies, as well as of acknowledging the limitations of ML/AI systems, is that it "allows designers to move away from trying to provide single, correct answers" (van Allen, 2017, p. 432). Van Allen (2017) holds that the existence of multiple problem solutions better trains ML/AI systems and fosters greater design capabilities. Additionally, van Allen (2017) postulates that Animistic Design encourages and enables distributed cognition, which recognizes that, in addition to their physical brains, humans think using the environment with which they interact (van Allen, 2017).

The “Automation of Interaction”

When automated AI systems become involved, the user design process transforms once again.

Wiberg and Bergqvist (2023) focus their paper on the ways in which the combination of automated systems and user experiences impacts the nature of user interactions. They examine AI and UX design from a perspective that, while analyzing AI and its impact on the UX design process, still "suggests a need to understand human–machine interactions as a foundation for the design of engaging interactions" (p. 2281). Focusing on the growing relationship between UX design and automation, Wiberg and Bergqvist (2023) explore the "automation of interaction" (p. 2281). They discuss how the principle of "interaction" is the joining force between user experience and artificial intelligence. Additionally, Wiberg and Bergqvist (2023) note the ways in which the increased focus on automation changes the entire user experience. In some cases, such automation removes the need for user interaction altogether (Wiberg and Bergqvist, 2023). A shift also occurs in the notion of the user being in control of the experience; instead, autonomous machines now control the process (Wiberg and Bergqvist, 2023).

Wiberg and Bergqvist are well aware of the transformative era in which HCI finds itself. They recognize "that HCI is at this crossroad between UX and AI from the viewpoint of designing for engaging interactions versus designing for automation" (Wiberg and Bergqvist, 2023, p. 2283). They then discuss the idea of an "automation space" in which discussions take place about which aspects of digital processes should be automated and which should remain manual (Wiberg and Bergqvist, 2023, p. 2283). Yet, the authors raise a point that makes such a space harder to create: when a process is being automated, that automation might reveal additional issues that make the process hard to automate (Wiberg and Bergqvist, 2023). Additionally, the authors discuss how automating processes might deprive users of the feeling of "being in control" (Wiberg and Bergqvist, 2023). Such automation and the resulting deprivation might rob the user of feelings of comfort (Wiberg and Bergqvist, 2023). Also, while automation is beneficial process-wise, it becomes costly if it fails and then requires manual human interaction to fix the problem (Wiberg and Bergqvist, 2023).

Another such complexity discussed is the way in which automation puts the human interaction piece in the “foreground” of experience (Wiberg and Bergqvist, 2023, p. 2284). As Wiberg and Bergqvist (2023) further explain

As computing is increasingly designed along an aesthetics of disappearance, rendered invisible, and fundamentally entangled with our everyday lives, it becomes less clear what is suitable for automation and what requires true user control. In fact, this seems to be almost an interaction design paradox, and it has accordingly received some attention from HCI and interaction design researchers who have tried to resolve this tension between automation and user experience (p. 2284).

As computing becomes more "invisible" to the user experience, the authors explore the need for the human user to be involved in, or at least kept aware of, the automated processes taking place in the background (Wiberg and Bergqvist, 2023). Additionally, Wiberg and Bergqvist (2023) note that much research seeks to understand the interplay between human user interaction and the "behind-the-scenes" machine automation taking place. However, the authors propose the need for " . . . a deeper understanding of what interaction is" (Wiberg and Bergqvist, 2023, p. 2284). As they state,

We argue that we need a framework that allows us to focus on and examine the details of how the interaction unfolds (what is happening in the foreground) and that allows us to see aspects of automation (what is happening in the background), while at the same time relate to the user experience (Wiberg & Bergqvist, 2023, p. 2284).

To establish such a framework, the authors begin by exploring definitions integral to an understanding of the interaction process. Building on the definition established by earlier researchers Janlert and Stolterman, Wiberg and Bergqvist (2023) agree that "interaction" is ". . . an operation by a user, and the responding 'move' from the artifact" (p. 2284). They then lay out the definitions used in Janlert and Stolterman's model (Wiberg and Bergqvist, 2023). Those definitions include:

  • “Internal states, or i-states for short, are the functionally critical interior states of the artifact or system.” (Wiberg and Bergqvist, 2023, p. 2284)
  • “External states, or e-states for short, are the operationally or functionally relevant, user-observable states of the interface, the exterior of the artifact or system” (Wiberg and Bergqvist, 2023, p. 2284).
  • “World states, or w-states for short, are states in the world outside the artifact or system causally connected with its functioning” (Wiberg and Bergqvist, 2023, p. 2284).

Additionally,

The model also details the activity on both the artifact and user sides. For instance, states change as a result of an operation triggered by a user action or by the move (action) by the artifact. These moves appear as a cue for the user. These cues come to the user either as e-state changes or w-state changes (Wiberg and Bergqvist, 2023, p. 2284).

The authors then state

Based on the model, we can now define any form of “automation of interaction” as removing a pair of actions and moves from an interaction while leading to the same or similar outcome (Wiberg and Bergqvist, 2023, p. 2284 – 2285).

Wiberg and Bergqvist (2023) then examine two “automation of interaction” relationships: “no automation (full interaction)” and “no interaction (full automation)” (p. 2285). They define “no automation” as meaning

…that the artifact does not perform any operations and moves other than those triggered explicitly by an action of the user. This means that the user has complete control of all activities and outcomes, which requires intimate knowledge and skill. It also means that the user needs to understand the artifact and the relationship between user actions and artifact moves (Wiberg and Bergqvist, 2023, p. 2285).

Conversely,

The extreme form of full automation of interaction means that the artifact performs all its operations and moves without being triggered by any actions from the user. Instead, the artifact moves are based on its i-states or changes in the w-states. This means that the user has no control over activities and outcomes. It also means that the user does not need any particular knowledge or skills since the artifact performs all actions (Wiberg and Bergqvist, 2023, p. 2285).
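To make this vocabulary concrete, here is a small toy model (our own illustration, not code from Wiberg and Bergqvist or from Janlert and Stolterman) of an artifact with i-, e-, and w-states, contrasting a user-triggered move with a fully automated one.

```python
# Illustrative sketch of the i-state / e-state / w-state vocabulary used by
# Wiberg and Bergqvist. A hypothetical thermostat stands in for the "artifact".
from dataclasses import dataclass

@dataclass
class Thermostat:
    i_state: float = 20.0   # internal target temperature (hidden logic)
    e_state: str = "idle"   # user-observable display on the interface
    w_state: float = 18.0   # temperature of the room (the world)

    def user_action_set_target(self, target: float) -> None:
        """No automation: a move happens only when the user acts."""
        self.i_state = target
        self._move()

    def automated_tick(self) -> None:
        """Full automation: the artifact moves based on i- and w-states alone."""
        self._move()

    def _move(self) -> None:
        # The artifact's move surfaces as an e-state change, i.e., a cue.
        self.e_state = "heating" if self.w_state < self.i_state else "idle"
        print(f"cue to user: display shows '{self.e_state}'")

t = Thermostat()
t.user_action_set_target(22.0)  # interaction: user action -> move -> cue
t.w_state = 23.0                # the world changes on its own
t.automated_tick()              # automation: a move without any user action
```

The second call removes the user's action-move pair while producing a similar outcome, which is exactly what the authors mean by "automation of interaction."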

Concluding their definitions, the authors write,

We can now see that “automation of interaction” through AI means that we substitute man–machine interaction with AI support that can automate complex relationships between actions, operations, moves, and/or cues as the basic model of interaction shows (Wiberg and Bergqvist, 2023, p. 2285).

Regarding the “automation of interaction,” the authors conclude

In many cases, the reduction of interaction will, for the user, lead to a loss of control and precision, but maybe with a gain in functionality, performance, and of course, a lesser need to focus on interaction (Wiberg and Bergqvist, 2023, p. 2285).

Ultimately, Wiberg and Bergqvist (2023) withhold judgment on whether such "automation of interaction" is positive or negative. Instead, the authors state that their "automation of interaction" model is meant to be descriptive rather than judgmental (Wiberg and Bergqvist, 2023). In fact, they note that "Whether a certain combination is 'good' or not can only be determined in relation to the purpose of the interaction and how users experience it, and the value and quality of its outcome" (Wiberg and Bergqvist, 2023, p. 2288). They add that fully automated systems can lead to either good or bad experiences, depending upon the user(s) involved (Wiberg and Bergqvist, 2023).

A Transhuman Design Process?

As described in the articles explored, the user design process is certainly affected and complicated by the addition of AI and ML. As proposed by van Allen’s Animistic Design, HCI researchers and UX designers find themselves forced to consider the user perspective of a non-human Other. Additionally, as Wiberg and Bergqvist show, designers are confronted with a need to reconsider the notion of “interaction” itself. Is such transhumanism the future of the user design process? Perhaps, or maybe not. Either way, the use of AI and ML promises to transform the user design process in the immediate future.

Bibliography

Human-Centered Design (HCD). (n.d.). Interaction Design Foundation. Retrieved February 14, 2024, from https://www.interaction-design.org/literature/topics/human-centered-design

van Allen, P. (2017). Reimagining the Goals and Methods of UX for ML/AI. In The AAAI 2017 Spring Symposium on Designing the User Experience of Machine Learning Systems, Technical Report SS-17-04. Retrieved from https://cdn.aaai.org/ocs/15338/15338-68263-1-PB.pdf

Wiberg, M., & Bergqvist, E. S. (2023). Automation of interaction—interaction design at the crossroads of user experience (UX) and artificial intelligence (AI). Personal and Ubiquitous Computing, 27, 2281–2290. https://link.springer.com/article/10.1007/s00779-023-01779-0

Taming the new AI hero in TechTown: OpenAI's ChatGPT; Discussing the chatbot's use, why and where to draw the line.

By Abharan Yadavalli

2/23/2023

What is ChatGPT?

ChatGPT is an artificial intelligence-based chatbot that uses a deep learning architecture called the transformer to generate responses to text-based inputs. It is trained on a massive corpus of text data to understand the patterns and structures of language. GPT stands for Generative Pre-trained Transformer, meaning the model generates new text rather than selecting pre-existing responses from a database. This makes it highly versatile and able to generate responses to a wide range of inputs, including those it has never encountered before.

So, at this point, a glaring question presents itself: "Why am I writing this article when an AI could write essays in seconds?"

Let’s take a look at the result of asking ChatGPT to generate an article that outlines the following:
1. What is ChatGPT?
2. How is it different from other AI tools?
3. What are its use cases?

The following response was generated:

“Artificial intelligence has come a long way, and ChatGPT is at the forefront of this progress. ChatGPT is a deep learning language model that uses the transformer architecture to generate human-like responses to text-based inputs. Developed by OpenAI, it has garnered significant attention due to its impressive capabilities.

So, what sets ChatGPT apart from other AI text tools?

One of the most significant differences is its generative nature. ChatGPT does not rely on pre-programmed responses or keyword matching like rule-based systems or chatbots. Instead, it uses deep learning to analyze the context of a user’s input and generate a response based on its understanding of language patterns. This allows ChatGPT to create unique and personalized responses, making it a more natural and engaging conversational partner.

ChatGPT’s architecture also enables it to learn from vast amounts of text data, making it highly adaptable to new inputs. It can understand the meaning of words and phrases in context, making it better equipped to handle complex and ambiguous queries. 

The result is a highly sophisticated AI text tool that can be used in various industries and applications. Some of the use cases of ChatGPT include:

Customer service – ChatGPT can handle simple and complex customer queries, reducing the need for human intervention and speeding up response times.

Language translation – With its deep understanding of language patterns, ChatGPT can provide accurate translations for different languages.

Personalized content creation – ChatGPT can generate unique and engaging content for social media, email marketing, and other digital channels.

Education – ChatGPT can act as a tutor, helping students with questions and providing explanations for complex concepts.

Healthcare – ChatGPT can be used to create chatbots that assist patients in booking appointments, answering health-related queries, and more.

In conclusion, ChatGPT’s innovative approach to language processing has made it a game-changer in the AI text tool space. Its generative nature and ability to learn from vast amounts of text data make it highly versatile and adaptable to various industries and applications. As AI continues to evolve, ChatGPT will undoubtedly play a significant role in shaping the future of conversational interfaces.”

Great! I could not have given a better introduction. That is a well-written, concise piece of writing (more accurately, a concise piece of algorithmic generation). It summarizes what ChatGPT does, how it operates, and a few of its uses. Clearly, ChatGPT is helping write this article and doing a pretty decent job. Let's see in which other fields one can gain professional content using ChatGPT, in an instant!

So, I asked ChatGPT to list 45 use cases in professional fields, and here is the response (feel free to skim to the end of the list):

  1. “Content creation: Assist with generating written content such as articles, social media posts, and blog posts.
  2. Chatbots: Create automated chatbots to handle customer queries and support.
  3. Customer support: Provide real-time assistance and support to customers.
  4. Language translation: Translate written text in real-time between multiple languages.
  5. Personalized content: Create personalized content for individual users based on their interests and preferences.
  6. Virtual assistants: Assist with scheduling, reminders, and other tasks to help with productivity.
  7. Creative writing: Assist with generating creative content such as poetry and short stories.
  8. Academic research: Assist with generating research papers and analyzing data.
  9. Marketing: Assist with generating marketing content such as ad copy and email campaigns.
  10. Gaming: Assist with generating dialogue and narrative for video games.
  11. Chat-based learning: Assist with language learning and educational material delivery.
  12. Text summarization: Summarize lengthy text into shorter and more manageable pieces of information.
  13. Speech recognition: Recognize and transcribe spoken words into written text.
  14. Sentiment analysis: Analyze and understand the sentiment and emotion of written text.
  15. Content moderation: Identify and remove inappropriate or offensive content from online platforms.
  16. Resume screening: Screen job applications and identify qualified candidates based on job requirements.
  17. Personal finance management: Assist with managing finances and tracking spending.
  18. Legal document analysis: Analyze legal documents and contracts.
  19. News aggregation: Collect and organize news articles from various sources.
  20. Chat-based therapy: Provide therapy and mental health support through chat.
  21. Knowledge management: Store and retrieve knowledge and information within an organization.
  22. HR management: Assist with recruiting, hiring, and managing employees.
  23. Sales support: Assist with generating leads and closing sales.
  24. Project management: Assist with managing projects and tasks.
  25. Virtual event assistants: Assist with organizing and managing virtual events.
  26. Social media management: Assist with managing and scheduling social media posts.
  27. Content curation: Collect and organize relevant content for users.
  28. Restaurant menu recommendation: Recommend menu items based on user preferences.
  29. Personalized fitness coaching: Create personalized fitness plans for individual users.
  30. Financial forecasting: Analyze financial data to make predictions about future trends.
  31. Data analysis: Analyze and interpret large amounts of data.
  32. Fraud detection: Identify and prevent fraudulent activity.
  33. Investment analysis: Analyze and make predictions about investment opportunities.
  34. Cybersecurity: Identify and prevent cyber threats and attacks.
  35. Natural language database querying: Query databases using natural language instead of programming languages.
  36. Chat-based health assessments: Conduct health assessments and provide medical advice through chat.
  37. Real-time language translation: Translate spoken language in real-time between multiple languages.
  38. Personal shopping assistants: Assist with online shopping and finding the right products.
  39. Voice-controlled smart home automation: Control smart home devices through voice commands.
  40. Mood tracking and emotional support: Provide emotional support and help users track their mood.
  41. Personalized travel recommendations: Recommend travel destinations and activities based on user preferences.
  42. Storytelling: Assist with generating engaging stories for various purposes.
  43. Technical support: Provide technical support for software and hardware issues.
  44. Legal advice: Provide legal advice and guidance.
  45. Financial planning: Assist with financial planning and budgeting.”

Good job!

ChatGPT may not pass the Turing test, but it can definitely pass the "tell me about yourself" section of an interview. We now have a long (non-exhaustive) list of fields in which we can use ChatGPT to automate, research, or assist us with tasks. So, does this mean we can now reliably carry out tasks in the fields mentioned above?

The answer is a big ‘No’.

Breaking down a few of the use cases mentioned above:

Most of the use cases up to item No. 30 are definite advantages provided by ChatGPT, but soon after that the list goes downhill. For example, No. 44, legal advice and guidance: it is safe to say that this is a dangerous proposition to follow for a person who lacks subject matter expertise. Legal advice needs to incorporate factors that might or might not be considered while merely scratching the surface, especially in cases of civil law, which is heavily based on considering the society, its individuals, its workings, and geographically differing laws and their treatments. In a similar way, chat-based health assessment and therapy are equally harmful, because the chatbot's bank of knowledge is too broad; as noted above, its architecture "enables it to learn from vast amounts of text data."

Why is the extensive amount of 'text data' a downside? (The more the better, right?) Sadly, it's a 'no' again, because the vast amounts of text data are scoured from sources that present everything in their entirety, including a huge amount of generalized and inaccurate information. Human beings, on the other hand, require specialized assessments that cater to their individual circumstances and a great deal of careful filtering of information. Hence, chatbots are not the solution for health advice either.

"But wait, facts are facts . . . right?" To answer this, we need to understand how ChatGPT generates its content. ChatGPT "uses deep learning to analyze the context of a user's input and generate a response based on its understanding of language patterns." The key phrase here, "its understanding of language patterns," is crucial. Being a language model, ChatGPT is limited in its functional capabilities: it generates responses based on past language trends and patterns and does not possess critical thinking or decision-making capabilities (unlike actual intelligence). Pre-trained language models like ChatGPT suggest answers based only on language-pattern recognition. And with the internet being home to plenty of misleading information (e.g., blogs that make unregulated health and medication recommendations), including it in the information pool can lead to great amounts of inaccuracy.
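The pattern-based nature of this generation is easy to see with a small, openly available model. The sketch below uses GPT-2 through the Hugging Face transformers library purely as an illustration; ChatGPT is far larger and more capable, but the underlying next-token mechanism belongs to the same family of technique, and nothing in it checks facts.

```python
# Illustrative sketch: a small language model continuing a prompt purely
# from learned language patterns, with no fact-checking or reasoning.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "For a persistent headache, the recommended medication is"
outputs = generator(prompt, max_new_tokens=20,
                    num_return_sequences=2, do_sample=True)

for out in outputs:
    # The continuations are fluent, but nothing validates their accuracy.
    print(out["generated_text"])
```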

So, always take a chatbot’s recommendations with a grain of salt.

Learn more about AI language models: C. Montemayor (2020), Language and Intelligence (https://doi.org/10.1007/s11023-021-09568-5).

There are numerous sources showcasing the advantages of a chat-based AI's information-fetching capabilities, which are very handy. So let's skip to an ongoing concern that applies not just to AI but goes all the way back to the sudden boom in the importance of technology: diminishing critical thinking skills. Reliance on technology, in the manner into which we have collectively evolved, will potentially alter our learning and how we receive information. This raises questions about whether our critical thinking and decision-making skills are enhanced or adversely affected by AI.

Is technology bad? Should we stop using it?

Absolutely not! Technology has existed since the wheel was invented, and it has driven great historical leaps such as the Industrial Revolution. The question is not about the existence of technology around us, but the manner in which we use it.

When performing your tasks, "leverage" technology as a catalyst that brings you closer to your goals.

AI is smart, but not actually intelligent, as it only fetches what seems to be an intelligent answer. Artificial Smartness (AS) might be a more apt name for it.

in·tel·li·gence /ɪnˈtɛlədʒəns/

noun [uncountable]: the ability to learn, understand and think in a logical way about things;


P.S:

A screenshot of what could contain information that can be misleading or problematic:


Recommending any kind of medication, or even the names of medications, poses a great risk to individuals who lack medical knowledge.


Designing In-Vehicle Information Systems to Reduce the Effect on Driver Attention

MFA Capstone Project: Designing In-Vehicle Information Systems to Reduce the Effect on Driver Attention

Author: Mark O'Black

Year: 2022

ABSTRACT

Distracted driving is a primary cause of motor vehicle accidents each year, with mobile phones contributing to this type of distraction. Drivers may use their mobile phones to make calls, send text messages, or find directions to a destination while operating their vehicle, which has prompted automotive manufacturers to equip vehicles with an in-vehicle information system (IVIS). The primary goal of an IVIS is to help eliminate the need for a driver to use their mobile phone while operating a motor vehicle; however, completing a task through an IVIS requires a driver’s attention, which leads to less attention placed on the primary task of driving. When less attention is placed on the primary task of driving, the risk of a motor vehicle accident taking place increases. This project will explore the design of IVISs, their effect on a driver’s attention, and identify ways to improve an IVIS interface design so that minimal attention is required from drivers when interacting with an IVIS. Based on this research and analysis, an IVIS interface design that helps reduce the amount of attention required from a driver to perform a task will be designed and validated through user testing.

Mark O'Black Capstone Project Summary
An Interview Research Process: Adapting the Listening Guide to UX Design Research https://duqux.com/2022/04/26/a-interview-research-process-adapting-the-listening-guide-to-ux-design-research/ Tue, 26 Apr 2022 20:28:11 +0000 https://duqux.com/?p=5324

MFA Capstone Project: Interview Research Processes: Adapting the Listening Guide to UX Design Research

Author: Mary (Molly) Smith

Year: 2022

ABSTRACT

User Experience (UX) design researchers continually search for new methods that aid in understanding human thinking and behavior. Aside from interviews, they use methods such as contextual inquiry and usability testing with think-aloud verbal protocols, in which verbalizations are collected to gain insights about how people perceive, think, and behave when interacting with products or experiences. Because verbalizations are often ambiguous, given differences in cultures, languages, and thinking processes, disparity can exist between what a person says about their experience and the researcher’s interpretation. The Listening Guide (LG), a method of psychological analysis developed by psychologist Carol Gilligan and associates, draws on voice, resonance, and relationship as ways to know the inner world of an individual (Gilligan, Spencer, Weinberg, & Bertsch, 2003). Researchers have used it “to listen to and understand voices … [of individuals] … that have been missing from or inadequately represented” in research (Petrovic, Lordly, Brigham, & Delaney, 2015). It is widely used as a method to analyze qualitative research data. This project proposes to adapt the LG as a method of inquiry in UX design research. It aims to understand the meaning of verbal responses collected in design research by simplifying the complexity of interview responses. Specifically, the project will assess the value of the LG for UX research. It will examine how the LG can help researchers gain deeper understanding of users and stakeholders through interviewing.

An Application Design for Mobile Commerce Decision Support System https://duqux.com/2022/04/18/m-commerce-app-design/ Mon, 18 Apr 2022 10:02:16 +0000 https://duqux.com/?p=195

MFA Capstone Project: Application Design for M-Commerce Decision Support System

Author: Ngoc Nguyen

Year: 2021

ABSTRACT

Although people are comfortable making online purchases, the inability to effectively determine product quality and price to maximize value encumbers purchase decision-making. Existing mobile applications assist in online shopping, but do not offer search capability across different companies or return results with concise product information, customer reviews, and price comparisons. Additionally, the inconsistency of application interfaces impedes usability.

This project examined human decision-making and the design process for an M-commerce application. The application allows people to conduct product research across many retailers and to review product information, explore features, make price comparisons, and obtain deal alerts. The project used the Double Diamond design process model, a framework that aids designers by highlighting key design phases, principles, and methods. It afforded an accessible means by which to explore the design problem and to streamline product research and design processes. In this project, the author discussed user research, prototyping, and testing, as well as the implications of using the Double Diamond process framework for designing an M-commerce application.

Keywords: M-Commerce, Human Decision-Making, Decision Support Systems, Interaction Design, Human Factors, Double Diamond Process, User Interface Design, User Experience, UX Research.

Read Project Report: M-Commerce Decision Support System_Submitted

Stroop | Eye tracking https://duqux.com/2022/03/23/stroop-test-2/ Wed, 23 Mar 2022 17:58:43 +0000 https://duqux.com/?p=957

A Stroop test | Cognitive Load.

A Stroop test is a measure of selective attention. Understanding how and why someone attends to stimuli in the environment is interesting and important. It is also valuable to understand the impact on attention as task complexity increases. This is especially important when designing user interfaces.

We conducted a Stroop test (a pilot test) to examine the effects of cognitive load on participants’ eye movements, pupil dilation, and electrodermal activity. We collected eye movement and electrodermal activity (GSR, galvanic skin response) data.

NOTE: This was not a formal study and we only performed it to test our laboratory equipment. We asked three people to participate.

Task 1: We asked participants to read aloud a list of words representing colors (e.g., blue, green, red). Words were presented in a white font color and displayed on a black background.

Task 2: We asked participants to read a list of words representing colors (e.g., blue, green, red). Words were presented in various font colors (e.g., the word red appeared in the color yellow) and displayed on a black background. Participants read the word aloud: if the word “blue” was presented in the color “green,” the participant was to say “blue.”

Task 3: We asked participants to read a list of words representing colors (e.g., blue, green, red). Words were presented in various font colors (e.g., the word red appeared in the color yellow) and displayed on a black background. Participants named the font color of the word: if the word “blue” was presented in the color “green,” the participant was to say “green.”
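For readers who want to set up a similar pilot, the sketch below (plain Python; the color list and trial count are illustrative assumptions, not our exact stimuli) generates the two stimulus conditions: white-font words for Task 1 and incongruent word–color pairs for Tasks 2 and 3, where only the instructed response (read the word vs. name the font color) differs.

import random

COLOR_WORDS = ["red", "green", "blue", "yellow"]  # illustrative word set

def make_trials(n_trials=20, incongruent=False):
    # Return (word, font_color) pairs for a Stroop-style task.
    # Task 1: incongruent=False -> every word is displayed in white.
    # Tasks 2 and 3: incongruent=True -> the font color never matches the
    # word, which creates the interference condition.
    trials = []
    for _ in range(n_trials):
        word = random.choice(COLOR_WORDS)
        if incongruent:
            font_color = random.choice([c for c in COLOR_WORDS if c != word])
        else:
            font_color = "white"
        trials.append((word, font_color))
    return trials

task1_trials = make_trials(incongruent=False)  # read the word aloud
task23_trials = make_trials(incongruent=True)  # Task 2: read the word; Task 3: name the font color
print(task23_trials[:5])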

Naming the color of the word can create interference effects as participants inadvertently try to read the word rather than name the color in which it is displayed: reading the word interferes with naming the color. The interference presents both stimulus–stimulus and stimulus–response incompatibility (Proctor & Vu, 2016), makes the color-naming task difficult, and potentially increases cognitive load.

In testing our lab equipment, we were interested to see if the Stroop interference effects impacted eye tracking scans, pupil size, and electrodermal activity (GSR). We were especially interested in determining how well we could collect and represent these data.

Figure 1 shows a heat-map of participants’ normal reading of the words (white words displayed on a black background).

Figure 2 shows a heat-map of participants naming the color of each word.

Reading Patterns and Time: All participants read words from left to right. When they completed a row of words, their eyes traversed back to the leftmost word on the next row, which is interesting because they could have read the words in any order. When naming the color of the word (interference condition), participants took more time and had more dispersed eye scans (as shown in Figure 2).

Figure 3 shows electrodermal activity for normal reading (BW, blue line) and for naming the color of the word (Color, red line). There appeared to be increased electrodermal activity when naming the color of the word (interference condition, red line) compared to reading the word.

Pupil size: Pupil size increases with task demands, and pupillometry has been shown to be a stable measure of Stroop interference (Laeng, Ørbo, Holmlund, & Miozzo, 2011).

Figure 4 shows the left pupil size for normal reading (BW, blue line) and for naming the color of the word (Color, red line). Pupil size was larger when naming the color of the word (interference condition, red line).
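As a rough illustration of how such a condition comparison can be computed (the column names and sample values below are hypothetical, not our eye tracker’s actual export format), one can average pupil diameter per condition:

import pandas as pd

# Hypothetical eye-tracker samples: one row per sample, labeled with the
# condition ("BW" = normal reading, "Color" = color naming) and the
# left-pupil diameter in millimetres.
samples = pd.DataFrame({
    "condition": ["BW", "BW", "BW", "Color", "Color", "Color"],
    "left_pupil_mm": [3.1, 3.0, 3.2, 3.6, 3.7, 3.5],
})

# Mean pupil diameter per condition; a larger mean in the "Color" condition
# would be consistent with the Stroop interference effect described above.
mean_pupil = samples.groupby("condition")["left_pupil_mm"].mean()
print(mean_pupil)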

References:

Krejtz, K., Duchowski, A., Niedzielska, A., Biele, C., & Krejtz, I. (2018). Eye tracking cognitive load using pupil diameter and microsaccades with fixed gaze. PLoS ONE, 13(9), e0203629. https://doi.org/10.1371/journal.pone.0203629

Laeng, B., Ørbo, M., Holmlund, T., & Miozzo, M. (2011). Pupillary Stroop effects. Cognitive Processing, 12, 13–21.

Lajante, M., Droulers, O., Dondaine, T., & Amarantini, D. (2012). Opening the “black box” of electrodermal activity in consumer neuroscience research. Journal of Neuroscience, Psychology, and Economics, 5(4), 238–249.

Proctor, R., & Vu, K. (2016). Principles for designing interfaces compatible with human information processing. International Journal of Human–Computer Interaction, 32, 2–22.

 

360 Media https://duqux.com/2022/03/05/360-media/ Sat, 05 Mar 2022 02:15:06 +0000 https://duqux.com/?p=198

Abharan Yadavalli took 360° photographs of the New Broadcast Studio in the Center for Emerging and Innovative Media.

 

Human Factors: Mobile Application Design https://duqux.com/2022/02/26/human-factors-an-authentic-learning-mobile-application-design-project-in-a-higher-education-and-industry-context/ Sat, 26 Feb 2022 21:48:04 +0000 https://duqux.com/?p=5340

This book chapter appeared in Human Factors Issues and the Impact of Technology on Society (Lum, 2021).
URL: https://www.igi-global.com/chapter/human-factors/281748

It was written by the following students and instructor in the Interactive Design program:

E. Cooney, L. Kolber (Instructor), N. Martonik, E. Sekely
Duquesne University, USA

ABSTRACT

Human factors is a critical area of study in higher education. It is integral to applied academic programs such as Interaction Design. In this chapter, the authors begin by reviewing precepts of authentic, “real-world” learning. From a human factors and interaction design viewpoint, they then describe an authentic learning project – a mobile application design – completed by university students in collaboration with a leading global specialty retailer. Specifically, the chapter reviews the following aspects of the project:

  1. benefits and challenges of academic and industry collaborations;
  2. human factors and interaction design processes, methods, and principles used throughout the authentic project;
  3. anthropometric features of the project prototype and their implications for usability;
  4. precepts of cognitive information processing (i.e., human attention, perception, and memory) and their importance for the design and usability of the project’s interface;
  5. insights and lessons learned about the use of authentic learning experiences in teaching human factors and interaction design.

DESCRIPTION OF AUTHENTIC LEARNING PROJECT

A leading global specialty retailer provided students in the Interactive Design program at Duquesne University (Interactive Design Studio course) with a design challenge (project brief): design a smartphone application that alleviates known pain points within the in-store shopping experience associated with a) product finding and browsing, b) product try-on, and c) value maximization. These are authentic problems faced by many retailers who enable customers to augment shopping tasks with technology. Over a four-week period, student teams of 3–4 individuals created designs that they presented to the retailer’s UX team at the conclusion of the semester.
