Emotion-Adaptive Interfaces: Designing AI Systems That Respond to Human Emotional States
Overview
My doctoral research tackles a core challenge in human-AI interaction: how to build AI systems that sense and respond to human emotion proactively, without relying on explicit user commands. I am designing and building a closed-loop prototype that uses EEG signals to drive a generative image model (accessed via API), creating virtual visual environments that adapt in real time to a user's emotional state.
The goal is to move beyond simple relaxation and define design rules for maintaining productive balance: the optimal midpoint between emotional positivity (valence) and mental energy (arousal), where users feel calm yet focused. This work provides a foundational framework for creating more intuitive, trustworthy, and affective human-AI interaction.
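One way to make this target concrete (a minimal illustration with hypothetical normalization, not the study's actual scoring) is to treat the user's state as a point in the valence-arousal plane and track its distance from a calm-yet-focused target point:

import math

def distance_to_target(valence, arousal, target=(0.5, 0.0)):
    # Hypothetical target: moderately positive valence, neutral arousal,
    # with both axes assumed to be normalized to [-1, 1].
    return math.hypot(valence - target[0], arousal - target[1])

# A stressed state (negative valence, high arousal) sits far from the target:
print(round(distance_to_target(-0.4, 0.7), 2))  # 1.14

The closer this distance gets to zero, the closer the user is to the productive balance the system steers toward.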
Research Objectives
This research is structured to deliver actionable insights and tools for UX teams working on adaptive agentic AI. The objectives are designed to translate neuroscientific and HCI theory into a practical workflow protocol for product development:
1. Quantify the Impact of Adaptive AI: Test whether AI-generated, emotion-adaptive imagery is more effective at regulating user state than static or randomly changing visuals, using both EEG correlates of emotion and subjective user feedback.
2. Map Design Parameters to User Emotion: Identify which specific, controllable image-generation parameters—such as color warmth, light intensity, spatial openness, and visual clutter—most reliably and consistently shift metrics of valence and arousal.
3. Deliver an Actionable UX Framework: Synthesize the findings into a set of evidence-based design principles and a testing protocol that UX researchers and designers can use to build, evaluate, and prioritize emotion-adaptive features in AI-supported products.

Examples of emotion-adaptive environments. Each scene gradually changes its lighting, color warmth, and spatial atmosphere to reflect the system’s response to the user’s shifting emotional state, as inferred from EEG readings.

A Closed-Loop Framework for Proactive Interaction
I am building a functional prototype that operationalizes adaptive AI through a continuous, five-stage feedback loop. This end-to-end system moves beyond reactive interfaces to demonstrate how future AI products can sense and respond to users without manual input.
1. Sense: Capture real-time physiological data via an EEG headset, complemented by minimal self-reports for control.
2. Infer: Translate raw EEG signals into metrics of emotional valence and arousal using validated computational models.
3. Generate: Dynamically steer a generative image model (via API) by modifying prompt parameters for lighting, color palette, spatial openness, clutter, and other features.
4. Evaluate: Measure the emotional impact of each adaptation through subsequent EEG readings and user feedback.
5. Refine: Iterate the visual environment toward the target emotional state, creating a continuous, personalized calibration.
This creates a continuous feedback cycle where the system learns which visual parameters most effectively regulate an individual's emotional state, moving beyond a one-size-fits-all approach to a personalized, data-driven experience.
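As an illustration of how these five stages could fit together, the sketch below outlines the loop in Python. It is a simplified outline rather than the study implementation: the EEG reader, adjustment selector, and image generator are stand-in callbacks, and the valence/arousal estimate uses frontal alpha asymmetry and a frontal beta/alpha ratio, two commonly cited proxies, in place of the validated models used in the actual system.

import time
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz

def band_power(channel, lo, hi):
    # Average power of one channel in a frequency band, via a Welch periodogram.
    freqs, psd = welch(channel, fs=FS, nperseg=FS * 2)
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

def estimate_state(eeg):
    # Illustrative proxies only: frontal alpha asymmetry (F4 vs. F3) for valence,
    # frontal beta/alpha ratio for arousal. Real use needs per-user baselining.
    alpha_left = band_power(eeg["F3"], 8, 13)
    alpha_right = band_power(eeg["F4"], 8, 13)
    beta = band_power(np.concatenate([eeg["F3"], eeg["F4"]]), 13, 30)
    valence = np.log(alpha_left) - np.log(alpha_right)
    arousal = beta / (alpha_left + alpha_right)
    return valence, arousal

def run_closed_loop(read_eeg_window, select_adjustments, generate_image,
                    target=(0.5, 0.0), tolerance=0.2, max_iterations=20):
    for _ in range(max_iterations):
        eeg = read_eeg_window(seconds=30)            # Sense
        valence, arousal = estimate_state(eeg)       # Infer
        error = np.hypot(valence - target[0], arousal - target[1])
        if error < tolerance:                        # Evaluate: target state reached
            break
        params = select_adjustments(valence, arousal, target)  # consult the Design Dictionary (below)
        generate_image(params)                       # Generate a refined scene
        time.sleep(10)                               # Refine: let the scene take effect, then re-measure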
From Cognitive Neurosciences to Actionable Design Rules
This research draws from neuroarchitecture and environmental psychology, fields that study how the spatial and sensory characteristics of built environments shape human emotion and physiology. Evidence shows that color temperature, light gradients, spatial proportion, layout, and symmetry directly affect comfort, stress, and focus.
To operationalize this knowledge, I created a Design Dictionary: a data structure that maps psychological intentions (e.g., reduce arousal, increase stability) to specific visual features known to evoke those effects, based on the field's literature.

For example:

Reduce Emotional Arousal → softer light, lower saturation, less visual clutter, reduced spatial depth.

Increase Motivation → centered symmetry, horizontal orientation, stable geometry, window view, indoor plants.

When EEG feedback signals that an adjustment is needed, the system consults the dictionary to select the appropriate parameters, then updates the AI-generation prompt to create a refined visual environment. The process is deliberate rather than random, integrating cognition, perception, and computation in a single adaptive system.
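To make the mapping concrete, here is a minimal Python sketch of what such a dictionary and prompt update could look like. The intention names, feature wording, threshold, and base scene are illustrative placeholders rather than the project's actual vocabulary:

# Hypothetical Design Dictionary: psychological intentions -> visual features.
DESIGN_DICTIONARY = {
    "reduce_arousal": {
        "lighting": "soft, diffuse lighting",
        "saturation": "muted, low-saturation palette",
        "clutter": "minimal furnishings and clear surfaces",
        "depth": "reduced spatial depth",
    },
    "increase_motivation": {
        "symmetry": "centered, symmetrical composition",
        "orientation": "horizontal proportions",
        "geometry": "stable, grounded geometry",
        "view": "window view with daylight",
        "biophilia": "indoor plants",
    },
}

def choose_intention(valence, arousal, target=(0.5, 0.0)):
    # Placeholder decision rule: damp arousal first, then lift motivation.
    if arousal > target[1] + 0.2:
        return "reduce_arousal"
    return "increase_motivation"

def build_prompt(intention, base_scene="photorealistic interior of a quiet study room"):
    # Compose the generation prompt from the base scene plus the mapped features.
    features = DESIGN_DICTIONARY[intention].values()
    return base_scene + ", " + ", ".join(features)

Because every entry traces back to findings from neuroarchitecture and environmental psychology, each prompt change remains auditable rather than being an opaque model decision.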



Figure 1. Conceptual diagram of the proposed closed-loop system. The participant views AI-generated indoor imagery while their emotional and cognitive responses are continuously measured using EEG and self-reports. The system interprets these signals to generate or adapt new imagery, forming a feedback loop that iteratively refines the visual stimulus until indicators of emotional well-being are improved.
Research Methodology 
To move beyond theoretical frameworks and validate the real-world impact of emotion-adaptive interfaces, I've designed a controlled lab study that tests the system's ability to maintain user well-being without compromising task performance.

Participants: 24–36 adults (undergraduate students) in a within-subjects design.

Conditions
Adaptive: The full closed-loop system responding to real-time EEG.
Non-Adaptive: Pre-scripted visual journey (controls for novelty effects).
Baseline: Neutral, static environment.
Tasks: Short cognitive activities under mild time pressure to elicit manageable stress.

Measures & Methodology
I am employing a mixed-methods approach that triangulates physiological, behavioral, and subjective data:
Physiological: EEG-derived indices of emotional valence and arousal.
Performance: Task accuracy and reaction times (to ensure adaptations do not degrade performance).
Subjective: Self-reported affect ratings and structured UX feedback on comfort, usefulness, and perceived control.
Analysis & Success Metrics
Using mixed-effects models, I'll identify which specific visual adjustments most reliably drive users toward target emotional states. Success will be demonstrated by:
Significantly reduced stress markers in the adaptive condition compared to the control conditions.
Maintained or improved task performance, demonstrating that the system supports rather than distracts.
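As a sketch of the planned analysis (assuming a trial-level table with placeholder column names such as participant, condition, and arousal_index), a random-intercept model in Python's statsmodels could look like this:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per participant x condition x trial,
# with columns participant, condition, arousal_index, accuracy, reaction_time.
trials = pd.read_csv("trials.csv")

# Random intercept per participant; condition (adaptive / non-adaptive / baseline)
# is the fixed effect of interest, with the baseline condition as the reference level.
model = smf.mixedlm("arousal_index ~ C(condition, Treatment('baseline'))",
                    data=trials, groups=trials["participant"])
result = model.fit()
print(result.summary())

Refitting the same structure with accuracy or reaction_time as the outcome checks that adaptation does not come at the cost of task performance.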

Figure 3. Diagram of the closed-loop adaptive system. EEG readings and self-reports inform the AI about the user’s stress or arousal level, prompting the generation of new images with adjusted visual features such as lighting, color, and form. The process repeats until the user’s emotional state reaches the target balance.

Applications & Product Impact
This research provides a foundational framework for the next generation of adaptive AI products: systems that sense and respond rather than waiting for explicit text input. The methodology and findings have direct applications across the broader AI product ecosystem:
Agentic AI Trust: How can AI agents adjust their "personality" or visual presentation when they detect user confusion or frustration?
Adaptive UI: Moving beyond static Material Design to interfaces that adjust density, contrast, and pacing based on the user's cognitive load.
Multimodal Interaction: Establishing protocols for how systems should interpret non-verbal signals (stress, gaze, hesitation) to prevent error states.
Immersive Computing: Creating AR/VR environments that dynamically adjust to reduce sickness or maintain engagement during extended use.
Digital Wellbeing & Productivity: Developing tools that actively help users regulate their emotional state, from interfaces that shift to support deep focus to a chatbot that softens its tone when it detects stress.
Across all applications, the core principle remains: move from reactive design to proactive adaptation. The outcome is technology that feels less like a tool and more like a partner capable of supporting both well-being and performance through evidence-based, emotionally intelligent interactions.