Emotion-Adaptive Interfaces: Designing AI Systems That Respond to Human Emotional States

Overview
This project examines how artificial intelligence can enable visual environments that adapt to human emotion in real time. Using EEG signals and self-reported feedback, the system identifies subtle shifts in a user’s emotional balance and adjusts visual imagery to promote a calm yet alert state.
The central goal is not simply to relax the user but to maintain an optimal balance between valence (emotional positivity) and arousal (mental energy). Too little arousal causes disengagement, while too much increases stress. The research seeks to define the adaptive midpoint where users remain both steady and attentive, translating emotion into design behavior through measurable data.

Research Objectives
1. Test whether adaptive imagery reduces stress more effectively than static or pre-scripted visuals.
2. Identify which visual variables (light, color, spatial openness, clutter, window view, and others) consistently influence shifts in valence and arousal.
3. Develop a set of design principles for emotion-adaptive interfaces that preserve engagement while promoting psychological recovery.
Closed-Loop System Design
The proposed framework functions as a continuous feedback loop that senses, interprets, and modifies the visual environment in response to the user’s state.
1. Sense: Record short EEG windows and brief self-reports.
2. Infer: Estimate emotional valence and arousal from EEG and self-report data.
3. Generate: Adjust visual parameters using an image-generation API, modifying prompts that control lighting, color, and spatial composition.
4. Evaluate: Assess emotional change through follow-up EEG and mood reports.
5. Refine: Update the imagery toward the target emotional region and continue the cycle.
Through this loop, the system continuously evolves with the user, offering an adaptive, data-driven model for responsive design.
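To make the cycle concrete, the Python sketch below walks through one sense–infer–generate–evaluate–refine pass. Everything in it is an illustrative assumption: the target region, the random stand-in for sensing, and the prompt-building rules are placeholders, not the system’s actual implementation.

```python
import random

# Assumed target region in valence-arousal space (illustrative values).
TARGET = {"valence": (0.2, 0.8), "arousal": (0.3, 0.6)}

def sense():
    """Stand-in for step 1: a short EEG window plus a brief self-report.

    Real code would return valence/arousal estimates inferred from EEG
    features and self-report items (step 2); here we draw random values.
    """
    return {"valence": random.uniform(-1, 1), "arousal": random.uniform(0, 1)}

def in_target(state):
    """Step 4: evaluate whether the estimated state lies in the target region."""
    return all(lo <= state[k] <= hi for k, (lo, hi) in TARGET.items())

def build_prompt(state):
    """Step 3: translate the estimated state into image-generation parameters."""
    mods = []
    if state["arousal"] > TARGET["arousal"][1]:
        mods += ["softer contrast", "lower saturation", "expanded spatial depth"]
    if state["valence"] < TARGET["valence"][0]:
        mods += ["warmer light", "window view"]
    return "calm indoor scene, " + ", ".join(mods or ["keep current atmosphere"])

def run_session(max_cycles=20):
    """Step 5: refine the imagery and repeat until the target is reached."""
    for cycle in range(max_cycles):
        state = sense()
        if in_target(state):
            print(f"cycle {cycle}: target emotional region reached")
            break
        print(f"cycle {cycle}: regenerate with prompt -> {build_prompt(state)!r}")

run_session()
```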


Figure 1. Conceptual diagram of the proposed closed-loop system. The participant views AI-generated indoor imagery while their emotional and cognitive responses are continuously measured using EEG and self-reports. The system interprets these signals to generate or adapt new imagery, forming a feedback loop that iteratively refines the visual stimulus until indicators of emotional well-being are improved.
Cognitive Foundations
This research draws from neuroarchitecture and environmental psychology, fields that study how spatial and sensory characteristics shape human emotion and physiology. Evidence shows that color temperature, light gradients, spatial proportion, layout, and symmetry directly affect comfort, stress, and focus.
To operationalize this knowledge, I created a Design Dictionary: a data structure that maps psychological intentions (e.g., reduce arousal, increase stability) to specific visual features known to evoke those effects.
For example:
“Reduce arousal” → softer contrast, lower saturation, expanded spatial depth.
“Increase grounding” → centered symmetry, horizontal orientation, stable geometry.
When emotional feedback signals a need for adjustment, the AI references this dictionary, selects appropriate parameters, and generates an updated visual environment. The process is deliberate rather than random, integrating cognition, perception, and computation in a single adaptive cycle.
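A minimal sketch of how such a dictionary might be structured in Python follows, built from the two example mappings above; the key names, the base prompt, and the prompt-assembly step are illustrative assumptions rather than the project’s actual schema.

```python
# Design Dictionary: psychological intentions -> visual features.
# The two entries mirror the examples in the text; the schema is assumed.
DESIGN_DICTIONARY = {
    "reduce_arousal": [
        "softer contrast", "lower saturation", "expanded spatial depth",
    ],
    "increase_grounding": [
        "centered symmetry", "horizontal orientation", "stable geometry",
    ],
}

def prompt_for(intentions, base="quiet indoor environment"):
    """Assemble an image-generation prompt from the selected intentions."""
    features = [f for i in intentions for f in DESIGN_DICTIONARY[i]]
    return base + ", " + ", ".join(features)

# When feedback signals elevated arousal and a need for stability:
print(prompt_for(["reduce_arousal", "increase_grounding"]))
```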
Figure 2. Examples of emotion-adaptive environments. Each scene gradually changes its lighting, color warmth, and spatial atmosphere to reflect the system’s response to the user’s shifting emotional state, as inferred from EEG readings.
Experimental Plan
Participants: 24–36 adults.
Conditions: adaptive (closed-loop), non-adaptive (fixed), and baseline (neutral).
Tasks: short cognitive activities under mild time pressure to elicit manageable stress.
Measures:
- EEG indices of valence and arousal (one possible computation is sketched after this list)
- Self-report affect ratings
- Task accuracy and reaction times
- UX feedback on usefulness, comfort, and perceived control
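As one way the EEG indices might be computed: frontal alpha asymmetry is a commonly used proxy for valence, and a beta/alpha power ratio is a common proxy for arousal. The sketch below, assuming raw F3/F4 channel arrays and a 256 Hz sampling rate, illustrates both with SciPy; the channel choice and band limits are conventional assumptions, not the study’s final pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Integrate the Welch power spectral density over [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

def affect_indices(f3, f4, fs=256):
    """Proxy indices from left (F3) / right (F4) frontal channels.

    Valence: frontal alpha asymmetry, ln(right alpha) - ln(left alpha).
    Arousal: beta power relative to alpha power across both channels.
    Alpha = 8-13 Hz, beta = 13-30 Hz (conventional band limits).
    """
    alpha_l = band_power(f3, fs, 8, 13)
    alpha_r = band_power(f4, fs, 8, 13)
    beta = band_power(f3, fs, 13, 30) + band_power(f4, fs, 13, 30)
    return np.log(alpha_r) - np.log(alpha_l), beta / (alpha_l + alpha_r)

# Synthetic 30-second windows stand in for real recordings:
rng = np.random.default_rng(0)
valence, arousal = affect_indices(rng.standard_normal(30 * 256),
                                  rng.standard_normal(30 * 256))
```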
Analysis: Mixed-effects models will compare emotional and behavioral outcomes across conditions, identifying which visual adjustments most reliably move users toward target states. Success will be demonstrated by reduced stress and stable task performance under adaptive feedback.
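As a sketch of the planned analysis, the snippet below fits a random-intercept mixed-effects model with statsmodels, assuming a long-format table with one row per trial; the file name and column names (participant, condition, stress) are placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per trial, with columns
# participant, condition (adaptive / fixed / baseline), and stress.
df = pd.read_csv("trials.csv")  # placeholder file name

# Condition as a fixed effect (baseline as the reference level),
# with a random intercept for each participant.
model = smf.mixedlm("stress ~ C(condition, Treatment('baseline'))",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```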

Figure 3. Diagram of the closed-loop adaptive system. EEG readings and self-reports inform the AI about the user’s stress or arousal level, prompting the generation of new images with adjusted visual features such as lighting, color, and form. The process repeats until the user’s emotional state reaches the target balance.

Applications and Impact

The broader vision extends beyond laboratory research. Emotion-adaptive systems could support:
- Immersive environments in VR and AR that respond to real-time mood.
- Wellbeing and productivity tools that balance calmness and concentration.
- Interface personalization that adapts pacing, contrast, or color to emotional context.
Across all cases, the objective remains consistent: design intelligent environments that sense emotional cues and respond through measured, evidence-based visual changes. The outcome is technology that feels less mechanical and more empathetic, capable of supporting wellbeing and performance simultaneously.