UI Remix Achieves Faster Mobile UI Design through Interactive Example Adaptation

Designing effective user interfaces presents a significant challenge, particularly for those lacking specialist design skills, as articulating intent and confidently selecting appropriate choices can prove difficult. Researchers Junling Wang, Hongyi Lan, Xiaotian Su, and April Yi Wang of ETH Zurich, together with Mustafa Doga Dogan of Adobe Research, address this problem with UI Remix, a novel system enabling iterative design through interactive example retrieval and adaptation. Unlike current tools that either overwhelm users with options or limit them to single-example modification, UI Remix leverages a multimodal retrieval-augmented generation model to facilitate both broad exploration and focused refinement at component and interface levels. Crucially, the system builds user trust by providing transparency regarding example sources, including ratings and developer details. A study with 24 participants demonstrated that UI Remix significantly improved design goal achievement and iteration effectiveness, and encouraged broader design exploration. This work therefore points towards exciting new avenues for empowering end-users with greater control, confidence, and creativity in UI design.

The research addresses a critical challenge: individuals lacking design expertise often struggle to articulate their vision and to trust design choices made by AI-powered tools. Existing example-based systems typically either overwhelm users with broad exploration or restrict creativity by requiring adaptation of a single example, potentially leading to design fixation. The system features three key panels: a Conversation Panel for interaction, an Example Gallery displaying retrieved UIs, and an Editable Canvas for live preview and modification. Participants reported a marked increase in confidence when adapting examples, directly attributing this to the readily available source transparency cues. The study reveals that providing contextual information about the origin and reception of UI examples is a powerful mechanism for building trust and empowering users to take ownership of their designs. This work establishes a new direction for AI-assisted design systems, moving beyond simple generation towards a collaborative process in which users maintain control, trust, and openness to creative exploration.

The core innovation lies in the integration of multimodal retrieval-augmented generation (MMRAG) with a user-centric interface that supports both global (whole interface) and local (component) adaptation of examples. This granular control allows users to remix designs at different levels of abstraction, avoiding the pitfalls of both overwhelming exploration and restrictive single-example adaptation. Furthermore, the inclusion of source transparency cues (ratings, download counts, and developer information) represents a significant contribution to the field of explainable AI in design, addressing the critical need for trust and accountability in AI-assisted creative tools. The findings suggest that future systems should prioritise not only the generation of aesthetically pleasing interfaces but also the provision of contextual information that empowers users to understand, evaluate, and confidently adapt those designs.
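The paper does not publish its internal data model, but the global/local distinction described above might be captured with a structure like the following minimal sketch. The `RemixRequest` and `RemixScope` names are illustrative assumptions, not identifiers from the paper.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RemixScope(Enum):
    """Level of abstraction at which an example is remixed."""
    GLOBAL = "interface"   # adapt the whole interface
    LOCAL = "component"    # adapt a single component in place


@dataclass
class RemixRequest:
    """Hypothetical request object for one remix iteration."""
    query: str                    # the user's natural-language intent
    example_id: str               # the retrieved example being adapted
    scope: RemixScope             # global vs. local adaptation
    target: Optional[str] = None  # component to edit when scope is LOCAL


# A global remix replaces the whole canvas; a local remix edits one component.
whole_ui = RemixRequest("a minimalist checkout screen", "ex_042", RemixScope.GLOBAL)
one_part = RemixRequest("a stylish red button", "ex_042",
                        RemixScope.LOCAL, target="submit_button")
```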
The study measured user performance with 24 end users, finding that the system facilitated effective iteration and encouraged exploration of alternative designs. Participants achieved improved design outcomes, with the system’s adaptive capabilities allowing flexible modification of both entire interfaces and individual components. The data show that the MMRAG model accurately ranked UI examples by semantic similarity to user queries, displaying results in an Example Gallery alongside crucial traceability metadata.

This metadata included ratings, download counts, comment counts, and categories, providing users with valuable context for assessing example credibility and relevance. Researchers recorded that source transparency cues, such as ratings and developer information, enhanced participants’ confidence in adapting examples. The system’s technical architecture relies on a pipeline processing UI screenshots from the Mobbin, Interaction Mining, and MobileViews datasets, yielding around 900 unique interface screenshots for the example set. These screenshots, along with manually collected metadata from the Google Play Store for 196 popular mobile applications, were processed and converted into embeddings for efficient retrieval.
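As a rough illustration of such an indexing stage, the sketch below encodes each screenshot (plus its app-store metadata) into a unit vector with an off-the-shelf CLIP encoder from the sentence-transformers library. The paper does not specify which encoder UI Remix uses, so the model checkpoint, the metadata field names, and the naive image-plus-text fusion are all assumptions made for the example.

```python
# Minimal indexing sketch: embed UI screenshots and metadata for retrieval.
# Assumes `pip install sentence-transformers pillow numpy`.
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # assumed encoder, not from the paper

examples = [
    # (screenshot path, metadata collected from the app store)
    ("shots/checkout.png", {"app": "ShopApp", "rating": 4.6,
                            "downloads": "10M+", "category": "Shopping"}),
    ("shots/login.png",    {"app": "ChatApp", "rating": 4.2,
                            "downloads": "50M+", "category": "Social"}),
]

vectors, records = [], []
for path, meta in examples:
    img_emb = model.encode(Image.open(path))          # image embedding
    txt_emb = model.encode(f"{meta['category']} UI")  # coarse text signal
    vec = img_emb + txt_emb                           # naive fusion, for the sketch only
    vectors.append(vec / np.linalg.norm(vec))         # store unit vectors
    records.append({"path": path, **meta})            # keep traceability metadata

index = np.stack(vectors)  # (n_examples, dim) matrix queried at retrieval time
```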

Cosine similarity was used to compare query embeddings with stored UI embeddings (sketched below), ensuring relevant examples were presented to the user. Tests show that the system can seamlessly switch between “apply” and “edit” modes, displaying live UI previews on an Editable Canvas and allowing users to inspect or refine the layout. Local remix functionality allows refinement of specific components, triggered by natural language queries like “a stylish red button”, with variations displayed alongside their metadata. The result is a fine-grained, component-level editing experience that enables efficient iteration without requiring a complete design restart, a significant technical accomplishment. In short, the system leverages multimodal retrieval-augmented generation to let users iteratively search for, select, and adapt existing UI examples for their own designs. The authors acknowledge a limitation in the current system’s reliance on user-initiated actions, suggesting that future work could explore integrating proactive, agentic mechanisms to suggest examples and refinements based on user behaviour. Further research could also extend support to more complex UI flows and investigate long-term use in real-world collaborative settings.
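Continuing the indexing sketch above, retrieval then reduces to a cosine-similarity ranking of the query embedding against the stored unit vectors, with top-ranked examples surfaced alongside their metadata, as in the gallery the paper describes. This reuses the hypothetical `model`, `index`, and `records` from the previous block and illustrates the stated mechanism, not the authors' code.

```python
def retrieve(query: str, k: int = 3):
    """Rank stored UI examples by cosine similarity to a text query."""
    q = model.encode(query)
    q = q / np.linalg.norm(q)           # unit-normalise the query embedding
    scores = index @ q                  # dot product of unit vectors == cosine
    top = np.argsort(scores)[::-1][:k]  # highest similarity first
    # Return each hit with its traceability metadata for the gallery.
    return [(float(scores[i]), records[i]) for i in top]


# Example: the local-remix query from the paper, ranked against the index.
for score, meta in retrieve("a stylish red button"):
    print(f"{score:.3f}  {meta['app']}  rating={meta['rating']}  ({meta['path']})")
```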

👉 More information
🗞 UI Remix: Supporting UI Design Through Interactive Example Retrieval and Remixing
🧠 ArXiv: https://arxiv.org/abs/2601.18759
