KAIST Develops AI Tool That Collaborates with Musicians in Songwriting, Winning the Best Paper Award at ACM CHI

KAIST and Carnegie Mellon University researchers have developed Amuse, an AI system designed to assist music creators by converting text, images, or audio into chord progressions. The interactive tool allows users to integrate and modify AI suggestions, fostering a collaborative, creative process. Using a large language model combined with rejection sampling, Amuse keeps the user at the center of the creative process while filtering out unnatural results. Evaluated with musicians, it demonstrated potential as a creative companion. Presented at the ACM CHI conference in Yokohama, Japan, Amuse received the Best Paper Award for its innovative approach to human-AI collaboration in music composition.

Amuse is an AI-based music creation support system developed by researchers at KAIST in collaboration with Carnegie Mellon University. Designed to act as a creative companion, Amuse assists musicians by converting various forms of inspiration—such as text, images, or audio clips—into harmonic structures like chord progressions. This innovative approach enables users to explore different musical directions and overcome creative blocks.

The system’s functionality is rooted in its ability to interpret user inputs and generate corresponding musical elements. For instance, if a user provides a phrase like “memories of a warm summer beach,” Amuse processes this input and suggests chord progressions that align with the described imagery or emotion. This feature allows users to experiment with new sounds and ideas without being constrained by traditional composition methods.
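To make the idea concrete, here is a minimal sketch of the text-to-chords step. Everything in it is hypothetical: the mood keywords, the progressions, and the `suggest_chords` function are illustrative stand-ins, since in Amuse this mapping is performed by a large language model rather than a lookup table.

```python
# Hypothetical illustration only: these names and mappings do not come
# from the Amuse paper; they sketch the idea of turning a textual
# prompt into a chord progression.
MOOD_PROGRESSIONS = {
    "warm": ["Cmaj7", "Am7", "Fmaj7", "G7"],
    "melancholy": ["Am", "F", "C", "G"],
    "tense": ["Cm", "Ab", "Bb", "G7"],
}

def suggest_chords(prompt):
    """Return a progression whose mood keyword appears in the prompt,
    falling back to the 'warm' progression when nothing matches."""
    words = prompt.lower().split()
    for mood, progression in MOOD_PROGRESSIONS.items():
        if mood in words:
            return progression
    return MOOD_PROGRESSIONS["warm"]

print(suggest_chords("memories of a warm summer beach"))
```

An LLM replaces the dictionary lookup with a learned mapping, which is what lets Amuse respond to open-ended prompts instead of a fixed keyword list.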

Amuse distinguishes itself from existing generative AI tools through its emphasis on collaboration rather than automation. Its interactive method ensures that users can modify and refine AI-generated suggestions according to their needs, fostering a collaborative relationship between human intuition and computational power.

Amuse combines large language models (LLMs) with rejection sampling to generate musical ideas. LLMs are trained on vast amounts of text data, enabling them to understand patterns in language and apply this understanding to creative tasks. In the context of music creation, these models can interpret textual descriptions of emotions or scenes and translate them into musical structures.

Rejection sampling is a technique for refining AI-generated outputs by discarding less relevant or musically unnatural suggestions before they reach the user. This iterative process ensures that the final output aligns closely with the user’s creative vision.
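The filtering loop described above can be sketched as follows. This is a toy example under stated assumptions: the candidate pool, the diatonic acceptance test, and the function names are all invented for illustration, whereas in Amuse the candidates come from an LLM and the acceptance criteria are the system's own.

```python
import random

# Hypothetical candidate pool standing in for LLM-generated suggestions.
CANDIDATES = [
    ["C", "G", "Am", "F"],    # common diatonic progression in C major
    ["C", "F#", "B", "Eb"],   # harmonically erratic; likely rejected
    ["Am", "F", "C", "G"],
    ["C", "H", "X", "F"],     # contains invalid chord symbols
]

C_MAJOR_DIATONIC = {"C", "Dm", "Em", "F", "G", "Am", "Bdim"}

def is_natural(progression):
    """Toy acceptance test: keep progressions diatonic to C major."""
    return all(chord in C_MAJOR_DIATONIC for chord in progression)

def rejection_sample(candidates, accept, max_tries=100, seed=0):
    """Draw candidates at random, discarding any the filter rejects."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = rng.choice(candidates)
        if accept(candidate):
            return candidate
    return None  # no acceptable candidate within the budget

suggestion = rejection_sample(CANDIDATES, is_natural)
print(suggestion)
```

The key design point is that the generator and the filter are decoupled: the model proposes freely, and only proposals that pass the acceptance test are shown to the musician.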

Amuse has been validated through extensive user studies involving musicians from diverse backgrounds. Participants were asked to use the system for various creative tasks, including composing melodies, generating chord progressions, and exploring new musical ideas. The results demonstrated that Amuse effectively supports the creative process without compromising artistic control or quality.

Musicians reported that the system’s ability to translate abstract concepts into concrete musical structures significantly enhanced their creativity, and that its interactive design let them refine AI-generated suggestions to fit their specific needs, giving the exchange the feel of genuine collaboration.

Amuse represents a significant advancement in the field of music technology by demonstrating the potential of collaborative AI systems. The system’s ability to translate text-based descriptions into musical structures opens new avenues for creative expression, particularly for musicians seeking inspiration or exploring unconventional sounds. Amuse reflects Professor Sung-Ju Lee’s vision for creator-centric AI development, which emphasizes the importance of maintaining human agency in the creative process. By designing systems that augment rather than replace human creativity, Lee aims to empower musicians and other artists with tools that enhance their ability to express themselves.

The success of Amuse suggests that future tools could further enhance music production by integrating advanced computational methods with human creativity. This approach preserves artistic control and expands the possibilities for innovation in music composition and production.

The interactive method employed by Amuse is a direct realization of this vision: users remain free to accept, modify, or discard the system’s suggestions, preserving artistic control while opening new avenues for creative exploration.

In conclusion, Amuse represents a significant step forward in the development of AI tools for music creation. By combining advanced computational methods with an emphasis on human agency, the system offers musicians a powerful tool for exploring new sounds and ideas while maintaining control over their creative vision.


Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But Quantum occupies a special space. Quite literally a special space. A Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.
