RAISE (Research on Artificial Intelligence in Sound and Musical Expression)


The RAISE research project explores the interactions and relationships between artificial intelligence (AI), sound, and musical expression. This includes both the investigation of already established machine learning frameworks and AI-based applications in the realm of sound generation and processing, and the development of specialized tools for AI-assisted audio signal manipulation.

The primary objective of this research project is to make AI-based technologies accessible and usable for sound creators of various competency levels, particularly those with little or no prior knowledge of machine learning or programming. Various entry points for stakeholders from different domains of audio production are identified and offered to facilitate access to the often complex realm of AI technologies. For this purpose, the research team collects, analyzes, tests, and parameterizes a wide array of current AI technologies in the sound domain. Various interactive and generative systems, audio plug-ins, neural networks for audio classification, processing, and synthesis, as well as AI-based composition methods and web-based applications, are examined for their technical, functional, ethical, and socio-cultural characteristics. The outcomes are subsequently cataloged and consolidated into an interactive database, which will serve as a freely available web application, creating a central entry point for creatives interested in AI.


Prof. Alexander Oppermann


Research assistants

Bastian Kämmer

Joscha Berg

Zeno Lösch

Maciej Medrala

Charlotte Simon

Johanna Teresa Wallenborn

Student assistants

Sara Volpe

Elisa Deutloff

Robin Stern


Electronic Media


Building upon the knowledge gained through the examination of established AI frameworks and tools within the sound domain, AI-based audio applications are being developed as an integral component of the research project. This predominantly empirical part of the research aims to foster a better understanding of crucial parameters, questions, and criteria relevant to the development of AI-driven audio software. In addition, critical elements of dataset creation, documentation, and the development process are examined to determine how they can be configured ethically, fairly, and transparently.

The project adopts a participatory approach by actively involving artists and creatives from various sectors of the music scene in the development process. This makes it possible to take their perspectives and requirements into account, and to assess how the applications developed can add value to their creative practice. This approach is supported by a feedback loop: an online survey is used to collect experiences and viewpoints on working with AI, and these insights are fed back into the research and development processes. The results of the research and the applications developed in-house will be made available to survey participants in an evaluation phase. This allows for further adjustments and for validation of whether, and to what extent, the frameworks and applications studied and developed have been made accessible.

Another focal point is the documentation of the research process, which includes the creation of proprietary datasets and meticulous documentation of all utilized data. The datasets employed in the project are captured using the "Datasheets for Datasets" approach (Gebru et al. 2018, https://arxiv.org/abs/1803.09010) and are published alongside the research results to ensure traceability and transparency.
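To illustrate what capturing a dataset in this way can look like, below is a minimal, machine-readable sketch of a datasheet record following the section headings proposed in "Datasheets for Datasets" (Gebru et al. 2018). The field names and example values are illustrative assumptions, not the project's actual schema.

```python
# Minimal sketch of a "Datasheets for Datasets" record (Gebru et al. 2018).
# Field names follow the paper's section headings; values are hypothetical.
from dataclasses import dataclass, asdict


@dataclass
class Datasheet:
    motivation: str          # Why was the dataset created?
    composition: str         # What do the instances represent?
    collection_process: str  # How was the data acquired, and by whom?
    preprocessing: str       # Cleaning, labeling, feature extraction
    uses: str                # Intended and discouraged uses
    distribution: str        # License, access, publication channel
    maintenance: str         # Who maintains it, how to report issues


sheet = Datasheet(
    motivation="Training data for an AI-assisted audio effect (example).",
    composition="Short monophonic audio clips with instrument labels.",
    collection_process="Recorded in-house; all contributors consented.",
    preprocessing="Resampled to 44.1 kHz, peak-normalized.",
    uses="Research on audio classification; not for voice identification.",
    distribution="Published alongside the research results.",
    maintenance="Maintained by the research team; contact via project site.",
)

# Serialize to a plain dict, e.g. for export as JSON alongside the dataset.
record = asdict(sheet)
print(sorted(record.keys()))
```

Keeping the datasheet as structured data rather than free text makes it straightforward to publish it together with the dataset and to check that every required section has been filled in.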


The project is funded by hessian.AI.