Ethical AI Tools for Musicians and Sound Designers
Alayna Hughes
This talk introduces musicians and sound designers to a range of ethical, AI-powered tools for creating original sounds and music. With a focus on creative experimentation rather than imitation, it explores how machine learning can enhance sonic practice without relying on exploitative datasets or generative models such as Suno or Udio.
The session begins with an accessible overview of key concepts—neural networks, latent space, and audio synthesis—followed by a discussion of the ethical considerations surrounding AI in music. It then moves into practical demonstrations of tools including RAVE, DDSP, and Jukebox via Google Colab, alongside VST plugins such as RAVE VST and Datamind, designed for AI-driven sound transformation and synthesis.
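As a taste of what the demonstrations cover, here is a minimal sketch of timbre transfer with a pretrained RAVE model, assuming a TorchScript export of the kind published by IRCAM and placeholder file names (rave_model.ts, input.wav) rather than materials from the talk itself:

```python
# Minimal sketch: run a sound through a pretrained RAVE model's latent space.
# Assumes torch and torchaudio are installed and a TorchScript RAVE export
# (e.g. from the IRCAM model zoo) has been downloaded locally.
import torch
import torchaudio

model = torch.jit.load("rave_model.ts")        # hypothetical model filename

# Load audio, mix down to mono, and reshape to (batch, channels, samples).
waveform, sr = torchaudio.load("input.wav")    # hypothetical audio filename
x = waveform.mean(dim=0, keepdim=True).unsqueeze(0)

with torch.no_grad():
    z = model.encode(x)    # project the sound into the model's latent space
    # Latent manipulations (interpolation, bias, added noise) would go here.
    y = model.decode(z)    # resynthesize audio from the (edited) latent codes

torchaudio.save("output.wav", y.squeeze(0), sr)
```

In practice the input would also be resampled to the model's native sample rate; the point of the sketch is simply that encode/decode gives direct, hands-on access to the latent space discussed in the first part of the session.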
Real-world use cases, including unreleased material from the speaker’s upcoming album, will illustrate how these tools can be woven into professional workflows. Attendees will leave with an understanding of how to integrate AI ethically into their own creative processes, plus resources to continue exploring after the talk.
About Alayna Hughes
Alayna Hughes is a music and interaction technologist, artist, and musician. Originally from the US, she lives in Valencia, where she creates new work and gives occasional workshops.
In her solo practice, she focuses on music combined with visuals and on creating interactive performances and installations. Over the past several years she has also moved further into VR and game development, with an interest in combining real-world interaction with experimental creations.
She also heads Curiosibot, a partnership with her husband Pier, through which they create interactive projects, custom controllers, and instruments, and are closely involved in the Maker community.