TY - CONF
T1 - No tracks or graphs? Designing sound-based educational audio workstations
AU - Pearse, Stephen
N1 - Biography.
Dr Stephen Pearse is a Senior Lecturer in Music Computing at the University of Portsmouth. He teaches units in C++ audio programming and digital sound design in Max/MSP, alongside music and sound synthesis using modular hardware and software.
Stephen is the software developer and engineer of the Compose with Sounds and Compose with Sounds Live Digital Audio Workstations. He is also an active composer of electroacoustic music.
PY - 2019/7/5
Y1 - 2019/7/5
AB - The Compose with Sounds project was set up by a network of academics and teachers across the EU with the goals of increasing exposure to sound-based/electroacoustic practice in secondary schools and creating provisional tools, with supporting teaching materials, to further enhance usage of and exposure to music technology amongst teenagers. This talk will present two large software tools designed as part of this ongoing project: a new digital audio workstation entitled Compose with Sounds (CwS), alongside a networked environment for experimental live performance, Compose with Sounds Live (CwS Live). Both are scheduled for free distribution in August 2019. Unlike traditional audio workstations, which are track or graph based, these tools and the interactions within them are based on sound objects. The talk will present the trials and tribulations of developing these tools and the complex technical and UX dichotomies that emerged when involving academics, teachers and students as active participants in the development process. The core Compose with Sounds DAW utilises imagery and dynamic 3D animations to help foster an engaging educational experience when using the DSP effects provided. Throughout its development it was used by numerous schools across the EU and has been presented to numerous educational authorities. The software, and the curriculum developed alongside it by the wider project team, is currently scheduled for inclusion in the national music curriculum of two EU states (Cyprus and Greece), with discussions with further authorities planned. Audio software tools designed to run in school classrooms (where the likelihood of access to high-powered computers is small) require careful audio and UX optimisations so that they act as stepping stones to more industrial workstations. The talk will discuss a collection of the unique audio optimisations that had to be made to enable the creation of these tools while maintaining minimal audio latency and jitter. Alongside this, it will present various approaches and concessions that had to be made to empower students to move on to more traditional track-based workstations.
Talk outline: the talk will be broken down into four sections: exploring the requirements of pedagogical audio tools; designing an approachable UX for sound-based music; audio optimisations required to enable true sound-based interactions; and the dichotomies of designing sound-based tools that empower users to subsequently utilise track or graph-based workstations. A more detailed breakdown can be found below. Across these, recommendations and tales of woe related to the software's development will be discussed in a broad and open manner to aid the wider audio developer community who may be interested in developing educational audio tools.
1) Exploring the requirements of pedagogical audio tools. What are the core requirements for audio software if it is to be used in school classrooms? The changing dynamic in accessible music technology in schools.
2) Designing an approachable UX for sound-based music. What is sound-based music? What are the theoretical requirements for designing an original sound-based DAW, and how might they differ from those of commercial tools? Designing an audio sequencer workflow without tracks. Designing sound editing and transformation tools. Designing visualisations of commonplace audio effects.
3) Audio optimisations required to enable true sound-based interactions. Designing an object-based audio engine in C++. Consideration of object-based optimisations that could be made in such tools. Presentation of a handful of optimisations that were made in the tools and how they evolved from ongoing student testing.
4) The dichotomies of designing sound-based tools that empower users to move to track-based workstations. Issues surrounding the language and terminology used when discussing audio workstations amongst broad audiences with a variety of skill levels. Can we have tracks but not have tracks? CwS-specific audio dichotomies that emerged from extensive testing. Designing constraints to encourage exploration within audio software and externally.
UR - https://adc19.sched.com/
UR - https://www.youtube.com/watch?v=kyALim8FSew&list=PLe2skUvADfhswhY0DaUM2b744Acwnvch0&index=14
M3 - Paper
T2 - Audio Developer Conference (ADC)
Y2 - 18 November 2019 through 20 November 2019
ER -