ACTOR Project Funding aimed to support innovative research and pilot projects led by ACTOR members across various Workgroups. These projects were designed to foster external funding opportunities or serve as a foundation for independent research.
Funding was allocated to three main categories:
- Strategic Projects: Research-focused initiatives.
- Research-Creation Projects: Combining creative practice with scholarly research.
- Collaborative Student Projects: Interdisciplinary efforts involving students from two different institutions.
Key outcomes included joint publications, new modules for the Timbre and Orchestration Resource (TOR), public presentations of ACTOR research, and the creation, premiere, or recording of musical compositions. Priority was given to interdisciplinary projects aligned with the ACTOR project’s mandate.
Below is a list of all funded projects, each linking to its abstract and detailing the researchers involved. Every project was led by a Principal Investigator (PI) and could include external collaborators. Additionally, we have included Partner Projects, which were related initiatives supported by external funding.
Strategic Projects
Research-Creation Projects
| Project Title | Researchers Involved | Date of Funding |
|---|---|---|
| ODESSA IV – New Orchestra Recordings, Measurements, Systematic Analysis | Malte Kob et al. | Sep. 2020 |
| Orchestration for the String Quartet | Robert Hasegawa et al. | Sep. 2020 |
| The Hybrid Space in Composition | Pierre Michel et al. | Sep. 2020 |
| Musicians Auditory Perception Project (MAP) | Shahrokh Yadegari et al. | Sep. 2020 |
| A Geometry of Sound and Music – Short Geometric Pieces | Gilbert Nouno & Luis Naón | Apr. 2021 |
| Album for Guitar and Electronics – Recording 3D audio with 3D video | Caroline Traube et al. | Apr. 2021 |
| Computer-Assisted Orchestration: Machine Learning, Creation, and Orchestration Pedagogy | Gilbert Nouno et al. | Sep. 2021 |
| Space as Timbre (SAT) | Robert Hasegawa et al. | Sep. 2021 |
| Evaluating Vocality in Orchestrated & Mixed Works | Caroline Traube et al. | Sep. 2022 |
Collaborative Student Projects
| Project Title | Researchers Involved | Date of Funding |
|---|---|---|
| Masque de fer – Extended Drum Kit Techniques Module | Martin Daigle & Gabriel Couturier | May 2021 |
| Investigation of Choral Blending: Soundfield Capture, Acoustic Evaluation, and Perceptual Analysis Methods | Ying-Ying Zhang & Jithin Thilakan | May 2021 |
| Sounding the Interaction of Cultures: Orchestration Techniques and Perceptual Effects | Lena Heng & Mengqi Wang | May 2021 |
| Speech as Timbre Models for Orchestration – A Comparative Study Between Cantonese and Québécois French | Darren Xu & Louis-Michel Tougas | May 2022 |
| Timbrenauts: Creative Explorations in Timbre Space | Yuval Adler & Bern Schneider | May 2022 |
| Ulezo – Mapping Acoustic Properties to Timbre Descriptors in Zambian Luvale Drum Tuning | Jason Winikoff & Lena Heng | Mar. 2023 |
| Real-time Timbral Analysis for Musical and Visual Augmentation | Martin Daigle & Pauline Patie | Mar. 2023 |
| Timbral, Textural, and Rhythmic Stratification in Footwork Percussion | Jeremy Tatar & Victor Burton | Mar. 2023 |
Partner Projects
Orchestration and Perception Project
The Orchestration and Perception Project seeks to create a psychological foundation for a theory of orchestration practice based on perceptual principles associated with musical timbre. It involves an international collaboration between McGill University, Ircam-Centre Pompidou, and the Haute école de musique de Genève. Its four thematic research axes are:
- the role of timbre in instrumental fusion and in the differentiation of musical voices,
- its role in the creation of musical structures,
- the perception of orchestral gestures as meaningful units in a musical discourse, and
- the historical evolution of orchestration techniques across epochs.
Each theme will be addressed through several complementary methods:
- analyzing orchestration treatises,
- analyzing musical scores and cataloguing and classifying orchestral effects,
- automated mining of symbolic digital representations of scores,
- creating sonic renderings of scores in an orchestral rendering environment that allows several versions (original and reorchestrated) to be compared in order to test specific hypotheses,
- conducting perceptual tests on orchestral effects,
- integrating the results into a theory of orchestration, and
- transferring the acquired knowledge to computer-aided orchestration systems and to the development of new pedagogical tools for orchestration.
This Project was part of the foundation of, and evolved into, the ACTOR Project.
e-Orch
Research into electronic orchestration (e-Orch) at the Haute école de musique Genève – Neuchâtel
Orchestration can be described from several angles. It is most often presented as the art of writing, from symbolic data, musical pieces for several instruments by combining them with each other, with ensembles of variable size. From this we can deduce that orchestration exploits instrumental timbres with the aim of producing orchestral effects; yet the very act of mixing the timbral and acoustic properties of instruments also falls within the domain of signal processing. This dual symbolic/signal view should not be seen as an opposition but as an illustration of the complexity of the practice: it lies at the heart of the relationship between art and science, two disciplines whose intersection is particularly relevant today.
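The symbolic/signal duality can be illustrated with a minimal sketch (a hypothetical example, not taken from the e-Orch project): two instrument-like tones are described symbolically by pitch and a relative harmonic spectrum, while their orchestral blend is, at the signal level, simply a sample-wise sum.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def harmonic_tone(f0, amps, dur=1.0, sr=SR):
    """Synthesize a tone from a fundamental f0 (Hz) and relative harmonic amplitudes."""
    t = np.arange(int(dur * sr)) / sr
    sig = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t) for k, a in enumerate(amps))
    return sig / np.max(np.abs(sig))  # normalize to [-1, 1]

# Symbolic description: two invented "instruments", each a pitch plus a spectrum.
bright_tone = harmonic_tone(440.0, [1.0, 0.9, 0.6, 0.3])  # rich in upper partials
pure_tone   = harmonic_tone(440.0, [1.0, 0.2, 0.05])      # near-sinusoidal

# Signal-level view: the timbral blend of the two tones is just addition.
blend = 0.5 * (bright_tone + pure_tone)
```

The symbolic layer decides *what* to combine (pitches, spectra, instruments); the signal layer is where the combined timbre actually arises.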
MAKIMOno
Multimodal analysis and knowledge inference for musical orchestration (MAKIMOno) [NSERC (Canada), ANR (France)]
This project brings together IRCAM-CNRS-Sorbonne Université, McGill University, and OrchPlayMusic, Inc. to address scientifically one of the most complex aspects of music: the use of timbre—the complex set of tone colours that distinguish sounds emanating from different instruments or their blended combinations—to shape music through various modes of orchestration. This first-of-its-kind project will lead to the creation of information technologies for human interaction with digital media that will radically change orchestration pedagogy, provide better tools for the computer-aided interactive creation of musical content, and lead to a better understanding of perceptual principles underlying orchestration practice.
VRACE: Virtual Reality Audio for Cyber Environments
ACTOR partners at the Detmold University of Music are part of the EU-funded research project VRACE: “The ITN project ‘VRACE – Virtual Reality Audio for Cyber Environments’ establishes a multidisciplinary network that will train the next generation of researchers in the audio part of virtual and augmented reality. The aim is to raise Virtual / Augmented Reality to a next level beyond gaming and entertainment by benefiting from the critical mass of expertise gathered in this distinguished consortium.”
The Music Performance Markup Format (MPM)
A musical performance of symbolic music data (e.g., a score, MusicXML, or MEI) comprises all of the transformations necessary to make the music sound. This includes the temporal order of sound events as well as their specific execution. The Music Performance Markup format (MPM) is dedicated to describing and modeling musical performances in great detail, in the manner of a construction kit. It comes packed with a series of performance features from several domains, including the following:
- Timing features: tempo (including discrete and continuous tempo changes), rubato, asynchrony, and random/non-systematic deviations from precise timing;
- Dynamics features: macro dynamics (including discrete and continuous dynamics changes), metrical accentuation, and random/non-systematic deviations from precise dynamics;
- Articulation features: absolute and relative modifications of a tone's duration, dynamics, timing (e.g., agogic accents), and intonation, as well as random/non-systematic variations of tone duration and intonation.
Each feature is designed on the basis of a mathematical model derived from empirical performance research. These models do not merely reproduce the typical characteristics of their respective features: two musicians may perform the same feature (say, an articulation, a crescendo, or a ritardando) very differently. Thus, the models are also equipped with expressive parameters that can recreate the whole bandwidth of such variations.
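As a rough illustration of such a parametric model (an assumed linear-ritardando sketch, not MPM's actual formulas or schema), a continuous tempo change can be modeled as a function mapping beat positions to onset times, where the start and end tempi act as expressive parameters that distinguish one performer's rendering from another's:

```python
import numpy as np

def ritardando_onsets(beats, bpm_start, bpm_end):
    """Map beat positions to onset times (seconds) under a linear tempo change.

    bpm_start and bpm_end are the expressive parameters: two performers can
    render the 'same' ritardando with very different start and end tempi.
    """
    beats = np.asarray(beats, dtype=float)
    span = beats[-1] - beats[0]
    onsets = [0.0]
    for b0, b1 in zip(beats[:-1], beats[1:]):
        # Tempo interpolated linearly over the beat span; use the average
        # tempo across each inter-onset interval (trapezoidal approximation).
        bpm0 = bpm_start + (b0 - beats[0]) / span * (bpm_end - bpm_start)
        bpm1 = bpm_start + (b1 - beats[0]) / span * (bpm_end - bpm_start)
        onsets.append(onsets[-1] + (b1 - b0) * 60.0 / (0.5 * (bpm0 + bpm1)))
    return np.array(onsets)

# The same four notated beats, rendered with two different parameter settings.
strict = ritardando_onsets([0, 1, 2, 3], 120, 120)  # steady tempo, no slowing
heavy  = ritardando_onsets([0, 1, 2, 3], 120, 60)   # pronounced ritardando
```

With identical start and end tempi the onsets fall on an even grid, while the ritardando setting progressively stretches each inter-onset interval, which is the kind of variation a performance model's expressive parameters are meant to capture.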