Postdoc Started at King’s College London

Another delayed update: at the start of January I began a new postdoctoral research associate position at King’s College London (KCL)!

I’m now working in the School of Biomedical Engineering and Imaging Sciences (BMEIS) on COSMOS: Computational Shaping and Modeling of Musical Structures, with Principal Investigator Prof. Elaine Chew. COSMOS is a European Research Council Advanced Grant (AdG) project supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 788960. COSMOS aims to use data science, optimization and data analytics, and citizen science to study musical structures as they are created in music performances and in unusual sources such as cardiac arrhythmias.

In this postdoc, I will continue my research on human perception of musical performance and interaction with biodata, starting with some tangible heartbeat projects funded through an Engagement Grant from the KCL School of Natural, Mathematical, and Engineering Sciences (NMES). I am very excited to present these in the coming months around London, at KCL and with our hospital partners at Guy’s and St Thomas’.

Check out the COSMOS website for more info about our work and upcoming events!

More info about my and my colleague Dr. Mateusz Soliński‘s new appointment to the COSMOS project

PhD Viva Passed! 🌟

On January 26, I successfully passed my PhD Viva Voce (aka the “defense”)!

The PhD thesis is titled Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback.

The TL;DR (and it is a very long thesis, indeed!): I explored how singers understand complex, internal feedback as they perform. Because the voice exists internally and is a part of the body, vocalists must exercise refined control and work with their instrument without seeing or touching it. Instead, they rely on internal feelings and an intimate understanding of their bodies. I examined how we can externalise some of these sensations and internal movements to interact with our vocal movement in novel ways, learn about this connection, and play with the vocal physiology to better understand ourselves. We then applied this knowledge to communicating subjective sensory experience in musical interaction and human-computer interaction more broadly; hopefully, understanding the interaction with the voice and the dialogue with the body will lead us to develop more intuitive, individually reflective experiences with technology.

My examiners were Prof. Alexander Jensenius (University of Oslo) and Dr. George Fazekas (Queen Mary University of London) – a massive thanks for such an engaging and rewarding viva, and for the helpful feedback.

I’m excited to share more of this work soon – two chapters of my thesis have been adapted for and accepted to the ACM TEI and CHI conferences this year~

Fall 2023 Catch-up

Post-CHI ’23 submissions, I finally have some time to post updates on everything I’ve been doing at MPI and the projects coming this autumn! I’ve updated my CV and this website template so far (baby steps).

Some major things: I’ve renewed my contract with MPI, I have my PhD thesis viva in the very near future, and we submitted three amazing papers to CHI with work done over the summer.

Over the coming weeks I’ll play a bit of catch-up on the amazing things that have happened this year so far, including the AHs, CHI, and NIME conferences, some music-making, and workshops to be held for the rest of the year!

TEI 2022

February 13-16, 2022

I presented at the Graduate Student Consortium at TEI this year, discussing my PhD work with others nearing the end of their theses. My GSC paper, Examining Embodied Sensation and Perception in Singing, describes the studies I have been working on and my final collaboration with other singers using sonified sEMG to interact with their voices/bodies in new ways. The paper is now listed in the ACM Digital Library and can be found here. If you’re interested, I recorded a version of the presentation I gave at the GSC, which can be found here. 🙂
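As a toy illustration of what sonified sEMG can involve (this is not the actual pipeline from the paper, and the function names and parameters here are hypothetical), one common approach is to rectify and smooth the raw signal into an amplitude envelope, then map that envelope onto a synthesis parameter such as pitch:

```python
import numpy as np

def semg_envelope(signal, fs, cutoff_hz=5.0):
    """Rectify the sEMG signal, then smooth it with a moving average
    whose window length roughly corresponds to the low-pass cutoff."""
    rectified = np.abs(signal)
    window = max(1, int(fs / cutoff_hz))
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def envelope_to_pitch(envelope, f_min=220.0, f_max=880.0):
    """Map the normalised envelope linearly onto a frequency range (Hz)."""
    peak = np.max(envelope)
    env = envelope / peak if peak > 0 else envelope
    return f_min + env * (f_max - f_min)

# Simulate one second of sEMG: noise shaped by a burst of muscle activity
fs = 1000  # sample rate in Hz
t = np.linspace(0.0, 1.0, fs, endpoint=False)
rng = np.random.default_rng(0)
semg = rng.standard_normal(fs) * np.exp(-((t - 0.5) ** 2) / 0.01)

# Pitch contour that rises and falls with the simulated muscle burst
freqs = envelope_to_pitch(semg_envelope(semg, fs))
```

In a real system the frequency contour would drive an oscillator inside an audio callback; here it simply sketches the envelope-to-parameter mapping.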

I also got a chance to participate in an excellent workshop, How Tangible is TEI, charting out the future of physical publication formats at TEI. My colleagues at senSInt and I submitted some mock physical swatches; I worked on one called the Vibro-Touch, an idea for a portable, adaptable method of experiencing tactile feedback designs. At the workshop, I collaborated with a group of researchers in material design, haptic and audio feedback, and other multimodal research topics to discuss how we can get physical materials into the hands of other researchers; this can help with prototyping, designing new systems, and sharing knowledge more broadly. I’m looking forward to continuing to work with the group in the coming months to hopefully propose some new ideas for physical publications!

The Vibro-Touch swatch, with a Teensy 3.2 to store and generate feedback and a voice coil speaker through which to feel it
How the swatch might look in a swatch book

Music from the Augmented Instruments Lab

Save the Date!: Tuesday 23 November 2021, 7:00PM (GMT)
This November’s concert in the AIL concert series will be streamed via the Augmented Instruments Lab YouTube Channel.

More information about the event can be found here on the AIL site.

Poster by Francis Devine

  • MrUnderwood – ams
  • Julia Set – AV performance
  • Courtney Reed & Andrea Martelloni – augmented acoustic guitar, voice augmented with EMG
  • Lia Mice’s Chaos Bells performed by A’Bear, Andrew Booker, Angela Last, Blue Loop, Bubble People, Clive Thomas, Lizzie Wilson, nagasaki45, Phillip Raymond Goodman, Polyphonie, and Fae Harmer.

  • Nwando Ebizie – Distorted Constellations & Solve et Coagula
  • Sam Topley – Crafting e-Textile Musical Instruments

Seeing Music Live

I’ll be performing again with dynamic duo Betty Accorsi and Andrea Martelloni (Sloth in the City) in Seeing Music Live, an interactive music event powered through the Seeing Music project.

Seeing Music is an interdisciplinary collaboration across music, linguistics, cognitive science, and art. The exhibition and interactive performances were supported by students from QMUL’s Media & Arts Technology, the Centre for Digital Music, and the UKRI Centre for Doctoral Training in AI & Music, in collaboration with the Language Evolution, Acquisition & Development (LEAD) group at Newcastle University. The core team includes Dr Charalampos Saitis (QMUL), Dr Christine Cuskley (Newcastle), and Sebastian Löbbers (QMUL).

The first performance was held on Monday June 28, with the second scheduled for Friday July 9, 1:00–2:30 pm BST.

Tickets are free, but you must be pre-registered to attend! Come join us in sonifying your beautiful computer artwork!

VoxEMG Repo Update

VoxEMG (v3.1) Project & Resources

I’ve updated the repo for the VoxEMG because today I finally ordered the PCBs for the VoxEMG circuit! These boards have been designed with e-textile prototyping in mind, and I’m hoping to have a working version of the sEMG collar by July! I want to make sure that the designs and the research behind them are shared, in the hopes that others may be able to use or adapt them for other projects.

Creative Commons License

All of the resources for the VoxEMG are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
You are free to use/adapt them as long as you share the resulting work under the same license and give credit for the designs by citing the publications where this design has been presented:

Courtney N. Reed and Andrew P. McPherson. Surface Electromyography for Sensing Performance Intention and Musical Imagery in Vocalists. In Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’21), February 14–19, Salzburg, Austria. ACM, New York, NY, USA, 11 pages. 2021. DOI: 10.1145/3430524.3440641 [PDF] [Presentation]

Courtney N. Reed and Andrew P. McPherson. Surface Electromyography for Direct Vocal Control. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), July 21–25, Birmingham City University, Birmingham, UK, pp. 458–463. 2020. [PDF] [Presentation]

If you do use the design (or would like to!), please get in contact if you need any help with setup. I’d be more than happy to learn how this circuit is being used for other vocal and movement studies.

[May 6] Presentation at King’s College LDCDL

Tomorrow, Thursday 6 May at 16:00 BST, I’ll be presenting at the first of the summer sessions of the King’s College Language, Discourse, and Communication Doctoral Lab (LDCDL). I’m excited for the opportunity to share my recent work on vocal pedagogy with a language-focused research group and to get more dialogue going between linguistics and music cognition. The way we refer to our bodies and imagine our behaviors forms the basis for our interactions, and the sensory-based connections vocalists have with their voices are heavily influenced by the metaphors used by voice teachers.

[Update] The slides from this presentation can be found here. Please contact me if you are interested in the video recording of the presentation or have any other questions!

Translating the Body:

Abstract Language in the Teaching of Fundamental Vocal Pedagogy

Abstract: Vocal coaches must direct their students in the highly refined physiological movements needed for healthy fundamental singing technique, such as supported breathing, postural alignment, sound generation in the larynx, and formation of resonant space in the vocal tract. These movements exist within the body and therefore cannot be explicitly seen or adjusted by the teacher. The voice is instead traditionally trained through the use of abstract language and metaphor. Vocal pedagogy functions as a sort of “oral tradition,” where teachers pass on knowledge from their own instruction; through abstract language, teachers must translate the sensations in their own bodies to their students, who must then translate metaphor back into feeling and movement. This study involved interviews with voice teachers to explore their teaching technique, how this translation occurs, and the roles of abstract language in vocal pedagogy. We find that underlying directional schemas indicate teachers choose metaphors which align with their own imagery strengths; further, schemas vary across techniques, either running concurrently with physical action or distracting the student through divergent images. In addition, this research finds that the role of the specific language of instruction is secondary to the sensations conveyed: bilingual teachers do not translate the metaphors themselves between languages, but rather translate the bodily feelings they represent.

New AIL YouTube Channel

The Augmented Instruments Lab recently held its first livestream concert on YouTube! We’re now also using this space to showcase our research through performances, research presentations, and tutorials.

There’s a playlist for the Vocal sEMG project, containing some conference presentations I’ve given over the last year, with more to come very soon!

Please do subscribe and join in for our performances and future sessions!

Collaboration with King’s College London

Voice teachers’ practices in the one-to-one voice lesson

The study being conducted on vocal metaphor in the voice lesson will provide the basis for a collaboration with the School of Education, Communication and Society at King’s College London. This study will focus on lesson observation, examining how voice teachers adapt within the lesson to compromise between their own teaching style and goals and their students’ needs.

If you are interested in participating in this study, please see the call for participants below:

Call for Participants:

Voice teachers’ practices in the one-to-one voice lesson

This research project investigates the practices of voice teachers to determine how their background, preferred teaching style, and interaction with students shape a lesson. Further, we would like to explore what affects the course of a voice lesson, such as teacher and student goals, previous education experience, or possible compromises made while teaching.

This research involves a questionnaire and follow-up interview, with voluntary additional lesson observations. 

If you are a voice teacher and would like to participate in this study, please complete the questionnaire in the link below.

It will take about 10-15 minutes to complete this questionnaire, followed by a 30-minute interview with one of the project leads. If you would like to participate in lesson observations, please indicate this as well!

If you have any further questions about this project, feel free to contact Anja Stumpf or Courtney Reed.

Thank you,

Anja Stumpf (PhD, King’s College London)

Courtney Reed (PhD, Queen Mary University of London)


Pro forma and consent form  

Please read this first and provide your consent to collect and handle your data during this study:

Proforma & Consent – Fillable Word document (.docx)

Proforma & Consent – PDF (printable)

Musical background and skills questionnaire  

This questionnaire gathers some basic information about your musical background, as well as your auditory, kinetic, and visual imagery use.

Imagery & Music Skills – Fillable Word document (.docx)

Imagery & Music Skills – PDF (printable)