Networking concert halls, musicians, and interactive textiles: Interwoven Sound Spaces

ABSTRACT Interwoven Sound Spaces is an interdisciplinary project which brought together telematic music performance, interactive textiles, interaction design, and artistic research. A team of researchers collaborated with two professional contemporary music ensembles based in Berlin, Germany, and Piteå, Sweden, and four composers, with the aim of creating a telematic distributed concert taking place simultaneously in two concert halls and online. Central to the project was the development of interactive textiles capable of sensing the musicians’ movements while playing acoustic instruments, and generating data the composers used in their works. Musicians, instruments, textiles, sounds, halls, and data formed a network of entities and agencies that was reconfigured for each piece, showing how networked music practice enables distinctive musicking techniques. We describe each part of the project and report on a research interview conducted with one of the composers for the purpose of analysing the creative approaches she adopted for composing her piece.


Introduction
Interwoven Sound Spaces (ISS) investigated the creative possibilities of telematic performance in contemporary music ensemble practice through textile and network technologies. The aim was to enable rich and tangible interaction between musicians located in different places. Central goals for this artistic research project were to host a telematic concert programme in established, traditional music venues and to work with experienced professional musicians. This was motivated by our intention to bring telematic performance and wearable interaction design practices outside of conventional academic circles and engage with composers, performers, concert halls, and their audiences. The results of the project were presented in a joint interactive concert with the ensemble KNM in Berlin, Germany and ensemble Norrbotten NEO in Piteå, Sweden. The concert was simultaneously hosted in two concert halls approximately 1800 km apart: Studio Acusticum in Piteå and the Konzertsaal der Universität der Künste in Berlin. The work was motivated by the need to enhance communication and a sense of co-presence between musicians and audiences during live concerts happening concurrently at multiple, distant geographic locations connected via telematic means. To achieve this, we commissioned new works that combined textile wearable technologies, interaction design, machine learning, and distributed performance, thereby extending the experience of playing and experiencing music telematically beyond screen-based applications. The project was the subject of a short documentary directed by Tim Nowitzki et al. (2023). The name Interwoven Sound Spaces was chosen to echo key concepts of the project: textiles and the interconnectedness of sounds and concert spaces.
In addition to providing tools for telematic ensemble play, the project was particularly interested in the roles and dynamics of the sociocultural spaces that characterize live music performance. Factors other than the music itself, such as dress cultures, communication between musicians beyond the musical interplay, and the socio-cultural environment in which the performance is experienced, contribute to the success of live music performances. These considerations guided the project and its technical decisions through iterative development and close collaboration between musicians, composers, researchers, venues, and developers. Composing interactions between musicians and objects located in multiple interconnected environments situates the project in the broader field of ubiquitous music (ubimus). This research and artistic field looks at how musical activities enacted by human agents, material resources, and the relational properties that characterize them can take place in ecosystems supported by a network infrastructure (Keller and Lazzarini 2017). In ISS we aimed to connect two established concert venues, thereby combining traditional music performance contexts with contemporary, distributed networked practices. To our knowledge, this is one of the first artistic projects combining e-textiles and networked music performance.

Telematic music performance and networked musicking
Telematic music performance occurs when geographically separated musicians perform together by means of telecommunication technologies. The musicians thus play together without being in the same room, often far apart from each other. We consider networked musicking any musical activity that is expressed through a network of connections between musicians, instruments, audiences, and other agents. The architecture of the network defines the relationships between the different interconnected agencies, establishing a system with specific creative affordances. In this context, we favour the term musicking introduced by Small (1998) as it focuses on the activities that constitute a musical experience and the ways to take part in it, 'whether by performing, by listening, by rehearsing or practising, by providing material for performance (what is called composing), or by dancing' (Small 1998, 9).
Since the early explorations of groups like the U.S. ensemble The Hub (Brümmer 2021) and the very first performance by remote musicians streamed over the internet in 1993, much technological research has been conducted to minimize latency and improve audio and video quality in transmission (Rottondi et al. 2016). These attempts may be related to how Minsky (1980) observes that 'the biggest challenge to developing telepresence is achieving a sense of "being there"' (46). Networked music performance practices have grown considerably in the past two decades thanks to faster and more widespread Internet infrastructures, and have also seen an acceleration in interest and development due to the constraints imposed in response to the COVID-19 pandemic (Onderdijk, Acar, and Van Dyck 2021). Several dedicated software platforms such as JackTrip (Cáceres and Chafe 2010), LOLA (Drioli and Buso 2013), and Elk LIVE, a network performance platform built on the Elk Audio OS framework (Turchet and Fischione 2021), have received renewed interest, and some were used to support distributed concerts and enable distance collaboration during the lockdown (Bosi et al. 2021). A musician's listening involves not only auditory perception but is also cognitively embodied. In telematic performance, the relationships between body, sound, and space are modified, demanding from the performer 'to listen intently while being in a rather fragile, unstable environment' (Schroeder 2013, 225). As observed by Mills (2019, 6), 'while network technology collapses distance in geographical space, tele-improvisation takes place without the acoustic and gestural referents of collocated performance scenarios. This liminal experience presents distinct challenges for performers.' Several networked performances make use of sensors, transducers and other technologies focused on motion detection and motion control. For example, the 'Mocap Streamer' system developed by Goldsmiths University (Strutt 2022; Strutt et al. 2021) was used in a series of performances in which the avatars of two remote dancers danced together, the dancers wearing inertial sensor motion capture systems that enable precise data capture. Network technologies have also been used to control body movement remotely. As an example, in some performances by Stelarc, the muscles of the artist could be controlled remotely by the audience through devices placed on the surface of the skin and a web interface (Elsenaar and Scha 2002; Stelarc 1991).
Recently, Comanducci (2023) proposed a comprehensive framework dedicated to networked music performance, designed after experimental studies carried out in collaboration with classically trained musicians. The results indicated that, when confronted with high latency times, 'musicians are able to somehow adapt themselves or at least to adopt different type[s] of strategies' (Comanducci 2023, 122). This resulted in a system that, rather than prioritising latency minimization, provides adaptive tools to help musicians cope with network latency, giving particular attention to spatial audio and audiovisual immersion. However, the use of recent network technologies such as 5G shows promising results when it comes to keeping latency at levels considered acceptable for music-making (Turchet and Casari 2023), thereby opening scenarios in which such technologies make networked music performance more easily accessible.

An internet of musical things
ISS can be situated within a broader research field intersecting networked music performance, the internet of things (IoT), and new interfaces for musical expression (NIME). Turchet et al. (2018) refer to this interdisciplinary area as the Internet of Musical Things (IoMusT). They envision a network of '[m]usical things, such as smart musical instruments or wearables, […] connected by an infrastructure that enables multidirectional communication, both locally and remotely' as the foundation of IoMusT (2018, 61994). They describe an ecosystem in which performers, audiences, and machines (i.e. networked objects capable of exchanging sounds, notes, and other music-related data) can be colocated and/or remote, each interacting with the other either locally or by means of a network connection.
ISS implemented an ecosystem that closely resembles many of the IoMusT core ideas. Two groups of performers were located in two distant concert venues connected via low-latency audio and video technologies. Audiences could join the concert at both venues and online. The interactive textiles (see section 4.1) and circuit board (section 4.2) were designed to be worn by the performers during the concert and exchanged data over the network, so that the composers could use such data to design interactions (see section 5).

Interactive textiles in music and performance
Electronic textiles (e-textiles) and textile wearable technology are often used to explore new formats for live music performances, enabling physical touch and expressive body-based interaction between performers and computational audio systems. While sound is typically not a major consideration in textile design, e-textile and interaction designers have embroidered, woven, knitted, printed and knotted musical interfaces and instruments for both on and off the body (Perner-Wilson and Satomi 2018; Psarra 2014; Torres 2017).
E-textiles originating in both artistic and scientific communities feature in numerous electronic music performances, highlighting how they can, for example, lead to more accessible forms of music making, 1 or bridge stage dress traditions with interactive functionality (Greinke et al. 2021). In HCI, research dealing with e-textile sensors is increasingly specialized in terms of what outputs are studied, with computational and interactive audio becoming an important topic (Stewart 2019). One of the pioneers of electronic textiles in design, Maggie Orth, created the Musical Jacket, which featured conductive embroidery to control MIDI sound (Post et al. 2000). The Embroidered Musical Ball was a handheld device with embedded capacitive pressure sensors (Weinberg, Orth, and Russo 2000), and Teresa Marrin Nakra (2000) designed a conductor's jacket using e-textile elements, which gathered physiological and gestural data in 16 channels.
Beyond a technical focus, e-textiles have also been investigated as a means to tackle gender disparity in audio engineering communities (Stewart, Skach, and Bin 2018), and for nondeterministic HCI approaches, embracing disruption, failure or uselessness when conceptualizing new types of interaction (Andersen et al. 2018;Nordmoen et al. 2019;Skach et al. 2018).
E-textiles and textile wearables are, however, rarely used in setups involving multiple musicians or larger ensembles. One example is Ann Rosén's work with interactive knitted knee cuffs, 2 worn in performances by The Barrier Orchestra. 3 The knee cuffs consisted of textile sensors paired with small computers, allowing the musicians to play them as synthesizers and samplers while wearing the knitted tubes around their knees. Another example is the set of prototypes developed for 'Sound Folds' (2021), 4 which investigated the use of folded e-textile sensors to augment acoustic instruments while acknowledging Western traditions of formal stage dress.

Ensembles
Professional musicians from two contemporary music ensembles participated in ISS, with three musicians located in Berlin, Germany, and three in Piteå, Sweden.This section briefly introduces the ensembles.
KNM Berlin is an established ensemble for contemporary music. It was founded in 1988, presenting compositions, concert installations, and concert projects developed in collaboration with international composers, authors, conductors, artists, and stage directors. Three KNM ensemble musicians were involved in this project, playing cello (Cosima Gerhardt), contrabass clarinet (Theo Nabicht), and flute (Rebecca Lenton). All three musicians had long-standing experience with experimental live formats including electronic and digital sound and video. Wearables and telematic music performances, as practised in this project, had not been used by them previously.
Norrbotten NEO, based in Piteå, Sweden, has been at the forefront of contemporary chamber music for over 15 years, being one of the most distinctive voices on the Swedish contemporary music scene. Today the ensemble is the only one of its kind in Sweden, promoting contemporary art music on a national basis. The ensemble continuously commissions new works and collaborates with both younger and more established composers, nationally as well as internationally. Three musicians of the ensemble participated in the project, playing contrabass clarinet (Robert Ek), percussion (Daniel Saur), and viola (Mina Fred). The ensemble had been involved in telematic performances previously, with one of the musicians also being a member of a quartet focused on networked music practice (Ek et al. 2021). This proved to be an advantage, as the technical support during the development phase was limited on the NEO side, with a larger team joining only later for the concert.
All musicians had experience with improvisation and were familiar with the extended instrumental techniques found in contemporary and new music practices. They were very open to experimentation and working with the technologies developed for the project. Some of them were already familiar with the techniques and concepts employed by the composers. All musicians are experienced professionals and were regularly remunerated for their work.

Composers
The project involved a direct collaboration with four composers: Cat Hope, Ana Maria Rodriguez, Malte Giesen, and Ann Rosén. Each composer was asked to compose a piece tailored for the project, particularly the networked interactions between the two distant ensembles and the textiles designed for the project. This article focuses on an analysis of Cat Hope's approach to composing her piece.

Technical and design actors
People from various disciplines were involved in the technical and design development and in implementing the final concert. This included wearables design and hardware (section 4) and interaction design (section 5), as well as streaming, video communication between musicians, and sound engineering (described in more detail in section 6). It should be noted that these parts were also impacted by the telematic setup, meaning that standard technical stage procedures also prevalent in non-telematic performances needed to be adapted.

Interactive textiles for movement detection
This section introduces the design and technical work that went into creating a set of interactive artefacts to be used on stage. While different textile and non-textile mechanisms were embedded, all artefacts were designed to detect the movement of the musicians wearing or operating them.

ISS textiles
Three different designs for interactive textiles to be used by the six musicians were developed. Two of these were textile wearables (The Tensile, The String). The third was an interactive rug (The Rug). All textile prototypes had integrated resistive sensors, able to detect stretch, pressure or acceleration when the musicians moved whilst playing their instruments. Through data processing (some examples in section 5) it was possible to detect different body positions and motion patterns.
The overall design concept for the textiles was inspired by networks. This refers both to the structural behaviour of textiles, such as stretch, and to communication through a configuration unconfined by the boundaries of physical space. Two design researchers were responsible for the work described in this section: the first was in charge of the overall design concept and the garment design; the second is specialized in constructed textiles and was responsible for producing textiles from yarns (both with integrated sensors and without), as well as designing connections between textiles and hardware using textile techniques. All prototypes were made relying on specialized textile and garment knowledge, through which textile solutions for connecting different hardware parts to the textile structures could be developed. The conceptual design refrained from taking a purely functional route and instead focused on the poetics of movement. The two wearables were designed as jewellery-like pieces, each focusing on a specific movement or body part with expressive significance for the respective instrument. The rug was a modular design, with each musician using their own module. This allowed for adjusting the design on stage, while at the same time making sure the musicians stayed in a dedicated area, which was streamed and visible on the stage monitors in the paired location.
The telematic arrangement of the performance had an impact on all other development processes in the project, including the garment design and making. Usually, the tailoring of garments requires several fittings, that is, in-person meetings between garment maker and wearer for discussing designs, taking measurements, and altering the garments to best fit the wearer. Due to the physical distance between the wearables team in Berlin and the NEO ensemble in Piteå, this was not possible in our setting. Designers and NEO musicians were not able to be physically together at any point of the project. While tailored textile wearables are commonly developed in iterative steps, using in-person exchanges and fittings to identify needed alterations, our design required a different approach. This was done in several stages. Firstly, we watched videos of the musicians playing at previous performances, gaining an initial understanding of their body types, learning about typical movements when playing their respective instruments, and identifying areas of the body that would benefit from motion detection. At this stage of the project, not all musicians and instruments involved had been named. This required the fashion designer to also think about approaches for universally designed wearables, where little to no alteration would be needed when designs are worn by different people. Several video calls were held with the musicians known at this stage, namely the cellist, clarinettist and flautist based in Berlin.
In the next stage, two movement categories were identified that would be used by as many musicians as possible. The first was the movement of the right arm, which was observed as defining for string instrument players holding the bow, as well as for percussionists. The second category was weight-shifting, which we observed mostly in woodwind musicians, in this group the flautist and the two clarinettists. In addition, all musicians were accustomed to using pedals when playing, either to turn pages in the score or to add effects in contemporary music they had played in the past, which often resulted in more intentional or controlled weight-shifting.
Rehearsals offered opportunities to meet the musicians in person in Berlin. These sessions were also used to train them in dressing the wearables and to identify required alterations. Regarding the NEO musicians, no fittings could be scheduled until the wearables were finalized. At this stage, the prototypes were sent to them by post, including printed instructions for dressing and connecting. The designers then met the NEO musicians in a video call, assisting them with dressing and connecting the wearables and hardware.

The Tensile
The Tensile is a machine-knitted sleeve inspired by tensile architectural structures. It has three integrated knitted stretch sensors that detect movement of the right index finger, elbow, and shoulder. Three sleeves were produced and worn by the cellist in Berlin (shown in Figure 1), and the violist and percussionist in Piteå.
The geographical distance between the design researchers and the musicians in Piteå presented a series of design challenges throughout the process. The musicians were required to record and send their own body measurements, and their unfamiliarity with measuring their own bodies in such a manner resulted in imprecise measurements. However, the elastic properties of the Tensile textile, in both yarn and knitted structure, provided enough flexibility to create accurately fitted wearables. Figure 2 shows the textile for the Tensile as it is being knitted (a) and the finished wearable (b). For the sleeve worn by the NEO percussionist, an alteration was required for which the wearable was posted back and forth to be iterated.

The String
The String is a layered shoulder harness with long, soft padded strings containing an integrated accelerometer. Three String wearables were produced, worn by the contrabass clarinettists in both Berlin (see Figure 3) and Piteå, and the flautist in Berlin.
The String comprised a fitted knit jersey shoulder harness with a velcro closure under one arm. The padded strings were stitched from power-mesh and filled with textile wadding. The accelerometer detected the swings that occurred as a result of body movement, acting as a form of subtle motion tracking.
As with the Tensile, geographical distance and limited availability meant some musicians were required to provide their own body measurements, and the properties of the knitted jersey and power-mesh proved elastic enough to compensate for any imprecise measurements.

The Rug
An interactive rug capable of detecting a musician's weight shifting when playing their acoustic instrument was designed and prototyped (see Figure 4). Given the musicians' familiarity with pedals, the rug provided a subtle control element that musicians were able to use with little need for training. It further served to mark out the areas on stage that were captured by the streaming camera, ensuring that the musicians stayed within the camera frame at all times for effective streaming.
The Rug was modular, consisting of three independent rug shapes occupied by one musician each. Two sensors were mounted underneath the rugs 30 cm apart, each large enough for the musicians to comfortably stand on and move around.

ISS circuit board
The ISS hardware served as an interface between the textile sensors and one computer each in Berlin and Piteå respectively. Each musician carried one circuit board (Figure 5), and an adapted version was used for the rugs. The workstations then connected via Open Sound Control (OSC) (Wright 2005) and thereby formed two central nodes of a network that included all of the textile sensors. The circuit board consists of a microcontroller and break-out boards for connecting textile sensors, haptic drivers and the power supply. Wireless connectivity was a central requirement, to allow for changing setups during different parts of a performance. We used Arduino Nano 33 IoT microcontrollers, which offer both Bluetooth and WiFi connectivity; the latter was chosen in this project due to the greater range and stability of WiFi for stage applications (Mitchell et al. 2014).
Regarding the connection between each board and the textiles, flexibility and ease of use were central requirements. We used 3.5 mm audio jack connectors to allow for quick and easy connections between the board and the textiles. Each of the boards has four inputs for resistive sensors, one 4-pin input used for the String sensor's external IMU, and one 4-pin output for the haptic motor. Two of the four 2-pin inputs go into a custom signal preprocessing board 5 that helps to achieve a better signal-to-noise ratio (SNR) from the sensors' signals. The signal offset and gain can be modified through adjustable variable resistors. While offering a cleaner signal when ideally tuned, the board can also be an additional source of complexity in setups that need to change quickly. Making this preprocessing optional therefore proved a good compromise, especially with the outlook of using the board for future projects.
The essential functionality of the board is that of a two-way interface between the workstation and the various sensors. It reads the sensor inputs as 10-bit values and forwards them as OSC packets to the workstation. Likewise, the vibration of the haptic motor can be triggered through the software running on the workstation: the command is sent to the Arduino via OSC, which then triggers the haptic driver and motor. The logic of this behaviour is handled by the Arduino microcontroller and programmed in the Arduino programming language. 6
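As an illustration of this two-way data flow, the following is a minimal Python sketch of the workstation-side handling of a single sensor reading. The function names, the OSC address, and the trigger threshold are assumptions for the sake of the example; the actual system implemented this logic in Max patches and in the Arduino firmware.

```python
# Sketch of workstation-side handling of one board reading (illustrative;
# the OSC address "/iss/haptic/trigger" and the 0.8 threshold are assumed).

def normalize_adc(raw: int, bits: int = 10) -> float:
    """Map a raw ADC reading (0 .. 2**bits - 1) to the range 0.0-1.0."""
    top = 2 ** bits - 1
    return max(0, min(raw, top)) / top

def handle_sensor(raw: int, threshold: float = 0.8):
    """Normalize a 10-bit sensor value and decide whether to cue haptics.

    Returns (value, haptic_msg): haptic_msg is an OSC-style (address,
    argument) pair to send back to the board, or None when no vibration
    should be triggered.
    """
    value = normalize_adc(raw)
    haptic_msg = ("/iss/haptic/trigger", 1) if value > threshold else None
    return value, haptic_msg
```

In the concert setup, the 10-bit values arrive at the workstation as OSC packets over WiFi, and any haptic command travels in the opposite direction back to the Arduino.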

Composing interactions
To present the expressive possibilities offered by the project, we conducted an online workshop with the composers. This included a presentation of the system parts and concepts, with videos of some composition and interaction design ideas filmed in collaboration with three guest performers. Following the presentation, each composer had a timeslot with the developer and design team to discuss ideas for their pieces and how to implement them. We saw this as a necessary step in the project, as mapping sensor data for musical purposes is known to be a non-obvious and challenging process with many implications for the expressivity of the resulting musical interactions (Hunt, Wanderley, and Paradis 2003). The following two subsections will give an overview of a selection of the interactions proposed during the workshop, while section 7 will provide detail on which interactions were implemented in the piece composed by Cat Hope. Her approach will be discussed further by commenting on excerpts from an interview we conducted to gain a deeper insight into her experience composing for the project. It is worth noting that it was explained to the musicians how the interactive textiles worked and generated data, but the decision on whether their use of the textiles should be deliberate or more unconscious was left to the composers, who scored their pieces with different degrees of explicit instructions on how to interact with wearables and rugs. While some composers gave more explicit directions, Hope opted for informing the performers of how their movements would affect the score and the live electronics without explicitly notating such behaviours in her score. Other composers opted instead for a more choreographed sequence of movements or more explicit instructions on how to use the interactive textiles during the performance.

Interactions proposed during the workshop
We designed a set of interactions between the musicians' movements while performing and sound in order to showcase some of the creative possibilities of the ISS system. Movement data was captured through the ISS textiles and board (see sections 4 and 5). The data was then used for interactive sound synthesis and haptic cueing using various techniques implemented in Max 7 as described below.

Stretching sound using the Tensile and machine learning
The first proposed interaction involved the Tensile worn by a cellist. The wearable captured the movements of the right arm while the performer bowed the strings, while a microphone mounted on the bridge of the instrument captured the sound. The core idea of the interaction was to process the sound of the instrument using the data from the Tensile and aurally 'stretch' the sound of the cello to mirror the stretching occurring in the textile while the musician bowed the strings. This was implemented using an FFT-based frequency shifter to process the sound captured by the cello microphone. The mapping between the Tensile sensor data and the frequency shifter parameters was defined by a linear regression model created using an interactive machine learning workflow (Visi and Tanaka 2021) and was implemented using the GIMLeT package 8 for Max. It worked as follows:
1. The composer defined how the frequency shifter should affect the sound when the body of the musician is in key positions, e.g. when the strings are the closest to the bow tip and the Tensile is stretched, or when the strings are the closest to the bow frog and the Tensile is released;
2. While the cellist was wearing the Tensile, data in the aforementioned positions was recorded;
3. Sensor data for each position was paired with the corresponding frequency shifter parameters and used to train a neural network for obtaining a linear regression model;
4. The sound of the cello was processed (i.e. 'stretched') in real time as the musician played and new sensor data was fed into the linear regression model.
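The regression step of this workflow can be sketched in a few lines of Python. The sensor values, anchor positions, and shift amounts below are invented for illustration; the project itself used the GIMLeT package in Max rather than NumPy.

```python
# Minimal sketch of the anchor-point regression workflow (illustrative
# values; the actual mapping was built with GIMLeT in Max).
import numpy as np

# Steps 2-3: sensor readings (finger, elbow, shoulder stretch) recorded at
# two key positions, paired with composer-defined frequency shifts (Hz).
anchors_x = np.array([[0.1, 0.2, 0.1],   # bow at the frog, Tensile relaxed
                      [0.9, 0.8, 0.7]])  # bow at the tip, Tensile stretched
anchors_y = np.array([[0.0],             # no frequency shift
                      [120.0]])          # maximum shift

# Fit y = X @ w by least squares (a bias column is appended to X).
X = np.hstack([anchors_x, np.ones((len(anchors_x), 1))])
w, *_ = np.linalg.lstsq(X, anchors_y, rcond=None)

def shift_for(sensors):
    """Step 4: predict the shifter parameter for a new sensor frame."""
    x = np.append(np.asarray(sensors, dtype=float), 1.0)
    return float(x @ w)
```

New sensor frames between the two anchors yield intermediate shift values, so the processed cello sound 'stretches' continuously with the textile.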
During the workshop we explained that such an approach can be transferred to other textiles as well as other sound parameters, and that the software interface we built would allow such interactions to be quickly set up during rehearsals. We called each key-position-to-sound-parameters pair an 'anchor point' (Tanaka et al. 2019).

The String: echoing sways
A key concept behind the example interaction for the String wearable was to consider the swaying of the strings hanging down from the shoulder as echoes of the full body movement of the musician. We demoed this concept with flute players who perform standing. Small movements of the body made the strings swing, and these oscillations were captured by the motion sensor in the wearable. To mirror these echoes of motion in the sound of the instrument, we used the sensor data to add layers of modulated echo to the sound of the instrument, which became more present as the movement became more intense. Additionally, quick, sudden movements triggered additional impulsive sounds, resembling those made by hitting or shaking a spring reverb unit.
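One plausible way to realize this mapping is sketched below in Python: the accelerometer magnitude drives a smoothed echo level, and sudden spikes flag an impulsive trigger. The smoothing constant and thresholds are assumptions; the concert version was implemented as a Max patch.

```python
# Sketch of the sway-to-echo mapping (thresholds and smoothing factor are
# illustrative assumptions, not the values used in the performance).
import math

def sway_to_echo(accel, echo_prev=0.0, alpha=0.2, impulse_threshold=2.5):
    """Map one accelerometer frame (x, y, z in g) to echo controls.

    Returns (echo_level, impulse): echo_level is a smoothed 0..1 amount
    of modulated echo; impulse is True on a quick, sudden movement that
    should trigger a spring-reverb-like hit.
    """
    # Movement intensity: deviation of the magnitude from 1 g at rest.
    magnitude = math.sqrt(sum(a * a for a in accel))
    intensity = abs(magnitude - 1.0)
    # One-pole smoothing so the echo swells and decays like the swaying strings.
    echo_level = min(1.0, echo_prev + alpha * (intensity - echo_prev))
    impulse = magnitude > impulse_threshold
    return echo_level, impulse
```

Feeding successive frames through the function (passing the previous echo level back in) produces the gradual build-up of echo layers described above.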

Rug
Each rug defined the area on stage in and around which each musician performed. It has three active areas that measure pressure (see section 4.3). We suggested three possible ways of using the pressure data obtained from the rug for musical interaction, demoed in collaboration with the flute player Diane Barbé:
1. On/off: in the simplest interaction with the rug we could think of, the musician stepped on or off the rug. Stepping on the rug resulted in the sound of their instrument being sampled by a granular synthesizer and played back for as long as the musician stood on the rug.
2. Weight-shifting: building upon the previous interaction, if the musician shifted their weight to either side while standing on the rug, the synthesis parameters of the granular synthesizer used for sampling the instrument sound changed, adding subtle timbre variations tied to how the musician distributed their weight on the rug. Parameter mapping was done using machine learning to build a linear regression model, similarly to the stretching sound interaction described in section 5.1.1.
3. Pedal-like functions: to echo how effect pedals work, stepping on a third sensitive area of the rug resulted in more dramatic processing of the instrument sound, such as heavy distortion. This third sensing area was later discarded as it became clear that two sensors were sufficient to serve the composers' ideas.
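The first two rug interactions can be condensed into a single mapping function, sketched here in Python. The function name, the normalized sensor range, and the on/off threshold are illustrative assumptions.

```python
# Sketch of the on/off and weight-shifting rug interactions (hypothetical
# normalized sensor range 0..1 and on-threshold; the real mapping ran in Max).

def rug_state(left: float, right: float, on_threshold: float = 0.1):
    """Map the two pressure sensors to granular synthesizer controls.

    Returns (gate, balance): gate is True while the musician stands on the
    rug (interaction 1, sampling on/off); balance in -1..1 tracks weight
    shifting and drives subtle timbre variations (interaction 2).
    """
    total = left + right
    gate = total > on_threshold
    balance = (right - left) / total if gate else 0.0
    return gate, balance
```

A reading such as `rug_state(0.2, 0.6)` gates the sampler on and reports the weight shifted towards the right sensor, which would then be mapped to the grain parameters.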

ISS performance
The telematic concert took place on the 21st of December 2022, simultaneously at the Konzertsaal der UdK Berlin in Berlin, Germany, and at Studio Acusticum in Piteå, Sweden (Figures 6 and 9). Screens on each stage displayed live video feeds of the musicians at the other location, providing a visual representation of the remote musicians. Arranging musicians and screens along an arc (see Figure 8) allowed the screens to be seen both by the performers physically present on stage and by the audience. This created a visuo-spatial relationship on stage between the local and remote musicians, thereby conveying a sense of co-performance. The screens always showed the live video feed from the same angle for the purpose of creating a consistent spatial link between the stages, something more akin to looking through a window than watching a TV. This means that the fields of view of the cameras were set once before the performance, using the rugs as reference points, and then left unchanged throughout the concert. This approach is akin to that of media arts works such as 'Hole in Space' by Galloway and Rabinowitz (1980), in which the artists created a consistent telematic link between two distant places. Screens were essential during the performance to convey a sense of co-presence to both musicians and audience, but they proved useful also during other stages of the project.
During rehearsals, seeing through the screens that the remote musicians were ready helped the researchers and the local musicians coordinate actions before the beginning of a trial performance. Visual contact was also important when working with the composers during workshops, as it helped composers communicate their ideas to the remote musicians and get a better sense of how the performance was unfolding at the other location. The stage screens were not used to communicate any time-sensitive cues such as conducting gestures, as musicians relied more on the sound and the scores to synchronize their performance. The screens were also useful for the audio engineers at both locations, as they made it easier for them to understand what was going on on the other stage while sound checking and during the performance. The engineers also used the location of the screens on stage as a spatial reference when mixing, and arranged the position of each instrument in the sound field to reflect where the corresponding screen was placed on stage. The rugs marked the position on stage for each local performer in order for them to be properly shown on screen on the other stage. The stage plan for Studio Acusticum is schematized in Figure 8 and the recordings of the live streams from both locations are available on the project website.
We used JackTrip (Cáceres and Chafe 2010) for streaming uncompressed multichannel audio between the locations. All instruments had their own dedicated microphones. In Berlin, cello and flute were close-miked using clip-on condenser microphones, while the contrabass clarinet was miked using a pair of small-diaphragm condenser microphones mounted on stands placed close to the instrument. A similar solution was used for the contrabass clarinet in Piteå, while the viola was close-miked with a condenser clip-on and the percussion was captured with two overhead condenser microphones. Four channels dedicated to live electronics were added to the 16 audio channels assigned to the microphones (8 per location), for a total of 20 channels. Each location received all separate, uncompressed remote channels via JackTrip. This allowed the audio engineers at both locations to independently mix the sound for the respective concert hall. We measured an overall round-trip delay of 155 ms (i.e. 77.5 ms one-way) in the rehearsal venues, which is comparable to the latencies reported by Bosi et al. (2021). The live video stream from Piteå carried the same stereo mix as the hall, while the live video stream from Berlin used a binaural rendering of an ambisonics mix that was made specifically for the live streaming.
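For clarity, the channel and latency figures given above can be tallied in a few lines. This is simply bookkeeping of the numbers stated in the text, not project code.

```python
# Channel plan as described above: 8 microphone channels per location,
# plus 4 channels dedicated to live electronics.
mics_per_location = 8
locations = 2
electronics_channels = 4

mic_channels = mics_per_location * locations          # 16 microphone channels
total_channels = mic_channels + electronics_channels  # 20 channels streamed in total

# Measured round-trip delay between the rehearsal venues.
round_trip_ms = 155.0
one_way_ms = round_trip_ms / 2  # 77.5 ms one-way
```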

Cat Hope's 'The Drift'
This section introduces and discusses in detail one of the four pieces composed and performed for the project: 'The Drift' by composer Cat Hope. It provides an insight into how a selection of the technologies described in the previous sections were implemented in the artistic concept and structural setup of the piece (see section 7.1). A set of topics was extracted from an interview conducted with Hope. We report and discuss her considerations on the relationships between the piece and the concepts of latency (see section 7.2), liveness (7.3), and wearable sensing (7.4). We conclude the section with reflections on the practical implementation of the ideas of the composition and the musicians' interpretation.
It is worth making clear that the interactions described here were designed specifically for Hope's piece. The pieces composed by the other three composers involved entirely different interactions between the data generated by the interactive textiles, the musicians, and the live electronics. Additionally, the other pieces adopted different approaches to scoring, including standard Western music notation, both static and animated, as well as a choreographed sequence of gestures. Describing the other pieces in detail is beyond the scope of this article and will be addressed in future publications.
Artistic and technical concept of 'The Drift'
Cat Hope's composition 'The Drift' uses data generated by the wearables to alter the motion of notation in a digital animated score. The score images, which are normally fixed and move from left to right in the majority of Hope's work, 'drift' around on the digital page, their movement indicating variations in timbral density for each player. The title of the work is a homage to US singer-songwriter Scott Walker, whose 2006 album of the same name was described by him as being composed by employing 'blocks of sound' (Leone, 2006). Here the work is also developed in blocks, but in this case of notated parts that float across the score 'surface'.
Hope's work focused on how data from the wearables could influence the score read by the musicians. The piece is scored for two contrabass clarinets, viola, percussion, and electronics; the data was also used by Federico Visi to control the electronics scored in the piece. The two contrabass clarinettists wore String wearables, while the viola and percussion players wore the Tensile models.
Hope's compositions are usually presented as animated notation on networked iPads, using the Decibel ScorePlayer application (Hope et al. 2015). Scores can be networked over the Internet in real time using the iPad application, and a 'canvas layer' function in the application enables real-time drawing to occur as the score unfolds, using Max commands (Wyatt, Vickery, and James 2018). Coloured graphic notation, with parts for each instrument, scrolls from left to right across the screen, with a vertical 'play head' line indicating the point of performance for the musicians. The playhead in this piece provides a timbral 'density' scale, which determines the textural density the performers apply to their part, with the topmost part of the score page being the most 'complex' and the bottom the most 'clean' (see Figure 10).
'The Drift' uses data generated by the wearables to 'drift' individual score parts up and down the vertical axis of the score as they move towards the playhead, meaning the timbral variation of the sound is different at each performance. While the score itself was written by the composer, the vertical position of each part is affected live by the data generated by the wearables. The score displayed on each screen includes all the parts, and the scrolling is synced via the network; therefore, all musicians can see on their devices the full score and how the live data is affecting each part while they perform.
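The drifting behaviour described above can be sketched as follows. This is a hypothetical Python illustration of driving a part's vertical offset from a motion-activity value, smoothed so the part 'drifts' rather than jumps; the class name, pixel range, and smoothing constant are invented, and the Decibel ScorePlayer's actual networked mechanism is not reproduced here.

```python
class PartDrift:
    """Sketch of mapping a musician's motion-activity value (0..1)
    to a vertical offset of their score part. An exponential moving
    average smooths the target so the notation floats gradually."""

    def __init__(self, max_offset_px=120.0, smoothing=0.9):
        self.max_offset_px = max_offset_px  # illustrative vertical range
        self.smoothing = smoothing          # 0..1, higher = slower drift
        self.offset = 0.0                   # current vertical offset in px

    def update(self, activity):
        # Map 0..1 activity to a target in -max..+max, then ease toward it.
        target = (activity * 2.0 - 1.0) * self.max_offset_px
        self.offset = self.smoothing * self.offset + (1 - self.smoothing) * target
        return self.offset
```

Because the offset follows the live data, the vertical position of each part, and hence the timbral density it indicates, differs at every performance.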
Hope followed the rehearsals on site in Berlin with an audio-video connection to the musicians in Sweden. She gave detailed feedback to the musicians regarding the performance of her piece, answered questions regarding how certain graphical elements of the score should be interpreted, and gave directions about musical aspects other than timbral density.

Considerations on latency
The choice of the String wearable is motivated by the similarity, according to Hope, between its movement and that of her scores. Moreover, she found an analogy between the oscillating movement of the wearable and the inherent latency of telematic information transmission. In our interview, the composer explains that 'swaying seemed to be a movement close to how my scores move, and made me think about how I could use sway motion to animate my scores differently' and 'the movement of the sway continues long after the sound is over, and has a kind of delayed start, a little like the delay experienced in telematic performance'.
Latency is discussed further when reflecting on her approach to telematic performance and the implications of connecting musicians in distant locations: I've come to this latency-friendly approach because I don't want to emulate a normal music-making situation, a real in-place, in-person music making. I don't want to emulate that telematically; why would I do that? It's not going to be any good; physics always gets in the way. […] [I tried not] to obtain some almost impossible synchronicity, but actually making a piece where that lack of synchronicity was to some extent okay, or even part of the piece.
These considerations point to two aspects of Hope's approach to composing for telematic music performance. Firstly, latency is not necessarily seen as a hindrance, but as a quality of the medium that can also be aestheticized. Secondly, this understanding of latency implies an approach to musical timing in which the pursuit of strict synchronicity is eschewed in favour of ways of composing that allow for, or even integrate, a more fluid approach to timing. This could be seen as a way of composing that is specific to distributed network performance, in which the qualities of the network do not simply affect the performance but rather have an influence on how the music itself is thought of and conceived.

Considerations on liveness
Reflecting on composing 'The Drift', Hope made some considerations regarding the liveness of the performance. Referring to whether the score should be altered by the data obtained from the String wearables in real time, she explains: I did feel that the ephemerality of the score was key; it had to move in real time, as driven by the performers during the concert. Liveness means little details are different every time, and the performers are much more 'embedded' into the composition.
In a project that makes extensive use of technology, Hope embraces the human factors that make each performance unique. The wearables are conceived as fluid interfaces between the bodies of the musicians and the network, rather than controllers designed to achieve an exact and repeatable result: Yet these variations in how each musician placed the Tensile on their body, along with how their fastening of it and its shifting as they performed, further served the intended purpose of the Tensile not as a controller, but as an interface which follows the movements that naturally occur when acoustic instruments are played.
Embracing liveness and not fully predictable outcomes in a network of diverse interconnected technologies creates an environment that affords experimentation, but that is also fragile and complex. Hope reflects on realizing this while working on 'The Drift': The complexity that [using real-time data from the wearable to generate animated scores live] created, however, soon became very apparent to me, and it was clear that a lot of the work we had to do would be in testing the technology rather than creating ideas or rehearsing music.

Wearables in 'The Drift'
Hope explains the relationship she sees between the gestural affordances of the wearables and her musical aesthetic: I also used the Tensile on viola and percussion because I thought they had the most dramatic arm movements. For example, the percussion hitting the tam-tam with a long acceleration, the movement of the viola bow. I liked the fact that these and the swinging movements were long and 'glissando', in line with my musical aesthetic.
The interconnected wearables, musicians' bodies, stages, and sounds are entangled further through Hope's animated score, thereby extending the network of musical things that ISS enacts. Hope explains how she conceives the data collected and transmitted by the wearables as the musicians move: '[it's] like an artefact of the sound and its making, rather than a different insight into the making of a sound.' In other words, the data generated by the wearables depends on the body movements required to play the instrument; it is not an independent process. The data is fed back into the network of musical agencies at the centre of the piece, affecting the score as well as the live electronics, thereby closing an interaction loop involving musicians, instruments, wearables, and sound.

Practical implications of composing for the network
Hope reflected on the implications of composing for musicians distributed in different locations: being able to come to Berlin and work closely with [clarinettist of KNM ensemble] Theo but then not having that same opportunity with the Piteå musicians shaped the work considerably. […] The result was that Theo ended up as a kind of soloist, even though that was not really what I had planned or written in the score. This reflection points to some implications of composing for a networked, distributed ensemble. The configuration of the network and its nodes had practical implications for how established creative practices unfolded during the project. The composer worked differently with the musician physically present on location and could not do the same with the remote ones, and this affected how the composition developed. Hope elaborates further on this: Telematically, the way people related to each other when in the same space was different than how they related over the Internet. For example, in Piteå, they had an 'acoustic' mix in addition to the recorded sound of them that Theo worked with. I think that might make it difficult to understand what each other were doing, across the telematic reach. The musicians together would rely very much on what they could hear from each other in the room, but Theo could only rely on what he was getting through the speakers in his place on stage.
Again, the topology of the network has implications for the performance that are not immediately obvious. Hope is, however, well aware that a networked performance should not be taken as a surrogate for an in-person one: If you're having a telematic performance where the performers are expecting it to be like an in-person thing, then you're going to fail. Whereas if you have a kind of agreement that what you're doing is designed for that platform, that it's not a poorer form, it's actually just a different one, and the affordances that come to you are particular. You go on a path of discovery. You have to agree to be interested in that journey, and I'm interested in that as a composer. How can I make music that encourages a rewarding telematic experience? I don't want to emulate in-person performance with my piece, but I want to try and create a new type of performance experience, and use composition to drive that. You have to get the performers in the state of mind where they're also ready for that adventure, and once all the technical stuff is taken care of, I would hope there's some new type of musical engagement. That's what happened.

Conclusion and future work
We provided an overview of the artistic project Interwoven Sound Spaces, which investigated telematic music performance enriched with interactive textiles in contemporary chamber music. The article described the technical and design development of the textiles, hardware, and the software system used to design the interactions. The second part of the article analysed how the system was put into use by one of the collaborating composers.
Hope reflects on latency, liveness, and the role of the wearables in the telematic performance of 'The Drift'. Latency is not seen as a drawback, but is used as a conceptual and aesthetic component of the composition. This allows for a way of composing in which strictly synchronous play between musicians is replaced by a more fluid approach to timing. Hope regards the wearables as fluid interfaces through which the bodies of the musicians correlate with the network. Actions are not controlled in a way that achieves exact and repeatable results, which could contribute to the perception of liveness in 'The Drift'. Hope gives an account of how she conceived the relationships between the musicians, the wearables, the data they generate, and the live score. This outlines the complex entanglement of interactions that networked music performance affords, indicating that networks of musical things may bring about new musical ecologies unique to the medium. The view that networked music performance is a medium in its own right, as opposed to a surrogate for physically co-present music performance, is further supported by Hope's considerations on her experience writing 'The Drift' for ISS.
In implementing Hope's interaction ideas for her piece, we tackled a set of technical and conceptual challenges. Firstly, we had to network the Decibel ScorePlayer application used by the composer with our system in order for it to access the data generated by the wearables at both locations. We implemented a simple solution that allowed the exchange of data between the systems via OSC. That was good enough for the piece, but we felt that a deeper and more flexible integration between the systems would have been desirable. This could probably be achieved more easily in an ecosystem designed following the IoMusT vision. On the other hand, we were pleased with the network infrastructure that we were able to put in place between the two locations, which gave us access to the data from all wearables without requiring much intervention from the musicians aside from turning on the devices and occasionally resetting them using a switch we placed on the side of the board. A more conceptual challenge we addressed had to do with mapping wearable data to the graphical score. With 18 continuous streams, the data obtained from the wearables was complex. To reduce dimensionality and obtain useful descriptors that could be mapped to the score application and the live electronics, we adopted a feature mapping approach (Visi and Tanaka 2021) which aggregated the data from the motion sensor of the String wearable to obtain an overall measure of motion activity for each clarinettist. We then implemented a way to easily adjust sensitivity with respect to the range of movement we wanted to obtain in the score. A challenge common to all pieces was to give the composers a clear sense of what actions the e-textiles respond well to and how the data obtained would behave. This was partly addressed in the workshop described in section 5. However, practical sessions were necessary in order for the composers to grasp the affordances of the system, denoting specificities that corroborate the idea that such platforms for networked music performance afford musical approaches that are unique to the medium.
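The aggregation step described above can be sketched as follows. This is a hedged Python illustration of collapsing a multichannel wearable frame into a single motion-activity descriptor with an adjustable sensitivity; the RMS aggregation, the channel layout, and the clamping range are assumptions, not the exact feature mapping used in the project.

```python
import math

def motion_activity(sensor_frame, sensitivity=1.0):
    """Aggregate a multichannel wearable frame (e.g. 18 continuous
    streams from motion sensors) into one 0..1 motion-activity value.

    The root-mean-square of the channels gives an overall energy
    measure; the sensitivity factor scales it before clamping, which
    mirrors the adjustable-sensitivity idea described in the text.
    """
    energy = math.sqrt(sum(v * v for v in sensor_frame) / len(sensor_frame))
    return max(0.0, min(1.0, energy * sensitivity))
```

A single descriptor like this is straightforward to route via OSC to both the score application and the live electronics.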
This work has a number of limitations that motivate future work. To better understand the different artistic approaches adopted by the composers, as well as the possibilities offered by ISS as a telematic music performance framework, we are planning interviews with the other composers who took part in the project. This will allow us to carry out further analysis as well as draw comparisons between Hope's approach and that of the other composers. In addition, we are planning to carry out interviews with some of the musicians from the ensembles to gain insights into the performers' perspective. We acknowledge that a deeper understanding of the audience's experience would also be valuable; this, however, is beyond the scope of the current project.
Continued work beyond this project will aim to gain more insight into the usability of interactive textiles in telematic music performances. Firstly, a technical evaluation of the system and its components is needed to understand the reliability and repeatability of the technologies involved. Secondly, user studies conducted with composers and musicians will provide more information about usability and ease of use by the musicians.

Stefan Östersjö is Chaired Professor of Musical Performance at Piteå School of Music, Luleå University of Technology. He received his doctorate in 2008 for a dissertation on musical interpretation and contemporary performance practice. Östersjö is a leading classical guitarist specialising in the performance of contemporary music. As a soloist, chamber musician, sound artist, and improviser, he has released more than thirty CDs and toured Europe, the USA, and Asia. He has collaborated extensively with composers and in the creation of works involving choreography, film, video, performance art, and music theatre.

Figure 1 .
Figure 1. The KNM cello player wearing the Tensile.

Figure 2 .
Figure 2. The textile for the Tensile as it is being knitted on an industrial manual knitting machine (left), and after being sewn into the wearable for the NEO percussionist (right).

Figure 3 .
Figure 3. The clarinet player of KNM wearing the String wearable.

Figure 4 .
Figure 4. The Rug under a musician's feet.

Figure 5 .
Figure 5. ISS circuit board.
The two venues are shown in Figure 6: Konzertsaal der UdK Berlin (a) and Studio Acusticum Stora Salen in Piteå, Sweden (b). The concert was also presented online through two live streaming video feeds hosted by UdK's own udk/stream platform. The recordings of the videos livestreamed from Berlin (F. Visi et al. 2023a) and Piteå (F. Visi et al. 2023b) were made publicly available. An audience was present at both locations. Attendance in Berlin can be approximately estimated, as reserving a free ticket was required to attend: 122 tickets were pre-booked. The concert programme was performed only once. Both stages featured three interactive rugs and three 65-inch LCD screens, each placed on a stand and set in portrait orientation. Each screen displayed a live video feed showing the nearly full figure of one of the musicians on stage at the other location, and the screens were large enough to be seen by the musicians on stage as well as by the audience (see Figures 7 and 9).

Figure 7. Figure 8.
Figure 7. Two stills of the concert from the live streaming video feeds: view of the stage at Konzertsaal der UdK Berlin (a); view of the stage at Studio Acusticum Stora Salen (b).

Figure 9 .
Figure 9. The screens on stage at Konzertsaal der UdK in Berlin displaying the musicians performing at Studio Acusticum in Piteå (photo: Nikolaus Brade). The picture was taken during the performance of Cat Hope's piece 'The Drift'. The score was displayed on the screens placed on the floor in front of the musicians.

Figure 10 .
Figure 10. A screenshot of the performers' view of 'The Drift', showing the 'density playhead' over the score, and score movement indicated with black arrows. Each part is colour-coded (not shown in the black and white print version of the article). Green is percussion, blue is viola, purple and pink are the contrabass clarinet parts. The electronic parts are indicated by the finely dashed lines: the colour indicates the wearable source, and the shape provides a guide for the electronics musician. The straight, dashed horizontal line serves as a pitch reference for the performers; in this example, for one of the contrabass clarinets.

Notes
focuses on experimental material development, specializing in the intersection of heritage hand-weaving techniques, technology, and textile engineering. Philipp Gschwendtner is a media artist, freelance writer, and programmer. He is currently studying in the M.A. Design & Computation at UdK Berlin and TU Berlin and holds a B.Sc. in Media Technology. His work reflects current developments in technology from a media theory perspective. Professor Cat Hope is a composer, musician, artistic director, and academic. She is the co-author of 'Digital Arts: An Introduction to New Media' (Bloomsbury, 2014), co-editor of 'Contemporary Virtuosities' (Routledge, 2023), and director of the Decibel new music ensemble. She is a Professor of Music at the Sir Zelman Cowen School of Music and Performance at Monash University, Melbourne, and a Churchill, Civitella Ranieri and Hamburg Institute of Advanced Study Fellow.