Entropy, pitch, and noise: organisation and disorganisation in the perception of closure for different types of spectra

In this article, we develop a hypothesis on the role of spectral entropy in conveying a sensation of termination for short musical fragments. We tested the hypothesis in an experiment presenting spectral sequences involving sounds categorised into three families (pitch, bell, or noise) and asked 64 participants to rate their levels of closure on a five-point Likert scale. The results highlight agreement across participants in rating stimuli moving from organised (e.g. pitch) to disorganised spectra (e.g. bells, noise) as more complete than transitions in the opposite direction.


Introduction
In 1948, the composer Pierre Schaeffer started to work on a sound study called Étude aux Chemins de Fer at his Studio d'Essai in Paris. The musical piece, which prevalently included sounds produced by 'six locomotives at the Batignolles station' (Palombini, 1993) and was part of a series of works by the name of Cinq études de bruits (Five Studies on Noise), soon became a representative example of an emerging interest among contemporary classical and electroacoustic composers in combining noisy materials for avant-garde music. The work was broadcast by the French radio in a radiophonic concert on 5 October 1948, and the interest gained by his approach to sound allowed Schaeffer to publish the article Introduction à la musique concrète in 1950, which became an important document delineating the principles of musique concrète.
Today, noisy sounds and chaotic components have become important elements in the composition of electroacoustic and instrumental works, yet the interrelations between pitched sounds and noises as materials for musical structuring have arguably received little attention. Previous noteworthy studies on the side of musical composition include research by Emmerson (1986) and Fischman (2007), who attempted to design a referential system for combining different sound types based on aural and mimetic principles. Other studies on the theme include research in the fields of music analysis and aesthetics. For example, Thoresen (2007) developed an expanded typomorphology of sound objects based on a previous attempt by Schaeffer in his 1966 Traité des Objets Musicaux, categorising sound objects according to three sound spectrum criteria: pitched sounds (sounds that have a clearly perceivable pitch or fundamental), dystonic sounds (sounds formed by a mixture of pitched elements and clusters, e.g. gongs), and complex or unpitched sounds (sounds with no perceivable fundamental, e.g. noises). However, while Schaeffer's 1948 experiment already posed important questions on how to organise such types of sounds musically within a composition, subsequent research investigating their potential combinations did not produce specific perceptual models to help simplify relational strategies for musical structuring.
To address this question, we approached the concept of closure as a strategy to investigate the perceptual response of people to the combination of the three spectrum criteria highlighted by Thoresen (2007). Closure is a concept that has been thoroughly investigated in other musical domains, specifically in connection to harmony (Sears, 2015; Smit et al., 2020) and melody (McAdams, 2004; Narmour, 1990; Vos & Pasveer, 2002).
In such examples, studies based on harmony are usually confined to the principles of tonality or intervallic relationship, in which sound spectra are conceived as a combination of distinct tones potentially regulated by specific Eastern or Western harmonic systems. A question therefore arises on how closure perception may work for sound combinations whose spectra are not built upon a specific harmonic environment, as in the case of inharmonic/dystonic and noisy sounds.
Some approaches in this direction may be identified in the theoretical literature associated with timbre, in relation to more traditional frameworks such as tension/release (Pressnitzer et al., 2000) and consonance/dissonance (Lerdahl, 1987). Lerdahl (1987) proposed that timbre could be organised hierarchically, implying that it may be subject to structural principles similar to those underlying the dimensions of pitch and harmony. In line with these approaches, Farbood and Price (2017) suggested that inharmonicity and tension perception may be correlated. In an experiment framing inharmonicity within the tension/release model, they asked participants to identify which sonic stimuli sounded more tense among a selection of spectra categorised into either harmonic or inharmonic structures. The results suggest that an increase in spectral inharmonicity (offset from integer multiples of the fundamental frequency) may correspond to an increase in perceived tension. In contrast to these approaches, Dubnov (2006) proposes that an increase in spectral inharmonicity may impair musical expectancy, as the introduction of noise and random processes would decrease the level of information in the signal available for assessing probabilities of future events. Although the two approaches may be linked by the idea that difficulty in estimating probabilities for future events may indeed raise tension in the listener, they also show some contradiction when put in relation to older music-theoretical models based on the concepts of dissonance and consonance, as dissonant chords are often thought to make listeners expect a resolution of the tension produced by the dissonance. For example, composer Kaija Saariaho (1987) has described conceiving pure and noisy spectra in terms of consonance and dissonance within her practice, with a 'rough, noisy texture [being] parallel to dissonance, whilst a smooth, clear texture would correspond to consonance' (p. 94).
However, in the musical literature, we can also find examples in which noises are used as a resolution or termination of a musical process. For example, in Time and Motion Study I by Brian Ferneyhough (Figure 1), the composer develops a musical piece characterised by a series of gestural and microtonal events played by a bass clarinet. The entire body of the composition is developed through the use of full tones, silences, sound masses, and microtonal elements. After consistent use of full tones, the piece ends by fading out the resonating tones to highlight the noise produced by the performer's blowing into the clarinet's tube (min. 9:26), slowly terminating the piece through an aleatoric series of percussive impulses produced by the clarinet's keys tapped onto the instrument without air pressure (min. 9:32). Other works present similar closural patterns, although they may look different on the surface, such as Schall by composer Horacio Vaggione (Figure 2). The work builds on sound objects characterised by resonating pitches and glass sounds. At min. 2:07, a first explosive inharmonic noisy gesture suggests a closure, followed by an aleatoric series of noisy grains that introduce the second section of the piece. In particular, the piece presents a closural pattern at min. 6:35, characterised by a transition from pitched glass sounds into an aleatoric series of noisy grains concluding the musical work (min. 6:47).
In our investigation, we have chosen to address the topic of timbre and the theme of closure in connection to the concept of entropy, in line with the view presented by Dubnov (2006). Researchers in fields related to music psychology and physics have introduced innovative approaches connecting sound and entropy from the perspectives of musical understanding and emotions (Dubnov, 2006; Manzara et al., 1992; Mihelac et al., 2018; Xie et al., 2022), as well as music theory, analysis, and complex systems (Berezovsky, 2019; Ferrand et al., 2002; Mihelac & Povh, 2020). Two concepts related to entropy should be discerned in relation to the scope of the current study: those of sequence entropy and spectral entropy. One field of research (Ferrand et al., 2002; Manzara et al., 1992; Mihelac et al., 2018; Mihelac & Povh, 2020) is concerned with the effects of sequence entropy on the understanding and appreciation of music, where sequence entropy estimates the degree of uncertainty experienced by the listener at any given time during musical listening, in relation to cultural features acquired through exposure to music. Through this approach, Mihelac et al. (2018) found that listening difficulty is inversely correlated with musical appreciation, with participants showing increasing patterns of appreciation, recognition, and repeatability on the second hearing of a piece when exposed to unknown musical styles, suggesting that barriers to comprehension (noise/entropy) decreased on second hearings. In a subsequent study, Mihelac and Povh (2020) confirmed these results in relation to the harmonic complexities of musical styles, reporting that 'measures of complexity are consistent and are together with the musical style important features explaining the musical acceptability', with entropy describing levels of harmonic complexity. This approach is in line with Manzara et al. (1992), who studied the entropic profiles of Bach's chorales 151 and 61, observing a decrease in sequence entropy at their close, corresponding to a simplification of the musical texture towards the end of both pieces.
The second concept, which informs the current study, involves the notion of spectral entropy. In this sense, Dubnov (2006) suggests that 'complex musical signals, such as polyphonic or orchestral music that contain simultaneous contributions from multiple instrumental sources, often have a spectrum so dense that it seems to approach a noise-like spectrum' (p. 63). However, such complex musical signals differ from noisy and random spectra, since noise appears to our perception as a rather simple signal even though it can be described as mathematically complex. As a result, measures of sequence entropy (describing levels of musical complexity) and spectral entropy (measuring the organisation of a spectrum) may differ. The current study aims to investigate the influence of spectral entropy on the perception of closure for sonic stimuli composed of organised and disorganised spectra. In the following sections, we report a perceptual experiment in which 64 participants rated the level of closure of 160 stimuli presenting stepwise transitions (henceforth 'sequences') between pitch, dystonic (bells and gongs), and noisy spectra. Within our undertaking, entropy is framed as a measure of the randomness constituting a chaotic system (Gaspard & Wang, 1993), with white noise characterised by maximum entropy (Shannon, 1948) and a rise in entropy leading to loss of information within a system (Baez et al., 2011). Differently from the aforementioned approaches to entropy as a descriptor of complexity levels, we considered entropy as a descriptor of the type of organisation of a spectrum, in which pitches represent instances of organised spectral structures, while bell sounds and noises represent instances of disorganised spectral structures with increasing levels of disorganisation.

Description and objectives
Interested in the potential use of noise as closural material, as presented in the above musical examples, we ran an investigation into the influence of pitch, bell (dystonic), and noisy spectra on the perception of closure for short musical fragments. Sixty-four participants were asked to rate the 'degree of cadence, closure or completeness' of 46 sonic stimuli on a five-point Likert scale. We included both between- and within-subject designs to improve the power and accuracy of the data for analysis, by asking participants to rate multiple instances of similar types of sequences. A short description of the concept of cadence was provided, reporting that 'the concept of cadence is often associated with a feeling of closure or completeness of a melodic or harmonic line, to a moment in which the musical fragment has concluded and we do not expect it to continue.' The experiment lasted approximately five minutes in total (552 s), and listeners partook in the test voluntarily without compensation. Participants could listen to each excerpt multiple times.

Environment
The perceptual experiment was carried out online. A website was developed to allow the participants to stream the stimuli independently and provide a rating for each of them. The collected dataset is publicly available in Danieli (2023). Participants were advised to use headphones when partaking in the experiment, although this condition was not mandatory. Stimuli were normalised to provide similar volumes for all excerpts. Further criteria for controlling the listening environment were not developed, as the study was conceived to compare sound categories that are very diverse. In this sense, differences among listening environments were assumed not to introduce systematic effects impacting the data collection at this exploratory stage.

Stimuli
Throughout the experiment, we used a total of 160 stimuli in the form of sonic excerpts, synthesised with the SuperCollider software. The sonic excerpts were categorised into six types of spectral sequences, as reported in Table 1. Each stimulus presented only one type of sequence and was formed of two complementary parts: (1) an introductory section (fade-in) with a duration of 4.5 s, presenting an exponential envelope growing to a factor of 0.85; (2) a release section with a duration of 5 s, characterised by an initial amplitude of 1 and an exponential envelope decreasing to null. Sections (1) and (2) presented different types of spectra, and a short rest of 0.5 s separated them, for a total stimulus duration of 10 s. For example, stimuli presenting stepwise transitions from pitch to bell sounds (sequence code: p→b) used the sounds of traditional musical instruments (e.g. oboe) for the introductory section and the sounds of inharmonic percussive instruments (e.g. a gong) for the release section. SuperCollider's FreeVerb reverb was applied with default parameters to all excerpts, to improve the naturalness of the stimuli.
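To make the envelope scheme concrete, the amplitude profile described above can be sketched as follows. This is a minimal numpy illustration, not the original SuperCollider code; the sample rate and the exact curvature of the exponential segments are our assumptions.

```python
import numpy as np

SR = 44100  # assumed sample rate

def stimulus_envelope(sr=SR):
    """Amplitude envelope of one stimulus: 4.5 s exponential fade-in to 0.85,
    a 0.5 s rest, then a 5 s exponential release from 1 towards zero."""
    eps = 1e-4  # exponential segments cannot start or end at exactly zero
    n_in, n_rest, n_out = int(4.5 * sr), int(0.5 * sr), int(5.0 * sr)
    fade_in = 0.85 * np.geomspace(eps, 1.0, n_in)  # section (1), peaks at 0.85
    rest = np.zeros(n_rest)                        # 0.5 s separating rest
    release = np.geomspace(1.0, eps, n_out)        # section (2), decays to ~0
    return np.concatenate([fade_in, rest, release])

env = stimulus_envelope()
print(len(env) / SR)  # -> 10.0 (total stimulus duration in seconds)
```

Multiplying this envelope sample-by-sample with a 10 s source sound would yield a stimulus with the timing structure described in the text.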
Three sound families (pitch, bell, noise) were used to design the stimuli, and each stimulus always presented two different sound families, one used for the introductory section (1) and a different one used for the release section (2). For each sound family, we had four different spectra at our disposal. The 'pitch' family included (p1) a clarinet sound with pitch F4; (p2) an oboe sound with pitch E4; (p3) an alto saxophone sound with pitch G4; (p4) a violin sound with pitch F#4. All sounds were downloaded from the Music Technology Group's profile on FreeSound.org, to enhance the coherence of the recordings among pitched instruments. We selected sounds with different pitches to avoid repeating the same tone throughout the experiment, which would risk becoming a reference for rating. In addition, either the introductory or the releasing sound was randomly retuned in a range from 1x to 1.3x speed, to ensure pitch independence across excerpts and intervallic independence between sections.
The 'bell' family also included four spectra downloaded from the FreeSound.org website. Sound (b1) was recorded by AncentOracle with id 476871; (b2) was recorded by Khrinx with id 333694; (b3) was recorded by Veller with id 209894; (b4) was recorded by Exotonestudio with id 416992. These sounds from the 'bell' family were presented at three different octaves. Indeed, presenting stepwise transitions (e.g. pitch to bell, p→b) without modifying the sounds' octaves would result in sequences characterised by a constant intervallic direction between fundamentals. By presenting each bell sound in three variations throughout the experiment (0.5x, 1x, 2x speed), we ensured that fundamentals could appear either above or below the previous fundamental for all types of sequences.
Finally, sounds in the 'noise' family included three types of snares (n1, n2, n3) and a digital white noise produced by our team (n4). The snares were also downloaded from FreeSound.org: (n1) was recorded by robblesurp with id 3148; (n2) was recorded by kaonaia with id 131363; (n3) was recorded by hookhead with id 13750. Sounds pertaining to the 'noise' family were filtered to remove any perceivable pitches due to resonating factors.
All sounds were deprived of their attacks and frozen to achieve a total duration of 10 s through high-density granular synthesis with random onset jitter, and a loudness normalisation function in Audacity was applied to bring the perceptual loudness of all sounds to similar levels. A spectral representation of the sounds used is provided in Figure 3.
In order to compare the experimental results to the entropy values of the sounds, we calculated their entropy, as defined by Shannon (1948) and implemented in the MIRtoolbox (Lartillot & Toiviainen, 2007). As expected, the mean entropy decreased from noise to pitch sounds, yielding 0.84 for noise, 0.68 for bells, and 0.56 for pitch sounds.
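As a rough cross-check outside MATLAB, the normalised Shannon entropy of a magnitude spectrum can be computed along the following lines. This is a minimal Python sketch; the FFT size and the absence of windowing are our simplifications, so values will not match MIRtoolbox exactly.

```python
import numpy as np

def spectral_entropy(signal, n_fft=2048):
    """Shannon entropy of the magnitude spectrum, normalised to [0, 1]:
    a concentrated (pitched) spectrum yields low values, while noise
    yields values close to 1."""
    mag = np.abs(np.fft.rfft(signal, n_fft))
    p = mag / mag.sum()          # treat the spectrum as a distribution
    p = p[p > 0]                 # avoid log(0)
    h = -np.sum(p * np.log(p))
    return h / np.log(len(mag))  # divide by the maximum attainable entropy

rng = np.random.default_rng(0)
t = np.arange(44100) / 44100
tone = np.sin(2 * np.pi * 440 * t)   # organised spectrum (single pitch)
noise = rng.standard_normal(44100)   # disorganised spectrum (white noise)
print(spectral_entropy(tone) < spectral_entropy(noise))  # -> True
```

The ordering tone < bell < noise reported above follows the same logic: the flatter the spectrum, the closer the normalised entropy is to 1.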

Procedure
The sounds in Figure 3 were combined into 160 different excerpts. We divided the 160 excerpts into eight groups containing 20 excerpts each. Every group included 6 instances of spectral sequence p→b, 6 instances of type b→p, 2 instances of type b→n, 2 instances of type n→b, 2 instances of type p→n, and 2 instances of type n→p, randomly chosen from the whole set of stimuli.
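The grouping scheme can be sketched as follows. This is a hypothetical illustration: the excerpt identifiers and pool sizes are placeholders, not the actual stimulus database.

```python
import random

random.seed(0)
# excerpts per group, by sequence type, following the quotas stated above
quota = {'p→b': 6, 'b→p': 6, 'b→n': 2, 'n→b': 2, 'p→n': 2, 'n→p': 2}
# placeholder excerpt identifiers; the real per-type totals are in Table 1
pool = {k: [f'{k}#{i}' for i in range(48)] for k in quota}

def make_group():
    """Assemble one 20-excerpt group: a fixed quota per sequence type,
    random choice within each type, then randomised presentation order."""
    group = [e for k, n in quota.items() for e in random.sample(pool[k], n)]
    random.shuffle(group)
    return group

print(len(make_group()))  # -> 20
```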
During the test, listeners were presented with two random groups, one after the other, for a total of 40 excerpts. The order of the excerpts within each group was randomised. A short training session consisting of six excerpts was added at the beginning of the experiment and was not included in the analysis. The six stimuli in the training session included one excerpt for each type of sequence. The complete experiment therefore included a total of 46 stimuli (6 training excerpts + 40 excerpts for analysis). The training group was common to all participants, and the excerpts included in it did not differ among participant groups. The order of the stimuli within the training group was also randomised. A schematic representation of the experiment is provided in Figure 4.

Participants
Sixty-four listeners participated in the experiment, for a total of 2560 ratings. 16 participants considered themselves as not having a professional education in music; 30 considered themselves music students at universities or conservatoires; 18 considered themselves professors of music at universities or conservatoires. 10 participants reported they only listen to classical or pop music; 32 reported they regularly listen to music with noisy components; 22 reported being composers of contemporary music with noisy components. Five participants reported not knowing the concept of cadence, while the rest of the population did. Participants were invited to partake in the experiment through an advertisement posted on various Facebook groups related to music and arts (e.g. 'Electroacoustic composers', 'Contemporary classical music', 'IRCAM').

Analysis
We analysed the retrieved data in three ways: (1) Main effects: we performed an ANOVA among the involved variables to retrieve information on main effects and interactions. (2) Effect of direction: we compared spectral sequences with the same sound families but opposite directions (p→b vs. b→p, b→n vs. n→b, p→n vs. n→p) to understand whether movements from lower (e.g. pitch) to higher entropic states (e.g. noise) would result in higher average ratings compared to movements from higher (e.g. noise) to lower entropic states (e.g. pitch).
(3) Effect of last sound: we also compared the effects of the three families when presented as closing sounds, to understand whether stimuli ending with noise (...→n) would present a higher average rating compared to stimuli ending with pitch (...→p) or bell sounds (...→b).

Main effects
We performed an ANOVA among the involved variables to identify the main effects on the perceived completeness/closure and their interactions. The results of the ANOVA are reported in Table 2. The first ANOVA included the type of sequence (6 levels: p→b, b→p, b→n, n→b, p→n, n→p) and the expertise of the participants (3 levels: non-professionals, students, professionals). The analysis highlighted that both the type of sequence and the expertise of the participants have a main effect on ratings, but no interaction is present between them. In addition, as the last sound is directly related to the type of sequence, we performed a second ANOVA to analyse the main effect of the last sound on the ratings. The results highlighted a main effect for both the last sound (3 levels: ...→p, ...→b, ...→n) and the expertise of the participants, as well as an interaction between them.
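As a simplified, simulated illustration of this first analysis: the study's model is a two-way ANOVA with interaction, but the sketch below tests only the main effect of sequence type, on invented ratings, using scipy's one-way ANOVA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical mean ratings per sequence type -- illustrative values only,
# not the study's data
means = {'p→b': 3.5, 'b→p': 2.7, 'b→n': 3.3, 'n→b': 3.2, 'p→n': 3.7, 'n→p': 2.6}
groups = [np.clip(rng.normal(m, 1.0, 200), 1, 5) for m in means.values()]

# one-way ANOVA across the six sequence types
f_stat, p_value = stats.f_oneway(*groups)
print(p_value < 0.05)  # with these simulated means, a main effect is detected
```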

Effect of direction
We performed a pairwise comparison between the types of sequences characterising the design of the stimuli (SequenceType: p→b vs. b→p, b→n vs. n→b, p→n vs. n→p) for the whole population, since no interaction with ExpertiseLevel was identified through the first ANOVA in Table 2. The significance of differences in the pairwise comparisons was determined by the Wilcoxon rank sum test with Bonferroni-Holm correction, to account for the increased probability of obtaining Type I errors. The effect size value r was calculated from the z-value as proposed by Rosenthal (1994). Note that 0.1 ≤ r < 0.3 indicates small effects, 0.3 ≤ r < 0.5 medium effects, and r ≥ 0.5 large effects. The effect size was included to provide a robust indicator that is less sensitive to the number of data points than the p-values.
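The testing procedure described above can be sketched on simulated ratings as follows (scipy's `ranksums` returns the z-statistic directly; the step-down adjustment and the effect size r = z/√N follow the Bonferroni-Holm and Rosenthal conventions named in the text; the rating shift between groups is invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical five-point ratings for one contrast (p→b vs. b→p)
ratings_pb = np.clip(np.round(rng.normal(3.6, 1.0, 300)), 1, 5)
ratings_bp = np.clip(np.round(rng.normal(2.8, 1.0, 300)), 1, 5)

res = stats.ranksums(ratings_pb, ratings_bp)
n_total = len(ratings_pb) + len(ratings_bp)
r = abs(res.statistic) / np.sqrt(n_total)  # Rosenthal's r: z / sqrt(N)

def holm(pvals):
    """Bonferroni-Holm step-down adjustment of a set of p-values."""
    order = np.argsort(pvals)
    m = len(pvals)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[idx])
        adjusted[idx] = min(running_max, 1.0)
    return adjusted

print(r > 0.1)  # -> True: a non-negligible effect for this simulated shift
```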
The results of the pairwise comparison show that spectral directions from bell to pitch sounds (b→p) are rated significantly lower (p < .001, r = .3871) than directions from pitch to bell sounds (p→b). A similar result (p < .001, r = .4793) is present for directions from noise to pitch (n→p) as compared to directions from pitch to noise (p→n). No significant difference and no effect (p = .4793, r = .0355) was identified for directions from bell to noise (b→n) against directions from noise to bell (n→b). A visual representation of mean values and corresponding 95% confidence intervals is provided in Figure 5.
For each SequenceType, we calculated the difference between the average entropy of the first sound and that of the last sound. We found a correlation of 69% between these differences and the average ratings. When focussing only on the significant pairwise differences in the ratings, i.e. excluding sequences that combine noise with bell sounds, the correlation increased to 94%.
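The correlation computed here can be reproduced in outline as follows, using the mean entropy values given earlier (pitch 0.56, bell 0.68, noise 0.84); the mean ratings below are illustrative placeholders, not the study's data.

```python
import numpy as np

entropy = {'p': 0.56, 'b': 0.68, 'n': 0.84}  # mean entropies reported above
sequences = ['pb', 'bp', 'bn', 'nb', 'pn', 'np']
# entropy change from the first sound to the last sound of each sequence
delta = np.array([entropy[s[1]] - entropy[s[0]] for s in sequences])

# placeholder mean closure ratings per sequence type (invented values)
mean_ratings = np.array([3.4, 2.7, 3.2, 3.1, 3.6, 2.6])

r_all = np.corrcoef(delta, mean_ratings)[0, 1]

# restrict to the contrasts that reached significance (drop the b/n pairs)
keep = [0, 1, 4, 5]
r_sig = np.corrcoef(delta[keep], mean_ratings[keep])[0, 1]
print(r_sig > r_all)  # -> True: dropping the b/n pairs strengthens the fit
```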

Effect of last sound
We performed a pairwise comparison between stimuli ending with different sound families (pitch, bell, noise), to understand whether the type of closing sound has an effect on the ratings. The results of the pairwise comparison identified significant differences in ratings across the whole population. Similarly to the results obtained for the effect of direction, stimuli ending with pitch sounds (...→p) are rated significantly lower than stimuli ending with either bell sounds (...→b, p < .001, r = .3999) or noise (...→n, p < .001, r = .4700). The results also show a significant difference in ratings between stimuli ending with bell sounds (...→b) and noise (...→n), with stimuli of the former type rated significantly lower than stimuli of the latter type overall (p = .0016, r = .1112). A visual representation of mean values and corresponding 95% confidence intervals is provided in Figure 6, left graph. The correlation between these mean values and the calculated entropy values of the last sound yielded 93%.
Since the second ANOVA reported in Table 2 identified an interaction between the level of expertise of the participants and the type of sound closing the stimuli, we performed a pairwise comparison for the variable LastSound by ExpertiseLevel (Table 3). The comparison by groups highlighted that stimuli ending with bell sounds and noise (...→b vs. ...→n) present no significant difference (p = .6704, r = .0299) for the group of participants who described themselves as not having a professional education in music. On the contrary, the contrast presents an effect for both the group of participants who described themselves as students in conservatories or music universities (p = .023, r = .1167) and participants who described themselves as professors or researchers in music (p < .001, r = .0002). In our results, the perceptual difference between stimuli ending with noise or bell sounds increases with the expertise level of the participants, as indicated by an increase in the mean difference, an increase in the effect size, and a decrease in the p-value.
For all groups of participants, stimuli ending with pitch were perceived as significantly less complete than those ending with bell (p < .001, r ≥ .3368) or noise (p < .001, r ≥ .3653). A visual representation of mean values and corresponding 95% confidence intervals is provided in Figure 6, right graph. When comparing the correlation between the average ratings and the entropy values of the last sound for the different groups of participants, we can see an increase in the correlation with expertise: while for the participants without a professional education in music the correlation was 76%, for students in conservatories or music universities it was 94%, and for professors and researchers in music it was 97%.

Discussion
The present study offers novel insights into the potential contribution of spectral structure to the perception of musical closure. According to our findings, participants agreed in considering sonic sequences ending with pitched spectra less complete than those presenting bell and noisy spectra at the close. The retrieved data suggest that the organisation of spectral structures may be a relevant parameter in the perception of musical closure, with disorganised structures more apt to convey a sensation of termination. As a numerical descriptor of the organisation of spectral structure, we calculated the spectral entropy of the employed sounds, obtaining higher entropy values for disorganised structures.
In particular, the effect of last sound suggests that the level of disorganisation of the musical materials and the level of perceived termination may be directly correlated for participants with expertise in musical listening. It is not clear whether this behaviour should be considered a result of their exposure to twentieth-century musical practices, with the observed phenomenon now embedded in the cultures of contemporary classical and electroacoustic music, or be attributed to the enhanced analytical skills of music professionals. The former explanation would provide information on historical compositional trends that may have implicitly characterised the writing of music in the past century, while the latter would advocate a natural predisposition to consider disorganised spectra as closing materials.
A major observation emerging from our study is that listeners of any type can accurately discern between organised and disorganised spectra in short fragments. This would suggest that listeners can correctly identify entropic processes, and that processes increasing the level of disorder may convey a feeling of closure, eventually reframing such processes within a tension/release model. This approach may imply a reference to the field of natural sciences, specifically to the second law of thermodynamics (Ben-Naim, 2022; Berezovsky, 2019), where entropy represents the tendency of a closed system (complex or simple) to move 'from a more ordered state to a less ordered state' (Takeda, 2009, p. 17). The approach that led us to design the current experiment consisted in supposing a parallelism between a musical piece and a complex system with emergent properties, as described in the field of statistical mechanics (Berezovsky, 2019). In our assumption, the evolutionary formation of human perception might be shaped by the experience of real-world phenomena driven by entropic processes of this kind, eventually training perception to recognise such phenomena and retrieve information from them.
This observation should not be interpreted as implying that closural patterns in music always present entropic processes, as diverse and radically different solutions may emerge from the internal characteristics of a musical piece. Similarly, it is also unknown how strong this effect may be in real musical situations, with other more traditional parameters (e.g. tonality) taking priority in musical perception. However, our study suggests that entropic processes may be used in the composition of new music to strengthen closural effects. In this perspective, two types of entropy may be distinguished: one representing processes that are typical of natural systems and present an increase of entropy over time (e.g. spectral entropy); the other representing an eventual increase of complexity within a system (e.g. sequence entropy), with entropy reaching higher values in the middle section of a musical piece (e.g. Manzara et al., 1992), which does not necessarily correspond to an increase of disorder within the system.
It is also uncertain whether the terms 'organisation' and 'disorganisation' are the right ones to describe the spectral profiles presented in the article, and the current study opens up opportunities for further investigation. Indeed, the current study could not find statistical evidence on the role of direction for those stimuli combining bell and noisy sounds within the same excerpt (see Effect of direction). If noises are considered spectral structures with the highest level of internal disorganisation, we would expect a difference in ratings between stimuli b→n and n→b. We could not observe such a difference, which may suggest that contrasts between disorganised structures are more difficult to detect.

Conclusions
The present study investigated the perception of musical closure in relation to sonic sequences combining sounds categorised into three families: pitch, bell, and noise. Pitch sounds were considered forms of spectral organisation; bells and noises were considered to be characterised by an increase in spectral entropy. Sixty-four listeners partook in a web-based experiment and rated the level of perceived closure for musical excerpts moving from one family to another. The results highlighted that stimuli ending with pitch sounds are consistently rated less complete than those ending with bell sounds and noise. This finding is complemented by a calculation of the sounds' spectral entropy, revealing high completeness ratings for stimuli ending with high-entropy sounds.
The study provides useful data in line with emerging approaches to music theory involving the concepts of entropy and emergence (Berezovsky, 2019; Mihelac & Povh, 2020). The correlation between the level of disorganisation of sound materials and the ratings observed in our analysis of the effect of last sound raises a number of interesting questions for music theory, and the current focus on the perception of closure may help draw innovative connections between traditional frameworks used in the field of music theory and more recent approaches related to the field of information theory. Although the study originated from a hypothesis, we believe it would be more appropriate to consider the present experiment an exploratory study. From a scholarship perspective, the finding is interesting as it represents a supported yet distinct departure from previous research relating entropy and musical meaning: more traditional undertakings in the field may instinctively associate a tonal closure (such as an authentic cadence) with a motion from higher entropy (a dissonance, indicating complex information content) to lower entropy (the tonic), with the tonic conceived as a point of repose conveying lower levels of uncertainty (Cox, 2010). In such a framework, entropy would indicate a moment of complexity and tension (a dissonance), while in our approach entropy indicates a decrease in structure towards a chaotic form, in which no conjectures about future events can be made. Overall, the article brings an innovative contribution to the topic of music and entropy, providing information on a perceptual trend that may find important applications in the field of electroacoustic music composition; further replications may provide additional insight into the effects of entropic processes on the perception of musical closure and segmentation.

Figure 3. Spectral representation of the 20 sounds used for the design of the stimuli.

Figure 4. Schematic representation of the experiment.

Figure 5. Visualisation of ratings (mean values and corresponding 95% confidence intervals) dependent on the sequence type.

Figure 6. Visualisation of ratings (mean values and corresponding 95% confidence intervals) as a function of the last sound: all participants (left); the three participant groups (right).

Table 1. List of spectral sequences used in the experiment and their quantities. The column 'Sequence code' reports the identifiers used within the text to indicate types of spectral sequences; the column 'Spectral sequence type' presents introductory and release sound types for each identifier; the column 'Total number of excerpts' reports the number of excerpts in the whole database for each identifier; the column 'Number of excerpts for each participant' reports how many of these excerpts were presented to the listener within each experiment.

Table 2. ANOVA comparisons for SequenceType and LastSound. Note. The symbol '*' means 'interaction between' variables. The first ANOVA table reports the results for the variables SequenceType and ExpertiseLevel. The second ANOVA table reports the results for the variables LastSound and ExpertiseLevel. SequenceType represents the type of sequence, with the six levels presented in Table 1. ExpertiseLevel represents the degree of musical expertise of listeners, with three levels: (1) amateurs/non-professionals, (2) students, and (3) professionals/professors. LastSound represents the type of sound category used to conclude the stimuli, with three levels: (1) stimuli ending with pitch sounds, (2) stimuli ending with bell sounds, (3) stimuli ending with noises.

Table 3. Pairwise comparison for Analysis 2, LastSound by ExpertiseLevel. Note. The contrast ...→p vs. ...→b indicates a comparison between stimuli ending with pitch and bell sounds respectively. The contrast ...→p vs. ...→n indicates a comparison between stimuli ending with pitch sounds and noise respectively. The contrast ...→b vs. ...→n indicates a comparison between stimuli ending with bell sounds and noise respectively.