The neural activation patterns evoked by music listening can reveal whether a subject did or did not receive music training. In the current exploratory study, we have approached this two-group (musicians and nonmusicians) classification problem through a computational framework composed of the following steps: acoustic feature extraction; acoustic feature selection; trigger selection; EEG signal processing; and multivariate statistical analysis. We are particularly interested in analyzing the brain data on a global level, considering the activity registered in electroencephalogram (EEG) signals at a given time instant. The results of our experiment with 26 volunteers (13 musicians and 13 nonmusicians), who listened to Hungarian Dance No. 5 by Johannes Brahms, show that it is possible to linearly differentiate musicians and nonmusicians with classification accuracies that range from 69.2% (test set) to 93.8% (training set), despite the limited sample sizes available. Additionally, given the whole-brain vector navigation method described and implemented here, our results suggest that it is possible to highlight the most expressive and discriminant changes in the participants' brain activity patterns depending on the acoustic feature extracted from the audio.

Music places a high neural demand on those who play it (Münte, Altenmüller, & Jäncke, 2002; Peretz & Zatorre, 2005) and is an important tool for understanding the organization of the human brain (Münte et al., 2002; Peretz & Zatorre, 2005; Schlaug, 2015). In the last two decades, the processing of music by our brain has attracted the attention of researchers worldwide, and a number of scientific works have identified neural activation differences between musicians and nonmusicians in distinct experiments using wearable and non-wearable technologies, mainly EEG (electroencephalography) and fMRI (functional magnetic resonance imaging), respectively (Liang, Hsieh, Chen, & Lin, 2011; Mikutta, Maissen, Altorfer, Strik, & Koenig, 2014; Münte et al., 2002; Peretz & Zatorre, 2005; Saari, Burunat, Brattico, & Toiviainen, 2018; Schlaug, 2015; Tervaniemi, Castaneda, Knoll, & Uther, 2006; Virtala, Huotilainen, Partanen, & Tervaniemi, 2014; Vuust, Brattico, Seppanen, Naatanen, & Tervaniemi, 2012).

In experiments combining music listening and EEG signal analysis, past studies focused on understanding the neural processing of artificial stimuli that rely on controlled auditory paradigms. The most recent works on this issue (Abrams et al., 2013; Alluri et al., 2012, 2013; Markovic, Kühnis, & Jäncke, 2017; Poikonen, Alluri, et al., 2016; Poikonen, Toiviainen, & Tervaniemi, 2016; Saari et al., 2018) have instead analyzed naturalistic music pieces rather than artificial ones, with the goal of describing the association between the dynamic changes in the audio features and the time courses of the neural activations recorded in volunteers (or subjects). Interestingly, these recent works have been able to achieve almost the same results found previously with artificial stimuli, while also indicating the recruitment of new brain areas related to music processing.

However, all these state-of-the-art findings have been obtained on a local level, based on the concept of mass-univariate analyses (Markovic et al., 2017; Poikonen, Alluri, et al., 2016; Poikonen, Toiviainen, & Tervaniemi, 2016; Rigoulot, Pell, & Armony, 2015; Virtala et al., 2014; Vuust et al., 2012), which reveal statistical aspects of specific and unique areas of the brain that supposedly do not depend on changes in other cognitive areas. Although this approach is mathematically sound, it inevitably ignores possible global and interregional brain dependencies in music processing, providing limited understanding of the behavior of our brain during such a cognitive task. Since EEG brain data inherently include simultaneous measurements on a given number of electrodes, it seems appropriate, in order to understand the complexity of their information, to disclose the relationships between all these electrodes in a multivariate way.

In the current exploratory study, we propose to analyze on a global level the EEG brain data collected during a music listening task, considering the whole-brain activity registered in all the EEG electrodes simultaneously. More specifically, we approach this problem from a novel multivariate statistical perspective, using this global EEG brain information to predict musicianship as well. Therefore, rather than using a mass-univariate model to find statistical differences between distinct sample groups, we describe and implement here a two-group pattern recognition framework to linearly separate the samples, previously categorized as musicians or nonmusicians depending on whether or not they have undergone music training, and we evaluate this separation by means of classification accuracy.

To the best of our knowledge, we present here the first results on whole-brain EEG analysis of musicianship in the context of naturalistic music listening, elaborating on Ribeiro and Thomaz (2018) by adding new experimental evidence on EEG feature extraction and classification, using several acoustic features, and new multivariate and comparative statistical analyses. Our experimental results, based on high-dimensional encoding of the variance and discriminant information of the EEG data, indicate a feasible and plausible linear separation of the musician and nonmusician sample groups, despite their corresponding limited sample sizes.

Materials and Method

PARTICIPANTS

Two groups of 13 subjects took part in our experiment. The musicians (7 male and 6 female) were all amateurs, aged between 18 and 45 years (mean 26.8 years; all right-handed). They had started playing between 4 and 17 years of age (mean 17.1 years) and were currently playing 4 hours per week on average, in distinct musical styles (classical, pop, and rock). All of them had received more than two years of formal training in music, but none had a professional degree in music performance. The nonmusicians (10 male and 3 female) were aged between 25 and 45 years (mean 31.2 years; 9 right-handed). Two of them had received less than one year of formal training in music (neither still plays) and the others had no formal music training. All participants gave written informed consent to participate in the study.

STIMULUS

As stimulus, the classical piece Hungarian Dance No. 5 by Johannes Brahms was used, presented via in-ear earphones. The composition was performed by the Fulda Symphonic Orchestra, conducted by Simon Schindler, at the Fürstensaal des Stadtschlosses.1 The ending of the audio, corresponding to the applause, was replaced with silence using the free, open source audio software Audacity (version 2.2.2), resulting in an audio signal that was 3.2 minutes long.

All subjects were instructed to listen to the music, remaining as still as possible while their EEG signals were recorded. The musical piece was selected as a benchmark classical piece used in other works (Alluri et al., 2012; Poikonen, Alluri, et al., 2016), due to its wide range of variation during the performance in several musical features, such as dynamics, timbre, and rhythm, while having an appropriate duration for the experiment.

METHODOLOGY

Our computational framework consists of the following 5 steps: (I) acoustic features extraction; (II) acoustic features selection; (III) trigger selection; (IV) EEG signal processing; and (V) multivariate statistical analysis. The first three steps refer to the audio signal and the last two to the EEG signal. This framework, illustrated in Figure 1, is based on previous multidisciplinary works of EEG music processing (Poikonen, Alluri, et al., 2016; Poikonen, Toiviainen, & Tervaniemi, 2016) and high-dimensional data analysis (Davatzikos, 2004; Gregori, Sanches, & Thomaz, 2017; Sato et al., 2008; Thomaz, Duran, Busatto, Gillies, & Rueckert, 2007; Xavier et al., 2015).

FIGURE 1.

A schematic illustration of the following five main steps of the framework proposed to statistically analyze the EEG high-dimensional brain data: (I) acoustic features extraction; (II) acoustic features selection; (III) trigger selection; (IV) EEG signal processing; and (V) multivariate statistical analysis.


Acoustic feature extraction

In this first step, the low-level acoustic features that describe the audio signal are extracted using MIRtoolbox (version 1.6.1) (Lartillot, 2014). To perform such extraction, the signal is decomposed into 50 ms windows with 50% overlap, in order to study the evolution of each of these features over time. These characteristics correspond to aspects of the audio that can be identified in terms of human perception and, even though they may not accurately describe what is acoustically perceived, they generate significant EEG neural responses (Poikonen, Alluri, et al., 2016; Poikonen, Toiviainen, & Tervaniemi, 2016).

We have selected the following standard and well-known acoustic features (Alluri et al., 2012; Lartillot, 2014; Lerch, 2012; Poikonen, Alluri, et al., 2016): (1) root mean square (RMS): measure of the energy in the signal, computed as the square root of the average of the squared amplitude; (2) zero crossing rate (ZCR): measure of the number of times the signal crosses the x-axis; (3) spectral rolloff: frequency below which 85% of the total energy of the signal is contained; (4) spectral roughness: estimation of the sensory dissonance; (5) brightness: measure of the amount of energy above 1500 Hz; (6) spectral entropy: relative Shannon entropy of the spectrum, which indicates whether it contains predominant peaks or not; (7) spectral flatness: measure of the uniformity of the spectrum, defined as the ratio between its geometric mean and its arithmetic mean, also known as the Wiener entropy; (8) spectral skewness: the third central moment of the spectrum distribution, a measure of the asymmetry of the distribution; (9) spectral kurtosis: the fourth central moment of the spectrum distribution, which indicates the flatness of the spectrum; a sudden change in this feature can indicate transients in the audio; (10) spectral centroid: the first moment of the spectrum distribution, that is, the geometric center of the distribution; (11) spectral spread: the second central moment of the spectrum distribution; and (12) spectral flux: measure of the temporal change in the spectrum between successive frames. A detailed explanation of these 12 features can be found in the user manual of the MIRtoolbox (Lartillot, 2014).
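
As a concrete illustration of this step, the sketch below extracts a few of these low-level features in Python with the librosa library, using the same 50 ms windows with 50% overlap. This is only an approximation of the procedure actually used: the study relied on MIRtoolbox in MATLAB, the audio file name below is hypothetical, and librosa's feature implementations differ in detail, so the resulting values are indicative only.

```python
# Sketch: extracting a few analogous low-level features in Python with librosa.
# The study used MIRtoolbox 1.6.1 in MATLAB, so values will not match exactly;
# the file name below is hypothetical.
import numpy as np
import librosa

y, sr = librosa.load("hungarian_dance_no5.wav", sr=None)   # keep the original sampling rate

frame_length = int(0.050 * sr)    # 50 ms analysis window
hop_length = frame_length // 2    # 50% overlap (hop of 25 ms)

rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)[0]
zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame_length, hop_length=hop_length)[0]
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, n_fft=frame_length,
                                           hop_length=hop_length, roll_percent=0.85)[0]
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, n_fft=frame_length,
                                             hop_length=hop_length)[0]
flatness = librosa.feature.spectral_flatness(y=y, n_fft=frame_length,
                                             hop_length=hop_length)[0]

# Each feature is a time series with one value per 25 ms hop.
times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop_length)
```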

Acoustic feature selection

In this second step, we assess, through a cluster analysis, whether all the s = 12 standard features extracted from the audio are statistically non-redundant, disclosing the correlations among the acoustic features in order to adequately represent the data.

We have used factor analysis (FA) (Johnson & Wichern, 2007), a well-known multivariate statistical technique, to describe the association between the values extracted from the windows of each acoustic feature in a non-supervised way. Our motivation here is to reduce the redundancy of the data extracted from the audio signal, selecting the R most significant factors (higher loadings), where R < s. We have chosen principal component analysis (PCA) (Johnson & Wichern, 2007) to estimate the parameters of our factor model because its spectral decomposition solution simplifies the issue of how many factors to retain.

Ideally, each cluster of acoustic features should be highly represented by a single factor and have low coefficients on the remaining ones. Those F = [f1, f2, …, fR] factors can then replace the initial s variables by R rotated common factor loadings, for which the association between the acoustic features is most significant in terms of variance (varimax rotation); conventionally, the factors with corresponding eigenvalues greater than 1 are retained to determine the adequate number of factors (Johnson & Wichern, 2007).
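
A minimal sketch of this selection step is given below in Python, assuming a hypothetical matrix `features` with one column per acoustic feature time series. It applies the eigenvalue-greater-than-1 rule to choose the number of factors and inspects varimax-rotated loadings; note that scikit-learn's FactorAnalysis estimates the model differently from the principal-component extraction described above, so the loadings are only indicative.

```python
# Sketch: choosing the number of factors with the eigenvalue-greater-than-1 rule
# and inspecting varimax-rotated loadings. `features` is a hypothetical
# (n_windows x 12) matrix with one column per acoustic feature time series.
# scikit-learn's FactorAnalysis does not use the principal-component estimation
# described in the text, so the loadings are indicative only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

Z = StandardScaler().fit_transform(features)

eigvals = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
R = int(np.sum(eigvals > 1.0))                     # number of factors to retain

fa = FactorAnalysis(n_components=R, rotation="varimax").fit(Z)
loadings = fa.components_.T                        # (12 features) x (R factors)

# Assign each feature to the factor (cluster) on which it loads most strongly.
clusters = np.argmax(np.abs(loadings), axis=1)
```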

Trigger selection

The changes in the acoustic features used in this work are known as triggers (Poikonen, Alluri, et al., 2016), which are able to elicit sensory components similar to those found with artificial stimuli. More precisely, triggers are instants in the time series generated by the extracted features where a high acoustic contrast occurs, as detailed below.

We have adopted the method of Poikonen, Alluri, et al. (2016) to identify the triggers, where the upper and lower thresholds Vp+ and Vp− are determined from the mean value of the acoustic feature as a given percentage q above or below it, respectively (q = ±20%, here). In other words, a trigger occurs when the signal remains below Vp− for a minimum interval of time, defined as 500 ms for all acoustic features and called the preceding low-feature phase (PLFP), followed by an ascendant phase in which the signal reaches Vp+ (Poikonen, Alluri, et al., 2016). The intensity of the acoustic contrast depends on the parameters chosen for trigger selection (the length of the PLFP and the upper and lower thresholds Vp+ and Vp−). These parameters can be changed accordingly, based on a trade-off between the number of triggers and how high each acoustic contrast is required to be; that is, the higher the acoustic contrast, the lower the number of triggers identified.
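
The sketch below illustrates this trigger-selection rule on a single feature time series, assuming one value every 25 ms (50 ms windows with 50% overlap); the function name and defaults are illustrative, not the authors' implementation.

```python
# Sketch of the trigger-selection rule described above, for one feature time
# series `feat` sampled every 25 ms (50 ms windows with 50% overlap).
# Function name and defaults are illustrative, not the authors' implementation.
import numpy as np

def find_triggers(feat, hop_s=0.025, q=0.20, plfp_s=0.500):
    feat = np.asarray(feat, dtype=float)
    mean = feat.mean()
    v_low, v_high = mean * (1 - q), mean * (1 + q)   # lower (Vp-) and upper (Vp+) thresholds
    min_frames = int(round(plfp_s / hop_s))          # PLFP length in frames

    triggers = []
    below = 0                                        # consecutive frames below Vp-
    armed = False                                    # PLFP satisfied, waiting for the rise
    for i, v in enumerate(feat):
        if v < v_low:
            below += 1
            if below >= min_frames:
                armed = True
        else:
            below = 0                                # the low phase must be continuous
            if v > v_high:
                if armed:
                    triggers.append(i * hop_s)       # trigger time in seconds
                armed = False
    return np.array(triggers)
```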

EEG signal processing

We have used the OpenBCI EEG device to acquire the brain electrical signals (OpenBCI, 2018). This device has a sampling rate of 127 Hz and is composed of 16 dry electrodes positioned according to the international 10–20 system, plus two additional electrodes placed on the earlobes as reference. All EEG signals have been preprocessed using a 1–30 Hz band-pass Butterworth filter, analogously to other works (Poikonen, Alluri, et al., 2016; Virtala et al., 2014). The continuous EEG data were separated into epochs according to the triggers and averaged for each electrode. The epochs started 100 ms before the trigger and ended 300 ms after the trigger. The baseline has been defined as the 100 ms time period before the trigger (Poikonen, Alluri, et al., 2016; Virtala et al., 2014; Vuust et al., 2012). All signals have been inspected visually; channels with detected noise have been removed from the analysis, as have epochs with amplitudes above 100 µV (in absolute value). The standard and well-known EEGLAB Matlab toolbox (EEGLab 13.5.4b) was used to process the EEG data in Matlab R2015a.
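
For readers working in Python, the sketch below reproduces the main preprocessing choices (1–30 Hz band-pass, −100 to +300 ms epochs, 100 ms baseline, 100 µV rejection) with MNE-Python rather than the EEGLAB pipeline actually used; `raw` and `trigger_times` are assumed to exist, and filter implementations differ between toolboxes.

```python
# Sketch of the preprocessing above using MNE-Python instead of the EEGLAB
# pipeline actually used in the study. `raw` (an mne.io.Raw with the 16 channels)
# and `trigger_times` (trigger onsets in seconds) are assumed to exist; filter
# implementations differ between toolboxes, so results are indicative only.
import numpy as np
import mne

raw = raw.copy().filter(l_freq=1.0, h_freq=30.0, method="iir")   # 1-30 Hz IIR (Butterworth) band-pass

sfreq = raw.info["sfreq"]
events = np.column_stack([
    (np.asarray(trigger_times) * sfreq).astype(int),   # sample index of each trigger
    np.zeros(len(trigger_times), dtype=int),
    np.ones(len(trigger_times), dtype=int),            # single event code
])

epochs = mne.Epochs(raw, events, event_id={"trigger": 1},
                    tmin=-0.1, tmax=0.3,               # -100 ms to +300 ms around each trigger
                    baseline=(-0.1, 0.0),              # 100 ms pre-trigger baseline
                    reject=dict(eeg=100e-6),           # drop epochs exceeding 100 microvolts
                    preload=True)

evoked = epochs.average()                              # per-electrode average over triggers
```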

Multivariate statistical analysis

In this fifth and last step of our computational framework, we first describe the data matrix Xr composed of the EEG values at the instant t = 100 ms post stimulus (predefined time stamp), averaged along the triggers of each selected acoustic feature r separately, where r ∈ {1, 2, …, R}, so that the corresponding sampled input signal can be treated as a high-dimensional point in a multivariate space, as follows:

 
$$
\mathbf{X}_r = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \vdots \\ \mathbf{x}_N \end{bmatrix}
= \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ x_{21} & \cdots & x_{2n} \\ \vdots & \ddots & \vdots \\ x_{N1} & \cdots & x_{Nn} \end{bmatrix},
\tag{1}
$$

where N is the total number of volunteers (or subjects) and n the total number of electrodes (n = 16, here).

According to this data representation, we are assuming that our EEG multivariate statistical analyses might include the electrical potentials at the predefined timestamps to understand the complexity of the brain activity not only in terms of the N sampling, but also in terms of all the n electrodes simultaneously.
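
A minimal sketch of how such a data matrix can be assembled is shown below, assuming a hypothetical array `subject_erps` of per-subject trigger-averaged epochs and a `times` vector for the epoch time axis.

```python
# Sketch: assembling the data matrix X_r of Equation (1) for one selected acoustic
# feature. `subject_erps` is a hypothetical (N x n x T) array holding each subject's
# trigger-averaged epoch (n = 16 electrodes, T time samples), and `times` is the
# epoch time axis in seconds (-0.1 to 0.3 s).
import numpy as np

t_idx = np.argmin(np.abs(times - 0.100))    # sample closest to 100 ms post stimulus
X_r = subject_erps[:, :, t_idx]             # (N subjects) x (n electrodes), as in Equation (1)
```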

Principal component analysis (PCA) and linear discriminant analysis (LDA) (Fukunaga, 1990; Johnson & Wichern, 2007) were explored here as two alternative multivariate statistical methods to understand how the information is changing in the original space of the EEG brain data, looking not only for the most expressive (higher variance) changes, but also for the most discriminant (higher separability) ones, according to the neural responses reflected by the acoustic feature extracted from the audio used as stimulus.

Principal component analysis

We have used PCA to highlight the most expressive changes in terms of the total variance information of the electrical potentials.

PCA calculates the spectral decomposition of the correlation matrix of the N × n matrix Xr, described in Equation (1). We have composed the PCA transformation matrix by selecting all the M eigenvectors with non-zero eigenvalues, where M ≤ n. Data projected onto these eigenvectors [p1,r, p2,r, …, pM,r], ranked in decreasing order of their corresponding eigenvalues λ1,r, λ2,r, …, λM,r, follow the directions of highest variance of Xr.

Each pm,r vector, where m = 1, 2, … , M, can be used to form a spatial map of the brain regions that most vary in the data, moving from one side of the principal component axis (or dimension) to the other (Cootes, Edwards, & Taylor, 1998; Davatzikos, 2004). Thus, it is possible to navigate along such dimensions to capture and understand the most expressive changes of the data matrix Xr. This vector navigation method (Cootes et al., 1998; Gregori et al., 2017; Sato et al., 2008; Xavier et al., 2015) can be mathematically described as

 
$$
\mathbf{y}_{m,k} = \bar{\mathbf{x}}_r + \left(k\sqrt{\lambda_{m,r}}\right)\mathbf{p}_{m,r},
\tag{2}
$$

where k ∈ {−3, −2, −1, 0, 1, 2, 3}, m is the index of the corresponding principal component to navigate, and x̄r is the n-dimensional global mean vector of Xr for each selected acoustic feature r.
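
The sketch below illustrates this PCA navigation in Python, reading Equation (2) as a displacement of k standard deviations (the square root of the eigenvalue) along each principal component of the correlation matrix of Xr; it is a simplified illustration, not the authors' exact implementation.

```python
# Sketch of the PCA vector navigation, reading Equation (2) as a displacement of
# k standard deviations (square root of the eigenvalue) along each principal
# component of the correlation matrix of X_r. Illustrative only.
import numpy as np

X = np.asarray(X_r, dtype=float)              # (N subjects) x (n electrodes)
x_bar = X.mean(axis=0)                        # global mean vector of X_r

corr = np.corrcoef(X, rowvar=False)           # n x n correlation matrix of the electrodes
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]             # rank PCs by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def navigate_pc(m, k):
    """Equation (2): brain map k standard deviations along the m-th principal component."""
    return x_bar + k * np.sqrt(eigvals[m]) * eigvecs[:, m]

maps_pc1 = {k: navigate_pc(0, k) for k in range(-3, 4)}   # first PC, k in {-3, ..., 3}
```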

Linear discriminant analysis

We have used LDA to identify the most discriminant dimension for separating the two sample groups (or classes) of interest, that is, musicians and nonmusicians, by maximizing their between-class separability while minimizing their within-class variability.

Let the between-class and within-class scatter matrices Sb and Sw be defined, respectively, as (Fukunaga, 1990)

 
$$
\mathbf{S}_b = \sum_{i=1}^{g} N_i\,(\bar{\mathbf{x}}_i - \bar{\mathbf{x}}_r)^T(\bar{\mathbf{x}}_i - \bar{\mathbf{x}}_r),
\tag{3}
$$
$$
\mathbf{S}_w = \sum_{i=1}^{g} \sum_{j=1}^{N_i} (\mathbf{x}_{i,j} - \bar{\mathbf{x}}_i)^T(\mathbf{x}_{i,j} - \bar{\mathbf{x}}_i),
\tag{4}
$$

where xi,j is the n-dimensional sample j from class i, Ni is the number of training samples of class i (N1 = 13 and N2 = 13, here), g represents the total number of classes (g = 2, here), x̄i is the sample group mean vector given by the corresponding Ni samples only, and x̄r is, as before, the global mean vector of Xr.

The main objective of LDA here is to find a projection vector wr that maximizes the ratio of the determinant of Sb to the determinant of Sw (Fisher's criterion), formulated by:

 
$$
\mathbf{w}_r = \arg\max_{\mathbf{w}} \frac{\left|\mathbf{w}^T \mathbf{S}_b \mathbf{w}\right|}{\left|\mathbf{w}^T \mathbf{S}_w \mathbf{w}\right|}.
\tag{5}
$$

Fisher's criterion is maximized when the projection vector wr is composed of the leading eigenvector of $\mathbf{S}_w^{-1}\mathbf{S}_b$ with non-zero corresponding eigenvalue (Fukunaga, 1990; Johnson & Wichern, 2007).

Analogous to PCA, this vector wr can be used to form a spatial map of the most discriminant brain regions, moving from one side of the separating dimension to the other (Davatzikos, 2004; Sato et al., 2008). Thus, it is possible to navigate along such dimension to capture and understand the most discriminant changes of the data matrix Xr, assuming its dispersion follows a Gaussian distribution (Gregori et al., 2017; Sato et al., 2008; Xavier et al., 2015). This vector navigation method can be mathematically described as

 
$$
\mathbf{z}_{i,j} = \bar{\mathbf{x}}_r + \left(j\sigma_i + \bar{x}_i\right)\mathbf{w}_r,
\tag{6}
$$

where j ∈ {−1, 0, 1}, i is again the corresponding sample group, x̄r is the n-dimensional global mean vector of Xr, and σi and x̄i are, respectively, the standard deviation and the mean of each sample group on the LDA space for each selected acoustic feature r.
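
The following sketch implements Equations (3) to (6) in Python. A small ridge term is added to keep Sw invertible with only N = 26 samples, which is an assumption beyond the text; apart from that, the steps follow the description above.

```python
# Sketch of the LDA step: the scatter matrices of Equations (3)-(4), the Fisher
# direction of Equation (5) as the leading eigenvector of Sw^{-1} Sb, and the
# navigation of Equation (6). The small ridge term keeping Sw invertible with
# only N = 26 samples is an assumption beyond the text.
import numpy as np

def lda_direction(X, y, ridge=1e-6):
    X, y = np.asarray(X, dtype=float), np.asarray(y)      # X: (N, n); y: 0/1 group labels
    x_bar = X.mean(axis=0)
    n = X.shape[1]
    Sb, Sw = np.zeros((n, n)), np.zeros((n, n))
    for g in np.unique(y):
        Xg = X[y == g]
        mg = Xg.mean(axis=0)
        d = (mg - x_bar)[:, None]
        Sb += len(Xg) * d @ d.T                           # between-class scatter, Equation (3)
        Sw += (Xg - mg).T @ (Xg - mg)                     # within-class scatter, Equation (4)
    Sw += ridge * np.eye(n)
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])  # leading eigenvector, Equation (5)
    return w / np.linalg.norm(w), x_bar

def navigate_lda(X, y, w, x_bar, group, j):
    """Equation (6): brain map j standard deviations from a group mean on the LDA axis."""
    scores = np.asarray(X, dtype=float) @ w               # projection of each subject
    s_g = scores[np.asarray(y) == group]
    return x_bar + (j * s_g.std(ddof=1) + s_g.mean()) * w
```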

Results

AUDIO ANALYSIS

First, the audio was analyzed by means of the aforementioned factor analysis. The factor loadings obtained with FA are shown in Figure 2 and Table 1. It is possible to observe that the acoustic features extracted from the audio clearly form three clusters, indicating that the acoustic information expressed within each cluster may be similar among its features; the features with the highest loadings on each factor were RMS (cluster 1), spectral kurtosis (cluster 2), and spectral rolloff (cluster 3). Note that, for factor 2, spectral skewness had a higher loading than spectral kurtosis; however, since only three triggers were found for this feature, we chose spectral kurtosis instead, given the trade-off described previously between the number of triggers and how high each acoustic contrast may be defined.

FIGURE 2.

Factor loadings of the acoustic features extracted from the audio signal, showing the formation of clusters between the following features: Cluster 1—1 (RMS), 4 (spectral roughness), and 12 (spectral flux); Cluster 2—8 (spectral skewness) and 9 (spectral kurtosis); Cluster 3—2 (ZCR), 3 (spectral rolloff), 5 (brightness), 6 (spectral entropy), 7 (spectral flatness), 10 (spectral centroid) and 11 (spectral spread).


TABLE 1.

Factor Loadings of the Acoustic Features Extracted From the Audio Signal

Feature         F1        F2        F3
RMS             0.6291    0.1337    0.7203
ZCR             0.7851   −0.3235   −0.3240
S. Rolloff      0.9584   −0.1939   −0.1629
S. Roughness    0.3883   −0.0242    0.7185
Brightness      0.9471   −0.1643   −0.1790
S. Entropy      0.8779    0.3712   −0.1710
S. Flatness     0.9329    0.0290   −0.0764
S. Skewness     0.0995    0.9847   −0.1210
S. Kurtosis    −0.0761    0.9617   −0.1139
S. Centroid     0.9563   −0.1484   −0.1975
S. Spread       0.9347    0.2493   −0.1738
S. Flux         0.5880    0.0931    0.7174

Using these selected acoustic features, we have found 9 triggers for the RMS, 7 for the spectral rolloff, and 8 for the spectral kurtosis along the music used as stimulus. As can be seen in Figure 3, the spectrograms are similar within each cluster and there are triggers that occur at the same time within the same cluster, although some of them differ. These differences can be related to the methodological parameters used to select the triggers, showing that some of these features might be complementary to each other, especially within the same cluster.

FIGURE 3.

Spectrograms of each acoustic feature extracted from the audio signal separated in clusters (green dashed lines), showing the selected triggers (red lines) and the selected acoustic feature (in bold and underlined) for each cluster.


EEG ANALYSIS

The topographic maps were generated by the vector navigation method on the four most expressive dimensions of PCA, according to Equation (2), and the discriminant dimension of LDA, according to Equation (6), for each acoustic feature selected, using the entire dataset. Such multivariate statistical analyses disclose visually the electrical potential changes and their data distributions along the corresponding dimensions.

Given the limited sample sizes available, we have adopted the leave-one-out and resubstitution methods (Fukunaga, 1990) to estimate, respectively, the lower (test set) and upper (training set) bounds of the classification accuracy on the most discriminant dimension, using a Euclidean distance-to-sample-group-mean classifier to decide whether the projected data are more similar to the musician or to the nonmusician group.
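
A sketch of this evaluation protocol is given below, reusing the hypothetical lda_direction() helper from the previous section: leave-one-out estimates the test-set bound, resubstitution the training-set bound, with a Euclidean distance-to-class-mean rule applied to the one-dimensional LDA projections.

```python
# Sketch of the evaluation protocol: leave-one-out (test-set bound) and
# resubstitution (training-set bound) with a Euclidean distance-to-class-mean
# rule on the one-dimensional LDA projections. Reuses the hypothetical
# lda_direction() helper sketched in the previous section.
import numpy as np

def nearest_mean_predict(train_scores, train_y, test_scores):
    m0 = train_scores[train_y == 0].mean()                 # nonmusician mean on the LDA axis
    m1 = train_scores[train_y == 1].mean()                 # musician mean on the LDA axis
    return (np.abs(test_scores - m1) < np.abs(test_scores - m0)).astype(int)

def loo_accuracy(X, y):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                      # leave subject i out
        w, _ = lda_direction(X[mask], y[mask])
        pred = nearest_mean_predict(X[mask] @ w, y[mask], X[i] @ w)
        hits += int(pred == y[i])
    return hits / len(y)

def resubstitution_accuracy(X, y):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    w, _ = lda_direction(X, y)
    scores = X @ w
    return float((nearest_mean_predict(scores, y, scores) == y).mean())
```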

The topographic brain maps and the projection of the data generated by PCA from the acoustic feature spectral rolloff are shown in Figure 4, where we can see the changes of the brain responses along each principal component (PC), disclosing major electrical potential differences in several parts of the brain, especially in frontal areas. However, when the data are projected onto the axis of each PC separately for classification evaluation, it is not possible to see a distinction between the musician and nonmusician sample groups. In fact, all the classification rates were around chance level using the entire data set. In contrast to PCA, Figure 5 shows the navigation on the most discriminant axis of LDA for the same acoustic feature, spectral rolloff. This vector navigation presents differences between the groups in the frontal areas, where the nonmusicians show a positive electrical potential distributed all over the frontal area, whereas the musicians present high activity in the prefrontal area. Such differences, however, are not very discriminant either, as can be seen in the data distribution, with classification rates ranging from 15.4% (test set) to 74.5% (training set).

FIGURE 4.

Vector navigation on the first four principal components (PCs), ordered from the highest variance to the lowest, of the acoustic feature spectral rolloff captured by PCA at the latency of 100 ms post stimulus: (a) Brain mapping of each PC; (b) Projection of the data on each PC axis.


FIGURE 5.

Vector navigation on the most discriminant axis of the acoustic feature spectral rolloff captured by LDA at the latency of 100 ms post stimulus: (Top) Reconstruction of the mean brain mapping when navigating along the most discriminant axis between the sample group of musicians (right) and nonmusicians (left); (Bottom) Projection of the data on the discriminant axis. Classification accuracy: 15.4% (test set) and 74.5% (training set).


Additionally, Figure 6 shows the topographic maps and the projection of the data generated by PCA from the acoustic feature RMS; changes occur mostly in the frontal areas of the brain, but again it is not possible to see a distinction between the musician and nonmusician sample groups in the projection of the data on each PC dimension. Likewise, Figure 7 shows the RMS navigation along its discriminant axis, presenting visually subtle electrical potential differences between the groups that are nevertheless more statistically discriminant than those of spectral rolloff, with classification rates ranging from 53.8% (test set) to 88.0% (training set).

FIGURE 6.

Vector navigation on the first four principal components (PCs), ordered from the highest variance to the lowest, of the acoustic feature root mean square (RMS) captured by PCA at the latency of 100 ms post stimulus: (a) Brain mapping of each PC; (b) Projection of the data on each PC axis.


FIGURE 7.

Vector navigation on the most discriminant axis of the acoustic feature root mean square (RMS) captured by LDA at the latency of 100 ms post stimulus: (Top) Reconstruction of the mean brain mapping when navigating along the most discriminant axis between the sample group of musicians (right) and nonmusicians (left); (Bottom) Projection of the data on the discriminant axis. Classification accuracy: 53.8% (test set) and 88.0% (training set).


Analogous to the other most expressive analyses, the PCA results for the acoustic feature spectral kurtosis, shown in Figure 8, highlight major but not discriminant electrical potential differences in several parts of the brain. In contrast, this acoustic feature presents, in Figure 9, a clear electrical potential distinction between the sample groups of interest. In this case, the nonmusicians present high activity in the prefrontal area, whereas the musicians show high activity in the right temporal area and a negative electrical potential distributed over the parietal area. The data distribution exhibits a clear separation of these sample groups, with classification rates ranging from 69.2% (test set) to 93.8% (training set). For comparison, we have used the same approach to discriminate the sample groups using only a reduced number of electrodes, instead of all of them, with the following arrangements: 6 electrodes (F3, F4, C3, C4, P3, P4); 4 electrodes (F3, F4, C3, C4); 2 electrodes (F3, F4); and 2 electrodes (C3, C4). Table 2 presents the corresponding LDA classification rates for each acoustic feature, and Table 3 details the classification rates using all the electrodes.

FIGURE 8.

Vector navigation on the first four principal components (PCs), ordered from the highest variance to the lowest, of the acoustic feature spectral kurtosis captured by PCA at the latency of 100 ms post stimulus: (a) Brain mapping of each PC; (b) Projection of the data on each PC axis.


FIGURE 9.

Vector navigation on the most discriminant axis of the acoustic feature spectral kurtosis captured by LDA at the latency of 100 ms post stimulus: (Top) Reconstruction of the mean brain mapping when navigating along the most discriminant axis between the sample group of musicians (right) and nonmusicians (left); (Bottom) Projection of the data on the discriminant axis. Classification accuracy: 69.2% (test set) and 93.8% (training set).


TABLE 2.

LDA Classification Rates for all Electrodes, 6 Electrodes, 4 Electrodes, and 2 Electrodes

Features        All Electrodes   6 Electrodes   4 Electrodes   2 Electrodes (F3-F4)   2 Electrodes (C3-C4)
Test Set
S. Kurtosis     69.2%            53.8%          65.4%          53.8%                  73.1%
RMS             53.8%            57.7%          53.8%          61.5%                  61.5%
S. Rolloff      15.4%            42.3%          50.0%          65.4%                  53.8%
Training Set
S. Kurtosis     93.8%            72.9%          71.8%          54.0%                  73.1%
RMS             88.0%            72.2%          65.4%          68.2%                  67.5%
S. Rolloff      74.5%            68.5%          70.9%          69.4%                  54.6%
TABLE 3.

Detailed LDA Classification Rates Using All Electrodes

Feature         Accuracy (%)   Sensitivity (%)   Specificity (%)   F measure (%)
Test Set
S. Kurtosis     69.2           69.2              69.2              69.2
RMS             53.8           61.5              46.2              57.1
S. Rolloff      15.4           30.8              0.0               26.7
Training Set
S. Kurtosis     93.8           87.7              100               93.4
RMS             88.0           89.8              86.2              88.2
S. Rolloff      74.5           71.4              77.5              73.7

Discussion

In the present study, we applied a multivariate statistical framework to characterize the differences between subjects categorized as musicians or nonmusicians depending on whether or not they have undergone music training. Our main findings can be summarized as follows: (1) to investigate the neural activation patterns evoked by the acoustic features, not all of the 12 standard and well-known features are necessary, given the redundancy found by FA, which grouped these 12 commonly used acoustic features into only three statistically nonredundant clusters; (2) the discriminative patterns required to classify the musician and nonmusician sample groups exist predominantly in the frontal areas of the brain, but there are differences between the responses to each cluster-representative acoustic feature, indicating that some aspects of the music predict musicianship better than others.

Our results show that the method proposed by Poikonen, Alluri, et al. (2016) to identify triggers in music can be very helpful for differentiating musicians and nonmusicians, considering the trade-off between the number of triggers and how high the acoustic contrast is defined. Given the similarity between the extracted acoustic features in the factor loadings (Figure 2), it is interesting to observe that each selected acoustic feature corresponds to one of the clusters found, and the spectrograms (Figure 3) clearly display these similarities. The selected triggers mostly occur at the same instant within each cluster, even though some features present more triggers than others. These extra triggers are related to the methodological parameters used, since here we decided to use the same parameters for all the acoustic features. A different approach could be, instead of selecting only one representative acoustic feature within a cluster, to concatenate the triggers found for each feature within each cluster. It is important to highlight, though, that all these findings were obtained with the specific classical piece used here. We would expect this clustering behavior to be similar within the same musical genre, but there is no guarantee that these findings generalize to other pieces of music, and further analysis of this issue is necessary.

It is known that music training promotes differences between musicians and nonmusicians not only during music listening but also during task-free conditions (Klein, Liem, Hänggi, Elmer, & Jäncke, 2016). There is also evidence of differences between musicians with different types of training, musical style/genre, and listening experiences (Tervaniemi, Janhunen, Kruck, Putkinen, & Huotilainen, 2016; Vuust et al., 2012).

Thus, there is large variability among individuals for this two-group classification problem. However, our experimental results have shown that the discriminant vector navigation method described and implemented here can give a comprehensive description of these sample groups' activity patterns and classification boundary. The topographic maps generated by this navigation method clearly present distinct neural activation patterns between musicians and nonmusicians, enhancing the understanding of the transient states between these sample groups, despite the limited sample sizes available.

The results for the acoustic feature spectral kurtosis (Figure 9) disclose the relationship between musicians and nonmusicians better than the other features, indicating the best linear separation between the sample groups. It is interesting to observe that spectral kurtosis and spectral skewness fall in the same cluster, since they characterize the spectral data distribution in terms of its shape, along with spectral flatness. Both skewness and kurtosis provide information about the type and magnitude of departures from normality (DeCarlo, 1997), and it has been demonstrated that a sudden change in spectral kurtosis can indicate transients in non-stationary signals (Antoni, 2006; Dwyer, 1984). However, even though this acoustic feature seems suitable for discriminating musicians from nonmusicians, its relationship with what is actually perceived by the subjects remains uncertain and needs further analysis.

Our multivariate statistical navigation on the PC dimensions shows that, although PCA finds the directions along which the EEG data vary the most, these directions are not necessarily the ones that best discriminate the sample groups. LDA, in contrast, revealed interesting information about the discriminant brain aspects of each sample group. We anticipate that the use of professional musicians in our approach would promote a better discrimination between the groups of interest and, consequently, a better classification performance. An additional limitation of our study is that the subjects who took part in our experiments were not matched on some background variables, especially the musicians (e.g., gender, onset of music training, and amount of formal training received). Future works should pay attention to these matching criteria as well.

Additionally, by taking into account the whole-brain activity (considering the information registered simultaneously in all the EEG electrodes) during the listening task, we intend not to overlook possible interactions among different brain regions, because such interrelations might be useful to separate the sample groups under investigation (Davatzikos, 2004; Friston & Ashburner, 2004; Thomaz et al., 2007). In fact, this multivariate statistical approach gives us new perspectives to identify novel, potentially discriminant regions that characterize musicianship beyond specific regions of interest. The statistical differences between musicians and nonmusicians captured by our approach revealed that the brain areas that best describe musicianship lie predominantly in the frontal areas of the brain, which corroborates previous work (Saari et al., 2018) on how the brain processes certain acoustic characteristics of music, especially in relation to the motor and auditory areas.

Nevertheless, in the work of Poikonen, Alluri, et al. (2016), brain activity has been defined by searching for a signal peak in the vicinity of 100 ms, corresponding to the presence of an N100 component. In our work, only the musicians presented this phenomenon, preventing us from using this peak information to compose our data matrix. Therefore, we have defined the time instant of 100 ms as an equal and comparable predefined time stamp for all the subjects, musicians and nonmusicians. However, even if we could ensure that this N100 component occurs for all subjects, we would still be using predefined information based on the response of specific brain regions of interest (related to the electrodes where this N100 component would be expected to occur) without considering the information contained beyond them, in other regions of the brain. We believe that this limitation could be properly addressed using an algorithm that summarizes all the information in the average epoch of each subject (Rocha, Massad, Thomaz, & Rocha, 2014), or through a multilinear perspective (Lu, Plataniotis, & Venetsanopoulos, 2013).

Finally, all our multivariate statistical analyses have been carried out in a limited sample size setup, given the original dimensionality of our data matrix and the difficulty of recruiting volunteers, particularly professional musicians, which has an impact on the estimates of the classification accuracy of the discriminant dimensions. This becomes clear when we use fewer electrodes (Table 2) to compare our results. In this case, the training accuracies are close to the test ones, whereas the multivariate estimates (that is, using all the electrodes) are more sensitive to this choice, giving larger gaps between the lower (test set) and upper (training set) classification bounds. This limited sample size issue could be addressed using regularized versions of LDA, such as MLDA (Sato et al., 2008; Thomaz et al., 2007), or other small-sample-size classifiers such as the support vector machine (Davatzikos, 2004). Another interesting possibility would be to address this problem from a multilinear subspace learning perspective (Lu et al., 2013), considering time, electrodes, and subjects as a third-order tensor, which, as mentioned in the previous paragraph, might also overcome the time stamp limitation of our work.

Conclusion

This work proposed and implemented a multivariate statistical framework to automatically classify whether the listeners were musicians or not, based on the most expressive and discriminant features obtained through the analysis of the brain EEG data on a global level. This is an exploratory study that describes in mathematical and computational terms the differences of musical processing between musicians and nonmusicians, allowing a plausible linear separation among the sample groups, based on the analysis of high-dimensional and limited sample size data, with relatively high classification accuracy.

We believe that this multivariate statistical analysis generates a more comprehensive description of the neural activation patterns, transition states, and classification boundary that cognitively separate musicians from nonmusicians. Further work with professional musicians and larger sample groups might increase the statistical discriminant power of our findings. The applicability of the proposed multivariate statistical framework is not restricted to the classical music used here, and other types of music beyond the classical repertoire can benefit from this whole-brain signal analysis as well.

Note

References

Abrams, D. A., Ryali, S., Chen, T., Chordia, P., Khouzam, A., Levitin, D. J., & Menon, V. (2013). Inter-subject synchronization of brain response during natural music listening. European Journal of Neuroscience, 37, 1458–1469.

Alluri, V., Toiviainen, P., Jaaskelainen, I. P., Glerean, E., Sams, M., & Brattico, E. (2012). Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. NeuroImage, 59, 3677–3689.

Alluri, V., Toiviainen, P., Lund, T. E., Wallentin, M., Vuust, P., Nandi, K., et al. (2013). From Vivaldi to Beatles and back: Predicting lateralized brain responses to music. NeuroImage, 83, 627–636.

Antoni, J. (2006). The spectral kurtosis: A useful tool for characterizing non-stationary signals. Mechanical Systems and Signal Processing, 20, 282–307.

Cootes, T., Edwards, G., & Taylor, C. (1998). Active appearance models. In H. Burkhardt & B. Neumann (Eds.), ECCV lecture notes in computer science (Vol. 1407, pp. 484–498). Berlin: Springer.

Davatzikos, C. (2004). Why voxel-based morphometric analysis should be used with great caution when characterizing group differences. NeuroImage, 23, 17–20.

DeCarlo, L. T. (1997). On the meaning and use of kurtosis. Psychological Methods, 2, 292–307.

Dwyer, R. F. (1984). Use of the kurtosis statistic in the frequency domain as an aid in detecting random signals. IEEE Journal of Oceanic Engineering, 9, 85–92.

Friston, K. J., & Ashburner, J. (2004). Generative and recognition models for neuroanatomy. NeuroImage, 23, 21–24.

Fukunaga, K. (1990). Introduction to statistical pattern recognition (2nd ed.). Boston, MA: Academic Press.

Gregori, I. R. S., Sanches, I., & Thomaz, C. E. (2017). Clutch judder classification and prediction: A multivariate statistical analysis based on torque signals. IEEE Transactions on Industrial Electronics, 64(5), 4287–4295.

Johnson, R. A., & Wichern, D. W. (2007). Applied multivariate statistical analysis (6th ed.). Upper Saddle River, NJ: Prentice Hall.

Klein, C., Liem, F., Hänggi, J., Elmer, S., & Jäncke, L. (2016). The "silent" imprint of musical training. Human Brain Mapping, 37, 536–546.

Lartillot, O. (2014). MIRtoolbox 1.6.1 user's manual. Aalborg, Denmark: Aalborg University.

Lerch, A. (2012). An introduction to audio content analysis: Applications in signal processing and music informatics. Piscataway, NJ: Wiley-IEEE Press.

Liang, S., Hsieh, T., Chen, W., & Lin, K. (2011). Classification of EEG signals from musicians and nonmusicians by neural networks. In R. C. Luo, F.-T. Cheng, & L.-C. Fu (Eds.), 9th World Congress on Intelligent Control and Automation (pp. 865–869). Taipei, Taiwan: IEEE.

Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. (2013). Multilinear subspace learning: Dimensionality reduction of multidimensional data (1st ed.). Boca Raton, FL: Chapman and Hall/CRC.

Markovic, A., Kühnis, J., & Jäncke, L. (2017). Task context influences brain activation during music listening. Frontiers in Human Neuroscience, 11, 1–14.

Mikutta, C., Maissen, G., Altorfer, A., Strik, W., & Koenig, T. (2014). Professional musicians listen differently to music. Neuroscience, 268, 102–111.

Münte, T. F., Altenmüller, E., & Jäncke, L. (2002). The musician's brain as a model of neuroplasticity. Nature Reviews Neuroscience, 3, 473–478.

OpenBCI. (2018). OpenBCI - Open source biosensing tools. Retrieved from https://openbci.com/

Peretz, I., & Zatorre, R. J. (2005). Brain organization for music processing. Annual Review of Psychology, 56, 89–114.

Poikonen, H., Alluri, V., Brattico, E., Lartillot, O., Tervaniemi, M., & Huotilainen, M. (2016). Event-related brain responses while listening to entire pieces of music. Neuroscience, 312, 58–73.

Poikonen, H., Toiviainen, P., & Tervaniemi, M. (2016, September). Early auditory processing in musicians and dancers during a contemporary dance piece. Scientific Reports, 6, 1–11.

Ribeiro, E., & Thomaz, C. E. (2018). A multivariate statistical analysis of EEG signals for differentiation of musicians and nonmusicians. In D. M. Denis & N. Murilo (Eds.), 15th National Meeting on Artificial and Computational Intelligence (pp. 80–85). Sao Paulo, Brazil: SBC.

Rigoulot, S., Pell, M. D., & Armony, J. L. (2015). Time course of the influence of musical expertise on the processing of vocal and musical sounds. Neuroscience, 290, 175–184.

Rocha, F. T., Massad, E., Thomaz, C. E., & Rocha, A. F. d. (2014). EEG brain mapping of normal and learning disabled children using factor and linear discriminant analyses. Journal of Neurology and Neurophysiology, 6, 1–7.

Saari, P., Burunat, I., Brattico, E., & Toiviainen, P. (2018). Decoding musical training from dynamic processing of musical features in the brain. Scientific Reports, 8, 1–12.

Sato, J. R., Thomaz, C. E., Cardoso, E. F., Fujita, A., Martin, M. d. G. M., & Amaro, E. J. (2008). Hyperplane navigation: A method to set individual scores in fMRI group datasets. NeuroImage, 42, 1473–1480.

Schlaug, G. (2015). Musicians and music making as a model for the study of brain plasticity. In E. Altenmüller, S. Finger, & F. Boller (Eds.), Progress in brain research (Vol. 217, pp. 37–55). Waltham, MA: Elsevier.

Tervaniemi, M., Castaneda, A., Knoll, M., & Uther, M. (2006). Sound processing in amateur musicians and nonmusicians: Event-related potential and behavioral indices. NeuroReport, 17, 1225–1228.

Tervaniemi, M., Janhunen, L., Kruck, S., Putkinen, V., & Huotilainen, M. (2016). Auditory profiles of classical, jazz, and rock musicians: Genre-specific sensitivity to musical sound features. Frontiers in Psychology, 6, 1–11.

Thomaz, C. E., Duran, F. L. S., Busatto, G. F., Gillies, D. F., & Rueckert, D. (2007). Multivariate statistical differences of MRI samples of the human brain. Journal of Mathematical Imaging and Vision, 29, 95–106.

Virtala, P., Huotilainen, M., Partanen, E., & Tervaniemi, M. (2014). Musicianship facilitates the processing of Western music chords - An ERP and behavioral study. Neuropsychologia, 61, 247–258.

Vuust, P., Brattico, E., Seppanen, M., Naatanen, R., & Tervaniemi, M. (2012, June). The sound of music: Differentiating musicians using a fast, musical multi-feature mismatch negativity paradigm. Neuropsychologia, 50, 1432–1443.

Xavier, I., Pereira, M., Giraldi, G., Gibson, S., Solomon, C., Rueckert, D., et al. (2015). A photo-realistic generator of most expressive and discriminant changes in 2D face images. In A. Wael, H. Gareth, & M. Klaus (Eds.), Sixth International Conference on Emerging Security Technologies (pp. 80–85). Braunschweig, Germany: IEEE.