Academic research using Essentia¶
Academic studies citing Essentia-related publications can be found on Google Scholar: Essentia (ISMIR 2013), Essentia (ACM MM 2013), Essentia TensorFlow models (ICASSP 2020), and Essentia.js (TISMIR 2021).
Below is an incomplete list of academic studies that use Essentia, organized by research topic.
Music analysis datasets¶
A. Porter, D. Bogdanov, R. Kaye, R. Tsukanov, and X. Serra. AcousticBrainz: a community platform for gathering music information obtained from audio. In 16th International Society for Music Information Retrieval Conference (ISMIR’15), pages 786-792, 2015.
Y. Bayle, P. Hanna, and M. Robine. SATIN: A Persistent Musical Database for Music Information Retrieval. In 15th International Workshop on Content-Based Multimedia Indexing (CBMI’17), 2017.
Bertoni, A. A., & Lemos, R. P. (2022). Using Musical and Statistical Analysis of the Predominant Melody of the Voice to Create Datasets from a Database of Popular Brazilian Hit Songs. Journal of Information and Data Management, 13(1).
Moscati, M., Parada-Cabaleiro, E., Deldjoo, Y., Zangerle, E., & Schedl, M. (2022, October). Music4All-Onion – A Large-Scale Multi-faceted Content-Centric Music Recommendation Dataset. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management (pp. 4339-4343).
Music and sound classification¶
M. P. Singh and P. Rashmi. Convolution Neural Networks of Dynamically Sized Filters with Modified Stochastic Gradient Descent Optimizer for Sound Classification. Journal of Computer Science, 20(1):69-87, 2024.
Cui, M., Liu, Y., Wang, Y., & Wang, P. (2022). Identifying the Acoustic Source via MFF-ResNet with Low Sample Complexity. Electronics, 11(21), 3578.
Peixoto, B. M., Lavi, B., Dias, Z., & Rocha, A. (2021). Harnessing high-level concepts, visual, and auditory features for violence detection in videos. Journal of Visual Communication and Image Representation, 78, 103174.
Peixoto, B., Lavi, B., Bestagini, P., Dias, Z., & Rocha, A. (2020, May). Multimodal violence detection in videos. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2957-2961). IEEE.
Caparrini, A., Arroyo, J., Pérez-Molina, L., & Sánchez-Hernández, J. (2020). Automatic subgenre classification in an electronic dance music taxonomy. Journal of New Music Research, 49(3), 269-284.
Melo, D. D. F. P., Fadigas, I. D. S., & Pereira, H. B. D. B. (2020). Graph-based feature extraction: A new proposal to study the classification of music signals outside the time-frequency domain. PLoS ONE, 15(11), e0240915.
Ramirez, R., & Saarilahti, K. (2020). A Machine Learning Approach to Cross-cultural Children’s Songwriting Classification. MML 2020, 51.
Nahar, F., Agres, K., BT, B., & Herremans, D. (2020). A dataset and classification model for Malay, Hindi, Tamil and Chinese music. arXiv preprint arXiv:2009.04459.
John, S., Sinith, M. S., Sudheesh, R. S., & Lalu, P. P. (2020, December). Classification of Indian classical Carnatic music based on raga using deep learning. In 2020 IEEE Recent Advances in Intelligent Computational Systems (RAICS) (pp. 110-113). IEEE.
F. Rodríguez-Algarra, B. Sturm, and S. Dixon. Characterising Confounding Effects in Music Classification Experiments through Interventions. Transactions of the International Society for Music Information Retrieval, 52-66, 2019.
S. Oramas, D. Bogdanov and A. Porter. MediaEval 2018 AcousticBrainz Genre Task: A baseline combining deep feature embeddings across datasets. In MediaEval 2018 Workshop, 2018.
H. Schreiber. MediaEval 2018 AcousticBrainz Genre Task: A CNN Baseline Relying on Mel-Features. In MediaEval 2018 Workshop, 2018.
J. Kim, M. Won, X. Serra, and C. C. S. Liem. Transfer Learning of Artist Group Factors to Musical Genre Classification. In WWW ’18: Companion Proceedings of The Web Conference 2018, 2018.
N. Karunakaran and A. Arya. A Scalable Hybrid Classifier for Music Genre Classification using Machine Learning Concepts and Spark. In International Conference on Intelligent Autonomous Systems (ICoIAS), 2018.
D. Bogdanov, A. Porter, J. Urbano, and H. Schreiber. The MediaEval 2017 AcousticBrainz Genre Task: Content-based Music Genre Recognition from Multiple Sources. In MediaEval 2017 Multimedia Benchmark Workshop (MediaEval’17), 2017.
K. Koutini, A. Imenina, M. Dorfer, A. R. Gruber, and M. Schedl. MediaEval 2017 AcousticBrainz Genre Task: Multilayer Perceptron Approach. In MediaEval 2017 Multimedia Benchmark Workshop (MediaEval’17), 2017.
D. Bogdanov, M. Haro, F. Fuhrmann, A. Xambó, E. Gómez, and P. Herrera. Semantic audio content-based music recommendation and visualization based on user preference examples. Information Processing & Management, 49(1):13–33, 2013.
N. Wack, E. Guaus, C. Laurier, R. Marxer, D. Bogdanov, J. Serrà, and P. Herrera. Music type groupers (MTG): generic music classification algorithms. In Music Information Retrieval Evaluation Exchange (MIREX’09), 2009.
N. Wack, C. Laurier, O. Meyers, R. Marxer, D. Bogdanov, J. Serrà, E. Gómez, and P. Herrera. Music classification using high-level models. In Music Information Retrieval Evaluation Exchange (MIREX’10), 2010.
C. Laurier. Automatic Classification of Musical Mood by Content-Based Analysis. PhD thesis, UPF, Barcelona, Spain, 2011.
C. Laurier, O. Meyers, J. Serrà, M. Blech, P. Herrera, and X. Serra. Indexing music by mood: design and integration of an automatic content-based annotator. Multimedia Tools and Applications, 48(1):161–184, 2009.
C. Johnson-Roberson. Content-Based Genre Classification and Sample Recognition Using Topic Models. Master Thesis, Brown University, Providence, USA, 2017.
E. Fonseca, R. Gong, and X. Serra. A Simple Fusion Of Deep And Shallow Learning For Acoustic Scene Classification. In Sound and Music Computing Conference (SMC’18), 2018.
M. Sordo. Semantic Annotation of Music Collections: A Computational Approach. PhD thesis, UPF, Barcelona, Spain, 2012.
Y. Yang, D. Bogdanov, P. Herrera, and M. Sordo. Music Retagging Using Label Propagation and Robust Principal Component Analysis. In International World Wide Web Conference (WWW’12), International Workshop on Advances in Music Information Research (AdMIRe’12), 2012.
Music preferences, similarity, recommendation, and playlists¶
Chambers, S. (2023). The Curation of Music Discovery: The Presentation of Unfamiliar Classical Music on Radio, Digital Playlists and Concert Programmes. Empirical Studies of the Arts, 41(1), 304-326.
Porcaro, L., Gómez, E., & Castillo, C. (2022). Perceptions of diversity in electronic music: The impact of listener, artist, and track characteristics. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 1-26.
Porcaro, L., Gómez, E., & Castillo, C. (2022). Assessing the Impact of Music Recommendation Diversity on Listeners: A Longitudinal Study. arXiv preprint arXiv:2212.00592.
Porcaro, L. (2022). Assessing the impact of music recommendation diversity on listeners (Doctoral dissertation, Universitat Pompeu Fabra).
Gomes, C. J., Gil-González, A. B., Luis-Reboredo, A., Sánchez-Moreno, D., & Moreno-García, M. N. (2022). Song Recommender System Based on Emotional Aspects and Social Relations. In Distributed Computing and Artificial Intelligence, Volume 1: 18th International Conference (pp. 88-97). Springer International Publishing.
F. Korzeniowski, S. Oramas, and F. Gouyon. Artist Similarity with Graph Neural Networks. In 22nd International Society for Music Information Retrieval Conference (ISMIR 2021), 2021.
Magron, P., & Févotte, C. (2021, June). Leveraging the structure of musical preference in content-aware music recommendation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 581-585). IEEE.
Niyazov, A., Mikhailova, E., & Egorova, O. (2021, May). Content-based music recommendation system. In 2021 29th Conference of Open Innovations Association (FRUCT) (pp. 274-279). IEEE.
Ashley, D. R., Herrmann, V., Friggstad, Z., Mathewson, K. W., & Schmidhuber, J. (2021). Automatic Embedding of Stories Into Collections of Independent Media. arXiv preprint arXiv:2111.02216.
Schoder, J. (2019). Music similarity analysis using the big data framework Spark. Master Thesis.
K. Yadati, C. Liem, M. Larson, and A. Hanjalic. On the Automatic Identification of Music for Common Activities. In 2017 ACM on International Conference on Multimedia Retrieval, pages 192-200, 2017.
D. Bogdanov. From music similarity to music recommendation: Computational approaches based on audio and metadata analysis. PhD thesis, UPF, Barcelona, Spain, 2013.
D. Bogdanov, M. Haro, F. Fuhrmann, A. Xambó, E. Gómez, and P. Herrera. Semantic audio content-based music recommendation and visualization based on user preference examples. Information Processing & Management, 49(1):13–33, Jan. 2013.
D. Bogdanov, J. Serrà, N. Wack, P. Herrera, and X. Serra. Unifying low-level and high-level music similarity measures. IEEE Transactions on Multimedia, 13(4):687–701, 2011.
O. Celma, P. Cano, and P. Herrera. Search Sounds: an audio crawler focused on weblogs. In 7th International Conference on Music Information Retrieval (ISMIR’06), 2006.
J. Kaitila. A content-based music recommender system. Master Thesis, University of Tampere, Finland, 2017.
Music psychology¶
Liew, K., Koh, A. H., Fram, N. R., Brown, C. M., Lee, L. N., Hennequin, R., … & Uchida, Y. (2023). Groovin’ to the cultural beat: Preferences for danceable music represent cultural affordances for high-arousal negative emotions. Psychology of Aesthetics, Creativity, and the Arts.
Greenberg, D. M., Matz, S. C., Schwartz, H. A., & Fricke, K. R. (2021). The self-congruity effect of music. Journal of Personality and Social Psychology, 121(1), 137.
K. R. Fricke, D. M. Greenberg, P. J. Rentfrow, and P. Y. Herzberg. Measuring musical preferences from listening behavior: Data from one million people and 200,000 songs. Psychology of Music, 0305735619868280, 2019.
K. Fricke and P. Herzberg. Know your big data: De-biasing subsamples of large datasets for personality research using importance sampling and kNN matching. Preprint, doi:10.31234/osf.io/4ftb7, 2019.
K. R. Fricke, D.M. Greenberg, P.J. Rentfrow, and P.Y. Herzberg. Computer-based music feature analysis mirrors human perception and can be used to measure individual music preference. Journal of Research in Personality, 75:94-102, 2018.
Music version / cover song identification¶
Yesiler, F., Miron, M., Serrà, J., & Gómez, E. (2022, February). Assessing algorithmic biases for musical version identification. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining (pp. 1284-1290).
Yesiler, F., Molina, E., Serrà, J., & Gómez, E. (2021, June). Investigating the efficacy of music version retrieval systems for setlist identification. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 541-545). IEEE.
C. J. Tralie. Early MFCC And HPCP Fusion for Robust Cover Song Identification. arXiv preprint arXiv:1707.04680, 2017.
J. Serrà, E. Gómez, P. Herrera, and X. Serra. Chroma binary similarity and local alignment applied to cover song identification. IEEE Transactions on Audio, Speech, and Language Processing, 16(6):1138–1151, 2008.
Emotion detection¶
Bogdanov, D., Lizarraga Seijas, X., Alonso-Jiménez, P., & Serra, X. (2022). MusAV: A dataset of relative arousal-valence annotations for validation of audio models. In Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022), Bengaluru, India, 2022.
Jandaghian, M., Setayeshi, S., Razzazi, F., & Sharifi, A. (2023). Music emotion recognition based on a modified brain emotional learning model. Multimedia Tools and Applications, 1-25.
Azuaje, G., Liew, K., Epure, E., Yada, S., Wakamiya, S., & Aramaki, E. (2023). Visualyre: multimodal album art generation for independent musicians. Personal and Ubiquitous Computing, 1-12.
Azuaje, G., Liew, K., Epure, E., Yada, S., Wakamiya, S., & Aramaki, E. (2021, September). Visualyre: Multimodal visualization of lyrics. In Proceedings of the 16th International Audio Mostly Conference (pp. 130-134).
Turchet, L., & Pauwels, J. (2021). Music emotion recognition: intention of composers-performers versus perception of musicians, non-musicians, and listening machines. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 305-316.
Rajamani, S. T., Rajamani, K., & Schuller, B. W. (2021, October). Towards an Efficient Deep Learning Model for Emotion and Theme Recognition in Music. In 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP) (pp. 1-5). IEEE.
S. Chowdhury and G. Widmer. On perceived emotion in expressive piano performance: Further experimental evidence for the relevance of mid-level perceptual features. In 22nd International Society for Music Information Retrieval Conference (ISMIR 2021), 2021.
Byun, S. W., & Lee, S. P. (2021). A Study on a Speech Emotion Recognition System with Effective Acoustic Features Using Deep Learning Algorithms. Applied Sciences, 11(4), 1890.
Panda, R., Malheiro, R. M., & Paiva, R. P. (2020). Audio features for music emotion recognition: a survey. IEEE Transactions on Affective Computing.
Y. Kim, L. M. Aiello, and D. Quercia. PepMusic: motivational qualities of songs for daily activities. EPJ Data Science, 9(1), 13, 2020.
Y. Hifny, A. Ali. Efficient Arabic Emotion Recognition Using Deep Neural Networks. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6710-6714, 2019.
Grekow, J. (2021). Music recommendation based on emotion tracking of musical performances. Recommender Systems for Medicine and Music, 167-186.
Grekow, J. (2021). Music emotion recognition using recurrent neural networks and pretrained models. Journal of Intelligent Information Systems, 57(3), 531-546.
Grekow, J. (2020). Static music emotion recognition using recurrent neural networks. In Foundations of Intelligent Systems: 25th International Symposium, ISMIS 2020, Graz, Austria, September 23–25, 2020, Proceedings (pp. 150-160). Springer International Publishing.
J. Grekow. Finding Musical Pieces with a Similar Emotional Distribution throughout the Same Composition. In 2019 IEEE International Symposium on INnovations in Intelligent SysTems and Applications (INISTA), pages 1-6, 2019.
J. Grekow. From Content-based Music Emotion Recognition to Emotion Maps of Musical Pieces. Springer International Publishing, 2018.
J. Grekow. Musical performance analysis in terms of emotions it evokes. Journal of Intelligent Information Systems, 2018.
J. Grekow. Comparative Analysis of Musical Performances by Using Emotion Tracking. In Foundations of Intelligent Systems, pages 175-184, 2017.
J. Grekow. Audio features dedicated to the detection of arousal and valence in music recordings. In IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), pages 40-44, 2017.
J. Grekow. Music Emotion Maps in Arousal-Valence Space. In IFIP International Conference on Computer Information Systems and Industrial Management, pages 697-706, 2016.
J. Grekow. Audio Features Dedicated to the Detection of Four Basic Emotions. In IFIP International Conference on Computer Information Systems and Industrial Management, pages 583-591, 2015.
J. Grekow. Emotion Detection Using Feature Extraction Tools. In International Symposium on Methodologies for Intelligent Systems, pages 267-272, 2015.
T. Pellegrini, and V. Barriere. Time-continuous estimation of emotion in music with recurrent neural networks. In MediaEval 2015 Multimedia Benchmark Workshop (MediaEval’15), 2015.
A. Aljanaki, F. Wiering, and R. C. Veltkamp. MediaEval 2015: A Segmentation-based Approach to Continuous Emotion Tracking. In MediaEval 2015 Multimedia Benchmark Workshop (MediaEval’15), 2015.
Visualization and interaction with music¶
Turchet, L., Zanotto, C., & Pauwels, J. (2023). “Give me happy pop songs in C major and with a fast tempo”: A vocal assistant for content-based queries to online music repositories. International Journal of Human-Computer Studies, 173, 103007.
Efimova, V., Jarsky, I., Bizyaev, I., & Filchenkov, A. (2022). Conditional vector graphics generation for music cover images. arXiv preprint arXiv:2205.07301.
Font, F. (2021, September). SOURCE: a Freesound Community Music Sampler. In Proceedings of the 16th International Audio Mostly Conference (pp. 182-187).
Graf, M., Chijioke Opara, H., & Barthet, M. (2021, May). An Audio-Driven System for Real-Time Music Visualisation. In Audio Engineering Society Convention 150. Audio Engineering Society.
Lima, H., Santos, C., & Meiguins, B. (2019, July). Visualizing the semantics of music. In 2019 23rd International Conference Information Visualisation (IV) (pp. 352-357). IEEE.
C. Donahue, Z. C. Lipton, and J. McAuley. Dance Dance Convolution. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1039-1048, 2017.
J. H. P. Ono, F. Sikansi, D. C. Corrêa, F. V. Paulovich, A. Paiva, and L. G. Nonato. Concentric RadViz: visual exploration of multi-task classification. In 28th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pages 165-172, 2015.
D. Bogdanov. From music similarity to music recommendation: Computational approaches based on audio and metadata analysis. PhD thesis, UPF, Barcelona, Spain, 2013.
D. Bogdanov, M. Haro, F. Fuhrmann, A. Xambó, E. Gómez, and P. Herrera. Semantic audio content-based music recommendation and visualization based on user preference examples. Information Processing & Management, 49(1):13–33, 2013.
E. Maestre, P. Papiotis, M. Marchini, Q. Llimona, and O. Mayor. Online Access and Visualization of Enriched Multimodal Representations of Music Performance Recordings: the Quartet Dataset and the Repovizz System. IEEE Multimedia, 24(1):24-34, 2017.
C. F. Julià and S. Jordà. SongExplorer: a tabletop application for exploring large collections of songs. In International Society for Music Information Retrieval Conference (ISMIR’09), 2009.
C. Laurier, M. Sordo, and P. Herrera. Mood cloud 2.0: Music mood browsing based on social networks. In International Society for Music Information Retrieval Conference (ISMIR’09), 2009.
O. Mayor, J. Llop, and E. Maestre. RepoVizz: A multimodal on-line database and browsing tool for music performance research. In International Society for Music Information Retrieval Conference (ISMIR’11), 2011.
M. Sordo, G. K. Koduri, S. Şentürk, S. Gulati, and X. Serra. A musically aware system for browsing and interacting with audio music collections. In The 2nd CompMusic Workshop, 2012.
A. Augello, I. Infantino, U. Maniscalco, G. Pilato, R. Rizzo, and F. Vella. Robotic intelligence and computational creativity. Robotic Intelligence, 2, 161, 2019.
A. Augello, I. Infantino, U. Maniscalco, G. Pilato, R. Rizzo, and F. Vella. Robotic Intelligence and Computational Creativity. Encyclopedia with Semantic Computing and Robotic Intelligence, 2018.
A. Augello, E. Cipolla, I. Infantino, A. Manfre, G. Pilato, and F. Vella. Creative Robot Dance with Variational Encoder. In International Conference on Computational Creativity, 2017.
A. Augello, I. Infantino, A. Manfrè, G. Pilato, F. Vella, and A. Chella. Creation and cognition for humanoid live dancing. Robotics and Autonomous Systems, 86:128-137, 2016.
F. Kraemer, I. Rodriguez, O. Parra, T. Ruiz, and E. Lazkano. Minstrel robots: Body language expression through applause evaluation. In IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pages 332-337, 2016.
O. Alemi, J. Françoise, and P. Pasquier. GrooveNet: Real-Time Music-Driven Dance Movement Generation using Artificial Neural Networks. In Workshop on Machine Learning for Creativity, 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2017.
J. Buhmann, B. Moens, V. Lorenzoni, and M. Leman. Shifting the Musical Beat to Influence Running Cadence. In Conference of the European Society for the Cognitive Sciences of Music (ESCOM’17), 2017.
J. Buhmann. Effects of music-based biofeedback on walking and running. PhD Thesis, Ghent University, Belgium, 2017.
A. Xambó, G. Roma, A. Lerch, M. Barthet, G. Fazekas. Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases. In New Interfaces for Musical Expression (NIME’18), 2018.
Sound indexing, music production, and intelligent audio processing¶
Ma, A. B., & Lerch, A. (2022). Representation Learning for the Automatic Indexing of Sound Effects Libraries. arXiv preprint arXiv:2208.09096.
Rashid, U., Saleem, K., & Ahmed, A. (2021). MIRRE approach: nonlinear and multimodal exploration of MIR aggregated search results. Multimedia Tools and Applications, 80, 20217-20253.
Shier, J., McNally, K., Tzanetakis, G., & Brooks, K. G. (2021). Manifold learning methods for visualization and browsing of drum machine samples. Journal of the Audio Engineering Society, 69(1/2), 40-53.
Favory, X., Font, F., & Serra, X. (2020, June). Search result clustering in collaborative sound collections. In Proceedings of the 2020 International Conference on Multimedia Retrieval (pp. 207-214).
Vahidi, C., Fazekas, G., Saitis, C., & Palladini, A. (2020). Timbre space representation of a subtractive synthesizer. arXiv preprint arXiv:2009.11706.
K. Subramani, S. Sridhar, M. A. Rohit, and P. Rao. Energy-Weighted Multi-Band Novelty Functions for Onset Detection in Piano Music. In 2018 Twenty Fourth National Conference on Communications (NCC), pages 1-6, 2018.
S. Trump. Genetische Improvisation. Eine empirische Untersuchung von improvisierter Musik anhand evolutionstheoretischer Prinzipien [Genetic improvisation: An empirical study of improvised music based on evolutionary principles]. PhD Thesis, Hochschule für Musik Nürnberg, 2019.
M. A. Martinez Ramirez, and J. D. Reiss. Analysis and prediction of the audio feature space when mixing raw recordings into individual stems. In Audio Engineering Society Convention 143, 2017.
M. Grachten, E. Deruty, A. Tanguy. Auto-adaptive Resonance Equalization using Dilated Residual Networks. arXiv preprint arXiv:1807.08636, 2018.
H. Ordiales, M. L. Bruno. Sound recycling from public databases. In 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences (AM’17), 2017.
S. Parekh, F. Font, and X. Serra. Improving Audio Retrieval through Loudness Profile Categorization. In IEEE International Symposium on Multimedia (ISM), pages 565-568, 2016.
D. Moffat, D. Ronan, and J. D. Reiss. Unsupervised taxonomy of sound effects. In 20th International Conference on Digital Audio Effects (DAFx-17), 2017.
S. Böck. Event Detection in Musical Audio. PhD Thesis, Johannes Kepler University, Linz, Austria, 2016.
J. Shier, K. McNally and G. Tzanetakis. Sieve: A plugin for the automatic classification and intelligent browsing of kick and snare samples. In 3rd Workshop on Intelligent Music Production, 2017.
E. T. Chourdakis, and J. D. Reiss. A Machine-Learning Approach to Application of Intelligent Artificial Reverberation. Journal of the Audio Engineering Society, 65(1/2):56-65, 2017.
O. Campbell, C. Roads, A. Cabrera, M. Wright, and Y. Visell. ADEPT: A Framework for Adaptive Digital Audio Effects. In 2nd AES Workshop on Intelligent Music Production, 2016.
I. Jordal. Evolving artificial neural networks for cross-adaptive audio effects. Master Thesis, Norwegian University of Science and Technology, 2017.
Vande Veire, L., & De Bie, T. (2018). From raw audio to a seamless mix: creating an automated DJ system for Drum and Bass. EURASIP Journal on Audio, Speech, and Music Processing, 2018(1), 1-21.
J. B. Bonmati. DJ Codo Nudo: a novel method for seamless transition between songs for electronic music. Master Thesis, Universitat Pompeu Fabra, Barcelona, Spain, 2016.
F. Font, and X. Serra. Tempo Estimation for Music Loops and a Simple Confidence Measure. In 17th International Society for Music Information Retrieval Conference (ISMIR’16), pages 269-275, 2016.
F. Font. Tag recommendation using folksonomy information for online sound sharing platforms. PhD Thesis. Universitat Pompeu Fabra, Barcelona, Spain, 2015.
J. Janer, M. Haro, G. Roma, T. Fujishima, and N. Kojima. Sound object classification for symbolic audio mosaicing: A proof-of-concept. In Sound and Music Computing Conference (SMC’09), pages 297–302, 2009.
Environmental sounds¶
J. Sharma, O. C. Granmo, and M. Goodwin. Environment Sound Classification using Multiple Feature Channels and Deep Convolutional Neural Networks. arXiv preprint arXiv:1908.11219, 2019.
J. Salamon, and J. P. Bello. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters, 24(3):279-283, 2017.
J. Salamon, and J. P. Bello. Unsupervised feature learning for urban sound classification. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’15), pages 171-175, 2015.
J. Salamon, and J. P. Bello. Feature learning with deep scattering for urban sound analysis. In 23rd European Signal Processing Conference (EUSIPCO), pages 724-728, IEEE, 2015.
M. Haro, J. Serrà, P. Herrera, and A. Corral. Zipf’s law in short-time timbral codings of speech, music, and environmental sound signals. PLoS ONE, 7(3):e33993, 2012.
G. Roma, J. Janer, S. Kersten, M. Schirosa, P. Herrera, and X. Serra. Ecological acoustics perspective for content-based retrieval of environmental sounds. EURASIP Journal on Audio, Speech, and Music Processing, 2010.
D. Moffat and J. D. Reiss. Objective evaluations of synthesised environmental sounds. In International Conference on Digital Audio Effects (DAFx-18), 2018.
Singing voice analysis¶
Faghih, B., Chakraborty, S., Yaseen, A., & Timoney, J. (2022). A new method for detecting onset and offset for singing in real-time and offline environments. Applied Sciences, 12(15), 7391.
Audio analysis tools for assisting music education¶
Acquilino, A., Puranik, N., Fujinaga, I., & Scavone, G. (2023). Detecting efficiency in trumpet sound production: proposed methodology and pedagogical implications. In Proceedings of the 5th Stockholm Music Acoustic Conference.
Alexandraki, C., Akoumianakis, D., Kalochristianakis, M., Zervas, P., & Cambouropoulos, E. (2022, July). MusiCoLab: towards a modular architecture for collaborative music learning. In Proceedings of the Web Audio Conference.
Borgogno, T., & Turchet, L. (2022, September). ImproScales: a self-tutoring web system for using scales in improvisations. In Proceedings of the 17th International Audio Mostly Conference (pp. 219-225).
S. Giraldo, G. Waddell, I. Nou, A. Ortega, O. Mayor, A. Perez, A. Williamon, and R. Ramirez. Automatic Assessment of Tone Quality in Violin Music Performance. Frontiers in Psychology, 10, 334, 2019.
K. Narang and P. Rao. Acoustic Features For Determining Goodness of Tabla Strokes. In 18th International Society for Music Information Retrieval Conference (ISMIR’17), 2017.
G. Bandiera, O. Romani Picas, H. Tokuda, W. Hariya, K. Oishi, and X. Serra. Good-sounds.org: A Framework to Explore Goodness in Instrumental Sounds. In 17th International Society for Music Information Retrieval Conference (ISMIR’16), pages 414-419, 2016.
O. Romani Picas, H. Parra Rodriguez, D. Dabiri, H. Tokuda, W. Hariya, K. Oishi, and X. Serra. A real-time system for measuring sound goodness in instrumental sounds. In Audio Engineering Society Convention 138, 2015.
Y. J. Luo, L. Su, Y. H. Yang, and T. S. Chi. Detection of Common Mistakes in Novice Violin Playing. In 16th International Society for Music Information Retrieval Conference (ISMIR’15), pages 316-322, 2015.
Audio problem detection¶
Wolff, D., Mignot, R., & Roebel, A. (2022). Audio Defect Detection in Music with Deep Networks. arXiv preprint arXiv:2202.05718.
Alonso-Jiménez, P., Joglar-Ongay, L., Serra, X., & Bogdanov, D. (2019). Automatic detection of audio problems for quality control in digital music distribution. In Audio Engineering Society Convention 146, Dublin, Ireland. Audio Engineering Society.
Generative music, live coding, audio synthesis, style transfer¶
Singh, N. (2021, April). The Sound Sketchpad: Expressively Combining Large and Diverse Audio Collections. In 26th International Conference on Intelligent User Interfaces (pp. 297-301).
Cífka, O., Ozerov, A., Şimşekli, U., & Richard, G. (2021, June). Self-supervised VQ-VAE for one-shot music style transfer. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 96-100). IEEE.
Lee, K. J. (2021). Computer evaluation of musical timbre transfer on drum tracks (Master thesis).
Ramires, A., Chandna, P., Favory, X., Gómez, E., & Serra, X. (2020, May). Neural percussive synthesis parameterised by high-level timbral features. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 786-790). IEEE.
Trump, S. (2020). Sound cells in genetic improvisation: an evolutionary model for improvised music. In Artificial Intelligence in Music, Sound, Art and Design: 9th International Conference, EvoMUSART 2020, Held as Part of EvoStar 2020, Seville, Spain, April 15–17, 2020, Proceedings 9 (pp. 179-193). Springer International Publishing.
Xambó, A., Lerch, A., & Freeman, J. (2018). Music information retrieval in live coding: a theoretical framework. Computer Music Journal, 42(4), 9-25.
C. Ó. Nuanáin, P. Herrera, and S. Jordá. Rhythmic Concatenative Synthesis for Electronic Music: Techniques, Implementation, and Evaluation. Computer Music Journal, 41(2):21-37, 2017.
C. Ó. Nuanáin, S. Jordà, and P. Herrera. An Interactive Software Instrument for Real-time Rhythmic Concatenative Synthesis. In New Interfaces for Musical Expression, 2016.
C. Ó. Nuanáin, M. Hermant, A. Faraldo, and E. Gómez. The Eear: Building a real-time MIR-based instrument from a hack. In 16th International Society for Music Information Retrieval Conference (ISMIR’15), Late-Breaking/Demo Session, 2015.
Instrument detection¶
O. Slizovskaia, E. Gómez, and G. Haro. A Case Study of Deep-Learned Activations via Hand-Crafted Audio Features. arXiv preprint arXiv:1907.01813, 2019.
K. A. Pati, and A. Lerch. A Dataset and Method for Guitar Solo Detection in Rock Music. In 2017 AES International Conference on Semantic Audio, 2017.
F. Fuhrmann and P. Herrera. Quantifying the relevance of locally extracted information for musical instrument recognition from entire pieces of music. In International Society for Music Information Retrieval Conference (ISMIR’11), 2011.
F. Fuhrmann, P. Herrera, and X. Serra. Detecting solo phrases in music using spectral and pitch-related descriptors. Journal of New Music Research, 38(4):343–356, 2009.
Audio source separation¶
F. R. Stöter. Separation and Count Estimation for Audio Sources Overlapping in Time and Frequency. PhD Thesis, Friedrich-Alexander University Erlangen-Nürnberg, Germany, 2020.
Music segmentation¶
C. Bohak, and M. Marolt. Probabilistic segmentation of folk music recordings. Mathematical Problems in Engineering, 2016.
A. Aljanaki, F. Wiering, and R. C. Veltkamp. Emotion based segmentation of musical audio. In 16th Conference of the International Society for Music Information Retrieval (ISMIR’15), pages 770-776, 2015.
Tonality analysis¶
Ramires, A., Bernardes, G., Davies, M. E., & Serra, X. (2020). TIV.lib: an open-source library for the tonal description of musical audio. arXiv preprint arXiv:2008.11529.
Á. Faraldo, S. Jordà, and P. Herrera. A Multi-Profile Method for Key Estimation in EDM. In 2017 AES International Conference on Semantic Audio, 2017.
Rhythm and tempo analysis¶
Cano, E., Mora-Ángel, F., Gil, G. A. L., Zapata, J. R., Escamilla, A., Alzate, J. F., & Betancur, M. (2021). Sesquialtera in the Colombian bambuco: Perception and estimation of beat and meter – extended version. Transactions of the International Society for Music Information Retrieval, 4(1).
Halina, E., & Guzdial, M. (2021, August). TaikoNation: Patterning-focused chart generation for rhythm action games. In Proceedings of the 16th International Conference on the Foundations of Digital Games (pp. 1-10).
B. Jia, J. Lv, and D. Liu. Deep learning-based automatic downbeat tracking: a brief review. Multimedia Systems, 1-22, 2019.
A. Srinivasamurthy, and X. Serra. A supervised approach to hierarchical metrical cycle tracking from audio music recordings. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’14), pages 5217-5221, 2014.
Music transcription¶
T. W. Su, Y. P. Chen, L. Su, Y. H. Yang. TENT: Technique-embedded note tracking for real-world guitar solo recordings. Transactions of the International Society for Music Information Retrieval, 2(1), 2019.
K. Ullrich, and E. van der Wel. Music transcription with convolutional sequence-to-sequence models. In International Society for Music Information Retrieval (ISMIR’17), 2017.
Computational musicology¶
Gorgoglione, M., Garavelli, A. C., Panniello, U., & Natalicchio, A. (2023). Information Retrieval Technologies and Big Data Analytics to Analyze Product Innovation in the Music Industry. Sustainability, 15(1), 828.
N. Kudakov, C. Reuter, A. X. Cui, I. Czedik-Eysenberg, and A. Emmer. Dr. Dre vs. Everybody: Akustische Fingerabdrücke von Produzenten und Rappern [Acoustic fingerprints of producers and rappers]. In Fortschritte der Akustik - DAGA 2023, 49. Deutsche Jahrestagung für Akustik, 2023.
Liew, K., Mishra, V., Zhou, Y., Epure, E. V., Hennequin, R., Wakamiya, S., & Aramaki, E. (2022, December). Network Analyses for Cross-Cultural Music Popularity. In ISMIR 2022 Conference.
Simonetta, F., Avanzini, F., & Ntalampiras, S. (2022). A perceptual measure for evaluating the resynthesis of automatic music transcriptions. Multimedia Tools and Applications, 81(22), 32371-32391.
Simonetta, F. (2022). Music Interpretation Analysis. A Multimodal Approach To Score-Informed Resynthesis of Piano Recordings. arXiv preprint arXiv:2205.00941.
Narang, J., Miron, M., Lizarraga, X., & Serra, X. (2021, November). Analysis of musical dynamics in vocal performances. In International Symposium on Computer Music Multidisciplinary Research (pp. 301-311). Cham: Springer International Publishing.
Fan, J., Yang, Y. H., Dong, K., & Pasquier, P. (2020, May). A comparative study of Western and Chinese classical music based on soundscape models. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 521-525). IEEE.
Vavaroutsos, P., & Vikatos, P. (2023, September). HSP-TL: A Deep Metric Learning Model with Triplet Loss for Hit Song Prediction. In 2023 31st European Signal Processing Conference (EUSIPCO) (pp. 146-150). IEEE.
E. Zangerle, M. Vötter, R. Huber, and Y. H. Yang. Hit Song Prediction: Leveraging Low- and High-Level Audio Features. In Proceedings of the 20th International Society for Music Information Retrieval Conference 2019 (ISMIR 2019), 2019.
L. Maia, M. Fuentes, L. Biscainho, M. Rocamora, S. Essid. SAMBASET: A Dataset of Historical Samba de Enredo Recordings for Computational Music Analysis. In 20th International Society for Music Information Retrieval Conference (ISMIR’19), 2019.
I. Czedik-Eysenberg, O. Wieczorek, and C. Reuter. ‘Warriors of the Word’ – Deciphering Lyrical Topics in Music and Their Connection to Audio Feature Dimensions Based on a Corpus of Over 100,000 Metal Songs. arXiv preprint arXiv:1911.04952, 2019.
C. Arévalo, M. J. Mora, and C. Arce-Lopera. Towards an efficient algorithm to get the chorus of a salsa song. In 2015 IEEE International Symposium on Multimedia (ISM), pages 258-261, 2015.
C. C. Liem, and A. Hanjalic. Comparative analysis of orchestral performance recordings: An image-based approach. In 16th International Society for Music Information Retrieval Conference (ISMIR’15), 2015.
R. C. Repetto, R. Gong, N. Kroher, and X. Serra. Comparison of the Singing Style of Two Jingju Schools. In 16th International Society for Music Information Retrieval Conference (ISMIR’15), 2015.
A. Karakurt, S. Şentürk, and X. Serra. MORTY: A Toolbox for Mode Recognition and Tonic Identification. In Proceedings of the 3rd International workshop on Digital Libraries for Musicology, pages 9-16, 2016.
A. Haron. A step towards automatic identification of influence: Lick detection in a musical passage. In 15th International Society for Music Information Retrieval Conference (ISMIR’14), Late-Breaking/Demo Session, 2014.
Melodic analysis¶
Rengaswamy, P., Reddy, M. K., Rao, K. S., & Dasgupta, P. (2020). Robust f0 extraction from monophonic signals using adaptive sub-band filtering. Speech Communication, 116, 77-85.
Viraraghavan, V. S., Pal, A., Aravind, R., & Murthy, H. A. (2020). Data-driven measurement of precision of components of pitch curves in Carnatic music. The Journal of the Acoustical Society of America, 147(5), 3657-3666.
Y. P. Chen, L. Su, and Y. H. Yang. Electric Guitar Playing Technique Detection in Real-World Recording Based on F0 Sequence Pattern Recognition. In 16th International Society for Music Information Retrieval Conference (ISMIR’15), pages 708-714, 2015.
N. Kroher, J. M. Díaz-Báñez, J. Mora, and E. Gómez. Corpus COFLA: a research corpus for the computational study of flamenco music. Journal on Computing and Cultural Heritage (JOCCH), 9(2), 10, 2016.
S. Balke, J. Driedger, J. Abeßer, C. Dittmar, and M. Müller. Towards Evaluating Multiple Predominant Melody Annotations in Jazz Recordings. In 17th International Society for Music Information Retrieval Conference (ISMIR’16), pages 246-252, 2016.
S. I. Giraldo. Computational modelling of expressive music performance in jazz guitar: a machine learning approach. PhD Thesis, Universitat Pompeu Fabra, Barcelona, Spain, 2016.
S. Giraldo, and R. Ramirez. Optimizing melodic extraction algorithm for jazz guitar recordings using genetic algorithms. In Joint Conference ICMC-SMC, pages 25-27, 2014.
R. C. Repetto, and X. Serra. Creating a Corpus of Jingju (Beijing Opera) Music and Possibilities for Melodic Analysis. In 15th International Society for Music Information Retrieval Conference (ISMIR’14), pages 313-318, 2014.
S. Zhang, R. C. Repetto, and X. Serra. Study of the Similarity between Linguistic Tones and Melodic Pitch Contours in Beijing Opera Singing. In 15th International Society for Music Information Retrieval Conference (ISMIR’14), pages 343-348, 2014.
B. Uyar, H. S. Atli, S. Şentürk, B. Bozkurt, and X. Serra. A corpus for computational research of Turkish makam music. In 1st International Workshop on Digital Libraries for Musicology, pages 1-7, ACM, 2014.
S. Şentürk, A. Holzapfel, and X. Serra. Linking scores and audio recordings in makam music of Turkey. Journal of New Music Research, 43(1):34-52, 2014.
S. Şentürk, S. Gulati, and X. Serra. Score Informed Tonic Identification for Makam Music of Turkey. In 14th International Society for Music Information Retrieval Conference (ISMIR’13), pages 175-180, 2013.
H. G. Ranjani, A. Srinivasamurthy, D. Paramashivan, and T. V. Sreenivas. A compact pitch and time representation for melodic contours in Indian art music. The Journal of the Acoustical Society of America, 145, pages 597-603, 2019.
K. K. Ganguli, S. Gulati, X. Serra, and P. Rao. Data-Driven Exploration of Melodic Structure in Hindustani Music. In 17th International Society for Music Information Retrieval Conference (ISMIR’16), pages 605-611, 2016.
S. Gulati, J. Serrà, and X. Serra. Improving Melodic Similarity in Indian Art Music Using Culture-Specific Melodic Characteristics. In 16th International Society for Music Information Retrieval Conference (ISMIR’15), pages 680-686, 2015.
S. Gulati, J. Serrà, and X. Serra. An evaluation of methodologies for melodic similarity in audio recordings of Indian art music. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’15), pages 678-682, 2015.
S. Gulati, J. Serrà, V. Ishwar, and X. Serra. Mining melodic patterns in large audio collections of Indian art music. In 10th International Conference on Signal-Image Technology and Internet-Based Systems (SITIS’14), pages 264-271, IEEE, 2014.
S. Gulati, A. Bellur, J. Salamon, V. Ishwar, H. A. Murthy, and X. Serra. Automatic tonic identification in Indian art music: approaches and evaluation. Journal of New Music Research, 43(1):53-71, 2014.
G. K. Koduri, S. Gulati, P. Rao, and X. Serra. Raga recognition based on pitch distribution methods. Journal of New Music Research, 41(4):337–350, 2012.
G. K. Koduri, J. Serrà, and X. Serra. Characterization of intonation in Carnatic music by parametrizing pitch histograms. In International Society for Music Information Retrieval Conference (ISMIR’12), pages 199–204, 2012.
H. G. Ranjani, D. Paramashivan, and T. V. Sreenivas. Discovering structural similarities among rāgas in Indian Art Music: a computational approach. Sādhanā, 44(5), 120, 2019.
Psychoacoustics¶
G. Feller and C. Reuter. Klingt Sinus blau und Sägezahn rot? Eine Untersuchung zu Crossmodal Correspondences bei der Wahrnehmung von synthetischen Wellenformen [Does a sine sound blue and a sawtooth red? A study of crossmodal correspondences in the perception of synthetic waveforms]. In Fortschritte der Akustik - DAGA 2023, 49. Deutsche Jahrestagung für Akustik, 2023.
Roos, M., Reuter, C., Plitzner, M., Siddiq, S., Czedik-Eysenberg, I., & Rupp, A. (2023). Die Kirche im Dorf lassen. Präferenz für Glockenklänge in Abhängigkeit der Herkunft [Leaving the church in the village: Preference for bell sounds depending on their origin]. In Fortschritte der Akustik-DAGA 2023. 49. Jahrestagung für Akustik (pp. 1003-1006). Deutsche Gesellschaft für Akustik eV (DEGA).
Reuter, C., Plitzner, M., Roos, M., Czedik-Eysenberg, I., Weber, V., Siddiq, S., … & Rupp, A. (2022, September). The Sound of Bells in Data Cells – Perceived Quality and Pleasantness of Church Bell Chimes. In Proceedings of Meetings on Acoustics (Vol. 49, No. 1). AIP Publishing.
R. T. Dean, A. J. Milne, and F. Bailes. Spectral Pitch Similarity is a Predictor of Perceived Change in Sound- as Well as Note-Based Music. Music & Science, 2, 2059204319847351, 2019.
P. Harrison and M. T. Pearce. Simultaneous consonance in music perception and composition. Psychological Review, 127(2), 2019.
Speech processing and voice technology¶
Benhafid, Z., Selouani, S. A., Amrouche, A., & Sidi Yakoub, M. (2023). Attention-based factorized TDNN for a noise-robust and spoof-aware speaker verification system. International Journal of Speech Technology, 1-14.
Ahmed, A., Sivarajah, U., Irani, Z., Mahroof, K., & Charles, V. (2022). Data-driven subjective performance evaluation: An attentive deep neural networks model based on a call centre case. Annals of Operations Research, 1-32.
Ahmed, A., Toral, S., Shaalan, K., & Hifny, Y. (2020). Agent productivity modeling in a call center domain using attentive convolutional neural networks. Sensors, 20(19), 5489.
Mannone, M., & Rocchesso, D. (2022). Quanta in sound, the sound of quanta: a voice-informed quantum theoretical perspective on sound. In Quantum Computing in the Arts and Humanities: An Introduction to Core Concepts, Theory and Applications (pp. 193-226). Cham: Springer International Publishing.
Rocchesso, D., & Mannone, M. (2020). A quantum vocal theory of sound. Quantum Information Processing, 19(9), 292.
Bioacoustic analysis¶
C. Fleitas and A. Moreno. Computational solution for the parametrization of bioacoustic signals. Bachelor Thesis, University of Havana, Cuba, 2018.
J. Salamon, J. P. Bello, A. Farnsworth, and S. Kelling. Fusing shallow and deep learning for bioacoustic bird species classification. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’17), pages 141-145, 2017.
J. Salamon, J. P. Bello, A. Farnsworth, M. Robbins, S. Keen, H. Klinck, and S. Kelling. Towards the automatic classification of avian flight calls for bioacoustic monitoring. PLoS ONE, 11(11), e0166866, 2016.
C. Lopez-Tello and V. Muthukumar. Classifying Acoustic Signals for Wildlife Monitoring and Poacher Detection on UAVs. In 2018 21st Euromicro Conference on Digital System Design (DSD), pp. 685-690. IEEE, 2018.
C. Lopez-Tello. Acoustic Detection, Source Separation, and Classification Algorithms for Unmanned Aerial Vehicles in Wildlife Monitoring and Poaching. Master Thesis, University of Nevada, Las Vegas, USA, 2016.
Acoustic analysis for medical and neuroimaging studies¶
S. Koelsch, S. Skouras, T. Fritz, P. Herrera, C. Bonhage, M. Kuessner, and A. M. Jacobs. Neural correlates of music-evoked fear and joy: The roles of auditory cortex and superficial amygdala. Neuroimage, 81:49-60, 2013.
Ali, L., He, Z., Cao, W., Rauf, H. T., Imrana, Y., & Bin Heyat, M. B. (2021). MMDD-Ensemble: A Multimodal Data–Driven Ensemble Approach for Parkinson’s Disease Detection. Frontiers in Neuroscience, 15, 754058.
E. Vaiciukynas, A. Verikas, A. Gelzinis, M. Bacauskiene, K. Vaskevicius, V. Uloza, E. Padervinskis, and J. Ciceliene. Fusing Various Audio Feature Sets for Detection of Parkinson’s Disease from Sustained Voice and Speech Recordings. In International Conference on Speech and Computer (SPECOM’16), pages 328-337, 2016.
F. A. Araújo, F. L. Brasil, A. C. L. Santos, L. D. S. B. Junior, S. P. F. Dutra, and C. E. C. F. Batista. Auris System: Providing vibrotactile feedback for hearing impaired population. BioMed Research International, 2017, 2017.
M. A. Casey. Music of the 7Ts: Predicting and Decoding Multivoxel fMRI Responses with Acoustic, Schematic, and Categorical Music Features. Frontiers in psychology, 8, 2017.
G. Chambres, P. Hanna, and M. Desainte-Catherine. Automatic Detection of Patient with Respiratory Diseases Using Lung Sound Analysis. In 2018 International Conference on Content-Based Multimedia Indexing (CBMI), pages 1-6, IEEE, 2018.
A. Rao, E. Huynh, T. J. Royston, A. Kornblith, and S. Roy. Acoustic Methods for Pulmonary Diagnosis. IEEE reviews in biomedical engineering, 12, 221-239, 2018.
D. Perna and A. Tagarelli. Deep auscultation: Predicting respiratory anomalies and diseases via recurrent neural networks. In 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), pages 50-55, 2019.
V. K. Cheung, P. M. Harrison, L. Meyer, M. T. Pearce, J. D. Haynes, and S. Koelsch. Uncertainty and surprise jointly predict musical pleasure and amygdala, hippocampus, and auditory cortex activity. Current Biology, 29(23), 4084-4092, 2019.
Talebzadeh, A., Botteldooren, D., Van Renterghem, T., Thomas, P., Van de Velde, D., De Vriendt, P., … & Devos, P. (2023). Sound augmentation for people with dementia: Soundscape evaluation based on sound labelling. Applied Acoustics, 215, 109717.