TempoCNN

streaming mode | Rhythm category

Inputs

  • audio (vector_real) - the input audio signal sampled at 11025 Hz

Outputs

  • globalTempo (real) - the overall tempo estimation in BPM

  • localTempo (vector_real) - the patch-wise tempo estimations in BPM

  • localTempoProbabilities (vector_real) - the patch-wise tempo probabilities

Parameters

  • aggregationMethod (string ∈ {majority, mean, median}, default = majority) :

    method used to estimate the global tempo.

  • batchSize (integer ∈ [-1, ∞), default = 64) :

    number of patches to process in parallel. Use -1 or 0 to accumulate all the patches and run a single TensorFlow session at the end of the stream.

  • graphFilename (string, default = "") :

    the name of the file from which to load the TensorFlow graph

  • input (string, default = input) :

    the name of the input node in the TensorFlow graph

  • lastPatchMode (string ∈ {discard, repeat}, default = discard) :

    what to do with the last frames: repeat them to fill the last patch or discard them

  • output (string, default = output) :

    the name of the node from which to retrieve the tempo bin activations

  • patchHopSize (integer ∈ [0, ∞), default = 128) :

    the number of frames between the beginnings of adjacent patches. Use 0 to avoid overlap

  • savedModel (string, default = "") :

    the name of the TensorFlow SavedModel. Overrides the graphFilename parameter
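
These parameters are set when the algorithm is configured. As a minimal sketch (not taken from the official documentation), the following Python snippet configures the standard-mode version of the algorithm; the model filename is only a placeholder for a file downloaded from the Essentia models page:

from essentia.standard import TempoCNN

# Placeholder model file; any supported TempoCNN graph from
# https://essentia.upf.edu/models/ can be used instead.
tempo_cnn = TempoCNN(graphFilename='deeptemp-k16-3.pb',
                     aggregationMethod='majority',  # global tempo by majority vote
                     batchSize=64,                  # patches per TensorFlow run
                     patchHopSize=128,              # frames between patch starts
                     lastPatchMode='discard')       # drop an incomplete final patch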

Description

This algorithm estimates tempo using TempoCNN-based models.

Internally, this algorithm is a wrapper that aggregates the predictions generated by TensorflowPredictTempoCNN. localTempo is a vector containing the most likely BPM estimated for each patch (approximately every 6 seconds by default). localTempoProbabilities contains the probabilities attached to these tempo estimations and can be used as a confidence measure. globalTempo is an aggregation of localTempo using the selected aggregationMethod. We strongly recommend using majority voting when the input audio can be assumed to have a constant tempo.
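
The three aggregation methods can be illustrated with a short NumPy sketch over hypothetical patch-wise estimates (this is only an illustration of the idea, not Essentia's internal implementation):

import numpy as np

# Hypothetical patch-wise estimates in BPM, as returned in localTempo.
local_tempo = np.array([120.0, 120.0, 121.0, 120.0, 60.0])

# 'majority': pick the most frequent patch-wise estimate.
values, counts = np.unique(local_tempo, return_counts=True)
majority = values[np.argmax(counts)]    # 120.0

# 'mean' and 'median': simple statistics over the patches.
mean = float(np.mean(local_tempo))      # 108.2
median = float(np.median(local_tempo))  # 120.0

With an outlier patch such as the 60 BPM estimate above, majority voting (and the median) stays at 120 BPM while the mean is pulled down, which is why majority voting is preferred for constant-tempo material.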

See TensorflowPredictTempoCNN for details about the remaining parameters. The recommended pipeline is as follows:

MonoLoader(sampleRate=11025) >> TempoCNN
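
A minimal sketch of this pipeline using the streaming Python API is shown below; the audio path and model filename are placeholders, and the model is assumed to have been downloaded from the Essentia models page:

import essentia
from essentia.streaming import MonoLoader, TempoCNN

pool = essentia.Pool()

# Placeholders: any audio file and any supported TempoCNN model file.
loader = MonoLoader(filename='audio.wav', sampleRate=11025)
tempo_cnn = TempoCNN(graphFilename='deeptemp-k16-3.pb')

# MonoLoader(sampleRate=11025) >> TempoCNN
loader.audio >> tempo_cnn.audio
tempo_cnn.globalTempo >> (pool, 'tempo.global')
tempo_cnn.localTempo >> (pool, 'tempo.local')
tempo_cnn.localTempoProbabilities >> (pool, 'tempo.probabilities')

essentia.run(loader)

print(pool['tempo.global'])  # overall tempo estimation in BPM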

Note: This algorithm does not perform any checks on the input model, so it is the user's responsibility to make sure it is valid.

References:

  1. Hendrik Schreiber, Meinard Müller, "A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network," Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018.

  2. Hendrik Schreiber, Meinard Müller, "Musical Tempo and Key Estimation using Convolutional Neural Networks with Directional Filters," Proceedings of the Sound and Music Computing Conference (SMC), Málaga, Spain, 2019.

  3. Original models and code at https://github.com/hendriks73/tempo-cnn

  4. Supported models at https://essentia.upf.edu/models/

Source code

See also

Key (standard) Key (streaming) MonoLoader (standard) MonoLoader (streaming) TempoCNN (standard) TensorflowPredict (standard) TensorflowPredict (streaming) TensorflowPredictTempoCNN (standard) TensorflowPredictTempoCNN (streaming)