TensorflowPredictMAEST

standard mode | Machine Learning category

Inputs

  • signal (vector_real) - the input audio signal sampled at 16 kHz

Outputs

  • predictions (tensor_real) - the output values from the model node defined by the output parameter

Parameters

  • batchSize (integer ∈ [-1, ∞), default = 1) :
    the batch size for prediction. This allows parallelization when GPUs are available. Set it to -1 or 0 to accumulate all the patches and run a single TensorFlow session at the end of the stream
  • graphFilename (string, default = "") :
    the name of the file from which to load the TensorFlow graph
  • input (string, default = "serving_default_melspectrogram") :
    the name of the input node in the TensorFlow graph
  • isTrainingName (string, default = "") :
    the name of an additional input node indicating whether the model should run in training mode (leave it empty for models without such a node)
  • lastPatchMode (string ∈ {discard, repeat}, default = discard) :
    what to do with the last frames: repeat them to fill the last patch or discard them
  • output (string, default = "StatefulPartitionedCall") :
    the name of the node from which to retrieve the output tensors
  • patchHopSize (integer ∈ [0, ∞), default = 1875) :
    number of frames between the beginnings of adjacent patches. 0 to avoid overlap
  • patchSize (integer ∈ [0, ∞), default = 1876) :
    number of frames required for each inference. This parameter should match the model's expected input shape.
  • savedModel (string, default = "") :
    the name of the TensorFlow SavedModel. Overrides parameter graphFilename

Description

This algorithm makes predictions using MAEST-based models.

Internally, it uses TensorflowInputMusiCNN for the input feature extraction. It feeds the model with mel-spectrogram patches and jumps a constant amount of frames determined by patchHopSize.
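
The patch layout described above can be sketched as follows. This is an illustrative reimplementation, not Essentia's internal code; the defaults are taken from the parameter list above.

```python
def patch_starts(n_frames, patch_size=1876, patch_hop_size=1875,
                 last_patch_mode="discard"):
    """Return the start frame of each patch sent to the model."""
    if patch_hop_size == 0:  # 0 means no overlap: hop by a full patch
        patch_hop_size = patch_size
    starts = []
    start = 0
    while start + patch_size <= n_frames:
        starts.append(start)
        start += patch_hop_size
    # lastPatchMode="repeat": leftover frames form one more patch (filled by
    # repeating frames); "discard" drops them.
    if last_patch_mode == "repeat" and start < n_frames:
        starts.append(start)
    return starts
```

With the default one-frame overlap, a 4000-frame stream yields patches starting at frames 0 and 1875, and the trailing 250 frames are either discarded or padded into a third patch depending on lastPatchMode.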

Setting the batchSize parameter to -1 or 0 accumulates the patches and runs a single TensorFlow session at the end of the stream. This takes advantage of parallelization when GPUs are available, but it can exhaust memory for long audio files.

For the official MAEST models, the algorithm outputs the probabilities for 400 music style labels by default. Additionally, it is possible to retrieve the output of each attention layer by setting output=StatefulPartitionedCall:n, where n is the index of the layer (starting from 1). The output of the attention layers should be interpreted as follows:

  [batch_index, 1, token_number, embeddings_size]

where the first and second tokens (e.g., [0, 0, :2, :]) correspond to the CLS and DIST tokens, respectively, and the remaining tokens correspond to the input signal (refer to the original paper for details [1]).
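
Slicing such an attention-layer tensor could look like the sketch below. The token count (12) and embedding size (768) are assumptions for illustration only; both depend on the specific MAEST variant.

```python
import numpy as np

# Shape follows [batch_index, 1, token_number, embeddings_size] as described
# above; the dimensions here are placeholders, not taken from a real model.
attn = np.zeros((1, 1, 12, 768))

cls_token = attn[0, 0, 0, :]       # first token: CLS
dist_token = attn[0, 0, 1, :]      # second token: DIST
signal_tokens = attn[0, 0, 2:, :]  # remaining tokens: the input signal
```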

The recommended pipeline is as follows:

MonoLoader(sampleRate=16000, resampleQuality=4) >> TensorflowPredictMAEST
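
In Python's standard mode, that pipeline could be written as the sketch below. The audio and model filenames are placeholders; pick an actual model from the supported list [2]. This requires Essentia built with TensorFlow support and a downloaded model file.

```python
from essentia.standard import MonoLoader, TensorflowPredictMAEST

# Load audio at 16 kHz, as required by the model's input features.
audio = MonoLoader(filename="audio.wav", sampleRate=16000, resampleQuality=4)()

# "maest-model.pb" is a placeholder; see [2] for the official MAEST models.
model = TensorflowPredictMAEST(graphFilename="maest-model.pb")
predictions = model(audio)
```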

Note: this algorithm does not perform any checks on the input model, so it is the user's responsibility to ensure it is valid.

Note: when patchHopSize and patchSize are not specified, the algorithm parses the graphFilename string to try to set appropriate values.

References:

  1. Alonso-Jiménez, P., Serra, X., & Bogdanov, D. (2023). Efficient Supervised Training of Audio Transformers for Music Representation Learning. In Proceedings of the 24th International Society for Music Information Retrieval Conference (ISMIR 2023)
  2. Supported models at https://essentia.upf.edu/models.html#MAEST

See also

MonoLoader (standard) MonoLoader (streaming) TensorflowInputMusiCNN (standard) TensorflowInputMusiCNN (streaming) TensorflowPredict (standard) TensorflowPredict (streaming) TensorflowPredictMAEST (streaming)
