CREPE deep pitch estimation neural network

Since R2023a

Audio Toolbox / Deep Learning


The CREPE block uses a pretrained convolutional neural network to estimate pitch from an audio signal. This block requires Deep Learning Toolbox™.


Examples

This example shows how to use the CREPE Preprocess, CREPE, and CREPE Postprocess blocks to combine preprocessing, network inference, and postprocessing to obtain pitch estimates from an audio signal. See Estimate Pitch Using Deep Pitch Estimator Block for an example that uses the Deep Pitch Estimator block to perform the same task.

Adjust the parameters of the blocks to speed up computation and see the pitch estimates in real time as the audio plays.

  • Set the Overlap percentage (%) of the CREPE Preprocess block to 50. With a lower overlap percentage, the system processes frames less frequently.

  • Set the Number of output frames of the CREPE Preprocess block to 5. This causes the CREPE Preprocess block to buffer audio frames and pass them to the CREPE block in batches. Passing batches to the CREPE block improves computational efficiency by allowing it to process multiple frames in parallel. However, it also increases latency because the system outputs pitch estimations in batches instead of one at a time.

  • Set the Model capacity of the CREPE block to Large. This model has fewer parameters than the full-size model, leading to faster computation at the cost of slightly lower accuracy.

Run the model to listen to a singing voice and view the estimated pitch in real time.
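The framing performed by the CREPE Preprocess block can be sketched in Python. This is an illustrative approximation based on the CREPE paper, not the block's implementation: 16 kHz audio is split into 1024-sample frames at the chosen overlap, and each frame is standardized before inference. The function name `crepe_frames` is hypothetical.

```python
import numpy as np

def crepe_frames(audio, overlap_pct=50, frame_len=1024):
    """Split a 16 kHz mono signal into overlapping, standardized frames.

    Illustrative sketch of CREPE-style preprocessing; the Simulink
    block's exact implementation may differ.
    """
    hop = int(frame_len * (1 - overlap_pct / 100))  # 50% overlap -> hop of 512
    n_frames = 1 + max(0, (len(audio) - frame_len) // hop)
    frames = np.stack([audio[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    # Standardize each frame to zero mean and unit standard deviation.
    frames = frames - frames.mean(axis=1, keepdims=True)
    frames = frames / np.clip(frames.std(axis=1, keepdims=True), 1e-8, None)
    return frames  # shape (n_frames, frame_len)

audio = np.random.randn(16000).astype(np.float32)  # 1 second at 16 kHz
f = crepe_frames(audio, overlap_pct=50)
print(f.shape)  # (30, 1024)
```

Lowering the overlap percentage increases the hop size, so fewer frames are produced per second of audio, which is why the system processes frames less frequently.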



Ports

Input

Preprocessed input to the network, specified as a 1024-by-1-by-1-by-N array, where N is the number of audio frames. If the input has only one frame, the block accepts a vector.

The CREPE Preprocess block takes in an audio signal and outputs the preprocessed frames.

Data Types: single | double


Output

Network activations output by the CREPE network, returned as an N-by-360 matrix, where N is the number of input frames.

The CREPE Postprocess block converts these network activations to pitch estimates in Hz.

Data Types: single
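The mapping from the 360 network activations to a pitch estimate can be sketched as follows. The bin spacing and offset values come from the open-source CREPE reference implementation (20-cent bins starting 1997.3794 cents above 10 Hz); the CREPE Postprocess block's exact postprocessing may differ, and the simple argmax used here is a simplification of the reference code's local weighted average.

```python
import numpy as np

# CREPE quantizes pitch into 360 bins spaced 20 cents apart, offset
# 1997.3794 cents above a 10 Hz reference (values from the open-source
# CREPE implementation; shown here for illustration only).
CENTS = np.linspace(0, 7180, 360) + 1997.3794084376191

def activations_to_pitch(acts):
    """Convert N-by-360 activations to pitch estimates in Hz by picking
    the strongest bin (a sketch; the reference implementation refines
    this with a local weighted average around the peak)."""
    bins = acts.argmax(axis=1)
    return 10 * 2 ** (CENTS[bins] / 1200)  # cents above 10 Hz -> Hz

acts = np.zeros((1, 360))
acts[0, 140] = 1.0  # a single confident bin
pitch = activations_to_pitch(acts)
print(pitch)  # roughly 159.8 Hz
```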


Parameters

Model capacity, specified as Full, Large, Medium, Small, or Tiny. The smaller sizes correspond to fewer parameters in the model, leading to faster computation but lower accuracy.

Size of mini-batches to use for prediction, specified as a positive integer. Larger mini-batch sizes require more memory but can lead to faster predictions.
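The memory-versus-speed trade-off of the mini-batch size can be illustrated with a generic batching loop. This is a sketch of the general technique, not the block's internals; `predict_in_minibatches` and the dummy `predict_fn` are hypothetical names.

```python
import numpy as np

def predict_in_minibatches(frames, predict_fn, mini_batch_size=64):
    """Run inference on frames in chunks of mini_batch_size.

    Larger chunks let the network process more frames in parallel
    (faster) but require memory for the whole chunk at once.
    """
    outputs = [predict_fn(frames[i:i + mini_batch_size])
               for i in range(0, len(frames), mini_batch_size)]
    return np.concatenate(outputs)

# Stand-in for the network: maps each 1024-sample frame to 360 activations.
frames = np.random.randn(150, 1024).astype(np.float32)
acts = predict_in_minibatches(frames,
                              lambda x: np.zeros((len(x), 360)),
                              mini_batch_size=64)
print(acts.shape)  # (150, 360)
```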

Block Characteristics

Data Types: double | single
Direct Feedthrough: no
Multidimensional Signals: no
Variable-Size Signals: no
Zero-Crossing Detection: no

References

[1] Kim, Jong Wook, Justin Salamon, Peter Li, and Juan Pablo Bello. "Crepe: A Convolutional Representation for Pitch Estimation." In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 161–65. Calgary, AB: IEEE, 2018.

Extended Capabilities

Version History

Introduced in R2023a