Calculate corner metric matrix and find corners in images
The Corner Detection block finds corners in an image using the Harris corner detection (by Harris & Stephens), minimum eigenvalue (by Shi & Tomasi), or local intensity comparison (Features from Accelerated Segment Test, FAST by Rosten & Drummond) method. The block finds the corners in the image based on the pixels that have the largest corner metric values.
For the most accurate results, use the minimum eigenvalue method. For the fastest computation, use the local intensity comparison method. For a tradeoff between accuracy and computation speed, use the Harris corner detection method.
Port  Description  Supported Data Types
I  Matrix of intensity values
Loc  M-by-2 matrix of [x y] coordinates that represents the locations of the corners. M represents the number of corners and is less than or equal to the Maximum number of corners parameter.  32-bit unsigned integer
Count  Scalar value that represents the number of detected corners  32-bit unsigned integer
Metric  Matrix of corner metric values that is the same size as the input image  Same as I port
Minimum Eigenvalue Method

This method is more computationally expensive than the Harris corner detection algorithm because it directly calculates the eigenvalues of the sum of the squared difference matrix, M.
The sum of the squared difference matrix, M, is defined as follows:
$$M=\left[\begin{array}{cc}A& C\\ C& B\end{array}\right]$$
The previous equation is based on the following values:
$$\begin{array}{c}A={({I}_{x})}^{2}\otimes w\\ B={({I}_{y})}^{2}\otimes w\\ C=\left({I}_{x}{I}_{y}\right)\otimes w\end{array}$$
where $${I}_{x}$$ and $${I}_{y}$$ are the gradients of the input image, I, in the x and y direction, respectively. The $$\otimes $$ symbol denotes a convolution operation.
Use the Coefficients for separable smoothing filter parameter to define a vector of filter coefficients. The block multiplies this vector of coefficients by its transpose to create a matrix of filter coefficients, w.
The block calculates the smaller eigenvalue of the sum of the squared difference matrix. This minimum eigenvalue corresponds to the corner metric matrix.
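The smaller-eigenvalue computation described above can be sketched in NumPy. This is an illustrative approximation only, not the block's fixed-point implementation: the gradient operator (`np.gradient`), the zero-padded "same" convolution, and the `coeffs` vector are all assumptions that may differ from the block's internal choices.

```python
import numpy as np

def min_eigenvalue_metric(image, coeffs):
    """Corner metric: smaller eigenvalue of the SSD matrix M = [[A, C], [C, B]].

    Sketch only; the block's gradient and border handling may differ.
    """
    image = image.astype(float)
    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x).
    Iy, Ix = np.gradient(image)
    # Separable smoothing: equivalent to convolving with w = coeffs * coeffs'.
    smooth = lambda im: np.apply_along_axis(
        np.convolve, 0,
        np.apply_along_axis(np.convolve, 1, im, coeffs, 'same'),
        coeffs, 'same')
    A = smooth(Ix * Ix)
    B = smooth(Iy * Iy)
    C = smooth(Ix * Iy)
    # Closed-form smaller eigenvalue of the 2x2 symmetric matrix [[A, C], [C, B]].
    return (A + B) / 2 - np.sqrt(((A - B) / 2) ** 2 + C ** 2)
```

With nonnegative smoothing coefficients, M is positive semidefinite, so this metric is nonnegative everywhere, consistent with the note later in this page.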
Harris Corner Detection Method

The Harris corner detection method avoids the explicit computation of the eigenvalues of the sum of squared differences matrix by solving for the following corner metric matrix, R:
$$R=AB-{C}^{2}-k{(A+B)}^{2}$$
A, B, C are defined in the previous section, Minimum Eigenvalue Method.
The variable k corresponds to the sensitivity factor. You can specify its value using the Sensitivity factor (0<k<0.25) parameter. The smaller the value of k, the more likely it is that the algorithm can detect sharp corners.
Use the Coefficients for separable smoothing filter parameter to define a vector of filter coefficients. The block multiplies this vector of coefficients by its transpose to create a matrix of filter coefficients, w.
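As a hedged NumPy sketch (not the block's implementation; the gradient operator, border handling, and `coeffs` vector are illustrative assumptions), the Harris metric needs only elementwise arithmetic, with no per-pixel eigenvalue solve:

```python
import numpy as np

def harris_metric(image, coeffs, k=0.04):
    """Harris corner metric R = A*B - C**2 - k*(A + B)**2 (sketch only)."""
    image = image.astype(float)
    Iy, Ix = np.gradient(image)
    # Separable smoothing with w = coeffs * coeffs', applied row-wise then column-wise.
    smooth = lambda im: np.apply_along_axis(
        np.convolve, 0,
        np.apply_along_axis(np.convolve, 1, im, coeffs, 'same'),
        coeffs, 'same')
    A, B, C = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    # R is large and positive near corners, negative along edges, near zero in flat areas.
    return A * B - C ** 2 - k * (A + B) ** 2
```

Unlike the minimum eigenvalue metric, R can go negative (along edges, A*B - C**2 is small while (A + B)**2 is not), which matches the sign note later in this page.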
Local Intensity Comparison Method

This method determines that a pixel is a possible corner if it has either N contiguous valid bright surrounding pixels or N contiguous valid dark surrounding pixels. Specifying the value of N is discussed later in this section. The next section explains how the block finds these surrounding pixels.
Suppose that p is the pixel under consideration and j is one of the pixels surrounding p. The locations of the other surrounding pixels are denoted by the shaded areas in the following figure.
$${I}_{p}$$ and $${I}_{j}$$ are the intensities of pixels p and j, respectively. Pixel j is a valid bright surrounding pixel if $${I}_{j}-{I}_{p}\ge T$$. Similarly, pixel j is a valid dark surrounding pixel if $${I}_{p}-{I}_{j}\ge T$$. In these equations, T is the value you specified for the Intensity comparison threshold parameter.
The block repeats this process to determine whether the block has N contiguous valid surrounding pixels. The value of N is related to the value you specify for the Maximum angle to be considered a corner (in degrees), as shown in the following table.
Number of Valid Surrounding Pixels, N  Angle (degrees) 

15  22.5 
14  45 
13  67.5 
12  90 
11  112.5 
10  135 
9  157.5 
After the block determines that a pixel is a possible corner, it computes its corner metric using the following equation:
$$R=\mathrm{max}\left({\displaystyle \sum _{j:{I}_{j}\ge {I}_{p}+T}\left(\left|{I}_{p}-{I}_{j}\right|-T\right)},{\displaystyle \sum _{j:{I}_{j}\le {I}_{p}-T}\left(\left|{I}_{p}-{I}_{j}\right|-T\right)}\right)$$
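A rough sketch of the segment test and metric for a single pixel follows. The 16-pixel radius-3 ring offsets and the function name are assumptions for illustration; the block's exact sampling pattern is the one shown in its figure.

```python
import numpy as np

# Offsets of a 16-pixel Bresenham circle of radius 3 (an assumed sampling
# pattern; each pixel spans 22.5 degrees of the ring, matching the N/angle table).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_corner_metric(image, r, c, T, N):
    """Return the corner metric at (r, c), or 0.0 if the segment test fails."""
    Ip = float(image[r, c])
    ring = np.array([float(image[r + dr, c + dc]) for dr, dc in CIRCLE])
    bright = ring >= Ip + T           # valid bright surrounding pixels
    dark = ring <= Ip - T             # valid dark surrounding pixels

    def has_run(mask, n):             # N contiguous pixels, wrapping around the ring
        run = 0
        for v in np.concatenate([mask, mask]):
            run = run + 1 if v else 0
            if run >= n:
                return True
        return False

    if not (has_run(bright, N) or has_run(dark, N)):
        return 0.0
    # Metric from the equation above: max of the summed excess differences.
    return max(np.sum(np.abs(Ip - ring[bright]) - T),
               np.sum(np.abs(Ip - ring[dark]) - T))
```

Because only comparisons and additions are involved, this test is much cheaper than either gradient-based method.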
The following diagrams show the data types used in the Corner Detection block for fixed-point signals. These diagrams apply to the Harris corner detection and minimum eigenvalue methods only.
The following table summarizes the variables used in the previous diagrams.
Variable Name  Definition 

IN_DT  Input data type 
MEM_DT  Memory data type 
OUT_DT  Metric output data type 
COEF_DT  Coefficients data type 
The Corner Detection dialog box appears as shown in the following figure.
Specify the method to use to find the corner values. Your choices are Harris corner detection (Harris & Stephens), Minimum eigenvalue (Shi & Tomasi), and Local intensity comparison (Rosten & Drummond).
Specify the sensitivity factor, k. The smaller the value of k, the more likely the algorithm is to detect sharp corners. This parameter is visible if you set the Method parameter to Harris corner detection (Harris & Stephens). This parameter is tunable.
Specify a vector of filter coefficients for the smoothing filter. This parameter is visible if you set the Method parameter to Harris corner detection (Harris & Stephens) or Minimum eigenvalue (Shi & Tomasi).
Specify the threshold value used to find valid surrounding pixels. This parameter is visible if you set the Method parameter to Local intensity comparison (Rosten & Drummond). This parameter is tunable.
Specify the maximum corner angle. This parameter is visible if you set the Method parameter to Local intensity comparison (Rosten & Drummond). This parameter is tunable for Simulation only.
Specify the block output. Your choices are Corner location, Corner location and metric matrix, and Metric matrix. The block outputs the corner locations in an M-by-2 matrix of [x y] coordinates, where M represents the number of corners. The block outputs the corner metric values in a matrix that is the same size as the input image.
When you set this parameter to Corner location or Corner location and metric matrix, the Maximum number of corners, Minimum metric value that indicates a corner, and Neighborhood size (suppress region around detected corners) parameters appear on the block.
To determine the final corner values, the block follows this process:
1. Find the pixel with the largest corner metric value.
2. Verify that the metric value is greater than or equal to the value you specified for the Minimum metric value that indicates a corner parameter.
3. Suppress the region around the corner value by the size defined in the Neighborhood size (suppress region around detected corners) parameter.
The block repeats this process until it finds all the corners in the image or it finds the number of corners you specified in the Maximum number of corners parameter.
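The selection loop described above can be sketched as follows. This is illustrative only; the function name, tie-breaking order, and border clamping are assumptions, not the block's documented behavior.

```python
import numpy as np

def select_corners(metric, max_corners, min_metric, nhood):
    """Greedy corner selection with neighborhood suppression (sketch).

    nhood = [r, c]: odd neighborhood sizes zeroed out around each accepted corner.
    Returns an M-by-2 uint32 array of [x y] (column, row) locations, mirroring
    the Loc output.
    """
    metric = np.array(metric, dtype=float)    # work on a copy we can overwrite
    half_r, half_c = nhood[0] // 2, nhood[1] // 2
    corners = []
    while len(corners) < max_corners:
        r, c = np.unravel_index(np.argmax(metric), metric.shape)
        if metric[r, c] < min_metric:         # no remaining pixel qualifies
            break
        corners.append([c, r])                # [x y] order, as in the Loc output
        # Suppress the neighborhood so nearby pixels are not re-detected.
        metric[max(0, r - half_r): r + half_r + 1,
               max(0, c - half_c): c + half_c + 1] = -np.inf
    return np.array(corners, dtype=np.uint32).reshape(-1, 2)
```

The suppression step is what enforces a minimum spacing between reported corners; without it, every pixel adjacent to a strong corner would also be reported.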
The corner metric values computed by the Minimum eigenvalue and Local intensity comparison methods are always nonnegative. The corner metric values computed by the Harris corner detection method can be negative.
Enter the maximum number of corners you want the block to find. This parameter is visible if you set the Output parameter to Corner location or Corner location and metric matrix.
Specify the minimum corner metric value. This parameter is visible if you set the Output parameter to Corner location or Corner location and metric matrix. This parameter is tunable.
Specify the size of the neighborhood around the corner metric value over which the block zeros out the values. Enter a two-element vector of positive odd integers, [r c]. Here, r is the number of rows in the neighborhood and c is the number of columns. This parameter is visible if you set the Output parameter to Corner location or Corner location and metric matrix.
The Data Types pane of the Corner Detection dialog box appears as shown in the following figure.
Select the rounding mode for fixed-point operations.
Select the overflow mode for fixed-point operations.
Choose how to specify the word length and the fraction length of the coefficients:
When you select Same word length as input, the word length of the coefficients matches that of the input to the block. In this mode, the fraction length of the coefficients is automatically set to the binary-point-only scaling that provides you with the best precision possible given the value and word length of the coefficients.
When you select Specify word length, you can enter the word length of the coefficients, in bits. The block automatically sets the fraction length to give you the best precision.
When you select Binary point scaling, you can enter the word length and the fraction length of the coefficients, in bits.
When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the coefficients. The bias of all signals in the Computer Vision System Toolbox™ software is 0.
As shown in the following figure, the output of the multiplier is placed into the product output data type and scaling.
Use this parameter to specify how to designate the product output word and fraction lengths.
When you select Same as input, these characteristics match those of the input to the block.
When you select Binary point scaling, you can enter the word length and the fraction length of the product output, in bits.
When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the product output. The bias of all signals in the Computer Vision System Toolbox software is 0.
As shown in the following figure, inputs to the accumulator are cast to the accumulator data type. The output of the adder remains in the accumulator data type as each element of the input is added to it.
Use this parameter to specify how to designate this accumulator word and fraction lengths:
When you select Same as input, these characteristics match those of the input.
When you select Binary point scaling, you can enter the word length and the fraction length of the accumulator, in bits.
When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the accumulator. The bias of all signals in the Computer Vision System Toolbox software is 0.
Choose how to specify the memory word length and fraction length:
When you select Same as input, these characteristics match those of the input to the block.
When you select Binary point scaling, you can enter the word length and the fraction length of the output, in bits.
When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the output. This block requires power-of-two slope and a bias of 0.
Choose how to specify the metric output word length and fraction length:
When you select Same as accumulator, these characteristics match those of the accumulator.
When you select Same as input, these characteristics match those of the input to the block.
When you select Binary point scaling, you can enter the word length and the fraction length of the output, in bits.
When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the output. This block requires power-of-two slope and a bias of 0.
Select this parameter to prevent the fixed-point tools from overriding the data types you specify on the block mask. For more information, see fxptdlg, a reference page on the Fixed-Point Tool in the Simulink® documentation.
[1] C. Harris and M. Stephens. "A Combined Corner and Edge Detector." Proceedings of the 4th Alvey Vision Conference. August 1988, pp. 147–151.
[2] J. Shi and C. Tomasi. "Good Features to Track." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. June 1994, pp. 593–600.
[3] E. Rosten and T. Drummond. "Fusing Points and Lines for High Performance Tracking." Proceedings of the IEEE International Conference on Computer Vision Vol. 2 (October 2005): pp. 1508–1511.
matchFeatures  Computer Vision System Toolbox software 
extractFeatures  Computer Vision System Toolbox software 
detectSURFFeatures  Computer Vision System Toolbox software 