# Documentation


# knnsearch

k-nearest neighbors search using Kd-tree or exhaustive search

## Syntax

• `Idx = knnsearch(Mdl,Y)`
• `Idx = knnsearch(Mdl,Y,Name,Value)`
• `[Idx,D] = knnsearch(___)`

## Description

`Idx = knnsearch(Mdl,Y)` searches for the nearest neighbor (that is, the closest point, row, or observation) in `Mdl.X` to each point (row or observation) in the query data `Y`, using an exhaustive search or a Kd-tree. `knnsearch` returns `Idx`, a column vector of indices into `Mdl.X` identifying the nearest neighbors.

`Idx = knnsearch(Mdl,Y,Name,Value)` returns the indices of the closest points in `Mdl.X` to `Y`, with additional options specified by one or more `Name,Value` pair arguments. For example, you can specify the number of nearest neighbors to search for, or a distance metric different from the one stored in `Mdl.Distance`. You can also specify which action to take if the closest distances are tied.

`[Idx,D] = knnsearch(___)` additionally returns the matrix `D` using any of the input arguments in the previous syntaxes. `D` contains the distances from each observation in `Y` to its closest observations in `Mdl.X`. The function arranges the columns of `D` in ascending order of closeness with respect to the distance metric.
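For readers comparing across environments, the behavior described above can be sketched in Python with SciPy's `cKDTree` (not part of this MATLAB product; `query` plays the role of `knnsearch` on a `KDTreeSearcher`, and the data below are made-up illustration values):

```python
import numpy as np
from scipy.spatial import cKDTree

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])  # training data
Y = np.array([[0.1, 0.1], [1.9, 2.1]])                           # query data

tree = cKDTree(X)            # grow the Kd-tree from the training data
D, Idx = tree.query(Y, k=1)  # nearest neighbor of each query row
print(Idx)                   # indices into X (0-based in Python, unlike MATLAB)
print(D)                     # corresponding Euclidean distances
```

As with `knnsearch`, each row of the query data yields one index and one distance; note the 0-based indexing difference from MATLAB.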

## Examples


`knnsearch` accepts `ExhaustiveSearcher` or `KDTreeSearcher` model objects to search the training data for the nearest neighbors to the query data. An `ExhaustiveSearcher` model invokes the exhaustive search algorithm, and a `KDTreeSearcher` model defines a Kd-tree, which `knnsearch` uses to search for nearest neighbors.

Load Fisher's iris data set. Randomly reserve five observations from the data for query data.

```
load fisheriris
rng(1); % For reproducibility
n = size(meas,1);
idx = randsample(n,5);
X = meas(~ismember(1:n,idx),:); % Training data
Y = meas(idx,:);                % Query data
```

The variable `meas` contains four predictors.

Grow a default four-dimensional Kd-tree.

```
MdlKDT = KDTreeSearcher(X)
```
```
MdlKDT = 

  KDTreeSearcher with properties:

       BucketSize: 50
         Distance: 'euclidean'
    DistParameter: []
                X: [145×4 double]
```

`MdlKDT` is a `KDTreeSearcher` model object. You can alter its writable properties using dot notation.

Prepare an exhaustive nearest neighbors searcher.

```
MdlES = ExhaustiveSearcher(X)
```
```
MdlES = 

  ExhaustiveSearcher with properties:

         Distance: 'euclidean'
    DistParameter: []
                X: [145×4 double]
```

`MdlES` is an `ExhaustiveSearcher` model object. It contains the options, such as the distance metric, to use to find nearest neighbors.

Alternatively, you can grow a Kd-tree or prepare an exhaustive nearest neighbors searcher using `createns`.

Search the training data for the nearest neighbors indices that correspond to each query observation. Conduct both types of searches using the default settings. By default, the number of neighbors to search for per query observation is `1`.

```
IdxKDT = knnsearch(MdlKDT,Y);
IdxES = knnsearch(MdlES,Y);
[IdxKDT IdxES]
```
```
ans =

    17    17
     6     6
     1     1
    89    89
   124   124
```

In this case, the results of the search are the same.

Load Fisher's iris data set.

```
load fisheriris
```

Remove five irises randomly from the predictor data to use as a query set.

```
rng(1); % For reproducibility
n = size(meas,1);            % Sample size
qIdx = randsample(n,5);      % Indices of query data
X = meas(~ismember(1:n,qIdx),:);
Y = meas(qIdx,:);
```

Grow a four-dimensional Kd-tree using the training data. Specify the Minkowski distance for finding nearest neighbors later.

```
Mdl = KDTreeSearcher(X,'Distance','minkowski')
```
```
Mdl = 

  KDTreeSearcher with properties:

       BucketSize: 50
         Distance: 'minkowski'
    DistParameter: 2
                X: [145×4 double]
```

`Mdl` is a `KDTreeSearcher` model object. By default, the Minkowski distance exponent is `2`.

Find the indices of the training data (`X`) that are the two nearest neighbors of each point in the query data (`Y`).

```
Idx = knnsearch(Mdl,Y,'K',2)
```
```
Idx =

    17     4
     6     2
     1    12
    89    66
   124   100
```

Each row of `Idx` corresponds to a query data observation, and the column order corresponds to the order of the nearest neighbors, with respect to ascending distance. For example, using the Minkowski distance, the second nearest neighbor of `Y(3,:)` is `X(12,:)`.

Load Fisher's iris data set.

```
load fisheriris
```

Remove five irises randomly from the predictor data to use as a query set.

```
rng(4); % For reproducibility
n = size(meas,1);            % Sample size
qIdx = randsample(n,5);      % Indices of query data
X = meas(~ismember(1:n,qIdx),:);
Y = meas(qIdx,:);
```

Grow a four-dimensional Kd-tree using the training data and the default settings.

```
Mdl = KDTreeSearcher(X);
```

`Mdl` is a `KDTreeSearcher` model object. By default, the distance metric for finding nearest neighbors is the Euclidean metric.

Find the indices of the training data (`X`) that are the seven nearest neighbors of each point in the query data (`Y`).

```
[Idx,D] = knnsearch(Mdl,Y,'K',7,'IncludeTies',true);
```

`Idx` and `D` are five-element cell arrays of vectors, with each vector having at least seven elements.

Display the lengths of the vectors in `Idx`.

```
cellfun('length',Idx)
```
```
ans =

     8
     7
     7
     7
     7
```

Because cell `1` contains a vector with length greater than `K` = 7, query observation 1 (`Y(1,:)`) is equally close to at least two observations in `X`.

Display the indices of the nearest neighbors to `Y(1,:)` and their distances.

```
nn5 = Idx{1}
nn5d = D{1}
```
```
nn5 =

    91    98    67    69    71    93    88    95


nn5d =

  Columns 1 through 7

    0.1414    0.2646    0.2828    0.3000    0.3464    0.3742    0.3873

  Column 8

    0.3873
```

Training observations `88` and `95` are 0.3873 cm away from query observation `1`.
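The `'IncludeTies'` behavior shown above can be sketched outside MATLAB as well. The following Python function (a hypothetical helper, not part of any library) performs an exhaustive search and keeps every training row tied with the `K`th smallest distance:

```python
import numpy as np

def knnsearch_include_ties(X, y, k):
    """Return indices of the k nearest rows of X to query y, plus any
    further rows tied with the k-th smallest distance -- a sketch of
    knnsearch's 'IncludeTies',true behavior (Euclidean metric)."""
    d = np.sqrt(((X - y) ** 2).sum(axis=1))  # distance to each training row
    order = np.argsort(d, kind="stable")     # ascending by distance
    kth = d[order[k - 1]]                    # k-th smallest distance
    keep = order[d[order] <= kth + 1e-12]    # include exact ties with the k-th
    return keep, d[keep]

X = np.array([[0.0], [1.0], [-1.0], [3.0]])
idx, dist = knnsearch_include_ties(X, np.array([0.0]), k=2)
print(idx)   # three indices: rows 1 and 2 tie at distance 1
```

With `k = 2`, the result has three entries because two training rows are equidistant from the query, mirroring the cell of length 8 in the MATLAB example.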

## Input Arguments


`Mdl` — Nearest neighbors searcher, specified as an `ExhaustiveSearcher` or `KDTreeSearcher` model object. To create `Mdl`, use the corresponding model creator. You can also use `createns`.

If `Mdl` is an `ExhaustiveSearcher` model, then `knnsearch` searches for nearest neighbors using an exhaustive search. Otherwise, `knnsearch` uses the grown Kd-tree to search for nearest neighbors.

`Y` — Query data, specified as a numeric matrix.

`Y` is an m-by-K matrix. Rows of `Y` correspond to observations (i.e., examples), and columns correspond to predictors (i.e., variables or features). `Y` must have the same number of columns as the training data stored in `Mdl.X`.

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside single quotes (`' '`). You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `'K',2,'Distance','minkowski'` specifies to find the two nearest neighbors of `Mdl.X` to each point in `Y` and to use the Minkowski distance metric.

#### For Both Nearest Neighbor Searchers


Distance metric used to find neighbors of the training data to the query observations, specified as the comma-separated pair consisting of `'Distance'` and a character vector or function handle.

For both types of nearest neighbor searchers, `Mdl` supports these distance metrics.

| Value | Description |
| --- | --- |
| `'chebychev'` | Chebychev distance (maximum coordinate difference) |
| `'cityblock'` | City block distance |
| `'euclidean'` | Euclidean distance |
| `'minkowski'` | Minkowski distance |

If `Mdl` is an `ExhaustiveSearcher` model object, then `knnsearch` supports these distance metrics.

| Value | Description |
| --- | --- |
| `'correlation'` | One minus the sample linear correlation between observations (treated as sequences of values) |
| `'cosine'` | One minus the cosine of the included angle between observations (row vectors) |
| `'hamming'` | Hamming distance, which is the percentage of coordinates that differ |
| `'jaccard'` | One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ |
| `'mahalanobis'` | Mahalanobis distance |
| `'seuclidean'` | Standardized Euclidean distance |
| `'spearman'` | One minus the sample Spearman's rank correlation between observations (treated as sequences of values) |

If `Mdl` is an `ExhaustiveSearcher` model object, then you can also specify a function handle for a custom distance metric using `@` (for example, `@distfun`). The custom distance function must:

• Have the form `function D2 = distfun(ZI, ZJ)`.

• Take as arguments:

• A 1-by-K vector `ZI` containing a single row from `X` or from the query points `Y`

• An m-by-K matrix `ZJ` containing multiple rows of `X` or `Y`

• Return an m-by-1 vector of distances `D2`. `D2(j)` is the distance between the observations `ZI` and `ZJ(j,:)`.

For more details, see Distance Metrics.

Example: `'Distance','minkowski'`

Data Types: `char` | `function_handle`
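A custom metric in the `distfun(ZI, ZJ)` form above can be sketched in Python (an illustrative analogue, not this product's API; the weights and the `exhaustive_knnsearch` helper are hypothetical):

```python
import numpy as np

def distfun(zi, zj):
    """Custom metric in the distfun(ZI, ZJ) form: zi is a single 1-by-K
    row, zj is an m-by-K matrix; returns an m-vector of distances
    (here, a weighted Euclidean distance with made-up weights)."""
    w = np.array([2.0, 1.0])  # hypothetical per-coordinate weights
    return np.sqrt(((zj - zi) ** 2 * w).sum(axis=1))

def exhaustive_knnsearch(X, Y, k, metric):
    """Exhaustive search with a custom metric, one query row at a time."""
    rows = []
    for y in Y:
        d = metric(y, X)                            # distances to all of X
        rows.append(np.argsort(d, kind="stable")[:k])  # k smallest, ascending
    return np.vstack(rows)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Y = np.array([[0.2, 0.2]])
Idx = exhaustive_knnsearch(X, Y, k=2, metric=distfun)
print(Idx)
```

The key contract is the same as the MATLAB form: one fixed row against a whole matrix, returning a vector of distances.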

Flag to include nearest neighbors that have the same distance from query observations, specified as the comma-separated pair consisting of `'IncludeTies'` and `false` (`0`) or `true` (`1`).

If `IncludeTies` is `true`, then:

• `knnsearch` includes all nearest neighbors whose distances are equal to the `K`th smallest distance in the output arguments.

• `Idx` and `D` are m-by-`1` cell arrays such that each cell contains a vector of at least `K` indices and distances, respectively. Each vector in `D` contains distances arranged in ascending order. Each row in `Idx` contains the indices of the nearest neighbors corresponding to these smallest distances in `D`.

If `IncludeTies` is `false`, then `knnsearch` chooses the observation with the smallest index among the observations that have the same distance from a query point.

Example: `'IncludeTies',true`

Number of nearest neighbors to search for in the training data per query observation, specified as the comma-separated pair consisting of `'K'` and a positive integer.

Example: `'K',2`

Data Types: `single` | `double`

Exponent for the Minkowski distance metric, specified as the comma-separated pair consisting of `'P'` and a positive scalar. If you specify `P` and do not specify `'Distance','minkowski'`, then the software throws an error.

Example: `'P',3`

Data Types: `double` | `single`
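To see how the exponent changes the metric, here is a quick check using SciPy (illustrative only; `p=1` is the city block distance, `p=2` is Euclidean, and larger `p` approaches the Chebychev distance):

```python
from scipy.spatial.distance import minkowski

u, v = [0.0, 0.0], [3.0, 4.0]
d1 = minkowski(u, v, p=1)  # 7.0: |3| + |4|, the city block distance
d2 = minkowski(u, v, p=2)  # 5.0: the Euclidean distance
d3 = minkowski(u, v, p=3)  # between d2 and the Chebychev distance 4.0
print(d1, d2, d3)
```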

#### For Exhaustive Nearest Neighbor Searchers


Covariance matrix for the Mahalanobis distance metric, specified as the comma-separated pair consisting of `'Cov'` and a positive definite matrix. `Cov` is a K-by-K matrix, where K is the number of columns of `X`. If you specify `Cov` and do not specify `'Distance','mahalanobis'`, then `knnsearch` throws an error.

Example: `'Cov',eye(3)`

Data Types: `double` | `single`

Scale parameter value for the standard Euclidean distance metric, specified as the comma-separated pair consisting of `'Scale'` and a nonnegative numeric vector. `Scale` has length K, where K is the number of columns of `X`.

The software scales each difference between the training and query data using the corresponding element of `Scale`. If you specify `Scale` and do not specify `'Distance','seuclidean'`, then `knnsearch` throws an error.

Example: `'Scale',quantile(X,0.75) - quantile(X,0.25)`

Data Types: `double` | `single`
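The scaling described above amounts to dividing each coordinate difference by the corresponding element of `Scale` before taking the Euclidean norm. A minimal sketch (the scale values are made-up illustration numbers):

```python
import numpy as np

def seuclidean(x, y, scale):
    """Standardized Euclidean distance: each coordinate difference is
    divided by the matching element of scale (often a per-column
    spread of the training data, as in the quantile example above)."""
    return np.sqrt((((x - y) / scale) ** 2).sum())

x = np.array([1.0, 10.0])
y = np.array([2.0, 30.0])
scale = np.array([1.0, 10.0])  # hypothetical per-column scales
d = seuclidean(x, y, scale)    # sqrt(1^2 + 2^2) = sqrt(5)
print(d)
```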

Note: If you specify `'Distance'`, `'Cov'`, `'P'`, or `'Scale'`, then `Mdl.Distance` and `Mdl.DistParameter` do not change value.

## Output Arguments


`Idx` — Training data indices of nearest neighbors, returned as a numeric matrix or cell array of numeric vectors.

• If you do not specify `IncludeTies` (`false` by default), then `Idx` is an m-by-`K` numeric matrix, where m is the number of rows in `Y` and `K` is the number of searched nearest neighbors. `Idx(j,k)` indicates that `Mdl.X(Idx(j,k),:)` is the observation with the `k`th smallest distance to the query observation `Y(j,:)`.

• If you specify `'IncludeTies',true`, then `Idx` is an m-by-`1` cell array such that cell `j` (`Idx{j}`) contains a vector of at least `K` indices of the closest observations in `Mdl.X` to the query observation `Y(j,:)`. The function arranges the elements of the vectors in ascending order by distance.

`D` — Distances of the nearest neighbors to the query data, returned as a numeric matrix or cell array of numeric vectors.

• If you do not specify `IncludeTies` (`false` by default), then `D` is an m-by-`K` numeric matrix, where m is the number of rows in `Y` and `K` is the number of searched nearest neighbors. `D(j,k)` is the distance from the query observation `Y(j,:)` to `Mdl.X(Idx(j,k),:)` with respect to the distance metric, and it represents the `k`th smallest distance.

• If you specify `'IncludeTies',true`, then `D` is an m-by-`1` cell array such that cell `j` (`D{j}`) contains a vector of at least `K` distances of the closest observations in `Mdl.X` to the query observation `Y(j,:)`. The function arranges the elements of the vectors in ascending order by distance.

## Alternatives

For a positive integer `K`, `knnsearch` finds the `K` points in `Mdl.X` nearest to each point in `Y`. In contrast, for a positive scalar `r`, `rangesearch` finds all the points in `Mdl.X` that are within distance `r` of each point in `Y`.
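The contrast between the two searches can be illustrated with SciPy's Kd-tree (not this product's API; `query` corresponds to `knnsearch`, `query_ball_point` to `rangesearch`):

```python
import numpy as np
from scipy.spatial import cKDTree

X = np.array([[0.0], [1.0], [2.0], [5.0]])  # training data
Y = np.array([[0.4]])                        # one query point
tree = cKDTree(X)

_, knn_idx = tree.query(Y, k=2)              # the 2 nearest rows (knnsearch-style)
range_idx = tree.query_ball_point(Y, r=2.0)  # all rows within 2.0 (rangesearch-style)
print(knn_idx)    # fixed count of neighbors per query
print(range_idx)  # variable count, depending on the radius
```

The k-nearest search always returns exactly `k` neighbors per query, while the range search returns however many points fall inside the radius.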

## References

[1] Friedman, J. H., J. Bentley, and R. A. Finkel. "An Algorithm for Finding Best Matches in Logarithmic Expected Time." *ACM Transactions on Mathematical Software*, Vol. 3, Issue 3, September 1977, pp. 209–226.