Machine learning algorithms “learn” from data: they improve their performance at a task as they gain experience. For example, the accuracy of a neural network’s predictions typically improves as the number of samples available to train the network increases. Many machine learning algorithms develop their decision-making rules from labeled training examples; this is known as “supervised” learning. “Unsupervised” learning algorithms instead draw inferences from unlabeled data.
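To make the supervised-learning idea concrete, here is a minimal sketch of a 1-nearest-neighbor classifier: its decision rule is derived entirely from labeled training examples. The data and labels are invented for illustration; this is not code from any toolbox.

```python
# Supervised learning sketch: classify a query point by the label of the
# nearest labeled training example (1-nearest-neighbor). Toy data only.

def nn_classify(train, query):
    """Return the label of the training point closest to `query`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], query))[1]

# Labeled training examples: (features, label). Values are made up.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

print(nn_classify(train, (1.1, 0.9)))  # a point near the "A" examples
```

Adding more labeled examples refines the decision rule, which is the sense in which such an algorithm “learns” from data.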
Statistics Toolbox includes a wide variety of machine learning algorithms, including boosted and bagged decision trees, K-means and hierarchical clustering, K-nearest-neighbor search, Gaussian mixture models, the expectation-maximization (EM) algorithm, and hidden Markov models.
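As an example of the unsupervised methods listed above, the following is a hedged pure-Python sketch of K-means clustering (Lloyd’s algorithm) on toy data; it illustrates the technique only and is not the toolbox implementation.

```python
# K-means sketch: alternate between assigning points to their nearest
# center and moving each center to the mean of its assigned points.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
        # Update step: move each center to the mean of its cluster.
            clusters[i].append(p)
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else ctr
                   for cl, ctr in zip(clusters, centers)]
    return centers

# Two obvious groups of unlabeled points (toy values).
pts = [(0.0, 0.0), (0.2, 0.1), (9.0, 9.0), (9.1, 8.8)]
print(kmeans(pts, [(0.0, 0.0), (9.0, 9.0)]))
```

Note that no labels are involved: the algorithm infers group structure from the data alone, which is what distinguishes it from the supervised methods.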
Neural Network Toolbox provides tools for designing, implementing, visualizing, and simulating neural networks, including feedforward networks, radial basis networks, and self-organizing maps.
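To show what a feedforward network computes, here is a minimal sketch of a single-hidden-layer network with tanh units. The weights are fixed toy values chosen for illustration, not trained parameters, and this is not toolbox code.

```python
import math

# Feedforward pass sketch: input -> tanh hidden layer -> linear output.

def forward(x, W1, b1, W2, b2):
    """Compute the scalar output of a one-hidden-layer network."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

W1 = [[0.5, -0.3], [0.8, 0.2]]   # input-to-hidden weights (toy values)
b1 = [0.1, -0.1]                 # hidden biases
W2 = [1.0, -1.0]                 # hidden-to-output weights
b2 = 0.05                        # output bias

print(forward([1.0, 2.0], W1, b1, W2, b2))
```

Training such a network means adjusting `W1`, `b1`, `W2`, and `b2` to reduce prediction error on the training samples, which is why its accuracy typically improves with more data.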