This article describes the five primary techniques of machine learning and summarizes some key applications. Subsequent articles give more in-depth examples of the application of each technique.
Machine learning trains models on datasets before they are deployed. The training phase can be iterative, which enables models to be continuously improved as new data arrives.
Recent advances in computer hardware performance and new software algorithms have combined to infuse new life into the discipline and to deliver impressive results to commerce and industry alike.
New startups have received billions of dollars of funding for new machine learning and artificial intelligence applications. These investments have been underpinned not by media hype, but by the demonstrated results of several big players, including Google, Facebook, Amazon, Netflix, and Microsoft, who have improved their market share through the application of machine learning technologies.
Several technology enablers have combined to drive the rapid growth of machine learning applications:
Modern computer hardware has become increasingly powerful, and technologies such as GPUs for numerical calculations continue to drive performance.
Data storage costs have dropped dramatically, enabling much larger datasets to be stored and analyzed.
Distributed computing has seen explosive growth over the past ten years.
More commercial datasets are available to a broader community.
Open-source software provides reliable algorithms to a wider community, free of charge.
Graphical reporting and business intelligence software is now more widely available, allowing results to be seen and understood by non-specialists rather than only data scientists.
Supervised learning is applied to established datasets whose features are well understood and which have been labeled. The algorithm seeks to classify the dataset into the known label categories.
The models are often termed Classification and Regression Tree (CART) models and may be applied to both continuous and categorical data: continuous data gives a regression problem, and categorical data a classification problem.
These models are often applied in medical research to analyze disease susceptibility across different classes of the population; in finance, they are applied to fraud detection.
Key algorithms include decision trees and random forests.
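As a minimal sketch of the classification case, the following pure-Python example fits a one-level decision tree (a "stump") to a small labeled dataset by exhaustive search over split points. The features (age, cholesterol), labels, and values are entirely hypothetical; real CART implementations grow deeper trees and use impurity measures rather than raw error counts.

```python
from collections import Counter

def majority(labels):
    """Most common label in a list."""
    return Counter(labels).most_common(1)[0][0]

def train_stump(X, y):
    """Fit a one-level decision tree by trying every feature/threshold
    split and keeping the one with the fewest training errors."""
    best = None  # (errors, feature, threshold, left_label, right_label)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            ll, rl = majority(left), majority(right)
            errs = sum(l != ll for l in left) + sum(r != rl for r in right)
            if best is None or errs < best[0]:
                best = (errs, f, t, ll, rl)
    return best[1:]

def predict(stump, row):
    f, t, ll, rl = stump
    return ll if row[f] <= t else rl

# Hypothetical labeled dataset: [age, cholesterol] -> susceptibility label
X = [[25, 180], [30, 190], [55, 240], [60, 260], [45, 230], [35, 200]]
y = ["low", "low", "high", "high", "high", "low"]

stump = train_stump(X, y)
print(predict(stump, [50, 250]))  # classify a previously unseen patient
```

The stump generalizes to a full decision tree by recursively splitting each side, and to a random forest by training many trees on random subsets of rows and features and voting over their predictions.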
Unsupervised learning algorithms seek to identify structure in the dataset that has no existing labels.
This typically involves an iterative process of feature analysis in which clusters and structure in the data are identified and continuously refined.
It often involves the analysis of very large datasets to identify fine structure that might be overlooked by less sophisticated algorithms.
Unsupervised learning can sometimes be used to identify labels for subsequent supervised learning algorithms.
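The iterative refinement described above can be illustrated with k-means, a standard clustering algorithm: assign each point to its nearest centroid, move each centroid to the mean of its cluster, and repeat. The 2-D points below are invented for illustration; real implementations choose random initial centroids, whereas this sketch uses the first k points so the run is deterministic.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means on 2-D points: repeatedly assign each point to its
    nearest centroid, then move each centroid to its cluster's mean."""
    centroids = list(points[:k])  # deterministic init for this sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        # recompute each centroid as its cluster's mean (keep it if empty)
        centroids = [(sum(p[0] for p in cl) / len(cl),
                      sum(p[1] for p in cl) / len(cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Two unlabeled "blobs" of points; k-means should discover them
points = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8),
          (8.0, 8.1), (7.9, 8.3), (8.2, 7.9)]
centroids, clusters = kmeans(points, k=2)
print(sorted(centroids))
```

The cluster assignments produced here are exactly the kind of discovered labels that can then feed a supervised learning algorithm.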
Recommender systems analyze recommendation data to understand customer behavior and purchasing preferences. The well-known Amazon shopping basket was an early recommender system, and “People you may know” on Facebook or LinkedIn is another familiar example.
Reports suggest that these algorithms have been responsible for perhaps a third of Amazon's sales. This add-on sales tactic is often unavailable to high-street shops.
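A simple shopping-basket recommender can be sketched with item co-occurrence counts: items that frequently appear in the same basket are recommended together. The basket data and item names below are hypothetical; production systems use far richer signals (ratings, browsing history, matrix factorization).

```python
from collections import defaultdict

def cooccurrence(baskets):
    """Count how often each pair of items appears in the same basket."""
    counts = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a in basket:
            for b in basket:
                if a != b:
                    counts[a][b] += 1
    return counts

def recommend(counts, item, n=2):
    """Return the items most often bought together with `item`."""
    ranked = sorted(counts[item].items(), key=lambda kv: -kv[1])
    return [b for b, _ in ranked[:n]]

# Hypothetical purchase histories
baskets = [
    ["camera", "sd_card", "tripod"],
    ["camera", "sd_card"],
    ["camera", "tripod"],
    ["laptop", "mouse"],
]
print(recommend(cooccurrence(baskets), "camera"))
```

This is the "customers who bought X also bought Y" pattern: the add-on suggestion is computed from aggregate behavior rather than stocked shelf space, which is why it scales online in a way a physical shop cannot match.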
Reinforcement learning is a behavioral model. It often dispenses with a fixed training dataset, preferring to update incrementally as new information arrives; that is, it learns through a process of trial and error.
A sequence of successful choices or decisions is said to reinforce those choices, since they correctly solve the problem; poor choices are correspondingly penalized. The reinforcement learning algorithm must identify the steps that led to a correct choice and be able to replay them in successive evaluations, which requires keeping track of new results as they appear and updating its predictions accordingly.
Key algorithms include Q-learning, SARSA, and Deep Q-Networks (DQN).
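The trial-and-error process can be sketched with tabular Q-learning on a toy "corridor" task (the environment and parameters here are invented for illustration): the agent moves left or right along a line of states, a reward arrives only at the goal, and each update nudges the value of the chosen action toward the reward plus the discounted value of the best next action.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a corridor: states 0..n-1, actions
    left (0) / right (1); reaching the last state pays reward 1."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the best known action,
            # occasionally explore a random one
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == n_states - 1 else 0.0
            # move Q[s][a] toward reward + discounted best future value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)
```

After training, the greedy policy heads right toward the goal from every state: the successful sequence of choices has been reinforced through the value estimates, with no labeled training data involved.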
Neural Nets and Deep Learning
Deep learning is particularly useful for identifying structure in large datasets.
Neural networks consist of three or more layers: an input layer, one or more hidden layers, and an output layer. Data arrives at the input layer.
The term deep learning is used when there are many hidden layers within the neural network. These layers refine and adjust the predictions based on feedback as new data continually arrives.
Recent advances have included applications in image recognition and games analysis.
Key algorithms include Long Short-Term Memory (LSTM) networks, convolutional neural networks (CNNs), and gradient descent.
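The layer structure described above can be sketched in a few lines. For illustration, the weights below are hand-picked so that the network computes the XOR function (a classic example no single-layer network can solve); in practice, a network would learn such weights by gradient descent on training data.

```python
import math

def sigmoid(x):
    """Standard neuron activation squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per neuron, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    """Input layer -> one hidden layer (2 neurons) -> output layer (1 neuron).
    The hand-picked weights make the hidden neurons compute OR and NAND,
    and the output neuron their AND, which together yield XOR."""
    hidden = layer(x, weights=[[20, 20], [-20, -20]], biases=[-10, 30])
    output = layer(hidden, weights=[[20, 20]], biases=[-30])
    return output[0]

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b])))
```

A "deep" network is the same idea with many hidden layers stacked between input and output, which is what lets it represent the fine structure in large datasets mentioned above.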
This article has presented a brief summary of five key areas of machine learning and noted key application areas where researchers have achieved new advances in recent years.