Instance and Model-Based Learning: Topics of Machine Learning

Introducing Instance and Model-Based Learning

Our series on Machine Learning has covered a variety of topics related to the subject. We began with an overview of the machine learning systems that appear in data science and the algorithms associated with them; that overview, which also introduces batch and online learning, is available here. We then dove deeper into the individual systems, starting with a thorough discussion of supervised machine learning and its cohort of classification and regression algorithms, followed in similar fashion by a discussion of unsupervised machine learning. We also devoted considerable time to the differences between batch and online learning. In the present article, we follow these elaborations with an analysis of instance-based and model-based learning systems. Let’s begin.

Instance-Based Learning

What is Instance-Based Learning?

Finally, we arrive at the machine learning systems that answer the last of our questions about machine learning models: does the system derive solutions by comparing new data to known data, or does it predict from a model built in advance? Instance-based learning takes the first approach, deriving solutions by comparing a new data instance to the known training data. Based on the new instance’s relationship to the known data, a value for that instance can be inferred.


An instance-based learning system operates on a measure of similarity. More specifically, it examines the degree of similarity between the new instance and the known data, and it generalizes from the known data to a certain degree in order to derive values for the new data.
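As a rough sketch of what that similarity judgement can look like in code (the function name and the choice of plain Euclidean distance are illustrative assumptions, not a requirement of instance-based learning):

    import math

    def euclidean_distance(a, b):
        # Distance between two equal-length feature vectors;
        # a smaller distance means the instances are more similar.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Example: compare a new instance against two stored instances.
    new_point = [1.0, 2.0]
    print(euclidean_distance(new_point, [1.1, 2.1]))  # close neighbor
    print(euclidean_distance(new_point, [5.0, 9.0]))  # distant neighbor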

In an article on instance-based learning, Keogh contends that instance-based learning relies on both classification and regression to predict the labels that would be associated with new data, using the training data set as the basis for that judgement. This methodology is rather efficient: all the system must do is store the data and, at run time, compare the new data to the training data to identify its nearest neighbor.

Instance Based Methods

When it comes to applying instance-based methods in machine learning models, there are a variety of mechanisms a user may employ. One option is an instance-based classifier. The curious thing about instance-based classifiers is that the program does not actually learn a model. Rather, the program stores the training instances, and when a new data object arrives, it assigns the object a value based on the closest training instance. This approach is known as lazy learning.

An example of an instance-based classifier is a machine learning model that performs rote learning. With rote learning, the program memorizes the training data set verbatim and operates as a classifier by matching new data to an exact correlate in the training data.
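A rote learner can be sketched as little more than a lookup table keyed on the memorized training inputs; the toy data and function name below are illustrative assumptions:

    # Rote learning: memorize training pairs verbatim, classify only exact matches.
    training_data = {
        (5.1, 3.5): "setosa",
        (6.7, 3.1): "versicolor",
    }

    def rote_classify(instance):
        # Returns a label only if the instance exactly matches a memorized example.
        return training_data.get(tuple(instance), None)

    print(rote_classify([5.1, 3.5]))   # "setosa"
    print(rote_classify([5.0, 3.4]))   # None: no exact correlate stored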

K-nearest Neighbors Algorithm Nearest Neighbor Search Machine ...

However, other instance-based classifiers exist. For example, one could implement a k-nearest neighbor (k-NN) classifier. In this model, the program memorizes the training instances and, when a new data object arrives, assigns it a value based on the training items to which it is closest. The issue with the k-nearest neighbor algorithm in instance-based learning, as Sagi Shaier argues, is that the entire training set must be retained. This can be rather cumbersome for a program’s memory.
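Here is a minimal k-nearest neighbor sketch, assuming numeric features, Euclidean distance, and a simple majority vote over the k closest stored instances (the toy data and the k=3 default are illustrative):

    import math
    from collections import Counter

    def knn_classify(train_X, train_y, query, k=3):
        # Keep the k closest training instances and vote on their labels.
        dists = sorted(
            (math.dist(x, query), label) for x, label in zip(train_X, train_y)
        )
        votes = Counter(label for _, label in dists[:k])
        return votes.most_common(1)[0][0]

    train_X = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.1), (5.2, 4.9)]
    train_y = ["a", "a", "b", "b"]
    print(knn_classify(train_X, train_y, (1.1, 1.0)))  # "a"

Note that the full training set (train_X, train_y) must be kept around for every prediction, which is exactly the memory drawback described above.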

To circumvent the issue of a burdensome training data set, one could apply an alternative instance-based algorithm such as learning vector quantization (LVQ). LVQ is a supervised learning method that differs from the standard k-nearest neighbor algorithm in that it lets the user specify how many prototype instances to retain, rather than keeping the full training set.
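A rough sketch of the LVQ1 variant follows, assuming one prototype per class seeded from the training data and a fixed learning rate (both simplifying assumptions). The point is that only the small set of prototypes needs to be kept at prediction time, not the full training set:

    import math

    def train_lvq(train_X, train_y, epochs=20, lr=0.1):
        # One prototype per class, seeded from the first example of that class.
        prototypes = {}
        for x, label in zip(train_X, train_y):
            prototypes.setdefault(label, list(x))
        for _ in range(epochs):
            for x, label in zip(train_X, train_y):
                # Find the prototype nearest to this training example.
                nearest = min(prototypes, key=lambda c: math.dist(prototypes[c], x))
                proto = prototypes[nearest]
                sign = 1.0 if nearest == label else -1.0  # attract if correct, repel if not
                for i in range(len(proto)):
                    proto[i] += sign * lr * (x[i] - proto[i])
        return prototypes

    def lvq_classify(prototypes, query):
        # Classify by the nearest prototype rather than the nearest training point.
        return min(prototypes, key=lambda c: math.dist(prototypes[c], query))

    protos = train_lvq([(1.0, 1.0), (1.2, 0.9), (5.0, 5.1)], ["a", "a", "b"])
    print(lvq_classify(protos, (1.1, 1.0)))  # "a"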

Model-Based Learning

What is Model-Based Learning?

The model-based learning methodology answers this question differently. Rather than comparing new data to known data, the model-based method uses the known data to build a model, and that model predicts the value of a new data instance. This makes model-based learning a matter of prediction from a fitted model rather than a matter of identifying a nearest neighbor.

Model-based learning uses the training data to create a mathematical model of the data. For example, when the data exhibit a linear relationship, the model fits the data with a linear function of the form:

y=\theta_0+\theta_1 x

Here, \theta_0 and \theta_1 represent the parameters of the model, and x is a feature of the data. We use these to predict another attribute of the data, represented as ‘y’.
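To make this concrete, here is a small sketch that estimates \theta_0 and \theta_1 from training data by ordinary least squares; the synthetic data and the use of numpy.polyfit are illustrative choices, not the only way to fit such a model:

    import numpy as np

    # Training data with a roughly linear relationship (illustrative values).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    # Fit y ~ theta0 + theta1 * x; polyfit returns coefficients highest degree first.
    theta1, theta0 = np.polyfit(x, y, deg=1)

    # The fitted model now predicts new instances without storing the training set.
    x_new = 6.0
    print(theta0 + theta1 * x_new)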

Performance Measures

When we create a model-based machine learning system, it is essential to define a measure of performance, which shows how effective the model is at predicting a value for the data. For example, with linear regression models, performance is typically measured with a cost function, which measures the distance between the model’s predictions and the values of the training examples. One common choice is the squared error. For a linear regression model of the form y=mx+b, the mean squared error cost function is:

MSE=\frac{1}{N}\sum_{i=1}^{N}\left(y_i-(mx_i+b)\right)^2
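A short sketch of this cost computed in code, with illustrative values for m, b, and the training data:

    import numpy as np

    def mse(m, b, x, y):
        # Mean squared error between the model's predictions and the training targets.
        predictions = m * x + b
        return np.mean((y - predictions) ** 2)

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 4.1, 5.9])
    print(mse(2.0, 0.0, x, y))  # cost of the candidate line y = 2x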

The Takeaway For Instance and Model-Based Learning

The dichotomy between instance-based and model-based learning is an important one, reflecting significant differences between machine learning models. Instance-based learning stores the training data in memory and compares incoming data objects against it. One of its essential uses is classification, but an important drawback is the amount of memory it may occupy. Model-based learning, by contrast, takes in the training data and uses it to build a mathematical model. In this manner, there is no need to store the data; the program compares incoming data objects to the model to infer their values.
