
## Machine learning algorithms classifications

Machine learning uses many different algorithms to solve problems efficiently. Here we will discuss the most commonly used algorithms.

Commonly used algorithms:

– Linear Regression

– Logistic Regression

– Decision Tree

– SVM (Support Vector Machine)

– Naive Bayes

– K-Means

– Random Forest

## Linear Regression

Linear regression is one of the most popular machine learning algorithms. It is used for predictive analysis of continuous values such as age, salary, or the cost of a product. Linear regression establishes the relationship between the input variable (X) and the output variable (Y) by fitting the best line, known as the regression line, which is represented by an equation.

Equation –

Y = a*X + b

Here,

Y= Dependent variable

a= Slope

X= Independent variable

b= Intercept

In the above figure, the linear regression model provides a sloped straight line representing the relationship between the variables.

Linear Regression is of two types:

- Simple Linear Regression.
- Multiple Linear Regression.

**Simple Linear Regression -** When the model uses a single independent variable, the algorithm is called Simple Linear Regression.

**Multiple Linear Regression -** When the model uses more than one independent variable, the algorithm is called Multiple Linear Regression.
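The regression line above can be fitted in a few lines of code. Below is a minimal sketch using scikit-learn; the library choice and the tiny illustrative dataset are our assumptions, not from the article:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# X: independent variable, Y: dependent variable (tiny illustrative data)
X = np.array([[1], [2], [3], [4], [5]])
Y = np.array([30, 35, 40, 45, 50])

model = LinearRegression().fit(X, Y)
a, b = model.coef_[0], model.intercept_   # slope and intercept of Y = a*X + b
prediction = model.predict([[6]])[0]      # predicted Y for X = 6
```

Because this sample data lies exactly on a line, the fitted slope is 5 and the intercept is 25, so the prediction for X = 6 is 55.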

## Logistic Regression

Logistic regression is an essential machine learning algorithm that comes under supervised learning. It is used to predict a categorical dependent variable from a given set of independent variables. Its output is a probability, so its value lies between 0 and 1.

It is similar to linear regression except in how it is used: linear regression is used to solve regression problems, whereas logistic regression is used to solve classification problems. It is an important machine learning algorithm because it can provide probabilities and classify new data using both continuous and discrete datasets.

There are two important parts of logistic regression: the **hypothesis** and the **sigmoid** function. The hypothesis gives the probability of the event, and the values obtained from the hypothesis are fit to a log function that creates an S-shaped curve known as the “sigmoid”. The sigmoid is a mathematical function used to map predicted values to probabilities.

## Logistic Regression Equation

Equation of Logistic Regression can be obtained from the Linear Regression Equation.

Here are the mathematical steps to get the Logistic Regression equation.

Equation of a straight line:

y = a*X + b

In logistic regression, y can only be between 0 and 1, so we divide y by (1 - y):

y / (1 - y)

But we need a range from negative infinity to positive infinity, so we take the logarithm of the expression:

log(y / (1 - y)) = a*X + b

Now we have the final equation for logistic regression.
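The sigmoid is the inverse of the log-odds transform above; a minimal sketch in plain Python:

```python
import math

def sigmoid(z):
    """Map any real value z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def logit(y):
    """Inverse of the sigmoid: the log-odds log(y / (1 - y))."""
    return math.log(y / (1.0 - y))

p = sigmoid(0)          # probability 0.5 at z = 0
z = logit(sigmoid(2))   # applying logit after sigmoid recovers z = 2
```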

## Decision Tree

Decision Tree is a supervised learning technique used for both prediction and classification, but mostly for classification problems. The decision tree structure contains two kinds of nodes: decision nodes and leaf nodes. A decision node has multiple branches, while a leaf node has no further branches; it represents the output of those decisions. The model has a tree-like structure: it starts with the root node, which spreads into further branches, building out the tree.

The diagram explains the basic structure of the decision tree.

A decision tree is the result of a series of hierarchical steps leading to a favorable decision. There are two steps to building a decision tree: **induction** and **pruning**. In induction we create the tree, and in pruning we remove some of the tree's complexity.
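As a sketch of the idea (scikit-learn and the Iris dataset are our choices here, not the article's), a tree can be induced and kept simple by limiting its depth, which plays a role similar in spirit to pruning:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
# Induction: grow the tree; max_depth limits complexity, akin to pruning
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)
accuracy = clf.score(iris.data, iris.target)
```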

## SVM (Support Vector Machine)

It is one of the most popular supervised learning algorithms and is used for both classification and regression problems, though mainly for classification problems in machine learning. In this algorithm, we plot each data item as a point in n-dimensional space and then find the best decision boundary, called a hyperplane, that separates the categories, so that a new data point can easily be placed in the correct category in the future.

It chooses the extreme points that help in creating the hyperplane; these points are known as support vectors, which is why the algorithm is called a Support Vector Machine.

For example, if we have only two features, such as the height and weight of an individual, we first plot these two variables in 2D space, where each point has two coordinates. The points lying closest to the separating boundary are the **support vectors**.

Now, we find a line that splits the data between the two classified groups. The best line is the one for which the distance to the closest point in each of the two groups is the largest.

In the above example, the line that splits the data into two classified groups is the black line, since the two closest points are the farthest from it. This line is our classifier. Then, depending on which side of the line a test point lands, that is the class we assign to the new data.

## Implementation of SVM in Python

## Step 1:

Import the essential libraries used for the implementation of SVM in our project.
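The article's import cell is not shown; a plausible set of imports for this workflow (assuming NumPy, matplotlib, and scikit-learn) would be:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
```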

## Step 2:

To implement SVM in Python, we use the Iris dataset, which is available via the load_iris() function. We will make use of the petal length and petal width in this analysis.
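A minimal sketch of this step (the original code was not reproduced in the article; scikit-learn's bundled copy of the Iris dataset is assumed):

```python
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data[:, 2:4]   # columns 2 and 3: petal length and petal width
y = iris.target
```

X has shape (150, 2): 150 flowers with two petal measurements each, and y holds the three class labels.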


In the next step, we will split the data into training and test sets using the train_test_split() function.
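For instance (the test-set fraction and random seed below are our assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X, y = iris.data[:, 2:4], iris.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
```

With test_size=0.25, 38 of the 150 samples go to the test set and 112 to the training set.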

Now we visualize our data and observe that one of the classes is linearly separable.
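A sketch of such a scatter plot (matplotlib is assumed; the Agg backend lets the script run without a display):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; the plot is saved to a file
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data[:, 2:4], iris.target
for label in range(3):
    plt.scatter(X[y == label, 0], X[y == label, 1],
                label=iris.target_names[label])
plt.xlabel("petal length (cm)")
plt.ylabel("petal width (cm)")
plt.legend()
plt.savefig("iris_scatter.png")
```

In the resulting plot, the setosa class sits clearly apart from the other two classes, which is the linear separability the text mentions.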


Now we scale our data. Scaling ensures that all of our feature values lie within an expected range.
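One common way to do this is standardization with scikit-learn's StandardScaler (our assumption for the scaler used):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris

X = load_iris().data[:, 2:4]
X_scaled = StandardScaler().fit_transform(X)
# After scaling, each feature has mean ~0 and standard deviation ~1
```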

The next step is the implementation of the SVM model. We will use the SVC class provided by the sklearn library, and select **'rbf'** as our kernel.
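A minimal sketch of fitting and scoring the model (the split fraction, seed, and default hyperparameters are our assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data[:, 2:4], iris.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_train)        # fit the scaler on training data only
model = SVC(kernel="rbf", random_state=0)
model.fit(scaler.transform(X_train), y_train)
accuracy = model.score(scaler.transform(X_test), y_test)
```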

After we have measured our accuracy, we can visualize the SVM model using a function called decision_plot(), passing the required values to it.
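decision_plot() is not a standard library function, so here is one hypothetical implementation: it predicts the class over a grid covering the data and shades each decision region (all names and parameters below are our own):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def decision_plot(model, X, y, filename="svm_regions.png"):
    """Shade the model's predicted class over a grid covering the data."""
    xx, yy = np.meshgrid(
        np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
        np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    plt.contourf(xx, yy, Z, alpha=0.3)                 # decision regions
    plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors="k")  # data points
    plt.savefig(filename)
    return Z

iris = load_iris()
X = StandardScaler().fit_transform(iris.data[:, 2:4])
y = iris.target
model = SVC(kernel="rbf", random_state=0).fit(X, y)
Z = decision_plot(model, X, y)
```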


## Naive Bayes

Naive Bayes is a very powerful machine learning algorithm used for classification. It assumes that each feature is independent of the others. It is easy to build and particularly useful for large datasets. It is used for a variety of tasks, such as spam filtering and other areas of text classification.

It provides a way of calculating the posterior probability P(c|x) from P(c), P(x), and P(x|c).

Equation:

P(c|x) = P(x|c) * P(c) / P(x)

Here,

- P(c|x) is the posterior probability of class (target) given predictor (attribute).
- P(c) is the prior probability of class.
- P(x|c) is the likelihood which is the probability of predictor given class.
- P(x) is the prior probability of predictor.
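As a sketch, a Gaussian Naive Bayes classifier (one common variant; scikit-learn and the Iris data are our choices, not the article's) applies this rule with a normality assumption on each feature:

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

iris = load_iris()
# Each feature is treated as independent and normally distributed per class
model = GaussianNB().fit(iris.data, iris.target)
accuracy = model.score(iris.data, iris.target)
```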

## K-Means

K-Means is an unsupervised algorithm used to solve clustering problems. Its procedure follows a simple and easy way to classify a given dataset into a certain number of clusters (assume k clusters). Data points inside a cluster are homogeneous, and heterogeneous with respect to peer groups.

## How K-means forms cluster:

- K-means picks k points, one for each cluster, known as centroids.
- Each data point forms a cluster with the closest centroid, i.e. k clusters.
- The centroid of each cluster is recomputed from the existing cluster members. Now we have new centroids.
- As we have new centroids, repeat steps 2 and 3: find the closest centroid for each data point and associate it with the new k clusters. Repeat this process until convergence occurs, i.e. the centroids do not change.
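The steps above can be sketched with scikit-learn's KMeans on two obvious blobs (the data and parameters are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated groups of points
X = np.array([[1, 1], [1.5, 2], [1, 0],
              [8, 8], [9, 9], [8, 9]], dtype=float)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_            # cluster assignment for each point
centers = km.cluster_centers_  # final centroids after convergence
```

The first three points end up in one cluster and the last three in the other.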

## Random Forest

Random Forest is an ensemble learning method used for classification, regression, and other tasks, performed with the help of decision trees.

Each tree is planted & grown as follows:

- If the number of cases in the training set is N, then a sample of N cases is taken at random, but with replacement. This sample will be the training set for growing the tree.
- If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M, and the best split on these m is used to split the node. The value of m is held constant while the forest grows.
- Each tree is grown to the largest extent possible. There is no pruning.
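The growing procedure above corresponds to scikit-learn's RandomForestClassifier defaults (bootstrap sampling, a random feature subset at each split, unpruned trees); this sketch assumes that library and the Iris data:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
# Each of the 100 trees is grown on a bootstrap sample of the training set,
# considering sqrt(M) random features at each split, with no pruning
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            random_state=0)
rf.fit(iris.data, iris.target)
accuracy = rf.score(iris.data, iris.target)
```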


## Scope and limitations of machine learning

## Scope

The scope of machine learning, both in the present and the future, is very high, for the following reasons:

– It improves online search results based on user preferences.

– It enables natural language processing.

– It can predict security breaches, find malware, and spot other inconsistencies in data.

– It provides personalized recommendations on platforms like Amazon, Netflix, Hotstar, etc.

– It is a highly useful technology, especially in the healthcare industry.

### Limitations

When using machine learning, we sometimes face the following challenges and limitations.

__First limitation of machine learning__

It is not capable enough to handle high-dimensional data, where the input and output are massive. Handling and processing such data becomes very complex and takes up a lot of resources. This problem is also known as the curse of dimensionality.

__Second limitation of machine learning.__

The significant challenge of a traditional machine learning model is feature extraction. Here we have to tell the computer which features it should look for, since they play an essential role in predicting the outcome and achieving good accuracy. Feeding raw data to the algorithm rarely works, and this is the reason why feature extraction is a critical part of the machine learning workflow.

The programmer tells the machine which features to consider, and based on these features the model has to predict the outcome. That is how machine learning works. This makes it challenging to apply machine learning models to complex problems like object recognition, handwriting recognition, natural language processing, and so on.
