
    Introduction to Machine Learning

    Machine learning is a sub-field of artificial intelligence (AI). It gives machines the ability to learn from data and improve with experience, which sets it apart from traditional computational approaches: in a traditional approach, an algorithm is a set of explicitly programmed instructions for solving a particular problem, whereas a machine learning algorithm builds its own logic from training data. The field of machine learning continues to grow over time. It enables machines to learn from data, recognise patterns, and make decisions, and it can process large amounts of data to produce better predictions.

    Machine learning is not a new method, but it gained momentum in the 21st century. Nowadays it is applied across sectors such as healthcare, sales and marketing, and automobiles, providing services such as online recommendations, fraud detection, and self-driving cars.

    Life cycle of Machine Learning

    The life cycle of machine learning is a cyclical process that an analyst follows to build and manage good-quality models and to find a solution to the problem at hand. It is used for developing software and models.

    These phases are handled by the data scientist to develop and train the models, although organisations can use various applications to define experimental business value. The life cycle begins with raw data that needs to be cleaned; after cleaning, the dataset is often shared and reused. If a data scientist finds issues in a received dataset, they need the raw data and the transformation scripts to identify the cause, and there can be various reasons to return to previous versions of a model. Figure 1 shows the phases of the life cycle:

    [Figure 1: Phases of the machine learning life cycle]
    • Data gathering:

      This is the first and most important step of the life cycle: gathering the data and identifying the problem. The quality of the gathered data determines the accuracy of the output, and the data can be retrieved from databases, files, and several other sources.
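      As an illustration, gathering data from a file can be as simple as parsing records into memory. The sketch below uses Python's standard csv module on a small in-memory sample; the column names and values are hypothetical, not from this tutorial:

```python
import csv
import io

# Gathered data can come from databases, files, or APIs; here a small
# in-memory CSV stands in for a hypothetical data file.
raw_csv = """age,income,bought
25,30000,no
40,85000,yes
31,52000,yes
"""

# Parse each row into a dictionary keyed by column name.
rows = list(csv.DictReader(io.StringIO(raw_csv)))
print(len(rows))       # number of gathered records
print(rows[0]["age"])  # first record's "age" column
```

      Reading from a real database or API would replace the in-memory string, but the result is the same: a collection of raw records ready for the next phase.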

    • Data preparation:

      After the data has been gathered, this step organises it and places it in a suitable location and format so that it can then be used to train a machine learning model.

    • Data wrangling:

      In this step, raw data is cleaned and converted into a usable format, and quality issues are identified. The collected data is not always consistent or useful: in real-world applications it can contain issues such as missing values, duplicate records, invalid data, and noise. Many filtering techniques are used to clean the data, and identifying and removing these issues is essential because they can affect the quality of the output.
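      The two most common wrangling fixes named above, missing values and duplicates, can be sketched in a few lines of plain Python (the records and field names are invented for illustration):

```python
# Toy wrangling step: drop duplicate records and records with missing values.
records = [
    {"age": 25, "income": 30000},
    {"age": 25, "income": 30000},    # duplicate record
    {"age": None, "income": 52000},  # missing value
    {"age": 40, "income": 85000},
]

seen = set()
clean = []
for rec in records:
    key = (rec["age"], rec["income"])
    if None in key:   # discard rows with missing values
        continue
    if key in seen:   # discard exact duplicates
        continue
    seen.add(key)
    clean.append(rec)

print(clean)  # only unique, complete records remain
```

      Real projects typically do this with a dedicated library rather than hand-written loops, but the filtering logic is the same.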

    • Data analysis:

      The main goal of this step is to build a machine learning model that predicts the output accurately. Once the data issues have been addressed, an appropriate type of algorithm (supervised, unsupervised, or reinforcement learning) is selected to build the model.
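      To make the supervised case concrete, here is a minimal sketch of one of the simplest supervised algorithms, a 1-nearest-neighbour classifier; the points and labels are made up for the example:

```python
# Minimal supervised learner: 1-nearest-neighbour classification.
# "Training" is just storing labelled points; prediction copies the
# label of the stored point closest to the query.
def predict(train, query):
    nearest = min(
        train,
        key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)),
    )
    return nearest[1]

# Hypothetical labelled training data: (feature vector, label) pairs.
train = [((1.0, 1.0), "cat"), ((9.0, 9.0), "dog")]

print(predict(train, (2.0, 1.5)))  # closest to the "cat" point
print(predict(train, (8.0, 8.5)))  # closest to the "dog" point
```

      A real project would pick from many such algorithms; the point is that a supervised model learns a mapping from labelled examples to predictions.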

    • Data selection and verification:

      Training involves certain parameters, called hyperparameters, which control the effectiveness of the training process. In the verification step, the trained model is run on input held out from training and its output is checked; this gives the user enough information to decide whether the model is working efficiently.
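      A minimal sketch of this verification idea, using an invented dataset and a single hypothetical hyperparameter (a decision threshold), might look like this:

```python
# Verification sketch: hold out part of the data and measure accuracy
# for one hyperparameter setting.
data = [(30000, "no"), (52000, "yes"), (85000, "yes"), (20000, "no")]
train, held_out = data[:2], data[2:]  # simple train / held-out split

threshold = 40000  # hyperparameter, chosen using the training portion

def predict(income):
    return "yes" if income > threshold else "no"

# Check the model only on examples it never saw during training.
correct = sum(predict(x) == y for x, y in held_out)
accuracy = correct / len(held_out)
print(accuracy)  # fraction of held-out examples predicted correctly
```

      Trying several threshold values and keeping the one with the best held-out accuracy is hyperparameter tuning in its simplest form.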

    • Data deployment:

      The main goal of this phase is to integrate the model into processes and applications. The model must be deployed in such a manner that it can be used for inference and updated regularly.
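      One simple deployment pattern is to persist the trained model's parameters so a separate serving process can load them and answer inference requests. The sketch below assumes a hypothetical model with a single threshold parameter and stores it as JSON:

```python
import json
import os
import tempfile

# Hypothetical trained parameters (a single decision threshold).
model = {"threshold": 40000}

# Persist the model so a separate serving process can load it.
path = os.path.join(tempfile.gettempdir(), "model.json")
with open(path, "w") as f:
    json.dump(model, f)

# Later, inside the serving application: load the parameters
# and answer inference requests.
with open(path) as f:
    loaded = json.load(f)

def infer(income):
    return "yes" if income > loaded["threshold"] else "no"

print(infer(90000))  # a single inference request
```

      Updating the model regularly then amounts to retraining, writing a new parameter file, and having the serving process reload it.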


    The term “Machine Learning” was coined by Arthur Samuel in 1959, and in recent years many advancements have been made in the field. Figure 2 shows the history of machine learning as a timeline:

    [Figure 2: Timeline of machine learning history]
    • 1950:

      In 1950, Alan Turing proposed the “Turing test”, invented to check whether a computer can behave in a way indistinguishable from a human being.

    • 1952:

      While working at IBM, Arthur Samuel developed the first computer learning program, a checkers-playing program. As it played the game, the program kept learning the winning moves.

    • 1957:

      The first neural network, called the “Perceptron”, was developed by Frank Rosenblatt. It attempted to replicate the thinking process of the human brain.

    • 1967:

      Evelyn Fix and Joseph Hodges first proposed the k-nearest neighbours algorithm (k-NN) in 1951, and it was later expanded by Thomas Cover. The algorithm allows a computer to recognise basic patterns in data and is used for tasks such as classification and regression.

    • 1979:

      Students at Stanford University invented a device known as the “Stanford Cart”, a mobile robot that could be controlled remotely and move through cluttered paths. It navigated using images broadcast to a TV system or computer screen.

    • 1981:

      In a journal article, Gerald DeJong proposed the concept of explanation-based learning (EBL). In this method of learning, the computer analyses training data and generates a general rule that can be used to discard unnecessary data from the training set.

    • 1985:

      Terry Sejnowski developed a software program called “NetTalk”, which learns to pronounce English words the way a baby does. The aim was to build a simple model that could tackle the complexity of a human-level cognitive task.

    • 1990:

      In this year, machine learning work shifted from a knowledge-driven to a data-driven approach. Scientists and researchers began creating programs that could analyse huge amounts of data and draw conclusions from the results.

    • 1997:

      IBM’s Deep Blue computer defeated the reigning world chess champion, Garry Kasparov.

    • 2006:

      Geoffrey Hinton coined the term “deep learning” to describe a new class of algorithms that help a computer differentiate between objects and text in images or videos.

    • 2010:

      Microsoft released Kinect, a motion-sensing input device that could track twenty human features at a rate of 30 times per second.

    • 2012:

      Google’s X lab developed a machine learning algorithm that was able to browse YouTube videos automatically and identify the ones that contained cats.

    • 2014:

      Facebook developed a software algorithm called “DeepFace” that can identify individuals in photos just as a human does.

    • 2015:

      Amazon launched a machine learning platform, and Microsoft created the Distributed Machine Learning Toolkit to allow efficient distribution of machine learning problems across multiple computers.

    • 2016:

      An open letter warning the world about the danger of autonomous weapons, which could be activated without human intervention, was signed by over 3,000 ML and robotics researchers.

    • 2017:

      The Chinese board game “Go” is considered the most complex board game in the world. “AlphaGo”, a computer program, defeated the 18-time world champion Lee Sedol in a five-game Go match.

    • 2020:

      OpenAI announced GPT-3, a ground-breaking natural language processing model that can generate human-like text. GPT-3 is considered the largest and most advanced language model in the world; it was trained on Microsoft Azure’s AI supercomputer and uses 175 billion parameters.

    Copyright 1999- Ducat Creative, All rights reserved.