

Updated: Aug 6, 2021

Author: Pabba Abhishek

Machine learning (ML), artificial intelligence (AI), deep learning, data specialist, data science, data analyst, data engineering: all these terms are quite popular these days, but what are they? Why is the world chasing after them? This blog tries to explain all these new technologies, while mainly focusing on machine learning and its types, with example code.

Contents of the Blog:

  • Data scientist, data analyst and data specialist

  • What is Artificial intelligence

  • What are Deep learning and neural networks?

  • Machine learning definition

  • What is machine learning actually?

  • Types of machine learning

  • What is supervised learning?

  • Classification and Regression models

  • What is unsupervised learning?

  • Clustering and Association models

  • What is Semi-supervised and Reinforcement learning?

A Quick Glance of Data World: Data scientist, Data Analyst and Data Specialist

Explaining all the terms mentioned earlier is somewhat tricky, because though they have different definitions, different names, and different approaches, they all deal with the same thing in different ways: data. This data is nothing more or less than a household grocery list or even your to-do list. In layman's terms, everything that includes information and statistics is termed data.

In other words, data is everywhere, like the grocery list mentioned earlier or even your vehicle dashboard indicating the speed, distance traveled, fuel gauge, etc. However, not all this data is useful if you want to find the market price of a plot in an urban city or for a bookstore to find whether you will buy a new book they are releasing.

To answer specific questions, you need specific data. For example, if you want to find the market price of a plot in Delhi, you need data on plot prices in Delhi. And if a bookstore wants to find out whether you will buy their new book, they need data about your previous reads. Even then, not all of the collected data helps you find solutions. For example, if the collected plot-price data is from the 1990s, it is irrelevant and useless for finding the plot's market price in 2021.

A data analyst, data scientist, or data specialist solves these kinds of problems. Though they approach a problem differently, their core functions all include data collection, data cleaning, and data analysis, and the outcomes for the three are very similar.

Data world: Artificial intelligence

Now, to understand the other part of this data world, i.e. artificial intelligence, machine learning, and deep learning, we need to know about the person considered the father of artificial intelligence, Alan Turing[].

Alan Turing (1912-1954) is considered one of the most outstanding scientists of the twentieth century. He was a mathematician, logician, cryptanalyst, and philosopher. He made history by laying the theoretical foundations of computer science. Besides, during World War II, he played a key role in cracking ciphers so that the Allies could read the Nazis' coded messages.

He posed the question "Can machines think?" and expected that the world would be developed enough by the 21st century to create a machine that could pretend to be human. His research and works laid the foundation of modern AI, the very artificial intelligence he tried to develop. Maybe the current AI models are far from what he hoped, and far less developed than any sci-fi movie concept dealing with robots or machine intelligence, but modern AI has certainly grown a lot.

FIG: 4

What is Artificial Intelligence (AI)?

AI is the technology that enables a machine to think as Turing envisioned, that is, to make its own decisions without any human intervention. It is a broad area of computer science that makes machines seem like they have human intelligence. So it's not only programming a computer to drive a car by obeying traffic signals; it's when that program also learns to exhibit signs of human-like road rage.

It is very similar to the sci-fi movie concepts, but instead of giving the eerie feeling those stories depict, or always looking like humanoids and speaking like humans, current AI systems are mostly chatbots. While talking AIs are limited, they too exist and are quite famous, for example, Siri.

These AI systems use ML, deep learning, neural networks, cognitive computing, and computer vision to make decisions. While ML and deep learning are quite famous, the others are booming, and each one deals with one kind of trait: cognitive computing analyses text, speech, etc., and computer vision deals with images[].

As mentioned earlier, AI is still not shining to its full extent, and many research efforts, models, and tests are being invented or developed as we speak. In the near future we might find more Sophias[] (humanoid robots) around us, and also more Samanthas[] (like the virtual assistant in the film Her).

ML subdomain: Deep learning and neural networks

Deep learning is a machine learning technique inspired by the way a human brain filters information; it is basically learning from examples. It helps a computer model filter input data through layers to predict and classify information. Since deep learning processes information in a manner similar to a human brain, it is mostly used for tasks that people generally do.

It is the key technology behind driverless cars, enabling them to recognize a stop sign and to distinguish between a pedestrian and a lamppost. Most deep learning methods use neural network architectures, so they are often referred to as deep neural networks.

Earlier it was mentioned that AI uses deep learning and ML, while here deep learning is described as a neural-network technique and a subdomain of ML. To clearly understand the relation between the three, there is an illustration.


Spotlight of the show: Machine learning


There are many definitions of machine learning, as it varies from person to person, company to company, and even from field to field. While the above definition, taken from Emerj[], summarizes all the other famous definitions, below are some other popular ones:

  • “Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world.” – Nvidia

  • “Machine learning is the science of getting computers to act without being explicitly programmed.” – Stanford

  • “Machine learning is based on algorithms that can learn from data without relying on rules-based programming.”- McKinsey & Co.

  • “Machine learning algorithms can figure out how to perform important tasks by generalizing from examples.” – University of Washington

  • “The field of Machine Learning seeks to answer the question “How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?” – Carnegie Mellon University

A More Detailed Explanation of Machine Learning

If the definitions alone don't give you an understanding of machine learning, then this example might help.

Example of machine learning: The most common uses of machine learning are the recommendation systems on Netflix, YouTube, Spotify, etc., but how do these use machine learning? Let's say a friend recommended Stranger Things to you on Netflix and you watched it; then the same friend recommended Money Heist, and you watched that too. Then, bored, you started browsing other catalogues, found an interesting thumbnail, and checked it out. The series was interesting and you completed it in two binge nights. Next, you found another series to your liking and started it, but dropped it midway as it went out of your interest zone.

This process is continuous, and it is happening to millions of users as we speak. Moreover, Netflix is collecting all this data and analyzing it all the time.

Netflix uses machine learning to predict what you want to watch next, based on what you watched before and on what others who watched the same shows as you watched after. Its machine learning algorithms try to weigh your data based on your behaviour: the show you binged in two nights is ranked at the top, and the show you dropped midway is ranked at the bottom.

The machine learning algorithms try to predict your behaviour and interests, and recommend shows based on those predictions. This is one application of ML by Netflix, but Netflix uses ML in many other ways. To mention some: the preview of a show on Netflix changes over time, because the ML is trying to find the most appealing piece of the show, the one that would attract more viewers. Not just the preview; sometimes the thumbnails of the shows also change over time. And Netflix uses ML not only for end users, but also for predicting which content to produce, choosing locations to shoot at, and tuning the audio encoding and video quality of a show.

Machine learning types

  • Supervised – Machine Learning uses labelled data

  • Semi-supervised- Machine Learning uses labelled data and unlabeled data

  • Unsupervised - Machine Learning uses only unlabeled data and needs to find the pattern for predictions

  • Reinforcement- Interacts with environment and uses feedback to correct itself

The labelled learning: Supervised learning

‘In supervised learning, the dataset is the collection of labeled examples {(x_i, y_i)}, i = 1, ..., N. Each element x_i among the N examples is called a feature vector. A feature vector is a vector in which each dimension j = 1, ..., D contains a value that describes the example somehow. That value is called a feature and is denoted as x^(j).

For instance, if each example x in our collection represents a person, then the first feature, x^(1), could contain height in cm, the second feature, x^(2), could contain weight in kg, x^(3) could contain gender, and so on. For all examples in the dataset, the feature at position j in the feature vector always contains the same kind of information. It means that if x_i contains weight in kg, then x_k will also contain weight in kg in every example k = 1, ..., N.

The label yi can be either an element belonging to a finite set of classes {1, 2, . . . ,C}, or a real number, or a more complex structure, like a vector, a matrix, a tree, or a graph. You can see a class as a category to which an example belongs. For instance, if your examples are email messages and your problem is spam detection, then you have two classes {spam, not spam}.

The goal of a supervised learning algorithm is to use the dataset to produce a model that takes a feature vector x as input and outputs information that allows deducing the label for this feature vector. For instance, the model created using the dataset of people could take as input a feature vector describing a person and output a probability that the person has cancer’[].

FIG : 7

In layman's terms, the data fed to an ML algorithm may contain many features. Consider patient data in a hospital: it might have the heights and weights of patients, and also blood levels, cholesterol, etc., but there should be one feature to be predicted. In this situation, it can be, “does the patient have cancer or not?”

In the training set fed to the model, every person should have a yes or no under “Have Cancer?”. This is the labelled feature in the data, and the ML algorithm tries to find a pattern from all the other features of the known patients in the training set, to predict “yes” or “no” for the unknown patients in the testing set.
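To make this concrete, here is a minimal sketch of that cancer example using scikit-learn's decision tree classifier (scikit-learn is assumed to be installed). The patient numbers and the blood-marker feature are invented purely for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Training set: each row is a feature vector
# [height_cm, weight_kg, blood_marker_level] (made-up values)
X_train = [
    [170, 70, 1.2],
    [160, 80, 4.5],
    [175, 65, 0.9],
    [155, 90, 5.1],
]
# Labels: the "Have Cancer?" column (1 = yes, 0 = no)
y_train = [0, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Testing set: an unknown patient whose label we want to predict
prediction = model.predict([[158, 85, 4.8]])
print(prediction[0])  # 1 ("yes") for this toy data
```

The classifier learns a pattern from the known patients (here, a higher blood-marker level goes with the "yes" label) and applies it to the unknown patient.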

Supervised learning: Classification and Regression

Supervised learning can be further divided into two categories: Classification and regression.

Classification predicts the category the data belongs to. Some examples of classification include spam detection, churn prediction, sentiment analysis, dog breed detection, and so on.


Simply put, as in the above illustration, when the model should predict qualitative (categorical) values, i.e. the data can be sorted into classes to predict the result, classification models are used.

Regression predicts a numerical value based on previously observed data. Some examples of regression include house price prediction, stock price prediction, height-weight prediction and so on.
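As a sketch of the house-price example, the code below fits a straight line with scikit-learn's LinearRegression; the areas and prices are invented for illustration:

```python
from sklearn.linear_model import LinearRegression

# Feature: plot area in square metres; target: price (made-up units)
X = [[50], [75], [100], [125], [150]]
y = [25.0, 37.5, 50.0, 62.5, 75.0]  # price grows linearly with area here

model = LinearRegression()
model.fit(X, y)

# Predict the price of an unseen 120 sq-m plot
print(model.predict([[120]])[0])  # ~60.0 for this perfectly linear toy data
```

Unlike the classifier above, the output is a continuous number, not a category.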


The unlabelled learning: Unsupervised learning

In unsupervised learning, the dataset is a collection of unlabelled examples {x_i}, i = 1, ..., N. Again, x is a feature vector, and the goal of an unsupervised learning algorithm is to create a model that takes a feature vector x as input and either transforms it into another vector or into a value that can be used to solve a practical problem.

For example, in clustering, the model returns the id of the cluster for each feature vector in the dataset. In dimensionality reduction, the output of the model is a feature vector that has fewer features than the input x; in outlier detection, the output is a real number that indicates how x is different from a “typical” example in the dataset.
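As a small illustration of dimensionality reduction, the sketch below uses scikit-learn's PCA to compress two correlated features into one; the points are made up:

```python
import numpy as np
from sklearn.decomposition import PCA

# Five 2-D examples lying almost on a line: the two features
# really carry only one dimension of information
X = np.array([[1, 2.0], [2, 4.1], [3, 5.9], [4, 8.2], [5, 10.0]])

pca = PCA(n_components=1)         # ask for a single output feature
X_reduced = pca.fit_transform(X)  # each feature vector becomes shorter

print(X_reduced.shape)  # (5, 1): same examples, fewer features
```

Note that no labels were supplied anywhere; the model works out the structure of the data on its own.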

Again, in layman's terms: in the earlier Netflix recommendation example, the dataset contains all kinds of data about users, from the show a user watched most recently to shows watched last year, and also how long the user took between episodes and which shows the user dropped. But there is no definite label or target field to predict.

In such situations, unsupervised learning tries to infer patterns in the user's watching: it clusters the user's interests, and then the shows that belong to those clusters are recommended.

Unsupervised learning: Clustering and Association

Unsupervised learning problems can be further grouped into clustering and association problems.

Clustering mainly deals with finding a structure or pattern in a collection of uncategorized data. Clustering algorithms will process the data and find natural clusters (groups) if they exist in the data. The model can also be told how many clusters it should identify, which allows you to adjust the granularity of these groups.

FIG 10: Clustering Model
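Here is a minimal clustering sketch with scikit-learn's KMeans, on made-up points that form two obvious groups:

```python
from sklearn.cluster import KMeans

X = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],   # group near (1, 1)
     [8.0, 8.0], [8.1, 7.9], [7.8, 8.2]]   # group near (8, 8)

# We choose the number of clusters, i.e. the granularity
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# The model returns a cluster id for each feature vector:
# the first three points share one id, the last three the other
print(labels)
```

Which group gets id 0 and which gets id 1 is arbitrary; only the grouping itself matters.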

Association rules allow you to establish associations amongst data objects inside large databases. This unsupervised technique is about discovering interesting relationships between variables in large databases. For example, people who buy a new home are most likely to buy new furniture.
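The two basic association metrics, support and confidence, can be sketched in plain Python; the tiny transaction list below is invented to mirror the home-and-furniture example:

```python
# Each transaction is the set of items one customer bought (made-up data)
transactions = [
    {"new home", "new furniture", "paint"},
    {"new home", "new furniture"},
    {"new home", "paint"},
    {"groceries"},
]

def support(itemset):
    # Fraction of transactions that contain every item in the set
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # Of the transactions with the antecedent, the fraction that
    # also contain the consequent
    return support(antecedent | consequent) / support(antecedent)

# "People who buy a new home are likely to buy new furniture":
print(confidence({"new home"}, {"new furniture"}))  # 2 of the 3 home buyers did
```

Real association-rule miners such as Apriori search for all rules whose support and confidence exceed chosen thresholds, but the arithmetic is the same as above.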

Other Examples:

  • A subgroup of cancer patients grouped by their gene expression measurements

  • Groups of shoppers based on their browsing and purchasing histories

  • Movies grouped by the ratings given by viewers


Semi-supervised learning and Reinforcement learning

‘In semi-supervised learning, the dataset contains both labelled and unlabelled examples. Usually, the quantity of unlabelled examples is much higher than the number of labelled examples. The goal of a semi-supervised learning algorithm is the same as the goal of the supervised learning algorithm.

The hope here is that using many unlabelled examples can help the learning algorithm to find (we might say “produce” or “compute”) a better model. It could look counter-intuitive that learning could benefit from adding more unlabelled examples. It seems like we add more uncertainty to the problem. However, when you add unlabelled examples, you add more information about your problem: a larger sample reflects better the probability distribution the data we labelled came from. Theoretically, a learning algorithm should be able to leverage this additional information’[].

‘Reinforcement learning is a subfield of machine learning where the machine “lives” in an environment and is capable of perceiving the state of that environment as a vector of features. The machine can execute actions in every state. Different actions bring different rewards and could also move the machine to another state of the environment. The goal of a reinforcement learning algorithm is to learn a policy.

A policy is a function (similar to the model in supervised learning) that takes the feature vector of a state as input and outputs an optimal action to execute in that state. The action is optimal if it maximizes the expected average reward. Reinforcement learning solves a particular kind of problem where decision making is sequential, and the goal is long-term, such as game playing, robotics, resource management, or logistics’[].
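As a toy sketch of those ideas, the tabular Q-learning loop below (a standard reinforcement-learning algorithm, with a made-up five-state corridor as the environment) learns a policy mapping each state to its best action; moving right eventually earns a reward:

```python
import random

random.seed(0)

# Environment: states 0..4 on a line; action 0 = left, action 1 = right.
# Reaching state 4 ends the episode with reward 1.
n_states, actions = 5, [0, 1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):  # episodes
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == 4 else 0.0  # feedback from the environment
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy maps each non-terminal state to its best action
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(4)}
print(policy)  # every state learns to move right (action 1)
```

Notice there is no labelled dataset at all: the machine interacts with the environment, receives rewards as feedback, and corrects its value estimates, just as the definition above describes.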

It is found that around 90% of the data present in the world (around 40 zettabytes[]) was generated in the last decade. This data, when used correctly, can do wonders and help solve many of the world's problems. This blog tried to explain the data-related skills present today, but these fields grow every day, and in a few years there will be more useful and more wonderful ones that we could never have imagined.

