Category Archives: Machine learning

Everyone is talking about ChatGPT: Here is what I learned.


ChatGPT – What is it?

ChatGPT is a large language model trained by OpenAI for generating human-like text. It can be useful for a variety of natural language processing tasks, such as generating text, translating languages, summarizing long documents, and answering questions. Because it is trained on a massive amount of text data, it has a wide range of knowledge and can generate text that is difficult for other models to produce. However, like all language models, ChatGPT has limitations and may not always produce accurate or appropriate text, so it should be used with caution.

It is not capable of making decisions or taking actions on its own. It is up to users to decide how to use ChatGPT and other AI technologies, and it is ultimately the responsibility of human beings to determine how they will be used and how they will impact society.

Facts about ChatGPT:

  • Created by OpenAI.
  • Founded in 2015 by high-profile entrepreneurs including Elon Musk and Sam Altman.
  • Valued at around $20 billion.
  • Other products include DALL·E 2 and Whisper.
  • ChatGPT is powered by the GPT-3.5 model series.
  • Crossed 1 million users in just 5 days

Why is it important and how can we use it?

For chat – Simple chat

As the name suggests, you can use ChatGPT simply to chat. Ask it almost anything and it will usually give a sensible answer. It is not just a text generator: because it remembers the conversation so far, it can hold a real back-and-forth dialogue, and marketers are already using it to generate content for digital marketing campaigns.

In short, just ask something and you will get a response. The responses are mostly sensible, though it may occasionally generate incorrect information.

Write, debug and explain code

If you are a programmer, this is huge news for you. You can now use ChatGPT to write and debug code. The app not only writes code but also fixes bugs and generates explanations for the code it writes.

The development process could become significantly faster and cheaper if we are able to use AI-powered apps to write code. It seems this is happening, and it is just the beginning.

ChatGPT explains complex programming topics and concepts at nearly human level, and I wonder what is stopping it from becoming an alternative to human coders.

For creative writing

Large language models are really good at generating coherent text with a structured approach. ChatGPT is no exception: structuring creativity with it is easier than ever, with only a little guidance and observation. It can handle fairly complex instructions and produce longer-form content such as poems, fiction, non-fiction and even long text-based essays.

ChatGPT is able to keep track of what has been said previously and use that information to generate appropriate responses. It generates formal or informal text, short and long form, depending on the context and the tone of the conversation. This tool can be beneficial for creating content for social media or other online platforms.

One user asked the chatbot to explain a regular expression and to write a short essay on the “effects of westward expansion on the civil war”. In both cases it was impressively creative and delivered pretty good results.

Deploy a virtual machine (VM)

Jonas Degrave, a researcher, showed how he turned ChatGPT into what appears to be a full-fledged Linux terminal, interacting with a VM created inside ChatGPT right from your web browser. A virtual machine running inside ChatGPT feels like magic. See the written article by Jonas here.

Security

We are not surprised at all that people are using it for all sorts of purposes. Some users are using ChatGPT to reverse engineer shellcode and rewrite it in C, while others are playing with it to generate nmap scans.

Limitations

Like any other machine learning model, it is only as good as the data it has been trained on. This means it may not be able to provide accurate answers to questions, or generate responses, that fall outside the scope of its training data. Additionally, ChatGPT is a text-based model, so it cannot provide visual or audio responses. Finally, ChatGPT is not able to browse the internet or access external information, so it can only provide information that it has been trained to generate based on the input it receives.


Wrap up

We are currently experiencing huge developments in this space, thanks to ChatGPT, which is taking the world by storm. It can be used in many areas, including social media content generation, voice assistants, chatbots and virtual assistants, customer care applications, meetings, code generation, and security research. This opens the door for a new generation of chatbot innovation, possibly the kind that many anticipated but had not yet seen come to pass. At least up to this point.

The Best Machine Learning Books That All Data Scientists Must Read


Machine learning is an exciting field that has been growing rapidly in recent years, and it is only expected to keep growing. There are many topics that data scientists can explore while studying machine learning, but there are some core principles and key texts that you should definitely be familiar with if you want to be taken seriously in this industry.

Today, I’ll take a look at the top machine learning textbooks that I’m currently reading and why you should read them as well.

1. Artificial Intelligence: A Modern Approach

Artificial Intelligence is a massive, multi-disciplinary field, so it’s no surprise that there are plenty of resources for those looking to jump in. The most highly rated textbook for AI students on Amazon is Stuart Russell and Peter Norvig’s Artificial Intelligence: A Modern Approach. First published in 1995 and updated multiple times since, this is a heavy book whose 27 chapters cover problem solving and search, logic and inference, planning, probabilistic reasoning and decision making, learning, communication, perception and robotics: basically everything from classic algorithms to neural networks and natural language processing.

Topics Covered:

  • Logical Agents
  • Learning, communication, perception and robotics
  • Supervised, Unsupervised learning and Reinforcement Learning, Machine Learning models and Algorithms
  • Probabilistic Reasoning
  • Natural Language Processing

This book is used not only by students but also by many experts in the field. Here are a few reviews from academics and professionals on the subject.

Expert Opinions

I like this book very much. When in doubt I look there, and usually find what I am looking for, or I find references on where to go to study the problem more in depth. I like that it tries to show how various topics are interrelated, and to give general architectures for general problems … It is a jump in quality with respect to the AI books that were previously available. — Prof. Giorgio Ingargiola (Temple).

Really excellent on the whole and it makes teaching AI a lot easier. — Prof. Ram Nevatia (USC).

It is an impressive book, which begins just the way I want to teach, with a discussion of agents, and ties all the topics together in a beautiful way. — Prof. George Bekey (USC).

2. Deep Learning (Adaptive Computation and Machine Learning series)

Ian Goodfellow, Yoshua Bengio, and Aaron Courville are three researchers who stand at the forefront of deep learning. The book provides general context and comprehensive coverage of the mathematical foundations of deep learning. It is highly recommended reading if you want to start your journey with deep learning.

Topics Covered:

The first few chapters cover the mathematical concepts behind deep learning. You will be able to grasp these without difficulty if you have a solid grounding in linear algebra, probability and statistics. Part 3 covers deep learning research, including various advanced techniques and methods, and is quite challenging.

  • Numerical Computation
  • Deep Feedforward Networks
  • Optimization for Training Deep Models
  • Deep Learning Research

Expert Opinions

“Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.” —Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX.

“If you want to know where deep learning came from, what it is good for, and where it is going, read this book.” —Geoffrey Hinton FRS, Professor, University of Toronto, Research Scientist at Google.

3. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems 2nd Edition

This book is a must-read for everyone who seriously wants to enter this field. It is perfect for machine learning practitioners, as it covers the most important aspects of machine learning, such as classification, regression, clustering, and dimensionality reduction. It simplifies highly complex concepts through concrete, real-world examples, and it provides a detailed introduction to popular frameworks such as Scikit-Learn, Keras and TensorFlow. Author Aurélien Géron has laid out all the concepts beautifully, so you can gain an intuitive understanding of the concepts and tools for building intelligent systems.

You need programming experience to get started, so knowing the Python programming language will greatly help you get through this book.

Topics Covered:

  • Introduction to machine learning and history
  • Use Scikit-Learn to track an example machine-learning project end-to-end
  • Explore several training models such as Support Vector Machines, Decision Trees, Random Forests, and Ensemble methods
  • Use the TensorFlow library to build and train neural nets
  • Dive into neural net architectures, including convolutional nets, recurrent nets, and deep reinforcement learning
  • Techniques for training and scaling deep neural nets.

Expert Opinions

“An exceptional resource to study Machine Learning. You will find clear-minded, intuitive explanations, and a wealth of practical tips.” —François Chollet, Author of Keras, author of Deep Learning with Python.

“This book is a great introduction to the theory and practice of solving problems with neural networks; I recommend it to anyone interested in learning about practical ML.” — Peter Warden, Mobile Lead for TensorFlow.

4. Python Machine Learning – Second Edition: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2nd Edition

You don’t want to miss this book if you really want to learn machine learning. It is a perfect fit, as its primary focus is the implementation of various machine learning algorithms. The book places special emphasis on using scikit-learn to implement these algorithms and is a must for anyone looking to develop mastery of algorithm development.

Sebastian Raschka and Vahid Mirjalili have since updated it to a third edition, covering TensorFlow 2, scikit-learn, reinforcement learning, and GANs in the recent release.

Topics Covered:

  • Explore and understand the key frameworks for data science, machine learning and deep learning
  • Master deep neural network implementation using the TensorFlow library
  • Embed machine learning models in web applications

These books are well worth reading if you want to advance your machine learning knowledge and skills. I own printed copies of each book mentioned above. In addition, I have started reading several other ML books and PDF copies, watching YouTube videos and studying research papers to improve my ML skills and knowledge.

How Weight and Bias Impact the Output Value of a Neuron

I had the opportunity to stay at one of the finest water villa resorts in the Maldives, Gili Lankanfushi (No News No Shoes), for free. Yes, for free!

So why not learn something while here? Thanks to the Gili team and management.

Let’s begin…


Artificial neurons are digital constructs that seek to simulate the behavior of biological neurons in the human brain. Large numbers of artificial neurons are digitally connected to each other to make up an artificial neural network. The artificial neuron is therefore the fundamental building block of any neural network.


Artificial Neuron


An artificial neuron is a mathematical model that mimics a biological neuron. Each neuron receives one or more inputs and combines them, via an activation function, to produce an output.

Weights and Biases


Weights and biases are the learnable parameters of a machine learning model. When inputs are transmitted between neurons, the weights are applied to the inputs along with the bias.

Weights control the strength of the connection between two neurons. They decide how much influence the input will have on the output.

Biases are constant values. Bias units are not influenced by the previous layer, but they do have outgoing connections with their own weights.
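To make this concrete, here is a minimal sketch of a single artificial neuron in Python (my own illustration; the input values, weights and bias are made up): it multiplies each input by its weight, adds the bias, and passes the weighted sum through a sigmoid activation function.

import numpy as np

def neuron_output(inputs, weights, bias):
    # weighted sum of the inputs plus the bias
    z = np.dot(inputs, weights) + bias
    # sigmoid activation squashes the result into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# three illustrative inputs with hypothetical weights and a bias
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.3
print(neuron_output(x, w, b))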


How Neural Networks Work


At a very high level, a simple neural network consists of an input layer, an output layer and many hidden layers in between. These layers are connected via a series of nodes, so together they form a giant, complex network.

Within each node there is a weight and a bias value. As an input enters the node, it is multiplied by the weight, the bias is added, and the result is passed on to the next layer in the network. In this way a signal is formed that travels from one layer to the next until it reaches the final output layer.

This complex underlying structure is what gives computers the power to “think like humans” and produce sophisticated cognitive results.


So let’s begin with a single-input neuron’s output, with a weight of 1, a bias of 0 and input x.

In the second example we will adjust the weight, keeping the bias unchanged, and see how the slope of the function changes.

As you can see, if we increase the value of the weight, the slope gets steeper. If we reduce the weight of the neuron, the slope decreases.

Now, what if we negate the weight? Obviously, the slope turns negative.

As mentioned earlier, these graphs visualize how the weight affects the output value of a single neuron. Now let’s change things a little. This time we will keep the weight at 1.0 and try different bias values, starting with a weight of 1.0 and a bias of 2.0.

As we increase the bias, the function output shifts upward. If we decrease the bias, the overall function output moves downward.
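Since the original graphs may not be visible here, the following short matplotlib sketch (my own, not from the original post) recreates them for a single linear neuron, y = wx + b, plotted for several weights and biases.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 100)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# varying the weight changes the slope (bias fixed at 0)
for w in [0.5, 1.0, 2.0, -1.0]:
    ax1.plot(x, w * x, label=f'w={w}, b=0')
ax1.set_title('Weight controls the slope')
ax1.legend()

# varying the bias shifts the line up or down (weight fixed at 1)
for b in [-2.0, 0.0, 2.0]:
    ax2.plot(x, x + b, label=f'w=1, b={b}')
ax2.set_title('Bias shifts the output')
ax2.legend()

plt.show()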

Now we have learned something about artificial neurons: they mimic how the human brain works, and they need weight and bias values to produce an output.

Important Points of Supervised Learning


For the first time ever I had the opportunity to go on a multi-day fishing trip with a group of friends on a local fishing boat. The trip was 6 days long; we spent roughly 100 hours in the middle of the ocean, 20-50 nautical miles out. It was a totally different experience for me, and during the trip I tried to learn something about supervised learning.

So let’s go…



  • Supervised learning models learn from labeled data, known as training data.
  • Training data contains different patterns.
  • The algorithm learns the underlying patterns during the training process.
  • In the testing phase, the trained model predicts the desired outcome for unseen data (see the sketch after this list).
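Here is a minimal sketch of that train/test workflow, using scikit-learn's built-in iris dataset (my own illustration, not from the original post):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# labeled data: features X and labels y
X, y = load_iris(return_X_y=True)

# hold out 20% of the data to evaluate the model on unseen examples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# the model learns the underlying patterns from the training data
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# accuracy on data the model has never seen
print(model.score(X_test, y_test))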

Supervised Learning Algorithms

  • k-Nearest Neighbors
  • Linear Regression
    • formula for linear regression: y = ax + b (see the code sketch after this list)
  • Logistic Regression
    • formula for logistic regression (the logit, or log-odds): y = ln(P/(1-P))
  • Support Vector Machines (SVM)
  • Decision Trees and Random Forests
  • Neural Networks
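To connect the linear regression formula y = ax + b with code, here is a quick sketch (the data is synthetic, chosen purely for illustration) showing scikit-learn recovering the slope a and the intercept b:

import numpy as np
from sklearn.linear_model import LinearRegression

# synthetic data generated from y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 * X.ravel() + 2 + rng.normal(0, 0.5, size=50)

reg = LinearRegression().fit(X, y)
print('a (slope):', reg.coef_[0])        # close to 3
print('b (intercept):', reg.intercept_)  # close to 2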

Advantages of Supervised Learning

  • Supervised learning is easy to understand.
  • The number of classes or parameters is known before the model is deployed.

Challenges of Supervised Learning

  • It requires some amount of expertise to structure accurately.
  • Training a proper model can be very time intensive.
  • Human errors in the dataset can lead to poorly performing algorithms.
  • It cannot cluster or classify data on its own.

Supervised Learning Models Can Be Used in:

  • Image and object recognition: Supervised learning algorithms can be used to identify objects in videos or images.
  • Predictive analytics: It provides deep insights into various business data points, helping companies make decisions more easily and accurately.
  • Customer sentiment analysis: It makes it easy to extract and classify important pieces of information, such as emotion, intent and context, from large volumes of data.
  • Spam detection: Classification algorithms are used to recognize patterns or anomalies in a dataset.

A Gentle Introduction to Batch Learning Process

Introduction

Machine learning training strategies fall into two main categories: batch learning and online learning. In batch learning, models learn offline, while in online learning data flows into the learning algorithm as a stream. In this article, you will learn:

  • A gentle introduction to batch learning.
  • Problems with batch learning.
  • How the online learning method solves batch learning problems.

So let’s begin…


What is Batch Learning?

Data preprocessing is an important step in machine learning projects. It includes activities such as data cleaning, data reduction, splitting the dataset (into training and testing sets) and data normalization. Training an accurate model requires a large amount of data, and in the batch learning process we use all the data we have for training. The training process therefore takes time and requires huge computational power.


What is happening under the hood?

After the model is fully trained during development, it is deployed to production. Once deployed, it uses only the data it was trained on; we cannot feed it new data directly and let it learn on the fly.

If we want to use new data, we have to start from scratch: take the machine learning model down, combine the new dataset with the old data, and train it again. Once the model is fully trained on the combined dataset, we redeploy it to production.

This is not a complex process, and in most cases it will work without any major issues.

But if we need to retrain the machine learning model every 24 hours or every week, training it from the beginning becomes very time consuming and expensive. Training a model on the new and old datasets together requires not only time and computational power but also large disk space and disk management, which again costs money.

This is fine for small projects, but it gets tough in real time, where data is coming from various endpoints such as IoT devices, computers and servers. The table below gives a rough sense of how storage demands grow with the dataset.


Training # | Dataset size (rows) | Disk space (TB)
1          | 1,000,000           | 100
2          | 2,000,000           | 200
3          | 3,000,000           | 300

Disadvantages of batch learning

The main disadvantages of batch learning are:

  • The model does not learn in production; we need to retrain it every time new data arrives.
  • Disk management is costly: as the dataset grows, it requires more disk space.
  • Training a model on a large dataset costs time and computational resources.

Online learning

To solve the issues we face with batch learning, we use a method called online learning. In online learning, the model keeps learning from new data while it is in production. Small batches of data, known as mini-batches, are used to train the model. We will look more closely at online learning in another article.
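As a taste of what online learning looks like in practice, here is a minimal sketch (my own illustration, with made-up streaming data) using scikit-learn's SGDClassifier, whose partial_fit method updates the model one mini-batch at a time:

import numpy as np
from sklearn.linear_model import SGDClassifier

# a linear classifier trained by stochastic gradient descent
model = SGDClassifier()
classes = np.array([0, 1])  # all classes must be declared up front

# pretend each iteration is a fresh mini-batch arriving in production
rng = np.random.default_rng(0)
for _ in range(10):
    X_batch = rng.normal(size=(32, 4))               # 32 samples, 4 features
    y_batch = (X_batch.sum(axis=1) > 0).astype(int)  # toy labels
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(rng.normal(size=(3, 4))))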


Conclusion

In this article we looked at the batch learning strategy and how it works. We’ve highlighted the disadvantages of batch learning and how online learning is used to overcome them. I hope this article helped you understand something about batch learning.

Learn Everything About Feature Scaling


What is Feature Scaling?


Feature scaling is a technique used when we create a machine learning model. It lets you normalize the range of the independent variables, or features, of a dataset, and it is also known as data normalization. Normalization matters during the data preprocessing phase because machine learning algorithms do not perform well when data attributes have very different scales.

Let’s scratch the surface…

Why Feature Scaling is Important?


The importance of feature scaling can be illustrated by the following simple example.

Suppose a dataset has five features, f1 through f5, each recorded with a different magnitude and unit.


Feature   | f1  | f2  | f3 | f4 | f5
Magnitude | 300 | 400 | 15 | 20 | 550
Unit      | Kg  | Kg  | cm | cm | g

Remember that every feature has two components:


  • Magnitude  (Example: 300)
  • Unit (Example: Kg)

Always keep in mind: many ML algorithms, such as k-Nearest Neighbors, work with distance measures like Euclidean or Manhattan distance.


Feature   | f1  | f2  | f3 | f4 | f5  | (f2 - f1)       | (f4 - f3)
Magnitude | 300 | 400 | 15 | 20 | 550 | 400 - 300 = 100 | 20 - 15 = 5
Unit      | Kg  | Kg  | cm | cm | g   | Kg              | cm

Coming back to this example: when we try to find the distance between different features, the gap between them varies considerably. Some attributes have a large gap between them while others are very close to each other. See the table above.

You may also have noticed that the unit of f5 is grams (g) while f1 and f2 are in kilograms (Kg). In this case the model may treat the value of f5 as greater than f1 and f2, but that is not really the case. For these reasons, the model may give wrong predictions.

Therefore we need to bring all the attributes (f1, f2, f3, …) onto the same scale with respect to their units. In short, we need to convert all the data into the same range (usually 0-1) so that no particular feature dominates, or is dominated by, another. (As a bonus, convergence during training also becomes much faster and more efficient.)
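A quick numeric illustration of the problem (my own, with made-up values in the spirit of the table above): before scaling, the Euclidean distance is driven almost entirely by the large-magnitude Kg feature; after min-max scaling, both features contribute comparably.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# three samples of (weight in Kg, height in cm); the values are made up
X = np.array([[300.0, 15.0],
              [400.0, 20.0],
              [310.0, 90.0]])

# unscaled distances from the first sample
print(np.linalg.norm(X[0] - X[1]))  # ~100.1, dominated by the Kg gap
print(np.linalg.norm(X[0] - X[2]))  # ~75.7, dominated by the cm gap

# after scaling to [0, 1], both features are weighted comparably
Xs = MinMaxScaler().fit_transform(X)
print(np.linalg.norm(Xs[0] - Xs[1]))
print(np.linalg.norm(Xs[0] - Xs[2]))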

There are two common methods used to bring all attributes onto the same scale.


Min-max Scaling


In min-max scaling, values are rescaled to a range between 0 and 1. To find the new value, we subtract the minimum and divide by the maximum minus the minimum. Scikit-Learn provides MinMaxScaler for this calculation.



    \[X_{new} = \frac{X_i - \min(X)}{\max(X) - \min(X)}\]

Standardization

Standardization is much less affected by outliers. First we subtract the mean, then divide by the standard deviation, so that the resulting distribution has unit variance. Scikit-Learn provides a transformer called StandardScaler for this calculation.



    \[X_{new} = \frac{X_i - \mathrm{mean}(X)}{\mathrm{std}(X)}\]
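Both formulas are easy to implement directly in NumPy. This short sketch (mine, using a toy feature) can be used to sanity-check the Scikit-Learn transformers in the example below:

import numpy as np

X = np.array([44.0, 27.0, 30.0, 38.0, 40.0])  # toy 1-D feature

# min-max scaling: (X - min) / (max - min) maps the values into [0, 1]
x_minmax = (X - X.min()) / (X.max() - X.min())

# standardization: (X - mean) / std gives zero mean and unit variance
x_standardized = (X - X.mean()) / X.std()

print(x_minmax)
print(x_standardized)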


Here I show an example of feature scaling using min-max scaling and standardization. I’m using Google Colab, but you can use any notebook/IDE such as Jupyter Notebook or PyCharm.


Go to the link and download Data_for_Feature_Scaling.csv


Upload the csv to your Google Drive.

Mount the drive in the working notebook.

You may need an authorization code from Google for this. Then run the code.


# feature scaling sample code
# import recommended libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing
# mount drive
from google.colab import drive
drive.mount('/content/drive')
# import the dataset
# (this path assumes the csv was uploaded to a 'feature_scaling' folder in My Drive)
data_set = pd.read_csv('/content/drive/MyDrive/feature_scaling/Data_for_Feature_Scaling.csv')
# check the data 
data_set.head()

Output
   Country  Age  Salary  Purchased
0   France   44   72000          0
1    Spain   27   48000          1
2  Germany   30   23000          0
3    Spain   38   51000          0
4  Germany   40    1000          1

x = data_set.iloc[:, 1:3].values
print('Original data values: \n', x)

Output
Original data values: 
 [[   44 72000]
 [   27 48000]
 [   30 23000]
 [   38 51000]
 [   40  1000]
 [   35 49000]
 [   78 23000]
 [   48 89400]
 [   50 78000]
 [   37  9000]]

from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
# Scaled feature 
x_after_min_max_scaler = min_max_scaler.fit_transform(x)
print('\n After min max scaling\n', x_after_min_max_scaler)

Output
After min max scaling
 [[0.33333333 0.80316742]
 [0.         0.53167421]
 [0.05882353 0.24886878]
 [0.21568627 0.56561086]
 [0.25490196 0.        ]
 [0.15686275 0.54298643]
 [1.         0.24886878]
 [0.41176471 1.        ]
 [0.45098039 0.87104072]
 [0.19607843 0.09049774]]

# Now use Standardisation method
Standardisation = preprocessing.StandardScaler()
x_after_Standardisation = Standardisation.fit_transform(x)
print('\n After Standardisation: \n', x_after_Standardisation)

Output
After Standardisation: 
 [[ 0.09536935  0.97512896]
 [-1.15176827  0.12903008]
 [-0.93168516 -0.75232292]
 [-0.34479687  0.23479244]
 [-0.1980748  -1.52791356]
 [-0.56487998  0.1642842 ]
 [ 2.58964459 -0.75232292]
 [ 0.38881349  1.58855065]
 [ 0.53553557  1.18665368]
 [-0.41815791 -1.2458806 ]]
