## What is Deep Learning in a nutshell?

Deep Learning is a hot topic these days and it draws a lot of attention from people around the globe. This technology is applicable to various fields, such as image recognition and classification, speech recognition and generation, self-driving cars, etc. There are a number of definitions of what Deep Learning actually is. I find this definition of Deep Learning by Lex Fridman from MIT, as he puts it in his latest arXiv paper on the subject of self-driving cars, quite simple:

Deep Learning can be defined as a branch of machine learning that seeks to form hierarchies of data representation with minimum input from a human being on the actual composition of the hierarchy.

If you are interested in getting to know what Deep Learning is and how it can be applied in practice, then the best way is to try to apply it yourself. Don't worry, there is no need to enroll in a PhD program in machine learning anymore. The state of Deep Learning technology is such that, with a dozen lines of code leveraging existing machine and deep learning libraries along with pre-trained models, it is possible to implement exciting applications of Deep Learning, such as image classification, image caption generation and more.

## All you need is a practical end to end working example

To jump-start into Deep Learning (DL) right away, I propose you have a look at the Machine Learning Mastery site, and specifically at the latest book there, which is related to DL and is called

This book is composed of a number of self-contained tutorials that apply DL techniques to natural language processing tasks, such as sentiment analysis, image caption generation and language translation. What is nice about it is that it shows you how to apply these techniques end to end, from installing all required machine learning libraries to implementing a DL pipeline from start to finish. It comes with all the code samples mentioned in the book working and doing the job. You can take them as a starting point and expand on them with your creativity.

Although the tutorials are quite independent, they are arranged so that the complexity of the applications grows from simple to more advanced.

## The book engages you to try extensions and enjoy coding in Python

The book uses Python and its rich ecosystem of machine and deep learning libraries, such as Keras, to make your life easier and more enjoyable. What sets this book apart from others is that each chapter provides references to all papers and books relevant to that chapter, so you don't waste time looking them up yourself. In addition, and this is the best part in my opinion, each chapter suggests a number of extensions to think about and implement for the application described, such as playing with different model architectures, tuning hyper-parameters, etc.

## So why are you still reading this post?

Try this book by executing every example in it, play with the examples by expanding them, and I am sure you'll get a feeling of what this Deep Learning is and how it produces quite fascinating outcomes when the model you trained predicts something like this:

This is what a Deep Learning network trained to translate from German to English thinks about Canadians.

# Capsules are CapsNet

## A new type of neural network

Capsule networks, a brand new type of artificial neural network that is superior to CNNs, are here to stay. Prepare for at least one detailed post about them in the near future.

## Where do I find a working implementation?

Please refer to this implementation of CapsNet in Keras. The repository provides links to almost all known current implementations of the freshly brewed capsule networks.

## The bright future

My intuition indicates that capsules will substitute CNNs in the near future due to their invariance to image position transformations.

## Resources

### Papers

• Dynamic Routing Between Capsules
• Matrix capsules with EM routing

### Implementations

• CapsNet-Keras
• CapsuleNet on MNIST

### Blogs

• Capsule Networks Explained
• What is a CapsNet or Capsule Network?
• Capsule Networks: An Improvement to Convolutional Networks (Siraj Raval)

### News

• Google’s AI Wizard Unveils a New Twist on Neural Networks

# Deep Learning virus. Are you infected?

## Taken by surprise

It’s hardly possible to find a single person who hasn’t heard about the Deep Learning virus epidemic. The size of the affected population is quite significant, and the infection is spreading faster than was ever imagined. Who would have thought that such an esoteric virus could spread so rapidly? The main question is how the authorities missed this case completely, until the point when very little can be done to fight this strong and capable, not to say intelligent, adversary.

## What went wrong?

The virus origin dates back to the nineteen sixties and seventies, when it was reported that a couple of scientists were affected by the Deep Learning virus, which then had no such name and was known as Perceptron. However, it was thought that timely treatment with the newly discovered XOR antibiotic cured it completely, though sporadic eruptions were also reported in the mid-eighties.

Things started to change suddenly in 2012. Although a few years before this there were a number of cases when people from the speech recognition community were affected by the Deep Learning virus, what happened in 2012 was more significant, since for the first time it was reported that vision, and more precisely object classification functionality, was strongly affected by it.

Today, we are witnessing a new wave of this infection, and it’s unsettling to see that this virus has grown to become such a beast. We all know now that almost all human senses are affected by it, be it vision, speech generation and recognition, hearing, or cognitive functions such as primitive sentiment, you name it. It is unclear whether the senses of taste and smell are in danger, but we cannot be overoptimistic in this regard.

## Virus characteristics

At first, it was thought that the virus only targets certain predisposed members of the population, such as scientists and engineers like Geoffrey Hinton, Yann LeCun, Yoshua Bengio and others. It turned out that we were completely wrong in this assumption, and the virus is much smarter and more flexible than we thought possible. Now large fractions of the population, be they doctors, artists, entrepreneurs such as Elon Musk, or even renowned physicists such as Max Tegmark, are deeply affected by this Deep Learning epidemic.

## It is mutating

Throughout the years, researchers were able to uncover a number of mutations of the Deep Learning virus, and we now know about the Auto Encoder, Convolutional and Recurrent species of it. Each day the most authoritative remedy journal, Arxiv, reports newer cases of mutations, which gives us little hope that treatment will keep pace with the virus evolution. There are rumors that new and unseen kinds of mutations, such as Deep Reinforcement Learning, are even more dangerous, not to mention generative adversarial networks and who knows what else to come.

On a side note, it is at least a little bit reassuring to know that one of the mutations, called Theano, was eradicated by MILA, and we hope that others will follow too.

## Transmission

Apparently, the virus is transmitted via digital means: Internet publications, open source such as MXNet, and usage of large corporations’ products such as Keras, TensorFlow and PyTorch. It is most probable that participation in conferences such as NIPS and others can put you in immediate danger of being affected. So ask yourself before attending them whether the risk is worth it.

## A new hope

Even though the virus is strong and unrelenting, we place our hopes in the development of new antibiotics, such as Numenta’s HTM, or neuromorphic drugs, such as Neurogrid.

So be cautious and take all measures to fight the virus, and hope that human intelligence will beat this sneaky, powerful, smart and flexible entity that somehow learns to outsmart us each time we think we’ve found a cure.

Take care.

# A Digest of Deep Learning Pearls

## All you need is time and GPU

Try to allocate time for these thought-provoking Deep Learning papers. Some of them come with try-it-yourself implementations on GitHub.

### 1. Try it yourself at home or anywhere at all (with GPU)

Transformer: more than meets the eye!

• A novel approach to language understanding from Google Brain (via David Ha). It is a very interesting solution to an old linguistic/syntactic challenge (anaphora) with Deep Learning. See a more detailed explanation of anaphora resolution.
• Based on the “Attention Is All You Need” paper.

### 2. Learning To Remember Rare Events

An interesting approach that introduces a memory module into various types of Deep Learning architectures to provide them with life-long learning.

### 3. One Model To Learn Them All

A unified Deep Learning model that is capable of being applied to inputs from various modalities. It is one step closer toward general DL architectures.

### 4. Meet Fashion-MNIST

Finally, it is time to ditch MNIST in favor of Fashion-MNIST, which is better in a number of aspects. Which ones? Find out yourself.

### Note

If you haven’t noticed, the one thing in common to all of these items except for one is Łukasz Kaiser, a researcher from Google Brain.

# NLP is Natural Language Processing

## Get ready for a real NLP

I am back to blogging and motivated to publish a number of posts (or at least one) on the subject of Natural Language Processing. Upcoming posts will also contain information on recurrent neural networks such as LSTM. So stay tuned.

## For now, check this out

If you are into Natural Language Processing (NLP) then you may find links below useful.

### Papers

1. Attention Is All You Need, a paper on arXiv.

### Posts

1. Memory, attention, sequences, a post by Eugenio Culurciello

### Tutorials

1. Neural Machine Translation and Sequence-to-sequence Models: A Tutorial by Graham Neubig
2. A Primer on Neural Network Models for Natural Language Processing by Yoav Goldberg

# No winter but AI global warming

## Name things for what they are

Is the Deep Learning rage simply a bubble, or is it here to stay for a long time this time? As researchers have proposed, first let’s change the Deep Learning title into the more humble and exact Multilayered Network for Functions Approximation. Now it sounds more practical and there is no sign of hype. Then check which fields those networks were applied to, and see whether the list is diverse and whether the algorithms used are universally applicable. Check the number of articles published that have real essence within them. If you’ve got ‘yes’ as an answer to those questions, then it feels like those approaches are finally really useful.
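To make the humble title concrete, here is a minimal NumPy sketch (my own toy code, not from any paper or library) of a multilayered network approximating a function, sin(x), with nothing more than full-batch gradient descent:

```python
import numpy as np

rng = np.random.RandomState(0)

# approximate y = sin(x) on [-3, 3] with a one-hidden-layer network
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)

W1 = rng.randn(1, 16) * 0.5   # input -> hidden weights, 16 tanh units
b1 = np.zeros(16)
W2 = rng.randn(16, 1) * 0.5   # hidden -> output weights, linear output
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = float(np.mean(err ** 2))
    # backward pass: the chain rule written out by hand
    d_pred = 2 * err / len(X)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("final mean squared error:", round(loss, 4))
```

Predicting zero everywhere would leave a mean squared error of about 0.5, so driving the loss well below that is exactly the “functions approximation” the humble title promises, no hype required.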

## What’s next?

This post will be updated in the near future. Meanwhile, check the posts by Carlos E. Perez from IntuitionMachine.com, who writes extensively on the subject, and do not forget to check his ‘The Deep Learning Playbook’.

# Wind of Deep Change

## Welcome to the world of Machine and Deep Learning

Following my upcoming transition to another continent, I’ll be able to focus more on Machine and Deep Learning as a technical editor at the renowned Machine Learning Mastery site authored by Dr. Jason Brownlee. It means you can expect more posts on machine learning to come, especially on LSTM and recurrent neural networks.

## What is it like to be a technical editor?

Throughout my career I’ve been a SW test engineer and a SW developer, but in parallel I’ve been busy helping to edit books, such as Jumping Into C++ by Alex Allain, and other projects, such as the Kindle Optimizer Chrome extension. So becoming a technical editor in the machine learning field is just a logical step to make. Actually, a technical editor is a bit like a QA engineer and a developer at once: you have to understand how the Python code works to make that LSTM able to predict time series values, and to be a test engineer to make the content and the code as good as they can be. In addition, there is a kind of freedom that a regular tester or developer does not possess, which is to suggest changes to the author that may be meaningful and influential. Most importantly, a technical editor deals with the raw content of a future article, blog post or book chapter that millions of people may read, and that gives you an understanding of the responsibility you bear on your shoulders. The corrections you make may influence readers and make their experience pleasant or not.

## Why machine or deep learning after all?

Technical editing, like testing or programming, is a universal occupation, since it can be successfully applied to various topics in those fields, but machine learning has the proper ingredients of math, programming and future potential that make it very attractive.

## Stay tuned as John Sonmez says

So if you follow this blog, stay around to be up to date with the current progress in the Deep Learning field, and if you care, check out this public Deep Learning for All group on Facebook, where I share the latest and, in my view, greatest news coming from the fruitful Deep Learning field.

# OpenCV installation on Linux and Windows

## How hard is to install OpenCV?

This was the question I asked myself lately when I needed to use OpenCV for a project. I thought it must be simpler on Ubuntu than on Windows, but I was wrong. The goal of this tutorial is to provide working guidelines for OpenCV installation. I’ll cover installation instructions for OpenCV with the following configurations:

### Windows 7/ 10

• OpenCV 3.x.x with Python 2.7
• OpenCV 3.x.x with Python 3.5

### Ubuntu 16.04

• OpenCV 3.x.x with Python 3.5

## Installation on Windows 7/ 10

### OpenCV 3.x.x with Python 2.7 on Windows 32 bit

To have all the Python-related dependencies in place it is useful to install Anaconda.

• Install Anaconda 2 for Python 2.7 (32 or 64 bit)
• Install Anaconda 3 for Python 3.5 (32 or 64 bit)
• For the sake of this tutorial I used OpenCV version 3.2.0: opencv-3.2.0-vc14.exe
• After you’ve installed the downloaded OpenCV version, you need to move the cv2.pyd file to your Python installation’s library folder.

Look for the cv2.pyd at the opencv installation folder

C:\Users\You\Downloads\opencv\build\python\2.7\x64\cv2.pyd


And move the cv2.pyd file to Python 2.7 installation folder

C:\Users\You\Anaconda2\Lib\site-packages\cv2.pyd

### Example application

• To test that OpenCV installed correctly, open a command line and run python. Then type the commands below to check the current OpenCV version.
C:\Users\You>python
Python 2.7.13 |Anaconda 4.3.0 (32-bit)| (default, Dec 19 2016, 13:36:02) [MSC v.1500 32 bit (Intel)] on win32
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import cv2
>>> print(cv2.__version__)
3.2.0
>>>

### OpenCV 3.x.x with Python 3.5 using Wheel on Windows 7 64 bit

• OpenCV does not ship a library with Python 3.5 support out of the box, which is why we use the unofficial Windows binaries for Python extension packages from here.

Note: I downloaded this one because I have Windows 7 64 bit

• opencv_python-3.2.0-cp35-cp35m-win_amd64.whl

Pay attention that 3.2.0 is the OpenCV version, i.e. opencv-3.2.0, and cp35 is the Python version, i.e. Python 3.5.

C:\>cd C:\Users\You\Downloads
• Install wheel with pip install command
C:\Users\You\Downloads>pip install opencv_python-3.2.0-cp35-cp35m-win_amd64.whl
Installing collected packages: opencv-python
Successfully installed opencv-python-3.2.0
You are using pip version 8.1.2, however version 9.0.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.


• Verify that you see the line ‘Successfully installed opencv-python-3.2.0’

### Example application

• To test that OpenCV installed correctly, open a command line and run python. Then type the commands below to check the current OpenCV version.
C:\Users\You\Downloads>python
Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)] on win32
>>> import cv2
>>> print(cv2.__version__)
3.2.0
>>>


• It is possible to configure virtual environments for Python 2.7 and 3.5 to run them separately on the same machine. To do this, you may consult Adrian Rosebrock’s tutorial on OpenCV installation on Ubuntu.
• The following tutorials helped me in composing this part

## Installation on Ubuntu 16.04

To install OpenCV on Ubuntu, follow the steps in the guides below. The first one is the best and worked for me.

• If you have Anaconda installed, simply run this command for a basic OpenCV 3 installation.
conda install -c menpo opencv3
• If Anaconda is not installed, then run this one to install the OpenCV Python bindings instead.
sudo apt-get install python-opencv

## What’s next?

Now that you have a working OpenCV, you may watch this nice tutorial by Siraj Raval, which is funny and hands-on with OpenCV. It will teach you how to do Object Detection with OpenCV. It will also teach you that there is a need to run the code at least once before filming a YouTube video.

In addition, if you are interested in object detection with OpenCV, then definitely look at Satya Mallick’s tutorial on the subject.

# Kids gonna love LSTM deep learning network

## Teaser

Prepare for an upcoming Android game from neaapps applications development. The game will be based on an LSTM deep learning network that predicts the next character from character strings of various lengths.

For now you can check out the already existing apps brought to you by neaapps.

## Why Long Short Term Memory deep learning network?

It turns out that LSTM is very good at learning and predicting sequences of patterns. That is why it is natural to use it for creating engaging games for little and not-so-little kids. For more information on what LSTM is and how to use it, read Chris Olah’s post.
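For a peek under the hood, here is a minimal sketch of a single LSTM cell’s forward step in plain NumPy. The weight layout and names are my own simplification, not the game’s code or the exact Keras internals, but the gating is the real mechanism that lets an LSTM remember and forget parts of a sequence:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [x, h_prev] to the four gate pre-activations."""
    z = np.concatenate([x, h_prev]) @ W + b
    H = h_prev.size
    f = sigmoid(z[0:H])          # forget gate: what to drop from the cell state
    i = sigmoid(z[H:2 * H])      # input gate: what new information to store
    o = sigmoid(z[2 * H:3 * H])  # output gate: what to expose as the hidden state
    g = np.tanh(z[3 * H:4 * H])  # candidate cell values
    c = f * c_prev + i * g       # updated cell state (the long-term memory)
    h = o * np.tanh(c)           # updated hidden state (the short-term output)
    return h, c

rng = np.random.RandomState(1)
H, D = 8, 4                      # hidden size, input size
W = rng.randn(D + H, 4 * H) * 0.1
b = np.zeros(4 * H)

h = np.zeros(H)
c = np.zeros(H)
for t in range(5):               # feed a short random sequence
    x = rng.randn(D)
    h, c = lstm_step(x, h, c, W, b)

print(h.shape, c.shape)
```

The forget and input gates decide what survives in the cell state c from step to step, which is exactly what makes LSTM good at sequence prediction tasks like guessing the next character.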

## Stay tuned

It will be available soon in the nearest Google Play Store.

## Resources

The inspiration for this application came from a chapter on LSTM from Jason Brownlee’s Deep Learning With Python book.

# Multilayer Perceptron Predictions Exposed

## Learning Deep Learning

Currently, I am learning Deep Learning fundamentals with the help of Jason Brownlee’s Deep Learning with Python book. It provides good practical coverage of building various types of deep learning networks, such as CNNs, RNNs, etc. Running each model in the book is accompanied by various metrics, for instance the accuracy of the model. But accuracy alone does not provide a real feeling of image recognition. To improve upon this, I updated one of the code samples that came with the book with my own implementation that runs the model on an image file to classify it. The detailed steps follow.

## What Will We Do?

This tutorial explains how to build a simple working multilayer perceptron network consisting of one hidden layer. In addition to a working model trained on the handwritten digits of the MNIST dataset, we’ll see how an image of a digit taken from this dataset can be classified using this network.

• A multilayer perceptron network is composed of layers of fully connected artificial neurons. In our case it is built of three layers: an input layer, a hidden layer and an output layer.
• The MNIST database (an abbreviation of the Mixed National Institute of Standards and Technology database) is a dataset of handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples. It is used for supervised learning of artificial neural networks to classify handwritten digits.
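Since the network predicts one of ten digit classes, the integer labels are one-hot encoded before training. The code later in this post does that with Keras’ np_utils.to_categorical; here is a tiny NumPy equivalent (the helper name is mine, for illustration only) showing what the encoding looks like:

```python
import numpy as np

def one_hot(labels, num_classes):
    # one row per label, with a single 1.0 in the label's column
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

labels = np.array([3, 0, 9])
print(one_hot(labels, 10))
```

The row for label 3 has its 1.0 in column 3, and so on, which is exactly the target format the softmax output layer is trained against.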

## How Do We Do It?

To accomplish this task, there are certain prerequisites to fulfill. A number of the steps below were explained in a post on Keras, Theano and TensorFlow (KTT).

### Prerequisites

1. Supported operating systems:
   1. Ubuntu 16.04 64 bit
   2. Windows 10 or 7 64 bit
2. Python 2 or Python 3 installed with Anaconda 2 or 3 respectively. See KTT for more details.
3. Works with the following deep learning libraries (see KTT for more details):
   1. TensorFlow and Theano
   2. Keras
4. May be run within a Jupyter Notebook. See installation steps here.

## Building a Network

The multilayer perceptron in this particular case is built of three layers.

• Input layer with 784 inputs, calculated from the 28 x 28 pixel image, which is 784 pixels.
• Hidden middle layer with 784 neurons and the rectifier activation function.
• Output layer with 10 outputs, giving a probability of prediction by the softmax activation function for each digit from ‘0’ to ‘9’.
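A quick sanity check of this architecture is to count its trainable parameters, weights plus biases, for each fully connected layer (the helper below is my own illustration, not part of the book’s code):

```python
def dense_params(n_inputs, n_neurons):
    # each neuron has one weight per input plus one bias
    return n_inputs * n_neurons + n_neurons

hidden = dense_params(784, 784)   # input -> hidden layer
output = dense_params(784, 10)    # hidden -> output layer
print(hidden, output, hidden + output)  # 615440 7850 623290
```

So the whole network has about 623 thousand trainable parameters, almost all of them in the hidden layer.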

### The Code Overview

The code is structured as follows.

1. Import the Python libraries needed for working with Deep Learning networks.
2. Fetch the image file from disk in accordance with the host Operating System.
3. Load an image to be classified.
4. Load the MNIST dataset and preprocess image pixels into arrays.
5. Define helper functions.
6. Prepare the multilayer perceptron model and compile it.
7. Check if a trained model exists.
8. If not, train a new model, save it and predict the image.
9. Else, load the saved model and predict the image.

## The code

The code below is brought to you in full and can be found in the GitHub repository, along with the saved model and a Jupyter Notebook that makes it possible to run this code module by module in a really interactive way.

• Import proper Python libraries for working with Deep Learning networks
# Baseline MLP for MNIST dataset
import numpy
import skimage.io as io
import os
import platform
import getpass
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
from keras.models import model_from_json
from os.path import isfile, join

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
• Fetch the image file from disk in accordance with the host Operating System. In our case it is a 28 x 28 pixel image of the digit ‘3’ from the MNIST dataset
• Load an image to be classified
# load data
os_name = platform.system()  # renamed to avoid shadowing the platform module
currentUser = getpass.getuser()
currentDirectory = os.getcwd()

if os_name == 'Windows':
    # path_image = 'C:\\Users\\' + currentUser
    path_image = currentDirectory
else:
    # path_image = '/user/' + currentUser
    path_image = currentDirectory
fn = 'image.png'
img = io.imread(os.path.join(path_image, fn))
• Load MNIST dataset and preprocess images pixels into arrays
# prepare arrays
X_t = []
y_t = []
X_t.append(img)
y_t.append(3)

X_t = numpy.asarray(X_t)
y_t = numpy.asarray(y_t)
y_t = np_utils.to_categorical(y_t, 10)

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
X_t = X_t.reshape(X_t.shape[0], num_pixels).astype('float32')

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
X_t /= 255

print('X_train shape:', X_train.shape)
print ('X_t shape:', X_t.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(X_t.shape[0], 'test images')

# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)

num_classes = y_test.shape[1]
print(y_test.shape[1], 'number of classes')
• Define helper functions
# define baseline model
def baseline_model():
    # create model: 784 relu hidden neurons, 10 softmax outputs
    model = Sequential()
    model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu'))
    model.add(Dense(num_classes, init='normal', activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def build_model(model):
    # Fit the model
    model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200, verbose=2)
    return model

def save_model(model):
    # serialize model to JSON
    model_json = model.to_json()
    with open("model.json", "w") as json_file:
        json_file.write(model_json)
    # serialize weights to HDF5
    model.save_weights("model.h5")
    print("Saved model to disk")

def load_model():
    # load json and create model
    json_file = open('model.json', 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    model = model_from_json(loaded_model_json)
    # load weights into new model
    model.load_weights("model.h5")
    print("Loaded model from disk")
    return model

def print_class(scores):
    for index, score in numpy.ndenumerate(scores):
        number = index[1]
        print (number, "-", score)
    for index, score in numpy.ndenumerate(scores):
        if (score > 0.5):
            number = index[1]
            print ("\nNumber is: %d, probability is: %f" % (number, score))
• Prepare multilayer perceptron model and compile it
model = baseline_model()
path = os.path.exists("model.json")
• Check if trained model exists
• If not, train a new model, save it and predict the image
if not path:
    model = build_model(model)
    save_model(model)
    # Final evaluation of the model
    scores = model.predict(X_t)
    print("Probabilities for each class\n")
    print_class(scores)
• Else load the saved model and predict the image
else:
    model = load_model()
    # Final evaluation of the model
    scores = model.predict(X_t)
    print("Probabilities for each class\n")
    print_class(scores)

### How to Run

If you downloaded/cloned the project and have all the prerequisites set up, then to run it simply type this command in a terminal.

python mnist_mlp_baseline.py

## The Prediction Exposed

The predicted output for the image of the digit ‘3’ looks like this.

Probabilities for each class

(0, '-', 3.4988901e-07)
(1, '-', 3.7538914e-08)
(2, '-', 0.00072528532)
(3, '-', 0.99788445)
(4, '-', 1.7879113e-08)
(5, '-', 1.3890726e-06)
(6, '-', 2.5650074e-10)
(7, '-', 2.233218e-05)
(8, '-', 0.0012537371)
(9, '-', 0.00011237688)

Number is: 3, probability is: 0.997884

## Resources

• If you want to see a 3-D visualization of a Multilayer Perceptron Network with two hidden layers in action, then check this one
• If you want to see a nice visualization of a shallow/deep neural network and play with various parameters yourself in real time, then A Neural Network Playground is for you!