Multilayer Perceptron Predictions Exposed

Learning Deep Learning

Currently, I am learning Deep Learning fundamentals with the help of Jason Brownlee’s Deep Learning with Python book. It provides good practical coverage of building various types of deep learning networks, such as CNNs and RNNs. Each model in the book is accompanied by various metrics, for instance the model’s accuracy. But accuracy alone does not give a real feel for image recognition. To improve on this, I extended one of the code samples that came with the book with my own implementation that runs the model on an image file to classify it. The detailed steps follow.

What Will We Do?

This tutorial explains how to build a simple working multilayer perceptron network consisting of one hidden layer. In addition to a working model trained on the handwritten digits of the MNIST data-set, we’ll see how an image of a digit taken from this data-set can be classified using this network.

  • A multilayer perceptron is composed of layers of fully connected artificial neurons. In our case it is built of three layers: an input layer, a hidden layer and an output layer.
  • The MNIST database (short for Mixed National Institute of Standards and Technology database) is a collection of handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples. It is used for supervised learning of artificial neural networks that classify handwritten digits (a quick way to verify those split sizes is sketched right after this list).
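As a quick check of those split sizes, here is a minimal sketch (assuming Keras is installed; the data-set is downloaded automatically on first use):

# Print the MNIST split sizes described above
from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)  # (60000, 28, 28) -- 60,000 training images
print(X_test.shape)   # (10000, 28, 28) -- 10,000 test images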

How Do We Do It?

To accomplish this task, certain prerequisites need to be fulfilled. A number of the steps below were explained in a post on Keras, Theano and TensorFlow (KTT).

Prerequisites

  1. Supported operating systems are 
    1. Ubuntu 16.04 64 bit
    2. Windows 10 or 7 64 bit
  2. Python 2 or Python 3 installed with Anaconda 2 or 3 respectively. See KTT for more details.
  3. Works with the following deep learning libraries. See KTT for more details.
    1. TensorFlow and Theano
    2. Keras
  4. May be run within Jupyter Notebook (see the launch command just below). See installation steps here.
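Assuming Jupyter is installed (it ships with Anaconda), the notebook server can be started from the project directory with a single command:

$ jupyter notebook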

Building a Network

The multilayer perceptron in this particular case is built of three layers.

  • Input layer with 784 inputs, one for each pixel of a flattened 28 x 28 pixel image (28 x 28 = 784 pixels).
  • Hidden middle layer with 784 neurons and a rectifier activation function.
  • Output layer with 10 outputs that give the probability predicted by a softmax activation function for each digit from ‘0’ to ‘9’ (a rough parameter count for this architecture follows below).
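Before building it, a quick back-of-the-envelope parameter count for this architecture (a sketch for illustration only, not part of the script below):

# Parameters of a fully connected layer: inputs * neurons + one bias per neuron
hidden = 784 * 784 + 784   # 615,440 weights and biases in the hidden layer
output = 784 * 10 + 10     # 7,850 in the output layer
print(hidden + output)     # 623,290 trainable parameters in total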

The overall structure of the network


The Code Overview

The code is structured as follows.

  1. Import the proper Python libraries for working with Deep Learning networks.
  2. Fetch the image file from disk in accordance with the host operating system.
  3. Load the image to be classified.
  4. Load the MNIST dataset and preprocess image pixels into arrays.
  5. Define helper functions.
  6. Prepare the multilayer perceptron model and compile it.
  7. Check if a trained model exists.
  8. If not, train a new model, save it and predict the image.
  9. Else load the current model and predict the image.

The code

The code below is given in full and can be found in the GitHub repository, along with the saved model and a Jupyter Notebook that makes it possible to run this code module by module in a really interactive way.

  • Import the proper Python libraries for working with Deep Learning networks
# Baseline MLP for MNIST dataset
import numpy
import skimage.io as io 
import os 
import platform
import getpass
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
from keras.models import model_from_json
from os.path import isfile, join

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
  • Fetch the image file from disk in accordance with the host operating system. In our case it is a 28 x 28 pixel image of the digit ‘3’ from the MNIST dataset.
  • Load the image to be classified.
# load data
current_platform = platform.system()  # avoid shadowing the platform module
currentUser = getpass.getuser()
currentDirectory = os.getcwd()

if current_platform == 'Windows':  # compare strings with ==, not 'is'
 #path_image = 'C:\\Users\\' + currentUser
 path_image = currentDirectory
else:
 #path_image = '/user/' + currentUser
 path_image = currentDirectory
fn = 'image.png'
img = io.imread(os.path.join(path_image, fn))
  • Load the MNIST dataset and preprocess image pixels into arrays
# prepare arrays
X_t = []
y_t = []
X_t.append(img)
y_t.append(3)

X_t = numpy.asarray(X_t)
y_t = numpy.asarray(y_t)
y_t = np_utils.to_categorical(y_t, 10)

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
X_t = X_t.reshape(X_t.shape[0], num_pixels).astype('float32')

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
X_t /= 255

print('X_train shape:', X_train.shape)
print ('X_t shape:', X_t.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(X_t.shape[0], 'test images')

# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)

num_classes = y_test.shape[1]
print(y_test.shape[1], 'number of classes')
  • Define helper functions 
# define baseline model
def baseline_model():
 # create model
 model = Sequential()
 model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu'))
 model.add(Dense(num_classes, init='normal', activation='softmax'))
 # Compile model
 model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
 return model
 
def build_model(model):
 # Fit the already compiled model on the MNIST training data
 model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200, verbose=2)
 return model

def save_model(model):
 # serialize model to JSON
 model_json = model.to_json()
 with open("model.json", "w") as json_file:
  json_file.write(model_json)
 # serialize weights to HDF5
 model.save_weights("model.h5")
 print("Saved model to disk")
 
def load_model():
 # load json and create model
 json_file = open('model.json', 'r')
 loaded_model_json = json_file.read()
 json_file.close()
 loaded_model = model_from_json(loaded_model_json)
 # load weights into new model
 loaded_model.load_weights("model.h5")
 if loaded_model:
  print("Loaded model")
 else:
  print("Model is not loaded correctly")
 return loaded_model

def print_class(scores):
 # print the probability for each digit class
 for index, score in numpy.ndenumerate(scores):
  number = index[1]
  print(number, "-", score)
 # report the class whose probability exceeds 0.5
 for index, score in numpy.ndenumerate(scores):
  if score > 0.5:
   number = index[1]
   print("\nNumber is: %d, probability is: %f" % (number, score))
  • Prepare the multilayer perceptron model and compile it
model = baseline_model()
model_exists = os.path.exists("model.json")
  • Check if a trained model exists
  • If not, train a new model, save it and predict the image
if not model_exists:
 model = build_model(model)
 save_model(model)
 # Predict the class of the loaded image
 scores = model.predict(X_t)
 print("Probabilities for each class\n")
 print_class(scores)
  • Else load the current model and predict the image
else:
 # load the previously trained model and predict the image
 loaded_model = load_model()
 if loaded_model is not None:
  loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
  scores = loaded_model.predict(X_t)
  print("Probabilities for each class\n")
  print_class(scores)

How to Run 

If you downloaded or cloned the project and have all the prerequisites set up, simply type this command in a terminal to run it.

python mnist_mlp_baseline.py

The Prediction Exposed

The predicted output for the image of the digit ‘3’ looks like this.

Probabilities for each class

(0, '-', 3.4988901e-07)
(1, '-', 3.7538914e-08)
(2, '-', 0.00072528532)
(3, '-', 0.99788445)
(4, '-', 1.7879113e-08)
(5, '-', 1.3890726e-06)
(6, '-', 2.5650074e-10)
(7, '-', 2.233218e-05)
(8, '-', 0.0012537371)
(9, '-', 0.00011237688)

Number is: 3, probability is: 0.997884
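Since the output layer uses softmax, the ten scores form a probability distribution; a one-line sanity check (not part of the original script) is:

print(scores.sum())  # should be ~1.0 for a softmax output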

Resources

  • If you want to see a 3-D visualization of a multilayer perceptron network with two hidden layers in action, check this one.
  • If you want to see a nice visualization of a shallow/deep neural network and play with various parameters yourself in real time, then A Neural Network Playground is for you!


It’s been a hard day’s night of Deep Learning


Turning the page

The new year is around the corner and so are thoughts about a quest into the hidden layers of Deep Learning. This year’s goal was to become a developer, and it was achieved on time, as planned. The main projects were in Android, and the end of the year was under the sign of Machine Learning and, more precisely speaking, Deep Learning.

Summary 

So what are the main points of an almost two-month headlong journey on the Deep Learning highway?

Deep Learning Book

  • As you may already know, the Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville has been published. This detailed and helpful book on the foundations of artificial neural networks is pretty expensive but can be accessed electronically in HTML format for free.

Jason Brownlee’s Machine Learning Mastery site comes in handy

Blog Posts Ignited by Deep Learning Sparks

In case you’ve missed the recent posts on Deep Learning at this blog:
  • The first post was about using the open source TensorFlow machine learning library. It explained how to install it on Linux and run an image-to-caption model.
  • The third one was about predicting possible future applications of Deep Learning. It appears it was really interesting for readers, since more than 200 people around the globe read it (about four times more than usual).
  • The most recent one was about installing Keras, Theano and TensorFlow (all open source tools) on Linux and Windows.

 

Here And There

Deep Goals 

I hope you’ll find at least some of the links helpful.

Best wishes for the upcoming year: set your goals for 2017, have a clear plan to get going, and jump Deeply into Learning.

Keras, Theano and TensorFlow on Windows and Linux


Tools for Deep Learning development

To start playing with Deep Learning, one has to pick the proper tools. Python ecosystem tools for Deep Learning such as Keras, Theano and TensorFlow are easy to install and make it easy to start development. Below follows a guide on how to install them on the Windows and Linux operating systems.

What are those Theano, TensorFlow and Keras all about?

A few words about those tools from official websites.

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.

TensorFlow™ is an open source software library for numerical computation using data flow graphs.

Keras is a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
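To get a feel for that high-level style, here is a minimal sketch (assuming the Keras 1.x API of the time and either backend; the data is random and purely illustrative):

# A tiny Keras model trained on random data, just to show the API shape
import numpy
from keras.models import Sequential
from keras.layers import Dense

X = numpy.random.rand(100, 8)                # made-up inputs
y = numpy.random.randint(2, size=(100, 1))   # made-up binary labels

model = Sequential()
model.add(Dense(4, input_dim=8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=5, batch_size=10, verbose=0)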

Windows Strikes Back or a Big Surprise

A few days ago, after upgrading from Ubuntu 15.10 to 16.04, I wanted to run some code examples in TensorFlow, but found out that TensorFlow was not working. So I switched to Windows thanks to a dual-boot installation and, to my amazement, found that Keras -> Theano and Keras -> TensorFlow can be installed and run there very easily, with some caveats. So let’s proceed to the installation steps.

Prerequisites for Windows 7 or 10

It is possible to install Theano and Keras on Windows with a Python 2 installation. However, if you want to be able to work with both Theano and TensorFlow, then you need to install Python 3.5. As of now, TensorFlow 0.12 is supported on 64 bit Windows with Python 3.5. The steps below aim at providing support for both Theano and TensorFlow.

To summarize, the prerequisites for TensorFlow on Windows 7/10 are

  1. 64 bit OS
  2. Python 3.5 (Anaconda 3): use Anaconda 3 version 4.2.0, which ships Python 3.5 and not 3.6.

   Note: I’ve found out that starting from Anaconda 3 version 4.3.0 the tutorial for Windows is broken due to the changes they introduced! That is why, for it to work, use the Anaconda version above (or see the workaround sketch just below).
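If you already have a newer Anaconda installed, one possible workaround (an assumption on my part, not tested here) is to create a dedicated Python 3.5 conda environment instead of downgrading Anaconda itself; all the conda and pip commands below would then run inside that environment.

C:\>conda create -n deeplearning python=3.5
C:\>activate deeplearning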

Anaconda Is Very Helpful

Anaconda is an open source packaging tool for Python and other languages. It is very helpful, easy to use and intuitive, with detailed tutorials. It will help us install Python and all the dependencies for Keras, Theano and TensorFlow with only a few directives. Anaconda is brought to you by Continuum Analytics.

So if you have a 64 bit Windows PC or VM, do the following steps.

  • After Anaconda is installed, open a terminal and install Theano.
  • When you are asked about installing dependencies, type ‘y’ for yes.
C:\>conda install theano
  • To enable the gcc compiler for Theano, install the following.
  • When you are asked about installing dependencies, type ‘y’ for yes.
C:\>conda install mingw libpython
  • That’s it, Theano is installed. To check which version is installed:
C:\>conda list theano
  • To install TensorFlow and Keras we’ll need to use the pip Python package manager, which is included in Anaconda:
C:\>pip install tensorflow
C:\>pip install keras
  • To figure out what the current backend is, type:
C:\>python -c "from keras import backend; print(backend._BACKEND)"
  • To change which backend Keras uses, edit the keras.json configuration file. It may be found at:
C:\Users\relevantUser\.keras\keras.json
  • Change the “backend” string to “theano” or “tensorflow” according to your needs.
{
"image_dim_ordering": "tf",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "theano"
}
  • To test that everything works, let’s run this example in the Python interpreter line by line:
C:\>python
>>> import theano
>>> from theano import tensor
>>> a = tensor.dscalar()
>>> b = tensor.dscalar()
>>> c = a + b
>>> f = theano.function([a,b],c)
>>> result = f(1.5, 2.5)
>>> print(result)
4.0
>>>
  • To see that Keras is really functioning, you may run code for a multilayer perceptron at GitHub.

Same Process on Linux (Ubuntu)

Installation of Keras, Theano and TensorFlow on Linux is almost the same as on Windows. Actually it is even easier, since TensorFlow works nicely with Python 2 on Ubuntu. That is why below I’ll provide installation steps for 64 bit Ubuntu 16.04 and Python 2.


  • After Anaconda is installed, open a terminal and install Theano.
$ conda install theano
  • That’s it, Theano is installed. To check which version is installed:
$ conda list theano
  • To install TensorFlow and Keras, run these commands.
  • If you are asked about installing dependencies, type ‘y’ for yes.
$ conda install tensorflow
$ conda install keras
  • To figure out what the current backend is, type:
$ python -c "from keras import backend; print(backend._BACKEND)"
  • To change which backend Keras uses, edit the keras.json configuration file. It may be found at:
~/.keras/keras.json
  • Change the “backend” string to “theano” or “tensorflow” according to your needs.
{
"image_dim_ordering": "tf",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "theano"
}
  • To test that everything works, let’s run this example in the Python interpreter line by line:
$ python
>>> import theano
>>> from theano import tensor
>>> a = tensor.dscalar()
>>> b = tensor.dscalar()
>>> c = a + b
>>> f = theano.function([a,b],c)
>>> result = f(1.5, 2.5)
>>> print(result)
4.0
>>>
  • To see that Keras is really functioning, you may run code for a multilayer perceptron at GitHub.

Official References

What’s next?

Deep Learning.
