Multilayer Perceptron Predictions Exposed

Learning Deep Learning

I am currently learning Deep Learning fundamentals with the help of Jason Brownlee's Deep Learning with Python book. It provides good practical coverage of building various types of deep learning networks, such as CNNs and RNNs. Running each model in the book produces various metrics, for instance the accuracy of the model. But accuracy alone does not give a real feel for image recognition. To improve on this, I extended one of the code samples that came with the book with my own implementation that runs the model on an image file and classifies it. The detailed steps follow.

What Will We Do?

This tutorial explains how to build a simple working multilayer perceptron network consisting of one hidden layer. In addition to a working model trained on the handwritten digits of the MNIST dataset, we'll see how an image of a digit taken from this dataset can be classified by this network.

  • A multilayer perceptron network is composed of layers of fully connected artificial neurons. In our case it is built of three layers: an input layer, a hidden layer and an output layer.
  • The MNIST database is an abbreviation of the Mixed National Institute of Standards and Technology database of handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples. It is used for supervised learning of artificial neural networks to classify handwritten digits.
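As a minimal sketch of the preprocessing the network will need, flattening a 28 x 28 image into a vector and normalizing its pixels can be done with numpy alone (the random array here merely stands in for a real MNIST digit):

```python
import numpy

# a fake 28 x 28 grayscale "digit" standing in for a real MNIST image
image = numpy.random.randint(0, 256, size=(28, 28)).astype('float32')

# flatten to a 784-element vector and scale pixel values from 0-255 to 0-1
vector = image.reshape(28 * 28) / 255

print(vector.shape)  # (784,)
```

This is exactly the shape the input layer of the network below expects.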

How Do We Do It?

To accomplish this task we need to fulfill certain prerequisites. A number of the steps below were explained in a post on Keras, Theano and TensorFlow (KTT).


  1. Supported operating systems are 
    1. Ubuntu 16.04 64 bit
    2. Windows 10 or 7 64 bit
  2. Python 2 or Python 3 installed with Anaconda 2 or 3 respectively. See KTT for more details.
  3. Works with the following deep learning libraries. See KTT for more details.
    1. TensorFlow and Theano
    2. Keras
  4. May be run within Jupyter Notebook. See installation steps here.

Building a Network

The multilayer perceptron in this particular case is built of three layers.

  • Input layer with 784 inputs, calculated from a 28 x 28 pixel image, which is 784 pixels.
  • Hidden middle layer with 784 neurons and a rectifier (ReLU) activation function.
  • Output layer with 10 outputs that give the probability predicted by a softmax activation function for each digit from ‘0’ to ‘9’.

The overall structure of the network


The Code Overview

The code is structured as follows.

  1. Import proper Python libraries for working with Deep Learning networks.
  2. Fetch image file from a disk in accordance with host Operating System
  3. Load an image to be classified 
  4. Load MNIST dataset and preprocess images pixels into arrays
  5. Define helper functions 
  6. Prepare multilayer perceptron model and compile it
  7. Check if trained model exists
  8. If not, train a new model, save it and predict the image 
  9. Else, load the current model and predict the image 

The code

The code below is given in full and can be found in a GitHub repository, along with the saved model and a Jupyter Notebook that makes it possible to run this code module by module in a really interactive way.

  • Import proper Python libraries for working with Deep Learning networks
# Baseline MLP for MNIST dataset
import numpy
import os
import platform
import getpass
from skimage import io  # scikit-image I/O for reading the image file (the original import line was garbled)
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
from keras.models import model_from_json
from os.path import isfile, join

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
  • Fetch the image file from disk in accordance with the host operating system. In our case it is a 28 x 28 pixel image of the digit ‘3’ from the MNIST dataset
  • Load an image to be classified 
# load data
host_platform = platform.system()  # renamed so it does not shadow the platform module
currentUser = getpass.getuser()
currentDirectory = os.getcwd()

# use the current working directory on both platforms;
# the commented alternatives point at the user's home directory instead
if host_platform == 'Windows':
    #path_image = 'C:\\Users\\' + currentUser
    path_image = currentDirectory
else:
    #path_image = '/home/' + currentUser
    path_image = currentDirectory
fn = 'image.png'
img = io.imread(os.path.join(path_image, fn))
  • Load MNIST dataset and preprocess images pixels into arrays
# prepare arrays: X_t holds the image(s) to classify, y_t the expected label
# (the label 3 below is assumed to match the sample image from the dataset)
X_t = []
y_t = []
X_t.append(img)
y_t.append(3)

X_t = numpy.asarray(X_t)
y_t = numpy.asarray(y_t)
y_t = np_utils.to_categorical(y_t, 10)

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
X_t = X_t.reshape(X_t.shape[0], num_pixels).astype('float32')

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
X_t /= 255

print('X_train shape:', X_train.shape)
print ('X_t shape:', X_t.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(X_t.shape[0], 'test images')

# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)

num_classes = y_test.shape[1]
print(y_test.shape[1], 'number of classes')
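The one-hot encoding performed above by np_utils.to_categorical can be sketched in plain numpy, which makes it obvious why num_classes ends up as 10:

```python
import numpy

def one_hot(labels, num_classes):
    # each row gets a 1 at the position of its class label, 0 elsewhere
    encoded = numpy.zeros((len(labels), num_classes))
    encoded[numpy.arange(len(labels)), labels] = 1
    return encoded

# the digits 3 and 0 become rows of a (2, 10) matrix
print(one_hot([3, 0], 10))
```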
  • Define helper functions 
# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu'))
    model.add(Dense(num_classes, init='normal', activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def build_model(model):
    # build the model
    model = baseline_model()
    # Fit the model, y_train, validation_data=(X_test, y_test), nb_epoch=10, batch_size=200, verbose=2)
    return model

def save_model(model):
    # serialize model to JSON
    model_json = model.to_json()
    with open("model.json", "w") as json_file:
        json_file.write(model_json)
    # serialize weights to HDF5 (file name assumed to pair with model.json)
    model.save_weights("model.h5")
    print("Saved model to disk")

def load_model():
    # load json and create model
    json_file = open('model.json', 'r')
    loaded_model_json =
    loaded_model = model_from_json(loaded_model_json)
    # load weights into new model (same assumed file name as in save_model)
    loaded_model.load_weights("model.h5")
    if loaded_model:
        print("Loaded model")
    else:
        print("Model is not loaded correctly")
    return loaded_model

def print_class(scores):
    for index, score in numpy.ndenumerate(scores):
        number = index[1]
        print(number, "-", score)
    for index, score in numpy.ndenumerate(scores):
        if score > 0.5:
            number = index[1]
            print("\nNumber is: %d, probability is: %f" % (number, score))
  • Prepare multilayer perceptron model and compile it
model = baseline_model()
path = os.path.exists("model.json")
  • Check if trained model exists
  • If not train new model, save it and predict image 
if not path:
    model = build_model(model)
    # save the freshly trained model, as outlined in step 8
    save_model(model)
    # Final evaluation of the model
    scores = model.predict(X_t)
    print("Probabilities for each class\n")
    print_class(scores)
  • Else load current model and predict image 
else:
    # Final evaluation of the model
    loaded_model = load_model()
    if loaded_model is not None:
        loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        scores = loaded_model.predict(X_t)
        print("Probabilities for each class\n")
        print_class(scores)

How to Run 

If you downloaded or cloned the project and you have all the prerequisites set up, then to run it simply type this command in the terminal.


The Prediction Exposed

The predicted output for the image of digit ‘3’ looks like this.

Probabilities for each class

(0, '-', 3.4988901e-07)
(1, '-', 3.7538914e-08)
(2, '-', 0.00072528532)
(3, '-', 0.99788445)
(4, '-', 1.7879113e-08)
(5, '-', 1.3890726e-06)
(6, '-', 2.5650074e-10)
(7, '-', 2.233218e-05)
(8, '-', 0.0012537371)
(9, '-', 0.00011237688)

Number is: 3, probability is: 0.997884
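The predicted class is simply the index with the highest probability; a quick check with the scores printed above:

```python
import numpy

# the softmax scores printed above, one per digit class 0-9
scores = numpy.array([3.4988901e-07, 3.7538914e-08, 0.00072528532, 0.99788445,
                      1.7879113e-08, 1.3890726e-06, 2.5650074e-10, 2.233218e-05,
                      0.0012537371, 0.00011237688])

# the index of the maximum score is the predicted digit
print(numpy.argmax(scores))  # 3
```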


  • If you want to see a 3-D visualization of a multilayer perceptron network in action, built with two hidden layers, then check this one
  • If you want to see a nice visualization of a shallow/deep neural network and play with various parameters yourself in real time, then A Neural Network Playground is for you!

 Java Code Geeks

It’s been a hard day’s night of Deep Learning


Turning the page

The new year is round the corner and so are the thoughts about a quest into the hidden layers of Deep Learning. This year's goal was to become a developer, and it was achieved as planned, on time. The main projects were in Android, and the end of the year was under the sign of Machine Learning and, more precisely, Deep Learning.


So what are the main points of an almost two-month headlong journey on the Deep Learning highway?

Deep Learning Book

  • As you may already know, the Deep Learning book by Ian Goodfellow, Yoshua Bengio and Aaron Courville was published. This detailed and helpful book on the foundations of artificial neural networks is pretty expensive but can be accessed electronically in HTML format for free.

Jason Brownlee’s Machine Learning Mastery site comes in handy

Blog Posts Ignited by Deep Learning Sparks

In case you’ve missed the recent posts on Deep Learning at this blog:
  • The first post was about using the open source TensorFlow machine learning library. It explained how to install it on Linux and run an image-to-caption model.
  • The third one was about predicting possible future applications of Deep Learning. It appears it was really interesting for readers, since more than 200 people around the globe read it (about 4 times more than usual).
  • The most recent one was about installing Keras, Theano and TensorFlow (all open source tools) on Linux and Windows. 


Here And There

Deep Goals 

I hope you’ll find at least some of the links helpful.

Best wishes for the upcoming year: set your goals for 2017, have a clear plan to get going, and jump Deeply into Learning.

Keras, Theano and TensorFlow on Windows and Linux


Tools for Deep Learning development

To start playing with Deep Learning one has to pick proper tools. Python ecosystem tools for Deep Learning such as Keras, Theano and TensorFlow are easy to install and to start development with. Below follows a guide on how to install them on the Windows and Linux operating systems.

What are Theano, TensorFlow and Keras all about?

A few words about those tools from official websites.

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.

TensorFlow™ is an open source software library for numerical computation using data flow graphs.

Keras is a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.

Windows Strikes Back or a Big Surprise

A few days ago, after upgrading from Ubuntu 15.10 to 16.04, I wanted to run a code example in TensorFlow but found out that TensorFlow was not working. So I switched to Windows thanks to a dual-boot installation and, to my amazement, found that Keras -> Theano and Keras -> TensorFlow can be installed and run there very easily, with some caveats. So let's proceed to the installation steps.

Prerequisites for Windows 7 or 10

It is possible to install Theano and Keras on Windows with a Python 2 installation. However, if you want to be able to work with both Theano and TensorFlow, then you need to install Python 3.5. As of now, TensorFlow 0.12 is supported on 64 bit Windows with Python 3.5. The steps below aim at providing support for both Theano and TensorFlow. 

To summarize, the prerequisites for TensorFlow on Windows 7/10 are:

  1. 64 bit OS
  2. Python 3.5 (Anaconda 3): use Anaconda 3 version 4.2.0, which ships Python 3.5 and not 3.6.

   Note: I’ve found out that starting from Anaconda 3 version 4.3.0 the tutorial for Windows is broken due to the changes they introduced! That is why, for it to work, use the Anaconda version stated above.

Anaconda Is Very Helpful

Anaconda is an open source packaging tool for Python and other languages. It is very helpful, easy to use and intuitive with detailed tutorials. It will help us install Python and all the dependencies for Keras, Theano and TensorFlow with only a few directives. Anaconda is brought to you by  Continuum Analytics.

So if you have 64 bit Windows PC or a VM do the following steps.

  • After Anaconda is installed, open a terminal and install Theano.
  • When you are asked about installing dependencies, type ‘y’ for yes.
C:\>conda install theano
  • To enable the gcc compiler for Theano, install the following packages.
  • When you are asked about installing dependencies, type ‘y’ for yes.
C:\>conda install mingw libpython
  • That’s it, Theano is installed. To check which version is installed:
C:\>conda list theano
  • To install TensorFlow and Keras we’ll need to use the pip Python package manager, which is included in Anaconda
C:\>pip install tensorflow
C:\>pip install keras
  • To figure out which backend is currently in use
C:\>python -c "from keras import backend; print(backend._BACKEND)"
  • To change which backend Keras uses, edit the keras.json configuration file. It may be found in the .keras folder of your home directory.
  • Change the “backend” string to “theano” or “tensorflow” according to your needs.
{
    "image_dim_ordering": "tf",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
  • To test that they work, let’s run this example in the Python interpreter line by line
>>> import theano
>>> from theano import tensor
>>> a = tensor.dscalar()
>>> b = tensor.dscalar()
>>> c = a + b
>>> f = theano.function([a,b],c)
>>> result = f(1.5, 2.5)
>>> print(result)
  • To see that Keras is really functioning, you may run the code for a multilayer perceptron at GitHub.

Same Process on Linux (Ubuntu)

Installation of Keras, Theano and TensorFlow on Linux is almost the same as on Windows. Actually, it is even easier, since TensorFlow works nicely with Python 2 on Ubuntu. That is why below I’ll provide installation steps for 64 bit Ubuntu 16.04 and Python 2.


  • After Anaconda is installed, open a terminal and install Theano.
$ conda install theano
  • That’s it, Theano is installed. To check which version is installed:
$ conda list theano
  • To install TensorFlow and Keras run these commands.
  • If you are asked about installing dependencies, type ‘y’ for yes.
$ conda install tensorflow
$ conda install keras
  • To figure out which backend is currently in use
$ python -c "from keras import backend; print(backend._BACKEND)"
  • To change which backend Keras uses, edit the keras.json configuration file. It may be found at
~/.keras/keras.json
  • Change the “backend” string to “theano” or “tensorflow” according to your needs.
{
    "image_dim_ordering": "tf",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
  • To test that they work, let’s run this example in the Python interpreter line by line
$ python
>>> import theano
>>> from theano import tensor
>>> a = tensor.dscalar()
>>> b = tensor.dscalar()
>>> c = a + b
>>> f = theano.function([a,b],c)
>>> result = f(1.5, 2.5)
>>> print(result)
  • To see that Keras is really functioning, you may run the code for a multilayer perceptron at GitHub.

Official References

What’s next?

Deep Learning.


Systematic Approach To Applications Of Deep Learning

Hidden potential

The interest in Deep Learning research and applications is hotter than ever. Countless new research papers can be found almost every day. Those papers describe novel ways artificial neural networks can be applied to various fields of our daily life. What is fascinating about Deep Learning is the fact that neural networks seem almost universally applicable to kinds of problems that were previously tackled with tailored approaches. Moreover, each day there is an article or blog post that tells us about even more exotic ways of applying Deep Learning. The problem with those articles, blog posts and even books is that they do not provide a systematic treatment of neural network applications. At least, so far I haven’t seen this done, and if you know about such attempts, please let me know.


While searching for materials for this post I found a number of articles that summarize Deep Learning applications. Here are a number of quotes from those articles with related links.

1. The first post, called 8 Inspirational Applications of Deep Learning by Jason Brownlee, is from the Machine Learning Mastery blog.

Here’s the list:

  1. Colorization of Black and White Images.
  2. Adding Sounds To Silent Movies.
  3. Automatic Machine Translation.
  4. Object Classification in Photographs.
  5. Automatic Handwriting Generation.
  6. Character Text Generation.
  7. Image Caption Generation.
  8. Automatic Game Playing.

As can be seen, these applications can be concisely described by the sensory modalities that Artificial Intelligence research was initially applied to, namely the audio, visual and spatial modalities. 

2. This one is called Deep Learning Use Cases and is taken from a site dedicated to Deeplearning4j machine learning library for Java.

3. The next one is called Deep Learning Applications in Science and Engineering by John Murphy. This article describes similar applications of Deep Learning as the previous ones, but also provides more exotic applications, such as Scientific Experiment Design, High Energy Physics and Drug Discovery.

4. In addition, I want to mention The Next Wave of Deep Learning Applications post, which is full of the most exotic applications that maybe you haven’t heard about before. To name a few, there are Weather Forecasting and Event Detection, and Neural Networks for Brain Cancer Detection.

5. The last one is a question about Deep Learning applications on Quora that has a number of helpful answers.

Prediction example

If we look at row 4 and column B, we will find the ‘Speech recognizer -> Speech generator’ pair from the audio modality, which can be interpreted as a language-to-language translation application, such as Google Translate. Moreover, if we choose row 6 and column D, we will find ‘Image recognizer -> Image generator’, which is exactly the idea behind the Deep Convolutional Inverse Graphics Network paper.

It can be seen that this matrix has the following number of possible pairs: 12 * (12 – 1) = 132. In the general case, pairs = N * (N – 1).
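The pair count is easy to verify with a short sketch that enumerates all ordered recognizer -> generator pairs for N = 12 components (the names below are illustrative placeholders, not the actual matrix rows):

```python
from itertools import permutations

# 12 placeholder matrix components (illustrative, not the real row labels)
components = ['component_%d' % i for i in range(12)]

# every ordered pair of two distinct components
pairs = list(permutations(components, 2))

n = len(components)
print(len(pairs))               # 132
print(len(pairs) == n * (n - 1))  # True, matching the general formula
```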

If we want to think about a novel application, it is possible to systematically go over the matrix and look for one, or pick a random pair, such as row 4 and column H, which is ‘Image recognizer -> Natural language generator’. It may be an application that lip-reads a person talking in front of a mobile phone camera and generates text to be sent to another application. This application is useful when there is a noisy environment in the background (the idea comes from here).

Notice that this matrix is composed for the sake of an example, and it may be organized in other ways that produce other combinations of possible Deep Learning applications. Moreover, this matrix may be multi-dimensional to take into account tuples of various parameters.

Morphological Matrix

An additional way to try to predict applications of Deep Learning is to use the morphological matrix method developed by Fritz Zwicky, a Swiss astrophysicist based at the California Institute of Technology. By the way, this method has been successfully used to predict the existence of neutron stars. A good explanation of what a morphological matrix is and of its applications may be found at the Swedish Morphological Society. For our purposes it is sufficient to know that this matrix can be composed in such a way that the first row has various sensory modalities such as audio, visual, touch etc., and the rest of the rows provide possible options for those modalities. The screenshot will help to clarify this.
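A minimal sketch of the morphological matrix idea: each column is a modality, the rows below the header list options, and the cross-product enumerates every candidate combination (the options here are made up purely for illustration):

```python
from itertools import product

# columns: sensory modalities; values: illustrative options per modality
matrix = {
    'audio':  ['speech recognizer', 'speech generator'],
    'visual': ['image recognizer', 'image generator'],
    'touch':  ['gesture recognizer', 'haptic feedback'],
}

# every combination picks exactly one option per modality
combinations = list(product(*matrix.values()))

print(len(combinations))  # 2 * 2 * 2 = 8 candidate applications
for combo in combinations[:2]:
    print(combo)
```

Scanning such a list systematically, rather than waiting for inspiration, is the whole point of the method.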

As was shown in this post, it is possible and effective to systematically look for applications of Deep Learning in particular, and Machine Learning in general, by means of combination and morphological matrices.


Why I like CNN


CNN Is Everywhere

I like CNN because it’s a modern marvel. It is almost everywhere. You can see it on mobile or any other device. It brings you a lot of excitement. Recently it has amazed lots of people around the world. Interest in CNN was never as hot as it is today.

CNN is always up to date and brings you unexpected topics almost every day. You cannot help but hear about it again and again from every corner. CNN is like madness, don’t you think? On the other hand, it sheds light on our inner workings, maybe even feelings.

In short, you cannot ignore CNN, simply because CNN is Convolutional Neural Networks.

More details on CNN can be found 

What Is It Good For?

Check out the incredible usage of DNN, CNN, LSTM and other topologies in this article on Lip Reading Sentences in the Wild.

Banana Classifier with OpenCV on Android


Update: BananaRecognition source code at GitHub

Make Banana Recognition Great Again

This post is about going from an idea to its implementation. The idea was to create a mobile phone application that can detect cars. Now let’s see how it started and where I stand today. I’ll provide a brief description of the steps taken up to this very day.

Step 1. Google is Your Friend or Search and You’ll Find It

The journey to a banana Haar feature-based cascade classifier started from searching for videos, articles and blogs on how to do object detection and tracking with a mobile phone. In particular, the Android platform was of interest to me, since I had participated in the development of a few Android applications beforehand. Below follow the various kinds of resources that were found and how they pointed me in a direction for further search.

Going in a wrong direction

The first search results were actually disappointing. There were a bunch of recognition APIs and recognition products for static object recognition and classification of video for caption generation. To name a few

Those are good for static images, but I was looking for something else, i.e. dynamic treatment.

YouTube provides hope for feasibility

Looking for a possible implementation of my idea, I searched further with a little help from Google and found a number of items on YouTube and in blogs that were encouraging.

Advanced object detection take one

The video below appeared to be exactly what I had imagined. It is ViNotion object detection from a moving vehicle (car detection).

But as I found out very fast, this company from the Netherlands used very capable hardware and proprietary software to accomplish this feat.

So I needed to find something else.

Advanced object detection take two

The next thing I found was a system that was able not only to recognize moving cars but also to classify them and measure their speed.

Once again, it turned out to be a monstrous hardware thing with a thermal sensor, a laser range finder, you name it.

The power of academic research

The search continued and, bingo! This was it. What I found was an article with exactly what I wanted: an Android phone detecting and tracking cars on the road.

In short, they used an Android phone powered by a Haar-like feature detector with additional filters and were able to detect and track cars from behind. The key elements I learned from this article were

  • It is possible and was done with a mobile phone
  • A Haar-like feature detector with the AdaBoost algorithm is a candidate for usage

Step 2. OpenCV and Roman Hošek to the Rescue

It was only a small step to type OpenCV on Android into Google Search and find out about the power of the Open Source Computer Vision library, known as OpenCV, to get even closer to my goal. Looking into the OpenCV tutorials, I understood that it would take much more time than I wanted to invest to learn how to set it up on Android and start development quickly. So I continued searching and found a detailed two-part blog post by Roman Hošek describing exactly how to implement an Android application for face recognition using the OpenCV library.

Not only did he describe how to implement this application, he also provided a link to his GitHub repository with a working Android application that can easily be imported, built and run in Android Studio.

Step 3. Download. Build. Find Banana Model

I played with Roman’s application and was able to understand exactly what I needed to do to swap the face classifier model for another model, be it bananas or whatever. Bananas are easier for newbies to classify. Looking for banana classifiers was also not so hard. Moreover, there are a bunch of blogs in the wild providing a wealth of information on how to scientifically classify bananas and other fruits with Node.js.

Train your own Cascade Classifier 

As I found, the cascade classifier algorithm is pretty universal in the sense that it can recognize not only faces but any other objects you wish. But that means there is a need for custom training.

It happened that Thorsten Ball provided a GitHub repository that described how to train a custom classifier on… bananas. What was so special about this repository was the file banana_classifier.xml, which was the last element in solving the puzzle of banana, sorry, car recognition.

Know How To Place A Right File Into A Right Place

Having Roman’s app for face recognition and a model for banana recognition in my hands, I was able to tweak Roman’s app to banana recognition. The comparison of before/after functionality is provided below.

App with face recognition model


App with banana recognition model


Step 4. Find a car model or train one yourself. Get rid of bananas

The next step is to train or find a car model and, voilà, the idea is 100% implemented as envisioned.



Guess a Digit Game On Android With TensorFlow


Why Machine Learning?

About a week ago I discovered the overwhelming topic of machine learning, and since then there has been no stopping me. This is a vast field that combines mathematics, programming, artificial intelligence, physics, you name it. I have to say this is the thing I have been looking for for such a long time. It has just the right combination of the ingredients described above, making it a very attractive, dynamic and interesting field of study and application.

An Idea For A Game

Recently I’ve been involved with application development for Android, and I thought that combining machine learning with Android would be a good idea to try. That is why I want to use a basic application developed by César Delgado Fernández that makes use of the TensorFlow library on Android, and turn it into an engaging game for children. Currently, César Delgado Fernández’s app can recognize digits from 0 to 9. Surely, it can be extended to be capable of other things. If you are eager to see it in action, you may download it, import it into Android Studio, then build it. It worked fine for me. If you have any difficulty, let me know. So stay tuned for a brand new game from neaapps in the following weeks.

Mobile Classifier Application Demo From Google

Discover more about Mobile TensorFlow here. By the way, if you want to play with TensorFlow on Android, you may try to follow the guide provided here. This application uses the mobile phone camera to classify objects and then provides captions with probabilities for the recognized objects.

Hot Updates In Machine Learning

E-learning for Free

E-Books for Free

  • Deep Learning is a book by Ian Goodfellow, Yoshua Bengio and Aaron Courville, people who are foremost experts in the field of machine learning.

What’s next?

Next post will be about setting and using SciPy stack on Windows and much more.

Machine Learning For All

The Rise of the Machines

In the recent decade we have been witnessing a resurrection of interest in neural networks due to advances in computing hardware, such as GPU accelerators, and the availability of large data sets, such as those of Facebook, YouTube etc., for training those networks. As a result, a great number of research articles and practical applications have emerged in the field of artificial intelligence and found fruitful usage at Google, Facebook, Amazon etc.

Meet TensorFlow Library

As a forerunner in the field of applied machine learning, Google has developed and open sourced the TensorFlow machine learning software library, which is accessible to all for tackling problems in computer vision, natural speech processing and more. If you find this information interesting, then the following steps will help you start playing with this library. The library is written in C++ with APIs available in C++ and Python. Actually, most education resources available online for TensorFlow use Python.

Installation Is Fast

TensorFlow is developed with Linux and OS X in mind and struggles on Windows. That is why the steps below, taken from the official manual, were run on Ubuntu 15.10.

  • Open a terminal in Ubuntu/Linux 64 bit and install the pip package management system for Python
$ sudo apt-get install python-pip python-dev
  • Since I had no GPU on my  64 bit machine I chose this binary
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
$ export TF_BINARY_URL=
  • Now let’s install TensorFlow library. Be prepared it will take a while
# Python 2
$ sudo pip install --upgrade $TF_BINARY_URL
  • If installation went flawlessly, which it did in my case, you can test it by typing
$ python
  • When you see ‘>>>’ it means the Python interpreter is running and you can play with it
>>> import tensorflow as tf
>>> hello = tf.constant("Hello, TensorFlow! I've made it so far.")
>>> sess = tf.Session()
>>> print(
Hello, TensorFlow! I've made it so far.
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print( + b))
42

Hot update: TensorFlow 0.12 available on Windows 7/10

Today we announced native Windows support in TensorFlow 0.12, with packages for Python 3.5. If you have installed the 64-bit version of Python 3.5 (either from or Anaconda), you can install TensorFlow with a single command:

C:\> pip install tensorflow

How to Use It?

If you want to get a first-hand feeling of what TensorFlow is capable of, then try to follow the official Image Recognition tutorial or do the steps below to recognize your most favorite image (do not try it with this one). The TensorFlow Inception v3 model will try to recognize the image you provide and emit captions for the image from highest to lowest probability. For example, if you provide an image of the Italian Alps, the result will be something along these lines.


alp (score = 0.73395)
valley, vale (score = 0.18973)
cliff, drop, drop-off (score = 0.00309)
promontory, headland, head, foreland (score = 0.00171)
lakeside, lakeshore (score = 0.00154)
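The "highest to lowest probability" ordering of those captions can be reproduced with a plain sort; here are the scores from the Alps example above:

```python
# caption -> score pairs taken from the Inception v3 example output above
scores = {
    'alp': 0.73395,
    'valley, vale': 0.18973,
    'cliff, drop, drop-off': 0.00309,
    'promontory, headland, head, foreland': 0.00171,
    'lakeside, lakeshore': 0.00154,
}

# sort captions from highest to lowest probability, as the script does
for caption, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print('%s (score = %.5f)' % (caption, score))
```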

Try It yourself

  • The models and examples are generally installed at this path
  • To figure out the exact location, use this directive for Python 2.7 (change ‘python’ to ‘python3’ for Python 3)
$ python -c 'import os; import inspect; import tensorflow; print(os.path.dirname(inspect.getfile(tensorflow)))'
  • For image recognition you need to run the script, which can be found at this path
  • Place the image you want recognized, in JPEG format, anywhere you like. In my case it was at this destination
  • Then run the Inception v3 model on that image this way. Pay attention that the ‘--image_file’ argument indicates the path to the image
$ python /usr/local/lib/python2.7/dist-packages/tensorflow/models/image/imagenet/ --image_file /home/me/Pictures/image.jpg

Teach Yourself Machine Learning In Ten Years

So you like what you did with TensorFlow and want to learn the subject in more depth. Congratulations, since there are more resources than you can digest in a lifetime. What I personally have found useful so far:

1. Start with the TensorFlow site itself where you can find tutorials, guides and APIs description.

2. If you need a more thorough introduction to Machine Learning, try the free Udacity courses on the subject.

3. If you are like me and like books and a rigorous theoretical background, then this book by Michael Nielsen is just for you.

4. Machine Learning Mastery site by Jason Brownlee is definitely a place to visit

5. If you are an expert in the field of Machine Learning, then maybe you’ll find this peculiar blog by Chris Olah from the Google Brain team useful.

More Posts Are Expected 

Stay in touch.


Get Ready For Machine Learning

Welcome To The Machine

In the upcoming weeks I plan to post materials about Machine Learning, be it deep, shallow etc.

A Few Bits Here And There

  • Just to warm up your interest, start by skimming this insightful blog by Chris Olah, a member of the Google Brain team.
  • Then take a look at TensorFlow, the open source machine learning library developed at Google
  • Do not forget to check out Numenta’s Hierarchical Temporal Memory (HTM) model, which is very different from and yet somewhat similar to neural networks
  • Those of you who want to know more about Numenta 

More Stuff is Coming

So stay tuned.

Machine Intelligence at Numenta. Contribute and have fun.


The Dawn

For centuries people have been fascinated with the brain and have tried to understand how it might work. Real scientific research started about 100 years ago. With the emergence of the Artificial Intelligence movement in the mid-1950s there was a hope that the answer to the question of how to build intelligent machines was just round the corner. Today, in 2015, the hopes of the forerunners of the AI community still remain the same as in the beginning.

New insights and the theory

Things started to change back in 2005 when ‘On Intelligence’, the book written by Jeff Hawkins and Sandra Blakeslee, provided an outline of a theory that for the first time explained the inner workings of the neocortex with a small number of biologically inspired assumptions. In the years that followed, predictions derived from Jeff’s theory of the possible neocortex algorithm proved to be sound and fruitful and reinforced the theory.


The year ‘On Intelligence’ was published, another significant event took place: the founding of the Numenta company. The main goal of the company was, and still is, to implement the proposed algorithm of the neocortex’s inner workings in practical applications.

Numenta Platform for Intelligent Computing

Very soon Numenta decided to open source their research implementations of the neocortex algorithm, known as Hierarchical Temporal Memory (HTM), and the Numenta Platform for Intelligent Computing (NuPIC) was born. Today, hundreds of NuPIC open source community members, along with the Numenta development team, are engaged in the implementation and refinement of the Cortical Learning Algorithm (CLA), which is a subset of HTM.

How to contribute

There are a number of ways in which it is possible to contribute.

  1. Get involved in the NuPIC open source community and help to develop the next step in biologically inspired Machine Intelligence. Maybe you’ll be the one who proposes a groundbreaking application of this technology in new, surprising fields.
  2. Play with already existing applications, such as Grok, which is used by Amazon.
  3. Have fun.
