Deep Learning for Time Series book


Is it for you?

Are you struggling to find easy-to-digest, practical material on Deep Learning for Time Series? Then look no further and try the newest book by Jason Brownlee from Machine Learning Mastery: ‘Deep Learning for Time Series Forecasting‘.

What’s inside?

The book will help you apply classic and deep learning methods to time series forecasting. It is no exception to what you have come to expect from Machine Learning Mastery books: hands-on and practical, with plenty of real-world examples and, most importantly, working, tested code samples that can form the basis for your own experiments.

You may particularly like the application of deep learning to the Household Energy Consumption dataset, which is used to train CNN, CNN-LSTM and ConvLSTM networks with good accuracy.
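Before any of those networks can be trained, the raw consumption series has to be framed as a supervised learning problem: a window of past observations as input, the next observations as output. The sketch below is my own minimal illustration of that sliding-window framing, not code from the book; the function name `series_to_supervised` and the window sizes are my choices.

```python
import numpy as np

def series_to_supervised(series, n_in=7, n_out=1):
    """Split a univariate series into (input window, output window) pairs
    using a sliding window: the standard framing for forecasting models."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])          # last n_in observations
        y.append(series[i + n_in:i + n_in + n_out])  # next n_out to predict
    return np.array(X), np.array(y)

# toy daily consumption series: 0.0, 1.0, ..., 9.0
series = np.arange(10.0)
X, y = series_to_supervised(series, n_in=3, n_out=1)
print(X.shape, y.shape)  # (7, 3) (7, 1)
```

Each row of `X` is then one training sample; for a CNN or ConvLSTM you would further reshape it into the (samples, timesteps, features) layout the layers expect.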

What’s so special about the book?

I personally was fascinated by the Time Series Classification chapter, which applies Deep Learning to the Human Activity Recognition (HAR) dataset with quite accurate predictions. What I liked most about HAR is that raw gyroscope and accelerometer measurements from a cell phone were used to train the DL models without any feature engineering. A video of the dataset preparation is shown here.
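"No feature engineering" still involves one preprocessing step: the continuous sensor recording is cut into fixed-length, overlapping windows, and each window becomes one sample for the network (the UCI HAR dataset uses 128-timestep windows with 50% overlap). Here is a minimal sketch of that segmentation, assuming a simple NumPy array as the recording; the function name `segment_signal` is mine, not the book's.

```python
import numpy as np

def segment_signal(signal, window=128, overlap=64):
    """Cut a raw multi-axis sensor recording into fixed-length,
    overlapping windows; each window is one training sample."""
    step = window - overlap
    windows = []
    for start in range(0, len(signal) - window + 1, step):
        windows.append(signal[start:start + window])
    return np.stack(windows)

# fake 3-axis accelerometer recording: 1000 timesteps
recording = np.random.randn(1000, 3)
samples = segment_signal(recording)
print(samples.shape)  # (14, 128, 3) -> (samples, timesteps, features)
```

The resulting (samples, timesteps, features) array can be fed directly to a 1D CNN or LSTM, which learns its own features from the raw signal.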

What’s next?

In the next post I’ll take one of the Human Activity Recognition examples from the book and try to expand it using the Extensions part of the chapter.

If you manage to do it before me, please feel free to share your results in the comments section.


Did you know that Google’s Colaboratory gives you the opportunity to use GPUs for free while working on your own deep net implementations? More than that, you can easily share those Jupyter notebooks with your peers.



Statistical Methods for Machine Learning. Is it for me?


Statistical Methods for Machine Learning?

In this age of flourishing Deep Learning frameworks that let you train and run a model in a matter of minutes, practitioners tend to underestimate why they need statistical methods in their toolbox. It turns out that Machine Learning, and Deep Learning as a sub-field of it, uses statistical methods extensively throughout the training-inference pipeline, from data preparation all the way to model performance validation. So yes, if you are not aware of how those methods can help you, it is time to have a look at the new Statistical Methods for Machine Learning book by Dr. Jason Brownlee from Machine Learning Mastery. The book explains in simple terms, with practical examples, what statistical methods are and how to incorporate them into your day-to-day work.

What is there for me?


Have you ever studied the normal (Gaussian) distribution at college, university or elsewhere, but never really understood how to apply it in a real situation? Have you ever wondered what a p-value is, and whether there is a better approach, such as estimation statistics, that quantifies the size of an effect or the amount of uncertainty for a specific outcome, rather than only whether there was a difference between samples? The book also clarifies the difference between the Law of Large Numbers and the Central Limit Theorem, which are frequently confused with one another.
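The distinction is easy to see in code: the Law of Large Numbers says the sample mean converges to the population mean, while the Central Limit Theorem describes the *distribution* of that sample mean. Here is a small NumPy sketch of my own (not from the book) showing the CLT at work on a deliberately non-normal population:

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw a population from a clearly non-normal (exponential) distribution
# with mean 2.0 ...
population = rng.exponential(scale=2.0, size=100_000)

# ... then repeatedly take samples of size 50 and record their means.
# Per the Central Limit Theorem, those means are approximately normally
# distributed around the population mean, with standard deviation
# roughly sigma / sqrt(n), regardless of the population's shape.
sample_means = np.array([
    rng.choice(population, size=50).mean() for _ in range(2_000)
])

print(round(sample_means.mean(), 2))  # close to the population mean, 2.0
print(round(sample_means.std(), 2))   # close to 2.0 / sqrt(50), about 0.28
```

Plot a histogram of `sample_means` and you get the familiar bell curve, even though the underlying population is heavily skewed.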

In addition, the book includes hands-on code examples that are tested and working, and can be a good starting point for your own Machine Learning projects.

Is it worth buying?

The book is worth buying if you intend to be a more productive Machine Learning practitioner: one who not only runs code from tutorials, but also understands how to prepare and analyse data for an algorithm efficiently, who strives to get better results from models by evaluating them with statistical methods, and who values code snippets to build upon in their own Machine Learning projects.

Parting Words

All in all, Statistical Methods for Machine Learning has all the merits of Machine Learning Mastery books: it is easy to grasp, a joy to read, and brings immediate practical value that you can apply from the start.


How often to post on a blog?

[Update 2018-03-28]

Only today I posted on this blog that there has been no breakthrough in the Deep Learning field so far in 2018. Boy, was I wrong. Welcome this exciting paper, born out of a collaboration between David Ha (Google Brain) and Jürgen Schmidhuber (one of the creators of the LSTM recurrent neural network).

This paper finally implements what Yann LeCun mentions in all his recent talks: an agent that acts on its internal model of the world.

World Models

John Sonmez advises posting to a blog every week for the blog to gain momentum and grow. I surely agree with this advice, since I have seen it actually work. But as it happens, I haven’t posted anything for about two months now. There were a couple of topics I wanted to write about, but never did. In the coming days I’ll try to write on topics that spark my curiosity and may be of interest to the readers of this blog.

It seems drumming will be one of the topics; then physics, such as how cloaking devices may work. There may be a piece on aviation with regard to stealth aircraft. Certainly, programming is also one of the topics I like. Deep Purple (sorry, Deep Learning) is progressing steadily, but no huge breakthroughs are visible despite the optimistic forecasts made by various commentators in the field.

In addition, reviews of science fiction movies and stories may be a possible topic, or even a sci-fi story written by me. Recently I saw a number of movies with an interesting sci-fi idea at their core, but in my opinion the idea wasn’t elaborated as well as it could have been. I mean movies such as Downsizing, which missed the point completely, and the more successful but nevertheless under-delivering Annihilation.

That’s it for today. Stay tuned, and feel free to suggest topics within the fields mentioned above that you’d like me to cover.

What do you think?
What is the right frequency of posts in a blog? 

Linear Algebra for Machine Learning


Who is this book for?

If you are hungry for deeper insights while reading Deep Learning or Machine Learning papers, but are a little bit rusty on Linear Algebra, then

Basics of Linear Algebra for Machine Learning by Jason Brownlee from Machine Learning Mastery is just for you!

What is this book all about?

  • This book is a gentle introduction into Linear Algebra for people interested in machine learning.
  • As with all books written by Jason, it features a hands-on, practical Python approach.
  • It comes with a number of exercises for each chapter.
  • It has extensive references for each chapter too.
  • It feels like a good tool for practitioners new to Deep or Machine Learning.

Additional resources that may come in handy

I personally liked two books that Jason mentions in his latest book:

  • No Bullshit Guide To Linear Algebra, which is the best book of its kind in my opinion, with examples from quantum physics and more.
  • Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville. This book is more or less the current bible of Deep Learning.

What are you waiting for?

Grab one of the books and get amazed by applied math and Deep Learning.


How to start with Deep Learning?

What is Deep Learning in a nutshell?

Deep Learning is a hot topic these days and draws a lot of attention from people around the globe. The technology is applicable to various fields, such as image recognition and classification, speech recognition and generation, self-driving cars, etc. There are a number of definitions of what Deep Learning actually is. I find this definition by Lex Fridman from MIT, as he puts it in his latest arXiv paper on self-driving cars, quite simple:

Deep Learning can be defined as a branch of machine learning that seeks to form hierarchies of data representation with minimum input from a human being on the actual composition of the hierarchy.
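To make the "hierarchy of representations" concrete, here is a toy sketch of my own (not from the paper): three stacked layers, each re-representing the output of the layer below. The weights are random, so it computes nothing useful; it only shows the structure the definition refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # a common non-linearity between layers
    return np.maximum(0.0, x)

# Each layer transforms the previous layer's representation:
# raw input -> low-level features -> mid-level features -> task output.
x = rng.standard_normal(64)                     # raw input (e.g. pixels)
h1 = relu(rng.standard_normal((32, 64)) @ x)    # low-level features
h2 = relu(rng.standard_normal((16, 32)) @ h1)   # mid-level features
out = rng.standard_normal((10, 16)) @ h2        # task-specific output

print(x.shape, h1.shape, h2.shape, out.shape)   # (64,) (32,) (16,) (10,)
```

Training consists of adjusting those weight matrices from data, with minimal human input on what each level of the hierarchy should represent.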

The best way to start with Deep Learning

If you are interested in getting to know what Deep Learning is and how it can be applied in practice, the best way is to try it yourself. Don’t worry, there is no need to enroll in a machine learning PhD program anymore: the state of Deep Learning technology is such that with a dozen lines of code, leveraging existing machine and deep learning libraries along with pre-trained models, it is possible to implement exciting applications such as image classification, image caption generation and more.

All you need is a practical end to end working example

To jump-start into Deep Learning (DL) right away, I suggest you have a look at the Machine Learning Mastery site, and specifically at the latest book there, which is related to DL: ‘Natural Language Processing with Deep Learning‘.

This book is composed of a number of self-contained tutorials on applying DL techniques to natural language processing tasks such as sentiment analysis, image caption generation and language translation. What is nice about it is that it shows you how to apply these techniques all the way from installing the required machine learning libraries to implementing a DL pipeline from start to finish. All the code samples in the book work and do the job; you can take them as a starting point and expand them with your own creativity.

Although the tutorials are quite independent, they are arranged so that the complexity of the applications grows from simple to more advanced.

The book engages you to try extensions and enjoy coding in Python

The book uses Python and its rich ecosystem of machine and deep learning libraries, such as Keras, to make your life easier and more enjoyable. What sets this book apart is that each chapter provides references to all the papers and books relevant to that chapter, so you don’t waste time looking them up yourself. In addition, and this is the best part in my opinion, each chapter offers a number of extensions to think about and implement for the application described, such as playing with different model architectures or tuning hyper-parameters.

So why are you still reading this post?

Try this book by executing every example in it, play with the examples by expanding them, and I am sure you’ll get a feeling for what this Deep Learning is and how it can produce quite fascinating outcomes, such as when a model you trained predicts something like this:

This is what a Deep Learning network trained to translate from German to English thinks about Canadians:

src=[wir sind kanadier], target=[we’re canadians], predicted=[we’re unusual]

Capsules, a.k.a. CapsNet


A new type of neural network 

Capsule networks, a brand-new type of artificial neural network claimed to be superior to CNNs, are here to stay. Prepare for at least one detailed post about them in the near future.

Where do I find a working implementation?

Please refer to this implementation of CapsNet in Keras. The repository also provides links to almost all currently known implementations of the freshly brewed capsule networks.

The bright future

My intuition suggests that capsules will substitute CNNs in the near future due to their invariance to image position transformations.



Dynamic Routing Between Capsules

Matrix capsules with EM routing



CapsuleNet on MNIST


Capsule Networks Explained

What is a CapsNet or Capsule Network?


Capsule Networks: An Improvement to Convolutional Networks (Siraj Raval)



Deep Learning virus. Are you infected?

Taken by surprise

It’s hardly possible to find a single person who hasn’t heard about the Deep Learning virus epidemic. The affected population is quite significant, and the infection is spreading faster than was ever imagined. Who would have thought that such an esoteric virus could spread so rapidly? The main question is how the authorities missed this case completely, until the point where there is very little that can be done to fight this strong, capable, not to say intelligent, adversary.

What went wrong?

The virus’s origins date back to the nineteen-sixties and seventies, when it was reported that a couple of scientists had been affected by the Deep Learning virus, which then had no such name and was known as Perceptron. However, it was thought that timely treatment with the newly discovered XOR antibiotic had cured it completely, though sporadic outbreaks were also reported in the mid-eighties.

Things started to change suddenly in 2012. A few years before that, there had been a number of cases in which people from the speech recognition community were affected by the Deep Learning virus. What happened in 2012, though, was more significant: for the first time it was reported that vision, and more precisely object classification functionality, was strongly affected by it.

Today, we are witnessing a new wave of this infection, and it’s unsettling to see that the virus has grown into such a beast. We all know now that almost all human senses are affected by it, be it vision, speech generation and recognition, hearing, or cognitive functions such as primitive sentiment, you name it. It is unclear whether the senses of taste and smell are in danger, but we cannot be over-optimistic in this regard.

Virus characteristics

At first, it was thought that the virus only targeted certain predisposed members of the population, such as scientists and engineers like Geoffrey Hinton, Yann LeCun, Yoshua Bengio and others. It turned out that we were completely wrong in this assumption, and the virus is much smarter and more flexible than we thought possible. Now large fractions of the population, be it doctors, artists, entrepreneurs such as Elon Musk, and even renowned physicists such as Max Tegmark, are deeply affected by this Deep Learning epidemic.

It is mutating

Throughout the years, researchers have uncovered a number of mutations of the Deep Learning virus, and we now know about its Auto-Encoder, Convolutional and Recurrent species. Each day the most authoritative remedy journal, arXiv, reports newer mutations, which gives us little hope that treatment will keep pace with the virus’s evolution. There are rumors that a new and unseen kind of mutation, Deep Reinforcement Learning, is even more dangerous, not to mention generative adversarial networks and who knows what else to come.

On a side note, it is at least a little reassuring to know that one of the mutations, called Theano, was eradicated by MILA, and we hope that others will follow.


Apparently, the virus is transmitted via digital means: Internet publications, open-source software such as MXNet, and the use of large corporations’ products such as Keras, TensorFlow and PyTorch. Most probably, participation in conferences such as NIPS and others can put you in immediate danger of being affected. So ask yourself before attending whether the risk is worth it.

A new hope

Even though the virus is strong and unrelenting, we place our hopes in the development of new antibiotics, such as Numenta’s HTM, or neuromorphic drugs, such as Neurogrid.

So be cautious, take all measures to fight the virus, and hope that human intelligence will beat this sneaky, powerful, smart and flexible entity that somehow learns to outsmart us every time we think we’ve found a cure.

Take care.