Thoughts on physics and artificial intelligence on 2019 New Year’s eve

Make the New Year happy, because you can

It seems to me the New Year will be interesting and exciting, as it always does on New Year's Eve. What makes me think so, though, is a number of books I read recently. One of the books is a collection of interviews with prominent people in the field known as Artificial Intelligence. The other book is about particle physics being stuck with high hopes in String Theory, and why that may be a root cause of no new physics having been discovered so far at the Large Hadron Collider (LHC) except for the Higgs boson.

The power of the right books

In his book Architects of Intelligence: The truth about AI from the people building it, Martin Ford has done something interesting by combining a number of interviews, more than a dozen, with people who work on Artificial Intelligence at various levels. In it you may find Geoff Hinton, the founding father of Deep Learning, and his colleagues Yoshua Bengio and Yann LeCun, who need no special advertising (hint: search in Google). There is also a row of interviews with people from Google such as Jeff Dean and Ray Kurzweil that are interesting to read too.

The main point of the book is that these people were asked more or less the same questions: how they came into the field of Artificial Intelligence, what they think about Deep Learning and whether it alone will lead to Artificial General Intelligence, whether the recent advances in machine learning will jeopardize jobs, and what to do about that. What is interesting to see is that each person interviewed naturally had a different answer to these questions, which helps to get a balanced view of the state of the art of Machine and Deep Learning in 2018.

Things that require new explanations

In her book Lost in Math: How Beauty Leads Physics Astray, Sabine Hossenfelder, a particle physicist, discusses the interesting matter of the various biases that affect theoretical physicists who set out to devise theories intended to explain the laws of physics. For example, String Theory is discussed extensively in the book: though the theory is very elegant, beautiful and full of naturalness, it has so far failed to deliver, since none of the predictions it envisioned have materialized. Indeed, no new particles except for the Higgs boson were found at the Large Hadron Collider, and it feels like it is time to set String Theory aside and examine other theories that are not plagued by ad hoc assumptions of naturalness and by the apparent, and very likely deceiving, beauty of nature. If you are interested in why nothing new has been found in particle physics in recent decades, you may find Sabine's explanations insightful. And maybe, just maybe, you'll discover that you, like me, have biases that affect your perception of nature.

So make the upcoming year as you wish it to be

Remember that, as intelligent creatures, we are fortunate to possess the capability to set goals and achieve them, when we plan and act on those plans with enthusiasm and perseverance (and Google search).

Happy New Year!

Better Deep Learning or How To Fine Tune Deep Learning Networks

Effective Deep Learning is possible

Nowadays, when Deep Learning libraries such as Keras make composing Deep Learning networks as easy a task as it can be, one important aspect still remains quite difficult. That aspect, as you may have guessed, is the tuning of the various hyper-parameters, whose number isn't small at all: network capacity (the number of neurons and the number of layers), learning rate and momentum, number of training epochs, batch size, and the list goes on. But now it may become less of a hassle, since the new book Better Deep Learning by Jason Brownlee focuses exactly on the issue of tuning hyper-parameters as well as possible for the task at hand.
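To make the terminology concrete, here is a minimal sketch of my own (not taken from the book) showing where these hyper-parameters show up in a small Keras model; the data set, layer sizes and optimizer settings are arbitrary choices for illustration.

    from sklearn.datasets import make_moons
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD

    # Small synthetic data set so the example runs in seconds on a CPU.
    X, y = make_moons(n_samples=1000, noise=0.2, random_state=1)

    model = Sequential([
        Dense(32, activation='relu', input_shape=(2,)),  # capacity: neurons per layer
        Dense(1, activation='sigmoid'),                  # add more Dense layers for depth
    ])

    # Learning rate and momentum live in the optimizer.
    model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9),
                  loss='binary_crossentropy', metrics=['accuracy'])

    # Number of training epochs and batch size are passed to fit().
    history = model.fit(X, y, validation_split=0.3, epochs=100, batch_size=32, verbose=0)

Every value in this snippet, from the 32 neurons down to the batch size of 32, is a knob of exactly the kind the book teaches you to turn deliberately rather than by guesswork.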

Why is it worth reading this book?

Having worked through this book from beginning to end, I liked that, as with other books written by Jason Brownlee, it follows the familiar path of self-contained chapters that provide just enough theory together with detailed, practical working examples that practitioners can extend and build upon. The code samples themselves are concise and can be run on an average PC without the need for a GPU, yet they convey very well what the author intended to show.

While playing with the code samples in each chapter, I found myself thinking that I was back at college again, doing a lab for electrical engineering. I felt this way because each chapter provides a great number of experiments, with related graphs, that help you understand how each hyper-parameter behaves in different configurations.

How may this book help me?

Better Deep Learning may help you if you have some initial experience with Deep Learning networks and want to fine-tune network performance in a more controlled way than simple trial and error. Since the book uses small, simple data sets generated with Python libraries, it is easy to run each experiment and quickly understand how each hyper-parameter affects network behavior.
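As a rough illustration of what such an experiment looks like (again my own sketch, not code from the book), you can vary a single hyper-parameter, say the batch size, on a synthetic data set and compare the resulting learning curves:

    import matplotlib.pyplot as plt
    from sklearn.datasets import make_moons
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD

    X, y = make_moons(n_samples=1000, noise=0.2, random_state=1)

    # Train the same small network with three different batch sizes
    # and compare the validation-accuracy curves.
    for batch_size in (8, 32, 128):
        model = Sequential([Dense(32, activation='relu', input_shape=(2,)),
                            Dense(1, activation='sigmoid')])
        model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9),
                      loss='binary_crossentropy', metrics=['accuracy'])
        history = model.fit(X, y, validation_split=0.3, epochs=50,
                            batch_size=batch_size, verbose=0)
        plt.plot(history.history['val_accuracy'], label='batch size %d' % batch_size)

    plt.xlabel('epoch')
    plt.ylabel('validation accuracy')
    plt.legend()
    plt.show()

The resulting plot is exactly the kind of graph the book's chapters are built around: one picture per hyper-parameter sweep, which makes the effect of each setting easy to see.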

In addition to working code examples, the book provides a number of focused references to papers, books and other materials that are related to the content of each chapter.

Last but not least, each chapter concludes with a number of extensions that make the practitioner think harder and play with the chapter's content at a much deeper level.

Conclusion

All in all, the book provides a comprehensive treatment of the hyper-parameters you may find in various types of Deep Learning networks, such as CNNs, RNNs and LSTMs, and it makes it clear that fine-tuning Deep Learning networks is possible even for a beginner, given the proper guidance which the book provides.

Stay fine tuned!