Thoughts on physics and artificial intelligence on 2019 New Year’s eve

Make the New Year happy, because you can

It seems to me the New Year will be interesting and exciting, as it always seems on New Year’s Eve. What makes me think so, though, is a number of books I read recently. One of the books is a collection of interviews with prominent people in the field known as Artificial Intelligence. The other book is about particle physics being stuck with high hopes in String Theory, and why that may be a root cause of no new physics having been discovered so far at the Large Hadron Collider (LHC), except for the Higgs boson.

The power of the right books

In his book Architects of Intelligence: The truth about AI from the people building it, Martin Ford has done something interesting by collecting a number of interviews, more than a dozen, with people working on Artificial Intelligence at various levels. In it you may find Geoff Hinton, the founding father of Deep Learning, and his colleagues Yoshua Bengio and Yann LeCun, who need no special advertising (hint: search in Google). There is also a series of interviews with people like Jeff Dean and Ray Kurzweil from Google that are interesting to read too.

The main point of the book is that these people were asked more or less the same questions: how they came into the field of Artificial Intelligence, what they think about Deep Learning and whether it alone will lead to Artificial General Intelligence, whether the recent advances in machine learning will jeopardize jobs, and what to do about that. What is interesting to see is that each person interviewed naturally had a different answer to these questions, which helps to get a balanced view of the state of the art of Deep and Machine Learning in 2018.

Things that require new explanations

In her book Lost in Math: How Beauty Leads Physics Astray, Sabine Hossenfelder, a particle physicist, discusses the various biases that affect theoretical physicists who set out to devise theories intended to explain the laws of physics. For example, String Theory is discussed extensively in the book: though it is very elegant, beautiful, and full of naturalness, it has so far failed to deliver the predictions it envisioned. Indeed, no new particles except for the Higgs boson were found at the Large Hadron Collider, and it feels like it is time to abandon String Theory, which isn’t working, and examine other theories that aren’t plagued by ad hoc assumptions of naturalness and by the apparent, and very likely deceiving, beauty of nature. If you are interested in why nothing new has been found in particle physics in recent decades, you may find Sabine’s explanations insightful. And maybe, just maybe, you’ll discover that you too, like me, have biases that affect your perception of nature.

 

So make the upcoming year as you wish it to be

Remember that, as intelligent creatures, we are fortunate to possess the capability to set goals and achieve them when we plan and act on those plans with enthusiasm and perseverance (and Google search).

Happy New Year!

 

 


Better Deep Learning or How To Fine Tune Deep Learning Networks

Effective Deep Learning is possible

Nowadays, when Deep Learning libraries such as Keras make composing Deep Learning networks about as easy as it can be, one important aspect still remains quite difficult. That aspect, as you could have guessed, is the tuning of the many hyper-parameters: network capacity, that is the number of neurons and number of layers, learning rate and momentum, number of training epochs, batch size, and the list goes on. But it may now become less of a hassle, since the new Better Deep Learning book by Jason Brownlee focuses exactly on the issue of tuning hyper-parameters as well as possible for the task at hand.
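
To make it concrete where those knobs live in code, here is a minimal Keras sketch of my own (not taken from the book) in which every commented value is one of the hyper-parameters the book teaches you to tune; the generated dataset and all of the concrete numbers are illustrative assumptions only.

    # A minimal sketch of where the main hyper-parameters live in a Keras model.
    # All concrete numbers are illustrative assumptions, not recommendations.
    from sklearn.datasets import make_blobs
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD

    # A toy, generated dataset: 2 features, 3 classes.
    X, y = make_blobs(n_samples=1000, centers=3, n_features=2, random_state=1)

    model = Sequential()
    model.add(Dense(25, activation='relu', input_dim=2))  # capacity: neurons per layer
    model.add(Dense(3, activation='softmax'))             # capacity: number of layers
    opt = SGD(learning_rate=0.01, momentum=0.9)           # learning rate and momentum
    model.compile(optimizer=opt, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # Number of training epochs and batch size are set at training time.
    model.fit(X, y, epochs=100, batch_size=32, validation_split=0.3, verbose=0)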

 

Why is it worth reading this book?

When I worked through this book from beginning to end, I liked that, like other books written by Jason Brownlee, it followed the familiar path of self-contained chapters that provide just enough theory and detailed, practical working examples that practitioners can extend and build upon. The code samples themselves are concise and can be run on an average PC without the need for a GPU, yet they convey very well what the author intended to show.

While playing with the code samples in each chapter, I found myself thinking that I was back at college again doing an electrical engineering lab. I felt this way because each chapter provides a great number of experiments with related graphs that help you understand how a hyper-parameter behaves in different configurations.

How may this book help me?

Better Deep Learning may help you if you have some initial experience with Deep Learning networks and want to fine-tune network performance in a more controlled way than simple trial and error. Since the book uses small, simple datasets generated with Python libraries, it is easy to run each experiment and quickly understand how each hyper-parameter affects network behavior.
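
To give a flavor of this style of experiment, here is a rough sketch of my own (not the book’s code) that sweeps a single hyper-parameter, the learning rate, on an assumed make_moons dataset and compares the resulting test accuracy; every number in it is an illustrative assumption.

    # Sketch: sweep the learning rate on a small generated dataset and compare results.
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD

    X, y = make_moons(n_samples=500, noise=0.2, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

    for lr in [1.0, 0.1, 0.01, 0.001]:
        model = Sequential([
            Dense(50, activation='relu', input_dim=2),
            Dense(1, activation='sigmoid'),
        ])
        model.compile(optimizer=SGD(learning_rate=lr),
                      loss='binary_crossentropy', metrics=['accuracy'])
        model.fit(X_train, y_train, epochs=200, batch_size=32, verbose=0)
        _, acc = model.evaluate(X_test, y_test, verbose=0)
        print(f'learning_rate={lr}: test accuracy={acc:.3f}')

Plotting the training curves for each run, as the book does, makes the effect of a too-large or too-small learning rate even more obvious.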

In addition to working code examples, the book provides a number of focused references to papers, books, and other materials related to the content of each chapter.

Last but not least, each chapter concludes with a number of extensions that make a practitioner think harder and explore the chapter’s content at a much deeper level.

Conclusion

All in all, the book provides a comprehensive treatment of the hyper-parameters you may find in various types of Deep Learning networks, such as CNNs, RNNs, and LSTMs, and it makes clear that fine-tuning Deep Learning networks is possible even for a beginner, given the proper guidance which the book provides.

Stay fine tuned!

Driver drowsiness detection with Machine and/or Deep Learning.

It is actually even more useful than a Driver Assistant

In the previous post I mentioned that it would be nice to have a mobile phone application capable of detecting erroneously driven cars in front of the moving vehicle. Another, in my opinion even more impactful, application would be a mobile app that uses the ‘selfie’ camera to track the driver’s alertness while driving and indicates by voice or sound effects that the driver needs to take action.

Why is this application useful?

The drivers among us (and not only they) surely know that there are times when driving does not come easy, especially when a driver is tired, exhausted by too little sleep or a certain amount of stress, or is driving in an altered state of consciousness (against the law, by the way). This in turn is a cause of many road accidents that could be prevented if the driver were informed in a timely manner that he or she needs to stop and rest. The above-mentioned mobile application may assist in exactly this situation. It may even send a notification remotely to everyone concerned that there is a need to call the driver, text him, or do something else to grab his attention.

Is there anything like this in the wild?

At MIT, a group researching autonomous driving and headed by Lex Fridman used this approach to track drivers of Tesla cars and their interaction with the car. For more details, you may check out the links below with nice videos and explanations.

This implementation combines the best of the state of the art in machine and deep learning.

[Image: face_detection_car.png]

 

 

This implementation is from 2010, and apparently it is plain old OpenCV with no Deep Learning.

[Image: face2]

Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of behavior classified
    • Driver not paying attention to the road
      • By holding a phone
      • Being distracted by another passenger
      • By looking at road accidents, whatever
    • Driver drowsiness detection (a rough code sketch follows this list)
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Take action notifications
    • Voice announcement to the driver
    • Sound effects
    • Sending text, images notification to friends and family who may call and intervene
    • Automatically use Google Maps to navigate to the nearest coffee station, such as Starbucks, Dunkin’ (no more donuts) or Tim Horton’s (whichever applies to you)
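
To show what the drowsiness item in the list above could look like in code, here is a minimal desktop sketch of one common approach, the eye aspect ratio (EAR) computed from dlib facial landmarks, popularized by Adrian Rosebrock; it assumes the standard shape_predictor_68_face_landmarks.dat model file is available locally, the threshold values are guesses, and it is a prototype rather than a phone-ready implementation.

    # Sketch of eye-aspect-ratio (EAR) based drowsiness detection with OpenCV and dlib.
    # Assumes the standard dlib 68-point landmark model file has been downloaded.
    import cv2
    import dlib
    from scipy.spatial import distance

    def eye_aspect_ratio(eye):
        # eye: six (x, y) landmark points; EAR drops toward 0 as the eye closes
        a = distance.euclidean(eye[1], eye[5])
        b = distance.euclidean(eye[2], eye[4])
        c = distance.euclidean(eye[0], eye[3])
        return (a + b) / (2.0 * c)

    EAR_THRESHOLD = 0.25       # assumed threshold for "eyes closed"
    CLOSED_FRAMES_ALARM = 48   # assumed ~2 seconds at 24 fps

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

    cap = cv2.VideoCapture(0)  # stands in for the phone's 'selfie' camera
    closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray, 0):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            left_eye, right_eye = pts[42:48], pts[36:42]
            ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
            closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
            if closed_frames >= CLOSED_FRAMES_ALARM:
                print('DROWSINESS ALERT: take a break')  # voice/sound/text in the real app
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()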

Then what are we waiting for?

This application can be built quite ‘easily and fast’ if you have an Android developer account, some experience developing Android apps, have worked a little bit with GitHub, and have a certain amount of experience with and fascination for machine learning, or OpenCV with DL-based models. Grab your computer of choice and hurry to develop this marvelous piece of technology that will make you a new kind of person.

A possible plan of action

  • Get an Android phone, preferably a recent model for performance reasons.
  • Get a computer that can run OpenCV 4 and Android Studio.
  • Install OpenCV and all needed dependencies.
  • Run the example from Adrian Rosebrock’s blog post.
  • Install Android Studio.
  • Create an Android developer account (if you don’t have one; about $25 USD).
  • Use the Android app from this blog post as a blueprint and adapt the Python code from Adrian’s implementation into Java.
  • Publish the app at Google Play Store.
  • Share the app.

 

References

Driver Assistant. Detect strange drivers with Deep Learning.

Driver assistant app. Can it be done?

I was too optimistic about making this work on Android, since it takes more than a couple of seconds to process even a single frame. So, folks, doing what I hoped for in this post with OpenCV is currently not achievable on a mobile phone.

This video, which had 30 fps and was 11 seconds long, took about 22 minutes to process.

[Image: larry.png]

I wonder why there are close to no Android or iPhone applications that can detect, in real time, erroneous drivers on the road ahead of or beside you. The technology is there, and the algorithms, namely Deep Learning, are there too. It is possible to run OpenCV-based deep learning models in real time on mobile phones and get good enough performance to detect a suddenly stopping car ahead of you. Since a mobile phone’s field of view isn’t that large, I think it will be hard, if not impossible, to detect erroneous driving to the sides of the car. A good example of OpenCV-based object detection and classification using Deep Learning is the Mask R-CNN with OpenCV post by Adrian Rosebrock from PyImageSearch.
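
To get a feel for why real-time performance is the sticking point, a rough timing sketch like the one below (my own, using an assumed MobileNet-SSD Caffe model, which is much lighter than the Mask R-CNN from Adrian’s post) measures how long detection alone takes per frame; the file names and the input clip are assumptions.

    # Sketch: time per-frame object detection with OpenCV's dnn module.
    # The MobileNet-SSD file names and the input clip are assumptions; any
    # detector readable by cv2.dnn could be substituted.
    import time
    import cv2

    net = cv2.dnn.readNetFromCaffe('MobileNetSSD_deploy.prototxt',
                                   'MobileNetSSD_deploy.caffemodel')

    cap = cv2.VideoCapture('dashcam_clip.mp4')
    frames, total = 0, 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        start = time.time()
        net.setInput(blob)
        detections = net.forward()
        total += time.time() - start
        frames += 1
    cap.release()
    if frames:
        print(f'average {1000 * total / frames:.1f} ms per frame, '
              f'~{frames / total:.1f} fps for detection alone')

Anything much above roughly 40 ms per frame already falls short of the 24 fps budget mentioned in the requirements below, before any of the maneuver-detection math is even run.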

Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of objects classified
    • Car
    • Truck
    • Bus
    • Pedestrian (optional)
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Field of View 
    • About 60 degrees
  • Type of erroneous driving detected
    • Sudden stopping
    • Zigzag driving
    • Cutting off from the side (hard to do with single forward facing phone camera)
    • etc.

Then what are we waiting for?

This application can be built quite ‘easily and fast’ if you have an Android developer account, some experience developing Android apps, have worked a little bit with GitHub, and have a certain amount of experience with and fascination for machine learning, namely OpenCV DL-based models. To detect the dangerous maneuvering of others, a little bit of math is needed to calculate the speed, direction, and distance to other cars (a rough sketch follows below). The effort is worth investing time into. Even a little helper can have a big impact, unless drivers start staring at the mobile phone screen to see how it’s doing while driving.
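
For the sudden-stopping case, that math can be surprisingly simple: if the detector returns the bounding box of the car ahead in consecutive frames, the rate at which the box grows approximates the time to collision. The sketch below uses the standard scale-expansion approximation; the pixel heights and thresholds are illustrative assumptions.

    # Sketch: estimate time-to-collision (TTC) from bounding-box growth between frames.
    # Standard scale-expansion approximation: TTC ~ dt / (s - 1), where s is the ratio
    # of the box height now to the box height dt seconds earlier.
    def time_to_collision(prev_height_px, curr_height_px, dt_seconds):
        if curr_height_px <= prev_height_px:
            return float('inf')  # not closing in on the car ahead
        scale = curr_height_px / prev_height_px
        return dt_seconds / (scale - 1.0)

    # Example: the lead car's box grew from 80 px to 92 px over 0.5 s (12 frames at 24 fps).
    ttc = time_to_collision(80, 92, 0.5)
    print(f'estimated time to collision: {ttc:.1f} s')  # roughly 3.3 s
    if ttc < 2.0:
        print('WARNING: the car ahead is stopping suddenly')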

A possible plan of action

 

 

Language Acquisition. Multidisciplinary approach. Part three.

 


Multidisciplinary approach

This post about the research of natural language acquisition will be as short as the previous parts. This time I want to describe how the current research, which is too linguistically focused, may benefit from opening up to other disciplines such as Machine Learning, Computer Science, Neuroscience, and Physics.

Currently, language acquisition research is predominantly done by linguists. In my opinion, that is the reason why progress in this field is so slow. Researchers trained in linguistics alone cannot leverage advances in other fields related to natural language processing, such as Neural Machine Translation, which is part of Machine Learning; Neuroimaging, which is part of Neuroscience; Neuromorphic Chips, which are part of Electronics; and Dynamical Systems, which are part of Physics. The near absence of mathematical modeling is a very constraining factor, while mathematical modeling is exactly what propelled all the fields mentioned above. That is why groups consisting of generalists with a good grasp of math, machine learning, neuroscience, and engineering will be the most efficient in advancing the research and practical implementation of language acquisition.

Clearly defined goal

As Jeff Hawkins from Numenta, a company focused on developing a general neocortex algorithm based on neurological evidence, has mentioned, we already have enough data to work on a general theory of how the neocortex functions. There is no lack of data; on the contrary, data is in abundance. What is lacking is a clear goal of what we want to achieve and a clear plan for how to move in the right direction. It seems to me the best approach would be something along the lines of the Lunar Program of the 1960s and 70s. There is no need to invest billions of dollars to make progress, but rather a need for dedicated people with the right background and well-defined goals.

References

 

 

 

Second Language Acquisition. A Possible Link to Deep Learning. Part two.


Why is it important to understand how second language acquisition works?

Nowadays, we live in a world that is more interconnected than ever before. The Internet, including social media, has made communication as instant as possible. This in turn opened an opportunity for communication with people who speak different languages. But the slight complication is that in order to speak with someone who knows a different language than you do, you need to learn that new language. This is what second language acquisition (SLA) amounts to. Of course, technology found a workaround for this problem: Machine Translation. At first machine translation was rule-based and the results were not that good. Then came the turn of statistical, phrase-based machine translation. And finally, in November 2016, Google launched end-to-end Neural Machine Translation based on Artificial Neural Networks, now known as Deep Learning. The results of this new approach were quite impressive compared to the struggling previous approaches. If you haven’t used this service until now, try it and see for yourself. Since I am a native Russian speaker, I provide below an example of English-to-Russian translation that I can discuss.

[Image: English-to-Russian translation example]

The link to this specific translation is here. I think any Russian-speaking person would agree with me that this machine translation is grammatically correct and sounds fine.

Then, back to the subject of SLA. It seems to me that looking into how the Deep Learning techniques and models now used for Natural Language Processing (NLP) work, such as the Word Embeddings introduced by Tomas Mikolov and Long Short-Term Memory (LSTM) networks, which are a special case of Recurrent Neural Networks, may be very useful in tackling SLA. These and other approaches employed for machine translation may shed light on some aspects of first and second language acquisition in humans. Even though currently used neural networks are largely based on an oversimplified and superficial model of a neuron, dating back to the Perceptron introduced in the late 1950s, the successes of such methods cannot be easily dismissed. Why is that?

The notion “probability of a sentence” is an entirely useless one, under any known interpretation of this term. (Chomsky, 1969)

In recent years, thanks to advances in Graphics Processing Unit (GPU) capabilities and the introduction of new architectures and methods in Artificial Neural Networks, now known as Deep Learning, one of the long-standing challenges, namely Machine Translation, seems to have given in. The quote above, which belongs to Noam Chomsky, the founder of Generative Grammar and Generative Linguistics, may finally be proclaimed wrong, and Generative Grammar theory may be seen as proven incorrect by the successes of machine translation based on Recurrent Neural Networks and statistical methods. Then, if Generative Grammar is not that useful for modeling first or second language acquisition in humans, what else is? In the following parts I’ll provide my suggestions for which approaches may be more efficient in modeling natural language. And, as it isn’t hard to guess, Deep Learning may provide at least a partial answer.
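
To make the “probability of a sentence” point concrete, here is a toy, self-contained sketch of my own (the three-sentence corpus and all sizes are assumptions) of a small Keras LSTM language model that assigns a log-probability to a word sequence, which is exactly the quantity the quote above dismisses; a word salad made of the same words should come out far less probable than the grammatical original.

    # Toy sketch: an LSTM language model that assigns a probability to a sentence.
    import numpy as np
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    corpus = ['the cat sat on the mat',
              'the dog sat on the rug',
              'the cat chased the dog']

    tok = Tokenizer()
    tok.fit_on_texts(corpus)
    vocab = len(tok.word_index) + 1

    # Build (prefix -> next word) training pairs from every sentence.
    pairs = []
    for seq in tok.texts_to_sequences(corpus):
        for i in range(1, len(seq)):
            pairs.append(seq[:i + 1])
    max_len = max(len(p) for p in pairs)
    pairs = pad_sequences(pairs, maxlen=max_len)
    X, y = pairs[:, :-1], pairs[:, -1]

    model = Sequential([
        Embedding(vocab, 16),                # word embeddings, as in Mikolov's work
        LSTM(32),
        Dense(vocab, activation='softmax'),  # distribution over the next word
    ])
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
    model.fit(X, y, epochs=300, verbose=0)

    def sentence_log_prob(sentence):
        # Sum of log P(word_i | preceding words) under the trained model.
        seq = tok.texts_to_sequences([sentence])[0]
        log_p = 0.0
        for i in range(1, len(seq)):
            prefix = pad_sequences([seq[:i]], maxlen=max_len - 1)
            probs = model.predict(prefix, verbose=0)[0]
            log_p += np.log(probs[seq[i]])
        return log_p

    print(sentence_log_prob('the cat sat on the mat'))   # plausible sentence
    print(sentence_log_prob('mat the on sat cat the'))   # same words, scrambled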

 

References

 

 

 

Second Language Acquisition. What do we know? Part one.

Abstract

I hope that this post will be the first in a series of posts I want to write on the topic of second language acquisition, abbreviated in linguistics as SLA. What is meant by SLA is a language that a person learns as a second language (L2) after having acquired the first, native language (L1). Research into the subject shows that first and second language acquisition are interconnected and may affect each other, so it makes sense to discuss first language acquisition too.

Why am I interested in this topic?

Since childhood I have been interested in how we learn languages. As my life progressed from childhood to where I am now, I happened to acquire two languages with a very high level of proficiency and learned a number of others to some extent. As a native Russian speaker growing up in Ukraine, I learned Ukrainian as a second language at school, but my knowledge of it is quite superficial, though I understand it well when I hear it. Then I learned and spoke Hebrew for about 19 years. Even though I also studied English back in Ukraine, I never knew it well before I started learning it, mostly by reading magazines, back in 1999. So I would say that I have been gathering real experience with English for about 19 years too, though an important point is that I have only been using it for spoken communication for about 2 years now. In addition, back at Tel Aviv University I studied Japanese for a year, but my diminishing knowledge of it is rudimentary.

To summarize the above, I would rate my knowledge of these languages as follows, where by knowledge I mean speaking, reading, and writing.

  1. Russian 
  2. Hebrew
  3. English
  4. Ukrainian
  5. Japanese

I hope that this background explains a little bit why I might be interested in understanding how we learn a new language, be it a second, third, or N-th language.

Introduction

It is very strange that we know so little about how we learn first or second languages, considering the advances in Neuroscience since the early 2000s and in Artificial Neural Networks (now also known as Deep Learning) since 2012. I first heard about the subject of SLA back in 2004 when I studied Generative Linguistics at Tel Aviv University. Looking into the state of the art of the research back then, I heard only about Noam Chomsky’s and Stephen Krashen’s research into the subject. Now, almost 15 years later, the state of the art seems frozen in the same place. But my intuition indicates that by incorporating approaches from Supervised Machine Learning, which include Recurrent Neural Networks such as LSTMs and Convolutional Neural Networks with Attention Mechanisms, along with the very promising research done at the Numenta company and other approaches, it is possible to make significant progress in the field of second and first language acquisition.

A more detailed description of what I propose will follow in further parts.