Driver drowsiness detection with Machine and/or Deep Learning.

It is actually even more useful than the Driver Assistant.

In the previous post I mentioned that it would be nice to have a mobile phone application capable of detecting erratically driven cars in front of a moving vehicle. Another application, in my opinion even more impactful, would be a mobile application that uses the ‘selfie’ camera to track the driver’s alertness while driving and indicates, by voice or sound effects, that the driver needs to take action.

Why is this application useful?

Drivers, and not only drivers, surely know that there are times when driving does not come easily, especially when the driver is tired, short on sleep or under stress. Driving in an altered state of consciousness is also against the law, by the way. This is a cause of many road accidents that could be prevented if the driver were informed in a timely manner that he or she needs to stop and rest. The mobile application described above could assist in exactly this situation. It could even send a remote notification to whom it may concern that there is a need to call the driver, text them or do something else to grab their attention.

Is there anything like this in the wild?

An MIT group researching autonomous driving, headed by Lex Fridman, used this approach to track drivers of Tesla cars and their interaction with the car. For more details, check out the links below, which include a nice video and explanations.

This implementation combines the best of the state of the art in machine and deep learning.

[Image: face_detection_car.png]

 

 

This implementation is from 2010 and apparently uses plain old OpenCV with no Deep Learning.

[Image: face2]
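If you want to play with this classical, non-deep-learning approach yourself, here is a minimal sketch (not the code from that implementation) that uses the Haar cascades bundled with OpenCV to find a face and eyes in a webcam frame; in a real app the webcam would be the phone’s front camera:

```python
import cv2

# Haar cascades shipped with OpenCV (no deep learning involved)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # the 'selfie' camera in a phone, a webcam here
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]  # look for eyes only inside the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("face + eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```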

Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of behavior classified
    • Driver not paying attention to the road
      • By holding a phone
      • Being distracted by another passenger
      • By looking at road accidents, whatever
    • Driver drowsiness detection
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Take action notifications
    • Voice announcement to the driver
    • Sound effects
    • Sending text, images notification to friends and family who may call and intervene
    • Automatically use Google Maps to navigate to the nearest coffee shop, such as Starbucks, Dunkin’ (no more donuts) or Tim Hortons (select whichever applies to you)

Then what are we waiting for?

This application can be built quite ‘easily and fast’ if you have an Android developer account, have some experience developing Android apps, have worked a little with GitHub, and have a certain amount of experience with and fascination for machine learning, or OpenCV with DL-based models. Grab your computer of choice and hurry to develop this marvelous piece of technology that will make you a new kind of person.

A possible plan of action

  • Get an Android phone, preferably one of the latest models, for performance reasons.
  • Get a computer that can run OpenCV 4 and Android Studio.
  • Install OpenCV and all needed dependencies.
  • Run the example from Adrian Rosebrock’s blog post (a rough sketch of its core idea follows this list).
  • Install Android Studio.
  • Create an Android developer account (if you don’t have one; about $25 USD).
  • Use the Android app from this blog post as a blueprint and adapt the Python code from Adrian’s implementation into Java.
  • Publish the app on the Google Play Store.
  • Share the app.
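For reference, here is a rough sketch of the core idea behind that tutorial, the eye aspect ratio (EAR): if the eyes stay closed for too many consecutive frames, raise an alert. This is not the exact code from the post; the threshold and frame count are illustrative guesses, and dlib’s 68-point landmark model ("shape_predictor_68_face_landmarks.dat") has to be downloaded separately:

```python
from scipy.spatial import distance as dist
import dlib
import cv2

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points; the ratio drops toward zero as the eye closes
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.25      # below this the eye is considered closed (illustrative)
CLOSED_FRAMES_ALARM = 48  # roughly 2 seconds at 24 fps (illustrative)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)
closed = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 0):
        shape = predictor(gray, rect)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        left, right = pts[42:48], pts[36:42]   # standard dlib eye landmark indices
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        closed = closed + 1 if ear < EAR_THRESHOLD else 0
        if closed >= CLOSED_FRAMES_ALARM:
            print("DROWSINESS ALERT")  # play a sound or voice prompt here
cap.release()
```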

 

References

Driver Assistant. Detect strange drivers with Deep Learning.

Driver assistant app. Can it be done?

I was too optimistic about making this work on Android, since it takes more than a couple of seconds to process even a single frame. So folks, doing what I hoped for in this post with OpenCV is currently not achievable on a mobile phone.

This video, which had 30 fps and was 11 seconds long, took about 22 minutes to process.

[Image: larry.png]

I wonder why there are close to no Android or iPhone applications that can detect, in real time, erratic drivers on the road ahead of or beside you. The technology is there, and the algorithms, namely Deep Learning, are there too. It is possible to run OpenCV-based deep learning models in real time on mobile phones and get good enough performance to detect a suddenly stopping car ahead of you. Since a mobile phone’s field of view isn’t that large, I think it will be hard, if not impossible, to detect erratic driving to the sides of the car. A good example of OpenCV-based object detection and classification using Deep Learning is the Mask R-CNN with OpenCV post by Adrian Rosebrock at PyImageSearch.
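To give a feel for what running a detector through OpenCV’s DNN module looks like, here is a minimal sketch with a pretrained MobileNet-SSD Caffe model (a much lighter detector than Mask R-CNN). The file names and class list are the ones commonly distributed with OpenCV DNN tutorials and are assumptions here, not the setup from Adrian’s post:

```python
import cv2

# Assumed pretrained MobileNet-SSD Caffe files; the paths are placeholders.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
           "train", "tvmonitor"]

frame = cv2.imread("road.jpg")  # a single dash-cam style frame
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    label = CLASSES[int(detections[0, 0, i, 1])]
    if confidence > 0.5 and label in ("car", "bus", "motorbike", "person"):
        # scale the normalized box back to frame coordinates
        box = detections[0, 0, i, 3:7] * [w, h, w, h]
        x1, y1, x2, y2 = box.astype("int")
        print(f"{label}: {confidence:.2f} at ({x1},{y1})-({x2},{y2})")
```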

Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of objects classified
    • Car
    • Truck
    • Bus
    • Pedestrian (optional)
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Field of View 
    • About 60 degrees
  • Type of erroneous driving detected
    • Sudden stopping (see the toy heuristic after this list)
    • Zigzag driving
    • Cutting off from the side (hard to do with a single forward-facing phone camera)
    • etc.
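As a toy illustration of the sudden-stopping case, one simple heuristic is to watch how fast the tracked bounding box of the car ahead grows between frames: if it grows too fast, the gap is closing quickly. The thresholds below are made-up values, not tuned ones:

```python
GROWTH_ALARM = 1.15   # more than 15% area growth from one frame to the next
CONSECUTIVE = 5       # sustained over roughly 0.2 s at 24 fps

def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def sudden_stop_alert(box_history):
    """box_history: list of (x1, y1, x2, y2) boxes of the lead car, newest last."""
    growth_frames = 0
    for prev, cur in zip(box_history, box_history[1:]):
        if box_area(prev) > 0 and box_area(cur) / box_area(prev) > GROWTH_ALARM:
            growth_frames += 1
            if growth_frames >= CONSECUTIVE:
                return True
        else:
            growth_frames = 0
    return False
```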

Then what are we waiting for?

This application can be built quite ‘easily and fast’ if you have an Android developer account, have some experience developing Android apps, have worked a little with GitHub, and have a certain amount of experience with and fascination for machine learning, namely OpenCV DL-based models. To detect some of the dangerous maneuvering that others do, a little bit of math is needed as well, to estimate the speed, direction and distance of other cars. The effort is worth investing time into. Even a little helper can have a big impact, unless drivers start staring at the mobile phone screen to see how it’s doing while driving.
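As a sketch of the kind of math involved, the pinhole camera model gives a back-of-the-envelope distance estimate from the apparent width of the car ahead, and the change between frames gives a rough closing speed. The focal length and assumed car width below are illustrative values that would need real calibration:

```python
FOCAL_PX = 1000.0      # camera focal length in pixels (from calibration; assumed)
CAR_WIDTH_M = 1.8      # assumed average car width in meters
FPS = 24.0

def distance_m(box_width_px):
    """Distance to the car ahead, estimated from its apparent width in pixels."""
    return FOCAL_PX * CAR_WIDTH_M / box_width_px

def closing_speed_mps(prev_width_px, cur_width_px):
    """Positive value means the gap is shrinking (we are closing in)."""
    return (distance_m(prev_width_px) - distance_m(cur_width_px)) * FPS

# Example: the lead car's box widens from 120 px to 121 px in one frame
print(distance_m(121))              # about 14.9 m away
print(closing_speed_mps(120, 121))  # about 3 m/s closing speed
```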

A possible plan of action

 

 

Language Acquisition. Multidisciplinary approach. Part three.

 


Multidisciplinary approach

This post about research into natural language acquisition will be as short as the previous parts. This time I want to describe how the current research, which is too linguistically focused, may benefit from opening up to other disciplines, such as Machine Learning, Computer Science, Neuroscience and Physics.

Currently, language acquisition research is predominantly done by linguists. In my opinion, that is the reason why progress in this field is so slow. It is very clear that researchers trained in linguistics alone cannot leverage advances in other fields related to natural language processing, such as Neural Machine Translation, which is part of Machine Learning; Neuroimaging, which is part of Neuroscience; Neuromorphic Chips, which are part of Electronics; and Dynamical Systems, which are part of Physics. The lack of mathematical modeling is a very constraining factor, while it is exactly what propelled all the fields mentioned above. That is why groups consisting of generalists with a good grasp of math, machine learning, neuroscience and engineering will be the most efficient in advancing the research and practical implementation of language acquisition.

Clearly defined goal

As Jeff Hawkins of Numenta, a company focused on developing a general neocortex algorithm based on neurological evidence, has mentioned, we already have enough data to work on a general theory of neocortex functioning. There is no lack of data; on the contrary, data is in abundance. What is lacking is a clear goal of what we want to achieve and a clear plan for how to move in the right direction. It seems to me the best approach would be something along the lines of the Lunar Program of the 1960s and 1970s, though there is no need to invest billions of dollars to make progress, just dedicated people with the right background and well-defined goals.

References

 

 

 

Second Language Acquisition. A Possible Link to Deep Learning. Part two.

[Image: xtoy.jpg]

Why is it important to understand how second language acquisition works?

Nowadays, we live in a world that is more interconnected than ever before. The Internet, including social media, has made communication as instant as possible. This in turn opened an opportunity for communication with people who speak different languages. But the slight complication is that in order to speak with someone who knows a different language than you, you need to learn that new language. This amounts to second language acquisition (SLA). Surely, technology found a workaround for this problem using machine translation. At first machine translation was rule-based and the results were not that good. Then came the turn of statistical, phrase-based methods in machine translation. And finally, in November 2016, Google launched end-to-end Neural Machine Translation based on Artificial Neural Networks, now known as Deep Learning. The results of this new approach were quite impressive in comparison with the struggling previous approaches. If you haven’t used this service until now, try it and see for yourself. Since I am a native Russian speaker, I am providing below an example of an English-to-Russian translation that I can discuss.

[Image: English-to-Russian translation example]

The link for this specific translation is here. I think any Russian-speaking person would agree with me that this machine translation is grammatically correct and sounds fine.

Then back to the subject of SLA. It seems to me that looking into how the Deep Learning techniques and models now used for Natural Language Processing (NLP) work, such as word embeddings introduced by Tomas Mikolov and Long Short-Term Memory (LSTM) networks, which are a special case of Recurrent Neural Networks, may be very useful in tackling SLA. These and other approaches employed to tackle machine translation may shed light on some aspects of first and second language acquisition in humans. Even though the neural networks in use today are based largely on an oversimplified and superficial model of a neuron, dating back to the Perceptron introduced in the late 1950s, the successes of such methods cannot be easily dismissed. Why is that?
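As a tiny, purely illustrative example of what a word embedding is, here is a sketch using the gensim library’s Word2Vec on a toy corpus. Real embeddings are trained on billions of tokens, so the vectors here mean nothing by themselves, but the sketch shows the idea of words becoming vectors that can be compared:

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized sentences (made up for illustration)
sentences = [
    ["the", "driver", "stops", "the", "car"],
    ["the", "driver", "parks", "the", "car"],
    ["a", "child", "learns", "a", "language"],
    ["a", "child", "acquires", "a", "second", "language"],
]

# Train tiny 16-dimensional embeddings on the toy corpus
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, epochs=50)

print(model.wv["language"][:4])               # a slice of the learned vector
print(model.wv.similarity("car", "language")) # cosine similarity of two words
```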

The notion ”probability of a sentence” is an entirely useless one, under any known interpretation of this term. (Chomsky, 1969)

In recent years, thanks to advances in Graphics Processing Unit (GPU) capabilities and the introduction of new architectures and methods in Artificial Neural Networks, now known as Deep Learning, one of the long-standing challenges, namely machine translation, seems to have given way. The quote above, which belongs to Noam Chomsky, the founder of Generative Grammar and Generative Linguistics, may finally be proclaimed wrong, and Generative Grammar theory may be seen as shown to be incorrect by the successes of machine translation based on Recurrent Neural Networks and statistical methods. Then, if Generative Grammar is not that useful for modeling first or second language acquisition in humans, what else is? In the following parts I’ll provide my suggestions as to which approaches may be more efficient in modeling natural language. And, as it isn’t hard to guess, Deep Learning may provide at least a partial answer.

 

References

 

 

 

Second Language Acquisition. What do we know? Part one.

Abstract

I hope that this post will be the first in a series of posts I want to write on the topic of second language acquisition, abbreviated in linguistics as SLA. What is meant by SLA is the acquisition of a language that a person learns as a second language (L2) after having acquired the first one, the native language (L1). Research into the subject shows that first and second language acquisition are interconnected and may affect each other, so it makes sense to discuss first language acquisition too.

Why am I interested in this topic?

Since childhood I have been interested in how we learn languages. As my life progressed from childhood to where I am now, I happened to acquire two languages at a very high level of proficiency and learned a number of others to some extent. As a native Russian speaker growing up in Ukraine, I learned Ukrainian as a second language at school, but my knowledge of the language is quite superficial, though I understand it well when I hear it. Then I learned and spoke Hebrew for about 19 years. Even though I also studied English back in Ukraine, I never knew it well before I started to learn it, mostly by reading magazines, back in 1999. So I would say that I have been gathering real experience with English for about 19 years too, though one important point to make is that I only started to use it for spoken communication about 2 years ago. In addition, back at Tel Aviv University I studied Japanese for a year, but my diminishing knowledge of it is rudimentary.

To summarize the above, I would rate my knowledge of these languages as below, where by knowledge I mean speaking, reading and writing.

  1. Russian 
  2. Hebrew
  3. English
  4. Ukrainian
  5. Japanese

I hope that this background description explains a little why I might be interested in understanding how we learn a new language, be it a second, third or Nth language.

Introduction

It is very strange that we know so little about how we learn first or second languages, taking into consideration the advances in Neuroscience since the early 2000s and in Artificial Neural Networks (now also known as Deep Learning) since 2012. I first heard about the subject of SLA back in 2004, when I studied Generative Linguistics at Tel Aviv University. When looking into the state of the art of the research back then, I heard only about Noam Chomsky’s and Stephen Krashen’s research into the subject. Now, almost 15 years later, the state of the art of the research seems frozen in the same place. But my intuition indicates that by incorporating approaches from supervised machine learning, which includes Recurrent Neural Networks such as LSTMs and Convolutional Neural Networks with attention mechanisms, along with the very promising research done at the Numenta company and other approaches, it is possible to make significant progress in the field of second and first language acquisition.

A more detailed description of what I propose will be given in further parts.