How using GenAI tools blindly makes you dumb

In the age of GenAI dominance, it is easy to use these tools blindly, without thinking about how they come up with their answers.

The main issue with treating these tools as a black box is that people learn by making mistakes while applying what they are trying to learn. GenAI tools hand you a complete answer without you exerting any effort to solve the problem on your own. Hence, you miss a critical step in the learning process: without a challenge, there is no gain.

I can say this based on my own experience of using GenAI tools at work and at home dozens, if not hundreds, of times a day. The main issue with using GenAI tools at work, such as M365 Copilot Chat and the like, for software development tasks is the one I mentioned above. These tools may give you a complete implementation that works, but you don't learn anything new as a result, since you never try to implement the same thing on your own.

A good example is a situation where you'd like to have a Python script that transforms data from one format into another.
Recently, I needed to transform an OWASP Dependency Check plugin JSON report into a more user-friendly format, specifically a CSV file (easily opened in Excel), to be able to see security vulnerabilities at a glance and sort them by the NVD last-updated date.

Writing such a script in Python would certainly take me some time, for I would need to understand how to parse the JSON file, extract various fields, and then write the data into a CSV file. I had the bright idea of using GenAI to do it quickly, to save time and effort. Indeed, after a few back-and-forth iterations with M365 Copilot Chat I had a working Python script that delivered what I needed, without me even looking into the implementation. It was what I mentioned above: black-box treatment of the GenAI response. It worked, and I didn't care how. But the main point was that I didn't learn anything in the process, and I actually felt as if I was cheating.
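For illustration, here is a minimal sketch of what such a script might look like. The field names below (dependencies, fileName, vulnerabilities, updated) are my assumptions about the report layout for illustration purposes, not the verified Dependency Check schema:

```python
import csv
import json

def report_to_csv(json_path, csv_path):
    # Parse the Dependency Check JSON report
    with open(json_path) as f:
        report = json.load(f)

    # Flatten dependency/vulnerability pairs into rows
    # (field names here are assumed, not the verified schema)
    rows = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            rows.append({
                "dependency": dep.get("fileName", ""),
                "cve": vuln.get("name", ""),
                "severity": vuln.get("severity", ""),
                "updated": vuln.get("updated", ""),
            })

    # Sort by the NVD last-updated date; ISO 8601 date strings sort correctly
    rows.sort(key=lambda r: r["updated"], reverse=True)

    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["dependency", "cve", "severity", "updated"])
        writer.writeheader()
        writer.writerows(rows)
```

The resulting CSV opens directly in Excel, with the most recently updated vulnerabilities at the top.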

In the end I needed to update the script, so I did look at the implementation, and it took me some time to figure out how it worked. It was written well, in a modular fashion, but I don't think I would ever have done it in a similar way to the GenAI model.
I was able to understand and adapt the code enough for it to run, but it didn't work quite right. So I said to myself, well, it's time to save my effort, and asked GenAI to refactor it, which it did correctly the first time.

So, in the end, I did save time on that script, and I did learn how it worked, since I was curious, but I didn't get the same amount of experience as I would have had I implemented it on my own from start to finish.

In this case, I am not talking about GitHub Copilot, for I don't use it currently. But even using free GenAI tools can be harmful in the long run if you treat them as black boxes instead of tools that help you think better. They cannot substitute for thinking, for then you become dumber in the process.

Also, with recent improvements in Anthropic's Claude agents, people such as Andrej Karpathy state that we are in a new era of software programming. It sounds like a valid statement to make, since he's not the only one noticing this change in the performance of GenAI agents, but the crucial point is that people who had no programming experience before starting to use such tools will certainly have a gap in programming skills, for they never faced the challenges that teach you how to really think. Karpathy is a good example, since he was a programmer long before GenAI tools were on the horizon.

In the end, GenAI tools can be very helpful and effective in helping you learn whatever topic you want, but they can be just as effective at preventing you from gaining valuable experience while you master the subject.

GPT provides some insight into how ketone loss happens

Image generated with Windows Copilot

Windows Copilot shines

The reply is below


🌟 Exploring Energy Loss During Ketosis: The Role of Acetone and Acetoacetate 🌟

When our bodies enter ketosis—often due to low carbohydrate intake or fasting—we begin producing ketone bodies as alternative fuel sources. The primary ketone bodies are:

  • Beta-Hydroxybutyrate (BHB)
  • Acetoacetate (AcAc)
  • Acetone

While these molecules serve as vital energy sources, not all are utilized efficiently, leading to some energy loss. Here’s how it happens:


🔹 Energy Loss Through Ketone Excretion

1. Acetone Exhalation

  • Formation: Acetone is produced from the spontaneous decarboxylation of acetoacetate.
  • Excretion: Being volatile, acetone is exhaled through the lungs and also lost via urine and sweat.
  • Energy Implications: The exhaled acetone contains caloric energy that the body doesn’t recover—this represents direct energy loss.

2. Urinary Excretion of Acetoacetate and BHB

  • Renal Handling: The kidneys filter blood ketone bodies, reabsorbing most but not all. Excess ketones are excreted in the urine.
  • Energy Implications: The energy in these excreted ketones is also lost to the body. During high ketone production, urinary losses can be significant.

🔹 Key Research Findings

• Magnitude of Energy Loss

  • Quantitative Estimates: Energy loss through ketone excretion can range from 50 to 100 kilocalories per day in individuals in ketosis.
  • Influencing Factors: Depth of ketosis, metabolic rates, and kidney function affect the extent of ketone loss.

• Metabolic Adaptations Over Time

  • Increased Efficiency: With prolonged ketosis, the body adapts—kidneys enhance ketone reabsorption, and tissues improve ketone utilization, reducing losses.
  • Reduced Acetone Production: The body produces less acetone over time, minimizing exhaled energy loss.

• Clinical Contexts

  • Diabetes Mellitus: In uncontrolled type 1 diabetes, ketone production surges, and energy loss through excretion becomes more pronounced, contributing to weight loss.
  • Ketogenic Diets: Initial ketone losses are higher but tend to decrease as the body adapts to ketosis.

🔹 Understanding the Energy Calculations

• Caloric Content of Ketone Bodies

  • Beta-Hydroxybutyrate and Acetoacetate: Approximately 4.5 kcal per gram.
  • Acetone: While not a significant energy source, its loss still represents unused potential energy.

• Estimating Daily Losses

  • Urinary Ketones: Losing 10 grams of ketones via urine equates to about 45 kcal lost per day.
  • Exhaled Acetone: Though smaller in quantity, exhaled acetone adds to the total energy deficit.

🔹 Implications of Ketone Energy Loss

• Weight Management

  • Contributing Factor: Inefficient ketone utilization and subsequent energy loss may aid in weight loss by creating a caloric deficit.
  • Dietary Adjustments: Understanding this loss can help calibrate caloric intake for those on ketogenic diets.

• Metabolic Health

  • Efficiency Over Time: As the body adapts and reduces energy loss, dietary adjustments may be necessary to maintain weight loss momentum.
  • Monitoring Ketosis Levels: Regular testing of blood or urine ketone levels can inform about the state of ketosis and metabolic efficiency.

🔹 Additional Insights

• Breath Acetone as a Marker

  • Non-Invasive Monitoring: Breath acetone levels correlate with blood ketone levels, offering a convenient way to monitor ketosis.
  • Research Utilization: Used to estimate acetone production and associated energy loss.

• Ketone Metabolism Beyond Energy

  • Signaling Molecules: Ketone bodies play roles in gene expression, inflammation, and oxidative stress.
  • Therapeutic Potential: Minimizing unnecessary energy loss can enhance the benefits of ketogenic diets, especially in medical conditions like epilepsy or metabolic disorders.

🔹 Connecting the Dots

The loss of energy through acetone exhalation and acetoacetate excretion during ketosis, while seemingly minor, has meaningful implications:

  • Metabolic Efficiency: Highlights the balance the body achieves between utilizing alternative energy sources and conserving energy.
  • Dietary Planning: Emphasizes the importance of adjusting nutritional strategies as the body’s adaptation to ketosis evolves.
  • Health Outcomes: Offers insights that could improve interventions for metabolic diseases, weight management, and overall well-being.

Understanding the nuances of ketone metabolism and energy loss showcases the body’s remarkable adaptability! Whether for personal health goals or academic interest, delving into this area uncovers the intricate interplay between diet, metabolism, and energy utilization. 🌟
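The energy arithmetic in the reply is easy to sanity-check with a few lines (a quick sketch; the 10-gram figure is the reply's own illustrative number):

```python
KCAL_PER_GRAM = 4.5  # approximate energy content of BHB/acetoacetate, per the reply

def urinary_ketone_loss_kcal(grams_excreted_per_day):
    # Energy lost per day via urinary ketone excretion
    return grams_excreted_per_day * KCAL_PER_GRAM

print(urinary_ketone_loss_kcal(10))  # 45.0, matching the ~45 kcal/day estimate
```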


How to solve programming tasks with GPT powered chatbot

Photo by D koi on Unsplash

Code generation to the rescue

As you have probably heard by now, ChatGPT, a machine-learning-based chatbot created by OpenAI, is very famous. It is based on the Generative Pretrained Transformer model, hence GPT for short. It's quite successful at various tasks, ranging from text summarization and essay generation to question answering. But did you know that it can also generate working code in various programming languages? If you didn't, then this post will provide you with some ideas to try.

Task at hand

For example, in the JSON Crack online tool below you can see what such a list of objects can look like.

To summarize, what needs to be done is as follows.

Using Java Stream API approach

Java 8 introduced the powerful Streams API, which allows processing collections in a functional way. For example, it is possible to filter, map, or sort a collection in a single line of code using a declarative syntax somewhat similar to SQL. So accomplishing the task of sorting a list of objects that have a createdDate field of Date type is quite simple, and if you google how to do it you'll find a number of solutions.

For example, this is how it can be done, based on a Stack Overflow answer.

List<Order> orders = list.stream().
                        sorted(comparing(Order::getCreatedDate)).
                        collect(Collectors.toList());

First prompt

Write a java code to sort a list of objects while converting a string field to date type

and this is the solution that the Bing chatbot provided


Certainly! Here’s an example of a Java code snippet that sorts a list of objects while converting a string field to date type:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<YourObject> yourList = new ArrayList<>();
        // Add objects to yourList

        // Sort the list based on the string field "dateString"
        Collections.sort(yourList, new Comparator<YourObject>() {
            @Override
            public int compare(YourObject o1, YourObject o2) {
                SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
                try {
                    Date date1 = dateFormat.parse(o1.getDateString());
                    Date date2 = dateFormat.parse(o2.getDateString());
                    return date1.compareTo(date2);
                } catch (ParseException e) {
                    e.printStackTrace();
                }
                return 0;
            }
        });

        // Print the sorted list
        for (YourObject obj : yourList) {
            System.out.println(obj);
        }
    }
}

class YourObject {
    private String dateString;

    public String getDateString() {
        return dateString;
    }

    public void setDateString(String dateString) {
        this.dateString = dateString;
    }

    // Add other fields and methods as needed
}

In this example, we have a list of objects of type YourObject that contains a string field called dateString. We want to sort the list based on this field, treating it as a date.

To achieve this, we use the Collections.sort() method and provide a custom Comparator implementation. Inside the compare() method, we parse the dateString of each object into a Date object using a SimpleDateFormat. We then compare the two dates using the compareTo() method and return the result.

Please note that you need to replace YourObject with the actual class name and modify the code according to your specific requirements.

Let me know if you need any further assistance!

Second more nuanced prompt

write code to sort a list of objects while converting a string field to date type using java stream api

Certainly! Here’s an example of a Java code snippet that sorts a list of objects while converting a string field to date type using the Java Stream API:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<YourObject> yourList = new ArrayList<>();
        // Add objects to yourList

        // Sort the list based on the string field "dateString" using Java Stream API
        yourList.sort(Comparator.comparing(obj -> {
            SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
            try {
                Date date = dateFormat.parse(obj.getDateString());
                return date;
            } catch (ParseException e) {
                e.printStackTrace();
            }
            return null;
        }));

        // Print the sorted list
        yourList.forEach(System.out::println);
    }
}

class YourObject {
    private String dateString;

    public String getDateString() {
        return dateString;
    }

    public void setDateString(String dateString) {
        this.dateString = dateString;
    }

    // Add other fields and methods as needed
}

In this example, we have a list of objects of type YourObject that contains a string field called dateString. We want to sort the list based on this field, treating it as a date.

To achieve this using the Java Stream API, we use the sort() method of the List interface and provide a lambda expression as the comparator. Inside the lambda expression, we parse the dateString of each object into a Date object using a SimpleDateFormat. We then return the parsed date for comparison.

Please note that you need to replace YourObject with the actual class name and modify the code according to your specific requirements.

Let me know if you need any further assistance!

Testing the generated code almost as is in an online Java compiler

Well, that last answer was what I needed. We can take it and check that it runs almost as is in an online Java compiler. I made a couple of small adjustments to the generated code:

  • I instantiated three objects with dateString populated.
  • I added these objects to a list, and
  • I adjusted the println call to print the dateString field instead of the object itself.

This is how it looked after running

// Online Java Compiler
// Use this editor to write, compile and run your Java code online

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<YourObject> yourList = new ArrayList<>();
        // Add objects to yourList
        YourObject obj1 = new YourObject();
        obj1.setDateString("2023-09-01");
        
        YourObject obj2 = new YourObject();
        obj2.setDateString("2022-07-02");
        
        YourObject obj3 = new YourObject();
        obj3.setDateString("2021-08-06");
        
        yourList.add(obj1);
        yourList.add(obj2);
        yourList.add(obj3);

        // Sort the list based on the string field "dateString" using Java Stream API
        yourList.sort(Comparator.comparing(obj -> {
            SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
            try {
                Date date = dateFormat.parse(obj.getDateString());
                return date;
            } catch (ParseException e) {
                e.printStackTrace();
            }
            return null;
        }));

        // Print the sorted list
        yourList.forEach((obj -> System.out.println(obj.getDateString())));
    }
}

class YourObject {
    private String dateString;

    public String getDateString() {
        return dateString;
    }

    public void setDateString(String dateString) {
        this.dateString = dateString;
    }
    

    // Add other fields and methods as needed
}

Blowing your socks off

Better understanding with Optimization for Machine Learning

Long awaited book from Machine Learning Mastery

Recently, I've been reading the new Optimization for Machine Learning book from Machine Learning Mastery, written by Jason Brownlee. It just so happened that I read it in full from start to finish, since I was one of the technical reviewers of the book. The book was interesting to read thanks to a number of ingredients.

As always, Jason has written an engaging book with practical advice that can be put into action right away using open-source software on Linux, Windows, or MacOS. Apart from this, the book has just enough clearly explained theoretical material that even beginning machine learning practitioners can play with the optimization algorithms described in it.

What I liked and what surprised me in the book

Personally, I think it was while reading and working with this book that I truly understood what optimization is, how it is used in machine learning, what an optimization algorithm such as gradient descent is, and how to implement one from scratch. I also very much enjoyed the chapter about Global Optimization with various types of Evolution Algorithms. Funnily enough, about two weeks after I finished reading the book I came across Donald Hoffman's Interface Theory of Perception in relation to consciousness, which is based on the Theory of Evolution by Natural Selection. For example, one of his papers written with colleagues, namely Does evolution favor true perceptions?, provides an example of a Genetic Algorithm (GA) which very much resembles the GA in Chapter 17 of the book. It is highly recommended reading for anyone interested in how consciousness arises in the mind. By the way, does it?
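To give a taste of the from-scratch spirit (my own minimal sketch, not the book's code), gradient descent simply steps repeatedly against the gradient until it settles near a minimum:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimal gradient descent sketch: move against the gradient each step."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))  # 3.0, the minimum of f
```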

Overall

The Optimization for Machine Learning book is what you have come to expect from Machine Learning Mastery books. It's interesting, it's practical, and it makes you understand what it is that you are doing in Machine Learning. As always, each chapter has extensive references to tutorials, technical papers, and books on machine learning. So don't wait, start reading it; maybe you'll come up with a new theory of how consciousness emerges in the mind.

References

Donald D. Hoffman, Manish Singh, Justin Mark, “Does evolution favor true perceptions?”, Proceedings Volume 8651, Human Vision and Electronic Imaging XVIII; 865104 (2013)

Read as if you edit. Editing Imbalanced Classification with Python

Reading can be hard, but rewarding

I like to read books. They provide me with the opportunity to discover new worlds and learn new things. Unlike other sources, e.g. YouTube tutorials, which I find a little distracting, books don't seduce you into clicking on them; instead they lie on a flat surface and don't care. In addition, it's quite hard to physically jump from book to book in a haphazard way, reading them in parallel. But when you find an interesting book, say a novel, it can draw your attention and hold you captive until you finish reading it. And there are books that are interesting and at the same time require from the reader a certain amount of concentration and work to get the most out of reading them.

I call such reading a workout. It's similar to physical exercise, which can be unpleasant at times but has the reward of a deeper understanding and grasp of concepts. It also resembles editing a book; call it testing, even alpha testing if you have a software background. By reading a book as if you were editing it, and as such paying attention to details and working on it from A to Z, you are bound to better understand the information the book tries to convey.

Read as if you edit

With the recent wave of high interest in Machine and Deep Learning, a lot of books have been published on the subject to satisfy hungry readers. The books range from popular explanations for a general audience to technical books that teach readers how to apply Machine Learning to day-to-day practical applications. The Machine Learning Mastery web site provides a number of such books, written with a hands-on, experience-first approach. This makes them perfect candidates for the read-as-if-you-edit approach, since the best way to get actual practical experience in Machine Learning is by actually applying the examples from each chapter. For readers who aren't familiar with Machine Learning Mastery books, all of them are structured in a similar way, where each chapter has just enough theory to get you started using practical code samples.

It is possible to only read through the books, without running a single code sample, and have a feeling of understanding how things work and be happy with yourself. The issue is that this approach brings almost zero value and provides you with no real experience. Instead, think of yourself as an editor or a tester who has been tasked with finding mistakes, omissions, unclear explanations, or wrong code samples. Doing this will help you get the most out of the book, since it forces you to actually run the code and play with it by adjusting it. It also helps you get a better understanding of the material by cross-referencing unclear points with searches on the internet or in other books.

Don't think that the read-as-if-you-edit approach is only applicable to Machine Learning books. I find it just as useful when reading books on mathematics, physics, and engineering. Actually, it can be applied to any source of written information; it then becomes a critical reading approach, where you don't blindly trust what you read but instead analyze it and verify the information.

So how was it, editing the Imbalanced Classification with Python book?

I very much liked editing this recent book, since it had enough theory, math, and new machine learning concepts to get me excited to work with it from start to finish. The book has about 450 pages of actual content, and it took me about three hours a day for nine days to finish it. I can't say it was smooth and easy. The content, at least for me, required cross-checking with other sources. The code samples more than once required referencing Python library documentation and quick dives into sources on imbalanced classification, statistics, and information theory.

All in all, reading this from A to Z made me realize the importance of knowing that the data could be imbalanced, as in the case of anomaly detection, and that one cannot train a model assuming an equal distribution between positive and negative classes, since such a model will tend to classify incorrectly in practice.
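The point is easy to demonstrate: on a 95/5 class split, a model that always predicts the majority class looks accurate while detecting nothing (a toy sketch of my own, not from the book):

```python
# 1 = anomaly (positive class), 0 = normal; a 95/5 imbalanced data set
labels = [0] * 95 + [1] * 5
predictions = [0] * 100  # a "model" that always predicts the majority class

# Accuracy looks great, but recall on the minority class is zero
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == y == 1 for p, y in zip(predictions, labels)) / labels.count(1)
print(accuracy, recall)  # 0.95 0.0
```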

 

The new book on Deep Learning for CV that is worth working through


Books are not the same especially on Deep Learning

It feels, and really is the case, that we are bombarded by a growing number of books on Machine Learning, especially Deep Learning. Due to the large number of books published, it is quite difficult to tell which book is worth investing time and effort in. Since humans as a species have a constrained lifespan, which books you choose to read matters. That is why you'd better read the best books available on the subject.

The paragraph above may sound a little like advertising, but I really think a good book, which is frankly a subjective notion, makes a difference. I would say that a good book, in my opinion, is one that engages you, makes you think at least a little, and, importantly, makes you strive to check the references it provides and find additional information beyond what the book already includes.

What’s inside?


The newest Deep Learning for Computer Vision book from Machine Learning Mastery brings exactly this. It is crafted in the well-recognizable Machine Learning Mastery style: a practical approach with a simple-to-complex presentation of information, spiced with just enough theory to get you started in the Machine and Deep Learning fields.

More details on content

  • If you have read any of their books previously, you know that each chapter has battle-proven, working Python code samples that run on MacOS, Linux, and even Windows (who would have thought).
  • Each chapter is composed in such a way that it may serve as a standalone tutorial, but overall they are tied together in a logical order if you prefer to read the book from beginning to end.
  • What I personally find valuable are the Extensions at the end of the chapters, which provide additional tasks for practicing the chapter's material.
  • Not to mention the references to books, papers, and other relevant material mentioned in each chapter.

More Technical information


The book's subject is the Python libraries used to process images while working on Machine Learning or Deep Learning projects. The main library used for Deep Learning is Keras, including the helpful Keras Functional API. In addition, the book describes how to download, build, train, and run models such as Mask R-CNN, Multi-task Cascaded CNN, FaceNet, and others using TensorFlow.

 

So far

If this post made you curious about the book then give it a try. You may find it very helpful.

 

Better Deep Learning or How To Fine Tune Deep Learning Networks

Effective Deep Learning is possible

Nowadays, when Deep Learning libraries such as Keras make composing Deep Learning networks as easy a task as it can be, one important aspect still remains quite difficult. That aspect, as you may have guessed, is the tuning of a large number of hyper-parameters: network capacity (the number of neurons and layers), learning rate and momentum, number of training epochs and batch size, and the list goes on. But now it may become less of a hassle, since the new Better Deep Learning book by Jason Brownlee focuses exactly on the issue of tuning hyper-parameters as well as possible for the task at hand.

 

Why is it worth reading this book?

When I worked through this book from beginning to end, I liked that, like other books written by Jason Brownlee, it followed the familiar path of self-contained chapters that provide just enough theory and detailed, practical working examples that can be extended and built upon by practitioners. The code samples themselves are concise and can be run on an average PC without the need for a GPU, but they nevertheless convey very well what the author intended to show.

While playing with the code samples in each chapter, I found myself thinking that I was back at college doing a lab for electrical engineering. I felt this way because each chapter provides a great number of experiments with related graphs that help you understand the hyper-parameter's behavior in different configurations.

How can this book help me?

Better Deep Learning may help you if you have initial experience with Deep Learning networks and want to fine-tune network performance in a more controlled way than simple trial and error. Since the book uses restricted, simple datasets generated with Python libraries, it is easy to run each experiment and quickly understand how each hyper-parameter affects network behavior.

In addition to working code examples, the book provides a number of focused references to papers, books and other materials that are related to the content of each chapter.

Last but not least, each chapter concludes with a number of extensions that make a practitioner think harder and engage with the chapter's content at a much deeper level.

Conclusion

All in all, the book provides a comprehensive treatment of all the hyper-parameters you may find in various types of Deep Learning networks, such as CNNs, RNNs, and LSTMs, and it makes clear that fine-tuning Deep Learning is possible even for a beginner with the proper guidance, which the book provides.

Stay fine tuned!

Driver drowsiness detection with Machine and/or Deep Learning

It is actually even more useful than a Driver Assistant

In the previous post I mentioned that it would be nice to have a mobile phone application capable of detecting erratically driven cars in front of a moving vehicle. Another, in my opinion more interesting and even more impactful, application would be a mobile app that uses the 'selfie' camera to track the driver's alertness while driving, indicating by voice or sound effects when the driver needs to take action.

Why is this application useful?

The drivers among us surely know that there are times when driving does not come easy, especially when a driver is tired, exhausted by lack of sleep or a certain amount of stress, and is driving in an altered state of consciousness (against the law, by the way). This in turn is a cause of many road accidents that could be prevented if the driver were informed in a timely manner that he or she needs to stop and rest. The mobile application mentioned above could assist in exactly this situation. It could even send a remote notification to all concerned that there is a need to call the driver, text them, or do something else to grab their attention.

Is there anything like this in the wild?

An MIT group researching autonomous driving, headed by Lex Fridman, used this approach to track drivers of Tesla cars and their interaction with the car. For more details, you may check out the links below with nice videos and explanations.

This implementation combines the best of the state of the art in machine and deep learning.


This implementation is from 2010, and apparently it is plain old OpenCV with no Deep Learning.


Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of behavior classified
    • Driver not paying attention to the road
      • By holding a phone
      • Being distracted by another passenger
      • By looking at road accidents, whatever
    • Driver drowsiness detection
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Take action notifications
    • Voice announcement to the driver
    • Sound effects
    • Sending text, images notification to friends and family who may call and intervene
    • Automatically use Google Maps to navigate to the nearest coffee station, such as Starbucks, Dunkin' (no more donuts), or Tim Horton's (select whichever applies to you)
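The alerting part of the requirements above can be sketched independently of the detector. Below is a toy PERCLOS-style monitor; the per-frame eyes_closed signal is assumed to come from whatever face/eye detector is used, and the class name, window, and threshold are illustrative values of mine, not from any real app:

```python
from collections import deque

class DrowsinessMonitor:
    """Flag drowsiness when the fraction of recent frames with closed
    eyes exceeds a threshold (a PERCLOS-style measure)."""

    def __init__(self, fps=24, window_seconds=5, threshold=0.6):
        # Keep only the most recent window of per-frame observations
        self.frames = deque(maxlen=fps * window_seconds)
        self.threshold = threshold

    def update(self, eyes_closed):
        self.frames.append(1 if eyes_closed else 0)
        return self.is_drowsy()

    def is_drowsy(self):
        if not self.frames:
            return False
        return sum(self.frames) / len(self.frames) >= self.threshold

# Tiny window for demonstration: 4 frames, alert at 50% eyes-closed
m = DrowsinessMonitor(fps=4, window_seconds=1, threshold=0.5)
for closed in [False, True, True, True]:
    alert = m.update(closed)
print(alert)  # True: 3 of the last 4 frames had closed eyes
```

The real app would feed this from the camera loop and trigger the voice or text notifications listed above when it returns True.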

Then what are we waiting for?

This application could be built quite 'easily and fast' if you have an Android developer account, have experience developing Android apps, have worked a little with GitHub, and have a certain amount of experience with and fascination for machine learning, or for OpenCV with DL-based models. Grab your computer of choice and hurry to develop this marvelous piece of technology that will make you a new kind of person.

A possible plan of action

  • Get an Android phone, preferably a recent model, for performance reasons.
  • Get a computer that can run OpenCV 4 and Android Studio.
  • Install OpenCV and all needed dependencies.
  • Run the example from Adrian Rosebrock’s blog post.
  • Install Android Studio.
  • Create a Android developer account (if you don’t have one, about $25 USD)
  • Use the Android app from this blog post as a blueprint and adapt the Pyhton code from Adrian’s implementation into Java.
  • Publish the app at Google Play Store.
  • Share the app.
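For the desktop-side steps of the plan above, the setup is roughly the following; this is a sketch assuming a pip-based Python environment, and Adrian’s blog post has its own, more detailed installation instructions:

```shell
# Sketch of the desktop-side setup (the OpenCV steps of the plan above).
# Assumes a pip-based Python environment.
pip install opencv-python   # OpenCV 4 Python bindings
pip install numpy imutils   # common dependencies in PyImageSearch examples
```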


References

Driver Assistant. Detect strange drivers with Deep Learning.

Driver assistant app. Can it be done?

I was too optimistic about making this work on Android, since it takes more than a couple of seconds to process even a single frame. So folks, what I hoped for in this post is currently not achievable with OpenCV on a mobile phone.

This video, which was 11 seconds long at 30 fps, took about 22 minutes to process.
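A quick back-of-the-envelope check, using only the figures from this post, shows how far off real time that is:

```python
# Back-of-the-envelope check of the numbers above.
video_seconds = 11
fps = 30
frames = video_seconds * fps      # 330 frames in the clip
processing_seconds = 22 * 60      # 22 minutes of processing -> 1320 s

seconds_per_frame = processing_seconds / frames
print(seconds_per_frame)          # 4.0 seconds per frame

# Real time would allow at most 1/30 s per frame.
realtime_budget = 1 / fps
print(round(seconds_per_frame / realtime_budget))  # 120, i.e. ~120x too slow
```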

larry.png

I wonder why there are close to no Android or iPhone applications that can detect, in real time, erratic drivers on the road ahead of you or beside you. The technology is there, and the algorithms, namely deep learning, are there too. It is possible to run OpenCV-based deep learning models in real time on mobile phones and get good enough performance to detect a suddenly stopping car ahead of you. Since a mobile phone’s field of view isn’t that large, I think it will be hard, if not impossible, to detect erratic driving to the sides of the car. A good example of OpenCV-based object detection and classification using deep learning is the Mask R-CNN with OpenCV post by Adrian Rosebrock from PyImageSearch.
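Whatever detector ends up running on the phone (Mask R-CNN in Adrian’s post, or something lighter), the post-processing step is the same: keep only detections of the vehicle classes this app cares about, above some confidence cutoff. A minimal sketch, where the class names follow the COCO label set and the 0.5 threshold is an assumption:

```python
# Filter raw detections down to the vehicle classes this app cares about.
# Each detection is a (class_name, confidence, bounding_box) tuple, as a
# typical OpenCV DNN post-processing loop would produce.

VEHICLE_CLASSES = {"car", "truck", "bus"}  # COCO labels; pedestrians optional
CONF_THRESHOLD = 0.5                       # assumed confidence cutoff

def filter_vehicles(detections, classes=VEHICLE_CLASSES, min_conf=CONF_THRESHOLD):
    return [d for d in detections
            if d[0] in classes and d[1] >= min_conf]

detections = [
    ("car", 0.92, (10, 20, 50, 40)),
    ("person", 0.88, (5, 5, 15, 30)),    # not a vehicle, dropped
    ("truck", 0.30, (60, 10, 120, 60)),  # below threshold, dropped
]
vehicles = filter_vehicles(detections)
```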

Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of objects classified
    • Car
    • Truck
    • Bus
    • Pedestrian (optional)
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Field of View 
    • About 60 degrees
  • Type of erroneous driving detected
    • Sudden stopping
    • Zigzag driving
    • Cutting in from the side (hard to do with a single forward-facing phone camera)
    • etc.

Then what are we waiting for?

This application can be built quite ‘easily and fast’ if you have an Android developer account, some experience developing Android apps, have worked a little with GitHub, and have a certain amount of experience with and fascination for machine learning, namely OpenCV DL-based models. To detect the dangerous maneuvers others make, a little bit of math is needed, as well as calculating the speed, direction, and distance of other cars. The effort is worth investing time into. Even a little helper can have a big impact, unless drivers start staring into the mobile phone screen to watch how it’s doing while driving.
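For the sudden-stopping case, the ‘little bit of math’ can be surprisingly simple: with a single forward-facing camera, the apparent width of the car ahead grows as you close in on it, and the ratio of the width to its growth rate gives an estimated time to contact, with no absolute distance required. A sketch of that idea, where the 24 fps figure matches the requirements above and the two-second warning threshold is my assumption:

```python
# Estimate time-to-contact to the car ahead from the growth of its
# bounding-box width across two consecutive frames (single camera).
# tau = width / (d width / d t); no absolute distance is needed.

FPS = 24            # frame rate assumed in the requirements above
WARN_SECONDS = 2.0  # assumption: warn when contact is less than ~2 s away

def time_to_contact(width_prev, width_now, fps=FPS):
    """Return estimated seconds to contact, or None if the gap is not closing."""
    growth_per_second = (width_now - width_prev) * fps
    if growth_per_second <= 0:
        return None  # box shrinking or constant: not approaching
    return width_now / growth_per_second

def should_warn(width_prev, width_now, fps=FPS, warn_s=WARN_SECONDS):
    ttc = time_to_contact(width_prev, width_now, fps)
    return ttc is not None and ttc <= warn_s
```

In practice the width estimates would be noisy, so some smoothing over several frames would be needed before trusting the warning.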

A possible plan of action


Language Acquisition. Multidisciplinary approach. Part three.


no_frame

Multidisciplinary approach

This post about natural language acquisition research will be as short as the previous parts. This time I want to describe how the current research, which is too linguistically focused, may benefit from opening up to other disciplines, such as machine learning, computer science, neuroscience, and physics.

Currently, language acquisition research is predominantly done by linguists. In my opinion, that is the reason progress in this field is so slow. It is very clear that researchers trained in linguistics alone cannot leverage advances in other fields related to natural language processing, such as neural machine translation, which is part of machine learning; neuroimaging, which is part of neuroscience; neuromorphic chips, which are part of electronics; and dynamical systems, which are part of physics. The sheer lack of mathematical modeling is a very constraining factor, while it is exactly what propelled all the fields mentioned above. That is why groups consisting of generalists with a good grasp of math, machine learning, neuroscience, and engineering will be the most efficient in advancing the research and practical implementation of language acquisition.

Clearly defined goal

As Jeff Hawkins from Numenta, which is focused on developing a general neocortex algorithm based on neurological evidence, has mentioned, we have enough data to work on a general theory of neocortex functioning. There is no lack of data; on the contrary, data is in abundance. What is lacking is a clear goal of what we want to achieve and a clear plan for how to move in the right direction. It seems to me the best approach would be something along the lines of the lunar program of the 1960s and 1970s. There is no need to invest billions of dollars to make progress, though, just dedicated people with the right background and well-defined goals.

References