How to solve programming tasks with a GPT-powered chatbot

Photo by D koi on Unsplash

Code generation to the rescue

As you have probably heard by now, ChatGPT, a machine learning based chatbot created by OpenAI, is very popular. It is based on the Generative Pre-trained Transformer model, hence GPT for short. It is quite successful at a variety of tasks, ranging from text summarization and essay generation to question answering. But did you know that it can also generate working code in various programming languages? If you didn’t, then this post will provide you with some ideas to try.

Task at hand

Suppose you have a list of objects where each object stores its creation date as a string field. For example, in the JSON Crack online tool you can visualize how such a list of objects can look.

To summarize, what needs to be done is this: convert the string field of each object to a date and sort the list of objects by that date.

Using the Java Stream API approach

Java 8 introduced the powerful Stream API, which allows collections to be processed in a functional way. For example, it is possible to filter, map or sort a collection in a single statement using a declarative syntax somewhat similar to SQL. So accomplishing the task of sorting a list of objects that have a createdDate field of the Date type is quite simple, and if you google how to do it you’ll find a number of solutions.
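
As a quick illustration of filtering and mapping in that declarative style (the sorting case is shown in the next snippet), here is a minimal sketch; the Order class and its getAmount() and getId() getters are just assumptions for the example:

import java.util.List;
import java.util.stream.Collectors;

// Filter and map a collection of orders in one declarative statement.
// 'orders' is assumed to be an existing List<Order>.
List<String> bigOrderIds = orders.stream()
        .filter(o -> o.getAmount() > 100)  // keep only orders above a threshold
        .map(Order::getId)                 // keep just the id of each remaining order
        .collect(Collectors.toList());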

For example, this is how it can be done based on the Stack Overflow answer.

List<Order> orders = list.stream()
                        .sorted(Comparator.comparing(Order::getCreatedDate))
                        .collect(Collectors.toList());

First prompt

Write a java code to sort a list of objects while converting a string field to date type

And this is the solution that the Bing chatbot provided:


Certainly! Here’s an example of a Java code snippet that sorts a list of objects while converting a string field to date type:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<YourObject> yourList = new ArrayList<>();
        // Add objects to yourList

        // Sort the list based on the string field "dateString"
        Collections.sort(yourList, new Comparator<YourObject>() {
            @Override
            public int compare(YourObject o1, YourObject o2) {
                SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
                try {
                    Date date1 = dateFormat.parse(o1.getDateString());
                    Date date2 = dateFormat.parse(o2.getDateString());
                    return date1.compareTo(date2);
                } catch (ParseException e) {
                    e.printStackTrace();
                }
                return 0;
            }
        });

        // Print the sorted list
        for (YourObject obj : yourList) {
            System.out.println(obj);
        }
    }
}

class YourObject {
    private String dateString;

    public String getDateString() {
        return dateString;
    }

    public void setDateString(String dateString) {
        this.dateString = dateString;
    }

    // Add other fields and methods as needed
}

In this example, we have a list of objects of type YourObject that contains a string field called dateString. We want to sort the list based on this field, treating it as a date.

To achieve this, we use the Collections.sort() method and provide a custom Comparator implementation. Inside the compare() method, we parse the dateString of each object into a Date object using a SimpleDateFormat. We then compare the two dates using the compareTo() method and return the result.

Please note that you need to replace YourObject with the actual class name and modify the code according to your specific requirements.

Let me know if you need any further assistance!

Second, more nuanced prompt

write code to sort a list of objects while converting a string field to date type using java stream api

Certainly! Here’s an example of a Java code snippet that sorts a list of objects while converting a string field to date type using the Java Stream API:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<YourObject> yourList = new ArrayList<>();
        // Add objects to yourList

        // Sort the list based on the string field "dateString" using Java Stream API
        yourList.sort(Comparator.comparing(obj -> {
            SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
            try {
                Date date = dateFormat.parse(obj.getDateString());
                return date;
            } catch (ParseException e) {
                e.printStackTrace();
            }
            return null;
        }));

        // Print the sorted list
        yourList.forEach(System.out::println);
    }
}

class YourObject {
    private String dateString;

    public String getDateString() {
        return dateString;
    }

    public void setDateString(String dateString) {
        this.dateString = dateString;
    }

    // Add other fields and methods as needed
}

In this example, we have a list of objects of type YourObject that contains a string field called dateString. We want to sort the list based on this field, treating it as a date.

To achieve this using the Java Stream API, we use the sort() method of the List interface and provide a lambda expression as the comparator. Inside the lambda expression, we parse the dateString of each object into a Date object using a SimpleDateFormat. We then return the parsed date for comparison.

Please note that you need to replace YourObject with the actual class name and modify the code according to your specific requirements.

Let me know if you need any further assistance!

Testing the generated code almost as is in an online Java compiler

Well, that last answer was what I needed. We can take it and check that it runs almost as-is in an online Java compiler. I made a couple of small adjustments to the generated code:

  • I instantiated three objects with dateString populated.
  • I added these objects to the list.
  • I adjusted the println call to print the dateString field instead of the object itself.

This is how the code looked after running it:

// Online Java Compiler
// Use this editor to write, compile and run your Java code online

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<YourObject> yourList = new ArrayList<>();
        // Add objects to yourList
        YourObject obj1 = new YourObject();
        obj1.setDateString("2023-09-01");
        
        YourObject obj2 = new YourObject();
        obj2.setDateString("2022-07-02");
        
        YourObject obj3 = new YourObject();
        obj3.setDateString("2021-08-06");
        
        yourList.add(obj1);
        yourList.add(obj2);
        yourList.add(obj3);

        // Sort the list based on the string field "dateString" using Java Stream API
        yourList.sort(Comparator.comparing(obj -> {
            SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
            try {
                Date date = dateFormat.parse(obj.getDateString());
                return date;
            } catch (ParseException e) {
                e.printStackTrace();
            }
            return null;
        }));

        // Print the sorted list
        yourList.forEach((obj -> System.out.println(obj.getDateString())));
    }
}

class YourObject {
    private String dateString;

    public String getDateString() {
        return dateString;
    }

    public void setDateString(String dateString) {
        this.dateString = dateString;
    }
    

    // Add other fields and methods as needed
}
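
As a side note, on Java 8 or later the same sort can also be written with the Stream API and the java.time classes; LocalDate.parse throws an unchecked exception, so the try/catch with a null fallback is not needed. This is only a minimal sketch under the same assumptions as the generated code (a YourObject class with a getDateString() getter holding dates in the "yyyy-MM-dd" format):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Parse each dateString into a LocalDate and sort the objects by it.
DateTimeFormatter format = DateTimeFormatter.ofPattern("yyyy-MM-dd");
List<YourObject> sortedList = yourList.stream()
        .sorted(Comparator.comparing((YourObject o) -> LocalDate.parse(o.getDateString(), format)))
        .collect(Collectors.toList());

sortedList.forEach(o -> System.out.println(o.getDateString()));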

Blowing your socks off

Better understanding with Optimization for Machine Learning

Long awaited book from Machine Learning Mastery

Recently, I’ve been reading the new Optimization for Machine Learning book from Machine Learning Mastery, written by Jason Brownlee. It just so happened that I read it in full from start to finish, since I was one of the technical reviewers of the book. The book was interesting to read thanks to a number of ingredients.

As always, Jason has written an engaging book with practical advice that can be put into action right away using open source software on Linux, Windows or macOS. Apart from this, the book has just enough clearly explained theoretical material that even beginning machine learning practitioners can play with the optimization algorithms it describes.

What I liked and what surprised me in the book

Personally, I think it was while reading and working with this book that I truly understood what optimization is, how it is used in machine learning, what an optimization algorithm like gradient descent is, and how to implement one from scratch. I also very much enjoyed the chapter about Global Optimization with various types of evolutionary algorithms. Funnily enough, about two weeks after I finished reading the book I came across Donald Hoffman’s The Interface Theory of Perception in relation to consciousness, which is based on the theory of evolution by natural selection. For example, one of the papers he wrote with colleagues, namely “Does evolution favor true perceptions?”, provides an example of a Genetic Algorithm (GA) that very much resembles the GA in Chapter 17 of the book. It is highly recommended reading for anyone interested in how consciousness arises in the mind. By the way, does it?

Overall

The Optimization for Machine Learning book is what you have come to expect from Machine Learning Mastery books. It’s interesting, it’s practical, and it makes you understand what it is that you are doing in Machine Learning. As always, each chapter has extensive references to tutorials, technical papers and books on machine learning. So don’t wait, start reading it; maybe you’ll come up with a new theory of how consciousness emerges in the mind.

References

Donald D. Hoffman, Manish Singh, Justin Mark, “Does evolution favor true perceptions?”, Proceedings Volume 8651, Human Vision and Electronic Imaging XVIII; 865104 (2013)

Read as if you edit. Editing Imbalanced Classification with Python

Reading can be hard, but rewarding

I like to read books. They provide me with the opportunity to discover new worlds and learn new things. Unlike other sources, e.g. YouTube tutorials, which I find a little bit distracting, books don’t seduce you into clicking on them; instead they lie on a flat surface and don’t care. In addition, it’s quite hard to physically jump from book to book in a haphazard way, reading them in parallel. But when you find an interesting book, say a novel, it can draw your attention and hold you captive until you finish reading it. And there are books that are interesting and at the same time require a certain amount of concentration and work from the reader to get the most out of them.

I call such reading a workout. It’s similar to physical exercise, which can be unpleasant at times but rewards you with a deeper understanding and grasp of concepts. It also resembles editing a book; call it testing, even alpha testing if you have a software background. By reading a book as if you were editing it, paying attention to details and working through it from A to Z, you are bound to better understand the information the book tries to convey.

Read as if you edit

With the recent wave of high interest in Machine and Deep Learning, there are a lot of books published on the subject to satisfy hungry readers. The books range from popular explanations for a general audience to technical books that teach readers how to apply Machine Learning to day-to-day practical applications. The Machine Learning Mastery web site provides a number of such books, written with a hands-on, experience-first approach. This makes them perfect candidates for the read-as-if-you-edit approach, since the best way to get real practical experience in Machine Learning is to actually apply the examples from each chapter of these books. For readers who aren’t familiar with Machine Learning Mastery books, all of them are structured in a similar way, where each chapter has just enough theory to get you started, backed by practical code samples.

It is possible to only read through the books without running a single code sample, feeling that you understand how things work and being happy with yourself. The issue is that this approach brings almost zero value and provides you with no real experience. Instead, think of yourself as an editor or a tester who has been tasked with finding mistakes, omissions, unclear explanations or wrong code samples. Doing this will help you get the most out of the book, since it forces you to actually run the code and play with it by adjusting it. It also helps you better understand the material by cross-referencing unclear points on the internet or in other books.

Don’t think that the read-as-if-you-edit approach is only applicable to Machine Learning books. I find it just as useful for reading books on mathematics, physics and engineering. Actually, it can be applied to any source of written information; it then becomes a critical reading approach, where you don’t blindly trust what you read but instead analyze it and verify the information.

So how was it, editing the Imbalanced Classification with Python book?

I very much liked editing this recent book, since it had enough theory, math and new machine learning concepts to keep me excited to work with it from start to finish. The book has about 450 pages of actual content, and it took me about three hours a day for nine days to finish it. I can’t say that it was smooth and easy. The content, at least for me, required cross-checking with other sources. The code samples more than once required referencing Python library documentation and quick dives into sources on imbalanced classification, statistics and information theory.

All in all, reading it from A to Z made me realize the importance of knowing that data can be imbalanced, as in the case of anomaly detection, and that one cannot train a model assuming an equal distribution of positive and negative classes, since such a model will tend to classify incorrectly in practice.

 

The new book on Deep Learning for CV that is worth working through


Books are not the same especially on Deep Learning

It feels, and really is the case, that we are bombarded by a growing number of books on Machine Learning, especially Deep Learning. Due to the large number of books published, it is quite difficult to tell which book is worth investing the time and effort in. Since humans as a species have a constrained lifespan, which books you choose to read matters. That is why you’d better read the best books available on the subject.

The paragraph above may sound a little bit like advertising, but I really think a good book, which is admittedly a subjective notion, makes a difference. I would say that a good book, in my opinion, is one that engages you, makes you think at least a little bit, and, importantly, makes you strive to check the references it provides and find additional information beyond what the book already includes.

What’s inside?


The newest Deep Learning for Computer Vision book from Machine Learning Mastery brings exactly this. It is crafted in the well recognizable Machine Learning Mastery style: a practical approach that presents information from simple to complex, spiced with just enough theory to get you started in the Machine and Deep Learning fields.

More details on content

  • If you have read any of their books previously, you know that each chapter has battle-proven, working Python code samples that run on macOS, Linux, and even Windows (who would have thought).
  • Each chapter is composed in such a way that it can serve as a standalone tutorial, but overall they are tied together in a logical order if you prefer to read the book from beginning to end.
  • What I personally find valuable are the Extensions sections at the end of each chapter, which provide additional tasks for practicing the chapter’s material.
  • Not to mention the references to books, papers and other relevant material mentioned in each chapter.

More Technical information


The book’s subject is Python libraries for processing images while working on Machine Learning or Deep Learning projects. The main library used for Deep Learning is Keras, including the helpful Keras Functional API. In addition, the book describes how to download, build, train and run models such as Mask R-CNN, Multi-task Cascaded CNN, FaceNet and others using TensorFlow.

 

So far

If this post made you curious about the book, then give it a try. You may find it very helpful.

 

Better Deep Learning or How To Fine Tune Deep Learning Networks

Effective Deep Learning is possible

Nowadays, when Deep Learning libraries such as Keras make composing Deep Learning networks about as easy as it can be, one important aspect still remains quite difficult. That aspect, as you may have guessed, is the tuning of the many hyper-parameters: network capacity (the number of neurons and the number of layers), learning rate and momentum, the number of training epochs and batch size, and the list goes on. But now it may become less of a hassle, since the new Better Deep Learning book by Jason Brownlee focuses exactly on the issue of tuning hyper-parameters as well as possible for the task at hand.

 

Why is it worth reading this book?

When I worked through this book from beginning to end, I liked that, like other books written by Jason Brownlee, it followed the familiar path of self-contained chapters that provide just enough theory along with detailed, practical working examples that practitioners can extend and build upon. The code samples themselves are concise and can be run on an average PC without a GPU, but they nevertheless convey very well what the author intended to show.

While playing with the code samples in each chapter, I found myself thinking that I was back in college doing an electrical engineering lab. I felt this way because each chapter provides a great number of experiments with related graphs that help you understand hyper-parameter behavior in different configurations.

How might this book help me?

Better Deep Learning may help you if you have initial experience with Deep Learning networks and want to fine-tune network performance in a more controlled way than simple trial and error. Since the book uses small, simple datasets generated with Python libraries, it is easy to run each experiment and quickly understand how each hyper-parameter affects network behavior.

In addition to working code examples, the book provides a number of focused references to papers, books and other materials that are related to the content of each chapter.

Last but not least, each chapter concludes with a number of extensions that make a practitioner think harder and play with the chapter’s content at a much deeper level.

Conclusion

All in all, the book provides a comprehensive treatment of the hyper-parameters you may find in various types of Deep Learning networks, such as CNNs, RNNs and LSTMs, and it makes it clear that fine-tuning Deep Learning networks is possible even for a beginner with proper guidance, which the book provides.

Stay fine tuned!

Driver drowsiness detection with Machine and/or Deep Learning.

It is actually even more useful than a Driver Assistant

In the previous post I mentioned that it would be nice to have a mobile phone application capable of detecting erratically driven cars in front of a moving vehicle. Another, in my opinion more interesting and even more impactful, application would be a mobile app that uses the selfie camera to track the driver’s alertness while driving and indicates by voice or sound effects that the driver needs to take action.

Why is this application useful?

The drivers among us, and not only they, surely know that there are times when driving does not come easy, especially when a driver is tired, exhausted by too little sleep or a certain amount of stress, or driving in an altered state of consciousness (against the law, by the way). This in turn causes many road accidents that could be prevented if the driver were informed in a timely manner that he or she needs to stop and rest. The above-mentioned mobile application could assist in exactly this situation. It could even remotely send a notification to whomever it may concern that there is a need to call the driver, text them, or do something else to grab their attention.

Is there anything like this in the wild?

An MIT group researching autonomous driving, headed by Lex Fridman, used this approach to track drivers of Tesla cars and their interaction with the car. For more details, you may check out the links below with a nice video and explanations.

This implementation combines the best of the state of the art in machine and deep learning.


 

 

This implementation is from 2010 and apparently uses plain old OpenCV with no Deep Learning.


Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of behavior classified
    • Driver not paying attention to the road
      • By holding a phone
      • Being distracted by another passenger
      • By looking at road accidents, whatever
    • Driver drowsiness detection
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Take action notifications
    • Voice announcement to the driver
    • Sound effects
    • Sending text, images notification to friends and family who may call and intervene
    • Automatically use Google Maps to navigate to the nearest Coffee station, such as Starbucks, Dunkin’ (no more donuts) and Tim Horton’s (select which applicable to you)

Then what are we waiting for?

This application can be built quite ‘easily and fast’ if you have an Android developer account, some experience developing Android apps, have worked a little bit with GitHub, and have a certain amount of experience with and fascination for machine learning, or OpenCV with DL-based models. Grab your computer of choice and hurry to develop this marvelous piece of technology that will make you a new kind of person.

A possible plan of action

  • Get an Android phone, preferably a recent model for performance reasons.
  • Get a computer that can run OpenCV 4 and Android Studio.
  • Install OpenCV and all needed dependencies.
  • Run the example from Adrian Rosebrock’s blog post.
  • Install Android Studio.
  • Create an Android developer account (if you don’t have one; about $25 USD).
  • Use the Android app from this blog post as a blueprint and adapt the Python code from Adrian’s implementation into Java (a rough sketch of such a detection loop in Java is shown below).
  • Publish the app at Google Play Store.
  • Share the app.
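
To make the adaptation step more concrete, here is a minimal, hypothetical sketch of the detection loop using OpenCV’s Java bindings. It is not Adrian’s eye-aspect-ratio method and not an Android app: it is a plain desktop loop with Haar cascades that flags possible drowsiness when no open eyes are detected for a number of consecutive frames, and the cascade file paths are assumptions you would adjust for your install.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

public class DrowsinessCheck {
    public static void main(String[] args) {
        // Load the native OpenCV library (assumes OpenCV 4 with Java bindings is installed).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Haar cascade files shipped with OpenCV; the paths here are assumptions.
        CascadeClassifier faceDetector = new CascadeClassifier("haarcascade_frontalface_default.xml");
        CascadeClassifier eyeDetector = new CascadeClassifier("haarcascade_eye.xml");

        VideoCapture camera = new VideoCapture(0); // the 'selfie' camera in the Android version
        Mat frame = new Mat();
        Mat gray = new Mat();
        int framesWithoutEyes = 0;
        final int drowsyThreshold = 24; // roughly one second at 24 frames per second

        while (camera.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);

            MatOfRect faces = new MatOfRect();
            faceDetector.detectMultiScale(gray, faces);

            boolean eyesOpen = false;
            for (Rect face : faces.toArray()) {
                MatOfRect eyes = new MatOfRect();
                eyeDetector.detectMultiScale(gray.submat(face), eyes);
                if (eyes.toArray().length > 0) {
                    eyesOpen = true;
                }
            }

            framesWithoutEyes = eyesOpen ? 0 : framesWithoutEyes + 1;
            if (framesWithoutEyes >= drowsyThreshold) {
                System.out.println("Drowsiness suspected: time to take a break!");
                framesWithoutEyes = 0;
            }
        }
        camera.release();
    }
}

The real app would replace the console message with the voice and sound notifications from the requirements list and would feed the loop with frames from the phone camera instead of a desktop webcam.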

 

References

Driver Assistant. Detect strange drivers with Deep Learning.

Driver assistant app. Can it be done?

I was too optimistic about making this work on Android, since it takes more than a couple of seconds to process even a single frame. So folks, doing what I hoped for in this post with OpenCV is currently not achievable on a mobile phone.

This video, which had 30 fps and was 11 seconds long, took about 22 minutes to process.


I wonder why there are close to no Android or iPhone applications that can detect, in real time, erratic drivers on the road ahead of or beside you. The technology is there, and the algorithms, namely Deep Learning, are there too. It is possible to run OpenCV-based deep learning models in real time on mobile phones and get good enough performance to detect a suddenly stopping car ahead of you. Since a mobile phone’s field of view isn’t that large, I think it will be hard if not impossible to detect erratic driving to the sides of the car. A good example of OpenCV-based object detection and classification using Deep Learning is the Mask R-CNN with OpenCV post by Adrian Rosebrock from PyImageSearch.
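
To give a feel for what the detection step could look like, below is a minimal, hypothetical sketch that runs a lightweight single-shot detector through OpenCV’s dnn module in Java (not the Mask R-CNN from Adrian’s post). The MobileNet-SSD file names, the input size and the scaling constants are assumptions for a model of that family and would need to match whatever pre-trained model you actually download.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
import org.opencv.imgcodecs.Imgcodecs;

public class VehicleDetector {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Hypothetical pre-trained MobileNet-SSD files; adjust to the model you use.
        Net net = Dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel");

        // A single road image stands in for a camera frame here.
        Mat frame = Imgcodecs.imread("road.jpg");
        Mat blob = Dnn.blobFromImage(frame, 0.007843, new Size(300, 300), new Scalar(127.5), false, false);
        net.setInput(blob);
        Mat detections = net.forward();

        // The SSD output is a 1x1xNx7 tensor: [image id, class id, confidence, x1, y1, x2, y2].
        Mat results = detections.reshape(1, (int) detections.total() / 7);
        for (int i = 0; i < results.rows(); i++) {
            double confidence = results.get(i, 2)[0];
            int classId = (int) results.get(i, 1)[0];
            if (confidence > 0.5) {
                System.out.println("Detected class " + classId + " with confidence " + confidence);
            }
        }
    }
}

On a phone, the same idea would run on each camera frame, and the detected boxes for cars, trucks and buses would then be tracked over time to spot sudden stops or zigzag driving.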

Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of objects classified
    • Car
    • Truck
    • Bus
    • Pedestrian (optional)
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Field of View 
    • About 60 degrees
  • Type of erratic driving detected
    • Sudden stopping
    • Zigzag driving
    • Cutting off from the side (hard to do with single forward facing phone camera)
    • etc.

Then what are we waiting for?

This application can be built quite ‘easily and fast’ if you have an Android developer account, some experience developing Android apps, have worked a little bit with GitHub, and have a certain amount of experience with and fascination for machine learning, namely OpenCV DL-based models. To be able to detect dangerous maneuvering by others, a little bit of math is needed to detect it, as well as to calculate the speed, direction and distance of other cars. The effort is worth investing time into. Even a little helper can have a big impact, unless drivers start staring at the mobile phone screen to see how it’s doing while driving.

A possible plan of action

 

 

Language Acquisition. Multidisciplinary approach. Part three.

 


Multidisciplinary approach

This post about natural language acquisition research will be as short as the previous parts. This time I want to describe how the current research, which is too linguistically focused, may benefit from being opened up to other disciplines, such as Machine Learning, Computer Science, Neuroscience and Physics.

Currently, language acquisition research is predominantly done by linguists. In my opinion, that is the reason why progress in this field is so slow. It is very clear that researchers trained in linguistics alone cannot leverage advances in other fields related to natural language processing, such as Neural Machine Translation, which is part of Machine Learning; Neuroimaging, which is part of Neuroscience; Neuromorphic Chips, which are part of Electronics; and Dynamical Systems, which are part of Physics. The lack of mathematical modeling is a very constraining factor, while exactly such modeling propelled all the fields mentioned above. That is why groups consisting of generalists with a good grasp of math, machine learning, neuroscience and engineering will be most efficient in advancing the research and practical implementation of language acquisition.

Clearly defined goal

As Jeff Hawkins from Numenta, which is focused on developing a general neocortex algorithm based on neurological evidence, has mentioned, we have enough data to work on a general theory of how the neocortex functions. There is no lack of data; on the contrary, data is in abundance. What is lacking is a clear goal of what we want to achieve and a clear plan for how to move in the right direction. It seems to me the best approach would be something along the lines of the lunar program of the 1960s and 1970s. There is no need to invest billions of dollars to make progress, but there is a need for dedicated people with the right background and well defined goals.


 

 

 

Second Language Acquisition. A Possible Link to Deep Learning. Part two.


Why is it important to understand how second language acquisition works?

Nowadays, we live in a world that is more interconnected than ever before. The Internet, including social media, has made communication as instant as possible. This in turn has opened up opportunities for communication with people who speak different languages. But the slight complication is that in order to speak with someone who knows a different language than you, you need to learn that language. This amounts to second language acquisition (SLA). Surely, technology has found a workaround for this problem in machine language translation. At first machine translation was rule based and the results were not that good. Then came the turn of statistical, phrase-based methods in machine translation. And finally, in November 2016, Google launched end-to-end Neural Machine Translation based on Artificial Neural Networks, now known as Deep Learning. The results of this new approach were quite impressive in comparison to the struggling previous approaches. If you haven’t used this service until now, try it and see for yourself. Since I am a native Russian speaker, I provide below an example of English to Russian translation that I can discuss.

Google Translate example: English to Russian translation

The link for this specific translation is here. I think any Russian-speaking person would agree with me that this machine translation is grammatically correct and sounds fine.

Then, back to the subject of SLA. It seems to me that looking into how the Deep Learning techniques and models now used for Natural Language Processing (NLP) work, such as word embeddings introduced by Tomas Mikolov and Long Short-Term Memory (LSTM) networks, which are a special case of Recurrent Neural Networks, may be very useful in tackling SLA. These and other approaches employed to tackle machine translation may shed light on some aspects of first and second language acquisition in humans. Even though currently used neural networks are largely based on an oversimplified and superficial model of a neuron, dating back to the perceptron introduced in the 1950s, the successes of such methods cannot be easily dismissed. Why is that?

The notion ”probability of a sentence” is an entirely useless one, under any known
interpretation of this term. (Chomsky, 1969)

In recent years, thanks to advances in Graphics Processing Unit (GPU) capabilities and the introduction of new architectures and methods in Artificial Neural Networks, now known as Deep Learning, one of the long-standing challenges, namely machine language translation, seems to have given in. The quote above, which belongs to Noam Chomsky, the founder of Generative Grammar and Generative Linguistics, may finally be proclaimed wrong, and Generative Grammar theory may be seen as having been proven incorrect by the successes of machine translation based on Recurrent Neural Networks and statistical methods. Then, if Generative Grammar is not that useful for modeling first or second language acquisition in humans, what else is? In the following parts I’ll provide my suggestions for which approaches may be more efficient in modeling natural language. And, as it isn’t hard to guess, Deep Learning may provide at least a partial answer.

 


 

 

 

Deep Learning for Time Series book


Is it for you?

Are you struggling to find easy-to-digest and easy-to-implement material on Deep Learning for time series? Then look no further and try the newest book by Jason Brownlee from Machine Learning Mastery: ‘Deep Learning for Time Series Forecasting’.

What’s inside?

The book will help you apply both classical and deep learning methods to time series forecasting. It is no exception to what you expect from Machine Learning Mastery books: it is hands-on and practical, with plenty of real-world examples and, most importantly, working and tested code samples that can form the basis for your own experiments.

You may very much like the real application of Deep Learning nets to the Household Energy Consumption dataset, which was used to train CNN, CNN-LSTM and ConvLSTM networks with good accuracy.

What’s so special about the book?

I personally was fascinated by the Time Series Classification chapter, which applied Deep Learning to the Human Activity Recognition (HAR) dataset with quite accurate predictions. What I liked most about HAR is the fact that raw gyroscope and accelerometer measurements from the cell phone were used to train the DL models without any feature engineering. The video of the dataset preparation is shown here.

What’s next?

In the next post I’ll take one of the Human Activity Recognition examples in the book and try to expand it using the Extensions part of the chapter.

If you are able to do it before me, please feel free to share your feedback in the comments section.

Tip

Did you know that Google’s Colaboratory gives you the opportunity to use GPUs for free while working on your own Deep Net implementations? More than that, you can easily share these Jupyter notebooks with your peers.