What the heck is Rule 30? Cellular Automata Explained

Introduction

If you are asking the same question, what is this Rule 30, then you are not alone. The rule belongs to the one-dimensional Cellular Automata introduced by Stephen Wolfram in 1983 and later described in his book A New Kind of Science. Even though I had skimmed through this book before, I was never able to understand what it was all about. It had some neat diagrams, but I didn't try to understand how the diagrams were generated. This changed a day ago when I watched an interesting interview that Lex Fridman had with Stephen Wolfram, where they discussed, among other things, Rule 30, which was the first rule Wolfram described. This rule is capable of generating complex behavior even though the rule itself is very simple to define.

Elementary Cellular Automaton

The Elementary Cellular Automaton consists of a one-dimensional array of cells that can be in just one of two states, namely black or white. The cellular automaton starts from an initial state and then transitions to the next state based on a certain function. The next color of each cell is calculated from the color of the current cell and the colors of its left and right neighbors. By the way, that transition function gives the cellular automaton rule its name, as in Rule 30.

Rule 30 transition function

Current state   111   110   101   100   011   010   001   000
Next state       0     0     0     1     1     1     1     0

Here, for example, the binary digits '111' in the first column indicate the black color of the left, current and right cells, and the values '0' or '1' indicate what the color of the current cell will be in the next state (iteration) of the automaton. If we write down all the binary values of the next state together as a single 8-digit binary number, which is 00011110, and convert it to a decimal number, we get 30, hence the name Rule 30. In Boolean form this transition function is

(left, current, right) -> left XOR (current OR right)

We'll use this function later in the C++ implementation of Rule 30.
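Before moving on, we can quickly convince ourselves that this Boolean form really reproduces the table above. Here is a small sketch (in JavaScript, purely for illustration; the rule helper is my own) that applies the function to all eight neighborhoods and reads the results back as one binary number:

// Apply left XOR (current OR right) to the neighborhoods 111, 110, ..., 000
// and concatenate the results; the bits should read as decimal 30.
const rule = (l, c, r) => l ^ (c | r);
let bits = '';
for (let n = 7; n >= 0; n--) {
  bits += rule((n >> 2) & 1, (n >> 1) & 1, n & 1);
}
console.log(bits, parseInt(bits, 2)); // prints: 00011110 30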

This is what the output of Rule 30 looks like after 20 and 100 steps respectively.

source: WolframAlpha

How to generate Rule 30?

It is easy to implement Rule 30 using a 64-bit integer in C++. The only drawback is that we can get only 32 steps with the implementation below, which is taken from Wikipedia.

Note: If this is a little bit too much for you right now, skip to the next part where we'll use the Wolfram Language to program it for us.

In the code below a 64-bit integer holds the automaton state, and a bit mask sets the initial state. The outer for loop then generates 32 steps of Rule 30 by applying the transition function, and the inner loop prints the black (character '1') and white (character '-') cells using the state bit mask.

#include <stdint.h>
#include <iostream>

int main() {
  // This is our bit mask for the initial state: a single '1' cell at bit 31
  uint64_t state = 1u << 31;

  for (int i = 0 ; i < 32 ; i++) {

    for (int j = sizeof(uint64_t) * 8 - 1 ; j  >=  0 ; j--) {
      // Decide the color of the current cell from the state bit mask.
      // A bitwise shift and AND extract the cell's bit efficiently.
      std::cout << char(state  >>  j & 1 ? '1' : '-');
    }
    std::cout << '\n';
    
    // This is just the (left, current, right) -> left XOR (current OR right) function mentioned previously.
    // Bitwise operators apply it to all 64 cells at once.
    state = (state >> 1) ^ (state | state << 1);
  }
}

It is possible to run this code in an online C++ compiler. Just click on the link and then click on the green Run button at the top of the screen. The result will look similar to the output below.

The full output for 32 steps looks like this

--------------------------------1-------------------------------
-------------------------------111------------------------------
------------------------------11--1-----------------------------
-----------------------------11-1111----------------------------
----------------------------11--1---1---------------------------
---------------------------11-1111-111--------------------------
--------------------------11--1----1--1-------------------------
-------------------------11-1111--111111------------------------
------------------------11--1---111-----1-----------------------
-----------------------11-1111-11--1---111----------------------
----------------------11--1----1-1111-11--1---------------------
---------------------11-1111--11-1----1-1111--------------------
--------------------11--1---111--11--11-1---1-------------------
-------------------11-1111-11--111-111--11-111------------------
------------------11--1----1-111---1--111--1--1-----------------
-----------------11-1111--11-1--1-11111--1111111----------------
----------------11--1---111--1111-1----111------1---------------
---------------11-1111-11--111----11--11--1----111--------------
--------------11--1----1-111--1--11-111-1111--11--1-------------
-------------11-1111--11-1--111111--1---1---111-1111------------
------------11--1---111--1111-----1111-111-11---1---1-----------
-----------11-1111-11--111---1---11----1---1-1-111-111----------
----------11--1----1-111--1-111-11-1--111-11-1-1---1--1---------
---------11-1111--11-1--111-1---1--1111---1--1-11-111111--------
--------11--1---111--1111---11-11111---1-11111-1--1-----1-------
-------11-1111-11--111---1-11--1----1-11-1-----11111---111------
------11--1----1-111--1-11-1-1111--11-1--11---11----1-11--1-----
-----11-1111--11-1--111-1--1-1---111--1111-1-11-1--11-1-1111----
----11--1---111--1111---1111-11-11--111----1-1--1111--1-1---1---
---11-1111-11--111---1-11----1--1-111--1--11-1111---111-11-111--
--11--1----1-111--1-11-1-1--11111-1--111111--1---1-11---1--1--1-
-11-1111--11-1--111-1--1-1111-----1111-----1111-11-1-1-111111111

Compare the output above to the one generated with the Wolfram Language in the Wolfram Notebook.

Using the Wolfram Language to generate Rule 30

The Wolfram Language is the symbolic language behind Mathematica and WolframAlpha, the computational knowledge engine developed by Wolfram Research. It has a declarative programming style, which means that you specify what you want to be done instead of how it should be done.

For example, to generate Rule 30 and visualize it, one simply writes

ArrayPlot[CellularAutomaton[30, {{1},0}, 64]]

where the CellularAutomaton function generates 64 steps of the automaton according to the Rule 30 transition function, starting from a single black cell on a white background (which is what the initial condition {{1},0} specifies), and the ArrayPlot function plots the result.

And the output is

Please follow this link to the Wolfram Notebook where various Elementary Cellular Automata rules are generated.

Summary

Playing with cellular automata rules seems like an interesting game, and doing it in the Wolfram Language is a piece of cake.

Better understanding with Optimization for Machine Learning

Long awaited book from Machine Learning Mastery

Recently, I've been reading the new Optimization for Machine Learning book from Machine Learning Mastery, written by Jason Brownlee. It just so happened that I read it in full from start to end, since I was one of the technical reviewers of the book. The book was an interesting read thanks to a number of ingredients.

As always, Jason has written an engaging book with practical advice that can be actioned right away using open source software on Linux, Windows or MacOS. Apart from this, the book has just enough clearly explained theoretical material that even beginning machine learning practitioners can play with the optimization algorithms described in the book.

What I liked and what surprised me in the book

Personally, I think it was while reading and working with this book that I truly understood what an optimization is, how it is used in machine learning, and what an optimization algorithm such as gradient descent is and how to implement one from scratch. I also very much enjoyed the chapter about Global Optimization with various types of Evolution Algorithms. Funnily enough, about two weeks after I finished reading the book I came across Donald Hoffman's The Interface Theory of Perception in relation to consciousness, which is based on The Theory of Evolution by Natural Selection. For example, one of his papers written with colleagues, namely "Does evolution favor true perceptions?", provides an example of a Genetic Algorithm (GA) which very much resembles the GA in Chapter 17 of the book. It is highly recommended reading for anyone interested in how consciousness arises in the mind. By the way, does it?

Overall

The Optimization for Machine Learning book is what you have come to expect from Machine Learning Mastery books. It's interesting, it's practical, and it makes you understand what it is that you are doing in Machine Learning. As always, each chapter has extensive references to tutorials, technical papers and books on machine learning. So don't wait, start reading it; maybe you'll come up with a new theory of how consciousness emerges in the mind.

References

Donald D. Hoffman, Manish Singh, Justin Mark, “Does evolution favor true perceptions?”, Proceedings Volume 8651, Human Vision and Electronic Imaging XVIII; 865104 (2013)

Playing with the 3D Donut-shaped C code in JavaScript

Background

Having watched a short YouTube video by Lex Fridman about the donut-shaped C code that generates a 3D spinning donut, in which he encouraged viewers to look at the code and play with it, I did just that.

But before that, I went to the Donut math: how donut.c works page, where Andy Sloane, the author of that code, describes how he got from geometry and math to the implementation of the donut in C.

Andy Sloane's tutorial has two visualizations that you can launch, as well as the JavaScript implementation of the donut. One visualization is the ASCII donut, while the other uses the <canvas> (The Graphics Canvas) element to draw the animation.

Playing with the code

The easiest way to play with the 3D spinning donut in JavaScript is to use the JSFiddle online editor. When the editor opens you see four main areas, just like in almost any other JS editor: HTML, CSS, JavaScript and Result.

To be able to start playing with the donut code like crazy we need to do a number of things.

First

First, we need a basic HTML page with a couple of buttons, a <pre> tag to hold the generated ASCII donut and a <canvas> tag to show the other type of donut animation. To set this up, just copy and paste the code below into the HTML area of the JSFiddle editor

<html>
  <body>
    <button onclick="anim1();">toggle ASCII animation</button>
    <button onclick="anim2();">toggle canvas animation</button>
    <pre id="d" style="background-color:#000; color:#ccc; font-size: 10pt;"></pre>
    <canvas id="canvasdonut" width="300" height="240">
    </canvas>
  </body>
</html>

Second

Second, we need to copy and paste the JS code from Andy's page, or the code below, into the JS area of the JSFiddle editor

(function() {
var _onload = function() {
  var pretag = document.getElementById('d');
  var canvastag = document.getElementById('canvasdonut');

  var tmr1 = undefined, tmr2 = undefined;
  var A=1, B=1;

  // This is copied, pasted, reformatted, and ported directly from my original
  // donut.c code
  var asciiframe=function() {
    var b=[];
    var z=[];
    A += 0.07;
    B += 0.03;
    var cA=Math.cos(A), sA=Math.sin(A),
        cB=Math.cos(B), sB=Math.sin(B);
    for(var k=0;k<1760;k++) {
      b[k]=k%80 == 79 ? "\n" : " ";
      z[k]=0;
    }
    for(var j=0;j<6.28;j+=0.07) { // j <=> theta
      var ct=Math.cos(j),st=Math.sin(j);
      for(var i=0;i<6.28;i+=0.02) {   // i <=> phi
        var sp=Math.sin(i),cp=Math.cos(i),
            h=ct+2, // R1 + R2*cos(theta)
            D=1/(sp*h*sA+st*cA+5), // this is 1/z
            t=sp*h*cA-st*sA; // this is a clever factoring of some of the terms in x' and y'

        var x=0|(40+30*D*(cp*h*cB-t*sB)),
            y=0|(12+15*D*(cp*h*sB+t*cB)),
            o=x+80*y,
            N=0|(8*((st*sA-sp*ct*cA)*cB-sp*ct*sA-st*cA-cp*ct*sB));
        if(y<22 && y>=0 && x>=0 && x<79 && D>z[o])
        {
          z[o]=D;
          b[o]=".,-~:;=!*#$@"[N>0?N:0];
        }
      }
    }
    pretag.innerHTML = b.join("");
  };

  window.anim1 = function() {
    if(tmr1 === undefined) {
      tmr1 = setInterval(asciiframe, 50);
    } else {
      clearInterval(tmr1);
      tmr1 = undefined;
    }
  };

  // This is a reimplementation according to my math derivation on the page
  var R1 = 1;
  var R2 = 2;
  var K1 = 150;
  var K2 = 5;
  var canvasframe=function() {
    var ctx = canvastag.getContext('2d');
    ctx.fillStyle ='#000';
    ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);

    if(tmr1 === undefined) { // only update A and B if the first animation isn't doing it already
      A += 0.07;
      B += 0.03;
    }
    // precompute cosines and sines of A, B, theta, phi, same as before
    var cA=Math.cos(A), sA=Math.sin(A),
        cB=Math.cos(B), sB=Math.sin(B);
    for(var j=0;j<6.28;j+=0.3) { // j <=> theta
      var ct=Math.cos(j),st=Math.sin(j); // cosine theta, sine theta
      for(var i=0;i<6.28;i+=0.1) {   // i <=> phi
        var sp=Math.sin(i),cp=Math.cos(i); // cosine phi, sine phi
        var ox = R2 + R1*ct, // object x, y = (R2,0,0) + (R1 cos theta, R1 sin theta, 0)
            oy = R1*st;

        var x = ox*(cB*cp + sA*sB*sp) - oy*cA*sB; // final 3D x coordinate
        var y = ox*(sB*cp - sA*cB*sp) + oy*cA*cB; // final 3D y
        var ooz = 1/(K2 + cA*ox*sp + sA*oy); // one over z
        var xp=(150+K1*ooz*x); // x' = screen space coordinate, translated and scaled to fit our 320x240 canvas element
        var yp=(120-K1*ooz*y); // y' (it's negative here because in our output, positive y goes down but in our 3D space, positive y goes up)
        // luminance, scaled back to 0 to 1
        var L=0.7*(cp*ct*sB - cA*ct*sp - sA*st + cB*(cA*st - ct*sA*sp));
        if(L > 0) {
          ctx.fillStyle = 'rgba(255,255,255,'+L+')';
          ctx.fillRect(xp, yp, 1.5, 1.5);
        }
      }
    }
  }


  window.anim2 = function() {
    if(tmr2 === undefined) {
      tmr2 = setInterval(canvasframe, 50);
    } else {
      clearInterval(tmr2);
      tmr2 = undefined;
    }
  };

  asciiframe();
  canvasframe();
}

if(document.all)
  window.attachEvent('onload',_onload);
else
  window.addEventListener("load",_onload,false);
})();

Third

After the HTML and JavaScript areas are populated, click on the Run button (indicated by number 1 in the screenshot below) and you should see, in the Result area on the right, two buttons and two donuts (indicated by number 2).

When a button is clicked the corresponding animation is toggled. Both animations can run in parallel.

This is what it looks like in real time

Playground

If you want to understand the math behind it, first read the explanation by Andy Sloane here. If you want to jump right into messing around with the code, stay here.

Need for speed

To change the speed of the animations, press Ctrl + F in JSFiddle and search for the setInterval calls

tmr1 = setInterval(asciiframe, 50);

tmr2 = setInterval(canvasframe, 50);

The second argument of setInterval is the delay in milliseconds between frames, so it controls the rotation speed of the donut: decreasing it makes the donut spin faster, and increasing it slows it down.
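If you would rather change the speed on the fly, you could also add a tiny helper inside the same _onload closure, next to window.anim1 (setAsciiSpeed is my own name, not part of Andy's code; it assumes the tmr1 and asciiframe variables defined above are in scope):

// A sketch: restart the ASCII animation with a new frame delay in milliseconds.
window.setAsciiSpeed = function(ms) {
  if (tmr1 !== undefined) clearInterval(tmr1);
  tmr1 = setInterval(asciiframe, ms); // smaller delay => faster spin
};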

Paint it black

To change the background color of the donut created with the <canvas> animation, search for ctx.fillStyle = '#000';

ctx.fillStyle='#000'; //this is currently black

To change the color of the donut itself, search for ctx.fillStyle = 'rgba(255,255,255,'+L+')';

ctx.fillStyle = 'rgba(255,255,255,'+L+')'; // update any of the first three arguments, which stand for Red, Green and Blue; the fourth is the alpha, driven here by the luminance L
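For example, to draw a green donut on a dark blue background, those two assignments inside canvasframe could become something like this (just a sketch, any values work):

ctx.fillStyle = '#003'; // dark blue background instead of black

ctx.fillStyle = 'rgba(0,255,0,' + L + ')'; // green donut; the luminance L still drives the alpha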

See you

There are plenty of other things you can try with this code. So what are you waiting for? Just do it!

Salesforce Trailhead: An interesting approach to learning with inconclusive outcomes

What’s new?

Recently I changed jobs, and in my new position I use the Salesforce CRM platform. Compared to my previous positions, becoming a productive developer in this area involves a very different approach to learning.

First of all, Salesforce created the Trailhead site, which contains a large number of e-learning courses. These are more like tutorials that are supposed to teach certain practical aspects of the Salesforce platform. The tutorials are called Trails, which in turn consist of smaller Modules, which consist of even smaller Units. The units are small and don't take a lot of time to go through.

To engage people interested in learning Salesforce, accomplishing units, modules and trails gives you points and badges. There are two additional types of activities in Trailhead: Projects and Superbadges. These two are more hands-on oriented, with Superbadges giving a real taste of what it's like to work with a production Salesforce CRM platform. Last but not least is the Trailmix, an option to compose a freestyle collection of Superbadges, Trails, Projects or Modules in one bundle.

Overall, the structure of the Trailhead looks something like this

  • Trailhead
    • Hands-on learning
      • Superbadge
      • Project
        • Units
    • Trail
      • Modules
        • Units
    • Module
      • Units
    • Trailmix
      • Superbadge
      • Projects
      • Trails
      • Modules
        • Units
Trailmix creation wizard

Some food for thoughts

After attempting and finishing each type of e-learning in Trailhead, I have some thoughts on how this approach could be improved.

First of all, the e-learning concept is fresh and works well. It allows one to learn at one's own pace. In addition, the content quality is good, and assignments can be carried out in the free Salesforce CRM instances that Salesforce provides to learners.

Second, the points and badges really create, at least in my opinion, a contest feeling where one competes with oneself.

After praising the learning platform comes my harsh criticism.

  1. There is no time stamp of when, and no indication by whom, the content was created. I deem this essential since I want to know whether the content is out of date and whether I can trust and rely on it. Also, it's good to give credit to the creator of the content.
  2. No small number of modules have too much wording in them, which tends to be repetitive and takes valuable time to read.
  3. The Modules present content in such a manner that working through them feels like being a machine: you type what they say, and then by some magic the platform validates what's done, without providing much useful feedback if something went wrong. This is especially frustrating while doing Superbadges. It's easy to see that hundreds of novice Salesforce developers have been daunted by the unintelligent feedback on the superbadge verification steps.
  4. In addition, it feels like doing modules is of little use compared to doing a superbadge, which means modules are only good as part of superbadges.
  5. But even a superbadge doesn't represent a real production Salesforce CRM environment; it is a vanilla version of one, lacking the crucial details that make it or break it in the world of software development.

What can be done to improve the drawbacks?

  1. It would be nice for Modules and Projects to have an overview of how the content is used in real-life Salesforce development, by providing real use cases, even partially, without resorting to some kind of unreal companies with funny names.
  2. Superbadges should also be more production oriented, with good and intelligent feedback, or an explanation of how one can debug the Salesforce CRM environment to understand what's wrong with the implementation.
  3. It would be nice to incorporate the Trails, Modules etc. into the Salesforce CRM platform itself. This could assist in better understanding how to work with the CRM tool efficiently.
  4. The points and badges system seems fine, but it's possible to collect points without really understanding what the content means, which defeats the point of having points altogether.

All in all

Trailhead is an interesting and engaging platform for learning Salesforce, but there are things that could and should be improved to make it better for novice and seasoned Salesforce developers, admins and others.

How to Push Local Notification With React Native

This post was written by my good friend Aviad Holy, who is a team leader at Checkmarx and an Android enthusiast.

Installation and linking

Prerequisites

To see React Native live without installing anything check out Snack.

With my current setup (Windows 10 64-bit and an Android phone):

  • I installed all the prerequisites above
  • Started the development server
C:\Users\xyz\AppData\Roaming>cd AwesomeProject

C:\Users\xyz\AppData\Roaming\AwesomeProject>yarn start
yarn run v1.22.10
  • Scanned the QR code shown by the development server
  • And saw the app start on my Android phone

Now you're ready to follow Aviad's tutorial below.

I was looking for a simple solution to generate a local push notification in one of my projects when I came across this wonderful package that does exactly what I needed and is suitable for both iOS and Android: https://github.com/zo0r/react-native-push-notification.

The package has pretty good documentation, with only minor things missing to make it work straight away. In the following guide I will present the required configuration and a suggested implementation to get you started pushing notifications locally.

This guide targets Android, since I have encountered a lot of questions around it. Depending on your requests, a dedicated guide for iOS may be published as well.

Installation and linking, the short story

  • yarn add react-native-push-notification
  • android/build.gradle
ext {
        buildToolsVersion = "29.0.3"
        minSdkVersion = 21
        compileSdkVersion = 29
        targetSdkVersion = 29
        ndkVersion = "20.1.5948944"
        googlePlayServicesVersion = "+" // default: "+"
        firebaseMessagingVersion = "+" // default: "+"
        supportLibVersion = "23.1.1" // default: 23.1.1
    }
  • android/app/src/main/AndroidManifest.xml
...
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED"/>
...
<application
...>
<meta-data  android:name="com.dieam.reactnativepushnotification.notification_foreground"
    android:value="false"/>
      <meta-data  android:name="com.dieam.reactnativepushnotification.notification_color"
          android:resource="@color/white"/>
      <receiver android:name="com.dieam.reactnativepushnotification.modules.RNPushNotificationActions" />
      <receiver android:name="com.dieam.reactnativepushnotification.modules.RNPushNotificationPublisher" />
      <receiver android:name="com.dieam.reactnativepushnotification.modules.RNPushNotificationBootEventReceiver">
        <intent-filter>
          <action android:name="android.intent.action.BOOT_COMPLETED" />
          <action android:name="android.intent.action.QUICKBOOT_POWERON" />
          <action android:name="com.htc.intent.action.QUICKBOOT_POWERON"/>
        </intent-filter>
      </receiver>
...
</application>


  • create /app/src/main/res/values/colors.xml
<resources>
    <color name='white'>#FFF</color>
</resources>


  • android/settings.gradle
include ':react-native-push-notification'
project(':react-native-push-notification').projectDir = file('../node_modules/react-native-push-notification/android')
  • android/app/build.gradle
dependencies {
...
    implementation project(':react-native-push-notification')
...
}

Usage

  • create a new file src/handlers/notificationsInit.js to initialize our push notifications

Pay attention: for Android, creating a notification channel is a must, and permissions should be requested on iOS only (hence requestPermissions: Platform.OS === 'ios' below), as presented in the following example.

import PushNotification from 'react-native-push-notification';
import { Platform } from 'react-native';
 
export const notificationsInit = () => {
    PushNotification.createChannel({
        channelId: 'channel-id-1',
        channelName: 'channel-name-1',
        playSound: true,
        soundName: 'default',
        importance: 4,
        vibrate: true,
    }, (created) => console.log(`createChannel returned '${created}'`)
    );
    PushNotification.configure({
        onRegister: function (token) {
            console.log('TOKEN:', token);
        },
        onNotification: function (notification) {
            console.log('NOTIFICATION:', notification);
        },
        permissions: {
            alert: true,
            badge: true,
            sound: true,
        },
        requestPermissions: Platform.OS === 'ios',
        popInitialNotification: true,
    });
}


  • create a new file src/handlers/notifications.js
import PushNotification from 'react-native-push-notification';
 
export const showNotification = (title, message) => {
    PushNotification.localNotification({
        channelId: 'channel-id-1',
        title: title,
        message: message,
    })
}
export const showScheduleNotification = (title, message) => {
    PushNotification.localNotificationSchedule({
        channelId: 'channel-id-1',
        title: title,
        message: message,
        date: new Date(Date.now() + 3 * 1000),
        allowWhileIdle: true, 
    })
}
export const handleNotificationCancel = () => {
    PushNotification.cancelAllLocalNotifications();
}
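As a side note, the same package can also schedule repeating notifications. Based on its README, something like the sketch below should fire every minute (the repeatType option and its values come from the package documentation; double-check them against the version you install):

export const showRepeatingNotification = (title, message) => {
    PushNotification.localNotificationSchedule({
        channelId: 'channel-id-1', // reuse the channel created in notificationsInit
        title: title,
        message: message,
        date: new Date(Date.now() + 3 * 1000),
        allowWhileIdle: true,
        repeatType: 'minute', // per the package README; verify for your version
    });
}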

  • Import and initialize notificationsInit in App.js
import React from 'react';
import { View } from 'react-native';
import { notificationsInit } from './src/handlers/notificationsInit';
 
const App = () => {
 
  notificationsInit();
 
  return (
      <View>
      </View>
  );
};
 
export default App;


  • Now, let's display the notification
  • create a new file src/components/TestNotifications.js
import React from 'react';
import { View, TouchableOpacity, StyleSheet, Text } from 'react-native';
import {
    showNotification,
    showScheduleNotification,
    handleNotificationCancel
} from '../handlers/notifications';
 
export default class TestNotifications extends React.Component {
 
    onTriggerPressHandle = () => {
        showNotification('Simple notification', 'simple notification triggered, nice work');
        console.log('simple notification triggered');
    }
 
    onSchedulePressHandle = () => {
        console.log('schedule notification triggered');
        showScheduleNotification('Scheduled notification', 'Scheduled notification triggered, nice work');
    }
 
    onCancelHandle = () => {
        handleNotificationCancel();
        console.log('cancel notification triggered');
    }
 
    render() {
        return (
            <View>
                <Text style={styles.title}>Click to try</Text>
                <TouchableOpacity
                    style={styles.button}
                    onPress={this.onTriggerPressHandle}>
                    <Text style={styles.text}>Simple notification</Text>
                </TouchableOpacity>
                <TouchableOpacity
                    style={styles.button}
                    onPress={this.onSchedulePressHandle}>
                    <Text style={styles.text}>{'--Scheduled notification--\nFires in 3 seconds'}</Text>
                </TouchableOpacity>
                <TouchableOpacity
                    style={[styles.button, {backgroundColor: 'red'}]}
                    onPress={this.onCancelHandle}>
                    <Text style={styles.text}>Cancel all notifications</Text>
                </TouchableOpacity>
            </View>
        );
    }
}
 
const styles = StyleSheet.create({
    button: {
        backgroundColor: 'dodgerblue',
        height: 80,
        borderRadius: 10,
        margin: 20,
        justifyContent: 'center'
    },
    text: {
        color: 'white',
        textAlign: 'center',
        fontSize: 20,
    },
    title: {
        backgroundColor:'dimgrey',
        color: 'white',
        textAlign: 'center',
        textAlignVertical: 'center',
        fontSize: 30,
        height: 100,
        marginBottom: 20,
    }
});


  • Import the TestNotifications component into App.js
import React from 'react';
import TestNotifications from './src/components/TestNotifications';
import { View } from 'react-native';
import { notificationsInit } from './src/handlers/notificationsInit';
 
const App = () => {
 
  notificationsInit();
 
  return (
      <View>
        <TestNotifications />
      </View>
  );
};
 
export default App;


Try out the buttons to trigger the notifications

Feel free to clone and play with this sample project:

https://github.com/aviadh314/RN-local-push-notification.git


Building your own computer language

Just code it

If you have wanted to build your own computer language but didn't know how to start, or thought you didn't have the time and skills to do it, then look no further than the Crafting Interpreters book by Bob Nystrom on building a computer language from scratch. And it really is from scratch: from the very beginning all the way to the full-fledged object-oriented stuff.

Where do I find it?

Just visit this web page, where Bob has a free-of-charge web book, still in the process of being written, where you can start building your own language. You can call it BestLangEver or even something like Jabba. The book is very detailed and explains things in a clear and easy-to-understand manner. Thanks to the book I was able to understand how a Mark-and-Sweep garbage collector works and how it can be quite easily implemented. Bob has done a great job of bringing the art of language design to the masses.
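Speaking of Mark-and-Sweep, the core idea is small enough to fit in a toy sketch. Here is one in JavaScript (my own illustration, nothing to do with the book's C implementation): starting from the roots, mark everything reachable, then sweep away whatever was left unmarked.

// Toy mark-and-sweep: heap is the list of all allocated objects,
// roots are the objects the program can reach directly.
function gc(heap, roots) {
  const marked = new Set();
  const stack = [...roots];
  while (stack.length) {            // mark phase: trace the object graph
    const obj = stack.pop();
    if (!marked.has(obj)) {
      marked.add(obj);
      stack.push(...obj.refs);
    }
  }
  return heap.filter(o => marked.has(o)); // sweep phase: keep only what was marked
}

// b is reachable from the root a, while c is garbage.
const a = { refs: [] }, b = { refs: [] }, c = { refs: [] };
a.refs.push(b);
console.log(gc([a, b, c], [a]).length); // 2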

What are you waiting for?

Start reading.

Driver drowsiness detection with Machine and/or Deep Learning

It's actually even more useful than a Driver Assistant

In the previous post I mentioned that it would be nice to have a mobile phone application capable of detecting various erroneously driven cars in front of the moving vehicle. Another interesting application, in my opinion an even more impactful one, would be a mobile application that uses the 'selfie' camera to track the driver's alertness while driving and indicates by voice or sound effects that the driver needs to take action.

Why is this application useful?

The drivers among us (and not only they) surely know that there are times when driving does not come easy, especially when a driver is tired, exhausted by lack of sleep or under a certain amount of stress. Driving in an altered state of consciousness is, by the way, against the law. This in turn is a cause of many road accidents that might be prevented if the driver were informed in a timely manner that he or she needs to stop and rest. The mobile application mentioned above could assist in exactly this situation. It could even send a notification remotely to anyone concerned that there is a need to call the driver, text them or do something else to grab their attention.

Is there anything like this in the wild?

An MIT group researching autonomous driving, headed by Lex Fridman, used this approach to track drivers of Tesla cars and their interaction with the car. For more details, you may check out the links below with nice videos and explanations.

This implementation combines the best of the state of the art in machine and deep learning.

This implementation is from 2010 and apparently it is plain old OpenCV with no Deep Learning.


Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of behavior classified
    • Driver not paying attention to the road
      • By holding a phone
      • Being distracted by another passenger
      • By looking at road accidents, whatever
    • Driver drowsiness detection (see the eye aspect ratio sketch after this list)
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Take action notifications
    • Voice announcement to the driver
    • Sound effects
    • Sending text, images notification to friends and family who may call and intervene
    • Automatically use Google Maps to navigate to the nearest coffee station, such as Starbucks, Dunkin' (no more Donuts) or Tim Hortons (select whichever applies to you)
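For the drowsiness detection step itself, a common trick, and the one that Adrian Rosebrock's tutorial mentioned in the plan below builds on, is the eye aspect ratio (EAR) computed from six eye landmarks: it stays roughly constant while the eye is open and collapses when it closes. Here is a sketch of the calculation (JavaScript for illustration only; the landmarks themselves would come from the face landmark detector in the OpenCV/dlib pipeline):

// Eye aspect ratio: p1..p6 are the six eye landmarks as {x, y} points.
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);
function eyeAspectRatio([p1, p2, p3, p4, p5, p6]) {
  // the two vertical eye distances over the horizontal one
  return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4));
}
// If the EAR stays below a threshold (around 0.3) for a few dozen consecutive
// frames, the eyes have been closed for too long: time to sound the alert.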

Then what are we waiting for?

This application can be built quite 'easily and fast' if you have an Android developer account, have some experience developing Android apps, have worked a little bit with GitHub, and have a certain amount of experience and fascination with machine learning or OpenCV with DL-based models. Grab your computer of choice and hurry to develop this marvelous piece of technology that will make you a new kind of person.

A possible plan of action

  • Get an Android phone, preferably one of the latest models, for performance reasons.
  • Get a computer that can run OpenCV 4 and Android Studio.
  • Install OpenCV and all needed dependencies.
  • Run the example from Adrian Rosebrock’s blog post.
  • Install Android Studio.
  • Create an Android developer account (if you don't have one; it costs about $25 USD)
  • Use the Android app from this blog post as a blueprint and adapt the Python code from Adrian's implementation into Java.
  • Publish the app at Google Play Store.
  • Share the app.

References

Driver Assistant. Detect strange drivers with Deep Learning.

Driver assistant app. Can it be done?

I was too optimistic about making this work on Android, since it takes more than a couple of seconds to process even a single frame. So, folks, doing what I hoped for in this post with OpenCV is currently not achievable on a mobile phone.

This video, which had 30 fps and was 11 seconds long, took about 22 minutes to process.


I wonder why there are close to no Android or iPhone applications that can detect, in real time, erroneous drivers driving in front of you or beside you. The technology is there, and the algorithms, namely Deep Learning, are there too. It is possible to run OpenCV-based deep learning models in real time on mobile phones and get good enough performance to detect a suddenly stopping car ahead of you. Since a mobile phone's field of view isn't that large, I think it will be hard if not impossible to detect erroneous driving to the sides of the car. A good example of OpenCV-based object detection and classification using Deep Learning is the Mask R-CNN with OpenCV post by Adrian Rosebrock from PyImageSearch.

Requirements

  • Hardware
    • Decent average mobile phone 
  • Software
    • Operating system
      • Android or iPhone
    • Object detection and classification
      • OpenCV based approach using built-in DL models
  • Type of objects classified
    • Car
    • Truck
    • Bus
    • Pedestrian (optional)
  • Number of frames per second
    • Depends on the hardware. Let’s say 24. 
  • Field of View 
    • About 60 degrees
  • Type of erroneous driving detected
    • Sudden stopping
    • Zigzag driving
    • Cutting off from the side (hard to do with a single forward-facing phone camera)
    • etc.

Then what are we waiting for?

This application can be built quite 'easily and fast' if you have an Android developer account, have some experience developing Android apps, have worked a little bit with GitHub, and have a certain amount of experience and fascination with machine learning, namely OpenCV DL-based models. To detect the dangerous maneuvers of others, a little bit of math is needed to recognize the maneuvers themselves, as well as to calculate the speed, direction and distance of other cars. The effort is worth investing time into. Even a little helper can have a big impact, unless drivers start staring into the mobile phone screen to see how it's doing while driving.

A possible plan of action

Kaitai Web IDE on Windows and Linux


A Guide to Kaitai Web IDE

This post is a continuation of my previous post about Kaitai Struct, a DSL for describing binary data formats. This post will describe how to download the Kaitai Web IDE and run it locally as a web application, and also how to build and run it yourself.

Prerequisites

If you are interested in running the Kaitai Web IDE locally or taking part in its development, then you need to install some additional software, such as

  • Anaconda 2/3 that will help install
    • Git
    • Python 2/3
    • Node.js (to be able to build locally)

This guide will explain how to do it on Windows and Linux (Ubuntu).

In addition, the following versions will be used

  • Anaconda with Python 2.7
  • Git 2.14.1
  • Python 2.7
  • Node.js 6.11.0

Anaconda to the rescue

Anaconda is a package manager for various libraries, be it Python, Node.js etc. We'll use it to install all the dependencies we need for the Kaitai Web IDE.

1. Download Anaconda from the official site


For the sake of this tutorial you can choose either the Python 2.7 or the Python 3.6 version.

2. Install Anaconda like any other application.

Look for the newly installed programs and run the Anaconda Prompt (Windows).


3. Open a command line or the Anaconda Prompt and install Git (taken from here)

conda install -c conda-forge git

4. Approve if asked to install any additional packages.

5. Test that Git was installed correctly

C:\Users>git --version
git version 2.14.1.windows.1

6. Clone or download the Kaitai Web IDE stable release from the GitHub repository.

git clone https://github.com/kaitai-io/ide-kaitai-io.github.io

7. You'll see output resembling this one

C:\Users>git clone https://github.com/kaitai-io/ide-kaitai-io.github.io
Cloning into 'ide-kaitai-io.github.io'...
remote: Counting objects: 3003, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 3003 (delta 0), reused 1 (delta 0), pack-reused 2995
Receiving objects: 100% (3003/3003), 5.98 MiB | 5.11 MiB/s, done.
Resolving deltas: 100% (2118/2118), done.

To run the Kaitai Web IDE as is

8. Open the folder where you placed the Kaitai Web IDE (in my case C:\Users\ide-kaitai-io.github.io).

9. Run the following command to launch a web server that will host the Kaitai Web IDE app locally

python -mSimpleHTTPServer

10. Go to http://127.0.0.1:8000/

Note: if port 8000 is already used by another application you will see the error below; in that case use a different port, such as 8888.

socket.error: [Errno 10013] An attempt was made to access a socket in a way forb
idden by its access permissions

10 (in case of error). To fix this, execute the previous command with a port number as an argument

python -mSimpleHTTPServer 8888
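Note: the SimpleHTTPServer module exists only in Python 2. If your Anaconda environment runs Python 3, the equivalent command is

python -m http.server 8888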

Go to http://127.0.0.1:8888/

As it turned out, I had to run on port 8888, and the Kaitai Web IDE is up and running!


Build it yourself

If you want to build the Kaitai Web IDE yourself, then install some additional software with Anaconda.

The attempt below failed, but the one after it succeeded. I preserve it for the sake of the argument (on Linux it works easily).

11. Install Node.js from here

conda install -c conda-forge nodejs 

12. Approve if asked to install any additional packages.

And there was an error on Windows (why is that?)

CondaError: WindowsError(206, 'The filename or extension is too long')

12 (once again). OK, then let's download and install Node.js from the official site.


13. Open a command prompt and test the Node.js version

C:\Users\iscamc>node --version
v6.11.3

We are almost at the finish line. To build the Kaitai Web IDE:

14. Clone or download the Kaitai Web IDE sources from the GitHub repository.

git clone --recursive https://github.com/kaitai-io/kaitai_struct_webide

You’ll see output like this

C:\Users>git clone --recursive https://github.com/kaitai-io/kaitai_struct_webide
Cloning into 'kaitai_struct_webide'...
remote: Counting objects: 5175, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 5175 (delta 6), reused 12 (delta 5), pack-reused 5153
Receiving objects: 100% (5175/5175), 9.63 MiB | 4.49 MiB/s, done.
Resolving deltas: 100% (3852/3852), done.
Submodule 'formats' (https://github.com/kaitai-io/kaitai_struct_formats/) regist
ered for path 'formats'
Cloning into 'C:/Users/kaitai_struct_webide/formats'...
remote: Counting objects: 1576, done.
remote: Compressing objects: 100% (58/58), done.
remote: Total 1576 (delta 37), reused 63 (delta 23), pack-reused 1495
Receiving objects: 100% (1576/1576), 431.29 KiB | 2.82 MiB/s, done.
Resolving deltas: 100% (888/888), done.
Submodule path 'formats': checked out 'a3643b677daccfd323f7d9ace998292c9ee51811'

15. Open the folder where you placed the Kaitai Web IDE sources (in my case C:\Users\kaitai_struct_webide).

16. Run the following command to install all the JavaScript and TypeScript dependencies

npm install

17. Now compile and run the Web IDE itself

python serve.py --compile

Note: if port 8000 is already used by another application you will see the error below; in that case use a different port, such as 8888.

socket.error: [Errno 10013] An attempt was made to access a socket in a way forb
idden by its access permissions

To fix this error, open the serve.py file that resides inside the Kaitai folder (in my case C:\Users\kaitai_struct_webide) in any text editor of your choice, and replace PORT = 8000 with, say, PORT = 8888 on line 15.


17 (in case of error). Rerun the command below

python serve.py --compile

When everything works you'll see something like this

C:\Users\kaitai_struct_webide>python serve.py --compile
Starting typescript compiler...
Please use 127.0.0.1:8888 on Windows (using localhost makes 1sec delay)
Press Ctrl+C to exit.
12:57:28 AM - Compilation complete. Watching for file changes.

18. Go to http://127.0.0.1:8888/

Congratulations, you've built the Kaitai Web IDE!



Reverse Engineering with Kaitai Struct

Reverse engineering the easy way

Imagine you have some kind of third-party data storage device that you need to figure out how to work with, and the only thing you have is a detailed description of the protocol the device uses. The problem is that there is no source code available that could make this process easy. What is left is to implement this protocol manually, with lots of trial-and-error iterations, and then, on the next similar occasion, to repeat this difficult process all over again. But no worries: there is one tool that comes in handy in situations like this, when there is a file or a stream that you want to parse and you want to be able to do it fast.

Meet Kaitai Struct

First, here comes an official description of Kaitai Struct

Kaitai Struct is a domain-specific language (DSL) that is designed with one particular task in mind: dealing with arbitrary binary formats.

Parsing binary formats is hard, and that’s a reason for that: such formats were designed to be machine-readable, not human-readable. Even when one’s working with a clean, well-documented format, there are multiple pitfalls that await the developer: endianness issues, in-memory structure alignment, variable size structures, conditional fields, repetitions, fields that depend on other fields previously read, etc, etc, to name a few.

Kaitai Struct tries to isolate the developer from all these details and allow to focus on the things that matter: the data structure itself, not particular ways to read or write it.

Features

  • Kaitai is supported on Linux and Windows (not sure about Mac).
  • So far, Kaitai supports generating parsers in the following languages
    • C++/STL
    • C#
    • Java
    • JavaScript
    • Perl
    • PHP
    • Python
    • Ruby
  • If you want, you are welcome to add one more language to the list

How to use this Kaitai?

In short, to use Kaitai

  • You use a declarative syntax to describe the data source you want to be able to parse, such as a file system or an image format or whatever you like, in a ksy file.
  • Then, using the Kaitai Web IDE or the Kaitai Struct compiler, you generate code in one of the relevant supported languages, such as Java, C#, C++ etc.
  • That's it. Now use the generated code to get full access to your data source (a sketch of what this can look like follows below).
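For instance, with the JavaScript target, using a generated parser could look roughly like the sketch below (the file and field names are illustrative, loosely based on the gif.ksy spec from the Kaitai format gallery; check the generated code for the exact names):

// A sketch: parse a GIF with a parser generated from gif.ksy (names illustrative).
var KaitaiStream = require('kaitai-struct/KaitaiStream');
var Gif = require('./Gif'); // generated by the Kaitai Struct compiler, JavaScript target
var fs = require('fs');

var gif = new Gif(new KaitaiStream(fs.readFileSync('sample.gif')));
// snake_case attributes from the ksy spec become camelCase properties:
console.log(gif.logicalScreen.imageWidth, gif.logicalScreen.imageHeight);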

Kaitai REPL (Read-Eval-Print Loop)


To get a feeling for what Kaitai is capable of, you can start by playing with the Kaitai REPL, which has a number of examples showcasing what can be achieved with it, such as parsing the doom.wad package file format.

Kaitai Web IDE


If you think you are ready to start applying Kaitai to real problems, then jump into the Kaitai Web IDE, which is very nice and easy to use. You can upload your data source there and start writing a description of how the data source is organized.

This official wiki page will show you the main features of the Web IDE.

Kaitai Compiler standalone

It is possible to use the Kaitai compiler in standalone mode via the command line interface of your choice, be it on Linux, Windows etc. How to do this is described here.

Resources
