# Programming and a school algebra formula finally deciphered

### Explanation is due

I guess you may have had the same feeling when you learnt algebra at school. Some formulas were clear and understandable, but some were cryptic, and it was unclear how anyone would ever derive them. Then the only way to master them was to memorize them. For example, there is the well-known formula for a difference of squares:

$a^2 - b^2 = (a - b)(a + b) = a^2 + ab - ba - b^2 = a^2 - b^2, \quad (1)$

Then there was the slightly more cryptic formula for a difference of cubes, which is not that obvious to a regular student:

$a^3 - b^3 = (a - b)(a^2 + ab + b^2) = a^3 + a^2b + ab^2 - a^2b - ab^2 - b^3 = a^3 - b^3, \quad (2)$

So, I think you get it; the next formula is for $a^4 - b^4$:

$a^4 - b^4 = (a - b)(a^3 + a^2b + ab^2 + b^3) = a^4 + a^3b + a^2b^2 + ab^3 - a^3b - a^2b^2 - ab^3 - b^4 = a^4 - b^4, \quad (3)$

And finally, we get to the most cryptic formula, the one that could be truly frustrating in a school algebra lesson: the formula for the difference of the $n$-th powers of two positive whole numbers (integers),

$a^n - b^n = (a - b)(a^{n-1} + a^{n-2}b + a^{n-3}b^2 + \dots + a^2b^{n-3} + ab^{n-2} + b^{n-1}), \quad (4)$

Now, the last formula seems frightening, and most interestingly one could ask: how in the world did anyone derive it? Also, how do you use it correctly?
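Before trying to decipher it, it is easy to at least convince ourselves that the smaller formulas hold. Here is a quick numeric sanity check in Python (my sketch, added just for verification):

```python
# Numerically verify formulas (1), (2) and (3) for a range of integer pairs.
for a in range(1, 10):
    for b in range(1, 10):
        assert a**2 - b**2 == (a - b) * (a + b)                          # (1)
        assert a**3 - b**3 == (a - b) * (a**2 + a*b + b**2)              # (2)
        assert a**4 - b**4 == (a - b) * (a**3 + a**2*b + a*b**2 + b**3)  # (3)
print("formulas (1)-(3) hold for all checked pairs")
```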

### Take it slow

Let’s look at it in slow motion. If we look at how we get from formula (1) to formula (4), we can notice a certain symmetry among the terms in the second braces of each formula.

So, the second braces in formula (1) have

(a + b)

the second braces in formula (2) have

$(a^2 + ab + b^2)$

the second braces in formula (3) have

$(a^3 + a^2b + ab^2 + b^3)$

the second braces in formula (4) have

$(a^{n-1} + a^{n-2}b + a^{n-3}b^2 + \dots + a^2b^{n-3} + ab^{n-2} + b^{n-1})$

Do you see it? When there is $a^2$ on the left side there is a corresponding $b^2$ on the right; when there is $a^2b$ on the left, there is a corresponding $ab^2$ on the right side, and so on. This is the symmetry I am talking about. The general formula is actually a factorization of a polynomial. But we can look at it in a different manner, just to understand how to use it properly; the derivation of the general formula is a little more involved.

One interesting thing to notice is that the sum of the powers of $a$ and $b$ in each term in the second braces is always $n - 1$.

$(a + b) = a^1 + b^1$, i.e. the powers are 1, 1

$(a^2 + ab + b^2) = a^2 + a^1b^1 + b^2$, i.e. the powers are 2, 1 + 1, 2

$(a^3 + a^2b + ab^2 + b^3) = (a^3 + a^2b^1 + a^1b^2 + b^3)$, i.e. the powers are 3, 2 + 1, 1 + 2, 3

$(a^{n-1} + a^{n-2}b + a^{n-3}b^2 + \dots + a^2b^{n-3} + ab^{n-2} + b^{n-1})$, i.e. the powers are $n - 1$, $n - 2 + 1 = n - 1$, $n - 3 + 2 = n - 1$, …, $2 + n - 3 = n - 1$, etc.

Now let’s also note that we can treat 1 as $1 = a^0$ or $1 = b^0$, and look again at the expressions above:

$(a + b) = a^1b^0 + a^0b^1,$

$(a^2 + ab + b^2) = a^2b^0 + a^1b^1 + a^0b^2,$

$(a^3 + a^2b + ab^2 + b^3) = (a^3b^0 + a^2b^1 + a^1b^2 + a^0b^3),$

$(a^{n-1} + a^{n-2}b + a^{n-3}b^2 + \dots + a^2b^{n-3} + ab^{n-2} + b^{n-1})$

$= (a^{n-1}b^0 + a^{n-2}b^1 + a^{n-3}b^2 + \dots + a^2b^{n-3} + a^1b^{n-2} + a^0b^{n-1}),$

I hope you can see the systematic pattern that is going on here.

### Rules of the game

Rule 1: The number of terms in the second braces always equals the power of the initial expression, say two for $a^2 - b^2$, three for $a^3 - b^3$, etc.

Rule 2: The sum of the powers in each term in the second braces is $n - 1$, as was already shown in the previous examples.

Note that both rules can also be proven by mathematical induction, but I leave that as an exercise for you.

### How to use this formula and how to zip it

Now that we’ve noticed the pattern, it shows us how to use the formula in a simple way, without rote memorization or blindly applying someone else’s derivation.

The only things to remember are that the first braces always contain $(a - b)$, and that in the second braces the sum of the powers in each term is $n - 1$. Let’s look at the concrete example of $a^8 - b^8$.

Let’s start with the second braces and write each term without powers, in accordance with Rule 1. We know there should be $n$, i.e. 8, such terms:

(ab + ab + ab + ab + ab + ab + ab + ab)

Now, let’s use Rule 2 and add powers to each term in the second braces, remembering that the powers of $a$ start from $n - 1$ and decrease by 1 with each consecutive $a$, while the powers of $b$ start from 0 and increase by 1 with each $b$ up to $n - 1$. Applied to our example,

for $a$’s: $a^7, a^6, a^5, a^4, a^3, a^2, a^1, a^0$

and for $b$’s: $b^0, b^1, b^2, b^3, b^4, b^5, b^6, b^7$

Now, putting these together in the formula we get,

$a^8 - b^8 = (a - b)(a^7b^0 + a^6b^1 + a^5b^2 + a^4b^3 + a^3b^4 + a^2b^5 + a^1b^6 + a^0b^7)$

$= (a - b)(a^7 + a^6b + a^5b^2 + a^4b^3 + a^3b^4 + a^2b^5 + ab^6 + b^7).$

### Zip it

So, now we are ready to zip this formula using the math notation for a sum:

$a^n - b^n=(a - b)\sum^{n - 1}_{k = 0} a^{(n - 1) - k}{b^k}$

where $k$ runs from 0 to $n - 1$, i.e. 0, 1, 2, …, $n - 1$.
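The zipped formula translates almost literally into code. Here is a minimal Python sketch (the helper name `diff_of_powers_factor` is mine, invented for illustration):

```python
def diff_of_powers_factor(a, b, n):
    """The second factor of formula (4): sum of a^((n-1)-k) * b^k for k = 0..n-1."""
    return sum(a**((n - 1) - k) * b**k for k in range(n))

# The identity a^n - b^n = (a - b) * factor then holds, for example:
for n in range(1, 8):
    assert 7**n - 3**n == (7 - 3) * diff_of_powers_factor(7, 3, n)
```

For instance, with $a = 2$, $b = 1$, $n = 8$ the factor is $2^7 + 2^6 + \dots + 1 = 255$, which indeed equals $2^8 - 1^8$.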

### An interesting turn of events

What is nice about this formula is the fact that it’s actually a concise description of an algorithm that checks whether a certain string is a palindrome.

The main idea is to take a sequence of letters (an array of characters in programming speak), and then start comparing

1. First vs. last letter
2. Second vs. one before last
3. etc
4. For each pair above, check whether the letters are the same. If there is at least one pair where they are not, then it’s not a palindrome.

In the Java programming language this algorithm could be implemented as follows (you can run this code in an online Java compiler):

public class Main {
    public static void main(String[] args) {
        String word = "TENET";
        System.out.println(isPalindrome(word));
    }

    static boolean isPalindrome(String word) {
        char[] charArray = word.toCharArray();
        int n = charArray.length;

        // k and (n - 1) - k mirror the index pairing in the zipped formula
        for (int k = 0; k <= n - 1; k++) {
            if (charArray[k] != charArray[(n - 1) - k]) {
                return false;
            }
        }
        return true;
    }
}


# When math powers algorithms it’s entertaining

I think I already wrote previously that a couple of years ago I bought the Elements of Programming book by Alexander Stepanov and Paul McJones. The issue was that the book’s content was hard for me to grasp at the time. I can hardly say that I understand it much better now, but I finally got where the rationale for the book came from and why it was written the way it was. It turns out that Alexander Stepanov, as a mathematician, was deeply influenced by Abstract Algebra, Group Theory and Number Theory. Elements of these fields of mathematics can clearly be traced in the Elements of Programming. For example, chapter 5 is called Ordered Algebraic Structures, and it mentions, among other things, the semigroup, monoid and group, which are concepts from Group Theory. Overall, the book is structured somewhat like Euclid’s Elements, since it starts from definitions that are then gradually built upon in the following chapters.

Which brings me to the main topic of this post. The post is actually about a different book, which Alexander Stepanov wrote with Daniel Rose, and which was created by refining the notes for the Four Algorithmic Journeys course that Stepanov taught in 2012 at the A9 company (a subsidiary of Amazon). The course is available on YouTube and consists of three parts, each having a number of videos, plus an Epilogue part.

I highly recommend watching it to anyone who is curious about programming, mathematics and science in general. The course is entertaining, and it shows how programming, or more exactly the algorithms used in programming, is based on algorithms that were already known thousands of years ago in Egypt, Babylon, etc. Alexander Stepanov has a peculiar way of lecturing, and I find this style of presentation funny. The slides for the course and the notes, which were aggregated into the Three Algorithmic Journeys book draft, are freely available at Alexander Stepanov’s site.

So the book I want to mention is From Mathematics to Generic Programming, which was published in 2014 and is a reworked version of the Three Algorithmic Journeys draft. This is how Daniel Rose describes it in the Authors’ Note of the book:

The book you are about to read is based on notes from an “Algorithmic Journeys” course taught by Alex Stepanov at A9.com during 2012. But as Alex and I worked together to transform the material into book form, we realized that there was a stronger story we could tell, one that centered on generic programming and its mathematical foundations. This led to a major reorganization of the topics, and removal of the entire section on set theory and logic, which did not seem to be part of the same story. At the same time, we added and removed details to create a more coherent reading experience and to make the material more accessible to less mathematically advanced readers.

## My verdict

As the authors mention, the book is geared towards generic programming, but I recommend reading both books in parallel, since each one complements the other. I actually think the Three Algorithmic Journeys draft is even better than From Mathematics to Generic Programming (FM2GP). First, it’s free, and second, ironically, it’s more generic than the FM2GP book.

# What the hack is Rule 30? Cellular Automata Explained

## Introduction

If you ask the same question, what is this Rule 30, then you are not alone. The rule is related to the one-dimensional cellular automata introduced by Stephen Wolfram in 1983 and later described in his book A New Kind of Science. Even though I had skimmed through this book previously, I was never able to understand what it was all about. It had some neat diagrams, but I didn’t try to understand how the diagrams were generated. This changed a day ago when I watched an interesting interview that Lex Fridman had with Stephen Wolfram, where they discussed, among other things, Rule 30, which was the rule Wolfram described first. This rule is capable of generating complex behavior even though the rule itself is very simple to define.

## Elementary Cellular Automaton

An elementary cellular automaton consists of a one-dimensional array of cells that can be in just two states, namely black or white. The automaton starts from an initial state and then transitions to the next state based on a certain function: the next color of each cell is calculated from the colors of the current cell and its left and right neighbors. That transition function also gives each rule its name, as in Rule 30.

### Rule 30 transition function

The transition function can be written as a table:

| left, current, right | 111 | 110 | 101 | 100 | 011 | 010 | 001 | 000 |
|----------------------|-----|-----|-----|-----|-----|-----|-----|-----|
| next state of cell   |  0  |  0  |  0  |  1  |  1  |  1  |  1  |  0  |

Here, for example, the binary digits ‘111’ in the first entry indicate the black color of the left, current and right cells, and the values ‘0’ or ‘1’ in the second row indicate what the color of the current cell will be in the next state (iteration) of the automaton. If we write down all the binary values of the next state together as a single 8-digit binary number, which is 00011110, and convert it to a decimal number, we get 30, hence the name Rule 30. In Boolean form this transition function is

(left, current, right) -> left XOR (current OR right)

We’ll use this function later in the C++ implementation of the Rule 30.
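As a quick sanity check that this Boolean form really encodes the number 30, we can evaluate left XOR (current OR right) for all eight neighborhoods, from ‘111’ down to ‘000’. A small Python sketch:

```python
# Collect the next-state bit for each 3-cell neighborhood, from 111 down to 000
bits = ""
for n in range(7, -1, -1):
    left, current, right = (n >> 2) & 1, (n >> 1) & 1, n & 1
    bits += str(left ^ (current | right))
print(bits, "=", int(bits, 2))  # prints: 00011110 = 30
```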

This is how the output of Rule 30 looks after 20 and 100 steps, respectively.

## How to generate Rule 30?

It is easy to implement Rule 30 using a 64-bit integer in C++. The only drawback is that we can get only 32 steps with the implementation below, taken from Wikipedia.

#### Note: If this is a little too much for you right now, then skip to the next part where we’ll use WolframAlpha to program it for us.

In the code below a 64-bit integer is used to hold the automaton state, with a single bit set by a bit mask for the initial state. The outer for loop then generates 32 steps of Rule 30 by applying the transition function, while the inner loop prints the black (character ‘1’) and white (character ‘-’) cells using the state bit mask.

#include <stdint.h>
#include <iostream>

int main() {
    // This is our bit mask with bit 31 set to '1' for the initial state
    uint64_t state = 1u << 31;

    for (int i = 0; i < 32; i++) {
        for (int j = sizeof(uint64_t) * 8 - 1; j >= 0; j--) {
            // Decide the color of the current cell based on the state bit mask.
            // A bitwise operator is used to accomplish this efficiently
            std::cout << char(state >> j & 1 ? '1' : '-');
        }
        std::cout << '\n';

        // This is just the (left, current, right) -> left XOR (current OR right)
        // function mentioned previously, applied to all cells at once with
        // bitwise operators
        state = (state >> 1) ^ (state | state << 1);
    }
}


It is possible to run this code in an online C++ compiler. Just click on the link and then click on the green Run button at the top of the screen.

The full output for 32 steps looks like this

--------------------------------1-------------------------------
-------------------------------111------------------------------
------------------------------11--1-----------------------------
-----------------------------11-1111----------------------------
----------------------------11--1---1---------------------------
---------------------------11-1111-111--------------------------
--------------------------11--1----1--1-------------------------
-------------------------11-1111--111111------------------------
------------------------11--1---111-----1-----------------------
-----------------------11-1111-11--1---111----------------------
----------------------11--1----1-1111-11--1---------------------
---------------------11-1111--11-1----1-1111--------------------
--------------------11--1---111--11--11-1---1-------------------
-------------------11-1111-11--111-111--11-111------------------
------------------11--1----1-111---1--111--1--1-----------------
-----------------11-1111--11-1--1-11111--1111111----------------
----------------11--1---111--1111-1----111------1---------------
---------------11-1111-11--111----11--11--1----111--------------
--------------11--1----1-111--1--11-111-1111--11--1-------------
-------------11-1111--11-1--111111--1---1---111-1111------------
------------11--1---111--1111-----1111-111-11---1---1-----------
-----------11-1111-11--111---1---11----1---1-1-111-111----------
----------11--1----1-111--1-111-11-1--111-11-1-1---1--1---------
---------11-1111--11-1--111-1---1--1111---1--1-11-111111--------
--------11--1---111--1111---11-11111---1-11111-1--1-----1-------
-------11-1111-11--111---1-11--1----1-11-1-----11111---111------
------11--1----1-111--1-11-1-1111--11-1--11---11----1-11--1-----
-----11-1111--11-1--111-1--1-1---111--1111-1-11-1--11-1-1111----
----11--1---111--1111---1111-11-11--111----1-1--1111--1-1---1---
---11-1111-11--111---1-11----1--1-111--1--11-1111---111-11-111--
--11--1----1-111--1-11-1-1--11111-1--111111--1---1-11---1--1--1-
-11-1111--11-1--111-1--1-1111-----1111-----1111-11-1-1-111111111



Compare the output above to the one generated using WolframAlpha in the Wolfram Notebook.

## Using WolframAlpha to generate Rule 30

WolframAlpha is a computational knowledge engine developed by Wolfram Research. It is built on the Wolfram Language, Mathematica’s symbolic language, and it has a declarative programming style: you specify what you want done instead of how it should be done.

For example, to generate Rule 30 and visualize it, one simply writes

ArrayPlot[CellularAutomaton[30, {{1},0}, 64]]


where the CellularAutomaton function generates 64 states of the automaton, starting from a single black cell, in accordance with the Rule 30 transition function, and the ArrayPlot function plots the result.

And the output is

Please follow this link to the Wolfram Notebook where various Elementary Cellular Automata rules are generated.

## Summary

Playing with cellular automaton rules seems like an interesting game to play, and doing it using WolframAlpha is a piece of cake.

# Better understanding with Optimization for Machine Learning

## Long awaited book from Machine Learning Mastery

Recently, I’ve been reading the new Optimization for Machine Learning book from Machine Learning Mastery, written by Jason Brownlee. It just so happened that I read it fully from start to finish, since I was one of the technical reviewers of the book. The book was interesting to read thanks to a number of ingredients.

As always, Jason has written an engaging book with practical advice that can be put into action right away using open source software on Linux, Windows or macOS. Apart from this, the book has just enough clearly explained theoretical material that even beginning machine learning practitioners can play with the optimization algorithms described in the book.

## What I liked and what surprised me in the book

Personally, I think it was while reading and working with this book that I truly understood what optimization is, how it is used in machine learning, and what an optimization algorithm like gradient descent actually does, including how to implement one from scratch. I also very much enjoyed the chapter about global optimization with various types of evolutionary algorithms. Funnily enough, about two weeks after I finished reading the book I came across Donald Hoffman’s The Interface Theory of Perception in relation to consciousness, which builds on the theory of evolution by natural selection. For example, one of his papers written with colleagues, namely Does evolution favor true perceptions?, provides an example of a Genetic Algorithm (GA) which very much resembles the GA in Chapter 17 of the book. It is highly recommended reading for anyone interested in how consciousness arises in the mind. By the way, does it?
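To make the gradient descent point concrete, here is a minimal from-scratch sketch in Python. It is my illustration of the general idea, not code taken from the book:

```python
def gradient_descent(grad, start, learning_rate=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = start
    for _ in range(steps):
        x = x - learning_rate * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), start=0.0)
print(round(x_min, 4))  # prints: 3.0
```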

## Overall

The Optimization for Machine Learning book is what you have come to expect from Machine Learning Mastery books. It’s interesting, it’s practical, and it makes you understand what it is that you are doing in machine learning. As always, each chapter has extensive references to tutorials, technical papers and books on machine learning. So don’t wait, start reading it; maybe you’ll come up with a new theory of how consciousness emerges in the mind.

## References

Donald D. Hoffman, Manish Singh, Justin Mark, “Does evolution favor true perceptions?”, Proceedings Volume 8651, Human Vision and Electronic Imaging XVIII; 865104 (2013)

# Playing with the 3D Donut-shaped C code in JavaScript

## Background

Having seen a short YouTube video by Lex Fridman about the donut-shaped C code that generates a 3D spinning donut, in which he encouraged viewers to look at the code and play with it, I did just that.

But before that, I went to the Donut math: how donut.c works page, where the author of the code, Andy Sloane, describes how he went from the geometry and math to the implementation of the donut in C.

Andy Sloane’s tutorial has two visualizations that you can launch, as well as the JavaScript implementation of the donut. One visualization shows the ASCII donut, while the other uses the <canvas> element to draw the animation.

## Playing with the code

The easiest way to play with the 3D spinning donut in JavaScript is to use the JSFiddle online editor. When the editor opens, you see four main areas, just like in almost any other JS editor: HTML, CSS, JavaScript and Result.

To be able to start playing with the donut code like crazy, we need to do a few things.

### First

First, we need a basic HTML page with a couple of buttons, a <pre> tag to hold the generated ASCII donut, and a <canvas> tag to show the other type of donut animation. To do this, just copy and paste the code below into the HTML area of the JSFiddle editor:

<html>
<body>
<button onclick="anim1();">toggle ASCII animation</button>
<button onclick="anim2();">toggle canvas animation</button>
<pre id="d" style="background-color:#000; color:#ccc; font-size: 10pt;"></pre>
<canvas id="canvasdonut" width="300" height="240">
</canvas>
</body>
</html>



### Second

Second, we need to copy and paste the JS code from Andy’s page, or the code below, into the JS area of the JSFiddle editor:

(function() {
var _onload = function() {
var pretag = document.getElementById('d');
var canvastag = document.getElementById('canvasdonut');

var tmr1 = undefined, tmr2 = undefined;
var A=1, B=1;

// This is copied, pasted, reformatted, and ported directly from my original
// donut.c code
var asciiframe=function() {
var b=[];
var z=[];
A += 0.07;
B += 0.03;
var cA=Math.cos(A), sA=Math.sin(A),
cB=Math.cos(B), sB=Math.sin(B);
for(var k=0;k<1760;k++) {
b[k]=k%80 == 79 ? "\n" : " ";
z[k]=0;
}
for(var j=0;j<6.28;j+=0.07) { // j <=> theta
var ct=Math.cos(j),st=Math.sin(j);
for(i=0;i<6.28;i+=0.02) {   // i <=> phi
var sp=Math.sin(i),cp=Math.cos(i),
h=ct+2, // R1 + R2*cos(theta)
D=1/(sp*h*sA+st*cA+5), // this is 1/z
t=sp*h*cA-st*sA; // this is a clever factoring of some of the terms in x' and y'

var x=0|(40+30*D*(cp*h*cB-t*sB)),
y=0|(12+15*D*(cp*h*sB+t*cB)),
o=x+80*y,
N=0|(8*((st*sA-sp*ct*cA)*cB-sp*ct*sA-st*cA-cp*ct*sB));
if(y<22 && y>=0 && x>=0 && x<79 && D>z[o])
{
z[o]=D;
b[o]=".,-~:;=!*#$@"[N>0?N:0];
}
}
}
pretag.innerHTML = b.join("");
};

window.anim1 = function() {
if(tmr1 === undefined) {
tmr1 = setInterval(asciiframe, 50);
} else {
clearInterval(tmr1);
tmr1 = undefined;
}
};

// This is a reimplementation according to my math derivation on the page
var R1 = 1;
var R2 = 2;
var K1 = 150;
var K2 = 5;

var canvasframe = function() {
var ctx = canvastag.getContext('2d');
ctx.fillStyle = '#000';
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);

if(tmr1 === undefined) {
// only update A and B if the first animation isn't doing it already
A += 0.07;
B += 0.03;
}

// precompute cosines and sines of A, B, theta, phi, same as before
var cA=Math.cos(A), sA=Math.sin(A),
cB=Math.cos(B), sB=Math.sin(B);
for(var j=0;j<6.28;j+=0.3) { // j <=> theta
var ct=Math.cos(j),st=Math.sin(j); // cosine theta, sine theta
for(i=0;i<6.28;i+=0.1) { // i <=> phi
var sp=Math.sin(i),cp=Math.cos(i); // cosine phi, sine phi
var ox = R2 + R1*ct, // object x, y = (R2,0,0) + (R1 cos theta, R1 sin theta, 0)
oy = R1*st;
var x = ox*(cB*cp + sA*sB*sp) - oy*cA*sB; // final 3D x coordinate
var y = ox*(sB*cp - sA*cB*sp) + oy*cA*cB; // final 3D y
var ooz = 1/(K2 + cA*ox*sp + sA*oy); // one over z
var xp = (150 + K1*ooz*x); // x' = screen space coordinate, translated and scaled to fit our 320x240 canvas element
var yp = (120 - K1*ooz*y); // y' (it's negative here because in our output, positive y goes down but in our 3D space, positive y goes up)
// luminance, scaled back to 0 to 1
var L = 0.7*(cp*ct*sB - cA*ct*sp - sA*st + cB*(cA*st - ct*sA*sp));
if(L > 0) {
ctx.fillStyle = 'rgba(255,255,255,'+L+')';
ctx.fillRect(xp, yp, 1.5, 1.5);
}
}
}
}

window.anim2 = function() {
if(tmr2 === undefined) {
tmr2 = setInterval(canvasframe, 50);
} else {
clearInterval(tmr2);
tmr2 = undefined;
}
};

asciiframe();
canvasframe();
}

if(document.all)
window.attachEvent('onload', _onload);
else
window.addEventListener("load", _onload, false);
})();


### Third

After the HTML and JavaScript areas are populated,
click on the Run button (indicated by number 1 in the screenshot below) and you should see, in the Result area on the right, two buttons and two donuts (indicated by number 2). When each button is clicked, the relevant animation is turned on. Both of them can run in parallel.

## This is how it looks in real-time

## Playground

If you want to understand the math of how it’s done, first read the explanation by Andy Sloane. If you want to jump right into messing around with the code, then stay here.

### Need for speed

To change the speed of the animations, press Ctrl + F in the JSFiddle and search for the setInterval function:

tmr1 = setInterval(asciiframe, 50);
tmr2 = setInterval(canvasframe, 50);

The second argument controls the rotation speed of the donut. Decreasing it makes the donut rotate faster, and increasing it makes it slower.

### Paint it black

To change the background color of the donut created with the <canvas> animation, search for:

ctx.fillStyle = '#000'; // this is currently black

To change the color of the donut itself, search for:

ctx.fillStyle = 'rgba(255,255,255,'+L+')'; // update any of the first three arguments, which stand for Red, Green, Blue in RGB

## See you

There are plenty of other things you can try with this code. So what are you waiting for? Just do it!

# Salesforce Trailhead: An interesting approach to learning with inconclusive outcomes

## What’s new?

Recently, I changed jobs, and in my new position I use the Salesforce CRM platform. Compared to my previous positions, there is a very different approach to learning how to become a productive developer in this area. First of all, Salesforce created the Trailhead site, which contains a large number of e-learning courses; these are more like tutorials that are supposed to teach certain practical aspects of the Salesforce platform.
These tutorials are called Trails, which in turn consist of smaller Modules, which consist of even smaller Units. The units are small and don’t take a lot of time to go through. To engage people interested in learning Salesforce, accomplishing units, modules and trails gives you points and badges. There are two additional types of activities in Trailhead, namely Projects and Superbadges. These two are more hands-on oriented, with Superbadges giving a real taste of what it’s like to work with a production Salesforce CRM platform. Last but not least is the Trailmix, which is an option to compose a freestyle collection of Superbadges, Trails, Projects or Modules in one bundle.

## Overall, the structure of the Trailhead looks something like this

- Trailhead
  - Hands-on learning
    - Superbadge
    - Project
      - Units
  - Trail
    - Modules
      - Units
  - Module
    - Units
  - Trailmix
    - Superbadges
    - Projects
    - Trails
    - Modules
      - Units

## Some food for thought

After attempting and finishing each type of e-learning in Trailhead, I have some thoughts on how this approach could be improved.

First of all, the e-learning concept is fresh and works well. It allows one to learn at one’s own pace. In addition, the content quality is good, and assignments can be carried out in the free Salesforce CRM instances that Salesforce provides to learners. Second, the points and badges really create, at least in my opinion, a contest feeling where one competes with oneself.

After praising the learning platform comes my harsh criticism.

1. There is no time stamp of when, and no indication by whom, the content was created. I deem this essential, since I want to know whether the content is out of date and whether I can trust and rely on it. Also, it’s good to give credit to the creator of the content.
2. More than a few modules have too much wording in them, which is somewhat repetitive and takes valuable time to read.
3. The Modules present content in such a manner that doing them feels like being a machine: just type what they say, like a dummy, and then by some magic the platform validates what’s done, without providing much useful feedback if something went wrong. This is especially frustrating while doing Superbadges. It’s easy to check that hundreds of novice Salesforce developers were daunted by the unintelligent feedback on the superbadge steps verification.
4. In addition, it feels like doing modules is useless in comparison to doing a superbadge, which means modules are only good as part of superbadges.
5. But even a superbadge doesn’t represent a real production Salesforce CRM environment; it is a vanilla version of it, lacking the crucial details that make it or break it in the world of software development.

## What can be done to improve the drawbacks?

1. It would be nice for Modules and Projects to have an overview of how the content is used in real-life Salesforce development, by providing real use cases, even partially, without resorting to made-up companies with funny names.
2. Superbadges should also be more production oriented, with good and intelligent feedback, or an explanation of how one can debug the Salesforce CRM environment to understand what’s wrong with the implementation.
3. It would be nice to incorporate the Trails, Modules etc. into the Salesforce CRM platform itself. This could assist in better understanding how to work with the CRM tool efficiently.
4. The points and badges system seems fine, but it’s possible to collect points without really understanding what the content means, which defeats the point of having points altogether.

## All in all

Trailhead is an interesting and engaging platform for learning Salesforce, but there are things that could and should be improved to make it better for novice and seasoned Salesforce developers, admins, etc.
# How to Push Local Notification With React Native

#### This post is written by my good friend Aviad Holy, who is a team leader at Checkmarx and an Android enthusiast.

## Installation and linking

### Prerequisites

To see React Native live without installing anything, check out Snack.

#### With my current setup (Windows 10 64 bit and Android phone)

• I installed all the prerequisites above
• Started the development server

C:\Users\xyz\AppData\Roaming>cd AwesomeProject
C:\Users\xyz\AppData\Roaming\AwesomeProject>yarn start
yarn run v1.22.10

• Scanned the QR code shown by the development server
• And I saw the app start on my Android phone

## Now you’re ready to follow Aviad’s tutorial below

I was looking for a simple solution to generate a local push notification in one of my projects when I came across this wonderful package that does exactly what is needed and is suitable for both iOS and Android: https://github.com/zo0r/react-native-push-notification. The package has pretty good documentation, with only minor things missing to make it work straightforwardly. In the following guide I will present the required configuration and a suggested implementation to get you started pushing notifications locally. This guide targets Android, since I encounter a lot of questions around it.

• Depending on your requests, a dedicated guide for iOS may be published as well.

Installation and linking, the short story:

• yarn add react-native-push-notification

• android/build.gradle

ext {
buildToolsVersion = "29.0.3"
minSdkVersion = 21
compileSdkVersion = 29
targetSdkVersion = 29
ndkVersion = "20.1.5948944"
googlePlayServicesVersion = "+" // default: "+"
firebaseMessagingVersion = "+" // default: "+"
supportLibVersion = "23.1.1" // default: 23.1.1
}

• android/app/src/main/AndroidManifest.xml

<span class="has-inline-color has-black-color">...
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED"/>
...
<application ...>
<meta-data android:name="com.dieam.reactnativepushnotification.notification_foreground" android:value="false"/>
<meta-data android:name="com.dieam.reactnativepushnotification.notification_color" android:resource="@color/white"/>
<receiver android:name="com.dieam.reactnativepushnotification.modules.RNPushNotificationActions" />
<receiver android:name="com.dieam.reactnativepushnotification.modules.RNPushNotificationPublisher" />
<receiver android:name="com.dieam.reactnativepushnotification.modules.RNPushNotificationBootEventReceiver">
<intent-filter>
<action android:name="android.intent.action.BOOT_COMPLETED" />
<action android:name="android.intent.action.QUICKBOOT_POWERON" />
<action android:name="com.htc.intent.action.QUICKBOOT_POWERON"/>
</intent-filter>
</receiver>
...
</application></span>

• create /app/src/main/res/values/colors.xml

<span class="has-inline-color has-black-color"><resources>
<color name='white'>#FFF</color>
</resources></span>

• android/settings.gradle

include ':react-native-push-notification'
project(':react-native-push-notification').projectDir = file('../node_modules/react-native-push-notification/android')

• android/app/build.gradle

dependencies {
...
implementation project(':react-native-push-notification')
...
}

## Usage

• create a new file src/handlers/notificationsInit.js to initialize our push notifications

Pay attention that for Android, creating a notification channel and setting the permission request as "iOS only" is a must, as presented in the following example.

<span class="has-inline-color has-black-color">import PushNotification from 'react-native-push-notification';
import { Platform } from 'react-native';

export const notificationsInit = () => {
PushNotification.createChannel({
channelId: 'channel-id-1',
channelName: 'channel-name-1',
playSound: true,
soundName: 'default',
importance: 4,
vibrate: true,
},
(created) => console.log(`createChannel returned '${created}'`)
);
onRegister: function (token) {
console.log('TOKEN:', token);
},
},
permissions: {
sound: true,
},
requestPermissions: Platform.OS === 'ios',
});
}</span>


• create a new file src/handlers/notifications.js

```javascript
import PushNotification from 'react-native-push-notification';

export const showNotification = (title, message) => {
  PushNotification.localNotification({
    channelId: 'channel-id-1',
    title: title,
    message: message,
  });
};

export const showScheduleNotification = (title, message) => {
  PushNotification.localNotificationSchedule({
    channelId: 'channel-id-1',
    title: title,
    message: message,
    date: new Date(Date.now() + 3 * 1000), // fire in 3 seconds
    allowWhileIdle: true,
  });
};

export const handleNotificationCancel = () => {
  PushNotification.cancelAllLocalNotifications();
};
```


• Import and initialize notificationsInit in App.js

```javascript
import React from 'react';
import { View } from 'react-native';
import { notificationsInit } from './src/handlers/notificationsInit';

notificationsInit();

const App = () => {
  return (
    <View>
    </View>
  );
};

export default App;
```


• Now, let's display the notifications
• create a new file src/components/TestNotifications.js

```javascript
import React from 'react';
import { View, TouchableOpacity, StyleSheet, Text } from 'react-native';
import {
  showNotification,
  showScheduleNotification,
  handleNotificationCancel,
} from '../handlers/notifications';

export default class TestNotifications extends React.Component {

  onTriggerPressHandle = () => {
    showNotification('Local notification', 'Triggered immediately');
  };

  onSchedulePressHandle = () => {
    showScheduleNotification('Scheduled notification', 'Fired after 3 seconds');
  };

  onCancelHandle = () => {
    handleNotificationCancel();
  };

  render() {
    return (
      <View>
        <Text style={styles.title}>Click to try</Text>
        <TouchableOpacity
          style={styles.button}
          onPress={this.onTriggerPressHandle}>
          <Text style={styles.text}>Local notification</Text>
        </TouchableOpacity>
        <TouchableOpacity
          style={styles.button}
          onPress={this.onSchedulePressHandle}>
          <Text style={styles.text}>{'--Scheduled notification--\nFire in 3 seconds'}</Text>
        </TouchableOpacity>
        <TouchableOpacity
          style={[styles.button, {backgroundColor: 'red'}]}
          onPress={this.onCancelHandle}>
          <Text style={styles.text}>Cancel notifications</Text>
        </TouchableOpacity>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  button: {
    backgroundColor: 'dodgerblue',
    height: 80,
    margin: 20,
    justifyContent: 'center',
  },
  text: {
    color: 'white',
    textAlign: 'center',
    fontSize: 20,
  },
  title: {
    backgroundColor: 'dimgrey',
    color: 'white',
    textAlign: 'center',
    textAlignVertical: 'center',
    fontSize: 30,
    height: 100,
    marginBottom: 20,
  },
});
```


• import the TestNotifications component into App.js

```javascript
import React from 'react';
import { View } from 'react-native';
import { notificationsInit } from './src/handlers/notificationsInit';
import TestNotifications from './src/components/TestNotifications';

notificationsInit();

const App = () => {
  return (
    <View>
      <TestNotifications />
    </View>
  );
};

export default App;
```



### Try out the buttons to trigger the notifications

Feel free to clone and play with this project sample.

# Building your own computer language

## Just code it

If you have wanted to build your own computer language but didn't know how to start, or thought you didn't have the time and skills to do it, then look no further than the Crafting Interpreters book by Bob Nystrom on building a computer language from scratch. That's right: from the very beginning all the way to full-fledged object-oriented features.

## Where do I find it?

Just visit this web page, where Bob has a free-of-charge web book, still in the process of being written, where you can start to build your own language. You can call it BestLangEver or even something like Jabba, etc. The book is very detailed and explains things in a clear and easy-to-understand manner. Thanks to the book I was able to understand how a mark-and-sweep garbage collector works and how it can be quite easily implemented. Bob has done a great job of bringing the art of language design to the masses.
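The idea behind mark-and-sweep can be shown in a few lines. The sketch below is purely illustrative (the names `Obj`, `mark`, and `collect` are mine, not taken from the book's implementation): the mark phase traces every object reachable from the roots, and the sweep phase discards everything left unmarked.

```javascript
// Minimal mark-and-sweep sketch (illustrative names, not the book's code).
class Obj {
  constructor(name) {
    this.name = name;
    this.refs = [];      // objects this one points to
    this.marked = false;
  }
}

function mark(obj) {
  if (obj.marked) return; // already visited; this also handles cycles
  obj.marked = true;
  obj.refs.forEach(mark); // trace everything reachable from here
}

function collect(roots, heap) {
  heap.forEach((o) => (o.marked = false)); // reset marks
  roots.forEach(mark);                     // mark phase: trace from the roots
  return heap.filter((o) => o.marked);     // sweep phase: keep only the marked
}

// Usage: c is not reachable from the root a, so a sweep drops it.
const a = new Obj('a');
const b = new Obj('b');
const c = new Obj('c');
a.refs.push(b);
const live = collect([a], [a, b, c]);
console.log(live.map((o) => o.name)); // [ 'a', 'b' ]
```

A real collector would also free the swept objects and interleave collection with allocation, but the reachability logic is exactly this.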

# Driver drowsiness detection with Machine or/and Deep Learning.

## It is actually even more useful than a Driver Assistant

In the previous post I mentioned that it would be nice to have a mobile phone application capable of detecting various erroneously driven cars in front of the moving vehicle. Another, in my opinion more interesting and even more impactful, application would be a mobile app that uses the 'selfie' camera to track the driver's alertness while driving and indicates by voice or sound effects that the driver needs to take action.

## Why is this application useful?

The drivers among us (and not only they) surely know that there are times when driving does not come easy, especially when a driver is tired, exhausted by too little sleep or by a certain amount of stress, and yet keeps driving in an altered state of consciousness (against the law, by the way). This in turn is a cause of many road accidents that might be prevented if the driver were informed in a timely manner that he or she needs to stop and rest. The above-mentioned mobile application may assist in exactly this situation. It may even send a notification remotely to all whom it may concern that there is a need to call the driver, text him, or do something else to grab his attention.

## Is there anything like this in the wild?

An MIT group researching autonomous driving, headed by Lex Fridman, used this approach to track drivers of Tesla cars and their interaction with the car. For more details, check out the links below with nice videos and explanations.

This implementation combines the best of the state of the art in machine and deep learning.

This implementation is from 2010, and apparently it is plain old OpenCV with no deep learning.

## Requirements

• Hardware
• Decent average mobile phone
• Software
• Operating system
• Android or iPhone
• Object detection and classification
• OpenCV based approach using built-in DL models
• Type of behavior classified
• Driver not paying attention to the road
• By holding a phone
• Being distracted by another passenger
• By looking at road accidents, whatever
• Driver drowsiness detection
• Number of frames per second
• Depends on the hardware. Let’s say 24.
• Voice announcement to the driver
• Sound effects
• Sending text, images notification to friends and family who may call and intervene
• Automatically use Google Maps to navigate to the nearest Coffee station, such as Starbucks, Dunkin’ (no more donuts) and Tim Horton’s (select which applicable to you)
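The drowsiness-detection item above is usually implemented the way the PyImageSearch tutorial referenced below does it: compute the eye aspect ratio (EAR) from six eye landmarks per frame, and raise an alarm when it stays below a threshold for enough consecutive frames. A sketch of that logic, with the 0.3 threshold and the one-second window at the assumed 24 fps being typical values that would need tuning on a real device:

```javascript
// Eye aspect ratio (EAR) sketch. The landmark layout and the 0.3 threshold
// follow the common EAR recipe; both are assumptions to tune in practice.
const dist = (p, q) => Math.hypot(p[0] - q[0], p[1] - q[1]);

// eye = [p1..p6]: six landmarks around one eye, horizontal corners at p1 and p4.
function eyeAspectRatio(eye) {
  const a = dist(eye[1], eye[5]); // first vertical distance
  const b = dist(eye[2], eye[4]); // second vertical distance
  const c = dist(eye[0], eye[3]); // horizontal distance
  return (a + b) / (2.0 * c);     // drops toward 0 as the eye closes
}

// Alert when the eyes stay closed for about one second at the assumed 24 fps.
const EAR_THRESHOLD = 0.3;
const CONSEC_FRAMES = 24;
let closedFrames = 0;

function onFrame(ear) {
  closedFrames = ear < EAR_THRESHOLD ? closedFrames + 1 : 0;
  return closedFrames >= CONSEC_FRAMES; // true means sound the alarm
}
```

The per-frame landmark extraction itself would come from the OpenCV/DL model; only the decision logic is shown here.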

## Then what are we waiting for?

This application can be built quite 'easily and fast' if you have an Android developer account, have experience developing Android apps, have worked a little with GitHub, and have a certain amount of experience and fascination with machine learning or with OpenCV's DL-based models. Grab your computer of choice and hurry to develop this marvelous piece of technology that will make you a new kind of person.

## A possible plan of action

• Get an Android phone, preferably one of the latest models, for performance reasons.
• Get a computer that can run OpenCV 4 and Android Studio.
• Install OpenCV and all needed dependencies.
• Run the example from Adrian Rosebrock’s blog post.
• Install Android Studio.
• Create an Android developer account (if you don't have one; about \$25 USD).
• Use the Android app from this blog post as a blueprint and adapt the Python code from Adrian's implementation into Java.
• Publish the app at Google Play Store.
• Share the app.

## References

• Search
• Hands-on with tutorial from PyImageSearch
• General review
• Other approaches to drowsiness detection
• From Bosch by monitoring steering movements

# Driver assistant app. Can it be done?

I was too optimistic about making this work on Android, since it takes more than a couple of seconds to process even a single frame. So, folks, what I hoped for in this post with OpenCV is currently not achievable on a mobile phone.

This video, which had 30 fps and was 11 seconds long, took about 22 minutes to process.
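A quick back-of-the-envelope check shows why that rules out real time on this hardware:

```javascript
// Arithmetic behind the "22 minutes for an 11-second clip" figure.
const fps = 30;
const clipSeconds = 11;
const totalFrames = fps * clipSeconds;                   // 330 frames
const processingSeconds = 22 * 60;                       // 1320 seconds
const secondsPerFrame = processingSeconds / totalFrames; // 4 s per frame
console.log(secondsPerFrame); // 4
```

Four seconds per frame is roughly 120x slower than the 30 fps the camera delivers.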

I wonder why there are close to no Android or iPhone applications that can detect, in real time, erroneous drivers driving ahead of or beside you. The technology is there, and the algorithms, namely deep learning, are there too. It is possible to run OpenCV-based deep learning models in real time on mobile phones and get good enough performance to detect a suddenly stopping car ahead of you. Since a mobile phone's field of view isn't that large, I think it will be hard, if not impossible, to detect erroneous driving to the sides of the car. A good example of OpenCV-based object detection and classification using deep learning is the Mask R-CNN with OpenCV post by Adrian Rosebrock from PyImageSearch.

## Requirements

• Hardware
• Decent average mobile phone
• Software
• Operating system
• Android or iPhone
• Object detection and classification
• OpenCV based approach using built-in DL models
• Type of objects classified
• Car
• Truck
• Bus
• Pedestrian (optional)
• Number of frames per second
• Depends on the hardware. Let’s say 24.
• Field of View