Thursday, December 21, 2017

Suffix trees: Algorithm of 1973

Interesting read on suffix trees

Kolmogorov Complexity and Algorithmic Information Theory

Measuring the complexity of an object by the length of its shortest possible description (a program, or series of instructions)
Turing machine / universal Turing machine - programs form a prefix-free set (Kolmogorov complexity is analogous to entropy in information theory) [1]
Church-Turing thesis
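Kolmogorov complexity itself is uncomputable, but the compressed length of a string gives a crude, computable upper bound on it. Here is a minimal Python sketch of that intuition (the choice of zlib as the compressor and the example strings are my own, purely for illustration):

```python
import hashlib
import zlib

def compressed_length(data: bytes) -> int:
    """Length of a zlib-compressed description of data: a crude,
    computable upper bound on its Kolmogorov complexity."""
    return len(zlib.compress(data))

# A highly regular string has a short description ("print 'ab' 500 times"),
# while pseudorandom bytes have no obviously shorter description.
regular = b"ab" * 500
random_ish = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

# The regular string compresses far better than the pseudorandom one.
assert compressed_length(regular) < compressed_length(random_ish)
```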

[1] Elements of Information Theory

Friday, October 20, 2017

ITSC 2017 Takeaways!

This year, I presented my research "Multi-Agent Based Road Traffic Control Optimization" at the Intelligent Transportation Systems Conference (ITSC 2017), International Workshop on Large-Scale Traffic Modeling and Management, in Yokohama. I also worked there as a student volunteer.

Connected Vehicle 

The key highlight of all three keynote speeches was the concept of the "Connected Vehicle". Different V2X technologies to connect vehicles, infrastructure and people to provide better mobility services were discussed. The first keynote, "SIP Autonomous Driving", was delivered by Mr. Masao Fukushima (Nissan R&D). He discussed the development of a digital dynamic map platform with the aim of realizing autonomous driving and advanced driver assistance systems. A dynamic map is an essential component in autonomous car navigation. This work is carried out by Dynamic Map Platform Pvt Ltd.

Dynamic map platform

The ART information center was introduced as a way of gathering required mobility information such as traffic congestion, waiting times and bus route information. Currently, they are planning to build a map using point cloud information, and IoT-based services will be added in the future.

The next keynote was "From the World's 1st Car Navigation, towards Connected and Automated Driving in the Future" by Mr. Yoichi Sugimoto (Honda R&D). This was an interesting session in which he explained the "Honda Electro Gyrocator", the world's first commercialized map-based navigation system (1981).
Honda Electro Gyrocator

He further spoke about more advancements by Honda R&D to realize the connected vehicle concept. One such innovation is "Honda Telematics Services" (1998), which focuses on leveraging IoT solutions to provide a connected car platform.

Honda Telematics

Further, they introduced the Internavi route system, the world's first floating car system, which collects traffic information from vehicles and provides it to its users.

The next keynote speech was delivered by Prof. Emeritus U. Ozguner (Control & Intelligent Transportation Lab, Ohio State University). The topic was "Smart Cities: An Intelligent Vehicles Perspective". One highlight of the session was "Smart Columbus", an experimental smart city designed to enable the connected vehicle concept.

Smart Columbus

Saturday, August 26, 2017

Walt Disney Studios - The Innovation Continues!

It's another exclusive day at Miraikan! We got free passes to the Miraikan special exhibition "Art of Disney - The Magic of Animation" (thanks to TIEC #TIECrox! :D).

Initially, we were under the impression that we were going to see a lot of cartoon sketches and not much more than that. However, we soon realized that we were wrong. Walt Disney Studios has demonstrated its journey from 1923 to 2017 in a way that left us amazed at the sheer effort and attention to detail they have put into keeping innovation alive in each and every production they embarked upon. Here are a few examples that impressed me.

Pinocchio - They used multiplane cameras to add dimensionality (depth or 3D effects) as a visual effect.

Bambi - They studied animal anatomy (of deer) and used live animals as reference (a few deer were kept on site so that artists could observe their movements and behavior). Further, they used minimalistic ink to depict forests.

"Always as you travel assimilate the sounds and sights of the world" - Walt Disney

Saludos Amigos - Before producing animated films in a diverse range of settings, they observe locations, societies, cultures, prominent shapes and colors during field visits to influence the "feel" of the final work. Observing the unique South American colors for "Saludos Amigos" is one such example. Similarly, they observed Japanese culture for "Big Hero 6" and India for "The Jungle Book".

Fantasia - Creative visual effects alone won't make the audience's experience complete, as they would address only one human sense. In Fantasia, they introduced the concept of visualizing the sounds of classical music.

Dumbo - Dumbo is an elephant who doesn't talk. So, they used effective expressions of emotion to convey his feelings to the audience.
Sad dumbo

Happy Dumbo with opened ears and bright eyes

Lady and the Tramp - In Lady and the Tramp, scenes are viewed as a dog sees the world (a few centimeters above the ground - a dog's-eye view).

How a dog sees the world - a dog's-eye view scene

Frozen - The physical properties of snow (snow effects) have been modeled in the movie to simulate snow in scenes.


Zootopia - They analyzed animal hair and fur in different animal parks to get a realistic look for their own animal characters.

Animal fur reference

101 Dalmatians - In this animated movie, they used Xerox copying technology to animate many similar-looking dogs. More information on that here.

Nowadays, animations can be developed rapidly with advanced computer graphics technologies, and Disney Studios continues to push the boundaries of imagination, just as it always has!

Wednesday, August 2, 2017

Neuroscience inspired Computer Vision


Having read the profound masterpiece "When Breath Becomes Air" by neurosurgeon Paul Kalanithi, I was curious about how neuroscience could contribute to AI (computer vision in particular).

Then, I found a comprehensive review article in Neuron (written by Demis Hassabis, Dharshan Kumaran, Christopher Summerfield and Matthew Botvinick) titled "Neuroscience-Inspired Artificial Intelligence". Here is a brief summary of the concepts I found inspiring in that article, related to computer vision.

  • How visual input is filtered and pooled by simple and complex cells in area V1 of the visual cortex
  • Hierarchical organization of mammalian cortical systems 
Object recognition 
  • Transforming raw visual input into an increasingly complex set of features - to achieve invariance to pose, illumination and scale
  • Visual attention shifts strategically among different objects (not all objects receive equal priority) - to ignore irrelevant objects in a given scene in the presence of clutter; multi-object recognition, image-to-caption generation, generative models to synthesize images
Intuitive understanding of physical world 
  • Interpret and reason about scenes by decomposing them into individual objects and their relations 
  • Redundancy reduction (encourages the emergence of disentangled representations of independent factors such as shape and position) - to learn objectness and construct rich object models from raw inputs using deep generative models, e.g., the variational autoencoder
Efficient Learning 
  • Rapidly learning new concepts from only a handful of examples (related to animal learning and developmental psychology)
  • Characters challenge - distinguishing novel instances of an unfamiliar handwritten character from another - "learn to learn" networks
Transfer Learning
  • Generalizing or transferring knowledge gained in one context to novel, previously unseen domains (e.g., a human who can drive a car can drive an unfamiliar vehicle) - progressive networks
  • Neural coding using grid codes in the mammalian entorhinal cortex - to formulate conceptual representations that code abstract, relational information among patterns of inputs (not just invariant features)
Virtual brain analytics 
  • Increase the interpretability of AI computations; determine the response properties of units in a neural network
  • Activity maximization - To generate synthetic images by maximizing the activity of certain classes of unit 
From AI to neuroscience
  • Enhancing the performance of CNNs has also yielded new insights into the nature of neural representations in high-level visual areas, e.g., 30 network architectures from AI used to explain the structure of the neural representations observed in the ventral visual stream of humans and monkeys
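As a toy illustration of the activity-maximization idea mentioned above (the tiny linear "network" below is entirely my own invented example, not from the article): start from a blank input and follow the gradient that increases one chosen unit's activation, producing a synthetic input the unit "prefers".

```python
import random

random.seed(0)
# Toy "network": one linear layer with 10 units, each seeing a 16-pixel input.
W = [[random.gauss(0, 1) for _ in range(16)] for _ in range(10)]

def activation(x, unit):
    """Response of one unit to input x (a dot product for this linear toy)."""
    return sum(w * xi for w, xi in zip(W[unit], x))

# Activity maximization: adjust the input by gradient ascent so that the
# chosen unit's activation grows.
x = [0.0] * 16
unit = 3
before = activation(x, unit)
for _ in range(100):
    grad = W[unit]  # d(activation)/dx for a linear unit is just its weight row
    x = [max(-1.0, min(1.0, xi + 0.1 * g)) for xi, g in zip(x, grad)]
after = activation(x, unit)
assert after > before  # the unit now responds far more strongly
```

With a real CNN the gradient would come from backpropagation rather than the weight row, but the loop is the same idea.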

Friday, May 12, 2017

Process of innovation through “The five rivers of creativity”

Some insightful concepts I learnt about the roots of innovation during our visit to Miraikan, the future museum in Odaiba (the best science museum I've ever visited so far, and I'm so glad it's right there in our neighborhood; one day is definitely not enough to explore this place completely).

  • Association - Associating a novel concept from one field with the advancement of another field, e.g., conventional computers vs. quantum computers (associating the properties of quantum mechanics with computer science)

  • Quantum dot marking (associating placing a marker on an object with detecting target substances at the smallest scale)
  • Intra-body communication (associating conductivity in telecommunications with communicating via the human body)

  • Integration - Combining and integrating things with different properties for a single purpose gives us the ability to generate new things (the idea of a lab on a chip)

  • Bio-machine hybrid system (an insect-controlled robot to investigate the ability of an insect to adapt to perturbations)
  • Mechano-bionic machine (integrating living muscles as a power source for machines) - power sourced from the heart of an insect
  • Metal-plated fibres (making fabric conductive by plating the surface of a synthetic fibre with a metal - lightness, strength and flexibility along with conductivity) - applications in electronics products

  • Serendipity - Unexpected developments give us the ability to make fortunate discoveries - the idea of conductive polymers (conductive plastics) by Dr. Shirakawa

  • Post-it notes (easily attachable and detachable memo slips, a solution for falling bookmarks, using a low-tack adhesive)
  • Hook and loop fasteners - an idea inspired by a pet dog afflicted with burrs
  • Large-scale synthesis of carbon nanotubes

  • Mimic - Taking hints from existing functions and forms gives us the ability to create things that formerly didn't exist or achieve things that couldn't be done - artificial photosynthesis (bio-inspired)

  • Learning super water repellency from lotus leaves 
  • Morphotex - development of a fibre that generates beautiful colors without dyeing, inspired by the wings of the morpho butterfly

  • Alternative - New ideas unconstrained by traditional values give us the ability to create new things (color filters for LCDs)
  • Making artificial skin using hair thrown away during haircuts (self-recycling)
  • Retinal imaging display (projects video directly onto the retina of the eye, much as earphones pour music directly into the ears)
  • Power generating floor (using the force applied to the floor while walking) 

Sunday, November 20, 2016

How to extract frames in a video using ffmpeg?

You can follow the steps given below to extract all the frames of a video using the ffmpeg tool.

  • Download ffmpeg package for your OS

  • Unzip the folder and change into that particular folder
      cd /Lion_Mountain_Lion_Mavericks_Yosemite_El-Captain_04.11.2016

  • Extract frames using the following command:

./ffmpeg -i [your input video file] -r [frame rate] [output file format]


For a video named test.mp4, with a frame rate of 8 fps:

./ffmpeg -i test.mp4 -r 8/1 output%03d.jpeg

All the frames will be saved to the directory in which the command is executed.

Saturday, November 12, 2016

How to add a delay in SUMO GUI?

A delay in the SUMO GUI slows down the simulation by pausing for a given number of milliseconds between two simulation steps.

Follow the steps given below to introduce a delay in SUMO GUI at the time of launching the simulation environment.

Create a settings file (e.g., file.settings.xml) and add the following content, specifying the preferred value for the delay.
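A minimal settings file might look like this (the delay value of 100 ms is just an example; check the view-settings schema of your SUMO version):

```xml
<viewsettings>
    <delay value="100"/>
</viewsettings>
```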

Reference the settings file in the SUMO configuration file (e.g., file.sumocfg) as given below.
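A sketch of the relevant part of the configuration file (the net-file name is a placeholder, and the enclosing element names may differ slightly between SUMO versions):

```xml
<configuration>
    <input>
        <net-file value="your_network.net.xml"/>
    </input>
    <gui_only>
        <gui-settings-file value="file.settings.xml"/>
    </gui_only>
</configuration>
```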



Friday, October 7, 2016

How to avoid loss = nan while training a deep neural network using Caffe

The following problem occurs in Caffe when the loss value becomes very large (infinity) and is reported as nan:

I0917 15:45:07.232023 1936130816 sgd_solver.cpp:106] Iteration 9500, lr = 0.000575702
I0917 15:45:08.376780 1936130816 solver.cpp:228] Iteration 9600, loss = nan
I0917 15:45:08.376814 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)
I0917 15:45:08.376822 1936130816 sgd_solver.cpp:106] Iteration 9600, lr = 0.000573498
I0917 15:45:09.522541 1936130816 solver.cpp:228] Iteration 9700, loss = nan
I0917 15:45:09.522573 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)
I0917 15:45:09.522581 1936130816 sgd_solver.cpp:106] Iteration 9700, lr = 0.000571313
I0917 15:45:10.663610 1936130816 solver.cpp:228] Iteration 9800, loss = nan
I0917 15:45:10.663782 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)
I0917 15:45:10.663791 1936130816 sgd_solver.cpp:106] Iteration 9800, lr = 0.000569147
I0917 15:45:11.808089 1936130816 solver.cpp:228] Iteration 9900, loss = nan
I0917 15:45:11.808120 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)

I0917 15:45:11.808128 1936130816 sgd_solver.cpp:106] Iteration 9900, lr = 0.000567001

I was able to fix this by adjusting the learning rate (mostly decreasing it). Values for the learning rate that usually work are 0.001 and 0.0001. (The learning rate can be configured in the solver.prototxt file.)

The following thread contains useful information on other possible reasons why this issue might occur.

Sometimes, you might notice that the loss values won't change, even if they don't become nan. That issue can also be fixed by fine-tuning the learning rate.

E.g., base_lr: 0.00009

Sometimes, memory issues can occur when changing the learning rate:

caffe(1636,0x201105b9000) malloc: *** error for object 0x7fe8a0c2ab20: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
*** Aborted at 1474110894 (unix time) try "date -d @1474110894" if you are using GNU date ***
PC: @     0x7fff893d6286 __pthread_kill
*** SIGABRT (@0x7fff893d6286) received by PID 1636 (TID 0x201105b9000) stack trace: ***
    @     0x7fff8f8f9f1a _sigtramp
    @                0x0 (unknown)
    @     0x7fff8685db53 abort
    @     0x7fff89124e06 szone_error
    @     0x7fff8911b9dd szone_free_definite_size
    @     0x7fff91681c13 _dispatch_client_callout
    @     0x7fff9168488f _dispatch_root_queue_drain
    @     0x7fff91692fe4 _dispatch_worker_thread3
    @     0x7fff8af61637 _pthread_wqthread
    @     0x7fff8af5f40d start_wqthread

That can be fixed by adjusting (mostly decreasing) the batch_size in train_test.prototxt (for both the train and test input layers).

batch_size: the number of inputs to process at one time


name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "train_lmdb"
    batch_size: 16
    backend: LMDB
  }
}

Wednesday, July 27, 2016

Issue when importing matplotlib in a virtualenv in Python

import matplotlib.pyplot as plt

Traceback (most recent call last):

  File "", line 1, in

  File "/Users/jwithanawasam/MachineLearning/Caffe2/venv/lib/python2.7/site-packages/matplotlib/", line 114, in

    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()

  File "/Users/jwithanawasam/MachineLearning/Caffe2/venv/lib/python2.7/site-packages/matplotlib/backends/", line 32, in pylab_setup


  File "/Users/jwithanawasam/MachineLearning/Caffe2/venv/lib/python2.7/site-packages/matplotlib/backends/", line 24, in

    from matplotlib.backends import _macosx

RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are Working with Matplotlib in a virtual enviroment see 'Working with Matplotlib in Virtual environments' in the Matplotlib FAQ

If you encounter the above error, there is an easier way to fix it without going through the Matplotlib FAQ ;)


Assuming you have already installed Matplotlib using the following command:
pip install matplotlib

open its configuration file:

cd ~/.matplotlib
vi matplotlibrc

Add the following line to the opened matplotlibrc file.

backend: TkAgg

Try import matplotlib.pyplot as plt again, and it should work without errors.