Sunday, November 20, 2016

How to extract frames from a video using ffmpeg?

You can follow the steps given below to extract all the frames of a video using the ffmpeg tool.

  • Download ffmpeg package for your OS

  • Unzip the package and change into the extracted folder
      cd /Lion_Mountain_Lion_Mavericks_Yosemite_El-Captain_04.11.2016

  • Extract frames using the following command:

./ffmpeg -i [your input video file] -r [frame rate] [output file format]


For a video named test.mp4, extracted at a frame rate of 8 fps:

./ffmpeg -i test.mp4 -r 8/1 output%03d.jpeg

All the frames will be saved to the directory in which the command is executed.
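If you want to script this, the same command can be assembled programmatically. Below is a minimal Python sketch (the file name, frame rate, and output pattern are placeholders) that builds the argument list for subprocess:

```python
import subprocess

def build_ffmpeg_cmd(video_path, fps, output_pattern="output%03d.jpeg"):
    """Build the ffmpeg argument list for extracting frames at `fps` frames per second."""
    return ["ffmpeg", "-i", video_path, "-r", str(fps), output_pattern]

# Example: 8 frames per second from test.mp4
cmd = build_ffmpeg_cmd("test.mp4", 8)
# subprocess.call(cmd)  # uncomment to run; requires ffmpeg on the PATH
```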

Saturday, November 12, 2016

How to add a delay in SUMO GUI?

A delay in the SUMO GUI slows the simulation down by pausing between two consecutive simulation steps for a given number of milliseconds.

Follow the steps given below to introduce a delay in SUMO GUI at the time of launching the simulation environment.

Create a settings file (e.g., file.settings.xml) and add the following content, specifying your preferred value for the delay.
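The original file content did not survive in this post; a minimal sketch of a SUMO view-settings file with a delay (the value, in milliseconds, is illustrative) would look like:

```xml
<viewsettings>
    <delay value="100"/>
</viewsettings>
```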

Reference the settings file in the SUMO configuration file (e.g., file.sumocfg) as given below.
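The referenced snippet was also lost; in a SUMO configuration file the settings file is typically referenced from a gui_only section, roughly as follows (file names are placeholders):

```xml
<configuration>
    <!-- input and other sections as in your existing file.sumocfg -->
    <gui_only>
        <gui-settings-file value="file.settings.xml"/>
    </gui_only>
</configuration>
```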



Friday, October 7, 2016

How to avoid loss = nan while training a deep neural network using Caffe

The following problem occurs in Caffe when the loss value becomes very large (infinity) and is reported as nan:

I0917 15:45:07.232023 1936130816 sgd_solver.cpp:106] Iteration 9500, lr = 0.000575702
I0917 15:45:08.376780 1936130816 solver.cpp:228] Iteration 9600, loss = nan
I0917 15:45:08.376814 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)
I0917 15:45:08.376822 1936130816 sgd_solver.cpp:106] Iteration 9600, lr = 0.000573498
I0917 15:45:09.522541 1936130816 solver.cpp:228] Iteration 9700, loss = nan
I0917 15:45:09.522573 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)
I0917 15:45:09.522581 1936130816 sgd_solver.cpp:106] Iteration 9700, lr = 0.000571313
I0917 15:45:10.663610 1936130816 solver.cpp:228] Iteration 9800, loss = nan
I0917 15:45:10.663782 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)
I0917 15:45:10.663791 1936130816 sgd_solver.cpp:106] Iteration 9800, lr = 0.000569147
I0917 15:45:11.808089 1936130816 solver.cpp:228] Iteration 9900, loss = nan
I0917 15:45:11.808120 1936130816 solver.cpp:244]     Train net output #0: loss = nan (* 1 = nan loss)
I0917 15:45:11.808128 1936130816 sgd_solver.cpp:106] Iteration 9900, lr = 0.000567001

I was able to fix this by adjusting (mostly decreasing) the learning rate. Values that usually work are 0.001 and 0.0001. (The learning rate can be configured in the solver.prototxt file.)

The following thread contains useful information on other possible reasons for this issue.

Sometimes you might notice that the loss value won't change, even if it doesn't become nan. That issue can also be fixed by fine-tuning the learning rate.

E.g., base_lr: 0.00009
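For reference, the relevant part of a solver.prototxt might look like the following sketch (the values are illustrative, not recommendations):

```
# solver.prototxt (excerpt)
base_lr: 0.0001      # try decreasing this if loss = nan
lr_policy: "step"
gamma: 0.1
stepsize: 5000
```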

Sometimes, memory issues can occur when changing the learning rate.

caffe(1636,0x201105b9000) malloc: *** error for object 0x7fe8a0c2ab20: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
*** Aborted at 1474110894 (unix time) try "date -d @1474110894" if you are using GNU date ***
PC: @     0x7fff893d6286 __pthread_kill
*** SIGABRT (@0x7fff893d6286) received by PID 1636 (TID 0x201105b9000) stack trace: ***
    @     0x7fff8f8f9f1a _sigtramp
    @                0x0 (unknown)
    @     0x7fff8685db53 abort
    @     0x7fff89124e06 szone_error
    @     0x7fff8911b9dd szone_free_definite_size
    @     0x7fff91681c13 _dispatch_client_callout
    @     0x7fff9168488f _dispatch_root_queue_drain
    @     0x7fff91692fe4 _dispatch_worker_thread3
    @     0x7fff8af61637 _pthread_wqthread
    @     0x7fff8af5f40d start_wqthread

That can be fixed by adjusting (mostly decreasing) the batch size in train_test.prototxt (for both the train and test input layers).

batch_size: the number of inputs to process at one time


name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "train_lmdb"
    batch_size: 16
    backend: LMDB
  }
}

Wednesday, July 27, 2016

Issue when importing matplotlib in a virtualenv in Python

import matplotlib.pyplot as plt

Traceback (most recent call last):

  File "", line 1, in

  File "/Users/jwithanawasam/MachineLearning/Caffe2/venv/lib/python2.7/site-packages/matplotlib/", line 114, in

    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()

  File "/Users/jwithanawasam/MachineLearning/Caffe2/venv/lib/python2.7/site-packages/matplotlib/backends/", line 32, in pylab_setup


  File "/Users/jwithanawasam/MachineLearning/Caffe2/venv/lib/python2.7/site-packages/matplotlib/backends/", line 24, in

    from matplotlib.backends import _macosx

RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are Working with Matplotlib in a virtual enviroment see 'Working with Matplotlib in Virtual environments' in the Matplotlib FAQ

If you encounter the above error, there is an easier way to fix it without going through the Matplotlib FAQ ;)


If you haven't already installed Matplotlib, install it using the following command.
pip install matplotlib

cd ~/.matplotlib
vi matplotlibrc

Add the following line to the opened matplotlibrc file.

backend: TkAgg

Now try import matplotlib.pyplot as plt again; it should work without errors.
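If you prefer to script the steps above, here is a small Python sketch (the ~/.matplotlib location is the default on Mac OS X; adjust if yours differs; note it overwrites any existing matplotlibrc):

```python
import os

def set_matplotlib_backend(config_dir, backend="TkAgg"):
    """Create (or overwrite) a matplotlibrc file selecting the given backend."""
    if not os.path.isdir(config_dir):
        os.makedirs(config_dir)
    rc_path = os.path.join(config_dir, "matplotlibrc")
    with open(rc_path, "w") as f:
        f.write("backend: %s\n" % backend)
    return rc_path

# Usage:
# set_matplotlib_backend(os.path.expanduser("~/.matplotlib"))
```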

Error: Segmentation fault: 11 in Caffe (PyCaffe)

Error: Segmentation fault: 11

If you get the above error during import caffe in Python, check whether the following paths in [Caffe installation directory]/Makefile.config point to the system Python instead of the Homebrew version of Python. They should point to the Homebrew version, for example:


PYTHON_INCLUDE := /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/include/python2.7

PYTHON_LIB := /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/

Then append the python directory inside the Caffe installation directory to PYTHONPATH as given below.

export PYTHONPATH=[Caffe installation directory]/python:$PYTHONPATH
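Equivalently, the path can be prepended from inside Python before importing caffe (the directory below is a placeholder for your actual Caffe installation):

```python
import sys

caffe_python_dir = "/path/to/caffe/python"  # placeholder for your Caffe installation
if caffe_python_dir not in sys.path:
    sys.path.insert(0, caffe_python_dir)

# import caffe  # should now resolve the module
```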

Error: Fatal Python error: PyThreadState_Get: no current thread in Caffe (PyCaffe)

import caffe

The following error is encountered (and Python crashes) when executing the above import statement in Python (PyCaffe).

Fatal Python error: PyThreadState_Get: no current thread

Usually, the error is due to conflicts between different Python versions installed on the machine. If you tried the following step (reinstalling), as mentioned in many other posts, and still get the error, then try the troubleshooting steps mentioned in this post. They worked for me :)

brew uninstall boost-python
brew install --build-from-source --fresh -vd boost-python

Caffe is primarily written in C++, and PyCaffe is its Python interface. PyCaffe uses Boost.Python, a C++ library that enables interoperability between C++ and Python.

During the Caffe installation, we need to ensure that Boost.Python is linked against the Homebrew version of Python and not the system Python. You can check the Python error report for any references to the system Python libraries.

Use the following commands (in terminal) to check that.

otool -L [Caffe installation directory]/python/caffe/
otool -L /usr/local/opt/boost-python/lib/libboost_python.dylib

Note: You can replace the above libboost_python.dylib path with /usr/local/Cellar/boost-python/1.57.0/lib/libboost_python.dylib as well. (Additional info: Homebrew in /usr/local/Cellar/ - every formula is also linked to a /usr/local/opt directory. It provides a path for a formula's contents that does not change across version upgrades.)

otool is a command-line tool used to find the dependencies of an executable. The -L option lists the shared libraries it uses.

As a result of the above command, you may see a reference to the system Python libraries, as given below.

Then you need to change that location to the Homebrew version of Python using install_name_tool, as given below.

install_name_tool [-change old new] input

sudo install_name_tool -change /System/Library/Frameworks/Python.framework/Versions/2.7/Python /usr/local/Frameworks/Python.framework/Versions/2.7/Python /usr/local/opt/boost-python/lib/libboost_python.dylib

After this, try import caffe, it should work as expected.

Sunday, May 1, 2016

socket.error: [Errno 32] Broken pipe in SUMO TRACI_tls example


$ python
Fontconfig warning: ignoring UTF-8: not a valid region tag
Loading configuration... done.
Traceback (most recent call last):
  File "", line 129, in
  File "", line 81, in run
  File "../../../tools/traci/", line 394, in init
    return getVersion()
  File "../../../tools/traci/", line 416, in getVersion
    result = _sendExact()
  File "../../../tools/traci/", line 238, in _sendExact
    _connections[""].send(length + _message.string)
socket.error: [Errno 32] Broken pipe

sumoProcess = subprocess.Popen([sumoBinary, "-c", "data/cross.sumocfg", "--tripinfo-output", "tripinfo.xml", "--remote-port", str(PORT)], stdout=sys.stdout, stderr=sys.stderr)    

Increase the delay (ms) in the SUMO GUI to 10 ms (or any value greater than zero).

Environment: Mac OS X 10.10.3, Python 2.7, SUMO 0.21

Friday, April 15, 2016

Error: mdb_status == 0 (2 vs. 0) No such file or directory

mdb_status == 0 (2 vs. 0) No such file or directory

Check the location of the LMDB files in the prototxt file, and verify that the files exist.

Library not loaded: /usr/local/opt/libpng/lib/libpng16.16.dylib
Referenced from: /usr/local/lib/libopencv_highgui.2.4.dylib
Reason: Incompatible library version: libopencv_highgui.2.4.dylib requires version 37.0.0 or later, but libpng16.16.dylib provides version 34.0.0
make: *** [runtest] Trace/BPT trap: 5


brew reinstall libpng

Error: The "brew link" step did not complete successfully

The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink include/google
/usr/local/include is not writable.


This is a permission issue. Fix the ownership of the relevant /usr/local directories:

sudo chown -R 'jwithanawasam':admin /usr/local/lib
sudo chown -R 'jwithanawasam':admin /usr/local/include
sudo chown -R 'jwithanawasam':admin /usr/local/share
sudo chown -R 'jwithanawasam':admin /usr/local/bin

You can try again using:
brew link protobuf

Thursday, April 14, 2016

Error: No nodes loaded. Quitting (on error).

Command: netconvert --node-files=hello.nod.xml --edge-files=hello.edg.xml

Error: No nodes loaded. Quitting (on error).

Check whether the hello.nod.xml file is corrupted. Check for formatting issues.
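For comparison, a minimal well-formed node file (the ids and coordinates are placeholders) looks like this:

```xml
<nodes>
    <node id="n0" x="0.0" y="0.0"/>
    <node id="n1" x="100.0" y="0.0"/>
</nodes>
```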

Saturday, February 13, 2016

Reminiscence of writing a book on Machine Learning: Challenges, Lessons Learnt, Experiences and Insights...

I had been working on writing a book on machine learning, namely "Apache Mahout Essentials", for about six months; it was recently published by Packt Publishing, UK.

I’m sharing my experience in this article, as it may help others who want to pursue the same.

So, I got an invitation to write a book, what’s next?

When I got an email from Shaon (Acquisition Editor at Packt Publishing) inviting me to write a book, I immediately replied saying that I was occupied (if not overloaded) with MSc and office work and wouldn't be able to do it. Then Shaon approached me again, saying they could offer flexible timelines for chapter deliverables, and asked me to give it a second thought.

Then I spoke to Abi, with three possible options in my hand, one of which was "not writing the book". She straight away eliminated that one, saying that even writing a bedtime-story book was something she wouldn't miss out on.

Also, I spoke to Rafa, who was the Head of Research at Zaizi some time back. He assured me that I could do this and gave me a piece of advice that was just three words, but which helped me vastly throughout the journey of writing the book: "Step by step!"

So, I want to emphasise that even though I'm getting some recognition for writing a book, if it weren't for these people it would have been just a rejected invitation. I have no words to express my gratitude for the motivation they provided.

From my side, the steady and compelling reason to start writing this book was my unquenchable curiosity about machine learning and my desire to learn more.

Yup, I decided to go ahead and try it out. But still…

So, I started writing, and in no time I realised that this was not as easy as I had imagined.

One reason was that I was following an MSc in Artificial Intelligence, in which we had to complete four modules in eight weeks (with exams the following week and no study leave!), and we had lectures all weekend, 8 to 5 (those who went through this struggle will understand the pain ;)). Apart from that, I was working full time as well. To make the situation even worse, I had to travel for two hours daily, as I stayed outside Colombo.

So, I decided to use the travel time effectively and read the required content on my smartphone, even while standing in a crowded train. There was a period when I worked almost every hour continuously. As a result, I got stressed out, and much of the time I was sick.

This is where "focusing on one thing at a time" helped me, as it was overwhelming to think about all the items on my to-do list. I also planned out the structure and the content, with a fresh mindset, before starting to write. And then I spent the whole night before each deadline finalising everything.

However, regardless of the problems that came my way, I was determined to complete what I had started. I remember one day I had a terrible ear infection and was still struggling to meet a chapter deadline at 3 a.m.

Shaon and Nikhil (Content Editor at Packt Publishing) worked with me during this time and were kind enough to give me flexible chapter deadlines that did not overlap with my university exams.

Finally, it was all worth the effort!

The book went through several stages of reviews and revisions before publishing, and the happiest moment of all was when I completed all the first drafts.
And the next was perhaps getting the opportunity to decide on an image with n shades of grey as the cover page. ;)

Reading has been my favourite and most consistent hobby since childhood, yet I was unaware of the publishing process a book goes through before it reaches the reader's hands. So, getting to know the process itself was another exciting aspect.

In addition to learning and writing about ML concepts, planning how to structure and present the content so that others could understand it was a novel experience as well.

Finally, writing a book was one of the bucket-list items of my life, and it turned out to be immensely rewarding, exceeding my expectations.

However, this is just one milestone in the long journey of machine learning. There is a lot to learn, a lot to experience, and a lot of things that need to get better :)

Recap on WiML and NIPS 2015

I presented at WiML 2015 and attended NIPS 2015, which was held in Canada. I thought of sharing my experience of it on this blog. I know it's a little late, but better late than never... ;)

WiML was founded by two women researchers (Hanna Wallach and Jenn Wortman) from Microsoft Research when they were sharing a room at NIPS. Very few women engage in machine learning research (especially in our region, very few women work in it compared to men). WiML was formed to give women machine learning researchers an opportunity to collaborate and share their research experiences.

Few quick facts on WiML: 

  • Support network for women researchers 
  • Share knowledge about their research work
  • Initiated as a proposal for a session at the Grace Hopper Celebration
  • Co-located with the Grace Hopper Celebration (women in computing)
  • Co-located with NIPS since 2008

What did I present there?

At WiML, I presented an approach to collectively analyse and retrieve different content forms, such as image, video, and text, that are embedded within one another. I have given more information on this in the link below:

Few notes I took from WiML Invited Talks

The slides from the speakers for the invited talks are available at:

Superhuman multitasking - Raia Hadsell - DeepMind

  • Games as platform to implement and test AI applications
  • Why? difficult and interesting for humans/ huge variety of games/  built in evaluation criteria and reward 
  • Atari 2600 games
  • Reinforcement Learning 
  • Deep Q- Learning
  • Knowledge/ policy distillation - model distillation (model compression/ compress the knowledge in an ensemble into single model) 
  • Create intelligent agents that can learn many tasks > multiple Atari games 

Structured data/ facts at scale (and bit of machine learning at Google) - Corinna Cortes - Google Research 

  • Structured snippets - Extracting structure from unstructured content - Less clicking, more convenient 
  • Problem - How do we find good tables on the web?
  • Feature design - 
    • semantics of the table is often determined by surrounding text
    • detecting subject columns - other columns contain properties of the subject
  • Determining column classes using Google knowledge graph 

Is it all in the phrasing? - Lillian Lee - Cornell University

  • Does phrasing affect memorability?
  • Memorable and non-memorable movie quotes
  • Memorable quotes use less common word choices
  • Memorable quotes tend to be more general in ways that make them easy to apply in new contexts
  • “These aren’t the droids you are looking for” :)

Interactive and Interpretable Machine Learning Models for Human Machine Collaboration - Been Kim, AI2/University of Washington

  • Communication from machine to human - provide intuitive explanation 
  • Bayesian Case Model (BCM) - prototypes and subspaces to help humans understand machine learning results
  • BCM on recipe data
  • Subspaces, the sets of features that play important roles in the characterization of the prototypes
  • Learns prototypes, the "quintessential" observations that best represent clusters in a dataset
  • Prototype clustering and subspace learning. In this model, the prototype is the exemplar that is most representative of the cluster.

Other events I attended

  • Lean in Circles
    • Dedicated to helping all women achieve their ambitions.
    • Founded by Sheryl Sandberg - COO of Facebook
  • Nvidia 
    • GPU computing/ speed up deep learning matrix calculations 
    • NVidia digits - interactive deep learning GPU training system 
    • Demo that shows how GPUs can speed up training operation in deep neural networks 
  • Career advice session 
    • Helpful not specifically for machine learning but for any career 

Finally NIPS!

NIPS is one of the top machine learning conferences in the world. Below are a few important deep learning techniques that were highlighted at the conference.

Convolutional Neural Networks (CNNs)

  • Recognise images; used in computer vision
  • Object proposal generation, image segmentation
  • Feed-forward neural networks

Recurrent Neural Networks (RNNs)

  • Networks with recurrent connections that form cycles (signals travelling in both directions)
  • Used in NLP
  • Designed to recognise sequences such as speech signals or text
  • Process arbitrary sequences of input
  • Speech recognition, handwriting recognition
  • LSTM - question answering
    • A type of RNN
    • LSTM outperforms other sequence learning methods such as conventional RNNs and HMMs
    • "Grammar as a Foreign Language"

So, that's it for now. :) I might write a detailed blog post on NIPS if I get some free time in the future. NIPS is somewhat overwhelming, and I need to go through the ideas presented there again to get a clear grasp of the cutting-edge technologies in machine learning.

I have given some thoughts on this in Zaizi blog as well. 

Also, I shared my thoughts on the 2nd Colombo Machine Intelligence Meetup, which was held at WSO2 in February 2016.

My slides are available at:

Saturday, January 2, 2016

Issues while setting up Jade (Java Agent Development Framework)

Use the following command to avoid the errors given below:

java -cp "jade.jar:(path to your classes)" jade.Boot -agents nickName:(fully qualified name of the agent class, e.g., packageName.ClassName)

java -cp "jade.jar:Control-0.0.1-SNAPSHOT.jar" jade.Boot -agents buy:Examples.BookBuyerAgent

Possible errors due to issues in class path or class name:

Error creating the Profile [Can't load properties: Cannot find file buyer:BookBuyerAgent]
jade.core.ProfileException: Can't load properties: Cannot find file buyer:BookBuyerAgent
    at jade.core.ProfileImpl.(
    at jade.Boot.main(

jade.Boot: No such file or directory

SEVERE: Cannot create agent buyer: Class BookBuyerAgent for agent ( agent-identifier :name buyer@ ) not found - Caused by:  BookBuyerAgent