Sunday, December 2, 2012

Content Extraction and Context Inference based Information Retrieval

Final year research project

 Abstract


At present, most information retrieval mechanisms consider only exact matches of textual metadata such as topics, manual tags and descriptions. These methods are yet to provide relevance on par with human intuition, mainly because they fail to assess the content and the context of the available data in a unified manner. 

Extracting semantic content and inferring knowledge from low level features has always been a major challenge because of the well-known semantic gap issue.

The proposed solution strives to overcome the above-mentioned difficulty by providing a framework-based approach, using machine learning and knowledge representation, through which the right information can be retrieved regardless of content format or contextual discrepancies. Given that information can be embedded in any content format, the proposed framework analyzes it and provides a set of content and context descriptors which can be used in any information retrieval application. 



Key Words:
Information Retrieval, Computer Vision, Ontology, Development Framework


Literature Review


In spite of the exponential growth of information, which comes in various forms such as video,
image and audio, modern information retrieval mechanisms still use manual, text-based
metadata such as topics, tags and descriptions to provide relevant results for user queries. 

Actual content and its associated context are not considered by information retrieval mechanisms
when assessing the relevance of a search result to a given user query. 

Accordingly, modern information retrieval systems are still far from the way human instinct assesses the relevance of information. 

Considering existing products: given a sample image or audio clip, Query By Example (QBE) methods have been suggested to find similar content by analyzing and matching the actual content. However, in this approach matching is done between low-level features. 

It is difficult for a user to express a search intention in terms of the corresponding low-level features, which results in significant semantic information loss. Ontology-driven methods have been introduced to provide semantic analysis of information using an underlying knowledge base. However, these solutions are either text based or web-content based. 

Each content representation and its associated context hold equal value when weighing the relevance of information to the user. Due to the information loss incurred by considering only text, the relevance of the results is greatly reduced. Further, to bridge the gap between the human cognitive way of information retrieval and automatic information retrieval, machines should simulate this behavior by analyzing the available audio and visual content. 

Even though it is evident that visual information such as images and video helps users understand a particular concept more realistically and with less effort, no research has been done to find a collective way of processing visual information along with text and audio and inferring the associated context.
After an extensive study of the research being done in this area, it was found that extracting semantic concepts from low-level observable features has always been a major challenge due to the well-known semantic gap issue. 

When it comes to feature detection and extraction for visual information, local features are preferred over global features to avoid background clutter. With the advent of scale-invariant feature detectors and descriptors such as SIFT and SURF, object detection and recognition have improved drastically, with invariance to scale, rotation and illumination. Also, due to the high dimensionality of these descriptors, distinctiveness is greatly improved. The Bag of Visual Words model adopts these advantages in its application to semantic object detection. 
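As an illustration of the quantization step at the heart of the Bag of Visual Words model, here is a toy Python sketch. The 2-D points stand in for real 64/128-D SURF/SIFT descriptors, and the vocabulary is hand-picked rather than learned with k-means, so this is only a sketch of the idea, not the thesis implementation:

```python
# Toy Bag of Visual Words quantization: each local descriptor is assigned
# to its nearest cluster centre ("visual word"), and the image is then
# summarized as a histogram of visual-word counts.

def nearest_word(descriptor, vocabulary):
    """Index of the closest visual word (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocabulary)), key=lambda i: dist2(descriptor, vocabulary[i]))

def bovw_histogram(descriptors, vocabulary):
    """Histogram of visual-word occurrences for one image."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        hist[nearest_word(d, vocabulary)] += 1
    return hist

# A hypothetical 3-word vocabulary (in practice learned with k-means).
vocabulary = [(0.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
descriptors = [(0.5, 0.2), (9.8, 10.1), (0.1, 9.7), (0.2, 0.1)]
print(bovw_histogram(descriptors, vocabulary))  # [2, 1, 1]
```

Images of the same concept tend to produce similar histograms, which is what makes the histogram usable as input to a classifier such as Naive Bayes.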

During the evaluation of different content processing mechanisms for image, video and audio, it was found that even though the processing mechanism for each content format differs, all of them share a common flow: pre-processing, feature detection, feature extraction and semantic concept detection. 

Once the semantic concepts are extracted from the low level features, context should be inferred for a given set of concepts to further refine the solution. 

A wide range of context-inference approaches was considered, namely Natural Language Processing (NLP), logic-based methods (formal logic, predicate logic), fuzzy reasoning, probabilistic methods (Bayesian reasoning) and semantic networks. 

Semantic networks were designated as the main technique to infer context, yet no existing algorithm was found to meet the exact requirement. Further, the author discovered that fuzzy reasoning can be used to assess relevance as a gradual transition from falsehood to truth when applied to information retrieval. 
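A minimal sketch of that fuzzy-relevance idea, assuming a simple piecewise-linear membership function (the thresholds 0.2 and 0.8 here are arbitrary illustration values, not taken from the thesis):

```python
def fuzzy_relevance(score, low=0.2, high=0.8):
    """Map a raw similarity score to a degree of relevance in [0, 1]:
    0 below `low`, 1 above `high`, and a linear (gradual) transition
    from falsehood to truth in between."""
    if score <= low:
        return 0.0
    if score >= high:
        return 1.0
    return (score - low) / (high - low)

print(fuzzy_relevance(0.5))  # 0.5 -- partially relevant, not a hard yes/no
```

The point is that a result is no longer simply "relevant" or "not relevant"; it carries a degree of relevance that can be used for ranking.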

Considering the above aspects, it is concluded that the suggested solution to the problem domain is unique and feasible, yet immensely challenging due to the limitations of feature extraction techniques in bridging the semantic gap, especially for a wide domain. Further, the limited availability of algorithms for context inference has made it even more time consuming and thought provoking.



 High Level Design

Rich Picture in Information Retrieval Application


High Level Architecture




Data Flow




Class Diagram (Framework Design)

Implementation

Content Negotiation


Content Extraction

  • Image/ video processing tool: OpenCV (EmguCV)
  • Algorithms: Bag of visual words model, SIFT/ SURF for visual feature detection and extraction, K-means for feature clustering, Naive Bayes for classification

SIFT/ SURF Algorithm comparison



Visual word histograms for similar concepts


Context Inference


Context inference for ambiguous scenarios



Semantic network: WordNet

Application in Information Retrieval

Once content and context descriptors are retrieved from the framework, they can be used in many applications. One such application is given here. 

Literal relatedness between the user context and the content context can be derived using a relevance decision factor. 

This measure can be used to assess the relevance of the available content to a particular user. 

For example, the user context can be represented in different forms such as profession, personal interests and the present short-term search intention. The content context can be the metadata of the available content.
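This excerpt does not give the formula for the relevance decision factor; one plausible instantiation of "literal relatedness" is a simple term-overlap (Jaccard) score, sketched below with made-up context terms:

```python
def relevance_decision_factor(user_context, content_context):
    """Hypothetical literal-relatedness measure: Jaccard overlap between
    the user's context terms (profession, interests, search intention)
    and the content's metadata terms. One plausible sketch only; the
    actual factor used in the project is not published in this excerpt."""
    u, c = set(user_context), set(content_context)
    return len(u & c) / len(u | c) if (u | c) else 0.0

user = {"photography", "travel", "beach"}
meta = {"beach", "sunset", "travel", "sri-lanka"}
print(relevance_decision_factor(user, meta))  # 0.4
```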

Testing and Evaluation



Precision and recall have been used in many information retrieval applications to assess relevance.
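For reference, the two measures can be computed as follows (the item IDs are invented for illustration):

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved items that are relevant;
    recall = fraction of relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# 3 of the 4 retrieved images are truly relevant; 3 of 6 relevant items found.
p, r = precision_recall({"img1", "img2", "img3", "img4"},
                        {"img2", "img3", "img4", "img5", "img6", "img7"})
print(p, r)  # 0.75 0.5
```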



For testing, test cases were derived according to the given criteria for each component. Training and ground-truth images were taken from Google Images and Flickr. 

The accuracy of concept detection was tested on images with different variations such as scale, illumination and background clutter. 

The context inference component was tested with different threshold values to obtain quantitative measures of relevance such as precision and recall. 

Then, non-functional testing was performed. Test results indicate that the prototype implementation is successful. 

Further, a critical evaluation was performed with the participation of domain experts from academic and technical backgrounds. Participants confirmed that the suggested approach helps improve the relevance of information retrieval and is thus a timely need. 

Future Enhancements



    • Support different content processing mechanisms for the same content type (e.g., research papers and newspapers given as text content) 
    • Different processing modules such as audio and web content can be implemented and plugged into the framework  
    • Fusion of audio words with visual content can be used to improve the accuracy of semantic concept recognition in video content  
    • Scale and evaluate the performance on a realistic database
     
     


 

Thursday, November 8, 2012

Connect to server failed; Check P4PORT

To fix it, type the following command in cmd:

set P4PORT=[Perforce server name: port]

Ex: set P4PORT=p4server.dev.org:1776

Then verify, using p4 commands, that the connection works fine.

Tuesday, October 9, 2012

Difference between "override" and "new" keyword

The "new" modifier is used to hide a member inherited from a parent class (typically a non-virtual one). It does not provide polymorphic behavior, so if someone calls the method through a parent class reference, the implementation marked with "new" is not called.

The "override" modifier can be used to override virtual or abstract members, and it preserves polymorphic behavior.

Wednesday, September 26, 2012

System.InvalidOperationException in NUnit with RhinoMocks

To solve the above exception while running unit tests, you need to make sure that all the mocked methods have the "virtual" keyword in their signatures.
Ex: public virtual void Method(int a, int b)

The reason for this issue is that Rhino Mocks creates a proxy class when you call StrictMock (or any mock type) on MockRepository, and to do so it needs to extend your type.

By making the methods virtual, we let RhinoMocks override them for its mocking purposes.

Sunday, September 23, 2012

Convert clockwise angles to anti-clockwise angles in Construct2

By default, angles in Construct2 face right and increase clockwise.

Angles in Construct2

To change this behavior to anti-clockwise, do the following:

Anti-clockwise angle = - (Clockwise angle)

To display a clockwise angle as an anti-clockwise angle, use the following calculation:
Anti-clockwise angle = 360 degrees - clockwise angle

Ex: Clockwise angle 45 degrees = anti-clockwise angle 315 degrees, which is equivalent to -45 degrees
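The same conversions as a small Python sketch (Python is used here only for illustration; the Construct2 expressions work the same way):

```python
def to_anticlockwise(clockwise_angle):
    """Convert a clockwise angle (0 = facing right) to its
    anti-clockwise equivalent in the range [0, 360)."""
    return (360 - clockwise_angle) % 360

print(to_anticlockwise(45))  # 315
print(to_anticlockwise(0))   # 0
print(-45 % 360)             # 315 -- negating points in the same direction
```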

Thursday, September 20, 2012

Test Driven Development(TDD) and Unit Tests

The idea of TDD is to focus on the requirements and the design first, before going into implementation details.

Unit tests are written to test a small piece of code in isolation.

When we write unit tests in TDD, we write the tests before doing the development, rather than writing the code first and then writing unit tests as in traditional development.

First, all the unit tests should fail, as the implementation is not available. Once the implementation is done, we execute the tests again, and then all the tests should pass.

This encourages incremental/evolutionary design by providing automatic regression testing for the design of the API.
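A toy illustration of that red-green cycle, in Python with a bare assert-based test for brevity:

```python
# Step 1: write the test first. Running test_add() at this point would
# raise NameError, because add() does not exist yet -- the "red" phase.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Step 2: write just enough code to make the test pass -- the "green" phase.
def add(a, b):
    return a + b

test_add()  # passes silently now that the implementation exists
print("all tests pass")
```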

Wednesday, August 22, 2012

SourceMonitor for code metrics

http://www.campwoodsw.com/sourcemonitor.html

The following information can be retrieved about your source code:
  • No. of files
  • No. of lines
  • No. of statements
  • % of comments
  • % of docs
  • Classes
  • Max complexity
  • Max depth
  • Avg. depth
  • Avg complexity

If you are using a source control system such as Perforce, you can get the changes for each time period or revision. It is also significantly fast.


Monday, April 23, 2012

ANTLR important language syntaxes

* Rules begin with a lower case letter

* Token types begin with an upper case letter

* Lexer rules are given in upper case.

* x | y | z - match any alternative: x or y or z

* x? - x is optional

* x* - x can appear zero or more times

* x+ - x can appear one or more times

* Lexer rules are always tokens and should be given in upper case. Methods related to lexer rules are prefixed with 'm'.

Reference: The Definitive ANTLR Reference: Building Domain-Specific Languages

ANTLR FAQ

Does ANTLR know about a specific language?
No. It recognizes a language using the provided grammar*.
ANTLR can generate a recognizer which takes a particular sentence or phrase as input and applies the grammatical structure defined in the grammar files to those input symbols.

These recognizers can be implemented using different language targets such as C# or Java.

What do you mean by grammar?
Using a grammar, we can tell ANTLR what a particular language looks like, so that ANTLR can identify it. A grammar describes the syntax of a language.
A grammar consists of a set of language rules*. The grammar notation used is BNF*.

What is a rule?
A rule represents a phrase or sentence of the language.
Each rule may consist of one or more alternative sub-rules. Rules are invoked in a recursive manner.

Example of BNF notation:
Postal address ::= Name Route City Country
Name ::= First name Middle name? Last name
Route ::= Route name part*
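To see how rules invoke each other recursively, here is a tiny hand-written recognizer in Python for a toy grammar, expr ::= '(' expr ')' | 'x'. ANTLR generates this style of recursive recognizer for you from the grammar file; this sketch only illustrates the mechanism:

```python
# Grammar (BNF):  expr ::= '(' expr ')' | 'x'

def parse_expr(tokens, pos=0):
    """Return the position just past one expr, or raise SyntaxError."""
    if pos < len(tokens) and tokens[pos] == "x":
        return pos + 1
    if pos < len(tokens) and tokens[pos] == "(":
        pos = parse_expr(tokens, pos + 1)  # the rule invokes itself
        if pos < len(tokens) and tokens[pos] == ")":
            return pos + 1
    raise SyntaxError("malformed expr")

def recognizes(text):
    """True if the whole input is one valid expr."""
    try:
        return parse_expr(list(text)) == len(text)
    except SyntaxError:
        return False

print(recognizes("((x))"))  # True
print(recognizes("(x"))     # False
```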

What is a token file?
A token file can be considered the vocabulary for the grammar of a specific language.

What is a target language?
The target language is the computer language in which ANTLR generates the recognizer.

What are actions?
Actions are code blocks written in the target language. Actions can refer to tokens, rules or character references using element labels. (x = T, where x is the label name and T a token)



Tuesday, April 17, 2012

How to fix "TypeInitializationException"?

This exception is thrown as a wrapper around the exception thrown by the class initializer.
The best thing is to check the inner exception details. (View details)

I got the following exception:
The type initializer for 'Antlr.StringTemplate.StringTemplate' threw an exception

This was due to one missing assembly:

{"Could not load file or assembly 'antlr.runtime, Version=2.7.7.3, Culture=neutral, PublicKeyToken=d7701e059243744f' or one of its dependencies. The system cannot find the file specified.":"antlr.runtime, Version=2.7.7.3, Culture=neutral, PublicKeyToken=d7701e059243744f"}
I could fix this after adding the antlr.runtime dll as a reference to the project.

Wednesday, February 29, 2012

Nested IF conditions vs Single line IF with AND operation

In C#, the compiler will generate the same MSIL code for the following code chunks.

Single line IF

if ( A && B ){
  // Do something;
}

Nested IF

if ( A ){
  if ( B ) {
  // Do something;
  }
}
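The analogous equivalence can be checked behaviorally in Python, where `and` also short-circuits, so the second condition is never evaluated when the first is false (the C# claim above is about generated MSIL; this sketch only demonstrates the equivalent semantics):

```python
calls = []  # records which conditions were actually evaluated

def a(result):
    calls.append("a")
    return result

def b(result):
    calls.append("b")
    return result

def single_line(x, y):
    calls.clear()
    if a(x) and b(y):
        return "ran"
    return "skipped"

def nested(x, y):
    calls.clear()
    if a(x):
        if b(y):
            return "ran"
    return "skipped"

# Same result AND same evaluation order for every input combination.
for x in (True, False):
    for y in (True, False):
        r1, c1 = single_line(x, y), list(calls)
        r2, c2 = nested(x, y), list(calls)
        assert (r1, c1) == (r2, c2)
print("both forms behave identically, including short-circuiting")
```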

Monday, February 27, 2012

How to fix TypeLoadException was unhandled Error?

Problem: I got the following runtime error while debugging my application.
TypeLoadException was unhandled

"Evaluate" is an interface method defined in one project (Assembly A) and implemented in another project (Assembly B).

Then I saw following warning message in the Error list:
Warning: Found conflicts between different versions of the same dependent assembly.

The assembly named in the above warning (Assembly C) is referenced by both Assembly A and Assembly B. It is a third-party dll. Also, Assembly A is referenced by Assembly B.


Once I double-clicked on the warning, it prompted me: "One or more dependent assemblies have version conflicts. Do you want to fix these conflicts by adding binding redirect records in the app.config file?"

Then, once I accepted that, the code chunk given below was automatically added to my App.config file.
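For reference, such an auto-added redirect typically has the following shape. The assembly name and public key token below are placeholders, since the post does not name the actual third-party assembly; only the version range reflects what is described in this post.

```xml
<!-- Hypothetical sketch: "AssemblyC" and the token are placeholders. -->
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="AssemblyC" publicKeyToken="0000000000000000" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-3.1.3.42154" newVersion="3.1.3.42154" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```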







Consequently, that has fixed the issue!

But I suspected that there could be some unused dlls, as the above App.config entry redirects all assembly binding references for versions 0.0.0.0 through 3.1.3.42154 to the 3.1.3.42154 (latest) dll. This has to happen because you can't load multiple versions of the same assembly in the same app domain; without the binding redirect, an attempt to load a different version of an already loaded assembly will fail.

Since the above code chunk felt like a workaround, I revisited the code to find the actual reason for the exception.

There, I found that different versions of the same dll were used in two dependent projects.

So, when I referenced the third-party dll (Assembly C) from the same folder (same version) in both projects and removed the code chunk in App.config, I did not get the "TypeLoadException" issue.

After doing some research on this, I found that it may occur if you have an interface in one assembly and its implementation in another, and the implementation assembly was built against a different version of the interface.

Also, this issue can occur if you load an assembly dynamically using the LoadFrom(string) method. I used LoadFrom to load Assembly B.

Also, the project dependency described above is called a "diamond-shaped" dependency, which may cause this issue.

However, although the error message is a little misleading, we can get some clues out of it. The exception can occur even if the method is never executed at runtime.

Sunday, February 26, 2012

How to survive as a Software Engineer? It's not all about CODING!

Being a software engineer for a few years, and also being involved in other fields such as quality assurance and database administration, made me think twice about "What are the essential qualities you need to develop to survive as a software engineer?".

These are some of the SPECIFIC qualities you need in order to ENJOY your work; in the sense that you can still keep working as a software engineer and earn an income without them.
Without boasting further about it, I will get into the topic.

1.  Being Curious

2. Love Problems (This is hilarious though!)

3. Have faith that problems will not last forever, might as well call this "Optimism"

4. See problems and challenges as means of improving - as opposed to getting "suicidal feelings"

5. Celebrate "fixing a bug"

6. Hate 8 - 5

That's it for now. I'll keep adding more whenever I find some. Add anything that's missing. I will elaborate on each of the above later.

Thursday, February 23, 2012

Using ANTLR with Visual Studio 2008 (C# Target)

ANTLR can be used with different language targets such as C# and Java. This is how to use ANTLR with C# in the Visual Studio IDE.
  1. Install the Java Runtime Environment (JRE).
  2. Verify the above using the "java -version" command.
  3. Set the Java CLASSPATH variable to point to the ANTLR package version you want to use. Ex: antlr-3.1.3.jar
  4. Build the ANTLR grammar to verify that there are no issues, using the following command:
     java -cp "path to ANTLR package" org.antlr.Tool "grammar file name"
     Ex: java -cp "C:\antlr\antlr-3.1.3.jar" org.antlr.Tool tsqllexer.g
Some important options:

If you get the "java.lang.OutOfMemoryError: Java heap space" issue, use the following option to increase the JVM heap size:
-Xmx750M

If you get the "Multiple token rules can match input such as X; tokens X, Y were disabled for that input" issue, use the following option to set the NFA conversion timeout for each decision to a suitable value:
-Xconversiontimeout 10000
  1. Create new C# project.
  2. Add these dlls as references: antlr.runtime.dll, Antlr3.Runtime.dll, Antlr3.Utility, StringTemplate.dll
  3. Build the C# project.

How to fix "Exception in thread "main" java.lang.NoClassDefFoundError: org/antlr/Tool"

The following is a common initial error for ANTLR developers when they try to build their application for the first time using the command prompt:

Exception in thread "main" java.lang.NoClassDefFoundError: org/antlr/Tool

What you need to do?

You need to set the Java classpath for the ANTLR application as given below in cmd:
set CLASSPATH=D:\antlr\antlr-3.1.3.jar

Check if it is set correctly using the following command:
echo %CLASSPATH%

Above command should display the given path.

Then verify it using the following command:
java org.antlr.Tool -version

The result should be something like this based on your ANTLR version:
ANTLR Parser Generator Version 3.1.3 Mar.18 2009 10:09:25

Why you need to do that?

The classpath is a parameter, set either on the command line or through an environment variable, that tells the JVM (Java Virtual Machine) where to look for user-defined classes and packages when running Java programs. Without it, the JVM does not know where to find the code base for org.antlr.Tool.

Wednesday, February 15, 2012

Multi-dimensional databases vs Relational databases

Just back from the SQL Server user group meeting, where the first topic was about implementing multidimensional databases using OLAP.

Below video has a good introduction about this interesting concept!

Get data type of a table in database dynamically

You can get the data types of the columns in a given table dynamically using the system tables, as given below. Assume 'CurrentStatus' is the table name.

The column name and the data type will be displayed as the result.

Monday, February 13, 2012

JOIN vs IN vs EXISTS in sql

Difference between IN and EXISTS
  • IN ignores NULL values, whereas EXISTS considers NULL values in the result.
  • JOIN also considers NULL values.
Difference between IN and JOIN
  • IN does not return duplicate values, whereas JOIN does. (To avoid duplicate values, use DISTINCT.)
  • EXISTS also does not return duplicate values.
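The duplicates point is easy to demonstrate with Python's built-in sqlite3 module (the behavior shown is standard SQL, not specific to SQL Server):

```python
# sqlite3 demo: joining against a table that holds the matching value twice
# returns two rows, while IN returns each qualifying row only once.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(x INTEGER);
    CREATE TABLE b(x INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1), (1);   -- duplicate value
""")

join_rows = con.execute("SELECT a.x FROM a JOIN b ON a.x = b.x").fetchall()
in_rows = con.execute("SELECT x FROM a WHERE x IN (SELECT x FROM b)").fetchall()

print(join_rows)  # [(1,), (1,)]  -- JOIN repeats the match per duplicate
print(in_rows)    # [(1,)]        -- IN reports each matching row once
```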

Wednesday, February 8, 2012

Why/ Why not use "WITH RECOMPILE" option?

Once the sproc has no syntax errors, it will create entries in the sysobjects, sysdepends and syscomments tables. But the query will not be compiled until you execute it.

The first time you execute the proc, SQL Server will create an execution plan and save it in the procedure cache for future use.

When the proc executes again, it will re-use the same execution plan, unless the statistics have changed.

So, if you want to use a new execution plan for each individual execution, use WITH RECOMPILE option in your proc.

Why?
- Optimal use of indexes on columns on a case-by-case basis

Why not?
- Re-using the cached plan avoids the cost of recompiling on every execution, which improves performance

Add an alias to connect database server using SSMS in MSSQL

  1. All Programs > SQL Server Configuration Manager
  2. Expand SQL Native Client 10.0 Configuration
  3. Aliases
  4. New Alias



Wednesday, February 1, 2012

Linux Kernel Headers

Linux kernel headers provide APIs for interaction between modules at the kernel level, as well as between user space and kernel space.

If you want to do any kernel-level operations in Linux, you need to use the above APIs with the C language.

They are located at /usr/src/linux-headers-x.x. See below:



How to fix the "The program 'gcc' can be found in the following packages" issue

GCC is the compiler environment for Linux. When you type "gcc" in the terminal, you will see the above error if gcc is not installed.

To fix this issue, run the following command:
sudo apt-get install build-essential

Create, Edit and Save a file in Linux

  1. pico test.c
  2. Type your content
  3. ctrl + x
  4. Select option 'Y'

Get process information in Linux

Use "top" command

How to get system information in Linux?

Use the uname -m command to see whether your kernel is 64-bit or 32-bit. uname -ar will give you other system and OS information such as the machine name, Linux version etc.


Tuesday, January 31, 2012

Cool tool for web accessibility evaluation - WAVE!

http://wave.webaim.org/

This is a free web accessibility evaluation tool. You can give the URL of your site, upload an HTML file, or paste the HTML code, and the tool will report potential accessibility issues in your application.

Monday, January 30, 2012

Linux kernel vs Windows kernel

In Linux, you have more control over OS kernel-level operations than in Windows. It is open source and free.
More info: http://linuxhelp.blogspot.com/2007/04/kernel-comparison-between-linux-2620.html

What is a kernel in an OS?
The kernel is the bridge between applications and system resources such as the CPU, memory and devices.

http://en.wikipedia.org/wiki/Kernel_(computing)

Tuesday, January 10, 2012

Facebook maps status updates on Japan earthquake

14 March 2011 Last updated at 10:38 GMT



As news of the Japanese earthquake and tsunami spread around the world, it was reflected in the status updates of Facebook users.



For the first time the social networking site has plotted user updates by place and time.



The site collated information on 4.5m status updates from 3.8m users.



It tracked the words 'Japan', 'earthquake' and 'tsunami'.

Times shown are in US Pacific time which is 8 hours behind GMT and 17 hours behind Japan.



Video courtesy of Facebook.
Facebook has generated an interactive map of how the news of the Japanese tsunami and earthquake spread through the world.

Facebook has plotted how status updates about the Japanese earthquake and tsunami spread across the world in the hours following the disaster, creating the first map of its kind

Hopefully those spreading the news via pictures and videos will increase awareness, and subsequently donations, which will help people dealing with the aftermath of numerous disasters, from earth-shaking to massive waves and potentially nuclear ones.

To see social networking truly being leveraged to help mankind is quite rewarding for someone chronicling its evolution.

Problem set: How could Facebook use MapReduce to map status updates on the Japan earthquake based on date, time and place?

Identify the associated map tasks

One document per status update

Let's assume that Facebook stores the status updates in the following text-based format:

001  Jayani   3/17/2011 4:45 PM  Sri-Lanka      We are with Japan
002  Bieber   3/17/2011 4:46 PM  United States  Now I'm really glad that I speak French
003  Enrique  3/17/2011 4:50 PM  Jamaica        Japan Earthquake result in a blast in Fukushima



The Situation type will contain the date, time and region details.



Class Situation
    DateTime
    Region

Map (Key = 001, Value = "Jayani  3/17/2011 4:45 PM  Sri-Lanka  We are with Japan")
{
    Situation01 =
        DateTime = 3/17/2011 4:45 PM
        Region = Sri-Lanka

    for each word w in value:
        EmitIntermediate(w, Situation01)    // e.g., ("We", Situation01)
}



Get intermediate values



Intermediate values from first map task:

We         Situation01(3/17/2011 4:45 PM, Sri-Lanka)

Are         Situation01

With      Situation01

Japan    Situation01        



Intermediate values from second map task:

Now      Situation02(3/17/2011 4:46 PM, United States)

I’m         Situation02

really     Situation02

glad        Situation02

that        Situation02

I               Situation02

speak    Situation02

French  Situation02



Intermediate values from third map task:

Japan                    Situation03(3/17/2011 4:50 PM, Jamaica)

Earthquake         Situation03

result                    Situation03

in                            Situation03

a                              Situation03

blast                      Situation03

in                            Situation03

Fukushima          Situation03



Sort the results



a                              Situation03

Are                         Situation01

blast                      Situation03

Earthquake         Situation03

French                  Situation02

Fukushima          Situation03

glad                        Situation02

I                               Situation02

I’m                         Situation02

in                            Situation03

in                            Situation03

Japan                    Situation01

Japan                    Situation03

Now                      Situation02

really                     Situation02

result                    Situation03

speak                    Situation02

that                        Situation02

We                         Situation01

With                      Situation01



Group the results by key



a                              Situation03

Are                         Situation01

blast                      Situation03

Earthquake         Situation03

French                  Situation02

Fukushima          Situation03

glad                        Situation02

I                               Situation02

I’m                         Situation02

in                            Situation03, Situation03               

Japan                    Situation01, Situation03                               

Now                      Situation02

really                     Situation02

result                    Situation03

speak                    Situation02

that                        Situation02

We                         Situation01

With                      Situation01



Reduce function

reduce(String word, Iterator situations):
{
    for each situ in situations
    {
        if (word == "japan" OR word == "earthquake" OR word == "tsunami")
        {
            if (situ not already in OutputCollection)
                OutputCollection.Add(situ);
        }
    }
}
 
Results:

Situation01, Situation03



Output collection (status updates about the Japan earthquake, based on region and time)



3/17/2011 4:45 PM, Sri-Lanka

3/17/2011 4:50 PM, Jamaica



Using the output collection, we can design a video of how the Japan earthquake was reflected in Facebook status updates across different regions of the world.
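The whole walkthrough above can be condensed into a runnable sketch; here is one in Python, where a plain dictionary stands in for the shuffle/group step and the data is the same three status updates:

```python
# Map each status update to (word, situation) pairs, shuffle/group by word,
# then reduce by keeping the situations whose key word is a tracked term.
from collections import defaultdict

updates = [
    ("001", "3/17/2011 4:45 PM", "Sri-Lanka",     "We are with Japan"),
    ("002", "3/17/2011 4:46 PM", "United States", "Now I'm really glad that I speak French"),
    ("003", "3/17/2011 4:50 PM", "Jamaica",       "Japan Earthquake result in a blast in Fukushima"),
]

TRACKED = {"japan", "earthquake", "tsunami"}

def map_update(update):
    """One map task: emit (word, situation) for every word in the status."""
    _, when, region, text = update
    situation = (when, region)
    return [(word.lower(), situation) for word in text.split()]

# Map + shuffle: group all situations under each word.
grouped = defaultdict(list)
for update in updates:
    for word, situation in map_update(update):
        grouped[word].append(situation)

# Reduce: collect (deduplicated) situations for the tracked words.
output = []
for word, situations in grouped.items():
    if word in TRACKED:
        for situ in situations:
            if situ not in output:
                output.append(situ)

for when, region in sorted(output):
    print(when, region)
# 3/17/2011 4:45 PM Sri-Lanka
# 3/17/2011 4:50 PM Jamaica
```

The result matches the output collection derived by hand above: the United States update never mentions a tracked term, and the Jamaica update appears only once despite matching both "japan" and "earthquake".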