I presented at WiML 2015 and attended NIPS 2015, which was held in Montreal, Canada. I thought of sharing my experience in this blog post. I know it's a little late, but better late than never... ;)
WiML was founded by two women researchers from Microsoft Research, Hanna Wallach and Jenn Wortman Vaughan, who came up with the idea while sharing a room at NIPS. Very few women engage in machine learning research (especially in our region, where very few women work on machine learning compared to men). WiML was formed to give women machine learning researchers an opportunity to collaborate and share their research experiences.
A few quick facts on WiML:
- Support network for women researchers
- Share knowledge about their research work
- Began as a proposal for a session at the Grace Hopper Celebration
- Co-located with the Grace Hopper Celebration (women in computing)
- Co-located with NIPS since 2008
What did I present there?
At WiML, I presented an approach to collectively analyse and retrieve different content forms, such as images, videos, and text, that are embedded within one another. I have given more information on this in the link below:
A few notes I took from the WiML invited talks
The slides from the speakers for the invited talks are available at: http://www.thetalkingmachines.com/blog/2016/1/15/real-human-actions-and-women-in-machine-learning
Superhuman multitasking - Raia Hadsell - DeepMind
- Games as a platform to implement and test AI applications
- Why? They are difficult and interesting for humans, there is a huge variety of games, and they come with built-in evaluation criteria and rewards
- Atari 2600 games
- Reinforcement Learning
- Deep Q-Learning
- Knowledge/policy distillation - model distillation (model compression: compressing the knowledge of an ensemble into a single model)
- The goal: intelligent agents that can learn many tasks, such as multiple Atari games
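To make the Q-learning idea concrete, here is a minimal tabular Q-learning sketch. This is my own toy illustration of the underlying update rule, not DeepMind's DQN, which replaces the table with a deep neural network:

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # Tabular Q-learning step:
    #   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy 2-state world: from state 0, action "right" reaches state 1 with reward 1.
Q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
for _ in range(50):
    q_learning_update(Q, 0, "right", 1.0, 1)

print(Q[0]["right"] > Q[0]["left"])  # the learned policy prefers "right"
```

After repeated updates, `Q[0]["right"]` converges toward the reward of 1.0, so the greedy policy picks the rewarding action.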
Structured data/facts at scale (and a bit of machine learning at Google) - Corinna Cortes - Google Research
- Structured snippets - extracting structure from unstructured content - less clicking, more convenience
- Problem - How do we find good tables on the web?
- Feature design - the semantics of a table are often determined by the surrounding text
- Detecting subject columns - the other columns contain properties of the subject
- Determining column classes using the Google Knowledge Graph
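As a toy illustration of subject-column detection (my own sketch, not Google's actual method), one simple heuristic is to pick the leftmost column whose values are mostly unique and non-numeric:

```python
def subject_column(table):
    """Toy heuristic: the subject column is the leftmost column whose
    values are mostly unique and non-numeric; the remaining columns
    are assumed to hold properties of that subject."""
    n_rows = len(table)
    for col in range(len(table[0])):
        values = [str(row[col]) for row in table]
        non_numeric = all(not v.replace(".", "", 1).isdigit() for v in values)
        mostly_unique = len(set(values)) / n_rows > 0.8
        if non_numeric and mostly_unique:
            return col
    return 0  # fall back to the first column

# Example: countries (subject) with population and capital as properties.
rows = [
    ["Canada", "36.3", "Ottawa"],
    ["France", "66.9", "Paris"],
    ["Japan", "126.8", "Tokyo"],
]
print(subject_column(rows))  # -> 0 (the country column)
```

A production system would combine many such signals (surrounding text, column types from the knowledge graph) rather than a single rule.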
Is it all in the phrasing? - Lillian Lee - Cornell University
- Does phrasing affect memorability?
- Memorable and non-memorable movie quotes
- Memorable quotes use less common word choices
- Memorable quotes tend to be more general in ways that make them easy to apply in new contexts
- “These aren’t the droids you are looking for” :)
Interactive and Interpretable Machine Learning Models for Human Machine Collaboration - Been Kim, AI2/University of Washington
- Communication from machine to human - provide intuitive explanation
- Bayesian Case Model (BCM) - prototypes and subspaces to help humans understand machine learning results
- BCM on recipe data
- Subspaces: the sets of features that play important roles in characterizing the prototypes
- Learns prototypes: the "quintessential" observations that best represent clusters in a dataset
- Prototype clustering and subspace learning - in this model, the prototype is the exemplar that is most representative of the cluster
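The prototype idea can be sketched with a simple medoid computation - a toy illustration of the "most representative exemplar" notion, not the full Bayesian Case Model:

```python
def prototype(cluster):
    """Pick the medoid: the member with the smallest total distance to all
    other members. This mirrors BCM's idea of a prototype being an actual
    observation from the data, not an artificial average."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cluster, key=lambda p: sum(dist(p, q) for q in cluster))

# Toy cluster of 2-D points: the central point is chosen as the prototype.
points = [(0.0, 0.0), (1.0, 1.0), (1.1, 0.9), (2.0, 2.0)]
print(prototype(points))  # -> (1.0, 1.0)
```

Returning a real data point (rather than a centroid) is what makes the explanation interpretable: a human can inspect an actual example, such as a real recipe in the recipe-data experiment.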
Other events I attended
- Lean In Circles
- Dedicated to helping all women achieve their ambitions.
- Founded by Sheryl Sandberg, COO of Facebook
- GPU computing - speeding up the matrix calculations in deep learning
- NVIDIA DIGITS - Interactive Deep Learning GPU Training System
- A demo showing how GPUs can speed up the training of deep neural networks
- Career advice session
- Helpful not just for machine learning, but for any career
NIPS is one of the top machine learning conferences in the world. Here are a few important deep learning techniques that were highlighted at the conference.
- Convolutional neural networks (CNNs)
  - Feed-forward networks used in computer vision to recognise images
  - Object proposal generation, image segmentation
- Recurrent neural networks (RNNs)
  - Networks with recurrent connections that form cycles (signals travel in both directions)
  - Designed to recognise sequences, such as speech signals or text, and can process input sequences of arbitrary length
  - Used in NLP, speech recognition, and handwriting recognition
- LSTM (Long Short-Term Memory)
  - A type of RNN; used for question answering
  - Outperforms other sequence learning methods such as conventional RNNs and HMMs
  - "Grammar as a Foreign Language"
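To make the feed-forward vs. recurrent distinction concrete, here is a minimal vanilla RNN step in plain Python. This is illustrative only; an LSTM adds gating on top of this recurrence so the network can remember information over long sequences:

```python
import math

def rnn_step(x, h, W_xh, W_hh):
    # One recurrent step: the new hidden state mixes the current input
    # with the previous hidden state, so the network carries context forward.
    return [math.tanh(sum(wx * xi for wx, xi in zip(W_xh[i], x)) +
                      sum(wh * hi for wh, hi in zip(W_hh[i], h)))
            for i in range(len(h))]

# Process an arbitrary-length sequence with fixed weights (2 inputs, 2 hidden units).
W_xh = [[0.5, -0.3], [0.1, 0.8]]
W_hh = [[0.2, 0.0], [-0.1, 0.3]]
h = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = rnn_step(x, h, W_xh, W_hh)
print(len(h))  # hidden state size stays fixed regardless of sequence length
```

The same fixed-size weights are reused at every time step, which is why an RNN can handle input sequences of arbitrary length, unlike a plain feed-forward network.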
So, that’s it for now. :) I might write a detailed blog post on NIPS if I get some free time in the future. NIPS is somewhat overwhelming, and I need to go through the ideas presented there again to get a clear grasp of the cutting-edge techniques in machine learning.
I have shared some thoughts on this on the Zaizi blog as well.
I also shared my thoughts at the 2nd Colombo Machine Intelligence Meetup, which was held at WSO2 in February 2016.
I have shared my slides at: