Tuesday, July 3, 2018

Network Dissection to Divulge the Hidden Semantics of CNNs

Deep convolutional neural networks (CNNs) have gained immense popularity thanks to their ability to classify and recognize objects and scenes with reasonable accuracy. However, we also know that CNNs can be fooled by adversarial attacks: an image that a CNN previously recognized correctly can be altered so that a human still recognizes it easily, yet the CNN fails [1]. So the natural question arises: are these networks genuinely learning about objects and scenes the way we humans do?

Dissection
Researchers from MIT have recently conducted experiments along this line, since what happens in the hidden layers of a CNN still remains largely a mystery [2]. Their experiments aim to find out whether individual hidden units align with human-interpretable concepts such as objects in a scene or parts of an object, e.g., a lamp-detecting unit in a place-recognition network, or a bicycle-wheel-detecting unit in an object-detection network. If so, they need a way to quantify this emergent 'interpretability'. Interestingly, neuroscientists perform a similar kind of probing to uncover the behavior of biological neurons.
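To make the quantification concrete, here is a minimal sketch (plain NumPy, not the authors' code) of the core idea: a unit's activation map is thresholded, and the resulting binary mask is compared against a human-labeled concept mask using intersection-over-union (IoU). The function name, array shapes, and the toy "lamp" mask below are illustrative assumptions.

import numpy as np

# Sketch of the Network Dissection scoring idea: treat a hidden unit as a
# binary segmenter and compare it against a human-labeled concept mask
# using intersection-over-union (IoU). Shapes and names are assumptions.

def unit_concept_iou(activation_map, concept_mask, quantile=0.995):
    """Score how well one unit's activation map matches one concept mask.

    activation_map : 2-D float array, the unit's (upsampled) response
                     over an image, same spatial size as concept_mask.
    concept_mask   : 2-D bool array, 1 where the concept is present.
    quantile       : activations above this quantile count as "the unit
                     fired" (the paper thresholds each unit so that
                     roughly the top 0.5% of its activations survive).
    """
    threshold = np.quantile(activation_map, quantile)
    unit_mask = activation_map > threshold              # binarize the unit
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy usage: a unit that fires on the same region as the concept mask.
activation = np.random.rand(112, 112)
activation[30:60, 30:60] += 2.0                          # unit fires on a patch
concept = np.zeros((112, 112), dtype=bool)
concept[28:62, 28:62] = True                             # "lamp" pixels, say
print(f"IoU = {unit_concept_iou(activation, concept):.3f}")

In the paper itself this score is accumulated over a dedicated, densely labeled dataset (Broden), and a unit is reported as a detector for its best-matching concept if the IoU exceeds a small threshold (around 0.04).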

The researchers have also conducted experiments to find out which factors (e.g., the basis, or axis, of the representation, and the training technique used) influence the interpretability of those hidden units. They found that interpretability is axis-aligned: if the learned representation is rotated by a random change of basis, the rotated units are no longer interpretable, even though the network's discriminative power is unchanged. Further, training techniques such as dropout and batch normalization also have an impact on interpretability.
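As a rough illustration of what "rotating the representation" means, here is a small sketch (again just NumPy, with assumed layer sizes, not the authors' code): a random orthogonal matrix mixes the channels of a layer's feature maps, preserving all the information while breaking the one-unit-one-concept alignment.

import numpy as np

# Rotate the feature space of a layer with a random orthogonal matrix Q.
# The rotated representation carries exactly the same information (Q is
# invertible), yet each rotated "unit" is a mixture of many original
# channels, which is why its interpretability score drops.

rng = np.random.default_rng(0)
num_units = 512                                      # e.g. conv5 channels (assumed)
features = rng.standard_normal((num_units, 7, 7))    # one image's feature maps

# Random orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((num_units, num_units)))

# Mix channels: each new unit is a rotated combination of the old ones.
rotated = np.tensordot(Q, features, axes=([1], [0]))     # same shape as features

# No information is lost; rotating back recovers the original features.
recovered = np.tensordot(Q.T, rotated, axes=([1], [0]))
print(np.allclose(recovered, features))              # True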

You can find more details on this research in the paper [2].

[1] https://kaushalya.github.io/DL-models-resistant-to-adversarial-attacks/
[2] D. Bau*, B. Zhou*, A. Khosla, A. Oliva, and A. Torralba. "Network Dissection: Quantifying Interpretability of Deep Visual Representations." Computer Vision and Pattern Recognition (CVPR), 2017. Oral.
