
I attended Deep Learning Conference 2016 at CMR Institute of Technology, Bengaluru, on July 1, 2016.

[Image: Deep Learning Conference 2016 poster]

Anand Chandrasekaran, CTO, Mad Street Den, began the day’s proceedings with his talk on “Deep learning: A convoluted overview with recurrent themes and beliefs”. He gave an overview and history of deep learning, covering LeNet, Geoffrey Hinton’s Deep Belief Networks, Paul Werbos’s backpropagation algorithm (1974), and AlexNet (2012), the deep convolutional neural network named after Alex Krizhevsky. Mad Street Den primarily works on computer vision problems. In one of their implementations, they extract 17,000 features from a dress and use these to provide recommendations to customers. They are one of the early users of NVIDIA GPUs. He also briefly covered other deep learning tools such as Amazon ML, Torch 7, and Google TensorFlow.
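
As a rough illustration of how such feature-based recommendation can work (my own sketch, not Mad Street Den’s actual pipeline; the feature vectors here are random stand-ins), one can rank catalog items by cosine similarity to a query item:

```python
import numpy as np

def recommend(query_features, catalog_features, top_k=5):
    """Return indices of the top_k catalog items most similar to the query."""
    # Normalize so the dot product becomes cosine similarity.
    q = query_features / np.linalg.norm(query_features)
    c = catalog_features / np.linalg.norm(catalog_features, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(scores)[::-1][:top_k]

# Toy data: 100 items with 17,000 extracted features each.
catalog = np.random.rand(100, 17000)
print(recommend(catalog[0], catalog))  # item 0 should rank itself first
```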

The second talk of the day was a sponsored talk on “Recent advancements in Deep Learning techniques using GPUs” by Sundara R Nagalingam from NVIDIA. He spoke about the GPU hardware and platforms for deep learning available from NVIDIA. It was a complete sales pitch. I did ask if they have free and open source Linux device drivers for their hardware, but at the moment they are all proprietary (binary blobs).

After a short tea break, Abhishek Thakur presented on “Applied Deep Learning”. This was one of the two best presentations of the day. Abhishek illustrated binary classification and fine-tuning. He also gave brief overviews of GoogLeNet, DeepNet, and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Deep learning software such as Theano, Lasagne, and Keras was also discussed. A query can be of three types: navigational, transactional, or informational. Word2vec is a two-layer neural net that converts text into vectors. The CIFAR datasets provide a large collection of images to use as input datasets.
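
To make the word2vec idea concrete, here is a minimal sketch using the gensim library (my choice of tool for illustration; the talk did not prescribe one), trained on a toy corpus:

```python
from gensim.models import Word2Vec  # pip install gensim (4.x API)

# A tiny toy corpus; real embeddings need far more text.
sentences = [
    ["deep", "learning", "needs", "lots", "of", "data"],
    ["word2vec", "maps", "words", "to", "vectors"],
    ["similar", "words", "end", "up", "with", "similar", "vectors"],
]

# Train a small skip-gram model: a shallow, two-layer network.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["vectors"].shape)               # (50,): the learned embedding
print(model.wv.most_similar("words", topn=2))  # nearest neighbours in vector space
```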

The next two sessions were twenty minutes each. The first talk was on “Residual Learning and Stochastic Depth in Deep Neural Networks” by Pradyumna Reddy, and the second was on “Expresso - A user-friendly tool for Deep Learning” by Jaley Dholakiya. The Expresso UI needs a lot of work, though. I headed early for lunch.
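
As a quick aside, the core idea behind residual learning is that a block learns a correction F(x) on top of an identity shortcut, computing y = F(x) + x instead of a full mapping; stochastic depth then randomly drops such blocks during training. A minimal residual block sketch in modern Keras (the talk showed no code; shapes and layer choices here are illustrative):

```python
from tensorflow.keras import Input, Model, layers

def residual_block(x, filters=64):
    """Compute y = F(x) + x, where F is two convolutions."""
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    f = layers.Conv2D(filters, 3, padding="same")(f)
    y = layers.Add()([f, x])  # the identity shortcut
    return layers.Activation("relu")(y)

inputs = Input(shape=(32, 32, 64))
model = Model(inputs, residual_block(inputs))
model.summary()
```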

Food was arranged by Meal Diaries and it was delicious!

The post-lunch session began at 1410 IST with Arjun Jain talking on “Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation”. He gave a number of examples of how difficult it is to train such models, especially for the human body.

Vijay Gabale then spoke on “Deep Dive into building Chat-bots using Deep Learning”. This was the second best presentation of the day. He gave a good overview of chat-bots and the challenges involved in implementing them. There are four building blocks for chat-bots: intent extraction, showing relevant results, contextual interaction, and personalization. He also discussed character-aware neural language models.
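
The first of these building blocks, intent extraction, is commonly framed as text classification. Here is a minimal sketch in Keras (my own toy example, not the speaker’s implementation; the vocabulary size, intents, and random training data are all illustrative):

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN, INTENTS = 1000, 10, 4  # e.g., search, buy, help, chitchat

# Embed the tokenized query, summarize it with an LSTM,
# and predict a probability for each intent.
model = models.Sequential([
    layers.Embedding(VOCAB, 32),
    layers.LSTM(64),
    layers.Dense(INTENTS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data; a real bot trains on labelled user queries.
x = np.random.randint(0, VOCAB, size=(256, MAXLEN))
y = np.random.randint(0, INTENTS, size=(256,))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).round(2))  # probability per intent
```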

I then headed to the BoF session on “Getting Started with Deep Learning”, where a panel of experts answered questions from the participants. It was suggested to start with toy data and then move on to big data. Andrew Ng’s Machine Learning course and a Reinforcement Learning course were recommended, and CS231n: Convolutional Neural Networks for Visual Recognition was recommended for computer vision problems. Keras and Theano are useful tools to begin with. It is important not just to do a proof-of-concept, but also to see how things work in production. It is good to start by using and learning the tools, and to delve into the math subsequently; keeping references at hand lets you go back and check them once you have the know-how. Data Nuggets and Kaggle are two good sources of datasets, and the Kaggle Facial Keypoints Detection (KFKD) tutorial was also recommended. Data science does involve both programming and math. We then headed for a short tea break.

Nishant Sinha, from MagicX, then presented his talk on “Slot-filling in Conversations with Deep Learning”. He gave an example of a semantic parser that fills slots, using a simple mobile recharge example (a toy sketch of this idea follows below). He also discussed CNNs, Elman RNNs, and Jordan RNNs. This was followed by the talk on “Challenges and Implications of Deep Learning in Healthcare” by Suthirth Vaidya from Predible Health. He spoke on the difficulties of dealing with medical data, especially biomedical images. Their solution won the Multiple Sclerosis Segmentation Challenge in 2015.
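
Slot-filling, as in Nishant’s talk, is typically cast as sequence labelling: tag every token of the query with a slot. A minimal sketch using Keras’s SimpleRNN, which is an Elman-style RNN (my own toy example; the label set, sizes, and random data are illustrative):

```python
import numpy as np
from tensorflow.keras import layers, models

# Toy slot-filling: tag each token of a query such as
# "recharge 9876543210 with 100 rupees" as O, PHONE, or AMOUNT.
VOCAB, MAXLEN, SLOTS = 500, 8, 3

model = models.Sequential([
    layers.Embedding(VOCAB, 32),
    layers.SimpleRNN(64, return_sequences=True),  # Elman-style recurrence
    layers.TimeDistributed(layers.Dense(SLOTS, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data: one slot label per token.
x = np.random.randint(0, VOCAB, size=(128, MAXLEN))
y = np.random.randint(0, SLOTS, size=(128, MAXLEN))
model.fit(x, y, epochs=1, verbose=0)
```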

The last talk of the day was on “Making Deep Neural Networks smaller and faster” by Suraj Srinivas from IISc, Bengaluru. He discussed how large models can be mapped to smaller ones using model compression. This involves compressing matrices through four techniques: sparsify, shrink, break, and quantize. The objective is to scale a solution down to run on mobile and embedded platforms, and on CPUs. It was an interesting talk, and a number of open research problems exist in this domain.
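
To illustrate the last of these four techniques, here is a minimal quantization sketch (my own toy example, not from the talk): storing a float32 weight matrix as 8-bit integers plus one scale factor cuts its size by 4x, at the cost of a small reconstruction error.

```python
import numpy as np

def quantize_int8(w):
    """Uniformly quantize float weights to int8; return weights and scale.

    Approximate reconstruction: w ~ q * scale.
    """
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

w = np.random.randn(256, 256).astype(np.float32)  # 256 KB of weights
q, scale = quantize_int8(w)                       # now 64 KB (plus one float)
err = np.abs(w - q.astype(np.float32) * scale).max()
print(q.dtype, f"max reconstruction error: {err:.4f}")
```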

Overall, it was a very useful one-day conference.