This is the homepage and blog of Dhruv Thakur, a Data Scientist in the making. For more about me, see here.

Summary Notes: Bayes' Theorem

I just started Udacity's Intro to Self Driving Cars Nanodegree and the very first thing the instructors teach in it is Bayes' theorem. I read about it in school, but now's the time for me to really get into it, hence this summary note.

Bayes' theorem gives a mathematical way to update predictions about the probability of an event, moving from a prior belief to an increasingly accurate estimate as new evidence comes in.

The intuition behind application of Bayes' theorem in probabilistic inference can be put as follows:

Given an initial prediction, if we gather additional related data, data that the initial prediction depends upon, we can improve that prediction.
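That update can be sketched in a few lines of Python. The robot/sensor scenario below is a hypothetical example of mine, not one from the course:

```python
def bayes_update(prior, likelihood):
    """Update a discrete belief distribution with new evidence.

    prior: dict mapping hypothesis -> P(H)
    likelihood: dict mapping hypothesis -> P(data | H)
    Returns the posterior P(H | data).
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())  # P(data), the normalizer
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical example: a robot is equally unsure whether it is in a
# red or a green cell, then its (noisy) sensor reports "red".
prior = {"red": 0.5, "green": 0.5}
likelihood = {"red": 0.8, "green": 0.2}  # P(sensor says red | cell colour)
posterior = bayes_update(prior, likelihood)
print(posterior)  # belief shifts toward "red"
```

Gathering another measurement and feeding the posterior back in as the new prior is exactly the "more and more probable" loop described above.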

Read more…

Visualizing inputs that maximally activate feature maps of a convnet

Being able to visualize input stimuli that excite individual feature maps in a convnet is a great way to learn about its internal workings, and can also come in handy while debugging networks. Matthew Zeiler and Rob Fergus demonstrated in 2013 that the feature maps are activated by progressively complex features as we move deeper into the network. They visualized these input features by mapping feature map activities back to the input pixel space using a deconvnet. Another way to visualize these features is by performing gradient descent in the input space, which I first read about in this post by Francois Chollet, and then in A Neural Algorithm of Artistic Style by Gatys et al.

I'll be visualizing inputs that maximise activations of various individual feature maps in a pre-trained ResNet34 offered by PyTorch's model_zoo. The specific technique used is inspired by this blog post by Fabio M. Graetz, in which he eloquently explains the reasoning behind using methods like upscaling and blurring to get good results. My motive behind this exercise is to extend that approach to ResNets and to use it for debugging.
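The core loop can be sketched as gradient ascent on the input image. The snippet below uses a tiny randomly initialised convnet as a stand-in for the truncated pre-trained ResNet34; the layer sizes, step count, and learning rate are illustrative assumptions, not the code from the post:

```python
import torch
import torch.nn as nn

# A small stand-in convnet (random weights) so the sketch is self-contained;
# in the post this would be a pre-trained ResNet34 truncated at a chosen layer.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
model.eval()

feature_map_idx = 4  # which feature map to maximise (arbitrary choice here)
img = torch.randn(1, 3, 56, 56, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([img], lr=0.1)

for _ in range(20):
    optimizer.zero_grad()
    activations = model(img)
    # Maximise the mean activation of one feature map by minimising its negative.
    loss = -activations[0, feature_map_idx].mean()
    loss.backward()
    optimizer.step()
```

Upscaling and blurring between rounds of this loop, as Graetz describes, is what turns the raw optimised noise into recognisable structures.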

Read more…

Generating artistic images using Neural Style Transfer

One of the best (and most fun) ways to learn about the inner workings of convnets is through the application of Neural Style Transfer. Using this technique we can generate artistic versions of ordinary images in the style of a painting. NST was devised by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge and is described in their paper A Neural Algorithm of Artistic Style.

My primary motive behind this exercise is to understand how Gatys et al. used intermediate feature map activations in a convnet to generate artistic images of high perceptual quality. In order to do so, Gatys et al. define two aspects of an image: its content and its style. The content of an image refers to the objects in that image and their arrangement, whereas style refers to its general appearance in terms of colour and texture. I'll be using the following two images in this exercise. The first is an image of a farm which I'll refer to as the content image, and the second is Café Terrace at Night by Vincent van Gogh which I'll refer to as the style image.
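The content and style representations from the paper can be sketched in NumPy. The shapes and the Gram-matrix normalisation below follow my reading of Gatys et al. and are illustrative, not the exact code used in the post:

```python
import numpy as np

def gram_matrix(features):
    """Style representation from Gatys et al.: correlations between feature
    maps. `features` has shape (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T  # (channels, channels)

def style_loss(gen_features, style_features):
    """Squared difference between Gram matrices, for one layer."""
    c, h, w = gen_features.shape
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return ((g_gen - g_style) ** 2).sum() / (4 * c**2 * (h * w) ** 2)

def content_loss(gen_features, content_features):
    """Squared difference between raw activations, for one layer."""
    return 0.5 * ((gen_features - content_features) ** 2).sum()
```

Minimising a weighted sum of these two losses with respect to the pixels of a generated image is what produces the stylised result.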

Read more…

Understanding Object Detection Part 4: More Anchors!

This post is fourth in a series on object detection. The other posts can be found here, here, and here.

The last post covered the use of anchor boxes for detecting multiple objects in an image. I ended it with a model that was doing fine at detecting the presence of various objects, but whose predicted bounding boxes could not properly localize objects with non-square shapes. This post will detail techniques for further improving that baseline model.
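As a rough illustration of the idea, the snippet below generates anchor boxes at several scales and aspect ratios on a grid. The function name, coordinate convention, and parameter values are my own and not the post's code:

```python
import numpy as np

def make_anchors(grid_size, scales, aspect_ratios):
    """Generate anchor boxes (cx, cy, w, h) on a grid_size x grid_size grid,
    in normalised [0, 1] image coordinates."""
    anchors = []
    cell = 1.0 / grid_size
    for row in range(grid_size):
        for col in range(grid_size):
            cx, cy = (col + 0.5) * cell, (row + 0.5) * cell
            for s in scales:
                for ar in aspect_ratios:
                    # ar > 1 gives a wide box, ar < 1 a tall box,
                    # while the box area stays fixed by the scale s.
                    w = cell * s * np.sqrt(ar)
                    h = cell * s / np.sqrt(ar)
                    anchors.append((cx, cy, w, h))
    return np.array(anchors)

anchors = make_anchors(grid_size=4, scales=[1.0, 2.0],
                       aspect_ratios=[0.5, 1.0, 2.0])
print(anchors.shape)  # (4*4*2*3, 4) = (96, 4)
```

The non-unit aspect ratios are what give the model a head start on wide or tall objects that a square-only anchor set handles poorly.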

Read more…

Blogging Philosophy

I started blogging fairly recently (in Sep 2018). I find it to be a superb technique for strengthening my understanding of a topic and clearing my thoughts. For a long time, I subscribed to the idea that one should blog about a topic only after gaining mastery over it through years of practice. But following the advice of many smart people whom I hold in high regard, I decided to start blogging this year.

Read more…

Understanding Object Detection Part 2: Single Object Detection

This post is second in a series on object detection. The other posts can be found here, here, and here.

This is a direct continuation of the last post, where I explored the basics of object detection. In particular, I learnt that a convnet can be used for localization by choosing appropriate output activations and a loss function. I built two separate models for classification and localization respectively and used them on the Pascal VOC dataset.

This post will detail stage 3 of single object detection, i.e., classifying and localizing the largest object in an image with a single network.
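A single network for both tasks can be sketched as one linear head that emits 4 box coordinates plus class scores. The feature size, the sigmoid squashing of the box outputs, and the loss weighting below are illustrative assumptions, not the exact setup from the post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 20  # Pascal VOC has 20 object classes

# One head on top of the backbone's pooled features (512 is illustrative):
# 4 outputs for the bounding box, NUM_CLASSES outputs for the class.
head = nn.Linear(512, 4 + NUM_CLASSES)

def detection_loss(output, bbox_target, class_target, bbox_weight=1.0):
    """Combined loss: L1 on sigmoid-squashed box coordinates,
    plus cross-entropy on the class scores."""
    bbox_pred = torch.sigmoid(output[:, :4])  # boxes in [0, 1] coordinates
    class_pred = output[:, 4:]
    loc = F.l1_loss(bbox_pred, bbox_target)
    clf = F.cross_entropy(class_pred, class_target)
    return loc * bbox_weight + clf

features = torch.randn(8, 512)  # pooled backbone features for a batch of 8
output = head(features)
loss = detection_loss(output,
                      bbox_target=torch.rand(8, 4),
                      class_target=torch.randint(0, NUM_CLASSES, (8,)))
```

Weighting the localization term lets one loss dominate training if the two parts learn at different rates.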

Read more…

Understanding Object Detection Part 1: The Basics

This post is first in a series on object detection. The succeeding posts can be found here, here, and here.

One of the primary takeaways for me after learning the basics of object detection was that the very backbone of the convnet architecture used for classification can also be utilised for localization. Intuitively, it does make sense, as convnets tend to preserve spatial information present in the input images. I saw some of that in action (detailed here and here) while generating localization maps from activations of the last convolutional layer of a Resnet-34, which was my first realization that a convnet really does take into consideration the spatial arrangement of pixels while coming up with a class score.

Without knowing that bit of information, object detection does seem like a hard problem to solve! Accurate detection of multiple kinds of similar-looking objects remains tough even today, but building a basic detector is not overly complex. Or at least, the concepts behind it are fairly straightforward (I get to say that thanks to the hard work of numerous researchers).
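As a rough sketch of that localization-map idea, the snippet below collapses last-conv-layer activations into a heatmap over the input. The channel-averaging and nearest-neighbour upsampling are illustrative simplifications of mine, not the exact code from the earlier posts:

```python
import numpy as np

def localization_map(activations, image_size):
    """Collapse last-conv-layer activations (channels, h, w) into a single
    heatmap and crudely resize it to the input image size."""
    heat = activations.mean(axis=0)      # average over channels
    heat = np.maximum(heat, 0)           # keep positive evidence only
    if heat.max() > 0:
        heat = heat / heat.max()         # normalise to [0, 1]
    scale = image_size // heat.shape[0]
    # Nearest-neighbour upsampling via a Kronecker product with a block of ones.
    return np.kron(heat, np.ones((scale, scale)))

acts = np.random.rand(512, 7, 7)  # e.g. a ResNet-34's final 7x7 feature maps
heatmap = localization_map(acts, image_size=224)
print(heatmap.shape)  # (224, 224)
```

Overlaying such a heatmap on the input image shows which spatial regions drove the class score, which is the observation that makes localization feel tractable.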

Read more…