This is the homepage and blog of Dhruv Thakur, a Data Scientist in the making. Here's what I'm up to currently. For more about me, see here.
As per GCP's documentation "A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved."
Service accounts are used to authenticate applications themselves rather than their end users, whilst maintaining proper access control over cloud resources. They come in handy when you have multiple applications running on GCP, each requiring a distinct set of privileges to function. The idea is to grant each service account only the minimum set of permissions required to do its job. I'll list the basic steps required to get up and running with a Python app on GKE using a service account.
A combination of small habits can yield remarkable results. Good habits thrive in a system designed to let them flourish without friction. Knowledge of the science and psychology behind habit formation can enable people to function at their peak potential.
Atomic Habits presents a comprehensive analysis of how to build habits and live better. In the words of James Clear, the author: “Atomic Habits offers a proven framework for getting 1 percent better every day”.
I recently learnt about the workings of the A* algorithm through Udacity's Intro to Self-Driving Cars Nanodegree, and wanted to understand it in detail, hence this post.
The Wikipedia article on A* states: "A* is a computer algorithm that is widely used in pathfinding and graph traversal, which is the process of finding a path between multiple points, called "nodes". It enjoys widespread use due to its performance and accuracy."
It was devised by Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) who first published the algorithm in 1968.
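The core of A* fits in a few lines: expand nodes in order of f(n) = g(n) + h(n), where g is the cost from the start and h is an admissible heuristic estimate of the cost to the goal. Below is a minimal, illustrative sketch on a 4-connected grid with Manhattan distance as the heuristic; the grid, function name, and unit step costs are my own assumptions, not taken from the post.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid of 0s (free) and 1s (walls).

    Returns the list of cells from start to goal, or None if no path exists.
    """
    def h(cell):  # Manhattan-distance heuristic: admissible on a grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:                 # reconstruct the path backwards
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue                     # stale heap entry, skip it
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1               # every move costs 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))  # routes around the wall row
```

Because the heuristic never overestimates the remaining cost, the first time the goal is popped from the heap the path found is optimal.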
Bayes' theorem gives a mathematical way to correct predictions about the probability of an event, moving from a prior belief to an increasingly accurate estimate as evidence accumulates.
The intuition behind application of Bayes' theorem in probabilistic inference can be put as follows:
Given an initial prediction, if we gather additional related data, data that the initial prediction depends upon, we can improve that prediction.
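As a small illustration of that idea, here is Bayes' theorem applied to a made-up diagnostic-test example. All the numbers (1% prior, a test that catches 90% of true cases but also fires on 5% of non-cases) are hypothetical, chosen only to show how each new piece of evidence updates the prediction.

```python
def posterior(prior, likelihood, false_alarm):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).

    P(E) is expanded by the law of total probability over H and not-H.
    """
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# One positive test result: the 1% prior rises, but only to about 15%,
# because false alarms on the 99% of non-cases still dominate.
p1 = posterior(prior=0.01, likelihood=0.9, false_alarm=0.05)

# A second independent positive result: yesterday's posterior becomes
# today's prior, and the belief moves much further.
p2 = posterior(prior=p1, likelihood=0.9, false_alarm=0.05)
```

Feeding each posterior back in as the next prior is exactly the "improve the prediction with additional data" loop described above.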
Being able to visualize input stimuli that excite individual feature maps in a convnet is a great way to learn about its internal workings, and can also come in handy while debugging networks. Matthew Zeiler and Rob Fergus demonstrated in 2013 that the feature maps are activated by progressively complex features as we move deeper into the network. They visualized these input features by mapping feature map activities back to the input pixel space using a deconvnet. Another way to visualize these features is by performing gradient ascent in the input space, which I first read about in this post by Francois Chollet, and then in A Neural Algorithm of Artistic Style by Gatys et al.
I'll be visualizing inputs that maximise activations of various individual feature maps in a pre-trained ResNet34 offered by PyTorch's model_zoo. The specific technique used is inspired by this blog post by Fabio M. Graetz, in which he eloquently explains the reasoning behind using methods like upscaling and blurring to get good results. My motive behind this exercise is to extend that approach to ResNets and to use it for debugging.
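The post itself works with a pretrained ResNet34 in PyTorch; as a self-contained stand-in, the toy sketch below performs the same kind of gradient ascent in input space on a single hypothetical linear filter, showing that the input which maximises a unit's activation comes to resemble the feature that unit detects. The filter, step size, and decay penalty are all assumptions for illustration, not Graetz's actual multi-scale procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 3x3 first-layer "filter" (an oriented edge detector).
w = np.array([[-1.0, 0.0, 1.0],
              [-2.0, 0.0, 2.0],
              [-1.0, 0.0, 1.0]])

# Start from random noise and do gradient ascent on the INPUT, maximising
# the filter's activation a(x) = sum(w * x) under an L2 penalty that keeps
# the image bounded (the role played by regularisation/blurring in practice).
x = rng.normal(size=(3, 3))
lr, weight_decay = 0.1, 0.1
for _ in range(500):
    grad = w - weight_decay * x   # gradient of sum(w*x) - 0.5*wd*||x||^2
    x += lr * grad

# For a linear unit the optimal stimulus is a scaled copy of the filter
# itself, so the cosine similarity between input and filter approaches 1.
cos = (w * x).sum() / (np.linalg.norm(w) * np.linalg.norm(x))
```

In a real network the unit is a deep nonlinear function of the input, so the gradient comes from backpropagation rather than a closed form, but the ascent loop is the same.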
One of the best (and fun) ways to learn about the inner workings of convnets is through the application of Neural Style Transfer. By making use of this technique we can generate artistic versions of ordinary images in the style of a painting. NST was devised by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge and is described in their paper A Neural Algorithm of Artistic Style.
My primary motive behind this exercise is to understand how Gatys et al. used intermediate feature map activations in a convnet to generate artistic images of high perceptual quality. In order to do so, Gatys et al. define two aspects of an image: its content and its style. The content of an image refers to the objects in that image and their arrangement, whereas style refers to its general appearance in terms of colour and texture. I'll be using the following two images in this exercise. The first is an image of a farm which I'll refer to as the content image, and the second is Café Terrace at Night by Vincent van Gogh which I'll refer to as the style image.
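In the paper, the style of an image is captured by Gram matrices of a layer's feature map activations. A minimal sketch of that computation follows; the normalisation by spatial size is one common choice, not necessarily the paper's exact one, and the example shapes are made up.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix used as the style representation in Gatys et al.

    features: array of shape (channels, height, width), one layer's
    feature maps. Returns a (channels, channels) matrix of inner products
    between the vectorised maps: it records which features co-occur
    (colour/texture statistics) while discarding WHERE they occur.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)   # normalise by the number of spatial positions

fmap = np.random.default_rng(1).normal(size=(4, 8, 8))
g = gram_matrix(fmap)   # 4x4 symmetric matrix
```

Shuffling the spatial positions of the feature maps leaves the Gram matrix unchanged, which is precisely why it describes style (texture) rather than content (layout).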
This post is the fourth in a series on object detection. The other posts can be found here, here, and here.
The last post covered the use of anchor boxes for detecting multiple objects in an image. I ended that one with a model that was doing fine at detecting the presence of various objects, but the predicted bounding boxes were not able to properly localize objects with non-square shapes. This post will detail techniques for further improving that baseline model.
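Localization quality of this kind is usually measured with intersection over union (IoU). A minimal sketch follows; the (x1, y1, x2, y2) box format is my assumption. It shows how a square box overlapping a wide, non-square object still scores poorly, which is the failure mode described above.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # 0 if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A 10x10 square prediction on a 20x5 wide object: equal areas and a
# substantial overlap, yet the IoU is only ~0.33.
score = iou((0, 0, 10, 10), (0, 0, 20, 5))
```

Thresholding this score (0.5 is a common cutoff) is how a predicted box gets counted as a correct or incorrect localization.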
This post is the third in a series on object detection. The other posts can be found here, here, and here.
This post will detail a technique for classifying and localizing multiple objects in an image using a single deep neural network. Going from single object detection to multiple object detection is a fairly hard problem, so this is going to be a long post.
I started blogging fairly recently (in Sep 2018). I find it to be a superb technique for strengthening my understanding of a certain topic and clearing my thoughts. For a long time, I used to subscribe to the idea that one should blog about a topic only after gaining mastery over it through multiple years of practice. But following the advice of many smart people whom I hold in high regard, I decided to start blogging this year.
This post is the second in a series on object detection. The other posts can be found here, here, and here.
This is a direct continuation of the last post, where I explored the basics of object detection. In particular, I learnt that a convnet can be used for localization by using an appropriate output activation and loss function. I built two separate models for classification and localization respectively, and used them on the Pascal VOC dataset.
This post will detail stage 3 of single object detection, i.e., classifying and localizing the largest object in an image with a single network.