Pixel Deflection - a simple transformation-based defense

Change local pixel arrangement and then denoise using wavelet transform

Paper: arXiv | Code: GitHub | Jupyter Notebook: Source

What

  1. Select a random pixel and replace it with another pixel drawn at random from its local neighborhood; we call this pixel deflection (PD).
  2. Use a class-activation-type map (R-CAM) to select the pixel to deflect: the less important a pixel is for classification, the higher the chance that it gets deflected.
  3. Apply soft shrinkage in the wavelet domain to remove the added noise; we call this wavelet denoising (WD).


Why

  1. PD changes local statistics without affecting the global statistics. Adversarial examples rely on specific activations; PD perturbs those, but not enough to change the overall image category.
  2. Most attacks are agnostic to the presence of semantic objects in the image; by picking more pixels outside the regions of interest, we increase the likelihood of destroying the adversarial perturbation while preserving most of the content.
  3. Classifiers are trained on images without such (PD) noise. We can soften the impact of this noise by denoising, and we found that BayesShrink on the DWT works best (a minimal sketch of both steps follows this list).
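
Below is a minimal sketch of the two transforms in Python using NumPy and scikit-image. Pixel selection here is uniform rather than R-CAM-weighted, and the window size and deflection count are illustrative, so this is an approximation of the idea rather than the paper's exact implementation (that is in the linked GitHub code).

```python
# Minimal sketch: uniform pixel deflection (no R-CAM weighting) followed by
# BayesShrink soft-thresholding via scikit-image. Window size and deflection
# count are illustrative, not the paper's tuned values.
import numpy as np
from skimage.restoration import denoise_wavelet

def pixel_deflection(img, deflections=2000, window=10, seed=None):
    """Replace random pixels with random neighbors from a local window."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(deflections):
        r, c = int(rng.integers(h)), int(rng.integers(w))
        # Donor pixel drawn from the local neighborhood, clipped to bounds.
        dr = int(np.clip(r + rng.integers(-window, window + 1), 0, h - 1))
        dc = int(np.clip(c + rng.integers(-window, window + 1), 0, w - 1))
        out[r, c] = out[dr, dc]
    return out

def defend(img):
    """PD then WD: deflect pixels, then soft-shrink in the wavelet domain."""
    deflected = pixel_deflection(img)
    # channel_axis=-1 assumes an H x W x 3 float image in [0, 1]
    # (older scikit-image versions take multichannel=True instead).
    return denoise_wavelet(deflected, mode='soft', method='BayesShrink',
                           channel_axis=-1, rescale_sigma=True)
```
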
Read More

Visual Question Answering Demo in Python Notebook

This is an online demo, with explanation and tutorial, of Visual Question Answering. This is not a naive or hello-world model: it gets close to state-of-the-art results without using any attention models, memory networks (other than an LSTM), or fine-tuning, which are essential ingredients in the current best results.
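
For readers who want the shape of such a model before opening the notebook, here is a hedged sketch of the general no-attention recipe (pre-extracted CNN image features plus a single LSTM question encoder, merged and fed to a classifier) in Keras. The layer sizes, vocabulary size, and answer count below are illustrative, not the notebook's exact values.

```python
# Hedged sketch of a no-attention VQA baseline: CNN image features + LSTM
# question encoder, merged and classified. All sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

img_feat = layers.Input(shape=(4096,))             # e.g. pre-extracted VGG features
question = layers.Input(shape=(None,), dtype='int32')
q = layers.Embedding(input_dim=10000, output_dim=300)(question)
q = layers.LSTM(512)(q)                            # single LSTM, no attention
x = layers.concatenate([layers.Dense(512, activation='tanh')(img_feat), q])
x = layers.Dense(1024, activation='tanh')(x)
answer = layers.Dense(1000, activation='softmax')(x)  # top-1000 answer classes
model = Model([img_feat, question], answer)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```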


I have tried to explain the different parts and the reasoning behind their choices. This is meant to be an interactive tutorial, so feel free to change the model parameters and experiment. If you have a recent graphics card, execution time should be under a minute.


All the files required to run this IPython notebook can be obtained from:

* https://github.com/iamaaditya/VQA_Demo
* Jupyter Notebook on GitHub

Read More

Pseudo-loss function in distributed AdaBoost

This is a sketch of the proof of correctness of L_{Hedge(\beta)}, the pseudo-loss function, in the case of a distributed boosting algorithm. It is known that boosting weak learners, each with accuracy above 0.5, is sufficient for iterative pooling to attain the minimum error. A similar sketch is presented for the distributed case, where the score from each stage is not shared across all the computing nodes. While this guarantees correctness, it does not guarantee convergence, which is still a highly sought-after result in distributed algorithms.
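
As a toy illustration of the Hedge(β) update that this pseudo-loss builds on (multiplicative down-weighting by β raised to the incurred loss), consider the sketch below; the variable names and the three-learner setup are hypothetical, not the notation of the proof.

```python
# Toy Hedge(beta) update: options that incur loss get multiplicatively
# down-weighted, then weights are renormalized to a distribution.
import numpy as np

def hedge_update(weights, losses, beta=0.5):
    """One Hedge(beta) round over losses in [0, 1]."""
    weights = weights * beta ** losses
    return weights / weights.sum()

# Three weak learners; the first keeps making mistakes.
w = np.ones(3) / 3
for losses in ([1, 0, 0], [1, 0, 1], [1, 1, 0]):
    w = hedge_update(w, np.array(losses))
print(w)  # mass shifts away from the consistently wrong learner
```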

Read More

Dokuwiki for Personal Notebook and Note-taking

As a researcher, you soon start wondering whether life would be much better if you had centralized all your notes, possibly digitized them. Recently, when I had to make the tough choice of leaving behind years of notes as I was about to move countries (due to limited air-travel baggage), I wished I had them on a computer. Since I will be making more notes in the future, at least a lesson is learnt.

Read More

Solutions to Hamilton-Jacobi-Bellman under uncertainty

After doing some reading on decision-making under uncertainty, I get the feeling that I will be looking more into this. More so because it seems there is a lot more to this field, with many unknowns yet (which is still partly due to my lack of profound knowledge of the field). I feel this field is yet to mature.

Read More

Pascal’s Triangle in Standard ML

It has been a while since I posted something (grad school applications!). For the past few weeks I have been learning Standard ML (SML), my first foray into a functional programming language. I must say, I was skeptical at first due to the ‘no-state’ concept, but it is turning out to be a great experience. Recursion can only be appreciated when you have to write programs without loops. This makes me reconsider learning Scala and Haskell.
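
The post's code is in SML, but the loop-free idea translates directly; here is the same recursion sketched in Python for readers unfamiliar with ML syntax.

```python
# Loop-free Pascal's triangle: each row is built recursively from the
# previous one, with no mutable state or explicit loop over rows.
def pascal_row(n):
    """Return row n of Pascal's triangle (row 0 is [1]), recursively."""
    if n == 0:
        return [1]
    prev = pascal_row(n - 1)
    # Each inner entry is the sum of the two entries above it.
    return [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]

print(pascal_row(4))  # [1, 4, 6, 4, 1]
```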

Read More

Monte Carlo Simulation in R

While the last post dealt with Bootstrap Sampling, no discussion of sampling can be complete without discussing Monte Carlo simulation. Readers, please note: I will *not* discuss MCMC (Markov Chain Monte Carlo) here (perhaps in the future). MCMC primarily deals with the equilibrium distribution of a given Markov chain.
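
The post itself works in R; as a language-agnostic taste of plain Monte Carlo (no Markov chains involved), here is the classic pi-estimation example sketched in Python.

```python
# Estimate pi by sampling uniform points in the unit square and counting
# the fraction that lands inside the quarter circle of radius 1.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x, y = rng.random(n), rng.random(n)
inside = (x**2 + y**2 <= 1.0).mean()   # fraction inside the quarter circle
print(4 * inside)                      # ~3.14; error shrinks like 1/sqrt(n)
```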

Read More

Bootstrap sampling in R

Bootstrapping is a very useful sampling method. While its robustness is not comparable to that of MCMC methods such as Metropolis-Hastings or Wang-Landau, it is far simpler: bootstrapping draws from the observed sample (the empirical distribution) with replacement.
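
Again the post uses R, but the core idea fits in a few lines of Python; the sketch below bootstraps a standard error for the sample mean, using made-up data.

```python
# Bootstrap: resample the observed data with replacement many times and
# look at the spread of the statistic across resamples.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=200)   # any observed sample works
boot_means = [rng.choice(data, size=data.size, replace=True).mean()
              for _ in range(5000)]
print(np.mean(boot_means), np.std(boot_means))  # estimate and its std. error
```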

Read More

Why Simulated Annealing works

Of all optimization methods, Simulated Annealing is one of the most fascinating. If you need a quick refresher on Simulated Annealing, see this slide. Is Simulated Annealing better than other techniques at finding the global optimum? Perhaps. I will discuss why I think it is one of the best optimization techniques.
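
A bare-bones sketch of the mechanism follows (the test function, step size, and cooling schedule are arbitrary choices of mine): worse moves are accepted with probability exp(-delta/T), which is what lets the search escape local optima while the temperature is still high.

```python
# Bare-bones simulated annealing on a 1-D multimodal function.
import math, random

def anneal(f, x0, step=0.5, t0=1.0, cooling=0.999, iters=10000):
    x, fx = x0, f(x0)
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        delta = f(cand) - fx
        # Always accept improvements; accept worse moves with prob e^(-d/T).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = cand, f(cand)
        t *= cooling   # slow cooling: late iterations become nearly greedy
    return x, fx

# Multimodal test function; annealing can hop between basins early on.
print(anneal(lambda x: x**2 + 3 * math.sin(5 * x), x0=3.0))
```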

Read More

Value of Interdisciplinary Research

As knowledge gets more and more advanced, we find ourselves segmenting the branches of science into ever more specialized areas. This has led to a tunnel-visioning of higher education. While this is not a bad thing in itself, the consequence is that when an intricate problem arises whose solution may be interdisciplinary, it remains unsolved or takes too long to solve, primarily because of specialists' gaps in knowledge of other areas. I am not trying to discount the profound knowledge of researchers and scientists, but their profundity has great depth while lacking the necessary breadth.

Read More

When in Pain: Read

Human life, with all its color and variety, has one thing in common: feelings of joy and sadness. No amount of wealth or luck can save you from either. Different people have different means of coping with the tumultuous rides of emotion. While everyone has a good handle on the joyous days, the gloomy days elapse in a lot of pain.

Read More

Paradigm towards shared Branch Banking

A long time ago, when I was in my native town, I had to do some banking operations on my account. To my surprise, I found that the place didn't have a branch of the bank where I had my account.

Read More