Detecting placeholder images in e-Commerce product listings

We generally trust sellers to upload correct product images for the products they list on our e-commerce platform. But because our checks require that every product be accompanied by an image, third-party (3P) sellers upload a stock or placeholder image when no real image for the product is available.

The problem is that although we do not want to show products without images on our website, the lack of image validation means we end up showing placeholder images against products, which sometimes leads to a decrease in sales. In this post we discuss a few approaches we took to resolve this. We divide our approach into two parts:

  1. Detect placeholder images that have already been seen in our product catalog.
  2. Detect placeholder images which have not been seen previously.

The set of possible placeholder images can be huge; a single Google search can fetch 100K of them. Although 3P sellers try to cheat the system by uploading placeholders, they prefer either to re-use them across multiple products or to pick from a very small subset of similar-looking placeholder images. Since it is very hard to define what a placeholder is, we tag an image as a placeholder if it is “considerably” different from other valid images of the same category.


An Example Placeholder

For the first approach, the problem boils down to searching for similar images: given a set of N images seen and tagged as placeholders, classify a new image as a placeholder if it is “similar” to at least one of them. We decided to use image hashes to compare two images. The simplest image hashes we are aware of are Average Hashing (aHash), Difference Hashing (dHash) and Perceptual Hashing (pHash).

I will not go into the details of each algorithm, but just give an overview.

  • All of the above algorithms generate a 64-bit string as the hash. To compare two images, convert the hashes to 64-bit values and compute the Hamming distance between them, i.e. the number of mismatched bit positions.
  • All of them first scale the image down to 8x8 (aHash and pHash) or 9x8 (dHash), then convert it to grayscale.
  • aHash computes the mean of the 64 pixel values and assigns 1 to every pixel whose value is greater than the mean and 0 to the rest.
  • dHash computes the difference between consecutive pixels, row by row: assign 1 if P[x][y] < P[x][y+1] else 0, where x is the row number and y is the column number of the 9x8 image matrix.
  • pHash computes the discrete cosine transform (DCT) of the image, then assigns 1 wherever a coefficient is greater than the mean DCT value and 0 where it is less.
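As a concrete illustration, here is a minimal numpy-only sketch of pHash following the description above. It assumes the image has already been scaled down to an 8x8 grayscale array; a real implementation would also handle resizing (and typically runs the DCT on a larger input, e.g. 32x32).

```python
import numpy as np

def dct2(block):
    # Naive 2-D DCT-II built from cosine basis vectors (numpy only).
    n = block.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ block @ basis.T

def phash_bits(gray_8x8):
    # 1 wherever a DCT coefficient exceeds the mean coefficient -> 64 bits.
    coeffs = dct2(np.asarray(gray_8x8, dtype=float))
    return (coeffs > coeffs.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    # Number of mismatched bit positions between two hashes.
    return int(np.count_nonzero(np.asarray(h1) != np.asarray(h2)))
```

Two images are then considered similar when the Hamming distance between their 64-bit hashes falls below a chosen threshold.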

We went with dHash and pHash due to their superior performance on small changes in the images, and finally found pHash to be slightly better than dHash. The matching procedure was:

  • Pre-compute the pHash for all of the N tagged placeholder images.
  • Compute the pHash for each new image as it comes in.
  • Compute the Hamming distance between the new image and each of the N placeholder images.
  • If the minimum distance across all N placeholders is less than some threshold (15 in our case), tag the new image as a placeholder.
  • Compute precision and recall over many such images; we obtained around 96% for both.
  • The test set was generated using the Keras ImageDataGenerator class by adding random variations (such as shifts in height and width, zoom in/out, rescaling, etc.).

The hashing-based similarity approach is wrapped in a small Python class.

The ‘fit’ method creates the hashes for all the tagged placeholder images; the ‘predict’ method predicts label 1 (for a placeholder) or 0 (for a non-placeholder).
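A minimal sketch of such a class (the class name is hypothetical, and dHash is used here for brevity; swapping in pHash only changes the `_hash` helper). Images are assumed to be pre-resized 8x9 grayscale arrays:

```python
import numpy as np

class HashingSimilarity:
    """Sketch of the fit/predict class described above.
    Assumes images are already resized to 8x9 grayscale arrays."""

    def __init__(self, threshold=15):
        self.threshold = threshold   # max Hamming distance to call a match
        self.hashes = None

    @staticmethod
    def _hash(gray):
        # dHash: 1 where a pixel is smaller than its right neighbour -> 64 bits
        g = np.asarray(gray, dtype=float)
        return (g[:, :-1] < g[:, 1:]).astype(np.uint8).ravel()

    def fit(self, placeholders):
        # Pre-compute hashes for all N tagged placeholder images.
        self.hashes = np.stack([self._hash(im) for im in placeholders])

    def predict(self, image):
        # 1 (placeholder) if the minimum Hamming distance to any tagged
        # placeholder hash is below the threshold, else 0.
        h = self._hash(image)
        dists = np.count_nonzero(self.hashes != h, axis=1)
        return int(dists.min() < self.threshold)
```

For example, fitting on one tagged placeholder and predicting on the same image returns 1, while an image whose hash differs in all 64 positions returns 0.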

We have not yet considered scaling this approach to large values of N, but that can be handled using either Locality-Sensitive Hashing (LSH) or a (distributed) KD-Tree. We will discuss large-scale image retrieval in the next post.

The problem with the hash-based approach is that while it works fine for existing placeholder images or very similar-looking ones, placeholders can come in many variations that hash-based matching will not be able to capture.

Another problem is that the number of tagged placeholder images is very small (around 60 or so).

Thus, instead of relying on finding similar placeholder images, our new approach is to “ask” a few “nominated” valid product images (around 10) from the same category as the new image whether the new image looks similar to them. If at least 5 of the votes say that the new image is very dissimilar to what valid images from the same category look like, we tag the new image as a placeholder.


Taking votes from images belonging to the same category.

The idea is borrowed from the one-shot learning framework.

  • Create a dataset of image pairs.
  • The original images were resized to 64x64 and pixel values were scaled by multiplying by 1.0/255.
  • Assuming there are M different categories and each category has Q images, we can create 0.5*M*Q*(Q-1) positive image pairs.
  • A label of 1 implies that the pair belongs to the same category.
  • Similarly, we can randomly sample pairs of images from two different categories. There can be at most 0.5*M*(M-1)*Q^2 different negative pairs from M categories.
  • A label of 0 implies that the pair does not belong to the same category.
  • From both the positive and negative pairs we sample an equal number of instances (around 50K each).
  • We observed that augmenting the negative pairs with some tagged placeholder images improves the recall on placeholder images.
  • We could not train a binary CNN model on single input images (label 1 for placeholder, 0 for non-placeholder) because the number of tagged placeholder images was too small.
  • We experimented with augmenting the placeholder images using the Keras ImageDataGenerator, but the results did not show much improvement.
  • Using pairs of images as input lets us generate far more data rather than relying on augmentation techniques.
  • Train a Siamese CNN using these image pairs.
  • We followed two approaches: one where we trained the CNN–MaxPooling layers from scratch and then added fully connected layers, and
  • one where we used transfer learning from VGG16’s pre-trained weights for the CNN layers and trained only the last fully connected layer.
  • Transfer learning with VGG16 gave better and more stable results.
  • The two inputs were combined by taking the element-wise absolute difference and passing it through a sigmoid activation layer.
  • The batch size was kept at 256.
  • Around 100K image pairs were used for training and 20K for validation.
  • Placeholder images used in the testing data were explicitly downloaded from Google Images, so they are unseen placeholders for the model.
  • The testing data had around 20K pairs, although the number of placeholder images was small (around 100).
  • The test-set precision and recall came out to be 96% (best model). The recall for label 0 (i.e. placeholders) was 91%.
  • Using the voting method, comparing against 11 randomly selected valid product images, the recall for placeholders came out to be around 95%.
  • If at least 5 out of the 11 give a label of 0, we tag the image as a placeholder.
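The pair-construction step above can be sketched as follows (function and argument names are my own; `images_by_category` maps a category name to its list of images):

```python
import random

def sample_pairs(images_by_category, n_pos, n_neg, seed=0):
    # Build balanced positive (same category, label 1) and negative
    # (different categories, label 0) image pairs, as described above.
    rng = random.Random(seed)
    cats = list(images_by_category)
    pairs, labels = [], []
    for _ in range(n_pos):
        cat = rng.choice(cats)
        a, b = rng.sample(images_by_category[cat], 2)  # two distinct images
        pairs.append((a, b))
        labels.append(1)
    for _ in range(n_neg):
        c1, c2 = rng.sample(cats, 2)                   # two distinct categories
        pairs.append((rng.choice(images_by_category[c1]),
                      rng.choice(images_by_category[c2])))
        labels.append(0)
    return pairs, labels
```

Augmenting the negative side with tagged placeholders (as noted above) would just mean adding a placeholder category to `images_by_category` and sampling extra negative pairs against it.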

The approach is pretty generic in that it can be used to detect whether an image in a product category is actually an image of that category; detecting placeholders becomes a special use case.


Training a pair of T-shirt images with label=1


Training a T-shirt image and a Laptop image with label=0


Training a T-shirt image with a placeholder image with label=0

Let’s create the ‘SiameseModel’ class.

I have omitted certain details from the code to keep it short and just enough to explain the approach described above.
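While the full Keras model is omitted, the combining step (element-wise absolute difference of the twin embeddings, passed through a sigmoid output) can be sketched in plain numpy. The weights `w` and `b` are hypothetical stand-ins for the trained dense layer:

```python
import numpy as np

def siamese_head(emb_a, emb_b, w, b):
    # Element-wise absolute difference of the two branch embeddings,
    # then a dense layer with sigmoid activation -> score in (0, 1).
    # During training the layer learns weights so that similar pairs
    # score near 1 and dissimilar pairs near 0.
    z = np.abs(emb_a - emb_b) @ w + b
    return 1.0 / (1.0 + np.exp(-z))
```

With identical embeddings the absolute difference is all zeros, so the score is exactly sigmoid(b), which is 0.5 when the bias is zero.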

I am using Python generators to train and test the model in mini-batches. The code for the generator is defined in another file whose details I am omitting here. The essential idea is that there are around 100K training image pairs, and each pair contains two images of size 64x64x3 (3 for the number of channels in RGB images), so loading the entire dataset in memory would take around 18 GB.

Instead, we can load batches of size 256 (391 batches to cover the 100K image pairs) and train using the ‘fit_generator’ method of Keras.

The Python data generator looks something like this (the code is incomplete):
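A minimal sketch of such a generator (names and the `load_fn` callback are my own; a real version would decode and resize images from disk inside `load_fn`):

```python
import numpy as np

def pair_generator(pair_paths, labels, load_fn, batch_size=256, seed=0):
    # Endlessly yield shuffled mini-batches of image pairs so that only
    # `batch_size` pairs are ever held in memory at once.
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    n = len(labels)
    while True:
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            sel = order[start:start + batch_size]
            left = np.stack([load_fn(pair_paths[i][0]) for i in sel])
            right = np.stack([load_fn(pair_paths[i][1]) for i in sel])
            yield [left, right], labels[sel]
```

The two-element input list matches the two-input signature a Siamese Keras model expects from ‘fit_generator’.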

In the earlier ‘SiameseModel’ class, we define a method ‘score_ensemble’, which accepts a test image array (64x64x3) and a set of 11 voting image arrays. The method compares the test image with each of the 11 voting images and, if at least a ‘frac’ fraction of the predictions say the label should be 0, classifies the test image as a placeholder.
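Standalone, the voting logic looks roughly like this (`predict_pair` is a stand-in for the Siamese model’s pairwise prediction, returning 1 for “same category” and 0 for “dissimilar”):

```python
def score_ensemble(test_image, voting_images, predict_pair, frac=0.5):
    # Tag as placeholder (return 1) if at least `frac` of the nominated
    # valid images vote that the test image is dissimilar (label 0).
    votes = [predict_pair(test_image, v) for v in voting_images]
    dissimilar = sum(1 for v in votes if v == 0)
    return 1 if dissimilar >= frac * len(voting_images) else 0
```

With 11 voters, a `frac` just under 5/11 reproduces the “at least 5 out of 11” rule from the post.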

Originally published on February 11, 2019.

Written by

Machine Learning Engineer, Algorithms and Programming
