
Torch Tutorial for PlantVillage Challenge

Let's now prepare the dataset and set the stage for training.

First, download the dataset and extract it into a directory. We'll divide the images into two directories, train and val, for the training and validation sets respectively. I used a simple bash script to do this:

    cd /path/to/dataset   # the directory containing the c_0, c_1, ..., c_37 class directories
    mkdir -p train val
    for i in {0..37}; do mkdir val/c_$i; done
    mv c_* train

    cd train
    find . -iname '*.jpg' | shuf | head -n 2100 | xargs -I{} mv {} ../val/{}

This will move 2100 random images (about 10% of the dataset) into the val directory and leave the rest in train. The directory structure should now look like:

├── train
│   ├── c_0
│   │   ├── img_name.JPG
│   │   ├── ...
│   │   └── img_name.JPG
│   ├── c_1
│   ├── ...
│   ├── c_36
│   └── c_37
└── val
    ├── c_0
    ├── c_1
    ├── ...
    ├── c_36
    └── c_37
        ├── img_name.JPG
        ├── ...
        └── img_name.JPG
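If you want to sanity-check the split logic before running it on the real data, here's a dry run on a throwaway directory tree (the demo/ name and the 3-class, 4-image layout below are made up purely for illustration):

    # Build a tiny fake dataset: 3 classes with 4 images each
    mkdir -p demo/train demo/val
    for i in 0 1 2; do
        mkdir -p demo/train/c_$i demo/val/c_$i
        for j in 1 2 3 4; do touch demo/train/c_$i/img_$j.jpg; done
    done

    # Move 3 random images into val, preserving the class subdirectories
    cd demo/train
    find . -iname '*.jpg' | shuf | head -n 3 | xargs -I{} mv {} ../val/{}
    cd ../..

    find demo/train -iname '*.jpg' | wc -l   # 9
    find demo/val -iname '*.jpg' | wc -l     # 3

The `mv {} ../val/{}` trick works because `find` emits paths like `./c_0/img_1.jpg`, so the class subdirectory is preserved on the val side.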

Before feeding images into the neural network, we'll resize them to 224 x 224 and normalize them with the mean and standard deviation of the RGB channels computed from a random subset of ImageNet.
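The normalization itself is just a per-channel shift and scale. Here's a plain-Lua sketch of the arithmetic; the mean/std values below are the commonly cited ImageNet statistics, not necessarily the exact ones this tutorial uses (those live in datasets/transforms.lua):

    -- Per-channel normalization: out = (in - mean) / std
    -- These are the commonly cited ImageNet RGB statistics; the actual
    -- values used by the tutorial are in datasets/transforms.lua.
    local mean = {0.485, 0.456, 0.406}
    local std  = {0.229, 0.224, 0.225}

    local function normalize(pixel)   -- pixel = {r, g, b}, each in [0, 1]
        local out = {}
        for c = 1, 3 do
            out[c] = (pixel[c] - mean[c]) / std[c]
        end
        return out
    end

    -- A pixel exactly equal to the channel means normalizes to all zeros
    local z = normalize({0.485, 0.456, 0.406})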

In the world of deep learning, a dataset of 20,000 images is relatively small. We'll therefore augment the data during training with:

  • RandomSizedCrop: A randomly sized crop covering anywhere between 8% and 100% of the image area
  • ColorJitter: Randomly vary the brightness, contrast and saturation of the image
  • Lighting: AlexNet-style PCA-based lighting noise
  • HorizontalFlip: Flip the image horizontally

Code for these transformations is in datasets/transforms.lua; most of it is borrowed from the fb.resnet.torch repo.
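If you're curious how a Compose helper chains these transforms, it's just function composition: each transform is a callable, and Compose applies them in order. A minimal plain-Lua sketch of the idea, using toy numeric transforms in place of the image ones:

    -- Minimal sketch of the Compose pattern: apply each transform in order
    local function Compose(transforms)
        return function(input)
            for _, transform in ipairs(transforms) do
                input = transform(input)
            end
            return input
        end
    end

    -- Toy transforms standing in for RandomSizedCrop, ColorJitter, etc.
    local addOne = function(x) return x + 1 end
    local double = function(x) return x * 2 end

    local pipeline = Compose{addOne, double}
    print(pipeline(3))   -- (3 + 1) * 2 = 8

Note that order matters: Compose{double, addOne} would give 3 * 2 + 1 = 7 instead.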

We will load the images in batches and do all this processing/augmentation on the fly. This is done by writing a class named DataGen. Understanding how iterators work in Lua can be a little tricky; see the Lua documentation on iterators for details. Essentially, the code can be summarized as:


    require 'paths'
    local t = require 'datasets/transforms.lua'

    local DataGen = torch.class 'DataGen'

    function DataGen:__init(path)
        -- path is the directory containing the 'train' and 'val' folders.
        -- Find all the images in the train and val folders.
        self.rootPath = path
        self.trainImgPaths = self.findImages(paths.concat(self.rootPath, 'train'))
        self.valImgPaths = self.findImages(paths.concat(self.rootPath, 'val'))
        self.nbTrainExamples = #self.trainImgPaths
        self.nbValExamples = #self.valImgPaths
    end

    -- Some utility functions
    function DataGen.findImages(dir)
        -- Returns a table with all the image paths found in dir using 'find'
    end

    local function getClass(path)
        -- Gets the class from the name of the parent directory,
        -- e.g. '.../c_3/img.JPG' -> 4 (Lua tables are 1-indexed)
        local className = paths.basename(paths.dirname(path))
        return tonumber(className:sub(3)) + 1
    end

    --- Iterator
    function DataGen:generator(pathsList, batchSize, preprocess)
        -- pathsList is a table with the paths of the images to iterate over
        -- batchSize is the number of images to load per iteration
        -- preprocess is a function applied to each image after it's loaded

        -- Split all the paths into random batches
        local pathIndices = torch.randperm(#pathsList)
        local batches = pathIndices:split(batchSize)
        local i = 1

        return function ()
            if i <= #batches then
                local currentBatch = batches[i]

                local X = torch.Tensor(currentBatch:size(1), 3, 224, 224)
                local Y = torch.Tensor(currentBatch:size(1))

                for j = 1, currentBatch:size(1) do
                    local currentPath = pathsList[currentBatch[j]]
                    X[j] = preprocess(t.loadImage(currentPath))
                    Y[j] = getClass(currentPath)
                end

                i = i + 1
                return X, Y
            end
        end
    end

    function DataGen:trainGenerator(batchSize)
        local trainPreprocess = t.Compose{
            t.RandomSizedCrop(224),
            t.ColorJitter({
                brightness = 0.4,
                contrast = 0.4,
                saturation = 0.4,
            }),
            t.Lighting(0.1, t.pca.eigval, t.pca.eigvec),
            t.HorizontalFlip(0.5),
        }
        return self:generator(self.trainImgPaths, batchSize, trainPreprocess)
    end

    function DataGen:valGenerator(batchSize)
        local valPreprocess = t.Compose{
            t.Scale(256),
            t.CenterCrop(224),
        }
        return self:generator(self.valImgPaths, batchSize, valPreprocess)
    end
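The generator above leans on Lua's generic-for protocol: the for loop repeatedly calls the returned closure until it returns nil. Here's a stripped-down sketch of the same closure-iterator pattern, batching a plain list of items so it runs without Torch:

    -- Stripped-down version of the closure-iterator pattern DataGen uses:
    -- each call returns the next batch of items, or nil when exhausted.
    local function batchIterator(items, batchSize)
        local i = 1
        return function()
            if i <= #items then
                local batch = {}
                for j = i, math.min(i + batchSize - 1, #items) do
                    batch[#batch + 1] = items[j]
                end
                i = i + batchSize
                return batch
            end
            -- falling through returns nil, which ends the generic for loop
        end
    end

    for batch in batchIterator({'a', 'b', 'c', 'd', 'e'}, 2) do
        print(#batch)   -- prints 2, 2, 1
    end

The state (the cursor i) lives in the closure, so each call to batchIterator gets an independent iterator, just as each call to DataGen:generator does.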

Complete code for this class, with some error handling, is at datasets/plantvillage.lua. We can now simply use a DataGen object to write a for loop that iterates over all the images:

    for input, target in dataGen:trainGenerator(batchSize) do
        -- code to train your model
    end

Neat, isn’t it?