
# Dark Skies

A TensorFlow solution for the Dark Skies Challenge

We are now ready to fine-tune a pre-trained Inception-v3 model on the darkskies-challenge data set. This requires two notable steps:
- Build the exact same model as the pre-trained Inception-v3, except change the number of labels in the final classification layer.
- Restore all weights from the pre-trained Inception-v3 except those of the final classification layer, which is randomly initialized instead.

We can perform these two operations by specifying two flags: --pretrained_model_checkpoint_path and --fine_tune.

The first flag is a string that points to the path of a pre-trained Inception-v3 model. If this flag is specified, the script loads the entire model from the checkpoint before training begins.

The second flag, --fine_tune, is a boolean that indicates whether the final classification layer should be randomly initialized or restored. Set it to false to continue training a pre-trained model from a checkpoint; set it to true to train a new classification layer from scratch.
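
Under the hood, this amounts to building the new graph and then restoring every variable except the final layer. Below is a minimal TF 1.x sketch of that restore step; the `final_scope='logits'` default is an assumption about how the final layer's variables are named, not something taken from the repository.

```python
import tensorflow as tf

def restore_all_but_final_layer(sess, checkpoint_path, final_scope='logits'):
    """Restore pre-trained weights for every variable except those in
    the final classification layer (assumed to live under `final_scope`)."""
    restore_vars = [v for v in tf.global_variables()
                    if final_scope not in v.op.name]
    saver = tf.train.Saver(restore_vars)
    # Randomly initialize everything, including the new final layer ...
    sess.run(tf.global_variables_initializer())
    # ... then overwrite all but the final layer with pre-trained weights.
    saver.restore(sess, checkpoint_path)
```

With --fine_tune=False, the Saver would instead be built over all variables, restoring the classification layer along with everything else.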

Putting this all together, you can retrain a pre-trained Inception-v3 model on the darkskies-challenge data set with the following commands.

## Finetune

```shell
# Build the training binary to run on a GPU. If you do not have a GPU,
# then exclude '--config=cuda'.
cd $DK_ROOT
bazel build -c opt --config=cuda inception/dk_train

# Directory where to save the checkpoint and events files.
FINETUNE_DIR=$DK_ROOT/dk-finetune
# Directory where preprocessed TFRecord files reside.
DK_DATA_DIR=$DK_ROOT/tf_record
# Path to the downloaded Inception-v3 model.
MODEL_PATH=$DK_ROOT/inception-v3-model/model.ckpt-157585

# Run the fine-tuning on the darkskies-challenge data set starting from the
# pre-trained Inception-v3 model.
bazel-bin/inception/dk_train \
  --train_dir="${FINETUNE_DIR}" \
  --data_dir="${DK_DATA_DIR}" \
  --pretrained_model_checkpoint_path="${MODEL_PATH}" \
  --fine_tune=True \
  --initial_learning_rate=0.001 \
  --batch_size=32 \
  --input_queue_memory_factor=8 \
  --num_gpus=1 \
  --num_epochs_per_decay=20 \
  --max_steps=1000000
```

Fine-tuning a model on a separate data set requires significantly lowering the initial learning rate; here we set it to 0.001.
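
The decay-related flags in the command above determine how that rate falls over time. The sketch below shows a staircase exponential-decay schedule of the kind those flags imply; the training-set size is a placeholder, and the 0.16 decay factor is the default from the original Inception training script, assumed rather than confirmed here.

```python
import tensorflow as tf

# Placeholder: replace with the actual number of training examples.
examples_per_epoch = 10000
batch_size = 32                    # --batch_size
num_epochs_per_decay = 20          # --num_epochs_per_decay
decay_steps = int(examples_per_epoch / batch_size * num_epochs_per_decay)

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    0.001,                         # --initial_learning_rate
    global_step,
    decay_steps,
    decay_rate=0.16,               # assumed default decay factor
    staircase=True)                # drop the rate in discrete jumps
```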

While training is in progress, the script continuously prints status lines to the terminal. With --batch_size=32, the reported throughput follows from the step time: 0.696 sec/batch corresponds to 32 / 0.696 ≈ 46 examples/sec.

```
2016-10-13 11:56:22.949164: step 0, loss = 3.11 (1.6 examples/sec; 20.053 sec/batch)
2016-10-13 11:56:48.508299: step 10, loss = 2.55 (46.3 examples/sec; 0.692 sec/batch)
2016-10-13 11:56:55.458712: step 20, loss = 2.49 (42.9 examples/sec; 0.746 sec/batch)
2016-10-13 11:57:02.557317: step 30, loss = 2.43 (45.7 examples/sec; 0.700 sec/batch)
2016-10-13 11:57:09.584892: step 40, loss = 2.39 (45.1 examples/sec; 0.710 sec/batch)
2016-10-13 11:57:16.581422: step 50, loss = 2.20 (45.7 examples/sec; 0.700 sec/batch)
2016-10-13 11:57:23.572435: step 60, loss = 1.51 (46.2 examples/sec; 0.693 sec/batch)
2016-10-13 11:57:30.571183: step 70, loss = 1.97 (45.6 examples/sec; 0.701 sec/batch)
2016-10-13 11:57:37.520570: step 80, loss = 1.80 (46.0 examples/sec; 0.696 sec/batch)
2016-10-13 11:57:44.490582: step 90, loss = 1.62 (45.9 examples/sec; 0.698 sec/batch)
2016-10-13 11:57:51.469971: step 100, loss = 1.51 (46.0 examples/sec; 0.696 sec/batch)
…
```

The loss should decrease gradually. I trained this model for 45,000 steps.