“Transfer learning is a machine learning technique where a model trained on one task is re-purposed on a second related task. Transfer learning is an optimization that allows rapid progress or improved performance when modeling the second task.” When the available training and test sets are too small to train a model from scratch, transfer learning is a very common technique for avoiding overfitting. However, two main points must be considered: the input shape of the transferred model must match that of the pre-trained model, and the target dataset (e.g. 1,000 samples) is typically much smaller than the dataset the model was pre-trained on (e.g. 1,000,000 samples). Commonly, researchers use the pre-trained models shipped with deep learning frameworks. For instance, the Keras library provides a ResNet model pre-trained on the ImageNet dataset.
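To make the core mechanic concrete before the Keras example below, here is a minimal, framework-free sketch (a hypothetical toy example, not from the original text): a "pre-trained" base acts as a frozen feature extractor, and only a small new head is trained on the target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned on a large source task; they stay frozen.
W_base = rng.normal(size=(8, 4))

def extract_features(X):
    # Frozen base: a fixed linear map followed by a ReLU nonlinearity.
    return np.maximum(X @ W_base, 0.0)

# Small target-task dataset -- the situation where transfer learning helps.
X = rng.normal(size=(32, 8))
true_w = rng.normal(size=(4,))
y = extract_features(X) @ true_w + 0.01 * rng.normal(size=(32,))

# New head: the only trainable parameters.
w_head = np.zeros(4)

feats = extract_features(X)  # computed once; the base never updates
for _ in range(300):
    pred = feats @ w_head
    grad = feats.T @ (pred - y) / len(y)  # gradient of mean squared error
    w_head -= 0.05 * grad                 # update the head only

final_loss = float(np.mean((feats @ w_head - y) ** 2))
```

Because the base is frozen, only a handful of head parameters are fit to the small dataset, which is exactly how freezing layers limits overfitting in the Keras snippet below.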
More Information: https://machinelearningmastery.com/transfer-learning-for-deep-learning/
With the following snippet, we can use Residual Networks (ResNet50) from the Keras library for transfer learning.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

def get_model(num_classes):
    # This assumes K.image_data_format() == 'channels_last'
    input_tensor = Input(shape=(224, 224, 3))
    # Create the base pre-trained model
    base_model = ResNet50(input_tensor=input_tensor, weights='imagenet',
                          include_top=False)
    # Freeze all base layers with 'layer.trainable = False' so only the
    # newly added layers are trained
    for layer in base_model.layers:
        layer.trainable = False
    # Pooling and a new softmax layer are added for the new application
    x = base_model.output
    x = GlobalAveragePooling2D(data_format='channels_last')(x)
    x = Dense(num_classes, activation='softmax')(x)
    # Model is updated
    updatedModel = Model(base_model.input, x)
    return updatedModel
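A usage sketch for the snippet above: compile the new head and fine-tune it on a small target dataset. To keep the sketch self-contained and runnable offline it restates the model builder and uses weights=None instead of weights='imagenet' (an assumption; in practice you would keep the ImageNet weights). The three-class random batch is a placeholder standing in for real data.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

def get_model(num_classes):
    input_tensor = Input(shape=(224, 224, 3))
    # weights=None here only so the sketch runs without downloading the
    # pre-trained ImageNet weights; use weights='imagenet' in practice.
    base_model = ResNet50(input_tensor=input_tensor, weights=None,
                          include_top=False)
    for layer in base_model.layers:
        layer.trainable = False  # freeze the base
    x = GlobalAveragePooling2D()(base_model.output)
    x = Dense(num_classes, activation='softmax')(x)
    return Model(base_model.input, x)

num_classes = 3
model = get_model(num_classes)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Tiny random batch standing in for a real (small) target dataset.
X = np.random.rand(4, 224, 224, 3).astype('float32')
y = np.eye(num_classes)[np.random.randint(0, num_classes, size=4)]
model.fit(X, y, epochs=1, batch_size=2, verbose=0)
```

Only the final Dense layer's weights change during fit; the frozen ResNet layers merely compute features, which is why training converges quickly even on very little data.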