Transfer Learning Uncertainty Quantification using QIPF (ongoing)
We extend the QIPF framework to quantify transfer learning uncertainty. Here, the QIPF is used to quantify the overall predictive uncertainty of a model that is pre-trained on a source dataset and then fine-tuned on a target dataset, by evaluating the weights of the fine-tuned layers in the RKHS potential field created by the source-layer weights. This yields the discrepancy (as seen by the model) between the source and target datasets.
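A minimal sketch of the core idea is shown below: the fine-tuned layer weights are evaluated in a Gaussian-kernel (Parzen-type) potential field built from the source-layer weights, with low potential indicating weights that lie far from the source distribution. This omits the full QIPF mode decomposition, and the array names, shapes, and kernel bandwidth are illustrative assumptions rather than the exact experimental settings.

```python
import numpy as np

def information_potential(eval_points, field_points, sigma=0.5):
    """Gaussian-kernel information potential of `field_points`
    evaluated at `eval_points` (both 1-D arrays of flattened weights)."""
    # Pairwise squared distances between evaluation and field samples
    d2 = (eval_points[:, None] - field_points[None, :]) ** 2
    # Average Gaussian kernel value -> Parzen-type potential field
    return np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)

# Illustrative stand-ins for flattened layer weights (assumed shapes)
source_weights = np.random.randn(2000)     # pre-trained (source) layer weights
finetuned_weights = np.random.randn(500)   # fine-tuned (target) layer weights

# Potential of each fine-tuned weight inside the source-weight field:
# low potential -> the weight sits far from the source distribution,
# i.e. a larger source/target discrepancy as seen by the model.
phi = information_potential(finetuned_weights, source_weights)
discrepancy = 1.0 - phi.mean()
print(f"mean potential: {phi.mean():.3f}, discrepancy score: {discrepancy:.3f}")
```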
Transfer Learning
For transfer learning, a model is first pre-trained on a source dataset and then fine-tuned on the target dataset on which predictions are to be made. After pre-training, all layers are frozen except the last, which is trained on the target dataset, as sketched below.
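A minimal PyTorch sketch of this setup is given below. It assumes a network whose final layer is a Linear head named `classifier`; the attribute name, class count, and learning rate are illustrative assumptions.

```python
import torch.nn as nn
from torch.optim import Adam

def prepare_for_finetuning(model: nn.Module, n_target_classes: int):
    """Freeze the pre-trained layers and attach a fresh trainable head."""
    for param in model.parameters():
        param.requires_grad = False                   # freeze all pre-trained layers
    in_features = model.classifier.in_features        # assumes a Linear head named `classifier`
    model.classifier = nn.Linear(in_features, n_target_classes)  # new head for the target task
    optimizer = Adam(model.classifier.parameters(), lr=1e-3)     # only the head is optimized
    return model, optimizer
```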
Results (UCI Datasets)
We used a 1-D Fully Convolutional Network (FCN) architecture that achieves state-of-the-art results on UCI time series classification datasets. Transfer learning was implemented using different UCI datasets as sources, with the remaining datasets as targets. MC Dropout and QIPF were applied to the network (for each source-target pair) to quantify its overall predictive uncertainty on the test sets of the target datasets (after pre-training and fine-tuning).
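For reference, a common MC Dropout recipe for the predictive-uncertainty estimate looks roughly like the sketch below; the number of stochastic passes and the use of probability variance as the uncertainty score are assumptions, not necessarily the exact configuration used in these experiments.

```python
import torch

def mc_dropout_uncertainty(model, x, n_samples=50):
    """Predictive mean and uncertainty via MC Dropout: keep dropout
    active at test time and average several stochastic forward passes."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                              # keep dropout stochastic at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                 # predictive distribution
    uncertainty = probs.var(dim=0).sum(dim=-1)     # spread across passes as uncertainty
    return mean_probs, uncertainty
```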
Correlation coefficient between errors and uncertainty estimates:
MC Dropout: 0.71
QIPF: 0.85
Results (ImageNet - KMNIST)
A VGG-16 network pre-trained on the ImageNet dataset was fine-tuned on the Kuzushiji-MNIST (KMNIST) data. Uncertainty was quantified by decomposing the QIPF of the fine-tuned layer weights in the field created by the pre-trained layer weights.
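A rough torchvision sketch of this setup is shown below. The preprocessing of KMNIST for VGG-16 (e.g., resizing to 224x224 and replicating the single channel to three) is omitted, and the training details are assumptions.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pre-trained VGG-16 and re-purpose it for the
# 10-class Kuzushiji-MNIST task.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in vgg.features.parameters():
    param.requires_grad = False                    # freeze the convolutional backbone

vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, 10)  # new KMNIST head

# After fine-tuning, the QIPF-style comparison evaluates the weights of this
# new head within the potential field built from the frozen pre-trained
# weights, as sketched earlier.
```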