Recent Advances in Language Model Fine-tuning
References
Universal Language Model Fine-tuning for Text Classification
Pretrained Transformers Improve Out-of-Distribution Robustness
Neural Transfer Learning for Natural Language Processing
An Embarrassingly Simple Approach for Transfer Learning from Pretrained Language Models
Zero-Shot Entity Linking by Reading Entity Descriptions
Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling
Pretraining Methods for Dialog Context Representation Learning
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks