BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Bidirectional pre-training that learns rich contextualized token representations by attending to both left and right context.
Category: Foundational Models
Author: Imad Dabbura
Published: October 21, 2021

#nlp #llm

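To make the subtitle concrete, here is a minimal sketch (not from the original post) using the Hugging Face `transformers` fill-mask pipeline with `bert-base-uncased`: because BERT is a bidirectional encoder, the words on both sides of the masked position shape its prediction.

```python
# Minimal sketch (illustrative, not from the post): masked-token
# prediction with BERT via the Hugging Face fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Context on *both* sides of [MASK] ("flew straight to" on the left,
# "the capital of France" on the right) informs the prediction --
# this is the bidirectional conditioning the subtitle refers to.
for pred in fill_mask("The plane flew straight to [MASK], the capital of France."):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```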