Leveraging Pre-trained Transformer Models
In this project, we fine-tuned pre-trained transformer models from Hugging Face Transformers for our text classification task. Because these models are pre-trained on large text corpora, they already encode broad knowledge of language and context, which we adapted to genre classification instead of training a model from scratch.
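As a minimal sketch of this setup, the snippet below loads a pre-trained checkpoint with a freshly initialized classification head. The example sentence and the number of labels are illustrative placeholders, not the project's actual data or genre count.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The classification head on top of BERT is randomly initialized here
# and only becomes useful after fine-tuning on the genre dataset.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=4,  # placeholder: set to the number of genres in your dataset
)

inputs = tokenizer(
    "A detective unravels a conspiracy in 1920s Chicago.",
    return_tensors="pt", truncation=True, padding=True,
)
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, num_labels): one score per genre
```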
Our Choice: bert-base-uncased Architecture
For this project, we selected the bert-base-uncased architecture, a well-established BERT variant pre-trained on lowercased text. Because its tokenizer lowercases every input, the model ignores casing differences, which suits genre classification, where case carries little signal, and this choice gave us solid accuracy in practice.
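The short check below (not from the notebook, just an illustration) shows the practical effect of the uncased tokenizer: differently cased inputs map to the same token IDs.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# The uncased tokenizer lowercases text before tokenization,
# so casing differences disappear entirely.
ids_mixed = tokenizer("Science Fiction")["input_ids"]
ids_lower = tokenizer("science fiction")["input_ids"]
print(ids_mixed == ids_lower)  # True
```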
Model Training in model_training.ipynb
The full training process is documented in the model_training.ipynb notebook, which walks through each step: preprocessing the data, configuring the model, defining the loss function, selecting an optimization strategy, and monitoring training progress.
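For readers who want the gist without opening the notebook, here is a condensed sketch of those steps using the Trainer API. The CSV files, column names (text, label), and hyperparameters are assumptions for illustration; the notebook contains the actual configuration.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# 1. Preprocessing: load and tokenize the data (file and column names are placeholders).
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

# 2. Model configuration: BERT with a classification head sized to the genre set.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4,  # placeholder label count
)

# 3. Optimization and monitoring: Trainer applies cross-entropy loss to the
#    "label" column, uses AdamW by default, and logs metrics as training runs.
args = TrainingArguments(
    output_dir="genre-bert",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    evaluation_strategy="epoch",  # evaluate on the validation split each epoch
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```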