Transformer Tutorials


Introduction

The field of NLP was revolutionized in 2018 by the introduction of BERT and its Transformer friends (RoBERTa, XLM, etc.).

These novel Transformer-based neural network architectures, together with new ways of training neural networks on natural language data, brought transfer learning to NLP problems. Transfer learning had been delivering state-of-the-art results in the computer vision domain for several years, and the introduction of Transformer models brought about the same paradigm shift in NLP.

Companies like Google and Facebook trained their neural networks on large swathes of natural language data to grasp the intricacies of language, thereby producing language models. These models were then fine-tuned on domain-specific datasets to achieve state-of-the-art results for specific problem statements. They also published the trained models to the open-source community, whose members could in turn fine-tune them for their own use cases.

Hugging Face made it easier for the community to access and fine-tune these models through their Python package: Transformers.
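As a minimal sketch of what that looks like (assuming a recent version of the `transformers` package; the checkpoint name `bert-base-uncased` and `num_labels=4` are illustrative choices, not taken from these tutorials), loading a pretrained model and its tokenizer takes only a few lines:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pull a pretrained checkpoint and its matching tokenizer from the Hugging Face hub.
# The checkpoint name and label count are illustrative assumptions.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)

# Tokenize a sample sentence into the tensors the model expects.
inputs = tokenizer("Transfer learning makes NLP easier.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 4]): one score per candidate class
```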

Motivation

Despite these amazing technological advancements, applying these solutions to business problems is still a challenge, given the niche knowledge required to understand and apply these methods to specific problem statements. Hence, in the following tutorials I will be demonstrating how a user can leverage these technologies, along with some other Python tools, to fine-tune these language models for specific types of tasks.

Before I proceed, I would like to thank the following groups for the fantastic work they are doing and sharing, which has made these notebooks and tutorials possible:

Please review these amazing sources of information and subscribe to their channels/sources.

The problem statements that I will be working with are listed below (a minimal fine-tuning sketch follows the table):

| Notebook | Github Link | Colab Link | Kaggle Kernel |
| --- | --- | --- | --- |
| Text Classification: Multi-Class | Github | Open In Colab | Kaggle |
| Text Classification: Multi-Label | Github | Open In Colab | Kaggle |
| Sentiment Classification: with Experiment Tracking in WandB! | Github | Open In Colab | |
| Named Entity Recognition: with TPU processing! | Github | Open In Colab | Kaggle |
| Question Answering | | | |
| Summary Writing: with Experiment Tracking in WandB! | Github | Open In Colab | Kaggle |
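As a taste of what the notebooks walk through, here is a minimal sketch of a fine-tuning loop for the multi-class classification task. The toy sentences, label ids, and hyperparameters below are hypothetical stand-ins; the actual notebooks build full PyTorch data pipelines around the datasets in the `data` folder.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical toy batch; the notebooks load real examples instead.
texts = ["Stocks rallied after the earnings report.", "The home team won the final."]
labels = torch.tensor([0, 1])  # e.g. 0 = Business, 1 = Sports

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    # Passing labels makes the model return a cross-entropy loss alongside the logits.
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"epoch {epoch}: loss = {outputs.loss.item():.4f}")
```

After training, `model.save_pretrained(...)` can write the resulting artifacts into a folder such as `models` (see the directory structure below).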

Directory Structure

  1. data: This folder contains all the toy data used for fine-tuning.
  2. utils: This folder contains any miscellaneous scripts used to prepare for fine-tuning.
  3. models: This folder holds all the artifacts saved after fine-tuning.

Further Watching/Reading

I will try to cover the practical and implementation aspects of fine-tuning these language models on various NLP tasks. You can deepen your knowledge of the topic by reading or watching the following resources.
