
Dataset Information


Investigating cross-lingual training for offensive language detection.


ABSTRACT: Platforms that feature user-generated content (social media, online forums, newspaper comment sections, etc.) have to detect and filter offensive speech within large, fast-changing datasets. While many automatic methods have been proposed and achieve good accuracies, most of these focus on the English language and are hard to apply directly to languages in which few labeled datasets exist. Recent work has therefore investigated the use of cross-lingual transfer learning to solve this problem, training a model in a well-resourced language and transferring it to a less-resourced target language, but performance has so far been significantly less impressive. In this paper, we investigate the reasons for this performance drop via a systematic comparison of pre-trained models and intermediate training regimes on five different languages. We show that using a better pre-trained language model results in a large gain in overall performance and in zero-shot transfer, and that intermediate training on other languages is effective when little target-language data is available. We then use multiple analyses of classifier confidence and language model vocabulary to shed light on exactly where these gains come from and gain insight into the sources of the most typical mistakes.
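
The abstract describes a zero-shot cross-lingual transfer setup: fine-tune a multilingual pre-trained model on labelled source-language data, then evaluate it directly on a less-resourced target language with no target-language training. Below is a minimal sketch of that setup, assuming the Hugging Face transformers and datasets libraries, an xlm-roberta-base encoder, and binary offensive/not-offensive labels; the model, corpora, and training regimes are illustrative assumptions, not the paper's exact configuration.

# Hypothetical sketch of zero-shot cross-lingual transfer for offensive
# language detection. All names (model, columns, labels) are assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

MODEL_NAME = "xlm-roberta-base"  # assumed multilingual encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def tokenize(batch):
    # Truncate/pad user-generated posts to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

# Placeholder corpora: labelled source-language (e.g. English) training data
# and target-language evaluation data; 1 = offensive, 0 = not offensive.
source_rows = [{"text": "example offensive post", "label": 1},
               {"text": "perfectly harmless post", "label": 0}]
target_rows = [{"text": "example post in the target language", "label": 0}]

train_ds = Dataset.from_list(source_rows).map(tokenize, batched=True)
eval_ds = Dataset.from_list(target_rows).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xling-offensive",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)

trainer.train()            # fine-tune on the well-resourced source language
print(trainer.evaluate())  # zero-shot evaluation on the target language

Intermediate training on additional languages, as studied in the paper, would correspond to inserting further fine-tuning rounds on other labelled corpora before the final target-language evaluation.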

SUBMITTER: Pelicon A 

PROVIDER: S-EPMC8237322 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

S-EPMC10702970 | biostudies-literature
S-EPMC6428229 | biostudies-literature
S-EPMC10496005 | biostudies-literature
S-EPMC7566404 | biostudies-literature
S-EPMC10280260 | biostudies-literature
S-EPMC11006909 | biostudies-literature
S-EPMC10078355 | biostudies-literature
S-EPMC9467312 | biostudies-literature
S-EPMC3253604 | biostudies-other
S-EPMC4452086 | biostudies-literature