Our corpus follows the TEI guidelines, which means its structure is not perfectly adapted to the task of reference field identification. To keep the Revues.org corpus reusable, we decided to follow the TEI guidelines rather than construct a corpus optimized for reference field identification.
This original corpus, which is manually annotated, contains rich information about references. However, since reference field identification requires a comparatively simple labeling structure, given the limitations of an automatic learning system, we need to extract appropriate learning data from the original corpus.
Automatic extraction of learning and test data from the corpus cannot be perfect because of the complexity of the original corpus and possible errors in the manual annotation. During a number of experiments conducted to construct effective learning data, we verified that several estimation errors come from mis-annotated tokens in the learning data.
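To give a concrete idea of this extraction step, here is a minimal sketch that flattens a TEI `<bibl>` element into (token, label) pairs for a sequence labeler. The inline TEI sample and the label names are illustrative assumptions, not the exact BILBO label set.

```python
# Minimal sketch: flatten a TEI <bibl> element into (token, label) pairs.
# The sample record and label names are illustrative assumptions.
import xml.etree.ElementTree as ET

SAMPLE = """
<bibl>
  <author><surname>Kim</surname>, <forename>Young-Min</forename></author>,
  <title level="a">Automatic annotation of references</title>,
  <date>2011</date>.
</bibl>
"""

def bibl_to_tokens(bibl):
    """Yield (token, label) pairs; text outside any field gets 'nolabel'."""
    def walk(elem, label):
        if elem.text:
            for tok in elem.text.split():
                yield tok, label
        for child in elem:
            # Nested elements take their own tag as the label;
            # text between children keeps the enclosing element's tag.
            yield from walk(child, child.tag)
            if child.tail:
                for tok in child.tail.split():
                    yield tok, label
    yield from walk(bibl, "nolabel")

if __name__ == "__main__":
    root = ET.fromstring(SAMPLE)
    for token, label in bibl_to_tokens(root):
        print(f"{token}\t{label}")
```

In practice the flattening rules have to decide how deeply nested TEI elements map onto the simpler label set, which is one source of the extraction imperfections mentioned above.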
To prevent this kind of problem, we manually examine the learning and test data for labeling completeness and correctness. Whenever we find mis-annotated tokens, we replace their labels with the correct ones. To facilitate this revision process, we color the token fields differently according to their manually annotated labels, as illustrated below.
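As a rough illustration, such a color-coded view could be generated along the following lines; the label names and color palette here are assumptions for the sketch, not BILBO's actual choices.

```python
# Minimal sketch: render (token, label) pairs as colored HTML spans so
# mis-annotated tokens stand out at a glance. The label-to-color mapping
# below is a hypothetical example.
import html

COLORS = {
    "surname": "#1f77b4",
    "forename": "#2ca02c",
    "title": "#d62728",
    "date": "#9467bd",
    "nolabel": "#7f7f7f",
}

def to_html(pairs):
    """Return one HTML line with each token colored by its label."""
    spans = (
        f'<span style="color:{COLORS.get(label, "black")}" '
        f'title="{html.escape(label)}">{html.escape(tok)}</span>'
        for tok, label in pairs
    )
    return " ".join(spans)

if __name__ == "__main__":
    pairs = [("Kim", "surname"), (",", "nolabel"),
             ("Young-Min", "forename"), ("2011", "date")]
    print(to_html(pairs))
```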
We expect that this kind of visualization will allow an intuitive revision of the learning and test dataset. There are 18 fields, each identified by a different color.
The description of each label is shown in the following table.