Fifth Experimental Result on Revues.org corpus level 1

This post summarizes our trials to obtain the most suitable learning data in terms of tokenization, features, and labels. We constructed about 20 different CRF models, varying the tokenization (especially the treatment of punctuation) and the types of features and labels.

Recall that our manual tagging of the Revues.org corpus level 1 according to the TEI guidelines includes the tagging of some important punctuation marks, which were used as tokens in the previous experiments. But since this kind of information is not available for a new reference to be annotated in the real world, testing with these tokens does not reflect the true performance of the learned CRF model. Therefore, we need to apply the same tokenization rules to all punctuation marks in both the learning and the test data.
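To make this concrete, here is a minimal sketch, in Python, of a punctuation-aware tokenizer of the kind described above. It is our own illustration under simple assumptions, not the exact rules used in BILBO: detachable marks such as commas and parentheses become tokens of their own, while the period of an initial or abbreviation stays attached to it, as in the learning-data example further below.

    import re

    # Illustrative only: detach commas, parentheses and similar marks as
    # separate tokens; keep the period of an initial or abbreviation
    # attached (e.g. "E.", "Eds."). The real BILBO rules may differ.
    def tokenize(reference):
        spaced = re.sub(r"([,;:()\[\]«»])", r" \1 ", reference)
        return spaced.split()

    tokens = tokenize("LEONARD E. et VIMARD P., (Eds), 2005, Dynamique des populations,")
    # ['LEONARD', 'E.', 'et', 'VIMARD', 'P.', ',', '(', 'Eds', ')', ',',
    #  '2005', ',', 'Dynamique', 'des', 'populations', ',']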

We also explored the effect of features describing the surface characteristics of each token. We empirically verified that the following features contribute positively to the construction of a good CRF model (a sketch of how such features can be computed is given after the learning-data example below).

An example of learning data including the above features (‘/’ denotes a line separator):

LEONARD ALLCAP surname / E. ALLCAP INITIAL forename / et ALLSMALL nolabel / VIMARDS ALLCAP surname / P. ALLCAP INITIAL forename / , c / ( c / Eds NONIMPCAP POSSEDITOR abbr / ) c / , c / 2005 ALLNUMBERS date / , c / Dynamique ITALIC FIRSTCAP title / des ITALIC ALLSMALL title / populations ITALIC ALLSMALL title / , ITALIC c / crises ITALIC ALLSMALL title / et ITALIC ALLSMALL title …
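The token-shape features above (ALLCAP, INITIAL, FIRSTCAP, ALLSMALL, ALLNUMBERS) can be derived from the token string alone. The following Python sketch shows one possible way to compute them; it is our own illustration, not the BILBO code. Features such as ITALIC come from the markup of the reference rather than from the string, and NONIMPCAP and POSSEDITOR presumably rely on additional lexical resources, so they are not reproduced here.

    def surface_features(token):
        # Hypothetical re-implementation of the token-shape features
        # used in the learning-data example above.
        feats = []
        core = token.rstrip(".,;:")          # ignore trailing punctuation
        if core.isdigit():
            feats.append("ALLNUMBERS")
        elif core.isupper():
            feats.append("ALLCAP")
            if len(core) == 1:
                feats.append("INITIAL")      # single capital, e.g. "E."
        elif core.islower():
            feats.append("ALLSMALL")
        elif core[:1].isupper():
            feats.append("FIRSTCAP")
        return feats

    for tok in ["LEONARD", "E.", "et", "2005", "Dynamique"]:
        print(tok, surface_features(tok))
    # LEONARD ['ALLCAP']
    # E. ['ALLCAP', 'INITIAL']
    # et ['ALLSMALL']
    # 2005 ['ALLNUMBERS']
    # Dynamique ['FIRSTCAP']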

Other tested features, which turned out to be rather harmful to the CRF model, are as follows.

Some similar labels are unified, and the label <title> is divided into <title> and <booktitle>. Even though this separation of the <title> label decreases the overall accuracy, it is more natural to distinguish the title of the referenced work from the name of the journal, conference, or book in which it is published. The modified labels are presented in the following table.
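As a hedged illustration of the <title>/<booktitle> split: in TEI, the level attribute of a title element already makes this distinction, with level "a" (analytic) marking the title of the cited work itself and "j" (journal) or "m" (monograph) naming the journal or book that contains it. A remapping along those lines could look as follows; the exact set of merged and split labels used in our experiments may differ.

    # Hedged sketch: derive the new label from the TEI "level" attribute
    # of a <title> element in the manually annotated corpus.
    def remap_title_label(level):
        if level == "a":            # analytic: title of the cited work
            return "title"
        if level in ("j", "m"):     # journal or monograph containing it
            return "booktitle"
        return "title"              # fallback when no level is given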

Of course, not all of the above strategies on tokenization, features, and labels are applied at once. We repeated the experiments, applying one or more strategies at each stage.

In the following table, we compare the performance of the different CRF models constructed during our experiments. At each new stage, different strategies are applied depending on the successes and failures of the preceding experiments. We start from the best-performing data setting of the previous experiments (the fourth experiment).

The following table shows the detailed performance at the final stage of the experiments.
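For reference, the per-label figures in such a table can be obtained by counting, token by token, how often each label is correctly predicted. A minimal sketch with the standard precision/recall/F-score definitions; the example sequences are made up.

    from collections import Counter

    def per_label_scores(gold, pred):
        # Token-level precision/recall/F1 per label, from two aligned
        # label sequences (one entry per token).
        tp, fp, fn = Counter(), Counter(), Counter()
        for g, p in zip(gold, pred):
            if g == p:
                tp[g] += 1
            else:
                fp[p] += 1
                fn[g] += 1
        scores = {}
        for label in set(gold) | set(pred):
            prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
            rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
            f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
            scores[label] = (prec, rec, f1)
        return scores

    gold = ["surname", "forename", "c", "date", "title", "title"]
    pred = ["surname", "surname", "c", "date", "title", "booktitle"]
    print(per_label_scores(gold, pred)["title"])  # (1.0, 0.5, 0.666...)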

