First experimental result on Revues.org corpus level 1 – Part II

Part II: Data analysis

The Revues.org reference corpus is represented in XML format and manually tagged according to the TEI guidelines. Each XML file corresponds to one article on the Revues.org site and contains, on average, more than twenty of that article's references. The detailed characteristics of this XML source data are shown in the following table. We compared the Revues.org XML source data with a standard reference dataset, Cora, created by McCallum et al. This comparison allows a better understanding of the complexity of real-world data.

Cora reference data | Revues.org
500 references | 737 references
Data in HTML | Data in XML following the TEI guidelines
References contain 13 fields (tags); the fields serve as labels | References contain 30 tags; the labels are not yet decided
No inner tags, only one level of tags | Relatively complicated tags with inner tags; two or three levels of nesting
References have quite regular formats | References have very different formats depending on the article
*Features (local, layout, external lexicon) are not verified; preprocessing is needed to use a CRF | Need to decide which tags can be labels and which attributes can be features
Individual authors are not separated; the author field includes all the authors | Individual authors are separated; even surname and forename are separated
Punctuation does not seem to be separated into tokens; the presence of punctuation is marked as a feature | Important punctuation is separated from the text with the <c> tag

* The role of features in CRFs: features are additional information about the input data, such as layout characteristics. Well-defined features allow detailed CRF modelling, which results in a more accurate labelling system.
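
As a purely illustrative sketch (these are not the features used in our experiments, which are still to be decided), simple token-level features might look like this in Python:

import string

# Purely illustrative token-level features of the kind a CRF can exploit;
# the actual feature set for the Revues.org data is still to be decided.
def token_features(token):
    return {
        "lower": token.lower(),                 # lexical identity
        "is_capitalized": token[:1].isupper(),  # e.g. surnames, titles
        "is_all_caps": token.isupper(),         # e.g. "BOULY"
        "is_digit": token.isdigit(),            # e.g. years, volume numbers
        "is_punct": all(ch in string.punctuation for ch in token),
        "length": len(token),
    }

print(token_features("2002"))
# {'lower': '2002', 'is_capitalized': False, 'is_all_caps': False,
#  'is_digit': True, 'is_punct': False, 'length': 4}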

 

Examples of Cora and Revues.org reference data

Cora

<author> A. Cau, R. Kuiper, and W.-P. de Roever. </author>

<title> Formalising Dijkstra’s development strategy within Stark’s formalism. </title>

<editor> In C. B. Jones, R. C. Shaw, and T. Denvir, editors, </editor>

……

Revues.org

<bibl type="article"><author><surname>BOULY<nameLink>de</nameLink>LESDAIN</surname>

<forename full="init">S.</forename></author>

<c type="comma">,</c>

<edition><date>2002</date></edition>

<c type="comma">,</c>

<c type="guillemot_left">«</c>

<title level="a">Alimentation et migration, une définition spatiale</title>

……

 

Comparison of Cora data tags and Revues.org tags

Cora | Revues.org
<author> | <author>, <surname>, <forename>
<booktitle> | <title> of <relatedItem> + attributes in title, or an attribute in bibl
<date> | <date>
<editor> | <editor>, <author> of <relatedItem>
<institution> | <orgName>
<journal> | <title> of <relatedItem> + attributes in title, or an attribute in bibl
<location> | <pubPlace>, <country>
<note> |
<pages> | pp attribute of <biblScope>
<publisher> | <publisher>
<tech> (thesis, technical report, etc.) | attributes in title, or an attribute in bibl
<title> | <title>
<volume> | vol attribute of <biblScope>
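
Once the label set is fixed, such a correspondence could be encoded as a simple lookup table. The sketch below is purely hypothetical: the keys, label names and attribute conventions are assumptions, not a final decision.

# Hypothetical mapping from Revues.org/TEI markup to Cora-style labels.
# Keys mix tag names and attribute values; the real label set is not yet decided.
TAG_TO_LABEL = {
    "surname": "author",
    "forename": "author",
    "nameLink": "author",
    "title[level=a]": "title",
    "title[level=j]": "journal",
    "date": "date",
    "pubPlace": "location",
    "country": "location",
    "publisher": "publisher",
    "orgName": "institution",
    "biblScope[type=pp]": "pages",
    "biblScope[type=vol]": "volume",
}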

 

The data needed to learn a CRF model are input sequences, output labels and input features. A reference string is segmented into a sequence of tokens, where token boundaries are identified by whitespace characters or tags. One of the difficulties in preparing learning data from the Revues.org XML source data is determining the output labels and input features. Since not all tags are appropriate as labels, while some attributes would make good labels, we have to decide which label types to use as output. In the same way, we have to select suitable input features from the various attributes and tags.
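
To make the token extraction concrete, here is a minimal sketch (assuming Python and the standard xml.etree.ElementTree module; it is not our actual extraction script) that turns a TEI-tagged reference into a sequence of (token, label) pairs, labelling each token with its nearest enclosing tag:

import xml.etree.ElementTree as ET

def tokenize_bibl(bibl_xml):
    """Split a TEI <bibl> into whitespace-delimited tokens labelled by their tag."""
    pairs = []
    root = ET.fromstring(bibl_xml)
    for elem in root.iter():
        # NB: text that follows a nested child (elem.tail) is ignored here for brevity;
        # a real extraction would have to account for it.
        if elem is root or elem.text is None:
            continue
        for token in elem.text.split():        # whitespace-delimited tokens
            pairs.append((token, elem.tag))    # label = nearest enclosing tag
    return pairs

example = ('<bibl><author><surname>BOULY</surname>'
           '<forename full="init">S.</forename></author>'
           '<c type="comma">,</c>'
           '<date>2002</date>'
           '<title level="a">Alimentation et migration</title></bibl>')

print(tokenize_bibl(example))
# [('BOULY', 'surname'), ('S.', 'forename'), (',', 'c'), ('2002', 'date'),
#  ('Alimentation', 'title'), ('et', 'title'), ('migration', 'title')]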

Our first experiments have two major objectives. First, we want to verify that a CRF model trained on the simplest possible Revues.org learning dataset gives a reasonable estimation result on a test set. This verification would justify our approach of using CRFs for reference analysis on the Revues.org dataset. To that end, we construct a simple learning dataset in which the input sequences are automatically extracted from the XML sources without any pre-processing, and the label of each input token is defined by its nearest enclosing tag. For now, input features are ignored when learning the CRF model. Second, we expect the result of these first experiments to contribute to the preparation of a better learning dataset. Given the difficulties mentioned above, this first experimental result should be a stepping stone towards a more concrete model.
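
For illustration only, and assuming the sklearn-crfsuite package (which is not necessarily the toolkit used for our experiments), training such a feature-less model could look like this, with each token described only by its own surface form:

import sklearn_crfsuite

# Toy data: one tagged reference, as produced by the tokenize_bibl() sketch above.
train_refs = [[("BOULY", "surname"), ("S.", "forename"), (",", "c"),
               ("2002", "date"), ("Alimentation", "title"),
               ("et", "title"), ("migration", "title")]]

def to_features(tokens):
    # No layout or lexicon features yet: each token is described only by itself.
    return [{"token": tok} for tok in tokens]

X_train = [to_features([tok for tok, _ in ref]) for ref in train_refs]
y_train = [[label for _, label in ref] for ref in train_refs]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))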

In the next part, we present this experimental result.

