Author archives: Young-Min Kim

Upper bound of annotation with proper noun features

An idea that naturally struck us is to try learning a model with a complete proper noun feature set for each strategy. This gives us an objective upper bound on the annotation performance, and from these upper bounds we can decide which strategy is preferable to the others.

The question here is how to complete the proper noun lists. We work around this problem by artificially attaching proper noun features derived from the real labels. For example, if a token has the ‘surname’ label, we attach the ‘surnamelist’ feature to that token. In this way we can measure the upper bound of the different strategies. In the following table, we compare the upper bounds of S1 and S3.
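A minimal sketch of this construction (illustrative names only, not the actual Bilbo code): the artificial list feature is simply copied from the gold label of each token.

LABEL_TO_FEATURE_S3 = {          # one artificial feature per field (S3)
    "surname": "surnamelist",
    "forename": "forenamelist",
    "place": "placelist",
}
# For S1, both name fields would map to the single feature "namelist" instead.

def attach_artificial_features(tokens, gold_labels, label_to_feature):
    """Return (token, [features]) pairs with the list feature copied from the gold label."""
    rows = []
    for token, label in zip(tokens, gold_labels):
        features = [label_to_feature[label]] if label in label_to_feature else []
        rows.append((token, features))
    return rows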

The result clearly shows the limit of the ‘namelist’ feature. Even though S1 works better than the baseline and produces a performance similar to S2 and S3, the upper bounds of S1 and S3 (or S2) are quite different. When we apply distinct artificial features for all three fields, these fields are almost perfectly predicted, whereas there is no improvement in surname and forename for the artificial feature ‘namelist’. This justifies our decision to keep separating the ‘surnamelist’ and ‘forenamelist’ features. Moreover, the near-perfect annotation accuracy motivates us to explore in detail the completeness of proper noun features.

Proper noun features III (corpus 1)

We extract seven different learning sets according to the strategies defined in the previous post. Several important fields are selected for the comparison, including the surname, forename and place fields that are our main concern here.

First, we compare the baseline model and strategy S1, which attaches the ‘namelist’ feature to both surnames and forenames found in the author name list. We emphasize that surname and forename are searched in pairs in S1.

Compared to the baseline, S1 obtains a gain of about 0.4 points in general accuracy. The three main fields improve a little, especially the recall of surname and the precision of place. The other fields also change somewhat, influenced by the added features.

Now we continue with the CRF models constructed using strategies S2 and S3. Here we separate the ‘surnamelist’ and ‘forenamelist’ features for their corresponding tokens. We also search them in pairs, as in S1. In S2 we attach just the name-related features, whereas in S3 we also look up ‘placelist’.

S2 gives the same performance as S1 in general accuracy (87.43 vs. 87.46). However, there are some improvements in the precision and recall of most fields. We assume that this separation between the two name types induces the model to learn the name fields more intensively. On the other hand, the ‘placelist’ feature in S3 brings only a tiny difference in all fields. This may be because of the comparatively small number of tokens having the ‘place’ label, or the incompleteness of the place lists used.

Strategies S4 and S5 are designed to verify whether searching surname and forename in pairs (as in S1, S2 and S3) is effective. Therefore, in S4 and S5, surname and forename are searched independently in the name list.

While the general accuracies do not change much, the detailed performance shows a somewhat different pattern from the previous strategies. In both S4 and S5, the improvement in the recall of forename is noteworthy. But at the same time, the recall of surname decreases. This means that the separation is favorable for finding forenames but not surnames. We also notice an interesting result in the place field, which is better recognized in terms of precision when applying the ‘placelist’ feature (S5), but shows the opposite result in recall. According to our analysis, when the ‘surnamelist’ and ‘forenamelist’ features are scattered throughout the learning data, an incomplete ‘placelist’ feature can disturb the model by assigning the ‘place’ label to tokens that carry the ‘placelist’ feature. That is why the precision is higher but the learned model finds fewer of the tokens that actually have the place label.

To see the pure effect of the ‘placelist’ feature on accuracy, we add only the ‘placelist’ feature in S6. In line with the above diagnosis, the precision for the ‘place’ field is much better than that of the baseline (88.41 vs. 72.51) but its recall is worse, as shown in the following table (85.80 vs. 90.53):

These results lead us to some tentative conclusions. First, inserting proper noun features improves the performance, but not by much. Second, we cannot guarantee that assigning different features to surname and forename will yield a better annotation result. Third, the ‘placelist’ feature helps the precision of the place field but not its recall. Fourth, the decision to search ‘surnamelist’ and ‘forenamelist’ in pairs does not influence general accuracy, but it does make some difference depending on the field. We assume that the independence assumption between the two features is more sensitive to the quality of the external list used than the pair assumption.

This synthesis inspires us to tackle the incompleteness of proper noun lists, and we treat this issue as a missing feature problem.

Proper noun features II (corpus 1)

In the previous experiments on proper noun lists, certain author names were omitted because of a parsing error. We deduce from this inadvertent mistake, and from the resulting degradation of performance, that we should obtain an improved result on the newly revised corpus with the same proper noun feature settings as before. By modifying the way features are matched to tokens, we can prepare different learning data from the same lists. So we design several detailed proper noun feature-matching strategies, described in the following table. The same lists as in the previous experiments are used.

Strategy | Type of external list | Property | Feature name
Baseline | – | Without any proper noun lists | –
S1 | People name list | No distinction between surname and forename | NAMELIST
S2 | People name list | Distinction between surname and forename | SURNAMELIST, FORENAMELIST
S3 | People name list, Place lists | Distinction between surname and forename; place lists (city and country) | SURNAMELIST, FORENAMELIST, PLACELIST
S4 | People name list | Surname and forename are searched separately | SURNAMELIST, FORENAMELIST
S5 | People name list, Place lists | Surname and forename are searched separately; place lists (city and country) | SURNAMELIST, FORENAMELIST, PLACELIST
S6 | Place lists | Place lists (city and country) | PLACELIST

As one solution to the limitation of the name list, we allow new names to be formed by combining surnames and forenames from the list. This means that even if a full author name in our data is not found in the list as a whole, we accept it as a name when its surname and forename are each matched in their respective lists. This is possible because our author name list distinguishes between surname and forename.

An important rule in the matching strategies S1, S2 and S3 is that we prohibit the separation of surname and forename during the search. That is, we first check whether a given token is a surname and, if so, we examine whether one of the surrounding tokens is a forename. For that we check three different expressions of a full name: [surname forename], [forename surname] and [surname , forename]. In this way we sequentially check all the tokens during the feature extraction process. For comparison, we search surname and forename separately in strategies S4 and S5. For the ‘placelist’ feature, we also check city and country names that consist of more than one word. A part of a reference in the learning data with strategy S2 (or S3) is shown in the figure below.
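Before the figure, here is a minimal sketch of this pair-matching rule (names and data structures are illustrative only, not the actual Bilbo code); surnames and forenames are sets built from the author name list.

def match_name_pairs(tokens, surnames, forenames):
    """Attach SURNAMELIST/FORENAMELIST only when one of the three full-name patterns is found."""
    feats = [set() for _ in tokens]
    for i, tok in enumerate(tokens):
        if tok not in surnames:
            continue
        partners = []
        if i + 1 < len(tokens) and tokens[i + 1] in forenames:
            partners.append(i + 1)                                  # [surname forename]
        if i - 1 >= 0 and tokens[i - 1] in forenames:
            partners.append(i - 1)                                  # [forename surname]
        if i + 2 < len(tokens) and tokens[i + 1] == "," and tokens[i + 2] in forenames:
            partners.append(i + 2)                                  # [surname , forename]
        for j in partners:
            feats[i].add("SURNAMELIST")
            feats[j].add("FORENAMELIST")
    return feats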

Since the first four name tokens are found in the name list but the latter two are not, the reference is represented as above. In this case, even though ‘René’ is found in the name list, ‘OTAYEK’ is not, so the matching rule finally attaches neither ‘surnamelist’ nor ‘forenamelist’ to them. In contrast, we attach the ‘forenamelist’ feature to the ‘René’ token when applying S4 and S5. In the next post we compare the performance of these seven (baseline + 6) strategies.

Manual annotation revision (corpus 1)

Eliminating all errors from manual annotation is hard to achieve, especially when the annotation structure is complex and the data size is not small, as in our case. But the quality of manual annotation is one of the core factors in sequence learning, because the estimation ability of a statistical model depends considerably on the learning data extracted from the annotation.

As our experiments progress, we keep correcting small errors in both the manual annotation and the original input strings. An example of the latter is an unnecessary space inserted into a URL when a reference was typed online, which provokes a wrong tokenization in the Bilbo system. Since the first revision attempt, which modified the learning and test data directly, we have continued to correct scattered tiny errors. After several attempts, we found that correcting them directly in our corpus is more practical in case we change the feature-extraction strategy, even though searching for errors in the corpus is more tedious than in the learning and test data.

These small changes in the corpus always influence the final annotation result, and we currently obtain the following accuracies with the same feature and field type settings as in the previous experiments.

There is a slight difference compared with the first data revision. In particular, the total number of tokens and the number of punctuation marks decrease (6632 vs. 6569 and 2090 vs. 2024), which comes mostly from the change in tokenization. The correction also allows us to automatically remove several unintended field labels such as ‘author’, ‘name’ and ‘region’, whereas we keep ‘ref’ and ‘genname’ despite their low frequency (because they occur often in corpus 2). With this revised version of corpus 1, we try out a series of experiments on the incorporation of external resources such as proper noun lists.

Finding DOI through CrossRef

One of the main services that we have planned using the automatic annotation is to provide a unique identifier such as a DOI (Digital Object Identifier) for each reference. DOIs are assigned by CrossRef, a registration agency of the International DOI Foundation (IDF). We use CrossRef’s query service, which allows us to retrieve a DOI from some basic information about a target digital document.

To submit queries to CrossRef from our system, we need a registered user ID. However, CrossRef also provides several guest query interfaces that require no registration on the site. So before applying the service in our system, we analyzed the guest query interfaces. First, we need at least the title and the author name, especially the surname of the first author. If we enter the full name, the system does not find the item. Second, the reference string parsing service on the site does not always work. For example, with the reference string [WACQUANT, Loïc, “Deadly Symbiosis: When Ghetto and Prison Meet and Mesh”, in Punishment and Society 3, 2000, pp. 95-134.], we could not find the DOI even though it actually exists. Third, when we add metadata beyond the basic information for searching, it must be exact. If not, the system cannot find the DOI even though it could by submitting just the basic information (title and author). Fourth, a title reduced to its first few words works well.

Finally, we decided to use XML queries with a few beginning words of the title and the surname of the first author. For each bibliographical reference in the test data of corpus 1, these two items are extracted from the automatic annotation result. As a result, among 215 references, 22 turn out to have a DOI. Interestingly, 10 of these 22 DOIs could not be discovered using CrossRef’s auto parsing service at the guest query interface (marked with ‘***’ in the following examples). We sampled 6 references that reflect the diversity of forms.
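As a rough sketch only: how the two items could be pulled from the CRF output and sent as a query. The endpoint and parameter names below are placeholders, not the exact CrossRef XML query interface (which, as noted above, also requires a registered account).

import requests

def build_doi_query(annotated_reference, n_title_words=5):
    """annotated_reference: list of (token, label) pairs produced by the CRF."""
    surname = next((tok for tok, lab in annotated_reference if lab == "surname"), None)
    title_tokens = [tok for tok, lab in annotated_reference if lab == "title"]
    return surname, " ".join(title_tokens[:n_title_words])

def lookup_doi(surname, title_start):
    # Placeholder URL and parameters, for illustration only.
    response = requests.get(
        "https://example.org/crossref-query",
        params={"aulast": surname, "title": title_start},
        timeout=10,
    )
    return response.text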

(Reference string  | automatically extracted items | found DOI | particularity)

CHAUVEAU, J-P, DOZON J-P. et RICHARD J., 1981, Histoires de riz, histoires d’igname. Le cas de la moyenne Côte d’Ivoire, Africa, 51 (2), 621-658. | First author :   CHAUVEAU  Start of title :   Histoires de riz | DOI : 10.2307/1158830 |

JOHNSTON-FELLER, R., Color Science in the Examination of Museum Objects. Nondestructive Procedures, Los Angeles, The Getty Conservation Institute, 2001.  | First author :   JOHNSTON-FELLER    Start of title :   Color Science in the Examination of Museum Objects | DOI : 10.1002/(ISSN)1520-6378 | ***NO DOI when AUTO PARSING in the site

ORGANSKI Kenneth, 1958, World Politics, New York, Knopf. | First author :   ORGANSKI     Start of title :   World Politics | DOI : 10.2307/1338221 | ***WRONG DOI when AUTO PARSING in the site

KRUEGER Dirk & Fabrizio PERRI, “Does Income Inequality Lead to Consumption Inequality? Evidence and Theory”, Review of Economic Studies, 73 (1), 2006. | First author :   KRUEGER      Start of title :   Does Income Inequality Lead to Consumption Inequality | DOI : 10.1111/roes.2006.73.issue-1 |

Halperin David M., Winkler John J. et Zeitlin Froma I. (éd.) (1990), Before Sexuality. The Construction of Erotic Experience in the Ancient World, Princeton. | First author :   Halperin    Start of title :   Before Sexuality | DOI : 10.1080/03612759.1991.9949408 | ***NO DOI when AUTO PARSING in the site

Veyne Paul (1981), «L’homosexualité à Rome», Histoire 30, p. 71-78 (= légèrement augmenté, « L’homosexualité à Rome », Communication 35, 1982, p. 26-33), réédité dans Duby G. (éd.), Amour et sexualité en Occident (1991), Paris, Seuil, p. 69-77). | First author :   Veyne    Start of title :   L’homosexualité à Rome | DOI : 10.3406/comm.1982.1519 | 

 

Several practical and learning issues for future work

First step to use the annotation result:

  • Finding a unique id such as a DOI for each reference via the CrossRef site. It can be used as an important piece of metadata describing the reference.
  • Finding also a unique id such as an ISBN for a reviewed book in an article of Revues.org, using external services such as amazon.org. A link can then be created toward the corresponding page of the external site.

Practical construction of Bilbo system:

  • Extending the verified results to other real references by integrating the test code into the Revues.org platform.

To improve the accuracy of automatic annotation:

  • Incorporation of the external resources at the level of modeling. A kind of semi-supervised learning seems to be appropriate for this issue.
  • Applying rule based post-processing.

Processing of third corpus:

  • Another main tool for the treatment of corpus 3 will be a topic model for context analysis.
  • Named entity disambiguation is necessary because forenames written as initials make author identification ambiguous.

Each issue will involve a number of detailed statistical or heuristic techniques.

 

Note annotation on Corpus level 2

We continue our experiments with corpus 2. In the previous post, we showed that a well-defined set of three different feature types improves note classification accuracy. Now we verify the usefulness of note classification for the automatic annotation of bibliographical references in notes.

Two CRF models are constructed on two different sets. The first set, ‘NotesCL’, consists of the notes classified into the positive category after applying an SVM with strategy 10. The second set, ‘NotesOR’, contains all 1532 notes without classification. We randomly select 70% of each set as learning data and the remaining 30% as test data, just as in the previous experiments.
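A small sketch of how the two sets and the split could be built (function and variable names are illustrative, not the actual code):

import random

def build_note_sets(notes, svm_predict):
    """notes: all note strings; svm_predict: the strategy-10 SVM, returning +1 or -1."""
    notes_or = list(notes)                                # all 1532 notes, no classification
    notes_cl = [n for n in notes if svm_predict(n) == 1]  # only the SVM-positive notes
    return notes_cl, notes_or

def split_70_30(note_set, seed=0):
    shuffled = list(note_set)
    random.Random(seed).shuffle(shuffled)
    cut = int(0.7 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]                 # (learning data, test data)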

The above figure shows the annotation results of these two models for some selected fields. The gray cells are the three most important fields: surname, forename and title of article. A bold value means that the corresponding CRF model estimates that field better than the other model. The total accuracy of our approach is 87.28%, and it outperforms the baseline (85.16%). Considering that the baseline model is learned on a larger number of notes (1532 vs. 1185), this result is quite encouraging, because in general classification results improve with the number of instances.

Consequently, our approach of classifying notes first and then applying a CRF significantly improves the final annotation accuracy.

A paper accepted in BooksOnline11 workshop

A paper describing our work, “Automatic Annotation of Bibliographical References in Digital Humanities Books, Articles and Blogs”, has been accepted at the BooksOnline11 workshop (Oct. 24, 2011, Glasgow, UK) organized at CIKM 2011.

The list of accepted papers in this workshop is:

http://research.microsoft.com/en-us/events/booksonline11/acceptedpapers.aspx

Our paper has been allocated both an oral presentation and a poster slot. The progress of the Bilbo system is summarized in the following poster (click the image for a larger view).

 

Proper noun features on corpus level 1

This part of the experiments concerns the use of external proper noun lists. To overcome the mis-annotation between people names and places, we consider using a set of proper noun lists. The people name and country lists are provided by the Revues.org article collection, and the place list is compiled from a list of the top 3000 largest cities (http://www.mongabay.com/cities_pop_01.htm) and a list of 215 livable cities (http://www.businessweek.com/interactive_reports/livable_cities_worldwide.html).
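A minimal sketch of how such a combined place lookup could be built (the file names and the lower-casing are assumptions for illustration):

def load_place_list(paths=("top3000_cities.txt", "livable_cities_215.txt")):
    places = set()
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                name = line.strip()
                if name:
                    places.add(name.lower())    # case-insensitive lookup
    return places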

In fact, a previous study (Accurate Information Extraction from Research Papers using Conditional Random Fields, 2004) reports that the use of lexicon features does not influence the performance much. But because of the uniqueness of our corpus, which leads to some confusion between proper noun types (name and place), we expect that an appropriate use of proper noun lists would help distinguish these two types.

As a first attempt to apply proper nouns in a CRF model, we simply use them as features, as in the earlier study. Considering that the name list is far from complete, we handle the surname and forename separately. That is, for a given token, we first check whether it is in the surname list and then whether a nearby token is in the forename list. In this way, even a full name whose parts are found separately in the list can be detected. The place lists are simpler: we just check whether the given token is in the lists.

 We constructed four different models applying different lists as follows:

  •  People name list
  • City list
  • Country list + City list
  • People name list + City list

In this version, we verified the learning and test data again and made some corrections. In addition, the “PUNC” feature, which was ignored in the previous experiments, is now counted.

As we can see in the above table, all the attempts to apply proper noun lists have failed. The proper noun features may be somewhat effective at detecting the corresponding proper noun types; however, the incomplete lists rather reduce the overall annotation performance.

Consequently, we are now interested in developing a new method to incorporate these lists. Since the Revues.org data includes several useful additional lists, we expect to find an effective way to use them that will contribute to the annotation performance.

CRF model reconstruction on the revised data (Corpus 1)

For the moment, we set the note classification problem aside and return to CRF learning on corpus 1. After the detailed evaluation and revision of the automatic reference annotation results, we now try to reconstruct a CRF model on the revised learning data. As written in the completeness verification report, the main corrections of the learning and test data were carried out on title and book title. So we expect the reconstructed CRF model to separate title and book title better than before. The following table shows the estimation results of this newly constructed CRF model.

This result is not exactly what we expected. In spite of the data revision, the general accuracy decreased by about 1% compared to the final version on corpus 1. Considering that a 1% difference is acceptable given the change of learning data, we conclude that the automatic annotation quality has not really changed.

Small errors in the manual annotation do not influence the automatic annotation quality. Instead, the limits of the current version of the Bilbo system should be overcome by using more informative features or by modifying the existing CRF model. Named entity lists can serve the former approach, and in the next posting we show the results of experiments that use these lists.

Experimental results of note classification with different feature generation strategies – Part 3

In this posting, we present our last four strategies, which add the different binary local feature types one by one.

Strategy 7 : input + non-weighted global features + a binary local feature (posspage)

Features M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. . STARTINITIAL POSSPAGE
Feature counts m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1 strartintial:1 posspage:1
Numerical features 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1 10004:1 10010:1
#Unique features 3939

The local feature ‘POSSPAGE’ is applied. Instead of counting the appearances of this feature in a note string, a binary value is used to mark its existence.
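Illustration of the difference (a hypothetical helper): the feature is recorded as present or absent rather than counted.

def add_binary_feature(feature_counts, name="posspage", present=True):
    counts = dict(feature_counts)
    if present:
        counts[name] = 1      # binary: 1 whenever the feature occurs, regardless of how often
    return counts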

Total accuracy : 91.09

[Positive] Precision : 93.02 Recall : 95.42     [Negative] Precision : 84.31 Recall : 77.48

With this strategy, we obtain a small gain, but one that is significant enough given that just one feature type is appended. Encouraged by this result, we continue to add the other types of local features.

 

Strategy 8 : input + non-weighted global features + binary local features (posspage + weblink)

– Feature table is omitted –

Total accuracy : 91.30

[Positive] Precision : 93.28 Recall : 95.42     [Negative] Precision : 84.47 Recall : 78.38

The local feature ‘WEBLINK’ is applied. We get again a small gain.

 

Strategy 9:  input + non-weighted global features + binary local features (posspage + weblink + posseditor)

 – Feature table is omitted –

 Total accuracy : 91.30

[Positive] Precision : 93.28 Recall : 95.42     [Negative] Precision : 84.47 Recall : 78.38

The local feature ‘POSSEDITOR’ is applied, but there is no change. This is perhaps because of the low frequency of this feature in the note data. We nevertheless decide to keep this feature in case the learning data is modified.

 

Strategy 10 : input + non-weighted global features + binary local features (posspage + weblink + posseditor + italic)

Features M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. . STARTINITIAL ITALIC POSSPAGE
Feature counts m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1 strartintial:1 italic:1 posspage:1
Numerical features 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1 10001:1 10004:1 10010:1
#Unique features 3942

Total accuracy : 94.78

[Positive] Precision : 95.77 Recall : 97.42     [Negative] Precision : 91.43 Recall : 86.49

In our final strategy, we apply the ‘ITALIC’ binary feature, and this brings a large improvement of more than 3 points in total accuracy. In particular, we obtain a great increase in both precision and recall for negative notes. This result is reasonable because the italic feature usually appears with the title of an article, which is one of the main components of a bibliographic reference.

In conclusion, we successfully generate a set of appropriate features consisting of three groups: input features, local features and global features. With the final strategy, we obtain about 95 percent micro-averaged precision for the note classification. This is 7 points better than the baseline strategy, which uses only input word features but is one of the most widely used approaches for text classification.

Experimental results of note classification with different feature generation strategies – Part 2

Strategy 5 : input + 12 feature types + weighted global features

Features M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. . ALLCAP INITIAL STARTINITIAL FIRSTCAP ALLSMALL ALLCAP INITIAL FIRSTCAP PUNC PUNC FIRSTCAP ALLSMALL ALLSMALL ALLSMALL ALLSMALL NONIMPCAP ALLSMALL PUNC PUNC ITALIC FIRSTCAP ITALIC ALLSMALL ITALIC FIRSTCAP PUNC ALLNUMBERS PUNC ALLSMALL POSSPAGE NUMBERS DASH PUNC
Feature counts m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1 allcap:2 initial:2 strartintial:1 firstcap:5 allsmall:8 punc:7 nonimpcap:1 italic:3 posspage:1 allnumbers:1 numbers:1 dash:1
Numerical features 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1 10001:3 10002:5 10003:2 10004:12 10005:8 10006:2 10007:7 10008:1 10009:1 10010:1 10011:1 10012:1
#Unique features 3949

Here we add 5 global features describing the distributional patterns of local features in a note string. Since a global feature expresses a pattern over features, it takes a binary value (true or false). We also decide to weight the global features according to the lengths of the local and global feature sets, for a more exact comparison in the vector space.

Total accuracy : 90.0

[Positive] Precision : 92.20 Recall : 94.84     [Negative] Precision : 82.18 Recall : 74.77

Not only the total accuracy but also the other measures increase, especially the recall of negative notes, which is about 5 points higher than the best result so far (strategy 2, negative recall: 69.37).

 

Strategy 6 : input + non-weighted global features

Features M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. . STARTINITIAL
Feature counts m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1 strartintial:1
Numerical features 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1 10001:1
#Unique features 3938

This time, we return to the problem of strategy 3, which we had set aside for a moment. Despite the failure of the local feature application, the global features based on them are successfully applied to the SVM classification. Expecting that there is a better way to organize the local feature space, we first try eliminating all the local features from strategy 5. We no longer need to weight the global features, because the local features that caused the imbalance with the binary values of the global features have now disappeared.

Total accuracy : 90.65

[Positive] Precision : 92.5 Recall : 95.42     [Negative] Precision : 84.0 Recall : 75.68

We obtain an interesting result by eliminating the local features: the performance is not really different from that of the previous experiment. This means that the existing local feature usage does not influence the classification performance. It is quite interesting because, when the global features were not applied, the local ones were rather harmful for classification (strategy 3). In this circumstance, we expect that a set of well-organized local features may increase the performance, or at least not hurt it. With this supposition, we try some other representations of local features in the next posting.

 

Experimental results of note classification with different feature generation strategies – Part 1

For the experiments, we used a well-known implementation of Support Vector Machines, SVMlight, developed by Thorsten Joachims of Cornell University. The learning and test data for this program should be represented in the following format:

[category] [id]:[count] [id]:[count] [id]:[count] …

where each line corresponds to a document, that is, a note in our case. The number at the very front indicates whether the corresponding note contains bibliographic information (1) or not (-1). The features, converted into numeric values, then follow: instead of a token itself, a unique feature id of the token and its count in the note are given. This format is one of the standard representations of text documents for machine learning techniques.
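As an illustration (the feature ids below are made up), a small helper that writes one note in this format could look like this:

def to_svmlight_line(label, feature_counts, vocab):
    """label: 1 or -1; feature_counts: e.g. {"droit": 2, ",": 4};
    vocab: {feature_name: numeric_id}; ids must appear in increasing order."""
    pairs = sorted((vocab[f], c) for f, c in feature_counts.items() if f in vocab)
    return " ".join([str(label)] + ["%d:%d" % (i, c) for i, c in pairs])

# Example with made-up ids: to_svmlight_line(1, {"droit": 2, ",": 4}, {",": 23, "droit": 47})
# gives "1 23:4 47:2".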

Before applying a strategy for the extraction of learning and test data, we randomly select 70% of the notes in the corpus as learning data and the rest as test data. We keep this selection for the other strategies. The common statistics of corpus 2 are given in the following table.

Total #notes 1532
#Notes in learning data 1072
#Notes in test data 460
#Positive notes in total data 1147 (1147/1532 = 0.75)
#Positive notes in learning data 798 (798/1072 = 0.74)
#Positive notes in test data 349 (349/460 = 0.76)

A positive note is a note including at least one bibliographic reference. All the positive notes in corpus 2 are manually annotated, as in corpus 1. The proportion of positive notes is 0.75, which reflects the real proportion in the articles of Revues.org.

Now we show the experimental results of 10 different feature generation strategies obtained by changing the feature types and the detailed criteria. We take again the note example from the previous posting to show how the data representation differs according to the selected strategy.

Example: M. Tozy et M. Mahdi, « Aspects du droit communautaire dans l’Atlas marocain », Droit et Société, 1990, p. 219-227.

 

Strategy 1: input words as features

Features M. et M. Aspects du droit dans marocain Droit et Société p.
Feature counts m.:2 et:2 aspects:1 du:1 droit:2 dans:1 marocain:1 société:1 p.:1
Numerical features 30:1 41:2 60:2 61:1 62:1 63:2 64:1 65:1 66:1
#Unique features 3920 

All notes are represented using just the input words and their counts. Then 70% of the selected notes are used as learning data. We did not consider a more complicated scheme such as tf-idf, because even a stop word, which is easily discounted in a tf-idf representation, can be an efficient indicator for our note classification. After constructing an SVM model on the learning data, we obtain the following accuracies on the test data.

Total accuracy (Micro Averaged Precision) : 87.61

[Positive] Precision : 88.42 Recall : 96.28     [Negative] Precision : 83.75 Recall : 60.36

Compared to the positive notes, both the precision and especially the recall are not good for the negative notes. The learned SVM classifier tends to classify notes into the positive class. This means that the features used are not sufficient to describe the characteristics of negative notes well.

 

Strategy 2: input words + punctuation marks as features

Features M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. .
Feature counts m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1
Numerical features 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1
#Unique features 3933

The punctuation marks are added to the features of strategy 1. We have verified that these features are indispensable for the sequence learning of a CRF model on our corpus 1. Even though the learning objective here is different, we expect that these marks capture well the specificity of bibliographic references. The identical notes are selected for learning and test data in order to compare the different strategies more exactly (the same holds for the rest).

Total accuracy : 89.35

[Positive] Precision : 90.76 Recall : 95.70     [Negative] Precision : 83.70 Recall : 69.37

Total accuracy increases by about 2 points compared to the previous strategy. In particular, a remarkable gain (9 points) is achieved on the recall of negative notes. So the usefulness of punctuation marks as classification features is confirmed.

 

Strategy 3: input (strategy 2) + 11 feature types (CRF local features)

Features M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. . ALLCAP INITIAL FIRSTCAP ALLSMALL ALLCAP INITIAL FIRSTCAP PUNC PUNC FIRSTCAP ALLSMALL ALLSMALL ALLSMALL ALLSMALL NONIMPCAP ALLSMALL PUNC PUNC ITALIC FIRSTCAP ITALIC ALLSMALL ITALIC FIRSTCAP PUNC ALLNUMBERS PUNC ALLSMALL NUMBERS DASH PUNC
Feature counts m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1 allcap:2 initial:2 firstcap:5 allsmall:8 punc:7 nonimpcap:1 italic:3 allnumbers:1 numbers:1 dash:1
Numerical features 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1 10001:3 10002:5 10003:2 10004:1 10005:8 10006:2 10007:7 10008:1 10009:1 10010:1 10011:1
#Unique features 3944

The 11 local features that were defined and verified as useful during the CRF learning experiments on corpus 1 are now applied to the note classification. We also keep the input features of strategy 2. All the local features describing the tokens included in a note are counted. Since the local features capture the surface form of tokens, they are expected to be useful for the classification.

Total accuracy : 86.09

[Positive] Precision : 87.80 Recall : 94.84     [Negative] Precision : 78.31 Recall : 58.56

The result is different from what we expected. Total accuracy decreases by more than 3 points, and all the precisions and recalls are far below those of strategy 2, even below those of strategy 1. Our analysis is that this may be caused by an inappropriate treatment of the local features. For example, a binary value instead of the feature count may be more suitable for capturing a broad pattern of local features.

Strategy 4 : input + 11 feature types + ‘posspage’ feature

Features (same as above) + POSSPAGE
Feature counts (same as above) + posspage:1
Numerical features (same as above) + 10012:1
#Unique features 3945

Despite the failure of the above strategy using local features, we decided to test an additional local feature, ‘POSSPAGE’. This feature was previously tested on corpus 1 but turned out not to be effective for the reference annotation. However, it can be a useful indicator for the classification. It indicates that a token has the form of a page indicator such as ‘p.’, ‘pp.’, ‘p’ or ‘pp’.
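A minimal sketch of such a detector (a simple regular expression over the listed forms):

import re

_POSSPAGE = re.compile(r"^p{1,2}\.?$", re.IGNORECASE)   # matches 'p', 'pp', 'p.' and 'pp.'

def is_posspage(token):
    return bool(_POSSPAGE.match(token))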

Total accuracy : 87.39

[Positive] Precision : 89.01 Recall : 95.13     [Negative] Precision : 80.46 Recall : 63.06

The result is still below that of strategy 2; however, the accuracies are higher than in the previous case. This signifies that the new local feature is effective for the classification.

 

Feature generation for note classification

In our experiments, features are divided into three main groups: input features, local features and global features. Input features are the words or punctuation marks in the note string. Local features are the characteristics of input features, as in CRF models; we take the same 11 features used in our CRF learning for corpus level 1. Finally, we generate a special type of feature, the global feature, which describes the distributional patterns of local features in a note string. For example, the global feature ‘NONUMBERS’ expresses that a note contains no numbers in its string.

Five global features that seem useful for the separation are created (a small computation sketch follows the list):

  • NOPUNC : there are no punctuation marks in the note
  • ONEPUNC : there is just one punctuation mark in the note
  • NONUMBERS : there are no numbers in the note
  • NOINITIAL : the note includes no initial expressions
  • STARTINITIAL : the note starts with an initial expression
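A small sketch of how these five features could be computed from the per-token local features (a straightforward reading of the definitions above; names are illustrative):

def global_features(token_feats):
    """token_feats: one list of local feature names per token,
    e.g. [["ALLCAP", "INITIAL"], ["FIRSTCAP"], ["ALLSMALL"], ...]."""
    flat = [f for feats in token_feats for f in feats]
    out = set()
    n_punc = flat.count("PUNC")
    if n_punc == 0:
        out.add("NOPUNC")
    elif n_punc == 1:
        out.add("ONEPUNC")
    if "NUMBERS" not in flat and "ALLNUMBERS" not in flat:
        out.add("NONUMBERS")
    if "INITIAL" not in flat:
        out.add("NOINITIAL")
    if token_feats and "INITIAL" in token_feats[0]:
        out.add("STARTINITIAL")
    return out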

Note that the typical features in text classification are the input features or their variations obtained via dimension reduction techniques. Moreover, in most cases, punctuation marks are not counted. However, in our experiments, we actively test many possible features that can influence the judgment of whether a note is a bibliographic reference.

We show the experimental results of 10 important feature generation strategies. The baseline strategy is the first one, which extracts only input words, excluding punctuation marks. In every strategy, we eliminate the numeric tokens and the unique tokens, which appear only once in the whole corpus. For the comparison, we take an example note and describe the features generated by each strategy.

Example: M. Tozy et M. Mahdi, « Aspects du droit communautaire dans l’Atlas marocain », Droit et Société, 1990, p. 219-227.

Input features M. et M. Aspects du droit dans marocain Droit et Société p.    , « », , , .
Local features ALLCAP INITIAL FIRSTCAP ALLSMALL ALLCAP INITIAL FIRSTCAP PUNC PUNC FIRSTCAP ALLSMALL ALLSMALL ALLSMALL ALLSMALL NONIMPCAP ALLSMALL PUNC PUNC ITALIC FIRSTCAP ITALIC ALLSMALL ITALIC FIRSTCAP PUNC ALLNUMBERS PUNC ALLSMALL NUMBERS DASH PUNC
Global features STARTINITIAL


Specificity of note texts as classification object

A text document for classification is basically represented by a set of word count-based features. The simplest is the word frequency feature. The tf-idf (term frequency–inverse document frequency) weight computes each word’s importance in a document from its count in the document and its presence across all documents.
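For reference, one common formulation of this weight (among several variants) is:

\mathrm{tfidf}(t,d) = \mathrm{tf}(t,d) \times \log \frac{N}{\mathrm{df}(t)}

where tf(t, d) is the count of term t in document d, N is the total number of documents, and df(t) is the number of documents containing t.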

On this basic representation, we can apply a dimension reduction technique such as feature selection to extract a more effective representation. However, the fact remains that the original representation of text documents is based on word counts. This representation is reasonable because text classification aims to divide documents mostly according to their contextual similarity and dissimilarity, and the features carrying the most contextual information are the words themselves.

On the other hand, the notes in our corpus level 2 need not only contextual features but also formal patterns for classification. As we can see in the examples, the basis for deciding whether a note is a bibliographic reference is its form rather than the words it contains. But we should also note that there are several important phrases or words indicating that a note has bibliographic information.

Because of this specificity of notes, we try to generate new features that can reflect the sequential form of note texts. In the next postings, we describe a number of our feature generation strategies.