{"id":3964,"date":"2019-09-19T12:24:02","date_gmt":"2019-09-19T12:24:02","guid":{"rendered":"http:\/\/temu.bsc.es\/meddocan\/?p=3964"},"modified":"2019-10-03T17:57:59","modified_gmt":"2019-10-03T15:57:59","slug":"evaluation-library","status":"publish","type":"post","link":"https:\/\/temu.bsc.es\/meddocan\/index.php\/evaluation-library\/","title":{"rendered":"Evaluation Library"},"content":{"rendered":"\n<p style=\"background-color:#ffe6ec\" class=\"has-background\"><strong>Attention: <\/strong>This script has been changed to fit the requirements of CodaLab. However, the metrics and scores returned by the script remain the same. The CodaLab version of this script and its documentation can be found at <a href=\"https:\/\/github.com\/PlanTL-SANIDAD\/MEDDOCAN-CODALAB-Evaluation-Script\">GitHub<\/a>.<\/p>\n\n\n\n<h3><strong>Introduction<\/strong><\/h3>\n\n\n\n<p>The MEDOCAN evaluation script is based on the evaluation script from the <em>i2b2 2014 Cardiac Risk<\/em> and <em>Personal Health-care Information (PHI)<\/em> tasks. It is intended to be used via command line:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$&gt; python evaluate.py [i2b2|brat] [ner|spans] GOLD SYSTEM\n<\/pre>\n\n\n\n<p>It produces Precision, Recall and F1 (P\/R\/F1) and leak score measures for the NER sub-track and P\/R\/F1 for the SPAN sub-track. 
For the SPAN sub-track, an additional metric is also reported in which spans are merged whenever only non-alphanumeric characters occur between them.<\/p>\n\n\n\n<p><em>SYSTEM<\/em> and <em>GOLD<\/em> may be individual files or directories; in the latter case, every file in <em>SYSTEM<\/em> is compared to the file with the same name in the <em>GOLD<\/em> directory.<\/p>\n\n\n\n<p>The MEDDOCAN evaluation script <strong>can be downloaded from <\/strong><a href=\"https:\/\/github.com\/PlanTL-SANIDAD\/MEDDOCAN-Evaluation-Script\"><strong>GitHub<\/strong><\/a>.<\/p>\n\n\n\n<h3><strong>Prerequisites<\/strong><\/h3>\n\n\n\n<p>The evaluation script requires <strong>Python 3<\/strong> to be installed on your system.<\/p>\n\n\n\n<h3><strong>Directory structure<\/strong><\/h3>\n\n\n\n<p><strong>annotated_corpora\/<\/strong><\/p>\n\n\n\n<p>This directory contains files with annotations in the Brat annotation format. It may contain different sub-directories for different annotation levels: tokens, sentence splitting, part-of-speech tagging, etc. The sub-directory <code>sentence_splitting<\/code> is mandatory to compute the <code>leak score<\/code> evaluation metric. These files must be stored with the <code>.ann<\/code> suffix.<\/p>\n\n\n\n<p><strong>gold\/<\/strong><\/p>\n\n\n\n<p>This directory contains the gold standard files in <code>brat<\/code> and <code>i2b2<\/code> format. Each sub-directory may contain different sub-directories for each data set: sample, train, development, test, etc. Files in the latter directories must be in the appropriate format: <code>.ann<\/code> and <code>.txt<\/code> for <code>brat<\/code>, and <code>.xml<\/code> for <code>i2b2<\/code>.<\/p>\n\n\n\n<p><strong>system\/<\/strong><\/p>\n\n\n\n<p>This directory contains the submission files in <code>brat<\/code> and <code>i2b2<\/code> format. Each sub-directory may contain different sub-directories for each data set: sample, train, development, test, etc. 
Each of the previous directories may contain any number of folders, one for each system run. Files in the latter directories must be in the appropriate format: <code>.ann<\/code> and <code>.txt<\/code> for <code>brat<\/code>, and <code>.xml<\/code> for <code>i2b2<\/code>.<\/p>\n\n\n\n<h3><strong>Usage<\/strong><\/h3>\n\n\n\n<p>The behavior of the script can be configured with the following options:<\/p>\n\n\n\n<ul><li>The&nbsp;<code>i2b2<\/code>&nbsp;and&nbsp;<code>brat<\/code>&nbsp;options select the input format of the files.<\/li><li>The&nbsp;<code>ner<\/code>&nbsp;and&nbsp;<code>spans<\/code>&nbsp;options select the sub-track.<\/li><li>The&nbsp;<code>gs_dir<\/code>&nbsp;and&nbsp;<code>sys_dir<\/code>&nbsp;arguments select the gold standard and system output folders.<\/li><li>The&nbsp;<code>-v\/--verbose<\/code>&nbsp;option controls the verbosity level, additionally listing the scores for each document.<\/li><\/ul>\n\n\n\n<p>These options are passed on the command line:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">usage: evaluate.py [-h] [-v]\n                   {i2b2,brat} {ner,spans} gs_dir sys_dir [sys_dir ...]\n\nEvaluation script for the MEDDOCAN track.\n\npositional arguments:\n  {i2b2,brat}    Format\n  {ner,spans}    Subtrack\n  gs_dir         Directory to load GS from\n  sys_dir        Directories with system outputs (one or more)\n\noptional arguments:\n  -h, --help     show this help message and exit\n  -v, --verbose  List also scores for each document\n<\/pre>\n\n\n\n<h3><strong>Basic Examples<\/strong><\/h3>\n\n\n\n<p>Evaluate the single system output file &#8217;01.xml&#8217; against the gold standard file &#8217;01.xml&#8217; in the NER sub-track. 
Input files in i2b2 format:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$&gt; python evaluate.py i2b2 ner gold\/01.xml system\/run1\/01.xml\n\nReport (SYSTEM: run1):\n------------------------------------------------------------\nDocument ID                        Measure        Micro\n------------------------------------------------------------\n01                                 Leak           1.462\n                                   Precision      0.3333              \n                                   Recall         0.1364              \n                                   F1             0.1935              \n------------------------------------------------------------\n<\/pre>\n\n\n\n<p>Evaluate the single system output file &#8217;01.ann&#8217; against the gold standard file &#8217;01.ann&#8217; in the NER sub-track. Input files in BRAT format:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$&gt; python evaluate.py brat ner gold\/01.ann system\/run1\/01.ann\n\nReport (SYSTEM: run1):\n------------------------------------------------------------\nDocument ID                        Measure        Micro\n------------------------------------------------------------\n01                                 Leak           1.462\n                                   Precision      0.3333              \n                                   Recall         0.1364              \n                                   F1             0.1935              \n------------------------------------------------------------\n<\/pre>\n\n\n\n<p>Evaluate the set of system outputs in the folder system\/run1 against the set of gold standard annotations in gold\/ using the SPANS sub-track. 
Input files in i2b2 format.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$&gt; python evaluate.py i2b2 spans gold\/ system\/run1\/\n\nReport (SYSTEM: run1):\n------------------------------------------------------------\nSubTrack 2 [strict]                Measure        Micro\n------------------------------------------------------------\nTotal (15 docs)                    Precision      0.3468\n                                   Recall         0.1239              \n                                   F1             0.1826              \n------------------------------------------------------------\n\nReport (SYSTEM: run1):\n------------------------------------------------------------\nSubTrack 2 [merged]                Measure        Micro\n------------------------------------------------------------\nTotal (15 docs)                    Precision      0.469\n                                   Recall         0.1519              \n                                   F1             0.2294              \n------------------------------------------------------------\n<\/pre>\n\n\n\n<p>Evaluate the sets of system outputs in the folders system\/run1, system\/run2 and system\/run3 against the set of gold standard annotations in gold\/ using the NER sub-track. 
Input files in BRAT format.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$&gt; python evaluate.py brat ner gold\/ system\/run1\/ system\/run2\/ system\/run3\/\n\nReport (SYSTEM: run1):\n------------------------------------------------------------\nSubTrack 1 [NER]                   Measure        Micro\n------------------------------------------------------------\nTotal (15 docs)                    Leak           1.369\n                                   Precision      0.3258              \n                                   Recall         0.1239              \n                                   F1             0.1795              \n------------------------------------------------------------\n\nReport (SYSTEM: run2):\n------------------------------------------------------------\nSubTrack 1 [NER]                   Measure        Micro\n------------------------------------------------------------\nTotal (15 docs)                    Leak           1.462\n                                   Precision      0.3333              \n                                   Recall         0.1364              \n                                   F1             0.1935              \n------------------------------------------------------------\n\nReport (SYSTEM: run3):\n------------------------------------------------------------\nSubTrack 1 [NER]                   Measure        Micro\n------------------------------------------------------------\nTotal (15 docs)                    Leak           1.6\n                                   Precision      0.4              \n                                   Recall         0.1429              \n                                   F1             0.2105              \n------------------------------------------------------------\n\n<\/pre>\n\n\n\n<h3><strong>License<\/strong><\/h3>\n\n\n\n<p>The MEDDOCAN evaluation script is distributed under the <em>MIT License<\/em>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Attention: This 
script has been changed to fit the requirements of CodaLab. However, the metrics and scores returned by the script remain the same. The CodaLab version of this script and its documentation can be found on GitHub. Introduction The MEDDOCAN evaluation script is based on the evaluation script from the i2b2 2014 Cardiac Risk [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[5],"tags":[],"_links":{"self":[{"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/posts\/3964"}],"collection":[{"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/comments?post=3964"}],"version-history":[{"count":10,"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/posts\/3964\/revisions"}],"predecessor-version":[{"id":4151,"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/posts\/3964\/revisions\/4151"}],"wp:attachment":[{"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/media?parent=3964"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/categories?post=3964"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/temu.bsc.es\/meddocan\/index.php\/wp-json\/wp\/v2\/tags?post=3964"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}