Evaluation

Evaluation will be performed by comparing the automatically generated results against the manual annotations produced by experts. The main metrics are micro-averaged precision, recall, and F1-score.
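
To make the micro-averaging explicit, the sketch below illustrates how the scores can be computed by pooling true positives, false positives, and false negatives over all documents before deriving precision, recall, and F1. This is only an illustration, not the official Evaluation Library: the document IDs, spans, and labels are invented, and the real task may compare annotations differently.

```python
# Minimal sketch of micro-averaged precision/recall/F1, assuming each
# document's annotations can be represented as a set of (span, label) tuples.
# All identifiers and annotations below are hypothetical examples.

gold = {
    "doc1": {("0-7", "Disease"), ("12-20", "Drug")},
    "doc2": {("5-11", "Symptom")},
}
predicted = {
    "doc1": {("0-7", "Disease"), ("12-20", "Symptom")},
    "doc2": {("5-11", "Symptom"), ("30-36", "Drug")},
}

tp = fp = fn = 0
for doc_id, gold_anns in gold.items():
    pred_anns = predicted.get(doc_id, set())
    tp += len(gold_anns & pred_anns)   # annotations found by both
    fp += len(pred_anns - gold_anns)   # spurious predictions
    fn += len(gold_anns - pred_anns)   # missed gold annotations

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"micro-P={precision:.3f}  micro-R={recall:.3f}  micro-F1={f1:.3f}")
```

Because the counts are pooled across all documents before the ratios are taken, frequent labels weigh more heavily than rare ones, which is the defining property of micro-averaging.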

  • For more info about the evaluation phases, metrics and some examples, check the Evaluation Setting page.
  • For more info on the Evaluation Library used for the task, check the Evaluation Library page.
  • For more info on the Submission method, check the Submission page.