

Publications

ProfNER’s proceedings and resulting papers will be available here after the task takes place. Meanwhile, you can check some publications related to the task:

  • Weissenbacher D, Gonzalez-Hernandez G. Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task. 2019. https://www.aclweb.org/anthology/W19-32
  • Weissenbacher D, Sarker A, Klein A, O’Connor K, Magge A, Gonzalez-Hernandez G. Deep neural networks ensemble for detecting medication mentions in tweets. Journal of the American Medical Informatics Association 2019;ocz156. https://doi.org/10.1093/jamia/ocz156
  • Sarker A, Chandrashekar P, Magge A, Cai H, Klein A, Gonzalez G. Discovering cohorts of pregnant women from social media for safety surveillance and analysis. Journal of Medical Internet Research 2017;19(10):e361. https://doi.org/10.2196/jmir.8164
  • O’Connor K, Sarker A, Perrone J, Gonzalez-Hernandez G. Monitoring prescription medication abuse from Twitter: an annotated corpus and annotation guidelines for reproducible machine learning research. Journal of Medical Internet Research Preprints 13/08/2019:15861. https://doi.org/10.2196/preprints.15861
  • Klein AZ, Sarker A, Cai H, Weissenbacher D, Gonzalez-Hernandez G. Social media mining for birth defects research: a rule-based, bootstrapping approach to collecting data for rare health-related events on Twitter. Journal of Biomedical Informatics 2018;87:68-78. https://doi.org/10.1016/j.jbi.2018.10.001
  • Klein AZ, Sarker A, Weissenbacher D, Gonzalez-Hernandez G. Towards scaling Twitter for digital epidemiology of birth defects. npj Digital Medicine 2019;2:96. https://doi.org/10.1038/s41746-019-0170-5

Workshop

Participating teams are required to submit a paper describing the system(s) they ran on the test data. Sample system descriptions can be found on pages 89-136 of the #SMM4H 2019 proceedings. Accepted system descriptions will be included in the #SMM4H 2021 proceedings.

We encourage, but do not require, at least one author of each accepted system description to register for the #SMM4H 2021 Workshop, co-located with NAACL, and present their system as a poster. Select participants, as determined by the program committee, will be invited to extend their system description to up to four pages, plus unlimited references, and present their system orally.

Contact & FAQ

Email Martin Krallinger at encargo-pln-life@bsc.es or Antonio Miranda at antonio.miranda@bsc.es


  1. Q: What is the goal of the shared task?
    The goal is to predict the category (Track 1) or the named entities (Track 2) of the tweets in the test and background sets.

  2. Q: How do I register?
    Here: https://forms.gle/1qs3rdNLDxAph88n6

  3. Q: How do I submit the results?
    In CodaLab: https://competitions.codalab.org/competitions/28766

  4. Q: Can I use additional training data to improve model performance?
    Yes, participants may use any additional training data they have available, as long as they describe it in the system description.

  5. Q: ProfNER has two tracks. Do I need to complete all tracks?
    The sub-tracks are independent, and participants may take part in either one or both of them.

  6. Q: Is there a Google Group for the ProfNER task?
    Yes: https://groups.google.com/g/smm4h21-task-7

Evaluation

The evaluation will be carried out on CodaLab, using the following metrics:

Precision (P) = true positives/(true positives + false positives)

Recall (R) = true positives/(true positives + false negatives)

F-score (F1) = 2*((P*R)/(P+R))
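The three formulas above can be sketched in Python as follows (the function name is illustrative; this is not the official scorer):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from true positive,
    false positive and false negative counts, guarding against
    division by zero."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

For example, 8 true positives with 2 false positives and 2 false negatives gives P = R = F1 = 0.8.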

Track A – Tweet binary classification

Submissions will be ranked by Precision, Recall and F1-score for the positive class (F-score is the primary metric).

Prediction format: tab-separated file with headers.

Track B – NER offset detection and classification

Submissions will be ranked by Precision, Recall and F1-score for each PROFESION [profession] or SITUACION_LABORAL [working status] mention extracted, where the spans overlap entirely (F-score is the primary metric).

A correct prediction must have the same beginning and ending offsets as the Gold Standard annotation, as well as the same label (PROFESION or SITUACION_LABORAL).
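This strict-matching criterion can be sketched as follows (a minimal illustration, not the official CodaLab scorer; spans are assumed to be (begin, end, label) triples):

```python
def evaluate_spans(gold, pred):
    """Count exact-match true positives, false positives and false
    negatives: a prediction counts as correct only if its
    (begin, end, label) triple appears verbatim in the gold standard."""
    gold_set = set(gold)
    pred_set = set(pred)
    tp = len(gold_set & pred_set)   # exact span and label match
    fp = len(pred_set - gold_set)   # predicted but not in gold
    fn = len(gold_set - pred_set)   # in gold but missed
    return tp, fp, fn
```

Note that a prediction whose span is off by even one character, or whose label differs, counts as both a false positive and a false negative.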

Prediction format: tab-separated file with headers. Same as the tab-separated format of the Gold Standard.


CodaLab

Predictions for each subtask should be contained in a single .tsv (tab-separated values) file. This file (and only this file) should be compressed into a .zip file. Please upload this .zip file as your submission. For the evaluation phase, which will start on the 1st of March, you are allowed to add the validation set to the training set for training purposes.
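Packaging the predictions file as described can be done with Python's standard `zipfile` module (a minimal sketch; the file names are placeholders):

```python
import os
import zipfile

def zip_predictions(tsv_path, zip_path):
    """Compress a single predictions .tsv file into a .zip archive
    ready for upload to CodaLab."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # Store only the bare file name inside the archive, with no
        # directory components, so the .tsv sits at the archive root.
        zf.write(tsv_path, arcname=os.path.basename(tsv_path))
```

Make sure the archive contains only the .tsv at its root, with no extra files or folders.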

Codalab: https://competitions.codalab.org/competitions/28766

  1. Register and wait for approval
  2. To make submissions: Participate -> Submit/View Results -> Click on Task -> Click Submit -> Select File
    Refresh your submission; its status goes from Submitted -> Running -> Finished. Scores should be available in the files. You can choose to submit your best scores to the Leaderboard.
  3. To view results: Results -> Click on Task -> View results in table

You will be allowed to make unlimited submissions during the validation stage. During the evaluation stage only 2 submissions will be allowed.

Registration

To participate in a task, register (for free) via the Google Form.

Please choose a team name you will remember, since we will use it throughout the whole competition!

Student registrants are required to provide the name and email address of a faculty team member who has agreed to serve as their advisor/mentor for developing their system and writing their system description (see below). By registering for a task, participants agree to run their system on the test data and upload at least one set of predictions to CodaLab. Teams may upload up to three sets of predictions per task. By receiving access to the annotated tweets, participants agree to Twitter’s Terms of Service and may not redistribute any portion of the data.