SemEval-2017 Task - GM-RKB - Gabor Melli


A SemEval-2017 Task is a SemEval task associated with the SemEval-2017 workshop.

  • Context:
    • It is divided into 12 NLP benchmark tasks, including:
      • SemEval-2017 Semantic Textual Similarity Benchmark Task,
      • SemEval-2017 Question Answering Benchmark Task,
      • SemEval-2017 Sentiment Analysis Benchmark Task,
      • SemEval-2017 Parsing Semantic Text Benchmark Task.
  • Example(s):
    • SemEval-2017 Task 1,
    • SemEval-2017 Task 2,
    • SemEval-2017 Task 10 (ScienceIE),
    • SemEval-2017 Task 11,
    • SemEval-2017 Task 12.
  • Counter-Example(s):
    • a SemEval-2018 Task,
    • a SemEval-2016 Task.
  • See: Multilingual All-Words Sense Disambiguation and Entity Linking, Semantic Similarity Task.
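To make the flagship task concrete: SemEval-2017 Task 1 (Semantic Textual Similarity) asks systems to score how similar two sentences are on a 0–5 scale, with system output compared against human gold scores by Pearson correlation. The sketch below is purely illustrative, not an official SemEval baseline: it uses a toy token-overlap scorer (an assumption chosen for brevity; real submissions used word embeddings or neural encoders) together with a from-scratch Pearson correlation, which is the task's actual evaluation metric.

```python
import math

def similarity_0_to_5(s1, s2):
    """Toy STS scorer: Jaccard overlap of lowercased tokens, scaled to [0, 5].
    Illustrative only -- not a SemEval-2017 baseline system."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    if not t1 and not t2:
        return 5.0  # two empty sentences are trivially identical
    return 5.0 * len(t1 & t2) / len(t1 | t2)

def pearson(xs, ys):
    """Pearson correlation coefficient, the official STS evaluation metric."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sentence pairs and gold scores, shown only to illustrate
# the evaluation loop; these are not drawn from the STS benchmark data.
pairs = [("a man is playing a guitar", "a man plays the guitar"),
         ("a dog runs in the park", "the stock market fell today")]
gold = [4.5, 0.2]
system = [similarity_0_to_5(a, b) for a, b in pairs]
print(pearson(system, gold))
```

A real Task 1 submission would replace `similarity_0_to_5` with a stronger model, but the evaluation step (Pearson correlation against gold annotations) stays the same.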

References

2017

  • (Bethard et al., 2017) ⇒ Steven Bethard, Marine Carpuat, Marianna Apidianaki, Saif M. Mohammad, Daniel M. Cer, and David Jurgens. (2017). “Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval ACL 2017).”
    • QUOTE: SemEval-2017 was co-located with the 55th annual meeting of the Association for Computational Linguistics (ACL’2017) in Vancouver, Canada. It included the following 12 shared tasks organized in three tracks:
      • Semantic comparison for words and texts.
        • Task 1: Semantic Textual Similarity.
        • Task 2: Multi-lingual and Cross-lingual Semantic Word Similarity.
        • Task 3: Community Question Answering.
      • Detecting sentiment, humor, and truth.
        • Task 4: Sentiment Analysis in Twitter.
        • Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News.
        • Task 6: #HashtagWars: Learning a Sense of Humor.
        • Task 7: Detection and Interpretation of English Puns.
        • Task 8: RumourEval: Determining Rumour Veracity and Support for Rumours.
      • Parsing semantic structures.
        • Task 9: Abstract Meaning Representation Parsing and Generation.
        • Task 10: Extracting Keyphrases and Relations from Scientific Publications.
        • Task 11: End-User Development using Natural Language.
        • Task 12: Clinical TempEval.

2017b

  • http://alt.qcri.org/semeval2017/index.php?id=tasks
