Parallel session 2: Language Models: Training, Adaptation and Evaluation
Session Chair: Damith Premasiri, Varna hall
10.45-11.10 Bridging the Gap between Subword and Character Segmentation in Pretrained Language Models (Shun Kiyono, Sho Takase, Shengzhe Li and Toshinori Sato)
11.10-11.35 Forming Trees with Treeformers (Nilay Patel and Jeffrey Flanigan)
11.35-12.00 Modeling Easiness for Training Transformers with Curriculum Learning (Leonardo Ranaldi, Giulia Pucci and Fabio Massimo Zanzotto)
12.00-12.25 Towards a Consensus Taxonomy for Annotating Errors in Automatically Generated Text (Rudali Huidrom and Anya Belz)
12.25-12.50 Evaluating Large Language Models in Relationship Extraction from Unstructured Data: Empirical Study from Holocaust Testimonies (Isuri Anuradha, Le An Ha, Ruslan Mitkov and Vinita Nahar)
12.50-13.15 Lessons Learnt from Linear Text Segmentation: a Fair Comparison of Architectural and Sentence Encoding Strategies for Successful Segmentation (Iacopo Ghinassi, Lin Wang, Chris Newell and Matthew Purver)