Chair: Petya Osenova
- QuARK: LLM-Based Domain-Specific Question Answering Using Retrieval Augmented Generation and Knowledge Graphs (Edward Burgin, Sourav Dutta and Mingxue Wang)
- From Courtroom to Corpora: Building a Named Entity Corpus for Urdu Legal Texts (Adeel Zafar, Sohail Ashraf and Slawomir Nowaczyk)
- Financial News as a Proxy of European Central Bank Interest Rate Adjustments (Davide Paris, Martina Menzio and Elisabetta Fersini)
- LLM Compression: How Far Can We Go in Balancing Size and Performance? (Sahil Sk, Debashish Dhal, Sonal Khosla, Akash Dhaka, Shantipriya Parida, Sk Shahid, Sambit Shekhar, Dilip Prasad and Ondrej Bojar)
- Chakoshi: A Customizable Guardrail for LLMs with a Focus on Japanese-Language Moderation (Kazuhiro Arai, Ryota Matsui, Kenji Miyama, Yudai Yamamoto, Ren Shibamiya, Kaito Sugimoto and Yoshimasa Iwase)
- Instruction Finetuning to Attribute Language Stage, Dialect and Provenance Region to Historical Church Slavic Texts (Piroska Lendvai, Uwe Reichel, Anna Jouravel, Achim Rabus and Elena Renje)
- C-SHAP: Collocation-Aware Explanations for Financial NLP (Martina Menzio, Elisabetta Fersini and Davide Paris)
- Named Entity Recognition and Relation Extraction for Better Gut-Brain Interplay Understanding (Aleksis Ioannis Datseris, Mario Kuzmanov, Ivelina Nikolova-Koleva, Svetla Boytcheva and Dimitar Taskov)
- Balancing the Scales: Addressing Gender Bias in Social Media Toxicity Detection (Beatriz Botella-Gil, Juan Pablo Consuegra-Ayala, Alba Bonet-Jover and Paloma Moreda-Pozo)
- Authorship Verification Using Cloze Test with Large Language Models (Tomáš Foltýnek, Tomáš Kancko and Pavel Rychly)
- The Challenge of Performing Ontology-driven Entity Extraction in Real-world Unstructured Textual Data from the Domain of Dementia (Sumaiya Suravee, Carsten Oliver Schmidt and Kristina Yordanova)
- Detecting Fake News in the Era of Language Models (Muhammad Irfan Fikri Sabri, Hansi Hettiarachchi and Tharindu Ranasinghe)
- Reddit-V: A Virality Prediction Dataset and Zero-Shot Evaluation with Large Language Models (Samir el-Amrany, Matthias R. Brust, Salima Lamsiyah and Pascal Bouvry)
- Utilizing Large Language Models for Focused Conversational Assistants (Shruti Dhavalikar and Karthika Vijayan)
- APIO: Automatic Prompt Induction and Optimization for Grammatical Error Correction and Text Simplification (Artem Chernodub, Aman Saini, Yejin Huh, Vivek Kulkarni and Vipul Raheja)
- Pushing the (Generative) Envelope: Measuring the Effect of Prompt Technique and Temperature on the Generation of Model-based Systems Engineering Artifacts (Erin Smith Crabb, Cedric Bernard, Matthew Jones and Daniel Dakota)
- Multi-Agent Reinforcement Learning for Interactive Code Debugging with Human Feedback and Memory (Anjana Krishnamoorthy, Kartik Ivatury and Benyamin Ahmadnia)
- Differential Robustness in Transformer Language Models: Empirical Evaluation under Adversarial Text Attacks (Taniya Gidatkar, Oluwaseun Ajao and Matthew Shardlow)
- MLDataForge: Accelerating Large-Scale Dataset Preprocessing and Access for Multimodal Foundation Model Training (Andrea Blasi Núñez, Lukas Paul Achatius Galke and Peter Schneider-Kamp)
- Modelling the Relative Contributions of Stylistic Features in Forensic Authorship Attribution (G. Çağatay Sat, John Blake and Evgeny Pyshkin)
- Benchmarking Item Difficulty Classification in German Vocational Education and Training (Alonso Palomino and Benjamin Paassen)
- Mitigating Bias in Text Classification via Prompt-Based Text Transformation (Charmaine Barker and Dimitar Kazakov)
- Synthetic vs. Gold: The Role of LLM Generated Labels and Data in Cyberbullying Detection (Arefeh Kazemi, Sri Balaaji Natarajan Kalaivendan, Joachim Wagner, Hamza Qadeer, Kanishk Verma and Brian Davis)
- From the Tractatus Logico-Philosophicus to Later Wittgenstein: An NLP-Based Comparative Analysis (Andreiana Mihail, Silviu-Florin Gheorghe, Andrei Fotea and Liviu P. Dinu)
- Cyberbullying Detection via Aggression-Enhanced Prompting (Aisha Saeid, Anu Sabu, Girish Koushik, Ferrante Neri and Diptesh Kanojia)
- Graph-based RAG for Low-Resource Aromanian–Romanian Translation (Laurentiu G. Ghetoiu and Sergiu Nisioi)
- A Linguistically-informed Comparison between Multilingual BERT and Language-specific BERT Models: The Case of Differential Object Marking in Romanian (Maria Tepei and Jelke Bloem)
- Can LLMs Disambiguate Grounded Language? The Case of PP Attachment (John Blackmore and Matthew Stone)
- Optimism, Pessimism, and the Language between: Model Interpretability and Psycholinguistic Profiling (Stefana Arina Tabusca and Liviu P. Dinu)
- F-LoRA-QA: Finetuning LLaMA Models with Low-Rank Adaptation for French Botanical Question Generation and Answering (Ayoub Nainia, Régine Vignes-Lebbe, Hajar Mousannif and Jihad Zahir)
- Detecting Gender Stereotypical Language Using Model-agnostic and Model-specific Explanations (Manuela Nayantara Jeyaraj and Sarah Jane Delany)
- Prompting Techniques for Reducing Social Bias in LLMs through System 1 and System 2 Cognitive Processes (Mahammed Kamruzzaman and Gene Louis Kim)
- Reversing Causal Assumptions: Explainability in Online Sports Dialogues (Asteria Kaeberlein and Malihe Alikhani)
- How LLMs Influence Perceived Bias in Journalism (Asteria Kaeberlein and Malihe Alikhani)
- Exploring the Performance of Large Language Models for Event Detection and Extraction in the Domain of Health (Hristo Tanev, Nicolas Stefanovitch, Tomáš Harmatha and Diana F. Sousa)
- HoloBERT: Pre-Trained Transformer Model for Historical Narratives (Isuri Anuradha, Le An Ha and Ruslan Mitkov)