Workshop Session: Automated Legal Question Answering Competition (ALQAC 2022)
Automated Legal Question Answering Competition (ALQAC)
Run in association with the International Conference on Knowledge and Systems Engineering (KSE 2022)
ALQAC-2022 CALL FOR TASK PARTICIPATION
ALQAC-2022 Workshop: October 19-21, 2022
ALQAC-2022 Registration due: May 30, 2022
Sponsored by
Japan Advanced Institute of Science and Technology (JAIST)
University of Engineering and Technology (VNU-UET)
Overview
As an associated event of KSE 2022, we are happy to announce the 2nd Automated Legal Question Answering Competition (ALQAC 2022). ALQAC includes two tasks: (1) Legal Document Retrieval and (2) Legal Question Answering. For the competition, we introduce a Legal Question Answering dataset: a manually annotated dataset based on well-known statute laws in the Vietnamese language. Through the competition, we aim to develop a research community around legal support systems.
Prize
There are two tasks. For each task, we award:
- One first prize: the winning team receives $250.
- One second prize: the runner-up team receives $150.
- Two third prizes: each receiving team receives $50.
In total, the prize pool is $1,000 ($500 per task). In addition, the winning team of each task will have the KSE conference fee of its presenter covered.
Dataset
The dataset file formats are illustrated by the following examples.
- Legal Articles: details of each law and its articles are given in the following format:
[
  {
    "id": "45/2019/QH14",
    "articles": [
      {
        "text": "The content of legal article",
        "id": "1"
      }
    ]
  }
]
- Annotation Samples: details of each annotated sample are given in the following format:
[
  {
    "question_id": "q-1",
    "text": "The content of question or statement",
    "answer": <span of text>,
    "relevant_articles": [
      {
        "law_id": "45/2019/QH14",
        "article_id": "1"
      }
    ]
  }
]
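For concreteness, a minimal Python sketch for loading these files could look as follows; the file names law.json and train.json are placeholders for illustration, not official names:

import json

# Placeholder file names; the official release may use different ones.
with open("law.json", encoding="utf-8") as f:
    laws = json.load(f)        # list of laws, each with its articles
with open("train.json", encoding="utf-8") as f:
    samples = json.load(f)     # list of annotated questions/statements

# Index every article text by its (law_id, article_id) pair.
articles = {
    (law["id"], article["id"]): article["text"]
    for law in laws
    for article in law["articles"]
}

# Look up the articles relevant to the first annotated sample.
for ref in samples[0]["relevant_articles"]:
    print(articles[(ref["law_id"], ref["article_id"])])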
Tasks
Task Descriptions
Task 1: Legal Document Retrieval
The goal of Task 1 is to return the articles relevant to a given question or statement. An article is considered “relevant” to a statement if and only if the statement can be answered or verified by that article.
Specifically, the input samples consist of:
- Legal Articles: in the same format as the Legal Articles described in the Dataset section.
- Questions: in the following JSON format:
[
  {
    "question_id": "q-1",
    "text": "The content of question or statement"
  }
]
The system should return all relevant articles for each question in the following format:
[
  {
    "question_id": "q-1",
    "text": "The content of question or statement",
    "relevant_articles": [
      {
        "law_id": "45/2019/QH14",
        "article_id": "1"
      }
    ]
  }
]
Note that “relevant_articles” is the list of all articles relevant to the question/statement.
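As an illustration only (not an official baseline), a team could start from simple lexical matching, for example ranking candidate articles by TF-IDF cosine similarity to the question text. The sketch below assumes scikit-learn is available; the function and argument names are hypothetical:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(question_text, article_ids, article_texts, top_k=1):
    # Rank candidate articles by TF-IDF cosine similarity to the question.
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(article_texts)
    query_vec = vectorizer.transform([question_text])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(article_ids, scores), key=lambda pair: pair[1], reverse=True)
    return [article_id for article_id, _ in ranked[:top_k]]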
The evaluation measures are precision, recall, and the F2-measure, defined as follows:

Precision_i = (the number of correctly retrieved articles for query i) / (the number of retrieved articles for query i)

Recall_i = (the number of correctly retrieved articles for query i) / (the number of relevant articles for query i)

F2_i = (5 × Precision_i × Recall_i) / (4 × Precision_i + Recall_i)

F2 = the average of F2_i over all queries
In addition to the above evaluation measures, standard information retrieval measures such as Mean Average Precision and R-precision may be used to discuss the characteristics of the submitted results.
In ALQAC 2022, the final evaluation score over all queries is computed as a macro-average (the evaluation measure is calculated for each query and then averaged over queries) rather than a micro-average (the evaluation measure is calculated over the pooled results of all queries).
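For clarity, the macro-averaged F2 computation can be sketched in Python as follows. This is an illustrative re-implementation, not the official evaluation code; both arguments are assumed to map a question_id to a set of (law_id, article_id) pairs:

def macro_f2(predictions, gold):
    f2_scores = []
    for qid, gold_articles in gold.items():
        predicted = predictions.get(qid, set())
        correct = len(predicted & gold_articles)
        precision = correct / len(predicted) if predicted else 0.0
        recall = correct / len(gold_articles) if gold_articles else 0.0
        denominator = 4 * precision + recall
        # F2 weighs recall more heavily than precision.
        f2_scores.append(5 * precision * recall / denominator if denominator else 0.0)
    # Macro-average: F2 is computed per query, then averaged over queries.
    return sum(f2_scores) / len(f2_scores)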
Task 2: Legal Question Answering
Factoid Questions
Given a legal question, the goal is to produce the span of text that is the exact answer to the question. For each question-answer pair, if the character sequence of the model’s prediction exactly matches the character sequence of the true answer, the predicted answer is considered correct; otherwise, it is considered incorrect.
Specifically, the input samples consist of questions in the following format:
[
  {
    "question_id": "q-1",
    "text": "The content of question"
  }
]
The system should return the answer as a span of text in the “answer” field, in the following JSON format:
[
  {
    "question_id": "q-1",
    "text": "The content of question",
    "answer": <span of text>
  }
]
The evaluation measure is accuracy, with respect to whether the question was answered correctly:

Accuracy = (the number of questions answered correctly) / (the number of all questions)
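An illustrative (unofficial) sketch of this exact-match accuracy in Python, where both arguments are assumed to map a question_id to an answer string:

def exact_match_accuracy(predictions, gold):
    correct = sum(
        1 for qid, true_answer in gold.items()
        if predictions.get(qid) == true_answer  # exact character-sequence match
    )
    return correct / len(gold)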
Note: The outputs submitted by the teams will be published in a public GitHub repository so that legal and AI experts can refer to them for analysis purposes. Expert evaluation is the official metric for deciding the performance of the teams’ systems.
Submission Details
Participants are required to submit a paper describing their methods and experimental results. Result files for each task must be submitted separately via e-mail. For each task, participants may submit a maximum of 3 result files, corresponding to 3 different settings/methods. The evaluation code is published on Google Colab (https://colab.research.google.com/drive/17tEVE2C56kHXxdfeooBPSA5mG4vZVTxf).
This notebook defines the input/output data structures and the evaluation methods for both tasks.
Note: Participants are responsible for ensuring that their result files follow the required format.
The following examples show the outputs that participants’ models must generate for each task:
Task 1: Legal Document Retrieval
[
  {
    "question_id": "q-193",
    "relevant_articles": [
      {
        "law_id": "100/2015/QH13",
        "article_id": "177"
      }
    ]
  },
  ...
]
Task 2: Legal Question Answering
[
  {
    "question_id": "q-193",
    "answer": <span of text>
  },
  ...
]
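Before submitting, teams may wish to sanity-check their result files against the required structure. A minimal, unofficial check for the Task 1 format could look like this (the official format is defined in the Colab notebook above):

import json

def check_task1_file(path):
    # Unofficial sanity check against the Task 1 submission format.
    with open(path, encoding="utf-8") as f:
        results = json.load(f)
    assert isinstance(results, list), "top level must be a JSON list"
    for entry in results:
        assert "question_id" in entry, "missing question_id"
        for ref in entry.get("relevant_articles", []):
            assert "law_id" in ref and "article_id" in ref, "malformed article reference"
    print(path, "-", len(results), "entries look well-formed")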
At least one author of each accepted paper must present the paper at the ALQAC workshop of KSE 2022.
Papers authored by the task winners will be included in the main KSE 2022 proceedings, provided the ALQAC organizers confirm their novelty after the review process.
Papers should conform to the standards set out on the KSE 2022 webpage (Submission section) and be submitted via EasyChair.
Application Details
Potential participants in ALQAC-2022 should respond to this call for participation by submitting an application via: tinyurl.com/ALQAC2022Registration.
Schedule (Timezone: AoE, Anywhere on Earth)
May 6, 2022: Call for participation
June 1, 2022: Training data release
June 20, 2022: Testing data release
August 15, 2022: Submission deadline for Tasks 1 & 2
August 20, 2022: Announcement of rankings/assessments
August 31, 2022: Submission deadline for papers/technical reports describing the methods
September 7, 2022: Notification of Acceptance
September 14, 2022: Camera-ready Submission
September 24, 2022: KSE Registration Deadline
October 19-21, 2022: KSE 2022 (task winners announced)
Questions and Further Information
Email: chau.nguyen@jaist.ac.jp with the subject [ALQAC-2022] <Content>
Program Committee
- Nguyen Le Minh, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Tran Duc Vu, The Institute of Statistical Mathematics (ISM), Japan
- Phan Viet Anh, Le Quy Don Technical University (LQDTU), Vietnam
- Nguyen Minh Tien, Hung Yen University of Technology and Education (UTEHY), Vietnam
- Nguyen Truong Son, Ho Chi Minh University of Science (VNU-HCMUS), Vietnam
- Nguyen Tien Huy, Ho Chi Minh University of Science (VNU-HCMUS), Vietnam
- Nguyen Ha Thanh, National Institute of Informatics, Japan
- Bui Minh Quan, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Dang Tran Binh, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Vuong Thi Hai Yen, University of Engineering and Technology (VNU-UET), Vietnam
- Nguyen Minh Phuong, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Nguyen Minh Chau, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Le Nguyen Khang, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Nguyen Dieu Hien, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Nguyen Thu Trang, Japan Advanced Institute of Science and Technology (JAIST), Japan
- Do Dinh Truong, Japan Advanced Institute of Science and Technology (JAIST), Japan