Published in the Applied Research Paper Track of the IEEE Access journal!
@ARTICLE{efeoglu_2026,
author={Efeoglu, Sefika and Paschke, Adrian and Schimmler, Sonja},
journal={IEEE Access},
title={Large Language Models for Continual Relation Extraction},
year={2026},
volume={},
number={},
pages={1-1},
keywords={Semantic Web;Computer networks;Continual Relation Extraction;Schema-Level Errors;Large Language Models;Knowledge Graph Construction},
doi={10.1109/ACCESS.2026.3682652}}
Note: The trained models are publicly available on HuggingFace, as stated in the journal article.
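For instance, a released checkpoint can be loaded with the transformers library. The repository id below is a placeholder, not a real id; use the ids listed in the article (the decoder-only checkpoints would use AutoModelForCausalLM instead):

# Minimal sketch: loading a released checkpoint from the HuggingFace Hub.
# NOTE: "your-org/flan-t5-base-cre" is a placeholder repo id, not the real one;
# substitute the model ids given in the journal article.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "your-org/flan-t5-base-cre"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)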
Repository layout:
.
├── LICENSE
├── README.md
├── config.ini
├── data -> dataset and split settings for TACRED and FewRel (e.g., relation types per task)
├── doc -> figures
├── results -> TACRED results with Flan-T5, and all FewRel results
├── logs -> time-cost logs for each experiment; the FewRel logs are inside the FewRel results
├── main.py
├── requirements.txt -> dependencies (required libraries)
└── src
    ├── CRE -> continual training of Flan-T5-Base, Llama2, and Mistral
    ├── analysis_viz -> visualization of logs and the Section 4 figures
    ├── clean -> removes explanations and instructions from the Llama and Mistral results
    ├── data_preparetation -> prompt dataset generation
    ├── metrics -> BWT, whole-accuracy, and average-accuracy computation
    ├── utils.py -> read/write helpers
    └── zero_shot_prompting -> ablation study (not included in the paper)

Set up the configuration in config.ini according to your needs before running the experiments.
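As a rough illustration of how such a file is consumed (the section and key names below are hypothetical, not the repo's actual schema; check config.ini for the real keys):

# Sketch: reading experiment settings with Python's standard configparser.
# The section/key names here are hypothetical examples, not the repo's schema.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

dataset = config.get("experiment", "dataset", fallback="tacred")          # hypothetical key
model_name = config.get("experiment", "model", fallback="flan-t5-base")   # hypothetical key
num_tasks = config.getint("experiment", "num_tasks", fallback=10)         # hypothetical key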
Run everything at once with
$ python main.py
or follow the steps below.
1.) Prepare datasets:
TACRED:
- This command converts each data row to a (sentence, subject, object, object_type, subject_type) tuple (see the sketch after this step):
$ python src/data_preparetation/data_prepare_tacred.py
- Split the dataset according to the setting of Cui et al. (2021):
$ python src/data_preparetation/instruction_ft_data_same_setting_tacred.py
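The exact conversion lives in data_prepare_tacred.py; the following is only a minimal sketch of the idea, assuming the standard raw TACRED JSON fields:

# Sketch (not the repo's code): turning one raw TACRED example into the
# (sentence, subject, object, object_type, subject_type) tuple described above.
# Assumes the standard raw TACRED fields: "token", "subj_start", "subj_end",
# "obj_start", "obj_end", "subj_type", "obj_type" (end indices are inclusive).
def convert_row(row: dict) -> dict:
    tokens = row["token"]
    subject = " ".join(tokens[row["subj_start"] : row["subj_end"] + 1])
    obj = " ".join(tokens[row["obj_start"] : row["obj_end"] + 1])
    return {
        "sentence": " ".join(tokens),
        "subject": subject,
        "object": obj,
        "object_type": row["obj_type"],
        "subject_type": row["subj_type"],
    }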
FewRel:
- Same steps as for TACRED:
$ python src/data_preparetation/data_preparation_fewrel.py
- Split:
$ python src/data_preparetation/instruction_ft_data_same_setting_fewrel.py
2.) Trainer (a schematic sketch of the continual loop follows after the commands)
- Decoder-only models (Llama2-7B-chat-hf and Mistral-Instruct-7B-v2.0):
$ python src/CRE/trainer_decoder.py
- Encoder-decoder model (Flan-T5-Base):
$ python src/CRE/trainer_t5.py
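The trainer scripts implement the actual training; the following is only a schematic of the continual loop, fine-tuning one model on each task's instruction data in sequence, with toy prompts standing in for the prepared datasets:

# Schematic of continual fine-tuning: one model, updated on each task in turn.
# The prompts, labels, and data loading are toy stand-ins; the real scripts
# train on the instruction datasets prepared in step 1.
from torch.optim import AdamW
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
optimizer = AdamW(model.parameters(), lr=3e-5)

# Each task introduces new relation types: (instruction prompt, relation label).
tasks = [
    [("What is the relation between 'Bill Gates' and 'Microsoft' in: "
      "'Bill Gates founded Microsoft.'?", "org:founded_by")],
    [("What is the relation between 'Steve Jobs' and 'Apple' in: "
      "'Steve Jobs worked at Apple.'?", "per:employee_of")],
]

model.train()
for examples in tasks:  # the sequential task stream
    for prompt, label in examples:
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        labels = tokenizer(label, return_tensors="pt", truncation=True).input_ids
        loss = model(**inputs, labels=labels).loss  # standard seq2seq LM loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    # After each task the same (updated) model moves on to the next task
    # and is evaluated on all relations seen so far.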
3.) Clean the decoder-only models' results of explanations (a cleaning sketch follows below):
$ python src/clean/clean_decoder_results.py
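The exact rules are in clean_decoder_results.py; conceptually, this step reduces a chatty decoder output to a bare relation label. The label list and first-match rule below are illustrative assumptions:

# Sketch: strip explanations/instructions from a decoder-only model's output,
# keeping only the predicted relation label. The label list and the first-match
# rule are illustrative assumptions, not the repo's actual cleaning logic.
import re

KNOWN_RELATIONS = ["org:founded_by", "per:employee_of", "no_relation"]  # excerpt

def clean_output(raw: str) -> str:
    for relation in KNOWN_RELATIONS:
        if re.search(re.escape(relation), raw, flags=re.IGNORECASE):
            return relation
    return "no_relation"  # fallback when no known label is found

print(clean_output("Sure! The relation here is org:founded_by because ..."))
# -> org:founded_by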
4.) Metrics
Average and whole accuracy metrics:
$ python src/metrics/cl_metrics.py
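A hedged sketch of these metrics as usually defined in continual learning (check cl_metrics.py for the paper's exact definitions):

# acc[k][i] = accuracy on task i's test set after training through task k.
def average_accuracy(acc, k):
    # Mean of per-task accuracies over all tasks seen up to (and including) task k.
    return sum(acc[k][: k + 1]) / (k + 1)

def whole_accuracy(correct, total, k):
    # Accuracy on the pooled test set of all relations seen up to task k,
    # i.e. a size-weighted mean of the per-task accuracies.
    return sum(correct[: k + 1]) / sum(total[: k + 1])

acc = [
    [0.90],              # after task 0
    [0.85, 0.88],        # after task 1
    [0.80, 0.84, 0.87],  # after task 2
]
print(average_accuracy(acc, 2))  # (0.80 + 0.84 + 0.87) / 3 ≈ 0.837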
Backward knowledge transfer computation:
$ python src/metrics/bwt.py
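bwt.py computes backward transfer; the standard definition (Lopez-Paz & Ranzato, 2017) averages, over the earlier tasks, the difference between the final accuracy on a task and the accuracy measured right after that task was learned:

# Sketch of Backward Transfer (BWT) as usually defined in continual learning:
# BWT = (1 / (T-1)) * sum_i (acc[T-1][i] - acc[i][i]) for i = 0 .. T-2.
# Negative BWT indicates forgetting. Check bwt.py for the exact variant used.
def bwt(acc):
    T = len(acc)  # acc[k][i]: accuracy on task i after training through task k
    return sum(acc[T - 1][i] - acc[i][i] for i in range(T - 1)) / (T - 1)

acc = [
    [0.90],
    [0.85, 0.88],
    [0.80, 0.84, 0.87],
]
print(bwt(acc))  # ((0.80 - 0.90) + (0.84 - 0.88)) / 2 = -0.07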
@inproceedings{cui-etal-2021-refining,
title = {{R}efining {S}ample {E}mbeddings with {R}elation {P}rototypes to {E}nhance {C}ontinual {R}elation {E}xtraction},
author = {Cui, Li and Yang, Deqing and Yu, Jiaxin and Hu, Chengwei and Cheng, Jiayang and Yi, Jingjie and Xiao, Yanghua},
editor = {Zong, Chengqing and Xia, Fei and Li, Wenjie and Navigli, Roberto},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
month = aug,
year = {2021},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/2021.acl-long.20},
doi = {10.18653/v1/2021.acl-long.20},
pages = {232--243}
}