[COLING 2025] Reasoning Graph enhanced Exemplars Retrieval for In-context Learning

This is the official implementation of "Reasoning Graph enhanced Exemplars Retrieval for In-context Learning" (RGER).

Method Framework

The retrieval framework consists of a reasoning-graph generation process followed by graph-similarity reranking, illustrated below.

[Figure: main framework]

[Figure: building the reasoning graph from a reasoning chain]
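The reranking idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's exact graph kernel: it compares the Weisfeiler-Lehman subgraph-hash multisets of two reasoning graphs (via networkx's `weisfeiler_lehman_subgraph_hashes`) with a Jaccard-style score, and the `wl_similarity`/`rerank` names are our own.

```python
# Hypothetical sketch of graph-similarity reranking (not the paper's exact
# graph kernel): compare Weisfeiler-Lehman subgraph-hash multisets of two
# reasoning graphs with a Jaccard-style score.
from collections import Counter

import networkx as nx


def wl_similarity(g1: nx.Graph, g2: nx.Graph, iterations: int = 3) -> float:
    """Jaccard similarity over WL subgraph hashes (a cheap kernel proxy)."""
    def hash_counts(g):
        # weisfeiler_lehman_subgraph_hashes returns, per node, one hash per
        # WL iteration; pool all of them into a single multiset.
        hashes = nx.weisfeiler_lehman_subgraph_hashes(g, iterations=iterations)
        return Counter(h for per_node in hashes.values() for h in per_node)

    c1, c2 = hash_counts(g1), hash_counts(g2)
    intersection = sum((c1 & c2).values())
    union = sum((c1 | c2).values())
    return intersection / union if union else 0.0


def rerank(query_graph, candidate_graphs):
    """Order candidate indices by structural similarity to the query, best first."""
    return sorted(range(len(candidate_graphs)),
                  key=lambda i: wl_similarity(query_graph, candidate_graphs[i]),
                  reverse=True)
```

In RGER this structural score is combined with semantic similarity from the trained retriever; the sketch above shows only the structural half.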

Setup

All required packages are listed in requirements.txt. You can install them in a new environment with:

conda create -n RGER python=3.10
conda activate RGER

git clone https://github.com/Yukang-Lin/RGER.git

# Adjust the following line for your CUDA version.
pip install torch==2.3.0
pip install transformers==4.41.1
cd RGER
pip install -r requirements.txt
# If you don't use the OpenAI API, comment out the `openai` package in requirements.txt.

Testing manually designed prompt methods

Manual CoT prompts are placed in the ./cot-prompt/ directory, and results are saved to output/svamp/vanilla/

sh scripts/concat_with_prompt.sh svamp cot
## this includes cot and complex-cot
sh scripts/prompt_inference.sh 0 svamp cot
## generate responses

For the auto-cot method:

sh scripts/autocot_retriever.sh svamp 8
## generate the auto-cot prompt in the directory ./cot-prompt
sh scripts/concat_with_prompt.sh svamp autocot
sh scripts/prompt_inference.sh 0 svamp autocot
## generate responses

Reproducing our results

  1. Train the CoT-aware embedding model. The model is saved at output/svamp/svamp/vicuna-7b/bert-fix_ctx-shared-bs64/

sh scripts/retriever_trainer.sh 0 svamp vicuna-7b
## you should configure model_path in the script

  2. Generate the graph representations.

  3. Retrieve exemplars and run inference. The retrieved exemplars are saved at output/svamp/svamp/vicuna-7b/RGER/prompt-RGER_rerank64.json, where candidate ids are stored in the 'ctxs' field. The responses are saved at output/svamp/svamp/vicuna-7b/RGER/pred_RGER_rerank64.json

sh scripts/retrieve_and_inference.sh 0 svamp RGER vicuna-7b
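Conceptually, the retrieved exemplar ids in the 'ctxs' field are used to assemble the few-shot prompt sent to the LLM. A minimal sketch follows; the `question`/`rationale`/`answer` keys and the `build_prompt` helper are assumptions for illustration, not the repository's exact schema.

```python
# Hypothetical sketch of assembling an in-context-learning prompt from
# retrieved exemplars. The field names are assumptions, not the repo's
# exact JSON schema.
def build_prompt(query: str, ctx_ids: list[int], pool: list[dict], k: int = 8) -> str:
    parts = []
    for i in ctx_ids[:k]:  # take the top-k retrieved candidates
        ex = pool[i]
        parts.append(f"Question: {ex['question']}\n"
                     f"Answer: {ex['rationale']} The answer is {ex['answer']}.")
    # The test query goes last, with the answer left for the model to complete.
    parts.append(f"Question: {query}\nAnswer:")
    return "\n\n".join(parts)
```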

Testing other retrieval-based methods

## Retrieve exemplars and run inference
sh scripts/retrieve_and_inference.sh 0 svamp DQ-LoRe vicuna-7b
## the method can be EPR, CEIL, or DQ-LoRe

Evaluating the responses

The scripts calculate accuracy for the responses and write the results to results/svamp

  • For manual CoT evaluation
sh scripts/evaluate_results_manual.sh svamp
  • For retrieval-based evaluation
sh scripts/evaluate_results_retrieved.sh svamp

Using your own dataset

You can process your data in the same manner as the paper and save your graphs in the directory save_graph/your_dataset in JSON form (networkx's json_graph module provides node_link_data for this).
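A minimal sketch of that serialization step, assuming the save_graph/your_dataset layout from above (the `save_graph_json` helper name is ours, not the repository's):

```python
# Sketch of saving a reasoning graph as JSON with networkx's json_graph
# utilities; the save_graph/<dataset> path follows the README's layout,
# and the helper name is hypothetical.
import json
import os

import networkx as nx
from networkx.readwrite import json_graph


def save_graph_json(g: nx.DiGraph, dataset: str, name: str) -> str:
    os.makedirs(f"save_graph/{dataset}", exist_ok=True)
    path = f"save_graph/{dataset}/{name}.json"
    with open(path, "w") as f:
        # node_link_data produces a plain-dict, JSON-serializable view.
        json.dump(json_graph.node_link_data(g), f)
    return path
```

The graph can be restored later with `json_graph.node_link_graph` on the loaded dictionary.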

Acknowledgement

The code framework builds on CEIL and DQ-LoRe, and we are very grateful for their previous work.

Citation

If you find this repository helpful, please cite our paper:

@inproceedings{lin-etal-2025-reasoning,
    title = "Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning",
    author = "Lin, Yukang  and
      Zhong, Bingchen  and
      Jiang, Shuoran  and
      Siebert, Joanna  and
      Chen, Qingcai",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.651/",
    pages = "9737--9759",
    abstract = "Large language models (LLMs) have exhibited remarkable few-shot learning capabilities and unified the paradigm of NLP tasks through the in-context learning (ICL) technique. Despite the success of ICL, the quality of the exemplar demonstrations can significantly influence the LLM{'}s performance. Existing exemplar selection methods mainly focus on the semantic similarity between queries and candidate exemplars. On the other hand, the logical connections between reasoning steps can also be beneficial to depict the problem-solving process. This paper proposes a novel method named Reasoning Graph-enhanced Exemplar Retrieval (RGER). RGER first queries LLM to generate an initial response and then expresses intermediate problem-solving steps to a graph structure. After that, it employs a graph kernel to select exemplars with semantic and structural similarity. Extensive experiments demonstrate the structural relationship is helpful to the alignment of queries and candidate exemplars. The efficacy of RGER on mathematics and logical reasoning tasks showcases its superiority over state-of-the-art retrieval-based approaches."
}

If you have any questions, please feel free to contact Yukang.Lin00@gmail.com
