EVOR: Evolving Retrieval for Code Generation

1The University of Hong Kong, 2Fudan University, 3Sea AI Lab

We introduce EVOR, a general pipeline for retrieval-augmented code generation (RACG). We construct a knowledge soup integrating web search, documentation, execution feedback, and evolved code snippets. Through active retrieval over this knowledge soup, we demonstrate significant gains on benchmarks covering frequently updated libraries and long-tail programming languages (8.6% to 34.6% execution accuracy with ChatGPT).

Abstract

Recently, retrieval-augmented generation (RAG) has been successfully applied to code generation. However, existing pipelines for retrieval-augmented code generation (RACG) employ static knowledge bases with a single source, limiting the adaptation capabilities of Large Language Models (LLMs) to domains of which they have insufficient knowledge. In this work, we develop a novel pipeline, EVOR, that employs the synchronous evolution of both queries and diverse knowledge bases. In two realistic settings where external knowledge is required to solve code generation tasks, we compile four new datasets associated with frequently updated libraries and long-tail programming languages, named EVOR-BENCH. Extensive experiments demonstrate that EVOR achieves two to four times the execution accuracy of other methods such as Reflexion (Shinn et al., 2024) and DocPrompting (Zhou et al., 2023). We demonstrate that EVOR is flexible and can easily be combined with these methods to achieve further improvement. Further analysis reveals that EVOR benefits from the synchronous evolution of queries and documents and from the diverse information sources in the knowledge base. We hope that our studies will inspire more insights into the design of advanced RACG pipelines in future research.

EVOR is effective because:
  • Knowledge Soup:   The diverse knowledge soup consists not only of documentation but also of additional resources, including web search results, code snippets, and execution feedback
  • Query Formulation:   Beyond the natural-language question, EVOR employs explained code as a more informative query, which helps retrieval models match the desired knowledge source
  • Active Retrieval:   Compared to one-time RAG, we actively refine both the query and the knowledge soup, which helps retrieve more relevant information (see the sketch after this list)
  • Performance:   Across all datasets, ChatGPT and CodeLlama achieve performance gains of 24.0% and 23.8% respectively, compared to the vanilla setting without knowledge augmentation
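These components interact as a retrieval-generation loop in which the query and the knowledge soup evolve together. Below is a minimal Python sketch of that loop under stated assumptions: the `generate`, `execute`, and `explain` callables stand in for LLM and sandbox calls, the lexical retriever is a deliberately simple stand-in for a real retrieval model, and none of these names correspond to the released EVOR code.

```python
from typing import Callable


def lexical_retrieve(query: str, pool: list[str], top_k: int = 3) -> list[str]:
    """Rank knowledge entries by simple token overlap with the query (toy retriever)."""
    q_tokens = set(query.lower().split())
    ranked = sorted(pool, key=lambda d: -len(q_tokens & set(d.lower().split())))
    return ranked[:top_k]


def evor_loop(
    question: str,
    docs: list[str],
    web_results: list[str],
    generate: Callable[[str, list[str]], str],   # LLM: (question, context) -> code
    execute: Callable[[str], tuple[bool, str]],  # run code, return (passed, feedback)
    explain: Callable[[str], str],               # LLM: code -> natural-language explanation
    max_rounds: int = 4,
) -> str:
    # The knowledge soup: documentation, web search, execution feedback, code snippets.
    soup = {"doc": list(docs), "web": list(web_results), "exec": [], "code": []}
    query = question  # round-0 query: the natural-language question itself
    code = ""
    for _ in range(max_rounds):
        # Retrieve from every source in the soup with the current query.
        context = [entry for pool in soup.values()
                   for entry in lexical_retrieve(query, pool)]
        code = generate(question, context)
        passed, feedback = execute(code)
        if passed:
            return code
        # Evolve the knowledge soup with execution feedback and the new snippet.
        soup["exec"].append(feedback)
        soup["code"].append(code)
        # Evolve the query: explained code is more informative than the bare question.
        query = explain(code)
    return code  # best effort once the retrieval budget is exhausted
```

In practice each source would be searched with a trained retriever and the context truncated to the model's window, but the essential idea is that the query and the knowledge soup are refined in every round rather than fixed once.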

Main Results

Diverse resources in general help LLM generalization.

ChatGPT and CodeLlama execution accuracy with different knowledge sources. Tensor-M refers to Tensorflow-M, and Avg. refers to the average score across the four benchmarks. Web denotes web search content; Exec denotes execution feedback from the compiler/interpreter; Code denotes code snippets generated by LLMs in previous rounds that are verified to be free of syntax errors; Doc refers to the documentation. Adding more knowledge sources consistently enhances performance, demonstrating the advantage of a diverse knowledge soup in the RACG pipeline.
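To make the source ablation concrete, the sketch below shows one way the four sources (Web, Exec, Code, Doc) could be serialized into a single augmented prompt. The section labels, ordering, and truncation limit are illustrative assumptions, not the prompt format used in the paper.

```python
def build_prompt(question: str, soup: dict[str, list[str]],
                 max_chars_per_source: int = 2000) -> str:
    """Compose an augmented prompt from whichever knowledge sources are enabled."""
    labels = {
        "web": "Web search results",
        "exec": "Execution feedback",
        "code": "Verified code snippets from previous rounds",
        "doc": "Documentation",
    }
    sections = []
    for key, label in labels.items():
        entries = soup.get(key, [])
        if not entries:
            continue  # an ablation simply leaves a source empty
        sections.append(f"### {label}\n" + "\n".join(entries)[:max_chars_per_source])
    sections.append(f"### Task\n{question}\nWrite the solution code.")
    return "\n\n".join(sections)
```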

Analysis

We demonstrate the significant performance increase provided by active retrieval on top of one-time retrieval.

Next, we show the critical role of both query formulation and retrieval model choice.

Furthermore, we evaluate retrieval accuracy, revealing that both retrieval and generator models have substantial room for improvement toward better generalization.
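As a rough illustration, retrieval accuracy can be measured as the fraction of ground-truth knowledge entries that appear in the retrieved set; this recall-style definition is an assumption for exposition, and the exact metric in the paper may differ.

```python
def retrieval_recall(retrieved: list[str], gold: list[str]) -> float:
    """Fraction of gold knowledge entries recovered by the retriever."""
    if not gold:
        return 1.0
    return sum(1 for g in gold if g in retrieved) / len(gold)
```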

With long-context models and more knowledge included in the prompt, model generalization performance is not guaranteed to improve, calling for more delicate mechanisms in the RACG pipeline.

Last but not least, we perform a case study to provide a more intuitive understanding of the improvement achieved by EVOR on top of LLMs.

Dataset Statistics

To evaluate EVOR, we curate four datasets covering two realistic scenarios of RACG: frequently updated libraries and long-tail programming languages. Here are the data statistics: