EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning

NAACL 2024
University of California, Irvine

Abstract

Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is tailored for four scenarios, including standard and chain-of-thought prompting, in both zero-shot and few-shot settings. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g., GSM8K, SVAMP), reading comprehension (e.g., DROP), and logical reasoning (e.g., Coin flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance.

EchoPrompt

EchoPrompt prompts language models to generate a rephrased version of the query before solving it. It has two variants, one for zero-shot learning and another for few-shot learning.

Zero-shot EchoPrompt

In standard Zero-shot chain-of-thought prompting, we use an instruction that guides the model to reason through its steps before giving the final answer. With EchoPrompt, we additionally ask the model to reiterate the question before reasoning about it. This change encourages the model to rephrase the question in its own words before it starts generating the answer. The prompt used to extract the final answer stays the same for both methods. The following highlights the key differences between the two approaches.
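The difference between the two zero-shot prompts can be sketched as below. This is a minimal illustration: the exact instruction wording and the `Q:`/`A:` formatting are assumptions for demonstration, not necessarily the paper's released prompts.

```python
# Sketch of the prompt difference between Zero-shot-CoT and zero-shot
# EchoPrompt. The instruction phrasing here is illustrative; the paper's
# exact trigger sentences may differ.

def zero_shot_cot(question: str) -> str:
    """Standard Zero-shot-CoT: ask the model to reason step by step."""
    return f"Q: {question}\nA: Let's think step by step."

def zero_shot_echoprompt(question: str) -> str:
    """EchoPrompt variant: ask the model to first restate the question,
    then reason step by step."""
    return (f"Q: {question}\n"
            "A: Let's repeat the question and also think step by step.")

question = "A farmer has 3 hens, and each hen lays 2 eggs a day. How many eggs are laid per day?"
print(zero_shot_echoprompt(question))
```

The only change is in the reasoning-trigger sentence; everything else, including the answer-extraction prompt appended afterwards, is shared between the two methods.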

Few-shot EchoPrompt

Similarly, in few-shot learning, we teach the language model to rephrase the test query in a particular structure before answering it. We do this by providing exemplars that demonstrate the rephrasing structure, along with their corresponding responses. The following figure shows an example of few-shot EchoPrompt.
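One way to assemble such a few-shot context is sketched below. The exemplar text, the rephrasing phrase, and the helper names are illustrative assumptions for this sketch, not the paper's released exemplars; the point is only that each exemplar answer begins by restating its question before reasoning to the final answer.

```python
# Sketch of building a few-shot EchoPrompt context. Each exemplar answer
# starts with a rephrased copy of its question (the "echo"), followed by
# the reasoning and final answer. Exemplar content is made up for
# illustration.

EXEMPLARS = [
    {
        "question": "There are 5 apples and you eat 2. How many are left?",
        "answer": ("Let's repeat the question: there are 5 apples and 2 of "
                   "them are eaten; how many remain? 5 - 2 = 3. "
                   "The answer is 3."),
    },
    {
        "question": "A box holds 4 pens. How many pens are in 6 boxes?",
        "answer": ("Let's repeat the question: each box holds 4 pens; how "
                   "many pens are in 6 boxes? 4 * 6 = 24. "
                   "The answer is 24."),
    },
]

def few_shot_echoprompt(test_query: str) -> str:
    """Concatenate rephrase-style exemplars, then append the test query."""
    blocks = [f"Q: {ex['question']}\nA: {ex['answer']}" for ex in EXEMPLARS]
    blocks.append(f"Q: {test_query}\nA:")
    return "\n\n".join(blocks)

print(few_shot_echoprompt(
    "A pack has 12 pencils and 4 packs are bought. How many pencils in total?"))
```

Because the exemplars all open their answers with the same rephrasing structure, the model learns to echo the test query in that structure before producing its own reasoning.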

Examples

Mathematical Reasoning examples with Few-shot EchoPrompt from GSM8K on GPT-3.5

BibTeX


    @misc{mekala2024echoprompt,
      title={EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning}, 
      author={Rajasekhar Reddy Mekala and Yasaman Razeghi and Sameer Singh},
      year={2024},
      eprint={2309.10687},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
    }
  

Acknowledgement

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.