This article evaluates RECKONING's generalizability on the real-world multi-hop logical reasoning task, FOLIO.

RECKONING: Reasoning through Dynamic Knowledge Encoding: Generalization to Real-World Knowledge


Abstract and 1. Introduction

  2. Background

  3. Method

  4. Experiments

    4.1 Multi-hop Reasoning Performance

    4.2 Reasoning with Distractors

    4.3 Generalization to Real-World Knowledge

    4.4 Run-time Analysis

    4.5 Memorizing Knowledge

  5. Related Work

  6. Conclusion, Acknowledgements, and References

A. Dataset

B. In-context Reasoning with Distractors

C. Implementation Details

D. Adaptive Learning Rate

E. Experiments with Large Language Models

4.3 Generalization to Real-World Knowledge

To investigate how well our method generalizes to real-world knowledge beyond the synthetic setting, we evaluate RECKONING on a more realistic multi-hop logical reasoning task, FOLIO [29], and report the results in Table 2. The dataset has a rich vocabulary, diverse logic patterns, and abundant language variations, and it has been shown to challenge LLMs in both supervised fine-tuning and in-context learning settings. As the baseline, we fine-tune a GPT-2 model following the in-context reasoning setting. As before, we train both the GPT-2 baseline and RECKONING with the multi-task objective. We also compare against more advanced baselines, GPT-3.5 (text-davinci-003 [55]) and ChatGPT (gpt-3.5-turbo [2]), two popular large language models with around 175B parameters. We evaluate both large models in the zero-shot and few-shot settings; in the few-shot setting, we prompt the model with 8 single-task examples randomly sampled from the training set to perform in-context learning. We find that RECKONING (initialized here from GPT-2) outperforms the GPT-2 in-context reasoning baseline. Compared to the two advanced large language models, RECKONING outperforms them by a significant margin (12% over the 0-shot and 7% over the 8-shot setting). We conclude that RECKONING is effective and significantly benefits reasoning tasks that use real-world knowledge.
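As an illustration of the few-shot setup described above, the sketch below assembles a k-shot FOLIO prompt from randomly sampled training demonstrations. This is a minimal sketch, not the authors' released evaluation code: the dataset field names, label set, and prompt template are assumptions made for the example.

```python
import random

# Hypothetical FOLIO-style examples. The field names (premises, conclusion,
# label) are assumptions made for this sketch and may not match the released
# dataset schema exactly.
train_examples = [
    {
        "premises": ["All squares are rectangles.",
                     "All rectangles have four sides."],
        "conclusion": "All squares have four sides.",
        "label": "True",
    },
    {
        "premises": ["No reptiles are warm-blooded.",
                     "Lizards are reptiles."],
        "conclusion": "Lizards are warm-blooded.",
        "label": "False",
    },
    # In the actual evaluation, all demonstrations would be drawn from the
    # FOLIO training split.
]


def format_example(example, with_answer=True):
    """Render one example as a prompt block: premises, conclusion, answer."""
    premises = "\n".join(f"- {p}" for p in example["premises"])
    block = (
        f"Premises:\n{premises}\n"
        f"Conclusion: {example['conclusion']}\n"
        f"Answer (True/False/Unknown):"
    )
    if with_answer:
        block += f" {example['label']}"
    return block


def build_few_shot_prompt(test_example, k=8, seed=0):
    """Sample k demonstrations from the training set and append the test query."""
    rng = random.Random(seed)
    shots = rng.sample(train_examples, k=min(k, len(train_examples)))
    demonstrations = "\n\n".join(format_example(ex) for ex in shots)
    query = format_example(test_example, with_answer=False)
    return f"{demonstrations}\n\n{query}"


# The resulting prompt string would then be sent to text-davinci-003 or
# gpt-3.5-turbo, and the generated answer compared against the gold label.
test_example = {
    "premises": ["Some mammals live in the ocean.", "Whales are mammals."],
    "conclusion": "Whales live in the ocean.",
}
print(build_few_shot_prompt(test_example, k=2))
```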

Table 2: Evaluation results on FOLIO. We compare RECKONING against the FT-ICR baseline with GPT-2 and two popular large language models.


:::info Authors:

(1) Zeming Chen, EPFL ([email protected]);

(2) Gail Weiss, EPFL ([email protected]);

(3) Eric Mitchell, Stanford University ([email protected]);

(4) Asli Celikyilmaz, Meta AI Research ([email protected]);

(5) Antoine Bosselut, EPFL ([email protected]).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[2] https://openai.com/blog/chatgpt
