## Paper: Tree of Thoughts: Deliberate Problem Solving with Large Language Models

learnandburn.ai

Summary by Adrian Wilkins-Caruana [pdf on arxiv] Some kinds of problems are hard to solve in your head, like 768 × 364. For problems like this, it helps to break the problem down into many smaller problems and write down intermediate answers, the way a multiplication algorithm does. LLMs are much the same: if we ask GPT-4 to predict the next token in the sequence “768 × 364 = ”, chances are it’ll get the answer wrong. But, like people, LLMs do better when we let them take their time, such as by instructing them to “think step by step.” Today’s paper is about a new way of getting LLMs to think using a “tree of thoughts.”
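To make the "write down intermediate answers" idea concrete, here's a minimal sketch of long multiplication in Python (not from the paper, just an illustration): each digit of the second operand produces a partial product that gets recorded before being summed.

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply a and b by summing one partial product per digit of b,
    mimicking the pencil-and-paper algorithm's intermediate answers."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** place  # intermediate answer for this digit
        total += partial
    return total

print(long_multiply(768, 364))  # → 279552
```

Each `partial` here plays the role of a written-down intermediate step; the tree-of-thoughts approach applies the same intuition to an LLM's reasoning.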
