Shaw Talebi
Feb 25, 2024

--

Good question! The key distinction is that fine-tuning involves training at least one internal model parameter. These are the values that determine how the LLM maps input text to output text.

When doing few-shot prompting, we leave the model as-is but include examples in the prompt to help guide its generations.

When fine-tuning, on the other hand, we use examples to update the model so it generates completions that better align with our expectations.
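To make the contrast concrete, here's a minimal toy sketch (not a real LLM, just a one-parameter stand-in model I made up for illustration): the same examples are either packed into the input (few-shot prompting, model untouched) or used for gradient updates (fine-tuning, a parameter changes).

```python
examples = [("great movie", "positive"), ("boring plot", "negative")]

# --- Few-shot prompting: examples go into the INPUT; the model is untouched ---
def build_few_shot_prompt(examples, query):
    shots = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

prompt = build_few_shot_prompt(examples, "loved the acting")
# 'prompt' now carries the examples; any frozen model could consume it.

# --- Fine-tuning: the same examples update an internal parameter ---
# Hypothetical one-parameter "model": score = w * feature(text)
w = 0.0  # the single internal parameter

def feature(text):
    return 1.0 if ("great" in text or "loved" in text) else -1.0

def label_to_target(label):
    return 1.0 if label == "positive" else -1.0

lr = 0.1
for text, label in examples:
    pred = w * feature(text)
    grad = (pred - label_to_target(label)) * feature(text)  # d/dw of squared error
    w -= lr * grad  # the parameter itself changes

# After training, w != 0.0: the model, not just the prompt, has been updated.
```

The point of the toy: few-shot prompting only changes what goes into the model, while fine-tuning changes what the model is.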

I talk more about the difference between prompt engineering and fine-tuning here: https://towardsdatascience.com/a-practical-introduction-to-llms-65194dda1148
