What are zero-shot, few-shot, and fine-tuning in OpenAI GPT models

When we say zero-shot or few-shot, we are simply describing the way we provide prompts when using GPT models such as GPT-3, GPT-4, or ChatGPT.

Performance-wise, always start with zero-shot, then move to few-shot (adding examples); if neither of them works, then fine-tune.

So what is a zero-shot prompt? Here is an example:

Extract keywords from the below text.

Text: {text}

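The zero-shot prompt above can be sent to the API with a few lines of Python. This is a minimal sketch assuming the official `openai` Python package (v1-style client) and an `OPENAI_API_KEY` environment variable; the model name is just one illustrative choice.

```python
def build_zero_shot_prompt(text: str) -> str:
    """Fill the zero-shot template from the post with the input text."""
    return f"Extract keywords from the below text.\n\nText: {text}"

def extract_keywords(text: str) -> str:
    """Send the zero-shot prompt to a chat model and return its reply."""
    # Lazy import so prompt building works even without the SDK installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user", "content": build_zero_shot_prompt(text)}],
    )
    return response.choices[0].message.content
```

Note that there is nothing special in the request itself: "zero-shot" only describes the prompt, which contains the instruction and the text but no worked examples.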

If the performance is not good, try adding a few examples to the prompt. This is called few-shot learning.

Extract keywords from the corresponding texts below.

Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.
Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications
Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.
Keywords 2: OpenAI, language models, text processing, API.
Text 3: {text}
Keywords 3:
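The few-shot prompt above can also be assembled programmatically, which keeps the example pairs in one place and makes it easy to add or swap them. This is a sketch using the two example pairs from the post; the function and variable names are illustrative.

```python
# Example (text, keywords) pairs, taken verbatim from the prompt above.
EXAMPLES = [
    ("Stripe provides APIs that web developers can use to integrate payment "
     "processing into their websites and mobile applications.",
     "Stripe, payment processing, APIs, web developers, websites, mobile applications"),
    ("OpenAI has trained cutting-edge language models that are very good at "
     "understanding and generating text. Our API provides access to these models "
     "and can be used to solve virtually any task that involves processing language.",
     "OpenAI, language models, text processing, API"),
]

def build_few_shot_prompt(text: str) -> str:
    """Assemble instruction + numbered examples + the new text to label."""
    parts = ["Extract keywords from the corresponding texts below.", ""]
    for i, (example_text, keywords) in enumerate(EXAMPLES, start=1):
        parts.append(f"Text {i}: {example_text}")
        parts.append(f"Keywords {i}: {keywords}")
    n = len(EXAMPLES) + 1
    parts.append(f"Text {n}: {text}")
    parts.append(f"Keywords {n}:")  # left open so the model completes it
    return "\n".join(parts)
```

Ending the prompt with an unfinished `Keywords 3:` line is the key trick: the model continues the pattern established by the examples.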

Sometimes, even with few-shot examples, performance might still not be what you expect, so try fine-tuning.
But note that as GPT models get more and more powerful, OpenAI expects people to rely more on the base models and less on fine-tuning, so support for fine-tuning might decrease in the future.

The resources from OpenAI on how to fine-tune are here:
fine-tuning instructions
fine-tuning code example

As for the model parameters, OpenAI notes that model and temperature are generally the most commonly used parameters for altering the model's output.

model - Higher-performance models are more expensive and have higher latency.

temperature - A measure of how often the model outputs a less likely token. The higher the temperature, the more random (and usually creative) the output. This, however, is not the same as "truthfulness". For most factual use cases, such as data extraction and truthful Q&A, a temperature of 0 is best.

max_tokens (maximum length) - Does not control the length of the output, but is a hard cutoff for token generation. Ideally you won't hit this limit often, as the model will stop either when it thinks it's finished or when it hits a stop sequence you defined.

stop (stop sequences) - A set of characters (tokens) that, when generated, will cause the text generation to stop.
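The parameters above can be tied together into a single request payload. This is a sketch; the model name, token limit, and stop sequence are illustrative choices, not recommendations from the post.

```python
def build_request(prompt: str) -> dict:
    """Assemble a chat-completion request using the parameters discussed above."""
    return {
        "model": "gpt-3.5-turbo",  # higher-performance models cost more, respond slower
        "temperature": 0,          # 0 suits factual tasks like data extraction
        "max_tokens": 256,         # hard cutoff on generation, not a target length
        "stop": ["\n\n"],          # generation halts if this sequence is produced
        "messages": [{"role": "user", "content": prompt}],
    }
```

You would pass these fields to the chat-completions endpoint (for example as keyword arguments to the client's `chat.completions.create` call).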

More information can be found in the OpenAI API reference:
OpenAI API reference

Author: robot learner
Reprint policy: Unless otherwise stated, all articles in this blog are licensed under CC BY 4.0. If reproduced, please indicate the source: robot learner!