# Models and API keys
## Setting API keys
To access the LM of your choice (and to access private GitHub repositories), you need to supply the corresponding keys.
There are three options to do this:

- Set the corresponding environment variables.
- Create a `.env` file at the root of this repository. All of the variables defined there will take the place of environment variables.
- Use `--agent.model.api_key` to set the key (see the config sketch after the example below).
Here's an example `.env` file:

```env
# Remove the comment '#' in front of the line for all keys that you have set
# GITHUB_TOKEN='GitHub Token for access to private repos'
# OPENAI_API_KEY='OpenAI API Key Here if using OpenAI Model'
# ANTHROPIC_API_KEY='Anthropic API Key Here if using Anthropic Model'
# TOGETHER_API_KEY='Together API Key Here if using Together Model'
```
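If you go with the third option instead, the `--agent.model.api_key` flag corresponds to the nested config layout used throughout these docs, so you can also set the key in a YAML config file. A minimal sketch (note that environment variables or a `.env` file are generally preferable to committing keys to a config file):

```yaml
agent:
  model:
    api_key: your-key-here  # sketch: the same value you would pass via --agent.model.api_key
```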
See the following links for tutorials on obtaining Anthropic, OpenAI, and GitHub tokens.
### Advanced settings

See the model config page for more details on advanced settings.
## Supported API models
We support all models supported by `litellm`; see their list here.
Here are a few options for `--agent.model.name`:

| Model | API key | Comment |
| --- | --- | --- |
| `claude-3-5-sonnet-20241022` | `ANTHROPIC_API_KEY` | Our recommended model |
| `gpt-4o` | `OPENAI_API_KEY` | |
| `o1-preview` | `OPENAI_API_KEY` | You might need to set temperature and sampling to the supported values. |
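For instance, to select the recommended model in a YAML config (a minimal sketch; the `agent.model.name` layout matches the local-model example further below):

```yaml
agent:
  model:
    name: claude-3-5-sonnet-20241022  # expects ANTHROPIC_API_KEY in your environment
```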
## Function calling and more: Setting the correct parser
The default config uses function calling to retrieve actions from the model response, i.e.,
the model directly provides the action as a JSON object.
If your model doesn't support function calling, you can use the `thought_action` parser by setting `agent.tools.parse_function` to `thought_action`.
Then, we extract the last triple-backtick block from the model response as the action.
See our API docs for more details on parsers.
Remember to document the tools in your prompt, as the model will not be able to see the function signatures as it would with function calling.
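For example, switching to the `thought_action` parser in a YAML config might look like this (a sketch assuming the parser can be selected by name, as the dotted path above suggests; check the API docs for the exact schema):

```yaml
agent:
  tools:
    parse_function: thought_action  # assumption: parser selected by name; see API docs
```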
### Specific models

See the model config page for more details on specific models.
## Using local models
We currently support all models served at an endpoint with an OpenAI-compatible API.
For example, to use Llama, you can follow the litellm instructions and set:
```yaml
agent:
  model:
    name: ollama/llama2
    api_base: http://localhost:11434
    per_instance_cost_limit: 0
    total_cost_limit: 0
    per_instance_call_limit: 100
```
If you do not disable the default cost limits, you will see an error because the cost calculator will not be able to find the model in the `litellm` model cost dictionary.
Please use `per_instance_call_limit` instead to limit the runtime per issue.
Please see the note above about using a config with the `thought_action` parser instead of the function calling parser; a combined sketch follows.
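Putting both pieces together, a local-model config with the `thought_action` parser might look like this (same assumptions as the sketches above):

```yaml
agent:
  model:
    name: ollama/llama2
    api_base: http://localhost:11434
    per_instance_cost_limit: 0   # disable cost accounting for local models
    total_cost_limit: 0
    per_instance_call_limit: 100 # bound runtime per issue instead
  tools:
    parse_function: thought_action  # assumption: parser selected by name
```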
## Further reads
See our API docs for all available options. Our model config page has more details on specific models and tips and tricks.
## Debugging
- If you get `Error code: 404`, please check your configured keys, in particular whether you set `OPENAI_API_BASE_URL` correctly (if you're not using it, the line should be deleted or commented out). Also see this issue for reference.
{% include-markdown "../_footer.md" %}