Models
This page documents the configuration objects used to specify the behavior of a language model (LM).
Normal LMs
sweagent.agent.models.GenericAPIModelConfig
pydantic-model
Bases: BaseModel
This configuration object specifies an LM such as GPT-4.
The model is served with the help of the litellm library.
Config:
- extra: 'forbid'
Fields:
- name (str)
- per_instance_cost_limit (float)
- total_cost_limit (float)
- temperature (float)
- top_p (float | None)
- api_base (str | None)
- api_version (str | None)
- api_key (SecretStr | None)
- stop (list[str])
- completion_kwargs (dict[str, Any])
- convert_system_to_user (bool)
- retry (RetryConfig)
- delay (float)
- fallbacks (list[dict[str, Any]])
- id (str)
api_base
pydantic-field
api_base: str | None = None
api_key
pydantic-field
api_key: SecretStr | None = None
API key for the model. We recommend setting this via environment variables instead, or by putting your environment variables in a .env file. You can concatenate more than one key by separating them with :::, e.g., key1:::key2.
api_version
pydantic-field
api_version: str | None = None
completion_kwargs
pydantic-field
completion_kwargs: dict[str, Any] = {}
Additional kwargs to pass to litellm.completion
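To illustrate, here is a minimal sketch of how these extra kwargs end up in a litellm call; the model name and kwarg values are placeholders, not defaults.

```python
import litellm

# Placeholder values for illustration only.
completion_kwargs = {"max_tokens": 1024}

response = litellm.completion(
    model="gpt-4o",  # corresponds to the `name` field above
    messages=[{"role": "user", "content": "Hello"}],
    **completion_kwargs,  # forwarded from the config
)
print(response.choices[0].message.content)
```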
convert_system_to_user
pydantic-field
convert_system_to_user: bool = False
Whether to convert system messages to user messages. This is useful for models that do not support system messages like o1.
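As an illustration (not the library's code), the conversion amounts to rewriting the role of system messages:

```python
# Illustrative only: rewrite system messages as user messages.
messages = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "Fix the bug in utils.py."},
]
converted = [
    {**msg, "role": "user"} if msg["role"] == "system" else msg
    for msg in messages
]
```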
delay
pydantic-field
delay: float = 0.0
Minimum delay before querying (this can help to avoid overusing the API if sharing it with other people).
fallbacks
pydantic-field
fallbacks: list[dict[str, Any]] = []
List of fallbacks to try if the main model fails. See https://docs.litellm.ai/docs/completion/reliable_completions#fallbacks-sdk for more information.
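As a rough sketch only: each entry is a dict of litellm completion parameters for a fallback model. The entry shape below (a "model" key plus optional overrides) is an assumption; consult the linked litellm documentation for the exact format.

```python
# Hypothetical fallback entries; verify the expected keys against the
# litellm fallbacks documentation linked above.
fallbacks = [
    {"model": "gpt-4o-mini"},
    {"model": "azure/gpt-4o", "api_base": "https://example.azure.com"},
]
```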
id
pydantic-field
id: str
name
pydantic-field
name: str
Name of the model.
per_instance_cost_limit
pydantic-field
per_instance_cost_limit: float = 3.0
Cost limit for every instance (task).
retry
pydantic-field
retry: RetryConfig
Retry configuration: how often to retry after a failure (e.g., due to a rate limit), etc.
stop
pydantic-field
stop: list[str] = []
Custom stop sequences
temperature
pydantic-field
temperature: float = 0.0
Sampling temperature
top_p
pydantic-field
top_p: float | None = 1.0
Sampling top-p
total_cost_limit
pydantic-field
total_cost_limit: float = 0.0
Total cost limit.
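Putting the fields together, a minimal sketch of building a configuration from a plain dict; the values are placeholders, and any fields your version requires without defaults would need to be added.

```python
from sweagent.agent.models import GenericAPIModelConfig

# Placeholder values; unspecified fields fall back to the defaults above.
config = GenericAPIModelConfig.model_validate(
    {
        "name": "gpt-4o",                # litellm model name
        "per_instance_cost_limit": 2.0,  # USD per task instance
        "temperature": 0.0,
        "top_p": 0.95,
    }
)
print(config.name, config.per_instance_cost_limit)
```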
get_api_keys
get_api_keys() -> list[str]
Source code in sweagent/agent/models.py
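Given the api_key description above, this method returns the configured key(s) as a list. A sketch of equivalent logic (an assumption, not the actual source):

```python
# Equivalent logic, assuming keys are stored ':::'-separated as documented
# for the api_key field; the real implementation may differ.
def get_api_keys_sketch(api_key: str | None) -> list[str]:
    if api_key is None:
        return []
    return api_key.split(":::")

assert get_api_keys_sketch("key1:::key2") == ["key1", "key2"]
```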
Manual models for testing
The following two models allow you to test your environment by prompting you for actions. This can also be very useful to create your first demonstrations.
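A minimal sketch of driving the human model directly, assuming HumanModelConfig lives next to HumanModel, that both it and ToolConfig (import path assumed) construct with defaults, and that you would normally select this model through your run configuration instead.

```python
from sweagent.agent.models import HumanModel, HumanModelConfig  # HumanModelConfig location assumed
from sweagent.tools.tools import ToolConfig  # import path assumed

model = HumanModel(HumanModelConfig(), ToolConfig())
# query() prompts you on stdin ("> " by default) and returns the typed
# action wrapped in a dict.
response = model.query(history=[{"role": "system", "content": "You are the agent."}])
```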
sweagent.agent.models.HumanModel
HumanModel(config: HumanModelConfig, tools: ToolConfig)
Bases: AbstractModel
Model that allows for human-in-the-loop interaction.
Source code in sweagent/agent/models.py
logger
instance-attribute
logger = get_logger('swea-lm', emoji='🤖')
multi_line_command_endings
instance-attribute
multi_line_command_endings = {command.name: command.end_name for command in commands if command.end_name is not None}
stats
instance-attribute
stats = InstanceStats()
query
query(history: History, action_prompt: str = '> ') -> dict
Wrapper to separate action prompt from formatting
Source code in sweagent/agent/models.py
sweagent.agent.models.HumanThoughtModel
HumanThoughtModel(config: HumanModelConfig, tools: ToolConfig)
Bases: HumanModel
Model that allows for human-in-the-loop interaction.
Source code in sweagent/agent/models.py
query
query(history: History) -> dict
Logic for handling user input (both thought + action) to pass to SWEEnv
Source code in sweagent/agent/models.py
Replay model for testing and demonstrations
sweagent.agent.models.ReplayModel
ReplayModel(config: ReplayModelConfig, tools: ToolConfig)
Bases: AbstractModel
Model used for replaying a trajectory (i.e., taking all the actions from the .traj file and re-issuing them).
Source code in sweagent/agent/models.py
logger
instance-attribute
logger = get_logger('swea-lm', emoji='🤖')
stats
instance-attribute
stats = InstanceStats()
submit_command
instance-attribute
submit_command = submit_command
use_function_calling
instance-attribute
use_function_calling = use_function_calling
query
query(history: History) -> dict
Logic for tracking which replay action to pass to SWEEnv
Source code in sweagent/agent/models.py
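A minimal sketch of replaying a recorded trajectory, assuming ReplayModelConfig exposes the path to the .traj file under a field like replay_path (name assumed) and that ToolConfig (import path assumed) constructs with defaults.

```python
from pathlib import Path

from sweagent.agent.models import ReplayModel, ReplayModelConfig
from sweagent.tools.tools import ToolConfig  # import path assumed

# "replay_path" is an assumed field name; check ReplayModelConfig for the
# actual one. The trajectory file itself is a placeholder path.
config = ReplayModelConfig(replay_path=Path("trajectories/example.traj"))
model = ReplayModel(config, ToolConfig())

# Each query() call returns the next recorded action from the trajectory.
first_action = model.query(history=[])
```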