
Commit 6557f24

Make using .env work.
1 parent 574bad1 commit 6557f24

7 files changed: +27 −1 lines changed


docs/configuration.md

Lines changed: 14 additions & 1 deletion
@@ -39,20 +39,33 @@ names must match the provider names known in the [litellm](https://docs.litellm.
 Parameters specified for each of the providers in the `providers` section apply to every llm in the `llms` section unless the same
 parameter is also specified for the llm, in which case that value takes precedence.
 
+IMPORTANT: environment-variable-based settings, e.g. `api_key_env`, respect any `.env` file in the current directory when
+the config file is read or updated, or when an LLM is initialized!
+
 The following parameters are known and supported in the `llms` and/or `providers` sections:
 
 * `llm` (`llms` section only): specifies a specific model using the format `providername/modelid`.
 * `api_key`: the literal API key to use
-* `api_key_env`: the environment variable which contains the API key
+* `api_key_env`: the environment variable which contains the API key, using the value from the current environment or whatever is defined in `.env`
 * `api_url`: the base URL to use for the model, e.g. for an ollama server. The URL may contain placeholders which will get replaced with
 the model name (`${model}`), or the user and password for basic authentication (`${user}`, `${password}`), e.g.
 `http://${user}:${password}@localhost:11434`
 * `user`, `password`: the user and password to use for basic authentication; this requires `api_url` to also be specified with the
 corresponding placeholders
+* `user_env`, `password_env`: the environment variable to get the user or password from; this uses the value from the current environment or whatever
+has been set in any `.env` file in the current directory.
 * `alias` (`llms` section only): an alias name for the model which will have to be used in the API. If no `alias` is specified, the name
 specified for `llm` is used.
 * `num_retries`: if present, specifies the number of retries to perform if an error occurs before giving up
 * `timeout`: if present, raise a timeout error after that many seconds
+* `via_streaming`: the default approach for getting LLM responses is to wait for the complete response to be returned. This can lead to time-outs
+or other problems with some LLMs. When this is set to true, the response will be retrieved using streaming. However, some information, like
+cost, is not available as part of the `llms_wrapper` response if streaming is enabled.
+* `min_delay`: the minimum delay in seconds to ensure between requests sent to the model from code running in the same process and thread.
+* `cost_per_prompt_token`: set or override the cost per prompt token for the model
+* `cost_per_output_token`: set or override the cost per output token for the model
+* `max_output_tokens`: set or override the maximum output tokens for the model
+* `max_input_tokens`: set or override the maximum input tokens for the model
 
 All other settings are passed as is to the model invocation function. Different providers or APIs may support different parameters, but
 most will support `temperature`, `max_tokens` and `top_p`.
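The interaction of the `*_env` parameters and the `${...}` placeholders described above can be illustrated with a small sketch. The `resolve_llm_params` helper below is hypothetical (it is not part of `llms_wrapper`); it only mirrors the documented behaviour, assuming `load_dotenv()` has already merged any `.env` file into the process environment:

```python
import os
from string import Template

def resolve_llm_params(llm: dict) -> dict:
    # Hypothetical helper mirroring the documented behaviour: *_env
    # parameters are looked up in the process environment, which
    # load_dotenv() may have augmented from a .env file beforehand.
    resolved = dict(llm)
    for key in ("api_key", "user", "password"):
        env_name = resolved.pop(f"{key}_env", None)
        if env_name and key not in resolved:
            resolved[key] = os.environ[env_name]
    if "api_url" in resolved:
        # Replace the ${model}, ${user} and ${password} placeholders.
        resolved["api_url"] = Template(resolved["api_url"]).safe_substitute(
            model=resolved.get("llm", "/").split("/")[-1],
            user=resolved.get("user", ""),
            password=resolved.get("password", ""),
        )
    return resolved

# MY_PROVIDER_KEY is an illustrative variable name, not one the library expects.
os.environ["MY_PROVIDER_KEY"] = "sk-example"
cfg = {
    "llm": "ollama/llama3",
    "api_key_env": "MY_PROVIDER_KEY",
    "user": "alice",
    "password": "secret",
    "api_url": "http://${user}:${password}@localhost:11434",
}
print(resolve_llm_params(cfg)["api_url"])  # -> http://alice:secret@localhost:11434
```

Note that a literal `api_key` takes precedence here: the `*_env` lookup only fills the key in when it has not been given directly.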

docs/llms_wrapper/config.html

Lines changed: 2 additions & 0 deletions
@@ -98,6 +98,7 @@ <h2 class="section-title" id="header-functions">Functions</h2>
         A dict with the configuration
     """
     # read config file as json, yaml or toml, depending on file extension
+    load_dotenv(override=True)
     if filepath.endswith(".json"):
         with open(filepath, 'r') as f:
             config = json.load(f)
@@ -229,6 +230,7 @@ <h2 id="returns">Returns</h2>
     Returns:
         the updated configuration dict
     """
+    load_dotenv(override=True)
     for i, llm in enumerate(config["llms"]):
         if isinstance(llm, str):
             provider, model = llm.split("/")

docs/llms_wrapper/llms.html

Lines changed: 2 additions & 0 deletions
@@ -724,6 +724,8 @@ <h3>Methods</h3>
         project name (so far this only works for local phoenix instances). Default URI for a local installation
         is "http://0.0.0.0:6006/v1/traces"
         """
+        # before anything, make sure we have loaded any dotenv file to override any env var settings for the api keys
+        load_dotenv(override=True)
         if config is None:
             config = dict(llms=[])
         self.config = deepcopy(config)

llms_wrapper/config.py

Lines changed: 3 additions & 0 deletions
@@ -20,6 +20,7 @@
 import hjson
 import tomllib
 import re
+from dotenv import load_dotenv
 
 ## Suppress the annoying litellm warning
 with warnings.catch_warnings():
@@ -69,6 +70,7 @@ def read_config_file(filepath: str, update: bool = True) -> dict:
         A dict with the configuration
     """
     # read config file as json, yaml or toml, depending on file extension
+    load_dotenv(override=True)
     if filepath.endswith(".json"):
         with open(filepath, 'r') as f:
             config = json.load(f)
@@ -148,6 +150,7 @@ def update_llm_config(config: dict):
     Returns:
         the updated configuration dict
     """
+    load_dotenv(override=True)
     for i, llm in enumerate(config["llms"]):
         if isinstance(llm, str):
             provider, model = llm.split("/")

llms_wrapper/llms.py

Lines changed: 3 additions & 0 deletions
@@ -12,6 +12,7 @@
 import traceback
 import inspect
 import docstring_parser
+from dotenv import load_dotenv
 from loguru import logger
 import typing
 from typing import Optional, Dict, List, Union, Tuple, Callable, get_args, get_origin
@@ -292,6 +293,8 @@ def __init__(self, config: Dict = None, debug: bool = False, use_phoenix: Option
         project name (so far this only works for local phoenix instances). Default URI for a local installation
         is "http://0.0.0.0:6006/v1/traces"
         """
+        # before anything, make sure we have loaded any dotenv file to override any env var settings for the api keys
+        load_dotenv(override=True)
         if config is None:
             config = dict(llms=[])
         self.config = deepcopy(config)

pyproject.toml

Lines changed: 1 addition & 0 deletions
@@ -25,6 +25,7 @@ dependencies = [
     "hjson",
     "loguru",
     "docstring_parser",
+    "python-dotenv",
 ]
 
 [project.optional-dependencies]

uv.lock

Lines changed: 2 additions & 0 deletions
Some generated files are not rendered by default.
