[EXAMPLE] Per Prompt #4
jawache started this conversation in Functional Units
I propose that one of the functional units we need to clarify in the AI spec is "prompt".
Description
A prompt here is defined as a single-shot call to an LLM, with text (or another type of) input and a single response from the LLM.
Rationale
A prompt is the main mechanism an end user has for interacting with an LLM, so as a metric to inform end users' choices it seems a natural fit. End users don't typically think in terms of tokens; a per-token figure might be useful for people deploying an AI model, but it doesn't help an end user decide.
Methodology
There seem to be two approaches:
Usage Based: Compute the total emissions for your AI system over a day and divide by the number of user prompts served in that day.
Benchmark Based: Run a collection of benchmark prompts against an AI system, measure the total emissions from all components, and divide by the number of prompts in the benchmark. Perhaps useful for open-source or simpler systems.
Both are arguably valid, but the approach used would need to be disclosed. The sketch below illustrates the arithmetic.
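To make the two methodologies concrete, here is a minimal Python sketch of the division each one implies. The function names and all emissions figures are placeholders of my own, not part of any spec, and assume the emissions measurements themselves are already available from some other tooling.

```python
def emissions_per_prompt_usage(total_emissions_gco2e: float, prompt_count: int) -> float:
    """Usage based: total emissions of the AI system over a period,
    divided by the number of user prompts served in that period."""
    return total_emissions_gco2e / prompt_count


def emissions_per_prompt_benchmark(component_emissions_gco2e: list[float],
                                   benchmark_prompt_count: int) -> float:
    """Benchmark based: sum the measured emissions of every component
    while running the benchmark suite, divided by the number of
    prompts in the benchmark."""
    return sum(component_emissions_gco2e) / benchmark_prompt_count


# Usage based: e.g. 12 kgCO2e (12,000 gCO2e) over one day serving 40,000 prompts.
print(emissions_per_prompt_usage(12_000, 40_000))  # 0.3 gCO2e per prompt

# Benchmark based: e.g. GPU, CPU and network emissions (gCO2e) for a
# 500-prompt benchmark run.
print(emissions_per_prompt_benchmark([150.0, 30.0, 5.0], 500))  # 0.37 gCO2e per prompt
```

Either way, the disclosure would need to state which denominator was used (real user prompts over a period vs. prompts in a benchmark suite), since the two figures are not directly comparable.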
Considerations