pub enum Opt {
    Model(ModelRef),
    ApiKey(String),
    NThreads(usize),
    MaxTokens(usize),
    MaxContextSize(usize),
    StopSequence(Vec<String>),
    Stream(bool),
    FrequencyPenalty(f32),
    PresencePenalty(f32),
    TokenBias(TokenBias),
    TopK(i32),
    TopP(f32),
    Temperature(f32),
    RepeatPenalty(f32),
    RepeatPenaltyLastN(usize),
    TfsZ(f32),
    TypicalP(f32),
    Mirostat(i32),
    MirostatTau(f32),
    MirostatEta(f32),
    PenalizeNl(bool),
    NBatch(usize),
    User(String),
    ModelType(String),
}
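These variants are typically collected into an options set and interpreted per backend. As a minimal, self-contained sketch of that pattern (using a local mirror of a few variants and a hypothetical `apply` function and `Settings` struct, not the crate's actual API; `ModelRef` and `TokenBias` are omitted):

```rust
// Illustrative mirror of a few Opt variants; the real enum lives in llm-chain.
#[derive(Debug, Clone)]
enum Opt {
    MaxTokens(usize),
    Temperature(f32),
    Stream(bool),
}

// Hypothetical per-backend settings struct that consumes the options.
#[derive(Debug, Default)]
struct Settings {
    max_tokens: usize,
    temperature: f32,
    stream: bool,
}

// Fold a list of options into concrete settings; later entries win.
fn apply(opts: &[Opt]) -> Settings {
    let mut s = Settings::default();
    for opt in opts {
        match opt {
            Opt::MaxTokens(n) => s.max_tokens = *n,
            Opt::Temperature(t) => s.temperature = *t,
            Opt::Stream(b) => s.stream = *b,
        }
    }
    s
}

fn main() {
    let opts = [Opt::MaxTokens(256), Opt::Temperature(0.7), Opt::Stream(true)];
    println!("{:?}", apply(&opts));
}
```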
Variants
Model(ModelRef)
The name or path of the model used.
ApiKey(String)
The API key for the model service.
NThreads(usize)
The number of threads to use for parallel processing. This is common to all models.
MaxTokens(usize)
The maximum number of tokens that the model will generate. This is common to all models.
MaxContextSize(usize)
The maximum context size of the model.
StopSequence(Vec<String>)
The sequences that, when encountered, will cause the model to stop generating further tokens. OpenAI models allow up to four stop sequences.
Stream(bool)
Whether or not to use streaming mode. This is common to all models.
FrequencyPenalty(f32)
The penalty to apply for using frequent tokens. This is used by OpenAI and llama models.
PresencePenalty(f32)
The penalty to apply for using novel tokens. This is used by OpenAI and llama models.
TokenBias(TokenBias)
A bias to apply to certain tokens during the inference process. This is known as logit bias in OpenAI and is also used in llm-chain-local.
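As a rough illustration of what a token bias does (assuming the bias is simply added to a token's raw logit before sampling; the exact semantics are backend-specific, and `apply_bias` is a hypothetical helper, not part of the crate):

```rust
use std::collections::HashMap;

// Add per-token biases to raw logits before sampling (sketch).
// A large negative bias effectively bans a token.
fn apply_bias(logits: &mut [f32], bias: &HashMap<usize, f32>) {
    for (token_id, b) in bias {
        if let Some(logit) = logits.get_mut(*token_id) {
            *logit += *b;
        }
    }
}

fn main() {
    let mut logits = vec![1.0, 2.0, 3.0];
    let bias = HashMap::from([(1usize, -100.0)]); // effectively ban token 1
    apply_bias(&mut logits, &bias);
    println!("{:?}", logits);
}
```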
TopK(i32)
The maximum number of tokens to consider for each step of generation. This is supported by most backends, but is not used by OpenAI models.
TopP(f32)
The cumulative probability threshold for token selection. This is common to all models.
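Top-p (nucleus) sampling keeps the smallest set of tokens whose cumulative probability reaches the threshold. A sketch of the selection step (`top_p_indices` is an illustrative helper, not a crate function):

```rust
// Return the indices of the smallest set of tokens whose cumulative
// probability reaches `top_p`, scanning in descending probability order.
fn top_p_indices(probs: &[f32], top_p: f32) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..probs.len()).collect();
    idx.sort_by(|&a, &b| probs[b].partial_cmp(&probs[a]).unwrap());
    let mut kept = Vec::new();
    let mut cum = 0.0;
    for i in idx {
        kept.push(i);
        cum += probs[i];
        if cum >= top_p {
            break;
        }
    }
    kept
}

fn main() {
    let probs = [0.5, 0.3, 0.15, 0.05];
    println!("{:?}", top_p_indices(&probs, 0.9)); // keeps tokens 0, 1, and 2
}
```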
Temperature(f32)
The temperature to use for token selection. Higher values result in more random output. This is common to all models.
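Temperature rescales the logits before the softmax: values below 1.0 sharpen the distribution toward the top token, values above 1.0 flatten it. A sketch (an illustrative helper, not a crate function):

```rust
// Softmax over logits divided by temperature, with the usual
// max-subtraction trick for numerical stability.
fn softmax_with_temperature(logits: &[f32], temperature: f32) -> Vec<f32> {
    let scaled: Vec<f32> = logits.iter().map(|l| l / temperature).collect();
    let max = scaled.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scaled.iter().map(|s| (s - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.0];
    // Lower temperature concentrates probability mass on the top token.
    println!("cold: {:?}", softmax_with_temperature(&logits, 0.5));
    println!("hot:  {:?}", softmax_with_temperature(&logits, 2.0));
}
```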
RepeatPenalty(f32)
The penalty to apply for repeated tokens. This is common to all models.
RepeatPenaltyLastN(usize)
The number of most recent tokens to consider when applying the repeat penalty. This is common to all models.
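These two options work together. A sketch of a repeat penalty applied only to tokens seen in the last N positions, following the common llama.cpp convention of dividing positive logits and multiplying negative ones by the penalty (the exact formula is backend-specific, and `apply_repeat_penalty` is an illustrative helper):

```rust
use std::collections::HashSet;

// Penalize logits of tokens that appeared in the last `last_n` history entries.
fn apply_repeat_penalty(logits: &mut [f32], history: &[usize], last_n: usize, penalty: f32) {
    let start = history.len().saturating_sub(last_n);
    let recent: HashSet<usize> = history[start..].iter().copied().collect();
    for &tok in &recent {
        if let Some(l) = logits.get_mut(tok) {
            if *l > 0.0 {
                *l /= penalty; // shrink positive logits
            } else {
                *l *= penalty; // push negative logits further down
            }
        }
    }
}

fn main() {
    let mut logits = vec![2.0, -1.0, 0.5];
    apply_repeat_penalty(&mut logits, &[0, 1], 64, 1.1);
    println!("{:?}", logits); // tokens 0 and 1 are now less likely
}
```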
TfsZ(f32)
The tail-free sampling parameter (z) for llm-chain-llama. A value of 1.0 disables tail-free sampling.
TypicalP(f32)
The locally typical sampling parameter (p) for llm-chain-llama. A value of 1.0 disables typical sampling.
Mirostat(i32)
The Mirostat sampling mode for llm-chain-llama (0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0).
MirostatTau(f32)
The Mirostat target entropy (tau) for llm-chain-llama.
MirostatEta(f32)
The Mirostat learning rate (eta) for llm-chain-llama.
PenalizeNl(bool)
Whether or not to penalize newline characters for llm-chain-llama.
NBatch(usize)
The batch size for llm-chain-local.
User(String)
The username for llm-chain-openai.
ModelType(String)
The type of the model.