pub enum OptDiscriminants {
    Model,
    ApiKey,
    NThreads,
    MaxTokens,
    MaxContextSize,
    StopSequence,
    Stream,
    FrequencyPenalty,
    PresencePenalty,
    TokenBias,
    TopK,
    TopP,
    Temperature,
    RepeatPenalty,
    RepeatPenaltyLastN,
    TfsZ,
    TypicalP,
    Mirostat,
    MirostatTau,
    MirostatEta,
    PenalizeNl,
    NBatch,
    User,
    ModelType,
}
Auto-generated discriminant enum variants
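The relationship between the payload-carrying `Opt` enum and this generated discriminant enum can be sketched in plain Rust, without the derive macro. The reduced `Opt` below (three variants instead of 24) and the hand-written `From` impl are illustrative stand-ins for what the macro generates:

```rust
// A reduced stand-in for the full `Opt` enum: variants carry payloads.
#[derive(Debug)]
enum Opt {
    Model(String),
    MaxTokens(u32),
    Temperature(f32),
}

// The hand-written equivalent of the auto-generated discriminant enum:
// the same variant names, but with no payloads.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum OptDiscriminants {
    Model,
    MaxTokens,
    Temperature,
}

// Mirrors the generated `impl From<&Opt> for OptDiscriminants`:
// each variant maps to its fieldless counterpart, dropping the payload.
impl From<&Opt> for OptDiscriminants {
    fn from(val: &Opt) -> OptDiscriminants {
        match val {
            Opt::Model(_) => OptDiscriminants::Model,
            Opt::MaxTokens(_) => OptDiscriminants::MaxTokens,
            Opt::Temperature(_) => OptDiscriminants::Temperature,
        }
    }
}

fn main() {
    let opt = Opt::Temperature(0.7);
    // The discriminant identifies the option kind without its payload.
    assert_eq!(OptDiscriminants::from(&opt), OptDiscriminants::Temperature);
    println!("{:?}", OptDiscriminants::from(&opt)); // prints "Temperature"
}
```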
Variants
Model
The name or path of the model used.
ApiKey
The API key for the model service.
NThreads
The number of threads to use for parallel processing. This is common to all models.
MaxTokens
The maximum number of tokens that the model will generate. This is common to all models.
MaxContextSize
The maximum context size of the model.
StopSequence
The sequences that, when encountered, will cause the model to stop generating further tokens. OpenAI models allow up to four stop sequences.
Stream
Whether or not to use streaming mode. This is common to all models.
FrequencyPenalty
The penalty to apply for using frequent tokens. This is used by OpenAI and llama models.
PresencePenalty
The penalty to apply for using novel tokens. This is used by OpenAI and llama models.
TokenBias
A bias to apply to certain tokens during the inference process. This is known as logit bias in OpenAI and is also used in llm-chain-local.
TopK
The maximum number of tokens to consider for each step of generation. This is common to most models, but is not used by OpenAI.
TopP
The cumulative probability threshold for token selection. This is common to all models.
Temperature
The temperature to use for token selection. Higher values result in more random output. This is common to all models.
RepeatPenalty
The penalty to apply for repeated tokens. This is common to all models.
RepeatPenaltyLastN
The number of most recent tokens to consider when applying the repeat penalty. This is common to all models.
TfsZ
The TfsZ parameter for llm-chain-llama.
TypicalP
The TypicalP parameter for llm-chain-llama.
Mirostat
The Mirostat parameter for llm-chain-llama.
MirostatTau
The MirostatTau parameter for llm-chain-llama.
MirostatEta
The MirostatEta parameter for llm-chain-llama.
PenalizeNl
Whether or not to penalize newline characters for llm-chain-llama.
NBatch
The batch size for llm-chain-local.
User
The username for llm-chain-openai.
ModelType
The type of the model.
Trait Implementations

impl Clone for OptDiscriminants
    fn clone(&self) -> OptDiscriminants
    fn clone_from(&mut self, source: &Self)
        Performs copy-assignment from source.
impl Debug for OptDiscriminants
impl<'_enum> From<&'_enum Opt> for OptDiscriminants
    fn from(val: &'_enum Opt) -> OptDiscriminants
impl From<Opt> for OptDiscriminants
    fn from(val: Opt) -> OptDiscriminants
impl PartialEq for OptDiscriminants
impl Copy for OptDiscriminants
impl Eq for OptDiscriminants
impl StructuralPartialEq for OptDiscriminants
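Because `OptDiscriminants` is `Copy` and `Eq` and is convertible from `&Opt`, it can identify an option's kind while ignoring the payload, for example to replace an option of the same kind in a collection. The `upsert` helper and the reduced `Opt` below are hypothetical illustrations, not part of the crate:

```rust
// Reduced stand-ins for the crate's `Opt` and its generated discriminant enum.
#[derive(Debug)]
enum Opt {
    MaxTokens(u32),
    Temperature(f32),
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum OptDiscriminants {
    MaxTokens,
    Temperature,
}

impl From<&Opt> for OptDiscriminants {
    fn from(val: &Opt) -> Self {
        match val {
            Opt::MaxTokens(_) => OptDiscriminants::MaxTokens,
            Opt::Temperature(_) => OptDiscriminants::Temperature,
        }
    }
}

// Hypothetical helper: replace an existing option of the same kind, or append.
// Comparing discriminants lets us match on kind regardless of payload value.
fn upsert(opts: &mut Vec<Opt>, new: Opt) {
    let kind = OptDiscriminants::from(&new);
    match opts.iter_mut().find(|o| OptDiscriminants::from(&**o) == kind) {
        Some(slot) => *slot = new,
        None => opts.push(new),
    }
}

fn main() {
    let mut opts = vec![Opt::MaxTokens(256)];
    upsert(&mut opts, Opt::Temperature(0.7)); // new kind: appended
    upsert(&mut opts, Opt::MaxTokens(512));   // same kind: replaced in place
    assert_eq!(opts.len(), 2);
    assert!(opts.iter().any(|o| matches!(o, Opt::MaxTokens(512))));
}
```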
Auto Trait Implementations
impl Freeze for OptDiscriminants
impl RefUnwindSafe for OptDiscriminants
impl Send for OptDiscriminants
impl Sync for OptDiscriminants
impl Unpin for OptDiscriminants
impl UnwindSafe for OptDiscriminants
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,
    fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<Q, K> Equivalent<K> for Q
    fn equivalent(&self, key: &K) -> bool
        Compare self to key and return true if they are equal.