scandeval.benchmark_modules.litellm
source module scandeval.benchmark_modules.litellm
Generative models from an inference API, using the LiteLLM framework.
Classes
-
LiteLLMModel — A generative model from LiteLLM.
source class LiteLLMModel()
Bases : BenchmarkModule
A generative model from LiteLLM.
Attributes
-
generative_type : GenerativeType | None — The generative type of the model, or None if it has not been set yet.
-
data_collator : c.Callable[[list[t.Any]], dict[str, t.Any]] — The data collator used to prepare samples during finetuning.
-
compute_metrics : ComputeMetricsFunction — The function used to compute the metrics.
-
extract_labels_from_generation : ExtractLabelsFunction — The function used to extract the labels from the generated output.
-
trainer_class : t.Type[Trainer] — The Trainer class to use for finetuning.
Methods
-
generate — Generate outputs from the model.
-
num_params — The number of parameters in the model.
-
vocab_size — The vocabulary size of the model.
-
model_max_length — The maximum context length of the model, in tokens.
-
model_exists — Check if a model exists.
-
get_model_config — Fetch the model configuration.
-
prepare_dataset — Prepare the dataset for the model.
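The class methods above fit together in a check-then-generate flow. The sketch below mirrors that flow with a self-contained stand-in class; it does not import scandeval, and the stub's behaviour (prefix check, echo output) is purely illustrative.

```python
class StubLiteLLMModel:
    """Minimal stand-in mirroring the LiteLLMModel interface sketched above."""

    def __init__(self, model_id: str) -> None:
        self.model_id = model_id

    @classmethod
    def model_exists(cls, model_id: str) -> bool:
        # The real classmethod also takes a BenchmarkConfig and may return
        # NeedsExtraInstalled / NeedsEnvironmentVariable instead of a bool.
        return model_id.startswith("gpt-")

    def generate(self, inputs: dict) -> list[str]:
        # The real method returns a GenerativeModelOutput; a plain list of
        # strings stands in for the generated sequences here.
        return [f"echo: {text}" for text in inputs["messages"]]


if StubLiteLLMModel.model_exists("gpt-4o"):
    model = StubLiteLLMModel("gpt-4o")
    outputs = model.generate({"messages": ["Hello"]})
    print(outputs)  # ['echo: Hello']
```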
source property LiteLLMModel.generative_type: GenerativeType | None
Get the generative type of the model.
Returns
-
GenerativeType | None — The generative type of the model, or None if it has not been set yet.
source method LiteLLMModel.generate(inputs: dict) → GenerativeModelOutput
Generate outputs from the model.
Parameters
-
inputs : dict —
A batch of inputs to pass through the model.
Returns
-
GenerativeModelOutput — The generated model outputs.
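A rough mirror of the generate contract: a batch dict in, a structured output of generated sequences (with optional scores) back. The field names `sequences` and `scores` and the `"text"` input key are assumptions, not the confirmed GenerativeModelOutput schema.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class FakeGenerativeModelOutput:
    """Hypothetical stand-in for GenerativeModelOutput."""

    sequences: list[str]
    scores: list[list[tuple[str, float]]] | None = None


def fake_generate(inputs: dict) -> FakeGenerativeModelOutput:
    # Upper-cases each prompt; a real LiteLLMModel would call the
    # inference API through LiteLLM instead.
    return FakeGenerativeModelOutput(
        sequences=[prompt.upper() for prompt in inputs["text"]]
    )


out = fake_generate({"text": ["hello", "world"]})
print(out.sequences)  # ['HELLO', 'WORLD']
```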
source method LiteLLMModel.num_params() → int
The number of parameters in the model.
Returns
-
int — The number of parameters in the model.
source method LiteLLMModel.vocab_size() → int
The vocabulary size of the model.
Returns
-
int — The vocabulary size of the model.
source method LiteLLMModel.model_max_length() → int
The maximum context length of the model, in tokens.
Returns
-
int — The maximum context length of the model, in tokens.
source property LiteLLMModel.data_collator: c.Callable[[list[t.Any]], dict[str, t.Any]]
The data collator used to prepare samples during finetuning.
Returns
-
c.Callable[[list[t.Any]], dict[str, t.Any]] — The data collator.
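The collator's contract is a callable from a list of samples to a single batch dict. For an API-backed model there is no tokenisation to do locally, so a collator can be close to a no-op; the stand-in below simply stacks fields and is illustrative only.

```python
def collate(samples: list[dict]) -> dict[str, list]:
    # Turn a list of per-sample dicts into one dict of per-field lists,
    # matching the c.Callable[[list[t.Any]], dict[str, t.Any]] shape.
    batch: dict[str, list] = {}
    for sample in samples:
        for key, value in sample.items():
            batch.setdefault(key, []).append(value)
    return batch


print(collate([{"text": "a"}, {"text": "b"}]))  # {'text': ['a', 'b']}
```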
source property LiteLLMModel.extract_labels_from_generation: ExtractLabelsFunction
The function used to extract the labels from the generated output.
Returns
-
ExtractLabelsFunction — The function used to extract the labels from the generated output.
source property LiteLLMModel.trainer_class: t.Type[Trainer]
The Trainer class to use for finetuning.
Returns
-
t.Type[Trainer] — The Trainer class.
source classmethod LiteLLMModel.model_exists(model_id: str, benchmark_config: BenchmarkConfig) → bool | NeedsExtraInstalled | NeedsEnvironmentVariable
Check if a model exists.
Parameters
-
model_id : str —
The model ID.
-
benchmark_config : BenchmarkConfig —
The benchmark configuration.
Returns
-
bool | NeedsExtraInstalled | NeedsEnvironmentVariable — Whether the model exists, or an error describing why we cannot check whether the model exists.
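Because model_exists can return a bool or one of two sentinel objects, callers need a three-way branch. The sentinel classes below are stand-ins for scandeval's NeedsExtraInstalled and NeedsEnvironmentVariable; their attribute names are assumptions.

```python
class NeedsExtraInstalledStub:
    """Stand-in sentinel: a pip extra is required to check the model."""

    def __init__(self, extra: str) -> None:
        self.extra = extra


class NeedsEnvironmentVariableStub:
    """Stand-in sentinel: an environment variable is required."""

    def __init__(self, env_var: str) -> None:
        self.env_var = env_var


def describe(result) -> str:
    # Handle all three possible shapes of the model_exists result.
    if result is True:
        return "model exists"
    if result is False:
        return "model not found"
    if isinstance(result, NeedsExtraInstalledStub):
        return f"install the '{result.extra}' extra to check this model"
    if isinstance(result, NeedsEnvironmentVariableStub):
        return f"set {result.env_var} to check this model"
    raise TypeError(f"unexpected result: {result!r}")


print(describe(NeedsEnvironmentVariableStub("OPENAI_API_KEY")))
```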
source classmethod LiteLLMModel.get_model_config(model_id: str, benchmark_config: BenchmarkConfig) → ModelConfig
Fetch the model configuration.
Parameters
-
model_id : str —
The model ID.
-
benchmark_config : BenchmarkConfig —
The benchmark configuration.
Returns
-
ModelConfig — The model configuration.
source method LiteLLMModel.prepare_dataset(dataset: DatasetDict, task: Task, itr_idx: int) → DatasetDict
Prepare the dataset for the model.
This includes things like tokenisation.
Parameters
-
dataset : DatasetDict —
The dataset to prepare.
-
task : Task —
The task to prepare the dataset for.
-
itr_idx : int —
The index of the dataset in the iterator.
Returns
-
DatasetDict — The prepared dataset.
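A toy version of the preparation step: for an API-backed model, preparing a dataset can amount to turning each document into a prompt, split by split. The dict-of-splits structure below mimics a datasets.DatasetDict, and the task name and template are invented for illustration.

```python
def prepare_toy_dataset(
    dataset: dict[str, list[str]], task: str
) -> dict[str, list[str]]:
    # Map each document in every split through a task-specific prompt
    # template, mirroring prepare_dataset's per-split transformation.
    templates = {"sentiment-classification": "Classify the sentiment: {doc}"}
    prompt = templates.get(task, "{doc}")
    return {
        split: [prompt.format(doc=doc) for doc in docs]
        for split, docs in dataset.items()
    }


prepared = prepare_toy_dataset(
    {"train": ["great film"], "test": ["dull plot"]},
    task="sentiment-classification",
)
print(prepared["test"])  # ['Classify the sentiment: dull plot']
```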