class CLIPTempScore
Bases: BaseMetric
Initialize the CLIPTempScore evaluator.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | The name of the CLIP encoder model. | `'openai/clip-vit-base-patch32'` |
`logit_scale` | `bool` | Whether to calculate the cosine similarity as logits. | `False` |
Source code in aigve/metrics/text_video_alignment/similarity_based/clipscore/cliptemp.py
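A minimal instantiation sketch follows. The import path is inferred from the source file location above and may differ in your installation; the argument values simply restate the documented defaults.

```python
# Hedged sketch: the import path is inferred from the source location shown
# above and may differ in your installation.
from aigve.metrics.text_video_alignment.similarity_based.clipscore.cliptemp import CLIPTempScore

# Instantiate with the default CLIP encoder; set logit_scale=True to report
# similarities scaled as logits rather than raw cosine values.
metric = CLIPTempScore(
    model_name='openai/clip-vit-base-patch32',
    logit_scale=False,
)
```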
compute_metrics(results)
Compute the metrics from processed results.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`results` | `list` | The processed results of each batch. | *required* |
Returns:
Type | Description |
---|---|
`Dict[str, float]` | The computed metrics. The keys are the names of the metrics, and the values are the corresponding results. |
Source code in aigve/metrics/text_video_alignment/similarity_based/clipscore/cliptemp.py
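A hedged usage sketch: this method is normally invoked for you once every batch has been processed, so the direct call below is only illustrative, and the metric key named in the comment is hypothetical.

```python
# Aggregate whatever process() accumulated in metric.results.
scores = metric.compute_metrics(metric.results)

# `scores` is a Dict[str, float]; the exact key names depend on the
# implementation (e.g. a hypothetical {'clip_temp_score': 0.87}).
for name, value in scores.items():
    print(f'{name}: {value:.4f}')
```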
process(data_batch, data_samples)
CLIPTempScore process.

Process one batch of data samples and predictions. The processed results should be stored in `self.results`, which will be used to compute the metrics when all batches have been processed.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`data_batch` | `Sequence` | A batch of data from the dataloader. | *required* |
`data_samples` | `Sequence` | A batch of data samples that contain annotations and predictions. | *required* |
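The sketch below shows how `process` fits into a per-batch evaluation loop. The `dataloader` and the shape of each batch are placeholders for whatever the AIGVE dataset pipeline actually yields.

```python
# Hedged sketch: `dataloader` is assumed to yield (data_batch, data_samples)
# pairs; real AIGVE pipelines may package these differently.
for data_batch, data_samples in dataloader:
    metric.process(data_batch, data_samples)  # appends per-batch entries to metric.results

# metric.results now holds the accumulated entries, ready to be aggregated
# by compute_metrics() (see above).
```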