mirascope.core.gemini.call_response_chunk

This module contains the GeminiCallResponseChunk class.

Usage Documentation: Streams

GeminiCallResponseChunk

Bases: BaseCallResponseChunk[GenerateContentResponse, FinishReason]

A convenience wrapper around the Gemini API streamed response chunks.

When calling the Gemini API using a function decorated with gemini_call and stream set to True, the stream will contain GeminiCallResponseChunk instances.

Example:

from mirascope.core import prompt_template
from mirascope.core.gemini import gemini_call


@gemini_call("gemini-1.5-flash", stream=True)
@prompt_template("Recommend a {genre} book")
def recommend_book(genre: str):
    ...


stream = recommend_book("fantasy")  # returns a `GeminiStream`
for chunk, _ in stream:
    print(chunk.content, end="", flush=True)

content property

content: str

Returns the chunk content for the 0th choice.
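For example, you can accumulate the streamed chunks into the full response text (a minimal sketch reusing the recommend_book function defined above):

stream = recommend_book("fantasy")
full_content = ""
for chunk, _ in stream:
    full_content += chunk.content  # text of the 0th choice for this chunk
print(full_content)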

finish_reasons property

finish_reasons: list[FinishReason]

Returns the finish reasons of the response.
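For instance, you can watch for the stream's terminal chunk by checking this property (a sketch assuming intermediate chunks report no finish reason, which may vary by model):

stream = recommend_book("fantasy")
for chunk, _ in stream:
    if chunk.finish_reasons:  # typically populated only once the model stops
        print(f"finish reasons: {chunk.finish_reasons}")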

model property

model: None

Returns the model name.

google.generativeai does not return the model name, so we return None.

id property

id: str | None

Returns the id of the response.

google.generativeai does not return an id, so we return None.

usage property

usage: None

Returns the usage of the chat completion.

google.generativeai does not provide usage data, so we return None.

input_tokens property

input_tokens: None

Returns the number of input tokens. Since usage is unavailable for google.generativeai, this is None.

output_tokens property

output_tokens: None

Returns the number of output tokens. Since usage is unavailable for google.generativeai, this is None.
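Putting the placeholder properties together, here is a minimal sketch of what you would observe on any streamed chunk (again reusing recommend_book from the example above):

stream = recommend_book("fantasy")
for chunk, _ in stream:
    # google.generativeai does not report this metadata, so per the
    # properties documented above these are always None
    assert chunk.model is None
    assert chunk.id is None
    assert chunk.usage is None
    assert chunk.input_tokens is None
    assert chunk.output_tokens is None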