
DasshhRuntime

This page lays out all the methods and attributes available on the DasshhRuntime class.

Tip

DasshhRuntime is the core engine that handles AI model interactions, tool execution, and session management in Dasshh.

attr model

The model identifier sent with each completion request

model: str = ""

attr api_base

The base URL of the model provider's API

api_base: str = ""

attr api_key

The API key used to authenticate with the model provider

api_key: str = ""

attr api_version

The API version to request from the provider, if one is required

api_version: str = ""

attr temperature

The sampling temperature passed to the model

temperature: float = 1.0

attr top_p

The nucleus sampling (top_p) value passed to the model

top_p: float = 1.0

attr max_tokens

The maximum number of tokens the model may generate; None defers to the provider default

max_tokens: int | None = None

attr max_completion_tokens

The maximum number of completion tokens allowed; None defers to the provider default

max_completion_tokens: int | None = None

attr skip_summarization

Whether to skip summarization after a tool call

skip_summarization: bool = False
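
Taken together, these attributes describe one model endpoint plus its sampling settings. Purely as an illustration (the values below are assumptions, not defaults; in practice _load_model_config() populates the attributes from the Dasshh config file):

```python
# Example of a fully populated configuration (values are illustrative only):
model = "gpt-4o-mini"                    # model identifier sent with each completion request
api_base = "https://api.openai.com/v1"   # provider endpoint
api_key = "sk-..."                       # required; loading config without it raises ValueError
api_version = ""                         # only needed by providers that version their API
temperature = 0.7                        # lower values give more deterministic output
top_p = 1.0                              # nucleus sampling threshold
max_tokens = None                        # None defers to the provider's default limit
max_completion_tokens = None
skip_summarization = False               # keep the follow-up summary after tool calls
```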

property system_prompt

system_prompt -> dict

Returns the system prompt as a formatted message dictionary

Returns:

| Type | Description |
|------|-------------|
| dict | System prompt formatted as a message with role and content |
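
As documented above, the property returns a standard chat-style message; a returned value has this general shape (the actual prompt text is assembled by the runtime):

```python
{
    "role": "system",
    "content": "...the system prompt text assembled by DasshhRuntime...",
}
```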

method __init__

__init__(session_service: SessionService)

Initialize the DasshhRuntime with a session service

Parameters:

| Param | Description |
|-------|-------------|
| session_service | The SessionService instance for managing conversations and events |
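
A minimal construction sketch, assuming DasshhRuntime and SessionService are importable from the paths shown (the import paths are assumptions):

```python
from dasshh.core.runtime import DasshhRuntime    # import path assumed
from dasshh.data.session import SessionService   # import path assumed

# The runtime keeps the session service and uses it to persist
# conversations and events for every query it processes.
session_service = SessionService()
runtime = DasshhRuntime(session_service)
```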

method start

async start()

Start the runtime worker that processes the query queue

method stop

async stop()

Stop the runtime worker and cancel any pending operations
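
A typical lifecycle pairs start() with stop(); a minimal sketch, reusing the assumed import paths from the __init__ example above:

```python
import asyncio

from dasshh.core.runtime import DasshhRuntime    # import path assumed
from dasshh.data.session import SessionService   # import path assumed

async def main() -> None:
    runtime = DasshhRuntime(SessionService())
    await runtime.start()        # begins processing the query queue in the background
    try:
        ...                      # submit queries here while the worker is running
    finally:
        await runtime.stop()     # cancels pending operations and shuts the worker down

asyncio.run(main())
```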

method submit_query

async submit_query(
    *,
    message: str,
    session_id: str,
    post_message_callback: Callable,
) -> None

Submit a query to the runtime for processing

Parameters:

| Param | Description |
|-------|-------------|
| message | The user message to send to the AI |
| session_id | The session ID to associate with this conversation |
| post_message_callback | Callback function to send UI events back to the interface |

Returns:

| Type | Description |
|------|-------------|
| None | This method is async and doesn't return a value |
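
A usage sketch, assuming a runtime constructed and started as shown above; the example message, the placeholder session id, and the print-based callback stand in for the real UI wiring:

```python
def post_message(event) -> None:
    # Receives events produced while the query is processed (streamed text,
    # tool-call updates, ...); a real caller forwards them to the interface.
    print(event)

# Inside an async function, with `runtime` constructed and started as above:
await runtime.submit_query(
    message="What processes are using the most memory right now?",
    session_id="existing-session-id",      # placeholder for a real session id
    post_message_callback=post_message,
)
```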

method _load_model_config

_load_model_config() -> None

Load model configuration from the config file

Raises:

| Type | Description |
|------|-------------|
| ValueError | If API key is not set in configuration |
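
Because a missing API key surfaces as a ValueError, callers inside the application can handle it explicitly; a minimal sketch (calling the private method directly is shown only for illustration):

```python
try:
    runtime._load_model_config()   # re-reads model settings from the config file
except ValueError:
    print("API key is not set; add it to the Dasshh config before using the runtime.")
```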

method _generate_prompt

_generate_prompt(context: InvocationContext) -> List[dict]

Generate the complete prompt including system message and conversation history

Parameters:

| Param | Description |
|-------|-------------|
| context | The invocation context containing session and message information |

Returns:

| Type | Description |
|------|-------------|
| List[dict] | List of message dictionaries formatted for the AI model |
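
The returned list follows the usual chat-message layout, conventionally with the system prompt first, then the stored conversation history, then the new user message. An illustrative value:

```python
[
    {"role": "system", "content": "...system prompt..."},
    {"role": "user", "content": "Earlier question from this session"},
    {"role": "assistant", "content": "Earlier answer from this session"},
    {"role": "user", "content": "The message carried by the current InvocationContext"},
]
```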

method _process_queue

async _process_queue()

Main worker loop that processes queries from the queue
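
This follows the standard asyncio queue-consumer pattern; the sketch below shows the general shape of such a worker loop, not Dasshh's exact implementation (handle_query is a placeholder):

```python
import asyncio

async def handle_query(context) -> None:
    # Placeholder for the runtime's actual query handling (_run_async, tool calls, ...).
    print("processing", context)

async def process_queue_sketch(queue: asyncio.Queue) -> None:
    # Standard queue-consumer loop: pull one queued query at a time,
    # process it, and mark it done so queue.join() can complete.
    while True:
        context = await queue.get()
        try:
            await handle_query(context)
        finally:
            queue.task_done()
```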

method _run_async

async _run_async(context: InvocationContext) -> AsyncGenerator[ModelResponse, None]

Execute the AI completion request and yield streaming responses

Parameters:

| Param | Description |
|-------|-------------|
| context | The invocation context for this query |

Returns:

| Type | Description |
|------|-------------|
| AsyncGenerator[ModelResponse, None] | Stream of model responses |
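
The generator is consumed internally by the worker loop; purely for illustration, streaming responses would be read like this (context construction is shown under InvocationContext below):

```python
# Inside an async function, with `runtime` and `context` already prepared:
async for chunk in runtime._run_async(context):
    # Each chunk is a ModelResponse produced by the streaming completion;
    # callers accumulate the partial content and any tool-call fragments.
    print(chunk)
```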

method _handle_tool_calls

async _handle_tool_calls(
    context: InvocationContext,
    tool_calls: list[ChatCompletionDeltaToolCall],
) -> None

Process and execute tool calls from the AI model

Parameters:

| Param | Description |
|-------|-------------|
| context | The invocation context |
| tool_calls | List of tool calls to execute |
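
A generic sketch of how streamed tool calls are usually executed. Dasshh dispatches through its own tool registry, so run_tool below is a hypothetical stand-in; only the field access on each ChatCompletionDeltaToolCall (function.name, function.arguments) mirrors the real objects:

```python
import json

async def run_tool(name: str, **kwargs):
    # Hypothetical stand-in for the runtime's tool dispatch.
    return f"ran {name} with {kwargs}"

async def handle_tool_calls_sketch(tool_calls) -> None:
    for call in tool_calls:
        # Each streamed tool call carries a function name and JSON-encoded arguments.
        name = call.function.name
        args = json.loads(call.function.arguments or "{}")
        result = await run_tool(name, **args)
        print(f"{name} -> {result}")
```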

InvocationContext:

A named tuple to store current query context.

| Field | Type | Description |
|-------|------|-------------|
| invocation_id | str | The ID of the invocation |
| message | dict | The message to send to the LLM |
| session_id | str | The ID of the session |
| system_instruction | bool | Whether to include the system instruction in the prompt |
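
Since it is a named tuple, an InvocationContext can be built directly from the fields above; a sketch (the import location of InvocationContext is assumed, and uuid4 is just one way to produce an invocation id):

```python
from uuid import uuid4

from dasshh.core.runtime import InvocationContext   # import path assumed

session_id = "existing-session-id"                   # placeholder for a real session id

context = InvocationContext(
    invocation_id=str(uuid4()),
    message={"role": "user", "content": "What is listening on port 8080?"},
    session_id=session_id,
    system_instruction=True,      # include the system prompt when the full prompt is generated
)
```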