DasshhRuntime

This page lays out all the methods and attributes available on the `DasshhRuntime` class.

Tip: DasshhRuntime is the core engine that handles AI model interactions, tool execution, and session management in Dasshh.

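A minimal lifecycle sketch, assuming the module paths shown below, a no-argument `SessionService` constructor, and that `start`/`stop` are awaitable; only the class names and method names come from this page.

```python
import asyncio

from dasshh.core.runtime import DasshhRuntime   # assumed module path
from dasshh.data.session import SessionService  # assumed module path


async def main() -> None:
    # The constructor takes a session service (see __init__ below).
    runtime = DasshhRuntime(session_service=SessionService())

    await runtime.start()     # start the background query worker
    try:
        ...                   # submit queries here (see submit_query below)
    finally:
        await runtime.stop()  # cancel pending work and shut the worker down


asyncio.run(main())
```
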
attr `model`
The model to use for the runtime

attr `api_base`
The base URL for the API

attr `api_key`
The API key to use for the runtime

attr `api_version`
The API version to use for the runtime

attr `temperature`
The sampling temperature to use for completions

attr `top_p`
The nucleus-sampling (top_p) value to use for completions

attr `max_tokens`
The maximum number of tokens the model may generate per response

attr `max_completion_tokens`
The maximum number of completion tokens the model may generate per response

attr `skip_summarization`
Whether to skip summarization after a tool call

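These settings typically map one-to-one onto the keyword arguments of a streaming completion call. The sketch below assumes a litellm-style client (suggested by the `ChatCompletionDeltaToolCall` type further down this page); the actual call site inside the runtime is not documented here.

```python
import litellm


async def stream_completion(runtime, messages: list[dict]):
    """Illustrative only: forward the runtime's settings to litellm."""
    return await litellm.acompletion(
        model=runtime.model,
        api_base=runtime.api_base,
        api_key=runtime.api_key,
        api_version=runtime.api_version,
        temperature=runtime.temperature,
        top_p=runtime.top_p,
        max_tokens=runtime.max_tokens,
        max_completion_tokens=runtime.max_completion_tokens,
        messages=messages,
        stream=True,  # the runtime streams responses (see _run_async)
    )
```
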
property `system_prompt`
Returns the system prompt as a formatted message dictionary

Returns:

| Type | Description |
|---|---|
| `dict` | System prompt formatted as a message with `role` and `content` |

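The general shape of the returned value, based on the table above; the prompt text itself is illustrative.

```python
# Only the "role"/"content" keys come from the table above; the content
# string here is a made-up placeholder.
system_message: dict = {
    "role": "system",
    "content": "You are Dasshh, a helpful assistant running in the terminal.",
}
```
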
method `__init__`
Initialize the DasshhRuntime with a session service

Parameters:

| Param | Description |
|---|---|
| `session_service` | The SessionService instance for managing conversations and events |

method `start`
Start the runtime worker that processes the query queue

method `stop`
Stop the runtime worker and cancel any pending operations

method `submit_query`
Submit a query to the runtime for processing

Parameters:

| Param | Description |
|---|---|
| `message` | The user message to send to the AI |
| `session_id` | The session ID to associate with this conversation |
| `post_message_callback` | Callback function to send UI events back to the interface |

Returns:

| Type | Description |
|---|---|
| `None` | This method is async and doesn't return a value |

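A sketch of submitting a query to a started runtime. The shape of the events passed to `post_message_callback` is not documented on this page, so the callback below simply collects whatever it receives.

```python
collected_events: list = []


def post_message(event) -> None:
    # In the real app this would forward the event to the UI; here we
    # just record it so the example stays self-contained.
    collected_events.append(event)


async def ask(runtime, session_id: str) -> None:
    await runtime.submit_query(
        message="Summarize my latest notes",
        session_id=session_id,
        post_message_callback=post_message,
    )
```
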
method `_load_model_config`
Load model configuration from the config file

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the API key is not set in the configuration |

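Because a missing API key surfaces as a `ValueError`, callers can guard for it. Whether the configuration is loaded when the runtime is constructed is an assumption, as are the import paths.

```python
from dasshh.core.runtime import DasshhRuntime   # assumed module path
from dasshh.data.session import SessionService  # assumed module path

try:
    runtime = DasshhRuntime(session_service=SessionService())
except ValueError as exc:
    # Raised when no API key is set in the configuration.
    print(f"Dasshh is not configured yet: {exc}")
```
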
method `_generate_prompt`
Generate the complete prompt, including the system message and conversation history

Parameters:

| Param | Description |
|---|---|
| `context` | The invocation context containing session and message information |

Returns:

| Type | Description |
|---|---|
| `List[dict]` | List of message dictionaries formatted for the AI model |

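Based on the description above, the returned list is the system prompt followed by the stored conversation turns and the current user message; the concrete contents below are illustrative.

```python
prompt: list[dict] = [
    {"role": "system", "content": "You are Dasshh, a helpful terminal assistant."},
    {"role": "user", "content": "What's in my Downloads folder?"},
    {"role": "assistant", "content": "You have three files: a.pdf, b.png, c.zip."},
    {"role": "user", "content": "Delete the zip file"},  # the current message
]
```
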
method `_process_queue`
Main worker loop that processes queries from the queue

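A generic sketch of the queue-worker pattern this method implements: pull submitted queries off an `asyncio.Queue` and process them one at a time, surviving per-query failures. This is illustrative only, not the actual Dasshh implementation.

```python
import asyncio


async def process_queue(queue: asyncio.Queue) -> None:
    while True:
        context = await queue.get()        # wait for the next submitted query
        try:
            await handle(context)          # run the completion and any tool calls
        except asyncio.CancelledError:
            raise                          # let stop() cancel the worker cleanly
        except Exception as exc:
            print(f"query failed: {exc}")  # log and keep the worker alive
        finally:
            queue.task_done()


async def handle(context) -> None:
    """Placeholder for the real per-query work."""
    ...
```
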
method `_run_async`
Execute the AI completion request and yield streaming responses

Parameters:

| Param | Description |
|---|---|
| `context` | The invocation context for this query |

Returns:

| Type | Description |
|---|---|
| `AsyncGenerator` | Stream of model responses |

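A sketch of consuming the kind of stream this method yields, assuming litellm/OpenAI-style chunks where each chunk carries a delta with optional text content and tool-call fragments.

```python
async def consume(stream) -> tuple[str, list]:
    text_parts: list[str] = []
    tool_calls: list = []
    async for chunk in stream:
        delta = chunk.choices[0].delta
        if delta.content:                       # streamed assistant text
            text_parts.append(delta.content)
        if getattr(delta, "tool_calls", None):  # streamed tool-call fragments
            tool_calls.extend(delta.tool_calls)
    return "".join(text_parts), tool_calls
```
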
method `_handle_tool_calls`

    async _handle_tool_calls(
        context: InvocationContext,
        tool_calls: list[ChatCompletionDeltaToolCall],
    ) -> None

Process and execute tool calls from the AI model

Parameters:

| Param | Description |
|---|---|
| `context` | The invocation context |
| `tool_calls` | List of tool calls to execute |

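A generic sketch of what executing tool calls usually involves: decode each call's JSON arguments and dispatch to a registered tool by name. The tool registry below is hypothetical; Dasshh's own registry is not documented on this page.

```python
import json
from typing import Any, Callable

TOOLS: dict[str, Callable[..., Any]] = {}  # hypothetical name -> tool mapping


def run_tool_calls(tool_calls: list) -> list[dict]:
    results = []
    for call in tool_calls:
        name = call.function.name
        args = json.loads(call.function.arguments or "{}")
        output = TOOLS[name](**args)       # execute the requested tool
        results.append({
            "role": "tool",                # result message fed back to the model
            "tool_call_id": call.id,
            "name": name,
            "content": str(output),
        })
    return results
```
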
InvocationContext:

A named tuple that stores the current query context.

| Param | Type | Description |
|---|---|---|
| `invocation_id` | `str` | The ID of the invocation |
| `message` | `dict` | The message to send to the LLM |
| `session_id` | `str` | The ID of the session |
| `system_instruction` | `bool` | Whether to use the system instruction |

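A `NamedTuple` matching the fields in the table above; field order and any defaults in the real class may differ.

```python
from typing import NamedTuple


class InvocationContext(NamedTuple):
    invocation_id: str        # the ID of the invocation
    message: dict             # the message to send to the LLM
    session_id: str           # the ID of the session
    system_instruction: bool  # whether to use the system instruction
```
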