feat: add log probabilities from OpenAI-compatible servers and SGLang native endpoint (#9240)
* Add log probabilities support for RL training
This enables Letta server to request and return log probabilities from
OpenAI-compatible providers (including SGLang) for use in RL training.
Changes:
- LLMConfig: Add return_logprobs and top_logprobs fields
- OpenAIClient: Set logprobs in ChatCompletionRequest when enabled
- LettaLLMAdapter: Add logprobs field and extract from response
- LettaResponse: Add logprobs field to return log probs to client
- LettaRequest: Add return_logprobs/top_logprobs for per-request override
- LettaAgentV3: Store and pass logprobs through to response
- agents.py: Handle request-level logprobs override
Usage:

    response = client.agents.messages.create(
        agent_id=agent_id,
        messages=[...],
        return_logprobs=True,
        top_logprobs=5,
    )
    print(response.logprobs)  # Per-token log probabilities
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* Add multi-turn token tracking for RL training via SGLang native endpoint
- Add TurnTokenData schema to track token IDs and logprobs per turn
- Add return_token_ids flag to LettaRequest and LLMConfig
- Create SGLangNativeClient for /generate endpoint (returns output_ids)
- Create SGLangNativeAdapter that uses native endpoint
- Modify LettaAgentV3 to accumulate turns across LLM calls
- Include turns in LettaResponse when return_token_ids=True
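The accumulated turns are meant to be consumed client-side when building RL training batches. A minimal sketch of that consumption, assuming the TurnTokenData fields this change introduces (`role`, `output_ids`, `content`) and a caller-supplied tokenizer; the helper name is illustrative, not shipped Letta code:

```python
# Sketch: flatten `turns` into one token sequence plus a loss mask.
# Assistant turns carry server-side token IDs (trainable, loss_mask=1);
# tool turns carry only text, tokenized client-side with loss_mask=0.
def build_loss_mask(turns, tokenize):
    """tokenize: callable mapping a string to a list of token IDs."""
    token_ids, loss_mask = [], []
    for turn in turns:
        if turn["role"] == "assistant":
            ids = turn.get("output_ids") or []
            token_ids.extend(ids)
            loss_mask.extend([1] * len(ids))
        else:  # "tool" result: non-trainable
            ids = tokenize(turn.get("content") or "")
            token_ids.extend(ids)
            loss_mask.extend([0] * len(ids))
    return token_ids, loss_mask
```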
* Fix: Add SGLang native adapter to step() method, not just stream()
* Fix: Handle Pydantic Message objects in SGLang native adapter
* Fix: Remove api_key reference from LLMConfig (not present)
* Fix: Add missing 'created' field to ChatCompletionResponse
* Add full tool support to SGLang native adapter
- Format tools into prompt in Qwen-style format
- Parse tool calls from <tool_call> tags in response
- Format tool results as <tool_response> in user messages
- Set finish_reason to 'tool_calls' when tools are called
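A minimal sketch of the tag-based parsing described above, assuming each <tool_call> block wraps a JSON object (the function name is illustrative, not the adapter's actual code):

```python
import json
import re

# Match Qwen-style <tool_call>...</tool_call> blocks in the model output.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text):
    """Return the parsed JSON payload of each well-formed <tool_call> block."""
    calls = []
    for raw in TOOL_CALL_RE.findall(text):
        try:
            calls.append(json.loads(raw))
        except json.JSONDecodeError:
            continue  # skip malformed blocks instead of failing the turn
    return calls
```

The adapter would then set finish_reason to "tool_calls" whenever this returns a non-empty list.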
* Use tokenizer.apply_chat_template for proper tool formatting
- Add tokenizer caching in SGLang native adapter
- Use apply_chat_template when tokenizer available
- Fall back to manual formatting if not
- Convert Letta messages to OpenAI format for tokenizer
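The template-or-fallback logic above might look like the following sketch. Hugging Face tokenizers do accept a `tools=` list in `apply_chat_template`; the function name and the ChatML-style fallback tags here are illustrative assumptions:

```python
# Render a prompt, preferring the tokenizer's chat template when one is
# cached, and falling back to manual formatting otherwise.
def render_prompt(messages, tools=None, tokenizer=None):
    if tokenizer is not None:
        # transformers tokenizers take tool JSON schemas via `tools=`.
        return tokenizer.apply_chat_template(
            messages, tools=tools, tokenize=False, add_generation_prompt=True
        )
    # Manual fallback: ChatML-style tags, no tool schema injection.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)
```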
* Fix: Use func_response instead of tool_return for ToolReturn content
* Fix: Get output_token_logprobs from meta_info in SGLang response
* Fix: Allow None in output_token_logprobs (SGLang format includes null)
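The two fixes above amount to defensively unpacking SGLang's meta_info["output_token_logprobs"], a list of [logprob, token_id, top_logprobs_or_null] triples in which whole entries may also be null. A hedged sketch (helper name illustrative):

```python
# Split SGLang's [[logprob, token_id, top_logprobs_or_null], ...] list
# into parallel token-ID and logprob lists, tolerating null entries.
def split_output_token_logprobs(entries):
    token_ids, logprobs = [], []
    for entry in entries or []:
        if entry is None:
            continue  # SGLang can emit null in place of a triple
        logprob, token_id = entry[0], entry[1]
        token_ids.append(token_id)
        logprobs.append(logprob)
    return token_ids, logprobs
```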
* chore: remove unrelated files from logprobs branch
* fix: add missing call_type param to adapter constructors in letta_agent_v3
The SGLang refactor dropped call_type=LLMCallType.agent_step when adapter
creation was extracted into conditional blocks. This restores it in all three
spots (SGLang in step, SimpleLLM in step, SGLang in stream).
* just stage-api && just publish-api
* fix: update expected LLMConfig fields in schema test for logprobs support
* chore: remove rllm provider references
* just stage-api && just publish-api
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-65-206.ec2.internal>
Co-authored-by: Letta <noreply@letta.com>
@@ -29192,43 +29192,6 @@
       "title": "ChatCompletionSystemMessageParam",
       "description": "Developer-provided instructions that the model should follow, regardless of\nmessages sent by the user. With o1 models and newer, use `developer` messages\nfor this purpose instead."
     },
-    "ChatCompletionTokenLogprob": {
-      "properties": {
-        "token": {
-          "type": "string",
-          "title": "Token"
-        },
-        "bytes": {
-          "anyOf": [
-            {
-              "items": {
-                "type": "integer"
-              },
-              "type": "array"
-            },
-            {
-              "type": "null"
-            }
-          ],
-          "title": "Bytes"
-        },
-        "logprob": {
-          "type": "number",
-          "title": "Logprob"
-        },
-        "top_logprobs": {
-          "items": {
-            "$ref": "#/components/schemas/TopLogprob"
-          },
-          "type": "array",
-          "title": "Top Logprobs"
-        }
-      },
-      "additionalProperties": true,
-      "type": "object",
-      "required": ["token", "logprob", "top_logprobs"],
-      "title": "ChatCompletionTokenLogprob"
-    },
     "ChatCompletionToolMessageParam": {
       "properties": {
         "content": {
@@ -29453,7 +29416,7 @@
         "logprobs": {
           "anyOf": [
             {
-              "$ref": "#/components/schemas/ChoiceLogprobs"
+              "$ref": "#/components/schemas/openai__types__chat__chat_completion__ChoiceLogprobs"
             },
             {
               "type": "null"
@@ -29469,42 +29432,6 @@
       "required": ["finish_reason", "index", "message"],
       "title": "Choice"
     },
-    "ChoiceLogprobs": {
-      "properties": {
-        "content": {
-          "anyOf": [
-            {
-              "items": {
-                "$ref": "#/components/schemas/ChatCompletionTokenLogprob"
-              },
-              "type": "array"
-            },
-            {
-              "type": "null"
-            }
-          ],
-          "title": "Content"
-        },
-        "refusal": {
-          "anyOf": [
-            {
-              "items": {
-                "$ref": "#/components/schemas/ChatCompletionTokenLogprob"
-              },
-              "type": "array"
-            },
-            {
-              "type": "null"
-            }
-          ],
-          "title": "Refusal"
-        }
-      },
-      "additionalProperties": true,
-      "type": "object",
-      "title": "ChoiceLogprobs",
-      "description": "Log probability information for the choice."
-    },
     "ClientToolSchema": {
       "properties": {
         "name": {
@@ -30525,6 +30452,30 @@
           "description": "If True, compaction events emit structured `SummaryMessage` and `EventMessage` types. If False (default), compaction messages are not included in the response.",
           "default": false
         },
+        "return_logprobs": {
+          "type": "boolean",
+          "title": "Return Logprobs",
+          "description": "If True, returns log probabilities of the output tokens in the response. Useful for RL training. Only supported for OpenAI-compatible providers (including SGLang).",
+          "default": false
+        },
+        "top_logprobs": {
+          "anyOf": [
+            {
+              "type": "integer"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Top Logprobs",
+          "description": "Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True."
+        },
+        "return_token_ids": {
+          "type": "boolean",
+          "title": "Return Token Ids",
+          "description": "If True, returns token IDs and logprobs for ALL LLM generations in the agent step, not just the last one. Uses SGLang native /generate endpoint. Returns 'turns' field with TurnTokenData for each assistant/tool turn. Required for proper multi-turn RL training with loss masking.",
+          "default": false
+        },
         "streaming": {
           "type": "boolean",
           "title": "Streaming",
@@ -36716,6 +36667,30 @@
           "title": "Strict",
           "description": "Enable strict mode for tool calling. When true, tool schemas include strict: true and additionalProperties: false, guaranteeing tool outputs match JSON schemas.",
           "default": false
+        },
+        "return_logprobs": {
+          "type": "boolean",
+          "title": "Return Logprobs",
+          "description": "Whether to return log probabilities of the output tokens. Useful for RL training.",
+          "default": false
+        },
+        "top_logprobs": {
+          "anyOf": [
+            {
+              "type": "integer"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Top Logprobs",
+          "description": "Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True."
+        },
+        "return_token_ids": {
+          "type": "boolean",
+          "title": "Return Token Ids",
+          "description": "Whether to return token IDs for all LLM generations via SGLang native endpoint. Required for multi-turn RL training with loss masking. Only works with SGLang provider.",
+          "default": false
         }
       },
       "type": "object",
@@ -36888,6 +36863,30 @@
           "description": "If True, compaction events emit structured `SummaryMessage` and `EventMessage` types. If False (default), compaction messages are not included in the response.",
          "default": false
         },
+        "return_logprobs": {
+          "type": "boolean",
+          "title": "Return Logprobs",
+          "description": "If True, returns log probabilities of the output tokens in the response. Useful for RL training. Only supported for OpenAI-compatible providers (including SGLang).",
+          "default": false
+        },
+        "top_logprobs": {
+          "anyOf": [
+            {
+              "type": "integer"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Top Logprobs",
+          "description": "Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True."
+        },
+        "return_token_ids": {
+          "type": "boolean",
+          "title": "Return Token Ids",
+          "description": "If True, returns token IDs and logprobs for ALL LLM generations in the agent step, not just the last one. Uses SGLang native /generate endpoint. Returns 'turns' field with TurnTokenData for each assistant/tool turn. Required for proper multi-turn RL training with loss masking.",
+          "default": false
+        },
         "callback_url": {
           "anyOf": [
             {
@@ -37083,6 +37082,30 @@
           "description": "If True, compaction events emit structured `SummaryMessage` and `EventMessage` types. If False (default), compaction messages are not included in the response.",
           "default": false
         },
+        "return_logprobs": {
+          "type": "boolean",
+          "title": "Return Logprobs",
+          "description": "If True, returns log probabilities of the output tokens in the response. Useful for RL training. Only supported for OpenAI-compatible providers (including SGLang).",
+          "default": false
+        },
+        "top_logprobs": {
+          "anyOf": [
+            {
+              "type": "integer"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Top Logprobs",
+          "description": "Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True."
+        },
+        "return_token_ids": {
+          "type": "boolean",
+          "title": "Return Token Ids",
+          "description": "If True, returns token IDs and logprobs for ALL LLM generations in the agent step, not just the last one. Uses SGLang native /generate endpoint. Returns 'turns' field with TurnTokenData for each assistant/tool turn. Required for proper multi-turn RL training with loss masking.",
+          "default": false
+        },
         "agent_id": {
           "type": "string",
           "maxLength": 42,
@@ -37457,6 +37480,30 @@
           "title": "Include Compaction Messages",
           "description": "If True, compaction events emit structured `SummaryMessage` and `EventMessage` types. If False (default), compaction messages are not included in the response.",
           "default": false
+        },
+        "return_logprobs": {
+          "type": "boolean",
+          "title": "Return Logprobs",
+          "description": "If True, returns log probabilities of the output tokens in the response. Useful for RL training. Only supported for OpenAI-compatible providers (including SGLang).",
+          "default": false
+        },
+        "top_logprobs": {
+          "anyOf": [
+            {
+              "type": "integer"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Top Logprobs",
+          "description": "Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True."
+        },
+        "return_token_ids": {
+          "type": "boolean",
+          "title": "Return Token Ids",
+          "description": "If True, returns token IDs and logprobs for ALL LLM generations in the agent step, not just the last one. Uses SGLang native /generate endpoint. Returns 'turns' field with TurnTokenData for each assistant/tool turn. Required for proper multi-turn RL training with loss masking.",
+          "default": false
         }
       },
       "type": "object",
@@ -37517,6 +37564,32 @@
         "usage": {
           "$ref": "#/components/schemas/LettaUsageStatistics",
           "description": "The usage statistics of the agent."
+        },
+        "logprobs": {
+          "anyOf": [
+            {
+              "$ref": "#/components/schemas/letta__schemas__openai__chat_completion_response__ChoiceLogprobs"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "description": "Log probabilities of the output tokens from the last LLM call. Only present if return_logprobs was enabled."
+        },
+        "turns": {
+          "anyOf": [
+            {
+              "items": {
+                "$ref": "#/components/schemas/TurnTokenData"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Turns",
+          "description": "Token data for all LLM generations in multi-turn agent interaction. Includes token IDs and logprobs for each assistant turn, plus tool result content. Only present if return_token_ids was enabled. Used for RL training with loss masking."
         }
       },
       "type": "object",
@@ -37708,6 +37781,30 @@
           "description": "If True, compaction events emit structured `SummaryMessage` and `EventMessage` types. If False (default), compaction messages are not included in the response.",
           "default": false
         },
+        "return_logprobs": {
+          "type": "boolean",
+          "title": "Return Logprobs",
+          "description": "If True, returns log probabilities of the output tokens in the response. Useful for RL training. Only supported for OpenAI-compatible providers (including SGLang).",
+          "default": false
+        },
+        "top_logprobs": {
+          "anyOf": [
+            {
+              "type": "integer"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Top Logprobs",
+          "description": "Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True."
+        },
+        "return_token_ids": {
+          "type": "boolean",
+          "title": "Return Token Ids",
+          "description": "If True, returns token IDs and logprobs for ALL LLM generations in the agent step, not just the last one. Uses SGLang native /generate endpoint. Returns 'turns' field with TurnTokenData for each assistant/tool turn. Required for proper multi-turn RL training with loss masking.",
+          "default": false
+        },
         "streaming": {
           "type": "boolean",
           "title": "Streaming",
@@ -39265,6 +39362,30 @@
           "description": "Enable strict mode for tool calling. When true, tool schemas include strict: true and additionalProperties: false, guaranteeing tool outputs match JSON schemas.",
           "default": false
         },
+        "return_logprobs": {
+          "type": "boolean",
+          "title": "Return Logprobs",
+          "description": "Whether to return log probabilities of the output tokens. Useful for RL training.",
+          "default": false
+        },
+        "top_logprobs": {
+          "anyOf": [
+            {
+              "type": "integer"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Top Logprobs",
+          "description": "Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True."
+        },
+        "return_token_ids": {
+          "type": "boolean",
+          "title": "Return Token Ids",
+          "description": "Whether to return token IDs for all LLM generations via SGLang native endpoint. Required for multi-turn RL training with loss masking. Only works with SGLang provider.",
+          "default": false
+        },
         "max_context_window": {
           "type": "integer",
           "title": "Max Context Window",
@@ -45313,13 +45434,15 @@
       "type": "object",
       "title": "ToolUpdate"
     },
-    "TopLogprob": {
+    "TurnTokenData": {
       "properties": {
-        "token": {
+        "role": {
           "type": "string",
-          "title": "Token"
+          "enum": ["assistant", "tool"],
+          "title": "Role",
+          "description": "Role of this turn: 'assistant' for LLM generations (trainable), 'tool' for tool results (non-trainable)."
         },
-        "bytes": {
+        "output_ids": {
           "anyOf": [
             {
               "items": {
@@ -45331,17 +45454,54 @@
               "type": "null"
             }
           ],
-          "title": "Bytes"
+          "title": "Output Ids",
+          "description": "Token IDs from SGLang native endpoint. Only present for assistant turns."
         },
-        "logprob": {
-          "type": "number",
-          "title": "Logprob"
+        "output_token_logprobs": {
+          "anyOf": [
+            {
+              "items": {
+                "items": {},
+                "type": "array"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Output Token Logprobs",
+          "description": "Logprobs from SGLang: [[logprob, token_id, top_logprob_or_null], ...]. Only present for assistant turns."
+        },
+        "content": {
+          "anyOf": [
+            {
+              "type": "string"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Content",
+          "description": "Text content. For tool turns, client tokenizes this with loss_mask=0."
+        },
+        "tool_name": {
+          "anyOf": [
+            {
+              "type": "string"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Tool Name",
+          "description": "Name of the tool called. Only present for tool turns."
         }
       },
       "additionalProperties": true,
       "type": "object",
-      "required": ["token", "logprob"],
-      "title": "TopLogprob"
+      "required": ["role"],
+      "title": "TurnTokenData",
+      "description": "Token data for a single LLM generation turn in a multi-turn agent interaction.\n\nUsed for RL training to track token IDs and logprobs across all LLM calls,\nnot just the final one. Tool results are included so the client can tokenize\nthem with loss_mask=0 (non-trainable)."
     },
     "UpdateAgent": {
       "properties": {
@@ -48653,6 +48813,105 @@
       "required": ["status"],
       "title": "ToolReturn"
     },
+    "letta__schemas__openai__chat_completion_response__ChatCompletionTokenLogprob": {
+      "properties": {
+        "token": {
+          "type": "string",
+          "title": "Token"
+        },
+        "bytes": {
+          "anyOf": [
+            {
+              "items": {
+                "type": "integer"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Bytes"
+        },
+        "logprob": {
+          "type": "number",
+          "title": "Logprob"
+        },
+        "top_logprobs": {
+          "items": {
+            "$ref": "#/components/schemas/letta__schemas__openai__chat_completion_response__TopLogprob"
+          },
+          "type": "array",
+          "title": "Top Logprobs"
+        }
+      },
+      "type": "object",
+      "required": ["token", "logprob", "top_logprobs"],
+      "title": "ChatCompletionTokenLogprob"
+    },
+    "letta__schemas__openai__chat_completion_response__ChoiceLogprobs": {
+      "properties": {
+        "content": {
+          "anyOf": [
+            {
+              "items": {
+                "$ref": "#/components/schemas/letta__schemas__openai__chat_completion_response__ChatCompletionTokenLogprob"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Content"
+        },
+        "refusal": {
+          "anyOf": [
+            {
+              "items": {
+                "$ref": "#/components/schemas/letta__schemas__openai__chat_completion_response__ChatCompletionTokenLogprob"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Refusal"
+        }
+      },
+      "type": "object",
+      "title": "ChoiceLogprobs"
+    },
+    "letta__schemas__openai__chat_completion_response__TopLogprob": {
+      "properties": {
+        "token": {
+          "type": "string",
+          "title": "Token"
+        },
+        "bytes": {
+          "anyOf": [
+            {
+              "items": {
+                "type": "integer"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Bytes"
+        },
+        "logprob": {
+          "type": "number",
+          "title": "Logprob"
+        }
+      },
+      "type": "object",
+      "required": ["token", "logprob"],
+      "title": "TopLogprob"
+    },
    "letta__serialize_schemas__pydantic_agent_schema__AgentSchema": {
       "properties": {
         "agent_type": {
@@ -48999,6 +49258,42 @@
       "type": "object",
       "title": "ToolExecuteRequest"
     },
+    "openai__types__chat__chat_completion__ChoiceLogprobs": {
+      "properties": {
+        "content": {
+          "anyOf": [
+            {
+              "items": {
+                "$ref": "#/components/schemas/openai__types__chat__chat_completion_token_logprob__ChatCompletionTokenLogprob"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Content"
+        },
+        "refusal": {
+          "anyOf": [
+            {
+              "items": {
+                "$ref": "#/components/schemas/openai__types__chat__chat_completion_token_logprob__ChatCompletionTokenLogprob"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Refusal"
+        }
+      },
+      "additionalProperties": true,
+      "type": "object",
+      "title": "ChoiceLogprobs",
+      "description": "Log probability information for the choice."
+    },
     "openai__types__chat__chat_completion_message_function_tool_call__Function": {
       "properties": {
         "arguments": {
@@ -49032,6 +49327,73 @@
       "title": "Function",
       "description": "The function that the model called."
     },
+    "openai__types__chat__chat_completion_token_logprob__ChatCompletionTokenLogprob": {
+      "properties": {
+        "token": {
+          "type": "string",
+          "title": "Token"
+        },
+        "bytes": {
+          "anyOf": [
+            {
+              "items": {
+                "type": "integer"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Bytes"
+        },
+        "logprob": {
+          "type": "number",
+          "title": "Logprob"
+        },
+        "top_logprobs": {
+          "items": {
+            "$ref": "#/components/schemas/openai__types__chat__chat_completion_token_logprob__TopLogprob"
+          },
+          "type": "array",
+          "title": "Top Logprobs"
+        }
+      },
+      "additionalProperties": true,
+      "type": "object",
+      "required": ["token", "logprob", "top_logprobs"],
+      "title": "ChatCompletionTokenLogprob"
+    },
+    "openai__types__chat__chat_completion_token_logprob__TopLogprob": {
+      "properties": {
+        "token": {
+          "type": "string",
+          "title": "Token"
+        },
+        "bytes": {
+          "anyOf": [
+            {
+              "items": {
+                "type": "integer"
+              },
+              "type": "array"
+            },
+            {
+              "type": "null"
+            }
+          ],
+          "title": "Bytes"
+        },
+        "logprob": {
+          "type": "number",
+          "title": "Logprob"
+        }
+      },
+      "additionalProperties": true,
+      "type": "object",
+      "required": ["token", "logprob"],
+      "title": "TopLogprob"
+    },
     "LettaMessageUnion": {
       "oneOf": [
         {