
Commit 904992d

fix(compogen): escape curly braces for readme.com compatibility (#1124)
Because

- `readme.com` treats `{variable}` as a variable placeholder, causing parsing errors and display issues in component documentation
- JSON examples like `{"key": "value"}` and nested JSON objects like `{"mappings": {"properties"}}` were not being properly formatted for `readme.com` either
- Go template variables `{{variable}}` needed special handling to display correctly
- The existing escaping logic was incomplete and couldn't handle complex nested JSON structures

This commit

- Implements comprehensive curly brace escaping for README.io compatibility in the compogen tool
- Wraps JSON objects and arrays in backticks (e.g., `{"key": "value"}`) for proper code formatting
- Preserves Go template syntax by wrapping `{{variable}}` in backticks
- Escapes variable placeholders like `{placeholder}` to `\{placeholder\}` to prevent README.io variable resolution
- Adds a balanced brace matching algorithm to handle nested JSON structures correctly
- Refactors the complex escaping logic into smaller, maintainable functions with single responsibilities
- Includes comprehensive unit tests covering all escaping scenarios
- Updates integration tests to verify end-to-end functionality
- Fixes MDX parsing errors that were occurring in generated component documentation
1 parent 5d2fbbc commit 904992d
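
For illustration, here is a minimal Go sketch of the escaping behavior described in the commit message: scan a description for balanced brace spans, wrap Go templates and JSON-like objects in backticks, and backslash-escape bare placeholders. The function names (`escapeCurlyBraces`, `matchBalancedBraces`) and the classification heuristics are assumptions for this sketch, not the actual compogen identifiers or logic.

```go
package main

import (
	"fmt"
	"strings"
)

// matchBalancedBraces returns the index just past the brace that closes the
// one at s[start], or -1 if the braces never balance.
func matchBalancedBraces(s string, start int) int {
	depth := 0
	for i := start; i < len(s); i++ {
		switch s[i] {
		case '{':
			depth++
		case '}':
			depth--
			if depth == 0 {
				return i + 1
			}
		}
	}
	return -1
}

// escapeCurlyBraces rewrites a description so readme.com renders braces
// verbatim: Go templates ({{var}}) and JSON-like objects are wrapped in
// backticks, and bare placeholders such as {cachedContent} become
// \{cachedContent\}. Heuristics here are illustrative only.
func escapeCurlyBraces(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if s[i] != '{' {
			b.WriteByte(s[i])
			i++
			continue
		}
		end := matchBalancedBraces(s, i)
		if end == -1 { // unbalanced brace: copy as-is
			b.WriteByte(s[i])
			i++
			continue
		}
		span := s[i:end]
		switch {
		case strings.HasPrefix(span, "{{"): // Go template variable
			b.WriteString("`" + span + "`")
		case strings.Contains(span, ":"): // looks like a JSON object
			b.WriteString("`" + span + "`")
		default: // plain placeholder, e.g. {cachedContent}
			b.WriteString(`\{` + strings.Trim(span, "{}") + `\}`)
		}
		i = end
	}
	return b.String()
}

func main() {
	fmt.Println(escapeCurlyBraces("Format: cachedContents/{cachedContent}."))
	fmt.Println(escapeCurlyBraces(`Each message should adhere to the format: {"role": "user", "content": "message content"}.`))
}
```

Running the sketch produces `cachedContents/\{cachedContent\}` for the placeholder case and wraps the JSON example in backticks, which matches the before/after pairs visible in the generated README diffs below.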

File tree

17 files changed, +331 -74 lines changed


pkg/component/ai/anthropic/v0/README.mdx

Lines changed: 2 additions & 2 deletions
@@ -52,7 +52,7 @@ Anthropic's text generation models (often called generative pre-trained transfor
| Prompt (required) | `prompt` | string | The prompt text. |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is set using a generic message as "You are a helpful assistant.". |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: The prompt images will be injected in the order they are provided to the 'prompt' message. Anthropic doesn't support sending images via image-url, use this field instead). |
- | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}. |
+ | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`. |
| Seed | `seed` | integer | The seed (Note: Not supported by Anthropic Models). |
| Temperature | `temperature` | number | The temperature for sampling. |
| Top K | `top-k` | integer | Top k for sampling. |
@@ -64,7 +64,7 @@ Anthropic's text generation models (often called generative pre-trained transfor

<h4 id="text-generation-chat-chat-history">Chat History</h4>

- Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.
+ Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.


<div class="markdown-col-no-wrap" data-col-1 data-col-2>

pkg/component/ai/cohere/v0/README.mdx

Lines changed: 2 additions & 2 deletions
@@ -55,7 +55,7 @@ Cohere's text generation models (often called generative pre-trained transformer
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is using a generic message as "You are a helpful assistant.". |
| Documents | `documents` | array[string] | The documents to be used for the model, for optimal performance, the length of each document should be less than 300 words. |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: As for 2024-06-24 Cohere models are not multimodal, so images will be ignored.). |
- | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : {"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}. |
+ | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : `{"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}`. |
| Seed | `seed` | integer | The seed (default=42). |
| Temperature | `temperature` | number | The temperature for sampling (default=0.7). |
| Top K | `top-k` | integer | Top k for sampling (default=10). |
@@ -67,7 +67,7 @@ Cohere's text generation models (often called generative pre-trained transformer

<h4 id="text-generation-chat-chat-history">Chat History</h4>

- Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : {"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}.
+ Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : `{"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}`.


<div class="markdown-col-no-wrap" data-col-1 data-col-2>

pkg/component/ai/fireworksai/v0/README.mdx

Lines changed: 2 additions & 2 deletions
@@ -53,7 +53,7 @@ Fireworks AI's text generation models (often called generative pre-trained trans
| Prompt (required) | `prompt` | string | The prompt text. |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is set using a generic message as "You are a helpful assistant.". |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: According to Fireworks AI documentation on 2024-07-24, the total number of images included in a single API request should not exceed 30, and all the images should be smaller than 5MB in size). |
- | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | 'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.' |
+ | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | 'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.' |
| Seed | `seed` | integer | The seed. |
| Temperature | `temperature` | number | The temperature for sampling. |
| Top K | `top-k` | integer | Integer to define the top tokens considered within the sample operation to create new text. |
@@ -66,7 +66,7 @@ Fireworks AI's text generation models (often called generative pre-trained trans

<h4 id="text-generation-chat-chat-history">Chat History</h4>

- 'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.'
+ 'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.'


<div class="markdown-col-no-wrap" data-col-1 data-col-2>

pkg/component/ai/gemini/v0/README.mdx

Lines changed: 7 additions & 6 deletions
@@ -49,7 +49,7 @@ Gemini's multimodal models understand text and images. They generate text output
| Input | Field ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_CHAT` |
- | Model (required) | `model` | string | ID of the model to use. The value is one of the following: `gemini-2.5-pro`: Optimized for enhanced thinking and reasoning, multimodal understanding, advanced coding, and more. `gemini-2.5-flash`: Optimized for Adaptive thinking, cost efficiency. `gemini-2.0-flash-lite`: Optimized for Most cost-efficient model supporting high throughput. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`gemini-2.5-pro`</li><li>`gemini-2.5-flash`</li><li>`gemini-2.0-flash-lite`</li></ul></details> |
+ | Model (required) | `model` | string | ID of the model to use. The value is one of the following: `gemini-2.5-pro`: Optimized for enhanced thinking and reasoning, multimodal understanding, advanced coding, and more. `gemini-2.5-flash`: Optimized for adaptive thinking, cost efficiency. `gemini-2.5-flash-lite`: Optimized for most cost-efficient model supporting high throughput. `gemini-2.5-flash-image-preview`: Optimized for precise, conversational image generation and editing. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`gemini-2.5-pro`</li><li>`gemini-2.5-flash`</li><li>`gemini-2.5-flash-lite`</li><li>`gemini-2.5-flash-image-preview`</li></ul></details> |
| Stream | `stream` | boolean | Whether to incrementally stream the response using server-sent events (SSE). |
| Prompt | `prompt` | string | The main text instruction or query for the model. |
| Images | `images` | array[string] | URI references or base64 content of input images. |
@@ -69,7 +69,7 @@ Gemini's multimodal models understand text and images. They generate text output
| [Safety Settings](#chat-safety-settings) | `safety-settings` | array[object] | Safety settings for content filtering. |
| [System Instruction](#chat-system-instruction) | `system-instruction` | object | A system instruction to guide the model behavior. |
| [Generation Config](#chat-generation-config) | `generation-config` | object | Generation configuration for the request. |
- | Cached Content | `cached-content` | string | The name of a cached content to use as context. Format: cachedContents/{cachedContent}. |
+ | Cached Content | `cached-content` | string | The name of a cached content to use as context. Format: cachedContents/\{cachedContent\}. |

</div>
<details>
@@ -518,7 +518,8 @@ Config for thinking features.

| Output | Field ID | Type | Description |
| :--- | :--- | :--- | :--- |
- | Texts (optional) | `texts` | array[string] | Simplified text output extracted from candidates. Each string represents the concatenated text content from the corresponding candidate's parts, including thought processes when `include-thoughts` is enabled. This field provides easy access to the generated text without needing to traverse the candidate structure. Updated in real-time during streaming. |
+ | Texts (optional) | `texts` | array[string] | Simplified text output extracted from candidates. Each string represents the concatenated text content from the corresponding candidate's parts, including thought processes when `include-thoughts` is enabled. This field provides easy access to the generated text without needing to traverse the candidate structure. Updated in real-time during streaming. |
+ | Images (optional) | `images` | array[image/webp] | Images output extracted and converted from candidates. This field provides easy access to the generated images as base64-encoded strings. The original binary data is removed from the candidates field to prevent raw binary exposure in JSON output. This field is only available when the model supports image generation. |
| Usage (optional) | `usage` | object | Token usage statistics: prompt tokens, completion tokens, total tokens, etc. |
| [Candidates](#chat-candidates) (optional) | `candidates` | array[object] | Complete candidate objects from the model containing rich metadata and structured content. Each candidate includes safety ratings, finish reason, token counts, citations, content parts (including thought processes when include-thoughts is enabled), and other detailed information. This provides full access to all response data beyond just text. Updated incrementally during streaming with accumulated content and latest metadata. |
| [Usage Metadata](#chat-usage-metadata) (optional) | `usage-metadata` | object | Metadata on the generation request's token usage. |
@@ -924,7 +925,7 @@ Context caching allows you to cache input tokens and reference them in subsequen
| Task ID (required) | `task` | string | `TASK_CACHE` |
| Operation (required) | `operation` | string | The cache operation to perform. The value is one of the following: `create`: Create a new cached content. `list`: List all cached contents. `get`: Retrieve a specific cached content. `update`: Update an existing cached content (only expiration time can be updated). `delete`: Delete a cached content. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`create`</li><li>`list`</li><li>`get`</li><li>`update`</li><li>`delete`</li></ul></details> |
| Model (required) | `model` | string | ID of the model to use for caching. Required for create operations. The model is immutable after creation. The value is one of the following: `gemini-2.5-pro`: Optimized for enhanced thinking and reasoning, multimodal understanding, advanced coding, and more. `gemini-2.5-flash`: Optimized for Adaptive thinking, cost efficiency. `gemini-2.0-flash-lite`: Optimized for Most cost-efficient model supporting high throughput. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`gemini-2.5-pro`</li><li>`gemini-2.5-flash`</li><li>`gemini-2.0-flash-lite`</li></ul></details> |
- | Cache Name | `cache-name` | string | [**GET**, **UPDATE**, **DELETE**] The name of the cached content for get, update, or delete operations. Format: cachedContents/{cachedContent}. Required for get, update, and delete operations. |
+ | Cache Name | `cache-name` | string | [**GET**, **UPDATE**, **DELETE**] The name of the cached content for get, update, or delete operations. Format: cachedContents/\{cachedContent\}. Required for get, update, and delete operations. |
| Prompt | `prompt` | string | [**CREATE**] The main text instruction or query to be cached for create operations. |
| Images | `images` | array[string] | [**CREATE**] URI references or base64 content of input images to be cached for create operations. |
| Audio | `audio` | array[string] | [**CREATE**] URI references or base64 content of input audio to be cached for create operations. |
@@ -1252,7 +1253,7 @@ Configuration for specifying function calling behavior.
| Display Name | `display-name` | string | Optional. The user-provided name of the cached content. |
| Expire Time | `expire-time` | string | Expiration time of the cached content in RFC3339 format. |
| Model | `model` | string | The name of the Model to use for cached content. |
- | Name | `name` | string | The resource name referring to the cached content. Format: cachedContents/{cachedContent} |
+ | Name | `name` | string | The resource name referring to the cached content. Format: cachedContents/\{cachedContent\} |
| Update Time | `update-time` | string | Last update time of the cached content in RFC3339 format. |
| [Usage Metadata](#cache-usage-metadata) | `usage-metadata` | object | Token usage statistics for the cached content. |

@@ -1282,7 +1283,7 @@ Configuration for specifying function calling behavior.
| Display Name | `display-name` | string | Optional. The user-provided name of the cached content. |
| Expire Time | `expire-time` | string | Expiration time of the cached content in RFC3339 format. |
| Model | `model` | string | The name of the Model to use for cached content. |
- | Name | `name` | string | The resource name referring to the cached content. Format: cachedContents/{cachedContent} |
+ | Name | `name` | string | The resource name referring to the cached content. Format: cachedContents/\{cachedContent\} |
| Update Time | `update-time` | string | Last update time of the cached content in RFC3339 format. |
| [Usage Metadata](#cache-usage-metadata) | `usage-metadata` | object | Token usage statistics for the cached content. |

pkg/component/ai/groq/v0/README.mdx

Lines changed: 2 additions & 2 deletions
@@ -52,7 +52,7 @@ Groq serves open source text generation models (often called generative pre-trai
| Prompt (required) | `prompt` | string | The prompt text. |
| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is set using a generic message as "You are a helpful assistant.". |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: Only a subset of OSS models support image inputs). |
- | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}. |
+ | [Chat History](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`. |
| Seed | `seed` | integer | The seed. |
| Temperature | `temperature` | number | The temperature for sampling. |
| Top K | `top-k` | integer | Integer to define the top tokens considered within the sample operation to create new text. |
@@ -66,7 +66,7 @@ Groq serves open source text generation models (often called generative pre-trai

<h4 id="text-generation-chat-chat-history">Chat History</h4>

- Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.
+ Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.


<div class="markdown-col-no-wrap" data-col-1 data-col-2>
