fix(compogen): escape curly braces for readme.com compatibility (#1124)
Because
- `readme.com` treats `{variable}` as a variable placeholder, causing parsing errors and display issues in component documentation
- JSON examples like `{"key": "value"}` and nested JSON objects like `{"mappings": {"properties"}}` were likewise not formatted properly for `readme.com`
- Go template variables such as `{{variable}}` needed special handling to display correctly
- The existing escaping logic was incomplete and couldn't handle complex nested JSON structures
This commit
- Implements comprehensive curly-brace escaping in the compogen tool for `readme.com` compatibility
- Wraps JSON objects and arrays in backticks (e.g., `{"key": "value"}`) so they render as code
- Preserves Go template syntax by wrapping `{{variable}}` in backticks
- Escapes variable placeholders such as `{placeholder}` to `\{placeholder\}` to prevent `readme.com` variable resolution
- Adds a balanced-brace-matching algorithm to handle nested JSON structures correctly
- Refactors the complex escaping logic into smaller, maintainable functions with single responsibilities
- Includes comprehensive unit tests covering all escaping scenarios
- Updates integration tests to verify end-to-end functionality
- Fixes MDX parsing errors that were occurring in the generated component documentation
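The escaping rules above can be sketched in Go. This is a minimal illustration of the approach, not the actual compogen implementation: the function names and the heuristic for telling a JSON object apart from a plain placeholder (here, the presence of a double quote inside the braces) are assumptions for the sake of the example.

```go
package main

import (
	"fmt"
	"strings"
)

// matchBalanced returns the index just past the brace that closes the
// '{' at s[start], tracking nesting depth, or -1 if braces never balance.
func matchBalanced(s string, start int) int {
	depth := 0
	for i := start; i < len(s); i++ {
		switch s[i] {
		case '{':
			depth++
		case '}':
			depth--
			if depth == 0 {
				return i + 1
			}
		}
	}
	return -1
}

// escapeBraces classifies each balanced brace span: Go templates and
// JSON-like objects are wrapped in backticks, plain placeholders are
// backslash-escaped, and unbalanced braces are escaped individually.
func escapeBraces(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if s[i] != '{' {
			b.WriteByte(s[i])
			i++
			continue
		}
		end := matchBalanced(s, i)
		if end == -1 { // unbalanced: escape the lone brace
			b.WriteString(`\{`)
			i++
			continue
		}
		span := s[i:end]
		switch {
		case strings.HasPrefix(span, "{{"): // Go template variable
			b.WriteString("`" + span + "`")
		case strings.Contains(span, `"`): // JSON-like object (assumed heuristic)
			b.WriteString("`" + span + "`")
		default: // plain placeholder such as {cachedContent}
			b.WriteString(`\{` + span[1:len(span)-1] + `\}`)
		}
		i = end
	}
	return b.String()
}

func main() {
	fmt.Println(escapeBraces(`Format: cachedContents/{cachedContent}.`)) // Format: cachedContents/\{cachedContent\}.
	fmt.Println(escapeBraces(`Example: {"mappings": {"properties"}}`))
	fmt.Println(escapeBraces(`Template: {{variable}}`))
}
```

The balanced matcher is what lets nested objects like `{"mappings": {"properties"}}` be treated as one span instead of escaping each brace independently, which is the failure mode the commit describes in the previous logic.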
`pkg/component/ai/anthropic/v0/README.mdx` (+2 -2)

```diff
@@ -52,7 +52,7 @@ Anthropic's text generation models (often called generative pre-trained transfor
 | Prompt (required) |`prompt`| string | The prompt text. |
 | System Message |`system-message`| string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is set using a generic message as "You are a helpful assistant.". |
 | Prompt Images |`prompt-images`| array[string]| The prompt images (Note: The prompt images will be injected in the order they are provided to the 'prompt' message. Anthropic doesn't support sending images via image-url, use this field instead). |
-|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}. |
+|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`. |
 | Seed |`seed`| integer | The seed (Note: Not supported by Anthropic Models). |
 | Temperature |`temperature`| number | The temperature for sampling. |
 | Top K |`top-k`| integer | Top k for sampling. |
@@ -64,7 +64,7 @@ Anthropic's text generation models (often called generative pre-trained transfor
-Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.
+Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.
```
`pkg/component/ai/cohere/v0/README.mdx` (+2 -2)

```diff
@@ -55,7 +55,7 @@ Cohere's text generation models (often called generative pre-trained transformer
 | System Message |`system-message`| string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is using a generic message as "You are a helpful assistant.". |
 | Documents |`documents`| array[string]| The documents to be used for the model, for optimal performance, the length of each document should be less than 300 words. |
 | Prompt Images |`prompt-images`| array[string]| The prompt images (Note: As for 2024-06-24 Cohere models are not multimodal, so images will be ignored.). |
-|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : {"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}. |
+|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : `{"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}`. |
 | Seed |`seed`| integer | The seed (default=42). |
 | Temperature |`temperature`| number | The temperature for sampling (default=0.7). |
 | Top K |`top-k`| integer | Top k for sampling (default=10). |
@@ -67,7 +67,7 @@ Cohere's text generation models (often called generative pre-trained transformer
-Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : {"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}.
+Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : `{"role": "The message role, i.e. `USER` or `CHATBOT`", "content": "message content"}`.
```
`pkg/component/ai/fireworksai/v0/README.mdx` (+2 -2)

```diff
@@ -53,7 +53,7 @@ Fireworks AI's text generation models (often called generative pre-trained trans
 | Prompt (required) |`prompt`| string | The prompt text. |
 | System Message |`system-message`| string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is set using a generic message as "You are a helpful assistant.". |
 | Prompt Images |`prompt-images`| array[string]| The prompt images (Note: According to Fireworks AI documentation on 2024-07-24, the total number of images included in a single API request should not exceed 30, and all the images should be smaller than 5MB in size). |
-|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| 'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.' |
+|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| 'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.' |
 | Seed |`seed`| integer | The seed. |
 | Temperature |`temperature`| number | The temperature for sampling. |
 | Top K |`top-k`| integer | Integer to define the top tokens considered within the sample operation to create new text. |
@@ -66,7 +66,7 @@ Fireworks AI's text generation models (often called generative pre-trained trans
-'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.'
+'Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.'
```
`pkg/component/ai/gemini/v0/README.mdx` (+7 -6)

```diff
@@ -49,7 +49,7 @@ Gemini's multimodal models understand text and images. They generate text output
 | Input | Field ID | Type | Description |
 | :--- | :--- | :--- | :--- |
 | Task ID (required) |`task`| string |`TASK_CHAT`|
-| Model (required) |`model`| string | ID of the model to use. The value is one of the following: `gemini-2.5-pro`: Optimized for enhanced thinking and reasoning, multimodal understanding, advanced coding, and more. `gemini-2.5-flash`: Optimized for Adaptive thinking, cost efficiency. `gemini-2.0-flash-lite`: Optimized for Most cost-efficient model supporting high throughput. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`gemini-2.5-pro`</li><li>`gemini-2.5-flash`</li><li>`gemini-2.0-flash-lite`</li></ul></details> |
+| Model (required) |`model`| string | ID of the model to use. The value is one of the following: `gemini-2.5-pro`: Optimized for enhanced thinking and reasoning, multimodal understanding, advanced coding, and more. `gemini-2.5-flash`: Optimized for adaptive thinking, cost efficiency. `gemini-2.5-flash-lite`: Optimized for most cost-efficient model supporting high throughput. `gemini-2.5-flash-image-preview`: Optimized for precise, conversational image generation and editing. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`gemini-2.5-pro`</li><li>`gemini-2.5-flash`</li><li>`gemini-2.5-flash-lite`</li><li>`gemini-2.5-flash-image-preview`</li></ul></details> |
 | Stream |`stream`| boolean | Whether to incrementally stream the response using server-sent events (SSE). |
 | Prompt |`prompt`| string | The main text instruction or query for the model. |
 | Images |`images`| array[string]| URI references or base64 content of input images. |
@@ -69,7 +69,7 @@ Gemini's multimodal models understand text and images. They generate text output
 |[Safety Settings](#chat-safety-settings)|`safety-settings`| array[object]| Safety settings for content filtering. |
 |[System Instruction](#chat-system-instruction)|`system-instruction`| object | A system instruction to guide the model behavior. |
 |[Generation Config](#chat-generation-config)|`generation-config`| object | Generation configuration for the request. |
-| Cached Content |`cached-content`| string | The name of a cached content to use as context. Format: cachedContents/{cachedContent}. |
+| Cached Content |`cached-content`| string | The name of a cached content to use as context. Format: cachedContents/\{cachedContent\}. |
 
 </div>
 <details>
@@ -518,7 +518,8 @@ Config for thinking features.
 
 | Output | Field ID | Type | Description |
 | :--- | :--- | :--- | :--- |
-| Texts (optional) |`texts`| array[string]| Simplified text output extracted from candidates. Each string represents the concatenated text content from the corresponding candidate's parts, including thought processes when `include-thoughts` is enabled. This field provides easy access to the generated text without needing to traverse the candidate structure. Updated in real-time during streaming. |
+| Texts (optional) |`texts`| array[string]| Simplified text output extracted from candidates. Each string represents the concatenated text content from the corresponding candidate's parts, including thought processes when `include-thoughts` is enabled. This field provides easy access to the generated text without needing to traverse the candidate structure. Updated in real-time during streaming. |
+| Images (optional) |`images`| array[image/webp]| Images output extracted and converted from candidates. This field provides easy access to the generated images as base64-encoded strings. The original binary data is removed from the candidates field to prevent raw binary exposure in JSON output. This field is only available when the model supports image generation. |
 | Usage (optional) |`usage`| object | Token usage statistics: prompt tokens, completion tokens, total tokens, etc. |
 |[Candidates](#chat-candidates) (optional) |`candidates`| array[object]| Complete candidate objects from the model containing rich metadata and structured content. Each candidate includes safety ratings, finish reason, token counts, citations, content parts (including thought processes when include-thoughts is enabled), and other detailed information. This provides full access to all response data beyond just text. Updated incrementally during streaming with accumulated content and latest metadata. |
 |[Usage Metadata](#chat-usage-metadata) (optional) |`usage-metadata`| object | Metadata on the generation request's token usage. |
@@ -924,7 +925,7 @@ Context caching allows you to cache input tokens and reference them in subsequen
 | Task ID (required) |`task`| string |`TASK_CACHE`|
 | Operation (required) |`operation`| string | The cache operation to perform. The value is one of the following: `create`: Create a new cached content. `list`: List all cached contents. `get`: Retrieve a specific cached content. `update`: Update an existing cached content (only expiration time can be updated). `delete`: Delete a cached content. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`create`</li><li>`list`</li><li>`get`</li><li>`update`</li><li>`delete`</li></ul></details> |
 | Model (required) |`model`| string | ID of the model to use for caching. Required for create operations. The model is immutable after creation. The value is one of the following: `gemini-2.5-pro`: Optimized for enhanced thinking and reasoning, multimodal understanding, advanced coding, and more. `gemini-2.5-flash`: Optimized for Adaptive thinking, cost efficiency. `gemini-2.0-flash-lite`: Optimized for Most cost-efficient model supporting high throughput. <br/><details><summary><strong>Enum values</strong></summary><ul><li>`gemini-2.5-pro`</li><li>`gemini-2.5-flash`</li><li>`gemini-2.0-flash-lite`</li></ul></details> |
-| Cache Name |`cache-name`| string |[**GET**, **UPDATE**, **DELETE**] The name of the cached content for get, update, or delete operations. Format: cachedContents/{cachedContent}. Required for get, update, and delete operations. |
+| Cache Name |`cache-name`| string |[**GET**, **UPDATE**, **DELETE**] The name of the cached content for get, update, or delete operations. Format: cachedContents/\{cachedContent\}. Required for get, update, and delete operations. |
 | Prompt |`prompt`| string |[**CREATE**] The main text instruction or query to be cached for create operations. |
 | Images |`images`| array[string]|[**CREATE**] URI references or base64 content of input images to be cached for create operations. |
 | Audio |`audio`| array[string]|[**CREATE**] URI references or base64 content of input audio to be cached for create operations. |
@@ -1252,7 +1253,7 @@ Configuration for specifying function calling behavior.
 | Display Name |`display-name`| string | Optional. The user-provided name of the cached content. |
 | Expire Time |`expire-time`| string | Expiration time of the cached content in RFC3339 format. |
 | Model |`model`| string | The name of the Model to use for cached content. |
-| Name |`name`| string | The resource name referring to the cached content. Format: cachedContents/{cachedContent}|
+| Name |`name`| string | The resource name referring to the cached content. Format: cachedContents/\{cachedContent\}|
 | Update Time |`update-time`| string | Last update time of the cached content in RFC3339 format. |
 |[Usage Metadata](#cache-usage-metadata)|`usage-metadata`| object | Token usage statistics for the cached content. |
 
@@ -1282,7 +1283,7 @@ Configuration for specifying function calling behavior.
 | Display Name |`display-name`| string | Optional. The user-provided name of the cached content. |
 | Expire Time |`expire-time`| string | Expiration time of the cached content in RFC3339 format. |
 | Model |`model`| string | The name of the Model to use for cached content. |
-| Name |`name`| string | The resource name referring to the cached content. Format: cachedContents/{cachedContent}|
+| Name |`name`| string | The resource name referring to the cached content. Format: cachedContents/\{cachedContent\}|
 | Update Time |`update-time`| string | Last update time of the cached content in RFC3339 format. |
 |[Usage Metadata](#cache-usage-metadata)|`usage-metadata`| object | Token usage statistics for the cached content. |
```
`pkg/component/ai/groq/v0/README.mdx` (+2 -2)

```diff
@@ -52,7 +52,7 @@ Groq serves open source text generation models (often called generative pre-trai
 | Prompt (required) |`prompt`| string | The prompt text. |
 | System Message |`system-message`| string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model's behavior is set using a generic message as "You are a helpful assistant.". |
 | Prompt Images |`prompt-images`| array[string]| The prompt images (Note: Only a subset of OSS models support image inputs). |
-|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}. |
+|[Chat History](#text-generation-chat-chat-history)|`chat-history`| array[object]| Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`. |
 | Seed |`seed`| integer | The seed. |
 | Temperature |`temperature`| number | The temperature for sampling. |
 | Top K |`top-k`| integer | Integer to define the top tokens considered within the sample operation to create new text. |
@@ -66,7 +66,7 @@ Groq serves open source text generation models (often called generative pre-trai
-Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}.
+Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: `{"role": "The message role, i.e. `system`, `user` or `assistant`", "content": "message content"}`.
```