feat: add detailed help tooltips for chat features (#621)
* feat: add detailed help tooltips for chat features
Signed-off-by: Bob Du <i@bobdu.cc>
* docs: improve docs
Signed-off-by: Bob Du <i@bobdu.cc>
---------
Signed-off-by: Bob Du <i@bobdu.cc>
**README.en.md** — 75 additions, 0 deletions
```diff
@@ -36,6 +36,8 @@ Some unique features have been added:
 [✓] VLLM API model support & Optional disable deep thinking mode
+[✓] Context Window Control
+
 > [!CAUTION]
 > This project is published only on GitHub, under the MIT license, free and intended for open-source learning. There is no account selling, paid service, discussion group, or similar offering of any kind. Beware of being deceived.
```
The following section is added after the build instructions:

PS: You can also run `pnpm start` directly on the server without packaging.

```shell
pnpm build
```

## Context Window Control

> [!TIP]
> Context Window Control lets users flexibly manage the context carried into AI conversations, optimizing model performance and conversation quality.

### Features

- **Context Management**: Control how much chat history the model can reference
- **Per-conversation Control**: Each conversation can enable or disable the context window independently
- **Real-time Switching**: Context mode can be toggled at any time during a conversation
- **Memory Management**: Flexibly control the scope and continuity of the AI's memory
- **Configurable Quantity**: Administrators can set the maximum number of context messages
### How It Works

The context window determines how much chat history from the current session the model can reference during generation:

- A **reasonably sized context window** helps the model generate coherent, relevant text
- **Limiting context** avoids the confusion or irrelevant output that referencing too much history can cause
- **Turning the context window off** makes the session lose its memory, so each question is treated as completely independent
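The selection rule described above can be sketched in TypeScript. This is a minimal illustration, not the project's actual implementation; the names `selectContext` and `maxContextCount` are assumptions for the example:

```typescript
interface ChatMessage {
  role: 'user' | 'assistant'
  content: string
}

// Hypothetical sketch: choose the history the model may reference.
// With the context window off, only the new question is sent ("fresh start");
// with it on, at most `maxContextCount` recent messages are included.
function selectContext(
  history: ChatMessage[],
  question: ChatMessage,
  contextEnabled: boolean,
  maxContextCount: number,
): ChatMessage[] {
  if (!contextEnabled)
    return [question] // each question is answered independently
  return [...history.slice(-maxContextCount), question]
}
```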
### Usage

#### 1. Enable/Disable Context Window

1. **Enter the conversation interface**: The feature is available in any conversation session
2. **Find the control switch**: Locate the "Context Window" toggle button in the conversation interface
3. **Switch modes**:
   - **Enabled**: The model references previous chat history, keeping the conversation coherent
   - **Disabled**: The model does not reference history and treats each question independently

#### 2. Usage Scenarios

**Enabling the context window is recommended when:**

- You need continuous dialogue with context carried across turns
- You are discussing a complex topic in depth
- You are doing multi-turn Q&A or step-by-step problem solving
- You need the AI to remember previously mentioned information

**Disabling the context window is recommended when:**

- You are asking independent, simple questions
- You want to avoid historical information interfering with a new question
- You are handling multiple unrelated topics
- You need a "fresh start"
#### 3. Administrator Configuration

Administrators can configure the following in system settings:

- **Maximum Context Count**: The number of context messages included in the conversation
- **Default State**: The default context window state for new conversations
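The two settings above might take a shape like the following. This is a hypothetical sketch; the real option names and defaults in the project may differ:

```typescript
// Hypothetical shape of the two admin settings described above.
interface ContextWindowSettings {
  maxContextCount: number // context messages included in the conversation
  defaultEnabled: boolean // context window state for new conversations
}

// Illustrative defaults only; not taken from the project.
const defaultSettings: ContextWindowSettings = {
  maxContextCount: 10,
  defaultEnabled: true,
}
```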
### Technical Implementation

- **Context Truncation**: History is automatically truncated to the configured number of messages
- **State Persistence**: Each conversation saves its own context window switch state
- **Real-time Effect**: A toggle takes effect with the next message sent after switching
- **Memory Optimization**: Context length is kept within reasonable bounds to avoid hitting model limits
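The persistence and real-time-effect points can be sketched together. A minimal illustration under assumed names (`contextState`, `toggleContext`), not the project's actual store:

```typescript
// Hypothetical sketch: each conversation keeps its own switch state,
// and the stored value is read when the next request is built.
const contextState = new Map<string, boolean>() // conversationId -> enabled

function toggleContext(conversationId: string): boolean {
  const next = !(contextState.get(conversationId) ?? true) // default: enabled
  contextState.set(conversationId, next)
  return next // takes effect for the next message sent
}
```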
### Notes

- **Conversation Coherence**: Disabling the context window affects conversation continuity
- **Token Consumption**: More context increases token usage
- **Model Limitations**: Keep each model's context length limit in mind
## VLLM API Deep Thinking Mode Control

> [!TIP]
> …

- **Real-time Switching**: Deep thinking mode can be switched at any time during conversation
- **Performance Optimization**: Disabling deep thinking can improve response speed and reduce computational cost

### How It Works

With deep thinking enabled, the model uses more computational resources and takes longer, simulating more elaborate chains of thought for logical reasoning:

- **Suited to complex or demanding tasks**, such as mathematical derivations and project planning
- **Everyday simple queries** do not need deep thinking enabled
- **Disabling deep thinking** gives faster responses
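Whether deep thinking can be toggled per request depends on the serving backend: some OpenAI-compatible servers (for example, vLLM serving Qwen3 models) accept a `chat_template_kwargs.enable_thinking` flag in the request body. A hedged sketch of building such a request, where `buildChatBody` and the model name are illustrative assumptions to verify against your own server:

```typescript
// Hedged sketch: an OpenAI-compatible request body that toggles deep thinking.
// `chat_template_kwargs.enable_thinking` is honored by some backends
// (e.g. vLLM serving Qwen3); confirm support before relying on it.
function buildChatBody(question: string, deepThinking: boolean) {
  return {
    model: 'Qwen/Qwen3-8B', // placeholder model name
    messages: [{ role: 'user' as const, content: question }],
    chat_template_kwargs: { enable_thinking: deepThinking },
  }
}
```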
### Prerequisites

**The following conditions must be met to use this feature:**
**src/locales/en-US.ts** — 4 additions, 1 deletion
```diff
@@ -61,10 +61,13 @@ export default {
   turnOffThink: 'Deep thinking has been disabled for this chat.',
   clickTurnOnContext: 'Click to enable sending messages will carry previous chat records.',
   clickTurnOffContext: 'Click to disable sending messages will carry previous chat records.',
+  contextHelp: 'The context window determines the amount of chat history from the current session that the model can reference during generation.\nA reasonable context window size helps the model generate coherent and relevant text.\nAvoid confusion or irrelevant output caused by referencing too much context.\nTurning off the context window will cause the session to lose memory, making each question completely independent.',
   clickTurnOnSearch: 'Click to enable web search for this chat.',
   clickTurnOffSearch: 'Click to disable web search for this chat.',
+  searchHelp: 'The model\'s knowledge is based on previous training data, and the learned knowledge has a cutoff date (e.g., DeepSeek-R1 training data cutoff is October 2023).\nFor breaking news, industry news and other time-sensitive questions, the answers are often "outdated".\nAfter enabling web search, search engine interfaces will be called to get real-time information for the model to reference before generating results, but this will also increase response latency.\nFor non-time-sensitive questions, enabling web search may actually reduce answer quality due to the incorporation of latest data.',
   clickTurnOnThink: 'Click to enable deep thinking for this chat.',
   clickTurnOffThink: 'Click to disable deep thinking for this chat.',
+  thinkHelp: 'After enabling deep thinking, the model will use more computational resources and take longer time to simulate more complex thinking chains for logical reasoning.\nSuitable for complex tasks or high-requirement scenarios, such as mathematical derivations and project planning.\nDaily simple queries do not need to be enabled.',
   showOnContext: 'Include context',
   showOffContext: 'Not include context',
   searchEnabled: 'Search enabled',
@@ -211,7 +214,7 @@ export default {
   info2FAStep3Tip1: 'Note: How to turn off two-step verification?',
   info2FAStep3Tip2: '1. After logging in, use the two-step verification on the Two-Step Verification page to disable it.',
   info2FAStep3Tip3: '2. Contact the administrator to disable two-step verification.',
-  maxContextCount: 'Max Context Count',
+  maxContextCount: 'Number of context messages included in the conversation',
```
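The new `*Help` strings embed `\n` separators so a tooltip can render them as separate lines. A minimal sketch of that splitting, assuming the tooltip component works this way (the project's actual component may differ):

```typescript
// The contextHelp string as added in en-US.ts.
const contextHelp = 'The context window determines the amount of chat history from the current session that the model can reference during generation.\nA reasonable context window size helps the model generate coherent and relevant text.\nAvoid confusion or irrelevant output caused by referencing too much context.\nTurning off the context window will cause the session to lose memory, making each question completely independent.'

// Hypothetical helper: split a help string into non-empty tooltip lines.
function toTooltipLines(help: string): string[] {
  return help.split('\n').filter(line => line.trim().length > 0)
}
```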
**src/locales/ko-KR.ts** — 4 additions, 1 deletion
```diff
@@ -61,10 +61,13 @@ export default {
   turnOffThink: '이 채팅에서 깊은 사고가 비활성화되었습니다.',
   clickTurnOnContext: '클릭하여 컨텍스트 포함 켜기',
   clickTurnOffContext: '클릭하여 컨텍스트 포함 끄기',
+  contextHelp: '컨텍스트 창은 생성 과정에서 모델이 참조할 수 있는 현재 세션의 채팅 기록의 양을 결정합니다.\n적절한 컨텍스트 창 크기는 모델이 일관되고 관련성 있는 텍스트를 생성하는 데 도움이 됩니다.\n너무 많은 컨텍스트를 참조하여 혼란스럽거나 관련 없는 출력이 나오는 것을 방지합니다.\n컨텍스트 창을 끄면 세션이 기억을 잃게 되어 각 질문이 완전히 독립적이 됩니다.',
   clickTurnOnSearch: '클릭하여 이 채팅의 웹 검색 활성화',
   clickTurnOffSearch: '클릭하여 이 채팅의 웹 검색 비활성화',
+  searchHelp: '모델의 지식은 이전 훈련 데이터를 기반으로 하며, 학습된 지식에는 마감일이 있습니다(예: DeepSeek-R1 훈련 데이터 마감일은 2023년 10월).\n돌발 사건, 업계 뉴스 등 시간에 민감한 질문의 경우 답변이 종종 "구식"입니다.\n웹 검색을 활성화한 후 결과를 생성하기 전에 검색 엔진 인터페이스를 호출하여 모델이 참조할 실시간 정보를 얻지만, 이 작업은 응답 지연도 증가시킵니다.\n또한 시간에 민감하지 않은 질문의 경우 웹 검색을 활성화하면 최신 데이터를 결합하여 답변 품질이 저하될 수 있습니다.',
   clickTurnOnThink: '클릭하여 이 채팅의 깊은 사고 활성화',
   clickTurnOffThink: '클릭하여 이 채팅의 깊은 사고 비활성화',
+  thinkHelp: '깊은 사고를 활성화한 후 모델은 더 많은 계산 리소스를 사용하고 더 오랜 시간을 소비하여 더 복잡한 사고 체인을 시뮬레이션하여 논리적 추론을 수행합니다.\n복잡한 작업이나 높은 요구 사항 시나리오에 적합합니다. 예: 수학 문제 유도, 프로젝트 계획.\n일상적인 간단한 조회는 활성화할 필요가 없습니다.',
   showOnContext: '컨텍스트 포함됨',
   showOffContext: '컨텍스트 미포함',
   searchEnabled: '검색 활성화됨',
@@ -197,7 +200,7 @@ export default {
   info2FAStep3Tip1: 'Note: How to turn off two-step verification?',
   info2FAStep3Tip2: '1. After logging in, use the two-step verification on the Two-Step Verification page to disable it.',
   info2FAStep3Tip3: '2. Contact the administrator to disable two-step verification.',
```