Commit ded61ed
Author: hideya
Commit message: Update to ver 0.3.5
Parent: 25fd3bc

10 files changed: +509 additions, -343 deletions


CHANGELOG.md

Lines changed: 20 additions & 0 deletions

````diff
@@ -0,0 +1,20 @@
+# Change Log
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](http://keepachangelog.com/)
+and this project adheres to [Semantic Versioning](http://semver.org/).
+
+## [0.3.5] - 2025-08-23
+
+### Added
+- Support for Cerebras and Groq,
+  primarily to try gpt-oss-* with their exceptional speed
+- Usage examples of gpt-oss-120b/20b on Cerebras / Groq
+
+### Changed
+- Replace "model_provider" with "provider" while keeping backward compatibility,
+  to avoid confusion between the model provider and the API provider
+- Use double quotation marks instead of single quotation marks for all
+  applicable string literals
+- Update dependencies
````
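The "model_provider" to "provider" rename above is described as backward compatible. A minimal sketch of how such a key fallback can be handled in TypeScript (hypothetical helper, not the repo's actual code):

```typescript
// Hypothetical config shape and helper, for illustration only:
// accept the new "provider" key while still honoring the legacy
// "model_provider" key.
type LlmConfig = {
  provider?: string;
  model_provider?: string; // legacy name, still accepted
  model: string;
};

function resolveProvider(cfg: LlmConfig): string {
  // Prefer the new key; fall back to the legacy one.
  const provider = cfg.provider ?? cfg.model_provider;
  if (!provider) {
    throw new Error('LLM config must specify "provider"');
  }
  return provider;
}

console.log(resolveProvider({ provider: "openai", model: "gpt-5-mini" }));     // "openai"
console.log(resolveProvider({ model_provider: "xai", model: "grok-3-mini" })); // "xai"
```

When both keys are present, this sketch deliberately prefers the new "provider" key, so updated configs win over stale legacy entries.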

README.md

Lines changed: 40 additions & 18 deletions

````diff
@@ -25,8 +25,10 @@ A Python equivalent of this utility is available [here](https://pypi.org/project
 [OpenAI](https://platform.openai.com/api-keys),
 [Anthropic](https://console.anthropic.com/settings/keys),
 [Google AI Studio (for GenAI/Gemini)](https://aistudio.google.com/apikey),
-and/or
 [xAI](https://console.x.ai/),
+[Cerebras](https://cloud.cerebras.ai),
+and/or
+[Groq](https://console.groq.com/keys),
 as needed
 
 ## Quick Start
@@ -46,14 +48,12 @@ A Python equivalent of this utility is available [here](https://pypi.org/project
 ```json5
 {
   "llm": {
-    "model_provider": "openai",
-    "model": "gpt-4o-mini",
-    // "model_provider": "anthropic",
-    // "model": "claude-3-5-haiku-latest",
-    // "model_provider": "google_genai",
-    // "model": "gemini-2.5-flash",
-    // "model_provider": "xai",
-    // "model": "grok-3-mini",
+    "provider": "openai", "model": "gpt-5-mini",
+    // "provider": "anthropic", "model": "claude-3-5-haiku-latest",
+    // "provider": "google_genai", "model": "gemini-2.5-flash",
+    // "provider": "xai", "model": "grok-3-mini",
+    // "provider": "cerebras", "model": "gpt-oss-120b",
+    // "provider": "groq", "model": "openai/gpt-oss-20b",
   },
 
   "mcp_servers": {
@@ -75,7 +75,9 @@ A Python equivalent of this utility is available [here](https://pypi.org/project
 echo "ANTHROPIC_API_KEY=sk-ant-...
 OPENAI_API_KEY=sk-proj-...
 GOOGLE_API_KEY=AI...
-XAI_API_KEY=xai-..." > .env
+XAI_API_KEY=xai-...
+CEREBRAS_API_KEY=csk-...
+GROQ_API_KEY=gsk_..." > .env
 
 code .env
 ```
@@ -142,9 +144,12 @@ mcp-client-cli --help
 
 ## Supported LLM Providers
 
-- **OpenAI**: `o4-mini`, `gpt-4o-mini`, etc.
+- **OpenAI**: `gpt-5-mini`, `gpt-4.1-nano`, etc.
 - **Anthropic**: `claude-sonnet-4-0`, `claude-3-5-haiku-latest`, etc.
-- **Google (GenAI)**: `gemini-2.5-pro`, `gemini-2.5-flash`, etc.
+- **Google (GenAI)**: `gemini-2.5-flash`, `gemini-2.5-pro`, etc.
+- **xAI**: `grok-3-mini`, `grok-4`, etc.
+- **Cerebras**: `gpt-oss-120b`, etc.
+- **Groq**: `openai/gpt-oss-20b`, `openai/gpt-oss-120b`, etc.
 
 ## Configuration
 
@@ -166,29 +171,40 @@ Create a `llm_mcp_config.json5` file:
 ```json5
 {
   "llm": {
-    "model_provider": "openai",
-    "model": "gpt-4o-mini",
-    // model: "o4-mini",
+    "provider": "openai",
+    "model": "gpt-4.1-nano",
+    // model: "gpt-5-mini",
   },
 
   // "llm": {
-  // "model_provider": "anthropic",
+  // "provider": "anthropic",
   // "model": "claude-3-5-haiku-latest",
   // // "model": "claude-sonnet-4-0",
   // },
 
   // "llm": {
-  // "model_provider": "google_genai",
+  // "provider": "google_genai",
   // "model": "gemini-2.5-flash",
   // // "model": "gemini-2.5-pro",
   // },
 
   // "llm": {
-  // "model_provider": "xai",
+  // "provider": "xai",
   // "model": "grok-3-mini",
   // // "model": "grok-4",
   // },
 
+  // "llm": {
+  // "provider": "cerebras",
+  // "model": "gpt-oss-120b",
+  // },
+
+  // "llm": {
+  // "provider": "groq",
+  // "model": "openai/gpt-oss-20b",
+  // // "model": "openai/gpt-oss-120b",
+  // },
+
   "example_queries": [
     "Tell me how LLMs work in a few sentences",
     "Are there any weather alerts in California?",
@@ -240,6 +256,8 @@ Create a `.env` file for API keys:
 OPENAI_API_KEY=sk-proj-...
 ANTHROPIC_API_KEY=sk-ant-...
 GOOGLE_API_KEY=AI...
+CEREBRAS_API_KEY=csk-...
+GROQ_API_KEY=gsk_...
 
 # Other services as needed
 GITHUB_PERSONAL_ACCESS_TOKEN=github_pat_...
@@ -259,6 +277,10 @@ There are quite a few useful MCP servers already available:
 - Use `--verbose` flag to view the detailed logs
 - Refer to [Debugging Section in MCP documentation](https://modelcontextprotocol.io/docs/tools/debugging)
 
+## Change Log
+
+Can be found [here](https://github.com/hideya/mcp-client-langchain-ts/blob/main/CHANGELOG.md)
+
 ## License
 
 MIT License - see [LICENSE](LICENSE) file for details.
````
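The README's `.env` examples above pair each supported provider with an API-key environment variable. That mapping can be sketched as a small lookup table (the variable names follow the README; the `apiKeyFor` helper is illustrative, not code from the repo):

```typescript
// Hypothetical lookup table: provider name -> environment variable,
// as named in the README's .env examples.
const API_KEY_ENV: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google_genai: "GOOGLE_API_KEY",
  xai: "XAI_API_KEY",
  cerebras: "CEREBRAS_API_KEY",
  groq: "GROQ_API_KEY",
};

// Return the configured key for a provider, or undefined if either the
// provider is unknown or its environment variable is unset.
function apiKeyFor(provider: string): string | undefined {
  const envVar = API_KEY_ENV[provider];
  return envVar !== undefined ? process.env[envVar] : undefined;
}

console.log(apiKeyFor("no_such_provider")); // undefined
```

Keeping the mapping in one table makes it obvious which variable a new provider (here, Cerebras and Groq) needs in `.env`.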

README_DEV.md

Lines changed: 3 additions & 6 deletions

````diff
@@ -8,12 +8,9 @@ When testing LLM and MCP servers, their settings can be conveniently configured
 ```json5
 {
   "llm": {
-    "model_provider": "openai",
-    "model": "gpt-4o-mini",
-    // "model_provider": "anthropic",
-    // "model": "claude-3-5-haiku-latest",
-    // "model_provider": "google_genai",
-    // "model": "gemini-2.5-flash",
+    "provider": "openai", "model": "gpt-5-mini",
+    // "provider": "anthropic", "model": "claude-3-5-haiku-latest",
+    // "provider": "google_genai", "model": "gemini-2.5-flash",
   },
 
   "mcp_servers": {
````

llm_mcp_config.json5

Lines changed: 20 additions & 9 deletions

````diff
@@ -8,7 +8,7 @@
   // "llm": {
   // // https://docs.anthropic.com/en/docs/about-claude/pricing
   // // https://console.anthropic.com/settings/billing
-  // "model_provider": "anthropic",
+  // "provider": "anthropic",
   // "model": "claude-3-5-haiku-latest",
   // // "model": "claude-sonnet-4-0",
   // // "temperature": 0.0,
@@ -18,17 +18,17 @@
   // "llm": {
   // // https://platform.openai.com/docs/pricing
   // // https://platform.openai.com/settings/organization/billing/overview
-  // "model_provider": "openai",
-  // "model": "gpt-4o-mini",
-  // // "model": "o4-mini",
+  // "provider": "openai",
+  // "model": "gpt-4.1-nano",
+  // // "model": "gpt-5-mini",
   // // "temperature": 0.0, // 'temperature' is not supported with "o4-mini"
   // // "max_completion_tokens": 10000, // Use 'max_completion_tokens' instead of 'max_tokens'
   // },
 
   "llm": {
     // https://ai.google.dev/gemini-api/docs/pricing
     // https://console.cloud.google.com/billing
-    "model_provider": "google_genai",
+    "provider": "google_genai",
     "model": "gemini-2.5-flash",
     // "model": "gemini-2.5-pro",
     // "temperature": 0.0,
@@ -37,11 +37,22 @@
 
   // "llm": {
   // // https://console.x.ai/
-  // "model_provider": "xai",
+  // "provider": "xai",
   // "model": "grok-3-mini",
   // // "model": "grok-4",
-  // // "temperature": 0.0,
-  // // "max_tokens": 10000,
+  // },
+
+  // "llm": {
+  // // https://cloud.cerebras.ai
+  // "provider": "cerebras",
+  // "model": "gpt-oss-120b",
+  // },
+
+  // "llm": {
+  // // https://console.groq.com/keys
+  // "provider": "groq",
+  // // "model": "openai/gpt-oss-20b",
+  // "model": "openai/gpt-oss-120b",
   // },
 
   "example_queries": [
@@ -129,7 +140,7 @@
   // "env": {
   // // Although the following implies that this MCP server is designed for
   // // OpenAI LLMs, it works fine with other models.
-  // // Tested Claude and Gemini (with schema adjustments).
+  // // Tested with Claude and Gemini (with schema adjustments).
   // "OPENAPI_MCP_HEADERS": '{"Authorization": "Bearer ${NOTION_INTEGRATION_SECRET}", "Notion-Version": "2022-06-28"}'
   // },
   // },
````
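With six providers now accepted in the config file above, a small guard can catch a typo in the "provider" value before any API call is made. A hypothetical sketch (the list matches this release's providers; the helper is not code from the repo):

```typescript
// Providers documented for ver 0.3.5; the guard below is illustrative only.
const SUPPORTED_PROVIDERS = [
  "openai", "anthropic", "google_genai", "xai", "cerebras", "groq",
];

// Fail fast with a readable message instead of a cryptic downstream error.
function assertSupportedProvider(name: string): string {
  if (!SUPPORTED_PROVIDERS.includes(name)) {
    throw new Error(
      `Unsupported provider "${name}"; expected one of: ${SUPPORTED_PROVIDERS.join(", ")}`
    );
  }
  return name;
}

console.log(assertSupportedProvider("cerebras")); // "cerebras"
```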
