
Commit e727797

New features in AI LLM request and response handling
1 parent 32a4076 commit e727797

23 files changed: +1021 −56 lines changed

doc/User Guides/2 Basic LLM Chat Implementation.md

Lines changed: 24 additions & 2 deletions
@@ -1,5 +1,5 @@
 # ESP32_AI_Connect Library User Guide - 2 Basic LLM Chat Implementation
-> **Version 0.0.3** • Revised: May 10, 2025 • Author: AvantMaker • [https://www.AvantMaker.com](https://www.AvantMaker.com)
+> **Version 0.0.5** • Revised: May 15, 2025 • Author: AvantMaker • [https://www.AvantMaker.com](https://www.AvantMaker.com)

 ## Overview

@@ -101,11 +101,33 @@ After establishing the WiFi connection, we configure the AI client with specific
 These configuration options allow you to customize the behavior of the AI:

 - `setChatTemperature(0.7)`: Controls the randomness/creativity of the responses. Lower values (closer to 0) make responses more deterministic and focused, while higher values (up to 2.0) make them more creative and diverse.
-- `setChatMaxTokens(200)`: Limits the maximum length of the AI's response in tokens (roughly 4 characters per token). This helps control response size and API costs.
+- `setChatMaxTokens(200)`: Sets the maximum number of tokens in the AI's response. This helps regulate response length and manage API costs. Each AI platform handles this parameter slightly differently: OpenAI uses `max_completion_tokens`, Claude uses `max_tokens`, and Gemini uses `maxOutputTokens`. The library automatically adapts to each platform, ensuring compatibility and seamless interaction regardless of which AI provider you choose.
 - `setChatSystemRole("You are a helpful assistant")`: Sets the system message that defines the AI's behavior and personality.

 **Note: The parameters set by the above methods are optional. If you do not explicitly configure these parameters, the LLM will use its default values for temperature, max tokens, and system role.**

+### Additional Parameter Configuration Methods
+
+The library also provides methods for setting and retrieving custom parameters that are specific to each AI platform:
+
+- `setChatParameters()`: Allows you to set custom parameters in JSON format that are specific to the AI platform you're using. These parameters can include platform-specific options like `top_p`, `presence_penalty`, or any other parameters supported by the platform.
+- `getChatParameters()`: Retrieves the currently set custom parameters.
+
+These methods are demonstrated in the `basic_llm_chat.ino` example in the examples folder. If you need to use platform-specific parameters or want to see how to implement these methods, please refer to that example.
+
+**Important Note**: When using `setChatParameters()`, be aware that:
+1. The parameters must be provided in valid JSON format
+2. If a parameter is already set by a specific method (like `setChatTemperature()`), the value from the specific method will take precedence
+3. The exact parameters available depend on the AI platform you're using
+
+For example, in `basic_llm_chat.ino`, you can see how to use these methods:
+```cpp
+// Set custom parameters
+aiClient.setChatParameters(R"({"top_p":0.95})");
+
+// Get current parameters
+String currentParams = aiClient.getChatParameters();
+```

 ## Step 6: Verifying Configuration with Getter Methods
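For instance, a minimal sketch of the precedence rule described in point 2 above (illustrative only, not code from this commit, and assuming the documented behavior) might look like:

```cpp
// Both the dedicated setter and the custom JSON mention temperature.
aiClient.setChatTemperature(0.2);                                   // dedicated setter
aiClient.setChatParameters(R"({"temperature":0.9,"top_p":0.95})");  // custom parameters

// Per the documented rule, the request is expected to use temperature = 0.2
// (from setChatTemperature), while top_p = 0.95 comes from the custom JSON.
Serial.println(aiClient.getChatTemperature());  // expected: 0.2
Serial.println(aiClient.getChatParameters());   // expected: the custom JSON as set
```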

examples/basic_example/basic_example.ino

Lines changed: 8 additions & 7 deletions
@@ -10,8 +10,8 @@
 *
 * Author: AvantMaker <admin@avantmaker.com>
 * Author Website: https://www.AvantMaker.com
- * Date: May 7, 2025
- * Version: 1.0.5
+ * Date: May 15, 2025
+ * Version: 1.0.7
 *
 * Hardware Requirements:
 * - ESP32-based microcontroller (e.g., ESP32 DevKitC, DOIT ESP32 DevKit, etc.)
@@ -30,7 +30,7 @@
 * Repository: https://github.com/AvantMaker/ESP32_AI_Connect
 *
 * Usage Notes:
- * - Adjust `setChatTemperature`, `setChatMaxTokens`, and `setChatSystemRole` as
+ * - Adjust optional `setChatTemperature`, `setChatMaxTokens`, and `setChatSystemRole` as
 *   needed for your application.
 * - Use getter methods like `getChatTemperature`, `getChatMaxTokens`, and `getChatSystemRole`
 *   to retrieve current configuration values.
@@ -71,13 +71,14 @@ void setup() {
   Serial.print("IP address: ");
   Serial.println(WiFi.localIP());

-  // Configure AI client parameters:
-  aiClient.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
+  // Configure optional parameters:
+  aiClient.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
   aiClient.setChatMaxTokens(200);   // Limit response length (in tokens)
   aiClient.setChatSystemRole("You are a helpful assistant"); // Set assistant behavior

-  // Display the configured parameters
-  Serial.println("\nAI Configuration:");
+  // Display the configured parameters set by setChatSystemRole/setChatTemperature/setChatMaxTokens
+  Serial.println("\nDisplay the configured parameters set by");
+  Serial.println("\nsetChatSystemRole / setChatTemperature / setChatMaxTokens:");
   Serial.print("System Role: ");
   Serial.println(aiClient.getChatSystemRole());
   Serial.print("Temperature: ");

examples/basic_llm_chat/basic_llm_chat.ino

Lines changed: 18 additions & 8 deletions
@@ -10,8 +10,8 @@
 *
 * Author: AvantMaker <admin@avantmaker.com>
 * Author Website: https://www.AvantMaker.com
- * Date: May 8, 2025
- * Version: 1.0.3
+ * Date: May 15, 2025
+ * Version: 1.0.7
 *
 * Hardware Requirements:
 * - ESP32-based microcontroller (e.g., ESP32 DevKitC, DOIT ESP32 DevKit)
@@ -29,7 +29,7 @@
 * Repository: https://github.com/AvantMaker/ESP32_AI_Connect
 *
 * Usage Notes:
- * - Adjust `setChatSystemRole`, `setChatTemperature`, and `setChatMaxTokens` in `setup()` to customize AI behavior.
+ * - Adjust optional parameters with `setChatSystemRole`, `setChatTemperature`, and `setChatMaxTokens` in `setup()` to customize AI behavior.
 * - Use getter methods like `getChatSystemRole`, `getChatTemperature`, and `getChatMaxTokens` to retrieve current settings.
 * - Enter messages via the Serial Monitor to interact with the AI; responses are displayed with error details if applicable.
 *
@@ -73,20 +73,30 @@ void setup() {
     while(1) delay(1000); // Halt on failure
   }

-  // --- Configure the AI Client ---
+  // --- Configure the AI Client's optional parameters ---
   aiClient.setChatSystemRole("You are a helpful assistant.");
   aiClient.setChatTemperature(0.7); // Set creativity/randomness
   aiClient.setChatMaxTokens(150);   // Limit response length
-
-  // Display configuration settings
-  Serial.println("\nAI Client Configuration:");
+  // You can set optional custom parameters with setChatParameters()
+  // Note that if a parameter is already set by a method above, it will NOT be overwritten
+  if (aiClient.setChatParameters(R"({"top_p":0.95})")) {
+    Serial.println("Request Parameters Set Successfully");
+    Serial.print("Custom Parameters: ");
+    Serial.println(aiClient.getChatParameters());
+  } else {
+    Serial.println("Setting Request Parameters Failed");
+    Serial.println("Error details: " + aiClient.getLastError());
+  }
+
+  // Display the configured parameters set by setChatSystemRole/setChatTemperature/setChatMaxTokens
+  Serial.println("\nDisplay the configured parameters set by");
+  Serial.println("\nsetChatSystemRole / setChatTemperature / setChatMaxTokens:");
   Serial.print("System Role: ");
   Serial.println(aiClient.getChatSystemRole());
   Serial.print("Temperature: ");
   Serial.println(aiClient.getChatTemperature());
   Serial.print("Max Tokens: ");
   Serial.println(aiClient.getChatMaxTokens());
-  Serial.println("Ready to chat!");
 }

 void loop() {
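Since `setChatParameters()` returns a success flag and `getLastError()` reports why a call failed (both visible in the diff above), the same pattern can catch malformed parameter JSON during `setup()`. The fragment below is a hedged sketch built only from those two methods; the exact failure behavior and error text are assumptions:

```cpp
// Sketch (not from this commit): an intentionally malformed JSON string is
// expected to make setChatParameters() return false, with getLastError()
// explaining the failure in the Serial Monitor.
const char* badParams = R"({"top_p":0.95)";   // note the missing closing brace
if (!aiClient.setChatParameters(badParams)) {
  Serial.println("Rejected custom parameters: " + String(badParams));
  Serial.println("Error details: " + aiClient.getLastError());
}
```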

examples/custom_llm_chat/custom_llm_chat.ino

Lines changed: 9 additions & 5 deletions
@@ -15,8 +15,8 @@
 *
 * Author: AvantMaker <admin@avantmaker.com>
 * Author Website: https://www.AvantMaker.com
- * Date: May 7, 2025
- * Version: 1.0.1
+ * Date: May 15, 2025
+ * Version: 1.0.3
 *
 * Hardware Requirements:
 * - ESP32-based microcontroller (e.g., ESP32 DevKitC, DOIT ESP32 DevKit)
@@ -34,7 +34,7 @@
 * Repository: https://github.com/AvantMaker/ESP32_AI_Connect
 *
 * Usage Notes:
- * - Adjust `setChatSystemRole`, `setChatTemperature`, and `setChatMaxTokens` in `setup()` to customize LLM behavior.
+ * - Adjust optional parameters with `setChatSystemRole`, `setChatTemperature`, and `setChatMaxTokens` in `setup()` to customize LLM behavior.
 * - Use getter methods like `getChatSystemRole`, `getChatTemperature`, and `getChatMaxTokens` to retrieve current settings.
 * - Enter messages via the Serial Monitor to interact with the LLM; responses and errors are displayed.
 * - Verify the custom endpoint URL is correct and accessible for your LLM provider.
@@ -92,16 +92,20 @@ void setup() {
     while(1) delay(1000); // Halt on failure
   }

-  // --- Configure the AI Client ---
+  // --- Configure the AI Client's optional parameters ---
   aiClient.setChatSystemRole("You are a helpful assistant.");
   aiClient.setChatTemperature(0.7); // Set creativity/randomness
   aiClient.setChatMaxTokens(150);   // Limit response length
-
+
   // Print configuration
   Serial.println("\nAI Client Configuration:");
   Serial.println("Platform: " + String(platform));
   Serial.println("Model: " + String(model));
   Serial.println("Custom Endpoint: " + String(customEndpoint));
+
+  // Display the configured parameters set by setChatSystemRole/setChatTemperature/setChatMaxTokens
+  Serial.println("\nDisplay the configured parameters set by");
+  Serial.println("\nsetChatSystemRole / setChatTemperature / setChatMaxTokens:");
   Serial.print("System Role: ");
   Serial.println(aiClient.getChatSystemRole());
   Serial.print("Temperature: ");

examples/tool_calling_demo/tool_calling_demo.ino

Lines changed: 3 additions & 2 deletions
@@ -47,6 +47,7 @@

 // --- Create the API Client Instance ---
 ESP32_AI_Connect aiClient(platform, apiKey, model);
+// Alternatively, you can use a custom endpoint:
 // ESP32_AI_Connect aiClient(platform, apiKey, model, customEndpoint);

 void setup() {
@@ -66,7 +67,7 @@ void setup() {
   Serial.print("IP address: ");
   Serial.println(WiFi.localIP());

-  // --- Define Tools for Tool Calling ---
+  // --- Define Tool(s) for Tool Calling ---
   const int numTools = 1;
   String myTools[numTools] = {
     R"({
@@ -94,7 +95,7 @@ void setup() {
   }
   Serial.println(F("Tool calling setup successful."));

-  // --- Demonstrate Configuration Methods ---
+  // --- Configuration Methods ---
   aiClient.setTCChatSystemRole("You are a weather assistant."); // Optional: Set system role message
   aiClient.setTCChatMaxTokens(300);     // Optional: Set maximum tokens for the response
   aiClient.setTCChatToolChoice("auto"); // Optional: Set tool choice mode.

examples/tool_calling_demo_2/tool_calling_demo_2.ino

Lines changed: 3 additions & 3 deletions
@@ -11,8 +11,8 @@
 *
 * Author: AvantMaker <admin@avantmaker.com>
 * Author Website: https://www.AvantMaker.com
- * Date: May 9, 2025
- * Version: 1.0.3
+ * Date: May 12, 2025
+ * Version: 1.0.5
 *
 * Hardware Requirements:
 * - ESP32-based microcontroller (e.g., ESP32 DevKitC, DOIT ESP32 DevKit)
@@ -200,7 +200,7 @@ void setup() {
   Serial.println("System Role: " + aiClient.getTCChatSystemRole());
   Serial.println("Max Tokens: " + String(aiClient.getTCChatMaxTokens()));
   Serial.println("Tool Choice: " + aiClient.getTCChatToolChoice());
-  Serial.println("Tool Choice: " + aiClient.getTCRawResponse());
+  Serial.println("AI Raw Response: " + aiClient.getTCRawResponse());

   Serial.println("\n--------------------");
   Serial.println("Demo finished. Restart device to run again.");

examples/tool_calling_follow_up_demo/tool_calling_follow_up_demo.ino

Lines changed: 5 additions & 5 deletions
@@ -375,13 +375,13 @@ void runToolCallsDemo(String userMessage) {
   String toolResultsJson;
   serializeJson(toolResults, toolResultsJson);

-  // --- Demonstrate Configuration Methods for follow-up tool calls request ---
-  // Config the max_token and tool_choice in the follow-up
+  // --- Demonstrate Optional Configuration Methods for follow-up tool calls request ---
+  // Config the token limit (as max_completion_tokens for OpenAI, max_tokens for Claude) and tool_choice in the follow-up
   // request. These parameters are independent of the parameters
   // set in the initial request.

-  aiClient.setTCReplyMaxTokens(900);
-  aiClient.setTCReplyToolChoice("auto");
+  aiClient.setTCReplyMaxTokens(900);     // (Optional) Maximum tokens for the follow-up response
+  aiClient.setTCReplyToolChoice("auto"); // (Optional) Tool choice for the follow-up (can be

   Serial.println("\n---Follow-Up Tool Call Configuration ---");
   Serial.println("Follow-Up Max Tokens: " + String(aiClient.getTCReplyMaxTokens()));
@@ -412,7 +412,7 @@ void runToolCallsDemo(String userMessage) {
   Serial.println("\n--- AI RESPONSE TO TOOL RESULTS ---");
   Serial.println("Finish reason: " + finishReason);

-  if (finishReason == "tool_calls") {
+  if (finishReason == "tool_calls" || finishReason == "tool_use") {
     // More tool calls requested - could implement nested calls here
     Serial.println("AI requested more tool calls: " + followUpResult);
     Serial.println("(This example doesn't handle multiple rounds of tool calls)");
Serial.println("(This example doesn't handle multiple rounds of tool calls)");
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+// --- User Credentials ---
+const char* ssid = "YOUR_WIFI_SSID";
+const char* password = "YOUR_WIFI_PASSWORD";
+const char* apiKey = "YOUR_API_KEY"; // Your OpenAI API Key
+const char* model = "YOUR_LLM_MODEL"; // Or another model supporting tool calls
+const char* platform = "openai"; // Or "gemini", "openai-compatible" - must match compiled handlers
+// const char* customEndpoint = "YOUR-CUSTOM-ENDPOINT"; // Replace with your custom endpoint
