Commit 92788a1

ci: Add performance impact step to CI (#9916)

1 parent 7682644 commit 92788a1
File tree

4 files changed: +706 -2 lines changed
Lines changed: 291 additions & 0 deletions

@@ -0,0 +1,291 @@
```yaml
name: ci-performance
on:
  pull_request:
    branches:
      - alpha
      - beta
      - release
      - 'release-[0-9]+.x.x'
      - next-major
    paths-ignore:
      - '**.md'
      - 'docs/**'

env:
  NODE_VERSION: 24.11.0
  MONGODB_VERSION: 8.0.4

permissions:
  contents: read
  pull-requests: write
  issues: write

jobs:
  performance-check:
    name: Benchmarks
    runs-on: ubuntu-latest
    timeout-minutes: 30

    steps:
      - name: Checkout base branch
        uses: actions/checkout@v4
        with:
          ref: ${{ github.base_ref }}
          fetch-depth: 1

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install dependencies (base)
        run: npm ci

      - name: Build Parse Server (base)
        run: npm run build

      - name: Run baseline benchmarks
        id: baseline
        run: |
          echo "Checking if benchmark script exists..."
          if [ ! -f "benchmark/performance.js" ]; then
            echo "⚠️ Benchmark script not found in base branch - this is expected for new features"
            echo "Skipping baseline benchmark"
            echo '[]' > baseline.json
            echo "Baseline: N/A (benchmark script not in base branch)" > baseline-output.txt
            exit 0
          fi
          echo "Running baseline benchmarks..."
          npm run benchmark > baseline-output.txt 2>&1 || true
          echo "Benchmark command completed with exit code: $?"
          echo "Output file size: $(wc -c < baseline-output.txt) bytes"
          echo "--- Begin baseline-output.txt ---"
          cat baseline-output.txt
          echo "--- End baseline-output.txt ---"
          # Extract JSON from output (everything between first [ and last ])
          sed -n '/^\[/,/^\]/p' baseline-output.txt > baseline.json || echo '[]' > baseline.json
          echo "Extracted JSON size: $(wc -c < baseline.json) bytes"
          echo "Baseline benchmark results:"
          cat baseline.json
        continue-on-error: true

      - name: Upload baseline results
        uses: actions/upload-artifact@v4
        with:
          name: baseline-benchmark
          path: |
            baseline.json
            baseline-output.txt
          retention-days: 7

      - name: Checkout PR branch
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: 1
          clean: true

      - name: Setup Node.js (PR)
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install dependencies (PR)
        run: npm ci

      - name: Build Parse Server (PR)
        run: npm run build

      - name: Run PR benchmarks
        id: pr-bench
        run: |
          echo "Running PR benchmarks..."
          npm run benchmark > pr-output.txt 2>&1 || true
          echo "Benchmark command completed with exit code: $?"
          echo "Output file size: $(wc -c < pr-output.txt) bytes"
          echo "--- Begin pr-output.txt ---"
          cat pr-output.txt
          echo "--- End pr-output.txt ---"
          # Extract JSON from output (everything between first [ and last ])
          sed -n '/^\[/,/^\]/p' pr-output.txt > pr.json || echo '[]' > pr.json
          echo "Extracted JSON size: $(wc -c < pr.json) bytes"
          echo "PR benchmark results:"
          cat pr.json
        continue-on-error: true

      - name: Upload PR results
        uses: actions/upload-artifact@v4
        with:
          name: pr-benchmark
          path: |
            pr.json
            pr-output.txt
          retention-days: 7

      - name: Verify benchmark files exist
        run: |
          echo "Checking for benchmark result files..."
          if [ ! -f baseline.json ] || [ ! -s baseline.json ]; then
            echo "⚠️ baseline.json is missing or empty, creating empty array"
            echo '[]' > baseline.json
          fi
          if [ ! -f pr.json ] || [ ! -s pr.json ]; then
            echo "⚠️ pr.json is missing or empty, creating empty array"
            echo '[]' > pr.json
          fi
          echo "baseline.json size: $(wc -c < baseline.json) bytes"
          echo "pr.json size: $(wc -c < pr.json) bytes"

      - name: Store benchmark result (PR)
        uses: benchmark-action/github-action-benchmark@v1
        if: github.event_name == 'pull_request' && hashFiles('pr.json') != ''
        continue-on-error: true
        with:
          name: Parse Server Performance
          tool: 'customSmallerIsBetter'
          output-file-path: pr.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: false
          save-data-file: false
          alert-threshold: '110%'
          comment-on-alert: true
          fail-on-alert: false
          alert-comment-cc-users: '@parse-community/maintainers'
          summary-always: true

      - name: Compare benchmark results
        id: compare
        run: |
          node -e "
          const fs = require('fs');

          let baseline, pr;
          try {
            baseline = JSON.parse(fs.readFileSync('baseline.json', 'utf8'));
            pr = JSON.parse(fs.readFileSync('pr.json', 'utf8'));
          } catch (e) {
            console.log('⚠️ Could not parse benchmark results');
            process.exit(0);
          }

          // Handle case where baseline doesn't exist (new feature)
          if (!Array.isArray(baseline) || baseline.length === 0) {
            if (!Array.isArray(pr) || pr.length === 0) {
              console.log('⚠️ Benchmark results are empty or invalid');
              process.exit(0);
            }
            console.log('# Performance Benchmark Results\n');
            console.log('> ℹ️ Baseline not available - this appears to be a new feature\n');
            console.log('| Benchmark | Value | Details |');
            console.log('|-----------|-------|---------|');
            pr.forEach(result => {
              console.log(\`| \${result.name} | \${result.value.toFixed(2)} ms | \${result.extra} |\`);
            });
            console.log('');
            console.log('✅ **New benchmarks established for this feature.**');
            process.exit(0);
          }

          if (!Array.isArray(pr) || pr.length === 0) {
            console.log('⚠️ PR benchmark results are empty or invalid');
            process.exit(0);
          }

          console.log('# Performance Comparison\n');
          console.log('| Benchmark | Baseline | PR | Change | Status |');
          console.log('|-----------|----------|----|--------|--------|');

          let hasRegression = false;
          let hasImprovement = false;

          baseline.forEach(baseResult => {
            const prResult = pr.find(p => p.name === baseResult.name);
            if (!prResult) {
              console.log(\`| \${baseResult.name} | \${baseResult.value.toFixed(2)} ms | N/A | - | ⚠️ Missing |\`);
              return;
            }

            const baseValue = parseFloat(baseResult.value);
            const prValue = parseFloat(prResult.value);
            const change = ((prValue - baseValue) / baseValue * 100);
            const changeStr = change > 0 ? \`+\${change.toFixed(1)}%\` : \`\${change.toFixed(1)}%\`;

            let status = '✅';
            if (change > 20) {
              status = '❌ Much Slower';
              hasRegression = true;
            } else if (change > 10) {
              status = '⚠️ Slower';
              hasRegression = true;
            } else if (change < -10) {
              status = '🚀 Faster';
              hasImprovement = true;
            }

            console.log(\`| \${baseResult.name} | \${baseValue.toFixed(2)} ms | \${prValue.toFixed(2)} ms | \${changeStr} | \${status} |\`);
          });

          console.log('');
          if (hasRegression) {
            console.log('⚠️ **Performance regressions detected.** Please review the changes.');
          } else if (hasImprovement) {
            console.log('🚀 **Performance improvements detected!** Great work!');
          } else {
            console.log('✅ **No significant performance changes.**');
          }
          " | tee comparison.md

      - name: Upload comparison
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-comparison
          path: comparison.md
          retention-days: 30

      - name: Prepare comment body
        if: github.event_name == 'pull_request'
        run: |
          echo "## Performance Impact Report" > comment.md
          echo "" >> comment.md
          if [ -f comparison.md ]; then
            cat comparison.md >> comment.md
          else
            echo "⚠️ Could not generate performance comparison." >> comment.md
          fi
          echo "" >> comment.md
          echo "<details>" >> comment.md
          echo "<summary>📊 View detailed results</summary>" >> comment.md
          echo "" >> comment.md
          echo "### Baseline Results" >> comment.md
          echo "\`\`\`json" >> comment.md
          cat baseline.json >> comment.md
          echo "\`\`\`" >> comment.md
          echo "" >> comment.md
          echo "### PR Results" >> comment.md
          echo "\`\`\`json" >> comment.md
          cat pr.json >> comment.md
          echo "\`\`\`" >> comment.md
          echo "" >> comment.md
          echo "</details>" >> comment.md
          echo "" >> comment.md
          echo "*Benchmarks ran with ${BENCHMARK_ITERATIONS:-100} iterations per test on Node.js ${{ env.NODE_VERSION }}*" >> comment.md

      - name: Comment PR with results
        if: github.event_name == 'pull_request'
        uses: thollander/actions-comment-pull-request@v2
        continue-on-error: true
        with:
          filePath: comment.md
          comment_tag: performance-benchmark
          mode: recreate

      - name: Generate job summary
        if: always()
        run: |
          if [ -f comparison.md ]; then
            cat comparison.md >> $GITHUB_STEP_SUMMARY
          else
            echo "⚠️ Benchmark comparison not available" >> $GITHUB_STEP_SUMMARY
          fi
```
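
Both benchmark steps reduce their output to a JSON array that the verify, store, and compare steps consume. The file shape follows `github-action-benchmark`'s `customSmallerIsBetter` format (`name`, `unit`, `value`, optional `extra`), which is also what the inline comparison script reads: `name` to match entries, `value` in milliseconds, and `extra` as a free-form description. A hypothetical `pr.json` (the benchmark names and numbers here are illustrative, not taken from `benchmark/performance.js`) might look like:

```json
[
  { "name": "Object Create", "unit": "ms", "value": 13.50, "extra": "100 iterations" },
  { "name": "Simple Query",  "unit": "ms", "value": 8.20,  "extra": "100 iterations" }
]
```

Against a baseline where `Object Create` took 12.00 ms, the comparison script would report a change of (13.50 - 12.00) / 12.00 * 100 = +12.5%, which crosses the 10% threshold and flags the row as "⚠️ Slower".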

CONTRIBUTING.md

Lines changed: 57 additions & 1 deletion
```diff
@@ -21,9 +21,13 @@
 - [Good to Know](#good-to-know)
 - [Troubleshooting](#troubleshooting)
 - [Please Do's](#please-dos)
-- [TypeScript Tests](#typescript-tests)
+  - [TypeScript Tests](#typescript-tests)
 - [Test against Postgres](#test-against-postgres)
   - [Postgres with Docker](#postgres-with-docker)
+- [Performance Testing](#performance-testing)
+  - [Adding Tests](#adding-tests)
+  - [Adding Benchmarks](#adding-benchmarks)
+  - [Benchmark Guidelines](#benchmark-guidelines)
 - [Breaking Changes](#breaking-changes)
 - [Deprecation Policy](#deprecation-policy)
 - [Feature Considerations](#feature-considerations)
```
@@ -298,6 +302,58 @@ RUN chmod +x /docker-entrypoint-initdb.d/setup-dbs.sh

Note that the script above will ONLY be executed during initialization of the container with no data in the database; see the official [Postgres image](https://hub.docker.com/_/postgres) for details. If you want the script to run again, make sure there is no data in the container's /var/lib/postgresql/data directory.

### Performance Testing

Parse Server includes an automated performance benchmarking system that runs on every pull request to detect performance regressions and track improvements over time.

#### Adding Tests

You should consider adding performance benchmarks if your contribution:

- **Introduces a performance-critical feature**: Features that will be frequently used in production environments, such as new query operations, authentication methods, or data processing functions.
- **Modifies existing critical paths**: Changes to core functionality like object CRUD operations, query execution, user authentication, file operations, or Cloud Code execution.
- **Has potential performance impact**: Any change that affects database operations, network requests, data parsing, caching mechanisms, or algorithmic complexity.
- **Optimizes performance**: If your PR specifically aims to improve performance, adding benchmarks helps verify the improvement and prevents future regressions.

#### Adding Benchmarks

Performance benchmarks are located in [`benchmark/performance.js`](benchmark/performance.js). To add a new benchmark:

1. **Identify the operation to benchmark**: Determine the specific operation you want to measure (e.g., a new query type, a new API endpoint).

2. **Create a benchmark function**: Follow the existing patterns in `benchmark/performance.js` (a sketch of what the `measureOperation` helper might look like follows this list):
   ```javascript
   async function benchmarkNewFeature() {
     return measureOperation('Feature Name', async () => {
       // Your operation to benchmark
       const result = await someOperation();
     }, ITERATIONS);
   }
   ```

3. **Add to benchmark suite**: Register your benchmark in the `runBenchmarks()` function:
   ```javascript
   console.error('Running New Feature benchmark...');
   await cleanupDatabase();
   results.push(await benchmarkNewFeature());
   ```

4. **Test locally**: Run the benchmarks to verify they work:
   ```bash
   npm run benchmark:quick   # Quick test with 10 iterations
   npm run benchmark         # Full test with 100 iterations
   ```

For new features where no baseline exists, the CI will establish new benchmarks that future PRs will be compared against.
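
The snippets above rely on a `measureOperation(name, fn, iterations)` helper and a `cleanupDatabase()` utility that live in `benchmark/performance.js`; their actual implementations are not reproduced here. As a minimal sketch, assuming the helper simply times `iterations` sequential runs and reports the average in the JSON shape the CI workflow expects, `measureOperation` could look roughly like this:

```javascript
// Hypothetical sketch, not the actual helper in benchmark/performance.js:
// times `iterations` sequential runs of `fn` and reports the mean per run.
async function measureOperation(name, fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    await fn();
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return {
    name,
    unit: 'ms',
    value: elapsedMs / iterations, // average duration per iteration
    extra: `${iterations} iterations`,
  };
}
```

The real helper may add warm-up runs or report a median to reduce noise; follow whatever pattern `benchmark/performance.js` actually uses.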

#### Benchmark Guidelines

- **Keep benchmarks focused**: Each benchmark should test a single, well-defined operation.
- **Use realistic data**: Test with data that reflects real-world usage patterns.
- **Clean up between runs**: Use `cleanupDatabase()` to ensure consistent test conditions.
- **Consider iteration count**: Use fewer iterations for expensive operations (see the `ITERATIONS` environment variable and the example below).
- **Document what you're testing**: Add clear comments explaining what the benchmark measures and why it's important.
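
The iteration count can be tuned per run. Note that the workflow's comment footer interpolates `${BENCHMARK_ITERATIONS:-100}` while the snippets above reference an `ITERATIONS` variable, so check `benchmark/performance.js` for the variable it actually reads. Assuming it honors `ITERATIONS`, a faster pass over an expensive benchmark might look like:

```bash
# Hypothetical invocation; assumes benchmark/performance.js reads ITERATIONS
ITERATIONS=25 npm run benchmark
```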

## Breaking Changes

Breaking changes should be avoided whenever possible. For a breaking change to be accepted, its benefits have to clearly outweigh the cost of developers having to adapt their deployments. If a breaking change is only cosmetic, it will likely be rejected in favor of letting the old form become obsolete organically in the course of further development, unless it is required as part of a larger change. Breaking changes should follow the [Deprecation Policy](#deprecation-policy).
