
Conversation

@ChrisBenua (Contributor) commented Nov 11, 2025

In my recent PR, I optimized JSONDecoder/JSONEncoder. However, I introduced a small mistake in the _asDirectArrayEncodable function: instead of using the value parameter from the function arguments, I mistakenly referenced the array property of the __JSONEncoder class.
Since __JSONEncoder.array is of type JSONFuture.RefArray, all of the type equality checks (e.g. against [Int]) failed, so the intended optimization never took effect.
The issue slipped past the benchmarks because there are currently no benchmarks covering encoding of large [Int] (or similar) arrays.
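For clarity, here is a minimal sketch of the shape of the bug and the fix. The names below (RefArray, SketchEncoder, isDirectArrayEncodable) are stand-ins for illustration, not the actual swift-foundation declarations:

```swift
// Stand-in for JSONFuture.RefArray: a reference wrapper over partial results.
final class RefArray {
    var values: [Any] = []
}

final class SketchEncoder {
    // Stand-in for __JSONEncoder.array.
    var array: RefArray?

    // Buggy shape: the checks ran against `self.array` (a RefArray?),
    // which can never be `[Int]`, so the fast path was never taken:
    //     return array is [Int] || array is [UInt] || array is [String]
    //
    // Fixed shape: check the `value` parameter the caller actually passed in.
    func isDirectArrayEncodable<T: Encodable>(_ value: T) -> Bool {
        value is [Int] || value is [UInt] || value is [String]
    }
}
```

Because the encoder's own storage is a reference type, the type checks against concrete array types were always false, which is why the fast path silently never fired.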

To address this, I’ve added an additional benchmark that encodes 100k Ints, which highlights the performance improvements.
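Roughly, the new benchmark looks like the sketch below, using the package-benchmark library. The benchmark name, module, and setup here are assumptions for illustration, not the final code in the PR:

```swift
import Benchmark
import Foundation

// Sketch of the added benchmark: encode 100k Ints so that the
// direct-array fast path dominates the measurement.
let benchmarks = {
    let values = Array(0..<100_000)
    let encoder = JSONEncoder()

    Benchmark("array-encodeToJSON") { benchmark in
        for _ in benchmark.scaledIterations {
            blackHole(try? encoder.encode(values))
        }
    }
}
```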

array-encodeToJSON metrics

Time (total CPU): results within specified thresholds, fold down for details.

| Time (total CPU) (ns) * | p0 | p25 | p50 | p75 | p90 | p99 | p100 | Samples |
|---|---|---|---|---|---|---|---|---|
| new_as_direct_array_encodable_coders_v4 | 38 | 38 | 38 | 38 | 38 | 38 | 38 | 79 |
| Current_run | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 985 |
| Δ | -35 | -35 | -35 | -35 | -35 | -35 | -35 | 906 |
| Improvement % | 92 | 92 | 92 | 92 | 92 | 92 | 92 | 906 |

Throughput (# / s): results within specified thresholds, fold down for details.

| Throughput (# / s) (K) | p0 | p25 | p50 | p75 | p90 | p99 | p100 | Samples |
|---|---|---|---|---|---|---|---|---|
| new_as_direct_array_encodable_coders_v4 | 26 | 26 | 26 | 26 | 26 | 26 | 26 | 79 |
| Current_run | 330 | 329 | 329 | 329 | 327 | 322 | 298 | 985 |
| Δ | 304 | 303 | 303 | 303 | 301 | 296 | 272 | 906 |
| Improvement % | 1169 | 1165 | 1165 | 1165 | 1158 | 1138 | 1046 | 906 |

I’m happy to either include the new benchmark in this PR or submit it separately, depending on the maintainers’ preference. Keeping it as a dedicated benchmark would significantly reduce the risk of reintroducing performance regressions when encoding large arrays.
