Optimize String write #1651
base: main
Conversation
- Remove extra bounds checking.
- Add ASCII fast loop similar to String.getBytes().
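As a rough illustration of the ASCII fast-loop idea (a minimal sketch with hypothetical names, not the PR's actual code): copy chars as bytes in a tight loop until the first non-ASCII character, and only then fall back to the general UTF-8 path.

```java
// Hypothetical sketch of an ASCII fast loop, in the spirit of String.getBytes():
// one tight loop, no per-byte method indirection, bail out on non-ASCII input.
public final class AsciiFastLoopSketch {
    static int writeAscii(String s, byte[] dst, int offset) {
        int i = 0;
        int limit = Math.min(s.length(), dst.length - offset);
        for (; i < limit; i++) {
            char c = s.charAt(i);
            if (c >= 0x80) {
                break; // caller falls back to the general UTF-8 path from here
            }
            dst[offset + i] = (byte) c;
        }
        return i; // number of ASCII chars written
    }

    public static void main(String[] args) {
        byte[] buf = new byte[16];
        System.out.println(writeAscii("abc\u00e9def", buf, 0)); // 3: stops at 'é'
    }
}
```

The names and signature here are illustrative only; the real method in the PR integrates with the driver's buffer management.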
That's related to #1629 🙏
…ncoding tests.
- Adjust logic to handle non-zero ByteBuffer.arrayOffset, as some Netty pooled ByteBuffer implementations return an offset != 0.
- Add unit tests for UTF-8 encoding across buffer boundaries and for malformed surrogate pairs.
- Fix an issue with a leaked reference count on ByteBufs in the pipe() method (2 non-released reference counts were retained).

JAVA-5816
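The surrogate handling these tests exercise can be sketched as follows (an illustrative encoder assuming well-formed input, not the driver's actual implementation): 1-, 2-, and 3-byte sequences are emitted straight from the char, and only an actual surrogate triggers pair combination.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative UTF-8 encoder: pays for surrogate-pair combination only when
// a surrogate char is actually seen. Assumes well-formed input (a lone or
// trailing surrogate is not handled in this sketch).
public final class Utf8EncodeSketch {
    static int encode(String s, byte[] out) {
        int o = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 0x80) {
                out[o++] = (byte) c;
            } else if (c < 0x800) {
                out[o++] = (byte) (0xC0 | (c >> 6));
                out[o++] = (byte) (0x80 | (c & 0x3F));
            } else if (!Character.isSurrogate(c)) {
                out[o++] = (byte) (0xE0 | (c >> 12));
                out[o++] = (byte) (0x80 | ((c >> 6) & 0x3F));
                out[o++] = (byte) (0x80 | (c & 0x3F));
            } else {
                // Only here do we combine a surrogate pair into a code point.
                int cp = Character.toCodePoint(c, s.charAt(++i));
                out[o++] = (byte) (0xF0 | (cp >> 18));
                out[o++] = (byte) (0x80 | ((cp >> 12) & 0x3F));
                out[o++] = (byte) (0x80 | ((cp >> 6) & 0x3F));
                out[o++] = (byte) (0x80 | (cp & 0x3F));
            }
        }
        return o;
    }

    public static void main(String[] args) {
        String s = "a\u00e9\u20ac\uD83D\uDE00"; // 1-, 2-, 3- and 4-byte cases
        byte[] buf = new byte[16];
        int n = encode(s, buf);
        System.out.println(Arrays.equals(Arrays.copyOf(buf, n),
                s.getBytes(StandardCharsets.UTF_8))); // true
    }
}
```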
if (c < 0x80) {
    if (remaining == 0) {
        buf = getNextByteBuffer();
I still suggest giving a shot to the PR I made, which uses a separate index to access single bytes in the internal NIO buffer within the Netty buffers, for two reasons:
- NIO ByteBuffer can benefit from additional optimizations from the JVM, since it's a known type to it.
- Netty buffer reads/writes both move the internal indexes forward and force Netty to verify accessibility of the buffer for each operation, which has some Java Memory Model effects (e.g. any subsequent load has to actually happen, each time!).
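The relative-vs-absolute access difference this suggestion relies on can be sketched with plain java.nio buffers (illustrative code, not the referenced PR): absolute put(index, byte) with a separately tracked index leaves the buffer's internal position untouched.

```java
import java.nio.ByteBuffer;

// Sketch: write with absolute put(index, b) and our own index, so the
// buffer's internal position is not updated (and re-validated) per byte.
public final class AbsoluteIndexWrite {
    static int writeAsciiAbsolute(String s, ByteBuffer buf, int start) {
        int idx = start; // our own index; buf.position() is never moved
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c >= 0x80) break;
            buf.put(idx++, (byte) c);
        }
        return idx - start;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        int written = writeAsciiAbsolute("hello", buf, 0);
        System.out.println(written + " " + buf.position()); // 5 0
    }
}
```

With a Netty ByteBuf the same pattern would go through its internal NIO buffer view; that part is omitted here to keep the sketch dependency-free.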
Good point - accessing the NIO buffer directly sounds like a potential win. I'm aiming to keep this PR focused and incremental for easier review and integration. We could consider Netty-specific optimizations in a follow-up PR, once we have Netty benchmarks running in CI.
Looks good. A couple of comments and one suggestion to help improve the readability of writeCharacters.
driver-core/src/main/com/mongodb/internal/connection/ByteBufferBsonOutput.java
driver-core/src/test/unit/com/mongodb/internal/connection/ByteBufferBsonOutputTest.java
JAVA-5816
LGTM!
}

// We get here when the buffer is not backed by an array, or when the string contains at least one non-ASCII character.
return writeOnBuffers(str,
Can't we have a fast path for ASCII within this too?
It would grant more chances to get inlined (since it is a smaller method) and to be unrolled.
If we can have a JMH bench, it would be fairly easy (I can do it) to peek into the assembly produced to verify it.
I'd expect the fast path for buffers to be in the else branch of if (curBuffer.hasArray()).
However, once we detect UTF-8 characters there, we call a fallback writeOnBuffers (maybe we could rename it to writeUtf8OnBuffers).
Are you suggesting we add a fast path similar to writeOnArrayAscii, but using dynamic buffer allocation and falling back to writeOnBuffers/writeUtf8OnBuffers when a UTF-8 character is encountered?
Are you suggesting we add a fast path similar to writeOnArrayAscii, but using dynamic buffer allocation and falling back to writeOnBuffers/writeUtf8OnBuffers when a UTF-8 character is encountered?
Yep, since I see the ASCII path there already takes care of switching the buffer to write against, instead of performing a lookup for each byte written.
A tighter loop increases the chance of it being loop-unrolled, although the fact that we can switch the buffer we write to during the loop can affect this, both for the array case and this one.
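A hedged sketch of what such a buffer-switching ASCII fast path could look like (hypothetical names, simplified to plain java.nio buffers): capacity is looked up once per buffer rather than once per byte, and the first non-ASCII character aborts to the caller's UTF-8 fallback.

```java
import java.nio.ByteBuffer;
import java.util.List;

// Illustrative only: an ASCII fast path over a sequence of buffers that
// checks remaining capacity once per buffer and bails out on non-ASCII.
public final class ChunkedAsciiWrite {
    static int writeAscii(String s, List<ByteBuffer> buffers) {
        int i = 0;
        for (ByteBuffer buf : buffers) {
            int n = Math.min(buf.remaining(), s.length() - i);
            for (int j = 0; j < n; j++) {
                char c = s.charAt(i);
                if (c >= 0x80) {
                    return i; // caller falls back to the UTF-8 path from here
                }
                buf.put((byte) c);
                i++;
            }
            if (i == s.length()) break;
        }
        return i;
    }

    public static void main(String[] args) {
        List<ByteBuffer> bufs =
                List.of(ByteBuffer.allocate(3), ByteBuffer.allocate(8));
        System.out.println(writeAscii("hello!", bufs)); // 6: spans two buffers
    }
}
```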
# Conflicts:
#	driver-core/src/test/unit/com/mongodb/internal/connection/ByteBufferBsonOutputTest.java
Summary
This PR optimizes string writing in BSON by minimizing redundant checks and internal method indirection in hot paths.
Key Changes:
- Minimize redundant checks and internal method indirection around put(). Logic follows the same fast-path approach used in String.getBytes() for ASCII.
- Use str.charAt() instead of Character.toCodePoint() to avoid unnecessary surrogate checks when not needed (e.g., for characters within the ASCII or 2-byte UTF-8 code unit range). Fall back only when multi-unit characters (e.g., 3-byte UTF-8) are detected.

Performance analysis
To ensure accurate performance comparison and reduce noise, 11 versions (Comparison Versions) were aggregated and compared against a stable region of data around the Base Mainline Version. The percentage difference and z-score of the mean of the Comparison Versions were calculated relative to the Base Mainline Version’s stable region mean.
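For concreteness, the percentage difference and z-score computation described above works as in this sketch (the numbers are invented for illustration and are not the PR's actual measurements):

```java
// Illustrative arithmetic for the comparison methodology: percentage
// difference and z-score of the comparison mean against the baseline
// stable region. All values below are made up.
public final class ZScoreExample {
    static double pctDiff(double comparisonMean, double baselineMean) {
        return (comparisonMean - baselineMean) / baselineMean * 100.0;
    }

    static double zScore(double comparisonMean, double baselineMean, double baselineStd) {
        return (comparisonMean - baselineMean) / baselineStd;
    }

    public static void main(String[] args) {
        double baselineMean = 100.0;   // stable-region mean of the base version
        double baselineStd = 2.0;      // stable-region standard deviation
        double comparisonMean = 108.0; // mean over the 11 comparison versions
        System.out.println(pctDiff(comparisonMean, baselineMean)); // 8.0 (%)
        System.out.println(zScore(comparisonMean, baselineMean, baselineStd)); // 4.0
    }
}
```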
The following tables show improvements across two configurations:
ASCII Benchmark Suite (Regular JSON Workloads)
Perf analyzer results: Link
Augmented Benchmark Suite (UTF-8 Strings with 3-Byte Characters)
To evaluate performance on multi-byte UTF-8 input, the large_doc.json, small_doc.json, and tweet.json datasets were modified to use UTF-8 characters encoded with 3 bytes (code units). These changes were introduced on mainline via an Evergreen patch, and benchmark results were collected from:
Perf analyzer results: Link
JAVA-5816