@patrickmacarthur

This pull request is based on the 0.5.1 release because I was unable to run the current master version.

The character array buffer used for decoding UTF-8 characters is resized dynamically based on the strings being parsed. However, the UTF-8 decoder always assumed that the length of the allocated array equaled the expected length of the string. As a result, if a very long string was parsed before a string containing a multi-byte character, the length was incorrectly taken to be that of the previous long string rather than the string currently being parsed, so the code read past the end of the string and picked up garbage.

This bug caused chunks that contained items with lore text using multi-byte characters to be completely ignored in statistics.

The fix is to pass the expected length into the readUTF function and use it instead of relying on the size of the buffer.
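The failure mode can be sketched as follows. This is a minimal, hypothetical reconstruction of the pattern described above, not the project's actual code: the `UtfDecoder` class and both method signatures are assumptions, standing in for the real readUTF implementation.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a decoder that reuses a growable byte buffer across
// calls, as described in the bug report above.
class UtfDecoder {
    private byte[] buf = new byte[0];

    // The buffer only ever grows; after a long string it stays oversized.
    private void ensureCapacity(int len) {
        if (buf.length < len) buf = new byte[len];
    }

    // Buggy variant: decodes buf.length bytes, so if an earlier, longer
    // string left the buffer oversized, stale bytes past the current
    // string are decoded as garbage.
    String readUTFBuggy(byte[] input, int expectedLen) {
        ensureCapacity(expectedLen);
        System.arraycopy(input, 0, buf, 0, expectedLen);
        return new String(buf, 0, buf.length, StandardCharsets.UTF_8);
    }

    // Fixed variant: the expected length is passed in and used for
    // decoding instead of the buffer's allocated size.
    String readUTFFixed(byte[] input, int expectedLen) {
        ensureCapacity(expectedLen);
        System.arraycopy(input, 0, buf, 0, expectedLen);
        return new String(buf, 0, expectedLen, StandardCharsets.UTF_8);
    }
}

public class Main {
    public static void main(String[] args) {
        UtfDecoder d = new UtfDecoder();
        byte[] longStr = "a much longer string parsed first"
                .getBytes(StandardCharsets.UTF_8);
        byte[] shortStr = "héllo".getBytes(StandardCharsets.UTF_8); // multi-byte 'é'

        d.readUTFBuggy(longStr, longStr.length);  // grows the shared buffer
        String bad  = d.readUTFBuggy(shortStr, shortStr.length);
        String good = d.readUTFFixed(shortStr, shortStr.length);

        System.out.println(bad);   // "héllo" plus stale trailing bytes
        System.out.println(good);  // "héllo"
    }
}
```

The buggy call returns the short string with leftover bytes from the previous long string appended, which is exactly why strings containing multi-byte lore text decoded to garbage once any longer string had been parsed first.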
