fix: fix entity aggregation bug for NER detection #1413
Merged
It looks like it’s because we’re using the “FIRST” aggregation strategy with a tokenizer that is not word-aware: the pipeline falls back to a heuristic (the presence of spaces before/after the word), which fails here.
Indeed, the XLM-RoBERTa model does not use the same tokenizer as RoBERTa: it uses a Unigram model (instead of BPE), which is not word-aware.
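For context, this is roughly how the failing path is exercised; the checkpoint name is illustrative (any XLM-RoBERTa-based token-classification model uses the same Unigram tokenizer), not the one used in this repo:

```python
from transformers import pipeline

# Illustrative XLM-RoBERTa-based NER checkpoint, not the repo's model.
ner = pipeline(
    "token-classification",
    model="xlm-roberta-large-finetuned-conll03-english",
    aggregation_strategy="first",
)

# With a non-word-aware tokenizer, "first" decides which tokens are
# subwords via a space-based heuristic rather than real word boundaries.
print(ner("sugar, palm oil, hazelnuts."))
```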
Another issue with the “FIRST” aggregation strategy is that the dot ending the ingredient list is predicted as part of the ingredient list, even though it is not in the non-aggregated prediction. Switching to the “SIMPLE” strategy (a strategy without an error-correction mechanism) removes this issue, but two subwords belonging to the same word are then sometimes predicted as belonging to two different entities.
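The easiest way to see the trade-off is to run the same input through each strategy (same illustrative checkpoint as above):

```python
from transformers import pipeline

# Compare the raw predictions with both aggregation strategies discussed here.
for strategy in ("none", "simple", "first"):
    ner = pipeline(
        "token-classification",
        model="xlm-roberta-large-finetuned-conll03-english",
        aggregation_strategy=strategy,
    )
    print(strategy, ner("sugar, palm oil, hazelnuts."))
```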
A more in-depth analysis of the TokenClassificationPipeline reveals that the issue comes from the Punctuation() pre-tokenizer we added: it was not included in the original tokenizer, and the heuristic doesn’t take it into account, leading to an incorrect detection. I updated the heuristic to use the `word_ids` provided by the tokenizer to know whether a token is a subword or not (with respect to the pre-tokenization output). With this change to how entities are aggregated, the “FIRST” aggregation strategy no longer has the issue, so we keep it.
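A minimal sketch of the `word_ids`-based check, assuming a fast tokenizer (the `xlm-roberta-base` checkpoint here is illustrative, and this is the idea behind the fix, not the exact patch). Since `word_ids` are derived from the pre-tokenization output, the added Punctuation() pre-tokenizer is automatically taken into account:

```python
from transformers import AutoTokenizer
from tokenizers import pre_tokenizers

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")  # illustrative checkpoint

# Mirror the customization described above: append Punctuation() to the
# existing pre-tokenizer so punctuation gets its own word index.
tok.backend_tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [tok.backend_tokenizer.pre_tokenizer, pre_tokenizers.Punctuation()]
)

enc = tok("sugar, palm oil, hazelnuts.")
word_ids = enc.word_ids()  # one pre-tokenized-word index per token, None for special tokens

# A token is a subword continuation iff it shares its word index with the
# previous token; this replaces the space-presence heuristic.
is_subword = [
    i > 0 and wid is not None and wid == word_ids[i - 1]
    for i, wid in enumerate(word_ids)
]
for token, wid, sub in zip(tok.convert_ids_to_tokens(enc["input_ids"]), word_ids, is_subword):
    print(token, wid, sub)
```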