[FLINK-38046][Connector/JDBC]Solve the issue of missing one piece of data #174
base: main
Conversation
Thanks for opening this pull request! Please check out our contributing guidelines: https://flink.apache.org/contributing/how-to-contribute.html

Hi @wangxiaojing, could you add a test for this?

Add test cases

This PR is being marked as stale since it has not had any activity in the last 90 days. If you are having difficulty finding a reviewer, please reach out. If this PR is no longer valid or desired, please feel free to close it.
RocMarshal left a comment
Nice catch, and thanks @wangxiaojing for the contribution.
I left a few comments, PTAL when you have free time.
```java
void testBatchMaxMinTooLarge() {
    JdbcNumericBetweenParametersProvider provider =
            new JdbcNumericBetweenParametersProvider(2260418954055131340L, 3875220057236942850L)
                    .ofBatchSize(3);
```
Do you mean that we should call the ofBatchNum method here?
```java
long[][] expected = {
    new long[] {2260418954055131340L, 2798685988449068510L},
    new long[] {2798685988449068511L, 3336953022843005681L},
    new long[] {3336953022843005682L, 3875220057236942850L}
};
```
Suggested change:

```diff
- long[][] expected = {
-     new long[] {2260418954055131340L, 2798685988449068510L},
-     new long[] {2798685988449068511L, 3336953022843005681L},
-     new long[] {3336953022843005682L, 3875220057236942850L}
- };
+ long[][] expected = {
+     new long[] {2260418954055131340L, 2798685988449068491L},
+     new long[] {2798685988449068492L, 3336953022843005643L},
+     new long[] {3336953022843005644L, 3875220057236942850L}
+ };
```
I tried to follow your train of thought while reviewing this test case. Perhaps this is the expected outcome you were trying to convey; please let me know your opinion.
When using partitioned scan in the Flink JDBC table source, if scan.partition.lower-bound and scan.partition.upper-bound are very large, one data record can be lost. For example: upper bound 3875220057236942850, lower bound 2260418954055131340.
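To illustrate how very large bounds can drop a record, here is a minimal sketch (not the connector's actual code; all names are hypothetical) that splits a [min, max] range into contiguous batches using only long arithmetic. The hazard it avoids: computing a batch size via floating-point, e.g. Math.ceil((double) count / batchNum), can round at this magnitude, because a double cannot exactly represent every long above 2^53, which is how a boundary value can be skipped. Distributing the remainder with integer division and modulo guarantees the batches tile the range exactly.

```java
public class RangePartitionSketch {

    // Split [min, max] into batchNum contiguous sub-ranges using only
    // long arithmetic, so no value is lost to double rounding.
    // Assumes min <= max and (max - min + 1) does not overflow a long.
    static long[][] split(long min, long max, int batchNum) {
        long count = max - min + 1;
        long base = count / batchNum;   // minimum size of each batch
        long rem = count % batchNum;    // first 'rem' batches get one extra element
        long[][] out = new long[batchNum][2];
        long start = min;
        for (int i = 0; i < batchNum; i++) {
            long size = base + (i < rem ? 1 : 0);
            out[i][0] = start;
            out[i][1] = start + size - 1;
            start = out[i][1] + 1;
        }
        return out;
    }

    public static void main(String[] args) {
        long min = 2260418954055131340L;
        long max = 3875220057236942850L;
        long[][] parts = split(min, max, 3);
        // The sub-ranges must tile [min, max]: no gap, no overlap, no lost record.
        if (parts[0][0] != min || parts[2][1] != max) {
            throw new AssertionError("endpoints not covered");
        }
        for (int i = 1; i < parts.length; i++) {
            if (parts[i][0] != parts[i - 1][1] + 1) {
                throw new AssertionError("gap or overlap at batch " + i);
            }
        }
        System.out.println("range tiled exactly");
    }
}
```

Usage note: with the bounds from this PR's example, the three sub-ranges above cover all 1614801103181811511 values exactly once; a double-based batch-size computation has a representation granularity of 64 or more at this magnitude, so its last boundary can land short of the upper bound.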