[FLINK-36540][Runtime] Add Support for Hadoop Caller Context when using Flink to operate hdfs. #26681

Open · wants to merge 12 commits into master

Conversation

liangyu-1
Contributor

@liangyu-1 liangyu-1 commented Jun 16, 2025

What is the purpose of the change

As described in FLINK-36540.
When we use Flink to delete, write, or modify files on the Hadoop filesystem, CallerContext is a helpful feature for tracing who performed an operation, or for counting how many files an application creates on the Hadoop filesystem. UGI alone is not enough to trace these operations: if a tenant has many jobs writing to HDFS, we cannot tell which job caused an HDFS breakdown.

I created a new interface and class in the flink-core module, so that it does not leak the ThreadLocal value and has no effect when HDFS is not used.

What's more, with this new feature and the history JSON files in the History Server, we can calculate how many read and write operations a Flink application performed against HDFS, and find out whether there is pressure or a bottleneck on HDFS file operations.
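The ThreadLocal mechanism mentioned above can be sketched in plain Java. This is a minimal sketch with assumed names modeled on the description, not the PR's actual FileSystemContext code; the key point is calling remove() on teardown so the value cannot leak across reused threads:

```java
// Minimal sketch of a per-thread caller-context holder; class and method
// names are assumptions, not the PR's actual code.
public class FileSystemContextSketch {
    private static final ThreadLocal<String> CONTEXTS = new ThreadLocal<>();

    /** Stores the caller context for the current thread. */
    public static void initializeContextForThread(String context) {
        CONTEXTS.set(context);
    }

    /** Returns the current thread's context, or null if none was set. */
    public static String getContext() {
        return CONTEXTS.get();
    }

    /**
     * Removes the entry entirely; merely setting null would still keep the
     * ThreadLocal slot alive in long-lived pool threads and cause the leak
     * the description mentions.
     */
    public static void clearContext() {
        CONTEXTS.remove();
    }
}
```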

Brief change log

  • Add a new interface ContextWrapperFileSystem
  • Add a new class FileSystemContext
  • Add a new class HadoopFileSystemWithContext
  • Add initialization operation at the place where we initialize FileSystemSafetyNet
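To make the shape of these pieces concrete, here is a self-contained sketch of the wrapping idea. Only the name ContextWrapperFileSystem and the caller-context goal come from the change log above; everything else, including SimpleFileSystem and all method signatures, is an assumption for illustration:

```java
// Stand-in for Flink's FileSystem so the sketch compiles without Flink/Hadoop.
interface SimpleFileSystem {
    String read(String path);
}

// Analogue of the new interface from the change log (shape assumed):
// filesystems implementing it can be asked to carry a caller context.
interface ContextWrapperFileSystem {
    SimpleFileSystem wrapWithContext(String context);
}

// Analogue of HadoopFileSystemWithContext (behavior assumed): the real class
// would set Hadoop's caller context before delegating, so HDFS audit logs
// record which Flink job issued each call.
class ContextFileSystemSketch implements SimpleFileSystem {
    private final SimpleFileSystem delegate;
    private final String context;

    ContextFileSystemSketch(SimpleFileSystem delegate, String context) {
        this.delegate = delegate;
        this.context = context;
    }

    @Override
    public String read(String path) {
        // Here we only tag the result for illustration; the real wrapper
        // would set the caller context and return the result unchanged.
        return "[" + context + "] " + delegate.read(path);
    }
}
```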

Verifying this change

Please make sure both new and modified tests in this PR follow the conventions for tests defined in our code quality guide.

This change added tests and can be verified as follows:

  • Tested on our YARN cluster

I rebuilt the project and tested the new jar file on my cluster, and it prints the correct caller context as expected.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

@flinkbot
Collaborator

flinkbot commented Jun 16, 2025

CI report:

Bot commands The @flinkbot bot supports the following commands:
  • @flinkbot run azure: re-run the last Azure build

@liangyu-1
Contributor Author

@dmvk @xintongsong @ferenc-csaky
Hi, would you please help me review this PR?
I implemented this feature in a new way, based on dmvk@bfe9f60

<td><h5>hdfs.caller-context.enabled</h5></td>
<td style="word-wrap: break-word;">false</td>
<td>Boolean</td>
<td>A config of whether hadoop caller context is enabled.</td>
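Assuming the usual Flink configuration file layout, the option from the diff above would be enabled like this (file name and placement are an assumption, only the option key comes from the diff):

```yaml
# flink-conf.yaml (assumed location for this option)
hdfs.caller-context.enabled: true
```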
Contributor

it would be more readable to say:
Whether hadoop caller context is enabled.

Contributor Author

I forgot to push my latest branch, sorry about that.

* used, such as caller context or other metadata.
*/
@Experimental
public interface ContextWrapperFileSystem {
Contributor

I wonder if we need the words wrap and wrapper. It would be simpler (and more intuitive?) to have ContextFileSystem and the method as addContext. WDYT?

Contributor Author

agree

CONTEXTS.set(newContext);
}

static FileSystem wrapWithContextWhenActivated(FileSystem fs) {
Contributor

What does WhenActivated mean here? Maybe explain in comments if this is important. Otherwise, could we not say addContext?

Contributor Author

agree

context = context + "_local";
}
context = context + "JobID_" + jobID;
FileSystemContext.initializeContextForThread(context);
Contributor

I was thinking some file systems would have contexts and some would not. The code does context processing when the file system might not have a context. Have I understood this correctly?

Contributor Author

Yes, currently only the Hadoop filesystem uses this context.

Contributor

@davidradl davidradl left a comment

Please add unit tests

@liangyu-1 liangyu-1 requested a review from davidradl June 19, 2025 09:59
@@ -115,6 +116,21 @@ public void run() {
checkpointMetaData.getCheckpointId(),
asyncStartDelayMillis);

String context = "FLINK";
Contributor

Shall we put this into a util class?

Contributor Author

good idea

} else {
context = context + "_local";
}
context = context + "JobID_" + taskEnvironment.getJobID() + "_TaskName_" + taskName;
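For illustration, the concatenation in the diff above amounts to the following helper. Whether the branch shown corresponds to a local deployment is an assumption, since only the else branch appears in the diff:

```java
// Sketch of the caller-context string assembled in the diff above;
// the meaning of the branch condition is an assumption.
class CallerContextStrings {
    static String buildContext(boolean local, String jobId, String taskName) {
        String context = "FLINK";
        if (local) {
            context = context + "_local";
        }
        // The diff appends "JobID_" directly, with no separator before it.
        return context + "JobID_" + jobId + "_TaskName_" + taskName;
    }
}
```

So a local run with job ID `j1` and task name `t` would yield `FLINK_localJobID_j1_TaskName_t`.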
Contributor

Compared to using the JobID only, it would be better to use both the job name and the job ID for readability.

Contributor Author

I disagree with this. Job names may contain special characters such as spaces.
In my case, I want to load this context into a structured table for further analysis, so I believe the job ID is sufficient.
If we need to find the exact job name, we can always look it up in the History Server.

Contributor

@HuangZhenQiu HuangZhenQiu left a comment

Thanks for the contribution. We are also waiting for this feature.

@liangyu-1 liangyu-1 requested a review from HuangZhenQiu June 23, 2025 09:27
@liangyu-1
Contributor Author

@flinkbot run azure

@ferenc-csaky
Contributor

Is the CI failure related to this change? If not, let's rebase onto the latest master.

@github-actions github-actions bot added community-reviewed PR has been reviewed by the community. and removed community-reviewed PR has been reviewed by the community. labels Jun 30, 2025
@liangyu-1
Copy link
Contributor Author

liangyu-1 commented Jul 1, 2025

thanks for your reply @ferenc-csaky
I have re-run the failed UT, and it passed, so I think the failure is not related to this change.

@github-actions github-actions bot added community-reviewed PR has been reviewed by the community. and removed community-reviewed PR has been reviewed by the community. labels Jul 1, 2025
Labels
community-reviewed PR has been reviewed by the community.