Kerberos support #1025 #1030

Open · wants to merge 4 commits into base: main
@@ -48,6 +48,22 @@ public class FluoConfiguration extends SimpleConfiguration {

// Client properties
private static final String CLIENT_PREFIX = FLUO_PREFIX + ".client";

/**
 * @since 1.2.0
 */
public static final String CLIENT_KERBEROS = CLIENT_PREFIX + ".kerberos";

Contributor: This should be 1.3.0.

Contributor: Since this is only used for HDFS and not for ZooKeeper or Accumulo, I think the prop name should be CLIENT_HDFS_KERBEROS = CLIENT_PREFIX + ".hdfs.kerberos".


/**
* @since 1.2.0
*/
public static final String CLIENT_KERBEROS_REALM = CLIENT_PREFIX + ".kerberos.realm";

/**
* @since 1.2.0
*/
public static final String CLIENT_KERBEROS_KEYTAB = CLIENT_PREFIX + ".kerberos.keytab";
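For reference, a standalone sketch of how the new keys resolve, assuming FLUO_PREFIX is "fluo" (the class below is a hypothetical reconstruction for illustration, not the actual FluoConfiguration):

```java
// Hypothetical standalone reconstruction of the new property keys.
public class KerberosPropNames {
  private static final String FLUO_PREFIX = "fluo"; // assumption: FluoConfiguration's prefix
  private static final String CLIENT_PREFIX = FLUO_PREFIX + ".client";

  public static final String CLIENT_KERBEROS = CLIENT_PREFIX + ".kerberos";
  public static final String CLIENT_KERBEROS_REALM = CLIENT_PREFIX + ".kerberos.realm";
  public static final String CLIENT_KERBEROS_KEYTAB = CLIENT_PREFIX + ".kerberos.keytab";

  public static void main(String[] args) {
    // Prints the fully resolved property names
    System.out.println(CLIENT_KERBEROS);
    System.out.println(CLIENT_KERBEROS_REALM);
    System.out.println(CLIENT_KERBEROS_KEYTAB);
  }
}
```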

/**
* @deprecated since 1.2.0 replaced by fluo.connection.application.name
*/
@@ -59,6 +59,7 @@
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.KeeperException.NodeExistsException;
import org.slf4j.Logger;
@@ -76,6 +77,28 @@ public class FluoAdminImpl implements FluoAdmin {

private final String appRootDir;

/**
* Kerberos authentication method.
*
* @param realm Realm to be used in authentication.
* @param keytab Keytab path.
* @since 1.2.0
*/
public void loginWithKerberos(final String realm, final String keytab) {

Member: As far as I can tell, this only enables authentication to HDFS using HDFS client-side authentication features. The name should indicate hdfsLoginWithKerberos or some other indication. At the very least, the Javadoc should indicate that this is HDFS authentication.

Contributor (author): As we noticed, only the jar copy process demanded Kerberos authentication. All other components were able to retrieve the tokens from the environment. But I agree with you: Fluo should work using the environment authentication. I'll try other approaches; for now, this could help anyone who wants to use Fluo with Kerberos.

Member: Oh, interesting. I'm surprised other components worked from the environment, but this did not. I wonder why. Did you try fluo-yarn? Also, do you know if this change allows it to work with Accumulo with Kerberos tokens? Or did you only try Accumulo using regular PasswordTokens?

Contributor: Another possible way to fix this would be to ensure we use the Hadoop config from the classpath and document that.

Contributor (author): Yes! Our Accumulo is kerberized too; we have no problem with it. We haven't tried fluo-yarn on this environment yet. We can try.

try {
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "kerberos");
conf.set("hadoop.security.authorization", "true");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(realm, keytab);

logger.info("Connected with REALM: '{}'.", realm);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
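The classpath-config alternative suggested in the review above could look roughly like the following. This is only a sketch under the assumption that core-site.xml/hdfs-site.xml are on the classpath; the method name hdfsLoginFromClasspathConfig is hypothetical and not part of this PR:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch only: let Hadoop pick up security settings from the *-site.xml
// files on the classpath instead of hard-coding them.
public static void hdfsLoginFromClasspathConfig(String principal, String keytab)
    throws IOException {
  Configuration conf = new Configuration(); // loads core-site.xml etc. from the classpath
  UserGroupInformation.setConfiguration(conf);
  if (UserGroupInformation.isSecurityEnabled()) {
    UserGroupInformation.loginUserFromKeytab(principal, keytab);
  }
}
```

This defers the kerberos/simple decision to the cluster configuration, which is what the reviewer's "use hadoop config from the classpath" comment proposes.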

public FluoAdminImpl(FluoConfiguration config) {
this.config = config;

@@ -373,6 +396,14 @@ public static String copyDirToDfs(String dfsRoot, String appName, String srcDir,
}

private String copyJarsToDfs(String jars, String destDir) {

if (config.getClientConfiguration().getBoolean(FluoConfiguration.CLIENT_KERBEROS, false)) {
this.loginWithKerberos(
config.getClientConfiguration().getString(FluoConfiguration.CLIENT_KERBEROS_REALM, ""),
config.getClientConfiguration().getString(FluoConfiguration.CLIENT_KERBEROS_KEYTAB, ""));
}
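With this guard in place, enabling the login for the jar copy might look roughly like this in a client's fluo.properties (hypothetical values; the key names assume FLUO_PREFIX resolves to "fluo", and note that the "realm" value is ultimately passed to loginUserFromKeytab, which expects a principal name):

```properties
fluo.client.kerberos=true
fluo.client.kerberos.realm=fluo@EXAMPLE.COM
fluo.client.kerberos.keytab=/etc/security/keytabs/fluo.keytab
```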


String dfsAppRoot = config.getDfsRoot() + "/" + config.getApplicationName();
String dfsDestDir = dfsAppRoot + "/" + destDir;
