Conversation

divakaivan

Closes #21547


google-cla bot commented Aug 7, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @divakaivan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented a new feature that enhances the SKLearnClassifier wrapper by adding a predict_proba method. This change allows Keras models, when used through the scikit-learn API, to output class probabilities, which is a common requirement for classification tasks and improves compatibility with scikit-learn's ecosystem. This directly addresses the need for probability predictions from wrapped Keras classifiers.

Highlights

  • New predict_proba method for SKLearnClassifier: I've added a new predict_proba method to the SKLearnClassifier class. This method allows users to obtain class probabilities for input samples X, aligning the Keras wrapper more closely with standard scikit-learn classifier interfaces. Internally, it uses sklearn.utils.validation.check_is_fitted to ensure the model has been trained and _validate_data for input validation before calling the underlying Keras model's predict method.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a predict_proba method to the SKLearnClassifier, enhancing its compatibility with the scikit-learn API. The implementation correctly leverages existing validation and prediction logic. My feedback focuses on improving the docstring of the new method to provide more detailed information about its parameters and return values, aligning it with scikit-learn's documentation standards for better usability.

@codecov-commenter

codecov-commenter commented Aug 7, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.60%. Comparing base (a62a4a3) to head (437eb7f).
⚠️ Report is 2 commits behind head on master.

Additional details and impacted files
```
@@           Coverage Diff           @@
##           master   #21556   +/-   ##
=======================================
  Coverage   82.59%   82.60%
=======================================
  Files         572      572
  Lines       58322    58336   +14
  Branches     9130     9134    +4
=======================================
+ Hits        48173    48189   +16
+ Misses       7818     7817    -1
+ Partials     2331     2330    -1
```

| Flag | Coverage | Δ |
|------|----------|---|
| keras | 82.40% <100.00%> | +<0.01% ⬆️ |
| keras-jax | 63.32% <100.00%> | +0.01% ⬆️ |
| keras-numpy | 57.66% <60.00%> | +<0.01% ⬆️ |
| keras-openvino | 34.31% <60.00%> | +<0.01% ⬆️ |
| keras-tensorflow | 64.06% <100.00%> | +0.01% ⬆️ |
| keras-torch | 63.65% <100.00%> | +0.01% ⬆️ |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

```
"""

def predict_proba(self, X):
```
Collaborator

The problem here is that since the model is configurable, we have no way to know whether the model outputs probabilities or not. This method serves no additional purpose over just predict().

Author

@divakaivan divakaivan Aug 7, 2025

@fchollet Could you elaborate, please? I'm not sure I understand your comment. The only difference between this predict_proba and predict is that the target is not transformed back.

If the user expects probabilities, they will get them. Although predict_proba might not always return proper probabilities, its inclusion lets users interoperate with scikit-learn workflows that expect it. Some examples are in the original issue request.
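For context, a minimal sketch (not from this PR) of one such scikit-learn workflow: probability-based scorers such as neg_log_loss call predict_proba on the estimator, so a classifier wrapper without that method cannot be used with them. LogisticRegression stands in here for a wrapped Keras classifier.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy binary classification data.
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# The neg_log_loss scorer calls predict_proba under the hood; an estimator
# lacking predict_proba would raise an error here.
scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, scoring="neg_log_loss", cv=3
)
print(scores.shape)  # (3,)
```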

Author

@divakaivan divakaivan Oct 1, 2025

I had a chat with Adrin Jalali and @glemaitre, and they suggested we put predict_proba under an available_if decorator. Maybe something like:

```python
from sklearn.utils._available_if import available_if

def _returns_probas(estimator):
    return estimator.model_.layers[-1].activation.__name__ in ("sigmoid", "softmax")

class SKLearnClassifier:
    @available_if(_returns_probas)
    def predict_proba(self, X):
        ...
        return self.model_.predict(X)
```

Also pinging @adrinjalali for his thoughts on the issue/PR.

@divakaivan divakaivan marked this pull request as draft October 2, 2025 12:43
@divakaivan divakaivan marked this pull request as ready for review October 3, 2025 07:36
@divakaivan divakaivan requested a review from adrinjalali October 3, 2025 07:36
@divakaivan divakaivan requested a review from fchollet October 3, 2025 07:36

```python
check_is_fitted(self)
X = _validate_data(self, X, reset=False)
return self.model_.predict(X)
```
Contributor
are these probabilities? Or is this more of a decision_function implementation? 🤔

Author

@divakaivan divakaivan Oct 3, 2025

model_.predict returns probabilities when the last layer's activation function is softmax/sigmoid. If the last layer has no activation, I believe it returns logits.

Reproducible example in Google Colab:

```python
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
from tensorflow.keras.losses import SparseCategoricalCrossentropy

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

import random
import numpy as np
import tensorflow as tf

random.seed(42)
np.random.seed(42)

X, y = make_classification(n_samples=1000, n_features=10, n_informative=4, n_classes=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

inp = Input(shape=(10,))
x = Dense(20, activation="relu")(inp)
x = Dense(20, activation="relu")(x)
x = Dense(20, activation="relu")(x)
logits_output = Dense(4, activation=None)(x)

model_logits = Model(inp, logits_output)
model_logits.compile(loss=SparseCategoricalCrossentropy(from_logits=True), optimizer="adam")

model_logits.fit(X_train, y_train, epochs=10, verbose=0)

softmax_output = tf.keras.layers.Activation('softmax')(logits_output)
model_softmax = Model(inp, softmax_output)

test_sample = X_test[:1]

print("LOGITS OUTPUT:")
pred_logits = model_logits.predict(test_sample, verbose=0)
print(pred_logits)

print("SOFTMAX MODEL OUTPUT:")
pred_softmax = model_softmax.predict(test_sample, verbose=0)
print(pred_softmax)

print("MANUAL SOFTMAX APPLIED TO LOGITS:")
pred_manual_softmax = tf.nn.softmax(pred_logits).numpy()
print(pred_manual_softmax)

print("DIFFERENCE")
print(np.abs(pred_softmax - pred_manual_softmax))
```

Output:

```
LOGITS OUTPUT:
[[ 0.60939574  4.029889   -1.224225    1.267421  ]]
SOFTMAX MODEL OUTPUT:
[[0.02969535 0.9082174  0.00474632 0.05734099]]
MANUAL SOFTMAX APPLIED TO LOGITS:
[[0.02969535 0.9082174  0.00474632 0.05734099]]
DIFFERENCE
[[0. 0. 0. 0.]]
```

divakaivan and others added 2 commits October 3, 2025 18:24
Co-authored-by: Adrin Jalali <[email protected]>
```
)


def _estimator_has(attr):
```
Author

@divakaivan divakaivan Oct 3, 2025

Should this be _check_proba or _estimator_has_proba? There's no reason to take attr as a parameter, since the check is specific to what we want to verify (something like here).

@divakaivan
Author

divakaivan commented Oct 3, 2025

The tests fail with AttributeError: module 'sklearn.utils' has no attribute 'metaestimators', even though the tests run with scikit-learn==1.7.2 🤔 Edit: need from sklearn.utils.metaestimators import available_if

Successfully merging this pull request may close these issues.

Adding a predict_proba method on SKLearnClassifier