Fix new plotting APIs with Triage's result schemas #713
base: post-postmodeling
Conversation
This commit adds a small change to the catwalk component to calculate feature importances when the model object is a catwalk.estimators.ScaledLogisticRegression. Now, instead of calculating nothing, triage will be able to push feature importances using e raised to the power of each coefficient.
At long last, the experiment runs table. It contains a variety of metadata about the experiment run, such as installed libraries, git hash, and the number of matrices and models built/skipped/errored. Similarly, the experiments table is augmented with data that doesn't change from run to run (e.g. number of time splits, as-of-times, total grid size). A variety of methods on the Experiment act as 'entrypoints'. The first entrypoint you hit when running an experiment (e.g. generate_matrices, or train_and_test_models) gets tagged on the experiment_runs row.

- Add experiment_runs table [Resolves #440] [Resolves #403] and run-invariant columns to the experiments table
- Add tracking module to wrap updates to the experiment_runs table
- Have the experiment call the tracking module to save initial information and retrieve a run_id to update with more data later, either itself or through components (e.g. MatrixBuilder, ModelTrainer) that do relevant work
- Have the experiment save run-invariant information when first computed
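The entrypoint-tagging flow described above can be sketched roughly as below. All names here (`start_run`, `increment`, the in-memory `RUNS` list standing in for the experiment_runs table) are illustrative assumptions, not Triage's actual tracking module.

```python
import datetime

RUNS = []  # stands in for the experiment_runs database table

def start_run(entrypoint):
    """Record a new run row, tagged with the first entrypoint hit."""
    run = {
        "id": len(RUNS) + 1,
        "start_time": datetime.datetime.now(datetime.timezone.utc),
        "entrypoint": entrypoint,
        "matrices_made": 0,
        "models_made": 0,
    }
    RUNS.append(run)
    return run["id"]

def increment(run_id, counter):
    """Components (e.g. a matrix builder) update the run row later."""
    RUNS[run_id - 1][counter] += 1

# The first entrypoint called tags the row; later work updates it.
run_id = start_run("generate_matrices")
increment(run_id, "matrices_made")
```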
Fixed Audition's docs
Introduce experiment_runs table, beef up experiments table
Add feature_importance metric to SLR [solves #509]
* Mostly removing non-ascii from the license file; adding an explicit lineterminator on csv.writer
* Update black from 18.9b0 to 19.3b0
* Update alembic from 1.0.7 to 1.0.8
* Update sqlalchemy from 1.2.18 to 1.3.1
* Update scikit-learn from 0.20.2 to 0.20.3
* Update pandas from 0.24.1 to 0.24.2
* Update boto3 from 1.9.105 to 1.9.125
* Update sqlparse from 0.2.4 to 0.3.0
* Update csvkit from 1.0.3 to 1.0.4
* Update fakeredis from 1.0.2 to 1.0.3
* Update hypothesis from 4.7.17 to 4.14.2
* Update tox from 3.7.0 to 3.8.4
* Fix SQLAlchemy warnings that are now errors
…irtyduck-integration
* Dirty duck (the whole enchilada)
* Improve mkdocs.yml to fit the dirty duck markdown version
* Added a function to manage.py to create the dirty duck md files
* Updated link in the menu bar
* Individual md files for dirty duck; added markdown modules; modified requirements.txt
* Added some suggested modifications
* Material design
…5+ GiB) The underlying library ``s3fs`` automatically writes objects to S3 in "chunks" or "parts" -- *i.e.* via multipart upload -- in line with S3's *minimum* part size for multipart of 5 MiB. This should, in general, avoid S3's *maximum* limit per (part) upload of 5 GiB. **However**, ``s3fs`` assumes that no *single* ``write()`` might exceed the maximum, and as such fails to chunk such too-large upload requests prompted by singular writes of 5+ GiB. This can and should be resolved in ``s3fs``. But first, it can be, should be, and is resolved here in ``S3Store``. resolves #530
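The workaround described above can be sketched as slicing a single oversized write into parts below the per-part cap, so that no individual `write()` exceeds it. This is an illustrative sketch, not `S3Store`'s actual code; the test uses tiny sizes in place of the real 5 GiB limit.

```python
import io

def chunked_write(fileobj, data, max_part=5 * 1024 ** 3):
    """Write `data` to `fileobj` in slices no larger than `max_part`
    bytes, so that no single write() exceeds S3's per-part maximum."""
    for start in range(0, len(data), max_part):
        fileobj.write(data[start:start + max_part])

# Illustration with a tiny cap: a 10-byte payload becomes 4 writes of <= 3 bytes.
buf = io.BytesIO()
chunked_write(buf, b"x" * 10, max_part=3)
```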
write 5+ GiB (matrices) to S3Store
* Don't auto-upgrade db for new Experiments [Resolves #695] To avoid the problem of time-consuming database upgrades happening when we don't want them, the Experiment now:

1. Checks whether the results_schema_versions table exists at all. If it doesn't exist, upgrade. This is because the results schema should be clean in this case, and new users won't have to always run a new thing when they first try Triage.
2. If it does exist, and the version number doesn't match the code's current HEAD revision, throw an error. The error message is customized to whether the database revision is a known revision to the code (the easy case: just upgrade when you have time) or not (you probably upgraded on a different branch and need to go check out that branch to downgrade).
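The decision logic above can be condensed into a small sketch. The function and its return values are hypothetical illustrations of the rules, not Triage's internals.

```python
def decide_db_action(versions_table_exists, db_revision, code_head, known_revisions):
    """Return what the Experiment should do on startup, following the
    rules described in the commit message above."""
    if not versions_table_exists:
        # Fresh schema: safe to auto-upgrade for new users.
        return "upgrade"
    if db_revision == code_head:
        # Database matches the code's HEAD revision: nothing to do.
        return "ok"
    if db_revision in known_revisions:
        # Known but stale revision: the easy case, just upgrade later.
        return "error: known revision, upgrade when you have time"
    # Unknown revision: likely upgraded on another branch; downgrade there.
    return "error: unknown revision, downgrade from the branch that created it"
```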
* Add more user database management options to CLI [Resolves #697] In recent weeks/months, more operations on the results schema have proven to be useful to 'users' (people who use the 'triage' command), not just 'developers' (people who use the 'manage' command). These include: stamping to a specific revision, downgrading, upgrading to a specific revision, and even just viewing the revision history. Here we allow the `triage db` command to interface with alembic to do these things. Furthermore, the old 'stamp' logic in `triage db` isn't terribly useful now that we have been on alembic for a while, and pinning it to experiment config versions wasn't very useful either. Using the standard alembic revisions for stamping I think makes more sense, but I copied the dictionary from before into the help text for 'stamp' because it could still be helpful.

- Modify old `triage db stamp` logic to use standard alembic revisions
- Enable `triage db upgrade` to take a revision (but default to HEAD)
- Add `triage db downgrade` that takes a revision
- Add `triage db history` to show revisions
Adds a bias_audit_config section to the triage experiment config that supports:

- Users can specify the protected-groups logic using a pre-computed table (from_obj_table) or a query (from_obj_query) that must contain entity_id, date, and the attribute columns used to generate the groups for the bias audit with aequitas.
- Users must specify knowledge_date_column, entity_id_column, and a list of attribute_columns; otherwise we would not be able to create the table without knowing which columns it has.
- The bias_audit_config is optional. If it is set, then a protected_groups_table generator runs that is basically a replication of the labels generator.
- The protected groups table created is named protected_groups_{experiment_hash} and is the result of a left join of the cohort table with the from_obj specified by the user.
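A config section along the lines described above might look like the sketch below. Only the key names mentioned in the description (from_obj_table, from_obj_query, knowledge_date_column, entity_id_column, attribute_columns) are taken from this PR; the table and column values are made-up placeholders, and the exact YAML shape is an assumption rather than the PR's verbatim schema.

```yaml
bias_audit_config:
  # Either a pre-computed table ...
  from_obj_table: staging.protected_attributes
  # ... or a query (use one of the two):
  # from_obj_query: "select entity_id, as_of_date, race, sex from staging.people"
  knowledge_date_column: as_of_date
  entity_id_column: entity_id
  attribute_columns:
    - race
    - sex
```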
Add README.md to example/config/, explaining experiment.yaml, audition.yaml, postmodeling_config.yaml and postmodeling_crosstabs.yaml Remove feature.yaml and change documentation of feature-testing since cli.py just takes an experiment config.
The pull request changes the functionality of the string_is_tablesafe validation primitive to only allow lowercase letters (or numbers, underscores) in the strings it checks, as well as adding additional tests for feature aggregation prefixes and subset names, both of which will be used for table names. As described in #632, uppercase letters in these experiment config values end up getting lowercased on table creation but referenced using their uppercase forms (with quotes) at various places in the code, causing postgres to return a "table does not exist" error. This PR also removes a redundant/conflicting dev.txt requirement of different versions of black, keeping the newer version.
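The lowercase-only rule described above can be sketched with a simple regex check. This is an assumed illustration of the rule (lowercase letters, digits, underscores only), not the exact implementation of string_is_tablesafe in Triage.

```python
import re

def string_is_tablesafe(value):
    """Return True only for non-empty strings containing lowercase
    letters, digits, and underscores -- safe to embed in a postgres
    table name without quoting/case-folding surprises."""
    return bool(value) and re.fullmatch(r"[a-z0-9_]+", value) is not None
```

Rejecting uppercase up front avoids the failure mode from #632, where postgres folds an unquoted `MyPrefix` to `myprefix` at creation time but later quoted references to `"MyPrefix"` find no such table.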
Incorporates an Aequitas bias audit into Triage. The bias audit is optional and is controlled with experiment configuration. It is run during evaluation time and on each model. One dirtyduck config (inspections_dt) is updated with a sample bias audit config. To enable this, some requirements are updated so that Triage and Aequitas can coexist more peacefully.
* Pin ipython to latest version 7.5.0
* Pin jupyter to latest version 1.0.0
* Pin sphinx to latest version 2.0.1
* Pin sphinx_rtd_theme to latest version 0.4.3
* Pin coverage to latest version 4.5.3
* Pin flake8 to latest version 3.7.7
* Pin mkdocs to latest version 1.0.4
* Pin tox to latest version 3.9.0
* Pin tox-pyenv to latest version 1.1.0
* Pin nose to latest version 1.3.7
* Pin mock to latest version 2.0.0
* Pin colorama to latest version 0.4.1
* Pin httpie to latest version 1.0.2
* Pin psycopg2-binary to latest version 2.8.2
* Update black from 18.9b0 to 19.3b0
* Pin mkdocs-material to latest version 4.2.0
* Update alembic from 1.0.8 to 1.0.10
* Update sqlalchemy from 1.3.1 to 1.3.3
* Update psycopg2-binary from 2.7.7 to 2.8.2
* Update boto3 from 1.9.125 to 1.9.139
* Update s3fs from 0.2.0 to 0.2.1
* Update ohio from 0.1.2 to 0.4.0
* Update moto from 1.3.7 to 1.3.8
* Update hypothesis from 4.14.2 to 4.18.3
* Update tox from 3.8.4 to 3.9.0
Why did you consider those unit tests old? Those are the tests that make sure that you can run postmodeling as much as possible even if you skipped predictions in your Triage Experiment run. I don't think those tests' usefulness has changed.
@thcrock The reason I removed them temporarily is that the new postmodeling API is now totally different. There is no ModelEvaluator or ModelGroupEvaluator anymore; @nanounanue re-wrote it with a different pattern. I'm adding all the functions back following the new API design, along with those unit tests. That's also why I didn't merge this into the master branch but into the post-postmodeling branch that @nanounanue created originally.
Codecov Report
```
@@          Coverage Diff           @@
##   post-postmodeling   #713   +/- ##
======================================
  Coverage           ?   84.79%
======================================
  Files              ?       93
  Lines              ?     6024
  Branches           ?        0
======================================
  Hits               ?     5108
  Misses             ?      916
  Partials           ?        0
```

Continue to review full report at Codecov.
Ah cool
post-postmodeling branch? Otherwise, the tests wouldn't pass.