
Revert "Typo on #153" #9

Open: wants to merge 11 commits into base: master
4 changes: 2 additions & 2 deletions DeepPATH_code/00_preprocessing/0d_SortTiles.py
@@ -251,13 +251,13 @@ def sort_subfolders(metadata, load_dic, **kwargs):
parser.add_argument("--SourceFolder", help="path to tiled images", dest='SourceFolder')
parser.add_argument("--JsonFile", help="path to metadata json file", dest='JsonFile')
parser.add_argument("--Magnification", help="magnification to use", type=float, dest='Magnification')
- parser.add_argument("--MagDiffAllowed", help="difference allwed on Magnification", type=float, dest='MagDiffAllowed')
+ parser.add_argument("--MagDiffAllowed", help="difference allowed on Magnification", type=float, dest='MagDiffAllowed')
parser.add_argument("--SortingOption", help="see option at the epilog", type=int, dest='SortingOption')
parser.add_argument("--PercentValid", help="percentage of images for validation (between 0 and 100)", type=float, dest='PercentValid')
parser.add_argument("--PercentTest", help="percentage of images for testing (between 0 and 100)", type=float, dest='PercentTest')
parser.add_argument("--PatientID", help="Patient ID is supposed to be the first PatientID characters (integer expected) of the folder in which the pyramidal jpgs are. Slides from same patient will be in same train/test/valid set. This option is ignored if set to 0 or -1 ", type=int, dest='PatientID')
parser.add_argument("--TMB", help="path to json file with mutational loads; or to BRAF mutations", dest='TMB')
- parser.add_argument("--nSplit", help="interger n: Split into train/test in n different ways", dest='nSplit')
+ parser.add_argument("--nSplit", help="integer n: Split into train/test in n different ways", dest='nSplit')
parser.add_argument("--outFilenameStats", help="Check if the tile exists in an out_filename_Stats.txt file and copy it only if it is True, or if the expLabel option had the highest probability", dest='outFilenameStats')
parser.add_argument("--expLabel", help="Index of the expected label within the outFilenameStats file (if only True/False is needed, leave this option empty).", dest='expLabel')
parser.add_argument("--threshold", help="threshold the class probability must exceed for the class to be considered true (if not specified, the class with the highest probability is considered true).", dest='threshold')
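The splitting options above interact: when `--PatientID` is set, every slide whose folder name shares the same patient prefix must land in the same train/valid/test partition. A minimal sketch of that grouping logic (illustrative only; `assign_sets` and its arguments are hypothetical names, not the script's actual implementation):

```python
import random

def assign_sets(slide_folders, patient_id_len, percent_valid, percent_test, seed=0):
    """Assign whole patients, not individual slides, to train/valid/test.

    Mirrors what --PatientID, --PercentValid and --PercentTest control:
    slides are grouped by the first `patient_id_len` characters of the
    folder name, then patients are shuffled and partitioned.
    """
    patients = {}
    for folder in slide_folders:
        key = folder[:patient_id_len] if patient_id_len > 0 else folder
        patients.setdefault(key, []).append(folder)

    keys = sorted(patients)
    random.Random(seed).shuffle(keys)

    n_test = round(len(keys) * percent_test / 100)
    n_valid = round(len(keys) * percent_valid / 100)

    split = {}
    for i, key in enumerate(keys):
        label = "test" if i < n_test else "valid" if i < n_test + n_valid else "train"
        for folder in patients[key]:
            split[folder] = label
    return split

split = assign_sets(["TCGA-AA-0001-s1", "TCGA-AA-0001-s2",
                     "TCGA-BB-0002-s1", "TCGA-CC-0003-s1"],
                    patient_id_len=12, percent_valid=25, percent_test=25)
print(split["TCGA-AA-0001-s1"] == split["TCGA-AA-0001-s2"])  # True: same patient
```

Whatever the script does internally, this is the invariant the `--PatientID` option guarantees: no patient's slides are spread across partitions.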
25 changes: 9 additions & 16 deletions DeepPATH_code/01_training/2Classes/inception/slim/README.md
@@ -9,8 +9,7 @@
keeping a model's architecture transparent and its hyperparameters explicit.
## Teaser

As a demonstration of the simplicity of using TF-Slim, compare the simplicity of
- the code necessary for defining the entire [VGG]
- (http://www.robots.ox.ac.uk/~vgg/research/very_deep/) network using TF-Slim to
+ the code necessary for defining the entire [VGG](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) network using TF-Slim to
the lengthy and verbose nature of defining just the first three layers (out of
16) using native tensorflow:

@@ -244,12 +243,9 @@
number. More concretely, the scopes in the example above would be 'conv3_1',

### Scopes

- In addition to the types of scope mechanisms in TensorFlow ([name_scope]
- (https://www.tensorflow.org/api_docs/python/framework.html#name_scope),
- [variable_scope]
- (https://www.tensorflow.org/api_docs/python/state_ops.html#variable_scope),
- TF-Slim adds a new scoping mechanism called "argument scope" or [arg_scope]
- (scopes.py). This new scope allows a user to specify one or more operations and
+ In addition to the types of scope mechanisms in TensorFlow ([name_scope](https://www.tensorflow.org/api_docs/python/framework.html#name_scope),
+ [variable_scope](https://www.tensorflow.org/api_docs/python/state_ops.html#variable_scope),
+ TF-Slim adds a new scoping mechanism called "argument scope" or [arg_scope](./scopes.py). This new scope allows a user to specify one or more operations and
a set of arguments which will be passed to each of the operations defined in the
`arg_scope`. This functionality is best illustrated by example. Consider the
following code snippet:
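(The snippet itself is collapsed in this diff view.) The mechanism it illustrates can be sketched in plain Python; the `arg_scope`/`add_arg_scope` below are a toy reimplementation of the idea, not TF-Slim's actual code in scopes.py:

```python
from contextlib import contextmanager

_defaults = {}  # per-function default keyword arguments

@contextmanager
def arg_scope(funcs, **kwargs):
    """Supply default kwargs to every listed function for the scope's duration."""
    saved = {f: dict(_defaults.get(f, {})) for f in funcs}
    for f in funcs:
        _defaults[f] = {**_defaults.get(f, {}), **kwargs}
    try:
        yield
    finally:
        _defaults.update(saved)

def add_arg_scope(f):
    """Make `f` pick up defaults installed by an enclosing arg_scope."""
    def wrapper(*args, **kwargs):
        return f(*args, **{**_defaults.get(wrapper, {}), **kwargs})
    return wrapper

@add_arg_scope
def conv2d(net, num_filters, kernel, padding='SAME', weight_decay=0.0):
    # Stand-in for a real convolution op: just report the resolved arguments.
    return {'padding': padding, 'weight_decay': weight_decay}

# Inside the scope, padding and weight_decay are written once instead of
# being repeated on every call:
with arg_scope([conv2d], padding='VALID', weight_decay=0.0005):
    layer = conv2d('net', 64, [11, 11])
print(layer)  # {'padding': 'VALID', 'weight_decay': 0.0005}
```

Explicit keyword arguments at a call site still win over the scope's defaults, which is the behavior the README goes on to describe.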
@@ -264,7 +260,7 @@
It should be clear that these three Convolution layers share many of the same
hyperparameters. Two have the same padding, all three have the same weight_decay
and standard deviation of its weights. Not only do the duplicated values make
the code more difficult to read, it also adds the additional burden to the writer
- of needing to doublecheck that all of the values are identical in each step. One
+ of needing to double-check that all of the values are identical in each step. One
solution would be to specify default values using variables:

```python
# (snippet collapsed in the diff view)
```

@@ -362,7 +358,7 @@
classes. For regression problems, this is often the sum-of-squares differences
between the predicted and true values.

Certain models, such as multi-task learning models, require the use of multiple
- loss functions simultaneously. In other words, the loss function ultimatey being
+ loss functions simultaneously. In other words, the loss function ultimately being
minimized is the sum of various other loss functions. For example, consider a
model that predicts both the type of scene in an image as well as the depth from
the camera of each pixel. This model's loss function would be the sum of the
@@ -494,12 +490,9 @@
```python
with tf.Session() as sess:
  ...
```

- See [Restoring Variables]
- (https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#restoring-variables)
- and [Choosing which Variables to Save and Restore]
- (https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#choosing-which-variables-to-save-and-restore)
- sections of the [Variables]
- (https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html) page for
+ See [Restoring Variables](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#restoring-variables)
+ and [Choosing which Variables to Save and Restore](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#choosing-which-variables-to-save-and-restore)
+ sections of the [Variables](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html) page for
more details.

### Using slim.variables to Track which Variables need to be Restored
28 changes: 10 additions & 18 deletions DeepPATH_code/01_training/3Classes/inception/slim/README.md
@@ -9,8 +9,7 @@
keeping a model's architecture transparent and its hyperparameters explicit.
## Teaser

As a demonstration of the simplicity of using TF-Slim, compare the simplicity of
- the code necessary for defining the entire [VGG]
- (http://www.robots.ox.ac.uk/~vgg/research/very_deep/) network using TF-Slim to
+ the code necessary for defining the entire [VGG](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) network using TF-Slim to
the lengthy and verbose nature of defining just the first three layers (out of
16) using native tensorflow:

@@ -244,12 +243,9 @@
number. More concretely, the scopes in the example above would be 'conv3_1',

### Scopes

- In addition to the types of scope mechanisms in TensorFlow ([name_scope]
- (https://www.tensorflow.org/api_docs/python/framework.html#name_scope),
- [variable_scope]
- (https://www.tensorflow.org/api_docs/python/state_ops.html#variable_scope),
- TF-Slim adds a new scoping mechanism called "argument scope" or [arg_scope]
- (scopes.py). This new scope allows a user to specify one or more operations and
+ In addition to the types of scope mechanisms in TensorFlow ([name_scope](https://www.tensorflow.org/api_docs/python/framework.html#name_scope),
+ [variable_scope](https://www.tensorflow.org/api_docs/python/state_ops.html#variable_scope),
+ TF-Slim adds a new scoping mechanism called "argument scope" or [arg_scope](scopes.py). This new scope allows a user to specify one or more operations and
a set of arguments which will be passed to each of the operations defined in the
`arg_scope`. This functionality is best illustrated by example. Consider the
following code snippet:
@@ -264,7 +260,7 @@
It should be clear that these three Convolution layers share many of the same
hyperparameters. Two have the same padding, all three have the same weight_decay
and standard deviation of its weights. Not only do the duplicated values make
the code more difficult to read, it also adds the additional burden to the writer
- of needing to doublecheck that all of the values are identical in each step. One
+ of needing to double-check that all of the values are identical in each step. One
solution would be to specify default values using variables:

```python
# (snippet collapsed in the diff view)
```

@@ -362,7 +358,7 @@
classes. For regression problems, this is often the sum-of-squares differences
between the predicted and true values.

Certain models, such as multi-task learning models, require the use of multiple
- loss functions simultaneously. In other words, the loss function ultimatey being
+ loss functions simultaneously. In other words, the loss function ultimately being
minimized is the sum of various other loss functions. For example, consider a
model that predicts both the type of scene in an image as well as the depth from
the camera of each pixel. This model's loss function would be the sum of the
@@ -439,8 +435,7 @@
let TF-Slim know about the additional loss and let TF-Slim handle the losses.
## Putting the Pieces Together

By combining TF-Slim Variables, Operations and scopes, we can write a normally
- very complex network with very few lines of code. For example, the entire [VGG]
- (https://www.robots.ox.ac.uk/~vgg/research/very_deep/) architecture can be
+ very complex network with very few lines of code. For example, the entire [VGG](https://www.robots.ox.ac.uk/~vgg/research/very_deep/) architecture can be
defined with just the following snippet:
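(The snippet is collapsed in this diff view.) To make the compactness claim concrete without depending on TensorFlow, here is a plain-Python sketch of the VGG-16 layer plan that a few nested loops over repeated ops produce; `vgg16_plan` is a hypothetical illustrative helper, not part of TF-Slim:

```python
def vgg16_plan():
    """List the 13 conv, 5 pool and 3 fully-connected layers of VGG-16."""
    plan = []
    channels = [64, 128, 256, 512, 512]   # filters per conv block
    repeats = [2, 2, 3, 3, 3]             # convs per block
    for block, (c, r) in enumerate(zip(channels, repeats), start=1):
        for i in range(1, r + 1):
            plan.append(('conv%d_%d' % (block, i), c, [3, 3]))
        plan.append(('pool%d' % block, None, [2, 2]))
    for name, units in [('fc6', 4096), ('fc7', 4096), ('fc8', 1000)]:
        plan.append((name, units, None))
    return plan

print(len(vgg16_plan()))  # 21 layers described by about a dozen lines
```

The generated names follow the 'conv3_1'-style scoping convention described earlier in this README.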

```python
# (VGG snippet collapsed in the diff view)
```

@@ -494,12 +489,9 @@
```python
with tf.Session() as sess:
  ...
```

- See [Restoring Variables]
- (https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#restoring-variables)
- and [Choosing which Variables to Save and Restore]
- (https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#choosing-which-variables-to-save-and-restore)
- sections of the [Variables]
- (https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html) page for
+ See [Restoring Variables](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#restoring-variables)
+ and [Choosing which Variables to Save and Restore](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#choosing-which-variables-to-save-and-restore)
+ sections of the [Variables](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html) page for
more details.

### Using slim.variables to Track which Variables need to be Restored
4 changes: 2 additions & 2 deletions DeepPATH_code/03_postprocessing/0h_ROC_MultiOutput.py
@@ -388,7 +388,7 @@ def main():


# save data
- print("******* FP / TP for average probabilitys")
+ print("******* FP / TP for average probabilities")
print(fpr)
print(tpr)
for i in range(n_classes):
@@ -514,7 +514,7 @@ def main():


# save data
- print("******* FP / TP for average probabilitys")
+ print("******* FP / TP for average probabilities")
print(fpr)
print(tpr)
for i in range(n_classes):
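The loop bodies above are collapsed in the diff, but the printed `fpr`/`tpr` arrays are standard ROC quantities. A from-scratch sketch of how such points are computed (illustrative only; the script itself may compute them differently, e.g. via scikit-learn's `roc_curve`, and this version ignores threshold ties):

```python
def roc_points(y_true, y_score):
    """Return (FPR, TPR) pairs, one per prediction, scores descending."""
    pairs = sorted(zip(y_score, y_true), reverse=True)
    pos = sum(y_true)
    neg = len(y_true) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label:
            tp += 1   # true positive: positive sample above threshold
        else:
            fp += 1   # false positive: negative sample above threshold
        points.append((fp / neg, tp / pos))
    return points

pts = roc_points([1, 1, 0, 0], [0.9, 0.8, 0.7, 0.2])
print(pts)  # [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
```

Each point is the (false-positive rate, true-positive rate) obtained by thresholding the scores at successively lower values; the area under these points is the AUC the script reports per class.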