From ce7600347bd3ffbecf188449c587dc695abcfaee Mon Sep 17 00:00:00 2001
From: shappiron
Date: Mon, 20 Nov 2023 11:13:38 +0300
Subject: [PATCH] Update documentation

---
 _sources/stat/survival_analysis_part2.ipynb |  56 +--
 dyn/complex_systems.html                    | 102 ++---
 dyn/system_resilience.html                  | 210 +++++-----
 genindex.html                               |   8 +-
 intro.html                                  |  21 +-
 ml/aging_clocks.html                        | 136 +++---
 projects/template.html                      |   8 +-
 search.html                                 |   8 +-
 searchindex.js                              |   2 +-
 stat/survival_analysis_part2.html           | 434 ++++++++++----------
 10 files changed, 483 insertions(+), 502 deletions(-)

diff --git a/_sources/stat/survival_analysis_part2.ipynb b/_sources/stat/survival_analysis_part2.ipynb
index 207b99e..f4d9683 100644
--- a/_sources/stat/survival_analysis_part2.ipynb
+++ b/_sources/stat/survival_analysis_part2.ipynb
@@ -202,16 +202,16 @@
    "id": "9adc7d5c",
    "metadata": {},
    "source": [
-    "# Classical machine learning methods\n",
+    "## Classical machine learning methods\n",
     "Their main advantage is the ability to model non-linear relationships and to work with high-dimensional data.\n",
-    "## Decision tree\n",
+    "### Decision tree\n",
     "The basic intuition behind tree models is to recursively partition the data based on a particular splitting criterion, so that objects that are similar to each other with respect to the value of interest are placed in the same node.\n",
     "\n",
     "We will start with the simplest case - a decision tree for classification:\n",
-    "### Classification decision tree\n",
+    "#### Classification decision tree\n",
     "```{figure} figs/9.PNG\n",
     "```\n",
-    "#### Probabilities\n",
+    "##### Probabilities\n",
     "\n",
     "Before the first split:\n",
     "\n",
@@ -228,7 +228,7 @@
     "$$P(y=\\text{YELLOW}|X \\leq 12) = \\frac{3}{13} \\approx 0.23$$\n",
     "$$P(y=\\text{VIOLET}|X \\leq 12) = \\frac{10}{13} \\approx 0.77$$\n",
     "\n",
     "$$P(y=\\text{VIOLET}|X > 12) = \\frac{1}{7} \\approx 0.14$$\n",
     "$$P(y=\\text{YELLOW}|X > 12) = \\frac{6}{7} \\approx 0.86$$\n",
     "\n",
     "\n",
-    "#### Entropy:\n",
+    "##### Entropy:\n",
     "$$\n",
     "H(p) = - \\sum_i^K p_i\\log(p_i)\n",
     "$$\n",
@@ -246,7 +246,7 @@
     "\n",
     "$$H_{\\text{total}} = \\frac{13}{20} \\cdot 0.96 + \\frac{7}{20} \\cdot 0.58 \\approx 0.83$$\n",
     "\n",
-    "#### Information Gain:\n",
+    "##### Information Gain:\n",
     "$$\n",
     "IG = H(\\text{parent}) - H(\\text{child})\n",
     "$$\n",
@@ -263,7 +263,7 @@
    "id": "929d56ed",
    "metadata": {},
    "source": [
-    "### Regression decision tree\n",
+    "#### Regression decision tree\n",
     "A regression decision tree is a step-wise constant predictor.\n",
     "\n",
     "Let's look at this example:\n",
@@ -294,7 +294,7 @@
    "id": "637ecf43",
    "metadata": {},
    "source": [
-    "## Ensembling\n",
+    "### Ensembling\n",
     "Ensembling means combining the predictions of multiple base learners to obtain a powerful overall model. The base learners are often very simple models, also referred to as weak learners. \n",
     "Multiple diverse models are created to predict an outcome, either by using many different modeling algorithms or by using different training data sets.\n",
     "\n",
@@ -320,14 +320,14 @@
    "id": "5ada40bb",
    "metadata": {},
    "source": [
-    "### Random forest\n",
+    "#### Random forest\n",
     "Random Forest fits a set of trees independently and then averages their predictions.\n",
     "\n",
     "The general principles of RF are: (a) trees are grown using bootstrapped data; (b) random feature selection is used when splitting tree nodes; (c) trees are generally grown deeply; (d) the forest ensemble is calculated by averaging terminal node statistics.\n",
     "\n",
     "Importantly, a high number of base learners does not lead to overfitting.\n",
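+    "\n",
+    "As a quick illustration of why averaging many bootstrapped trees behaves so well, here is a minimal scikit-learn sketch (an illustrative addition, not part of the original notebook; the synthetic dataset and the hyperparameters are arbitrary assumptions):\n",
+    "```python\n",
+    "from sklearn.datasets import make_classification\n",
+    "from sklearn.ensemble import RandomForestClassifier\n",
+    "from sklearn.model_selection import train_test_split\n",
+    "from sklearn.tree import DecisionTreeClassifier\n",
+    "\n",
+    "X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)\n",
+    "X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)\n",
+    "\n",
+    "tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)                    # one deep tree\n",
+    "forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr) # many averaged trees\n",
+    "\n",
+    "print('single tree   train/test accuracy:', tree.score(X_tr, y_tr), tree.score(X_te, y_te))\n",
+    "print('random forest train/test accuracy:', forest.score(X_tr, y_tr), forest.score(X_te, y_te))\n",
+    "```\n",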
\n", "\n", - "### Gradient boosting\n", + "#### Gradient boosting\n", "In contrast to random forest gradient boosted model is constructed sequentially in a greedy stagewise fashion\n", "\n", "After training decision tree the errors of prediction are obtained and the next decision tree is trained on this prediction errors\n", @@ -347,7 +347,7 @@ "\n", "If we want to include high number of base learners we should use very low learning rate to restrict the influence of individual base learners - similar to regularization\n", "\n", - "# Survival machine learning\n", + "## Survival machine learning\n", "\n", "Survival analysis is a type of regression problem as we want to predict a continuous value, but with a twist. It differs from traditional regression by the fact that parts of the data can only be partially observed – they are censored\n", "\n", @@ -357,11 +357,11 @@ "Survival machine learning - machine learning methods adapted to work with survival data and censoring!\n", "\n", "\n", - "## 1. Survival random forest\n", + "### Survival random forest\n", "Survival trees are one form of decision trees which are tailored\n", "to handle censored data. The goal is to split the tree node into left and right daughters with dissimilar event history (survival) behavior.\n", "\n", - "### Splitting criterion\n", + "#### Splitting criterion\n", "The primary difference between a survival tree and the standard decision tree is\n", "in the choice of splitting criterion - the log-rank test. The log-rank test has traditionally been used for two-sample testing of survival data, but it can be used for survival splitting as a means for maximizing between-node survival differences. \n", "\n", @@ -403,7 +403,7 @@ "id": "559e3cad", "metadata": {}, "source": [ - "### Prediction\n", + "#### Prediction\n", "For prediction, a sample is dropped down each tree in the forest until it reaches a terminal node.\n", "\n", "Data in each terminal is used to non-parametrically estimate the cumulative hazard function and survival using the Nelson-Aalen estimator and Kaplan-Meier, respectively. \n", @@ -431,11 +431,11 @@ "id": "0cd849af", "metadata": {}, "source": [ - "## 2. Survival Gradient boosting\n", + "### Survival Gradient boosting\n", "\n", "Gradient Boosting does not refer to one particular model, but a framework to optimize loss functions. \n", "\n", - "### Cox’s Partial Likelihood Loss\n", + "#### Cox’s Partial Likelihood Loss\n", "The default loss function is the partial likelihood loss of Cox’s proportional hazards model. \n", "The objective is to maximize the log partial likelihood function, but replacing the traditional linear model with the additive model \n" ] @@ -456,7 +456,7 @@ "id": "64c915e1", "metadata": {}, "source": [ - "# Neural networks - Multi-Layer Perceptron Network \n", + "## Neural networks - Multi-Layer Perceptron Network \n", "\n", "Here is the model of artificial neuron - base element of artificial neural network. The output is computed using activation function on the summation of inputs multiplied by weights and the bias value\n", "\n", @@ -481,7 +481,7 @@ "id": "4a762b0f", "metadata": {}, "source": [ - "## MLP training\n", + "### MLP training\n", "The most popular method for training MLPs is back-propagation. During backpropagation, the output values are compared with the correct answer to compute the value of some predefined error-function. The error is then fed back through the network. 
Using this information, the algorithm adjusts the weights of each connection in order to reduce the value of the error function by some small amount. After repeating this process for a sufficiently large number of training cycles, the network usually converges to a state where the error of the calculations is small. In this case, one would say that the network has learned a certain target function. \n"
   ]
  },
@@ -576,11 +576,11 @@
    "id": "740e6a49",
    "metadata": {},
    "source": [
-    "# Survival neural networks\n",
+    "## Survival neural networks\n",
     "Neural network methods adapted to work with survival data and censoring!\n",
     "Pycox is a Python package for survival analysis and time-to-event prediction with PyTorch, built on the torchtuples package for training PyTorch models.\n",
     "\n",
-    "## DeepSurv (CoxPH NN)\n",
+    "### DeepSurv (CoxPH NN)\n",
     "Continuous-time model. \n",
     "\n",
     "Nonlinear Cox proportional hazards network. A deep feed-forward neural network with the Cox proportional hazards loss function. It can be considered a nonlinear extension of the Cox proportional hazards model: it can deal with both linear and nonlinear effects of covariates. \n",
@@ -606,7 +606,7 @@
    "id": "9a8b7c3b",
    "metadata": {},
    "source": [
-    "## Nnet-survival (Logistic hazard NN)\n",
+    "### Nnet-survival (Logistic hazard NN)\n",
     "A discrete-time, fully parametric survival model.\n",
     "\n",
     "The Logistic-Hazard method parametrizes the discrete hazards and optimizes the survival likelihood.\n",
@@ -623,21 +623,21 @@
    "id": "86a4434a",
    "metadata": {},
    "source": [
-    "## Performance metrics\n",
+    "### Performance metrics\n",
     "Our test data is usually subject to censoring too; therefore, common metrics like root mean squared error or correlation are unsuitable. Instead, we use metrics specific to survival analysis.\n",
-    "### 1. Harrell’s concordance index\n",
+    "#### Harrell’s concordance index\n",
     "Predictions are often evaluated by a measure of rank correlation between predicted risk scores and observed time points in the test data. Harrell’s concordance index, or c-index, computes the ratio of correctly ordered (concordant) pairs to comparable pairs.\n",
     "\n",
     "The higher the C-index is - the better the model performance is\n",
     "\n",
     "\n",
-    "### 2. Time-dependent ROC AUC\n",
+    "#### Time-dependent ROC AUC\n",
     "An extension of the well-known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point, we can estimate how well a predictive model can distinguish subjects who will experience an event by that time \n",
     " (sensitivity) from those who will not (specificity).\n",
     " \n",
     "The higher the ROC AUC is - the better the model performance is\n",
     "\n",
-    " ### 3. TIme-dependent Brier score\n",
+    " #### Time-dependent Brier score\n",
     " The time-dependent Brier score is an extension of the mean squared error to right-censored data.\n",
     " \n",
     " The lower the Brier score is - the better the model performance is"
   ]
  },
@@ -648,7 +648,7 @@
    "id": "b05e6c79",
    "metadata": {},
    "source": [
-    "## Features selection\n",
+    "### Feature selection\n",
     "Which variable is most predictive?\n",
     "\n",
     "Different methodologies exist; however, we will only talk about one simple but valuable method - permutation importance."
   ]
  },
@@ -659,11 +659,11 @@
    "id": "2ff2731e",
    "metadata": {},
    "source": [
-    "### 1. Permutation feature importance \n",
+    "#### Permutation feature importance \n",
     "Permutation feature importance is a model inspection technique which can be used for any fitted estimator with tabular data. 
This is especially useful for non-linear estimators. \n",
     "\n",
     "The permutation feature importance is the decrease in a model score when a single feature's values are randomly shuffled. This procedure breaks the relationship between the feature and the target; thus, the drop in the model score indicates how much the model depends on that feature. \n",
-    "# Credits \n",
+    "## Credits \n",
     "This notebook was prepared by Margarita Sidorova"
   ]
  }
diff --git a/dyn/complex_systems.html b/dyn/complex_systems.html
index 08d6e4c..70e0385 100644
--- a/dyn/complex_systems.html
+++ b/dyn/complex_systems.html
@@ -6,7 +6,7 @@
- 12. Complex systems approach — Computational aging book
+ 7. Complex systems approach — Computational aging book
@@ -55,8 +55,8 @@
- - + +
@@ -156,7 +156,7 @@

Computational aging book

  • - 5. Survival analysis. Advanced methods. + 5. Advanced survival analysis
  • @@ -168,7 +168,7 @@

    Computational aging book

    @@ -168,7 +168,7 @@

    Computational aging book

    -

    Let’s combine both state variables into a state vector \(x = [r, p]^T\) (we will use column notation for vectors). Then the right part of the equation (13.9) can be written as a matrix \(M\) of coefficients multiplied by the state vector \(x\):

    +

    Let’s combine both state variables into a state vector \(x = [r, p]^T\) (we will use column notation for vectors). Then the right part of the equation (8.9) can be written as a matrix \(M\) of coefficients multiplied by the state vector \(x\):

    \[\begin{split} \begin{bmatrix} @@ -1504,10 +1504,10 @@

    13.3.1. One important observation is that we can say whether the system will be stable or not by looking at one parameter - the largest eigenvalue. Indeed, if the largest eigenvalue is negative, then all the others are also negative. It is a brilliant property of linear systems. But what about non-linear ones?

    -

    13.3.2. Stability in non-linear multidimensional case#

    +

    8.3.2. Stability in non-linear multidimensional case#

    We start with a rather simple but still tractable Michaelis-Menten gene regulatory model []. We first formulate this model for \(n\) genes, where each gene \(x_i\) is regulated by the others, including itself:

    -(13.10)#\[ +(8.10)#\[ \frac{dx_i}{dt} = -Bx_i + \sum^n_{j=1}A_{ij}\frac{x_j}{x_j + 1} \]

    Here, \(B\) is the degradation rate in the linear term we have seen earlier; we assume it is the same for all genes. \(A_{ij}\) is the regulatory constant, which can take the values \(0\) or \(1\), indicating the presence or absence of an activatory regulation. An element of the matrix \(A_{ij}\) is read as \(A_{i\leftarrow j}\), meaning that the \(j\)-th gene (column index) acts on the \(i\)-th gene (row index). But wait, why does the second, non-linear term have such a strange functional form? It is a saturating function converging to \(1\) as \(x_j\) increases. It turns out that it reflects a quite simple idea of promoter regulation: if some product of gene \(j\) binds to a promoter of gene \(i\), then this promoter is no longer free for binding. Thus, the activatory effect is limited by the number of different promoter binding sites for a particular gene, which is reflected in the saturating functional form of the term.
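    To build some intuition for how this system behaves, here is a minimal simulation sketch (an illustrative addition, not the chapter's own code cell; the network size, wiring density, \(B\) and the initial state are arbitrary assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, B = 10, 1.0
# A[i, j] = 1 means that gene j activates gene i (read as "i <- j")
A = (rng.random((n, n)) < 0.3).astype(float)

def michaelis_menten_grn(t, x):
    # dx_i/dt = -B*x_i + sum_j A_ij * x_j / (x_j + 1)
    return -B * x + A @ (x / (x + 1.0))

sol = solve_ivp(michaelis_menten_grn, (0.0, 20.0), y0=rng.random(n))
print(np.round(sol.y[:, -1], 3))   # expression levels at t = 20, close to the stationary state
```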

    @@ -1570,7 +1570,7 @@

    13.3.2.

    or more explicitly:

    -

    13.3.3. Universality and resilience#

    +

    8.3.3. Universality and resilience#

    In this section, we will demonstrate exactly how complex systems can be reduced to simple universal laws. We will see that the system that we considered above obeys a universal law that allows us to investigate its stability and resilience (yes, in this section we will consider the difference between these two concepts). We will follow a framework developed in the paper [], which in short can be formulated as follows. Consider a system consisting of \(n\) components (nodes of a network) \(x = (x_1, ..., x_n)^T\). The components of the system evolve according to the following general system of non-linear (in general) differential equations:

    -(13.12)#\[ +(8.12)#\[ \frac{dx_i}{dt} = F(x_i) + \sum_{j=1}^n A_{ij}G(x_i, x_j) \]

    The first term on the right-hand side of the equation describes the self-dynamics of each component, while the second term describes the interactions between component \(i\) and its interacting partners. The nonlinear functions \(F(x_i)\) and \(G(x_i, x_j)\) represent the dynamical laws that govern the system’s components, while the weighted connectivity matrix \(A_{ij} > 0\) captures the positive interactions between the nodes. This is a quite general form of a non-linear equation, and many multidimensional dynamical systems can be described with it.

    -

    As we learned earlier, the stability of a system can be compromised if many system parameters are changed. This distinguishes multidimensional systems (networks) from one-dimensional ones they have a definite topology of interaction network. The brilliant discovery of the paper under consideration is that we can compute universal number which characterize a total network topology of equation (13.12). The main result of the paper is that we can rewrite (13.12) in one-dimensional form as follows:

    +

    As we learned earlier, the stability of a system can be compromised if many system parameters are changed. This distinguishes multidimensional systems (networks) from one-dimensional ones: they have a definite topology of the interaction network. The brilliant discovery of the paper under consideration is that we can compute a universal number which characterizes the total network topology of equation (8.12). The main result of the paper is that we can rewrite (8.12) in one-dimensional form as follows:

    -(13.13)#\[ +(8.13)#\[ \frac{dx_{eff}}{dt} = F(x_{eff}) + \beta_{eff}G(x_{eff}, x_{eff}) \]

    where \(x_{eff}\) is an effective dynamic component representing the “average” dynamics of the whole system, and \(\beta_{eff}\) is an effective topological characteristic of the network, which is actually a macroscopic description of the \(n^2\) microscopic parameters of the matrix \(A\). They can be computed as follows:

    @@ -1673,7 +1673,7 @@

    13.3.3. You probably already see that this equation has a stable non-zero solution. But how can we deny ourselves the pleasure of exploring it?

    The fixed points are:

    -(13.14)#\[ +(8.14)#\[ x^*_1 = 0;\ x^*_2 = \frac{\beta_{eff}}{B} - 1 = 1 \]

    The stability parameter \(\lambda\) is:

    @@ -1763,7 +1763,7 @@

    13.3.3. We observe different dynamics for the genes. Some of them are upregulated by others and converge to a stationary value; others do not have positive regulation and exhibit a decay trajectory with a rate of \(B\). Nevertheless, we call this dynamics stable and non-zero because the average dynamics is not equal to zero (\(\langle x\rangle \neq 0\)). Note that the effective dynamics \(x_{eff}\) is not equal to the average one due to slightly different weighting.

    The second interesting result of the [] paper is that the topological characteristic \(\beta_{eff}\) can be decomposed into well-interpretable topological properties. Let’s write the following relation:

    -(13.15)#\[ +(8.15)#\[ \beta_{eff} = \langle s\rangle + SH \]

    We already know that \(\langle s \rangle\) is the average degree of the elements, but it has another name - the density of the network; \(S\) represents the symmetry of the network \(A\), and \(H\) represents its heterogeneity. They can be computed as follows:

    @@ -1777,7 +1777,7 @@

    13.3.3. ../_images/topology_properties.png
    -

    Fig. 13.4 The topological characteristics of a network.#

    +

    Fig. 8.4 The topological characteristics of a network.#

    The density property is quite self-explanatory - it is just the average number of in/out degrees in the network. Symmetry means that most nodes have bi-directed edges to each other; an asymmetric network is one where nodes with a large in-degree tend to have a small out-degree. Large heterogeneity means that some nodes may have large in/out degrees and others small ones; in a network with small heterogeneity, all nodes have almost equal degrees. Let’s compute them and check that they actually sum to \(\beta_{eff}\).
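    The notebook's own computation is not included in this hunk; a hedged sketch of how these quantities could be computed for an adjacency matrix \(A\) (with \(A_{ij}\) meaning \(j \to i\)) is shown below. The definitions follow the decomposition used in the resilience paper by Gao et al. (2016); the exact normalisation in the chapter's hidden code cell may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)   # random activatory network
np.fill_diagonal(A, 0.0)

s_in = A.sum(axis=1)    # in-degrees  (row sums, j -> i)
s_out = A.sum(axis=0)   # out-degrees (column sums)

beta_eff = s_out @ s_in / A.sum()                    # <s_out * s_in> / <s>
density = A.sum() / n                                # <s>, the average degree
symmetry = np.corrcoef(s_in, s_out)[0, 1]            # S: correlation of in- and out-degrees
heterogeneity = s_in.std() * s_out.std() / density   # H

print('beta_eff  =', round(beta_eff, 4))
print('<s> + S*H =', round(density + symmetry * heterogeneity, 4))
```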

    @@ -1812,7 +1812,7 @@

    13.3.3.

    -

    These nice network characteristics confer us an instrument for improvement \(\beta_{eff}\) value. You remember expression (13.14)? It is time to plot bifurcation diagram!

    +

    These nice network characteristics give us an instrument for improving the value of \(\beta_{eff}\). Do you remember expression (8.14)? It is time to plot the bifurcation diagram!

    beta0 = np.linspace(0, 1, 100)
    @@ -1839,8 +1839,8 @@ 

    13.3.3.

    -

    We have stable non-zero solution while \(\beta_{eff} > \beta_c\). The critical value of \(\beta_c=1\) is a root of the second expression in (13.14). Of course, crossing the \(\beta_c\) from right to left leads to a bifurcation and a cell dies. What else interesting about value of \(\beta_{eff}\) is that it represents the reserve of resilience in the system. If some topological perturbation occurs (e.g. some edges are removed) then it is better to have larger value \(\beta_{eff}\), otherwise the system is under risk of a killing bifurcation.

    -

    How can we improve the resilience? From the equation (13.15) we see several options:

    +

    We have a stable non-zero solution while \(\beta_{eff} > \beta_c\). The critical value \(\beta_c=1\) is a root of the second expression in (8.14). Of course, crossing \(\beta_c\) from right to left leads to a bifurcation, and the cell dies. What is also interesting about the value of \(\beta_{eff}\) is that it represents the reserve of resilience in the system. If some topological perturbation occurs (e.g. some edges are removed), then it is better to have a larger value of \(\beta_{eff}\); otherwise the system is at risk of a killing bifurcation.
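    As a toy illustration of this reserve of resilience (again an illustrative addition, not the chapter's own code; the random network and the removed fractions are arbitrary assumptions), one can watch \(\beta_{eff}\) decay toward \(\beta_c = 1\) as edges are deleted:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
A = (rng.random((n, n)) < 0.05).astype(float)
np.fill_diagonal(A, 0.0)

def beta_eff(M):
    s_in, s_out = M.sum(axis=1), M.sum(axis=0)
    return s_out @ s_in / M.sum()

edges = np.argwhere(A > 0)
rng.shuffle(edges)                      # random order of edge removal
for frac in (0.0, 0.25, 0.5, 0.75):
    M = A.copy()
    drop = edges[: int(frac * len(edges))]
    M[drop[:, 0], drop[:, 1]] = 0.0
    print(f'{frac:.0%} of edges removed -> beta_eff = {beta_eff(M):.2f}')
```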

    +

    How can we improve the resilience? From the equation (8.15) we see several options:

    1. Make the network more dense;

    2. If \(S\) is negative - decrease its absolute value;

    3. @@ -1914,28 +1914,28 @@

      13.3.3.
      -

      13.3.4. The remark on complex system aging#

      +

      8.3.4. The remark on complex system aging#

      From the previous model, we saw that there are many ways to reduce the resilience of the system to such an extent that the system (organism) can no longer cope with the next perturbation and a catastrophic bifurcation occurs. The hypothesis of aging as a complex system in which resilience decreases is not very optimistic, in the sense that it immediately follows that aging is mostly stochastic. However, the optimistic side of this hypothesis is that if we can find a systemic effect on the body that improves its resilience, then we may well be able to avoid aging. Thus, we could improve the dynamic model by supplementing it with the mechanisms of action of various metabolites on the gene regulatory network and get a playground for testing drugs that have a systemic effect on a body resilience. In the end, it’s possible that the complex system model is simply wrong and we need to look for something better.

      -

      13.4. Critical slowing down#

      +

      8.4. Critical slowing down#

      “The bifurcation theory predicts that approaching a critical transition boundary (aka tipping point) is accompanied by critical slowing down of the system and an increase in the variance and temporal autocorrelations of the system state variable” - do you remember this citation from the previous chapter? In this section we discuss the explicit mathematical background of the phenomenon. We also introduce one additional instrument which is widely used in dynamical modelling, namely stochasticity. In this section we will mostly rely on the theoretical results from the paper [Scheffer et al., 2009].

      -

      13.4.1. Introduce a stochastic model#

      +

      8.4.1. Introduce a stochastic model#

      In contrast to the previous section, we return to one-dimensional models; in addition, we have understood that some multidimensional models can be reduced to one-dimensional ones, allowing us to analyze system resilience in a more compact form. We start with a new model which is a combination of what we learned above:

      -(13.16)#\[ +(8.16)#\[ \dot{x} = x(1 - \frac{x}{K}) - c \frac{x^2}{1 + x^2} + \sigma \epsilon \]

      We have seen all the parts of this equation. The first quadratic term describes the growth of something with a carrying capacity \(K\). The second term is similar to what we have seen in the previous section (especially if you did the exercise); it describes some saturating removal of something from the system with rate \(c\). For example, this model can describe the number of senescent cells in an organism: they have a limited capacity for growth and are removed by the immune system. The last term represents the noise in the growth conditions caused by a plethora of random factors which we do not know explicitly, so we simply model them by random Gaussian noise \(\epsilon \sim \mathbf{N}(0, 1)\) with mean \(0\) and standard deviation \(1\). After multiplication with \(\sigma\) the standard deviation is scaled. The presented stochastic model is simple and does not require a lot of specific knowledge for modelling, as we will see below.

      -

      13.4.2. Slowing down intuition#

      +

      8.4.2. Slowing down intuition#

      Critical slowing down arises in dynamical systems which undergo changes in their parameters. If some parameter changes in such a way that it moves the system toward the bifurcation point (critical point), several interesting effects in the actual dynamics can be observed. There are three such effects:

      If the system is driven by stochastic force, the critical slowing down manifests itself in:

        @@ -1943,7 +1943,7 @@

        13.4.2.

        Increase in autocorrelation of system variable;

      1. Increase in cross-correlation between different system variables;

      -

      We discuss the first and the second hallmarks of critical slowing down that can be easily observed in one-dimensional system. We will change the parameter \(c\) from the system above and move, therefore, system to the bifurcation. Below we write some code for simulating the equation (13.16) and the potential function for the deterministic part of the equation.

      +

      We discuss the first and the second hallmarks of critical slowing down, which can be easily observed in a one-dimensional system. We will change the parameter \(c\) of the system above and thereby move the system toward the bifurcation. Below we write some code for simulating equation (8.16) and the potential function for the deterministic part of the equation.
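      The simulation cell itself is not shown in this hunk; a minimal Euler-Maruyama sketch of equation (8.16) could look like the following (the parameter values, step size and horizon are illustrative assumptions):

```python
import numpy as np

def simulate(c, K=10.0, sigma=0.1, x0=8.0, dt=0.01, n_steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        drift = x[t-1] * (1 - x[t-1] / K) - c * x[t-1]**2 / (1 + x[t-1]**2)
        x[t] = x[t-1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

for c in (1.0, 2.0, 2.5):                 # larger c -> closer to the tipping point
    tail = simulate(c)[25_000:]           # discard the transient
    ac1 = np.corrcoef(tail[:-1], tail[1:])[0, 1]
    print(f'c={c}: mean={tail.mean():.2f}, variance={tail.var():.4f}, lag-1 autocorrelation={ac1:.4f}')
```

      With these (assumed) parameters the trajectory stays on the upper branch, and both the variance and the lag-1 autocorrelation are expected to creep up as \(c\) approaches the fold bifurcation - exactly the early-warning signals discussed above.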

      -

      13.4.3. Construct a resilience indicator from data#

      +

      8.4.3. Construct a resilience indicator from data#

      We have studied quite useful tools for data analysis. Can we use them to create an organism resilience indicator? A good resilience indicator should satisfy all three properties mentioned above - but how can a one-dimensional variable satisfy the third one? There is a way to do that: construct a new latent variable from the initial organism state variables. Moreover, a good resilience indicator is expected to be negatively correlated with age. How do we deal with all of that? Let’s do it step by step.

      First, let’s get real data from a mouse complete blood count (CBC) analysis. We will use a prepared dataset from []. As the authors mention, the samples were obtained from NIH Swiss male and female mice from Charles River Laboratories (Wilmington, MA) - please refer to the original paper for the details. We will use only one cohort of male mice to avoid batch effects.
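      The loading and preprocessing code is not shown in this hunk, and the file and column names below are hypothetical placeholders. As one simple (assumed) way to obtain a latent indicator, the CBC features can be z-scored and collapsed into their first principal component, which is then checked for a negative correlation with age; the notebook's actual construction may differ.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('mouse_cbc_males.csv')             # hypothetical file name
X = df.drop(columns=['age', 'sex'])                 # hypothetical column names
z = StandardScaler().fit_transform(X)

indicator = PCA(n_components=1).fit_transform(z)[:, 0]
if np.corrcoef(indicator, df['age'])[0, 1] > 0:     # orient the axis so the indicator falls with age
    indicator = -indicator
print('correlation with age:', np.corrcoef(indicator, df['age'])[0, 1])
```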

      @@ -2311,21 +2311,21 @@

      13.4.3.

      -

      13.5. Conclusion#

      +

      8.5. Conclusion#

      This chapter has been entirely devoted to developing an intuition about how aging can be perceived through the lens of the criticality and resilience of systems.

      We have studied the very foundations of the theory of dynamical systems, got acquainted with their one-dimensional and multidimensional incarnations. We saw how multidimensional ones can be reduced to one-dimensional ones, the stability of which is much easier to study.

      In addition, we have learned to better understand the language of complex systems by linking its most important concepts (criticality, bifurcation, phase transition, network, etc.) with mathematical counterparts. It is good to understand this new language, but it is important to use it. If possible, try to model biological processes explicitly, instead of just saying “associated”, “correlates”, “induces”.

      Be quantitative, numerical relationships between variables deepen your understanding of the process. However, sometimes numbers can be confusing - don’t be a slave to numbers, think twice!

      -

      13.6. Learn More#

      +

      8.6. Learn More#

      Principles of Biological Design (Lectures)
      Dynamical Systems with Applications using Python
      An Introduction to Systems Biology. Design Principles of Biological Circuits
      Modelling Life

      -

      13.7. Credits#

      +

      8.7. Credits#

      This notebook was prepared by Dmitrii Kriukov.

      @@ -2361,7 +2361,7 @@

      13.7.

      previous

      -

      12. Complex systems approach

      +

      7. Complex systems approach

      diff --git a/genindex.html b/genindex.html index 1ab1695..5410401 100644 --- a/genindex.html +++ b/genindex.html @@ -151,7 +151,7 @@

      Computational aging book

    4. - 5. Survival analysis. Advanced methods. + 5. Advanced survival analysis
    5. @@ -163,7 +163,7 @@

      Computational aging book

      @@ -165,7 +165,7 @@

      Computational aging book

    diff --git a/ml/aging_clocks.html b/ml/aging_clocks.html index 36f6c70..2bfaa92 100644 --- a/ml/aging_clocks.html +++ b/ml/aging_clocks.html @@ -6,7 +6,7 @@ - 11. Aging Clocks — Computational aging book + 6. Aging Clocks — Computational aging book @@ -55,7 +55,7 @@ - + @@ -156,7 +156,7 @@

    Computational aging book

  • - 5. Survival analysis. Advanced methods. + 5. Advanced survival analysis
  • @@ -168,7 +168,7 @@

    Computational aging book