Commit 21138a6

Authored Jan 16, 2023
Fixed some small typos (#233)
* Fix small typo
* Fix typo
* Fix more typos
* Fix typos in xai
* Fix typos in attention
* Run pre-commit
1 parent 1f95ef0 commit 21138a6

16 files changed: +32, -27 lines

‎applied/QM9.ipynb

+1 line

@@ -430,6 +430,7 @@
 "node_feature_len = 16\n",
 "msg_feature_len = 16\n",
 "\n",
+"\n",
 "# make our weights\n",
 "def init_weights(g, n, m):\n",
 " we = np.random.normal(size=(n, m), scale=1e-1)\n",

‎dl/Equivariant.ipynb

+1 line

@@ -971,6 +971,7 @@
 "\n",
 "def lift(f):\n",
 " \"\"\"lift f into group\"\"\"\n",
+"\n",
 " # create new function from original\n",
 " # that is f(gx_0)\n",
 " @np_cache(maxsize=W**3)\n",

‎dl/Hyperparameter_tuning.ipynb

-1 line

@@ -376,7 +376,6 @@
 "def train_model(\n",
 " model, lr=1e-3, Reduced_LR=False, Early_stop=False, batch_size=32, epochs=20\n",
 "):\n",
-"\n",
 " tf.keras.backend.clear_session()\n",
 " callbacks = []\n",
 "\n",

‎dl/VAE.ipynb

+1 line

@@ -997,6 +997,7 @@
 "source": [
 "import numpy as np\n",
 "\n",
+"\n",
 "###---------Transformation Functions----###\n",
 "def center_com(paths):\n",
 " \"\"\"Align paths to COM at each frame\"\"\"\n",

‎dl/attention.ipynb

+2 -2 lines

@@ -6,7 +6,7 @@
 "source": [
 "# Attention Layers\n",
 "\n",
-"Attention is a concept in machine learning and AI that goes back many years, especially in computer vision{cite}`BALUJA1997329`. Like the word \"neural network\", attention was inspired by the idea of attention in how human brains deal with the massive amount of visual and audio input{cite}`treisman1980feature`. **Attention layers** are deep learning layers that evoke the idea of attention. You can read more about attention in deep learning in Luong et al. {cite}`luong2015effective` and get a practical [overview here](http://d2l.ai/chapter_attention-mechanisms/index.html). Attention layers have been empirically shown to be so effective in modeling sequences, like language, that they have become indispensible{cite}`vaswani2017attention`. The most common place you'll see attention layers is in [**transformer**](http://d2l.ai/chapter_attention-mechanisms/transformer.html) neural networks that model sequences. We'll also sometimes see attention in graph neural networks.\n",
+"Attention is a concept in machine learning and AI that goes back many years, especially in computer vision{cite}`BALUJA1997329`. Like the word \"neural network\", attention was inspired by the idea of attention in how human brains deal with the massive amount of visual and audio input{cite}`treisman1980feature`. **Attention layers** are deep learning layers that evoke the idea of attention. You can read more about attention in deep learning in Luong et al. {cite}`luong2015effective` and get a practical [overview here](http://d2l.ai/chapter_attention-mechanisms/index.html). Attention layers have been empirically shown to be so effective in modeling sequences, like language, that they have become indispensable{cite}`vaswani2017attention`. The most common place you'll see attention layers is in [**transformer**](http://d2l.ai/chapter_attention-mechanisms/transformer.html) neural networks that model sequences. We'll also sometimes see attention in graph neural networks.\n",
 "\n",
 "\n",
 "```{margin}\n",

@@ -89,7 +89,7 @@
 "source": [
 "## Attention Mechanism Equation\n",
 "\n",
-"The attention mechanism equation uses query and keys arguments only. It outputs a tensor one rank less than the keys, giving a scalar for each key corresponding to the attention the query should have for the key. This attention vector should be normalized. The most common attention mechanism a dot product and softmax:\n",
+"The attention mechanism equation uses query and keys arguments only. It outputs a tensor one rank less than the keys, giving a scalar for each key corresponding to the attention the query should have for the key. This attention vector should be normalized. The most common attention mechanism is a dot product and softmax:\n",
 "\n",
 "\\begin{equation}\n",
 "\\vec{b} = \\mathrm{softmax}\\left(\\vec{q}\\cdot \\mathbf{K}\\right) = \\mathrm{softmax}\\left(\\sum_j q_j k_{ij}\\right)\n",

‎dl/data.ipynb

+1 -1 lines

@@ -749,7 +749,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"You can see how points far away on the chain from 0 have much more variance in the point 0 align, whereas the COM alignment looks better spread. Remember, to apply these methods you must do them to your both your training data and any prediction points. Thus, they should be viewed as part of your neural network. We can now check that rotating has no effect on these. The plots below have the trajectory rotated by 1 radian and you can see that both alignment methods have no change (the lines are overlapping)."
+"You can see how points far away on the chain from 0 have much more variance in the point 0 align, whereas the COM alignment looks better spread. Remember, to apply these methods you must do them to both your training data and any prediction points. Thus, they should be viewed as part of your neural network. We can now check that rotating has no effect on these. The plots below have the trajectory rotated by 1 radian and you can see that both alignment methods have no change (the lines are overlapping)."
 ]
 },
 {
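
The center_com function stubbed out in the dl/VAE.ipynb hunk above is not shown in full here, but a minimal sketch of the COM alignment this paragraph describes might look like the following, assuming paths is a NumPy array of shape (frames, atoms, 3) and equal atomic masses:

```python
import numpy as np


def center_com(paths):
    """Align paths to COM at each frame (equal masses assumed)."""
    com = paths.mean(axis=1, keepdims=True)  # (frames, 1, 3) center of mass
    return paths - com


traj = np.random.normal(size=(100, 12, 3))
aligned = center_com(traj)
# the per-frame center of mass now sits at the origin
print(np.abs(aligned.mean(axis=1)).max())
```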

‎dl/flows.ipynb

+2 lines

@@ -384,6 +384,8 @@
 "# use input (feature) and output (log prob)\n",
 "# to make model\n",
 "model = tf.keras.Model(x, log_prob)\n",
+"\n",
+"\n",
 "# define a loss\n",
 "def neg_loglik(yhat, log_prob):\n",
 " # losses always take in label, prediction\n",

‎dl/gnn.ipynb

+4 -3 lines

@@ -1053,7 +1053,7 @@
 "A common piece of wisdom is if you want to solve a real problem with deep learning, you should read the most recent popular paper in an area and use the baseline they compare against instead of their proposed model. The reason is that a baseline model usually must be easy, fast, and well-tested, which is generally more important than being the most accurate\n",
 "```\n",
 "\n",
-"SchNet is for atoms represented as xyz coordinates (points) -- not as a molecular graph. All our previous examples used the underlying molecular graph as the input. In SchNet we will convert our xyz coodinates into a graph, so that we can apply a GNNN. SchNet was developed for predicting energies and forces from atom configurations without bond information. Thus, we need to first see how a set of atoms and their positions is converted into a graph. To get the nodes, we do a similar process as above and the atomic number is passed through an embedding layer, which is just means we assign a trainable vector to each atomic number (See {doc}`layers` for a review of embeddings). \n",
+"SchNet is for atoms represented as xyz coordinates (points) -- not as a molecular graph. All our previous examples used the underlying molecular graph as the input. In SchNet we will convert our xyz coodinates into a graph, so that we can apply a GNN. SchNet was developed for predicting energies and forces from atom configurations without bond information. Thus, we need to first see how a set of atoms and their positions is converted into a graph. To get the nodes, we do a similar process as above and the atomic number is passed through an embedding layer, which just means we assign a trainable vector to each atomic number (See {doc}`layers` for a review of embeddings). \n",
 "\n",
 "Getting the adjacency matrix is simple too: we just make every atom be connected to every atom. It might seem confusing what the point of using a GNN is, if we're just connecting everything. *It is because GNNs are permutation equivariant.* If we tried to do learning on the atoms as xyz coordinates, we would have weights depending on the ordering of atoms and probably fail to handle different numbers of atoms.\n",
 "\n",

@@ -1220,6 +1220,7 @@
 "\n",
 "label_str = list(set([k.split(\"-\")[0] for k in trajs]))\n",
 "\n",
+"\n",
 "# now build dataset\n",
 "def generator():\n",
 " for k, v in trajs.items():\n",

@@ -1553,7 +1554,7 @@
 "\n",
 "---\n",
 "\n",
-"Let's give now use the model on some data."
+"Let's now use the model on some data."
 ]
 },
 {

@@ -1680,7 +1681,7 @@
 "\n",
 "### Common Architecture Motifs and Comparisons\n",
 "\n",
-"We've now seen message passing layer GNNs, GCNs, GGNs, and the generalized Battaglia equations. You'll find common motifs in the architectures, like gating, {doc}`attention`, and pooling strategies. For example, Gated GNNS (GGNs) can be combined with attention pooling to create Gated Attention GNNs (GAANs){cite}`zhang2018gaan`. GraphSAGE is a similar to a GCN but it samples when pooling, making the neighbor-updates of fixed dimension{cite}`hamilton2017inductive`. So you'll see the suffix \"sage\" when you sample over neighbors while pooling. These can all be represented in the Battaglia equations, but you should be aware of these names. \n",
+"We've now seen message passing layer GNNs, GCNs, GGNs, and the generalized Battaglia equations. You'll find common motifs in the architectures, like gating, {doc}`attention`, and pooling strategies. For example, Gated GNNS (GGNs) can be combined with attention pooling to create Gated Attention GNNs (GAANs){cite}`zhang2018gaan`. GraphSAGE is similar to a GCN but it samples when pooling, making the neighbor-updates of fixed dimension{cite}`hamilton2017inductive`. So you'll see the suffix \"sage\" when you sample over neighbors while pooling. These can all be represented in the Battaglia equations, but you should be aware of these names. \n",
 "\n",
 "The enormous variety of architectures has led to work on identifying the \"best\" or most general GNN architecture {cite}`dwivedi2020benchmarking,errica2019fair,shchur2018pitfalls`. Unfortunately, the question of which GNN architecture is best is as difficult as \"what benchmark problems are best?\" Thus there are no agreed-upon conclusions on the best architecture. However, those papers are great resources on training, hyperparameters, and reasonable starting guesses and I highly recommend reading them before designing your own GNN. There has been some theoretical work to show that simple architectures, like GCNs, cannot distinguish between certain simple graphs {cite}`xu2018powerful`. How much this practically matters depends on your data. Ultimately, there is so much variety in hyperparameters, data equivariances, and training decisions that you should think carefully about how much the GNN architecture matters before exploring it with too much depth. "

‎dl/layers.ipynb

+1 -1 lines

@@ -346,7 +346,7 @@
 "\n",
 "#### Layer Normalization\n",
 "\n",
-"Batch normalization depends on there being a constant batch size. Some kinds of data, like text or a graphs, have different sizes and so the batch mean/variance can change significantly. **Layer normalization** avoids this problem by normalizing across the *features* (the non-batch axis/channel axis) instead of the batch. This has a similar effect of making the layer output features behave well-centered at 0 but without having highly variable means/variances because of batch to batch variation. You'll see these in graph neural networks and recurrent neural networks, with both take variable sized inputs. \n",
+"Batch normalization depends on there being a constant batch size. Some kinds of data, like text or graphs, have different sizes and so the batch mean/variance can change significantly. **Layer normalization** avoids this problem by normalizing across the *features* (the non-batch axis/channel axis) instead of the batch. This has a similar effect of making the layer output features behave well-centered at 0 but without having highly variable means/variances because of batch to batch variation. You'll see these in graph neural networks and recurrent neural networks, with both take variable sized inputs. \n",
 "\n",
 "### Dropout\n",
 "\n",
