Commit 09533eb

Fix typos
1 parent 27e164c commit 09533eb

File tree

15 files changed: +27 -28 lines changed


02_TensorFlow_Way/readme.md

Lines changed: 2 additions & 2 deletions

@@ -8,9 +8,9 @@ After we have established the basic objects and methods in TensorFlow, we now wa
   * We show how to create multiple operations on a computational graph and how to visualize them using Tensorboard.
 3. [Working with Multiple Layers](03_Working_with_Multiple_Layers)
   * Here we extend the usage of the computational graph to create multiple layers and show how they appear in Tensorboard.
-4. [Implmenting Loss Functions](04_Implementing_Loss_Functions)
+4. [Implementing Loss Functions](04_Implementing_Loss_Functions)
   * In order to train a model, we must be able to evaluate how well it is doing. This is given by loss functions. We plot various loss functions and talk about the benefits and limitations of some.
-5. [Implmenting Back Propagation](05_Implementing_Back_Propagation)
+5. [Implementing Back Propagation](05_Implementing_Back_Propagation)
   * Here we show how to use loss functions to iterate through data and back propagate errors for regression and classification.
 6. [Working with Stochastic and Batch Training](06_Working_with_Batch_and_Stochastic_Training)
   * TensorFlow makes it easy to use both batch and stochastic training. We show how to implement both and talk about the benefits and limitations of each.
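The last item in the hunk above contrasts batch and stochastic training. As a plain-Python sketch of the difference for a one-parameter model y = w * x with squared-error loss (the function names and toy data here are illustrative, not from the book's code):

```python
def sgd_step(w, x, y, lr=0.1):
    # Stochastic step: update from the gradient of a single example's
    # squared error (y - w*x)**2 with respect to w.
    grad = -2 * x * (y - w * x)
    return w - lr * grad

def batch_step(w, xs, ys, lr=0.1):
    # Batch step: average the per-example gradients before updating,
    # giving a smoother but more expensive step.
    grads = [-2 * x * (y - w * x) for x, y in zip(xs, ys)]
    return w - lr * sum(grads) / len(grads)

# Fitting y = 2x starting from w = 0:
print(sgd_step(0.0, 1.0, 2.0))                  # 0.4
print(batch_step(0.0, [1.0, 2.0], [2.0, 4.0]))  # 1.0
```

Stochastic updates are noisy but cheap; batch updates are stable but touch every example per step, which is the trade-off the chapter discusses.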

03_Linear_Regression/readme.md

Lines changed: 2 additions & 2 deletions

@@ -11,10 +11,10 @@ Here we show how to implement various linear regression techniques in TensorFlow
 4. [Understanding Loss Functions in Linear Regression](04_Loss_Functions_in_Linear_Regressions)
   * L2 vs L1 loss in linear regression. We talk about the benefits and limitations of both.
 5. [Implementing Deming Regression (Total Regression)](05_Implementing_Deming_Regression)
-  * Deming (total) regression implmented in TensorFlow by changing the loss function.
+  * Deming (total) regression implemented in TensorFlow by changing the loss function.
 6. [Implementing Lasso and Ridge Regression](06_Implementing_Lasso_and_Ridge_Regression)
   * Lasso and Ridge regression are ways of regularizing the coefficients. We implement both of these in TensorFlow via changing the loss functions.
 7. [Implementing Elastic Net Regression](07_Implementing_Elasticnet_Regression)
   * Elastic net is a regularization technique that combines the L2 and L1 loss for coefficients. We show how to implement this in TensorFlow.
 8. [Implementing Logistic Regression](08_Implementing_Logistic_Regression)
-  * We implment logistic regression by the use of an activation function in our computational graph.
+  * We implement logistic regression by the use of an activation function in our computational graph.
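Item 4 in the hunk above contrasts L2 and L1 loss. A quick plain-Python illustration of why they behave differently (toy values, not from the book's code):

```python
def l2_loss(y_true, y_pred):
    # Squared error: smooth everywhere, but outliers dominate the sum.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

def l1_loss(y_true, y_pred):
    # Absolute error: robust to outliers, non-differentiable at zero.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred))

y_true = [1.0, 2.0, 3.0]
y_pred = [1.0, 2.0, 13.0]  # one large outlier

print(l2_loss(y_true, y_pred))  # 100.0 -- the outlier's error is squared
print(l1_loss(y_true, y_pred))  # 10.0  -- the error grows only linearly
```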

04_Support_Vector_Machines/02_Working_with_Linear_SVMs/readme.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Working with Linear SVMs
 
-We introduce a linear SVM on a binary set, which will be a subset of the Iris data. We know for I. setosa, that petal width and sepal length are completely seperable. We will create a linear SVM to predict I. setosa based on two features: petal width and sepal length.
+We introduce a linear SVM on a binary set, which will be a subset of the Iris data. We know that for I. setosa, petal width and sepal length are completely separable. We will create a linear SVM to predict I. setosa based on two features: petal width and sepal length.
 
 It is worth noting that, due to the small data set and the randomness of separating into train/test sets, a few points may end up on the wrong side of the line. This is because they are in the test set, and this will result in a lower test accuracy.

04_Support_Vector_Machines/readme.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ This chapter shows how to implement various SVM methods with TensorFlow. We fir
   * The heart of SVMs is separating classes with a line. We tweak the algorithm slightly to perform SVM regression.
 4. [Working with Kernels in TensorFlow](04_Working_with_Kernels)
   * In order to extend SVMs into non-linear data, we explain and show how to implement different kernels in TensorFlow.
-5. [Implmenting Non-Linear SVMs](05_Implementing_Nonlinear_SVMs)
+5. [Implementing Non-Linear SVMs](05_Implementing_Nonlinear_SVMs)
   * We use the Gaussian kernel (RBF) to separate non-linear classes.
 6. [Implementing Multi-class SVMs](06_Implementing_Multiclass_SVMs)
   * SVMs are inherently binary predictors. We show how to extend them in a one-vs-all strategy in TensorFlow.
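Item 5 in the hunk above uses the Gaussian (RBF) kernel. A minimal sketch of the kernel itself (the function name and gamma value are illustrative, not from the book's code):

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel: similarity decays exponentially
    # with the squared Euclidean distance between the points.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))            # 1.0 (identical points)
print(round(rbf_kernel([0.0, 0.0], [1.0, 1.0]), 4))  # 0.1353, i.e. exp(-2)
```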

06_Neural_Networks/08_Learning_Tic_Tac_Toe/tic_tac_toe_moves.py

Lines changed: 1 addition & 1 deletion

@@ -83,7 +83,7 @@ def get_symmetry(board, response, transformation):
         new_board = board[::-1]
         return(new_board[6:9] + new_board[3:6] + new_board[0:3], new_response)
     else:
-        raise ValueError('Method not implmented.')
+        raise ValueError('Method not implemented.')
 
 
 ## Read in board move csv file
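The slicing in the `get_symmetry` branch above composes a full-list reversal (a 180-degree rotation of the flat board) with a bottom-to-top re-stacking of the rows; the net effect mirrors each row left to right. A small sketch of that transformation (the helper name and toy board are illustrative, not from the repository):

```python
def flip_rows(board):
    """Mirror a flat 3x3 board left to right.

    board[::-1] rotates the board 180 degrees; re-stacking the rows
    from the bottom up then flips it vertically, leaving a pure
    horizontal mirror of the original board.
    """
    rb = board[::-1]
    return rb[6:9] + rb[3:6] + rb[0:3]

# Cells numbered 0-8, read row by row:
# 0 1 2        2 1 0
# 3 4 5  --->  5 4 3
# 6 7 8        8 7 6
print(flip_rows(list(range(9))))  # [2, 1, 0, 5, 4, 3, 8, 7, 6]
```

Applying it twice returns the original board, as expected of a mirror.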

06_Neural_Networks/readme.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ Neural Networks are very important in machine learning and growing in popularity
   * We implement an operational gate with one operation. Then we show how to extend this to multiple nested operations.
 3. [Working with Gates and Activation Functions](03_Working_with_Activation_Functions)
   * Now we introduce activation functions on the gates. We show how different activation functions operate.
-4. [Implmenting a One Layer Neural Network](04_Single_Hidden_Layer_Network)
+4. [Implementing a One Layer Neural Network](04_Single_Hidden_Layer_Network)
   * We have all the pieces to start implementing our first neural network. We do so here with regression on the Iris data set.
 5. [Implementing Different Layers](05_Implementing_Different_Layers)
   * This section introduces the convolution layer and the max-pool layer. We show how to chain these together in a 1D and 2D example with fully connected layers as well.
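Item 5 in the hunk above introduces the max-pool layer. A minimal 1-D sketch of the pooling operation itself (plain Python; the function name and toy input are illustrative, not from the book's code):

```python
def max_pool_1d(xs, width, stride):
    # Slide a window of `width` over xs, stepping by `stride`,
    # and keep the maximum of each window.
    return [max(xs[i:i + width]) for i in range(0, len(xs) - width + 1, stride)]

print(max_pool_1d([1, 3, 2, 5, 4, 6], width=2, stride=2))  # [3, 5, 6]
```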

07_Natural_Language_Processing/04_Working_With_Skip_Gram_Embeddings/04_working_with_skipgram.py

Lines changed: 1 addition & 1 deletion

@@ -201,7 +201,7 @@ def generate_batch_data(sentences, batch_size, window_size, method='skip_gram'):
         # Make it into a big list of tuples (target word, surrounding word)
         tuple_data = [(x_, y) for x,y in batch_and_labels for x_ in x]
     else:
-        raise ValueError('Method {} not implmented yet.'.format(method))
+        raise ValueError('Method {} not implemented yet.'.format(method))
 
     # extract batch and labels
     batch, labels = [list(x) for x in zip(*tuple_data)]
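The list comprehension in the hunk above flattens each (window, target) pair into one tuple per surrounding word, and the final line unzips the tuples into parallel lists. A small self-contained sketch (the toy sentence fragments are illustrative, not from the book's data):

```python
# Each entry pairs a window of surrounding words with its center word.
batch_and_labels = [(['the', 'brown'], 'quick'),
                    (['quick', 'fox'], 'brown')]

# Flatten to one (surrounding word, center word) tuple per pair.
tuple_data = [(x_, y) for x, y in batch_and_labels for x_ in x]
print(tuple_data)
# [('the', 'quick'), ('brown', 'quick'), ('quick', 'brown'), ('fox', 'brown')]

# Unzip into parallel batch / label lists, as the function does next.
batch, labels = [list(x) for x in zip(*tuple_data)]
print(batch)   # ['the', 'brown', 'quick', 'fox']
print(labels)  # ['quick', 'quick', 'brown', 'brown']
```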

07_Natural_Language_Processing/05_Working_With_CBOW_Embeddings/text_helpers.py

Lines changed: 1 addition & 1 deletion

@@ -102,7 +102,7 @@ def generate_batch_data(sentences, batch_size, window_size, method='skip_gram'):
         # Add document index to batch!! Remember that we must extract the last index in batch for the doc-index
         batch = [x + [rand_sentence_ix] for x in batch]
     else:
-        raise ValueError('Method {} not implmented yet.'.format(method))
+        raise ValueError('Method {} not implemented yet.'.format(method))
 
     # extract batch and labels
     batch_data.extend(batch[:batch_size])
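The doc2vec branch in the hunk above appends the sentence's index to every window so the model can learn a document embedding alongside the word embeddings. A toy sketch of that line (the word indices and sentence index are illustrative):

```python
# Each window of word indices gets the sentence (document) index appended
# as its last element, so downstream code can split it back out.
batch = [[12, 45, 7], [45, 7, 3]]
rand_sentence_ix = 5

batch = [x + [rand_sentence_ix] for x in batch]
print(batch)  # [[12, 45, 7, 5], [45, 7, 3, 5]]
```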

07_Natural_Language_Processing/06_Using_Word2Vec_Embeddings/text_helpers.py

Lines changed: 1 addition & 1 deletion

@@ -102,7 +102,7 @@ def generate_batch_data(sentences, batch_size, window_size, method='skip_gram'):
         # Add document index to batch!! Remember that we must extract the last index in batch for the doc-index
         batch = [x + [rand_sentence_ix] for x in batch]
     else:
-        raise ValueError('Method {} not implmented yet.'.format(method))
+        raise ValueError('Method {} not implemented yet.'.format(method))
 
     # extract batch and labels
     batch_data.extend(batch[:batch_size])

07_Natural_Language_Processing/07_Sentiment_Analysis_With_Doc2Vec/text_helpers.py

Lines changed: 1 addition & 1 deletion

@@ -102,7 +102,7 @@ def generate_batch_data(sentences, batch_size, window_size, method='skip_gram'):
         # Add document index to batch!! Remember that we must extract the last index in batch for the doc-index
         batch = [x + [rand_sentence_ix] for x in batch]
     else:
-        raise ValueError('Method {} not implmented yet.'.format(method))
+        raise ValueError('Method {} not implemented yet.'.format(method))
 
     # extract batch and labels
     batch_data.extend(batch[:batch_size])

0 commit comments