
Commit 90c6308

brettkoonce authored and rmlarsen committed
minor spelling tweaks for lite docs (tensorflow#16275)
1 parent f20f28b commit 90c6308

File tree

2 files changed: +3 −3 lines changed


tensorflow/contrib/lite/models/testdata/g3doc/README.md

Lines changed: 2 additions & 2 deletions

@@ -53,7 +53,7 @@ with the corresponding parameters as shown in the figure.
 ### Automatic Speech Recognizer (ASR) Acoustic Model (AM)
 
 The acoustic model for automatic speech recognition is the neural network model
-for matching phonemes to the input autio features. It generates posterior
+for matching phonemes to the input audio features. It generates posterior
 probabilities of phonemes from speech frontend features (log-mel filterbanks).
 It has an input size of 320 (float), an output size of 42 (float), five LSTM
 layers and one fully connected layers with a Softmax activation function, with
@@ -68,7 +68,7 @@ for predicting the probability of a word given previous words in a sentence.
 It generates posterior probabilities of the next word based from a sequence of
 words. The words are encoded as indices in a fixed size dictionary.
 The model has two inputs both of size one (integer): the current word index and
-next word index, an output size of one (float): the log probability. It consits
+next word index, an output size of one (float): the log probability. It consists
 of three embedding layer, three LSTM layers, followed by a multiplication, a
 fully connected layers and an addition.
 The corresponding parameters as shown in the figure.

tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md

Lines changed: 1 addition & 1 deletion

@@ -229,7 +229,7 @@ additional information about the multiple input arrays:
 well-formed quantized representation of these graphs. Such graphs should be
 fixed, but as a temporary work-around, setting this
 reorder_across_fake_quant flag allows the converter to perform necessary
-graph transformaitons on them, at the cost of no longer faithfully matching
+graph transformations on them, at the cost of no longer faithfully matching
 inference and training arithmetic.
 
 ### Logging flags
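For context on the flag this hunk documents, a hedged sketch of a toco invocation that sets `reorder_across_fake_quant` follows. All file paths, array names, and quantization parameters below are placeholder assumptions for illustration, not values from this commit; only the flag name itself comes from the diffed documentation.

```shell
# Sketch only: convert a quantized GraphDef with toco, allowing it to
# reorder graph transformations across FakeQuant nodes (placeholder
# paths and array names; adjust to your model).
toco \
  --input_file=model.pb \
  --output_file=model.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --input_arrays=input \
  --output_arrays=output \
  --mean_values=128 \
  --std_values=127 \
  --reorder_across_fake_quant
```

As the diffed text notes, this is a temporary work-around for graphs whose FakeQuant placement prevents a well-formed quantized representation; it can make converted inference arithmetic no longer faithfully match training.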

0 commit comments