Commit 4f2cfd1

Fix documentation typos and list rendering (#6066)
* Fix list being rendered incorrectly in webdocs
  I assume this extra blank line will fix the list not being correctly formatted on https://unity-technologies.github.io/ml-agents/#releases-documentation
* Fix typos in docs
* Fix more mis-rendered lists
  Add a blank line before bulleted lists in markdown files to avoid them being rendered as in-paragraph sentences that all start with hyphens.
* Fix typos in python comments used to generate docs
1 parent 1bee58f commit 4f2cfd1

20 files changed, +37 -33 lines changed

docs/Learning-Environment-Design-Agents.md

Lines changed: 1 addition & 0 deletions
@@ -620,6 +620,7 @@ the order of the entities, so there is no need to properly "order" the
 entities before feeding them into the `BufferSensor`.

 The `BufferSensorComponent` Editor inspector has two arguments:
+
 - `Observation Size` : This is how many floats each entities will be
 represented with. This number is fixed and all entities must
 have the same representation. For example, if the entities you want to

docs/Learning-Environment-Examples.md

Lines changed: 1 addition & 1 deletion
@@ -231,7 +231,7 @@ you would like to contribute environments, please see our
 objects around agent's forward direction (40 by 40 with 6 different categories).
 - Actions:
 - 3 continuous actions correspond to Forward Motion, Side Motion and Rotation
-- 1 discrete acion branch for Laser with 2 possible actions corresponding to
+- 1 discrete action branch for Laser with 2 possible actions corresponding to
 Shoot Laser or No Action
 - Visual Observations (Optional): First-person camera per-agent, plus one vector
 flag representing the frozen state of the agent. This scene uses a combination

docs/ML-Agents-Overview.md

Lines changed: 3 additions & 2 deletions
@@ -434,6 +434,7 @@ Similarly to Curiosity, Random Network Distillation (RND) is useful in sparse or
 reward environments as it helps the Agent explore. The RND Module is implemented following
 the paper [Exploration by Random Network Distillation](https://arxiv.org/abs/1810.12894).
 RND uses two networks:
+
 - The first is a network with fixed random weights that takes observations as inputs and
 generates an encoding
 - The second is a network with similar architecture that is trained to predict the
@@ -491,9 +492,9 @@ to the expert, the agent is incentivized to remain alive for as long as possible
 This can directly conflict with goal-oriented tasks like our PushBlock or Pyramids
 example environments where an agent must reach a goal state thus ending the
 episode as quickly as possible. In these cases, we strongly recommend that you
-use a low strength GAIL reward signal and a sparse extrinisic signal when
+use a low strength GAIL reward signal and a sparse extrinsic signal when
 the agent achieves the task. This way, the GAIL reward signal will guide the
-agent until it discovers the extrnisic signal and will not overpower it. If the
+agent until it discovers the extrinsic signal and will not overpower it. If the
 agent appears to be ignoring the extrinsic reward signal, you should reduce
 the strength of GAIL.
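
The GAIL wording fixed above is easier to follow with a concrete reward-signal layout. The sketch below is illustrative only and not part of this commit: it mirrors the YAML `reward_signals` structure as a Python dict, and the strength values and demo path are assumptions chosen to match the "low strength GAIL plus sparse extrinsic" advice.

```python
# Illustrative sketch (not from this commit): a reward_signals block that keeps
# GAIL weak so it guides the agent without overpowering the sparse extrinsic reward.
# The numeric values and the demo path are assumptions.
reward_signals = {
    "extrinsic": {
        "strength": 1.0,   # sparse task reward, e.g. granted on reaching the goal
        "gamma": 0.99,
    },
    "gail": {
        "strength": 0.01,  # deliberately low, per the guidance above
        "demo_path": "Demos/ExpertPyramid.demo",  # hypothetical path to expert demos
    },
}
```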

docs/Migrating.md

Lines changed: 7 additions & 7 deletions
@@ -21,7 +21,7 @@ from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper

 ## Migrating the package to version 2.x
 - The official version of Unity ML-Agents supports is now 2022.3 LTS. If you run
-into issues, please consider deleting your project's Library folder and reponening your
+into issues, please consider deleting your project's Library folder and reopening your
 project.
 - If you used any of the APIs that were deprecated before version 2.0, you need to use their replacement. These
 deprecated APIs have been removed. See the migration steps bellow for specific API replacements.
@@ -130,7 +130,7 @@ values from `GetMaxBoardSize()`.

 ### GridSensor changes
 The sensor configuration has changed:
-* The sensor implementation has been refactored and exsisting GridSensor created from extension package
+* The sensor implementation has been refactored and existing GridSensor created from extension package
 will not work in newer version. Some errors might show up when loading the old sensor in the scene.
 You'll need to remove the old sensor and create a new GridSensor.
 * These parameters names have changed but still refer to the same concept in the sensor: `GridNumSide` -> `GridSize`,
@@ -151,8 +151,8 @@ data type changed from `float` to `int`. The index of first detectable tag will
 * The observation data should be written to the input `dataBuffer` instead of creating and returning a new array.
 * Removed the constraint of all data required to be normalized. You should specify it in `IsDataNormalized()`.
 Sensors with non-normalized data cannot use PNG compression type.
-* The sensor will not further encode the data recieved from `GetObjectData()` anymore. The values
-recieved from `GetObjectData()` will be the observation sent to the trainer.
+* The sensor will not further encode the data received from `GetObjectData()` anymore. The values
+received from `GetObjectData()` will be the observation sent to the trainer.

 ### LSTM models from previous releases no longer supported
 The way that Sentis processes LSTM (recurrent neural networks) has changed. As a result, models
@@ -169,7 +169,7 @@ the model using the python trainer from this release.
 - `VectorSensor.AddObservation(IEnumerable<float>)` is deprecated. Use `VectorSensor.AddObservation(IList<float>)`
 instead.
 - `ObservationWriter.AddRange()` is deprecated. Use `ObservationWriter.AddList()` instead.
-- `ActuatorComponent.CreateAcuator()` is deprecated. Please use override `ActuatorComponent.CreateActuators`
+- `ActuatorComponent.CreateActuator()` is deprecated. Please use override `ActuatorComponent.CreateActuators`
 instead. Since `ActuatorComponent.CreateActuator()` is abstract, you will still need to override it in your
 class until it is removed. It is only ever called if you don't override `ActuatorComponent.CreateActuators`.
 You can suppress the warnings by surrounding the method with the following pragma:
@@ -376,7 +376,7 @@ vector observations to be used simultaneously.
 method names will be removed in a later release:
 - `InitializeAgent()` was renamed to `Initialize()`
 - `AgentAction()` was renamed to `OnActionReceived()`
-- `AgentReset()` was renamed to `OnEpsiodeBegin()`
+- `AgentReset()` was renamed to `OnEpisodeBegin()`
 - `Done()` was renamed to `EndEpisode()`
 - `GiveModel()` was renamed to `SetModel()`
 - The `IFloatProperties` interface has been removed.
@@ -532,7 +532,7 @@ vector observations to be used simultaneously.
 depended on [PEP420](https://www.python.org/dev/peps/pep-0420/), which caused
 problems with some of our tooling such as mypy and pylint.
 - The official version of Unity ML-Agents supports is now 2022.3 LTS. If you run
-into issues, please consider deleting your library folder and reponening your
+into issues, please consider deleting your library folder and reopening your
 projects. You will need to install the Sentis package into your project in
 order to ML-Agents to compile correctly.

docs/Package-Settings.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ You can find them at `Edit` > `Project Settings...` > `ML-Agents`. It lists out
 ## Create Custom Settings
 In order to to use your own settings for your project, you'll need to create a settings asset.

-You can do this by clicking the `Create Settings Asset` buttom or clicking the gear on the top right and select `New Settings Asset...`.
+You can do this by clicking the `Create Settings Asset` button or clicking the gear on the top right and select `New Settings Asset...`.
 The asset file can be placed anywhere in the `Asset/` folder in your project.
 After Creating the settings asset, you'll be able to modify the settings for your project and your settings will be saved in the asset.

@@ -21,7 +21,7 @@ You can create multiple settings assets in one project.

 By clicking the gear on the top right you'll see all available settings listed in the drop-down menu to choose from.

-This allows you to create different settings for different scenatios. For example, you can create two
+This allows you to create different settings for different scenarios. For example, you can create two
 separate settings for training and inference, and specify which one you want to use according to what you're currently running.

 ![Multiple Settings](images/multiple-settings.png)

docs/Profiling-Python.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Profiling in Python

-As part of the ML-Agents Tookit, we provide a lightweight profiling system, in
+As part of the ML-Agents Toolkit, we provide a lightweight profiling system, in
 order to identity hotspots in the training process and help spot regressions
 from changes.

docs/Python-Custom-Trainer-Plugin.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ capabilities. we introduce an extensible plugin system to define new trainers ba
 in `Ml-agents` Package. This will allow rerouting `mlagents-learn` CLI to custom trainers and extending the config files
 with hyper-parameters specific to your new trainers. We will expose a high-level extensible trainer (both on-policy,
 and off-policy trainers) optimizer and hyperparameter classes with documentation for the use of this plugin. For more
-infromation on how python plugin system works see [Plugin interfaces](Training-Plugins.md).
+information on how python plugin system works see [Plugin interfaces](Training-Plugins.md).
 ## Overview
 Model-free RL algorithms generally fall into two broad categories: on-policy and off-policy. On-policy algorithms perform updates based on data gathered from the current policy. Off-policy algorithms learn a Q function from a buffer of previous data, then use this Q function to make decisions. Off-policy algorithms have three key benefits in the context of ML-Agents: They tend to use fewer samples than on-policy as they can pull and re-use data from the buffer many times. They allow player demonstrations to be inserted in-line with RL data into the buffer, enabling new ways of doing imitation learning by streaming player data.

docs/Python-Gym-API.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Unity environment via Python.

 ## Installation

-The gym wrapper is part of the `mlgents_envs` package. Please refer to the
+The gym wrapper is part of the `mlagents_envs` package. Please refer to the
 [mlagents_envs installation instructions](ML-Agents-Envs-README.md).

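Because the corrected package name above (`mlagents_envs`) is the one that ships the gym wrapper, a short usage sketch may help; the build path is a placeholder and the keyword argument shown is an assumption about a typical setup, not something this commit changes.

```python
# Minimal sketch: wrapping a Unity build as a gym environment via mlagents_envs.
# "path/to/UnityBuild" is a placeholder; omit the argument to attach to the Editor instead.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper

unity_env = UnityEnvironment("path/to/UnityBuild")
env = UnityToGymWrapper(unity_env, uint8_visual=True)  # expose visual obs as uint8 arrays

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```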

docs/Python-LLAPI-Documentation.md

Lines changed: 2 additions & 2 deletions
@@ -678,7 +678,7 @@ of downloading the Unity Editor.
 The UnityEnvRegistry implements a Map, to access an entry of the Registry, use:
 ```python
 registry = UnityEnvRegistry()
-entry = registry[<environment_identifyier>]
+entry = registry[<environment_identifier>]
 ```
 An entry has the following properties :
 * `identifier` : Uniquely identifies this environment
@@ -689,7 +689,7 @@ An entry has the following properties :
 To launch a Unity environment from a registry entry, use the `make` method:
 ```python
 registry = UnityEnvRegistry()
-env = registry[<environment_identifyier>].make()
+env = registry[<environment_identifier>].make()
 ```

 <a name="mlagents_envs.registry.unity_env_registry.UnityEnvRegistry.register"></a>
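
The registry snippets corrected above use a bare `UnityEnvRegistry()`; in practice the package also exposes a pre-populated `default_registry`. A brief sketch follows; the "3DBall" identifier is an assumption about what that registry contains.

```python
# Sketch: looking up a registry entry and launching it, as the docstring above describes.
# "3DBall" is assumed to be a valid identifier in the default registry.
from mlagents_envs.registry import default_registry

entry = default_registry["3DBall"]
print(entry.identifier)          # uniquely identifies this environment

env = entry.make()               # downloads the executable if needed, then starts it
env.reset()
print(list(env.behavior_specs))  # Behavior names exposed by the environment
env.close()
```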

docs/Python-On-Off-Policy-Trainer-Documentation.md

Lines changed: 1 addition & 1 deletion
@@ -694,7 +694,7 @@ class Lesson()
 ```

 Gathers the data of one lesson for one environment parameter including its name,
-the condition that must be fullfiled for the lesson to be completed and a sampler
+the condition that must be fulfilled for the lesson to be completed and a sampler
 for the environment parameter. If the completion_criteria is None, then this is
 the last lesson in the curriculum.

docs/Python-Optimizer-Documentation.md

Lines changed: 2 additions & 2 deletions
@@ -43,8 +43,8 @@ Get value estimates and memories for a trajectory, in batch form.
 **Arguments**:

 - `batch`: An AgentBuffer that consists of a trajectory.
-- `next_obs`: the next observation (after the trajectory). Used for boostrapping
-if this is not a termiinal trajectory.
+- `next_obs`: the next observation (after the trajectory). Used for bootstrapping
+if this is not a terminal trajectory.
 - `done`: Set true if this is a terminal trajectory.
 - `agent_id`: Agent ID of the agent that this trajectory belongs to.

docs/Python-PettingZoo-API.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ interfacing with a Unity environment via Python.

 ## Installation and Examples

-The PettingZoo wrapper is part of the `mlgents_envs` package. Please refer to the
+The PettingZoo wrapper is part of the `mlagents_envs` package. Please refer to the
 [mlagents_envs installation instructions](ML-Agents-Envs-README.md).

 [[Colab] PettingZoo Wrapper Example](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/develop-python-api-ga/ml-agents-envs/colabs/Colab_PettingZoo.ipynb)

docs/Readme.md

Lines changed: 2 additions & 1 deletion
@@ -52,6 +52,7 @@ to get started with the latest release of ML-Agents.**

 The table below lists all our releases, including our `main` branch which is
 under active development and may be unstable. A few helpful guidelines:
+
 - The [Versioning page](Versioning.md) overviews how we manage our GitHub
 releases and the versioning process for each of the ML-Agents components.
 - The [Releases page](https://github.com/Unity-Technologies/ml-agents/releases)
@@ -165,7 +166,7 @@ We have also published a series of blog posts that are relevant for ML-Agents:
 ### More from Unity

 - [Unity Sentis](https://unity.com/products/sentis)
-- [Introductin Unity Muse and Sentis](https://blog.unity.com/engine-platform/introducing-unity-muse-and-unity-sentis-ai)
+- [Introducing Unity Muse and Sentis](https://blog.unity.com/engine-platform/introducing-unity-muse-and-unity-sentis-ai)

 ## Community and Feedback

docs/Training-ML-Agents.md

Lines changed: 1 addition & 1 deletion
@@ -413,7 +413,7 @@ Unless otherwise specified, omitting a configuration will revert it to its defau
 In some cases, you may want to specify a set of default configurations for your Behaviors.
 This may be useful, for instance, if your Behavior names are generated procedurally by
 the environment and not known before runtime, or if you have many Behaviors with very similar
-settings. To specify a default configuraton, insert a `default_settings` section in your YAML.
+settings. To specify a default configuration, insert a `default_settings` section in your YAML.
 This section should be formatted exactly like a configuration for a Behavior.

 ```yaml
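
To make the corrected `default_settings` sentence concrete, here is a rough layout sketch written as a Python dict mirroring the YAML structure; the behavior name and hyperparameter values are illustrative assumptions, not content from the commit.

```python
# Illustrative layout only: default_settings covers every Behavior that is not
# listed explicitly under "behaviors". Names and values below are assumptions.
trainer_config = {
    "default_settings": {
        "trainer_type": "ppo",
        "hyperparameters": {"batch_size": 1024, "learning_rate": 3.0e-4},
        "max_steps": 500_000,
    },
    "behaviors": {
        # Only Behaviors needing overrides appear here; all others use the defaults.
        "MyProceduralBehavior": {"max_steps": 2_000_000},
    },
}
```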

docs/Tutorial-Custom-Trainer-Plugin.md

Lines changed: 2 additions & 2 deletions
@@ -13,7 +13,7 @@ Users of the plug-in system are responsible for implementing the trainer class s

 Please refer to the internal [PPO implementation](../ml-agents/mlagents/trainers/ppo/trainer.py) for a complete code example. We will not provide a workable code in the document. The purpose of the tutorial is to introduce you to the core components and interfaces of our plugin framework. We use code snippets and patterns to demonstrate the control and data flow.

-Your custom trainers are responsible for collecting experiences and training the models. Your custom trainer class acts like a co-ordinator to the policy and optimizer. To start implementing methods in the class, create a policy class objects from method `create_policy`:
+Your custom trainers are responsible for collecting experiences and training the models. Your custom trainer class acts like a coordinator to the policy and optimizer. To start implementing methods in the class, create a policy class objects from method `create_policy`:


 ```python
@@ -243,7 +243,7 @@ Before installing your custom trainer package, make sure you have `ml-agents-env
 pip3 install -e ./ml-agents-envs && pip3 install -e ./ml-agents
 ```

-Install your cutom trainer package(if your package is pip installable):
+Install your custom trainer package(if your package is pip installable):
 ```shell
 pip3 install your_custom_package
 ```

docs/Unity-Environment-Registry.md

Lines changed: 2 additions & 1 deletion
@@ -28,7 +28,8 @@ env.close()

 ## Create and share your own registry

-In order to share the `UnityEnvironemnt` you created, you must :
+In order to share the `UnityEnvironment` you created, you must:
+
 - [Create a Unity executable](Learning-Environment-Executable.md) of your environment for each platform (Linux, OSX and/or Windows)
 - Place each executable in a `zip` compressed folder
 - Upload each zip file online to your preferred hosting platform

ml-agents-envs/mlagents_envs/registry/unity_env_registry.py

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ class UnityEnvRegistry(Mapping):
 The UnityEnvRegistry implements a Map, to access an entry of the Registry, use:
 ```python
 registry = UnityEnvRegistry()
-entry = registry[<environment_identifyier>]
+entry = registry[<environment_identifier>]
 ```
 An entry has the following properties :
 * `identifier` : Uniquely identifies this environment
@@ -27,7 +27,7 @@ class UnityEnvRegistry(Mapping):
 To launch a Unity environment from a registry entry, use the `make` method:
 ```python
 registry = UnityEnvRegistry()
-env = registry[<environment_identifyier>].make()
+env = registry[<environment_identifier>].make()
 ```
 """

ml-agents/mlagents/trainers/optimizer/torch_optimizer.py

Lines changed: 2 additions & 2 deletions
@@ -148,8 +148,8 @@ def get_trajectory_value_estimates(
 """
 Get value estimates and memories for a trajectory, in batch form.
 :param batch: An AgentBuffer that consists of a trajectory.
-:param next_obs: the next observation (after the trajectory). Used for boostrapping
-if this is not a termiinal trajectory.
+:param next_obs: the next observation (after the trajectory). Used for bootstrapping
+if this is not a terminal trajectory.
 :param done: Set true if this is a terminal trajectory.
 :param agent_id: Agent ID of the agent that this trajectory belongs to.
 :returns: A Tuple of the Value Estimates as a Dict of [name, np.ndarray(trajectory_len)],
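
The docstring fixed above mentions bootstrapping from `next_obs` when the trajectory is not terminal. As a side note, the small self-contained sketch below illustrates what bootstrapping means for discounted returns; it is a simplification, not the trainer's actual implementation.

```python
import numpy as np

def discounted_returns(rewards, gamma, bootstrap_value=0.0):
    """Illustrative only: returns seeded with a value estimate when not terminal."""
    returns = np.zeros(len(rewards), dtype=np.float64)
    running = bootstrap_value  # value estimate of next_obs; 0.0 for terminal trajectories
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: a 3-step trajectory that is cut off (not terminal), so we bootstrap.
print(discounted_returns([0.0, 0.0, 1.0], gamma=0.99, bootstrap_value=0.5))
```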

ml-agents/mlagents/trainers/poca/optimizer_torch.py

Lines changed: 2 additions & 2 deletions
@@ -565,8 +565,8 @@ def get_trajectory_and_baseline_value_estimates(
 """
 Get value estimates, baseline estimates, and memories for a trajectory, in batch form.
 :param batch: An AgentBuffer that consists of a trajectory.
-:param next_obs: the next observation (after the trajectory). Used for boostrapping
-if this is not a termiinal trajectory.
+:param next_obs: the next observation (after the trajectory). Used for bootstrapping
+if this is not a terminal trajectory.
 :param next_groupmate_obs: the next observations from other members of the group.
 :param done: Set true if this is a terminal trajectory.
 :param agent_id: Agent ID of the agent that this trajectory belongs to.

ml-agents/mlagents/trainers/settings.py

Lines changed: 1 addition & 1 deletion
@@ -517,7 +517,7 @@ def need_increment(
 class Lesson:
 """
 Gathers the data of one lesson for one environment parameter including its name,
-the condition that must be fullfiled for the lesson to be completed and a sampler
+the condition that must be fulfilled for the lesson to be completed and a sampler
 for the environment parameter. If the completion_criteria is None, then this is
 the last lesson in the curriculum.
 """
