
Commit 6770aff

Changes for Tensorflow 2.5.0
1 parent 292865b

20 files changed (+8147 −118 lines)

README.md (+1 −1)
@@ -7,7 +7,7 @@ There currently exist several versions of the tutorial, corresponding to the var
 
 ## TensorFlow 2 Object Detection API tutorial
 
-[![TensorFlow 2.2](https://img.shields.io/badge/TensorFlow-2.2-FF6F00?logo=tensorflow)](https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0) [![Documentation Status](https://readthedocs.org/projects/tensorflow-object-detection-api-tutorial/badge/?version=latest)](http://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/?badge=latest)
+[![TensorFlow 2.5](https://img.shields.io/badge/TensorFlow-2.5-FF6F00?logo=tensorflow)](https://github.com/tensorflow/tensorflow/releases/tag/v2.5.0) [![Documentation Status](https://readthedocs.org/projects/tensorflow-object-detection-api-tutorial/badge/?version=latest)](http://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/?badge=latest)
 
 Since July 10, 2020 TensorFlow [announced that the Object Detection API officially supports TensorFlow 2](https://blog.tensorflow.org/2020/07/tensorflow-2-meets-object-detection-api.html). Therefore, an updated version of the tutorial was created to cover TensorFlow 2.
 


docs/source/auto_examples/object_detection_camera.ipynb (+8 −8)
@@ -15,7 +15,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "\nDetect Objects Using Your Webcam\n================================\n"
+    "\n# Detect Objects Using Your Webcam\n"
    ]
   },
   {
@@ -29,7 +29,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Create the data directory\n~~~~~~~~~~~~~~~~~~~~~~~~~\nThe snippet shown below will create the ``data`` directory where all our data will be stored. The\ncode will create a directory structure as shown bellow:\n\n.. code-block:: bash\n\n    data\n    \u2514\u2500\u2500 models\n\nwhere the ``models`` folder will will contain the downloaded models.\n\n"
+    "## Create the data directory\nThe snippet shown below will create the ``data`` directory where all our data will be stored. The\ncode will create a directory structure as shown bellow:\n\n.. code-block:: bash\n\n    data\n    \u2514\u2500\u2500 models\n\nwhere the ``models`` folder will will contain the downloaded models.\n\n"
    ]
   },
   {
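
The executable cell body is not part of this hunk; as a rough sketch of the directory setup the text describes (the variable names are assumptions, not taken from the commit):

```python
# Sketch only: reconstructs the directory setup described in the cell text.
# DATA_DIR / MODELS_DIR are assumed names, not read from this diff.
import os

DATA_DIR = os.path.join(os.getcwd(), 'data')
MODELS_DIR = os.path.join(DATA_DIR, 'models')
for dir in [DATA_DIR, MODELS_DIR]:
    if not os.path.exists(dir):
        os.mkdir(dir)  # creates data/ and data/models
```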
@@ -47,7 +47,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Download the model\n~~~~~~~~~~~~~~~~~~\nThe code snippet shown below is used to download the object detection model checkpoint file,\nas well as the labels file (.pbtxt) which contains a list of strings used to add the correct\nlabel to each detection (e.g. person).\n\nThe particular detection algorithm we will use is the `SSD ResNet101 V1 FPN 640x640`. More\nmodels can be found in the `TensorFlow 2 Detection Model Zoo <https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md>`_.\nTo use a different model you will need the URL name of the specific model. This can be done as\nfollows:\n\n1. Right click on the `Model name` of the model you would like to use;\n2. Click on `Copy link address` to copy the download link of the model;\n3. Paste the link in a text editor of your choice. You should observe a link similar to ``download.tensorflow.org/models/object_detection/tf2/YYYYYYYY/XXXXXXXXX.tar.gz``;\n4. Copy the ``XXXXXXXXX`` part of the link and use it to replace the value of the ``MODEL_NAME`` variable in the code shown below;\n5. Copy the ``YYYYYYYY`` part of the link and use it to replace the value of the ``MODEL_DATE`` variable in the code shown below.\n\nFor example, the download link for the model used below is: ``download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz``\n\n"
+    "## Download the model\nThe code snippet shown below is used to download the object detection model checkpoint file,\nas well as the labels file (.pbtxt) which contains a list of strings used to add the correct\nlabel to each detection (e.g. person).\n\nThe particular detection algorithm we will use is the `SSD ResNet101 V1 FPN 640x640`. More\nmodels can be found in the `TensorFlow 2 Detection Model Zoo <https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md>`_.\nTo use a different model you will need the URL name of the specific model. This can be done as\nfollows:\n\n1. Right click on the `Model name` of the model you would like to use;\n2. Click on `Copy link address` to copy the download link of the model;\n3. Paste the link in a text editor of your choice. You should observe a link similar to ``download.tensorflow.org/models/object_detection/tf2/YYYYYYYY/XXXXXXXXX.tar.gz``;\n4. Copy the ``XXXXXXXXX`` part of the link and use it to replace the value of the ``MODEL_NAME`` variable in the code shown below;\n5. Copy the ``YYYYYYYY`` part of the link and use it to replace the value of the ``MODEL_DATE`` variable in the code shown below.\n\nFor example, the download link for the model used below is: ``download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz``\n\n"
    ]
   },
   {
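
Again the code cell itself is not shown in this hunk; a hedged sketch of the download step it describes, using only the URL layout quoted in the text (the helper code and local paths are assumptions):

```python
# Sketch only: download and unpack the checkpoint tarball named in the text.
# MODELS_DIR is assumed from the earlier cell; the base URL comes from the
# example link quoted above.
import os
import tarfile
import urllib.request

MODEL_DATE = '20200711'
MODEL_NAME = 'ssd_resnet101_v1_fpn_640x640_coco17_tpu-8'
MODEL_TAR = MODEL_NAME + '.tar.gz'
BASE = 'http://download.tensorflow.org/models/object_detection/tf2/'
MODELS_DIR = os.path.join('data', 'models')

tar_path = os.path.join(MODELS_DIR, MODEL_TAR)
if not os.path.exists(tar_path):
    urllib.request.urlretrieve(BASE + MODEL_DATE + '/' + MODEL_TAR, tar_path)
    with tarfile.open(tar_path) as f:
        f.extractall(MODELS_DIR)  # yields data/models/<MODEL_NAME>/...
```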
@@ -65,7 +65,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Load the model\n~~~~~~~~~~~~~~\nNext we load the downloaded model\n\n"
+    "## Load the model\nNext we load the downloaded model\n\n"
    ]
   },
   {
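
For context, one plausible shape for this loading step, assuming the TF2 Object Detection API checkpoint layout (`pipeline.config` plus `checkpoint/ckpt-0`); the paths and the `detect_fn` wrapper are assumptions consistent with the generated .rst further down:

```python
# Sketch only: build the detection model from its pipeline config and
# restore the checkpoint weights. Paths are assumed, not read from the diff.
import os
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder

MODEL_DIR = 'data/models/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8'
configs = config_util.get_configs_from_pipeline_file(
    os.path.join(MODEL_DIR, 'pipeline.config'))
detection_model = model_builder.build(
    model_config=configs['model'], is_training=False)

ckpt = tf.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(MODEL_DIR, 'checkpoint', 'ckpt-0')).expect_partial()

@tf.function
def detect_fn(image):
    """Preprocess, predict and postprocess a batched image tensor."""
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    return detection_model.postprocess(prediction_dict, shapes)
```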
@@ -83,7 +83,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Load label map data (for plotting)\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLabel maps correspond index numbers to category names, so that when our convolution network\npredicts `5`, we know that this corresponds to `airplane`. Here we use internal utility\nfunctions, but anything that returns a dictionary mapping integers to appropriate string labels\nwould be fine.\n\n"
+    "## Load label map data (for plotting)\nLabel maps correspond index numbers to category names, so that when our convolution network\npredicts `5`, we know that this corresponds to `airplane`. Here we use internal utility\nfunctions, but anything that returns a dictionary mapping integers to appropriate string labels\nwould be fine.\n\n"
    ]
   },
   {
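
The matching call appears verbatim in the generated .rst diff later in this commit; reproduced here with an assumed `PATH_TO_LABELS`:

```python
# Sketch: the label-map call shown later in the generated .rst.
# PATH_TO_LABELS is an assumed path to the downloaded .pbtxt labels file.
from object_detection.utils import label_map_util

PATH_TO_LABELS = 'data/models/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=True)
# e.g. category_index[1] -> {'id': 1, 'name': 'person'}
```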
@@ -101,7 +101,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Define the video stream\n~~~~~~~~~~~~~~~~~~~~~~~\nWe will use `OpenCV <https://pypi.org/project/opencv-python/>`_ to capture the video stream\ngenerated by our webcam. For more information you can refer to the `OpenCV-Python Tutorials <https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>`_\n\n"
+    "## Define the video stream\nWe will use `OpenCV <https://pypi.org/project/opencv-python/>`_ to capture the video stream\ngenerated by our webcam. For more information you can refer to the `OpenCV-Python Tutorials <https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>`_\n\n"
    ]
   },
   {
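
The capture setup is a single OpenCV call (visible as `cap = cv2.VideoCapture(0)` in the .rst diff below); a slightly defensive sketch:

```python
# Sketch: open the default webcam (device index 0) with OpenCV.
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError('Could not open webcam; check the device index.')
```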
@@ -119,7 +119,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Putting everything together\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe code shown below loads an image, runs it through the detection model and visualizes the\ndetection results, including the keypoints.\n\nNote that this will take a long time (several minutes) the first time you run this code due to\ntf.function's trace-compilation --- on subsequent runs (e.g. on new images), things will be\nfaster.\n\nHere are some simple things to try out if you are curious:\n\n* Modify some of the input images and see if detection still works. Some simple things to try out here (just uncomment the relevant portions of code) include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).\n* Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).\n* Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.\n\n"
+    "## Putting everything together\nThe code shown below loads an image, runs it through the detection model and visualizes the\ndetection results, including the keypoints.\n\nNote that this will take a long time (several minutes) the first time you run this code due to\ntf.function's trace-compilation --- on subsequent runs (e.g. on new images), things will be\nfaster.\n\nHere are some simple things to try out if you are curious:\n\n* Modify some of the input images and see if detection still works. Some simple things to try out here (just uncomment the relevant portions of code) include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).\n* Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).\n* Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.\n\n"
    ]
   },
   {
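
As a sketch of the per-frame loop this cell describes, assuming `detect_fn`, `cap`, and `category_index` from the sketches above (`min_score_thresh` is the knob the text suggests tuning):

```python
# Sketch only: per-frame detect-and-visualize loop, assuming detect_fn, cap
# and category_index from the sketches above. Press 'q' to quit.
import cv2
import numpy as np
import tensorflow as tf
from object_detection.utils import visualization_utils as viz_utils

while True:
    ret, image_np = cap.read()
    if not ret:
        break
    # The model expects a float batch of shape [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(
        np.expand_dims(image_np, 0), dtype=tf.float32)
    detections = detect_fn(input_tensor)

    image_with_boxes = image_np.copy()
    viz_utils.visualize_boxes_and_labels_on_image_array(
        image_with_boxes,
        detections['detection_boxes'][0].numpy(),
        # Output classes are 0-based; COCO label maps are 1-based.
        (detections['detection_classes'][0].numpy() + 1).astype(int),
        detections['detection_scores'][0].numpy(),
        category_index,
        use_normalized_coordinates=True,  # boxes are in [0, 1]
        min_score_thresh=0.30)            # raise/lower to filter detections

    cv2.imshow('object detection', cv2.resize(image_with_boxes, (800, 600)))
    if cv2.waitKey(25) & 0xFF == ord('q'):
        cap.release()
        cv2.destroyAllWindows()
        break
```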
@@ -150,7 +150,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.8"
+   "version": "3.9.5"
    }
   },
  "nbformat": 4,

docs/source/auto_examples/object_detection_camera.rst (+32 −3)
@@ -1,20 +1,33 @@
+
+.. DO NOT EDIT.
+.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
+.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
+.. "auto_examples\object_detection_camera.py"
+.. LINE NUMBERS ARE GIVEN BELOW.
+
 .. only:: html
 
     .. note::
         :class: sphx-glr-download-link-note
 
-        Click :ref:`here <sphx_glr_download_auto_examples_object_detection_camera.py>` to download the full example code
-    .. rst-class:: sphx-glr-example-title
+        Click :ref:`here <sphx_glr_download_auto_examples_object_detection_camera.py>`
+        to download the full example code
+
+.. rst-class:: sphx-glr-example-title
 
-    .. _sphx_glr_auto_examples_object_detection_camera.py:
+.. _sphx_glr_auto_examples_object_detection_camera.py:
 
 
 Detect Objects Using Your Webcam
 ================================
 
+.. GENERATED FROM PYTHON SOURCE LINES 9-11
+
 This demo will take you through the steps of running an "out-of-the-box" detection model to
 detect objects in the video stream extracted from your camera.
 
+.. GENERATED FROM PYTHON SOURCE LINES 13-24
+
 Create the data directory
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 The snippet shown below will create the ``data`` directory where all our data will be stored. The
@@ -27,6 +40,7 @@ code will create a directory structure as shown bellow:
 
 where the ``models`` folder will will contain the downloaded models.
 
+.. GENERATED FROM PYTHON SOURCE LINES 24-32
 
 .. code-block:: default
 
@@ -39,6 +53,8 @@ where the ``models`` folder will will contain the downloaded models.
             os.mkdir(dir)
 
 
+.. GENERATED FROM PYTHON SOURCE LINES 33-51
+
 Download the model
 ~~~~~~~~~~~~~~~~~~
 The code snippet shown below is used to download the object detection model checkpoint file,
@@ -58,6 +74,7 @@ follows:
 
 For example, the download link for the model used below is: ``download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz``
 
+.. GENERATED FROM PYTHON SOURCE LINES 51-82
 
 .. code-block:: default
 
@@ -93,10 +110,13 @@ For example, the download link for the model used below is: ``download.tensorflo
     print('Done')
 
 
+.. GENERATED FROM PYTHON SOURCE LINES 83-86
+
 Load the model
 ~~~~~~~~~~~~~~
 Next we load the downloaded model
 
+.. GENERATED FROM PYTHON SOURCE LINES 86-121
 
 .. code-block:: default
 
@@ -136,25 +156,31 @@ Next we load the downloaded model
 
 
 
+.. GENERATED FROM PYTHON SOURCE LINES 122-128
+
 Load label map data (for plotting)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Label maps correspond index numbers to category names, so that when our convolution network
 predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility
 functions, but anything that returns a dictionary mapping integers to appropriate string labels
 would be fine.
 
+.. GENERATED FROM PYTHON SOURCE LINES 128-131
 
 .. code-block:: default
 
     category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS,
                                                                         use_display_name=True)
 
 
+.. GENERATED FROM PYTHON SOURCE LINES 132-136
+
 Define the video stream
 ~~~~~~~~~~~~~~~~~~~~~~~
 We will use `OpenCV <https://pypi.org/project/opencv-python/>`_ to capture the video stream
 generated by our webcam. For more information you can refer to the `OpenCV-Python Tutorials <https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>`_
 
+.. GENERATED FROM PYTHON SOURCE LINES 136-140
 
 .. code-block:: default
 
@@ -163,6 +189,8 @@ generated by our webcam. For more information you can refer to the `OpenCV-Pytho
     cap = cv2.VideoCapture(0)
 
 
+.. GENERATED FROM PYTHON SOURCE LINES 141-155
+
 Putting everything together
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The code shown below loads an image, runs it through the detection model and visualizes the
@@ -178,6 +206,7 @@ Here are some simple things to try out if you are curious:
 * Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
 * Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.
 
+.. GENERATED FROM PYTHON SOURCE LINES 155-196
 
 .. code-block:: default
 