Next we tried to play CS:GO, and I let my bot shoot at enemies; you can check this out in my [YouTube](https://www.youtube.com/watch?v=9UjsnAg78x8) video.

That’s all for this tutorial. The new model didn't solve the FPS problem; it improved performance slightly, but not enough for us to actually play the game. For future work I decided to learn multiprocessing and run parts of our code in parallel, so in the next tutorial I will be working with multiprocessing.

# Part 7. Grab screen with multiprocessing

Welcome everyone to part 9 of our TensorFlow object detection API series. This tutorial will be a little different from the previous tutorials.

In part 8 I said that I would be working with Python multiprocessing to make the code run in parallel with other processes. So I spent hours learning how to use multiprocessing (I had not used it before).

So I copied the whole code from my second tutorial and removed the ```screen_recordPIL``` and ```screen_grab``` functions, leaving only the ```screen_recordMSS``` function. This function can be divided into two parts: one where we grab the screen and one where we show the grabbed screen. This means we will need to create two processes.
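
For reference, all of the snippets below rely on the same imports and globals as the second (grab screen) tutorial (```sct```, ```monitor```, ```title```, ```fps```, ```start_time```, ```display_time```). A minimal sketch of that setup might look like this; the window title and the capture region are placeholder values, not necessarily the exact ones used in the tutorial:
```
import time
import multiprocessing

import cv2
import mss
import numpy

title = "FPS benchmark"          # window title for cv2.imshow (placeholder)
display_time = 2                 # print the FPS every 2 seconds
fps = 0
start_time = time.time()
sct = mss.mss()                  # MSS screen-capture object
monitor = {"top": 40, "left": 0, "width": 800, "height": 640}  # capture region (placeholder)
```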

First I divided the whole code into two parts; the first part we will call GRABMSS_screen. We put its code into a while loop so that it runs over and over. When we have our screen, we call ```q.put_nowait(img)``` to put the image into a shared queue, and with the following line, ```q.join()```, we wait until the image has been taken from the queue and processed.

```
def GRABMSS_screen(q):
    while True:
        # Get raw pixels from the screen, save it to a Numpy array
        img = numpy.array(sct.grab(monitor))
        # To get real color we do this:
        #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        q.put_nowait(img)
        q.join()
```

The second function we will call SHOWMSS_screen. This function also runs in a while loop, and we always check whether our queue is empty. When there is something in the queue, we call ```q.get_nowait()``` to take an image from it, and with ```q.task_done()``` we mark that item as processed, which unblocks the grabbing process waiting on ```q.join()```. After that we do the same things as before: show the grabbed image and measure the FPS.

```
def SHOWMSS_screen(q):
    global fps, start_time
    while True:
        if not q.empty():
            img = q.get_nowait()
            q.task_done()
            # To get real color we do this:
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            # Display the picture
            cv2.imshow(title, cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
            fps+=1
            TIME = time.time() - start_time
            if (TIME) >= display_time :
                print("FPS: ", fps / (TIME))
                fps = 0
                start_time = time.time()
            # Press "q" to quit
            if cv2.waitKey(25) & 0xFF == ord("q"):
                cv2.destroyAllWindows()
                break
```
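
To make the ```put_nowait()``` / ```join()``` / ```task_done()``` handshake a bit clearer, here is a minimal, self-contained sketch (not part of the tutorial code) showing how a JoinableQueue lets the producer wait until the consumer has finished with each item:
```
import multiprocessing

def producer(q):
    for i in range(3):
        q.put_nowait(i)        # hand an item to the consumer
        q.join()               # block until the consumer calls task_done()
        print("item", i, "was processed")

def consumer(q):
    while True:
        item = q.get()         # wait for the next item
        print("processing", item)
        q.task_done()          # unblocks the producer's q.join()

if __name__ == "__main__":
    q = multiprocessing.JoinableQueue()
    c = multiprocessing.Process(target=consumer, args=(q,), daemon=True)
    c.start()
    producer(q)                # the daemon consumer is killed when the script exits
```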

Right now we have two different functions; next we will run them in parallel processes.

If we want to run our code with multiprocessing, we must wrap the startup code in ```if __name__=="__main__":``` and run the Python script from a command prompt; if we run it from a Python shell instead, we won't get any prints, which we need here to measure the FPS. So our full third code part looks like this:

```
if __name__=="__main__":
    # Queue
    q = multiprocessing.JoinableQueue()

    # creating new processes
    p1 = multiprocessing.Process(target=GRABMSS_screen, args=(q, ))
    p2 = multiprocessing.Process(target=SHOWMSS_screen, args=(q, ))

    # starting our processes
    p1.start()
    p2.start()
```

You can learn more about Python multiprocessing and queues at this [link](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Queue.qsize). A short explanation of the code:
We begin by creating a shared queue:
```
# Queue
q = multiprocessing.JoinableQueue()
```
680+
With following lines we are creating p1 and p2 processes which will run in background. p1 function will call GRABMSS_screen() function and p2 will call SHOWMSS_screen() function. As an argument for these functions we must give arguments, we give q there.
```
# creating new processes
p1 = multiprocessing.Process(target=GRABMSS_screen, args=(q, ))
p2 = multiprocessing.Process(target=SHOWMSS_screen, args=(q, ))
```
686+
Final step is to start our processes, after these commands our grab screen function will run in background.
687+
```
688+
# starting our processes
689+
p1.start()
690+
p2.start()
691+
```
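
One optional variation (not part of the original tutorial): because GRABMSS_screen loops forever, the script keeps running even after the display window is closed with "q". If you want the whole program to exit cleanly, one possible tweak is to make the grabbing process a daemon and wait only for the display process:
```
if __name__=="__main__":
    q = multiprocessing.JoinableQueue()

    p1 = multiprocessing.Process(target=GRABMSS_screen, args=(q, ))
    p2 = multiprocessing.Process(target=SHOWMSS_screen, args=(q, ))

    p1.daemon = True   # a daemon process is terminated automatically when the main process exits

    p1.start()
    p2.start()

    p2.join()          # wait until SHOWMSS_screen breaks out of its loop ("q" pressed)
```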

For comparison, I ran the old code without multiprocessing and the new code with multiprocessing. Here are the results without multiprocessing:
<p align="center">
  <img src="https://github.com/pythonlessons/TensorFlow-object-detection-tutorial/blob/master/1_part%20images/09_FPS_slow.JPG">
</p><br>
We can see that the average is about 19-20 FPS.
Here are the results with multiprocessing:
<p align="center">
  <img src="https://github.com/pythonlessons/TensorFlow-object-detection-tutorial/blob/master/1_part%20images/09_FPS_fast.JPG">
</p><br>

We can see that the average is about 32 FPS, so our screen grabbing improved by around 50%. I would like to improve it even more, but for now I don't have ideas for how to do that. Anyway, the results are much better than before!