* The vtest.avi video from https://github.com/opencv/opencv/blob/master/samples/data/vtest.avi

## Setup
1. You need the extra (contrib) modules installed for the MOG background subtractor. This tutorial was tested on Windows, and the easiest way to install them is:
```
pip install opencv-contrib-python
```
2. Download the vtest.avi video from https://github.com/opencv/opencv/blob/master/samples/data/vtest.avi and put it in the same folder as the Python script.
3. Run the Python script. You should see a diff-overlay.jpg when it's done.

## Get the Code
The code is included in this folder of the repository as a .py file.

## How it works
The main OpenCV APIs used are:
* MOG background subtractor (cv2.bgsegm.createBackgroundSubtractorMOG()) - https://docs.opencv.org/3.0-beta/modules/video/doc/motion_analysis_and_object_tracking.html?highlight=createbackgroundsubtractormog#createbackgroundsubtractormog
* cv2.threshold() - https://docs.opencv.org/3.3.1/d7/d4d/tutorial_py_thresholding.html
* cv2.add() - https://docs.opencv.org/3.2.0/d0/d86/tutorial_py_image_arithmetics.html
* cv2.applyColorMap() - https://docs.opencv.org/3.0-beta/modules/imgproc/doc/colormaps.html
* cv2.addWeighted() - https://docs.opencv.org/3.2.0/d0/d86/tutorial_py_image_arithmetics.html

Note: the linked MOG docs are out of date; the proper way to initialize the subtractor is
```
cv2.bgsegm.createBackgroundSubtractorMOG()
```

The application takes each frame and first applies background subtraction, using the cv2.bgsegm.createBackgroundSubtractorMOG() object to create a mask. A threshold is then applied to the mask to remove small amounts of movement and to set the accumulation value for each iteration. The result of the threshold is added to an accumulation image (one that starts out all zeros and is only ever added to, never subtracted from), which is what records the motion. At the very end, a color map is applied to the accumulated image so the motion is easier to see. This colored image is then combined with a copy of the first frame using cv2.addWeighted to produce the overlay.