A hacky Lua script to use the Python funscript generator in Open Funscripter.

The [installation instructions](https://github.com/michael-mueller-git/Python-Funscript-Editor/tree/main/docs/app/docs/user-guide/ofs-integration.md) are now included in the documentation.

Idea: Using an [OpenCV tracker](https://learnopencv.com/object-tracking-using-opencv-cpp-python/), we can determine the relative movements in a static camera setup and map them to Funscript actions with simple signal processing.

The algorithm is implemented for 3D side-by-side VR videos. Some parameters are currently hard-coded. It should be possible to extend the functionality to 2D videos by changing the code, subject to the limitations below.
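
Side-by-side footage stores both eye views next to each other in every frame, so only one half of the frame is needed for tracking. Below is a minimal sketch of cropping one eye view with OpenCV; the file name is a placeholder and the assumption that the left half holds the usable view is mine, not taken from the project.

```python
import cv2

def single_eye_view(frame):
    """Return the left half of a side-by-side VR frame (assumed to be one eye view)."""
    height, width = frame.shape[:2]
    return frame[:, : width // 2]

cap = cv2.VideoCapture("video_sbs.mp4")  # placeholder file name
ok, frame = cap.read()
if ok:
    eye = single_eye_view(frame)
    print("full frame:", frame.shape, "single eye:", eye.shape)
cap.release()
```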

### Limitations

- Static camera setup
- A fixed reference point for the relative movement is required in the video
- No video cuts are allowed within a tracking sequence
- No change of position of the performers
- The tracked features must remain visible in all following frames of the tracking sequence
### Process

1. Select the features of the woman and the man in the video that should be tracked.
2. Predict the feature positions in the following video frames with the OpenCV tracker.
3. Calculate the difference between the predicted tracking boxes.
4. Map the relative difference to an absolute difference score using user input.
5. Filter all local minima and maxima to get the final action positions for the Funscript (see the sketch after this list).
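
The following is a condensed, illustrative sketch of steps 2 to 5, not the project's actual code. It assumes `opencv-contrib-python`, `numpy`, and `scipy` are installed, uses the vertical distance between the two tracking-box centres as the movement signal, and normalises with the observed range instead of asking the user; the input file name is a placeholder.

```python
import cv2
import numpy as np
from scipy.signal import argrelextrema

cap = cv2.VideoCapture("video.mp4")  # placeholder input video
ok, frame = cap.read()

# Step 1: let the user draw the two tracking boxes (feature A / feature B).
box_a = cv2.selectROI("select feature A", frame)
box_b = cv2.selectROI("select feature B", frame)
cv2.destroyAllWindows()

tracker_a = cv2.TrackerCSRT_create()  # requires opencv-contrib-python
tracker_b = cv2.TrackerCSRT_create()
tracker_a.init(frame, box_a)
tracker_b.init(frame, box_b)

distances = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok_a, box_a = tracker_a.update(frame)  # Step 2: predict feature positions
    ok_b, box_b = tracker_b.update(frame)
    if not (ok_a and ok_b):
        break
    # Step 3: vertical distance between the centres of the two tracking boxes
    center_a = box_a[1] + box_a[3] / 2.0
    center_b = box_b[1] + box_b[3] / 2.0
    distances.append(abs(center_a - center_b))
cap.release()

# Step 4: map the relative distance to a 0-100 score
# (in the real tool this mapping involves user input; here the observed range is used).
d = np.asarray(distances)
score = (d - d.min()) / (d.max() - d.min() + 1e-9) * 100.0

# Step 5: keep only local minima and maxima as Funscript action positions.
extrema = np.sort(np.concatenate([
    argrelextrema(score, np.greater)[0],
    argrelextrema(score, np.less)[0],
]))
actions = [(int(i), int(round(score[i]))) for i in extrema]  # (frame index, position)
print(actions[:10])
```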
### Improvements

- You can change the OpenCV tracker that predicts the positions in the source code. OpenCV offers several trackers which differ in prediction accuracy and processing speed; see [OpenCV Tracker](https://learnopencv.com/object-tracking-using-opencv-cpp-python/).
- You can set the number of frames that are interpolated with the `skip_frames` parameter. A value of 0 means the OpenCV tracker delivers a prediction for every frame, which is slower but more accurate. A value greater than zero skips that many frames and interpolates the tracking boxes in between, which increases the processing speed but decreases the accuracy. I have set the value to 1, i.e. every second frame is skipped and interpolated, which provides a good mix of accuracy and speed. (A sketch of the tracker choice and the interpolation follows after this list.)
- It is recommended to use a lower-resolution version of the video (e.g. 4K) for generating the Funscript actions, as the processing speed is higher.
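
Purely illustrative sketches of the two points above, not the project's actual code: how a different OpenCV tracker could be swapped in, and how skipped frames could be filled by linear interpolation of the box positions. The helper name `interpolate_boxes` and the dictionary of factories are mine.

```python
import cv2
import numpy as np

# Several OpenCV trackers with different speed/accuracy trade-offs
# (CSRT and KCF require opencv-contrib-python).
TRACKER_FACTORIES = {
    "csrt": cv2.TrackerCSRT_create,  # more accurate, slower
    "kcf": cv2.TrackerKCF_create,    # faster, less accurate
    "mil": cv2.TrackerMIL_create,
}
tracker = TRACKER_FACTORIES["kcf"]()  # change the key to switch trackers

def interpolate_boxes(box_start, box_end, skip_frames):
    """Linearly interpolate (x, y, w, h) boxes for the frames skipped
    between two tracker predictions."""
    start = np.asarray(box_start, dtype=float)
    end = np.asarray(box_end, dtype=float)
    steps = skip_frames + 1
    return [tuple(start + (end - start) * i / steps) for i in range(1, steps)]

# skip_frames = 1: one interpolated box halfway between two predictions.
print(interpolate_boxes((100, 200, 40, 40), (100, 210, 40, 40), skip_frames=1))
# -> [(100.0, 205.0, 40.0, 40.0)]
```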

Currently we use a hacky Lua script to communicate between the Python Funscript Generator and Open Funscripter.

## Installation

1. Download the latest packed Python Funscript Editor from the [GitHub release page](https://github.com/michael-mueller-git/Python-Funscript-Editor/releases).
2. Extract the archive to a path without special characters or spaces.
3. Copy the `funscript_generator.lua` script ([`Repository/contrib/OpenFunscripter`](https://github.com/michael-mueller-git/Python-Funscript-Editor/tree/main/contrib/OpenFunscripter)) to `data/lua` in your OFS directory.
4. Open the `funscript_generator.lua` file and adjust the `Settings.FunscriptGenerator` and `Settings.TmpFile` variables.
   - `Settings.FunscriptGenerator`: points to the extracted Python Funscript Editor program.
   - `Settings.TmpFile`: specifies a temporary file where the result is stored (must be a file, not a directory!). The file is overwritten automatically the next time the generator is started (see the format sketch after these steps).
5. Now launch OFS.
6. Navigate to `View : Special functions : Custom Functions`, select the `funscript_generator.lua` entry, and click the `Bind Script` button (this may trigger the funscript generator; just ignore it for now).
7. Navigate to `Options : Keys : Dynamic` and assign a shortcut for the funscript generator.
8. Now you can use the shortcut at any position in the video to start the funscript generator.
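
For orientation only: a funscript is a small JSON file of timed positions, so the result placed in `Settings.TmpFile` is a set of actions of this kind. The snippet below illustrates the general funscript format; the exact fields the generator writes and the Lua script reads back are an assumption here, not taken from the project, and the path is a placeholder.

```python
import json

# Illustrative funscript-style result: "at" in milliseconds, "pos" in 0-100.
result = {
    "version": "1.0",
    "actions": [
        {"at": 1000, "pos": 10},
        {"at": 1500, "pos": 90},
    ],
}

with open("funscript_generator_result.json", "w") as f:  # placeholder path
    json.dump(result, f, indent=2)
```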