[WIB] Glviewer #261
Conversation
Besides the above comments, the overall structure looks good. Note that the previous review comments are lost if any of them are not resolved in the new commit.
… into develop. Conflicts: examples/protonect/CMakeLists.txt, examples/protonect/Protonect.cpp, examples/protonect/src/cpu_depth_packet_processor.cpp
Conflicts: examples/protonect/CMakeLists.txt, examples/protonect/Protonect.cpp, examples/protonect/src/cpu_depth_packet_processor.cpp, examples/protonect/src/turbo_jpeg_rgb_packet_processor.cpp
Renamed AddFrame to addFrame. Added a define so that either OpenCV or OpenGL can be used.
I have merged in @christiankerl's Mat substitute. Should I make it possible to use OpenCV (with defines everywhere necessary), since a lot of people are having problems with OpenGL 3.3? What do you think, @xlz and @floe? I'll also try to investigate why 3.3 is necessary. I'll look into identifying whether a frame has 1 or 3 channels and then executing the correct shader, to support rendering additional frames. I also need to figure out how to dynamically split the view area across the number of frames added. I'd like to find a better way to create the textures only once and afterwards only update their data, but I can't wrap my head around such an implementation right now, so any pointers would be nice; a sketch of that idea follows below. Right now the textures are recreated each time, and I'm getting some depth processor packet skipping, probably because the GPU is busy allocating/deallocating or rendering the images. (Only tested this with OpenGL processing.)
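A minimal sketch of the create-once/update-often pattern mentioned above, assuming a GL 3.3 context whose functions are already loaded (e.g. via the project's generated static GL header); the `Texture` wrapper and its members are hypothetical, not code from this PR:

```cpp
#include <GL/gl.h> // stand-in for the project's generated GL header

// Hypothetical wrapper: allocate texture storage once, then only upload
// new pixel data each frame instead of recreating the texture.
struct Texture
{
  GLuint id;
  int width, height;

  void allocate(int w, int h)
  {
    width = w; height = h;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Allocate storage once; passing a null pointer uploads no pixels yet.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RED, GL_FLOAT, 0);
  }

  void update(const float *pixels)
  {
    glBindTexture(GL_TEXTURE_2D, id);
    // Re-specify only the pixels, not the storage, so the driver does not
    // allocate/deallocate every frame (the suspected cause of packet skipping).
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RED, GL_FLOAT, pixels);
  }
};
```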
I think OpenCV can be removed altogether. But I would like to keep the timing code enabled by default so users have an easy way to report performance on their hardware; this probably means borrowing OpenCV's getTickCount(). I think the issues with OpenGL 3.3 are related to Intel's Mesa driver support not being very good; right now the latest Mesa driver cannot make the OpenGL processor work. One thing we could do is report an error automatically when the detected OpenGL version is too low (sketched below). Also, it seems GLEW was used before but was found "buggy" and replaced with a generated static header, which is quite weird. Perhaps GLEW can help with detecting OpenGL features.
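A minimal sketch of such a version check, assuming a context has already been created and made current with GLFW; the function name and error handling are illustrative only:

```cpp
#include <cstdio>
#include <GLFW/glfw3.h>

#ifndef GL_MAJOR_VERSION
#define GL_MAJOR_VERSION 0x821B // not defined by pre-3.0 GL headers
#define GL_MINOR_VERSION 0x821C
#endif

// Returns false if the current context is below the required version.
// On a pre-3.0 context these queries fail and leave major/minor at 0,
// which also (correctly) reports the version as too low.
bool checkOpenGLVersion(int requiredMajor, int requiredMinor)
{
  GLint major = 0, minor = 0;
  glGetIntegerv(GL_MAJOR_VERSION, &major);
  glGetIntegerv(GL_MINOR_VERSION, &minor);
  if (major > requiredMajor || (major == requiredMajor && minor >= requiredMinor))
    return true;
  std::fprintf(stderr, "OpenGL %d.%d required, but only %d.%d is available\n",
               requiredMajor, requiredMinor, major, minor);
  return false;
}
```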
Oki. I'll go for removing OpenCV altogether. GLFW also has some timers, so maybe they can be used instead of the OpenCV ones.
OpenGL is an optional feature. It is better not to make it a hard dependency.
Yes, I just thought of that. I guess only the Protonect example should be OpenGL-dependent then, considering this viewer implementation uses OpenGL.
Of course Protonect will need OpenGL/GLFW. libfreenect2.so does not have to.
@xlz GLEW was removed because it caused some trouble in our multithreading scenario, with OpenGLDepthPacketProcessor running in a different thread. It was also an extra dependency, and building it reliably on every platform caused problems too. libfreenect2.so depends on GLFW if OpenGLDepthPacketProcessor is used.
OpenCV's getTickCount and getTickFrequency are here: https://github.com/Itseez/opencv/blob/b46719b0931b256ab68d5f833b8fadd83737ddd1/modules/core/src/system.cpp#L405
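For reference, a getTickCount/getTickFrequency-style pair can also be built on std::chrono (C++11) instead of importing OpenCV's platform-specific timer code; a sketch under that assumption:

```cpp
#include <chrono>
#include <cstdint>

// Monotonic tick count, analogous to cv::getTickCount().
int64_t getTickCount()
{
  return std::chrono::steady_clock::now().time_since_epoch().count();
}

// Ticks per second, analogous to cv::getTickFrequency().
double getTickFrequency()
{
  using period = std::chrono::steady_clock::period;
  return static_cast<double>(period::den) / period::num;
}

// Usage: double seconds = (t1 - t0) / getTickFrequency();
```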
Just for reference, @larshg: I pushed 7c2cc209a19fce46433a8bf0379ccf31cf2e4c3d, a basic viewer I implemented some time ago but didn't finish.
Finally got around to testing this on Ubuntu 14.04; I had to change … When I'm running the viewer on an integrated Intel GPU using the OpenCL DPP, I get this result: … I've noticed that the depth & IR images seem to be purely black and white; this is very likely caused by the … Apart from that, this seems workable to me; however, I don't really see a lot of test reports so far. If I get a few more thumbs-ups, I'll merge this on the weekend and we can address any leftover issues (not linking against OpenCV anymore, timing calculation) in separate smaller PRs.
@larshg I think you meant glfwGetTime(). I admit it would be somewhat cumbersome to import the whole chunk of cross-platform timer code. To make it less work, I think it is fine to optionally use glfwGetTime() or std::chrono if they are available, or to disable the timing code if not, since OpenGL/GLFW is enabled by default; a sketch of that fallback follows below. This way users get the performance numbers by default, and they are also not required to use OpenGL.
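A minimal sketch of that compile-time fallback; LIBFREENECT2_WITH_OPENGL_SUPPORT is a hypothetical build flag standing in for whatever define the build system actually sets:

```cpp
// Elapsed seconds from an arbitrary fixed point, using whichever timer is
// available at build time; timing is disabled only as a last resort.
#if defined(LIBFREENECT2_WITH_OPENGL_SUPPORT)
  #include <GLFW/glfw3.h>
  double now() { return glfwGetTime(); } // requires glfwInit() to have run
#elif __cplusplus >= 201103L
  #include <chrono>
  double now()
  {
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
  }
#else
  double now() { return 0.0; } // no timer available: timing code disabled
#endif
```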
Any test reports, esp. for Windows/Mac?
I'm slightly concerned that merging this will cause some breakage; OTOH we won't really get test reports until we do... suggestions?
@larshg develops this on Windows. I think he may have some more cleanup. I won't be able to test for a few weeks.
Yes, I develop on Windows. I was working on removing the templated use of Texture to make the viewer more dynamic when adding more frames, but I haven't worked on it for a while due to lack of time/interest and vacations :) I saw that @christiankerl used a variable to pass in the dividing factor for the IR and depth images; a sketch of that idea follows below. I'll try to find time to implement this soonish.
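A minimal sketch of the dividing-factor idea: the fragment shader divides the raw single-channel value by a uniform so IR and depth map into [0, 1] for display. The uniform name and the example divisors are illustrative, not taken from @christiankerl's code:

```cpp
#include <GL/gl.h> // stand-in for the project's generated GL header

// Illustrative GLSL 3.30 fragment shader: divide by a configurable factor
// instead of hard-coding one per image type.
static const char *kFragmentShader =
  "#version 330\n"
  "uniform sampler2D Data;\n"
  "uniform float MaxValue;\n" // e.g. ~65535.0 for IR, ~4500.0 for depth in mm
  "in vec2 TexCoord;\n"
  "out vec4 Color;\n"
  "void main()\n"
  "{\n"
  "  float v = texture(Data, TexCoord).r / MaxValue;\n"
  "  Color = vec4(v, v, v, 1.0);\n"
  "}\n";

// After linking the shader program, set the divisor per frame type:
void setDivisor(GLuint program, float maxValue)
{
  glUseProgram(program);
  glUniform1f(glGetUniformLocation(program, "MaxValue"), maxValue);
}
```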
OK, so we have it working on Linux and Windows at least. One Mac test report and I'm fine :-) Could you integrate the …
Just tested it on my MacBook, no problems so far =D
Closed via #361.

An OpenGL viewer to use instead of OpenCV.
Far from optimal, I think, so comments and ideas would be nice!