
Conversation

@larshg (Contributor) commented Jun 5, 2015

An OpenGL viewer to use instead of OpenCV.

Far from optimal, I think, so comments and ideas would be nice!

@xlz (Member) commented Jun 17, 2015

Besides the above comments, the overall structure looks good. The previous review comments are now lost, so please check whether any of them were left unresolved in the new commit.

larshg added 5 commits June 17, 2015 08:18

… into develop

Conflicts:
    examples/protonect/CMakeLists.txt
    examples/protonect/Protonect.cpp
    examples/protonect/src/cpu_depth_packet_processor.cpp

Conflicts:
    examples/protonect/CMakeLists.txt
    examples/protonect/Protonect.cpp
    examples/protonect/src/cpu_depth_packet_processor.cpp
    examples/protonect/src/turbo_jpeg_rgb_packet_processor.cpp

Renamed AddFrame to addFrame.
Added define for opencv to be able to use either opencv or opengl.
@larshg (Contributor, Author) commented Jun 17, 2015

I have merged in @christiankerl's Mat substitute.

Should I make it possible to keep using OpenCV (with defines wherever necessary), since a lot of people are having problems with OpenGL 3.3? What do you think, @xlz and @floe? I'll also try to investigate why 3.3 is necessary.

I'll look into detecting whether a frame has 1 or 3 channels and then running the correct shader, so that more frames can be added for rendering. I also need to figure out how to dynamically split the view area according to the number of frames added.

I'll try to come up with a better way to create the textures only once and afterwards just update the data, but I can't wrap my head around such an implementation right now, so any pointers would be nice.

Right now the textures are recreated each time, and I'm getting some depth processor packet skipping, probably because the GPU is busy allocating/deallocating or rendering the images. (Only tested with OpenGL processing.)
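As a rough sketch of the create-once/update-per-frame pattern being asked about here (this is not the PR's actual viewer code; the StreamTexture helper is made up, and it assumes an OpenGL 3.3 context with a loader that provides the gl* entry points):

```cpp
// Hypothetical helper, not part of the PR: allocate texture storage once,
// then only upload new pixel data each frame with glTexSubImage2D.
struct StreamTexture
{
  GLuint id;
  int width, height;

  StreamTexture() : id(0), width(0), height(0) {}

  // Called once per stream, e.g. 512x424 GL_R32F for depth/IR.
  void create(int w, int h, GLint internal_format, GLenum format, GLenum type)
  {
    width = w;
    height = h;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, w, h, 0, format, type, 0);
  }

  // Called every frame: no reallocation, just a data upload.
  void update(const void *pixels, GLenum format, GLenum type)
  {
    glBindTexture(GL_TEXTURE_2D, id);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, format, type, pixels);
  }
};
```

The viewer would call create() when the first frame of a stream arrives and update() on every later frame; whether that alone removes the packet skipping would still have to be measured.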

@xlz (Member) commented Jun 17, 2015

I think OpenCV can be removed altogether. But I would like to keep the timing code enabled by default so the users have an easy way to report performance on their hardware. This means probably stealing OpenCV's getTickCount().

I think the issues with OpenGL 3.3 are related to Intel's Mesa driver support not being very good. Right now the latest Mesa driver cannot make the OpenGL processor work. One option is to report an error automatically when the detected OpenGL version is too low.

Also, it seems GLEW was used before, but was found "buggy" and replaced with a generated static header. This is quite weird. Perhaps GLEW can help with detecting OpenGL features.
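As a rough illustration of the "report an error automatically" idea (not code from this PR; the helper name is hypothetical), the version of the context GLFW actually created can be queried and compared against the 3.3 requirement:

```cpp
#include <GLFW/glfw3.h>
#include <iostream>

// Hypothetical check, assuming a GLFWwindow has already been created:
// report a clear error instead of failing later in the depth processor.
bool hasRequiredOpenGL(GLFWwindow *window, int need_major = 3, int need_minor = 3)
{
  int major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
  int minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);

  if (major > need_major || (major == need_major && minor >= need_minor))
    return true;

  std::cerr << "OpenGL " << need_major << "." << need_minor
            << " is required, but the driver only provides "
            << major << "." << minor << std::endl;
  return false;
}
```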

@larshg (Contributor, Author) commented Jun 17, 2015

Okay, I'll go for removing OpenCV altogether.

GLFW also has some timers, so maybe they could be used instead of the OpenCV timers.
Edit: well, that's only possible if GLFW becomes required, which might be unreasonable.

@xlz (Member) commented Jun 17, 2015

OpenGL is an optional feature. It is better not to make it a hard dependency.

@larshg (Contributor, Author) commented Jun 17, 2015

Yes, I just thought of that. I guess only the Protonect example should depend on OpenGL then, considering this viewer implementation uses OpenGL?

@xlz (Member) commented Jun 17, 2015

Of course Protonect will need OpenGL/GLFW. libfreenect2.so does not have to.

@christiankerl (Contributor) commented:

@xlz GLEW was removed because it caused some trouble in our multithreading scenario, with OpenGLDepthPacketProcessor running in a different thread. It was also an extra dependency, and building it reliably on every platform caused problems.

libfreenect2.so depends on GLFW if OpenGLDepthPacketProcessor is used.

@floe mentioned this pull request Jun 23, 2015
@christiankerl (Contributor) commented:

OpenCV's getTickCount/getTickFrequency are here: https://github.com/Itseez/opencv/blob/b46719b0931b256ab68d5f833b8fadd83737ddd1/modules/core/src/system.cpp#L405. They are pretty small. Another option could be to use std::chrono if compiled with C++11 and have a no-op timer otherwise.
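A minimal sketch of that suggestion (the class and method names are made up, not from the codebase): a std::chrono-based timer when building as C++11, and a no-op stand-in otherwise, so the default-on timing code can drop the OpenCV dependency.

```cpp
#if __cplusplus >= 201103L
#include <chrono>

// C++11: real timing via the monotonic steady_clock.
class TickTimer
{
public:
  void start() { start_ = std::chrono::steady_clock::now(); }
  double seconds() const
  {
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start_).count();
  }
private:
  std::chrono::steady_clock::time_point start_;
};
#else
// Pre-C++11: timing is compiled out; all measurements read as zero.
class TickTimer
{
public:
  void start() {}
  double seconds() const { return 0.0; }
};
#endif
```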

@christiankerl (Contributor) commented:

Just for reference, @larshg: I pushed a basic viewer in commit 7c2cc209a19fce46433a8bf0379ccf31cf2e4c3d (I implemented it some time ago but didn't finish it).

@floe (Contributor) commented Jul 23, 2015

Finally got around to testing this on Ubuntu 14.04; I had to change opencl_depth_packet_processor.cpp so that <math.h> is always included, not only on Windows (otherwise M_PI is undefined). Just remove the #ifdef here: https://github.com/OpenKinect/libfreenect2/pull/261/files#diff-52a1a946913425cbc42940a3f2033ac2L36
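Roughly what that change amounts to (a sketch; the exact guard in the file may differ slightly): include <math.h> unconditionally so M_PI exists on every platform, keeping _USE_MATH_DEFINES for MSVC.

```cpp
// Before (simplified): <math.h> only pulled in on Windows, so M_PI was
// undefined elsewhere.
// #ifdef _WIN32
// #define _USE_MATH_DEFINES
// #include <math.h>
// #endif

// After: always include it; _USE_MATH_DEFINES is harmless on other platforms
// and required for M_PI with MSVC.
#define _USE_MATH_DEFINES
#include <math.h>
```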

When I'm running the viewer on an integrated Intel GPU using the OpenCL DPP, I get this result:

[screenshot from 2015-07-23 12:47:30]

I've noticed that the depth & IR images seem to be purely black and white; this is very likely caused by the gray fragment shader not dividing by 4500.0f in https://github.com/OpenKinect/libfreenect2/pull/261/files#diff-8ca25d621f26a3215be9537b77208b46R59

Apart from that, this seems workable to me; however, I haven't seen many test reports so far. If I get a few more thumbs-ups, I'll merge this over the weekend, and we can address any leftover issues (dropping the OpenCV link dependency, timing calculation) in separate, smaller PRs.

@xlz (Member) commented Jul 24, 2015

@larshg I think you meant glfwGetTime(). I admit it would be somewhat cumbersome to import a whole chunk of cross-platform timer code. To make it less work, I think it is fine to optionally use glfwGetTime() or std::chrono if they are available, or disable the timing code if not, since OpenGL/GLFW is enabled by default. This way users get performance numbers by default, and they are also not required to use OpenGL.
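A sketch of that build-time selection (the macro guarding GLFW availability is hypothetical, not an existing config flag): prefer glfwGetTime() when the OpenGL/GLFW path is built, fall back to std::chrono under C++11, and otherwise disable timing.

```cpp
// Returns a monotonically increasing time in seconds, or 0.0 when timing is disabled.
#if defined(LIBFREENECT2_WITH_OPENGL_SUPPORT)
#include <GLFW/glfw3.h>
static double nowSeconds()
{
  return glfwGetTime(); // valid once glfwInit() has been called
}
#elif __cplusplus >= 201103L
#include <chrono>
static double nowSeconds()
{
  return std::chrono::duration<double>(
      std::chrono::steady_clock::now().time_since_epoch()).count();
}
#else
static double nowSeconds()
{
  return 0.0; // no usable timer available; timing effectively disabled
}
#endif
```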

@floe (Contributor) commented Jul 28, 2015

Any test reports, esp. for Windows/Mac?

@floe (Contributor) commented Aug 2, 2015

I'm slightly concerned that merging this will cause some breakage; OTOH we won't really get test reports until we do... suggestions?

@xlz (Member) commented Aug 2, 2015

@larshg develops this on Windows. I think he may have some more cleanup to do.

I won't be able to test for a few weeks.

@larshg (Contributor, Author) commented Aug 11, 2015

Yes, I develop on Windows.

I was working on removing the templated use of Texture, to make the viewer more flexible in adding more frames, but I haven't worked on it for a while due to lack of time/interest and vacations :)

I saw that @christiankerl used a variable to pass in the dividing factor for the IR and depth images. I'll try to find time to implement this soonish.

@floe (Contributor) commented Aug 12, 2015

OK, so we have it working on Linux and Windows at least. One Mac test report and I'm fine :-) Could you integrate the #include changes from my previous comment, and divide by 4500.0f in the gray shader? I'd like to merge this ASAP and add all other minor tweaks as additional PRs.

@fran6co (Contributor) commented Aug 13, 2015

Just tested it on my MacBook, no problems so far =D

@floe (Contributor) commented Aug 18, 2015

Closed via #361.

@floe closed this Aug 18, 2015