[14:58] <bschaefer> fritsch, soo im guessing that DRM branch wont be a go (if I cant show it works for multiple monitor + multiple gpu)
[14:58] <bschaefer> which is fine. I can ifndef around to keep x11 in place then just DRM for anything else
[15:16] <fritsch> bschaefer: :-)
[15:16] <fritsch> don't use ifdefs or he will get more "angry"
[15:16] <bschaefer> :)
[15:16] <fritsch> you did nothing wrong - but yeah, you hit something that broke kodi over time
[15:17] <bschaefer> yeah, the drm code was just put out in vaapi in 2014? or 15
[15:17] <bschaefer> very recent
[15:17] <fritsch> not really recent
[15:17] <fritsch> as we (kodi) helped developing the egl interop
[15:17] <fritsch> so for us not recent
[15:17] <bschaefer> yeah, im still not sure of the pro for using x11 display directly
[15:17] <fritsch> the discussion was not ontopic anymore - just some old battles fought out
[15:17] <fritsch> about recent "problems"
[15:18] <fritsch> there is no advantage
[15:18] <fritsch> none at all
[15:18] <bschaefer> (besides picking a device if youve multiple i suppose?) But i cant test that
[15:18] <fritsch> it was from ancient time, when the vaPutSurface method for vaapi was tied to the rendering / display
[15:18] <bschaefer> o i see, which im assuming your trying to decouple?
[15:18] <bschaefer> (though depends on EGL atm anyway)
[15:18] <bschaefer> from the display server
[15:18] <fritsch> i see no point in coupling a decoder on a window server
[15:18] <fritsch> if everything it needs is a EGLDisplay
[15:19] <bschaefer> right... which it already does that :)
[15:19] <fritsch> the libva guys also saw that ... if you look in the android "backend" code
[15:19]  * bschaefer will have to poke RAOF about render nodes today when he gets on
[15:19] <bschaefer> fritsch, o yeah theres like nothing there
[15:20] <bschaefer> fritsch, the issue is i cant *open* card0
[15:20] <bschaefer> since mir already opens that
[15:20] <fritsch> no need to do so
[15:20] <fritsch> let me link you something
[15:20] <bschaefer> which is why render nodes are perfect
[15:21]  * bschaefer is still waking up and starts making coffee
[15:22] <fritsch> https://lists.freedesktop.org/archives/libva/2014-September/002719.html
[15:22] <fritsch> here is the patchset that added it
[15:22]  * bschaefer looks
[15:23] <fritsch> the 2/2 patch
[15:23] <fritsch> has a test code
[15:23] <bschaefer> fritsch, o yeah i read that yesterday while digging around
[15:23] <fritsch> gwenole (the master mind of intel va) - not at intel anymore sadly - gave that test code
[15:23] <bschaefer> (well when i was running into issues with drmOpen)
[15:23] <fritsch> so, usage is perfectly fine
[15:24] <fritsch> back at the time, when neither render nodes nor EGL dma_buf sharing existed
[15:24] <fritsch> the only way to get surfaces was either use vaPutSurface
[15:24] <fritsch> which was used for rendering, therefore used "X11" or whatever dependencies
[15:24] <fritsch> but then the infrastructure was finished
[15:25] <bschaefer> which makes sense!
[15:25]  * bschaefer needs to read how render nodes was implemented in the kernel
[15:25] <bschaefer> as im not *sure* what happens if you have multiple render nodes
[15:25] <bschaefer> though reading it *shouldnt* matter
[15:25] <fritsch> I'd say nothing
[15:25] <fritsch> caus either: you can't open the node at all
[15:26] <fritsch> vaapi -> side
[15:26] <fritsch> or b) the interop fails
[15:26] <fritsch> but that should only fail if the eglDisplay / context
[15:26] <fritsch> is not valid
[15:26] <bschaefer> well lets say you have intel +  nvidia and you want to make kodi run on nvidia while your main display server runs on intel
[15:26] <bschaefer> IIRC you *can* do that
[15:27] <bschaefer> which idk ... what would happen or if we even support that :)
[15:27] <fritsch> nvidia has no support for the R8,G8 whatever sharing
[15:27] <fritsch> that means the moment the intel driver does not "own" the EGL context
[15:27] <fritsch> querying the extension will already fail
[15:27] <fritsch> I assume :-)
[15:27] <bschaefer> well then ... seems like more and more pointing to drm will work
[15:27] <bschaefer> how it already works
[15:28] <fritsch> yeah, as said - keep the PR as is for now. Things will settle and calm down
[15:28] <bschaefer> yeah, I fixed the camel case and made a small comment but figured best left to let the dust settle :)
[15:28] <bschaefer> no worries
[15:30] <bschaefer> in the meantime ill now attempt to test mir how you mentioned in hopes of really testing the difference :)
[15:31] <bschaefer> it seems faster than x11 atm (at least in my tests) for skipping at least which means... the rendering
[15:31] <bschaefer> i think (vs drop which is the decoder?)
[15:33] <fritsch> i hope it IS faster than X11 :-)
[15:34] <fritsch> btw. are you sure you use lanczos3 scaler?
[15:34] <fritsch> afaik that only had GL support last time I looked
[15:34] <fritsch> drop is the decoder, yes
[15:34] <fritsch> skipping <- render performance
[15:35] <fritsch> benchmark case: 3840x2160 h264 60p, downscaled with lanczos3 to 1080p60
[15:35] <fritsch> you might only have nearest neighbour + bilinear available, though
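For reference, the lanczos3 scaler mentioned in the benchmark boils down to a small kernel function; this Python sketch shows only the math, not kodi's GL implementation:

```python
import math

def lanczos3(x):
    """Lanczos kernel with a = 3: sinc(x) * sinc(x / 3) for |x| < 3, else 0,
    where sinc(x) = sin(pi*x) / (pi*x)."""
    if x == 0.0:
        return 1.0
    if abs(x) >= 3.0:
        return 0.0
    px = math.pi * x
    return 3.0 * math.sin(px) * math.sin(px / 3.0) / (px * px)

# The kernel is 1 at the sample itself and ~0 at every other integer tap,
# which is why the scaler passes through the original pixels exactly.
```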
[15:35] <bschaefer> fritsch, i wasnt really sure, i tried to change things but the settings when i hit o didnt seem to reflect the changes
[15:35] <bschaefer> which was confusing
[15:35] <fritsch> they don't afaik
[15:36] <fritsch> those values are held in a ProcessInfo class
[15:36] <bschaefer> o well that makes sense since when i hit o it says deinterlace method: None
[15:36] <fritsch> that's fine
[15:36] <fritsch> you don't want to deinterlace progressive content
[15:36] <fritsch> http://solidrun.maltegrosse.de/~fritsch/
[15:36]  * bschaefer checks the scaler
[15:36] <fritsch> here are some samples I used when developing IMX6 support
[15:37] <fritsch> the 1080i50 should show something else, preferably VAAPI-MCDI
[15:37] <fritsch> then there is "Bob and Deinterlace" <- be careful with them, those cause a SSE copy and either deint in render (GL) or via ffmpeg's yadif on the cpu
[15:38] <fritsch> they switch the "path" in vaapi automatically, as told on github with the Prefer VAAPI-Output
[15:38] <fritsch> which was the last line you got? :-)
[15:38]  * bschaefer froze
[15:38] <bschaefer> sometimes devices like to fight when running two display servers
[15:39] <fritsch> 16:37 < fritsch> http://solidrun.maltegrosse.de/~fritsch/
[15:39] <fritsch> 16:37  * bschaefer checks the scaler
[15:39] <fritsch> 16:37 < fritsch> here are some samples I used when developing IMX6 support
[15:39] <fritsch> 16:37 < fritsch> the 1080i50 should show something else, preferably VAAPI-MCDI
[15:39] <fritsch> 16:38 < fritsch> then there is "Bob and Deinterlace" <- be careful with them,  those cause a SSE copy and either deint in render (GL) or via  ffmpeg's yadif on the cpu
[15:39] <fritsch> sorry :-)
[15:39] <bschaefer> o sweet yeah missed all that!
[15:40] <bschaefer> right after i said checks the scaler i dropped :)
[15:40] <fritsch> btw. something for ubuntu: there is only one render node by default
[15:41] <bschaefer> fritsch, yeah youre right linear/bi
[15:42] <bschaefer> even when theres multiple gpus?
[15:42]  * bschaefer used to have a laptop that had two
[15:43] <fritsch> i think he is not correct
[15:43] <fritsch> as the dma_buf is a kernel infrastructure
[15:43] <bschaefer> yeah x11 has the lanczo3 scaler
[15:43] <bschaefer> not mir
[15:43] <fritsch> it is just used as a zero copy
[15:43] <fritsch> interface
[15:44] <bschaefer> yeah reading that link you put (which clearly states its not a 1to1)
[15:44] <bschaefer> its like that for legacy reasons
[15:44] <fritsch> which one was that?
[15:44] <bschaefer> "It’s also important to know that render-nodes are not bound to a specific card. "
[15:44] <bschaefer> https://dvdhrm.wordpress.com/tag/drm/
[15:44] <bschaefer>  Instead, if user-space requires hardware-acceleration, it should open any node and use it.
[15:45]  * bschaefer would still like better documentation on render nodes, since i had a hard time finding the *real* number after renderD<num>
[15:46] <bschaefer> found some kernel usage of 128->128+16 soo i assumed but still didnt find any real manual page
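The renderD<num> numbering being discussed can be probed like this; the 128 base and the 16-node window match the kernel usage bschaefer found, but the helper itself is illustrative, not kodi code:

```python
import os

RENDER_NODE_BASE = 128  # render-node minors start at 128 by kernel convention

def render_node_path(index):
    """Path of the index-th render node, e.g. /dev/dri/renderD128."""
    return "/dev/dri/renderD%d" % (RENDER_NODE_BASE + index)

def open_any_render_node(count=16):
    """Try renderD128..renderD(128+count-1) and return (path, fd) for the
    first node that opens, or None. Render nodes are unprivileged, so no
    DRM master / auth handshake is needed, unlike /dev/dri/card0."""
    for i in range(count):
        path = render_node_path(i)
        try:
            return path, os.open(path, os.O_RDWR)
        except OSError:
            continue  # node absent or inaccessible, try the next one
    return None
```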
[15:46] <fritsch> yeah, try to link him that thingy above if you want to get into a discussion
[15:46] <fritsch> in fact he is not right
[15:46] <fritsch> :-)
[15:46] <fritsch> but I gave up
[15:47] <bschaefer> "Yes, driver-specific user-space can figure out whether and which render-node was created by which driver, but driver-unspecific user-space should never do that!"
[15:47] <bschaefer> but kodi *is* not driver specific
[15:47] <bschaefer> soo it *shouldnt* matter
[15:47] <fritsch> jep
[15:47] <fritsch> cite that and see what happens :-)
[15:50] <bschaefer> done! But yeah I was going to poke RAOF later (which he knows far too many things...)
[15:50] <bschaefer> and hopefully can confirm the same... I can try to find more documentation or just read the kernel code
[15:53] <bschaefer> fritsch, so if im comparing against X11 and Mir doesnt support lanczos3 it seems unfair
[15:53] <bschaefer> to test them against each other using different scalers
[15:56] <fritsch> you'd need to port the GL code of lanczos3 to the EGL render
[15:56] <fritsch> you can do differently
[15:56] <fritsch> mmh, nope you can't
[15:56] <bschaefer> haha, yeah something to *do* later
[15:57] <fritsch> concerning the render nodes
[15:57] <fritsch> just open two mirs
[15:57] <fritsch> and two kodis
[15:57] <fritsch> i say: both work fine, as the EGL ctx decides
[15:57]  * bschaefer shudders at resampling for rewriting some of those bits for touch events
[15:57] <fritsch> no matter which render node in use
[15:58] <bschaefer> fritsch, thats the thing about render nodes though, they are unprivileged and allow for rendering without any auth magic
[15:58] <bschaefer> (from my reading at least)
[15:58]  * bschaefer can try to open two mir servers in kms and see what happens never really did that
[15:58] <fritsch> you can also just run kodi twice
[15:58] <fritsch> and see what happens
[15:58] <bschaefer> o true
[15:59] <fritsch> cause, consider the use case: video playback, someone comes via ssh
[15:59] <fritsch> and transcodes his video
[15:59] <fritsch> both must work
[15:59] <fritsch> :-)
[15:59] <bschaefer> fritsch, yeah no issues
[16:00] <bschaefer> with two kodis on 1 mir server
[16:02] <fritsch> bschaefer: yeah, a nice screenshot would also help
[16:02] <fritsch> that proves: the EGL ctx
[16:02] <fritsch> decides
[16:02] <fritsch> and the render node is just "below"
[16:02] <fritsch> and even copes with two processes
[16:05] <bschaefer> fritsch, like http://i.imgur.com/fEsKY5p.jpg?
[16:06] <fritsch> play a different video :-)
[16:06] <fritsch> in one of the windows
[16:06] <bschaefer> haha true
[16:06] <fritsch> but yeah, it's obvious already
[16:06] <fritsch> nice - two times 60 fps
[16:07] <fritsch> or open the "o" dialogue on one
[16:07] <bschaefer> o right
[16:09] <bschaefer> fritsch, http://i.imgur.com/0cEq880.jpg
[16:09]  * bschaefer needs a different 60fps video
[16:09] <fritsch> hehe
[16:09] <fritsch> really fine
[16:16] <bschaefer> fritsch, also there was nothing else needed to be done on the mir branch itself?
[16:26] <ogra_> bschaefer, FYI  https://code.launchpad.net/~ogra/+snap/kodi-mir-snapshot ...  (but i cant get it to run )
[16:27] <ogra_> (and arm64 is completely unknown to the kodi build system it seems)
[16:27] <bschaefer> ogra_, sweet! You cant get it to run because raspi?
[16:27] <bschaefer> or you're trying on the dragon board? I can imagine...
[16:27] <ogra_> i cant get it to run on qemu either ...
[16:27] <bschaefer> :(
[16:27] <ogra_> no, dragonboard is arm64 ... that doesnt build
[16:28] <ogra_> well, its a start ...
[16:28] <bschaefer> ogra_, https://github.com/asciper/KodiDragonboard410c
[16:28] <ogra_> on the pi i have no mir-libs
[16:28] <bschaefer> err i ment arm64
[16:28] <ogra_> oh, cool
[16:28]  * bschaefer hopes that *might* have some patches to make it work
[16:28] <bschaefer> ive yet to dig too deep into that branch
[16:29] <bschaefer> it adds an aarch64 though
[16:29] <ogra_> yeah, there are quite some changes to WinSystemX11.cpp
[16:29] <bschaefer> soo hopefully thats whats missing
[16:29] <ogra_> i guess the mir one would need them too
[16:29]  * bschaefer looks
[16:30] <bschaefer> ogra_, hmm looks like it just reports errors
[16:30] <ogra_> looking at https://github.com/asciper/KodiDragonboard410c/blob/master/patches/ubuntu/0001-Patch-for-DragonBoard410c-Kodi-build.patch
[16:30] <bschaefer> more so
[16:30] <bschaefer> yeah
[16:30] <bschaefer> diff --git a/xbmc/windows/GUIWindowSystemInfo.cpp b/xbmc/windows/GUIWindowSystemInfo.cpp
[16:30] <bschaefer> that *may* be needed
[16:31] <ogra_> anyway, i have the snaps but cant run them, there seems to be no armhf mir-libs in the store
[16:31] <bschaefer> pretty much ill need to go through and find any define for __arm__ and __aarch64__
[16:31] <bschaefer> ogra_, dang and alberto is out until nov 22
[16:31] <bschaefer> or 21st
[16:31] <ogra_> same for me (theoretically) ... i'm alread on vacation ;)
[16:31] <bschaefer> :)
[16:31]  * bschaefer is just messing with kodi upstream patches soo no real work today!
[16:32] <bschaefer> ogra_, thanks for testing that and getting a snap on launchpad!
[16:32] <ogra_> well, there is still a lot work ahead i guess ... but its a start
[16:33] <bschaefer> yeah i agree
[16:33] <ogra_> (despite a horribly hackish one)
[16:33] <bschaefer> ogra_, o also got vaapi working
[16:33] <bschaefer> for mir
[16:33] <bschaefer> soo thats good
[16:33] <bschaefer> using the DRM + EGL backend + Render Nodes
[16:33] <ogra_> nice
[16:33] <ogra_> (not much helpful for pi3 though ... which is what i'm after)
[16:33] <bschaefer> :( you mentioned that needed a kernel driver?
[16:34] <ogra_> but i guess there getting mir at all up is the biggest blocker
[16:34] <bschaefer> for actually getting mir working?
[16:34] <bschaefer> yeah :)
[16:34] <ogra_> well, tvoss and ppisati seem to work on that
[16:34] <bschaefer> cool, we can assume it'll be done eventually :)
[16:34] <ogra_> but i dont know any current status
[16:34] <bschaefer> ogra_, do you use lxd or pbuilder or chroot for arm64?
[16:35]  * bschaefer should get something setup for testing at least vs the dragon board directly
[16:35] <ogra_> after an initial bringup i tend to rely on launchpad
[16:35] <ogra_> native and really fast ...
[16:35] <bschaefer> that works as well!
[16:35] <bschaefer> cool, and thanks again! Enjoy your vacation :)
[16:36] <ogra_> :)
[16:36] <ogra_> well, i'll surely tinker around during it :)
[16:43] <fritsch> bschaefer: nope all fine I think, squash the relevant commits together and keep it standing there
[16:43] <bschaefer> fritsch, o so its bad to have like 20 commits? Wouldnt it all be merged into one when it gets pulled into xbmc?
[16:44] <fritsch> nope
[16:45] <fritsch> you can decide what you want to squash before hand
[16:45] <fritsch> it needs to be: bisectable after squashing
[16:45] <bschaefer> well ill just squash into one commit Mir windowing system
[16:45]  * bschaefer will look into that
[16:45] <fritsch> commits "fitting together" need to be squashed
[16:45] <bschaefer> hmm "fitting together?"
[16:45] <fritsch> example:
[16:46] <fritsch> Vaapi: 2 changes, LinuxRenderGLES: 3 changes
[16:46] <fritsch> the vaapi changes are a feature + a fixup for that feature => squash the two commits
[16:46] <fritsch> for linux render, say 1 bugfix and one new feature spread over 2 commits
[16:46] <fritsch> squash the 2 feature commits into one
[16:46] <fritsch> and leave the bugfix separate
[16:47] <fritsch> in your case: squash everything together that implements only MIR
[16:47] <bschaefer> o ok that makes sense
[16:47] <bschaefer> yeah
[16:47] <fritsch> if there was bugfixes for generic code, leave those as is
[16:47] <bschaefer> since its a feature + no bug fixes since it wasnt around before :)
[16:47] <fritsch> yeah
[16:47]  * bschaefer doesnt think he fixed anything for xbmc in this branch
[16:49] <fritsch> btw. rethinking the GPU1 and GPU2 scenario
[16:49] <fritsch> that scenario would be: intel gpu1, amd gpu2
[16:49] <fritsch> cause neither nvidia nor fglrx support dma_buf (GPL)
[16:50] <fritsch> so that would mean, you need to use the intel gpu1 for processing (as gpu2 would not do vaapi anyways)
[16:50] <bschaefer> thats a more realistic example
[16:50] <fritsch> now you needed to display on gpu2, which is AMD
[16:50] <fritsch> but - this card does not support the R8G8 mesa extension
[16:50] <fritsch> :-)
[16:50] <fritsch> so, figure
[16:51] <fritsch> https://linux.slashdot.org/story/12/10/11/1918251/alan-cox-to-nvidia-you-cant-use-dma-buf <- nvidia
[16:51]  * bschaefer trying to figure out *when* a render node is allowed to open
[16:51] <bschaefer> anytime?
[16:51] <bschaefer> i assume so
[16:51] <fritsch> and something else:
[16:51] <fritsch> there won't be two heads active
[16:52] <fritsch> even not in a laptop
[16:52] <fritsch> so the given code does not run on amdgpu via vaapi as of now
[16:52] <fritsch> and with nvidia it would make zero sense, that you choose the nvidia rendernode
[16:52] <fritsch> as that is not vaapi capable
[16:52] <bschaefer> true, i think that branch is a step forward (ie. it does everything that is currently supported)
[16:53] <bschaefer> from what i understand
[16:53] <fritsch> yeah
[16:53] <bschaefer> i suppose possibly more voices will come through in a few days
[16:53] <fritsch> there is only exactly one "problem"
[16:53] <fritsch> gpu1 decodes via vaapi
[16:53] <fritsch> gpu2 would be needed to display and would support R8G8 dma_buf sharing
[16:53] <fritsch> what would not happen, that's the only case I see a problem with
[16:54] <fritsch> cause for now - we fail much earlier
[16:54] <fritsch> and there are simply not two intel gpus
[16:54] <fritsch> you cannot just "stick" 2 of them into your computer
[16:54] <bschaefer> i suppose for ... imagining sake. *if* there were two active GPUs
[16:54] <bschaefer> it *shouldnt* matter from what im reading... render nodes *do not* 1to1 to a card
[16:54] <fritsch> if there were, both need to support dma_buf and both need the mesa RG8 extension
[16:54] <fritsch> jep, they don't
[16:54] <bschaefer> for legacy reasons render nodes are created per device
[16:55] <bschaefer> does not mean they map 1to1
[16:55] <bschaefer> 1toany is more like it
[16:55] <fritsch> dma_buf is a kernel API
[16:55] <fritsch> it does not matter which gpu is rendering
[16:55] <fritsch> as long as it supports dma_buf and the extension to create the image
[16:55] <fritsch> yeah, take your time to reply ...
[16:55] <fritsch> :-)
[16:55] <fritsch> getting popcorn
[16:55] <bschaefer> its kind of misleading to be honest and i was thinking about the same :)
[16:56] <bschaefer> im going to wait a bit
[16:56] <bschaefer> as, i dont think that paper helped sway his point of view
[16:57] <fritsch> yeah
[16:58] <bschaefer> at this point, i mean ill need a concrete example to convince, which i dont have the hardware for atm :)
[16:58] <fritsch> there is no such hw combination
[16:58] <fritsch> for what he is asking
[16:59] <fritsch> as there is no GPU besides intel that has this support: https://github.com/BrandonSchaefer/xbmc/blob/6a668e62f15c8114cb43585af8760ed7a42d01dd/xbmc/cores/VideoPlayer/DVDCodecs/Video/VAAPI.cpp#L1285
[16:59] <bschaefer> a mythical device
[17:00] <fritsch> he might come up with - then let's share with RGBA ...
[17:01]  * bschaefer doesnt actually fully understand that code :)
[17:01] <bschaefer> as i understand it, you get a raw decoded image
[17:01] <fritsch> it's quite easy
[17:01] <bschaefer> from vaapi then copy it
[17:01] <bschaefer> (into a texture then render?)
[17:01] <fritsch> you tell vaapi to derive an image
[17:01] <fritsch> use that via vaAcquireBufferHandle
[17:02] <fritsch> and now create an egl image of it
[17:02] <bschaefer> which is just the decoded buffer from the hardware?
[17:02] <fritsch> yeah
[17:02] <fritsch> then the eglCreateImageKHR is used
[17:02] <fritsch> to create Y
[17:02] <fritsch> and VU
[17:02] <fritsch> this is later used in a yuv2rgb shader for display
[17:02] <bschaefer> then just render this through a shader?
[17:02] <fritsch> jep
[17:02] <bschaefer> well that makes sense
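The yuv2rgb shader step described above is plain per-pixel arithmetic; this sketch assumes limited-range BT.601 coefficients (the real renderer picks coefficients per stream, so the constants here are illustrative):

```python
def yuv_to_rgb(y, u, v):
    """Limited-range BT.601 YUV -> full-range 8-bit RGB, the same math a
    yuv2rgb fragment shader applies per pixel (coefficients assumed)."""
    c = y - 16    # remove the limited-range luma offset
    d = u - 128   # center the chroma samples
    e = v - 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b
```

On the GPU the Y sample comes from the R8 texture and the UV pair from the GR88 texture; the formula is the same.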
[17:02] <bschaefer> and you need EGL_LINUX_DRM_FOURCC_EXT to get that raw image?
[17:03] <fritsch> that's the format we use
[17:03] <fritsch> YUV 8 bit
[17:03] <fritsch> Y -> 8 bit
[17:03] <bschaefer> "This extension allows creating an EGLImage from a Linux dma_buf file descriptor or multiple file descriptors in the case of multi-plane YUV images."
[17:03] <fritsch> and UV
[17:03] <bschaefer> right
[17:03] <fritsch> yeah, was created for kodi :-)
[17:03] <bschaefer> otherwise you would have to convert?
[17:03] <bschaefer> o very nice!
[17:03] <fritsch> otherwise we needed to say to vaapi: create a RGBA
[17:04] <fritsch> and 4*8 = 32 bits vs 8 + 2*8/4 = 12 bits
[17:04] <bschaefer> i suppose the shader could do that... possibly
[17:04] <fritsch> and doing yuv-> RGBA needs an intermediate
[17:04] <fritsch> yeah, but why?
[17:04] <bschaefer> haha yeah :)
[17:04] <bschaefer> better then software
[17:04] <bschaefer> (looking at how bad software was!)
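The two-plane split that the Y (R8) and interleaved-UV (GR88) EGLImage imports map onto can be sketched as layout math; a tight pitch equal to the width is assumed here, while real surfaces are usually padded:

```python
def nv12_layout(width, height):
    """Plane sizes/offsets for an NV12 surface (pitch == width assumed).
    Y is full resolution, one byte per pixel; UV is interleaved at half
    resolution in both dimensions, two bytes per sample."""
    y_size = width * height                      # imported as an R8 image
    uv_size = (width // 2) * (height // 2) * 2   # imported as a GR88 image
    return {
        "y_offset": 0, "y_pitch": width, "y_size": y_size,
        "uv_offset": y_size, "uv_pitch": width, "uv_size": uv_size,
        # 12 bits/pixel overall, vs 32 for an RGBA intermediate
        "bits_per_pixel": 8 * (y_size + uv_size) / (width * height),
    }
```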
[17:04] <fritsch> I am currently in contact with the intel guys to get RG16 and R16
[17:04] <bschaefer> o nice
[17:05] <fritsch> as their hevc-10 bit is stored in P010 format
[17:05] <fritsch> basically same as NV12
[17:05] <fritsch> but 16 bit, and only the 10 bits are relevant
[17:05] <fritsch> so the shader afterwards needs to do something "additional"
[17:05] <fritsch> down dithering for 8 bit rendering
[17:05] <fritsch> and extracting the values
[17:05] <bschaefer> huh its 16 bits but you only use 10
[17:05] <bschaefer> for p010
[17:05] <fritsch> jep
[17:05]  * bschaefer has never heard of that
[17:06] <fritsch> how would you efficiently store 10 bit?
[17:06] <fritsch> :-)
[17:06] <fritsch> while still using fast data access?
[17:06] <bschaefer> haha yeah
[17:06] <fritsch> and that way we are ready for 12 bit, 16 bit and so on
[17:06]  * bschaefer would hope to use those extra 6 bits for more data
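P010's packing, as described above, puts each 10-bit sample in the top of a 16-bit word with the low 6 bits as padding; a sketch of the extraction (the shader does the equivalent on the GPU):

```python
def p010_to_10bit(word16):
    """Extract the 10-bit sample from a P010 16-bit word: the value sits
    in the top 10 bits, the low 6 bits are zero padding."""
    return word16 >> 6

def p010_truncate_to_8bit(word16):
    """Crude down-conversion to 8 bits by keeping the top 8 bits; a real
    renderer would dither down rather than truncate."""
    return word16 >> 8
```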
[17:07] <fritsch> btw. "more data"
[17:07] <fritsch> i hope MIR has something like metadata
[17:07] <fritsch> all that 10 bit to 8 bit
[17:07] <fritsch> limited range to full range
[17:07] <fritsch> just sucks for userspace
[17:07] <fritsch> we would normally just say: here -> texture / image / whatever -> it is limited range
[17:07] <bschaefer> hmm most pixel formats we have are 32 bit + some 24 bits
[17:07] <fritsch> display it properly
[17:07]  * bschaefer isnt fully sure
[17:07] <fritsch> that's the desktop
[17:07] <fritsch> yes
[17:08] <fritsch> but when doing video we have limited range by default
[17:08] <fritsch> 16 .. 235
[17:08] <fritsch> values
[17:08] <fritsch> if you display them 1:1 on a full rgb display
[17:08] <fritsch> you end up without whites
[17:08] <fritsch> and without blacks
[17:08] <bschaefer> o eww
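The limited-range problem above can be shown numerically; a minimal sketch of the 16..235 to 0..255 expansion a display path has to apply (standard range-expansion formula, not kodi code):

```python
def expand_limited_to_full(y):
    """Map limited-range luma (16..235) to full range (0..255); values
    outside the nominal range are clamped."""
    return max(0, min(255, round((y - 16) * 255 / 219)))

# Shown 1:1 on a full-range display, video black (16) and white (235)
# come out as dark/light grey; expanded, they hit true 0 and 255:
print(expand_limited_to_full(16), expand_limited_to_full(235))  # 0 255
```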
[17:08] <fritsch> that's why metadata is so important
[17:08] <bschaefer> thats something RAOF would know :)
[17:08] <fritsch> i am quite sure he does
[17:08] <fritsch> now good luck with your reply
[17:08] <bschaefer> haha
[17:09] <bschaefer> thanks
[17:09] <fritsch> afk a bit
[17:09]  * bschaefer does the same
[20:43] <bschaefer> fritsch, also is there interest in supporting arm64 in kodi? At least for me there is :)
[20:43] <bschaefer> configure: error: unsupported native build platform: aarch64-unknown-linux-gnu
[20:43] <bschaefer> https://launchpadlibrarian.net/293127083/buildlog_snap_ubuntu_xenial_arm64_kodi-mir-snapshot_BUILDING.txt.gz
[21:38] <bschaefer> ogra_, also you can try cmake vs their autotools.... since they plan on deprecating autotools: http://bazaar.launchpad.net/~brandontschaefer/+junk/xbmc-snap/view/head:/snapcraft.yaml#L48
[21:38] <bschaefer> its commented out (plus you can fix some of the extra options since you've ffmpeg manually)