[03:25] <ashes> hello
[03:25] <ashes> my arm system's gpu is poorly supported by nvidia. can i play videos and firefox on another machine, and display it in a remote x11 window, or does that still use the arm gpu the same?
[03:29] <raster> don't even bother
[03:30] <raster> u will be transferring pixels over the network in uncompressed full framebuffer formats
[03:30] <raster> (xshmputimage pixel blobs)
[03:30] <raster> if u are super-lucky it may xfer yuv
[03:30] <raster> but then it'll be using your arm's gfx unit to convert yuv-->rgb + scale
[03:31] <raster> (xv)
[03:31] <raster> chances are it wont use xv, as in a browser it has to composite video into the document tree, so it's converting and scaling to rgb space.
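The arithmetic behind "don't even bother" can be sketched in a few lines. The resolution and frame rate here are illustrative assumptions (1080p at 30 fps), not figures from the log:

```python
# Uncompressed frame bandwidth vs. a gigabit link.
# Assumed, illustrative numbers: 1920x1080 video at 30 fps.
width, height, fps = 1920, 1080, 30

rgb32_bytes = width * height * 4          # XShmPutImage-style RGB32 pixel blobs
yuv420_bytes = width * height * 3 // 2    # Xv-style planar YUV 4:2:0

rgb_rate_mb = rgb32_bytes * fps / 1e6     # ~249 MB/s
yuv_rate_mb = yuv420_bytes * fps / 1e6    # ~93 MB/s
gigabit_mb = 1e9 / 8 / 1e6                # 125 MB/s theoretical line rate

print(f"RGB32:   {rgb_rate_mb:.0f} MB/s")
print(f"YUV420:  {yuv_rate_mb:.0f} MB/s")
print(f"gigabit: {gigabit_mb:.0f} MB/s")
```

Even the lucky YUV case eats most of a gigabit link's practical throughput, and the RGB case exceeds even the theoretical line rate outright, before any CPU cost for copies is counted.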
[03:31] <raster> as such the nvidia drivers for tegra3 at least are a bit poor
[03:32] <raster> they dont do vsync, they dont do buffer swaps (they do dumb copies)
[03:32] <raster> for gl anyway
[03:32] <raster> so performance-wise you probably lose something like 20-50% of your performance AND you get ugly tearing
[03:32] <raster> on local rendering with gl
[03:32] <ashes> it's a tegra2
[03:32] <raster> remote display is not a world you even want to touch with a barge pole
[03:33] <raster> i dont know about tegra2 - but i believe it's the same drivers for 2 and 3
[03:33] <ashes> i already do it
[03:33] <ashes> with better systems
[03:33] <raster> u dont want to do remote display even with better systems
[03:33] <ashes> but nothing with crazy graphics
[03:33] <ashes> i dont know if i ever tried mplayer remotely
[03:34] <raster> the bandwidth of a network is something like 1/1000th of that of a local display
[03:34] <raster> i am assuming probably wifi connectivity
[03:34] <ashes> no, gigabit switch
[03:34] <raster> you also have latency out the wazoo
[03:34] <ashes> this arm and other system would be connected with a crossover cable
[03:35] <raster> remote display is not something you want to do.. ever.. voluntarily.. unless you have no choice
[03:35] <raster> and you never do it if u care about performance
[03:35] <raster> ever
[03:35] <ashes> my main use for remote x11 is to use handbrake for video editing. the handbrake application doesn't do many frames per second
[03:36] <raster> again - don't
[03:36] <raster> but it's your time and effort
[03:36] <raster> do what you like with it
[03:36] <ashes> in my little experience, remote x11 works just fine for what i have used it for
[03:37] <ashes> anyway. my original question
[03:37] <raster> i've been doing gfx for pushing on 30 years. written toolkits, apps and wm's for x11/linux for 17 years. this is my simple advice.
[03:37] <raster> don't do remote display unless u can totally avoid it.
[03:38] <raster> results all depend on HOW it displays
[03:38] <raster> and that varies from app to app and toolkit to toolkit
[03:38] <raster> and depends on the capabilities of the xserver
[03:39] <raster> fewer and fewer apps/toolkits use regular remote rendering and more and more are pixel pushing or using gl. gl remote is not an option for u due to it being egl/gles2
[03:40] <raster> well not unless u wish to write an extension for it and do all the plumbing too
[03:41] <raster> and even then xfer of data (vertices, textures etc.) will be raw, and slow over a network compared to locally - literally 1/1000th the speed
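As a sanity check on that 1/1000th figure, here is the ratio with rough numbers. All three figures are assumptions in the ballpark of values mentioned elsewhere in this log (500-1500 MB/s memcpy, ~1 MB/s effective over a wifi+ssh path, ~100 MB/s practical gigabit):

```python
# Local copy bandwidth vs. network transfer bandwidth (rough, assumed figures).
memcpy_mb = 1000.0            # mid-range of the 500-1500 MB/s quoted in the log
wifi_effective_mb = 1.0       # pessimistic effective rate over wifi + ssh
gigabit_practical_mb = 100.0  # practical gigabit throughput

print(f"vs wifi:    {memcpy_mb / wifi_effective_mb:.0f}x")     # ~1000x
print(f"vs gigabit: {memcpy_mb / gigabit_practical_mb:.0f}x")  # ~10x
```

So the 1/1000th ratio fits the wifi case; over gigabit the raw gap is closer to 10x, though the latency and per-byte CPU costs discussed later in the log still apply.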
[03:42] <ashes> with the systems i already do remote x11 with, it's with a 28MB/s ssh connection
[03:42] <raster> u're talking an arm system
[03:42] <ashes> yes
[03:42] <ashes> which would be slower
[03:42] <ashes> i wouldn't use x11
[03:42] <ashes> uh
[03:42] <ashes> shh
[03:42] <ashes> ssh
[03:42] <raster> that will have to expend likely 50% of its cpu resources in JUST handling the ssh decryption
[03:42] <ashes> although the tegra2 is a quad core
[03:43] <raster> u will not have a lot left over at that bandwidth
[03:43] <raster> no
[03:43] <raster> it's 2 cores
[03:43] <ashes> could use telnet
[03:43] <ashes> ok, dual core
[03:43] <lilstevie> tegra2 is dual core
[03:43] <lilstevie> 2 very weak cores
[03:43] <raster> u'll likely peg a cpu core on ssh
[03:43] <lilstevie> by arm standards
[03:43] <lilstevie> raster, don't be silly
[03:43] <raster> *IF* u can even maintain that bw
[03:43] <lilstevie> arm isn't that terrible
[03:44] <raster> lilstevie: it is in my experience.
[03:44] <lilstevie> I use ssh all the time both server and client on my tf201
[03:44] <raster> if you want it to run an ssh connection sustaining 300mbit
[03:44] <lilstevie> and it is nowhere near that bad
[03:44] <raster> THEN u have to add protocol decode, memcpy's
[03:45] <ashes> ok. to somewhat change the topic, what use can i get from a trimslice, with a tegra2, 1GB memory, and 500GB storage (other than a nas, because i already have a better one)?
[03:45] <lilstevie> ashes, well I use mine as a builder
[03:45] <lilstevie> for kernels
[03:46] <ashes> yes, ok. what else?
[03:47] <lilstevie> thats all I do with it
[03:47] <lilstevie> trimslice is getting fairly old these days
[03:47] <ashes> my idea was to give it to my 6 year old nephew as a desktop, but he will expect to play movies and youtube with it
[03:48] <raster> lilstevie: tegra3. 1.2ghz. sshd consumes about 40% cpu of 1 core to keep an scp at 2.8mb/sec
[03:48] <raster> right here now.
[03:48] <raster> doing it
[03:48] <ashes> raster: use arcfour
[03:48] <raster> ashes is talking of wanting to sustain 28mb/s
[03:48] <raster> thats 10x
[03:48] <raster> u wont even sustain it
[03:49] <raster> damnit
[03:49] <raster> damn cat
[03:49] <raster> tegra3
[03:49] <raster> 1.2ghz
[03:49] <lilstevie> good luck sustaining that rate in general on the trimslice though
[03:49] <lilstevie> even though it is a gigabit card, I do find it cannot sustain that kind of speed
[03:49] <raster> yup
[03:49] <ashes> raster: scp -o Ciphers=arcfour
[03:49] <raster> thats what i said
[03:50] <raster> it'll peg a core at 100% cpu
[03:50] <ashes> and forget ssh
[03:50] <ashes> i can use telnet
[03:50] <lilstevie> raster, it isn't so much the core, I don't think the pci-e bus is fast enough to support the rates of a gigabit card
[03:50] <raster> ashes: even with that it runs about 25% cpu
[03:50] <raster> for 3mb/sec
[03:51] <raster> again
[03:51] <raster> at 10 TIMES that bandwidth.. there is simply not enough compute
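The "not enough compute" claim can be checked by extrapolating raster's own measurement (40% of one 1.2 GHz core to sustain 2.8 MB/s of scp), assuming as a simplification that cipher cost scales linearly with throughput:

```python
# Extrapolating the measured sshd cost to the 28 MB/s target rate.
measured_rate_mb = 2.8     # MB/s of scp observed in the log
measured_cores = 0.40      # fraction of one core consumed at that rate
target_rate_mb = 28.0      # the rate ashes wants to sustain

needed_cores = measured_cores * (target_rate_mb / measured_rate_mb)
print(f"estimated cores needed: {needed_cores:.1f}")  # ~4.0
```

That comes out to roughly four cores' worth of cipher work on a dual-core Tegra2, before the X server or the application has run a single instruction. raster's arcfour figure (25% of a core for ~3 MB/s) extrapolates the same way to over two full cores.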
[03:51] <lilstevie> raster, also in your config how is the network attached
[03:51] <ashes> telnet would be dramatically faster
[03:51] <raster> lilstevie:  this is wifi - thus the low rates
[03:51] <raster> i am not caring about that end of things
[03:51] <lilstevie> raster, well that is probably putting a little bit of strain on the cpu too
[03:51] <lilstevie> :p
[03:51] <raster> the cpu overhead alone of sshd will peg your bandwidth
[03:52] <raster> sure
[03:52] <ashes> or rsh
[03:52] <raster> my point is for remote display you have several things that will kill the ui
[03:52] <raster> 1. network device handling itself (packet handling and at least 1 or 2 memcpy's there)
[03:53] <lilstevie> eh, remote display is problematic on x86 in the best of conditions
[03:53] <raster> then more memcpy's within the xserver
[03:53] <raster> vs only a single memcpy locally for local display
[03:53] <raster> add to that the bandwidth bottleneck of the network device
[03:54] <raster> remember memcpy bandwidth will clock in at the magnitudes of 500-1500mb/sec
[03:54] <raster> at least on something tegra2-land
[03:54] <raster> or tegra3
[03:54] <raster> your gigabit card will drop you to 100mb/s
[03:54] <raster> thats IN THEORY.. if u can sustain it
[03:55] <raster> add to that sshd bottlenecking your connection maybe to 10mb/sec on the best of days
[03:55] <raster> and soaking up a whole core on its own
[03:55] <raster> add in the protocol handling by xserver, memcpy's etc.
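One way to see how those stacked copies and bottlenecks compound is a toy serial-pipeline model: every byte passes through each stage in turn, so per-byte costs add. The stage bandwidths are the rough figures from this discussion; the model deliberately ignores latency stalls and protocol chatter, which is why it lands above raster's 1-2 MB/s estimate:

```python
# Toy serial-pipeline model of remote-display throughput: per-byte
# costs of each stage add up. Stage bandwidths (MB/s) are the rough
# figures from the log; the model is a deliberate simplification.
stages = {
    "gigabit NIC (practical)": 100.0,
    "sshd (best of days)":      10.0,
    "xserver protocol/memcpy": 500.0,
    "memcpy to screen":        500.0,
}
seconds_per_mb = sum(1.0 / bw for bw in stages.values())
effective_mb = 1.0 / seconds_per_mb
print(f"effective bandwidth: {effective_mb:.1f} MB/s")  # ~8.8 MB/s
```

Even this optimistic model caps out below 9 MB/s; add round-trip latency and X protocol overhead and a 1-2 MB/s effective figure stops looking pessimistic.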
[03:55] <ashes> you may very well know exactly what you're talking about, but i'll try it to see for myself
[03:56] <raster> you are now looking, purely bandwidth-wise, at like 1-2mb/sec of effective bw
[03:56] <ashes> because i want to get some use out of it
[03:56] <raster> now throw in latencies more like 1-2ms
[03:56] <raster> where local latencies (round trips) are more in the 1/100th or less of that...
[03:57] <raster> unless your app is so insanely trivial that the overheads barely have visible effects... it'll be nasty to do remote x11 display however you look at it
[03:57] <raster> the model that works is high level control protocol
[03:57] <raster> with local display
[03:58] <raster> the problem comes when you simply cant avoid xferring large gobs of data around as part of the display
[03:59] <raster> it's then that you play tricks in downgrading the data quality (eg downscaling by 2x2 or 4x4 and using local gpu+interpolation to upscale again) to retain interactivity
[03:59] <raster> and you are only xferring some subset of the frame data across the network
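A minimal sketch of that 2x2 trick, in pure Python on a toy grayscale frame. In a real pipeline this would run on video planes, with the receiving GPU doing the upscale with proper interpolation:

```python
# Sketch of the 2x2-downscale trick: send a quarter of the pixels and
# let the local side upscale again.
def downscale_2x2(frame):
    """Average each 2x2 block into one pixel (frame dims must be even)."""
    h, w = len(frame), len(frame[0])
    return [
        [
            (frame[y][x] + frame[y][x + 1] +
             frame[y + 1][x] + frame[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

def upscale_2x2(frame):
    """Nearest-neighbour 2x upscale; a GPU would interpolate instead."""
    return [
        [pix for pix in row for _ in (0, 1)]
        for row in frame
        for _ in (0, 1)
    ]

frame = [[0, 0, 4, 4],
         [0, 0, 4, 4],
         [8, 8, 12, 12],
         [8, 8, 12, 12]]
small = downscale_2x2(frame)    # 4x fewer pixels to push over the wire
restored = upscale_2x2(small)
print(small)                    # [[0, 4], [8, 12]]
print(restored == frame)        # True here, since each 2x2 block was uniform
```

The payload drops to a quarter; the quality cost shows up only where a 2x2 block isn't uniform, which is what interpolation on the receiving side is there to smooth over.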