[00:30] <TrickyJ> Good morning :)
[01:21]  * karni eods, c u tomorrow everybody
[02:35] <eross> can someone give me the dummy's skinny on how this works? i just bought an album and it's transferring to the ubuntu-1 storage area.. what can i do after that, re-download again to local?
[02:35] <eross> seems to be slow or stalled
[02:38] <eross> nm looks like the files are on some cloud
[02:40] <eross> how do i get at them
[11:37] <jderose> aquarius: okay, got oauth stuff basically working, but i'm trying to understand why you some times append an Authorization header, other times use a signed url?
[11:39] <aquarius> jderose, um...can't remember :)
[11:40] <aquarius> jderose, when I originally started I used signed URLs because it's easier (you can test the URL in your browser directly), but that doesn't work with futon because futon pages introspect their own URL to do stuff, and having loads of extra query parameters screwed that up
[11:40] <aquarius> so i went to headers instead
[11:40] <aquarius> but I'm not sure why I didn't go entirely to headers
[11:41] <jderose> yeah, i tried it just using the header, but it's not working
[11:42] <jderose> aquarius: so oauth doesn't sign the entire request the way the amazon aws api does, correct?
[11:43] <jderose> aquarius: I mean, oauth just works from the uri, not the headers of the request
[11:43]  * jderose is getting tired :)
[11:43] <aquarius> correct; an oauth signature is composed of the URL, the HTTP method, a timestamp, and a nonce.
[11:44] <aquarius> the body and the other headers are not part of the signature calculation
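The signature calculation aquarius describes can be sketched as follows. This is a minimal OAuth 1.0 HMAC-SHA1 sketch assuming the standard spec (RFC 5849); the helper names `percent_encode` and `oauth_signature` are illustrative, not from any Ubuntu One code:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def percent_encode(value):
    """RFC 3986 percent-encoding, as OAuth 1.0 requires."""
    return quote(str(value), safe="")


def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    """HMAC-SHA1 signature over the HTTP method, the URL and the oauth
    parameters (which include oauth_timestamp and oauth_nonce); the request
    body and other headers are not part of the calculation."""
    normalized = "&".join(
        "%s=%s" % (percent_encode(k), percent_encode(v))
        for k, v in sorted(params.items())
    )
    base_string = "&".join(
        [method.upper(), percent_encode(url), percent_encode(normalized)]
    )
    key = "%s&%s" % (percent_encode(consumer_secret), percent_encode(token_secret))
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The resulting value can be carried either in an Authorization header or as an `oauth_signature` query parameter on a signed URL, which is why both styles appear in the discussion above.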
[11:44] <duanedesign> hello aquarius
[11:44] <jderose> okay, gotcha
[11:44] <duanedesign> morning all
[11:44] <aquarius> duanedesign, heya!
[11:44] <jderose> morning duanedesign!
[11:45] <aquarius> jderose, ya, I'm not sure why it doesn't work with headers all the time either. It ought to :)
[11:45] <duanedesign> hello jderose
[11:45] <jderose> aquarius: but based on your emminse
[11:46] <jderose> damnit, can't type
[11:46] <duanedesign> jderose: were you in talking to karni the other day?
[11:46] <jderose> aquarius: but based on your immense oauth knowledge, it should work with just headers?
[11:46] <duanedesign> jderose: oh, and hello :)
[11:46] <nessita> hello everyone
[11:47] <jderose> duanedesign: i don't know who karni is, so i don't think so :)
[11:47] <duanedesign> :)
[11:49] <aquarius> jderose, well...it should, yes, in theory. I'm not sure why it doesn't :)
[11:50] <jderose> aquarius: well, i'll poke around some more.  i'll let you know if i figure it out.   thanks!
[11:54] <karni> hi guys. duanedesign, I probably pinged you with some guy's question, didn't I :)
[11:56] <jderose> aquarius: ug, perhaps i found out why... looks like the request headers are immutable from Python! crap.
[11:56] <jderose> well, time for bed. later!
[12:01] <duanedesign> hey karni !
[12:01] <karni> duanedesign: holla :)
[12:22] <karni> bbl
[12:25] <ralsina> good morning!
[12:25] <nessita> hi ralsina
[12:27] <ralsina> hola naty
[12:46] <ralsina> nessita: any reviews for me?
[12:47] <nessita> ralsina: yes! https://code.launchpad.net/~nataliabidart/ubuntuone-control-panel/confirm-before-delete/+merge/47324
[12:47] <ralsina> cool, will review in 10' after updates download
[12:53] <nessita> ralsina: awesome!
[12:53] <ralsina> After that I have to run a review on windows so the day becomes markedly less fun ;-)
[12:55] <nessita> :-)
[12:56] <nessita> so, ralsina, I need to consult you about something. I'm about to do a release for the control panel, and the services tab is not working right: the call to list all desktopcouch replications is (still) blocking, and it seems there is an issue with the plugins where the desktopcouch service is frozen or deadlocked when starting up
[12:57] <ralsina> ouch
[12:57] <nessita> so basically the u1cp backend freezes for ~30 seconds or maybe more
[12:57] <nessita> which is awful
[12:57] <nessita> my proposal is the following:
[12:57] <ralsina> You could do a release without that tab
[12:57] <nessita> * hide the 'Services' tab for this release
[12:57] <ralsina> :-)
[12:58] <nessita> * define a way of starting DC in a thread or similar for next release to not depend on dc blockages
[12:58] <alecu> nessita, can you point me at that bit of code?
[12:58]  * pedronis back
[12:58] <nessita> alecu: u1/cp/replication_client.py
[12:58] <ralsina> nessita: check canonicaladmin you have good news there ;-)
[12:58] <alecu_> ack
[12:59] <nessita> alecu_: the problem is with this line:      64         result = replication_module.ReplicationExclusion()
[12:59] <alecu> I see
[12:59] <ralsina> nessita: you have a comment by rye in that branch, did you see it?
[13:00] <nessita> nopes, let me check
[13:00] <nessita> I've been in a call with i-vanka up to just now
[13:01] <rye> ralsina, i believe I should not make such comments in branch reviews
[13:01] <nessita> I think it was a joke :-)
[13:01] <ralsina> rye: hahaha ok then :-)
[13:01]  * ralsina thought the "no" button was disabled or something
[13:02] <nessita> alecu: I think we need to make the replication_module.ReplicationExclusion call in a thread, but I don't know how to join the thread without blocking the caller
[13:03] <rye> ralsina, commented again clarifying what I meant in case I get back to that review in a week and have no idea what I meant at that time.
[13:04] <ralsina> rye: jokes are always ok in principle for me, so don't worry :-)
[13:04] <ralsina> nessita: add a repeating timer that checks to see if the thread is running.
[13:04] <rye> ralsina, well, I will try to re-read the thing I am posting since the way I wrote that clearly indicates something bug-related and then "Approved". yeah, right...
[13:05] <rye> not joking today anymore
[13:05] <nessita> ralsina: and the timer runs where?
[13:05] <ralsina> nessita: event loop?
[13:05] <nessita> ralsina: in the glib loop, you will say
[13:05] <ralsina> nessita: exactly
[13:05] <nessita> ralsina: but we're trying to make this g-agnostic
[13:05] <ralsina> nessita: there's no way to have a g-agnostic timer, I'm afraid
[13:06] <ralsina> nessita: you could create a class that can be patched for other event loops, so at least it's always in one place
[13:06] <nessita> ralsina: so, since only the backend executable knows about glib, we may need to pass the timer class as a class creation parameter down to the lower layers?
[13:06] <ralsina> Or you could use unix signals but that's hell on a handbasket
[13:06] <ralsina> nessita: right
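The "class that can be patched for other event loops" idea might look like this. This is a hypothetical sketch: `PollTimer` is not an existing class, and the glib-aware backend executable would be expected to swap in a `timeout_add`-based replacement:

```python
import threading


class PollTimer:
    """Event-loop-agnostic repeating timer.

    This default implementation uses threading.Timer; the backend
    executable (the only layer that knows about glib) can substitute a
    subclass built on glib's timeout_add, keeping the patch point in
    one place as ralsina suggests."""

    def __init__(self, interval, callback):
        self.interval = interval   # seconds between firings
        self.callback = callback   # return True to keep polling
        self._timer = None

    def start(self):
        self._timer = threading.Timer(self.interval, self._fire)
        self._timer.daemon = True
        self._timer.start()

    def _fire(self):
        # reschedule only while the callback asks to keep polling
        if self.callback():
            self.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()
```

The lower layers would receive this class as a creation parameter and never import glib themselves.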
[13:07]  * ralsina hates even the sound of this
[13:07] <nessita> ralsina: I'm pondering the fact that desktopcouch *should* start in a few seconds
[13:07] <nessita> not in 30. There is a bug there.
[13:07] <ralsina> nessita: yes, but still, blocking for 5 seconds is not really acceptable either
[13:07] <nessita> so, how much effort do we want to put in the cp considering that DC is not working?
[13:07] <ralsina> Is there a way to check non-blocking if it's running?
[13:08] <nessita> ralsina: not a clean way that I know of
[13:08] <nessita> other than asking for pidof or something like that
[13:08] <ralsina> nessita: not that much, I'd say. First we should make it start as fast as possible, and if that is not acceptable, then we hack with threads
[13:08] <ralsina> Else, it's premature optimization
[13:09] <ralsina> this is in the control panel backend or in the frontend?
[13:09] <nessita> ralsina: backend
[13:10] <ralsina> the backend could just start dc in background and the frontend would wait for it. We can use glib on the frontend
[13:10] <nessita> which is ugly since the UI is not frozen but shows 'loading' screens everywhere
[13:10] <ralsina> Or am I saying nonsense?
[13:10] <nessita> ralsina: you are :-) the frontend accesses the backend thru dbus, so the frontend only does method calls and connects callbacks to signals
[13:11] <ralsina> nessita: but, as an exception to keep the backend glib-free... ok, forget it :-)
[13:11] <nessita> the backend needs to return immediately after a call (which it is not doing right now in this case)
[13:11] <ralsina> nessita: right, that's why I said the backend should just start it and forget it.
[13:12] <ralsina> ok, looking at the code forget it, I would break half of it
[13:13] <ralsina> So, back to "hide the tab, complain about startup time, don't sweat it on control panel yet"
[13:13] <ralsina> maybe it will start in .2 seconds next week ;-)
[13:13] <alecu> nessita, ralsina: I've started digging into desktopcouch and couchdb python modules, and it seems we may run all that in a thread.
[13:13] <nessita> my point exactly
[13:14] <nessita> alecu: not sure what you mean
[13:14] <ralsina> alecu: but how will the backend notify the frontend that it's running?
[13:14] <alecu> ralsina, what do you mean "running"?
[13:14] <ralsina> ready?
[13:15] <alecu> by a dbus signal!
[13:15] <alecu> the frontend makes a dbus call, and expects a dbus signal when it's done.
[13:15] <alecu> or am I missing something?
[13:15] <ralsina> so the second thread emits the signal?
[13:16] <ralsina> can we do that?
[13:16] <alecu> ralsina, yes: python dbus bindings are thread safe
[13:16] <ralsina> alecu, nessita: so no need to join the thread then
[13:17] <alecu> (btw: the desktopcouch code uses the couchdb module, and that speaks http, by using raw sockets, so we won't have any issues there running it in a thread)
[13:17] <alecu> yup, no need to join threads.
[13:18] <ralsina> alecu: if you think that's not hard to do and it can be done today, it would be great
[13:18] <ralsina> nessita: any objections?
[13:19] <ralsina> and we do the release tomorrow but with a nice new feature.
[13:19] <nessita> ralsina, alecu: I don't see how a thread will fire a dbus signal. Let me explain:
[13:20] <nessita> we have these layers: dbus_service (signals, among other stuff), backend, replication_client (desktopcouch)
[13:20] <nessita> how will DC, running in the 3rd layer, fire stuff in the first layer?
[13:20] <alecu> nessita, by passing a callback from the first layer into the third.
[13:21] <nessita> right
[13:21] <alecu> nessita, "here, call this function when you are done."
[13:21] <alecu> nessita, and that method just fires the signal.
[13:22] <alecu> does it make sense?
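The pattern alecu describes — the dbus layer hands a callback down to the replication layer, and a worker thread invokes it when the blocking call finishes — can be sketched like this (function and argument names are illustrative, not the actual control panel code):

```python
import threading


def run_in_thread(blocking_call, on_done):
    """Run blocking_call (e.g. the ReplicationExclusion setup) in a
    worker thread and invoke on_done(result) when it finishes.

    With thread-safe dbus bindings, on_done can fire the dbus signal
    directly from the worker thread, so the caller never blocks."""
    def worker():
        result = blocking_call()
        on_done(result)

    thread = threading.Thread(target=worker)
    thread.daemon = True  # fire-and-forget: no need to join
    thread.start()
    return thread
```

The dbus method returns immediately after starting the thread, and the frontend simply waits for the completion signal.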
[13:22] <nessita> it makes sense, but what I am thinking is:
[13:22] <alecu> ralsina, do you want me to work on it, or should I keep working on my aggregation branch?
[13:23] <nessita> I have no free slot this week to do that (I'm very very busy with the shares stuff), and the release needs to go out today
[13:23] <ralsina> I say we keep this solution in mind
[13:23] <ralsina> and don't do it yet
[13:23]  * ralsina cries a little bit
[13:23] <nessita> ralsina: I prefer you do aggregation and, if you're free, the ZG listener bugs, as they are messing with syncdaemon logs a lot (in a sense they add confusion)
[13:24] <nessita> I meant alecu :-)
[13:24] <ralsina> nessita: yes
[13:24] <ralsina> and anyway it may not even be necessary to do this thread stuff
[13:24] <nessita> I would hide the services tab to make a release for the alpha2
[13:24] <ralsina> nessita: yes
[13:25] <ralsina> so we have a plan now. <captain picard>make it so</captain picard>
[13:25] <nessita> and restore it after, and chase CardinalFang for some input. He proposed a branch yesterday but the test suite is failing a lot in that branch
[13:25] <alecu> nessita, both hide and disable the backend code, right?
[13:25] <ralsina> nessita: what branch was that?
[13:25] <alecu> disable the backend *call.
[13:25] <nessita> alecu: just hide the tab, the code will not get executed until you enter the tab
[13:25] <alecu> nessita, cool.
[13:26] <nessita> ralsina: https://code.launchpad.net/~cmiller/desktopcouch/service-must-not-call-self-over-dbus/+merge/47262
[13:26] <nessita> ralsina: seems like the newly added plugin code is making the service startup deadlock
[13:27] <ralsina> right
[13:27] <ralsina> ugh :-(
[13:27] <nessita> ralsina: my second objection to that branch is that the fix has no tests...
[13:28] <ralsina> what would you test, that the port you get over dbus is the same as the port you put there? :-)
[13:28] <nessita> ralsina: first of all, I'd test that no internal API breaks
[13:29] <ralsina> adding a None default to couchdb_port and getting it by dbus if it's not passed could be a good idea
[13:29] <ralsina> So we keep the API stable
[13:30] <ralsina> then again, I come from C++ so I have a stable API fetish
[13:31] <nessita> my biggest concern is that there are already some tests for the load_plugins function
[13:31] <nessita> and this branch breaks that (or should break)...
[13:31] <ralsina> approved confirm-before-delete
[13:31] <nessita> how come CardinalFang is getting all green?
[13:31] <nessita> ralsina: thanks
[13:31] <ralsina> ah?
[13:34] <ralsina> all what green?
[13:36] <nessita> all tests pass for him
[13:36] <nessita> (he said)
[13:37] <ralsina> weird
[13:37]  * ralsina checks
[13:41] <ralsina> running the tests now. couchdb started in about 2 seconds, BTW
[13:42] <CardinalFang> ralsina, nessita, I think I have that fixed.  Making tests work has been a problem.
[13:42] <ralsina> CardinalFang: cool
[13:43] <ralsina> I got a bunch of errors already
[13:43] <CardinalFang> My kid is really grumpy, so it's hard to get started this morning.  I'll be back in a bit.  Hopefully before my 9AM standup-meeting.
[13:43] <ralsina> "    from desktopcouch.application import local_files
[13:43] <ralsina> " rings any bells?
[13:44] <ralsina> Of course the error is "cannot import local_files" ;-)
[13:44] <nessita> I got that too
[13:44] <CardinalFang> (Someone in my house mixed chlorine bleach and ammonia about a day ago, and we all felt bad for a while.)
[13:44] <nessita> but not in trunk...
[13:44] <nessita> ouch!
[13:45] <ralsina> that mix is incredibly dangerous, you know
[13:45] <CardinalFang> Yes.  We all know.  Someone wasn't attentive.
[13:45] <ralsina> In fact, the byproduct of that was used as a chemical weapon in WW1
[13:46]  * ralsina never has ammonia at home, just in case
[13:46] <CardinalFang> Indeed.  In different proportions, it tends to make explosives too.
[13:46] <ralsina> yup
[13:46] <ralsina> As in "that is rocket fuel" explosive.
[13:47] <CardinalFang> So, a fun evening in the Miller household, all in all.
[13:47]  * CardinalFang afk.
[13:48] <beuno> is the canonical irc network down or is it me?
[13:48] <nessita> beuno: I'm connected
[13:48] <ralsina> beuno: it's you
[13:48] <beuno> :/
[13:49] <beuno> 07:48 -!- Irssi: warning SSL handshake failed: server closed connection
[13:49] <ralsina> alecu CardinalFang dobey mandel nessita thisfred vds: standup in 11'
[13:49] <mandel> ok
[13:49] <thisfred> ackack
[13:50] <nessita> yeah
[13:50]  * ralsina forgot someone?
[14:00] <alecu> me
[14:00] <nessita> me
[14:00] <ralsina> me
[14:00] <vds> me
[14:01] <nessita> CardinalFang, dobey, mandel, thisfred?
[14:02] <thisfred> me
[14:02] <dobey> meh
[14:02] <mandel> me
[14:02] <ralsina> alecu, start
[14:02] <alecu> DONE: counters for (done/total) operations progress, aggregation based on threshold and timeout. Spent a lot of time chasing an issue that looks like a twisted bug.
[14:02] <alecu> TODO: discuss with thisfred how to pass this info to upper layers, plan for UDFs/shares, isolate twisted issue.
[14:02] <alecu> BLOCKED: no
[14:02] <alecu> LOVE & HATE: twisted.internet.task.Clock
[14:02]  * alecu hands a big fishbowl of twisty snakes to nessita
[14:02] <nessita> DONE: had some issues with syncdaemon that I debugged a bit and reported. Worked on bug #706906 and bug #706888, landed branches for bug #692772. Had a call with cparrino to review all the strings in the control panel.
[14:02] <nessita> TODO: call with i-vanka to sync up re control panel, fix strings as per cparrino's review, add marketing links in the control panel (u1 support, askubuntu, twitter, facebook). Release u1cp and ussoc.
[14:02] <nessita> BLOCKED: nopes
[14:02] <nessita> LOVE: chocolate cake
[14:02] <nessita> NEXT: ralsina
[14:02] <ubot4> Launchpad bug 706906 in ubuntuone-control-panel (Ubuntu Natty) (and 2 other projects) "Token info is logged (affects: 1) (heat: 6)" [High,Triaged] https://launchpad.net/bugs/706906
[14:02] <ubot4> Launchpad bug 706888 in ubuntuone-control-panel (Ubuntu) (and 1 other project) "Prompt for confirmation before removing a device (affects: 1) (heat: 6)" [Medium,Triaged] https://launchpad.net/bugs/706888
[14:02] <ubot4> Launchpad bug 692772 in ubuntuone-control-panel (Ubuntu) (and 1 other project) "Visual improvements (affects: 1) (heat: 102)" [High,Triaged] https://launchpad.net/bugs/692772
[14:03] <ralsina> DONE: reviews, talked to lots of people, did my canonicaladmin work, read code.
[14:03] <ralsina> TODO: reviews, management stuff, someday actual coding ;-)
[14:03] <ralsina> BLOCKED: no
[14:03] <ralsina> vds!
[14:03] <vds> DONE: second branch for #701029 proposed, third ready to start
[14:03] <vds> TODO: continuing with the views and keep checking that the back end does what it is supposed to do
[14:03] <vds> BLOCKED: no
[14:03] <vds> thisfred: please
[14:03]  * ralsina doesn't LOVE or HATE anything yet
[14:03] <thisfred> DONE: messaging branch work, desktopcouch reviews, web and mobile discussions TODO: land messaging branch | discuss with alecu whatever just scrolled by BLOCKED: no
[14:03] <thisfred> dobey: a vous!
[14:03] <dobey> λ DONE: swap day
[14:03] <dobey> λ TODO: 3rd party apis?, evaluate SRUs for maverick, lucid bugs
[14:03] <dobey> λ BLCK: None.
[14:03] <dobey> mandel
[14:03] <mandel> DONE: Finished FileSystemNotification. Finished implementation of IPC yet got stuck in a multithreading issue (some lock on a file I cannot find.)
[14:03] <mandel> TODO: Debug, debug, debug... Move to ubuntu sso on windows.
[14:03] <mandel> BLOCKED: debugging, is that being blocked?
[14:03] <mandel> HATE: Windows file system and kernel.
[14:03] <mandel> eom?
[14:03] <nessita> one comment:
[14:04] <mandel> first!
[14:04] <ralsina> mandel: comments first
[14:04] <nessita> we would be making releases today, so if you have any pending branch let us know
[14:04] <mandel> hehehe
[14:04] <nessita> dobey: can you make releases for u1sp and u1client today?
[14:05] <ralsina> I would like a 3-line report in private of each of you about how you are doing with your assignments, so I don't have to think too hard for team leads ;-)
[14:05] <nessita> ralsina: ack
[14:06] <alecu> ralsina, on it.
[14:06] <dobey> i don't think there have been any changes to protocol since the last release, but probably
[14:07] <ralsina> argh, what's the magical command to clean debris when I abort run-tests on a desktopcouch branch?
[14:07] <dobey> unless someone committed some change that breaks the world again
[14:07] <alecu> ralsina, rm -rf?
[14:07] <ralsina> dobey: talking about breaking the world ;-)
[14:07] <ralsina> alecu: / ? ;-)
[14:07] <alecu> :-)
[14:08] <ralsina> dobey: it seems your use-devtools branch was a bit... under-announced?
[14:08] <nessita> dobey: I just said u1sp in the sense of "if needed"
[14:08] <ralsina> dobey: please post a quick mail on our mailing list when you change how others should do things mmmkay?
[14:09] <dobey> ralsina: but how others should do things didn't change. "make test" and "make lint" still work exactly the same.
[14:10] <nessita> dobey: chicharra guys use ./contrib/test *a lot* to run tests selectively
[14:10] <ralsina> dobey: what nessita said
[14:15] <dobey> sigh
[14:17] <ralsina> dobey: that's why lucio sent that email today, he was frustrated by that late last night.
[14:17] <ralsina> nessita: I get errors with CardinalFang's branch too, but not at all like yours.
[14:18] <nessita> ralsina: right, mine starts with test_start_migration hanging, and then the rest is all a mess as a consequence, I think
[14:24]  * dobey thinks people should pay more attention to merge proposals if they care about changes
[14:24] <nessita> it's impossible to keep track of every merge proposal
[15:52] <vds> ralsina, dobey https://pastebin.canonical.com/42279/
[15:54] <dobey> vds: that's nightlies on lucid, right?
[15:55] <vds> dobey, yep
[16:17] <nessita> ivanka1: ping
[16:18] <ivanka1> sorry nessita - am in meetings - maybe ping otto?
[16:19] <nessita> ivanka1: sure, no problem
[16:20] <dobey> lunch time
[17:16]  * karni is back
[17:34] <vadi2> The 'More' link  on the ubuntuone interface, in Chrome, is not doing anything upon clicking anymore.
[17:37] <karni> beuno-lunch: say a user wants to sync only on WIFI. we're on Mobile. we start the app, there's a Toast 'Not syncing on Mobile'. should a file download (or upload) upon being tapped, on a Mobile connection? I think 'yes' is a sensible answer. That way you can still use the app without syncing at all.
[17:44] <beuno> karni, hrm
[17:44] <beuno> well
[17:44] <beuno> yes, but only if we say "Tap to start syncing"
[17:45] <karni> beuno: you mean, we shouldn't be able to download a single file (not doing sync beforehand) ?
[17:46] <beuno> wait, scratch what I just said
[17:46] <karni> uhm
[17:46] <beuno> yes, I think connectivity is for auto-sync
[17:46] <karni> right
[17:46] <beuno> manual sync should always happen
[17:47] <karni> wait, so we should add a 'Sync' option somewhere, shouldn't we (in case sync on network type XXX is disabled)
[17:48] <beuno> karni, yeah, maybe we should
[17:49] <karni> beuno: perhaps a 'refresh/sync' button in the action bar in case sync is set to manual, or is not set to perform on the current network type :) ?
[17:50] <beuno> yeap, sounds good
[17:51] <karni> good!
[17:53] <ralsina> dobey ping
[17:55] <dobey> hi
[17:55] <ralsina> hi dobey
[17:56] <ralsina> I have a patch from Lucio that should go in syncdaemon but he feels he is not familiar enough with the code to do a real fix, and it should be done by facundo, who is on holiday
[17:57] <ralsina> but... the patch does fix something for nessita, and should not break anything (famous last words, I know)
[17:57] <ralsina> https://pastebin.canonical.com/42289/
[17:57] <ralsina> What do you think?
[17:58] <dobey> looks ok to me.
[17:58] <nessita> ralsina: it fixes a real bug that happened to hit me ;-)
[17:58] <ralsina> nessita: I know it's a real bug :-)
[17:59] <dobey> at least, having the "restart the upload" code cancel any current upload of that file, makes perfect sense to me
[17:59] <ralsina> cool, can you merge it?
[17:59] <mandel> thisfred: you ran away from #desktopcouch you evil…. hehe
[18:00] <dobey> ralsina: lucio and facundo are both on holiday?
[18:00] <ralsina> lucio is here, facundo is on holiday
[18:00] <thisfred> mandel: oops, that was unintentional, I promise!
[18:01] <mandel> thisfred: branch is done fixing the pep8 issues, here it is: https://code.launchpad.net/~mandel/desktopcouch/fix_pep8_from_windows/+merge/47433
[18:01] <thisfred> mandel: you rock!
[18:01] <rye> dobey, is there anything else I can do to get lucid-backport-work into lucid? btw -there's a long partial file fix branch proposed against your lucid-backport one for sd
[18:01] <mandel> thisfred: I guess so, but it was funny nevertheless hehe :)
[18:01] <dobey> ralsina: then i guess he should propose a branch for it :)
[18:01] <ralsina> but lucio says he doesn't understand the area enough for a real fix, this is a temporary one until facundo comes back
[18:02] <mandel> thisfred: some of the errors were very stupid ones, like not allowing a white space in front of [… that is lame
[18:02] <mandel> thisfred: need to go, if you need anything else, ping me here and I'll see it later :)
[18:02] <thisfred> mandel: no, that's pep8, that's how you write python :)
[18:02] <dobey> rye: i saw the proposal but haven't had time to look at it yet. lucid is in SRU freeze atm. and there are some other fixes that need to go into stable-1-2 as well
[18:02] <mandel> todos: laters!
[18:03] <dobey> thisfred: that's what he said. it's lame :)
[18:03] <thisfred> mandel: you always break after [ or (
[18:04] <thisfred> dobey: it prevents confusion. Some pep8 decisions are pretty arbitrary, but I think this one is unambiguously the right thing™
[18:04] <thisfred> makes it immediately clear the statement is ongoing, even before scanning the next line
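thisfred's point in practice: breaking immediately after the opening bracket (with no space before it, which is what pep8 flagged in mandel's branch) keeps the continuation unambiguous. A small illustrative example:

```python
# Breaking right after "[" or "(" makes it immediately clear the
# statement is ongoing, even before scanning the next line; a space
# before the bracket would make it read like two separate tokens.
colors = [
    "red",
    "green",
    "blue",
]
total_length = sum(
    len(color) for color in colors
)
```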
[18:07] <dobey> ralsina: i don't want to merge a "temporary fix" to trunk without a proposal and linked bug; if it's worthy of going in trunk, it's worthy of a proposal. how critical/important is the bug? just one hit from nessita, or lots of users reporting it? can it just wait for facundo's return to be fixed?
[18:09] <dobey> rye: do those two bugs not exist in maverick? if not, do you know when they were fixed?
[18:10] <ralsina> dobey: I agree with you. I'll ask lucio to propose a branch and if he can't for whatever reason, I will.
[18:10] <ralsina> But no, it can't wait until facundo comes back :-(
[18:23] <ralsina> dobey: https://code.launchpad.net/~lucio.torre/ubuntuone-client/add-upload-cancel/+merge/47435
[18:23] <ralsina> I approved it so with your seal of approval it's in
[18:25] <dobey> i think nessita should review/test it, given that she is hitting the bug
[18:25] <nessita> I am doing it rightnow
[18:26] <nessita> anyway, the bug is not reproducible so I'm not able to field-test it
[18:26] <dobey> oh
[18:45] <rye> nessita, what bug?
[18:47] <nessita> rye: Bug #705231
[18:47] <ubot4> Launchpad bug 705231 in ubuntuone-client "ActionQueue.unqueue gives exceptions.ValueError: list.remove(x): x not in list (affects: 1) (heat: 127)" [High,Confirmed] https://launchpad.net/bugs/705231
[18:48] <rye> nessita, hm, i had that
[18:49] <nessita> rye: you did?!?!?! did you report it?
[18:50] <rye> nessita, well, at that time i was in Unlink() everything mode so.. no. Checking now...
[19:20] <karni> beuno: I've been working on connectivity/sync now. One question about the action bar button though. I'm no longer sure if the sync/refresh button should be there. Sync is more of an app-wide operation than folder-level. Or at least, volume-level. I think we should either 1. let the user manually sync a single volume 2. let the user sync manually, but from the Dashboard options menu, for instance.
[19:20] <karni> beuno: any thoughts/ideas?
[19:21] <beuno> karni, I think general sync, and individual files
[19:21]  * karni processes in context of the app
[19:22] <karni> beuno: "general sync" :: Menu -> Sync [visible in case sync is disabled or preferred network not available]
[19:22] <karni> beuno: "individual file" :: file -> context menu -> Sync
[19:23] <karni> beuno: now, the thing is..
[19:23] <karni> beuno: to know if a single file has been updated since the last use, we need to ask for the volume generation
[19:23]  * beuno nods
[19:23] <karni> beuno: so we basically initiate sync process for a single volume. I could bring that down to a single file,
[19:24] <beuno> I think I'm mostly referring to downloading a specific file, rather than syncing, but it doesn't change a lot
[19:24] <karni> right.. it doesn't, because we have to know the new nodeId for that file
[19:24] <karni> (if it has changed)
[19:24]  * beuno nods
[19:25] <karni> verterok: can I request nodeState info in old-style if I use set-capabilities:generations ?
[19:25] <karni> verterok: i.e. if my client sets the caps to generations (among others), can I still ask for info on a single file ?
[19:25] <karni> probably that would involve setting another caps of nodeState, but I'm not sure beuno if they won't conflict
[19:26] <karni> __lucio__: can you have a look at the question I asked verterok ? ↑
[19:26]  * beuno does not know a lot about that part of the code
[19:26] <verterok> karni: you already have the info...why do you need to do a query on a single node?
[19:27] <karni> verterok: imagine I don't ;) ok, so the thing is
[19:27] <karni> verterok: let's assume the user doesn't want to sync all the files
[19:27] <karni> beuno: but wants to fetch one of them. if it was updated, we don't know the new nodeId
[19:27] <karni> oops that was to verterok
[19:28] <karni> verterok: I know this is very different from the "PC" version, but it's been like that evar hahah :)
[19:28] <karni> partial sync - is not what U1 was designed for
[19:28] <verterok> karni: if you don't know the node id how do you have the file? :)
[19:28] <karni> verterok: bah, I meant the new contentHash, sorry
[19:28] <karni> verterok: we need to know the new hash to fetch the file, don't we?
[19:28] <karni> one sec.. I should have checked that before asking
[19:29] <karni> getContent(volume, node, hash, null, os) yes
[19:29] <karni> verterok: we need to know the hash of the file to get it. we may know the previous hash, but we don't know if it has changed since.
[19:30] <karni> verterok: so the question is - can I fetch the file in AndroidU1 style, using a client that I've told to use 'generations' capability
[19:30] <verterok> karni: why not use getdelta?
[19:31] <karni> verterok: if I use getDelta, I'll get meta info of all the files from the volume, right?
[19:31] <verterok> __lucio__: ^ I'm wrong about using getdelta instead of query
[19:31] <verterok> karni: yes
[19:33] <verterok> karni: just to make it simpler...but I assume you can use query with a generations-enabled client...the issue here is that query is deprecated
[19:33] <__lucio__> karni, if you want to do limited access, just use the rest api
[19:33] <verterok> karni: it will go away sooner or later (I hope)
[19:33] <karni> verterok: aha :/
[19:33] <__lucio__> if you can travel to the future
[19:33] <verterok> jajaja
[19:33] <karni> that's not funny ;d
[19:33] <karni> you have the REST for me? ;d
[19:33] <beuno> karni, we can do that
[19:33] <karni> :)
[19:33] <beuno> vds, ping
[19:34] <verterok> karni: __lucio__ has a rent-a-time-machine shop  :p
[19:34] <karni> :O
[19:34] <beuno> karni, stalk vds about it
[19:34] <karni> are you kidding me guys :D establishing a connection is such a PITA, and we have the rest api working ;D?
[19:34] <beuno> he's been secretly working on them
[19:34] <karni> hahahah
[19:34] <karni> oh man
[19:35] <karni> I mean, I know he has, but didn't know what stage it is at
[19:35] <beuno> well
[19:35] <__lucio__> karni, keep in mind that the only way we have for notifications (ie, realtime sync) is with the connectin established
[19:35] <karni> anyway, verterok  - yes, that's what I need. query on a generations-enabled client... and that'll be deprecated, as you say
[19:35] <beuno> we could prioritize based on what you need, maybe
[19:35] <karni> __lucio__: yes, that's correct
[19:35] <karni> __lucio__: but 6-7 seconds of setup (best case) is still long for a mobile app
[19:35] <karni> setup = connection+auth+setup
[19:36] <__lucio__> karni, so you plan on dropping realtime?
[19:37] <karni> __lucio__: the thing is, we never assumed realtime. keeping that connection alive all the time, and the service in background, is a phone-killer
[19:37] <karni> CPU/batt/etc, no way we could do that. so, instead,
[19:37] <vds> beuno, pong
[19:37] <karni> __lucio__: we settled for periodic sync, starting from 5 min, up to a daily sync
[19:37] <__lucio__> karni, scanning each directory to find the diff is a no-no
[19:37] <vds> hi karni
[19:37] <karni> vds: hello! just a sec my friend :)
[19:38] <__lucio__> karni, you need generations for this.
[19:38] <karni> __lucio__: what do you mean. I'm using generations. the thing is,
[19:38] <karni> __lucio__: we perform a down-sync at the moment of favourite items only
[19:38] <karni> __lucio__: I'm not doing a local-rescan every time I sync. not at all ATM actually
[19:39] <__lucio__> karni, this is all about server rescan
[19:39] <karni> __lucio__: perhaps I misunderstand something?
[19:39] <karni> __lucio__: ok. so, whenever I ask for delta, it does a rescan?
[19:39] <__lucio__> karni, ideally, when you connect every 5 minutes or whatever, you should say "tell me all the changes since X"
[19:39] <karni> __lucio__: that's what I do :)
[19:40] <karni> __lucio__: and if the user is not using the app ATM, I drop the connection ASAP when I'm done syncing.
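The "tell me all the changes since X" approach karni describes is generation-based delta sync. A toy model of the idea (the `get_delta` function and the change-record shape here are illustrative, not the real storage protocol API):

```python
def get_delta(server_changes, since_generation):
    """Return (new_generation, changes) newer than since_generation.

    server_changes stands in for the server's per-volume change log,
    a list of {"generation": int, "path": str} records.  The client
    persists the returned generation and sends it back on the next
    periodic sync, so unchanged files cost no bandwidth at all."""
    changes = [c for c in server_changes if c["generation"] > since_generation]
    new_generation = max(
        (c["generation"] for c in changes), default=since_generation
    )
    return new_generation, changes
```

This is why fetching one updated file without a full getDelta is awkward: the per-file content hash only arrives as part of such a change list.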
[19:40] <__lucio__> karni, then i dont follow what you want to use query for
[19:40] <karni> vds: sorry to keep you waiting.
[19:40] <karni> __lucio__: Imagine a user is on Mobile connection, not WIFI. and it's expensive (happens)
[19:40] <__lucio__> yes
[19:40] <vds> karni, np for me but I'll be EODing in 30 mins, I'll be online tomorrow all day
[19:40] <karni> __lucio__: And he wants to download a single file, without performing a full sync.
[19:41] <karni> vds: ACK!
[19:41] <karni> __lucio__: If the user wants 1 file
[19:41] <karni> __lucio__: he needs to have its new hash. Thus, we want this info without calling getDelta for a whole volume, which is a waste of bandwidth
[19:42] <karni> vds: ok, so it seems I got an ACK from the beuno for you to reveal your rest api secrets to me
[19:42] <karni> vds: not sure if I'll be converting to the rest at this very moment, but this is probably the way Ubuntu One Files for Android will go
[19:42] <beuno> and more importantly, to coordinate and see what would help karni the most
[19:42] <karni> exactly. thank you beuno
[19:43] <vds> karni, there are no real secrets :) it's just that I'm not done yet :) but sure we can talk about the api
[19:43] <karni> vds: The thing is, U1 was never designed for a partial sync. And that's what I'm forcing it to do ATM :)
[19:43] <__lucio__> karni, maybe i am mixing stuff because i dont think we have a "get delta" for the rest api. you would need that if you want to use the rest api.
[19:43] <vds> karni, the rest api is not going to do any sync
[19:44] <karni> __lucio__: perhaps. that's what you know, but I dont :)
[19:44] <karni> vds: I'd like to hear what it can do, though :)
[19:44] <vds> karni, REST CRUD on all the volumes
[19:45] <karni> vds: read. good. can we fetch info about a single file?
[19:45] <vds> karni, yes
[19:45] <karni> vds: I assume the same goes for volumes? (i.e. knowing the volume's last generation)
[19:48] <karni> __lucio__: plus, queries over https are much more lightweight than an ongoing connection. I know I can sometimes sound confusing, but it's all due to the fact I'm thinking the android/mobile-way, which is quite different from the regular syncdaemon
[19:49] <karni> vds: don't timeout on me friend, I soo need to know that or I won't be able to sleep hahahh
[19:49] <karni> vds: can we do the same with volumes? fetch the last gen. number?
[19:49] <vds> karni, yes, sorry :)
[19:49] <karni> np np :)
[19:49] <karni> this is great!
[19:50] <karni> vds: now, the most important question hahah. what's up with the getDelta since X with the rest api :)? can we do that?
[19:50] <karni> vds: i.e. if we already asked what's the last gen, can we know which files changed
[19:50] <vds> karni, I have no idea what that is,
[19:51] <vds> aquarius, ^^
[19:51] <karni> vds: it means, if I know a volume generation is 4
[19:51] <karni> vds: and the server holds generation 10
[19:51] <karni> vds: if I ask for delta(4) it means, "what files have changed since generation 4?"
[19:51] <beuno> if it doesn't it shouldn't be hard I think
[19:51] <karni> vds: so basically, it's a list of files
[19:51]  * karni nods
[19:51] <__lucio__> all the support is in the dal
[19:52] <__lucio__> should be easy
[19:52] <__lucio__> and it would be great to have that
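The generations flow karni walks through here (ask the server for its current generation, then request "what files have changed since generation N?") can be sketched as follows. This is an illustrative Python sketch only; `FakeVolumeServer`, `sync_volume`, and every method name are hypothetical, not part of the real U1 storage protocol or REST API.

```python
class FakeVolumeServer:
    """In-memory stand-in for the server side, for illustration only."""

    def __init__(self):
        self.generation = 0
        self.changes = []  # list of (generation, path) pairs

    def record_change(self, path):
        """Every change bumps the volume generation by one."""
        self.generation += 1
        self.changes.append((self.generation, path))

    def get_generation(self, volume_id):
        return self.generation

    def get_delta(self, volume_id, since):
        """Answer: 'what files have changed since generation `since`?'"""
        return [path for gen, path in self.changes if gen > since]


def sync_volume(server, volume_id, local_generation):
    """Fetch only the changes made since our last known generation."""
    remote_generation = server.get_generation(volume_id)
    if remote_generation == local_generation:
        return [], local_generation  # nothing changed; no listing to transfer
    changed = server.get_delta(volume_id, since=local_generation)
    return changed, remote_generation
```

Because only entries newer than the client's generation travel over the wire, a no-op sync costs a single generation query rather than a full volume listing, which is the bandwidth saving karni is after.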
[19:52] <vds> karni, it's not in the original specs; I guess we can add it. I'm glad to work on it, we need to check with all the others if/when we want to do it
[19:53] <karni> The thing is guys, I've been poking people (aquarius, beuno, others) about the rest api for some time already, because I know it would be much more lightweight and suitable for the android app.
[19:53] <vds> karni, I hope to reach a good point this week and deliver REST CRUD on everything apart from share
[19:53] <karni> vds: that would be sooo awesome, so awesome
[19:54] <beuno> vds, karni would be a good sample developer to test these apis on *wink*
[19:54] <karni> So, although the client with generations is cool, REST over http is what I've been looking forward to for quite a while :)
[19:54] <karni> beuno: definitely
[19:54] <vds> beuno, as I said I'll be more than happy to go ahead and spend more time on this, we need to check what the schedule says
[19:54] <aquarius> karni, the problem with the rest API is that it's not suitable for notification of changes, and I don't want you to repeatedly poll the rest API. Getting notification of a changed file as soon as it happens is what the storage protocol is *for*
[19:55] <ralsina> vds: I have 560MB of dev dependencies to download now. Don't worry, I'll do the review  but it may take an hour or two.
[19:55] <beuno> vds, right. Just so you know, you're back with the web team for 4 weeks or so starting next week  \o/
[19:55] <vds> beuno, I know that :D :D
[19:55] <karni> aquarius: indeed. the thing is, the connection gets *really* slow in the evenings.
[19:55] <beuno> so get some sleep this weekend!
[19:56] <aquarius> karni, why?
[19:56] <vds> ahahahaha :)
[19:56] <karni> aquarius: so slow that I would rate the app 1/5 if I was a regular user. unusable.
[19:56] <aquarius> karni, that doesn't happen on the desktop, does it?
[19:56] <karni> aquarius: I'm not sure, but it's repeatable
[19:56] <ralsina> E: Package 'python-canonical-payment-client' has no installation candidate
[19:56] <ralsina> ... what am I missing here?
[19:56] <beuno> aquarius, it does
[19:56] <beuno> it's slow api servers
[19:56] <karni> aquarius: you never know, I don't observe it so closely. plus
[19:56] <beuno> ralsina, the PPA configured?
[19:57] <aquarius> karni, and you are as sure as hell not going to provide a better service if you repeatedly poll the rest API once every ten seconds :)
[19:57] <pedronis> ralsina, is not really possible to test server stuff on natty
[19:57] <karni> aquarius: new connections are slower, persistent connections don't get so affected
[19:57] <ralsina> pedronis: ooook
[19:57] <pedronis> ralsina, the deps are not built yet
[19:57] <beuno> ralsina, want me to pick up that review for you?
[19:57] <pedronis> etc etc
[19:57] <ralsina> beuno: please
[19:57] <karni> aquarius: definitely, but I don't plan to poll every 10 seconds ;)
[19:57] <ralsina> beuno: my lucid and maverick VMs need rebuilding :-(
[19:57] <aquarius> karni, and there's no way to ask the rest API "what has changed since X"...and there's not planned to be, either, because if you want that info then that's what the storage protocol is for :(
[19:58] <karni> beuno: vds: __lucio__ ↑ :<
[19:58] <beuno> well
[19:58]  * ralsina has destroyed more ubuntu installs this month than on the last 5 years
[19:58] <beuno> it could be
[19:58] <beuno> this is something we need to talk about
[19:58] <pedronis> aquarius, well it would not be hard to add get delta to the rest api
[19:58] <karni> aquarius: ok, perhaps I should share more details
[19:58] <karni> aquarius: regular connect (connect+auth+setup) takes 6-7 seconds usually
[19:59] <aquarius> pedronis, I agree it's not hard technically. What concerns me is that providing that flat out encourages people to poll the rest API regularly, which is not what we want :(
[19:59] <karni> aquarius: that gets to over 2-3 minutes in the evenings
[19:59] <__lucio__> aquarius, the only reason the protocol still survives is because it's a great interface for managing versioned clients and because we need a channel to push stuff.
[19:59] <__lucio__> if we have continuous updates on the phones, reason one goes away
[19:59] <karni> aquarius: each volume getDelta takes another.. 1-2 minutes per volume (just the meta)
[19:59] <__lucio__> if we dont need notifications, reason 2 goes away
[19:59] <vds> ralsina, check if that can be traded with LP karma points ;)
[19:59] <pedronis> aquarius, well, long term we'll need to put rate limiting on the apis anyway; that's what people end up doing
[19:59] <aquarius> __lucio__, nah, there's another reason: the protocol is good for notifying you instantly that something has changed in a long-lived connection.
[20:00] <karni> aquarius: if you have a few volumes, the initial start+sync takes... 6+ _minutes_ in the evenings
[20:00] <aquarius> karni, that's quite shit, I agree
[20:00] <__lucio__> aquarius, yeah, thats what i mean by push
[20:00] <karni> aquarius: yes, and we're not keeping that connection live on Android
[20:00] <karni> aquarius: we do periodic sync
[20:00] <karni> aquarius: 5 min, ..., up to daily
[20:01] <aquarius> __lucio__, we do not have that in the rest API. We could do, but we'd just have to invent it from scratch, which is something that needs thinking about and planning. It's not something where we can just throw it together in two minutes :)
[20:01] <karni> aquarius: thus, we drop the connection *if* the user is not using the app at the moment
[20:01] <aquarius> karni, I didn't know about the api servers taking 6 minutes to connect :(
[20:01] <karni> aquarius: thus, you get notifications -- as long as you're using the app or it's during the scheduled periodic sync
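karni's periodic-sync schedule, starting at 5 minutes and backing off toward a daily sync, can be sketched like this. The doubling factor is an assumption; the log only gives the two endpoints (5 minutes and daily).

```python
def sync_schedule(start_minutes=5, max_minutes=24 * 60):
    """Yield successive sync intervals in minutes: 5, 10, 20, ...
    doubling after each idle sync and capping at one day.

    The exponential growth is a guess at the "5 min, ..., up to daily"
    schedule described in the conversation, not the app's actual policy."""
    interval = start_minutes
    while True:
        yield interval
        interval = min(interval * 2, max_minutes)
```

A schedule like this keeps the radio and CPU asleep most of the time (the battery concern karni raises), while a fresh user interaction would simply reset the interval back to the minimum.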
[20:01] <__lucio__> get delta should not take 2 minutes, it doesn't for me, and on average we finish a delta in under a second
[20:01] <aquarius> beuno, is the delay on the api servers just because they're overloaded?
[20:01] <aquarius> brb
[20:01] <__lucio__> aquarius, unless we are broken, which is not all the time, its not 6 minutes
[20:02] <aquarius> __lucio__, karni. karni, __lucio__. You disagree on this. ;-)
[20:02] <karni> __lucio__: that has been tested multiple times in live environment over wifi
[20:02] <__lucio__> aquarius, being on ec2 also impacts a lot. we cant do more parallel queries to speed things up because of losa issues
[20:02] <__lucio__> karni, yeah, same here, and i get to also see the result of every single get delta that is done
[20:03] <karni> __lucio__: my full sync takes ~30 seconds including conn+setup+auth
[20:03] <karni> __lucio__: in the evenings, it takes ages.
[20:03] <karni> __lucio__: how can I prove that?
[20:03] <verterok> karni: what does setup+auth involve?
[20:04] <karni> verterok: it's just connect(), authenticate() and setup() [set caps]
[20:04] <karni> nothing else
[20:04] <karni> verterok: methods from your Client (to be specific)
[20:04] <verterok> karni: and that takes 30 seconds?
[20:04] <karni> it's not my custom stuff where I do tons of sh.t :)
[20:05] <karni> verterok: no, +sync
[20:05] <karni> verterok: that takes 5-7 seconds
[20:05] <karni> 6-7
[20:05] <karni> connect, authenticate, setup takes 6-7 seconds. no less. never.
[20:05] <karni> on wifi
[20:06] <karni> verterok: 30 seconds was an example of my U1 connect+sync
[20:06] <karni> let me check how things run ATM
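A minimal way to see where those 6-7 seconds go is to time each phase of the handshake karni lists (connect, authenticate, setup/set caps) separately. The `DemoClient` class and the assumption that the client exposes exactly those three methods follow the description above but are otherwise hypothetical.

```python
import time


class DemoClient:
    """Stand-in with the three phases karni names; a real client would
    open the connection, do OAuth, and negotiate capabilities here."""

    def connect(self):
        pass

    def authenticate(self):
        pass

    def setup(self):
        pass


def timed_handshake(client):
    """Return per-phase wall-clock timings for the handshake."""
    timings = {}
    for phase in ("connect", "authenticate", "setup"):
        start = time.monotonic()
        getattr(client, phase)()
        timings[phase] = time.monotonic() - start
    return timings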
[20:06] <__lucio__> right now, from nothing to working, 8 seconds
[20:07] <__lucio__> anyhow
[20:07] <verterok> karni: do you know/remember if we are using the "lite" version of protobuf?
[20:07] <__lucio__> we know we have performance issues
[20:07] <karni> verterok: I think we are
[20:07] <__lucio__> and we are working to get a more stable service
[20:07] <verterok> karni: k, thanks
[20:08]  * karni checks the app
[20:09] <karni> __lucio__: I confirm, at the moment things run smoothly
[20:09] <karni> I can ping you guys when things go slow next time for me.
[20:15] <karni> aquarius: beuno: I imagine it's not a one day or even one week matter, but I would definitely want at least to look into the API - one of the reasons being query() deprecating soon [thus, not being able to sync one file without syncing a whole volume's meta]
[20:16] <aquarius> karni, indeed, but if we're gonna make the rest API be a complete and total replacement for the storage protocol, then we need to plan it out right, and none of that planning has happened yet. :)
[20:16] <karni> aquarius: beuno: plus, we're not keeping that connection the majority of the time anyway. It would have too huge an impact on the battery life.
[20:17] <karni> aquarius: I understand that.
[20:17] <aquarius> I'm seriously thinking about how we should use c2dm for this sort of thing...
[20:17] <karni> aquarius: Right, as we talked before.
[20:18] <karni> aquarius: Even if we get notified via c2dm, connect takes 6-7 secs with the current ubuntuone-java-storageprotocol implementation. (not that it's bad! it's an excellent piece of code)
[20:19] <karni> Unless we would use the REST API at that stage.
[20:19] <aquarius> karni, yeah, but six seconds is not that much :)
[20:19] <aquarius> we could just use c2dm for notifications, never ever ever ever ever poll, and just use the rest API.
[20:20] <karni> aquarius: true, unless we hit the performance issues again
[20:20] <aquarius> (what happens to c2dm messages if you're switched off? are they stacked, or thrown away?
[20:20] <karni> aquarius: indeed, that's what I meant
[20:20] <karni> aquarius: google servers make sure it gets to the receiver of the msg
[20:20] <karni> aquarius: they are aggregated
[20:20] <verterok> karni: 6-7 sec to do the connect+auth is a lot...the python storageprotocol takes 2 sec to do it
[20:21] <karni> verterok: the biggest part is the auth
[20:21] <karni> verterok: it produces tons of GC..
[20:21] <aquarius> karni, hm...c2dm to notify of a change, plus the rest api, might be a really good way to do it.
[20:21] <karni> verterok: When I see like.. 12 lines of GC in the logcat, I know it's authenticate() -- that's what's taking most of the time
[20:21] <verterok> karni: so, let's fix it if that's the problem... 2 sec is acceptable?
[20:21] <karni> verterok: hell yea! :D
[20:22] <karni> verterok: but, there's a little but.
[20:22] <karni> verterok: what worries me is not as much those 6-7 secs, but performance issues in the evenings. atm all runs nicely
[20:22] <beuno> karni, we're working on that
[20:22] <beuno> we have a plan
[20:22] <karni> but previous evenings the app was unusable (including yesterday)
[20:22] <karni> beuno: perfect!
[20:22] <beuno> and it's being executed
[20:22] <beuno> so don't worry about that in the medium-term
[20:23] <karni> verterok: if so ↑, then 2 secs would be great!
[20:23] <karni> beuno: uhm :)
[20:23] <karni> aquarius: that's how the majority of android apps run I think. it's so lightweight.
[20:24] <aquarius> beuno, your thoughts on c2dm+rest? would require us implementing c2dm push in the server, plus a dependency on Google, but it'd make the client really lightweight :)
[20:24] <beuno> aquarius, well, I think that's probably the way to go eventually
[20:24] <beuno> and not just for this
[20:25] <beuno> it will obviously not happen this cycle because of all the infrastructure needed
[20:25] <karni> yup
[20:25] <beuno> so we need to figure out how to deliver, and plan for that
[20:25] <aquarius> beuno, yeah, that's what I think. I'm not sure that the rest API is the place to do streaming instant notifications, but I agree it's the place to get and put and manipulate files and folders
[20:26] <karni> aquarius: I think c2dm+rest is the perfect solution. Not sure what's a replacement of c2dm on an iPhone though ;D
[20:27] <karni> aquarius: I think Apple suggests regular Push for such apps on an iPhone. Just saying.
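The c2dm+REST pattern aquarius and karni converge on (never poll; stay idle until a push arrives, then make a single REST call for the delta) might look roughly like this. `fetch_changes`, the message shape, and the generation bookkeeping are all invented for illustration; nothing here reflects an actual c2dm or U1 REST interface.

```python
def on_push_notification(volume_id, fetch_changes, local_state):
    """Handle a push message: fetch only what changed since our last
    known generation, then remember the new generation.

    fetch_changes(volume_id, since=N) is assumed to return a list of
    {"path": ..., "generation": ...} dicts, newest last -- a made-up
    shape standing in for whatever the REST delta call would return."""
    local_gen = local_state.get(volume_id, 0)
    changes = fetch_changes(volume_id, since=local_gen)
    if changes:
        local_state[volume_id] = changes[-1]["generation"]
    return changes
```

The point of the design is that the client does zero network work between pushes, which is why aquarius calls it "really lightweight" compared with any polling loop.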
[20:27] <beuno> so
[20:27] <beuno> YOGA
[20:27] <beuno> bbl
[20:28] <karni> Ok guys, I'll ping next time we have a slowdown (if we do, hopefully not), and it'd be great to speed up that auth on the java-storageprotocol side (maybe it's the oauth library, maybe it's an Android problem itself, we've been there with verterok once already.. )
[20:29] <verterok> karni: FWIW, on my desktop conn+auth+setcaps takes 3.5 seconds
[20:29] <karni> verterok: compare that to a 355MHz processor ;)?
[20:30] <karni> perhaps on a Snapdragon 1GHz phone, it takes less than 6 secs
[20:30] <karni> aquarius: you have some fancy google phone, don't you
[20:31] <karni> verterok: I'll use my friend to test tomorrow ;)
[20:31] <verterok> karni: ok, I'll try to profile this thing
[20:31] <karni> verterok: lovely
[20:32] <aquarius> karni, I do. Nexus S.
[20:32] <karni> aquarius: do you have U1F installed? (doesn't have to be the latest version)
[20:33] <verterok> karni, aquarius, beuno: we should compare json vs protobuf, in case you plan to move to the rest api to get more speed; json parsing can be quite expensive if you don't use the right parser
[20:33] <verterok> and with that, I will go back to my stuff
[20:33] <verterok> :)
[20:33] <karni> verterok: thank you :) bye bye!
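verterok's point about parser cost can be checked with a quick stdlib-only measurement like the one below. The protobuf side of the comparison is omitted because it needs a non-stdlib library, and the payload shape (path/generation/hash entries) is made up for illustration.

```python
import json
import time


def time_json_parse(num_entries=1000, repeats=50):
    """Time repeated parsing of a synthetic file-listing payload.

    Returns (entry_count, seconds) so the cost can be compared across
    payload sizes; the field names are invented, not a real U1 schema."""
    payload = json.dumps([
        {"path": "file-%d.txt" % i, "generation": i, "hash": "sha1:%040d" % i}
        for i in range(num_entries)
    ])
    start = time.monotonic()
    for _ in range(repeats):
        entries = json.loads(payload)
    elapsed = time.monotonic() - start
    return len(entries), elapsed
```

On a phone-class CPU the same measurement is the interesting one; the absolute numbers on a desktop mainly show how the cost scales with entry count and parser choice.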
[20:33] <aquarius> karni, I don't, because since I got this new phone you were working on a new version so I was holding off :)
[20:34] <karni> aquarius: ha! then I'll keep the new version for a few more days while I'm wrapping up sync and connection settings, I want it to look good for you.
[20:34] <karni> aquarius: I'll use my friend's Desire to test :)
[20:35]  * karni gets some food
[20:37] <nessita> ralsina: ussoc 1.1.9 is now released, so is u1cp 0.8.0
[20:37] <ralsina> nessita: yay!
[20:37] <nessita> (packages already in main repo)
[21:01] <nessita> bye all!
[22:01] <karni> __lucio__: you around? connect()+authenticate()+setup() [set caps] took 59 seconds just now
[22:05] <karni> aquarius: what exactly is "description: doing server rescan" ? (PC syncdaemon) what is happening at that stage? client is rescanning the server? [I'm on 10.04]
[22:05] <beuno> karni, yes, 10.04 has a slow server-side rescan
[22:05] <__lucio__> karni, 10 seconds for me
[22:05] <beuno> 10.10 is super quick
[22:06] <karni> __lucio__: thanks, let me recheck
[22:06] <karni> beuno: aha, thanks
[22:06] <__lucio__> karni, as beuno said, if you have 10.04, then forget about it, its a pain.
[22:07] <karni> __lucio__: now it's fine. <10secs for sure. looks like your plan is working. if I notice a major slowdown, I'll ping you.
[22:07] <karni> __lucio__: ACK. I was just testing how fast my PC syncdaemon does connect+auth (which was fast)
[22:09] <karni> __lucio__: hmm. it took 15 secs (still, not bad). it's pretty nondeterministic considering the different server load at any given moment. however, still slower than the PC.
[22:09] <karni> anyway, thanks guys. back to work.
[22:13] <__lucio__> karni, what took 15 seconds?
[22:13] <karni> __lucio__: conn+auth+set_caps . forget it, it's still close to 10, which you had :)
[22:14] <karni> __lucio__: I'll let you know when it gets considerably slower :)
[22:14] <karni> (hopefully not ^^)
[22:14] <__lucio__> cool
[22:14] <karni> __lucio__: thanks for your support and feedback
[22:15] <__lucio__> np
[22:34] <dobey> later all, out for now
[22:34] <karni> bye dobey