[00:00] <karni> shane4ubuntu: you can do that in ubuntu preferences. right?
[00:00] <karni> ok, then you're safe
[00:00] <shane4ubuntu> karni: I have all my files backed up, because u1 was eating my files randomly anyway.
[00:00] <karni> you can remove your files from webUI or from a computer (or move them), that is still connected
[00:00] <karni> and let it sync up the meta (so they'll get removed from the cloud)
[00:01] <karni> shane4ubuntu: you will find scripts to remove your couchDB data from the cloud in the link from the topic
[00:01] <shane4ubuntu> karni: I still have to un-subscribe my folders on the laptop, so I don't want to touch the WebUI for a few days, I will clean that up later.
[00:01] <karni> shane4ubuntu: it's always good to make a backup, though, if you keep your contacts synced with couchDB or something. you know, so that you're safe
[00:02] <karni> shane4ubuntu: when you disconnect all the machines, you can clean up under webUI, and remove the logs
[00:02] <shane4ubuntu> ok, but basically I can sudo apt-get remove --purge ubuntuone
[00:02] <karni> shane4ubuntu: you'll find them under ~/.local/share/ubuntuone and desktop-couch
[00:02] <shane4ubuntu> karni: ok, thanks!
[00:02] <karni> shane4ubuntu: I think you could do that, yes
[00:02] <karni> shane4ubuntu: mind you, I'm giving those instructions for the first time
[00:02] <karni> shane4ubuntu: so the last thing I need is you losing some data because of me ;)
[00:03] <shane4ubuntu> karni: lol, I do have my files backed up, so not to worry too much.
[00:03] <karni> ok shane
[00:03] <karni> shane4ubuntu: I hope you'll get back to U1 one day, when it's more rock solid and suits your needs
[00:04] <shane4ubuntu> karni: I will be back, I'm just pretty disappointed in some stuff, I filed bugs, and everyone is too busy with other Ubuntu stuff to be dedicated to u1
[00:05] <karni> shane4ubuntu: it might be because the majority of the team was away for the last week, but I understand you.
[00:05] <karni> shane4ubuntu: I hope your bugs will get fixed soon, anyway :) thanks for your support and input
[00:05] <shane4ubuntu> karni: I tried to help troubleshoot, but the bug has been unattended for a while now, and I don't have time to mess with it if there isn't the interest in fixing it.
[00:05] <karni> shane4ubuntu: that's appreciated
[00:05] <karni> shane4ubuntu: i hope that'll change soon :)
[00:06] <shane4ubuntu> karni: I appreciate the help that you have given, and I know others have helped as well, and it was appreciated, just don't seem to be making headway.
[00:06] <sircram> hello, is there a UbuntuOne music streaming app for iphone?
[00:07] <karni> sircram: I'm pretty sure there is, but I might be wrong
[00:07] <karni> duanedesign: ↑
[00:07] <karni> beuno: ↑
[00:08] <sircram> I didn't see one in the appstore, so I was worried
[09:05] <toros> hi
[09:07] <toros> is something wrong with the u1 servers?
[11:31] <duanedesign> toros: hello
[11:31] <duanedesign> toros: still having an issue?
[11:48] <toros> duanedesign: I will check it
[11:50] <duanedesign> kk
[11:53] <toros> "WARNING - Connection lost: Connection to the other side was lost in a non-clean fashion." :(
[11:53] <toros> I'll try to restart the u1sdtool
[11:55] <toros> it is still rather slow...
[11:59] <toros> the connection has been closed again :(
[12:00] <toros> by the way, I had the same issue at home yesterday evening
[12:00] <toros> with a different u1 account
[12:10] <hrw> hi
[12:10] <nessita> toros: hi there. Yesterday we were updating our servers, so the connections were dropped
[12:11] <nessita> toros: today, can you check in your log file if you have something like this? "ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' failed with the error: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')] and was handled with the event: SYS_UNKNOWN_ERROR"
[12:11] <nessita> toros: log file is located at ~/.cache/ubuntuone/log/syncdaemon.log
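A hedged sketch of the check nessita is asking for — only the log path and the error text come from the conversation; the helper function is invented for illustration:

```python
# Hypothetical helper (not part of u1sdtool) that scans syncdaemon.log
# for the handshake failure nessita quotes above.
import os

LOG_PATH = os.path.expanduser("~/.cache/ubuntuone/log/syncdaemon.log")

def find_ssl_failures(lines):
    """Return the log lines matching the reported SSL handshake error."""
    return [line for line in lines
            if "protocol_version" in line and "ssl handshake failure" in line]
```

If the file exists, `find_ssl_failures(open(LOG_PATH))` would list the matching lines.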
[12:15] <toros> 2010-11-24 13:13:41,183 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' finished OK.
[12:16] <hrw> any ideas with bug 558712/
[12:16] <ubot4`> Launchpad bug 558712 in ubuntuone-servers "N900 fails to PhoneSync with error 401 (affects: 6) (heat: 18)" [Wishlist,Triaged] https://launchpad.net/bugs/558712
[12:17] <nessita> beuno: would you have a pointer for hrw?
[12:17] <toros> nessita: the syncdaemon starts syncing but after a while it drops the connection
[12:18] <toros> it uploaded 3 files (out of 12) in 25 minutes
[12:18] <beuno> hrw, hi  :)
[12:18] <beuno> the N900 isn't supported
[12:18] <toros> (614 KB)
[12:19] <hrw> beuno: is there something other than gnomebuntu/android/windows supported then?
[12:20] <nessita> toros: yes, we're currently debugging what is going on, we're getting similar disconnections ourselves
[12:20] <hrw> beuno: saying 'n900 is not supported' is cheap. is looking into why it got HTTP 401 that hard?
[12:22] <toros> ohh, ok. thank you!
[12:22] <nessita> toros: thank you!
[12:23] <nessita> mandel or vds: would you be available for a review?
[12:23] <beuno> hrw, right, so, phone sync is provided by a third party application (funambol)
[12:23] <vds> nessita: sure
[12:23] <beuno> hrw, we know it fails with N900's
[12:23] <beuno> it is a problem in the client implementation of syncml
[12:24] <hrw> beuno: scheduleworld also uses funambol but it syncs with n900
[12:24] <hrw> with same app
[12:24] <beuno> hrw, right, we have escalated it to Funambol many times
[12:24] <beuno> hrw, the problem is outside of our code
[12:24] <nessita> vds: Thanks!!! https://code.launchpad.net/~nataliabidart/ubuntu-sso-client/split-gui/+merge/41369
[12:25] <beuno> hrw, and our main focus is android and iphone, so no work on our part is going to go into other OSes
[12:26] <hrw> ok
[12:30] <rye> FYI: A server-side issue has been discovered that causes the clients to disconnect after some period of time. We are working on diagnosing and fixing this issue at the moment.
[12:31] <toros> rye: can I share this information on identica/twitter?
[12:32] <toros> other people might be affected too... :)
[12:35] <rye> toros, sending that message to !ubuntuone group @ identi.ca atm
[12:36] <toros> rye: thank you!
[12:38] <hrw> btw - how to get to ubuntuone filesystem from shell app?
[12:40] <hrw> sorry - found
[12:41] <duanedesign> u1sdtool ?
[12:49] <Chipaca> toros: rye: could you try to connect now and tell me how it goes?
[12:53] <toros> I restarted the syncdaemon... let's see how it works
[12:55] <diverse_izzue> Hi all, honk, need help. I have a U1 installation which is messed up in that it hangs upon initialisation. We looked at that together in the past, but could not figure out what's going wrong. I'm about to leave for a longer trip and need U1 to work, and I don't have time for debugging. Could I please have my account reset and start from scratch?
[12:55] <rye> Chipaca, 'checking client version',
[12:55] <rye> diverse_izzue, hangs upon initialization - during connection or when syncdaemon starts up?
[12:55] <rye> diverse_izzue, if the first one - we are now looking into this since that affects all the users
[12:56] <diverse_izzue> rye, it's not a very recent problem, we looked at that a month ago or so but then i gave up on it and left it disabled since
[12:57] <diverse_izzue> rye, if i remember correctly it failed to crawl some of the folders, just started the daemon, will see what happens
[12:58] <diverse_izzue> rye, where are the logs again?
[12:58] <rye> diverse_izzue, ~/.cache/ubuntuone/log/syncdaemon.log
[12:58] <toros> 2010-11-24 13:57:45,182 - ubuntuone.SyncDaemon.ActionQueue - WARNING - Connection lost: Connection to the other side was lost in a non-clean fashion.
[12:58] <toros> 2010-11-24 13:57:45,177 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection lost, reason: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionLost'>: Connection to the other side was lost in a non-clean fashion.
[13:05] <hrw> btw - is the U1 music store available only via banshee, or are there other methods?
[13:10] <diverse_izzue> rye, so far it doesn't fail in the way it did earlier on, but disconnects very often, is that the issue you say you are investigating?
[13:10] <rye> diverse_izzue, exactly
[13:10] <JamesTait> hrw: Also from within Rythmbox
[13:10] <hrw> ok, so not for me - kubuntu
[13:10]  * JamesTait throws another 'h' in there.
[13:13] <diverse_izzue> rye, so basically this seems to be a bad point in time to try and figure out my other issue...
[13:17] <alecu> hi all
[13:19] <nessita> hi alecu
[13:19] <nessita> alecu: so, how can I help you?
[13:19] <nessita> (btw, you forgot to do the review yesterday?)
[13:20] <alecu> nessita, oh, I forgot, yeah.
[13:20] <nessita> alecu: no problem I chased someone else
[13:20] <alecu> nessita, sorry.
[13:20] <nessita> :-)
[13:20] <alecu> nessita, is it already done?
[13:20] <nessita> yes, thanks
[13:20] <alecu> ok, great.
[13:20] <alecu> so, zeitgeist
[13:21] <alecu> I've been looking at syncdaemon, and I'm trying to understand the best place to log the events into zg
[13:21] <alecu> nessita, ^
[13:21] <nessita> aja
[13:22] <alecu> nessita, so, it can be either at the time that SD events are sent to the event queue, or by subscribing to those events
[13:22] <alecu> nessita, also, I'm trying to think of a good layering to have SD not depend much on zeitgeist
[13:23] <nessita> alecu: what about:
[13:23] <alecu> nessita, but my first idea would mean having one function per logged event, and that does not scale.
[13:23] <alecu> so...
[13:23] <alecu> shall we mumble?
[13:23] <nessita> alecu: having a new module (even installable in another package?) that defines the handle_EVENT_NAME?
[13:24] <alecu> right
[13:24] <rye> Chipaca, no, disconnecting as usual
[13:24] <nessita> alecu: we can certainly use some python power to avoid programming several handle_bla
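The "python power" nessita alludes to could be a single getattr-based dispatch, so only events needing special treatment get their own handle_* method. All names below are invented, not syncdaemon's real subscriber API:

```python
# Sketch of a subscriber that needs no per-event boilerplate: dispatch()
# looks for a specific handle_<EVENT> method and otherwise falls back to
# one generic handler.
class ZeitgeistListener:
    def __init__(self):
        self.logged = []

    def handle_default(self, event_name, **kwargs):
        # Single catch-all instead of dozens of handle_FOO methods.
        self.logged.append((event_name, kwargs))

    def dispatch(self, event_name, **kwargs):
        specific = getattr(self, "handle_" + event_name, None)
        if specific is not None:
            specific(**kwargs)
        else:
            self.handle_default(event_name, **kwargs)
```

getattr with a default avoids both an explicit registry and a try/except AttributeError around every event.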
[13:25] <alecu> nessita, what is the SD queue that was recently disabled because of performance issues?
[13:25] <hrw> http://marcin.juszkiewicz.com.pl/2010/11/24/ubuntu-one-good-or-bad/
[13:25] <Chipaca> rye: right, we need to roll back to the previous version while we debug this. We're leaking memory all over the place for some strange reason.
[13:25] <nessita> alecu: hum, I'm not aware of any queue being disabled
[13:26] <nessita> alecu: I know content queue has an exponential growth in trunk syncdaemon
[13:26] <alecu> nessita, hmmm... what about some events (perhaps dbus events) that were disabled?
[13:26] <nessita> but is not disabled, afaik
[13:26] <popey> twice today I have had the "Temporary maintenance" page come up when I go to one.ubuntu.com/files - refreshing the page "fixes" it
[13:26] <alecu> oh, I heard some of that recently. perhaps verterok or facundobatista can shed some light?
[13:26] <nessita> alecu: I don't know about that either :-) shall we ping them?
[13:27] <nessita> alecu: maybe when they woke up? :-P
[13:28] <alecu> nessita, right. they must still be in Disney Time.
[13:28] <nessita> lucky them
[13:28] <alecu> nessita, so, a new module that listens for all (event/action?) queue events, and handle_SOMETHING them?
[13:29] <nessita> alecu: the listen setup is only one, no need to list events of interest
[13:29] <alecu> nessita, I assume that those events have the live syncdaemon objects passed into them, so we can fill up all zg fields.
[13:30] <nessita> alecu: and yes, handle_FOO. No syncdaemon object attached, just a few parameters
[13:30] <nessita> alecu: each event may have a particular set of parameters, you can find that listing in event.py
[13:30] <JamesTait> popey: Hi! We're having a spot of bother at the moment, see https://wiki.ubuntu.com/UbuntuOne/Status#Files
[13:30] <nessita> alecu: ubuntuone/syncdaemon/events.py
[13:30] <alecu> lookin
[13:30] <popey> ah, ok
[13:31] <nessita> alecu: sorry, event_queue.py
[13:31]  * nessita realizes that module should be named events.py :-P
[13:31] <alecu> for instance: 'AQ_FILE_NEW_OK': ('volume_id', 'marker', 'new_id', 'new_generation')
[13:31] <JamesTait> popey: We are working hard to rectify it, although I don't currently have an ETA.
[13:33] <nessita> alecu: yes
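The table alecu quotes from event_queue.py maps each event name to its parameter tuple. The push() below is an invented illustration of how such a table can validate the kwargs an event is fired with; only the 'AQ_FILE_NEW_OK' entry is taken from the conversation:

```python
# Event-name -> expected-parameters table, in the style alecu quotes.
EVENTS = {
    'AQ_FILE_NEW_OK': ('volume_id', 'marker', 'new_id', 'new_generation'),
}

def push(event_name, **kwargs):
    # Reject events fired with missing or unexpected parameters.
    expected = set(EVENTS[event_name])
    if set(kwargs) != expected:
        raise TypeError("%s expects %s, got %s" %
                        (event_name, sorted(expected), sorted(kwargs)))
    return (event_name, kwargs)
```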
[13:34] <alecu> nessita, let's say I'm handling that event, and sending a zg "File was created on the server, syncdaemon creating locally" event.
[13:34] <alecu> nessita, zg needs the local path, and the current HD partition uuid.
[13:34] <alecu> nessita, is there a way to get at that info?
[13:35] <nessita> alecu: that we need to consult with chicharra team. But I wonder: can't we store custom info?
[13:36] <nessita> alecu: I mean, why would we need the HD partition uuid?
[13:36] <alecu> nessita, we can store custom info in zg events, but it would make no sense for the programs reading it.
[13:36] <nessita> alecu: can we leave some fields empty?
[13:36] <alecu> nessita, (the programs == "gnome-activity-journal")
[13:37] <alecu> nessita, and the aggregation program thisfred will be working on.
[13:38] <alecu> nessita, I wouldn't worry about the partition. I can get the partition if I can get at the local file uri.
[13:38] <nessita> alecu: can we leave fields empty? since in the case of U1, every single file/folder would be located under home
[13:38] <alecu> nessita, and that's a piece of info I think we *must* store.
[13:39] <alecu> nessita, then I can query the partition at the start of syncdaemon and cache it for the rest of the sd life. Don't worry about the partition.
[13:39] <alecu> nessita, let's worry about the filename
[13:39] <nessita> alecu: the path can be retrieved, yes, by accessing filesystem_manager
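alecu's plan — query the partition once when the daemon starts and reuse the cached value — could be sketched like this. The class is invented, and `st_dev` stands in for a real partition uuid:

```python
# Minimal once-at-startup cache, per alecu's idea above.
import os

class PartitionCache:
    def __init__(self, root):
        # Queried once, cached for the life of the process.
        self._dev = os.stat(root).st_dev

    @property
    def device(self):
        return self._dev
```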
[13:40] <nessita> example is located at: (looking)
[13:42] <nessita> alecu: still looking :-)
[13:43] <nessita> alecu: can't find the code right now but you can query volume_manager
[13:43] <nessita> alecu: that module tracks the metadata for every node
[13:44] <verterok> nessita, alecu: hi
[13:44] <verterok> what are you looking for?
[13:44] <nessita> verterok: we would need the path of file/folder when handling events
[13:44] <verterok> nessita: which events?
[13:44] <nessita> verterok: and right now, the events pass the volume_id and node_id
[13:44] <nessita> verterok: can we access the path somehow? I'm thinking that volume_manager has that info
[13:45] <nessita> verterok: any event that involves a file/folder/share/etc
[13:45] <verterok> nessita: volume_manager only knows about volumes
[13:45] <verterok> nessita: what events?
[13:45] <nessita> verterok: any event that involves a file/folder/share/etc
[13:45] <hrw> bye
[13:46] <verterok> nessita: event is a bit fuzzy, a FS_FILE_CLOSE_WRITE, SV_FILE_DELETED, dbus signal?
[13:46] <nessita> verterok: we would track events for creation/deletion of files/folders both happening locally and remotely
[13:47] <nessita> verterok: so, we'll use SV_ and AQ_ levels, I think
[13:47] <verterok> nessita: ok, but those are two different things
[13:48] <verterok> nessita: we now get deltas for the changes on the server, and fire the SV_FILE|DIR events internally (sync.py)
[13:48] <verterok> nessita: what are the AQ events?
[13:48] <verterok> *levels
[13:48] <nessita> verterok: we have no prior list right now
[13:49] <nessita> alecu: or do we?
[13:49] <alecu> verterok, would you mind taking a look at the index on this spec? https://wiki.ubuntu.com/UbuntuOne/Specs/ZeitgeistIntegration/EventsSpec
[13:49] <alecu> nessita, prior list? a list of prior events?
[13:49] <nessita> alecu: I mean, a list of events from event_queue that we will be using
[13:49] <alecu> nessita, not at all.
[13:50] <nessita> right
[13:51] <nessita> verterok: after a quick review, we would be interested in AQ_ and SV_ events
[13:51] <verterok> nessita: ok(?)
[13:51] <nessita> verterok: I didn't understand what you've said about generations and fire
[13:51] <verterok> nessita: nm
[13:51] <nessita> verterok: ideally, we would be defining new handle_BLA for the events we care about
[13:51] <verterok> nessita: a new subscriber with handle_EVENT_NAME
[13:52] <dobey> verterok: that stable-1-4 branch is just having the same build issue as trunk was, on narwhal.
[13:52] <verterok> dobey: k, thanks
[13:52] <dobey> verterok: and why did you resubmit the trunk branch; it was not necessary to do so
[13:52] <verterok> dobey: it was complaining about needing another review :/
[13:52] <verterok> tarmac ^
[13:52] <nessita> dobey: FYI, verterok reported bug #680719 and I assigned to you
[13:52] <ubot4`> Launchpad bug 680719 in ubuntuone-client "landing branches for stable-1-4 is broken! (affects: 1) (heat: 6)" [Critical,Confirmed] https://launchpad.net/bugs/680719
[13:53] <verterok> nessita: please please, don't mix this thingy into an existing EQ subscriber :)
[13:53] <nessita> verterok: what?
[13:53] <nessita> verterok: we would be building a new module, that subscribes a new listener
[13:53] <alecu> verterok, you mean creating a separate subscriber, right?
[13:53] <verterok> yes
[13:53] <dobey> verterok: yes because you set it to approved before launchpad rescanned, so it tried to merge the old revision
[13:53] <verterok> I'ld like to be able to completely turn it off, with a config option
[13:54] <verterok> dobey: ahh, ok
[13:54] <nessita> verterok: ideally, we won't be editing existing code
[13:54] <alecu> verterok, yes. And we would shut that off if zg is not installed or if you have turned it off with an option.
[13:54] <verterok> alecu: :)
[13:54] <verterok> nessita: *ideally*
[13:54] <verterok> :)
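The off-switch verterok asks for could look like this: create the listener only when zeitgeist is importable AND a config flag allows it. The module probe and the option name are assumptions, not the real syncdaemon config schema:

```python
# Probe once at import time whether zeitgeist support is even possible.
try:
    import zeitgeist  # noqa: F401 -- availability probe only
    ZG_AVAILABLE = True
except ImportError:
    ZG_AVAILABLE = False

def maybe_create_listener(config):
    # Both conditions must hold; otherwise zg logging is fully off.
    if not ZG_AVAILABLE or not config.get("zeitgeist_enabled", True):
        return None
    return object()  # stand-in for the real subscriber object
```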
[13:55] <dobey> ok, brb; must install a pci sound card
[13:55] <nessita> dobey: we have stand up in 5
[13:55] <alecu> verterok, so, I heard about some events that were recently disabled for performance reasons.
[13:55] <nessita> (just FYI)
[13:55] <alecu> verterok, were those just dbus events?
[13:55] <dobey> nessita: standup or mumble?
[13:55] <nessita> dobey: I didn't get any invite
[13:56] <verterok> alecu: no, dbus_interface doesn't send events, it handle events and send signals :)
[13:56] <alecu> verterok, oh, right.
[13:56] <nessita> which reminds me that Chipaca said he would be re-setting the weekly call
[13:56] <verterok> alecu: just to make it clear, let's name dbus stuff signals ;)
[13:56] <dobey> nessita: i thought we agreed last week to do the mumble meeting today this week, and thursday from now on
[13:56] <verterok> alecu: and the internal events: just events
[13:56] <nessita> dobey: we didn't set a time for today
[13:56] <alecu> verterok, so, I heard about some *signals* that were recently disabled for performance reasons.
[13:56] <dobey> nessita: but if it's not mumble then i will wait to fix my sound
[13:56] <dobey> irc i can do
[13:56] <nessita> dobey: let's have the standup in 4 minutes and schedule the weekly call
[13:56] <verterok> alecu: I'm not aware of such thing
[13:56] <nessita> dobey: thanks
[13:57] <verterok> alecu: we disabled sending a dbus signal for each internal event a long time ago
[13:57] <nessita> alecu: me neither. You heard from who?
[13:57] <verterok> alecu: because it's really slow
[13:57] <alecu> verterok, oh, it must be that.
[13:58]  * dobey tries to figure out what all he did yesterday
[13:58] <alecu> verterok, but it was only disabled at the dbus level, right?
[13:58] <verterok> alecu: dbus_interface.py line 520
[13:58] <verterok> alecu: yes
[14:01] <dobey> grrrrrrrrrr
[14:03] <nessita> me
[14:03] <nessita> alecu, CardinalFang, thisfred, mandel, dobey, Chipaca: stand up?
[14:03] <nessita> vds: ?
[14:03] <mandel> me
[14:03] <vds> me
[14:03] <alecu> nessita, didn't see your "me"
[14:03] <alecu> me
[14:03] <CardinalFang> me
[14:03] <alecu> oh, just did.
[14:03] <thisfred> me
[14:04] <verterok> alecu: there isn't an event for conflicts
[14:04] <dobey> nessita: please do not circumvent tarmac like that again.
[14:04] <nessita> dobey: I didn't, I ran tarmac locally
[14:04] <dobey> yes, you did
[14:05] <nessita> DONE: branches for bug #627496 and bug #677518, desktopcouch reviews
[14:05] <nessita> TODO: build new package of ussoc, try to land natty package for u1cp, help alecu with zg
[14:05] <nessita> BLOCKED: nopes
[14:05] <nessita> NEXT: mandel
[14:05] <nessita> NOTES: tomorrow and Friday I'm not coming
[14:05] <dobey> you ran it locally because the one configured to land branches did not go exactly according to your desire.
[14:05] <ubot4`> Launchpad bug 627496 in ubuntu-sso-client (Ubuntu) (and 1 other project) "Registration screen looks cramped when big fonts selected (affects: 2) (dups: 1) (heat: 51)" [High,Triaged] https://launchpad.net/bugs/627496
[14:05] <ubot4`> Launchpad bug 677518 in ubuntu-sso-client (Ubuntu) (and 1 other project) "Split gui code to allow other implentations (affects: 1) (heat: 817)" [High,Triaged] https://launchpad.net/bugs/677518
[14:05] <dobey> regardless of how you did it, you did it.
[14:05] <mandel> DONE: add KWallet support, worked on windows code for desktopcouch
[14:06] <mandel> TODO: file bugs regarding the above mentioned code (sorry)
[14:06] <mandel> BLOCKED no
[14:06]  * mandel looks at vds
[14:06] <dobey> meh
[14:06] <vds> DONE:  fixed the patch of python-couchdb to port the oauth session from DC to python-couchdb, code review, installing natty
[14:06] <vds> TODO: land the branch for #675551, do some more porting to python-couchdb
[14:06] <vds> BLOCKED: not at all
[14:06] <vds> alecu: please
[14:06] <alecu> DONE: discussed on #zeitgeist bug #676090. Started working on bug #674252, a branch for review: https://code.launchpad.net/~alecu/ubuntuone-client/add-simple-zeitgeist/+merge/41667
[14:06] <alecu> TODO: more discussion on the above two bugs, on #ubuntuone and on #zeitgeist
[14:06] <alecu> BLOCKED: no
[14:06] <alecu> Cardinalfang: the last to jump gets to code in C++... Geronimooooo!!!!
[14:06] <CardinalFang> DONE: Bug #680929.  Poked at zeitgeist to see how hard it would be to add to d-c now; decided against for now.  Got python-couchdb packaged and pushed to PPA; not built yet.
[14:06] <CardinalFang> TODO: Merges.  Half day.  Remainder this weekend, packaging.
[14:06] <CardinalFang> BLOCKED: No
[14:06] <ubot4`> Launchpad bug 676090 in ubuntuone-client (Ubuntu) (and 1 other project) "Define the U1 Zeigeist ontology types for all events (affects: 1) (heat: 8)" [Undecided,In progress] https://launchpad.net/bugs/676090
[14:06] <ubot4`> Launchpad bug 674252 in ubuntuone-client (Ubuntu) (and 1 other project) "Syncdaemon needs to store events into zeitgeist (affects: 1) (heat: 8)" [High,In progress] https://launchpad.net/bugs/674252
[14:06] <ubot4`> Launchpad bug 680929 in desktopcouch "some tests run against personal couchdb (affects: 1) (heat: 6)" [Medium,In progress] https://launchpad.net/bugs/680929
[14:06] <thisfred> DONE: Stage #2 of bug 510159 TODO: wrap that up and start on the debian package split, and fix last small issues BLOCKED: no
[14:06] <ubot4`> Launchpad bug 510159 in desktopcouch (Ubuntu) (and 1 other project) "Split desktopcouch in two: a records library that can be used on the server and a desktop application/library (affects: 1) (heat: 8)" [Medium,In progress] https://launchpad.net/bugs/510159
[14:07] <thisfred> 1 sec brb
[14:07] <dobey> λ DONE: banshee backports (lucid), fixed 680501, 680593, 680553
[14:07] <dobey> λ TODO: nightlies, banshee store tests, more releases
[14:07] <dobey> λ BLCK: None.
[14:07] <thisfred> re
[14:07] <nessita> we need to set a time for the weekly call that would take place today
[14:08] <nessita> is one hour from now ok with everyone?
[14:08] <mandel> nessita, kind of bad for me
[14:08] <mandel> nessita, I need to have lunch
[14:08] <nessita> mandel: oh gosh, you may be starving
[14:09] <nessita> mandel: how long would you need?
[14:09] <dobey> at least spamassassin recognizes all google meeting invites as spam
[14:09] <mandel> nessita, not too much, 4:30 would be nicer :)
[14:09] <dobey> he's Spanish, so he needs like 4 hours for lunch, you know
[14:10] <alecu> mandel, that's 1:20h from now, right?
[14:10] <nessita> mandel: 4:30 UTC?
[14:10] <mandel> nessita, yes
[14:10] <nessita> Chipaca: is that time good for you? ^
[14:10] <nessita> what about the rest of the crowd?
[14:10] <alecu> mandel, that's not 1:20h from now!
[14:11] <mandel> nessita, if not, I can wait, no worries. dobey: the Spanish thing is to have lunch late
[14:11] <mandel> ok, no big deal, we can have it in an hour
[14:11] <dobey> mandel: yes, and siesta after :)
[14:11] <alecu> it's 14:11 gmt now, right?
[14:11] <nessita> mandel: 1:30 from now you meant? or 4:30 UTC? :-)
[14:11] <mandel> I've gotten myself all mixed up!
[14:11] <alecu> dobey, hahaha
[14:12] <mandel> ok, we do it when you said, I do not care about lunch or anything
[14:12] <dobey> when the planets are aligned in full eclipse
[14:12] <dobey> is a good time for me
[14:12] <nessita> mandel: 2 hours from  now?
[14:12] <nessita> and that's my final offer :-P
[14:13] <mandel> nessita, whenever you want
[14:13] <nessita> actually, 1:45 from now
[14:13] <mandel> I'll adapt
[14:13] <mandel> ok
[14:13] <nessita> that being 4pm UTC
[14:13] <mandel> nessita, so 5 CET
[14:13] <mandel> or something like that
[14:13] <nessita> alecu, CardinalFang, dobey, thisfred, vds, Chipaca: weekly call at 4pm UTC?
[14:14] <alecu> nessita, +1
[14:14] <vds> nessita: sure
[14:14] <Chipaca> +1
[14:14] <CardinalFang> okay.
[14:16] <thisfred> nessita: +1, that's in 1h~45m right?
[14:17] <nessita> Chipaca: is there anyone else with access to the "official" tarmac? we need someone else able to access it if dobey is not around
[14:18] <dobey> what the hell does that mean?
[14:18] <dobey> it's a cron job
[14:19] <nessita> dobey: today tarmac wasn't landing a branch because it wasn't happy with the approvers (not sure why). I'm asking who else can debug this issue (or issues like this) when you're not around
[14:19] <nessita> alecu: question re your branch: ubuntuone/syncdaemon/zeitgeist_log.py is a module inside syncdaemon?
[14:20] <dobey> you didn't ask ANYONE to debug the issue; and didrocks is not a member of ubuntuone-hackers, so launchpad sees his review as one from community.
[14:20] <nessita> alecu: shouldn't that be outside syncdaemon, at the same (project) level?
[14:21] <nessita> dobey: as far as I know you're the only one with access to tarmac, and I knew it was way too early for you. Could anyone else have helped me?
[14:21] <alecu> nessita, yes, it makes more sense for it to go outside syncdaemon. fixing it
[14:21] <nessita> alecu: thanks
[14:22] <dobey> nessita: #tarmac perhaps. or you could have waited the hour or so for me to come on-line.
[14:23] <alecu> nessita, you are proposing to move it as ubuntuone.zeitgeist_log.py, right?
[14:23] <nessita> dobey: my point exactly. I think we need more than one person able to access tarmac.
[14:23] <dobey> nessita: and i don't even have "access" to all of the things being handled by tarmac
[14:24] <dobey> nessita: the stuff in the DC is not directly accessible by me
[14:24] <dobey> it requires an admin
[14:24] <nessita> alecu: I was thinking of a separate module
[14:24] <nessita> alecu: like ubuntuone/zeitgeist/
[14:24] <dobey> nessita: really, nobody should need to access tarmac. there is nothing with either of those two ussoc branches that having access to the machine running tarmac would have helped you to answer
[14:25] <nessita> dobey: so, for example, when we need to tweak a requirement, like not requiring every approver to be in u1hackers or similar, how do you usually fix that?
[14:25] <dobey> but circumventing the policies we're trying to enforce with tarmac, by simply running it yourself without the configuration, is not the correct course of action for any problem
[14:26] <nessita> dobey: can you please answer my question?
[14:26] <dobey> nessita: that requirement is not specifiable in the tarmac config
[14:26] <nessita> dobey: where is it specifiable?
[14:26] <dobey> you can not say "oh this one person is ok to review"
[14:26] <nessita> dobey: where is it specifiable?
[14:27] <dobey> nessita: the isPersonValidReviewer() call on the Launchpad API's branch object for the target branch, is what determines if the review is valid for counting
[14:27] <nessita> dobey: why wasn't didrocks a valid reviewer?
[14:27] <dobey> nessita: because he is not a member of the review team or branch owner (or team).
[14:28] <nessita> dobey: how can we solve this in the future?
[14:28] <didrocks> (well, I reviewed more the functionnality than the integration in the code as I'll need it in the future)
[14:28] <dobey> nessita: you will need to more specifically describe the exact issue.
[14:29] <dobey> nessita: you could have gotten another u1-hackers person to review and approve it
[14:29] <nessita> dobey: I could, yes. I didn't see the point or gain on that
[14:30] <dobey> nessita: well it is our team's policy
[14:30] <nessita> dobey: it is? I wasn't aware of that
[14:30] <dobey> two reviews from team members, yes
[14:30] <dobey> the point is to cross-pollinate knowledge of different parts of our code, across our whole team
[14:31] <nessita> dobey: ok then. I have another question: the other day you said "I'll disable this new plugin so your branch can land". How do you accomplish that?
[14:33] <dobey> nessita: the plug-in in question is only used if a certain option is available, and i disabled that option. but i was trying to get that plug-in working for us at the time, because it is a valuable plug-in to have, and i was actively working on making it work. that is a different plug-in, and we have been using the review count policy for quite some time now
[14:33] <nessita> dobey: where can the option be disabled? is a config file? where that config file is located at?
[14:33] <nessita> I just want to understand the picture
[14:34] <dobey> yes, it's a config file. and obviously it is located on the machine where tarmac is running
[14:34] <dobey> granted, we actually have two separate instances of tarmac running currently.
[14:34] <dobey> one in DC, and one I'm maintaining due to the interdependent nature of some of our projects.
[14:35] <alecu> nessita, so verterok tells me that file and directory events are not mapped easily from SD to ZG
[14:36] <alecu> verterok, let's try to focus on one event, so we can understand the rest.
[14:36] <nessita> alecu: I may be outdated... why not? I see we have AQ_FILE_NEW_OK, AQ_DIR_NEW_OK\
[14:37] <verterok> alecu: not  mapped to dbus
[14:37] <alecu> nessita, verterok says we can easily look at dbus_interface for shares and UDFs... and not so easily for the rest.
[14:38] <alecu> verterok, so, let's focus on "File was created locally, syncdaemon creating on the server"
[14:38] <verterok> alecu: right, so. AQ_[FILE|DIR]_NEW_OK
[14:38] <verterok> alecu: that means the file/dir was created on the server
[14:38] <alecu> verterok, ok, it means it was successfully created on the server. cool.
[14:38] <dobey> ok, must get sound working again. brb
[14:39] <nessita> alecu: I think that maybe you're mixing DBus events with raw SD events
[14:39] <alecu> nessita, no, I'm not.
[14:40] <nessita> alecu: ok, so at the bus level we have no notification of file and folder events, but at the SD level we do
[14:40] <nessita> but maybe the params are not exactly ideal
[14:41] <alecu> nessita, verterok says we should look on dbus_interface.py to see what raw SD events generate the udf/shares dbus events. And use those for our udf/shares mapping.
[14:41] <alecu> nessita, so we are now mapping file and directory SD events to zg
[14:42] <verterok> alecu, nessita: AQ_[DOWNLOAD|UPLOAD]_FINISHED -> down/upload ok
[14:42] <alecu> verterok, so, from the list you think that "File synchronization conflict" is the one that's not propagated, right?
[14:42] <verterok> alecu: no, we don't even have an event for that
[14:43] <verterok> alecu: one that might be interesting is FS_INVALID_NAME
[14:43] <verterok> another one is SYS_BROKEN_NODE
[14:43] <verterok> nessita: ^
[14:44] <nessita> verterok: what is a SYS_BROKEN_NODE?
[14:44] <verterok> nessita: it's from generations, when SD can't apply a delta on a node
[14:44] <verterok> nessita: that's also logged in the broken-nodes log file
[14:45] <nessita> verterok: and how's that relevant to end users? i mean, if I'm a user, what does that mean? I lost the node? it will no longer be synced?
[14:45] <nessita> it was fixed?
[14:45] <verterok> nessita: it will not be synced, we have a problem with that node that we can recover from
[14:46] <verterok> nessita: the next delta might apply ok, but we don't know
[14:46] <verterok> ...at the time we find it's broken
[14:46] <verterok> nessita, alecu: for AQ_[FILE|DIR]_NEW_OK and AQ_[DOWNLOAD|UPLOAD]_FINISHED, that have a (share_id, node_id) you can use "fsm.get_by_node_id(share_id, node_id).path" to get the path
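The path lookup verterok describes could be sketched like this (a minimal, illustrative sketch: `FakeFSM` is a stand-in for syncdaemon's real filesystem manager, and `path_for_event` is a hypothetical helper showing the `fsm.get_by_node_id(share_id, node_id).path` pattern from the discussion):

```python
# Sketch (not the real syncdaemon API): resolve a local path from the
# (share_id, node_id) pair carried by AQ_[FILE|DIR]_NEW_OK and
# AQ_[DOWNLOAD|UPLOAD]_FINISHED.

class FakeMDObject:
    """Stand-in for the metadata object fsm returns."""
    def __init__(self, path):
        self.path = path

class FakeFSM:
    """Stand-in for syncdaemon's filesystem manager (fsm)."""
    def __init__(self):
        self._by_node = {
            ("", "node-1"): FakeMDObject("/home/user/Ubuntu One/thesis.odt"),
        }
    def get_by_node_id(self, share_id, node_id):
        return self._by_node[(share_id, node_id)]

def path_for_event(fsm, share_id, node_id):
    # The pattern from the discussion: fsm.get_by_node_id(...).path
    return fsm.get_by_node_id(share_id, node_id).path

print(path_for_event(FakeFSM(), "", "node-1"))  # /home/user/Ubuntu One/thesis.odt
```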
[14:46] <alecu> verterok, I think we won't be storing recoverable errors, but only definite errors.
[14:47] <alecu> verterok, cool.
[14:47] <alecu> verterok, and regarding those two sets of events...
[14:47] <alecu> verterok, when *_NEW_OK is sent, does it mean that the file was created on the server but with no content yet?
[14:48] <verterok> alecu: yes
[14:48] <alecu> verterok, that's a bit different of the way I was planning.
[14:49] <verterok> alecu: so, just listen to AQ_UPLOAD_FINISHED :)
[14:49] <alecu> verterok, ideally I would like to send the "syncdaemon creating on the server" event when the file is completely uploaded.
[14:49] <dobey> yay sound
[14:49] <alecu> verterok, I will need to listen to both!
[14:49] <verterok> alecu: not only that, there are cases when you will not know if the file was just created or it was updated
[14:50] <alecu> verterok, why?
[14:50] <verterok> alecu: e.g: the client creates a file: AQ_FILE_NEW_OK, starts uploading
[14:50] <verterok> alecu: the client dies/quit/whatever
[14:50] <alecu> right, I see.
[14:50] <verterok> alecu: the client starts again, you only get the AQ_DOWNLOAD_FINISHED :)
[14:51] <nessita> alecu, verterok: one thing. Listening to only one signal (the DOWNLOAD) can make the user think that nothing is happening
[14:51] <nessita> because between when the file was created and when the upload actually finished, a lot of time can pass
[14:52] <verterok> nessita: we have upload progress indicator signals
[14:52] <verterok> nessita: and events :p
[14:52] <verterok> we send those events via dbus
[14:52] <verterok> as signals
[14:52] <verterok> hehe
[14:52] <verterok> AQ_UPLOAD_FILE_PROGRESS
[14:52] <nessita> verterok: what's the event name?
[14:52] <nessita> ah!
[14:52] <alecu> but we won't be storing in-progress events in zg
[14:53] <alecu> and an "empty file created on server, but no contents there yet" is an in-progress event.
[14:53] <nessita> alecu: right, but we may need to store the NEW_OK and the DOWNLOAD_FINISHED
[14:53] <nessita> hum
[14:54] <nessita> I disagree a bit, but I understand your point
[14:54] <alecu> nessita, it's useless for the user to know that a bit of space was allocated on the server. :-)
[14:54] <verterok> alecu: no, it isn't in progress. the file was created
[14:54] <verterok> :)
[14:54] <nessita> alecu: and I think it was allocated with all the needed space
[14:54] <verterok> alecu: all this might (or not) change in the following weeks/months
[14:54] <alecu> verterok, the user wants to know when the contents of the file are finally on the cloud.
[14:54] <nessita> verterok: is that fruit or true? ^
[14:55] <alecu> jajajaj
[14:55] <verterok> nessita: which fruit? manzana?
[14:55] <verterok> nessita: we don't allocate space
[14:55] <alecu> any vegetable!
[14:55] <verterok> nessita: a file with no content uses 0 bytes of quota
[14:55] <alecu> (cualquier verdura?)
[14:55] <nessita> alecu: I agree. But the user also needs to know that the file is being synched. I also think we can leave this for later in the cycle, that meaning: let's handle DOWNLOAD_FINISHED for now
[14:56] <nessita> :-)
[14:56] <alecu> The user just cares about his thesis being on the cloud so he can finally close that damn laptop.
[14:56] <verterok> let's ask a user? :)
[14:56] <alecu> :-)
[14:56] <nessita> alecu: that's completely acceptable for now
[14:58] <nessita> alecu: so, 'AQ_DOWNLOAD_FINISHED': ('share_id', 'node_id', 'server_hash'), and with what verterok said fsm.get_by_node_id(share_id, node_id).path we get the path
[14:58] <alecu> nessita, right. But the only zg event we can send with that is MODIFY_EVENT
[14:58] <nessita> alecu: why?
[14:59] <alecu> we have to track two events in order to send the CREATE_EVENT
[14:59] <alecu> "track two SD events"
[14:59] <alecu> and even then, we are not sure if SD disconnected...
[14:59] <nessita> alecu: why is DOWNLOAD_FINISHED info only enough for a modify event?
[15:00] <alecu> nessita, let's look again at "File was created locally, syncdaemon creating on the server"
[15:00] <alecu> nessita, how should we report that?
[15:00] <nessita> alecu: report where? you mean how shall we store that in zg?
[15:01] <alecu> yes, "report" as in store it in zg.
[15:01] <nessita> alecu: I have no ZG foo to answer that other than the wikipage
[15:02] <nessita> alecu: you tell me what info you need and I'll try to provide it
[15:02] <alecu> hmm
[15:02] <nessita> alecu: we have path, and the HD uuid you said was calculable
[15:02] <alecu> let's say a AQ_DOWNLOAD_FINISHED arrives
[15:03] <nessita> yes
[15:03] <alecu> how do we know if we need to store a CREATE_EVENT or a MODIFY_EVENT ?
[15:03] <nessita> ah, I see your point
[15:03] <alecu> we need to save (in a python dict, perhaps) all pending  AQ_FILE_NEW_OK
[15:04] <alecu> and check for each AQ_DOWNLOAD_FINISHED if a pending AQ_FILE_NEW_OK was there
[15:04] <alecu> so:
[15:04] <alecu> we need more code than I would have liked
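The dict-based bookkeeping alecu sketches above might look roughly like this (illustrative only: it pairs AQ_FILE_NEW_OK with the matching AQ_UPLOAD_FINISHED as agreed later in the discussion; `CREATE_EVENT`/`MODIFY_EVENT` stand in for the Zeitgeist interpretations, and the handler names are hypothetical):

```python
# Sketch: remember pending AQ_FILE_NEW_OK events and, when the matching
# upload finishes, decide between a create and a modify event.

CREATE_EVENT = "create"
MODIFY_EVENT = "modify"

class EventMapper:
    def __init__(self):
        # (share_id, node_id) pairs seen in AQ_FILE_NEW_OK, awaiting upload
        self.pending_new = set()

    def handle_AQ_FILE_NEW_OK(self, share_id, node_id):
        self.pending_new.add((share_id, node_id))

    def handle_AQ_UPLOAD_FINISHED(self, share_id, node_id):
        # If we saw the NEW_OK first, the upload completed a brand new file.
        if (share_id, node_id) in self.pending_new:
            self.pending_new.discard((share_id, node_id))
            return CREATE_EVENT
        # Otherwise we only know the content changed; as noted in the
        # discussion, a client restart mid-upload means a create can be
        # misreported as a modify.
        return MODIFY_EVENT
```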
[15:04] <nessita> alecu: you were expecting a single event?
[15:05] <alecu> nessita, ideally yes :-)
[15:05] <alecu> nessita, but it's not so complex *yet*.
[15:05] <nessita> alecu: hopefully for the rest of the events we'll get luckier
[15:05] <nessita> alecu: are you taking notes somewhere? can I contribute? googledoc maybe?
[15:07] <alecu> nessita, let's googledoc, right!
[15:07] <alecu> creating.
[15:08] <verterok> alecu, nessita: we could add an extra arg to the event to signal if it's the "first" upload or just an update :)
[15:09] <verterok> it's a hack, but it might work :p
[15:09] <nessita> verterok: how easy is it to do that?
[15:09] <verterok> in AQ, we could check if the previous server_hash is != ""
[15:10]  * nessita browses code
[15:10] <verterok> actually if it's not bool(previous_server_hash)
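verterok's AQ-side check, sketched (assuming the previous server hash is an empty string, or None, when the node never had content on the server; the helper name is made up for illustration):

```python
# Sketch of the "first upload?" test suggested above: a node whose
# previous server hash is falsy has never had content uploaded, so the
# finishing upload is its first one.

def is_first_upload(previous_server_hash):
    # "" (or None) means no content was ever uploaded for this node
    return not bool(previous_server_hash)
```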
[15:10] <verterok> door bell, bbiab
[15:12] <nessita> verterok: this would be DownloadFinishedNanny ?
[15:18] <alecu> nessita, seems like the nanny, yes.
[15:19] <nessita> alecu: I'm just seeing that we may not need new code there. The server_hash is sent in the event
[15:19] <nessita> alecu, verterok: what if we check the server_hash in our zg logger?
[15:20] <alecu> nessita, right! the nanny is for the download file dance
[15:20] <nessita> yes
[15:20] <alecu> nessita,  'AQ_UPLOAD_FINISHED': ('share_id', 'node_id', 'hash', 'new_generation'),
[15:20] <nessita> hum
[15:20] <alecu> nessita, perhaps we can look at the "generation" number?
[15:21] <nessita> alecu: perhaps. verterok. can we? (we need to confirm semantics)
[15:23] <verterok> nessita: the generation num is sent by the server
[15:23] <verterok> nessita: how will you know something from the gen num?
[15:23] <nessita> verterok: can that indicate if the file/folder is new or not?
[15:23] <alecu> verterok, does it start at 0 for each server?
[15:23] <verterok> nessita: not at all
[15:23] <alecu> verterok, does it start at 0 for each file?
[15:23] <verterok> alecu: nope
[15:25] <nessita> verterok: so, with AQ_DOWNLOAD_FINISHED, can we check the server_hash in our own listener to decide if it's new or not?
[15:30] <nessita> verterok: ping?
[15:30] <alecu> nessita, look at signal_event_with_hash in sync.py
[15:31] <alecu> nessita, def build_hash_eq(...)
[15:32] <alecu> it seems to be using a two letter code, made up from the first letters of "True" and "False"
[15:33] <nessita> alecu: that's for the spreadsheet, not sure if that's being sent in this event? I'll try an IRL event
[15:37] <nessita> alecu: I'm getting:
[15:37] <nessita> 2010-11-24 12:37:16,528 - ubuntuone.SyncDaemon.EQ - DEBUG - push_event: AQ_DOWNLOAD_STARTED, args:(), kw:{'share_id': '', 'node_id': 'f324de5b-259c-43d9-96ff-394cc8aacc44', 'server_hash': 'sha1:c00a437101275f3aa9b762643dc84e584a521972'}
[15:37] <nessita> (file created on server being synched locally)
[15:37] <alecu> nice
[15:37] <nessita> 2010-11-24 12:37:36,567 - ubuntuone.SyncDaemon.EQ - DEBUG - push_event: AQ_DOWNLOAD_FINISHED, args:('', 'f324de5b-259c-43d9-96ff-394cc8aacc44', 'sha1:c00a437101275f3aa9b762643dc84e584a521972'), kw:{}
[15:38] <nessita> the server hash is not empty
[15:38] <alecu> nessita, what about uploads?
[15:38] <nessita> let's try the other way around
[15:38] <verterok> nessita: no, to check the hash you need to do it in AQ
[15:39] <nessita> verterok: that's because AQ sets the new server hash?
[15:39] <verterok> nessita: no, because AQ has the hash
[15:40] <nessita> verterok: so, you're saying that the server hash sent in the event is not the server hash? :-)
[15:41] <verterok> nessita: it's the new hash
[15:41] <nessita> alecu: 2010-11-24 12:41:01,206 - ubuntuone.SyncDaemon.EQ - DEBUG - push_event: AQ_UPLOAD_STARTED, args:(), kw:{'share_id': '', 'hash': 'sha1:7287a592845c932df0154bcf6f2e2c1fb012c7d6', 'node_id': 'f278f66c-0b3e-4491-afa7-9cf7c0baf84e'}
[15:41] <verterok> nessita: AQ has the previous hash
[15:42] <nessita> verterok: ah. Can we send both in the event, so we minimize the modifications to syncdaemon?
[15:42] <alecu> nessita, I would go for the "no mods to syncdaemon" route.
[15:43] <nessita> alecu: yeah but we can't decide if the file is new without that modification (or without listening to 2 events)
[15:43] <nessita> alecu: you say we settle with 2 events?
[15:43] <alecu> nessita, yeah
[15:44] <alecu> the only problem I see with that is sometimes logging a modify when we should have logged a create.
[15:44] <alecu> but it's not so grave.
[15:44] <nessita> alecu: ok, let's put this (modifying syncdaemon) on hold and assume we're using 2 events. Let's move on to the next event
[15:45] <alecu> "File was created on the server, syncdaemon creating locally"
[15:46] <nessita> that is a download, isn't it?
[15:46] <alecu> right.
[15:47] <nessita> alecu: so this is the case we just discussed?
[15:47] <alecu> nessita, no: we were discussing uploads
[15:47] <nessita> upload finished:
[15:47] <nessita> 2010-11-24 12:45:45,310 - ubuntuone.SyncDaemon.EQ - DEBUG - push_event: AQ_UPLOAD_FINISHED, args:(), kw:{'node_id': 'f278f66c-0b3e-4491-afa7-9cf7c0baf84e', 'hash': 'sha1:7287a592845c932df0154bcf6f2e2c1fb012c7d6', 'new_generation': 318L, 'share_id': ''}
[15:47] <nessita> alecu: the nanny was for downloads...
[15:48] <alecu> nessita, right
[15:48] <alecu> so, the nanny generates AQ_DOWNLOAD_FINISHED
[15:48] <alecu> how do we know if it was a new download?
[15:48] <alecu> hmm
[15:48] <alecu> how do we know if it was a new file?
[15:51] <verterok> nessita: sorry, yes...maybe we need to talk with facundobatista about it
[15:51] <nessita> verterok: ok
[15:51] <facundobatista> nessita, the what?
[15:51] <nessita> alecu: plugging in headset for weekly call, get back to you in a minute
[15:51] <verterok> alecu: currently you can't
[15:51] <verterok> alecu: you need to change syncdaemon or keep track of AQ_FILE_NEW_OK events :/
[15:52] <nessita> facundobatista: if I'm a listener to AQ_DOWNLOAD_FINISHED, is there a way to know if the download was a newly created file in the server or was an update?
[15:52] <alecu> verterok, AQ_FILE_NEW_OK means an empty file was created on the server, right?
[15:52] <verterok> yes
[15:52] <alecu> verterok, I want to know if an empty file was created locally
[15:53] <verterok> huh?
[15:53] <facundobatista> nessita, 42
[15:53] <verterok> alecu: what would that be?
[15:53] <facundobatista> nessita, I mean, you don't understand the question :)
[15:53] <verterok> alecu: the file is created when it's downloaded
[15:53] <alecu> verterok, that way I can know if a "download finished" is for a new file or not.
[15:53] <nessita> facundobatista: I've asked you a yes/no question, you can't answer 42
[15:53] <nessita> :-)
[15:53] <facundobatista> nessita, you never download created files, so it's always an update
[15:54] <alecu> nessita, facundobatista: wait
[15:54] <facundobatista> nessita, sometimes it's an update from non-content, other is an update from old-content
[15:54]  * facundobatista es pera
[15:54] <verterok> alecu: SD knows about new files during delta processing, but only the metadata is created
[15:54] <verterok> alecu: then we start downloading it
[15:54] <nessita> facundobatista: we're abstracting away from the fact that creation and upload are split
[15:54] <alecu> facundobatista, nessita: we need to know if a file I've just downloaded from the server existed locally or not.
[15:54] <verterok> alecu: there are no empty files waiting for content on the client
[15:54] <facundobatista> alecu, what do you mean with "existed locally"?
[15:55] <alecu> verterok, guessed so. I was trying to make an analogy with the upload steps.
[15:55] <alecu> facundobatista, if the file existed before downloading it. If it was created by syncdaemon or modified by syncdaemon.
[15:56] <facundobatista> alecu, existed in the syncdaemon metadata? or on the user disk? with "was created by syncdaemon or modified by syncdaemon" you are *excluding* the possibility that the user created it?
[15:56]  * facundobatista still doesn't understand
[15:57] <nessita> facundobatista: created by syncdaemon would mean the user created it
[15:57] <nessita> facundobatista: did you read https://wiki.ubuntu.com/UbuntuOne/Specs/ZeitgeistIntegration/EventsSpec?
[15:57] <facundobatista> nessita, no, I didn't
[15:57] <nessita> facundobatista: we need to track events to summarize them and show them to the user
[15:57] <facundobatista> nessita, if the user created the file, syncdaemon did not
[15:58] <nessita> facundobatista: so, from a user POV, we need to log "there is a new file on the server that we're synching locally"
[15:58] <nessita> facundobatista: and "there is a new file locally that we will upload to the server"
[15:59] <nessita> facundobatista: so, for the former we would listen to AQ_DOWNLOAD_FINISHED, but we need to distinguish between:
[15:59] <nessita> * syncdaemon downloaded a whole new file that was created somewhere else
[15:59] <nessita> * syncdaemon downloaded an update to a file that is already in this machine
[16:00] <facundobatista> nessita, for "there is a new file on the server that we're synching locally", we would need to issue a signal, we had SV_FILE_NEW, but it's not an event anymore
[16:00] <facundobatista> "there is a new file on the server that we're synching locally" implies creation, not "downloadation"
[16:00] <alecu> nessita, let's re-redact those paragraphs, so they refer to things done, like so:
[16:00] <nessita> facundobatista: from the user POV, we want to track successful uploads/downloads
[16:01] <facundobatista> nessita, you keep changing the assertions on me -.-
[16:01] <alecu> "there was a new file on the server that has just finished synching locally"
[16:01] <nessita> facundobatista: so file creation with empty content is not a significant event
[16:01] <nessita> alecu: thanks
[16:02] <alecu> facundobatista, "a new file on the server" implies that the file did not exist on the local drive.
[16:02] <facundobatista> alecu, we can not say that in an easy way
[16:02] <alecu> facundobatista, can it be inferred by listening to a sequence of events?
[16:02] <alecu> for instance:
[16:03] <alecu> for uploads we can listen for AQ_FILE_NEW_OK followed by AQ_UPLOAD_FINISHED for the same (volume_id, node_id)
[16:04] <facundobatista> alecu, what about "SV_FILE_NEW_OK" and "AQ_DOWNLOAD_FINISHED"?
[16:04] <alecu> what does "SV_FILE_NEW_OK" mean?
[16:04] <facundobatista> alecu, sorry, it's "SV_FILE_NEW", a new file from the server
[16:04] <alecu> niiiice.
[16:05] <alecu> facundobatista, so, let me get this straight.
[16:05] <facundobatista> alecu, we don't currently have an event like that, we removed it for generations, but we could add it back
[16:05] <alecu> "SV_FILE_NEW" was removed for generations?
[16:05] <facundobatista> alecu, I mean, we have the situation, we don't send an event to fly around
[16:07] <alecu> facundobatista, ok, so it's not an event that an event listener can handle right now.
[16:07] <facundobatista> alecu, exactly, it's trivial to add it, though
[16:08] <nessita> facundobatista: does that event mean the file was already downloaded?
[16:08] <alecu> facundobatista, so, what's exactly the situation when that event would happen?
[16:08] <alecu> nessita, I understand it's similar to AQ_FILE_NEW_OK, but vice versa.
[16:09] <nessita> alecu: yes, so not sure if that is what we need
[16:09] <alecu> facundobatista, I understand it means that a new file showed up on the server that does not exist on the client.
[16:09] <facundobatista> nessita, no, it means that there's a new file in the server
[16:10] <facundobatista> it doesn't even imply that it was created locally by syncdaemon!!! the user may have just added it to the disk, too
[16:11] <facundobatista> alecu, it means that a new file showed up on the server that didn't exist in the client when the client asked for a GetDelta
[16:11] <nessita> facundobatista: so, after a (recently removed) SV_FILE_NEW we get a AQ_DOWNLOAD_STARTED and then a AQ_DOWNLOAD_FINISHED
[16:11] <alecu> facundobatista, that's very good enough.
[16:11] <facundobatista> alecu, nessita, sorry about being such a PITA with the assertions, but it's tricky, and I don't want for you to get a bad idea of how it works or should work
[16:11] <nessita> facundobatista: if we listen just to AQ_DOWNLOAD_FINISHED, is there a way of knowing if the file was a new addition or a modification to an existing file?
[16:12] <facundobatista> nessita, no, AQ_DOWNLOAD_FINISHED means just the file finished downloading, it could be new or old
[16:12] <alecu> facundobatista, so, a SV_FILE_NEW followed by AQ_DOWNLOAD_FINISHED for the same (volume_id, node_id) would mean that a file did not exist when  the client asked for a GetDelta
[16:12] <alecu> that's the event we want to log.
[16:13] <facundobatista> alecu, but
[16:13] <alecu> there's always a but.
[16:13] <facundobatista> alecu, you could have a file that did not exist when the client asked for a GetDelta, and it could generate a SV_FILE_NEW but NOT a AQ_DOWNLOAD_FINISHED
[16:13] <facundobatista> alecu, there's always a butt
[16:14] <alecu> facundobatista, people wouldn't finish otherwise.
[16:14] <facundobatista> je
[16:15] <alecu> ok, so we'll store the SV_FILE_NEW (and the AQ_FILE_NEW_OK) with an expiry time.
[16:15] <alecu> (store in memory, waiting for the corresponding AQ_(UP|DOWN)LOAD_FINISHED)
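The expiring in-memory store alecu proposes might look roughly like this (all names illustrative; the TTL value and the injectable clock are assumptions made so stale markers from a crashed client eventually get dropped):

```python
# Sketch: hold pending SV_FILE_NEW / AQ_FILE_NEW_OK markers in memory,
# waiting for the corresponding AQ_(UP|DOWN)LOAD_FINISHED, and expire
# entries that never get matched.
import time

class PendingNewFiles:
    def __init__(self, ttl=3600.0, clock=time.monotonic):
        self.ttl = ttl          # seconds before an unmatched marker expires
        self.clock = clock
        self._pending = {}      # (volume_id, node_id) -> time first seen

    def mark_new(self, volume_id, node_id):
        self._pending[(volume_id, node_id)] = self.clock()

    def pop_if_new(self, volume_id, node_id):
        """True if a non-expired 'new' marker existed for this node."""
        self._expire()
        return self._pending.pop((volume_id, node_id), None) is not None

    def _expire(self):
        cutoff = self.clock() - self.ttl
        self._pending = {k: t for k, t in self._pending.items() if t >= cutoff}
```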
[16:15] <alecu> nice.
[16:16] <facundobatista> alecu, what about new dirs?
[16:16] <alecu> facundobatista, what about them?
[16:17] <facundobatista> alecu, you'd need to listen SV_DIR_NEW
[16:17] <alecu> facundobatista, verterok, nessita: would you mind if we keep talking about this in 1:30h, or so? We have a meeting in 15' and I want to grab a sandwich.
[16:17] <facundobatista> ok
[16:17] <nessita> alecu: ok
[16:18] <alecu> facundobatista, oh, so I need to add that event back as well. good point.
[16:18] <alecu> ok, ttly.
[16:18] <alecu> ttyl
[16:18] <facundobatista> je
[17:47] <diverse_izzue> honk
[17:53] <nessita> alecu_, alecu: you let us know when we restart zg analysis?
[17:54] <nessita> alecu_, alecu: question about txsecrets on ussoc: we're using BUS_NAME = "org.gnome.keyring". If we (or someone else) want to switch to use kwallet, how we need to modify our code?
[17:54] <nessita> shall we make that a dbus parameter?
[18:14] <nessita> alecu: ping?
[18:15] <alecu> nessita, pong.
[18:15] <nessita> I said before: alecu: you let us know when we restart zg analysis?
[18:15] <nessita> and also:
[18:15] <nessita> alecu: question about txsecrets on ussoc: we're using BUS_NAME = "org.gnome.keyring". If we (or someone else) want to switch to use kwallet, how we need to modify our code?
[18:15] <nessita> shall we make that a dbus parameter?
[18:16] <alecu> nessita, don't know about that. I believe there must be an automatic way to query dbus for what bus implements a given object
[18:17] <alecu> nessita, but I haven't gotten around to finding it yet.
[18:17] <nessita> alecu: ok, for now this will be as is
[18:17] <alecu> regarding zg:
[18:18] <alecu> I'll be working on more zeitgeist awesomeness by joining the syncdaemon sprint next week.
[18:18] <alecu> nessita, so during this week I'll focus on adding all the events that map 1:1 to SD events
[18:18] <nessita> ok then
[18:19] <nessita> alecu: can you check out this trivialísimo branch? https://code.launchpad.net/~nataliabidart/ubuntu-sso-client/remove-gnomekeyring-from-docs/+merge/41776
[18:19] <alecu> facundobatista, verterok: I'll join you during the SD sprint next week, so I can poke you on SD -> ZG event conversion.
[18:20] <alecu> nessita, absolutely.
[18:22] <facundobatista> alecu, ok
[18:23] <alecu> nessita, approved as trivial.
[18:23] <alecu> nessita, oh, do we need two reviews anyway?
[18:23] <nessita> alecu: apparently.
[18:24] <alecu> even for trivial branches? BS :-)
[18:24] <nessita> dobey: can you please review https://code.launchpad.net/~nataliabidart/ubuntu-sso-client/remove-gnomekeyring-from-docs/+merge/41776 ?
[18:24] <nessita> alecu: yes, not sure if I can set a trivial flag for tarmac
[18:24] <alecu> review: Approve (trivial)
[18:25] <alecu> that *should* do it.
[18:27] <dobey> no
[18:55] <diverse_izzue> honk :-)
[19:10] <nessita> diverse_izzue: hey there, what can we do for you?
[19:12] <diverse_izzue> hi nessita
[19:12] <nessita> diverse_izzue: hello
[19:13] <diverse_izzue> so i had a udf which was misbehaving. it wasn't synching all files in it and so on, even though u1sdtool was claiming it's done and IDLE.
[19:13] <diverse_izzue> i tried to remove said UDF, by clicking the checkbox in the nautilus bar
[19:14] <diverse_izzue> U1 crashed in the process of removing the UDF, see http://paste.ubuntu.com/535985/. the UDF is now removed on the webspace one.ubuntu.com, but still shown as active locally when i execute u1sdtool --list-folders
[19:16] <duanedesign> hello diverse_izzue
[19:16] <diverse_izzue> hi duanedesign
[19:19] <duanedesign> diverse_izzue: when you run u1sdtool --list-folder does it say subscribed=True ?
[19:20] <diverse_izzue> duanedesign, yes, it does
[19:21] <nessita> diverse_izzue: you need to refresh your volumes locally. This is a known bug. To do so:
[19:21] <nessita> u1sdtool --refresh-volumes
[19:22] <diverse_izzue> nessita, done, should i see an immediate effect after executing that?
[19:22] <nessita> diverse_izzue: after syncdaemon reaches IDLE, yes
[19:22] <nessita> diverse_izzue: let me know how this goes
[19:25] <duanedesign> nessita: what is the known bug? The volumes not refreshing after removing a UDF from nautilus?
[19:25] <diverse_izzue> nessita, it says IDLE, but --list-folders still lists the udf which i cannot see online
[19:25] <nessita> duanedesign: the volumes not refreshing all the time, this includes shares
[19:26] <duanedesign> nessita: okies, thanks :)
[19:26] <nessita> diverse_izzue: can you please paste the output of u1sdtool -s and u1sdtool --list-folders ?
[19:26] <nessita> duanedesign: you're welcome, I'm looking for the bug #
[19:27] <diverse_izzue> nessita, there you go, http://paste.ubuntu.com/536014/. the ~/NBI folder is the one making trouble
[19:28] <nessita> looking
[19:31] <nessita> facundobatista: can you assist me with a chicharra issue?
[19:32] <diverse_izzue> chicharra?
[19:32] <facundobatista> nessita, yes, what happens?
[19:32] <facundobatista> diverse_izzue, it's the codename of part of the project
[19:33] <nessita> facundobatista: diverse_izzue has deleted a UDF and he says it's no longer in the web ui. u1sdtool keeps listing it, even after a refresh-volumes call
[19:33] <nessita> facundobatista: udf is /home/hunzikea/NBI in http://paste.ubuntu.com/536014/
[19:33] <facundobatista> diverse_izzue, how did you delete the UDF?
[19:33] <diverse_izzue> facundobatista, via nautilus ui
[19:34] <nessita> diverse_izzue: just to be sure, you deleted the folder?
[19:34] <facundobatista> diverse_izzue, you are in Lucid? Maverick?
[19:34] <diverse_izzue> facundobatista, i should say that u1 crashed in the process of removing the udf, see http://paste.ubuntu.com/535985/. also, the udf was messy to begin with (=not synching right)
[19:34] <nessita> diverse_izzue: deleted with the 'del' key, I mean?
[19:35] <diverse_izzue> nessita, no, i unchecked the checkbox in that U1 bar that nautilus shows in folders
[19:35] <diverse_izzue> facundobatista, maverick
[19:35] <nessita> diverse_izzue: unchecking will not delete the UDF but unsubscribe from it
[19:35] <nessita> diverse_izzue: how did you delete it from the web ui?
[19:35] <diverse_izzue> nessita, i didn't.
[19:36] <nessita> diverse_izzue: so, what does it mean that "the udf is gone in the web ui"?
[19:36] <facundobatista> diverse_izzue, nessita: I don't see any checkbox in my nautilus :|
[19:36] <diverse_izzue> nessita, it means that on https://one.ubuntu.com/files/ i cannot see that folder listed under "my synced folders"
[19:37] <nessita> facundobatista: do you have ubuntuone-client-gnome installed?
[19:37] <nessita> diverse_izzue: can you please share with me a screenshot of that web page?
[19:37] <nessita> facundobatista: is SD running?
[19:37] <facundobatista> ubuntuone-client-gnome:
[19:37] <facundobatista>   Installed: 1.5.0+r749~maverick1
[19:37] <facundobatista> nessita, running, connected and idle
[19:38] <diverse_izzue> nessita, will do
[19:38] <nessita> facundobatista: where are you looking for a checkbox?
[19:38] <nessita> diverse_izzue: thanks
[19:38] <facundobatista> nessita, I see the checkbox that says that the folder is synchronized ok
[19:38] <nessita> facundobatista: right, diverse_izzue says he unchecked that
[19:38] <facundobatista> nessita, I'm looking for it in the folder drawing
[19:38] <nessita> facundobatista: no no, in the ribbon
[19:39] <facundobatista> nessita, ribbon?
[19:39] <nessita> facundobatista: 0.0
[19:39] <nessita> facundobatista: the banner at the top of special folders, such as Documents or an UDF
[19:39] <diverse_izzue> nessita, http://ubuntuone.com/p/QfJ/
[19:39] <facundobatista> ahhhhhhhh, ok
[19:41] <nessita> diverse_izzue: interesting! can I please have your logs from syncdaemon?
[19:41] <nessita> facundobatista: look, a UDF that is not listed in the web ui is still listed by syncdaemon
[19:41] <nessita> facundobatista: how can we debug further?
[19:42] <facundobatista> nessita, the logs, the server rescan part; there it details what ListShares answers
[19:42] <nessita> duanedesign: reference bug for the volumes not refreshing: bug #671913
[19:42] <ubot4`> Launchpad bug 671913 in ubuntuone-servers (and 1 other project) "Multiple instances of a single UDF in a local machine (or, deleted UDFs were not cleaned up) (affects: 1) (heat: 6)" [Medium,Invalid] https://launchpad.net/bugs/671913
[19:43] <duanedesign> nessita: oh cool. Great I will add it to my 'cheatsheet' of bugs :)
[19:43] <diverse_izzue> facundobatista, nessita, is it best to restart u1 daemon to get the right part of the log?
[19:43] <facundobatista> diverse_izzue, yes, but please set them in DEBUG first
[19:43] <diverse_izzue> facundobatista, how do i set to debug?
[19:44] <nessita> duanedesign: awesome!
[19:45] <facundobatista> nessita, I unchecked the folder through the "ribbon" and I don't see the UDF listed in --list-folders
[19:45] <facundobatista> nessita, are you sure it unsubscribes it?
[19:46] <facundobatista> nessita, no, it runs a DeleteVolume
[19:46] <nessita> facundobatista: it does?!?!?!
[19:47] <facundobatista> nessita, it does
[19:47] <nessita> facundobatista: ok, #nessitafail then
[19:47] <facundobatista> nessita, diverse_izzue: so, it's correctly removed in the web ui, the issue is why the client keeps listing it
[19:48] <nessita> facundobatista: anyways, using this new info, and using the first error diverse_izzue pasted (http://paste.ubuntu.com/535985/)
[19:48] <facundobatista> nessita, diverse_izzue, as it had issues *before*, maybe there was something broken before the volume deletion, so it may not be a problem there
[19:48] <nessita> facundobatista: seems like SD has dirty metadata
[19:48] <facundobatista> nessita, diverse_izzue, the real issue now is to remove the UDF there
[19:49] <facundobatista> I mean, fix the metadata
[19:49] <facundobatista> however, I'd love to see what the client says when restarting in debug mode
[19:50] <facundobatista> diverse_izzue, add the following lines to your $HOME/.config/ubuntuone/syncdaemon.conf (create it if not there)
[19:50] <facundobatista> [logging]
[19:50] <facundobatista> level = DEBUG
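Putting facundobatista's instructions together as commands (a sketch; the config path is the one given above, but the u1sdtool restart flags are an assumption and may vary by client version):

```shell
# Create the config dir/file if missing and enable DEBUG logging
mkdir -p ~/.config/ubuntuone
cat >> ~/.config/ubuntuone/syncdaemon.conf <<'EOF'
[logging]
level = DEBUG
EOF
# Restart syncdaemon so the new level takes effect (flags assumed)
u1sdtool --quit
u1sdtool --start
```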
[19:51] <facundobatista> nessita, note that in the pastebin you copied here, there is an error when handling AQ_DELETE_VOLUME_OK, so clearly the volume was deleted
[19:53] <nessita> facundobatista: true
[19:55] <diverse_izzue> facundobatista, the log: http://ubuntuone.com/p/QfP/
[20:00] <facundobatista> diverse_izzue, what u1sdtool --list-folders tell you now?
[20:01] <diverse_izzue> facundobatista, same as before
[20:01] <diverse_izzue> nbi is still listed and subscribed
[20:01] <facundobatista> diverse_izzue, awesome!
[20:01] <facundobatista> verterok, ping
[20:01] <verterok> facundobatista: pong
[20:01] <diverse_izzue> facundobatista, awesome?!? :-)
[20:02] <verterok> facundobatista: what's the problem?
[20:02] <verterok> facundobatista: I got disconnected and don't have the backlog
[20:02] <facundobatista> verterok, diverse_izzue removed a volume, but he got an error while handling the AQ_DELETE_VOLUME_OK (he claims that the volume had issues from before, and we fixed a couple of bugs around there), but so far so good
[20:03] <facundobatista> verterok, so diverse_izzue restarted the client: http://ubuntuone.com/p/QfP/
[20:03] <facundobatista> verterok, see that the folder seems to not be there (the path for it is /home/hunzikea/NBI)
[20:03] <facundobatista> verterok, and in the logs, we can even read:
[20:03] <facundobatista> 2010-11-24 20:53:13,956 - ubuntuone.SyncDaemon.VM - DEBUG - Share (id: 72ceeb1a-246c-4a7b-b16c-237cf496a8c2) deleted.
[20:03] <facundobatista> 2010-11-24 20:53:13,956 - ubuntuone.SyncDaemon.VM - WARNING - Got a share deleted notification ('72ceeb1a-246c-4a7b-b16c-237cf496a8c2'), but don't have the share
[20:04] <verterok> facundobatista: that's for a share...not a UDF
[20:04] <facundobatista> verterok, *however*, if diverse_izzue does --list-folder, he sees the folder there :| "nbi is still listed and subscribed"
[20:05] <verterok> facundobatista: k, I think this is bug #674092
[20:05] <ubot4`> Launchpad bug 674092 in ubuntuone-client (Ubuntu Maverick) (and 2 other projects) "Deleting UDF on one computer does not delete it from others. (affects: 1) (heat: 320)" [Undecided,New] https://launchpad.net/bugs/674092
[20:05] <verterok> facundobatista: --refresh-volumes should fix the issue
[20:05] <verterok> diverse_izzue: ^
[20:05] <verterok> hi, btw :)
[20:05] <diverse_izzue> hi verterok
[20:06] <diverse_izzue> verterok, we tried the refresh-volumes thing already, no effect
[20:06] <diverse_izzue> verterok, should i repeat now that DEBUG is enabled, and see what it says in the log?
[20:07] <verterok> diverse_izzue: yes, please
[20:08] <diverse_izzue> aha! a crash
[20:08] <diverse_izzue> will paste log
[20:11] <diverse_izzue> verterok, http://ubuntuone.com/p/QfP/
[20:11] <diverse_izzue> or not, wait
[20:12] <nhaines> I would like to get ahold of the Ubuntu One beta client for Windows a bit earlier than usual because I am writing a print article on Ubuntu One.  So if this is possible, please let me know.  :)  honk
[20:13] <facundobatista> mandel, ^
[20:14] <diverse_izzue> facundobatista, verterok, there we go: http://ubuntuone.com/p/QfW/
[20:15] <verterok> diverse_izzue: ok, looks like a problem with the metadata
[20:16] <duanedesign> nhaines: https://wiki.ubuntu.com/UbuntuOne/Windows
[20:16] <verterok> diverse_izzue: can I use/attach your logs in a bug?
[20:16] <duanedesign> nhaines: https://wiki.ubuntu.com/UbuntuOne/Tutorials/Windows <-- and how to setup
[20:16] <diverse_izzue> verterok, i guess so
[20:17] <verterok> diverse_izzue: k
[20:17] <nhaines> duanedesign: thanks, I signed up last week, but have not received a reply, and this week I was approached to do an article on Ubuntu One.
[20:17] <nhaines> But the tutorial is very useful.
[20:17] <verterok> diverse_izzue: so, in order to fix this, we need to mess with the metadata, I can give you a script to fix it...if you will be around for 10-15min
[20:17] <diverse_izzue> verterok, i will be
[20:18] <diverse_izzue> verterok, can you explain in two lines what's going on/wrong?
[20:18] <verterok> diverse_izzue: I can try :)
[20:19] <verterok> diverse_izzue: we have two sets of metadata, one for volumes (shares, UDFs) and one for the nodes (files, directories) that includes the root of each volume (share or UDF)
[20:20] <verterok> diverse_izzue: your client is missing the metadata for the root of the UDF, and when it tries to delete it...it fails
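To make verterok's two-stores explanation concrete, here is a hypothetical sketch (the names `volumes`, `nodes`, and `delete_volume` are illustrative, not the real syncdaemon API) of why deleting a UDF whose root node metadata is missing fails, while the volume entry itself survives:

```python
# Two separate metadata stores, as described above (illustrative only):
# one for volumes (shares/UDFs), one for nodes (files/directories).
volumes = {"udf-1": {"path": "/home/hunzikea/NBI", "root_node": "node-root-1"}}
nodes = {}  # the root node "node-root-1" is missing -- the broken state

def delete_volume(volume_id):
    """Delete a volume and its root node; fails if the root metadata is gone."""
    volume = volumes[volume_id]
    del nodes[volume["root_node"]]  # raises KeyError: root metadata is missing
    del volumes[volume_id]          # never reached in the broken state

try:
    delete_volume("udf-1")
except KeyError:
    print("delete failed: missing root node metadata")

# The volume entry survives, so --list-folders still shows the UDF.
print("udf-1" in volumes)
```

This matches the observed symptom: the delete errors out, yet `u1sdtool --list-folders` keeps listing the folder.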
[20:20] <duanedesign> nhaines: ahhh, i see. yeah mandel can maybe help you with that.
[20:20] <nhaines> duanedesign: thanks.  :)
[20:23] <duanedesign> nhaines: [m]andels blog might be of interest if you are writing an article. http://www.themacaque.com/?paged=2
[20:24] <verterok> diverse_izzue: could you run a script to dump the metadata, before trying to fix it? :)
[20:24] <nhaines> duanedesign: Oh wow, I had never seen his blog.  I'll check that out!  I'm not sure the article will be much improved but it looks like the techy stuff I love to read about at least.  :)
[20:24] <diverse_izzue> verterok, of course
[20:24] <verterok> diverse_izzue: the script is: http://pastebin.com/S3rti6s8
[20:24] <verterok> diverse_izzue: save it as: dump_metadata.py, and execute it: python dump_metadata.py > sd_metadata_dump.txt
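The real dump script is in the pastebin above; as a rough idea only, a metadata dump along these lines just walks the syncdaemon data directory and prints every metadata file it finds (the function name and the exact directory layout here are assumptions):

```python
import os

# Hypothetical sketch of a metadata dump -- the real script is the one
# in the pastebin above. This simply walks the syncdaemon data directory
# and prints each metadata file's path.
def dump_metadata(base):
    for dirpath, _dirnames, filenames in os.walk(base):
        for name in sorted(filenames):
            print(os.path.join(dirpath, name))

dump_metadata(os.path.expanduser("~/.local/share/ubuntuone/syncdaemon"))
```

Redirecting the output to a file, as verterok suggests, gives a snapshot that can be inspected before attempting any fix.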
[20:26] <diverse_izzue> verterok, shouldn't you use U1 yourself? :-)
[20:27] <verterok> diverse_izzue: heh, pastebinit is my muscle memory :p
[20:28] <duanedesign> nhaines: i posted the link to the second page(because it had most the windows client stuff). He also has some newer posts that are good as well.
[20:28] <diverse_izzue> verterok, like a mercedes dealer who drives a toyota :-)
[20:29] <diverse_izzue> anyway, here's the dump: http://ubuntuone.com/p/Qff/
[20:29] <verterok> :)
[20:36] <verterok> diverse_izzue: something very wrong is going on with this metadata...I'd like to wait for facundobatista and discuss how to fix it without breaking anything :)
[20:37] <diverse_izzue> verterok, ok, i'll be around for at least two hours
[20:37] <verterok> diverse_izzue: basically, there is a directory named '' in the metadata...which I think is impossible to create
[20:39] <karni> verterok: will you be on-line tomorrow, too? I'd like to catch you and ask about an issue with IConnector (that's probably me doing something wrong). I'm learning for a test tomorrow.
[20:39] <karni> *test that's tomorrow
[20:40] <verterok> karni: sure
[20:40] <karni> verterok: ok then. I'll probably try to catch you tomorrow :) have a great afternoon
[20:40] <verterok> karni: :) you too
[20:41] <karni> thank you :)
[20:51] <facundobatista> verterok, wtf? a '' dir??
[20:51] <facundobatista> wow
[20:51] <verterok> exactly
[20:51] <facundobatista> verterok, what about removing that node from the fileshelf? 2ca22cf5-934b-4d10-b386-1c09c56f7856
[20:53] <verterok> facundobatista: that should be ok, but deleting the UDF will still fail
[20:53] <verterok> facundobatista: the udf should be unsubscribed...local rescan does that when it can't find the root
[20:56] <facundobatista> maybe something is getting clobbered, because >>> os.path.join("/home/foo", "")
[20:56] <facundobatista> '/home/foo/'
[20:56] <facundobatista> and that maybe breaks things for us
[20:56] <facundobatista> damn
[20:56] <facundobatista> maybe it's stepping on itself
[20:56] <facundobatista> because of that "" dir
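The `os.path.join` behavior facundobatista points at is easy to verify: a node whose name is the empty string resolves to its parent's path (with a trailing slash), so path-wise it is indistinguishable from the parent itself:

```python
import os.path

# Joining with '' only appends a trailing separator: a child named ''
# "disappears" into its parent's path.
print(os.path.join("/home/foo", ""))         # the parent path plus '/'
print(os.path.join("/home/foo", "", "bar"))  # the '' component vanishes

# This is one plausible way a directory entry named '' could quietly
# corrupt metadata: its computed path collides with the parent's.
```

This is a worked check of the suspicion only; the actual cause of the '' entry was not determined in this conversation.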
[21:07] <nessita> facundobatista: english?
[21:07] <nessita> ah!
[21:27] <verterok> diverse_izzue: hi, would you mind to try something? :)
[21:27] <diverse_izzue> verterok, maybe :-D
[21:28] <verterok> :)
[21:28] <verterok> diverse_izzue: quit syncdaemon, u1sdtool -q
[21:28] <diverse_izzue> done
[21:28] <verterok> diverse_izzue: then execute: find ~/.local/share/ubuntuone/syncdaemon -type f -name 2ca22cf5-934b-4d10-b386-1c09c56f7856*
[21:28] <verterok> diverse_izzue: it should return at most 2 files
[21:29] <diverse_izzue> it returns 1 file
[21:29] <verterok> diverse_izzue: ok
[21:29] <verterok> diverse_izzue: please execute: find ~/.local/share/ubuntuone/syncdaemon -type f -name 2ca22cf5-934b-4d10-b386-1c09c56f7856* | xargs rm
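The pipeline as typed works here, but a slightly safer variant quotes the `-name` pattern (an unquoted `*` can be expanded by the shell if something in the current directory happens to match) and uses GNU find's `-delete` instead of piping to `xargs rm`:

```shell
# Safer variant of the cleanup above: quote the glob so the shell
# doesn't expand it, and let GNU find delete the matches itself.
SD_DIR=~/.local/share/ubuntuone/syncdaemon
if [ -d "$SD_DIR" ]; then
    find "$SD_DIR" -type f \
         -name '2ca22cf5-934b-4d10-b386-1c09c56f7856*' -delete
fi
```

Either form removes only the metadata files for that one node id, leaving the rest of the syncdaemon state untouched.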
[21:29] <verterok> :)
[21:30] <diverse_izzue> file is gone
[21:30] <verterok> diverse_izzue: and then start syncdaemon again, u1sdtool --start
[21:31] <verterok> diverse_izzue: please check if there is an error in the logs :)
[21:31] <diverse_izzue> verterok, the log is 2500 lines long...
[21:31] <diverse_izzue> what am i looking for?
[21:32] <verterok> diverse_izzue: grep ERROR <log file>
[21:32] <verterok> :)
[21:32] <verterok> diverse_izzue: or pastebin/share the file and I can check
[21:32] <diverse_izzue> no error
[21:32] <verterok> diverse_izzue: please, connect the client, u1sdtool -c
[21:33] <verterok> diverse_izzue: and then execute: u1sdtool --list-folders
[21:34] <verterok> is the UDF still there?
[21:35] <diverse_izzue> verterok, yes, still there
[21:35] <diverse_izzue> i had one warning in the log
[21:35] <verterok> diverse_izzue: please pastebin the log file
[21:36] <diverse_izzue> verterok, is there a way to pastebin from command line?
[21:37] <verterok> diverse_izzue: pastebinit -b http://paste.ubuntu.com <path to file>
[21:37] <verterok> diverse_izzue: probably you will need to install pastebinit
[21:39] <diverse_izzue> verterok, http://paste.ubuntu.com/536058/
[21:40] <verterok> k, thanks
[21:50] <dobey> verterok: yo
[21:50] <dobey> verterok: commit message please :)
[21:54] <verterok> diverse_izzue: hi, I'm back
[21:55] <verterok> diverse_izzue: so, the udf is still listed using --list-folders?
[21:55] <verterok> right?
[21:55] <verterok> diverse_izzue: try running: u1sdtool --refresh-volumes
[21:55] <verterok> diverse_izzue: and pastebin the logs please
[21:56] <diverse_izzue> verterok, aha! now it's gone
[21:56] <diverse_izzue> refresh-volumes helped
[21:57] <verterok> yes, the fix for the bug about running refresh-volumes to clean up dead UDFs is already proposed for merge into trunk and stable-1-4 (lucid)
[21:57] <verterok> diverse_izzue: no idea about what caused the metadata issue
[21:57] <verterok> will try to replicate it locally
[21:57] <diverse_izzue> verterok, thanks for the help. if it happens again i will let you know, then maybe we can reproduce
[21:57] <diverse_izzue> verterok, so now it should be safe to enable said folder again?
[21:57] <verterok> sure, thanks a lot!
[21:58] <verterok> diverse_izzue: yes
[21:58] <diverse_izzue> verterok, will try
[22:02] <verterok> k, let me know if there is any problem
[22:11] <duanedesign> verterok: (reading scrollback) when you say two sets of metadata, is that the difference between ~/.local/share/ubuntuone/syncdaemon/vm and fsm?
[22:12] <verterok> duanedesign: yes, fsm (FileSystemManager) and vm (VolumeManager)
[22:13] <duanedesign> ahhh, volume manager :)