[05:21] <DaemonFC> What country is the data stored on Ubuntu One in?
[05:42] <kklimonda> DaemonFC: in the USA
[05:43] <DaemonFC> that rules it out then
[05:43] <DaemonFC> thank you
[05:43] <kklimonda> at least it was in the past
[05:43] <DaemonFC> I wouldn't want the FBI going up with fake warrants and such
[05:44] <kklimonda> sure
[05:44] <kklimonda> please remember that data is stored in encrypted form
[05:44] <DaemonFC> doesn't really matter
[05:45] <DaemonFC> they like to fake terrorism emergencies to tap people's phones and see what library books they're reading
[05:46] <kklimonda> DaemonFC: this isn't really the channel for this discussion
[05:46] <DaemonFC> How long before they go up to these online backup companies and say "spill it"?
[05:46] <DaemonFC> that's just why I was wondering
[07:13] <duanedesign> DaemonFC: The storage is taken care of by Amazon S3. They have servers all over the globe.
[07:14] <lifeless> duanedesign: they do, but you have to request a specific s3 availability zone, and as kklimonda says, currently it's in the USA
[07:57] <duanedesign> lifeless: thank you for the clarification
[08:57]  * rye is away for 2 hours or so, need to have work-related documents translated
[09:05] <wgrant> My desktopcouch instance is downloading infuriatingly vast volumes of data from somewhere in the DC.
[09:05] <wgrant> How do I work out what it is doing and why?
[09:06] <rye> wgrant, you can look at the replication log in /.cache/desktop-couch/log/desktop-couch-replication.log to see what is being replicated
[09:07]  * wgrant wonders how it makes sense to replicate Gwibber messages.
[09:08] <wgrant> But anyway, there are no obvious recent replication incidents in there.
[09:08] <wgrant> (nothing within 10 minutes, and I have nothing big in anything that I know to use desktopcouch)
[09:08] <wgrant> It's just downloading many megabytes of data.
[09:09] <wgrant> Hm:
[09:09] <wgrant> 2010-02-17 20:08:28,052 WARNING  haven't finished replicating before next time to start.
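The replication log rye points at can be scanned mechanically for warnings like the one wgrant found. A minimal sketch, assuming the timestamped `WARNING` line format seen in the quoted log entry (the format is inferred from this conversation, not from any documented interface):

```python
# Sketch: scan a desktop-couch replication log for WARNING lines such as
# the overlapping-replication warning quoted above. The line format is an
# assumption based on the single log line shown in this conversation.
import re

WARN_RE = re.compile(r"^(?P<ts>[\d-]+ [\d:,]+) WARNING\s+(?P<msg>.*)$")

def find_warnings(lines):
    """Return (timestamp, message) pairs for WARNING lines."""
    hits = []
    for line in lines:
        m = WARN_RE.match(line.strip())
        if m:
            hits.append((m.group("ts"), m.group("msg")))
    return hits

sample = [
    "2010-02-17 20:08:28,052 WARNING  haven't finished replicating before next time to start.",
    "2010-02-17 20:09:01,100 INFO     replication finished",
]
print(find_warnings(sample))
```

Pointing this at `~/.cache/desktop-couch/log/desktop-couch-replication.log` (the path rye gives above) would surface repeated overlap warnings without reading the whole file by hand.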
[09:18] <rye> wgrant, could you please check how much space your databases take in Futon - login via ~/.local/share/desktop-couch/couchdb.html
[09:18] <rye> wgrant, you might need to compact the databases or see what takes so much space
[09:19] <wgrant> rye: I'm trying to, but it's not letting me authenticate.
[09:19]  * wgrant restarts it.
[09:19] <wgrant> Oh goody, the dbus-send call hangs.
[09:19] <wgrant> (this is Lucid, btw)
[09:19] <rye> wgrant, it might not have written the .html file properly. This is known, but I haven't yet tested whether it is fixed
[09:22] <wgrant> Still not letting me in even after I log out, kill everything, and then log back in.
[09:23] <wgrant> The credentials in the HTML match those in desktopcouch.ini.
[09:27] <wgrant> Any ideas how I can make it let me in?
[09:27] <wgrant> It would be nice if it would stop eating up my download quota without me having to kill it.
[09:29] <rye> wgrant, could you please run /usr/lib/desktopcouch/desktopcouch-stop then after it terminates the process, /usr/lib/desktopcouch/desktopcouch-service - it should be left running in the terminal
[09:29] <rye> the port may be wrong
[09:30] <rye> ok, sorry, I really have to run now, since otherwise bad things will start to happen. I will get back here in 2 hours
[09:30] <wgrant> rye: Thanks for your help.
[14:17] <burn> hello, the ubuntu tool in karmic doesn't seem to sync anything
[14:22] <rye> burn, hello, could you please run the script mentioned at https://wiki.ubuntu.com/RomanYepishev/UbuntuOne/Diagnostics to rule out known and quickly-fixable issues?
[14:23] <burn> rye: no issues were detected
[14:24] <rye> burn, ok, then are you experiencing the problem at the moment ?
[14:24] <burn> yes
[14:24] <rye> burn, additionally, have you filed a bug report about that?
[14:24] <burn> nope, I didn't
[14:24] <burn> there are so many
[14:25] <rye> burn, ok, let's debug it here
[14:25] <burn> I should enable the debug info
[14:25] <rye> burn, first of all - what does dbus-send --session --print-reply --dest=com.ubuntuone.SyncDaemon --type=method_call /status com.ubuntuone.SyncDaemon.Status.current_status print?
[14:25] <rye> burn, then, apt-cache policy ubuntuone-client - to verify the version
[14:26] <rye> burn, i.e. what version is seen as Installed ?
[14:27] <burn> rye: I get a python error
[14:27] <rye> burn, hmmm could you select it and paste it here ?
[14:27] <burn> version 1.0.3
[14:27] <rye> i guess the channel is silent enough so nobody will object
[14:27] <burn> ok I'll paste the first lines
[14:27] <burn> method return sender=:1.239 -> dest=:1.250 reply_serial=2
[14:27] <burn>    array [
[14:27] <burn>       dict entry(
[14:27] <burn>          string "is_error"
[14:28] <rye> burn, ok, could you please post it completely to paste.ubuntu.com and give the URL here?
[14:29] <burn> rye: http://paste.ubuntu.com/378371/ please
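The `dbus-send` call rye gives above prints a human-readable dump of the status dictionary (the `string "is_error"` fragment burn pasted is part of it). A hedged sketch of pulling key/value pairs out of that text, assuming the layout seen in the pasted fragment; `dbus-send`'s output is meant for humans, so the real reply may use other variant types than plain strings:

```python
# Sketch: pair up the quoted strings in dbus-send's human-readable output,
# e.g. the current_status reply fragment pasted above. The layout is
# assumed from that fragment; this is not a robust D-Bus parser.
import re

STRING_RE = re.compile(r'string "([^"]*)"')

def parse_status(text):
    """Pair up consecutive quoted strings as key/value entries."""
    strings = STRING_RE.findall(text)
    return dict(zip(strings[0::2], strings[1::2]))

sample = '''method return sender=:1.239 -> dest=:1.250 reply_serial=2
   array [
      dict entry(
         string "is_error"
         string "False"
      )
      dict entry(
         string "name"
         string "WAITING"
      )
   ]'''
print(parse_status(sample))
```

For anything beyond a quick look, a proper D-Bus binding (e.g. `dbus-python`) that returns real dictionaries is the better tool.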
[14:30] <rye> burn, aha, so the client disconnected but it is still waiting. Ok, could you please post the contents of ~/.cache/ubuntuone/log/syncdaemon.log to paste.ubuntu.com for diagnosis of how that happened, and restart syncdaemon:
[14:30] <rye> burn, u1sdtool -q <- this quits syncdaemon
[14:30] <rye> burn, then click on the applet and select 'Connect', it should start doing something then.
[14:31] <rye> burn, when did it stop syncing?
[14:32] <burn> rye: it didn't work at all
[14:32] <burn> just did a fresh install on my karmic machine
[14:33] <rye> burn, ok, then we'll definitely need ~/.cache/ubuntuone/log/syncdaemon.log - it should not be this way.
[14:34] <burn> rye: http://paste.ubuntu.com/378374/
[14:35] <rye> burn, great
[14:35] <rye> burn, could you please open client preferences from the applet and see whether you have bandwidth limits set?
[14:37] <burn> rye: yeah I did
[14:37] <burn> 64KB
[14:37] <burn> upload
[14:37] <rye> burn, i believe the download is 0
[14:37] <burn> yeah, unlimited
[14:37] <burn> not?
[14:38] <rye> burn, not quite, this is the bad thing about the preferences window - 0 means literally 0 download rate.
[14:38] <rye> i.e. no download possible.
[14:38] <burn> you can't be serious
[14:38] <rye> additionally it breaks the software in a strange way; this is already filed as a bug, and I keep poking the devs to make it obvious which setting does what
[14:39] <burn> rye: ok, now files got synched
[14:39] <rye> burn, yeah, this is a great thing, how can a person set an unlimited upload rate?
[14:39] <burn> but I don't understand what the Shared with me folder does?
[14:40] <burn> that's probably an example?
[14:40] <burn> and another question, is my data encrypted? Where and how does it reside on the servers?
[14:41] <rye> burn, the Shared With Me folder is for files that were shared with you. I.e. if someone wants to share a directory with you, those files will appear under Shared With Me/$share_name from $username
[14:42] <burn> ow ok
[14:42] <rye> burn, the data is encrypted when it is transmitted via the network, i.e. connection is done via SSL
[14:42] <burn> ok, that's one part
[14:42] <burn> but for me it's very important how data resides on the server side
[14:44] <rye> however the data is not encrypted on the production servers, which is why you can access it online via https://one.ubuntu.com/files/ . The development team does not have access to the user storage, though. As mentioned previously here, if a sysadmin who has access to the s3 storage accesses some user's data, they will be fined and imprisoned under the terms of their contract.
[14:44] <burn> ok, I understand that
[14:44] <rye> burn, the storage itself is located at Amazon S3 service
[14:44] <burn> but when it happens, it can be too late
[14:45] <beuno> burn, yes. I wouldn't keep government secrets in it
[14:45] <beuno> or any third party for that matter
[14:46] <beuno> anyone who offers you web access to your files will need to have access to them some way or another
[14:46] <rye> burn, on the other hand, if you do not plan to access the files from the web service, you may encrypt the files prior to uploading.
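Rye's suggestion of encrypting before uploading can be sketched as follows. This is a minimal example using the third-party `cryptography` package (gpg on the command line would work just as well); the filenames are throwaway examples, not anything Ubuntu One itself uses:

```python
# Sketch: encrypt a file locally before dropping it into the synced
# folder, so the server only ever stores ciphertext. Uses the third-party
# "cryptography" package (pip install cryptography); the filenames here
# are demo values, not part of any Ubuntu One convention.
from cryptography.fernet import Fernet

def encrypt_file(src, dst, key):
    """Read src, write a Fernet-encrypted copy to dst."""
    with open(src, "rb") as f:
        data = f.read()
    with open(dst, "wb") as f:
        f.write(Fernet(key).encrypt(data))

# demo with a throwaway file
with open("notes.txt", "wb") as f:
    f.write(b"nothing secret, just a demo")

key = Fernet.generate_key()  # keep the key OUT of the synced folder
encrypt_file("notes.txt", "notes.txt.enc", key)
```

The trade-off is exactly the one beuno raises: once the server sees only ciphertext, the web interface at one.ubuntu.com can no longer show you usable files, and losing the key means losing the data.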
[14:46] <burn> beuno: if the data were encrypted on the server, I'd doubt that
[15:25] <rye> dobey, true, sorry about appending to existing bug report
[19:44] <rye> wife needs my pc for sun wonderland test
[19:44] <rye> declaring end-of-day and going offline
[20:19] <dobey> what are the db names for the contacts and bookmarks dbs in desktopcouch?
[20:19] <dobey> 'contacts' and 'bookmarks'?
[20:20] <dobey> urbanape, CardinalFang: ^
[20:22] <urbanape> bookmarks for bindwood
[20:25] <dobey> great, thanks
[20:52] <CardinalFang> dobey, yes "contacts"
[20:52] <dobey> hooray, thanks
[22:23] <sanderqd> has anyone managed to connect to desktopcouch from within a google chrome extension?
[22:28] <dobey> sanderqd: supporting chrome requires writing an NPAPI plug-in to do most of the work
[22:49] <sanderqd> dobey: only for authentication i assume? as soon as it has authenticated, the http json api should be usable
[22:58] <sanderqd> s/authentication/getting the authentication details and server port/
[22:58] <dobey> desktopcouch doesn't have a json api
[22:58] <dobey> couch does
[22:58] <dobey> using the desktopcouch api requires an NPAPI plug-in to call to python
[22:59] <dobey> anyway, urbanape has looked into it, as research for getting bindwood ported to chrome
[23:14] <sanderqd> ok, just got started learning couchdb, haven't looked yet at what the desktopcouch api is exactly
[23:19] <dobey> you could talk to couchdb directly i guess, if you wanted to reimplement all the record format handling
[23:20] <dobey> desktopcouch provides api to get the port, connects with oauth, and handles some specific record formats