[05:18] <RAOF> doko_: Re https://bugs.launchpad.net/ubuntu/+source/openjdk-7/+bug/1389493
[05:18] <RAOF> doko_: As far as I can tell it seems to be a largely cosmetic issue, but a sufficiently annoying one that we should resolve it.
[05:36] <doko> RAOF, I'm not sure how. just adding the old name as a symlink doesn't work either
[06:10] <RAOF> doko: Urgh.
[06:48] <pitti> Good morning
[06:51] <ari-tczew> hello pitti
[06:52] <Noskcaj> Does seahorse-nautilus really need to depend on seahorse-daemon? if not, we can sync
[09:10] <Laibsch> now that Debian is frozen will Ubuntu default to syncing from experimental or should I file appropriate bugs for this to happen for the packages that I maintain?
[09:11] <LocutusOfBorg1> +1 for Laibsch question, I hope the latter
[09:15] <mitya57> No, we will not sync anything from rc-buggy (aka experimental) automatically.
[09:15] <mitya57> Please use requestsync from ubuntu-dev-tools to request syncs.
[09:15] <LocutusOfBorg1> happy to hear that, I hope you will consider sync requests from maintainers :)
[09:15] <LocutusOfBorg1> wonderful thanks
[09:16] <pitti> yes, manually requested syncs are no problem, we just don't want to auto-import experimental stuff without checking
[09:16] <LocutusOfBorg1> yes, seems legit, I would like to avoid that too :)
[09:36] <Laibsch> bug 1392236 it is
[10:06] <Riddell> who can say why plasma-desktop isn't transitioning from proposed?
[10:06] <Riddell> I can see kwrited isn't happy about me removing the kwrited-data package which I'm confused about http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
[10:06] <Riddell> everything else should be ok
[10:15] <doko> RAOF, hmm, would it help to place an empty pulse-java.jar there?
[10:41] <cjwatson> Laibsch: We don't have a mechanism to sync all your packages from experimental; unless you fancy doing some hacking on auto-sync I don't expect to have one in the near future.  Please just file explicit sync requests as needed for now
[10:41] <cjwatson> That is, per-upload
[10:42] <cjwatson> Easiest is if you get PPU rights or better for your packages, and then you can run syncpackage yourself :)
[10:43] <xnox> Laibsch: there is a standard way to request syncs: the "requestsync" tool, available from ubuntu-dev-tools in both ubuntu & debian.
[10:44] <cjwatson> xnox: He was already pointed at that
[10:45] <cjwatson> I just wanted to make it clear that the request in the text of bug 1392236 is not something that we actually have the technology put together to fulfil right now.
[10:48] <Laibsch> cjwatson: I'd love to get that, but my request for @ubuntu membership was rejected in 2010 because I was contributing too much and had been doing so for too long a time (I kid you not!)
[10:49] <Laibsch> after that experience I  said to myself "WTF" and rolled over and simply ignored the nonsense process and never retried
[10:49] <Laibsch> it is quite a bit of work and I only needed to waste my time on an application once
[10:50] <Laibsch> I have pretty high clearance on bug triage in LP and I am Debian DM but apparently I need to get that @ubuntu membership for what you are suggesting and frankly, after that experience in 2010  I cannot be  bothered to waste my time on the application process once again
[10:51] <cjwatson> Laibsch: OK, just wanted to let you know about the parameters that are available for syncing
[10:51] <cjwatson> Laibsch: BTW it's not true that membership is required before PPU or other similar upload access
[10:51] <cjwatson> Laibsch: getting upload access *grants* membership as part of it
[10:51] <Laibsch> really?
[10:52] <Laibsch> Ubuntu processes evolve fast and have gotten complicated
[10:52] <cjwatson> Laibsch: this has been the case as long as I can remember
[10:52] <Laibsch> where is the doc describing the process?
[10:52] <cjwatson> https://wiki.ubuntu.com/UbuntuDevelopers
[10:53] <cjwatson> which links to https://wiki.ubuntu.com/DeveloperMembershipBoard/ApplicationProcess
[10:53] <cjwatson> I would generally say that developers should be taking this route rather than going through the general Ubuntu membership thing first
[10:53] <cjwatson> and again, that's pretty much always been the case - sorry if you were misadvised before
[10:54] <Laibsch> "Joining the Per-package Uploaders Check out the general requirements for Ubuntu Membership. "
[10:54] <cjwatson> yes, that means you need to meet the requirements
[10:54] <Laibsch> it seems to be a requirement to me
[10:54] <cjwatson> it doesn't mean you need to separately gain membership first
[10:55] <Laibsch> but I have to go through the nonsense one more time?  seriously, how many times would you beg someone for a key when he told you you are TOO skilled and trustworthy?
[10:55] <cjwatson> you don't have to go to the people who do general membership
[10:55] <Laibsch> OK
[10:55] <Laibsch> that might help
[10:55] <Laibsch> ;-)
[10:55] <cjwatson> what this means is that the DMB will check for sustained and significant contributions as part of their process
[10:55] <Laibsch> I remember I had to wake up at 3 in the morning and sit around for two hours twiddling thumbs
[10:56] <Laibsch> to receive a rejection because I do too much
[10:56] <Laibsch> I am still thoroughly pissed off
[10:56] <Laibsch> yes, sustained and significant
[10:56] <cjwatson> so they check the same requirements, but you don't have to go through two committees or whatever
[10:56] <Laibsch> in my case too sustained (since 2005) and apparently too significant
[10:56] <Laibsch> :-/
[10:56] <cjwatson> being granted any kind of upload access implicitly grants membership
[10:57] <cjwatson> (technically: because ~ubuntu-dev is a member of ~ubuntumembers)
[10:58] <cjwatson> I'm sorry you had a bad experience.  I can't do anything about that, but I can suggest a more appropriate avenue that might yield better results
[10:58] <Laibsch> OK
[10:58] <Laibsch> that is appreciated
[10:58] <Laibsch> this really should not happen, I hope it never happened again, but I cannot be sure
[11:00] <Laibsch> I will keep my application efforts to a minimum this time, if I receive another rejection this time because "I did not prove my case" then that will be it for me.  I only jump through so many hoops to be admitted as a volunteer
[11:00] <cjwatson> I left the community council in 2006, so I haven't been directly involved with the non-developer membership stuff since then ...
[11:02] <xnox> Laibsch: yeah, developer membership board focuses on checking / verifying sufficient technical skills and knowledge of the release process (to make sure one syncs/uploads the right things at the right time). Looking over your profile, it looks to me like you have sufficient technical skill to apply for PPU (per package uploader).
[11:02] <xnox> for the packages that you are the maintainer of in Debian already.
[11:02] <xnox> and then you will be able to sync them yourself into ubuntu.
[11:02] <xnox> ps. I sit on the Ubuntu Developer Membership Board that grants such rights
[11:02] <xnox> Laibsch: i have never been involved in the Community governance, I gained my ubuntu membership via the Developer Membership Board by becoming an ubuntu contributing developer first, and later a core dev.
[11:03] <xnox> well, cjwatson chaired the meeting / voted to approve me :-)
[11:03] <Laibsch> I'm sure that must help
[11:04] <xnox> Laibsch: nah, he was skeptical, but fair =))))))
[11:05] <xnox> Laibsch: if you make application wiki page, and email it in, we can review you on the 1st of December 19:00 UTC as per https://wiki.ubuntu.com/DeveloperMembershipBoard/Agenda
[11:06] <Laibsch> I hope I won't have to be present this time?
[11:06] <xnox> (one needs to apply at least 2 weeks in advance of the meeting)
[11:06] <Laibsch> that's again middle of the night
[11:06] <Laibsch> the times are not good for Asia-based people
[11:06] <Laibsch> like I said, my application effort will be minimal, including wiki page (I will create a very simple one)
[11:10] <Laibsch> 19:00 UTC is not possible for me to attend
[11:10] <Laibsch> that's again exactly 3 AM
[11:10] <Laibsch> sorry, I love Ubuntu, but not that much
[11:10] <xnox> Laibsch: those are generally interactive IRC meetings.
[11:10] <Laibsch> not after that experience
[11:10] <xnox> Laibsch: you can email, and request for an email application.
[11:10] <Laibsch> good
[11:10] <Laibsch> I wonder where my old wiki page went?
[11:10] <xnox> the meeting on the 15th will be at 15:00 UTC
[11:11] <Laibsch> I'd rather not have to redo it
[11:11] <xnox> Laibsch: if that's any better for an interactive application.
[11:11] <cjwatson> If you happen to know the URL, try appending ?action=info to it
[11:12] <Laibsch> here it is: https://wiki.ubuntu.com/RolfLeggewie
[11:12] <Laibsch> found it
[11:12] <Laibsch> deeply buried in google ;)
[12:50] <ogra_> does anyone have an idea why the fix for debian bug 169922 does not seem to be in ubuntu ?
[12:51]  * ogra_ tries to make adb on the phone actually force a readonly re-mount before killing the system on "adb reboot" ... seems "umount -f -r -a" does not work (nor does it work with a single mountpoint)
[12:52] <ogra_> according to that bug this should work since 2004
[13:22] <pitti> ogra_: I regularly do "mount -o remount,ro /", and that works fine
[13:22] <pitti> (as both dual-boot and the emulator are r/w by default, annoyingly)
[13:23] <ogra_> pitti, / is ro anyway ...
[13:23] <ogra_> pitti, http://paste.ubuntu.com/8986920/
[13:23] <ogra_> or
[13:23] <ogra_> root@ubuntu-phablet:~# umount -f -r /userdata
[13:23] <ogra_> umount: /userdata: target is busy
[13:23] <pitti> ah, busy
[13:23] <ogra_> right
[13:24] <ogra_> the bug above suggests this should work though
[13:24] <ogra_> even when busy
[13:24] <pitti> well, if there are processes still having open files on that (for write mode), how is it supposed to work?
[13:24] <ogra_> (and indeed it is busy)
[13:24] <pitti> you'd break all the running processes
[13:25] <ogra_> thats fine, we call reboot anyway (direct kernel call)
[13:25] <pitti> (but I guess that's intended)
[13:25] <ogra_> i dont care about the state of the processes, but i do care about the integrity of the fs
[13:26] <ogra_> we see a lot of file corruption on the phone ... one of the reasons is "adb reboot" ...
[13:26] <ogra_> the logical option would indeed be to make it call /sbin/reboot ... but that takes long ...  i would like to keep the convenience of speedy reboots in adb
[13:27] <pitti> ogra_: so perhaps as a first mitigation sync; reboot -f?
[13:27] <pitti> (but yeah, forcibly unmounting/ro mounting would of course be betteR)
[13:27] <ogra_> the code already calls sync ... does reboot -f gain me anything over the direct kernel reboot call ?
[13:28] <pitti> ogra_: no, reboot -f is pretty much that -- don't go through init, just reboot the kernel; that's what I meant
[13:28] <ogra_> yeah, well, that is what happens already
[13:28] <pitti> ogra_: hm, sysrq+u does forced r/o, I wonder if one can trigger that by some other means
[13:28] <ogra_> with the init layer ripped out inbetween
[13:28] <pitti>       MNT_FORCE (since Linux 2.1.116)
[13:29] <pitti>               Force unmount even if busy.  This can cause data loss.  (Only for NFS mounts.)
[13:29] <pitti> hmm
[13:29] <pitti> umount(8) also only talks about NFS
[13:30] <ogra_> well, the bug seems to actually use a real device
[13:30] <pitti> ogra_: ooh!
[13:30] <pitti> https://www.kernel.org/doc/Documentation/sysrq.txt
[13:31] <pitti> ogra_: echo u > /proc/sysrq-trigger
[13:31] <ogra_> hah !
[13:31]  * ogra_ hugs pitti
[13:31] <pitti> (well, open() and fputc('u') in C, of course)
[13:31] <pitti> ogra_: at least sysrq+u seems to work fairly reliably for me to reboot my machine after a crash and save the fs
[13:31] <ogra_> root@ubuntu-phablet:~# echo u > /proc/sysrq-trigger
[13:31] <ogra_> root@ubuntu-phablet:~# touch /userdata/foo
[13:31] <ogra_> touch: cannot touch ‘/userdata/foo’: Read-only file system
[13:31] <ogra_> \o/
[13:32] <pitti> touché
[13:32] <ogra_> lovely
[13:32] <pitti> ogra_: so, write 'u', sleep(0.5), reboot()?
[13:33] <pitti> or maybe even just (1); the whole reboot takes long enough that an extra .5 s for your data safety probably doesn't matter
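The sequence pitti sketches here (write 'u' to /proc/sysrq-trigger, wait a moment, then hard-reboot) could look like the following standalone Python sketch. The `reboot -f` call and the one-second settle time are assumptions taken from the discussion above, not something tested on a phablet image, and the whole thing of course needs root:

```python
import os
import time

SYSRQ_TRIGGER = "/proc/sysrq-trigger"

def remount_ro_then_reboot(trigger_path=SYSRQ_TRIGGER, settle=1.0, do_reboot=True):
    """Emergency-remount all filesystems read-only, then hard-reboot.

    Writing 'u' to /proc/sysrq-trigger is equivalent to sysrq+u: the
    kernel syncs and remounts every mounted filesystem read-only, even
    if it is busy.  do_reboot=False exists only for dry runs.
    """
    with open(trigger_path, "w") as f:
        f.write("u")              # sysrq 'u': emergency remount read-only
    time.sleep(settle)            # give the remount (and implied sync) a moment
    if do_reboot:
        os.system("reboot -f")    # bypass init, reboot the kernel directly
```

Compared to calling `umount -f -r`, this avoids the "target is busy" failure entirely, which is the whole point of the sysrq path.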
[13:33] <ogra_> well, adb already has:
[13:33] <ogra_> execl("/system/bin/vdc", "/system/bin/vdc", "volume", "unmount",
[13:33] <ogra_>         getenv("EXTERNAL_STORAGE"), "force", NULL);
[13:33] <ogra_> which it calls right before rebooting
[13:34] <ogra_> i guess i can just replace these two lines
[13:34] <pitti> so you are saying that this doesn't work, or it doesn't apply to our internal storage, or we need to do it for other mounts?
[13:34] <ogra_> we dont use /system stuff in ubuntu indeed :)
[13:34] <ogra_> nor do we use vdc ...
[13:34] <ogra_> that code is a no-op currently
[13:35] <pitti> aah
[13:35] <ogra_> but in adbd it is executed right before the reboot call to the kernel
[13:35] <pitti> ogra_: so yeah, sounds good
[13:35] <pitti> I'd still give it a second to actually sync and remount
[13:35] <ogra_> so for our usecase just writing to proc instead sounds good
[13:35] <ogra_> yeah, i can add a sleep
[13:36] <ogra_> that will still be miles better than using upstarts reboot though :)
[13:36] <pitti> right, I didn't even realize that adb reboot didn't use the "proper" reboot
[13:36] <pitti> it takes long enough, after all (some 5 s here)
[13:37] <pitti> and I use it all the time
[13:40] <ogra_> pitti, hah, i guess you never used a normal reboot then ... thats more in the area of 20s
[13:41] <pitti> ogra_: well, I did (long-press power button)
[13:41] <pitti> but I didn't really pay enough attention to wonder about the time difference
[13:41] <pitti> ogra_: but these days most of what I do with the phone is to test/fix adt-run :)
[13:41] <ogra_> it is quite significant ... i always prefer adb reboot
[13:41] <ogra_> if i do work on the phone at least
[13:42] <pitti> ogra_: so, thanks for fixing that!
[13:42] <ogra_> well, thanks for getting me on the right track !!
[13:42] <ogra_> (i wouldnt have thought about sysrq ever)
[13:59] <pitti> smoser: ok, SRU for bug 1391354 uploaded (it's fine in vivid)
[14:24] <mterry> wgrant, heyo -- I have a bzrlib script that I would ideally like to run even faster -- its intent is to delete invalid tags.  http://paste.ubuntu.com/8987882/  Is there a cleverer way to do this?  (Or can you point me at someone else that would know?)
[14:25] <Laibsch> cjwatson, xnox: what about the possibility not to have to attend the IRC meeting?  one of you mentioned a mail interview process?!
[14:26] <xnox> Laibsch: yeah, in the application submission email you should state that you cannot attend either of the IRC meeting times and wish to be processed via email.
[14:27] <wgrant> mterry: I don't know bzrlib well at all, but I'd look for a bulk revision-id lookup method.
[14:27] <Laibsch> well, I might be able to attend but I don't want another night session.  Even 1500 UTC is 23:00 here or even 00:00 if I am in Tokyo at the time and if it takes two hours like it did last time then I don't want that
[14:29] <xnox> mterry: well that script is fast if the branch you point it at is local, rather than remote.
[14:30] <xnox> mterry: so i'd clone it to a temp location first, find all tags to delete, and then delete them from target.
[14:31] <xnox> it's the equivalent of $ bzr tags | grep ? | cut -d\  -f1 | xargs -L1 bzr tag --delete
[14:31] <mterry> xnox, yeah that might be the best solution, I was hoping to do a nice bzrlib thing
[14:31] <mterry> xnox, but shell always wins  :)
[14:31] <Saviq> mterry, check out http://people.canonical.com/~mwh/bzrlibapi/bzrlib.repository.Repository.html#all_revision_ids
[14:31] <Saviq> xnox, that was the first thing we were doing, and it was _slow_ ;)
[14:32] <mterry> Saviq, oh really?
[14:32] <mterry> Saviq, just two bzr requests, I would expect it to be better
[14:32] <Saviq> mterry, one request per tag, no?
[14:33] <Saviq> mterry, `bzr tag --delete` does not seem to allow multiple tags (at least per man)
[14:36] <Laibsch> xnox: I basically left my old application page from 2010 as is and only added a paragraph to the top explaining why I did so.  https://wiki.ubuntu.com/RolfLeggewie What are the next steps? Send e-mail to devel-permissions@lists.ubuntu.com and request for becoming a PPU dev?
[14:36] <caribou_> when merging a package from Debian closes outstanding Ubuntu bugs, should those be listed in the changelog ?
[14:36] <Laibsch> caribou_: You can add "LP: #123" in the changelog
[14:36] <Laibsch> similar to "Closes: #123"
[14:36] <caribou_> Laibsch: yeah, that's what I meant
[14:37] <caribou_> Laibsch: I know that, I just want to know if this is part of a normal merge activity
[14:37] <Laibsch> closes is for the Debian BTS, LP: for Launchpad
[14:37] <Laibsch> oh, you are wondering if you need to leave the changelog intact?
[14:37] <Laibsch> I'm not sure I understand the question
[14:39] <Saviq> mterry, yeah, looks like pulling all_revision_ids() and matching to tags would be faster
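The approach mterry and Saviq converge on — one bulk `all_revision_ids()` lookup, then matching tags locally instead of one query per tag — could be sketched like this. The tag-matching itself is a pure function; the bzrlib calls (`Branch.open`, `tags.get_tag_dict()`, `tags.delete_tag()`, `repository.all_revision_ids()`) match the API the pastes above use, but this sketch has not been run against a live branch:

```python
def invalid_tags(tag_dict, known_revids):
    """Return, sorted, the tag names whose revision id is not in the repo.

    tag_dict maps tag name -> revision id (the shape of
    bzrlib's Branch.tags.get_tag_dict()).
    """
    known = set(known_revids)
    return sorted(name for name, revid in tag_dict.items()
                  if revid not in known)

def delete_invalid_tags(branch_url):
    """Delete every tag pointing at a revision the repository lacks."""
    from bzrlib.branch import Branch  # bzrlib is Python 2 only

    branch = Branch.open(branch_url)
    branch.lock_write()
    try:
        # One bulk revision-id fetch instead of one round-trip per tag.
        stale = invalid_tags(branch.tags.get_tag_dict(),
                             branch.repository.all_revision_ids())
        for name in stale:
            branch.tags.delete_tag(name)
    finally:
        branch.unlock()
    return stale
```

The win over the shell pipeline is that the remote branch is queried twice in total (tag dict, revision ids) rather than once per tag.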
[14:43] <caribou_> Laibsch: when updating the changelog during a merge to the development version and the merge brings in patch from debian that fix existing bugs,should the changelog flag those bugs with (LP: #{bugno})
[14:43] <xnox> caribou_: add lp: #N reference in the debian/changelog, upload to debian, the rest will happen.
[14:43] <caribou> I would say I should, just need confirmation
[14:44] <xnox> caribou: you can add lp:#N references, if you can, during merge/before uploads.
[14:44] <caribou> xnox: "upload to debian" I suppose upload to ubuntu
[14:44] <Laibsch> xnox: I believe he wants to know if he should fiddle with the changelog when the debian maintainer forgot to flag the LP bug
[14:44] <xnox> caribou: otherwise you will need to close bugs yourself.
[14:44] <xnox> caribou: don't modify other entries, just your own.
[14:44] <caribou> Laibsch: nope, I'm merging a package from debian into Ubuntu Vivid
[14:44] <caribou> xnox: indeed
[14:45] <xnox> caribou: e.g. "* merge from debian, remaining changes:
[14:45] <xnox>  * foo bar
[14:45] <caribou> xnox: I meant in the section I'm adding following the merge
[14:45] <xnox>  * Changes in debian fix LP: #1, LP: #2
[14:45] <caribou> xnox: ok, that's what I wanted confirmed
[14:45] <xnox> caribou: yeah.
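Putting xnox's two bullets together, such a merge entry might look like this (package name, version, and bug numbers here are purely hypothetical):

```
foo (1.2-3ubuntu1) vivid; urgency=medium

  * Merge from Debian unstable.  Remaining changes:
    - keep Ubuntu-specific foo bar patch
  * Changes in Debian fix LP: #1, LP: #2.
```

Launchpad closes the referenced bugs automatically once the upload is accepted, so no manual bug-closing is needed.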
[14:50] <Saviq> mterry, you there? connection issues it seems?
[14:50] <Saviq> mterry, if you didn't get it, there's a branch.repository.all_revision_ids()
[14:51] <Saviq> that can be matched against tags
[14:53] <mterry> Saviq, yeah I found that, am working on revision (got sidetracked by a wizard issue)
[14:53] <mterry> Saviq, thanks
[14:54] <Saviq> mterry, no pressure, just wasn't sure you got the msg
[14:54] <mterry> Saviq, I hadn't
[14:54] <mterry> Saviq, I am having dumb irc problems indeed  :(
[14:55] <mterry> Saviq, tell me if this is faster: http://paste.ubuntu.com/8988295/
[14:56] <mterry> still seems slightly slow to me, but I think that's just my connection today
[14:56] <mterry> locally is very fast
[14:58] <Saviq> mterry, http://pastebin.ubuntu.com/8988345/
[14:59] <Saviq> it's very slightly slower than the original one with hardcoded list
[14:59] <Saviq> mterry, so it's great
[14:59] <mterry> Saviq, heh
[15:00] <mterry> Saviq, well once we feel comfortable with this, please replace your copy of the script in chinstrap
[15:00] <Saviq> mterry, yup I will
[15:00] <Saviq> mterry, I'll just add some logging and display for when you can't write to the branch
[15:21] <Saviq> mhall119, hmm... how is it that sessions that were supposed to start at 1400 UTC are already finished¿?
[15:24] <infinity> Saviq: Because it's 15:24?
[15:25] <infinity> Saviq: 'date --utc' is your friend. :)
[15:26] <Saviq> infinity, d'oh, I'm +1 now, not +2 ;)
[15:26] <Saviq> damn DST
[15:49] <Laibsch1> I had in the past always been able to simply swap out my HD, put it into a different computer and boot my old system from it.  This seems no longer to be the case.  what do I need to do now?
[15:52] <mhall119> Saviq: lol
[15:54] <Laibsch> oops, wrong channel
[16:31] <pitti> doko: FYI, regression in https://jenkins.qa.ubuntu.com/job/vivid-adt-python3.4/7/; (test.test_pyexpat.HandlerExceptionTest)
[16:31] <pitti> it was auto-synced, so no mail notification to you specifically
[17:04] <doko> pitti, succeeds on the buildd, and currently I can't see anything network specific
[17:07] <pitti> doko: no, indeed; test.test_pyexpat.HandlerExceptionTest doesn't sound network specific at all, neither does "RuntimeError: a"
[17:35] <pitti> smoser, infinity: btw, now would be a good time to reboot the wolfe host, or the remaining VMs (I suppose if I just reboot them from "within", they won't get the changed qemu RAM config)
[17:36]  * pitti slips in a quick dist-upgrade
[18:33] <smoser> pitti, you rock.
[18:33] <smoser> thank you.
[18:33] <smoser> oh. i meant thank you for the help on that iscsi issue.
[18:33] <smoser> pitti, i'm sorry, wrt wolfe, what do you need ?