[03:41] <infinity> slangasek: Drat.  Remember when I said your mountall magic fixes my "/tmp not ready yet" bug?
[03:41] <infinity> slangasek: I must have just had a few lucky reboots in a row, cause it's back.
[03:42] <infinity> slangasek: Which indicates it's a race of some sort, but I'm not sure why there's a race to find a filesystem for something without a mountpoint in the first place. :P
[04:27] <slangasek> infinity: very good question
[04:58] <mah454> Hello
[04:58] <mah454> I need to confirm a new distro template for apt-add-repository
[04:59] <mah454> I changed lsb_release
[05:25] <pitti> Good morning
[06:23] <FourDollars> pitti: Could you help to review https://code.launchpad.net/~fourdollars/language-selector/singleton_and_escape_key/+merge/132595 ? Thanks.
[06:24] <pitti> FourDollars: yes, can do today
[06:24] <FourDollars> pitti: Thanks.
[06:24] <FourDollars> pitti: Could you also help https://bugs.launchpad.net/precise-backports/+bug/1076901 ?
[06:25] <pitti> I'm not a backporter, sorry
[06:27] <FourDollars> pitti: Do you know who can help or what should I do next?
[06:27] <pitti> AFAIUI the backporter team regularly reviews pending requests
[06:29] <FourDollars> pitti: I see. Thanks.
[06:50] <didrocks> hey, can anyone reject https://code.launchpad.net/~vanvugt/ubuntu/quantal/nux/fix-1039155/+merge/128422 please?
[06:51] <pitti> didrocks: done
[06:51] <didrocks> pitti: thanks :)
[07:39] <pitti> jibel: so, cjwatson and I didn't talk about britney integration of adt yet; however, I spent quite some time fixing two handfuls of failing adt tests
[07:47] <jibel> pitti, ok. I thought about it last week, and I think I'll have to split the job that triggers the tests into 2 parts: one which identifies packages to test, and the other which will deal with jenkins.
[07:47] <dholbach> good morning
[07:48] <jibel> pitti, this way, britney can start the jobs without any dependency on lab's specific bits, and will know which tests have been started.
[08:50] <pitti> @pilot in
[09:07] <zequence> Isn't there an ubuntu sysadmin channel here somewhere?
[09:24] <lifeless> zequence: #canonical-sysadmins perhaps ?
[09:40] <pitti> $ bzr merge lp:~obounaim/ubuntu/raring/virtualbox/debian-merge
[09:40] <pitti> Unapplying quilt patches to prevent spurious conflicts
[09:40] <pitti> bzr: ERROR: Unable to unapply quilt patches for 'other' tree: rmdir: failed to remove `.pc/cve-2012-3221.patch': No such file or directory
[09:40] <pitti> did anyone happen to run into this before and knows how to fix/workaround?
[09:42] <xnox> pitti: yes.
[09:43] <apw> do i recall correctly that build-essential in raring is now 'correct', in that we have fewer extraneous things in the chroots by default?
[09:43] <xnox> pitti: in the ~/.bazaar/builddeb.conf set "quilt-smart-merge=False"
[09:43] <xnox> pitti: and then unapply patches yourself in the old branch & the new branch. Commit in both, then merge.
[09:43] <xnox> pitti: re-push quilt series.
[09:44] <pitti> what could be easier...
[09:44] <pitti> so I need to check out two full virtualbox branches, fun
[09:44] <xnox> probably the package is a 1.0 source package and builddeb got it wrong.....
[09:44]  * pitti screws it, downloads the diff, applies manually and lets the package importer sort it out
[09:44] <xnox> pitti: bzr init-repo virtualbox; cd virtualbox
[09:45] <xnox> then it's only half the download =)
[09:45] <xnox> pitti: i tend to always upload and let the package importer sort it out. But I do use bzr for auto-merging most of the stuff for me.
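The builddeb.conf workaround xnox describes above can be sketched roughly as follows. This is a hedged sketch: the `quilt-smart-merge` key and the `~/.bazaar/builddeb.conf` path come from his message, while the `[BUILDDEB]` section header is an assumption; the snippet writes to a temp file rather than the real config so it stays side-effect free.

```shell
# Sketch of the workaround: disable quilt-smart-merge in bzr-builddeb's
# user config. The real target file is ~/.bazaar/builddeb.conf; a temp
# file is used here so nothing on the system is modified.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[BUILDDEB]
quilt-smart-merge = False
EOF
# verify the option actually landed in the file
option_set=false
grep -q '^quilt-smart-merge = False' "$conf" && option_set=true
rm -f "$conf"
```

After that, per xnox's steps, one would unapply the patches in both branches, commit in both, merge, and re-push the quilt series.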
[09:45] <pitti> xnox: pre-applied patches and I have a deep hate for each other
[09:46] <xnox> heh
[09:46] <pitti> I have yet to find a person who gets along with those without sacrificing chickens and screwing it up from time to time
[09:49] <cjwatson> I do :)
[09:49] <cjwatson> Not that I think the importer's approach is ideal, but
[09:52] <xnox> doko_: the reason for python-imaging and python-reportlab is hplip
[09:53] <xnox> on the cd, that is.
[10:14] <apw> cjwatson, i think you were working with benc on linux-ppc, i see it ftbfs and (if you aren't already) i was planning on trying to fix that to release linux
[10:15] <cjwatson> apw: I was just processing it through the archive
[10:15] <cjwatson> apw: Be my guest, though I see he has his own git branch for it
[10:15] <zequence> lifeless: That's the one. Thanks
[10:16] <apw> cjwatson, i have his branch indeed, will use that as a base and send it back to him
[11:08] <pitti> . o O { sponsoring merges as Ubuntu MPs is about as far away from "fun and efficient" as it could be }
[11:08] <cjwatson> Sponsoring merges is hopeless
[11:09] <cjwatson> One of the reasons I discourage people from doing that until they can upload directly
[11:09] <pitti> a nice and clean "current debian - merged ubuntu" debdiff attached to a bug is fine IMHO
[11:09] <pitti> but hopping through all that "checkout ubuntu, merge, fight with the quilt patches, etc." until you get to that debdiff is a pain
[11:13] <xnox> huh?! if you are sponsoring you checkout the sponsoree branch and diff against the tags within that branch.
[11:14] <xnox> unless it's a horrible mess with forgotten $ bzr add .pc
[11:14] <pitti> xnox: ah, I guess you could do that if you don't push it back to the official branch anyway
[11:15] <xnox> pitti: yeap. and you can do $ bzr bd -S to get a source package out of the sponsoree branch. If that doesn't work, throw it back.
[11:15] <seb128> pitti, I found the launchpad diff mostly useful for those, it's the ubuntu->ubuntu diff basically, so you see "ok, few revisions from debian added, looks good"
[11:16] <xnox> and there you have your source package with the pristine-tarball regenerated.
[11:16] <seb128> you just need to filter out the quilt noise when debian added new patches
[11:16] <pitti> seb128: yeah, but I do want to verify the d->u debdiff, as that's the one which we should minimize
[11:16] <xnox> pitti: $ bzr diff -rtag:1.0-2
[11:16] <seb128> pitti, then you are going beyond sponsoring into fixing-it-yourself mode ;-)
[11:16] <pitti> xnox: right, thanks
[11:16] <seb128> pitti, well I usually read the changelog summary to see the diff
[11:17] <seb128> "diff", e.g. what changes we carry
[11:17] <pitti> seb128: not necessarily fixing, but I don't want to sponsor merges which have obsolete stuff in them
[11:17] <seb128> right, I found that the changelog was enough to check the changesets and their validity usually
[11:17] <seb128> but maybe I check less into details than you do
[11:18] <cjwatson> pitti: clean debdiff> It depends what you're checking.  That's still a royal pain if the merge is at all complex and you're trying to check whether the merged debdiff actually matches what used to be in the Ubuntu delta
[11:18] <cjwatson> pitti: I've had too many instances of well-intentioned newcomers dropping patches they don't understand to skip that kind of verification, and it ends up being significantly more work than doing the merge myself
[11:19] <pitti> cjwatson: I look at both, but the more interesting one is the d->u one for me usually
[11:20] <pitti> so I guess I go with xnox's approach to only checkout the proposed branch, check the diffs against the tags, and never push it back
[11:20] <pitti> did that with one branch now, and it's a lot easier indeed
[11:23] <seb128> pitti, what tag do you check against? the vcs from the contributor doesn't have the current debian versions, right? e.g. no way to do "current debian to proposed update" from the vcs?
[11:23] <xnox> well. to address cjwatson's concern and pitti's usability I do this. Checkout proposed branch. generate the new d->u diff and u->u diff. But also pull the old d->u diff (with pull-lp/debian-source & debdiff). and then review old d->u & new d->u.
[11:23] <pitti> seb128: they do have the current Debian version
[11:23] <pitti> seb128: I guess because the creator of the branch actually used "bzr merge lp:debian/foo"
[11:23] <pitti> that imports all the tags etc.
[11:24] <seb128> oh ok, nice
[11:24] <seb128> quite some don't
[11:24] <xnox> with the last step, I catch a few "overzealous" preservations: double application of the same patch due to fuzz, and keeping the changelog entries for "actually merged in debian long time ago".
[11:24] <xnox> seb128: if there is no debian tag, throw the branch away. It was not a bzr merge, I will not trust it.
[11:24] <pitti> seb128: e. g. I check out lp:~logan/ubuntu/raring/desktop-base/debian-merge, and "bzr diff -r tag:7.0.0ubuntu2" gives me u→ u, and "bzr diff -r tag:7.0.3" gives me d→ u
[11:25] <seb128> pitti, xnox: nice tip, thanks
[11:25] <xnox> (e.g. if the importer is lagging, at least bzr import-dsc should have been done)
[11:25] <Laney> why do you need to pull-*-source if you have the old tags in vcs too?
[11:25] <seb128> I usually end up dgetting the ubuntu and debian versions and doing debdiffs locally
[11:25] <pitti> that's what I did before: download the diff from the MP, clean it, and apply it locally
[11:25] <seb128> when the merge is not trivial; when it's trivial (like most of those from "logan") I usually can ack it from the launchpad diff
[11:26] <xnox> seb128: dgetting or pull-[lp|debian]-source $pkg [version|release] ? =)
[11:26] <pitti> (clean: throw away the "applied patches" portions)
[11:26] <seb128> xnox: I'm too old school, dunno about those pull tools :p
[11:26]  * seb128 notes to try those
[11:26] <xnox> I also do `| filterdiff -x '*.pc*'`
[11:27] <seb128> pitti, oh, I usually bzr branch the vcs from the submitter and bzr bd --source it
[11:27] <seb128> pitti, then work from the source package, dput that
[11:27] <pitti> seb128: yeah, that's what xnox told me as well; I initially tried "bzr merge" and tried to push that back, but that's too brittle
[11:27] <seb128> takeaway from the discussion: everybody is working using a different workflow
[11:28] <xnox> pitti: the problem with $ bzr merge is that it always tries to do "merge from debian", e.g. auto-unapply quilt patches. Which doesn't make sense when merging a proposed to-be-sponsored branch.
[11:29] <xnox> pitti: with those, if they are actually clean, I push them to lp:ubuntu/$pkg. Or do pull/merge with quilt-automerging turned off.
[11:32] <cjwatson> seb128: Heh, even as an old-schooler, pull-{lp,debian}-source are the best things ever
[11:33] <seb128> cjwatson, no doubt, I just didn't know about them ;-)
[11:34] <seb128> too many scripts and I don't keep up with everything which is available there
[11:34] <seb128> seems like I should ;-)
[11:34] <pitti> +1 on pull-*-source
[11:34] <xnox> generally, if the utility is shipped in ubuntu-dev-tools it means that "all of your aliases and scripts suck compared to this shiny command line toy"
[11:35] <cjwatson> Heh
[11:36] <pitti> I go through dpkg -L ubuntu-dev-tools every 6 months or so, and so far it never failed to surprise me with at least one new cool thing :)
[11:36] <mlankhorst> xnox: or it's too ugly for ubuntu-dev-tools ;)
[11:39] <rbasak> pitti: what's the rationale for autopkgtest/dep8 to treat a zero exit status but output to stderr as a failure? Eg. wget prints progress to stderr, so a script that calls wget is treated as a failure by default. I guess we could wrap it, but is there a more general answer?
[11:39] <xnox> rbasak: current implementation.
[11:40] <xnox> as in, that's what the current implementation does, for historic reasons dating back to 2007
[11:40] <pitti> rbasak: I'm not quite sure TBH; it's nice to detect new warnings that weren't there before, but of course it's a bit pointless if your tests are expecting to write stuff to stderr
[11:40] <xnox> e.g. for desktop apps, launching and having output on stderr is bad; it means some errors are present.
[11:41] <rbasak> How about a "writes-stderr" "restriction"?
[11:41] <pitti> rbasak: for Python unittest I usually use "unittest.main(testRunner=unittest.TextTestRunner(stream=sys.stdout, verbosity=2))"
[11:41] <pitti> to fix python's unfortunate default of writing the regular output to stderr
[11:41] <cjwatson> Or wrap in a script that does 2>/dev/null if you know you never care
[11:41] <pitti> yeah, a lot of tests do that, too
[11:42] <rbasak> The thing is if the test does fail, then the output that went to stderr may be useful
[11:42] <pitti> rbasak: writes-stderr> not sure whether upstream likes that, but seems fine to me
[11:42] <pitti> rbasak: oh, we do keep that; if stderr is nonempty, you see it as an artifact in jenkins
[11:42] <rbasak> Yes, but if the workaround is to >/dev/null, then it won't be :)
[11:43] <pitti> rbasak: 2>&1 is better really
[11:43] <cjwatson> stderrfile="$(mktemp)"; cleanup () { rm -f "$stderrfile"; }; trap cleanup EXIT HUP INT QUIT TERM; run-tests 2>"$stderrfile" || cat "$stderrfile" >&2
[11:43] <xnox> wget has --quiet option
[11:43] <cjwatson> Or something
[11:43] <cjwatson> But yeah, it's often less effort to silence the spurious errors in the first place ...
[11:43] <rbasak> xnox: yes, but the script that we're calling doesn't (currently) have a --quiet option, so it won't pass through to wget. We're not testing wget - we're testing a script that calls wget
[11:43] <pitti> if I get along with fixing the output fd of unittest, I usually prefer that; for others, debian/tests/foo doing 2>&1 seems the next best approach
[11:43] <xnox> rbasak: I see. ok.
[11:44] <rbasak> cjwatson: that's exactly what I'm suggesting that a "writes-stderr" restriction might do :)
[11:45] <xnox> I did patch the guilt package test-suite to be sensitive to $ADTTMP, which roughly means: we are in an autopkgtest environment, so use the system install, don't write silly things to stderr, but do fail on stderr.
[11:47] <rbasak> OK, thanks all
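The capture-and-replay wrapper cjwatson sketches above could look roughly like this as a standalone script. This is only a sketch: `run_tests` is a stand-in for the real test command (e.g. a script that calls wget), here simulated by a function that chatters on stderr but succeeds, so the whole thing runs without a real test suite.

```shell
# Sketch of the wrapper idea: swallow stderr noise when the tests pass,
# replay the captured stderr only when they fail.
stderrfile="$(mktemp)"
cleanup() { rm -f "$stderrfile"; }
trap cleanup EXIT HUP INT QUIT TERM

# Stand-in for the real test command: writes progress to stderr
# (like wget would) but exits 0.
run_tests() { echo "downloading... 42%" >&2; true; }

if run_tests 2>"$stderrfile"; then
    status=0    # success: the captured stderr is simply discarded
else
    status=1
    cat "$stderrfile" >&2    # failure: replay stderr for debugging
fi
```

This preserves rbasak's point that stderr output is still available when the test actually fails, while keeping the default "non-empty stderr means failure" rule satisfied on success.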
[12:02] <xnox> thinking about the hash-sum mismatch from apt (which I just had on my laptop) - shouldn't apt be smart about those & requeue and try updating that repo again (with custom delta-t & #-retries) before giving up?
[12:04] <rbasak> It's pretty awkward to fix that within apt
[12:04] <rbasak> I'm hoping to fix it for good with the by-hash stuff this cycle
[12:08] <xnox> rbasak: explain why it's awkward to fix that within apt? does it return a different error code on hash-sum mismatch? if it does, a wrapper script around apt-get update installed with dpkg-divert, which loops around apt-get update, should do it.
[12:09] <xnox> rbasak: although by-hash stuff is nice, I feel cautious that not all mirrors will support it any time soon.
[12:09] <rbasak> xnox: inside apt, the code that manages the downloads is extremely twisted and laden with more tech debt than any other project I've ever seen
[12:10] <xnox> 8) ok.... scary
[12:10] <rbasak> xnox: an outside wrapper would definitely be far easier. I'm not sure if apt returns a unique return code, but that probably wouldn't be too hard to do (or to parse the stderr)
[12:10] <xnox> parse the stderr sounds dirty =)) but why not ;-)
[12:11] <rbasak> Yeah mirrors might take a while to pick it up. Including our own. But the nice thing is that if you care you can implement your own by-hash mirror without upstream support :)
[12:11] <rbasak> (admittedly that still doesn't solve the problem in the general case)
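The retry wrapper xnox proposes could be sketched like this. A hedged sketch only: `try_update` stands in for the real `apt-get update` call (here simulated with a counter that fails twice, as a transient hash-sum mismatch would, then succeeds), and `retries`/`delay` are the "#-retries" and "delta-t" knobs he mentions.

```shell
# Sketch of a retry loop around "apt-get update". try_update is a
# hypothetical stand-in: it fails on the first two attempts to simulate
# transient hash-sum mismatches, then succeeds on the third.
retries=5
delay=1    # delta-t between attempts, in seconds
attempts=0
try_update() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]    # real version: apt-get update
}

n=0
until try_update; do
    n=$((n + 1))
    if [ "$n" -ge "$retries" ]; then
        echo "apt-get update kept failing, giving up" >&2
        break
    fi
    sleep "$delay"
done
```

Installed via dpkg-divert as xnox suggests, the real version would need apt to signal the mismatch distinctly (via a dedicated exit code, or by parsing stderr as rbasak notes).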
[12:25] <xnox> rbasak: where/how are the cloud images generated? I'd like to tinker with a few settings to improve performance and reduce the size of those.
[12:25] <rbasak> xnox: utlemming and smoser manage those
[12:25] <xnox> rbasak: thanks.
[12:26] <rbasak> I think the scripts that generate them are in LP somewhere
[12:26]  * xnox wishes lp had "show all activity bzr/revisions/pushes for a person", cause I bet the branch has a team owner.
[12:34] <xnox> rbasak: https://code.launchpad.net/~ubuntu-on-ec2/vmbuilder/automated-ec2-builds
[12:34] <xnox> looks about right?! it's referenced in the build-logs of the cloud-images available for download.
[12:34] <rbasak> xnox: seems likely. smoser can confirm when he comes online
[12:35] <xnox> rbasak: yeap, it is right. Referenced in https://help.ubuntu.com/community/UEC/Images
[13:40] <pitti> @pilot out
[13:58] <smoser> xnox, that is not correct any more. unfortunately that document is out of date.
[13:59] <smoser> https://wiki.ubuntu.com/UbuntuCloud/Images/Publishing is *better*, but still out of date.
[13:59] <xnox> smoser: hm.. ok. where is _the_ source?
[14:00] <smoser> well, that url above references just about everything except for reference to live-build branch that we use
[14:00] <smoser> lp:~ubuntu-on-ec2/live-build/cloud-images
[14:01] <xnox> ok...
[14:01] <xnox> thanks
[14:01] <smoser> http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/automated-ec2-builds/view/head:/misc/ec2-build-on-ec2 does much of it.
[14:01] <smoser> what were you looking to fix/change?
[14:01] <xnox> default apt & dpkg settings.
[14:02] <smoser> as in?
[14:02] <xnox> maybe it should be in cloud-init instead, I am not sure.
[14:02] <xnox> do not download description translations, disable i386 multiarch on amd64 images.
[14:02] <xnox> possibly set lang to C and purge all translations.
[14:03] <xnox> unless all of this is already done on the recent images.
[14:03] <xnox> cuts down apt-get update time by more than half.
[14:03] <smoser> weell..
[14:03] <xnox> maybe,  not the lang bit =)
[14:04] <smoser>  - translations: i dont know enough about how they work to decide if its ok or not. i'm not opposed to it, but it would be a regression for some users i'm sure (being uncultured american, it "works for me")
[14:04] <xnox> well the fact is that currently you are downloading normal package list & then the en_US translation of the same package list.
[14:04] <ogra_> why dont the cloud builds run oem-config on first login ?
[14:05] <xnox> doubling your package list.
[14:05] <xnox> ogra_: no oem-config. ever on cloud.
[14:05] <ogra_> xnox, right, but why ?
[14:05] <xnox> ogra_: we don't want to manually configure 100 instances I spin up for my hadoop. ever.
[14:05] <ogra_> oh, just have a default preseed
[14:05] <smoser>  - i386 on amd64 images: i think i've actually tried to do this unsuccessfully.  in the maas ephemeral image (http://bazaar.launchpad.net/~smoser/maas/maas.ubuntu.com.images-ephemeral/view/head:/maas-cloudimg2ephemeral line 251)
[14:06] <ogra_> only if you dont use that preseed would you have to manually configure
[14:06] <xnox> ogra_: well, think of cloud images more as if they are pre-installed for automated usage.
[14:06] <ogra_> yes, i do
[14:06] <smoser> ogra_, cloud-init does what you want it to do.
[14:06] <xnox> ogra_: there is cloud-init which allows you to change settings on first boot.
[14:06] <cjwatson> You don't download a separate package list for translations, only files containing the different descriptions.
[14:06] <cjwatson> (point of information)
[14:06] <ogra_> well, oem-config uses a standardized way that the installer also uses, so you get changes automatically
[14:07] <smoser> - setting lang to C: i'm probably not opposed to that, setting it as a default. cloud-init will change it to en_US on first boot though.
[14:07] <smoser> unless we also change that.
[14:08] <smoser> cjwatson, the translation downloads are annoying though, if not needed. especially with S3 mirrors, where the serial small file requests are quite painful.
[14:09] <xnox> cjwatson: true. but it's a lot of queries to see if they are updated for each component/pocket, isn't it? plus they are a useless waste on auto-spun-up instances...
[14:09] <cjwatson> I don't really care either way.  I was just taking issue with the description.
[14:09] <smoser> rbasak, was/is going to make apt able to do parallel there, which will allow us to blast S3 very unfriendly-like.
[14:09] <rbasak> smoser: that a separate thing that I hope to do this cycle to. apt happens to have the logic in there that should make this change easy
[14:10] <rbasak> that's a separate thing that I hope to do this cycle too.
[14:10] <smoser> i hope so too
[14:10] <smoser> :)
[14:10]  * rbasak learns to speak English
[14:11] <smoser> for the reader, each http connection to S3 has a large overhead (on the S3 side). as a result, the average download rate across some large get like 'apt-get --download-only install ubuntu-desktop' tops out at like 8M/s (maybe 12).
[14:11] <smoser> but if we parallelize that (with apt-fast), we can saturate a gigabit link.
[14:12] <smoser> and if S3 doesn't stand up, thats their fault. (especially since amazon explicitly told us that S3 was not designed for lots of little files like apt is doing... after they told us to put mirrors in S3 and ignored my complaints about speed).
[14:13] <smoser> (and for the record, pipelining there has just about zero effect)
[14:13] <smoser> xnox, so in general, i'm not opposed to any of your changes, but i'm careful in all changes.
[14:15] <xnox> smoser: I understand. I want to play around with these settings and see if I can still generate images and launch them & check the performance differences.
[14:15] <smoser> speed of 'apt-get update' is a pretty important thing in my opinion
[14:16] <smoser> and i'd love for the images to not use i386 on amd64.
[14:18] <xnox> ack.
[14:20] <smoser> ogra_, fwiw, you can feed debconf preseed to cloud-init just like you would the installer.
[14:21] <smoser> cloud-init does not reconfigure anything already installed (arguably a bug), but new packages installed would then get those preseeds.
[14:21] <ogra_> does it use the udebs ?
[14:21] <xnox> no =)))))
[14:21] <ogra_> like ubiquity/oem-config does
[14:21] <ogra_> aha
[14:21] <ogra_> that was actually my point :)
[14:22] <ogra_> so you dont have to re-implement all these bits all the time when something changes
[14:22] <smoser> i'm probably just being ignorant, but i have no idea why i would want to use udebs.
[14:22] <smoser> example of change?
[14:22] <ogra_> the sudo group
[14:23] <xnox> hmm... oem-config is meant to be run with a tty, whereas cloud-init is always unattended.
[14:23] <ogra_> the point is, by using the udebs you dont have to care for the backend stuff at all; whatever changes the distro makes will happen there and you just inherit them
[14:24] <xnox> not sure it would be wise to mix the two.
[14:24] <smoser> i'm not opposed to getting "free stuff"
[14:24] <ogra_> oem-config runs fine on serial tty's etc and you can preseed it as well as you can any other bit
[14:24] <ogra_> in which case it wont need a tty
[14:24] <xnox> amazon cloud does not give you serial tty =)
[14:25] <ogra_> well, you dont need one if you preseed
[14:25] <ogra_> but if you can have one and dont preseed you would get a proper first login configuration
[14:25] <ogra_> and everything for free without having to poke at backend code ;)
[14:26] <ogra_> anyway, just wondering ... its not my work area :)
[14:26] <smoser> i'm not opposed to it in principle, but not convinced of the merit. the sudo group change sucked, but generally we have very few such things "baked in".
[14:26] <xnox> ogra_: well.. the "first login" is the problem that cloud-init solves. By default you can't login into ec2 image. You have ssh, but then your ssh key needs to be in the user-account pre-setup.
[14:27] <ogra_> xnox, right, but where does the functionality of cloud-init differ from what a preseeded oem-config does ?
[14:27] <xnox> and cloud-init does that funny grub booting as well.
[14:27] <ogra_> i dont think it does
[14:27] <ogra_> (differ i mean) ...
[14:30] <xnox> the way the first user is done and the hostname is different, and apart from locale choice nothing else is common between the two.
[14:30] <toabctl> can someone please have a look at #1077938, please? maybe that's something for an SRU, too.
[14:31] <xnox> such that, sure, cloud-init can use oem-config, but it would still need to calculate all the values & preseed them to oem-config, which then hopefully will succeed in setting them.
[14:31] <xnox> it already does the calculation bit, and simply sets them straight away, instead of generating a preseed & feeding it to oem-config.
[14:31] <xnox> so oem-config would be an extra (unnecessary) layer.
[14:33] <ogra_> well, it seems just like a lot less work to rely on oem-config/ubiquity/d-i
[14:33] <ogra_> (which are the same backend wise)
[14:34] <xnox> but d-i is not used at all on cloud images.
[14:34] <xnox> so why _add_ it?
[14:34] <ogra_> ?
[14:34] <ogra_> i didnt say you shoudl add d-i
[14:34] <ogra_> just that the backend functions of d-i and oem-config or ubiquity are identical
[14:35] <ogra_> its all coming out of the same udebs, so you would get distro code changes for free
[14:35] <ogra_> instead of chasing a rabbit and trying to keep up with what the distro does
[14:35] <ogra_> less error potential
[14:36] <xnox> i am not aware of any distro changes that are being chased.
[14:36] <ogra_> well, bits like dropping groups from the default set and the matching transitions for example
[14:38] <xnox> smoser: did ^^^^ need manual intervention on cloud-init side to handle?
[14:39] <ogra_> to be consistent you simply need to know exactly what changed in distro plumbing to have it in your code as well
[14:40] <smoser> we were affected by (and had to notice) the sudo group change. it didn't really affect anything though.
[14:40] <smoser> but there is very little *distro plumbing* baked in to the images, and anything that is, is probably specifically done with intent.
[14:40] <smoser> the group thing sucked. but creation of that default user is now moved to cloud-init.
[14:40] <smoser> the images themselves actually have no local user now.
[14:40] <xnox> smoser: none of which is needed nor wanted in the oem-config.
[14:41] <xnox> smoser: ok. that's what I was thinking that if $ adduser is used, no need to hard-code the default groups.....
[14:41] <smoser> xnox, well, its configurable. it has a default list
[14:42] <xnox> and the default groups come from the debootstrap
[14:42] <xnox> hmm..
[14:43] <smoser> i dont really remember. we did (and do) have a hard coded list of groups that the default user is present in. and we were making the 'sudo' group.
[14:43] <smoser> i forget.
[14:47] <xnox> ogra_: maybe there is a point in digging into cloud-init and seeing how much stuff can be refactored.
[14:47] <ogra_> xnox, well, i'm not really after oem-config in cloud setups ... but more after the functionality that ubiquity uses to make use of the udebs
[14:47] <ogra_> i.e. if you look at the user-setup udeb you will find it does a lot more than just calling adduser
[14:48] <ogra_> or rather user-setup-apply which does the actual user creation
[14:52] <pitti> jodh: hey James, how are you?
[14:52] <jodh> pitti: good thanks - you?
[14:52] <pitti> jodh: fine, thanks!
[14:53] <pitti> jodh: you said in bug 1075976 that eatmydata was a red herring, so do you still need the --no-eat option?
[14:53] <ogra_> xnox, wrt default groups ... http://paste.ubuntu.com/1353260/
[14:53] <pitti> jodh: I'll take the typo fixes in any case of course, thanks!
[14:53] <xnox> ogra_: well on cloud, we have: ubuntu:ubuntu with password ubuntu, no auto-login, no encryption, system groups + inject ssh key fingerprint into ~/.ssh/authorized_keys.
[14:53] <jodh> pitti: I think not, so feel free to ignore that MP (sorry - forgot to cancel it)
[14:53] <pitti> jodh: I like "Lauchpach"!
[14:54] <ogra_> xnox, right, does printing work ? :)
[14:54] <jodh> pitti: :)
[14:54] <pitti> jodh: I'll use it for the typos then; thanks!
[14:54] <jodh> pitti: thanks!
[14:54] <xnox> ogra_: the bits about the groups are correct, yes. but creating lpadmin & sambashare is very "desktop" like
[14:54] <xnox> ogra_: and these are done by those packages anyway.
[14:54] <ogra_> xnox, right, but the udeb would inherit that stuff even from debian; you wouldn't have to care if e.g. debian decides to rename lpadmin to lpadmins
[14:54] <xnox> ogra_: so the only useful bit in the whole script (from cloud perspective) is the passwd/user-default-groups
[14:55] <xnox> which, notice, does not create the user-default-groups first (unless I missed something)
[14:55] <ogra_> xnox, i'm talking about maintenance overhead
[14:55] <ogra_> someone would have to know debian renamed the group
[14:55] <ogra_> and would have to change the cloud-init code manually for it
[14:55] <ogra_> instead of just inheriting the change
[14:55] <xnox> the maintenance overhead of keeping the rest of the script working in the cloud image, which doesn't have apt-install and the rest of it,
[14:56] <xnox> is what I think will be more.
[14:56] <ogra_> k
[14:56] <xnox> but we don't know for sure, unless we try =/
[14:57] <cjwatson> Yeah, I think the duplication is justifiable in this case.
[14:57] <cjwatson> It doesn't seem worth all the effort of reengineering that, to me.
[15:21] <toabctl> pitti, can you look at bug 1077938?
[15:23] <pitti> toabctl: deferring to mvo, who is our upgrade master
[15:23] <toabctl> pitti, thx
[15:56] <pitti> bdrung: hey Benjamin, how are you?
[15:56] <pitti> bdrung: I have a question for you in bug 1073390
[16:12] <toabctl> xnox, any news for bug 1037588 ?
[16:14] <xnox> toabctl: no news; it is ok for this to be sponsored in debian-experimental + synced into raring, or uploaded into raring.
[16:15] <xnox> heck, it's one liner. can be done.
[16:18] <toabctl> xnox, seems that the debian maintainer wants to wait until wheezy is released. so we can fix it in ubuntu.
[16:18] <toabctl> right?
[16:19] <OdyX> toabctl: or in experimental
[16:20] <toabctl> OdyX, sure.
[17:01] <bdrung> pitti: ask.
[17:17] <seb128> hum, I managed to screw up my apt/dpkg, dunno how
[17:17] <seb128> $ LC_ALL=C sudo apt-get install linux-firmware
[17:17] <seb128> dpkg: error processing linux-firmware (--configure):
[17:17] <seb128>  package linux-firmware is not ready for configuration
[17:17] <seb128>  cannot configure (current status `half-installed')
[17:17] <seb128>  
[17:17] <seb128> does anyone know how to get out of that state?
[17:18] <xnox> seb128: $ sudo dpkg --configure -a
[17:18] <xnox> ?
[17:18] <seb128> xnox, tried that, no luck
[17:18] <xnox> =(
[17:18] <seb128> that returns without doing anything
[17:18] <xnox> $ sudo apt-get install --reinstall linux-firmware
[17:18] <xnox> ?
[17:18] <cjwatson> find the .deb for linux-firmware, dpkg --unpack it, continue
[17:19] <seb128> xnox, oh, that worked, thanks!
[17:19] <OdyX> (check disk space on the various partitions in between)
[17:19] <seb128> OdyX, did that before ;-)
[17:19] <xnox> seb128: it's a slightly fancy way of doing what cjwatson suggested ;-)
[17:19] <seb128> OdyX, I stopped an upgrade mid-unpacking by closing the wrong window
[17:19] <seb128> xnox, cjwatson: thanks
[17:23] <cjwatson> apw: looks like that linux-ppc upload still fails?
[17:35] <apw> cjwatson, yeah benc is on the case
[17:36] <BenC> cjwatson, apw: Since it only takes me about 30 minutes, I'm doing a full binary-arch build to make sure this one works (my mistake for messing with d-i related things without testing that on the .3 upload)
[17:37]  * cjwatson nods
[17:37] <apw> BenC, thanks
[18:31] <janimo> slangasek, the consensus seems to be we need to prompt the user with the written license statement when installing nexus7 wifi/bt firmware. Is there something better than a debconf note to do that?
[18:57] <shadeslayer> chrisccoulson: got a moment?
[18:58] <shadeslayer> chrisccoulson: I wanted to talk about ubufox
[19:56] <soren> pitti, kees, stgraber, cjwatson, mdz: TB meeting in a couple of minutes, right? Wiki says 2000 UTC.
[19:58] <stgraber> soren: ah yeah, dst change. I'll be there
[20:02] <cjwatson> soren: available whenever anyone else shows up :)
[20:46] <slangasek> janimo: if it's an after-install package install, a debconf note is the right way to do it
[20:47] <slangasek> janimo: not a 'note' actually, but a debconf (boolean) question
[20:53] <kees> anyone know a good javascript CLI like OSX's "jsc"?
[20:55] <highvoltage> something doesn't feel right about that sentence
[20:55] <kees> agreed
[20:56] <mlankhorst> I guess it makes sense that at one point js replaces bash..
[20:56] <kees> I was just watching https://www.destroyallsoftware.com/talks/wat again and wanted to see the javascript bits myself
[20:56] <janimo> kees, nodejs? Not sure what you mean by good though :)
[20:56] <kees> just stuff where I can type   {} + []  and laugh
[20:56] <highvoltage> kees: I guess you've seen http://bellard.org/jslinux/ already :)
[20:57] <mlankhorst> that one was so amusing and scary to run for the first time..
[20:57] <kees> highvoltage: heh yeah
[21:00] <pitti> bdrung: I asked on the bug
[21:00] <pitti> soren: here now
[21:00] <kees> janimo: yeah, nodejs does the trick!
[21:01] <pitti> soren, kees, stgraber, cjwatson, mdz: but it's winter time again, so 2100 UTC?
[21:01] <pitti> I'm still at Taekwondo an hour before
[21:01] <kees> pitti: not sure. I'm fine to move it later
[21:01] <stgraber> 2100 UTC works fine for me too
[21:02] <pitti> perhaps we should define it as 21:00 London time
[21:02] <kees> yay, my day is complete:
[21:02] <kees> > Array(16).join("wat" - 1) + " Batman!"
[21:02] <kees> 'NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN Batman!'
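kees's REPL session as a runnable script. Despite the joke later in the channel, this behaviour is well-defined: it falls out of JavaScript's implicit string-to-number coercion.

```javascript
// "wat" - 1: the subtraction operator coerces "wat" to a number,
// which yields NaN, and NaN propagates through arithmetic.
console.log("wat" - 1); // NaN

// Array(16) creates 16 empty slots; join() inserts the separator
// 15 times between them, stringifying NaN each time.
console.log(Array(16).join("wat" - 1) + " Batman!");
// NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN Batman!
```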
[21:03] <lifeless> kees: the WAT talk ?
[21:03] <lifeless> kees: ah yes, classic
[21:03] <kees> lifeless: yeah, I watch it every few weeks. I got curious about what I could use on Ubuntu that was like "jsc" on OSX
[21:03] <lifeless> kees: question for you - the bug that the libguestfs FAQ points at about -r mode'd kernels
[21:03] <kees> though it looks like v8 is less "fun"^W^Wmore consistent
[21:04] <lifeless> kees: he claims you can read the live kernel, but I wanted to verify - that requires root already, right ?
[21:04] <kees> yup
[21:05] <kees> lifeless: if he's got a way to read kernel addresses on a default ubuntu install NOT as the root user, I would consider it a bug.
[21:05] <lifeless> kees: I wonder if we should mark libguestfs as bad somehow then, since installing it lessens security.
[21:06] <lifeless>  / undoes your hard work
[21:06] <kees> hrm?
[21:07] <slangasek> kees: heh, people were criticizing that talk in CPH on the grounds that the examples don't work in nodejs
[21:07] <kees> slangasek: haha
[21:07] <lifeless> kees: installing it alters the settings you documented, making vmlinuz world readable
[21:07] <kees> slangasek: it's clearly undefined behavior manifesting as humor.
[21:07] <kees> lifeless: O_O
[21:07] <kees> lifeless: bug #?
[21:07] <slangasek> so their defense of the language is that the behavior isn't defined, yes :)
[21:08] <lifeless> kees: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725
[21:09] <lifeless> is the one where your thing is discussed
[21:09] <lifeless> do I need to make a change to libguestfs so that when a sysadmin installs it, it will
[21:09] <lifeless> change the permissions back to 0644 automatically?
[21:09] <lifeless> ^ is the author of libguestfs
[21:09] <lifeless> I am making an assumption that he has followed up and made those changes
[21:10] <kees> lifeless: well, the act of installing libguestfs shouldn't do the dpkg-statoverride changes -- that's up to an admin.
[21:10] <kees> as such, if an admin uninstalls, they should undo that fix.
[21:11] <lifeless> kees: ack
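The mechanism under discussion, sketched for reference (the kernel version in the path is hypothetical): an admin restricts /boot/vmlinuz-* with dpkg-statoverride so upgrades preserve the restrictive mode, and removes the override to undo the hardening.

```shell
# Restrict the kernel image to root; dpkg keeps this mode across
# package upgrades instead of resetting it to the packaged 0644.
sudo dpkg-statoverride --update --add root root 0600 /boot/vmlinuz-3.5.0-17-generic

# Show the overrides currently in effect.
dpkg-statoverride --list

# Undo: drop the override and restore the default mode by hand.
sudo dpkg-statoverride --remove /boot/vmlinuz-3.5.0-17-generic
sudo chmod 0644 /boot/vmlinuz-3.5.0-17-generic
```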
[21:19] <geofft> Is there an Ubuntu analogue of debian-keyring for PGP keys of individuals (for validating source packages)?
[21:23] <lifeless> geofft: ubuntu-keyring ?
[21:40] <geofft> lifeless: ubuntu-keyring looks like the analogue of debian-archive-keyring (PGP keys for signing the archive itself)
[21:40] <geofft> or am I misreading?
[21:41] <ScottK> No.  You aren't.  There's no exact analogue.  You'd have to find the key in Launchpad and see if it's owned by a developer.
[21:55] <BenC> cjohnston, apw: 0.5 ready and uploading
[21:56] <BenC> cjwatson: ^^
[21:56] <BenC> cjohnston: oops
[22:23] <cjohnston> exit
[22:35] <YokoZar> How do I figure out why my package has been sitting in raring-proposed for over a week?
[22:37] <tumbleweed> YokoZar: you look in http://people.canonical.com/~ubuntu-archive/proposed-migration/
[22:39] <infinity> YokoZar: Is this wine, or something else?
[22:49] <YokoZar> infinity: it is indeed wine, I'm not sure why it is claiming unsatisfiable depends there.  In fact I'm having trouble understanding the output of both of the pages tumbleweed linked.
[22:51] <infinity> YokoZar: It has unsatisfiable depends because it depends on things outside its architecture.
[22:51] <infinity> YokoZar: We'll have to hint it in when we're satisfied that it's otherwise sane.
[22:52] <infinity> YokoZar: As discussed elsewhere, we certainly don't want to allow cross-arch depends as the rule, rather than the exception, or it pretty much breaks the world.
[22:52] <infinity> YokoZar: (Since you could be out-of-date on an arch, but still migrate due to the dep being satisfiable from another, etc)
[22:53] <infinity> YokoZar: As it stands, britney has no concept of multiarch (it only examines the packages files for each arch as a self-contained unit), and I'm pretty sure it would be wrong to teach it otherwise.
[22:54] <infinity> YokoZar: Anyhow.  We'll sort out wine, and see if maybe there's some clever way we can make it an exception to the norm without breaking anything or having to do it manually every time.
[22:54] <infinity> YokoZar: (Talk to me about it tomorrow, when it's not a day off for me?)
[23:03] <YokoZar> infinity: That sounds reasonable, thanks.
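A toy illustration of infinity's point (this is not britney's actual code, and the package names are invented): if each architecture's Packages list is checked as a self-contained unit, a dependency satisfied only on another architecture looks unsatisfiable.

```python
# Toy model: per-arch package sets, each mapping a package name
# to the names of its dependencies.
packages = {
    "amd64": {"wine": ["libc6", "libwine"], "libc6": []},
    "i386": {"libwine": ["libc6"], "libc6": []},
}

def unsatisfied(arch, name):
    """Dependencies of `name` absent from the same arch's package set.

    The check never looks across architectures, so wine:amd64
    depending on libwine (built only on i386 in this toy data)
    shows up as unsatisfiable on amd64.
    """
    available = packages[arch]
    return [dep for dep in available[name] if dep not in available]

print(unsatisfied("amd64", "wine"))    # ['libwine']
print(unsatisfied("i386", "libwine"))  # []
```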
[23:36] <bdrung> mdeslaur: libav got a security update, but libav-extra didn't. will it get a security update?