[00:03] <slangasek> pitti, cjwatson: hmm, so I notice that the startup time is the same if I call update-manager with --no-update as without; so whatever it's doing takes a long time, and the question of whether it checks the server for updates seems to have no impact
[03:56] <infinity> jbicha: Please don't arch restrict just to hide build failures.
[03:57] <infinity> jbicha: porters use the FTBFS list as a TODO list, there's no harm in having things not build that need porting/fixing.
[03:57] <infinity> jbicha: (Backing out your openimageio diff)
[03:59] <infinity> jbicha: One of those build failures is a simple qreal porting exercise, the other is just using GCC intrinsics properly, neither one is a fundamental "this can only work on x86" issue (like, say, nvidia binary blobs).
[04:02] <jbicha> infinity: ok...but wouldn't the package be stuck in -proposed?
[04:02] <infinity> jbicha: No.
[04:02] <infinity> jbicha: proposed migration ensures that things are built on the arches where they were previously.
[04:02] <infinity> jbicha: And since this was always FTBFS on arm/powerpc, that'll work fine.
[04:02] <jbicha> infinity: thanks, the proposed migration was the only reason I did that
[04:03] <infinity> jbicha: If you've been doing this sort of thing more elsewhere, can you please undo it?
[04:03] <infinity> jbicha: proposed-migration is the same as Debian testing in this regard.  Testing only cares about build regressions, if something's never built on an arch, it's fine if it continues to not do so.
[04:04] <infinity> (And, conversely, if it HAS built elsewhere before, it's a bug when it stops doing so, and we should generally try to fix it, not ignore it)
[04:45] <micahg> infinity: and livecd-rootfs changes you were planning to make?  if not, I'll go ahead an upload changes for Ubuntu Studio
[04:46] <micahg> *any
[05:00] <pitti> good morning
[05:02] <pitti> slangasek: no reports, but I just tried to check that, and update-manager crashes right away with "ImportError: No module named UpdateManager.UpdateManager"
[05:03]  * pitti reinstalls it, I think I've seen this before when attempting to sponsor a branch for mterry
[05:04] <pitti> ok, that worked
[05:05] <pitti> slangasek: so just calling "update-manager" here takes some 10 seconds, then shows me the updates
[05:05] <pitti> slangasek: (I did run apt-get update just before, though, so updating indexes was fast)
[05:06] <FourDollars> Who can help me to review https://code.launchpad.net/~fourdollars/software-properties/fix-1138121-a-typo-in-CountryInformation.py/+merge/151870 ?
[05:06] <FourDollars> It is just a litte typo.
[05:07] <FourDollars> s/litte/little/
[05:14] <micahg> infinity: nevermind, I'm uploading
[05:32] <jefimenko> does anyone here know how to run the equivalent of debuild using pbuilder-dist? "pbuilder-dist build" requires a .dsc file
[05:33] <lifeless> is there a pdebuild-dist ? There is a pdebuild
[05:34] <slangasek> pitti: interesting
[05:34] <jefimenko> yes, there is pdebuild
[05:34] <jefimenko> and pbuilder-dist
[05:35] <jefimenko> the man page for pbuilder-dist says that the `operation` argument can be any operation that pbuilder supports
[05:36] <jefimenko> one of those operations is debuild, but pbuilder-dist errors out when i try to use it
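[Editor's note: pbuilder-dist has no direct debuild operation; the usual workaround is to generate a source package first and hand the resulting .dsc to pbuilder-dist. A sketch of that workflow — package name, version, and target release are purely illustrative:]

```shell
# Build an unsigned source package from the unpacked tree (name assumed):
cd mypackage-1.0
debuild -S -us -uc
cd ..
# Then build it inside the chroot for the target release:
pbuilder-dist raring build mypackage_1.0-1.dsc
```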
[06:55] <infinity> micahg: Checking bzr for pending changes would have worked, but I don't think anyone has any.
[06:55] <infinity> micahg: Oh, and you committed to bzr anyway.  So, yay.
[07:03] <pitti> yay for apt-get autoremove cleaning up old kernels
[09:50] <mitya57> doko: when was the last time you built those docs successfully?
[09:50] <mitya57> maybe it's related to https://bazaar.launchpad.net/~mitya57/ubuntu/raring/python-docutils/0.10-1ubuntu1/revision/32
[09:51] <doko> mitya57, see the publishing history
[09:53] <mitya57> yes, the previous version was built with old docutils
[09:53] <mitya57> let me look at the docs source
[09:56] <mitya57> looks like it fails to process lines like ".. _getting-started:"
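[Editor's note: for reference, lines of that shape are reStructuredText internal hyperlink targets, which label the section that follows, e.g.:]

```
.. _getting-started:

Getting started
===============
```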
[09:56] <testi> Will applications compiled for Mir run natively, without a compatibility layer (protocol translation, additional context switches), on Wayland compositors? Does that apply in the other direction too? By application I mean anything not deeply integrated with the system, especially games, because under no circumstances do I want Mir to introduce any delay (context switch, protocol translation) only because some game developer has chosen Mir as ta
[09:57] <testi> Is Mir capable of reliably bypassing offscreen rendering for fullscreen windows? (also in order to reduce delays)?
[09:58] <Laney> ubuntu
[09:58] <Laney> err, #ubuntu-mir is where you'll get proper answers to mir questions
[09:58] <testi> okay, thanks
[10:03] <xnox> slangasek: pitti: i have been noticing significant cpu usage from update-manager with top when my system is in swapdeath / under heavy load.
[10:06] <Adri2000> it seems that pkgbinarymangler removes upstream changelog without removing the associated symlink if there is one (created by dh_installchangelogs -k). that leaves a broken symlink in the package. bug, right?
[10:06] <seb128> Adri2000, does the package depend on a binary that provides the symlink target?
[10:07] <Adri2000> seb128: no
[10:08] <Adri2000> it's all in the same package
[10:08] <seb128> k, dunno then
[10:20] <infinity> Adri2000: Sounds like a bug to me.
[10:20] <infinity> Adri2000: Assuming the behaviour actually matches what you described.
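[Editor's note: the reported behaviour is easy to reproduce outside pkgbinarymangler; the file names below are invented, but the mechanics match Adri2000's description (dh_installchangelogs -k leaves a symlink behind, and deleting only its target leaves it dangling):]

```shell
# Recreate a doc directory layout (names hypothetical):
mkdir -p usr/share/doc/pkg
touch usr/share/doc/pkg/ChangeLog
ln -sf ChangeLog usr/share/doc/pkg/changelog  # what dh_installchangelogs -k leaves
rm -f usr/share/doc/pkg/ChangeLog             # what pkgbinarymangler does
# The symlink still exists (-L) but no longer resolves (-e): it dangles.
[ -L usr/share/doc/pkg/changelog ] && [ ! -e usr/share/doc/pkg/changelog ] && echo dangling
```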
[10:40] <mitya57> doko: lp:~mitya57/ubuntu/raring/python-docutils/disable-references-patch disables the patch, and I've reported a bug
[10:41] <mitya57> https://sourceforge.net/tracker/?func=detail&aid=3607029&group_id=38414&atid=422030
[10:41] <doko> mitya57, ohh, I had already reported the bug, see the email
[10:41] <mitya57> doko: commented there as well, please close it (I'm not able to do that)
[10:42] <mitya57> it's not a Sphinx issue
[10:42] <doko> ahh, ok
[10:43] <mitya57> btw it's the kind of issue that won't happen once britney test-builds all rbuilddepends before copying to -release
[13:17] <zyga> hey
[13:18] <zyga> I have a Lenovo G580 laptop, it just stopped booting current raring (it hangs on boot), one thing it does display is "mount: unable to allocate memory" for "/sys/firmware/efi/efivars"
[13:18]  * zyga hopes that's not laptop bricking on some EFI bug
[13:18] <hyperair> nah, if it bricked all you would see is a black screen
[13:19] <hyperair> with no text
[13:19] <hyperair> no boot logo
[13:19] <hyperair> nothing
[13:19] <zyga> yeah, it's not dead-dead
[13:19] <hyperair> then you have hope
[13:20] <hyperair> in any case, i think only samsung has a history of making chips that are brickable from supposedly valid commands
[13:20] <zyga> it seems to be a regression
[13:20] <hyperair> hmm i think there was an e1000e issue once as well
[13:20] <zyga> I just booted an earlier kernel
[13:20] <hyperair> ah
[13:20] <zyga> let me see if it really works
[13:20] <hyperair> there you go
[13:20] <zyga> yup
[13:20] <zyga> desktop
[13:21] <zyga> let's see the next kernel
[13:21] <zyga> so it _works_ on 3.8.0-5
[13:21] <hyperair> file a bug
[13:21] <hyperair> and head over to #ubuntu-kernel
[13:21] <zyga> yeah
[13:21] <zyga> checking next kernel
[13:32] <Hobbsee> So long, and thanks for all the fish, guys & girls!
[13:41] <slangasek> xnox: it's not swapdeath / heavy load for me, except for update-manager itself using the CPU
[13:45] <hyperair> Hobbsee: you make it sound like you're leaving the ubuntu project.
[13:45] <Hobbsee> hyperair: I am
[13:45] <Hobbsee> Still running Ubuntu on a few things though
[13:45] <hyperair> wat
[13:45] <hyperair> whyyyy
[13:49] <Hobbsee> hyperair: It was summed up pretty well in http://doctormo.org/2013/03/06/ubuntu-membership-2/
[13:51] <hyperair> hmm =\
[13:51] <hyperair> that's a pity
[13:51] <Hobbsee> Indeed
[13:54] <jcastro> Hobbsee: sorry you feel that way!
[13:58] <ScottK> So long Hobbsee.  I can't say I blame you.  What have you switched to?
[14:03]  * ScottK doesn't even know what "Technical Architect (Client)" means.
[14:05] <mlankhorst> presumably the client is a team
[14:06] <ScottK> No idea.
[14:06] <ScottK> There was a "Client" track for UDS, so I assume it's somehow related.
[14:08] <Hobbsee> ScottK: I haven't switched my work machine  & laptop to anything else yet.  We'll see, on that front.  As for the desktop, it's using windows for gaming
[14:08] <ScottK> Hobbsee: OK.  Maybe we'll see you in Debian then.
[14:08] <Hobbsee> ScottK: possibly.  You never know :)
[14:29] <davmor2> guys, I did an install from this morning's cd; I open the dash, the cursor in the search bar is turning, I see the home lens and that's it
[14:50] <pitti> davmor2: dash search is quite broken right now, didrocks says it's being fixed
[14:51] <pitti> hm, yours sounds different, though
[14:51] <didrocks> davmor2: pitti: part of the batch of fixes, the lenses are not recommended by default
[14:51] <davmor2> pitti: sorry I thought I was on the ubuntu-unity channel when I typed this I'm in there now looking through it
[14:52] <pitti> davmor2: so you hit the right channel after all :)
[14:52] <didrocks> so if you removed the lenses, yeah, you won't have them :)
[14:52] <didrocks> (fix is still building)
[14:53] <davmor2> didrocks: this is a fresh install
[14:53] <didrocks> davmor2: makes even more sense if you took today's daily :)
[14:53] <didrocks> yeah, lenses are not installed by default, I missed a build-dep when moving the lens recommendation in a perl script parsing a json file
[14:53] <davmor2> didrocks: Yeap it's an up-to-date iso
[14:54] <didrocks> and so the Recommends: stanza is empty
[14:54] <davmor2> didrocks: this is what I see http://ubuntuone.com/4VY5XIUSXNWvpTzctl3hS9
[14:54] <didrocks> wait for next unity
[14:54] <didrocks> it's fixing it
[15:03] <micahg> infinity: I checked the bzr branch, I was looking for conservation of uploads :)
[15:04] <infinity> micahg: Ahh, the 738th law of thermodynamics.
[15:53] <jcastro> plenty of room for lightning talks
[15:53] <jcastro> don't make me start assigning people!
[15:57] <ev> my debian/control fu is rusty. Is there a way to specify a dependency equal to just the major version component?
[15:58] <pitti> ev: you can certainly do things like "firefox (>= 3)", it doesn't matter how much prefix you use
[15:58] <pitti> jcastro: one/two-minute LTs ok as well, I suppose?
[15:58] <ev> pitti: I thought that might be the case, but dpkg --compare-versions behaved differently so I wasn't certain
[15:59] <ev> pitti: thanks!
[15:59] <pitti> ev: orly?
[15:59] <pitti> people do that all the time with e. g. debhelper (>= 9)
[15:59] <ev> oh, does it fall over with strictly equals?
[15:59] <jcastro> pitti: sure, I'm mostly just interested in showing off cool things, so if it's less than 5 that's totally ok
[15:59] <ev> for example:
[16:00] <ev> dpkg --compare-versions 1.0.17 '=' '1.0'; echo $?
[16:00] <ev> 1
[16:00] <ev> that would make sense, but then how would I express 1.0.x series is fine, 2.0.x wont work
[16:01] <OdyX> conflicts >= 2~ ?
[16:01] <pitti> right, or depends <= should probably also work
[16:02] <slangasek> ev: you'd do it as depends: foo (>= 3), foo (<< 4)
[16:02] <ev> slangasek: ah, of course! Thanks muchly!
[16:06] <cjwatson> ev: Depending on what you're doing, the ${source:Upstream-Version} substvar may be helpful too (deb-substvar(5))
[16:06] <cjwatson> Sorry, deb-substvars(5)
[16:06] <cjwatson> Only if the dependency is on something else within your own source
[16:06] <ev> separate package (separately built library depending on the 1.0.x series of Cassandra)
[16:06] <ev> yeah
[16:06] <cjwatson> OK
[16:07] <ev> thanks though!
[16:07] <slangasek> :)
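[Editor's note: the thread above condenses into a short shell check. Package and version numbers here are illustrative stand-ins for ev's real case (a library depending on the 1.0.x series of Cassandra):]

```shell
# '=' is exact version equality for dpkg, so a prefix does not match:
dpkg --compare-versions 1.0.17 '=' '1.0' && echo equal || echo "not equal"
# "any 1.0.x, but not 2.0.x" is expressed as a range, per slangasek's pattern:
#   Depends: cassandra (>= 1.0), cassandra (<< 2~)
dpkg --compare-versions 1.0.17 '>=' '1.0' \
  && dpkg --compare-versions 1.0.17 '<<' '2~' \
  && echo "1.0.17 satisfies the range"
```

The `2~` upper bound (as OdyX suggested) sorts below any real 2.x version, so 2.0 and 2.0~rc1 alike fall outside the range.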
[16:15] <zul> mterry:  ping can we get python-json-patch MIRed as well please
[16:16] <mterry> zul, I didn't notice that MIR
[16:16] <zul> mterry: i thought it subscribed ubuntu-mir
[16:16] <mterry> zul, doesn't look like it.  Looking now anyway.  But please sub them
[16:16] <zul> mterry:  just did
[16:21] <mterry> zul, I talked to the Debian maintainer of the json-pointer package, and he said we probably shouldn't be dropping the openstack-pkg-tools
[16:22] <mterry> zul, I opened a MIR for it already (it's super tiny, shouldn't be a problem)
[16:22] <mterry> zul, if we can promote that, we can go back to sync for json-pointer at least
[16:22] <zul> mterry:  i disagree there is really no reason for that build-depend
[16:23] <mterry> zul, it only provides some maintainer-oriented functionality now.  But he says that may change in future.  Plus, it forces us to keep a delta.  It's not worth it when we can just MIR it easily.  Is there a reason to actively pursue dropping it?
[16:25] <zul> mterry:  it doesnt add any value at all and not worth it
[16:26] <mterry> zul, "worth it"?  What is it costing us?
[16:26] <mterry> Debian packages do all sorts of things that we aren't directly interested in
[16:27] <zul> mterry:  its a superfluous dependency and a bad idea imho
[16:30] <mterry> zul, I don't mind a superfluous dependency as long as it is tiny and build-time like this one.  I do mind deltas that don't serve us much purpose.  So can you expand on the "bad idea" comment?  What active harm is the build-dep doing?
[16:33] <zul> mterry:  its not doing any harm i just dont think its a good idea because the maintenance stuff that is intended for openstack-pkg-tools is not really used in the python-json-patch package or anywhere else
[16:34] <mterry> zul, I agree it's not actively helping.  But as long as it's not actively hurting, I'd rather avoid the delta
[16:35] <zul> mterry:  fine
[16:35] <mterry> zul, I already filed a MIR for it and assigned to didrocks
[16:35] <zul> mterry:  k
[16:35] <didrocks> yep
[17:45] <dobey> hrmm. is there a good overview of how autopkgtests work in practice? i have the spec document open, but it doesn't really say anything about how test runs get triggered
[17:47] <mitya57> dobey: http://developer.ubuntu.com/packaging/html/auto-pkg-test.html#executing-the-test
[17:49] <dobey> ah, thanks
[17:53] <dobey> hrmm
[17:54] <dobey> "…or [when] any of their reverse-dependencies change." <- this is for example the output of apt-cache rdepends $package? or if any of the dependencies listed in Depends: in tests/control change?
[17:54] <jtaylor> the ones in tests/control
[17:54] <jtaylor> but it doesn't work
[17:54] <jtaylor> (in ubuntu adt jenkins)
[17:55] <dobey> oh; so tests only get run when the package is uploaded, at the moment?
[17:56] <jtaylor> it probably depends, some packages do rebuild, some don't
[17:56] <jtaylor> (its a bug)
[17:58] <mitya57> jtaylor: by the way, any news about scipy tests failing?
[17:58] <jtaylor> mitya57: the adt tests?
[17:58] <jtaylor> the atlas one looks ugly
[17:58] <mitya57> jtaylor: yes, maybe disable it?
[17:59] <jtaylor> the other one is due to ubuntu compressing png's
[17:59] <jtaylor> I filed a bug upstream for that
[17:59] <jtaylor> still need to look at the atlas failure, that will be fun
[17:59] <mitya57> but why only on amd64?
[18:00] <mitya57> hm, pyxdg is also failing...
[18:00] <Laney> fixed that one
[18:00] <Laney> uploading in 2 mins
[18:00] <jtaylor> mitya57: which one is amd64 only?
[18:00] <jtaylor> atlas fails on i386, and that is not unusual for rounding issues
[18:01] <jtaylor> i386 is horrible concerning that
[18:01] <mitya57> jtaylor: my mistake, numpy was failing only on amd64, scipy fails on i386 as well
[18:02] <jtaylor> mitya57: numpy is fixed
[18:02] <dobey> pyxdg needs to get replaced (removed)
[18:02] <jtaylor> mitya57: it was just not rebuilt due to the bug I mentioned earlier
[18:02] <mitya57> dobey: ???
[18:02] <dobey> jtaylor: is there any way to say "only run the tests when the deps change, not when it uploads" ?
[18:02] <jtaylor> dobey: probably not, what would be the use case?
[18:02] <dobey> mitya57: pyxdg is unmaintained. apps need to move off of it, really
[18:03] <dobey> jtaylor: well, i don't really want to just run the same tests twice when i upload something (once in the normal source build, and then again in the autopkgtests). seems like a waste of time
[18:03] <mitya57> dobey: it's now well maintained by Thomas Kluyver — who is also upstream developer
[18:04] <Laney> I have fixed it
[18:04] <jtaylor> dobey: true, but you can do in adt tests what you can't do during the build
[18:05] <jtaylor> dobey: e.g. scipy and numpy, it tests blas, atlas and openblas
[18:05] <jtaylor> dobey: impossible during the build
[18:05] <dobey> jtaylor: what do you mean?
[18:05] <mitya57> Laney: pyxdg? thanks! Will you commit it to Debian or should I do that? :)
[18:05] <Laney> if you can, feel free
[18:06] <Laney> it's an upstream cherry-pick
[18:06] <dobey> jtaylor: i don't quite understand that
[18:06] <jtaylor> dobey: numpy and scipy can have their blas provider replaced underneath them, during the build I can't install new packages, in adt tests I can (via test dependencies)
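[Editor's note: the blas-swapping setup jtaylor describes is driven by test dependencies in debian/tests/control. A hypothetical stanza — package and test names are assumptions, not the real scipy one — might read:]

```
Tests: unittests-openblas
Depends: python-scipy, libopenblas-base
Restrictions: allow-stderr
```

Because this Depends field is independent of the binary package's own Depends, the test run can pull in a different blas provider, including one from universe, which is impossible during the package build itself.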
[18:07] <Laney> mitya57: if you take the other patch too then we could sync; shouldn't be harmful on Debian but it's not entirely applicable there either
[18:07] <Laney> up to you
[18:07] <mitya57> Laney: I don't yet see the new upload, will look in a couple of minutes (and I can, yes)
[18:07] <Laney> it's not done yet, that's why ;-)
[18:07]  * Laney was test building
[18:08] <dobey> jtaylor: that's a special case though it sounds like. most code probably isn't like that? i mean, unless i can depend on packages from universe in the autopkgtests for a package in main?
[18:08] <cjwatson> You can
[18:08] <jtaylor> dobey: you can do that
[18:09] <jtaylor> dobey: you also test that the binary package is actually usable in adt tests
[18:09] <dobey> oh, then that might be nominally useful for me then
[18:09] <jtaylor> dobey: during the build you have everything installed as upstream intended and tests that, binary packages may make mistakes in splitting stuff up
[18:09] <Laney> mitya57: alright, there we go - perhaps wait and see if it passes in jenkins but feel free to upload at your leisure (or get tumbleweed to do it for you :P)
[18:09] <Laney> have to go out now - see you later
[18:09] <tumbleweed> what am I uploading?
[18:09] <jtaylor> dobey: e.g. gevent, their dbg package is broken, during the build you won't see that, but in adt tests you do (seen in pyzmq)
[18:09] <dobey> jtaylor: well, "works as intended" with ubuntuone is probably not all that testable in adt either though :)
[18:11] <dobey> most of the stuff we're already testing in the unit tests anyway, and not much more testing can really be done without actually talking to the server
[18:11] <jtaylor> another case is ipython which tests stuff only if a mongodb service is running, I can't do that during a build, but I can in adt
[18:11] <jtaylor> but I don't because mongodb is broken in chroots ._.
[18:11] <mitya57> Laney: I'll look tomorrow then
[18:11] <dobey> but for some of the stuff where we use pyqt, we need a package that's in universe to run the tests, so we aren't running all the tests in the package build
[18:12] <mitya57> tumbleweed: Laney was suggesting to drop ubuntu pyxdg delta by committing it to DPMT
[18:12] <tumbleweed> ah
[18:13] <tumbleweed> mitya57: it's team maintained, that's a reasonable approach
[18:13] <dobey> is there any way to get autopkgtests to work for PPAs as well?
[18:28] <mitya57> dobey: https://bazaar.launchpad.net/~auto-package-testing-dev/auto-package-testing/trunk/view/head:/doc/USAGE.md#L54
[18:29] <dobey> mitya57: but there's no infrastructure already set up to do this automatically? i'd have to set up my own jenkins jobs somewhere doing that?
[18:30] <mitya57> dobey: that's a question to pitti or jibel
[18:32] <pitti> dobey: technically we can do it, it's just a resource issue
[18:32] <pitti> dobey: if you mail jibel and toss him a pointer to a PPA and some package names, he can set it up
[18:32] <pitti> dobey: we already do this for e. g. chrisccoulson's firefox PPA and seb128's gtk
[18:33] <chrisccoulson> we even get proper test results: https://jenkins.qa.ubuntu.com/job/raring-ppa-adt-ubuntu_mozilla_daily_ppa-firefox-trunk/69/#showFailuresLink :)
[18:34]  * chrisccoulson must fix the failures
[18:34] <jtaylor> :O
[18:34] <jtaylor> how do you get the test results into jenkins?
[18:34] <pitti> dobey: e. g. https://jenkins.qa.ubuntu.com/view/Raring/view/All/job/raring-ppa-adt-ubuntu_mozilla_daily_ppa-firefox-trunk/
[18:34] <dobey> pitti: it would be fine if i had to set it up on a separate jenkins as well (we already have jenkins set up for u1 stuff for testing on windows and landing branches and such, so not a big issue). just wanted to know what's what :)
[18:35] <dobey> chrisccoulson: jenkins gives you a stormy cloud
[18:36] <jtaylor> it would be nice when one could see the configurations of the jobs
[18:36] <chrisccoulson> dobey, ah, i wanted more than a stormy cloud and a 100MB text file though
[18:37] <dobey> heh
[18:37] <dobey> oh, that jenkins doesn't use sso
[19:04] <pitti> dobey: jenkins.q.u.c. is just a r/o mirror
[19:04] <pitti> dobey: the real one is behind a VPN in Lex, so you won't actually see execution nodes, login, etc.
[19:10] <dobey> ah right
[19:22] <lifeless> cr3: ping
[19:37] <dobey> still, it would be nice to be able to disable running the autopkgtests when their running would be the same as the tests run during the build, except for when any of the dependencies changed
[19:41] <cjwatson> my concern there would be that there would be no baseline for when they're rerun when deps change
[19:42] <cjwatson> it's quite possible for the autopkgtest setup files to be wrong even if the unit tests themselves pass
[19:48] <pitti> slangasek: do you have your systemd changes against 44-10 in some broken-out form like bzr commits, or do I just look at the debdiff?
[19:49] <dobey> i suppose that's true. i'm just looking to optimize out the bits where it would be indistinguishable from running the tests in the builds, to avoid wasting resources
[19:50] <dobey> i guess it won't be too big an issue though
[19:53] <slangasek> pitti: umm I have them in a git tree here which I meant to push somewhere
[19:54] <slangasek> pitti: speaking of, what's the right way for me to submit my changes to systemd upstream for enabling a Debian backend on timedated?
[19:54] <pitti> slangasek: or just format-patch origin.. perhaps, then we can directly forward/apply them?
[19:55] <pitti> slangasek: that's a good question actually; back then I used the debian git, but that still has 44
[19:55] <pitti> I didn't find a git tree from which mbiebl built his version 195 packages
[19:55] <slangasek> pitti: I meant for forwarding to systemd upstream rather than Debian
[19:55] <pitti> oh
[19:55] <slangasek> well, the 195 packages also aren't published in Debian
[19:55] <pitti> slangasek: http://lists.freedesktop.org/mailman/listinfo/systemd-devel/
[19:56] <pitti> slangasek: most patches go there, and it's the fastest way to get them reviewed
[19:56] <pitti> slangasek: I'm on the list, so if someone acks patches I can push them, too
[19:56] <slangasek> pitti: great, thanks
[19:57] <pitti> slangasek: I guess Lennart is fine with me pushing Debianisms :)
[20:02] <pitti> good night everyone
[20:18] <ogra_> https://plus.google.com/hangouts/_/914b5784e52c5967784eae44e4b138a346b1ff90?authuser=0 post UDS beer hangout
[20:27] <chiluk> stgraber, in reference to http://pad.lv/1057358 .... sorry about that..I do have a question about it though.
[20:29] <mbiebl> pitti: http://people.debian.org/~biebl/systemd-198/
[20:29] <mbiebl> that's not a real git repo though, just some bits I'm currently experimenting with
[20:29] <mbiebl> which isn't sufficient to boot systemd yet
[20:29] <chiluk> stgraber, the bazaar branch for precise available at lp:ubuntu/precise/isc-dhcp is stuck at 4.1.ESV-R4-0ubuntu5   Is there a newer place for the precise branch?
[20:30] <stgraber> chiluk: no, you need to pull the current source from LP outside of bzr branches
[20:30] <chiluk> I'd prefer not to use the patching system if I could instead just use a bazaar branch /
[20:30] <chiluk> stgraber where?
[20:30] <chiluk> I ended up using pull-lp-source for the latest debdiff I created.
[20:30] <stgraber> chiluk: pull-lp-source isc-dhcp precise-updates
[20:31] <stgraber> right, that's how you have to do it for SRU for isc-dhcp because the UDD branches are busted
[20:32] <chiluk> I'm still not sure where the logic is that blew away my patch...
[20:32] <chiluk> but moving the patch above the comment works...
[20:33] <chiluk> stgraber, anyhow sorry.. do I need to fill out another SRU in 1057358?
[20:34] <mdeslaur> chiluk: if happens at least twice to every person who touches the isc-dhcp package :P
[20:34] <mdeslaur> s/if/it/
[20:34] <chiluk> hah...
[20:35] <stgraber> chiluk: nope, we can use the same bug, just attach an updated patch to it
[20:35] <stgraber> mdeslaur: we finally fixed that with 12.10 though!!!
[20:35] <mdeslaur> stgraber: yes, thanks again :)
[20:35] <chiluk> I was going to just patch apparmor-profile.dhcpd , but I wanted to be fancy and use the darn patching system
[20:35] <stgraber> mdeslaur: though we also broke LDAP support in the process and didn't notice until a few weeks ago ;) I pushed an SRU last week that turns on LDAP support in the ldap packages
[20:35] <stgraber> mdeslaur: because the debian/rules magic dual-build stuff was completely broken and the ldap binary was overwritten by the standard binary
[20:35] <mdeslaur> stgraber: whoops :) although, meh, maybe it's an acceptable compromise :)
[20:36] <cr3> lifeless: pong, what's up?
[20:36] <chiluk> stgraber so you want a modified patch from 5.6 or a patch fixing 5.7?
[20:36] <stgraber> chiluk: from 5.6 would be easier to review
[20:36] <lifeless> cr3: subunit v2
[20:36] <lifeless> cr3: have you seen my blog posts ?
[20:36] <chiluk> alright.
[20:37] <cr3> lifeless: dude, what a coincidence! I just got out of a meeting where I mentioned subunit and testmanager too!
[20:38] <cr3> lifeless: I haven't seen your blog posts, link? I'll forward to a few colleagues
[20:40] <lifeless> cr3: rbtcollins.wordpress.com
[20:41] <stgraber> chiluk: saw the new patch. Thanks, I'll try to review and bundle with other fixes when I have a sec.
[20:41] <chiluk> that's the fix against 5.7
[20:41] <cr3> lifeless: any estimate on when you expect v2 to be finalized?
[20:42] <chiluk> stgraber do you still want me to create a new debdiff against 5.6?
[20:43] <stgraber> chiluk: it's a simple enough patch that it doesn't really matter. 5.6 would be easier but 5.7 will just take me an extra minute or so
[20:43] <chiluk> stgraber thanks... I'm still new to how all the patching systems work in Ubuntu..
[20:44] <lifeless> cr3: soon I hope, the more feedback I get the better :)
[20:44] <lifeless> cr3: I'd love to be able to replace your custom protocol with v2 ;)
[20:46] <cr3> lifeless: I haven't touched checkbox in a while, zyga or roadmr should be made aware of this ^^^
[20:47] <lifeless> cr3: ah!
[20:48] <lifeless> cr3: well, care to point them at it, or mail me their contact details and I'll mail them a tl;dr summary?
[20:49] <cr3> lifeless: I was thinking of dropping them a quick email about it, I can cc you too
[20:49] <lifeless> please
[21:30] <zyga> lifeless: hey
[21:30] <lifeless> hi :)
[21:30] <zyga> lifeless: subunit you say? I read about your v2 work
[21:30] <lifeless> zyga: ah cool
[21:31] <lifeless> so yeah, I know checkbox has a reporting format, and I'd like to be sure that subunit v2 is at least an in-principle suitable candidate for you
[21:31] <zyga> lifeless: as for checkbox, we're not using it actively
[21:31] <zyga> lifeless: so we have a rewrite going on
[21:32] <zyga> lifeless: the core is mostly rewritten now
[21:32] <zyga> lifeless: we have a concept of exporters where all the test data can go to
[21:32] <zyga> lifeless: we have json, (now removed) yaml, rfc822 and custom certification xml outputs
[21:32] <zyga> lifeless: and a plain-text human readable output
[21:33] <zyga> lifeless: who would be a consumer for subunit exporter?
[21:33] <lifeless> zyga: your server ? Testrepository? Anything doing data mining?
[21:34] <zyga> we don't have a server, we only send data to certification rewrite that only eats the xml I've mentioned
[21:34] <zyga> I don't mind having that exporter but I don't know if it's applicable - the exporter is given a session state object that has all of the state, all the tests that went by, all the output, all the user feedback, everything
[21:35] <zyga> then it has to produce some text to a stream
[21:35] <zyga> there's a sub-layer there that can select a subset of data
[21:35] <zyga> and we actually do that, also transforming from the object graph to something that can be easily json.dump()'ed
[21:35] <lifeless> well
[21:36] <zyga> which is also what most derivative exporters consume to be customizable
[21:36] <lifeless> so the idea of subunit is to avoid the buffering issues that e.g. xml has
[21:36] <lifeless> and support concurrent tests
[21:36] <zyga> we already buffer everything
[21:36] <zyga> we don't do any concurrent testing
[21:36]  * zyga sounds negative but I don't see how we could take advantage of that
[21:36] <lifeless> fair enough
[21:36] <zyga> we buffer and save to disk because jobs can crash machines (and do)
[21:37] <zyga> so we took a painful careful road to save stuff sanely
[21:37] <lifeless> right - thats what subunit is meant to tackle
[21:37] <zyga> so we have everything stored on disk anyway
[21:37] <lifeless> distributed lossy testing - just emit the events as they happen, direct onto the network.
[21:37] <zyga> lifeless: it would not store everything the way we need I suspect, our resume logic is not a serialization problem
[21:37] <lifeless> if the machine crashes you know it did because you only see the test start event not the finish
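[Editor's note: that start-without-finish rule is easy to check mechanically. A toy sketch — the event log format here is invented for illustration, not real subunit wire format:]

```shell
# One line per event; t2's machine died before emitting a finish event:
printf 'start t1\nfinish t1\nstart t2\n' > events.log
grep '^start'  events.log | cut -d' ' -f2 | sort > started
grep '^finish' events.log | cut -d' ' -f2 | sort > finished
# Tests that started but never finished, i.e. likely crashes:
comm -23 started finished
# prints: t2
```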
[21:37] <lifeless> ok
[21:38] <zyga> do you have docs on your v2 work?
[21:38] <zyga> (I only read the blog headline)
[21:38] <lifeless> I do, I will dig up in a bit, OTP just now.
[21:38] <zyga> k
[21:39] <zyga> lifeless: I'll look at them but frankly, it would probably require us to rearchitect the core a little, to put subunit storage at the center of our state holding
[21:39] <zyga> lifeless: and I don't think there's anything we gain, apart from a dependency and code sharing
[21:39] <zyga> lifeless: and correct me if I'm wrong but isn't subunit just equivalent to protocol buffers, json, yaml *records* being written somewhere?
[21:40] <lifeless> its the semantic rules that matter - the serialisation isn't interesting
[21:40] <lifeless> http://rbtcollins.wordpress.com/2013/02/14/time-to-revise-the-subunit-protocol/ is the first blog post
[21:41]  * zyga got a message from cr3 about subunit 2
[21:41] <zyga> lifeless: some parts of subunit seem like our io_log
[21:41] <lifeless> https://github.com/rbtcollins/testtools/blob/streamresult/doc/for-framework-folk.rst#extensions-to-testresult is framework author docs around the API you get
[21:41] <zyga> lifeless: which I fully agree we have a shit implementation of, but that's fine for now
[21:42] <zyga> lifeless: any chance for testtools.rtfd.org?
[21:42] <zyga> lifeless: works on kindle :) (and everything else)
[21:42] <zyga> ah
[21:42] <zyga> nice
[21:43] <zyga> reading
[21:43] <lifeless> http://bazaar.launchpad.net/~lifeless/subunit/streamresult/view/head:/README#L148 is the subunit *wire level* README
[21:43] <lifeless> parser/serialiser http://bazaar.launchpad.net/~lifeless/subunit/streamresult/view/head:/python/subunit/v2.py
[21:45] <zyga> lifeless: how are you using that?
[21:45] <bryce> hey, anyone know if there is an official preferred C++ lib for JSON parsing/writing for Qt/QML devel?
[21:45] <zyga> lifeless: I need to break for some real-life activities
[21:46] <zyga> lifeless: I'll look at that and ping you tomorrow
[21:46] <lifeless> zyga: ok, ping me whenever
[21:46] <zyga> lifeless: thanks
[21:46] <lifeless> Not sure what you mean by 'how are you using' - do you mean you want to see the CLI entry points for e.g. subunit.run or subunit-filter ?
[21:56] <RAOF> bryce: I'm somewhat surprised there's not one in Qt?
[22:02] <bryce> RAOF, there is qjson.  Would that be considered the official way to go?
[22:03] <bryce> there's a bunch of more general purpose options on C++, some of which seem pretty popular.  jansson, jsoncpp, rapidjson, et al.  Just wondering if we have a standard, or if I should just choose randomly.  :-)
[22:04] <RAOF> I'm not aware of a standard, but that's not terribly good evidence that there isn't one :)
[22:05] <bryce> hrm
[22:05] <dobey> RAOF, bryce: there's one in qt5, but it's a separate lib with qt4. it's probably what you want to use to do json in a c++ app using qt
[22:06] <dobey> i think a couple of the other people on online services that are working on a qt/c++ thing are using it for the json parsing
[22:06] <bryce> alright, guess I'll play it by ear see what Qt does on its own with it
[23:18] <xnox> jdstrand: so slangasek passed a missed ping to me. When searching for consolekit usage, it's best to search for org.freedesktop.ConsoleKit, as one needs to use this "well-known" name verbatim in the code.
[23:18] <xnox> regardless of which language is used to talk over dbus.
[23:19] <xnox> so this string is present in python / c / C++ / some config files etc.
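[Editor's note: because the well-known bus name must appear verbatim in every client regardless of language, a plain text search finds all consumers. A toy demonstration — the file names and contents are invented:]

```shell
# Two clients in different languages, both embedding the name verbatim:
printf 'bus = "org.freedesktop.ConsoleKit"\n'              > client.py
printf 'const char *bus = "org.freedesktop.ConsoleKit";\n' > client.c
# One grep finds every consumer, whatever the language:
grep -l 'org.freedesktop.ConsoleKit' client.py client.c
```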