[00:07] <plars> looks like devices are back up now, restarting everything on the latest build
[03:04] <plars> 27 looks to have some bad problems
[04:47] <rsalveti> plars: still around?
[04:47] <plars> rsalveti: yep
[04:48] <rsalveti> plars: just replied to your email, can you get your /proc/last_kmsg?
[04:48] <rsalveti> after a crash/reboot
[04:48] <plars> rsalveti: yes, looking now
[04:48] <plars> I always forget about that!
[04:48] <rsalveti> yeah, it's quite useful
[04:49] <plars> rsalveti: http://paste.ubuntu.com/6451729/
[04:49] <rsalveti> [20678.650539] mdm_power_down_common: MDM2AP_STATUS never went low. Doing a hard reset
[04:50] <plars> wlan stuff it seems
[04:54] <rsalveti> https://android.googlesource.com/kernel/msm/+/3ab322a9e0a419e7f378770c9edebca17821bf6e/arch/arm/mach-msm/mdm2.c
[04:54] <rsalveti> plars: modem driver
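The triage above (pull /proc/last_kmsg after the reboot, then scan it for suspicious driver messages) can be sketched in a few lines; the keyword pattern and helper name here are illustrative, not part of any tool mentioned in the channel:

```python
import re

# Heuristic filter for crash-related lines in a saved /proc/last_kmsg dump.
# The keywords are a guess at what matters here (modem + wifi), not exhaustive.
SUSPECT = re.compile(r'oops|panic|mdm|wlan', re.IGNORECASE)

def suspect_lines(kmsg_text):
    """Return the lines of a last_kmsg dump that look crash-related."""
    return [line for line in kmsg_text.splitlines() if SUSPECT.search(line)]

# The modem line quoted from the mako's paste matches:
sample = ("[20678.650539] mdm_power_down_common: "
          "MDM2AP_STATUS never went low. Doing a hard reset")
print(suspect_lines(sample) == [sample])  # True
```

On the device itself the dump would be fetched first, e.g. with `adb shell cat /proc/last_kmsg`.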
[04:55] <plars> http://forum.xda-developers.com/showthread.php?p=44918540
[04:56] <rsalveti> now why the hell it's powering it down
[04:57] <rsalveti> interesting, but the fix in there is not related to the kernel
[04:58] <plars> yeah, doesn't look plausible
[04:58] <rsalveti> last time we had a kernel upload was at 10-18
[04:58] <rsalveti> so wonder how this could be specific to 27
[04:59] <rsalveti> do you know if that's the same error you're getting with the devices from the lab?
[05:00] <rsalveti> you also got a few errors in the wlan driver
[05:00] <rsalveti> wonder if the modem shutdown is just a consequence of another failure
[05:04] <rsalveti> we can investigate a bit more tomorrow, but let me know if you're getting different failures in the lab
[05:04] <rsalveti> later!
[05:13] <plars> rsalveti: same error in the lab
[05:22] <Mirv> morning
[09:23]  * didrocks uses this time for exercising (missed yesterday and tuesday: 3 days with sitting for more than 14 hours isn't good for your health)
[09:23] <didrocks> will make a longer tour then, 1h30
[09:23] <didrocks> asac: just in time for our meeting I guess ^
[09:38] <asac> didrocks: sounds good... ttyt
[10:13] <Mirv> cihelp please investigate "misc stack is already running" http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/All/job/cu2d-build_all-head/281/console - it's still happening and preventing builds
[10:13] <Mirv> at least it's not visibly running anywhere
[10:13] <psivaa> Mirv: ack will do
[10:14] <Mirv> psivaa: thanks, and sorry, I forgot to check the topic whether it's ci_help or some person at the moment
[10:15] <psivaa> Mirv: np, will do after a meeting
[12:07] <psivaa> Mirv: just to give you an update.. there are some old files that need clearing up to solve the issue. Will do that once the US side comes online
[12:34]  * retoaded wants to let everyone know that some hardware replacement work is getting ready to start on the public jenkins server. While the work is happening the public instance will be unavailable. This is https://jenkins.qa.ubuntu.com
[13:29] <psivaa> ogra_: http://pastebin.ubuntu.com/6453240/ is the dmesg during the failed network setup stage
[13:30] <ogra_> well, there is an OOPS
[13:30] <ogra_> hmm, actually only a warning
[13:46]  * retoaded announces that the public jenkins instance is back up and operational. And it seems to perform significantly better. Just check https://jenkins.qa.ubuntu.com/view/All/ if you want to get an idea of how much more responsive it is now.
[14:09] <fginther> cyphermox, jibel, can one of you re-review https://code.launchpad.net/~fginther/cupstream2distro-config/update-stack_status/+merge/195956
[14:22] <rsalveti> plars: happy birthday!
[14:22] <plars> rsalveti: thanks :)
[14:23] <ogra_> plars, happy b-day !!!
[14:25] <rsalveti> plars: talking about the last_kmsg from the other devices, besides the modem shutdown message, do you know if the rest is also similar?
[14:25] <rsalveti> I wonder if this is actually a wifi bug
[14:25] <rsalveti> we got a change in wpa from cyphermox a few days ago, but not sure if that would cause any issue
[14:25] <ogra_> rsalveti, psivaa noted networking issues too in the utah tests ... though it seems that's more affecting maguro
[14:26] <ogra_> rsalveti, i was wondering if the new udev has a new way of loading firmware so that our overrides stop working
[14:26] <plars> rsalveti: I've seen this for sure on two devices - the mako in the lab that runs the image smoke tests, and the one sitting right next to me. They both had the same error
[14:29] <rsalveti> ogra_: right, would be nice to know the issue with maguro, to see if it's related in some way
[14:30] <rsalveti> ogra_: afaik it's using the same upstream base version, it was just a merge with debian
[14:30] <ogra_> rsalveti, http://pastebin.ubuntu.com/6453240/ there is a warning OOPS, but nothing else interesting
[14:30] <rsalveti> we could have issues, sure, but at least we didn't get a major version update
[14:30] <rsalveti> plars: interesting
[14:30] <ogra_> i think it bumped a bunch of debian versions at least
[14:30] <rsalveti> too bad I can't reproduce it here, let me reflash from scratch
[14:31] <ogra_> 5 debian revisions actually
[14:31] <rsalveti> I think that warning is happening at every boot
[14:31] <rsalveti> ogra_: right, but no major upstream version update, right?
[14:33] <ogra_> rsalveti, well ... https://launchpad.net/ubuntu/+source/systemd/204-5ubuntu1 thats quite a changelog
[14:36] <rsalveti> yeah hehe
[15:50] <sergiusens> rsalveti, the only thing uploaded that was relevant to my eyes was wpa_supplicant
[15:50] <ogra_> when was that uploaded ?
[15:51] <sergiusens> ogra_, in the changes for image 19
[15:51] <ogra_> i didn't see it in any of the changes files since image 22
[15:51] <ogra_> oh, wow
[15:51] <ogra_> how could i miss that
[15:51] <ogra_>     deinitialize the P2P context when the management interface gets removed for
[15:51] <ogra_>     whatever reason, such as a suspend/resume cycle. (LP: #1210785)
[15:52] <ogra_> thats the change
[15:52] <sergiusens> plars, happy bday, here's your small token of appreciation https://code.launchpad.net/~sergiusens/phablet-tools/recovery_check/+merge/196102
[15:52] <rsalveti> sergiusens: yeah, as I said before, cyphermox's changes :-)
[15:53] <rsalveti> let me review that one now as well
[15:53] <plars> sergiusens: cool, give me a sec and I'll take a look
[15:53] <rsalveti> where is the broken recovery :-)
[15:53] <sergiusens> rsalveti, was it image 23?
[15:53] <sergiusens> or 22
[15:53] <rsalveti> 25 iirc
[15:54] <sergiusens> just phablet flash with that branch and --revision 25 then
[15:54] <sergiusens> should eventually timeout
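The recovery check being proposed boils down to polling a command that may fail while the device is mid-flash, and giving up after a deadline instead of hanging. A minimal sketch of that loop (the function name and defaults are assumptions, not the actual phablet-tools code):

```python
import time

def wait_for(probe, timeout=60.0, interval=1.0):
    """Poll probe() until it returns True, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# On hardware the probe would wrap something like
# subprocess.call(['adb', 'shell', 'true']) == 0 (hypothetical check).
```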
[16:00] <doanac`> sergiusens: as per that MP. does the .shell method detect errors?
[16:00] <doanac`> usually you have to do that ADB_RC type hack
[16:03] <doanac`> added a note to the MP. with what i *think* is needed. not verified though
[16:07] <sergiusens> doanac`, it doesn't; if it did, I would need to add an exception catch to the loop
[16:07] <sergiusens> doanac`, as it's polling adb 'run something' when it may not be avail
[16:09] <doanac`> sergiusens: okay. i see how it works now. /me sucks at reading diffs
[16:09] <sergiusens> doanac`, the diff isn't that easy to read
[16:09] <sergiusens> doanac`, either way, your proposal is still valid
[16:10] <sergiusens> but I'd need to catch it every loop
[16:10] <doanac`> the less we use the "ignore_errors=False" the better
[16:10] <sergiusens> doanac`, too hacky?
[16:12] <doanac`> sergiusens: it always feels that way to me. but then again, we have some amount of code that probably has no idea it's ignoring errors
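The "ADB_RC type hack" referred to above works around older adb versions always exiting 0 regardless of what the device-side command did: the command echoes its own exit status as a sentinel, and the host parses it back out. A sketch, run against the local shell here so it is demonstrable without a device (on hardware the runner would be `('adb', 'shell')`):

```python
import subprocess

def shell_with_rc(cmd, runner=('sh', '-c')):
    """Run cmd via runner, recovering its exit status from an echoed sentinel."""
    out = subprocess.check_output(
        list(runner) + [cmd + '; echo ADB_RC=$?'], text=True)
    body, _, rc = out.rpartition('ADB_RC=')
    return body, int(rc)

body, rc = shell_with_rc('echo hello; false')  # rc == 1 despite the overall exit 0
```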
[16:18] <cyphermox> rsalveti: ogra_: I very very strongly doubt this wpa change would break your stuff, whatever it is
[16:19] <cyphermox> rsalveti: if something's broke, file a bug so I can take a look, please
[16:20] <cyphermox> the oops ogra_ posted before looks more like an issue with bringing up the device at all, that's driver-level stuff
[16:20] <cyphermox> there was a message about writing the mac address, which happens long before wpasupplicant gets started
[16:22] <rsalveti> cyphermox: the oops is not an issue, the problem seems to happen with mako
[16:22] <rsalveti> the device is rebooting now from time to time
[16:22] <cyphermox> so how would this be related to wpa?
[16:22] <rsalveti> cyphermox: http://paste.ubuntu.com/6451729/
[16:23] <rsalveti> because it seems the wireless driver is causing the crash
[16:23] <rsalveti> the last message, which causes the reboot, is just the modem doing shutdown
[16:23] <rsalveti> but not the issue itself
[16:24] <cyphermox> I see nothing there that points to wifi being broken
[16:25] <cyphermox> if you feel like it, downgrade wpa, but then there's still a bug that really needs to be fixed in that commit
[16:25] <cyphermox> I mean, downgrade as a test, not in the distro
[16:25] <rsalveti> cyphermox: right, no clue exactly what is happening, all we know is that we got a bunch of messages related to the mako's wifi driver
[16:25] <cyphermox> right
[16:25] <cyphermox> but those are all just info, not errors
[16:26] <rsalveti> plars: which image was the last one we know that was working fine?
[16:26] <cyphermox> on the contrary, it tells me that wifi should have properly disconnected
[16:26] <rsalveti> right
[16:27] <plars> rsalveti: looking at http://reports.qa.ubuntu.com/smokeng/trusty/touch/ 23 and 24 looked pretty good. iirc 24 didn't get a lot of reruns because of timing so it might have done slightly better with a little love
[16:28] <plars> rsalveti: but then we had this adb issue in the lab, and the image 25 stuff, and devices were down for most of yesterday
[16:28] <plars> rsalveti: so 26 didn't happen before 27 came out
[16:30] <rsalveti> plars: can we give 26 a try/run?
[16:30] <rsalveti> not sure how hard that would be
[16:30] <rsalveti> so at least for this issue specifically we believe it's something between 24-27
[16:33] <plars> rsalveti: we're not well set up for it, but I'm sure we could pull it off somehow. The other problem is that psivaa isn't seeing the issue happen this morning, and even last night it was pretty random for me. I saw it lots of times within minutes of boot both at home and in the lab, then my phone at home was up for a long time with no crash (it's been up for 13 hours now)
[16:34] <plars> rsalveti: so I don't think we have a deterministic test for it at the moment
[16:34] <rsalveti> plars: heisenbug?
[16:34] <rsalveti> :-)
[16:35] <plars> rsalveti: seems so, yes
[16:35] <plars> rsalveti: but at least I had logs with proof that something bad was going on - from 2 different makos even
[16:35] <rsalveti> yeah
[17:40] <cyphermox> fginther: I think we're just about good to update all the jobs and all of that soon
[17:42] <fginther> cyphermox, thanks, when that MP is merged, can you deploy the updates?
[17:42] <cyphermox> fginther: once your cupstream2distro-config branch gets merged I'll deploy everything, yeah
[17:42] <cyphermox> I already disabled all the build_all jobs to avoid it starting in 20 min
[17:44] <didrocks> cyphermox: you need to redeploy saucy as well :)
[17:49] <cyphermox> yep
[17:55] <fginther> cyphermox, the MP is merged now
[17:59] <dobey> fginther: hey. quick question about the PPA usage for upstream merger 2.0. what's the specific reasoning behind using dput directly instead of using recipes there?
[18:02] <cjwatson> recipes have a cap on how many builds per day you get
[18:02] <cjwatson> I wouldn't advise using them for this
[18:02] <fginther> dobey, mainly because that is what our tools currently do, are there advantages to using recipes?
[18:04] <dobey> cjwatson: it depends on exactly what the goals of the PPA builds are
[18:04] <dobey> fginther: i'm just curious because tarmac already has existing support for triggering recipe builds on launchpad, after the successful merger of branches in a target
[18:05] <fginther> the goal is to create builds of MPs to be tested
[18:05] <fginther> so prior to merging to trunk
[18:06] <dobey> fginther: so the "custom rotated PPAs for testing MPs" part of the CI Airline stuff?
[18:06] <fginther> dobey, right, they are just build slaves
[18:07] <dobey> yeah, recipes would be problematic to use for that
[18:08] <dobey> i didn't realize it was for that purpose. i thought it was for the post-merge builds into the daily PPA, as upstream merger is doing currently
[18:08] <dobey> wasn't clear in the session :)
[18:10] <fginther> dobey, sorry about that, thanks for the question though, there may be a use case for that functionality
[18:10] <cyphermox> fginther: didrocks: deploying head now
[18:10] <didrocks> \o/ deploying THE head please! ;)
[18:10] <fginther> cyphermox, thanks
[18:11] <cyphermox> alright didrocks. deploying TEH head.
[18:11] <didrocks> ;)
[18:12] <fginther> dobey, upstream merger does dput of sources on merge for specific projects, a recipe might be a better way to handle this in the future
[18:12] <fginther> these dputs are based on specific needs of the project owners, a recipe might give them more control over that
[18:12] <dobey> fginther: right. we're using recipes extensively with the u1 tarmac setup
[18:13] <cyphermox> dobey: depends if you mean launchpad recipes or bzr builder recipes, I guess.
[18:15] <dobey> cyphermox: they are the same thing, but i mean recipes hosted on launchpad of course :)
[18:15] <cyphermox> well, you could get the best of both worlds and use bzr builder recipes, but just dput the built source package after..
[18:15] <cjwatson> I'd still caution against it for anything that might be high-volume for a single recipe.  you might not be running into that with tarmac right now, but ...
[18:16] <cjwatson> (the limit is IIRC five a day, and if we wanted to unhardcode that we'd need to very carefully consider the impact on the build farm as some users abuse recipes to the maximum extent possible and our build farm is already oversubscribed)
[18:17] <cyphermox> didrocks: fginther: deploying TEH sauce now.
[18:18] <fginther> cjwatson, noted
[18:24] <didrocks> !
[18:26] <cyphermox> saucy done..
[18:27] <didrocks> sweet!
[18:28] <cyphermox> so you're saying not to bother with raring?
[18:29] <fginther> cyphermox, we should be ok to start a build
[18:30] <cyphermox> yup, that's the next step, I'll start QA now
[18:31] <fginther> doh~
[18:31] <fginther> doh!
[18:31] <fginther> fixing
[18:34] <fginther> cyphermox, https://code.launchpad.net/~fginther/cupstream2distro/fix-missing-paren/+merge/196180
[18:35] <cyphermox> fginther: so since I mentioned a symlink before, and all the paths are supposed to be up to date, if you added a symlink it would be best to remove it, to avoid keeping old crap working by chance
[18:36] <fginther> cyphermox, there is no symlink at the moment, I'm also going to hide the old /var/lib/jenkins
[18:37] <cyphermox> ok
[18:37] <cyphermox> at least I don't need to redeploy everything for this fix :)
[18:38] <fginther> cyphermox, :-), just waiting for the MP to merge
[18:40] <fginther> cyphermox, is lp:cupstream2distro merged by hand?
[18:41] <cyphermox> no, it should also be under ci... didrocks?
[18:42] <fginther> cyphermox, ack, I'll merge then
[18:42] <didrocks> cyphermox: it's by hand IIRC
[18:42] <cyphermox> oh ok
[18:44] <fginther> cyphermox, ready again
[18:45] <cyphermox> ok
[18:47] <cyphermox> well, crap.
[18:47] <cyphermox> Building on master in workspace /iSCSI/jenkins/jobs/cu2d-qa-head/workspace
[18:47] <cyphermox> [workspace] $ /bin/bash -eu /tmp/hudson3259210953020072847.sh
[18:47] <cyphermox> only one instance of a stack can be queued for building
[18:48] <cyphermox> not sure what to do with this
[18:48] <fginther> cyphermox, are these just stale files?
[18:49] <cyphermox> no idea
[18:49] <cyphermox> http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/QA/job/cu2d-qa-head/435/console
[18:49] <cyphermox> oh wait
[18:49] <cyphermox> yeah just stale files
[18:50] <cyphermox> I don't have access to remove them though
[18:50] <fginther> is it safe to remove ALL the stack.status files (I can do that)?
[18:50] <cyphermox> see /iSCSI/jenkins/cu2d/work/head/qa
[18:50] <fginther> I'll remove that first
[18:52] <cyphermox> btw (pending - daily-release-executor is offline )
[18:52] <fginther> cyphermox, ugh
[18:53] <dobey> cjwatson: i'm not sure what the limit for recipe builds is, but it is per-user, and i'm sure we could get an increase in the limit for the necessary bots, if needed. tarmac also works on multiple MPs before triggering the recipe, not single MPs one-at-a-time as j-l-p currently does
[18:56] <cjwatson> dobey: like I say, it's five a day and there is no provision right now for any kind of configurability
[18:56] <cjwatson>     def isOverQuota(self, requester, distroseries):
[18:56] <cjwatson>         """See `ISourcePackageRecipe`."""
[18:56] <cjwatson>         return SourcePackageRecipeBuild.getRecentBuilds(
[18:56] <cjwatson>             requester, self, distroseries).count() >= 5
[18:57] <cjwatson> recipes aren't really meant for this kind of use case AIUI, but you'd have to talk to a real LP developer rather than a stunt one :-)
[18:58] <cjwatson> (i.e. that's per-requester per-recipe per-series)
[18:58] <cjwatson> modulo lag
[18:59] <dobey> right
[19:00] <dobey> it's also something we could improve in launchpad itself
[19:02] <cjwatson> maybe, though as I say there are other considerations and dput should be perfectly functional already
[19:02] <cjwatson> without any of these limitations
[19:02] <cjwatson> one great thing about it being hardcoded is that there's no temptation for admins to allow 10 chromium recipe builds a day if somebody is really annoying about it :-)
[19:04] <cyphermox> ahah :)
[19:05] <dobey> cjwatson: and yet, it's also easy enough to work around and build chromium 10 times a day :)
[19:06] <cjwatson> true, though there were many and various reasons why I put effort this year into making build cancellation reliable :-)
[19:06] <cjwatson> it's much more of a pain to deal with when recipes are generating the builds as you're trying to get queues clear enough for other people's builds to happen
[19:06] <cjwatson> but shrug
[19:06] <fginther> cyphermox, \o/ daily-release-executor is online
[19:07] <cyphermox> yay
[19:09] <cyphermox> things look pretty good so far
[19:12] <fginther> cyphermox, http://q-jenkins.ubuntu-ci:8080/job/cu2d-qa-head-1.1prepare-autopilot-gtk/418/console
[19:14] <cyphermox> fginther:
[19:14] <cyphermox> dpkg-source: error: can't build with source format '3.0 (native)': native package version may not have a revision
[19:14] <cyphermox> dpkg-buildpackage: error: dpkg-source -b autopilot-gtk-1.4+14.04.20131121 gave error exit status 255
[19:14] <cyphermox> debuild: fatal error at line 1361:
[19:14] <cyphermox> dpkg-buildpackage -rfakeroot -d -us -uc -v1.4+14.04.20131106.1-0ubuntu1 -S failed
[19:14] <cyphermox> bzr: ERROR: The build failed.
[19:14] <cyphermox> I don't think that's your fault
[19:15] <fginther> cyphermox, what about the "prepare-package", line 193, traceback?
[19:15] <fginther> was that caused by the bzr failure?
[19:16] <cyphermox> afaik yes
[19:16] <cjwatson> you need pristine-tar information in the branch so that bzr-build{er,deb} can reconstruct an .orig.tar.gz
[19:16] <cjwatson> dpkg-source got stricter about this in trusty
[19:16] <cjwatson> i.e. if you declare 3.0 (quilt) it actually wants this to be possible and won't just silently fall back to 3.0 (native)
[19:16] <cjwatson>   * Catch mismatches between version strings and format versions in
[19:16] <cjwatson>     dpkg-source. Ensure that a 3.0 (quilt) package has a non-native version
[19:16] <cjwatson>     and that a 3.0 (native) package has a native version. Closes: #700177
[19:16] <cjwatson>     Thanks to Bernhard R. Link <brlink@debian.org>.
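The failure pasted above is that new dpkg-source consistency check in action: a 3.0 (native) package must have a native version, i.e. one with no Debian revision, while 3.0 (quilt) requires the opposite. A toy model of the rule (a paraphrase of the check, not dpkg's actual code):

```python
def version_matches_format(version, source_format):
    """Model the check dpkg-source gained (Debian bug #700177): native
    formats take versions without a Debian revision (no '-'), quilt the
    opposite."""
    native_version = '-' not in version
    if source_format == '3.0 (native)':
        return native_version
    if source_format == '3.0 (quilt)':
        return not native_version
    return True  # 1.0 accepts either

# The version from the failed build, against the declared native format:
print(version_matches_format('1.4+14.04.20131106.1-0ubuntu1', '3.0 (native)'))  # False
```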
[19:17] <cyphermox> in time this is going to break each of the packages
[19:17] <fginther> \o/
[19:17] <cyphermox> cjwatson: ack, makes sense
[19:18] <cjwatson> there's some support for pristine-tar information as a bzr property somewhere but I don't remember the details
[19:18] <cjwatson> infinity: ^- I wonder if we should consider temporarily reverting that change; it's been causing trouble for people showing up in #launchpad too
[19:19] <infinity> cjwatson: pristine-tar, or people can stop using non-native versions for their recipes.
[19:19] <infinity> cjwatson: But I could revert the dpkg strictness too...
[19:19] <cyphermox> cjwatson: don't revert that for us though. I've been kind of against using native for non-native packages.
[19:20] <cyphermox> unless didrocks says otherwise, it's his baby after all
[19:20] <cyphermox> but none of this really needs to be native packages
[19:20] <cjwatson> the pristine-tar bit is kind of cumbersome and non-obvious
[19:20] <cjwatson> there's "bzr import-upstream" according to https://answers.launchpad.net/launchpad/+question/203083
[19:21] <didrocks> it doesn't seem straightforward indeed
[19:21] <didrocks> cyphermox: we should just use 1.0 package for those as we are in split bzr mode
[19:21] <cjwatson> "bzr help import-upstream" actually looks pretty easy to run
[19:21] <cjwatson> could you try that before changing the source format?
[19:21] <cyphermox> oh, you're right, they should be 1.0 anyway
[19:22] <cyphermox> if the import-upstream bits work I'd be all for moving to 3.0 though
[19:22] <cyphermox> provided there wasn't another such strictness that we chose 1.0 for
[19:23] <cyphermox> I'll look at import-upstream
[19:23] <didrocks> cjwatson: import-upstream is to import an upstream tarball
[19:23] <didrocks> we don't have upstream releases
[19:24] <cjwatson> oh, then you should be native in some way, indeed
[19:24] <cyphermox> right
[19:24] <cjwatson> I'd suggest changing the version number scheme and using 3.0 (native) then, if possible
[19:25] <cjwatson> it would be a better reflection of reality
[19:25] <cjwatson> that said: you *do* seem to have upstream tarballs
[19:25] <cjwatson> in that e.g. "apt-cache showsrc unity8" shows me .orig.tar.gz
[19:25] <cjwatson> but I guess it's cumbersome to import those in advance of "releases"?
[19:28] <didrocks> cjwatson: we create them
[19:28] <didrocks> with bzr bd, split mode
[19:28] <didrocks> for other distro if they want to use it, one day…
[19:29] <didrocks> and bzr bd is now hitting this 3.0 conflict, even in split mode ^
[19:29] <cyphermox> didrocks: autopilot-gtk
[19:29] <cjwatson> 1.0 probably isn't terrible for that, but this seems like a bunch of work for now when we could revert the dpkg change and get moving again
[19:30] <cyphermox> because it's in 3.0... but most other packages should be 1.0 as per the wiki
[19:30] <cyphermox> didrocks: couldn't cu2d import-upstream as it goes to do the package build?
[19:30] <didrocks> cjwatson: if you can revert it, then, we can have a task to update everything for 1.0
[19:30] <didrocks> cyphermox: well, it's a chicken and egg problem
[19:30] <cjwatson> though it would be pretty rubbish to end up in a place where the CI infrastructure can't deal with packages that *do* have a separate upstream existence
[19:31] <didrocks> you need import-upstream for running bzr bd now
[19:31] <cjwatson> didrocks: I'm out of time for this week, but if infinity wants to do it ...
[19:31] <didrocks> and you need to run bzr bd to get an upstream image
[19:31] <didrocks> cjwatson: previously, I was doing it myself, without bzr bd
[19:31] <didrocks> but it's hackish
[19:32] <didrocks> cyphermox: maybe we can use --export-upstream=
[19:33] <didrocks> that maybe won't run debian/rules and get into this dpkg change
[19:33] <cyphermox> yeah... that or some form of bzr export as the get-orig-source rule, or whatever it is that creates a tarball
[19:34] <didrocks> cyphermox: do you have some time to investigate today?
[19:34] <cyphermox> didrocks: already done
[19:34] <cyphermox> well, kindof
[19:34] <cyphermox> hold on a second :)
[19:34] <didrocks> oh nice! it's working?
[19:34] <didrocks> ok ;)
[19:34] <cyphermox> well it's working and you don't have to run bzr bd twice...
[19:35] <didrocks> excellent!
[19:35] <cyphermox> let me make sure
[19:35] <didrocks> cyphermox: feel free to MP against cupstream2distro once confirmed :)
[19:37] <dobey> fginther: btw, did you get to make any progress on the u1 branches in tarmac with daily-release support stuff?
[19:37] <fginther> dobey, ah, yes. forgot to send you the info
[19:39] <cyphermox> didrocks: we'll still need to make it (quilt) instead of (native)
[19:39] <didrocks> so, it's a change either way
[19:39] <didrocks> but in even more packages
[19:40] <fginther> dobey, https://code.launchpad.net/~fginther/cupstream2distro-config/ubuntuone-no-upstream-merger/+merge/196189
[19:40] <didrocks> we should maybe revert the dpkg change and then transition to (quilt)
[19:40] <didrocks> with your change
[19:40] <didrocks> infinity: ^
[19:40] <cyphermox> didrocks: why do we need to revert the dpkg strictness then?
[19:40] <cjwatson> oh, I got it backwards, you're 3.0 (native) right now not 3.0 (quilt)
[19:40] <dobey> fginther: i don't really understand what that does exactly
[19:40] <infinity> If they're 3.0 (native), just fix the version number.
[19:40] <cjwatson> well having 3.0 (native) with a revision number is just unambiguously wrong :-)
[19:41] <cyphermox> yes
[19:41] <infinity> I *can* revert the dpkg change, but I don't want that to be an excuse for people to keep being wrong. :P
[19:41] <didrocks> cyphermox: well, because everything will fail?
[19:42] <cyphermox> didrocks: well, we have the opportunity to fix it by fixing the source format where necessary?
[19:42] <didrocks> infinity: they are not native, but bzr bd doesn't know how to split the package correctly…
[19:42] <sergiusens> agree, fix version number to match what you are doing
[19:42] <infinity> didrocks: If the format is 3.0 (native), they're native...
[19:42] <cyphermox> didrocks: actually, that particular package does specify 3.0 (native)
[19:42] <didrocks> cyphermox: does debian/source say native?
[19:42] <didrocks> ah
[19:42] <infinity> didrocks: If they have orig tarballs, stop calling them native, and import the tarballs.
[19:42] <didrocks> so please drop that
[19:42] <infinity> didrocks: There's not really an in between here, is there?
[19:42] <didrocks> cyphermox: that should be enough right?
[19:42] <cjwatson> just to clarify, if we're reverting the dpkg change then that should IMO be primarily because it's a support cost for #launchpad (if that can't be handled some other way)
[19:42] <didrocks> infinity: well, I didn't set it to native, clearly
[19:42] <cjwatson> in terms of recipes in general
[19:42] <cyphermox> right. the 3.0 (native) is absolutely wrong, and that's it
[19:42] <didrocks> infinity: I agree :)
[19:43] <fginther> dobey, sorry, it's not well explained. Essentially this will allow ubuntuone-client-data and ubuntuone-credentials to be picked up by the daily-release process when they are ready
[19:43] <didrocks> cyphermox: so, please change that :)
[19:43] <didrocks> and be done
[19:43] <cyphermox> didrocks: as far as I know it's enough yeah
[19:43] <cjwatson> however, the next couple of people who come into #launchpad with this problem, I'll point them at bzr import-upstream now that I've found it
[19:43] <fginther> dobey, but it will not do any upstream-merger on them
[19:43] <cjwatson> and we'll see if that helps, and FAQify or whatever
[19:44] <dobey> fginther: does that change mean ps jenkins bot won't run the tests in ps jenkins, as is currently happening?
[19:45] <fginther> dobey, yes, ps jenkins will ignore these
[19:45] <dobey> fginther: i don't think we want/need to stop that happening. is that required to make daily-release possible, while using tarmac?
[19:46] <cyphermox> https://code.launchpad.net/~mathieu-tl/autopilot-gtk/source-format/+merge/196190
[19:47] <infinity> cyphermox: Of course, 3.0 (quilt) might *still* fail because you lack an orig...
[19:48] <infinity> cyphermox: Unless you're also going to fix that.
[19:48] <infinity> (Or does this have an orig?)
[19:50] <fginther> dobey, I can revisit it, but I don't have time to spend on it right now. I thought that disabling upstream merger might be a viable short term solution, but it appears to not be the case
[19:51] <fginther> dobey, my other thought would be to make some change to tarmac to operate on specific branches
[19:51] <sergiusens> infinity, cyphermox looking at the history, it was initially 3.0 (native); when added to daily release it switched to 1.0, then someone changed it to 3.0 (native) again
[19:51] <sergiusens> meaning that maybe some general communication on this would help
[19:52] <dobey> fginther: it would be possible and very easy to add such a feature to tarmac, if it's necessary. it doesn't really fit with the general design of how tarmac works, but it's very simple for me to implement.
[19:52] <cyphermox> infinity: AFAICS bzr bd in split mode should be writing the orig anyway... and it seems to on my system
[19:53] <infinity> cyphermox: Ahh, cool.  So, that's a solved problem for you guys, just don't call those native. :P
[19:53] <fginther> dobey, ok. let me create a bug with what I think would help
[19:53] <infinity> Non-split recipes run into dpkg whining at them, and importing the actual upstream tarball is the solution there.
[19:53] <cyphermox> I've always said those weren't supposed to be considered native packages in any way, since they have a debian rev
[19:53] <dobey> fginther: could you enumerate in an e-mail or something for me, what exactly is required for daily-release to work with the current setup, on a purely technical level (must be specific branch merge, must have debian/ in-tree, etc… sort of stuff)?
[19:54] <fginther> dobey, sure
[19:55] <infinity> cjwatson: I guess we could make lp-buildd try to spit out a useful "hey, maybe you need to bzr import-upstream" message on failed recipe-derived builds of that sort?  I dunno.
[19:55] <dobey> fginther: thanks. that should make it clearer what exactly we need to make it work and fit tarmac in for doing the merges
[19:55] <cyphermox> brb
[19:59] <dobey> fginther: and i voted disapprove on that MP as it doesn't seem to facilitate what we want to do there. :)
[19:59] <fginther> dobey, agreed
[20:03] <thomi> didrocks: still around?
[20:17] <fginther> dobey, https://bugs.launchpad.net/tarmac/+bug/1253770
[20:27] <retoaded> all, I am starting to mark the various jenkins instances as "prepare to shutdown" so no additional jobs start before we have to take things down for the (hopeful) network fix.