[00:12] <rsalveti> cyphermox: still around?
[00:54] <cyphermox> back now yeah
[00:55] <cyphermox> rsalveti: pong
[00:56] <rsalveti> cyphermox: seems fine now, I just copied over the new phablet-tools that was pending in the ppa since nov-08
[00:56] <cyphermox> ok
[00:56] <rsalveti> that had a few fixes that are useful for the emulator bootstrap
[00:57] <rsalveti> cyphermox: everything is in manual still, right?
[00:57] <rsalveti> cyphermox: which then probably means that someone had to file a landing request at that time
[00:57] <cyphermox> rsalveti: yeah
[00:57] <rsalveti> which didn't happen, so it was stuck in the ppa
[00:58] <rsalveti> but since jenkins is down, and this only affects the emulator use case, I decided to push it directly
[00:58] <cyphermox> hey, if it's pressing just upload directly to the archive and we'll sync it back
[00:59] <rsalveti> right, I know there might be a missing step in there
[00:59] <rsalveti> like publishing in another ppa (for different series), and also updating trunk
[00:59] <rsalveti> with the released version
[00:59] <rsalveti> but is that part of a jenkins job as well?
[00:59] <cyphermox> yes
[01:00] <cyphermox> but like I said, you could just as well upload phablet-tools to distro directly and I'll merge the changelog
[01:00] <rsalveti> cyphermox: right, that's kind of what I did, but I decided to copy the binary packages instead
[01:00] <rsalveti> if you can merge it manually it'd indeed be helpful
[01:05] <cyphermox> ok I'll take a look at that first thing tomorrow
[01:05] <rsalveti> great, thanks :-)
[05:26] <Mirv> love the amount of jenkinses up! I wonder what's the status of mergers?
[05:29] <Mirv> ah, up but jobs don't launch at jatayu at least
[08:40] <ogra_> hmm, something tried an image test it seems but with a weird version
[08:44] <didrocks> yep
[08:44] <didrocks> and I see stuck jobs on ex-magners
[08:44] <didrocks> I guess vila would know ^
[08:44] <didrocks> ogra_: btw, can we publish 101 on saucy? request from stephan :)
[08:44] <didrocks> ogra_: IIRC, this one is correct
[08:45] <ogra_> didrocks, i was waiting for asac's "go" on that ...
[08:46] <vila> didrocks: on it but it's mostly guess work at that point
[08:46] <didrocks> ogra_: hum? what does asac know more than us about publishing a new image in the stable release (between minor ones, 100 and 101)
[08:46] <didrocks> we are not talking about saucy->trusty, but rather saucy->saucy
[09:08] <sil2100> didrocks: is magners up already but on some other IP or something? ;)
[09:09] <didrocks> sil2100: you do have http://q-jenkins.ubuntu-ci:8080
[09:09] <didrocks> but it's not fully up yet
[09:10] <didrocks> missing executors, firewall rules for syncing on the archive admin machine and the otto machines
[09:10] <didrocks> sil2100: I guess vila is on it, but let's sync up during a short meeting
[09:10] <sil2100> I would have to know the IP of q-jenkins.ubuntu-ci anyways!
[09:11] <didrocks> sil2100: hum, no, you need to change the vpn config as per wiki
[09:11] <vila> sil2100: its IP changed overnight, stop *thinking* about IPs ;)
[09:11] <sil2100> :<
[09:12]  * sil2100 has a lot of syncing up as he does not know about any wiki
[09:12] <vila> didrocks: we're on it with jibel but blocked by access rights :-/ no sudo anymore on jatayu which is the new home for q-jenkins and inherits the restricted sudo policy from m-o apparently
[09:12] <Mirv> didrocks: those stuck jobs were started by me earlier today, just to see what's up. daily-release-executor seemed down.
[09:13] <vila> Mirv: starting that slave wasn't documented for m-o and wasn't migrated, jibel just documented the needed steps but we lack the rights to process them
[09:13] <Mirv> vila: ok
[09:17] <didrocks> sil2100: look at the engineering ML (mails from yesterday)
[09:17] <didrocks> vila: and do you know about the firewall? this is an orthogonal question I guess :)
[09:18] <sil2100> didrocks: what e-mail is it? Since I don't see anything, could you give me the title?
[09:18] <vila> didrocks: nope
[09:18] <sil2100> I might not be on that mailing list?
[09:19] <sil2100> Ah, the earlier update
[09:19] <sil2100> Got it
[09:19] <didrocks> sil2100: great ;)
[09:19] <didrocks> vila: do you know who is tracking it then?
[09:20] <vila> ev: ^
[09:21] <didrocks> (better to figure it out now than too late)
[09:23] <ev> didrocks: when you say firewall, do you mean https://rt.admin.canonical.com//Ticket/Display.html?id=65887 ?
[09:23]  * didrocks looks
[09:25] <didrocks> ev: is tachash q-jenkins?
[09:25] <vila> didrocks: no
[09:26] <vila> didrocks: q-jenkins was magners but it's now hosted on jatayu
[09:26] <didrocks> I don't see then the request between snakefruit and jatayu
[09:26] <didrocks> which is used for daily release
[09:26]  * didrocks tries
[09:27] <didrocks> $ rsync rsync://jatayu.ubuntu-ci
[09:27] <didrocks> rsync: failed to connect to jatayu.ubuntu-ci (10.98.3.12): Connection refused (111)
[09:27] <didrocks> rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
[09:27] <didrocks> from snakefruit
[09:27]  * didrocks adds this to the ticket
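The firewall check didrocks ran above can be generalized into a small probe. A minimal sketch, assuming a bash shell on snakefruit; the hostname is the one from this conversation and port 873 is rsyncd's default (`/dev/tcp` is a bash built-in, not a real path):

```shell
#!/bin/bash
# Probe the TCP port the snakefruit -> jatayu rsync sync needs.
host=jatayu.ubuntu-ci   # hypothetical target from the discussion
port=873                # rsync daemon default
if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "port $port open on $host"
else
    echo "port $port closed or filtered on $host (firewall rule missing?)"
fi
```

A "Connection refused" as in the transcript above means the TCP connect itself is rejected, which is exactly what this probe distinguishes from a working daemon.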
[09:29]  * didrocks updates the cu2d config to point the final mirror copy from jatayu.ubuntu-ci instead of magners
[09:30] <didrocks> plars: ogra_: vila: sil2100: Mirv: ev: short meeting
[09:30] <vila> didrocks: we're trying to identify service names instead of machine names, can you come up with a name for that one ? (doesn't have to be definitive we'll fix them later)
[09:30] <didrocks> vila: the cu2d machine?
[09:30] <didrocks> then, call it as you wish
[09:31] <vila> didrocks: no, the need to use rsync, what's the service there, isos ?
[09:31] <vila> didrocks: archive ?
[09:32] <vila> joining in a sec
[09:37] <asac> didrocks: ogra_: lets talk about the saucy update thingy
[09:37] <asac> so we have now 101?
[09:37] <sil2100> didrocks: ok, I lost my account on that jenkins, and now when I re-created it and log in I don't see cu2d actually - what do I need to get access to it?
[09:37] <sil2100> Or is it not visible?
[09:38] <ogra_> asac, since a month, idling in saucy-proposed ... i remember you asked me to hold it back, but i dont remember why anymore
[09:38] <ogra_> asac, stgraber would like us to promote it so we test the stable channel promotion
[09:38] <asac> ogra_: was 101 produced _before_ we started producing trusty?
[09:38] <ogra_> yes
[09:38] <ogra_> it only has fixes for the uevent spam on maguro
[09:39] <ogra_> built from SRU fixes only
[09:40] <asac> can we in theory produce a 102?
[09:40] <sil2100> didrocks: ok, I was looking in the wrong jenkins
[09:41] <ogra_> asac, any time, yes, but there were a lot more SRUs now ...  so that would need careful testing and looking at
[09:41] <asac> of course
[09:41] <ogra_> http://reports.qa.ubuntu.com/smokeng/saucy/touch_mir/
[09:41] <ogra_> 101 looks even better than 100
[09:41] <ogra_> hmm or was that under touch_ro
[09:42] <ogra_> (where it looks even better better :) )
[09:42] <asac> sure. i just wonder if we should also test other parts of the update infra
[09:42] <asac> e.g. that we can produce saucy still
[09:42] <asac> that we can test it
[09:42] <ogra_> we surely can roll another saucy image ... but i would wait until we can get test results
[09:43] <didrocks> asac: ogra_: we can promote 101 to stable right now and kick an 102 build
[09:43] <asac> yeah. how about we target saucy update for next week thu
[09:43] <ogra_> didrocks, no
[09:43] <didrocks> why?
[09:43] <ogra_> didrocks, lets wait with kicking a new build until the infra is fully back
[09:43] <ogra_> i dont want to experiment with that
[09:43] <asac> so the idea was always that we release an update after one month
[09:43] <ogra_> right
[09:43] <didrocks> ogra_: ok, but we can promote 101 right now still :)
[09:44] <ogra_> so lets promote 101 and roll an 102 build on monday
[09:44] <didrocks> +1
[09:44] <asac> lets discuss final things monday
[09:44] <ogra_> right, so that we see image testing works as expected
[09:45] <didrocks> but we still promote 101 now?
[09:45] <asac> no please
[09:45] <didrocks> asac: can you please tell that to stephan then?
[09:45] <asac> sure
[09:45] <didrocks> asac: because he's blocking on it
[09:45] <didrocks> thanks
[09:45] <asac> he is blocking? not: he feels blocked :)?
[09:45] <didrocks> not sure…
[09:46] <asac> ok let me figure
[09:46] <ogra_> right he wants to finish stuff on the weekend but needs results from our side
[09:46] <didrocks> I don't know why we are not promoting 101, there's no real reason, but well :)
[09:46] <didrocks> I think we're not the stakeholder there
[09:46] <asac> the plan was to release an image to stable channel after one month
[09:46] <asac> thats what we sold to exec mgmt etc.
[09:47]  * ogra_ doesnt care about stakeholders ... but it is simply a better image than 100 
[09:47] <ogra_> i.e. brings benefit to our users
[09:47] <asac> that was true for the last 3 weeks :)
[09:47] <didrocks> asac: so a month, we do release on sunday? :)
[09:47] <ogra_> asac, well the plan was surely too that we do more regular image builds then ;)
[09:47] <asac> so i am not sure why the sense of urgency today
[09:47] <ogra_> asac, and not have something rotting in proposed for 4 weeks
[09:48] <asac> didrocks: just saying: for me it was last thursday
[09:48] <asac> hence i am not ready mentally
[09:48] <ogra_> asac, because stephane needs info if the setup works
[09:48] <asac> because for me it was in the longer future and i want to talk to a few folks first
[09:48] <ogra_> well, then lets keep it til monday ...
[09:48] <ogra_> but it seems weird to leave images rot for 4 weeks "just because"
[09:49] <asac> i will check with him
[09:49] <asac> once he is up
[09:49] <ogra_> there were a lot of SRUs, if 102 breaks it will become really hard to find out why
[09:49] <didrocks> "The goal behind this image, besides fixing the issue on maguro was to
[09:49] <didrocks> test our upgrade process for 14.04.
[09:49] <didrocks> "
[09:49] <didrocks> "This end to end test was a requirement before we can officially
[09:49] <didrocks> discontinue support of the saucy image in favor of the trusty one, so
[09:49] <ogra_> (unrelated to when we promote 101 ... we should do more frequent proposed builds)
[09:49] <didrocks> it'd be good if this could be done ASAP."
[09:50] <didrocks> so I guess we can wait for Monday
[09:50] <didrocks> but would be good not say on Monday "let's wait even more"
[09:50] <asac> thats his perspective. there are more parts of the process we should pipeclean
[09:50] <didrocks> asac: however, if we promote image 101 next week
[09:50] <didrocks> it will mean we flip the stable channel to trusty in a month?
[09:50] <didrocks> (to not have more than one update a month?)
[09:50] <ogra_> didrocks, no
[09:51] <ogra_> didrocks, stable is stable
[09:51] <ogra_> it wont switch before we release trusty
[09:51] <ogra_> (as stable)
[09:51] <didrocks> ogra_: that's not what I understood…
[09:51] <asac> hehe
[09:51] <asac> dont argue about that part
[09:51] <ogra_> everything else would be nonsense
[09:51] <asac> i disagree with that statement, but lets not go down here
[09:52] <ogra_> trusty is in constant flux ... it isnt stable we cant call it stable
[09:52] <didrocks> well, if we take a snapshot, are happy about it quality-wise
[09:52] <ogra_> thats still not stable
[09:52] <didrocks> I don't see the difference with saucy unmaintained :)
[09:52] <ogra_> api versions might still change etc
[09:52] <ogra_> you have no guarantee your click packages still work and the like
[09:53] <ogra_> (we have a devel channel for a reason ... why would you make them the same)
[09:53] <ogra_> anyway
[09:54] <ogra_> (just sounds pretty messed up to me )
[10:30] <popey> ogra_: is a build in progress?
[10:30] <ogra_> hmm, might actually be done
[10:30] <ogra_> no, not yet
[10:30] <ogra_> and yes, see above
[10:30] <ogra_> 22 is in the making
[10:31] <ogra_> (cdimage part is done)
[10:39] <popey> ah
[10:39]  * popey does "/hilight
[10:46] <davmor2> 22 installing
[10:52] <ogra_> right
[10:58] <davmor2> ogra_, popey: if you send yourself a bunch of text messages in the indicator does it change,  send about 5.  I'll try and get a photo
[11:00] <ogra_> does the keyboard work again (it didnt in 21 ... after you rotated the device it didnt come up anymore)
[11:00] <popey> davmor2: i sent a bunch of texts and they were all received
[11:01] <popey> ogra_: unlikely, that bug was only filed last night
[11:01]  * ogra_ doesnt have his maguro around atm
[11:01] <popey> davmor2: can you be more specific about the issue
[11:01] <ogra_> yeah, thats what i expected
[11:01] <popey> davmor2: http://popey.com/~alan/phablet/device-2013-11-15-110127.png
[11:01] <davmor2> popey: no I get them but in the indicator the icon changes from text to globe and doesn't give the same effect if you click on it to send back
[11:05] <popey> ogra_: yes, still happens in #22
[11:05] <ogra_> as expected then
[11:34] <popey> bug 1251597
[11:34] <popey> anyone else ever see that?
[11:34] <popey> (not a regression, seen it quite a bit)
[11:38] <didrocks> popey: yeah, I think there is already a bug from asac related to it
[11:38] <ogra_> popey, all the time
[11:40] <ogra_> confirmed
[11:42] <popey> ah okay
[11:43] <davmor2> popey: yes
[11:54] <davmor2> popey, ogra: http://ubuntuone.com/20tSyBkGQWHDRmOVznvWcw this is what you normally see right,  every now and again I get this http://ubuntuone.com/4OPQyVQdfhliEcdIyq3jlK
[11:54] <ogra_> ah, i never noticed that
[11:55] <ogra_> (but i must admit i also never paid much attention to this icon)
[11:55] <davmor2> ogra_: it's normally the 5-7 text but it's not just the icon I can't reply to this one
[11:57] <davmor2> ogra_: I noticed it on 21 but couldn't seem to reproduce it but now I can woohoo \o/
[12:01] <popey> not seen that
[12:19] <asac> cool. i see something on the dashboard about todays image :)
[12:19] <asac> even if 0%
[12:19] <ogra_> oh, a proper versioned entry ?
[12:19]  * ogra_ checks 
[12:19] <asac> not proper
[12:19] <asac> but something :)
[12:20] <asac> 20131115 ?
[12:20] <ogra_> well, thats just the dashboard going mad i think
[12:20] <ogra_> (there was no such thing like 20131114 )
[12:20] <asac> its more than all the days before :)
[12:20] <asac> e.g. there is a heart beating somewhere again
[12:20] <ogra_> right, but most likely still not proper
[12:21] <asac> of course not... i would hope we are better than 0% and still have a proper version
[12:21]  * ogra_ guesses another firewall issue, though i dont know the actual setup 
[12:23] <didrocks> ogra_: no, it's what we discussed this morning, the phone don't start
[12:23] <ogra_> ah, that
[12:23] <ogra_> yeah
[12:23] <didrocks> so the job is stuck in install
[12:23] <ogra_> well, the odd versioning usually means the phones are unreachable
[12:23] <ogra_> i just didnt know why
[12:23] <psivaa> didrocks: the jenkins timeout of 30 mins is not working :/
[12:24] <didrocks> psivaa: I hope it's not an uninstalled plugin :p
[12:29] <psivaa> didrocks: Build timeout plugin is shown as installed, but not sure if the correct version was restored after the migration..
[13:37] <plars> psivaa: which plugin are you concerned about?
[13:38] <psivaa> plars: the 'Build timeout plugin'
[13:38] <plars> psivaa: that's a plugin? I thought that was just part of jenkins
[13:38] <psivaa> the installed version is 1.11 but 1.12 is available and the jobs are not timing out with 30 mins
[13:39] <psivaa> plars: http://q-jenkins:8080/pluginManager/ has more details
[13:39] <plars> psivaa: interesting, but I'm more concerned with why the phones are dying when we try to install at the moment :(
[13:40] <psivaa> plars: yea, that is concerning. have seen in 4 devices
[13:40] <psivaa> but i ran provision.sh in the same manner locally and it works fine
[13:40] <plars> psivaa: I've been through multiple runs locally and can't get it to happen
[13:41] <psivaa> plars: not sure if any kinnara side pkg versions are any different to what we had in phoenix
[13:41] <plars> psivaa: I think I saw somewhere that they got some sort of new usb hub, so I wonder if that has something to do with it
[13:50] <psivaa> plars: possibly, not sure why it's so deterministic soon after flashing. I tried doing some reboots with one of the devices and it comes up fine in adb
[14:02] <fginther> morning
[16:05] <asac> vila: ev: retoaded: so the otto machines... i thought those were in the DC and wired up already.
[16:05] <asac> is that not the case?
[16:07] <retoaded> asac, they are all in the DC and wired as far as KVM, network and power are concerned. There are still a few systems that were migrated out of their desktop cases into rack mount cases that require some internal wiring (extended power leads, system fan adapters, etc ...). rfowler is picking those pieces up today so they can be brought online.
[16:08] <asac> retoaded: are those the otto machines?
[16:08] <asac> or are those wired up etc.?
[16:08]  * asac thinks those are desktop machines and its hard to migrate those into server cases
[16:08] <asac> but guess rackmount is something else
[16:08] <vila> asac: see https://wiki.canonical.com/UbuntuEngineering/CI/1ss-move-current-issues missing hardware pieces
[16:09] <ev> asac: they were desktop machines - they were reconfigured into rackable units
[16:09] <fginther> asac, yes, those are the otto machines
[16:09] <retoaded> asac, the otto machines were desktop machines now in rack mount cases. The problem was that some of the smaller form factor desktops had very short outputs from the power supplies.
[16:10] <asac> ok so those are our x86 test boxes.
[16:10] <asac> what about the phones?
[16:10] <asac> do we know yet what the issues are with running our jobs?
[16:10] <asac> http://reports.qa.ubuntu.com/smokeng/trusty/touch/
[16:10] <ogra_> looks like the dashboard has at least one proper version there now
[16:10] <asac> :)
[16:10]  * asac is purely color driven :)
[16:11] <ev> We have 20 phones with two down
[16:11]  * vila sends green all over asac 
[16:11] <asac> lol
[16:11] <asac> ev: that sounds good. but why dont the job run?
[16:12] <asac> ok i heard that a new attempt is currently running
[16:12] <asac> lets see
[16:12] <asac> and wait a bit
[16:12]  * ogra_ was playing with the emulator all day ... 
[16:13] <ogra_> i doubt we'll ever get it to make the systemsettle tests :P
[16:19] <plars> ok, the image smoke tests seem to be rolling along on both mako and maguro now
[16:20] <plars> install_and_boot is done on both, and default has passed on maguro, should start to see results on reports.qa.ubuntu.com soon
[16:25] <plars> ogra_: I'm less worried about systemsettle tests, and more concerned with whether sensitive timings in the autopilot tests will be a problem
[16:28] <ogra_> they most likely will
[16:32] <plars> http://reports.qa.ubuntu.com/smokeng/trusty/touch/
[16:33] <balloons> didrocks, are you going to be able to pick up the new source for the core apps?
[16:34] <balloons> sorry, I mean to say, are you going to land the stuff in the pipeline for core apps?
[16:36] <balloons> fginther, I'm confused by the output from the merge bot on this: https://code.launchpad.net/~nskaggs/ubuntu-rssreader-app/add-activity-indicator-check/+merge/195322. http://91.189.93.70:8080/job/generic-mediumtests-trusty/147/console
[16:37] <cjohnston> balloons: use the vanguard to start with please :-)
[16:37] <fginther> balloons, interesting. looks like something fell over. Will investigate
[16:38]  * balloons smacks balloons around a little bit
[16:38] <balloons> sorry cjohnston.. bad habits
[16:39] <doanac> plars: smoke is showing up on the dashboard now: http://reports.qa.ubuntu.com/smokeng/trusty/touch/mako/22:20131115:20131111.1/4977
[16:39] <doanac> well done
[16:39] <plars> doanac: yep, I posted just a bit ago ^ :)
[16:40] <plars> doanac: looks promising at least, will be interesting to see the overall results
[16:41] <balloons> cjohnston, can I bug the vanguard about landing asks as well? :-)
[16:41] <cjohnston> balloons: probably not... I guess it would depend on what it is, but we don't yet have a lot to do with that
[16:44] <didrocks> balloons: I think only sergiusens can (once the CI infra back)
[16:44] <ogra_> doanac, and its all GREEN !!!
[16:44] <ogra_> asac will love that
[16:44] <balloons> didrocks, ty.. I figured the infrastructure might be the holdup
[16:44] <didrocks> heh
[16:45] <balloons> there should be merges for everything.. so I'm curious to see it all get pulled and go green
[16:45] <sil2100> I wouldn't too hastily hope for 'go green' in the nearest days ;)
[16:46] <balloons> hope springs eternal sil2100
[16:46] <ogra_> sil2100, well, if we stop the image tests right now they will be green :)
[16:47] <sil2100> hahah ;)
[16:53] <davmor2> ogra_: you need to perfect the following line,  " asac these aren't the results you're looking for, move along, move along"
[16:53] <ogra_> haha
[16:53] <ogra_> that only works in hangouts, else he cant see the specific hand wave
[16:59] <davmor2> o| o/ o_
[17:00] <didrocks> sil2100: ogra_: robru: kenvandine: cyphermox: ev: vila: plars: landing meeting time!
[17:00] <ogra_> pfft
[17:00] <ogra_> your late
[17:00] <sil2100> !
[17:00]  * sil2100 joins
[17:00] <ogra_> *you're
[17:02] <sil2100> kenvandine: ping!
[17:04] <kenvandine> sil2100, pong
[17:11] <ev> cyphermox, vila: oh hai
[17:11] <ev> so the problem we're trying to solve, to recap:
[17:11] <ev> we want the openvpn server to push the DNS server's IP address to the client, but we want this to be a fallback for whatever the person currently has set up
[17:12] <ev> so if I have 8.8.8.8, I don't want to start suddenly routing all DNS requests through batun
[17:12] <cyphermox> right
[17:12] <cyphermox> so, using openvpn directly, there's just one way to do it; via the up script as per the VPN wiki page
[17:13] <cyphermox> we can tweak that to some degree to be even more useful if we were to push config to dnsmasq rather than to resolvconf directly
[17:13] <cyphermox> otherwise, NM should be able to do the right thing without change... if not, there's a bug
[17:14] <ev> would `push "dhcp-option DNS 10.99.244.1"` not work?
[17:14] <vila> my understanding is that even for the openvpn trick documented on the wiki you end up with the pushed dns on top of yours in /etc/resolv.conf
[17:14] <vila> i.e. the one pushed from the vpn becomes the primary one
[17:15] <vila> currently mine is: nameserver 10.99.244.1
[17:15] <vila> nameserver 192.168.0.254
[17:15] <vila> search ubuntu-ci
[17:15] <cyphermox> ev: correct, push dhcp-option DNS (and DOMAIN) should be all you really no
[17:16] <cyphermox> argh
[17:16] <cyphermox> .. really all you need
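The server-side lines cyphermox is describing would look roughly like this; a sketch only, using the DNS IP and search domain that appear in this conversation (the `push "dhcp-option ..."` form is standard openvpn server syntax):

```
# server.conf fragment: push the CI DNS server and search domain to clients
push "dhcp-option DNS 10.99.244.1"
push "dhcp-option DOMAIN ubuntu-ci"
```

On Linux, openvpn itself does nothing with these options; a client-side script (or NetworkManager) has to apply them, which is the point made below.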
[17:16] <ev> :)
[17:16] <cyphermox> vila: that's why you need resolvconf or dnsmasq to properly handle the multiple DNS servers and domains
[17:16] <ev> cyphermox: so to be clear, that wouldn't overwrite 8.8.8.8?
[17:16] <cyphermox> shouldn't, no
[17:17] <ev> yay
[17:17] <vila> cyphermox: additional magic then ?
[17:17] <cyphermox> resolvconf and dnsmasq will do things differently, but both should work provided they are set up properly
[17:17] <vila> ev: hold on your enthusiasm ;)
[17:17] <vila> cyphermox: I'm fine if we have proper instructions to give but as of today that won't work out of the box
[17:17] <cyphermox> e.g. resolvconf depends on putting, iirc, the VPN dns first, and making sure it only responds for the domain and anything else NXDOMAIN (I think)
[17:18] <cyphermox> and dnsmasq can do everything nicely
[17:18] <cyphermox> vila: are the two push commands currently in the VPN config?
[17:18] <vila> how ?
[17:18] <cyphermox> dnsmasq is able to say hey, I have this dns server but only for domain XYZ, and ship the requests only to it
[17:19] <vila> cyphermox: no, but the trick I used in the wiki fakes it so that should give the same end result
[17:19] <cyphermox> right
[17:20] <robru> didrocks, hey wait
[17:20] <didrocks> robru: yep?
[17:21] <robru> didrocks, i tried to kick a friends stack build but it failed immediately without even trying.
[17:21] <didrocks> robru: can you point vila and I to it?
[17:21] <vila> cyphermox: so currently the VPN dns is first but last I checked with wireshark, it receives all dns requests
[17:21] <cyphermox> vila: I'm just saying, what you need is that the push lines are there in the config, or adding the settings to NM; and NM out of the box will do the right thing, otherwise, if you use openvpn directly, you need an up / down script
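For people running openvpn directly, the up script cyphermox mentions boils down to translating the pushed options into resolvconf input. A minimal sketch, assuming resolvconf is installed and the script is wired in with `script-security 2` and an `up` directive; the function name and the sample values at the bottom are illustrative (openvpn exports each pushed option as `foreign_option_1`, `foreign_option_2`, ...):

```shell
#!/bin/sh
# Translate openvpn's foreign_option_N env vars into resolvconf input.
collect_pushed_dns() {
    i=1
    while eval "opt=\${foreign_option_$i:-}"; [ -n "$opt" ]; do
        case "$opt" in
            "dhcp-option DNS "*)    echo "nameserver ${opt#dhcp-option DNS }" ;;
            "dhcp-option DOMAIN "*) echo "search ${opt#dhcp-option DOMAIN }" ;;
        esac
        i=$((i + 1))
    done
}

# In the real up script the output is handed to resolvconf:
#   collect_pushed_dns | resolvconf -a "${dev}.openvpn"
# Simulated values for demonstration:
foreign_option_1="dhcp-option DNS 10.99.244.1"
foreign_option_2="dhcp-option DOMAIN ubuntu-ci"
collect_pushed_dns
# prints:
#   nameserver 10.99.244.1
#   search ubuntu-ci
```

A matching down script would call `resolvconf -d "${dev}.openvpn"` to withdraw the record when the tunnel closes.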
[17:21] <didrocks> robru: and please, don't land friends before we know the image is green :p
[17:21] <didrocks> robru: but nice for trying ;)
[17:21] <cyphermox> vila: with resolvconf, that's expected
[17:21] <robru> didrocks, no, i won't land it, i just wanted to see some recent friends commits built in PPA ;-)
[17:21] <robru> didrocks, http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/Friends/job/cu2d-friends-head/347/
[17:21] <didrocks> yeah ;)
[17:22] <cyphermox> vila: with resolvconf, you'd have the first DNS server get all the requests, and possibly skip to the next if it answers with SRVFAIL
[17:22] <vila> cyphermox: ha, nm is a different story, indeed I have only "nameserver 127.0.1.1\nsearch ubuntu-ci" there and I didn't check with wireshark
[17:23] <cyphermox> vila: I'll get you an updated up script to try
[17:23] <cyphermox> ahah, or actually, just a file that could be left in place always...
[17:23] <didrocks> robru: "only one instance of a stack can be queued for building
[17:23] <didrocks> "
[17:24] <didrocks> robru: did you see this?
[17:24] <didrocks> robru: but yeah, nothing is running
[17:24] <didrocks> vila: ev: it seems jenkins was backed up in a very very weird state
[17:24] <robru> didrocks, yes... so where does it show that the stack is already building? jenkins lists zero builders active, it says totally idle.
[17:24] <didrocks> robru: yeah, the backup was done while stuff was running. I know where to clean that
[17:25] <robru> didrocks, ok, otherwise i won't be able to build anything today, right? ;-)
[17:25] <didrocks> robru: well, at least not friends, maybe other stacks are in a more lucky state ;)
[17:26] <didrocks> rm: cannot remove `head/platform/stack.started': Permission denied
[17:26] <didrocks> grumph the ACL are not restored :/
[17:26] <didrocks> retoaded: ^
[17:26] <didrocks> I can't restore the state until then
[17:26] <didrocks> normally desktop-team can touch /var/lib/jenkins/cu2d
[17:28] <didrocks> robru: when someone from the CI team is available, can you ask them on q-jenkins.ubuntu-ci to:
[17:28] <didrocks> cd /var/lib/jenkins/cu2d/work
[17:28] <didrocks> rm */*/*started */*/*building
[17:28] <robru> didrocks, who? can fginther do this?
[17:28] <didrocks> probably, not sure
[17:28] <didrocks> we used to have access to that directory as desktop-team
[17:28] <didrocks> so restoring the ACL can be helpful
[17:30] <robru> josepht, need some help with q-jenkins: http://paste.ubuntu.com/6422247/
[17:31] <vila> geeee
[17:31] <ev> didrocks, robru: josepht is your guy. He's the vanguard right now.
[17:31] <vila> didrocks: do you expect *started to match both getting-started and .started ?
[17:31] <cyphermox> vila: confirm that you have dnsmasq running?
[17:31] <didrocks> vila: getting-started?
[17:31] <didrocks> vila: where is it?
[17:32] <vila> ./head/misc.bak/ubuntu/cordova-docs/cordova-docs-3.0.0+13.10.20130930/docs/en/2.4.0/guide/getting-started for example
[17:32] <cyphermox> ev: you use NM or openvpn to connect to the VPN?
[17:32] <ev> cyphermox: both
[17:32] <ev> I use openvpn, as do a few other people. Others use NM
[17:32] <didrocks> vila: that doesn't match rm */*/*started
[17:32] <ev> ideally we'd like a solution for both
[17:32] <cyphermox> ev: yes
[17:32] <vila> find . -name '*started' -ls | wc -l
[17:32] <vila> 125
[17:32] <vila> vila@jatayu:/var/lib/jenkins/cu2d/work$ find . -name 'getting-started' -ls | wc -l
[17:32] <vila> 106
[17:32] <didrocks> vila: yeah, this is in a subdirectory of subdirectory of subdirectory of…
[17:33] <didrocks> not in */*/*started which is just 2 level down
[17:33] <cyphermox> ev: stop-gap while the openvpn config doesn't push the info, is a file in /etc/NetworkManager/dnsmasq.d and /etc/dnsmasq.d that contains just "server=/ubuntu-ci/10.99.244.1"... doesn't need to be removed or modified ever
[17:34] <cyphermox> ev: and I'm writing a proper up/down script to handle openvpn when the settings do get pushed
[17:34] <cyphermox> ^^ that stop-gap requires removing the up/down script for openvpn
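cyphermox's stop-gap as an actual file, dropped into both `/etc/NetworkManager/dnsmasq.d/` and `/etc/dnsmasq.d/` (the filename is arbitrary; `server=/domain/ip` is standard dnsmasq syntax for sending only that domain's queries to a given server):

```
# e.g. /etc/dnsmasq.d/ubuntu-ci
# send *.ubuntu-ci lookups to the VPN DNS; all other queries keep
# using whatever DNS servers the client already has (8.8.8.8, etc.)
server=/ubuntu-ci/10.99.244.1
```

This is why it "doesn't need to be removed or modified ever": when the VPN is down the ubuntu-ci lookups simply fail, and nothing else is affected.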
[17:35] <didrocks> vila: I think you are handling it and will give a sign to robru, right? (also, can you add to your list of "things to do" to restore the desktop-team ACL?)
[17:35] <vila> didrocks: rm done, that's the kind of commands we don't want to have to run anymore
[17:36] <didrocks> vila: thanks!
[17:36] <didrocks> robru: should be good now
[17:36] <vila> didrocks: ask ev, I think that's not how we want to proceed in the future, instead we would probably want to reduce that kind of access
[17:36] <didrocks> vila: ok, as long as the vanguard can answer quickly
[17:37] <didrocks> and you handle it/understand the system
[17:37] <didrocks> no worry for me, one thing less I have to handle :)
[17:37] <ev> I'm making exceptions for AU/NZ
[17:37] <robru> didrocks, vila: thanks
[17:37] <ev> because they're wonderful people who happen to live at a timezone entirely not conducive to me ever sleeping if we're to cover them with the vanguard
[17:38] <retoaded> didrocks, try again
[17:39] <vila> retoaded: I did the rms
[17:39] <didrocks> retoaded: working, thanks!
[17:40] <retoaded> vila, ack. the ACLs will all be changed anyway when we move to userdir-ldap
[17:40] <vila> didrocks: so that cu2d/work dir is the one you were referring to as your fs sync back in lexington ?
[17:40] <didrocks> vila: sorry, what do you mean?
[17:40] <vila> didrocks: you mention some use of the file system to sync jenkins jobs
[17:41] <didrocks> vila: hum, not sure I mentioned that. I mentioned a while ago that we are using a shared fs for all the stacks
[17:41] <didrocks> that is that one, yeah
[17:41] <vila> didrocks: yeah, that
[17:42] <didrocks> robru: http://q-jenkins.ubuntu-ci:8080/job/cu2d-friends-head-1.1prepare-friends/337/console
[17:42] <didrocks> vila: it seems cowbuilder isn't setup ^
[17:42] <didrocks> (so cu2d not available)
[17:42]  * didrocks is late for his appointment already
[17:43] <robru> bah
[17:55] <robru> vila, what's the deal with cowbuilder then
[17:55] <robru> ?
[17:56] <vila> robru: no idea yet
[17:57] <vila> robru: it should be part of cu2d setup but I can't find where it's documented for now, I did come across that at some point but doing it once is not the same as knowing how to do it from memory ;)
[17:58] <robru> vila, ok, no worries.
[17:59] <vila> https://wiki.ubuntu.com/DailyRelease/MovingNewRelease#First_setup_on_the_jenkins_server
[17:59] <vila> First setup on the jenkins server
[17:59] <vila>     we need to create a release+1 pbuilder. Ping jibel for it.
[17:59] <vila> based on the assumption that we need to re-create it, it may be something else but 'ping' is all the doc I can find
[18:01] <vila> ev, cihelp: EOW here, dead end on cowbuilder for q-jenkins
[18:02] <retoaded> vila, what needs to be done for cowbuilder? I remember seeing that as a package on m-o but was not able to find it in the repository.
[18:08] <vila> retoaded: that's what I'd like to know... cowbuilder is installed but /var/cache/pbuilder/trusty-amd64/base.cow is not found
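Re-creating the missing base would be something along these lines. A sketch only: it needs root and network access, so the command is echoed rather than run, and whether q-jenkins used extra options such as a local `--mirror` is unknown:

```shell
#!/bin/sh
# Print the command that would rebuild the missing cowbuilder chroot.
# Drop the echo (and run as root) to actually execute it.
DIST=trusty
BASE=/var/cache/pbuilder/${DIST}-amd64/base.cow
echo sudo cowbuilder --create --distribution "$DIST" --architecture amd64 \
     --basepath "$BASE"
```

In this case it turned out to be unnecessary: retoaded simply rsynced the existing `trusty-amd64` tree over from m-o, as the next lines show.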
[18:12] <retoaded> vila, ack
[18:15] <retoaded> vila, pushing /var/cache/pbuilder/trusty-amd64 now from m-o
[18:24] <vila> retoaded: oh my just when I was checking and found it there 8-)
[18:24] <vila> robru: can you retry ?
[18:25] <vila> retoaded: pushed ? or did I ping robru too soon ?
[18:25] <vila> retoaded: and sorry for disturbing you one more time :-/
[18:25] <robru> vila, well, trying. we'll see
[18:26] <plars> I had to kill the notes test run on maguro, it was stuck
[18:27] <retoaded> vila, was pushing but it is done so can now be called pushed
[18:29] <plars> retoaded: we should probably update the build timeout plugin as psivaa mentioned earlier, unless you have any reason for wanting the earlier one on there: http://q-jenkins:8080/pluginManager/
[18:30] <retoaded> plars, I have the plugin downloaded already. Just need to find the moments to update it (and a few others that have newer versions out).
[18:31] <retoaded> plars, what I will need from you though is a list of systems that need to be available for kernel sru testing come Monday.
[18:32] <plars> retoaded: ok, will sort that out
[18:32] <retoaded> plars. thx
[18:35] <plars> retoaded: looks like the jobs might be there, but the views are definitely missing from http://d-jenkins:8080/
[18:39] <vila> retoaded: thanks, http://q-jenkins.ubuntu-ci:8080/job/cu2d-friends-head-1.1prepare-friends/ turned to green congrats
[18:39] <vila> robru: unblocked then ?
[18:39] <retoaded> vila, sweet
[18:39] <robru> vila, retoaded: yep, looks good, thanks.
[18:40] <retoaded> vila, robru: that was all cu2d related correct?
[18:40] <robru> retoaded, well i needed that for cu2d, yeah, but i couldn't say if anybody else is using it or not
[18:40] <retoaded> robru, ack. thx
[18:41] <vila> retoaded: 80% sure it's only for cu2d
[18:41] <retoaded> vila, ack
[18:44] <vila> robru: what job should be tracked to get a feeling on whether cu2d is working ? I.e. what do *you* expect ? ;)
[18:44] <robru> vila, well i just kicked friends job instinctively because that's my stack ;-)
[18:45] <robru> vila, also friends is a relatively stable stack that should never have failing tests, so any problems there are usually cu2d problems, not friends problems.
[18:46] <robru> vila, not sure really. i'll kick webapp stack as well just to see what happens, since I'm more familiar with that stack as well
[18:49] <vila> robru: yeah, better start with a friendly one ;)
[19:09] <vila> cyphermox: sorry got diverted and almost forgot you ;-/ EOW'ed but if you send me an email with the stop-gap and post-stop-gap scripts I'll look into them and see how to document their use in our wiki (or get back to you if needed)
[19:09] <cyphermox> sure
[19:09] <cyphermox> I just need to know if you have dnsmasq running though
[19:09] <cyphermox> but my script is ready
[19:11] <vila> cyphermox: I have two configs, alptop uses NM, desktop using openvpn, will have to check for dnsmasq, but can't dig more right now
[19:11] <vila> alptop... laptop !
[19:11] <cyphermox> ok
[19:11] <sergiusens> cihelp is there an ETA for http://s-jenkins.ubuntu-ci:8080/ ?
[19:12] <cjohnston> not to my knowledge
[19:12] <retoaded> sergiusens, can you be more specific? The calxeda nodes are now online and we will be working the VMs from naartjie next.
[19:12] <sergiusens> retoaded, there's a "Jenkins is going to shut down"
[19:12] <sergiusens> retoaded, so someone thinks it's not ready, right?
[19:13] <fginther> sergiusens, it's still missing some critical slaves
[19:13] <sergiusens> retoaded, I'm not rushing, just want to plan a bit
[19:13] <sergiusens> fginther, ok, thanks
[19:14] <fginther> sergiusens, the VM server is still not available, once that comes online and all checks out, jenkins will be enabled
[19:14] <sergiusens> fginther, that's good to hear
[19:15] <robru> vila, so i haven't seen a failure yet, but i dunno, seems slow to me. friends stack has been running for 50 minutes now. i can't remember how long it normally takes, but that just seems really slow to me. it builds locally in just a few minutes.
[19:53] <fginther> balloons, I think i have your test issue fixed. There appears to have been a regression in the package python-simplejson 3.3.1-1ubuntu2
[19:57] <balloons> fginther, thanks. So I'll just try a  rebuild then eh
[19:57] <fginther> balloons, yep, one test made it through with only test failures
[19:58] <balloons> ahh.. https://code.launchpad.net/~nskaggs/ubuntu-calendar-app/fix-new-event-test/+merge/195421 :-)
[19:59] <fginther> balloons, thankfully someone else already spotted the regression and uploaded a new python-simplejson already
[20:00] <balloons> yea, that's what it looked like. honestly I suppose I should have pushed it more at that level
[20:01] <fginther> balloons, no worries. I think the right course of action is to contact the ci team first. Although it's always helpful if you can offer deeper insight
[20:01] <fginther> in the case that a new package was not available, we would have needed to address it in ci to shutdown the job or find a workaround
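Had no fixed upload been available, one plausible stop-gap for the CI machines would have been pinning out the regressed upload so apt refuses to (re)install it. A minimal sketch, assuming the version string from the conversation above; it writes to a temp file here, whereas a real deployment would use a file under /etc/apt/preferences.d/:

```shell
# Hypothetical stop-gap: pin out the regressed python-simplejson upload so
# apt will not install it while waiting for the fixed package to migrate.
# Verify the version string before using; it is taken from the discussion above.
PIN_FILE=$(mktemp)   # real system: /etc/apt/preferences.d/simplejson-regression
cat > "$PIN_FILE" <<'EOF'
Package: python-simplejson
Pin: version 3.3.1-1ubuntu2
Pin-Priority: -1
EOF
# A negative pin priority forbids installing that exact version;
# `apt-cache policy python-simplejson` would then show it as excluded.
```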
[20:04] <robru> vila, nearly 2 hours and friends still isn't done building. something is definitely wrong here.
[20:10] <ev> robru: he's gone for the night. Please use cihelp instead.
[20:11] <cjohnston> robru: link?
[20:11] <robru> cjohnston, http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/All/job/cu2d-friends-head/
[20:12] <ev> Thanks cjohnston
[20:13] <kenvandine> robru, cjohnston: it's waiting for friends-app to build on arm64
[20:13] <kenvandine> but there is no build job for arm64
[20:13] <robru> kenvandine, yeah, just noticed that...
[20:13] <robru> wait, no build job?
[20:13] <kenvandine> nope
[20:13] <kenvandine> not in the PPA
[20:14] <robru> kenvandine, oh i see, just armhf
[20:14] <robru> kenvandine, so should i cancel those jobs?
[20:15] <kenvandine> that won't fix it though
[20:15] <kenvandine> not sure why cu2d is waiting for arm64, if the ppa doesn't build those
[20:15] <robru> kenvandine, well, no, but it's never going to finish, right?
[20:15] <kenvandine> right
[20:15] <robru> kenvandine, same thing happened to webapp stack, it's waiting for arm64 builds.
[20:17] <robru> cjohnston, kenvandine: ok, so i cancelled the build jobs, now the check jobs are starting and it says "Configuration autopilot-nvidia is still in the queue: autopilot-nvidia is offline" so there's another problem
[20:17] <cjohnston> its possible that it still isn't back
[20:20] <cjohnston> there are still some autopilot systems that aren't functioning, I'm not sure what the reasoning for this one being off is
[20:25] <retoaded> cjohnston, robru: the only autopilot system up atm is the intel system; the nvidia and radeon systems are waiting on some cables for the inside (either to make sure all of the fans are plugged up and running or to extend a power lead somewhere).
[20:25] <robru> retoaded, any ETA?
[20:26] <retoaded> robru, the parts were going to be picked up today but rfowler was diverted to another task. Might be tomorrow at earliest.
[20:26] <robru> retoaded, ah, ok. i won't wait around then. thanks ;-)
[20:27] <retoaded> robru, ack
[20:36] <kenvandine> ogra_, is there an image 23 building?
[20:36] <ogra_> nope
[20:36] <kenvandine> ok, were we supposed to be waiting for the smoke tests on image 22 to be green?
[20:37] <ogra_> (nobody asked for one)
[20:37] <kenvandine> didrocks told us to test the apps when we had a green image
[20:37] <kenvandine> so maybe he was hopeful that things would look better with 22
[23:35] <vila> robru: basically it's your choice: either we keep the whole line blocked because a hardware piece is missing (so may have to wait until monday in the worst case) or we take the risk of exposing ourselves to a regression on nvidia only (until monday in the worst case), ev, asac, didrocks thoughts ?