[00:04] katco: just eating breakfast, 5 mins please [00:04] katco: also, network has been going out still if it's not obvious... [00:04] axw: haha, ok i'll email my question... please take your time and enjoy your breakfast [00:17] katco: standup now? [00:18] axw: sure [00:19] axw: one sec sorry [00:19] katco: sure [00:20] axw: k omw [02:20] I'm likely to drop off at some time as I move from ADSL to VDSL [02:21] thumper: VDSL? [02:21] very dsl, it's like super powered [02:21] very fast [02:21] :P [02:21] haha [02:21] nice [02:21] more dsl than normal [02:22] I'm currently getting about 14 Mbps down and less than 1 up [02:22] VDSL should make it about 20/6 [02:22] nice [02:22] sweet. [02:22] slightly faster down, much faster up [02:22] upgraded to 50/10 last week. Got sick of 1.5mb up [02:22] and the same modem they have given me will work for fibre when I get it in six months or so [02:22] I (briefly) considered moving to kansas city when google announced fiber there [02:23] which will make it over 150 symmetric [02:23] * rick_h_ dreams of 100 bidirectional [02:23] wow [02:23] well, you have to remember that this is within NZ [02:23] push anything over the wet noodle to the states and it slows down [02:23] lol [02:23] yeah, when you cross the pacifc you slow to a crawl I suppose [02:24] thumper: yea, remmeber that in all these 'api calls' they going to go through london across oceans heh [03:24] axw_: dummy charm is working; i'll try postgres tomorrow [03:29] axw_: latest changes are in the PR. still not 100% done, but could use a look if you're bored [03:29] good night everyone [03:58] wow, that made a big difference [03:58] locally, went from 14/0.8 Mbps to 33/9.5 [03:59] and between here and VA, I get 7.76/5.12 Mbsp [03:59] so a big improvement === kadams54 is now known as kadams54-away [04:13] katco: sweet. [04:13] thumper: :o nice [04:13] welcome to gigatown [04:13] ;) === axw_ is now known as axw [07:23] axw: did you see the question on the mailing list about a missed upgrade step related to AvailZone ? [07:23] Do you know anything about it? [07:32] jam: I saw it, but I don't. I think Eric worked on that one [07:38] axw: eric is at pycon :( [07:45] morning all [07:47] voidspace: morning [07:48] axw: o/ [07:48] jam: so, looks like if there's any instanceid in state that's invalid, the whole upgrade step will fail [07:48] not sure if that's the cause tho [07:48] axw: could that be triggered by containers? [07:48] ISTR that used to be invalid machine ids [07:49] jam: I don't think containers get an entry in instancedata, but not entirely sure [07:51] actually, they must, that's where the HW info and all that is set [07:52] I guess the instance ID is not something that MAAS will know about tho [07:52] I'm just spit balling. I believe there were bugs in the past because destroy-environment would send a request to MAAS to destroy the instances that were actually containers. [07:54] jam: yeah, I think you're right [07:58] voidspace: morning o/ [08:14] Bug #1441478 was opened: state: availability zone upgrade fails if containers are present [08:22] TheMue: o/ [08:22] no dimiterm (yet) ? [08:22] voidspace: hmm, he's offline on Friday, but today he should be here [08:23] yeah, that's what I thought [08:23] TheMue: dooferlad: in case I forget to mention it in standup, I'm off on Friday too [08:23] going to a conference with my Dad on Friday, Saturday - flying to Nuremberg Sunday [08:31] voidspace: dooferlad: based on current planning I'll be off tomorrow. 
organizing the new car, preparing my Go/Juju talk on the conference the week after the sprint, and wedding anniversary of my parents-in-law [08:43] TheMue: ok [09:32] Bug #1438683 was opened: Containers stuck allocating, interface not up === benonsoftware is now known as clockwork === clockwork is now known as benonsoftware [11:10] Hehe, that's what juju is for: http://dilbert.com/strip/2015-04-08 [11:13] I *really* hate how I'm forced to mock out 99% of a huge Environ interface just so I can test 2 method calls on it [11:19] dimitern: yeah, would be better if it would be a combination of smaller interfaces, like io.ReaderWriter, and we only use the small ones instead of the combination [11:19] dimitern: so for test one only would have to mock the according smaller one [11:21] TheMue, exactly! - so for example in order to get an Environ from environ config, I need a registered provider which only needs an Open() method, returning environs.Something which then has methods to get full featured interface, e.g. GetZoned -> ZonedEnviron [11:22] or GetNetworking -> NetworkingEnviron [11:23] granted, using finer-grained smaller interfaces for specific environ features will incur some overhead at run-time (type-asserting against a "feature sub-interface" before calling "feature-specific methods") [11:24] dimitern: hmm, setting the smaller ones as parameter types is fine, but when Open() returns an Environment it always have to be a combination. but this at least could be done with a struct combining multiple smaller mocks [11:28] TheMue, why it has to be a full Environ? Open can return a smaller "feature-facade" interface (e.g. GenericEnviron), which you can then use to access specific features (e.g. GenericEnviron.SupportZones() (ZonedEnviron, bool); SupportNetworking() (NetworkingEnviron, error)) [11:29] the problem is embedding smaller feature-based interfaces into a bigger "all-in-one" Environ interface - that thing should die at some point soon [11:29] dimitern: oh, yes, that could be a good approach. it has to know those interfaces but don't implement them. yeah, like it [12:29] dimitern: use a type assertion [12:29] e := someenviron() [12:30] ze, ok := e.(ZonedEnviron) [12:30] if !ok { // bummer, doens't support zones } [12:34] davecheney, I know, but I wanted to avoid the need to create something implementing Environ, just so I can call 2 methods on it [12:35] davecheney, I did find a relatively cruft-free solution though: embed Environ and only implement the methods I need to test [12:41] dimitern: what are the chances of having a serious discussion about refactoring the environ interface in nuremberg [12:41] as in, a discussion that leads to a resolution and work in the next cycle [12:41] not just another commitment to fix it, but someday [12:43] katco, ping [12:43] davecheney, we already had an interesting discussion about it in Cape Town [12:44] davecheney, by interesting I mean with actual implementable steps and a rough roadmap [12:45] davecheney, so let's sit down, discuss it finally and get it done :) [12:45] dimitern: +1 [12:45] dimitern: have you put it on the sprint discussion spreadsheet ? [12:45] axw, seems that commit https://github.com/juju/juju/commit/a16c5c3fd534e9457965b61621cbc2aca00cd21b , adds the leader-election by default. Would be possible to trigger a new -devel PPA build? [12:46] davecheney, if not I'll do it now, while still frustrated :) [12:46] niedbalski: best to talk with sinzui, mgz or abentley about that. 
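A minimal, self-contained Go sketch of the two ideas in the interface discussion above ([11:13]–[12:35]): splitting the all-in-one Environ into small optional feature interfaces that callers discover via a type assertion (davecheney's suggestion), and stubbing a large interface in tests by embedding it and implementing only the methods the test actually calls (dimitern's "relatively cruft-free" workaround). All names here (Environ, ZonedEnviron, fakeZonedEnviron, zonesOf) are illustrative assumptions, not the real juju environs API.

```go
package main

import "fmt"

// Environ is the small "generic" interface a provider's Open would return.
type Environ interface {
	Name() string
}

// ZonedEnviron is an optional feature interface, in the spirit of the
// GetZoned -> ZonedEnviron idea above.
type ZonedEnviron interface {
	Environ
	AvailabilityZones() ([]string, error)
}

// zonesOf asks for the zones feature via a type assertion and degrades
// gracefully when the environ doesn't support it.
func zonesOf(e Environ) ([]string, error) {
	ze, ok := e.(ZonedEnviron)
	if !ok {
		return nil, fmt.Errorf("environ %q does not support zones", e.Name())
	}
	return ze.AvailabilityZones()
}

// fakeZonedEnviron is the test-stub trick: embedding the interface means the
// stub satisfies it without writing every method; any method the test never
// implemented would hit the nil embedded interface and panic, which is
// usually what you want in a test.
type fakeZonedEnviron struct {
	ZonedEnviron
	zones []string
}

func (f *fakeZonedEnviron) Name() string { return "fake" }

func (f *fakeZonedEnviron) AvailabilityZones() ([]string, error) { return f.zones, nil }

func main() {
	var e Environ = &fakeZonedEnviron{zones: []string{"zone-a", "zone-b"}}
	zones, err := zonesOf(e)
	fmt.Println(zones, err) // [zone-a zone-b] <nil>
}
```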
I know nothing about the PPA builds [12:47] axw, yeah, will have to wait , thanks [12:47] dimitern: that's best [12:53] davecheney, done, I've added you to the list of attendees for it [12:56] dimitern: are you sure you did ? [12:56] i don't see my name there [12:56] davecheney, which spreadsheet are you looking at? [12:57] dimitern: which one are you looking at ? [12:57] i'm looking at the correct one [12:57] and so is everyone else [12:57] davecheney, I've updated Juju MaaS Sprint Agenda - Nuremberg - April 201 [12:57] 2015 [12:57] fucking wonderful [12:57] again we manage to have two planning documents for one event [12:57] lets open the champagn [12:57] :) [12:57] dimitern: you have updated a document nobody on juju has access to [12:58] dimitern: please give me the link to the document you are using [12:58] I think that one I've updated is used for scheduling based on the first one with the list of topics [12:58] I thought it's the Nuremberg Sprint Topics/Planing Juju Core? [12:58] I'll update both now [12:58] dimitern: can we stop writing stuff down twice ? [12:58] https://docs.google.com/a/canonical.com/spreadsheets/d/1TrPuHrWvnHU-Ekzt9SLEoWVaSV-f87kHOuLXqzjHnCc/edit#gid=0 [12:58] davecheney: +1 [12:58] this is the document [12:58] any others shold be deleted and merged into the correct document, https://docs.google.com/a/canonical.com/spreadsheets/d/1TrPuHrWvnHU-Ekzt9SLEoWVaSV-f87kHOuLXqzjHnCc/edit#gid=0 [13:02] davecheney, updated both now [13:03] thanks [13:14] davecheney, I have to get use to you talking in this timezone ;) [13:16] then i'll change it up on you [13:20] jam, you still around? [13:20] alexisb: I am [13:20] heya [13:21] first off thank you for looking at lp 1441302 [13:21] I will follow-up with the QA team today re the env [13:21] for the other bug you are were looking at: https://bugs.launchpad.net/juju-core/+bug/1440737 [13:22] that's also timing related, just potentially higher sensitivity I think [13:23] sounds like there are some test updates needed, do you think their is an actual bug in the code? [13:23] alexisb: why is a test failure marked as a private bug [13:24] both bugs look to be test related [13:24] davecheney, that is a very good question [13:24] it contains only information about a synthetic test run [13:24] and is not related to a security issue [13:25] fat finger issue ? [13:25] davecheney, I believe so, but i have not actually verified with the bug submitter [13:26] trust me, it's just a common garden test faliure [13:30] davecheney: I think CI prefers to make them private because there have been bugs in the past where Juju would leak secrets to the log file [13:30] alexisb: ^^ [13:31] jam: this is a test run [13:31] there are no secrets to leak [13:34] katco, ping [13:34] natefinch: 1 on 1? [13:35] wwitzel3: yep [13:58] alexisb: pong [14:01] katco, sorry one sec [14:01] alexisb: no worries [14:03] mgz: how can i determine if a given repo is managed by the juju bot or not? [14:04] rogpeppe1: from the github side? if it has hacker or bots as contributors [14:04] *collaborators [14:05] mgz: so juju/charm has Hackers under team collaborators, so that means i should use $$merge$$ there? [14:05] rogpeppe1: see github.com/orgs/juju/teams/hackers/repositories and same but /bots/ [14:06] rogpeppe1: charm does not [14:06] but yeah, if it does [14:06] mgz: but it has hackers as collaborators... [14:06] rogpeppe1: I wanted to do charm, but it doesn't build with tips of deps when I tried, and has no dependencies.tsv [14:06] mgz: so... how can i determine... 
etc? [14:07] rogpeppe1: see the pages above^ [14:07] mgz: juju/charm is in github.com/orgs/juju/teams/hackers/repositories [14:07] katco, can you take a look at this bug: https://bugs.launchpad.net/juju-core/+bug/1441319 [14:07] Bug #1441319: failed to retrieve the template to clone: template container juju-trusty-lxc-template did not stop [14:07] mgz: is it just bots that i should be looking at? [14:07] looks like you worked in 1.22 [14:08] * katco looking [14:08] rogpeppe1: look at either, if it's in hackers you have to use the github merge button or similar, if it's in bots you need $$merge$$ [14:09] mgz: you could quite easily (i hope) add a dependencies.tsv file to juju/charm by updating deps to those found in juju-core or charmstore first, then using that [14:10] rogpeppe1: I'd like to add more of these subrepos to the gated set, if you have any you want please say [14:10] there are just a few requirements/restrictions over what's needed to build/test [14:10] mgz: sure. some of them are gated by a rival bot [14:10] mgz: (e.g. charmstore) [14:10] rogpeppe1: the juju-gui bits are the same code (roughly) via a different jenkins [14:11] both are in the bots group [14:19] alexisb: i remember this [14:40] mgz: would you be able to tell me what "lxc-start --version" returns on whatever machine runs "local-deploy-vivid-amd64 (non-voting)"? [14:41] katco: sure, sec [14:43] need some assist regarding LogMatches. my test shows http://paste.ubuntu.com/10773293/ as output and I don't know why it fails. [14:48] katco: 1.1.0 [14:48] mgz: ty kindly [14:48] package version 1.1.0-0ubuntu1 [14:48] thanks [14:50] TheMue: did you try printing out the length of each message and make sure it matches? [14:51] TheMue: sometimes there is some hidden character and I know the last check the LogMatches does is a length check [14:51] wwitzel3: to catch appended spaces? good idea [14:52] * natefinch grumbles about tests that do string matching on errors [14:53] ah, that's why I hear it grumbling here [14:55] wwitzel3: hmm, both are the same [15:02] TheMue: :( [15:02] wwitzel3: got a good hint by dimitern, the parens need escaping [15:03] 17 stupid tests fail when you add more information to an error with errors.Annotatef. Fantastic. [15:03] yeeeeeeehaw, it passes [15:04] natefinch: isn't that what tests are supposed to do? [15:05] perrito666: no, we should be using a type system so that we know the right type of error is returned, not that the string the error serializes to is exactly the same [15:06] perrito666: the actual text of the error message doesn't really matter [15:14] hi jam, dimitern, natefinch: can we say that bug 1258485 is fix committed in 1.23-beta4 now that leader elections are not behind a feature flag? [15:14] Bug #1258485: support leader election for charms [15:14] sinzui, I think so, yeah [15:14] yeah [15:14] yay [15:14] \o/ [15:22] gah, and now another test *is* checking types, but just printing out the string in the test failure method. Geez, people. [15:25] man you really get angry with things badly done, you would die of a hearth attack in my country :p [15:26] heh [15:26] lol [16:07] mgz: when we create an lxc container for cloning, we create a log file for the containers console output... would it be possible to pull that from the test machine? [16:09] thank you wwitzel3 [16:16] katco: I can see [16:16] does it generally get removed to destroy-environment? [16:17] mgz: hm... yes probably [16:18] mgz: actually, it probably doesn't even exist. 
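The LogMatches failure discussed above ([14:43]–[15:03]) comes down to the expected messages being treated as regular expressions, so literal parentheses form a capture group instead of matching "(" and ")". A tiny standalone illustration of the escaping fix, using only the standard library regexp package rather than the juju/testing checker itself (the log message is an invented example):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	logged := `started worker ("subnets")`

	// Using the raw message as a pattern does NOT match itself: the parens
	// are a regexp group, so the pattern effectively looks for
	// `started worker "subnets"` without the parentheses.
	raw := `started worker ("subnets")`
	fmt.Println(regexp.MustCompile(raw).MatchString(logged)) // false

	// Escaping the metacharacters ("the parens need escaping") makes the
	// pattern match the literal text.
	escaped := regexp.QuoteMeta(raw)
	fmt.Println(regexp.MustCompile(escaped).MatchString(logged)) // true
}
```

natefinch's related gripe applies here too: asserting on exact error or log strings breaks as soon as a message gains context (e.g. via errors.Annotatef), whereas asserting on an error's type or underlying cause survives such changes.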
that's probably the issue [16:18] voidspace, dooferlad, TheMue, please have a look - http://reviews.vapour.ws/r/1399/ subnets api server-side facade [16:19] I think I managed to get a good balance between test coverage and the amount of unavoidable boilerplate I need to stub out environ/provider/state methods [16:22] katco: I can add any files you ant to the list of local logs to copy before destroy-environment [16:22] but I'd expect *some* to already be present if it had worked at all, and there are none [16:24] older failures have the jenv, all-machines, machine-0 and cloud-init-output [16:25] mgz: yeah on second thought i don't think it's necessary. it looks like if it exists, and we can read from it, the messages will be present in machine-#.log [16:30] man I hate tests that rely on the order of items in a slice [16:33] I think it is easier to keep track of what nate doesn't hate in tests [16:34] much shorter list :) [16:34] wwitzel3: lol [16:34] lol [16:40] o/ whats the trick to pass a port to juju add-machine when ssh is not on the standard port of 22? [16:40] juju add-machine ssh:user@host -P 2222 doesn't seem to be doing it [16:41] mgz: ah wait, there are 2 log files: console.log & container.log... container.log would be useful to have i think. it contains stderr information from lxc-start [16:42] mgz: and it doesn't appear that the logged information would be anywhere else [16:43] mgz: actually, i'm wondering if this is desired, or is a dormant bug... [16:44] https://github.com/juju/juju/blob/master/container/lxc/clonetemplate.go#L282-L283 [16:44] can i get another opinion on that? shouldn't the stderr of lxc-start go into the same place as console output? especially if we're only watching console output? [16:49] kat, hmm, yeah, that could get added [16:52] katco: /var/lib/juju/containers/juju-*-lxc-template/*.log ? [16:53] mgz: looks right [16:53] mgz: i think i might need that to troubleshoot further. command line arguments to lxc-start look sane. i think it's erroring out, yet remains running? need to see stderr [16:53] here's a change to allow deploying charms with authorization; any reviews appreciated: https://github.com/juju/juju/pull/2048/files [16:53] mgz: any chance we could add those log files and then rerun vivid? [16:55] katco: on it [16:55] just checking if need to change how elevation works [16:55] mgz: ty sir. i'm going to take this opportunity to fix lunch [17:02] to anyone following alogn about re: add machine with non-standard ssh port - it appears we dont support this - http://paste.ubuntu.com/10774659/ [17:03] unless i hear otherwise, i'll repost with a bug and bugger off :) thanks [17:06] https://bugs.launchpad.net/juju-core/+bug/1441749 [17:06] cheers [17:06] Bug #1441749: Add-Machine does not support non-standard ssh port [17:12] Bug #1441749 was opened: Add-Machine does not support non-standard ssh port [17:18] alexisb: what time is team lead meeting tomorrow/tonight? [17:19] I think I see it in the afternoon. good for me. [17:20] jam, yep, back to the old time [17:24] katco: only got the jenv, nothing else [17:24] mgz: that's odd... [17:24] maybe collecting at the wrong point or something [17:25] oh, I tyoped [17:25] >_< [17:25] haha [17:26] katco: attempt #2 [17:26] :) [17:26] 20 mins [17:27] np [18:00] g'night all [18:02] abentley, ping [18:02] katco: finished, see #394 [18:03] mgz: ty [18:07] mgz: i haven't quite figured this out yet... how do i get to 394? 
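On the clonetemplate.go question above ([16:41]–[16:44]): the suggestion is that lxc-start's stderr should land in the same file as the console output juju is already watching, so errors such as the apparmor profile failure aren't lost. A hedged sketch of that idea using only os/exec; the paths, template name, and the way the command is invoked are placeholder assumptions, not the actual juju code at the linked lines:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Placeholder values for illustration only.
	consoleLogPath := "/var/lib/juju/containers/juju-trusty-lxc-template/console.log"
	templateName := "juju-trusty-lxc-template"

	// Open (or create) the console log that is already being tailed.
	logFile, err := os.OpenFile(consoleLogPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		log.Fatalf("cannot open console log: %v", err)
	}
	defer logFile.Close()

	// Route both stdout and stderr of lxc-start into that file, so a failure
	// like "failed to change apparmor profile" shows up next to the console
	// output instead of being discarded.
	cmd := exec.Command("lxc-start", "-d", "-n", templateName)
	cmd.Stdout = logFile
	cmd.Stderr = logFile
	if err := cmd.Run(); err != nil {
		log.Fatalf("lxc-start failed: %v", err)
	}
}
```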
i don't see it on http://reports.vapour.ws/releases [18:09] katco: I'm manually triggering on jenkins, so am in there [18:09] mgz: oh gotcha [18:10] you should have a dev login for the jenkins site? [18:10] I can also direct link you the files via data.vapour.ws [18:10] mgz: i do not have a jenkins login [18:10] because it's not going through the standard job triggering process, reports won't pick up the new artifacts till it happens to run again for another reason [18:12] mgz: just point me at the files if you don't mind... no idea where to find that on jenkins either [18:19] http://data.vapour.ws/juju-ci/products/version-2525/local-deploy-vivid-amd64/build-394/console.log.gz [18:19] for those following along at home [18:19] and container.log.gz same path, big file [18:21] container.log is what we're interested in, and it's HUGE because it contains the same error repeated thousands of times :) [18:38] niedbalski: pong [18:50] mgz: any thoughts on whether this is applicable? https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1296459 [18:50] Bug #1296459: Upgrade from 2.8.0-0ubuntu38 to 2.8.95~2430-0ubuntu2 breaks LXC containers [18:51] meh.. just noticed the date on that [18:52] katco: not that directly, I'd expect [18:52] fragile apparmour profiles are always a possible though [18:52] mgz: what version of apparmor is on that machine? [18:52] mgz: this is the 1st interesting error in the log: [18:52] lxc-start 1426806676.710 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:183 - No such file or directory - failed to change apparmor profile to lxc-container-default [18:53] 2.9.1-0ubuntu9 [18:54] no apparmor updates pending, there is an lxc though [18:54] worth a try i suppose [18:54] while i continue looking into this [18:54] Bug #1441808 was opened: juju units attempt to connect to rsyslog tls on tcp port 6514 but machine 0 never installs required rsyslogd-gnutls package [18:54] Bug #1441811 was opened: juju-1.23beta3 breaks glance <-> mysql relation when glance is hosted in a container [18:55] dist-upgrading === kadams54_ is now known as kadams54-away === kadams54-away is now known as kadams54_ === FunnyLookinHat_ is now known as FunnyLookinHat [19:56] mgz: is that machine cycled often? [19:57] Bug #1441826 was opened: deployer and quickstart are broken in 1.24-alpha1 [19:59] katco: uptime says 6 days, cloudinit says the machine was created 15 mar [19:59] katco: I can restart now if desired [19:59] mgz: it's a shot in the dark, but if you don't mind [20:00] mgz: i've had issues with lxc where i had to cycle cgmanager [20:00] mgz: and research shows that power-cycles have fixed various issues for others with these types of errors [20:01] mgz: who is our resident app armor expert? [20:01] good question, maybe check in #ubuntu-sever ? [20:02] mgz: freenode or internal? [20:02] freenode, I guess there's an internal equiv === kadams54_ is now known as kadams54-away [20:27] mgz: did you cycle that machine and trigger a new run [20:27] ? [20:27] katco: yup [20:27] mgz: awesome ty [20:28] looks much the same though... [20:28] mgz: it was worth a try [20:28] mgz: btw isn't it super late for you? [20:28] late-ish, yeah :) [20:28] vmaas is a blessing, too bad the ammount of network magic I need to do to get it exported [20:58] Bug #1441808 changed: juju units attempt to connect to rsyslog tls on tcp port 6514 but machine 0 never installs required rsyslogd-gnutls package [21:15] team... 
[21:15] I have this bug from a stakeholder: [21:16] https://bugs.launchpad.net/juju-core/+bug/1287718 [21:16] Bug #1287718: jujud on machine 0 stops listening to port 17070/tcp WSS api [21:16] it needs some love, if someone has cycles [21:16] I will find "volunteers" if no one speaks up :) [21:18] * jw4 whistles while working really hard [21:21] :) === kadams54-away is now known as kadams54_ [21:45] thumper: ping [21:55] katco: hey [21:55] katco: I was just about to go and run an errand [21:55] katco: is this quick, or can we do it in abit? [21:55] thumper: rq [21:55] shoto [21:56] or... [21:56] shoot [21:56] thumper: i'm working on https://bugs.launchpad.net/juju-core/+bug/1441319 [21:56] Bug #1441319: failed to retrieve the template to clone: template container juju-trusty-lxc-template did not stop [21:56] thumper: going to have to hand off to you so i can work with axw tonight on some storage stuff [21:56] yeah... [21:56] thumper: it's well triaged, and we're engaged with hallyn in #canonical to further diagnose [21:56] thumper: so hopefully by the time i hand it off you'll have an easy go [21:57] ack [21:57] thumper: we can talk more when you have time... i should be on tonight === kadams54_ is now known as kadams54-away [23:02] axw: daily question since it's just us: can we push the stand-up back an hour? [23:02] katco: no worries, may need to be a bit later still if that's okay, since my wife will be getting ready for work an hour from now [23:03] axw: that's perfectly fine [23:03] axw: i'll eat dinner and give my kiddo a bath and catch up with you in a bit :) [23:03] axw: enjoy your family as well! [23:03] cool, ttyl [23:03] ttyl === kadams54-away is now known as kadams54_ [23:13] katco: did you work it out? I think I know what the problem is [23:13] katco: from not looking at the bug nor the code... [23:13] so an educated guess [23:15] hmm... reading the bug, seems to be different [23:18] comment added [23:25] Bug #1441899 was opened: jujud-machine-0 handles mongo errors poorly (and fails to start after a juju upgrade gone wrong) [23:25] Bug #1441904 was opened: juju upgrade-juju goes into an infinite loop if apt-get fails for any reason === kadams54_ is now known as kadams54-away === kadams54-away is now known as kadams54_ [23:47] thumper: o/ [23:47] thumper: so i think those errors were pulled from a vivid machine [23:48] katco: hangout? [23:48] thumper: sure, if you don't mind me popping out for a few mins in the middle [23:48] katco: that's fine [23:48] thumper: https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand?authuser=0&hceid=Y2Fub25pY2FsLmNvbV9pYTY3ZTFhN2hqbTFlNnMzcjJsaWQ5bmhzNEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t.q61hqsau8oh348d0dqmosuqilk [23:55] Bug #1441913 was opened: juju upgrade-juju failed to configure mongodb replicasets