[00:04] <axw> katco: just eating breakfast, 5 mins please
[00:04] <axw> katco: also, network has been going out still if it's not obvious...
[00:04] <katco> axw: haha, ok i'll email my question... please take your time and enjoy your breakfast
[00:17] <axw> katco: standup now?
[00:18] <katco> axw: sure
[00:19] <katco> axw: one sec sorry
[00:19] <axw> katco: sure
[00:20] <katco> axw: k omw
[02:20] <thumper> I'm likely to drop off at some time as I move from ADSL to VDSL
[02:21] <jw4> thumper: VDSL?
[02:21] <rick_h_> very dsl, it's like super powered
[02:21] <thumper> very fast
[02:21] <rick_h_> :P
[02:21] <jw4> haha
[02:21] <jw4> nice
[02:21] <rick_h_> more dsl than normal
[02:22] <thumper> I'm currently getting about 14 Mbps down and less than 1 up
[02:22] <thumper> VDSL should make it about 20/6
[02:22] <rick_h_> nice
[02:22] <jw4> sweet.
[02:22] <thumper> slightly faster down, much faster up
[02:22] <rick_h_> upgraded to 50/10 last week. Got sick of 1.5mb up
[02:22] <thumper> and the same modem they have given me will work for fibre when I get it in six months or so
[02:22] <jw4> I (briefly) considered moving to kansas city when google announced fiber there
[02:23] <thumper> which will make it over 150 symmetric
[02:23]  * rick_h_ dreams of 100 bidirectional
[02:23] <jw4> wow
[02:23] <thumper> well, you have to remember that this is within NZ
[02:23] <thumper> push anything over the wet noodle to the states and it slows down
[02:23] <rick_h_> lol
[02:23] <jw4> yeah, when you cross the Pacific you slow to a crawl I suppose
[02:24] <rick_h_> thumper: yea, remember that all these 'api calls' are going to go through London across oceans heh
[03:24] <katco> axw_: dummy charm is working; i'll try postgres tomorrow
[03:29] <katco> axw_: latest changes are in the PR. still not 100% done, but could use a look if you're bored
[03:29] <katco> good night everyone
[03:58] <thumper> wow, that made a big difference
[03:58] <thumper> locally, went from 14/0.8 Mbps to 33/9.5
[03:59] <thumper> and between here and VA, I get 7.76/5.12 Mbps
[03:59] <thumper> so a big improvement
[04:13] <axw_> katco: sweet.
[04:13] <axw_> thumper: :o nice
[04:13] <axw_> welcome to gigatown
[04:13] <axw_> ;)
[07:23] <jam> axw: did you see the question on the mailing list about a missed upgrade step related to AvailZone ?
[07:23] <jam> Do you know anything about it?
[07:32] <axw> jam: I saw it, but I don't. I think Eric worked on that one
[07:38] <jam> axw: eric is at pycon :(
[07:45] <voidspace> morning all
[07:47] <axw> voidspace: morning
[07:48] <voidspace> axw: o/
[07:48] <axw> jam: so, looks like if there's any instanceid in state that's invalid, the whole upgrade step will fail
[07:48] <axw> not sure if that's the cause tho
[07:48] <jam> axw: could that be triggered by containers?
[07:48] <jam> ISTR that used to be invalid machine ids
[07:49] <axw> jam: I don't think containers get an entry in instancedata, but not entirely sure
[07:51] <axw> actually, they must, that's where the HW info and all that is set
[07:52] <axw> I guess the instance ID is not something that MAAS will know about tho
[07:52] <jam> I'm just spitballing. I believe there were bugs in the past because destroy-environment would send a request to MAAS to destroy the instances that were actually containers.
[07:54] <axw> jam: yeah, I think you're right
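(A self-contained toy sketch of the workaround being discussed: skip container machines when asking the provider for availability zones, rather than letting one unknown instance ID fail the whole upgrade step. All names here are illustrative, not juju's actual upgrade code.)

    package main

    import "fmt"

    type machine struct {
        id            string
        containerType string // "" for provider-managed machines, "lxc"/"kvm" for containers
        instanceID    string
    }

    // backfillZones asks the provider for a zone per machine, skipping containers,
    // whose instance IDs the provider (e.g. MAAS) knows nothing about.
    func backfillZones(machines []machine, zoneFor func(instanceID string) (string, error)) (map[string]string, error) {
        zones := make(map[string]string)
        for _, m := range machines {
            if m.containerType != "" {
                continue // container: no provider zone to look up
            }
            zone, err := zoneFor(m.instanceID)
            if err != nil {
                return nil, fmt.Errorf("cannot get zone for machine %s: %v", m.id, err)
            }
            zones[m.id] = zone
        }
        return zones, nil
    }

    func main() {
        machines := []machine{
            {id: "0", instanceID: "node-0"},
            {id: "0/lxc/0", containerType: "lxc", instanceID: "juju-machine-0-lxc-0"},
        }
        zones, err := backfillZones(machines, func(id string) (string, error) { return "zone1", nil })
        fmt.Println(zones, err)
    }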
[07:58] <TheMue> voidspace: morning o/
[08:14] <mup> Bug #1441478 was opened: state: availability zone upgrade fails if containers are present <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1441478>
[08:22] <voidspace> TheMue: o/
[08:22] <voidspace> no dimitern (yet)?
[08:22] <TheMue> voidspace: hmm, he's offline on Friday, but today he should be here
[08:23] <voidspace> yeah, that's what I thought
[08:23] <voidspace> TheMue: dooferlad: in case I forget to mention it in standup, I'm off on Friday too
[08:23] <voidspace> going to a conference with my Dad on Friday, Saturday - flying to Nuremberg Sunday
[08:31] <TheMue> voidspace: dooferlad: based on current planning I'll be off tomorrow. organizing the new car, preparing my Go/Juju talk for the conference the week after the sprint, and wedding anniversary of my parents-in-law
[08:43] <voidspace> TheMue: ok
[09:32] <mup> Bug #1438683 was opened: Containers stuck allocating, interface not up <add-machine> <cloud-installer> <landscape> <maas-provider> <network> <juju-core:Fix Committed by mfoord> <juju-core 1.23:Fix Released by mfoord> <juju-core trunk:Fix Committed by mfoord> <https://launchpad.net/bugs/1438683>
[11:10] <TheMue> Hehe, that's what juju is for: http://dilbert.com/strip/2015-04-08
[11:13] <dimitern> I *really* hate how I'm forced to mock out 99% of a huge Environ interface just so I can test 2 method calls on it
[11:19] <TheMue> dimitern: yeah, it would be better if it were a combination of smaller interfaces, like io.ReadWriter, and we only used the small ones instead of the combination
[11:19] <TheMue> dimitern: so for a test one would only have to mock the corresponding smaller one
[11:21] <dimitern> TheMue, exactly! - so for example in order to get an Environ from environ config, I need a registered provider which only needs an Open() method, returning environs.Something which then has methods to get a full-featured interface, e.g. GetZoned -> ZonedEnviron
[11:22] <dimitern> or GetNetworking -> NetworkingEnviron
[11:23] <dimitern> granted, using finer-grained smaller interfaces for specific environ features will incur some overhead at run-time (type-asserting against a "feature sub-interface" before calling "feature-specific methods")
[11:24] <TheMue> dimitern: hmm, setting the smaller ones as parameter types is fine, but when Open() returns an Environ it always has to be a combination. but this at least could be done with a struct combining multiple smaller mocks
[11:28] <dimitern> TheMue, why it has to be a full Environ? Open can return a smaller "feature-facade" interface (e.g. GenericEnviron), which you can then use to access specific features (e.g. GenericEnviron.SupportZones() (ZonedEnviron, bool); SupportNetworking() (NetworkingEnviron, error))
[11:29] <dimitern> the problem is embedding smaller feature-based interfaces into a bigger "all-in-one" Environ interface - that thing should die at some point soon
[11:29] <TheMue> dimitern: oh, yes, that could be a good approach. it has to know those interfaces but not implement them. yeah, I like it
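(A minimal, runnable sketch of the split being proposed here: a small base interface plus optional "feature" interfaces discovered by type assertion. All names, GenericEnviron included, are illustrative rather than juju's real API; davecheney's snippet just below shows the same assertion inline.)

    package main

    import "fmt"

    // GenericEnviron is the small surface every provider must implement.
    type GenericEnviron interface {
        Config() map[string]string
    }

    // ZonedEnviron is an optional feature interface; only providers that
    // support availability zones implement it.
    type ZonedEnviron interface {
        GenericEnviron
        AvailabilityZones() ([]string, error)
    }

    type ec2Environ struct{}

    func (ec2Environ) Config() map[string]string            { return map[string]string{"type": "ec2"} }
    func (ec2Environ) AvailabilityZones() ([]string, error) { return []string{"us-east-1a"}, nil }

    func main() {
        var env GenericEnviron = ec2Environ{}
        // Callers that need zones type-assert against the feature interface
        // instead of depending on one huge Environ interface.
        if ze, ok := env.(ZonedEnviron); ok {
            zones, _ := ze.AvailabilityZones()
            fmt.Println("zones:", zones)
        } else {
            fmt.Println("provider does not support zones")
        }
    }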
[12:29] <davecheney> dimitern: use a type assertion
[12:29] <davecheney> e := someenviron()
[12:30] <davecheney> ze, ok := e.(ZonedEnviron)
[12:30] <davecheney> if !ok { // bummer, doesn't support zones }
[12:34] <dimitern> davecheney, I know, but I wanted to avoid the need to create something implementing Environ, just so I can call 2 methods on it
[12:35] <dimitern> davecheney, I did find a relatively cruft-free solution though: embed Environ and only implement the methods I need to test
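(The "embed the interface, implement only what you need" trick dimitern settled on, as a self-contained sketch with a cut-down illustrative Environ; any method the test double doesn't override simply panics if it is ever called, which is fine for a test that never calls it.)

    package main

    import "fmt"

    type Environ interface {
        Open() error
        Destroy() error
        AvailabilityZones() ([]string, error)
        // ... the real interface has many more methods
    }

    // fakeEnviron embeds the interface itself, so it satisfies Environ without
    // stubbing out every method; only the methods under test are overridden.
    type fakeEnviron struct {
        Environ
        zones []string
    }

    func (f *fakeEnviron) AvailabilityZones() ([]string, error) {
        return f.zones, nil
    }

    func main() {
        var env Environ = &fakeEnviron{zones: []string{"zone1", "zone2"}}
        zones, err := env.AvailabilityZones() // only the overridden method is called
        fmt.Println(zones, err)
    }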
[12:41] <davecheney> dimitern: what are the chances of having a serious discussion about refactoring the environ interface in Nuremberg?
[12:41] <davecheney> as in, a discussion that leads to a resolution and work in the next cycle
[12:41] <davecheney> not just another commitment to fix it someday
[12:43] <niedbalski> katco, ping
[12:43] <dimitern> davecheney, we already had an interesting discussion about it in Cape Town
[12:44] <dimitern> davecheney, by interesting I mean with actual implementable steps and a rough roadmap
[12:45] <dimitern> davecheney, so let's sit down, discuss it finally and get it done :)
[12:45] <davecheney> dimitern: +1
[12:45] <davecheney> dimitern: have you put it on the sprint discussion spreadsheet ?
[12:45] <niedbalski> axw, seems that commit https://github.com/juju/juju/commit/a16c5c3fd534e9457965b61621cbc2aca00cd21b adds leader election by default. Would it be possible to trigger a new -devel PPA build?
[12:46] <dimitern> davecheney, if not I'll do it now, while still frustrated :)
[12:46] <axw> niedbalski: best to talk with sinzui, mgz or abentley about that. I know nothing about the PPA builds
[12:47] <niedbalski> axw, yeah, will have to wait , thanks
[12:47] <davecheney> dimitern: that's best
[12:53] <dimitern> davecheney, done, I've added you to the list of attendees for it
[12:56] <davecheney> dimitern: are you sure you did ?
[12:56] <davecheney> i don't see my name there
[12:56] <dimitern> davecheney, which spreadsheet are you looking at?
[12:57] <davecheney> dimitern: which one are you looking at ?
[12:57] <davecheney> i'm looking at the correct one
[12:57] <davecheney> and so is everyone else
[12:57] <dimitern> davecheney, I've updated Juju MaaS Sprint Agenda - Nuremberg - April 2015
[12:57] <davecheney> fucking wonderful
[12:57] <davecheney> again we manage to have two planning documents for one event
[12:57] <davecheney> let's open the champagne
[12:57] <dimitern> :)
[12:57] <davecheney> dimitern: you have updated a document nobody on juju has access to
[12:58] <davecheney> dimitern: please give me the link to the document you are using
[12:58] <dimitern> I think the one I've updated is used for scheduling, based on the first one with the list of topics
[12:58] <TheMue> I thought it's the Nuremberg Sprint Topics/Planning Juju Core?
[12:58] <dimitern> I'll update both now
[12:58] <davecheney> dimitern: can we stop writing stuff down twice ?
[12:58] <davecheney> https://docs.google.com/a/canonical.com/spreadsheets/d/1TrPuHrWvnHU-Ekzt9SLEoWVaSV-f87kHOuLXqzjHnCc/edit#gid=0
[12:58] <TheMue> davecheney: +1
[12:58] <davecheney> this is the document
[12:58] <davecheney> any others should be deleted and merged into the correct document, https://docs.google.com/a/canonical.com/spreadsheets/d/1TrPuHrWvnHU-Ekzt9SLEoWVaSV-f87kHOuLXqzjHnCc/edit#gid=0
[13:02] <dimitern> davecheney, updated both now
[13:03] <davecheney> thanks
[13:14] <alexisb> davecheney, I have to get used to you talking in this timezone ;)
[13:16] <davecheney> then i'll change it up on you
[13:20] <alexisb> jam, you still around?
[13:20] <jam> alexisb: I am
[13:20] <alexisb> heya
[13:21] <alexisb> first off thank you for looking at lp 1441302
[13:21] <alexisb> I will follow-up with the QA team today re the env
[13:21] <alexisb> for the other bug you were looking at: https://bugs.launchpad.net/juju-core/+bug/1440737
[13:22] <jam> that's also timing related, just potentially higher sensitivity I think
[13:23] <alexisb> sounds like there are some test updates needed, do you think there is an actual bug in the code?
[13:23] <davecheney> alexisb: why is a test failure marked as a private bug
[13:24] <jam> both bugs look to be test related
[13:24] <alexisb> davecheney, that is a very good question
[13:24] <davecheney> it contains only information about a synthetic test run
[13:24] <davecheney> and is not related to a security issue
[13:25] <davecheney> fat finger issue ?
[13:25] <alexisb> davecheney, I believe so, but i have not actually verified with the bug submitter
[13:26] <davecheney> trust me, it's just a common garden test failure
[13:30] <jam> davecheney: I think CI prefers to make them private because there have been bugs in the past where Juju would leak secrets to the log file
[13:30] <jam> alexisb: ^^
[13:31] <davecheney> jam: this is a test run
[13:31] <davecheney> there are no secrets to leak
[13:34] <alexisb> katco, ping
[13:34] <wwitzel3> natefinch: 1 on 1?
[13:35] <natefinch> wwitzel3: yep
[13:58] <katco> alexisb: pong
[14:01] <alexisb> katco, sorry one sec
[14:01] <katco> alexisb: no worries
[14:03] <rogpeppe1> mgz: how can i determine if a given repo is managed by the juju bot or not?
[14:04] <mgz> rogpeppe1: from the github side? if it has hacker or bots as contributors
[14:04] <mgz> *collaborators
[14:05] <rogpeppe1> mgz: so juju/charm has Hackers under team collaborators, so that means i should use $$merge$$ there?
[14:05] <mgz> rogpeppe1: see github.com/orgs/juju/teams/hackers/repositories and same but /bots/
[14:06] <mgz> rogpeppe1: charm does not
[14:06] <mgz> but yeah, if it does
[14:06] <rogpeppe1> mgz: but it has hackers as collaborators...
[14:06] <mgz> rogpeppe1: I wanted to do charm, but it doesn't build with tips of deps when I tried, and has no dependencies.tsv
[14:06] <rogpeppe1> mgz: so... how can i determine... etc?
[14:07] <mgz> rogpeppe1: see the pages above^
[14:07] <rogpeppe1> mgz: juju/charm is in github.com/orgs/juju/teams/hackers/repositories
[14:07] <alexisb> katco, can you take a look at this bug: https://bugs.launchpad.net/juju-core/+bug/1441319
[14:07] <mup> Bug #1441319: failed to retrieve the template to clone: template container juju-trusty-lxc-template did not stop <ci> <lxc> <oil> <test-failure> <vivid> <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1441319>
[14:07] <rogpeppe1> mgz: is it just bots that i should be looking at?
[14:07] <alexisb> looks like you worked in 1.22
[14:08]  * katco looking
[14:08] <mgz> rogpeppe1: look at either, if it's in hackers you have to use the github merge button or similar, if it's in bots you need $$merge$$
[14:09] <rogpeppe1> mgz: you could quite easily (i hope) add a dependencies.tsv file to juju/charm by updating deps to those found in juju-core or charmstore first, then using that
[14:10] <mgz> rogpeppe1: I'd like to add more of these subrepos to the gated set, if you have any you want please say
[14:10] <mgz> there are just a few requirements/restrictions over what's needed to build/test
[14:10] <rogpeppe1> mgz: sure. some of them are gated by a rival bot
[14:10] <rogpeppe1> mgz: (e.g. charmstore)
[14:10] <mgz> rogpeppe1: the juju-gui bits are the same code (roughly) via a different jenkins
[14:11] <mgz> both are in the bots group
[14:19] <katco> alexisb: i remember this
[14:40] <katco> mgz: would you be able to tell me what "lxc-start --version" returns on whatever machine runs "local-deploy-vivid-amd64 (non-voting)"?
[14:41] <mgz> katco: sure, sec
[14:43] <TheMue> need some assistance regarding LogMatches. my test shows http://paste.ubuntu.com/10773293/ as output and I don't know why it fails.
[14:48] <mgz> katco: 1.1.0
[14:48] <katco> mgz: ty kindly
[14:48] <mgz> package version 1.1.0-0ubuntu1
[14:48] <katco> thanks
[14:50] <wwitzel3> TheMue: did you try printing out the length of each message and make sure it matches?
[14:51] <wwitzel3> TheMue: sometimes there is some hidden character and I know the last check the LogMatches does is a length check
[14:51] <TheMue> wwitzel3: to catch appended spaces? good idea
[14:52]  * natefinch grumbles about tests that do string matching on errors
[14:53] <TheMue> ah, that's why I hear it grumbling here
[14:55] <TheMue> wwitzel3: hmm, both are the same
[15:02] <wwitzel3> TheMue: :(
[15:02] <TheMue> wwitzel3: got a good hint by dimitern, the parens need escaping
[15:03] <natefinch> 17 stupid tests fail when you add more information to an error with errors.Annotatef.  Fantastic.
[15:03] <TheMue> yeeeeeeehaw, it passes
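(For anyone hitting the same failure: the expected messages are matched as regular expressions, which is why dimitern's hint works; literal parentheses are regex metacharacters and have to be escaped, either by hand or with regexp.QuoteMeta. A standalone illustration:)

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        logged := `machine-0: started unit wordpress/0 (charm cs:wordpress-42)`

        // The unescaped pattern fails: "(" and ")" form a capture group, not literal parens.
        naive := `machine-0: started unit wordpress/0 (charm cs:wordpress-42)`
        fmt.Println(regexp.MustCompile("^" + naive + "$").MatchString(logged)) // false

        // Escaping the literal text (regexp.QuoteMeta, or \( \) by hand) fixes it.
        escaped := regexp.QuoteMeta(logged)
        fmt.Println(regexp.MustCompile("^" + escaped + "$").MatchString(logged)) // true
    }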
[15:04] <perrito666> natefinch:  isn't that what tests are supposed to do?
[15:05] <natefinch> perrito666: no, we should be using a type system so that we know the right type of error is returned, not that the string the error serializes to is exactly the same
[15:06] <natefinch> perrito666: the actual text of the error message doesn't really matter
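(A small illustration of natefinch's point, using github.com/juju/errors since that is where the Annotatef he mentions comes from: assert on the error's kind rather than its exact text, and adding context no longer breaks the test.)

    package main

    import (
        "fmt"

        "github.com/juju/errors"
    )

    func lookup(name string) error {
        return errors.NotFoundf("unit %q", name)
    }

    func main() {
        err := lookup("mysql/0")

        // Annotating adds context to the message...
        err = errors.Annotatef(err, "refreshing status")

        // ...so a test comparing the exact string now fails:
        fmt.Println(err.Error()) // e.g. "refreshing status: unit \"mysql/0\" not found"

        // ...but a test checking the error kind keeps passing:
        fmt.Println(errors.IsNotFound(err)) // true
    }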
[15:14] <sinzui> hi jam, dimitern, natefinch: can we say that bug 1258485 is fix committed in 1.23-beta4 now that leader elections are not behind a feature flag?
[15:14] <mup> Bug #1258485: support leader election for charms <juju-core:Triaged> <cassandra (Juju Charms Collection):Triaged> <postgresql (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1258485>
[15:14] <dimitern> sinzui, I think so, yeah
[15:14] <sinzui> yeah
[15:14] <sinzui> yay
[15:14] <sinzui> \o/
[15:22] <natefinch> gah, and now another test *is* checking types, but just printing out the string in the test failure method.  Geez, people.
[15:25] <perrito666> man you really get angry with things badly done, you would die of a heart attack in my country :p
[15:26] <natefinch> heh
[15:26] <wwitzel3> lol
[16:07] <katco> mgz: when we create an lxc container for cloning, we create a log file for the container's console output... would it be possible to pull that from the test machine?
[16:09] <alexisb> thank you wwitzel3
[16:16] <mgz> katco: I can see
[16:16] <mgz> does it generally get removed by destroy-environment?
[16:17] <katco> mgz: hm... yes probably
[16:18] <katco> mgz: actually, it probably doesn't even exist. that's probably the issue
[16:18] <dimitern> voidspace, dooferlad, TheMue, please have a look - http://reviews.vapour.ws/r/1399/ subnets api server-side facade
[16:19] <dimitern> I think I managed to get a good balance between test coverage and the amount of unavoidable boilerplate I need to stub out environ/provider/state methods
[16:22] <mgz> katco: I can add any files you want to the list of local logs to copy before destroy-environment
[16:22] <mgz> but I'd expect *some* to already be present if it had worked at all, and there are none
[16:24] <mgz> older failures have the jenv, all-machines, machine-0 and cloud-init-output
[16:25] <katco> mgz: yeah on second thought i don't think it's necessary. it looks like if it exists, and we can read from it, the messages will be present in machine-#.log
[16:30] <natefinch> man I hate tests that rely on the order of items in a slice
[16:33] <wwitzel3> I think it is easier to keep track of what nate doesn't hate in tests
[16:34] <wwitzel3> much shorter list :)
[16:34] <jw4> wwitzel3: lol
[16:34] <natefinch> lol
[16:40] <lazyPower> o/ what's the trick to pass a port to juju add-machine when ssh is not on the standard port of 22?
[16:40] <lazyPower> juju add-machine ssh:user@host -P 2222 doesn't seem to be doing it
[16:41] <katco> mgz: ah wait, there are 2 log files: console.log & container.log... container.log would be useful to have i think. it contains stderr information from lxc-start
[16:42] <katco> mgz: and it doesn't appear that the logged information would be anywhere else
[16:43] <katco> mgz: actually, i'm wondering if this is desired, or is a dormant bug...
[16:44] <katco> https://github.com/juju/juju/blob/master/container/lxc/clonetemplate.go#L282-L283
[16:44] <katco> can i get another opinion on that? shouldn't the stderr of lxc-start go into the same place as console output? especially if we're only watching console output?
[16:49] <mgz> katco: hmm, yeah, that could get added
[16:52] <mgz> katco: /var/lib/juju/containers/juju-*-lxc-template/*.log ?
[16:53] <katco> mgz: looks right
[16:53] <katco> mgz: i think i might need that to troubleshoot further. command line arguments to lxc-start look sane. i think it's erroring out, yet remains running? need to see stderr
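(Not the actual clonetemplate.go code, just a hypothetical sketch of the alternative katco is asking about: point the spawned command's stderr at the console log that is already being watched, instead of a separate, uncollected container.log.)

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Hypothetical paths/flags for illustration only.
        console, err := os.Create("console.log")
        if err != nil {
            log.Fatal(err)
        }
        defer console.Close()

        cmd := exec.Command("lxc-start", "--name", "juju-trusty-lxc-template")
        cmd.Stdout = console
        cmd.Stderr = console // the question above: send stderr where we already watch, not a separate file
        if err := cmd.Run(); err != nil {
            log.Printf("lxc-start exited with error: %v", err)
        }
    }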
[16:53] <rogpeppe1> here's a change to allow deploying charms with authorization; any reviews appreciated: https://github.com/juju/juju/pull/2048/files
[16:53] <katco> mgz: any chance we could add those log files and then rerun vivid?
[16:55] <mgz> katco: on it
[16:55] <mgz> just checking if need to change how elevation works
[16:55] <katco> mgz: ty sir. i'm going to take this opportunity to fix lunch
[17:02] <lazyPower> to anyone following along re: add-machine with a non-standard ssh port - it appears we don't support this - http://paste.ubuntu.com/10774659/
[17:03] <lazyPower> unless i hear otherwise, i'll repost with a bug and bugger off :) thanks
[17:06] <lazyPower> https://bugs.launchpad.net/juju-core/+bug/1441749
[17:06] <lazyPower> cheers
[17:06] <mup> Bug #1441749: Add-Machine does not support non-standard ssh port <juju-core:New> <https://launchpad.net/bugs/1441749>
[17:12] <mup> Bug #1441749 was opened: Add-Machine does not support non-standard ssh port <juju-core:New> <https://launchpad.net/bugs/1441749>
[17:18] <jam> alexisb: what time is team lead meeting tomorrow/tonight?
[17:19] <jam> I think I see it in the afternoon. good for me.
[17:20] <alexisb> jam, yep, back to the old time
[17:24] <mgz> katco: only got the jenv, nothing else
[17:24] <katco> mgz: that's odd...
[17:24] <mgz> maybe collecting at the wrong point or something
[17:25] <mgz> oh, I typoed
[17:25] <mgz> >_<
[17:25] <katco> haha
[17:26] <mgz> katco: attempt #2
[17:26] <katco> :)
[17:26] <mgz> 20 mins
[17:27] <katco> np
[18:00] <voidspace> g'night all
[18:02] <niedbalski> abentley, ping
[18:02] <mgz> katco: finished, see #394
[18:03] <katco> mgz: ty
[18:07] <katco> mgz: i haven't quite figured this out yet... how do i get to 394? i don't see it on http://reports.vapour.ws/releases
[18:09] <mgz> katco: I'm manually triggering on jenkins, so am in there
[18:09] <katco> mgz: oh gotcha
[18:10] <mgz> you should have a dev login for the jenkins site?
[18:10] <mgz> I can also direct link you the files via data.vapour.ws
[18:10] <katco> mgz: i do not have a jenkins login
[18:10] <mgz> because it's not going through the standard job triggering process, reports won't pick up the new artifacts till it happens to run again for another reason
[18:12] <katco> mgz: just point me at the files if you don't mind... no idea where to find that on jenkins either
[18:19] <mgz> http://data.vapour.ws/juju-ci/products/version-2525/local-deploy-vivid-amd64/build-394/console.log.gz
[18:19] <mgz> for those following along at home
[18:19] <mgz> and container.log.gz same path, big file
[18:21] <katco> container.log is what we're interested in, and it's HUGE because it contains the same error repeated thousands of times :)
[18:38] <abentley> niedbalski: pong
[18:50] <katco> mgz: any thoughts on whether this is applicable? https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1296459
[18:50] <mup> Bug #1296459: Upgrade from 2.8.0-0ubuntu38 to 2.8.95~2430-0ubuntu2 breaks LXC containers <apparmor (Ubuntu):Fix Released by tyhicks> <https://launchpad.net/bugs/1296459>
[18:51] <katco> meh.. just noticed the date on that
[18:52] <mgz> katco: not that directly, I'd expect
[18:52] <mgz> fragile apparmor profiles are always a possibility though
[18:52] <katco> mgz: what version of apparmor is on that machine?
[18:52] <katco> mgz: this is the 1st interesting error in the log:
[18:52] <katco>       lxc-start 1426806676.710 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:183 - No such file or directory - failed to change apparmor profile to lxc-container-default
[18:53] <mgz> 2.9.1-0ubuntu9
[18:54] <mgz> no apparmor updates pending, there is an lxc though
[18:54] <katco> worth a try i suppose
[18:54] <katco> while i continue looking into this
[18:54] <mup> Bug #1441808 was opened: juju units attempt to connect to rsyslog tls on tcp port 6514  but machine 0 never installs required rsyslogd-gnutls package <juju-core:New> <https://launchpad.net/bugs/1441808>
[18:54] <mup> Bug #1441811 was opened: juju-1.23beta3 breaks glance <-> mysql relation when glance is hosted in a container <oil> <juju-core:New> <https://launchpad.net/bugs/1441811>
[18:55] <mgz> dist-upgrading
[19:56] <katco> mgz: is that machine cycled often?
[19:57] <mup> Bug #1441826 was opened: deployer and quickstart are broken in 1.24-alpha1 <ci> <regression> <juju-ci-tools:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/1441826>
[19:59] <mgz> katco: uptime says 6 days, cloudinit says the machine was created 15 mar
[19:59] <mgz> katco: I can restart now if desired
[19:59] <katco> mgz: it's a shot in the dark, but if you don't mind
[20:00] <katco> mgz: i've had issues with lxc where i had to cycle cgmanager
[20:00] <katco> mgz: and research shows that power-cycles have fixed various issues for others with these types of errors
[20:01] <katco> mgz: who is our resident AppArmor expert?
[20:01] <mgz> good question, maybe check in #ubuntu-server ?
[20:02] <katco> mgz: freenode or internal?
[20:02] <mgz> freenode, I guess there's an internal equiv
[20:27] <katco> mgz: did you cycle that machine and trigger a new run
[20:27] <katco> ?
[20:27] <mgz> katco: yup
[20:27] <katco> mgz: awesome ty
[20:28] <mgz> looks much the same though...
[20:28] <katco> mgz: it was worth a try
[20:28] <katco> mgz: btw isn't it super late for you?
[20:28] <mgz> late-ish, yeah :)
[20:28] <perrito666> vmaas is a blessing, too bad about the amount of network magic I need to do to get it exported
[20:58] <mup> Bug #1441808 changed: juju units attempt to connect to rsyslog tls on tcp port 6514  but machine 0 never installs required rsyslogd-gnutls package <logging> <juju-core:New> <https://launchpad.net/bugs/1441808>
[21:15] <alexisb> team...
[21:15] <alexisb> I have this bug from a stakeholder:
[21:16] <alexisb> https://bugs.launchpad.net/juju-core/+bug/1287718
[21:16] <mup> Bug #1287718: jujud on machine 0 stops listening to port 17070/tcp WSS api <cts> <cts-cloud-review> <mongodb> <state-server> <juju-core:Triaged> <https://launchpad.net/bugs/1287718>
[21:16] <alexisb> it needs some love, if someone has cycles
[21:16] <alexisb> I will find "volunteers" if no one speaks up :)
[21:18]  * jw4 whistles while working really hard
[21:21] <alexisb> :)
[21:45] <katco> thumper: ping
[21:55] <thumper> katco: hey
[21:55] <thumper> katco: I was just about to go and run an errand
[21:55] <thumper> katco: is this quick, or can we do it in a bit?
[21:55] <katco> thumper: rq
[21:55] <thumper> shoto
[21:56] <thumper> or...
[21:56] <thumper> shoot
[21:56] <katco> thumper: i'm working on https://bugs.launchpad.net/juju-core/+bug/1441319
[21:56] <mup> Bug #1441319: failed to retrieve the template to clone: template container juju-trusty-lxc-template did not stop <ci> <lxc> <oil> <test-failure> <vivid> <juju-core:Triaged by cox-katherine-e> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1441319>
[21:56] <katco> thumper: going to have to hand off to you so i can work with axw tonight on some storage stuff
[21:56] <thumper> yeah...
[21:56] <katco> thumper: it's well triaged, and we're engaged with hallyn in #canonical to further diagnose
[21:56] <katco> thumper: so hopefully by the time i hand it off you'll have an easy go
[21:57] <thumper> ack
[21:57] <katco> thumper: we can talk more when you have time... i should be on tonight
[23:02] <katco> axw: daily question since it's just us: can we push the stand-up back an hour?
[23:02] <axw> katco: no worries, may need to be a bit later still if that's okay, since my wife will be getting ready for work an hour from now
[23:03] <katco> axw: that's perfectly fine
[23:03] <katco> axw: i'll eat dinner and give my kiddo a bath and catch up with you in a bit :)
[23:03] <katco> axw: enjoy your family as well!
[23:03] <axw> cool, ttyl
[23:03] <katco> ttyl
[23:13] <thumper> katco: did you work it out? I think I know what the problem is
[23:13] <thumper> katco: from not looking at the bug nor the code...
[23:13] <thumper> so an educated guess
[23:15] <thumper> hmm... reading the bug, seems to be different
[23:18] <thumper> comment added
[23:25] <mup> Bug #1441899 was opened: jujud-machine-0 handles mongo errors poorly (and fails to start after a juju upgrade gone wrong) <juju-core:New> <https://launchpad.net/bugs/1441899>
[23:25] <mup> Bug #1441904 was opened: juju upgrade-juju goes into an infinite loop if apt-get fails for any reason <juju-core:New> <https://launchpad.net/bugs/1441904>
[23:47] <katco> thumper: o/
[23:47] <katco> thumper: so i think those errors were pulled from a vivid machine
[23:48] <thumper> katco: hangout?
[23:48] <katco> thumper: sure, if you don't mind me popping out for a few mins in the middle
[23:48] <thumper> katco: that's fine
[23:48] <katco> thumper: https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand?authuser=0&hceid=Y2Fub25pY2FsLmNvbV9pYTY3ZTFhN2hqbTFlNnMzcjJsaWQ5bmhzNEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t.q61hqsau8oh348d0dqmosuqilk
[23:55] <mup> Bug #1441913 was opened: juju upgrade-juju failed to configure mongodb replicasets <juju-core:New> <https://launchpad.net/bugs/1441913>