[00:25] <menn0> ericsnow: ping?
[00:25] <ericsnow> menn0: hey
[00:26] <menn0> ericsnow: quick question... why is the vsphere provider set up to not build under gccgo or go 1.2?
[00:27] <ericsnow> menn0: dependencies
[00:27] <menn0> ericsnow: i'm reviewing the rackspace provider PR and the sshclient from vsphere has been moved and generalised to support it and the build tags came along
[00:27] <ericsnow> menn0: (govmomi requires 1.3+)
[00:27] <menn0> ericsnow: but the ssh client should work fine?
[00:27] <ericsnow> menn0: yep
[00:27] <menn0> ericsnow: ok, i'll mention that the build tag should go
[00:27] <ericsnow> menn0: it's just the Go bindings to the vsphere API that were the problem
[00:28] <menn0> ericsnow: thanks
[00:28] <ericsnow> menn0: np
[00:28] <menn0> ericsnow: actually, one more thing, the sshclient code assumes the host is Linux so won't work with windows hosts. is this ok/expected? (i know nothing about vsphere)
[00:29] <ericsnow> menn0: for vsphere I expect it's okay
[00:29] <menn0> ericsnow: any idea when it comes to rackspace? this PR is using the same code there.
[00:30] <ericsnow> menn0: rackspace is openstack under the hood
[00:32] <menn0> ericsnow: and openstack does appear to support windows hosts, but I don't know about rackspace
[00:32] <menn0> ericsnow: i'll raise it as a possible issue. thanks.
[00:32] <ericsnow> menn0: k, cool
[01:08] <natefinch> katco, ericsnow, wwitzel3: evening
[01:09] <katco> natefinch: o/
[01:09] <katco> natefinch: i'm in moonstone. eric and wayne are taking a break
[02:20] <wallyworld> thumper: have you forward ported bug 1468581?
[02:20] <mup> Bug #1468581: juju bootstrap fails - Waiting for API to become available ERROR cannot get all blocks: EOF <api> <bootstrap> <oil> <juju-core:In Progress by thumper> <juju-core 1.24:Fix Released by thumper> <https://launchpad.net/bugs/1468581>
[02:20] <thumper> I think so...
[02:20] <thumper> let me check
[02:20] <wallyworld> axw: and bug 1474614
[02:20] <mup> Bug #1474614: rsyslog connections fail with certificate verification errors after upgrade to 1.24.2 <regression> <juju-core:Triaged by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1474614>
[02:20] <wallyworld> and there's a few from menno as well
[02:21] <wallyworld> menn0 even
[02:21] <axw> wallyworld: from the bug, "Forward-porting fix to 1.25 is currently blocked on other changes to the rsyslog worker being forward ported (i.e. the change that the upgrade step is concerned with)."
[02:21] <wallyworld> ah ok
[02:21] <wallyworld> didn't read the bug :-)
[02:21] <axw> wallyworld: I'll look at doing that after I finish up what I'm working on
[02:22] <wallyworld> just the milestone page
[02:22] <wallyworld> ty
[02:22] <wallyworld> we're aiming to cut 1.25 alpha 1 real soon
[02:22] <wallyworld> hence the follow up
[02:22] <menn0> wallyworld: i'm planning on doing the forward port for bug 1474195 this afternoon
[02:22] <mup> Bug #1474195: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474195>
[02:23] <wallyworld> awesome tyvm
[02:23] <menn0> wallyworld: actually, I lie. I have the PR ready for that but want Will's review (have emailed him)
[02:23] <thumper> wallyworld: no
[02:23] <menn0> wallyworld: i'm doing bug 1474606 this afternoon
[02:23] <wallyworld> menn0: and the other 2 against you?
[02:23] <mup> Bug #1474606: $set updates may clear out the env-uuid field <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474606>
[02:23] <thumper> wallyworld: I must have forgotten
[02:23] <thumper> bad me
[02:23]  * wallyworld spanks thumper
[02:24] <thumper> oi
[02:24] <thumper> none of that
[02:24] <menn0> wallyworld: bug 1474588 is nowhere near investigated let alone fixed
[02:24] <mup> Bug #1474588: Many hook failures after upgrade <regression> <juju-core:Triaged by menno.smits> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1474588>
[02:25] <wallyworld> menn0: the other one i was referring to is marked in progress: bug 1457225
[02:26] <wallyworld> wake up mup bug 1457225
[02:27] <wallyworld> oh right, you answer that
[02:27] <wallyworld> sorry
[02:51] <menn0> wallyworld: mup knew that :)
[02:52] <mup> Bug #1476895 opened: ec2: auto-created EBS volumes are not tagged <ec2-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1476895>
[02:58] <mup> Bug #1476895 changed: ec2: auto-created EBS volumes are not tagged <ec2-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1476895>
[03:16] <mup> Bug #1476895 opened: ec2: auto-created EBS volumes are not tagged <ec2-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1476895>
[03:34] <katco> ericsnow: wwitzel3: if by any chance you're lurking around, natefinch and i are in moonstone
[03:37] <thumper> menn0: ignore http://reviews.vapour.ws/r/2234/
[03:37] <axw> wallyworld: would you PTAL at https://github.com/go-goose/goose/pull/12
[03:38] <wallyworld> sure
[03:38] <menn0> thumper: happily ignoring :)
[03:39] <wallyworld> axw: +1
[03:40] <axw> wallyworld: ta
[03:59] <menn0> wallyworld: cherry picking the fixes for bug 1474606 is proving difficult because it depends on the fixes for bug 1474195 (which i'm waiting on a review from will on)
[03:59] <mup> Bug #1474606: $set updates may clear out the env-uuid field <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474606>
[03:59] <mup> Bug #1474195: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474195>
[03:59] <menn0> wallyworld: can it wait until tomorrow?
[04:03] <wallyworld> menn0: for both? can wait if necessary
[04:04] <wallyworld> i'll ping william later to make sure he reviews
[04:13] <menn0> wallyworld: thanks. the second one doesn't need will's review, just the one that's up.
[04:13] <menn0> wallyworld: i've emailed him about it as well.
[04:18] <wallyworld> ty
[04:25] <mup> Bug #1457797 changed: Juju bootstrap doesn't work behind proxy <bootstrap> <juju> <juju-core:Expired> <https://launchpad.net/bugs/1457797>
[04:40] <mup> Bug #1457797 opened: Juju bootstrap doesn't work behind proxy <bootstrap> <juju> <juju-core:Expired> <https://launchpad.net/bugs/1457797>
[04:46] <mup> Bug #1457797 changed: Juju bootstrap doesn't work behind proxy <bootstrap> <juju> <juju-core:Expired> <https://launchpad.net/bugs/1457797>
[04:46] <mup> Bug #1476918 opened: juju switch doesn't work without an environments.yaml file <cli> <jes> <juju-core:Triaged> <https://launchpad.net/bugs/1476918>
[04:59] <axw> wallyworld: another small review if you don't mind: https://github.com/go-amz/amz/pull/56
[04:59] <wallyworld> sure
[05:18] <wallyworld> jam: morning, would you have time soonish to talk about resources? in maybe 15 minutes when anastasia gets back from school pickup?
[05:40] <jam> wallyworld: sure, are you available now?
[05:41] <wallyworld> jam: almost, just waiting to hear back from anastasia who is caught in traffic. may have to defer if it gets too close to my school pickup time
[05:41] <jam> k
[06:06] <wallyworld> jam: you free now?
[06:06] <jam> wallyworld: sure
[06:06] <wallyworld> https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
[06:09]  * thumper looks around for urulama
[08:11] <mup> Bug #1476996 opened: Network communication failed during juju bootstrap <juju-core:New> <https://launchpad.net/bugs/1476996>
[08:47] <mup> Bug #1477010 opened: provider/openstack: volumes may not attach if instance takes a long time to provision <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1477010>
[09:00] <dooferlad> fwereade: hangout?
[09:38] <frankban_> ocr: quick deps update http://reviews.vapour.ws/r/2238/ thanks!
[10:15] <fwereade> wallyworld, you still around?
[10:17] <fwereade> wallyworld, I'm wondering whether we need to read the whole previous status doc every time we set
[10:18] <fwereade> wallyworld, could we just write the same data to both the watched and the raw collections every time?
[10:18] <fwereade> wallyworld, (and fwiw I think it might be more reliable?)
[10:38]  * fwereade was doing some python the other day and just spent *far* too long wondering why gofmt didn't like "def"
[10:41] <natefinch> lol
[10:44] <bogdanteleaga> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1287949
[10:44] <mup> Bug #1287949: {image,tools}-metadata-url not usable w/ ec2 provider <arm64> <config> <hs-arm64> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1287949>
[10:57] <wallyworld> bogdanteleaga: by design, we currently only allow signed image metadata for ec2. i don't know the reason for that decision
[10:58] <wallyworld> if the metadata were to be signed, it would work
[10:58] <bogdanteleaga> yes, but as per the guy above me in the bug report
[10:59] <bogdanteleaga> there's no way of specifying a key
[10:59] <bogdanteleaga> so it has to be official?
[10:59] <fwereade> wallyworld, bogdanteleaga: nor do I, but it crosses my mind that we want to be sure that anything we think is the official source is signed with our key
[11:00] <wallyworld> fwereade: that is true, but for ec2 it currently requires user specified image metadata to be signed
[11:00] <bogdanteleaga> definitely makes sense, but then there's no way of specifying custom images
[11:00] <fwereade> wallyworld, bogdanteleaga: I can't see any reason not to allow either additional keys or to allow unsigned -- so long as we check our source
[11:00] <wallyworld> and i don't know why
[11:00] <fwereade> wallyworld, nor do I but I strongly suspect it was an all-we-had-time-for deal
[11:01] <wallyworld> maybe because - can people add custom images to ec2?
[11:01] <fwereade> wallyworld, yes, and IMO that's a good thing
[11:01] <wallyworld> maybe the reasoning was, well if you can't add custom images....
[11:01] <bogdanteleaga> no, but there's no way of using a ec2 AMI that's not specified on cloud-images.ubuntu.com
[11:01] <bogdanteleaga> i.e. a windows ami
[11:01] <wallyworld> this was all done prior to windows :-)
[11:02] <wallyworld> so the code needs changing for windows
[11:02] <bogdanteleaga> yeah, I'm aware and I do understand
[11:02] <fwereade> wallyworld, bogdanteleaga: I strongly agree that is a bug and it's symptomatic of a wider one -- we have metadata generate-image, but a cloud configured to never work with it?
[11:02] <bogdanteleaga> was just signaling it's an actual bug and it's getting more important
[11:02] <wallyworld> fwereade: ec2 was configured to only use signed metadata before generate metadata came along
[11:03] <wallyworld> generate metadata was only for openstack
[11:03] <wallyworld> for private clouds
[11:03] <bogdanteleaga> and I wasn't aware it was a design decision until now
[11:03] <wallyworld> where there were no official images
[11:03] <wallyworld> bogdanteleaga: it made sense at the time because there were no unofficial images possible
[11:03] <fwereade> wallyworld, heh, not in my mind, I always thought that the ability to specify custom images was important, just that the way we were originally doing it was badwrongevil
[11:04] <wallyworld> fwereade: at the time, everything was based on simplestreams - that was our language
[11:04] <wallyworld> and only openstack had the need for custom images
[11:05] <wallyworld> and ec2 i think deliberately precluded them
[11:05] <fwereade> wallyworld, custom images are super-cool all over the place
[11:05] <bogdanteleaga> ok, so for a fix: do we want to allow unsigned or to allow specifying a key?
[11:05] <fwereade> wallyworld, if we *just* let people do this we can have the big data folks deploying an image with their JVM already in place
[11:05] <fwereade> wallyworld, and cut their deployment times like anything
[11:05] <wallyworld> fwereade: i agree, but at the time it was impossible to have custom images for aws
[11:06] <wallyworld> so why support something impossible
[11:06] <fwereade> wallyworld, I am certain that I was building custom images for aws before even ensemble was a thing
[11:06] <wallyworld> i'm guessing - maybe it was a policy decision
[11:06] <wallyworld> not sure now
[11:06] <wallyworld> bogdanteleaga: allow unsigned i think
[11:07] <wallyworld> to be consistent with openstack
[11:07] <bogdanteleaga> wallyworld: so I'm guessing ec2 is asking for signed only right? I was disabling it for everything for testing
[11:07] <wallyworld> fwereade: to answer your other question - i'm not sure why the whole status record is written each time - hysterical reasons?
[11:07] <wallyworld> bogdanteleaga: from memory there's a bool constant
[11:08] <fwereade> wallyworld, no, read each time, for copying into hysterical statues
[11:08] <fwereade> wallyworld, bogdanteleaga: it *would* IMO be best to allow specification of acceptable keys as well
[11:08] <wallyworld> fwereade: that's a bigger change, but sure
[11:09] <wallyworld> but do we really need that straight up?
[11:09] <fwereade> wallyworld, bogdanteleaga: and that feels like something that shouldn't take too much per-env configuration
[11:10] <fwereade> wallyworld, I am encouraging bogdanteleaga to do it, because apart from anything else I think security-conscious people *will* want to use signed simplestreams, and we're not going to sign the metadata for every OS we can deploy
[11:11] <wallyworld> i agree, just ensuring it's the right time to do it
[11:11] <bogdanteleaga> how about I enable the unsigned one, and add a todo+bug report for doing signed with key
[11:11] <fwereade> wallyworld, bogdanteleaga: that sgtm
[11:11] <wallyworld> that's what i was expecting to happen
[11:11] <fwereade> wallyworld, bogdanteleaga: indeed
[11:12] <fwereade> bogdanteleaga, please talk it through with gsamfira_ though, I think it will be relevant if we expect Serious People to deploy a bunch of different OSs with juju
[11:13] <fwereade> wallyworld, anyway re status
[11:13] <fwereade> wallyworld, what I really want to do
[11:13] <fwereade> wallyworld, is have every setstatus method be
[11:14] <fwereade> go setHistoricalStatus(key, doc) // logs but ignore errors
[11:14] <fwereade> ...and then to just set txnal status as usual
[11:15] <fwereade> wallyworld, if I *just* do that, we'll miss one historical value around an upgrade
[11:15] <wallyworld> fwereade: looking at the code, it seems that it is excluding the actual current status from the history?
[11:15] <fwereade> wallyworld, and if I take on an upgrade step as well I will feel I've strayed too far from "enable leadership"
[11:16] <fwereade> wallyworld, hmm, hadn't looked there
[11:16] <fwereade> wallyworld, any particular reason?
[11:16] <fwereade> wallyworld, seems surprising
[11:16] <wallyworld> maybe "history" was taken as literally meaning in the past
[11:16] <wallyworld> but yeah surprising
[11:16] <fwereade> wallyworld, <pedant>latest known status is still part of history because it was set in the past
[11:17] <wallyworld> fwereade: i can't see a reason not to make the change you suggest
[11:17] <fwereade> wallyworld, cool, thanks
[11:19] <fwereade> wallyworld, oh hell, status history is still txnal in 1.24
[11:19] <fwereade> wallyworld, shouldn't it be backported?
[11:19] <wallyworld> damn, i'll ask horatio to backport - i thought it was done there
[11:20] <fwereade> wallyworld, no worries, thanks
[11:20] <wallyworld> sorry
[11:20] <fwereade> wallyworld, that's the trouble with branches, so many exciting opportunities to miss stuff
[11:21] <wallyworld> yeah
[11:21] <wallyworld> and we branched 1.24 off too early
[11:21] <wallyworld> way too early
[11:44] <fwereade> wallyworld, that too
[11:45] <fwereade> wallyworld, if you're still here: if I were to make SetStatus accept a StatusInfo, and apiserver responsible for setting the time, would that be a Bad Thing?
[11:46] <fwereade> wallyworld, I probably won't either way, am trying not to distract myself
[11:46] <wallyworld> i'd rather apiserver not contain any business logic
[11:46] <wallyworld> business logic should be in a separate service layer
[11:47] <wallyworld> other services may call the status service to set the status
[11:47] <wallyworld> and we'd want the status service to set the time
[11:48] <fwereade> wallyworld, so, apiserver->model->persistence, and the rules in model?
[11:48] <wallyworld> those other services may be co-located, hence the network layer would be short circuited
[11:48] <wallyworld> yes
[11:49] <wallyworld> i'd characterise it as apiserver->business_services->domain_model->persistence
[11:49] <wallyworld> business services sit on an enterprise bus
[11:50] <fwereade> well, that implies the model knowing about persistence rather than vice versa, but that's an aside
[11:50] <wallyworld> they operate on a domain model, the model is independent of persistence
[11:50] <fwereade> right
[11:50] <wallyworld> yea, the linear representation doesn't work
[11:50] <wallyworld> and the enterprise bus would provide pubsub, rpc, service discovery etc
[11:51] <fwereade> wallyworld, so something akin to https://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html
[11:51] <fwereade> wallyworld, or not?
[11:51] <wallyworld> looking
[11:51] <fwereade> wallyworld, because the representation as services doesn't *necessarily* fit
[11:52] <wallyworld> yeah, my concept doesn't quite fit that particular model
[11:53] <fwereade> wallyworld, and my current background thoughts are directed towards extracting an in-memory model layer, and moving business-rule responsibilities into there from state (but state still referencing the model)
[11:53] <fwereade> wallyworld, the main driver for this is the uncommitted-state/transactionality stuff
[11:53] <wallyworld> agree with the above, need to think about the bit in brackets
[11:54] <wallyworld> by state, do you mean "persistence"?
[11:54] <fwereade> wallyworld, in which I contend that having a dependency-free model representation, with a *single* and authoritative implementation of the business rules, is critical for composing and checking sanity of large logical operations without reference to db state
[11:54] <fwereade> wallyworld, yeah
[11:55] <fwereade> wallyworld, although it's sorta tricky because it's still going to have a bunch of consistency concerns that are hard enough to deserve thought
[11:55] <fwereade> wallyworld, so, "state" as shorthand for "what's in state today, referencing business rules defined elsewhere"
[11:55] <fwereade> state without the business rules
[11:55] <wallyworld> +1 for getting business rules right out of state
[11:56] <wallyworld> that would be a big win
[11:56] <wallyworld> for not that much effort in the big scheme of things
[11:56] <wallyworld> ideally, persistence would be abstracted so it were pluggable
[11:56] <wallyworld> that would help ensure a clean design
[11:57] <fwereade> wallyworld, well, if you have insights, I want you to share them, because... *all* our business rules are encoded in dynamically generated strings of transaction operations
[11:57] <wallyworld> fwereade: that's an artifact of our current implementation, and sadly ties us to mongo, and sadly weaves separate concerns together
[11:57] <fwereade> wallyworld, and I need a sane and comprehensible model of the world, its changes, and the conditions those changes require -- that I can effectively render down into txn operations
[11:58] <wallyworld> fwereade: we should be able to hand off model changes to a persistence layer, over a well defined interface boundary
[11:59] <wallyworld> we should nut this out next week
[11:59] <wallyworld> when i am less tired
[11:59] <fwereade> wallyworld, sgtm :)
[11:59] <wallyworld> flagging a bit atm
[11:59] <fwereade> wallyworld, sorry to keep you up
[11:59] <wallyworld> np, only 10pm but i'm tired
[11:59] <wallyworld> brain needs rest
[12:00] <wallyworld> hard to give decent answers and think deeply about complex issues
[12:00] <wwitzel3> it's best to just avoid it ;)
[12:26] <perrito666> morning
[12:31] <TheMue> heya perrito666
[12:41] <jam> fwereade: did I completely miss you guys on actions?
[12:41] <jam> jw4: ^^
[12:42] <jam> I realize I'm late coming out of the last meeting.
[12:42] <fwereade> jam, heyhey, jw4 is having mic troubles
[12:42] <jam> ah
[12:51] <natefinch-afk> nick natefinch
[12:51] <natefinch> heh
[12:51] <TheMue> *lol*
[12:57] <natefinch> man, I *love* that we made workload processes use plugins and not hardcoded stuff from juju-core.  It makes it trivial to produce specific fake technology plugins that mimic real ones without having to muddy juju-core codebase
[13:44] <mgz> mattyw: bug 1477157
[13:44] <mup> Bug #1477157: Broken windows dependencies <blocker> <ci> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1477157>
[13:44] <mup> Bug #1477157 opened: Broken windows dependencies <blocker> <ci> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1477157>
[13:45] <mattyw> mgz, ah crap, ack, I'll fix it
[13:45] <mgz> ta!
[13:52] <mattyw> mgz, https://github.com/juju/juju/pull/2860
[13:56] <dooferlad> TheMue: This is what I have so far for CreateSpace: https://github.com/juju/juju/compare/net-cli...dooferlad:net-cli-api-spaces-create?expand=1
[13:57] <mgz> mattyw: shipit
[13:57] <TheMue> dooferlad: one moment, will take a look in a few secs
[13:58] <dooferlad> TheMue: I am not doing any param checking in CreateSpace on the client side since the CLI already does that. This is also the case in api/action/client.go, but I wanted to start that conversation early.
[13:59] <TheMue> dooferlad: spaces.go line 34 has a typo
[14:00] <dooferlad> TheMue: wasn't expecting a full code review, I just want to discuss client side params checking.
[14:00] <TheMue> dooferlad: only seen it while flying over the code, eagle eyes *cough* *cough*
[14:02] <TheMue> dooferlad: what do you think, could it make sense to rename the params type now, from add to create? simply using the opportunity
[14:02] <dooferlad> TheMue: sure
[14:04] <TheMue> dooferlad: and when adding the subnetIds as tags into the params, don't build the slice with append; make it with the right length and use the index
[14:06] <TheMue> dooferlad: rest looks fine so far, I maybe would move testCreateSpace as a closure into TestCreateSpace.
[14:06] <fwereade> perrito666, did wallyworld mention backporting status-history txnality to 1.24?
[14:07] <TheMue> dooferlad: a very good example for this style of testing is dimitern's instancepoller
[14:07] <fwereade> perrito666, should be simple, 1.24 has state.Database now
[14:07] <perrito666> fwereade: yup. already assigned me a card :)
[14:07] <fwereade> perrito666, cool
[14:07] <fwereade> perrito666, would you ping me when it's ready to review, I will at some stage become blocked on it
[14:07] <perrito666> fwereade: sure
[14:08] <perrito666> fwereade: want to tell me more?
[14:08] <fwereade> perrito666, cheers
[14:08] <fwereade> perrito666, I have to do gated service.SetStatus
[14:09] <fwereade> perrito666, which means I need to at least touch the status-history storage
[14:09] <fwereade> perrito666, and I would rather someone else had that change in place before I got involved in the details
[14:10] <perrito666> fwereade: ill do as fast as possible :)
[14:10] <fwereade> perrito666, oh, and if you're doing that, would you please make the updateStatusHistory signature match all the other similar funcs? (st, globalkey, doc)
[14:11] <perrito666> fwereade: k
[14:12] <dooferlad> TheMue: http://reviews.vapour.ws/r/2241/
[14:22] <TheMue> dooferlad: reviewed
[14:22] <dooferlad> TheMue: Thanks!
[14:23] <TheMue> dooferlad: yw
[15:30] <mattyw> mgz, can we close this yet? https://bugs.launchpad.net/juju-core/+bug/1477157
[15:30] <mup> Bug #1477157: Broken windows dependencies <blocker> <ci> <regression> <windows> <juju-core:In Progress by mattyw> <https://launchpad.net/bugs/1477157>
[15:31] <mattyw> bogdanteleaga, yeah - totally forgot to do the GOOS=windows thing
[15:31] <mattyw> bogdanteleaga, copied and pasted an old script :(
[15:31] <mgz> mattyw: really the process is to let ci bless the branch and close the bug itself
[15:32] <mattyw> mgz, ack
[15:34] <bogdanteleaga> mattyw: heh, I remember doing that a few times, but I got scared when I did git diff :)
[15:39] <mattyw> bogdanteleaga, I saw it in the diff, and got over excited by deps being removed
[15:49] <bogdanteleaga> mattyw: haha
[16:15] <dooferlad> TheMue: could you take a look again at that review?
[16:26] <mgz> okay, new run on 1.24 branch has started
[16:34] <perrito666> if you ever feel bad about a hack https://twitter.com/adrianchm/status/623485623538712576
[16:47] <natefinch> perrito666: lol, that's amazing
[17:24] <mup> Bug #1476996 changed: Network communication failed during juju bootstrap <bootstrap> <mongodb> <juju-core:New> <https://launchpad.net/bugs/1476996>
[18:29] <mgz> juju upgrade on windows just passed for the first time.
[18:29]  * perrito666 cries
[18:48] <wwitzel3> ericsnow: ping
[18:49] <ericsnow> wwitzel3: hey
[18:49] <wwitzel3> ericsnow: never mind, I'm going to look at the board
[18:49] <ericsnow> wwitzel3: k
[18:50] <wwitzel3> ericsnow: we have an issue with the charm in that without having destroy available .. we can't really do much other than add-relation
[18:50] <wwitzel3> katco: ^
[18:51] <katco> wwitzel3: ah, because docker containers are static and need to be torn down anytime something changes?
[18:51] <wwitzel3> katco: right
[18:51] <ericsnow> wwitzel3: you could add a basic proc-destroy script to the charm that calls juju-process-docker destroy...
[18:54] <ericsnow> wwitzel3: in case you missed it, I didn't merge that --extends patch (we didn't get it quite right)
[18:54] <katco> wwitzel3: ericsnow: natefinch: wondering how critical it is to get that right just for the demo. we probably only need the container to spin up once, yeah?
[18:55] <katco> wwitzel3: ericsnow: natefinch: i.e. i don't know if it's worth the time to write a destroy script
[18:56] <ericsnow> katco: well, the script would just be a one-liner call to the plugin
[18:56] <wwitzel3> I don't even have to make a script
[18:56] <wwitzel3> I can just call it directly from the charm
[18:56] <wwitzel3> which I just tested, it works
[18:56] <ericsnow> wwitzel3: true
[18:57] <katco> wwitzel3: pft. over achiever
[18:58] <wwitzel3> I'm doing an end-to-end of the latest charm now, no intervention, will see how it goes
[18:58]  * katco crosses fingers
[19:00] <wwitzel3> katco: it worked here, so please give it a try
[19:00] <wwitzel3> ericsnow: how did we get the parseUpdates wrong? ..
[19:00] <wwitzel3> ericsnow: nvm I'll look at the review
[19:00] <katco> wwitzel3: cool, pull from gh?
[19:00] <wwitzel3> katco: yep
[19:01] <ericsnow> wwitzel3: we weren't supposed to apply the updates to c.info
[19:01] <ericsnow> wwitzel3: I left a review comment
[19:06] <mup> Bug #1477263 opened: Enabling allow-lxc-loop-mounts can cause error when destroying an environment <juju-core:New> <https://launchpad.net/bugs/1477263>
[19:16] <wwitzel3> ericsnow: so when I implement the changes you suggested, our test suite fails all over the place :(
[19:17] <ericsnow> wwitzel3: :(
[19:17] <ericsnow> wwitzel3: likely due to s.setMetadata calls
[19:18] <ericsnow> wwitzel3: I can take over that patch if you need to work on the charm
[19:18] <wwitzel3> ericsnow: that would be great
[19:18] <ericsnow> wwitzel3: k
[19:36] <katco> wwitzel3: hm. i think i'm doing something wrong
[19:36] <katco> wwitzel3: i deploy wordpress-wpm, i deploy mysql
[19:36] <katco> wwitzel3: i relate them
[19:36] <katco> wwitzel3: but wordpress-wpm is stuck in the install hook
[19:36] <wwitzel3> katco: the install can take a while .. since we pull the image down
[19:36] <wwitzel3> katco: how long has it been stuck?
[19:39] <katco> wwitzel3: most likely longer than it should be
[19:39] <katco> wwitzel3: like... 15m maybe? i dunno i'm hopping around a lot
[19:39] <katco> wwitzel3: actually more like 25 doing some timestamp math
[19:40] <katco> wwitzel3: what order do you do things in?
[19:40] <wwitzel3> katco: it shouldn't matter :)
[19:40] <katco> wwitzel3: very true :)
[19:40] <wwitzel3> katco: sounds like the install hook is getting hungup on the pull
[19:40] <katco> wwitzel3: debug-log just spams: unit-wordpress-wpm-0[2980]: 2015-07-22 19:40:44 INFO unit.wordpress-wpm/0.install logger.go:40
[19:41] <katco> wwitzel3: empty log message
[19:41] <katco> wwitzel3: so i'm not missing any steps? just need to relate the two, yeah?
[19:42] <wwitzel3> yep
[19:43] <wwitzel3> katco: I'm doing a fresh deploy, see if I get that same install hook hanging issue
[19:43] <katco> wwitzel3: k, i'm retrying as well
[19:48] <katco> wwitzel3: weird... docker.io is consuming most of the cpu
[19:48] <katco> wwitzel3: but the image isn't listed
[19:51] <katco> wwitzel3: any ideas why docker.io would be spinning? never seen that
[20:02] <wwitzel3> katco: no idea, my install hook is spinning as well
[20:03] <wwitzel3> katco: I am wondering if there is just not enough time between installing the docker.io service and starting it and issuing the pull
[20:03] <wwitzel3> katco: going to try it with a sleep, see if that helps
[20:03] <katco> wwitzel3: doesn't apt-get install docker.io start the docker.io service already?
[20:03] <katco> wwitzel3: instead of a sleep, query the status of the service
[20:06] <mup> Bug #1477281 opened: machine#0 jujud using ~100% cpu, slow to update units state <canonical-bootstack> <canonical-is> <juju-core:New> <https://launchpad.net/bugs/1477281>
[20:07] <katco> running into bug 1168154 too
[20:07] <mup> Bug #1168154: Destroying a service in error state fails silently <destroy-service> <juju-core:Triaged> <https://launchpad.net/bugs/1168154>
[20:08] <natefinch> ericsnow: process.Info.Id()'s comment states "ID composes a unique ID for the process" .... does that mean that we don't expect the process name to be unique?
[20:09] <ericsnow> natefinch: potentially
[20:09] <natefinch> ericsnow: the reason I ask is that the status output as stated in the spec is using the process name as the key to a map, so if there's a collision.... badness
[20:09] <ericsnow> natefinch: we had discussed support for launching multiple copies of a proc
[20:10] <ericsnow> natefinch: I suppose we can cross the bridge later
[20:11] <natefinch> we seem to be doing a lot of "let's worry about that later" with this feature, which makes me worry about it now.
[20:12] <ericsnow> natefinch: well, currently we don't support multiple copies of a proc (and we may never)
[20:13] <natefinch> ericsnow: fair enough
[20:13] <natefinch> ericsnow: I'm all good with YAGNI, so long as it's not YAGNIUNW  (You ain't gonna need it ... until next week)
[20:14] <ericsnow> natefinch: :)
[20:22] <bogdanteleaga> mgz: yay!
[20:22] <bogdanteleaga> perrito666: tears of joy? :p
[20:23] <perrito666> bogdanteleaga: ??
 juju upgrade on windows just passed for the first time.
[20:24] <bogdanteleaga> * perrito666 cries
[20:25] <perrito666> ah si
[20:25] <perrito666> yes
[20:26] <wwitzel3> katco: looks like there is an issue with the registry
[20:26] <wwitzel3> katco: I can't pull the wp container at all, atm
[20:26] <katco> wwitzel3: as in the official docker registry?
[20:26] <wwitzel3> katco: yeah
[20:26] <katco> wwitzel3: well that explains things lol. do they have a status page or something?
[20:27] <natefinch> oh great, one more thing to break
[20:28] <katco> wwitzel3: https://status.docker.com/
[20:28] <wwitzel3> katco: nope, but after running it 4 or 5 times it started working :)
[20:28] <wwitzel3> 2015/07/22 20:19:45 Could not find repository on any of the indexed registries.
[20:28] <wwitzel3> Pulling repository wordpress
[20:28] <wwitzel3> 48e924db07d1: Pulling dependent layers
[20:28] <wwitzel3> so now it works, lol
[20:30] <wwitzel3> aaaand now it doesn't work again, wtf
[20:31] <katco> wwitzel3: i wonder if there's a route issue b/t ec2 <-> registry?
[20:34] <wwitzel3> katco: something weird for sure, it is working locally just fine
[20:34] <katco> wwitzel3: again, top shows docker.io is spinning
[20:35] <katco> wwitzel3: i'll try pulling it manually, but yeah something odd is happening
[20:36] <katco> wwitzel3: whoa: $ docker pull wordpress
[20:36] <katco> 2015/07/22 20:35:39 Post http:///var/run/docker.sock/images/create?fromImage=wordpress&tag=: dial unix /var/run/docker.sock: permission denied
[20:36] <katco> wwitzel3: does it have to be sudo pull?
[20:37] <katco> wwitzel3: says job is already started. log doesn't have anything interesting
[20:41] <wwitzel3> katco: you have to be in the group or you have to be sudo, yes
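The `permission denied` above is the Docker daemon socket (`/var/run/docker.sock`) rejecting a user who is neither root nor in the `docker` group — the Debian/Ubuntu default group name. A minimal sketch of the check wwitzel3 describes (the image name is the one from this session; the echo stands in for actually running the pull):

```shell
# The Docker daemon socket is only accessible to root and the docker group,
# so decide up front whether `docker pull` needs sudo.
if id -nG | grep -qw docker; then
    DOCKER="docker"
else
    # Either prefix with sudo, or add yourself and re-login:
    #   sudo usermod -aG docker "$USER"
    DOCKER="sudo docker"
fi
echo "would run: $DOCKER pull wordpress"
```

Group changes only take effect in new login sessions, which is a common reason the `usermod` route appears not to work at first.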
[20:45] <katco> wwitzel3: cycled docker.io service, now: Could not find repository on any of the indexed registries.
[20:45] <katco> [a0bad6af] -job pull(wordpress, latest) = ERR (1)
[20:48] <wwitzel3> katco: yeah, something odd is going on since it was working without incident the dozen other times I did it today
[20:48] <wwitzel3> and yesterday
[21:00] <mup> Bug #1477293 opened: Bootstrap attempts to use a private network address instead of the public address <bootstrap> <ec2-provider> <network> <juju-core:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1477293>
[21:03] <mup> Bug #1477293 changed: Bootstrap attempts to use a private network address instead of the public address <bootstrap> <ec2-provider> <network> <juju-core:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1477293>
[21:04] <katco> wwitzel3: so i'm pulling this manually from the ec2 instance, and it's doing "stuff", but it keeps returning to a header "pulling fs layer"
[21:05] <wwitzel3> katco: yeah, it should do that for a while
[21:05] <wwitzel3> katco: and download little bits and fs layers
[21:05] <katco> wwitzel3: just seems to be taking a long, long time
[21:05] <wwitzel3> katco: yeah, the registry isn't fast
[21:05] <wwitzel3> katco: I've had it take 20 minutes or longer to get an image
[21:05] <katco> wwitzel3: k
[21:09] <wwitzel3> katco: once I get another copy of it downloaded successfully
[21:09] <wwitzel3> katco: I'll put it in the charm and we can just load it locally
[21:09] <wwitzel3> katco: so we won't have to worry about network for it
[21:09] <katco> wwitzel3: good idea. or i wonder if we have a docker repo charmed up :)
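Bundling the image with the charm, as wwitzel3 suggests, would use Docker's save/load pair; a sketch, with illustrative tag and file names:

```shell
# On a machine where the pull already succeeded, export the image and
# all of its layers to a tarball that can be shipped inside the charm.
docker save wordpress:latest > wordpress-image.tar

# On the target unit, load it straight from disk -- no registry,
# no network dependency.
docker load < wordpress-image.tar
```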
[21:09] <mup> Bug #1477293 opened: Bootstrap attempts to use a private network address instead of the public address <bootstrap> <ec2-provider> <network> <juju-core:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1477293>
[21:19] <perrito666> katco: can I get an amen http://reviews.vapour.ws/r/2244/ ?
[21:20] <katco> perrito666: tal
[21:21]  * perrito666 brain makes an odd connection and is taken back to the past, to the Zope Template Attribute Language
[21:23] <katco> perrito666: did we address fwereade's concern in the todo there?
[21:23] <perrito666> yup, that is what the pr does :D removing all trace of txn in status history
[21:23] <perrito666> pure raw metal.. err mongo insertion :p
[21:25] <katco> perrito666: i recall him disagreeing with that approach pretty strongly. i can't remember if it was ever resolved. hm
[21:26] <perrito666> katco: he did disagree with a previous patch, which led to uncovering an issue in envuuid automatic patching
[21:26] <perrito666> I recalled that pr and made this new one
[21:26] <katco> perrito666: well not just the patch, but the idea of doing "raw" mongo ops w/o txns
[21:27] <perrito666> katco: in this particular case it was his idea to completely remove txns; history works much like logs
[21:27] <perrito666> we push on the top and remove on the bottom
[21:28] <katco> perrito666: yeah i recall the counter-argument, i guess i never saw it get resolved
[21:28] <katco> perrito666: i have a meeting now, ian is in it, so i'll talk to him too
[21:28] <perrito666> fantastic :)
[21:33] <katco> perrito666: you're good! :D
[21:34] <cmars> waigani or menn0, could one of you review http://reviews.vapour.ws/r/2167/?
[21:37] <perrito666> oh, blocked again, it was good while it lasted
[21:40] <menn0> cmars: i'll take a look
[21:40] <cmars> menn0, thanks!
[21:48] <ericsnow> wwitzel3: http://reviews.vapour.ws/r/2245/
[21:50] <perrito666> bbl
[21:51] <menn0> cmars: done. just a few little things.
[21:53] <menn0> cmars: fwereade is right though: this will need to change soon when env destruction is done properly. But this is fine for now.
[21:54] <sinzui> katco: https://launchpad.net/juju-core/1.26 exists, as does https://launchpad.net/juju-core/+milestone/1.26.0
[21:54] <katco> sinzui: ty sir!
[21:56] <cmars> menn0, ok, we'll have to watch out for that .. any advance notice would be helpful, we're ready to make whatever changes necessary
[22:00] <menn0> cmars: you guys might not need to worry about it. we just have to move when the last ditch metric sending happens to somewhere else.
[22:00] <cmars> menn0, gotcha
[22:01] <menn0> cmars: the plan is to have a state server worker that manages environment life cycle instead of doing env destruction synchronously in the api call
[22:02] <menn0> cmars: b/c as it stands if anything goes wrong (e.g. machine destruction gets stuck) the user has very little visibility
[22:02] <menn0> cmars: we also need to shut down things like storage in a more controlled fashion
[22:03] <menn0> cmars: all this will happen "soon". we'll keep you in the loop but you don't need to worry too much about it.
[22:03] <cmars> menn0, ack. cool stuff. the env lifecycle mgmt sounds especially interesting
[22:04] <katco> sinzui: i proposed a blueprint for 1.26... who does that approval go to?
[22:04] <menn0> cmars: interesting/necessary whatever :)
[22:05] <sinzui> katco: I think mramm, the driver of the project
[22:05] <katco> sinzui: ah ok
[22:05] <sinzui> wallyworld: abentley my go1.4 juju on vivid is succeeding
[22:05] <wallyworld> oh interesting
[22:06] <wallyworld> can we drop the bug then :-)
[22:07] <katco> sinzui: what is feature-proc-mgmt in this context? https://bugs.launchpad.net/juju-core/feature-proc-mgmt
[22:09] <sinzui> katco: There is/was a branch in github that was tested and failed. We registered the series to report bugs against it
[22:10] <katco> sinzui: ah ok. i just created a blueprint... would it make more sense to report bugs against that? or is series considered a branch in lp parlance?
[22:11] <sinzui> katco: I wish the latter part of your statement was 100% true. A series should be associated with a branch: a line of development that changes will be made to. LP doesn't enforce the branch
[22:12] <katco> sinzui: would it be an imposition to ask you to target bugs against the blueprint instead?
[22:13] <sinzui> katco: That is difficult to do
[22:13] <katco> sinzui: in an automated fashion, or just in general?
[22:13] <sinzui> katco: bugs can be linked to a blueprint, but since the blueprint feature is broken in many places, I cannot do it
[22:14] <katco> sinzui: oh, it's a permissions/lacking feature thing?
[22:14] <sinzui> also the blueprints cannot be managed by the team. You, for example, could make the milestone and series, but blueprints don't have sane ownership
[22:15] <katco> sinzui: not sure what you mean? i am the starter, drafter, and assignee, and i can link bugs... do you mean no one BUT the starter can assign bugs?
[22:17] <sinzui> katco: I am not writing software to use blueprints; you are free to link issues you want fixed in blueprints. CI is not concerned with feature planning
[23:24] <thumper> wallyworld: menno said he'd look at the go 1.3 on vivid issue
[23:44] <menn0> sinzui: ping?
[23:45] <menn0> sinzui, wallyworld: i'm trying to be sure what the aws-deploy-trusty-amd64-on-vivid-amd64 CI job does
[23:45] <menn0> I think it means, deploy an EC2 env using trusty instances, with the client running on vivid
[23:45] <menn0> sinzui, wallyworld, mgz: is that right?
[23:47] <mgz> menn0: that's correct
[23:47] <mgz> menn0: it's mostly just exercising the vivid client
[23:48] <mgz> menn0: unrelated, I added some comments to the rackspace provider review
[23:48] <menn0> mgz: ok cool. i only managed to look at some of it.
[23:49] <menn0> mgz: i'm currently looking at bug 1477293
[23:49] <mup> Bug #1477293: Bootstrap fails to connect on vivid/go 1.3 <blocker> <bootstrap> <ci> <ec2-provider> <network> <vivid> <juju-core:Triaged> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1477293>
[23:49] <menn0> mgz: I can only see one CI run where bootstrap failed due to not being able to connect
[23:49] <mgz> menn0: on the ssh bug, I feel like it's just as likely to be another manifestation of ssh config screwing up juju as anything else
[23:49] <menn0> mgz: and it works for me
[23:49] <menn0> mgz: the other failed runs with the same rev seem to be something else
[23:50] <menn0> mgz: so i'm looking at that
[23:50] <mgz> juju 1.24 makes ssh generally dodgy in the presence of any existing ssh setup
[23:50] <menn0> mgz: what do you mean by "any existing ssh setup"?
[23:50] <mgz> eg, I have ~/.ssh/config with stuff in it and an agent and several keys
[23:51] <mgz> juju doesn't like me much.
[23:51] <wallyworld> thumper: menn0: sorry, just got out of meeting, reading backscroll
[23:51] <menn0> mgz: hmmm, I have that too and it works for me. but of course it might be something specific in your config.
[23:51] <mgz> menn0: the other failed runs at the same rev look like job setup debugging
[23:52] <menn0> mgz: the bootstrap is failing: "Bootstrap failed, destroying environment"
[23:52] <menn0> mgz: but i'm not sure why
[23:52] <wallyworld> menn0: mgz: sinzui switched to go 1.4 and said it worked
[23:52] <mgz> nah, he tried the same thing via osx which happens to use go 1.4 and it works
[23:52] <menn0> Juju cannot bootstrap because no tools are available for your environment.
[23:52] <wallyworld> if it is a go 1.3 issue, then i say we don't fix anything
[23:52] <menn0> mgz: there it is
[23:53] <wallyworld> mgz: oh, i thought we were going to use everything the exact same, but just switch go version
[23:53] <menn0> so the earlier CI failures were due to missing tools
[23:54] <mgz> oh, he did also use go 1.4 on ubuntu, but wily
[23:54] <mgz> but that's still proxying through a different machine
[23:54] <wallyworld> but we want a controlled experiment
[23:54] <mgz> so it's hard to make it identical
[23:54] <wallyworld> only change one thing
[23:55] <wallyworld> can't we switch go version on the vivid machine
[23:56] <mgz> we could download and build go 1.4 on that machine and make the job use it maybe
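mgz's suggestion would look roughly like the following; the tarball version, URL, and install prefix are assumptions (1.4.2 was the current 1.4 point release at the time), and the CI job would need its PATH pointed at the new toolchain:

```shell
# Fetch and unpack a Go 1.4 release on the vivid machine...
curl -sLO https://golang.org/dl/go1.4.2.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.4.2.linux-amd64.tar.gz

# ...then have the job pick it up ahead of the distro go 1.3.
export PATH=/usr/local/go/bin:$PATH
go version
```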
[23:57] <katco> wwitzel3: /entrypoint.sh apach   19 minutes ago      Exited (1)
[23:57] <katco> wwitzel3: 2015/07/22 23:36:38 Stopping proxy on tcp/[::]:8080 for tcp/172.17.0.2:80 (accept tcp [::]:8080: use of closed network connection)
[23:59] <mgz> I think going through the workspace runner just tickles ssh issues