[08:24] <badsyntax> can juju manage containers on ec2? i'm getting a "Sub-containers not supported" message in the juju-gui.
[08:26] <lazyPower> badsyntax: 1 moment, let me bootstrap and try. Shouldn't be an issue though
[08:26] <lazyPower> i admittedly haven't tried with the new GUI, and it's a bit early for the GUI folks to be poking around
[08:32] <lazyPower> badsyntax: appears it works without an issue on AWS - this may be a gui defect surfacing. http://paste.ubuntu.com/8453759/
[08:40] <lazyPower> paste of a container started in case that comes into question: http://paste.ubuntu.com/8453794/
[14:08] <mwenning> lazyPower, good morning!
[14:08] <lazyPower> Morning mwenning o/
[14:10] <mwenning> lazyPower, I saw your comment about the charm - my main problem was that squid-deb-proxy only allows certain repos - linux.dell.com is not one of them.
[14:10] <mwenning> not sure when this went in.
[14:11] <lazyPower> mwenning: you can add that to the squid config
[14:11] <mwenning> lazyPower, yes that
[14:11] <lazyPower> i had to do that with ppa.ubuntu.com, as those weren't in by default either
[14:11] <lazyPower> Do you need me to fish up the config you need to edit?
[14:11] <mwenning> s what I did.  So do I just add this to the README?
[14:11] <lazyPower> the fact you had to update squid?
[14:12] <lazyPower> I would add it under caveats: if you're behind a squid-deb-proxy, the host needs to be added. I wouldn't think that affects many installations, but the info is there if they hit an issue fetching the packages.
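[Editor's note: a sketch of the squid-deb-proxy change being discussed. The ACL path assumes the stock Ubuntu package layout, and the drop-in filename is made up:]

```shell
# Allow an extra repo host through squid-deb-proxy (path per the stock
# Ubuntu package; the 99-dell.acl filename is hypothetical).
echo "linux.dell.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/99-dell.acl
sudo service squid-deb-proxy restart
```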
[14:12] <mwenning> lazyPower, if people actually want to use the charm they need to know this.  So I assume I need to add something to the doc
[14:12]  * lazyPower nods
[14:12] <mwenning> ok cool.
[14:13] <lazyPower> did you capture the test run output?
[14:13] <mwenning> lazyPower, I'll ping you a bit later, it's still complaining about something.
[14:13] <lazyPower> ack. Looking forward to seeing resolution on this one for ya mwenning
[14:13] <mwenning> me too :-)
[14:21] <tvansteenburgh> if i force-terminate a bunch of machines, should i expect juju to eventually destroy all the services on those machines?
[14:21] <lazyPower> mwenning: not to be a pest, but are you documenting your papercuts as you run into them?
[14:22] <lazyPower> tvansteenburgh: if you leave the bootstrap node intact you have to follow up and destroy the services.
[14:22] <lazyPower> tvansteenburgh: alternatively, you can run juju-deployer -T, which will do this for you.
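[Editor's note: the two teardown routes being contrasted, as a sketch — the service name, machine number, and bundle filename are hypothetical:]

```shell
# Manual route: destroy each service, then the machines it ran on.
juju destroy-service mysql
juju destroy-machine --force 1

# Deployer route: -T terminates machines after tearing down services.
juju-deployer -T -c bundle.yaml
```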
[14:22] <mwenning> lazyPower, is there a way to put constraints on amulet?  I've got one machine marked as "bootstrap"; I'd like to tell amulet to, um use that as the bootstrap node.
[14:22] <lazyPower> mwenning: i'm assuming you mean consuming maas tags?
[14:23] <mwenning> lazyPower, yup.
[14:23] <mwenning> yup on your previous question as well.  main one was the squid-deb proxy
[14:23] <tvansteenburgh> yes you can pass constraints to the add() method
[14:24] <lazyPower> hazmat: does deployer understand maas tagging?
[14:25] <mwenning> tvansteenburgh, I'm assuming that d=amulet.Deployment() is what bootstraps juju, so that's before I can use add
[14:25] <lazyPower> mwenning: if deployer supports maas tagging, the same format you would put in a bundle you specify inline in the test, and it should just hand it off.
[14:25] <hazmat> lazyPower, for constraints, it's pass-through .. i.e. yes
[14:25] <lazyPower> i thought so, awesome.
[14:26] <hazmat> constraints: "tags=mymaastag"
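[Editor's note: in bundle form, hazmat's constraint sits on the service entry and deployer passes it through to juju untouched. A sketch — the environment, service, and charm names are hypothetical:]

```shell
# Deployer bundle snippet passing a MAAS tag constraint straight through.
cat > /tmp/bundle.yaml <<'EOF'
my-env:
  services:
    ubuntu:
      charm: cs:trusty/ubuntu
      num_units: 1
      constraints: "tags=mymaastag"
EOF
grep 'tags=' /tmp/bundle.yaml
```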
[14:26] <lazyPower> not sure about on the bootstrap though mwenning
[14:27] <mwenning> lazyPower, hazmat, so d=amulet.Deployment( 'constraints "tag=bootstrap"');
[14:27] <mwenning> then d.add("tag=")
[14:27] <mwenning> to shut it back off
[14:29] <mwenning> sorry d.add( set-constraints "tag=")
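[Editor's note: tying the thread together — per tvansteenburgh, constraints go to add(), not to Deployment(). A sketch only: the charm name and MAAS tag are placeholders, and the exact shape amulet expects for constraints (dict vs. string) may vary by version:]

```python
import amulet

# Constraints are passed per-service via add(), not to Deployment()
# ('ubuntu' and 'tags=bootstrap' are hypothetical placeholders).
d = amulet.Deployment(series='precise')
d.add('ubuntu', constraints={'tags': 'bootstrap'})
d.setup(timeout=900)  # needs a live juju environment to actually run
```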
[14:29] <mwenning> lazyPower, ok forget that for now ;-)
[14:30] <mwenning> I'll dump juju status in a pastebin, stby
[14:30] <lazyPower> ack
[14:32] <mwenning> lazyPower, https://pastebin.canonical.com/117818
[14:33] <lazyPower> mwenning: interesting. looks like the first machine choked on the series?
[14:33] <mwenning> juju 1.20.8
[14:34] <mwenning> lazyPower, correct.
[14:34] <mwenning> relation sentry wants to be precise
[14:34] <lazyPower> can you file a bug about this against amulet?  launchpad.net/amulet
[14:34] <mwenning> lazyPower, sure will do.
[15:24] <gnuoy> jamespage, two branches to hopefully unbreak neutron + juno
[15:24] <gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-fix-1372893/+merge/236370
[15:24] <gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-fix-1372893/+merge/236371
[15:31] <tvansteenburgh> hazmat: i'm almost done with the deployer patch to force-terminate machines. do you want that to be the default for an env reset?
[15:32] <hazmat> tvansteenburgh, sounds good to me
[15:33] <hazmat> should be faster and avoids having to do the inane code for retrying every single unit..
[15:38] <tvansteenburgh> hazmat: any reason to even make non-forced an option? right now i'm passing this 'forced' param around, but maybe it should just always force, thoughts?
[15:41] <hazmat> tvansteenburgh, no.. force always.. its faster
[15:42] <tvansteenburgh> hazmat: sweet, that simplifies things
[15:42] <hazmat> tvansteenburgh, if they're tearing down the entire env.. there's not much cause to do things slowly imo
[15:42] <hazmat> hmm
[15:43] <hazmat> tvansteenburgh, there is some call for a non-forced option.. for external resource management.. ie. an aws-elb or dns charm wanting to clean up theirs
[15:45] <tvansteenburgh> hazmat: well i'm still destroying services b4 terminating machines, doesn't that cover the cleanup?
[15:46] <tvansteenburgh> i have to do that, otherwise the machines die but the services hang around forever
[15:48] <hazmat> tvansteenburgh, not really. unless you wait for units to die and resolve them on error, they won't have all the lifecycle hooks invoked, which is what the current impl tries to do.. and then terminate machines after that. (at which point the force doesn't matter)
[15:48] <tvansteenburgh> bleh
[15:49] <hazmat> tvansteenburgh, yeah.. force by default is fine for now though. the commands in deployer need to be split out at some point so they can take options without more global option pollution.
[15:49] <tvansteenburgh> hazmat: sounds good
[15:49] <hazmat> and orderly destroy can be revisited there.. mostly reset is about geddon.. and speed of nukes counts :-)
[16:10] <tvansteenburgh> hazmat: https://code.launchpad.net/~tvansteenburgh/juju-deployer/force-terminate-machines/+merge/236381
[16:10] <hazmat> tvansteenburgh, danke
[18:24] <marcoceppi> cory_fu: we should talk about services framework being the default charm template, I don't think it's a good option
[18:25] <cory_fu> Why not?
[18:26] <marcoceppi> I think it's too specialized
[18:26] <marcoceppi> for a default
[18:28] <cory_fu> I disagree.  It's a general purpose pattern that solves several issues that are common to most charms in a consistent way that makes the charms much easier to understand and follow.  It also encourages making the charms unit testable
[18:29] <cory_fu> It's a new pattern, and I can see maybe not wanting to make it the default yet, until it is used more, but I would definitely recommend it for any charm.
[18:29] <marcoceppi> I think it requires too much investment to get started compared to some of the other templates
[18:29] <cory_fu> That doesn't make any sense
[18:29] <cory_fu> It doesn't require any investment; that's the point of a template
[18:29] <marcoceppi> the pattern
[18:29] <marcoceppi> itself
[18:29] <cory_fu> You charm create, and then you fill in the actions and requirements
[18:30] <cory_fu> If anything, it requires *less* investment, because there are fewer places that you have to add code and reason about interactions
[18:31] <marcoceppi> I think we should drop the notion of a default altogether and instead have it prompted on first run and saved as a user setting
[18:31] <marcoceppi> IMHO ^
[18:31] <cory_fu> I'm not averse to that
[18:32] <cory_fu> I know that was your original idea for how the templates would work, and I'm entirely ok with that
[18:32] <marcoceppi> going forward, since "services framework" isn't really clear, would this be better summarized as a declarative framework?
[18:33] <marcoceppi> or rather, a declarative template?
[18:33] <cory_fu> Yeah, the "name" sucks, we just hadn't come up with a better one yet.  Declarative framework is ok, but still not great.
[18:56] <cory_fu> marcoceppi: How about python-managed
[19:23] <bic2k> Quick question, does juju 1.20.7 support ec2 availability zone constraints? My region is out of a container type in the default AZ. Just need to specify us-east-1a or us-east-1c
[19:25] <bic2k> looks related to this bug #1183831
[19:25] <mup> Bug #1183831: unable to specify availability zone <charmers> <constraints> <ec2-provider> <landscape> <reliability> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1183831>
[19:26] <bic2k> I don't see any documentation specific to ec2 machine constraints on the juju site
[19:36] <lazyPower> bic2k: see the add-machine directive listed in HA docs: https://juju.ubuntu.com/docs/charms-ha.html
[19:46] <bic2k> lazyPower: perfect, thanks. Didn't think about it in terms of HA since it was related to availability :-)
[19:46] <lazyPower> bic2k: i have a bug open discussing where it should go: https://github.com/juju/docs/issues/187
[19:46] <lazyPower> feel free to chime in
[20:20] <hazmat> bic2k, juju will auto balance units across multiple azs for a service. on a per unit basis its not a constraint per se but a placement directive for  a given unit --to="zone=us-east-1a"
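[Editor's note: a sketch of hazmat's placement directive, using the zone from the question; the service name and machine number are hypothetical, and availability of --to zone placement may depend on the juju release:]

```shell
# Per hazmat: request a unit in a specific AZ via a placement directive.
juju deploy mysql --to zone=us-east-1a

# Or pre-create a machine in that zone and target it by number:
juju add-machine zone=us-east-1a
juju add-unit mysql --to 2
```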
[20:45] <Subbu__> Hi, I am deploying openstack with juju charms from ~openstack-charms/charms/trusty/openstack-dashboard/next. Deployment to lxc:1 is all good, but horizon login fails. I see this in apache2's error.log: RSA certificate configured for 10.0.3.36:443 does NOT include an ID which matches the server name
[20:59] <arosales> mbruzek: is the VPN something we can share from Brian's docs
[20:59] <arosales> mbruzek: just point it at the new setup for thumper?
[21:00] <mbruzek> arosales: I don't know, I may be able to share it but I was given an id
[21:01] <arosales> ah ok so there wasn't a general one that was provided in Brian's doc
[21:01] <arosales> mbruzek: no worries on sharing your ID.
[21:11] <arosales> mbruzek: do you know if akash was able to reproduce the problem on the other maas set up?
[21:11] <mbruzek> I haven't talked with him in a bit now.
[21:11] <arosales> mbruzek: do you know who was going to try that?
[21:11]  * arosales doesn't see akash in here
[21:11] <mbruzek> arosales: I do not know
[21:12]  * arosales pinged in #canonical.
[21:15] <arosales> mbruzek: in order to keep this thing going you may want to give that new maas a try before you eod
[21:15] <mbruzek> arosales: The new one ?
[21:15] <arosales> correct
[21:15] <mbruzek> The one that Canonical set up or ?
[22:26] <jamespage> Subbu__, you are probably getting the standard snakeoil certificate which is auto-generated
[22:26] <jamespage> horizon also listens on port 80
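[Editor's note: a quick way to confirm jamespage's diagnosis — inspect the certificate actually being served. The IP is from Subbu__'s log line; a snakeoil cert will show the machine's hostname as its subject CN rather than the name horizon is accessed by:]

```shell
# Print the subject/issuer of the cert the dashboard serves on 443.
echo | openssl s_client -connect 10.0.3.36:443 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```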