[02:35] <whit> my new answer to the "should I use juju or docker question?"(at least in headline and image style) http://shoreditchworks.com/containers-dont-solve-infrastructure/
[11:01] <unused_PhD> is it typical to deploy juju and maas on the same machine?  I only have six servers to cluster, so I'm wondering if both services can sit on the same box
[11:02] <dimitern> unused_PhD, you could install both in separate kvm containers
[11:03] <dimitern> unused_PhD, and by juju I presume you mean the juju state server machine, not the client where you're running commands from
[11:05] <unused_PhD> yes
[11:05] <unused_PhD> and thanks!
[12:32] <philip_stoev> Hello. What is the canonical way to open and close TCP ports in a Python-based charm?
[12:36] <philip_stoev> Should I put an open_port() call in the config-changed hook?
[12:36] <marcoceppi> philip_stoev: probably, esp if the port is configurable
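A minimal sketch of the pattern marcoceppi suggests: reconciling a configurable TCP port from the config-changed hook. The port-juggling logic is shown as a pure function so it can be read (and tested) standalone; in a real Python charm the recorded actions would be calls to charmhelpers.core.hookenv.open_port()/close_port() (or the open-port/close-port hook tools), as the comments indicate. The function name and "port" option are hypothetical.

```python
# Sketch: reconcile an open TCP port when a configurable "port"
# option changes. In a real charm, replace the recorded actions with
# charmhelpers.core.hookenv.open_port() / close_port() calls.

def reconcile_port(previous_port, new_port):
    """Return the (action, port) steps needed when config changes."""
    actions = []
    if previous_port is not None and previous_port != new_port:
        actions.append(("close", previous_port))  # hookenv.close_port(previous_port)
    if new_port is not None and new_port != previous_port:
        actions.append(("open", new_port))        # hookenv.open_port(new_port)
    return actions

# config-changed would call this with the previously recorded port
# and the current value of the charm's "port" config option.
```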
[13:32] <philip_stoev> ok, thanks. Next question: when I do bzr bind lp:~philip-stoev-f/charms/trusty/galera-cluster/trunk to push my charm to Launchpad, I get
[13:32] <philip_stoev> bzr: ERROR: Invalid url supplied to transport: "lp:~philip-stoev-f/charms/trusty/galera-cluster/trunk": No such source package galera-cluster.
[13:41] <apuimedo> marcoceppi: Hi
[13:41] <apuimedo> I was checking out http://bazaar.launchpad.net/~charmers/charms/bundles/openstack/bundle/view/head:/bundles.yaml
[13:41] <apuimedo> and I was wondering for example in neutron-openvswitch
[13:42] <apuimedo> charm: "cs:trusty/neutron-openvswitch-2"
[13:42] <apuimedo> what's the trailing '-2'
[13:52] <tvansteenburgh> apuimedo: that's the charm revision. the charm store increments it each time an update to the charm is pushed to the store
[13:53] <apuimedo> tvansteenburgh: cool, thanks
[13:53] <tvansteenburgh> apuimedo: np. leaving it off will always give you the latest revision for the target series
[13:58] <marcoceppi> philip_stoev: you don't need to do a bzr bind, bzr bind gives you SVN-like checkout/commit behaviour
[13:58] <marcoceppi> philip_stoev: just "bzr push" instead
[13:58] <apuimedo> tvansteenburgh: ah. good ;-)
[13:59] <philip_stoev> marcoceppi: yes, you are right. My problem in particular was that I did not run bzr launchpad-login prior to bzr push.
[13:59] <marcoceppi> philip_stoev: ah!
[14:24] <marcoceppi> tvansteenburgh: is this still the case in amulet? https://bugs.launchpad.net/amulet/+bug/1394078
[14:24] <mup> Bug #1394078: Only tests committed files <Amulet:New> <https://launchpad.net/bugs/1394078>
[14:24] <tvansteenburgh> marcoceppi: dunno, would need to be tested
[14:25] <marcoceppi> tvansteenburgh: ack, ta, will take a look
[14:45] <jcastro> anyone know where the CPC guys keep the sources for building the vagrant boxes?
[14:48] <Odd_Bloke> jcastro: It's https://code.launchpad.net/~ubuntu-on-ec2/vmbuilder/jenkins_kvm
[14:48] <jcastro> thanks!
[14:48] <Odd_Bloke> jcastro: Specifically http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/jenkins_kvm/view/head:/jenkins/CloudImages_Juju.sh for JuJu Vagrant images.
[14:49] <Odd_Bloke> (And http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/jenkins_kvm/view/head:/jenkins/CloudImages_Vagrant.sh for non-Juju Vagrant images)
[14:49] <jcastro> yeah that's the one I want, thanks
[16:30] <jcastro> marcoceppi, are juju resources documented?
[16:30] <marcoceppi> jcastro: ask cory_fu
[16:31] <marcoceppi> (yes) but I don't know where
[16:31] <cory_fu> The upcoming feature for Core?  No one could ever point me to any, so *shrug*
[16:31] <marcoceppi> cory_fu: the thing you made
[16:32] <cory_fu> I am going to have to rename it, I think.  :(
[16:32] <marcoceppi> cory_fu: why?
[16:33] <cory_fu> But yes, it's fairly well documented on pypi.
[16:33] <cory_fu> Because it is confusing and it'd be better to rename it now than later
[16:33] <cory_fu> jcastro: http://pythonhosted.org/jujuresources/
[16:34] <marcoceppi> cory_fu: it'd be great if it was composed the same way as the juju feature, which I guess is going to be worked on eventually, so that when that feature goes live no one has to change anything
[16:35] <cory_fu> Yeah, that would be ideal.
[16:35] <marcoceppi> I'd just do that instead of renaming
[16:35] <cory_fu> But, TBH, I think there will be a lot that I won't be able to replicate in user-space (or won't be able to replicate with the same interface)
[16:36] <marcoceppi> not to sound snide, but then why even bother continuing to work on it, if in X time it'll be duplicated by core?
[16:36] <cory_fu> Because it's useful now.
[16:38] <jcastro> lazyPower, can you explain this tip better for me: use juju upgrade-charm --force to get that iterative action without a teardown/redeploy when in an error state
[16:40] <jcastro> hey marcoceppi
[16:40] <marcoceppi> hey jcastro
[16:40] <jcastro> thierry's python charmhelper tip seems more like a bug to me
[16:41] <marcoceppi> jcastro: they filed bugs about it already
[16:42] <jcastro> ack, so I won't put it on the tips page
[16:42] <marcoceppi> jcastro: right, it was marked won't fix fwiw
[16:42] <jcastro> got a link?
[16:47] <marcoceppi> jcastro: https://bugs.launchpad.net/charm-tools/+bug/1425938
[16:47] <mup> Bug #1425938: use of charmhelpers log function raise exception if not under juju environment  <Juju Charm Tools:Won't Fix> <https://launchpad.net/bugs/1425938>
[16:47] <marcoceppi> jcastro: and lp:1425943
[16:48] <jcastro> ok but your response is newer than his mail
[16:48] <jcastro> which is all I really needed to know
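A sketch of the defensive pattern implied by the Won't Fix bug above (charmhelpers' log raising an exception when run outside a juju hook context): try the juju-log hook tool, and fall back to stderr when it isn't available. The CLI shape of juju-log shown here is an assumption, and this is not charmhelpers' actual implementation, just one workaround.

```python
# Sketch: a log() wrapper that degrades gracefully outside a juju
# environment (cf. lp:1425938). Assumes juju-log's `-l LEVEL MSG`
# invocation; falls back to stderr when the tool is missing or fails.
import subprocess
import sys

def safe_log(message, level="INFO"):
    try:
        subprocess.check_call(["juju-log", "-l", level, message])
        return "juju-log"
    except (OSError, subprocess.CalledProcessError):
        # Not running under juju (or the hook tool failed): log locally.
        print("{}: {}".format(level, message), file=sys.stderr)
        return "stderr"
```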
[17:05] <jcastro> bloodearnest, ping
[17:11] <bloodearnest> jcastro, hey
[17:14] <jcastro> heya, if you guys have any bandwidth over the next 2 weeks or so, we could really use a hand in the charm review queue
[17:14] <jcastro> even 1 or two would really help
[17:34] <ctlaugh> I have nodes of 2 different architectures that need 2 different sets of configuration settings.  Is it possible to 'juju deploy' to one node and specifying one configuration file, then 'juju add-unit' to other nodes, specifying a different configuration file?
[18:05] <jcastro> http://askubuntu.com/questions/593564/how-to-juju-deploy-a-service-to-units-of-multiple-architectures-with-different-c
[18:05] <jcastro> is the link to ctlaugh's question
[18:07] <jcastro> marcoceppi, this is a good one ^^
[20:12] <hazmat> ctlaugh: no it would need to be two services
[20:13] <hazmat> ctlaugh: services are treated as mostly homogeneous collections of iaas/software resources managed via the service, so configuration activities apply at the service level, not at the unit level.
[20:15] <ctlaugh_> hazmat: so, I assume I would need to do something like duplicate the nova-network charm?
[20:15] <ctlaugh_> (and give it a different name)?
[20:17] <hazmat> ctlaugh_: does openstack support that now? previously different architectures had to be different regions?
[20:17] <hazmat> er. zones
[20:18] <ctlaugh_> I've done it with image flavors
[20:18] <hazmat> and the scheduler picks up the right one i guess.
[20:19] <ctlaugh_> yes
[20:20] <ctlaugh_> I can't remember specifics (it was mid-last-year when I was doing it last), but am about to start setting up something similar again and wanted to use juju to do it.
[20:20] <hazmat> ctlaugh_: so.. best person to answer this is someone more familiar with the nova-compute charms.. if you need different config it would need to be two services
[20:20] <hazmat> ctlaugh_: do you need separate config on nova-network/neutron as well?
[20:21] <ctlaugh_> Looks like some multi-arch support in Juju is needed....
[20:21] <ctlaugh_> No, not neutron yet (but who knows, maybe later).
[20:21] <ctlaugh_> Right now, only nova-network is known to work on arm.
[20:21] <hazmat> interesting
[20:22] <hazmat> ctlaugh_: so if its the same config, you should be able to get by with a single service there and related to both the compute services
[20:22] <hazmat> ctlaugh_: i think you'd be best firing off an email to the list on this one.. i don't think any of our openstack charmers are around atm (europe based past EOD)
[20:24] <hazmat> agreed juju could be a bit nicer about multi-arch. it supports it, but most of the arch testing we do is homogeneous (power, arm, amd64).. but again that's also distinct from how the openstack charms in particular work and support multi-arch
[20:24] <ctlaugh_> Yes - for all other services (cinder, glance, horizon, etc), the same config works, so I can deploy on both arm64 or amd64 with no issues.  nova-compute needs 3 additional parameters (passed in using 'config-flags') to work correctly on arm64.  They just end up as config settings in nova.conf.  I could manually modify them, but the problem is that juju overwrites that file at will.
[20:25] <hazmat> ctlaugh_: another option is to modify the charm to auto detect arch and your flags automatically
[20:25] <hazmat> the nova-compute charm that is
[20:26] <ctlaugh_> True -- that would be pretty easy.  Good idea.
[20:26] <ctlaugh_> I can do it locally for now... what is the procedure for suggesting those changes upstream later?
[20:27] <hazmat> ctlaugh_: so in launchpad.. you'd branch the existing upstream, make your changes, test, commit, push upstream to your user account, submit merge proposal..
[20:28] <hazmat> pretty similiar to github.. just substitute lp for gh and bzr for git
[20:29] <ctlaugh_> ok - I haven't done much with launchpad for source control, but I'll give it a try.  Thanks for the help.
[20:29] <hazmat> ctlaugh_: more specifically bzr branch lp:charms/trusty/nova-compute .. make changes.. bzr commit .. bzr push lp:~yourusername/charms/trusty/nova-compute/trunk
[20:29] <hazmat> then within the launchpad ui, if you go to your branch you can create a merge proposal to the origin branch
[20:30]  * hazmat wanders back to aws land
[21:33] <ctlaugh_> quick juju charm-writing question... is there a way already present in some helper lib or data structure to determine the processor architecture of the host the charm is running on?
[21:39] <jcastro> we do this in the power charms iirc
[21:39] <jcastro> mbruzek, ^^^
[21:40] <mbruzek> ctlaugh_: yes
[21:40] <mbruzek> ctlaugh_: What I have done for power charms was ARCH=`uname -m`
[21:40] <mbruzek> for a bash charm.
[21:41] <mbruzek> arch = subprocess.check_output(['uname', '-am']).strip()
[21:41] <mbruzek> for a python charm
[21:41] <ctlaugh_> mbruzek, jcastro: I'm trying to modify the nova-compute charm (written in python, I think)
[21:42] <mbruzek> the -am was a typo
[21:42] <ctlaugh_> mbruzek: ok, I'll go with something like that... I just wasn't sure if something was already available.
[21:42] <ctlaugh_> mbruzek: thank you
[21:42] <mbruzek> arch = subprocess.check_output(['uname', '-m']).strip()  would give you x86_64 or ppc64le
[21:44] <mbruzek> ctlaugh_:  FYI I also use the `lsb_release -a` command for getting things like "trusty" vs "precise"
[21:45] <mbruzek> CODENAME = `lsb_release -cs`
[21:53] <ctlaugh_> Looks like in python, I can also just call platform.machine() to get the same data from uname
[21:55] <hazmat> ctlaugh_: that sounds much better
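A sketch pulling the thread above together: detect the machine architecture with platform.machine() (equivalent to `uname -m`) and select extra nova.conf flags accordingly, along the lines hazmat suggested for the nova-compute charm. The arm64 flag names below are placeholders, not the real settings ctlaugh_ needs.

```python
# Sketch: pick extra config flags by architecture, using
# platform.machine() instead of shelling out to `uname -m`.
# The flag values here are placeholders for illustration only.
import platform

ARCH_FLAGS = {
    "aarch64": {"example_flag": "arm64-value"},  # placeholder arm64 flags
    "x86_64": {},                                # no extra flags needed
    "ppc64le": {},
}

def extra_config_flags(arch=None):
    """Return extra config flags for this (or a given) architecture."""
    if arch is None:
        arch = platform.machine()
    return ARCH_FLAGS.get(arch, {})
```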