[09:12] <apuimedo> gnuoy: in bundles, how do you tell it to use a local charm
[09:12] <apuimedo> local:path/to/charm/dir ?
[11:48] <rick_h_> apuimedo: yes, and then you have to use the juju-deployer to deploy the bundle with a JUJU_REPOSITORY env var set
[11:51] <apuimedo> cool, thanks ;-)
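[For reference, a minimal sketch of the local-charm setup discussed above, in deployer-format bundle YAML. "mybundle" and "mycharm" are placeholder names; the charm directory is assumed to live at $JUJU_REPOSITORY/trusty/mycharm:

```yaml
# Hypothetical deployer-format bundle using a local charm.
# The local: URL is resolved against $JUJU_REPOSITORY, so
# local:trusty/mycharm maps to $JUJU_REPOSITORY/trusty/mycharm.
mybundle:
  services:
    mycharm:
      charm: local:trusty/mycharm
      num_units: 1
```

Deploying it would then look like `JUJU_REPOSITORY=$HOME/charms juju-deployer -c bundle.yaml`.]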
[12:42] <stub> http://reports.vapour.ws/all-bundle-and-charm-results/charm-bundle-test-11139-results/charm/charm-testing-azure/1 indicates a problem in the new ci environment, which we had seen in the old one too.
[12:47] <stub> The exception is virtualenv being run with a python3 interpreter, but for some insane reason python2.7 libraries ending up in the path.
[12:48] <stub> http://reports.vapour.ws/all-bundle-and-charm-results/charm-bundle-test-parent-207/charm/charm-testing-azure/1 shows it getting past that point fine last week.
[12:53] <apuimedo> rick_h_: can you set 'expose: true' in a bundle?
[12:53] <rick_h_> apuimedo: definitely
[12:54] <apuimedo> rick_h_: like this http://paste.ubuntu.com/10782512/ ?
[12:56] <rick_h_> apuimedo: +1
[12:57] <apuimedo> thanks ;-)
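[The paste link above may no longer resolve; a sketch of the expose flag in a deployer-format bundle, with placeholder names:

```yaml
# Hypothetical bundle exposing a service; "mybundle" and the
# haproxy service name are placeholders for this example.
mybundle:
  services:
    haproxy:
      charm: cs:trusty/haproxy
      num_units: 1
      expose: true
```

]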
[13:12] <tvan-afk> stub: http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-215
[13:13] <tvansteenburgh> stub: i think you were looking at the wrong results?
[13:13] <tvansteenburgh> lxc, joyent, aws still running
[13:22] <apuimedo> rick_h_: does it have any effect to have ntp with num_units=0 like in the openstack bundle?
[13:25] <rick_h_> apuimedo: hmm, is that a subordinate?
[13:25] <rick_h_> apuimedo: I think juju will not like having a subordinate with a num-units = 0 since it has to be on the parent machine
[13:25] <rick_h_> apuimedo: but if it's not then I don't think it'll care
[13:25] <apuimedo> rick_h_: as in "it will have no effect" ?
[13:26] <rick_h_> apuimedo: it will add the service to the environment, so the charm will be there, but it will not be deployed anywhere
[13:26] <rick_h_> apuimedo: so it'll take up no machines/etc
[13:27] <apuimedo> rick_h_: my question is, will it make any difference that the charm is there
[13:27] <rick_h_> apuimedo: it's a fine line, but it does have an effect in that juju will fetch the charm down from the charmstore and add it to its 'database'
[13:27] <apuimedo> and relations are added to it?
[13:27] <rick_h_> apuimedo: yes, if there are relations in the bundle, juju will know 'if this thing has any units it needs to be related'
[13:28] <apuimedo> but as long as there are no units, no effect, right?
[13:28] <rick_h_> right
[13:39] <apuimedo> cool. Thanks
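[A sketch of the subordinate pattern discussed above, in deployer format. The service and charm names are illustrative; the point is that a subordinate like ntp is declared with num_units: 0 and only gets units via its relation to a principal:

```yaml
# Hypothetical bundle: ntp is a subordinate, so it takes no
# machines of its own; units of it appear only on the machines
# of the principal service it is related to.
mybundle:
  services:
    nova-compute:
      charm: cs:trusty/nova-compute
      num_units: 1
    ntp:
      charm: cs:trusty/ntp
      num_units: 0
  relations:
    - [nova-compute, ntp]
```

]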
[16:04] <jcastro> evilnick, the instructions for the docs don't take into account the new multiversion stuff
[16:04] <jcastro> so as a result I have to cherry pick to each doc version
[16:05] <evilnick> jcastro, yes, I know. it is a pain at the moment, but it is okay, because I am electing myself chief of backports
[16:05] <jcastro> excellent
[16:05] <jcastro> I wouldn't have minded doing it if I had known to do it in 1.18 first; moving forward from there is easy
[16:05] <evilnick> jcastro, which isn't to say i don't trust anyone else to do it properly of course
[16:05] <jcastro> but marco neglected to tell me all of these things until after I had committed
[16:05] <evilnick> yeah, it sucks going back
[16:06] <evilnick> don't worry, i will fix it
[16:06] <evilnick> jcastro, I will add something to the README also
[16:22] <drbidwell> With a MAAS I installed landscape-dense-maas with "juju-quickstart --no-browser bundle:~landscape/landscape-dense-maas/landscape-dense-maas" and then tried to install openstack with "juju quickstart bundle:openstack/openstack".  It complained that some of the services for openstack were conflicting with the services of landscape-dense-maas.  What is the right way to start the openstack install, and how do I give it my config.yaml to use?
[16:24] <lazyPower> drbidwell: this is not an uncommon problem - thanks for bringing this up. I have a slight alternative you could try that doesn't involve quickstart
[16:24] <lazyPower> we have a python tool called juju-deployer, and it's apt-get installable as 'apt-get install juju-deployer' - it will see the difference in topology and amend the deployment command to leverage what's already in the environment.
[16:25] <lazyPower> juju-deployer bundle:openstack/openstack-base   should do what's right, but I'll bring this up with rick_h_ and see if there isn't something we can do here to ease that pain of duplicated services when using quickstart
[16:30] <drbidwell> lazyPower: thanks.  I will try it.  Can I amend the config (like add more disks or ceph/ceph-osd) with juju-deployer?
[16:30] <lazyPower> you would need to modify the bundle, i think, as that's all config-based with ceph, i do believe.
[16:31] <jcastro> this has been a longstanding bug
[16:31] <jcastro> if you have a bundle that wants to deploy "mysql" and you have "mysql" deployed, you need to edit the bundle to something else
[16:31] <jcastro> this usually means you can't deploy multiple bundles in the same environment
[16:31] <jcastro> because everyone calls their bundle databases "mysql" or "postgres" instead of something unique
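[A sketch of the workaround jcastro describes: renaming the bundle's service so it no longer collides with a service already deployed in the environment. "myapp-db" is a placeholder name; any relations in the bundle must reference the new name too:

```yaml
# Hypothetical edited bundle: the service is named myapp-db
# instead of mysql, so it won't conflict with an existing
# "mysql" service; the charm URL stays the same.
mybundle:
  services:
    myapp-db:
      charm: cs:trusty/mysql
      num_units: 1
```

]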
[16:32] <drbidwell> I have downloaded the openstack.yaml from the openstack charm and edited it.  Will deployer take this yaml file?
[16:33] <jcastro> juju-deployer -c thatfile.yaml
[16:33] <drbidwell> Wonderful!
[16:35] <drbidwell> Can I add placement constraints to thatfile.yaml?
[16:36] <lazyPower> sure can
[16:36] <drbidwell> What is the syntax?
[16:36] <lazyPower> drbidwell: are you looking for a placement directive (to colocate) or need to set machine constraints such as 2GB of memory?
[16:37] <drbidwell> such as 2GB of memory
[16:37] <lazyPower> ok let me find either an example or the official docs - i don't recall right off the top of my head
[16:38] <drbidwell> Actually I will need both, as I only have 5 machines to allocate to my openstack at the moment
[16:39] <lazyPower> drbidwell: ok the placement directive is
[16:39] <drbidwell> I assume that if I run out of physical machines it can start using lxc in machines that meet the requirements
[16:39] <lazyPower> to: service
[16:39] <lazyPower> it will not, it will add them to the topology and they will sit in pending until your elastic cloud can satisfy the machine requirements
[16:39] <lazyPower> if you need to push it to a lxc container, the syntax is
[16:40] <lazyPower> to: lxc:# (# being the machine id, or service identifier - eg lxc:1 or lxc:nova-gateway)
[16:41] <lazyPower> drbidwell: here's an example bundle with colocation placement (no lxc) https://gist.github.com/b64070bc83d3e4725d25
[16:41] <drbidwell> Thanks
[16:41] <lazyPower> still looking for the machine constraints, i was certain we had some big data bundles with that embedded
[16:43] <lazyPower> i'm not finding one but i'm fairly certain it's in the format of adding a 'constraints:' key to the service definition with key=value pairs afterwards
[16:43] <lazyPower> constraints: mem=2G
[16:43] <lazyPower> use array notation for multiple constraints
[16:44] <drbidwell> I will try it.  I have 2 types of machines for my test lab, disk/controllers and compute servers with differing numbers of cores.  Should be easy.
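[A sketch pulling together the placement and constraints syntax discussed above, in deployer format. Service names, machine ids, and values are placeholders; note that juju constraint strings are typically space-separated key=value pairs (e.g. "mem=2G cpu-cores=2") rather than an array:

```yaml
# Hypothetical bundle combining machine constraints and
# placement directives; all names and ids are illustrative.
mybundle:
  services:
    mysql:
      charm: cs:trusty/mysql
      num_units: 1
      constraints: mem=2G        # minimum 2GB of memory
    ceilometer:
      charm: cs:trusty/ceilometer
      num_units: 1
      to: lxc:1                  # an lxc container on machine 1
    nova-cloud-controller:
      charm: cs:trusty/nova-cloud-controller
      num_units: 1
      to: mysql                  # colocate with the mysql service
```

]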
[16:46] <lazyPower> evilnick: before I file this bug against the docs (and follow it with a PR) is there anything I need to do wrt multi versions that jcastro mentioned above? i'm going to update the bundle docs w/ constraints listing
[16:47] <evilnick> lazyPower, it depends what the bug is :)
[16:47] <lazyPower> https://github.com/juju/docs/issues/341
[16:48] <evilnick> the basic rule of thumb is, target your change against the *earliest* version that needs changing
[16:48] <evilnick> and make your PR against that
[16:48] <lazyPower> ok, i'm fairly certain that's 1.18+
[16:49] <evilnick> lazyPower, it is easier to pull the changes forward
[16:49] <lazyPower> so just proposing against master would only make more work for you.
[16:49] <lazyPower> got it
[16:49] <evilnick> yes, it looks like it
[17:08] <lazyPower> ok, should be g2g if the content is approved - https://github.com/juju/docs/pull/342