[00:45] <axw> marcoceppi Icey: storage will be supported in bundles when deployed from CLI in 1.26, not sure when juju-gui support will be added
[00:46] <axw> rick_h_: ^^
[00:47] <lathiat> storage in bundles?
[00:57] <axw> lathiat: yes, the ability to specify how storage is allocated for charms that support the new storage feature
[00:58] <axw> lathiat: e.g. how many disks to allocate for each unit by default
[01:01] <rick_h_> axw: conversations took place today. we wanted to check about creating pools from bundles?
[01:02] <axw> rick_h_: that sounds a bit odd, since a bundle should be deployable to multiple clouds?
[01:03] <rick_h_> axw: true but it also can't be a full deployment without the pool. it's something to chat on there.
[01:04] <rick_h_> axw: but the bundle export with storage will be on gui 2.0 in dec i think
[01:09] <axw> rick_h_: yeah I think it needs sprint-meeting discussion. it doesn't seem straightforward to me. for some providers the pools are really environment specific. e.g. for MAAS, you can specify tags that disks must match... that's going to be specific to an installation of MAAS
[01:09] <axw> rick_h_: so then you have (bundle, pool, MAAS-install) tuple that defines your deployment
[01:13] <rick_h_> axw: right so curious if it's like specifying machines and then putting services on those machines
[01:14] <rick_h_> axw: it can be very substrate specific and we'd not want those in the store
[01:14] <rick_h_> axw: but useful for a repeatable deploy for your own use
[01:14] <axw> rick_h_: yeah, I guess so. like using instance-type in constraints.
[01:15] <rick_h_> axw: right
[01:19] <axw> rick_h_: FWIW, there's a way to specify an override for storage when deploying a bundle on the command line. --storage <service>:<storage-name>=<constraints>. so it's possible to deploy a bundle with a specific pool, it's just not self-contained
[01:20] <axw> rick_h_: if it's just for personal use, I'm not really convinced the amount of work involved in supporting it is warranted, but I'm not deploying bundles all the time so I may have a warped view :)
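For reference, the override axw describes has the shape `<service>:<storage-name>=<constraints>`. A sketch of how such an argument decomposes, using made-up service, store, and pool names (the `juju deploy --storage` form in the comment is the one quoted above; everything else is illustrative):

```shell
# Hypothetical use of the CLI override axw describes: deploy a bundle but
# pin one service's storage to an environment-specific pool, e.g.
#   juju deploy ./bundle.yaml --storage postgresql:pgdata=maas-ssd,1,100G
# (service "postgresql", store "pgdata", pool "maas-ssd" are made up.)
#
# The argument has the shape <service>:<storage-name>=<constraints>;
# decomposing it with POSIX parameter expansion:
arg="postgresql:pgdata=maas-ssd,1,100G"
service="${arg%%:*}"        # text before the first ':'
rest="${arg#*:}"            # everything after the first ':'
store="${rest%%=*}"         # between ':' and '='
constraints="${rest#*=}"    # after '='
echo "$service $store $constraints"    # prints "postgresql pgdata maas-ssd,1,100G"
```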
[05:34]  * Sharan slaps kwmonroe around a bit with a large fishbot
[14:12] <tvansteenburgh> rick_h_: need clarification - with new juju store stuff, can one promulgate directly from a /development/ url, or must that development revision be published first?
[14:12] <rick_h_> tvansteenburgh: yes, you can --publish a ~rharding/development/mysql to publish the latest development revision
[14:13] <rick_h_> tvansteenburgh: at least that's the spec, I'm looking forward to getting the client to try it out
[14:13] <tvansteenburgh> rick_h_: and you could do the same with a specific revision i assume?
[14:13] <rick_h_> tvansteenburgh: yes
[14:13] <tvansteenburgh> oh wait, you didn't answer my question :)
[14:13] <rick_h_> tvansteenburgh: or even just 'charm upload --publish .' to both upload it to dev and publish it in one stroke
[14:13] <tvansteenburgh> *promulgate*
[14:14] <rick_h_> tvansteenburgh: oh, so to promulgate you have to use the published url — it has to be published first.
[14:14] <tvansteenburgh> ok, thanks
[14:14] <rick_h_> tvansteenburgh: then the end user can only upload to develop, and only those with the promulgate ACL can publish from develop to the promulgated published space
[14:14] <rick_h_> if that makes sense
[14:14] <tvansteenburgh> yep
[14:44] <lazypower> woo
[14:44] <lazypower> new tooling
[15:07] <stokachu> anyone know if there is a way to react to a 'relation finished' using juju api or any other means?
[15:07] <stokachu> not specific to reactive pattern just in general
[15:07] <tvansteenburgh> rick_h_: how does one determine whether a user-namespace charm has advanced beyond what is currently promulgated?
[15:08] <tvansteenburgh> stokachu: relation-departed|broken hooks?
[15:09] <stokachu> looking for a way to run a process against a service after it has joined a relation
[15:09] <stokachu> but want to make sure the relation stuff is done
[15:09] <tvansteenburgh> ah, probably need to use status-set for that
[16:28] <roadmr> helloo juju people. How can a unit know its own id? i.e. which command can I run, while inside unit foo/0 (via ssh), to get "foo/0" (or the tag: unit-foo-0)?
[16:29] <marcoceppi> roadmr: if you're in a hook, $JUJU_UNIT_NAME
[16:31] <jrwren> is there a place to see queue of tests to be run for stuff at http://reports.vapour.ws/ ?
[16:31] <roadmr> marcoceppi: what if I'm not in a hook? :( i.e. a plain shell
[16:31] <marcoceppi> jrwren: what do you mean
[16:31] <marcoceppi> roadmr: I mean, not easily, there are "ways"
[16:32] <jrwren> marcoceppi: I updated a MR and am wondering when tests for it will run again.
[16:32] <marcoceppi> jrwren: never, we have to push a button for updates. Only initial new items are run. In the new review queue it'll work on update but it's hard for us to track that in the old one
[16:32] <roadmr> marcoceppi: I could write the unit's id/tag to a file as part of a charm, then I'd have that info available for later... off the top of my crazy head that's one idea
[16:32] <marcoceppi> jrwren: link me to the review item in the review queue and I'll kick off tests
[16:33] <marcoceppi> roadmr: yeah, the other is to sniff init files, but that won't work if you have multiple units on the node
[16:34] <jrwren> marcoceppi: oh!  I'm glad I asked :)  https://code.launchpad.net/~evarlast/charms/trusty/mongodb/fix-dump-actions/+merge/277191
[16:34] <marcoceppi> jrwren: that's not a review queue link ;)
[16:34] <jrwren> marcoceppi: oh.
[16:34] <marcoceppi> jrwren: http://review.juju.solutions/review/2357 will show tests and the results
[16:34] <roadmr> marcoceppi: I see them! is it at all possible for two units of the *same* service to be deployed on the same node?
[16:34] <marcoceppi> jrwren: if there are no tests "PENDING" then none are running
[16:35] <jrwren> marcoceppi: where do I get a review queue link? its in the list here, I've no idea what link you want. http://reports.vapour.ws/latest-bundle-and-charm-results
[16:35] <marcoceppi> roadmr: if someone is crazy, and does a juju add-unit --to, then yes
[16:35] <marcoceppi> jrwren: http://review.juju.solutions
[16:35] <roadmr> marcoceppi: if not, maybe init file poking would help me, since I don't care about services bar, baz, quux, as long as I know this node is foo/0
[16:35] <roadmr> marcoceppi: oh, yes the crazy factor :)
[16:35] <marcoceppi> roadmr: it's probably easier to write it to a file, tbh
[16:35] <marcoceppi> jrwren: I just queued up tests for it
[16:36] <roadmr> marcoceppi: hey thanks for your help/feedback :) at least I know 1) it's not straightforward, 2) I have several options to work with.
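roadmr's file-drop idea can be sketched in a few lines; the file path is an arbitrary choice for this sketch, and the name-to-tag rule is the simple substitution his question implies:

```shell
# In a charm hook, $JUJU_UNIT_NAME is set by juju; a hook can persist it
# for later out-of-hook use (the target path is an arbitrary choice):
#   echo "$JUJU_UNIT_NAME" > /etc/juju-unit-name
#
# Deriving the tag "unit-foo-0" from a unit name like "foo/0":
unit_name="foo/0"
unit_tag="unit-$(printf '%s' "$unit_name" | tr '/' '-')"
echo "$unit_tag"    # prints "unit-foo-0"
```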
[16:36] <jrwren> marcoceppi: http://review.juju.solutions/review/2357 ?  that link?
[16:36] <marcoceppi> yes
[16:36] <marcoceppi> jrwren: it'll say either PASS, FAIL, PENDING, or RUNNING
[16:36] <jrwren> marcoceppi: in that case, this too please: http://review.juju.solutions/review/2371
[16:37] <marcoceppi> jrwren: so if you don't see any PENDING or RUNNING then ping a ~charmer to kick them off
[16:37] <marcoceppi> jrwren: done :)
[16:37] <jrwren> marcoceppi: thank you.
[16:38] <marcoceppi> tvansteenburgh: it looks like LXC substrate for charm testing is broken again "ERROR there was an issue examining the environment: cannot use 37017 as state port, already in use"
[16:38] <marcoceppi> tvansteenburgh: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1535/console
[16:38] <tvansteenburgh> marcoceppi: thanks, looking into it
[16:39] <roadmr> marcoceppi: hey feel free to defer me if busy, but what I want to ultimately achieve is being able to deploy a crontab configuration to all units for one particular service but have only one of them run the crontab, the others should ignore or early-exit from it. I can't imagine I'm the first to need something like this?
[16:40] <marcoceppi> roadmr: so, are you doing this crontab as part of a charm? or outside the charm?
[16:41] <roadmr> marcoceppi: good question :) my plan is for the charm to write the crontab file (say in /etc/cron.daily/blah)
[16:41] <marcoceppi> roadmr: if you want it to only run on one unit, then just have this kind of codeblock
[16:41] <marcoceppi> http://paste.ubuntu.com/13493735/
[16:42] <marcoceppi> roadmr: juju will elect a leader from the service group for you at deploy time, and there will ever only be one leader. If that leader goes away a `leader-elected` hook will fire where you can codify that check so that the cront will only ever exist on one machine
[16:43] <roadmr> marcoceppi: oh cool! yes, I was gravitating towards using is-leader but was thinking of using it at runtime (i.e. in the unit). Doing it on hooks sounds reasonable
[16:44] <marcoceppi> roadmr: it'd be a safer and more repeatable way to do what you're looking for
[16:44] <roadmr> marcoceppi: indeed... and it sounds like the correct way to use is-leader, rather than my horrid mental hacks :)
[16:44] <marcoceppi> :D
[16:45] <roadmr> marcoceppi: cool! I'll dive into doing it that way. Thanks so much!
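The paste marcoceppi links isn't reproduced in this log, so its exact code is unknown; here is a minimal sketch of the leadership guard he describes, with the leadership result passed in as a parameter so the logic runs outside a real hook (a charm would consult the `is-leader` hook tool instead):

```shell
# Leader-only cron guard (sketch; the linked paste's content is not in the
# log). A real charm hook would test `is-leader`; here leadership is a
# parameter, and the cron path is a stand-in for /etc/cron.daily/blah.
CRON_FILE="/tmp/demo-cron-blah"

ensure_cron() {
    if [ "$1" = "true" ]; then
        # Leader: make sure the cron job exists.
        echo '#!/bin/sh' > "$CRON_FILE"
    else
        # Non-leader (or leadership lost, via the leader-elected hook):
        # make sure it does not.
        rm -f "$CRON_FILE"
    fi
}

ensure_cron true  && [ -f "$CRON_FILE" ] && echo "installed"
ensure_cron false
[ -f "$CRON_FILE" ] || echo "removed"
```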
[16:49] <cory_fu> marcoceppi: Still no ANN for the new charm-tools?  mbruzek was hitting an issue that was fixed in the new version with --hide-metrics
[16:53] <marcoceppi> cory_fu: the latest version is out and live
[16:54] <marcoceppi> cory_fu: it's been out (1.9.2) since Friday, still fighting homebrew
[16:54] <cory_fu> Anything I can help with?
[16:55] <marcoceppi> cory_fu: it's grunt work, see https://github.com/Homebrew/homebrew/pull/46273 apparently the way I've been doing Formulas for the past 2 years is "wrong"
[16:58] <marcoceppi> cory_fu: I'm working on it again now, if I can get poet to work I should have it in homebrew soon enough
[16:58] <cory_fu> Thanks a bunch
[16:58] <cory_fu> mbruzek's issue was just that he was not aware of the new release.  It is working now, I believe.
[17:01] <marcoceppi> of course, poet doesn't install cleanly.
[17:03] <tvansteenburgh> how would one fix this: ERROR failed to bootstrap environment: cannot make cloud-init init script for the machine-0 agent: relative path in ExecStart (cloud-city/charm-testing-lxc/tools/machine-0/jujud) not valid
[17:05] <tvansteenburgh> "relative path not valid"... i have cloud-city/ dir, but nothing beyond in that path
[17:05]  * tvansteenburgh tries creating dirs...
[17:07] <tvansteenburgh> nope
[17:15]  * tvansteenburgh facepalms. note to future self - don't set JUJU_HOME to a relative path
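The relative-path trap tvansteenburgh hit can be checked up front; this guard is an assumption about good practice, not juju behaviour:

```shell
# Fail fast if JUJU_HOME is relative (the mistake behind the
# "relative path in ExecStart" bootstrap error above).
JUJU_HOME="cloud-city"            # relative: the kind of value that broke it
case "$JUJU_HOME" in
    /*) status="ok" ;;
    *)  status="error: JUJU_HOME must be absolute, got '$JUJU_HOME'" ;;
esac
echo "$status"

JUJU_HOME="$HOME/cloud-city"      # absolute: safe
case "$JUJU_HOME" in
    /*) status="ok" ;;
    *)  status="error" ;;
esac
echo "$status"    # prints "ok"
```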
[17:18] <rick_h_> tvansteenburgh: ouch, that seems like a party there
[22:23] <blahdeblah> Hi all, quick Q: is it possible to tell juju to use MAAS to bootstrap and install the bootstrap node into a container in MAAS rather than the base node itself, leaving the base node available for other juju units?
[22:30] <marcoceppi> blahdeblah: so bootstrap in a LXC container?
[22:31] <blahdeblah> marcoceppi: yep
[22:31] <marcoceppi> blahdeblah: not really, at least not atm, however you can just create a KVM container on the maas-master and tag it "bootstrap" so you can `juju bootstrap --constraints="tags=bootstrap"`
[22:32] <blahdeblah> yeah - I've done that before, but it's cumbersome and manual
[22:32] <blahdeblah> When the LXD driver comes it will be possible?
[22:32] <blahdeblah> s/it will/will it/
[22:32] <lazypower> marcoceppi: it may be complete coincidence, but if i have a machine tagged bootstrap maas always picks it for the bootstrap node.
[22:33] <lazypower> i dont have to pass the --constraints bit
[22:33] <marcoceppi> blahdeblah: I don't know, I'm not sure if you'll be able to register lxd as a chassis for maas
[22:33] <blahdeblah> lazypower: hi - did you see my question about the DNS charm recently?
[22:33] <marcoceppi> though that would be pretty awesome
[22:33] <lazypower> blahdeblah : I did not
[22:33] <blahdeblah> marcoceppi: :-(
[22:33] <lazypower> blahdeblah on the repo?
[22:34] <blahdeblah> lazypower: No, here; I've read through the doco on it a couple of times and I'm still struggling to understand what you're aiming at with it.
[22:35] <lazypower> blahdeblah : a single charm to handle DNS
[22:35] <lazypower> either setup the bind infrastructure to handle DNS for me, or proxy requests to my upstream DNS provider like Rt53
[22:35] <blahdeblah> lazypower: Was hoping you'd have some time to have a hangout to discuss so I can fit into it with the stuff I'm working on.
[22:36] <lazypower> Sure. I have a todo to lend a hand integrating it into the big data bundles possibly
[22:36] <lazypower> I can wrap it into that todo, and sync next week over it so it'll be fresh in my mind?
[22:37] <lazypower> i haven't looked under the hood of the charm in a bit, it's been an on-going WIP
[22:37] <blahdeblah> marcoceppi: Any idea who are the people to talk to about explaining my use case?
[22:37] <blahdeblah> lazypower: Cool - thanks
[22:38] <blahdeblah> lazypower: The current thing that you seem to be aiming for is sending requests to manage DNS records over the relation, right?
[22:39] <lazypower> Correct
[22:39] <blahdeblah> lazypower: What I'm hoping to do is tie the DNS records directly to a relation, so that as soon as I add a unit and it's functional, it gets added to DNS without needing to ask anything.
[22:39] <lazypower> Thats exactly the plans of the auto relation
[22:39] <lazypower> it uses the units name and a wildcard domain to populate the entire model
[22:39] <lazypower> that or SRV records
[22:39] <lazypower> er
[22:39] <blahdeblah> I thought that might be the case, but the doco for it is empty at the moment. :-)
[22:39] <lazypower> welllll
[22:40] <lazypower> lets sling some code g-funky
[22:40] <lazypower> :D
[22:40] <marcoceppi> blahdeblah: the maas team, or email the juju mailing list about it
[22:40] <blahdeblah> marcoceppi: I would have thought it would be all in the juju driver side of things; we can deploy units to LXCs on MAAS now, just not the bootstrap node.
[22:41] <marcoceppi> blahdeblah: wait
[22:41] <marcoceppi> blahdeblah: what?
[22:41] <marcoceppi> the lxc container has to run somewhere though
[22:41] <blahdeblah> lazypower: I'm definitely happy to do some coding for it
[22:42] <blahdeblah> marcoceppi: yep, and the MAAS driver allows saying "juju add-unit --to lxc:N", where N is a MAAS-provisioned machine.
[22:42] <blahdeblah> marcoceppi: s/driver/provider/ maybe - not sure on the exact terminology there
[22:42] <marcoceppi> blahdeblah: sure, that makes sense, that exists for other providers
[22:42] <marcoceppi> so what you want is for maas to spin up a machine and put the bootstrap node on a lxc container on that node?
[22:43] <blahdeblah> exactly
[22:45] <blahdeblah> That way an environment could be fully auto-provisioned in MAAS without requiring a dedicated bootstrap machine or manually adding KVM nodes to MAAS.
[23:44] <los_> Q: is there a current list of providers for Juju?  "providers" is the "per-IaaS enabler" bits right?
[23:45] <cholcombe> i seem to have run into this problem with the manual provider: https://bugs.launchpad.net/juju-core/+bug/1412621
[23:45] <mup> Bug #1412621: replica set EMPTYCONFIG MAAS bootstrap <adoption> <bootstrap> <bug-squad> <charmers> <cpec> <cpp> <maas-provider> <mongodb> <oil> <juju-core:Fix Committed by frobware> <juju-core 1.24:Won't Fix> <juju-core 1.25:Fix Released by frobware> <https://launchpad.net/bugs/1412621>
[23:46] <marcoceppi> los_: there is
[23:48] <los_> marcoceppi: thanks...I'll look again.  Did you see this re: LXD? https://www.youtube.com/watch?v=QyXLRDN0ERo
[23:48] <marcoceppi> los_: `juju init --show | grep "type:" | grep -v "#" |  awk '{print $2}'` which says the following:
[23:49] <marcoceppi> los_: openstack maas joyent gce ec2 cloudsigma vsphere manual local azure
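marcoceppi's one-liner pulls provider types out of the boilerplate config that `juju init --show` prints. The same pipeline, run against a small, abridged sample of that kind of YAML (the sample is illustrative, not the real output):

```shell
# Abridged stand-in for `juju init --show` output; the real output lists
# every provider marcoceppi names above.
sample='environments:
    amazon:
        type: ec2
        # type: commented-out-example
    maas:
        type: maas
    local:
        type: local'

# The pipeline from the log: keep "type:" lines, drop comments, print values.
echo "$sample" | grep "type:" | grep -v "#" | awk '{print $2}'
# prints:
#   ec2
#   maas
#   local
```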
[23:49] <marcoceppi> los_: Yes! I have seen the lxd provider, it'll be in the 1.26-alpha2 release which should be out in a week or so
[23:50] <los_> marcoceppi: I swear, if I could wave a magic wand and banish all the old maas/juju vids I think that'd be my best contribution!
[23:50] <marcoceppi> los_: haha, yeah... you and me both. You should checkout the Juju video channel which is only publishing fresh content
[23:51] <marcoceppi> los_: https://www.youtube.com/channel/UCSsoSZBAZ3Ivlbt_fxyjIkw
[23:51] <marcoceppi> los_: we're publishing new content about once or twice a week to that channel
[23:54] <los_> marcoceppi: THANKS!  Awesome.