[03:39] <omgponies> what's the format for deploying a charm that's not in the official charm store but is in a bzr repo ?
[03:40] <davecheney> omgponies: local ?
[03:41] <omgponies> without having to grab it local
[03:41] <omgponies> looks like this works - juju deploy cs:~paulcz/precise/elasticsearch
[03:42] <davecheney> yup, that is the format for a private charm store branch
[03:42] <davecheney> thing
[03:43] <omgponies> is there a flag for setting a version ... which I assume is the equivalent of the 'revision' file ?
[03:47] <davecheney> omgponies: version is the value of the revision file in the root of your charm
[03:47] <davecheney> or 1, by default
[03:48] <omgponies> right,  I mean during deployment
[03:48] <omgponies> for instance 'revision' for the charm above is currently 50
[03:49] <omgponies> can I specify that so if some jerk does a breaking change I don't find out by surprise
[03:49] <davecheney> omgponies: make a file called revision in the root of your charm
[03:49] <davecheney> put the number 50 in there
[03:49] <davecheney> oh, hang on
[03:49] <davecheney> i see what you are asking
[03:50] <davecheney>  cs:~paulcz/precise/elasticsearch:$REVISION
[03:50] <davecheney> is the full name
[03:50] <davecheney> best to commit a revision file
[03:50] <davecheney> hm
[03:50] <davecheney> actually
[03:50] <davecheney> no
[03:50] <davecheney> that may
[03:50] <davecheney> probably not work
[03:50] <davecheney> i don't think private charmstore branches have a concept of revision as strong as real charms do
[03:50] <davecheney> there is only one revision of the charm
[03:50] <davecheney> and that is head
[03:51] <omgponies> right
[04:16] <omgponies> is there a doc somewhere that describes everything that is available from the 'unit-get' command ?
[04:18] <davecheney> omgponies: not really
[04:18] <davecheney> from memory
[04:18] <davecheney> public-address and private-address are the only useful ones
[04:20] <davecheney> yup, sauce says those are the only two commands
[04:20] <omgponies> thinking about it from a monitoring perspective ... being able to get a list of units/services deployed to a box and a) set a useful hostname, b) determine what metrics to care about
[04:23] <davecheney> omgponies: unit-get probably isn't going to be what you want
[04:24] <omgponies> yeah I don't think there's any good way to get what I want
[04:25] <omgponies> probably need something to stick in the middle that can correlate 'ip-10-29-206-28' to data from `juju status`
[08:47] <gnuoy> I don't have a public bucket for juju tools and use bootstrap --upload-tools to get them into a new environment. I've upgraded my client to 1.14 and now I want to upgrade the juju tools in existing environments. How do I do that ? I don't see an --upload-tools option to sync-tools
[09:38] <gnuoy> I do see --source for sync-tools and the comment suggests I can specify a local dir but I'm unable to spot the tools on the filesystem
[09:47] <gnuoy> ok, facepalm. I didn't know about: juju upgrade-juju --upload-tools
[09:53] <allenap> When talking about containers in https://juju.ubuntu.com/docs/authors-subordinate-services.html, is that a general term? Or is it referring to LXC, for example?
[10:51] <evilnickveitch> allenap, I believe it is used in a general sense. That's another page that needs a good rewrite...
[10:53] <allenap> evilnickveitch: Cool, thanks.
[12:16] <_mup_> Bug #1236824 was filed: boostrap tries to build jujud <juju:New> <https://launchpad.net/bugs/1236824>
[14:49] <_mup_> Bug #1236900 was filed: tar: unrecognized option ''--numeric-uid'' <juju:New> <https://launchpad.net/bugs/1236900>
[15:15] <drj11> hello
[15:16] <drj11> we have changed our config.yaml for a service to add a new configuration option. How do we add that configuration option to an already running service?
[15:22] <marcoceppi_> drj11: you'll need to upgrade the charm
[15:22] <drj11> marcoceppi_: thanks. I thought so. and I thought we'd tried that. me and morty are working on it. we'll try again
[15:23] <drj11> marcoceppi_: thanks again
[15:25] <marcoceppi_> drj11: was the charm initially deployed from the charm store or from local?
[15:26] <marcoceppi_> adeuring: I've got a few comments on your merge for charm-tools
[15:26] <adeuring> marcoceppi_: thanks, I'll look
[15:26] <marcoceppi_> adeuring: You're trying to find "maintainer" of the branch, but to be honest the owner of the branch is always going to be ~charmers for  promulgated branches (or ~team)
[15:26] <marcoceppi_> adeuring: we don't do stacking anymore for promulgated branches
[15:27] <drj11> marcoceppi_: from local
[15:27] <drj11> marcoceppi_: we never use the charm store
[15:28] <marcoceppi_> drj11: gotcha, that should work. You should be able to verify that the charm revision is bumped in juju status
[15:28] <marcoceppi_> adeuring: so if a maintainer isn't in ~charmers (which is an expected case) then your function will fail
[15:29] <adeuring> marcoceppi_: ok, so we might need a special rule for branches owned by charmers. But what if somebody deliberately wants to fork a promulgated charm? In that case, this person should change the maintainer field. Otherwise, the official maintainers might receive undeserved "hate mail".
[15:32] <marcoceppi_> adeuring: It's an interesting case
[15:33] <marcoceppi_> I think an exception will need to be made for ~charmers for sure
[15:34] <marcoceppi_> adeuring: also, things like the juju gui charm are maintained by "Juju GUI Team", not sure how this would handle that
[15:35] <adeuring> marcoceppi_: let me look at how this works with today's data.
[15:56] <adeuring> marcoceppi_: the GUI is actually a good example: Sending a mail to the address given in the maintainer field (juju-gui@lists.launchpad.net) results in an error: "host polevik.canonical.com [91.189.95.64]: 550 unknown user". OTOH, the "real" mailing list (juju-gui@ubuntu.com) can't be checked either... Anyway, I'm open to suggestions on how else to check the sanity of the maintainer field.
[15:58] <adeuring> ah, "juju-gui@lists.launchpad.net" would have worked, if the juju-gui team had set up this list on LP
[15:59] <marcoceppi_> adeuring: I think just making sure it follows "Full Name <properly-formatted@email.tld>" would suffice. We don't take much responsibility for personal branches and these should be checked during final charm review
[15:59] <adeuring> marcoceppi_: ok, I believe the check you suggest already exists, so let's abandon the MP
[16:00] <marcoceppi_> adeuring: I like the moving of the version to the package
[16:02] <adeuring> marcoceppi_: yeah, that makes things easier for some changes, but that's easy to include as a drive-by fix in any real branch.
[16:02] <marcoceppi_> adeuring: I'll see how easy it is to just cherry pick that commit
[16:03] <adeuring> marcoceppi_: ok, let me just revert the other changes, that's the easiest way, I believe.
[16:07] <adeuring> marcoceppi_: done
[16:38] <matsubara> hi there, I bootstrapped a juju environment on openstack, then juju destroyed it. When I try to bootstrap again, juju says there's already an environment bootstrapped. Any ideas on how to fix this? Logs here: http://pastebin.ubuntu.com/6210041/
[17:05] <kurt_> Hi Guys - I cannot get maas-tags to work with 14.1.  According to the constraints documentation, it should.  Any comments?
[17:07] <kurt_> http://pastebin.ubuntu.com/6210186/
[17:09] <marcoceppi_> 1.14.1 doesn't have maas-tag support
[17:10] <marcoceppi_> It was just added in 1.15.1, https://lists.ubuntu.com/archives/juju/2013-October/003019.html
[17:10] <marcoceppi_> kurt_: ^
[17:10] <kurt_> marcoceppi_: is 1.15.1 supported on precise?
[17:11] <marcoceppi_> kurt_: all juju releases are available for all current supported ubuntu releases
[17:12] <kurt_> Ok, guess its time to upgrade :)
[17:12] <CheeseBurg> no weekly update today?
[17:12] <marcoceppi_> kurt_: however, odd series, like 1.13, 1.15, etc are considered "devel" releases
[17:12] <marcoceppi_> so you need to get them from ppa:juju/devel
[17:13] <kurt_> marcoceppi_: Ok, thanks.  I may have it already.
[17:33] <kurt_> marcoceppi_: in reading the readme for 1.15.1, I'm confused by this statement: "As an unstable release we do not make guarantees about clean upgrade
[17:33] <kurt_> paths of running environments from one 1.13.x version to another."
[17:44] <marcoceppi_> kurt_: 1.<ODD>.X releases are development releases, they are not considered stable
[17:45] <marcoceppi_> For 1.<EVEN>.X releases you should be able to safely run `juju upgrade-juju` to upgrade from one stable juju release to another
[17:47] <kurt_> marcoceppi_: right, but I am in fact going from 1.13.1 to 1.15.1.  The statement appears to guarantee it will break. :)
[17:58] <marcoceppi_> kurt_: it shouldn't, it's just saying it might
[17:58] <marcoceppi_> once 1.16 comes out I recommend you sit on that release and move between 1.<even> releases
[17:58] <kurt_> marcoceppi_: ok, we'll see soon enough, I have it installed ;).  And yep - how far is 1.16 out?
[17:59] <marcoceppi_> kurt_: it should be released sometime this month
[17:59] <marcoceppi_> iirc
[17:59] <kurt_> cool, thanks
[18:39] <matsubara> Does juju keep any local state of a bootstrapped environment? I keep getting an error: ERROR juju supercommand.go:282 environment is already bootstrapped even though I destroyed the whole environment. http://pastebin.ubuntu.com/6210041/
[19:02] <marcoceppi_> matsubara: is this a local environment?
[19:07] <matsubara> marcoceppi_, canonistack
[19:08] <matsubara> marcoceppi_, btw, I updated juju-core to 1.15.1 and now when I destroy-environment, the command returns but doesn't seem to destroy anything (i.e. If I run the command again, I get the question if I want to destroy the env)
[19:32] <mhall119> jcastro: marcoceppi_: halp!
[19:32] <marcoceppi_> mhall119: what's up?
[19:32] <mhall119> I'm trying to make my config-changed hook call my db-relation-changed hook to re-acquire my DB credentials
[19:32] <mhall119> http://bazaar.launchpad.net/~api-website-devs/ubuntu-api-website/api-website-canonical-is-charm/view/head:/hooks/config-changed#L49
[19:33] <mhall119> but according to the charm log, that's failing: https://pastebin.canonical.com/98709/
[19:33] <marcoceppi_> mhall119: yeah, because relation hooks get extra environment variables
[19:33] <mhall119> marcoceppi_: so how can I make this work?
[19:33] <marcoceppi_> you can't really call relation-get out-of-band without supplying a relation-id
[19:34] <marcoceppi_> mhall119: what I've done in charms is record the data in a dot file in the $CHARM_DIR and read that in other hooks
[19:34] <mhall119> marcoceppi_: line 48 of the config-changed hook gets a relation-id
[19:34] <marcoceppi_> mhall119: you need to record the $JUJU_RELATION_ID from the db-relation-changed hook
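A minimal sketch of the dot-file pattern marcoceppi_ describes, with the charm dir and relation id stubbed for illustration (in a live relation hook, juju itself supplies `$CHARM_DIR` and `$JUJU_RELATION_ID`; the `.db-state` file name is invented here):

```shell
#!/bin/sh
# Sketch: a relation hook records state that other hooks need later.
# CHARM_DIR and JUJU_RELATION_ID are stubbed; juju sets them in real hooks.
CHARM_DIR=${CHARM_DIR:-$(mktemp -d)}
JUJU_RELATION_ID=${JUJU_RELATION_ID:-db:0}

# --- in hooks/db-relation-changed: save the relation id for later ---
printf 'DB_RELATION_ID=%s\n' "$JUJU_RELATION_ID" > "$CHARM_DIR/.db-state"

# --- later, in hooks/config-changed: read it back ---
. "$CHARM_DIR/.db-state"
echo "recorded relation: $DB_RELATION_ID"
```

The same file can carry whatever `relation-get` returned (host, credentials), so non-relation hooks never need to call `relation-get` at all.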
[19:34] <mhall119> marcoceppi_: FWIW, this is derived from the internal certification website charm
[19:35] <mhall119> marcoceppi_: $(relation-ids db) won't work?
[19:36] <mhall119> or is it that I need to set that to a named env var in the db-relation-changed hook before calling relation-get?
[19:37] <marcoceppi_> mhall119: db-relation-changed, when fired during a relation event, will already have the proper JUJU_RELATION_ID set
[19:37] <marcoceppi_> one second
[19:37] <mhall119> ok, so if I fire it from config-changed I need to set JUJU_RELATION_ID=$DID when I call db-relation-changed?
[19:38] <marcoceppi_> mhall119: yes
[19:38] <mhall119> instead of as $1
[19:38] <marcoceppi_> $1 ?
[19:39]  * marcoceppi_ checks code
[19:39] <mhall119> it was passing the relation id from $(relation-ids db) as the first argument to db-relation-changed when it called it
[19:39] <mhall119> https://pastebin.canonical.com/98709/
[19:39] <mhall119> I mean
[19:39] <mhall119>     DID=$(relation-ids db)
[19:39] <mhall119>     [ -n "$DID" ] && hooks/db-relation-changed $DID
[19:39] <marcoceppi_> right, set it as an environment variable before executing the hook
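A runnable sketch of that suggestion: export `JUJU_RELATION_ID` for the hook invocation instead of passing the id as `$1`. The hook here is a stub standing in for a real `hooks/db-relation-changed`, and `DID` is hard-coded where a real hook would use `$(relation-ids db)`:

```shell
#!/bin/sh
# Sketch: call a relation hook out-of-band with JUJU_RELATION_ID set.
dir=$(mktemp -d)
mkdir -p "$dir/hooks"
cat > "$dir/hooks/db-relation-changed" <<'EOF'
#!/bin/sh
echo "running for relation: $JUJU_RELATION_ID"
EOF
chmod +x "$dir/hooks/db-relation-changed"

DID="db:0"   # in a real hook: DID=$(relation-ids db)
[ -n "$DID" ] && out=$(JUJU_RELATION_ID="$DID" "$dir/hooks/db-relation-changed")
echo "$out"
```

Setting the variable only for that one command (`VAR=value cmd`) keeps the caller's environment clean.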
[19:40] <mhall119> ok
[19:40] <mhall119> and is that the only one that needs to be set?
[19:40] <marcoceppi_> db-relation-changed, unless programmed to, won't know what to do with $2
[19:40] <marcoceppi_> $1
[19:40] <marcoceppi_> mhall119: you typically need JUJU_REMOTE_UNIT
[19:40] <marcoceppi_> but I think you can get away without it
[19:41] <marcoceppi_> I'm waiting for my desktop to come back online to double check
[19:41] <mhall119> ok
[19:44] <mhall119> marcoceppi_: could it have ever worked passing it as the first parameter?  Like I said, this was taken from one of the IS charms
[19:44] <marcoceppi_> mhall119: depends on how the db-relation-changed hook is set up
[19:45] <marcoceppi_> mhall119: doesn't look like it
[19:47] <mhall119> ah, ok, I see how it was being used before
[19:48] <mhall119> marcoceppi_: ok, new question
[19:48] <mhall119> suppose I have a config parameter called bzr_revno, and in config.yaml for my charm I have it default to 1
[19:48] <mhall119> then, I update my app's code and I update the charm's config to make bzr_revno default to 2
[19:49] <mhall119> now I just found out that this doesn't actually call config-changed hook
[19:49] <mhall119> is there *any* hook that would react to a change in the default config value?
[19:50] <marcoceppi_> mhall119: hooks/upgrade-charm - but that's only reacting to an upgrade-charm event. What I recommend you do is call hooks/config-changed from the upgrade-charm hook
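A sketch of that recommendation, with a stubbed hooks directory so it runs outside a charm (a real charm already has a meaningful `config-changed`; the alternative is the symlink `ln -s config-changed hooks/upgrade-charm` mentioned just below):

```shell
#!/bin/sh
# Sketch: hooks/upgrade-charm delegates to config-changed so config logic
# re-runs after every charm upgrade. Both hooks are stubs for illustration.
dir=$(mktemp -d)
mkdir -p "$dir/hooks"
printf '#!/bin/sh\necho configured\n' > "$dir/hooks/config-changed"
chmod +x "$dir/hooks/config-changed"

# upgrade-charm simply execs config-changed from its own directory
printf '#!/bin/sh\nexec "$(dirname "$0")/config-changed"\n' > "$dir/hooks/upgrade-charm"
chmod +x "$dir/hooks/upgrade-charm"

result=$("$dir/hooks/upgrade-charm")
echo "$result"
```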
[19:50] <mhall119> and then when IS deploys a new version, upgrade-charm will be called?
[19:51] <marcoceppi_> mhall119: I don't know how IS does it, but if they use upgrade-charm, then yes, hooks/upgrade-charm will run
[19:51] <mhall119> ok, I'll check
[19:52] <mhall119> oh, hey, hooks/upgrade-charm is a symlink to hooks/config-changed
[19:52] <mhall119> so it should be called anyway
[19:52] <marcoceppi_> :)
[19:52] <mhall119> but it wasn't...
[19:52] <mhall119> and I'm not sure why
[19:52] <marcoceppi_> check the logs?
[19:52] <mhall119> https://pastebin.canonical.com/98709/
[19:53] <marcoceppi_> also, run juju get api-website
[19:53] <marcoceppi_> see what juju thinks the value/defaults are
[19:54] <marcoceppi_> mhall119: there might be a thing where juju doesn't update default values for configs when charms are upgraded
[19:54] <marcoceppi_> I could see that being the case
[19:55] <mhall119> so you'd have to still call juju set api-website-app bzr_revno=37
[19:55] <marcoceppi_> mhall119: if what I described above is true, then yes
[19:55] <mhall119> ok
[19:55] <mhall119> between that and needing to set JUJU_RELATION_ID, I think that explains the failure I was having
[19:56] <mhall119> thanks marcoceppi_
[19:56] <marcoceppi_> mhall119: np, let me know if you bump into any other oddities
[19:59] <mhall119> don't worry, I will :)
[20:20] <kurt_> marcoceppi_: odd things going on with using maas-tags and deploying services http://pastebin.ubuntu.com/6210935/
[20:21] <kurt_> my tag matches an existing host, the one the bootstrap node is on, but juju still wants to spin up a second node
[20:45] <jamespage> kurt_, specifying a tag does not force two services to deploy onto the same machine
[20:45] <jamespage> kurt_, you have to use --to <machineid> to do that
[20:47] <kurt_> jamespage: but if those hosts are not yet known to juju, how do you accomplish deploying services to those nodes?
[20:47] <jamespage> kurt_, deploy one service first, and then deploy using --to for subsequent services
[20:48] <kurt_> jamespage: how is handling moving on to the next node done then?
[20:48] <jamespage> by co-locating services you are specifically telling juju that you are taking charge of where stuff lands
[20:48] <jamespage> to add a new node just don't specify --to
[20:49] <kurt_> right.  but if I want specific services to land on specific nodes, and they are unknown to juju, this sounds like a problem
[20:49] <kurt_> I get that I use --to. But when I need the next set of services on the next node...
[20:50] <kurt_> how do I force juju to use a particular node? I thought that's what the purpose of tagging was
[20:50] <kurt_> tagging + constraints
[20:52] <jamespage> kurt_, tagging is just a way of grouping servers
[20:52] <jamespage> kurt_, juju will ask maas for a new unit which matches the provided tag
[20:52] <kurt_> ok, but it doesn't appear that was happening
[20:53] <jamespage> kurt_, thats not what it sounded like above "juju still wants to spin up a second node"
[20:53] <jamespage> thats the correct result
[20:55] <kurt_> jamespage: so if the node is already running that matches the tag, juju will look to deploy elsewhere without the "--to" function?
[20:55] <jamespage> kurt_, yes
[20:55] <kurt_> that doesn't seem intuitive
[20:55] <jamespage> the node is already allocated to a service
[20:55] <jamespage> so juju won't by default place another service on it
[20:56] <jamespage> think about a deployment with 1000 nodes
[20:56] <jamespage> where 400 are for storage and 600 are for compute
[20:56] <jamespage> I can assign tags for 'compute' and 'storage' to the different server types
[20:56] <jamespage> and then use juju to target nova-compute to 'maas-tag=compute' and ceph to 'maas-tag=storage'
[20:57] <jamespage> or servers could be tagged per physical availability zone, or switch or whatever
[20:57] <kurt_> Ok, when we talk about more servers in the tag group than a single server it makes sense
[20:58] <kurt_> but in the context of a single node, it didn't
[20:58] <jamespage> tags don't really make sense in that context
[21:00] <kurt_> ok. makes sense. I am really looking for the one-size-fits-all hammer for juju. It just doesn't exist.
[21:00] <kurt_> my "--force-to" option :D
[23:15] <omgponies> is there a way to get the unit name from inside a hook
[23:16] <omgponies> i.e. from juju status where I see  'services:\n  elasticsearch\n ... units: elasticsearch/0
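The question goes unanswered in the log; one hedged answer, assuming standard juju hook behavior, is that juju exports `JUJU_UNIT_NAME` (e.g. `elasticsearch/0`) into every hook's environment. The stub default below only exists so the snippet runs outside a hook:

```shell
#!/bin/sh
# Sketch: inside a hook, juju sets JUJU_UNIT_NAME to "<service>/<number>".
# The default here is a stub so this runs outside a hook context.
JUJU_UNIT_NAME=${JUJU_UNIT_NAME:-elasticsearch/0}
service=${JUJU_UNIT_NAME%/*}      # strip "/<number>" -> "elasticsearch"
unit_number=${JUJU_UNIT_NAME#*/}  # strip "<service>/" -> "0"
echo "$service $unit_number"
```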