[00:15] bolthole: blahdeblah is correct, you can't rely on the order of hook execution.. so the hooks must handle the potential for nothing being returned by relation-get
[00:16] thanks for confirming, kwmonroe
[00:16] in your case, it would probably be easiest to wire in your webapp using webapp-container-relation-changed
[00:16] you are guaranteed that relation-changed *will* fire anytime data on the relation changes
[00:16] (vs relation-joined)
[00:18] so, a webapp-container-relation-changed hook for you might look like "FOO=`relation-get webapp-path`; if [ -n "$FOO" ]; then wget the webapp and move it to $FOO; fi" -- otherwise do nothing and wait for the next time relation-changed runs.
[00:18] eventually, relation-get webapp-path will return data (once tomcat sends it), and your -changed hook will get it.
[00:20] thanks. I kinda empirically found out that it gets fired when I want it to. Nice to know that it's guaranteed that way.
[00:21] now that I think about it some more though...
[00:22] I think that mbruzek is approaching the information exchange the wrong way.
[00:22] he is using the relationship to share the tomcat webapp directory
[00:23] but... when and if juju ever gets to the point where subordinate services can be removed....
[00:23] I'm thinking that the relationship may be severed before the service gets told to 'stop' itself.
[00:23] But at that point, it has just lost the information on where to clean up after itself, because that information was in the relationship, which no longer exists
[00:24] you can remove subordinate services today in juju. relation-departed will fire, which is where you should remove any data that you don't want left on the principal unit.
[00:24] huh. so relation-departed is inappropriately named? it should be called "relation-deparTING"? ;-)
[00:25] meh, i guess. -departed means the relationship is tearing down but you can still access relation data (like webapp-path). -broken means the relation is gone.
[00:38] WOOO! I got it to work!
[00:44] nice bolthole!
[00:48] kwmonroe could you please approve my one-char-syntax-error fix to the tomcat charm?
[00:49] i submitted it this morning. it was ignored, and someone else worked on the charm so there were conflicting lines.
[00:49] i just redid it so there are no conflicts. please push it through before more conflicts happen?
[00:50] the webapp-path value is completely unusable without the fix.
[00:52] sure bolthole - let me take a look
[00:53] bolthole: i only see this from 18 hours ago. did you see another commit to trusty/tomcat that made this merge conflict?
[00:53] https://code.launchpad.net/~bolthole/charms/trusty/tomcat/trunk/+merge/284213
[00:57] yeah, just 60 seconds before i wrote that :)
[00:57] https://code.launchpad.net/~bolthole/charms/precise/tomcat/trunk/+merge/284373
[00:58] oh wait, it's precise
[00:58] not trusty. yet.
[00:59] so i guess there's no conflict for the trusty one, and you could just approve that one too ;)
[01:31] Thanks
[01:33] bolthole: you talking to me? if so, yw :)
[01:34] :) So.. how does the charm version in the store get derived from the backend code base?
[01:34] one is at 15, the other is at 3 or something?
[01:35] yeah bolthole - there's no correlation. at least not one that i can figure out.
[01:36] i mean, i guess you could have 10 commits to LP, push to store making charm rev 1, then 4 more commits to LP, push to make charm rev 2, then 1 more LP commit, push to make charm rev 3.
[01:36] but i don't know how you could look at an LP commit number and figure out the charm revno.
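A fuller version of the hook sketched at [00:18], together with the -departed cleanup discussed at [00:24]/[00:25]. This is a minimal sketch only, not the tomcat charm's actual code; the download URL and war filename are hypothetical stand-ins for whatever the charm really fetches.

    #!/bin/bash
    # hooks/webapp-container-relation-changed
    # relation-get returns an empty string until the remote unit has set
    # the key, so bail out quietly and wait for the next -changed run.
    set -e
    webapp_path=$(relation-get webapp-path)
    if [ -z "$webapp_path" ]; then
        juju-log "webapp-path not set yet; waiting for next relation-changed"
        exit 0
    fi
    # APP_URL is a hypothetical source for the webapp (config option,
    # bundled file, etc.); substitute whatever the charm actually uses.
    APP_URL="http://example.com/mywebapp.war"
    wget -q "$APP_URL" -O /tmp/mywebapp.war
    mv /tmp/mywebapp.war "$webapp_path/"

    #!/bin/bash
    # hooks/webapp-container-relation-departed
    # -departed still has access to relation data (unlike -broken), so
    # this is the last chance to read webapp-path and clean up.
    set -e
    webapp_path=$(relation-get webapp-path)
    if [ -n "$webapp_path" ]; then
        rm -f "$webapp_path/mywebapp.war"
    fi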
[01:37] okay, so it isn't automatic; you have to make some kind of explicit push each time
[01:37] that's correct
[01:39] well, no, that's not correct. i don't know how it works. the numbers just go up ;)
[01:39] dohhh
[01:40] oh oh oh -- maybe that is how it works.. each commit bumps the LP revision, but only a push triggers the ingestion into the store.
[01:40] and now i have thought too much about this. thanks.
=== natefinch-afk is now known as natefinch
[08:06] does juju on openstack support the storage hooks?
[09:26] gnuoy, quick review pls - https://code.launchpad.net/~james-page/charm-helpers/lp1537155/+merge/284411
[09:34] marcoc|airplane, the subordinate thing is a red herring. I would just like to be able to discover the units running on this machine, in the same way I can find out the hostname of the machine
[09:34] to avoid having to explicitly configure that info in every charm
[09:40] the context here is *not* charm code. It's a library that is used by our apps to standardise/improve their log output
[09:41] we currently tag each log line with the hostname of the machine, which is easy to discover. But the hostname is kinda useless in a juju world; we are more interested in the juju unit that the app runs under
[09:41] so I was wondering if there's a way to discover the running juju units automatically
[09:42] or else every charm for each service that uses this library (10+) would need to be manually updated to set an env var or similar with the unit name
[13:16] Hi, I'm trying to learn more about Juju by using the official vagrant boxes, and they fail to properly boot up on both Windows and Mac OS X. I've tried trusty64, trusty32 and wily64. All of them are stuck at "Installing package: cloud-utils"
[13:46] kadams54: with the theblues library, how can i pull a specific revision? for example, bundle = cs.bundle('data-analytics-with-sql-like/5')
[13:48] specifically for a bundle
[13:58] stokachu: yes, https://api.jujucharms.com/charmstore/v4/openstack-base-34/archive/bundle.yaml vs https://api.jujucharms.com/charmstore/v4/openstack-base-39/archive/bundle.yaml
[13:58] stokachu: the thing is that a new revision might be due to a readme update or the like
[13:58] stokachu: so not all revisions will be changes to the bundle.yaml file itself
[13:58] ah ok
[13:59] I was confused because for the openstack 38 and 39 revisions the files were the same
[13:59] but the readme was different
[14:04] so i can generate the archive url with cs.archive_url('data-analytics-with-sql-like-4') but that doesn't contact the charmstore to validate it
[14:05] stokachu: not sure on the library itself
[14:05] i guess i could just generate the api url and check that a 200 is returned
[14:06] stokachu: yea, have to bug jcsackett or some other folks that work on that atm
[14:06] rick_h_: ok cool, will do thanks
[16:47] marcoc|airplane: https://github.com/lxc/lxd/issues/1477
=== marcoc|airplane is now known as marcoceppi
[16:47] how do you force a destroy-controller with juju 2.0?
[16:48] i interrupted an lxd bootstrap and now juju is unable to clean up after itself
[16:48] marcoceppi: er, https://github.com/juju/juju/pull/4191
[16:48] i am bad at copying and pasting
[16:58] tych0: aren't we all
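On the unit-discovery question at [09:34]-[09:42]: there is no supported API for a process to ask which units share its machine, but each unit agent keeps a directory under /var/lib/juju/agents, which a sketch like the one below can inspect. This relies on an on-disk layout that is a deployment detail, not a stable interface.

    # List the juju units running on this machine by inspecting the
    # agent directories. Names look like unit-<service>-<number>;
    # machine agents (machine-<number>) are filtered out.
    ls /var/lib/juju/agents 2>/dev/null \
        | grep '^unit-' \
        | sed -e 's/^unit-//' -e 's/-\([0-9]*\)$/\/\1/'
    # e.g. "unit-myapp-0" becomes "myapp/0"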
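For the revision-validation question at [14:04]: as suggested at [14:05], one workaround is to build the v4 API URL (the pattern from the [13:58] links) and check the HTTP status. A sketch only; check_revision is just an illustrative helper name, and the bundle id is the one from the conversation.

    # Check whether a given bundle revision exists in the charm store.
    check_revision() {
        local id="$1"   # e.g. data-analytics-with-sql-like-4
        local url="https://api.jujucharms.com/charmstore/v4/${id}/archive/bundle.yaml"
        local status
        status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
        if [ "$status" = "200" ]; then
            echo "$id: exists"
        else
            echo "$id: not found (HTTP $status)"
        fi
    }

    check_revision data-analytics-with-sql-like-4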
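On forcing a teardown ([16:47]-[16:48]): the 2.0 CLI grew a harsher sibling of destroy-controller for exactly this half-bootstrapped case. Command names shifted during the alphas, so treat this as the shape that landed in the released 2.0; the controller name is made up.

    # Normal teardown; needs a reachable controller:
    juju destroy-controller my-lxd-controller --destroy-all-models

    # Last resort for an interrupted or unreachable bootstrap; falls
    # back to destroying resources directly through the provider:
    juju kill-controller my-lxd-controller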
[17:05] tvansteenburgh: hey, did you get a chance to reproduce that bundletester issue i've been hitting?
[17:07] cmagina: i was unable to repro. would you mind filing a bug here: https://github.com/juju-solutions/bundletester/issues
[17:07] tvansteenburgh: will do, thanks for trying
[17:07] please include the traceback
[17:08] i will have another look soon
=== zz_CyberJacob is now known as CyberJacob
=== urulama is now known as urulama__
[20:28] does juju / the local provider log the output of uvtool errors?
=== jrwren_ is now known as jrwren
[21:59] adam_g I don't believe so
[21:59] But i'm not 100% on that.
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[22:49] scalability question: what if we have a 'cloud' where we anticipate 200-300 services? how can we use juju for that and keep it manageable?
[23:03] bolthole: we'd suggest a controller with multiple models in HA mode
[23:03] bolthole: the controller/multiple-model work is in the 2.0 alpha work
[23:24] rick_h_ sounds like you are describing more scalability. however, I am actually trying to focus on manageability
[23:25] bolthole: i'm talking about moving those 200 services into 20 more tightly scoped models
[23:25] so you're managing things in smaller groups
[23:25] ah, good
[23:26] demo or screenshots?
[23:26] bolthole: https://jujucharms.com/docs/devel/wip-systems
[23:27] * rick_h_ looks for the youtube talk
[23:27] https://youtu.be/-1aVgnJIwLk
[23:28] bolthole: shows the old gui visualizing it
[23:28] bolthole: don't have one with the new gui atm
[23:28] ah
[23:28] thanks for the video
[23:31] can't use sound at the moment.... is "model" the thing to the right of juju/admin?
[23:31] so,
[23:31] juju/admin/openstack
[23:31] "openstack" is a model view?
[23:35] bolthole: yea, the model name is something you give when you create it
[23:36] you'll see different names as the video goes on
[23:36] "it"
[23:38] you said that was with the "old" gui... but I don't see that option with the trusty/juju-gui screen?
[23:38] bolthole: it was a demo
[23:39] bolthole: today you use the jes feature flag in that doc link
[23:39] bolthole: and use the gui 2.0 deployed into the admin model
[23:39] oh. um.. wait a minute though...
[23:41] the feature flag you are referring to is for "multiple environments"?
[23:41] bolthole: yes
[23:41] models == environments? or something else?
[23:42] bolthole: we're doing s/environments/models in 2.0
[23:42] ugh. naming overload :(
[23:42] but.. I don't want to set up a separate juju .. controller(?) for every model we have
[23:42] that seems rather wasteful
[23:42] no, one controller, many models
[23:43] huh
[23:43] once you're bootstrapped you can create more and more models
[23:43] try it out in that doc link
[23:43] so what's the new name for what is now called environments? :-D that is to say, "my azure environment" vs "my amazon environment"?
[23:44] my azure controller
[23:44] and in 2.0 you give them names and can have multiple azure controllers
[23:44] so: cloud -> controllers -> models
[23:44] ah, k. thanks. Rough ETA for release of this thing?
[23:45] 16.04
[23:45] april release
[23:45] thanks
[23:45] i'm showing some of it at the summit next wed
[23:45] the charmers summit
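A sketch of the one-controller, many-models workflow described above, using the command names that shipped in the final 2.0 (under the earlier JES feature flag the spellings differed); the controller and model names here are made up.

    # Bootstrap once per cloud -- this creates the controller:
    juju bootstrap azure my-azure-controller

    # Create tightly scoped models under the same controller;
    # no extra bootstrap per model:
    juju add-model openstack
    juju add-model analytics

    # Switch between models, or address one explicitly:
    juju switch openstack
    juju deploy mysql -m analytics

    # See what exists:
    juju models
    juju controllers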