=== Spads_ is now known as Spads
[03:25] stub: hey there
[03:25] thumper: yo
[03:25] stub: had an issue with the postgresql charm the other day
[03:25] I have an old one in use, cs:trusty/postgresql-10
[03:25] You using the rewrite or the cs: one?
[03:25] k
[03:25] and I tried to upgrade to -28
[03:25] only one unit
[03:26] but it said "can't as it would break replication relation"
[03:26] seems to have a peer relation with itself
[03:26] or something
[03:26] how do I upgrade it?
[03:26] I do not recall such an error message :-/
[03:26] --force only mentions units in error state
[03:27] that error message probably came from juju itself
[03:27] I think the charm upgrade expects endpoints to be the same ...
[03:27] FSVO the same
[03:27] oh... that is a juju error message? Right. So probably because some relations got renamed and I didn't realize it would be a problem?
[03:27] yeah...
[03:27] that makes sense
[03:28] what does the replication relation do if there is only one unit?
[03:28] anything?
[03:28] or the name of the interface changed
[03:28] nothing
[03:28] * thumper guesses the interface changed name
[03:29] hmm...
[03:29] I wonder what the nice way is to say "just upgrade, ..."
[03:29] I should try it locally first I guess :)
[03:29] So lp:~stub/charms/trusty/postgresql/rewrite is what I recommend and actually has a test to upgrade from r127 (no idea of the cs: revno). But you are earlier than that.
[03:30] thumper: Is this modern juju, or haven't you updated that bit?
[03:30] I deployed this last year around July
[03:30] and that charm hasn't been upgraded since :)
[03:30] thumper: upgrade-charm --force will push the new charm everywhere without running hooks.
[03:31] really? --force won't run hooks?
[03:31] that seems fcuked
[03:31] I'd test upgrading to 1.24 then upgrading to the rewrite branch (again, recommended but not landed for review/bureaucratic reasons)
[03:32] thumper: It is the only way to fix a charm that errored in its upgrade-charm hook.
[03:32] I have my env running 1.24.5
[03:32] right, but it hasn't errored
[03:32] it just said "won't upgrade"
[03:32] still on -10
[03:32] marcoceppi: I see you are around
[03:32] Yeah, but that is why we love upgrade-charm --force.
[03:32] marcoceppi: quick charm upgrade question
[03:32] thumper: yes, I am in your timezone
[03:33] I have cs:trusty/postgresql-10 and I want to upgrade to cs:trusty/postgresql-28
[03:33] but an interface got renamed (or something)
[03:33] and juju says "nah"
[03:33] what will --force do?
[03:34] thumper: so force will, at the next idle point in the agent, unpack the charm files without queuing an upgrade-charm hook
[03:34] thumper: not sure if that fixes your missing interface/relation problem or not, but it's how I develop
[03:35] so I should manually run the hooks?
[03:35] this seems flaky to me...
[03:36] marcoceppi: also, stub has upgrade tests from r127 of the charm, any idea how we can work out what charm version that is?
[03:37] thumper: Not so much flaky, but a necessary back door to get you out of this snafu. The real problem is that way back then I changed a name and nobody at the time realized it was a problem.
[03:38] thumper: like right now, I've got a broken hook.
So I patch the charm layer, charm build, then upgrade-charm --force, exit 1 in the debug hooks so I can keep that hook, then do a juju resolved --retry on the unit to recapture that hook
[03:38] thumper: no, you should avoid using --force
[03:38] thumper: the way to fix this is to break the relation, upgrade-charm, attach the new relation
[03:38] whoa, I'm really lagged
[03:39] thumper: just doing a 'juju set' on it after the upgrade-charm --force will kick off a hook, and one hook is all that is needed with the rewrite charm. Not sure about cs:28.
[03:39] marcoceppi: the relation is a peer relation
[03:39] thumper: oh bother
[03:40] marcoceppi: a single unit with a peer relation to boot ;)
[03:40] thumper: well then you're fun
[03:40] * thumper taps...
[03:40] thumper: well, --force upgrade will get you the new payload
[03:40] thumper: not sure if it'll queue the upgrade-charm hook or not, tbh
[03:41] stub: why is your new rewrite not promulgated (or however you spell that damn word)
[03:41] i just know it'll drop the new charm payload
[03:41] * thumper goes to look at the code...
[03:42] syn
[03:42] thumper: it's in the review queue. It got looked at about a month ago, when it was pointed out some tests were failing on the ci system. I got that down to one failing test, which is still better than the existing charm with its tests disabled.
[03:43] so... disable the failing test like a real developer :-)
[03:43] heh
[03:45] I've been poking at it to get it fixed, but the ci system has gotten rather constipated recently and there is still a good chance of failures due to cloud timeouts and such.
[03:47] Ohh... someone gave it an enema. I've got a fresh run coming up in an hour or three.
[03:48] lxc green already, just need to wait for the real clouds
[03:48] thumper: where is my lxd provider?
[03:48] stub: coming
[03:49] thumper: will an environment be able to span multiple lxd servers?
[03:49] not initially
[03:50] thumper: will that be a juju task, or an lxd task, do you know?
[03:51] My evil plan needs juju talking to multiple lxds, or a virtual lxd server that abstracts a collection of lxds.
[03:52] (maybe the latter is better, providing an HA endpoint to a cluster of lxds and automatic failover of containers etc.)
[03:53] the aim is for the lxd provider to be able to point to a remote lxd endpoint
[03:53] as well as the default of "localhost"
[03:56] I guess the manual provider is quite happy deploying a unit to an lxd container somewhere, so it wouldn't be a blocker to anyone. Just trickier to wire up.
[03:58] marcoceppi: would stopping people from making incompatible metadata.yaml changes be a job for charm proof or the charm store? It would need to be able to discover the previous revision of the charm.
[03:58] stub: so, juju does this already with relations
[03:58] it just makes you remove the relation
[03:59] but I don't think anyone considered the peer corner case
[03:59] oh, so this is just an edge case with the peer relation
[03:59] right
[03:59] juju should just allow you to upgrade regardless of the peer interface change
[03:59] a bug would be in order
[04:00] Maybe for juju 2.0 peer relations are considered more special. Currently I think you can have multiple peer relations with all sorts of names and interfaces defined, which isn't really useful afaict.
[04:13] thumper: are you filing a bug?
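(A condensed sketch of the recovery loop stub describes above — rebuild, force-push, hold the hook in debug, retry — assuming the service is named postgresql with a single unit postgresql/0; the exact names here are illustrative:)

    charm build                            # rebuild the charm from its layers
    juju upgrade-charm --force postgresql  # push the new payload without queuing an upgrade-charm hook
    juju debug-hooks postgresql/0          # open a debug session; exit 1 inside it to keep the failed hook queued
    juju resolved --retry postgresql/0     # from a second terminal: re-run the failed hook into the debug session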
[04:14] sorry, wasn't following, on a call
[04:14] I can file a bug
[04:47] stub: bug 1510787
[04:47] Bug #1510787: juju upgrade-charm errors when it shouldn't
=== skay is now known as skay__
[09:55] So I now need to have two branches for my charm, one containing my work and layers.yaml and another one containing the built charm?
[09:57] My first attempt was 'charm build -o .' to keep everything together in an easy-to-develop-with place, but that just builds to ./trusty/charmname
[10:00] If two branches, do we have an official location for the 'source' branch, which is the target branch for reviews, with the current trunk remaining for the generated output to be ingested into the charmstore?
[11:51] stub: what I've been working on is having them be 2 separate repos, one with the layers.yaml and such, one that is the deployable charm
[11:54] Icey: Yeah, that seems to be the way it needs to work. And I see I can specify a LAYER_PATH environment variable, so I can split things out into multiple layers when developing.
[11:54] The review process is going to be more interesting
[11:55] yeah, almost like the layers repo isn't going to be reviewed, just the final output?
[11:59] One of the reasons composer exists is to avoid all the boilerplate that ends up in reviews from things like charmhelpers. I think the layers will need to be reviewed, not the final output.
[11:59] hopefully :)
[11:59] ideally we get to the point where each layer is independently reviewed, and we can just review the stuff that the charm author is writing
[12:00] tarmac or something should be able to do the build, commit and push automatically for the CI system to ingest.
=== circ-user-C4wS5 is now known as gennadiy
[12:24] hi, i still have a question about exposing services in a bundle. i use expose: true in my bundle but after deploying, the services are not exposed
[13:03] gennadiy: And the services in question have open ports listed in `juju status`? The `expose: true` should work, if that's the case. :/
[13:04] Are you deploying the bundle with `juju quickstart` or `juju deployer`?
[13:05] i deployed the bundle from juju-gui
[13:06] Oh yes, that, too. And the services have open ports listed?
[13:06] but i think i have found the problem: i use bundles.yaml, but in the juju store i see bundle.yaml, which doesn't contain expose
[13:06] my bundle name is - tads2015-demo
[13:07] urulama_: ^ does the expose stuff not get translated perhaps?
[13:08] rick_h__, gennadiy: it might be a bug, yes. we'll investigate.
[13:08] the constraints field is absent too
[13:09] sorry, constraints is present in the resulting bundle
[13:10] do i have the possibility to create bundle.yaml directly now?
[13:10] as i understood it, my original bundles.yaml was converted to bundle.yaml. am i right?
[13:11] gennadiy: yes, you can. use the same format as you see in the results.
[13:11] gennadiy: yes, though be careful. The old system looks for a bundles.yaml so you need both to ingest properly
[13:11] can i add an expose field?
[13:12] * can i use an expose field in bundle.yaml?
[13:12] gennadiy: yes definitely
[13:12] gennadiy: you should.
[13:12] ok. i will try
[13:12] gennadiy: download the bundle and leave both files when you update it.
=== JoshStrobl is now known as JoshStrobl|AFK
[13:44] can i set the name of the bundle in bundle.yaml?
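(To make the format concrete, here is a minimal sketch of a new-style bundle.yaml with a per-service expose flag, along the lines urulama_ suggests trying. The service and charm names are placeholders, not gennadiy's actual tads2015-demo contents, and whether expose is honoured on ingestion is exactly what is being debugged here:)

    # Hypothetical new-format bundle.yaml: no top-level bundle-name namespace
    cat > bundle.yaml <<'EOF'
    services:
      myservice:
        charm: cs:trusty/mycharm
        num_units: 1
        expose: true
        constraints: mem=2G
    EOF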
[13:44] because when i try to use the bundle and a config w/o a name, the config is not applied
[13:45] i use the following script
[13:45] juju-deployer -e $JUJU_CUR_ENV -c tads2015-demo/bundle.yaml -c config-demo.yaml tads2015-demo/bundle.yaml
[13:49] gennadiy: unfortunately, we currently don't handle expose.
[13:49] (in the new bundle format)
[13:49] now i use bundle instead of bundles and expose works
[13:52] but now i have a question about juju-deployer: it has the ability to merge yaml files.
[13:53] so if i don't use the bundle name inside the bundle file, it doesn't work correctly
[13:53] gennadiy: in the new bundle format there is no top-level namespace with the bundle name
[13:53] i see, but how do i use juju-deployer now?
[13:54] it seems it doesn't work correctly w/o the bundle name
[13:54] gennadiy: how to use a config yaml file?
[13:54] am i right?
[13:54] gennadiy: don't know, checking sources, could be a juju-deployer bug
[13:58] gennadiy: what's the problem with the new bundle format? overriding values with multiple yaml files? if so, have you tried passing yaml overrides not including the top-level bundle name?
[14:13] gennadiy: what version of the deployer? You need a fairly recent one to support the newer format where the bundle name is not at the root of the file.
[14:57] is there a way to specify machine arguments when deploying?
[14:58] in essence, I want to use `juju deploy ceph` but specify a machine-add constraint to restrict the service to us-east-1a
[14:58] or am I just going to have to use a bundle to get that kind of effect?
[15:30] mgz_: ping
[15:37] is 1.25 out yet :'(
[15:45] vila: hey
[15:46] mgz_: hey! See PM?
[15:51] Icey: it's in -proposed
[15:51] Icey: add-apt-repository ppa:juju/proposed
[15:52] install juju-core and profit from having 1.25
[15:52] you get payload management, update-status hooks, and a whole slew of other nice things
[15:52] but it's /proposed, ymmv :)
=== JoshStrobl|AFK is now known as JoshStrobl
[16:18] vila: with 1.24.7 you may benefit from the bug 1435283 fix
[16:18] Bug #1435283: juju occasionally switches a units public-address if an additional interface is added post-deployment
[16:18] ack
[16:48] lazypower: I'm riding proposed now ;-)
[16:51] lazypower: Layers question for you. I renamed my composer.yaml to layers.yaml by mistake, and charm build complained, so I renamed it to layer.yaml. I'm getting this error trying to build now, though: http://pastebin.ubuntu.com/12991002/
[16:51] lazypower: I suspect something's cached, but not sure where. deps/ looks like it just has the interface I pulled in.
[16:51] ouch
[16:51] that stack trace is not really clear about what it duped on
[16:51] but i think i know what's happening here
[16:52] lazypower: I can dig deeper if needed.
[16:52] if you ls $JUJU_REPOSITORY/trusty/mything
[16:52] does mything have a compose.yaml?
[16:52] if so, charm build is notifying you that files exist out of band, and you should only continue building if it's OK to have this file present. (e.g. local modifications to the charm, not the layer)
[16:52] oh, it has a layers.yaml
[16:52] --force will allow you to override that and get past it, but it's erring on the side of caution for you
[16:53] i'm referring to the constructed charm
[16:53] the artifact
[16:53] lazypower: removing that fixed it. Should I open a bug?
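(A sketch of the fix aisrael_ just applied — both commands come straight from this exchange, assuming the built artifact lives at $JUJU_REPOSITORY/trusty/mycharm; the charm name is hypothetical:)

    # Clear the stale built charm so leftover files (the renamed layers.yaml)
    # don't trip charm build's out-of-band check
    rm -rf "$JUJU_REPOSITORY/trusty/mycharm"
    # Rebuild; DEBUG logging shows what each file matched on
    charm build -l DEBUG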
[16:53] * lazypower is doing a terrible job of explaining this
[16:53] nah, it's intended behavior
[16:53] if you run with -l DEBUG
[16:53] it tells you what it's matching on, i think
[16:54] lazypower: Right, the previous build left the layers.yaml in $JUJU_REPOSITORY/trusty/mycharm. Bad artifact.
[16:55] :)
[16:55] lazypower: so a build copies over the charm, rather than replacing or rsyncing, right?
[16:55] the one thing build doesn't do is take care of removal
[16:55] say you remove an interface from layer.yaml, then re-run build - you need to explicitly rm -rf the built charm and then build, otherwise that interface will linger
[16:56] aisrael_: correct, there are different strategies at play
[16:56] lazypower: ack. I can see an argument for doing that cleanup automatically, even if it's behind an extra flag
[16:56] like config.yaml, metadata.yaml get merged
[16:56] other files have a default policy of: the highest layer that has this file wins.
[16:56] i.e., if I'm writing a charm in layers/, I'd expect that to be authoritative
[16:57] so whatever your top layer is will override files below it, save for a handful of special files. and your local layer takes precedence over the interfaces.juju.solutions api, yes, but only if you have LAYER_PATH set
[16:57] same with INTERFACE_PATH
[16:57] like the charm equivalent of make clean
[16:57] yeah
[16:58] Caffeinated ramblings. Thanks for the help unblocking me!
[16:58] seems like charm build --clean should just nuke the compiled dir if it exists, then build.
[16:58] NP, thanks for the questions :)
[17:03] how hard would it be to change the AMIs used by Juju to something that's EBS-optimized?
[17:22] lazypower: I am now a layer convert (not that I wasn't a fan before).
[17:22] :)
[17:22] well, layer + reactive
[17:22] Layer + reactive is a powerful tool
[17:23] aisrael_: https://github.com/mbruzek/layer-k8s/pull/5
[17:23] Niiiiice
[17:23] working on a feature branch with matt for the last 2 days; working independently, there have been zero merge conflicts - putting complexity in its place :)
[17:23] and the change sets are getting smaller
[17:53] lazypower: lxc now builds for os x, so if we can run lxd in a container... we might be closer to having a working local provider.
[17:55] nice!
[17:56] I'll poke around at it tonight when I'm back home.
[18:13] lazypower: +1 to a --clean flag where it removes the previous build
[18:13] helpful when heavily devving
[18:13] I'll file a feature request for it
[18:15] https://github.com/juju/charm-tools/issues/33
=== wolverin_ is now known as wolverineav
[20:41] can you combine all your juju actions code into one file and symlink it just like the hooks?
[20:48] cholcombe: yes
[20:48] marcoceppi, nice :)
[20:48] so i do the usual @hooks.hook business
[20:49] cholcombe: welllll
[20:49] cholcombe: no one has tried that; they're not hooks, they're actions, so it's hard to say if that code will respect that
[20:50] cholcombe: for now it's probably better to just make a python module and then stub the import/execute of that module/methods in each file
[20:50] marcoceppi, if i remember right, the code i looked at just calls the hook with the matching filename that it was called with
[20:50] cholcombe: sure, but actions are not in the hooks directory
[20:50] oh i know
[20:50] so if there's any attempt to parse out hook-related file paths
[20:50] it'll fail
[20:51] hmm yeah
[20:51] * cholcombe goes back to splitting them up
=== Icey is now known as IceyEC
=== IceyEC is now known as Icey
=== menn0_ is now known as menn0
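(For the actions question that closes the log, a sketch of the shared-module layout marcoceppi suggests: common logic in one importable file under lib/, with a thin executable per action instead of symlinks. All file and function names here are hypothetical, and the print bodies stand in for real action logic:)

    mkdir -p actions lib
    # Shared logic lives in one module
    cat > lib/common_actions.py <<'EOF'
    def pause():
        print("pausing the service")
    EOF
    # Each action is a thin wrapper that imports the shared module
    cat > actions/pause <<'EOF'
    #!/usr/bin/env python
    import os
    import sys
    # actions live in actions/, so put the charm's lib/ on the import path explicitly
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'lib'))
    from common_actions import pause
    pause()
    EOF
    chmod +x actions/pause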