=== Spads_ is now known as Spads
thumper | stub: hey there | 03:25 |
stub | thumper: yo | 03:25 |
thumper | stub: had an issue with the postgresql charm the other day | 03:25 |
thumper | I have an old one in use cs:trusty/postgresql-10 | 03:25 |
stub | You using the rewrite or the cs: one? | 03:25 |
stub | k | 03:25 |
thumper | and I tried to upgrade to -28 | 03:25 |
thumper | only one unit | 03:25 |
thumper | but it said "can't as it would break replication relation" | 03:26 |
thumper | seems to have a peer relation with itself | 03:26 |
thumper | or something | 03:26 |
thumper | how do I upgrade it? | 03:26 |
stub | I do not recall such an error message :-/ | 03:26 |
thumper | --force only mentions units in error state | 03:26 |
thumper | that error message probably came from juju itself | 03:27 |
thumper | I think the charm upgrade expects endpoints to be the same ... | 03:27 |
thumper | FSVO the same | 03:27 |
stub | oh... that is a juju error message? Right. So probably because some relations got renamed and I didn't realize it would be a problem? | 03:27 |
thumper | yeah... | 03:27 |
thumper | that makes sense | 03:27 |
thumper | what does the replication relation do if there is only one unit? | 03:28 |
thumper | anything? | 03:28 |
stub | or the name of the interface changed | 03:28 |
stub | nothing | 03:28 |
* thumper guesses interface changed name | 03:28 |
thumper | hmm... | 03:29 |
thumper | I wonder what the nice way is to say "just upgrade, ..." | 03:29 |
thumper | I should try it locally first I guess :) | 03:29 |
stub | So lp:~stub/charms/trusty/postgresql/rewrite is what I recommend and actually has a test to upgrade from r127 (no idea of the cs: revno). But you are earlier than that. | 03:29 |
stub | thumper: Is this modern juju, or haven't you updated that bit? | 03:30 |
thumper | I deployed this last year around July | 03:30 |
thumper | and that charm hasn't been upgraded since :) | 03:30 |
stub | thumper: upgrade-charm --force will push the new charm everywhere without running hooks. | 03:30 |
thumper | really? --force won't run hooks? | 03:31 |
thumper | that seems fcuked | 03:31 |
stub | I'd test upgrading to 1.24 then upgrading to the rewrite branch (again, recommended but not landed for review/bureaucratic reasons) | 03:31 |
stub | thumper: It is the only way to fix a charm that errored in its upgrade-charm hook. | 03:32 |
thumper | I have my env running 1.24.5 | 03:32 |
thumper | right, but it hasn't errored | 03:32 |
thumper | it just said "won't upgrade" | 03:32 |
thumper | still on -10 | 03:32 |
thumper | marcoceppi: I see you are around | 03:32 |
stub | Yeah, but that is why we love upgrade-charm --force. | 03:32 |
thumper | marcoceppi: quick charm upgrade question | 03:32 |
marcoceppi | thumper: yes, I am in your timezone | 03:32 |
thumper | I have cs:trusty/postgresql-10 and I want to upgrade to cs:trusty/postgresql-28 | 03:33 |
thumper | but an interface got renamed (or something) | 03:33 |
thumper | and juju says "nah" | 03:33 |
thumper | what will --force do? | 03:33 |
marcoceppi | thumper: so force will, at the next idle point in the agent, unpack the charm files without queuing an upgrade-charm hook | 03:34 |
marcoceppi | thumper: not sure if that fixes your missing interface/relation problem or not, but it's how I develop | 03:34 |
thumper | so I should manually run the hooks? | 03:35 |
thumper | this seems flakey to me... | 03:35 |
thumper | marcoceppi: also, stub has upgrade tests from r127 of the charm, any idea how we can work out what charm version that is? | 03:36 |
stub | thumper: Not so much flaky, but a necessary back door to get you out of this snafu. The real problem is way back then I changed a name and nobody at that time realized it was a problem. | 03:37 |
marcoceppi | thumper: like right now, I've got a broken hook. So I patch the charm layer, charm build, then upgrade-charm --force, exit 1 in the debug hooks so I can keep that hook, then do a juju resolved --retry on the unit to recapture that hook | 03:38 |
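marcoceppi's iterate-on-a-broken-hook loop can be sketched as a shell sequence. This is only a sketch against the juju 1.x CLI; the charm and unit names are placeholders:

```shell
# Patch the charm source, then rebuild the charm from its layers.
charm build

# Push the rebuilt charm to the units without queuing an upgrade-charm hook.
juju upgrade-charm --force mycharm

# Attach to the hook context; exiting non-zero inside the session keeps the
# hook in an error state so it can be re-run after further edits.
juju debug-hooks mycharm/0

# Re-queue the failed hook against the freshly pushed charm code.
juju resolved --retry mycharm/0
```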
marcoceppi | thumper: no, you should avoid using --force | 03:38 |
marcoceppi | thumper: the way to fix this is to break the relation, upgrade-charm, attach the new relation | 03:38 |
marcoceppi | whoa, I'm really lagged | 03:38 |
stub | thumper: just doing a 'juju set' on it after the upgrade-charm --force will kick off a hook, and one hook is all that is needed with the rewrite charm. Not sure about cs:28. | 03:39 |
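The recovery path stub describes: push the new charm with --force (skipping the upgrade-charm hook juju refuses to run), then poke the service with any config change so at least one hook fires. A sketch, assuming juju 1.24 and the single-unit postgresql service from this discussion; the config key is a placeholder:

```shell
# Push cs:trusty/postgresql-28 onto the unit without running upgrade-charm.
juju upgrade-charm --force postgresql

# Any `juju set` queues a config-changed hook; per stub, one hook firing is
# all the rewrite charm needs to bring itself up to date.
juju set postgresql some_option=trigger
```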
thumper | marcoceppi: the relation is a peer relation | 03:39 |
marcoceppi | thumper: oh bother | 03:39 |
stub | marcoceppi: a single unit with a peer relation to boot ;) | 03:40 |
marcoceppi | thumper: well then you're fun | 03:40 |
* thumper taps... | 03:40 |
marcoceppi | thumper: well --force upgrade will get you the new payload | 03:40 |
marcoceppi | thumper: not sure if it'll queue the upgrade-charm hook or not tbh | 03:40 |
thumper | stub: why is your new rewrite not promulgated (or however you spell that damn word) | 03:41 |
marcoceppi | i just know it'll drop the new charm payload | 03:41 |
* thumper goes to look at the code... | 03:41 | |
marcoceppi | syn | 03:42 |
stub | thumper: it's in the review queue. It got looked at about a month ago, when it was pointed out some tests were failing on the ci system. I got that down to one failing test, which is still better than the existing charm with its tests disabled. | 03:42 |
thumper | so... disable the failing test like a real developer :-) | 03:43 |
thumper | heh | 03:43 |
stub | I've been poking at it to get it fixed, but the ci system has been rather constipated recently and there is still a good chance of failures due to cloud timeouts and such. | 03:45 |
stub | Ohh... someone gave it an enema. I got a fresh run coming up in an hour or three. | 03:47 |
stub | lxc green already, just need to wait for the real clouds | 03:48 |
stub | thumper: where is my lxd provider? | 03:48 |
thumper | stub: coming | 03:48 |
stub | thumper: will an environment be able to span multiple lxd servers? | 03:49 |
thumper | not initially | 03:49 |
stub | thumper: will that be a juju task, or an lxd task do you know? | 03:50 |
stub | My evil plan needs juju talking to multiple lxds, or a virtual lxd server that abstracts a collection of lxds. | 03:51 |
stub | (maybe the latter is better, providing a ha end point to a cluster of lxds and automatic failover of containers etc.) | 03:52 |
thumper | the aim is to be able to have the lxd provider be able to point to a remote lxd endpoint | 03:53 |
thumper | as well as the default of "localhost" | 03:53 |
stub | I guess the manual provider is quite happy deploying a unit to an lxd container somewhere, so it wouldn't be a blocker to anyone. Just trickier to wire up. | 03:56 |
stub | marcoceppi: would stopping people making incompatible metadata.yaml changes be a job for charm proof or the charm store? It would need to be able to discover the previous revision of the charm. | 03:58 |
marcoceppi | stub: so, juju does this already with relations | 03:58 |
marcoceppi | it just makes you remove relation | 03:58 |
marcoceppi | but I don't think anyone considered the peer corner case | 03:59 |
stub | oh, so this is just an edge case with the peer relation | 03:59 |
stub | right | 03:59 |
marcoceppi | juju should just allow you to upgrade regardless of the peer interface change | 03:59 |
marcoceppi | a bug would be in order | 03:59 |
stub | Maybe for juju 2.0 peer relations are considered more special. Currently I think you can have multiple peer relations with all sorts of names and interfaces defined, which isn't really useful afaict. | 04:00 |
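The underlying incompatibility: peer relations are declared in metadata.yaml, and juju rejects an upgrade whose relation or interface names changed between revisions. A hypothetical illustration — the actual names used by the postgresql charm may differ:

```yaml
# metadata.yaml in the old charm revision
peers:
  replication:
    interface: replication

# metadata.yaml in the new revision: renaming the interface (or the relation
# itself) makes `juju upgrade-charm` refuse with a "would break relation"
# error, even on a single unit where the peer relation does nothing.
peers:
  replication:
    interface: pgpeer
```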
stub | thumper: are you filing a bug? | 04:13 |
thumper | sorry, wasn't following, on a call | 04:14 |
thumper | I can file a bug | 04:14 |
thumper | stub: bug 1510787 | 04:47 |
mup | Bug #1510787: juju upgrade-charm errors when it shouldn't <juju-core:Triaged> <https://launchpad.net/bugs/1510787> | 04:47 |
=== skay is now known as skay__
stub | So I now need to have two branches for my charm, one containing my work and layers.yaml and another one containing the built charm? | 09:55 |
stub | My first attempt was 'charm build -o .' to keep everything together in an easy to develop with place, but that just builds to ./trusty/charmname | 09:57 |
stub | If two branches, do we have a official location for the 'source' branch, which is the target branch for reviews, with the current trunk remaining for the generated output to be ingested into the charmstore? | 10:00 |
Icey | stub what I've been working on is having them be 2 separate repos, one with the layers.yaml and such, one that is the deployable charm | 11:51 |
stub | Icey: Yeah, that seems to be the way it needs to work. And I see I can specify a LAYER_PATH environment variable, so I can split out things into multiple layers when developing. | 11:54 |
stub | The review process is going to be more interesting | 11:54 |
Icey | yeah, almost like the layers repo isn't going to be reviewed, just the final output? | 11:55 |
stub | One of the reasons composer exists is to avoid all the boilerplate that ends up in reviews from things like charmhelpers. I think the layers will need to be reviewed, not the final output. | 11:59 |
Icey | hopefully :) | 11:59 |
Icey | ideally we get to the point where each layer is independently reviewed, and we can just review the stuff that the charm author is writing | 11:59 |
stub | tarmac or something should be able to do the build, commit and push automatically for the CI system to ingest. | 12:00 |
=== circ-user-C4wS5 is now known as gennadiy
gennadiy | hi, i still have a question about exposing services in a bundle. i use expose: true in my bundle but after deploying the services are not exposed | 12:24 |
cory_fu | gennadiy: And the services in question have open ports listed in `juju status`? The `expose: true` should work, if that's the case. :/ | 13:03 |
cory_fu | Are you deploying the bundle with `juju quickstart` or `juju deployer`? | 13:04 |
gennadiy | i deployed bundle from juju-gui | 13:05 |
cory_fu | Oh yes, that, too. And the services have open ports listed? | 13:06 |
gennadiy | but i think i have found the problem: i use bundles.yaml but in the juju store i see bundle.yaml which doesn't contain expose | 13:06 |
gennadiy | my bundle name is - tads2015-demo | 13:06 |
rick_h__ | urulama_: ^ does the expose stuff not get translated perhaps? | 13:07 |
urulama_ | rick_h__, gennadiy: it might be a bug, yes. we'll investigate. | 13:08 |
gennadiy | constraints field is absent too | 13:08 |
gennadiy | sorry. constraints presents in result bundle | 13:09 |
gennadiy | is it possible to create bundle.yaml directly now? | 13:10 |
gennadiy | as i understood my original bundles.yaml was converted to bundle.yaml. am i right? | 13:10 |
urulama_ | gennadiy: yes, you can. use the same format as you see the results. | 13:11 |
rick_h__ | gennadiy: yes, though be careful. The old system looks for a bundles.yaml so you need both to ingest properly | 13:11 |
gennadiy | can i add expose field? | 13:11 |
gennadiy | * can i use expose field in bundle.yaml? | 13:12 |
rick_h__ | gennadiy: yes definitely | 13:12 |
urulama_ | gennadiy: you should. | 13:12 |
gennadiy | ok. i will try | 13:12 |
rick_h__ | gennadiy: download the bundle and leave both files when you update it. | 13:12 |
=== JoshStrobl is now known as JoshStrobl|AFK
gennadiy | can i setup name of bundle in bundle.yaml? | 13:44 |
gennadiy | because when i try to use a bundle and a config w/o a name, the config is not applied | 13:44 |
gennadiy | i use the following script | 13:45 |
gennadiy | juju-deployer -e $JUJU_CUR_ENV -c tads2015-demo/bundle.yaml -c config-demo.yaml tads2015-demo/bundle.yaml | 13:45 |
urulama_ | gennadiy: unfortunately, we currently don't handle expose. | 13:49 |
urulama_ | (in new bundle format) | 13:49 |
gennadiy | now i use bundle instead of bundles and expose works | 13:49 |
gennadiy | but now i have question about juju-deployer it has ability to merge yaml files. | 13:52 |
gennadiy | so if i don't use the bundle name inside the bundle file it doesn't work correctly | 13:53 |
frankban | gennadiy: in new bundle format there is no top level namespace with the bundle name | 13:53 |
gennadiy | i see, but how to use juju-deployer now? | 13:53 |
gennadiy | seems it doesn't work correctly w/o the bundle name | 13:54 |
frankban | gennadiy: how to use a config yaml file? | 13:54 |
gennadiy | am i right? | 13:54 |
frankban | gennadiy: don't know, checking sources, could be a juju-deployer bug | 13:54 |
frankban | gennadiy: what's the problem with new bundle format? overriding values with multiple yaml files? if so, have you tried passing yaml overrides not including the top level bundle name? | 13:58 |
rick_h__ | gennadiy: what version of the deployer. You need a fairly recent one to support the newer format where the bundle name is not at the root of the file. | 14:13 |
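For reference, the two bundle formats being discussed: the legacy bundles.yaml nests the deployment under a top-level bundle name, while the newer bundle.yaml drops that namespace. A hypothetical minimal example (the service and charm names are invented):

```yaml
# Legacy bundles.yaml: a top-level bundle name wraps everything, which is
# what older versions of juju-deployer expect.
tads2015-demo:
  services:
    myapp:
      charm: cs:trusty/myapp-1
      expose: true

# Newer bundle.yaml: no top-level namespace; services sit at the root, so a
# recent juju-deployer is needed to read it.
services:
  myapp:
    charm: cs:trusty/myapp-1
    expose: true
```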
Icey | is there a way to specify machine arguments when deploying? | 14:57 |
Icey | in essence, I want to use `juju deploy ceph` but specify a machine add constraint to restrict the service to us-east-1a | 14:58 |
Icey | or am I just going to have to use a bundle to get that kind of effect? | 14:58 |
vila | mgz_: ping | 15:30 |
Icey | is 1.25 out yet :'( | 15:37 |
mgz_ | vila: hey | 15:45 |
vila | mgz_: hey ! See PM ? | 15:46 |
lazypower | Icey it's in -proposed | 15:51 |
lazypower | Icey add-apt-repository ppa:juju/proposed | 15:51 |
lazypower | install juju-core and profit from having 1.25 | 15:52 |
lazypower | you get payload management, update-status hooks, and a whole slew of other nice things | 15:52 |
lazypower | but it's /proposed, ymmv :) | 15:52 |
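lazypower's steps as a shell sketch (Ubuntu, needs root; this pulls 1.25 from the proposed PPA, so as noted above, your mileage may vary):

```shell
# Enable the proposed PPA, refresh the package index, and install juju.
sudo add-apt-repository ppa:juju/proposed
sudo apt-get update
sudo apt-get install juju-core
```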
=== JoshStrobl|AFK is now known as JoshStrobl
mgz_ | vila: with 1.24.7 you may benefit from bug 1435283 fix | 16:18 |
mup | Bug #1435283: juju occasionally switches a units public-address if an additional interface is added post-deployment <addressability> <bug-squad> <network> <openstack-provider> | 16:18 |
mup | <juju-core:Fix Released by mfoord> <juju-core 1.24:Fix Released by mfoord> <juju-core 1.25:Fix Released by mfoord> <https://launchpad.net/bugs/1435283> | 16:18 |
vila | ack | 16:18 |
Icey | lazypower I'm riding proposed now ;-) | 16:48 |
aisrael_ | lazypower: Layers question for you. I renamed my composer.yaml to layers.yaml by mistake, and charm build complained so I renamed it to layer.yaml. I'm getting this error trying to build now, though: http://pastebin.ubuntu.com/12991002/ | 16:51 |
aisrael_ | lazypower: I suspect something's cached, but not sure where. deps/ looks like it just has the interface I pulled in. | 16:51 |
lazypower | ouch | 16:51 |
lazypower | that stack trace is not really clear about what it duped on | 16:51 |
lazypower | but i think i know what's happening here | 16:51 |
aisrael_ | lazypower: I can dig deeper if needed. | 16:52 |
lazypower | if you ls $JUJU_REPOSITORY/trusty/mything | 16:52 |
lazypower | does mything have a compose.yaml? | 16:52 |
lazypower | if so, charm build is notifying you that files exist out of band, and you should only continue building if its OK to have this file present. (Eg: local modifications to the charm not the layer) | 16:52 |
aisrael_ | oh, it has a layers.yaml | 16:52 |
lazypower | --force will allow you to override that and get past it, but its erring on the side of caution for you | 16:52 |
lazypower | i'm referring to the constructed charm | 16:53 |
lazypower | the artifact | 16:53 |
aisrael_ | lazypower: removing that fixed it. Should I open a bug? | 16:53 |
* lazypower is doing a terrible job of explaining this | 16:53 |
lazypower | nah its intended behavior | 16:53 |
lazypower | if you run with -l DEBUG | 16:53 |
lazypower | it tells you what its matching on i think | 16:53 |
aisrael_ | lazypower: Right, the previous build left the layers.yaml in $JUJU_REPOSITORY/trusty/mycharm. Bad artifact. | 16:54 |
lazypower | :) | 16:55 |
aisrael_ | lazypower: so a build copies over the charm, rather than replacing or rsyncing, right? | 16:55 |
lazypower | the one thing build doesn't do, is take care of removal | 16:55 |
lazypower | say you remove an interface from layer.yaml, then re-run build - you need to explicitly rm -rf the built charm and then build, otherwise that interface will linger | 16:55 |
lazypower | aisrael_ correct, there are different strategies at play | 16:56 |
aisrael_ | lazypower: ack. I can see an argument for doing that cleanup automatically, even if it's an extra flag | 16:56 |
lazypower | like config.yaml, metadata.yaml get merged | 16:56 |
lazypower | other files have a default policy of highest layer that has this file wins. | 16:56 |
aisrael_ | i.e, if I'm writing a charm in layers/, I'd expect that to be authoritative | 16:56 |
lazypower | so whatever your top layer is will override files below it, save for a handful of special files. and your local layer takes precedence over the interfaces.juju.solutions api, yes, but only if you have LAYER_PATH set | 16:57 |
lazypower | same with INTERFACE_PATH | 16:57 |
aisrael_ | like the charm equivalent to make clean | 16:57 |
lazypower | yeah | 16:57 |
aisrael_ | Caffeinated ramblings. Thanks for the help unblocking me! | 16:58 |
lazypower | seems like charm build --clean should just nuke the compiled dir if it exists, then build. | 16:58 |
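Until a --clean flag exists, the manual equivalent of the clean-rebuild workflow discussed above looks roughly like this. A sketch, assuming JUJU_REPOSITORY is set and the charm is a trusty charm named mycharm; the local layer/interface paths are placeholders:

```shell
# Removals (e.g. an interface dropped from layer.yaml) don't propagate to
# the built artifact, so delete the previous build first...
rm -rf "$JUJU_REPOSITORY/trusty/mycharm"

# ...then rebuild. LAYER_PATH and INTERFACE_PATH let local layers and
# interfaces take precedence over interfaces.juju.solutions.
LAYER_PATH=~/layers INTERFACE_PATH=~/interfaces charm build
```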
lazypower | NP thanks for the questions :) | 16:58 |
Icey | how hard would it be to change the AMIs used by Juju to be something that's EBS optimized? | 17:03 |
aisrael_ | lazypower: I am now a layer convert (not that I wasn't a fan before). | 17:22 |
lazypower | :) | 17:22 |
aisrael_ | well, layer + reactive | 17:22 |
lazypower | Layer + reactive is a powerful tool | 17:22 |
lazypower | aisrael_ : https://github.com/mbruzek/layer-k8s/pull/5 | 17:23 |
aisrael_ | Niiiiice | 17:23 |
lazypower | working on a feature branch with matt for the last 2 days, working independently there have been zero merge conflicts - putting complexity in its place :) | 17:23 |
lazypower | and the change sets are getting smaller | 17:23 |
aisrael_ | lazypower: lxc now builds for os x, so if we can run lxd in a container... we might be closer to having a working local provider. | 17:53 |
lazypower | nice! | 17:55 |
aisrael_ | I'll poke around at it tonight when I'm back home. | 17:56 |
marcoceppi | lazypower: +1 to a --clean flag where it removes the previous build | 18:13 |
marcoceppi | helpful when heavily devving | 18:13 |
lazypower | I'll feature rq it | 18:13 |
lazypower | https://github.com/juju/charm-tools/issues/33 | 18:15 |
=== wolverin_ is now known as wolverineav
cholcombe | can you combine all your juju actions code into one file and symlink it just like the hooks? | 20:41 |
marcoceppi | cholcombe: yes | 20:48 |
cholcombe | marcoceppi, nice :) | 20:48 |
cholcombe | so i do the usual @hooks.hook business | 20:48 |
marcoceppi | cholcombe: welllll | 20:49 |
marcoceppi | cholcombe: no one has tried that, they're not hooks, they're actions, so it's hard to say if that code will respect that | 20:49 |
marcoceppi | cholcombe: for now it's probably better to just make a python module and then stub the import/execute of that module/methods in each file | 20:50 |
cholcombe | marcoceppi, if i remember right, the code i looked at just calls the hook matching the filename it was invoked with | 20:50 |
marcoceppi | cholcombe: sure, but actions are not in the hooks directory | 20:50 |
cholcombe | oh i know | 20:50 |
marcoceppi | so if there's any attempt to parse out hook related file paths | 20:50 |
marcoceppi | it'll fail | 20:50 |
cholcombe | hmm yeah | 20:51 |
* cholcombe goes back to splitting them up | 20:51 |
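The single-file-plus-symlinks pattern cholcombe asks about can be done without charmhelpers' @hooks.hook by dispatching on the invoked filename directly, which sidesteps the hooks-directory path parsing marcoceppi warns about. A hypothetical sketch — the action names and handlers are invented, not from any real charm:

```python
import os
import sys

# Registry mapping action names to handler functions.
ACTIONS = {}

def action(name):
    """Decorator that registers a handler for a named action."""
    def register(func):
        ACTIONS[name] = func
        return func
    return register

@action("pause")
def pause():
    return "service paused"

@action("resume")
def resume():
    return "service resumed"

def dispatch(argv0):
    """Pick the handler from the basename of the symlink we were run as."""
    name = os.path.basename(argv0)
    if name not in ACTIONS:
        raise SystemExit("unknown action: %s" % name)
    return ACTIONS[name]()

# In the charm, each file under actions/ would be a symlink to this one
# script, and the entry point would simply call: dispatch(sys.argv[0])
```

With this, actions/pause and actions/resume are both symlinks to the same script, and the basename of sys.argv[0] selects the handler.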
=== Icey is now known as IceyEC | ||
=== IceyEC is now known as Icey | ||
=== menn0_ is now known as menn0 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!