/srv/irclogs.ubuntu.com/2018/04/16/#juju-dev.txt

veebersCan I get a review on https://github.com/juju/juju/pull/8599 please :-)00:02
wallyworldlooking00:11
wallyworldveebers: lgtm00:13
wallyworldkelvinliu: don't forget to $$merge$$ your PR once approved00:13
veeberswallyworld: cheers00:14
kelvinliu_yup, i am doing it now, thx Ian00:20
thumperanastasiamac: I just realised I have a clash around our 1:1, I have a physio appt01:35
thumperanastasiamac: can we do it an hour later?01:35
anastasiamacthumper: sounds good01:35
thumperthanks01:35
wallyworldkelvinliu: i am seeing a hook execution error in the mysql charm when adding a relation to gitlab. it's related to some python executed by the reactive framework and looks like an incorrect file path is being passed in. so something has changed which needs to be fixed01:42
wallyworldkelvinliu: this is the log from the mysql unit pod. you'll see the error in there https://pastebin.ubuntu.com/p/zmgH3Hxkvh/01:44
wallyworldwould be good to know if you're seeing the same issue01:45
anastasiamacan easy review PTAL, adding bionic to utils - https://github.com/juju/utils/pull/29901:46
kelvinliu_i just recreated the k8s core controllers and models, but an install hook failed in a worker node and did not retry. So I deleted the whole model and controller and am doing it again. the cluster is stabilizing now01:47
wallyworldkelvinliu: it's coming from this reactive helper def juju_version():01:47
wallyworldthe path for jujud is different01:48
wallyworldso i need to make an upstream change to account for that01:48
wallyworldfor caas agents, it's /var/lib/juju/tools01:49
wallyworldnot /var/lib/juju/tools/machine-*01:49
wallyworldanastasiamac: lgtm, ty01:50
anastasiamacwallyworld: \o/ phenomenal! thnx01:51
kelvinliu_u mean in charms.reactive ?01:51
wallyworldkelvinliu: yeah, in the core package01:52
* thumper afk for a bit01:54
kelvinliu_https://github.com/juju/charm-helpers/blob/master/charmhelpers/core/hookenv.py#L104201:55
kelvinliu_i can work on this01:55
wallyworldkelvinliu: that's upstream. the best way to test I think is to add a replacement function to the hookenv.py file in the reactive base layer for caas. that function in there *should* be used in preference to the upstream core one. we can get things working and then publish changes to the caas base layer. it will take a bit more time then to get stuff upstream02:06
kelvinliu_sure, agreed02:09
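The replacement helper discussed above would need to resolve the jujud path for both agent layouts: the machine agent's /var/lib/juju/tools/machine-* directory that upstream charm-helpers assumes, and the flat /var/lib/juju/tools directory used by caas agents. A minimal sketch of that path resolution (the function name and fallback order are assumptions, not the actual charm-helpers code):

```python
# Hypothetical sketch of the path lookup a caas-aware juju_version()
# replacement would need. Upstream charm-helpers assumes the machine
# agent layout (/var/lib/juju/tools/machine-*); caas operator agents
# put jujud directly in /var/lib/juju/tools.
import glob
import os


def find_jujud(tools_dir="/var/lib/juju/tools"):
    """Return the path to jujud, trying the machine-agent layout first
    and falling back to the flat caas layout."""
    candidates = glob.glob(os.path.join(tools_dir, "machine-*", "jujud"))
    if candidates:
        return candidates[0]
    flat = os.path.join(tools_dir, "jujud")
    if os.path.exists(flat):
        return flat
    raise RuntimeError("jujud not found under %s" % tools_dir)
```

The real juju_version() in charmhelpers then runs the resolved binary with `version`; only the directory probing is sketched here.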
vinowhat is the clean way to get the controller destroyed ? i started the bootstrap but then i gave sigkill :(03:35
vinodestroy gives trouble03:38
wallyworld_kill-controller03:41
wallyworld_i normally do "juju kill-controller -t 0 -y <name>"03:42
vinook. thx. i didn't know this and was trying some other force option with destroy03:43
vinowhich doesn't exist03:43
veeberswallyworld_: d'oh, could have sworn I ran the whole suite, I updated https://github.com/juju/juju/pull/8599 with a unit test fix (I realise now I squashed it so it'll be harder to see the test changes :-\)03:43
anastasiamacwallyworld_: i replied but u seemed to have bowed out of Canonical channels.... so dunno if u saw...03:51
wallyworldvino: vpn messed with irc, did you see my reply above?03:56
vinokill-controller ?03:56
vinoyes03:56
wallyworldgreat03:56
vinothx03:56
thumperbabbageclunk: got a few minutes to talk about the engine worker dependency issues?04:17
babbageclunkthumper: yup yup! Would welcome more ideas04:21
babbageclunkin 1:1?04:21
babbageclunkthumper: ?04:24
babbageclunkI guess he went away04:32
wallyworldbabbageclunk: 12 line PR?04:42
wallyworldhttps://github.com/juju/juju/pull/860104:42
wallyworldkelvinliu: the above fixes the mysql/gitlab issue ^^^^^04:42
babbageclunkwallyworld: with pleasure!04:42
wallyworldyay!04:42
wallyworlduntil we get upstream charmhelpers fixed04:42
* wallyworld afk for a meeting for a bit04:42
* thumper takes a deep breath05:28
* thumper gets back to tests for migrationmaster facade05:29
jamwallyworld: thumper: /wave. I don't think I have anything pressing to bring up, though I'm curious how the meeting on Friday went.05:35
thumperjam: which one?05:35
jambtw, are we supposed to be switching to #juju?05:35
jamthumper: mark05:35
thumperyes05:35
thumperjam: it went really well05:35
thumperjam: I want to sort the mailing lists out first, and was going to announce irc changes when that was done05:36
jamsure05:36
thumperbut got a bit caught up trying to finish something off05:36
* vino in induction program06:30
wallyworldvino: there's a go fmt error in the juju run pr that landed: cmd/juju/commands/run_test.go:95:30: missing ',' before newline in composite literal07:07
wallyworldkelvinliu: i just pushed a new version of the operator image to dockerhub which adds a symlink to the jujud that the python charm helper code expects. we will still need to fix the python though07:10
wallyworldbut things will work again with this change07:10
wallyworldwe'll need to add the fixed python to the caas base layer next and once that's verified, push an upstream change07:11
vinohi wallyworld07:13
vinoi am looking into it.07:13
wallyworldhey07:13
wallyworldthank you07:13
vinonot just that i found another issue as well.07:16
=== frankban|afk is now known as frankban
vinojuju run is broken because of the commit we made this morning; it won't work without explicitly specifying a timeout value07:50
kelvinliu_thx Ian.08:17
vinowallyworld:  PR#8603 fix for run command08:47
wallyworldlooking08:53
vinothank u08:53
wallyworldvino: lgtm08:54
=== deanman_ is now known as deanman
admcleod_anyone here have experience with a mixed provider model, e.g. MAAS + manually provisioned machines?20:11
rick_h_admcleod_: what are you looking for?20:13
admcleod_someone that has experience with that20:13
rick_h_admcleod_: in what way? I mean I know several folks have used it for certain things20:13
admcleod_i have a controller in maas, ive added an s390x lpar (it has to be a manually provisioned machine)20:14
rick_h_k20:14
admcleod_and im having issues with deploying bundles to it. im in the process of trying different scenarios to create a bug20:14
admcleod_but someone may know something very explicit already20:14
rick_h_this hit the mailing list right? /me thought he saw it earlier today20:15
admcleod_i didnt post it there20:15
rick_h_ah sorry, the main juju channel20:15
admcleod_yep :)20:15
rick_h_admcleod_: so if you add a machine for the non-s390x and then use the --map-machines where 0=0,1=1 (assuming 0 is the lpar per the other convo) that fails to work?20:17
rick_h_hmm, one thing I've not tested out is if the map-machines will allow you to only map some of the machines in the bundle to existing numbers in the model20:17
admcleod_well. im using --map-machines=existing, because if its ALL manual provider, that works fine20:18
admcleod_but what im seeing so far is that it wont use the existing machine20:18
admcleod_i have 12 different scenarios that im trying out now to add to a bug, e.g., bundle with/without machines, with/without constraints, with/without map machines20:18
rick_h_right, but if you're map machine existing but you only have one machine manually added I'm not sure how that's supposed to work right20:19
rick_h_constraints are ignored if a placement directive is in play20:19
rick_h_you've overridden any of that20:19
rick_h_and map machines is just a placement directive is my understanding20:19
admcleod_well, that may be what its supposed to do20:19
admcleod_but im testing it regardless20:19
rick_h_there's no sense saying "constrain to a machine with 4g of ram and put it on machine 2"20:19
rick_h_k20:20
admcleod_sure. that makes no sense.20:20
admcleod_neither does 'map-machines=existing' .. oh ill ask for another machine from maas then20:20
rick_h_right20:20
rick_h_so I would expect that before the bundle deploy you'd have the existing machines setup, you'd map number by number (or they have to match the same number in the bundle to the number in the model) and it should work?20:21
rick_h_but you're saying that it fails to work properly in that case?20:21
admcleod_well, imagine a scenario where i want 3 machines for 3 magpie units (the real scenario is openstack with different hypervisor architectures)20:21
admcleod_so i have to add the s390x machine before i deploy the bundle20:21
admcleod_but its a maas controller for the amd64 nodes20:21
admcleod_so i only manually add 1 machine20:22
admcleod_then what i would expect is if i deploy the bundle with --map-machines=existing, it would use the manually provisioned machine and request 2 more from maas20:22
rick_h_so I'd ask you to not use existing, but instead use a number match for that one machine and see if it works20:22
admcleod_ill try that now20:22
admcleod_but why should existing not work?20:23
admcleod_does that imply 'all must exist or i will ignore you'?20:23
rick_h_so I'm nervous about what "existing" does as it's assuming use the numbers of the machines in the bundles against the same numbers in the model20:23
thumpermorning20:23
rick_h_this is what I mean in that I've not tested a partial deploy like that where only some machines are available in the model vs not20:23
admcleod_ok, let me tell you what happens with the number...20:24
rick_h_thumper: here can tell you the what why after that :)20:24
thumperballoons: shall we use the QA sync for the 1:1 you can't make tomorrow?20:26
admcleod_rick_h_: didnt work, pastebin coming20:26
rick_h_admcleod_: kk20:27
* thumper awaits pastebin20:27
admcleod_wait wrong bundle, have to do that again20:27
rick_h_ah ok20:28
thumperadmcleod_: can you make sure we have the status showing machines, and the bundle?20:28
admcleod_thumper: yep20:28
admcleod_alright, so if i specify the machines in the bundle correctly, then both map-machines=existing and map-machines=#=# work ok. so thats ok20:32
rick_h_admcleod_: ok cool, that makes me feel a little better20:33
admcleod_so map-machines=existing explicitly requires machine definitions and placement directives?20:33
rick_h_admcleod_: that's what I'm not sure what exactly existing does. I've not used it much tbh. If I map I go all mappy20:33
admcleod_i was hoping i could specify a constraint, e.g. arch, and that would map to existing machines20:33
rick_h_admcleod_: yea, but since mapping is a placement, constraints get ignored20:33
admcleod_alright20:34
admcleod_so then i cant use constraints with manually added machines?20:34
rick_h_admcleod_: I don't think there's a place where that works out because you're back to saying "constraint this" but at the same time it's about matching machine numbers.20:34
admcleod_well. if i was only using MAAS for machines, i could ignore defining machines explicitly, and use arch constraints20:35
rick_h_admcleod_: right20:35
rick_h_"let the provider deal with it"20:35
admcleod_but i cant do that with a manually added machine20:36
veebersMorning all20:36
rick_h_but when you go with manual machines and mapping you've taken the provider out of the picture20:36
admcleod_is that a 'feature' or a bug?20:36
admcleod_well forget mapping then20:36
rick_h_hmm, if you don't do any mapping and all the machine were added to the model (e.g. you weren't asking juju to provider for some things and not for others) I'm not sure tbh20:36
admcleod_right so i guess "juju deploy" talks directly to the provider and doesnt check manually provisioned machines first20:37
rick_h_right20:37
admcleod_that is incredibly annoying20:37
thumperadmcleod_: existing effectively replaces 0=0,1=1,2=2...20:37
admcleod_thumper: right20:38
thumperexplicit mapping overrides existing20:38
thumperyou can use explicit mapping with and without existing20:38
admcleod_alright. so i can sort of work around my issue with mapping and machine specification and placement20:38
thumperadmcleod_: deploy of bundles does check the existing model first20:38
admcleod_thumper: ah well it doesnt seem like it does20:39
thumperno... it does20:39
thumperI'd like a concrete example that shows it isn't working20:39
admcleod_thumper: ok, without machine definitions, with a manually added machine, with constraints20:39
rick_h_thumper: so if he manually adds a machine of a unique arch into the model and then deploys the bundle and expects juju to match the machine based on arch constraints juju will not pick up the existing machine20:39
rick_h_thumper: because constraints = provider and the machine is not provider based (manually added)20:39
thumperjuju doesn't do any of that20:40
thumperuse existing is *only* for numbered placement20:40
admcleod_ok20:40
admcleod_so forget map-machines20:40
admcleod_how about just using constraints20:40
rick_h_thumper: that's what we're saying. If you juju add-model; juju add-machine ssh.....(s390x); juju deploy openstack-bundle-with-one-s390x constraint machines;20:40
admcleod_thats what im trying to prove20:40
rick_h_it doesn't work that way20:41
thumperno... it doesn't work that way20:41
thumperat all20:41
thumperjuju deploying a single app with constraints will never choose an existing machine20:42
thumperunless you use --to20:42
thumperwhich points to a machine20:42
rick_h_thumper: right, and then --to negates any constraint checks at that point.20:42
thumperthis is outside bundles even20:42
thumperagreed20:42
admcleod_ok so thats my answer then20:42
admcleod_i cant use constraints20:42
rick_h_thumper: that's what we're establishing with admcleod_, he was hoping that to make his bundle work his only change to the deploy was to add the machine that matched the constraints manually.20:42
admcleod_and just before i bothered: https://pastebin.canonical.com/p/BYj5p3hfzT/20:43
admcleod_right because we have some bundles that do rely on constraints to provision multiple architectures without using --to20:43
rick_h_thanks for clarifying admcleod_20:43
thumperyeah, that isn't going to work with the current implementation20:43
thumperyou can specify multiple architectures, but the provider must provide those20:43
admcleod_thumper: alrighty20:44
thumpercan you maas also have a s390x pod?20:44
thumpers/you/your/20:44
admcleod_not right now20:45
admcleod_ill use --map-machines and explicit machines20:45
admcleod_it wouldve just been nice to be able to only use constraints20:45
* thumper nods20:45
thumperI understand20:45
thumperbabbageclunk: ping20:46
admcleod_i mean if juju knows the arch of the manually provided machine... would it be that hard to implement? or would that cause more problems than it would solve20:46
thumperI think it would cause more problems than it would solve without a comprehensive review of behaviour20:47
admcleod_right. i guess being explicit is the safest way20:47
balloonsMorning thumper20:47
admcleod_alright thanks for not making me do that 24 times20:47
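The workaround admcleod_ settles on (explicit machines plus placement directives, then mapping) might look like the illustrative bundle fragment below. The charm name, machine numbers, and deploy commands are assumptions for the sake of the example, not taken from the actual bundle:

```yaml
# Illustrative only: pin one unit to a pre-added manual s390x machine
# and let MAAS provide the rest. Deployed with something like:
#   juju add-machine ssh:ubuntu@<lpar-address>    # becomes machine 0
#   juju deploy ./bundle.yaml --map-machines=existing
# (per thumper, "existing" is shorthand for 0=0,1=1,2=2,...; a partial
# explicit map such as --map-machines=0=0 covers just the manual machine)
machines:
  "0": {}   # mapped to the manually added s390x LPAR
  "1": {}   # provided by MAAS
  "2": {}   # provided by MAAS
applications:
  magpie:
    charm: magpie
    num_units: 3
    to: ["0", "1", "2"]
```

Note that because `to:` is a placement directive, any constraints on those machines are ignored, which is the behaviour rick_h_ and thumper describe above.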
thumperbabbageclunk: morning20:47
thumperugh20:47
thumperthat was meant for balloons20:47
veebersthumper: woah, getting a bit passive aggressive with your ping there ;-)20:48
balloonsSure we can chat now20:48
thumperballoons: I've jumped in our 1:1 HO20:49
babbageclunkthumper: morning!20:57
babbageclunk(also everyone else)20:59
thumperbabbageclunk: I have something I'd like you to look at before our 1:121:16
babbageclunkthumper: ok21:17
thumperbabbageclunk: https://github.com/juju/juju/pull/860421:17
babbageclunk(Oh which is at 10, not 9:30, right? I keep forgetting.)21:17
babbageclunklooking now21:17
thumperbabbageclunk: yes 1021:18
admcleod_hrm. so should i also expect networking issues with this "mixed provider" model? i normally dont have a problem deploying to lxd on these manual machines, but in this scenario im getting "cannot get subnet" - so, presumably the MAAS provider has issues with the subnet used on the manually deployed machine?21:45
thumperbabbageclunk: I'm in our 1:1 HO now if you can start early21:54
babbageclunkthumper: ok - not done with the review yet though (although probably also won't be by 10 either so <shrug>)21:55
thumperadmcleod_: there is always opportunities for networking issues between a normal provider and manual21:55
thumperbabbageclunk: I want to talk through that change a bit too21:55
admcleod_thumper: alright well ill log a bug for that one, not sure its much of a blocker21:56

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!