/srv/irclogs.ubuntu.com/2018/05/23/#juju.txt

kelvinliu_morning, veebers, would u mind taking a look at this tiny PR plz? thx -> https://github.com/juju/juju/pull/8743/files00:53
veeberskelvinliu_: hey o/ What's the issue, is it not passing under CI but passing locally?01:02
kelvinliu_veebers, yeah, it's passing locally but failing on jenkins with different errors.01:03
kelvinliu_veebers, got an error that the kubeconfig file was empty when it was run from the jenkins gui,01:03
kelvinliu_veebers, when i ran it manually from the jenkins host, the error was that the cluster (api server) had not stabilized enough for kubectl to be available01:04
kelvinliu_so i have this pr to wait for workloads (this is actually required, i think) after wait_for_started.01:05
veeberskelvinliu_: so looking at the latest jenkins job run, the kubeconfig file it copies seems not to have been populated with the required data? And thus the wait for workloads should sort that?01:07
kelvinliu_veebers, the kubeconfig file is generated at some point later during the cluster bootstrapping process.01:08
veeberskelvinliu_: how did the scp command work if the file wasn't generated at that time? Or are we talking about different things now?01:09
veebersin the jenkins run I see "ERROR No k8s cluster definitions found in config", that's because the kubeconfig file is empty?01:10
kelvinliu_veebers, scp never seems to fail01:13
veeberskelvinliu_: this is the fun part, figuring out why a test that passes on your machine doesn't pass in CI :-|01:13
kelvinliu_well, another weird thing is the k8s 1.10 bundle never passes on my local machine when i deploy it manually, so i always have to use 1.9. but it passes in citest locally.01:15
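The fix being discussed boils down to adding a second wait after wait_for_started. A minimal sketch, assuming a jujupy-style ModelClient as used by juju's acceptance tests; the function and bundle names are illustrative:

    # "started" only means the machine/unit agents are up; artifacts the
    # workloads produce (e.g. the kubeconfig file) may not exist yet.
    def deploy_and_wait(client, bundle):
        client.deploy(bundle)
        client.wait_for_started()    # agents up, workloads maybe not
        client.wait_for_workloads()  # workloads active; only now is it
                                     # safe to scp the kubeconfig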
veeberswallyworld: you have a moment re: tomb v1 -> v2? I know why the tests are failing (a change in expectations from tomb v1 to v2), not sure how to proceed with a fix01:34
wallyworldveebers: give me 501:35
veebersI can give you 4.9, that's my best offer01:35
veebersheh, sure thing01:35
wallyworldveebers: did you want a HO?01:42
veeberswallyworld: sure, probably best. 1:1?01:48
wallyworldveebers: ok01:49
wallyworldbabbageclunk: i pushed commit 2 to https://github.com/juju/juju/pull/873901:52
babbageclunkwallyworld: ok looking01:55
anastasiamaca review of a very simple status filtering fix PTAL - https://github.com/juju/juju/pull/874401:57
thumperwallyworld: ping02:11
* thumper looks for next victim02:17
thumperbabbageclunk: what 'cha up to?02:17
thumperanastasiamac: lgtm02:18
anastasiamac\o/02:19
wallyworldthumper: hey02:24
thumperwallyworld: how are your reviewing muscles feeling?02:25
wallyworldhmmm, a loaded question there02:25
wallyworldsure, what is the pr02:25
thumperjust proposing now02:26
thumperit is the proxy stuff02:26
thumperended up being significantly bigger than I was initially expecting02:26
wallyworldalways is02:27
thumperheh02:27
thumper+889 −253 and 27 files changed02:28
thumperhttps://github.com/juju/juju/pull/874502:28
thumperwallyworld: we should chat about it02:28
wallyworldok02:28
wallyworldnow or after i have a look?02:28
thumper1:1?02:28
wallyworldok02:29
thumpernow02:29
cliuHi... I have deployed Openstack with juju with Telemetry #49 bundle. after deployment, how do I bring up the ceilo? I did not see that option in the horizon dashboard.02:52
thumperwallyworld: the proxy PR https://github.com/juju/proxy/pull/302:52
wallyworldok02:52
thumpercliu: sorry, but I have no idea. Many of the openstack folk are europe based and are likely sleeping02:53
cliuthumper: thanks. when would be a good time for me to join in and ask again?02:54
* thumper wonders if there is a freenode channel for canonical openstack questions...02:54
thumpercliu: european morning02:54
cliuthumper: thanks.02:54
rick_h_thumper: cliu  #openstack-charms02:54
thumperrick_h_: thanks02:54
rick_h_is the freenode channel for them per https://docs.openstack.org/charm-guide/latest/find-us.html02:54
cliurick_h_: thanks02:55
wallyworldthumper: that one lgtm02:55
thumperwallyworld: ta02:55
rick_h_cliu: most of them are out at the openstack summit this week but I'd hit up their irc channel or mailing list02:55
cliurick_h_: thanks. what is their mailing list?02:56
anastasiamaccliu: according to the find-us link rick_h_ provided, the mailing list can be found on https://docs.openstack.org/charm-guide/latest/mailing-list.html02:58
cliuanastasiamac: thanks02:59
thumperrick_h_: where would I find the GUI code for exporting a bundle?03:13
thumperrick_h_: initial juju support will just mirror the GUI03:13
wallyworldthumper: doneski03:25
wallyworldbabbageclunk: you happy with the PR?03:26
babbageclunkwallyworld: oh, sorry - got distracted by other work.03:27
wallyworldno wuckers03:27
babbageclunkwallyworld: approved03:37
babbageclunkthumper: sorry, missed your message03:37
babbageclunkthumper: I'm struggling to write a benchmarking thing for the leadership API03:38
thumperwallyworld: awesome, will look03:43
anastasiamacand another status fix review PTAL https://github.com/juju/juju/pull/8747 <- this time fixes inconsistent results when filtering by app name \o/05:44
thumperanastasiamac: minor tweak but otherwise good05:59
=== frankban|afk is now known as frankban
kelvinliu_veebers, updated the PR and the citest passed on goodra, would u mind taking a look tmr? thx09:36
kelvinliu_veebers, thx for the lxd upgrade09:36
srihashi, how can we change the network configuration on machines using juju?10:12
srihasthe current configuration came from the interface settings in MAAS10:13
srihasis there a way to do it without deleting machines?10:13
manadartexternalreality: Logged a bug for the peergrouper warning messages: https://bugs.launchpad.net/juju/+bug/177289511:29
mupBug #1772895: Peergrouper should not log multiple address warnings when not in HA <juju:Triaged by manadart> <https://launchpad.net/bugs/1772895>11:29
externalrealitymanadart, ack11:30
externalrealitymanadart, should the message also be directing users to configure using `juju controller-config` rather than `juju config`?11:31
manadartexternalreality: Yes it should. I'll add that info.11:32
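For reference, controller settings live under a separate command from model config; a hedged example, assuming the warning relates to the juju-ha-space controller setting the peergrouper uses for address selection, would be directing users to run something like `juju controller-config juju-ha-space=<space-name>`.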
jasongarberHi! 👋🏻 I'm looking for some help using constraints on Rackspace, where instance types contain spaces.11:54
jasongarberCreating Juju controller "atat-dev" on rackspace/iad ERROR invalid constraint value: instance-type=4GB%20Standard%20Instance valid values are: [512MB Standard Instance 1GB Standard Instance 2GB Standard Instance 4GB Standard Instance 8GB Standard Instance 15GB Standard Instance 30GB Standard Instance 15 GB Compute v1 30 GB Compute v1 3.75 GB Compute v1 60 GB Compute v1 7.5 GB Compute v1 1 GB General Purpose v1 2 GB General Purpose 11:55
jasongarber(I added the %20, otherwise it says ERROR malformed constraint "Standard")11:55
manadartstickupkid: Mind taking another look at https://github.com/juju/juju/pull/8740 ? Just addressed your comment.12:16
stickupkidmanadart: of course12:26
stickupkidmanadart: done, LGTM :D12:30
manadartstickupkid: Ta.12:31
manadartstickupkid: Do I need to rebase/merge develop into my PR to resolve: "fatal: unable to access 'https://github.com/alecthomas/gometalinter/': Could not resolve host: github.com" ?12:33
rick_h_Howdy jasongarber, I'm looking to see how the instance stuff works in rackspace. Can you use the constraint of 4gb of memory?12:34
rick_h_oh bah, he's gone12:34
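Constraints are parsed as space-separated key=value pairs, which is why an instance-type value containing spaces cannot be passed through (and why the %20 form is rejected as an invalid value). rick_h_'s suggested workaround is to constrain by memory instead and let juju pick a matching flavour, e.g. something like `juju bootstrap rackspace/iad atat-dev --bootstrap-constraints mem=4G` (the controller name and region are taken from the log above).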
stickupkidmanadart: so you have two options - 1) we wait for https://github.com/juju/juju/pull/8749 to land 2) comment out gometalinter checks in `./scripts/verify.bash`12:35
stickupkidmanadart: considering it's a pain, i'd just do 2 for now12:35
stickupkidhttps://github.com/juju/juju/pull/8748 is the same tbh12:36
stickupkidso we could just land it now, right?12:36
manadartstickupkid: OK. Going (2) for now.12:38
stickupkidmanadart: ok by me12:41
TheAbsentOnenice try rick_h_ x) I'll ping you if I notice he's back12:42
rick_h_TheAbsentOne: :)12:42
TheAbsentOneBesides, you guys are referenced in my thesis as "everyone from irc", hope that is alright x)12:42
rick_h_wooo anonymous!12:43
manadartstickupkid: I don't actually have the metalinter stuff on my PR branch. I notice #8749 is merging, so I'll wait for it.12:45
jamguild: anyone interested in https://github.com/juju/juju/pull/8751 ? it should handle overlapping subnets in Openstack and adds better support for IPv6 subnets (not trying to create FAN overlays for them)12:54
hmljam: looking13:01
jamhml: I was hoping you might be able to test it against a real openstack having created multiple networks there.13:01
hmljam: i may be able to - have to check my permissions on the openstack13:03
jamhml: even if you just test that I haven't broken things terribly without changing networks, that would still be useful13:03
hmljam: ack - there is a case i’m concerned about, but i need to look at the pr in more depth first13:04
=== degville_ is now known as degville
hmljam: verified pr on openstack where my project has two internal networks with subnets and one external network….15:08
jamhml: any chance that you can give it overlapping subnets?15:09
hmljam: i’ll try15:09
jamhml: rick_h_ asked us not to land anything on 2.3 until 2.3.8 goes out the door, but I definitely appreciate your testing.15:09
hmljam: works with duplicate subnets - ran both with the change and with 2.3.7 to make sure the openstack subnets were configured to reproduce the issue15:21
jaceknis there anything I need to do in charms.reactive for it to create hooks in the hooks directory other than adding bits to metadata.yaml? I added cluster relation support but "charm build" did not realize that and created no hooks15:31
rick_h_jacekn: hmm, do you have the layers?15:31
jaceknrick_h_: yeah includes: ['layer:basic', 'interface:http', 'layer:snap']15:31
jaceknrick_h_: but I can't see how those layers would know which hooks to create?15:32
rick_h_jacekn: did you add the reactive.py to handle the events?15:32
rick_h_jacekn: e.g. https://github.com/mitechie/jujushell-charm/blob/master/reactive/jujushell.py15:32
jaceknrick_h_: yes and they are handled fine under update-status hook. But I want xxx-cluster-relation-{changed|joined|departed} hooks too15:32
rick_h_jacekn: and metadata.yaml has the provides/requires bits added? Other than that not sure what might be missing15:35
jaceknrick_h_: metadata.yaml only has "peers" section for the missing hooks15:35
rick_h_jacekn: https://jujucharms.com/u/juju-gui/jujushell/ is the code that generates https://jujucharms.com/u/juju-gui/jujushell/ which does that as far as comparing15:35
rick_h_jacekn: ah no, you need https://api.jujucharms.com/charmstore/v5/~juju-gui/jujushell/archive/metadata.yaml provides/requires in there like so15:36
jaceknok let me try adding it15:39
jaceknI followed https://docs.jujucharms.com/2.3/en/authors-relations BTW, there is nothing in the "Peers" section about needing to add them to provides/requires15:39
manadartSmall one for review. Nice in that it is almost all deletion: https://github.com/juju/juju/pull/875215:39
=== salmankhan1 is now known as salmankhan
jaceknrick_h_: no, that made no difference at all. I added the relation to all 3 parts in metadata.yaml (peers, requires, provides) and charm build even reports there are no hooks: https://pastebin.ubuntu.com/p/wCwYw2TBrw/15:50
rick_h_jacekn: what's alertmanager-cluster vs prometheus-alertmanager ?15:51
jaceknrick_h_: prometheus-alertmanager is the relation between prometheus and prometheus-alertmanager. alertmanager-cluster is internal alertmanager clustering15:52
hmljam: I hit approve and found a snag in the testing15:58
hml:-)15:58
hmljam: confirming now…16:01
kwmonroejacekn: sounds like alertmanager-cluster is a new interface that you're adding to prom-alertmanager for peer relations.  is that right?16:25
jaceknkwmonroe: not really an interface, it's 2 lines so I added support to my alertmanager.py16:25
kwmonroejacekn: charm build creates hooks for interfaces.  so for example, prom-am provides an alertmanager-service over the http interface (https://api.jujucharms.com/charmstore/v5/prometheus-alertmanager-2/archive/metadata.yaml), so charm build knows to go create alertmanager-service hooks using the http interface that it pulls from  https://github.com/juju/layer-index16:28
kwmonroejacekn: if you are providing a peers section in the prom-am charm, you'll need to define what interface that uses, and have something in the layer-index registry (or locally in your $INTERFACE_PATH) for charm build to know what to do.16:29
kwmonroejacekn: afaik, you can't just use "@when <peer>.joined" in your reactive.py without having <peer> defined in your metadata, which needs an interface.16:30
kwmonroejacekn: also, a point of semantics, i said "charm build creates hooks for interfaces", when i should have said "charm build creates hooks for relations and pulls in the provides/requires.py from the interface associated with that relation from the layer registry (or locally)".16:32
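In other words, the peers stanza has to name an interface for charm build to resolve; an illustrative metadata.yaml entry using the names from this discussion would be `peers: {alertmanager-cluster: {interface: alertmanager-cluster}}`, with a matching interface layer available from the layer index or $INTERFACE_PATH.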
jaceknkwmonroe: so I have this entry in metadata.yaml: https://pastebin.ubuntu.com/p/f9x933whJ3/16:33
kwmonroejacekn: cool, now you need to implement the provides/requires side of the alertmanager-cluster interface16:34
jaceknkwmonroe: which I did but I wanted to avoid the overhead of maintaining multiple files for a 3 line function...16:35
kwmonroerick_h_: my kingdom for friggin 301 redirects in the docs!!!  or make google reindex faster :)16:35
jaceknkwmonroe: so I'm just reading data from the relation in my alertmanager.py (I don't actually care about hooks any more in charms.reactive, I just want it to run my code on any hook since hooks are idempotent and properly gated anyway)16:35
jaceknkwmonroe: if that's the way it is, I think I prefer to just create the hooks/* files manually in my layer16:38
kwmonroejacekn: if you need data from your peers, i don't know how to do it other than over an interface.  that doesn't have to be complicated, btw.. the spark-quorum interface, for example, simply lets each peer get a list of all the other peer addresses (https://github.com/juju-solutions/interface-spark-quorum).16:41
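A minimal sketch of what the peers side of such an interface layer can look like, loosely following the spark-quorum example kwmonroe links; the class, flag, and helper names are illustrative:

    # peers.py in a hypothetical interface-alertmanager-cluster layer
    from charms.reactive import RelationBase, hook, scopes

    class AlertmanagerClusterPeers(RelationBase):
        scope = scopes.UNIT  # one conversation per peer unit

        @hook('{peers:alertmanager-cluster}-relation-{joined,changed}')
        def changed(self):
            # raise a flag the charm's reactive code can @when() on
            self.set_state('{relation_name}.joined')

        def peer_addresses(self):
            # illustrative helper: each peer's private address
            return [conv.get_remote('private-address')
                    for conv in self.conversations()]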
jaceknkwmonroe: I just call context.Relations().peer.items()16:42
jaceknkwmonroe: a dedicated interface really looks like boilerplate for the sake of it16:43
jaceknkwmonroe: but it looks like lack of hooks is expected in my case, I'll just ship them with my layer for simplicity16:44
kwmonroejacekn: i get the hesitation, but i don't know what unholy mess you're gonna have later.  i think a proper interface would be useful if you ever want to expand what your peers are doing, and will help you by having a proper endpoint to use in reactive (considering you can't know when your manual hook will fire).16:48
kwmonroeyou already know the idempotency and gate pitfalls, so i'm sure you'll make your hooks work -- i'm just throwing out advice for how <ahem> LITERALLY EVERYONE ELSE is doing it :)16:51
jaceknkwmonroe: so if I ever have to expand beyond a few lines I'll consider splitting the code into a proper interface. And I do know when my hook will fire - I'll have alertmanager-cluster-{joined,changed,broken,departed} hooks and they'll run when anything in my relation changes16:51
jaceknkwmonroe: I wrote the interface myself BTW, every real relation in the prometheus charms is an interface. But in this case it's significant overhead, increased troubleshooting cost and almost zero value16:52
kwmonroejacekn: on the hook firing, i was saying you *won't* know if alertmanager-cluster-foo fires before install, or after config-change, etc.  just cautioning you that you may not know what else has happened to the charm when that peer relation fires.16:53
kwmonroejacekn: just be assimilated already.  it's nice in here.16:54
jaceknkwmonroe: yes I'm aware of that. I thought reactive charms were not supposed to use @hook unnecessarily?16:54
kwmonroeright jacekn - they're not.. ideally charms would react to flags.  so let's say your manual peer relation fires before anything else.  prom-am will not be installed yet because whatever does the installation hasn't happened yet.16:56
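A hedged sketch of the flag-gated style kwmonroe is describing, with illustrative flag names and a hypothetical write_peer_config helper; the point is that the handler only runs once installation has completed, no matter which hook happened to invoke the reactive loop:

    from charms.reactive import when

    @when('alertmanager.installed', 'alertmanager-cluster.joined')
    def configure_cluster(cluster):
        # 'cluster' is the peer interface instance; ordering is safe
        # here because the installed flag is a precondition
        write_peer_config(cluster.peer_addresses())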
=== frankban is now known as frankban|afk
kwmonroejacekn: i think you'll make it work because you know what hooks need to do -- we don't have to keep going back-and-forth.16:57
jaceknkwmonroe: that's no different from any other $random hook firing at any time. I don't use @hook so I don't rely on hook names for anything16:57
jaceknkwmonroe: all I want is for the hook to call alertmanager.py, like all the other hooks do16:58
kwmonroejacekn: aaah, so you *are* going to invoke the reactive loop in the manual hook.  i thought you were going to write some 3 line thing in your manual hooks that said "do_peer_things".16:59
jaceknkwmonroe: ah no no. All I care about is my reactive code being called automatically17:00
jaceknkwmonroe: sorry if I did not explain that clearly17:00
kwmonroeno no, it's fine -- i assumed the worst.17:00
jaceknkwmonroe: anyway I think I know a sensible way to solve this. Thanks for your help!17:02
jaceknrick_h_: and thanks to you as well17:02
kwmonroenp jacekn!17:04
acworkHello can someone please tell me where I make the configuration changes to preserve network settings on reboot after a box has been provisioned with maas and juju.17:06
pmatulisacwork, what kind of settings are you referring to?17:07
acworkmtu specifically with bridge interfaces17:08
acworkI can get them where I need them to be, but on reboot the configuration is overwritten. I am trying to understand where the provisioning is occurring.17:10
rick_h_kwmonroe: :( on the docs stuff. I'm curious how the new docs.jujucharms.com does.17:16
rick_h_jacekn: ah glad you got it worked out17:16
acworkam I asking a stupid question considering I am new to juju and maas17:24
pmatulisacwork, you are looking directly on the MAAS node, or indirectly by some other means?17:51
pmatulisi know you can set an MTU per VLAN for instance17:52
rick_h_acwork: no, not stupid. I think that juju uses the machine as it's delivered from MAAS. Looking into setting up mtu with maas, I see there are some bugs/etc around it.17:52
rick_h_acwork: the other thing you can look into is customizing the setup https://docs.maas.io/2.3/en/nodes-custom17:54
acworkThank you for the response, I will try the customization.  I have built an openstack cluster across 11 machines and I wanted to make sure I had the proper mtu as well.  Part of this build is ceph as well.18:18
veebersMorning o/20:43
redir\o20:48
babbageclunkhey redir! how's it going?21:36
KingJWhat's the best way to deploy an application to all nodes, present and future? I want to ensure that the snmpd charm (https://jujucharms.com/u/bertjwregeer/snmpd/) runs on every host.21:42
KingJAt a guess, as it's a subordinate application i'd just need to link it to an application that's deployed on all hosts?21:47
thumperKingJ: yeah...22:05
thumperwe don't have the concept of machine subordinates...22:05
thumperalthough it was raised at some stage...22:06
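A hedged example of the subordinate approach KingJ guesses at above: deploy the subordinate once and relate it to a principal application present on every machine, e.g. `juju deploy cs:~bertjwregeer/snmpd` followed by `juju add-relation snmpd <principal-application>` for each principal that should carry it, assuming the charm exposes a container-scoped endpoint (commonly the juju-info interface).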
babbageclunkI can't spin up lxd containers in an aws model - I keep getting this error: machine-0: 10:31:30 WARNING juju.provisioner failed to start machine 0/lxd/4 (Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/containers/juju-dbe87a-0-lxd-4/rootfs -n /var/lib/lxd/images/08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074.rootfs: .  ), retrying in 10s (10 more attempts)22:39
babbageclunkI thought it was disk space at first (I'm making about 10 lxds at once), but the host has plenty22:40
babbageclunkOh, hang on - they're coming up now. Took ages though.22:42
thumperbabbageclunk: thoughts on http://10.125.0.203:8080/view/Unit%20Tests/job/RunUnittests-race-amd64/369/testReport/github/com_juju_juju_worker_peergrouper/TestPackage/22:54
thumperrick_h_: if you come back, do you know where the bundle export code lives in the gui?22:57
babbageclunkthumper: looking22:59
veeberskelvinliu_: actually I just asked the question on the PR ^_^23:35
kelvinliu_ah, veebers I had a new push before I had the chance to read ur comment. sorry, looking now23:37
veeberskelvinliu_: hah sorry should have made sure I had the latest23:40
veeberskelvinliu_: much nicer :-) question still stands, but still LGTM (actually, LBTMNYAT, Looks Better To Me Now You Added That)23:41
kelvinliu_veebers, good find, now it always waits for echo to be terminated23:46
blahdeblahYour acronym-crafting powers are impressive, young veebers.23:54
veebers^_^23:54
