[00:53] <kelvinliu_> morning, veebers, would u mind taking a look at this tiny PR plz? thx -> https://github.com/juju/juju/pull/8743/files
[01:02] <veebers> kelvinliu_: hey o/ What's the issue, is it not passing under CI but passing locally?
[01:03] <kelvinliu_> veebers, yeah, it's passing locally but failing on jenkins with different errors.
[01:03] <kelvinliu_> veebers, got an error like the kubeconfig file being empty when it's built from the jenkins gui,
[01:04] <kelvinliu_> veebers, when i ran it manually from the jenkins host, the error was the cluster (api server) not stabilized enough for kubectl to be available
[01:05] <kelvinliu_> so i have this pr to wait for workloads (this is actually required, i think) after wait_for_started.
[01:07] <veebers> kelvinliu_: so looking at the latest jenkins job run, the kubeconfig file it copies seems not to have been populated with the required data? And thus the wait for workloads should sort that?
[01:08] <kelvinliu_> veebers, the kubeconfig file is generated at some point during the cluster bootstrapping process.
[01:09] <veebers> kelvinliu_: how did the scp command work if the file wasn't generated at that time? Or are we talking about different things now?
[01:10] <veebers> in the jenkins run I see "ERROR No k8s cluster definitions found in config", that's because the kubeconfig file is empty?
[01:13] <kelvinliu_> veebers, scp seems never failed
[01:13] <veebers> kelvinliu_: this is the fun part, figuring out why a test that passes on your machine doesn't pass in CI :-|
[01:15] <kelvinliu_> well, another weird thing is the k8s 1.10 bundle never passes locally when i manually deploy the bundle, so i always have to use 1.9. but it passes in citest on my local.
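(The flakiness above — kubectl not yet available when the test runs — is the usual reason a CI test adds an explicit wait step after wait_for_started. A minimal sketch of such a poll-until-ready helper; the `wait_for` name and signature here are illustrative, not the actual juju test-harness API:)

```python
import time

def wait_for(check, timeout=300, interval=10):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Returns True on success, False if the deadline passed. A CI test
    would pass in e.g. a lambda that shells out to `kubectl get nodes`
    and reports whether the api server answered.
    """
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

(A caller would then do something like `wait_for(cluster_ready, timeout=600)` before asserting on workload status, rather than assuming the api server is up immediately.)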
[01:34] <veebers> wallyworld: you have a moment re: tomb v1 -> v2? I know why the tests are failing, a change in tomb v1 -> v2 expectations, not sure how to proceed to fix
[01:35] <wallyworld> veebers: give me 5
[01:35] <veebers> I can give you 4.9, that's my best offer
[01:35] <veebers> heh, sure thing
[01:42] <wallyworld> veebers: did you want a HO?
[01:48] <veebers> wallyworld: sure, probably best. 1:1?
[01:49] <wallyworld> veebers: ok
[01:52] <wallyworld> babbageclunk: i pushed commit 2 to https://github.com/juju/juju/pull/8739
[01:55] <babbageclunk> wallyworld: ok looking
[01:57] <anastasiamac> a review of a very simple status filtring fix PTAL - https://github.com/juju/juju/pull/8744
[02:11] <thumper> wallyworld: ping
[02:17]  * thumper looks for next victim
[02:17] <thumper> babbageclunk: what 'cha up to?
[02:18] <thumper> anastasiamac: lgtm
[02:19] <anastasiamac> \o/
[02:24] <wallyworld> thumper: hey
[02:25] <thumper> wallyworld: how are your reviewing muscles feeling?
[02:25] <wallyworld> hmmm, a loaded question there
[02:25] <wallyworld> sure, what is the pr
[02:26] <thumper> just proposing now
[02:26] <thumper> it is the proxy stuff
[02:26] <thumper> ended up being significantly bigger than I was initially expecting
[02:27] <wallyworld> always is
[02:27] <thumper> heh
[02:28] <thumper> +889 −253 and 27 files changed
[02:28] <thumper> https://github.com/juju/juju/pull/8745
[02:28] <thumper> wallyworld: we should chat about it
[02:28] <wallyworld> ok
[02:28] <wallyworld> now or after i have a look?
[02:28] <thumper> 1:1?
[02:29] <wallyworld> ok
[02:29] <thumper> now
[02:52] <cliu> Hi... I have deployed Openstack with juju with Telemetry #49 bundle. after deployment, how do I bring up the ceilo? I did not see that option in the horizon dashboard.
[02:52] <thumper> wallyworld: the proxy PR https://github.com/juju/proxy/pull/3
[02:52] <wallyworld> ok
[02:53] <thumper> cliu: sorry, but I have no idea. Many of the openstack folk are europe based and are likely sleeping
[02:54] <cliu> thumper: thanks. when would be a good time for me to join in and ask again?
[02:54]  * thumper wonders if there is a freenode channel for canonical openstack questions...
[02:54] <thumper> cliu: european morning
[02:54] <cliu> thumper: thanks.
[02:54] <rick_h_> thumper: cliu  #openstack-charms
[02:54] <thumper> rick_h_: thanks
[02:54] <rick_h_> is the freenode channel for them per https://docs.openstack.org/charm-guide/latest/find-us.html
[02:55] <cliu> rick_h_: thanks
[02:55] <wallyworld> thumper: that one lgtm
[02:55] <thumper> wallyworld: ta
[02:55] <rick_h_> cliu: most of them are out at the openstack summit this week but I'd hit up their irc channel or mailing list
[02:56] <cliu> rick_h_: thanks. what is their mailing list?
[02:58] <anastasiamac> cliu: according to the find-us link rick_h_ provided, the mailing list can be found on https://docs.openstack.org/charm-guide/latest/mailing-list.html
[02:59] <cliu> anastasiamac: thanks
[03:13] <thumper> rick_h_: where would I find the GUI code for exporting a bundle?
[03:13] <thumper> rick_h_: initial juju support will just mirror the GUI
[03:25] <wallyworld> thumper: doneski
[03:26] <wallyworld> babbageclunk: you happy with the PR?
[03:27] <babbageclunk> wallyworld: oh, sorry - got distracted by other work.
[03:27] <wallyworld> no wuckers
[03:37] <babbageclunk> wallyworld: approved
[03:37] <babbageclunk> thumper: sorry, missed your message
[03:38] <babbageclunk> thumper: I'm struggling to write a benchmarking thing for the leadership API
[03:43] <thumper> wallyworld: awesome, will look
[05:44] <anastasiamac> and another status fix review PTAL https://github.com/juju/juju/pull/8747 <- this time fixes inconsistent results when filtering by app name \o/
[05:59] <thumper> anastasiamac: minor tweak but otherwise good
[09:36] <kelvinliu_> veebers, updated the PR and the citest passed on goodra, would u mind taking a look tmr? thx
[09:36] <kelvinliu_> veebers, thx for the lxd upgrade
[10:12] <srihas> hi, how can we change the network configuration on machines using juju?
[10:13] <srihas> the current configuration came from the interface settings in MAAS
[10:13] <srihas> is there a way to do it without deleting machines?
[11:29] <manadart> externalreality: Logged a bug for the peergrouper warning messages: https://bugs.launchpad.net/juju/+bug/1772895
[11:29] <mup> Bug #1772895: Peergrouper should not log multiple address warnings when not in HA <juju:Triaged by manadart> <https://launchpad.net/bugs/1772895>
[11:30] <externalreality> manadart, ack
[11:31] <externalreality> manadart, should the message also be directing users to configure using `juju controller-config` rather than `juju config`?
[11:32] <manadart> externalreality: Yes it should. I'll add that info.
[11:54] <jasongarber> Hi! 👋🏻 I'm looking for some help using constraints on Rackspace, where instance types contain spaces.
[11:55] <jasongarber> Creating Juju controller "atat-dev" on rackspace/iad ERROR invalid constraint value: instance-type=4GB%20Standard%20Instance valid values are: [512MB Standard Instance 1GB Standard Instance 2GB Standard Instance 4GB Standard Instance 8GB Standard Instance 15GB Standard Instance 30GB Standard Instance 15 GB Compute v1 30 GB Compute v1 3.75 GB Compute v1 60 GB Compute v1 7.5 GB Compute v1 1 GB General Purpose v1 2 GB General Purpose 
[11:55] <jasongarber> (I added the %20, otherwise it says ERROR malformed constraint "Standard")
[12:16] <manadart> stickupkid: Mind taking another look at https://github.com/juju/juju/pull/8740 ? Just addressed your comment.
[12:26] <stickupkid> manadart: of course
[12:30] <stickupkid> manadart: done, LGTM :D
[12:31] <manadart> stickupkid: Ta.
[12:33] <manadart> stickupkid: Do I need to rebase/merge develop into my PR to resolve: "fatal: unable to access 'https://github.com/alecthomas/gometalinter/': Could not resolve host: github.com" ?
[12:34] <rick_h_> Howdy jasongarber, I'm looking to see how the instance stuff works in rackspace. Can you use the constraint of 4gb of memory?
[12:34] <rick_h_> oh bah, he's gone
[12:35] <stickupkid> manadart: so you have two options - 1) we wait for https://github.com/juju/juju/pull/8749 to land 2) comment out gometalinter checks in `./scripts/verify.bash`
[12:35] <stickupkid> manadart: considering it's a pain, i'd just do 2 for now
[12:36] <stickupkid> https://github.com/juju/juju/pull/8748 is the same tbh
[12:36] <stickupkid> so we could just land it right?
[12:36] <stickupkid> now *
[12:38] <manadart> stickupkid: OK. Going (2) for now.
[12:41] <stickupkid> manadart: ok by me
[12:42] <TheAbsentOne> nice try rick_h_ x) I'll ping you if I notice it if he's back
[12:42] <rick_h_> TheAbsentOne: :)
[12:42] <TheAbsentOne> Besides, you guys are referenced in my thesis as "everyone from irc", hope that is alright x)
[12:43] <rick_h_> wooo anonymous!
[12:45] <manadart> stickupkid: I don't actually have the metalinter stuff on my PR branch. I notice #8749 is merging, so I'll wait for it.
[12:54] <jam> guild: anyone interested in https://github.com/juju/juju/pull/8751 ? it should handle overlapping subnets in Openstack and better support for IPv6 subnets (not trying to create FAN overlays for them)
[13:01] <hml> jam: looking
[13:01] <jam> hml: I was hoping you might be able to test it against a real openstack having created multiple networks there.
[13:03] <hml> jam: i may be able to - have to check my permissions on the openstack
[13:03] <jam> hml: even if you just test that I haven't broken things terribly without changing networks, that would still be useful
[13:04] <hml> jam: ack - there is a case i’m concerned about, but i need to look at the pr in more depth first
[15:08] <hml> jam: verified pr on openstack where my project has two internal networks with subnets and one external network….
[15:09] <jam> hml: any chance that you can give it overlapping subnets?
[15:09] <hml> jam: i’ll try
[15:09] <jam> hml: rick_h_ asked us not to land anything on 2.3 until 2.3.8 goes out the door, but I definitely appreciate your testing.
[15:21] <hml> jam: works with duplicate subnets - ran both with the change and with 2.3.7 to ensure the openstack subnets were configured to reproduce
[15:31] <jacekn> is there anything I need to do in charms.reactive for it to create hooks in the hooks directory other than adding bits to metadata.yaml? I added cluster relation support but "charm build" did not realize that and created no hooks
[15:31] <rick_h_> jacekn: hmm, do you have the layers?
[15:31] <jacekn> rick_h_: yeah includes: ['layer:basic', 'interface:http', 'layer:snap']
[15:32] <jacekn> rick_h_: but I can't see how those layers would know which hooks to create?
[15:32] <rick_h_> jacekn: did you add the reactive.py to handle the events?
[15:32] <rick_h_> jacekn: e.g. https://github.com/mitechie/jujushell-charm/blob/master/reactive/jujushell.py
[15:32] <jacekn> rick_h_: yes and they are handled fine under update-status hook. But I want xxx-cluster-relation-{changed|joined|departed} hooks too
[15:35] <rick_h_> jacekn: and metadata.yaml has the provides/requires bits added? Other than that not sure what might be missing
[15:35] <jacekn> rick_h_: metadata.yaml only has "peers" section for the missing hooks
[15:35] <rick_h_> jacekn: https://github.com/mitechie/jujushell-charm is the code that generates https://jujucharms.com/u/juju-gui/jujushell/ which does that, as far as comparing
[15:36] <rick_h_> jacekn: ah no, you need https://api.jujucharms.com/charmstore/v5/~juju-gui/jujushell/archive/metadata.yaml provides/requires in there like so
[15:39] <jacekn> ok let me try adding it
[15:39] <jacekn> I followed https://docs.jujucharms.com/2.3/en/authors-relations BTW, there is nothing in the "Peers" section about needing to add them to provides/requires
[15:39] <manadart> Small one for review. Nice in that it is almost all deletion: https://github.com/juju/juju/pull/8752
[15:50] <jacekn> rick_h_: no that made no difference at all. I added the relation to all 3 parts in metadata.yaml (peers, requires, provides) and charm build even knows there are no hooks: https://pastebin.ubuntu.com/p/wCwYw2TBrw/
[15:51] <rick_h_> jacekn: what's alertmanager-cluster vs prometheus-alertmanager ?
[15:52] <jacekn> rick_h_: prometheus-alertmanager is the relation between prometheus and prometheus-alertmanager. alertmanager-cluster is internal alertmanager clustering
[15:58] <hml> jam: I hit approve and found a snag in the testing
[15:58] <hml> :-)
[16:01] <hml> jam: confirming now…
[16:25] <kwmonroe> jacekn: sounds like alertmanager-cluster is a new interface that you're adding to prom-alertmanager for peer relations.  is that right?
[16:25] <jacekn> kwmonroe: not really an interface, it's 2 lines so I added support to my alertmanager.py
[16:28] <kwmonroe> jacekn: charm build creates hooks for interfaces.  so for example, prom-am provides an alertmanager-service over the http interface (https://api.jujucharms.com/charmstore/v5/prometheus-alertmanager-2/archive/metadata.yaml), so charm build knows to go create alertmanager-service hooks using the http interface that it pulls from  https://github.com/juju/layer-index
[16:29] <kwmonroe> jacekn: if you are providing a peers section in the prom-am charm, you'll need to define what interface that uses, and have something in the layer-index registry (or local in your $INTERFACE_PATH) for charm build to know what to do.
[16:30] <kwmonroe> jacekn: afaik, you can't just use "@when <peer>.joined" in your reactive.py without having <peer> defined in your metadata, which needs an interface.
[16:32] <kwmonroe> jacekn: also, a point of semantics, i said "charm build creates hooks for interfaces", when i should have said "charm build creates hooks for relations and pulls in the provides/requires.py from the interface associated with that relation from the layer registry (or locally)".
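(For reference, the peers section kwmonroe is describing has to name the interface that charm build should resolve from the layer index or a local $INTERFACE_PATH. An illustrative metadata.yaml fragment using the relation name from this discussion — not jacekn's actual pastebin contents:)

```yaml
peers:
  alertmanager-cluster:
    interface: alertmanager-cluster
```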
[16:33] <jacekn> kwmonroe: so I have entry in metadata.yaml: https://pastebin.ubuntu.com/p/f9x933whJ3/
[16:34] <kwmonroe> jacekn: cool, now you need to implement the provides/requires side of the altermanager-cluster interface
[16:35] <jacekn> kwmonroe: which I did but I wanted to avoid the overhead of maintaining multiple files for a 3 line function...
[16:35] <kwmonroe> rick_h_: my kingdom for friggin 301 redirects in the docs!!!  or make google reindex faster :)
[16:35] <jacekn> kwmonroe: so I'm just reading data from the relation in my alertmanager.py (I don't actually care about hooks any more in charms.reactive, I just want it to run my code on any hook since hooks are idempotent and properly gated anyway)
[16:38] <jacekn> kwmonroe: if that's the way I think I prefer to just create hooks/* files manually in my layer
[16:41] <kwmonroe> jacekn: if you need data from your peers, i don't know how to do it other than over an interface.  that doesn't have to be complicated, btw.. the spark-quorum interface, for example, simply lets each peer get a list of all the other peer addresses (https://github.com/juju-solutions/interface-spark-quorum).
[16:42] <jacekn> kwmonroe: I just call context.Relations().peer.items()
[16:43] <jacekn> kwmonroe: a dedicated interface really looks like boilerplate for the sake of it
[16:44] <jacekn> kwmonroe: but it looks like lack of hooks is expected in my case, I'll just ship them with my layer for simplicity
[16:48] <kwmonroe> jacekn: i get the hesitation, but i don't know what unholy mess you're gonna have later.  i think a proper interface would be useful if you ever want to expand what your peers are doing, and will help you by having a proper endpoint to use in reactive (considering you can't know when your manual hook will fire).
[16:51] <kwmonroe> you already know the idempotency and gate pitfalls, so i'm sure you'll make your hooks work -- i'm just throwing out advice for how <ahem> LITERALLY EVERYONE ELSE is doing it :)
[16:51] <jacekn> kwmonroe: so if I ever have to expand beyond a few lines I'll consider splitting the code into a proper interface. And I do know when my hooks will fire - I'll have alertmanager-cluster-{joined,changed,broken,departed} hooks and they'll run when anything in my relation changes
[16:52] <jacekn> kwmonroe: I wrote an interface myself BTW, every real relation in the prometheus charms is an interface. But in this case it's significant overhead, increased troubleshooting cost and almost zero value
[16:53] <kwmonroe> jacekn: on the hook firing, i was saying you *won't* know if alertmanager-cluster-foo fires before install, or after config-change, etc.  just cautioning you that you may not know what else has happened to the charm when that peer relation fires.
[16:54] <kwmonroe> jacekn: just be assimilated already.  it's nice in here.
[16:54] <jacekn> kwmonroe: yes I'm aware of that. I thought reactive charms were not supposed to use @hook unnecessarily?
[16:56] <kwmonroe> right jacekn - they're not.. ideally charms would react to flags.  so let's say your manual peer relation fires before anything else.  prom-am will not be installed yet because whatever does the installation hasn't happened yet.
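(The "react to flags" model kwmonroe describes — handlers run only once their flag preconditions hold, regardless of which hook triggered the dispatch loop — can be illustrated with a toy dispatcher. This is a simplified stand-in for what charms.reactive does internally, not its real API; the `when`/`dispatch` names and the flag names are illustrative:)

```python
# Toy model of flag-gated handlers: a handler fires only once ALL of
# its required flags are set, no matter which hook ran the loop.
flags = set()
handlers = []

def when(*required):
    """Decorator registering a handler gated on the given flags."""
    def register(fn):
        handlers.append((set(required), fn))
        return fn
    return register

def dispatch():
    """Run every handler whose flag preconditions are currently met;
    return the names of the handlers that ran."""
    ran = []
    for required, fn in handlers:
        if required <= flags:  # subset check: all flags present?
            fn()
            ran.append(fn.__name__)
    return ran

@when("installed", "cluster.joined")
def configure_peers():
    pass  # would read peer data here
```

(So if the peer relation fires before install, `configure_peers` simply does not run yet — which is the ordering guarantee manual hooks give up.)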
[16:57] <kwmonroe> jacekn: i think you'll make it work because you know what hooks need to do -- we don't have to keep going back-and-forth.
[16:57] <jacekn> kwmonroe: that's no different from any other $random hook firing at any time. I don't use @hook so I don't rely on hook names for anything
[16:58] <jacekn> kwmonroe: all I want is for the hook to call alertmanager.py, like all other hooks do
[16:59] <kwmonroe> jacekn: aaah, so you *are* going to invoke the reactive loop in the manual hook.  i thought you were going to write some 3 line thing in your manual hooks that said "do_peer_things".
[17:00] <jacekn> kwmonroe: ah no no. All I care about is my reactive code being called automatically
[17:00] <jacekn> kwmonroe: sorry if I did not explain that clearly
[17:00] <kwmonroe> no no, it's fine -- i assumed the worst.
[17:02] <jacekn> kwmonroe: anyway I think I know sensible way to solve this. Thanks for your help!
[17:02] <jacekn> rick_h_: and thanks to you as well
[17:04] <kwmonroe> np jacekn!
[17:06] <acwork> Hello can someone please tell me where I make the configuration changes to preserve network settings on reboot after a box has been provisioned with maas and juju.
[17:07] <pmatulis> acwork, what kind of settings are you referring to?
[17:08] <acwork> mtu specifically with bridge interfaces
[17:10] <acwork> I can get them where I need them to be, but on reboot the configuration is overwritten. I am trying to understand where provisioning is occurring.
[17:16] <rick_h_> kwmonroe: :( on the docs stuff. I'm curious how the new docs.jujucharms.com does.
[17:16] <rick_h_> jacekn: ah glad you got it worked out
[17:24] <acwork> am I asking a stupid question considering I am new to juju and maas
[17:51] <pmatulis> acwork, you are looking directly on the MAAS node, or indirectly by some other means?
[17:52] <pmatulis> i know you can set an MTU per VLAN for instance
[17:52] <rick_h_> acwork: no, not stupid. I think that juju uses the machine as it's delivered from MAAS. In looking for setting up mtu and maas there's some bugs/etc I see on it.
[17:54] <rick_h_> acwork: the other thing you can look into is customizing the setup https://docs.maas.io/2.3/en/nodes-custom
[18:18] <acwork> Thank you for the response, I will try the customization.  I have built an openstack cluster across 11 machines and I wanted to make sure I had the proper mtu as well.  Part of this build is ceph as well.
[20:43] <veebers> Morning o/
[20:48] <redir> \o
[21:36] <babbageclunk> hey redir! how's it going?
[21:42] <KingJ> What's the best way to deploy an application to all nodes, present and future? I want to ensure that the snmpd charm (https://jujucharms.com/u/bertjwregeer/snmpd/) runs on every host.
[21:47] <KingJ> At a guess, as it's a subordinate application i'd just need to link it to an application that's deployed on all hosts?
[22:05] <thumper> KingJ: yeah...
[22:05] <thumper> we don't have the concept of machine subordinates...
[22:06] <thumper> although it was raised at some stage...
[22:39] <babbageclunk> I can't spin up lxd containers in an aws model - I keep getting this error: machine-0: 10:31:30 WARNING juju.provisioner failed to start machine 0/lxd/4 (Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/containers/juju-dbe87a-0-lxd-4/rootfs -n /var/lib/lxd/images/08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074.rootfs: .  ), retrying in 10s (10 more attempts)
[22:40] <babbageclunk> I thought it was disk space at first (I'm making about 10 lxds at once), but the host has plenty
[22:42] <babbageclunk> Oh, hang on - they're coming up now. Took ages though.
[22:54] <thumper> babbageclunk: thoughts on http://10.125.0.203:8080/view/Unit%20Tests/job/RunUnittests-race-amd64/369/testReport/github/com_juju_juju_worker_peergrouper/TestPackage/
[22:57] <thumper> rick_h_: if you come back, do you know where the bundle export code lives in the gui?
[22:59] <babbageclunk> thumper: looking
[23:35] <veebers> kelvinliu_: actually I just asked the question on the PR ^_^
[23:37] <kelvinliu_> ah, veebers I had a new push before I had the chance to read ur comment. sorry, looking now
[23:40] <veebers> kelvinliu_: hah sorry should have made sure I had the latest
[23:41] <veebers> kelvinliu_: much nicer :-) question still stands, but still LGTM (actually, LBTMNYAT, Looks Better To Me Now You Added That)
[23:46] <kelvinliu_> veebers, good find, now it always waits for echo to be terminated
[23:54] <blahdeblah> Your acronym-crafting powers are impressive, young veebers.
[23:54] <veebers> ^_^