[00:32] <anastasiamac> wallyworld_: looking at cosmic deploy/bootstrap, it is kind of hard without cosmic images... i think i'll park until they are available... all i can do at this stage is just blindly add 'cosmic' as a string to our OS collection...
[00:33] <anastasiamac> wallyworld_: "blindly" because i cannot really test that it works ;)
[00:56] <anastasiamac> a review anyone? https://github.com/juju/os/pull/3
[01:00] <los_> (as an outsider: I would suppose its mosdef time since 18.04 is really really out...right?)
[01:09] <wallyworld> anastasiamac: lgtm
[01:36] <veebers> wallyworld, kelvin: Sorry for dumb questions; How do I see the k8s container created when deploying the caas charm. Juju status shows 'Creating mysql container' for ages
[01:36] <kelvin> veebers, kubectl get po -n model
[01:36] <wallyworld> veebers: from where did you get the mysql charm? did you have it locally?
[01:37] <wallyworld> if you have got the latest copy from my repo you'll need to setup storage
[01:37] <veebers> wallyworld: Aye, I have it locally, I built from a cloned branch (with changes, now changing the built one and deploying)
[01:37] <wallyworld> or hack the metadata.yaml to remove the storage bit
[01:37] <wallyworld> just check the built copy has no storage in metadata.yaml
[01:37] <wallyworld> that will do for now
[01:37] <veebers> I pulled it yesterday or so, no mention of storage in metadata
[01:37] <veebers> err, storage
[01:38] <wallyworld> in that case there's something else amiss
[01:38] <veebers> get po only shows juju-operator-mysql
[01:38] <wallyworld> so you'll need to check the operator logs and juju debug-log
[01:38] <veebers> wallyworld: Err, probably something I've introduced, I'll add more logging and up the logging factor
[01:39] <wallyworld> if there's just the operator it means that juju has not been told how to make the pods
[01:39] <wallyworld> ie the charm has not sent the pod spec yaml to juju via the pod-spec-set hook command
[01:39] <veebers> wallyworld: ack, I suspect I've added something that breaks that. Not sure why no logging coming in yet
[01:39] <wallyworld> you won't see operator logs in juju
[01:40] <veebers> sorry, controller logs
[01:40] <wallyworld> nothing to log until the charm runs
[01:40] <wallyworld> and that happens in the operator not the controller
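The debugging steps wallyworld describes above can be sketched as a shell sequence. This is a hedged sketch: the model name `k8s-model` and the application `mysql` are placeholder assumptions, and it only restates the commands discussed in this exchange.

```shell
# List the pods in the model's namespace; if only "juju-operator-mysql"
# appears, the charm has not yet sent a pod spec to juju via pod-spec-set.
kubectl get pods -n k8s-model

# Tail the operator pod's logs -- the charm runs in the operator,
# not the controller, so its output shows up here.
kubectl logs -f juju-operator-mysql -n k8s-model

# Cross-check with juju's own log stream for the model.
juju debug-log -m k8s-model
```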
[01:42] <veebers> wallyworld: yeah, I suspect juju misbehaving: https://pastebin.canonical.com/p/sTwBPpRWMF/
[01:43] <wallyworld> veebers: there's nothing amiss with that log. is that all there is?
[01:44] <wallyworld> there should be a log of the pod spec yaml
[01:44] <veebers> wallyworld: It's that repeated over and over, I see this though (at the end there) juju-log Invoking reactive handler: ../../application-mysql/charm/hooks/relations/mysql/provides.py:25:_handle_broken:server
[01:45] <wallyworld> that's nothing
[01:45] <veebers> ah o
[01:46] <veebers> ugh, why can't I remember how to set logging to DEBUG for the controller, it's a model config or so right?
[01:47] <wallyworld> yeah
[01:47] <wallyworld> juju model-config logging-config="<root>=DEBUG;"
[01:47] <veebers> awesome, thanks
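The logging-config setting wallyworld gives can also be scoped per logger rather than globally. A small sketch (the `juju.worker` logger name is used here for illustration; any logger name from the juju source tree works the same way):

```shell
# Raise logging for everything, as suggested above:
juju model-config logging-config="<root>=DEBUG"

# Or keep the root at INFO and turn up just one subsystem
# (logger name shown is illustrative):
juju model-config logging-config="<root>=INFO;juju.worker=DEBUG"

# Check the current setting:
juju model-config logging-config
```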
[01:48]  * veebers adds another model and starts again :-0)
[02:13] <veebers> wallyworld: (I hate to pester, I know you have heaps on): Trying a deploy, describe pods has this error: Error: cannot find volume "juju-operator-mysql-config-volume" to mount into container "juju-operator"
[02:13] <veebers> state is Terminated
[02:14] <wallyworld> veebers: otp to kelvin, give me a sec
[02:14] <veebers> ack
[02:38] <wallyworld> veebers: it goes to terminated briefly and then back to creating
[02:39] <veebers> wallyworld: I pulled microk8s down and re-installed and added a model with a different name (hence diff namespace) and that's working. I caught an issue that should have been caught via unit tests, so continuing
[02:39] <wallyworld> veebers: great
[02:47] <wallyworld> thumper: heading for lunch, but would love a review of https://github.com/juju/juju/pull/8919 if you get a chance
[02:47] <wallyworld> bbiab
[03:08] <thumper> wallyworld: still finishing up with eric's one
[03:45] <veebers> kelvin: how long does it take to deploy that k8s cluster bundle on lxd locally (roughly)
[03:45] <kelvin> veebers, 20-40mins
[03:46] <veebers> kelvin: ok thanks, wondering if that would be more stable than microk8s at the mo, I keep mucking it up :-\
[03:46] <kelvin> veebers, do it on ur host machine directly, no vm
[03:47] <veebers> kelvin: ack, yeah I would deploy to local host. Then I would scp the kubectl bin, and config over right? (as per the ci test)
[03:47] <kelvin> veebers, i didn't test microk8s for complex charms yet.
[03:47] <kelvin> veebers, yes
[03:48] <veebers> sweet, cheers kelvin
[03:49] <veebers> I think the microk8s issue is an apparmor thing (it being beta and all) and I just want to tear everything down to start from scratch etc.
[04:11] <kelvin> veebers, np
[04:11] <xavpaice> anyone about and able to take a look at https://bugs.launchpad.net/juju/+bug/1781322? I'm about to throw a workaround in, and it only affects us when we remove a subordinate, but it's just broken a model for me
[04:11] <mup> Bug #1781322: Upgrade to 2.4.0 caused tools symlinks to be in a subordinate charm dir <canonical-bootstack> <juju:New> <https://launchpad.net/bugs/1781322>
[05:10] <anastasiamac> a dependency update for juju side, PTAL   https://github.com/juju/juju/pull/8921
[05:21] <wallyworld> anastasiamac: lgtm
[05:23] <anastasiamac> tyvm \o/
[05:39] <veebers> wallyworld: https://github.com/juju/juju/pull/8922 please don't giggle at the branch mis-naming :-\
[05:39] <anastasiamac> m not sure wallyworld can giggle
[05:39] <anastasiamac> :)
[05:39] <veebers> hah :-)
[05:40] <wallyworld> lol, i mistype the same thing all the time
[05:40] <veebers> hey, I said no giggling
[05:40] <veebers> ;-)
[06:05] <wallyworld> frankban: hey, you around?
[07:00] <frankban> wallyworld: hey
[07:02] <wallyworld> frankban: question - i want to do a one line patch to the GUI to add kubernetes series support. but trying to follow the hacking howto and make gui, i get an error. error: Error: file to import not found or unreadable: normalize.css/normalize
[07:02] <wallyworld> i think all i need to do is add a line to urls.js?
[07:02] <frankban> wallyworld: I think so yes, but weird error...
[07:02] <wallyworld> maybe you could whip up a quick PR for it so i don't need to figure out how to get things compiling?
[07:03] <frankban> wallyworld: sure
[07:03] <frankban> wallyworld: so the name of the series is "kubernetes", correct?
[07:03] <wallyworld> frankban: the unit renders but the js console has an error
[07:03] <wallyworld> yeah
[07:04] <frankban> wallyworld: ok is there a k8s charm in prod cs?
[07:04] <wallyworld> on staging charm store i think
[07:04] <frankban> or staging
[07:04] <frankban> ok
[07:04] <wallyworld> veebers has one uploaded. i just deploy locally
[07:06] <wallyworld> frankban: since i can't compile it, is there a tarball or something you can send me and i can test?
[07:06] <wallyworld> i think there's a way to deploy a local gui from a tarball?
[07:07] <frankban> wallyworld: yes, i'll send it to you
[07:07] <wallyworld> yay you rock
[07:07] <wallyworld> tyvm
[07:07] <wallyworld> and a line to tell me what to type
[07:08] <frankban> wallyworld: then you can do "juju upgrade-gui /path/to/gui.bz2" on your controller
[07:08] <frankban> heh
[07:08] <wallyworld> awesome, easy
[07:33] <frankban> wallyworld: the branch is https://github.com/juju/jaaslibjs/pull/1
[07:34] <frankban> wallyworld: I don't have perms to merge it, but will try to figure out something, or I'll have to wait for americans
[07:37] <kelvin> wallyworld, can i get a minute?
[07:56] <jam> kelvin: I would guess he's gone off for now. anything I can help you with?
[07:59] <kelvin> jam, got a question about allwatcher. i am not very sure how does it work. i saw juju gui uses 2 websockets. one of them for allwatchers.
[08:00] <jam> kelvin: so the intent of the allwatcher is that whenever something changes in a model, it sends a message on the allwatcher so that things like the gui can react to it
[08:01] <kelvin> jam, it seems the payload returned from a unit change for example is not always consistent, in my case, the Tools field is not always present, but i can see .tools present in all docs in mongo
[08:03] <kelvin> jam, can i get a few minutes on HO? :-)
[08:04] <manadart> stickupkid: Small one for review when you've time: https://github.com/juju/juju/pull/8926
[08:09] <wallyworld> frankban: great tyvm. do you need to build a tarball or if i try will i get an error again?
[08:10] <frankban> wallyworld: I need to release that on npm, I'll have to wait for jeff as I don't have perms, so yes sorry we'll have to wait
[08:10] <wallyworld> kelvin: no worries
[08:13] <wallyworld> frankban: oop, meant you. no worries, thanks for doing pr
[08:13] <wallyworld> kelvin: you all sorted out now?
[08:14] <kelvin> wallyworld, i just re-bootstrapped the ctl again to test it.
[08:14] <kelvin> wallyworld, do u mind having a quick chat?
[08:14] <wallyworld> sure
[08:15] <wallyworld> kelvin: am in standup HO
[08:16] <kelvin> thx
[08:16] <jam> wallyworld: want me there as well? or you got it?
[08:17] <wallyworld> jam:  just started talking, ty for offering, will let you know
[08:17] <jam> k
[08:42] <wallyworld> jam: all sorted. i helped identify the root cause of the watcher issue. the fix is quite simple as it turns out
[08:42] <jam> wallyworld: great!
[08:45] <manadart> What's a good charm to test Cosmic machines/units with?
[08:45] <wallyworld> kelvin: we talked about backingUnit, same would apply for backingMachine I expect
[08:46] <kelvin> wallyworld, yup, sure. thanks
[08:46] <kelvin> thanks, jam as well
[08:46] <stickupkid> manadart: LGTM
[08:48] <anastasiamac> manadart: i was using usbuntu with "--series cosmic --force"
[08:48] <anastasiamac> ubuntu even
[08:48] <jam> manadart: afaik, we don't have any charms today that support cosmic, but you could force/edit something like ubuntu-lite to include cosmic
[08:50] <manadart> anastasiamac jam: I've got Cosmic machines and containers deployed fine using image-stream and container-image-stream respectively.
[08:50] <jam> \o/
[08:50] <anastasiamac> manadart: having said that, i saw that rick_h_ has a personal cosmic charm :) maybe because rick_h_ is an astronaut ?
[08:50] <jam> manadart: with anastasiamac's patch ?
[08:50] <anastasiamac> manadart: on 2.4? or 2.5-bX?
[08:51] <manadart> jam anastasiamac: 2.4 HEAD.
[08:51] <anastasiamac> manadart: could u plz then also update bug 1781301 as it looks like we support it
[08:51] <mup> Bug #1781301: Juju needs to support cosmic <juju:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1781301>
[08:52] <anastasiamac> manadart: yes! this is awesome news as my patch did land on 2.4 and soon to be on 2.5
[08:52] <anastasiamac> thnx for verifying :)
[08:52] <manadart> anastasiamac: NP.
[08:52] <manadart> stickupkid: Ta.
[08:52] <anastasiamac> manadart: actually m updating the bug now... nm :)
[08:52] <manadart> anastasiamac: Ack.
[08:55] <anastasiamac> manadart: jam: really appreciate u taking over both the symlink bug and verification of cosmic deploys! great team work :D
[09:01] <stickupkid> anastasiamac: i believe you're right with your theory about the jujuConnSuite...
[09:02] <jam> manadart: stickupkid: I'm seeing a bunch of conflicts merging 2.4 into develop around lxd code
[09:02] <jam> stickupkid: is one of them supposed to supersede? (I'm guessing the 2.4 code is the 'best' code, but I'm guessing)
[09:02] <stickupkid> anastasiamac: there are changes in the commit diffs that change how the session is created for tests
[09:03] <stickupkid> jam: manadart: ouch, hmmm - yeah, for most cases
[09:08] <manadart> jam: There will be one that I know of - usage of IsInstalledLocally and IsRunningLocally. The utils module was deleted from both, but 2.4 supersedes develop in using it from container/lxd.
[09:08] <manadart> Not sure about other conflicts.
[09:08] <jam> stickupkid: looks like there are some things that are only in 2.5 like "ServerSpec" vs "RemoteSpec"
[09:08] <jam> and defaultProfileWithNIC vs defaultProfile
[09:09] <anastasiamac> stickupkid: i'd imagine difference in session creations would indeed b top suspect...
[09:09] <stickupkid> jam: ServerSpec, RemoteSpec is being removed and manadart should know about the latter
[09:11] <jam> stickupkid: is ServerSpec being replaced by RemoteServer ?
[09:11] <jam> you need *one* of those
[09:11] <manadart> jam: Stays as ServerSpec in develop.
[09:11] <stickupkid> the other way around, ServerSpec is replacing RemoteServer
[09:11] <jam> providerSuite.createProvider is a test suite
[09:12] <jam> stickupkid: manadart: kk. I did look to switch to ServerSpec, so that's good.
[09:14] <manadart> jam: 2.4 is behind for LXD generally. After the release freeze, enough was back-ported to support constraints. Clustering enhancements are all going into develop.
[09:14] <manadart> I'd like to sync them, because it is so much cleaner without tools/lxdclient. But that's a conversation...
[09:16] <jam> manadart: lots of conflicts in the test suite, unfortunately
[09:17] <jam> maybe not so many. just a couple tests that end up interleaved somehow
[09:17] <manadart> jam: If the conflicts are all in LXD land, want me to give it a lash?
[09:18] <jam> manadart: I'll push something up in a sec, I'm close to done
[09:18] <manadart> jam: Ack.
[09:18] <stickupkid> anastasiamac: so I believe this is the commit https://github.com/juju/juju/commit/e95762d1c094c3a6b41e9ab98f1068b0aaea240e
[09:21] <veebers> stickupkid: huh, I notice that's a commit of mine, you suspect the extra session usage there is causing grief?
[09:21] <stickupkid> veebers: yeah, no idea why yet
[09:22] <stickupkid> veebers: i've not touched anything to do with mongo in juju yet, so I'm having a poke atm
[09:22] <veebers> stickupkid: I wonder if the cleanup isn't ironclad or something along those lines, there is a session created outside of the setupTest for instance
[09:23] <stickupkid> veebers: yeah - exactly
[09:24] <veebers> stickupkid: well, if it's something obvious and silly I owe you a beer in Brussels :-P
[09:24] <stickupkid> veebers: haha
[09:48] <stickupkid> so the test that repeatedly fails is CAASModelDeployCharmStoreSuite.SetupTest, and the suite composes the new changes of that commit.
[09:50] <jam> manadart: stickupkid: can you look at https://github.com/juju/juju/pull/8927 and see if everything makes sense? the test suite passes in container/lxd and provider/lxd on my machine
[09:50]  * jam away for lunch
[09:54] <veebers> stickupkid: I wonder if it's a mongo/session thing or if it's a proper bug and a worker or something is borked due to the new controller config?
[09:55] <stickupkid> veebers: i wonder if i skip the test and see if the problem goes away?
[09:55] <stickupkid> veebers: then we can at least know that the test is buggy
[09:56] <veebers> stickupkid: true. It passes sometimes though, right? (it must have for it to get landed, and I ran the suite a couple of times etc.)
[09:57] <stickupkid> veebers: yeah, but it seems to be getting worse - it never fails locally. note - it's not your test that fails, but something composing the test suite that you changed
[10:00] <veebers> stickupkid: interesting, I wonder if the testTeardown is doing something in an order that borks things. But I'm just stabbing in the dark :-) I'll leave you to it as you have eyes on the code ^_^
[10:03] <jam> back
[10:07] <stickupkid> jam: LGTM
[10:19] <jam> thx
[10:51] <stickupkid> So I'm skipping the test that "seems" to be causing the CI issue, but, unfortunately it's intermittent - so only time will tell
[10:52] <stickupkid> https://github.com/juju/juju/pull/8918/commits/ec595f73202dd65ee842823d0e50b52083c53753
[11:12] <anastasiamac> stickupkid: thank u for looking at tests. fingers crossed that skipping will make a difference \o/ at the very least, it'll finger-point which ones need to be addressed :)
[11:14] <stickupkid> anastasiamac: np - yeah, let's hope
[11:15] <anastasiamac> stickupkid: and really appreciated the comments on the bug :) i'll add the skipping PRs for future tracking too
[11:17] <stickupkid> anastasiamac: welcome
[11:28] <anastasiamac> stickupkid: i've added further comments as I believe that the issue is caused by having s.State.Model() calls
[11:29] <anastasiamac> stickupkid: but cannot look into it further (must eod)...
[11:29] <stickupkid> anastasiamac: awesome! well it built and merged without failure, that's the first time in a week or so
[11:30] <stickupkid> anastasiamac: i'll have a quick look, but i'm not sure what I'm really looking at though :)
[11:30] <stickupkid> anastasiamac: thanks for the help, enjoy the rest of your day
[11:31] <anastasiamac> stickupkid: gr8 about landing
[11:32] <anastasiamac> stickupkid: "looking" would probably involve the why-is-State.Model()-causing issue and can-the-test-survive/be-rewritten-without-State.Model()-call
[11:32] <anastasiamac> stickupkid: but yeah, i'll go away for now before my head explodes \o/ ttyl
[11:32] <stickupkid> anastasiamac: \o/
[11:32] <anastasiamac> o/
[14:03] <hfp> Hey, I can't get juju going. This is what I'm getting on ubuntu 17.10: https://dpaste.de/hgRs
[14:03] <hfp> is it about the lxd profile default having an ipv6 or is it something else?
[14:04] <hfp> This is my default profile: https://dpaste.de/MQ9s, it's as default as they go
[14:14] <manadart> hfp: Looks like IPv6. The profile is OK AFAICT. Try "lxc network set lxdbr0 ipv6.address none".
[14:16] <manadart> Worth checking with "lxc network show lxdbr0". Ensure ipv4.address is something and ipv4.nat is true.
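manadart's suggestion can be sketched as a pair of commands; this is a hedged restatement of the advice above, assuming the default bridge is named `lxdbr0`:

```shell
# Disable IPv6 on the default LXD bridge, as suggested above:
lxc network set lxdbr0 ipv6.address none

# Then verify: ipv4.address should be set and ipv4.nat should be "true".
lxc network show lxdbr0
```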
[14:57] <jam> i need to EOD, but if someone could give a look at https://github.com/juju/juju/pull/8928 I would appreciate it.
[14:59] <rick_h_> manadart: any chance you can give ^ a look before your EOD? /me checks clocks
[15:00]  * manadart pulls his punch card back out of the slot.
[15:01] <manadart> rick_h_: Looking.
[15:01] <rick_h_> manadart: if you're EOD let it go and please grab it in the AM
[15:02] <manadart> rick_h_: It's fine. I got it.
[15:12] <manadart> rick_h_ jam: Approved it. One comment.
[15:13] <rick_h_> ty manadart !
[15:13] <rick_h_> have a good night
[15:44] <los_> Can I get a pointer to a reference Reactive Pattern charm that has no dependencies?
[15:44] <hfp> manadart: lxdbr0 has an IPv4 and an IPv6. Juju doesn't do IPv6?
[16:07] <manadart> hfp: LXD integration will ultimately support IPv6, but at the moment we eschew it.
[16:30] <zeestrat> Hey rick_h_, you know of a way to cross post stuff on discourse? Would love to get some input from the maas folks on https://discourse.jujucharms.com/t/ideal-use-cases-for-maas-and-pods-vs-lxd-esp-with-clustering-with-juju/68/3
[17:59] <rick_h_> zeestrat: hmm, not really. Just have to ask them to chat. What are you looking for them to fill in?
[17:59] <rick_h_> zeestrat: the maas vs pods vs cluster plans? I mean really their plans are just to make stuff work so folks can get machines provisioned from their DC as easy as possible
[18:13] <zeestrat> rick_h_: right, but as someone that's  been using both for a while I'm still missing a more simple and unified approach to deploy both maas and juju controllers. I've also seen a couple of folks around here and #maas that have been confused or struggled with figuring out how to deploy juju controllers without using manual kvm or umpteen physical machines. A common question is why they can't use lxd containers as juju
[18:13] <zeestrat> controllers on their physical maas controllers.
[18:15] <zeestrat> Was interested in hearing their plans as it looks like they're moving towards snap and lxd containers as preferred deployment methods
[18:15] <rick_h_> zeestrat: understand. My point is that the maas folks aren't worrying about how to load juju controllers on there.
[18:16] <rick_h_> unfortunately there's not an easy fix yet for the chicken and egg of "I need a cloud, I need to run stuff on my cloud" by having the cloud be both control plane and target for workloads for that cloud
[18:16]  * rick_h_ rereads that and wonders if he needs more coffee
[18:17] <rick_h_> zeestrat: we'll see. at the moment I'm looking at the lxd cluster stuff going "cool the controllers can be on the same plane of hardware as the workloads" so it helps with the "I need three machines in MAAS just for Juju HA"
[18:17] <rick_h_> much more like an openstack setup tbh
[18:17] <rick_h_> but it still means you need to set up the lxd cluster which is a manual thing atm
[18:18] <zeestrat> Hah, and I need some more beer! I gotcha, but at least the path towards being able to enlist and commission lxd clusters/containers in maas needs some work from the maas side
[18:19] <rick_h_> yea
[18:19] <rick_h_> it'd be darn cool if you could ask maas for a machine to come up in a cluster
[18:19] <zeestrat> The more easy and fun it is to get started with juju on maas the better.
[18:19] <rick_h_> and that expanding that cluster was just asking maas for another machine
[18:19] <rick_h_> but that's not on roadmap atm
[18:20] <rick_h_> just in the "cool ideas" phase
[18:23] <zeestrat> OK, so the ideal use case of bootstrapping juju controllers on maas will be physical machine or manual kvm's for the time being?
[18:26] <rick_h_> zeestrat: yea, there's some toys that might open up more possibility but not there yet.
[18:51] <hfp> manadart: Noted, thanks
[20:58] <wallyworld> veebers: did my comments on the PR make sense?
[20:58] <veebers> wallyworld: I haven't seen them yet, I'll ping if I have questions
[20:58]  * veebers wonders how he missed that email come through
[20:59] <wallyworld> veebers: sorry, should have pinged but it was very late here and you would have been AFK
[21:00] <veebers> wallyworld: no worries, I normally see the review email come in and respond. I must have skimmed over it in the morning cleanup
[21:02] <veebers> wallyworld: just sorting out a vsphere query from thumper, then will continue debugging the resource-get issue I'm seeing (relevant to this PR) and attack those comments (they all make sense)
[21:02] <wallyworld> gr8, ty
[21:45] <thumper> babbageclunk: ping
[21:45] <babbageclunk> thumper: hey
[21:45] <thumper> babbageclunk: noticing your email, wanna chat?
[21:46] <babbageclunk> yes please!
[21:46] <thumper> jump in 1:1
[22:25] <magicaltrout> random question for a charmer, that will likely make them sad. Is there a sane way to execute a command on another service from within another charms hook
[22:26] <magicaltrout> for example, I need to stick a new unix user into another charm and ideally i could do with a way to do this without adding stuff to the existing relations
[22:43] <thumper> magicaltrout: I don't think so
[22:43] <thumper> magicaltrout: it is almost like you want one unit to be able to run an action on another unit?
[23:41] <veebers> Now why would none of the debug logging that I've sprinkled around show up at all :-|
[23:41] <babbageclunk> veebers: is it going to a controller log and you're looking in the default model?
[23:42] <veebers> babbageclunk: no I've def got debug-log -m controller there, I wonder if it's not getting past the cmd context part
[23:42] <veebers> but surely that should show up in the hook logs for the operator pod
[23:43] <babbageclunk> hmm, not sure about k8s stuff but that might be in the model log, not the controller one
[23:45] <veebers> yeah, for that I should be able to see it with something like "microk8s.kubectl -n aaaa log -f juju-operator-mysql"
[23:46] <veebers> (do you like my naming scheme? aaaa, bbbb, I think there is a zyx as well as a xyz)