[00:07] <veebers> wallyworld: ok, so docker login and docker pull work with the image, it's something me or m-k8s is doing. Will dig deeper after food
[00:08] <wallyworld> ok
[00:11] <babbageclunk> wallyworld: sorry, haven't looked at that pr yet - some roofers have turned up to fix a leak.
[00:11] <wallyworld> no rush
[00:33] <wallyworld> kelvin_: here's a small PR to add support for docker cmd/args https://github.com/juju/juju/pull/8908
[00:33] <kelvin_> wallyworld, yup, looking now.
[00:47] <kelvin_> wallyworld, LGTM. thanks
[00:57] <veebers> wallyworld: I *think* this originally succeeded getting the image, but container creation failed and it tried again and failed to pull image (TTL for image access?) https://pastebin.canonical.com/p/9khpdhpxCy/
[01:18] <wallyworld> veebers: i have to head out to lunch in about 30 - need any input before i go?
[01:18] <veebers> wallyworld: no, all good. making progress
[01:18] <wallyworld> awesome
[01:18] <wallyworld> you got secret sorted?
[01:24] <veebers> wallyworld: I thought I did, still working on it. Made progress though
[01:27] <wallyworld> onward and upward
[02:21] <veebers> ah shoot, I didn't read the docs (shocker) and apparently you need to run microk8s.reset before snap removing the thing 0_0 I wonder if a reboot will help
[02:23] <thumper> veebers: oops
[02:23]  * thumper goes to make a coffee
[02:27] <balloons> Tell veebers he should really experiment in a sandbox so he doesn't have to reboot his entire machine :p
[03:17] <veebers> wallyworld: would you have some time to chat about the docker bits?
[04:27] <ybaumy> ok this is kinda rant about the localhost kubernetes deployment option with juju or conjure-up... short: it just doesnt work.. longer: why on earth is there a option to deploy on localhost if your tools just cannot do it. has anyone of the devs lately tried to deploy on localhost eg. kubernetes-core? just a small cluster. probably not because its broken.
[04:28] <ybaumy> ok so i dont give up easily.. i read lxd tutorial here for localhost and tried to configure it like that.. then i read github issues of other ppl.. tried that too
[04:28] <ybaumy> i just cant get it to run... then i post on github on a similar problem. but the issue is closed. closed but not answered
[04:29] <ybaumy> there is no tutorial for localhost deployment which just says do this and do that, because our software is not able to
[04:30] <ybaumy> annoying is the word im looking for.
[04:30] <ybaumy> its not the first time that open source just fails to provide for users in documentation and so on.
[04:31] <ybaumy> if developers are not able to write documentation then just dont offer options that do not work out of the box
[04:32] <ybaumy> now everybody will say .. heh did you open a issue yet?
[04:33] <ybaumy> no i havent. why? because i was wasting my time on other projects and didnt get a solution or even the devs give an answer that they also dont know what to do
[04:33] <ybaumy> that would be honest and straight forward
[04:34] <ybaumy> so if anyone reads this here. take it as feedback or leave and do whatever you devs do
[04:52] <ybaumy> and this might also be news to you.. system administrators/engineers/architects are not there to be the single source of quality control for your code. yes we understand that there are bugs here and there.. but we are not mainly there to deliver feedback about broken software. and yes we know how to use google, but if you have to read 4 different websites to be able to use a feature that is not well
[04:52] <ybaumy> documented in the official docs on the software website then you have to realize that something is really not working as intended
[05:05] <wallyworld> ybaumy: i do it every day. what's the issue you are seeing?
[05:08] <wallyworld> my setup is a lxd cloud and then i deploy kubernetes-core bundle. one tweak i do make is to not nest one of the lxd containers
[05:08] <thumper> wallyworld: is that a local tweak to the default bundle?
[05:08] <wallyworld> since support for that i think is kernel dependent (not sure), and it only matters when deploying to real vms
[05:08] <wallyworld> yeah
[05:09] <thumper> perhaps a short discourse post on getting it working?
[05:09] <wallyworld> yup. i didn't realise it was an issue
[05:09] <wallyworld> i just did it for expediency
[05:09] <wallyworld> no need for nested lxd when testing things out
[05:10] <thumper> perhaps we should have a CDK local bundle in the store for people testing it out?
[05:10] <thumper> one that doesn't use nested lxd
[05:10] <thumper> well, one that doesn't specify containers in the bundle
[05:11] <thumper> ybaumy: sorry that you had such a negative experience with it
[05:11] <ybaumy> ok so first /var/run/kubernetes is not created on the master .. then i had to follow this https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/507 https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Deploying-on-LXD https://github.com/lxdock/lxdock/pull/49/commits/39b19f90ad8f7331c5e49ba27fba7446be254ea7
[05:11] <wallyworld> thumper: i *think* the guys said they were going to remove the nested lxd container anyway
[05:11] <ybaumy> https://askubuntu.com/questions/863675/lxc-lxd-and-apparmor-permission-denied-attempted-to-load-a-profile-while-confi https://stackoverflow.com/questions/48190928/kubernetes-pod-remains-in-containercreating-status https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/262
[05:12] <wallyworld> ybaumy: so that's CDK not kubernetes-core?
[05:12] <ybaumy> wallyworld: didn't matter for me.. had the same issues
[05:12] <wallyworld> the kubernetes-core is what i'd recommend to play with
[05:12] <ybaumy> i just searched google
[05:13] <ybaumy> for similar problems
[05:13] <ybaumy> was using kubernetes-core
[05:13] <ybaumy> first stable then edge
[05:14] <ybaumy> i pretty much ran from error to error
[05:14] <thumper> wallyworld: have you changed your default lxd profile like the wiki linked above suggested?
[05:15] <ybaumy> https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/262 this is the latest state i was in
[05:15] <ybaumy> on the bottom
[05:16] <wallyworld> ybaumy: here's my bundle that i tweaked; i simply deployed easyrsa to a new machine (2) instead of a nested lxd https://pastebin.ubuntu.com/p/5p5W6YnrHF/
[05:16] <ybaumy> the system pods were not starting and then i gave up 2 weeks ago
[05:17] <ybaumy> ok have to try that tonight since i have to go to work in half an hour
[05:18] <wallyworld> i'm not a k8s expert - have not seen the error you mention
[05:18] <wallyworld> but i know my tweaked bundle seems to work every time i have tried
[05:18] <wallyworld> the only change i made is to introduce that new machine 2 instead of using a nested lxd
[05:19] <wallyworld> as deploying locally uses lxd for the top level vms
[05:19] <wallyworld> so no value in nesting
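wallyworld's tweak above can be sketched as a small bundle edit; his actual pastebin isn't reproduced here, so the application name, charm URL, and placements below are assumptions reconstructed from the chat (easyrsa moved from a nested lxd on machine 0 to a new machine 2):

```shell
# Hypothetical sketch of the tweaked kubernetes-core bundle fragment.
# The only change described: easyrsa gets its own machine instead of "lxd:0".
cat > kubernetes-core-tweaked.yaml <<'EOF'
machines:
  "0": {}
  "1": {}
  "2": {}          # new machine replacing the nested lxd container
applications:
  easyrsa:
    charm: cs:~containers/easyrsa
    num_units: 1
    to: ["2"]      # was a nested "lxd:0" in the stock bundle
EOF
# then: juju deploy ./kubernetes-core-tweaked.yaml
grep -q '"2"' kubernetes-core-tweaked.yaml && echo wrote-tweak
```

The point of the change is the one wallyworld makes: deploying locally already puts the top-level "machines" in lxd containers, so nesting a second lxd inside them buys nothing and adds kernel/profile requirements.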
[05:19] <kelvin_> ybaumy, it's worth checking whether your lxd version is >= 3
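kelvin_'s version check can be done portably with `sort -V`; a minimal sketch (the `ver_ge` helper is illustrative, not a juju/lxd tool):

```shell
# ver_ge A B -> succeeds when dotted version A >= B, using GNU version sort.
ver_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

# In real use you'd feed it the live version:  ver_ge "$(lxd --version)" 3.0
if ver_ge "3.0.1" "3.0"; then echo "lxd is new enough"; fi
```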
[05:19] <ybaumy> wallyworld: well this is the thing i guess. lxd is not working properly
[05:19] <wallyworld> nesting does require special kernel things / privileges that i do not fully understand
[05:20] <ybaumy> http://pastebin.centos.org/931976/
[05:21] <ybaumy> those version im using now
[05:21] <ybaumy> juju is beta because i tried stable first and thought that might fix it
[05:21] <thumper> ybaumy: funnily enough the beta channel is behind the release channel just now
[05:21] <thumper> :(
[05:22] <ybaumy> wallyworld: and yes you are right without the changes to lxd i did .. the lxd containers were not even starting
[05:22] <ybaumy> thumper: good to know
[05:22] <thumper> ybaumy: release is 2.4.0
[05:22] <thumper> stable I mean
[05:22] <ybaumy> thumper: i remember getting the mail for the release yes
[05:23] <ybaumy> but the problem is either kube-core or lxd
[05:23] <thumper> to be honest though, rc3 == .0 except the version number
[05:23] <wallyworld> ybaumy: i would be very interested to see if the tweaked bundle file without nested lxd works. let us know
[05:23] <wallyworld> let's rule out nesting as an issue
[05:23] <ybaumy> wallyworld: will post my feedback tonight here in the channel
[05:23]  * thumper has to go make dinner for minions
[05:23] <wallyworld> if there's still an issue wil be easier to find
[05:23] <thumper> see y'all tomorrow
[05:24] <wallyworld> o/
[05:24] <wallyworld> i may be out tonight but will check tomorrow
[05:24] <ybaumy> will be online tomorow morning from 0500-0700 CET here
[05:24] <wallyworld> i'm GMT+10
[05:25] <ybaumy> gotta go work later
[05:26] <kelvin_> wallyworld, can i get u a few minutes?
[05:26] <wallyworld> sure
[05:26] <kelvin_> HO?
[05:26] <wallyworld> yup
[05:53] <anastasiamac> kelvin_: wallyworld: i think u may have addressed this one already https://bugs.launchpad.net/juju/+bug/1764649 right? so we can close it?
[05:53] <mup> Bug #1764649: juju caas remove-relation does not work <juju:New for wallyworld> <https://launchpad.net/bugs/1764649>
[06:01] <kelvin_> anastasiamac, yes, it's been resolved, we can close it.
[06:01] <wallyworld> kelvin_: there's a method on application IsRemote() - use that instead of NotFOund check
[06:01] <kelvin_> wallyworld, ok, ic. thanks
[06:04] <kelvin_> anastasiamac, I just closed it. sorry for the confusion.
[07:38] <kelvin_> wallyworld, got a tiny PR, would u help to take a look when u got time? thanks https://github.com/juju/juju/pull/8909/files
[09:15] <manadart> stickupkid: This one is almost too trivial for review, but here 'tis anyhow: https://github.com/juju/juju/pull/8910
[09:18] <stickupkid> manadart: haha - too easy :D
[09:21] <stickupkid> manadart: i've turned on verbosity on tests (-check.v) in a branch, that will hopefully tell me which test is causing most if not all merges to fail
[09:21] <stickupkid> manadart: commit for reference: https://github.com/juju/juju/pull/8894/commits/abe85287b6053bcc798b49e18a7c846af9a344ae
[09:22] <stickupkid> manadart: i believe it's down to luck if your merge lands now, 1 out of 3 requests tend to merge
[09:26] <manadart> stickupkid: OK, thanks.
[11:24] <manadart> stickupkid: Are you going to land the approved PRs?
[11:25] <stickupkid> manadart: would love to
[11:25] <stickupkid> manadart: $$merge$$ just doesn't like me
[11:26] <stickupkid> manadart: see http://ci.jujucharms.com/job/github-merge-juju/706/consoleFull
[11:26] <stickupkid> manadart: check out this - https://github.com/juju/juju/pull/8894
[11:26] <stickupkid> manadart: about 5 merges :(
[11:43] <stickupkid> manadart: it keeps getting stuck in the same place - github.com/juju/juju/cmd/juju/application
[11:45] <stickupkid> jam: any thoughts on the hanging tests when trying to merge?
[11:47] <ybaumy> https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/608
[11:48] <ybaumy> somebody please take a look at this
[11:48] <ybaumy> logs from the cdk field agent are attached
[12:59] <los_> Hi!  I'm using Juju+Maas.  Do I need sudo for any of add-cloud/add-credential as per https://askubuntu.com/questions/1053630/juju-ever-need-sudo-for-install ?
[13:47] <hml> externalreality: good morning - i looked at the pr and responded to some stuff.  did I miss any outstanding questions?
[13:49] <externalreality> hml, good afternoon - let me check, I see you've responded to some stuff just 2 minutes ago
[13:50] <externalreality> hml, let me see what's new
[16:13] <stickupkid> rick_h_: in order to differentiate between lxd and lxd-remote for reporting, you might have to be explicit in the cloud spec about which one you want to interact with
[16:15] <stickupkid> rick_h_: https://pastebin.canonical.com/p/Y7JNphH8gr/
[16:18] <rick_h_> stickupkid: looking
[16:19] <rick_h_> stickupkid: k, interactive isn't the normal interactive from add-credential but used in azure as you're sent over to azure to fill out data
[16:19] <rick_h_> stickupkid: or is interactive the trust password setup
[16:19] <rick_h_> ?
[16:20] <stickupkid> rick_h_: interactive in this case is pointing to a file, but yeah, that will be trust password later on
[16:20] <rick_h_> gotcha ok
[16:20] <stickupkid> s/file/cert file/
[16:20] <stickupkid> rick_h_: it's more the fact that we now have two different lxd providers, which i'm not sure about
[16:21] <stickupkid> lxd and lxd-remote
[16:21] <rick_h_> stickupkid: cool
[16:21] <rick_h_> stickupkid: well I'd ask if it's lxd-remote or lxd-cluster it should be?
[16:22] <stickupkid> rick_h_: i guess you could have one node in a lxd-cluster
[16:22] <rick_h_> stickupkid: right, you'd have "turned on lxd cluster"
[16:23] <stickupkid> rick_h_: https://pastebin.canonical.com/p/TnFHbvgMYG/
[16:24] <stickupkid> rick_h_: so when adding a cloud, it will come up with two different cloud types
[16:24] <rick_h_> stickupkid: yea, I think that we need that cluster word since that's the tutorial/wording of setting it up in lxd
[16:24] <rick_h_> stickupkid: hmm, yea...but can you add a lxd? /me looks if that's there currently
[16:25] <stickupkid> rick_h_: ok, that's fine, quick change, but we ok with that setup, because it's a bit weird
[16:25] <rick_h_> stickupkid: right, so lxd doesn't currently show in add-cloud
[16:25] <rick_h_> stickupkid: so it should just be one lxd in there, and I'd use lxd-cluster vs "remote"
[16:25] <stickupkid> rick_h_: which branch?
[16:25] <rick_h_> stickupkid: current 2.4.0 release
[16:26] <stickupkid> ok, that's interesting
[16:33] <stickupkid> rick_h_: glad you noticed that, was a nice easy fix :D
[16:33] <rick_h_> stickupkid: :)
[16:33] <rick_h_> stickupkid: thank you for bringing up something that didn't quite look right
[16:33] <rick_h_> appreciate the eye for "hmmm, this isn't ideal"
[16:33] <stickupkid> rick_h_: https://pastebin.canonical.com/p/X8zV8gFPNk/
[16:34] <rick_h_> stickupkid: yea, that looks peachy to me
[16:34] <stickupkid> rick_h_: that looks better, i'll do some testing, to make sure it works well
[16:34] <rick_h_> stickupkid: <3
[19:56] <knobby> how does juju know where to deploy subordinate charms? Does it somehow figure it out from the relation information in the bundle.yaml?
[20:20] <rick_h_> knobby: exactly. Subordinates don't exist by themselves and only come into play once related to something in the model
[20:23] <knobby> rick_h_: if you don't mind me digging into specifics here, why is it that in https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml flannel is created on the masters and workers, but not etcd. How does juju know that the relation to etcd doesn't mean to attach flannel to that machine as well?
[20:24] <rick_h_> knobby: looking sec
[20:24] <knobby> no rush, rick_h_
[20:24] <rick_h_> knobby: so when you look at the relation definitions here: https://api.jujucharms.com/charmstore/v5/~containers/flannel/archive/metadata.yaml
[20:25] <rick_h_> knobby: you can see that two of them are "container" scoped which basically means it needs to be installed/setup on the machine for those relations
[20:25] <rick_h_> knobby: but note not for the cni relation which is using etcd for data storage. So it's more like a mysql/postgres db in that relation context. You'd not install your app into the same machine as your data store
[20:25] <knobby> ok, so if you build a subordinate and don't have container scope, you end up with something that won't deploy without placement?
[20:26] <rick_h_> knobby: check out https://docs.jujucharms.com/2.4/en/authors-subordinate-applications and the section "Declaring subordinate charms"
[20:26] <knobby> rick_h_: I completely get the desire not to install on etcd, I just didn't see how juju could determine that was the intent
[20:26] <rick_h_> knobby: even with placement I don't think it'll deploy tbh
[20:26] <knobby> ok, thanks
[20:27] <rick_h_> knobby: so it's doing that because it knows the relation from flannel->etcd is on the endpoint cni, and that the cni endpoint is not container scoped
[20:33] <knobby> thanks, rick_h_
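The distinction rick_h_ walks through lives entirely in the subordinate charm's metadata.yaml: only `scope: container` relations pull the subordinate onto the principal's machine, while global-scope relations (like flannel's cni/etcd one) behave like an ordinary cross-machine relation. A minimal sketch, with illustrative names rather than flannel's real endpoints:

```shell
# Hypothetical subordinate charm metadata fragment showing the two scopes.
cat > metadata.yaml <<'EOF'
name: my-subordinate
subordinate: true
requires:
  host:              # container-scoped: co-installed on the principal's machine
    interface: juju-info
    scope: container
  db:                # default global scope: normal cross-machine relation,
    interface: etcd  # no co-location implied (flannel->etcd works like this)
EOF
grep -q 'scope: container' metadata.yaml && echo container-scoped
```

This also illustrates knobby's follow-up question: a subordinate needs at least one container-scoped relation to get placed at all, which is why placement directives don't help a subordinate that lacks one.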
[20:43] <veebers> Morning all o/
[20:50] <hml> morning veebers
[20:52] <thumper> morning team
[20:52] <thumper> veebers: seems we have some confusion around the snap build jobs, see rick_h_'s email
[20:53] <rick_h_> thumper: morning
[20:54] <veebers> thumper: looking now
[20:56] <veebers> rick_h_: ah, it was probably disabled in one of the releases and not re-enabled. Seems like a process failure
[20:56] <veebers> rick_h_, thumper actually I think I may have disabled it during a release as I wasn't so sure myself
[20:56] <rick_h_> veebers: hmm, ok. I didn't know of any reason for a release we'd have stopped the 2.5 dev builds since I started them up after 2.4.0 final
[20:56] <rick_h_> veebers: but maybe they were turned off manually as part of that
[20:57] <veebers> rick_h_: I think I was confused about what might get overwritten and did both jobs,  sorry
[21:00] <rick_h_> veebers: ok, all good.
[21:01] <rick_h_> veebers: I'll have to ponder if we can do any sort of warning flag on this. It's so easy to miss it for a length of time because we're used to the snaps being automatic
[21:02] <veebers> rick_h_: maybe it could be as easy as having a rotating card between teams that we check the output of 'snap info juju' at 1 or 2 standups a week
[21:03] <rick_h_> veebers: but but but, what about a giant dashboard with pretty pictures and things go red when they're not updated after 3 days and if can email everyone and ... or maybe we just check a couple of times a week :)
[21:06] <veebers> rick_h_: hah ^_^, we could put an html file in people.c.c that is either red or green as part of the process if you like, <body style="background-color: red"></body>
[21:08] <hml> who’s the current expert on the juju/interact/pollster code ?  and juju add-cloud schemas?
[21:10] <hml> thumper, rick_h_ , wallyworld ^^  :-)
[21:10] <vino> hi thumper
[21:20] <thumper> vino: you're up early
[21:20] <vino> because i slept early.
[21:20] <vino> :)
[21:20] <vino> have a min
[21:21] <vino> its regarding the PR.
[21:22] <vino> thumper: i haven't landed it, apologies. Because of the tweaks you asked me to do, esp the state obj ModelTag().
[21:22] <vino> What you have mentioned is reasonable to add there. But it created lots of ambiguity in other facades.
[21:22] <vino> which is not relevant to fix for that PR
[21:23] <vino> however i did push the commit. Still more issues to fix because of the ModelTag() addition in the state obj.
[21:23] <thumper> I'm not sure I understand
[21:24] <thumper> what are the issues to fix?
[21:24] <vino> ok 1:1
[21:24] <thumper> sure
[21:58] <los_> Is sudo needed for add-credential when you first join Juju->MaaS?
[22:02] <los_> ...I'm getting a permission denied on doing add-credential onto a by-the-book new install
[22:30] <thumper> los_: I don't think so
[22:30] <thumper> permission denied on what?
[22:30] <thumper> los_: and which version of juju?
[22:36] <los_> "Latest", one sec for details. This is on 18.04 for the control node.
[22:41] <los__> Juju is 2.4-rc3-bionic-amd64 and MaaS has no --version (I just installed it yesterday)
[22:42] <los__> I zapped the .local/share/juju between iterations
[22:43] <los__> MaaS version from the webUI is:  2.4.0~beta2 (6865-gec43e47e6-0ubuntu1)
[22:46] <veebers> thumper: could the unit test failures be related to the mongo issues mentioned re: the pr check/merge failures? It seems to largely be the same test(s) failing across the arches
[22:47] <thumper> veebers: which failure is it?
[22:47] <veebers> thumper: rick_h_ mentioned that it appears to be mongo related, when I asked him if it was a ppa add failure that was causing the PR grief
[22:47] <thumper> los_: what is the permission denied error? is there more shown than just that?
[22:48] <los__> thumper: Details here: https://askubuntu.com/questions/1053630/juju-ever-need-sudo-for-install
[22:48]  * thumper looks
[22:49] <thumper> hmm...
[22:50] <los__> thumper: I just zapped my .local/share/juju, did add-cloud, ok, then $ juju add-credentials blah and got "ERROR cannot get current controller name: permission denied"
[22:50] <veebers> thumper: ok, so looks like the PR failures are the same as we're seeing in the ci unit test failures (just a quick eyeball)
[22:50] <veebers> as to the reason of failure I dn't have any more info
[22:50] <los__> This is different than the error in the SO so sorry for that
[22:51] <thumper> los_: there was a bug around people using sudo to do things by mistake and it changing permissions on the user files to root
[22:51] <thumper> we had a bug fix for that recently, but I can't remember exactly where it went
[22:51] <los__> thumper: This page https://docs.jujucharms.com/2.4/en/clouds-maas has the sequence add-cloud, add-credential, bootstrap ... is that right?
[22:51] <thumper> los_: can you pastebin 'ls -al ~/.local/share/juju/' ?
[22:52] <los__> thumper: thanks, that's cool...sudo is a button pushed all too often.  The how-to page (above) didn't say to use sudo but....
[22:52] <thumper> no, it shouldn't need sudo
[22:52] <los__> thumper: Which pastebin is the one that works this week :) ?
[22:52] <thumper> pastebin.ubuntu.com
[22:52] <thumper> that works
[22:52] <thumper> never fails me :)
[22:53] <los__> thumper: oh ok thanks :) was shocked (but shouldn't be) that the "real one" is gone.  Didn't notice!
[22:53] <thumper> there was a real one?
[22:54] <los__> thumper: you know, the various public ones that got wonky and went poof (full of hacks)
[22:54] <thumper> the other thing I'm thinking... is around the validation of the oauth token, but I would have though that if the validation failed, it would have a better error message
[22:54] <thumper> when we add a credential, a validation check is made to the endpoint to check the validity of the credential
[22:54] <los__> thumper: https://pastebin.ubuntu.com/p/SsBcx4SkJk/
[22:55] <los__> thumper: https://pastebin.ubuntu.com/p/Wc5XGysbmX/
[22:55] <thumper> ok...
[22:56] <thumper> how about running the juju add-credential with --debug?
[22:56] <thumper> I think you may have found a real bug... but one the developer didn't hit because they already had a current controller
[22:57] <thumper> but it is weird...
[22:58] <los__> thumper: Glad to... so I do this to get fresh API key: $ sudo maas-region apikey --username=los
[22:59] <thumper> ok
[22:59] <los__> thumper: ...and "los" is a valid (non-admin) user (is that the bug?) that is showing up in the MaaS webUI
[22:59]  * thumper nods
[23:04] <los__> thumper: Here's the (getting longer, most-recent on top) paste with that debug: https://pastebin.ubuntu.com/p/HMQcrQzNWF/
[23:04]  * thumper looks
[23:05] <thumper> los_: looks like you have another file lying around from a previous sudo
[23:05]  * thumper gets particular location
[23:05] <los__> thumper: yay...I was going "Hey, I should strace this..." locations are good tho!
[23:06] <los__> thumper: (It's been like 2 years since I messed with MaaS / Juju and wow, has it changed for the better!)
[23:06] <thumper> ok...
[23:07] <thumper> go and look for /tmp/juju-*
[23:07] <thumper> there may be something like /tmp/juju-store-lock-(hash)
[23:07] <thumper> probably owned by root
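The check thumper describes (a stale `/tmp/juju-store-lock-<hash>` left owned by root after an accidental sudo run, which later surfaces as "permission denied") can be sketched as a one-liner; `stale_juju_locks` is an illustrative helper, not a juju command:

```shell
# List juju lock/temp files in /tmp that are NOT owned by the current user -
# these are candidates for `sudo rm` after a sudo'd juju run left them behind.
stale_juju_locks() {
    find /tmp -maxdepth 1 -name 'juju-*' ! -user "$(id -un)" 2>/dev/null
}

stale_juju_locks
```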
[23:08] <los__> thumper: "23:08:33 INFO  cmd supercommand.go:465 command finished" wo0t!  Tanks!
[23:09] <los__> thumper: So, to the "other bug" :)
[23:09] <thumper> so that was it?
[23:09] <thumper> I think we need a better error
[23:09] <thumper> to lead users to the problem
[23:10] <los__> thumper: I did a bootstrap several times and by the console of the MaaS (I agree!) the config script was "done" in 62 seconds but the lonnnnnnng timeout on the bootstrap had it hang for a go-see-a-movie amount of time
[23:10] <los__> thumper: I have a getting-longer set of notes about somethings I've seen, also errors that a vague etc.  Will supply!
[23:10] <thumper> ok...
[23:10] <thumper> so... when you bootstrap, maas considers it done when it has installed the OS on the machine and handed it off
[23:11] <thumper> juju gets the machine and runs cloud-init on it
[23:11] <los__> thumper: like "los@davros:~$ juju add-credential maas-cloud  <newline> ERROR cloud maas-cloud not valid"
[23:11] <los__> thumper: Right...but the bootstrap command it self takes (1hr?) a long time to "finish"
[23:11] <thumper> what juju is doing is running apt-get update / upgrade on the os
[23:11] <thumper> installing a few packages
[23:11] <thumper> and configuring mongo and the agents
[23:12] <thumper> it shouldn't take an hour...
[23:12] <los__> Right it gets to that point...but then
[23:12] <thumper> unless your internet connection is very small pipe
[23:12] <los__> thumper: So now I'm doing (150Mb/sec): JUJU_LOGGING_CONFIG="<root>=TRACE" time juju bootstrap daleks davros
[23:12] <thumper> oh...
[23:12] <thumper> don't do that
[23:12] <thumper> root level TRACE is a massive firehose
[23:13] <los__> thumper: Doh... (and its deploying 16.04...that's normal-good yah?)
[23:13] <thumper> you only really want to trace particular modules
[23:13] <thumper> yeah, we won't default to bionic until after 18.04.1
[23:14] <thumper> you can ask for bionic though
[23:14] <los__> thumper: nah, its cool, as long as it works :) in a typical way so I can get onto the next (dev) steps...
[23:14] <thumper> for you 1 hour bootstraps
[23:14] <thumper> a thing to do is to ssh in after it is done
[23:14] <thumper> and look at the cloud-init logs to see if we can work out what is taking all the time
[23:15] <thumper> /var/log/cloud-init......
[23:15] <los__> thumper: what trace level would be helpful?  Right. I read about the ssh (very cool) I will do a no-trace bootstrap right now.  I have a console monitor plugged into that unit.
[23:15] <thumper> I'm sure one of those has times in it
[23:15] <thumper> just do this:
[23:15] <thumper> time juju bootstrap daleks davros --debug
[23:16] <thumper> that effectively is like <root>=debug
[23:16] <thumper> the controllers will inherit that config
[23:16] <thumper> and debug will be enough
[23:16] <thumper> tracing is too much unless you know what you are looking for, and turn it off again
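thumper's logging advice as a sketch: `--debug` is roughly `<root>=DEBUG`, and TRACE should be scoped to a specific module rather than the root logger. The module name below is an assumption for illustration; the controller/cloud names "daleks"/"davros" are the ones from the chat:

```shell
# Scope TRACE to one module instead of firehosing <root>=TRACE.
LOGGING_CONFIG='<root>=DEBUG;juju.provisioner=TRACE'
echo "$LOGGING_CONFIG"

# Real invocations would look like:
#   time juju bootstrap daleks davros --debug
#   JUJU_LOGGING_CONFIG="$LOGGING_CONFIG" juju bootstrap daleks davros
```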
[23:17] <los__> thumper: got it running with that --debug
[23:17] <thumper> los_: is this your first go with juju 2.x?
[23:17] <los__> (all in all: MaaS is wayyyy faster/better than my previous experiment, and Juju is 10x what it was) Yes.  I was 1.3'ing it I think...lonnnnng time ago
[23:17] <los__> Back when half the info was "watch Jorge's vids"
[23:17] <thumper> haha
[23:18] <los__> thumper: (fetching Juju GUI 2.12.3)
[23:18] <thumper> 2.x is way better than 1.25
[23:18] <los__> thumper: Q: I have the MaaS stack on a sep, 2ndary network.  Does routing need to be on between it and the upstream "real" connection?
[23:18] <thumper> ok, if it is getting to fetch the gui, it is part way there, and I think done after the update/upgrade
[23:19] <thumper> yeah, juju expects to be able to reach out... there are proxy config you can set up if using proxies
[23:19] <thumper> it needs to download the agent binaries from streams.ubuntu.com
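For an isolated segment like los_'s 192.168.4.x network, the proxy config thumper mentions can be supplied at bootstrap time. `http-proxy`, `https-proxy`, `apt-http-proxy`, and `no-proxy` are real juju model-config keys; the proxy address (the MAAS region's built-in proxy is assumed here) is illustrative:

```shell
# Hypothetical proxy settings passed via --config at bootstrap.
cat > proxy-config.yaml <<'EOF'
http-proxy: http://192.168.4.1:8000
https-proxy: http://192.168.4.1:8000
apt-http-proxy: http://192.168.4.1:8000
no-proxy: localhost,127.0.0.1
EOF
# then: juju bootstrap daleks davros --config proxy-config.yaml
grep -c 'proxy' proxy-config.yaml
```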
[23:19] <los__> thumper: aha, it handled my "no nodes in AZ default" thing well.
[23:19] <thumper> there are ways to bootstrap completely off-line
[23:20] <los__> thumper: hmm...ok let's see how this goes.  (obv'ly it'll be a while with 16.04 loading)
[23:20] <los__> thumper: thanks for the help: I'm glad to get to have a chance at seeing a useful error instead of one of my own making
[23:20] <thumper> well, if it got to fetching the juju gui, 16.04 is up and running
[23:20] <los__> thumper: I'm just going by what is the latest message on the juju...
[23:21] <los__> thumper: well, the MaaS console too says that (Deploying...)
[23:21] <thumper> I *think* the update/upgrade is before fetching the gui
[23:21] <thumper> I don't bootstrap onto maas much myself though
[23:22] <los__> thumper: I think so too.  There'd be nowhere to fetch TO... oh, another thing I noticed, wow, the images are way smaller!
[23:23] <los__> thumper: this is all for a live demo (with 3 node hardware behind plexi sound barrier for drum risers) of MaaS / Juju on Fri at our OKC LugNuts group
[23:23] <los__> thumper: Oklahoma City, USA
[23:23] <thumper> nice
[23:24] <los__> thumper: the console VGA shows its blazing away installing all kinds of packages (was at libc-bin)
[23:25] <los__> thumper: I know too many top-down devs (JS, Python) that are not SysAd/DevOps that need this, and so to show "hey this is not brain surgery...run your stack at home on-demand"  SetiAtHome is my 1st charm I'm working on, then ngrok, then the Flamenco render farm controller for Blender rendering.  The last one won't be done (beast mode with MongoDB etc.) by then.
[23:26] <veebers> anastasiamac: a thought, we could probably update the release process so that a stable release auto-updates the bugs (fixed released, move to next milestone if not fix committed etc.). Although I'm not up to speed with the previous discussions surrounding this
[23:26] <thumper> los_: that is very cool
[23:26] <los__> thumper: thanks, trying to share since our local scene is cool and helps me more...
[23:27] <thumper> veebers: we had strong guidance that a bug shouldn't be marked fixed released until it is in a non-beta non-rc release
[23:27] <los__> thumper: the classic "mysql/mediawiki/haproxy/expose" demo blew these guys minds, esp. when "oh you can make this 100x bigger in minutes"...like for some viral thing that is blowing up...they made funny faces
[23:27] <thumper> veebers: so our .0 release process should go back through the older milestones
[23:28] <thumper> los_: yeah, we need to get better at telling our story around this
[23:28] <thumper> a big focus for us recently has all been around openstack and kubernetes
[23:28] <thumper> los_: have you looked at JAAS?
[23:28] <thumper> jujucharms.com
[23:29] <thumper> there are canonical hosted controllers, so people can just start using the public clouds
[23:29] <veebers> thumper: ack, it would only be for stable releases that we do some sort of automated bug updating etc.
[23:29] <los__> thumper: I tell ya, except for aws being so slow to provision / de- it was a killer demo to these guys esp. when I tore it down to 1/1/1 again.
[23:29] <los__> thumper: YES!  I was shocked to see JAAS!  Its an obvious idea once you see it but it still surprised me
[23:29] <thumper> from the user's point of view, it is one controller
[23:30] <thumper> and many models on that controller, sometimes in different clouds or regions
[23:30] <los__> thumper: right makes sense.
[23:31] <los__> thumper: this is where we are, The Time of The Great Pausing: https://pastebin.ubuntu.com/p/CfzDfNbPqW/
[23:31] <thumper> los_: fyi, we are in the process of moving email discussions to our new discourse instance: discourse.jujucharms.com
[23:32] <thumper> los_: are you installing juju from a snap?
[23:33] <los__> thumper: very cool.  Discourse has come a long way too.  Ok the console of the machine (yes, with --beta and --classic) says something like "Cloud init   v 18.2 finished at..."
[23:33] <thumper> los_: right, you might want to move off --beta to release (or candidate)
[23:33] <thumper> we need to fix our snap channels...
[23:34] <thumper> beta still has rc3 whereas stable is .0
[23:34] <thumper> candidate is latest .1 builds
[23:34] <los__> thumper: Ok I will then... this is the network config: https://pastebin.ubuntu.com/p/fbqB9Vf7zt/
[23:34] <thumper> los_: I think the output of fetching the gui is a lie
[23:34] <thumper> I think it has just looked up the version
[23:35] <los__> thumper: you know, with messages the "liar" is just an old message...needs more messages?
[23:35] <thumper> the time at 28 minutes past is when juju bootstrap was able to ssh into the instance
[23:35] <thumper> and maas should now show it as running
[23:35] <thumper> however,
[23:35] <thumper> this is where juju is now doing the apt update / upgrade
[23:35] <thumper> and installing packages
[23:35] <thumper> downloading mongo etc
[23:36] <los__> thumper: MaaS webGUI is showing the unit on, with Status "Ubuntu 16.04 LTS" and my user
[23:36] <thumper> I *thought* that we had more output on the script when running with --debug
[23:36] <los__> thumper: NOTE: as per before there is no routing between my 192.168.29.X network (upstream to internet) and the 192.168.4.X net with the workers on it
[23:37] <thumper> los_: this might be the cause of the great pause
[23:37] <thumper> do you have a proxy set?
[23:38] <los__> thumper: 18.04 has some strong differences, and I have two serious netplan bugs brewing (outrageous!) but having LOOKED INTO netplan, wow, there's some powerful stuff in there for "oh bond all this together into a bridge and ..." stuff
[23:38]  * thumper nods
[23:39] <thumper> we are rapidly getting out of my knowledge area... sorry
[23:39] <los__> No proxy set, but, I have seen the "set it in clouds.yaml" ...since the workers can reach out via the HTTP proxy and get their stuff (it seems, since all those packages went whirling by)
[23:39] <thumper> sorting out the routing etc
[23:39] <thumper> so...
[23:39] <thumper> perhaps they have outward routing but not inbound?
[23:40] <thumper> if that is the case, it should be fine...(ish)
[23:40] <los__> thumper: no problems... I can set routing between the two interfaces etc...but I had gotten to this point previously by running juju bootstrap with evil sudo...so this is one big improvement that will help me get to the next step
[23:40] <anastasiamac> veebers: yeah, but we could do it f-2-f at sprint. this only applies to .0 releases and we r not likely to b bitten by it more than once until sep
[23:40] <los__> Well there's the httpd proxy in MaaS ...that's the only way they can reach out to the world
[23:40] <thumper> ok...
[23:41] <thumper> then the instances are probably coming up pointing to that proxy...
[23:41] <thumper> which means juju should curl the agent binaries just fine
[23:41] <thumper> los_: I'm going to have to head out shortly, but let us know how it goes
[23:41] <los__> if I control-\ it there are good clues about what it was trying to do, but, I'm gonna let it go and see what timeout natural gives me.
[23:42] <los__> thumper: thanks a ton, seriously, this is "phew" for Friday demo, I'm getting there.
[23:42] <thumper> nice
[23:42] <los__> thumper: dev'ing the charms with aws provider in parallel
[23:42] <los__> thumper: gonna have fun hitting a script and having machines turn on in performance mode and be really annoying
[23:42] <los__> thumper: thanks again.