[00:07] wallyworld: ok, so docker login and docker pull work with the image, it's something me or m-k8s is doing. Will dig deeper after food
[00:08] ok
[00:11] wallyworld: sorry, haven't looked at that pr yet - some roofers have turned up to fix a leak.
[00:11] no rush
[00:33] kelvin_: here's a small PR to add support for docker cmd/args https://github.com/juju/juju/pull/8908
[00:33] wallyworld, yup, looking now.
[00:47] wallyworld, LGTM. thanks
[00:57] wallyworld: I *think* this originally succeeded in getting the image, but container creation failed, and when it tried again it failed to pull the image (TTL for image access?) https://pastebin.canonical.com/p/9khpdhpxCy/
[01:18] veebers: i have to head out to lunch in about 30 - need any input before i go?
[01:18] wallyworld: no, all good. making progress
[01:18] awesome
[01:18] you got secret sorted?
[01:24] wallyworld: I thought I did, still working on it. Made progress though
[01:27] onward and upward
[02:21] ah shoot, I didn't read the docs (shocker) and apparently you need to run microk8s.reset before snap removing the thing 0_0 I wonder if a reboot will help
[02:23] veebers: oops
[02:23] * thumper goes to make a coffee
[02:27] Tell veebers he should really experiment in a sandbox so he doesn't have to reboot his entire machine :p
[03:17] wallyworld: would you have some time to chat about the docker bits?
[04:27] ok, this is kind of a rant about the localhost kubernetes deployment option with juju or conjure-up... short: it just doesn't work. longer: why on earth is there an option to deploy on localhost if your tools just cannot do it? have any of the devs lately tried to deploy on localhost, e.g. kubernetes-core? just a small cluster. probably not, because it's broken.
[04:28] ok, so i don't give up easily.. i read the lxd tutorial for localhost and tried to configure it like that.. then i read github issues from other ppl.. tried that too
[04:28] i just can't get it to run... then i posted on github about a similar problem. but the issue is closed. closed but not answered
[04:29] there is no tutorial for localhost deployment which just says do this and do that, because our software is not able to do it
[04:30] annoying is the word i'm looking for.
[04:30] it's not the first time that open source just fails to provide for users in documentation and so on.
[04:31] if developers are not able to write documentation, then just don't offer options that do not work out of the box
[04:32] now everybody will say.. heh, did you open an issue yet?
[04:33] no i haven't. why? because i've wasted my time like that on other projects and didn't get a solution, or even an answer from the devs that they also don't know what to do
[04:33] that would be honest and straightforward
[04:34] so if anyone reads this here, take it as feedback, or leave it and do whatever you devs do
[04:52] and this might also be news to you.. system administrators/engineers/architects are not there to be your single source of quality control for your code. yes, we understand that there are bugs here and there.. but we are not mainly there to deliver feedback about broken software. and yes, we know how to use google, but if you have to read 4 different websites to be able to use a feature that is not well documented in the official doc on the software website, then you have to realize that something is really not working as intended
[05:05] ybaumy: i do it every day. what's the issue you are seeing?
[05:08] my setup is a lxd cloud, and then i deploy the kubernetes-core bundle.
[05:08] one tweak i do make is to not nest one of the lxd containers
[05:08] wallyworld: is that a local tweak to the default bundle?
[05:08] since support for that i think is kernel dependent (not sure), and it only matters when deploying to real vms
[05:08] yeah
[05:09] perhaps a short discourse post on getting it working?
[05:09] yup. i didn't realise it was an issue
[05:09] i just did it for expediency
[05:09] no need for nested lxd when testing things out
[05:10] perhaps we should have a CDK local bundle in the store for people testing it out?
[05:10] one that doesn't use nested lxd
[05:10] well, one that doesn't specify containers in the bundle
[05:11] ybaumy: sorry that you had such a negative experience with it
[05:11] ok, so first /var/run/kubernetes is not created on the master.. then i had to follow this https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/507 https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Deploying-on-LXD https://github.com/lxdock/lxdock/pull/49/commits/39b19f90ad8f7331c5e49ba27fba7446be254ea7
[05:11] thumper: i *think* the guys said they were going to remove the nested lxd container anyway
[05:11] https://askubuntu.com/questions/863675/lxc-lxd-and-apparmor-permission-denied-attempted-to-load-a-profile-while-confi https://stackoverflow.com/questions/48190928/kubernetes-pod-remains-in-containercreating-status https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/262
[05:12] ybaumy: so that's CDK, not kubernetes-core?
[05:12] ybaumy: didn't matter for me.. had the same issues
[05:12] the kubernetes-core is what i'd recommend to play with
[05:12] i just searched google
[05:13] for similar problems
[05:13] was using kubernetes-core
[05:13] first stable, then edge
[05:14] i pretty much ran from error to error
[05:14] wallyworld: have you changed your default lxd profile like the wiki linked above suggested?
[05:15] https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/262 this is the latest state i was in
[05:15] at the bottom
[05:16] ybaumy: here's my bundle that i tweaked; i simply deployed easyrsa to a new machine (2) instead of a nested lxd https://pastebin.ubuntu.com/p/5p5W6YnrHF/
[05:16] the system pods were not starting, and then i gave up 2 weeks ago
[05:17] ok, have to try that tonight since i have to go to work in half an hour
[05:18] i'm not a k8s expert - have not seen the error you mention
[05:18] but i know my tweaked bundle seems to work every time i have tried
[05:18] the only change i made is to introduce that new machine 2 instead of using a nested lxd
[05:19] as deploying locally uses lxd for the top level vms
[05:19] so no value in nesting
[05:19] ybaumy, it's worth having a check that ur lxd version is >= 3
[05:19] wallyworld: well, this is the thing i guess. lxd is not working properly
[05:19] nesting does require special kernel things / privileges that i do not fully understand
[05:20] http://pastebin.centos.org/931976/
[05:21] those are the versions i'm using now
[05:21] juju is beta because i tried stable first and thought that might fix it
[05:21] ybaumy: funnily enough, the beta channel is behind the release channel just now
[05:21] :(
[05:22] wallyworld: and yes, you are right - without the changes to lxd i did, the lxd containers were not even starting
[05:22] thumper: good to know
[05:22] ybaumy: release is 2.4.0
[05:22] stable I mean
[05:22] thumper: i remember getting the mail for the release, yes
[05:23] but the problem is either kube-core or lxd
[05:23] to be honest though, rc3 == .0 except for the version number
[05:23] ybaumy: i would be very interested to see if the tweaked bundle file without nested lxd works. let us know
[05:23] let's rule out nesting as an issue
[05:23] wallyworld: will post my feedback tonight here in the channel
[05:23] * thumper has to go make dinner for minions
[05:23] if there's still an issue it will be easier to find
[05:23] see y'all tomorrow
[05:24] o/
[05:24] i may be out tonight but will check tomorrow
[05:24] will be online tomorrow morning from 0500-0700 CET here
[05:24] i'm GMT+10
[05:25] gotta go to work later
[05:26] wallyworld, can i get u for a few minutes?
[05:26] sure
[05:26] HO?
[05:26] yup
[05:53] kelvin_: wallyworld: i think u may have addressed this one already https://bugs.launchpad.net/juju/+bug/1764649 right? so we can close it?
[05:53] Bug #1764649: juju caas remove-relation does not work
[06:01] anastasiamac, yes, it's been resolved, we can close it.
[06:01] kelvin_: there's a method on application, IsRemote() - use that instead of the NotFound check
[06:01] wallyworld, ok, ic. thanks
[06:04] anastasiamac, I just closed it. sorry for the confusion.
=== frankban|afk is now known as frankban
[07:38] wallyworld, got a tiny PR, would u help take a look when u got time? thanks https://github.com/juju/juju/pull/8909/files
[09:15] stickupkid: This one is almost too trivial for review, but here 'tis anyhow: https://github.com/juju/juju/pull/8910
[09:18] manadart: haha - too easy :D
[09:21] manadart: i've turned on verbosity on tests (-check.v) in a branch; that will hopefully tell me which test is causing most, if not all, merges to fail
[09:21] manadart: commit for reference: https://github.com/juju/juju/pull/8894/commits/abe85287b6053bcc798b49e18a7c846af9a344ae
[09:22] manadart: i believe it's down to luck if your merge lands now - about 1 out of 3 requests tends to merge
[09:26] stickupkid: OK, thanks.
[11:24] stickupkid: Are you going to land the approved PRs?
[11:25] manadart: would love to
[11:25] manadart: $$merge$$ just doesn't like me
[11:26] manadart: see http://ci.jujucharms.com/job/github-merge-juju/706/consoleFull
[11:26] manadart: check out this - https://github.com/juju/juju/pull/8894
[11:26] manadart: about 5 merges :(
[11:43] manadart: it keeps getting stuck in the same place - github.com/juju/juju/cmd/juju/application
[11:45] jam: any thoughts on the hanging tests when trying to merge?
[11:47] https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/608
[11:48] somebody please take a look at this
[11:48] logs from the cdk field agent are attached
[12:59] Hi! I'm using Juju+Maas. Do I need sudo for any of add-cloud/add-credential as per https://askubuntu.com/questions/1053630/juju-ever-need-sudo-for-install ?
[13:47] externalreality: good morning - i looked at the pr and responded to some stuff. did I miss any outstanding questions?
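
Stepping back to the kubernetes-core-on-lxd thread from earlier this morning: a minimal sketch of the tweak wallyworld described, for anyone reading the log without his pastebin. The fragment below is reconstructed from the discussion (easyrsa moved from a nested lxd container to its own top-level machine), not copied from the real bundle, so treat the machine numbering and placement as illustrative; it also assumes the old charm-tools CLI is installed.

    # Grab the stock bundle so it can be edited locally:
    charm pull kubernetes-core
    # In the pulled bundle.yaml, add a third machine and repoint easyrsa:
    #   machines:
    #     "0": {}
    #     "1": {}
    #     "2": {}            # new top-level machine
    #   applications:
    #     easyrsa:
    #       num_units: 1
    #       to: ["2"]        # was ["lxd:0"] - the nested container
    juju deploy ./kubernetes-core/bundle.yaml

Since a local lxd cloud already puts every top-level "machine" in a container, dropping the nested placement loses nothing and sidesteps the kernel/apparmor requirements of lxd-in-lxd.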
[13:49] hml, good afternoon - let me check. I see you've responded to some stuff just 2 minutes ago
[13:50] hml, let me see what's new
=== chrisp_ is now known as chrisp262
[16:13] rick_h_: in order to differentiate between lxd and lxd-remote for reporting, you might have to be explicit in the cloud spec about which one you want to interact with
[16:15] rick_h_: https://pastebin.canonical.com/p/Y7JNphH8gr/
[16:18] stickupkid: looking
[16:19] stickupkid: k, interactive isn't the normal interactive from add-credential, but is used in azure, as you're sent over to azure to fill out data
[16:19] stickupkid: or is interactive the trust password setup?
[16:20] rick_h_: interactive in this case is pointing to a file, but yeah, that will be the trust password later on
[16:20] gotcha, ok
[16:20] s/file/cert file/
[16:20] rick_h_: it's more the fact that we now have two different lxd providers, which i'm not sure about
[16:21] lxd and lxd-remote
[16:21] stickupkid: cool
[16:21] stickupkid: well, I'd ask if it's lxd-remote or lxd-cluster it should be?
[16:22] rick_h_: i guess you could have one node in a lxd-cluster
[16:22] stickupkid: right, you'd have "turned on lxd cluster"
[16:23] rick_h_: https://pastebin.canonical.com/p/TnFHbvgMYG/
[16:24] rick_h_: so when adding a cloud, it will come up with two different cloud types
[16:24] stickupkid: yea, I think we need that cluster word since that's the tutorial/wording of setting it up in lxd
[16:24] stickupkid: hmm, yea... but can you add a lxd? /me looks if that's there currently
[16:25] rick_h_: ok, that's fine, quick change, but are we ok with that setup? because it's a bit weird
[16:25] stickupkid: right, so lxd doesn't currently show in add-cloud
[16:25] stickupkid: so it should just be one lxd in there, and I'd use lxd-cluster vs "remote"
[16:25] rick_h_: which branch?
[16:25] stickupkid: current 2.4.0 release
[16:26] ok, that's interesting
[16:33] rick_h_: glad you noticed that, was a nice easy fix :D
[16:33] stickupkid: :)
[16:33] stickupkid: thank you for bringing up something that didn't quite look right
[16:33] appreciate the eye for "hmmm, this isn't ideal"
[16:33] rick_h_: https://pastebin.canonical.com/p/X8zV8gFPNk/
[16:34] stickupkid: yea, that looks peachy to me
[16:34] rick_h_: that looks better, i'll do some testing to make sure it works well
[16:34] stickupkid: <3
=== frankban is now known as frankban|afk
=== jhobbs_ is now known as jhobbs
[19:56] how does juju know where to deploy subordinate charms? Does it somehow figure it out from the relation information in the bundle.yaml?
[20:20] knobby: exactly. Subordinates don't exist by themselves and only come into play once related to something in the model
[20:23] rick_h_: if you don't mind me digging into specifics here: why is it that in https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml flannel is created on the masters and workers, but not on etcd? How does juju know that the relation to etcd doesn't mean to attach flannel to that machine as well?
[20:24] knobby: looking, sec
[20:24] no rush, rick_h_
[20:24] knobby: so when you look at the relation definitions here: https://api.jujucharms.com/charmstore/v5/~containers/flannel/archive/metadata.yaml
=== dames is now known as thedac
[20:25] knobby: you can see that two of them are "container" scoped, which basically means flannel needs to be installed/set up on the machine for those relations
[20:25] knobby: but note that's not so for the etcd relation, which flannel just uses for data storage. So it's more like a mysql/postgres db in that relation context. You'd not install your app onto the same machine as your data store
[20:25] ok, so if you build a subordinate and don't have container scope, you end up with something that won't deploy without placement?
[20:26] knobby: check out https://docs.jujucharms.com/2.4/en/authors-subordinate-applications and the section "Declaring subordinate charms"
[20:26] rick_h_: I completely get the desire not to install on etcd, I just didn't see how juju could determine that was the intent
[20:26] knobby: even with placement I don't think it'll deploy, tbh
[20:26] ok, thanks
[20:27] knobby: so it's doing that because it knows the relation from flannel->etcd is on the etcd endpoint, and that endpoint is not container scoped
[20:33] thanks, rick_h_
[20:43] Morning all o/
[20:50] morning veebers
[20:52] morning team
[20:52] veebers: seems we have some confusion around the snap build jobs, see rick_h_'s email
[20:53] thumper: morning
[20:54] thumper: looking now
[20:56] rick_h_: ah, it was probably disabled in one of the releases and not re-enabled. Seems like a process failure
[20:56] rick_h_, thumper: actually I think I may have disabled it during a release, as I wasn't so sure myself
[20:56] veebers: hmm, ok. I didn't know of any reason we'd have stopped the 2.5 dev builds for a release, since I started them up after 2.4.0 final
[20:56] veebers: but maybe they were turned off manually as part of that
[20:57] rick_h_: I think I was confused about what might get overwritten and did both jobs, sorry
[21:00] veebers: ok, all good.
[21:01] veebers: I'll have to ponder if we can do any sort of warning flag on this. It's so easy to miss it for a length of time because we're used to the snaps being automatic
[21:02] rick_h_: maybe it could be as easy as having a rotating card between teams where we check the output of 'snap info juju' at 1 or 2 standups a week
[21:03] veebers: but but but, what about a giant dashboard with pretty pictures where things go red when they're not updated after 3 days and it can email everyone and ... or maybe we just check a couple of times a week :)
[21:06] rick_h_: hah ^_^, we could put an html file on people.c.c that is either red or green as part of the process if you like
[21:08] who's the current expert on the juju/interact/pollster code? and the juju add-cloud schemas?
[21:10] thumper, rick_h_, wallyworld ^^ :-)
[21:10] hi thumper
[21:20] vino: you're up early
[21:20] because i slept early.
[21:20] :)
[21:20] have a min?
[21:21] it's regarding the PR.
[21:22] thumper: i haven't landed it. Apologies. Because of the tweaks u have asked me to do, esp the state obj ModelTag().
[21:22] What you have mentioned is reasonable to add there. But it created lots of ambiguity in other facades.
[21:22] which is not relevant to fix for that PR
[21:23] however, i did push the commit. Still more issues to fix because of the ModelTag() addition in the state obj.
[21:23] I'm not sure I understand
[21:24] what are the issues to fix?
[21:24] ok, 1:1
[21:24] sure
[21:58] Is sudo needed for add-credential when you first join Juju->MaaS?
[22:02] ...I'm getting a permission denied on doing add-credential onto a by-the-book new install
[22:30] los_: I don't think so
[22:30] permission denied on what?
[22:30] los_: and which version of juju?
[22:36] "Latest", one sec for details. This is on 18.04 for the control node.
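
Circling back to rick_h_'s container-scope explanation above, a sketch of the metadata.yaml shape that drives subordinate placement. The endpoint and interface names here are illustrative rather than copied from the real flannel charm; the key detail is that only relations over a container-scoped endpoint pull the subordinate onto the principal's machine.

    # metadata.yaml of a hypothetical subordinate charm:
    #   subordinate: true
    #   requires:
    #     host:                     # scope: container => co-located with the
    #       interface: example-cni  #   principal it is related to
    #       scope: container
    #     datastore:                # default (global) scope => just a client
    #       interface: etcd         #   relation, no co-location
    #
    # The container-scoped relation is what creates the units:
    juju add-relation my-subordinate:host kubernetes-worker   # units appear on the workers
    juju add-relation my-subordinate:datastore etcd           # no units from this one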
[22:41] Juju is 2.4-rc3-bionic-amd64 and MaaS has no --version (I just installed it yesterday) [22:42] I zapped the .local/share/juju between iterations [22:43] MaaS version from the webUI is: 2.4.0~beta2 (6865-gec43e47e6-0ubuntu1) [22:46] thumper: could the unit test failures be related to the mongo issues mentioned re: the pr check/merge failures? It seems to largely be the same test(s) failing across the arches [22:47] veebers: which failure is it? [22:47] thumper: rick_h_ mentioned that it appears to be mongo related, when I asked him if it was a ppa add failure that was causing the PR grief [22:47] los_: what is the permission denied error? is there more shown than just that? [22:48] thumper: Details here: https://askubuntu.com/questions/1053630/juju-ever-need-sudo-for-install [22:48] * thumper looks [22:49] hmm... [22:50] thumper: I just zapped my .local/share/juju, did add-cloud, ok, then $ juju add-credentials blah and got "ERROR cannot get current controller name: permission denied" [22:50] thumper: ok, so looks like the PR failures are the same as we're seeing in the ci unit test failures (just a quick eyeball) [22:50] as to the reason of failure I dn't have any more info [22:50] This is different than the error in the SO so sorry for that [22:51] los_: there was a bug around people using sudo to do things by mistake and it changing permissions on the user files to root [22:51] we had a bug fix for that recently, but I can't remember exactly where it went [22:51] thumper: This page https://docs.jujucharms.com/2.4/en/clouds-maas has the sequence add-cloud, add-credential, bootstrap ... is that right? [22:51] los_: can you pastbin 'ls -al ~/.local/share/juju/ ' ? [22:52] thumper: thanks, that's cool...sudo is a button pushed all too often. The how-to page (above) didn't say to use sudo but.... [22:52] no, it shouldn't need sudo [22:52] thumper: Which pastebin is the one that works this week :) ? [22:52] pastebin.ubuntu.com [22:52] that works [22:52] never fails me :) [22:53] thumper: oh ok thanks :) was shocked (but shouldn't be) that the "real one" is gone. Didn't notice! [22:53] there was a real one? [22:54] thumper: you know, the various public ones that got wonky and went poof (full of hacks) [22:54] the other thing I'm thinking... is around the validation of the oauth token, but I would have though that if the validation failed, it would have a better error message [22:54] when we add a credential, a validation check is made to the endpoint to check the validity of the credential [22:54] thumper: https://pastebin.ubuntu.com/p/SsBcx4SkJk/ [22:55] thumper: https://pastebin.ubuntu.com/p/Wc5XGysbmX/ [22:55] ok... [22:56] how about running the juju add-credential with --debug? [22:56] I think you may have found a real bug... but one the developer didn't hit because they already had a current controller [22:57] but it is weird... [22:58] thumper: Glad to... so I do this to get fresh API key: $ sudo maas-region apikey --username=los [22:59] ok [22:59] thumper: ...and "los" is a valid (non-admin) user (is that the bug?) that is showing up in the MaaS webUI [22:59] * thumper nods [23:04] thumper: Here's the (getting longer, most-recent on top) paste with that debug: https://pastebin.ubuntu.com/p/HMQcrQzNWF/ [23:04] * thumper looks [23:05] los_: looks like you have another file lying around from a previous sudo [23:05] * thumper gets particular location [23:05] thumper: yay...I was going "Hey, I should strace this..." locations are good tho! 
[23:06] thumper: (It's been like 2 years since I messed with MaaS / Juju and wow, has it changed for the better!)
[23:06] ok...
[23:07] go and look for /tmp/juju-*
[23:07] there may be something like /tmp/juju-store-lock-(hash)
[23:07] probably owned by root
[23:08] thumper: "23:08:33 INFO cmd supercommand.go:465 command finished" wo0t! Thanks!
[23:09] thumper: So, on to the "other bug" :)
[23:09] so that was it?
[23:09] I think we need a better error
[23:09] to lead users to the problem
[23:10] thumper: I did a bootstrap several times, and by the console of the MaaS (I agree!) the config script was "done" in 62 seconds, but the lonnnnnnng timeout on the bootstrap had it hang for a go-see-a-movie amount of time
[23:10] thumper: I have a getting-longer set of notes about some things I've seen, also errors that are vague, etc. Will supply!
[23:10] ok...
[23:10] so... when you bootstrap, maas considers it done when it has installed the OS on the machine and handed it off
[23:11] juju gets the machine and runs cloud-init on it
[23:11] thumper: like "los@davros:~$ juju add-credential maas-cloud ERROR cloud maas-cloud not valid"
[23:11] thumper: Right... but the bootstrap command itself takes (1hr?) a long time to "finish"
[23:11] what juju is doing is running apt-get update / upgrade on the os
[23:11] installing a few packages
[23:11] and configuring mongo and the agents
[23:12] it shouldn't take an hour...
[23:12] Right, it gets to that point... but then
[23:12] unless your internet connection is a very small pipe
[23:12] thumper: So now I'm doing (150Mb/sec): JUJU_LOGGING_CONFIG="<root>=TRACE" time juju bootstrap daleks davros
[23:12] oh...
[23:12] don't do that
[23:12] root level TRACE is a massive firehose
[23:13] thumper: Doh... (and it's deploying 16.04... that's normal-good, yah?)
[23:13] you only really want to trace particular modules
[23:13] yeah, we won't default to bionic until after 18.04.1
[23:14] you can ask for bionic though
[23:14] thumper: nah, it's cool, as long as it works :) in a typical way so I can get onto the next (dev) steps...
[23:14] for your 1 hour bootstraps
[23:14] a thing to do is to ssh in after it is done
[23:14] and look at the cloud-init logs to see if we can work out what is taking all the time
[23:15] /var/log/cloud-init......
[23:15] thumper: what trace level would be helpful? Right. I read about the ssh (very cool). I will do a no-trace bootstrap right now. I have a console monitor plugged into that unit.
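
A sketch of the "trace particular modules" advice, using the same JUJU_LOGGING_CONFIG mechanism los_ tried above - keep the root logger at DEBUG and reserve TRACE for one module (the module name here is only an example):

    # Root-level TRACE drowns you; scope it instead:
    export JUJU_LOGGING_CONFIG="<root>=DEBUG;juju.provisioner=TRACE"
    time juju bootstrap daleks davros
    # On a live controller it can be dialled back without re-bootstrapping:
    juju model-config -m controller logging-config="<root>=INFO"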
there are proxy config you can set up if using proxies [23:19] it needs to download the agent binaries from streams.ubuntu.com [23:19] thumper: aha, it handled my "no nodes in AZ default" thing well. [23:19] there are ways to bootstrap completely off-line [23:20] thumper: hmm...ok let's see how this goes. (obv'ly it'll be a while with 16.04 loading) [23:20] thumper: thanks for the help: I'm glad to get to have a chance at seeing a useful error instead of one of my own making [23:20] well, if it got to fetching the juju gui, 16.04 is up and running [23:20] thumper: I'm just going by what is the latest message on the juju... [23:21] thumper: well, the MaaS console too says that (Deploying...) [23:21] I *think* the update/upgrade is before fetching the gui [23:21] I don't bootstrap onto maas much myself though [23:22] thumper: I think so too. There'd be nowhere to fetch TO... oh, another thing I noticed, wow, the images are way smaller! [23:23] thumper: this is all for a live demo (with 3 node hardware behind plexi sound barrier for drum risers) of MaaS / Juju on Fri at our OKC LugNuts group [23:23] thumper: Oklahoma City, USA [23:23] nice [23:24] thumper: the console VGA shows its blazing away installing all kinds of packages (was at libc-bin) [23:25] thumper: I know too many top-down devs (JS, Python) that are not SysAd/DevOps that need this, and so to show "hey this is not brain surgery...run your stack at home on-demand" SetiAtHome is my 1st charm I'm working on, then ngrok, then the Flamenco render farm controller for Blender rendering. The last one won't be done (beast mode with MongoDB etc.) by then. [23:26] anastasiamac: a thought, we could probably update the release process so that a stable release auto updates the bugs (fixed released, move to next milestone if not fix committed etc.). Although I'm not up to speed with the previous discussions surrounding this earlier [23:26] los_: that is very cool [23:26] thumper: thanks, trying to share since our local scene is cool and helps me more... [23:27] veebers: we had strong guidance that a bug shouldn't be maked fixed released until it is in a non beta non rc release [23:27] thumper: the classic "mysql/mediawiki/haproxy/expose" demo blew these guys minds, esp. when "oh you can make this 100x bigger in minutes"...like for some viral thing that is blowing up...they made funny faces [23:27] veebers: so our .0 release process should go back through the older milestones [23:28] los_: yeah, we need to get better at telling our story around this [23:28] a big focus for us recently has all been around openstack and kubernetes [23:28] los_: have you looked at JAAS? [23:28] jujucharms.com [23:29] there are canonical hosted controllers, so people can just start using the public clouds [23:29] thumper: ack, it would only be for stable releases that we do some sort of automated bug updating etc. [23:29] thumper: I tell ya, except for aws being so slow to provision / de- it was a killer demo to these guys esp. when I tore it down to 1/1/1 again. [23:29] thumper: YES! I was shocked to see JAAS! Its an obvious idea once you see it but it still surprised me [23:29] from the user's point of view, it is one controller [23:30] and many models on that controller, sometimes in different clouds or regions [23:30] thumper: right makes sense. 
[23:31] thumper: this is where we are, The Time of The Great Pausing: https://pastebin.ubuntu.com/p/CfzDfNbPqW/
[23:31] los_: fyi, we are in the process of moving email discussions to our new discourse instance: discourse.jujucharms.com
[23:32] los_: are you installing juju from a snap?
[23:33] thumper: very cool. Discourse has come a long way too. Ok, the console of the machine (yes, with --beta and --classic) says something like "Cloud-init v 18.2 finished at..."
[23:33] los_: right, you might want to move off --beta to release (or candidate)
[23:33] we need to fix our snap channels...
[23:34] beta still has rc3, whereas stable is .0
[23:34] candidate is the latest .1 builds
[23:34] thumper: Ok, I will then... this is the network config: https://pastebin.ubuntu.com/p/fbqB9Vf7zt/
[23:34] los_: I think the output of fetching the gui is a lie
[23:34] I think it has just looked up the version
[23:35] thumper: you know, with messages the "liar" is just an old message... needs more messages?
[23:35] the time at 28 minutes past is when juju bootstrap was able to ssh into the instance
[23:35] and maas should now show it as running
[23:35] however,
[23:35] this is where juju is now doing the apt update / upgrade
[23:35] and installing packages
[23:35] downloading mongo etc
[23:36] thumper: MaaS webGUI is showing the unit on, with Status "Ubuntu 16.04 LTS" and my user
[23:36] I *thought* that we had more output on the script when running with --debug
[23:36] thumper: NOTE: as per before, there is no routing between my 192.168.29.X network (upstream to internet) and the 192.168.4.X net with the workers on it
[23:37] los_: this might be the cause of the great pause
[23:37] do you have a proxy set?
[23:38] thumper: 18.04 has some strong differences, and I have two serious netplan bugs brewing (outrageous!), but having LOOKED INTO netplan, wow, there's some powerful stuff in there for "oh, bond all this together into a bridge and ..." stuff
[23:38] * thumper nods
[23:39] we are rapidly getting out of my knowledge area... sorry
[23:39] No proxy set, but I have seen the "set it in clouds.yaml" ... since the workers can reach out via the HTTP proxy and get their stuff (it seems, since all those packages went whirling by)
[23:39] sorting out the routing etc
[23:39] so...
[23:39] perhaps they have outward routing but not inbound?
[23:40] if that is the case, it should be fine... (ish)
[23:40] thumper: no problems... I can set routing between the two interfaces etc... but I had gotten to this point previously by running juju bootstrap with evil sudo... so this is one big improvement that will help me get to the next step
[23:40] veebers: yeah, but we could do it f-2-f at the sprint. this only applies to .0 releases and we r not likely to b bitten by it more than once until sep
[23:40] Well, there's the httpd proxy in MaaS... that's the only way they can reach out to the world
[23:40] ok...
[23:41] then the instances are probably coming up pointing to that proxy...
[23:41] which means juju should curl the agent binaries just fine
[23:41] los_: I'm going to have to head out shortly, but let us know how it goes
[23:41] if I control-\ it, there are good clues about what it was trying to do, but I'm gonna let it go and see what a natural timeout gives me.
[23:42] thumper: thanks a ton, seriously, this is "phew" for the Friday demo, I'm getting there.
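
Two concrete sketches from the exchange above: moving the juju snap off --beta as thumper recommends, and handing the MAAS-side HTTP proxy to juju if that proxy is the only route out of the deployment network. The proxy address and port are illustrative (MAAS's internal proxy conventionally listens on port 8000 of the region/rack controller), and the proxy keys can also be set later with juju model-config.

    # Track the stable channel instead of beta:
    sudo snap refresh juju --channel=stable --classic
    snap info juju                      # confirm the tracked channel/version
    # Bootstrap with explicit proxy settings (address/port illustrative):
    juju bootstrap daleks davros --debug \
        --config http-proxy=http://192.168.4.2:8000 \
        --config https-proxy=http://192.168.4.2:8000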
[23:42] nice
[23:42] thumper: dev'ing the charms with the aws provider in parallel
[23:42] thumper: gonna have fun hitting a script and having machines turn on in performance mode and be really annoying
[23:42] thumper: thanks again.