[02:00] <kelvinliu_> wallyworld, could we have a quick chat?
[02:00] <wallyworld> sure
[02:00] <kelvinliu_> wallyworld, thanks, standup HO?
[02:04] <veebers> wallyworld: FYI pushed up the latest for the cloud container (also: https://pastebin.canonical.com/p/YQGrRfcS2S/, note my dumb message for active needs to be better)
[02:04] <wallyworld> veebers: will look after kelvin
[02:04] <veebers> awesome, thanks
[02:12] <veebers> hmm, might have to re-run my full test, may have mucked it up
[02:15] <babbageclunk> wallyworld: I think the leases in that bug are a red herring. We don't have any way to look at that system anymore, do we?
[02:16] <wallyworld> babbageclunk: might do as it's an IS model
[02:17] <babbageclunk> wallyworld: The reason I ask is that there's a note at the bottom saying he'll need to clear it out soon and it was a week ago.
[02:20] <wallyworld> babbageclunk: ah ok, might not have it then. maybe just leave a note on the bug and ask for more info?
[02:20] <wallyworld> or we can hack the db to orphan a lease?
[02:20] <babbageclunk> Yeah, doing that now. I can't find anything that would cause a lease to keep a model around.
[02:26] <kelvinliu_> wallyworld, babbageclunk the reset apiRoot issue is solved. I can see workers started, then can see lots of NEW errors which is great! thanks for the help!
[02:26] <wallyworld> yay
[02:27] <babbageclunk> kelvinliu_: oh awesome - what was the problem?
[02:27] <babbageclunk> (ha, yay lots of new errors! :)
[02:28] <kelvinliu_> babbageclunk, i wrongly removed the a.root.rpcConn.ServeRoot call in the login method because it caused a different error before..
[02:28] <babbageclunk> ah, right
[02:28] <kelvinliu_> babbageclunk, yeah, it's expected to have lots of errors! lol
[02:29] <wallyworld> veebers: there's still this: "workload   active       Instantiating pod"
[02:30] <wallyworld> it shouldn't be active if the pod has not come up yet
[02:30] <veebers> wallyworld: aye, that's the charm setting 'active', with message 'Instantiating pod' (it does that just after setting the podspec)
[02:30] <veebers> that's the crummy message I put there
[02:30] <wallyworld> right, but if the container status is not there or is blocked/waiting, we need to filter that
[02:31] <wallyworld> filter the active status (regardless of message)
[02:31] <wallyworld> as it's not active yet
[02:31] <veebers> wallyworld: aye, that happens; That pastebin is me deploying without setting trust, it goes into error (you see the pod errors there) then setting trust, it goes through and sorts it all out etc.
[02:32] <wallyworld> but the workload status goes through an active state
[02:32] <wallyworld> which is wrong
[02:32] <wallyworld> as the pod is not up at all at that stage
[02:33] <veebers> wallyworld: ah right you are, yeah that idea of ours of the charm setting active when setting the pod spec is wrong. Let me address that
[02:33] <wallyworld> we said it could do that
[02:33] <wallyworld> because it has no way to know otherwise
[02:33] <wallyworld> hence we need to have that filter to correct it
[02:35] <wallyworld> gitlab would set blocked initially, but when the relation is joined it will set active then, and as with mariadb, the pod may not be ready then either
[02:35] <veebers> right, so at the moment it'll set the pod spec, that will come through and we probably won't have a container status with it, nor any historic ones so it uses the unit status. It needs a tweak there
[02:35] <wallyworld> yup, i think we said if container status is missing, count that as waiting for container
[02:36] <veebers> yeah, that's what we have. I'm re-running as I may have screwed up what I was actually running against.
[02:36]  * veebers triple checks that unit test
[02:57] <veebers> wallyworld: ah, having a look it appears to be because AddUnitOperation.Done(..) calls SetStatus for unit status, which calls probablyUpdateHistory etc. adding a unit test for that and looking at how to resolve
[02:57] <wallyworld> veebers: right, but the map of global key to status should have been updated to have the inferred status
[02:58] <veebers> wallyworld: that's UpdateUnitOperation, not Add*
[02:58] <wallyworld> ah ok. makes sense. so similar fix needed there also
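The filtering rule discussed above (don't trust a charm's "active" until the container is actually up; a missing container status counts as waiting) can be sketched roughly like this. All names here are illustrative, not juju's real API:

```go
package main

import "fmt"

// Status is a simplified stand-in for juju's richer status model.
type Status string

const (
	Active  Status = "active"
	Blocked Status = "blocked"
	Waiting Status = "waiting"
)

// displayStatus applies the rule from the conversation: the charm sets
// "active" right after setting the pod spec, before the pod exists, so
// an "active" unit status with no container status (or one that is
// still blocked/waiting) must be downgraded. Hypothetical sketch only.
func displayStatus(unit Status, container *Status) (Status, string) {
	if unit != Active {
		return unit, ""
	}
	if container == nil {
		// No container status at all: count it as waiting for container.
		return Waiting, "waiting for container"
	}
	if *container == Blocked || *container == Waiting {
		// Container not up yet: surface its status, not the charm's.
		return *container, ""
	}
	return unit, ""
}

func main() {
	// Charm set active just after the podspec; no container status yet.
	s, msg := displayStatus(Active, nil)
	fmt.Println(s, msg) // waiting waiting for container
}
```

The point of the sketch is that the correction happens at display/inference time, since the charm has no way to know whether the pod is up when it sets its status.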
[03:27] <wallyworld> babbageclunk: +14/-4 :-) https://github.com/juju/juju/pull/9185
[03:32] <babbageclunk> wallyworld: looking!
[03:46] <babbageclunk> wallyworld: approved
[03:46] <wallyworld> tyvm
[03:48] <wallyworld> babbageclunk: i need the extra check - !os.IsNotExist(err) returns true for nil err and thus returns without doing the download. and worse, the returned error is nil and so the caller thinks there's nothing wrong
[03:50] <babbageclunk> Oh right!
[03:50] <wallyworld> babbageclunk: a subtle bug - we were getting nil charm urls *sometimes* (and no errors logged)
[03:51] <babbageclunk> No, hang on - if the err is nil that means the file is there, right?
[03:51] <babbageclunk> (and there was no other error statting it)
[03:52] <babbageclunk> Oh, I see
[03:52] <wallyworld> the dir is there
[03:52] <babbageclunk> ah
[03:52] <babbageclunk> yup
[03:52] <wallyworld> and we weren't therefore setting the url to return
[03:52] <babbageclunk> doh
[03:53] <wallyworld> a fine mess
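A minimal reproduction of the subtle check described above (function names hypothetical): `os.IsNotExist(nil)` is false, so `!os.IsNotExist(err)` is true for a nil error, and a successful stat takes the early-return path with a nil error — the caller sees no failure and the download never runs:

```go
package main

import (
	"fmt"
	"os"
)

// buggyEnsure mirrors the faulty check: when the path exists, err is
// nil, !os.IsNotExist(nil) is true, and we return a nil error without
// ever doing the work — a silent no-op.
func buggyEnsure(path string, download func() error) error {
	if _, err := os.Stat(path); !os.IsNotExist(err) {
		return err // err may be nil here: caller thinks nothing is wrong
	}
	return download()
}

// fixedEnsure adds the extra err != nil check, so only real stat
// failures (other than "does not exist") short-circuit; otherwise we
// fall through and do the work.
func fixedEnsure(path string, download func() error) error {
	if _, err := os.Stat(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return download()
}

func main() {
	ran := false
	dl := func() error { ran = true; return nil }
	// "." always exists, so the buggy version silently skips the work.
	_ = buggyEnsure(".", dl)
	fmt.Println("buggy ran download:", ran) // false
	_ = fixedEnsure(".", dl)
	fmt.Println("fixed ran download:", ran) // true
}
```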
[04:06] <veebers> you have a moment for me to pick your brain? I was hoping to copy the pattern used in UpdateUnitOperation (creating the ops for status) so we can seed what status doc gets used for history (and to avoid using setStatus in Done as that sets history). This code errors 'not found' because the createStatusOps from the addUnitOps call hasn't run yet: http://paste.ubuntu.com/p/DSnYcYwwFS/ thoughts?
[04:27] <veebers> wallyworld: ^^ d'oh never actually pinged you, that wall of text is for you :-)
[04:28] <veebers> is there a nice way to create and apply ops in the Done method? That seems a bit off though
[04:28] <wallyworld> veebers: looking
[04:31] <veebers> wallyworld: would it be sensible to have the 'new' code not in Build, but as part of done, some ops.application.db().Run(func() { the stuff I have there that will return status ops })
[04:32] <wallyworld> veebers: might be easier to jump on hangout
[04:33] <veebers> ack
[04:33] <veebers> jumping in standup
[04:46] <Doctor_Nick> what sort of dark secrets were revealed on that hangout? we may never know
[05:27] <veebers> Doctor_Nick: lol, a bit of going in circles then realising late that the solution we came up with won't work for all cases and then starting the process again ^_^
[08:06] <boritek> hello
[08:06] <boritek> how can I change the virtual IP for the juju cloud-controller?
[08:46] <boritek> guys, after the host restart I couldn't reach the juju gui anymore, so I started everything from scratch: deleted/killed the controller and recreated it with juju bootstrap, but now it hangs at the "Fetching Juju GUI 2.13.2" phase
[08:55] <boritek> and how can I set a static IP for the cloud controller while bootstrapping?
[11:23] <rick_h_> boritek: try bootstrapping with the --debug flag. When it says fetching, it's often kind of wrong about that stage. The next steps are boring and don't output stuff, but I'm guessing the issue is more there
[11:23] <rick_h_> boritek: as far as a static IP, on what cloud?
[12:15] <boritek> rick_h_: in the end it did recreate the controller, but it was slow
[12:15] <boritek> rick_h_: static IP for my maas-cloud-controller gui
[12:16] <boritek> rick_h_: am I right that it is an lxc container but is somehow hidden?
[12:16] <rick_h_> boritek: so it'll use the same IP as the controller itself for the GUI. It's served via the same http setup. The controller should listen to all IPs on the machine and so any IP on the machine can/should work
[12:16] <boritek> it does not show up with lxc list
[12:19] <rick_h_> no, there's no lxc by default
[12:19] <boritek> rick_h_: no juju controller (probably not lxc but snap) has a different IP than the phyisical machine underneath, which is the maas-controller
[12:19] <boritek> so juju-controller has a virtual IP, but I am not sure how to change it
[12:19] <rick_h_> boritek: ok, you've got a maas controller. When you bootstrap the controller will get a machine from MAAS to install onto
[12:19] <rick_h_> boritek: what machines are in your MAAS that Juju is pulling from? e.g. what node in maas shows up as used?
[12:21] <boritek> when I bootstrapped the juju controller I set it up to connect to the maas-controller underneath; beyond that, MAAS itself sees 32 physical machines
[12:21] <boritek> but i can see the snap app on the controller node via "snap list"
[12:21] <boritek> juju              2.4.3      5139  stable    canonical✓  classic
[12:22] <boritek> i guess it is not only the juju client app but also the controller and gui part too
[12:22] <rick_h_> boritek: so that just means the juju client is there. The controller does not use a snap
[12:22] <boritek> ah, so where is it then?
[12:22] <rick_h_> boritek: a controller only comes into being when you run juju bootstrap $cloud
[12:23] <boritek> yeah i ran that
[12:23] <boritek> so where it went to?
[12:23] <rick_h_> boritek: run `juju controllers`
[12:23] <rick_h_> boritek: that will list out all of your known controllers out there
[12:23] <rick_h_> boritek: and then you can use `juju switch x` to switch to the controller
[12:24] <boritek> maas-cloud-controller*  default  admin  superuser  maas-cloud         2         1  none  2.4.3
[12:24] <rick_h_> boritek: and `juju gui` to see the GUI from that controller
[12:24] <boritek> yeah i know that, gui works now
[12:24] <boritek> but i want to change its virtual IP
[12:24] <rick_h_> boritek: so that's going to be running on a MAAS node then.
[12:24] <boritek> and also have a problem that gui will stop working after the physical host restarted
[12:25] <rick_h_> boritek: if you want to know which node you can look at your maas dashboard or do this: juju switch controller; juju status
[12:25] <rick_h_> and it'll show the machine information of the controller in status
[12:25] <rick_h_> boritek: hmm, not sure on that. When you restart the jujud service should restart and the GUI is served via the same jujud your client talks to
[12:26] <boritek> there is no jujud service
[12:26] <rick_h_> boritek: it's running on that machine
[12:26] <boritek> i was also searching for stuff like that
[12:27] <rick_h_> boritek: so when you run juju status and see the 0: machine it should show you the IP of it
[12:27] <rick_h_> boritek: and you can use `juju ssh 0` to connect to the controller node
[12:27] <rick_h_> boritek: and see things like jujud running on it
[12:27] <boritek> juju status:
[12:27] <boritek> default  maas-cloud-controller  maas-cloud    2.4.3    unsupported  12:27:22Z
[12:27] <boritek> Model "admin/default" is empty.
[12:27] <rick_h_> boritek: right, you need to change to the controller model
[12:28] <rick_h_> boritek: `juju switch controller`
[12:28] <rick_h_> boritek: and try juju status again
[12:28] <boritek> ah ok
[12:29] <rick_h_> boritek: check out the tutorials. There's some good info in there on adding models, etc.
[12:29] <boritek> 0        started  10.189.242.63  eypwax   bionic  default  Deployed
[12:29] <boritek> this is the one
[12:29] <rick_h_> boritek: https://docs.jujucharms.com/2.4/en/tut-google and such
[12:29] <boritek> so how to change the IP?
[12:29] <rick_h_> boritek: right, that's the running controller machine in MAAS with that address
[12:29] <rick_h_> boritek: so that's up to MAAS and not Juju
[12:29] <boritek> ah
[12:30] <rick_h_> boritek: it's going to get the IP on the machine which I'm guessing is handed out/provided by MAAS
[12:30] <boritek> so it means it is running on a physical host?
[12:30] <rick_h_> boritek: since you're not on AWS/etc you don't have an elastic IP to stick on it via an API
[12:30] <rick_h_> boritek: correct
[12:30] <boritek> i thought this would be a container
[12:31] <boritek> yeah MAAS(-controller) is also the DHCP server
[12:31] <rick_h_> boritek: no, it gets a machine on whatever cloud you're using it against. So in AWS/GCE/etc it's an instance there. In LXD it's a container, etc
[12:32] <rick_h_> boritek: it's using the cloud-api to set things up on whatever cloud it's pointed at and MAAS can only provide instances from its pool like AWS can only provide instances from its pool
[12:32] <boritek> rick_h_: how can I ask the bootstrap process to deploy it to a selected machine?
[12:32] <boritek> or even better to a container?
[12:33] <rick_h_> boritek: so you can specify --bootstrap-constraints that guide it to use characteristics, or you can use MAAS to tag machines and to specify a tag at bootstrap or deploy time
[12:33] <boritek> rick_h_: yeah i have a pool, but what if i don't want it to be random
[12:33] <rick_h_> boritek: to do a container you have to do more work to manually add the container, register it in maas, tag it, and bootstrap to MAAS specifying that tag
[12:34] <rick_h_> boritek: well in the cloud world we very much follow the "think cattle, not pets" mantra
[12:34] <rick_h_> boritek: so you specify the type of machine you want as far as cpu, ram, etc and we ask the cloud for one
[12:34] <rick_h_> boritek: if you want to be that specific then you have to do things like unique tags or the like
[12:36] <boritek> ok, I see, i would prefer all kind of controllers to be on the same node
[12:37] <boritek> others could be more random, but it would also be nice to fill up machines with containers from top to bottom
[12:38] <boritek> rick_h_: well now i tried to login to the cloud-controller machine, but it does not let me in with the ubuntu user
[12:39] <boritek> does it not deploy my keys automatically as with other nodes with maas?
[12:39] <boritek> how can i login?
[12:39] <boritek> or same keys with the gui?
[12:39] <rick_h_> boritek: it should. You can use Juju to ssh in via `juju ssh 0` (machine or unit id)
[12:39] <boritek> ah
[12:40] <boritek> perfect
[12:40] <rick_h_> boritek: and then you can do manually with the SSH key that's in MAAS for the user that Juju is using and then ssh ubuntu@$IP
[12:40] <rick_h_> boritek: but you have to have your SSH key set up in MAAS for the user whose API key you're using
[12:40] <boritek> yes i have my keys in maas
[12:40] <boritek> and it worked for other physical nodes
[12:41] <boritek> but not here
[12:41] <rick_h_> boritek: hmm, not sure. It should.
[13:09] <boritek> rick_h_: juju does not communicate and share info with maas?
[13:09] <boritek> juju list-machines only sees 1 machine that it created for the controller
[13:10] <boritek> but i have some other machines deployed from maas gui
[13:38] <rick_h_> boritek: sorry was on the phone. So no, MAAS is just a cloud to Juju. If you go to the cloud and do work Juju ignores it. If you want Juju to manage things it has to be done through Juju. It only tracks work done through the controller using the Juju client.
[13:39] <rick_h_> boritek: and it'll communicate with MAAS about getting instances, what to run on them, etc. MAAS will communicate back details about the machines given, etc.
[13:39] <rick_h_> boritek: but Juju will not "pick up" stuff on the underlying cloud and auto add to itself any knowledge about it
[13:43] <boritek> ok, understood.
[13:43] <boritek> rick_h_: thank you very much for your help so far. I need to go now, but will continue working and learning about it tomorrow. Especially regarding the containers
[13:44] <rick_h_> boritek: cool np. Happy tinkering
[15:28] <manadart> externalreality: Landed that patch, which makes its follow-up ready to review: https://github.com/juju/juju/pull/9186/files
[15:28] <manadart> No rush; tapping out for the day.
[15:28] <externalreality> manadart, ack
[15:28] <externalreality> manadart, have a nice evening
[15:29] <manadart> externalreality: Cheers; have a good one.
[15:29] <externalreality> manadart, watchers driving me crazy, but will try :-D
[15:30] <manadart> externalreality: I muse about the watcher pattern sometimes. I usually come around to thinking about streams.
[15:32] <externalreality> manadart, roger that
[16:21] <asbalderson> Good day everyone!
[16:21] <asbalderson> Is it possible to use juju to deploy something like RHEL?
[18:03] <rick_h_> asbalderson: sure, there's an ubuntu charm that basically does that. It's setup to be the ubuntu series and just brings up an instance.
[18:04] <rick_h_> asbalderson: you could do one for centos I believe in most clouds or in your own MAAS with custom images
[20:09] <asbalderson> rick_h_: I've been having a lot of trouble browsing the charm store to find something like this; where can i find it?
[20:10] <asbalderson> also, thank you :)
[20:11] <rick_h_> asbalderson: so there's the ubuntu and ubuntu-lite charms that show the idea: https://jujucharms.com/ubuntu/12 and https://jujucharms.com/u/jameinel/ubuntu-lite/7
[20:11] <rick_h_> asbalderson: there's not a community contributed one for centos atm, it'd be a good thing to have submitted :)
[20:19] <magicaltrout> we also mulled over the idea of a more generic layer-basic a while ago, so it's easy to write charms for both ubuntu and centos.... if you feel inspired... ;)
[23:58] <NickZ> does anyone know where the documentation is on how config.yaml options are exposed to the install hooks?