[00:59] wallyworld: Do we have a convenient way for taking multiple model operations and applying them as one transaction?
[00:59] babbageclunk: yes, ModelOperation
[01:00] wallyworld: but if I have multiple ModelOperations is there a function to combine them into one?
[01:01] ie, I'm calling LeaveScopeOperation on n relation units, but I want to run them all as one transaction.
[01:02] Hmm, actually that might not be a good idea, since each one will change the unitcount and the last needs to remove the relation
[01:02] ok, I'll just run them as separate txns
[01:02] yeah +1
[01:19] kelvinliu: i don't understand the newPreconditionDeleteOptions stuff - why do we need to wait for UID sometimes and not others?
[01:20] wallyworld: that ensures it asserts the deleted resource has that UID
[01:20] and name is not sufficient?
[01:21] we seem to use both?
[01:21] ideally, we should follow this pattern for all resource deletions
[01:21] given name is unique, why is that not sufficient?
[01:21] name is unique, but the deletion is async
[01:23] we can't create another one with the same name until the current one is totally gone, not just in terminating state though
[01:23] and it's not deleted immediately, which usually takes a bit of time
[01:23] but we can't prevent non-juju ppl or apps creating a resource with the same name, right?
[01:24] true, but while it's being terminated, we can't create another, can we? you get an error from the api
[01:24] there was a discussion about this, i will find it for u
[01:24] ok
[01:25] if we need to change our code we should do it for everything rather than leave stuff done 2 different ways
[01:42] wallyworld: u can see i left a TODO for refactoring all updating/deleting places
[01:43] ok
[01:43] wallyworld: i think this change is really good to have.
[01:43] kelvinliu: is there a reason for swapping the order of update and create in the ensure() methods? we have been calling update() first then create()
[01:43] https://github.com/kubernetes/kubernetes/issues/20572
[01:44] yes, there is a reason, and i think we should refactor all the ensure methods like this later as well
[01:44] wallyworld: HO to discuss?
[01:44] sure
[01:48] hello everyone, every time I try to run juju add-unit kubernetes-worker i get `failed to start machine 28 (failed to acquire node: No available machine matches constraints: [('agent_name', ['93d500ee-7e14-4ece-81ae-69137d451f3a']), ('cpu_count', ['6']), ('mem', ['32768']), ('storage', ['root:1024']), ('zone', ['default'])] (resolved to "cpu_count=6.0 mem=32768.0 storage=root:1024 zone=default")), retrying in 10s (9 more attempts)`
[01:48] can anyone help please?
[02:18] atdprhs: looks like the cloud on which k8s is deployed doesn't have a machine with the required memory to host the new worker
[02:19] looks like this is cdk on maas?
=== exsdev0 is now known as exsdev
[04:04] wallyworld: autoload-creds ask-or-tell PTAL https://github.com/juju/juju/pull/10617
[04:04] ok
[04:04] \o/
[04:14] anastasiamac: lgtm, ty
[04:44] Hello wallyworld, sorry I missed your message
[04:44] yes, it is on KVM Pod on MaaS
[04:48] i'm not 100% across kvm pod allocation on maas. there would be maas-specific setup which controls how kvm pods are allocated. you can use juju constraints to limit what is asked for, which may help, but it could still be that additional maas setup is needed. e.g. the error message says cpu count = 6; perhaps asking for 2 or something would work
[04:48] ahhh, got it
[04:48] will test it out
[04:48] the maas ux will show what machines have been allocated already and what's available
[04:49] but i can't give you specific advice as i'm not a heavy maas user
[04:50] one thing to note is adding a unit will use the constraints the original charm was deployed with
[04:50] if you want to use different ones, you need to use juju add-machine --constraints="blah" first, and then "juju add-unit --to <machine>"
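(For reference, a minimal sketch of the workflow wallyworld describes above; the constraint values and the machine number are hypothetical, so substitute your own.)

```shell
# Provision a machine with explicit, smaller constraints instead of the
# constraints recorded at deploy time (cpu_count=6, mem=32768 in the error above):
juju add-machine --constraints "cores=2 mem=16G"

# Find the new machine's number in `juju status` (say it came up as 29),
# then place the unit on it explicitly:
juju add-unit kubernetes-worker --to 29
```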
[04:51] atdprhs: you could also try asking here https://discourse.maas.io/
[04:52] the maas folks there can help much better than me about the kvm setup stuff
[05:59] wallyworld: review plz? https://github.com/juju/juju/pull/10618
[05:59] wallyworld: ended up being a bit fiddlier than I expected.
[06:05] babbageclunk: no worries, looking
[06:19] babbageclunk: lgtm, a couple of comments and a question
[07:11] wallyworld: ru still there?
[07:42] jam: I am going to do QA testing on PR 10566 today, but I want to squash the commits at some point. If you want to take a look before I do (preserving the context of your prior comments) please go ahead.
[07:50] manadart: np, I think the big thing is to know which parts are the interesting ones and just focus there
[07:51] jam: core/network and state.
[08:12] jam: And probably apiserver/facades, where ProviderAddress -> SpaceAddress conversion is now occurring.
[08:12] I am doing my own review now, and will annotate where appropriate.
[08:53] hello everyone, I seem to have a model or controller that keeps creating VMs; no matter how many times I delete them, it recreates them
[08:53] does anyone know if there is any way for me to find out who's doing that?
[08:54] I keep seeing > 120 VMs get created
[09:15] is there any way that I can clean up juju: all controllers, models, everything
[09:26] atdprhs: what is the output of juju status?
[09:27] do you know what command triggered the first VM to be created?
[09:29] it's stuck
[09:29] I deleted the wrong vm
[09:29] it doesn't respond anymore, i am really flooded with tons of them
[09:30] that sounds really horrible
[09:30] yes, extremely horrible
[09:30] i recommend trying to remove the models first, hopefully the logs on the controller can be saved for a postmortem
[09:30] luckily this is a test environment
[09:31] juju destroy-model
[09:31] ok, I'll destroy all of the models
[09:31] actually do this
[09:31] juju destroy-model --force --no-wait
[09:32] if you have many units, that will be faster
[09:32] you can also add a -y flag to avoid the confirmation prompts
[09:33] one problem
[09:33] I can't see models
[09:33] and so I don't know what models I have
[09:33] okay
[09:33] it says no controller registered
[09:33] oh, that's not okay
[09:34] what is the output of juju controllers
[09:44] ERROR No controllers registered.
[10:05] timClicks: is there a way that I can just simply reset or clean up juju?
[10:06] atdprhs, what do you mean by reset? what does `juju controllers` say?
[10:06] not without the controller, I don't think
[10:07] so I'll have to live with those VMs that keep getting created?
[10:07] :O
[10:09] it's the controller that would be creating them
[10:10] is there something like juju discover controllers?
[10:10] juju register
[10:11] atdprhs, can you tell me what's in "less ~/.local/share/juju/controllers.yaml"
[10:11] it's empty
[10:11] atdprhs: I'll hand over to stickupkid (it's after 10pm where I am)
[10:12] atdprhs, do you have access to the vm software, i.e. where juju registered the controller?
[10:12] thanks timClicks
[10:12] yes, it's KVM
[10:13] atdprhs, so you should be able to list all your vms, virsh list --all or similar
[10:13] There are a lot of VMs
[10:13] not sure which one
[10:13] cuz sadly when juju creates a VM, it doesn't give MaaS a proper name for the VM
[10:14] so I end up with ideal-bream, clean-cougar, fresh-beetle, free-chimp, comic-orca, ace-mink
[10:17] atdprhs, so without knowing which is the controller, you'd have to resort to arp -n or ifconfig to get the ip address and then kill it
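(A rough sketch of the controller hunt being described here, assuming a libvirt/KVM host; the guest name is borrowed from the list above, and the MAC and IP are placeholders. 17070 is the Juju controller's API port.)

```shell
# The client's record of known controllers; empty in this case, but the
# controller VM itself may still be running somewhere.
cat ~/.local/share/juju/controllers.yaml

# On the KVM host, list every guest, running or not.
virsh list --all

# For a suspect guest, find its MAC address, map it to an IP, and probe
# the Juju API port to see whether it answers like a controller.
virsh domiflist ideal-bream
arp -n | grep <mac-address>
nc -vz <ip-address> 17070
```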
[10:17] I'd just delete all of the VMs
[10:17] they are all created by juju
[10:18] atdprhs, then you can start again, by bootstrapping, sorry I'm not much more of a help
[10:19] the thing is
[10:19] that if i rebooted the server
[10:19] atdprhs, otherwise juju register would be the better route to go down
[10:19] juju will continue to recreate the VMs again
[10:20] but if you've removed the controller, I've no idea how it's doing that
[11:20] stickupkid: atdprhs did you juju unregister? that just removes the local cache of the controller information. It doesn't remove the controller
[11:20] stickupkid: atdprhs if you unregistered and don't have it anywhere else you're going to be a bit stuck unfortunately. If you know what machine the controller is running on you can try to login to it, but you need to know the password of the admin user
[11:21] rick_h, according to the backlog atdprhs did juju destroy-model
[11:21] stickupkid: looking at the traceback atdprhs is getting "it says no controller registered" so can't actually run any commands/etc
[11:37] rick_h: mine, brb
=== grumble is now known as \emph{grumble}
[12:41] hi stickupkid / rick_h : I deleted the VMs
[12:42] i'll boot up the server and see if anything gets created
[12:42] If you guys say that with no controller nothing gets created, could it possibly be MaaS? But what in MaaS could be doing that?
[12:43] but from the specs of the VMs that get created, I know it's from juju for one reason: the specs match exactly the storage machines that I was trying to create
[13:25] I booted up the server and I couldn't find any new machines
[13:25] very strange
[13:37] achilleasa, let me switch and bootstrap the last test case, land, and if it no worky i'll let you know
[13:38] stickupkid: sure thing
[13:52] achilleasa, it worked
[14:58] stickupkid: rick_h can I get a sanity check on https://github.com/juju/juju/pull/10620?
[14:58] achilleasa, of course
[15:04] achilleasa, scanned all the files, seems like everything is spot on
[15:07] stickupkid: the commits merged more or less cleanly (I had to tweak some imports because we have renamed some network pkgs on develop)
[15:07] achilleasa, nice nice
[16:42] hml, got a sec?
[16:42] stickupkid: sure
[16:43] hml, meet in daily?
[16:43] stickupkid: omw
[20:58] o/
[20:59] If I have a node reporting "pending" when running "juju status", how can it be forcibly cleared?
[21:00] I think it happened because I removed an application before it finished deploying.
[21:00] then aborted from MAAS
[21:25] Anyone know how to tell if my bootstrap of juju installed the gui juju-gui?
[21:42] pepperhead: execute `juju gui`
[21:43] SWEET! You ROCK!
=== exsdev0 is now known as exsdev
[21:44] Harder question: I have a juju state of pending on a maas node that won't stop. Is there a way to force kill it?
[21:45] It was a deploy job that I killed ("removed") before it ended; now it's stuck.
[21:46] juju remove-unit --force <unit>/<number>
[21:46] pepperhead: does `juju remove-machine --force <machine>` work?
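(The two suggestions above, spelled out as full commands; the unit and machine identifiers are hypothetical.)

```shell
# Remove the stuck unit, forcing past the usual lifecycle checks:
juju remove-unit mysql/0 --force

# Or remove the underlying machine directly; on MAAS this releases the
# node back to the available pool rather than destroying it:
juju remove-machine 5 --force
```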
[21:46] started deploying mysql, and removed it in the middle, bad idea.
[21:46] does remove-machine destroy the machine in maas?
[21:46] perhaps `juju remove-machine <machine>`
[21:46] it should release it back to the maas available pool
[21:46] it'll put that machine back into the maas pool iirc
[21:47] ha timClicks beat you that time
[21:47] ha
[21:48] Looks like that did it, I was afraid to try something that said "remove machine"... :)
[21:49] pepperhead: luckily juju doesn't have the ability to order a truck to send hardware to the landfill
[21:49] But what if it did ;)
[21:49] * babbageclunk starts coding it up
[21:49] LOL
[21:50] please use the coffee pot protocol
[21:50] pepperhead: in terms of the wording though, everything juju-related is wrapped within a model
[21:50] so when you see remove anywhere, it means remove from the model
[21:51] Ahhhhh, THAT makes sense.
[21:51] also remove-* commands are recoverable; they have a symmetric add-* command that reverses the removal
[21:51] destroy-* commands are, in a sense, unrecoverable
[21:52] they require you to start from scratch if you want to get back to pre-destruction
[21:53] My last Q of the day, promise: I want to try deploying OpenStack on a set of hardware nodes, but they only have one drive and one nic in each. Is this possible? the blog I was going to follow mentioned it REQUIRED two drives in each and two nics.
[21:54] pepperhead: which guide?
[21:54] Or is that more a maas thing, need to work around with curtin?
[21:55] https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/install-openstack.html
[21:55] i'm not familiar enough with openstack to speak authoritatively on production-grade deployments, but it's likely those "requirements" are actually "very strong recommendations"
[21:57] if you have the compute resources available, my recommendation would be to try it and see what happens
[21:57] I am just doing a POC. Newly hired, and the company REALLY needs a private cloud to test terraform k8s control. OpenStack looked great, and I sold them on the maas/juju solution.
[21:58] timClicks yes, I have been struggling getting juju up. They handed me a stack of Intel NUCs, and juju would NOT bootstrap. FINALLY found it was a bios issue.
[22:01] The bios is super poorly designed. Had to turn off the optical drive boot selection; the node being bootstrapped froze on reboot looking for it, it seems. Also had to turn off "boot nic last", which overrode the boot order. And the boot order was specified in two locations in a gui that required a mouse. Nightmare.
[22:01] Thanks Intel
[22:03] sounds like you've had a fun few hours then
[22:04] This fun is driving me to drinkin
[22:04] admittedly a short drive
[22:05] So the juju gui allows building a model, but wouldn't help with drive requirements/locations, right?
[22:08] WOOT, the gui is working. VERY NICE!
[22:13] VERY polished interface, kudos if any of y'all worked on it
[22:29] Got that node re-commissioned, thanks again for the help!
[22:36] pepperhead: really great to hear that you're moving forward; do make sure that you're signed up here: https://discourse.jujucharms.com/
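(A footnote on the remove-*/destroy-* distinction discussed at [21:51]: a minimal illustration, with a hypothetical application and model name.)

```shell
# remove-* takes an entity out of the model and has a symmetric add-*
# that reverses it (a fresh replacement unit comes back):
juju remove-unit mysql/0
juju add-unit mysql

# destroy-* tears the entity itself down; recovering the pre-destruction
# state means rebuilding it from scratch:
juju destroy-model my-test-model
```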