[00:59] <babbageclunk> wallyworld: Do we have a convenient way for taking multiple model operations and applying them as one transaction?
[00:59] <wallyworld> babbageclunk: yes, ModelOperation
[01:00] <babbageclunk> wallyworld: but if I have multiple ModelOperations is there a function to combine them into one?
[01:01] <babbageclunk> ie, I'm calling LeaveScopeOperation on n relation units, but I want to run them all as one transaction.
[01:02] <babbageclunk> Hmm, actually that might not be a good idea, since each one will change the unitcount and the last needs to remove the relation
[01:02] <babbageclunk> ok, I'll just run them as separate txns
[01:02] <wallyworld> yeah +1
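The combination babbageclunk was reaching for can be sketched as a composite operation whose Build concatenates the ops of its children. This is only an illustration: juju's real `state.ModelOperation` builds `mgo/txn` ops, and the `Op`, `leaveScope`, and `composite` types below are hypothetical stand-ins, not juju's actual API.

```go
package main

import "fmt"

// Op stands in for a transaction operation (mgo/txn.Op in real juju).
type Op struct{ Name string }

// ModelOperation mirrors the shape of juju's state.ModelOperation:
// Build returns the txn ops for a given attempt, Done observes the outcome.
type ModelOperation interface {
	Build(attempt int) ([]Op, error)
	Done(err error) error
}

// leaveScope is a toy child operation, one per relation unit.
type leaveScope struct{ unit string }

func (l leaveScope) Build(attempt int) ([]Op, error) {
	return []Op{{Name: "leave-scope-" + l.unit}}, nil
}
func (l leaveScope) Done(err error) error { return err }

// composite runs several operations as a single transaction by
// concatenating the ops each child builds for the same attempt.
type composite struct{ children []ModelOperation }

func (c *composite) Build(attempt int) ([]Op, error) {
	var all []Op
	for _, child := range c.children {
		ops, err := child.Build(attempt)
		if err != nil {
			return nil, err
		}
		all = append(all, ops...)
	}
	return all, nil
}

func (c *composite) Done(err error) error {
	// Give every child a chance to observe the result; keep the first error.
	for _, child := range c.children {
		if cerr := child.Done(err); cerr != nil && err == nil {
			err = cerr
		}
	}
	return err
}

func main() {
	op := &composite{children: []ModelOperation{leaveScope{"wp/0"}, leaveScope{"wp/1"}}}
	ops, _ := op.Build(0)
	fmt.Println(len(ops)) // both units' ops land in a single txn
}
```

As babbageclunk points out above, this only works when the children's asserts are independent: if each LeaveScopeOperation asserts and decrements a shared unitcount, concatenating them produces conflicting ops, which is why separate txns were the right call.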
[01:19] <wallyworld> kelvinliu: i don't understand the newPreconditionDeleteOptions stuff - why do we need to wait for UID sometimes and not others?

[01:20] <kelvinliu> wallyworld: that ensures it asserts the deleted resource has that id
[01:20] <wallyworld> and name is not sufficient?
[01:21] <wallyworld> we seem to use both?
[01:21] <kelvinliu> ideally, we should follow this pattern for all deleting resources
[01:21] <wallyworld> given name is unique, why is that not sufficient?
[01:21] <kelvinliu> name is uniq, but the deletion is async
[01:23] <wallyworld> we can't create another one with the same name until the current one is totally gone though, not just in terminating state
[01:23] <kelvinliu> and it's not deleted immediately, which usually takes a bit of time
[01:23] <kelvinliu> but we can't prevent non-juju ppl or apps creating a resource with the same name, right?
[01:24] <wallyworld> true, but while it's being terminated, we can't create another can we? you get an error from the api
[01:24] <kelvinliu> there was a discussion about this, i will find it for u
[01:24] <wallyworld> ok
[01:25] <wallyworld> if we need to change our code we should do it for everything rather than leave stuff done 2 different ways
[01:42] <kelvinliu> wallyworld: u can see i left a TODO for refactoring all updating/deleting places
[01:43] <wallyworld> ok
[01:43] <kelvinliu> wallyworld: i think this change is really good to have.
[01:43] <wallyworld> kelvinliu: is there a reason for swapping the order of update and create in the ensure() methods? we have been calling update() first then create()
[01:43] <kelvinliu> https://github.com/kubernetes/kubernetes/issues/20572
[01:44] <kelvinliu> yes, there is a reason, and i think we should refactor all the ensure() methods like this later as well
[01:44] <kelvinliu> wallyworld: HO to discuss?
[01:44] <wallyworld> sure
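The race kelvinliu is guarding against: deletion is async, so between reading an object and deleting it by name, someone outside juju can replace it with a new object of the same name, and a name-only delete would remove the wrong instance. A UID precondition makes the delete apply only to the exact instance that was read. The sketch below illustrates the idea with toy types; in real Kubernetes this is `metav1.DeleteOptions{Preconditions: &metav1.Preconditions{UID: &uid}}`, and the `store` type here is only a stand-in for the API server.

```go
package main

import (
	"errors"
	"fmt"
)

// resource stands in for a k8s object: names can be reused over time,
// but every instance gets a unique UID.
type resource struct {
	name string
	uid  string
}

var errConflict = errors.New("precondition failed: uid mismatch")

// store is a toy name-to-resource map standing in for the API server.
type store map[string]resource

// deleteWithPrecondition removes name only if the stored object's UID
// matches — the behaviour a UID precondition gives a real k8s delete.
func (s store) deleteWithPrecondition(name, uid string) error {
	r, ok := s[name]
	if !ok {
		return nil // already gone: nothing to do
	}
	if r.uid != uid {
		return errConflict // a different instance now owns the name
	}
	delete(s, name)
	return nil
}

func main() {
	s := store{"cfg": {name: "cfg", uid: "uid-old"}}
	// Someone replaces the resource between our read and our delete.
	s["cfg"] = resource{name: "cfg", uid: "uid-new"}
	// The preconditioned delete refuses to remove the newer instance.
	fmt.Println(s.deleteWithPrecondition("cfg", "uid-old"))
}
```

This is why name uniqueness alone isn't sufficient: the name is unique at any instant, but not stable across the delete/recreate window.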
[01:48] <atdprhs> hello everyone, everytime I try to run juju add-unit kubernetes-worker i get `failed to start machine 28 (failed to acquire node: No available machine matches constraints: [('agent_name', ['93d500ee-7e14-4ece-81ae-69137d451f3a']), ('cpu_count', ['6']), ('mem', ['32768']), ('storage', ['root:1024']), ('zone', ['default'])] (resolved to
[01:48] <atdprhs> "cpu_count=6.0 mem=32768.0 storage=root:1024 zone=default")), retrying in 10s (9 more attempts)`
[01:48] <atdprhs> can anyone help please?
[02:18] <wallyworld> atdprhs: looks like the cloud on which k8s is deployed doesn't have a machine with the required memory to host the new worker
[02:19] <wallyworld> looks like this is cdk on maas?
[04:04] <anastasiamac> wallyworld: autoload-creds ask-or-tell PTAL https://github.com/juju/juju/pull/10617
[04:04] <wallyworld> ok
[04:04] <anastasiamac> \o/
[04:14] <wallyworld> anastasiamac: lgtm, ty
[04:44] <atdprhs> Hello wallyworld, sorry I missed your message
[04:44] <atdprhs> yes, it is on KVM Pod on MaaS
[04:48] <wallyworld> i'm not 100% across kvm pod allocation on maas. there would be maas-specific setup which controls how kvm pods are allocated. you can use juju constraints to limit what is asked for, which may help, but it could still be that additional maas setup is needed. eg the error message says cpu count = 6; perhaps asking for 2 or something would work
[04:48] <atdprhs> ahhh, got it
[04:48] <atdprhs> will test it out
[04:48] <wallyworld> the maas ux will show what machines have been allocated already and what's available
[04:49] <wallyworld> but i can't give you specific advice as i'm not a heavy maas user
[04:50] <wallyworld> one thing to note is adding a unit will use the constraints the original charm was deployed with
[04:50] <wallyworld> if you want to use different ones, you need to use juju add-machine --constraints="blah" first, and then "juju add-unit --to <machinenum>"
[04:51] <wallyworld> atdprhs: you could also try asking here https://discourse.maas.io/
[04:52] <wallyworld> the maas folks there can help much better than me about the kvm setup stuff
[05:59] <babbageclunk> wallyworld: review plz? https://github.com/juju/juju/pull/10618
[05:59] <babbageclunk> wallyworld: ended up being a bit fiddlier than I expected.
[06:05] <wallyworld> babbageclunk: no worries, looking
[06:19] <wallyworld> babbageclunk: lgtm, a couple of comments and a question
[07:11] <kelvinliu> wallyworld: ru still there?
[07:42] <manadart> jam: I am going to do QA testing on PR 10566 today, but I want to squash the commits at some point. If you want to take a look before I do (preserving the context of your prior comments) please go ahead.
[07:50] <jam> manadart: np, I think the big thing is to know which parts are the interesting ones and just focus there
[07:51] <manadart> jam: core/network and state.
[08:12] <manadart> jam: And probably apiserver/facades, where ProviderAddress -> SpaceAddress conversion is now occurring.
[08:12] <manadart> I am doing my own review now, and will annotate where appropriate.
[08:53] <atdprhs> hello everyone, I seem to have a model or controller that keeps creating VMs; no matter how many times I delete them, it recreates them
[08:53] <atdprhs> does anyone know if there is any way for me to find out who's doing that?
[08:54] <atdprhs> I keep seeing > 120 VMs that gets created
[09:15] <atdprhs> is there any way I can clean up juju from all controllers, models, everything
[09:26] <timClicks> atdprhs: what is the output of juju status?
[09:27] <timClicks> do you know what command triggered the first VM to be created?
[09:29] <atdprhs> it's stuck
[09:29] <atdprhs> I deleted the wrong vm
[09:29] <atdprhs> it doesn't respond anymore, i am really flooded with tons of them
[09:30] <timClicks> that sounds really horrible
[09:30] <atdprhs> yes, extremely horrible
[09:30] <timClicks> i recommend trying to remove the models first, hopefully the logs on the controller can be saved for a postmortem
[09:30] <atdprhs> luckily this is a test environment
[09:31] <timClicks> juju destroy-model <model-name>
[09:31] <atdprhs> ok, I'll destroy all of the moels
[09:31] <timClicks> actually do this
[09:31] <timClicks> juju destroy-model <model-name> --force --no-wait
[09:32] <timClicks> if you have many units, that will be faster
[09:32] <timClicks> you can also add a -y flag to avoid the confirmation prompts
[09:33] <atdprhs> one problem
[09:33] <atdprhs> I can't see models
[09:33] <atdprhs> and so I don't know what models I have
[09:33] <timClicks> okay
[09:33] <atdprhs> it says no controller registered
[09:33] <timClicks> oh, that's not okay
[09:34] <timClicks> what is the output of juju controllers
[09:44] <atdprhs> ERROR No controllers registered.
[10:05] <atdprhs> timClicks: is there a way that I can just simply reset or clean up juju?
[10:06] <stickupkid> atdprhs, what do you mean by reset? what does `juju controllers` say?
[10:06] <timClicks> not without the controller, I don't think
[10:07] <atdprhs> so I'll have to live with those VMs that keep getting created?
[10:07] <atdprhs> :O
[10:09] <timClicks> it's the controller that would be creating them
[10:10] <atdprhs> is there something like juju discover controllers?
[10:10] <timClicks> juju register
[10:11] <stickupkid> atdprhs, can you tell me what's in "less ~/.local/share/juju/controllers.yaml"
[10:11] <atdprhs> it's empty
[10:11] <timClicks> atdprhs: I'll hand over to stickupkid (it's after 10pm where I am)
[10:12] <stickupkid> atdprhs, do you have access to the vm software, i.e. where juju registered the controller?
[10:12] <atdprhs> thanks timClicks
[10:12] <atdprhs> yes, it's KVM
[10:13] <stickupkid> atdprhs, so you should be able to list all your vms, virsh list --all or similar
[10:13] <atdprhs> There are a lot of VMs
[10:13] <atdprhs> not sure which one
[10:13] <atdprhs> cuz sadly when juju creates a VM, it doesn't give MaaS a proper name for the VM
[10:14] <atdprhs> so I end up with ideal-bream, clean-cougar, fresh-beetle, free-chimp, comic-orca, ace-mink
[10:17] <stickupkid> atdprhs, so without knowing which is the controller, you'd have to resort to arp -n or ifconfig to get the ip address and then kill it
[10:17] <atdprhs> I'd just delete all of the Vms
[10:17] <atdprhs> they are all created by juju
[10:18] <stickupkid> atdprhs, then you can start again, by bootstrapping, sorry I'm not much more of a help
[10:19] <atdprhs> the thing is
[10:19] <atdprhs> that if i rebooted the server
[10:19] <stickupkid> atdprhs, otherwise juju register would be the better route to go down
[10:19] <atdprhs> juju will continue to recreate the VMs again
[10:20] <stickupkid> but if you've removed the controller, I've no idea how it's doing that
[11:20] <rick_h> stickupkid:  atdprhs did you juju unregister? that just removes the local cache of the controller information. It doesn't remove the controller
[11:20] <rick_h> stickupkid:  atdprhs if you unregistered and don't have it anywhere else you're going to be a bit stuck unfortunately. If you know what machine the controller is running on you can try to login to it, but you need to know the password of the admin user
[11:21] <stickupkid> rick_h, according to the scrollback atdprhs did juju destroy-model
[11:21] <rick_h> stickupkid:  looking at the scrollback atdprhs is getting "it says no controller registered" so can't actually run any commands/etc
[11:37] <jam> rick_h: mine, brb
[12:41] <atdprhs> hi stickupkid / rick_h : I deleted the VMs
[12:42] <atdprhs> i'll bootup the server and see if anything gets created
[12:42] <atdprhs> If you guys say that with no controller nothing gets created, could it possibly be MaaS? But what in MaaS could be doing that?
[12:43] <atdprhs> but from the specs of the VMs that get created, I know it's from juju for one reason: the specs exactly match the storage machines that I was trying to create
[13:25] <atdprhs> I booted up the server and I couldn't find any new machines
[13:25] <atdprhs> very strange
[13:37] <stickupkid> achilleasa, let me switch and bootstrap the last test case, land and if it no worky i'll let you know
[13:38] <achilleasa> stickupkid: sure thing
[13:52] <stickupkid> achilleasa, it worked
[14:58] <achilleasa> stickupkid: rick_h can I get a sanity check on https://github.com/juju/juju/pull/10620?
[14:58] <stickupkid> achilleasa, of course
[15:04] <stickupkid> achilleasa, scanned all the files, seems like everything is spot on
[15:07] <achilleasa> stickupkid: the commits merged more or less cleanly (I had to tweak some imports because we have renamed some network pkgs on develop)
[15:07] <stickupkid> achilleasa, nice nice
[16:42] <stickupkid> hml, got a sec?
[16:42] <hml> stickupkid:  sure
[16:43] <stickupkid> hml, meet in daily?
[16:43] <hml> stickupkid: omw
[20:58] <pepperhead> o/
[20:59] <pepperhead> If I have a node reporting "pending" when running "juju status", how can it be forcibly cleared?
[21:00] <pepperhead> I think it happened because I removed an application before it finished deploying.
[21:00] <pepperhead> then aborted from MAAS
[21:25] <pepperhead> Anyone know how to tell if my bootstrap of juju installed the gui juju-gui?
[21:42] <timClicks> pepperhead: execute `juju gui`
[21:43] <pepperhead> SWEET! You ROCK!
[21:44] <pepperhead> Harder question: I have a juju state of pending on a maas node that won't stop. Is there a way to force kill it?
[21:45] <pepperhead> It was a deploy job that I killed ("removed") before it ended, now stuck.
[21:46] <timClicks> juju remove-unit --force <app>/<n>
[21:46] <babbageclunk> pepperhead: does `juju remove-machine --force <id>` work?
[21:46] <pepperhead> started deploying mysql, and removed it in the middle, bad idea.
[21:46] <pepperhead> does remove-machine destroy the machine in maas?
[21:46] <timClicks> perhaps `juju remove-machine <n>`
[21:46] <babbageclunk> it should release it back to the maas available pool
[21:46] <timClicks> it'll put that machine back into the maas pool iirc
[21:47] <babbageclunk> ha timClicks beat you that time
[21:47] <timClicks> ha
[21:48] <pepperhead> Looks like that did it, I was afraid to try something that said "remove machine"... :)
[21:49] <timClicks> pepperhead: luckily juju doesn't have the ability to order a truck to send hardware to the landfill
[21:49] <pepperhead> But what if it did ;)
[21:49]  * babbageclunk starts coding it up
[21:49] <pepperhead> LOL
[21:50] <timClicks> please use the coffee pot protocol
[21:50] <timClicks> pepperhead: in terms of the wording though, everything juju-related is wrapped within a model
[21:50] <timClicks> so when you see remove anywhere, it means remove from the model
[21:51] <pepperhead> Ahhhhh, THAT makes sense.
[21:51] <timClicks> also remove-* commands are recoverable, they have a symmetric command add-* that reverses the removal
[21:51] <timClicks> destroy-* commands are, in a sense, unrecoverable
[21:52] <timClicks> they require you to start from scratch if you want to get back to pre-destruction
[21:53] <pepperhead> My last Q of the day, promise: I want to try deploying OpenStack on a set of hardware nodes, but they only have one drive and one nic in each. Is this possible? the blog I was going to follow mentioned it REQUIRED two drives and two nics in each.
[21:54] <timClicks> pepperhead: which guide?
[21:54] <pepperhead> Or is that more a maas thing, need to work around it with curtin?
[21:55] <pepperhead> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/install-openstack.html
[21:55] <timClicks> i'm not familiar enough with openstack to speak authoritatively on production-grade deployments, but it's likely those "requirements" are actually "very strong recommendations"
[21:57] <timClicks> if you have the compute resources available, my recommendation would be to try it and see what happens
[21:57] <pepperhead> I am just doing a POC. Newly hired, and the company REALLY needs a private cloud to test terraform k8s control. OpenStack looked great, and I sold them on the maas/juju solution.
[21:58] <pepperhead> timClicks yes, I have been struggling getting juju up. They handed me a stack of Intel NUCs, and juju would NOT bootstrap. FINALLY found it was a bios issue.
[21:58] <pepperhead> been struggling even
[22:01] <pepperhead> The bios is super poorly designed. Had to turn off the optical drive boot selection; the node being bootstrapped froze on reboot looking for it, it seems. Also had to turn off "boot nic last", which overrode the boot order. And the boot order was specified in two locations in a gui that required a mouse. Nightmare.
[22:01] <pepperhead> Thanks Intel
[22:03] <timClicks> sounds like you've had a fun few hours then
[22:04] <pepperhead> This fun is driving me to drinkin
[22:04] <pepperhead> admittedly a short drive
[22:05] <pepperhead> So the juju gui allows building a model, but wouldn't help with drive requirements/locations, right?
[22:08] <pepperhead> WOOT, the gui is working. VERY NICE!
[22:13] <pepperhead> VERY polished interface, kudos if any of y'all worked on it
[22:29] <pepperhead> Got that node re-commissioned, thanks again for the help!
[22:36] <timClicks> pepperhead: really great to hear that you're moving forward; do make sure that you're signed up here: https://discourse.jujucharms.com/