[00:34] magicaltrout: are you trying to add your square brackets to an attribute in the "config:" section of the charm podspec yaml? can you post to the discourse topic the yaml or a suitable snippet so we can reproduce any issue? is this with juju 2.6?
[01:32] anyone? https://github.com/juju/juju/pull/10594
[01:48] thumper: looking - trade you: https://github.com/juju/juju/pull/10588
[01:48] ,e ;ppls
[01:48] * thumper looks
[01:48] types without putting his fingers on home
[01:48] :)
[01:51] babbageclunk: why the two line test?
[01:52] if you are outputting the same thing
[01:52] aren't those two the same?
[01:52] You mean in the Makefile? No, they're subtly different - the old one includes the full list of packages, which is super long
[01:53] so what does it output now?
[01:53] The same command but with $PROJECT_PACKAGES instead of the actual packages
[01:54] thumper: It's the same thing we do in go-install
[01:55] It just means you don't get spammed with a huge list of packages when running `make test`
[01:56] * babbageclunk finds an example...
[01:57] * babbageclunk gets 2FA'd
[01:57] I see it now
[01:57] babbageclunk: I have a makefile change coming too
[01:57] and it overlaps with yours
[01:58] but let's get it reviewed, and I'll deal with the conflicts
[01:58] :)
[01:58] thanks
[01:59] wallyworld: PTAL https://github.com/juju/juju/pull/10595
[02:00] ok, give me 5
[02:01] nw \o/ thnx :)
[02:04] babbageclunk: https://github.com/juju/juju/pull/10596
[02:08] wallyworld: what is the best, most current, guide to follow to create k8s charms?
[02:09] there's an (oldish) discourse post
[02:09] i can find it
[02:13] grr...
[02:13] some of our tests don't honour tmpdir
[02:13] * thumper digs
[02:17] thumper: https://discourse.jujucharms.com/t/writing-a-kubernetes-charm
[02:18] wallyworld: ta
[02:23] * thumper wonders how to isolate which package is writing to /tmp...
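A hypothetical illustration of magicaltrout's square-brackets question (container and variable names are invented, not from the log): in a k8s charm pod spec, an unquoted bracketed value under `config:` is parsed by YAML as a flow-sequence (array) rather than a string, which matches the deployment failure filed later in this log as bug #1842691. Quoting the value may be one workaround, since it forces a plain string:

```yaml
# Hypothetical pod spec snippet; names are invented for illustration.
containers:
  - name: demo-app
    image: demo-app:latest
    config:
      PLAIN_VAR: hello
      BRACKET_VAR: [a, b, c]     # unquoted: YAML parses this as an array
      QUOTED_VAR: "[a, b, c]"    # quoted: stays a literal string
```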
[02:25] Put some logging into the stdlib
[02:25] * babbageclunk half-winks
[02:26] babbageclunk: with the simple TMPDIR change, we catch most of the /tmp files
[02:26] definitely all the go-build and go-link ones
[02:26] and the check- and the mgo ones
[02:26] but there are a few store-lock ones left
[02:26] based on timestamps, I think it is just two packages
[02:27] anastasiamac: lgtm
[02:27] wallyworld: \i/
[02:27] \o/ even :)
[02:29] * thumper decides to go one package at a time while doing emails
[02:35] kelvinliu: here's that actions PR https://github.com/juju/juju/pull/10597
[02:37] * thumper thinks there has to be a better way to do this
[02:37] * thumper thinks of bashisms
[02:42] thumper: when you are free, a huge PR https://github.com/juju/juju/pull/10598
[02:45] haha if thumper is going to run tests pkg by pkg, he is not likely to b free this century...
[02:45] wallyworld: done
[02:45] ta
[02:53] anastasiamac: for i in $(ls -d */); do echo ${i%%/}; touch /tmp/latest; cd $i; TMPDIR=/tmp/fake go test ./...; cd ..; find /tmp -path /tmp/juju-store-lock-\* -newer /tmp/latest 2> /dev/null; done
[02:53] that is what I came up with to keep my sanity
[02:53] anastasiamac: and it tells me it is /api/backups
[02:53] just like that
[02:54] thumper: m very impressed :) we do prefer it if u could do everything in ur power to keep ur sanity
[02:54] and backups need some attention (not just the tests of course)
[02:55] * anastasiamac honestly does not know what's with this new trend of keeping leaders sane...
[03:01] yeah, I know
[03:05] but m kind of happy that it's backups and not something more recently developed :)
[03:28] wallyworld: just back home, looking soon
[03:28] no worries
[05:19] hpidcock: here's a PR which adds a new hook command that you may find interesting to look at. no rush. https://github.com/juju/juju/pull/10599
[05:20] alright
[07:26] Good Morning Juju!
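thumper's 02:53 one-liner can be sketched as a standalone script. This is a rework under stated assumptions: the package directories and the leaking `touch` below are stand-ins so the loop runs anywhere, whereas the real loop ran `TMPDIR=/tmp/fake go test ./...` per package and watched /tmp for juju-store-lock files. The technique is the same: snapshot a timestamp marker before each package's tests, then ask `find -newer` which files appeared since.

```shell
workdir=$(mktemp -d)
tmpdir=$(mktemp -d)                  # plays the role of /tmp
mkdir -p "$workdir/pkg-a" "$workdir/pkg-b"
leaked=""
for i in "$workdir"/*/; do
    pkg=$(basename "$i")
    marker=$(mktemp)                 # timestamp snapshot before the "tests"
    sleep 0.01                       # ensure later mtimes beat the marker
    # Stand-in for `TMPDIR=/tmp/fake go test ./...`; pretend only pkg-b
    # misbehaves and drops a store-lock file outside TMPDIR.
    if [ "$pkg" = "pkg-b" ]; then
        touch "$tmpdir/juju-store-lock-1234"
    fi
    new=$(find "$tmpdir" -name 'juju-store-lock-*' -newer "$marker" 2>/dev/null)
    [ -n "$new" ] && leaked="$leaked $pkg"
done
echo "leaking packages:$leaked"
```

Running it prints only the misbehaving package, which is how the loop in the log fingered /api/backups.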
[08:15] I'm trying to set up juju with an OpenStack provider and I get the following error
[08:16] caused by: failed executing the request https://ncc_url:8774/v2.1/servers
[08:16] caused by: Post https://ncc_url:8774/v2.1/servers: EOF
[08:17] I saw this related bug: https://bugs.launchpad.net/juju/+bug/1655716
[08:17] Bug #1655716: retry failed API calls
[09:28] I want to set up a juju controller. Is it as simple as installing the juju snap?
[09:28] Will that create the database for me too?
[09:29] I'm looking at https://jaas.ai/docs/installing and it doesn't give any pointers as to what the next step is after having installed the snap
[09:30] I had a look into setting this up a while back and I seem to remember the controller needs a DB
[09:31] I ran out of time and didn't get it running but can't remember how far I got
[09:32] danboid, to run it locally you should be able to do https://gist.github.com/SimonRichardson/1609d6f93dcfdfbd97771afa7c1b38f0
[09:33] stickupkid, Thanks! `juju bootstrap lxd test` will do what exactly?
[09:34] danboid, bootstrap to a local lxd provider.
[09:34] danboid, so this will set up a juju controller for you on a lxd provider, is what I should have said
[09:36] That will create a juju controller called test? I presume it will use lxd on localhost? If lxd isn't installed on the local machine, how does it know where to look for one?
[09:37] danboid, you are correct, it will attempt to use the one on localhost. If lxd isn't installed, it will fail to bootstrap.
[09:37] OK, sounds easy enough
[09:38] danboid, let me know how your adventures go
[09:38] stickupkid, I will. Thanks!
[09:40] I presume it is advised to run the juju controller on a separate machine to your maas controller? It is possible for them to run on the same machine too, though, right?
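The gist stickupkid links isn't reproduced in the log; as a rough sketch of the usual local-LXD path (assuming the snap packages and Juju-2.6-era commands, not a substitute for the gist), the bootstrap he describes is approximately:

```shell
# Typical local-LXD bootstrap; "test" matches the controller name in the log.
sudo snap install lxd
lxd init --auto                 # creates lxdbr0 and a default storage pool
sudo snap install juju --classic
juju bootstrap lxd test         # "lxd" is the cloud, "test" the controller name
juju status                     # verify the controller answers
```

This is why no separate database setup is needed: bootstrap provisions the controller machine, including its internal database, itself.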
[09:41] the latter is the way to go when first setting up, but yes you can run your controller somewhere else if you so desire
[09:43] anybody got any idea how to handle "Unable to allocate static IP due to address exhaustion"...
[09:44] parlos, you got any more info?
[09:45] not really, my maas has lots of IP addresses available in all the networks juju may want to use...
[09:46] stickupkid not really, my maas has lots of IP addresses available in all the networks juju may want to use...
[09:48] parlos, i'm assuming you've filed a bug, so we can track it... that way i can point people in the right location
[09:49] stickupkid nope, no bug filed.. Assumed that it was user error :( Not sure if it's a juju or maas bug...
[09:49] Any thoughts on "juju bootstrap" resulting in:
[09:49] parlos, one way to find out :D
[09:49] caused by: failed executing the request https://ncc_url:8774/v2.1/servers
[09:49] parlos, https://bugs.launchpad.net/juju/+filebug
[09:49] caused by: Post https://ncc_url:8774/v2.1/servers: EOF
[09:51] yan0s, what's the provider you're bootstrapping to (lxd, manual, MAAS)?
[09:51] openstack
[09:51] rocky release
[09:58] stickupkid, I'm running lxd init on my maas controller. I said yes to "Would you like to connect to a MAAS server" and now it is asking for an API key. Should I create a new MAAS user just for lxd or use my personal API key or...
[10:01] any way to 'replace' a machine that failed in a deployment, without destroying the model and redeploying it?
[10:02] stickupkid, I suppose the idea would be that every maas user would have their own juju controller/API key?
[10:03] stickupkid, So is it not possible for multiple maas users to share a juju controller?
[10:04] danboid, afaik no...
[10:04] danboid, so you would allow juju to manage maas and allow juju to manage the users...
[10:04] danboid, see `juju users --help`
[10:05] parlos, remove-machine?
[10:07] danboid; In my setup, I've got one juju controller and on it I deploy models from different maas users/keys. So it enables me to track (on MAAS) what nodes are used by what user.. (would be nice to push tags from juju to maas during deployment)
[10:07] stickupkid, would remove-machine trigger a redeployment of the apps that were destined for that machine?
[10:08] parlos, nope, you'd have to use the `--to` directive
[10:08] when deploying
[10:09] parlos, That sounds like a setup I want to try. So when you were configuring lxd, which API key did you give it?
[10:09] stickupkid, so the workaround would be to allocate a new machine, move the units to that machine and then remove the machine that failed...
[10:10] parlos, rick_h would know more tbh
[10:10] parlos, I'm configuring lxd on my maas controller, gonna run a juju controller on there too
[10:10] he comes online soon
[10:11] stickupkid, ok. Will submit bug, and hang around.
[10:20] lxd init failed: `Couldn't find the specified machine`. When it asked "What's the name of this host in MAAS" I just gave the hostname of the MAAS controller, which was the default, but there is obvs some more, missing config
[10:21] Hmm. No. Under MAAS General, MAAS name, the controller has a name which is the same as the hostname so I dunno why lxd init failed
[10:23] Maybe I can get away with skipping the MAAS options in lxd init if it is running on the same machine?
[10:26] maybe it'll let me get away with using 127.0.0.1 for the controller name?
[10:29] So I suppose my most important question is, does lxd need to be configured to access/use MAAS at all?
[10:31] So long as juju can use lxd fine, is there any need to link lxd and MAAS?
[10:33] I suppose I'll just have to try without and see how far I get
[10:50] There seems to be a clash between bind running on my MAAS controller and lxd init wanting to create lxdbr0
[10:51] So I either install lxd and juju on a different machine or...
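parlos's workaround (allocate a replacement, then retire the failed machine) as a hypothetical command sequence; the application name and machine number here are invented for illustration:

```shell
# Assume application "mysql" had a unit on failed machine 3.
juju add-unit mysql             # brings up a replacement unit on a new machine
juju status                     # wait until the new unit is active
juju remove-machine 3 --force   # then remove the dead machine and its unit
```

This is the manual version of the replace-machine command parlos wishes existed below.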
[10:53] Not sure there is an or
[10:54] unless MAAS doesn't need to use bind/named
[11:00] I suppose the other option is to create the bridge interface myself then get lxd init to use that
[11:17] parlos: no, it won't auto redeploy. Recovery is up to the operator because we don't know what restrictions the operator is expecting. "don't autodeploy to that machine, I'm keeping X and Y on different hosts..." or the like
[11:17] parlos: so you'd in theory use add-unit for anything running with any constraints/placement notes
[11:18] parlos: and then yes, remove the machine with the units on it when you were happy to deal with it
[12:00] https://discourse.jujucharms.com/t/square-brackets-in-env-vars/2024/2 rick_h !!! any ideas?! :)
[12:14] rick_h; would have been nice with replace-machine, that would just deploy a new machine with the same specs as the one that failed...
[14:47] Yay! I have a juju controller on my MAAS box now! :)
[14:49] No idea what to do with it now :)
[14:49] danboid, congrats
[14:49] danboid, juju deploy :D
[14:49] stickupkid, Thanks! :)
[14:50] This is to start playing with openstack
[14:50] juju deploy openstack ?
[14:51] I'm sure there will be a guide online somewhere, I'm just being a bit overly lax :)
[14:53] danboid, https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-openstack.html#deploy-openstack
[14:53] pmatulis, Thanks!
[14:54] danboid, kindly open doc bugs if needed --> https://bugs.launchpad.net/charm-deployment-guide/+filebug
[14:55] pmatulis, Yeah. I think the juju install page could do with a bit of tweaking too
[14:55] danboid, that guide will take you through deploying OpenStack by individual service. you can also try a bundle
[14:56] it all depends what you want to do/learn
[14:57] https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-openstack-bundle.html
[14:57] pmatulis, Thanks!
[15:01] danboid, how many MAAS nodes do you have? and what resources do they have?
it's typical to require multiple disks and network cards
[15:05] pmatulis, I have 1 MAAS controller with 5 nodes attached. They are Xeons, 3 have 64 GB RAM and 2 have 256 GB. They've all got at least 1 TB of disk space and multiple NICs
[15:06] Before I run juju deploy, is it something I need to run under screen/tmux or will it background itself?
[15:06] achilleasa, when we deploy a bundle, the overlay is merged onto the base bundle yaml, so I can upload that to the server to get the changes?
[15:07] I'd imagine there is a bit more config before I can deploy the bundle. I'll need to tell it which nodes to use at least, right?
[15:10] or does it automatically use all available nodes?
[15:16] danboid, it sounds like you should do some experimenting with stuff that is lighter than openstack
[15:17] just to get the basics. did you look at the documentation yet?
[15:17] https://jaas.ai/docs
[15:17] Nope. I've been playing with MAAS for a while but not in-depth. I haven't touched juju yet
[15:19] The problem is those docs seem to be JAAS centric, rather than focusing on an on-prem juju setup
[15:20] jam, what happens if you want the BestFacadeVersion, but the facade doesn't exist on old controllers?
[15:22] or at least that's the case for the Getting Started guide
[15:22] danboid, nope. jaas is just one tiny thing in the docs. url notwithstanding
[15:24] What would you suggest I try putting together before attempting openstack?
[15:26] danboid, just go through the main topics in the docs. there are a couple of beginner tutorials
[15:36] stickupkid: any overlays are sequentially merged to the base bundle. What do you mean by "upload to the server"?
[15:36] achilleasa, ho?
[15:36] omw
[15:51] hello fine people... if anyone in the US has any clue about https://discourse.jujucharms.com/t/square-brackets-in-env-vars/2024/2 that would be very useful indeed! ;)
[15:52] like, does anyone know if there is something I can do to get Go to parse some YAML read by some Python?
or do I need to chop up the upstream container to make it happen?
[15:53] magicaltrout: was looking... stop breaking things! :P
[15:54] magicaltrout: I think we'll have to create some test cases and get back to you, unfortunately. Honestly it's probably better as a bug
[15:55] fair enough. Yeah I dunno what I can try cause I don't know how the yaml -> python -> go handoff works
[15:55] so i gave up at that point
[15:55] I'll file it and mash up the container for now
[15:55] want to get this demo stack done for ApacheCon next week
[16:03] https://bugs.launchpad.net/juju/+bug/1842691
[16:03] done
[16:04] Bug #1842691: Array in YAML causes Kubernetes charm failed deployment
[16:21] achilleasa, i ejected - it would take too long, but my PR for exposing the method on the API server is solid - can you give a CR? i'll start on bringing this into pylib https://github.com/juju/juju/pull/10601
[16:21] stickupkid: sure, let me check
[17:01] rick_h, merge master into 2.7 branch, for pylibjuju https://github.com/juju/python-libjuju/pull/353
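On the overlay question from earlier (achilleasa's note at 15:36 that overlays are merged sequentially onto the base bundle): a hypothetical base-plus-overlay pair, with invented application settings, deployed as `juju deploy ./bundle.yaml --overlay ./overlay.yaml`:

```yaml
# bundle.yaml (base)
applications:
  mysql:
    charm: cs:mysql
    num_units: 1

# overlay.yaml - merged on top of the base at deploy time, so the
# deployed mysql application ends up with both the base fields and
# the overridden option below.
applications:
  mysql:
    options:
      max-connections: 500
```

With several `--overlay` flags, each overlay is applied in order, later ones winning on conflicting keys.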