[10:53] manadart: i've reworked the mocking tests and don't use the stubs anymore. In for another review round? https://github.com/juju/juju/pull/11088
[11:02] nammn_de: Yep. will look.
[11:28] can I get a quick CR on https://github.com/juju/juju/pull/11132 ?
[11:45] achilleasa, done
[11:45] stickupkid: thanks. I will push a 2.7->develop once this lands
[11:57] Hi all... For some time now we have been having an issue with juju export-bundle and I was wondering if anybody could help. We run the command "juju export-bundle -m openstack" and it stops at the same point every time with no error, as if it had run to completion. I do not know if it is related, but the point where it stops is an inline yaml config option. It definitely worked with v2.6.8. Now we are running 2.7.1
[11:58] soumplis, can you add --debug and see if that reveals any more information
[12:02] stickupkid I have run it and the last lines are: "12:01:50 DEBUG juju.api monitor.go:35 RPC connection died
[12:02] 12:01:50 INFO cmd supercommand.go:525 command finished
[12:02] "
[12:03] stickupkid Please also check https://github.com/juju/python-libjuju/issues/384, it is related
[12:07] soumplis: if you run 'juju debug-log -m controller --replay | grep ERROR' do you see any messages related to the export-bundle command?
[12:09] achilleasa no, there is nothing relevant. I would say the problem is that it cannot handle inline configs (or big values), because it also fails in another model when it comes to a gpg-key config
[12:10] achilleasa, for example " gpg-key: |
[12:10] 12:10:24 DEBUG juju.api monitor.go:35 RPC connection died
[12:10] 12:10:24 INFO cmd supercommand.go:525 command finished
[12:10] "
[12:13] soumplis: is the juju cli generating some output for the bundle and then getting stuck when it reaches that particular section?
[12:14] achilleasa yes, it starts to print the bundle, and when it reaches the point that fails it just stops and returns to the shell
[12:18] soumplis: can you try running the same command with '--filename bundle.yaml' and check if you get the full bundle dumped in that file?
[12:19] achilleasa the command reports "Bundle successfully exported to test.yaml" but the file has the same incomplete content as in the cli
[12:23] soumplis: sounds like a potential bug in the bundle export code running on the controller side. Can you please open a bug?
[12:24] achilleasa, sure I'll do it now, thx
[12:29] stickupkid: can you take a look at https://github.com/juju/juju/pull/11133 (2.7 -> dev)
[12:47] manadart: for the rename-space: do we have some old code lying around where we also touch multiple collections at once? I want to make sure to follow conventions, and I'm thinking about whether this should be done synchronously, as it may take time (?)
[12:50] nammn_de: There are many examples. One is in the state code for committing a generation. But the way I think we should do this in the future is using ModelOperation as per https://github.com/juju/juju/pull/10630.
[12:50] I have not had time to realise that in proper fashion.
[12:51] manadart: that's perfect. I just want to follow our future plans instead of maybe running into legacy code and using it. Thanks
[12:51] To do your part would require altering settings in some cases, so I might put a patch up with changes that allow that.
[13:01] manadart: can you take a look at 11133?
[13:08] How do I write a query to update the space ID in every collection matching a query? I only found code where we do it for each collection we defined ourselves. Is there a kind of cross-collection search?
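For readers following along: MongoDB has no cross-collection update, so juju addresses each affected collection with its own txn.Op and applies the whole batch atomically via mgo/txn. Below is a minimal sketch of the ModelOperation pattern manadart points at (PR 10630), assuming the Build/Done interface shape from juju's state package; the renameSpaceOp type, the document fields, the "spaceRefs" collection, and the plain mgo v2 import paths are all illustrative, not juju's actual code.

```go
// Sketch only: the interface shape follows juju's state package as I
// understand it; everything else here is a hypothetical example.
package state

import (
	"gopkg.in/mgo.v2/bson"
	"gopkg.in/mgo.v2/txn"
)

// ModelOperation encapsulates one model change: Build assembles the
// mgo/txn ops for an attempt (attempt > 0 means earlier assertions
// failed and the ops should be rebuilt against fresh state), and Done
// runs once the transaction has been applied or has failed.
type ModelOperation interface {
	Build(attempt int) ([]txn.Op, error)
	Done(err error) error
}

// renameSpaceOp renames a space everywhere it is referenced. Each
// collection gets its own txn.Op; mgo/txn applies them all or none.
type renameSpaceOp struct {
	spaceID string
	newName string
}

func (o *renameSpaceOp) Build(attempt int) ([]txn.Op, error) {
	return []txn.Op{{
		C:      "spaces",
		Id:     o.spaceID,
		Assert: txn.DocExists,
		Update: bson.D{{"$set", bson.D{{"name", o.newName}}}},
	}, {
		// Hypothetical second collection that denormalises the name.
		C:      "spaceRefs",
		Id:     o.spaceID,
		Assert: txn.DocExists,
		Update: bson.D{{"$set", bson.D{{"space-name", o.newName}}}},
	}}, nil
}

// Done gives the operation a chance to translate or annotate errors.
func (o *renameSpaceOp) Done(err error) error { return err }
```

Per the PR, such an operation would then be handed to a single apply entry point (an ApplyOperation-style method on state) rather than each caller assembling raw ops itself.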
[13:18] achilleasa: Yep.
[13:23] stickupkid: thanks
[14:09] Patch I mentioned in stand-up: https://github.com/juju/juju/pull/11134
[14:30] manadart: i can take a look, but I don't think I am the correct person to approve this
[14:31] manadart: I took a look at your comment. I cannot 100% revert the stub_network change, as it stubs `Backing`, which has interface methods tied to the one I created. So I still need to implement the interface in stub_network, right?
[14:32] nammn_de, manadart I think you should understand it, I'll mark it as approved
[14:33] nammn_de: That imports the spaces facade. The general cannot depend on the specific.
[14:34] manadart: nvm me, my ide was slow and showed an error which did not exist in the first place
[14:34] the stub is using networkBacking, not the general backing
[14:36] manadart: quick ho?
[14:38] stickupkid: kk, didn't look at it yet; I haven't worked a lot with our mongo things yet
[14:39] nammn_de: I'm in DAILY.
[14:43] manadart: done and pushed
[15:24] nammn_de: Doing QA. The command output includes the error member. I.e. "error: null".
[15:26] ahh, yeah, makes sense. I just put the default param into the output. I should parse it again to exclude the error. Something else?
[15:26] nammn_de: Exclude.
[15:28] manadart, having fun trying to get microstack running in lxd
[15:28] - Start snap "microstack" (198) services ([start snap.microstack.nginx.service] failed with exit status 1: Job for snap.microstack.nginx.service failed because a timeout was exceeded.
[15:29] fun, fun, fun
[15:29] stickupkid: Yes, I expect there will be tribulation with that.
[15:29] manadart: something else besides excluding the error output, in case there is no error?
[15:34] nammn_de: You check the payload for errors in the API client. They will bubble up to error output at the CLI. We want to exclude the error member from the command output.
[15:34] nammn_de: Commented on the patch.
[15:42] manadart: thanks! The output is put directly into yaml (no formatter option), as in the spec, so I would add an additional kind of result param which does not have error fields.
[15:43] nammn_de: Use a type specifically for display. Branches do this.
[15:46] manadart: yeah, was planning to do so. Still thinking about the best place to put it. Not params; maybe in the show-space command itself, or core/network/types?
[15:46] as branches put them into core/model/generation
[15:51] nammn_de: If you want to return that type from the API client itself, it can go in core/network/space.
[15:51] manadart: ahh great, will do
[16:35] i reworked the output to something like this: https://pastebin.canonical.com/p/wbpNzTWmdw/
[16:35] manadart:^
[16:56] nammn_de: OK. We'll finish it tomorrow.
[16:56] manadart: sure! Thanks for going through with me
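For context, manadart's suggestion amounts to something like the sketch below: a display-only type (hypothetical fields, in the spirit of core/network/space) that simply has no Error member, so marshalling it to YAML can never emit "error: null". The fromAPI helper and the field set are assumptions for illustration, not juju's actual show-space code.

```go
// Sketch of a "type specifically for display"; all names here are
// hypothetical examples, not juju's real API.
package network

// ShowSpace is a display-only view of a space, safe to hand straight
// to the CLI's YAML formatter: there is no error field to leak.
type ShowSpace struct {
	Name    string   `yaml:"name"`
	Subnets []string `yaml:"subnets,omitempty"`
}

// fromAPI is what the API client would do with a facade result: turn
// the payload's error member into a Go error (which bubbles up to the
// CLI's error output) and return only displayable data.
func fromAPI(name string, subnets []string, apiErr error) (*ShowSpace, error) {
	if apiErr != nil {
		return nil, apiErr
	}
	return &ShowSpace{Name: name, Subnets: subnets}, nil
}
```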
[17:53] achilleasa, one for tomorrow https://github.com/juju/juju/pull/11135
[17:54] I went mental with the test to ensure that it works :)
[17:54] stickupkid: nice! Will take a look in the morn
[17:54] wicked
[22:26] why is my postgresql unit stuck waiting for peers?
[22:29] previously it was not stuck, but I deployed a pristine environment and it is stuck
[22:32] Hi skay, off the top of my head it sounds like it's trying to connect to another instance. Are you able to provide exact error/app status strings?
[22:33] tlm: the status message is 'Waiting for peers' and the status is 'waiting'.
[22:34] I'm looking at the charm src now, but I do not know what changed since the last time I did this
[22:34] skay: what about 'status --format=yaml'?
[22:34] I am just deploying one unit
[22:34] skay: diff juju versions?
[22:36] anastasiamac: it's in a private environment
[22:36] skay: format yaml usually has more details
[22:36] * skay nods
[22:38] skay: so u've deployed the charm previously and are just adding another unit now, and that unit does not come up?
[22:39] anastasiamac: no, I've deployed a bundle previously (using a mojo spec) that included postgres. I ripped it down to verify my spec, and this time the postgres unit is stuck in the waiting state
[22:40] I am wondering if it is possibly because a relation is hanging around, or it's waiting on storage?
[22:42] tlm: that's possible
[22:42] maybe not storage.. but sounds like a relation maybe?..
[22:42] anastasiamac: is there a way to list relations?
[22:43] status --relations
[22:43] these are the cases where it is stuck in that state https://paste.ubuntu.com/p/QdJzPN4dHs/
[22:43] a snippet from the charm source
[22:43] it's related to pgbouncer
[22:44] and nrpe
[22:47] oh, there are relations I didn't set up in there
[22:47] skay: is it possible that whatever u did to reap the original deployed postgresql was not clean? and some things were lingering that r preventing this new deployment?
[22:47] skay: yes, so mayb clean the relations :)
[22:47] it's weird though. I ran remove-application on everything
[22:47] altho m a bit surprised that they r not removed when u did the original clean
[22:48] I'll remove everything again and list relations on the empty model just to be sure
[22:48] sounds good