[00:55] thumper: see the comment on the juju status bug? we have had discussions previously as to whether juju client should auto retry and have said no it shouldn't. it seems that's something juju wait could reasonably do?
[01:02] +1 that 'wait' should do it
[01:04] veebers: so under stress, cannot reproduce this intermittent failure ;( however, if i interrupt my stress testing, I am getting the same error.. m pondering what to do next (m not really keen to jump on ci machine... rather work locally)
[01:25] anastasiamac: odd. Sounds like something is interrupted. Next time we see it in CI let's have a poke around on the lxd container that gets left there and check logs etc.
[01:25] anastasiamac: as to repro locally, um, are you running it in a container?
[01:27] anastasiamac: I'm just checking out what else we do in those tests
[01:27] (to run those tests)
[01:28] veebers: no, not in a container locally...
[01:28] veebers: yes, we should
[01:29] anastasiamac: is there a specific arch that it fails on?
[01:41] anastasiamac: so the only things I can think of that are different from you doing it locally and in ci are: it's a 'shared' machine as in many things can be running on it, it's happening in a lxd container, it's setting up tmpfs of 20G and exporting TMPDIR... (although that machine has 250+GB of RAM so should be safe)
[03:59] yeah, veebers, none of these strike me as a potential cause..
[03:59] if it was due to a 'shared' machine, i would have expected failure to be visible under stress...
[03:59] i'll keep digging
[04:02] kelvinliu_: a small k8s PR - just some cleanup https://github.com/juju/juju/pull/9145
[04:05] wallyworld, looking now
[04:15] kelvinliu_: maybe we should delete the namespace first
[04:15] i think that would be safer
[04:15] wallyworld, yeah,
[04:19] kelvinliu_: changes pushed
[04:21] wallyworld, looks great, thanks
[04:22] ty
[04:24] babbageclunk: those lease timeout errors do seem to be gone now; my aws k8s deployment seems happy
[04:26] wallyworld: oh, awesome
[04:37] github is down?
[04:48] kelvinliu_: looks like it :-(
[04:52] wallyworld, :-(
[04:52] kelvinliu_: but it just came back
[05:34] yeah, it was just down for ~ half hr
[05:44] wallyworld, got a minute to talk about kubeflow doc?
[05:45] kelvinliu_: yeah, sure
[09:01] anastasiamac: hopefully you've cracked the invalid zip issue!
[09:01] veebers: so far so good...
[09:02] but i'd like to have at least 10 runs without the problem...
[09:03] sweet
[09:03] anastasiamac: thanks for doing this, it's been the bane of my life :D
[09:03] stickupkid: don't thank me yet... m just poking it a bit for now...
[10:13] So what is the story with 2.3 and 2.4 branch dependencies? I check out the latest, run (the old) godeps command, but get build failures.
[10:13] https://pastebin.ubuntu.com/p/266CSCfpKp/
[10:18] ouch, which branch is doing that, 2.4?
[10:19] 2.3 and 2.4
[10:19] let me test that
[10:20] works for me?
[10:20] did you run "export JUJU_MAKE_GODEPS=true; make godeps"
[10:27] manadart: stickupkid: for 2.3 and 2.4 u need to run ^^
[10:27] for 2.5 (develop) run 'make dep' to get dependencies
[10:33] anastasiamac: Thanks. It's the removal of the vendor directory that I missed.
[10:33] manadart: oh yeah, it is easily missed but important :)
[10:37] it does that automatically now in latest 2.4
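A minimal sketch of the dependency workflow described in the exchange above. The make targets and the JUJU_MAKE_GODEPS variable are the ones quoted in the conversation; the checkout path simply assumes a conventional GOPATH layout.

    # Juju 2.3 / 2.4: dependencies are fetched with godeps via the Makefile
    cd "$GOPATH/src/github.com/juju/juju"
    git checkout 2.4
    export JUJU_MAKE_GODEPS=true
    make godeps    # a leftover vendor/ from develop can break the build;
                   # the latest 2.4 Makefile removes it automatically

    # Juju 2.5 (develop): dependencies are vendored with dep instead
    git checkout develop
    make dep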
[11:44] howdy juju world
[12:59] Is there any way to decide which root-disk type to use in AWS when adding a new machine?
[13:28] maaudet: i *think* you only get to specify the root-disk size.. i don't see a "volume-type" here: https://docs.jujucharms.com/2.4/en/reference-constraints -- that said, some aws instances have different storage characteristics (eg, the I3 instances have SSDs; see the storage-optimized tab here: https://aws.amazon.com/ec2/instance-types/). so you might be able to get a root disk type indirectly by --constraints instance-type=foo.
[13:29] maaudet: there *is* support for volume-type when using juju storage (https://docs.jujucharms.com/2.4/en/charms-storage), but that's not root-disk, so it may not be what you're after.
[13:31] maaudet: there's an iops setting I believe. Maybe that's just in juju storage?
[13:31] rick_h_: Yeah, I tried the setting and it only works for additional storages
[13:32] kwmonroe: I think that all instance types default to the same disk type in AWS
[13:32] Although, the default disk type differs between regions <_<
[13:32] Some regions have standard types, others SSD as defaults
[13:33] On the other hand, it seems easy to switch to another disk type from the AWS console
[13:43] so... using 2.4.3, on s390x - i have model-config image-stream set to daily
[13:44] but /etc/cloud/build.info says 20180823, whereas "lxc image list ubuntu-daily:bionic/s390x" shows 20180830
[13:45] is there something else i can check before i log a bug?
[13:45] hey! you're right maaudet. i didn't grok the instance-types correctly, but a quick deploy on i3-large yields the same ol' 8g slow disk as /dev/xvda1. the difference in i3 vs our default is in the instance store (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes), which is temporary storage, and not at all related to the root-disk :/
[13:50] oh
[13:50] image-stream is not container-image-stream
[13:50] thanks kwmonroe !
[13:50] :)
[13:50] hml: so when you say verify lxd-profile, are you thinking that the charm repo would validate the whitelist - or passing a validator to the charm validation?
[13:51] hml: walking through the bundle validation, it's interesting
[13:54] alright so i changed container-image-stream to daily in the model and added a container and didn't get the daily image
[13:54] 5 min til bug
[13:55] stickupkid: I was just thinking to have validate in the charm…, but it doesn't have to be.
[13:55] hml: cool, just working my way through
[13:56] stickupkid: if we use the Profile structure of the lxd api - it's very easy to unmarshal the yaml
[13:56] hml: ooo, let me check that
[13:58] hml: yeah, 100% agree, we could use api.Profile
[14:41] stickupkid: i looked at 9141, added a comment. anything else? i'll put up a failing charm shortly
[14:41] hml: done :D
[15:07] stickupkid: the failing charm pr: https://github.com/juju/juju/pull/9147
[15:08] hml: looks good to me :D
[15:08] stickupkid: ty!
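On the juju storage route mentioned in the AWS root-disk exchange above (volume-type applies to additional storage only, not the root disk), a hedged sketch of what that path looks like; the pool name, charm, and storage label here are illustrative, and the accepted volume-type values depend on the EBS storage provider.

    # create a pool on the AWS/EBS storage provider with an explicit volume type
    juju create-storage-pool ebs-ssd ebs volume-type=ssd

    # attach it as additional storage at deploy time; "pgdata" is just an example
    # storage label, and this does not change the machine's root disk
    juju deploy postgresql --storage pgdata=ebs-ssd,32G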
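For the image-stream confusion above, the two model settings being conflated can be set independently with juju model-config; note that in the log the reporter still saw a stale image after switching container-image-stream and was about to file a bug, so this only names the knobs involved rather than a confirmed fix.

    # image-stream selects the simplestreams channel for machine (cloud) images
    juju model-config image-stream=daily

    # LXD containers use the separate container-image-stream setting
    juju model-config container-image-stream=daily

    # compare against what the daily remote actually publishes
    lxc image list ubuntu-daily:bionic/s390x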