=== blahdeblah_ is now known as blahdeblah
[02:12] thumper: I added https://bugs.launchpad.net/juju/+bug/1730809 to 2.3-rc1, feel free to move off if you think it should wait. seems like a good time to add support, and should be quick
[02:12] Bug #1730809: ec2: add support for C5 instance types
[02:32] +1 from me
[02:40] hmm, maybe not entirely straightforward though, since with C5, EBS volumes are exposed as NVMe devices
[02:40] so probably need some changes to support storage
[02:41] I'll bump to 2.4 for that reason, we can bring it back if there's time
[02:47] axw: sounds reasonable
[02:48] axw: although to be honest, we won't have time
[02:48] too many other things
[02:48] thumper: ok
[02:48] a boy can dream
[03:01] wallyworld, axw, jam: with you shortly
[03:02] ok
[05:55] wallyworld: are you able to add juju/mutex to the github powerup on trello?
[05:55] wallyworld: says it can't find it when I try
[05:55] ok, looking
[05:57] axw: done
[05:57] wallyworld: thanks
[06:00] jam: you have a mac, right? would you be able to run the tests for https://github.com/juju/mutex/pull/4 on it?
[06:00] yes
[06:01] jam: thanks, no rush - when you have some time
[06:39] axw: where did you push the branch to?
[06:39] I'm missing something in the github UX to make sure I'm grabbing your exact branch
[06:39] I see axw:mutex-blocking but what *repo* is that?
[06:39] jam: http://github.com/axw/juju-mutex
[06:40] axw: thx. Am I just dense or does Github not give you a link back to the branch that is being proposed?
[06:40] I can browse the revision, but it does so in the juju/mutex context
[06:40] not in the axw/juju-mutex one
[06:41] axw: http://paste.ubuntu.com/25915717/
[06:41] doesn't seem particularly interesting, are there any manual steps you'd want me to try?
[06:42] ah, I see the fairness program
[06:45] axw: fairness confirmed
[06:45] axw: want me to test on Windows as well?
[06:48] axw: test fails on Windows
[06:50] axw: and the fairness check breaks, too
[06:54] axw: I updated the PR with some Windows results. HereBeABug
[06:59] jam: thanks, will try to repro here. I was running under Wine before, maybe something's a bit off - or could just be timing
[06:59] axw: seems reproducible here, if you need me to, I could try investigating, but I'm happy to do other things too :)
[06:59] jam: I can reproduce the fairness failure with Wine
[07:00] so I'll take it from here, thanks
[07:00] I've got a Windows laptop too, it's just a bit slower going back and forth
[07:02] axw: yeah, I can guess that maybe flock et al depend on kernel things, and so Wine's emulation isn't exact
[07:03] axw: thoughts on me landing the mgo patches? Care to have a quick HO 'cause I want to talk through an edge case
[07:03] jam: sure
[07:04] axw: https://hangouts.google.com/hangouts/_/canonical.com/morning-jam?authuser=1
[07:42] axw: so... small hiccup.
[07:42] abort-or-reload wants the list of revnos for the docs in the txn op
[07:42] not sure what it's doing with it yet
[07:43] but you don't really get that revno until you hit prepared (I think)
=== frankban|afk is now known as frankban
[07:57] jam: instead of $addToSet and then $pullAll, I *think* you could use $slice with $addToSet and then you'll have the revno as well as the current queue
[07:58] jam: you do have the revno in the txnInfo struct
[07:59] or is that not what you need?
[08:02] axw: that's not what I need. Txn docs include the revno of all the related documents in the txns.Ops slice
[08:02] it's accumulated when you go from preparing => prepared
[08:03] but if we fail to go to prepared half-way
[08:03] then we don't have the revno for all of the docs we didn't get to
[08:03] I see
[08:03] I'm tracking through to see if it matters
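
For context on the exchange above: mgo/txn keeps a per-document "txn-queue" of transaction tokens. The Go sketch below, using a made-up collection, document id, and token against a local MongoDB, only illustrates the $addToSet/$pullAll enqueue/dequeue shape being discussed; it is not the real mgo/txn code, and it does not attempt axw's $slice idea.

package main

import (
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	// Assumes a MongoDB running locally; names below are illustrative only.
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	docs := session.DB("example").C("docs")

	// Make sure the example document exists (the revno field is illustrative).
	if _, err := docs.UpsertId("doc-1", bson.M{"$setOnInsert": bson.M{"revno": 1}}); err != nil {
		log.Fatal(err)
	}

	token := "example-txn-token" // hypothetical transaction token

	// While a txn is being prepared, its token is added to the
	// document's txn-queue with $addToSet...
	if err := docs.UpdateId("doc-1", bson.M{
		"$addToSet": bson.M{"txn-queue": token},
	}); err != nil {
		log.Fatal(err)
	}

	// ...and once the txn is applied or aborted it is removed again
	// with $pullAll.
	if err := docs.UpdateId("doc-1", bson.M{
		"$pullAll": bson.M{"txn-queue": []string{token}},
	}); err != nil {
		log.Fatal(err)
	}
	log.Println("token enqueued and dequeued")
}
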
[10:56] balloons: I'm looking at https://bugs.launchpad.net/juju/+bug/1701142, and noticed that deploy-trusty-amd64-vsphere is gone. is that because vsphere was busted? is it coming back?
[10:56] Bug #1701142: juju destroy-controller failed on vSphere.
[10:57] balloons: we have "add-cloud-vsphere", maybe that's its replacement?
[12:32] balloons: did you change the password for aron? I can't log in anymore
[12:38] welp, braixen is buggered again
[13:09] axw, :(
[13:09] I didn't change anything
[13:09] balloons: okey dokey. I'm not sure what to do now. do we have access to the MAAS that's running aron?
[13:10] We don't AFAIK. I was surprised you could ssh in
[13:10] balloons: I was ssh'ing to aron, not the MAAS. isn't aron ours?
[13:11] I was hoping IS would help us understand how the two machines are set up / ensure access
[13:11] It predates me, and I had to cobble together some info to figure out it existed and how to access it.
[13:12] I can go look at the original RT and see if there are more clues
[13:12] balloons: there's https://wiki.canonical.com/InformationInfrastructure/IS/JujuVMware
[13:12] But yes, both braixen and Aron are ours
[13:12] balloons: do you know what TUP means?
[13:14] Nice axw. I never knew about that page
[13:14] No, not sure about TUP. Just an enigma for secrets
[13:16] Let's just ask IS
[13:16] balloons: ah, derp, the password is the same. my script was munging the password
[13:19] axw, please send along what you do if successful so we have it for our notes
[13:20] balloons: sure
[13:24] balloons: sent an email. I just deleted the juju VMs and it seems happy now :/
[13:24] axw, on the deploy we removed it as redundant. network-health-vsphere is the replacement
[13:24] jam: tyvm for the review, will look to add more tests for compat tomorrow, then land
[13:24] balloons: ok
[13:24] axw, right, need to add a cleanup script to that substrate now
[13:25] balloons: I failed to include the command, I just did "govc vm.destroy juju-*"
[13:25] axw, perhaps something you could easily whip up?
[13:25] balloons: that leaves some VM folders behind, but there doesn't appear to be any govc command to delete them
[13:26] balloons: maybe? depends on what else I have going on tomorrow, need to finish some other stuff off too
[13:26] Basically we'd want to remove instances older than a couple of hours, which is generally how we do it. And run that on cron every hour
[13:26] and also get to the bottom of why this happened
[13:26] don't want to be hosing customer systems, if it is juju that's at fault
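
A rough sketch of the cleanup job being discussed here: the only command taken from the log is "govc vm.destroy juju-*"; govc is assumed to be installed and already configured for the vSphere endpoint, and the age-based filtering balloons mentions is left out because it would need additional queries.

package main

import (
	"log"
	"os/exec"
	"time"
)

// cleanup runs the command axw used above. govc is assumed to be
// configured via its environment (endpoint and credentials).
func cleanup() {
	out, err := exec.Command("govc", "vm.destroy", "juju-*").CombinedOutput()
	if err != nil {
		log.Printf("cleanup failed: %v\n%s", err, out)
		return
	}
	log.Printf("cleanup ok\n%s", out)
}

func main() {
	// Stand-in for the hourly cron job balloons describes; filtering by
	// instance age is deliberately omitted in this sketch.
	cleanup()
	for range time.Tick(time.Hour) {
		cleanup()
	}
}
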
[13:27] axw, right. Just asking ;) it's a task on the list, but vsphere needs to be working first
[13:27] I'm just glad you have a working playground now.
[13:27] balloons: yup, I'll add a card and add myself to it. I can advise if somebody gets to it first
[13:28] There is a card in the quality section, you can put your face on
[13:28] balloons: no QA board yet?
[13:28] oh I don't see it - maybe I don't have access
[13:28] No, I'll do it today. Haven't moved from leankit yet
[13:29] Card is still in there
[13:29] right I see
[13:29] Didn't finish the migration
[13:29] I'll look tomorrow then, I'm going to sign off shortly
[13:29] Anyways, thank you
[13:29] balloons: nps
[13:29] You have a lovely evening
[17:08] axw: sup
[17:08] ahh I see you are signing off
[17:08] no worries
[17:08] hitting some crazy issues with storage
[17:09] I'll file a bug
[17:44] hml: I responded to your email
[17:44] let me know if I can help clarify things
[17:45] jam: ty - i'm taking a look
[17:49] jam: one thing to note - right now DeriveAvailabilityZones only looks at StartInstanceParams.Placement and the volume availability zones; spaces/constraints are handled in StartInstance(). It'd be easy enough to move spaces to DeriveAvailabilityZones, however
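
To make hml's last point concrete, here is a simplified, hypothetical sketch of that split: zone derivation considers only the placement directive and volume zones, while space constraints remain a StartInstance concern. The types below are illustrative stand-ins, not Juju's real environs interfaces.

package main

import (
	"fmt"
	"strings"
)

// startParams is a stand-in for StartInstanceParams; it is NOT Juju's
// real type, just enough fields to show the split described above.
type startParams struct {
	Placement   string   // e.g. "zone=us-east-1a"
	VolumeZones []string // zones of volumes that must be attachable
	Spaces      []string // handled later, in StartInstance, not here
}

// deriveAvailabilityZones mirrors the behaviour described above: only the
// placement directive and the volume zones feed the result.
func deriveAvailabilityZones(p startParams) []string {
	var zones []string
	if strings.HasPrefix(p.Placement, "zone=") {
		zones = append(zones, strings.TrimPrefix(p.Placement, "zone="))
	}
	zones = append(zones, p.VolumeZones...)
	return zones
}

func main() {
	p := startParams{
		Placement:   "zone=us-east-1a",
		VolumeZones: []string{"us-east-1b"},
		Spaces:      []string{"internal"}, // ignored by derivation today
	}
	fmt.Println(deriveAvailabilityZones(p)) // [us-east-1a us-east-1b]
}
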