[01:06] <axw> thumper-gym: re Preflight, do you have a preference for splitting it into two methods, one for creating an instance, one for a container on an instance? Then the instance.Instance parameter is never optional.
[01:29] <thumper> axw: sounds very reasonable
[01:29] <wallyworld> thumper: so i missed you before, i was stuffing my face. i got that big branch landed :-)
[01:29] <thumper> axw: for example, the null provider can't create machines, but could perhaps create containers
[01:29] <thumper> axw: what did you think of my method name suggestions?
[01:29] <thumper> wallyworld: good to get it landed
[01:29] <axw> thumper: I quite like Vet, and Precheck isn't bad
[01:30] <axw> not so keen on Probe
[01:30] <axw> or Review
[01:30] <thumper> PrecheckCreateMachine, PrecheckCreateContainer
[01:30] <thumper> ?
[01:30] <axw> yeah I think that sounds decent
[01:31] <axw> thumper: do you think different interfaces are necessary, or is that overkill?
[01:31] <axw> MachinePrechecker, ContainerPrechecker... *shrug*
[01:31] <thumper> I don't mind either way
[01:31] <thumper> chances are
[01:32] <thumper> that every provider will end up having it anyway
[01:32] <thumper> for some precheck stuff
[01:32] <axw> yeah true
[01:32] <axw> I'll just keep it in one then
[01:32] <thumper> ok
[01:32] <axw> cool
[01:32] <axw> thanks!
[01:37] <wallyworld> thumper: if you are tired of coding, i have a small piece of work which uses the new storage stuff to set tools retries when needed. but it can wait if you have other things to do https://codereview.appspot.com/13577045/
[01:37] <thumper> tired? I haven't even started
[01:38] <wallyworld> oops, sorry
[01:38]  * thumper looks
[01:38] <wallyworld> you don't have to
[01:53] <thumper> did
[01:57] <wallyworld> thumper: with bool params, i hear you but they are used everywhere. and there are others in the tools area and felt consistency was better here
[01:58] <thumper> I know... but I still feel the need to push back
[01:58] <thumper> what about type AllowRetry bool
[01:58] <wallyworld> alright
[01:58] <thumper> and then two const values
[01:59] <thumper> it doesn't have to be exactly that
[01:59] <thumper> but play and see what feels good
[01:59] <wallyworld> yeah, i know the pattern to use. just felt wrong only changing one instance of it
[02:00] <wallyworld> i'll finish my current branch and go back to it soon
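The pattern thumper is pushing for, sketched with hypothetical names (juju-core's actual declarations may differ): a named bool type plus two constants, so call sites document themselves instead of passing a bare `true`:

```go
package main

import "fmt"

// AllowRetry is a named bool type; the two constants make call sites
// self-documenting, unlike a bare fetchTools(true).
type AllowRetry bool

const (
	RetryAllowed AllowRetry = true
	RetryDenied  AllowRetry = false
)

// fetchTools is a hypothetical stand-in for a tools-retrieval function
// that previously took a plain bool parameter.
func fetchTools(retry AllowRetry) string {
	if retry == RetryAllowed {
		return "fetching tools with retries"
	}
	return "fetching tools once"
}

func main() {
	fmt.Println(fetchTools(RetryAllowed))
	fmt.Println(fetchTools(RetryDenied))
}
```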
[02:11] <hazmat> anyone know where we are filing doc bugs?
[02:11] <davecheney> hazmat: juju-core docs series isn't a bad start
[02:11] <davecheney> they all kind of go to the same place/people anyway
[02:11] <hazmat> yeah..
[02:12] <hazmat> davecheney, thanks
[02:32] <davecheney> % juju scp gccgo/0:/home/ubuntu/gccgo.tar.bz2 .
[02:32] <davecheney> ^ :heart juju
[02:56] <hazmat> fwereade, curious what you think of this https://bugs.launchpad.net/juju-core/+bug/1227450
[02:56] <_mup_> Bug #1227450: juju does not retry provisioning against transient provider errors <juju-core:New> <https://launchpad.net/bugs/1227450>
[02:57] <hazmat> fwereade, it seems with 1.14 things are a bit better for recovering without waiting (ie i can kill machines and services/units without the unit agent coming online)... but ideally the transient would just be auto'd
[03:30] <thumper> wallyworld: coming?
[03:30] <wallyworld> not 1:30 yet
[03:30] <wallyworld> still 10s to go
[03:31] <axw> thumper: wallyworld can you hear/see me in the hangout?
[03:31] <thumper> no
[03:32] <thumper> try this one axw wallyworld https://plus.google.com/hangouts/_/0d2e7f7c6d9229ef8ed15d6c8f7ff08b0de146cc?hl=en
[03:37] <bradm> davecheney: any luck on those charm reviews? :)
[03:51] <davecheney> bradm: i am really sorry
[03:51] <davecheney> i know they are reviewed but have not made it into the charm store
[03:51] <davecheney> i will try to find out (again) what is going on
[03:53] <bradm> davecheney: no worries, I'll just prod you every now and then about it :)
[03:56] <davecheney> bradm: i will follow up with marcoceppi
[04:11]  * thumper twiddles thumbs waiting for lbox before going to make coffee
[04:12]  * wallyworld looks at thumper's review
[04:12]  * thumper noticed something wrong
[04:12] <thumper> poo
[04:12]  * thumper leaves it for now as an exercise for wallyworld to find
[04:13]  * thumper -> coffee
[04:17]  * thumper needs to help with early dinner
[04:17] <thumper> back for meetings tonight
[05:01] <rogpeppe> mornin' all
[05:04] <davecheney> o/
[05:13] <rogpeppe> davecheney: hiya
[05:41] <hazmat> just ran into my first real world user who has a default vpc
[05:51] <davecheney> zomg
[05:51] <davecheney> how did it go
[05:51] <davecheney> ?
[05:54] <hazmat> davecheney, it didn't work with juju .. throws an error
[05:54] <hazmat> thumper-afk, how do you remove a container on a machine
[05:55] <davecheney> hazmat: poop
[06:30] <bradm> is there some best practice about how to ship updated code to juju instances via a charm?  right now I've got to update the stuff from python-moinmoin to do openid properly, and I'm shipping files, which just feels like a terrible unmaintainable hack.
[06:30] <bradm> I could do patches, but thats only slightly less of an unmaintainable hack
[06:31] <bradm> I could try to get the code upstream, but that won't always happen
[06:31] <davecheney> bradm: if you're talking about including code inside the charm
[06:32] <davecheney> that is one way
[06:32] <davecheney> otherwise you could have the charm bzr branch something
[06:32] <davecheney> but putting the code inside the charm will at least ensure that all units of that service get the same copy
[06:32] <bradm> davecheney: yeah, basically I need to patch the python code that the python-moinmoin deb puts on disk for openid
[06:33] <davecheney> sounds nasty any way you cut it
[06:33] <bradm> lp:~brad-marshall/charms/precise/python-moinmoin/python-rewrite
[06:34] <davecheney> quite a few charms have a configuration setting to control adding a ppa
[06:34] <davecheney> so if you have that config set
[06:34] <bradm> thats what I've gotten so far, it does seem to work ok, I've got an instance up on canonistack
[06:34] <davecheney> the install hook will use the ppa version
[06:34] <bradm> ooh, that could work
[06:34] <bradm> it feels like a less terrible hack
[06:35] <davecheney> bradm: the wordpress charm is a good place to look
[06:36] <bradm> davecheney: ta, I'll have a look at that, it feels like at least then I'll know it'll all work together
[06:37] <bradm> it does mean I'd have to keep on top of new versions of moin that come out, and have a way of getting people to update
[06:38] <davecheney> forks suck
[06:39] <bradm> ah, a downside of that is you have to assume egress access too, which wouldn't work in our environment
[06:40] <bradm> maybe I should just try and get the code upstream, thats the least sucky option
[06:41] <rogpeppe> anyone for a review? https://codereview.appspot.com/13778043
[07:10] <yolanda> hi, any update on the juju failure from yesterday?
[07:32] <rogpeppe> yolanda: hi
[07:33] <yolanda> hi rogpeppe
[07:33] <rogpeppe> yolanda: i got a question that you may be able to answer
[07:33] <yolanda> tell me
[07:33] <rogpeppe> yolanda: do you still have access to an instance that this happened on?
[07:33] <yolanda> rogpeppe, i destroyed it
[07:33] <yolanda> but i can create a new one with the same problem
[07:34] <rogpeppe> yolanda: that would be great. i'd like to find out the contents of /proc/sys/vm/overcommit_memory
[07:34] <yolanda> ok, just a moment
[07:34] <rogpeppe> yolanda: thanks
[07:34] <yolanda> i need to have a working juju for the work i'm doing now so i'm glad to help
[07:36] <rogpeppe> yolanda: for the record, here's the question i asked, and its reply from one of the Go core team: http://paste.ubuntu.com/6127199/
[07:37] <yolanda> shall i try with a bigger instance?
[07:40] <rogpeppe> yolanda: if you can, that would be an excellent thing to try, yes
[07:40] <rogpeppe> yolanda: but i'd like to see the overcommit_memory thing on an instance where the problem has happened
[07:40] <rogpeppe> yolanda: so that we can try to pin down the issue
[07:40] <yolanda> ok, still bootstrapping, it takes time to sync tools
[07:44] <yolanda> rogpeppe, /proc/sys/vm/overcommit_memory = 0
[07:46] <rogpeppe> yolanda: ah ok, i think that probably settles it
[07:46] <rogpeppe> yolanda: thanks
[07:46] <rogpeppe> yolanda: i guess that too much is going on at bootstrap time
[07:47] <rogpeppe> yolanda: perhaps we should run juju as root
[07:49] <yolanda> rogpeppe, shall i try with a bigger instance to rule out memory?
[07:49] <rogpeppe> yolanda: yes please - i predict it'll work fine on a bigger instance
[07:50] <yolanda> let me try
[07:50] <yolanda> by default it uses 512mb right?
[07:53] <rogpeppe> yolanda: i don't think there's any default - it probably just chooses the smallest available
[07:53] <yolanda> trying with 1024
[07:54] <rogpeppe> mgz: ping
[08:07] <yolanda> rogpeppe, with 1gb of memory works
[08:07] <yolanda> so yes, it's a ENOMEM issue
[08:07] <rogpeppe> yolanda: ok, thanks, at least we know where we are now
[08:08] <yolanda> but this was working until yesterday afternoon, could it be related to some update?
[08:08] <rogpeppe> yolanda: perhaps something different is being done at bootstrap
[08:08] <rogpeppe> yolanda: which uses more memory
[08:09] <yolanda> i'll deploy a service with 1gb of memory now to test
[08:10] <rogpeppe> yolanda: i'm afraid i don't know much about the details of how the openstack instances are set up
[08:11] <yolanda> at least the defaults now should be set to 1GB to work
[08:22] <rogpeppe> yolanda: yes. or we could set overcommit_memory to 1, i guess
[08:23] <rogpeppe> yolanda: or, even better, work out what's taking all that memory at bootstrap time :-)
[08:23] <rogpeppe> yolanda: go does grab a lot of VM at init time, although it doesn't touch it until it actually needs it
[08:25] <yolanda> rogpeppe, but that's not directly related to juju, rather to cloud-init, right?
[08:42] <rogpeppe> yolanda: well, juju is written in Go, so the VM issue is partially Go-related
[08:43] <rogpeppe> yolanda: but what's running at bootstrap time is indeed cloud-init and image-related
[08:44] <yolanda> rogpeppe, i was asking so i can file a bug, but i'm not sure where to file it
[08:49] <rogpeppe> yolanda: file it against juju, because there are several possible ways to solve the problem, not all of them in cloud-init.
[08:50] <yolanda> rogpeppe, ok
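For reference, the value yolanda was asked to check can be read programmatically. A small Go sketch, Linux-only; the procfs path is the standard kernel location, everything else here is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readOvercommit returns the kernel's VM overcommit policy from procfs:
// 0 = heuristic overcommit (the value seen above), 1 = always overcommit,
// 2 = never overcommit.
func readOvercommit() (string, error) {
	b, err := os.ReadFile("/proc/sys/vm/overcommit_memory")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := readOvercommit()
	if err != nil {
		fmt.Println("cannot read overcommit policy (not Linux?):", err)
		return
	}
	fmt.Println("vm.overcommit_memory =", v)
}
```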
[09:05] <jam> yolanda, rogpeppe: If you are not overcommitting memory, and go is asking for a bunch that it will use later (but doesn't touch yet), doesn't that cause this issue?
[09:05] <rogpeppe> jam: indeed it does
[09:05] <jam> or is the go request "give me address space" different from "allocate some memory to me"
[09:05] <jam> rogpeppe: I know internally we have a default "minimum memory" of 1GB for VMs.
[09:06] <jam> though I think you can pass your own constraints to override it.
[09:06] <rogpeppe> jam: i'm surprised that when yolanda used a 1GB constraint it caused it to work
[09:06] <jam> rogpeppe: it might depend on which provider
[09:06] <yolanda> works like a charm now
[09:06] <rogpeppe> jam: if we do have that minimum memory default constraint
[09:07] <rogpeppe> jam: yeah. are we getting this stuff from simplestreams now? i wonder if something changed there.
[09:07] <rogpeppe> yolanda: which charm would that be? :-)
[09:09] <jam> rogpeppe: environs/instances/instancetype.go: "minMemoryHeuristic = 1024"
[09:09] <jam> which I'm pretty sure is in MB
[09:10] <rogpeppe> jam: if the instance types were somehow reporting the wrong value, then it might be choosing the wrong instances, i suppose
[09:11] <rogpeppe> jam: hmm, but then the change in constraints value wouldn't affect it.
[09:11] <jam> rogpeppe: theoretically. It is possible that we are getting our units wrong. Do you know what provider this is?
[09:11] <rogpeppe> jam: this is on canonistack
[09:12] <rogpeppe> jam: and the issue only just started happening
[09:12] <rogpeppe> jam: so *something* has changed recently
[09:13] <jam> rogpeppe: do you know if there was any change like lcy01 => lcy02 ?
[09:14] <jam> 01 runs Openstack F while 02 runs Grizzly, IIRC
[09:14] <rogpeppe> jam: the same issue occurs in both, afair
[09:15] <yolanda> rogpeppe, https://bugs.launchpad.net/juju-core/+bug/1227533
[09:15] <_mup_> Bug #1227533: Juju fails to bootstrap if memory is lower than 1GB <juju-core:New> <https://launchpad.net/bugs/1227533>
[09:16] <rogpeppe> yolanda: thanks
[09:20] <jam> yolanda: are you passing a constraint? or just no constraints?
[09:20] <yolanda> jam, now i pass a constraint for 1024mb, without constraints it fails
[09:21] <yolanda> i tried with canonistack only, but i think it was failing under other environments
[09:21] <jam> yolanda: strange, because I bootstrapped to canonistack 2 days ago, and things are working. I'll see if I can reproduce it
[09:21] <yolanda> jam, yes, it started to fail yesterday afternoon
[09:21] <yolanda> i've been playing with juju and canonistack for weeks
[09:22] <jam> yolanda: I wonder if a new instance type showed up in the catalogue or something.
[09:22] <yolanda> it's grabbing from simple-streams, so maybe, yes
[09:22] <jam> yolanda: so instance types are in the openstack Flavor catalogue. Vs *image* which is from simplestreams
[09:22] <jam> Image == Ubuntu 12.04
[09:22] <jam> Instance == m1.small
[09:23] <jam> yolanda: it is possible that we got a new image, too, which might have changed overcommit
[09:23] <yolanda> jam, do you have some booted machine previous to the failure? then you can check for the overcommit
[09:24] <jam> yolanda: I do, give me a sec
[09:25] <jam> my instance has overcommit_memory = 0 as well
[09:25] <jam> free says I have 512MB, though
[09:25] <yolanda> same as mine that were failing
[09:25] <jam> top agrees
[09:26] <jam> so it is always possible that canonistack itself is under mem pressure because of a lot of people starting instances, etc. But if it is reliable and not intermittent then I doubt that is the issue
[09:27] <yolanda> jam, it's not intermittent, happens all time since yesterday, in the 2 zones, at different hours
[09:27] <jam> yolanda: so I'll try quickly with 1.14 that is in the stable ppa, I don't know if I can get 1.12
[09:28] <yolanda> jam, i tested with 1.14 today
[09:28] <yolanda> same problem, now i'm using that
[09:28] <jam> rogpeppe: so at the least, we have a bug that we think we use a min-memory heuristic of 1GB but don't
[09:28] <yolanda> yesterday i tried with 1.12
[09:28] <rogpeppe> jam: well, i think that's true - but it would be good to check the actual memory of the provided instance
[09:29] <jam> rogpeppe: I did, the instance I have running for a couple of days is 512MB
[09:29] <rogpeppe> jam: cool
[09:30] <jam> rogpeppe: And I just bootstrapped now and got the same 512MB
[09:30] <rogpeppe> jam: i've got a proposal for review, if you fancy taking a look: https://codereview.appspot.com/13778043
[09:31] <fwereade> rogpeppe, are you expecting https://codereview.appspot.com/13493044/ to go anywhere, or are you dropping it? I've kinda forgotten context from before this week
[09:31] <rogpeppe> fwereade: ^
[09:31] <fwereade> rogpeppe, I'm just going through from the top today, trying to approach the bottom
[09:31] <jam> rogpeppe: I do see the failure like yolanda mentioned, Cloud init reports that something failed, and I have "fatal error: runtime: ..."
[09:31] <fwereade> rogpeppe, but it's also my meeting day, so...
[09:32] <rogpeppe> jam: ah, so juju failed to start
[09:32] <jam> rogpeppe: right
[09:32] <fwereade> rogpeppe, also https://codereview.appspot.com/13512051/
[09:32] <axw> fwereade: thansk for https://codereview.appspot.com/13635044/ I'm just waiting for sshstorage to land first.
[09:32] <rogpeppe> fwereade: ah, i'll submit the latter; the former can wait, i think
[09:32] <axw> thanks*
[09:33] <thumper> rogpeppe: hey
[09:33] <rogpeppe> thumper: yo
[09:34] <jam> rogpeppe: https://codereview.appspot.com/13512051/ I think Tim has a branch that changes some names, so it will conflict (eventually), otherwise seems ok
[09:38] <jam> fwereade: https://bugs.launchpad.net/juju-core/+bug/1227450 Are we intending that if the first request to Provision fails, we will ever try again?
[09:38] <_mup_> Bug #1227450: juju does not retry provisioning against transient provider errors <juju-core:New> <https://launchpad.net/bugs/1227450>
[09:38] <jam> I thought we intentionally weren't restarting things people manually stopped
[09:38] <jam> which sort of falls into a similar category
[09:39] <fwereade> jam, the original intent was that we hook up juju resolved
[09:39] <jam> fwereade: so you could run "juju resolved" and it would try to provision again?
[09:39] <fwereade> jam, yeah, that's the idea
[09:40] <dimitern__> fwereade, hey, I see you didn't have time for this - can you take a look now please? https://codereview.appspot.com/13501051/
[09:41] <fwereade> dimitern__, I'm ...2 away from it in the list
[09:42] <dimitern__> fwereade, ok
[09:42] <fwereade> dimitern__, thanks for your patience
[09:45] <dimitern__> fwereade, no rush, just reminding :)
[10:01] <thumper> fwereade: coming?
[10:06] <hazmat> is there a tag for 1.14?
[10:43] <jam> mgz: can you link me the document?
[10:59] <jam> thumper, fwereade: 1 min to meeting
[11:06] <jam> mgz: fwereade, rogpeppe: https://codereview.appspot.com/13562045/ is the critical fix for Openstack security groups
[11:06] <mgz> jam: sorry, was paged up
[11:11] <rogpeppe> jam: i don't think it would be too hard to add a live test that checked exposing
[11:11] <rogpeppe> jam: but it would be slow to run
[11:12] <rogpeppe> jam: unless we jammed it in the kitchen sink of BootstrapAndDeploy
[11:12] <jam> rogpeppe: we don't have any tests today that run a custom program on the remote machine, so I didn't have much to go off of. It would be possible, but I'd rather land the fix and have done manual testing.
[11:12] <rogpeppe> jam: agreed, good to land the fix, but please file a bug
[11:15] <jam> rogpeppe: https://bugs.launchpad.net/juju-core/+bug/1227586
[11:15] <_mup_> Bug #1227586: cross-provider test that we don't expose non-juju service ports <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1227586>
[11:15] <rogpeppe> jam: thanks
[11:17] <jam> rogpeppe: note also that we have bug #1217595 which means that "juju upgrade" won't fix the security groups that exist, and "juju destroy-environment" doesn't delete security groups so "juju destroy-environment && juju bootstrap" will leave you vulnerable, but I'm thinking to fix #1217595
[11:17] <_mup_> Bug #1217595: security groups reused without ensuring validity <canonical-webops> <juju-core:Triaged> <https://launchpad.net/bugs/1217595>
[11:19] <jam> bug #1227588
[11:19] <_mup_> Bug #1227588: destroy-enviroment does not delete security groups <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1227588>
[11:24] <rogpeppe> jam: i don't think that the old security groups pose a security hazard, on ec2 anyway
[11:24] <rogpeppe> jam: just a resource problem
[11:24] <rogpeppe> jam: because the ensures that if old security groups exist, they have the correct permissions
[11:25] <rogpeppe> jam: s/because the/because the code/
[11:25] <rogpeppe> jam: ah, i hadn't seen 1217595
[11:26] <rogpeppe> jam: do you know what provider that's using?
[11:26] <jam> rogpeppe: openstack at least
[11:26] <rogpeppe> jam: i remember writing logic in the ec2 provider that specifically tried to cope with that case
[11:26] <jam> rogpeppe: note that I think we copied that behavior from ec2
[11:27] <jam> it detects a duplicate, though I'm trying to see if it updates it
[11:27] <jam> "if err == nil { g = resp.SecurityGroup"
[11:27] <rogpeppe> jam: see ec2.environ.ensureGroup
[11:27] <jam> I do see it doing a set stuff
[11:28] <jam> to find what to revoke
[11:28] <rogpeppe> jam: it looks like the openstack provider doesn't do the right thing
[11:29]  * rogpeppe gets some lunch
[11:31] <mgz> rogpeppe: that was changed in codereview.appspot.com/11655043
[11:32] <mgz> the alternative is using the same fiddly stuff that ec2 does with security groups
[11:32] <mgz> really the code should be unified regardless
[11:36] <jam> mgz: I think we should move the permSet into a shared module and both use it
[11:38] <jam> mgz: right, so I'm fine with doing a Get before we do Create to avoid the duplicate-with-quantum bug
[11:38] <jam> but we can still take the thing we have and do a set diff to figure out what to put on it
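The set-difference logic jam describes (take the desired rules, diff against the existing group, grant what's missing and revoke what's stale, the same arithmetic ec2.environ.ensureGroup uses) can be sketched like this; `ipPerm` and its fields are simplified stand-ins for the real EC2/OpenStack rule structs:

```go
package main

import "fmt"

// ipPerm is a simplified ingress rule; it is comparable, so it can
// key a map used as a set.
type ipPerm struct {
	Protocol string
	FromPort int
	ToPort   int
}

type permSet map[ipPerm]bool

// diff returns the rules to grant (in want, missing from have) and the
// rules to revoke (in have, absent from want).
func diff(want, have permSet) (grant, revoke permSet) {
	grant, revoke = permSet{}, permSet{}
	for p := range want {
		if !have[p] {
			grant[p] = true
		}
	}
	for p := range have {
		if !want[p] {
			revoke[p] = true
		}
	}
	return grant, revoke
}

func main() {
	want := permSet{{"tcp", 22, 22}: true, {"tcp", 80, 80}: true}
	have := permSet{{"tcp", 22, 22}: true, {"tcp", 8080, 8080}: true}
	grant, revoke := diff(want, have)
	fmt.Println(len(grant), len(revoke)) // 1 1
}
```

Putting `permSet` and `diff` in a shared module, as jam suggests, would let both providers reuse the same reconciliation code.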
[11:41]  * TheMue => AFK
[11:44]  * thumper -> bed
[11:49] <dimitern__> fwereade, review poke :)
[11:50] <rogpeppe> mgz: it doesn't look like the openstack provider ever revoked security group rules, even before that change
[11:50] <rogpeppe> dimitern__: i'll do yours if you do mine: https://codereview.appspot.com/13778043
[11:51] <dimitern__> rogpeppe, looking
[11:59] <dimitern__> rogpeppe, reviewed
[11:59] <rogpeppe> dimitern__: ta!
[12:14] <dimitern__> rogpeppe, fwereade: I have 2 CLs that need reviews please https://codereview.appspot.com/13501051/ and https://codereview.appspot.com/13627051
[12:15] <rogpeppe> dimitern__: i'm already on the first one
[12:15] <dimitern__> rogpeppe, great
[12:20] <jam> dimitern__ how many underscores do you need ? :) Is it possible to share the Machine object somewhere, it sure feels repeated.
[12:21] <dimitern__> jam, not sure what you mean?
[12:22] <mgz> about the underscores or the Machine? :)
[12:23] <dimitern__> it's how we agreed to do the api
[12:23] <jam> for underscores, there appear to be 3 of you
[12:24] <dimitern__> my irc client surely acts funny today
[12:24] <mgz> ah, so that's his trick
[12:24] <jam> for Machine, it looks like a tiny object with 2 bits of state and a callable that has a few apis on it. If it is actually different than the other ones thats fine
[12:24] <dimitern__> jam, they're all different
[12:25] <dimitern__> jam, and expose only a subset of the state.Machine methods, as needed by each worker
[12:26] <yolanda> hi, with the config_changed hook, what's the best way to tell if a config var has changed since the previous invocation? we receive the hook on each config change, but i'd like to take some action only if a given var has a different value. Maybe store the old value somewhere?
[12:27] <dimitern__> yolanda, you could save the old value at each config_changed call, and then use it in the next
[12:28] <dimitern__> yolanda, that will of course trigger a config_changed on the remote unit as well
[12:28] <jam> dimitern__: Id() Life() and Tag() sure seem common, but we can deal with that some other time, I guess
[12:28] <dimitern__> jam, we can yes
[12:29] <yolanda> dimitern__, you mean, saving on a file? not sure if i follow you
[12:29] <dimitern__> yolanda, I mean doing relation-set oldValue=X, but you can use a file as well
[12:30] <dimitern__> yolanda, that way the remote config_changed hook won't be triggered
[12:30] <yolanda> dimitern__, maybe a file is simpler, then i avoid double relation hook call
[12:31] <dimitern__> yolanda, yes
[12:31] <yolanda> i was just wondering if juju had something to retrieve the previous value, it may be useful
[12:32] <rogpeppe> yolanda: i sometimes wonder if we should provide a standard easy way to store persistent state across hook invocations
[12:32] <rogpeppe> yolanda: because everyone reinvents their own wheel there
[12:32] <dimitern__> yolanda, juju doesn't provide that at the moment
[12:33] <yolanda> rogpeppe, dimitern__, sounds like a useful feature
[12:33] <dimitern__> rogpeppe, something like relation-save key=value
[12:34] <rogpeppe> dimitern__: there's not really anything relation-oriented about it
[12:34] <dimitern__> rogpeppe, config settings are stored per relation
[12:35] <dimitern__> rogpeppe, so it makes sense to have a way to store them locally with a hook command, but the name could be anything, yeah
[12:35] <rogpeppe> dimitern__: i'm not necessarily talking about config settings - just persistent state that one hook invocation can set to let another one see
[12:36] <rogpeppe> dimitern__: tbh it's probably just a matter for better standard bash tooling - not something that juju-core needs to be involved in
[12:36] <dimitern__> rogpeppe, should these things be stored in state as well?
[12:36] <rogpeppe> dimitern__: definitely not
[12:36] <dimitern__> rogpeppe, i agree
[12:48] <rogpeppe> dimitern__: https://codereview.appspot.com/13501051/ reviewed
[12:48] <dimitern__> rogpeppe, thanks!
[12:50] <jam> mgz:  did you get a chance to look at: https://codereview.appspot.com/13562045/ ?
[12:51] <jam> or rogpeppe. You made some comments on IRC, but didn't comment on the CL from what I can see
[12:51] <rogpeppe> jam: oh, sorry, i got distracted.
[12:52] <jam> rogpeppe: I'm looking at dimitern's second CL
[12:53] <rogpeppe> jam: cool, thanks - i was some way through it, but happy for you to do it
[12:53] <jam> rogpeppe: well certainly submit what you've gotten through
[12:53] <rogpeppe> jam: i had no comments yet
[12:54] <mgz> jam: looked at, not gone through all the test stuff yet
[12:54] <mgz> will do that now
[13:00] <rogpeppe> jam: reviewed
[13:00] <jam> dimitern: https://codereview.appspot.com/13627051/ reviewed
[13:08] <jam> natefinch: you should be doing some sort of "ssh shared@maas.mallards"
[13:09] <jam> eg Username == shared
[13:09] <natefinch> jam: oh.. duh
[13:10] <natefinch> jam: I guess I expected the config "user" setting to do that for me, but then failed to realize it obviously wasn't
[13:10] <natefinch> jam: using shared@  works perfectly
[13:10] <jam> natefinch: it should
[13:10] <jam> were you accidentally doing nate@ ?
[13:10] <natefinch> jam: nope
[13:11] <natefinch> jam: maybe  it just wasn't picking up the config
[13:11] <jam> natefinch: did you put it into config.personal?
[13:11] <jam> I don't *think* that file is read by default, (It has to be imported somehow)
[13:11] <natefinch> jam: yep
[13:12] <natefinch> I reran sshebang afterward...
[13:12] <jam> natefinch: I think it is ok to use garage maas without calendar as long as you aren't trying to allocate all 16 machines, and clean up after yourself
[13:12] <natefinch> actually I see User nate being specified for *.mallards in config.personal above there.... depending on which one wins
[13:12] <jam> natefinch: your test cases are just provisioning 1-2 machines so it should be reasonably well behaved.
[13:13] <jam> natefinch: I *think* first-entry wins
[13:13] <natefinch> that would do it.  easy enough to try
[13:15] <natefinch> jam:  yep, first one in wins
[13:44] <natefinch> writing wiki docs makes me happy
[14:02] <dimitern__> rogpeppe, fwereade, updated https://codereview.appspot.com/13501051/
[14:05] <fwereade> TheMue, I can't find where your auto-sync-tools code is, did something happen to it?
[14:06] <TheMue> fwereade: have to look myself
[14:08] <TheMue> fwereade: should be this one https://code.launchpad.net/~themue/juju-core/035-bootstrap-autosync merged on Aug, 2nd
[14:12] <TheMue> fwereade: and found it in trunk
[14:13] <fwereade> TheMue, yep, just did likewise, thanks
[14:13] <TheMue> fwereade: yw
[14:13] <fwereade> TheMue, was looking right past it, sorry
[14:13] <TheMue> fwereade: np, as long as my answer is positive (and i haven't missed to merge it *phew*)
[14:51] <rick_h_> niemeyer: ping, any luck with more details on the issue?
[14:52] <niemeyer> rick_h_: Hey
[14:52] <niemeyer> rick_h_: Not really, unfortunately
[14:52] <niemeyer> rick_h_: Now that I'm consciously trying to replicate it I can't either
[14:52] <gary_poster> thanks rick_h_ .  yeah niemeyer, we want to stomp this but can't dupe :-(
[14:52] <hatch> yes very frustrating
[14:52] <rick_h_> niemeyer: ok, we've had 4 people try to replicate without success so far. If we can find something let me know and we'll jump right on it. Where were you linked from when it died?
[14:53] <niemeyer> rick_h_: I've accessed it directly
[14:53] <niemeyer> rick_h_: It actually stopped right on entrance the first time
[14:53] <rick_h_> niemeyer: right, your second email said you were linked there?
[14:53] <niemeyer> rick_h_: I was trying to guess the URL of a charm
[14:53] <niemeyer> rick_h_: So both times were hand-written
[14:54] <niemeyer> rick_h_: Both times CTRL-R solved the issue
[14:54] <rick_h_> niemeyer: ok, hmm. So maybe some race in a deeper url. Ok, well that gives more info to go on trying to replicate
[14:54] <hatch> niemeyer: on entrance - do you mean, 'loading juju-gui' ?
[14:54] <niemeyer> hatch: No, just typing jujucharms.com
[14:55] <gary_poster> :-/
[14:55] <niemeyer> rick_h_: I wouldn't be surprised if it's something timing out
[14:55] <hatch> niemeyer: sorry - I mean, there are two loading messages, one is for the assets and one is for the actual juju connection
[14:55] <niemeyer> hatch: Hmm, ok?
[14:56] <hatch> sounds like you're getting hung up on the connection
[14:56] <niemeyer> hatch: I'm not using any juju environments
[14:57] <niemeyer> hatch: Well, that I know of
[14:57] <hatch> yeah it still connects on sandbox
[14:57] <gary_poster> (to an in-browser-memory fake juju)
[14:57] <niemeyer> I see, ok
[14:57] <hatch> yeah that :D
[14:58] <niemeyer> So, a timeout might explain why it failed on the first try, when all the caches on my path were cold
[14:58] <hatch> yeah oddly enough that should never happen
[14:59] <hatch> I have some ideas to track this down though
[14:59] <gary_poster> right, we need console messages
[14:59] <niemeyer> Okay, I don't have them now, but if it happens again I'll try them
[15:00] <gary_poster> thank you very much niemeyer.  ok hatch, cool, glad you have some ideas.  I was going to suggest that we reply to Gustavo's message with a request for anyone who encounters this issue to please get in touch with us *before* reloading.
[15:00] <gary_poster> I'll send that out quickly
[15:01] <niemeyer> gary_poster: Sorry about that.. it was kind of stupid.. I should have known to observe the console for any hints
[15:01] <gary_poster> np thanks for raising it niemeyer.  at least we know there's a likely issue somewhere
[15:05] <hatch> damn schrodinbugs
[15:06] <rogpeppe> fwereade: hmm, i just noticed that checkers.Set was renamed to testing.PatchValue. i'm not that keen, as the reason for it being in jc was because it has minimal dependencies so that it can be used from internal tests without fear of import loops.
[15:09] <rogpeppe> fwereade: i thought it's worth discussing before i propose a move back though.
[15:10] <mramm> is the lean kit board reasonably up to date for me to look through it for things that have already landed but are not yet in a release?
[15:11] <mramm> fwereade: rogpeppe: dimitern__: TheMue: ^^^^
[15:11] <rogpeppe> mramm: i did put a Doing card on earlier, but tbh I haven't looked at it much since we stopped doing daily kanban runthroughs
[15:11] <mramm> so how are we managing work now?
[15:11] <rogpeppe> mramm: that's bad, i know, sorry
[15:12] <mramm> tickets, lkk, whatever -- something needs to be there
[15:12] <rogpeppe> mramm: yeah. i thought that kanban board discussion gave a good focus actually.
[15:13] <mramm> well, that is for you guys to sort out
[15:13] <mramm> for now I guess I'll try to just look through the commits that have been merged to trunk and go from there...
[15:15] <dimitern__> mramm, mine is always up-to-date
[15:15]  * rogpeppe polishes dimitern__'s halo :-)
[15:16] <dimitern__> :P
[15:16] <rogpeppe> dimitern__, natefinch, TheMue: do you have any opinions about jc.Set vs testing.PatchValue ?
[15:17] <rogpeppe> dimitern__, natefinch, TheMue: (the latter being a new name for the former)
[15:17] <fwereade> rogpeppe, I confess a mild preference for testing.PatchValue
[15:18] <rogpeppe> fwereade: i'd like it not to be in testing
[15:18] <rogpeppe> fwereade: because of the dependency issue
[15:18] <rogpeppe> fwereade: and how about just Patch ?
[15:19] <fwereade> rogpeppe, jc.Patch definitely > jc.Set
[15:19] <fwereade> rogpeppe, I'd +1 that
[15:20] <rogpeppe> fwereade: cool
[15:21] <natefinch> yeah, I like patch
[15:22] <natefinch> Patch implies temporary, set implies permanent
[15:23]  * TheMue votes for Patch too
[15:24] <dimitern__> +1 for Patch
[15:28] <marcoceppi> How is this possible? http://i.imgur.com/7x5eYib.png
[15:28] <mgz> does TestManageStateServesAPI failing on the bot mean anything to anyone?
[15:28] <marcoceppi> Ran debug-hooks during an install error hook, ran resolved --retry, trapped both hooks?
[15:28] <marcoceppi> 1.14.0, should this be a bug or is this expected, or what?
[15:30] <mgz> hm, bug 1219661
[15:30] <_mup_> Bug #1219661: TestManageStateServesAPI is flakey <test-failure> <juju-core:Fix Committed> <https://launchpad.net/bugs/1219661>
[15:31] <mgz> landed on trunk only
[15:36] <bac> hi, go question:  for my juju-core branch, bzr shows i'm out of date by many revisions.  but 'go get -v launchpad.net/juju-core/...' does nothing.  what gives?
[15:36] <rogpeppe> fwereade: one thought on <environmentname>.yaml vs bare <environmentname>, how about a ".juju" extension? that way the files are readily identifiable when sent around the network, and could even be potentially double-clicked to open
[15:38] <bac> gah, nm s/v/u/
[15:44] <mgz> rogpeppe: can I have a rubber stamp on cl 13722051 please?
[15:45] <rogpeppe> mgz: stamped
[15:45] <mgz> ta
[15:55] <rogpeppe> fwereade: ping
[15:56] <fwereade> rogpeppe, pong
[15:57] <fwereade> rogpeppe, ah, sorry, missed the above
[15:57] <rogpeppe> fwereade: just wondering how you feel about the above possibility
[15:57] <fwereade> rogpeppe, it's somewhat interesting
[15:57] <rogpeppe> fwereade: using an extension means that we can be sure that temp files don't clash too
[15:57] <rogpeppe> fwereade: (we don't currently have any restriction on environment names)
[15:58] <fwereade> rogpeppe, yeah, that's not a bad idea... but, hmm, .juju doesn't feel quite right
[15:58] <fwereade> rogpeppe, if anything, .env or something
[15:58] <rogpeppe> fwereade: .jujuenv
[15:58] <fwereade> rogpeppe, not totally in love
[15:58] <rogpeppe> fwereade: neither me
[15:58] <fwereade> rogpeppe, anyway, sorry, I have a call starting
[15:59] <rogpeppe> fwereade: can i leave .yaml for the time being, and we can bikeshed it when we actually hook up the Write code?
[15:59] <fwereade> rogpeppe, ok, sgtm
[16:16] <mgz> arosales, sinzui: landed the required fix on 1.14 branch
[16:16] <sinzui> mgz thank you
[16:16] <arosales> mgz, thanks. Do you have authority to get sinzui access to the go-bot creds?
[16:16] <arosales> needed to make a release
[16:17] <sinzui> ^ you are a faster typer than I am
[16:17] <mgz> I can't add his key to the launchpad account, I can stick a tag on for him today though
[16:17] <sinzui> mgz, that helps.
[16:18] <sinzui> mgz, we also need a 1.14.1 tag on goose at the same rev as 1.14.0
[16:18] <TheMue> fwereade: ping
[16:18] <sinzui> mgz, I want to try using dependencies.tsv for the 1.15+ releases. No more tags
[16:18] <fwereade> TheMue, listening, but in a meeting
[16:19] <mgz> that would be a good thing
[16:19] <TheMue> fwereade: you had a good nose
[16:19] <TheMue> fwereade: see http://paste.ubuntu.com/6128903/
[16:19] <mgz> sinzui: changing the tarball script is easy, the harder half is making the bot do the same, so things actually break if people forget to bump a dep
[16:19] <fwereade> TheMue, nice
[16:20] <TheMue> fwereade: have to see now where exactly it crashes, but it dislikes those statements in the data
[16:20] <TheMue> fwereade: i think it's the $where
[16:20] <sinzui> mgz. yeah, I thought as much. I intended to try a ci approach where the tarball is used as a source for the tests. Release candidates fail if the tarball doesn't build
[16:21] <mgz> ugh, 1.14 is impressively diverged from trunk
[16:21] <mgz> sinzui: I poked tarmac a little to try that approach, but it doesn't fit in well with the current use of the bzr plugin
[16:21] <mgz> just need some time to re-look at the bot setup really
[16:22] <mgz> and understand how tarmac wants to work a little
[16:23] <sinzui> mgz. Understood. We are building a plugin/subordinate charm for the jenkins charms that acts as a tarmac gatekeeper and test runner.
[16:24] <sinzui> We have nothing against tarmac, but it was just one piece that was failing in our setup of charmworld testing
[16:41] <TheMue> fwereade: not so bad after all, it was only the . in the map key. mongo dislikes dots there. eval or db.eval as a value isn't executed
[16:44] <rogpeppe> dimitern__, fwereade, mgz, TheMue: a small change:  https://codereview.appspot.com/13786043
[16:46] <dimitern__> rogpeppe, looking
[16:46] <rogpeppe> dimitern__: thanks
[16:47] <TheMue> rogpeppe: reviewd
[16:47] <rogpeppe> TheMue: ta!
[16:49] <dimitern__> rogpeppe, me too
[16:49] <rogpeppe> dimitern__: ta to you too!
[16:59] <mgz> rogpeppe, fwereade: stamp on cherrypick sg fix to trunk cl  13787043 please!
[17:01] <rogpeppe> mgz: done
[17:13] <rogpeppe> natefinch, dimitern__, fwereade, mgz, TheMue: another small one: https://codereview.appspot.com/13341051
[17:20] <mgz> rogpeppe: I'll run the live tests on that branch
[17:20] <rogpeppe> mgz: thanks
[17:21] <rogpeppe> mgz: i don't think the live tests verify the expose semantics currently, so probably worth doing a sanity check there too, if poss
[17:24]  * lamont has what may be a stupid question...
[17:24] <lamont> To connect to ... insecurely, use `--no-check-certificate'.
[17:24] <lamont> where do I say '--no-check-certificate' for juju (-core) bootstrap to see and use it?
[17:28] <mgz> lamont: juju-core doesn't use ssh to connect, you may want bug 1202163
[17:28] <_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <cts> <papercut> <Go OpenStack Exchange:Confirmed> <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1202163>
[17:28] <TheMue> rogpeppe: reviewed, like it
[17:29] <rogpeppe> TheMue: thanks
[17:33] <natefinch> rogpeppe: is there a NewApiClient that is not from a name? Wondering why you need to say "fromName" when it clearly takes the name as an argument?
[17:33] <TheMue> so, will step out now. see you on monday (tomorrow holiday)
[17:33] <rogpeppe> natefinch: i didn't invent the name and i try to avoid repainting sheds
[17:34] <rogpeppe> natefinch: i agree in principle though
[17:34] <rogpeppe> TheMue: see ya
[17:34] <natefinch> rogpeppe: oh, sorry, I should have scrolled down, missed that it was there before
[17:34] <TheMue> have a good night
[17:46] <lamont> mgz: it's trying to talk to swift, using https because I don't like cleartext tokens and all that
[17:46] <lamont> but the CA cert for the signing cert is not in the ca-certs package
[17:46] <lamont> and I'm too cheap to go buy a cert for this particular venture
[17:47] <lamont> + tar xz -C /var/lib/juju/tools/1.12.0-precise-amd64
[17:47] <lamont> + wget --no-verbose -O - https://swift....
[17:47] <lamont> so what I actually need to know is how to tell whatever is calling wget to say wget --no-check-certificate ...
[17:47] <lamont> which may just mean hacking over the tarball
[17:50] <lamont> except it's fetching the tarball that is failing.
[18:17] <rogpeppe> g'night all
[18:18] <natefinch> g'night
[18:19] <natefinch> mgz: is there a way to do a build such that I can copy the juju client and the tools to another computer for testing purposes?
[18:20] <natefinch> mgz: I finally have a maas setup, but I can't really get juju to connect to it from my local machine
[18:20] <natefinch> maybe that's fixable with some ssh magic, I don't know
[18:20] <jpds_> lamont: That related to https://bugs.launchpad.net/juju-core/+bug/1202163 ?
[18:20] <_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <cts> <papercut> <Go OpenStack Exchange:Confirmed> <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1202163>
[18:20] <lamont> jpds_: probably
[18:20]  * natefinch is not good with teh ssh
[18:21] <mgz> natefinch: you can give sync-tools a directory with stuff in as a param
[18:21] <lamont> and I think I got my mess figured out enough to have ways forward
[18:22] <natefinch> natefinch: I recognize that sentence as valid english, but I don't know how to execute on it :)
[18:22] <natefinch> mgz: err ^^
[20:23] <rogpeppe> natefinch: fancy a review? https://codereview.appspot.com/13489044
[20:23] <rogpeppe> fwereade: ^
[20:23] <rogpeppe> or anyone else that happens to be around
[20:24] <rogpeppe> natefinch: fairly simple stuff
[20:28] <natefinch> rogpeppe: sure
[20:28] <rogpeppe> natefinch: thanks
[21:09] <natefinch> rogpeppe: sorry, got a phone call in the middle, and now don't have time to finish, but I can finish in the morning if it still needs it. Sorry :/
[21:09] <rogpeppe> natefinch: ok, fair enough
[21:09] <rogpeppe> natefinch: if you could publish any comments you already have, that would be great
[21:10] <natefinch> rogpeppe: good point.. I don't have much, it was a long phone call
[21:10] <natefinch> gotta run
[21:11] <rogpeppe> natefinch: g'night
[21:24] <natefinch> rogpeppe: actually, got a reprieve from dinner duties, so I can finish the review
[21:44] <thumper> morning
[21:46] <natefinch> thumper: morning.  How's your team in the Cup doing?  Scoring lots of goalies?
[21:46]  * natefinch doesn't understand sailing.
[21:47] <thumper> natefinch: oracle won the first race, and the second was called off due to wind limits hit
[21:47] <thumper> next race is tomorrow morning (afternoon SFO time)
[21:47] <thumper> nz needs one more to win
[21:47] <thumper> oracle needs 7
[21:47] <natefinch> thumper: wow
[21:47] <natefinch> thumper: ok then
[21:47] <thumper> oracle has fixed their boat now and has a slight speed advantage
[21:48] <thumper> so it makes it interesting
[21:48] <natefinch> hey, interesting is better than boring
[21:49] <natefinch> friend of mine follows the Cup and was complaining for days about all the races getting cancelled
[21:51] <rogpeppe> thumper: a review for you if you wanna: https://codereview.appspot.com/13489044
[21:52] <natefinch> rogpeppe: I just finished that one up btw
[21:53] <bigjools> howdoo
[21:56] <thumper> o/ bigjools
[21:57] <natefinch> whelp, I'm outta here
[21:57] <bigjools> https://bugs.launchpad.net/juju-core/+bug/1227722
[21:57] <natefinch> night all
[21:57] <bigjools> fun
[21:57] <_mup_> Bug #1227722: juju uses tools for the wrong architecture when bootstrapping a MAAS node <maas> <juju-core:New> <https://launchpad.net/bugs/1227722>
[22:00] <thumper> bigjools: I blame wallyworld
[22:00] <rogpeppe> thumper: thanks
[22:00] <thumper> I don't know if it is his fault or not
[22:00] <thumper> but
[22:00]  * thumper shrugs
[22:00] <thumper> rogpeppe: np
[22:00] <bigjools> blame schmame.
[22:01] <bigjools> it's far too early in the morning for blamestorming
[22:01] <wallyworld> never too early
[22:01] <bigjools> never too early to blame wallyworld? I could get behind that :)
[22:02] <wallyworld> yep
[22:05] <bigjools> wallyworld: plus I am happy because I am off on our road trip tomorrow
[22:05] <wallyworld> oh yah
[22:05] <wallyworld> will be great
[22:05] <bigjools> brimmed the fuel tank - $200...eek
[22:06] <wallyworld> i still have another week :-(
[22:06] <wallyworld> petrol is expensive here
[22:06] <bigjools> cheaper than Europe
[22:07] <bigjools> should come down a bit, the A$ made ground against US$ lately
[22:07] <wallyworld> yep :-)
[22:07] <wallyworld> fwereade: you still around?
[22:08] <fwereade> wallyworld, heyhey, yeah, I wanted to chat about your CL, because the last thing I want to do is block you
[22:08] <wallyworld> fwereade: ok, quick hangout?
[22:08] <fwereade> wallyworld, sure
[22:08] <wallyworld> https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
[22:09] <fwereade> wallyworld, you sure that's the link?
[22:09] <wallyworld> it worked for me
[22:09] <wallyworld> i'll try another
[22:10] <fwereade> wallyworld, https://plus.google.com/hangouts/_/67b33e32c2942787f0aa5076ba1f070cd1203c4c
[22:26] <thumper> fwereade: any reason you aren't landing https://code.launchpad.net/~fwereade/juju-core/prepare-leave-scope/+merge/181065 ?
[22:41] <thumper> ironically a test for synchronization fails
[23:05] <thumper> hmm...
[23:05]  * thumper is looking at an intermittent test failure
[23:13]  * thumper tries something...
[23:29]  * thumper proposes
[23:34] <thumper> wallyworld: I'd like axw to review the branch I'm just proposing
[23:34] <thumper> but you could take a look too if you like
[23:35] <wallyworld> ok
[23:35] <thumper> it fixes a race condition in a test
[23:35] <thumper> and also writes the test in a more broken up way
[23:35] <thumper> so three tests, not one test that tests three things
[23:36] <wallyworld> \o/
[23:36] <thumper> https://codereview.appspot.com/13799043 if you are interested
[23:36] <wallyworld> looking
[23:36] <wallyworld> thumper: this is for the 1.15 release if you have a moment https://codereview.appspot.com/13763044
[23:37] <thumper> sure
[23:38] <thumper> wallyworld: I have the gym shortly, but we should catch up after that to talk about logging
[23:38] <wallyworld> sounds good
[23:38] <wallyworld> i have one more branch to write for the release
[23:38] <wallyworld> then testing
[23:38] <wallyworld> then tools cleanup
[23:38] <wallyworld> then release notes
[23:42] <wallyworld> thumper: those revised tests look nice to me
[23:42] <thumper> thanks
[23:42] <wallyworld> smaller tests are good
[23:42] <thumper> looking over yours now, but not sure I'll get it finished before gym time :)
[23:42] <thumper> agreed
[23:42] <wallyworld> np
[23:43] <wallyworld> i'm cleaning up the mps william looked at, and landing those now