[00:52] hazmat: jimbaker: bcsaller: if any of you is around, would one of you be available to help poor bigjools with the problems we are having making the maas provider work? [00:53] * bigjools makes puppy dog eyes [00:55] * SpamapS can try to help [00:55] I doubt I can offer quite the same level of code introspection as them.. but I *did* write a provider many moons ago [00:56] heh [00:57] SpamapS: ok thank you. my problem is that the provisioning agent craps out when it tries to deploy stuff. Let me try to pastebin a log. [00:57] we are confused about system/instance/resource/ etcetc IDs. [00:57] the current branch is here, FWIW: lp:~julian-edwards/juju/maas-system-id [00:59] SpamapS: thanks [01:04] bigjools: I noticed that there was some stuff missing from other places in the code today too. Providers aren't nearly as self-contained as they should be [01:04] SpamapS: :( [01:09] bigjools: pulling your branch now. What exactly can I help with? [01:11] SpamapS: I am trying to get my prov agent log, one sec [01:12] bigjools: just an FYI, you need to add maas to juju/unit/address.py [01:13] bigjools: but I do not know if that is for sure your issue. Most likely not. ;) [01:13] SpamapS: ok thanks [01:14] bigjools, based on the error robbiew forwarded earlier, I would look askance at MaasProvider.get_machines [01:14] SpamapS: so here's a snippet of log: http://pastebin.ubuntu.com/893090/ [01:14] yeah basically the same as that email [01:14] fwereade_: hi! [01:14] bigjools, the description was basically that it was trying to shut down machines that it shouldn't even know about, right? [01:14] bigjools, heyhey :) [01:15] bigjools, that implies that the PA is getting more machines than it should back from provider.get_machines [01:15] fwereade_: well that AND the fact it was shutting down something that was not running (maas returns the 409 CONFLICT for that) [01:15] fwereade_: ah that is useful, thanks [01:15] bigjools, it will only ever try to shut a machine down if the provider tells it it exists [01:16] bigjools: could it be that get_nodes() doesn't limit itself to the nodes that have been acquired? [01:16] bigjools, of all those machines which exist, any which correspond to a machine state will be saved [01:16] flacoste, sounds very likely [01:16] flacoste: yeah could be [01:16] I need to debug this [01:16] flacoste, orchestra had to make that distinction [01:16] on an internet connection that has some b/w [01:18] bigjools, best of luck [01:18] it does look like get_machines shows all machines [01:18] ok I'll look into that, thanks for the advice guys! [01:18] bigjools, I'm afraid I have to sleep now [01:19] bigjools, a pleasure, glad to be of service :) [01:20] bigjools: each of the other providers uses something to filter out the list of all machines that can be seen w/ the service+creds to just the ones pertinent to this environment [01:20] right [01:20] bigjools: for orchestra it was mgmt class, for ec2 the group is used [01:20] I assumed it was just asking about all nodes passed [01:21] bigjools: so in our cases, returning the nodes associated to the user is probably fine [01:21] although [01:22] we might need to introduce a new environment key [01:22] flacoste: how so? [01:22] (in case a user, wants to start two separate environments) [01:22] juju bootstrap; juju bootstrap [01:22] both with the same credentials [01:22] You need something to tag each node. One maas may host many envs for many users. 
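The fix being converged on above is for the provider's get_machines to report only nodes this environment has actually acquired (and, ideally, tagged as its own), so the provisioning agent never tries to shut down machines it does not own. A minimal sketch of that filtering, assuming a hypothetical maas_client with a list_nodes call and illustrative node fields; this is not the real provider code:

    class ProviderMachine(object):
        def __init__(self, instance_id, dns_name):
            self.instance_id = instance_id
            self.dns_name = dns_name

    def get_machines(maas_client, environment_name, instance_ids=None):
        machines = []
        for node in maas_client.list_nodes():
            # Skip nodes juju never acquired; trying to stop those is what
            # produces the 409 CONFLICT seen in the provisioning agent log.
            if not node.get("acquired"):
                continue
            # Skip nodes belonging to some other environment or user.
            if node.get("tags", {}).get("juju-environment") != environment_name:
                continue
            if instance_ids and node["system_id"] not in instance_ids:
                continue
            machines.append(ProviderMachine(node["system_id"], node["hostname"]))
        return machines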
[01:23] SpamapS: we are multi-tenant already [01:23] but not sure multi-env per user yet [01:23] the potential to break your juju environment just by manually starting a node is a bit high though [01:23] if nothing else, prefix the names [01:24] or allow an optional filter in the environment configuration [01:24] but anyway [01:24] I need to understand what juju needs here, I am not sure yet. But let me come back to that when the other stuff is working [01:24] sounds like the problem is in hand [01:25] Anyway, to test this hypothesis one should be able to try it out with a maas that only has juju nodes [02:51] bigjools, ping [02:51] * hazmat backtracks [02:53] hazmat: hey [02:53] hi bigjools just backtracking through the logs [02:53] it sounds like orchestra is returning something via the api that it shouldn't [02:54] you mean maas? [02:54] yeah - we think get get_machines call is returning the wrong things, it's not filtering properly [02:54] bigjools, yup i mean the resource uri vs the node url seem to be two different concepts. anyways.. i'm around for a few hrs.. if you need any further debugging [02:55] hazmat: great, thanks! [03:15] * bigjools relocating [09:26] mornin' all [09:27] TheMue: sorry i didn't take any pictures or record anything at all from last night! [09:27] TheMue: but it was interesting, and a good time was definitely had. [09:30] heya rogpeppe [09:30] fwereade_: heya too [09:31] rogpeppe, TheMue: laura's off school and cath has to go out, I will be away for a couple of hours [09:31] hi [09:32] fwereade_: lots of tunnels just after kings cross so communication will be patchy... [09:32] why i can't see [09:32] discussions [09:32] taking place in this channel [09:33] rogpeppe: moin [09:34] roubi: which kind of discussion do you expect? [09:34] questions that every body ask about juju [09:35] fwereade_: lots of tunnels just after kings cross so communication will be patchy anyway... [09:35] wrtp: so you're in the train? [09:35] TheMue: i am [09:35] TheMue: again! [09:35] wrtp: ic ;) [09:37] TheMue: i said hi to Eleanor and Erik for you BTW [09:37] wrtp: great, thx [09:38] TheMue: at least... i said hi to *an* Erik! [09:38] TheMue: (the one that gave the talk) [09:38] wrtp: that's exactly the right one [09:38] ;) [09:39] TheMue: i'm not sure he remembered your name though - they do *many* startup gatherings... [09:41] wrtp: that's one of the problem when you so far only had electronical contact with nicknames [09:41] TheMue: yeah [10:12] * wrtp can now run the environs/ec2 tests without accessing the internet. yay! [10:13] wrtp: great [13:05] hazmat, ping, I'm getting a horrible feeling that we need to hit the "placement" key as well [13:06] hazmat, it really doesn't feel like an access setting ;) :( [13:32] Heya! [13:43] heya niemeyer [13:44] niemeyer, I think I'm missing some context on placement policies [13:44] niemeyer, why did they become env-level only, when we use to have deploy --placement? [13:44] used to have^^ [13:45] fwereade_: They may be set both in the environment and in the service, right? [13:46] fwereade_: I mean, constraints can [13:46] niemeyer, *constraints* can [13:46] fwereade_: Yes, and that's the real placement.. --placement IIRC is a hack to allow re-use of machine 0 [13:47] niemeyer, it seems that *actually* placement ought to be a constraint, but not one I'd fully appreciated [13:47] niemeyer, OK, this is another point of potential pain [13:47] niemeyer, do we wish to retain that hack in some form? 
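On hazmat's earlier point that the resource uri and the node url are two different concepts: a MAAS node carries both a stable system_id and a resource_uri for talking to the API about it, and the provider needs to keep them straight. A tiny sketch of that distinction; the field names match the MAAS API, but the helpers themselves are hypothetical:

    def instance_id_for(node):
        # juju's stable identifier for the machine: the MAAS system_id.
        return node["system_id"]

    def api_uri_for(node):
        # Where to direct API operations (acquire/start/release) for the node.
        return node["resource_uri"]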
[13:48] niemeyer, given that we currently have no other way to express it, my guess is "yes", but..? [13:48] fwereade_: What's the potential pain? [13:49] niemeyer, it *also* seems that they *cannot* be set on the service [13:49] niemeyer, but perhaps I'm blind there [13:49] niemeyer, that it's a non-access setting that currently lives in environments.yaml [13:49] fwereade_: Placement policy sounds orthogonal to constraints and to environment settings [13:50] fwereade_: Sorry, yes, not to environment settings [13:50] niemeyer, yeah, local policy overrides constraints [13:50] fwereade_: Why is it tricky? [13:50] niemeyer, which I think matches intuitive intent [13:50] fwereade_: Sounds like a normal setting that should go into set-env [13:50] niemeyer, yeah; it's just something else we have to warn people about :( [13:51] fwereade_: We need to warn them about only one thing [13:51] fwereade_: Settings in environments.yaml are being moved onto the environment itself [13:51] fwereade_: We don't need a separate warning for each setting [13:52] niemeyer, this is true, but the required actions are different for different settings [13:52] niemeyer, "default-series" and "placement" are both "use bootstrap/set-env"; the ec2 ones are "use constraints" [13:53] fwereade_: That's fine.. we can easily document the distinction in a wiki page, and have both the email note and the error from juju pointing out to it [13:53] fwereade_: We also don't have so much variation [13:53] niemeyer, ok, that sounds good then [13:53] fwereade_: Can we please call placement as "placement-policy" on the way to that? [13:54] niemeyer, sounds like a good idea to me [13:54] fwereade_: I might even call that as debug-placement-policy, but YMMV ;) [13:59] niemeyer, sorry, paralysed myself wishing we'd implemented it as deploy/add-unit --force-machine=0 [13:59] niemeyer, I'm not quite sure that debug works myself [14:00] niemeyer, but then again I'm not sure I have the best perspective on how people are using it [14:03] fwereade_: This option was never intended to be visible/widely used [14:05] fwereade_: Some clever people may be using it to fine tune specific tests and deployments in good ways, but it spoils much of the reasoning why we have juju [14:05] niemeyer, I guess we would have got rather more blowback from making it harder to use back whenever it was it disappeared from the unit-adding commands [14:06] niemeyer, I'm not sure it does, it feels at least 50% like a reasonable temporary workaround to what remains one of juju's drawbacks [14:06] fwereade_: It's not a reasonable workaround at all, IMO, except in those very specific circumstances [14:07] niemeyer, I guess the question is "how do we quell the ire of those who *are* using it" [14:07] fwereade_: Having everything sitting on machine 0, next to ZooKeeper, in ways that can't ever be uninstalled... [14:08] niemeyer, this is true, it does suck [14:08] fwereade_: Hold on.. I wasn't suggesting removing it.. [14:08] fwereade_: We need this behavior for the local provider case.. that's why we have it [14:09] niemeyer, well, the behaviour is fine for that; you seemed to be saying it was a mistake to have it available at all on ec2 [14:10] fwereade_: That's not what I said.. the option is just not intended to be very visible or widely used [14:10] niemeyer, anyway, ok, understood [14:10] niemeyer, we just move it exactly like default-series [14:11] fwereade_: It's fine to have it in EC2.. 
bcsaller made some very good use of that while developing some of the local provider stuff [14:11] fwereade_: Since he was able to test the LXC deployments straight in EC2 [14:11] niemeyer, cool, sorry misunderstanding :) [14:11] fwereade_: That's why I'd name it debug-placement-policy [14:12] fwereade_: This causes the environment to behave in very awkward ways if one doesn't know what's going on [14:12] fwereade_: But it's fine if you're really into it [14:13] niemeyer, the "debug" in there implies dev-only, rather than "what you should do for openstack" for example [14:13] niemeyer, which IIRC was a lot of the motivation for it initally(?) [14:13] fwereade_: You shouldn't do that for openstack, unless you're Adam :) [14:13] niemeyer, rabbitmq with mysql [14:13] niemeyer, haha [14:15] fwereade_: There's a sequence of events that requires using it in a precise way. This is much closer to "debugging" than to being a "juju feature" [14:15] niemeyer, I agree it's awkward; just fretting that it may have become a de facto feature [14:16] fwereade_: For Adam, it has :) [14:16] SpamapS, m_3: Is anybody else using the placement hack these days? [14:18] * rogpeppe is back on a proper internet connection again. [14:20] rogpeppe: welcome back to modernity [14:21] niemeyer: actually i felt quite futuristic controlling ec2 instances from on a train :-) [14:21] rogpeppe: That's quite amazing indeed :) [14:21] niemeyer: managed to push out two merge requests too. better when the signal was better, usually in a station. [14:27] niemeyer, fwereade_, TheMue: https://codereview.appspot.com/5866049/ and https://codereview.appspot.com/5864047/ if you fancy taking a look. [14:28] * rogpeppe is very much enjoying having lbox do prerequisites properly! [14:29] niemeyer: not sure [14:29] doubt it [14:29] m_3: Cheers [14:30] fwereade_: ^ [14:30] niemeyer, m_3: thanks :) [14:30] np [14:34] fwereade_, hazmat: Just had another round on https://codereview.appspot.com/5849054/ [14:44] niemeyer: the placement hack has been unavailable to us since --placement was removed [14:44] Hey question.. [14:45] if you remove default-image-id ... how do private clouds work? [14:45] SpamapS: It's actually been available all along.. but thanks, that answers the question as well [14:48] niemeyer: heh, ok.. I'll take your word on that. I never knew it was. [14:50] SpamapS: Adam said it was important for his use case, so we just moved it.. [14:50] SpamapS: He said it was fine, as long as it was available [14:51] SpamapS: The question about private clouds is a good one [14:51] fwereade_, hazmat: ^ [14:51] SpamapS: We need a proper way to map series > image in those as well [14:53] I suspect the simplest way will be to set image id on each deploy. [14:53] but thats ... tedious :-P [14:53] niemeyer, *that* is plausibly an env access setting, now I come to think of it [14:54] fwereade_: Yeah [14:54] niemeyer, and SpamapS' observation does make me all nervous about forcing automated image selection on people now [14:55] (again) [14:56] fwereade_: Do you have any suggestions that do not involve having "deploy cs:~fwereade/oneiric/mongodb" deploying it in Precise? [14:57] niemeyer, no, I don't [14:59] niemeyer, I would suggest that the people sophisticated enough to use custom image ids are demonstrably comfortable in less-comfortable environments [14:59] fwereade_, SpamapS: The proper way is likely to have e.g. [14:59] images: [14:59] precise: [14:59] oneiric: [14:59] ... [14:59] niemeyer, arch? hvm? [15:00] fwereade_: arch is an issue.. 
hvm doesn't exist outside of ec2 [15:00] niemeyer, true, but that's another way of saying "it exists" [15:01] niemeyer, it'll all be against the ec2 provider... [15:01] fwereade_: Yeah, you're right [15:01] hmm [15:02] niemeyer, IMO the cost of persisting the hack until 12.04.1 is outweighed by the benefit of offering an actual transition path for anyone who is using it [15:02] fwereade_: precise: {id: , constraints: } [15:02] fwereade_: ? [15:03] niemeyer, hmm, that could work, but it feels kinda icky [15:03] fwereade_: Forget "persisting the hack".. let's describe it in terms of having series not working [15:03] fwereade_: I don't care about the hack.. I care about series not working [15:04] niemeyer, ok [15:04] niemeyer, what I want is for everyone who needs it to run their own local uec-images :) [15:04] niemeyer, ie, same data format and url %s-able by series name [15:05] niemeyer, that makes it an env access setting, the users are responsible for advertising sane images, and we're done [15:05] fwereade_: That's a plausible idea. We just have to reduce the burn of setting it up, and we also need to think about the fact that non-real-EC2 deployments don't have pretty pre-defined classes of machines [15:06] niemeyer, heh, I wish I could remember who told me that basically all openstack deployments reuse instance names (even if they don't perfectly match capabilities) [15:06] niemeyer [15:07] niemeyer, actually it's not just uec-images data: it *is* the capabilities of the instances, assumptions about what instance types can run what images are hardcoded :( [15:08] niemeyer, however I don't think even amazon publish that data in a usefully-consumable format [15:08] fwereade_: We do.. [15:09] niemeyer, it's a tradeoff that seemed sensible given the constraints I was operating under [15:09] fwereade_: But we can't publish it for arbitrary deployments that have their own image ids and their own capabilities [15:09] niemeyer, oh wait, did I totally miss the meaning of "we do"? [15:10] fwereade_: seemed sensible? was operating under? I think you missed it :) [15:10] fwereade_: Canonical publishes details for images in EC2 [15:10] niemeyer, if you're saying that we publish uec-images: yes, I know :p [15:10] fwereade_: No, we publish the data that you said Amazon doesn't [15:11] niemeyer, details of instance types? [15:11] niemeyer, which ones require HVM images, etc? [15:12] niemeyer, which ones it's OK to start with an i386 image [15:12] niemeyer, it's the fact that even if the *names* match, private "ec2"s might have (say) m1.smalls that are 64-bit only [15:12] fwereade_: Yes [15:12] fwereade_: http://uec-images.ubuntu.com/query/precise/server/daily.txt [15:13] niemeyer, I see lots of image data there in exactly the format I was asking for [15:13] fwereade_: You need to know whether the instance is amd64/i386/ebs/hvm.. [15:13] niemeyer, I see nothing about instance types [15:13] niemeyer, yes [15:14] niemeyer, ebs we assume everywhere so I really have no option but to punt on that for now at least [15:14] fwereade_: This is still relevant as it gives us which ones are ebs or not [15:15] niemeyer, how is any of that information useful to me, as juju, running against a private cloud and attempting to determine whether or not it is appropriate for me to start a c1.medium with an i386 image?
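Putting fwereade_'s two halves together: an explicit per-series entry in the environment settings (the "precise: {id: , constraints: }" sketch) would win, and otherwise the image is looked up from uec-images format data at a URL with the series name substituted in. A sketch under those assumptions; the column layout of the query file is assumed here, not verified:

    import urllib2  # Python 2, as the pyjuju codebase was

    # Fallback lookup: a uec-images style query file, with the series name
    # substituted into the URL ("url %s-able by series name").
    SERIES_QUERY_URL = "http://uec-images.ubuntu.com/query/%s/server/released.current.txt"

    def find_image_id(series, region, arch, images=None, store="ebs"):
        # 1. An explicit per-series mapping in the environment settings wins.
        if images and series in images:
            return images[series]["id"]
        # 2. Otherwise consult published image data for the series.
        data = urllib2.urlopen(SERIES_QUERY_URL % series).read()
        for line in data.splitlines():
            fields = line.split("\t")
            if len(fields) < 8:
                continue
            # Assumed columns: [4]=root store, [5]=arch, [6]=region, [7]=image id.
            if (fields[4], fields[5], fields[6]) == (store, arch, region):
                return fields[7]
        raise LookupError("no %s/%s image for %s in %s" % (arch, store, series, region))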
[15:16] fwereade_: The constraints part of the picture must be provided by the provider [15:16] fwereade_: This is already the case for the constraints mechanism you're implementing to exist at all [15:16] niemeyer, agreed [15:17] niemeyer, this information will indeed need to be exposed by the provider [15:17] fwereade_: This is solving the other half of the problem.. given you know what constraints a machine is under, what image? [15:18] fwereade_: I'll have lunch while I think a bit more about the issue :) [15:18] niemeyer, that is not a problem in need of a solution [15:18] niemeyer, we do that already [15:18] fwereade_: Is it not? That's great.. so what's the answer to SpamapS question? [15:18] if you remove default-image-id ... how do private clouds work? [15:18] * niemeyer => lunch [15:19] niemeyer, SpamapS: if I have my way, private clouds will work as follows [15:20] niemeyer, SpamapS: (1) they have to publish uec-images format image data just like uec-images, and must specify the url in the env access settings [15:21] niemeyer, SpamapS: (2) they have to [somehow] publish instance-type data in a format we can interpret [15:22] niemeyer, SpamapS: if we remove default-image-id now, we utterly crush any current efforts to play with juju in private openstack clouds (right?) [15:23] niemeyer, SpamapS: if we focus on it, we *can* come up with a sensible way to import the required information by 12.04.1, and we can IMO bear the risk of sophisticated users telling juju to do bad things in the meantime [15:25] niemeyer, SpamapS: it now seems clear that default-instance-type and default-image-id are both critically important for this use case -- without d-i-t you're forced to use the ec2 instance types, and without d-i-i you're completely helpless [15:27] niemeyer, SpamapS: is this approaching the tipping point at which we say "ok, we really actually cannot break this now, however much we'd like to"? [15:45] fwereade_: What's the suggested behavior of "cs:~fwereade/precise/mongodb" vs. "cs:~fwereade/oneiric/mongodb"? [15:57] eek.. du -hs machine-agent.log [15:57] 163G machine-agent.log [15:58] hazmat: Holy crap [15:58] rogpeppe: You have a couple of (trivial) reviews [15:58] niemeyer, yeah.. zk client debug logging is a bit verbose ;-) [16:00] i can dev/null it, the alternative is attach a pipe, and filter/rate-limit messages into python logging [16:00] niemeyer: thanks a lot [16:00] it seems like it goes into a tail spin of errors/warnings when it has trouble connecting [16:04] niemeyer, look up an appropriate image at a url based on series; use other constraints to pick an appropriate image from that data source [16:04] hazmat: Sounds fine to dev/null it [16:05] fwereade_: I mean for 12.04 [16:05] niemeyer, we loudly and angrily warn that default-image-id is deprecated every time we parse env.yaml [16:06] fwereade_: Sorry, I mean what's the plan for supporting multiple series in 12.04 [16:06] niemeyer, in private clouds, we don't have one [16:06] niemeyer, for everyone else, they just have to stop using those keys [16:06] niemeyer, well, that one specific key [16:07] fwereade_: Ok, I can buy into that as an intermediate step [16:07] niemeyer: thanks for the LGTM on https://codereview.appspot.com/5857049/. it's got https://codereview.appspot.com/5864047/ as a prerequisite BTW in case you'd overlooked that one.
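The "loudly and angrily warn" part of the plan is simple enough to sketch: flag the deprecated keys whenever environments.yaml is parsed. The key names are the real ones under discussion, but the advice text and the helper itself are illustrative:

    import logging

    DEPRECATED_KEYS = {
        "default-image-id": "configure images per series (or point at uec-images "
                            "format data) instead",
        "default-instance-type": "use constraints instead",
    }

    def warn_about_deprecated_keys(environment_config):
        for key, advice in DEPRECATED_KEYS.items():
            if key in environment_config:
                logging.warning(
                    "environments.yaml: %r is deprecated; %s", key, advice)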
[16:08] niemeyer, cool [16:08] niemeyer, I'm certainly not advocating that it should live on in 12.04.1 [16:09] niemeyer, however, since d-i-i was the driver for a lot of this, we should check response sanity for the other keys [16:09] rogpeppe: I didn't overlook it.. just didn't get there [16:09] niemeyer: np [16:10] niemeyer, d-i-t is I think helpful in the private cloud use case; aws users can just retire it when they feel ready [16:11] fwereade_: I think d-i-t is --constraints, isn't it? [16:11] fwereade_: If it is still useful after constraints, we're doing something wrong [16:11] niemeyer, it is, but it makes aws-specific assumptions about the nature of the instance types it exposes [16:11] fwereade_: Really? Why? [16:12] fwereade_: I thought we had non-AWS needs in mind the whole time [16:12] niemeyer, I am only aware of one place that publishes that information, and it's a github project that I'm reluctant to pull and parse at runtime [16:12] fwereade_: Which means constraints is broken [16:13] fwereade_: It must be finished [16:13] fwereade_: default-instance-type is a constraint like any other [16:13] fwereade_: cpu=N arch=X etc [16:14] niemeyer, default-instance-type is a bad name: what it really is is force-instance-type [16:14] fwereade_: Huh!? [16:14] niemeyer, that is the effect it has always had [16:14] fwereade_: "default" in there should really be *default* [16:14] fwereade_: Because there's no other way to select the instance type [16:15] fwereade_: Constraints should entirely obsolete the need for default-instance-type [16:15] fwereade_: Or we're doing something wrong [16:16] niemeyer, the wrong thing that we are doing is encoding assumptions about AWS in the environment [16:16] fwereade_: Such as? [16:17] niemeyer, cc2.8xlarge requires an HVM image [16:17] fwereade_: We don't have that in the environment, do we? [16:17] niemeyer, t1.micros can be started with i386 and amd64 images [16:17] niemeyer, are you aware of any other place to get that information? [16:17] fwereade_: Sorry.. we're using "environment" in different ways [16:18] fwereade_: It's not an environment setting [16:18] niemeyer, developing the infrastructure to distribute it ourselves seemed a touch overambitious for a small component of a feature I had ~1 month to do [16:18] fwereade_: EC2 knowing that in Amazon a t1.micro is i386 sounds quite ok [16:18] niemeyer, hence hardcoded assumptions like the above [16:18] niemeyer, right [16:18] niemeyer, can we make that assumption about instance types available in private clouds? [16:19] fwereade_: No.. didn't we go over that already? [16:19] niemeyer, a consequence of that is that constraints are "broken", as you put it, in certain private-cloud situations [16:19] fwereade_: This has to be provided by the provider [16:19] even private openstack environments typically do have something mapping to the instance types from ec2, but the definitions are more ad hoc user defined. there is a way via the native api to query out those capabilities. [16:20] this discussion sounds familiar [16:20] hazmat, that is good to know [16:21] niemeyer, hazmat: does anything like that exist for AWS (apart from that github project)?
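The hardcoded AWS assumptions under discussion amount to a capability table keyed by instance type. The two entries below come straight from this exchange (t1.micro boots i386 or amd64 images, cc2.8xlarge needs an HVM image); the surrounding helper is only a sketch:

    INSTANCE_CAPABILITIES = {
        # Only the two examples from the discussion; a real table covers them all.
        "t1.micro":    {"arches": ("i386", "amd64"), "hvm_only": False},
        "cc2.8xlarge": {"arches": ("amd64",),        "hvm_only": True},
    }

    def usable_images(instance_type, images):
        caps = INSTANCE_CAPABILITIES[instance_type]
        return [image for image in images
                if image["arch"] in caps["arches"]
                and (image["virt"] == "hvm" or not caps["hvm_only"])]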
[16:21] fwereade_, but the mapping is not always complete for ec2 types [16:21] fwereade_, just that project the boto author did as far as machine parsable ones [16:21] fwereade_, we could see if smoser/someone could set that up on cloud-images [16:21] fwereade_: EC2 knowing that in Amazon a t1.micro is i386 sounds quite ok [16:22] hazmat, it's not just serving it; it's updating it etc [16:22] but we'd also need to start distinguishing in the ec2 provider on how to query capabilities based on impl (ostack/aws) [16:22] fwereade_: We can also offer a document [16:22] hazmat, it feels like a pretty serious responsibility to take on [16:22] fwereade_: Similar to uec-images [16:22] fwereade_, really? https://github.com/garnaat/missingcloud setting up a cron job? [16:23] fwereade_: You mean, providing data about which instance types exist? [16:23] fwereade_, oh it's hand written you mean? [16:23] niemeyer, as I said, I rejected that option as being unrealistic in the time we had available [16:23] niemeyer: today's watches of service and unit are built with callbacks. any already existing concept on how to handle it in go? or shall i use the idea of a watcher like rog and i already outlined a few days ago? [16:23] niemeyer, perhaps I was wrong there [16:23] fwereade_: Which option? Sorry.. it feels like there are wires being crossed all the time [16:23] fwereade_: I'm a bit lost [16:23] niemeyer, yeah, I'm losing track a bit [16:24] niemeyer, I feel that it is inappropriate to depend on someone else's AWS data in an automated way [16:24] fwereade_: We can publish that information [16:25] niemeyer, and that taking on the responsibility of publishing it ourselves would be excessively painful [16:25] niemeyer, and that even if that were not the case (I guess it isn't) [16:25] fwereade_: Why? [16:25] fwereade_: Canonical published the whole *operating system* that people are using [16:25] fwereade_: I can't see how publishing which instance types are available is a problem [16:26] niemeyer, that my prospects of getting all that set up and out of my hair in the course of december 2011 were unrealistic [16:26] niemeyer, if we provide it we have to keep up with the amazon announcements and always remember to update [16:26] fwereade_: Yep [16:27] fwereade_: Changes in instance types are nowhere close to frequent [16:27] niemeyer, which IMO makes it only less likely that it will be something that we will remember to do in a timely way [16:28] niemeyer, and makes a certain juju-update-dependent lag in access to latest instances acceptable [16:28] fwereade_: Nothing too bad happens if it takes an entire month to be updated, really [16:29] fwereade_: This is a non-issue in the context of things we've been talking about.. let's see what are the actual issues we have to decide now [16:31] niemeyer: e.g. my watch question above. *scnr* [16:31] niemeyer: ;) [16:31] niemeyer: basically, given that we cannot discard d-i-i -- and that doing so was the primary motivation for dropping a sudden format change on our users -- can we perhaps entirely avoid inflicting an env.yaml change on them until sometime after 12.04?
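If the "offer a document, similar to uec-images" idea were taken up, consuming it is the easy part; the hard part, as noted, is keeping it updated. The URL and JSON schema below are hypothetical (https://github.com/garnaat/missingcloud is the hand-maintained example mentioned above):

    import json
    import urllib2

    def load_instance_type_data(url):
        # Fetch a published description of instance types and capabilities.
        # Assumed schema: {"t1.micro": {"arches": ["i386", "amd64"], ...}, ...}
        return json.loads(urllib2.urlopen(url).read())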
[16:32] niemeyer, by keeping d-i-i for a while we'recommitting to another change down the line [16:33] niemeyer, by making d-i-i and d-i-t act as though "default" meant "force", we can preserve existing behaviour for everyone, and make constraints accessible to all who can give up those keys [16:33] fwereade_: Sorry, this whole conversation is flying well above my head [16:33] fwereade_: default is default, not force [16:34] fwereade_: constraints are broken, as you describe [16:34] fwereade_: You're asking to not change environment, but I don't know what you mean by that [16:34] fwereade_: We need a call [16:34] niemeyer, sounds good [16:35] TheMue: Not a good time, sorry [16:35] niemeyer: yeah, ic, followed your discussion with one eye [16:35] niemeyer: will port state/auth first [16:35] niemeyer, invited on g+ [16:37] fwereade_: Uh oh [16:44] Uh oh.. again [18:05] fwereade_, niemeyer: do you know anything about the way that admin_identity is used to make acls, by any chance? i'm seeing an "invalid acl" error, which i'm presuming is from StateHierarchy.initialize. [18:05] jimbaker: ^ [18:06] i don't see any traceback [18:07] this is how juju-admin initialize is being invoked: [18:07] juju-admin initialize --instance-id='$(curl http://169.254.169.254/1.0/meta-data/instance-id)' --admin-identity=sham --provider-type='ec2' [18:07] i'm wondering if admin-identity needs to be in a particular format [18:08] ah, got it! [18:09] wonderful how explaining things to people so often solves the problem... [18:09] (not that it's solved for definite yet - waiting for the machine to boot now) [18:15] dammit, i was wrong. [18:18] i think i've seen it now though... [18:24] Ok.. off the call with fwereade_, talking to mthaddon about store now [18:28] dammit, i wrote that function once and can't find it! anyone know of a way to grep through all files in all branches in a bzr history? the bzr-grep plugin doesn't seem to do it. [18:30] hazmat: It'd be good to have a call at some point today/tomorrow with you, fwereade_, and myself, to sync up on that conversation [18:31] hazmat: Mainly clarification of the whole series/etc conversation [18:31] niemeyer, i'm game for it now if you'd like [18:31] hazmat: fwereade_ just stepped out for some family time after about 2h of phone call [18:32] hazmat: We both need a break before diving into it again [18:32] ah, found bzr-search and found my code! [18:32] niemeyer, ic fair enough, i hadn't realized, i'm around whenever you guys are ready [18:32] niemeyer, also the unit-stop spec could use a review, just pushed the latest [18:34] hazmat: Aweomse, will have a look [18:35] niemeyer, i liked awsum ;-) [18:35] and thanks [18:35] hazmat: Me too.. I also found funny the way it was ignored :) [18:36] hazmat: Like "OMG, stop the bikeshed!" :) [18:40] hmm.. where oh where do we sync settings [18:42] ah there it is.. deploy [19:10] * niemeyer perceived fwereade_ seems to have been hit by the network bug in Precise too [19:12] [niemeyer@gopher ~]% ps auxw | grep chromium-browser | wc -l [19:12] 10 [19:12] [niemeyer@gopher ~]% killall chromium-browser [19:12] chromium-browser: no process found [19:12] Why oh why [19:13] * niemeyer invokes awk powers to do a trivial action.. poor normal users [19:20] all tests pass. woop woop. [19:20] we have lift off [19:25] niemeyer: a happy note to end the day on: https://codereview.appspot.com/5868051 [19:25] * rogpeppe is off for the day. see y'all tomorrow. [19:41] rogpeppe: Neat! 
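For reference on jimbaker's "particular format" question earlier: ZooKeeper's digest ACL scheme expects an identity of the form user:base64(sha1("user:password")), so a bare value like "sham" is not a valid ACL identity. Whether juju-admin requires exactly this is an inference from the error; the helper below just shows the standard digest form:

    import base64
    import hashlib

    def make_digest_identity(user, password):
        # Standard ZooKeeper "digest" scheme identity:
        #   user:base64(sha1("user:password"))
        # (Python 2 string handling, matching the codebase of the time.)
        credentials = "%s:%s" % (user, password)
        digest = base64.b64encode(hashlib.sha1(credentials).digest())
        return "%s:%s" % (user, digest)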
[19:42] hazmat: I'm on the stop spec, btw [20:08] hazmat: Delivered [20:09] jimbaker: Any progress on the specs? [20:09] jimbaker: and in their implementation? [20:12] rogpeppe: Uh oh. I'm wondering if having pre-req support is a good idea. I'm starting to feel like we're getting tiny changes that are completely independent being piled up on unrelated changes. [20:31] Now I'm getting two messages when I post a message in Rietveld.. eventual consistency is rocking our world :) [20:32] niemeyer, hazmat: so, it turns out discussion of instance types and environment key deprecation is an excellent family insomnia cure; who would have guessed? [20:32] niemeyer, hazmat: that is to say I have time for a chat :) [20:32] fwereade_: Hehe :) [20:32] fwereade_: I'm game as well [20:32] hazmat? [20:40] niemeyer, i've been working on two branches re impl, relation-id and relation-hook-context (to manage the contexts associated with using -r in the relation hook commands spec) [20:41] jimbaker: I was just reading that spec, actually [20:41] niemeyer, i'm going to do another round on the spec for relation-ids (what was called relation-info) [20:41] niemeyer, good, i hope it's in the direction you like [20:41] jimbaker: Is there anything else being said in addition to "relation-get needs to support the -r argument"? [20:45] niemeyer,not so much. most of the spec in relation-hook-commands-spec is to describe various scenarios through an example; and to address such details as the specifics of the caching of the relation hook contexts that are read in -r [20:45] read in with -r [20:45] and how the order they are written out [20:47] jimbaker: Cool [20:48] jimbaker: I don't think the ordering is important, btw [20:48] niemeyer, sounds reasonable [20:49] niemeyer, the ideal scenario is that uses a ZK multi [20:49] in which case, it goes away [20:50] however, one counterpoint is that does make it easier to test against log output corresponding to relation changes, if any [20:50] so perhaps just an impl detail [20:51] jimbaker: Right [20:56] niemeyer, i was just chatting with jim and looking over the wip impl. he's out at the moment though, [20:56] niemeyer, fwereade_ i'm up for the chat [20:57] hazmat, i'm around right now [20:57] hazmat: He seems alive and kicking :) [20:57] jimbaker, doh.. yeah i had written that message before i went away myself [20:57] and was just chatting with niemeyer re the relation-hook-commands spec [20:58] hazmat, niemeyer: heyhey [21:00] fwereade_, niemeyer g+ invites out [21:01] hazmat: Just give me a couple and will be with you [21:07] jimbaker: Review delivered [22:41] hazmat, would you follow up the warning thread with a precise and reassuring explanation of what you're planning re env settings? I fear missing some nuance ;) [22:42] hazmat, and am a touch sleepy ;) [22:49] fwereade_, ack, and get some sleep [23:34] niemeyer, incidentally i noticed there's an lbox bug on milestone selection, it always selects the newest milestone afaics, instead of the oldest open milestone
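The caching jimbaker describes for -r boils down to keeping one hook context per relation id, so repeated relation-get/relation-set calls within a hook see a consistent view and their changes are flushed together (ideally as one ZK multi). A sketch with hypothetical names, not the actual implementation:

    class RelationHookContextCache(object):
        def __init__(self, context_factory):
            self._factory = context_factory
            self._contexts = {}

        def get(self, relation_id):
            # Create each context once and reuse it for every hook command
            # invoked with -r against that relation id.
            if relation_id not in self._contexts:
                self._contexts[relation_id] = self._factory(relation_id)
            return self._contexts[relation_id]

        def flush_all(self):
            # Write out pending changes from every context when the hook
            # finishes; ideally this would be a single atomic ZK "multi".
            for context in self._contexts.values():
                context.flush()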