_mup_ | juju/env-origin r361 committed by jim.baker@canonical.com | 00:07 |
---|---|---|
_mup_ | Merged trunk & resolved conflicts | 00:07 |
_mup_ | juju/env-origin r362 committed by jim.baker@canonical.com | 00:08 |
_mup_ | Reverted to trunk | 00:08 |
_mup_ | juju/env-origin r363 committed by jim.baker@canonical.com | 00:09 |
_mup_ | Merged config-juju-origin (new attempt) | 00:09 |
_mup_ | juju/config-juju-origin r364 committed by jim.baker@canonical.com | 00:14 |
_mup_ | Missing new file in bzr | 00:14 |
_mup_ | juju/env-origin r364 committed by jim.baker@canonical.com | 00:15 |
_mup_ | Merged config-juju-origin to get missing file | 00:15 |
niemeyer | Hohoho | 01:38 |
niemeyer | http://wtf.labix.org/ | 01:38 |
niemeyer | http://wtf.labix.org/wtf/361/unittests.out | 01:39 |
hazmat | niemeyer, cool | 01:39 |
jimbaker | niemeyer, nice | 01:41 |
niemeyer | Will tweak the path a bit, and then will try to come up with a test that actually gets in touch with AWS | 01:41 |
bcsaller | niemeyer: given the comment about the lxc-lib [6] do you still feel the same about how it should be changed? I could move from iterating over the internal dict to writing the keys explicitly, but I don't really want to make the code there any larger unless you feel strongly | 01:56 |
niemeyer | bcsaller: I'm a bit sad about the slightly suboptimal handling of arguments there, but I'd be happy for that to be cleaned up after you feel happy with the release | 01:58 |
bcsaller | niemeyer: the provider writes its own values to the upstart job in the container, I'd rather see it all come from that script really, but we didn't get it synced up like that in time. | 02:00 |
bcsaller | niemeyer: I think with a little polish the script could be used to build out other providers as well though and hope we can move some of that out of the Python code | 02:01 |
_mup_ | juju/provider-determines-placement r397 committed by kapil.thangavelu@canonical.com | 02:02 |
_mup_ | revert pick_policy, provider determines placement | 02:02 |
niemeyer | bcsaller: Cool, that sounds like a nice directly | 02:10 |
niemeyer | direction! | 02:10 |
* niemeyer has brain issues typing today | 02:10 | |
bcsaller | oh, me too, me too | 02:11 |
hazmat | argh.. keyboard interrupts in tests.. trial fail | 02:12 |
_mup_ | juju/provider-determines-placement r398 committed by kapil.thangavelu@canonical.com | 02:14 |
_mup_ | yank placement cli parameter per gustavo's suggestion. | 02:14 |
_mup_ | juju/env-origin r365 committed by jim.baker@canonical.com | 02:17 |
_mup_ | Doc changes | 02:17 |
_mup_ | juju/env-origin r366 committed by jim.baker@canonical.com | 02:23 |
_mup_ | Clarification on PPA support by juju-origin | 02:23 |
_mup_ | juju/provider-determines-placement r399 committed by kapil.thangavelu@canonical.com | 02:37 |
_mup_ | raise a providererror if the environment placement policy is not supported by the local provider | 02:37 |
_mup_ | juju/trunk r362 committed by kapil.thangavelu@canonical.com | 02:39 |
_mup_ | merge provider-determines-placement [r=niemeyer][f=855162] | 02:39 |
_mup_ | In order to better support the local provider which only supports | 02:39 |
_mup_ | a single placement strategy, this branch moves the determination | 02:39 |
_mup_ | of placement to the provider (while respecting environments.yaml | 02:39 |
_mup_ | config). This also removes the placement cli option. | 02:39 |
_mup_ | juju/local-provider-config r396 committed by kapil.thangavelu@canonical.com | 02:52 |
_mup_ | data-dir is required for local provider, drop storage-dir param | 02:52 |
_mup_ | juju/trunk r363 committed by kapil.thangavelu@canonical.com | 02:54 |
_mup_ | merge local-provider-config [r=niemeyer][f=855260] | 02:54 |
_mup_ | Exposes local provider via environments.yaml | 02:54 |
_mup_ | juju/lxc-omega-merge r400 committed by kapil.thangavelu@canonical.com | 02:58 |
_mup_ | merge pipeline and resolve conflict | 02:58 |
hazmat | bcsaller, is lxc-library-clone ready to merge? | 02:58 |
bcsaller | hazmat: niemeyer wanted some changes to how the config file is written, I'm making those now, but they are mostly minor | 02:59 |
hazmat | bcsaller, all the pre-reqs on my side are merged omega fwiw | 03:00 |
bcsaller | great | 03:00 |
hazmat | i'm going to move on to fixing origin | 03:00 |
bcsaller | ok | 03:00 |
_mup_ | juju/local-origin-passthrough r404 committed by kapil.thangavelu@canonical.com | 03:25 |
_mup_ | juju-origin is passed to agent | 03:25 |
hazmat | hmm | 03:28 |
hazmat | where is origin defined | 03:28 |
hazmat | oh.. its still juju-branch | 03:31 |
jimbaker | hazmat, please try using env-origin for juju-origin as an env option | 03:34 |
hazmat | jimbaker, is that branch ready? | 03:34 |
jimbaker | hazmat, yes it is | 03:34 |
hazmat | okay, i'll rebase on it | 03:35 |
jimbaker | hazmat, sounds good | 03:35 |
niemeyer | wtf@li167-23:~/ftests$ ./churn -f ec2 | 03:50 |
niemeyer | 2011-09-26 23:47:11-04:00 Writing output to: /home/wtf/ftests/build/wtf/361 | 03:50 |
niemeyer | 2011-09-26 23:47:11-04:00 Running test ec2-wordpress... OK | 03:50 |
niemeyer | OK=1 FAILED=0 | 03:50 |
niemeyer | wtf@li167-23:~/ftests$ | 03:50 |
niemeyer | !!! | 03:50 |
_mup_ | juju/local-origin-passthrough r405 committed by kapil.thangavelu@canonical.com | 03:58 |
_mup_ | merge env-origin | 03:58 |
hazmat | hmm.. we have two different implementations here | 04:01 |
hazmat | for juju-origin | 04:01 |
jimbaker | hazmat, how so? | 04:01 |
hazmat | jimbaker, lxc provider uses a shell script implementation for container initialization which also interprets origin | 04:02 |
hazmat | there is no cloud init in the container | 04:02 |
jimbaker | hazmat, i recall you mentioning that might be a good idea, to unify | 04:03 |
jimbaker | anyway, let me take a look at juju-origin in lxc | 04:03 |
hazmat | the problem is they have different values, i guess i can bridge them | 04:03 |
hazmat | okay enough for today, now that today is over.. bedtime | 04:04 |
jimbaker | hazmat, where is juju-orgiin defined in the lxc container stuff? | 04:05 |
jimbaker | juju-origin, to be precise ;) | 04:05 |
hazmat | jimbaker, different name .. but lib/lxc/data/juju-create | 04:05 |
jimbaker | hazmat, ok, i 'll take a look at that | 04:06 |
hazmat | jimbaker, not necessary | 04:07 |
hazmat | jimbaker, more important to get the branch merged | 04:07 |
hazmat | jimbaker, the lxc provider will need to bridge the values | 04:07 |
hazmat | since the containers aren't init with cloud-init | 04:08 |
jimbaker | hazmat, yeah, it should be fine from my cursory look | 04:08 |
niemeyer | Awww.. _almost_ an end-to-end ec2 test.. | 04:31 |
niemeyer | Another try.. | 04:31 |
niemeyer | Night all! | 05:11 |
fwereade | heh, the zookeeper documentation is fun | 13:05 |
fwereade | Ephemeral nodes are useful when you want to implement [tbd]. | 13:05 |
fwereade | These can be used to [tbd]. | 13:05 |
fwereade | For more information on these, and how they can be used, see [tbd] | 13:05 |
fwereade | hazmat: ping | 13:11 |
rog | fwereade: the documentation in the C bindings header file isn't bad | 13:15 |
rog | (see /usr/include/c-client-src/zookeeper.h) | 13:15 |
fwereade | rog: thanks, good to know | 13:15 |
fwereade | rog: since you've spoken, and therefore volunteered, can I talk at you about concurrent charm publishing for a moment? | 13:16 |
fwereade | :p | 13:16 |
rog | :-) | 13:17 |
rog | of course | 13:17 |
rog | "at" being the operative word | 13:17 |
fwereade | :p | 13:17 |
fwereade | ok | 13:18 |
rog | 'cos i'm not exactly fully up to speed on the charm thing yet | 13:18 |
fwereade | don't worry, I only started a couple of months ago myself | 13:18 |
rog | but discussion is always good for advancing state of knowledge... | 13:18 |
fwereade | any questions you may ask are likely to instructively expose my own ignorance ;) | 13:18 |
rog | i'll do my best | 13:19 |
rog | :-) | 13:19 |
fwereade | so, when you ask juju to deploy a charm, it will find the charm from somewhere (this step isn't directly relevant) and upload it to storage on the machine provider | 13:19 |
fwereade | in the case of EC2, this will be S3 | 13:20 |
fwereade | when two people happen to ask the same juju environment to deploy the same charm, the situation warrants closer attention | 13:20 |
fwereade | (at the same time) | 13:21 |
rog | are we assuming that charm names are unique? | 13:21 |
fwereade | it is perfectly possible that the two charms will be different, despite sharing the same id, but we will assume for now that that won't happen | 13:21 |
rog | (presumably that was the origin of the extra hash code discussion yesterday, right?) | 13:22 |
fwereade | if you have two people deploying from local repos that don't match, "local:oneiric/mysql-1" does not uniquely identify a charm | 13:22 |
fwereade | yep, exactly | 13:22 |
fwereade | that's a pathological case, though, because local: charms are intended for development really | 13:23 |
rog | anyway, assuming names are unique... | 13:23 |
fwereade | yep :) | 13:23 |
fwereade | I think it seems like a good idea to ensure that a given charm is only uploaded once | 13:23 |
rog | how big can charms be? | 13:24 |
fwereade | so, for example, you and I each try to add a service using the charm "cs:oneiric/mysql-32767" | 13:24 |
fwereade | rog: I don't believe there's an explicit limit | 13:24 |
rog | so they could be huge. in which case, yeah, i think you're right. | 13:24 |
fwereade | although there is a practical limit at the moment because we can only upload so much to S3 in one go | 13:24 |
rog | still, people aren't gonna be happy about wasted bandwidth. | 13:25 |
fwereade | quite so | 13:25 |
fwereade | at the moment, the process is: | 13:25 |
fwereade | * upload to s3 | 13:25 |
fwereade | * write state to a zk node | 13:25 |
rog | although... maybe people wouldn't be unhappy if a charm was uploaded exactly once for each user | 13:25 |
fwereade | * blow up if the node already exists | 13:25 |
fwereade | define user | 13:25 |
fwereade | s/user/environment and I'm happy | 13:26 |
rog | yeah, environment is good | 13:26 |
rog | but you've still got the same problem then, of course | 13:26 |
fwereade | ok, that algorithm definitely doesn't work given the above goals | 13:26 |
fwereade | please tell me what's wrong/overcomplicated/undercomplicated with the following suggestion | 13:27 |
rog | why not just choose a deterministic charm->s3 node name mapping? | 13:27 |
fwereade | I think we have one anyway, but for some reason I'm loath to assume that just because a file with the correct name exists it's necessarily the data we want | 13:28 |
fwereade | I'd prefer to depend on matching ZK state than mere file existence | 13:28 |
rog | i think you should. i don't think it's a problem, if the name is sufficiently unique | 13:29 |
rog | if the file exists with the right name and it hasn't got the right contents, then it's an error. | 13:29 |
rog | and it can be checked. | 13:29 |
rog | (assuming the name contains a hash of the charm's contents) | 13:29 |
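
A minimal sketch of the deterministic, content-addressed naming rog describes, assuming the key embeds a digest of the bundle file as uploaded (the same file-level hash discussed later, not a bundling-independent "charm hash"); the helper names are illustrative, not the actual juju code.

```python
import hashlib


def bundle_digest(bundle_path, chunk_size=65536):
    """sha256 hex digest of the charm bundle (zip) on disk."""
    digest = hashlib.sha256()
    with open(bundle_path, "rb") as bundle:
        for chunk in iter(lambda: bundle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def storage_key(charm_url, bundle_path):
    # e.g. "cs:oneiric/mysql-32767" -> "cs_oneiric_mysql-32767-<sha256>"
    safe_id = charm_url.replace(":", "_").replace("/", "_")
    return "%s-%s" % (safe_id, bundle_digest(bundle_path))
```

With such a key, a file that already exists under the expected name can be reused, and a name/content mismatch can be detected rather than silently overwritten.
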
fwereade | hmm, that takes us back to needing deterministic hashing | 13:30 |
rog | yup | 13:30 |
rog | i think that's a very useful primitive to be able to rely on. | 13:30 |
rog | (gustavo may beg to differ!) | 13:30 |
rog | i've found content-addressable stores to be very useful in the past for this kind of thing. | 13:31 |
fwereade | niemeyer didn't seem strongly in favour of it yesterday, indeed | 13:31 |
rog | hey, speak of the devil! | 13:31 |
fwereade | speak of the devil :) | 13:31 |
fwereade | jinx:p | 13:31 |
rog | niemeyer: hiya! | 13:31 |
fwereade | niemeyer: morning :) | 13:32 |
rog | fwereade: i think the alternative is to have a more fragile state-based charm uploading scheme | 13:33 |
niemeyer | Man.. I'm called a devil twice in the morning in the first 10 seconds! | 13:33 |
rog | fwereade: where different clients negotiate as to who is going to upload the charm first | 13:33 |
niemeyer | :-) | 13:33 |
fwereade | niemeyer: :p | 13:33 |
rog | niemeyer: well, you did arrive when your name was mentioned | 13:33 |
rog | about 2 seconds after | 13:33 |
rog | in a flash of sulphurous smoke | 13:34 |
niemeyer | ;) | 13:34 |
fwereade | rog: well, in effect, I suppose, if indirectly | 13:34 |
fwereade | sorry just a mo | 13:34 |
niemeyer | fwereade: What's the issue there? | 13:35 |
rog | niemeyer: concurrent charm uploading | 13:36 |
niemeyer | Why is that a problem? | 13:36 |
fwereade | niemeyer: sorry, back | 13:36 |
rog | because you want to avoid uploading the same charm to storage if it's already there | 13:36 |
fwereade | niemeyer: I've got myself a little bogged down, hopefully you'll be able to point out something I've missed | 13:37 |
niemeyer | fwereade: Sure, what's up | 13:37 |
fwereade | niemeyer: I'm instinctively against assuming that a file in storage which happens to have the correct name is actually the file we're looking for | 13:37 |
fwereade | niemeyer: actually, I'm definitely against it, if only for local development with multiple repos | 13:37 |
fwereade | niemeyer: could cause all sorts of horrifying confusion | 13:38 |
fwereade | niemeyer: (unless, as rog points out again, we have deterministic hashing and use hashes again) | 13:38 |
fwereade | niemeyer: (which of course is fine if it's the Right Thing, but I was up late last night removing all the hashes so I'm not strongly in favour :p) | 13:39 |
fwereade | niemeyer: *so*, IMO, we need to depend on the environment's zk state to know what has been published | 13:39 |
fwereade | niemeyer: agree? | 13:39 |
niemeyer | fwereade: Sure | 13:39 |
fwereade | niemeyer: ok, to avoid insanity, we want to make sure that only one publisher actually publishes a given charm | 13:40 |
fwereade | niemeyer: and I'm fretting that the "obvious" answer is more complicated than it needs to be | 13:41 |
fwereade | niemeyer: that answer goes as follows | 13:41 |
fwereade | niemeyer: (1) does the charm state already exist in ZK? if so, it's there, we're done | 13:42 |
fwereade | niemeyer: (2) if not, create an ephemeral node at /charms/pending/[charm_url] | 13:42 |
niemeyer | fwereade: Ugh.. ok, hold on | 13:42 |
niemeyer | fwereade: Do we have a problem today? | 13:42 |
rog | lol | 13:42 |
fwereade | niemeyer: er...only potentially, I guess, and only in the case of concurrent publishes | 13:43 |
niemeyer | fwereade: How? | 13:43 |
fwereade | niemeyer: only in situations where we can't depend on uniqueness of charm urls | 13:44 |
niemeyer | fwereade: Why would we have a problem there? | 13:44 |
fwereade | niemeyer: because the current implementation will have 2 publishers uploading to the same storage key, and we can't be sure that the "winning" one will be the one that "wins" in zookeeper... can we? | 13:45 |
fwereade | niemeyer: at the moment, we blindly upload and set the charm state once the upload is done | 13:45 |
niemeyer | fwereade: Ok, a couple of things: | 13:46 |
niemeyer | 1) There's a hash.. it is being uploaded to the same location, it's very very likely to be the same content | 13:46 |
niemeyer | 2) It's uploaded before being stored in zk, so the first one will win | 13:46 |
niemeyer | fwereade: That kind of situation was exactly why we designed the current logic as it is | 13:46 |
fwereade | niemeyer: wait... surely the one that's uploaded first is the one that wins in ZK | 13:47 |
niemeyer | fwereade: and it feels like we're spending time redesigning it, but it's still not clear to me why | 13:47 |
niemeyer | fwereade: Why does it matter? | 13:47 |
fwereade | niemeyer: and is therefore the one that's likely to be overwritten? | 13:47 |
niemeyer | fwereade: The first write to zk wins | 13:47 |
fwereade | niemeyer: and the last write to storage might win | 13:47 |
niemeyer | fwereade: and it will point to a file in the storage that matches the expectation of the uploaded | 13:47 |
niemeyer | uploader | 13:47 |
fwereade | niemeyer: let me check where we actually look at hashes | 13:49 |
niemeyer | fwereade: We don't have to _look_ at them, actually | 13:49 |
niemeyer | fwereade: It's part of the filename | 13:49 |
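
A rough sketch of the ordering niemeyer describes: upload the hash-named bundle first, then claim the ZooKeeper node, with the first create winning. The `storage` and `zk_client` objects and the exception class are stand-ins; the real juju code is Twisted-based and structured differently.

```python
import json


class NodeExistsException(Exception):
    """Stand-in for ZooKeeper's 'node already exists' error."""


def publish_charm(storage, zk_client, charm_url, bundle_path, bundle_hash):
    # 1) Upload first, under a name that embeds the bundle hash, so two
    #    racing publishers of identical content write identical objects.
    safe_id = charm_url.replace(":", "_").replace("/", "_")
    storage_name = "%s-%s" % (safe_id, bundle_hash)
    storage.put(storage_name, open(bundle_path, "rb"))

    # 2) Only then claim the charm node in ZooKeeper.  The first create wins
    #    and records exactly the file that uploader pushed; a slower
    #    publisher adopts the existing state instead of overwriting it.
    node_path = "/charms/%s" % safe_id
    payload = json.dumps({"storage-name": storage_name, "hash": bundle_hash})
    try:
        zk_client.create(node_path, payload)
    except NodeExistsException:
        payload = zk_client.get(node_path)
    return payload
```
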
fwereade | niemeyer: did we not discuss this at length yesterday, and decide to use revisions alone? | 13:50 |
niemeyer | fwereade: Yeah, we did decide to follow your suggestion, as long as you were willing to review the problems coming out of it :-) | 13:50 |
niemeyer | fwereade: It now sounds that rather than "let's drop hashes" we're going towards "let's introduce complexity to solve a problem we don't have today" | 13:51 |
niemeyer | fwereade: which puts me in an alert mode | 13:51 |
fwereade | niemeyer: er, I was never arguing for dropping hashes | 13:52 |
niemeyer | ROTFL | 13:52 |
fwereade | niemeyer: you seemed to put forward a number of fairly solid arguments for dropping them, and convinced me :p | 13:52 |
fwereade | <niemeyer> fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results in these scenarios | 13:54 |
fwereade | fwereade: Both | 13:54 |
fwereade | <fwereade> niemeyer: heh, I was more in favour of making the hash a required part of the ID throughout (at least internally) | 13:54 |
fwereade | niemeyer: my issue was that it *wasn't* included in the ZK node name at the moment | 13:54 |
rog | do we need a name that includes both the symbolic name *and* the hash? it seems to me we might be good with either/or | 13:55 |
rog | then we can define a mapping from symbolic name to hash name | 13:55 |
rog | (possibly with some preference heuristics taken into account) | 13:55 |
fwereade | niemeyer, rog: my original position was that, if we have the hashes, we should use them throughout as part of the ID | 13:56 |
rog | and then internally we could use hash names exclusively. no ambiguity. | 13:56 |
rog | we could use them *as* the ID | 13:56 |
fwereade | niemeyer, rog: I *thought* niemeyer's position was "actually, we don't need the hashes" | 13:56 |
rog | with name stored inside the charm | 13:56 |
rog | s/with/with the/ | 13:56 |
niemeyer | fwereade: Fair enough | 13:57 |
niemeyer | fwereade: Sep 26 09:55:04 <niemeyer> fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results in these scenarios | 13:57 |
niemeyer | fwereade: Sorry if I misguided you in the wrong direction | 13:58 |
niemeyer | fwereade: Let's stop the fuss and start to move forward again | 13:58 |
* rog feels fwereade's pain. | 13:58 | |
fwereade | niemeyer: not to worry :) | 13:59 |
* fwereade marshals thoughts while he gets a drink | 14:00 | |
fwereade | niemeyer: ok, let's rewind a day or so, back to the question that started this off | 14:07 |
fwereade | niemeyer: actually, no, to an even earlier one | 14:08 |
fwereade | niemeyer: how would you define a "charm id"? | 14:09 |
fwereade | niemeyer: specifically, is the hash part of a charm id? | 14:09 |
fwereade | niemeyer: I contend that, internally, it should be: even if the same charm can be bundled N times and produce N different hashes, once it's been bundled and put into the system the hash is an important part of the identifier | 14:10 |
fwereade | niemeyer: is that a reasonable position in your opinion? | 14:11 |
rog | fwereade: if you've got the hash, you don't need anything else | 14:11 |
niemeyer | fwereade: It's not.. | 14:12 |
niemeyer | fwereade: There's no such thing as a "charm hash" today | 14:12 |
niemeyer | fwereade: We have the hash of a file | 14:12 |
fwereade | niemeyer: ok, tweak terminology: we have the hash of a file, which is -- within one environment -- the single representation of a given charm | 14:13 |
rog | a charm hash would be trivially implemented, if we want one | 14:13 |
niemeyer | rog: We don't have even a trivial amount of time right now | 14:13 |
rog | ok | 14:13 |
* fwereade thinks again | 14:14 | |
niemeyer | rog: We're already late.. it should be in _now_ | 14:14 |
niemeyer | rog: and we have to implement the store functionality, which is stopped while we talk about hashes | 14:14 |
rog | well, for the time being, just use the hash of that file | 14:14 |
niemeyer | rog: No.. why!? | 14:14 |
niemeyer | There's no bug.. | 14:14 |
niemeyer | fwereade: Please continue | 14:15 |
fwereade | niemeyer: ok, end-run discussion | 14:15 |
niemeyer | fwereade: I'm still keen on understanding your perspective | 14:15 |
fwereade | niemeyer: I reinstate the hashes on storage bundles | 14:15 |
niemeyer | fwereade: You'll be implementing this and must be comfortable with what's going on | 14:15 |
fwereade | niemeyer: and we don't need to do anything else | 14:15 |
fwereade | niemeyer: as before, hashes are not part of the charm id in ZK, because they're not part of the charm id anyway | 14:16 |
niemeyer | fwereade: That's right | 14:17 |
fwereade | niemeyer: that's all fine then | 14:17 |
niemeyer | fwereade: Again, as I mentioned yesterday, the real reason why hashes were ever introduced is uniqueness | 14:17 |
niemeyer | fwereade: In the storage | 14:18 |
fwereade | niemeyer: all of this came from my failing to realise that the hashes weren't purely based on relevant content | 14:18 |
niemeyer | fwereade: It sorts out precisely the problem we started the discussion with | 14:18 |
fwereade | niemeyer: yep, I just didn't come to that realisation until we'd sidetracked on the whole "get rid of hashes" idea | 14:18 |
niemeyer | fwereade: Without that, there's the chance that two people fight for an upload, and the person that writes to zk is not the one that won in S3 | 14:19 |
fwereade | niemeyer: which is what had me bogged down, and mystified by the idea of dropping the hashes, and proposing zk cleverness to get around it | 14:19 |
niemeyer | fwereade: The whole thing is my fault.. I knew about the difficulty in doing this in a correct way and left you rambling around it | 14:20 |
fwereade | niemeyer: sadly the mystification only kicked in relatively recently, when I finally hit the concurrent upload tests I'd punted on with self.fail("looks tricky") yesterday | 14:20 |
fwereade | meh, it takes two ;) | 14:20 |
niemeyer | rog, fwereade: I'm happy to consider developing an actual "charm hash" algorithm in the future, if we find actual issues or advantages that would make it attractive | 14:21 |
rog | content-addressed storage is often attractive in diistributed systems :-) | 14:22 |
niemeyer | rog: If that's the reasoning for introducing it, no thanks | 14:23 |
rog | niemeyer: the reason is it's very useful to have an unambiguous name for something, regardless of its origin or its location. | 14:23 |
rog | it's a nice solid foundation | 14:23 |
niemeyer | rog: No thanks.. solid foundations sink | 14:24 |
rog | niemeyer: only if someone breaks the hashing algorithm... | 14:25 |
niemeyer | rog: We need to approach it from a problem/feature perspective | 14:25 |
rog | sure. hash-based naming is just a useful tool in the box. | 14:26 |
fwereade | niemeyer: potential future reason: avoid repeatedly downloading big charms that are already hanging around in the control bucket, if we can verify name-including-hash with the store | 14:26 |
fwereade | niemeyer: but I think it can wait until we're actually experiencing that as a problem | 14:27 |
fwereade | ;) | 14:27 |
niemeyer | fwereade: There's already a single file for any given charm identifier in the system | 14:27 |
niemeyer | fwereade: and in fact, I think it's already cached | 14:27 |
fwereade | niemeyer: sorry delay, it's a bit of a derail anyway, I'll just focus on getting you a new MP | 14:40 |
niemeyer | fwereade: No worries | 14:41 |
robbiew | rog: oing | 14:41 |
niemeyer | fwereade: Have you checked out the go-charm-url branch? | 14:41 |
robbiew | rog: *p*ing :) | 14:41 |
rog | robbiew: oing boing | 14:42 |
fwereade | niemeyer: when I looked, some files were missing, I'm afraid | 14:42 |
niemeyer | fwereade: Hmm | 14:42 |
niemeyer | fwereade: Let me check | 14:42 |
rog | r | 14:42 |
niemeyer | fwereade: Yeah, great.. | 14:44 |
niemeyer | fwereade: I forgot exactly the meaningful content. :( | 14:44 |
_mup_ | juju/go-charm-url r15 committed by gustavo@niemeyer.net | 14:45 |
_mup_ | Actually _add_ the relevant files.. :-( | 14:45 |
hazmat | the hashes can be fixed to be stable | 14:45 |
hazmat | by using actual content | 14:45 |
hazmat | instead of the zip | 14:45 |
niemeyer | hazmat: Good morning | 14:46 |
niemeyer | hazmat: Yeah, that was mentioned a few times | 14:46 |
hazmat | niemeyer, g'morning | 14:47 |
niemeyer | hazmat: We can easily develop a "charm hash", when we need it | 14:47 |
niemeyer | hazmat: We don't right now | 14:47 |
hazmat | agreed, its fine for concurrent uploads atm, one fails. | 14:48 |
hazmat | first one wins | 14:48 |
niemeyer | fwereade: The files are pushed.. please let me know what you think | 14:48 |
hazmat | and charm id as ns:name:id is unique | 14:49 |
fwereade | niemeyer: cheers | 14:49 |
niemeyer | hazmat: I'll try to polish what I did yesterday to get the waterfall/wtf running | 14:49 |
hazmat | niemeyer, cool | 14:50 |
niemeyer | hazmat: Would be nice to have a test there for the local case once that's up | 14:50 |
hazmat | niemeyer, sure, i'm kinda of blocked on getting anything else in, but i can add more stuff to the pipeline | 14:50 |
niemeyer | hazmat: Blocked? | 14:50 |
hazmat | niemeyer, i'm fixing up origin, and pending on the rename, and local test | 14:50 |
hazmat | niemeyer, i've got several branches that i | 14:50 |
niemeyer | hazmat: Ah, cool | 14:50 |
hazmat | i'm waiting on other merges for | 14:50 |
niemeyer | hazmat: Ok.. blocked as in doing a lot.. that's cool :-) | 14:51 |
niemeyer | hazmat: I'm going over the queue right now | 14:51 |
niemeyer | hazmat: Most of the branches are already re-reviews, so I'm hoping it'll just go smoothly | 14:51 |
hazmat | niemeyer, i also need to switch out and work on slides for some time, their do this afternoon | 14:51 |
hazmat | niemeyer, cool | 14:51 |
hazmat | s/there | 14:52 |
niemeyer | hazmat: Ah, that's a good time to mention | 14:52 |
niemeyer | I'll be traveling to Sao Paulo tomorrow | 14:52 |
niemeyer | Will be working on and off still | 14:52 |
niemeyer | The PythonBrasil conference is Thu-Sat | 14:52 |
niemeyer | I have a keynote on Fri morning | 14:53 |
niemeyer | But otherwise I'll be working on the release | 14:53 |
niemeyer | robbiew, fwereade, rog, SpamapS, m_3, bcsaller, jimbaker: ^ | 14:54 |
robbiew | niemeyer: ack | 14:54 |
fwereade | niemeyer: sounds good, enjoy :) | 14:54 |
niemeyer | fwereade: Thanks.. I'm a bit sad about the timing | 14:54 |
niemeyer | If I'd known how we'd be running right now, I'd not have taken it on a couple of months ago | 14:55 |
niemeyer | I'll be working from there, anyway | 14:55 |
=== pwnsauce is now known as Guest7312 | ||
fwereade | niemeyer: ping | 15:22 |
fwereade | niemeyer: I hadn't considered CharmURL.Revision to be optional | 15:23 |
fwereade | niemeyer: I see how it will make sense in the future | 15:23 |
fwereade | niemeyer: but it does mean we can't have charm names like the "mysql2" we have in the test repo | 15:24 |
niemeyer | fwereade: Hmm.. I don't get either of those points, I think :) | 15:24 |
niemeyer | fwereade: The revision must necessarily be optional.. otherwise how can we parse things like | 15:25 |
niemeyer | fwereade: juju deploy cs:~fwereade/oneiric/wordpress | 15:25 |
niemeyer | ? | 15:25 |
niemeyer | fwereade: Then, what's the deal with mysql2? | 15:25 |
fwereade | niemeyer: I'd considered the full CharmURL to be something we're only able to construct once we've asked the repo for the latest version | 15:25 |
fwereade | niemeyer: +var validName = regexp.MustCompile("^[a-z]+([a-z0-9-]+[a-z])?$") | 15:26 |
niemeyer | fwereade: The thing above is a charm url | 15:26 |
niemeyer | fwereade: Yeah, that looks bogus | 15:26 |
niemeyer | fwereade: It should probably be "^[a-z]+([a-z0-9-]+[a-z0-9])?$" | 15:27 |
niemeyer | fwereade: Nice catch | 15:27 |
niemeyer | fwereade: The intention there was just to avoid mysql- | 15:28 |
fwereade | niemeyer: I had the idea that a CharmURL was a pointer to a specific version of a charm | 15:28 |
niemeyer | fwereade: A charm url is.. a charm url :-) | 15:28 |
niemeyer | fwereade: We're going to support charm urls without revisions in real world scenarios | 15:29 |
niemeyer | fwereade: There's no reason to inflict pain on us and make the code unable to handle those | 15:29 |
fwereade | niemeyer: I'd say we're going to allow people to specify charms without revisions in real world scenarios | 15:29 |
fwereade | niemeyer: ok, so we create a charm url, ask a repository about it, and then construct the real charm url? | 15:30 |
niemeyer | fwereade: Both charm urls are real.. one contains a revision, the other doesn't | 15:30 |
fwereade | niemeyer: to me, the task of extracting the user's intention is distinct from extracting the components of a fully specified charm url | 15:30 |
fwereade | niemeyer: and, internally, we're always going to use ones with revision, just as they always have schemas and series | 15:31 |
niemeyer | fwereade: Why? | 15:31 |
fwereade | niemeyer: because we want to be able to upgrade charms? | 15:32 |
niemeyer | fwereade: a charm url without a revision is a fine identifier | 15:32 |
niemeyer | fwereade: just like a package name without a version is a fine identifier | 15:32 |
niemeyer | fwereade: Sure.. we also upgrade packages | 15:32 |
niemeyer | fwereade: and still, most package management is done without a version | 15:32 |
fwereade | niemeyer: it's a fine specifier, but to upgrade charms we need distinct zk nodes for the distinct versions | 15:33 |
niemeyer | fwereade: Sure.. you're looking at one very specific operation for which you need to know revisions | 15:34 |
niemeyer | fwereade: The abstraction of a charm url is not restricted to that one operation | 15:34 |
fwereade | niemeyer: what's the distinction between revision and series then? | 15:35 |
fwereade | niemeyer: both are optional, from the user's point of view, given a certain amount of extra context that allows us to infer what they mean | 15:35 |
niemeyer | fwereade: Please check out the test cases | 15:35 |
niemeyer | fwereade: They provide good insight into what each part is, and what are erroneous situations | 15:36 |
fwereade | niemeyer: I've seen them | 15:36 |
niemeyer | fwereade: So I don't get your question.. revision is a number | 15:36 |
niemeyer | fwereade: series is "oneiric", etc | 15:36 |
niemeyer | fwereade: ? | 15:36 |
fwereade | niemeyer: agreed | 15:36 |
fwereade | niemeyer: both are optional from the sufficiently-naive user's POV | 15:37 |
fwereade | niemeyer: but only one is in your CharmURL implementation | 15:37 |
fwereade | niemeyer: I'm suggesting that the user's perception is not enough reason to allow non-specific charm urls | 15:38 |
niemeyer | fwereade: Sorry, I'm really missing context about how you feel about this | 15:38 |
fwereade | niemeyer: and that a charm url should unambiguously specify a particular collection of bits now and forever | 15:38 |
niemeyer | fwereade: cs:~joe/oneiric/wordpress | 15:38 |
niemeyer | fwereade: This is a charm URL | 15:38 |
niemeyer | fwereade: Correct? | 15:38 |
fwereade | niemeyer: disagree, it's enough information to discover a charm url, given context provided by the formula store | 15:39 |
niemeyer | fwereade: Ah, great.. ok | 15:39 |
niemeyer | fwereade: So that's where we disagree | 15:39 |
niemeyer | fwereade: This _is_ a charm URL | 15:39 |
fwereade | niemeyer: in the same way that "wordpress" is enough info to determine a charm url, given the context of the environment | 15:39 |
niemeyer | fwereade: "wordpress" is _not_ a charm URL | 15:40 |
fwereade | niemeyer: so a charm url can be, by design, inadequate to specify a given charm? | 15:40 |
fwereade | niemeyer: (without requiring repo access, I mean) | 15:41 |
niemeyer | fwereade: cs:~joe/oneiric/wordpress | 15:41 |
niemeyer | fwereade: This specifies a given charm | 15:41 |
niemeyer | fwereade: For both of us.. | 15:41 |
niemeyer | fwereade: It may change over time, but it is a specified | 15:42 |
niemeyer | specifier | 15:42 |
niemeyer | fwereade: and it is what the user will enter in the command line | 15:42 |
niemeyer | fwereade: Having the user talking to the server about such an URL, and having to manage it internally on both the server and the client, and then saying "Oh, but that's not an _actual_ url", would be weird | 15:44 |
rog | niemeyer: [aside, the regexp you gave above would forbid a name like "go" - i think you probably meant something like "^[a-z]([a-z0-9-]*[a-z0-9])?$"] | 15:46 |
niemeyer | fwereade: Think about bazaar branches for a second | 15:46 |
niemeyer | fwereade: Is the revision number part of the url? | 15:46 |
niemeyer | rog: | 15:47 |
fwereade | niemeyer: agreed, it's not | 15:47 |
niemeyer | >>> re.match("^[a-z]+([a-z0-9-]+[a-z0-9])?$", "go").group(0) | 15:47 |
niemeyer | 'go' | 15:47 |
fwereade | niemeyer: sorry, I didn't understand your previous paragraph | 15:48 |
niemeyer | fwereade: Which part? | 15:48 |
rog | oops, missed the first + | 15:48 |
fwereade | niemeyer: "that's not an _actual_ url" | 15:49 |
fwereade | niemeyer: the user is already using things that aren't actual urls | 15:49 |
fwereade | niemeyer: like "wordpress" | 15:49 |
niemeyer | fwereade: As mentioned in the spec, this is an alias. It is ambiguous, and is _not_ a URL. | 15:50 |
rog | niemeyer: that said, it doesn't match "p9" which it should | 15:50 |
fwereade | niemeyer: but isn't the revisionless version essentially an alias to an actual versioned charm? | 15:51 |
niemeyer | rog: Indeed, will fix it when I work on the branch again | 15:51 |
niemeyer | rog: Thanks | 15:51 |
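
A quick side-by-side of the three name patterns in this exchange — the original one fwereade quoted, niemeyer's proposed fix, and rog's variant. The patterns come from the chat; the throwaway check below is only illustrative.

```python
import re

patterns = {
    "original":        r"^[a-z]+([a-z0-9-]+[a-z])?$",
    "niemeyer's fix":  r"^[a-z]+([a-z0-9-]+[a-z0-9])?$",
    "rog's variant":   r"^[a-z]([a-z0-9-]*[a-z0-9])?$",
}
names = ["mysql", "mysql2", "mysql-", "go", "p9"]

for label, pattern in patterns.items():
    accepted = [name for name in names if re.match(pattern, name)]
    print("%-15s accepts: %s" % (label, ", ".join(accepted)))

# original        accepts: mysql, go                  (rejects "mysql2")
# niemeyer's fix  accepts: mysql, mysql2, go          (still rejects "p9")
# rog's variant   accepts: mysql, mysql2, go, p9
# all three reject the trailing-dash case "mysql-"
```
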
fwereade | niemeyer: we have to use context to infer what's intended in both cases | 15:52 |
niemeyer | fwereade: It is a Universal Resource Locator.. exactly like an lp: url, or an http: url.. | 15:52 |
niemeyer | fwereade: Content can change over time | 15:52 |
niemeyer | fwereade: In all of these cases | 15:52 |
niemeyer | fwereade: We _need_ to handle charm urls without revisions | 15:52 |
hazmat | are we still trying to get repository client work and local dev in for oneiric, if so what are we doing with regard to FFE dates and upload to the repositories? | 15:52 |
niemeyer | fwereade: To talk to the client about them, to talk to the server about them, and internally | 15:52 |
niemeyer | fwereade: Not supporting it in the code facing that would be silly IMO | 15:53 |
niemeyer | hazmat: Yes, we are trying.. but we've been getting stuck on details for the past couple of days :) | 15:53 |
fwereade | niemeyer: it seems to me that we only need them without revisions in order to locate actual charms, which themselves do have revisions, and which we then want to use throughout | 15:54 |
fwereade | niemeyer: "the content isn't important until we have the content", if you like, and from then on we actually care about it | 15:55 |
niemeyer | fwereade: Alright.. let's move on.. please support charm URLs without revisions. | 15:55 |
fwereade | niemeyer: sure | 15:56 |
niemeyer | fwereade: Thanks | 15:56 |
fwereade | niemeyer: I'm sorry to be delaying things :( | 15:56 |
niemeyer | fwereade: Not a problem.. I just won't let our ramblings distract us from what I know for sure to be the correct approach. | 15:58 |
niemeyer | fwereade: Did that yesterday with the hash stuff | 15:58 |
niemeyer | fwereade: If for no better reason (and I know better ones exist), the user provides us a url without a revision that we have to manage. Having charm url handling and then having to parse by hand to tell if it's right or not, or to extract information out of it, would be quite impractical. | 15:59 |
niemeyer | hazmat: SpamapS has better details on the FFE | 16:01 |
niemeyer | hazmat: He's already filing them | 16:02 |
hazmat | niemeyer, cool, i'm just concerned that we're not sticking to any dates | 16:02 |
niemeyer | hazmat: and I was hoping to merge local dev today, and the store work by the end of the week | 16:02 |
hazmat | and its not clear what the date is | 16:02 |
hazmat | okay | 16:02 |
niemeyer | hazmat: Then, we have about a week of hard core testing and bug fixing to polish what we've got | 16:02 |
niemeyer | hazmat: Until being completely unable to fix anything | 16:03 |
hazmat | niemeyer, sounds good | 16:14 |
koolhead17 | hey all | 16:19 |
koolhead17 | SpamapS: around | 16:19 |
niemeyer | Will get lunch | 16:24 |
niemeyer | biab | 16:24 |
niemeyer | koolhead17: Hi, btw :) | 16:24 |
koolhead17 | niemeyer: hello. howdy | 16:24 |
* koolhead17 bows to robbiew Daviey | 16:25 | |
* koolhead17 stuck with this saving secret key to document root, in automation of gallery2 : | 16:26 | |
koolhead17 | :( | 16:26 |
hazmat | niemeyer, i'm going to move not-in-progress tickets to the next milestone | 16:28 |
koolhead17 | hey hazmat | 16:28 |
hazmat | hi koolhead17 | 16:28 |
hazmat | db-config/commons still an issue? | 16:28 |
koolhead17 | hazmat: am feeling a bit frustrated with this gallery2 thing | 16:28 |
koolhead17 | hazmat: no i am almost done with it | 16:28 |
hazmat | koolhead17, what's the problem? | 16:29 |
hazmat | koolhead17, you can save the secret out of the document root? or does it need to be read by the app? | 16:29 |
koolhead17 | this gallery2 s/w, while populating config files, asks the user to enter a few details, and one of them is to download a secret key and save it to the document root of gallery | 16:29 |
koolhead17 | read by app | 16:30 |
hazmat | koolhead17, i assume that's common to a normal installation then? | 16:30 |
koolhead17 | hazmat: yes, i am confused about how to move forward with that | 16:31 |
koolhead17 | http://www.rndguy.ca/2010/02/24/fully-automated-ubuntu-server-setups-using-preseed/ | 16:31 |
koolhead17 | this helped me as to know preseed | 16:31 |
koolhead17 | hazmat: i need your help, if you have some time, to understand that metadata workflow | 16:37 |
koolhead17 | db-relation-joined | 16:38 |
hazmat | koolhead17, i don't see the actual question? you preseed the mysql db with a master password, its db-relation-joined hook creates an account for gallery2 (all of that's already in the mysql formula), and the gallery formula on db-relation-changed stores the password for the app into a location accessible by the app | 16:39 |
hazmat | and there should be some sort of .htaccess config to prevent that config file from being served up directly via the web | 16:39 |
hazmat | although by default the network security will prevent public access to mysql, it's good practice not to expose the credentials further | 16:40 |
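
A rough sketch of the hook flow hazmat outlines for the gallery2 side. `relation-get` is a real hook command in the hook environment, but the paths, file names, and config format below are made up for illustration; the wordpress example linked next does the same thing in shell.

```python
#!/usr/bin/env python
# Hypothetical db-relation-changed hook for a gallery2 charm: read the
# credentials the mysql charm exposed on the relation and stash them where
# the app (and only the app) can read them.
import os
import subprocess


def relation_get(key):
    # relation-get is provided by juju inside the hook environment.
    return subprocess.check_output(["relation-get", key]).decode().strip()


def main():
    settings = dict((key, relation_get(key))
                    for key in ("host", "database", "user", "password"))
    if not settings["password"]:
        return  # relation data not set yet; a later -changed run will see it

    docroot = "/var/www/gallery2"  # made-up document root
    with open(os.path.join(docroot, "db.conf"), "w") as config:
        for key, value in sorted(settings.items()):
            config.write("%s=%s\n" % (key, value))

    # As hazmat suggests, keep apache from serving the credentials directly.
    with open(os.path.join(docroot, ".htaccess"), "w") as htaccess:
        htaccess.write("<Files db.conf>\nDeny from all\n</Files>\n")


if __name__ == "__main__":
    main()
```
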
koolhead17 | hazmat: http://bazaar.launchpad.net/~juju/juju/trunk/view/head:/examples/wordpress/hooks/db-relation-changed | 16:40 |
koolhead17 | here | 16:40 |
koolhead17 | hostname=`curl http://169.254.169.254/latest/meta-data/public-hostname` | 16:41 |
koolhead17 | so this metadata url serves value for variable like $user $password | 16:41 |
koolhead17 | which gallery2 will need? correct | 16:41 |
hazmat | 1) that metadata url is ec2 specific, it's only exposing virtual instance attributes, it has nothing to do with what's installed on the machine, 2) Its usage by formulas will disappear in a future version of juju. | 16:42 |
hazmat | ie. user/password have nothing to do with that url | 16:43 |
SpamapS | koolhead17: no, gallery2 should not need the public hostname | 16:43 |
hazmat | `relation-get user` && `relation-get password` | 16:43 |
hazmat | get the db user and password | 16:43 |
koolhead17 | hazmat: hmm, my question was ec2 specific; if i am not wrong, juju currently only runs on the ec2 environment? | 16:43 |
hazmat | koolhead17, with the oneiric release we're also supporting bare metal installations via (orchestra/cobbler) and local machine development | 16:44 |
hazmat | using lxc containers | 16:44 |
koolhead17 | hazmat: am trying to write a charm on oneiric in virtualbox only | 16:45 |
koolhead17 | hazmat: i have not tried orchestra yet, been working on cobbler all this while. i am soo confused :( | 16:46 |
koolhead17 | i will go back to https://juju.ubuntu.com/Documentation and spend some time again. what i was currently doing is simply writing a bash script to get my installation automated; once that is achieved, use it to form a charm out of it. | 16:49 |
koolhead17 | am i following the wrong procedure? | 16:49 |
* koolhead17 wonders if he asked some dumb question :( | 16:50 | |
_mup_ | juju/lxc-provider-rename-local r404 committed by kapil.thangavelu@canonical.com | 16:51 |
_mup_ | rename lxc provider to local provider | 16:51 |
hazmat | koolhead17, i don't understand how you'd be doing a virtualbox installation of a charm | 16:52 |
hazmat | its not a supported machine provider atm | 16:52 |
koolhead17 | hazmat: what i am doing is writing a bash script which does an auto install of everything for me, and then i put it into a charm and test it on EC2 | 16:53 |
hazmat | koolhead17, i think its just as easy to build it out in a formula esp. with tools like debug-hooks | 16:53 |
hazmat | and charm-upgrade | 16:53 |
hazmat | because you'll need information from the remote relations which you'd have to mock/stub in the bash script | 16:54 |
hazmat | and you'll need to tease apart the bash script into its parts.. i mean.. there's nothing wrong with doing it that way | 16:55 |
hazmat | just that its some additional work to restructure as a charm when its done. | 16:55 |
koolhead17 | hazmat: hmm. in that case i have to do everything on ec2, which i am using from a friend's account. | 16:55 |
koolhead17 | :D | 16:55 |
hazmat | koolhead17, if you want to live on the bleeding edge the local dev stuff allows for doing it all on your local machine | 16:56 |
rog | that's all folks. see ya tomorrow. | 16:56 |
hazmat | rog, have a good one | 16:56 |
rog | hazmat: will do. you have no idea. :-) | 16:56 |
SpamapS | hazmat: before I wrote the lxc provider, I wrote a relation-get / relation-set mocker .. it worked quite nicely. ;) | 16:57 |
* SpamapS is quite excited tho, about having a local provider built in. :) | 16:57 | |
hazmat | indeed its very nice | 16:58 |
koolhead17 | hazmat: i only have a 2 GB laptop which already runs 2 VMs when i play with juju | 16:58 |
koolhead17 | :D | 16:58 |
hazmat | SpamapS, bcsaller did some nice work to minimize construction time of instances as well (via lxc-clone) | 16:58 |
SpamapS | :-D | 16:58 |
SpamapS | hazmat: I'm glad you guys found a way to make lxc-clone work. | 16:59 |
hazmat | koolhead17, the overhead both for disk and load of an lxc container is *significantly* less than a vm | 16:59 |
koolhead17 | i have no idea how i can use orchestra and local env for using the say | 16:59 |
hazmat | koolhead17, you can have dozens of containers on a machine with minimal load if they're not doing any active work | 16:59 |
SpamapS | koolhead17: you don't need orchestra or cobbler | 17:00 |
koolhead17 | shall i simply install oneiric on my latop then? | 17:00 |
hazmat | and the disk overhead is around 200mb for a minimal container, up to 500mb for a useful container (not including data) | 17:00 |
SpamapS | koolhead17: thats the opposite of what you need.. you need something local.. which is landing in trunk as we speak. :) | 17:00 |
SpamapS | koolhead17: it should work in natty too | 17:00 |
hazmat | woah.. | 17:00 |
SpamapS | koolhead17: tho you'll need the LXC that is in the juju PPA | 17:00 |
hazmat | SpamapS, koolhead17 it does not work in natty | 17:00 |
koolhead17 | okey. let me reach home and try this experiment then | 17:01 |
SpamapS | hazmat: why not? | 17:01 |
hazmat | SpamapS, i don't think the ppa has been updated with the latest lxc pkg | 17:01 |
koolhead17 | i have lucid | 17:01 |
SpamapS | I definitely think we need to spend more time making these things work on Lucid. | 17:01 |
koolhead17 | and oneiric on VM | 17:01 |
hazmat | SpamapS, that needs a kernel update afaik | 17:01 |
koolhead17 | SpamapS: please do it. it will be awesome | 17:01 |
koolhead17 | LTS ++ | 17:01 |
SpamapS | hazmat: it didn't before.. but maybe they've moved on from the stuff I did in Austin. | 17:02 |
koolhead17 | am on 10.4.3 | 17:03 |
koolhead17 | 2.6.32-33-generic #72-Ubuntu SMP Fri Jul 29 21:08:37 UTC 2011 i686 GNU/Linux | 17:03 |
hazmat | SpamapS, maybe it doesn't.. i wasn't sure | 17:03 |
hazmat | bcsaller, there's another failing test in omega.. the make b/ptests usage is really rather problematic | 17:08 |
bcsaller | hazmat: not for me? what are you seeing? | 17:09 |
hazmat | just a failure around the upstart file test | 17:09 |
_mup_ | juju/lxc-provider-rename-local r405 committed by kapil.thangavelu@canonical.com | 17:10 |
_mup_ | additional fixes for s/lxc/local | 17:10 |
koolhead17 | also https://juju.ubuntu.com/Documentation explains using EC2 only | 17:10 |
hazmat | bcsaller, ./test juju.machine | 17:11 |
hazmat | bcsaller, i get about 5 failures | 17:11 |
jimbaker | koolhead17, definitely agreed on that. still trying to get the new lxc stuff to work for me | 17:11 |
bcsaller | hazmat: I'm seeing that, yes, hadn't been running those :) | 17:12 |
hazmat | bcsaller, i know.. that's why running b/ptests gives a false sense of security | 17:12 |
koolhead17 | jimbaker: it will be based on oneiric then i suppose. :D | 17:12 |
bcsaller | because the branch is functional | 17:12 |
jimbaker | hazmat, i'm having a problem in bootstrap with lxc-omega | 17:12 |
hazmat | jimbaker, do tell? ;-) | 17:13 |
jimbaker | bcsaller tells me it's more likely to be just the networking stuff you've been working on | 17:13 |
jimbaker | koolhead17, yes, i have been just trying it w/ oneiric beta 2 | 17:13 |
hazmat | jimbaker, there isn't any networking stuff that's post lxc-omega | 17:13 |
hazmat | jimbaker, what's the problem? | 17:13 |
jimbaker | hazmat, indeed, that's my understanding ;) so i get 2011-09-27 10:46:48,319 ERROR Command '['virsh', 'net-start', 'default']' returned non-zero exit status 1 in bootstrap | 17:14 |
hazmat | jimbaker, you have libvirt-bin installed? | 17:14 |
koolhead17 | k | 17:14 |
jimbaker | running that explicitly, $ virsh net-start default | 17:14 |
jimbaker | error: Failed to start network default | 17:14 |
jimbaker | error: internal error Network is already in use by interface virbr0 | 17:14 |
hazmat | interesting | 17:14 |
hazmat | jimbaker, are you still on natty? | 17:14 |
jimbaker | hazmat, indeed i do | 17:14 |
jimbaker | and i'm running oneiric beta 2 | 17:15 |
hazmat | it shouldn't be running net-start if it sees it already running | 17:15 |
jimbaker | hazmat, sure | 17:15 |
hazmat | jimbaker, can you pastebin virsh net-list | 17:16 |
jimbaker | hazmat, no need: $ virsh net-list | 17:16 |
jimbaker | Name State Autostart | 17:16 |
jimbaker | ----------------------------------------- | 17:16 |
hazmat | jimbaker, clearly your libvirt networking is wedged somehow | 17:17 |
jimbaker | hazmat, i'm sure i'm missing some dependency. just don't know what | 17:17 |
hazmat | if that's your output | 17:17 |
hazmat | virsh net-start fails because already started, and virsh net-list doesn't show it started | 17:17 |
jimbaker | hazmat, yes sounds like a wedge indeed | 17:18 |
jimbaker | maybe i should reboot :) | 17:18 |
hazmat | jimbaker, well it might be a persistent config issue from the upgrade.. probably relating to libvirt. i'd check that dnsmasq is running, and the libvirt config files; a reboot couldn't hurt, and finally brctl | 17:19 |
jimbaker | hazmat, thanks for the suggestions. definitely upgrade issues could be involved, since i was trying to get the new lxc work going last week w/ beta 1 | 17:21 |
niemeyer | hazmat: I also don't have a "default" network locally in natty, FWIW | 17:21 |
hazmat | niemeyer, interesting | 17:21 |
hazmat | hmm | 17:21 |
hazmat | thankfully we have all the tools to install one by hand | 17:21 |
hazmat | but it should be ootb with libvirt-bin | 17:21 |
niemeyer | hazmat: True.. we should just confirm that with a clean install | 17:22 |
niemeyer | hazmat: or a clean upgrade :) | 17:22 |
niemeyer | hazmat: If the upgrade is lacking something, there's still time to fix it | 17:22 |
hazmat | niemeyer, i can add code to support the case; it's pretty trivial with the existing network support | 17:22 |
hazmat | to just add a default network if one isn't defined | 17:22 |
hazmat | hmm.. actually its automatic already | 17:23 |
hazmat | if its not defined when we go to start, we define it | 17:24 |
hazmat | and then start it | 17:24 |
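
A simplified sketch of the behaviour hazmat describes: define the libvirt "default" network if it is missing, and start it only when it is not already active. The virsh subcommands are real, but the XML path and the structure are illustrative, not the provider's actual code.

```python
import subprocess


def ensure_default_network(xml_path="/etc/libvirt/qemu/networks/default.xml"):
    # "virsh net-list --all" shows defined networks whether or not active.
    defined = subprocess.check_output(["virsh", "net-list", "--all"]).decode()
    if "default" not in defined:
        subprocess.check_call(["virsh", "net-define", xml_path])

    # Plain "virsh net-list" shows only active networks.
    active = subprocess.check_output(["virsh", "net-list"]).decode()
    if "default" not in active:
        subprocess.check_call(["virsh", "net-start", "default"])
        subprocess.check_call(["virsh", "net-autostart", "default"])
```
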
hazmat | jimbaker, actually can you pastebin virsh net-list --all | 17:25 |
hazmat | i forget the --all output | 17:25 |
hazmat | flag | 17:25 |
hazmat | without it, it only lists actives | 17:26 |
bcsaller | hazmat: additionally we saw the network code calls virsh net-stop, which doesn't exist; looks like it's net-destroy | 17:26 |
hazmat | ugh | 17:26 |
hazmat | yeah.. that's a bug | 17:27 |
hazmat | doesn't anyone believe in symmetry anymore ;-) | 17:28 |
niemeyer | !!! | 17:30 |
niemeyer | hazmat: "destroy" is such a good command, though! | 17:30 |
niemeyer | apt-get destroy table | 17:31 |
hazmat | lo.. i am become death.. destroyer of worlds | 17:32 |
bcsaller | the way I type its only destroyer of words | 17:32 |
jimbaker | hazmat, http://pastebin.ubuntu.com/698012/ | 17:34 |
hazmat | hmm.. so it is defined, but we can't start it | 17:35 |
jimbaker | hazmat, again, i'm going to reboot before trying anything else | 17:35 |
hazmat | jimbaker, sounds good | 17:35 |
jimbaker | but first, i need to run to lunch. biab | 17:35 |
hazmat | bcsaller fortunately local provider doesn't stop the network since its normally setup as autostart by libvirt-bin | 17:36 |
hazmat | and already running | 17:36 |
bcsaller | hazmat: the start/destroy test failures in juju.machine are because of the later construction of the .container when using the async interface; they are not around when the tests expect them to be to set up the mocks. trying to fix em | 17:37 |
hazmat | bcsaller, woah.. the async interface should still be returning a deferred that can be waited on | 17:38 |
hazmat | ? | 17:38 |
bcsaller | its simpler than that | 17:38 |
bcsaller | the containers are not built until start() now | 17:39 |
bcsaller | rather than in init | 17:39 |
hazmat | ah | 17:39 |
bcsaller | so .container isn't defined until later and thus can't be mocked | 17:39 |
bcsaller | well... I can mock it, but python has issues | 17:39 |
hazmat | so there is no access to the container till it's started? i think that has problems in several places for the rest of the code | 17:40 |
hazmat | ie. it's not solely a test problem | 17:40 |
hazmat | the setup directories uses the container.rootfs path several times | 17:40 |
hazmat | prior to starting the container for example | 17:40 |
hazmat | that's unfortunate | 17:41 |
bcsaller | hazmat: all that code is called after | 17:43 |
bcsaller | but yes, fixing this is a little trickier than I wanted | 17:43 |
hazmat | bcsaller, add a container_rootfs using the name to the container in class init | 17:48 |
hazmat | all the usage is to get the fs | 17:48 |
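
A minimal sketch of hazmat's suggestion: compute the rootfs path from the container name in the class init, so setup code (and tests) can use it before start() has built anything. Class and attribute names here are illustrative, not the actual juju ones.

```python
import os


class UnitContainer(object):
    def __init__(self, container_name, lxc_dir="/var/lib/lxc"):
        self.container_name = container_name
        # Known up front, independent of whether the container exists yet.
        self.container_rootfs = os.path.join(lxc_dir, container_name, "rootfs")
        self._container = None

    def start(self):
        # The real code clones/builds the lxc container here; only after this
        # does a container object exist to operate on.
        self._container = self._build_container()
        self._container.run()

    def _build_container(self):
        raise NotImplementedError("provider-specific lxc-clone/create step")
```
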
* hazmat grabs some food | 17:49 | |
niemeyer | hazmat, bcsaller: Folks, just off a call with robbiew.. I'll finish the ftests polishings I'm working on to get this ready and off my plate, and will then jump back onto the reviews | 18:30 |
hazmat | niemeyer, cool, also wtf site is empty now | 18:35 |
niemeyer | hazmat: I suspect I broke it | 18:36 |
niemeyer | hazmat: I'm there cleaning it up a bit | 18:36 |
* hazmat starts working on presentation slides | 18:36 | |
niemeyer | I'll separate the setup/teardown so that we can easily have several tests for EC2, etc | 18:36 |
TheMue | niemeyer: What framework do you use for testing? | 18:42 |
niemeyer | TheMue: A few trivial scripts | 18:43 |
TheMue | niemeyer: And unit tests is with standard gotest? | 18:44 |
niemeyer | TheMue: this test framework produces a waterfall of success/failure per revision | 18:45 |
niemeyer | TheMue: The tests are whatever we want them to be | 18:45 |
niemeyer | TheMue: Right now we have two: one that runs the whole internal suite | 18:46 |
niemeyer | TheMue: and another one that exercises a real interaction with ec2 | 18:46 |
niemeyer | TheMue: The juju-go branch, that contains the evolving Go port, is not part of this yet, but can be easily integrated | 18:46 |
niemeyer | TheMue: We use gocheck there | 18:47 |
TheMue | ah, ok, thx | 18:59 |
TheMue | btw, is it possible to simulate juju actions with local vms? | 18:59 |
RoAkSoAx | fwereade: how's it going man? | 19:21 |
fwereade | RoAkSoAx: ah, not too bad, just reverted a big pile of revisions -- which is obviously bad, but feels surprisingly good ;) | 19:21 |
fwereade | RoAkSoAx: and you? | 19:21 |
RoAkSoAx | lol | 19:22 |
RoAkSoAx | fwereade: pretty good | 19:22 |
RoAkSoAx | fwereade: orchestra/juju working like a charm | 19:22 |
RoAkSoAx | fwereade: without the benefits of auto power management | 19:23 |
RoAkSoAx | fwereade: since we dont have direct access to PDU's and stuff | 19:23 |
RoAkSoAx | fwereade: but good | 19:23 |
fwereade | RoAkSoAx: awesomesauce :D | 19:23 |
RoAkSoAx | fwereade: have time to discuss a bit about juju/orchestra? | 19:23 |
fwereade | RoAkSoAx: surely | 19:23 |
RoAkSoAx | fwereade: so | 19:23 |
RoAkSoAx | fwereade: about showing status pending | 19:23 |
RoAkSoAx | fwereade: when we deploy or bootstrap | 19:23 |
RoAkSoAx | fwereade: right after we do it, it already show the machine | 19:24 |
RoAkSoAx | fwereade: the dns-name / instance id | 19:24 |
RoAkSoAx | fwereade: however, the machine might have not even been turned on | 19:24 |
RoAkSoAx | fwereade: so I was wondering if it might be better to list them "pending" till it actually finishes installing and disables PXE | 19:24 |
RoAkSoAx | for its cobbler profile | 19:24 |
RoAkSoAx | but at the same time | 19:24 |
RoAkSoAx | while they are pending, they should probably show | 19:24 |
RoAkSoAx | what machine has been obtained | 19:25 |
RoAkSoAx | fwereade: because, it is actually really needed for us to know what machine was selected, but, we need to see it as pending till it finishes installing I think | 19:25 |
RoAkSoAx | fwereade: what do you think? | 19:25 |
fwereade | RoAkSoAx: in alternative words: available/acquired is not enough information? | 19:25 |
RoAkSoAx | fwereade: that's enough | 19:26 |
RoAkSoAx | fwereade: but, my point is, when I do juju status | 19:26 |
RoAkSoAx | I see the machine as available | 19:26 |
RoAkSoAx | when it should probably be pending | 19:26 |
RoAkSoAx | because it hasn't finished installing | 19:26 |
fwereade | RoAkSoAx: ah, ok -- was confused by the mention of bootstrap, because you can't even get status until we've actually managed to bootstrap | 19:27 |
RoAkSoAx | right | 19:27 |
fwereade | RoAkSoAx: that definitely sounds sensible | 19:27 |
RoAkSoAx | fwereade: yeah, but while showing pending, it doesn't show what machine (dns-name) has been selected | 19:27 |
RoAkSoAx | fwereade: i think we need to know that | 19:27 |
RoAkSoAx | so usually is : 13: pending | 19:27 |
RoAkSoAx | tight? | 19:27 |
RoAkSoAx | right | 19:27 |
=== Guest7312 is now known as cburke | ||
RoAkSoAx | in orchestra we see something like : 13: {dns-name: blabla.domain.com, instance-id: MTMxNzA2NDA1NS4xNzg2NTQ1OTQuNzE4Mg} | 19:28 |
RoAkSoAx | but, it is still pending because installation is still executing | 19:28 |
RoAkSoAx | so should show: 13: {dns-name: hassium.canonical.com, instance-id: MTMxNzA2NDA1NS4xNzg2NTQ1OTQuNzE4Mg}: pending | 19:28 |
RoAkSoAx | or something similar | 19:28 |
fwereade | RoAkSoAx: ok, that makes sense | 19:28 |
RoAkSoAx | fwereade: but it will be very very orchestra specific | 19:28 |
fwereade | RoAkSoAx: offhand, do you recall what it shows for EC2 in similar circumstances? | 19:29 |
fwereade | RoAkSoAx: because the situation is definitely analogous | 19:29 |
RoAkSoAx | fwereade: it shows 13: pending I think | 19:30 |
fwereade | RoAkSoAx: cool | 19:30 |
fwereade | RoAkSoAx: the problem is really just getting the info out of cobbler reliably then, right? | 19:30 |
fwereade | RoAkSoAx: and the problem is kinda bound up with the power-management woes we already know about | 19:31 |
fwereade | RoAkSoAx: ...although I guess it doesn't have to be | 19:31 |
fwereade | RoAkSoAx: what *should* I be paying attention to to figure it out? | 19:32 |
RoAkSoAx | fwereade: power management should not really be part of the problem | 19:32 |
RoAkSoAx | fwereade: because, even if we | 19:32 |
RoAkSoAx | do that when we manually or automatically start the machine | 19:32 |
RoAkSoAx | it *wont* show pending | 19:33 |
RoAkSoAx | fwereade: from what I think, pending is the state on ec2 when the image is starting up; once it is completely up, usually, then it changes to being available, right? | 19:33 |
fwereade | RoAkSoAx: "running" but yeah | 19:33 |
RoAkSoAx | fwereade: so similarly, the status should show pending while the machine is running the installation, once it has finished, it should show it | 19:34 |
RoAkSoAx | fwereade: but in case of orchestra, i think it is important to know what machine has been selected (dns-name) and its status is pending | 19:34 |
fwereade | RoAkSoAx: do we have a channel that lets us figure it out? | 19:35 |
fwereade | RoAkSoAx: or do we have to store the fact that it *should* show up soon | 19:35 |
fwereade | RoAkSoAx: and go from there? | 19:35 |
RoAkSoAx | fwereade: i think | 19:36 |
RoAkSoAx | when we do status, an easy way would be to check | 19:36 |
RoAkSoAx | if pxe has been disabled in the system itself | 19:36 |
fwereade | RoAkSoAx: I *may* be happy with that but I'll have to think | 19:37 |
RoAkSoAx | fwereade: that's the only way we can know that | 19:37 |
RoAkSoAx | fwereade: because in ec2 it is running/pending, right? | 19:37 |
RoAkSoAx | pending is when it is booting the VM | 19:37 |
RoAkSoAx | and running is when it has finished booting | 19:37 |
RoAkSoAx | in our case we cannot verify if it is installed/post_installed | 19:37 |
RoAkSoAx | fwereade: the only way we can do that is by simply checking the pxe enabled on the system | 19:38 |
RoAkSoAx | fwereade: because that's the last step of installation | 19:38 |
fwereade | RoAkSoAx: that sounds great to me | 19:38 |
RoAkSoAx | fwereade: if installation fails, it will never disable PXE booting on the system | 19:38 |
fwereade | RoAkSoAx: was always a bit uncomfortable with what we were using netboot_enabled for | 19:38 |
fwereade | RoAkSoAx: yep | 19:38 |
RoAkSoAx | fwereade: exactly, so we could just extend status to check netboot_enabled | 19:39 |
RoAkSoAx | fwereade: so for each system where netboot_enabled is True, that means it hasn't finished installing or hasn't even been powered on | 19:39 |
RoAkSoAx | fwereade: if netboot_enabled is False (on a status) we can assume it has finished installing | 19:39 |
fwereade | RoAkSoAx: so available/pxe means we can grab it and use it; acquired/pxe means pending; acquired/nopxe means running(-very-soon); available/nopxe means don't-touch | 19:39 |
RoAkSoAx | fwereade: because that's the last command executed when deploying | 19:39 |
fwereade | RoAkSoAx: perfect | 19:40 |
RoAkSoAx | fwereade: right, so we keep the management classes as they are right now | 19:40 |
fwereade | RoAkSoAx: just need to make sure we handle the state transitions correctly in CobblerClient | 19:40 |
RoAkSoAx | fwereade: right, so basically when we *already* deployed the machine, and we are *checking* status | 19:40 |
fwereade | RoAkSoAx: assuming that, yes, pending/running is just pxe/nopxe | 19:41 |
RoAkSoAx | fwereade: we should check, "machine A is being deployed, let's check netboot_enabled. If True, it hasn't finished installing, or has failed. If False, then it has finished" | 19:41 |
fwereade | RoAkSoAx: yep | 19:41 |
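A minimal sketch of the mapping just agreed on, assuming a hypothetical helper fed with the cobbler management class and netboot_enabled flag; the real CobblerClient methods and state names may well differ:

    def machine_state(mgmt_class, netboot_enabled):
        """Map cobbler bookkeeping onto a juju-visible machine state.

        mgmt_class is "available" or "acquired" (who owns the system);
        netboot_enabled is True while PXE is still on, i.e. the install
        has not run to completion, since disabling PXE is the last step.
        """
        if mgmt_class == "available":
            # available + pxe: free for juju to grab and provision.
            # available + no pxe: an installed box we must not touch.
            return "unused" if netboot_enabled else "dont-touch"
        if mgmt_class == "acquired":
            # acquired + pxe: we kicked off an install, still pending.
            # acquired + no pxe: install finished, running (very soon).
            return "pending" if netboot_enabled else "running"
        raise ValueError("unknown management class: %r" % mgmt_class)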
RoAkSoAx | fwereade: right, obviously in the future, we would need to know if installation failed | 19:41 |
RoAkSoAx | I just have no idea how to know that right now | 19:42 |
fwereade | RoAkSoAx: ...may just come down to storing when we asked it to come up, and timing out :/ | 19:42 |
fwereade | RoAkSoAx: still, that's the future :) | 19:42 |
fwereade | RoAkSoAx: definitely sounds like a good plan | 19:43 |
RoAkSoAx | fwereade: cool | 19:44 |
fwereade | RoAkSoAx: I really need to capture my mental list of orchestra deficiencies as bugs, and soon :/ | 19:44 |
RoAkSoAx | fwereade: hjeheh ok, will try to file some by EOW | 19:44 |
fwereade | RoAkSoAx: so will I, hopefully between us we'll cover most of it ;) | 19:45 |
fwereade | RoAkSoAx: thanks :) | 19:45 |
adam_g | http://paste.ubuntu.com/698089/ <- mean anything to anyone? machine agent log from bootstrap node | 19:45 |
SpamapS | Is there any way to get feedback from the provisioning agent? | 20:16 |
SpamapS | Like.. if its unable to provision instances for some reason.. other than debug-log ? | 20:16 |
niemeyer | adam_g: That's pretty weird.. | 20:20 |
niemeyer | adam_g: machine 0 is the first machine run | 20:20 |
niemeyer | adam_g: Theoretically if there's no machine 0 zookeeper shouldn't even exist | 20:21 |
niemeyer | adam_g: What's the context there? | 20:21 |
hazmat | niemeyer, it was an old client | 20:27 |
hazmat | was the problem | 20:27 |
hazmat | SpamapS, not atm | 20:27 |
SpamapS | ahh | 20:32 |
* SpamapS searches the bug lists to +1 or report that.. | 20:32 | |
SpamapS | hmm.. why is the eureka milestone set to release on 2011-01-01 ? | 20:34 |
jimbaker | SpamapS, awesome backdating ;) | 20:34 |
SpamapS | we are *SERIOUSLY* late then ;) | 20:34 |
SpamapS | So.. I'm thinking we need to make the released version of juju not pull itself from the PPA, but rather from the Ubuntu archive only. | 20:44 |
robbiew | +1000 | 20:44 |
SpamapS | I suppose we can say if a user wants it on lucid/maverick/natty that they use juju-branch | 20:45 |
SpamapS | or has that been replaced with juju-origin now? | 20:45 |
robbiew | SpamapS: so we can't have the archive version pulling from a ppa...that means a deployment that works today could conceivably behave differently tomorrow. | 20:47 |
SpamapS | right | 20:47 |
SpamapS | just thinking through what that will break | 20:48 |
robbiew | right | 20:48 |
jimbaker | SpamapS, that's the intent of juju-origin and how the env-origin branch determines the correct origin to deploy | 20:52 |
SpamapS | so yeah I think I'll just patch in that the default source is _DISTRO instead of _PPA .. and if people want to spawn releases before 11.10 they will have to use the PPA or juju-branch | 20:52 |
jimbaker | SpamapS, i have had to mock the origin for distro (apt-cache policy juju), but it works well in testing | 20:53 |
jimbaker | SpamapS, so you will see in the env-origin branch, the default origin is determined using that, instead of just using _DISTRO (or the old _PPA) | 20:54 |
SpamapS | jimbaker: awesome, but thats not in trunk yet, is it? | 20:54 |
jimbaker | SpamapS, i should say: i have had to mock the distro behavior. everything else i can directly test | 20:54 |
jimbaker | SpamapS, it is not yet in trunk. i have some issues to fix | 20:55 |
jimbaker | but should be resolved pretty soon | 20:55 |
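For illustration only, a rough sketch of deriving a default origin from `apt-cache policy juju` output as described above; the constants and parsing heuristics here are assumptions, not the actual env-origin code:

    import subprocess

    # Hypothetical origin names, echoing the _DISTRO/_PPA constants
    # mentioned above; the real branch may spell these differently.
    DISTRO = "distro"
    PPA = "ppa"
    BRANCH = "branch"

    def detect_default_origin():
        """Guess where the locally installed juju package came from.

        Looks at `apt-cache policy juju`: a launchpad PPA source suggests
        the PPA, no installed package suggests a branch/source checkout,
        and anything else is treated as the distro archive.
        """
        output = subprocess.check_output(["apt-cache", "policy", "juju"])
        output = output.decode("utf-8", "replace")
        if "ppa.launchpad.net" in output:
            return PPA
        if "Installed: (none)" in output or not output.strip():
            return BRANCH
        return DISTRO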
robbiew | SpamapS: ack on the patch approach | 20:55 |
robbiew | jimbaker: define "pretty soon" :) | 20:55 |
SpamapS | if its not in the next 20 minutes, its not going to be uploaded today. ;) | 20:56 |
adam_g | hey-- is '--placement=local' no longer possible, to deploy to the bootstrap node? | 20:56 |
jimbaker | robbiew, well it has to complete the review process, but the issues i have to fix are small and mostly related to how the testing and code is structured | 20:56 |
robbiew | jimbaker: ack | 20:56 |
robbiew | SpamapS: i translate jimbaker's response to be "not in the next 20min" ;) | 20:57 |
jimbaker | eg how do we test a specific circumstance with respect to policy, do we parse the data, make a call to code that looks like apt-cache, or in the old case, mock that out | 20:57 |
jimbaker | there are a number of approaches, so i'm converging on what works best | 20:57 |
jimbaker | SpamapS, robbiew - that's correct | 20:57 |
jimbaker | not 20 min :) | 20:57 |
SpamapS | danke | 20:58 |
* hazmat finishes up slides | 21:00 | |
adam_g | hazmat: with --placement='local' gone from the CLI, is it even possible to deploy certain charms to the bootstrap node anymore? | 21:03 |
niemeyer | adam_g: That was never the intention of the placement logic | 21:04 |
SpamapS | but it was an *awesome* way to test things without having to start multiple nodes | 21:04 |
niemeyer | adam_g: That said, | 21:04 |
SpamapS | hacky, but awesome. :) | 21:04 |
niemeyer | adam_g: placement is still supported | 21:05 |
adam_g | niemeyer: i understand, ill rephrase.. is there currently a new way to abuse this and let us put stuff on the bootstrap node? | 21:05 |
adam_g | :) | 21:05 |
niemeyer | adam_g: In ~/.juju/environments.yaml | 21:05 |
niemeyer | adam_g: Shhh.. don't tell anyone | 21:05 |
adam_g | niemeyer: ive found that, but can that be changed per 'juju deploy' or is the placement policy set for the lifetime of the environment? | 21:05 |
* SpamapS is fairly certain it will be desirable to be able to set placement at runtime as we come up with more interesting placement strategies. | 21:05 | |
hazmat | adam_g, yeah.. set placement: local in the environment config | 21:06 |
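For reference, a sketch of what that looks like in config and how a provider might read it back; the environment name and the helper below are made up, only the placement: local key follows the discussion above:

    import yaml

    # Illustrative ~/.juju/environments.yaml fragment:
    #
    #   environments:
    #     orchestra-test:
    #       type: orchestra
    #       placement: local      # put service units on the bootstrap node
    #
    SAMPLE = """
    environments:
      orchestra-test:
        type: orchestra
        placement: local
    """

    def placement_policy(config_text, env_name, default="unassigned"):
        """Return the placement policy configured for one environment.

        The default value and this helper are illustrative; each provider
        decides which policies it actually supports.
        """
        config = yaml.safe_load(config_text)
        env = config.get("environments", {}).get(env_name, {})
        return env.get("placement", default)

    print(placement_policy(SAMPLE, "orchestra-test"))  # -> local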
niemeyer | I'll step outside for a while and bbiab | 21:06 |
hazmat | SpamapS, yeah.. i agree, but we start conflating very different concepts.. | 21:06 |
SpamapS | hazmat: users tend to like doing things that developers never dreamed of. I'd hope we'd follow the unix model, and give them enough rope to hang themselves (and then a little bit more) | 21:07 |
hazmat | co-location and placement look very similar to end users | 21:07 |
hazmat | SpamapS, we might end up resurrecting it | 21:08 |
hazmat | i see it as very useful for cross-az deploys on a per unit basis | 21:08 |
SpamapS | cross az.. cross cloud.. silly things that you just want to have stacked up on one t1.micro ... flexibility is good. | 21:10 |
hazmat | adam_g, is that removal a significant burden? i can resurrect it now if need be | 21:11 |
SpamapS | also when did it disappear? | 21:12 |
SpamapS | I use it about twice a day. :-/ | 21:12 |
SpamapS | but I'm on an older build | 21:12 |
hazmat | SpamapS, yesterday evening | 21:13 |
SpamapS | ahh ok | 21:14 |
adam_g | hazmat: we were using it to reduce our hardware needs on this openstack cluster by 3 or 4 nodes. i can workaround by just modifying environments.yaml between 'deploys' | 21:14 |
hazmat | adam_g, that kinda sucks though | 21:14 |
adam_g | hazmat: yah, especially since --placement=local is what we've documented internally. id love to get the option back, but i can see why others wouldn't | 21:15 |
hazmat | adam_g, the ideal placement policy to me is min/max instances, but its very hard to determine where to place a formula to avoid a conflict | 21:15 |
* hazmat ponders | 21:16 | |
hazmat | i guess i should try to make that happen since i'm blocked on other stuff | 21:16 |
SpamapS | hazmat: Its not that hard.. you can keep a record of charms that have failed to deploy together and just use optimistic collision avoidance. | 21:16 |
hazmat | SpamapS, lol | 21:16 |
SpamapS | Its the ethernet model. | 21:17 |
hazmat | SpamapS, i figure the easiest thing for deploy is to keep the number of service units of the same formula on a machine to 1, and error if we can't do that | 21:17 |
hazmat | its not real avoidance but it should help.. | 21:17 |
SpamapS | hazmat: except they still might conflict. Which is fine, just move it to another machine if that happens. | 21:18 |
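A toy sketch of that "at most one unit of a formula per machine" rule; the data shapes and function name are invented for illustration only:

    def pick_machine(machines, charm_name):
        """Pick a machine that does not already host a unit of charm_name.

        `machines` maps machine id -> set of charm names already placed
        there. Returns a machine id, or None when every machine already
        has one such unit, in which case the caller can error out or add
        a machine, as discussed above.
        """
        for machine_id, charms in sorted(machines.items()):
            if charm_name not in charms:
                return machine_id
        return None

    # Example: mysql fits on machine 1; a second wordpress fits nowhere.
    machines = {1: set(["wordpress"]), 2: set(["haproxy", "mysql"])}
    print(pick_machine(machines, "mysql"))                     # -> 1
    print(pick_machine({1: set(["wordpress"])}, "wordpress"))  # -> None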
hazmat | i guess its easier to resurrect --placement | 21:18 |
SpamapS | :) | 21:18 |
hazmat | niemeyer, ^ | 21:18 |
adam_g | hazmat: hi, sorry, i got pulled away. IMO, i think in the long run, users are going to want the *option* to have total control over charm placement regardless of the risks. currently "--placement=local" is the only thing that gives me that option | 21:58 |
SpamapS | especially with hardware | 21:59 |
SpamapS | hrm.. so defaulting to _DISTRO leaves us in a bind w.r.t. testing proposed updates of juju for SRU | 22:00 |
SpamapS | we'd need to have some way to enable proposed... | 22:01 |
hazmat | bcsaller, do you have fixes for test failures on omega, i wanted to do some work further down the stack | 22:02 |
bcsaller | hazmat: not yet, sorry | 22:02 |
bcsaller | hazmat: most, but not all | 22:02 |
hazmat | bcsaller, did you end up just adding the container_rootfs attr or trying to rework the api usage/tests? | 22:03 |
bcsaller | hazmat: I tried that but didn't get it working | 22:03 |
bcsaller | so I moved to something more comprehensive | 22:04 |
bcsaller | but then its trying to build out the container for real to destroy it and wants root, so still playing with it | 22:04 |
_mup_ | juju/cli-placement-restore r397 committed by kapil.thangavelu@canonical.com | 22:21 |
_mup_ | restore placement cli | 22:21 |
_mup_ | Bug #860966 was filed: Restore command line placement. <juju:In Progress by hazmat> < https://launchpad.net/bugs/860966 > | 22:30 |
hazmat | ^ SpamapS, adam_g if the feature removal matters to you commenting on the above bug/merge proposal would be helpful | 22:32 |
* hazmat hugs lbox | 22:32 | |
niemeyer | hazmat: Let's please not resurrect --placement now | 22:38 |
niemeyer | hazmat: We can look at this again after the release | 22:39 |
SpamapS | niemeyer: actually its critical that we have it until there's something better | 22:40 |
SpamapS | niemeyer: understanding full well that its less than ideal, without it, we need 9 full hardware machines to test a full openstack deployment. | 22:41 |
niemeyer | SpamapS: We survived until it existed, so we can survive without it for this release | 22:41 |
SpamapS | niemeyer: orchestra didn't exist before this existed. | 22:41 |
niemeyer | Man.. that's exactly why I don't like that kind of half baked feature.. :-/ | 22:42 |
bcsaller | seemed like a good idea at the time | 22:42 |
SpamapS | Thats just how things go... you put stuff in, then you come up with something better and you take it out. :) | 22:42 |
SpamapS | Just look at devfs.. | 22:43 |
SpamapS | half baked, overly ambitious, all those things.. then udev made it all better. :) | 22:44 |
_mup_ | Bug #860982 was filed: Rename lxc provider to local <juju:In Progress by hazmat> < https://launchpad.net/bugs/860982 > | 22:44 |
SpamapS | BTW, for the Oneiric packages .. I'm hacking in a 'enable-proposed' option to the environment config. Its the only way we'll ever be able to do SRU's. | 22:45 |
hazmat | well it should be hacked onto env-origin | 22:45 |
SpamapS | That would be awesome. | 22:46 |
SpamapS | running out of time tho | 22:46 |
hazmat | yeah.. | 22:46 |
niemeyer | SpamapS: What's enable-proposed? | 22:47 |
hazmat | use the proposed repo to install juju for testing | 22:47 |
SpamapS | niemeyer: the 11.10 packages default to installing from the distro for quite obvious reasons. We also need to be able to enable -proposed so users can test an SRU that manifests on the spawned machines. | 22:48 |
SpamapS | Another option would be to just make people build an AMI that enables proposed | 22:48 |
SpamapS | which actually might be better | 22:48 |
SpamapS | but a lot harder | 22:49 |
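Purely as a sketch of what such an enable-proposed toggle would have to arrange on a spawned machine, whichever mechanism (juju-origin, cloud-init, a custom AMI) ends up carrying it; the mirror and release name below are assumptions:

    def proposed_sources_line(release="oneiric",
                              mirror="http://archive.ubuntu.com/ubuntu"):
        """Build the apt line that enables the -proposed pocket.

        The spawned machine would need this written to something like
        /etc/apt/sources.list.d/proposed.list, followed by `apt-get update`,
        before installing juju, so SRU candidates can be tested.
        """
        return "deb %s %s-proposed main universe" % (mirror, release)

    print(proposed_sources_line())
    # deb http://archive.ubuntu.com/ubuntu oneiric-proposed main universe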
* hazmat watches the size of the testing community drop like a stone | 22:49 | |
niemeyer | SpamapS: hazmat is right.. that's just another option for juju-origin | 22:49 |
SpamapS | niemeyer: which doesn't exist yet in r361 (the one I've been testing heavily for the last 2 days) | 22:49 |
* hazmat dog walks bbiab | 22:50 | |
niemeyer | SpamapS: enable-proposed also doesn't exist | 22:50 |
SpamapS | right! But its a smaller patch. :) | 22:50 |
niemeyer | SpamapS: heh | 22:50 |
SpamapS | one I fully hope to drop in a week | 22:50 |
niemeyer | SpamapS: That's juju-origin.. if you're planning to land this, please let's do it the right way. | 22:51 |
SpamapS | I can leave it out.. and we can SRU in the ability to.. SRU things.. when we need to. | 22:51 |
niemeyer | SpamapS: There's zero benefit in having another option | 22:51 |
niemeyer | SpamapS: The env-origin branch is in review, and I hope jimbaker has it ready to land | 22:52 |
SpamapS | simplest solution.. just leave both out | 22:52 |
niemeyer | SpamapS: Or maybe leave juju out? That's even simpler. | 22:52 |
SpamapS | You can take that up with higher powers. :) | 22:52 |
SpamapS | I think the simplest thing is to just leave out the ability to turn on proposed, and open it as a bug in the package. | 22:53 |
SpamapS | When the time comes for an SRU, we'll fix it then. | 22:53 |
SpamapS | Hopefully by merging in juju-origin. | 22:53 |
niemeyer | SpamapS: Either we merge juju-origin, or we take juju out of Ubuntu. There's no middle way. | 22:54 |
niemeyer | SpamapS: It's necessary for handling the source. | 22:54 |
SpamapS | Err, wha? | 22:55 |
SpamapS | It works fine w/o it | 22:55 |
SpamapS | We may never actually need to SRU juju | 22:55 |
niemeyer | SpamapS: Where does it take the packages from? | 22:56 |
niemeyer | SpamapS: In the server side? | 22:56 |
SpamapS | given the nature of the project I'd say we'd only SRU it if it was catastrophically broken anyway. | 22:56 |
SpamapS | distro | 22:56 |
niemeyer | SpamapS: Have you patched it? | 22:58 |
SpamapS | niemeyer: yes, I may have missed where there's another way to get that to work. | 22:59 |
SpamapS | niemeyer: not uploaded yet.. just testing currently | 22:59 |
niemeyer | SpamapS: Oh man.. that's awesome.. ok. | 22:59 |
SpamapS | but need to upload very soon as we're already starting to talk about juju in 11.10 in blog posts ... | 23:00 |
SpamapS | And the rename needs time to "settle" .. | 23:00 |
SpamapS | BTW, I may have to disable the test suite on build.. I get this almost every time I build in PPA: https://launchpadlibrarian.net/81242493/buildlog_ubuntu-oneiric-i386.juju_0.5%2Bbzr361-0ubuntu1~ppa3_FAILEDTOBUILD.txt.gz | 23:01 |
SpamapS | Failure: zookeeper.ClosingException: zookeeper is closing | 23:02 |
SpamapS | juju.agents.tests.test_unit.UnitAgentTest.test_agent_executes_config_changed_hook | 23:02 |
SpamapS | hazmat: wondering if that is related to your REUSEADDR change | 23:02 |
hazmat | SpamapS its not | 23:12 |
hazmat | that typically signals some sort of background activity is happening when the connection is closed | 23:13 |
* hazmat runs in a loop | 23:13 | |
hazmat | looks okay through a hundred iterations | 23:15 |
SpamapS | hrm | 23:15 |
SpamapS | it only ever happens on the buildd | 23:15 |
hazmat | SpamapS, is it consistent? | 23:15 |
SpamapS | it has happened the last 3 times, but I think I had a build with r361 that passed | 23:15 |
SpamapS | I'll upload one more time w/o disabling the test.. | 23:16 |
SpamapS | would be good for the ppa to have this turned on | 23:16 |
* hazmat widens the loop scope to include the whole test class | 23:16 | |
SpamapS | so we get told about failures like this sooner | 23:16 |
SpamapS | hazmat: /win 20 | 23:22 |
SpamapS | doh | 23:22 |
hazmat | ;-) | 23:22 |
hazmat | SpamapS, so i widened the loop to the entire unit agent | 23:23 |
hazmat | tests.. no luck reproducing | 23:23 |
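The loop hazmat mentions is roughly this shape; the trial invocation and test name are taken from the failure pasted above, everything else is an assumed harness:

    import subprocess
    import sys

    def hammer(test_path, iterations=100):
        """Run one test case repeatedly to shake out an intermittent failure.

        A single green run proves little for a race; many consecutive
        passes at least narrows down where the problem is not.
        """
        for i in range(1, iterations + 1):
            if subprocess.call(["trial", test_path]) != 0:
                print("failed on iteration %d" % i)
                return 1
            print("iteration %d ok" % i)
        return 0

    if __name__ == "__main__":
        sys.exit(hammer("juju.agents.tests.test_unit.UnitAgentTest"))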
SpamapS | Yeah I think its something with the clean isolated environment | 23:23 |
SpamapS | hazmat: the next build is here , starts in 10 min https://launchpad.net/~clint-fewbar/+archive/fixes/+build/2810619 | 23:33 |
hazmat | my env is pretty clean for a developer ;-) | 23:34 |
hazmat | i'll check back on the build | 23:34 |
SpamapS | buildd doesn't even have the internets | 23:34 |
SpamapS | we build "on the moon" just in case you have to | 23:35 |
niemeyer | hazmat: How's that for a test case: http://pastebin.ubuntu.com/698196/ | 23:35 |
hazmat | SpamapS, now i remember why packaging java apps was such a pain | 23:35 |
* hazmat shakes fist at maven and ivy, and points to the moon | 23:35 | |
niemeyer | SpamapS, hazmat: Btw, tests in the wtf run in a clean env | 23:36 |
niemeyer | Will get food, biab | 23:37 |
hazmat | niemeyer, test case looks nice, better abstractions around waiting would make that even cleaner | 23:37 |
hazmat | although really with lxc based tests and apt-cacher things should fly | 23:38 |
hazmat | also, on the not-around note, i'm going to be out thursday and friday at the conference; lightning talks are tomorrow evening, so i'm going to head out a bit early to head up there and promote some good juju | 23:41 |
hazmat | SpamapS, aha.. i reproduced it | 23:41 |
hazmat | the error | 23:42 |
SpamapS | hazmat: race condition somewhere? | 23:42 |
hazmat | its some form of background activity | 23:42 |
hazmat | when the test shuts down | 23:42 |
hazmat | really, its a specific type of race due to lack of adequate control structure for termination.. let me see if i can reproduce in isolation rather than with the whole test case | 23:44 |
hazmat | i guess i should actually look at the test ;-) | 23:44 |
hazmat | hmm | 23:46 |
hazmat | what would be talking to the unit state as part of hook execution | 23:53 |
* niemeyer back | 23:55 | |
hazmat | niemeyer, incidentally a while ago i added some patches to txzk which record the path for an op as part of the exception | 23:56 |
niemeyer | hazmat: Yeah, I recall something like that | 23:57 |
niemeyer | hazmat: is it in? | 23:57 |
hazmat | niemeyer, no.. its floating.. just lbox proposed it | 23:58 |
niemeyer | hazmat: Aha, neat | 23:58 |
* hazmat hugs lbox | 23:58 | |
hazmat | niemeyer, its attached to bug 861028 | 23:59 |
_mup_ | Bug #861028: Errors should include path information. <txzookeeper:In Progress by hazmat> < https://launchpad.net/bugs/861028 > | 23:59 |
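The idea behind that txzookeeper patch, sketched very loosely; the wrapper below is not the actual change, just an illustration of attaching the operation's path to the exception:

    def annotate_path_errors(deferred, path):
        """Re-raise any failure from a zookeeper op with its path attached.

        This only illustrates the concept: when an operation on `path`
        fails, the same exception type is raised again with the path in
        its message, so the traceback says which node was involved.
        """
        def _add_path(failure):
            exc = failure.value
            # Assumes the exception type can be rebuilt from a message string.
            raise type(exc)("%s (path: %s)" % (exc, path))
        return deferred.addErrback(_add_path)

    # Usage (illustrative): annotate_path_errors(client.get("/charms/foo"),
    # "/charms/foo") turns a bare NoNodeException into one naming the path.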