/srv/irclogs.ubuntu.com/2011/09/27/#juju.txt

_mup_juju/env-origin r361 committed by jim.baker@canonical.com00:07
_mup_Merged trunk & resolved conflicts00:07
_mup_juju/env-origin r362 committed by jim.baker@canonical.com00:08
_mup_Reverted to trunk00:08
_mup_juju/env-origin r363 committed by jim.baker@canonical.com00:09
_mup_Merged config-juju-origin (new attempt)00:09
_mup_juju/config-juju-origin r364 committed by jim.baker@canonical.com00:14
_mup_Missing new file in bzr00:14
_mup_juju/env-origin r364 committed by jim.baker@canonical.com00:15
_mup_Merged config-juju-origin to get missing file00:15
niemeyerHohoho01:38
niemeyerhttp://wtf.labix.org/01:38
niemeyerhttp://wtf.labix.org/wtf/361/unittests.out01:39
hazmatniemeyer, cool01:39
jimbakerniemeyer, nice01:41
niemeyerWill tweak the path a bit, and then will try to come up with a test that actually gets in touch with AWS01:41
bcsallerniemeyer: given the comment about the lxc-lib [6] do you still feel the same about how it should be changed? I could move from iter over the internal dict to writing the keys explicitly but I don't really want to make the code there any larger unless you feel strongly01:56
niemeyerbcsaller: I'm a bit sad about the slightly suboptimal handling of arguments there, but I'd be happy for that to be cleaned up after you feel happy with the release01:58
bcsallerniemeyer: the provider writes its own values to the upstart job in the container, I'd rather see it all come from that script really, but we didn't get it synced up like that in time.02:00
bcsallerniemeyer: I think with a little polish the script could be used to build out other providers as well though and hope we can move some of that out of the Python code02:01
_mup_juju/provider-determines-placement r397 committed by kapil.thangavelu@canonical.com02:02
_mup_revert pick_policy, provider determines placement02:02
niemeyerbcsaller: Cool, that sounds like a nice directly02:10
niemeyerdirection!02:10
* niemeyer has brain issues typing today02:10
bcsalleroh, me too, me too02:11
hazmatargh.. keyboard interrupts in tests.. trial fail02:12
_mup_juju/provider-determines-placement r398 committed by kapil.thangavelu@canonical.com02:14
_mup_yank placement cli parameter per gustavo's suggestion.02:14
_mup_juju/env-origin r365 committed by jim.baker@canonical.com02:17
_mup_Doc changes02:17
_mup_juju/env-origin r366 committed by jim.baker@canonical.com02:23
_mup_Clarification on PPA support by juju-origin02:23
_mup_juju/provider-determines-placement r399 committed by kapil.thangavelu@canonical.com02:37
_mup_raise a providererror if the environment placement policy is not supported by the local provider02:37
_mup_juju/trunk r362 committed by kapil.thangavelu@canonical.com02:39
_mup_merge provider-determines-placement [r=niemeyer][f=855162]02:39
_mup_In order to better support the local provider which only supports02:39
_mup_a single placment strategy, this branch moves the determination02:39
_mup_of placement to the provider (while respecting environments.yaml02:39
_mup_config). This also removes the placement cli option.02:39
_mup_juju/local-provider-config r396 committed by kapil.thangavelu@canonical.com02:52
_mup_data-dir is required for local provider, drop storage-dir param02:52
_mup_juju/trunk r363 committed by kapil.thangavelu@canonical.com02:54
_mup_merge local-provider-config [r=niemeyer][f=855260]02:54
_mup_Exposes local provider via environments.yaml02:54
_mup_juju/lxc-omega-merge r400 committed by kapil.thangavelu@canonical.com02:58
_mup_merge pipeline and resolve conflict02:58
hazmatbcsaller, is lxc-library-clone ready to merge?02:58
bcsallerhazmat: niemeyer wanted some changes to how the config file is written, I'm making those now, but they are mostly minor02:59
hazmatbcsaller, all the pre-reqs on my side are merged omega fwiw03:00
bcsallergreat03:00
hazmati'm going to move on to fixing origin03:00
bcsallerok03:00
_mup_juju/local-origin-passthrough r404 committed by kapil.thangavelu@canonical.com03:25
_mup_juju-origin is passed to agent03:25
hazmathmm03:28
hazmatwhere is origin defined03:28
hazmatoh.. its still juju-branch03:31
jimbakerhazmat, please try using env-origin for juju-origin as an env option03:34
hazmatjimbaker, is that branch ready?03:34
jimbakerhazmat, yes it is03:34
hazmatokay, i'll rebase on it03:35
jimbakerhazmat, sounds good03:35
niemeyerwtf@li167-23:~/ftests$ ./churn -f ec203:50
niemeyer2011-09-26 23:47:11-04:00 Writing output to: /home/wtf/ftests/build/wtf/36103:50
niemeyer2011-09-26 23:47:11-04:00 Running test ec2-wordpress... OK03:50
niemeyerOK=1 FAILED=003:50
niemeyerwtf@li167-23:~/ftests$03:50
niemeyer!!!03:50
_mup_juju/local-origin-passthrough r405 committed by kapil.thangavelu@canonical.com03:58
_mup_merge env-origin03:58
hazmathmm.. we have two different implementations here04:01
hazmatfor juju-origin04:01
jimbakerhazmat, how so?04:01
hazmatjimbaker, lxc provider uses a shell script implementation for container initialization which also interprets origin04:02
hazmatthere is no cloud init in the container04:02
jimbakerhazmat, i recall you mentioning that might be a good idea, to unify04:03
jimbakeranyway, let me take a look at juju-origin in lxc04:03
hazmatthe problem is they have different values, i guess i can bridge them04:03
hazmatokay enough for today, now that today is over.. bedtime04:04
jimbakerhazmat, where is juju-orgiin defined in the lxc container stuff?04:05
jimbakerjuju-origin, to be precise ;)04:05
hazmatjimbaker, different name .. but lib/lxc/data/juju-create04:05
jimbakerhazmat, ok, i 'll take a look at that04:06
hazmatjimbaker, not nesc04:07
hazmatjimbaker, more important to get the branch merged04:07
hazmatjimbaker, the lxc provider will need to bridge the values04:07
hazmatsince the containers aren't init with cloud-init04:08
jimbakerhazmat, yeah, it should be fine from my cursory look04:08
niemeyerAwww.. _almost_ an end-to-end ec2 test..04:31
niemeyerAnother try..04:31
niemeyerNight all!05:11
fwereadeheh, the zookeeper documentation is fun13:05
fwereadeEphemeral nodes are useful when you want to implement [tbd].13:05
fwereadeThese can be used to [tbd].13:05
fwereadeFor more information on these, and how they can be used, see [tbd]13:05
fwereadehazmat: ping13:11
rogfwereade: the documentation in the C bindings header file isn't bad13:15
rog(see /usr/include/c-client-src/zookeeper.h)13:15
fwereaderog: thanks, good to know13:15
fwereaderog: since you've spoken, and therefore volunteered, can I talk at you about concurrent charm publishing for a moment?13:16
fwereade:p13:16
rog:-)13:17
rogof course13:17
rog"at" being the operative word13:17
fwereade:p13:17
fwereadeok13:18
rog'cos i'm not exactly fully up to speed on the charm thing yet13:18
fwereadedon't worry, I only started a couple of months ago myself13:18
rogbut discussion is always good for advancing state of knowledge...13:18
fwereadeany questions you may ask are likely to instructively expose my own ignorance ;)13:18
rogi'll do my best13:19
rog:-)13:19
fwereadeso, when you ask juju to deploy a charm, it will find the charm from somewhere (this step isn't directly relevant) and upload it to storage on the machine provider13:19
fwereadein the case of EC2, this will be S313:20
fwereadewhen two people happen to ask the same juju environment to deploy the same charm, the situation warrants closer attention13:20
fwereade(at the same time)13:21
rogare we assuming that charm names are unique?13:21
fwereadeit is perfectly possible that the two charms will be different, despite sharing the same id, but we will assume for now that that won't happen13:21
rog(presumably that was the origin of the extra hash code discussion yesterday, right?)13:22
fwereadeif you have two people deploying from local repos that don't match, "local:oneiric/mysql-1" does not uniquely identify a charm13:22
fwereadeyep, exactly13:22
fwereadethat's a pathological case, though, because local: charms are intended for development really13:23
roganyway, assuming names are unique...13:23
fwereadeyep :)13:23
fwereadeI think it seems like a good idea to ensure that a given charm is only uploaded once13:23
roghow big can charms be?13:24
fwereadeso, for example, you and I each try to add a service using the charm "cs:oneiric/mysql-32767"13:24
fwereaderog: I don't believe there's an explicit limit13:24
rogso they could be huge. in which case, yeah, i think you're right.13:24
fwereadealthough there is a practical limit at the moment because we can only upload so much to S3 in one go13:24
rogstill, people aren't gonna be happy about wasted bandwidth.13:25
fwereadequite so13:25
fwereadeat the moment, the process is:13:25
fwereade* upload to s313:25
fwereade* write state to a zk node13:25
rogalthough... maybe people wouldn't be unhappy if a charm was uploaded exactly once for each user13:25
fwereade* blow up if the node already exists13:25
fwereadedefine user13:25
fwereades/user/environment and I'm happy13:26
rogyeah, environment is good13:26
rogbut you've still got the same problem then, of course13:26
fwereadeok, that algorithm definitely doesn't work given the above goals13:26
fwereadeplease tell me what's wrong/overcomplicated/undercomplicated with the following suggestion13:27
rogwhy not just choose a deterministic charm->s3 node name mapping?13:27
fwereadeI think we have one anyway, but for some reason I'm loath to assume that just because a file with the correct name exists it's necessarily the data we want13:28
fwereadeI'd prefer to depend on matching ZK state than mere file existence13:28
rogi think you should. i don't think it's a problem, if the name is sufficiently unique13:29
rogif the file exists with the right name and it hasn't got the right contents, then it's an error.13:29
rogand it can be checked.13:29
rog(assuming the name contains a hash of the charm's contents)13:29
fwereadehmm, that takes us back to needing deterministic hashing13:30
rogyup13:30
rogi think that's a very useful primitive to be able to rely on.13:30
rog(gustavo may beg to differ!)13:30
rogi've found content-addressable stores to be very useful in the past for this kind of thing.13:31
fwereadeniemeyer didn't seem strongly in favour of it yesterday, indeed13:31
roghey, speak of the devil!13:31
fwereadespeak of the devil :)13:31
fwereadejinx:p13:31
rogniemeyer: hiya!13:31
fwereadeniemeyer: morning :)13:32
rogfwereade: i think the alternative is to have a more fragile state-based charm uploading scheme13:33
niemeyerMan.. I'm called a devil twice in the morning in the first 10 seconds!13:33
rogfwereade: where different clients negotiate as to who is going to upload the charm first13:33
niemeyer:-)13:33
fwereadeniemeyer: :p13:33
rogniemeyer: well, you did arrive when your name was mentioned13:33
rogabout 2 seconds after13:33
rogin a flash of sulphurous smoke13:34
niemeyer;)13:34
fwereaderog: well, in effect, I suppose, if indirectly13:34
fwereadesorry just a mo13:34
niemeyerfwereade: What's the issue there?13:35
rogniemeyer: concurrent charm uploading13:36
niemeyerWhy is that a problem?13:36
fwereadeniemeyer: sorry, back13:36
rogbecause you want to avoid uploading the same charm to storage if it's already there13:36
fwereadeniemeyer: I've got myself a little bogged down, hopefully you'll be able to point out something I've missed13:37
niemeyerfwereade: Sure, what's up13:37
fwereadeniemeyer: I'm instinctively against assuming that a file in storage which happens to have the correct name is actually the file we're looking for13:37
fwereadeniemeyer: actually, I'm definitely against it, if only for local development with multiple repos13:37
fwereadeniemeyer: could cause all sorts of horrifying confusion13:38
fwereadeniemeyer: (unless, as rog points out again, we have deterministic hashing and use hashes again)13:38
fwereadeniemeyer: (which of course is fine if it's the Right Thing, but I was up late last night removing all the hashes so I'm not strongly in favour :p)13:39
fwereadeniemeyer: *so*, IMO, we need to depend on the environment's zk state to know what has been published13:39
fwereadeniemeyer: agree?13:39
niemeyerfwereade: Sure13:39
fwereadeniemeyer: ok, to avoid insanity, we want to make sure that only one publisher actually publishes a given charm13:40
fwereadeniemeyer: and I'm fretting that the "obvious" answer is more complicated than it needs to be13:41
fwereadeniemeyer: that answer goes as follows13:41
fwereadeniemeyer: (1) does the charm state already exist in ZK? if so, it's there, we're done13:42
fwereadeniemeyer: (2) if not, create an ephemeral node at /charms/pending/[charm_url]13:42
niemeyerfwereade: Ugh.. ok, hold on13:42
niemeyerfwereade: Do we have a problem today?13:42
roglol13:42
fwereadeniemeyer: er...only potentially, I guess, and only in the case of concurrent publishes13:43
niemeyerfwereade: How?13:43
fwereadeniemeyer: only in situations where we can't depend on uniqueness of charm urls13:44
niemeyerfwereade: Why would we have a problem there?13:44
fwereadeniemeyer: because the current implementation will have 2 publishers uploading to the same storage key, and we can't be sure that the "winning" one will be the one that "wins" in zookeeper... can we?13:45
fwereadeniemeyer: at the moment, we blindly upload and set the charm state once the upload is done13:45
niemeyerfwereade: Ok, a couple of things:13:46
niemeyer1) There's a hash.. it is being uploaded to the same location, it's very very likely to be the same content13:46
niemeyer2) It's uploaded before being stored in zk, so the first one will win13:46
niemeyerfwereade: That kind of situation was exactly why we designed the current logic as it is13:46
fwereadeniemeyer: wait... surely the one that's uploaded first is the one that wins in ZK13:47
niemeyerfwereade: and it feels like we're spending time redesigning it, but it's still not clear to me why13:47
niemeyerfwereade: Why does it matter?13:47
fwereadeniemeyer: and is therefore the one that's likely to be overwritten?13:47
niemeyerfwereade: The first write to zk wins13:47
fwereadeniemeyer: and the last write to storage might win13:47
niemeyerfwereade: and it will point to a file in the storage that matches the expectation of the uploaded13:47
niemeyeruploader13:47
fwereadeniemeyer: let me check where we actually look at hashes13:49
niemeyerfwereade: We don't have to _look_ at them, actually13:49
niemeyerfwereade: It's part of the filename13:49
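[editor's note: a minimal sketch of the flow niemeyer describes above, assuming a provider storage client and a ZooKeeper-style client with hypothetical method names (put_file, create_if_absent). The bundle's file hash embedded in the storage key is what keeps concurrent uploads from clobbering each other, and the first ZooKeeper write wins. Illustrative only, not juju's actual code.]

    import hashlib

    def publish_charm(storage, zk, charm_url, bundle_path):
        # The key embeds the hash of the bundled file, so two uploaders with
        # identical bundles target the same object, and differing bundles land
        # at different keys instead of overwriting each other.
        with open(bundle_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        key = "charms/%s-%s" % (charm_url.replace("/", "_"), digest)

        with open(bundle_path, "rb") as f:
            storage.put_file(key, f)  # hypothetical provider storage API

        # First writer wins: if the charm node already exists in ZooKeeper, it
        # already points at a key that matched its own uploader's bundle.
        zk_path = "/charms/%s" % charm_url.replace("/", "_")
        created = zk.create_if_absent(zk_path, key)  # hypothetical helper
        return created  # False means another publisher won the race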
fwereadeniemeyer: did we not discuss this at length yesterday, and decide to use revisions alone?13:50
niemeyerfwereade: Yeah, we did decide to follow your suggestion, as long as you were willing to review the problems coming out of it :-)13:50
niemeyerfwereade: It now sounds that rather than "let's drop hashes" we're going towards "let's introduce complexity to solve a problem we don't have today"13:51
niemeyerfwereade: which puts me in an alert mode13:51
fwereadeniemeyer: er, I was never arguing for dropping hashes13:52
niemeyerROTFL13:52
fwereadeniemeyer: you seemed to put forward a number of fairly solid arguments for dropping them, and convinced me :p13:52
fwereade<niemeyer> fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results in these scenarios13:54
fwereade fwereade: Both13:54
fwereade<fwereade> niemeyer: heh, I was more in favour of making the hash a required part of the ID throughout (at least internally)13:54
fwereade niemeyer: my issue was that it *wasn't* included in the ZK node name at the moment13:54
rogdo we need a name that includes both the symbolic name *and* the hash? it seems to me we might be good with either/or13:55
rogthen we can define a mapping from symbolic name to hash name13:55
rog(possibly with some preference heuristics taken into account)13:55
fwereadeniemeyer, rog: my original position was that, if we have the hashes, we should use them throughout as part if the ID13:56
rogand then internally we could use hash names exclusively. no ambiguity.13:56
rogwe could use them *as* the ID13:56
fwereadeniemeyer, rog: I *thought* niemeyer's position was "actually, we don't need the hashes"13:56
rogwith name stored inside the charm13:56
rogs/with/with the/13:56
niemeyerfwereade: Fair enough13:57
niemeyerfwereade: Sep 26 09:55:04 <niemeyer>      fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results   in these scenarios13:57
niemeyerfwereade: Sorry if I misguided you in the wrong direction13:58
niemeyerfwereade: Let's stop the fuzz and start to move forward again13:58
* rog feels fwereade's pain.13:58
fwereadeniemeyer: not to worry :)13:59
* fwereade marshals thoughts while he gets a drink14:00
fwereadeniemeyer: ok, let's rewind a day or so, back to the question that started this off14:07
fwereadeniemeyer: actually, no, to an even earlier one14:08
fwereadeniemeyer: how would you define a "charm id"?14:09
fwereadeniemeyer: specifically, is the hash part of a charm id?14:09
fwereadeniemeyer: I contend that, internally, it should be: even if the same charm can be bundled N times and produce N different hashes, once it's been bundled and put into the system the hash is an important part of the identifier14:10
fwereadeniemeyer: is that a reasonable position in your opinion?14:11
rogfwereade: if you've got the hash, you don't need anything else14:11
niemeyerfwereade: It's not..14:12
niemeyerfwereade: There's no such thing as a "charm hash" today14:12
niemeyerfwereade: We have the hash of a file14:12
fwereadeniemeyer: ok, tweak terminology: we have the hash of a file, which is -- within one environment -- the single representation of a given charm14:13
roga charm hash would be trivially implemented, if we want one14:13
niemeyerrog: We don't have even a trivial amount of time right now14:13
rogok14:13
* fwereade thinks again14:14
niemeyerrog: We're already late.. it should be in _now_14:14
niemeyerrog: and we have to implement the store functionality, which is stopped while we talk about hashes14:14
rogwell, for the time being, just use the hash of that file14:14
niemeyerrog: No.. why!?14:14
niemeyerThere's no bug..14:14
niemeyerfwereade: Please continue14:15
fwereadeniemeyer: ok, end-run discussion14:15
niemeyerfwereade: I'm still keen on understanding your perspective14:15
fwereadeniemeyer: I reinstate the hashes on storage bundles14:15
niemeyerfwereade: You'll be implementing this and must be comfortable with what's going on14:15
fwereadeniemeyer: and we don't need to do anything else14:15
fwereadeniemeyer: as before, hashes are not part of the charm id in ZK, because they're not part of the charm id anyway14:16
niemeyerfwereade: That's right14:17
fwereadeniemeyer: that's all fine then14:17
niemeyerfwereade: Again, as I mentioned yesterday, the real reason why hashes were ever introduced is uniqueness14:17
niemeyerfwereade: In the storage14:18
fwereadeniemeyer: all of this came from my failing to realise that the hashes weren't purely based on relevant content14:18
niemeyerfwereade: It sorts out precisely the problem we started the discussion with14:18
fwereadeniemeyer: yep, I just didn't come to that realisation until we'd sidetracked on the whole "get rid of hashes" idea14:18
niemeyerfwereade: Without that, there's the chance that two people fight for an upload, and the person that writes to zk is not the one that won in S314:19
fwereadeniemeyer: which is what had me bogged down, and mystified by the idea of dropping the hashes, and proposing zk cleverness to get around it14:19
niemeyerfwereade: The whole thing is my fault.. I knew about the difficulty in doing this in a correct way and left you rambling around it14:20
fwereadeniemeyer: sadly the mystification only kicked in relatively recently, when I finally hit the concurrent upload tests I'd punted on with self.fail("looks tricky") yesterday14:20
fwereademeh, it takes two ;)14:20
niemeyerrog, fwereade: I'm happy to consider developing an actual "charm hash" algorithm in the future, if we find actual issues or advantages that would make it attractive14:21
rogcontent-addressed storage is often attractive in diistributed systems :-)14:22
niemeyerrog: If that's the reasoning for introducing it, no thanks14:23
rogniemeyer: the reason is it's very useful to have an unambiguous name for something, regardless of its origin or its location.14:23
rogit's a nice solid foundation14:23
niemeyerrog: No thanks.. solid foundations sink14:24
rogniemeyer: only if someone breaks the hashing algorithm...14:25
niemeyerrog: We need to approach it from a problem/feature perspective14:25
rogsure. hash-based naming is just a useful tool in the box.14:26
fwereadeniemeyer: potential future reason: avoid repeatedly downloading big charms that are already hanging around in the control bucket, if we can verify name-including-hash with the store14:26
fwereadeniemeyer: but I think it can wait until we're actually experiencing that as a problem14:27
fwereade;)14:27
niemeyerfwereade: There's already a single file for any given charm identifier in the system14:27
niemeyerfwereade: and in fact, I think it's already cached14:27
fwereadeniemeyer: sorry delay, it's a bit of a derail anyway, I'll just focus on getting you a new MP14:40
niemeyerfwereade: No worries14:41
robbiewrog: oing14:41
niemeyerfwereade: Have you checked out the go-charm-url branch?14:41
robbiewrog: *p*ing :)14:41
rogrobbiew: oing boing14:42
fwereadeniemeyer: when I looked, some files were missing, I'm afraid14:42
niemeyerfwereade: Hmm14:42
niemeyerfwereade: Let me check14:42
rogr14:42
niemeyerfwereade: Yeah, great..14:44
niemeyerfwereade: I forgot exactly the meaningful content. :(14:44
_mup_juju/go-charm-url r15 committed by gustavo@niemeyer.net14:45
_mup_Actually _add_ the relevant files.. :-(14:45
hazmat the hashes can be fixed to be stable14:45
hazmatby using actual content14:45
hazmatinstead of the zip14:45
niemeyerhazmat: Good morning14:46
niemeyerhazmat: Yeah, that was mentioned a few times14:46
hazmatniemeyer, g'morning14:47
niemeyerhazmat: We can easily develop a "charm hash", when we need it14:47
niemeyerhazmat: We don't right now14:47
hazmatagreed, its fine for concurrent uploads atm, one fails.14:48
hazmatfirst one wins14:48
niemeyerfwereade: The files are pushed.. please let me know what you think14:48
hazmatand charm id as ns:name:id is unique14:49
fwereadeniemeyer: cheers14:49
niemeyerhazmat: I'll try to polish what I did yesterday to get the waterfall/wtf running14:49
hazmatniemeyer, cool14:50
niemeyerhazmat: Would be nice to have a test there for the local case once that's up14:50
hazmatniemeyer, sure, i'm kinda of blocked on getting anything else in, but i can add more stuff to the pipeline14:50
niemeyerhazmat: Blocked?14:50
hazmatniemeyer, i'm fixing up origin, and pending on the rename, and local test14:50
hazmatniemeyer, i've got several branches that i14:50
niemeyerhazmat: Ah, cool14:50
hazmati'm waiting on other merges for14:50
niemeyerhazmat: Ok.. blocked as in doing a lot.. that's cool :-)14:51
niemeyerhazmat: I'm going over the queue right now14:51
niemeyerhazmat: Most of the branches are already re-reviews, so I'm hoping it'll just go smoothly14:51
hazmatniemeyer, i also need to switch out and work on slides for some time time, their do this afternoon14:51
hazmatniemeyer, cool14:51
hazmats/there14:52
niemeyerhazmat: Ah, that's a good time to mention14:52
niemeyerI'll be traveling to Sao Paulo tomorrow14:52
niemeyerWill be working on and off still14:52
niemeyerThe PythonBrasil conference is Thu-Sat14:52
niemeyerI have a keynote on Fri morning14:53
niemeyerBut otherwise I'll be working on the release14:53
niemeyerrobbiew, fwereade, rog, SpamapS, m_3, bcsaller, jimbaker: ^14:54
robbiewniemeyer: ack14:54
fwereadeniemeyer: sounds good, enjoy :)14:54
niemeyerfwereade: Thanks.. I'm a bit sad about the timing14:54
niemeyerIf I knew about how we'd be running right now, I'd not have taken it a couple of months ago14:55
niemeyerI'll be working from there, anyway14:55
=== pwnsauce is now known as Guest7312
fwereadeniemeyer: ping15:22
fwereadeniemeyer: I hadn't considered CharmURL.Revision to be optional15:23
fwereadeniemeyer: I see how it will make sense in the future15:23
fwereadeniemeyer: but it does mean we can't have charm names like the "mysql2" we have in the test repo15:24
niemeyerfwereade: Hmm.. I don't get either of those points, I think :)15:24
niemeyerfwereade: The revision must necessarily be optional.. otherwise how can we parse things like15:25
niemeyerfwereade: juju deploy cs:~fwereade/oneiric/wordpress15:25
niemeyer?15:25
niemeyerfwereade: Then, what's the deal with mysql2?15:25
fwereadeniemeyer: I'd considered the full CharmURL to be something we're only able to construct once we've asked the repo for the latest version15:25
fwereadeniemeyer: +var validName = regexp.MustCompile("^[a-z]+([a-z0-9-]+[a-z])?$")15:26
niemeyerfwereade: The thing above is a charm url15:26
niemeyerfwereade: Yeah, that looks bogus15:26
niemeyerfwereade: It should probably be "^[a-z]+([a-z0-9-]+[a-z0-9])?$"15:27
niemeyerfwereade: Nice catch15:27
niemeyerfwereade: The intention there was just to avoid mysql-15:28
fwereadeniemeyer: I had the idea that a CharmURL was a pointer to a specific version of a charm15:28
niemeyerfwereade: A charm url is.. a charm url :-)15:28
niemeyerfwereade: We're going to support charm urls without revisions in real world scenarios15:29
niemeyerfwereade: There's no reason to inflict pain on us and make the code unable to handle those15:29
fwereadeniemeyer: I'd say we're going to allow people to specify charms without revisions in real world scenarios15:29
fwereadeniemeyer: ok, so we create a charm url, ask a repository about it, and then construct the real charm url?15:30
niemeyerfwereade: Both charm urls are real.. one contains a revision, the other doesn't15:30
fwereadeniemeyer: to me, the task of extracting the user's intention is distinct from extracting the components of a fully specified charm url15:30
fwereadeniemeyer: and, internally, we're always going to use ones with revision, just as they always have schemas and series15:31
niemeyerfwereade: Why?15:31
fwereadeniemeyer: because we want to be able to upgrade charms?15:32
niemeyerfwereade: a charm url without a revision is a fine identifier15:32
niemeyerfwereade: just like a package name without a version is a fine identifier15:32
niemeyerfwereade: Sure.. we also upgrade packages15:32
niemeyerfwereade: and still, most package management is done without a version15:32
fwereadeniemeyer: it's a fine specifier, but to upgrade charms we need distinct zk nodes for the distinct versions15:33
niemeyerfwereade: Sure.. you're looking at one very specific operation for which you need to know revisions15:34
niemeyerfwereade: The abstraction of a charm url is not restricted to that one operation15:34
fwereadeniemeyer: what's the distinction between revision and series then?15:35
fwereadeniemeyer: both are optional, from the user's point of view, given a certain amount of extra context that allows us to infer what they mean15:35
niemeyerfwereade: Please check out the test cases15:35
niemeyerfwereade: They provide good insight into what each part is, and what are erroneous situations15:36
fwereadeniemeyer: I've seen them15:36
niemeyerfwereade: So I don't get your question.. revision is a number15:36
niemeyerfwereade: series is "oneiric", etc15:36
niemeyerfwereade: ?15:36
fwereadeniemeyer: agreed15:36
fwereadeniemeyer: both are optional from the sufficiently-naive user's POV15:37
fwereadeniemeyer: but only one is in your CharmURL implementation15:37
fwereadeniemeyer: I'm suggesting that the user's perception is not enough reason to allow non-specific charm urls15:38
niemeyerfwereade: Sorry, I'm really missing context about how you feel about this15:38
fwereadeniemeyer: and that a charm url should unambiguously specify a particular collection of bits now and forever15:38
niemeyerfwereade: cs:~joe/oneiric/wordpress15:38
niemeyerfwereade: This is a charm URL15:38
niemeyerfwereade: Correct?15:38
fwereadeniemeyer: disagree, it's enough information to discover a charm url, given context provided by the formula store15:39
niemeyerfwereade: Ah, great.. ok15:39
niemeyerfwereade: So that's where we disagree15:39
niemeyerfwereade: This _is_ a charm URL15:39
fwereadeniemeyer: in the same way that "wordpress" is enough info to determine a charm url, given the context of the environment15:39
niemeyerfwereade: "wordpress" is _not_ a charm URL15:40
fwereadeniemeyer: so a charm url can be, by design, inadequate to specify a given charm?15:40
fwereadeniemeyer: (without requiring repo access, I mean)15:41
niemeyerfwereade: cs:~joe/oneiric/wordpress15:41
niemeyerfwereade: This specifies a given charm15:41
niemeyerfwereade: For both of us..15:41
niemeyerfwereade: It may change over time, but it is a specified15:42
niemeyerspecifier15:42
niemeyerfwereade: and it is what the user will enter in the command line15:42
niemeyerfwereade: Having the user talking to the server about such an URL, and having to manage it internally on both the server and the client, and then saying "Oh, but that's not an _actual_ url", would be weird15:44
rogniemeyer: [aside, the regexp you gave above would forbid a name like "go" - i think you probably meant something like "^[a-z]([a-z0-9-]*[a-z0-9])?$"]15:46
niemeyerfwereade: Think about bazaar branches for a second15:46
niemeyerfwereade: Is the revision number part of the url?15:46
niemeyerrog:15:47
fwereadeniemeyer: agreed, it's not15:47
niemeyer>>> re.match("^[a-z]+([a-z0-9-]+[a-z0-9])?$", "go").group(0)15:47
niemeyer'go'15:47
fwereadeniemeyer: sorry, I didn't understand your previous paragraph15:48
niemeyerfwereade: Which part?15:48
rogoops, missed the first +15:48
fwereadeniemeyer: "that's not an _actual_ url"15:49
fwereadeniemeyer: the user is already using things that aren't actual urls15:49
fwereadeniemeyer: like "wordpress"15:49
niemeyerfwereade: As mentioned in the spec, this is an alias.   It is ambiguous, and is _not_ a URL.15:50
rogniemeyer: that said, it doesn't match "p9" which it should15:50
fwereadeniemeyer: but isn't the revisionless version essentially an alias to an actual versioned charm?15:51
niemeyerrog: Indeed, will fix it when I work on the branch again15:51
niemeyerrog: Thanks15:51
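[editor's note: a quick check, in Python, of the name pattern rog sketches above -- start with a letter, end with a letter or digit -- which accepts "go" and "p9" but still rejects a trailing dash. The final regexp in the go-charm-url branch may differ.]

    import re

    valid_name = re.compile(r"^[a-z]([a-z0-9-]*[a-z0-9])?$")

    for name in ("go", "p9", "mysql", "mysql2", "mysql-"):
        print(name, bool(valid_name.match(name)))
    # go True, p9 True, mysql True, mysql2 True, mysql- False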
fwereadeniemeyer: we have to use context to infer what's intended in both cases15:52
niemeyerfwereade: It is a Universal Resource Locator.. exactly like an lp: url, or an http: url..15:52
niemeyerfwereade: Content can change over time15:52
niemeyerfwereade: In all of these cases15:52
niemeyerfwereade: We _need_ to handle charm urls without revisions15:52
hazmatare we still trying to get repository client work and local dev in  for oneiric, if so what are we doing with regard to FFE dates and upload to the repositories?15:52
niemeyerfwereade: To talk to the client about them, to talk to the server about them, and internally15:52
niemeyerfwereade: Not supporting it in the code facing that would be silly IMO15:53
niemeyerhazmat: Yes, we are trying.. but we've been getting stuck on details for the past couple of days :)15:53
fwereadeniemeyer: it seems to me that we only need them without revisions in order to locate actual charms, which themselves do have revisions, and which we then want to use throughout15:54
fwereadeniemeyer: "the content isn't important until we have the content", if you like, and from then on we actually care about it15:55
niemeyerfwereade: Alright.. let's move on.. please support charm URLs without revisions.15:55
fwereadeniemeyer: sure15:56
niemeyerfwereade: Thanks15:56
fwereadeniemeyer: I'm sorry to be delaying things :(15:56
niemeyerfwereade: Not a problem.. I just won't let our ramblings distract us from what I know for sure to be the correct approach.15:58
niemeyerfwereade: Did that yesterday with the hash stuff15:58
niemeyerfwereade: If for no better reason (which I know exist), the user provides us a url without a revision that we have to manage. Having charm url handling and then having to parse by hand to tell if it's right or not, or to extract information out of it, would be quite impractical.15:59
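[editor's note: a rough sketch of parsing charm URLs with an optional trailing revision, per the discussion above. The group names, the required series, and the treatment of a trailing -<digits> as the revision are assumptions for illustration, not the go-charm-url implementation.]

    import re

    _url_re = re.compile(
        r"^(?P<schema>cs|local):"        # charm store or local repository
        r"(?:~(?P<user>[^/]+)/)?"        # optional ~user namespace
        r"(?P<series>[^/]+)/"            # series, e.g. oneiric
        r"(?P<name>[a-z][a-z0-9-]*?)"    # charm name
        r"(?:-(?P<revision>\d+))?$"      # optional revision
    )

    def parse_charm_url(url):
        m = _url_re.match(url)
        if m is None:
            raise ValueError("not a charm URL: %r" % url)
        parts = m.groupdict()
        if parts["revision"] is not None:
            parts["revision"] = int(parts["revision"])
        return parts

    print(parse_charm_url("cs:~joe/oneiric/wordpress"))   # revision: None
    print(parse_charm_url("cs:oneiric/mysql-32767"))      # revision: 32767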
niemeyerhazmat: SpamapS has better details on the FFE16:01
niemeyerhazmat: He's already filing them16:02
hazmatniemeyer, cool, i'm just concerned that we're not sticking to any dates16:02
niemeyerhazmat: and I was hoping to merge local dev today, and the store work by the end of the week16:02
hazmatand its not clear what the date is16:02
hazmatokay16:02
niemeyerhazmat: Then, we have about a week of hard core testing and bug fixing to polish what we've got16:02
niemeyerhazmat: Until being completely unable to fix anything16:03
hazmatniemeyer, sounds good16:14
koolhead17hey all16:19
koolhead17SpamapS: around16:19
niemeyerWill get lunch16:24
niemeyerbiab16:24
niemeyerkoolhead17: Hi, btw :)16:24
koolhead17niemeyer: hello. howdy16:24
* koolhead17 bows to robbiew Daviey 16:25
* koolhead17 stuck with this saving secret key to document root, in automation of gallery2 :16:26
koolhead17:(16:26
hazmatniemeyer, i'm going to move not in progress tickets to the next milestone16:28
koolhead17hey hazmat16:28
hazmathi koolhead1716:28
hazmatdb-config/commons still an issue?16:28
koolhead17hazmat: am feeling bit frustrated with this gallery2 thing16:28
koolhead17hazmat: no i am almost done with it16:28
hazmatkoolhead17, what's the problem?16:29
hazmatkoolhead17, you can save the secret out of the document root? or does it need to be read by the app?16:29
koolhead17this gallery2 s/w, while populating config files, asks the user to enter a few details, and one of them is to download a secret key and save it to the document root of gallery16:29
koolhead17read by app16:30
hazmatkoolhead17, i assume that's common to a normal installation then?16:30
koolhead17hazmat: yes i am confused how to move with that16:31
koolhead17http://www.rndguy.ca/2010/02/24/fully-automated-ubuntu-server-setups-using-preseed/16:31
koolhead17this helped me  as to  know preseed16:31
koolhead17hazmat: i needed your help, if you have some time, to understand that metadata workflow16:37
koolhead17db-relation-joined16:38
hazmatkoolhead17, i don't see the actual question? you preseed the mysql db with a master password, its db-relation-joined hook creates an account for gallery2 (all of that's already in the mysql formula), the gallery formula on db-relation-changed stores the password for the app into a location accessible by the app16:39
hazmatand there should be some sort of .htaccess config to prevent that config file from being served up directly via the web16:39
hazmatalthough by default the network security will prevent public access to mysql, its good practice not to expose the credentials farther16:40
koolhead17hazmat: http://bazaar.launchpad.net/~juju/juju/trunk/view/head:/examples/wordpress/hooks/db-relation-changed16:40
koolhead17here16:40
koolhead17hostname=`curl http://169.254.169.254/latest/meta-data/public-hostname`16:41
koolhead17so this metadata url serves value for variable like $user $password16:41
koolhead17which gallery2 will need? correct16:41
hazmat1) that metadata url is ec2 specific, its only exposing virtual instance attributes, it has nothing to do with what's installed on the machine, 2) Its usage by formulas will disappear in a future version of juju.16:42
hazmatie. user/password having nothing to do with that url16:43
SpamapSkoolhead17: no, gallery2 should not need the public hostname16:43
hazmat`relation-get user` && `relation-get password`16:43
hazmatget the db user and password16:43
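[editor's note: a minimal shape of what hazmat describes for the gallery2 side, written as a Python hook purely for illustration (juju hooks can be any executable). relation-get with the user/password keys comes from the discussion above; the host/database keys, the config path, and the file format are assumptions.]

    #!/usr/bin/env python
    # hooks/db-relation-changed -- illustrative sketch only
    import subprocess

    def relation_get(key):
        # relation-get prints the value the remote (mysql) unit set
        return subprocess.check_output(["relation-get", key]).decode().strip()

    user = relation_get("user")
    password = relation_get("password")
    host = relation_get("host")          # assumed key
    database = relation_get("database")  # assumed key

    # Store the credentials where the app can read them, but outside anything
    # the web server will serve directly (path/format assumed, not gallery2's).
    with open("/etc/gallery2/db.inc", "w") as cfg:
        cfg.write("user=%s\npassword=%s\nhost=%s\ndatabase=%s\n"
                  % (user, password, host, database))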
koolhead17hazmat: hmm my question was ec2 specific; if i'm not wrong juju currently only runs on the ec2 environment?16:43
hazmatkoolhead17, with the oneiric release we're also supporting bare metal installations via (orchestra/cobbler) and local machine development16:44
hazmatusing lxc containers16:44
koolhead17hazmat: am trying to write the charm on oneiric in virtualbox only16:45
koolhead17hazmat: i have not tried orchestra yet, been working on cobbler all this while. i am soo confused :(16:46
koolhead17i will go back to https://juju.ubuntu.com/Documentation and spend some time again. what i was currently doing is simply writing a bash script to get my installation automated. once that is achieved, use it to form a charm out of it.16:49
koolhead17am i doing wrong procedure?16:49
* koolhead17 wonders if he asked some dumb question :(16:50
_mup_juju/lxc-provider-rename-local r404 committed by kapil.thangavelu@canonical.com16:51
_mup_rename lxc provider to local provider16:51
hazmatkoolhead17, i don't understand how you'd be doing a virtualbox installation of a charm16:52
hazmatits not a supported machine provider atm16:52
koolhead17hazmat: what i am doing is writing a bash script which does an auto install of everything for me, and then i put it as a charm and test it on EC216:53
hazmatkoolhead17, i think its just as easy to build it out in a formula esp. with tools like debug-hooks16:53
hazmatand charm-upgrade16:53
hazmatbecause you'll need information from the remote relations which you'd have to mock/stub in the bash script16:54
hazmatand you'll need to tease apart the bash script into its parts.. i mean.. there's nothing wrong with doing it that way16:55
hazmatjust that its some additional work to restructure as  a charm when its done.16:55
koolhead17hazmat: hmm. in that case i have to do everything on ec2, which i'm using from some friend's account.16:55
koolhead17:D16:55
hazmatkoolhead17, if you want to live on the bleeding edge the local dev stuff allows for doing it all on your local machine16:56
rogthat's all folks. see ya tomorrow.16:56
hazmatrog, have a good one16:56
roghazmat: will do. you have no idea. :-)16:56
SpamapShazmat: before I wrote the lxc provider, I wrote a relation-get / relation-set mocker .. it worked quite nicely. ;)16:57
* SpamapS is quite excited tho, about having a local provider built in. :)16:57
hazmatindeed its very nice16:58
koolhead17hazmat: i only have a 2 GB laptop which already runs 2 VMs when i play with juju16:58
koolhead17:D16:58
hazmatSpamapS, bcsaller did some nice work to minimize construction time of instances as well (via lxc-clone)16:58
SpamapS:-D16:58
SpamapShazmat: I'm glad you guys found a way to make lxc-clone work.16:59
hazmatkoolhead17, the overhead both for disk and load of an lxc container is *significantly* less than a vm16:59
koolhead17 i have no idea how i can use orchestra and local env for using the say16:59
hazmatkoolhead17, you can have dozens of containers on a machine with minimal load if they're not doing any active work16:59
SpamapSkoolhead17: you don't need orchestra or cobbler17:00
koolhead17shall i simply install oneiric on my latop then?17:00
hazmatand the disk overhead is around 200mb for a minimal container, up to 500mb for a useful container (not including data)17:00
SpamapSkoolhead17: thats the opposite of what you need.. you need something local.. which is landing in trunk as we speak. :)17:00
SpamapSkoolhead17: it should work in natty too17:00
hazmatwoah..17:00
SpamapSkoolhead17: tho you'll need the LXC that is in the juju PPA17:00
hazmatSpamapS, koolhead17 it does not work in natty17:00
koolhead17okey. let me reach home and try this experiment then17:01
SpamapShazmat: why not?17:01
hazmatSpamapS, i don't think the ppa has been updated with the latest lxc pkg17:01
koolhead17i have lucid17:01
SpamapSI definitely think we need to spend more time making these things work on Lucid.17:01
koolhead17and oneiric on VM17:01
hazmatSpamapS, that needs a kernel update afaik17:01
koolhead17SpamapS: please do it. it will be awesome17:01
koolhead17LTS ++17:01
SpamapShazmat: it didn't before.. but maybe they've moved on from the stuff I did in Austin.17:02
koolhead17am on 10.4.317:03
koolhead172.6.32-33-generic #72-Ubuntu SMP Fri Jul 29 21:08:37 UTC 2011 i686 GNU/Linux17:03
hazmatSpamapS, maybe it doesn't.. i wasn't sure17:03
hazmatbcsaller, there's another failing test in omega.. make b/ptests usage is really rather problematic17:08
bcsallerhazmat: not for me? what are you seeing?17:09
hazmatjust a failure around the upstart file test17:09
_mup_juju/lxc-provider-rename-local r405 committed by kapil.thangavelu@canonical.com17:10
_mup_additional fixes for s/lxc/local17:10
koolhead17also https://juju.ubuntu.com/Documentation explains using EC2 only17:10
hazmatbcsaller, ./test juju.machine17:11
hazmatbcsaller, i get about 5 failures17:11
jimbakerkoolhead17, definitely agreed on that. still trying to get the new lxc stuff to work for me17:11
bcsallerhazmat: I'm seeing that, yes, hadn't been running those :)17:12
hazmatbcsaller, i know.. that's why running b/ptests gives a false sense of anything17:12
koolhead17jimbaker:  it will be based on oneiric then i suppose. :D17:12
bcsallerbecause the branch is functional17:12
jimbakerhazmat, i'm having a problem in bootstrap with lxc-omega17:12
hazmatjimbaker, do tell? ;-)17:13
jimbakerbcsaller tells me it's more likely to be just the networking stuff you've been working on17:13
jimbakerkoolhead17, yes, i have been just trying it w/ oneiric beta 217:13
hazmatjimbaker, there isn't any networking stuff that's post lxc-omega17:13
hazmatjimbaker, what's the problem?17:13
jimbakerhazmat, indeed, that's my understanding ;) so i get 2011-09-27 10:46:48,319 ERROR Command '['virsh', 'net-start', 'default']' returned non-zero exit status 1 in bootstrap17:14
hazmatjimbaker, you have libvirt-bin installed?17:14
koolhead17k17:14
jimbakerrunning that explicitly, $ virsh net-start default17:14
jimbakererror: Failed to start network default17:14
jimbakererror: internal error Network is already in use by interface virbr017:14
hazmatinteresting17:14
hazmatjimbaker, are you still on natty?17:14
jimbakerhazmat, indeed i do17:14
jimbakerand i'm running oneiric beta 217:15
hazmatit shouldn't be running net-start if it see it already running17:15
jimbakerhazmat, sure17:15
hazmatjimbaker, can you pastebin virsh net-list17:16
jimbakerhazmat, no need: $ virsh net-list17:16
jimbakerName                 State      Autostart17:16
jimbaker-----------------------------------------17:16
hazmatjimbaker, clearly your libvirt networking is wedged somehow17:17
jimbakerhazmat, i'm sure i'm missing some dependency. just don't know what17:17
hazmatif that's your output17:17
hazmatvirsh net-start fails because already started, and virsh net-list doesn't show it started17:17
jimbakerhazmat, yes sounds like a wedge indeed17:18
jimbakermaybe i should reboot :)17:18
hazmatjimbaker, well it might be a persistent config issue from the upgrade.. probably relating to libvirt, i'd check dnsmasq running, and libvirt config files, reboot couldn't hurt, and finally brctl17:19
jimbakerhazmat, thanks for the suggestions. definitely upgrade issues could be involved, since i was trying to get the new lxc work going last week w/ beta 117:21
niemeyerhazmat: I also don't have a "default" network locally in natty, FWIW17:21
hazmatniemeyer, interesting17:21
hazmathmm17:21
hazmatthankfully we have all the tools to install one by hand17:21
hazmatbut it should be ootb with libvirt-bin17:21
niemeyerhazmat: True.. we should just confirm that with a clean install17:22
niemeyerhazmat: or a clean upgrade :)17:22
niemeyerhazmat: If the upgrade is lacking something, there's still time to fix it17:22
hazmatniemeyer, i can add code to support the case, its pretty trivial with the existing network support17:22
hazmatto just add a default network if one isn't defined17:22
hazmathmm.. actually its automatic already17:23
hazmatif its not defined when we go to start, we define it17:24
hazmatand then start it17:24
hazmatjimbaker, actually can you pastebin virsh net-list --all17:25
hazmati forget the --all output17:25
hazmatflag17:25
hazmatwithout it, it only lists actives17:26
bcsallerhazmat: additionally, we saw the network code calls virsh net-stop, which doesn't exist; looks like it's net-destroy17:26
hazmatugh17:26
hazmatyeah.. that's a bug17:27
hazmatdoesn't anyone believe in symmetry anymore ;-)17:28
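[editor's note: roughly the behaviour being discussed, sketched with plain virsh calls via subprocess; the XML path and function names are assumptions, not the local provider's actual code. Note that net-destroy is the stop command -- there is no net-stop.]

    import subprocess

    def _virsh(*args):
        return subprocess.check_output(("virsh",) + args).decode()

    def ensure_default_network():
        # `net-list --all` also shows defined-but-inactive networks; plain
        # `net-list` only lists active ones, which was the confusion above.
        if "default" not in _virsh("net-list", "--all"):
            # assumed path; libvirt-bin normally ships this definition
            _virsh("net-define", "/etc/libvirt/qemu/networks/default.xml")
        if "default" not in _virsh("net-list"):
            _virsh("net-start", "default")

    def stop_default_network():
        _virsh("net-destroy", "default")  # the asymmetric counterpart of net-start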
niemeyer!!!17:30
niemeyerhazmat: "destroy" is such a good command, though!17:30
niemeyerapt-get destroy table17:31
hazmatlo.. i am become death.. destroyer of worlds17:32
bcsallerthe way I type its only destroyer of words17:32
jimbakerhazmat, http://pastebin.ubuntu.com/698012/17:34
hazmathmm.. so it is defined, but we can't start it17:35
jimbakerhazmat, again, i'm going to reboot before trying anything else17:35
hazmatjimbaker, sounds good17:35
jimbakerbut first, i need to run to lunch. biab17:35
hazmatbcsaller fortunately local provider doesn't stop the network since its normally setup as autostart by libvirt-bin17:36
hazmatand already running17:36
bcsallerhazmat: the start/destroy tests in juju.machine are because of the later construction of the .container when using the async interface, they are not around when the tests expect them to be to setup the mocks. trying to fix em17:37
hazmatbcsaller, woah.. the async interface should still be returning a deferred that can be waited on17:38
hazmat?17:38
bcsallerits simpler than that17:38
bcsallerthe containers are not built until start() now17:39
bcsallerrather than in init17:39
hazmatah17:39
bcsallerso .container isn't defined until later and thus can't be mocked17:39
bcsallerwell... I can mock it, but python has issues17:39
hazmatso there is no access to the container till its started? i think that has problems in several places for the rest of the code17:40
hazmatie. its not solely a test problem17:40
hazmatthe setup directories uses the container.rootfs path several times17:40
hazmatprior to starting the container for example17:40
hazmatthat's unfortunate17:41
bcsallerhazmat: all that code is called after17:43
bcsallerbut yes, fixing this is a little trickier than I wanted17:43
hazmatbcsaller, add a container_rootfs using the name to the container in class init17:48
hazmatall the usage is to get the fs17:48
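[editor's note: one way to read hazmat's suggestion -- derive the rootfs path from the container name in __init__ so other code (and tests) can use it before start(), while the container itself is still only constructed in start(). Class and attribute names are illustrative, not the branch's actual code.]

    import os

    class LocalContainerSketch(object):
        def __init__(self, name, lxc_dir="/var/lib/lxc"):
            self.name = name
            # available (and mockable) before the container object exists
            self.container_rootfs = os.path.join(lxc_dir, name, "rootfs")
            self.container = None

        def start(self):
            # the real container is only built here, as in the branch
            self.container = self._build_container(self.name)
            self.container.start()

        def _build_container(self, name):
            raise NotImplementedError("provider-specific construction")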
* hazmat grabs some food17:49
niemeyerhazmat, bcsaller: Folks, just off a call with robbiew.. I'll finish the ftests polishings I'm working on to get this ready and off my plate, and will then jump back onto the reviews18:30
hazmatniemeyer, cool, also wtf site is empty now18:35
niemeyerhazmat: I suspect I broke it18:36
niemeyerhazmat: I'm there cleaning it up a bit18:36
* hazmat starts working on presentation slides18:36
niemeyerI'll separate the setup/teardown so that we can easily have several tests for EC2, etc18:36
TheMueniemeyer: What framework do you use for testing?18:42
niemeyerTheMue: A few trivial scripts18:43
TheMueniemeyer: And unit tests is with standard gotest?18:44
niemeyerTheMue: this test framework produces a waterfall of success/failure per revision18:45
niemeyerTheMue: The tests are whatever we want them to be18:45
niemeyerTheMue: Right now we have two: one that runs the whole internal suite18:46
niemeyerTheMue: and another one that exercises a real interaction with ec218:46
niemeyerTheMue: The juju-go branch, that contains the evolving Go port, is not part of this yet, but can be easily integrated18:46
niemeyerTheMue: We use gocheck there18:47
TheMueah, ok, thx18:59
TheMuebtw, is it possible to simulate juju actions with local vms?18:59
RoAkSoAxfwereade: how's it going man?19:21
fwereadeRoAkSoAx: ah, not too bad, just reverted a big pile of revisions -- which is obviously bad, but feels surprisingly good ;)19:21
fwereadeRoAkSoAx: and you?19:21
RoAkSoAxlol19:22
RoAkSoAxfwereade: pretty good19:22
RoAkSoAxfwereade: orchestra/juju working like a charm19:22
RoAkSoAxfwereade: without the benefits of auto power management19:23
RoAkSoAxfwereade: since we dont have direct access to PDU's and stuff19:23
RoAkSoAxfwereade: but good19:23
fwereadeRoAkSoAx: awesomesauce :D19:23
RoAkSoAxfwereade: have time to discuss a bit about juju/orchestra?19:23
fwereadeRoAkSoAx: surely19:23
RoAkSoAxfwereade: so19:23
RoAkSoAxfwereade: about showing status pending19:23
RoAkSoAxfwereade: when we deploy or bootstrap19:23
RoAkSoAxfwereade: right after we do it, it already show the machine19:24
RoAkSoAxfwereade: the dns-name / instance id19:24
RoAkSoAxfwereade: however, the machine might have not even been turned on19:24
RoAkSoAxfwereade: so I was wondering it might be better to list them "pending" till it actually finishes installing and disables PXE19:24
RoAkSoAxfor its cobbler profile19:24
RoAkSoAxbut at the same time19:24
RoAkSoAxwhile they are pending, they should probably show19:24
RoAkSoAxwhat machine has been obtained19:25
RoAkSoAxfwereade: because, it is actually really needed for us to know what machine was selected, but, we need to see it as pending till it finishes installing I think19:25
RoAkSoAxfwereade: what do you think?19:25
fwereadeRoAkSoAx: in alternative words: available/acquired is not enough information?19:25
RoAkSoAxfwereade: that's enough19:26
RoAkSoAxfwereade: but, my point being is when I do juju status19:26
RoAkSoAxI see the machine as available19:26
RoAkSoAxwhen it should probably be pending19:26
RoAkSoAxbecause it hasn't finished installing19:26
fwereadeRoAkSoAx: ah, ok -- was confused by the mention of bootstrap, because you can't even get status until we've actually managed to bootstrap19:27
RoAkSoAxright19:27
fwereadeRoAkSoAx: that definitely sounds sensible19:27
RoAkSoAxfwereade: yeah, but while showing pending, it doesn't show what machine (dns-name) has been selected19:27
RoAkSoAxfwereade: i think we need to know that19:27
RoAkSoAxso usually is : 13: pending19:27
RoAkSoAxtight?19:27
RoAkSoAxright19:27
=== Guest7312 is now known as cburke
RoAkSoAxin orchestra we see something like : 13: {dns-name: blabla.domain.com, instance-id: MTMxNzA2NDA1NS4xNzg2NTQ1OTQuNzE4Mg}19:28
RoAkSoAxbut, it is still pending because installation is still executing19:28
RoAkSoAxso should show: 13: {dns-name: hassium.canonical.com, instance-id: MTMxNzA2NDA1NS4xNzg2NTQ1OTQuNzE4Mg}: pending19:28
RoAkSoAxor something similar19:28
fwereadeRoAkSoAx: ok, that makes sense19:28
RoAkSoAxfwereade: but it will be very very orchestra specific19:28
fwereadeRoAkSoAx: offhand, do you recall what it shows for EC2 in similar circumstances?19:29
fwereadeRoAkSoAx: because the situation is definitely analogous19:29
RoAkSoAxfwereade: it shows 13: pending I think19:30
fwereadeRoAkSoAx: cool19:30
fwereadeRoAkSoAx: the problem is really just getting the info out of cobbler reliably then, right?19:30
fwereadeRoAkSoAx: and the problem is kinda bound up with the power-management woes we already know about19:31
fwereadeRoAkSoAx: ...although I guess it doesn't have to be19:31
fwereadeRoAkSoAx: what *should* I be paying attention to to figure it out?19:32
RoAkSoAxfwereade: power management should not really be part of the problem19:32
RoAkSoAxfwereade: because, even if we19:32
RoAkSoAxfwereade: do that when we manually or automatically start the machine19:32
RoAkSoAxit *won't* show pending19:33
RoAkSoAxfwereade: from what I think, pending is the state on ec2 when the image is starting up, once is completely up and usually, then it changes to being available, right?19:33
fwereadeRoAkSoAx: "running" but yeah19:33
RoAkSoAxfwereade: so similarly, the status should show pending while the machine is running the installation, once it has finished, it should show it19:34
RoAkSoAxfwereade: but in case of orchestra, i think it is important to know what machine has been selected (dns-name) and its status is pending19:34
fwereadeRoAkSoAx: do we have a channel that lets us figure it out?19:35
fwereadeRoAkSoAx: or do we have to store the fact that it *should* show up soon19:35
fwereadeRoAkSoAx: and go from there?19:35
RoAkSoAxfwereade: i think19:36
RoAkSoAxwhen we do status, an easy way would be to check19:36
RoAkSoAxif pxe has been disabled in the system itself19:36
fwereadeRoAkSoAx: I *may* be happy with that but I'll have to think19:37
RoAkSoAxfwereade: that's the only way we can know that19:37
RoAkSoAxfwereade: because in ec2 it's running/pending, right?19:37
RoAkSoAxpending is when it is booting the VM19:37
RoAkSoAxand running its when finished booting19:37
RoAkSoAxin our case we cannot verify if it is installed/post_installed19:37
RoAkSoAxfwereade: the only way we can do that is by simply checking the pxe enabled on the system19:38
RoAkSoAxfwereade: because that's the last step of installation19:38
fwereadeRoAkSoAx: that sounds great to me19:38
RoAkSoAxfwereade: if installation fails, it will never disable PXE booting on the system19:38
fwereadeRoAkSoAx: was always a bit uncomfortable with what we were using netboot_enabled for19:38
fwereadeRoAkSoAx: yep19:38
RoAkSoAxfwereade: exactly, so we could just extend status to check netboot_enabled19:39
RoAkSoAxfwereade: so for each system where netboot_enabled is True, that means it hasn't finished installing or hasn't even been powered on19:39
RoAkSoAxfwereade: if netboot_enabled is False (on a status) we can assume it finished installing19:39
fwereadeRoAkSoAx: so available/pxe means we can grab it and use it; acquired/pxe means pending; acquired/nopxe means running(-very-soon); available/nopxe means don't-touch19:39
RoAkSoAxfwereade: because that's the last command executed when deploying19:39
fwereadeRoAkSoAx: perfect19:40
RoAkSoAxfwereade: right, so we keep the management classes as they are right now19:40
fwereadeRoAkSoAx: just need to make sure we handle the state transitions correctly in CobblerClient19:40
RoAkSoAxfwereade: right, so basically when we *already* deployed the machine, and we are *checking* status19:40
fwereadeRoAkSoAx: assuming that, yes, pending/running is just pxe/nopxe19:41
RoAkSoAxfwereade: we should check, "machine A is being deployed, let's check netboot_enabled. If True, it hasn't finished installing, or has failed. If False, then it has finished"19:41
fwereadeRoAkSoAx: yep19:41
RoAkSoAxfwereade: right, obviously in the future, we would need to know if installation failed19:41
RoAkSoAxI just have no idea how to know that right now19:42
fwereadeRoAkSoAx: ...may just come down to storing when we asked it to come up, and timing out :/19:42
fwereadeRoAkSoAx: still, that's the future :)19:42
fwereadeRoAkSoAx: definitely sounds like a good plan19:43
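[editor's note: the state mapping fwereade and RoAkSoAx settle on, as a tiny sketch. netboot_enabled is the cobbler field discussed above; the function and state names are illustrative.]

    def machine_state(acquired, netboot_enabled):
        # acquired: the juju environment has claimed this cobbler system
        # netboot_enabled: PXE still on, so installation has not finished
        if not acquired:
            return "available" if netboot_enabled else "do-not-touch"
        return "pending" if netboot_enabled else "running"

    print(machine_state(acquired=True, netboot_enabled=False))  # running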
RoAkSoAxfwereade: cool19:44
fwereadeRoAkSoAx: I really need to capture my mental list of orchestra deficiencies as bugs, and soon :/19:44
RoAkSoAxfwereade: hjeheh ok, will try to file some by EOW19:44
fwereadeRoAkSoAx: so will I, hopefully between us we'll cover most of it ;)19:45
fwereadeRoAkSoAx: thanks :)19:45
adam_ghttp://paste.ubuntu.com/698089/ <- mean anything to anyone? machine agent log from bootstrap node19:45
SpamapSIs there any way to get feedback from the provisioning agent?20:16
SpamapSLike.. if its unable to provision instances for some reason.. other than debug-log ?20:16
niemeyeradam_g: That's pretty weird..20:20
niemeyeradam_g: machine 0 is the first machine run20:20
niemeyeradam_g: Theoretically if there's no machine 0 zookeeper shouldn't even exist20:21
niemeyeradam_g: What's the context there?20:21
hazmatniemeyer, it was an old client20:27
hazmatwas the problem20:27
hazmatSpamapS, not atm20:27
SpamapSahh20:32
* SpamapS searches the bug lists to +1 or report that..20:32
SpamapShmm.. why is the eureka milestone set to release on 2011-01-01 ?20:34
jimbakerSpamapS, awesome backdating ;)20:34
SpamapSwe are *SERIOUSLY* late then ;)20:34
SpamapSSo.. I'm thinking we need to make the released version of juju not pull itself from the PPA, but rather from the Ubuntu archive only.20:44
robbiew+100020:44
SpamapSI suppose we can say if a user wants it on lucid/maverick/natty that they use juju-branch20:45
SpamapSor has that been replaced with juju-origin now?20:45
robbiewSpamapS: so we can't have the archive version pulling from a ppa...that means a deployment that works today could conceivably behave differently tomorrow.20:47
SpamapSright20:47
SpamapSjust thinking through what that will break20:48
robbiewright20:48
jimbakerSpamapS, that's the intent of juju-origin and how the env-origin branch determines the correct origin to deploy20:52
SpamapSso yeah I think I'll just patch in that the default source is _DISTRO instead of _PPA .. and if people want to spawn releases before 11.10 they will have to use the PPA or juju-branch20:52
jimbakerSpamapS, i have had to mock the origin for distro (apt-cache policy juju), but it works well in testing20:53
jimbakerSpamapS, so you will see in the env-origin branch, the default origin is determined using that, instead of just using _DISTRO (or the old _PPA)20:54
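[editor's note: a rough illustration of deciding the default origin from `apt-cache policy juju`, as jimbaker describes; the parsing heuristics and return values here are assumptions, not the env-origin branch's actual logic.]

    import subprocess

    def detect_default_origin():
        try:
            policy = subprocess.check_output(
                ["apt-cache", "policy", "juju"]).decode()
        except (OSError, subprocess.CalledProcessError):
            return "branch"  # no packaged juju visible; fall back to the branch
        if "ppa.launchpad.net/juju" in policy:
            return "ppa"     # installed/available from the juju PPA
        if "Installed: (none)" not in policy:
            return "distro"  # installed from the Ubuntu archive
        return "branch"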
SpamapSjimbaker: awesome, but thats not in trunk yet, is it?20:54
jimbakerSpamapS, i should say: i have had to mock the distro behavior. everything else i can directly test20:54
jimbakerSpamapS, it is not yet in trunk. i have some issues to fix20:55
jimbakerbut should be resolved pretty soon20:55
robbiewSpamapS: ack on the patch approach20:55
robbiewjimbaker: define "pretty soon" :)20:55
SpamapSif its not in the next 20 minutes, its not going to be uploaded today. ;)20:56
adam_ghey-- is '--placement=local' no longer possible, to deploy to the bootstrap node?20:56
jimbakerrobbiew, well it has to complete the review process, but the issues i have to fix are small and mostly related to how the testing and code is structured20:56
robbiewjimbaker: ack20:56
robbiewSpamapS: i translate jimbaker's response to be "not in the next 20min" ;)20:57
jimbakere.g. how do we test a specific circumstance with respect to policy: do we parse the data, make a call to code that looks like apt-cache, or, as in the old case, mock that out20:57
jimbakerthere are a number of approaches, so i'm converging on what works best20:57
jimbakerSpamapS, robbiew - that's correct20:57
jimbakernot 20 min :)20:57
SpamapSdanke20:58
* hazmat finishes up slides21:00
adam_ghazmat: with --placement='local' gone from the CLI, is it even possible to deploy certain charms to the bootstrap node anymore?21:03
niemeyeradam_g: That was never the intention of the placement logic21:04
SpamapSbut it was an *awesome* way to test things without having to start multiple nodes21:04
niemeyeradam_g: That said,21:04
SpamapShacky, but awesome. :)21:04
niemeyeradam_g: placement is still supported21:05
adam_gniemeyer: i understand, ill rephrase.. is there currently a new way to abuse this and let us put stuff on the bootstrap node?21:05
adam_g:)21:05
niemeyeradam_g: In ~/.juju/environments.yaml21:05
niemeyeradam_g: Shhh.. don't tell anyone21:05
adam_gniemeyer: i've found that, but can that be changed per 'juju deploy', or is the placement policy set for the lifetime of the environment?21:05
* SpamapS is fairly certain it will be desirable to be able to set placement at runtime as we come up with more interesting placement strategies.21:05
hazmatadam_g, yeah.. it's placement: local in the environment config21:06
hazmatwhoops.. mis constructed..21:06
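For reference, the sort of stanza being described, in ~/.juju/environments.yaml; everything except the placement key is illustrative:

    environments:
      sample:
        type: ec2
        placement: local   # put new units on the bootstrap node (machine 0)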
niemeyerI'll step outside for a while and bbiab21:06
hazmatSpamapS, yeah.. i agree, but we start conflating very different concepts..21:06
SpamapShazmat: users tend to like doing things that developers never dreamed of. I'd hope we'd follow the unix model, and give them enough rope to hang themselves (and then a little bit more)21:07
hazmatco-location and placement look very similar to end users21:07
hazmatSpamapS, we might end up resurrecting it21:08
hazmati see it as very useful for cross-az deploys on a per unit basis21:08
SpamapScross az.. cross cloud.. silly things that you just want to have stacked up on one t1.micro ... flexibility is good.21:10
hazmatadam_g, is that removal a significant burden, i can resurrect it now if need be?21:11
SpamapSalso when did it disappear?21:12
SpamapSI use it about twice a day. :-/21:12
SpamapSbut I'm on an older build21:12
hazmatSpamapS, yesterday evening21:13
SpamapSahh ok21:14
adam_ghazmat: we were using it to reduce our hardware needs on this openstack cluster by 3 or 4 nodes. i can workaround by just modifying environments.yaml between 'deploys'21:14
hazmatadam_g, that kinda sucks though21:14
adam_ghazmat: yah, especially since --placement=local is what we've documented internally. i'd love to get the option back, but i can see why others wouldn't21:15
hazmatadam_g, the ideal placement policy to me is min/max instances, but its very hard to determine where to place a formula to avoid a conflict21:15
* hazmat ponders21:16
hazmati guess i should try to  make that happen since i'm blocked on other stuff21:16
SpamapShazmat: Its not that hard.. you can keep a record of charms that have failed to deploy together and just use optimistic collision avoidance.21:16
hazmatSpamapS, lol21:16
SpamapSIts the ethernet model.21:17
hazmatSpamapS, i figure the easiest thing at deploy is to keep the number of service units of the same formula on a machine to 1, and error if we can't do that21:17
hazmatit's not real avoidance but it should help..21:17
SpamapShazmat: except they still might conflict. Which is fine, just move it to another machine if that happens.21:18
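A toy version of the heuristic being kicked around, i.e. keep at most one unit of a given formula per machine and fall back to a new machine (or an error) when that can't be done. Illustrative only, not juju's placement code:

    def choose_machine(formula_name, machines):
        # machines: assumed dict of machine id -> set of formula names
        # already placed there (hypothetical bookkeeping structure).
        for machine_id, formulas in sorted(machines.items()):
            if formula_name not in formulas:
                return machine_id
        # Every machine already has one: either error out, or (the
        # "ethernet model" above) provision a fresh machine and retry there.
        raise LookupError("no machine free of a %s unit" % formula_name)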
hazmati guess it's easier to resurrect --placement21:18
SpamapS:)21:18
hazmatniemeyer, ^21:18
adam_ghazmat: hi, sorry, i got pulled away. IMO, i think in the long run, users are going to want the *option* to have total control over charm placement regardless of the risks. currently "--placement=local" is the only thing that gives me that option21:58
SpamapSespecially with hardware21:59
SpamapShrm.. so defaulting to _DISTRO leaves us in a bind w.r.t. testing proposed updates of juju for SRU22:00
SpamapSwe'd need to have some way to enable proposed...22:01
hazmatbcsaller, do you have fixes for test failures on omega, i wanted to do some work further down the stack22:02
bcsallerhazmat: not yet, sorry22:02
bcsallerhazmat: most, but not all22:02
hazmatbcsaller, did you end up just adding the container_rootfs attr or trying to rework the api usage/tests?22:03
bcsallerhazmat: I tried that but didn't get it working22:03
bcsallerso I moved to something more comprehensive22:04
bcsallerbut then its trying to build out the container for real to destroy it and wants root, so still playing with it22:04
_mup_juju/cli-placement-restore r397 committed by kapil.thangavelu@canonical.com22:21
_mup_restore placement cli22:21
_mup_Bug #860966 was filed: Restore command line placement. <juju:In Progress by hazmat> < https://launchpad.net/bugs/860966 >22:30
hazmat^ SpamapS, adam_g if the feature removal matters to you commenting on the above bug/merge proposal would be helpful22:32
* hazmat hugs lbox22:32
niemeyerhazmat: Let's please not resurrect --placement now22:38
niemeyerhazmat: We can look at this again after the release22:39
SpamapSniemeyer: actually its critical that we have it until there's something better22:40
SpamapSniemeyer: understanding full well that its less than ideal, without it, we need 9 full hardware machines to test a full openstack deployment.22:41
niemeyerSpamapS: We survived until it existed, so we can survive without it for this release22:41
SpamapSniemeyer: orchestra didn't exist before this existed.22:41
niemeyerMan.. that's exactly why I don't like that kind of half baked feature.. :-/22:42
bcsallerseemed like a good idea at the time22:42
SpamapSThats just how things go... you put stuff in, then you come up with something better and you take it out. :)22:42
SpamapSJust look at devfs..22:43
SpamapShalf baked, overly ambitious, all those things.. then udev made it all better. :)22:44
_mup_Bug #860982 was filed: Rename lxc provider to local <juju:In Progress by hazmat> < https://launchpad.net/bugs/860982 >22:44
SpamapSBTW, for the Oneiric packages .. I'm hacking in a 'enable-proposed' option to the environment config. Its the only way we'll ever be able to do SRU's.22:45
hazmatwell it should be hacked onto env-origin22:45
SpamapSThat would be awesome.22:46
SpamapSrunning out of time tho22:46
hazmatyeah..22:46
niemeyerSpamapS: What's enabled-proposed?22:47
hazmatuse the proposed repo to install juju for testing22:47
SpamapSniemeyer: the 11.10 packages default to installing from the distro for quite obvious reasons. We also need to be able to enable -proposed so users can test an SRU that manifests on the spawned machines.22:48
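What's being proposed would amount to one more knob in environments.yaml, something like the stanza below; the enable-proposed key is hypothetical (it never landed), and the point of this discussion is that juju-origin could subsume it:

    environments:
      sample:
        type: ec2
        enable-proposed: true   # hypothetical: add the -proposed pocket on spawned machines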
SpamapSAnother option would be to just make people build an AMI that enables proposed22:48
SpamapSwhich actually might be better22:48
SpamapSbut a lot harder22:49
* hazmat watches the size of the testing community drop like a stone22:49
niemeyerSpamapS: hazmat is right.. that's just another option for juju-origin22:49
SpamapSniemeyer: which doesn't exist yet in r361 (the one I've been testing heavily for the last 2 days)22:49
* hazmat dog walks bbiab22:50
niemeyerSpamapS: enable-proposed also doesn't exist22:50
SpamapSright! But it's a smaller patch. :)22:50
niemeyerSpamapS: heh22:50
SpamapSone I fully hope to drop in a week22:50
niemeyerSpamapS: That's juju-origin.. if you're planning to land this, please let's do it the right way.22:51
SpamapSI can leave it out.. and we can SRU in the ability to.. SRU things.. when we need to.22:51
niemeyerSpamapS: There's zero benefit in having another option22:51
niemeyerSpamapS: The env-origin branch is in review, and I hope jimbaker has it ready to land22:52
SpamapSsimplest solution.. just leave both out22:52
=== medberry is now known as med_out
niemeyerSpamapS: Or maybe leave juju out? That's even simpler.22:52
SpamapSYou can take that up with higher powers. :)22:52
SpamapSI think the simplest thing is to just leave out the ability to turn on proposed, and open it as a bug in the package.22:53
SpamapSWhen the time comes for an SRU, we'll fix it then.22:53
SpamapSHopefully by merging in juju-origin.22:53
niemeyerSpamapS: Either we merge juju-origin, or we take juju out of Ubuntu. There's no middle way.22:54
niemeyerSpamapS: It's necessary for handling the source.22:54
SpamapSErr, wha?22:55
SpamapSIt works fine w/o it22:55
SpamapSWe may never actually need to SRU juju22:55
niemeyerSpamapS: Where does it take the packages from?22:56
niemeyerSpamapS: In the server side?22:56
SpamapSgiven the nature of the project I'd say we'd only SRU it if it was catastrophically broken anyway.22:56
SpamapSdistro22:56
niemeyerSpamapS: Have you patched it?22:58
SpamapSniemeyer: yes, I may have missed where there's another way to get that to work.22:59
SpamapSniemeyer: not uploaded yet.. just testing currently22:59
niemeyerSpamapS: Oh man.. that's awesome.. ok.22:59
SpamapSbut need to upload very soon as we're already starting to talk about juju in 11.10 in blog posts ...23:00
SpamapSAnd the rename needs time to "settle" ..23:00
SpamapSBTW, I may have to disable the test suite on build.. I get this almost every time I build in PPA: https://launchpadlibrarian.net/81242493/buildlog_ubuntu-oneiric-i386.juju_0.5%2Bbzr361-0ubuntu1~ppa3_FAILEDTOBUILD.txt.gz23:01
SpamapSFailure: zookeeper.ClosingException: zookeeper is closing23:02
SpamapSjuju.agents.tests.test_unit.UnitAgentTest.test_agent_executes_config_changed_hook23:02
SpamapShazmat: wondering if that is related to your REUSEADDR change23:02
hazmat  SpamapS its not23:12
hazmatthat typically signals some sort of background activity is happening when the connection is closed23:13
* hazmat runs in a loop23:13
hazmatlooks okay through a hundred iterations23:15
SpamapShrm23:15
SpamapSit only ever happens on the buildd23:15
hazmatSpamapS, is it consistent?23:15
SpamapSit has happened the last 3 times, but I think I had a build with r361 that passed23:15
SpamapSI'll upload one more time w/o disabling the test..23:16
SpamapSwould be good for the ppa to have this turned on23:16
* hazmat widens the loop scope to include the whole test class23:16
SpamapSso we get told about failures like this sooner23:16
SpamapShazmat: /win 2023:22
SpamapSdoh23:22
hazmat;-)23:22
hazmatSpamapS, so i widened the loop to the entire unit agent23:23
hazmattests.. no luck reproducing23:23
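The sort of loop being used here, roughly: re-run the suspect module until it fails, to catch the intermittent ClosingException from the build log. A sketch that assumes trial is on the path and the tree is importable; the module path is the one from the failure pasted above:

    import subprocess

    iteration = 0
    # Keep re-running the flaky test module until trial exits non-zero.
    while subprocess.call(["trial", "juju.agents.tests.test_unit"]) == 0:
        iteration += 1
        print("passed iteration %d" % iteration)
    print("failed on iteration %d" % (iteration + 1))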
SpamapSYeah I think its something with the clean isolated environment23:23
SpamapShazmat: the next build is here , starts in 10 min  https://launchpad.net/~clint-fewbar/+archive/fixes/+build/281061923:33
hazmatmy env is pretty clean  for a developer ;-)23:34
hazmati'll check back on the build23:34
SpamapSbuildd doesn't even have the internets23:34
SpamapSwe build "on the moon" just in case you have to23:35
niemeyerhazmat: How's that for a test case: http://pastebin.ubuntu.com/698196/23:35
hazmatSpamapS, now i remember why packaging java apps was such a pain23:35
* hazmat shakes fist at maven and ivy, and points to the moon23:35
niemeyerSpamapS, hazmat: Btw, I've the tests in the wtf run in a clean env23:36
niemeyerSpamapS, hazmat: Btw, tests in the wtf run in a clean env23:36
niemeyerWill get food, biab23:37
hazmatniemeyer, test case looks nice, better abstractions around waiting would make that even cleaner23:37
hazmatalthough really with lxc based tests and apt-cacher things should fly23:38
hazmatalso on the not-around note, i'm going to be out thursday and friday at the conference, lightning talks are tomorrow evening, so i'm going to head out a bit early to head up there and promote some good juju23:41
hazmatSpamapS, aha.. i reproduced it23:41
hazmatthe error23:42
SpamapShazmat: race condition somewhere?23:42
hazmatits some form of background activity23:42
hazmatwhen the test shuts down23:42
hazmatreally, its a specific type of race due to lack of adequate control structure for termination.. let me see if i can reproduce in isolation rather than with the whole test case23:44
hazmati guess i should actually look at the test ;-)23:44
hazmathmm23:46
hazmatwhat would be talking to the unit state as part of hook execution23:53
* niemeyer back23:55
hazmatniemeyer, incidentally a while ago i added some patches to txzk which record the path for an op as part of the exception23:56
niemeyerhazmat: Yeah, I recall something like that23:57
niemeyerhazmat: is it in?23:57
hazmatniemeyer, no.. its floating.. just lbox proposed it23:58
niemeyerhazmat: Aha, neat23:58
* hazmat hugs lbox23:58
hazmatniemeyer, its attached to bug 86102823:59
_mup_Bug #861028: Errors should include path information. <txzookeeper:In Progress by hazmat> < https://launchpad.net/bugs/861028 >23:59
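The gist of that txzookeeper change, as a sketch rather than the actual patch: hang the operation's path onto whatever zookeeper error comes back, so the failure says which node was involved. Assumes a Twisted Deferred from a txzookeeper call; the helper name is made up:

    def annotate_path(deferred, path):
        # Errback that stashes the path on the exception and re-raises it.
        def on_error(failure):
            failure.value.args = failure.value.args + (path,)
            failure.value.path = path
            return failure
        return deferred.addErrback(on_error)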
