/srv/irclogs.ubuntu.com/2011/09/26/#juju.txt

hazmatnice fixed debug-hooks to work early in the agent lifecycle00:14
hazmater. charm00:14
niemeyerhazmat: Ohhh.. that's sweet00:17
niemeyerhazmat: unix modes in zip is in upstream, btw.. we'll just need tip for the moment00:18
hazmatniemeyer, cool, i've got a tip build i can utilize00:18
hazmatniemeyer, its going to be a little while till the next release?00:18
niemeyerhazmat: I've upgraded the PPA, but failed to put the series in the version number00:18
hazmatsince they just released00:18
niemeyerhazmat: There's a bug in the build procedure with the colon in the path that I'll have to check out when I get a moment00:18
niemeyerhazmat: Yeah, but it shouldn't be a big deal for us for the server side00:19
niemeyerhazmat: We can deploy with tip00:19
niemeyerhazmat: Well.. and that'll be in the weekly in a couple of days00:19
hazmatniemeyer, cool00:19
_mup_juju/status-with-unit-address r403 committed by kapil.thangavelu@canonical.com01:30
_mup_debug hooks works with unit address, also address deficiency around debugging early unit lifecycle, by allowing debug-hooks command to wait for the unit to be running01:30
hazmatniemeyer, i think i'm going to go ahead and try to address the placement stuff after the lxc merge01:31
niemeyerhazmat: Sound ok.. but I also think it's going to be surprisingly trivial to handle it as suggested01:31
niemeyerSounds01:31
niemeyerhazmat: It's indeed fine either way, though01:32
hazmatniemeyer, i know.. i'm just fading, and want to get merges in..  this stuff needs to go upstream... maybe i should hold off till i get full night's rest01:32
hazmatanyways.. last of the branches is ready for review ( cli with unit address)01:32
niemeyerhazmat: Yeah, get some rest01:33
niemeyerhazmat: I'll probably do the same to get up early tomorrow01:33
hazmatniemeyer, there are places where i think placement on the cli is useful, and placement is a global provider option..01:33
hazmatsome of the discussion from earlier w/ bcsaller..01:34
hazmatthe cross-az stuff in particular is of interest to me01:34
niemeyerhazmat: I think it's overthinking the problem a bit01:34
niemeyerhazmat: This is well beyond what we need for the feature at hand01:34
niemeyerhazmat: I'd rather keep it simple and clean until practice shows the need for the cli01:35
hazmati'm concerned that placement is going to get out of hand on responsibilities on the one hand, and on the other i see it as being very convenient for implementing features like deploy this unit in a different az01:35
niemeyerhazmat: I feel uncertain about that01:36
hazmati see cross az as something required for production on ec2.. i'm not sure where else we can put this sort of decision01:36
niemeyerhazmat: We're implementing one feature, and imagining something else without carefully taking into account the side effects01:36
hazmatfair enough01:37
niemeyerhazmat: It's not required really..01:37
niemeyerhazmat: cross az can be done with a single cluster01:37
hazmatniemeyer, sure it can.. but how do we place it such that is01:37
hazmater.. place such that it is01:37
niemeyerhazmat: Yeah, good question.. I don't think the placement branch answers it01:37
niemeyerhazmat: So I'd rather keep it in a way we're comfortable rather than creeping up without properly figuring what we're targeting at01:38
hazmatit doesn't, but cli placement is an easy facility for it.. i agree there are ramifications there given provider validation that bear more thought, but it works pretty simply afaics01:38
niemeyerhazmat: I'm not sure, and given that it really won't work either way right now, I'd rather not do it for now.01:39
niemeyerhazmat: If nothing else, we're offering a visible interface to something that makes no sense to the user, with some intermangling in the implementation that we're not totally comfortable with.01:40
niemeyerhazmat: Feels like a perfect situation to raise KISS and YAGNI01:40
* hazmat ponders01:42
hazmati'll sleep on it... i still think cross-az stuff is very important.. and that this is probably the simplest way to offer it to users.01:43
hazmatbut perhaps its a red herring... much else to do for internal failure scenario recovery01:44
hazmatreconnects, restarts, etc01:44
niemeyerhazmat: That's not even the point.. no matter if it's the implementation we want or not, it doesn't work today, and won't work for quite a while.01:45
hazmatniemeyer, i could implement this cross-az via cli placement in a day i think.01:45
niemeyerhazmat: I'd rather not have this stuff creeping up in the code base until we figure it out.01:45
hazmattomorrow even01:45
niemeyerhazmat: Heh01:45
hazmat;-)01:46
niemeyerhazmat: I suggest we KISS and you suggest doing even more.. get some sleep. :)01:46
hazmatindeed01:46
_mup_Bug #859308 was filed: Juju commands (ssh/status/debug-hooks) should work with unit addresses. <juju:In Progress by hazmat> < https://launchpad.net/bugs/859308 >01:52
niemeyerHello!10:48
rogniemeyer: hiya!10:49
niemeyerrog: Hey!10:50
rogniemeyer: what's the best way for me to update to your merged version?10:56
rog(of gozk)10:56
rogis it now in a new repository?10:57
niemeyerrog: It's a new branch.. just branch from lp:gozk/zk10:57
niemeyerrog: Which is an alias for lp:~juju/gozk/zk10:57
niemeyerrog: In the future it'll go back to being lp:~juju/gozk/trunk, once we kill launchpad.net/gozk10:58
niemeyerI mean, kill as in not support this import path10:58
rogok10:58
__lucio__hi! is there a way to compose two formulas so i can say, for example, deploy a database server + a monitoring agent to this node?11:16
* rog finds lots of documentation bugs. oops.11:22
niemeyer__lucio__: Absolutely11:27
__lucio__niemeyer, how? (hello!)11:27
niemeyer__lucio__: Hey! :)11:27
niemeyer__lucio__: Charms (previously known as formulas) interconnect via relations that follow a loose protocol11:28
niemeyer__lucio__: We give a name to the interface between them so that we can distinguish the protocols11:29
niemeyer__lucio__: So, you can define in one of the formulas that it requires (consumes) a given relation interface, and in the other side that it provides (serves) the given relation interface11:29
niemeyer__lucio__: This way both sides can be interconnected at runtime11:30
niemeyer__lucio__: Using the "juju add-relation" command11:30
niemeyer__lucio__: The charms will be notified when such a relation is established via the hooks11:30
niemeyerrog: Hm?11:31
niemeyer__lucio__: Does that make sense? :)11:31
__lucio__niemeyer, not exactly what i mean. imagine i get the mysql charm and want to deploy it. get machine 1 with mysql. then i want to deploy some agent to monitor the system stats there. i want to create a new charm and say "deploy this charm to this machine that already exists"11:32
__lucio__is that the "placement policy"?11:32
niemeyer__lucio__: Ah11:32
niemeyer__lucio__: I see11:32
__lucio__the key part in here would be that those charms should know nothing of each other11:32
niemeyer__lucio__: This will be supported in the coming future through what we're calling co-located charms11:33
niemeyer__lucio__: In practice it'll be just a flag in the relation11:33
niemeyer__lucio__: and juju will put the charms together based on that11:33
niemeyer__lucio__: It's not implemented yet, though11:33
niemeyer__lucio__: and it's not the placement policy11:33
niemeyerhazmat: See? :)11:34
niemeyer__lucio__: Yeah, exactly11:34
niemeyer__lucio__: Re. knowing nothing about each other11:34
niemeyer__lucio__: They will use exactly the same interface for communication that normal charms use11:34
__lucio__niemeyer, ack. nice to see you guys thought about it :)11:34
niemeyer__lucio__: Despite them being in the same machine11:35
niemeyer__lucio__: Yeah, there's a lot of very nice stuff to come.. just a matter of time11:35
fwereadeniemeyer: ping12:33
niemeyerfwereade: Hey!12:34
fwereadeniemeyer: thanks for the review :)12:34
fwereadeniemeyer: how's it going?12:34
niemeyerfwereade: np12:34
niemeyerfwereade: Going on a roll!12:34
fwereadeniemeyer: sweet :D12:35
fwereadefwereade: I was wondering about charm id/url/collection/etc terminology12:35
niemeyerfwereade: Ok12:35
fwereadeniemeyer: and wanted to know what your thoughts were re: the hash at the end12:35
fwereadeniemeyer: I see it as not really part of the *id* so much as just a useful bit of verification12:36
fwereadeniemeyer: but... well, it's an important bit of verification :)12:36
niemeyerfwereade: Which hash?12:36
fwereadeniemeyer: lp:foo/bar-1:ry4xn987ytx984qty498tx984ww12:37
fwereadewhen they're stored12:37
kim0Howdy folks .. did the LXC work land already?12:37
kim0seeing lots of cool comments12:37
niemeyerfwereade: It must be there12:37
kim0commits I mean12:37
niemeyerfwereade: For storage, specifically12:37
fwereade(yes that was a keyboard-mash, not a hash, but close enough ;))12:37
niemeyerfwereade: The issue isn't verification, but uniqueness12:38
niemeyerkim0: Heya!12:38
niemeyerkim0: It's on the way12:38
fwereadeniemeyer: ...ha, good point, hadn't internalised the issues with revision uniqueness12:38
fwereadeniemeyer: except, wait, doesn't the collection-revision pair guarantee uniqueness?12:38
fwereadeniemeyer: I know revisions and names wouldn't be enough12:39
niemeyerfwereade: Define "guarantee"12:39
niemeyerfwereade: ;_)12:39
kim0cool, can't wait to tell the world about this .. It's such a nice feature12:40
niemeyerfwereade: A hash is a reasonable "guarantee", even if it's not 100% certain.  Trusting the user to provide a unique pair isn't very trustworthy.12:40
* kim0 compiles regular juju progress report .. shares with planet earth12:40
niemeyerkim0: It is indeed! And we're almost there12:40
fwereadeniemeyer: ok, it feels like the bad assumption is that a collection + a name will uniquely identify a (monotonically increasing) sequence of revisions12:41
fwereadeniemeyer: confirm?12:41
niemeyerfwereade: I'd say more generally that the tuple (collection, name, id) can't be proven unique12:42
niemeyerfwereade: If we were the only ones in control of releasing them, we could make it so, but we're not12:43
fwereadeniemeyer: hm, indeed :)12:43
fwereadeniemeyer: ok, makes sense12:43
fwereadeniemeyer: in that case, I don't see where we'd ever want the revision without the hash12:43
niemeyerfwereade: That seems a bit extreme12:44
kim0mm .. the juju list is not on https://lists.ubuntu.com/12:44
niemeyerfwereade: The revision number is informative12:45
niemeyerfwereade: and in the store it will uniquely identify the content12:46
niemeyerfwereade: FWIW, the same thing is true for packages12:46
roglunch12:48
fwereadeniemeyer: ok... but if we ever have reason to be concerned about uniqueness of coll+name+rev, in what circumstances *can* we assume that that alone is good enough to identify a charm?12:48
fwereadeniemeyer: (ok: we can if it came from the store, probably (assuming *we* don't screw anything up) but it doesn't seem sensible to special case that12:49
fwereadeniemeyer: )12:50
niemeyerfwereade: Pretty much in all cases we can assume it's unique within the code12:50
fwereadeniemeyer: if we want the bundles to be stored with keys including the hash, why would we eschew that requirement for the ZK node names?12:51
fwereadeniemeyer: um, "pretty much in all cases" == "not in all cases" :p12:51
niemeyerfwereade: Sure, you've just found one case where we're concerned about clashes12:52
niemeyerfwereade: Maybe we can change that logic, actually.. hmm12:52
niemeyerfwereade: The real problem there is that it's very easy for the user to tweak a formula and ask to deploy it, and then deploy something else12:53
niemeyerfwereade: The question is how to avoid that situation12:54
fwereadeniemeyer: sorry, are we talking about upgrades, or just normal deploys?12:54
niemeyerfwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results in these scenarios12:55
niemeyerfwereade: Both12:55
fwereadeniemeyer: heh, I was more in favour of making the hash a required part of the ID throughout (at least internally)12:55
fwereadeniemeyer: my issue was that it *wasn't* included in the ZK node name at the moment12:56
fwereadeniemeyer: that seemed like a problem :)12:56
niemeyerfwereade: That will mean we'll get two things deployed with the same name-id12:56
niemeyerfwereade: Not a situation I want to be around for debugging ;-)12:56
niemeyerfwereade: HmM!12:57
niemeyerfwereade: What about revisioning local formulas automatically based on the latest stored version in the env?12:57
niemeyerfwereade: Effectively bumping it12:58
niemeyerfwereade: The spec actually already suggests that, despite that problem12:58
niemeyerfwereade: This way we can remove the hash.. but we must never overwrite a previously existing charm12:58
fwereadeniemeyer: I'm confused12:59
niemeyerfwereade: Ok, let me unconfuse you then12:59
niemeyerfwereade: What's the worst point in the above explanation? :)12:59
fwereadeniemeyer: can we agree that (1) we can't guarantee that a (coll, name, rev) uniquely identifies a charm13:00
fwereade(2) therefore, we need something else to guarantee uniqueness13:01
niemeyerfwereade: My suggestion is to guarantee uniqueness "at the door"13:01
niemeyerfwereade: We never replace a previous (coll, name, rev)13:01
niemeyerfwereade: If we detect the user is trying to do that, we error out13:02
niemeyerfwereade: To facilitate development, though, we must give people a way to quickly iterate over versions of a charm13:02
niemeyerfwereade: Which means we need to bump charm revisions in the local case based on what was previously deployed13:03
niemeyerfwereade: Makes sense?13:03
fwereadeniemeyer: I think so13:04
* fwereade thinks...13:04
niemeyerfwereade: This way we can remove the hash13:04
niemeyerfwereade: But you'll have to review logic around that a bit so that we're sure we're not naively replacing a previous version13:04
niemeyerfwereade: It shouldn't be hard, IIRC13:05
fwereadeniemeyer: I don't remember it being exceptionally complex13:05
niemeyerfwereade: Because we consciously store the charm in zk after uploading13:05
niemeyerfwereade: So if the charm is in zk, it must be in the storage13:05
fwereadeniemeyer: I have a vague feeling it'll already complain if we try to overwrite a charm in zk13:05
niemeyerfwereade: and thus we shouldn't replace13:05
niemeyerfwereade: I think upgrade is a bit more naive13:06
niemeyerfwereade: But I'm not sure13:06
niemeyerfwereade: Perhaps my memory is failing me13:06
fwereadeniemeyer: I know less about the code than you might think, I was working most of last week with about 3 mental registers operating properly :/13:06
fwereadeniemeyer: CharmStateManager calls client.create with a hashless ID, so that should explode reliably already13:08
niemeyerfwereade: Not sure really.. but please review it.. it'll be time well spent13:09
niemeyerfwereade: Then, we'll need to implement the revision bumping that is in the spec13:09
niemeyerfwereade: For the local case, that is13:09
fwereadeniemeyer: there was talk a little while ago about allowing people to just ignore revisions locally13:10
fwereadeniemeyer: which seems to me to be quite nice for charm authors13:10
niemeyerfwereade: Exactly.. that's a way to do exactly that13:11
niemeyerfwereade: The user will be able to ignore it, because we'll be sorting out automatically13:12
niemeyerfwereade: Please see details in the spec13:12
niemeyerWill get a bite.. biab13:12
fwereadeniemeyer: by overwriting the revision file in the local repo? (the spec seems to me to be talking about how the formula store should work, not local repos)13:12
=== med_out is now known as medberry
niemeyerfwereade: CTRL-F for "local formula" within "Formula revisions"13:28
niemeyerfwereade: Sorry..13:28
niemeyerfwereade: CTRL-F for "local deployment" within "Formula revisions"13:28
fwereadeniemeyer: hm, I see it now, sorry13:29
niemeyerfwereade: np13:29
fwereadeniemeyer: for some reason I'm not very happy with us writing into a local repo though13:29
niemeyerfwereade: That's why the revision is being taken out of the metadata13:30
fwereadeniemeyer: ...and it seems to say we should bump on every deploy, which feels rather aggressive13:30
fwereadeniemeyer: just a suggestion: if the revision and the hash don't match, we blow up as expected13:31
niemeyerfwereade: You have the context for why this is being done now.. I'm happy to take suggestions :)13:31
niemeyerfwereade: The hash of what?13:31
niemeyerfwereade: Directories have no hash13:31
fwereadeniemeyer: don't they?13:31
fwereadeniemeyer: ok, it's the hash of the bundle13:31
fwereadebut they do have the appropriate method13:32
niemeyerfwereade: Yeah. it's a hack really13:32
niemeyerfwereade: Plus, not updating means we'll force users to bump it manually13:32
niemeyerfwereade: Effectively doing the "rather aggressive" part manually, which sucks13:32
fwereadeniemeyer: what if we treat a revision file that exists as important -- so if you change a revisioned formula but don't change the rev, you blow up -- but allow people to just delete the revision file locally, in which case we identify purely by hash and treat the hash of the current local version as "newer" than any other hashes that might be around13:34
niemeyerfwereade: I don't get what's the problem you're solving with that behavior13:34
fwereadeniemeyer: the failure-to-upgrade-without-manually-tweaking-revision13:35
niemeyerfwereade: The solution in the spec solves that without using hashes13:35
niemeyerfwereade: Why is your suggestion better?13:36
fwereadeniemeyer: but at the cost of repeatedly uploading the same formula every time it's deployed whether or not it's required13:36
niemeyerfwereade: Hmm13:36
fwereadeniemeyer: I'm also a bit suspicious of requiring write access to local repos, just to deploy from them13:37
fwereadeniemeyer: feels icky ;)13:37
niemeyerfwereade: That's trivial to solve.. but let's see, your earlier point is a good one13:38
niemeyerfwereade: Hmm.. I think we can specialize the behavior to upgrade13:43
niemeyerfwereade: and make deploy consistent for local/remote13:44
niemeyerfwereade: In deploy, if there's a charm in the env, use it no matter what13:44
niemeyerfwereade: Well, assuming no revision was provided, which is always true nowadays13:44
niemeyerfwereade: In upgrade, if it is local, bump the revision to the revision currently deployed (in the *env*) + 113:45
fwereadeniemeyer: so we *might* still needlessly upload, but less frequently... not entirely unreasonable, I guess :p13:46
niemeyerfwereade: Sure, which gets us back to the original issue.. we need a method that:13:47
niemeyer1) Does not needlessly bump the revision13:47
niemeyer2) Does not require people to bump the revision manually13:47
niemeyerThat's one solution13:47
niemeyerfwereade: I don't want to get into the business of comparing the hash of an open directory with a file in the env13:48
niemeyerfwereade: At least not right now.. to solve the problem we'd need to create a unique way to hash the content that doesn't vary with different bundlings13:48
fwereadeniemeyer: hm, I wasn't aware we had different bundlings to deal with..?13:49
niemeyerfwereade: Well..13:49
niemeyerfwereade: There's a file in the env.. there's an open directory in the disk13:49
niemeyerfwereade: How do we compare the two?13:49
fwereadeniemeyer: well, at the moment, we zip up the dir and hash the zipfile; I understand you think that's a hack, but I don't understand how it makes the situation any worse13:51
niemeyer<fwereade> niemeyer: hm, I wasn't aware we had different bundlings to deal with..?13:51
niemeyerfwereade: So you do understand we have different bundlings to deal with13:51
fwereadeniemeyer: we have different representations of charms, but the hashing is the same13:51
niemeyerfwereade: Why is it the same?13:51
niemeyerfwereade: Where's the zipping algorithm described that guarantees that zipping the same directory twice necessarily produces the same hash?13:52
fwereadeniemeyer: because we convert dirs to bundles to hash them?13:52
fwereadeniemeyer: and we *also* convert dirs to bundles to deploy them13:52
niemeyerfwereade: Where's the zipping algorithm described that guarantees that zipping the same directory twice necessarily produces the same hash?13:52
fwereadeniemeyer: ah-ha13:52
rogzip files hold modification times...13:53
fwereadeniemeyer, rog: hmm.13:54
rogniemeyer: i did something like this before13:54
niemeyerrog: modification times can be preserved.. but there are other aspects like ordering that are entirely unspecified13:54
rogyup13:54
rogniemeyer: my fs file traversal thing (which later became alphabet) solved this by always archiving in canonical order13:55
niemeyerSo, there are two choices: either we define a directory/content hashing algorithm, or we don't take the content into account13:55
rogand i added a filter for canonicalising metadata we don't care about (e.g. mtime, atime)13:56
rogoh yes, permissions were a problem too.13:56
rogit worked very well in the end though13:56
niemeyerrog: Sure, I'm not saying it's not possible.. I'm just saying that it requires diving into the problem more than "hash the zip files"13:56
rogsure13:56
rogzip files aren't canonical13:57
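
[Editor's note: a minimal sketch of the canonical content hashing rog describes above -- walk the charm directory in sorted order and hash only relative paths and file bytes, ignoring mtimes and other volatile metadata, so the same tree always yields the same digest no matter how it is later bundled. The function name and details are illustrative assumptions, not juju's actual code.]

    import hashlib
    import os

    def hash_charm_dir(path):
        digest = hashlib.sha256()
        for root, dirs, files in os.walk(path):
            dirs.sort()                                  # canonical traversal order
            for name in sorted(files):
                full = os.path.join(root, name)
                rel = os.path.relpath(full, path)
                digest.update(rel.encode("utf-8"))       # the path is part of the content
                with open(full, "rb") as f:
                    for chunk in iter(lambda: f.read(65536), b""):
                        digest.update(chunk)             # file bytes only, no mtime/owner
        return digest.hexdigest()
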
fwereadeniemeyer: as a side note: what's the trivial solution to my discomfort with requiring write access to local repos?13:58
niemeyerfwereade: bundle the revision dynamically13:58
fwereadeniemeyer: so we'd have local repos with different revs to the deployed versions? that feels like a pain to debug, too13:59
niemeyerfwereade: That may be the case either way, and there's absolutely nothing we can do to prevent it13:59
niemeyerfwereade: The proof being that the local version is user modifiable14:00
niemeyerfwereade: Either way, the normal case is writing the revision.. so let's not worry about the read-only case for now14:01
fwereadeniemeyer: ok then :)14:02
niemeyerfwereade: local: is really targeting development..14:02
fwereadeniemeyer: true14:03
niemeyerfwereade: Again, please note that the local revision bumping must take the revision from the env + 1, rather than taking the local revision number into consideration14:03
niemeyerfwereade: On upgrade, specifically..14:03
niemeyerfwereade: I believe we can handle the deploy case exactly the same for local/remote14:04
fwereadeniemeyer: understood, I just feel that "local newer than env" is easily comprehensible, while "env newer than local (from which it was deployed)" is a touch confusing14:04
fwereadeniemeyer: agree on deploy: just use the one already deployed if it exists14:04
fwereadeniemeyer: (I know I'm still talking about the magic non-writing case, I'll try to forget about that)14:05
niemeyerfwereade: I don't understand the first comment in this series14:06
fwereadeniemeyer: sorry, I was still wittering on about the non-writing case, it's not relevant ATM14:06
niemeyerfwereade: The local namespace is flat..14:07
niemeyerfwereade: Ponder for a second what happens if both of us start deploying the same "local" formula on the env14:07
niemeyerfwereade: and what the revision numbers mean in that case14:08
fwereadeniemeyer: I've been having quiet nightmares about that, actually ;)14:08
niemeyerfwereade: There's no nightmare that, if you acknowledge that local: is targeting development most importantly14:09
niemeyers/that,/there,/14:09
fwereadeniemeyer: I think the only sensible thing we can say is Don't Do That14:09
niemeyerfwereade: It's fine actually.. the last deployment will win14:09
niemeyerfwereade: Which is a perfect valid scenario when development is being done14:09
niemeyerperfectly14:09
niemeyerCan't write today14:09
niemeyerfwereade: "local:" is _not_ about handling all non-store cases..14:10
niemeyerfwereade: We'll eventually have a "custom store" people will be able to deploy in-house14:10
fwereadeniemeyer: ok, a separate piece fell into place, part of my brain was conflating services and charms14:11
fwereadeniemeyer: I'm happy about that now14:12
niemeyerfwereade: Ah, phew, ok :-)14:12
fwereadeniemeyer: so... we trash hashes, then, and double-check that we'll explode if we try to overwrite a (coll, name, rev) in ZK14:14
niemeyerfwereade: Yeah, "explode" as in "error out nicely".. :-)14:14
fwereadeniemeyer: quote so ;)14:15
fwereadegaah, I can't write either :/14:15
fwereadeniemeyer: tyvm, very illuminating discussion14:16
niemeyerfwereade: It's been my pleasure.. have been learning as well14:16
fwereadeniemeyer: cheers :)14:16
niemeyerfwereade: Btw, the critical piece to review is whether we might overwrite the storage content or not14:17
niemeyerfwereade: We have some protection from zk that create(...) won't work if it already exists14:17
niemeyerfwereade: But we have none from the storage14:17
niemeyerfwereade: So if the logic is not as we think it is, it'll blindly overwrite and we'll figure later14:17
niemeyerfwereade: The hash protected us from that, even if not in an ideal way as you pointed out14:18
fwereadeniemeyer: yes indeed, I'll need to be careful but it's not insoluble14:18
niemeyerfwereade: I _think_ the original logic had "store + put in zk" for exactly that reason14:19
fwereadeniemeyer: btw, really quick lazy question: what would cause a zk charm node to be deleted?14:19
niemeyerfwereade: The ordering means that if an upload breaks mid-way, we still retry and overwrite14:19
niemeyerfwereade: Nothing, IIRC14:19
niemeyerfwereade: We debated a bit about garbage collecting it14:20
fwereadeniemeyer: ok, I thought I saw some logic to deal with that case, and was a bit surprised14:20
niemeyerfwereade: and we can do it at some point14:20
niemeyerfwereade: but I don't recall supporting it ATM14:20
fwereadeniemeyer: cool, I won't fret too much about that14:22
niemeyerMan.. empty review queue.. I'll run and do some addition server-side work on the store14:30
niemeyeradditional..14:30
* hazmat catches up on the backlog14:47
_mup_juju/go-store r14 committed by gustavo@niemeyer.net14:48
_mup_Bootstrapping store package.14:48
hazmatfwereade, niemeyer interesting about col/name/rev uniqueness.. one of the bugs/useability things for charm authors, is being able to do away with constant rev increments for iteration and just relying on hash14:49
niemeyerhazmat: morning!14:49
hazmatits something that bites pretty much every charm author14:49
fwereadehazmat: indeed, but niemeyer has convinced me that auto-incrementing on upgrade from local repos should solve that14:49
niemeyerhazmat: Yeah.. there are other ways to handle this without relying on hash, though.. read through :)14:50
* hazmat continues the backlog14:50
hazmatlong conversation indeed14:50
kim0m_3: howdy .. please ping me when you're up14:57
_mup_juju/go-store r15 committed by gustavo@niemeyer.net14:57
_mup_Imported the mgo test suite setup/teardown from personal project.14:57
hazmatniemeyer, so the conclusion is, for local repositories, always increment the version on deploy regardless of any change to the formula?15:06
niemeyerhazmat: Not quit15:06
niemeyere15:06
niemeyer<niemeyer> fwereade: Hmm.. I think we can specialize the behavior to upgrade15:07
niemeyer<niemeyer> fwereade: and make deploy consistent for local/remote15:07
niemeyer<niemeyer> fwereade: In deploy, if there's a charm in the env, use it no matter what15:07
niemeyer<niemeyer> fwereade: Well, assuming no revision was provided, which is always true nowadays15:07
niemeyer<niemeyer> fwereade: In upgrade, if it is local, bump the revision to the the revision currently deployed (in the *env*) + 115:07
niemeyerhazmat: ^15:07
hazmathmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded)15:09
niemeyerhazmat: True15:09
hazmatthat's part of what bites people, lack of discovery into the problem till they go inspecting things15:09
niemeyer<hazmat> hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded)15:09
niemeyerfwereade: ^15:09
niemeyerLOL15:10
niemeyer<niemeyer> hazmat: True15:10
niemeyer<niemeyer> <hazmat> hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded)15:10
niemeyer<hazmat> that's part of what bites people, lack of discovery into the problem till they go inspecting things15:10
niemeyerfwereade: ^^^15:10
fwereadeniemeyer, hazmat: sounds sensible15:10
hazmatauto increment on upgrade sounds good15:11
hazmatthe upgrade implementation is pretty strict on newer versions, which is why i punted on a hash based approach, it was hard to maintain that notion15:12
niemeyerhazmat: Agreed.  The hash stuff sounds interesting to detect coincidences for sure, but the detail is that it won't really solve the problems we have.. we need to consider larger versions anyway, and need to be able to update the previous deployment15:14
niemeyer... without manual interaction15:14
niemeyerSo for now it feels like the auto-increment upgrade is enough15:14
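
[Editor's note: a rough sketch of the deploy/upgrade behaviour agreed above, for illustration only -- deploy reuses whatever is already in the environment when no revision was requested and refuses to overwrite an existing (collection, name, revision); upgrade of a local charm bumps the revision to the latest one in the environment plus one. All names here (env.find_charm, env.publish, etc.) are hypothetical, not juju's real API.]

    class CharmAlreadyExists(Exception):
        pass

    def deploy(env, charm, revision=None):
        # If the charm is already in the environment and no explicit revision
        # was requested, use the deployed one no matter what is on disk.
        existing = env.find_charm(charm.name, revision)
        if existing is not None and revision is None:
            return existing
        if env.find_charm(charm.name, charm.revision) is not None:
            # Uniqueness "at the door": never replace a published (coll, name, rev).
            raise CharmAlreadyExists(charm.name, charm.revision)
        return env.publish(charm)

    def upgrade_local(env, charm):
        # Bump to the revision currently deployed *in the env* + 1, ignoring
        # whatever revision number the local copy happens to carry.
        charm.revision = env.latest_revision(charm.name) + 1
        return env.publish(charm)
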
niemeyerfwereade: When do you think the new CharmURL & CharmCollection abstractions will be available?15:16
niemeyerfwereade: Just want to sync up because I'd like to have a look at them before mimicking in Go15:16
niemeyerfwereade: So we match logic15:16
fwereadeniemeyer: hopefully EOmyD, but I'm not quite sure when that will be15:16
niemeyerfwereade: Cool, thks15:17
fwereadeniemeyer: certainly before strot of your day tomorrow though15:17
niemeyerfwereade: Ok15:17
fwereadegaah *start* of your day15:17
niemeyerfwereade: Are you planning on doing any modifications to the suggested API?15:17
fwereadeI think I'm happy with everything you proposed15:18
niemeyerfwereade: Awesome, I'll get started on it then15:18
fwereadeniemeyer: I'll let you know ASAP if I come up with any problems15:19
niemeyerfwereade: Superb, cheers15:19
niemeyerfwereade: Will do the same on my end15:19
m_3kim0: hey man... what's up?15:37
_mup_juju/config-juju-origin r358 committed by jim.baker@canonical.com15:42
_mup_Merged trunk15:42
hazmatfwereade, do you know if the orchestra machines generally have useable fqdns?15:47
fwereadehazmat: better check with roaksoax, but I don't think you can guarantee it15:48
fwereadehazmat: context?15:48
hazmatfwereade, niemeyer, re the delta with local/lxc vs orchestra on address retrieval.. with local the fqdn isn't resolvable, but the ip address is routable and there  is a known interface. with orchestra the number of nics on a machine isn't knowable, but i was hoping we could say fqdns are resolvable15:48
fwereadehazmat: IIRC the dns_name should work from other machines, but I don't think we have any guarantees about how it works from outside that network15:49
hazmatthis also per SpamapS comments on the original implementation that we should favor fqdn over ip address, and neatly sidesteps ipv4 vs ipv6 behind dns15:49
niemeyerhazmat: We can't guarantee it ATM15:49
niemeyerhazmat: Most of the tests I recall were done with IP addresses15:49
hazmatniemeyer, on the address branch its all just a popen... local with ip, ec2 and orchestra with fqdn hostnames15:50
niemeyerhazmat: The fully qualified domain will also not resolve the problem.. it may have multiple nics despite the existence of a fqdn15:50
hazmatniemeyer, multiple nics is fine if the fqdn is resolvable15:51
niemeyerhazmat: I believe it's not.. it'll resolve to an arbitrary ip address15:51
niemeyerhazmat: Which may not be the right one if a machine has multiple ips15:51
niemeyerhazmat: ec2 is a different case..15:51
niemeyerhazmat: We know what we're doing there15:51
hazmatniemeyer ? hostname -f  returns the fqdn of the host regardless of multiple nics15:52
SpamapSFor multiple NIC's, the FQDN should resolve to the NIC that you wish the host to be externally reachable on...15:52
hazmatwhich is what we do for orchestra15:52
niemeyerhazmat: hostname -f returns *a* name, that may be resolvable or not, and that may map to the right ip or not15:52
SpamapSI *can* see a situation where you have a management NIC, and a service NIC .. each needing different handling.15:53
hazmatSpamapS, we've got separation of public/private addresses for units, but getting those addresses on orchestra deployments is the question15:53
hazmatdoesn't seem like we can do that apriori15:54
SpamapSIndeed. DNS is the only reliable way, IMO, to handle something so loosely coupled.15:54
niemeyerhazmat: I suggest checking with smoser and RoAkSoAx then15:55
niemeyerhazmat: If they're happy, I'm happy :)15:55
koolhead11hi all15:57
niemeyerkoolhead11: Hey!15:58
koolhead11hello niemeyer15:58
rogniemeyer: one merge proposal sent your way: https://code.launchpad.net/~rogpeppe/gozk/update-server-interface/+merge/7700915:58
niemeyerrog: Woohay, cheers!15:58
koolhead11SpamapS: i got some idea how not to use dbconfig-common :)15:58
rogniemeyer: (ignore the first one, i did the merge the wrong way around)15:58
niemeyerrog: The first one?15:59
SpamapSI think IP's grokked from the network provider are usable... EC2 knows which one is externally available vs. internal, and the provider has full network control, so you can take that IP and use it confidently. Orchestra has no such guarantees, so the hostname that we gave to the DHCP server and that we built from its DNS settings is the only meaningful thing we can make use of.15:59
SpamapSkoolhead11: progress is good. :)15:59
koolhead11SpamapS: yeah. :D16:00
* koolhead11 bows to Daviey 16:01
SpamapSFor servers with multi-NIC, the only real thing we can do is use a cobbler pre-seed template that selects the most appropriate one. Making use of multiples for mgmt/service seems like something we'll have to do as a new feature.16:01
rogniemeyer: hold on, i think i mucked up. too many versions flying around.16:01
niemeyerrog: No worries16:01
roggozk/zk vs gozk vs gozk/zookeeper16:02
rogniemeyer: no, it's all good i think16:02
niemeyerrog: Coolio16:03
rogniemeyer: i just did a dud directory rename, but i don't think it affects what you'll see16:03
niemeyerRoAkSoAx: We were just talking about ips vs hostnames in the context of orchestra units16:04
niemeyerRoAkSoAx: hazmat has more details16:04
koolhead11hello robbiew RoAkSoAx16:04
niemeyerI'm going to step out for lunch and leave you guys with trouble!16:04
RoAkSoAxniemeyer: ok16:04
RoAkSoAxniemeyer: im on a sprint atm16:04
RoAkSoAxhazmat: ^^16:04
niemeyerRoAkSoAx: It's quick16:04
niemeyerRoAkSoAx: But important16:04
* niemeyer biab16:04
hazmatRoAkSoAx, just trying to determine if on an orchestra launched machine we can assume either a routable hostname (fqdn) or nic for recording an address to the machine16:06
hazmatie. if something like hostname -f is useable to reach the machine from another machine in the orchestra environment16:06
hazmati assume the orchestra server is just tracking mac addresses on the machine16:07
RoAkSoAxhazmat: hazmat yes the orchestra server is tracking the MAC address16:07
RoAkSoAxhazmat: we always have to track it16:07
RoAkSoAxhazmat: though, we were making sure hostnames were fqdn as a standard and that they were set correctly16:08
RoAkSoAxhazmat: via cloud-init16:08
RoAkSoAxsmoser: ^^16:08
RoAkSoAxhazmat: the idea is to use a DNS reacheable name for each machine that's fqdn16:09
hazmatRoAkSoAx, if thats the case that's perfect.. fqdn == hostname that is16:09
RoAkSoAxhazmat: yes that's what we have been trying to standardize over the last couple of weeks. Give me a few minutes till I get hold of a few people here16:11
RoAkSoAxhazmat: and discuss the approach16:11
SpamapShazmat: its fair to say that we should take a look at other strategies for addressing services and machines as we get deeper in to the hardware deployment story...16:11
SpamapShazmat: for this primary pass, making it work "a lot like the cloud" is the simplest approach.16:12
smoserfor what its worth, you really should not expect that 'hostname --fqdn' gives an addressable hostname16:13
SpamapSsmoser: we have no other reliable source of data about what this machine's name is.16:13
smoseri believe we've fixed it so that will be the case under orchestra, and in EC2 (and we're fixing that for single nic guests in nova).16:13
SpamapSThe fact that it wasn't happening was a bug.16:14
smoserno.16:14
smoserin those limited cases, that is the case.16:14
smoserbut 'hostname --fqdn' is just not reliable.16:14
smoserread the man page if you disagree.16:14
smoserit basically says not to use it16:14
smoserso i would really suggest against telling charms that the right way to do something is something that documents itself as the wrong way16:15
smoser:)16:15
smoseri dont have a solution16:15
SpamapSsmoser: Indeed, this is the first time I've actually read this.. I wonder how recently this changed. :-/16:15
SpamapSI don't know if I agree with the man page's reasoning or with the mechanics of --all-fqdns16:16
SpamapS"Don't use this because it can be changed" vs. "Rely on reverse DNS instead" ...16:16
smoserif you're depending on cloud-init (which you are for better or worse), we can put something in it , or an external command that would basically query the metadata provided by the cloud provider to give you this.16:16
smoseri would i guess suggest making a ensemble command "get-hostname" or something16:17
SpamapSsmoser: Its something we can control (since we control the initial boot of the machine) which ripples through and affects everything else on the machine.16:17
SpamapSI believe the plan is to have some sort of "unit info" command for charms to use.16:17
smoseryou do not control the initial boot of the machine.16:17
smoseryou do not control the dns.16:17
smoserso how could you possibly control resolution of a name to an IP?16:18
SpamapSsmoser: We do control what we've told the provisioner to do .. which is to name that box "X"16:18
smoserno you do not16:18
smosernot on ec216:18
SpamapScobbler does16:18
smoserright.16:18
smoserbut stay out of that16:19
smoserthat would mean that ensemble is acting as the cloud provider in some sense when it talks to cobbler16:19
smoserwhich is just yucky.16:19
SpamapSwe don't put the hostname in the metadata for the nocloud seed?16:19
smosernot any more16:19
smosercobbler does16:19
smoserensemble does not16:19
smoserwhich is much cleaner16:19
smosers/ensemble/juju/16:20
SpamapSCan we ask cobbler what it put there?16:20
smoseror s/cleaner/more ec2-or-nova-like/16:20
smoseryou *can*, but you should not.16:20
smoseroh16:20
SpamapSOk.. where then should we get the address for the machine?16:20
smoserwait16:20
smoseryes16:20
smoseryou can ask cobbler what it put there16:20
smosersorry16:20
SpamapScan and should I think16:20
smoseryes16:20
smoser:)16:20
smosersorry16:20
smoseri thought you were saying "Can we tell cobbler what to put there"16:21
SpamapSI'm not enthralled with hostname --fqdn. It is, however, the only common tool we have between all environments at the moment.16:21
smoserwell its easy enough to add a tool16:21
smoserthat lives on the nodes16:21
SpamapSI think it might actually be quite trivial to write a charm tool ... 'machine-info --hostname' which gives us the hostname the provider wants us to be contacted with.16:22
smoserthe other thing, i think might be reasonable to consider, if you're only interested in single-cloud systems, would be to have juju run a dns server.16:22
smoserSpamapS, right. that is what i'm suggesting is fairly easy.16:22
SpamapSToo tightly coupled to juju at that point16:22
smoserright16:23
SpamapSIf an environment can't provide reliable DNS then it should just give us network addresses when we ask for the hostname.16:23
smoseri agree with this.16:23
SpamapSI believe thats the direction the local provider has gone16:23
smoserwhy do you care about a hostname ?16:24
smoserjust curious16:24
smoserwould it not be superior to always be IP ?16:24
SpamapSdefinitely not16:24
smoser(assuming that the IP would not change)16:24
smoserwhy?16:24
SpamapSIP can vary from your perspective16:24
SpamapSa hostname provides the appropriate level of indirection16:25
smosersomewhat.16:25
smoserbut in all cases you are aware of so far, the IP address of the system is what you want.16:25
smoserie, in all of cobbler, nova, ec2, 'ifconfig eth0' returns an internally addressable IPv4 address.16:26
SpamapSIPv4 or IPv6? internal or external?16:26
smoseryou are interested in IPv4 internal16:26
SpamapSusually16:26
smoseryou're 100% only interested in internal if you're using hostname --fqdn16:26
smoserso that leaves you only ipv4 and ipv616:26
SpamapSI'm not saying we can't use IP's, I'm saying we need to talk about *hosts*16:26
smoserec2 has no ipv616:27
smoserso now you're down to nova (which i know you've not tested ipv6 of) and cobbler, which i highly doubt you have16:27
smosermachine-info --hostname16:27
SpamapSYou're getting all pragmatic on me.16:27
smoserjust return ipv4 internal ip address.16:27
smoserno16:28
hazmatso this is below the level of a charm16:28
SpamapSLike what, you want to ship something *now* ?16:28
smoseri dont understand the question16:28
hazmatjuju is going to prepopulate and store the address, we just need to know how to get it on an orchestra machine16:28
smoserno16:28
hazmati was hoping hostname -f would do.. seems like it won't16:28
smoserdo not do that hazmat16:28
smoserthat is broken16:28
smoserjuju should *NOT* prepopulate the address.16:28
smoserjuju is not orchestra16:29
smoserit can query, it does not set or own.16:29
hazmatsmoser, sorry wrong context.. juju was going to store the address from the provider for the charm16:29
SpamapSsmoser: I'm being a bit sarcastic. Yes, all currently known use cases are satisfied with IP's. But all of them also *should* have hostnames, and we shouldn't ignore the need for hostnames just because we can.16:29
hazmatsmoser, the question is how to get the address16:29
smoseri'm fine with wanting to have hostnames16:29
smoseryou can hide that cleanly behind a command16:29
smoserin which right now, you're assuming that command is 'hostname --fqdn'16:29
smoserwhich is documented as broken16:30
smoserso i'm suggesting adding another command16:30
smoserwhich does the same general thing, but works around a bug or two16:30
smoserand may, in some cases, return an ipv4 address.16:30
hazmatsmoser, that command is?16:30
smoser'machine-info --hostname'16:30
SpamapShazmat: we've talked about a "machine info" or "unit info" script before.16:31
SpamapSI think you want unit info, not machine info.16:31
smoserwhich you add as a level of abstraction into the node16:31
smoserfine16:31
hazmatSpamapS, that doesn't answer the question of how that command gets the info16:31
hazmatie. how do we implement machine-info's retrieval of the address16:31
smoserhazmat, right now, it does this: echo $(hostname --fqdn)16:31
smoserthat makes it 100% bug-for-bug compatible with what you have right now16:32
smoserbut is fixable in one location16:32
SpamapShazmat: it queries the provider (or, more likely, queries the info we cached in the zk topology when the machine/unit started)16:32
smoserSpamapS, is right.16:32
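
[Editor's note: a sketch of the thin "machine-info --hostname" indirection smoser suggests -- today it would simply report `hostname --fqdn`, but because charms and agents would call this one tool, the lookup can later be swapped for a query of provider metadata (or fall back to an address) without touching anything else. The command name, flag, and fallback behaviour are assumptions drawn from the discussion, not an existing juju tool.]

    #!/usr/bin/env python
    import socket
    import subprocess
    import sys

    def contact_address():
        # Current behaviour: trust hostname --fqdn ...
        fqdn = subprocess.check_output(["hostname", "--fqdn"]).strip().decode()
        try:
            socket.getaddrinfo(fqdn, None)   # ... but only if the name actually resolves
            return fqdn
        except socket.gaierror:
            # Assumed fallback for environments without working DNS: return an address.
            return socket.gethostbyname(socket.gethostname())

    if __name__ == "__main__":
        if "--hostname" in sys.argv[1:]:
            print(contact_address())
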
hazmatso for local and ec2 providers, we have known solutions, its the orchestra case that its not clear what we should od16:32
smoserin the orchestra provider 'hostname --fqdn' works16:32
smoserand i thought we had (or i think we should) assume that the machine's "hostname" in cobbler is fqdn internal address.16:33
smosers/assume/insist/16:33
smoserso ensemble can just query that from cobbler16:33
smoserafaik, the only place broken right now is in nova16:33
smoserdue to bug 85461416:34
hazmatsmoser, does cobbler have any notion of external/public addresses? or just hostnames for a given mac addr16:34
smoserwhich will be fixed16:34
_mup_Bug #854614: metadata service local-hostname is not fqdn <server-o-rs> <OpenStack Compute (nova):In Progress by smoser> <nova (Ubuntu):Confirmed> < https://launchpad.net/bugs/854614 >16:34
smoserRoAkSoAx would know more, but whatever it is, you assert that in some portion of the machine's metadata, a fqdn exists for the internal address.16:34
smoserand you use it16:34
smoseri dont have cobbler in front of me to dump machine data. but i think it is a reasonable assertion.16:35
SpamapSwow, --all-fqdns /win 2416:35
SpamapSdoh16:35
SpamapSso --all-fqdns is pretty new16:35
SpamapSAppeared just before 9.10 I think16:36
smoserits really all messed up.16:36
smoserand it doesn't help you16:36
smoseras it doesn't sort them in any order (how could it?)16:36
SpamapSyeah its not useful16:36
smoserso how can you rely on its output16:36
SpamapSproviders need to tell us how a machine they're responsible for is addressable16:37
smoserright.16:37
smoserand we just assert at the moment that cobbler stores that in (i think) 'hostname'16:37
SpamapSAnd then the external and internal IP's are both the result of querying DNS for that hostname.16:38
smoseri dont follow that.16:38
smoseri didn't know external ip was something that was being discussed.16:38
SpamapSJust thinking of analogs for ec2's metadata16:39
SpamapSIts needed16:39
SpamapSfor expose16:39
smoseri agree it would be needed...16:39
SpamapSFor orchestra, all the firewall stuff is noop'd though16:39
smoseri really have to look at nova to find a good place for this.16:39
smoserbut basically i think we just need to store it there and assert that it is configured sanely.16:39
SpamapSI believe there's a desire to mvoe that FW management to the agents managing ufw/iptables .. but for now, providers have to do it, and orchestra can't.16:40
SpamapSyes, hostname in cobbler is the canonical source of the machine's hostnanme16:40
SpamapSand Mavis Beacon is the canonical source of my bad typing16:40
smoseri think for our near term purposes cobbler no op is fine for firewall16:40
hazmatagreed16:40
hazmatSpamapS, so hostname -f is fine for cobbler for the private address.. and hopefully the public address?16:41
smoseralmost certainly not the public address.16:41
hazmatsmoser, its not clear what a public address means in orchestra.. its outside the purview of the provider16:41
smoserhazmat, well, sort of16:42
smoserclearly orchestra could have that data16:42
smoserand could provide it to you16:42
smoserbut i dont think we have a place where we assert it is stored now.16:42
SpamapSorchestra does not imply whether it has public/private networks.16:43
SpamapSIts really not all that interesting, just return hostname for anything wanting to address the machine.16:44
smosergood enough for me.16:45
smoserso i do suggest the layer of indirection over 'hostname --fqdn'16:45
SpamapSAnd I'll open up a bug for the desired charm tool16:45
SpamapSsmoser: agreed, will open up that bug now16:45
hazmatSpamapS, the common use for that is going away16:45
hazmatSpamapS, the relations will be prepopulated with the info16:46
hazmatalthough we still need a way to query it agreed16:46
hazmatat the unit level16:46
SpamapSRight, is there a bug for that then?16:46
SpamapSor will it be a reserved variable in relation-get ?16:47
hazmatSpamapS, not yet.. but the units-with-addresses branch does the work of storing it directly on the units (pub/private) address in provider specific manner16:47
hazmatSpamapS, just a prepopulated one16:47
SpamapSI like that16:47
hazmati just needed to verify that hostname --fqdn does something sane w/ orchestra16:48
hazmatand it seems like thats what we should use for now16:48
hazmatwhich is nice, since that's whats implemented for orchestra16:48
niemeyerWow.. long thread16:48
SpamapShazmat: since all the charms currently rely on it, its been made to work that way. But as we've discussed here, its not really robust as a long term solution.16:50
hazmatsmoser, RoAkSoAx does that mean that bug 846208  is fixed?16:50
_mup_Bug #846208: Provisioned nodes do not get a FQDN <juju:New> <orchestra (Ubuntu):New> < https://launchpad.net/bugs/846208 >16:50
hazmatwrt to orchestra16:50
hazmatSpamapS, agreed, but getting it out of the charms, goes a long way to giving us the flexibility to fix it16:51
SpamapSniemeyer: yeah, when you get Me, the tire kicker, and smoser, Mr. Meh, talking about something.. the threads tend to go back and forth with a lot of "NO, no, no NO No, no, ahh, yes."16:51
niemeyerSpamapS: That's a nice way to get something proper in place..16:51
smoseradam_g, probably knows about 846208 but i would have thought yes.16:52
SpamapSspeaking of long term and short term... I'm hoping to file the FFE tomorrow.. where are we at?16:52
hazmatSpamapS, this is probably the closest bug 78899216:52
_mup_Bug #788992: example formulas refer to providing the hostname in ensemble itself <juju:New> < https://launchpad.net/bugs/788992 >16:52
smoserat very least, i'm fairly sure that 'hostname -f' should do the right thing there now.16:53
hazmatsmoser, cool16:53
RoAkSoAxyeah that bug was fixed already ##846208 will verify now that im here16:55
_mup_Bug #846208: Provisioned nodes do not get a FQDN <juju:New> <orchestra (Ubuntu):New> < https://launchpad.net/bugs/846208 >16:55
hazmatSpamapS, we're very close on local dev.16:56
hazmatbcsaller, how's it going?16:56
bcsallerhazmat: I was just reading back the channel actually16:56
SpamapSAwesome16:56
bcsallerhazmat: have you tried the branch yet?16:57
hazmatbcsaller, not yet.. i'll do so now16:57
hazmatbcsaller, what's the url for the stats on apt-cacher-ng?17:02
bcsallerhttp://localhost:3142/acng-report.html17:03
SpamapShazmat: btw did those tests get fixed?17:04
hazmatSpamapS, which tests?17:04
SpamapShazmat: lxc tests IIRC17:04
SpamapSthe ones that were blatantly broken last week in trunk17:05
hazmatSpamapS, oh yeah.. the breakage, indeed they're fixed.. trunk is green17:05
SpamapScool17:06
SpamapSI've been doing regular uploads to my PPA with the distro packaging, which runs the test suite... that was blocking those from working. :p17:06
hazmatbcsaller, i'm seeing some oddities around namespace passing which is breaking lxc-ls, but the units are up and running17:06
bcsallerhazmat: I'll need details ;)17:06
hazmatbcsaller, i'll dig into it17:07
hazmatbcsaller, but it appears to be working17:07
bcsallerhazmat: in an older version it wasn't setting the ns to a qualified name and created images without a prefix, but that was fixed17:07
hazmatbcsaller, ah.. that looks like the problem17:08
hazmatsounds like17:08
bcsallerhazmat: you didn't pull?17:08
hazmatbcsaller, i probably need to remerge your branch17:08
bcsallersounds like17:08
hazmatbcsaller, i've been pulling your branch and looking over the diff, but i don't think i've remerged it into the rest of the pipeline17:08
bcsallerthen I'm surprised it worked. I expect the services in the container didn't actually start for you17:09
bcsallerhazmat: that 'conf' change was missing too I expect17:09
hazmatbcsaller, does the template machine get the namespace qualifier?17:11
hazmats/machine/container17:11
bcsallerno, there are some advantages and disadvantages there17:11
bcsallerI expect there will be debate around that point in the review17:12
bcsallerI guess it *should* though, I can think of many ways it can go wrong for people17:13
bcsallervs being a cost savings for the well behaved. It should also have things like series name in it I expect17:13
hazmatbcsaller, the question is can we get this stuff landed today for push to the repos tomorrow, is there anything i can help with?17:17
hazmati think all my branches are approved at this point, i've got one last minor to update the provider name, and prepopulate the relations with the unit address17:18
hazmatbcsaller, latest revno is 404 on omega?17:21
bcsallerIdk, can't find it17:21
bcsaller;)17:21
bcsalleryeah, thats it17:21
hazmatbcsaller, getting pty allocation errors, just had a kernel upgrade going to try a reboot17:30
hazmatunit agents aren't running17:30
hazmatconf file looks fine17:30
=== koolhead11 is now known as koolhead11|bot
_mup_juju/config-juju-origin r359 committed by jim.baker@canonical.com18:00
_mup_Add support for get_default_origin18:00
* rog is done for the day. see y'all.18:01
niemeyerrog: Cheers!18:01
hazmatbcsaller, the container unit agents never start, and i get pty allocation errors trying to login manually18:12
bcsallerhazmat: sounds like what you were having at the sprint18:12
bcsallerhazmat: what was the resolution to that?18:12
hazmatbcsaller, upgrading to oneiric18:12
hazmati don't think that works twice ;-)18:12
bcsallerdarn18:13
hazmatcurrently on lxc == 0.7.5-0ubuntu818:13
bcsallersame18:13
bcsallerhazmat:  the lxc-library tests do or don't trigger this issue for you?18:19
hazmatbcsaller, are you specing the origin somehow?18:31
hazmatbcsaller,  the lxc lib tests fail in omega for me18:31
niemeyerSpamapS: What is the set of valid charm names we're going to support?18:41
niemeyerSpamapS: foo(-bar)*?18:41
niemeyerOr, more properly "^[a-z]+([a-z0-9-]+[a-z])*$"18:41
niemeyerfwereade, bcsaller, hazmat, anyone: ^^^?18:42
SpamapSniemeyer: yes that looks exactly right18:42
SpamapSbasically the hostname spec. ;)18:42
SpamapSbut no capitals18:43
SpamapS+118:43
bcsallerniemeyer: looks fine to me, might need [-_]18:43
hazmatsounds good18:43
niemeyerbcsaller: It contains - already18:43
SpamapSno _'s18:43
SpamapSone visual separator is fine18:43
bcsallerahh 0-9-, ic18:43
hazmatbcsaller, do you have some delta in your omega branch that's not pushed?18:44
bcsallerhazmat: no18:44
hazmatbcsaller, i get test failures.. it looks like around juju package install18:44
bcsallerorigin should be ppa at this point, I think thats what it says in the code, I'll check again18:44
niemeyerfwereade: In case you are around, these will be useful:18:45
niemeyervar validUser = regexp.MustCompile("^[a-z0-9][a-zA-Z0-9+.-]+$")18:45
niemeyervar validSeries = regexp.MustCompile("^[a-z]+([a-z-]+[a-z])?$")18:45
niemeyervar validName = regexp.MustCompile("^[a-z]+([a-z0-9-]+[a-z])?$")18:45
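
[Editor's note: rough Python counterparts of the Go patterns above, so the two implementations can be compared side by side; the constant and function names are illustrative, not juju's actual code.]

    import re

    VALID_USER = re.compile(r"^[a-z0-9][a-zA-Z0-9+.-]+$")
    VALID_SERIES = re.compile(r"^[a-z]+([a-z-]+[a-z])?$")
    VALID_NAME = re.compile(r"^[a-z]+([a-z0-9-]+[a-z])?$")

    def valid_charm_name(name):
        return VALID_NAME.match(name) is not None

    # e.g. valid_charm_name("mysql-server") -> True, valid_charm_name("My_SQL") -> False
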
hazmatbcsaller, http://paste.ubuntu.com/697431/18:45
bcsallerhazmat: so either the origin isn't ppa, the networking isn't working or...18:48
hazmatbcsaller, the networking is working, at least packages are being installed18:48
bcsallerhazmat: and you said you can't ssh into the container? I'd try to run the juju-create script, it will be some /tmp/xxxxx-juju-create script in the container and follow the output18:49
hazmatbcsaller, also when the tests fail they leave an orphan container18:50
niemeyerjimbaker: any chance of getting env-origin landed today?19:17
jimbakerniemeyer, i'm working on the mocks for this. once done, it will be ready for review19:18
niemeyerjimbaker: Ugh..19:18
jimbakerniemeyer, so pretty close i would say19:18
niemeyerjimbaker: "working on the mocks" gives me bad feelings nowadays, for some reason19:18
jimbakerniemeyer, well as i understand i need to mock out apt-cache policy for the various cases19:19
niemeyerjimbaker: Not really.. that's a pretty side-effects free problem to solve19:20
jimbakerniemeyer, how we would test in the case of being on a distro vs one where it was installed from the ppa? or in the case of being installed from a branch?19:21
niemeyerjimbaker: origin, source = parse_juju_policy(data)19:23
jimbakerniemeyer, but we still need to run apt-cache policy in order to collect the necessary data. isn't this the role for the mock, to intercept this call with some variations of what it could return?19:24
niemeyerjimbaker: There's a single test needed for actually calling apt-cache, and that's also trivial to automate without mocking by putting an executable in the path.19:25
niemeyerjimbaker: I won't fight if you decide to mock this one19:25
niemeyerjimbaker: But mocking every single iteration of parse_juju_policy is making our lives more painful without a reason19:25
niemeyerjimbaker: It's a side-effects free process19:26
niemeyerjimbaker: and it's idempotent19:26
niemeyerjimbaker: If you need mocker for that I'll take the project page down! :-)19:26
jimbakerniemeyer, i will rewrite it according to what you have described, it's not a problem19:27
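
[Editor's note: a sketch of the side-effect-free split niemeyer is asking for -- gather the `apt-cache policy juju` text once, then hand it to a pure function that decides whether juju came from the distro archive, the PPA, or a branch; tests then feed the parser canned text instead of mocking the subprocess call. The heuristics and return values below are assumptions, and the get_default_origin name merely echoes jimbaker's branch, not its real implementation.]

    import subprocess

    def parse_juju_policy(data):
        """Return (origin, source) given the text output of `apt-cache policy juju`."""
        if not data.strip() or "Installed: (none)" in data:
            return "branch", None            # not installed as a package at all
        for line in data.splitlines():
            line = line.strip()
            if "ppa.launchpad.net" in line and "juju" in line:
                return "ppa", line
            if "archive.ubuntu.com" in line:
                return "distro", line
        return "distro", None

    def get_default_origin():
        # The only side-effecting part, kept apart from the parsing above.
        data = subprocess.check_output(["apt-cache", "policy", "juju"]).decode()
        return parse_juju_policy(data)
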
hazmatbcsaller, are you sure you dont have something in /var/cache/lxc that  makes it work for you?19:50
hazmatbcsaller, i just blew away my cache and its still failing on the tests19:50
bcsallerhazmat: I'll try to clean that out and check again19:51
bcsallertake a few minutes19:51
hazmatbcsaller, did it work?20:02
bcsallerbootstrap is still going, w/o cache.20:02
bcsallerso for me it hit the test timeout20:03
bcsallerbut I'm let it build the cache outside the test now20:04
hazmatbcsaller, you on dsl?20:05
hazmatit didn't hit the test timeout for me.. but still failed20:05
bcsallercable20:05
bcsallerthe unpacking phase took too long oddly20:05
bcsallerhazmat: I am seeing errors now, I'll look into it more20:10
hazmatbcsaller, cool, thanks20:10
hazmatbcsaller, as far as i can see ppa is selected across the board20:10
bcsallerlooked that way to me as well20:10
hazmatoh wait its wrong archive20:11
hazmathaha20:11
hazmati thought that got fixed in this branch, but you had it cached20:11
hazmatbcsaller, niemeyer pointed it out to me in a review20:11
hazmat bcsaller nevermind that looks sane for the ppa20:12
* hazmat grabs some lunch20:13
hazmater.. snack20:13
bcsalleryeah, I didn't know what you were talking about there :)20:13
bcsallerhazmat: pushed changes to both lxc-lib and omega, it was a missing dep that was cached for me :(20:33
* bcsaller looks for a brown paper bag20:33
hazmatbcsaller, cool, just glad its fixed20:34
* hazmat retries20:34
SpamapSHmm.. getting sporadic failures of one test..20:49
SpamapShttps://launchpadlibrarian.net/81106645/buildlog_ubuntu-oneiric-i386.ensemble_0.5%2Bbzr361-0ubuntu1~ppa1_FAILEDTOBUILD.txt.gz20:49
SpamapSjuju.agents.tests.test_unit.UnitAgentTest.test_agent_executes_config_changed_hook20:49
jringsHi, I have a problem trying to get juju to connect to EC2. I described it here http://ubuntuforums.org/showthread.php?t=1849913 but also with the new version today it is still the same. I can bootstrap, a new instance is created in EC2, but in juju status the connection is refused21:07
jringsCannot connect to machine i-48751428 (perhaps still initializing): could not connect before timeout after 2 retries 2011-09-26 14:03:34,431 ERROR Cannot connect to machine i-48751428 (perhaps still initializing): could not connect before timeout after 2 retries21:08
SpamapSjrings: hey, the key that juju uses by default is $HOME/.ssh/id_(rsa|dsa)21:10
jringsHow can I tell juju to use the .pem from EC2?21:11
SpamapSjrings: you don't need to21:12
SpamapSjrings: it installs your key in the instances21:12
jringsWell my key is in $HOME/.ssh21:12
jringsand the juju bootstrap works21:12
jringswhy can't juju status connect then?21:13
SpamapSbootstrap completes w/o ssh21:13
SpamapSit's possible your key didn't make it into the instance for some reason21:13
SpamapSjrings: can you pastebin ec2-get-console-output ?21:14
hazmatif it couldn't find a key during bootstrap it would raise an exception21:16
jringsIs that the same as the log for the instance in the EC2 webconsole?21:17
jringsIf so, here: http://pastebin.com/4c78GVC921:19
SpamapSjrings: heh, it takes a few minutes to get the full log .. so you might have to wait a bit longer.21:25
SpamapSOr maybe there's a limit to the size.. I've never checked21:25
SpamapS(that would suck if the limit was applied to the top.. and it wasn't updated like a ring buffer)21:25
hazmathmm.. this line 2011-09-25 10:24:11,882 ERROR SSH forwarding error: bind: Cannot assign requested address21:28
hazmatis interesting21:28
jringsthat's what I get from the juju status21:28
hazmatwe pick a random open port on localhost to setup a port forward over ssh21:29
SpamapSconflicting with desktop-couch ?21:30
hazmatit looks like that fails, although for it to fail persistently suggests something else is going on21:30
SpamapSwhich does the same thing21:30
SpamapSYeah true21:30
SpamapShazmat: does it definitely do 127.0.0.1 ?21:30
jringsYes I can see it trying different ports.21:30
SpamapSjrings: can you paste the output of 'ifconfig -a' ?21:30
jringsWait, I set up a single node hadoop locally and had to change something to localhost21:31
jringseth1      Link encap:Ethernet  HWaddr f0:4d:a2:5f:5c:09             UP BROADCAST MULTICAST  MTU:1500  Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0           collisions:0 txqueuelen:1000            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)           Interrupt:41 Base address:0xa000   lo        Link encap:Local Loopback             inet addr:127.0.0.1  Ma21:31
jringsugh21:31
jringswait21:31
jringsHere: http://pastebin.com/Vpp3hJPt21:31
SpamapShrm21:35
SpamapSjrings: ufw running?21:35
SpamapScan't imagine that would break it tho21:35
jringsJust did a ufw disable and tried again, same result21:36
jringsOh shit I got it21:39
jringsI had Rstudio installed21:39
jringsit had a server on 127.0.0.1:878721:39
jringsjust uninstalled it, juju status works21:39
jringsno wait21:40
jringsactually it doesn't21:40
jringsargh21:40
SpamapSthat doesn't make sense. :-/21:40
jringsweird21:40
jringsI got21:40
jrings2011-09-26 14:39:06,972 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="ec2-174-129-58-110.compute-1.amazonaws.com" remote_port="2181" local_port="58376". 2011-09-26 14:39:08,981:6112(0x7f2eadf27720):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3 2011-09-26 14:39:08,981:6112(0x7f2eadf27720):ZOO_INFO@log_env@662: Client environment:host.name=vavatch 2011-09-26 14:39:08,981:6112(0x7f221:40
SpamapSjrings: can you do 'strace -e trace=listen,bind,connect -f juju status' and paste that? (note that the command 'pastebinit' is really nice for this)21:40
jringsone time21:40
jringsand then the next juju status failed again21:41
hazmatSpamapS, it picks the open port from all interfaces but binds to it on localhost21:41
hazmatalthough i recently added an SO_REUSEADDR flag .. it should still be random each run21:41
SpamapShazmat: literally looks up 'localhost' or uses 127.0.0.1 ?21:41
hazmatit does a bind socket.bind("", 0)21:41
SpamapSwait, isn't it an ssh forward?21:42
hazmatSpamapS, ah.. yeah.. for the port forward it explicitly uses localhost21:42
hazmat'localhost'21:42
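(A rough sketch of the scheme hazmat is describing: bind to port 0 on all interfaces so the kernel picks a free port, then start an ssh forward whose local end is bound to localhost. The function names and subprocess call below are illustrative, not juju's actual code.)

    import socket
    import subprocess

    def pick_unused_port():
        # Bind to port 0 on all interfaces so the kernel assigns a free port.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", 0))
        port = s.getsockname()[1]
        s.close()
        return port

    def forward_port(remote_host, remote_port=2181, user="ubuntu"):
        # The local end of the forward is bound to "localhost"; if localhost
        # does not resolve to a local address, ssh fails with
        # "bind: Cannot assign requested address".
        local_port = pick_unused_port()
        cmd = ["ssh", "-N",
               "-L", "localhost:%d:localhost:%d" % (local_port, remote_port),
               "%s@%s" % (user, remote_host)]
        return local_port, subprocess.Popen(cmd)

(The "bind: Cannot assign requested address" error jrings is seeing would come from that localhost bind step, so a localhost entry that no longer resolves to a local address, perhaps from the single-node hadoop setup change mentioned below, could plausibly explain it.)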
SpamapSjrings: pastebin 'ping -c 1 localhost'21:43
jringsHere is the strace http://pastebin.com/Q0CPnDBr21:45
jringsAnd the ping works http://pastebin.com/cwsep2NK21:46
_mup_juju/go-charm-url r14 committed by gustavo@niemeyer.net21:47
_mup_Implemented full-blown charm URL parsing and stringification.21:47
_mup_Bug #860082 was filed: Support for charm URLs is needed in Go <juju:In Progress by niemeyer> < https://launchpad.net/bugs/860082 >21:50
jringsconnection on port 49486 worked21:58
jringsis there a way to fix the port?21:59
SpamapSjrings: it should work on pretty much any port that's not already used21:59
niemeyerbcsaller, hazmat: How to build the base to review lxc-omega?22:00
SpamapSUgh22:01
SpamapStxaws.ec2.exception.EC2Error: Error Message: Not authorized for images: [ami-852fedec]22:01
SpamapSHave seen this before...22:01
SpamapSstale image.. doh22:01
jringsDoes this try to use IPv6?22:15
niemeyerbcsaller, hazmat: I'm pushing it back onto Work in Progress.. there are multiple bases and no mention of what they are in the summary22:16
niemeyerbcsaller: I've added an item about the file lock implementation there already22:16
hazmatniemeyer, it's lxc-library-clone->file-lock and local-provider-config22:24
_mup_juju/config-juju-origin r360 committed by jim.baker@canonical.com22:25
_mup_Unmocked tests in place22:25
_mup_juju/config-juju-origin r361 committed by jim.baker@canonical.com22:26
_mup_Added files to bzr22:26
niemeyerhazmat: file-lock is not even in the kanban22:32
niemeyerlxc-omega also changed since I last pulled it22:33
niemeyerI'm going to hold off a bit since this is getting a bit wild22:33
hazmatniemeyer, https://code.launchpad.net/~bcsaller/juju/filelock/+merge/7580622:34
hazmatthe change was a one-liner to address a missing package dep22:34
hazmatthat i found while trying it out22:34
niemeyerThat's fine, but things are indeed a bit wild.. missing branches in the kanban.. branch changing after being pushed, multiple pre-reqs that are not mentioned22:37
niemeyerThe file-lock branch should probably be dropped, unless I misunderstand what is going on there22:39
niemeyerIt's not really a mutex.. it'll explode if there are two processes attempting to get into the mutex region22:39
niemeyerThere's an implementation in Twisted already22:40
hazmatniemeyer, it's meant to error if another process tries to use it, but yeah the impl in twisted is probably a better option22:40
niemeyerhazmat: It feels pretty bad.. telling the user "Can't open file" with a traceback wouldn't be nice22:41
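(The Twisted implementation referred to is presumably twisted.python.lockfile.FilesystemLock; a minimal sketch of wrapping it so contention is reported cleanly instead of surfacing as a raw traceback. The helper name, lock path, and error message are illustrative, not juju's actual code.)

    from twisted.python.lockfile import FilesystemLock

    def with_lock(lock_path, callback):
        # Acquire the lock, run callback, and always release it again.
        lock = FilesystemLock(lock_path)
        if not lock.lock():
            # Held by another process: report it instead of letting a bare
            # "Can't open file" traceback reach the user.
            raise RuntimeError("another juju process holds %s" % lock_path)
        try:
            return callback()
        finally:
            lock.unlock()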
_mup_juju/config-juju-origin r362 committed by jim.baker@canonical.com22:46
_mup_PEP8, docstrings22:46
hazmatbcsaller, the lxc.lib tests pass but there are still some errors getting units to run23:13
bcsallerhazmat: what are you seeing?23:14
hazmatbcsaller, one quickie: the #! header on the juju-create script is wrong, missing "/" before bin/bash23:14
hazmatbcsaller, it looks like add-apt-repo still isn't installed on the container.. perhaps i had a leftover machine-0-template ..23:15
hazmatbecause juju isn't installed, which i assume causes the problem23:15
bcsallerI suspect thats the case23:16
hazmatbecause it's missing a prefix it's not getting killed, i assume that has to be done by hand; it's going to cause problems as well if someone wants to use the series option23:17
hazmathmm we should pass origin down from the provider to the machine agent23:19
hazmathmm.. the clone interface makes it rather hard to put in console and container logs23:27
hazmati guess just stuff the attrs back on23:27
_mup_juju/lxc-omega-merge r398 committed by kapil.thangavelu@canonical.com23:31
_mup_enable container logs, and trivial juju script header fix23:31
_mup_juju/config-juju-origin r363 committed by jim.baker@canonical.com23:41
_mup_Setup origin policy for affected EC2, Orchestra provider tests23:41
_mup_juju/env-origin r360 committed by jim.baker@canonical.com23:56
_mup_Reversed to r35723:56
