hazmat | nice, fixed debug-hooks to work early in the agent lifecycle | 00:14 |
hazmat | er. charm | 00:14 |
niemeyer | hazmat: Ohhh.. that's sweet | 00:17 |
niemeyer | hazmat: unix modes in zip is in upstream, btw.. we'll just need tip for the moment | 00:18 |
hazmat | niemeyer, cool, i've got a tip build i can utilize | 00:18 |
hazmat | niemeyer, it's going to be a little while till the next release? | 00:18 |
niemeyer | hazmat: I've upgraded the PPA, but failed to put the series in the version number | 00:18 |
hazmat | since they just released | 00:18 |
niemeyer | hazmat: There's a bug in the build procedure with the colon in the path that I'll have to check out when I get a moment | 00:18 |
niemeyer | hazmat: Yeah, but it shouldn't be a big deal for us for the server side | 00:19 |
niemeyer | hazmat: We can deploy with tip | 00:19 |
niemeyer | hazmat: Well.. and that'll be in the weekly in a couple of days | 00:19 |
hazmat | niemeyer, cool | 00:19 |
_mup_ | juju/status-with-unit-address r403 committed by kapil.thangavelu@canonical.com | 01:30 |
_mup_ | debug hooks works with unit address; also addresses a deficiency around debugging the early unit lifecycle by allowing the debug-hooks command to wait for the unit to be running | 01:30 |
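A minimal sketch of the wait-for-running behaviour that commit describes; the client object and its `get_unit_state`/`get_unit_address` methods are assumptions, not juju's actual API:

```python
import time

def wait_for_unit(client, unit_name, timeout=300, interval=5):
    """Poll until the unit reports a running state, then return the
    address debug-hooks should ssh to.  Sketch only: `client` and its
    methods are assumed names, not juju's real interface."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if client.get_unit_state(unit_name) == "running":
            return client.get_unit_address(unit_name)
        time.sleep(interval)
    raise RuntimeError("unit %s never reached a running state" % unit_name)
```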
hazmat | niemeyer, i think i'm going to go ahead and try to address the placement stuff after the lxc merge | 01:31 |
niemeyer | hazmat: Sounds ok.. but I also think it's going to be surprisingly trivial to handle it as suggested | 01:31 |
niemeyer | hazmat: It's indeed fine either way, though | 01:32 |
hazmat | niemeyer, i know.. i'm just fading, and want to get merges in.. this stuff needs to go upstream... maybe i should hold off till i get a full night's rest | 01:32 |
hazmat | anyways.. last of the branches is ready for review ( cli with unit address) | 01:32 |
niemeyer | hazmat: Yeah, get some rest | 01:33 |
niemeyer | hazmat: I'll probably do the same to get up early tomorrow | 01:33 |
hazmat | niemeyer, there are places where i think placement on the cli is useful, and placement is a global provider option.. | 01:33 |
hazmat | some of the discussion from earlier w/ bcsaller.. | 01:34 |
hazmat | the cross-az stuff in particular is of interest to me | 01:34 |
niemeyer | hazmat: I think it's overthinking the problem a bit | 01:34 |
niemeyer | hazmat: This is well beyond what we need for the feature at hand | 01:34 |
niemeyer | hazmat: I'd rather keep it simple and clean until practice shows the need for the cli | 01:35 |
hazmat | i'm concerned that placement is going to get out of hand on responsibilities on the one hand, and on the other i see it as being very convenient for implementing features like deploying this unit in a different az | 01:35 |
niemeyer | hazmat: I feel uncertain about that | 01:36 |
hazmat | i see cross-az as something required for production on ec2.. i'm not sure where else we can put this sort of decision | 01:36 |
niemeyer | hazmat: We're implementing one feature, and imagining something else without carefully taking into account the side effects | 01:36 |
hazmat | fair enough | 01:37 |
niemeyer | hazmat: It's not required really.. | 01:37 |
niemeyer | hazmat: cross az can be done with a single cluster | 01:37 |
hazmat | niemeyer, sure it can.. but how do we place such that it is | 01:37 |
niemeyer | hazmat: Yeah, good question.. I don't think the placement branch answers it | 01:37 |
niemeyer | hazmat: So I'd rather keep it in a way we're comfortable rather than creeping up without properly figuring what we're targeting at | 01:38 |
hazmat | it doesn't, but cli placement is an easy facility for it.. i agree there are ramifications there given provider validation that bear more thought, but it works pretty simply afaics | 01:38 |
niemeyer | hazmat: I'm not sure, and given that it really won't work either way right now, I'd rather not do it for now. | 01:39 |
niemeyer | hazmat: If nothing else, we're offering a visible interface to something that makes no sense to the user, with some intermangling in the implementation that we're not totally comfortable with. | 01:40 |
niemeyer | hazmat: Feels like a perfect situation to raise KISS and YAGNI | 01:40 |
* hazmat ponders | 01:42 |
hazmat | i'll sleep on it... i still think cross-az stuff is very important.. and that this is probably the simplest way to offer it to users. | 01:43 |
hazmat | but perhaps it's a red herring... much else to do for internal failure scenario recovery | 01:44 |
hazmat | reconnects, restarts, etc | 01:44 |
niemeyer | hazmat: That's not even the point.. no matter if it's the implementation we want or not, it doesn't work today, and won't work for quite a while. | 01:45 |
hazmat | niemeyer, i could implement this cross-az via cli placement in a day i think. | 01:45 |
niemeyer | hazmat: I'd rather not have this stuff creeping up in the code base until we figure it out. | 01:45 |
hazmat | tomorrow even | 01:45 |
niemeyer | hazmat: Heh | 01:45 |
hazmat | ;-) | 01:46 |
niemeyer | hazmat: I suggest we KISS and you suggest doing even more.. get some sleep. :) | 01:46 |
hazmat | indeed | 01:46 |
_mup_ | Bug #859308 was filed: Juju commands (ssh/status/debug-hooks) should work with unit addresses. <juju:In Progress by hazmat> < https://launchpad.net/bugs/859308 > | 01:52 |
niemeyer | Hello! | 10:48 |
rog | niemeyer: hiya! | 10:49 |
niemeyer | rog: Hey! | 10:50 |
rog | niemeyer: what's the best way for me to update to your merged version? | 10:56 |
rog | (of gozk) | 10:56 |
rog | is it now in a new repository? | 10:57 |
niemeyer | rog: It's a new branch.. just branch from lp:gozk/zk | 10:57 |
niemeyer | rog: Which is an alias for lp:~juju/gozk/zk | 10:57 |
niemeyer | rog: In the future it'll go back to being lp:~juju/gozk/trunk, once we kill launchpad.net/gozk | 10:58 |
niemeyer | I mean, kill as in not support this import path | 10:58 |
rog | ok | 10:58 |
__lucio__ | hi! is there a way to compose to formulas so i can say, for example, deploy a database server + a monitoring agent to this node? | 11:16 |
* rog finds lots of documentation bugs. oops. | 11:22 |
niemeyer | __lucio__: Absolutely | 11:27 |
__lucio__ | niemeyer, how? (hello!) | 11:27 |
niemeyer | __lucio__: Hey! :) | 11:27 |
niemeyer | __lucio__: Charms (previously known as formulas) interconnect via relations that follow a loose protocol | 11:28 |
niemeyer | __lucio__: We give a name to the interface between them so that we can distinguish the protocols | 11:29 |
niemeyer | __lucio__: So, you can define in one of the formulas that it requires (consumes) a given relation interface, and in the other side that it provides (serves) the given relation interface | 11:29 |
niemeyer | __lucio__: This way both sides can be interconnected at runtime | 11:30 |
niemeyer | __lucio__: Using the "juju add-relation" command | 11:30 |
niemeyer | __lucio__: The charms will be notified when such a relation is established via the hooks | 11:30 |
niemeyer | rog: Hm? | 11:31 |
niemeyer | __lucio__: Does that make sense? :) | 11:31 |
__lucio__ | niemeyer, not exactly what i mean. imagine i get the mysql charm and want to deploy it. get machine 1 with mysql. then i want to deploy some agent to monitor the system stats there. i want to create a new charm and say "deploy this charm to this machine that already exists" | 11:32 |
__lucio__ | is that the "placement policy"? | 11:32 |
niemeyer | __lucio__: Ah | 11:32 |
niemeyer | __lucio__: I see | 11:32 |
__lucio__ | the key part in here would be that those charms should know nothing of each other | 11:32 |
niemeyer | __lucio__: This will be supported in the coming future through what we're calling co-located charms | 11:33 |
niemeyer | __lucio__: In practice it'll be just a flag in the relation | 11:33 |
niemeyer | __lucio__: and juju will put the charms together based on that | 11:33 |
niemeyer | __lucio__: It's not implemented yet, though | 11:33 |
niemeyer | __lucio__: and it's not the placement policy | 11:33 |
niemeyer | hazmat: See? :) | 11:34 |
niemeyer | __lucio__: Yeah, exactly | 11:34 |
niemeyer | __lucio__: Re. knowing nothing about each other | 11:34 |
niemeyer | __lucio__: They will use exactly the same interface for communication that normal charms use | 11:34 |
__lucio__ | niemeyer, ack. nice to see you guys thought about it :) | 11:34 |
niemeyer | __lucio__: Despite them being in the same machine | 11:35 |
niemeyer | __lucio__: Yeah, there's a lot of very nice stuff to come.. just a matter of time | 11:35 |
fwereade | niemeyer: ping | 12:33 |
niemeyer | fwereade: Hey! | 12:34 |
fwereade | niemeyer: thanks for the review :) | 12:34 |
fwereade | niemeyer: how's it going? | 12:34 |
niemeyer | fwereade: np | 12:34 |
niemeyer | fwereade: Going on a roll! | 12:34 |
fwereade | niemeyer: sweet :D | 12:35 |
fwereade | niemeyer: I was wondering about charm id/url/collection/etc terminology | 12:35 |
niemeyer | fwereade: Ok | 12:35 |
fwereade | niemeyer: and wanted to know what your thoughts were re: the hash at the end | 12:35 |
fwereade | niemeyer: I see it as not really part of the *id* so much as just a useful bit of verification | 12:36 |
fwereade | niemeyer: but... well, it's an important bit of verification :) | 12:36 |
niemeyer | fwereade: Which hash? | 12:36 |
fwereade | niemeyer: lp:foo/bar-1:ry4xn987ytx984qty498tx984ww | 12:37 |
fwereade | when they're stored | 12:37 |
kim0 | Howdy folks .. did the LXC work land already? | 12:37 |
kim0 | seeing lots of cool comments | 12:37 |
niemeyer | fwereade: It must be there | 12:37 |
kim0 | commits I mean | 12:37 |
niemeyer | fwereade: For storage, specifically | 12:37 |
fwereade | (yes that was a keyboard-mash, not a hash, but close enough ;)) | 12:37 |
niemeyer | fwereade: The issue isn't verification, but uniqueness | 12:38 |
niemeyer | kim0: Heya! | 12:38 |
niemeyer | kim0: It's on the way | 12:38 |
fwereade | niemeyer: ...ha, good point, hadn't internalised the issues with revision uniqueness | 12:38 |
fwereade | niemeyer: except, wait, doesn't the collection-revision pair guarantee uniqueness? | 12:38 |
fwereade | niemeyer: I know revisions and names wouldn't be enough | 12:39 |
niemeyer | fwereade: Define "guarantee" | 12:39 |
niemeyer | fwereade: ;_) | 12:39 |
kim0 | cool, can't wait to tell the world about this .. It's such a nice feature | 12:40 |
niemeyer | fwereade: A hash is a reasonable "guarantee", even if it's not 100% certain. Trusting the user to provide a unique pair isn't very trustworthy. | 12:40 |
* kim0 compiles regular juju progress report .. shares with planet earth | 12:40 |
niemeyer | kim0: It is indeed! And we're almost there | 12:40 |
fwereade | niemeyer: ok, it feels like the bad assumption is that a collection + a name will uniquely identify a (monotonically increasing) sequence of revisions | 12:41 |
fwereade | niemeyer: confirm? | 12:41 |
niemeyer | fwereade: I'd say more generally that the tuple (collection, name, id) can't be proven unique | 12:42 |
niemeyer | fwereade: If we were the only ones in control of releasing them, we could make it so, but we're not | 12:43 |
fwereade | niemeyer: hm, indeed :) | 12:43 |
fwereade | niemeyer: ok, makes sense | 12:43 |
fwereade | niemeyer: in that case, I don't see where we'd ever want the revision without the hash | 12:43 |
niemeyer | fwereade: That seems a bit extreme | 12:44 |
kim0 | mm .. the juju list is not on https://lists.ubuntu.com/ | 12:44 |
niemeyer | fwereade: The revision number is informative | 12:45 |
niemeyer | fwereade: and in the store it will uniquely identify the content | 12:46 |
niemeyer | fwereade: FWIW, the same thing is true for packages | 12:46 |
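The uniqueness point can be made concrete: keying storage by content hash keeps two different uploads distinct even when they claim the same (collection, name, revision). A sketch whose helper names are illustrative, with the key shape following the `lp:foo/bar-1:<hash>` example above:

```python
import hashlib

def bundle_hash(bundle_path):
    # Content hash of the zipped charm bundle, read in chunks.
    digest = hashlib.sha256()
    with open(bundle_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def storage_key(collection, name, revision, bundle_path):
    # Two users uploading different content as (local, mysql, 1) still
    # land on different keys, because the hash suffix differs.
    return "%s:%s-%d:%s" % (collection, name, revision,
                            bundle_hash(bundle_path))
```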
rog | lunch | 12:48 |
fwereade | niemeyer: ok... but if we ever have reason to be concerned about uniqueness of coll+name+rev, in what circumstances *can* we assume that that alone is good enough to identify a charm? | 12:48 |
fwereade | niemeyer: (ok: we can if it came from the store, probably (assuming *we* don't screw anything up) but it doesn't seem sensible to special case that | 12:49 |
fwereade | niemeyer: ) | 12:50 |
niemeyer | fwereade: Pretty much in all cases we can assume it's unique within the code | 12:50 |
fwereade | niemeyer: if we want the bundles to be stored with keys including the hash, why would we eschew that requirement for the ZK node names? | 12:51 |
fwereade | niemeyer: um, "pretty much in all cases" == "not in all cases" :p | 12:51 |
niemeyer | fwereade: Sure, you've just found one case where we're concerned about clashes | 12:52 |
niemeyer | fwereade: Maybe we can change that logic, actually.. hmm | 12:52 |
niemeyer | fwereade: The real problem there is that it's very easy for the user to tweak a formula and ask to deploy it, and then deploy something else | 12:53 |
niemeyer | fwereade: The question is how to avoid that situation | 12:54 |
fwereade | niemeyer: sorry, are we talking about upgrades, or just normal deploys? | 12:54 |
niemeyer | fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results in these scenarios | 12:55 |
niemeyer | fwereade: Both | 12:55 |
fwereade | niemeyer: heh, I was more in favour of making the hash a required part of the ID throughout (at least internally) | 12:55 |
fwereade | niemeyer: my issue was that it *wasn't* included in the ZK node name at the moment | 12:56 |
fwereade | niemeyer: that seemed like a problem :) | 12:56 |
niemeyer | fwereade: That will mean we'll get two things deployed with the same name-id | 12:56 |
niemeyer | fwereade: Not a situation I want to be around for debugging ;-) | 12:56 |
niemeyer | fwereade: Hmm! | 12:57 |
niemeyer | fwereade: What about revisioning local formulas automatically based on the latest stored version in the env? | 12:57 |
niemeyer | fwereade: Effectively bumping it | 12:58 |
niemeyer | fwereade: The spec actually already suggests that, despite that problem | 12:58 |
niemeyer | fwereade: This way we can remove the hash.. but we must never overwrite a previously existing charm | 12:58 |
fwereade | niemeyer: I'm confused | 12:59 |
niemeyer | fwereade: Ok, let me unconfuse you then | 12:59 |
niemeyer | fwereade: What's the worst point in the above explanation? :) | 12:59 |
fwereade | niemeyer: can we agree that (1) a (coll, name, rev) doesn't necessarily uniquely identify a charm | 13:00 |
fwereade | (2) therefore, we need something else to guarantee uniqueness | 13:01 |
niemeyer | fwereade: My suggestion is to guarantee uniqueness "at the door" | 13:01 |
niemeyer | fwereade: We never replace a previous (coll, name, rev) | 13:01 |
niemeyer | fwereade: If we detect the user is trying to do that, we error out | 13:02 |
niemeyer | fwereade: To facilitate development, though, we must give people a way to quickly iterate over versions of a charm | 13:02 |
niemeyer | fwereade: Which means we need to bump charm revisions in the local case based on what was previously deployed | 13:03 |
niemeyer | fwereade: Makes sense? | 13:03 |
fwereade | niemeyer: I think so | 13:04 |
* fwereade thinks... | 13:04 | |
niemeyer | fwereade: This way we can remove the hash | 13:04 |
niemeyer | fwereade: But you'll have to review logic around that a bit so that we're sure we're not naively replacing a previous version | 13:04 |
niemeyer | fwereade: It shouldn't be hard, IIRC | 13:05 |
fwereade | niemeyer: I don't remember it being exceptionally complex | 13:05 |
niemeyer | fwereade: Because we consciously store the charm in zk after uploading | 13:05 |
niemeyer | fwereade: So if the charm is in zk, it must be in the storage | 13:05 |
fwereade | niemeyer: I have a vague feeling it'll already complain if we try to overwrite a charm in zk | 13:05 |
niemeyer | fwereade: and thus we shouldn't replace | 13:05 |
niemeyer | fwereade: I think upgrade is a bit more naive | 13:06 |
niemeyer | fwereade: But I'm not sure | 13:06 |
niemeyer | fwereade: Perhaps my memory is failing me | 13:06 |
fwereade | niemeyer: I know less about the code than you might think, I was working most of last week with about 3 mental registers operating properly :/ | 13:06 |
fwereade | niemeyer: CharmStateManager calls client.create with a hashless ID, so that should explode reliably already | 13:08 |
niemeyer | fwereade: Not sure really.. but please review it.. it'll be time well spent | 13:09 |
niemeyer | fwereade: Then, we'll need to implement the revision bumping that is in the spec | 13:09 |
niemeyer | fwereade: For the local case, that is | 13:09 |
fwereade | niemeyer: there was talk a little while ago about allowing people to just ignore revisions locally | 13:10 |
fwereade | niemeyer: which seems to me to be quite nice for charm authors | 13:10 |
niemeyer | fwereade: Exactly.. that's a way to do exactly that | 13:11 |
niemeyer | fwereade: The user will be able to ignore it, because we'll be sorting out automatically | 13:12 |
niemeyer | fwereade: Please see details in the spec | 13:12 |
niemeyer | Will get a bite.. biab | 13:12 |
fwereade | niemeyer: by overwriting the revision file in the local repo? (the spec seems to me to be talking about how the formula store should work, not local repos) | 13:12 |
=== med_out is now known as medberry
niemeyer | fwereade: CTRL-F for "local formula" within "Formula revisions" | 13:28 |
niemeyer | fwereade: Sorry.. | 13:28 |
niemeyer | fwereade: CTRL-F for "local deployment" within "Formula revisions" | 13:28 |
fwereade | niemeyer: hm, I see it now, sorry | 13:29 |
niemeyer | fwereade: np | 13:29 |
fwereade | niemeyer: for some reason I'm not very happy with us writing into a local repo though | 13:29 |
niemeyer | fwereade: That's why the revision is being taken out of the metadata | 13:30 |
fwereade | niemeyer: ...and it seems to say we should bump on every deploy, which feels rather aggressive | 13:30 |
fwereade | niemeyer: just a suggestion: if the revision and the hash don't match, we blow up as expected | 13:31 |
niemeyer | fwereade: You have the context for why this is being done now.. I'm happy to take suggestions :) | 13:31 |
niemeyer | fwereade: The hash of what? | 13:31 |
niemeyer | fwereade: Directories have no hash | 13:31 |
fwereade | niemeyer: don't they? | 13:31 |
fwereade | niemeyer: ok, it's the hash of the bundle | 13:31 |
fwereade | but they do have the appropriate method | 13:32 |
niemeyer | fwereade: Yeah. it's a hack really | 13:32 |
niemeyer | fwereade: Plus, not updating means we'll force users to bump it manually | 13:32 |
niemeyer | fwereade: Effectively doing the "rather aggressive" part manually, which sucks | 13:32 |
fwereade | niemeyer: what if we treat a revision file that exists as important -- so if you change a revisioned formula but don't change the rev, you blow up -- but allow people to just delete the revision file locally, in which case we identify purely by hash and treat the hash of the current local version as "newer" than any other hashes that might be around | 13:34 |
niemeyer | fwereade: I don't get what's the problem you're solving with that behavior | 13:34 |
fwereade | niemeyer: the failure-to-upgrade-without-manually-tweaking-revision | 13:35 |
niemeyer | fwereade: The solution in the spec solves that without using hashes | 13:35 |
niemeyer | fwereade: Why is your suggestion better? | 13:36 |
fwereade | niemeyer: but at the cost of repeatedly uploading the same formula every time it's deployed whether or not it's required | 13:36 |
niemeyer | fwereade: Hmm | 13:36 |
fwereade | niemeyer: I'm also a bit suspicious of requiring write access to local repos, just to deploy from them | 13:37 |
fwereade | niemeyer: feels icky ;) | 13:37 |
niemeyer | fwereade: That's trivial to solve.. but let's see, your earlier point is a good one | 13:38 |
niemeyer | fwereade: Hmm.. I think we can specialize the behavior to upgrade | 13:43 |
niemeyer | fwereade: and make deploy consistent for local/remote | 13:44 |
niemeyer | fwereade: In deploy, if there's a charm in the env, use it no matter what | 13:44 |
niemeyer | fwereade: Well, assuming no revision was provided, which is always true nowadays | 13:44 |
niemeyer | fwereade: In upgrade, if it is local, bump the revision to the revision currently deployed (in the *env*) + 1 | 13:45 |
fwereade | niemeyer: so we *might* still needlessly upload, but less frequently... not entirely unreasonable, I guess :p | 13:46 |
niemeyer | fwereade: Sure, which gets us back to the original issue.. we need a method that: | 13:47 |
niemeyer | 1) Does not needlessly bump the revision | 13:47 |
niemeyer | 2) Does not require people to bump the revision manually | 13:47 |
niemeyer | That's one solution | 13:47 |
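A sketch of that scheme: deploy reuses whatever is already in the environment, while a local upgrade bumps to the environment's revision plus one and never overwrites. The `env` methods here are assumptions, not juju's actual API:

```python
def deploy(env, charm):
    """If the charm is already in the environment and no explicit
    revision was requested, use it no matter what."""
    existing = env.find_charm(charm.collection, charm.name)
    return existing if existing is not None else env.upload(charm)

def upgrade_local(env, charm):
    """For a local charm, bump the revision to the revision currently
    deployed in the *environment* plus one; uploads never replace an
    existing (collection, name, revision)."""
    deployed = env.find_charm(charm.collection, charm.name)
    charm.revision = (deployed.revision + 1) if deployed else 1
    return env.upload(charm)  # errors out if the key already exists
```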
niemeyer | fwereade: I don't want to get into the business of comparing the hash of an open directory with a file in the env | 13:48 |
niemeyer | fwereade: At least not right now.. to solve the problem we'd need to create a unique way to hash the content that doesn't vary with different bundlings | 13:48 |
fwereade | niemeyer: hm, I wasn't aware we had different bundlings to deal with..? | 13:49 |
niemeyer | fwereade: Well.. | 13:49 |
niemeyer | fwereade: There's a file in the env.. there's an open directory in the disk | 13:49 |
niemeyer | fwereade: How do we compare the two? | 13:49 |
fwereade | niemeyer: well, at the moment, we zip up the dir and hash the zipfile; I understand you think that's a hack, but I don't understand how it makes the situation any worse | 13:51 |
niemeyer | <fwereade> niemeyer: hm, I wasn't aware we had different bundlings to deal with..? | 13:51 |
niemeyer | fwereade: So you do understand we have different bundlings to deal with | 13:51 |
fwereade | niemeyer: we have different representations of charms, but the hashing is the same | 13:51 |
niemeyer | fwereade: Why is it the same? | 13:51 |
niemeyer | fwereade: Where's the zipping algorithm described that guarantees that zipping the same directory twice necessarily produces the same hash? | 13:52 |
fwereade | niemeyer: because we convert dirs to bundles to hash them? | 13:52 |
fwereade | niemeyer: and we *also* convert dirs to bundles to deploy them | 13:52 |
niemeyer | fwereade: Where's the zipping algorithm described that guarantees that zipping the same directory twice necessarily produces the same hash? | 13:52 |
fwereade | niemeyer: ah-ha | 13:52 |
rog | zip files hold modification times... | 13:53 |
fwereade | niemeyer, rog: hmm. | 13:54 |
rog | niemeyer: i did something like this before | 13:54 |
niemeyer | rog: modification times can be preserved.. but there are other aspects like ordering that are entirely unspecified | 13:54 |
rog | yup | 13:54 |
rog | niemeyer: my fs file traversal thing (which later became alphabet) solved this by always archiving in canonical order | 13:55 |
niemeyer | So, there are two choices: either we define a directory/content hashing algorithm, or we don't take the content into account | 13:55 |
rog | and i added a filter for canonicalising metadata we don't care about (e.g. mtime, atime) | 13:56 |
rog | oh yes, permissions were a problem too. | 13:56 |
rog | it worked very well in the end though | 13:56 |
niemeyer | rog: Sure, I'm not saying it's not possible.. I'm just saying that it requires diving into the problem more than "hash the zip files" | 13:56 |
rog | sure | 13:56 |
rog | zip files aren't canonical | 13:57 |
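rog's canonical-order idea can be sketched without zip at all: walk the tree in a fixed order and hash relative paths plus file bytes, so modification times, permissions and archive ordering never enter the digest. A sketch, not juju's implementation:

```python
import hashlib
import os

def tree_hash(root):
    """Deterministic content hash of a directory: files are visited in
    sorted order and only their relative paths and bytes are hashed, so
    zipping quirks (mtimes, entry order) can't change the result."""
    digest = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # pin the traversal order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            digest.update(rel.encode("utf-8") + b"\x00")
            with open(path, "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()
```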
fwereade | niemeyer: as a side note: what's the trivial solution to my discomfort with requiring write access to local repos? | 13:58 |
niemeyer | fwereade: bundle the revision dynamically | 13:58 |
fwereade | niemeyer: so we'd have local repos with different revs to the deployed versions? that feels like a pain to debug, too | 13:59 |
niemeyer | fwereade: That may be the case either way, and there's absolutely nothing we can do to prevent it | 13:59 |
niemeyer | fwereade: The proof being that the local version is user modifiable | 14:00 |
niemeyer | fwereade: Either way, the normal case is writing the revision.. so let's not worry about the read-only case for now | 14:01 |
fwereade | niemeyer: ok then :) | 14:02 |
niemeyer | fwereade: local: is really targeting development.. | 14:02 |
fwereade | niemeyer: true | 14:03 |
niemeyer | fwereade: Again, please note that the local revision bumping must take the revision from the env + 1, rather than taking the local revision number into consideration | 14:03 |
niemeyer | fwereade: On upgrade, specifically.. | 14:03 |
niemeyer | fwereade: I believe we can handle the deploy case exactly the same for local/remote | 14:04 |
fwereade | niemeyer: understood, I just feel that "local newer than env" is easily comprehensible, while "env newer than local (from which it was deployed)" is a touch confusing | 14:04 |
fwereade | niemeyer: agree on deploy: just use the one already deployed if it exists | 14:04 |
fwereade | niemeyer: (I know I'm still talking about the magic non-writing case, I'll try to forget about that) | 14:05 |
niemeyer | fwereade: I don't understand the first comment in this series | 14:06 |
fwereade | niemeyer: sorry, I was still wittering on about the non-writing case, it's not relevant ATM | 14:06 |
niemeyer | fwereade: The local namespace is flat.. | 14:07 |
niemeyer | fwereade: Ponder for a second what happens if both of us start deploying the same "local" formula on the env | 14:07 |
niemeyer | fwereade: and what the revision numbers mean in that case | 14:08 |
fwereade | niemeyer: I've been having quiet nightmares about that, actually ;) | 14:08 |
niemeyer | fwereade: There's no nightmare there, if you acknowledge that local: is targeting development most importantly | 14:09 |
fwereade | niemeyer: I think the only sensible thing we can say is Don't Do That | 14:09 |
niemeyer | fwereade: It's fine actually.. the last deployment will win | 14:09 |
niemeyer | fwereade: Which is a perfectly valid scenario when development is being done | 14:09 |
niemeyer | Can't write today | 14:09 |
niemeyer | fwereade: "local:" is _not_ about handling all non-store cases.. | 14:10 |
niemeyer | fwereade: We'll eventually have a "custom store" people will be able to deploy in-house | 14:10 |
fwereade | niemeyer: ok, a separate piece fell into place, part of my brain was conflating services and charms | 14:11 |
fwereade | niemeyer: I'm happy about that now | 14:12 |
niemeyer | fwereade: Ah, phew, ok :-) | 14:12 |
fwereade | niemeyer: so... we trash hashes, then, and double-check that we'll explode if we try to overwrite a (coll, name, rev) in ZK | 14:14 |
niemeyer | fwereade: Yeah, "explode" as in "error out nicely".. :-) | 14:14 |
fwereade | niemeyer: quite so ;) | 14:15 |
fwereade | gaah, I can't write either :/ | 14:15 |
fwereade | niemeyer: tyvm, very illuminating discussion | 14:16 |
niemeyer | fwereade: It's been my pleasure.. have been learning as well | 14:16 |
fwereade | niemeyer: cheers :) | 14:16 |
niemeyer | fwereade: Btw, the critical piece to review is whether we might overwrite the storage content or not | 14:17 |
niemeyer | fwereade: We have some protection from zk that create(...) won't work if it already exists | 14:17 |
niemeyer | fwereade: But we have none from the storage | 14:17 |
niemeyer | fwereade: So if the logic is not as we think it is, it'll blindly overwrite and we'll figure later | 14:17 |
niemeyer | fwereade: The hash protected us from that, even if not in an ideal way as you pointed out | 14:18 |
fwereade | niemeyer: yes indeed, I'll need to be careful but it's not insoluble | 14:18 |
niemeyer | fwereade: I _think_ the original logic had "store + put in zk" for exactly that reason | 14:19 |
fwereade | niemeyer: btw, really quick lazy question: what would cause a zk charm node to be deleted? | 14:19 |
niemeyer | fwereade: The ordering means that if an upload breaks mid-way, we still retry and overwrite | 14:19 |
niemeyer | fwereade: Nothing, IIRC | 14:19 |
niemeyer | fwereade: We debated a bit about garbage collecting it | 14:20 |
fwereade | niemeyer: ok, I thought I saw some logic to deal with that case, and was a bit surprised | 14:20 |
niemeyer | fwereade: and we can do it at some point | 14:20 |
niemeyer | fwereade: but I don't recall supporting it ATM | 14:20 |
fwereade | niemeyer: cool, I won't fret too much about that | 14:22 |
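The ordering discussed above (store the bundle, then register it in zookeeper) can be sketched as follows; zk's create refuses to replace an existing node, while the storage put has no such guard, which is exactly the retry gap mentioned. The `storage` and `zk` interfaces are assumed:

```python
def publish_charm(storage, zk, key, bundle_bytes, node_path, node_data):
    """Store the bundle, then create its zk node.  zk.create raises if
    the node already exists, so a *completed* publish can't be replaced.
    But if a previous attempt died between the two steps, this retry
    will silently overwrite the storage object: the unprotected case
    discussed above."""
    storage.put(key, bundle_bytes)   # no exists-check at this layer
    zk.create(node_path, node_data)  # fails if the node is already there
```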
niemeyer | Man.. empty review queue.. I'll run and do some additional server-side work on the store | 14:30 |
* hazmat catches up on the backlog | 14:47 | |
_mup_ | juju/go-store r14 committed by gustavo@niemeyer.net | 14:48 |
_mup_ | Bootstrapping store package. | 14:48 |
hazmat | fwereade, niemeyer interesting about col/name/rev uniqueness.. one of the bugs/usability things for charm authors is being able to do away with constant rev increments for iteration and just relying on hash | 14:49 |
niemeyer | hazmat: morning! | 14:49 |
hazmat | its something that bites pretty much every charm author | 14:49 |
fwereade | hazmat: indeed, but niemeyer has convinced me that auto-incrementing on upgrade from local repos should solve that | 14:49 |
niemeyer | hazmat: Yeah.. there are other ways to handle this without relying on hash, though.. read through :) | 14:50 |
* hazmat continues the backlog | 14:50 | |
hazmat | long conversation indeed | 14:50 |
kim0 | m_3: howdy .. please ping me hwen you're up | 14:57 |
_mup_ | juju/go-store r15 committed by gustavo@niemeyer.net | 14:57 |
_mup_ | Imported the mgo test suite setup/teardown from personal project. | 14:57 |
hazmat | niemeyer, so the conclusion is, for local repositories, always increment the version on deploy regardless of any change to the formula? | 15:06 |
niemeyer | hazmat: Not quite | 15:06 |
niemeyer | <niemeyer> fwereade: Hmm.. I think we can specialize the behavior to upgrade | 15:07 |
niemeyer | <niemeyer> fwereade: and make deploy consistent for local/remote | 15:07 |
niemeyer | <niemeyer> fwereade: In deploy, if there's a charm in the env, use it no matter what | 15:07 |
niemeyer | <niemeyer> fwereade: Well, assuming no revision was provided, which is always true nowadays | 15:07 |
niemeyer | <niemeyer> fwereade: In upgrade, if it is local, bump the revision to the revision currently deployed (in the *env*) + 1 | 15:07 |
niemeyer | hazmat: ^ | 15:07 |
hazmat | hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded) | 15:09 |
niemeyer | hazmat: True | 15:09 |
hazmat | that's part of what bites people, lack of discovery into the problem till they go inspecting things | 15:09 |
niemeyer | <hazmat> hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded) | 15:09 |
niemeyer | fwereade: ^ | 15:09 |
niemeyer | LOL | 15:10 |
niemeyer | <niemeyer> hazmat: True | 15:10 |
niemeyer | <niemeyer> <hazmat> hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded) | 15:10 |
niemeyer | <hazmat> that's part of what bites people, lack of discovery into the problem till they go inspecting things | 15:10 |
niemeyer | fwereade: ^^^ | 15:10 |
fwereade | niemeyer, hazmat: sounds sensible | 15:10 |
hazmat | auto increment on upgrade sounds good | 15:11 |
hazmat | the upgrade implementation is pretty strict on newer versions, which is why i punted on a hash based approach, it was hard to maintain that notion | 15:12 |
niemeyer | hazmat: Agreed. The hash stuff sounds interesting to detect coincidences for sure, but the detail is that it won't really solve the problems we have.. we need to consider larger versions anyway, and need to be able to update the previous deployment | 15:14 |
niemeyer | ... without manual interaction | 15:14 |
niemeyer | So for now it feels like the auto-increment upgrade is enough | 15:14 |
niemeyer | fwereade: When do you think the new CharmURL & CharmCollection abstractions will be available? | 15:16 |
niemeyer | fwereade: Just want to sync up because I'd like to have a look at them before mimicking in Go | 15:16 |
niemeyer | fwereade: So we match logic | 15:16 |
fwereade | niemeyer: hopefully EOmyD, but I'm not quite sure when that will be | 15:16 |
niemeyer | fwereade: Cool, thks | 15:17 |
fwereade | niemeyer: certainly before the start of your day tomorrow though | 15:17 |
niemeyer | fwereade: Ok | 15:17 |
niemeyer | fwereade: Are you planning on doing any modifications to the suggested API? | 15:17 |
fwereade | I think I'm happy with everything you proposed | 15:18 |
niemeyer | fwereade: Awesome, I'll get started on it then | 15:18 |
fwereade | niemeyer: I'll let you know ASAP if I come up with any problems | 15:19 |
niemeyer | fwereade: Superb, cheers | 15:19 |
niemeyer | fwereade: Will do the same on my end | 15:19 |
m_3 | kim0: hey man... what's up? | 15:37 |
_mup_ | juju/config-juju-origin r358 committed by jim.baker@canonical.com | 15:42 |
_mup_ | Merged trunk | 15:42 |
hazmat | fwereade, do you know if the orchestra machines generally have usable fqdns? | 15:47 |
fwereade | hazmat: better check with roaksoax, but I don't think you can guarantee it | 15:48 |
fwereade | hazmat: context? | 15:48 |
hazmat | fwereade, niemeyer, re the delta with local/lxc vs orchestra on address retrieval.. with local the fqdn isn't resolvable, but the ip address is routable and there is a known interface. with orchestra the number of nics on a machine isn't knowable, but i was hoping we could say fqdns are resolvable | 15:48 |
fwereade | hazmat: IIRC the dns_name should work from other machines, but I don't think we have any guarantees about how it works from outside that network | 15:49 |
hazmat | this is also per SpamapS's comments on the original implementation that we should favor fqdn over ip address, and it neatly sidesteps ipv4 vs ipv6 behind dns | 15:49 |
niemeyer | hazmat: We can't guarantee it ATM | 15:49 |
niemeyer | hazmat: Most of the tests I recall were done with IP addresses | 15:49 |
hazmat | niemeyer, on the address branch it's all just a popen... local with ip, ec2 and orchestra with fqdn hostnames | 15:50 |
niemeyer | hazmat: The fully qualified domain will also not resolve the problem.. it may have multiple nics despite the existence of a fqdn | 15:50 |
hazmat | niemeyer, multiple nics is fine if the fqdn is resolvable | 15:51 |
niemeyer | hazmat: I believe it's not.. it'll resolve to an arbitrary ip address | 15:51 |
niemeyer | hazmat: Which may not be the right one if a machine has multiple ips | 15:51 |
niemeyer | hazmat: ec2 is a different case.. | 15:51 |
niemeyer | hazmat: We know what we're doing there | 15:51 |
hazmat | niemeyer ? hostname -f returns the fqdn of the host regardless of multiple nics | 15:52 |
SpamapS | For multiple NIC's, the FQDN should resolve to the NIC that you wish the host to be externally reachable on... | 15:52 |
hazmat | which is what we do for orchestra | 15:52 |
niemeyer | hazmat: hostname -f returns *a* name, that may be resolvable or not, and that may map to the right ip or not | 15:52 |
SpamapS | I *can* see a situation where you have a management NIC, and a service NIC .. each needing different handling. | 15:53 |
hazmat | SpamapS, we've got separation of public/private addresses for units, but getting those addresses on orchestra deployments is the question | 15:53 |
hazmat | doesn't seem like we can do that a priori | 15:54 |
SpamapS | Indeed. DNS is the only reliable way, IMO, to handle something so loosely coupled. | 15:54 |
niemeyer | hazmat: I suggest checking with smoser and RoAkSoAx then | 15:55 |
niemeyer | hazmat: If they're happy, I'm happy :) | 15:55 |
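The per-provider split hazmat describes (local records an IP from a known interface, ec2 and orchestra record a resolvable name) is roughly this popen-level logic; the bridge name "virbr0" and the provider labels are illustrative assumptions:

```python
import subprocess

def get_private_address(provider_type):
    """Local: take the IPv4 address of the known bridge interface
    (illustratively 'virbr0' here).  ec2/orchestra: trust that the
    fqdn reported by `hostname -f` is resolvable by other machines."""
    if provider_type == "local":
        out = subprocess.check_output(
            ["ip", "-4", "-o", "addr", "show", "virbr0"]).decode()
        return out.split()[3].split("/")[0]  # "192.168.122.1/24" -> ip
    return subprocess.check_output(["hostname", "-f"]).decode().strip()
```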
koolhead11 | hi all | 15:57 |
niemeyer | koolhead11: Hey! | 15:58 |
koolhead11 | hello niemeyer | 15:58 |
rog | niemeyer: one merge proposal sent your way: https://code.launchpad.net/~rogpeppe/gozk/update-server-interface/+merge/77009 | 15:58 |
niemeyer | rog: Woohay, cheers! | 15:58 |
koolhead11 | SpamapS: i got some idea how not to use dbconfig-common :) | 15:58 |
rog | niemeyer: (ignore the first one, i did the merge the wrong way around) | 15:58 |
niemeyer | rog: The first one? | 15:59 |
SpamapS | I think IP's grokked from the network provider are usable... EC2 knows which one is externally available vs. internal, and the provider has full network control, so you can take that IP and use it confidently. Orchestra has no such guarantees, so the hostname that we gave to the DHCP server and that we built from its DNS settings is the only meaningful thing we can make use of. | 15:59 |
SpamapS | koolhead11: progress is good. :) | 15:59 |
koolhead11 | SpamapS: yeah. :D | 16:00 |
* koolhead11 bows to Daviey | 16:01 |
SpamapS | For servers with multi-NIC, the only real thing we can do is use a cobbler pre-seed template that selects the most appropriate one. Making use of multiples for mgmt/service seems like something we'll have to do as a new feature. | 16:01 |
rog | niemeyer: hold on, i think i mucked up. too many versions flying around. | 16:01 |
niemeyer | rog: No worries | 16:01 |
rog | gozk/zk vs gozk vs gozk/zookeeper | 16:02 |
rog | niemeyer: no, it's all good i think | 16:02 |
niemeyer | rog: Coolio | 16:03 |
rog | niemeyer: i just did a dud directory rename, but i don't think it affects what you'll see | 16:03 |
niemeyer | RoAkSoAx: We were just talking about ips vs hostnames in the context of orchestra units | 16:04 |
niemeyer | RoAkSoAx: hazmat has more details | 16:04 |
koolhead11 | hello robbiew RoAkSoAx | 16:04 |
niemeyer | I'm going to step out for lunch and leave you guys with trouble! | 16:04 |
RoAkSoAx | niemeyer: ok | 16:04 |
RoAkSoAx | niemeyer: im on a sprint atm | 16:04 |
RoAkSoAx | hazmat: ^^ | 16:04 |
niemeyer | RoAkSoAx: It's quick | 16:04 |
niemeyer | RoAkSoAx: But important | 16:04 |
* niemeyer biab | 16:04 | |
hazmat | RoAkSoAx, just trying to determine if on an orchestra launched machine we can assume either a routable hostname (fqdn) or nic for recording an address to the machine | 16:06 |
hazmat | ie. if something like hostname -f is usable to reach the machine from another machine in the orchestra environment | 16:06 |
hazmat | i assume the orchestra server is just tracking mac addresses on the machine | 16:07 |
RoAkSoAx | hazmat: yes the orchestra server is tracking the MAC address | 16:07 |
RoAkSoAx | hazmat: we always have to track it | 16:07 |
RoAkSoAx | hazmat: though, we were making sure hostnames were fqdn as a standard and that they were set correctly | 16:08 |
RoAkSoAx | hazmat: via cloud-init | 16:08 |
RoAkSoAx | smoser: ^^ | 16:08 |
RoAkSoAx | hazmat: the idea is to use a DNS reacheable name for each machine that's fqdn | 16:09 |
hazmat | RoAkSoAx, if thats the case that's perfect.. fqdn == hostname that is | 16:09 |
RoAkSoAx | hazmat: yes that's what we've been trying to standardize over the last couple weeks. Give me a few minutes till I get a hold of a few people here | 16:11 |
RoAkSoAx | hazmat: and discuss the approach | 16:11 |
SpamapS | hazmat: it's fair to say that we should take a look at other strategies for addressing services and machines as we get deeper into the hardware deployment story... | 16:11 |
SpamapS | hazmat: for this primary pass, making it work "a lot like the cloud" is the simplest approach. | 16:12 |
smoser | for what it's worth, you really should not expect that 'hostname --fqdn' gives an addressable hostname | 16:13 |
SpamapS | smoser: we have no other reliable source of data about what this machine's name is. | 16:13 |
smoser | i believe we've fixed it so that will be the case under orchestra, and in EC2 (and we're fixing that for single nic guests in nova). | 16:13 |
SpamapS | The fact that it wasn't happening was a bug. | 16:14 |
smoser | no. | 16:14 |
smoser | in those limited cases, that is the case. | 16:14 |
smoser | but 'hostname --fqdn' is just not reliable. | 16:14 |
smoser | read the man page if you disagree. | 16:14 |
smoser | it basically says not to use it | 16:14 |
smoser | so i would really suggest against telling charms that the right way to do something is something that documents itself as the wrong way | 16:15 |
smoser | :) | 16:15 |
smoser | i dont have a solution | 16:15 |
SpamapS | smoser: Indeed, this is the first time I've actually read this.. I wonder how recently this changed. :-/ | 16:15 |
SpamapS | I don't know if I agree with the man page's reasoning or with the mechanics of --all-fqdns | 16:16 |
SpamapS | "Don't use this because it can be changed" vs. "Rely on reverse DNS instead" ... | 16:16 |
smoser | if you're depending on cloud-init (which you are for better or worse), we can put something in it , or an external command that would basically query the metadata provided by the cloud provider to give you this. | 16:16 |
smoser | i would i guess suggest making a ensemble command "get-hostname" or something | 16:17 |
SpamapS | smoser: Its something we can control (since we control the initial boot of the machine) which ripples through and affects everything else on the machine. | 16:17 |
SpamapS | I believe the plan is to have some sort of "unit info" command for charms to use. | 16:17 |
smoser | you do not control the initial boot of the machine. | 16:17 |
smoser | you do not control the dns. | 16:17 |
smoser | so how could you possibly control resolution of a name to an IP? | 16:18 |
SpamapS | smoser: We do control what we've told the provisioner to do .. which is to name that box "X" | 16:18 |
smoser | no you do not | 16:18 |
smoser | not on ec2 | 16:18 |
SpamapS | cobbler does | 16:18 |
smoser | right. | 16:18 |
smoser | but stay out of that | 16:19 |
smoser | that would mean that ensemble is acting as the cloud provider in some sense when it talks to cobbler | 16:19 |
smoser | which is just yucky. | 16:19 |
SpamapS | we don't put the hostname in the metadata for the nocloud seed? | 16:19 |
smoser | not any more | 16:19 |
smoser | cobbler does | 16:19 |
smoser | ensemble does not | 16:19 |
smoser | which is much cleaner | 16:19 |
smoser | s/ensemble/juju/ | 16:20 |
SpamapS | Can we ask cobbler what it put there? | 16:20 |
smoser | or s/cleaner/more ec2-or-nova-like/ | 16:20 |
smoser | you *can*, but you should not. | 16:20 |
smoser | oh | 16:20 |
SpamapS | Ok.. where then should we get the address for the machine? | 16:20 |
smoser | wait | 16:20 |
smoser | yes | 16:20 |
smoser | you can ask cobbler what it put there | 16:20 |
smoser | sorry | 16:20 |
SpamapS | can and should I think | 16:20 |
smoser | yes | 16:20 |
smoser | :) | 16:20 |
smoser | sorry | 16:20 |
smoser | i thought you were saying "Can we tell cobbler what to put there" | 16:21 |
SpamapS | I'm not enthralled with hostname --fqdn. It is, however, the only common tool we have between all environments at the moment. | 16:21 |
smoser | well its easy enough to add a tool | 16:21 |
smoser | that lives on the nodes | 16:21 |
SpamapS | I think it might actually be quite trivial to write a charm tool ... 'machine-info --hostname' which gives us the hostname the provider wants us to be contacted with. | 16:22 |
smoser | the other thing, i think might be reasonable to consider, if you're only interested in single-cloud systems, would be to have juju run a dns server. | 16:22 |
smoser | SpamapS, right. that is what i'm suggesting is fairly easy. | 16:22 |
SpamapS | Too tightly coupled to juju at that point | 16:22 |
smoser | right | 16:23 |
SpamapS | If an environment can't provide reliable DNS then it should just give us network addresses when we ask for the hostname. | 16:23 |
smoser | i agree with this. | 16:23 |
SpamapS | I believe thats the direction the local provider has gone | 16:23 |
smoser | why do you care about a hostname ? | 16:24 |
smoser | just curious | 16:24 |
smoser | would it not be superior to always be IP ? | 16:24 |
SpamapS | definitely not | 16:24 |
smoser | (assuming that the IP would not change) | 16:24 |
smoser | why? | 16:24 |
SpamapS | IP can vary from your perspective | 16:24 |
SpamapS | a hostname provides the appropriate level of indirection | 16:25 |
smoser | somewhat. | 16:25 |
smoser | but in all cases you are aware of so far, the IP address of the system is what you want. | 16:25 |
smoser | ie, in all of cobbler, nova, ec2, 'ifconfig eth0' returns an internally addressable IPv4 address. | 16:26 |
SpamapS | IPv4 or IPv6? internal or external? | 16:26 |
smoser | you are interested in IPv4 internal | 16:26 |
SpamapS | usually | 16:26 |
smoser | you're 100% only interested in internal if you're using hostname --fqdn | 16:26 |
smoser | so that leaves you only ipv4 and ipv6 | 16:26 |
SpamapS | I'm not saying we can't use IP's, I'm saying we need to talk about *hosts* | 16:26 |
smoser | ec2 has no ipv6 | 16:27 |
smoser | so now you're down to nova (which i know you've not tested ipv6 of) and cobbler, which i highly doubt you have | 16:27 |
smoser | machine-info --hostname | 16:27 |
SpamapS | You're getting all pragmatic on me. | 16:27 |
smoser | just return ipv4 internal ip address. | 16:27 |
smoser | no | 16:28 |
hazmat | so this is below the level of charm | 16:28 |
SpamapS | Like what, you want to ship something *now* ? | 16:28 |
smoser | i dont understand the question | 16:28 |
hazmat | juju is going to prepopulate and store the address, we just need to know how to get it on an orchestra machine | 16:28 |
smoser | no | 16:28 |
hazmat | i was hoping hostname -f would do.. seems like it won't | 16:28 |
smoser | do not do that hazmat | 16:28 |
smoser | that is broken | 16:28 |
smoser | juju should *NOT* prepopulate the address. | 16:28 |
smoser | juju is not orchestra | 16:29 |
smoser | it can query, it does not set or own. | 16:29 |
hazmat | smoser, sorry wrong context.. juju was going to store the address from the provider for the charm | 16:29 |
SpamapS | smoser: I'm being a bit sarcastic. Yes, all currently known use cases are satisfied with IP's. But all of them also *should* have hostnames, and we shouldn't ignore the need for hostnames just because we can. | 16:29 |
hazmat | smoser, the question is how to get the address | 16:29 |
smoser | i'm fine with wanting to have hostnames | 16:29 |
smoser | you can hide that cleanly behind a command | 16:29 |
smoser | in which right now, you're assuming that command is 'hostname --fqdn' | 16:29 |
smoser | which is documented as broken | 16:30 |
smoser | so i'm suggesting adding another command | 16:30 |
smoser | which does the same general thing, but works around a bug or two | 16:30 |
smoser | and may, in some cases, return an ipv4 address. | 16:30 |
hazmat | smoser, that command is? | 16:30 |
smoser | 'machine-info --hostname' | 16:30 |
SpamapS | hazmat: we've talked about a "machine info" or "unit info" script before. | 16:31 |
SpamapS | I think you want unit info, not machine info. | 16:31 |
smoser | which you add as a level of abstraction into the node | 16:31 |
smoser | fine | 16:31 |
hazmat | SpamapS, that doesn't answer the question of how that command gets the info | 16:31 |
hazmat | ie. how do we implement machine-info's retrieval of the address | 16:31 |
smoser | hazmat, right now, it does this: echo $(hostname --fqdn) | 16:31 |
smoser | that makes it 100% bug-for-bug compatible with what you have right now | 16:32 |
smoser | but is fixable in one location | 16:32 |
SpamapS | hazmat: it queries the provider (or, more likely, queries the info we cached in the zk topology when the machine/unit started) | 16:32 |
smoser | SpamapS, is right. | 16:32 |
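A sketch of the proposed node-side tool: prefer the address the provider recorded for the machine (e.g. cached in the zk topology at start-up) and fall back to the current `hostname --fqdn` behaviour. The tool name and the cache lookup are assumptions; none of this is an implemented command:

```python
import subprocess

def machine_info_hostname(read_cached_address):
    """Resolution order for a hypothetical `machine-info --hostname`:
    1. the address the provider stored for this machine at start-up
       (`read_cached_address` is an assumed callable, returning it or None);
    2. otherwise, fall back to the current `hostname --fqdn` behaviour,
       bug-for-bug compatible with what charms do today."""
    cached = read_cached_address()
    if cached:
        return cached
    return subprocess.check_output(["hostname", "--fqdn"]).decode().strip()
```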
hazmat | so for local and ec2 providers, we have known solutions, it's the orchestra case where it's not clear what we should do | 16:32 |
smoser | in the orchestra provider 'hostname --fqdn' works | 16:32 |
smoser | and i thought we had (or i think we should) assume that the machine's "hostname" in cobbler is fqdn internal address. | 16:33 |
smoser | s/assume/insist/ | 16:33 |
smoser | so ensemble can just query that from cobbler | 16:33 |
smoser | afaik, the only place broken right now is in nova | 16:33 |
smoser | due to bug 854614 | 16:34 |
hazmat | smoser, does cobbler have any notion of external/public addresses? or just hostnames for a given mac addr | 16:34 |
smoser | which will be fixed | 16:34 |
_mup_ | Bug #854614: metadata service local-hostname is not fqdn <server-o-rs> <OpenStack Compute (nova):In Progress by smoser> <nova (Ubuntu):Confirmed> < https://launchpad.net/bugs/854614 > | 16:34 |
smoser | RoAkSoAx would know more, but whatever it is, you assert that in some portion of the machine's metadata, a fqdn exists for internal address. | 16:34 |
smoser | and you use it | 16:34 |
smoser | i dont have cobbler in front of me to dump machine data. but i think it is a reasonable assertion. | 16:35 |
SpamapS | wow, --all-fqdns /win 24 | 16:35 |
SpamapS | doh | 16:35 |
SpamapS | so --all-fqdns is pretty new | 16:35 |
SpamapS | Appeared just before 9.10 I think | 16:36 |
smoser | its really all messed up. | 16:36 |
smoser | and it doesn't help you | 16:36 |
smoser | as it doesn't sort them in any order (how could it?) | 16:36 |
SpamapS | yeah its not useful | 16:36 |
smoser | so how can you rely on its output | 16:36 |
SpamapS | providers need to tell us how a machine they're responsible for is addressable | 16:37 |
smoser | right. | 16:37 |
smoser | and we just assert at the moment that cobbler stores that in (i think) 'hostname' | 16:37 |
SpamapS | And then the external and internal IP's are both the result of querying DNS for that hostname. | 16:38 |
smoser | i dont follow that. | 16:38 |
smoser | i didn't know external ip was something that was being discussed. | 16:38 |
SpamapS | Just thinking of analogs for ec2's metadata | 16:39 |
SpamapS | Its needed | 16:39 |
SpamapS | for expose | 16:39 |
smoser | i agree it would be needed... | 16:39 |
SpamapS | For orchestra, all the firewall stuff is noop'd though | 16:39 |
smoser | i really have to look at nova to find a good place for this. | 16:39 |
smoser | but basically i think we just need to store it there and assert that it is configured sanely. | 16:39 |
SpamapS | I believe there's a desire to move that FW management to the agents managing ufw/iptables .. but for now, providers have to do it, and orchestra can't. | 16:40 |
SpamapS | yes, hostname in cobbler is the canonical source of the machine's hostname | 16:40 |
SpamapS | and Mavis Beacon is the canonical source of my bad typing | 16:40 |
smoser | i think for our near term purposes cobbler no op is fine for firewall | 16:40 |
hazmat | agreed | 16:40 |
hazmat | SpamapS, so hostname -f is fine for cobbler for the private address.. and hopefully the public address? | 16:41 |
smoser | almost certainly not the public address. | 16:41 |
hazmat | smoser, it's not clear what a public address means in orchestra.. it's outside the purview of the provider | 16:41 |
smoser | hazmat, well, sort of | 16:42 |
smoser | clearly orchestra could have that data | 16:42 |
smoser | and could provide it to you | 16:42 |
smoser | but i dont think we have a place where we assert it is stored now. | 16:42 |
SpamapS | orchestra does not imply whether it has public/private networks. | 16:43 |
SpamapS | Its really not all that interesting, just return hostname for anything wanting to address the machine. | 16:44 |
smoser | good enough for me. | 16:45 |
smoser | so i do suggest the layer of indirection over 'hostname --fqdn' | 16:45 |
SpamapS | And I'll open up a bug for the desired charm tool | 16:45 |
SpamapS | smoser: agreed, will open up that bug now | 16:45 |
hazmat | SpamapS, the common use for that is going away | 16:45 |
hazmat | SpamapS, the relations will be prepopulated with the info | 16:46 |
hazmat | although we still need a way to query it agreed | 16:46 |
hazmat | at the unit level | 16:46 |
SpamapS | Right, is there a bug for that then? | 16:46 |
SpamapS | or will it be a reserved variable in relation-get ? | 16:47 |
hazmat | SpamapS, not yet.. but the units-with-addresses branch does the work of storing it directly on the units (pub/private) address in provider specific manner | 16:47 |
hazmat | SpamapS, just a prepopulated one | 16:47 |
SpamapS | I like that | 16:47 |
hazmat | i just needed to verify that hostname --fqdn does something sane w/ orchestra | 16:48 |
hazmat | and it seems like that's what we should use for now | 16:48 |
hazmat | which is nice, since that's whats implemented for orchestra | 16:48 |
niemeyer | Wow.. long thread | 16:48 |
SpamapS | hazmat: since all the charms currently rely on it, its been made to work that way. But as we've discussed here, its not really robust as a long term solution. | 16:50 |
hazmat | smoser, RoAkSoAx does that mean that bug 846208 is fixed? | 16:50 |
_mup_ | Bug #846208: Provisioned nodes do not get a FQDN <juju:New> <orchestra (Ubuntu):New> < https://launchpad.net/bugs/846208 > | 16:50 |
hazmat | wrt to orchestra | 16:50 |
hazmat | SpamapS, agreed, but getting it out of the charms, goes a long way to giving us the flexibility to fix it | 16:51 |
SpamapS | niemeyer: yeah, when you get Me, the tire kicker, and smoser, Mr. Meh, talking about something.. the threads tend to go back and forth with a lot of "NO, no, no NO No, no, ahh, yes." | 16:51 |
niemeyer | SpamapS: That's a nice way to get something proper in place.. | 16:51 |
smoser | adam_g, probably knows about 846208 but i would have thought yes. | 16:52 |
SpamapS | speaking of long term and short term... I'm hoping to file the FFE tomorrow.. where are we at? | 16:52 |
hazmat | SpamapS, this is probably the closest bug 788992 | 16:52 |
_mup_ | Bug #788992: example formulas refer to providing the hostname in ensemble itself <juju:New> < https://launchpad.net/bugs/788992 > | 16:52 |
smoser | at very least, i'm fairly sure that 'hostname -f' should do the right thing there now. | 16:53 |
hazmat | smoser, cool | 16:53 |
RoAkSoAx | yeah that bug was fixed already (#846208), will verify now that i'm here | 16:55 |
_mup_ | Bug #846208: Provisioned nodes do not get a FQDN <juju:New> <orchestra (Ubuntu):New> < https://launchpad.net/bugs/846208 > | 16:55 |
hazmat | SpamapS, we're very close on local dev. | 16:56 |
hazmat | bcsaller, how's it going? | 16:56 |
bcsaller | hazmat: I was just reading back the channel actually | 16:56 |
SpamapS | Awesome | 16:56 |
bcsaller | hazmat: have you tried the branch yet? | 16:57 |
hazmat | bcsaller, not yet.. i'll do so now | 16:57 |
hazmat | bcsaller, what's the url for the stats on apt-cacher-ng? | 17:02 |
bcsaller | http://localhost:3142/acng-report.html | 17:03 |
SpamapS | hazmat: btw did those tests get fixed? | 17:04 |
hazmat | SpamapS, which tests? | 17:04 |
SpamapS | hazmat: lxc tests IIRC | 17:04 |
SpamapS | the ones that were blatantly broken last week in trunk | 17:05 |
hazmat | SpamapS, oh yeah.. the breakage, indeed they're fixed.. trunk is green | 17:05 |
SpamapS | cool | 17:06 |
SpamapS | I've been doing regular uploads to my PPA with the distro packaging, which runs the test suite... that was blocking those from working. :p | 17:06 |
hazmat | bcsaller, i'm seeing some oddities around namespace passing which is breaking lxc-ls, but the units are up and running | 17:06 |
bcsaller | hazmat: I'll need details ;) | 17:06 |
hazmat | bcsaller, i'll dig into it | 17:07 |
hazmat | bcsaller, but it appears to be working | 17:07 |
bcsaller | hazmat: in an older version it wasn't setting the ns to the qualified name and created images without a prefix, but that was fixed | 17:07 |
hazmat | bcsaller, ah.. that looks like the problem | 17:08 |
hazmat | sounds like | 17:08 |
bcsaller | hazmat: you didn't pull? | 17:08 |
hazmat | bcsaller, i probably need to remerge your branch | 17:08 |
bcsaller | sounds like | 17:08 |
hazmat | bcsaller, i've been pulling your branch and looking over the diff, but i don't think i've remerged it into the rest of the pipeline | 17:08 |
bcsaller | then I'm surprised it worked. I expect the services in the container didn't actually start for you | 17:09 |
bcsaller | hazmat: that 'conf' change was missing too I expect | 17:09 |
hazmat | bcsaller, does the template machine get the namespace qualifier? | 17:11 |
hazmat | s/machine/container | 17:11 |
bcsaller | no, there are some advantages and disadvantages there | 17:11 |
bcsaller | I expect there will be debate around that point in the review | 17:12 |
bcsaller | I guess it *should* though, I can think of many ways it can go wrong for people | 17:13 |
bcsaller | vs being a cost savings for the well behaved. It should also have things like series name in it I expect | 17:13 |
hazmat | bcsaller, the question is can we get this stuff landed today for push to the repos tomorrow, is there anything i can help with? | 17:17 |
hazmat | i think all my branches are approved at this point, i've got one last minor change to update the provider name and prepopulate the relations with the unit address | 17:18 |
hazmat | bcsaller, latest revno is 404 on omega? | 17:21 |
bcsaller | Idk, can't find it | 17:21 |
bcsaller | ;) | 17:21 |
bcsaller | yeah, thats it | 17:21 |
hazmat | bcsaller, getting pty allocation errors; just had a kernel upgrade, going to try a reboot | 17:30 |
hazmat | unit agents aren't running | 17:30 |
hazmat | conf file looks fine | 17:30 |
=== koolhead11 is now known as koolhead11|bot | ||
_mup_ | juju/config-juju-origin r359 committed by jim.baker@canonical.com | 18:00 |
_mup_ | Add support for get_default_origin | 18:00 |
* rog is done for the day. see y'all. | 18:01 | |
niemeyer | rog: Cheers! | 18:01 |
hazmat | bcsaller, the container unit agents never start, and i get pty allocation errors trying to login manually | 18:12 |
bcsaller | hazmat: sounds like what you were having at the sprint | 18:12 |
bcsaller | hazmat: what was the resolution to that? | 18:12 |
hazmat | bcsaller, upgrading to oneiric | 18:12 |
hazmat | i don't think that works twice ;-) | 18:12 |
bcsaller | darn | 18:13 |
hazmat | currently on lxc == 0.7.5-0ubuntu8 | 18:13 |
bcsaller | same | 18:13 |
bcsaller | hazmat: the lxc-library tests do or don't trigger this issue for you? | 18:19 |
hazmat | bcsaller, are you specifying the origin somehow? | 18:31 |
hazmat | bcsaller, the lxc lib tests fail in omega for me | 18:31 |
niemeyer | SpamapS: What is the set of valid charm names we're going to support? | 18:41 |
niemeyer | SpamapS: foo(-bar)*? | 18:41 |
niemeyer | Or, more properly "^[a-z]+([a-z0-9-]+[a-z])*$" | 18:41 |
niemeyer | fwereade, bcsaller, hazmat, anyone: ^^^? | 18:42 |
SpamapS | niemeyer: yes that looks exactly right | 18:42 |
SpamapS | basically the hostname spec. ;) | 18:42 |
SpamapS | but no capitals | 18:43 |
SpamapS | +1 | 18:43 |
bcsaller | niemeyer: looks fine to me, might need [-_] | 18:43 |
hazmat | sounds good | 18:43 |
niemeyer | bcsaller: It contains - already | 18:43 |
SpamapS | no _'s | 18:43 |
SpamapS | one visual separator is fine | 18:43 |
bcsaller | ahh 0-9-, ic | 18:43 |
hazmat | bcsaller, do you have some delta in your omega branch that's not pushed? | 18:44 |
bcsaller | hazmat: no | 18:44 |
hazmat | bcsaller, i get test failures.. it looks like around juju package install | 18:44 |
bcsaller | origin should be ppa at this point, I think that's what it says in the code, I'll check again | 18:44 |
niemeyer | fwereade: In case you are around, these will be useful: | 18:45 |
niemeyer | var validUser = regexp.MustCompile("^[a-z0-9][a-zA-Z0-9+.-]+$") | 18:45 |
niemeyer | var validSeries = regexp.MustCompile("^[a-z]+([a-z-]+[a-z])?$") | 18:45 |
niemeyer | var validName = regexp.MustCompile("^[a-z]+([a-z0-9-]+[a-z])?$") | 18:45 |
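(For illustration: the three patterns above are Go source; below is a minimal sketch of the same checks translated to Python. The translation and the sample names are assumptions, not from the log.)

    import re

    # niemeyer's patterns, translated verbatim from the Go vars above.
    valid_user = re.compile(r"^[a-z0-9][a-zA-Z0-9+.-]+$")
    valid_series = re.compile(r"^[a-z]+([a-z-]+[a-z])?$")
    valid_name = re.compile(r"^[a-z]+([a-z0-9-]+[a-z])?$")

    # Hypothetical examples: charm names are lowercase, "-" is the one
    # visual separator, and capitals and underscores are rejected.
    assert valid_name.match("wordpress")
    assert valid_name.match("foo-bar")
    assert not valid_name.match("Foo")      # no capitals, per SpamapS above
    assert not valid_name.match("foo_bar")  # no underscores, per SpamapS above
    assert valid_series.match("oneiric")
    assert valid_user.match("kapil.thangavelu")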
hazmat | bcsaller, http://paste.ubuntu.com/697431/ | 18:45 |
bcsaller | hazmat: so either the origin isn't ppa, the networking isn't working or... | 18:48 |
hazmat | bcsaller, the networking is working; at least, packages are being installed | 18:48 |
bcsaller | hazmat: and you said you can't ssh into the container? I'd try running the juju-create script (it will be some /tmp/xxxxx-juju-create script in the container) and follow the output | 18:49 |
hazmat | bcsaller, also when the tests fail they leave an orphan container | 18:50 |
niemeyer | jimbaker: any chance of getting env-origin landed today? | 19:17 |
jimbaker | niemeyer, i'm working on the mocks for this. once done, it will be ready for review | 19:18 |
niemeyer | jimbaker: Ugh.. | 19:18 |
jimbaker | niemeyer, so pretty close i would say | 19:18 |
niemeyer | jimbaker: "working on the mocks" gives me bad feelings nowadays, for some reason | 19:18 |
jimbaker | niemeyer, well, as i understand it, i need to mock out apt-cache policy for the various cases | 19:19 |
niemeyer | jimbaker: Not really.. that's a pretty side-effects free problem to solve | 19:20 |
jimbaker | niemeyer, how would we test the case of being on a distro vs one where it was installed from the ppa? or the case of being installed from a branch? | 19:21 |
niemeyer | jimbaker: origin, source = parse_juju_policy(data) | 19:23 |
jimbaker | niemeyer, but we still need to run apt-cache policy in order to collect the necessary data. isn't this the role for the mock, to intercept this call with some variations of what it could return? | 19:24 |
niemeyer | jimbaker: There's a single test needed for actually calling apt-cache, and that's also trivial to automate without mocking by putting an executable in the path. | 19:25 |
niemeyer | jimbaker: I won't fight if you decide to mock this one | 19:25 |
niemeyer | jimbaker: But mocking every single iteration of parse_juju_policy is making our lives more painful without a reason | 19:25 |
niemeyer | jimbaker: It's a side-effects free process | 19:26 |
niemeyer | jimbaker: and it's idempotent | 19:26 |
niemeyer | jimbaker: If you need mocker for that I'll take the project page down! :-) | 19:26 |
jimbaker | niemeyer, i will rewrite it according to what you have described, it's not a problem | 19:27 |
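(A minimal sketch of the mock-free test style niemeyer describes: feed canned apt-cache policy output into a pure parsing function. The name parse_juju_policy comes from niemeyer's line above; its signature, the toy implementation, and the sample output are assumptions, not the real juju code.)

    # Canned `apt-cache policy juju` output; version and PPA URL are illustrative.
    PPA_POLICY = """\
    juju:
      Installed: 0.5+bzr361-0ubuntu1~ppa1
      Candidate: 0.5+bzr361-0ubuntu1~ppa1
      Version table:
     *** 0.5+bzr361-0ubuntu1~ppa1 0
            500 http://ppa.launchpad.net/juju/pkgs/ubuntu/ oneiric/main
    """

    def parse_juju_policy(data):
        # Toy stand-in: classify the install origin from the policy text.
        if "ppa.launchpad.net" in data:
            return "ppa", "ppa.launchpad.net"
        return "distro", "archive.ubuntu.com"

    def test_parse_ppa_policy():
        # Parsing is side-effect free and idempotent, so no mocking is
        # needed: just assert on canned input.
        origin, source = parse_juju_policy(PPA_POLICY)
        assert origin == "ppa"

For the single test that actually shells out to apt-cache, a fake executable dropped at the front of $PATH covers it without mocker, as niemeyer notes above.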
hazmat | bcsaller, are you sure you don't have something in /var/cache/lxc that makes it work for you? | 19:50 |
hazmat | bcsaller, i just blew away my cache and it's still failing on the tests | 19:50 |
bcsaller | hazmat: I'll try to clean that out and check again | 19:51 |
bcsaller | take a few minutes | 19:51 |
hazmat | bcsaller, did it work? | 20:02 |
bcsaller | bootstrap is still going, w/o cache. | 20:02 |
bcsaller | so for me it hit the test timeout | 20:03 |
bcsaller | but I'm letting it build the cache outside the test now | 20:04 |
hazmat | bcsaller, you on dsl? | 20:05 |
hazmat | it didn't hit the test timeout for me.. but still failed | 20:05 |
bcsaller | cable | 20:05 |
bcsaller | the unpacking phase took too long oddly | 20:05 |
bcsaller | hazmat: I am seeing errors now, I'll look into it more | 20:10 |
hazmat | bcsaller, cool, thanks | 20:10 |
hazmat | bcsaller, as far as i can see ppa is selected across the board | 20:10 |
bcsaller | looked that way to me as well | 20:10 |
hazmat | oh wait, it's the wrong archive | 20:11 |
hazmat | haha | 20:11 |
hazmat | i thought that got fixed in this branch, but you had it cached | 20:11 |
hazmat | bcsaller, niemeyer pointed it out to me in a review | 20:11 |
hazmat | bcsaller, nevermind, that looks sane for the ppa | 20:12 |
* hazmat grabs some lunch | 20:13 | |
hazmat | er.. snack | 20:13 |
bcsaller | yeah, I didn't know what you were talking about there :) | 20:13 |
bcsaller | hazmat: pushed changes to both lxc-lib and omega, it was a missing dep that was cached for me :( | 20:33 |
* bcsaller looks for a brown paper bag | 20:33 | |
hazmat | bcsaller, cool, just glad it's fixed | 20:34 |
* hazmat retries | 20:34 | |
SpamapS | Hmm.. getting sporadic failures of one test.. | 20:49 |
SpamapS | https://launchpadlibrarian.net/81106645/buildlog_ubuntu-oneiric-i386.ensemble_0.5%2Bbzr361-0ubuntu1~ppa1_FAILEDTOBUILD.txt.gz | 20:49 |
SpamapS | juju.agents.tests.test_unit.UnitAgentTest.test_agent_executes_config_changed_hook | 20:49 |
jrings | Hi, I have a problem trying to get juju to connect to EC2. I described it here http://ubuntuforums.org/showthread.php?t=1849913 but also with the new version today it is still the same. I can bootstrap, a new instance is created in EC2, but in juju status the connection is refused | 21:07 |
jrings | Cannot connect to machine i-48751428 (perhaps still initializing): could not connect before timeout after 2 retries 2011-09-26 14:03:34,431 ERROR Cannot connect to machine i-48751428 (perhaps still initializing): could not connect before timeout after 2 retries | 21:08 |
SpamapS | jrings: hey, the key that juju uses by default is $HOME/.ssh/id_(rsa|dsa) | 21:10 |
jrings | How can I tell juju to use the .pem from EC2? | 21:11 |
SpamapS | jrings: you don't need to | 21:12 |
SpamapS | jrings: it installs your key in the instances | 21:12 |
jrings | Well my key is in $HOME/.ssh | 21:12 |
jrings | and the juju bootstrap works | 21:12 |
jrings | why can't juju status connect then? | 21:13 |
SpamapS | bootstrap completes w/o ssh | 21:13 |
SpamapS | it's possible your key didn't make it into the instance for some reason | 21:13 |
SpamapS | jrings: can you pastebin ec2-get-console-output ? | 21:14 |
hazmat | if it can't find a key during bootstrap it will raise an exception | 21:16 |
jrings | Is that the same as the log for the instance in the EC2 webconsole? | 21:17 |
jrings | If so, here: http://pastebin.com/4c78GVC9 | 21:19 |
SpamapS | jrings: heh, it takes a few minutes to get the full log .. so you might have to wait a bit longer. | 21:25 |
SpamapS | Or maybe there's a limit to the size.. I've never checked | 21:25 |
SpamapS | (that would suck if the limit was applied to the top.. and it wasn't updated like a ring buffer) | 21:25 |
hazmat | hmm.. this line 2011-09-25 10:24:11,882 ERROR SSH forwarding error: bind: Cannot assign requested address | 21:28 |
hazmat | is interesting | 21:28 |
jrings | that's what I get from the juju status | 21:28 |
hazmat | we pick a random open port on localhost to setup a port forward over ssh | 21:29 |
SpamapS | conflicting with desktop-couch ? | 21:30 |
hazmat | it looks like that fails, although for it to fail persistently suggests something else is going on | 21:30 |
SpamapS | which does the same thing | 21:30 |
SpamapS | Yeah true | 21:30 |
SpamapS | hazmat: does it definitely do 127.0.0.1 ? | 21:30 |
jrings | Yes I can see it trying different ports. | 21:30 |
SpamapS | jrings: can you paste the output of 'ifconfig -a' ? | 21:30 |
jrings | Wait, I set up a single node hadoop locally and had to change something to localhost | 21:31 |
jrings | eth1 Link encap:Ethernet HWaddr f0:4d:a2:5f:5c:09 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:41 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Ma | 21:31 |
jrings | ugh | 21:31 |
jrings | wait | 21:31 |
jrings | Here: http://pastebin.com/Vpp3hJPt | 21:31 |
SpamapS | hrm | 21:35 |
SpamapS | jrings: ufw running? | 21:35 |
SpamapS | can't imagine that would break it tho | 21:35 |
jrings | Just did a ufw disable and tried again, same result | 21:36 |
jrings | Oh shit I got it | 21:39 |
jrings | I had Rstudio installed | 21:39 |
jrings | it had a server on 127.0.0.1:8787 | 21:39 |
jrings | just uninstalled it, juju status works | 21:39 |
jrings | no wait | 21:40 |
jrings | actually it doesn't | 21:40 |
jrings | argh | 21:40 |
SpamapS | that doesn't make sense. :-/ | 21:40 |
jrings | weird | 21:40 |
jrings | I got | 21:40 |
jrings | 2011-09-26 14:39:06,972 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="ec2-174-129-58-110.compute-1.amazonaws.com" remote_port="2181" local_port="58376". 2011-09-26 14:39:08,981:6112(0x7f2eadf27720):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3 2011-09-26 14:39:08,981:6112(0x7f2eadf27720):ZOO_INFO@log_env@662: Client environment:host.name=vavatch 2011-09-26 14:39:08,981:6112(0x7f2 | 21:40 |
SpamapS | jrings: can you do 'strace -e trace=listen,bind,connect -f juju status' and paste that? (note that the command 'pastebinit' is really nice for this) | 21:40 |
jrings | one time | 21:40 |
jrings | and then the next juju status failed again | 21:41 |
hazmat | SpamapS, it picks the open port from all interfaces but binds to it on localhost | 21:41 |
hazmat | although i recently added an SO_REUSEADDR flag .. it should still be random each run | 21:41 |
SpamapS | hazmat: literally looks up 'localhost' or uses 127.0.0.1 ? | 21:41 |
hazmat | it does a bind: socket.bind(("", 0)) | 21:41 |
SpamapS | wait, isn't it an ssh forward? | 21:42 |
hazmat | SpamapS, ah.. yeah.. for the port forward it explicitly uses localhost | 21:42 |
hazmat | 'localhost' | 21:42 |
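(A rough sketch of the port-forward dance hazmat describes: grab a free port by binding port 0, then ask ssh to forward it on localhost. Hostnames and ports are illustrative, taken from jrings's paste; the real logic lives in the juju source.)

    import socket
    import subprocess

    # Ask the kernel for a free port by binding ("", 0) on all interfaces,
    # as described above; SO_REUSEADDR matches hazmat's recent change.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 0))
    local_port = sock.getsockname()[1]
    sock.close()

    # Forward localhost:<local_port> to zookeeper (2181) on the instance.
    # ssh prints "bind: Cannot assign requested address" when the -L bind
    # on localhost fails, which is the error jrings is seeing.
    subprocess.Popen([
        "ssh", "-N",
        "-L", "localhost:%d:localhost:2181" % local_port,
        "ubuntu@ec2-174-129-58-110.compute-1.amazonaws.com",
    ])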
SpamapS | jrings: pastebin 'ping -c 1 localhost' | 21:43 |
jrings | Here is the strace http://pastebin.com/Q0CPnDBr | 21:45 |
jrings | And the ping works http://pastebin.com/cwsep2NK | 21:46 |
_mup_ | juju/go-charm-url r14 committed by gustavo@niemeyer.net | 21:47 |
_mup_ | Implemented full-blown charm URL parsing and stringification. | 21:47 |
_mup_ | Bug #860082 was filed: Support for charm URLs is needed in Go <juju:In Progress by niemeyer> < https://launchpad.net/bugs/860082 > | 21:50 |
jrings | connection on port 49486 worked | 21:58 |
jrings | is there a way to fix the port? | 21:59 |
SpamapS | jrings: it should work on pretty much any port that's not already used | 21:59 |
niemeyer | bcsaller, hazmat: How to build the base to review lxc-omega? | 22:00 |
SpamapS | Ugh | 22:01 |
SpamapS | txaws.ec2.exception.EC2Error: Error Message: Not authorized for images: [ami-852fedec] | 22:01 |
SpamapS | Have seen this before... | 22:01 |
SpamapS | stale image.. doh | 22:01 |
jrings | Does this try to use IPv6? | 22:15 |
niemeyer | bcsaller, hazmat: I'm pushing it back onto Work in Progress.. there are multiple bases and no mention of what they are in the summary | 22:16 |
niemeyer | bcsaller: I've added an item about the file lock implementation there already | 22:16 |
hazmat | niemeyer, it's lxc-library-clone->file-lock and local-provider-config | 22:24 |
_mup_ | juju/config-juju-origin r360 committed by jim.baker@canonical.com | 22:25 |
_mup_ | Unmocked tests in place | 22:25 |
_mup_ | juju/config-juju-origin r361 committed by jim.baker@canonical.com | 22:26 |
_mup_ | Added files to bzr | 22:26 |
niemeyer | hazmat: file-lock is not even in the kanban | 22:32 |
niemeyer | lxc-omega also changed since I last pulled it | 22:33 |
niemeyer | I'm going to hold off a bit since this is getting a bit wild | 22:33 |
hazmat | niemeyer, https://code.launchpad.net/~bcsaller/juju/filelock/+merge/75806 | 22:34 |
hazmat | the change was a one liner to address a missing package dep | 22:34 |
hazmat | that i found while trying it out | 22:34 |
niemeyer | That's fine, but things are indeed a bit wild.. missing branches in the kanban.. branch changing after being pushed, multiple pre-reqs that are not mentioned | 22:37 |
niemeyer | The file-lock branch should probably be dropped, unless I misunderstand what is going on there | 22:39 |
niemeyer | It's not really a mutex.. it'll explode if there are two processes attempting to get into the mutex region | 22:39 |
niemeyer | There's an implementation in Twisted already | 22:40 |
hazmat | niemeyer, it's meant to error if another process tries to use it, but yeah, the impl in twisted is probably a better option | 22:40 |
niemeyer | hazmat: It feels pretty bad.. telling the user "Can't open file" with a traceback wouldn't be nice | 22:41 |
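(The Twisted implementation niemeyer refers to is twisted.python.lockfile; a minimal sketch of swapping it in for the hand-rolled file lock. The lock path and critical section are illustrative.)

    from twisted.python.lockfile import FilesystemLock

    # FilesystemLock.lock() returns False when another process already
    # holds the lock, instead of surfacing a "Can't open file" traceback.
    lock = FilesystemLock("/tmp/juju-lxc.lock")
    if lock.lock():
        try:
            pass  # critical section, e.g. cloning the template container
        finally:
            lock.unlock()
    else:
        print("another juju process holds the lock; try again later")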
_mup_ | juju/config-juju-origin r362 committed by jim.baker@canonical.com | 22:46 |
_mup_ | PEP8, docstrings | 22:46 |
hazmat | bcsaller, the lxc.lib tests pass but there are still some errors getting units to run | 23:13 |
bcsaller | hazmat: what are you seeing? | 23:14 |
hazmat | bcsaller, one quickie: the #! header on juju-create is wrong, missing the "/" before bin/bash (i.e. #!bin/bash instead of #!/bin/bash) | 23:14 |
hazmat | bcsaller, it looks like add-apt-repo still isn't installed on the container.. perhaps i had a leftover machine-0-template .. | 23:15 |
hazmat | 'cause juju isn't installed, which i assume causes the problem | 23:15 |
bcsaller | I suspect thats the case | 23:16 |
hazmat | because it's missing a prefix it's not getting killed; i assume that has to be done by hand. it's going to cause problems as well if someone wants to use the series option | 23:17 |
hazmat | hmm, we should be passing the origin down from the provider to the machine agent | 23:19 |
hazmat | hmm.. the clone interface makes it rather hard to put in console and container logs | 23:27 |
hazmat | i guess just stuff the attrs back on | 23:27 |
_mup_ | juju/lxc-omega-merge r398 committed by kapil.thangavelu@canonical.com | 23:31 |
_mup_ | enable container logs, and trivial juju script header fix | 23:31 |
_mup_ | juju/config-juju-origin r363 committed by jim.baker@canonical.com | 23:41 |
_mup_ | Setup origin policy for affected EC2, Orchestra provider tests | 23:41 |
_mup_ | juju/env-origin r360 committed by jim.baker@canonical.com | 23:56 |
_mup_ | Reversed to r357 | 23:56 |