#juju 2011-09-26
<hazmat> nice, fixed debug-hooks to work early in the charm lifecycle
<niemeyer> hazmat: Ohhh.. that's sweet
<niemeyer> hazmat: unix modes in zip is in upstream, btw.. we'll just need tip for the moment
<hazmat> niemeyer, cool, i've got a tip build i can utilize
<hazmat> niemeyer, its going to be a little while till the next release?
<niemeyer> hazmat: I've upgraded the PPA, but failed to put the series in the version number
<hazmat> since they just released
<niemeyer> hazmat: There's a bug in the build procedure with the colon in the path that I'll have to check out when I get a moment
<niemeyer> hazmat: Yeah, but it shouldn't be a big deal for us for the server side
<niemeyer> hazmat: We can deploy with tip
<niemeyer> hazmat: Well.. and that'll be in the weekly in a couple of days
<hazmat> niemeyer, cool
<_mup_> juju/status-with-unit-address r403 committed by kapil.thangavelu@canonical.com
<_mup_> debug hooks works with unit address, also address deficiency around debugging early unit lifecycle, by allowing debug-hooks command to wait for the unit to be running
<hazmat> niemeyer, i think i'm going to go ahead and try to address the placement stuff after the lxc merge
<niemeyer> hazmat: Sounds ok.. but I also think it's going to be surprisingly trivial to handle it as suggested
<niemeyer> hazmat: It's indeed fine either way, though
<hazmat> niemeyer, i know.. i'm just fading, and want to get merges in.. this stuff needs to go upstream... maybe i should hold off till i get a full night's rest
<hazmat> anyways.. last of the branches is ready for review ( cli with unit address)
<niemeyer> hazmat: Yeah, get some rest
<niemeyer> hazmat: I'll probably do the same to get up early tomorrow
<hazmat> niemeyer, there are places where i think placement on the cli is useful, and placement is a global provider option..
<hazmat> some of the discussion from earlier w/ bcsaller..
<hazmat> the cross-az stuff in particular is of interest to me
<niemeyer> hazmat: I think it's overthinking the problem a bit
<niemeyer> hazmat: This is well beyond what we need for the feature at hand
<niemeyer> hazmat: I'd rather keep it simple and clean until practice shows the need for the cli
<hazmat> i'm concerned that placement is going to get out of hand on responsibilities on the one hand, and on the other i see it as being very convenient for implementing features like deploy this unit in a different az
<niemeyer> hazmat: I feel uncertain about that
<hazmat> i see cross-az as something required for production on ec2.. i'm not sure where else we can put this sort of decision
<niemeyer> hazmat: We're implementing one feature, and imagining something else without carefully taking into account the side effects
<hazmat> fair enough
<niemeyer> hazmat: It's not required really..
<niemeyer> hazmat: cross az can be done with a single cluster
<hazmat> niemeyer, sure it can.. but how do we place it such that is
<hazmat> er.. place such that it is
<niemeyer> hazmat: Yeah, good question.. I don't think the placement branch answers it
<niemeyer> hazmat: So I'd rather keep it in a way we're comfortable rather than creeping up without properly figuring what we're targeting at
<hazmat> it doesn't but cli placement is an easy facility for it.. i agree there are ramifications there given provider validation that bear more thought, but it works pretty simply afaics
<niemeyer> hazmat: I'm not sure, and given that it really won't work either way right now, I'd rather not do it for now.
<niemeyer> hazmat: If nothing else, we're offering a visible interface to something that makes no sense to the user, with some intermangling in the implementation that we're not totally comfortable with.
<niemeyer> hazmat: Feels like a perfect situation to raise KISS and YAGNI
 * hazmat ponders
<hazmat> i'll sleep on it... i still think cross-az stuff is very important.. and that this is probably the simplest way to offer it to users.
<hazmat> but perhaps its a red herring... much else to do for internal failure scenario recovery
<hazmat> reconnects, restarts, etc
<niemeyer> hazmat: That's not even the point.. no matter if it's the implementation we want or not, it doesn't work today, and won't work for quite a while.
<hazmat> niemeyer, i could implement this cross-az via cli placement in a day i think.
<niemeyer> hazmat: I'd rather not have this stuff creeping up in the code base until we figure it out.
<hazmat> tomorrow even
<niemeyer> hazmat: Heh
<hazmat> ;-)
<niemeyer> hazmat: I suggest we KISS and you suggest doing even more.. get some sleep. :)
<hazmat> indeed
<_mup_> Bug #859308 was filed: Juju commands (ssh/status/debug-hooks) should work with unit addresses. <juju:In Progress by hazmat> < https://launchpad.net/bugs/859308 >
<niemeyer> Hello!
<rog> niemeyer: hiya!
<niemeyer> rog: Hey!
<rog> niemeyer: what's the best way for me to update to your merged version?
<rog> (of gozk)
<rog> is it now in a new repository?
<niemeyer> rog: It's a new branch.. just branch from lp:gozk/zk
<niemeyer> rog: Which is an alias for lp:~juju/gozk/zk
<niemeyer> rog: In the future it'll go back to being lp:~juju/gozk/trunk, once we kill launchpad.net/gozk
<niemeyer> I mean, kill as in not support this import path
<rog> ok
<__lucio__> hi! is there a way to compose two formulas so i can say, for example, deploy a database server + a monitoring agent to this node?
 * rog finds lots of documentation bugs. oops.
<niemeyer> __lucio__: Absolutely
<__lucio__> niemeyer, how? (hello!)
<niemeyer> __lucio__: Hey! :)
<niemeyer> __lucio__: Charms (previously known as formulas) interconnect via relations that follow a loose protocol
<niemeyer> __lucio__: We give a name to the interface between them so that we can distinguish the protocols
<niemeyer> __lucio__: So, you can define in one of the formulas that it requires (consumes) a given relation interface, and in the other side that it provides (serves) the given relation interface
<niemeyer> __lucio__: This way both sides can be interconnected at runtime
<niemeyer> __lucio__: Using the "juju add-relation" command
<niemeyer> __lucio__: The charms will be notified when such a relation is established via the hooks
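A minimal sketch of what that looks like in practice (the charm names, relation names, and interface name below are examples only): the providing charm's metadata.yaml declares the interface it serves, the consuming charm declares that it requires it, and "juju add-relation" wires the two deployed services together:

    # mysql charm, metadata.yaml (excerpt)
    provides:
      db:
        interface: mysql

    # wordpress charm, metadata.yaml (excerpt)
    requires:
      database:
        interface: mysql

    $ juju deploy mysql
    $ juju deploy wordpress
    $ juju add-relation mysql wordpress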
<niemeyer> rog: Hm?
<niemeyer> __lucio__: Does that make sense? :)
<__lucio__> niemeyer, not exactly what i mean. imagine i get the mysql charm and want to deploy it. get machine 1 with mysql. then i want to deploy some agent to monitor the system stats there. i want to create a new charm and say "deploy this charm to this machine that already exists"
<__lucio__> is that the "placement policy"?
<niemeyer> __lucio__: Ah
<niemeyer> __lucio__: I see
<__lucio__> the key part in here would be that those charms should know nothing of each other
<niemeyer> __lucio__: This will be supported in the coming future through what we're calling co-located charms
<niemeyer> __lucio__: In practice it'll be just a flag in the relation
<niemeyer> __lucio__: and juju will put the charms together based on that
<niemeyer> __lucio__: It's not implemented yet, though
<niemeyer> __lucio__: and it's not the placement policy
<niemeyer> hazmat: See? :)
<niemeyer> __lucio__: Yeah, exactly
<niemeyer> __lucio__: Re. knowing nothing about each other
<niemeyer> __lucio__: They will use exactly the same interface for communication that normal charms use
<__lucio__> niemeyer, ack. nice to see you guys thought about it :)
<niemeyer> __lucio__: Despite them being in the same machine
<niemeyer> __lucio__: Yeah, there's a lot of very nice stuff to come.. just a matter of time
<fwereade> niemeyer: ping
<niemeyer> fwereade: Hey!
<fwereade> niemeyer: thanks for the review :)
<fwereade> niemeyer: how's it going?
<niemeyer> fwereade: np
<niemeyer> fwereade: Going on a roll!
<fwereade> niemeyer: sweet :D
<fwereade> niemeyer: I was wondering about charm id/url/collection/etc terminology
<niemeyer> fwereade: Ok
<fwereade> niemeyer: and wanted to know what your thoughts were re: the hash at the end
<fwereade> niemeyer: I see it as not really part of the *id* so much as just a useful bit of verification
<fwereade> niemeyer: but... well, it's an important bit of verification :)
<niemeyer> fwereade: Which hash?
<fwereade> niemeyer: lp:foo/bar-1:ry4xn987ytx984qty498tx984ww
<fwereade> when they're stored
<kim0> Howdy folks .. did the LXC work land already?
<kim0> seeing lots of cool comments
<niemeyer> fwereade: It must be there
<kim0> commits I mean
<niemeyer> fwereade: For storage, specifically
<fwereade> (yes that was a keyboard-mash, not a hash, but close enough ;))
<niemeyer> fwereade: The issue isn't verification, but uniqueness
<niemeyer> kim0: Heya!
<niemeyer> kim0: It's on the way
<fwereade> niemeyer: ...ha, good point, hadn't internalised the issues with revision uniqueness
<fwereade> niemeyer: except, wait, doesn't the collection-revision pair guarantee uniqueness?
<fwereade> niemeyer: I know revisions and names wouldn't be enough
<niemeyer> fwereade: Define "guarantee"
<niemeyer> fwereade: ;_)
<kim0> cool, can't wait to tell the world about this .. It's such a nice feature
<niemeyer> fwereade: A hash is a reasonable "guarantee", even if it's not 100% certain.  Trusting the user to provide a unique pair isn't very trustworthy.
 * kim0 compiles regular juju progress report .. shares with planet earth
<niemeyer> kim0: It is indeed! And we're almost there
<fwereade> niemeyer: ok, it feels like the bad assumption is that a collection + a name will uniquely identify a (monotonically increasing) sequence of revisions
<fwereade> niemeyer: confirm?
<niemeyer> fwereade: I'd say more generally that the tuple (collection, name, id) can't be proven unique
<niemeyer> fwereade: If we were the only ones in control of releasing them, we could make it so, but we're not
<fwereade> niemeyer: hm, indeed :)
<fwereade> niemeyer: ok, makes sense
<fwereade> niemeyer: in that case, I don't see where we'd ever want the revision without the hash
<niemeyer> fwereade: That seems a bit extreme
<kim0> mm .. the juju list is not on https://lists.ubuntu.com/
<niemeyer> fwereade: The revision number is informative
<niemeyer> fwereade: and in the store it will uniquely identify the content
<niemeyer> fwereade: FWIW, the same thing is true for packages
<rog> lunch
<fwereade> niemeyer: ok... but if we ever have reason to be concerned about uniqueness of coll+name+rev, in what circumstances *can* we assume that that alone is good enough to identify a charm?
<fwereade> niemeyer: (ok: we can if it came from the store, probably (assuming *we* don't screw anything up) but it doesn't seem sensible to special case that
<fwereade> niemeyer: )
<niemeyer> fwereade: Pretty much in all cases we can assume it's unique within the code
<fwereade> niemeyer: if we want the bundles to be stored with keys including the hash, why would we eschew that requirement for the ZK node names?
<fwereade> niemeyer: um, "pretty much in all cases" == "not in all cases" :p
<niemeyer> fwereade: Sure, you've just found one case where we're concerned about clashes
<niemeyer> fwereade: Maybe we can change that logic, actually.. hmm
<niemeyer> fwereade: The real problem there is that it's very easy for the user to tweak a formula and ask to deploy it, and then deploy something else
<niemeyer> fwereade: The question is how to avoid that situation
<fwereade> niemeyer: sorry, are we talking about upgrades, or just normal deploys?
<niemeyer> fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results in these scenarios
<niemeyer> fwereade: Both
<fwereade> niemeyer: heh, I was more in favour of making the hash a required part of the ID throughout (at least internally)
<fwereade> niemeyer: my issue was that it *wasn't* included in the ZK node name at the moment
<fwereade> niemeyer: that seemed like a problem :)
<niemeyer> fwereade: That will mean we'll get two things deployed with the same name-id
<niemeyer> fwereade: Not a situation I want to be around for debugging ;-)
<niemeyer> fwereade: HmM!
<niemeyer> fwereade: What about revisioning local formulas automatically based on the latest stored version in the env?
<niemeyer> fwereade: Effectively bumping it
<niemeyer> fwereade: The spec actually already suggests that, despite that problem
<niemeyer> fwereade: This way we can remove the hash.. but we must never overwrite a previously existing charm
<fwereade> niemeyer: I'm confused
<niemeyer> fwereade: Ok, let me unconfuse you then
<niemeyer> fwereade: What's the worst point in the above explanation? :)
<fwereade> niemeyer: can we agree that (1) we can't guarantee that a (coll, name, rev) uniquely identifies a charm
<fwereade> (2) therefore, we need something else to guarantee uniqueness
<niemeyer> fwereade: My suggestion is to guarantee uniqueness "at the door"
<niemeyer> fwereade: We never replace a previous (coll, name, rev)
<niemeyer> fwereade: If we detect the user is trying to do that, we error out
<niemeyer> fwereade: To facilitate development, though, we must give people a way to quickly iterate over versions of a charm
<niemeyer> fwereade: Which means we need to bump charm revisions in the local case based on what was previously deployed
<niemeyer> fwereade: Makes sense?
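A rough sketch of that "uniqueness at the door" rule, with in-memory dicts standing in for the provider storage and the ZooKeeper charm nodes (the names here are illustrative, not juju's actual API):

    class CharmAlreadyExists(Exception):
        pass

    storage = {}   # stands in for provider file storage
    zk_nodes = {}  # stands in for ZooKeeper charm nodes

    def publish_charm(collection, name, revision, bundle_bytes, metadata):
        charm_id = "%s:%s-%d" % (collection, name, revision)
        if charm_id in zk_nodes:
            # Never silently replace a previously published (coll, name, rev).
            raise CharmAlreadyExists(charm_id)
        # Store the bundle first, then register it in ZooKeeper, so any charm
        # that is registered is guaranteed to have its bundle in storage.
        storage[charm_id] = bundle_bytes
        zk_nodes[charm_id] = metadata
        return charm_id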
<fwereade> niemeyer: I think so
 * fwereade thinks...
<niemeyer> fwereade: This way we can remove the hash
<niemeyer> fwereade: But you'll have to review logic around that a bit so that we're sure we're not naively replacing a previous version
<niemeyer> fwereade: It shouldn't be hard, IIRC
<fwereade> niemeyer: I don't remember it being exceptionally complex
<niemeyer> fwereade: Because we consciously store the charm in zk after uploading
<niemeyer> fwereade: So if the charm is in zk, it must be in the storage
<fwereade> niemeyer: I have a vague feeling it'll already complain if we try to overwrite a charm in zk
<niemeyer> fwereade: and thus we shouldn't replace
<niemeyer> fwereade: I think upgrade is a bit more naive
<niemeyer> fwereade: But I'm not sure
<niemeyer> fwereade: Perhaps my memory is failing me
<fwereade> niemeyer: I know less about the code than you might think, I was working most of last week with about 3 mental registers operating properly :/
<fwereade> niemeyer: CharmStateManager calls client.create with a hashless ID, so that should explode reliably already
<niemeyer> fwereade: Not sure really.. but please review it.. it'll be time well spent
<niemeyer> fwereade: Then, we'll need to implement the revision bumping that is in the spec
<niemeyer> fwereade: For the local case, that is
<fwereade> niemeyer: there was talk a little while ago about allowing people to just ignore revisions locally
<fwereade> niemeyer: which seems to me to be quite nice for charm authors
<niemeyer> fwereade: Exactly.. that's a way to do exactly that
<niemeyer> fwereade: The user will be able to ignore it, because we'll be sorting out automatically
<niemeyer> fwereade: Please see details in the spec
<niemeyer> Will get a bite.. biab
<fwereade> niemeyer: by overwriting the revision file in the local repo? (the spec seems to me to be talking about how the formula store should work, not local repos)
<niemeyer> fwereade: CTRL-F for "local formula" within "Formula revisions"
<niemeyer> fwereade: Sorry..
<niemeyer> fwereade: CTRL-F for "local deployment" within "Formula revisions"
<fwereade> niemeyer: hm, I see it now, sorry
<niemeyer> fwereade: np
<fwereade> niemeyer: for some reason I'm not very happy with us writing into a local repo though
<niemeyer> fwereade: That's why the revision is being taken out of the metadata
<fwereade> niemeyer: ...and it seems to say we should bump on every deploy, which feels rather aggressive
<fwereade> niemeyer: just a suggestion: if the revision and the hash don't match, we blow up as expected
<niemeyer> fwereade: You have the context for why this is being done now.. I'm happy to take suggestions :)
<niemeyer> fwereade: The hash of what?
<niemeyer> fwereade: Directories have no hash
<fwereade> niemeyer: don't they?
<fwereade> niemeyer: ok, it's the hash of the bundle
<fwereade> but they do have the appropriate method
<niemeyer> fwereade: Yeah. it's a hack really
<niemeyer> fwereade: Plus, not updating means we'll force users to bump it manually
<niemeyer> fwereade: Effectively doing the "rather aggressive" part manually, which sucks
<fwereade> niemeyer: what if we treat a revision file that exists as important -- so if you change a revisioned formula but don't change the rev, you blow up -- but allow people to just delete the revision file locally, in which case we identify purely by hash and treat the hash of the current local version as "newer" than any other hashes that might be around
<niemeyer> fwereade: I don't get what's the problem you're solving with that behavior
<fwereade> niemeyer: the failure-to-upgrade-without-manually-tweaking-revision
<niemeyer> fwereade: The solution in the spec solves that without using hashes
<niemeyer> fwereade: Why is your suggestion better?
<fwereade> niemeyer: but at the cost of repeatedly uploading the same formula every time it's deployed whether or not it's required
<niemeyer> fwereade: Hmm
<fwereade> niemeyer: I'm also a bit suspicious of requiring write access to local repos, just to deploy from them
<fwereade> niemeyer: feels icky ;)
<niemeyer> fwereade: That's trivial to solve.. but let's see, your earlier point is a good one
<niemeyer> fwereade: Hmm.. I think we can specialize the behavior to upgrade
<niemeyer> fwereade: and make deploy consistent for local/remote
<niemeyer> fwereade: In deploy, if there's a charm in the env, use it no matter what
<niemeyer> fwereade: Well, assuming no revision was provided, which is always true nowadays
<niemeyer> fwereade: In upgrade, if it is local, bump the revision to the revision currently deployed (in the *env*) + 1
<fwereade> niemeyer: so we *might* still needlessly upload, but less frequently... not entirely unreasonable, I guess :p
<niemeyer> fwereade: Sure, which gets us back to the original issue.. we need a method that:
<niemeyer> 1) Does not needlessly bump the revision
<niemeyer> 2) Does not require people to bump the revision manually
<niemeyer> That's one solution
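A condensed sketch of those two rules (helper names are made up): deploy reuses whatever revision is already in the environment, while a local upgrade bumps past it regardless of the local revision file:

    def deploy_revision(env_revisions, local_revision):
        # Deploy: if any revision of this charm already exists in the
        # environment, reuse the newest one instead of uploading again.
        if env_revisions:
            return max(env_revisions)
        return local_revision

    def upgrade_revision(env_revisions):
        # Upgrade of a local charm: bump past whatever the environment holds.
        return max(env_revisions) + 1

    # e.g. deploy_revision([3, 4], 1) -> 4, upgrade_revision([3, 4]) -> 5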
<niemeyer> fwereade: I don't want to get into the business of comparing the hash of an open directory with a file in the env
<niemeyer> fwereade: At least not right now.. to solve the problem we'd need to create a unique way to hash the content that doesn't vary with different bundlings
<fwereade> niemeyer: hm, I wasn't aware we had different bundlings to deal with..?
<niemeyer> fwereade: Well..
<niemeyer> fwereade: There's a file in the env.. there's an open directory in the disk
<niemeyer> fwereade: How do we compare the two?
<fwereade> niemeyer: well, at the moment, we zip up the env and hash the zipfile; I understand you think that's a hack, but I don't understand how it makes the situation any worse
<niemeyer> <fwereade> niemeyer: hm, I wasn't aware we had different bundlings to deal with..?
<niemeyer> fwereade: So you do understand we have different bundlings to deal with
<fwereade> niemeyer: we have different representations of charms, but the hashing is the same
<niemeyer> fwereade: Why is it the same?
<niemeyer> fwereade: Where's the zipping algorithm described that guarantees that zipping the same directory twice necessarily produces the same hash?
<fwereade> niemeyer: because we convert dirs to bundles to hash them?
<fwereade> niemeyer: and we *also* convert dirs to bundles to deploy them
<niemeyer> fwereade: Where's the zipping algorithm described that guarantees that zipping the same directory twice necessarily produces the same hash?
<fwereade> niemeyer: ah-ha
<rog> zip files hold modification times...
<fwereade> niemeyer, rog: hmm.
<rog> niemeyer: i did something like this before
<niemeyer> rog: modification times can be preserved.. but there are other aspects like ordering that are entirely unspecified
<rog> yup
<rog> niemeyer: my fs file traversal thing (which later became alphabet) solved this by always archiving in canonical order
<niemeyer> So, there are two choices: either we define a directory/content hashing algorithm, or we don't take the content into account
<rog> and i added a filter for canonicalising metadata we don't care about (e.g. mtime, atime)
<rog> oh yes, permissions were a problem too.
<rog> it worked very well in the end though
<niemeyer> rog: Sure, I'm not saying it's not possible.. I'm just saying that it requires diving into the problem more than "hash the zip files"
<rog> sure
<rog> zip files aren't canonical
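A sketch of the kind of canonical directory hash rog describes, assuming we only care about relative paths, permission bits, and file contents (no mtimes, and no zip container at all):

    import hashlib
    import os

    def hash_charm_dir(path):
        # Deterministic: fixed traversal order and no timestamps, so the same
        # tree hashes the same no matter how or when it gets bundled.
        digest = hashlib.sha256()
        for root, dirs, files in os.walk(path):
            dirs.sort()
            for name in sorted(files):
                full = os.path.join(root, name)
                rel = os.path.relpath(full, path)
                mode = os.stat(full).st_mode & 0o777
                digest.update(("%s %o\n" % (rel, mode)).encode("utf-8"))
                with open(full, "rb") as f:
                    digest.update(f.read())
        return digest.hexdigest()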
<fwereade> niemeyer: as a side note: what's the trivial solution to my discomfort with requiring write access to local repos?
<niemeyer> fwereade: bundle the revision dynamically
<fwereade> niemeyer: so we'd have local repos with different revs to the deployed versions? that feels like a pain to debug, too
<niemeyer> fwereade: That may be the case either way, and there's absolutely nothing we can do to prevent it
<niemeyer> fwereade: The proof being that the local version is user modifiable
<niemeyer> fwereade: Either way, the normal case is writing the revision.. so let's not worry about the read-only case for now
<fwereade> niemeyer: ok then :)
<niemeyer> fwereade: local: is really targeting development..
<fwereade> niemeyer: true
<niemeyer> fwereade: Again, please note that the local revision bumping must take the revision from the env + 1, rather than taking the local revision number in consideration
<niemeyer> fwereade: On upgrade, specifically..
<niemeyer> fwereade: I believe we can handle the deploy case exactly the same for local/remote
<fwereade> niemeyer: understood, I just feel that "local newer than env" is easily comprehensible, while "env newer than local (from which it was deployed)" is a touch confusing
<fwereade> niemeyer: agree on deploy: just use the one already deployed if it exists
<fwereade> niemeyer: (I know I'm still talking about the magic non-writing case, I'll try to forget about that)
<niemeyer> fwereade: I don't understand the first comment in this series
<fwereade> niemeyer: sorry, I was still wittering on about the non-writing case, it's not relevant ATM
<niemeyer> fwereade: The local namespace is flat..
<niemeyer> fwereade: Ponder for a second what happens if both of us start deploying the same "local" formula on the env
<niemeyer> fwereade: and what the revision numbers mean in that case
<fwereade> niemeyer: I've been having quiet nightmares about that, actually ;)
<niemeyer> fwereade: There's no nightmare there, if you acknowledge that local: is targeting development most importantly
<fwereade> niemeyer: I think the only sensible thing we can say is Don't Do That
<niemeyer> fwereade: It's fine actually.. the last deployment will win
<niemeyer> fwereade: Which is a perfect valid scenario when development is being done
<niemeyer> perfectly
<niemeyer> Can't write today
<niemeyer> fwereade: "local:" is _not_ about handling all non-store cases..
<niemeyer> fwereade: We'll eventually have a "custom store" people will be able to deploy in-house
<fwereade> niemeyer: ok, a separate piece fell into place, part of my brain was conflating services and charms
<fwereade> niemeyer: I'm happy about that now
<niemeyer> fwereade: Ah, phew, ok :-)
<fwereade> niemeyer: so... we trash hashes, then, and double-check that we'll explode if we try to overwrite a (coll, name, rev) in ZK
<niemeyer> fwereade: Yeah, "explode" as in "error out nicely".. :-)
<fwereade> niemeyer: quote so ;)
<fwereade> gaah, I can't write either :/
<fwereade> niemeyer: tyvm, very illuminating discussion
<niemeyer> fwereade: It's been my pleasure.. have been learning as well
<fwereade> niemeyer: cheers :)
<niemeyer> fwereade: Btw, the critical piece to review is whether we might overwrite the storage content or not
<niemeyer> fwereade: We have some protection from zk that create(...) won't work if it already exists
<niemeyer> fwereade: But we have none from the storage
<niemeyer> fwereade: So if the logic is not as we think it is, it'll blindly overwrite and we'll figure later
<niemeyer> fwereade: The hash protected us from that, even if not in an ideal way as you pointed out
<fwereade> niemeyer: yes indeed, I'll need to be careful but it's not insoluble
<niemeyer> fwereade: I _think_ the original logic had "store + put in zk" for exactly that reason
<fwereade> niemeyer: btw, really quick lazy question: what would cause a zk charm node to be deleted?
<niemeyer> fwereade: The ordering means that if an upload breaks mid-way, we still retry and overwrite
<niemeyer> fwereade: Nothing, IIRC
<niemeyer> fwereade: We debated a bit about garbage collecting it
<fwereade> niemeyer: ok, I thought I saw some logic to deal with that case, and was a bit surprised
<niemeyer> fwereade: and we can do it at some point
<niemeyer> fwereade: but I don't recall supporting it ATM
<fwereade> niemeyer: cool, I won't fret too much about that
<niemeyer> Man.. empty review queue.. I'll run and do some addition server-side work on the store
<niemeyer> additional..
 * hazmat catches up on the backlog
<_mup_> juju/go-store r14 committed by gustavo@niemeyer.net
<_mup_> Bootstrapping store package.
<hazmat> fwereade, niemeyer interesting about col/name/rev uniqueness.. one of the bugs/usability things for charm authors, is being able to do away with constant rev increments for iteration and just relying on hash
<niemeyer> hazmat: morning!
<hazmat> its something that bites pretty much every charm author
<fwereade> hazmat: indeed, but niemeyer has convinced me that auto-incrementing on upgrade from local repos should solve that
<niemeyer> hazmat: Yeah.. there are other ways to handle this without relying on hash, though.. read through :)
 * hazmat continues the backlog
<hazmat> long conversation indeed
<kim0> m_3: howdy .. please ping me hwen you're up
<_mup_> juju/go-store r15 committed by gustavo@niemeyer.net
<_mup_> Imported the mgo test suite setup/teardown from personal project.
<hazmat> niemeyer, so the conclusion is, for local repositories, always increment the version on deploy regardless of any change to the formula?
<niemeyer> hazmat: Not quite
<niemeyer> <niemeyer> fwereade: Hmm.. I think we can specialize the behavior to upgrade
<niemeyer> <niemeyer> fwereade: and make deploy consistent for local/remote
<niemeyer> <niemeyer> fwereade: In deploy, if there's a charm in the env, use it no matter what
<niemeyer> <niemeyer> fwereade: Well, assuming no revision was provided, which is always true nowadays
<niemeyer> <niemeyer> fwereade: In upgrade, if it is local, bump the revision to the revision currently deployed (in the *env*) + 1
<niemeyer> hazmat: ^
<hazmat> hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded)
<niemeyer> hazmat: True
<hazmat> that's part of what bites people, lack of discovery into the problem till they go inspecting things
<niemeyer> <hazmat> hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded)
<niemeyer> fwereade: ^
<niemeyer> LOL
<niemeyer> <niemeyer> hazmat: True
<niemeyer> <niemeyer> <hazmat> hmm. also we should log at info level the formula we're using on deploy (already in env, vs uploaded)
<niemeyer> <hazmat> that's part of what bites people, lack of discovery into the problem till they go inspecting things
<niemeyer> fwereade: ^^^
<fwereade> niemeyer, hazmat: sounds sensible
<hazmat> auto increment on upgrade sounds good
<hazmat> the upgrade implementation is pretty strict on newer versions, which is why i punted on a hash based approach, it was hard to maintain that notion
<niemeyer> hazmat: Agreed.  The hash stuff sounds interesting to detect coincidences for sure, but the detail is that it won't really solve the problems we have.. we need to consider larger versions anyway, and need to be able to update the previous deployment
<niemeyer> ... without manual interaction
<niemeyer> So for now it feels like the auto-increment upgrade is enough
<niemeyer> fwereade: When do you think the new CharmURL & CharmCollection abstractions will be available?
<niemeyer> fwereade: Just want to sync up because I'd like to have a look at them before mimicking in Go
<niemeyer> fwereade: So we match logic
<fwereade> niemeyer: hopefully EOmyD, but I'm not quite sure when that will be
<niemeyer> fwereade: Cool, thks
<fwereade> niemeyer: certainly before strot of your day tomorrow though
<niemeyer> fwereade: Ok
<fwereade> gaah *start* of your day
<niemeyer> fwereade: Are you planning on doing any modifications to the suggested API?
<fwereade> I think I'm happy with everything you proposed
<niemeyer> fwereade: Awesome, I'll get started on it then
<fwereade> niemeyer: I'll let you know ASAP if I come up with any problems
<niemeyer> fwereade: Superb, cheers
<niemeyer> fwereade: Will do the same on my end
<m_3> kim0: hey man... what's up?
<_mup_> juju/config-juju-origin r358 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<hazmat> fwereade, do you know if the orchestra machines generally have usable fqdns?
<fwereade> hazmat: better check with roaksoax, but I don't think you can guarantee it
<fwereade> hazmat: context?
<hazmat> fwereade, niemeyer, re the delta with local/lxc vs orchestra on address retrieval.. with local the fqdn isn't resolvable, but the ip address is routable and there  is a known interface. with orchestra the number of nics on a machine isn't knowable, but i was hoping we could say fqdns are resolvable
<fwereade> hazmat: IIRC the dns_name should work from other machines, but I don't think we have any guarantees about how it works from outside that network
<hazmat> this is also per SpamapS's comments on the original implementation that we should favor fqdn over ip address, which neatly sidesteps ipv4 vs ipv6 behind dns
<niemeyer> hazmat: We can't guarantee it ATM
<niemeyer> hazmat: Most of the tests I recall were done with IP addresses
<hazmat> niemeyer, on the address branch its all just a popen... local with ip, ec2 and orchestra with fqdn hostnames
<niemeyer> hazmat: The fully qualified domain will also not resolve the problem.. it may have multiple nics despite the existence of a fqdn
<hazmat> niemeyer, multiple nics is fine if the fqdn is resolvable
<niemeyer> hazmat: I believe it's not.. it'll resolve to an arbitrary ip address
<niemeyer> hazmat: Which may not be the right one if a machine has multiple ips
<niemeyer> hazmat: ec2 is a different case..
<niemeyer> hazmat: We know what we're doing there
<hazmat> niemeyer ? hostname -f  returns the fqdn of the host regardless of multiple nics
<SpamapS> For multiple NIC's, the FQDN should resolve to the NIC that you wish the host to be externally reachable on...
<hazmat> which is what we do for orchestra
<niemeyer> hazmat: hostname -f returns *a* name, that may be resolvable or not, and that may map to the right ip or not
<SpamapS> I *can* see a situation where you have a management NIC, and a service NIC .. each needing different handling.
<hazmat> SpamapS, we've got separation of public/private addresses for units, but getting those addresses on orchestra deployments is the question
<hazmat> doesn't seem like we can do that a priori
<SpamapS> Indeed. DNS is the only reliable way, IMO, to handle something so loosely coupled.
<niemeyer> hazmat: I suggest checking with smoser and RoAkSoAx then
<niemeyer> hazmat: If they're happy, I'm happy :)
<koolhead11> hi all
<niemeyer> koolhead11: Hey!
<koolhead11> hello niemeyer
<rog> niemeyer: one merge proposal sent your way: https://code.launchpad.net/~rogpeppe/gozk/update-server-interface/+merge/77009
<niemeyer> rog: Woohay, cheers!
<koolhead11> SpamapS: i got some idea how not to use dbconfig-common :)
<rog> niemeyer: (ignore the first one, i did the merge the wrong way around)
<niemeyer> rog: The first one?
<SpamapS> I think IP's grokked from the network provider are usable... EC2 knows which one is externally available vs. internal, and the provider has full network control, so you can take that IP and use it confidently. Orchestra has no such guarantees, so the hostname that we gave to the DHCP server and that we built from its DNS settings is the only meaningful thing we can make use of.
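For the EC2 side of that, both names come straight from the instance metadata service; a minimal sketch (Python 2 era, hence urllib2):

    import urllib2

    METADATA = "http://169.254.169.254/latest/meta-data/"

    def ec2_addresses():
        # EC2 reports the externally reachable and the internal name itself,
        # so the provider can record both without inspecting the machine.
        public = urllib2.urlopen(METADATA + "public-hostname").read().strip()
        private = urllib2.urlopen(METADATA + "local-hostname").read().strip()
        return public, private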
<SpamapS> koolhead11: progress is good. :)
<koolhead11> SpamapS: yeah. :D
 * koolhead11 bows to Daviey 
<SpamapS> For servers with multi-NIC, the only real thing we can do is use a cobbler pre-seed template that selects the most appropriate one. Making use of multiples for mgmt/service seems like something we'll have to do as a new feature.
<rog> niemeyer: hold on, i think i mucked up. too many versions flying around.
<niemeyer> rog: No worries
<rog> gozk/zk vs gozk vs gozk/zookeeper
<rog> niemeyer: no, it's all good i think
<niemeyer> rog: Coolio
<rog> niemeyer: i just did a dud directory rename, but i don't think it affects what you'll see
<niemeyer> RoAkSoAx: We were just talking about ips vs hostnames in the context of orchestra units
<niemeyer> RoAkSoAx: hazmat has more details
<koolhead11> hello robbiew RoAkSoAx
<niemeyer> I'm going to step out for lunch and leave you guys with trouble!
<RoAkSoAx> niemeyer: ok
<RoAkSoAx> niemeyer: im on a sprint atm
<RoAkSoAx> hazmat: ^^
<niemeyer> RoAkSoAx: It's quick
<niemeyer> RoAkSoAx: But important
 * niemeyer biab
<hazmat> RoAkSoAx, just trying to determine if on an orchestra launched machine we can assume either a routable hostname (fqdn) or nic for recording an address to the machine
<hazmat> ie. if something like hostname -f is useable to reach the machine from another machine in the orchestra environment
<hazmat> i assume the orchestra server is just tracking mac addresses on the machine
<RoAkSoAx> hazmat: yes the orchestra server is tracking the MAC address
<RoAkSoAx> hazmat: we always have to track it
<RoAkSoAx> hazmat: though, we were making sure hostnames were fqdn as a standard and that it was set correctly
<RoAkSoAx> hazmat: via cloud-init
<RoAkSoAx> smoser: ^^
<RoAkSoAx> hazmat: the idea is to use a DNS reacheable name for each machine that's fqdn
<hazmat> RoAkSoAx, if thats the case that's perfect.. fqdn == hostname that is
<RoAkSoAx> hazmat: yes that's what we have been trying to standardize over the last couple of weeks. Give me a few minutes till I get a hold of a few people here
<RoAkSoAx> hazmat: and discuss the approach
<SpamapS> hazmat: its fair to say that we should take a look at other strategies for addressing services and machines as we get deeper in to the hardware deployment story...
<SpamapS> hazmat: for this primary pass, making it work "a lot like the cloud" is the simplest approach.
<smoser> for what its worth, you really should not expect that 'hostname --fqdn' gives an addressable hostname
<SpamapS> smoser: we have no other reliable source of data about what this machine's name is.
<smoser> i believe we've fixed it so that will be the case under orchestra, and in EC2 (and we're fixing that for single nic guests in nova).
<SpamapS> The fact that it wasn't happening was a bug.
<smoser> no.
<smoser> in those limited cases, that is the case.
<smoser> but 'hostname --fqdn' is just not reliable.
<smoser> read the man page if you disagree.
<smoser> it basically says not to use it
<smoser> so i would really suggest against telling charms that the right way to do something is something that documents itself as the wrong way
<smoser> :)
<smoser> i dont have a solution
<SpamapS> smoser: Indeed, this is the first time I've actually read this.. I wonder how recently this changed. :-/
<SpamapS> I don't know if I agree with the man page's reasoning or with the mechanics of --all-fqdns
<SpamapS> "Don't use this because it can be changed" vs. "Rely on reverse DNS instead" ...
<smoser> if you're depending on cloud-init (which you are for better or worse), we can put something in it , or an external command that would basically query the metadata provided by the cloud provider to give you this.
<smoser> i would i guess suggest making a ensemble command "get-hostname" or something
<SpamapS> smoser: Its something we can control (since we control the initial boot of the machine) which ripples through and affects everything else on the machine.
<SpamapS> I believe the plan is to have some sort of "unit info" command for charms to use.
<smoser> you do not control the initial boot of the machine.
<smoser> you do not control the dns.
<smoser> so how could you possibly control resolution of a name to an IP?
<SpamapS> smoser: We do control what we've told the provisioner to do .. which is to name that box "X"
<smoser> no you do not
<smoser> not on ec2
<SpamapS> cobbler does
<smoser> right.
<smoser> but stay out of that
<smoser> that would mean that ensemble is acting as the cloud provider in some sense when it talks to cobbler
<smoser> which is just yucky.
<SpamapS> we don't put the hostname in the metadata for the nocloud seed?
<smoser> not any more
<smoser> cobbler does
<smoser> ensemble does not
<smoser> which is much cleaner
<smoser> s/ensemble/juju/
<SpamapS> Can we ask cobbler what it put there?
<smoser> or s/cleaner/more ec2-or-nova-like/
<smoser> you *can*, but you should not.
<smoser> oh
<SpamapS> Ok.. where then should we get the address for the machine?
<smoser> wait
<smoser> yes
<smoser> you can ask cobbler what it put there
<smoser> sorry
<SpamapS> can and should I think
<smoser> yes
<smoser> :)
<smoser> sorry
<smoser> i thought you were saying "Can we tell cobbler what to put there"
<SpamapS> I'm not enthralled with hostname --fqdn. It is, however, the only common tool we have between all environments at the moment.
<smoser> well its easy enough to add a tool
<smoser> that lives on the nodes
<SpamapS> I think it might actually be quite trivial to write a charm tool ... 'machine-info --hostname' which gives us the hostname the provider wants us to be contacted with.
<smoser> the other thing, i think might be reasonable to consider, if you're only interested in single-cloud systems, would be to have juju run a dns server.
<smoser> SpamapS, right. that is what i'm suggesting is fairly easy.
<SpamapS> Too tightly coupled to juju at that point
<smoser> right
<SpamapS> If an environment can't provide reliable DNS then it should just give us network addresses when we ask for the hostname.
<smoser> i agree with this.
<SpamapS> I believe thats the direction the local provider has gone
<smoser> why do you care about a hostname ?
<smoser> just curious
<smoser> would it not be superior to always be IP ?
<SpamapS> definitely not
<smoser> (assuming that the IP would not change)
<smoser> why?
<SpamapS> IP can vary from your perspective
<SpamapS> a hostname provides the appropriate level of indirection
<smoser> somewhat.
<smoser> but in all cases you are aaware of so far, the IP address of the system is what you want.
<smoser> ie, in all of cobbler, nova, ec2, 'ifconfig eth0' returns an internally addressable IPv4 address.
<SpamapS> IPv4 or IPv6? internal or external?
<smoser> you are interested in IPv4 internal
<SpamapS> usually
<smoser> you're 100% only interested in internal if you're using hostname --fqdn
<smoser> so that leaves you only ipv4 and ipv6
<SpamapS> I'm not saying we can't use IP's, I'm saying we need to talk about *hosts*
<smoser> ec2 has no ipv6
<smoser> so now you're down to nova (which i know you've not tested ipv6 of) and cobbler, which i highly doubt you have
<smoser> machine-info --hostname
<SpamapS> You're getting all pragmatic on me.
<smoser> just return ipv4 internal ip address.
<smoser> no
<hazmat> so this is below the level of a charm
<SpamapS> Like what, you want to ship something *now* ?
<smoser> i dont understand the question
<hazmat> juju is going to prepopulate and store the address, we just need to know how to get it on an orchestra machine
<smoser> no
<hazmat> i was hoping hostname -f would do.. seems like it won't
<smoser> do not do that hazmat
<smoser> that is broken
<smoser> juju should *NOT* prepopulate the address.
<smoser> juju is not orchestra
<smoser> it can query, it does not set or own.
<hazmat> smoser, sorry wrong context.. juju was going to store the address from the provider for the charm
<SpamapS> smoser: I'm being a bit sarcastic. Yes, all currently known use cases are satisfied with IP's. But all of them also *should* have hostnames, and we shouldn't ignore the need for hostnames just because we can.
<hazmat> smoser, the question is how to get the address
<smoser> i'm fine with wanting to have hostnames
<smoser> you can hide that cleanly behind a command
<smoser> in which right now, you're assuming that command is 'hostname --fqdn'
<smoser> which is documented as broken
<smoser> so i'm suggesting adding another command
<smoser> which does the same general thing, but works around a bug or two
<smoser> and may, in some cases, return an ipv4 address.
<hazmat> smoser, that command is?
<smoser> 'machine-info --hostname'
<SpamapS> hazmat: we've talked about a "machine info" or "unit info" script before.
<SpamapS> I think you want unit info, not machine info.
<smoser> which you add as a level of abstraction into the node
<smoser> fine
<hazmat> SpamapS, that doesn't answer the question of how that command gets the info
<hazmat> ie. how do we implement machine-info's retrieval of the address
<smoser> hazmat, right now, it does this: echo $(hostname --fqdn)
<smoser> that makes it 100% bug-for-bug compatible with what you have right now
<smoser> but is fixable in one location
<SpamapS> hazmat: it queries the provider (or, more likely, queries the info we cached in the zk topology when the machine/unit started)
<smoser> SpamapS, is right.
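A sketch of the indirection being proposed; the machine_hostname/provider_lookup names are hypothetical, not an existing juju command:

    import subprocess

    def machine_hostname(provider_lookup=None):
        # Prefer an address supplied by the provider (EC2 metadata, cobbler,
        # or whatever juju cached when the machine was provisioned)...
        if provider_lookup is not None:
            return provider_lookup()
        # ...otherwise stay bug-for-bug compatible with today's behaviour.
        return subprocess.check_output(["hostname", "--fqdn"]).strip()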
<hazmat> so for local and ec2 providers, we have known solutions, it's the orchestra case where it's not clear what we should do
<smoser> in the orchestra provider 'hostname --fqdn' works
<smoser> and i thought we had (or i think we should) insist that the machine's "hostname" in cobbler is the fqdn internal address.
<smoser> so ensemble can just query that from cobbler
<smoser> afaik, the only place broken right now is in nova
<smoser> due to bug 854614
<hazmat> smoser, does cobbler have any notion of external/public addresses? or just hostnames for a given mac addr
<smoser> which will be fixed
<_mup_> Bug #854614: metadata service local-hostname is not fqdn <server-o-rs> <OpenStack Compute (nova):In Progress by smoser> <nova (Ubuntu):Confirmed> < https://launchpad.net/bugs/854614 >
<smoser> RoAkSoAx would know more, but whatever it is, you assert that in some portion of the machine's metadata, a fqdn exists for the internal address.
<smoser> and you use it
<smoser> i dont have cobbler in front of me to dump machine data. but i think it is a reasonable assertion.
<SpamapS> wow, --all-fqdns /win 24
<SpamapS> doh
<SpamapS> so --all-fqdns is pretty new
<SpamapS> Appeared just before 9.10 I think
<smoser> its really all messed up.
<smoser> and it doesn't help you
<smoser> as it doesn't sort them in any order (how could it?)
<SpamapS> yeah its not useful
<smoser> so how can you rely on its output
<SpamapS> providers need to tell us how a machine they're responsible for is addressable
<smoser> right.
<smoser> and we just assert at the moment that cobbler stores that in (i think) 'hostname'
<SpamapS> And then the external and internal IP's are both the result of querying DNS for that hostname.
<smoser> i dont follow that.
<smoser> i didn't know external ip was something that was being discussed.
<SpamapS> Just thinking of analogs for ec2's metadata
<SpamapS> Its needed
<SpamapS> for expose
<smoser> i agree it would be needed...
<SpamapS> For orchestra, all the firewall stuff is noop'd though
<smoser> i really have to look at nova to find a good place for this.
<smoser> but basically i think we just need to store it there and assert that it is configured sanely.
<SpamapS> I believe there's a desire to move that FW management to the agents managing ufw/iptables .. but for now, providers have to do it, and orchestra can't.
<SpamapS> yes, hostname in cobbler is the canonical source of the machine's hostnanme
<SpamapS> and Mavis Beacon is the canonical source of my bad typing
<smoser> i think for our near term purposes cobbler no op is fine for firewall
<hazmat> agreed
<hazmat> SpamapS, so hostname -? is fine for cobbler for the private address.. and hopefully the public address?
<smoser> almost certainly not the public address.
<hazmat> smoser, its not clear what a public address means in orchestra.. its outside the purview of the provider
<smoser> hazmat, well, sort of
<smoser> clearly orchestra could have that data
<smoser> and could provide it to you
<smoser> but i dont think we have a place where we assert it is stored now.
<SpamapS> orchestra does not imply whether it has public/private networks.
<SpamapS> Its really not all that interesting, just return hostname for anything wanting to address the machine.
<smoser> good enough for me.
<smoser> so i do suggest the layer of indirection over 'hostname --fqdn'
<SpamapS> And I'll open up a bug for the desired charm tool
<SpamapS> smoser: agreed, will open up that bug now
<hazmat> SpamapS, the common use for that is going away
<hazmat> SpamapS, the relations will be prepopulated with the info
<hazmat> although we still need a way to query it agreed
<hazmat> at the unit level
<SpamapS> Right, is there a bug for that then?
<SpamapS> or will it be a reserved variable in relation-get ?
<hazmat> SpamapS, not yet.. but the units-with-addresses branch does the work of storing it directly on the units (pub/private) address in provider specific manner
<hazmat> SpamapS, just a prepopulated one
<SpamapS> I like that
<hazmat> i just needed to verify that hostname --fqdn does something sane w/ orchestra
<hazmat> and it seems like thats what we should use for now
<hazmat> which is nice, since that's whats implemented for orchestra
<niemeyer> Wow.. long thread
<SpamapS> hazmat: since all the charms currently rely on it, its been made to work that way. But as we've discussed here, its not really robust as a long term solution.
<hazmat> smoser, RoAkSoAx does that mean that bug 846208  is fixed?
<_mup_> Bug #846208: Provisioned nodes do not get a FQDN <juju:New> <orchestra (Ubuntu):New> < https://launchpad.net/bugs/846208 >
<hazmat> wrt to orchestra
<hazmat> SpamapS, agreed, but getting it out of the charms, goes a long way to giving us the flexibility to fix it
<SpamapS> niemeyer: yeah, when you get Me, the tire kicker, and smoser, Mr. Meh, talking about something.. the threads tend to go back and forth with a lot of "NO, no, no NO No, no, ahh, yes."
<niemeyer> SpamapS: That's a nice way to get something proper in place..
<smoser> adam_g, probably knows about 846208 but i would have thought yes.
<SpamapS> speaking of long term and short term... I'm hoping to file the FFE tomorrow.. where are we at?
<hazmat> SpamapS, this is probably the closest bug 788992
<_mup_> Bug #788992: example formulas refer to providing the hostname in ensemble itself <juju:New> < https://launchpad.net/bugs/788992 >
<smoser> at very least, i'm fairly sure that 'hostname -f' should do the right thing there now.
<hazmat> smoser, cool
<RoAkSoAx> yeah that bug was fixed already ##846208 will verify now that im here
<_mup_> Bug #846208: Provisioned nodes do not get a FQDN <juju:New> <orchestra (Ubuntu):New> < https://launchpad.net/bugs/846208 >
<hazmat> SpamapS, we're very close on local dev.
<hazmat> bcsaller, how's it going?
<bcsaller> hazmat: I was just reading back the channel actually
<SpamapS> Awesome
<bcsaller> hazmat: have you tried the branch yet?
<hazmat> bcsaller, not yet.. i'll do so now
<hazmat> bcsaller, what's the url for the stats on apt-cacher-ng?
<bcsaller> http://localhost:3142/acng-report.html
<SpamapS> hazmat: btw did those tests get fixed?
<hazmat> SpamapS, which tests?
<SpamapS> hazmat: lxc tests IIRC
<SpamapS> the ones that were blatantly broken last week in trunk
<hazmat> SpamapS, oh yeah.. the breakage, indeed they're fixed.. trunk is green
<SpamapS> cool
<SpamapS> I've been doing regular uploads to my PPA with the distro packaging, which runs the test suite... that was blocking those from working. :p
<hazmat> bcsaller, i'm seeing some oddities around namespace passing which is breaking lxc-ls, but the units are up and running
<bcsaller> hazmat: I'll need details ;)
<hazmat> bcsaller, i'll dig into it
<hazmat> bcsaller, but it appears to be working
<bcsaller> hazmat: in an older version it wasn't setting the ns to the qualified name and created images without a prefix, but that was fixed
<hazmat> bcsaller, ah.. that looks like the problem
<hazmat> sounds like
<bcsaller> hazmat: you didn't pull?
<hazmat> bcsaller, i probably need to remerge your branch
<bcsaller> sounds like
<hazmat> bcsaller, i've been pulling your branch and looking over the diff, but i don't think i've remerged it into the rest of the pipeline
<bcsaller> then I'm surprised it worked. I expect the services in the container didn't actually start for you
<bcsaller> hazmat: that 'conf' change was missing too I expect
<hazmat> bcsaller, does the template container get the namespace qualifier?
<bcsaller> no, there are some advantages and disadvantages there
<bcsaller> I expect there will be debate around that point in the review
<bcsaller> I guess it *should* though, I can think of many ways it can go wrong for people
<bcsaller> vs being a cost savings for the well behaved. It should also have things like series name in it I expect
<hazmat> bcsaller, the question is can we get this stuff landed today for push to the repos tomorrow, is there anything i can help with?
<hazmat> i think all my branches are approved at this point, i've got one last minor to update the provider name, and prepopulate the relations with the unit address
<hazmat> bcsaller, latest revno is 404 on omega?
<bcsaller> Idk, can't find it
<bcsaller> ;)
<bcsaller> yeah, thats it
<hazmat> bcsaller, getting pty allocation errors, just had a kernel upgrade going to try a reboot
<hazmat> unit agents aren't running
<hazmat> conf file looks fine
<_mup_> juju/config-juju-origin r359 committed by jim.baker@canonical.com
<_mup_> Add support for get_default_origin
 * rog is done for the day. see y'all.
<niemeyer> rog: Cheers!
<hazmat> bcsaller, the container unit agents never start, and i get pty allocation errors trying to login manually
<bcsaller> hazmat: sounds like what you were having at the sprint
<bcsaller> hazmat: what was the resolution to that?
<hazmat> bcsaller, upgrading to oneiric
<hazmat> i don't think that works twice ;-)
<bcsaller> darn
<hazmat> currently on lxc == 0.7.5-0ubuntu8
<bcsaller> same
<bcsaller> hazmat:  the lxc-library tests do or don't trigger this issue for you?
<hazmat> bcsaller, are you specing the origin somehow?
<hazmat> bcsaller,  the lxc lib tests fail in omega for me
<niemeyer> SpamapS: What is the set of valid charm names we're going to support?
<niemeyer> SpamapS: foo(-bar)*?
<niemeyer> Or, more properly "^[a-z]+([a-z0-9-]+[a-z])*$"
<niemeyer> fwereade, bcsaller, hazmat, anyone: ^^^?
<SpamapS> niemeyer: yes that looks exactly right
<SpamapS> basically the hostname spec. ;)
<SpamapS> but no capitals
<SpamapS> +1
<bcsaller> niemeyer: looks fine to me, might need [-_]
<hazmat> sounds good
<niemeyer> bcsaller: It contains - already
<SpamapS> no _'s
<SpamapS> one visual separator is fine
<bcsaller> ahh 0-9-, ic
<hazmat> bcsaller, do you have some delta in your omega branch that's not pushed?
<bcsaller> hazmat: no
<hazmat> bcsaller, i get test failures.. it looks like around juju package install
<bcsaller> origin should be ppa at this point, I think thats what it says in the code, I'll check again
<niemeyer> fwereade: In case you are around, these will be useful:
<niemeyer> var validUser = regexp.MustCompile("^[a-z0-9][a-zA-Z0-9+.-]+$")
<niemeyer> var validSeries = regexp.MustCompile("^[a-z]+([a-z-]+[a-z])?$")
<niemeyer> var validName = regexp.MustCompile("^[a-z]+([a-z0-9-]+[a-z])?$")
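The same patterns in Python, with a couple of the cases discussed above (the exact rules are whatever the spec finally settles on):

    import re

    valid_user = re.compile(r"^[a-z0-9][a-zA-Z0-9+.-]+$")
    valid_series = re.compile(r"^[a-z]+([a-z-]+[a-z])?$")
    valid_name = re.compile(r"^[a-z]+([a-z0-9-]+[a-z])?$")

    assert valid_name.match("mysql")
    assert valid_name.match("apache2-mod-php")
    assert not valid_name.match("MySQL")    # no capitals
    assert not valid_name.match("my_sql")   # one visual separator only: "-"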
<hazmat> bcsaller, http://paste.ubuntu.com/697431/
<bcsaller> hazmat: so either the origin isn't ppa, the networking isn't working or...
<hazmat> bcsaller, the networking is working at least packages are being installed
<bcsaller> hazmat: and you said you can't ssh into the container? I'd try to run the juju-create script, it will be some /tmp/xxxxx-juju-create script in the container and follow the output
<hazmat> bcsaller, also when the tests fail they leave an orphan container
<niemeyer> jimbaker: any chance of getting env-origin landed today?
<jimbaker> niemeyer, i'm working on the mocks for this. once done, it will be ready for review
<niemeyer> jimbaker: Ugh..
<jimbaker> niemeyer, so pretty close i would say
<niemeyer> jimbaker: "working on the mocks" gives me bad feelings nowadays, for some reason
<jimbaker> niemeyer, well as i understand i need to mock out apt-cache policy for the various cases
<niemeyer> jimbaker: Not really.. that's a pretty side-effects free problem to solve
<jimbaker> niemeyer, how would we test in the case of being on a distro vs one where it was installed from the ppa? or in the case of being installed from a branch?
<niemeyer> jimbaker: origin, source = parse_juju_policy(data)
<jimbaker> niemeyer, but we still need to run apt-cache policy in order to collect the necessary data. isn't this the role for the mock, to intercept this call with some variations of what it could return?
<niemeyer> jimbaker: There's a single test needed for actually calling apt-cache, and that's also trivial to automate without mocking by putting an executable in the path.
<niemeyer> jimbaker: I won't fight if you decide to mock this one
<niemeyer> jimbaker: But mocking every single iteration of parse_juju_policy is making our lives more painful without a reason
<niemeyer> jimbaker: It's a side-effects free process
<niemeyer> jimbaker: and it's idempotent
<niemeyer> jimbaker: If you need mocker for that I'll take the project page down! :-)
<jimbaker> niemeyer, i will rewrite it according to what you have described, it's not a problem
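Roughly the shape being suggested: keep the apt-cache call trivial and push the interesting logic into a pure text-parsing function that tests can feed canned strings (the matching rules below are illustrative; the actual function discussed above returns an origin/source pair):

    import subprocess

    def parse_juju_origin(policy_output):
        # Pure function over the text of `apt-cache policy juju`: no mocks
        # needed, just pass in sample output in tests.
        if "Installed: (none)" in policy_output:
            return "branch"   # not installed as a package; likely a source branch
        if "ppa.launchpad.net" in policy_output:
            return "ppa"
        return "distro"

    # The only side-effecting piece stays one line:
    # origin = parse_juju_origin(subprocess.check_output(["apt-cache", "policy", "juju"]))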
<hazmat> bcsaller, are you sure you dont have something in /var/cache/lxc that  makes it work for you?
<hazmat> bcsaller, i just blew away my cache and its still failing on the tests
<bcsaller> hazmat: I'll try to clean that out and check again
<bcsaller> take a few minutes
<hazmat> bcsaller, did it work?
<bcsaller> bootstrap is still going, w/o cache.
<bcsaller> so for me it hit the test timeout
<bcsaller> but I'm letting it build the cache outside the test now
<hazmat> bcsaller, you on dsl?
<hazmat> it didn't hit the test timeout for me.. but still failed
<bcsaller> cable
<bcsaller> the unpacking phase took too long oddly
<bcsaller> hazmat: I am seeing errors now, I'll look into it more
<hazmat> bcsaller, cool, thanks
<hazmat> bcsaller, as far as i can see ppa is selected across the board
<bcsaller> looked that way to me as well
<hazmat> oh wait its wrong archive
<hazmat> haha
<hazmat> i thought that got fixed in this branch, but you had it cached
<hazmat> bcsaller, niemeyer pointed it out to me in a review
<hazmat>  bcsaller nevermind that looks sane for the ppa
 * hazmat grabs some lunch
<hazmat> er.. snack
<bcsaller> yeah, I didn't know what you were talking about there :)
<bcsaller> hazmat: pushed changes to both lxc-lib and omega, it was a missing dep that was cached for me :(
 * bcsaller looks for a brown paper bag
<hazmat> bcsaller, cool, just glad its fixed
 * hazmat retries
<SpamapS> Hmm.. getting sporadic failures of one test..
<SpamapS> https://launchpadlibrarian.net/81106645/buildlog_ubuntu-oneiric-i386.ensemble_0.5%2Bbzr361-0ubuntu1~ppa1_FAILEDTOBUILD.txt.gz
<SpamapS> juju.agents.tests.test_unit.UnitAgentTest.test_agent_executes_config_changed_hook
<jrings> Hi, I have a problem trying to get juju to connect to EC2. I described it here http://ubuntuforums.org/showthread.php?t=1849913 but also with the new version today it is still the same. I can bootstrap, a new instance is created in EC2, but in juju status the connection is refused
<jrings> Cannot connect to machine i-48751428 (perhaps still initializing): could not connect before timeout after 2 retries 2011-09-26 14:03:34,431 ERROR Cannot connect to machine i-48751428 (perhaps still initializing): could not connect before timeout after 2 retries
<SpamapS> jrings: hey, the key that juju uses by default is $HOME/.ssh/id_(rsa|dsa)
<jrings> How can I tell juju to use the .pem from EC2?
<SpamapS> jrings: you don't need to
<SpamapS> jrings: it installs your key in the instances
<jrings> Well my key is in $HOME/.ssh
<jrings> and the juju bootstrap works
<jrings> why can't juju status connect then?
<SpamapS> bootstrap completes w/o ssh
<SpamapS> its possible your key didn't make it into the instance for some reason
<SpamapS> jrings: can you pastebin ec2-get-console-output ?
<hazmat> if it couldn't find a key during bootstrap it would raise an exception
<jrings> Is that the same as the log for the instance in the EC2 webconsole?
<jrings> If so, here: http://pastebin.com/4c78GVC9
<SpamapS> jrings: heh, it takes a few minutes to get the full log .. so you might have to wait a bit longer.
<SpamapS> Or maybe there's a limit to the size.. I've never checked
<SpamapS> (that would suck if the limit was applied to the top.. and it wasn't updated like a ring buffer)
<hazmat> hmm.. this line 2011-09-25 10:24:11,882 ERROR SSH forwarding error: bind: Cannot assign requested address
<hazmat> is interesting
<jrings> that's what I get from the juju status
<hazmat> we pick a random open port on localhost to setup a port forward over ssh
<SpamapS> conflicting with desktop-couch ?
<hazmat> it looks like that fails, although for it to fail persistently suggests something else is going on
<SpamapS> which does the same thing
<SpamapS> Yeah true
<SpamapS> hazmat: does it definitely do 127.0.0.1 ?
<jrings> Yes I can see it trying different ports.
<SpamapS> jrings: can you paste the output of 'ifconfig -a' ?
<jrings> Wait, I set up a single node hadoop locally and had to change something to localhost
<jrings> eth1      Link encap:Ethernet  HWaddr f0:4d:a2:5f:5c:09             UP BROADCAST MULTICAST  MTU:1500  Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0           collisions:0 txqueuelen:1000            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)           Interrupt:41 Base address:0xa000   lo        Link encap:Local Loopback             inet addr:127.0.0.1  Ma
<jrings> ugh
<jrings> wait
<jrings> Here: http://pastebin.com/Vpp3hJPt
<SpamapS> hrm
<SpamapS> jrings: ufw running?
<SpamapS> can't imagine that would break it tho
<jrings> Just did a ufw disable and tried again, same result
<jrings> Oh shit I got it
<jrings> I had Rstudio installed
<jrings> it had a server on 127.0.0.1:8787
<jrings> just uninstalled it, juju status works
<jrings> no wait
<jrings> actually it doesn't
<jrings> argh
<SpamapS> that doesn't make sense. :-/
<jrings> weird
<jrings> I got
<jrings> 2011-09-26 14:39:06,972 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="ec2-174-129-58-110.compute-1.amazonaws.com" remote_port="2181" local_port="58376". 2011-09-26 14:39:08,981:6112(0x7f2eadf27720):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3 2011-09-26 14:39:08,981:6112(0x7f2eadf27720):ZOO_INFO@log_env@662: Client environment:host.name=vavatch 2011-09-26 14:39:08,981:6112(0x7f2
<SpamapS> jrings: can you do 'strace -e trace=listen,bind,connect -f juju status' and paste that? (note that the command 'pastebinit' is really nice for this)
<jrings> one time
<jrings> and then the next juju status failed again
<hazmat> SpamapS, it picks the open port from all interfaces but binds to it on localhost
<hazmat> although i recently added an SO_REUSEADDR flag .. it should still be random each run
<SpamapS> hazmat: literally looks up 'localhost' or uses 127.0.0.1 ?
<hazmat> it does a bind socket.bind("", 0)
<SpamapS> wait, isn't it an ssh forward?
<hazmat> SpamapS, ah.. yeah.. for the port forward it explicitly uses localhost
<hazmat> 'localhost'
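Roughly how the port-forward setup described above looks, sketched: grab a free port by binding to ("", 0), then use it as the local end of an ssh -L forward bound on localhost. Function names here are illustrative only:

    import socket
    import subprocess

    def pick_free_port():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", 0))                 # any interface, kernel-chosen port
        port = s.getsockname()[1]
        s.close()
        return port

    def forward_zookeeper(remote_host, remote_port=2181, user="ubuntu"):
        local_port = pick_free_port()
        # The local end of the tunnel is bound on localhost; if "localhost"
        # cannot be bound (odd /etc/hosts entry, IPv6-only resolution, etc.)
        # this is where "bind: Cannot assign requested address" would surface.
        cmd = ["ssh", "-N",
               "-L", "%d:localhost:%d" % (local_port, remote_port),
               "%s@%s" % (user, remote_host)]
        return local_port, subprocess.Popen(cmd)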
<SpamapS> jrings: pastebin 'ping -c 1 localhost'
<jrings> Here is the strace http://pastebin.com/Q0CPnDBr
<jrings> And the ping works http://pastebin.com/cwsep2NK
<_mup_> juju/go-charm-url r14 committed by gustavo@niemeyer.net
<_mup_> Implemented full-blown charm URL parsing and stringification.
<_mup_> Bug #860082 was filed: Support for charm URLs is needed in Go <juju:In Progress by niemeyer> < https://launchpad.net/bugs/860082 >
<jrings> connection on port 49486 worked
<jrings> is there a way to fix the port?
<SpamapS> jrings: it should work on pretty much any port thats not already used
<niemeyer> bcsaller, hazmat: How to build the base to review lxc-omega?
<SpamapS> Ugh
<SpamapS> txaws.ec2.exception.EC2Error: Error Message: Not authorized for images: [ami-852fedec]
<SpamapS> Have seen this before...
<SpamapS> stale image.. doh
<jrings> Does this try to use IP6?
<niemeyer> bcsaller, hazmat: I'm pushing it back onto Work in Progress.. there are multiple bases and no mention of what they are in the summary
<niemeyer> bcsaller: I've added an item about the file lock implementation there already
<hazmat> niemeyer, its lxc-library-clone->file-lock and local-provider-config
<_mup_> juju/config-juju-origin r360 committed by jim.baker@canonical.com
<_mup_> Unmocked tests in place
<_mup_> juju/config-juju-origin r361 committed by jim.baker@canonical.com
<_mup_> Added files to bzr
<niemeyer> hazmat: file-lock is not even in the kanban
<niemeyer> lxc-omega also changed since I last pulled it
<niemeyer> I'm going to hold off a bit since this is getting a bit wild
<hazmat> niemeyer, https://code.launchpad.net/~bcsaller/juju/filelock/+merge/75806
<hazmat> the change was a one-liner to address a missing package dep
<hazmat> that i found while trying it out
<niemeyer> That's fine, but things are indeed a bit wild.. missing branches in the kanban.. branch changing after being pushed, multiple pre-reqs that are not mentioned
<niemeyer> The file-lock branch should probably be dropped, unless I misunderstand what is going on there
<niemeyer> It's not really a mutex.. it'll explode if there are two processes attempting to get into the mutex region
<niemeyer> There's an implementation in Twisted already
<hazmat> niemeyer, it's meant to error if another process tries to use it, but yeah the impl in twisted is probably a better option
<niemeyer> hazmat: It feels pretty bad.. telling the user "Can't open file" with a traceback wouldn't be nice
<_mup_> juju/config-juju-origin r362 committed by jim.baker@canonical.com
<_mup_> PEP8, docstrings
<hazmat> bcsaller, the lxc.lib tests pass but there's still some errors getting units to run
<bcsaller> hazmat: what are you seeing?
<hazmat> bcsaller, one quickie the #! header on the juju-create is wrong, missing "/" before bin/bash
<hazmat> bcsaller, it looks like add-apt-repo still isn't installed on the container.. perhaps i had a leftover machine-0-template ..
<hazmat> cause juju isn't installed which i assume causes the problem
<bcsaller> I suspect thats the case
<hazmat> because it's missing a prefix it's not getting killed; i assume that has to be done by hand. it's going to cause problems as well if someone wants to use the series option
<hazmat> hmm we should be passing origin down from the provider to the machine agent
<hazmat> hmm.. the clone interface makes it rather hard to put in console and container logs
<hazmat> i guess just stuff the attrs back on
<_mup_> juju/lxc-omega-merge r398 committed by kapil.thangavelu@canonical.com
<_mup_> enable container logs, and trivial juju script header fix
<_mup_> juju/config-juju-origin r363 committed by jim.baker@canonical.com
<_mup_> Setup origin policy for affected EC2, Orchestra provider tests
<_mup_> juju/env-origin r360 committed by jim.baker@canonical.com
<_mup_> Reversed to r357
#juju 2011-09-27
<_mup_> juju/env-origin r361 committed by jim.baker@canonical.com
<_mup_> Merged trunk & resolved conflicts
<_mup_> juju/env-origin r362 committed by jim.baker@canonical.com
<_mup_> Reverted to trunk
<_mup_> juju/env-origin r363 committed by jim.baker@canonical.com
<_mup_> Merged config-juju-origin (new attempt)
<_mup_> juju/config-juju-origin r364 committed by jim.baker@canonical.com
<_mup_> Missing new file in bzr
<_mup_> juju/env-origin r364 committed by jim.baker@canonical.com
<_mup_> Merged config-juju-origin to get missing file
<niemeyer> Hohoho
<niemeyer> http://wtf.labix.org/
<niemeyer> http://wtf.labix.org/wtf/361/unittests.out
<hazmat> niemeyer, cool
<jimbaker> niemeyer, nice
<niemeyer> Will tweak the path a bit, and then will try to come up with a test that actually gets in touch with AWS
<bcsaller> niemeyer: given the comment about the lxc-lib [6] do you still feel the same about how it should be changed? I could move from iter over the internal dict to writing the keys explicitly but I don't really want to make the code there any larger unless you feel strongly
<niemeyer> bcsaller: I'm a bit sad about the slightly suboptimal handling of arguments there, but I'd be happy for that to be cleaned up after you feel happy with the release
<bcsaller> niemeyer: the provider writes its own values to the upstart job in the container, I'd rather see it all come from that script really, but we didn't get it synced up like that in time.
<bcsaller> niemeyer: I think with a little polish the script could be used to build out other providers as well though and hope we can move some of that out of the Python code
<_mup_> juju/provider-determines-placement r397 committed by kapil.thangavelu@canonical.com
<_mup_> revert pick_policy, provider determines placement
<niemeyer> bcsaller: Cool, that sounds like a nice directly
<niemeyer> direction!
 * niemeyer has brain issues typing today
<bcsaller> oh, me too, me too
<hazmat> argh.. keyboard interrupts in tests.. trial fail
<_mup_> juju/provider-determines-placement r398 committed by kapil.thangavelu@canonical.com
<_mup_> yank placement cli parameter per gustavo's suggestion.
<_mup_> juju/env-origin r365 committed by jim.baker@canonical.com
<_mup_> Doc changes
<_mup_> juju/env-origin r366 committed by jim.baker@canonical.com
<_mup_> Clarification on PPA support by juju-origin
<_mup_> juju/provider-determines-placement r399 committed by kapil.thangavelu@canonical.com
<_mup_> raise a providererror if the environment placement policy is not supported by the local provider
<_mup_> juju/trunk r362 committed by kapil.thangavelu@canonical.com
<_mup_> merge provider-determines-placement [r=niemeyer][f=855162]
<_mup_> In order to better support the local provider which only supports
<_mup_> a single placment strategy, this branch moves the determination
<_mup_> of placement to the provider (while respecting environments.yaml
<_mup_> config). This also removes the placement cli option.
<_mup_> juju/local-provider-config r396 committed by kapil.thangavelu@canonical.com
<_mup_> data-dir is required for local provider, drop storage-dir param
<_mup_> juju/trunk r363 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-provider-config [r=niemeyer][f=855260]
<_mup_> Exposes local provider via environments.yaml
<_mup_> juju/lxc-omega-merge r400 committed by kapil.thangavelu@canonical.com
<_mup_> merge pipeline and resolve conflict
<hazmat> bcsaller, is lxc-library-clone ready to merge?
<bcsaller> hazmat: niemeyer wanted some changes to how the config file is written, I'm making those now, but they are mostly minor
<hazmat> bcsaller, all the pre-reqs on my side are merged omega fwiw
<bcsaller> great
<hazmat> i'm going to move on to fixing origin
<bcsaller> ok
<_mup_> juju/local-origin-passthrough r404 committed by kapil.thangavelu@canonical.com
<_mup_> juju-origin is passed to agent
<hazmat> hmm
<hazmat> where is origin defined
<hazmat> oh.. its still juju-branch
<jimbaker> hazmat, please try using env-origin for juju-origin as an env option
<hazmat> jimbaker, is that branch ready?
<jimbaker> hazmat, yes it is
<hazmat> okay, i'll rebase on it
<jimbaker> hazmat, sounds good
<niemeyer> wtf@li167-23:~/ftests$ ./churn -f ec2
<niemeyer> 2011-09-26 23:47:11-04:00 Writing output to: /home/wtf/ftests/build/wtf/361
<niemeyer> 2011-09-26 23:47:11-04:00 Running test ec2-wordpress... OK
<niemeyer> OK=1 FAILED=0
<niemeyer> wtf@li167-23:~/ftests$
<niemeyer> !!!
<_mup_> juju/local-origin-passthrough r405 committed by kapil.thangavelu@canonical.com
<_mup_> merge env-origin
<hazmat> hmm.. we have two different implementations here
<hazmat> for juju-origin
<jimbaker> hazmat, how so?
<hazmat> jimbaker, lxc provider uses a shell script implementation for container initialization which also interprets origin
<hazmat> there is no cloud init in the container
<jimbaker> hazmat, i recall you mentioning that might be a good idea, to unify
<jimbaker> anyway, let me take a look at juju-origin in lxc
<hazmat> the problem is they have different values, i guess i can bridge them
<hazmat> okay enough for today, now that today is over.. bedtime
<jimbaker> hazmat, where is juju-orgiin defined in the lxc container stuff?
<jimbaker> juju-origin, to be precise ;)
<hazmat> jimbaker, different name .. but lib/lxc/data/juju-create
<jimbaker> hazmat, ok, i 'll take a look at that
<hazmat> jimbaker, not nesc
<hazmat> jimbaker, more important to get the branch merged
<hazmat> jimbaker, the lxc provider will need to bridge the values
<hazmat> since the containers aren't init with cloud-init
<jimbaker> hazmat, yeah, it should be fine from my cursory look
<niemeyer> Awww.. _almost_ an end-to-end ec2 test..
<niemeyer> Another try..
<niemeyer> Night all!
<fwereade> heh, the zookeeper documentation is fun
<fwereade> Ephemeral nodes are useful when you want to implement [tbd].
<fwereade> These can be used to [tbd].
<fwereade> For more information on these, and how they can be used, see [tbd]
<fwereade> hazmat: ping
<rog> fwereade: the documentation in the C bindings header file isn't bad
<rog> (see /usr/include/c-client-src/zookeeper.h)
<fwereade> rog: thanks, good to know
<fwereade> rog: since you've spoken, and therefore volunteered, can I talk at you about concurrent charm publishing for a moment?
<fwereade> :p
<rog> :-)
<rog> of course
<rog> "at" being the operative word
<fwereade> :p
<fwereade> ok
<rog> 'cos i'm not exactly fully up to speed on the charm thing yet
<fwereade> don't worry, I only started a couple of months ago myself
<rog> but discussion is always good for advancing state of knowledge...
<fwereade> any questions you may ask are likely to instructively expose my own ignorance ;)
<rog> i'll do my best
<rog> :-)
<fwereade> so, when you ask juju to deploy a charm, it will find the charm from somewhere (this step isn't directly relevant) and upload it to storage on the machine provider
<fwereade> in the case of EC2, this will be S3
<fwereade> when two people happen to ask the same juju environment to deploy the same charm, the situation warrants closer attention
<fwereade> (at the same time)
<rog> are we assuming that charm names are unique?
<fwereade> it is perfectly possible that the two charms will be different, despite sharing the same id, but we will assume for now that that won't happen
<rog> (presumably that was the origin of the extra hash code discussion yesterday, right?)
<fwereade> if you have two people deploying from local repos that don't match, "local:oneiric/mysql-1" does not uniquely identify a charm
<fwereade> yep, exactly
<fwereade> that's a pathological case, though, because local: charms are intended for development really
<rog> anyway, assuming names are unique...
<fwereade> yep :)
<fwereade> I think it seems like a good idea to ensure that a given charm is only uploaded once
<rog> how big can charms be?
<fwereade> so, for example, you and I each try to add a service using the charm "cs:oneiric/mysql-32767"
<fwereade> rog: I don't believe there's an explicit limit
<rog> so they could be huge. in which case, yeah, i think you're right.
<fwereade> although there is a practical limit at the moment because we can only upload so much to S3 in one go
<rog> still, people aren't gonna be happy about wasted bandwidth.
<fwereade> quite so
<fwereade> at the moment, the process is:
<fwereade> * upload to s3
<fwereade> * write state to a zk node
<rog> although... maybe people wouldn't be unhappy if a charm was uploaded exactly once for each user
<fwereade> * blow up if the node already exists
<fwereade> define user
<fwereade> s/user/environment and I'm happy
<rog> yeah, environment is good
<rog> but you've still got the same problem then, of course
<fwereade> ok, that algorithm definitely doesn't work given the above goals
<fwereade> please tell me what's wrong/overcomplicated/undercomplicated with the following suggestion
<rog> why not just choose a deterministic charm->s3 node name mapping?
<fwereade> I think we have one anyway, but for some reason I'm loath to assume that just because a file with the correct name exists it's necessarily the data we want
<fwereade> I'd prefer to depend on matching ZK state than mere file existence
<rog> i think you should. i don't think it's a problem, if the name is sufficiently unique
<rog> if the file exists with the right name and it hasn't got the right contents, then it's an error.
<rog> and it can be checked.
<rog> (assuming the name contains a hash of the charm's contents)
<fwereade> hmm, that takes us back to needing deterministic hashing
<rog> yup
<rog> i think that's a very useful primitive to be able to rely on.
<rog> (gustavo may beg to differ!)
<rog> i've found content-addressable stores to be very useful in the past for this kind of thing.
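A sketch of the content-addressed naming rog describes: derive the storage key from the bundle bytes, so identical content always maps to the same key and a key that already exists with different bits is detectably an error. The helper name is hypothetical:

    import hashlib

    def charm_storage_key(charm_url, bundle_bytes):
        digest = hashlib.sha256(bundle_bytes).hexdigest()
        # e.g. "cs:oneiric/mysql-32767" -> "cs_oneiric_mysql-32767-<sha256>"
        safe_name = charm_url.replace(":", "_").replace("/", "_")
        return "%s-%s" % (safe_name, digest)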
<fwereade> niemeyer didn't seem strongly in favour of it yesterday, indeed
<rog> hey, speak of the devil!
<fwereade> speak of the devil :)
<fwereade> jinx:p
<rog> niemeyer: hiya!
<fwereade> niemeyer: morning :)
<rog> fwereade: i think the alternative is to have a more fragile state-based charm uploading scheme
<niemeyer> Man.. I'm called a devil twice in the morning in the first 10 seconds!
<rog> fwereade: where different clients negotiate as to who is going to upload the charm first
<niemeyer> :-)
<fwereade> niemeyer: :p
<rog> niemeyer: well, you did arrive when your name was mentioned
<rog> about 2 seconds after
<rog> in a flash of sulphurous smoke
<niemeyer> ;)
<fwereade> rog: well, in effect, I suppose, if indirectly
<fwereade> sorry just a mo
<niemeyer> fwereade: What's the issue there?
<rog> niemeyer: concurrent charm uploading
<niemeyer> Why is that a problem?
<fwereade> niemeyer: sorry, back
<rog> because you want to avoid uploading the same charm to storage if it's already there
<fwereade> niemeyer: I've got myself a little bogged down, hopefully you'll be able to point out something I've missed
<niemeyer> fwereade: Sure, what's up
<fwereade> niemeyer: I'm instinctively against assuming that a file in storage which happens to have the correct name is actually the file we're looking for
<fwereade> niemeyer: actually, I'm definitely against it, if only for local development with multiple repos
<fwereade> niemeyer: could cause all sorts of horrifying confusion
<fwereade> niemeyer: (unless, as rog points out again, we have deterministic hashing and use hashes again)
<fwereade> niemeyer: (which of course is fine if it's the Right Thing, but I was up late last night removing all the hashes so I'm not strongly in favour :p)
<fwereade> niemeyer: *so*, IMO, we need to depend on the environment's zk state to know what has been published
<fwereade> niemeyer: agree?
<niemeyer> fwereade: Sure
<fwereade> niemeyer: ok, to avoid insanity, we want to make sure that only one publisher actually publishes a given charm
<fwereade> niemeyer: and I'm fretting that the "obvious" answer is more complicated than it needs to be
<fwereade> niemeyer: that answer goes as follows
<fwereade> niemeyer: (1) does the charm state already exist in ZK? if so, it's there, we're done
<fwereade> niemeyer: (2) if not, create an ephemeral node at /charms/pending/[charm_url]
<niemeyer> fwereade: Ugh.. ok, hold on
<niemeyer> fwereade: Do we have a problem today?
<rog> lol
<fwereade> niemeyer: er...only potentially, I guess, and only in the case of concurrent publishes
<niemeyer> fwereade: How?
<fwereade> niemeyer: only in situations where we can't depend on uniqueness of charm urls
<niemeyer> fwereade: Why would we have a problem there?
<fwereade> niemeyer: because the current implementation will have 2 publishers uploading to the same storage key, and we can't be sure that the "winning" one will be the one that "wins" in zookeeper... can we?
<fwereade> niemeyer: at the moment, we blindly upload and set the charm state once the upload is done
<niemeyer> fwereade: Ok, a couple of things:
<niemeyer> 1) There's a hash.. it is being uploaded to the same location, it's very very likely to be the same content
<niemeyer> 2) It's uploaded before being stored in zk, so the first one will win
<niemeyer> fwereade: That kind of situation was exactly why we designed the current logic as it is
<fwereade> niemeyer: wait... surely the one that's uploaded first is the one that wins in ZK
<niemeyer> fwereade: and it feels like we're spending time redesigning it, but it's still not clear to me why
<niemeyer> fwereade: Why does it matter?
<fwereade> niemeyer: and is therefore the one that's likely to be overwritten?
<niemeyer> fwereade: The first write to zk wins
<fwereade> niemeyer: and the last write to storage might win
<niemeyer> fwereade: and it will point to a file in the storage that matches the expectation of the uploaded
<niemeyer> uploader
<fwereade> niemeyer: let me check where we actually look at hashes
<niemeyer> fwereade: We don't have to _look_ at them, actually
<niemeyer> fwereade: It's part of the filename
<fwereade> niemeyer: did we not discuss this at length yesterday, and decide to use revisions alone?
<niemeyer> fwereade: Yeah, we did decide to follow your suggestion, as long as you were willing to review the problems coming out of it :-)
<niemeyer> fwereade: It now sounds that rather than "let's drop hashes" we're going towards "let's introduce complexity to solve a problem we don't have today"
<niemeyer> fwereade: which puts me in an alert mode
<fwereade> niemeyer: er, I was never arguing for dropping hashes
<niemeyer> ROTFL
<fwereade> niemeyer: you seemed to put forward a number of fairly solid arguments for dropping them, and convinced me :p
<fwereade> <niemeyer> fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results in these scenarios
<fwereade>  fwereade: Both
<fwereade> <fwereade> niemeyer: heh, I was more in favour of making the hash a required part of the ID throughout (at least internally)
<fwereade>  niemeyer: my issue was that it *wasn't* included in the ZK node name at the moment
<rog> do we need a name that includes both the symbolic name *and* the hash? it seems to me we might be good with either/or
<rog> then we can define a mapping from symbolic name to hash name
<rog> (possibly with some preference heuristics taken into account)
<fwereade> niemeyer, rog: my original position was that, if we have the hashes, we should use them throughout as part if the ID
<rog> and then internally we could use hash names exclusively. no ambiguity.
<rog> we could use them *as* the ID
<fwereade> niemeyer, rog: I *thought* niemeyer's position was "actually, we don't need the hashes"
<rog> with name stored inside the charm
<rog> s/with/with the/
<niemeyer> fwereade: Fair enough
<niemeyer> fwereade: Sep 26 09:55:04 <niemeyer>      fwereade: I'm happy for us to remove the hash from the name if we can find a way to avoid surprising results   in these scenarios
<niemeyer> fwereade: Sorry if I misguided you in the wrong direction
<niemeyer> fwereade: Let's stop the fuzz and start to move forward again
 * rog feels fwereade's pain.
<fwereade> niemeyer: not to worry :)
 * fwereade marshals thoughts while he gets a drink
<fwereade> niemeyer: ok, let's rewind a day or so, back to the question that started this off
<fwereade> niemeyer: actually, no, to an even earlier one
<fwereade> niemeyer: how would you define a "charm id"?
<fwereade> niemeyer: specifically, is the hash part of a charm id?
<fwereade> niemeyer: I contend that, internally, it should be: even if the same charm can be bundled N times and produce N different hashes, once it's been bundled and put into the system the hash is an important part of the identifier
<fwereade> niemeyer: is that a reasonable position in your opinion?
<rog> fwereade: if you've got the hash, you don't need anything else
<niemeyer> fwereade: It's not..
<niemeyer> fwereade: There's no such thing as a "charm hash" today
<niemeyer> fwereade: We have the hash of a file
<fwereade> niemeyer: ok, tweak terminology: we have the hash of a file, which is -- within one environment -- the single representation of a given charm
<rog> a charm hash would be trivially implemented, if we want one
<niemeyer> rog: We don't have even a trivial amount of time right now
<rog> ok
 * fwereade thinks again
<niemeyer> rog: We're already late.. it should be in _now_
<niemeyer> rog: and we have to implement the store functionality, which is stopped while we talk about hashes
<rog> well, for the time being, just use the hash of that file
<niemeyer> rog: No.. why!?
<niemeyer> There's no bug..
<niemeyer> fwereade: Please continue
<fwereade> niemeyer: ok, end-run discussion
<niemeyer> fwereade: I'm still keen on understanding your perspective
<fwereade> niemeyer: I reinstate the hashes on storage bundles
<niemeyer> fwereade: You'll be implementing this and must be comfortable with what's going on
<fwereade> niemeyer: and we don't need to do anything else
<fwereade> niemeyer: as before, hashes are not part of the charm id in ZK, because they're not part of the charm id anyway
<niemeyer> fwereade: That's right
<fwereade> niemeyer: that's all fine then
<niemeyer> fwereade: Again, as I mentioned yesterday, the real reason why hashes were ever introduced is uniqueness
<niemeyer> fwereade: In the storage
<fwereade> niemeyer: all of this came from my failing to realise that the hashes weren't purely based on relevant content
<niemeyer> fwereade: It sorts out precisely the problem we started the discussion with
<fwereade> niemeyer: yep, I just didn't come to that realisation until we'd sidetracked on the whole "get rid of hashes" idea
<niemeyer> fwereade: Without that, there's the chance that two people fight for an upload, and the person that writes to zk is not the one that won in S3
<fwereade> niemeyer: which is what had me bogged down, and mystified by the idea of dropping the hashes, and proposing zk cleverness to get around it
<niemeyer> fwereade: The whole thing is my fault.. I knew about the difficulty in doing this in a correct way and left you rambling around it
<fwereade> niemeyer: sadly the mystification only kicked in relatively recently, when I finally hit the concurrent upload tests I'd punted on with self.fail("looks tricky") yesterday
<fwereade> meh, it takes two ;)
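The upload flow being described, sketched with hypothetical storage/zk stand-ins: upload under a hash-qualified key first, then do the atomic zookeeper write, so the first zk writer wins and its node always points at the exact file it uploaded:

    import hashlib

    class NodeExistsError(Exception):
        """Stand-in for the zk client's 'node already exists' error."""

    def publish_charm(storage, zk, charm_url, bundle_bytes):
        digest = hashlib.sha256(bundle_bytes).hexdigest()
        key = "%s-%s" % (charm_url.replace("/", "_"), digest)

        # 1. Upload first. Racing publishers each write to a key derived from
        #    their own content, so neither overwrites the other's bytes.
        storage.put(key, bundle_bytes)

        # 2. Then record the key in zookeeper. Node creation is atomic: the
        #    first writer wins; the loser sees "already exists" and simply
        #    reuses the winner's published state.
        try:
            zk.create("/charms/%s" % charm_url.replace("/", "_"), key)
        except NodeExistsError:
            pass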
<niemeyer> rog, fwereade: I'm happy to consider developing an actual "charm hash" algorithm in the future, if we find actual issues or advantages that would make it attractive
<rog> content-addressed storage is often attractive in distributed systems :-)
<niemeyer> rog: If that's the reasoning for introducing it, no thanks
<rog> niemeyer: the reason is it's very useful to have an unambiguous name for something, regardless of its origin or its location.
<rog> it's a nice solid foundation
<niemeyer> rog: No thanks.. solid foundations sink
<rog> niemeyer: only if someone breaks the hashing algorithm...
<niemeyer> rog: We need to approach it from a problem/feature perspective
<rog> sure. hash-based naming is just a useful tool in the box.
<fwereade> niemeyer: potential future reason: avoid repeatedly downloading big charms that are already hanging around in the control bucket, if we can verify name-including-hash with the store
<fwereade> niemeyer: but I think it can wait until we're actually experiencing that as a problem
<fwereade> ;)
<niemeyer> fwereade: There's already a single file for any given charm identifier in the system
<niemeyer> fwereade: and in fact, I think it's already cached
<fwereade> niemeyer: sorry delay, it's a bit of a derail anyway, I'll just focus on getting you a new MP
<niemeyer> fwereade: No worries
<robbiew> rog: oing
<niemeyer> fwereade: Have you checked out the go-charm-url branch?
<robbiew> rog: *p*ing :)
<rog> robbiew: oing boing
<fwereade> niemeyer: when I looked, some files were missing, I'm afraid
<niemeyer> fwereade: Hmm
<niemeyer> fwereade: Let me check
<rog> r
<niemeyer> fwereade: Yeah, great..
<niemeyer> fwereade: I forgot exactly the meaningful content. :(
<_mup_> juju/go-charm-url r15 committed by gustavo@niemeyer.net
<_mup_> Actually _add_ the relevant files.. :-(
<hazmat>  the hashes can be fixed to be stable
<hazmat> by using actual content
<hazmat> instead of the zip
<niemeyer> hazmat: Good morning
<niemeyer> hazmat: Yeah, that was mentioned a few times
<hazmat> niemeyer, g'morning
<niemeyer> hazmat: We can easily develop a "charm hash", when we need it
<niemeyer> hazmat: We don't right now
<hazmat> agreed, its fine for concurrent uploads atm, one fails.
<hazmat> first one wins
<niemeyer> fwereade: The files are pushed.. please let me know what you think
<hazmat> and charm id as ns:name:id is unique
<fwereade> niemeyer: cheers
<niemeyer> hazmat: I'll try to polish what I did yesterday to get the waterfall/wtf running
<hazmat> niemeyer, cool
<niemeyer> hazmat: Would be nice to have a test there for the local case once that's up
<hazmat> niemeyer, sure, i'm kind of blocked on getting anything else in, but i can add more stuff to the pipeline
<niemeyer> hazmat: Blocked?
<hazmat> niemeyer, i'm fixing up origin, and pending on the rename, and local test
<hazmat> niemeyer, i've got several branches that i
<niemeyer> hazmat: Ah, cool
<hazmat> i'm waiting on other merges for
<niemeyer> hazmat: Ok.. blocked as in doing a lot.. that's cool :-)
<niemeyer> hazmat: I'm going over the queue right now
<niemeyer> hazmat: Most of the branches are already re-reviews, so I'm hoping it'll just go smoothly
<hazmat> niemeyer, i also need to switch out and work on slides for some time, they're due this afternoon
<hazmat> niemeyer, cool
<niemeyer> hazmat: Ah, that's a good time to mention
<niemeyer> I'll be traveling to Sao Paulo tomorrow
<niemeyer> Will be working on and off still
<niemeyer> The PythonBrasil conference is Thu-Sat
<niemeyer> I have a keynote on Fri morning
<niemeyer> But otherwise I'll be working on the release
<niemeyer> robbiew, fwereade, rog, SpamapS, m_3, bcsaller, jimbaker: ^
<robbiew> niemeyer: ack
<fwereade> niemeyer: sounds good, enjoy :)
<niemeyer> fwereade: Thanks.. I'm a bit sad about the timing
<niemeyer> If I had known how we'd be running right now, I wouldn't have taken it a couple of months ago
<niemeyer> I'll be working from there, anyway
<fwereade> niemeyer: ping
<fwereade> niemeyer: I hadn't considered CharmURL.Revision to be optional
<fwereade> niemeyer: I see how it will make sense in the future
<fwereade> niemeyer: but it does mean we can't have charm names like the "mysql2" we have in the test repo
<niemeyer> fwereade: Hmm.. I don't get either of those points, I think :)
<niemeyer> fwereade: The revision must necessarily be optional.. otherwise how can we parse things like
<niemeyer> fwereade: juju deploy cs:~fwereade/oneiric/wordpress
<niemeyer> ?
<niemeyer> fwereade: Then, what's the deal with mysql2?
<fwereade> niemeyer: I'd considered the full CharmURL to be something we're only able to construct once we've asked the repo for the latest version
<fwereade> niemeyer: +var validName = regexp.MustCompile("^[a-z]+([a-z0-9-]+[a-z])?$")
<niemeyer> fwereade: The thing above is a charm url
<niemeyer> fwereade: Yeah, that looks bogus
<niemeyer> fwereade: It should probably be "^[a-z]+([a-z0-9-]+[a-z0-9])?$"
<niemeyer> fwereade: Nice catch
<niemeyer> fwereade: The intention there was just to avoid mysql-
<fwereade> niemeyer: I had the idea that a CharmURL was a pointer to a specific version of a charm
<niemeyer> fwereade: A charm url is.. a charm url :-)
<niemeyer> fwereade: We're going to support charm urls without revisions in real world scenarios
<niemeyer> fwereade: There's no reason to inflict pain on us and make the code unable to handle those
<fwereade> niemeyer: I'd say we're going to allow people to specify charms without revisions in real world scenarios
<fwereade> niemeyer: ok, so we create a charm url, ask a repository about it, and then construct the real charm url?
<niemeyer> fwereade: Both charm urls are real.. one contains a revision, the other doesn't
<fwereade> niemeyer: to me, the task of extracting the user's intention is distinct from extracting the components of a fully specified charm url
<fwereade> niemeyer: and, internally, we're always going to use ones with revision, just as they always have schemas and series
<niemeyer> fwereade: Why?
<fwereade> niemeyer: because we want to be able to upgrade charms?
<niemeyer> fwereade: a charm url without a revision is a fine identifier
<niemeyer> fwereade: just like a package name without a version is a fine identifier
<niemeyer> fwereade: Sure.. we also upgrade packages
<niemeyer> fwereade: and still, most package management is done without a version
<fwereade> niemeyer: it's a fine specifier, but to upgrade charms we need distinct zk nodes for the distinct versions
<niemeyer> fwereade: Sure.. you're looking at one very specific operation for which you need to know revisions
<niemeyer> fwereade: The abstraction of a charm url is not restricted to that one operation
<fwereade> niemeyer: what's the distinction between revision and series then?
<fwereade> niemeyer: both are optional, from the user's point of view, given a certain amount of extra context that allows us to infer what they mean
<niemeyer> fwereade: Please check out the test cases
<niemeyer> fwereade: They provide good insight into what each part is, and what are erroneous situations
<fwereade> niemeyer: I've seen them
<niemeyer> fwereade: So I don't get your question.. revision is a number
<niemeyer> fwereade: series is "oneiric", etc
<niemeyer> fwereade: ?
<fwereade> niemeyer: agreed
<fwereade> niemeyer: both are optional from the sufficiently-naive user's POV
<fwereade> niemeyer: but only one is in your CharmURL implementation
<fwereade> niemeyer: I'm suggesting that the user's perception is not enough reason to allow non-specific charm urls
<niemeyer> fwereade: Sorry, I'm really missing context about how you feel about this
<fwereade> niemeyer: and that a charm url should unambiguously specify a particular collection of bits now and forever
<niemeyer> fwereade: cs:~joe/oneiric/wordpress
<niemeyer> fwereade: This is a charm URL
<niemeyer> fwereade: Correct?
<fwereade> niemeyer: disagree, it's enough information to discover a charm url, given context provided by the formula store
<niemeyer> fwereade: Ah, great.. ok
<niemeyer> fwereade: So that's where we disagree
<niemeyer> fwereade: This _is_ a charm URL
<fwereade> niemeyer: in the same way that "wordpress" is enough info to determine a charm url, given the context of the environment
<niemeyer> fwereade: "wordpress" is _not_ a charm URL
<fwereade> niemeyer: so a charm url can be, by design, inadequate to specify a given charm?
<fwereade> niemeyer: (without requiring repo access, I mean)
<niemeyer> fwereade: cs:~joe/oneiric/wordpress
<niemeyer> fwereade: This specifies a given charm
<niemeyer> fwereade: For both of us..
<niemeyer> fwereade: It may change over time, but it is a specified
<niemeyer> specifier
<niemeyer> fwereade: and it is what the user will enter in the command line
<niemeyer> fwereade: Having the user talking to the server about such an URL, and having to manage it internally on both the server and the client, and then saying "Oh, but that's not an _actual_ url", would be weird
<rog> niemeyer: [aside, the regexp you gave above would forbid a name like "go" - i think you probably meant something like "^[a-z]([a-z0-9-]*[a-z0-9])?$"]
<niemeyer> fwereade: Think about bazaar branches for a second
<niemeyer> fwereade: Is the revision number part of the url?
<niemeyer> rog:
<fwereade> niemeyer: agreed, it's not
<niemeyer> >>> re.match("^[a-z]+([a-z0-9-]+[a-z0-9])?$", "go").group(0)
<niemeyer> 'go'
<fwereade> niemeyer: sorry, I didn't understand your previous paragraph
<niemeyer> fwereade: Which part?
<rog> oops, missed the first +
<fwereade> niemeyer: "that's not an _actual_ url"
<fwereade> niemeyer: the user is already using things that aren't actual urls
<fwereade> niemeyer: like "wordpress"
<niemeyer> fwereade: As mentioned in the spec, this is an alias.   It is ambiguous, and is _not_ a URL.
<rog> niemeyer: that said, it doesn't match "p9" which it should
<fwereade> niemeyer: but isn't the revisionless version essentially an alias to an actual versioned charm?
<niemeyer> rog: Indeed, will fix it when I work on the branch again
<niemeyer> rog: Thanks
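One illustrative variant of the name pattern that keeps the "starts with a letter, no trailing hyphen" intent while also accepting short names like "go" and "p9"; not necessarily what the branch ends up with:

    import re

    valid_name = re.compile(r"^[a-z]+([a-z0-9-]*[a-z0-9])?$")

    for name, ok in [("mysql", True), ("mysql2", True), ("go", True),
                     ("p9", True), ("mysql-", False), ("9p", False)]:
        assert bool(valid_name.match(name)) == ok, name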
<fwereade> niemeyer: we have to use context to infer what's intended in both cases
<niemeyer> fwereade: It is a Universal Resource Locator.. exactly like an lp: url, or an http: url..
<niemeyer> fwereade: Content can change over time
<niemeyer> fwereade: In all of these cases
<niemeyer> fwereade: We _need_ to handle charm urls without revisions
<hazmat> are we still trying to get repository client work and local dev in  for oneiric, if so what are we doing with regard to FFE dates and upload to the repositories?
<niemeyer> fwereade: To talk to the client about them, to talk to the server about them, and internally
<niemeyer> fwereade: Not supporting it in the code facing that would be silly IMO
<niemeyer> hazmat: Yes, we are trying.. but we've been getting stuck on details for the past couple of days :)
<fwereade> niemeyer: it seems to me that we only need them without revisions in order to locate actual charms, which themselves do have revisions, and which we then want to use throughout
<fwereade> niemeyer: "the content isn't important until we have the content", if you like, and from then on we actually care about it
<niemeyer> fwereade: Alright.. let's move on.. please support charm URLs without revisions.
<fwereade> niemeyer: sure
<niemeyer> fwereade: Thanks
<fwereade> niemeyer: I'm sorry to be delaying things :(
<niemeyer> fwereade: Not a problem.. I just won't let our ramblings distract us from what I know for sure to be the correct approach.
<niemeyer> fwereade: Did that yesterday with the hash stuff
<niemeyer> fwereade: If for no better reason (which I know exist), the user provides us a url without a revision that we have to manage. Having charm url handling and then having to parse by hand to tell if it's right or not, or to extract information out of it, would be quite impractical.
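A loose sketch of parsing charm URLs with the revision optional, in the spirit of the discussion above; the field names and regex are assumptions rather than the go-charm-url implementation:

    import re

    CHARM_URL = re.compile(
        r"^(?P<schema>cs|local):"
        r"(?:~(?P<user>[^/]+)/)?"
        r"(?P<series>[a-z]+)/"
        r"(?P<name>[a-z][a-z0-9-]*?)"
        r"(?:-(?P<revision>\d+))?$")

    def parse_charm_url(url):
        m = CHARM_URL.match(url)
        if m is None:
            raise ValueError("not a charm URL: %r" % url)
        parts = m.groupdict()
        if parts["revision"] is not None:
            parts["revision"] = int(parts["revision"])
        return parts

    print(parse_charm_url("cs:~joe/oneiric/wordpress"))   # revision: None
    print(parse_charm_url("cs:oneiric/mysql-32767"))      # revision: 32767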
<niemeyer> hazmat: SpamapS has better details on the FFE
<niemeyer> hazmat: He's already filing them
<hazmat> niemeyer, cool, i'm just concerned that we're not sticking to any dates
<niemeyer> hazmat: and I was hoping to merge local dev today, and the store work by the end of the week
<hazmat> and its not clear what the date is
<hazmat> okay
<niemeyer> hazmat: Then, we have about a week of hard core testing and bug fixing to polish what we've got
<niemeyer> hazmat: Until being completely unable to fix anything
<hazmat> niemeyer, sounds good
<koolhead17> hey all
<koolhead17> SpamapS: around
<niemeyer> Will get lunch
<niemeyer> biab
<niemeyer> koolhead17: Hi, btw :)
<koolhead17> niemeyer: hello. howdy
 * koolhead17 bows to robbiew Daviey 
 * koolhead17 stuck with this saving secret key to document root, in automation of gallery2 :
<koolhead17> :(
<hazmat> niemeyer, i'm going to move not in progress tickets to the next milestone
<koolhead17> hey hazmat
<hazmat> hi koolhead17
<hazmat> db-config/commons still an issue?
<koolhead17> hazmat: am feeling bit frustrated with this gallery2 thing
<koolhead17> hazmat: no i am almost done with it
<hazmat> koolhead17, what's the problem?
<hazmat> koolhead17, you can save the secret out of the document root? or does it need to be read by the app?
<koolhead17> this gallery2 s/w, while populating config files, asks the user to enter a few details; one of them is to download a secret key and save it to the document root of gallery
<koolhead17> read by app
<hazmat> koolhead17, i assume that's common to a normal installation then?
<koolhead17> hazmat: yes i am confused how to move with that
<koolhead17> http://www.rndguy.ca/2010/02/24/fully-automated-ubuntu-server-setups-using-preseed/
<koolhead17> this helped me  as to  know preseed
<koolhead17> hazmat: i need your help, if you have some time, to understand that metadata workflow
<koolhead17> db-relation-joined
<hazmat> koolhead17, i don't see the actual question? you preseed the mysql db with a master password, its db-relation-joined, creates an account for gallery 2 (all of that's already in the mysql formula), the gallery formula on db-relation-changed stores the password for the app into a location accessible by the app
<hazmat> and there should be some sort of .htaccess config to prevent that config file from being served up directly via the web
<hazmat> although by default the network security will prevent public access to mysql, its good practice not to expose the credentials farther
<koolhead17> hazmat: http://bazaar.launchpad.net/~juju/juju/trunk/view/head:/examples/wordpress/hooks/db-relation-changed
<koolhead17> here
<koolhead17> hostname=`curl http://169.254.169.254/latest/meta-data/public-hostname`
<koolhead17> so this metadata url serves values for variables like $user $password
<koolhead17> which gallery2 will need? correct
<hazmat> 1) that metadata url is ec2 specific, its only exposing virtual instance attributes, it has nothing to do with what's installed on the machine, 2) Its usage by formulas will disappear in a future version of juju.
<hazmat> ie. user/password having nothing to do with that url
<SpamapS> koolhead17: no, gallery2 should not need the public hostname
<hazmat> `relation-get user` && `relation-get password`
<hazmat> get the db user and password
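For illustration, the same hook logic in Python instead of bash; relation-get is the real hook tool, but the gallery2 paths, the "host" key, and the settings format here are made-up examples:

    #!/usr/bin/env python
    import subprocess

    def relation_get(key):
        # relation-get is placed on PATH by juju while a hook runs.
        return subprocess.check_output(["relation-get", key]).decode().strip()

    user = relation_get("user")
    password = relation_get("password")
    host = relation_get("host")   # assuming the mysql charm publishes this key

    # Store the credentials where the app can read them but the web server
    # won't serve them (e.g. protected by .htaccess, as mentioned above).
    with open("/etc/gallery2/db-settings.php", "w") as f:
        f.write("<?php $db_user='%s'; $db_pass='%s'; $db_host='%s'; ?>\n"
                % (user, password, host))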
<koolhead17> hazmat: hmm my question was ec2 specific; if i'm not wrong, juju currently only runs in an ec2 environment?
<hazmat> koolhead17, with the oneiric release we're also supporting bare metal installations via (orchestra/cobbler) and local machine development
<hazmat> using lxc containers
<koolhead17> hazmat: i'm trying to write a charm on oneiric in virtualbox only
<koolhead17> hazmat: i have not tried orchestra yet, been working on cobbler all this while. i am soo confused :(
<koolhead17> i will go back to https://juju.ubuntu.com/Documentation and spend some time again. what i am currently doing is simply writing a bash script to get my installation automated; once that is achieved, i'll use it to form a charm.
<koolhead17> am i following the wrong procedure?
 * koolhead17 wonders if he asked some dumb question :(
<_mup_> juju/lxc-provider-rename-local r404 committed by kapil.thangavelu@canonical.com
<_mup_> rename lxc provider to local provider
<hazmat> koolhead17, i don't understand how you'd be doing a virtualbox installation of a charm
<hazmat> its not a supported machine provider atm
<koolhead17> hazmat: what i am doing is writing bash script which does auto install of everything for me. and then i put it as a charm and test it on EC2
<hazmat> koolhead17, i think its just as easy to build it out in a formula esp. with tools like debug-hooks
<hazmat> and charm-upgrade
<hazmat> because you'll need information from the remote relations which you'd have to mock/stub in the bash script
<hazmat> and you'll need to tease apart the bash script into its parts.. i mean.. there's nothing wrong with doing it that way
<hazmat> just that its some additional work to restructure as  a charm when its done.
<koolhead17> hazmat: hmm. in that case i have to do everything on ec2, which i'm using from a friend's account.
<koolhead17> :D
<hazmat> koolhead17, if you want to live on the bleeding edge the local dev stuff allows for doing it all on your local machine
<rog> that's all folks. see ya tomorrow.
<hazmat> rog, have a good one
<rog> hazmat: will do. you have no idea. :-)
<SpamapS> hazmat: before I wrote the lxc provider, I wrote a relation-get / relation-set mocker .. it worked quite nicely. ;)
 * SpamapS is quite excited tho, about having a local provider built in. :)
<hazmat> indeed its very nice
<koolhead17> hazmat: i only have a 2 GB laptop which already runs 2 VMs when i play with juju
<koolhead17> :D
<hazmat> SpamapS, bcsaller did some nice work to minimize construction time of instances as well (via lxc-clone)
<SpamapS> :-D
<SpamapS> hazmat: I'm glad you guys found a way to make lxc-clone work.
<hazmat> koolhead17, the overhead both for disk and load of an lxc container is *significantly* less than a vm
<koolhead17> i have no idea how i can use orchestra and the local env for doing the same
<hazmat> koolhead17, you can have dozens of containers on a machine with minimal load if they're not doing any active work
<SpamapS> koolhead17: you don't need orchestra or cobbler
<koolhead17> shall i simply install oneiric on my latop then?
<hazmat> and the disk overhead is around 200mb for a minimal container, up to 500mb for a useful container (not including data)
<SpamapS> koolhead17: thats the opposite of what you need.. you need something local.. which is landing in trunk as we speak. :)
<SpamapS> koolhead17: it should work in natty too
<hazmat> woah..
<SpamapS> koolhead17: tho you'll need the LXC that is in the juju PPA
<hazmat> SpamapS, koolhead17 it does not work in natty
<koolhead17> okey. let me reach home and try this experiment then
<SpamapS> hazmat: why not?
<hazmat> SpamapS, i don't think the ppa has been updated with the latest lxc pkg
<koolhead17> i have lucid
<SpamapS> I definitely think we need to spend more time making these things work on Lucid.
<koolhead17> and oneiric on VM
<hazmat> SpamapS, that needs a kernel update afaik
<koolhead17> SpamapS: please do it. it will be awesome
<koolhead17> LTS ++
<SpamapS> hazmat: it didn't before.. but maybe they've moved on from the stuff I did in Austin.
<koolhead17> am on 10.4.3
<koolhead17> 2.6.32-33-generic #72-Ubuntu SMP Fri Jul 29 21:08:37 UTC 2011 i686 GNU/Linux
<hazmat> SpamapS, maybe it doesn't.. i wasn't sure
<hazmat> bcsaller, there's another failing test in omega.. make b/ptests usage is really rather problematic
<bcsaller> hazmat: not for me? what are you seeing?
<hazmat> just a failure around the upstart file test
<_mup_> juju/lxc-provider-rename-local r405 committed by kapil.thangavelu@canonical.com
<_mup_> additional fixes for s/lxc/local
<koolhead17> also https://juju.ubuntu.com/Documentation explains using EC2 only
<hazmat> bcsaller, ./test juju.machine
<hazmat> bcsaller, i get about 5 failures
<jimbaker> koolhead17, definitely agreed on that. still trying to get the new lxc stuff to work for me
<bcsaller> hazmat: I'm seeing that, yes, hadn't been running those :)
<hazmat> bcsaller, i know.. that's why running b/ptests gives a false sense of anything
<koolhead17> jimbaker:  it will be based on oneiric then i suppose. :D
<bcsaller> because the branch is functional
<jimbaker> hazmat, i'm having a problem in bootstrap with lxc-omega
<hazmat> jimbaker, do tell? ;-)
<jimbaker> bcsaller tells me it's more likely to be just the networking stuff you've been working on
<jimbaker> koolhead17, yes, i have been just trying it w/ oneiric beta 2
<hazmat> jimbaker, there isn't any networking stuff that's post lxc-omega
<hazmat> jimbaker, what's the problem?
<jimbaker> hazmat, indeed, that's my understanding ;) so i get 2011-09-27 10:46:48,319 ERROR Command '['virsh', 'net-start', 'default']' returned non-zero exit status 1 in bootstrap
<hazmat> jimbaker, you have libvirt-bin installed?
<koolhead17> k
<jimbaker> running that explicitly, $ virsh net-start default
<jimbaker> error: Failed to start network default
<jimbaker> error: internal error Network is already in use by interface virbr0
<hazmat> interesting
<hazmat> jimbaker, are you still on natty?
<jimbaker> hazmat, indeed i do
<jimbaker> and i'm running oneiric beta 2
<hazmat> it shouldn't be running net-start if it sees it's already running
<jimbaker> hazmat, sure
<hazmat> jimbaker, can you pastebin virsh net-list
<jimbaker> hazmat, no need: $ virsh net-list
<jimbaker> Name                 State      Autostart
<jimbaker> -----------------------------------------
<hazmat> jimbaker, clearly your libvirt networking is wedged somehow
<jimbaker> hazmat, i'm sure i'm missing some dependency. just don't know what
<hazmat> if that's your output
<hazmat> virsh net-start fails because already started, and virsh net-list doesn't show it started
<jimbaker> hazmat, yes sounds like a wedge indeed
<jimbaker> maybe i should reboot :)
<hazmat> jimbaker, well it might be a persistent config issue from the upgrade.. probably relating to libvirt, i'd check dnsmasq running, and libvirt config files, reboot couldn't hurt, and finally brctl
<jimbaker> hazmat, thanks for the suggestions. definitely upgrade issues could be involved, since i was trying to get the new lxc work going last week w/ beta 1
<niemeyer> hazmat: I also don't have a "default" network locally in natty, FWIW
<hazmat> niemeyer, interesting
<hazmat> hmm
<hazmat> thankfully we have all the tools to install one by hand
<hazmat> but it should be ootb with libvirt-bin
<niemeyer> hazmat: True.. we should just confirm that with a clean instlal
<niemeyer> hazmat: or a clean upgrade :)
<niemeyer> hazmat: If the upgrade is lacking something, there's still time to fix it
<hazmat> niemeyer, i can add code to support that case; it's pretty trivial with the existing network support
<hazmat> to just add a default network if one isn't defined
<hazmat> hmm.. actually its automatic already
<hazmat> if its not defined when we go to start, we define it
<hazmat> and then start it
<hazmat> jimbaker, actually can you pastebin virsh net-list --all
<hazmat> i forget the --all output
<hazmat> flag
<hazmat> without it, it only lists actives
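A sketch of the "define the default network if it's missing, then start it only if it isn't already active" logic hazmat describes, using plain virsh calls; the XML is roughly libvirt's stock default network and the function itself is hypothetical:

    import subprocess

    DEFAULT_NET_XML = """<network>
      <name>default</name>
      <bridge name='virbr0'/>
      <forward mode='nat'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp><range start='192.168.122.2' end='192.168.122.254'/></dhcp>
      </ip>
    </network>
    """

    def ensure_default_network():
        # Without --all, net-list only shows active networks.
        active = subprocess.check_output(["virsh", "net-list"]).decode()
        if "default" in active:
            return
        everything = subprocess.check_output(
            ["virsh", "net-list", "--all"]).decode()
        if "default" not in everything:
            with open("/tmp/default-net.xml", "w") as f:
                f.write(DEFAULT_NET_XML)
            subprocess.check_call(
                ["virsh", "net-define", "/tmp/default-net.xml"])
        subprocess.check_call(["virsh", "net-start", "default"])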
<bcsaller> hazmat: additionally we saw the network code calls virsh net-stop, which doesn't exist; looks like it's net-destroy
<hazmat> ugh
<hazmat> yeah.. that's a bug
<hazmat> doesn't anyone believe in symmetry anymore ;-)
<niemeyer> !!!
<niemeyer> hazmat: "destroy" is such a good command, though!
<niemeyer> apt-get destroy table
<hazmat> lo.. i am become death.. destroyer of worlds
<bcsaller> the way I type its only destroyer of words
<jimbaker> hazmat, http://pastebin.ubuntu.com/698012/
<hazmat> hmm.. so it is defined, but we can't start it
<jimbaker> hazmat, again, i'm going to reboot before trying anything else
<hazmat> jimbaker, sounds good
<jimbaker> but first, i need to run to lunch. biab
<hazmat> bcsaller fortunately local provider doesn't stop the network since its normally setup as autostart by libvirt-bin
<hazmat> and already running
<bcsaller> hazmat: the start/destroy tests in juju.machine fail because of the later construction of the .container when using the async interface; they are not around when the tests expect them to be to set up the mocks. trying to fix em
<hazmat> bcsaller, woah.. the async interface should still be returning a deferred that can be waited on
<hazmat> ?
<bcsaller> its simpler than that
<bcsaller> the containers are not built until start() now
<bcsaller> rather than in init
<hazmat> ah
<bcsaller> so .container isn't defined until later and thus can't be mocked
<bcsaller> well... I can mock it, but python has issues
<hazmat> so there is no access to the container till its started? i think that has problems in several places for the rest of the code
<hazmat> i.e. it's not solely a test problem
<hazmat> the setup directories uses the container.rootfs path several times
<hazmat> prior to starting the container for example
<hazmat> that's unfortunate
<bcsaller> hazmat: all that code is called after
<bcsaller> but yes, fixing this is a little trickier than I wanted
<hazmat> bcsaller, add a container_rootfs using the name to the container in class init
<hazmat> all the usage is to get the fs
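What hazmat is suggesting, sketched: compute the rootfs path from the container name in __init__ so directory setup can run before start() creates the real container object. The class shape and paths are assumptions:

    import os

    class ContainerUnit(object):
        def __init__(self, name, lxc_dir="/var/lib/lxc"):
            self.name = name
            self.container = None                  # created lazily in start()
            self.container_rootfs = os.path.join(lxc_dir, name, "rootfs")

        def setup_directories(self):
            # Safe to call before start(): it only needs the rootfs path,
            # never the container handle itself.
            os.makedirs(os.path.join(self.container_rootfs, "var/lib/juju"))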
 * hazmat grabs some food
<niemeyer> hazmat, bcsaller: Folks, just off a call with robbiew.. I'll finish the ftests polishings I'm working on to get this ready and out of my plate, and will then jump back onto the reviews
<hazmat> niemeyer, cool, also wtf site is empty now
<niemeyer> hazmat: I suspect I broke it
<niemeyer> hazmat: I'm there cleaning it up a bit
 * hazmat starts working on presentation slides
<niemeyer> I'll separate the setup/teardown so that we can easily have several tests for EC2, etc
<TheMue> niemeyer: What framework do you use for testing?
<niemeyer> TheMue: A few trivial scripts
<TheMue> niemeyer: And unit tests is with standard gotest?
<niemeyer> TheMue: this test framework produces a waterfall of success/failure per revision
<niemeyer> TheMue: The tests are whatever we want them to be
<niemeyer> TheMue: Right now we have two: one that runs the whole internal suite
<niemeyer> TheMue: and another one that exercises a real interaction with ec2
<niemeyer> TheMue: The juju-go branch, that contains the evolving Go port, is not part of this yet, but can be easily integrated
<niemeyer> TheMue: We use gocheck there
<TheMue> ah, ok, thx
<TheMue> btw, is it possible to simulate juju actions with local vms?
<RoAkSoAx> fwereade: how's it going man?
<fwereade> RoAkSoAx: ah, not too bad, just reverted a big pile of revisions -- which is obviously bad, but feels surprisingly good ;)
<fwereade> RoAkSoAx: and you?
<RoAkSoAx> lol
<RoAkSoAx> fwereade: pretty good
<RoAkSoAx> fwereade: orchestra/juju working like a charm
<RoAkSoAx> fwereade: without the benefits of auto power management
<RoAkSoAx> fwereade: since we dont have direct access to PDU's and stuff
<RoAkSoAx> fwereade: but good
<fwereade> RoAkSoAx: awesomesauce :D
<RoAkSoAx> fwereade: have time to discuss a bit about juju/orchestra?
<fwereade> RoAkSoAx: surely
<RoAkSoAx> fwereade: so
<RoAkSoAx> fwereade: about showing status pending
<RoAkSoAx> fwereade: when we deploy or bootstrap
<RoAkSoAx> fwereade: right after we do it, it already shows the machine
<RoAkSoAx> fwereade: the dns-name / instance id
<RoAkSoAx> fwereade: however, the machine might have not even been turned on
<RoAkSoAx> fwereade: so I was wondering if it might be better to list them "pending" till it actually finishes installing and disables PXE
<RoAkSoAx> for its cobbler profile
<RoAkSoAx> but at the same time
<RoAkSoAx> while they are pending, they should probably show
<RoAkSoAx> what machine has been obtained
<RoAkSoAx> fwereade: because, it is actually really needed for us to know what machine was selected, but, we need to see it as pending till it finishes installing I think
<RoAkSoAx> fwereade: what do you think?
<fwereade> RoAkSoAx: in alternative words: available/acquired is not enough information?
<RoAkSoAx> fwereade: that's enough
<RoAkSoAx> fwereade: but, my point being is when I do juju status
<RoAkSoAx> I see the machine as available
<RoAkSoAx> when it should probably be pending
<RoAkSoAx> because it hasn't finished installing
<fwereade> RoAkSoAx: ah, ok -- was confused by the mention of bootstrap, because you can't even get status until we've actually managed to bootstrap
<RoAkSoAx> right
<fwereade> RoAkSoAx: that definitely sounds sensible
<RoAkSoAx> fwereade: yeah, but while showing pending, it doesn't show what machine (dns-name) has been selected
<RoAkSoAx> fwereade: i think we need to know that
<RoAkSoAx> so usually is : 13: pending
<RoAkSoAx> tight?
<RoAkSoAx> right
<RoAkSoAx> in orchestra we see something like : 13: {dns-name: blabla.domain.com, instance-id: MTMxNzA2NDA1NS4xNzg2NTQ1OTQuNzE4Mg}
<RoAkSoAx> but, it is still pending because installation is still executing
<RoAkSoAx> so should show: 13: {dns-name: hassium.canonical.com, instance-id: MTMxNzA2NDA1NS4xNzg2NTQ1OTQuNzE4Mg}: pending
<RoAkSoAx> or something similar
<fwereade> RoAkSoAx: ok, that makes sense
<RoAkSoAx> fwereade: but it will be very very orchestra specific
<fwereade> RoAkSoAx: offhand, do you recall what it shows for EC2 in similar circumstances?
<fwereade> RoAkSoAx: because the situation is definitely analogous
<RoAkSoAx> fwereade: it shows 13:pending I think
<fwereade> RoAkSoAx: cool
<fwereade> RoAkSoAx: the problem is really just getting the info out of cobbler reliably then, right?
<fwereade> RoAkSoAx: and the problem is kinda bound up with the power-management woes we already know about
<fwereade> RoAkSoAx: ...although I guess it doesn't have to be
<fwereade> RoAkSoAx: what *should* I be paying attention to to figure it out?
<RoAkSoAx> fwereade: power management should not really be part of the problem
<RoAkSoAx> fwereade: because, even if we
<RoAkSoAx> fwereade: do that when we manually or automatically start the machine
<RoAkSoAx> it *won't* show pending
<RoAkSoAx> fwereade: from what I think, pending is the state on ec2 when the image is starting up; once it's completely up and running, then it changes to being available, right?
<fwereade> RoAkSoAx: "running" but yeah
<RoAkSoAx> fwereade: so similarly, the status should show pending while the machine is running the installation, once it has finished, it should show it
<RoAkSoAx> fwereade: but in case of orchestra, i think it is important to know what machine has been selected (dns-name) and its status is pending
<fwereade> RoAkSoAx: do we have a channel that lets us figure it out?
<fwereade> RoAkSoAx: or do we have to store the fact that it *should* show up soon
<fwereade> RoAkSoAx: and go from there?
<RoAkSoAx> fwereade: i think
<RoAkSoAx> when we do status, an easy way would be to check
<RoAkSoAx> if pxe has been disabled in the system itself
<fwereade> RoAkSoAx: I *may* be happy with that but I'll have to think
<RoAkSoAx> fwereade: that's the only way we can know that
<RoAkSoAx> fwereade: because in ec2 it's running/pending, right?
<RoAkSoAx> pending is when it is booting the VM
<RoAkSoAx> and running its when finished booting
<RoAkSoAx> in our case we cannot verify if it is installed/post_installed
<RoAkSoAx> fwereade: the only way we can do that is by simply checking whether pxe is enabled on the system
<RoAkSoAx> fwereade: because that's the last step of installation
<fwereade> RoAkSoAx: that sounds great to me
<RoAkSoAx> fwereade: if installation fails, it will never disable PXE booting on the system
<fwereade> RoAkSoAx: was always a bit uncomfortable with what we were using netboot_enabled for
<fwereade> RoAkSoAx: yep
<RoAkSoAx> fwereade: exactly, so we could just extend status to check netboot_enabled
<RoAkSoAx> fwereade: so for each system where netboot_enabled is True, that means it hasn't finished installing or hasn't even been powered on
<RoAkSoAx> fwereade: if netboot_enabled is False (on a status) we can assume it finished installing
<fwereade> RoAkSoAx: so available/pxe means we can grab it and use it; acquired/pxe means pending; acquired/nopxe means running(-very-soon); available/nopxe means don't-touch
<RoAkSoAx> fwereade: because that's the last command executed when deploying
<fwereade> RoAkSoAx: perfect
<RoAkSoAx> fwereade: right, so we keep the management classes as they are right now
<fwereade> RoAkSoAx: just need to make sure we handle the state transitions correctly in CobblerClient
<RoAkSoAx> fwereade: right, so basically when we *already* deployed the machine, and we are *checking* status
<fwereade> RoAkSoAx: assuming that, yes, pending/running is just pxe/nopxe
<RoAkSoAx> fwereade: we should check,  "machine A is being deployed, let's check netboot_enabled. If True, it hasn't finished installing, or has failed. If False, then it has finished"
<fwereade> RoAkSoAx: yep
<RoAkSoAx> fwereade: right, obviously in the future, we would need to know if installation failed
<RoAkSoAx> I just have no idea how to know that right now
<fwereade> RoAkSoAx: ...may just come down to storing when we asked it to come up, and timing out :/
<fwereade> RoAkSoAx: still, that's the future :)
<fwereade> RoAkSoAx: definitely sounds like a good plan
<RoAkSoAx> fwereade: cool
<fwereade> RoAkSoAx: I really need to capture my mental list of orchestra deficiencies as bugs, and soon :/
<RoAkSoAx> fwereade: hjeheh ok, will try to file some by EOW
<fwereade> RoAkSoAx: so will I, hopefully between us we'll cover most of it ;)
<fwereade> RoAkSoAx: thanks :)
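A minimal sketch of the status check agreed on above: treat a machine as pending while its cobbler record still has netboot_enabled set, and as running once PXE has been disabled at the end of the install. The client object, method name, and field access here are assumptions for illustration, not the actual CobblerClient API:

    # Sketch only: map cobbler's netboot_enabled flag to a machine state.
    # get_system() and its return shape are hypothetical, not juju code.
    PENDING = "pending"
    RUNNING = "running"

    def machine_state(cobbler, system_name):
        system = cobbler.get_system(system_name)
        # PXE still enabled: the install hasn't finished (or never started).
        # Disabling PXE is the last step of a successful install, so a
        # cleared flag means the machine can be reported as running.
        if system.get("netboot_enabled", True):
            return PENDING
        return RUNNING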
<adam_g> http://paste.ubuntu.com/698089/ <- mean anything to anyone? machine agent log from bootstrap node
<SpamapS> Is there any way to get feedback from the provisioning agent?
<SpamapS> Like.. if its unable to provision instances for some reason.. other than debug-log ?
<niemeyer> adam_g: That's pretty weird..
<niemeyer> adam_g: machine 0 is the first machine run
<niemeyer> adam_g: Theoretically if there's no machine 0 zookeeper shouldn't even exist
<niemeyer> adam_g: What's the context there?
<hazmat> niemeyer, it was an old client
<hazmat> was the problem
<hazmat> SpamapS, not atm
<SpamapS> ahh
 * SpamapS searches the bug lists to +1 or report that..
<SpamapS> hmm.. why is the eureka milestone set to release on 2011-01-01 ?
<jimbaker> SpamapS, awesome backdating ;)
<SpamapS> we are *SERIOUSLY* late then ;)
<SpamapS> So.. I'm thinking we need to make the released version of juju not pull itself from the PPA, but rather from the Ubuntu archive only.
<robbiew> +1000
<SpamapS> I suppose we can say if a user wants it on lucid/maverick/natty that they use juju-branch
<SpamapS> or has that been replaced with juju-origin now?
<robbiew> SpamapS: so we can't have the archive version pulling from a ppa...that means a deployment that works today could conceivably behave differently tomorrow.
<SpamapS> right
<SpamapS> just thinking through what that will break
<robbiew> right
<jimbaker> SpamapS, that's the intent of juju-origin and how the env-origin branch determines the correct origin to deploy
<SpamapS> so yeah I think I'll just patch in that the default source is _DISTRO instead of _PPA .. and if people want to spawn releases before 11.10 they will have to use the PPA or juju-branch
<jimbaker> SpamapS, i have had to mock the origin for distro (apt-cache policy juju), but it works well in testing
<jimbaker> SpamapS, so you will see in the env-origin branch, the default origin is determined using that, instead of just using _DISTRO (or the old _PPA)
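Roughly, the idea is to inspect where the installed juju package came from and pick the matching origin. A simplified sketch of that decision (not the env-origin branch itself; the function name and the fallback behaviour are assumptions):

    # Simplified sketch of choosing a default origin from apt metadata.
    # Not the actual env-origin code; guess_default_origin() is made up.
    import subprocess

    def guess_default_origin():
        try:
            output = subprocess.check_output(
                ["apt-cache", "policy", "juju"]).decode("utf-8", "replace")
        except (OSError, subprocess.CalledProcessError):
            return "distro"  # no package info available; assume the archive
        if "ppa.launchpad.net" in output:
            return "ppa"
        return "distro"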
<SpamapS> jimbaker: awesome, but thats not in trunk yet, is it?
<jimbaker> SpamapS, i should say: i have had to mock the distro behavior. everything else i can directly test
<jimbaker> SpamapS, it is not yet in trunk. i have some issues to fix
<jimbaker> but should be resolved pretty soon
<robbiew> SpamapS: ack on the patch approach
<robbiew> jimbaker: define "pretty soon" :)
<SpamapS> if its not in the next 20 minutes, its not going to be uploaded today. ;)
<adam_g> hey-- is '--placement=local' no longer possible, to deploy to the bootstrap node?
<jimbaker> robbiew, well it has to complete the review process, but the issues i have to fix are small and mostly related to how the testing and code is structured
<robbiew> jimbaker: ack
<robbiew> SpamapS: i translate jimbaker's response to be "not in the next 20min" ;)
<jimbaker> eg how do we test a specific circumstance with respect to policy, do we parse the data, make a call to code that looks like apt-cache, or in the old case, mock that out
<jimbaker> there are a number of approaches, so i'm converging on what works best
<jimbaker> SpamapS, robbiew - that's correct
<jimbaker> not 20 min :)
<SpamapS> danke
 * hazmat finishes up slides
<adam_g> hazmat: with --placement='local' gone from the CLI, is it even possible to deploy certain charms to the bootstrap node anymore?
<niemeyer> adam_g: That was never the intention of the placement logic
<SpamapS> but it was an *awesome* way to test things without having to start multiple nodes
<niemeyer> adam_g: That said,
<SpamapS> hacky, but awesome. :)
<niemeyer> adam_g: placement is still supported
<adam_g> niemeyer: i understand, ill rephrase.. is there currently a new way to abuse this and let us put stuff on the bootstrap node?
<adam_g> :)
<niemeyer> adam_g: In ~/.juju/environments.yaml
<niemeyer> adam_g: Shhh.. don't tell anyone
<adam_g> niemeyer: i've found that, but can that be changed per 'juju deploy', or is the placement policy set for the lifetime of the environment?
 * SpamapS is fairly certain it will be desirable to be able to set placement at runtime as we come up with more interesting placement strategies.
<hazmat> adam_g, yeah.. it's the placement: local setting in the environment config
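For reference, the setting being pointed at lives in ~/.juju/environments.yaml, roughly along these lines; everything except the placement key itself is illustrative:

    # Illustrative environments.yaml fragment; only the placement key is the
    # point here, the other names and values are made up for the example.
    environments:
      sample:
        type: ec2
        control-bucket: some-bucket
        admin-secret: some-secret
        placement: local   # co-locate new units on the bootstrap node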
<niemeyer> I'll step outside for a while and bbiab
<hazmat> SpamapS, yeah.. i agree, but we start conflating very different concepts..
<SpamapS> hazmat: users tend to like doing things that developers never dreamed of. I'd hope we'd follow the unix model, and give them enough rope to hang themselves (and then a little bit more)
<hazmat> co-location and placement look very similar to end users
<hazmat> SpamapS, we might end up resurrecting it
<hazmat> i see it as very useful for cross-az deploys on a per unit basis
<SpamapS> cross az.. cross cloud.. silly things that you just want to have stacked up on one t1.micro ... flexibility is good.
<hazmat> adam_g, is that removal a significant burden, i can resurrect it now if need be?
<SpamapS> also when did it disappear?
<SpamapS> I use it about twice a day. :-/
<SpamapS> but I'm on an older build
<hazmat> SpamapS, yesterday evening
<SpamapS> ahh ok
<adam_g> hazmat: we were using it to reduce our hardware needs on this openstack cluster by 3 or 4 nodes. i can workaround by just modifying environments.yaml between 'deploys'
<hazmat> adam_g, that kinda sucks though
<adam_g> hazmat: yah, especially since --placement=local is what we've documented internally. id love to get the option back, but i can see why others wouldn't
<hazmat> adam_g, the ideal placement policy to me is min/max instances, but its very hard to determine where to place a formula to avoid a conflict
 * hazmat ponders
<hazmat> i guess i should try to  make that happen since i'm blocked on other stuff
<SpamapS> hazmat: Its not that hard.. you can keep a record of charms that have failed to deploy together and just use optimistic collision avoidance.
<hazmat> SpamapS, lol
<SpamapS> Its the ethernet model.
<hazmat> SpamapS, i figure the easiest thing on deploy is to keep the number of service units of the same formula on a machine to 1, and error if we can't do that
<hazmat> its not real avoidance but it should help..
<SpamapS> hazmat: except they still might conflict. Which is fine, just move it to another machine if that happens.
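A tiny sketch of the simple policy hazmat describes: at most one unit of a given charm per machine, and an error if no machine qualifies. All names here are illustrative, not juju's placement code:

    # Illustration of "keep units of the same charm to one per machine".
    # pick_machine() and deployed_charms() are hypothetical names.
    def pick_machine(machines, charm_name):
        for machine in machines:
            if charm_name not in machine.deployed_charms():
                return machine
        raise RuntimeError(
            "no machine available without an existing %s unit" % charm_name)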
<hazmat> i guess it's easier to resurrect --placement
<SpamapS> :)
<hazmat> niemeyer, ^
<adam_g> hazmat: hi, sorry, i got pulled away. IMO, i think in the long run, users are going to want the *option* to have total control over charm placement regardless of the risks. currently "--placement=local" is the only thing that gives me that option
<SpamapS> especially with hardware
<SpamapS> hrm.. so defaulting to _DISTRO leaves us in a bind w.r.t. testing proposed updates of juju for SRU
<SpamapS> we'd need to have some way to enable proposed...
<hazmat> bcsaller, do you have fixes for test failures on omega, i wanted to do some work further down the stack
<bcsaller> hazmat: not yet, sorry
<bcsaller> hazmat: most, but not all
<hazmat> bcsaller, did you end up just adding the container_rootfs attr or trying to rework the api usage/tests?
<bcsaller> hazmat: I tried that but didn't get it working
<bcsaller> so I moved to something more comprehensive
<bcsaller> but then its trying to build out the container for real to destroy it and wants root, so still playing with it
<_mup_> juju/cli-placement-restore r397 committed by kapil.thangavelu@canonical.com
<_mup_> restore placement cli
<_mup_> Bug #860966 was filed: Restore command line placement. <juju:In Progress by hazmat> < https://launchpad.net/bugs/860966 >
<hazmat> ^ SpamapS, adam_g if the feature removal matters to you commenting on the above bug/merge proposal would be helpful
 * hazmat hugs lbox
<niemeyer> hazmat: Let's please not resurrect --placement now
<niemeyer> hazmat: We can look at this again after the release
<SpamapS> niemeyer: actually its critical that we have it until there's something better
<SpamapS> niemeyer: understanding full well that its less than ideal, without it, we need 9 full hardware machines to test a full openstack deployment.
<niemeyer> SpamapS: We survived until it existed, so we can survive without it for this release
<SpamapS> niemeyer: orchestra didn't exist before this existed.
<niemeyer> Man.. that's exactly why I don't like that kind of half baked feature.. :-/
<bcsaller> seemed like a good idea at the time
<SpamapS> Thats just how things go... you put stuff in, then you come up with something better and you take it out. :)
<SpamapS> Just look at devfs..
<SpamapS> half baked, overly ambitious, all those things.. then udev made it all better. :)
<_mup_> Bug #860982 was filed: Rename lxc provider to local <juju:In Progress by hazmat> < https://launchpad.net/bugs/860982 >
<SpamapS> BTW, for the Oneiric packages .. I'm hacking in a 'enable-proposed' option to the environment config. Its the only way we'll ever be able to do SRU's.
<hazmat> well it should be hacked onto env-origin
<SpamapS> That would be awesome.
<SpamapS> running out of time tho
<hazmat> yeah..
<niemeyer> SpamapS: What's enabled-proposed?
<hazmat> use the proposed repo to install juju for testing
<SpamapS> niemeyer: the 11.10 packages default to installing from the distro for quite obvious reasons. We also need to be able to enable -proposed so users can test an SRU that manifests on the spawned machines.
<SpamapS> Another option would be to just make people build an AMI that enables proposed
<SpamapS> which actually might be better
<SpamapS> but a lot harder
 * hazmat watches the size of the testing community drop like a stone
<niemeyer> SpamapS: hazmat is right.. that's just another option for juju-origin
<SpamapS> niemeyer: which doesn't exist yet in r361 (the one I've been testing heavily for the last 2 days)
 * hazmat dog walks bbiab
<niemeyer> SpamapS: enable-proposed also doesn't exist
<SpamapS> right! But it's a smaller patch. :)
<niemeyer> SpamapS: heh
<SpamapS> one I fully hope to drop in a week
<niemeyer> SpamapS: That's juju-origin.. if you're planning to land this, please let's do it the right way.
<SpamapS> I can leave it out.. and we can SRU in the ability to.. SRU things.. when we need to.
<niemeyer> SpamapS: There's zero benefit in having another option
<niemeyer> SpamapS: The env-origin branch is in review, and I hope jimbaker has it ready for land
<SpamapS> simplest solution.. just leave both out
<niemeyer> SpamapS: Or maybe leave juju out? That's even simpler.
<SpamapS> You can take that up with higher powers. :)
<SpamapS> I think the simplest thing is to just leave out the ability to turn on proposed, and open it as a bug in the package.
<SpamapS> When the time comes for an SRU, we'll fix it then.
<SpamapS> Hopefully by merging in juju-origin.
<niemeyer> SpamapS: Either we merge juju-origin, or we take juju out of Ubuntu. There's no middle way.
<niemeyer> SpamapS: It's necessary for handling the source.
<SpamapS> Err, wha?
<SpamapS> It works fine w/o it
<SpamapS> We may never actually need to SRU juju
<niemeyer> SpamapS: Where does it take the packages from?
<niemeyer> SpamapS: In the server side?
<SpamapS> given the nature of the project I'd say we'd only SRU it if it was catastrophically broken anyway.
<SpamapS> distro
<niemeyer> SpamapS: Have you patched it?
<SpamapS> niemeyer: yes, I may have missed where there's another way to get that to work.
<SpamapS> niemeyer: not uploaded yet.. just testing currently
<niemeyer> SpamapS: Oh man.. that's awesome.. ok.
<SpamapS> but need to upload very soon as we're already starting to talk about juju in 11.10 in blog posts ...
<SpamapS> And the rename needs time to "settle" ..
<SpamapS> BTW, I may have to disable the test suite on build.. I get this almost every time I build in PPA: https://launchpadlibrarian.net/81242493/buildlog_ubuntu-oneiric-i386.juju_0.5%2Bbzr361-0ubuntu1~ppa3_FAILEDTOBUILD.txt.gz
<SpamapS> Failure: zookeeper.ClosingException: zookeeper is closing
<SpamapS> juju.agents.tests.test_unit.UnitAgentTest.test_agent_executes_config_changed_hook
<SpamapS> hazmat: wondering if that is related to your REUSEADDR change
<hazmat>   SpamapS its not
<hazmat> that typically signals some sort of background activity is happening when the connection is closed
 * hazmat runs in a loop
<hazmat> looks okay through a hundred iterations
<SpamapS> hrm
<SpamapS> it only ever happens on the buildd
<hazmat> SpamapS, is it consistent?
<SpamapS> it has happened the last 3 times, but I think I had a build with r361 that passed
<SpamapS> I'll upload one more time w/o disabling the test..
<SpamapS> would be good for the ppa to have this turned on
 * hazmat widens the loop scope to include the whole test class
<SpamapS> so we get told about failures like this sooner
<SpamapS> hazmat: /win 20
<SpamapS> doh
<hazmat> ;-)
<hazmat> SpamapS, so i widened the loop to the entire unit agent
<hazmat> tests.. no luck reproducing
<SpamapS> Yeah I think its something with the clean isolated environment
<SpamapS> hazmat: the next build is here , starts in 10 min  https://launchpad.net/~clint-fewbar/+archive/fixes/+build/2810619
<hazmat> my env is pretty clean  for a developer ;-)
<hazmat> i'll check back on the build
<SpamapS> buildd doesn't even have the internets
<SpamapS> we build "on the moon" just in case you have to
<niemeyer> hazmat: How's that for a test case: http://pastebin.ubuntu.com/698196/
<hazmat> SpamapS, now i remember why packaging java apps was such a pain
 * hazmat shakes fist at maven and ivy, and points to the  moon
<niemeyer> SpamapS, hazmat: Btw, the tests in the wtf run in a clean env
<niemeyer> Will get food, biab
<hazmat> niemeyer, test case looks nice, better abstractions around waiting would make that even cleaner
<hazmat> although really with lxc based tests and apt-cacher things should fly
<hazmat> also on the not-around note, i'm going to be out thursday and friday at the conference, lightning talks are tomorrow evening, so i'm going to head out a bit early to head up there and promote some good juju
<hazmat> SpamapS, aha.. i reproduced it
<hazmat> the error
<SpamapS> hazmat: race condition somewhere?
<hazmat> its some form of background activity
<hazmat> when the test shuts down
<hazmat> really, its a specific type of race due to lack of adequate control structure for termination.. let me see if i can reproduce in isolation rather than with the whole test case
<hazmat> i guess i should actually look at the test ;-)
<hazmat> hmm
<hazmat> what would be talking to the unit state as part of hook execution
 * niemeyer back
<hazmat> niemeyer, incidentally a while ago i added some patches to txzk which record the path for an op as part of the exception
<niemeyer> hazmat: Yeah, I recall something like that
<niemeyer> hazmat: is it in?
<hazmat> niemeyer, no.. its floating.. just lbox proposed it
<niemeyer> hazmat: Aha, neat
 * hazmat hugs lbox
<hazmat> niemeyer, its attached to bug 861028
<_mup_> Bug #861028: Errors should include path information. <txzookeeper:In Progress by hazmat> < https://launchpad.net/bugs/861028 >
#juju 2011-09-28
<SpamapS> hazmat: anyway, I have some other stuff to wrap up so I'm punting the upload until later tonight or tomorrow morning.
<SpamapS> hazmat: would be great if I could re-enable that test then though.
<hazmat> SpamapS, cool, thanks, i'm still looking. i haven't played with this part of the codebase in a while. still trying to understand the interaction.
<hazmat> with the path information the error comes out as ..
<hazmat> Failure: zookeeper.ClosingException: zookeeper is closing /units/unit-0000000000
<hazmat> why the unit is being touched at all is not clear
<niemeyer> hazmat: Checking it out
<niemeyer> hazmat: Very nice
<niemeyer> hazmat: I suggest adding a colon here:
<niemeyer> +                error_msg += " %s" % path
<niemeyer> error_msg += ": %s" % path
<niemeyer> hazmat: +1 either way
<hazmat> niemeyer, sounds good, i'm wondering now, if it should also have the op name
<hazmat> like it would be nice to know what was happening here when the client was interacting with the unit state and zk closed
<niemeyer> hazmat: Not sure.. I'd hope the error message + the traceback would make that clear
<hazmat> twisted traceback is a general fail
<niemeyer> hazmat: Hmm, true
<hazmat> when the reactor is involved between call and result
<hazmat> sometimes it works
<niemeyer> hazmat: "%s %s: %s" % (op, path, msg)
<niemeyer> hazmat: sounds good
<hazmat> yeah
<hazmat> cool
<SpamapS> is it possible there needs to be some kind of wait after some operation done earlier?
<niemeyer> hazmat: as in "stat /foo/bar: blah blah"
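The format being agreed on is just the op, the path, and then the original message; a one-line illustration of it (the helper name is made up, this is not the txzookeeper patch itself):

    # Illustration of the "<op> <path>: <message>" error format discussed above.
    def describe_failure(op, path, message):
        return "%s %s: %s" % (op, path, message)

    # describe_failure("get", "/units/unit-0000000000", "zookeeper is closing")
    # -> "get /units/unit-0000000000: zookeeper is closing"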
<hazmat> SpamapS, typically in the past its because there's a watch reset
<hazmat> while the test is closing.. but here
<hazmat> i can't think of any reason there'd be interaction with unit state node
<hazmat> as part of hook execution
<hazmat> oh.. unless its a workflow state change
<hazmat> aha
<hazmat> but config hooks don't modify workflow state..
 * hazmat tries to go back to rambling silently
<SpamapS> riveting, really
 * SpamapS pushes back for a bit
<hazmat> ah.. that's the problem
<hazmat> we started doing config-changed before start
<hazmat> in addition to when it actually changed
<hazmat> maybe
<hazmat> nope.. the test waits for start to complete before modifying the config
<_mup_> juju/ftests-new r4 committed by gustavo@niemeyer.net
<_mup_> A number of improvements, making ftests actually work for real:
<_mup_> - Prepare environment for tests to run juju
<_mup_> - Implemented support for test setup/teardown.
<_mup_> - Added ec2-wordpress test and relevant setup/teardown.
<_mup_> - Added unittests test.
<_mup_> - Other trivial stuff (changed file names, README, etc)
<hazmat> so the op is get on the unit state when zk is closing
<niemeyer> bcsaller: ping
<bcsaller> niemeyer: hey
<niemeyer> bcsaller: Hey!
<niemeyer> bcsaller: Just having a last look in lxc-omega
<niemeyer> bcsaller: What's this about:
<niemeyer> +hostname=`ip -f inet addr |grep 192.168| grep -v '192\.168\.1\.' | awk '{print $2;}'|cut -f 1 -d '/'`
<bcsaller> kapil and I were just talking through what I think is the last test failure there
<bcsaller> niemeyer: you can use a name like bcsaller-laptop-wordpress-0 if you change resolv.conf
<bcsaller> but we don't want to change the host level networking
<bcsaller> so we went back to getting the ips and passing those to the relations
<bcsaller> they can find eachother either way
<hazmat> niemeyer, we can change back to something that will use hostname on those platforms that support it when we add something like `unit-info --public-address`
<bcsaller> but the host can't address the wordpress node by its symbolic name w/o the change
<hazmat> else we end up with something that we can't resolve in the browser by name, and if we use ip, we hit the wrong vhost in apache
<niemeyer> bcsaller: Besides looking very cryptic (not sure why all the greps and awks are needed there), doesn't that mean the example formulas won't work on ec2 anymore?
<hazmat> sigh.. it does
<hazmat> i guess i need to make unit-info happen tomorrow
<niemeyer> Now that we just got http://wtf.labix.org/ working! ;-)
<hazmat> still no luck in tracking down this bug SpamapS reported
<hazmat> i don't see anything doing a 'get' op on the unit state
<niemeyer> hazmat: Either that or injecting the ip.. not sure about what would be easier
<niemeyer> hazmat: Injecting the ip has the advantage that we can maintain it automatically in the future
<niemeyer> Well, somewhat
<hazmat> niemeyer, injecting the ec2 public ip ?
<hazmat> that needs the md server
<hazmat> and charm specificity to the provider
<hazmat> awareness at least
<niemeyer> hazmat: That's the internal ip, I think
<niemeyer> hazmat: I mean, the one used in the relations
<hazmat> niemeyer, yeah.. i was just going to inject the private address into all unit relation nodes
<hazmat> but that doesn't help here
<niemeyer> hazmat: For that line above I think it does
<niemeyer> hazmat: It's replacing the internal ip trick
<niemeyer> hazmat: There's another one that may not help
<_mup_> juju/lxc-omega-base r364 committed by gustavo@niemeyer.net
<_mup_> Preparing lxc-omega-base for review of lxc-omega.
<hazmat> niemeyer, hmm... so instead it would grab its own address from the relation data
<hazmat> yeah.. that works
<niemeyer> hazmat: Its own address?  I think the only reason it gets its own address is to provide to the other side, isn't it?
<hazmat> niemeyer, not in this case
 * niemeyer reads the example charms again
<hazmat> its setting up its own address for an apache vhost
<niemeyer> hazmat: It is
<niemeyer>         relation-set database="$service" user="$service" password="$service_password" host="$hostname"
<niemeyer> hazmat: These are the only places where the value is used
<niemeyer> hazmat: In db-config-joined
 * niemeyer looks at another one
<niemeyer> ServerName is an interesting one.. I think this needs hostname support or similar
<niemeyer> hazmat: Yeah, I think that's the only case there
<niemeyer> There's a bogus one as well
<niemeyer> -hostname=`curl http://169.254.169.254/latest/meta-data/public-hostname`
<niemeyer> +hostname=`ip -f inet addr |grep 192.168| grep -v '192\.168\.1\.' | awk '{print $2;}'|cut -f 1 -d '/'`
<niemeyer> This isn't going to work I believe.. it's replacing the hostname by an ip
<hazmat> niemeyer, it will work
<hazmat> its still a valid server name
<niemeyer> hazmat: How come?
<hazmat> because an ip is a valid address for the server
<hazmat> which is what the servername needs
<niemeyer> hazmat: config-changed and db-relation-changed in wordpress disagree on the content
<niemeyer> hazmat: I don't think that's generally the case.. if the server is a virtual server, the ip doesn't resolve properly
<niemeyer> hazmat: and then config-changed and db-relation-changed disagree with each other either way
<hazmat> the ip isn't used for resolving (it's the server), it's used for matching a vhost against the host header, which works fine
<niemeyer> hazmat: "Additionally, ServerName is used (possibly in conjunction with ServerAlias) to uniquely identify a virtual host, when using name-based virtual hosts.
<niemeyer> "
<hazmat> indeed
<hazmat> but given a unit usage its not really germane afaics
<hazmat> we can't alias to the ip even though it would match, since the redirect goes to the name which is non routable
<hazmat> in the local provider case
<hazmat> and the apache is dedicated to the unit
<niemeyer> hazmat: and then
<niemeyer> ln -s /usr/share/wordpress "/var/www/$hostname"
<hazmat> so the ip serves as a unique identifier to the named vhost
<niemeyer> hazmat: That points to a different path than what config-changed points to
<niemeyer> hazmat: I'm really confused about how that could possibly work
<hazmat> ugh.. yeah.. that needs to be fixed
<hazmat> bcsaller, you've tried out with the ip based addressing?
<bcsaller> niemeyer: in that case there might be an error in the config hook post the change back to ip
<bcsaller> hazmat: I had tested it yesterday, yes
<hazmat> i'm going to head out.. its been a long day.. have fun folks
<bcsaller> but never set the config setting
<niemeyer> hazmat: Enjoy the resting!
<niemeyer> hazmat: are you off yet?  Had a quick question
<niemeyer> bcsaller: Just sent a (probably last) review
<niemeyer> bcsaller: I suggest reverting the example changes so we can get lxc-omega in sooner
<niemeyer> I'll step out for the day too
<niemeyer> Have a good time all
<bcsaller> niemeyer: and recommending the host level resolv.conf change?
<hazmat>  niemeyer shoot
<hazmat> too late
<hazmat> bcsaller, yeah.. that's fine for now, i don't see an alternative till we have tools that expose provider specific addresses better
<hazmat> re revert to host
<hazmat> name
<bcsaller> k
<_mup_> juju/env-origin r367 committed by jim.baker@canonical.com
<_mup_> Removed code path going through get_default_origin in tests that do not need this; began refactoring of get_default_origin for its own separate testing
<hazmat> think i've got this error nailed SpamapS
<hazmat> its unfortunate though
<hazmat> in some very small percentage of runs, the second config hook seems to run and complete before the workflow state has been fully recorded for the initial transition
<SpamapS> hazmat: sounds like its a real error and there's some lack of synchronization going on.
<_mup_> juju/env-origin r368 committed by jim.baker@canonical.com
<_mup_> Inline generation of alt apt-cache script in test
<hazmat> it's still not clear to me how it's possible; even though it's async, my understanding is we'd get back results in execution order
<hazmat> perhaps not
<hazmat> ah ic
<hazmat> reconfigure does a circle state transition
<hazmat> and we just wait on the hook execution, but the state transition is still happening
<hazmat> after the hook
<hazmat> i can probably just optimize away the state transition when cur state == next state
<hazmat> although state variables may have changed
<hazmat> er. optimize the storage of the new state
<hazmat> yeah.. that seems to work without ill effects
<hazmat> SpamapS, its not a real error, but it is a lack of sync as regards the test
<hazmat> the options are to put the sync into the test, or to obviate the need for the sync, since in this case the additional activity is superfluous to the overall state
<hazmat> we have quite a few circular transitions, transitions from a state that lead back to the same state
<hazmat> both work well
<hazmat> probably safer to add the sync in
<hazmat> the other works well but has implications for state variables becoming stale, which we only use for error states
<hazmat> something to sleep on
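The fix that eventually landed (r364 below) takes the first option: the test synchronizes on the workflow transition rather than on hook execution. A generic Twisted-style sketch of that kind of synchronization point, with entirely hypothetical names and not the actual juju test code, might look like this:

    # Generic sketch: wait on a workflow transition instead of hook execution,
    # so teardown can't race with the state being recorded.
    from twisted.internet import defer

    class TransitionWatcher(object):
        def __init__(self):
            self._waiters = []

        def transition_completed(self, state):
            # Called by the workflow machinery once the new state is recorded.
            waiters, self._waiters = self._waiters, []
            for d in waiters:
                d.callback(state)

        def wait_for_transition(self):
            # Test helper: a Deferred that fires only after the next
            # transition has been fully recorded.
            d = defer.Deferred()
            self._waiters.append(d)
            return d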
<_mup_> juju/env-origin r369 committed by jim.baker@canonical.com
<_mup_> Parse tests
<_mup_> juju/env-origin r370 committed by jim.baker@canonical.com
<_mup_> Removed unnecessary files
<_mup_> juju/env-origin r371 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> juju/env-origin r372 committed by jim.baker@canonical.com
<_mup_> Docs and additional test suggested by doc revision
<_mup_> juju/env-origin r373 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> Bug #861225 was filed: Unit relations should be prepopulated with the unit address <juju:In Progress by hazmat> < https://launchpad.net/bugs/861225 >
<_mup_> Bug #861376 was filed: cannot download charms from remote repositories <juju:New> < https://launchpad.net/bugs/861376 >
<jamespage> hi all
<jamespage> is there a nice way that I can pickup provider information from within a charm
<jamespage> I have a couple that can optimize for running in/out of ec2 for example
<jamespage> I could pass it as config - but that does not feel like good juju :-)
<rog> is niemeyer around today?
<jimbaker> rog, niemeyer is at the brazilian python conference today & tomorrow
<jimbaker> http://www.pythonbrasil.org.br/
<rog> jimbaker: thanks. i'll just have to keep on with destroying his zk interface then...
<jimbaker> rog, sounds like a good plan!
<jimbaker> rog, so how do things change in zk interface 2.0 ?
<rog> jimbaker: i'll show you a paste of the current godoc output for it if you like
<jimbaker> rog, cool, thanks
<rog> jimbaker: http://paste.ubuntu.com/698558/
<rog> jimbaker: the Server stuff is new. the event changes i've just made and think they're a good thing, but Gustavo may easily think they're horrible :-)
<jimbaker> rog, re Server, sounds nice. so now i understand your interest in log4j/the managed zk in python. it will be good to look at this code, since managing processes properly (especially misbehaved ones) will be important for porting to juju
<hazmat>  jamespage what sort of provider info
<rog> jimbaker: it doesn't do much managing - though perhaps it should. it assumes that it takes a single process only.
<jamespage> hazmat: for example, tomcat7 has some neat automagic clustering features - but they rely on multicast
<jamespage> so they don't work in ec2 - but no reason why they would not if the charm was deployed on hardware or another cloud type
<jamespage> or Cassandra has a snitch which sets up the ring based on ec2 availability zones, for example
<jamespage> but you would not run the same thing outside of ec2
<hazmat> jamespage, good stuff
<hazmat> jamespage, we'll be introducing some mechanisms for units to get additional information about their environment
<jimbaker> rog, does the Exec package ensure that any pending output to stderr/stdout is flushed and available? just curious, since this has bitten us in terms of distinguishing - we could see a program exit before all output was ready
<rog> jimbaker: if you read from stderr or stdout, you read until eof. program exit is independent of that.
<jimbaker> rog, cool, that's what i would expect
<rog> so the best course would be to read all the output, then wait for the program
<jimbaker> rog, exactly what we do now in hook invocation, except we also need to timeout. that's a lot easier to do (and test) in go, of course, so looking forward to that
<rog> jimbaker: hmm, looks like exec.Output is flawed in that respect.
<jimbaker> rog, hmm... the reason this is important is that some executables are flawed in how they background (they don't properly daemonize), so the parent can exit, but children hold onto file descriptors
<rog> yeah. it's not actually possible to solve this problem properly.
<jimbaker> rog, can you characterize what the issues are with go for this problem?
<jimbaker> because i would assume i could simply have a timeout participate in the select on the output channel. but i'm a complete go novice
<SpamapS> Wow.. changing the default from _PPA to _DISTRO breaks about 16 tests.
<jimbaker> SpamapS, it's no surprise to me
<jimbaker> SpamapS, although env-origin is waiting for review again, i  think it's ready. you might want to give it a try
<SpamapS> I think a few of them might be a little too broad in scope
<jimbaker> SpamapS, those tests are assuming that a ppa will be installed by cloud init, and so are looking for that in the corresponding cloud init yaml
<SpamapS> yeah exactly, that seems heavy handed to me.
<rog> jimbaker: a timeout is quite possible. but it's not a great solution - you either risk losing legitimate output or you introduce an artificial delay each time.
<jimbaker> so they are being very fussy, but that's actually important
<rog> jimbaker: there's no specific Go problem here, i think.
<jimbaker> rog, this is for bad code. i don't know of any good options because of that
<jimbaker> bad *external to us* code, that is
<SpamapS> jimbaker: I tend to agree with that for full end-to-end integration tests.. but for unit tests.. if the individual bits are tested, I don't see why the whole has to be so rigid.
<SpamapS> But, I digress
<jimbaker> rog, so it sounds like go should work just fine. thanks for the feedback, it was very helpful!
<SpamapS> I think I'll just upload w/o the DISTRO change, and we'll change it and turn off the test suite if we can't get the origin branch into 11.10
<jimbaker> SpamapS, sounds reasonable
<rog> jimbaker: why do we have to stop reading the output from a program anyway? it's useful to keep collecting stdout/stderr - it might not be correctly being sent to a log file.
<SpamapS> It can't ship w/ the PPA linking.. thats just not going to work.. but it can ship w/o the test suite.
<jimbaker> although i do hope it makes it, because otherwise the ppa setting will not be user available there
<jimbaker> ok, well that makes me feel better
<SpamapS> jimbaker: if they want the PPA... they should.. use the PPA. ;)
<jimbaker> SpamapS, a perfectly valid point
<SpamapS> for the 0.001% of users who will be running juju without root on their own machines.. they can use juju-branch
<rog> BTW, it seems that Output doesn't have the problem i thought it did. i did exec.Command("sh", "-c", "(sleep 3; echo hello) &").Output() and it waited for 3 seconds.
<rog> dunno quite how it does it though. i'll look more closely.
<jimbaker> rog, that's a good question, it seems more likely this is a test determinism issue that is bubbling up to how we do things
<jimbaker> rog, so go allows us to change the parameters of this. it may be still desirable to wait around, but much less aggressively
<rog> jimbaker: yeah. in fact for the purposes of this kind of thing, i'd prefer it if programs wrote log messages to stdout, because then they can be easily made available remotely, rather than having to delve into app-specific log files.
<rog> but i'm not sure that's gonna happen in general
<SpamapS> hazmat: is there a bug # for that test fail? I'd like to document it in the package so we can check on its progress and re-enable the test when it is fixed.
<jimbaker> the precedent is certainly to do the daemonization + log to file, but for charms we are going the direction you suggest
<hazmat> SpamapS, i'm just going to do the trivial fix on trunk doing the obvious thing, sync primitives for the test
<rog> jimbaker: ah, i see how the exec package does it now. it's quite neat.
<jimbaker> rog, ?
<SpamapS> hazmat: ahh ok. Thanks!
<rog> jimbaker: http://golang.org/src/pkg/exec/exec.go#L244 and one c.goroutine in this case is doing the io.Copy (until EOF), and c.Wait waits for the results of each goroutine.
<_mup_> Bug #861539 was filed: remote charm revision checking extremely inefficient <juju:In Progress> < https://launchpad.net/bugs/861539 >
<hazmat> jimbaker, bcsaller, fwereade... could i get a +1 on this trivial.. http://paste.ubuntu.com/698569/
<rog> jimbaker: there's nothing to stop one doing a custom timeout too if that's desired
<bcsaller> hazmat: lgtm
<jimbaker> hazmat, +1
<hazmat> thanks guys
<_mup_> juju/trunk r364 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] fix config-changed test sync primitive to wait on workflow transition instead of hook execution.[r=bcsaller,jimbaker]
<hazmat> SpamapS, that should fix it
<SpamapS> hazmat: sweet thanks!
<SpamapS> jimbaker: remind me again.. your new branch will set its origin based on where the 'juju' package came from, right?
<jimbaker> SpamapS, correct
<SpamapS> Cool. Really looking forward to testing that out soon.
<SpamapS> SOON. :)
<SpamapS> This actually makes perfect sense.. there has to be a 'juju' package in the archive before the distro method can even be tested. :)
<jimbaker> or to use lp:juju, if installed from a branch. maybe it should use the actual branch, but this returns to what it was before
<SpamapS> jimbaker: people using branches are on their own. :)
<jimbaker> SpamapS, apparently we are ;)
<jimbaker> SpamapS, correct about the distro dependency. i have pseudo mocked it (i could explain the distinction from using our standard mocker lib), but it waits until it gets in the distro before it can be truly tested
<jimbaker> SpamapS, another outcome is that if you do insist in environments.yaml that juju-origin be distro, you will get an unusable ec2 instance
<jimbaker> until that change is made ;)
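In other words, the setting under discussion would look roughly like this in environments.yaml (illustrative only; with "distro" it only becomes usable once juju is actually in the Ubuntu archive):

    # Illustrative only: forcing where spawned machines install juju from.
    environments:
      sample:
        type: ec2
        juju-origin: distro   # or ppa, or a branch such as lp:juju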
<SpamapS> jimbaker: there's another way to test it which is to create an AMI that adds an apt source which has the package already.
<jimbaker> SpamapS, ahh, stub it the other way. sounds good
<SpamapS> jimbaker: better tho.. is to just have it in the distro. :)
<SpamapS> I do think we may need to explore the idea that there's a need to be able to add arbitrary cloud-config data...
<SpamapS> we can't possibly think of everything that people might want to do before juju starts.
<jimbaker> SpamapS, that does make sense
<SpamapS> But that may be too low level.. not sure.
<rog> jimbaker: here's an example of a way you might do the timeout thing in Go
<jimbaker> but at least it's centralized
<rog> http://paste.ubuntu.com/698580/
<jimbaker> rog, thanks!
<rog> jimbaker: and here's a version without the concurrent writer nastiness
<rog> http://paste.ubuntu.com/698581/
<SpamapS> we need to add go to paste.ubuntu.com
<rog> personally, i don't like syntax highlighting, so i'm not sad it's not there :-)
<SpamapS> wow
<SpamapS> I like, can't live without it anymore.
<rog> i spent about 10 minutes earlier today working out how to turn it *off* in a diff tool i was using. i find it so distracting!
<SpamapS> I think thats a global vs. details thing.. if you're global like me, it helps bring order to the big picture. if you're detail oriented, I can see where it would be distracting
<jimbaker> at least there's the /plain option on the pastebin
<jimbaker> but i prefer the syntax highlighting
<rog> i think when the syntax is nice, the program structure itself is quite evident. the only caveat is long multi-line quoted strings, where i've been caught out before
<rog> (but this is definitely a religious issue :-])
<SpamapS> rog: the best syntax formatting in the world just looks like a mass of gray gravy to me. :-P
<SpamapS> I think this is less religious than editors and stuff.. its binary.. you like it or you don't. :)
<Aram> hi.
<Aram> so, niemeyer won't be here today?
<Aram> regarding the syntax hilight thing above, don't care about it :-).
<Aram> my editor doesn't support it.
<adam_g> hey! so if a juju node (running a service unit, not the bootstrap node) reboots, what needs to be run to get juju back up and communicating with the bootstrap node so that future hooks execute?
<SpamapS> adam_g: Thats actually a massive problem that has no good solution IIRC
<adam_g> ''
<SpamapS> adam_g: you can reconstruct the startup args from the cloud-init data
<adam_g> SpamapS: thats what ive been trying to do
<hazmat> well we can drop an upstart file down for the unit and machine agents
<hazmat> which solves the structural issue
<SpamapS> adam_g: hazmat has a better handle on this, but there's also a problem where any events that happened while the agent was gone will be missed
<hazmat> the problem is we lose some state and end up with redundant and missed hook executions
<adam_g> right
<SpamapS> This is one of those production gotchyas of a critical nature. :-/
<hazmat> i.e. we have some undefined behavior till the unit agent supports disk persistence
<hazmat> its a very critical prod issue
<adam_g> im just trying to reconstruct the services to a point where i can remove the relations and readd them, but so far those aren't even firing
<SpamapS> hazmat: I still think you're better off making that persistence happen in zookeeper
<hazmat> SpamapS, adam_g i didn't see any comments on the merge proposal for resurrect --placement, in the absence of that i'm going to let it drop
<hazmat> SpamapS, true
<SpamapS> hazmat: don't let it drop.. its desperately needed.. we're BUSY
<adam_g> hazmat: ^
<hazmat> SpamapS, i need some firepower/reinforcements on it
<SpamapS> hazmat: understood..
<hazmat> understood re busy
<SpamapS> r361 is about to land in 11.10 btw
 * rog is calling it a day. see ya tomorrow.
<hazmat> rog, have a good one
<hazmat> SpamapS, my schedule is pretty clear outside of testing next week, i'll start addressing the restart issue then
<SpamapS> hazmat: that would be pretty awesome. :)
<SpamapS> adam_g: so I think right now the answer is "reinstall" ;)
<adam_g> i know ive run into this before and got it to work after beating up the agents on every node
<robbiew> bcsaller: hazmat: SpamapS: can one of you tell me how likely the LXC stuff will land in time for 11.10? or is it already too late, headed for SRU/PPA
<hazmat> robbiew, i think we got most of the critical branches approved yesterday, we've got about 4 branches pending merges which can probably get done by end of day; that will at least allow usage, but there will be some usage caveats
<robbiew> hazmat: ack
<robbiew> thx
<hazmat> bcsaller, can you go ahead and merge the lxc-lib-clone and omega?
<hazmat> i guess i'm supposed to do another look at omega per the review comment
<hazmat> bcsaller, it doesn't look like the fixes for the test failures have been pushed
<hazmat> i'm going to head out in an hr to get to the conference
 * hazmat heads out to the conference
<_mup_> Bug #861821 was filed: setup.py needs to include some package_data <juju:New> < https://launchpad.net/bugs/861821 >
<fwereade> bcsaller, hazmat, jimbaker: quick opinion
<fwereade> __repr__ returning a string that isn't obviously a <FancyType object at 0xdeadbeef>, but still doesn't eval to the original object
<fwereade> evil?
<bcsaller> fwereade: something that __str__ can't do?
<fwereade> bcsaller: hm, I may have wasted your time, I misread some context, it turns out the object in question will always be a str anyway
 * fwereade looks embarrassed
<fwereade> hazmat, jimbaker: ignore me
<jimbaker> fwereade, no worries
<jimbaker> but now that you mentioned repr vs str, i wonder if we have some lurking bugs with unicode
<jimbaker> anyway, we will get to them. or perhaps not, go washes this issue away
<jimbaker> fwereade, it's also possible that we can control the logging formatting process better - that's where of course i'm concerned
<fwereade> jimbaker: that's a very nice point about go, and a very nice feature of go that hadn't really crossed my mental horizon
<jimbaker> fwereade, yes, the standardization that everything is utf8 when serialized, that's the right one
<fwereade> jimbaker: I'm sure we have some lurking unicode bugs, but most of the stuff I'm working with is rather restricted in character set anyway, so I don't *think* I'll be making anything worse, at least ;)
<jimbaker> fwereade, yeah, if you can control those names. i'm thinking of charms with unicode paths, or stuff coming from environments.yaml
<fwereade> jimbaker: unicode names are considered invalid ATM
<fwereade> jimbaker: charm names
<fwereade> jimbaker: environments.yaml scares me a lot more ;)
<jimbaker> fwereade, so one test we could try is just to name an environment something in chinese. would juju break?
<fwereade> jimbaker: probably :/
<niemeyer> Yo!
<niemeyer> bcsaller: ping
<bcsaller> niemeyer: hey
<niemeyer> bcsaller: Heya
<niemeyer> bcsaller: 365 seems to have broken trunk tests: http://wtf.labix.org/
<bcsaller> niemeyer: omega will fix that
<bcsaller> its api drift from pre/post reqs
<niemeyer> bcsaller: When is it going in?
<bcsaller> still working on a test case thats causing issues, but soon
<niemeyer> bcsaller: Ok.. having trunk broken is a big deal, so if it's going to take a while we need to fix the tests in trunk or revert the change
<bcsaller> understood, but those features are useless, inaccessible and never used w/o the stuff in omega
<bcsaller> and should be fixed soon
<niemeyer> bcsaller: The point isn't that the features are useless, but just that we have trunk broken and other people cannot trust on the tests anymore
<niemeyer> bcsaller: Such large changes should have a test run before committing, so that we never get in such a situation
<bcsaller> I'd only been running the branch tests in isolation, not thinking that some of what the branch depends on had been merged already
<niemeyer> Hmm.. I'm going to link revision numbers to Launchpad in the wtf to be easier to follow it up
<fwereade> niemeyer, wb
<niemeyer> fwereade: Thanks!
<niemeyer> fwereade: Sorry for the reviews chunked up
<fwereade> niemeyer: np, much appreciated
<fwereade> niemeyer: there's one quibble I have
<fwereade> niemeyer: charm url/id
<niemeyer> fwereade: I imagined ;)
<fwereade> niemeyer: I'm fairly sure that a charm id is a *string* that can be parsed into a CharmURL with a revision
<fwereade> niemeyer: if we pass CharmURLs throughout, we need to do a slightly annoying amount of str-ing/parsing when we yaml/unyaml them
<niemeyer> fwereade: So is cs:~user/oneiric/wordpress-1 a charm URL, or is it not?
<fwereade> niemeyer: it's a charm url, but not an instance of CharmURL
<niemeyer> fwereade: Exactly
<fwereade> niemeyer: does that make any sense?
<niemeyer> fwereade: It does, and I agree..
<fwereade> niemeyer: cool :)
<fwereade> niemeyer: so I feel that when we're working with charm ids, they should be strings
<fwereade> niemeyer: when we need them to be urls, we can parse them
<niemeyer> fwereade: The point I was making is just that a string representation of a charm url is still a charm url
<fwereade> niemeyer: I'm happy about losing CharmIdError
<niemeyer> fwereade: What's a charm id?
<fwereade> niemeyer: it's a string, IMO, that parses to a CharmURL with a non-None revision
<niemeyer> fwereade: If it's a string that parses into a charm url with a non None revision, it's simply a charm URL with a non-None revision
<fwereade> niemeyer: I think we may be in violent agreement again then :)
<fwereade> niemeyer: I have formed the impression, which may be mistaken, that you'd prefer to represent charm ids as CharmURL objects wherever possible
<fwereade> niemeyer: is this correct?
<niemeyer> fwereade: The only point of contention is that there's no reason to name this a "charm id".. it's already a charm url..
<niemeyer> fwereade: Yeah, I certainly think it'd be better to use charm urls throughout the code
<fwereade> niemeyer: including string/parsing them when they go into and out of yaml when stored in zookeeper?
<fwereade> niemeyer: ("str-ing/parsing" might be clearer)
<hazmat>  jimbaker the unicode issue isn't actually a problem for juju atm, its a problem when we interact with external systems like aws
<niemeyer> fwereade: I hadn't thought about that, to be honest, but it doesn't sound like a bad idea.. feels easy enough to have an internal schema type within juju.charm.url that would do that for us
<niemeyer> fwereade: What do you think?
<fwereade> niemeyer: I'm ambivalent: I tried it as part of the stuff I ended up reverting (with manual str-ing/parsing), and *that* was quite nice, but then having to explicitly str them again when doing juju status felt wrong
<niemeyer> fwereade: My main previous gripe was about the distinct terminology for what is a single thing
<niemeyer> fwereade: Hmm.. that sounds like a particularity of juju status only, and IIRC it's trivial to add support for stringifying them globally through the yaml mechanics
<fwereade> niemeyer: cool -- I'd got the impression that the occasional use of yaml.safe_dump was an explicit disavowal of that technique's legitimacy
<niemeyer> fwereade: Hmm.. that's different
<niemeyer> fwereade: I'm suggesting the precise opposite
<niemeyer> fwereade: IOW, turning a charm url into a plain string
<niemeyer> fwereade: safe_dump avoids dumping Python-specific information into the yaml output
<niemeyer> fwereade: WIth all that said, I'm not specially concerned about this right now, to be honest
<fwereade> niemeyer: indeed, I think I've derailed again :/
<niemeyer> fwereade: The only thing I'd like to reach agreement right now, and that impacts the branch, is the terminology and concepts we're agreeing to
<niemeyer> fwereade: I want to kill the idea that cs:~user/series/foo-1 is both a url and an id, since it's the same thing
<fwereade> niemeyer: yep, but cs:~user/series/foo is a url but not an id
<fwereade> niemeyer: and therefore, IMO, there's a useful distinction to be maintained
<niemeyer> fwereade: It's just a url, because there's no such thing as a charm id! :-)
<niemeyer> fwereade: The former is a url with a revision.. the latter is a url without a revision
<niemeyer> fwereade: Then, we can have curl.with_revision(n).. and curl.assert_revision()..
<niemeyer> fwereade: To make the problem easier to deal with, if I understand your concerns
<fwereade> niemeyer: that I'm 90% sure I like :)
<fwereade> niemeyer: cool :)
<niemeyer> fwereade: Woohay agreement! :-)
<fwereade> niemeyer: and just to clarify: whenever we're loading yaml that contains a charm url, we pipe it through a schema that ensures it comes out the other side correctly
<fwereade> niemeyer: and when we're yamling one, we automatically turn it into a string
<niemeyer> fwereade: This _sounds_ like a good idea, but IMO it doesn't have to be done in that branch necessarily
<niemeyer> fwereade: I don't know how much trouble it'd be to revise all code so that all charm urls are passes as objects rather than strings
 * hazmat ponders
<hazmat> upgrade and deploy are the main ones
<fwereade> niemeyer: it's not *that* much work to do it manually, and it does make some things simpler
<niemeyer> fwereade: It sounds ok to have charm urls strings and values mixed up, and then clean it up as we go, in case you feel it would be painful
<niemeyer> fwereade, hazmat: Cool, sounds awesome then
<fwereade> niemeyer: but I'd rather get these branches put to bed in a *working* state asap
 * hazmat goes back to practicing talk
<niemeyer> fwereade: I know that the schema coercer couldn't be more trivial since we already have parse..
<niemeyer> fwereade: +1!
<hazmat> fwereade, so you're talking about doing a custom yaml marshaller for charm urls?
<fwereade> hazmat: yeah
<niemeyer> http://wtf.labix.org/  is in a proper order, and has links to the revisions now..
<hazmat> fwereade, sounds good.. keeping safe_dump usage is also good, else pyyaml.. will take any random python object..
<fwereade> hazmat: I must have misread something -- I had the impression that safe_dump disallowed non-yaml-native types, even if there were custom marshallers
<hazmat> fwereade, ah.. perhaps i'm not sure.. but without safe_dump.. it will pickle anything even without custom marshallers
<fwereade> hazmat: I'll double-check and figure it out
<fwereade> hazmat: it's not for a *current* branch anyway
<fwereade> hazmat: not quite ;)
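A rough sketch of the CharmURL shape being converged on here: an object that stringifies to the canonical url form and offers with_revision/assert_revision. Only those two method names come from the discussion; parsing and validation are omitted and everything else is illustrative, not the juju.charm.url implementation:

    # Rough sketch only, for illustration of the discussion above.
    class CharmURL(object):
        def __init__(self, scheme, user, series, name, revision=None):
            self.scheme = scheme
            self.user = user
            self.series = series
            self.name = name
            self.revision = revision

        def __str__(self):
            # e.g. "cs:~user/oneiric/wordpress-1" or "cs:oneiric/wordpress"
            s = "%s:" % self.scheme
            if self.user:
                s += "~%s/" % self.user
            s += "%s/%s" % (self.series, self.name)
            if self.revision is not None:
                s += "-%d" % self.revision
            return s

        def with_revision(self, revision):
            # Return a copy carrying an explicit revision.
            return CharmURL(self.scheme, self.user, self.series,
                            self.name, revision)

        def assert_revision(self):
            # Fail loudly when a revision is required but missing.
            if self.revision is None:
                raise ValueError("charm URL has no revision: %s" % self)
            return self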
<fwereade> niemeyer: another thing, hopefully quick
<fwereade> niemeyer: resolve() is indeed not quite right
<fwereade> niemeyer: CharmURL.infer and some sort of get_repository(url) are distinct
<niemeyer> fwereade: Is this about my review point?
<fwereade> niemeyer: but it would be much more convenient to do that in the check-latest-formulas branch, where resolve() changes anyway
<fwereade> niemeyer: yep
<niemeyer> fwereade: In that specific case, I was complaining more about location than anything else
<fwereade> niemeyer: ah, ok, cool
<niemeyer> fwereade: IMO, the charm url management should be within something like juju.charm.url
<niemeyer> fwereade: and it should be fairly self-contained
<niemeyer> fwereade: resolve() knows about the interface of repositories etc
<niemeyer> fwereade: So it feels like it should be close to them
<niemeyer> fwereade: Otherwise we get in a situation where we have repositories using urls that use repositories
<fwereade> niemeyer: yep, they;re a bit mixed up
<fwereade> niemeyer: I'll sort it out :)
<niemeyer> fwereade: Thanks man
<fwereade> niemeyer: thank you, I think I'm happy with everything now
<niemeyer> fwereade: Woot
<fwereade> niemeyer: although you'll no doubt hear from me about *something* :p
<niemeyer> fwereade: Rest assured I take that _very_ positively! ;-)
 * niemeyer orders some food in the hotel room
<hazmat> sweet 15s to spare on my talk, must talk fast :-)
<niemeyer> hazmat: Wow.. that's being _precise_ ;-)
<niemeyer> brb
<niemeyer> Dinner.. biab
<SpamapS> hazmat: where are you speaking?
<SpamapS> https://launchpad.net/ubuntu/+source/juju
<hazmat> SpamapS, surge 2011
<jimbaker> SpamapS, looking good
<hazmat> baltimore
<SpamapS> hazmat: oh cool! I'm a Schlossnagle fanboi.. so ^5 them from one of their first customers. ;)
<hazmat> stepping out.. they're starting
<hazmat> SpamapS, yeah.. me too. i just saw a new presentation he gave earlier this year in paris on async architectures
<hazmat> SpamapS, http://www.infoq.com/presentations/Building-Scalable-Systems-Asynchronous-Approach
<_mup_> Bug #861928 was filed: provisioning agent gets confused when machines are terminated <juju:New> < https://launchpad.net/bugs/861928 >
#juju 2011-09-29
<SpamapS> Hmm.. weird problem here.. after a successful install hook..
<SpamapS> http://paste.ubuntu.com/698811/
<jimbaker> SpamapS, trying to think where to start there with that (twisted's reactor structure makes such output frequently useless)
<jimbaker> SpamapS, is it related at all to the bug you just reported (bug 861298)?
<_mup_> Bug #861298: libasound2 addition to linaro-x11-base seems suspicious <Linaro Seeds:New> < https://launchpad.net/bugs/861298 >
<SpamapS> jimbaker: I'm betting that the ZK connection failed for some reason... and it just needs to be handled/reconnected
<jimbaker> (wrong bug # obviously)
<SpamapS> jimbaker: same env, but I don't think its related no
<jimbaker> SpamapS, so is the agent still running on that node?
<SpamapS> jimbaker: oo, darn, I destroyed the env.. didn't check
<jimbaker> SpamapS, making agents robust against failure with upstart is something we discussed, i assume it's still not there yet
<jimbaker> SpamapS, no worries, random failures are easy enough to recreate ;)
<jimbaker> (of the sort seen here, seriously)
<SpamapS> jimbaker: I'll report it if I see it again
<SpamapS> this particular case is a *very* busy nova compute machine
<SpamapS> so its entirely possible that ZK was overwhelmed or something else weird happened
<jimbaker> SpamapS, ahh, that will much more likely expose things
<jimbaker> SpamapS, also i need to review https://code.launchpad.net/~hazmat/txzookeeper/errors-with-path/+merge/77254, this looks good for getting somewhat better info - knowing the actual path is a pretty big signal, especially as we add more nodes
<SpamapS> +1 for that
<SpamapS> useful errors was a theme of an entire month of development we did at my last job
<SpamapS> Because we had so many confused 3 hour pager responses where you spent 3 hours just trying to figure out WTF happened
<fwereade> jimbaker, SpamapS: upstart is rather critical for orchestra too, I think it's likely to be top of my list as soon as I'm not thinking about the charm store
<jimbaker> i do wonder if niemeyer is trying to intentionally pun on wtf with the waterfall at http://wtf.labix.org/wtf/
<jimbaker> i'm sure he is
<niemeyer> jimbaker: Yes :-)
<fwereade> potential recursive acronym, too: WTF Test Failures
<niemeyer> fwereade: Hah, nice
<fwereade> niemeyer: if anyone asks, you could say that, and act all mystified -- "does WTF stand for something else as well? goodness me!"
<niemeyer> fwereade: Totally :-
<niemeyer> )
<jimbaker> biab, i'm going to hang out over at the python in the cloud meetup going on locally in boulder shortly
<jimbaker> someone will be demoing some sort of python mgmt tool for running apps in ec2
<fwereade> niemeyer: ok, I thought I was ready to merge resolve-formula-names (hm, which should be called merge-charm-names)
<fwereade> niemeyer: then I thought the breaking UI changes might need some documentations
<fwereade> niemeyer: ...then I thought "NO TIME NO TIME"
<fwereade> niemeyer: er, opinions?
<niemeyer> fwereade: Hm
<niemeyer> fwereade: One thing we definitely have to do is sending a mail to ensemble@ pointing out what is going on, how the interface is changing, and why
<fwereade> niemeyer: indeed -- shall I try to get that in before the merge?
<niemeyer> fwereade: Yeah, it feels like a good idea to notify people in sync with the merge
<niemeyer> fwereade: So that trunk users will get to know about it
<niemeyer> fwereade: and PPA users in a bit
<fwereade> niemeyer: ok, I'll write something up now
<fwereade> niemeyer: still ensemble@lists.ubuntu.com then?
<hazmat> Talk done!.. wahoo..
<hazmat> its very easy to slip up charm/formula
<niemeyer> jimbaker: and the saga continues..
<niemeyer> hazmat: Woohay
<niemeyer> hazmat: how was it?
<niemeyer> fwereade: Sorry
<niemeyer> fwereade: juju@
<niemeyer> fwereade: As hazmat just said, easy to slip :)
<fwereade> niemeyer: absolutely :)
<fwereade> niemeyer: I wondered if you meant it, because I could swear I signed up to juju@ but still seem to be getting all my mails from ensemble@
<niemeyer> fwereade: Hmm.. there's an alias between them so it continues working
<niemeyer> fwereade: I don't think we should be getting emails from the old address, though
<fwereade> niemeyer: don't worry about it, I'll double-check, maybe I got confused around the changeover
<niemeyer> fwereade: What might be going on is that people (like me!) are still using the old address
<fwereade> niemeyer: ah, could be
<jimbaker> niemeyer, the company is opdemand, they are currently demoing wordpress ;)
<jimbaker> http://www.opdemand.com/
<jimbaker> they publish information from one tier to another tier
<niemeyer> jimbaker: pvt
<hazmat> ouch, just got dissed hard by Adam from Opscode, from the stage
<hazmat> idempotency... see me after class
<niemeyer> hazmat: Very noble of him :-)
<niemeyer> hazmat: The best answer to a public insult is making juju rock harder
<_mup_> juju/env-origin r374 committed by jim.baker@canonical.com
<_mup_> Fixed review points
<_mup_> juju/env-origin r375 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<jimbaker> oh, i'm glad i just tried running ./test against trunk, the failures are there, not in the merge with env-origin
<SpamapS> we should rename the juju binary to just jj
<jimbaker> SpamapS, please don't give anyone any ideas on that ;)
<jimbaker> SpamapS, i assume you're seeing the same failure as here: http://wtf.labix.org/366/unittests.out.FAILED
<jimbaker> looks like it was introduced in r365
<jimbaker> bcsaller, have you tried looking at trunk since your merge?
<bcsaller> jimbaker: yeah, it's a known issue, sorry. Still trying to get omega in the proper state to merge, it was missing some tests and we know how testing goes... esp. with this level of system interaction
<jimbaker> bcsaller, ok, thanks for the update
<bcsaller> jimbaker: it happened because lxc-lib is a prereq to omega but other things depending on it got merged first
<jimbaker> bcsaller, ok, makes sense
<jimbaker> presumably one of these things corrects the ppa name too
<jimbaker> although i would expect the juju-origin support in my env-origin branch to take care of that regardless
<bcsaller> jimbaker: it will, but I think I will do that in a subsequent branch, using the ppa to get started with local dev (and fixing trunk tonight) seems fine
<_mup_> juju/env-origin r376 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<jimbaker> bcsaller, sounds like a good plan to me
<bcsaller> cool
<SpamapS> jimbaker: I don't like typing juju .. its all right index finger.. jj would be a nice shortened version. :)
<SpamapS> we can just symlink it in the package. :)
<jimbaker> SpamapS, i wonder if my typing here is canonical - i use index for j, middle for u, so it's not as bad in this case
<jimbaker> (that's probably a misuse of canonical...)
<jimbaker> (more like standard i guess)
<jimbaker> fwereade, is local resolution of charms now broken?
<jimbaker> i noticed the change to the series, but i'm seeing this on the machine agent
<jimbaker> fwereade, so i'm seeing this, http://paste.ubuntu.com/698912/ - anyway, i guess it will have to wait until tomorrow, just using irc async here :)
<jimbaker> SpamapS, apparently my typing is nonstandard. darn. http://en.wikipedia.org/wiki/Touch_typing
<SpamapS> jimbaker: very nonstandard, I don't even know how you reach u with your middle finger
<SpamapS> fwereade: one thing about this breakage.. now any version before this is completely broken because the PPA package wants the local: prefix
<SpamapS> CharmStateNotFound: Charm 'local:hadoop-slave-1' was not found
<SpamapS> thats not something you mentioned in your email. :-/
<_mup_> juju/env-origin r377 committed by jim.baker@canonical.com
<_mup_> Remove tabs in rst file
<rog> hurray, all tests pass
<fwereade> rog: belated /cheer
<rog> :-)
<rog> my computer made the requisite sound effect when you mentioned my name
<rog> well ok, it was a sort of bleep, but close enough
<fwereade> rog: haha, awesome
<fwereade> SpamapS: hm, I hadn't considered that side effect
<fwereade> SpamapS: I kinda feel that we're still (just barely) in "developing, expect breakage" mode, and so it's (just barely) acceptable, if annoying
<fwereade> SpamapS: am I right in thinking that error shows up when you have an env running, you upgrade juju locally, and then try to issue commands to the env running the old version?
<fwereade> SpamapS: ...or, yes, also if you try to run any older version against the PPA
<fwereade> SpamapS: is my interpretation sane?
<_mup_> Bug #862415 was filed: Juju bootstrap node disappearance <juju:New> < https://launchpad.net/bugs/862415 >
<_mup_> Bug #862417 was filed: Cloud Foundry server charm was not found on the instance <cloud> <foundry> <juju:New> < https://launchpad.net/bugs/862417 >
<_mup_> Bug #862418 was filed: Add a way to show warning/error messages back to the user <juju:New> < https://launchpad.net/bugs/862418 >
<lynxman> SpamapS: whenever you're awake, let me know if you have the final juju package for oneiric :)
<_mup_> Bug #862422 was filed: Add a "block" add/remove unit hook <juju:New> < https://launchpad.net/bugs/862422 >
<rog> i can never remember where to create a new issue for code review...
<rog> is it standard to always create a new branch when making some changes? if i haven't done that, how can i change the branch name?
<fwereade> rog: depends: if I'm doing a significant chunk of work I'll start by branching, if I'm just (say) addressing a minor review point in an existing branch I'll continue the existing one
<fwereade> rog: there's a line between them but I find its location varies with my current levels of optimism, forgetfulness, foolishness
<fwereade> rog: offhand, I'm not sure how best to retroactively branch something without the process being somewhat manual
<fwereade> rog: you could just bzr diff --old=[revision you wanted to branch from] > somewhere
<fwereade> rog: bzr revert -r [that revision again]
<fwereade> rog: bzr commit
<fwereade> rog: ...and then branch from there and apply the patch
<fwereade> rog, and everyone else: if there's a better way I'd love to hear it :)
<rog> at the moment, i'm trying a branch and then a merge with the changes i made in the old branch
<rog> dunno if it'll work. we'll see.
<fwereade> rog: I think I tried something like that once, and it went wrong -- if you then revert the parent branch, and *then* merge from it again, I think it'll try to apply your subsequent unchanges
<fwereade> rog: ...er, if you see what I mean
<rog> fwereade: hmm, not sure i do. is that something i shouldn't do, or something i will inevitably do?
<rog> fwereade: it *seems* to have worked
<rog> except the log history is no longer linear, which might be a problem
<fwereade> rog: possibly I misunderstood what I did, or what you did
<rog> i did: cd $HOME/gozk; bzr branch lp:gozk my-upcoming-revision; cd my-upcoming-revision; bzr merge ../zk; bzr commit
<rog> where $HOME/gozk/zk was the place i'd been doing the edits
<fwereade> rog: ah, yes, that sounds fine
<fwereade> rog: I'd been imagining a situation where you'd made several commits already to the wrong branch
<rog> fwereade: i made commits, but hadn't pushed
<fwereade> rog: still sounds fine :)
<rog> cool
 * fwereade crosses fingers
 * rog also crosses his fingers.
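For reference, the manual "retroactive branch" sequence fwereade describes above, written end to end -- a rough sketch only, with a placeholder revision number (100) and branch names, assuming the extra work was committed to the wrong branch but not pushed:
    $ cd trunk-checkout
    $ bzr diff -r 100 > /tmp/pending.patch      # 100 = revision you meant to branch from
    $ bzr revert -r 100
    $ bzr commit -m "move in-progress work to a feature branch"
    $ bzr branch . ../feature-branch
    $ cd ../feature-branch
    $ patch -p0 < /tmp/pending.patch            # re-apply the pending work here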
<hazmat> bcsaller, none of the things that had lxc-lib as a pre-requisite got merged first.. they were using what was already committed to trunk
<hazmat> lxc-lib-clone changed the api
<jimbaker> hazmat, yes, that's what we are seeing on wtf.labix.org right now
<hazmat> jimbaker, it should be okay after lxc-omega gets merged, but breaking trunk isn't
<jimbaker> hazmat, agreed with that for sure
<jimbaker> (invariably when trunk is broken and i do a merge, i blame my branch first. goose chase, not productive... anyway, it happens)
<SpamapS> fwereade: yes, all older versions of juju break when they deploy because they pull from the PPA
<SpamapS> fwereade: breaking stuff is fine, but telling us about it *well before* committing would be quite helpful.
<SpamapS> why are we committing to trunk without running the test suite?
<SpamapS> Every time I've merged one of my little fixes in, I merge and then run the test suite. :-P
<jimbaker> SpamapS, fwereade, agreed. and announcing it here would be nice. the other thing that bit me last night
<fwereade> SpamapS, jimbaker: I'm sorry, it was entirely wrong to assume anyone else had the faintest idea what I was up to :(
<fwereade> it never really crossed my mind that niemeyer and I had talked about it lots, but not with everyone
<jimbaker>  fwereade, no worries, just some suggestions to get everyone in the loop first. this channel has its deficiencies, but i poll it much more frequently than email ;)
<jimbaker> (much more signal for what i immediately need to know)
<fwereade> jimbaker: definitely
<SpamapS> At this point nobody can test the packages in 11.10 properly.. it only works by using juju-branch: lp:ubuntu/oneiric/juju   but thats not the end of the world
<SpamapS> The email was great.. and it's the only reason I didn't go "WTF!!" right away.. I knew what was up. It's just that the scope of the breakage may not have been fully explored. Given the velocity you all are maintaining, I'm not surprised (or upset :)
<fwereade> SpamapS: so, the stuff that's currently in 11.10 is old, and therefore broken; I had the impression we were planning to slip in a last-minute update (er... tonight?) with lxc stuff, and origin stuff ...and this stuff
<fwereade> SpamapS: and that implies the breakage is at least somewhat temporary
<fwereade> SpamapS: have I misunderstood the plans?
<SpamapS> we are planning to update again. The point of updating before then though, was to allow testing of what was done up until now.
<fwereade> SpamapS: indeed :(
<SpamapS> Don't worry, I totally understand the implications and reasons behind this incompatible change. I'm only frustrated that I pushed so hard to get a somewhat new version in that is totally broken. :-/
<SpamapS> "old" is the revision from Monday morning btw.
<fwereade> SpamapS: and now you're stuck only able to test the moving target that is trunk :(
<SpamapS> No I can test using juju-branch:
<fwereade> SpamapS: heh, "old" is everything before this morning :(
<fwereade> SpamapS: ah, good
<fwereade> SpamapS: which is to say, "less bad"
<fwereade> SpamapS: in future I will definitely make sure people know of breaking changes well in advance (if I can't avoid making them at all)
<fwereade> SpamapS: is there anything I can do to mitigate the pain now?
<SpamapS> no. :-/
<SpamapS> Just git 'er done. :)
<SpamapS> seriously, ignore my whining.. you all have important work to do.
<fwereade> yeah, but so do you, and I'm sorry to have disrupted it :(
<SpamapS> It only means the bugs that I might find now will be found after you're done.. and you'll have more of a time crunch to fix them next week.
<SpamapS> fwereade: please don't feel guilty in any way. You guys can't stop for every squirrel crossing the road. ;)
<fwereade> SpamapS: don't worry, I'm neither sobbing nor rending my clothes
<fwereade> SpamapS: still wish I'd done it differently, but we live and learn ;)
<jimbaker> fwereade, please no self-flagellation, ok? ;)
<rog> jimbaker: i've just pushed a new gozk merge request. any comments or feedback welcome.
<rog> https://code.launchpad.net/~rogpeppe/gozk/update-event-interface/+merge/77560
<jimbaker> rog, thanks!
<robbiew> SpamapS: so basically all charms are broken now, right?
 * robbiew notes this is why we need juju pulling from the archive at release...not a ppa :/
<robbiew> I realize we need this change to allow for the store to work...which means **WE** will need to fix everyone's charm before release
<robbiew> not fair to expect authors to go back and do that now, imo
<robbiew> and apparently there's no legacy support :/
<SpamapS> robbiew: all charms and all juju deployments from 11.10 are broken, yes.  Tracking in bug 828147
<_mup_> Bug #828147: Ensemble branch option needs to allow for distro pkg, ppa, and source branch install <cloudfoundry:New> <juju:In Progress by jimbaker> <juju (Ubuntu):Triaged by clint-fewbar> < https://launchpad.net/bugs/828147 >
<robbiew> awesome
<SpamapS> robbiew: there is a workaround, which is to add 'juju-branch: lp:ubuntu/oneiric/juju' to your environment config
<jimbaker> robbiew, really hope that branch gets approved soon
<jimbaker> in terms of env-origin
<robbiew> jimbaker: that change must get in
<robbiew> agreed
<SpamapS> The alternative is to patch the distro version to default to pulling from the distro. I've tried that in a privately built package and it works fine, but the problem is the deployed provisioning agent then still has the distro version deploying from the PPA ..
<SpamapS> Anyway, the best thing to do is probably to just test from the PPA.. ignore the distro package.. and try to help these guys finish off the work so we can upload "the real juju" ASAP
<robbiew> agreed
<robbiew> a hack in the distro is suboptimal
<SpamapS> worst case.. if something goes horribly wrong and we can't do that.. we can regress the PPA to the distro version and move daily builds to a different PPA.
<robbiew> if env-origin works, then we can just use that set to the archive, right?
<jimbaker> fwereade, re robbiew's point - the backwards-breaking change seems to be only the change to the directory structure. i understand with the juju rename there have been some drastic changes already. but we could make it easier possibly by assuming oneiric
<SpamapS> robbiew: env-origin is intelligent, and chooses distro when your client version came from the distro
<robbiew> cool
<SpamapS> Its probably going to break horribly on OS X tho. :P
<jimbaker> robbiew, env-origin does the right thing
<jimbaker> SpamapS, it doesn't know about os x yet
<SpamapS> jimbaker: it shouldn't really. Serious users will probably be very explicit about their environments.yaml
<jimbaker> SpamapS, the workaround is to set juju-origin manually
<jimbaker> SpamapS, agreed
<jimbaker> SpamapS, it might be something as simple as that: for os x/non-ubuntu clients, it basically insists that this setting be made
<SpamapS> Actually I think it should just choose distro at that point.
<jimbaker> SpamapS, also a reasonable choice
<SpamapS> hmm.. something's borked with juju-branch too actually
<SpamapS> Ahh, my AMI's are out of date and couldn't download packages.
<fwereade> SpamapS, jimbaker: it seems that it would be better to insist on explicit default-series *everywhere* than to special-case non-ubuntu
<jimbaker> fwereade, certainly a valid point. being clever can surprise
<fwereade> SpamapS: that would mean everybody had to change their environments.yaml, though, which is maybe too much to expect
<fwereade> SpamapS, jimbaker: if that would be acceptable, though, it feels like the Right Thing to me
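For concreteness, a sketch of the kind of explicit environments.yaml being discussed -- the option names (default-series, juju-origin, juju-branch) are the ones mentioned in this conversation; the other keys and all of the values are placeholders, not recommendations:
    environments:
      sample:
        type: ec2
        control-bucket: some-unique-bucket-name
        admin-secret: some-secret
        default-series: oneiric
        juju-origin: ppa            # or "distro", or a bzr branch URL
        # older workaround mentioned above:
        # juju-branch: lp:ubuntu/oneiric/juju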
<robbiew> I guess the big problem we need to resolve is broken existing charms...and getting them fixed asap
<robbiew> the change is in...and newer charms will be written accordingly
<robbiew> for example...our much talked about cloudfoundry charm is now broken :/
<robbiew> we need that working by release
<robbiew> and so on
<fwereade> robbiew, SpamapS: are we saying that *charms* are broken, or that *repositories* are broken?
<RoAkSoAx> hazmat: ping?
<jimbaker> fwereade, but as i understand it, the charms are not themselves broken by this change, it's how we refer to them in terms of their repository structure and when deployed?
<fwereade> jimbaker: that's exactly what I'm fretting about
<jimbaker> fwereade, exactly
<fwereade> :)
<jimbaker> fwereade, i looked at the change in trunk. the sole change to the example charms, which should be reasonable for any other charms, is moving the files down one level
<jimbaker> (reasonable, as in talking about other charms, that is)
<fwereade> jimbaker: exactly, I don't believe that any charms are themselves broken
<robbiew> right....sorry...you are correct
<fwereade> jimbaker: every repository outside the source tree is, though
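Roughly, the repository restructuring under discussion: charms move down one level into a per-series directory, and deploys name them with a scheme prefix. "oneiric" and "mysql" below are just examples:
    # before: charms at the top level of the repository
    my-repo/
      mysql/
        metadata.yaml
        hooks/

    # after: charms under a series directory
    my-repo/
      oneiric/
        mysql/
          metadata.yaml
          hooks/

    $ juju deploy --repository=my-repo mysql          # old invocation
    $ juju deploy --repository=my-repo local:mysql    # new invocation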
<jimbaker> now there is certainly broken code out there that manages this process
<jimbaker> so pretty much everyone has their own script to deploy a stack of apps. no one is typing it in again and again
<jimbaker> that's broken now
<robbiew> exactly...nothing we can do about that though...sed/awk to the rescue there
<jimbaker> but a trivial fix to add in the right locator info
<jimbaker> the only challenge is that it totally sucks when they are deployed
<fwereade> jimbaker: we discussed defaulting to "local:" when --repository is given, but that causes us more problems down the line
<robbiew> fwereade: ftr, I'm happy this change has gone in...as we need it for the Charm Collection/store thingy ;)
<jimbaker> errors are nontransparent, and show up in debug logs
<fwereade> jimbaker: bad-repo errors, right?
<jimbaker> fwereade, i forget specifics, i made a paste on this last night
<fwereade> jimbaker: hopefully "cs:blah/blah not found in repo https://store.juju.ubuntu.com" is reasonably clear?
<jimbaker> let me see if i can dig it up
<fwereade> jimbaker: but I suspect it's not so nice if the repo has the wrong structure
<jimbaker> fwereade, depends on where it is. if from the command line, cool
<jimbaker> if it is in the debug-log, not so cool
<jimbaker> way too buried. yes, i will go there. but even for me it's an extra step to ssh in to find what's happening, and i know what i'm looking for
<fwereade> hm, yes :(
<jimbaker> there may be some gnashing and wailing and rending and tearing at the cheeks
<fwereade> jimbaker: ...but wait, when are we hitting repositories except from the command line?
<jimbaker> fwereade, need to dig up this error for you...
<fwereade> jimbaker: thanks
<jimbaker> fwereade, lost it. i will have to recreate. hopefully not some dream from last night...
<fwereade> jimbaker: np, although from my perspective I kinda hope it *is* a dream
<fwereade> ;)
<SpamapS> robbiew: the cloudfoundry charms should work w/ the PPA
<robbiew> SpamapS: cool, thx
<fwereade> jimbaker: you have reminded me of one possibility, can I get a +1 trivial on http://pastebin.ubuntu.com/699194/ please?
<jimbaker> fwereade, looking at it now
<SpamapS> Hey didn't we introduce a schema version or something into ZK so that clients would detect when they are trying to twiddle an incompatible ZK?
<SpamapS> Seems like this latest formula change would be a good time to bump that.
<jimbaker> fwereade, so basically this covers a bad path
<fwereade> jimbaker: yep, that's it
<fwereade> jimbaker: it should just make the error clearer when people try to deploy from an un-upgraded local repo
<jimbaker> fwereade, well +1, i assume this will not let it get into the bad state i discussed
<jimbaker> SpamapS, yeah, we have that bit, it would be nice to use in this case. we have the power...
<fwereade> jimbaker: it won't start or stop anything working, it'll just complain differently
<jimbaker> fwereade, and i believe more importantly, complain earlier
<jimbaker> and closer
<fwereade> SpamapS, jimbaker: oh, blast, I thought that was discussed but not implemented
<fwereade> jimbaker: indeed
<jimbaker> fwereade, https://code.launchpad.net/~jimbaker/juju/verify-version/+merge/71559
<jimbaker> fwereade, don't worry, i thought i knew what you were working on too
<robbiew> lol
<fwereade> haha
<jimbaker> mostly right, but a few missing, possibly critical bits ;)
<jimbaker> fwereade, so if you want to make a change to VERSION = 1 in ensemble.state.topology, that sounds like a trivial we might all approve
<fwereade> jimbaker: was just about to suggest it :)
<jimbaker> it complains early and close. almost all stuff goes through the topology
<fwereade> jimbaker: for form's sake: http://pastebin.ubuntu.com/699200/
<jimbaker> sounds good, the minimum requirements specified in that comment have been satisfied, so +1 !
<fwereade> jimbaker: cheers :)
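To illustrate what the VERSION bump buys (a sketch only, not the actual code in ensemble.state.topology or the verify-version branch): the topology stored in ZooKeeper records a schema version, and a client built against a different version fails fast with a clear error instead of misreading the new layout.
    # Minimal sketch of a topology schema-version check; names are illustrative.
    VERSION = 2  # bumped whenever the stored state layout changes incompatibly


    class IncompatibleVersion(Exception):
        """The topology in ZooKeeper was written by an incompatible juju version."""

        def __init__(self, found, wanted):
            super(IncompatibleVersion, self).__init__(
                "incompatible juju protocol version (found %r, want %r)"
                % (found, wanted))


    def parse_topology(topology_dict):
        """Refuse to interpret state written under another schema version."""
        found = topology_dict.get("version")
        if found != VERSION:
            raise IncompatibleVersion(found, VERSION)
        return topology_dict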
<rog> right, i'm off. it's an unseasonably warm and beautiful evening here. the garden calls. see you tomorrow.
<jimbaker> rog, enjoy!
<_mup_> Bug #862415 was filed: Juju bootstrap node disappearance <juju:New> < https://launchpad.net/bugs/862415 >
<RoAkSoAx> b/win 2
<Aram> hello.
<Aram> will niemeyer be here later?
<niemeyer> Hallo!
<Aram> hi!
<niemeyer> Aram: Hey Aram
<niemeyer> bcsaller: ping
<bcsaller> niemeyer: hey
<niemeyer> bcsaller: Hey!
<niemeyer> bcsaller: I see the problem not only isn't fixed, but it's getting worse: http://wtf.labix.org/
<niemeyer> bcsaller: Can you please revert the change, if you can't make the tests work again?
<bcsaller> I didn't get the final +1, my branch is ready to merge
<bcsaller> should be in your email and I pinged kapil about it as well
<niemeyer> bcsaller: It already has my +1 I think
<niemeyer> bcsaller: Or is it another branch?
<bcsaller> you said it needed one more sign off at the end and I made changes for the review and no one ok'd them
<niemeyer> bcsaller: Yeah, from Kapil specifically
<bcsaller> yeah
<niemeyer> bcsaller: Did you talk to him about this already?
<bcsaller> niemeyer: I pinged him about it this morning but haven't seen him
<robbiew> he's at a conference today
<bcsaller> all the tests are passing, I can go and merge it if you're ok with that niemeyer
<niemeyer> bcsaller: Yeah, I think that's the best thing to do
<niemeyer> bcsaller: Let's invite him for a post-review
<jimbaker> niemeyer, the env-origin branch is ready for review
<niemeyer> jimbaker: Cool, I'll check it out
<jimbaker> niemeyer, thanks
<_mup_> Bug #862595 was filed: Provisioning agent and destroy-environment show NoneType not iterable on machine shutdown with Openstack <openstack> <juju:New> < https://launchpad.net/bugs/862595 >
<HarryPanda> is there anything in place to test charms with Jenkins etc.
<SpamapS> HarryPanda: jamespage has some preliminary work on that.
<jimbaker> SpamapS, looks like more txaws issues there with 862595
<SpamapS> jimbaker: yeah.. I think that may actually be causing big problems on the provisioning agent.. it doesn't seem to want to provision new instances after that
<bcsaller> niemeyer: http://wtf.labix.org/ is happy again
<bcsaller> niemeyer: well, the non-ftests part anyway
<jimbaker> SpamapS, the provisioning agent certainly doesn't expect errors. i'm not certain what the right behavior should be, but just barfing is not it
<jimbaker> bcsaller, those are because of the repository changes, so that's independent
<jimbaker> SpamapS, ok, that may be overstating it, but it is vulnerable
<SpamapS> jimbaker: seems like what's happening here is that while the provider has shut down the machine and changed ZK to remove it, the error has somehow caused the provider to not think the machine is gone
<SpamapS> jimbaker: leading to bug 861928
<_mup_> Bug #861928: provisioning agent gets confused when machines are terminated <juju:New> < https://launchpad.net/bugs/861928 >
<jimbaker> SpamapS, the periodic rescan is supposed to sync it with reality for cases like this, as i understand it, but the changes in the aws api impl are presumably making it confused
<SpamapS> jimbaker: probably because the group is still there
<jimbaker> SpamapS, you mean the security group per machine?
<SpamapS> jimbaker: hrm, no its not there
<SpamapS> seems like there's some order of ops that gets confused by that NoneType iterable error.
<jimbaker> SpamapS, as a general rule, we expect specific errors from txaws - it looks like it's not catching this TypeError, so it bubbles all the way up to the reactor loop
<_mup_> juju/env-origin r378 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<jimbaker> just had an opportunity to see the incompatible juju protocol version in action
<SpamapS> jimbaker: hah :)
<jimbaker> it was expected, fortunately... i think the error message could be more helpful on what to do in this case
<SpamapS> jimbaker: so this does look like nova-api doesn't return the same expected XML after TerminateInstances
<jimbaker> SpamapS, at the python meetup last night, i think someone mentioned that txaws is using an older version of the aws api. don't know about the truth of this
<jimbaker> SpamapS, it may have nothing to do here, but supporting multiple apis is tough
<jimbaker> as usual, we might want to look at boto
<SpamapS> well in this case, comparing to boto .. boto just returns whatever list was responded with
<SpamapS> txaws asserts that it has to be id=instancesSet
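As an illustration of the difference (not txaws or boto code): a parser that insists on a particular instancesSet wrapper breaks when nova's EC2 layer shapes the response differently, whereas walking the document for instanceId elements, roughly what boto ends up doing, tolerates both. A sketch:
    # Sketch of tolerant TerminateInstances response parsing; illustrative only.
    from xml.etree import ElementTree


    def terminated_instance_ids(xml_bytes):
        """Collect instance ids without assuming a particular wrapper element."""
        root = ElementTree.fromstring(xml_bytes)
        ids = []
        # Walk the whole tree instead of a fixed path like ./instancesSet/item,
        # so a differently shaped (e.g. nova EC2 API) response still yields ids.
        for element in root.iter():
            if element.tag.rsplit("}", 1)[-1] == "instanceId" and element.text:
                ids.append(element.text.strip())
        return ids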
<niemeyer> jimbaker: ping
<jimbaker> niemeyer, hi
<niemeyer> jimbaker: Hi
<niemeyer> jimbaker: Please re-read points 3, 4 and 5
<niemeyer> jimbaker: and compare to the implementation
<jimbaker> niemeyer, ok
<niemeyer> jimbaker: Is there anything missing there?
<jimbaker> niemeyer, #3 - python-txzookeeper dependency is no longer relevant; #4, it uses the original implementation; #5, there is no attempt to test deb repositories, since they are no longer supported
<niemeyer> jimbaker: Maybe I'm just missing.. where's that: 3) Otherwise, store the version in a variable, and keep parsing
<niemeyer> jimbaker: 4) Find the version table, and find the installed version
<niemeyer> ?
<jimbaker> niemeyer, ok, i'm referring to the review points. you are referring to a list from irc
<niemeyer> jimbaker: yeah
<jimbaker> looking at that now
<jimbaker> niemeyer, ok, i believe what you are saying here is that 6) refers to the installed version, which was stored earlier in 3), is that correct?
<niemeyer> 1) Grab the installed version from the "Installed:" line
<niemeyer> 2) If this is "(none)" return "branch"
<niemeyer> 3) Otherwise, store the version in a variable, and keep parsing
<niemeyer> 4) Find the version table, and find the installed version
<niemeyer> jimbaker: Is there any ambiguity here?
<jimbaker> niemeyer, correct, there is no ambiguity. the version table has more information than i have assumed, and this parse needs to take it in account
<niemeyer> jimbaker: Thanks.
<niemeyer> biab
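A sketch of the parse niemeyer's points 1-4 describe (illustrative only, not the env-origin implementation): take the installed version from the "Installed:" line, treat "(none)" as a source-branch install, then find that version's entry in the version table and look at which archive it came from.
    def guess_origin(policy_output):
        """Guess where the installed package came from, per the steps above."""
        lines = policy_output.splitlines()
        installed = None
        for line in lines:
            if line.strip().startswith("Installed:"):
                installed = line.split(":", 1)[1].strip()
                break
        if installed in (None, "(none)"):
            return "branch"  # not installed as a package at all
        for i, line in enumerate(lines):
            # The installed version's row in the version table is marked "***".
            if line.strip().startswith("***") and installed in line:
                for source in lines[i + 1:]:
                    if "://" not in source and "/var/lib/dpkg/status" not in source:
                        break  # past this version's source lines
                    if "ppa.launchpad.net" in source:
                        return "ppa"
                    if "ubuntu.com" in source:
                        return "distro"
                break
        return "distro"  # fallback; the real branch presumably handles more cases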
<SpamapS> wow.. nova-api's response to TerminateInstances is *completely* different from ec2
<jimbaker> i'm feeling sick right now, i will be back later
<SpamapS> http://pastebin.com/jFDQLrcQ
<SpamapS> thats ec2's response
<SpamapS> http://pastebin.com/ZMXwA4MY
<SpamapS> And that is openstack
<_mup_> juju/ftests r5 committed by gustavo@niemeyer.net
<_mup_> Removing unused imports.
<SpamapS> note that bug 862595 is actually pretty serious.. really confuses the provisioning agent.
<_mup_> Bug #862595: terminate_instances raises NoneType not iterable on machine shutdown with Openstack <openstack> <juju:Invalid> <txAWS:New> <txaws (Ubuntu):Triaged> < https://launchpad.net/bugs/862595 >
<SpamapS> hazmat: when you're around, if you can ack the merge proposal against txaws .. I will patch it into the Ubuntu txaws
<robbiew> bcsaller: free for a catch up call?
<bcsaller> robbiew: I was just sitting down to lunch, can we push it a little bit?
<robbiew> bcsaller: absolutely not!
<robbiew> lol
<robbiew> sure
<bcsaller> thanks
<bcsaller> robbiew: free when you are
<_mup_> juju/ftests r6 committed by gustavo@niemeyer.net
<_mup_> Reverse the order of revisions and link them to Launchpad.
<robbiew> bcsaller: cool...need 5min
<robbiew> bcsaller: g+, phone, skype, mumble...pick yer poison :)
<bcsaller> robbiew: sent g+ invite
<robbiew> bcsaller: cool...one sec
<_mup_> juju/ftests r7 committed by gustavo@niemeyer.net
<_mup_> Move environments.yaml creation into the ec2 suite, and
<_mup_> make it use trunk rather than packages, so that the code
<_mup_> has better chances of being in sync. Hopefully this fix
<_mup_> the current breakage.
<_mup_> juju/trunk r370 committed by gustavo@niemeyer.net
<_mup_> Rename readme.txt to README since we already have one of those. [trivial]
<_mup_> But really, this is just bumping the revno for an ftest cycle. :-)
 * hazmat catches up
<niemeyer> Hah.. missed the URLs
<niemeyer> hazmat: YO!
<hazmat> niemeyer, greetings hows your conference?
<hazmat> SpamapS, which merge proposal?
<niemeyer> hazmat: Awesome! Delighted to see everyone here, even though it has been a bit tough being in and out, and I still need to do my talk for tomorrow
<niemeyer> hazmat: Fixing the ftests now..
<niemeyer> hazmat: Hey, I had made a comment on bcsaller's review to invite you for one last look on lxc-omega before the merge since there were a few things since your last review
<niemeyer> hazmat: But tests were broken in trunk, so we decided to move on with it
<hazmat> niemeyer, yeah.. saw that.. i'll have a last look at the commit
<niemeyer> hazmat: If possible, would you mind doing a post-review on this once you have some spare time?
<niemeyer> hazmat: Cheers!
<_mup_> juju/ftests r8 committed by gustavo@niemeyer.net
<_mup_> Forgot to update the ec2 tests to use local: urls (!).
<hazmat> SpamapS, +1 on the terminate fix
<SpamapS> hazmat: cool thanks!
<jamespage> SpamapS: zookeeper -> upstart transition - next release right?
<jamespage> or do we want that now?
<SpamapS> jamespage: yes next release :)
<jamespage> coolio
<jamespage> should be pretty easy
<SpamapS> jamespage: agreed
<SpamapS> jamespage: we should be able to do it in Debian so we don't have a permanent delta
<jamespage> we should be able to - I need todo the tomcat's as well in the same way
<hazmat> 3.4 release of zk is going to require some packaging love for next release, the build system is a bit different afaicr
 * jamespage sighs
<jamespage> better take a look a trunk sooner rather than later then...
<SpamapS> indeed, would be good to get that going into experimental soon
<jamespage> yeah - that had crossed my mind
<jamespage> meh - looks much the same to me
<jamespage> I'll have a go at a new upstream release for experimental some time in the next few weeks
<SpamapS> well anyway, time to run off to the urgent care to take care of this pinkeye.. grubby filthy things those kids are.
<jamespage> hehe
<hazmat> bcsaller, i see one bug in the omega merge to trunk
<bcsaller> hazmat: whats that
<hazmat> bcsaller, on the merge proposal
<hazmat> bcsaller, its referencing self.master_template.. which is undefined, so it won't work if you don't already have one
<hazmat> here's my diff to fix that as well as other style issues, http://paste.ubuntu.com/699361/
<bcsaller> I'll fix it, thanks, well spotted
<bcsaller> hazmat: I'll go ahead and merge that as a trivial then I guess
<_mup_> juju/ftests r9 committed by gustavo@niemeyer.net
<_mup_> Show what the wordpress title seems to be.
<_mup_> juju/ftests r10 committed by gustavo@niemeyer.net
<_mup_> Fixed title extraction.
<hazmat> niemeyer, jimbaker, bcsaller can i get a +1 on this trivial http://paste.ubuntu.com/699361/
<hazmat> actually
<hazmat> nevermind
<hazmat> i'll incorporate into my next branch
<bcsaller> hazmat: I think that was already merged as a trivial
<hazmat> bcsaller, ah.. cool.. wasn't sure since there wasn't any mup noise
<hazmat> it would be nice if mup could directly monitor the repo
<hazmat> niemeyer, thanks for the email btw
<niemeyer> hazmat: No problem
<hazmat> niemeyer, was talking to some of the heroku guys on how they do multi-tenant with lxc last night, they basically dictate to the process which port to listen on
<hazmat> niemeyer, this conference is pretty awesome btw for ops stuff,  we should have some more people here
<hazmat> hopefully they'll get videos up in timely fashion this year
<niemeyer> hazmat: Interesting
<_mup_> juju/env-origin r379 committed by jim.baker@canonical.com
<_mup_> Properly parse version table of apt-cache policy
<_mup_> juju/env-origin r380 committed by jim.baker@canonical.com
<_mup_> Merged trunk
#juju 2011-09-30
<niemeyer> The wordpress example seems _actually_ broken
<_mup_> juju/trunk-merge r342 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> juju/ftests r11 committed by gustavo@niemeyer.net
<_mup_> Reduced wget retries.
<niemeyer> Oh dear..
<niemeyer> Ben didn't revert the changes in the example formulas
<niemeyer> :-(
<hazmat> niemeyer, i thought i saw the change go through
<niemeyer> hazmat: He changed to using hostname..
<hazmat> niemeyer, which is better than hardcoding to ec2 md server
<hazmat> is hostname not routable?
<hazmat> on ec2
<niemeyer> hazmat: I don't know, but the ftests are not passing anymore.. I'll give this a shot
<_mup_> Bug #862680 was filed: EC2 api call TerminateInstances returns the wrong response <juju:New> <OpenStack Compute (nova):New> < https://launchpad.net/bugs/862680 >
<niemeyer> hazmat: Yep, works
<hazmat> niemeyer, cool
<_mup_> juju/trunk r372 committed by gustavo@niemeyer.net
<_mup_> Revert example changes introduced with lxc-omega so that they
<_mup_> work again on EC2.
<_mup_> juju/unit-with-addresses r404 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk and resolve conflict
<hazmat> oh.. the store work is in
<hazmat> awesome!!
 * hazmat does a dance
<hazmat> hmm
<hazmat> i keep getting incompatible protocol errors
<hazmat> niemeyer, unit tests broke with 372
<hazmat> hmmm ssh keys not found
<hazmat> for local provider
<hazmat> needs to be mocked
<niemeyer> hazmat: For the incompatible errors you'll need to use the trunk on juju-branch
<niemeyer> hazmat: The version was bumped with the store changes
<hazmat> ic
<niemeyer> Hmmm.. butler seems to have jumped a revision
<niemeyer> Yeah, I broke it
<hazmat> niemeyer, i don't see how exactly.. all your changes appear to be in formulas
<niemeyer> hazmat: I broke butler itself, in the ftests
<niemeyer> hazmat: It was jumping revisions
<hazmat> niemeyer, ah
<niemeyer> hazmat: Unrelated to the above issue
<hazmat> niemeyer, so trunk examples are using the metadata server?
<niemeyer> hazmat: Yeah, I reverted it so tests could pass
<niemeyer> hazmat: We have to find a way to make both happy
<niemeyer> and also have to fix the unittests so they don't rely on installed ssh keys
<niemeyer> I've put it to rerun the tests since 369
<niemeyer> unittests will likely all be broken
<niemeyer> since I removed the fake id keys
<niemeyer> ssh id_*sa keys
<niemeyer> The ec2-wordpress one should pass on 372
<hazmat> i'll add the unit-info cmd to resolve charm addressing.. but realistically that's not till tomorrow evening
<hazmat> very happy to see the repository work go in
<niemeyer> hazmat: That's awesome, thanks a lot
<niemeyer> hazmat: Yeah, quite exciting isn't it?
<niemeyer> hazmat: Almost there!
<hazmat> feels very disconcerting to log in to byobu
<hazmat> hmm.. odd serialization of the charm id  to escaped form in zk
<niemeyer> hazmat: It's almost like a url encoding
<hazmat> niemeyer, yeah.. but its not needed for the zk node name
<niemeyer> hazmat: It is needed, due to slashes
<hazmat> ah
<niemeyer> hazmat: Agreed about byobu.. Eric Hammond is also complaining publicly about it
<niemeyer> Alright, past bed time here..
<niemeyer> My talk is first in the morning tomorrow
<niemeyer> See you all tomorrow!
<_mup_> juju/trunk r373 committed by kapil.thangavelu@canonical.com
<_mup_> merge unit-with-addresses [r=niemeyer][f=859308]
<_mup_> Units detect and record their public/private addresses into the unit state,
<_mup_> in a provider specific manner.
<_mup_> juju/lxc-provider-rename-local r406 committed by kapil.thangavelu@canonical.com
<_mup_> merge pipeline and resolve conflict
<_mup_> juju/local-origin-passthrough r406 committed by kapil.thangavelu@canonical.com
<_mup_> merge pipeline and resolve conflict
<_mup_> juju/trunk r374 committed by kapil.thangavelu@canonical.com
<_mup_> merge unit-with-addresses [r=niemeyer][f=859308]
<_mup_> Command line tools use unit addresses (status, ssh, debug-hooks, etc).
<_mup_> juju/trunk r375 committed by kapil.thangavelu@canonical.com
<_mup_> merge lxc-provider-rename-local [r=niemeyer][f=860982]
<_mup_> Rename lxc provider to local provider.
<_mup_> juju/unit-relation-with-addr r407 committed by kapil.thangavelu@canonical.com
<_mup_> unit relations are automatically created with the unit's private address
<_mup_> juju/unit-relation-with-addr r408 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> juju/local-origin-passthrough r409 committed by kapil.thangavelu@canonical.com
<_mup_> update juju-create to use jujuorigin for bzr branches instead of a separate jujusource var
<_mup_> juju/local-origin-passthrough r410 committed by kapil.thangavelu@canonical.com
<_mup_> merge latest env-origin
<_mup_> Bug #862987 was filed: Local provider should respect juju-origin <juju:In Progress by hazmat> < https://launchpad.net/bugs/862987 >
<_mup_> juju/local-origin-passthrough r411 committed by kapil.thangavelu@canonical.com
<_mup_> use default origin util func if none  specified
<rog> anyone know if Gustavo's around today?
<fwereade> rog: I expect he'll be around at some stage, but he's theoretically not
<RoAkSoAx> j/win 17
<RoAkSoAx> aerrr
<rog> fwereade: ah, ok. i thought he was back today from the conference
<fwereade> rog: hm, I thought it was still on today, maybe he's travelling
<rog> ah, you're right, it's still on.
<rog> didn't think it was.
<xerxas> Hi all
<xerxas> is this canonical juju's user channel ?
<xerxas> $ juju deploy --repository=examples mysql
<xerxas> Charm 'cs:natty/mysql' not found in repository https://store.juju.ubuntu.com/charm
<xerxas> 2011-09-30 15:13:49,717 ERROR Charm 'cs:natty/mysql' not found in repository https://store.juju.ubuntu.com/charm
<xerxas> this command is what's written when I follow the tutorial
<xerxas> is the charms repository down , or isn't yet up ?
<_mup_> Bug #863374 was filed: deploy documentation is out of date <juju:New> < https://launchpad.net/bugs/863374 >
<xerxas> ok
<xerxas> thanks
<xerxas> the ticket doesn't mention where the repository is
<xerxas> " Recent changes to repository structure and deploy args are not recorded in the documentation."
<xerxas> there's no more repo ?
<xerxas> or none , for now ?
<robbiew> fwereade: ^ ?
<fwereade> xerxas, ha, sorry, I was just fixing the docs
<robbiew> ;)
<fwereade> xerxas: $ juju deploy --repository=examples local:mysql
<fwereade> xerxas: if you're not running oneiric, you may need to do:
<xerxas> $ juju deploy --repository=examples local:mysql
<xerxas> [Errno 2] No such file or directory: '/home/ubuntu/examples/natty'
<xerxas> 2011-09-30 15:20:18,818 ERROR [Errno 2] No such file or directory: '/home/ubuntu/examples/natty'
<fwereade> > xerxas: $ juju deploy --repository=examples local:oneiric/mysql
<xerxas> ok , waiting ;)
<xerxas> $ juju deploy --repository=examples local:oneiric/mysql
<xerxas> [Errno 2] No such file or directory: '/home/ubuntu/examples/oneiric'
<xerxas> 2011-09-30 15:20:55,711 ERROR [Errno 2] No such file or directory: '/home/ubuntu/examples/oneiric'
<xerxas> Am I supposed to retrieve the repo locally ?
<fwereade> ah: you don't have the examples repository in your working dir?
<fwereade> xerxas: I'm not sure where you got juju from, but there should be an "examples" directory *somewhere*, which should have a subdir called "oneiric"
<xerxas>  ok
<xerxas> in usr/share/doc , i suppose
<xerxas> I got it from the ppa as written in the doc
<xerxas> anyway, thanks for helping ;)
<xerxas> and , I think the charm repository doesn't have charms that fits my need anyway
<fwereade> xerxas: hold on a mo, I'm just installing it myself in the hope I can track it down
<fwereade> xerxas: the examples repository might not, but we have plenty of other charms :)
<fwereade> xerxas: what are you looking for?
<robbiew> xerxas:  https://code.launchpad.net/charm
<niemeyer> Heyo
<fwereade> heya niemeyer
<niemeyer> Argh.. really bad connectivity here :(
<niemeyer> bcsaller: Heyo
<bcsaller> niemeyer: hi
<rog> niemeyer: hiya!
<niemeyer> bcsaller: Had to revert the example changes yesterday so the ftests could be happy
<niemeyer> bcsaller: The modified examples were not working in EC2
<niemeyer> rog: Yo!
<niemeyer> rog: Sorry for not giving you much feedback this week man.. it's being well beyond active
<niemeyer> rog: Promise to do a better job next week
<rog> niemeyer: no probs. i do hope you might come around to my changes! :-)
<niemeyer> rog: I like them from one side, but the problem of trashing information is a real one
<niemeyer> rog: I don't want to have to rollback to the current version because we need details that are being dumped
<rog> niemeyer: there's no information being dumped, honest.
<_mup_> Bug #863400 was filed: examples repository is not installed from PPA <juju:New> < https://launchpad.net/bugs/863400 >
<rog> niemeyer: the code will instantly panic if it happens... which i'll bet it won't.
<niemeyer_> Erm..
<niemeyer_> Bad connectivity indeed :(
<niemeyer_> rog: The change isn't improving things enough to justify dropping information in the protocol
<rog> niemeyer_: if the paths start actually meaning something, then it's easy to change the channel type to reflect that. the main thing is separating session status events from node status events.
<rog> niemeyer_: currently the code checks on every protocol message that the path is what is expected. so we're guaranteed that no information is being dropped.
<xerxas> fwereade:  I'not looking for something special
<xerxas> I'm just testing juju
<xerxas> ;)
<fwereade> xerxas: I've looked into it, and for some reason the examples repository (which should be installed to /usr/share/doc/juju) isn't there at the moment
<fwereade> xerxas: it's being worked on
<xerxas> ok
<xerxas> thanks for the information
<fwereade> xerxas: if you want to test quickly, you can always "bzr branch lp:juju" and use the examples repo in there
<fwereade> xerxas: otherwise, expect an update to the PPA soon
<xerxas> ok
<xerxas> thanks so much
<fwereade> xerxas: a pleasure :)
<hazmat> fwereade, also when committing a breaking change on trunk, (version increment) its probably nice to manually trigger the ppa build
<fwereade> hazmat: thank you, good to know; er, how do I do that?
<hazmat> hmm.. i guess it only happened because i was running trunk and deploying ppa
 * hazmat looks for the build link
<hazmat> fwereade, https://code.launchpad.net/~juju/+recipe/juju from here (its linked off the trunk code view)
<hazmat> a stable ppa will help some as well
<fwereade> hazmat: thanks :)
<hazmat> fwereade, did you discuss with niemeyer a timeline for getting the remote end operational?
<hazmat> fwereade, great work btw. its very exciting to see this stuff getting done
<fwereade> hazmat: afraid not, all I know is that niemeyer's working on it when he can
<fwereade> hazmat: thanks :)
<fwereade> oh, bother, gtg: nn everyone, happy weekends :)
<rog> fwereade: have a good one!
<hazmat> fwereade, cheers
 * hazmat relocates to a better table
<rog> i'm off to enjoy the sunshine. see y'all monday.
<hazmat> http://arstechnica.com/business/news/2011/09/google-devops-and-disaster-porn.ars
<hazmat> surge conference writeup
<hazmat> rog, have a good one
<_mup_> juju/unit-info-cli r412 committed by kapil.thangavelu@canonical.com
<_mup_> amp api for unit-get cli
<rog> hazmat: nice. XML for the lose.
<hazmat> rog, its always the cascading failures that burn
<SpamapS> Two more tests broken when run inside clean chroots
<SpamapS> you guys should really be running the test suite with HOME=`mktemp -d /tmp/foo.XXXXX`
<jimbaker> SpamapS, sounds like a good idea
<SpamapS> http://paste.ubuntu.com/699960/
<SpamapS> jimbaker: I'm kind of surprised WTF didn't pick that up
<SpamapS> or maybe it has
<SpamapS> I see fails since 366
<jimbaker> SpamapS, yeah, i noticed too
<hazmat> SpamapS, yeah... i've seen those on wtf but its been unstable (wtf)
<hazmat> that's worth a bug report
<hazmat> trying to make the 'unit-get' cli happen so we can stop hard-coding address in formula
<hazmat> just found elmo at surge
<hazmat> jimbaker, is juju-origin ready to merge?
<hazmat> i've two branches in dev that i'd like to see go in that are based on it
<hazmat> SpamapS, so 361 is uploaded, do we get to push any new things in?
<hazmat> i'm a little confused on the schedule for uploads
<SpamapS> yes
<SpamapS> one more upload. :)
<SpamapS> hazmat: technically we can upload all the way up to release day.. but as of yesterday, with FinalFreeze, we are at the release team's pleasure.
<_mup_> Bug #863499 was filed: local provider tests fail with an empty home directory <juju:New> < https://launchpad.net/bugs/863499 >
<jimbaker> hazmat, juju-origin is waiting on approval. i believe it's ready, that's why it's in review
<jimbaker> hazmat, and i did see your branch that's waiting on it, looks good in my review of it
<jimbaker> hazmat, just about to head to lunch, but i'll complete the review right after that
<niemeyer> jimbaker: The ec2-wordpress ftest is hanging after a defer error and never returning
<niemeyer> jimbaker: Will paste the output on an error
<niemeyer> Erm on  a bug
<SpamapS> Just pushed a fix up for bug 863499 .. would be awesome if that got a fast track for reviews so I can enable running the test suite on the daily build PPA
<_mup_> Bug #863499: local provider tests fail with an empty home directory <juju:Fix Committed by clint-fewbar> < https://launchpad.net/bugs/863499 >
<niemeyer> SpamapS: Will check it out right away
<SpamapS> cool thanks. :)
<SpamapS> that will help streamline the process as we move into the final upload to 11.10
<niemeyer> jimbaker: https://bugs.launchpad.net/juju/+bug/863510
<_mup_> Bug #863510: destory-environment errors and hangs forever <juju:New> < https://launchpad.net/bugs/863510 >
<_mup_> Bug #863510 was filed: destory-environment errors and hangs forever <juju:New> < https://launchpad.net/bugs/863510 >
<niemeyer> SpamapS: Hah, sweet.. I was going to ask bcsaller about this one
<niemeyer> SpamapS: You rock
<SpamapS> it will be using the same debian/ dir as Ubuntu. :)
<jimbaker>  niemeyer, thanks for this
<niemeyer> jimbaker: No worries.. don't know exactly what is going on there, but this blocked the ftests since yesterday evening
<SpamapS> Will also fix bug 863400
<_mup_> Bug #863400: examples repository is not installed from PPA <juju:In Progress by clint-fewbar> < https://launchpad.net/bugs/863400 >
<niemeyer> jimbaker: It's hanging due to the unhandled error for sure, but reason why it's erroring isn't clear
<jimbaker> niemeyer, ok, i'll think about this over lunch (have some friends waiting on me here)
<jimbaker> biab
<niemeyer> jimbaker: Thanks
<_mup_> juju/unit-relation-with-addr r409 committed by kapil.thangavelu@canonical.com
<_mup_> setup a unit address by default, so relations have some useful value there by default, fix up hook tests that where checking raw value to include address info
<_mup_> juju/trunk r376 committed by gustavo@niemeyer.net
<_mup_> Merge fix-failing-local-tests branch by Clint [r=niemeyer] [trivial]
<_mup_> This fixes tests so that they don't depend on installed user ssh keys.
<_mup_> Bug #863526 was filed: Juju agents do not handle reboots <juju:New> < https://launchpad.net/bugs/863526 >
<niemeyer> Jim Fulton on stage right now
<SpamapS> hazmat: I filed bug 863526 but now I'm wondering if it may be a duplicate
<_mup_> Bug #863526: Juju agents do not handle reboots <production> <juju:New> < https://launchpad.net/bugs/863526 >
<niemeyer> SpamapS: the "production" tag is a neat idea
<_mup_> juju/unit-relation-with-addr r410 committed by kapil.thangavelu@canonical.com
<_mup_> fix up additional test fallout from relations with unit addresses
<SpamapS> niemeyer: thanks, I've been trying to be very discerning in adding it.. only things that we *cannot* live without for production. Nice to haves are different.
<SpamapS> niemeyer: btw, is bug 712476 valid anymore with the new repo work?
<_mup_> Bug #712476: Ensemble deploy should have a --force option for reuploading a formula <cli> <juju:New> < https://launchpad.net/bugs/712476 >
<hazmat> niemeyer, nice
<hazmat> niemeyer, i think my friend alan is out there as well, he wanted to come out and give tribute to dorneles
<hazmat> SpamapS, do you think cross-az is required for prod?
<SpamapS> hazmat: no
<SpamapS> hazmat: two environments and config settings can work around that
<SpamapS> hazmat: I think its pretty important though!
<SpamapS> for whizbang awesomeness
<_mup_> juju/unit-info-cli r414 committed by kapil.thangavelu@canonical.com
<_mup_> merge pipeline and resolve conflict
<_mup_> juju/unit-info-cli r415 committed by kapil.thangavelu@canonical.com
<_mup_> add unit-get cli
<_mup_> juju/unit-info-cli r416 committed by kapil.thangavelu@canonical.com
<_mup_> update examples to use unit-info to get addresses
<_mup_> juju/unit-info-cli r417 committed by kapil.thangavelu@canonical.com
<_mup_> update the php example.. not sure what its around for
<hazmat> bcsaller, have you tried running local provider recently?
<_mup_> juju/local-origin-passthrough r413 committed by kapil.thangavelu@canonical.com
<_mup_> fix some local provider problems
<m_3> negronjl: hey Juan... can you pls change owner of lp:charm/mongodb to charmers?  trying to push name changes
<negronjl> m_3: sure...give me a sec
<m_3> negronjl: danke
<negronjl> m_3: done
<negronjl> m_3: let me know if it works
<m_3> negronjl: worked... um... like a charm
<m_3> bu-dumpdum
<bcsaller> and that made him quit the channel? heh
<m_3> tough room
<adam_g> hmm. using 'juju deploy --repository=$REPO local:$charm', $REPO/oneiric/charm1 deploys fine but $REPO/oneiric/charm2 doesn't. get a charm not found error on the second
<_mup_> juju/local-origin-passthrough r414 committed by kapil.thangavelu@canonical.com
<_mup_> unit container deploy pulls origin from environment
<hazmat> adam_g, is it possible there's a metadata.yaml parse error on charm2?
<hazmat> that breaks the repo find algorithm as i recall
<adam_g> hazmat: ah, it was some bad strings in config.yaml
<hazmat> adam_g, yeah.. i went back and forth on whether that should break things or not
<hazmat> adam_g, probably worthwhile for us to at least log a message
<hazmat> if we're in verbose mode
 * hazmat loves wifi hurtling down the train tracks
<adam_g> hazmat: i found "ScannerError: ScannerError()
<adam_g> "
<adam_g> in the tracing i was doing.. not sure if thats catchable or what
 * adam_g <- python n00b
<hazmat> adam_g, we can definitely catch it and report it, it's just not clear that the log message would be that useful on its own
<hazmat> adam_g, could you pastebin the config file
<hazmat> almost done fixing up the local provider, i'll have a look at that next
<adam_g> hazmat: http://paste.ubuntu.com/700121/
<adam_g> the issue was the description of 'virt-type' wasn't quoted
<adam_g> and the : was throwing off the parse
<adam_g> it'd certainly be useful to just point out that there was an error parsing config.yaml, or metadata.yaml
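The failure mode adam_g hit, for reference: an unquoted colon inside a description string makes PyYAML think a nested mapping starts there, the file fails to parse, and that currently surfaces as "charm not found". The option name below comes from his paste; the strings are illustrative:
    options:
      virt-type:
        type: string
        default: kvm
        # broken -- the second colon starts a nested mapping and the parse fails:
        #   description: virtualization type: kvm or lxc
        # fixed -- quote the whole string:
        description: "virtualization type: kvm or lxc"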
<hazmat> hi bcsaller
<bcsaller> hazmat: hey, power was out for a while, just came back on
<hazmat> bcsaller, noticing a few problems in the local provider, some from the rename, some with the origin stuff, just curious if you've been able to run it recently
<hazmat> not quite sure where its going wrong doing the origin stuff, adding a new log for the customize script
<bcsaller> haven't tried it with the origin stuff
<hazmat> yeah.. the origin stuff is new, but even before then i found some other problems in the omega merge, some broken path handling split between the container and the juju home location
<bcsaller> hazmat: but niemeyer reverted the hostname stuff in the examples
<bcsaller> to an ec2 only version
<bcsaller> so his test pass
<hazmat> bcsaller, i've got a unit-get cli
<hazmat> implemented
<hazmat> which fixes charms to get addresses in cross-provider manner
<bcsaller> if you didn't change the formula though that could be it
<bcsaller> ahh
<hazmat> also populates unit relation settings with private address by default
<hazmat> so charms don't need to do that anymore
<hazmat> going to try and spend next week on the upstartifying everything and handling disconnect/reconnects
<hazmat> just wanted to see the unit-get working with local provider, when i hit some of these issues
<bcsaller> I'll test it soon
<hazmat> bcsaller, does something delete the juju-create script in the container?
<bcsaller> no, but currently I think its just written to tmp
<hazmat> bcsaller, so the customize is only run on the master ?
<bcsaller> hazmat: yeah, and then the clone rewrites the hostname
<bcsaller> don't want to have to apt-get install all the stuff for each node
<hazmat> debugging this on a mobile hotspot is still a bit painful
<SpamapS> hazmat: is that because of the initial download?
<hazmat> bcsaller, it looks like using a chroot to run juju-create still leaks a bunch of env variables which are problematic
<hazmat> SpamapS, its setup with apt-cacher-ng, but i'm doing a bzr branch for origin
<hazmat> although i switched that out to a lightweight checkout
<SpamapS> that always helps. :)
<SpamapS> truthfully, we should be doing bzr export
<hazmat> so its about as fast as it can be
<hazmat> SpamapS, well its a dev thing anyways.. so having the option to link it back is useful still imo
<hazmat> i've definitely done some remote debug/tests/commits
<bcsaller> hazmat: should we write an upstart job that deletes itself at the end?
<SpamapS> I 'spose
<hazmat> bcsaller, it's fine.. i was just wondering because i wanted to execute it.. but chrooting in wouldn't give me the same fs.. i ran into a pty allocation problem ssh'ing.. just curious if it was still there if i could manage to get in
<hazmat> bcsaller, looks like the problem is no setuptools for the develop on a branch
<hazmat> back in a moment, cafe caboose trip
<hazmat> SpamapS, do you know how to get logs out of upstart?
<SpamapS> hazmat: indeed I do
<SpamapS> ...
<SpamapS> hazmat: initctl log-priority debug works
<SpamapS> hazmat: you might just want 'initctl log-priority info'
<SpamapS> hazmat: debug is a bit ridiculous
<SpamapS> hazmat: if you want the programs' logs.. you have to redirect their stdout/stderr..
<hazmat> SpamapS, thanks
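For reference, a hypothetical upstart job of the kind being discussed -- not the actual juju packaging; the command and log paths are placeholders. It respawns the agent and captures its output by redirecting stdout/stderr, since (as SpamapS says) upstart here won't log job output on its own:
    # /etc/init/juju-machine-agent.conf -- hypothetical example
    description "juju machine agent (example)"
    start on runlevel [2345]
    stop on runlevel [!2345]
    respawn
    exec /usr/lib/juju/bin/juju-machine-agent >> /var/log/juju/machine-agent.log 2>&1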
#juju 2011-10-01
<hazmat>  argh.. hotspot down to edge networking
<hazmat> fail
<jimbaker> hazmat, i'm trying to ssh into the local lxc containers with juju ssh. but i'm getting a pw prompt. looking at the code, i would expect it to just enable me to use my public key. what am i missing here?
<hazmat> jimbaker, it should use your key
<hazmat> jimbaker, although i had a minor accident today where i tried to ssh into the container from a root shell.. it needs to be the shell of the user who bootstrapped
<hazmat> SpamapS, are you going to the wikipedia hack days in nola?
<hazmat> i was just considering going down there
<jimbaker> hazmat, ok, it's not working for me then. i'm using  local-origin-passthrough however
<jimbaker> does that depend on some other branches?
<hazmat> jimbaker, hmmm. let me push my latest
<hazmat> jimbaker, that's the one i'm working against
<jimbaker> hazmat, ok
<hazmat> i gave up on my last attempt after my hotspot switched to edge
<hazmat> although i was definitely able to ssh into my container
<hazmat> jimbaker, looks like the latest is pushed
<jimbaker> hazmat, definitely annoying that, although i find it's more of a transitory thing - maybe 5 min. assuming a place w/ a normal good signal
<hazmat> jimbaker, i started putting the customize script output into the data-dir under units
<jimbaker> hazmat, i was trying the latest
<hazmat> that's what sets up the container pretty much.. there's lots of logs on disk in the data-dir .. machine agent, container logs, customize script logs
<hazmat> jimbaker, were you doing it from a root shell?
<jimbaker> hazmat, no, just letting it ask for sudo as usual
<hazmat> jimbaker, you can poke at the constructed container to see if the key is there; it's under /var/lib/lxc
<hazmat> there should be an authorized keys file under /home/ubuntu/.ssh
<jimbaker> hazmat, it's there
<hazmat> i'm definitely able to login
<hazmat> via ssh against the ubuntu user
<jimbaker> hazmat, yeah, that's what it was prompting me for
<hazmat> jimbaker, how are you doing ssh? using juju?
<jimbaker> hazmat, yes
<jimbaker> juju ssh wordpress/0, usual stuff
<hazmat> ah.. i've been doing ssh by hand
 * hazmat tries juju ssh
<jimbaker> hazmat, i don't know enough about lxc, so i've been using the tool here ;)
<hazmat> jimbaker, dig @192.168.122.1 container-name
<hazmat> will give the ip address
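For example (the container name below is made up; the local provider derives real names from the environment and unit):
    dig @192.168.122.1 juju-local-wordpress-0 +short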
<hazmat> jimbaker, you're actually farther than i am atm, i haven't gotten the unit agent running (which means no juju ssh)
<hazmat> i committed and pushed the fixes, but wasn't able to do a final run, due to lack of networking
<hazmat> trying again
<jimbaker> hazmat, i really just tried juju ssh 0 in this case
<hazmat> jimbaker, you can't do that
<hazmat> i mean you can
<hazmat> but its not what you think
<jimbaker> yeah, that makes sense
<hazmat> jimbaker, juju ssh 0
<hazmat> is to login into a machine
<hazmat> but all the units are on localhost
<jimbaker> exactly
<hazmat> one machine
<hazmat> so it's rather pointless and we don't mess with the host user's authorized keys in their home directory
<jimbaker> i think the more relevant thing is the data-dir
<jimbaker> at least i might see something
<hazmat> so that won't work unless you've explicitly set it up
<jimbaker> basically none of the service units start, just like your experience
<hazmat> jimbaker, are you doing this w/ juju-origin set?
<jimbaker> hazmat, i tried, that didn't work
<jimbaker> so now, no
<hazmat> jimbaker, yeah.. ppa should work
<hazmat> jimbaker, except it fails because of the protocol version increment, but the unit agents are running with it
<hazmat> i'm still debugging why they don't come up with juju-origin
<hazmat> although at this point, its time for a weekend
<jimbaker> hazmat, sounds like a reasonable decision to me too! enjoy
<hazmat> bcsaller, have you ever seen the pty allocation error
<hazmat> it keeps coming up for me, very annoying.. basically needs a reboot
<bcsaller> I've never gotten it, no
<_mup_> juju/local-origin-passthrough r416 committed by kapil.thangavelu@canonical.com
<_mup_> add python-yaml dep for branch installs, and switch address map s/lxc/local
<hazmat> all working now, nice
<_mup_> juju/unit-info-cli r422 committed by kapil.thangavelu@canonical.com
<_mup_> missing bin/unit-get
<_mup_> Bug #863816 was filed: We need a cross-provider way to allow units to get pub/priv address info <juju:In Progress by hazmat> < https://launchpad.net/bugs/863816 >
<_mup_> Bug #864164 was filed: Must complain if charm hooks aren't executable <juju:New> < https://launchpad.net/bugs/864164 >
#juju 2013-09-23
<jamespage> dosaboy, ceph permissions fixup - https://code.launchpad.net/~james-page/charms/precise/ceph/fixup-mon-perms/+merge/186997
<jamespage> I reworked your proposal and included the upgrade stuff
<jamespage> appears to test OK
<dosaboy> jamespage: cool
<dosaboy> so now we can get the cinder and glance patches merged?
<jamespage> dosaboy, once that one is yes
<jamespage> dosaboy, however I think the permissions problem is a problem right now
<jamespage> dosaboy, fwiw I already implemented the pg size calculation in the redux I did on the ceph helper in charm-helpers
<jamespage> https://code.launchpad.net/~james-page/charm-helpers/ceph-redux/+merge/179948
<jamespage> but that's not had review yet :-(
<jamespage> adam_g, any chance you might have time todo that ^^
<jamespage> dosaboy, that would enable you branches into the python redux versions of cinder and glance
<jamespage> your branch features rather
<drew__> where can i get help on juju
<adam_g> jamespage, still awake?
<jamespage> adam_g, yep
<adam_g> jamespage, looking at the ceph stuff now
<jamespage> adam_g, great - thanks
<adam_g> jamespage, how close do you reckon we are to merging python redux?  i'd prefer to fix this ceph issue dosaboy is having in only one place.
<jamespage> adam_g, hmm - pretty close
<jamespage> I did a quick bit of testing PM today
<jamespage> grizzly deployed OK for me
<jamespage> (that was post my changes breaking some stuff last week)
<adam_g> i admit i haven't focused on them much in the last few weeks, but i did spend quite a bit of time testing everything listed at http://pad.ubuntu.com/redux-testing with grizzly.
<jamespage> adam_g, yeah - I'd like to have a bit more time testing and reviewing - can we target wednesday your start of day for review?
<jamespage> I think we are almost there - I'm doing a bit of spit and polish just to tidy up and make things consistent across charms
<adam_g> i have meetings most of wednesday AM. thursday?
<jamespage> adam_g, yeah - OK
<adam_g> jamespage, merging the ceph charm helpers restructure. do you want me to update the pyredux helpers/branches accordingly or are you busy there atm?
<jamespage> adam_g, please do
#juju 2013-09-24
<Dotted> how do you push a local charm to launchpad? I'm getting "bzr: ERROR: Invalid url supplied to transport: "lp:~dotted1337/charms/precise/sendy/trunk": No such source package sendy." when doing bzr push
<Dotted> figured it out - bzr didn't know my lp username
<davecheney> nice
<freeflying> I ran into a bootstrap issue within a maas environment
<freeflying> bootstrap has its node installed, but juju then can't connect to mongodb on that node
<freeflying> 2013-09-21 16:59:18 ERROR juju.agent agent.go:448 failed to initialize state: no reachable servers
<freeflying> 2013-09-21 16:59:18 ERROR juju supercommand.go:282 command failed: no reachable servers
<freeflying> error: no reachable servers
<davecheney> freeflying: what does /var/log/cloudinit-output.log on the bootstrap node say ?
<davecheney> most likely it could not add the PPA
<davecheney> and installed the wrong version of mongodb
<freeflying> davecheney, which ppa shall it add? I can try manually, see if it's related to the network (proxy)
<davecheney> freeflying: check the output of cloudinit-output.log
<davecheney> all the details are there
<davecheney> freeflying: are you using a proxy in your environment ?
<davecheney> juju basically assumes it has a working default route
<davecheney> if that isn't the case
<davecheney> well, shit is going to break
<freeflying> davecheney, yes
<freeflying> davecheney, no direct internet for the maas environment, all traffic have to go via a proxy, which we set on maas server by using squid-deb-proxy
<freeflying> davecheney, I can't add the proxy manually :)
<davecheney> freeflying: yup, that is supported (and recommended)
<davecheney> just finding the magic settings
<freeflying> davecheney, thanks :)
<davecheney> basically if the host you are bootstrapping from uses the proxy
<davecheney> we sniff that and pass the same details down to the bootstrap node
<davecheney> freeflying: try this
<davecheney> apt-config dump | grep -e 'Acquire::[a-z]+::Proxy\s+"[^"]+"'
<davecheney> if that returns anything on the client
<davecheney> then we'll pass those details via cloud init
<davecheney> if the client and the maas instance are using different proxy configs
<davecheney> then you're probably stuck
<davecheney> that isn't a configuration we anticipated
<freeflying> davecheney, returns empty
<davecheney> freeflying: short solution
<davecheney> configure your workstation to use the apt proxy
<freeflying> davecheney, what does my workstation mean?
<davecheney> freeflying: where you type the command 'juju bootstrap'
<freeflying> davecheney, on maas server
<davecheney> freeflying: that honestly isn't a configuration we anticipated
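A minimal sketch of that workstation-side setup, assuming the proxy listens on the MAAS server at 10.0.0.1:8000 (the address is a placeholder):
    # /etc/apt/apt.conf.d/01proxy on the machine you run 'juju bootstrap' from
    Acquire::http::Proxy "http://10.0.0.1:8000";
    Acquire::https::Proxy "http://10.0.0.1:8000";
With that in place, the apt-config check above returns both entries and juju passes them down to the bootstrap node via cloud-init.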
<freeflying> davecheney, interesting thing is the node has no problem accessing the internet
<davecheney> freeflying: which is 'the node' ?
<freeflying> davecheney, the bootstraped one
<davecheney> freeflying: i'm sorry, i am confused
<davecheney> i thought you told me that the bootstrapped node did not have direct access to the internet
<davecheney> 11:41 < freeflying> davecheney, no direct internet for the maas environment, all  traffic have to go via a proxy, which we set on maas server by  using squid-deb-proxy
<freeflying> davecheney, in maas's preseed, the proxy has been set up, so bootstrap node has a default gw via that proxy
<davecheney> freeflying: so, the host does _not_ have direct internet access ?
<davecheney> is that correct ?
<freeflying> davecheney, the host here means which?  maas server?
<davecheney> freeflying: the bootstrap node
<davecheney> freeflying: can you paste the /var/log/cloudinit-output.log from the bootstrap node
<davecheney> that will make everything clear
<freeflying> davecheney, hold on plz
<freeflying> davecheney, sent you in msg
<davecheney> kk
<davecheney> is your squid-deb-proxy configured to proxy ppa.launchpad.net
<davecheney> by default it does not
<davecheney> freeflying: does this work
<davecheney> env https_proxy=$YOURPROXY curl https://launchpad.net/~juju/+archive/stable/+files/mongodb-dev_2.2.4-0ubuntu1~ubuntu12.04.1~juju1_i386.deb
<davecheney> sorry, add a -L to that curl
<davecheney> basically trying to emulate what cloud-init is doing
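Assembled, with a placeholder proxy address, the check looks roughly like:
    env https_proxy=http://10.0.0.1:8000 curl -L -o /dev/null https://launchpad.net/~juju/+archive/stable/+files/mongodb-dev_2.2.4-0ubuntu1~ubuntu12.04.1~juju1_i386.deb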
<freeflying> curl: (56) Received HTTP code 403 from proxy after CONNECT
<davecheney> ok, that is why it isn't working
<davecheney> by default squid-deb-proxy does not allow that
<davecheney> from memory there are comments in the configuration file to allow it to talk to ppa sources
<freeflying> you mean enable ssl? or have ppa.launchpad.net enabled in squid-deb-proxy?
<davecheney> http://askubuntu.com/questions/303150/apt-get-403-forbidden-but-accessible-in-the-browser
<davecheney> sorry, not an awesome answer
<davecheney> https://answers.launchpad.net/ubuntu/+source/squid-deb-proxy/+question/179075
<davecheney> gives another hint where the config is
<freeflying> davecheney, I have ppa.launchpad.net added to squid-deb-proxy's mirror-dstdomain.acl via maas, but still, it can cache ppa
<davecheney> freeflying: can or cannot ?
<freeflying> davecheney, sorry, can't
<davecheney> freeflying: add some more, like launchpad.net and launchpadlibrarian.net
<freeflying> davecheney, thanks, have it solved
<davecheney> freeflying: cool
<davecheney> what was the solution ?
<freeflying> davecheney, add keyserver and launchpad to squid-deb-proxy's mirror-dstdomain
<freeflying> davecheney, also modify maas's preseed to add the ppa in late_command with http/https_proxy
<freeflying> davecheney, all in short, its a proxy issue :)
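A rough sketch of that fix, assuming the stock squid-deb-proxy config location:
    # allow the proxy to fetch from Launchpad PPAs and the keyserver
    printf 'ppa.launchpad.net\nlaunchpad.net\nlaunchpadlibrarian.net\nkeyserver.ubuntu.com\n' | sudo tee -a /etc/squid-deb-proxy/mirror-dstdomain.acl
    sudo service squid-deb-proxy restart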
<davecheney> freeflying: you do not need to modify the maas preseed
<davecheney> we handle that in cloud init
<davecheney> maas preseed and cloudinit have a strong overlap
<freeflying> davecheney, seems it works for http_proxy, but doesn't work for https_proxy
<davecheney> freeflying: did you setup the apt-proxy as i showed above
<davecheney> if that regex does not return anything
<davecheney> then the correct value will not be supplied to cloud init when you bootstrap an environment
<freeflying> davecheney, no, didn't set anything then
<jamespage> raywang, should work as described in the README
<raywang> jamespage, it claims to support adding a relation to keystone, but it's missing hooks; dosaboy reported that bug  https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1229645
<_mup_> Bug #1229645: ceph-radosgw missing identity relation links <ceph-radosgw (Juju Charms Collection):In Progress by hopem> <https://launchpad.net/bugs/1229645>
<jamespage> raywang, I've merged dosaboys branch
<jaywink> hi all. wondering if anyone has a link or explanation to what is required from a cloud image to bootstrap with juju? Say I get a non-ec2 but ec2 API compatible provider to work up until bootstrap - will a default ubuntu server image do or does it require some custom image that for example Amazon has?
<davecheney> jaywink: the first thing you need is a juju provider
<davecheney> we have providers for ec2, openstack (including private openstack clouds), maas and azure
<davecheney> if your virtual environment looks *identical* to one of those
<davecheney> it will work
<davecheney> if not, it won't
<jaywink> davecheney, by provider you mean the actual cloud image? I've got up to the actual RunInstance call on GreenQloud which has an EC2 compatible API - they have a precise server image available
<davecheney> jaywink: we call a provider a piece of software designed to translate juju's abstract view of the world onto a real provider of virtual (or physical) machines
<davecheney> jaywink: the problem you'll find with something that smells like EC2 is the AMI instance numbers won't match
<davecheney> so, you say to juju 'precise/amd64 1 core, 1.7gb of ram please'
<davecheney> and we have to find the AMI that matches that
<jaywink> davecheney, I set it as POC to the code itself :P
<davecheney> these values are fixed for ec2, and vary even per region
<davecheney> jaywink: i think IRC is a poor medium for this
<davecheney> i suggest continuing this discussion on juju-dev ML
<davecheney> jaywink: what you want to do is possible
<davecheney> but will take some tweaking
<davecheney> especially with the ec2 provider as it has never been tested against ec2 workalikes
<jaywink> davecheney, don't mind tweaking :) and GQ is very interested too to get things working, even if as POC with non-stable code mods to begin with. I'll subscribe to the list and throw in the questions - thanks! :)
<davecheney> jaywink: i'm sure it can be done with a bit of hacking
<davecheney> the image/AMI selection will be the trickiest bit
<davecheney> jaywink: writing a new provider, copying the ec2 provider would be the simplest solution
<davecheney> again, the best way to approach this is via the development list
<jaywink> davecheney, I just used 'euca-describe-images' against GQ and took the most suitable one - GQ doesn't have that many :) But just worried while waiting for GQ to fix one thing that whether their server image will work or does Juju require some custom Ubuntu builds.
<davecheney> jaywink: we have a product
<davecheney> well
<davecheney> not really a product
<davecheney> it's just a shittonne of json
<davecheney> which allows you to describe the images available in a cloud
<davecheney> this will probably be the best solution to solving the image selection issue
<davecheney> and glue together what juju expects with what GQ provides
<jaywink> ok cool, sounds good. just joining the list, will throw something in there. Thanks!
<davecheney> jaywink: kk
<sinzui> adeuring, Bug #1229708
<_mup_> Bug #1229708: UnavailableSradsException from cstats <elasticsearch> <charmworld:Triaged> <https://launchpad.net/bugs/1229708>
 * adeuring is looking
<jamespage> adam_g, some of my ceph redux was fud
<jamespage> https://code.launchpad.net/~james-page/charm-helpers/fixup-ceph-pool-creation/+merge/187260
<jamespage> I've already merged those fixes into the openstack-charmers branch and re-synced cinder and glance
<wedgwood> how do I recover a local environment after a reboot?
<wedgwood> (reboot of the host system)
<sinzui> adeuring, how goes the elasticsearch issue on staging?
<adeuring> sinzui: fixed, but in a hackish way: I changed the ES clustername and rebuilt the index
<sinzui> \o/
<adeuring> sinzui: where do we store the configs of charmworld?
<adeuring> (we don't want to run juju set elasticsearch cluster-name=new_name manually during deployment ;)
<sinzui> adeuring, I don't think that is hackish? I added the config but never used it
<adeuring> sinzui: well, I just declared a new cluster name, but that should be automatically used when staging is freshly deployed
<sinzui> adeuring, you are right about the deploy... and I warned webops that running 2 ES stacks in prodstack will require them to have different cluster names. I suck for not writing it down
<sinzui> adeuring, staging was deployed using the pseudo script in the orangestack branch
<adeuring> sinzui: can you give me the URL?
<sinzui> We want to update ./deploy script in it
<sinzui> adeuring, you don't already have a branch?
<sinzui> ahh, I am using the old name. adeuring look in lp:~ce-orange-squad/charmworld/staging-tools
<adeuring> sinzui: could be -- can't find it at least... Maybe I'm just confused
<adeuring> sinzui: ah, sure...
<adeuring> sinzui: I'll update it
<sinzui> adeuring, lets update the Charmworld page for webops to include the cluster name with each deploy.
<sinzui> we might also want to update the deployer file. I need to find that branch
<adeuring> sinzui: yep. I'll also write an internal email that we might step on each other's toes by not setting the cluster name. Perhaps my change affects other ES instances...
<sinzui> adeuring, thank you!
<hatch> when deploying the gui using local provider on 1.14.1-precise-amd64 the instance-state of the GUI machine is 'missing' - is this intended or an issue of some sort
<hatch> ahh found the docs - that's normal
<jcastro> hey sinzui you're doing release now?
<jcastro> I mean you're doing the juju releases these days?
<sinzui> yes
<jcastro> are we defaulting to the local provider yet?
<sinzui> jcastro, Sorry I don't understand.
<jcastro> ok so say I do a clean install
<jcastro> and do `juju init`
<jcastro> do we write out local as the default environment yet?
<sinzui> Maybe in 1.15.0 (if we fixed the bug)
<sinzui> I can check
<jcastro> I know it's a filed bug
<jcastro> darned if I can't find it now though
<sinzui> jcastro, https://launchpad.net/juju-core/+milestone/1.16.0
<sinzui> ^ targeted to next week
<jcastro> ok
<sinzui> jcastro, I need to reset my juju-core. It thinks it is 1.14.1 from the release
<jcastro> sinzui: https://bugs.launchpad.net/juju-core/+bug/1229903
<_mup_> Bug #1229903: Default to local provider <juju-core:New> <https://launchpad.net/bugs/1229903>
<sinzui> jcastro, 1.15.0 will use amazon as the default provider
<jcastro> sinzui: I'd like to nominate it for 1.16 but not like, step on any release toes by just assigning it in LP
<sinzui> jcastro, I (as a heavy juju lxc user) endorse your nomination. Are you arguing that Saucy (1.16.0) needs this out of the box setting for a great 30 minute experience?
<jcastro> right!
<jcastro> we had agreed verbally at the sprint, along with thumper
<jcastro> I just realized today that we hadn't followed up and actually done that yet
<sinzui> jcastro, I accept your argument and have targeted the bug. I will ask for forgiveness if I must
<sinzui> I'll talk to the leads
<jcastro> yes, cowboy the world, I like how you think. :p
<weblife> afternoon juju mack daddy's
<jcastro> hi!
#juju 2013-09-25
<ZonkedZebra> Hey. I'm developing a charm and trying to debug one of the hooks. What is the best way to recover a node from an error state? According to the docs, error-state nodes don't run upgrade hooks.
<sarnold> ZonkedZebra: I think ssh to the unit, fix it up, and run juju resolved
<davecheney> sarnold: ZonkedZebra
<davecheney> juju resolved --retry $UNIT && juju debug-hooks $UNIT
<davecheney> will retry the failed unit
<sarnold> ooh! --retry :) very nice
<davecheney> then immediately (before that is actioned) start debug-hooks so you can run the hook yourself
<ZonkedZebra> in the debug-hooks it appears to drop me right before the script is executed. Is there some functionality there I am missing? tailing /var/log/juju/unit-* has provided the best feedback so far
<davecheney> ZonkedZebra: you get to run the hook, debug the hook
<davecheney> the name of the hook will be in the $PS1
<davecheney> so, if the hook that failed is config-changed
<davecheney> at the prompt, type
<davecheney> hooks/config-changed
<davecheney> see where it breaks
<davecheney> fix it
<davecheney> run it again
<davecheney> when you're happy that it has worked
<davecheney> exit 0
<davecheney> will take you to the next hook queued
<davecheney> when you've processed all the hooks
<davecheney> exit the final shell
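Condensed, the flow described above looks like this (the unit name is just an example):
    juju resolved --retry wordpress/0 && juju debug-hooks wordpress/0
    # inside the debug session on the unit, the failed hook's name is in $PS1:
    hooks/config-changed     # run it by hand, fix, re-run until happy
    exit 0                   # marks the hook done and moves to the next queued hook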
<ZonkedZebra> those changes will be saved on that node?
<davecheney> no
<davecheney> you will need to apply those changes to your charm
<davecheney> then use something like
<ZonkedZebra> Interesting, guess I can copy back out when done. I was just updating locally and then upgrading the charm
<davecheney> ZonkedZebra: the best mental model of a charm is to expect it to evaporate at any second
<davecheney> any changes done on the unit itself are not saved anywhere
<ZonkedZebra> davecheney: I can figure that part out :) thanks
<davecheney> ZonkedZebra: sorry mate, it wasn't my intention to talk down to you
<cespare> (very new to juju -- about 5 min into the docs) does juju set-constraints just change a configuration file?
<cespare> I'm trying to understand how all these settings you're making would be shared with your team
<cespare> I guess you keep your whole ~/.juju in version control?
<davecheney> cespare: yeah, the .juju directory started simple but now contains a lot of files which all need to be on every client
<davecheney> we're trying to address this
<davecheney> but it won't happen on a timeframe useful to this discussion
<cespare> davecheney: ok thanks
<cespare> is there a way to use something other than .juju, perhaps with an env variable or something?
<freeflying> does ceph charm need a second disk?
<davecheney> cespare: you can move the location of JUJU_HOME with that environment variable
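For example (the path is arbitrary):
    export JUJU_HOME=$HOME/team-infra/juju
    juju bootstrap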
<davecheney> freeflying: yes
<freeflying> davecheney, any particular reason for the second disk? should only ceph-osd need a second disk?
<axw> cespare: davecheney's right in general, but set-constraints in particular goes into the state database
<cespare> davecheney: ok neato. So is just keeping JUJU_HOME in VC the recommended way of working as a team? The docs feel like they're addressed to a single dev or something
<cespare> axw: oh ok, that makes sense
<cespare> axw: that's mongo?
<axw> yup
<davecheney> cespare: i'd say it's a good idea
<davecheney> we don't have a recommendation
<cespare> ok
<davecheney> the contents of .juju are very much in flux
<davecheney> we know that we keep too much state on disk
<davecheney> and are trying to fix it
<davecheney> but it's not top priority compared to other stuff we want to get done
<cespare> davecheney: i see. Are you saying that in an ideal world, your .juju would mostly just have enough info to point at the juju configuration server and everything else would be stored there?
<davecheney> cespare: bingo
<davecheney> would you like a job ?
<cespare> Happy with the one I've got, thanks :)
<cespare> Plus I could never use bzr for my dayjob. It wouldn't work out.
<davecheney> everyone has their price
<cespare> can't argue with that
<sarnold> hehe
<cespare> Is there something I can read about using juju to manage your own application? I suppose it's basically just going to involve writing a charm for it...but what about redeploying it when the code changes, monitoring, logging, etc?
<davecheney> cespare: juju is configuration management
<davecheney> it doesn't do monitoring or process management
<davecheney> what it does do is define a framework for connecting services together
<davecheney> the main driver was virtual environments like ec2
<davecheney> when the names of the machines are not known ahead of time
<cespare> davecheney: ok.
<davecheney> process management is always tricky
<davecheney> juju doesn't want to be the process manager
<cespare> So how would juju help out if, say, I have an application that needs a database. Can I get it to just provision me some machines, and my deploy tool can ask juju for what the latest set of application servers are?
<davecheney> ie, we don't want to, and in reality cannot demand that processes do not daemonise themselves
<davecheney> cespare: sort of
<davecheney> but not really
<davecheney> juju lets you define an environment, a collection of services
<cespare> and then I would want juju to invoke my deploy tool if i add more nodes...:\
<davecheney> juju is your deploy tool
<davecheney> you describe your environment, ie
<davecheney> juju deploy wordpress
<davecheney> juju deploy mysql
<davecheney> juju add-relation wordpress mysql
<davecheney> juju expose wordpress
<davecheney> you don't describe machines, hosts, networks, firewall ports, etc
<cespare> davecheney: that all makes sense. Does juju not help with my application servers at all?
<davecheney> cespare: you'd have to be more specific what kind of help you are looking for ?
<davecheney> for monitoring we have the idea of subordinate charms
<davecheney> which let you describe things like zenoss and nagios agents
<cespare> well, like in my example. I have an application server that I'm hacking on and want to deploy frequently
<cespare> (forget about monitoring and stuff for now)
<cespare> it connects to a db that I brought up with juju deploy mysql
<freeflying> davecheney, can we deploy ceph onto machine which only have 1 disk, and ceph-osd to machine has 2 disks
<davecheney> so, you'd describe the process of deploying your application server as a charm
<cespare> now, maybe I need to scale up the app server by adding more nodes, or maybe move the db to a different box...does juju do those things?
<davecheney> freeflying: no, ceph requires two luns
<davecheney> you need to use constraints to make sure the unit is provisioned on a machine with that disk setup
<davecheney> but, regrettably, we haven't implemented those constraints yet
<davecheney> cespare: juju does those things
<freeflying> davecheney, what about ceph-osd then
<davecheney> freeflying: that will work
<freeflying> davecheney, one disk for osd will be fine?
<davecheney> cespare: juju add-unit $SERVICE
<davecheney> freeflying: i guess so, isn't it a dashboard or something
<davecheney> cespare: juju has the model of one machine per unit of a service
<cespare> davecheney: ok, so what does deploying version N+1 look like? You build a new version of the charm and then...
<davecheney> cespare: you have two options
<davecheney> 1. juju upgrade-charm, and write a hooks/upgrade-charm hook that will git pull your code or something
<davecheney> or
<davecheney> 2. juju destroy-service && juju deploy $SERVICE
<davecheney> or juju deploy $SERVICE $NEW_NAME
<davecheney> then destroy the old name
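Sketched out, the two options look roughly like this (service names are placeholders):
    # option 1: upgrade in place; hooks/upgrade-charm pulls the new code
    juju upgrade-charm myapp
    # option 2: stand up a replacement, then retire the old service
    juju deploy myapp myapp-v2
    juju destroy-service myapp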
<cespare> davecheney: do the subordinate charms break from the one machine per unit of service paradigm?
<cespare> i.e. can I run the nagios charm with my app server together?
<davecheney> cespare: yes
<cespare> s/i.e/e.g
<cespare> ok thanks
<davecheney> cespare: yes, subordinate charms are deployed 'into' the machine of the thing they are subordinate to
<davecheney> nagios isn't a subordinate
<davecheney> its the server component
<cespare> oh, the agent, whatever
<davecheney> nagios-nrpe is the agent
<cespare> yeah
<davecheney> fair warning
<davecheney> we understand that the one machine per service unit is a sucky requirement
<davecheney> and makes it hard to have 'small' juju environments
<davecheney> we're working on fixing that with lxc containers
<cespare> davecheney: yeah, saw that in the docs
<davecheney> but there are complex problems, mainly around networking in hostile environments like ec2 and private openstack clouds, which make the problem much harder
<cespare> davecheney: sounds good, but actually for our infrastructure we pretty much have a machine per service
<cespare> i mean (many) dedicated machines per service
<davecheney> cespare: that is why i say service unit
<davecheney> the service is the abstract idea
<davecheney> the unit is the physical manifestation of one instance of that service
<cespare> davecheney: when a charm reacts to the upgrade-charm hook is it supposed to transform itself to the same state as if the upgraded charm were deployed to a fresh machine?
<cespare> davecheney: good terminology
<davecheney> cespare: as the charm author, we push a lot of that work onto you
<davecheney> all we do is call the hook and wave our hands that it is your problem to figure out what that means
<cespare> right, I'm asking if that's what I'm supposed to do as a good citizen
<cespare> ok
<davecheney> cespare: there are many ways of skinning the cat
<cespare> davecheney: pretty easy if the application is a single jar/go binary
<davecheney> you could also use a config variable to define the revision you want to use
<davecheney> then your upgrade could be
<davecheney> juju set revision=XXX
<davecheney> which would fire the hooks/config-changed hook and you could do a git pull
<davecheney> leaving upgrade charm to only change the actual code of the hooks/*
<cespare> ah
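A minimal sketch of such a hook, assuming a 'revision' option in config.yaml and an application checkout in /srv/app (both hypothetical):
    #!/bin/bash
    # hooks/config-changed
    set -e
    rev=$(config-get revision)    # read the value set via 'juju set'
    cd /srv/app
    git fetch origin
    git checkout "$rev"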
<davecheney> cespare: at it's core, juju is two things
<davecheney> 1. a generic interface to various vm providers
<davecheney> 2. a way of scheduling the remote execution of remote commands
<davecheney> any assumptions above and beyond that have to belong with the charm authors
<davecheney> (we do push a lot of responsibility to charm authors)
<cespare> I wish juju worked on digital ocean, that'd be cool
<davecheney> cespare: we're working on a thing called manual provisioning
<davecheney> which lets you supply machines via ssh
<cespare> ah ok, still totally scriptable though heh
<cespare> that would do the trick
<davecheney> it's there in tip if you want to try it
<davecheney> but not documented because there are a lot of rough edges
<cespare> davecheney: thanks for answering all my questions
<davecheney> np
<jamespage> marcoceppi, please ping me when you start today re charm-tools update
<jamespage> marcoceppi, as its pretty much a complete re-write I need more information before I go speak to the release team
<AskUbuntu> Set up nodes for Ubuntu cloud 12.04 | http://askubuntu.com/q/349867
<AskUbuntu> Error on juju configuration for maas | http://askubuntu.com/q/349892
<marcoceppi> jamespage: ping
<jamespage> hey marcoceppi
<jamespage> marcoceppi, so a few questions re cloud-tools 1.0.0 if you have time
<marcoceppi> jamespage: I've got all the time in the world for this
<jamespage> marcoceppi, OK - so I pulled the packages from the PPA and merged them into the main packaging branch in ubuntu
<jamespage> then restored a few files under debian/* that had got dropped
<jamespage> 1.0.0 is a complete rewrite in python right?
<marcoceppi> jamespage: correct, the code is rewritten, the packaging is re-done, and the structure of the package changed
<jamespage> marcoceppi, so the rationale is really about supportability going forward, right?
<jamespage> as the current package is a mix of bash/python and not actively developed
<marcoceppi> jamespage: the current package, 0.3, is no longer maintained. The re-write was to make charm-tools multi-platform and bring its quality up
<jamespage> marcoceppi, OK
<ehw> hey, guys, is there any way to tell juju which lxc bridge name it should be using?  lxc works fine, but juju is failing with the net device not found error
<marcoceppi> ehw: not that I know of, let me dig through the environments.yaml options
<ehw> marcoceppi: thanks; was looking through the source, but it hasn't got any clearer for me
<marcoceppi> ehw: there's two places for config options, one is in the providers code itself, then there's like this global options file that is env.yaml options for all environments
<marcoceppi> ehw: it looks like there's an "JUJU_LXC_BRIDGE" environment variable you can set during bootstrap
<marcoceppi> let me dig a little more
<marcoceppi> ehw: oh, wait, that's for something different
<ehw> marcoceppi: yeah, just tried that, didn't seem to get me what I needed
<marcoceppi> ehw: line 42 of provider/local/environ.go has it hard-coded `const lxcBridgeName = "lxcbr0"`
<marcoceppi> ehw: if you wanted that to be configurable, which isn't out of reason, you'd need to open a bug https://bugs.launchpad.net/juju-core
<ehw> marcoceppi: yeah, looks like I'll be doing that
<adeuring> marcoceppi: could you please have a look here: https://code.launchpad.net/~adeuring/charm-tools/python-port-check-config/+merge/186080 ?
<marcoceppi> adeuring: sure can!
<adeuring> thanks!
<marcoceppi> adeuring: while you're here
<marcoceppi> REQUIRED_OPTION_KEYS = set(('description', )) - description is the only req key? I thought type was as well?
<adeuring> marcoceppi: http://bazaar.launchpad.net/~charmers/juju/docs/view/head:/source/service-config.rst says that "str" is the default type, so I assume it does not need to be specified (well, unless you want an int or float)
<marcoceppi> adeuring: ah, gotchya, thanks
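For illustration, a config.yaml entry that relies on the string default next to one that declares its type (the option names are made up):
    options:
      revision:
        description: Git revision to deploy
      port:
        type: int
        default: 8080
        description: Port the service listens on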
<ehw> marcoceppi: done.   pad.lv/1230306
<marcoceppi> Charm call, http://ubuntuonair.com and http://pad.ubuntu.com/7mf2jvKXNa
<mattyw> I can't see anything on ubuntuonair?
<marcoceppi> mattyw: http://www.youtube.com/watch?v=UPUO62DQiuw&feature=youtu.be
<mattyw> marcoceppi, much better thanks :)
<mattyw> jcastro, can I ask some more questions?
<mattyw> I know you love it
<jcastro> sure!
<jcastro> keep on keeping on!
<mattyw> These pages: https://jujucharms.com/fullscreen/search/~mattyw/precise/docker-3/?text=docker
<mattyw> how do they get generated? I expected them to update if I updated the charm - or after some interval - but they don't
<mattyw> my 2nd question is: this is what I'm doing with the config http://bazaar.launchpad.net/~mattyw/charms/precise/docker/trunk/view/head:/hooks/config-changed. but it's not really config. is this how the framework charms work at the moment?
<mattyw> jcastro, I think that's all my questions actually
<jcastro> the pages are generated ... nightly I think? rick_h_ do you know the interval?
<jcastro> marcoceppi can answer the 2nd one
<marcoceppi> mattyw: that hook looks fine. a lot of my charms do things like that in the conduit changed hook
<mattyw> marcoceppi, ok cool, glad I get the basic idea
<marcoceppi> config*
<marcoceppi> jcastro: I think we did the charm call too soon?
<jcastro> no it's always been at this time
<jcastro> I just moved it to the wrong spot
<jcastro> fixed, thanks
<jcastro> http://highscalability.com/blog/2013/9/25/great-open-source-solution-for-boring-ha-and-scalability-pro.html
<jcastro> share/tweet/reddit/whatever please!
<Cobold> Hi all!
<AskUbuntu> Is there a juju PostGIS charm out there? | http://askubuntu.com/q/350035
<jcastro> hey popey
<jcastro> huh
<jcastro> hey guys check this out
<jcastro> http://code.scenzgrid.org/index.php/p/jujucharms/
<jcastro> http://code.scenzgrid.org/index.php/p/jujucharms/source/tree/f906376d2ecba34e82e15b1e558e1b9e3c4d4ea1/postgis/precise/postgis/README
<rick_h_> jcastro: used to be about every 15min
#juju 2013-09-26
<freeflying> I released a node from maas, which has been deployed with service by juju, is it possible to  destroy the service now
<davecheney> freeflying: can you say what you did another way
<davecheney> i am not a maas expert
<freeflying> davecheney, I have an environment running, deployed ceph charms, it failed due to lack of a second hd, then someone reinstalled ubuntu on one of the machines, I tried to destroy-service,
<freeflying> davecheney, I can't achieve it; using destroy-unit -> resolved -> destroy-machine all failed too, then I released the node from maas, but juju status still shows it's there
<davecheney> freeflying: yes, sorry this is a known issue
<freeflying> davecheney, any workaround so far
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1206532
<_mup_> Bug #1206532: --terminate option for destroy-unit <juju-core:Triaged> <https://launchpad.net/bugs/1206532>
<davecheney> freeflying: apart from ignoring the entry in status
<davecheney> no
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1089289
<_mup_> Bug #1089289: remove-unit --force <doc> <juju-core:Triaged> <https://launchpad.net/bugs/1089289>
<davecheney> and several others
<freeflying> davecheney, can we delete the status from mongodb
<davecheney> freeflying: possibly, but utterly not recommended
<mrz> having problems getting juju bootstrap to work on a fresh ubuntu instance
<mrz> not sure if i have my config set correctly.
<mrz> and for whatever reasons, juju -e tosses an error too
<mrz> error: error parsing environment "hpcloud": no public ssh keys found
<mrz> what am I missing?
<mrz> authorized-keys-path is missing from the default config.
<mrz> appears to work (well returns nothing) after I added that
<AskUbuntu> juju credential error masage | http://askubuntu.com/q/350187
<AskUbuntu> Does juju helps to build own private cloud | http://askubuntu.com/q/350226
<AskUbuntu> Problem with upload images openstack | http://askubuntu.com/q/350243
<arosales> mrz, did you get your hp config working?
<mgz> arosales: thought you'd typoed me then...
<arosales> mgz, ha I also initially thought you were also asking an hp question, and I thought that couldn't be right :-)
<jcsackett> orangesquad: can i get a review of https://code.launchpad.net/~jcsackett/charmworld/askubuntu-job/+merge/187758 and https://code.launchpad.net/~jcsackett/charmworld/askubuntu-in-review-queue/+merge/187771 please? neither are very long.
<sinzui> jcsackett, I am in meetings for the next 3 hours. I can look after then
<epafrashg_> hi all
<jcastro> hey utlemming
<epafrashg_> where are you come from?
<_mup_> Bug #1229275 was filed: juju destroy-environment also destroys nodes that are not controlled by juju <juju:New> <juju-core:Triaged> <maas (Ubuntu):Triaged> <https://launchpad.net/bugs/1229275>
<marcoceppi> hey epafrashg_ anything we can help you with?
<jcastro> jamespage: sinzui: arosales: hah! 12.04.3 fresh install ... did the updates, installed the PPA, apt-get install juju-core juju-local ...
<jcastro> and ... everything is working
<sinzui> \o/
<mrz> arosales: more or less. i ran into a security group limitation
<arosales> mrz, ah yes that has bitten me too
<mrz> the osx brew keeps bailing on me so i gave up and spun up an ubuntu instance
<arosales> an HP ticket should resolve that
<mrz> i have to fill out a form to get it increased
<arosales> mrz, funny you mention that, I was just talking to marcoceppi and sinzui about the juju osx client
<arosales> mrz I have asked HP support to just give a general bump to the sec group limit
<mrz> 10's small
<arosales> mrz, agreed. They really didn't give me an answer. They are evaluating it for bumps across the board
<mrz> i haven't played with it much but how does scaling work? is that some hook within the charm that shoves things behind ha-proxy or something?
<mrz> i have access to the East coast beta but that's no better
<arosales> add-unit spins up another instance and mongo keeps track of it in relation to the service
<arosales> https://juju.ubuntu.com/docs/charms-scaling.html
<arosales> but it doesn't go into the "how"
<arosales> mrz https://juju.ubuntu.com/resources/overview/ goes into the "how" Under "Scaling services horizontally"
<mrz> the example walked through adding 100 instances of mysql but it's really only magic if those are read-only slaves behind a load balancer
<arosales> hazmat, marcoceppi what is the command to disable sec groups?
<mrz> oh cool, i'll read that today
<arosales> mrz, I think the mysql charm itself can set up a cluster with slaves
<arosales> and you can add-unit behind that
<mrz> but it read like it was just master/slave and not some sort of load balancing between slaves
<hazmat> arosales, disable sec groups?
<hazmat> arosales, oh.. firewall global mode
<arosales> hazmat, thats it
<arosales> hazmat, ya to get around hp sec group limits
<hazmat> its an undocumented config... evilnickveitch it would be good to capture that one
<gumango> atlast, Juju for Azure :D
<mrz> hazmat: yes, for cases where all my stuff is behind a load balancer or just internal.
<marcoceppi> hazmat: firewall-mode: global
<arosales> mrz, put that into your HP stanza in environments.yaml _if_ and only if you want to side step security
<arosales> ie doing some dev
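A minimal sketch of that stanza (credentials and region omitted):
    environments:
      hpcloud:
        type: openstack
        firewall-mode: global    # one shared security group instead of one per service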
<hazmat> arosales, we didn't need to use it for the training.. hp up'd our limits, and we split the class across two zones. i had a nightly setup to clear out the unused groups and env resets.
<arosales> hazmat, thanks
<evilnickveitch> arosales, I spoke to the HP people about that, oh gosh, about 3 months ago, but they never got back to me either
<arosales> evilnickveitch, ya . .
 * arosales sighs
<evilnickveitch> hazmat, thanks
<arosales> hazmat, good to hear for the training, but a nice option to have in your back pocket for development education purposes
<hazmat> arosales, absolutely
 * arosales strongly notes for education and development purposes :-)
<jcastro> arosales: https://bugs.launchpad.net/juju-core/+bug/1229903/comments/3
<_mup_> Bug #1229903: Default to local provider <juju-core:Triaged> <https://launchpad.net/bugs/1229903>
<mrz> "firewall-mode: global" does what exactly?
<mrz> does it use a default group?
<mrz> (it's sort of a pita to add security groups after instance creation right?)
<arosales> jcastro, for completeness could you take a 12.04.0 and install saucy lxc on it and see how local provider works?
<jcastro> arosales: but I'd need the kernel too?
<arosales> mrz: I think it disables the use of a per-service sec group
<jcastro> arosales: or do you mean, install .0 and then do what I recommend in the bug to confirm?
<mrz> arosales: i like that. wonder what it uses then.
<arosales> jcastro, the thought is with an updated lxc you wouldn't need an updated kernel
<jcastro> huh
<jcastro> ok
<arosales> jcastro, thanks
<arosales> mrz I haven't used it personally, but I am _guessing_ it reuses
<arosales> marcoceppi, or hazmat may have had more actual experience.
<mrz> arosales: suppose i should just try it
<mrz> i don't think i've had luck using the nova client to adjust groups post instance creation
<hazmat> mrz, it uses a single group and adds per machine entries to it
<arosales> hp support does get back pretty fast for these types of requests
<jcastro> arosales: we'd still want people on the newer kernel anyway though right?
<arosales> but if you wanted to try for development purposes may be worth a shot
<jcastro> I mean, if you're going to use LXC we want them on the newest kernel we can support right?
<hazmat> mrz default juju would use a single group per machine
<mrz> hazmat: oh.
<arosales> jcastro, not necessarily
<hazmat> mrz, part of this is a throwback to ec2, where the security group for a machine is static at machine creation outside of vpc
<arosales> 12.04.0 is supported and gets updates
<hazmat> openstack is a bit more flexible.. and can do runtime mapping of groups to machines
<arosales> jcastro, for a good user story it would be nice to not have to reboot
 * jcastro nods
<arosales> but the latest kernel isn't that bad, but it's an unknown what a user has tied to a kernel. I think testing an updated lxc on precise may just give us some more information to recommend.
 * jcastro nods - I should have results for you in about ~30
<mrz> "Scaling services horizontally
<mrz> oops
<arosales> jcastro, thanks
<mrz> "Scaling services horizontally" - doesn't really tie into the next step of getting both "web servers" behind a load balancer.
<mrz> i suppose that'd be another relationship to build?
<jcastro> yes
<jcastro> you do ...
<jcastro> juju deploy haproxy
<jcastro> then `juju add-relation haproxy whateveryourservice`
<jcastro> then you `juju unexpose whateveryourservice`
<jcastro> and then `juju expose haproxy`
<mrz> right, that totally makes sense then.
<jcastro> then `juju status haproxy` to get the public IP, then update DNS
<jcastro> however ...
<jcastro> some charms have loadbalancing in them
<jcastro> so like if you `juju add-unit wordpress` each head registers itself with the nginx load balancer on all the other heads
<jcastro> so you don't need an haproxy there
<jcastro> ideally a charm would have options for either way of doing it
<jcastro> mrz: hmm, maybe I should add the haproxy example to that page?
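Gathered into one place, the haproxy front-end being described would look roughly like this (the service name is just an example):
    juju deploy haproxy
    juju add-relation haproxy myservice
    juju unexpose myservice
    juju expose haproxy
    juju status haproxy    # note the public address and point DNS at it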
<mrz> ah. i wasn't aware the wordpress charm worked like that. so in the case of wordpress, how do I figure out which IP to add to dns?
<jcastro> any one of them
<mrz> but then i'd need to make sure i don't destroy that instance right?
<jcastro> the nodes just figure it out, it's kind of badass.
<jcastro> yeah
<mrz> i wonder how you'd safeguard against humans doing Bad Things there.
<mrz> jcastro: and yeah, the ha-proxy example would be good.
<mrz> you know i should hack up wordpress to use HP's cloud storage vs. nfs.
<mrz> scares me to think of running nfs in the cloud like that
<jcastro> half off pricing!
<mrz> it's what I liked about azure (minus the pricing part). the WP Azure plugin rocked it.
<mrz> i have no idea where themes ended up but largely i didn't need to care.
<jcastro> marcoceppi: hey, does the mediawiki charm have the same load balancing set up as wordpress?
<marcoceppi> jcastro: nope
<jcastro> from looking at it I don't think it does
<jcastro> hah, so the docs are totally incorrect
<evilnickveitch> jcastro, erm, well, not totally. s/mediawiki/wordpress would fix them.
<evilnickveitch> though it would be nice to add some haproxy example instead
<jcastro> I am doing both now
<jcastro> expect an MP in a few
<jcastro> evilnickveitch: https://code.launchpad.net/~jorge/juju-core/scaling-fixes/+merge/187847
<evilnickveitch> cool. i just need to make a few tweaks
<jcastro> mrz: thanks for that feedback, that was really useful!
<mrz> np.
<jcastro> marcoceppi: evilnickveitch: I suspect the original author mixed up wordpress and mediawiki charms
<jcastro> which is like the thousanth time that's happened
<evilnickveitch> jcastro, possibly, well done for spotting it though, that page has been through umpty-ump revisions and nobody noticed before
<evilnickveitch> I guess because it doesn't generate an error, just doesn't work as intended
<evilnickveitch> jcastro, merged
<arosales> marcoceppi, sinzui: got brew to install juju
<arosales> http://pastebin.ubuntu.com/6159550/
<arosales> marcoceppi, sinzui: I had to have xcode on 10.8 installed, and accept the license
<arosales> in addition to the command line tools the brew wanted me to install
<jcastro> arosales: ok so testing with just "lxc" isn't working out
<jcastro> the thing is to bring it back it has deps on other things in raring
<jcastro> so it wants to pull in a new libc6, etc
<arosales> jcastro, I was worried about the deps :-/
<arosales> jcastro, is that dep string pretty long?
<jcastro> 23 deps
<arosales> marcoceppi, sinzui: I am not sure what that means for https://github.com/mxcl/homebrew/pull/22772 and 1.14.1 - whether xcode is a dep. I couldn't get it working without xcode.  I'll post a comment
<arosales> jcastro, ouch
<jcastro> including some libapparmor stuff and libnih stuff that looks scary
<arosales> hwe may be the way then
<arosales> jcastro, thanks for investigating
<arosales> jcastro, can you also add that comment to the bug?
<jcastro> but the new libc6 would mean they'd have to reboot anyway, so this is actually worse than the HWE experience
<jcastro> yeah
<kurt_> jamespage: will the raring version of python-quantumclient work on precise?
<kurt_> jcastro: I have openstack/juju/MAAS working fully on VMWare Fusion and Workstation now.
<kurt_> next step is building redundancy in
<sinzui> benji, gary_poster gmail + juju-gui list == war. They won't let me reply
<sinzui> benji, gary_poster: http://pastebin.ubuntu.com/6160045/
<gary_poster> thank you very much Curtis!  I'll paste that into the conversation so others can read
<sinzui> I am still getting re-subscribed under an address that the two systems accept.
<gary_poster> sinzui, meanwhile I told the mailing list to accept that other email forevermore.
<sinzui> Thank you!
<AskUbuntu> Switching Juju lxc bridge | http://askubuntu.com/q/350503
<thumper> jcastro: ping
#juju 2013-09-27
<ZonkedZebra> Is there a way to modify the config of a charm from within a hook?
<davecheney> ZonkedZebra: no, there is no config-set hook command
<ZonkedZebra> davecheney: best approach to auto load new code from a remote repo? cron? hooks? config bool that you set back and forth?
<davecheney> ZonkedZebra: why not
<davecheney> juju set revision=XXX
<davecheney> which will fire the config-changed hook on your units
<ZonkedZebra> davecheney: if I juju set to the value it already is does that trigger the config-changed hook?
<ZonkedZebra> (The goal is to have production and staging that track the appropriate git branch with minimal intervention)
<davecheney> ZonkedZebra: no
<davecheney> Buuuuuut, remember as a charm author, we do not guarantee that hooks will be run only once
<davecheney> so they should be written to expect this
<ZonkedZebra> davecheney: yep, that's fine. I've got all the appropriate checks to make sure multiple runs don't cause issues. Just looking for the easiest way to poke it a little to have it pull the new changes from git
<davecheney> ZonkedZebra: why do you want config-set (sic?)
<davecheney> it smells like you are trying to tell someone else that the charm changed something
<ZonkedZebra> i just created a config property that I was going to set to true, and then reset to false in config-changed
<davecheney> what would it resetting to false mean ?
<ZonkedZebra> That i could consistently do "juju set app pull=true"
<davecheney> i'm not trying to troll btw, just trying to understand your problem to fit it into the (sometimes limiting) model that Juju offers
<davecheney> what does pull=true do ?
<davecheney> ie, why not just setup a cron on the unit ?
<ZonkedZebra> it would fire config-changed (where git pull happens) and then set back to false to be ready to get pull=true set again
<davecheney> ZonkedZebra: so, something would be cron'd on the client to setup pull=true ?
<davecheney> why not just setup the cron on the unit ?
<ZonkedZebra> No, I would do that by hand
<davecheney> ok, in that case i'd recommend
<davecheney> not pull=true
<ZonkedZebra> do work, commit, do work, commit, juju set app pull=true
<davecheney> but revision=XXXX
<ZonkedZebra> Would also work, but I would like the units associated with branches, not a particular rev, and as we discussed, if branch is already dev then set branch=dev would not trigger the pull
<davecheney> sure, have two config settings
<davecheney> branch=... rev=...
<davecheney> if you just need a trigger
<davecheney> juju set app rev=$(pwgen 100)
<davecheney> just to set it to some random garbage
 * ZonkedZebra nods
<ZonkedZebra> davecheney: that will do, thanks
<davecheney> np
<davecheney> pwgen 100 may be overkill
<davecheney> maybe call the config value nonce or trigger or something
<ZonkedZebra> davecheney: timestamp so at least it will be slightly useful
<davecheney> sure, date +%N
<davecheney> maybe
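A minimal sketch of the trigger pattern, assuming a string option named 'trigger' in config.yaml (the option and service names are hypothetical):
    # config.yaml
    options:
      trigger:
        description: Opaque value; change it to force config-changed to run
    # from the client, kick the hook whenever you want a pull:
    juju set myapp trigger=$(date +%N)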
<julianwa_> davecheney:  hi, I use juju add-machine and OS installation failed due to some post install script failure. now I can't juju terminate-machine. the life-cycle is dying...  what can I do here?
<jose> hey marcoceppi, is it possible to get the postfix charm on the store before oct 1st?
<davecheney> julianwa_: i'm sorry you got bit by this
<davecheney> this is an open bug/feature request
<davecheney> your best bet is to delete the machine using aws or whatever you use
<davecheney> then ignore the broken record in the juju status
<davecheney> it's been a known issue for a long time
<davecheney> i'm trying to get it bumped up the priority list
<davecheney> but please don't take that as a forward looking statement
<julianwa_> davecheney: you mean leave the dying machine there? but the dying server will have the same maas-name. is that ok?
<davecheney> oh, you're using maas
<davecheney> hmm
<freeflying> lol
<davecheney> julianwa_: how did the machine get killed ?
<davecheney> did you use maas to kill it when the terminate-machine failed ?
<julianwa_> davecheney:  not killed. one post-install script failed when juju add-machine. Then I execute terminate-machine
<julianwa_> davecheney: it's still in MAAS
<davecheney> julianwa_: are you Canonical ?
<julianwa_> davecheney:  yes...
<davecheney> lets talk in that other channel
<davecheney> sorry folks, i'll post a wrap up
<davecheney> when I figure out the problem
<fwereade> marcoceppi, jcastro, ping
<gnuoy> hi, I've added a new unit to an existing juju deployment and when I do a juju status all the other machines report in fine but the new unit reports "agent-state-info: '(error: failed to list contents of container: juju-stagingstack-geonames"
<gnuoy> I can query the juju-stagingstack-geonames
<gnuoy> bucket fine and have downloaded bootstrap-verify, provider-state and the tools tgz without a problem
<gnuoy> I tried removing the unit and adding another one but I get the same error
<bloodearnest> heya all - I've been getting "cannot log in to admin database" immediately after a bootstrap on Openstack
<bloodearnest> this is with 1.14.1 on raring
<marcoceppi> fwereade: pong
<fwereade> marcoceppi, hey, I was pinging you as a possible evilnickveitch proxy, but I see he's online now
<marcoceppi> gnuoy: I've not seen that error before, what provider are you using? What version of Juju?
<marcoceppi> fwereade: ack
<fwereade> evilnickveitch, ping
<evilnickveitch> fwereade, hi
<gnuoy> marcoceppi, openstack and 1.13.2-1 bzr revno 1670
<marcoceppi> bloodearnest: is this /immediately/ after bootstrap, or after you can verify that the bootstrap is running via juju status
<fwereade> evilnickveitch, I was wondering if there was anything I could do to ease the passage of the docs I gave you a while back?
<gnuoy> marcoceppi, what object is it trying to access ?
<fwereade> evilnickveitch, a casual look seemed to indicate they weren't up yet
<marcoceppi> gnuoy: I have no idea, did the machine come online?
<fwereade> evilnickveitch, if the problem is, say, that they're crap, I'd like to help make them less so :)
<evilnickveitch> fwereade, oh, thanks for the offer - they are not, because I have taken the opportunity to include them in more of a restructure, but they will be up later today
<gnuoy> marcoceppi, nova thinks its active, I'll try ssh'ing to it
<fwereade> evilnickveitch, ok, that's awesome, tyvm
<marcoceppi> gnuoy: if you can ssh in to it, then you can get the /var/log/juju/machine-*.log file - should help shed some light
<evilnickveitch> fwereade, no, they aren't crap at all :)
<evilnickveitch> I will let you know when they go up, would be good to get your feedback
<fwereade> evilnickveitch, jolly good, i felt obliged to check ;p
<fwereade> evilnickveitch, cheers
<gnuoy> marcoceppi, well it looks like I was looking in the wrong machine and the new machine never came. I'll have a dig around and see if  I can see why. thanks
<marcoceppi> gnuoy: also, you might want to consider moving to 1.14.1 as it's the latest "stable"
<gnuoy> marcoceppi, absolutely
<marcoceppi> I noticed the environment was named "staging", but the latest dev will be 1.15, so 1.14.1 is truly the latest
<marcoceppi> it should also be easier to upgrade between stable versions than dev releases when doing in-place upgrades
 * marcoceppi is so giddy about in-place juju upgrades
<bloodearnest> marcoceppi: I get that running juju status
<bloodearnest> marcoceppi: it times out after about 7min with "Unable to connect to environment "openstackshredder""
<marcoceppi> bloodearnest: that's interesting. Can you destroy then bootstrap again with `--debug -v` options, then run `juju status -v --debug`
<bloodearnest> marcoceppi: ack
<marcoceppi> bloodearnest: also, you have admin-secret set, correct?
<marcoceppi> bloodearnest: in your environments.yaml
<bloodearnest> marcoceppi: yep - freshly generated with generate-config
<marcoceppi> bloodearnest: excellent, if you could pastebin those when you get them that should help shed some light
<bloodearnest> marcoceppi: destroy fails: https://pastebin.canonical.com/98138/
<bloodearnest> marcoceppi: the --debug points at opendns issues
<bloodearnest> some kinda redirect issues
<marcoceppi> bloodearnest: that's annoying
<bloodearnest> marcoceppi: yeah - am trying from a canonistack instance I use for dev, but I'm having similar problems there too
<bloodearnest> marcoceppi: bootstrap output: https://pastebin.canonical.com/98139/
<bloodearnest> marcoceppi: status output: https://pastebin.canonical.com/98140/
<marcoceppi> bloodearnest: yeah, it's successfully connecting to the bootstrap, just not logging in for some reason :\
<bloodearnest> marcoceppi: for completeness, destroy output (3 lines): https://pastebin.canonical.com/98141/
<bloodearnest> all this is done from another vm on the same OS environment
<marcoceppi> bloodearnest: I've not encountered this, not quite sure how to debug past here. You might find more information on the bootstrap node in /var/log/juju/
<bloodearnest> marcoceppi: don't know if it's related, but I can't ssh into the bootstrap node - publickey denied
<marcoceppi> bloodearnest: well, that's also interesting
<bloodearnest> marcoceppi: in another env, I seem to be able to ssh in
<Nelson111> Assuming i joined the right ubuntu catch up..... hello geeks nerds and all :)
<jamespage> marcoceppi, charm-tools uploaded to saucy - got accepted
<sylvaing> hi jcastro i just send mail about bluemind and juju ;-)
<marcoceppi> jamespage: \o/ Thank you!
<jamespage> marcoceppi, hey np
<marcoceppi> jamespage: the next step is to get charm-tools in to backports for precise, when I run requestbackport it says no published binaries in saucy. Is this just a waiting game?
<jcastro> Charm Championship submission charm school on http://ubuntuonair.com in a few minutes!
<jamespage> marcoceppi, give it a chance to get into the release pocket
<marcoceppi> jamespage: ack, figured
<marcoceppi> jcastro: you need me there, or is this a Mims and you thing?
<arosales> Hello, we're getting kicked off on the charm school, "How to enter the Charm Championship."
<marcoceppi> jcastro: oh bugger. The new package removes charm-helper-sh, which is provided in saucy. I suppose that's going to be a problem during the backport req
<marcoceppi> jamespage: ^^, not jcastro
<marcoceppi> bah, that whole sentence is wrong
<marcoceppi> jamespage: oh bugger. The new package removes charm-helper-sh, which is provided in the precise version. I suppose that's going to be a problem during the backport req
<arosales> if you would like to follow along for the charm school it is at http://ubuntuonair.com/
<arosales> YouTube direct link is @ https://www.youtube.com/watch?v=c6wTtWDyXsc
<m_3> sound went out completely :-(
<m_3> I can't hear anything... gonna try to reconnect... sorry for the technical difficulties
<ktubilgisayar> hi
<zradmin> has anyone else been trying to set up HA openstack with Juju? I've been following the guides posted and have my setup 90% there.... instances are launching and running but are not getting an IP from quantum at all. When I check the logs they just show AMQP messages successfully crossing. anyone have similar issues?
<marcoceppi> zradmin: a few people have been setting up openstack, let me see if I can recall their names
<zradmin> marcoceppi: thanks!
<marcoceppi> kurt_: were you the one working on deploying openstack?
<kurt_> yup
<marcoceppi> kurt_: did you ever get far enough to experience quantum not assigning IP addresses?
<kurt_> I'm finished and it all works for me.
<kurt_> I had to manually configure quantum.
<kurt_> marcoceppi: I could never get the charm to work out of the box
<kurt_> I only allowed the charm to do the basic install, but did all post-configuration myself
<zradmin> kurt_: what was the post-configuration? for the charm I specified eth1 as ext-net, but it looks like something in juju changed so it uses a lot of lxc bridges
<zradmin> kurt_: I have nodes with 2 nics, one on the "internal" switch and one on the "external" switch
<kurt_> right - one nic should connect to your oam net, the other to your external lan
<kurt_> zradmin - do you do evernote?  I've put it all in to that format so you can see
<zradmin> kurt: not currently but I can create an account real fast
<kurt_> do that and I'll share the note
<zradmin> kurt: evernote username is zradmin :)
<kurt_> k, hang on a sec
<kurt_> Actually you may not need account
<kurt_> see if you can see this
<kurt_> https://www.evernote.com/shard/s244/sh/37674b81-51af-4579-9579-8058b4cf3a9a/aca1835adea4ca6cb52e7d0091ced91c
<zradmin> yup got it
<kurt_> There you go
<kurt_> that should answer your questions
<zradmin> thanks, I'll let you know how it turns out :)
<kurt_> good stuff
<kurt_> FYI - the process was shamelessly borrowed from Kentb on the security team
<zradmin> kurt_: hmmm it looks like it does the same thing I was doing in horizon to configure the ext_net, I created a new project and created the networks via commandline and am still having the same issue
<zradmin> kurt: this is what I get on the instance http://pastebin.ubuntu.com/6164559/
<kurt_> zradmin: is your ext_net hooked up to a separate network?
<kurt_> it looks like its trying to bring it up on eth0 instead of eth1 too
<zradmin> yeah it is set to configure to eth1, but when I do an ifconfig on the nova-compute node it doesnt show anything configured on eth1
<kurt_> you need to specify eth1 for the quantum charm
<zradmin> yeah i did that
<kurt_> no ip address
<zradmin> this is in the syslog on the node m7q49 dnsmasq-dhcp[2221]: DHCP packet received on qvo6a26ce05-ae which has no address
<kurt_> are you certain your eth1 is alive and connected to a network other than your eth0?
<kurt_> are you doing this with physical hosts or virtual hosts?
<zradmin> physical, building on an m1000e blade
<kurt_> so ensure your eth1 is actually wired to a second network.  you may need to test that part - because I think that's where your problem is
<kurt_> also - are you specifying precise:grizzly?
<kurt_> here - have a look at my local.yaml - make sure yours is similar
<kurt_> http://pastebin.ubuntu.com/6164613/
<kurt_> mine goes for a single node rather than multinode installation
<kurt_> well, let me rephrase...
<kurt_> I am going the non-HA route for now as proof of concept
<kurt_> I installed on 6 virtual hosts
<zradmin> ah i see it, my switch stack is messing up my vlans
<zradmin> its tagging the traffic on the port
<kurt_> ;)
<kurt_> I need to take off for a while - I'll be back in an hour or so.
<kurt_> ping me if you are still having problems after you figure out your tagging problem
<_mup_> Bug #1232282 was filed: maas provider: bucket download failures not handled well <theme-oil> <juju:New> <https://launchpad.net/bugs/1232282>
<ZonkedZebra> Is there a one liner to obtain the public address of a unit?
<marcoceppi> ZonkedZebra: uh, kind of
<ZonkedZebra> juju status api/0 | grep public-address | cut -d ":" -f 2 | tr -d " "
<ZonkedZebra> Something like that?
<marcoceppi> basically, I would have used awk, but that's because I <3 awk
 * ZonkedZebra is not an awk user
<ZonkedZebra> awk cleaner?
<marcoceppi> ZonkedZebra: that's highly subjective :P
<ZonkedZebra> what would it be in awk?
<marcoceppi> ZonkedZebra: juju status wordpress/0 | grep -m1 public-address | awk '{print $2}'
<marcoceppi> -m1 is to only do the first match, in case there are subordinates
<ZonkedZebra> Worth learning I guess, After all, everything has awk
<marcoceppi> ZonkedZebra: awk is its own language, but at the surface it's pretty easy to use
<sarnold> it's awesome for one-liners. beyond that I lose interest, hehe
<marcoceppi> it's like vi/vim. You learn one basic flag and life is good, then when you need to dig deeper, you can
<marcoceppi> (that flag is -F)
<sarnold> lol
<sarnold> yes, -F is awesome. :)
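A tiny illustration of the -F (field separator) flag being praised here, with made-up input:

    # print the second colon-separated field
    echo "user:x:1000:1000" | awk -F: '{print $2}'    # prints: x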
<marcoceppi> I only learned of awk's amazing depth about two years ago, up until then it was my go-to "cut"
<kurt_> see + awk are awesome
<kurt_> sed + awk are awesome
<kurt_> old as the hills but still as good as gold
<ZonkedZebra> This is a new error for me, "error: no relation id specified". Seems to be triggered by a call to relation-get. Ideas?
<ZonkedZebra> If only the hook tools had real documentation....
<ZonkedZebra> Probably because I'm calling a script shared by multiple hooks
<ZonkedZebra> Whats the common practice for sharing code/functionality between hooks?
<kurt_> marcoceppi: ping
<zradmin> kurt_: i fixed the switch issue and restarted the compute node... but it still dosn't seem like traffic is going accross the bridge here's my interfaces: http://pastebin.ubuntu.com/6164893/
<kurt_> zradmin: you've got a whole lot more on my quantum-gateway than I do
<kurt_> did you use my local.yaml as your deployment template?
<kurt_> here are my interfaces
<kurt_> http://pastebin.ubuntu.com/6164899/
<zradmin> yeah my settings for those charms match up
<zradmin> hmm I dont have a br-ext
<kurt_> again - you must be having some issues with your eth1
<kurt_> a bridge can't be created
<kurt_> look for hints in /var/log/syslog or dmesg
<zradmin> yeah im still looking into it... i hope something isn't wrong with that test node
<kurt_> try to manually create the bridge
<zradmin> ty
<kurt_> once you can manually create the bridge, you should be golden
<kurt_> maybe you have some spanning tree issues?
<kurt_> zradmin: look in dmesg to see what's happening - look at my entries around br-ex
<kurt_> http://pastebin.ubuntu.com/6164928/
#juju 2013-09-28
<marcoceppi> kurt_: pong
<marcoceppi> ZonkedZebra: still having issues with sharing code between hooks?
<ZonkedZebra> took a break. settled on sourcing a file with common functions. Yay/Nay?
<kurt_> marcoceppi: are you familiar with the ceph deployment stuff on charms?
<kurt_> that is my next project
<kurt_> add ceph to openstack
<marcoceppi> ZonkedZebra: that works fine, are you writing those in bash or another language?
<ZonkedZebra> bash
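A minimal sketch of the sourcing approach ZonkedZebra settled on; the file name and helper function are illustrative, not taken from any particular charm:

    #!/bin/bash
    # hooks/common.sh -- shared helpers, sourced by each hook rather than executed directly
    install_packages() {
        apt-get update -qq
        DEBIAN_FRONTEND=noninteractive apt-get install -y -qq "$@"
    }

    #!/bin/bash
    # hooks/install -- an individual hook pulling in the shared file
    set -e
    . "$(dirname "$0")/common.sh"
    install_packages nginx
    open-port 80/tcp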
<marcoceppi> kurt_: I have pretty much very little experience. I've deployed ceph and ceph-osd as part of a demo openstack setup, but didn't dig very far
<marcoceppi> kurt_: I'm happy to field any questions I can though
<kurt_> marcoceppi: ok.  I'm just in the research stage.  if you have any cliff notes or anything you wish to share short of jamespage's cephaloid thing, I'd be appreciative.
<kurt_> the RTFM stage if you will
<marcoceppi> ZonkedZebra: cool, so you can't explicitly call relation-* commands out-of-band (IE, during hooks that aren't relation hooks). This is because Juju sets extra environment variables to make those tools work. Mainly JUJU_RELATION_ID. So if you wanted to call relation-get say during a config-changed hook, you can, but you need to record the JUJU_RELATION_ID somewhere and call `relation-get -r $JUJU_RELATION_ID <key> <unit>`
<ZonkedZebra> marcoceppi: Alright, thanks
<ZonkedZebra> Is writing charms in other languages discouraged at all?
<marcoceppi> ZonkedZebra: what I recommend instead, as that can get complicated, is during the relation-(joined,changed) hooks, when you get values you need, write them to dot files in the $CHARM_DIR; for instance, when the wordpress charm gets NFS data it puts it in a file in $CHARM_DIR then runs hooks/config-changed, which checks for that file then sources it for the values: http://bazaar.launchpad.net/~charmers/charms/precise/wordpress/trunk/view/head:/hooks/nfs-relation-changed
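A rough sketch of the pattern described above (a relation hook caches the values under $CHARM_DIR and then re-runs config-changed); the relation name, keys, and dot-file name are placeholders rather than the wordpress charm's actual ones:

    #!/bin/bash
    # hooks/db-relation-changed
    set -e
    host=$(relation-get private-address)
    name=$(relation-get database)
    [ -z "$host" ] && exit 0                 # remote unit hasn't set anything yet
    printf 'DB_HOST=%s\nDB_NAME=%s\n' "$host" "$name" > "$CHARM_DIR/.db-settings"
    exec hooks/config-changed                # hooks run with $CHARM_DIR as the working directory

    #!/bin/bash
    # hooks/config-changed
    set -e
    [ -f "$CHARM_DIR/.db-settings" ] && . "$CHARM_DIR/.db-settings"
    # ...render configuration using $DB_HOST / $DB_NAME when they are available...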
<marcoceppi> ZonkedZebra: not at all, the beauty of charms is they can be written in any language. To my knowledge we have bash/dash, python, ruby, and chef charms. Someone tried writing a charm in php but I think they gave up
<marcoceppi> Use whatever language you/people deploying the service are comfortable with
<ZonkedZebra> Am I responsible for installing the required interpreter during the install hook?
<marcoceppi> ZonkedZebra: at this time, yes
<marcoceppi> ZonkedZebra: there's discussions of having pre-required packages described in the metadata.yaml file, but for the time being you'll need to use a language available on all ubuntu machines (pretty much python or bash) to set up your dependencies
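So if the rest of the charm is written in something that isn't on the base image, the install hook can bootstrap the interpreter first and then hand off; a sketch assuming a ruby-based charm and an invented script name:

    #!/bin/bash
    # hooks/install -- written in bash so it always runs, even before ruby exists
    set -e
    apt-get update -qq
    apt-get install -y -qq ruby
    exec "$(dirname "$0")/install.rb"        # the charm's real install logic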
<ZonkedZebra> So what envs are possible with the install hook? (What is install on the base image)
<ZonkedZebra> Alright. For the record I vote reps in metadata.yaml
<ZonkedZebra> deps*
<ZonkedZebra> I hope NSA monitoring can handle typos
<marcoceppi> ZonkedZebra: I'm a little against that, personally, I'd rather see hooks/setup or something similar. A hook that is only run once, but the decision isn't up to me :)
<ZonkedZebra> What is the best way to install something on all machines? I had to do a bit of hacking to get mosh running properly.
<marcoceppi> ZonkedZebra: a subordinate would be one way
<marcoceppi> ZonkedZebra: you mean within juju or just in general?
<ZonkedZebra> In general. Built in would be cool. juju mosh unit/0. I like how that felt
<ZonkedZebra> Where can I find the list of things that are set during the hooks, like CHARM_DIR and HOME?
<ZonkedZebra> I guess I can just debug-hooks and dump it
<marcoceppi> ZonkedZebra: debug-hooks is the way to go, it might be in the docs though
<marcoceppi> ZonkedZebra: juju has plugins, so you could totally do a mosh plugin
<ZonkedZebra> Where is the docs on the plugin system?
<marcoceppi> ZonkedZebra: drafted on my computer, but they're pretty straightforward, you just put a file called juju-<plugin> on your PATH
<marcoceppi> ZonkedZebra: there's a juju-plugins project on LP that I started which will eventually package a lot of common plugins together, you could add it to
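A toy example of that mechanism, with an invented plugin name:

    #!/bin/bash
    # Save as juju-hello anywhere on $PATH and make it executable; `juju hello` will then run it.
    echo "hello from a juju plugin; args were: $*"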
<marcoceppi> ZonkedZebra: I really thought we had the environment variables documented :\
<ZonkedZebra> Nope :(
<ZonkedZebra> a couple of them are, randomly :)
 * marcoceppi makes notes to fix this
<ZonkedZebra> marcoceppi: is there a functioning example somewhere in the juju-plugins repo?
<marcoceppi> ZonkedZebra: yeah, I wrote it in Python, so it's structured a little differently since I wanted to be able to test it like a python app
<marcoceppi> ZonkedZebra: http://bazaar.launchpad.net/~charmers/juju-plugins/trunk/view/head:/plugins/juju_test.py
<marcoceppi> it's actually the worst example ever, because it's a really involved plugin
<marcoceppi> but it's the only one I've written so far
<ZonkedZebra> interesting, looked like part of the testing harness at first glance
<marcoceppi> ZonkedZebra: it's separate, it basically just runs all the files in tests/
<marcoceppi> the testing harness, amulet, was designed to be independent of the test plugin, so people aren't forced to use it for writing tests
<ZonkedZebra> Yep, figured it out when I hit __name__ == '__main__'
 * marcoceppi disappears for a while
<alexrockz> Hey guys!
<alexrockz> Hi
<alexrockz> How is everybody
<alexrockz> I'm bored
<alexrockz> Hi guys
<marcoceppi> alexrockz: hello
<alexrockz> Hi everybody
<alexrockz> Anyone like Ubuntu here?
<alexrockz> 'cause i do
<alexrockz> I used to have it on my Windows 7 laptop
<alexrockz> Then i un-installed it 'cause it was taking up my memory
<alexrockz> Didn't use it anymore
<alexrockz> So i got a new laptop
<alexrockz> It runs Windows 8
<alexrockz> I was hope for a big computer with 500 GB
<alexrockz> (Giga-Bytes)
<alexrockz> No comment huh?
<alexrockz> ok
<Diegonat> hi?..
<Diegonat> i have this error while trying to use openstack caused by: the configured region "RegionOne" does not allow access to all required services, namely: compute, object-store
<marcoceppi> Diegonat: does your user have access to compute and object-store? Can you launch instances from the dashboard?
<Diegonat> yes i can
<Diegonat> marcoceppi, I'm using
<Diegonat> admin
<Diegonat> fixed
<Diegonat> no not really
<paulczar_> how do I delete the environment's cache of a charm that I'm developing?   getting tired of doing an environment-destroy every time I change something
<alexrockz> Hi everybody!
<_mup_> Bug #1232547 was filed: When Trying to Deploy on ec2 I get "no instances found" <juju:New> <https://launchpad.net/bugs/1232547>
<AskUbuntu> Juju errors when trying to deploy to ec2 | http://askubuntu.com/q/351269
#juju 2013-09-29
<marcoceppi> paulczar_: you can use the -u flag when deploying with local:<charm> to upgrade the cache
<marcoceppi> paulczar_: juju help deploy for more details
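A sketch of what that looks like, with a made-up repository path and charm name:

    # -u/--upgrade bumps the local charm revision so the cached copy gets replaced
    juju deploy -u --repository="$HOME/charms" local:precise/mycharm
    # for a service that is already running, upgrade-charm does the same job
    juju upgrade-charm --repository="$HOME/charms" mycharm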
<mrz> jujucharms.com is just demo right? i don't actually deploy there.
<marcoceppi> mrz: correct
<marcoceppi> mrz: it's in "sandbox mode"
<mrz> i have discourse up and running but the default instance sizes are small.
<mrz> especially for postgres and i bet i want that as block storage
<mrz> firewall-mode: global
<mrz> isn't doing what I thought it should
<mrz> is it known that the default juju-gui install doesn't just work? apache's configured for 8000 but the docs seem to read that it's just http
<mrz> or https
<mrz> and expose only opens 443/80
<mrz> oh i see. looks like haproxy fronts it
<mrz> http://15.185.176.57:8000/ just hangs, however.
<mrz> weird!
<mrz> took 10 mins for this to come online
<mrz> ha!
<mrz> i tried to remove one instance and it destroyed all of them
<alexrockz> Hi everybody!!!
<alexrockz> :)
<alexrockz> Anyone like Ubuntu?
<alexrockz> Its awesome
<mrz> okay, the security group limits are really hurting me
<alexrockz> That sucks
<Alex_Loves_Ubunt> wtf
<alexlovesubuntu> There.
<alexlovesubuntu> Finally!
<panthar_> ping google.com
<dhart> Is a juju environment for multiple hosts possible without MAAS? I've poked around askubuntu.
<dhart> ooo, just found juju add-machine ssh:user@host
<dada> hi
<marcoceppi> dhart: it is, manual provisioning is still very much a new feature
<marcoceppi> dhart: I was about to link you to the mailing list thread, but it looks like you've already found it :)
<marcoceppi> mrz: sec groups on what provider? HP Cloud?
<dhart> marcoceppi: thanks for the warning. :-) I'm happy to be a guinea pig, as I'm in the process of converting brute distribution scripts to juju charms.
<marcoceppi> mrz: default juju-gui very much so does work, I use it constantly, you can change the default instance size when deploying from "small" to a different size using constraints  https://juju.ubuntu.com/docs/charms-constraints.html
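For example (the constraint values here are arbitrary):

    # ask the provider for a larger instance than its default when deploying the GUI
    juju deploy --constraints "mem=2G cpu-cores=2" juju-gui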
<marcoceppi> dhart: we've got a new provider coming soon which will allow you to just use a machine as a "bootstrap" so you can completely use your "Just a bunch of servers" as a Juju environment. Not sure the timeline on that landing though
<marcoceppi> mrz: let me know if you have any problems with the discourse charm! :D
<dhart> I'm ok with a bootstrap host that uses the 'local' provider now but moves to the null provider later.
<_mup_> Bug #1232736 was filed: juju debug-log : port 22: Connection refused <juju:New> <https://launchpad.net/bugs/1232736>
<scaine> Just starting out here - but when I run "juju generate-config", I don't get given a ./juju directory! Do I have to run this stuff as root?
<scaine> Never mind. Did an "apt-get update", juju upgraded and now it's working.
<mrz> marcoceppi: yeah it works. it's a bit opaque what's happening once i instantiate an instance. it returns green but it's not done installing software
<paulczar_> is there a way to read data from the environments.yaml from hooks ?
<paulczar_> I need to use the ec2 security keys in a hook ...  want to read from environments.yaml rather than adding them in a second place
<nik> hi
<Guest537> any1 there
<ZonkedZebra> How do I open a port to all machines and have it stay open? I modified my AWS security policy but juju reset it on me
<lazyPower> I'm using juju local. I rebooted my machine and suddenly juju status refused to report on my bootstrap node and the cloud that was present prior to the reboot. Also, when i run juju bootstrap i'm getting a 500 error.
<lazyPower> however wiping out ~/.juju/local seems to have cleared up whatever was halting progress...
<thumper> lazyPower: morning, what version of juju?
<mrz> i killed off an instance in hpcloud and juju doesn't know it
<mrz> error: no machines were destroyed: machine 11 has unit "discourse/0" assigned
<mrz> is there a consistency check or a force optoin?
<mrz> ha. no i meant what's the latest news
<lazyPower> thumper: 1.14.1
<thumper> lazyPower: can I get you to run "apt-cache policy lxc" for me?
<thumper> I'm busy debugging some lxc issues anyway
<thumper> lazyPower: do you have copies of the actual error you were seeing?
<thumper> lazyPower: also, are you using an encrypted home drive?
<lazyPower> ahhh, i removed ~/.juju/local and it resolved itself
<lazyPower> i think what happened was juju was provisioning something when i power cycled and it left the juju setup in an inconsistent state
<lazyPower> If i do encounter this again i'll backup the ~/.juju/local directory for analysis. And I am indeed using an encrypted home directory
<thumper> hmm...
<thumper> you may find that lxc still thinks you have machines lying around
<thumper> run this "sudo lxc-ls --fancy"
<lazyPower> nice
<thumper> I noticed one of our guys had his juju environment outside his home dir because it was encrypted
<thumper> I don't know if this is a problem or not
<thumper> do you still have lxc-machines running?
<thumper> you may also still have system upstart scripts hanging around
<lazyPower> I do, but they are part of the new cloud i just provisioned for testing.
<thumper> removing the ~/.juju/local directory doesn't fix everything
<thumper> hmm... ok
<thumper> mrz: there is a --force option I believe
<thumper> mrz: but I do wonder if you need to still force remove the unit
<thumper> mrz: this may actually be a problem, I recall similar problems elsewhere, but it is on our radar
<mrz> thumper: error: flag provided but not defined: --force
<mrz> can i manually edit the state machine somwewhere?
<thumper> hmm...
<thumper> mrz: no
<thumper> mrz: let me dig
<thumper> mrz: does the machine as juju sees it have any units on it?
<thumper> mrz: is that the "discourse/0" ?
<mrz> it thinks it does
<mrz> but machine 11 was destroyed in the hpcloud panel
<thumper> mrz: have you tried "juju destroy-unit discourse/0"
 * thumper is trying to remember the internal communication.
<mrz> yes. it appears to run but juju status shows it still dying
<thumper> ah...
<thumper> poos
<thumper> just reading up, there is no --force
<thumper> and it is a bug that this leaves broken bits lying around
<thumper> sorry
<mrz> recall where state lives? something i can manually edit?
<thumper> state lives in a complicated mongo db series of documents
<thumper> manually editing is not at all recommended
<mrz> er, you had me at "mongo"
<mrz> :)
<thumper> I'll be raising the priority of this issue
<thumper> it hits quite a few people
<thumper> and we need to have a nicer resolution than "you're poked"
<mrz> firewall-mode: global doesn't do what I was told it'd do
 * thumper doesn't know much about the firewall bits
<mrz> hpcloud defaults to 10 security groups
<mrz> and every instance needs its own security policy
<mrz> where should i file bugs or check for dups
<mrz> ?
<thumper> bugs.launchpad.net/juju-core
#juju 2014-09-22
<hazmat> lazyPower, hmm
<hazmat> lazyPower, that's strange.. normally its about 3m.. but yeah.. making that tunable sounds good, if you have a chance file a bug.. i'm eod, else i'll file tomorrow.
<lazyPower> hazmat: already done.
<hazmat> lazyPower, thanks, just noticed
<lazyPower> I wrapped the video edit about 12 minutes ago, waiting on an export so i can do the voice over. I'll have the assets (both raw, and completed) for you tomorrow.
<hazmat> lazyPower, you rock
<hazmat> lazyPower, btw if its slow i generally would try beefing up the size of the vm --constraints="mem=2G" is normally what i use for transient envs, and optionally hit up a different region with --constraints="region=sfo" etc.
<bloodearnest> Trivial review that fixes squid-reverseproxy charm, which is currently broken in the charmstore: https://code.launchpad.net/~bloodearnest/charms/precise/squid-reverseproxy/trunk/+merge/235429
<Odd_Bloke> rick_h_: I resorted to the GUI (which worked nicely). :p
<bloodearnest> Odd_Bloke: you sprinting?
<Odd_Bloke> bloodearnest: I am.
<Odd_Bloke> Doing OpenCorporates stuff. :)
<stub> Is there some charm cache the local provider uses? The wrong branch keeps getting deployed, despite what the symlinks in $JUJU_REPOSITORY say
<Spads> stub: revision number?
<stub> Nah, has to be stuck in a cache somewhere.
<stub> The charm that actually is being deployed has been renamed, so it shouldn't even be deployable (since the directory name != charm name)
<stub> And still stuck. The charm doesn't even exist in $JUJU_REPOSITORY now, and it still successfully installs.
<jcastro> hey marcoceppi, do you know what version of charm-tools is in the ubuntu-cloud archive?
<stub> Rebuilding JUJU_REPOSITORY cleared it. No idea where the errant symlink was, or if that was the case.
<stub> agent-state appears to stay in 'pending' throughout the install hook run now, juju 1.20.7.
<stub> oic, pending, installed, started
<marcoceppi> jcastro: an older version I suppose?
<fabrice> hi, how do I deploy a charm on utopic ?
<fabrice> in my juju I have uploading tools for series [precise trusty] so how do I get support for utopic ?
<marcoceppi> fabrice: utopic isn't released yet
<marcoceppi> fabrice: and we strongly recommend that people use LTS releases for deployment
<fabrice> marcoceppi: I know I want to test that juju-gui can work on utopic
<marcoceppi> that said, it is possible, what cloud provider are you using?
<fabrice> marcoceppi: just for test so local or AWS
<marcoceppi> fabrice: so I think you can do it for local, probably not AWS, I don't think utopic images are out yet
<marcoceppi> fabrice: are you on utopic atm?
<fabrice> marcoceppi: no trusty
<marcoceppi> fabrice: try bootstrapping local
<marcoceppi> then deploying the utopic charm
<fabrice> marcoceppi: I get that when I do a juju status after deploy
<fabrice> marcoceppi: agent-state-info: no matching tools available
<marcoceppi> fabrice: hum, I would say --upload-tools but you're not on utopic
<marcoceppi> but, try it anyways
<fabrice> marcoceppi: you mean juju --upload-tools utopic ?
<marcoceppi> fabrice: just juju bootstrap --upload-tools
<fabrice> marcoceppi: it doesn't change anything :(
<fabrice> marcoceppi: no luck with the upload tools
<marcoceppi> fabrice: spin up a utopic vm, then do --upload-tools after installing juju with local
<marcoceppi> I think --upload-tools only does it for the release you're on
<fabrice> marcoceppi: ok great I'll do that, thanks
<marcoceppi> fabrice: well, let me ask someone in core
<marcoceppi> I'm guessing at this point
<marcoceppi> natefinch: any way to get a utopic bootstrap going at this point?
<natefinch> marcoceppi: set default-series to utopic?
<marcoceppi> natefinch: doesn't work, gets a "no matching tools" error on deployment
<natefinch> hmm weird
<natefinch> sinzui: do we not have utopic tools?
<marcoceppi> natefinch: would utopic VM + --upload-tools be a way around this? (if no tools avail)
<natefinch> marcoceppi: yep
<marcoceppi> natefinch: cool, fabrice ^
<jamespage> tvansteenburgh, how would you feel about making the implicit_save in the config charm-helper default to False?
<fabrice> natefinch, marcoceppi : thanks I'll try that and come back to you
<jamespage> tvansteenburgh, it's completely playing havoc with our openstack charm unit testing
<sinzui> natefinch, yes, for 1.20.x and above
<natefinch> fabrice|family, marcoceppi: ^^   what version are you using?
<sinzui> natefinch, and we certified the utopic tools work...and we go further and deploy juju-ci3 with them
<natefinch> sinzui: my guess is they're using 1.18.x
<sinzui> natefinch, ah, 1.18.x doesn't support utopic 1.20.1 does
<tvansteenburgh> jamespage: i'm hesitant to change that now b/c people have come to expect it to save by default
<lazyPower> stub: ~/.juju/.deployer-store-cache maybe?
<sinzui> natefinch, sorry. I got confused by juju knowledge of ubuntu, and ubuntu knowledge of juju. Juju 1.18.4 knows about utopic
<sinzui> natefinch, that was a requirement to put 1.18.4 in utopic
<sinzui> natefinch, marcoceppi, fabrice|family , Is this about the local-deploy case? Juju only supports lts for local deploys *except* when the local host is utopic itself
<sinzui> natefinch, marcoceppi, fabrice|family , so utopic can do local deploys of utopic, trusty, and precise, an lts series (such as in vagrant images) can only upload tools for lts
<lazyPower> hazmat: do you want the source files for the video before I archive them?
<lazyPower> i've just syndicated the post/video everywhere i've got access to
<hazmat> lazyPower, no thanks.. but a link would be great
<lazyPower> http://blog.dasroot.net/juju-digital-ocean-awesome/
<hazmat> nevermind.. just saw the email
<hazmat> lazyPower, oddly enough.. i was just looking at the work for a native provider now that objectstorage is no longer a provider req
<lazyPower> Well, i call out that the provider is still considered beta - so YMMV, which is a fairly good disclaimer
<hazmat> cool
<lazyPower> but if the DO provider keeps growing like it is, we're on par for making that happen. also did you notice we are #1 in API integration?
<lazyPower> hi5 on that brochacho
<jamespage> tvansteenburgh, right now the hooks helper does not really expose any way to disable the implicit_save
<jamespage> tvansteenburgh, so without stupid patching in unit_tests it's going to be tricky to fix this up
<tvansteenburgh> jamespage: could make implicit_save a kwarg to hookenv.config(), and pass it through to the Config constructor
<natefinch> marcoceppi, jcastro: if you guys have time today, I'd like to talk about the charm-sync feature we're working on and specifically what it solves that charm-upgrade --force doesn't cover.
<marcoceppi> natefinch: sure, I've been meaning to ping perrito666 on that as well
<lazyPower> nice
<lazyPower> +1 on that natefinch
<lazyPower> well, the idea that you guys are going to leverage CH in upgrade-charm
<lazyPower> marcoceppi: when i push these big data bundles, do i want to put them in the ~charmer namespace, or bigdata-charmers namespace?
<marcoceppi> lazyPower: depends
<lazyPower> i'm all ears
<marcoceppi> lazyPower: well, do you want bigdata-chamers to maintain it?
<marcoceppi> oh
<marcoceppi> bundles
<marcoceppi> BUNDLESSSS
<marcoceppi> charmers
<lazyPower> ok. I thought so. Thanks :)
<marcoceppi> lazyPower: for the next screencast, put opacity at 100%
<marcoceppi> terminal over webpage is hard to read
<lazyPower> ack.
<lazyPower> i should have upped terminal sizing too i noticed post-facto
<marcoceppi> yeah, but fullscreen is easy enough to see the size
<marcoceppi> also, your battery is about to die and it's making me nervous ;)
<lazyPower> thats on my mouse
<lazyPower> my rechargeables always report < 30%
<marcoceppi> hah, good
<lazyPower> for whatever reason
<roadmr> mouse bat level is borked, mine always says 55% even with a fresh battery
<lazyPower> solaar doesn't give me an option to hide it from display either :(
<lazyPower> its very militant in that regard
<pdobrien> hey folks, got a quick question
<pdobrien> trying to bootstrap juju to a private openstack instance
<pdobrien> when trying to generate the metadata, I get "ERROR unrecognized command: juju metadata"
<pdobrien> this is on a mac, using juju from homebrew
<lazyPower> pdobrien: can you juju version for me?
<pdobrien> it's 1.20.1-mavericks-amd64
<lazyPower> thats 6 minor revisions behind, interesting.
<pdobrien> hm, I just installed it last week or the week before, but I see there's a new version
<lazyPower> i haven't tested our brew installs in a bit - but i'm not positive thats why you're seeing that issue
<pdobrien> getting the same error with 1.20.7
<lazyPower> pdobrien: 1 moment, let me fire up my mac
<aisrael> ^^ confirming, I see that as well
<lazyPower> aisrael: same version?
<lazyPower> i have juju metadata on ubuntu 1.20.7 - brew installing juju now
<aisrael> yep, 1.20.7-mavericks-amd64
<aisrael> juju metadata works inside vagrant, with 1.20.7-trusty-amd64
<jcastro> natefinch, I'm on the road but generally speaking, whatever marcoceppi says I tend to just blindly nod and agree with
<lazyPower> aisrael: confirmed what you see
<lazyPower> let me open a bug so we can track this further
<lazyPower> pdobrien: https://bugs.launchpad.net/juju-core/+bug/1372550
<mup> Bug #1372550: juju metadata missing from brew juju 1.20.7 <papercut> <juju-core:New> <https://launchpad.net/bugs/1372550>
<lazyPower> can you click " This bug affects me" so you get updates to the bug? i'll bring this to core
<pdobrien> will do, thanks!
<lazyPower> Hey! I got hackernews'd - go upboat please  https://news.ycombinator.com/newest
<lazyPower> errr https://news.ycombinator.com/item?id=8351651
<natefinch> lazyPower: upboated :)
<lazyPower> Ty ty :)
<sarnold> lazyPower: nice :)
<lazyPower> https://news.ycombinator.com/item?id=8351651 - i got hackernewsed
<lazyPower> can you upboat plz?
<lazyPower> oh
<lazyPower> wrong channel, haha
<natefinch> haha
 * lazyPower is jazzed
<lazyPower> i *never* get HN'd, so to see this reposted by someone other than me, oh man. i'm losing my marbles
<natefinch> the guy who upvoted it is evidently Community Director at DigitalOcean - https://news.ycombinator.com/user?id=beigeotter
<natefinch> marcoceppi: have time now to talk about charm sync?
<marcoceppi> natefinch: yes
<natefinch> marcoceppi: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=1
<natefinch> perrito666: you available?
<sebas5384> lazyPower: hey! o/
<lazyPower> hey sebas5384
<sebas5384> i'm going to do a training about Drupal, and I was thinking in using Juju
<sebas5384> and the dns charm comes to my head
<sebas5384> because the users have mac osx
<lazyPower> sebas5384: its still in the same state it was in
<lazyPower> just deploying a single bind host
<sebas5384> so i'm going to "vagrant up" the default image of the mac osx workflow
<sebas5384> lazyPower: ok, but for a locally use, with more than 1 Drupal charm, being related, could the charm be useful ?
<lazyPower> should be
<lazyPower> so long as you add the DNS charm IP as your nameserver
<sebas5384> hmm
<lazyPower> sebas5384: i haven't looked at it in a while though, so make sure you file any bugs if you run into papercuts
<sebas5384> thinking in how to do this for the mac osx scenario
<sebas5384> yeah, i'm going to test that now
<sebas5384> deploying two drupal charms, with diferent domains, and then trying to access those from the mac osx
<sebas5384> (host)
<sebas5384> :)
<sebas5384> i'll keep you informed then! thanks man!
<lazyPower> sebas5384: oh!
<lazyPower> sebas5384: it's tied to a single domain, you'll need 2 copies of the dns charm deployed in its current state
<sebas5384> hmmm
<sebas5384> so its a dns server for each domain ?
<lazyPower> yep
<lazyPower> take a look at the spec doc in the charm
<lazyPower> it gives a really good overview of whats planned, and whats implemented is reflected in the deployment guide
<sebas5384> can I ask why is that way?
<sebas5384> ahh ok
 * mbruzek created an unmaintained-charms group that we will use for charms that are broken.
<ayr-ton> I'm trying to remove a subordinate service that has "pending" as its status. But without success. The charm is cs:~marcoceppi/precise/zabbix-agent-1. juju destroy-unit zabbix-agent/0 does not have any effect.
<marcoceppi> ayr-ton: have you tried juju remove-relation ?
<ayr-ton> marcoceppi: ERROR relation "zabbix-agent:juju-info mysql:juju-info" not found
<marcoceppi> huh
<ayr-ton> I think its Satan.
<marcoceppi> well it is zabbix
<marcoceppi> that charm was never really finished
<ayr-ton> I will try more debugging. Just a sec.
<ayr-ton> marcoceppi: Is there some way to manually destroy the unit?
<sebas5384> https://juju.ubuntu.com/docs/config-vagrant.html -> this is updated ?
<ayr-ton> sebas5384: yep
<ayr-ton> marcoceppi: I figured it out.
<sebas5384> thanks ayr-ton
<ayr-ton> marcoceppi: Could I make a merge request for your charm?
<marcoceppi> ayr-ton: of course!
<marcoceppi> or you could just fork it and use it as a base
<ayr-ton> marcoceppi: ok. I will make some fixes and I will submit (:
<hazmat> lazyPower, just watched the digitalocean video.. thanks again.. two thoughts.. we should really distribute static binaries (add ppa / apt-get update / install vs. wget )... the plugin on bootstrap and add-machine takes a -v flag which will print progress
<hazmat> lazyPower, also juju docean list-machines shows machine details bypassing juju api just using do api.. (ditto for destroy-env --force though destructively in that case)
<hazmat> oh.. nm.. still watching ;-)
 * hazmat sees list-machines fly by
#juju 2014-09-23
<marcoceppi> hazmat: want someone to package that up?
<hazmat> marcoceppi, sure
<JoshStrobl> hey marcoceppi, you around?
<JoshStrobl> what are your thoughts on me changing the stuff like -changed to <name>-relation-changed in authors-charm-hooks.md, such as in the following section: https://juju.ubuntu.com/docs/authors-charm-hooks.html#<name>-relation-departed. I think doing that and putting it between tick marks would make those hooks stand out more when they are being referenced.
<lazyPower> JoshStrobl: sounds good to me
<JoshStrobl> lazyPower, https://github.com/juju/docs/pull/176
<lazyPower> haha, <3 evilnickveitch https://github.com/juju/docs/pull/172
<JoshStrobl> ah didn't notice he was going to land that code inconsistency fix :D
<JoshStrobl> actually
<JoshStrobl> nvm that was you .P
<JoshStrobl> I just did a resync of my fork today and was working with what I had, you gotta be more on the ball with getting these pull requests merged man! :P
<JoshStrobl> lazyPower, there will be some minor conflicts for your merge, should be easily resolvable though!
<lazyPower> JoshStrobl: yep, no big deal. I can pull and revise the PR
<lazyPower> this is one of the few quiet times during the day that I feel like i can be really productive, if i keep my head out of IRC long enough to do so :P
<lazyPower> yesterday i was too aflutter about my blog post landing on HackerNews to be really productive. that tanked my motivation to do anything other than watch stats and merge bundles.
<lazyPower> so today i've gotta make up for that lost productivity
<JoshStrobl> :D
<lazyPower> oye JoshStrobl, did you cat the latest Nu: podcast?
<JoshStrobl> lazyPower, yea I already listened to it :P
<lazyPower> niiiice
<lazyPower> man, you *are* on the ball
<JoshStrobl> I was listening to it while working on the docs actually
<lazyPower> This friday ~ 5pm EDT, i'll have another live show
<lazyPower> looking to do a 3 hour megamix repeat
<JoshStrobl> sweet
<lazyPower> so if you're around, ping me and i'll hit ya with some details
<JoshStrobl> I might jump in SL (it has seriously been a while) and go to the location when that happens
<JoshStrobl> 3hr megamix? awesome man.
<lazyPower> oh  i actually bailed on that medium
<lazyPower> i took the fans from there off the grid and changed format to the podcast for my regular shows
<lazyPower> the live mixes are an added bonus for glory
<lazyPower> and because nothing beats recovering from a botched transition on stream :P
<JoshStrobl> so where will the live show be if not that location? or does mixcloud support live streams?
 * JoshStrobl doesn't know.
<lazyPower> i run an icecast/shoutcast server
<JoshStrobl> ah cool
<lazyPower> getting ready to cut this gem after i add relay support and tests - https://code.launchpad.net/~lazypower/charms/trusty/shoutcast/trunk
<JoshStrobl> you deploy that with Juju?
<JoshStrobl> thought so :D
<lazyPower> thats the point of doing this on friday. Incentive to finish it up so i can scale out if we tank the shoutcast server
<JoshStrobl> sweet
<lazyPower> i plan on collecting metrics and blogging about this as well - i'm hoping to generate enough interest to talk about a real scale out scenario for a change vs theoretical
<JoshStrobl> maybe we could showcase the metrics and scaling so it is easier for people to really understand the benefit of scaling the service as well as making sure your charm's service is scalable.
<JoshStrobl> besides, everyone likes "real world" examples v.s. theoretical
<lazyPower> indeed!
<lazyPower> we'll need to flood the server tho, i'm talking > 50 listeners
<lazyPower> a single host running at 1gb of ram can easily transcode/stream to ~ 40/50 hosts before it starts dropping connections
<JoshStrobl> lazyPower, any way we can simulate that?
<lazyPower> actually. probably
<lazyPower> mpg123 the source multiple times from another VPS
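Something along these lines could approximate that load; the stream URL and listener count are placeholders:

    # spawn throwaway listeners against the stream from another VPS
    for i in $(seq 1 50); do
        mpg123 -q "http://stream.example.com:8000/stream" &
    done
    wait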
<gnuoy> I'm using juju from tip (1.21-alpha2) and when I try and do juju sync-tools with a new maas setup it complains that the environment is not bootstrapped. this leaves me in a bit of a chicken/egg situation as I can't bootstrap due to the lack of tools
<jamespage> tvansteenburgh, gnuoy: https://code.launchpad.net/~james-page/charm-helpers/disable-hookenv-config-save/+merge/235607
<marcoceppi> JoshStrobl: it already has <name> there but it's being rendered as HTML
<marcoceppi> I suggest to change it to [name] instead
<JoshStrobl> marcoceppi, I did a pull request where that file in particular just has references to particular relation hooks (like -changed) wrapped in "back ticks" so it is easily distinguishable.
<JoshStrobl> marcoceppi, pull request @ https://github.com/juju/docs/pull/176 already merged thanks to lazyPower
<lazyPower> grr
<lazyPower> i didn't think about that
<marcoceppi> JoshStrobl: that's cool, but currently the headers are <name>- and those are interpreted as HTML. That wasn't something you did, but we should avoid <var>-style keywords in the docs in general, now that I see this
<lazyPower> marcoceppi: making a fix branch and PR shortly
<marcoceppi> lazyPower: cool
<JoshStrobl> marcoceppi, well I didn't change it to <name>-relation-joined, just `-joined` so it shouldn't be an issue.
<JoshStrobl> I rendered and tested in htmldocs before doing a request and it was rendering properly
<JoshStrobl> https://github.com/juju/docs/pull/176/files#diff-5433c68091963235b67c8f260cd46394
<lazyPower> https://github.com/juju/docs/pull/177
<JoshStrobl> Ah I see what you're referring to.
<JoshStrobl> lazyPower, there are references in some files to particular relation hooks, like authors-hook-environment. with the change from < to [, it'll break those direct links. once your PR goes through, want me to change those?
<lazyPower> JoshStrobl: +1
<l6unchpad> I'm using charmhelpers in a charm and it appears not to be installed on a unit:
<l6unchpad> from charmhelpers.contrib import ansible
<l6unchpad> ImportError: No module named contrib
<JoshStrobl> cool, I'll do that after your PR (that way I don't need to deal with git stashing changes and such)
<l6unchpad> any suggestions to this possibly noob question?
<l6unchpad> actually scratch that.
<l6unchpad> > this: ./scripts/charm_helpers_sync.py -c charm-helpers.yaml
<marcoceppi> l6unchpad: yeah, you've got to embed them or pip install them
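For the embed route, the sync script l6unchpad mentions is driven by a small YAML file; a sketch, where the destination path and include list are assumptions for this particular charm:

    # charm-helpers.yaml (keys per the charm-helpers sync tool):
    #   destination: hooks/charmhelpers
    #   branch: lp:charm-helpers
    #   include:
    #     - core
    #     - contrib.ansible
    ./scripts/charm_helpers_sync.py -c charm-helpers.yaml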
<lazyPower> marcoceppi: https://github.com/juju/docs/pull/177
<lazyPower> <3 ty
<lazyPower> JoshStrobl: ready for PR
<JoshStrobl> lazyPower, cool, I'll update my fork and do the change
<JoshStrobl> https://github.com/juju/docs/pull/178
<JoshStrobl> lazyPower: bug report - "juju deploy happyness" should be "juju deploy happiness" (ref: your blog)
<lazyPower> >.>
<lazyPower> <.<
<lazyPower> pushed & closed
<JoshStrobl> :P
<lazyPower> thanks for pointing that out :)
<JoshStrobl> lazyPower, no problem. that digitalocean plugin is awesome. that said, you should get DO support built into juju core.
<lazyPower> there's a bug for that
<JoshStrobl> :O
<lazyPower> JoshStrobl: https://bugs.launchpad.net/juju-core/+bug/1372543
<mup> Bug #1372543: add a digital ocean provider <digital-ocean> <feature> <juju-core:Triaged> <https://launchpad.net/bugs/1372543>
<JoshStrobl> marked as affected
<lazyPower> JoseeAntonioR: ping
<JoshStrobl> I'll keep track of the bug, once it is merged I'll go ahead and write a doc up on DO provider
<lazyPower> Already incoming today based on my blog post with the plugin as our BETA mention of DO
<lazyPower> once the provider lands, we'll just strip the plugin install  and command routing and profit.
<JoshStrobl> ah cool!
<lazyPower> i'll be working on that + getting it merged post lunch
<gnuoy> I'm trying to run sync-tools with the maas provider and am getting "environment is not bootstrapped". Given I can't bootstrap without running the key sync I'm stuck. Any ideas? (I'm using 1.21-alpha1-trusty-amd64)
<gnuoy> s/key sync/tool sync/
<lazyPower> gnuoy: are you running juju bootstrap --sync-tools? (sorry to be daft, just trying to be thorough)
<lazyPower> (replies may be latent, i'm making waffles)
<gnuoy> lazyPower, I'm the one that's probably being daft. But to answer your question, no. I'm trying to sync tools first as a standalone command as I have some hand rolled tools to upload
<gnuoy> http://paste.ubuntu.com/8410426/
<tvansteenburgh> jamespage: merged, thank you!
<lazyPower> try bootstrapping, then uploading the tools (unless there is an implicit reason for doing it in reverse order)
<gnuoy> lazyPower, bootstrap fails because there are no tools
<lazyPower> gnuoy: even when you pass --upload-tools?
<gnuoy> hmm, let me give that a go.
<gnuoy> natefinch, hi, I'm trying to upload windows juju tools in a maas environment but it's failing. Do you have any time for me to quiz you about it ?
<natefinch> gnuoy: sure
<gnuoy> natefinch, I'm using maas 1.7 and juju 1.21-alpha1. If I try and upload the locally built juju.exe then juju complains the env is not bootstrapped. But I can't bootstrap without the new tools
<natefinch> gnuoy: you should be able to do juju bootstrap --upload-tools
<gnuoy> natefinch, but I need to tell juju to pick up the tools from a local directory which is an option to sync-tools but not to upload-tools
<gnuoy> natefinch, is this a change in behaviour? A lot of documentation talks about doing sync-tools before bootstrapping
<natefinch> gnuoy: upload tools will automatically look for local tools... just make sure you have a jujud.exe next to your juju client application, and it'll pick those
<natefinch> gnuoy: sync tools is really only for environments that don't have access to the internet.  You bootstrap with --upload-tools and then use sync-tools to grab the various tools from the internet and upload them into your environment
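Roughly the flow being described, with the local tools directory as a placeholder (check `juju help sync-tools` for the exact flag names):

    juju bootstrap --upload-tools              # build and upload tools from the local client
    juju sync-tools                            # later: mirror published tools into the environment
    juju sync-tools --source /path/to/tools    # or pull them from a local directory when offline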
<gnuoy> natefinch, fair enough. fwiw I've seen sync-tools first in a few places, like https://maas.ubuntu.com/docs/juju-quick-start.html
<mgz> as a non-dev user, trying to not use --upload-tools seems correct
<mgz> natefinch: is the simplestreams tooling not up to doing windows yet?
<natefinch> mgz: yes, definitely
<natefinch> mgz: I'm not sure
<natefinch> mgz: doesn't look like it
<mgz> so I guess it's --upload-tools or nowt for now then
<gnuoy> natefinch, it is still complaining about not being able to find tools, http://paste.ubuntu.com/8410715/
<natefinch> mgz: https://maas.ubuntu.com/docs/juju-quick-start.html#now-juju     Is it me or are these docs crazy?
<gnuoy> I may have not stumbled across the correct name for the tools file yet
<natefinch> mgz: it talks about doing sync tools before bootstrap, but sync-tools requires an environment
<JoshStrobl> natefinch, I wasn't even aware that there was that documentation. Any reason for it given there is config-maas as juju.ubuntu.com/docs/ ?
<JoshStrobl> *at juju.....
<mattyw> lazyPower, ping?
<lazyPower> mattyw: pong
<mattyw> lazyPower, hey there, I might have some time later this week to take a look at the mongo charm auth stuff. I took a look at the weekend and wanted to get some unit tests going but was struggling to get what's there running
<mattyw> lazyPower, would you be available to pair for an hour later this week or next?
<lazyPower> Sure thing. I need to clean up the tests directory. There are a bunch of leftover old tests
<lazyPower> I was going to strike whats in the tests directory save for the amulet based tests, and work on fixing/extending those. as they are failing in CI
<lazyPower> and go back and get test coverage around the hooks as a secondary stretch goal
<gnuoy> natefinch, sorry, I'm still not clear where the jujud.exe needs to be and what it needs to be named to get --upload-tools to pick it up
<natefinch> gnuoy: I think having it in the same directory where you are running juju is all that should be required
<gnuoy> natefinch, what should it be called ? just jujud.exe ?
<natefinch> gnuoy: correct
<ayr-ton> When writing charms, do you guys prefer to use shell script or puppet dsl with standalone puppet?
<ayr-ton> I'm looking into this: http://www.slideshare.net/lynxmanuk/juju-puppet-puppetconf-2011
<gnuoy> natefinch, doesn't seem to have done the trick http://paste.ubuntu.com/8410809/
<natefinch> gnuoy: oh hmm... well, so you can't bootstrap onto windows... only ubuntu.  Windows only works for deploying new units
<natefinch> gnuoy: so you need the ubuntu tools for bootstrap
<gnuoy> natefinch, ahhh, the plot thickens. So once you've bootstrapped you do add-machine to get a windows machine and then do deploy --to ?
<jcastro> is any charmer available for an on the spot review?
<jcastro> I need to get this bundle fixed asap
<natefinch> gnuoy: you can just do juju deploy <windows-charm> and it'll pick windows because the charm should be labelled as windows
<gnuoy> natefinch, oh, cool. thanks
<aisrael> jcastro: I'm around, if you can't find someone else
<jamespage> gnuoy, i'm reworking the hacluster charm like we discussed - its quite exciting not watching it rip everything apart on the slightest change of data!
<jamespage> gnuoy, lp:~james-page/charms/trusty/hacluster/mix-fixes
<gnuoy> \o/
<pdobrien> hey folks, having an issue
<pdobrien> I got my private Openstack cloud bootstrapped yesterday by using an ubuntu image to do it (since the Mac code has a bug)
<pdobrien> got it up and running and deployed the GUI, everything working fine
<pdobrien> however, when I try to deploy something else (mysql for example), the instances aren't starting
<pdobrien> the error in juju status is: agent-state-info: index file has no data for cloud {RegionOne https://server.com:8770/v2.0/}
<pdobrien> I think I need to provide the metadata so that it knows which image to use, etc. because I had to do that to bootstrap, but I'm not sure how to do it for charms
<james_w> pdobrien: you followed https://juju.ubuntu.com/docs/howto-privatecloud.html?
<ayr-ton> About the roadmap, if I want to move a service to a different machine or environment, will it be possible?
<pdobrien> james_w: I did not see that page, I followed a different guide.  I will check it out, thanks!
<james_w> pdobrien: did you set up simplestreams for your cloud?
<pdobrien> james_w: I don't think so... I'm looking at that now.
<james_w> pdobrien: ok
<pdobrien> james_w: I did do the section under "deploying private clouds" where I generated the metadata, etc.  I was able to bootstrap successfully.
<james_w> pdobrien: where did you upload the metadata?
<pdobrien> james_w: does "juju bootstrap --metadata-source . --upload-tools -v" upload the metadata?
<pdobrien> james_w: I see lots of stuff in the object store
<james_w> pdobrien: I don't know
<james_w> pdobrien: that takes care of the tools
<james_w> but it's the image metadata you are having a problem with
<james_w> pdobrien: you will have control-bucket set in the juju env definition
<james_w> if you list the contents of that in the object store
<james_w> then you will hopefully find streams/v1/index.json
<pdobrien> james_w: yes, that is there.
<pdobrien> james_w: I see the problem... the charms I am using want a precise image, I configured a trusty image for bootstrap
<james_w> pdobrien: do the contents have something like http://pastebin.ubuntu.com/8411413/ ?
<pdobrien> james_w: so I just need to create metadata for a precise image, I think
<james_w> pdobrien: ah, ok
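A sketch of regenerating image metadata for precise; the image id is a placeholder, the region and endpoint come from the earlier error message, and the exact flags should be checked against `juju metadata generate-image --help`:

    juju metadata generate-image -d ~/simplestreams -s precise \
        -i <precise-image-id> -r RegionOne -u https://server.com:8770/v2.0/
    # then upload the generated streams files to the control-bucket under images/,
    # or re-bootstrap with --metadata-source ~/simplestreams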
<arosales> marcoceppi: lazyPower, mbruzek: do you guys know if the revision file is being used in 1.18?
<mbruzek> arosales: I do not believe it is used in that version either
<lazyPower> i dont have a definitive answer. But I'm fairly certain that if one is not present, juju will create one on deploy for you.
<lazyPower> so it shoudl be safe to disregard it
<JoshStrobl> I recall mbruzek telling me just to remove it from my charm (he sent it in a review email a while back) since it wasn't used. that correct mbruzek?
<arosales> JoshStrobl: correctly pointed out that it is still being referenced in https://juju.ubuntu.com/docs/authors-charm-components.html
<arosales> if 1.18 isn't using it, I suggest we just remove that linle
<arosales> *line
<mbruzek> JoshStrobl: I did tell you to remove it.  My understanding is we have not used revision file in QUITE some time.
<JoshStrobl> Yea, since it currently just says "revision is now deprecated", I just figured we should just have it not in the doc to begin with.
<arosales> JoshStrobl: sounds like it is a +1 for removal, nice find
<JoshStrobl> I'll edit it and do a PR then
<pdobrien> james_w: (or anyone) I have uploaded updated image metadata with info about where to find the precise image.  Is there a way to restart the deployment process for the charms I have already deployed, or should I remove them and start over?
<JoshStrobl> https://github.com/juju/docs/pull/179
<JoshStrobl> Hey mbruzek good work on the unmaintained charm workflow doc!
<mbruzek> thank you
<themonk> lazyPower, hi
<lazyPower> greetings themonk
<themonk> how are you
<themonk> how are you?
<lazyPower> I'm well, and yourself?
<themonk> i am fine :)
<lazyPower> Brilliant :)
<themonk> i wrote 2 charm
<themonk> i want it to be reviewed and listed in charmstore
<themonk> but i found it very confusing
<lazyPower> Are you looking for *recommended* status? or are you looking to just get moving quickly by having it in the store?
<themonk> in official doc
<pdobrien> james_w: (or anyone) I have uploaded the updated image metadata to the object store, but juju metadata validate-images still fails.
<JoshStrobl> themonk, are you having issues publishing your charm to your Launchpad account?
<JoshStrobl> themonk, or submitting it to be reviewed?
<themonk> lazyPower, just want to quickly list it in the store
<themonk> for now
<JoshStrobl> You can get it listed in the store under your own namespace by following the documentation at https://juju.ubuntu.com/docs/authors-charm-store.html
<lazyPower> themonk: follow the namespace charms listing: https://juju.ubuntu.com/docs/authors-charm-store.html#name-space-charms
<lazyPower> themonk: however - if you could expand on what was confusing you - it would help us to clean up that document a bit and hopefully help lead other users down the correct path.
<JoshStrobl> themonk, have you used the bzr tool to initialize your local repository, add the files, commit them, etc.
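For reference, the personal-namespace workflow those docs walk through looks roughly like this (Launchpad user, series, and charm name are placeholders):

    cd my-charm
    bzr init && bzr add . && bzr commit -m "initial version of the charm"
    bzr push lp:~<launchpad-user>/charms/precise/my-charm/trunk
    # once the store has scanned the branch it can be deployed as:
    juju deploy cs:~<launchpad-user>/precise/my-charm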
<JoshStrobl> exactly
<themonk> lazyPower, ok, can i later do it "Are you looking for *recommended* status?"
<JoshStrobl> themonk, you can submit your charm to be reviewed whenever you decide to. until then it will reside in your personal namespace, but will still be accessible in the store.
<JoshStrobl> themonk, Of course we recommend having it be reviewed, it gets you a recommended status and helps ensure it is reliable, high quality, and properly deploys with Juju :)
<themonk> ok, doing it now, thanks
<themonk> :)
<JoshStrobl> themonk, if you have any questions regarding submitting it, even to your own namespace, feel free to reach out! That said, as lazyPower mentioned, we would love feedback on what you found confusing in the documentation so we can improve it!
<themonk> JoshStrobl, ok :) i will
<themonk> soon
<james_w> pdobrien: still the same error?
<pdobrien> james_w: yes... index file has no data for cloud
<james_w> pdobrien: where did you upload the metadata this time?
<pdobrien> james_w: it doesn't look like it's looking at cloud storage for the metadata, it says source: default cloud images
<pdobrien> james_w: it's in the cloud storage, images/streams/v1/index.json and com.ubuntu.cloud:released:imagemetadata.json
<pdobrien> james_w: 'juju metadata validate-tools' successfully finds the tools metadata in cloud storage
<james_w> pdobrien: in the control-bucket?
<pdobrien> james_w: yes, same place as the tools metadata
<james_w> pdobrien: juju set-environment logging-config="<root>=DEBUG;juju=TRACE"
<james_w> that will turn on debug logging
<james_w> you can then try again
<james_w> and look in the log of the bootstrap node for more info
<pdobrien> james_w: should I be looking in /var/log/juju/machine-0.log?
<james_w> pdobrien: I'm not exactly sure
<james_w> looking for things like DEBUG juju.environs.simplestreams simplestreams.go:388 fetchData failed for "http://<ip>/v1/AUTH_aaaaaaaaa/simplestreams/data/streams/v1/mirrors.json": cannot find URL "http://<ip>/v1/AUTH_aaaaaaaaaaaaaaaaa/simplestreams/data/streams/v1/mirrors.json" not found
<james_w> or similar
<pdobrien> james_w: I don't see anything like that.  nothing happens when I validate images
<pdobrien> I don't think it's even looking
<james_w> pdobrien: oh
<james_w> sorry
<james_w> not validate-images
<james_w> that's all on the client
<james_w> deploy or something
<pdobrien> ah ok
<pdobrien> james_w: it's finding the index.json for images
<james_w> pdobrien: interesting
<james_w> can you pastebin the contents of the index.json?
<james_w> does it have a cloud with the region/endpoint it is complaining about in the error message?
<pdobrien> james_w: sure
<pdobrien> james_w: http://pastebin.com/3YHYdpUu
<james_w> pdobrien: ok, that looks reasonable
<james_w> pdobrien: I assume the url matches the one in the error message?
<pdobrien> james_w: yes
<james_w> pdobrien: could you paste the relevant section of the debug log file please?
<pdobrien> james_w: log: http://pastebin.com/1CDByX0D
<james_w> hmm
<james_w> pdobrien: ok
<jamespage> gnuoy, for tomorrow - https://code.launchpad.net/~james-page/charm-helpers/multiple-https-networks/+merge/235676
<james_w> pdobrien: I don't know, sorry
<james_w> pdobrien: I can't see what is wrong
<pdobrien> james_w: no worries, thanks for taking a look!  I'll put up a post on the mailing list
<james_w> pdobrien: you could try filing a bug with that last pastebin, the commands you ran, and the juju status output
<james_w> yeah
<james_w> an expert can probably spot what is going on
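For context, the usual workflow for custom image metadata is sketched below; the image id and target directory are placeholders, the exact flags vary by juju version (check `juju metadata generate-image --help`), and validate-images only checks that metadata matching the current environment can be found:

    juju metadata generate-image -i <image-id> -s trusty -d ~/simplestreams
    # after uploading the generated images/streams/v1/* files to the control bucket:
    juju metadata validate-images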
<JoseeAntonioR> mbruzek: hey, is your latest merge a fix for bug 1372996?
<mup> Bug #1372996: make 00-setup executable <audit> <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1372996>
<mbruzek> JoseeAntonioR: Not yet, I am on a call at the moment
<JoseeAntonioR> ok, I'm going for it
<kwmonroe> lazyPower: i'm having a doozie of a time bringing up the juju-gui on a DO tiny node (512mb).  i saw you deploy this in your blog post yesterday, but didn't see you actually connect.  did you?
<lazyPower> yep
<lazyPower> not in the demo, but it works
<lazyPower> can you show me your history of how you deployed it?
<kwmonroe> i'm staring at this spinning circle "connecting to the juju env" and it's possibly the slowest page on the internet.
<kwmonroe> wondering if a 1gb DO would do better.. or if somebody is up in my tubes.
<lazyPower> it would be better on a 1gb do
<lazyPower> you're underpowering your state server by colo'ing your gui on it
<kwmonroe> ack.. fwiw, juju docean bootstrap --constraints="mem=512M, region=nyc3" && juju deploy juju-gui --to 0
<hatch> kwmonroe: that should be fine
<hatch> when you refresh the page does it hang still?
<lazyPower> you don't need to specify the mem if you're going for tinys
<lazyPower> it defaults to tiny
<kwmonroe> refreshing doesn't seem to help hatch - https://104.131.29.26/
<kwmonroe> sometimes it seems like it's progressing (bg turns dark instead of grey).  other times i watch the spinner.
<hatch> looking
<hatch> kwmonroe: I can get to the login screen
<hatch> right away actually
<kwmonroe> hmph
<hatch> are you behind something which blocks websockets?
<hatch> secure websockets specifically
<lazyPower> kwmonroe: pull it up in firefox
<kwmonroe> i dunno hatch - i don't think so
<lazyPower> i had this issue once before with chrome -dev
<lazyPower> but it was intermittent
<JoshStrobl> I pulled it up fine in Chrome.
<kwmonroe> yeah, i did take the chrome update today but haven't restarted it
<hatch> kwmonroe: ok what browser are you in? I'd like to do a little debugging to try and find the problem
<kwmonroe> hatch: i was in chrome, but hadn't restarted the browser since an update this morning.  derp.  after restart, it's working in chromium 37
<kwmonroe> sorry for the noise - thx lazyPower and hatch!
<hatch> kwmonroe: haha np :) glad it's working
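For what it's worth, lazyPower's suggestion of a 1GB droplet amounts to something like the sketch below (assuming the juju-docean plugin invocation kwmonroe quoted above; the region is just the one he used):

    juju docean bootstrap --constraints="mem=1G, region=nyc3"
    juju deploy juju-gui --to 0
    juju expose juju-gui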
<hatch> lazyPower: bic2k is having issues with the osx brew recipe - are there any known issues with it?
<lazyPower> hatch: a user showed up this morning and introduced a new issue about missing command(s)
<lazyPower> hatch: https://bugs.launchpad.net/juju-core/+bug/1372550
<mup> Bug #1372550: juju metadata missing from brew juju 1.20.7 <feature> <metadata> <osx> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1372550>
<hatch> ahh I think his was charmtools related
<bic2k> lazyPower: Ya, mine is that the osx brew formula fails out with some python-related issue. Figured it would work since it's mentioned in the charm docs.
<lazyPower> bic2k: i brew installed juju this morning
<lazyPower> it was confirmed working ~ 9am EDT
<bic2k> lazyPower: `juju` is installed just fine `charm-tools` does not install
<lazyPower> bic2k: stacktrace for me?
<aisrael> Is there a preferred/recommended templating module for charming in python?
<thumper> aisrael: I think there is something in the charm helpers...
<thumper> but I don't recall exactly where
<thumper> marcoceppi: ?
<aisrael> Yeah, that's a good point. I can look at the templating there as an example.
<aisrael> Thanks!
<aisrael> Looks like it's using Cheetah
#juju 2014-09-24
<mwenning> lazyPower, you still out there?
<mwenning> ok, catch you in the morning
<marcoceppi> aisrael: yup, cheetah, but do whatever you want
<marcoceppi> you like Jinja? use that
<mwenning> good night!
<aisrael> marcoceppi: ack. Either one will be new to me, but I'll take a poke around and see which one works the best for me.
<gnuoy> I'm trying to do juju sync-tools but I'm getting this error http://paste.ubuntu.com/8417850/ . Does that look like a corrupt entry in the tools already uploaded ? And if so how do I see what tools the maas environment already has ?
<gnuoy> juju help sync-tools
<gnuoy> mgz, sorry to ping you directly but do you know where maas stores its copy of juju-tools ?
<marcoceppi> gnuoy: doesn't it go in the blobstore in postgresql that maas exposes?
<gnuoy> marcoceppi, that sounds like a distinct possibility, I'll have a look at the database
<mgz> gnuoy: that is indeed the case, you can also use the maas cli to inspect
<gnuoy> mgz, excellent. I think I'm hitting a bug tbh but the maas cli should help me confirm
<gnuoy> mgz, do you know what the maas cli command is ?
<gnuoy> mgz, I see files list but no sign of tools http://paste.ubuntu.com/8418226/
<lazyPower> sebas5384: i just saw you accepted the invite. o/ how many will be attending?
<sebas5384> hey lazyPower, don't know yet
<lazyPower> Okie dokie
<sebas5384> but I think at least two people
<sebas5384> i'm going to the hangout :)
<sebas5384> lazyPower: https://plus.google.com/hangouts/_/canonical.com/drupal-aumlet here?
<lazyPower> yep
<lazyPower> i'm already there
<sebas5384> that's weird
<sebas5384> ohhh hangout trolling our life again
<lazyPower> having an issue joining?
<gnuoy> natefinch, sorry to bother you again but does this error http://paste.ubuntu.com/8418336/ with sync'ing tools ring any bells with you ?
<natefinch> gnuoy: no bother, it's specifically my job to help ;)
<gnuoy> \o/
<natefinch> gnuoy: this part looks suspicious: invalid binary version "1.21-alpha1.1--amd64"
<gnuoy> agreed
<natefinch> like, why --?
<gnuoy> natefinch, my guess is a bug in trying to decode the version data embedded in the tools filename
<natefinch> ahh, yeah, I think I get it... it's not --   it's -<series>-  ... except series is empty for some reason
<natefinch> so, something is failing to parse the series from win2012r2
<natefinch> or, it's failing in the process somewhere, since we're not seeing an actual error until it looks for the binary
<jamespage> gnuoy, I think we have some of our joined hook conditional processing based on being leader wrong
<natefinch> gnuoy: sorry, in a meeting right now, but I can help later today.  in #juju-dev is gsamafira who wrote a lot of the windows support code, and he might be able to help, too.  I pinged him there, but seems like he's AFK right now.
<natefinch> gnuoy: what timezone are you in?
<gnuoy> uk
<gnuoy> utc+1
<natefinch> gnuoy: ok, I'm us-east, UTC-4 so, will try to get back before your EOD
<natefinch> ahh, there's gsamfira.  gnuoy is having a problem deploying a windows charm: http://paste.ubuntu.com/8418336/
<natefinch> notably: agent-state-info: invalid binary version "1.21-alpha1.1--amd64"    where -- is -<series>-   ... so series is getting set to an empty string at some point
<gsamfira> gnuoy: if you look on the state machine, do you see any errors in the logs
<gsamfira> something like "could not find tools for"
<gnuoy> otp will look in a sec
<gsamfira> also, how are you uploading the tools?
<gsamfira> nvm
<gsamfira> sync-tools
<gnuoy> gsamfira, with sync-tools (I included the sync-tools command in the pastebin)
<gsamfira> gnuoy: can you try the latest master of juju? You seem to be using an older version (it should be alpha2). Also, please provide the logs of your state machine. Also, please use hyper-v server 2012 R2 instead of windows server 2012 R2. You do not need windows server just for nova-hyperv. Hyper-V server 2012 installs quicker, and does not require a license.
<gnuoy> gsamfira, I think the statemachine may contain the smoking gun
<gnuoy> gsamfira, http://paste.ubuntu.com/8418508/
<stokachu> marcoceppi, the test runner for charms does that basically juju deploy a charm in lxc and run whatever is in tests/*.test?
<gsamfira> gnuoy: looks like the tools finder might be the problem. I have not touched that in a long time, but someone else might have. Are you using the latest trunk?
<stokachu> marcoceppi, also does manage.jujucharms.com interpret TAP output?
<gsamfira> let me do a quick test with the latest master
<rick_h_> stokachu: no, it doesn't.
<stokachu> rick_h_, how does the testing show the report? just pass/fail if the test returns error code?
<gnuoy> gsamfira, so I need to: upload a Hyper-V server 2012 image to maas and switch to using juju 1.21-alpha2. I built jujud.exe on a windows server 2012 R2 server - can I still use that binary or do I need to rebuild with the hyperv edition (I'm guessing I don't need to)?
<rick_h_> stokachu: yes, it used to be that manage would check the test results in jenkins. With the new test suite I don't believe they're hooked up at the moment.
<gsamfira> you can use the same binary
<gsamfira> gnuoy ^
<gnuoy> gsamfira, thanks
<rick_h_> stokachu: it's something that has to be worked out as the testing tools get closer to general consumption
<rick_h_> stokachu: so the TAP note is interesting
<gsamfira> the thing is, that windows server 2012 R2 does not have the hyper-v role enabled by default AFAIK
<gsamfira> gnuoy are you using fastpath for windows? :)
<rick_h_> stokachu: but would have to work through jenkins and the test runner it looks like. Not sure if it's on their radar atm.
<stokachu> rick_h_, yea, i think having TAP would be beneficial as people could write tests in whatever language they want
<stokachu> rick_h_, so the test runner does jenkins deploy the charm and then run the tests?
<rick_h_> stokachu: I believe so. It handles the bootstrap/etc and then the tests run against the deployment.
<stokachu> rick_h_, ok cool, thanks for the response
<rick_h_> stokachu: np, thanks for the TAP earworm, something to think about.
<stokachu> rick_h_, np, there isn't a place to file enhancement requests for the manage.jujucharms.com app is there?
<rick_h_> stokachu: well, it's not really manage. It's really just the consumer. I'm looking to see if the testing stuff is up somewhere a bug can be filed. tvansteenburgh have the link handy?
<stokachu> ah ok
 * tvansteenburgh reads scrollback
<rick_h_> tvansteenburgh: is there a place to file bugs against the charm testing work?
 * rick_h_ is looking through GH and LP for something that looks like 'charm testing'
<gnuoy> gsamfira does editing the node in maas to set the os and release set the installer correctly, or is there another step there?
<gsamfira> you only set the OS in MaaS if you plan on deploying it via MaaS. Juju takes care of requesting the proper OS from MaaS
<gsamfira> so no need to worry about that
<gsamfira> gnuoy: the only thing to worry about when deploying windows server 2012, is that you have the KMS keys set up so it won't prompt for a serial key when installing
<gnuoy> kk
<gnuoy> gsamfira, but thats not a problem when using the hyperv image, right ?
<gsamfira> gnuoy: right
<gnuoy> excellent
<tvansteenburgh> rick_h_, stokachu: https://github.com/tvansteenburgh/charmguardian
<tvansteenburgh> that's what actually kicks off a test in jenkins
<gsamfira> Hyper-v server does not require a key. Its free to use, unlimited, bla bla bla
<tvansteenburgh> rick_h_, stokachu: feel free to file stuff there, i'll move it if necessary
<rick_h_> tvansteenburgh: ty much
<tvansteenburgh> that repo will move under juju-solutions GH namespace soon, just haven't gotten around to it yet
<stokachu> tvansteenburgh, cool, so what causes a test to fail - the return code of each test?
<tvansteenburgh> stokachu: correct
<stokachu> tvansteenburgh, and tests are run like ./tests/00-basic.test individually?
<stokachu> or does it run like nosetests or something
<tvansteenburgh> stokachu: tests can be anything executable in charmdir/tests/
<stokachu> ok cool
<tvansteenburgh> it will also run `make lint` and `make test` targets if they exist
<tvansteenburgh> stokachu:  ^
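To make the test-runner contract concrete: anything executable under the charm's tests/ directory is run, and a non-zero exit status fails that test; `make lint` and `make test` are run as well if the targets exist. A minimal sketch (the charm name is a placeholder):

    #!/bin/bash
    # tests/00-basic.test -- illustrative only; the file must be executable (chmod +x)
    set -e
    juju deploy mycharm
    juju expose mycharm
    # any command exiting non-zero from here on marks the test as failed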
<gsamfira> gnuoy: http://paste.ubuntu.com/8418625/ <-- latest trunk
<stokachu> tvansteenburgh, cool thats good to know, the lint tool could it be a custom one?
<tvansteenburgh> stokachu: yeah, whatever you want
<stokachu> thats pimp
<gnuoy> gsamfira, fantastic, let me give that a try
<stokachu> so the frontend just parses this json  output then?
<tvansteenburgh> stokachu: yep, that's right http://reports.vapour.ws/charm-tests-by-charm
<stokachu> tvansteenburgh, and also what about lint output ?
<tvansteenburgh> stokachu: all output is captured and included in the test report details
<stokachu> tvansteenburgh, ok cool so long as my output is readable and comprehensible that would be ok?
<tvansteenburgh> stokachu: e.g. lint output: http://reports.vapour.ws/charm-tests/charm-bundle-test-942-results/charm/charm-testing-hp/1
<tvansteenburgh> stokachu: yep
<gsamfira> gnuoy: if you like, you can get the binaries from: http://gaby.rohost.com/tools/
<stokachu> sweet
<stokachu> tvansteenburgh, so with this setup would it even make sense to support TAP output?
<tvansteenburgh> stokachu: sorry i don't know what TAP is?
<stokachu> tvansteenburgh, http://testanything.org/
 * tvansteenburgh looks
<stokachu> tvansteenburgh, i use it a lot with my cpan modules
<stokachu> but it works across several languages
<tvansteenburgh> stokachu: interesting. patches welcome :)
<gsamfira> gnuoy: or even set your tools-metadata-url: to that value in environments.yaml
<stokachu> tvansteenburgh, cool, i can work on that if you aren't opposed to it
<tvansteenburgh> stokachu: i'm not opposed, i'm just fairly certain that it won't be something i have time for in the near future
<gnuoy> gsamfira,  I'll give the binary I built a try but if that fails I'll definitely grab those, thanks
<stokachu> tvansteenburgh, no worries i can take a stab at it and open a PR
<gsamfira> gnuoy: my pleasure
<sebas5384> thanks! lazyPower++
<sebas5384> :)
<lazyPower> sebas5384: my pleasure. Looking forward to the follow up where we can sink our teeth into your use cases
<sebas5384> lazyPower: sounds great!
<lazyPower> sebas5384: i'll schedule out a 2 hour block next time as well. Get me the list of names so we can get everyone together at the same time instead of a staggered start.
<sebas5384> yeah definitely, and now that the team got all excited about it, we are going to have more people contributing to it
<sebas5384> :)
<sebas5384> lazyPower: when you can, send us the link of the recorded hangout, there are people that want to see it already hehe
<lazyPower> sebas5384: sure, let me export it, and edit off the extra 5 minutes i just recorded
 * lazyPower forgot i was recording
<sebas5384> hehe
<lazyPower> sebas5384: exporting now and i'll upload it unlisted to youtube for your team. incoming link shortly.
<sebas5384> lazyPower: thanks :)
<whit> nice definition of "Da Cloud": https://wiki.mozilla.org/CloudServices#Get_Involved
<aisrael> I think a charmer needs to look at this and change the status of this MP (disapproved): https://code.launchpad.net/~jaywink/charms/trusty/postgresql/swiftwal_missing_functionality/+merge/235394
<themonk> JoshStrobl, hi
<themonk> i am facing a strange problem
<themonk> i set relation data in the joined hook in charm 1
<themonk> and try to get that data in charm 2 in the changed hook, but i'm getting a None type value
<themonk> my other charms are working ok
<themonk> can anyone give me any hint?
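For reference, the usual shape of that exchange is below (a hedged sketch; the relation name, hook file names and key are illustrative). The remote unit's data is often simply not published yet the first time -changed fires, which shows up as an empty/None value, so hooks normally guard for it:

    # charm 1: hooks/website-relation-joined
    #!/bin/bash
    relation-set hostname="$(unit-get private-address)"

    # charm 2: hooks/website-relation-changed
    #!/bin/bash
    host="$(relation-get hostname)"
    if [ -z "$host" ]; then
        exit 0   # data not published yet; a later -changed run will see it
    fi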
<stokachu> tvansteenburgh, is make lint run in the deployed charm or just in the checked out repo directory
<stokachu> or should i just have a target to make sure required dependencies are installed for a custom lint program to run
<tvansteenburgh> stokachu: the latter
<stokachu> tvansteenburgh, ok cool, thanks again
<tvansteenburgh> np
<tvansteenburgh> cd /tmp
 * JoshStrobl was afk
<rcj> Can I set constraints in environments.yaml?  Specifically instance-type for EC2.
<Tug> rcj: You don't set instance-type in environment config
<Tug> Here is the list of available params: http://juju-docs.readthedocs.org/en/latest/provider-configuration-ec2.html
<Tug> you can set machine constraints when you deploy a service
<Tug> https://juju.ubuntu.com/docs/reference-constraints.html
<Tug> Some of my instances are scheduled for reboot by aws. I'm wondering is it something juju can handle ?
<Tug> Like did you think about restarting all the units on reboot?
<rcj> Tug, I want to avoid 'juju bootstrap --constraints="instance-type=m3.medium"' by having a setting in environments.yaml if it were available.
<Tug> rcj, yeah but as you see, it has been deprecated
<Tug> not sure it's working today
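To recap what Tug is pointing at: with the 1.20-era CLI, constraints go on bootstrap or deploy rather than into environments.yaml (a minimal sketch; the service and values are illustrative):

    juju bootstrap --constraints="instance-type=m3.medium"
    juju deploy mysql --constraints="mem=4G cpu-cores=2"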
<lazyPower> Tug: once your api controller fires back up and your agents re-connect it will fire a config-changed event across the nodes
<Tug> ok thx lazyPower, so I guess config-changed should actually start the process if it's not running
<Tug> but it's the job of the charmer to know that
<lazyPower> yeah, i've seen a lot of the charms fire off hooks/start
<lazyPower> which is a decent pattern to adopt. It'll be a telling story as to which services don't come back online -  also - to note, if they have good upstart scripts you should be fine
<Tug> mm not all charms run the underlying service using upstart.
<Tug> For instance mongodb is run as a shell command from a python script with a daemonize option
<Tug> anyway it's good to know
<lazyPower> Tug: are you running the MongoDB charm in production?
<Tug> lazyPower, yes
<Tug> I have actually modified it
<lazyPower> we should talk - as i'm the present hot seat maintainer for the MongoDB charm - anything that you found that was papercut worthy is totally worth investigating
<lazyPower> i'm still hacking through other priorities but its high on my list of TODO's to triage it in its current state and give it more love/tests
<Tug> sure :)
<Tug> I haven't hacked on it for a while
<Tug> but I can try to help
<lazyPower> are you running a single node, or cluster?
<lazyPower> and is it replicated/sharded if cluster?
<Tug> I was running a 12 instances cluster (3 shards)
<Tug> I'm actually removing 2 shards at the moment
<Tug> because it's starting to be quite expensive ^^
<lazyPower> indeed. Have you looked at running them on DigitalOcean?
<lazyPower> DO makes for a zippy MongoDB host, as their VPS's are SSD backed, and a fraction of AWS costs
<Tug> Nop. I have free credits on amazon at the moment (startup credits)
<lazyPower> ah, very nice
<sarnold> for the micro instances?
<sarnold> those things are slooooow :)
<Tug> mongod are running on m3.medium instances
<Tug> with ebs provisioned iops storage
<Tug> sarnold, nop it's real credits for everything on aws for 2 years
<Tug> limited to a few K
<Tug> anyway, this was my branch https://code.launchpad.net/~dekervit/charms/precise/mongodb/trunk
<sarnold> Tug: oh cool :)
<lazyPower> ahhh i remember this branch
<lazyPower> and the comment on the bug before about the replicaset relation-changed method needing refactoring
<Tug> yes, it was me :)
<Tug> like 3 months ago
<Tug> ;)
<lazyPower> mhmm
<lazyPower> progress has been slow, i don't have the time to really manage the charm like it should be managed
<Tug> I haven't followed juju development for a while now. I guess you added ways to do automated testing and stuff
<natefinch> rcj: it would be worth writing up a bug asking to support constraints in the environments.yaml.... that seems like a pretty nice feature
<rcj> natefinch, sounds good
<lazyPower> Tug: Yep. Amulet testing, bundletester, etc. I'm working on the charm authorship docs as we speak actually
<rcj> natefinch, looks like default instance type was removed
<Tug> lazyPower, nice, I'll want to have a look :)
<lazyPower> http://github.com/juju/docs - if you want to star/watch the repo you can get up to the minute updates
<natefinch> rcj: yeah, I think that's been gone ever since we switched away from pyjuju .... the instance-type constraint was only added back fairly recently (April).  It should be pretty trivial to add support for reading constraints from environments.yaml, I'd think.
<Tug> lazyPower, done
<rcj> natefinch, cool
<Tug> anyway I might get back on mongodb charms soon. If I can make it stable enough maybe it can be merged into the main branch ;)
<lazyPower> mbruzek, marcoceppi - https://github.com/juju/docs/pull/172
<natefinch> rick_h_: let me know if I screwed anything up in that doc ;)
<rick_h_> natefinch: thanks looking
<mbruzek> lazyPower:  To quote tvansteenburgh, "I saw what you did there, and I didn't like it"
<mbruzek> lazyPower: There is some merge text in your document you want to clean up.
<lazyPower> what?
<lazyPower> looking
<mbruzek> lazyPower: your pull request
 * lazyPower throws a fit
<lazyPower> thats it, i'm going back to vim diff
<lazyPower> screw meld
<lazyPower> oh wait, it has the .orig file included
<lazyPower> what kind of nonsense is this, thats user error
<lazyPower> mbruzek: corrected and repushed
<lamont> is there really no named charm?
<lazyPower> lamont: there's a start to a dns-charm that would have support for named
<lazyPower> but its nowhere near ready for public consumption
<lazyPower> lamont: https://github.com/chuckbutler/DNS-Charm
<lazyPower> there are spec docs included in the charm if you want to lend a hand
<lamont> lazyPower: I'll look.
<lamont> I also have an openvpn-server charm to upload, that actually charms the things I want charmed (no, the CA does not belong anywhere near the openvpn endpoint)
<lazyPower> lamont: stuff it in your namespace, and open a review ticket against it :)
<lamont> could you expand upon "my namespace"?
<lazyPower> lamont: https://juju.ubuntu.com/docs/authors-charm-store.html#name-space-charms
<lamont> thank you
<lazyPower> lamont: the idea is it's 2-stage. No reviews required for your charm to be listed in the charm store under your namespace. When you're ready to undergo the review process to become a recommended charm - you then progress to the next stage. This is 2-fold: 1) this enables you to get moving quickly and have your charm listed in the charm store without high walls to hurdle. 2) you can get community feedback on your charm through use of your charm in your
<lazyPower> namespace, before you have a charmer look at it  (but we're open to reviewing charms anytime regardless of recommended status desire - so feel free to open a review ticket at any time)
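The mechanics of getting a charm into a personal namespace boil down to a single bzr push (a hedged sketch following the linked doc; the Launchpad user, series and charm name are placeholders):

    bzr push lp:~<launchpad-user>/charms/trusty/<charm-name>/trunk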
<lazyPower> hazmat, kwmonroe, mbruzek, marcoceppi - https://github.com/juju/docs/pull/181  - Digital Ocean provider docs
<kwmonroe> lazyPower: the v1 api stumped me for a bit earlier this week.  glad you got explicit with the screenshots.  i'll circle back tonight for a merge if nobody has other comments.
<lazyPower> ack. Thanks kwmonroe
<arosales> lazyPower: DO doc commented on
<arosales> that's https://github.com/juju/docs/pull/181 for reference.
#juju 2014-09-25
<lazyPower> marcoceppi: https://github.com/juju/docs/pull/181 - updated
<marcoceppi> lazyPower: cool, I'm about to send an MP your way to fix callouts
<lazyPower> haha, sweet. cuz we just broke the callouts on this page with that refactor
<marcoceppi> lazyPower: yeah, it's been broken in that we're not doing what we document
<marcoceppi> there's a bunch of malformed callouts, i was going to patch them, but instead I'll just make the plugin better
<lazyPower> :heart:
<marcoceppi> mm, sexy
<marcoceppi> it's not perfect but it'll do
<marcoceppi> lazyPower: https://github.com/juju/docs/pull/182
<marcoceppi> lazyPower: what about arosales feedback?"
<lazyPower> That was patched in too
<marcoceppi> cool, we need to teach him to comment on the diffs :P
<lazyPower> he did
<lazyPower> his diffs were a revision behind yours i think?
<marcoceppi> he commented directly on the file
<lazyPower> if you click on his comments they show them inline
<marcoceppi> instead of the diff for the merge request
<lazyPower> o
<marcoceppi> so it's hard to see when they've been fixed
<lazyPower> yesh
<marcoceppi> like mine were hidden because they're out of date now
<lazyPower> ok hang on regenerating
 * lazyPower drum rolls
<lazyPower> \o/
<lazyPower> works
<marcoceppi> lazyPower: also, for future reference, the bolding on the Note isn't required anymore
<marcoceppi> and it can be any word as long as you use !!!!
<marcoceppi> err !!!
<marcoceppi> so "!!! Warning:" will work
<marcoceppi> etc
<marcoceppi> a true callout plugin
<marcoceppi> what a time to be alive
<lazyPower> we are exploring the uncharted waters of writing your own generators
<lazyPower> wewt
<lazyPower> ok
<lazyPower> pull master, we're good to rock on this
<marcoceppi> both have landed huzzah, they'll be live around 4a
<marcoceppi> 6:00 UTC, fwiw
<rick_h_> marcoceppi: lazyPower heads up, I've got a guy looking into how to ingest the docs for use on the upcoming jujucharms.com rework.
<marcoceppi> lazyPower: while we're romping around in the docs, we should define version'd branches
<lazyPower> orly?
<marcoceppi> rick_h_: would that be helpful ^^?
<rick_h_> marcoceppi: lazyPower so we might have some requests coming in to help us wrap that around the site and keep it up to date. We'll be wanting to make sure we address keeping it up to date/etc
<marcoceppi> rick_h_: cool, style wise or content wise?
<rick_h_> marcoceppi: style wise really
<rick_h_> marcoceppi: and figuring out how to present it in a way that fits with nav/search/etc
<rick_h_> we'll be looking to ingest the docs into elasticsearch and building a custom docs search box
<marcoceppi> rick_h_: cool, it's pretty straight forward, there's one main template file then it's Markdown and CSS
<rick_h_> marcoceppi: k
<rick_h_> marcoceppi: lazyPower so if fabrice comes asking strange questions he's researching and doing some proof of concept stuff
<lazyPower> ack
<marcoceppi> rick_h_: cool, sounds good
<marcoceppi> thanks for the heads up
<lazyPower> marcoceppi: want to do a hangout to talk about the doc structure + versioning?
<marcoceppi> right now?
<lazyPower> uhh
<lazyPower> when do you want to do it?
<marcoceppi> we can do it now
<marcoceppi> I was just asking
<lazyPower> i mean i can EOD whenever
<lazyPower> ye
<lazyPower> lets do it now while its resh
<lazyPower> *fresh
<marcoceppi> join my favorite hangout url
<marcoceppi> lazyPower: https://plus.google.com/hangouts/_/canonical.com/iwonderhowlongyoucanmakethesehangouturlsseemstherereallyisnolimitatall
<aisrael> o/
<lazyPower> o/
<marcoceppi> o7
<lazyPower> o5
<lazyPower> i am quadruple jointed
<marcoceppi> it looks like a guy flexing
<lazyPower> http://i.imgur.com/4mYD13u.gif
<kadams54> rick_h_: If you're still around… how are we looking for release? I looked around at PRs and the kanban board and everything looked pretty good.
<rick_h_> kadams54: everything is going well
<kadams54> rick_h_: great!
<rick_h_> just got back from coffee shop, right before I left functional charm tests passed on both precise/trusty
<rick_h_> so the release is about 5 commands away
<kadams54> is `rm -rf /` one of them?
<rick_h_> hah, not quite
<kadams54> Alright, it's off to bed for me. Here's to a smooth release.
<rick_h_> kadams54: night
<rick_h_> well they charms are up, waiting on ingest time
<rick_h_> will do one final QA, but the code looks good on LP
<rick_h_> oh heh, not the GUI channel is it
 * bloodearnest is wondering if all the charms that use #!/bin/bash hooks need updating...
<gnuoy> jamespage, any chance you could take a look at https://code.launchpad.net/~gnuoy/charms/trusty/keystone/next-lp-1355848/+merge/231529 ?
<gnuoy> Tribaal, do you have a moment to take a look at https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-multi-console-fix/+merge/233612 ?
<Tribaal> gnuoy: sorry, was out for a moment. I can look, yes
<gnuoy> Tribaal, thanks.
* SaMnCo changed the topic of #juju to: SaMnCo
<Spads> Hi, so what do I need to do to get juju status to show me floating IPs?
<Spads> jjo says it should have visibility, but I'm confused
<Spads> use-floating-ip is the only env setting I could find that matched, but I thought that was the old behaviour that made every unit get a floating IP
<rcj> Trusty juju tools mismatch on s3... http://paste.ubuntu.com/8425671/
<rcj> https://bugs.launchpad.net/juju-core/+bug/1373954
<mup> Bug #1373954: juju-tools checksum mismatch for trusty on S3 <juju-core:New> <https://launchpad.net/bugs/1373954>
<rcj> Can someone look at Juju tool checksum mismatches blocking bootstrap.  Now seen with Trusty/S3, Precise/Canonistack
<rcj> Checksums @ http://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju:released:tools.json match content in http://streams.canonical.com/juju/tools/releases/ which I assume is the source for the mirrors that have issues
<arosales> marcoceppi: lazyPower: protips for commenting on diffs instead of the files?
<arosales> marcoceppi: lazyPower re: git hub
<sebas5384> question: Let's say I deploy an env on AWS through my machine, is there any way another machine could communicate with the same environment and continue deploying charms?
<lazyPower> sebas5384: the other machine would need a copy of your ~/.juju directory
<lazyPower> specifically, the ~/.juju/environment_name.jenv
<sebas5384> hmmm specifically the .jenv right?
<sebas5384> holly s&*T
<sebas5384> hehe
<sebas5384> thanks lazyPower!
<lazyPower> np sebas5384
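In practice that hand-off is just a file copy (a hedged sketch; the environment name and remote host are illustrative, and on juju 1.x the jenv normally lives under ~/.juju/environments/):

    scp ~/.juju/environments/amazon.jenv other-box:~/.juju/environments/
    # on the other machine:
    juju status -e amazon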
<johnmc> Hi all. Can anyone give me some advice on a problem with juju failing to deploy new LXC machines on a host?
<johnmc> For some unknown reason all new LXC containers intended to be deployed on machine "1" simply stay in pending state forever.
<johnmc> This is what "juju stat" looks like http://pastebin.com/5b4D3d23
<johnmc> all LXC containers from 1/lxc/22 onwards are pending. I've tried doing a "destroy-machine" and "destroy-machine --force" on them, which is how they ended up with "life: dead".
<johnmc> I've looked at log files on machine "1", but can't see anything to suggest why it's not actioning the request for a new LXC container. No errors, nothing.
<johnmc> if anyone can suggest logs files etc. I should be looking at that would help a lot, and right now, I've got nothing to go on.
<johnmc> all existing LXC containers work fine, and come back after a complete physical host reboot.
<natefinch> johnmc: it sounds like an lxc problem, if there's no juju errors.  Juju just tells lxc what to do.  Probably good start would be to ssh into the base machine and do an lxc-ls and see what it spits out
<johnmc> Hi nate. I've been on the base machine quite a bit, and not found anything. This is what I get to lxc-ls:
<johnmc> root@controller-cam1:~# lxc-ls
<johnmc>   juju-machine-1-lxc-15  juju-machine-1-lxc-17  juju-machine-1-lxc-19  juju-machine-1-lxc-21   juju-machine-1-lxc-16  juju-machine-1-lxc-18  juju-machine-1-lxc-20
<johnmc> just the healthy LXC containers
<rick_h_> johnmc: what version of juju are you on? There were a bunch of lxc issues that have gotten fixed in recent weeks
<johnmc> I would have thought that there'd be at least some activity in /var/log/juju/ in response to a request for a new LXC. Nothing happens there at all.
<natefinch> johnmc: there definitely should be some output at least, when doing add-machine
<johnmc> juju -> 1.20.7-0ubuntu1~14.04.1~juju1
<johnmc> rick_h_: Is there a more recent version (after 1.20.7) I should try?
<rick_h_> johnmc: nope that should have the fixes I believe so you're good there. Wanted to double check
<johnmc> natefinch: As a test, I just did a "cp -a /var/log/juju /var/log/juju-old" on the base machine (1), followed by "juju add-machine lxc:1". I then waited a minute and ran "diff -uNr juju-old/ juju/" on the base machine. No logging had occurred.
<rick_h_> johnmc: I think the juju logs are in a .juju directory for local stuff? /me tries to double check.
<rick_h_> johnmc: so logs are in /home/rharding/.juju/local/log
<rick_h_> where rharding is your username on the host machine
<rick_h_> johnmc: and there's the all-machines.log along with per machine/unit logs there.
<natefinch> rick_h_: it's maas, not local
<rick_h_> hatch: natefinch oh, sorry. /me totally missed that part
 * hatch pokes head in
<natefinch> johnmc: if you do juju add-machine lxc:1 --debug --show-log  ... what does it print out?  It sounds like the commands aren't even making it to the server for some reason
<hatch> rick_h_: did you mean to ping someone else? :)
<marcoceppi> arosales: for github reviews
<rick_h_> hatch: heh, I was starting to ping you for a different reason
<hatch> oh haha
<johnmc> natefinch: http://pastebin.com/sJ49b6BY
<johnmc> says it's created
<johnmc> says it's falling back to 1.18!
<arosales> marcoceppi: ack, any protips?
<marcoceppi> arosales: yeah, making a screenshot
<marcoceppi> well, shutter keeps crashing
<marcoceppi> when making comments, make them on the Files Changed tab
<marcoceppi> that way they're associated with the merge request and not directly on the branch
<marcoceppi> as the merge request is iterated upon with the feedback it'll close comments that are no longer up to date
<marcoceppi> arosales: so at the bottom https://github.com/juju/docs/pull/181 you can see how your comments are still shown but mine are marked as outdated
<marcoceppi> even though chuck addressed all the comments in the merge
<johnmc> natefinch: I've updated to the latest juju-core on my juju -agent machine (machine 0), and tried again. I get the same log output as before ( http://pastebin.com/UbzN3mbY ).  Any idea where I go from here?
<johnmc> natefinch: incidentally I can make as many LXC containers on machine "3" as I like. Only LXC creation attempts on machine "1" fail silently.
<johnmc> Does anyone have any tips about where I should be looking for clues? Apart from the debug output I've shown ( http://pastebin.com/UbzN3mbY ) , there is no logging evidence I can find that sheds light on this.
<johnmc> My request for an LXC container is disappearing into a black hole.
<natefinch> johnmc: oh crap
<natefinch> johnmc: I think we have a 1.18<->1.21 bug
<natefinch> johnmc: we just fixed it last niedbalski
<natefinch> johnmc: last night... heh trying auto-complete mid-sentence is not exactly what I wanted
<natefinch> johnmc: although that doesn't explain why it would work on one machine and not the other... nevermind.
<natefinch> johnmc: if you do the same command with --debug --show-log on machine 3 (where it works), do you get different output
<natefinch> ?
<johnmc> natefinch: Sorry, had to go away for a bit. This is the output when creating on machine 3: http://pastebin.com/scsj1Jzb . The machine was successfully created in less than 2 minutes.
<natefinch> johnmc: hmm.... weird
<johnmc> natefinch: Looking at the two base systems, they both have /var/lib/juju/tools/1.18.4-trusty-amd64 on them. Same version on both.
<natefinch> johnmc: when you do add-machine to lxc:3, do you get output in the all-machines log that you don't see when you do it for lxc:1?  That seems like the most interesting place to start right now
<weblife> I am using the GUI for the first time to set up an environment I created, hosted on the bootstrap node.  Did the drag of a zip, committed the changes and launched the instance.  I had an install error and want to ssh in but I am getting an error.  Could this be because I am using the GUI for launching the instance?
<johnmc> natefinch: There's no all-machines.log on my workstation, but there is on my maas-agent machine. There is a block of log output *after* the new LXC machine was created on machine 3, but nothing that coincided with the actual request. This is the output: http://pastebin.com/fmX7bP9H
<johnmc> natefinch: so, I suppose the behaviour is consistent across machines 1 & 3, in that you only get log output after a successful lxc machine creation.
<natefinch> johnmc: sorry, yeah, I meant on the maas-agent machine.   You should get logs about things like the API being accessed
<johnmc> natefinch: I get no logging whatsoever in response to the request.
<johnmc> natefinch: only success creates any log output
<weblife> nevermind I figured it out
<natefinch> johnmc: I'm bringing up an environment of my own so I can double check some stuff, but realized my local environment was kinda messed up.  one is coming up now.
<johnmc> natefinch: something interesting is going on with machine 1. Right now when I run "juju stat" it says "agent-state: down" for machine 1. I had this before and rectified it by restarting the juju daemon on machine 1. Strange thing is that it was down earlier when I posted my initial stat output ( http://pastebin.com/5b4D3d23 ).
<johnmc> natefinch: correction - it was *not* down earlier
<natefinch> johnmc: yeah, something is wonky with that machine
<johnmc> natefinch: It's the total lack of any log output that gets me. I've just restarted the jujud on machine 1, and it's up again. I then tried to create yet another lxc machine on there and see no new log output on either the maas-agent or machine 1.
<natefinch> johnmc: can you run    sudo grep 1/lxc/30 /var/log/juju/all-machines.log      on that base maas-agent machine?
<johnmc> natefinch: # sudo grep 1/lxc/30 /var/log/juju/all-machines.log grep: /var/log/juju/all-machines.log: No such file or directory
<johnmc> natefinch: all-machines is only on maas-agent. Grepping the all-machines log there shows no log entries
<johnmc> root@juju-agent:~# grep 1/lxc/30 /var/log/juju/all-machines.log \n root@juju-agent:~#
<johnmc> natefinch: It's basically what I've been saying all along; the request for a machine vanishes without a trace.
<natefinch> johnmc: on the same machine, if you grep for 3/lxc/10  do you get hits?
<johnmc> natefinch: nothing for that either
<natefinch> johnmc: ok, that's weird, since that's the one that's actually working
<johnmc> As I said before, the lack of any logging pre-success is consistent.
<themonk> hello
<themonk> my juju says a machine is down, i never encountered it before, why is this happening?? and unit on that machine says "agent-state: down , agent-state-info: (started)"
<natefinch> themonk: machines go down sometimes.... is the actual machine down, or just juju?
<natefinch> johnmc: obviously deploying to lxc containers on machine 1 used to work, since you have some working.  Do you know when it stopped working?
<sinzui> natefinch, themonk: if a local lxc env failed to tear down, the cruft left behind will prevent new machines from starting
<sinzui> natefinch, themonk I know there is a juju plugin that will clean the machine
 * sinzui looks for doc about how to clean
<natefinch> sinzui: johnmc is having a problem adding new lxc containers to maas instances.  Not sure about themonk's problem yet
<natefinch> sinzui: or rather... he can deploy lxc containers to one maas machine but not another
<sinzui> I have no lxc maas experience johnmc  but there were bugs about the network the machine might give the lxc
<natefinch> johnmc: is this a production environment?  Would you be willing to try upgrading the environment to 1.20.7?
<johnmc> natefinch: The first time it failed I was trying to install a haproxy charm to both machine 1 & 3 at pretty much the same time (yesterday). Machine 3 succeeded, and nothing happened on machine 1. No idea why.
<sinzui> themonk, http://pastebin.ubuntu.com/8427862/
 * sinzui looks for lxc maas bugs
<johnmc> natefinch: In the end I realised I should have been using hacluster, but that realisation came long after the failure became apparent.
<johnmc> natefinch: It's not in production yet. I'll happily try anything.
<johnmc> natefinch: Is there a doc explaining what I need to do, or is there just a simple command?
<natefinch> sinzui: juju upgrade-juju should just work, if he's on 1.18.4, right?
<natefinch> sinzui: (and using 1.20.7 client)
<natefinch> johnmc: in theory "juju upgrade-juju" should just work, because you're running a newer stable version of the juju client, and the server is running an older stable  version of the server.  But I'd wait for the go-ahead from sinzui.  He's our QA head, and does about 1000x as many upgrades as I do.
<johnmc> natefinch: thanks. I'll check back in a few minutes.
<themonk> natefinch, sinzui, thanks for response :) machine is up, one thing is that its in a virtualbox on windows machine
<themonk> and its in laptop
<johnmc> sinzui: Am I safe to run a "juju upgrade-juju" using a 1.20.7 client with 1.18.4 servers?
<sinzui> johnmc, yes. We wouldn't release it if it wasn't safe
<johnmc> sinzui: looks like I'm trapped due to my broken (pending) lxc machines. Error message: ERROR some agents have not upgraded to the current environment version 1.18.4: machine-1-lxc-22, machine-1-lxc-23, machine-1-lxc-24, machine-1-lxc-25, machine-1-lxc-26, machine-1-lxc-27, machine-1-lxc-28, machine-1-lxc-29, machine-1-lxc-30, machine-1-lxc-31, machine-1-lxc-32
<johnmc> sinzui: Those are the broken LXCs I've been discussing with natefinch.
<johnmc> sinzui: they are the reason I'm trying the upgrade
<sinzui> natefinch, you cannot upgrade while they are broken
<sinzui> johnmc, Juju will queue the upgrade until all the machine and unit agents call home and report they are healthy
<sinzui> johnmc, I know this because I have an arm64 instance that can go down. when it comes back, the upgrades take place
<sinzui> johnmc, I am still reviewing the release. when I am done in about 30 minutes, I can return to the lxc maas bug list to find a solution to the problem
<bic2k> natefinch and sinzui: I am literally having the same problem right now. Same version 1.18.4 for 1.20.7
<bic2k> let me know if I can provide any details to help
<bic2k> I lied, we are on 1.18.1 in this cluster
<sinzui> bic2k, what do you see with "juju --show-log upgrade-juju"? The output will show which versions are available. We expect 1.18.4 for clouds with public access.
<sinzui> bic2k, you can also be explicit about upgrade versions "juju --show-log upgrade-juju --version=1.18.4"
<bic2k> sinzui: no matching tools available?
<sinzui> bic2k, explicit within reason. Juju has some internal rules about what it thinks I can upgrade to and will look for a match
<sinzui> bic2k, does your env use public streams such as streams.canonical.com?
<bic2k> sinzui: good question, that's new terminology to me. Where do I look?
<sinzui> bic2k, run "juju get-env tools-metadata-url".  empty means default to streams.canonical.com
<bic2k> sinzui: empty it is
 * bic2k isn't sure why Yoda said that
<sinzui> bic2k, lets ask juju to be explicit about what it is doing. can you paste the output of
<sinzui> juju metadata validate-tools
<bic2k> sinzui: local tools are on 1.20.7 right now, the metadata command isn't around anymore right?
<sinzui> bic2k, juju doesn't use your client unless you exploit a developer hack called --upload-tools
<bic2k> sinzui: says the command is not found
<sinzui> bic2k, are you on mac or windows?
<bic2k> sinzui: mac
<sinzui> bic2k, well good news for you. I promised to try to make the metadata plugin for mac today when I make the 1.20.5 binaries
<sinzui> but that requires me to reboot my machine into os x
<sinzui> bic2k, are you in a private cloud?
<bic2k> sinzui: public?
<sinzui> bic2k, aws, hp, azure, joyent?
<sinzui> or your own openstack
<bic2k> sinzui: lol, this is a cluster on aws. Been active since 0.12.1
<sinzui> bic2k, okay, so I think we need to ask juju about the old urls...
<sinzui> bic2k: juju get-env tools-url
<bic2k> sinzui: empty
<bic2k> want me to gist the whole env without secrets?
<sinzui> bic2k, I am not sure. I just switched to my aws env and it shows sensible answers
<bic2k> perhaps I got caught in some old upgrade issue with juju in the 0.18 series?
<bic2k> sinzui: been looking in the bugs/issues but nothing related so far
<sinzui> bic2k, once you run upgrade-juju, the action is queued. juju won't let us run it again to see its decision
<bic2k> sinzui saves the day. Thanks again. Solution was setting my tools-metadata-url to https://streams.canonical.com/juju/tools/
<sinzui> bic2k, okay, then I think we have learned that old envs may not get the stream url updated during upgrades. 1.18 requires it and a bootstrap will set it
<sinzui> bic2k, I will post this issue for others
<sinzui> bic2k, so now I suspect "juju run" won't work for you because it is a known failure for upgrades
 * sinzui looks for bug/work around
<sinzui> bic2k, I think you are affected by bug 1353681
<mup> Bug #1353681: juju upgrade-juju --upload-tools to 1.18.4 fails to provide juju-run <canonical-is> <run> <juju-core:Triaged> <https://launchpad.net/bugs/1353681>
<sinzui> oh, and so  my juju-ci3 env it seems
<bic2k> mup: close, but this was public cloud and not using --upload-tools. No errors related to permissions either.
<mup> bic2k: I apologize, but I'm pretty strict about only responding to known commands.
<bic2k> sinzui: having an env to reproduce on can go a long ways in figuring it out.
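For anyone hitting the same thing, the workaround bic2k describes above amounts to pointing the old environment at the public streams and retrying the upgrade (a sketch with the 1.x CLI; the target version is the one sinzui suggested):

    juju set-env tools-metadata-url=https://streams.canonical.com/juju/tools/
    juju --show-log upgrade-juju --version=1.18.4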
#juju 2014-09-26
<james_w> anyone have any ideas about https://bugs.launchpad.net/juju-core/+bug/1374159 ?
<mup> Bug #1374159: Complains about wanting juju-local installed when it is <juju-core:New> <https://launchpad.net/bugs/1374159>
<james_w> It's preventing me from using juju currently
<james_w> as it doesn't work for long enough to complete a deploy of my test environment
<jamespage> gnuoy, the unit test fix is to add hooks.hooks._config_save = False
<jamespage>  
<jamespage> to the relations tests for the openstack charms - I have this in my https split branches
<gnuoy> jamespage, yes, corey and I did that to the quantum-gateway charm last night
<gnuoy> jamespage, oh, that's not exactly what we did
<gnuoy> jamespage, so you want the implicit save on when the charm is running for realz ?
<jamespage> gnuoy, yeah
<jamespage> gnuoy, it does not hurt - we might want to use it later on
<jamespage> gnuoy, so disabling it for testing is just fine
<gnuoy> k
<jamespage> gnuoy, we could just land that in as a resync + a trivial change
<gnuoy> jamespage, will do
<gnuoy> coreycb, after talking to james ^ I've tweaked the config_save  in the quantum-gateway next charm http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/quantum-gateway/next/revision/65
<coreycb> gnuoy, ok good to know
<rick_h_> marcoceppi: lazyPower any chance you all can help reshare/get the word out today? https://plus.google.com/116120911388966791792/posts/9KaLE7m9hv9 and https://twitter.com/jujuui/status/515467739951923200
<james_w> anyone have any ideas about https://bugs.launchpad.net/juju-core/+bug/1374159 ?
<mup> Bug #1374159: Complains about wanting juju-local installed when it is <juju-core:Incomplete> <https://launchpad.net/bugs/1374159>
<james_w> It's preventing me from using juju currently as it doesn't work for long enough to complete a deploy of my test environment
<james_w> rick_h_: nice screencast, it looks really slick, congrats to those involved
<rick_h_> james_w: ty much, sorry I don't have any clue on your bug to trade for the nice comments :/
<james_w> no problem
<james_w> rick_h_: what can provide the hardware details in the machine view?
<rick_h_> james_w: so there was a bug in MAAS that was fixed and I think is in 1.20.8 (but this video was before then)
<james_w> ah, ok
<rick_h_> james_w: and ec2 shows it, you can see it in makyo's video https://www.youtube.com/watch?v=pRd_ToOy87o&list=UUJ65UG_WgFa_O_odbiBWZoA
<james_w> rick_h_: I'm only sad that we can't really use this work.
<rick_h_> james_w: so we'll hopefully get it everywhere in time
<rick_h_> james_w: :( why is that?
<james_w> rick_h_: a couple of reasons really
<james_w> 1. we don't have access to our production environments
<james_w> 2. manual modification of environments doesn't suit our workflow
<james_w> we want an approval workflow that is driven from a desired state in version control
<rick_h_> james_w: ah yea, though with juju auth support coming we've still got the idea of read only and such
<james_w> yeah, that will be nice
<james_w> so we can poke around
<rick_h_> james_w: true, however we'll also be adding some things like linking directly to exposed ip/ports per unit in machine view and such
<james_w> but we'll miss out on the really nice uncommitted changes features etc.
<rick_h_> so now that we show you a real 'per unit' look we can hopefully provide some useful stuff like 'kill that unit' or what would be cool with canary upgrade work to show progress in machine view
<rick_h_> gotcha, yea
<james_w> but this all looks really nice
<james_w> and I'm sure it goes over well with customers
<rick_h_> definitely, well hopefully some people who don't use the gui much will have some real use for it and thanks for that feedback.
<james_w> yeah
<rick_h_> it helps us know what things we can look to offer that we don't currently
<rick_h_> and check out the roadmap
<james_w> I'm trying to think if it could evolve such that we could use the modification features of the gui
<james_w> I think it would only really be useful for testing out changes, and then exporting a diff when finished
<james_w> without some very substantial modifications
<johnmc> rick_h_: The updated gui looks excellent. A very impressive step up. I see I can get the source on github. Is there a ppa or other repo I can download it from?
<rick_h_> johnmc: juju deploy juju-gui
<rick_h_> johnmc: well juju deploy trusty/juju-gui
<johnmc> rick_h_: of course. Thanks.
<rick_h_> johnmc: if you mean source for hacking, there's a LP release tarball as well https://launchpad.net/juju-gui/+download
<rick_h_> johnmc: but yea, deploy it to check it out in your environment
<rm4> is it possible to avoid haproxy being a single point of failure by adding extra instances with add-unit
<aisrael> rick_h_: nice work on the new gui. Watching the ghost walkthrough now. That's slick.
<rick_h_> aisrael: thanks, team worked long and hard on that and exciting to get it out there.
<rick_h_> rm4: I'm not 100% sure but it looks like it provides some peer relation-fu that would seem to work towards that end. http://bazaar.launchpad.net/~charmers/charms/precise/haproxy/trunk/view/head:/hooks/hooks.py#L563
<rick_h_> rm4: it'd be a good bug on the charm to update the readme to address your question directly as I'm sure others wonder the same thing
<rm4> rick_h_: I can create the peering and get the following
<rm4> backend haproxy_service
<rm4>     mode tcp
<rm4>     option tcplog
<rm4>     balance leastconn
<rm4>     server haproxy-0 10.0.3.242:81 check
<rm4>     server haproxy-1 10.0.3.188:81 check backup
<rm4> so it does peer fine, however I'm not sure if it has a vip (keepalived for example), as when I destroy the first server its ip address is not accessible
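So the short answer to rm4's question: add-unit gives you peered haproxy backends, but not a floating address by itself; a VIP (keepalived, for example) or DNS round-robin in front of the units is a separate concern. A minimal sketch:

    juju deploy haproxy
    juju add-unit haproxy    # the charm's peer relation wires the new unit in as backup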
<lazyPower> rick_h_: Consider it done :)
<rick_h_> lazyPower: ty much
<avoine> hey bloodearnest how went your presentation at pycon uk?
<rm4> rick_h_: I have submitted bug 1374465
<mup> Bug #1374465: Readme does not have a peering section allthough peering is permitted. <haproxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1374465>
<rick_h_> rm4: awesome thanks!
<rm4> rick_h_: of course
<bloodearnest> avoine: went ok thanks. Some issues with the gui freezing up prevented the full demo
<bloodearnest> avoine: but mfoord is doing the same demo in pycon india this weekend
<rick_h_> bloodearnest: :(
<rick_h_> bloodearnest: any hint what was up?
<bloodearnest> rick_h_: well, it was a mac, using parallels to run a vm, deployed to local provider, and untested. We had to use mfoord's mac because my laptop wouldn't connect to the projector :/
<rick_h_> bloodearnest: ok, well let me know if there's anything we can help take a look at
<bloodearnest> avoine: I added some support for the django charm to cope with multi-unit postgres services, and it handles failover nicely
<rick_h_> bloodearnest: and let mfoord know machine view is out for any follow up talks in case it fits the demo/material you all covered
<bloodearnest> avoine: also, added dj-static as a simple static asset solution. We're using it in prod, with a squid in front it works well. Simple deployment.
<bloodearnest> rick_h_: yeah, we didn't get to the bottom of it. But I think macs hate me, so that's probably it. Feeling's mutual :)
<rick_h_> bloodearnest: hah ok
<lazyPower> sinzui: question for you. I have ~ 4 bugs i need to retarget away from the charms collection to point to a personal branch - and i can't seem to do that in launchpad via the project data point - any hints would be helpful : https://bugs.launchpad.net/charms/+bug/1374267
<mup> Bug #1374267: ctap-sampleapp crashes on logout <Juju Charms Collection:New> <https://launchpad.net/bugs/1374267>
<sinzui> lazyPower, Lp only permits projects, distros, and distro-packages to have bugs
<lazyPower> so i can't retarget these bugs against the users namespaced charm?
<sinzui> lazyPower, that's right
<lazyPower> D: that seems... not right
<sinzui> lazyPower, a branch is personal
<lazyPower> as a personal namespaced charm is still a project in LP
<lazyPower> or am i misunderstanding a core tenet of LP's structure?
<sinzui> lazyPower, when issues are shared by a group, the branch needs to be promoted to the project level...but Lp is too ass-backwards to explain that
<avoine> bloodearnest: yeah, I saw your MP, it is pretty cool. I'll definitely check dj-static
<marcoceppi> lazyPower: we need to create the package in lp for charms first off
<bloodearnest> avoine: the implementation of installing in the charm and running collectstatic is a quick hack, I think it could be done better
<sinzui> lazyPower, Lp is not about the individual, it actually alienates the opportunistic developer by calling work +junk. Lp wants groups to collaborate, but never explains one person needs to create something valuable, then share it as a project
<bloodearnest> avoine: collectstatic is not actually needed with dj-static
<lazyPower> i guess what confuses me is https://bugs.launchpad.net/charms/trusty/+source/hsenidmobile-ctap-sampleapp - leads me to believe i could do this without much fuss
<avoine> bloodearnest: I was planning adding a subordinate that install django-storage and connect to s3 or swift
<avoine> but for now I was using this: https://code.launchpad.net/~patrick-hetu/+junk/django-contrib-staticfiles
<marcoceppi> lazyPower: we need to make a new tag, that's like "not-a-charm"
<lazyPower> marcoceppi: agreed. I'll tag these with that exact phrasing
<marcoceppi> that we can have review-queue ignore for the time being
<marcoceppi> I'm releasing new review-queue today, I can add that in there
<sinzui> lazyPower, since the developer is the "first" community, Lp was never going to attract developers like github
<lazyPower> sinzui: thanks for the clarification. I'm a little saddened by this news but its not the end of the universe.
<bloodearnest> avoine: swift is not yet as good as s3 for serving static assets
<bloodearnest> avoine: we are taking the approach of using dj-static and sticking squid in front
<avoine> bloodearnest: yeah I bet that works pretty well
<bloodearnest> avoine: two big advantages are 1) single deployment target (just your django units need updating) and 2) same in dev as in prod (as dj-static works in dev too)
<bloodearnest> avoine: but for a lot of large assets, s3 might be better
<bloodearnest> like videos
<bloodearnest> and mp3s
<avoine> yeah
<avoine> bloodearnest: any thoughts on the python vs ansible approach?
<bloodearnest> avoine: so, I'm a bit confused. You are using ansible, but *not* the hooks integration, just apply_playbook. And you also have ansible for controlling juju from the hosts?
<avoine> bloodearnest: I was planning to use AnsibleHooks and using it with Juju on the hosts
<avoine> bloodearnest: but I'm not sure if the overhead of Ansible is worth the trouble
<bloodearnest> avoine: great. I'd be happy to help with that.
<bloodearnest> avoine: right.
<lazyPower> avoine: are you planning on submitting this charm for recommended status from the charm store?
<avoine> lazyPower: no not soon
<avoine> lazyPower: I'll stick with the pure python for now
<lazyPower> ok. I was going to interject that if you want help wrapping that up in proper charm format, we have an ansible charm template for use, and integrating it into juju hooks is pretty straight forward
<lazyPower> if you're using a recent edition of charm tools `charm create -t` gives you some options for that
<avoine> nice I didn't know that
<bloodearnest> lazyPower: nice!
<lazyPower> uh oh, i get the feeling we haven't been very forthcoming with the information about charm create -t
 * lazyPower prepares an email to the list
<bloodearnest> lazyPower: also, have you seen my charm helpers branch that adds super-simple "actions"? We make extensive use of juju run <unit> "actions/some-action a=b c=d" type stuff, which this branch makes easy to integrate with ansible
<lazyPower> bloodearnest: i have not, but if its not in the review queue chances are I didn't see it
<bloodearnest> https://code.launchpad.net/~bloodearnest/charm-helpers/ansible-actions/+merge/233428
<lazyPower> http://review.juju.solutions
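A minimal sketch of the kind of action script described above (hypothetical, not bloodearnest's ansible-actions branch): an executable dropped at actions/some-action that parses "a=b c=d" style arguments, so it can be driven with `juju run <unit> "actions/some-action a=b c=d"`.

    #!/usr/bin/env python
    # Hypothetical actions/some-action script; the name and arguments are illustrative only.
    import sys


    def parse_args(argv):
        """Turn ['a=b', 'c=d'] into {'a': 'b', 'c': 'd'}."""
        return dict(arg.split('=', 1) for arg in argv)


    if __name__ == '__main__':
        params = parse_args(sys.argv[1:])
        print('running some-action with %s' % params)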
<johnmc> As discussed yesterday with natefinch and sinzui, I can no longer create LXC containers using juju-maas on one of my machines.
<johnmc>  Working back through my history to the last thing I did before it broke, I might have found something.
<johnmc> I checked-out (bzr branch) the precise haproxy charm onto my machine and tried to deploy it into a trusty LXC machine
<johnmc> I then back-tracked and realised I should have been working with hacluster under trusty
<bloodearnest> lazyPower: I don't see my branch on there, what do I need to do to make it appear? It should be under tools, right?
<johnmc> Does anyone know if that could explain my now-broken environment
<lazyPower> bloodearnest: that queue may not be implemented yet. I know we're moving at breakneck speed on the new queue
<johnmc> I'm also finding it impossible to deploy hacluster linked to glance.
<bloodearnest> lazyPower: coolio
<lazyPower> bloodearnest: maybe it's worthwhile to open a bug against the review queue so we can track progress on implementation
<lazyPower> bloodearnest: for now it lives here but will eventually be moved to github.com/juju-solutions  --- https://github.com/marcoceppi/review-queue
<johnmc> When I deploy the glance charm and fire-up a new pair of glance instances, then link that to a new hacluster instance, I get this in the logs: http://pastebin.com/ZtNA7UmT
<johnmc> That big long list of "INFO hanode-relation-changed Cleaning up res_glance_haproxy:0 on juju-machine-.-lxc-.." lines represents every glance instance I think I've ever had and subsequently destroyed.
<johnmc> Things look really badly broken. Can anyone help?
<johnmc> natefinch: Are you around to help?
<natefinch> johnmc: yes, but trying to drum up someone more helpful ;)
<natefinch> johnmc: glance and hacluster are beyond my knowledge
<natefinch> johnmc: did you get any more help from sinzui last night?  I saw you got an upgrade half-finished, which is never a great place to be
<sinzui> Ah, no, I had to switch to OS X to complete the release
<johnmc> natefinch: I didn't make any progress on that. sinzui said that the upgrade can't be done while those LXC (requested) machines are there. I have no idea what to do next.
<sinzui> johnmc, When the containers are there, but the machine/unit agents are down, you can restart each
<johnmc> natefinch: I'm hoping that the attempt to install the precise haproxy charm into trusty might turn out to be a common cause of both the LXC problem, and the hacluster problem.
<sinzui> johnmc, I have a brittle arm64 machine that I do this from time to time. I think restarting the machine agent first is best
<johnmc> sinzui: the containers don't actually exist. I requested their creation, but they were never actually created.
<sinzui> johnmc, 1.18.x was a bad time for arm64, so I did restarts of the agents, then the queued upgrade was dequeued and 5 minutes later the upgrade was complete, and the agents stayed up
<johnmc> sinzui: I have restarted the machine agent (on the base system) many times
<johnmc> sinzui: there are no agents, because there are no LXC containers. That is how my system is broken.
<sinzui> restarting the state-server agents is orthogonal
<sinzui> johnmc, If lxc gets blocked on a machine a lot of manual work is needed to unblock http://pastebin.ubuntu.com/8427862/
<sinzui> johnmc, I think you need to confirm which containers exist with sudo lxc-ls --fancy
<johnmc> I have already been through that in detail with nate yesterday. The LXC containers do not exist. I pasted this output yesterday
<sinzui> johnmc, Since you cannot destroy your env and start over, I think you need to use remove-unit or destroy-machine to make juju forget about the failed containers and try again
<sinzui> johnmc, I don't have experience with destroying maas lxc containers
<johnmc> http://pastebin.com/aGUAyizu
<sinzui> johnmc, is this trusty with cloning enabled?
<johnmc> sinzui: I have used "destroy-machine" and used --force to no avail
<johnmc> sinzui: This is trusty. How is cloning enabled/disabled? I'm not familiar with that setting.
<johnmc> I followed an online guide
<sinzui> johnmc, are there locks named after lxc in /var/lib/juju/locks/
<johnmc> sinzui: No such lock files on either the host system (machine 1) or the juju-agent machine. Where should I be looking for these?
<sinzui> johnmc, the machine with the containers. When lxc doesn't create containers we investigate the host machine
<sinzui> johnmc, the state-server doesn't do work. Other machines ask it for a list of tasks. So when a machine has issues, we visit it to investigate the local problem
<johnmc> sinzui: There are no lock files.
<johnmc> sinzui: As I explained to natefinch yesterday, there is no evidence that the host system ever received any request for new LXC containers
 * mgz reads log
<mgz> johnmc: do you want to just clean up the lxc container stuff completely?
<sinzui> johnmc, interesting, but the order of events is that the host machine agent asks the state server for work. Status shows the state-server is waiting for the machine to do its part
<johnmc> sinzui: This failure appears to be totally silent with regard to log files. No files under /var/log change at all on the host system in response to a new LXC request.
<johnmc> sinzui: If we had any log output at all we might get somewhere. How is this to be done?
<sinzui> johnmc, I only know to look for logs in /var/log/juju. when there are no logs or logging stops, that might mean the files were removed, confusing the agents. restarting the agents will recreate the logs and there will be a flood of dequeued messages
<johnmc> sinzui: As requesting new LXC containers has no impact, is it possible that there is a blocked queue of requests?
<johnmc> sinzui: I've restarted the jujud for machine-1 many times, and verified (just now) that there are no deleted files being written to.
<johnmc> sinzui: I used lsof to check open files. /var/log/juju/machine-1.log is used by jujud and is present
<johnmc> sinzui: it's as though the juju-agent machine sends no requests to the host machine at all.
<sinzui> oh...
<johnmc> sinzui: Where do I go from here?
<sinzui> johnmc, I am not sure what to do in this case. I don't have any experience with this. When the machine agent cannot talk to the state server, the logs will scream about it. You don't see this issue in the machine-1.log though.
<sinzui> johnmc, I assume the machine-1.log you are reading never mentions ERROR.
<johnmc> sinzui: The machine agent connects to the server. Netstat shows this has an open tcp connection to port 17070. However, no activity takes place. Shouldn't the jujud on the juju-agent server log something somewhere?
<sinzui> johnmc, The machine-1 log should be stating it called home. The all-machines.log on the state server puts all the actions in context.
<johnmc> sinzui: the machine agent seems to lose its connection to the juju-agent at least once a day. This is the latest log: http://pastebin.com/2PP8CK8G
<sinzui> johnmc, This is obviously a bug. Can you report the issue at https://bugs.launchpad.net/juju-core and attach the all-machines.log for the developers. Please review the log though, Juju likes to include certs and passwords that you will want to redact
<sinzui> johnmc, which version of juju did the env start as, and which have you upgraded to?
<johnmc> sinzui: it was 1.18.1 originally
<sinzui> okay, that explains the upgrade already at 1.18.4 message
<sinzui> though it implies something has not completed an upgrade to 1.18.4
<bic2k> does juju have any plans to directly support deploying a Docker container?
<natefinch> bic2k: we've talked about docker a lot internally... you can certainly write a charm that deploys a docker container, in fact I wrote such a charm very recently.   Docker and juju work fine together now... what would you like to see done differently?
<bic2k> natefinch: I also just finished writing a charm for deploying some internal docker services. Perhaps thats all we need :-)
<natefinch> bic2k: I think the one thing that makes docker charms special is that it's relatively "safe" to deploy multiple of them to a single machine, rather than needing to put each of them in a separate LXC container (since they're already contained, of course)
<bic2k> natefinch: Private docker registry was a bit of a challenge for us.
<lazyPower> https://plus.google.com/100016143571682046224/posts/PaVGh51FYCR - i'm just going to leave this here...
<natefinch> bic2k: ahh, yeah, I was deploying from a publicly available image, so that wasn't a concern
<bic2k> natefinch: You deploying on 12.04 or 14.04?
<natefinch> bic2k: 14.04
<bic2k> natefinch: We hit some issues with getting docker installed on 12.04 through apt. Mostly the CLI tools to add a repo insist on adding deb-src and then fail when it isn't there.
<natefinch> bic2k: why are you using 12.04?
<bic2k> natefinch: our cluster is old and thats what its on :-)
<natefinch> bic2k: ahh :)
<avoine> lazyPower: what is the bundletester command I should run to test the python-django charm like you do?
<avoine> lazyPower: also are you testing with python3 ?
<lazyPower> avoine: negative. python2
<avoine> ok
<lazyPower> avoine: just `bundletester -F` after i cd into the charmdir
<avoine> ok
<lazyPower> pip install bundletester into a venv
<lazyPower> then you get the same results we do when testing / CI does
<avoine> ok
<lazyPower> avoine: hth - i'm EOD'd for the rest of the day to prep for my show. If you need anything else, i'll be around this weekend/monday
<avoine> ok, thanks
<sebas538_> hi!
<sebas5384> question: how bundle.yaml deals with local charms?
<sebas5384> in the charm property, could be like
<sebas5384> charm: local:precise/drupal
<sebas5384> but how I specify the local repository
<sebas5384> ?
<sebas5384> the machine view is in https://jujucharms.com \o/
<sebas5384> uhuu!! testing already :)
<arosales> lazyPower|Spinni: just the stream for your session correct?
<arosales> not hangout or anything like that
<lazyPower|Spinni> arosales: not this week. maybe after brussels we'll setup a live in studio session to go with it.
<lazyPower|Spinni> replies will be latent, i setup a mixing table this week.
 * lazyPower|Spinni afk's again
<d4rkn3t> hello guys, one question for MaaS and juju: is there a way to run the security upgrades on nodes added to MaaS, and for the charms on juju, without doing that one by one? thanks. If not, it may be a suggestion to add as a feature in MaaS. Can anyone answer me?
<bic2k> d4rkn3t not that I have tried this, but you may be able to use the ssh command to run the appropriate apt-updates one machine at a time
<d4rkn3t> I'd like to avoid that, because we have 500 virtual servers
<d4rkn3t> i'd like to use MaaS to make that
<d4rkn3t> for example select the nodes and launch the upgrade!!!
<d4rkn3t> it takes too long to do that one by one
<d4rkn3t> the same thing is also valid for the charms deployed with juju
<d4rkn3t> if I want to upgrade, for example, MySQL deployed using juju, is there a procedure for that?
<rick_h_> d4rkn3t: there's a juju run command, and landscape is great at updates across hardware.
<rick_h_> d4rkn3t: so in the mysql sense I'd juju run that across my mysql service.
<rick_h_> d4rkn3t: https://www.youtube.com/watch?v=2d5KdQjXCBs is a cool video on juju run and juju run --help has some basic info.
<lazyPower> I deny knowing anything about juju run :P
<lazyPower> rick_h_: good looking out - i just got off stream and was going to suggest the same thing. hi5
<hazmat> rick_h_, lazyPower  juju run --all "apt-get update && apt-get upgrade"
<mwenning> marcoceppi, can you specify constraints when you deploy amulet?
#juju 2014-09-27
<dpb1> Anyone lurking?  Could that awesome ci test result thing post comments to MPs whenever it runs, and not just in failure case?  Or does it already do that and I'm missing it?
<tvansteenburgh> dpb1: i think it does, e.g. https://code.launchpad.net/~hloeung/charms/precise/apache2/ssl-security-options/+merge/233877
<dpb1> tvansteenburgh: ah cool... just doesn't test everything I guess
<tvansteenburgh> dpb1: yeah i think the integration between RevQ and CI isn't fully automated yet
<dpb1> tvansteenburgh: ok, thanks.  sure is nice when it happens. :)
<tvansteenburgh> dpb1: glad you like :)
<jose> tvansteenburgh: want me to fire a test?
#juju 2014-09-28
<jose> rcj: ping
* jose changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || News and stuff: http://reddit.com/r/juju
#juju 2015-09-21
<miken> Where is the "juju environment key" created? I'm trying to help someone who can't 'juju ssh' into any units (Permission denied (public key)), and it turns out they don't have a juju environment key either locally in their ~/.ssh or listed in the units authorized_keys?
<miken> Ah - it uses your existing key (just reading https://jujucharms.com/docs/stable/getting-started - been a while since I've setup a new dev machine)
<jose> marcoceppi: ping
<jamespage> marcoceppi, hey - can we add vivid and wily series to the charms distro? I'd like to publish some charms under non-trusty namespaces for some bundles we're putting together for lxd
<rick_h_> jamespage: urulama can look if you need that in the store ^
<jamespage> rick_h_, ta
<jamespage> rick_h_, specifically wily and vivid :-)
<urulama> jamespage: vivid is ok, we'll need to update CS for wily
<pitti> hello
<pitti> I ran "juju destroy-service debci-web-swift", but it stays around as "life: dying", so that I can't re-deploy it
<pitti> (juju 1.22.6 on trusty)
<pitti> what can I do to really kill this service?
<pitti> (the underlying machine is already gone)
<lazypower> pitti, is there a relationship trapped in error on a remote unit?
<lazypower> that would cause the principal service to remain in the topology as life:dying
<pitti> lazypower:
<pitti>     relations:
<pitti>       juju-info:
<pitti>       - ksplice
<pitti>       - landscape-client
<pitti> lazypower: I tried to "remove-relation" both after destroy-service, but that didn't help
<pitti> i. e. "juju remove-relation debci-web-swift landscape-client" (and same for ksplice)
<lazypower> hmm. I've run into some edge cases where that happens when i destroy the machine out from under the service, and relations were still present
<lazypower> but that was on 1.22.x i haven't seen that behavior in 1.24+
<pitti> http://askubuntu.com/questions/365724/juju-remove-units-stuck-in-dying-state-so-i-can-start-over  has no answer
<pitti> lazypower: I am on 1.22
<lazypower> ah
<lazypower> welp
<lazypower> The only way i was able to reconcile was by tearing the env down and standing it back up :|
<pitti> oh! https://jujucharms.com/docs/stable/charms-destroy#state-of-
<pitti> lazypower: yeah, I don't really want to do that if I can avoid it
<lazypower> yeah, i hope that works
<lazypower> you'll need to resolve every unit it was related to to make sure you grab the hook in error
<pitti> $ juju resolved debci-web-swift/0
<pitti> ERROR unit "debci-web-swift/0" not found
<pitti> hm, that doesn't work
<pitti> (nor without /0)
<lazypower> try ksplice and landscape-client
<pitti> $ juju resolved ksplice
<pitti> error: invalid unit name "ksplice"
<pitti> $ juju resolved ksplice/0
<pitti> ERROR unit "ksplice/0" is not in an error state
<pitti> hm, so why does it already remove the machine and unit when it complains afterwards that it still haves relations?
<pitti> "has" (urgh)
<lazypower> when you pass a --force, its going to assume you know what you're doing and force it.
<lazypower> there's an inconsistency in the env now w/ those relations and no machine/service under it, and i believe this is due to some oddity of how it was done. In 1.24+ this has been resolved
<pitti> (I didn't pass --force)
<pitti> lazypower: ah, good to know it's resolved in later versions
<pitti> I "resolved" every other instance of ksplice and landscape-client now, and yay, it's gone
<lazypower> thats good news :)
<lazypower> one of those relations was in an error state, keeping the service definition around in the environment
<pitti> lazypower: indeed the ksplice subordinate on a completely different unit was in "agent-state-info: 'hook failed: "config-changed"'
<pitti> lazypower: thank you!
<lazypower> any time :)
<marcoceppi> jamespage: ack, will update shortly
<jamespage> marcoceppi, ta
<marcoceppi> jamespage: vivid and wily added
<jamespage> marcoceppi, awesome-o!
<jamespage> urulama, how long will the charm-store bits take for wily?
<jamespage> urulama, (no sudden rush but would like to get something up this week)
<urulama> jamespage: we're in the middle of deployment of all jujucharms.com services ... but that update should be small enough. i'll try to squeeze it in this week
<jamespage> urulama, ta
<urulama> jamespage: but before that, you're free to use vivid charms
<jamespage> urulama, ack - thanks
<urulama> jamespage: fyi, the output of new bundle deployment (openstack-base) with bundles supported in core: http://pastebin.ubuntu.com/12513350/
<jamespage> urulama, all good then?
<urulama> jamespage: yes, just showing how new juju deploy "bundle" will look like
<jamespage> \o/
<marcoceppi> urulama: that output looks SO AMAZING
<marcoceppi> any idea when that will be available?
<urulama> 1.26
<urulama> marcoceppi: ^
<lazypower> whoa, that *does* look nice
<urulama> frankban: ^
<lazypower> urulama, looking at this, it's also idempotent? so i can deploy 2 bundles with the same service, and it just reconciles between what's deployed + what's declared?
<frankban> cool
<urulama> lazypower: yes
<lazypower> hot diggity dog thats awesome
<frankban> lazypower: it does its best
<urulama> lazypower: look at the second call ... "reusing ..."
<urulama> lazypower: from line 132
 * lazypower dances a happy dance
<lazypower> i see it
<ddellav> thats very very nice. I can't wait to use that
* lazypower changed the topic of #juju to: Welcome to Juju!  || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<mbruzek> cory_fu: If I wanted to build an interface for the reactive framework, where would I start?
<cory_fu> mbruzek: The docs (http://pythonhosted.org/charms.reactive/#relation-stubs) are pretty complete, but you likely want to start from one of the examples on http://interfaces.juju.solutions/
<cory_fu> mbruzek: The pgsql is the most complete example, I think
<cory_fu> mbruzek: Though the provides.py in mysql might be slightly easier to follow
<mbruzek> cory_fu: These are all new to me and hard to follow, @not_unless ?
<cory_fu> From the docs: "Assert that the decorated function can only be called if the desired_states are active."
<cory_fu> That one is actually entirely optional and is more intended to make the code easier to read and inspect.
<cory_fu> mbruzek: http://pythonhosted.org/charms.reactive/charms.reactive.decorators.html will of course be very helpful
<mbruzek> how is not_unless different than when?
<cory_fu> And then the other docs page you'll care about for creating a relation stub is http://pythonhosted.org/charms.reactive/charms.reactive.relations.html
<cory_fu> mbruzek: not_unless does not trigger the handler.  It is nothing more than an assertion
<cory_fu> http://pythonhosted.org/charms.reactive/charms.reactive.decorators.html#charms.reactive.decorators.not_unless
<cory_fu> Again, for getting started, you can probably just ignore not_unless
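A minimal sketch of the distinction cory_fu is drawing, assuming the charms.reactive API from the docs linked above; the relation stub and its configure() method are hypothetical.

    from charmhelpers.core import hookenv
    from charms.reactive import when, not_unless, set_state


    @when('database.connected')           # @when triggers the handler once the state is active
    def configure_database(db):
        db.configure('mydb')              # hypothetical method on a relation stub
        set_state('database.configured')


    @not_unless('database.configured')    # @not_unless only asserts; it never triggers anything
    def restart_service():
        hookenv.log('restarting service now that the database is configured')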
<mbruzek> Ok
<cory_fu> mbruzek: I'm happy to answer questions and help you get started, but I am still sick and sort-of swapping today, so I might be a bit slow at times to reply
<mbruzek> cory_fu: I didn't know you were out today, I saw you responding in other channels so I figured you were working
<cory_fu> But feel free to ping me if you have questions
<mbruzek> Let me RTFM and get back with questions
<cory_fu> Yeah, like I said, "sort-of swapping."  :p
<lazypower> cory_fu, looks like it got the majority of us
<cory_fu> Damn
<lazypower> yeah, half of eco is down with this gnarly bug
<lazypower> *over half
<g3naro> quit
<g3naro> exit
<Slugs_> i was logged into a juju machine running one of the ceph services and it issued the shutdown command
<Slugs_> is there a way to start it back up?
<marcoceppi> Slugs_: shutdown on the machine?
<apuimedo> lazypower: how was the Charm summit?
<Slugs_> marcoceppi: yes
<marcoceppi> Slugs_: is it a cloud instance?
<Slugs_> openstack private cloud single install
<Slugs_> sorry for the lack of information
<marcoceppi> Slugs_: I don't understand, so what was the machine? A KVM? An openstack instance? a machine in MAAS?
<marcoceppi> Slugs_: basically, you just need to "power" that machine back on
<marcoceppi> Slugs_: juju should resume from there
<Slugs_> ah ok, so juju has no control to power it on
<Slugs_> i need to do that from somewhere else
<marcoceppi> Slugs_: yes, juju is just a series of agents running in a machine, it does things like starting and stopping instances but only as a part of asking the substrate (aws, openstack, maas, etc) for a machine and stopping machines as a part of removing it
<marcoceppi> Slugs_: having a juju machine on/off command isn't a terrible idea, we've just not had anyone ask for it yet ;)
<Slugs_> i see to ask out of the box questions, however this is from lack of knowledge
<Slugs_> s/see/seem
<marcoceppi> Slugs_: yeah, so juju talks to providers (aws, maas, etc) to get machines, then it does everything it needs to on top of them but doesn't really interact with the providers much out side of "gimme machine" and "get rid of machine"
<Slugs_> yes this makes sense
#juju 2015-09-22
 * arturt is away: I'm busy
<lazypower> apuimedo, it was excellent! Next time, I hope you can make it :)
<apuimedo> lazypower: I hope so too ;-)
<apuimedo> lazypower: so who is going to the OSt Summit?
<apuimedo> jamespage will surely be there
<jamespage> I will yes
<apuimedo> some presentation on nova-lxd, jamespage?
<jamespage> apuimedo, I submitted talks, but they did not get chosen
<jamespage> apuimedo, but I expect I'll still get an opportunity...
<apuimedo> jamespage: lightning talks?
<jamespage> maybe
<jamespage> gnuoy, did you have any opinion on https://code.launchpad.net/~james-page/charms/trusty/neutron-api/wily/+merge/271917 ?
<gnuoy> jamespage, it's ok
<gnuoy> I can't say I'm over the moon about abusing the interpreter to install packages
<jamespage> gnuoy, its horrid
<jamespage> but probably required
<jamespage> I'm open to suggestions
<gnuoy> jamespage, I wonder whether it's clear to actually use the install hook for this. Remove the install symlink, replace with a bash script and then call the old renamed install function
<gnuoy> s/clear/clearer/
<jamespage> gnuoy, that might be a bit neater yes
<jamespage> so splitting any install related activities from interpreter resolution
<gnuoy> yeah
<jamespage> gnuoy, so something like - https://code.launchpad.net/~james-page/charms/trusty/neutron-api/wily/+merge/271917
<gnuoy> jamespage, exactly (although line 25 looks wrong)
<jamespage> gnuoy, hmm - really?
<gnuoy> jamespage, which method in  ./hooks/neutron_api_hooks.py is going to be called?
<jamespage> I think the basename will remain install so that might dtrt
<gnuoy> oh, ok
<gnuoy> I stand corrected
<jamespage> just testing that now
<jamespage> gnuoy, nope that does not work as I thought
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/neutron-api/wily/+merge/271917
<jamespage> revised
<jamespage> and tested :-)
<gnuoy> jamespage, approved. Not sure if you want to wait on osci
<beisner> gnuoy, jamespage - on that ^ n-api test, precise-icehouse timed out (45 minutes), stuck @:  neutron-openvswitch/0 maintenance executing (config-changed) installing charm software;   not sure if that was an infra hiccup or indicative of a real issue.
<jamespage> beisner, did not touch that bit honest guv
<beisner> gnuoy, jamespage - a re-run of that is also stuck in the same state, on precise-icehouse.
<beisner> this one, caught in the act if you want to poke at it.
<beisner> jamespage, gnuoy - the hang happens when this happens:
<beisner> 00:12:56.675 2015-09-22 13:19:34  Adding relation neutron-api:neutron-plugin-api <-> neutron-openvswitch:neutron-plugin-api
<redelmann> Is there a better way to do this: juju debug-log -n 100 | grep --line-buffered my-service-name | awk '{$2=$3=$4=$5=$6=""; print $0}'
<redelmann> ??
<marcoceppi> redelmann: wow that's quite a string of commands
<marcoceppi> there's a way to filter debug-log, but I still haven't quite figured it out
<beisner> jamespage, observing some new wily deploy woes, assume that is the py2 effect?    ie. 2015-09-22 15:08:31 INFO install ImportError: No module named apt_pkg
<jamespage> beisner, yes - I just stuck up MP for all OpenStack charms that resolved that problem
<jamespage> beisner, wily will lose py2 on the cloud-image, it's already losing some implicit depends which charms assume are installed
<beisner> jamespage, right-o.   thanks, just wanted to make sure i wasn't bumping into something else.
<beisner> fun times!
 * beisner re-coffees
<Prabakaran> Hello Team,
<marcoceppi> jamespage: do you have a bug for adding things like apt_pkg or py2 back into cloud images by way of juju?
<bloodearnest> bundletester from pypi seems broken for me. 0.5.4/0.5.3 fail to start, and any of the 0.5 series seems to get stuck in a recursive __getattr__ lookup.
<bloodearnest> is there somewhere else I should be looking for bundle tester
<bloodearnest> ?
<jamespage> marcoceppi, nope
<marcoceppi> jamespage: do you think it's worth having juju do that as part of its cloud init?
<jamespage> marcoceppi, I think we'll put in a install wrapper for this cycle
<Prabakaran> Hello Team, I am charming Platform RTM using the juju framework. When I follow the silent installation procedure, I am asked to give input (Yes/No in the GUI) to configure a database for rsyslog-mysql with dbconfig-common. As per my requirement I will have to give input as "No". I have tried to use pipe and expect in order to avoid that prompt but unfortunately it didn't work out. Please advise how to avoid configuring rsyslog-mysql p
<Prabakaran> Could someone help me on my query regarding prompt?
<rick_h_> Prabakaran: hmm, how did you give the requirement? this is during the install step? I'm assuming it's the package install?
<rick_h_> Prabakaran: http://www.microhowto.info/howto/perform_an_unattended_installation_of_a_debian_package.html walks you through helping to provide commands at install time of packages that might help?
<Prabakaran> yes, it is a package install along with my product. Here, for the database, I have used a separate charm so that we don't need to configure the database for rsyslog-mysql with dbconfig-common. That is why I have to give No in the GUI prompt
<Prabakaran> Thank you rick_h .. I will refer to the link which you sent..
<rick_h_> Prabakaran: ah makes sense. Yes, I think you'll need to get the config passed during the install step like that doc mentions. Good luck!
<Prabakaran> ya k i wil try.. Thanks for ur help
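A minimal sketch of the preseed approach from the page rick_h_ linked, assuming the dbconfig-common question is named rsyslog-mysql/dbconfig-install (the real name can be checked with `debconf-show rsyslog-mysql`): answer it up front, then install with a non-interactive frontend so the hook never sees the prompt.

    import os
    import subprocess

    # Assumed debconf key; verify it with `debconf-show rsyslog-mysql` on a test unit.
    preseed = b"rsyslog-mysql rsyslog-mysql/dbconfig-install boolean false\n"
    subprocess.run(['debconf-set-selections'], input=preseed, check=True)

    # Install without any debconf prompts.
    env = dict(os.environ, DEBIAN_FRONTEND='noninteractive')
    subprocess.run(['apt-get', 'install', '-y', 'rsyslog-mysql'], env=env, check=True)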
<tsakas> Hi everyone. Trying to deploy nfs charm on Azure I see that the agent state on the machine for a very short period becomes dead (right after VM provisioning) and the nfs service  status becomes unknown. Has anyone seen this before?
#juju 2015-09-23
<jamespage> gnuoy, based on your +1 of my wily python bootstrap fix yesterday, are you ok for me to land the same change across all of the charms with appropriate unit/lint/amulet +1's?
<jamespage> I could give you all of the MP's but there are quite a few
<gnuoy> jamespage, RS +1 for the rest
<jamespage> gnuoy, ta
 * jamespage goes to do some landing
<jamespage> thedac, coreycb, ddellav, gnuoy: working through action upgrade landings pm today
<coreycb> jamespage, ah great thanks
<coreycb> jamespage, wolsen's been reviewing too. note that we can't call register_configs() in the action, we need to import CONFIGS instead so some of the mp's need updates.
<jamespage> coreycb, so like - https://code.launchpad.net/~thedac/charms/trusty/swift-storage/action-managed-upgrade/+merge/272046
<jamespage> that's ok right?
<coreycb> jamespage, yep that one's good
<jamespage> coreycb, ok - I still see some ones need updating
<thedac> jamespage: swift-storage is good. The rest of mine still need fixing. I got stuck yesterday on nova cc. I can't quite figure out what I need to mock to allow importing of the openstack_upgrade. I may skip this one and get the others ready.
<jamespage> thedac, ack - and where did you guys get to last week on service status stuff?
<jamespage> (just catching up)
<thedac> The CH bits are landed. And I demonstrated to our team but no further. Action upgrades was the priority
<thedac> openstack-dashboard should be good now
<thedac> and neutron-gateway should have started with those. :) Now back to the grind with nova cc
<thedac> jamespage: nova-cloud-controller done. I finally looked at the git_install example which laid out exactly what I needed. :p
<cholcombe> anyone know if you can deny juju storage from detaching a disk?
<rick_h_> cholcombe: hmm, does storage support blocks?
<cholcombe> i have no idea
<cholcombe> i'm thinking when a disk is being detached i might want to prevent it until i know it's safe
<cholcombe> rick_h_, if i'm down to 2 out of 3 disks in my replica set and another disk is being detached i want to stop that
<cholcombe> rick_h_, that reminds me i need to create another task for myself
<rick_h_> cholcombe: if you could it would be a block https://jujucharms.com/docs/1.24/juju-block
<rick_h_> cholcombe: I just noticed I don't have the PPA and 1.25 on this machine though so not able to test that list out
<cholcombe> i'm running 1.24.x
<rick_h_> cholcombe: if not, might be worth a juju email thread.
<rick_h_> alexisb: ^
<rick_h_> cholcombe: yea, storage is a feature flag in 1.24 though right?
<cholcombe> rick_h_, thanks.  If we can block that will prevent possible disasters
<cholcombe> rick_h_, i believe so
<rick_h_> cholcombe: yea, not seeing it in the list and it's not part of the general config block-able stuff https://jujucharms.com/docs/1.24/config-general#alphabetical-list-of-general-configuration-values
<alexisb> cholcombe, there were many improvements to storage in 1.25
<alexisb> cholcombe, I can send you a list
<alexisb> and your best expert on this is axw
<rick_h_> cholcombe: alexisb I think the question is does storage support or plan on supporting blocks on it
<cholcombe> alexisb, ok yeah i was chatting with him last night.  he's on aussy time
<cholcombe> alexisb, i'll see if i can catch him later today
<rick_h_> cholcombe: if you chat with him let me know what you find out
<cholcombe> rick_h_, will do :)
<cholcombe> rick_h_, this will necessitate functions to tell whether it's safe to remove a disk.  That's tricky with ceph
<rick_h_> cholcombe: well what would happen is you'd block it, and if someone tried to do something that killed it, it would stop and say no
<rick_h_> cholcombe: and you'd have to manually remove the block to continue
<cholcombe> right
<cholcombe> rick_h_, it's best effort right?  people can still just yank the disk
<rick_h_> cholcombe: yes, this is all only within juju
<firl> Any openstack charmers on?
<marcoceppi> firl: there are dozens of us! I'm not one, but I'm dangerous enough to answer questions. Best to just ask
<firl> haha thanks marcoceppi
<firl> I specifically want to branch / know the best way of getting mellanox support into the trusty/juno stack for neutron,nova, and ceph
<marcoceppi> firl: well first off, all the core openstack charms are the same codebase, so the charms in trusty, vivid, precise, wily, etc are the same and that same code base models strategies for Icehouse -> Liberty
<firl> it's all in bazaar too?
<marcoceppi> firl: the development workflow is modeled here: https://wiki.ubuntu.com/ServerTeam/OpenStackCharms
<marcoceppi> firl: it's all in bzr atm, but they are moving to include the charms in git, not sure the process of that beisner coreycb jamespage ^?
<firl> thanks marcoceppi for the link, I don't know the best way to integrate the mellanox side. I figured I would just make it a boolean config value with sane defaults to start
<coreycb> firl, marcoceppi: james is working on moving the charms to git but I don't think we're there yet.  so I'd use bzr for now.
<firl> kk
<coreycb> firl, jamespage or gnuoy would be best to ask about mellanox but you might need to hit them up earlier tomorrow since they're probably EOD now
<firl> gotcha
<marcoceppi> bcsaller cory_fu I've got some questions about the charm gen/layers stuff around best practices
<cory_fu> Sure
<bcsaller> go for it
<bcsaller> I am sure there are some things we can do to make it easier
<marcoceppi> bcsaller cory_fu so this is my first stab https://github.com/marcoceppi/mysql-charm-layer
<marcoceppi> one thing I tried to do was create a helpers directory in the reactive dir to house my charm specific code that I didn't want cluttering my mysql.py file in reactive
<marcoceppi> but that didn't work for imports, so I put it in lib/helpers, though I can see this as being potentially a problem/conflict
<marcoceppi> so I was curious if there was a pattern already for where to house this code, the idea being that they're generic MySQL helpers (install_mysql, create_database) that other layers could utilize via importing
<marcoceppi> these are almost things that were once in charmhelpers.contrib.mysql actually
<cory_fu> I think lib makes sense, then.  Maybe lib/helpers/__init__.py could be created in base, if that's a common namespace, though you could also have used mysql_helpers
<marcoceppi> I despise underscores in general, so I just used a directory to namespace, but to your point I could create mysql_helpers.py in lib?
<bcsaller> marcoceppi:  off the top of my head there are a few ways this could play out. One, we talked about namespaces for newer charm helpers, so they can be different repos but still packaged and pushed to pip. In that case you might add a pip import trigger to your layer in /lib for example
<bcsaller> marcoceppi: you can see the base layer does this to include its deps
<bcsaller> I wouldn't mind if CH made its own namespace and other packages added to that
<marcoceppi> bcsaller: I did see that, these aren't actually in the contrib library, and while somewhat generic they are versioned/released with the layer which is why I opted to have it as an included dep
<bcsaller> ahh ok
<marcoceppi> embedded*
<cory_fu> marcoceppi, bcsaller: We could certainly have a lib/charms/ namespace directory in base so that layers could easily add packages to that namespace without creating a separate, external library
<bcsaller> cory_fu: or allow layers a helper that could help adjust the python path, but that might be too much
<marcoceppi> cory_fu: so I could just have lib/charms/mysql ? alongside reactive?
<cory_fu> Right
<marcoceppi> that sounds interesting
<cory_fu> Actually, with how the libs are pulled in from pypi, I don't think we'd even need to change the basic layer
<cory_fu> I think it would probably just work if you created those folders in your mysql layer
<bcsaller> yeah, it should
 * marcoceppi tries that
<cory_fu> marcoceppi: So, just change lib/helpers to lib/charms and drop lib/helpers/__init__.py
<marcoceppi> cory_fu: I mean, it worked
<marcoceppi> let me change the import calls and deploy
<marcoceppi> I love charm inspect
<bcsaller> cool
<cory_fu> I would love it more if we could get it to work with paging (e.g., less) ;)
<cory_fu> Even if the paging was built-in
<marcoceppi> bcsaller cory_fu yeah, that worked great, upgrade charm ran cleanly so new imports worked etc
<marcoceppi> not sure if we want to document that as a best practice or not?
<bcsaller> cory_fu: I could patch that I think, blessings does tty detection (which less fails) but you can disable that
<cory_fu> bcsaller: Maybe a "--force-color" option that disables the auto-detect?
<bcsaller> yeah, I can look into that
<cory_fu> marcoceppi: I think that seems like a good practice, yeah
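A minimal sketch of the lib/charms layout being agreed on here; the helper is illustrative, not the actual mysql layer code. Helpers live in lib/charms/mysql.py and handlers import them as charms.mysql.

    # lib/charms/mysql.py -- layer-local helpers shipped alongside the charm
    import subprocess


    def create_database(name):
        """Create a database, assuming root can authenticate over the local socket."""
        subprocess.check_call(
            ['mysql', '-u', 'root', '-e',
             'CREATE DATABASE IF NOT EXISTS `%s`;' % name])

    # reactive/mysql.py can then simply do:
    #   from charms.mysql import create_database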
<marcoceppi> cory_fu bcsaller I also noticed there are some patches in 1.6 branch for compose not released, should those be another patch release?
<marcoceppi> or do you have more you wish to land first?
<bcsaller> marcoceppi: there will have to be, one is an important bug fix, but we still might want to wait for the bikeshed on naming to resolve
<bcsaller> marcoceppi: I have a branch with the current naming in it, but I will also try to make inspect page better, we'll call it minor papercuts
<cory_fu> bcsaller: Is the bike-shedding not already resolved from on-high?  ;)
<cory_fu> bcsaller: And at any rate, just make aliases
<marcoceppi> yes, aliases with "deprecation" notice? and then never deprecate
<bcsaller> they are just aliases in setup.py now
<marcoceppi> you never truly finish bikeshedding
<bcsaller> https://en.wikipedia.org/wiki/Parkinson%27s_law_of_triviality but I do agree names can be important
<wolverineav> hey, this is openstack keystone related query - for HA, it mentions this: "VIP is only required if you plan on multi-unit clustering (requires relating with hacluster charm). The VIP becomes a highly-available API endpoint."
<wolverineav> i don't know what is the hacluster charm in this context. is it 'haproxy'?
<rick_h_> wolverineav: https://jujucharms.com/hacluster/
<wolverineav> ah, thanks rick_h
<rick_h_> wolverineav: np, good luck!
<wolverineav> somehow when you search 'hacluster' in charmstore, this one is not in the top 10
<rick_h_> wolverineav: ah, it's not fully gone through the review process so it's demoted.
<rick_h_> wolverineav: I'll look into that. thanks for the heads up
<wolverineav> rick_h_: that explains it :)
#juju 2015-09-24
<tvansteenburgh> rick_h_: what am i doing wrong here? https://api.jujucharms.com/charmstore/v4/bundle/mediawiki-single-8
<rick_h_> tvansteenburgh: https://api.jujucharms.com/charmstore/v4/mediawiki-single/meta/any
<rick_h_> tvansteenburgh: no need for /bundle
<rick_h_> tvansteenburgh: and you have to have an endpoint (meta/any is the simplest, see /meta for the list of supported)
<tvansteenburgh> rick_h_: thanks, was tripped up by paragraph 2 here https://github.com/juju/charmstore/blob/v5-unstable/docs/API.md#intro
<rick_h_> tvansteenburgh: yes, while technically bundle is implemented as a series, it's not meant to ever be exposed to end users or the api
<tvansteenburgh> ack
<rick_h_> tvansteenburgh: but yea, that paragraph should ignore that vs noting that
<rick_h_> though I guess this is charmstore docs...which is implementation
<rick_h_> tough line there
<tvansteenburgh> rick_h_: is there a python client for this already?
<rick_h_> tvansteenburgh: yes sec
<rick_h_> tvansteenburgh: https://github.com/juju/theblues has the python client that is used in the storefront
<rick_h_> tvansteenburgh: and please feel free to file bugs/patches
<rick_h_> tvansteenburgh: and we might end up trying to get this into jujulib one day, but for now it's on its own
<tvansteenburgh> rick_h_: excellent, thanks
<rick_h_> tvansteenburgh: storefront = web front end on jujucharms.com
<firl> anyone have experience hooking up logstash with apache logs?
<lazypower> firl, i have done this. which version of the charm are you using? there's currently 2 completely diverged charms. 1 that I am keeping on life support, and one i would recommend that is under maintenance by jrwren
<firl> lazypower: I haven't hooked it up yet. I just deployed 4 nodes of a web frontend using my charm
<firl> and wanted to know how hard / easy it was to get a logstash setup working across it
<lazypower> jrwren, you've added combined apachelog grok filters setup haven't you?
<jrwren> yes?
<lazypower> the certainty is comforting :D
<jrwren> firl: i'll start by saying, it ain't easy :)
<lazypower> actually, you have apache_access in here...
<jrwren> the grok filter is an easy part.
<lazypower> not the combined log
<lazypower> so, there's that
<jrwren> lazypower: look again.
<lazypower> http://bazaar.launchpad.net/~evarlast/charms/trusty/logstash/trunk/view/head:/files/filter.conf  - is what i'm looking at
<lazypower> ah disregard, line 9 has the magic
<firl> haha
<jrwren> the thing is, I'm not aware of any apache charm which uses the log relation to expose to a logstash or beaver subordinate which logs to ship, so it's a largely unsolved convention in all juju charms.
<firl> so is there another easy way to aggregate apache logs using juju charms or should I try getting the logstash working?
<firl> gotcha
<lazypower> jrwren, we should write an interface stub for it
<lazypower> that way all you're doing is "filling in the blanks" in whatever consumer charm
<firl> it'd be nice if I could just have my charm expose the relation
<lazypower> firl, +1 to that. why eat an agent when your relation can do what it needs to do
<firl> ( I was kinda hoping it was that easy )
<jrwren> lazypower: yup. i have that, probably could be a lot better.
<lazypower> firl, it will be basically that easy. There's some dependency work to be done though.
<firl> gotcha
<lazypower> jrwren, correct me if i'm wrong, but beaver is the only *supported* shipping method atm right?
<jrwren> lazypower: supported by whom? me? :)  beaver is the most tested by me.
<firl> hah
<lazypower> jrwren, well the charm is in *your* namespace :)
<jrwren> lazypower: right, and for good reasons that it is not promulgated :)
<lazypower> jrwren, are we still working towards making that charm the ~recommended? its lightyears ahead of the revision i currently have on life support
<lazypower> IS is still submitting patches to that charm as well. the sooner we can deprecate it, the better we will be in terms of getting additional hands on helping w/ maintenance
<jrwren> lazypower: its taken a lower priority lately.
<jrwren> lazypower: we do want to get back to it.
<lazypower> understood, so long as its still on the list :)
<lazypower> firl, i would highly recommend tracking jrwren's logstash charm, and doing a beaver integration w/ the apache logs vs using whats currently in ~recommended
<firl> kk
<lazypower> I'll have to put in some time to add some relations + migration path for consumers of the currently recommended logstash charm, and if you go that route, you're likely to have an upgrade surprise.
<firl> So basically, expose the apache relation into my charm, add the beaver to the units, spin up a logstash and add the relationship
<lazypower> yep, that *should* (tm) be all you need to do. Add beaver + write the config for beaver w/ the logstash unit address + port for shipping.
<jrwren> right.
<lazypower> jrwren, when i'm not buried (ha!) i'll try to get some additional time in on this stack as well and contribute the legacy interfaces we will want to support moving forward. logstash-agent + lumberjack
<lazypower> i have a pretty good idea how we can do lumberjack integration + ssl shipping as well
<lazypower> self signed cert, but it'll work regardless. Just means we're shipping a CA cert from the leader and normalizing on that certificate
<firl> thanks guys, looks like it's trusty only. I will have to wait a couple weeks until the code base gets upgraded for trusty
<jrwren> lazypower: cool. i'm so negative on jvm that what I'd really like to do is move to Heka
<jrwren> firl: yeah, sorry. I've not been on precise for quite a while.
<lazypower> jrwren, whit did some work there too. Heka is great for log shipping but writing that config file is hairy
<firl> jrwren: haha … I hadn't until I started helping a side project get up to date. they are feeling the upgrade pains now
<lazypower> jrwren, i think i'd rather use the logstash-agent (? i forget which one is the golang bin) and use the recommended plugin
<jrwren> firl: yeah? what side project?
<firl> jrwren: helping get a php website up on amazon using juju and charms. they were on rackspace and could only scale vertically
<jrwren> lazypower: if you can solve the ssl issue with logstash-agent (lumberjack?) you'd be my hero. :)  its a bit convoluted.
<lazypower> jrwren, challenge accepted... in due time.
<jrwren> firl: that sounds fun! :)
<firl> jrwren: lol, getting mellanox support on openstack sounds more fun currently. But yeah, juju has helped quite a bit in getting some standards and environments managed for the side project
<coreycb> jamespage, did you have a comment on keystone action-managed upgrade?
<coreycb> ah maybe amulet failures
<jamespage> coreycb, yeah - hence the recheck
<bdx> hatch: you around?
<hatch> hello bdx
<bdx> hatch: do you know how socket_url is set?
<bdx> hatch: in juju-gui
<hatch> we've actually been doing a bit of work around that of late, I'll have to refresh my memory on that version
<bdx> app.env.getAttrs() returns an object, one attribute of the object is socket_url, which is set to "wss://10.16.100.123/ws/environment/46da5e13-3bbe-4751-8935-e113fd4a0196/api"
<hatch> https://github.com/juju/juju-gui/blob/develop/app/app.js#L693
<bdx> hatch: should socket_url be indicative of my bootstrap node's ip address?
<hatch> bdx: it should point to the gui instance
<bdx> oh ok
<hatch> as it needs to talk through its server
<hatch> still can't connect hey?
<bdx> no
<hatch> I'm guessing that there is a communication issue between the gui instance and your bootstrap node
<hatch> which we should be surfacing but aren't
<hatch> if you log into it, there should be a log in the upstart logs
<hatch> maybe there is an indication to the problem in there
<bdx> ok...I feel like I've checked them out and didn't see anything jumping out at me ....is there something I should be looking for in particular?
<hatch> honestly I have no idea, I have never heard of this issue before :)
<hatch> I asked a couple others, just waiting to see if they have any ideas
<bdx> hatch: great, thank you
<frankban> bdx: could you please try if the ws connects in incognito mode?
<bdx> yes, omp
<bdx> frankban: nothing
<bdx> frankban: same result
<frankban> bdx: do you have python available?
<frankban> bdx: to check that the ws is listening
<bdx> frankban: of course
<frankban> bdx: could you please pip install websocket-client?
<bdx> yes, done
<hatch> thanks frankban :) bdx he is much more familiar with the guiserver side than I am :)
<bdx> hatch: awesome, thanks for your help!
<frankban> bdx: python -c 'import websocket; websocket.create_connection("wss://10.16.100.123/ws/environment/46da5e13-3bbe-4751-8935-e113fd4a0196/api", sslopt={"cert_reqs": 0, "ssl_version": 3})'
<bdx> frankban: I can successfully create a ws object
<frankban> bdx: so the problem is that the GUI is not able to connect to that endpoint?
<bdx> If I assign the object to a variable e.g. ws = websocket.create_connection("wss://10 ......), and then do a result =  ws.recv(), ws.recv() never returns....
<bdx> is this behavior expected?
<frankban> bdx: yes
<frankban> bdx: you should send something before receiving, like: {
<frankban>         'Type': 'Admin',
<frankban>         'Request': 'Login',
<frankban>         'Params': {'AuthTag': 'user-admin', 'Password': password},
<frankban>     'RequestId': 1}
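A minimal sketch pulling frankban's two snippets together: connect with websocket-client as in the python -c line above, send the Admin Login request, then recv(). The endpoint URL and password are placeholders; redact real credentials before pasting anywhere.

    import json
    import websocket  # pip install websocket-client

    url = "wss://10.16.100.123/ws/environment/<env-uuid>/api"  # placeholder endpoint
    ws = websocket.create_connection(url, sslopt={"cert_reqs": 0})
    ws.send(json.dumps({
        "Type": "Admin",
        "Request": "Login",
        "Params": {"AuthTag": "user-admin", "Password": "secret"},  # placeholder password
        "RequestId": 1,
    }))
    print(ws.recv())  # recv() only returns once the server answers the login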
<frankban> bdx: maybe I am missing some background, is the GUI not connecting to the socket?
<bdx> I mean....I feel like I would be seeing errors in the console if it wasn't.....
<frankban> bdx: yes, so what are the symptoms?
<bdx> frankban: http://cl.ly/image/0h1N3P3k0Z28
<frankban> bdx: I see, no services in the canvas
<bdx> frankban: I can deploy juju-gui to lxc, baremetal, or kvm, no matter what it won't display my environment details...
<bdx> exactly
<frankban> bdx: could you please run this script passing your env name? http://pastebin.ubuntu.com/12544350/
<bdx> frankban: done. What am I looking for here?
<frankban> bdx: also, i see an error notification on your GUI, what does it say?
<frankban> bdx: could you please paste the output of the script?
<frankban> bdx: oh wait
<bdx> http://cl.ly/image/0G2l290Y140J
<frankban> bdx: the output will include your env password, so you might want to remove that
<bdx> the script runs indefinitely
<frankban> bdx: cool that error is unrelated
<frankban> bdx yes, just the initial output is ok
<frankban> bdx: the script watches the juju env in a similar way than the GUI does
<bdx> frankban: nice, ok : http://paste.ubuntu.com/12544391/
<bdx> nice!
<bdx> finally some introspection.....
<frankban> bdx: so you are getting a "unit not found" error from the watcher, and that would explain why the GUI is not showing services
<frankban> bdx: juju version?
<bdx> frankban: totally, nice.
<hatch> yay progress! :)
<bdx> 1.24.5-trusty-amd64
<frankban> bdx, hatch: looks like https://bugs.launchpad.net/juju-gui/+bug/1485249 to me, and looks like a juju-core bug
<mup> Bug #1485249: Juju gui is not loading. Because: "Error":"unit not found" <juju-gui:New> <https://launchpad.net/bugs/1485249>
<frankban> bdx: simple fix would be updating juju version
<frankban> hatch: I need to go now, could you please take care of triaging that bug and gather info from bdx?
<hatch> you bet
<hatch> thanks a lot frankban
<bdx> hatch, frankban: if I update juju-core, do I also need to update juju on the machines where the agent runs?
<frankban> bdx: yes I guess you need to run "juju upgrade-juju" but I never ran that, so hatch can be more helpful there (or core devs)
<frankban> done for the day, good night!
<hatch> actually I've never run it either :D
<bdx> frankban: thanks alot! night!
<frankban> yw
<bdx> hatch: so I have upgraded juju such that juju-core and all of my agents are running 1.24.6
<bdx> hatch: the issue persists after the upgrade even
<hatch> bdx: alright so it appears that the issue has not yet been resolved in juju-core
<bdx> hatch: gotcha, I see that.
<hatch> could I get you to add your report to that bug frankban linked?
<hatch> I can then pass it up the food chain to get it looked at
<hatch> (I'll do that regardless, but if you comment you'll be notified)
<hatch> :)
<bdx> hatch: Totally, will do
<hatch> thanks, sorry about the bug
<bdx> hatch: Ok, bug updated. Let me know if you need any other info. Thanks for your help!
<hatch> excellent thanks
<beisner> o/ well hi there bdx, firl
<firl> hey beisner, howâs it going?
<beisner> ahh pretty good here, thanks firl
<beisner> firl, infinibanding yet?
<firl> I sent a message to the mailing list, mark responded saying some people might reach out
<firl> but I think I am going to start by cloning the charms I need and one by one hooking it up proper
<beisner> oh yes i recall now seeing that.
<firl> So I might ask you some questions on the sanity of my environment since you've tested everything under the sun haha
<beisner> ha, feel free to ask.  if i don't know i'll try to find info.
<beisner> coreycb, jamespage - neutron-gateway/next unit tests start failing @ rev 142
<beisner> coreycb, oh that one.  well now my branch fails as i rebased ;-)  http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-gateway/next/revision/142
<beisner> fyi output @ http://paste.ubuntu.com/12546097/
<niedbalski_> aisrael, ping
<coreycb> beisner, oops, yeah that commit fixed amulet but broke unit
<coreycb> beisner, ok that's fixed now in next
<aisrael> niedbalski_: pong
#juju 2015-09-25
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/keystone-v3-support/+merge/272355
<jamespage> gnuoy, http://www.florentflament.com/blog/setting-keystone-v3-domains.html
<urulama> mgz: hey ... could we rung "chicago-cubs" branch against CI please? last run was 2 days ago and it's been there from 9/11 ... now, hm, that date might be a sign by its own :)
<urulama> s/rung/run
<mgz_> urulama: sure
<urulama> mgz: ty
<mgz_> master is being tested at present, I can make cubs happen next
<urulama> mgz: np
<beisner> jamespage, updated n-g tests ready for review/landing @ https://code.launchpad.net/~1chb1n/charms/trusty/neutron-gateway/amulet-update-1508/+merge/271960    plz & thx!
<beisner> jamespage, fyi n-ovs tests complete @ https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/dkms-12.04/+merge/272390
<bdx> beisner: whats up man!? Any thoughts on the new MR? --> https://code.launchpad.net/~jamesbeedy/charms/trusty/nova-compute/next/+merge/272431
<beisner> hey bdx !   just wrapping up, going eow soon.   there are functional tests that are automatically queued up to run and report back with any breakage.  once we have lint + unit + amulet results passing, we can turn a human reviewer to it.
<beisner> bdx, thanks for retargeting that mp
<bdx> sweet. totally. thanks for the heads up!
<beisner> np sir!   o/  nice weekend, all
<lazypower> cheers beisner, enjoy your weekend
<beisner> ditto lazypower !
<lazypower> make sure you feed osci some snacks before you go :D
<lazypower> !oscisnack
<beisner> ha!  i've joked before that it sounds like someone's little puppy name.  but it's eating everyone's lunch already, and has an unhealthy appetite.  #thatqueue
<beisner> down, osci, down.
<beisner> \o
 * lazypower plays rimshot.wav
<kwmonroe> service A has 2 units.. when i add a relation between A and B, which private-address does B see?  is it the lowest unit num for A?
<marcoceppi> kwmonroe: both private addresses
<marcoceppi> kwmonroe: you get a relation-joined on B for each unit of A
<kwmonroe> oh - neat!  thx marcoceppi
<marcoceppi> relations are per service, but events are modeled per unit
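A minimal sketch of what that looks like in a classic charmhelpers-based hook, assuming a hypothetical 'db' relation: the -relation-joined hook fires once per remote unit, so B sees each unit of A's private-address in turn.

    import sys
    from charmhelpers.core.hookenv import Hooks, log, relation_get, remote_unit

    hooks = Hooks()


    @hooks.hook('db-relation-joined')  # 'db' is a placeholder relation name
    def db_relation_joined():
        # Runs once for every unit of the remote service, not just the lowest-numbered one.
        log('joined by %s at %s' % (remote_unit(), relation_get('private-address')))


    if __name__ == '__main__':
        hooks.execute(sys.argv)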
<arosales> may I just say
<arosales> watch juju status --format=tabular
<arosales> is awesome!
<arosales> especially when the charms have status
<arosales> thank you big data folks
<asanjar> you are welcome arosales
<arosales> asanjar, :-)
<arosales> asanjar, I'll see you next week
<asanjar> arosales: definitely ..
<arosales> asanjar, excellent. Looking forward to some interesting spark solutions
<asanjar> how about some sparkling solution
<cholcombe> i'm a bit confused how the charm subordinate works
<cholcombe> i declared my charm to be a subordinate of ceph and it says when i add relation that none exists
<cholcombe> i think i got it but i'm not sure it's right
<marcoceppi> cholcombe: you need to have an interface and scope defined for the subordinate relation
<cholcombe> is that defined in the super charm or the subordinate?
<marcoceppi> cholcombe: subordinate
<cholcombe> ok
<marcoceppi> as an example
<cholcombe> is it always a container scope?
<marcoceppi> yes
<cholcombe> so it doesn't matter if it's bare metal, vm, etc.
<marcoceppi> cholcombe: scope: container is just a thing that made sense when juju was created before the concept of containers was popular
<marcoceppi> scope: unit might be a better name for this now
<cholcombe> yeah i agree
 * marcoceppi files bug
<cholcombe> haha
<cholcombe> marcoceppi, if my hook fails to install when i add the subordinate relation can i still do the upgrade force+retry bit to try again?
<marcoceppi> cholcombe: I don't see why not
<cholcombe> i think juju is confused then
<cholcombe> upgrade works but retry fails saying it's not in an error state
<marcoceppi> cholcombe: what are you retrying? the subordinate or the primary?
<cholcombe> the primary
<marcoceppi> cholcombe: what's in error? the subordiante or the primary?
<cholcombe> the primary is saying hook failed 'install'
<cholcombe> so i tried to attach a subordinate to ceph and the install hook failed
<marcoceppi> huh, could you pastebin your juju status output and the commands you're running?
<cholcombe> now i'm stuck.  i could trash it but reinstalling ceph is quite slow
<cholcombe> yup
<cholcombe> marcoceppi, https://pastebin.canonical.com/140643/
<marcoceppi> cholcombe: ceph-cephdash isn't in error, ceph-metrics-collector/0 is, run resolved --retry on it
<marcoceppi> cholcombe: if you run `juju status --format=tabular` it's easier to consume the output
<cholcombe> ah there we go :)
<cholcombe> marcoceppi, thanks :)
<cholcombe> happy friday dude
<marcoceppi> cholcombe: cheers, np!
#juju 2015-09-26
<JerryK2> Hi! I am having a problem with a nova-cloud-controller charm - after executing "juju set nova-cloud-controller network-manager=Neutron" on a clean deploy the service fails to reconfigure and in the log I see KeyError: 'getpwnam(): name not found: neutron' . Please can anybody help me to figure out what's wrong?
<JerryK2> anyone ?
#juju 2015-09-27
<pmatulis> JerryK2: looks like you have some kind of object related to user 'neutron' that isn't set up properly
<pmatulis> maybe not user 'neutron' but you changed to neutron and failed to set up the requirements for it to run
<pmatulis> i think rabbitmq needs some special setup that it's currently missing
#juju 2016-09-26
<madhukar> Hi All, I am using juju 2.0-beta15. I am trying to deploy a charm bundle. Inside the charm bundle there are a few charms which I have pointed to my local charm directory.
<madhukar> This is how I have pointed to the local directory:
<madhukar>   cassandra:     charm: local:trusty/cassandra
<madhukar> and I have updated the $JUJU_REPOSITORY env variable
<madhukar> When I deploy this charm, I am getting the below error
<madhukar> ERROR cannot deploy bundle: cannot resolve URL "local:trusty/cassandra": unknown schema for charm URL "local:trusty/cassandra"
<madhukar> Am I missing something here
<madhukar> ?
<hloeung> madhukar: what does $JUJU_REPOSITORY point to?
<hloeung> and within $JUJU_REPOSITORY, is the charm present as 'trusty/cassandra' (so trusty directory followed by cassandra)?
<madhukar> ubuntu@juju-api-client:~$ echo $JUJU_REPOSITORY /home/ubuntu/juju/
<madhukar> yeah I have trusty directory followed by cassandra
<hloeung> hmm, not sure then. Maybe 'juju --show-log --debug' might show something more helpful
<madhukar> ok let me try that
<Rajith> hi, in ubuntu 14.04 I'm getting the error: The requested backend 'zfs' isn't available on your system (missing tools).
<Rajith> if I try to install zfsutils-linux, I get the error 'unable to locate package zfsutils-linux'.  Let me know how to install the zfs file system on ubuntu 14.04
<junaidali> Hi everyone, I'm deploying a single-controller cluster with juju 2.0 on xenial but percona-cluster seems to have issues. After some time, other openstack services are not able to connect to mysql. To solve it, i have to restart the mysql service. There isn't any networking issue as I had deployed the single-controller bundle with juju 2.0 on trusty several times on the same setup. Is anyone else facing this issue? or any idea what might be the cause?
<rock> Hi. On juju version 2.0-rc1-xenial-amd64 I want to deploy a charm bundle. The bundle contains several charms, and I want to point one of them at a local charm directory. How exactly should I edit the bundle.yaml for that?
<zeestrat> rock: Try just using "charm: /path/to/local/charm" instead of "charm: cs:xenial/ceph-mon-3"
<rock> zeestrat: Oh. Thank you.
<rock> zeestrat: Do I need to change $JUJU_Repository?
<rock> I mean, do I need to update the $JUJU_REPOSITORY environment variable?
<zeestrat> rock: I don't think so if you use an absolute path, but I can't say for sure. P.S. It can be helpful to run juju deploy with the --debug flag to get a bit more info on where Juju is looking for bundle/charms
<rock> zeestrat: OK. Thank you.
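A rough bundle fragment showing zeestrat's suggestion, with one application pointed at a charm directory on disk instead of the store; the path, application name, and the top-level key (services vs applications, depending on the juju/bundle version) are illustrative:

    services:
      ceph-mon:
        charm: /home/ubuntu/charms/ceph-mon   # absolute path to the local charm directory
        num_units: 3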
<rock> Hi. I have a question. I have openstack-on-lxd setup with juju version 2.0-beta15. So now I want to enable multipath on nova service deployed LXD container  and cinder service deployed LXD container using our own "cinder-storage" driver charm. But we are facing an issue. http://paste.openstack.org/show/582934/
<rock> how to resolve this please?
<rock> Hi. # apt-get install multipath-tools --yes   this command is failing on LXD containers. Can anyone please tell me the reason for this?
<rick_h_> rock: I'm not familiar with the package, but it looks like it might run afoul of the security settings/apparmor bits that allows lxd containers to be isolated cleanly
<rock> rick_h_: Thank you for your information. Do I need to change kernel-level security settings? But the same package works fine on 16.04 and 14.04 machines.
<rick_h_> rock: in a lxd container?
<rick_h_> rock: because containers have a shared kernel, there's a lot of work into locking things that could cause issues there.
<rick_h_> rock: it might be worth engaging the upstream lxd team on this for better details
<rock> rick_h_: Oh. Thank you. How can I approach the upstream lxd team?
<rick_h_> rock: check out https://linuxcontainers.org/
<rick_h_> rock: if you go into the lxd section there's irc, mailing list, etc
<rock> rick_h_: Thank you.
<zeestrat> rick_h_: Are you guys still aiming for compatibility between RC's so you can upgrade between them?
<rick_h_> zeestrat: yes, very much so
<zeestrat> rick_h_: Good to hear. Thanks.
<lazyPower> cmars: yo lmk when youre ready for another round of refresh, i think we have the ingress lb sorted as well
<cmars> lazyPower, cool. i'm having trouble with hacking on local builds of the master & worker charms. system seems to thrash like mad when i upload that 1gb resource :(
<lazyPower> well, yeah
<cmars> lazyPower, how do y'all develop on them?
<lazyPower> wait
<lazyPower> 1gb resource?
<lazyPower> you're repacking the release tarball right?
<cmars> lazyPower, um
<cmars> lazyPower, no?
<cmars> :)
<lazyPower> I sent over a bash script to take that kubernetes release tarball and split out a worker/master resource package :D
<cmars> LOL
<cmars> ok
<cmars> wow
<lazyPower> lol sorry i wasn't too specific on details, something about deadlines, no time, and lack of sleep
<cmars> no worries
<lazyPower> 1 sec i'll re-gist
<cmars> i'm just hackety-hacking on an experiment
<lazyPower> https://gist.github.com/fa4a1dca1d313967609bc07183bb272a
<cmars> lazyPower, thanks, will take a look
<cmars> otoh, i've possibly found a nice way to stress jujud for future profiling >:)
<lazyPower> haha sick
<lazyPower> cmars: yeah actually we can tank an rc1 container pretty consistently
<lazyPower> cmars: around the 30th or 40th deploy of these k8s bundles in succession the controller seems to go unresponsive during a resource upload, and it's not clear why
<lazyPower> i want to get more time to dig into it and fetch the logs and submit a bug, none to report so far...
<geetha> Hi, `juju get-config` command is unrecognized in juju-2.0 rc1.
<tvansteenburgh> geetha: it's just `juju config` now (changed in beta18)
<geetha> ok, thank you tvansteenburgh..:)
<madhukar> Hello
<rick_h_> howdy madhukar
<madhukar> I am trying to install juju 2.0. However when I try to add the ppa, its giving me the below error
<madhukar> ubuntu@juju-api-client:~$ sudo add-apt-repository ppa:juju/devel  Cannot add PPA: 'ppa:~juju/ubuntu/devel'. ERROR: '~juju' user or team does not exist.
<rick_h_> there's a firewall issue atm and wonder if that's causing you issues
<rick_h_> madhukar: the network is working to get corrected atm
<axino> yes this is because of a network outage we're having
<axino> we're working on it
<rick_h_> ty axino
<madhukar> Thanks for the update :)
<axino> sorry for the inconvenience
<madhukar> No Problem!
<madhukar> Any link from where I can track this issue?
<axino> madhukar: I'm afraid not
<madhukar> hmmm!   Is it possible for somebody to post an update in this group?
<rick_h_> madhukar: there's folks notifying on the launchpad and the juju-gui twitter handles
<rick_h_> madhukar: will do
<madhukar> ok thanks
* rick_h_ changed the topic of #juju to: Welcome to Juju! || Jujucharms.com unavailable due to firewall issue. || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions:  http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 rc release notes: https://jujucharms.com/docs/devel/temp-release-notes
<madhukar> rick: I m able to add the ppa and install juju 2.0. Thanks for the help!
<rick_h_> madhukar: glad it's coming back up
<lazyPower> rick_h_: ^5 on getting the topic
<stokachu> could probably remove that part now
<stokachu> seems it's all back up
 * rick_h_ thought he did
* rick_h_ changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions:  http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 rc release notes: https://jujucharms.com/docs/devel/temp-release-notes
<stokachu> \o/
<lazyPower> cory_fu: i wrote an action in python and need that magic syspath bit that reactive gives me. can you refresh me on what that looks like?
<lazyPower> oh i meant updating from beta18 in topic to rc
<lazyPower> stokachu: also, hey, we have new kubes
<lazyPower> stokachu: wanna give conjure a go and make sure it works as expected?
<stokachu> lazyPower: yea ill do a run now
<lazyPower> stokachu: ok hang on
<lazyPower> we have an edge channel bundle
<lazyPower> it fails to resolve elasticsearch when you punch in juju deploy cs:~containers/canonical-kubernetes --channel=edge though
<lazyPower> i have no idea why
<stokachu> lazyPower: i should update https://github.com/conjure-up/spells/blob/master/observable-kubernetes/metadata.yaml#L5 right?
<stokachu> point to canonical one?
<lazyPower> or just juju deploy canonical-kubernetes --channel=edge
<lazyPower> yep
<stokachu> ok
<mbruzek> I don't have any idea either, we do have elasticsearch-18 in the store, but I can't get it to deploy out of a bundle.
<lazyPower> observable-kubernets has been deprecated in favor of "Canonical distribution of Kubenetes"
<lazyPower> stokachu: make sure you use those words, but properly spell kubernetes
<lazyPower> it was apparently a big deal last week
<stokachu> cool ill get it renamed
<stokachu> deploying now
<stokachu> lazyPower: mbruzek http://paste.ubuntu.com/23234120/
<stokachu> so far so good
<mbruzek> stokachu: cool
<mbruzek> you attached the resources or is this a charm store deploy?
<stokachu> yea this is basically your juju deploy bundle
<lazyPower> nice
<lazyPower> i think
<lazyPower> :D
<stokachu> i can do the upgrade-charm with resources if you want
<stokachu> to test that
<stokachu> looks like i hit an issue with etcd though: http://paste.ubuntu.com/23234120/
<stokachu> http://paste.ubuntu.com/23234147/
<stokachu> ^ sorry that one
<mbruzek> stokachu: That is an old bundle, we have a new one that splits master / worker
<lazyPower> stokachu: yeah thats been fixed in the latest etcd. no longer running via rendered shell, i'm using the etcd python api libs
<stokachu> https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml
<lazyPower> stokachu: so, this bundle we push this afternoon will land in stable channel, should be nbd then.
<stokachu> thats the bundle im using, is there another one?
<stokachu> ah ok
<lazyPower> i dont think thats our edge channel bundle
<stokachu> yea it's just stable
<lazyPower> and the store makes it hard to view bundles in different channels
<stokachu> gotcha, yea conjure-up doesn't support edge either
<stokachu> ok just ping me here when that bundle is in stable and ill rerun
<lazyPower> ok will do stokachu, thanks for TAL so quickly
<kwmonroe> hey lazyPower mbruzek, kibana's action is called 'load-dashboard', but i see 'deploy-dashboard' in a couple places (https://jujucharms.com/u/containers/beats-core and chuck's blog: http://insights.ubuntu.com/2016/09/22/monitoring-big-software-stacks-with-the-elastic-stack/).  i don't mind fixing, but how would you like to do it?  symlink deploy->load in kibana, or update the beats-core readme and blog post(s)?
<mbruzek> kwmonroe: yeah write a bug for it, so we don't forget
<kwmonroe> ack
<mbruzek> kwmonroe: symlink would be fine so we don't have to change the blog, but we should also change the readme to be correct.
<mbruzek> so both
<mbruzek> good find, sorry for the bug
<lazyPower> kwmonroe: its a config option these days. probably prefer to load dashboards via config eh?
<kwmonroe> hard to say lazyPower, mostly because i don't know kibana very well.  dashboards via config might get dicey if i want beats today and beats plus something else tomorrow.  in that case, would i config set dashboards="foo bar beats"?  if i forget "beats" in that string, do those dashboards go away?  if dashboards are meant to be "load once, available forever", then i think an action is appropriate.
<kwmonroe> juju my $0.02
<kwmonroe> er, *just
<lazyPower> kwmonroe: its load once, available forever in both contexts
<lazyPower> we supported the config option for bundle deployment to setup the dashboards.. but you bring up an important note
<lazyPower> we have introduce immutability into our config
<lazyPower> *introduced
<kwmonroe> ssssshhhhhhhhhh... mbruzek will hear.
<lazyPower> it needs to be said
<CorvetteZR1> hi.  i got openstack up and running with openstack-base-xenial-mitaka charm
<CorvetteZR1> i can log into the dashboard, but when i go to containers, i get an error:  Unable to get the Swift container listing.
<CorvetteZR1> how do i configure this?  how do i log into the servers juju configured?  i can't ssh to them...
<lazyPower> CorvetteZR1: juju ssh charm/unit#
<lazyPower> eg: juju ssh swift/0
<CorvetteZR1> cool, gonna try that
<CorvetteZR1> trying to figure out which box it's on :D
<CorvetteZR1> ah, it's one of the ceph boxes which failed to deploy...
<CorvetteZR1> i can ssh to the node, thanks for the tip lazyPower
<CorvetteZR1> not sure if i'll be able to fix my issue from here, but i'll see what i can find :)
<lazyPower> CorvetteZR1: happy to help, just let us know, or reach out over the mailing list if nobody seems to be around that knows the answer
<lazyPower> protip: juju@lists.ubuntu.com
<CorvetteZR1> cool, thanks
<lazyPower> tvansteenburgh: standup ping
<charles2> nick lazyPower
<Siva> I used a yaml file to deploy with juju 2.0 and it worked fine
<Siva> I am trying the same yaml file with juju1.25 and deployed it using juju-deployer
<Siva> I am getting the following error
<Siva> ubuntu@juju-1-25-api-client:~$ juju-deployer -c contrail-trusty-3.0.2.1-4-liberty-cs-edit2.yaml -d 2016-09-26 19:58:24 [DEBUG] deployer.cli: Using runtime GoEnvironment on maas 2016-09-26 19:58:24 [ERROR] deployer.cli: Deployment name must be specified. available: ('machines', 'relations', 'series', 'services')
<Siva> Any idea what is wrong here?
<Siva> Any help is much appreciated
<rick_h_> Siva: last year we updated the bundle format to v4 and juju 2.0 uses that. See https://blog.jujugui.org/2015/08/13/bundles-bundles-bundles/
<Siva> does it mean juju-deployer does not support format v4?
<rick_h_> Siva: it does support the v4 format
<rick_h_> Siva: sorry, I read that backwards
<rick_h_> Siva: that you had an old bundle that juju 2.0 would not accept
<rick_h_> Siva: what version of 1.25? and maybe you need to s/applications/services in the bundle to get it to work on 1.25?
<Siva> This is my juju version
<Siva> ubuntu@juju-1-25-api-client:~$ juju --version 1.25.6-trusty-amd64
<rick_h_> Siva: right, what does the bundle look like?
<rick_h_> Siva: services were renamed applications in 2.0 so if it's "applications" I don't know if the deploy will remap that for you
<Siva> It looks like this
<Siva> http://pastebin.ubuntu.com/23235034/
<Siva> This is not the full yaml file but something like this
<Siva> This is not the full yaml file but something like the above
<Siva> rick_h_: Are u able to look at the yaml file pasted in pastebin?
<Siva> What should I change there to make it work with Juju 1.25
<lazyPower> stokachu: ping re: kubernetes being ready for a go
<Siva> rick_h_: Do you want me to paste the full yaml file? Please let me know
<rick_h_> Siva: hmm, ok yea that looks about right
<rick_h_> Siva: not sure on the series outside
<rick_h_> Siva: that might be what's confusing it
<Siva> Do you mean the 'series:trusty
<Siva> Do you mean the 'series:trusty' in the very first line?
<rick_h_> Siva: yes, try without that?
<Siva> OK. Tried. I am still getting the same error
<rick_h_> Siva: hmm, ok. so the error from the deployer is fussing about the keys
<Siva> What does that mean?
<rick_h_> Siva: sorry, on the phone. I meant to double-check that the keys are named to match the deployer error
<Siva> @rick_h_, yes all the keys are available in yaml file
<Siva> so don't know why it is complaining that it must be specified
<kwmonroe> Siva: are you running this?   juju-deployer -c contrail-trusty-3.0.2.1-4-liberty-cs-edit2.yaml -d
<kwmonroe> Siva: was able to start the deploy with your pastebin'd yaml.  looked ok to me (it eventually failed on undefined relations, but i attribute that to not being your full yaml file).  anyway, i'm running juju-1.25.6-trusty-amd64 and  juju-deployer-0.9.0~bzr193~58~ubuntu14.04.1
<Siva> kwmonroe:  yes I ran the command 'juju-deployer -c contrail-trusty-3.0.2.1-4-liberty-cs-edit2.yaml -d'
<Siva> I am using the following version of juju-deployer
<Siva> ubuntu@juju-1-25-api-client:~/.juju$ dpkg -l | grep deployer ii  juju-deployer                    0.6.4~bzr168~49~ubuntu14.04.1    amd64        A tool to deploy complex stacks of services using juju
<Siva> 0.6.4~bzr168~49~ubuntu14.04.1
<Siva> I am using the same version of juju 1.25 you are using as well
<Siva> How do I install the version of juju-deployer you are using?
<kwmonroe> tvansteenburgh: Siva's asking about juju-deployer v0.9.  i got that from your ppa.  is that the best place to grab it?
<kwmonroe> ...backscroll for context
<kwmonroe> Siva: unless tvansteenburgh says different, you can get 0.9 with:  add-apt-repository -y ppa:tvansteenburgh/ppa
<Siva> OK. I will try with the version you are using and see if it works
<Siva> I feel may be the version of juju-deployer I am using is having some issues
<kwmonroe> yeah Siva, the error message you got ("Deployment name must be specified"), sounds an awful lot like juju-deployer isn't recognizing your v4 bundle format.  not sure when that support was introduced, but i know that juju-deployer-0.9 definitely supports v4.
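For context, the top-level keys of a v4 bundle match the ones the deployer error lists (machines, relations, series, services); the older v3 format instead nests everything under a deployment name, which is why an old deployer asks for one. A minimal v4 sketch, with placeholder charms standing in for the services in Siva's bundle:

    series: trusty
    machines:
      "0":
        series: trusty
    services:
      mysql:
        charm: cs:trusty/mysql
        num_units: 1
        to: ["0"]
      mediawiki:
        charm: cs:trusty/mediawiki
        num_units: 1
    relations:
      - ["mediawiki:db", "mysql:db"]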
<Siva> kwmonroe: I destroyed my existing container and am doing it fresh. Give me 10 more minutes. I will update you
<Siva> sorry for the delay
<veebers> alexisb: Do you have the link for the bug filed by OIL you mentioned in the stand up? (race condition with adding/removing/listing models)
<alexisb> veebers, yes
<alexisb> veebers, https://bugs.launchpad.net/juju/+bug/1618212
<mup> Bug #1618212: juju models fails during model destruction <oil> <oil-2.0> <juju:Triaged> <https://launchpad.net/bugs/1618212>
<alexisb> thumper, I will jump back on the call
<veebers> alexisb: sweet thanks, I think this is the bug I see as well
<Siva> kwmonroe: Thank you so much. I am not getting that error anymore
<Siva> It is deploying it now
<Siva> so the version of juju-deployer you pointed out has support for the v4 bundle format
<kwmonroe> glad it's working Siva!
<magicalt1out> liar
#juju 2016-09-27
<stub> marcoceppi: hey. I'm just sorting the ntp related charms so good timing. I've just refreshed cs:~ntp-team/ntpmaster ready for promulgation. ntp next, so I'll review (ha) and merge your mp first
<stub> (still not sold on this -team vs -charmers, since the set of maintainers of the software vs the charm are totally distinct)
<marcoceppi> stub: I agree,but UX and design folks said -charmers rated poorly given they didn't understand what that was
<marcoceppi> stub: and the majority of charms we'll want to get upstream to take over - or at least help maintain
<stub> k
<stub> Still feels rude to be making false claims, since the charm store isn't the only place this is visible. Doubt it makes trouble in reality though.
<stub> The first real conflict will be between the snap package maintainers and charm maintainers for some product, since I doubt an upstream will take on both simultaneously
<stub> Real solution seems to be to use the team's displayname rather than the id, but the charm store might not have that information since it is syncing team membership via openid extensions rather than querying the Launchpad API.
<marcoceppi> stub: yeah, but worth bringing up
<stub> I filed a bug :)
<pascalmazon> hi, I have a question regarding juju store and resources
<pascalmazon> I'm developping a charm that uses 3 resources.
<pascalmazon> I've pushed my charm onto cs:~6wind/trusty/virtual-accelerator-12
<pascalmazon> I'm trying to charm-release it, but it complains that resources are missing from publish request:
<pascalmazon> do I really need to send resources to the store along with my charm? (I would have to use boilerplate files, as my resources contain credentials for using our proprietary software)
<magicaltrout> pascalmazon: you can just upload empty files
<magicaltrout> or some placeholder
<magicaltrout> and have your charm check the content
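A sketch of magicaltrout's placeholder approach, assuming charm-tools' attach/release commands; the resource names, file names, and revision numbers are illustrative rather than taken from the actual charm:

    # create empty placeholder files for the declared resources
    touch license.txt config.blob
    # attach them to the pushed revision, then release it
    charm attach cs:~6wind/trusty/virtual-accelerator-12 license=./license.txt
    charm attach cs:~6wind/trusty/virtual-accelerator-12 config=./config.blob
    charm release cs:~6wind/trusty/virtual-accelerator-12 \
        --resource license-0 --resource config-0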
<stokachu> did the juju api for deploy change recently requiring series to not be empty?
<rick_h_> stokachu: following trunk or in rc1 from b18?
<stokachu> rc1 from b18
<rick_h_> stokachu: not aware of any changes but info to narrow down have to check commits tbh
<rick_h_> we were very careful on the path to rc there
<stokachu> ok np
<stokachu> it's nbd i can fix it in our api code
<stokachu> rick_h_: http://paste.ubuntu.com/23242144/
<stokachu> thats the api server error when we tried to deploy that charm via conjure-up
<stokachu> not sure why that doesn't default to xenial
<stokachu> https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml is the bundle
<stokachu> this works in regular juju deploy though
<marcoceppi> rick_h_: stokachu what about multi-series subordinates?
<stokachu> the kubernetes worker looks like it's xenial only
<rick_h_> stokachu: hmm, so looks like a change from a long while ago: https://github.com/juju/juju/commit/ff86e5c5413b2920986dc2769d57c6adadf8237f
<stokachu> the bundle has a series: xenial defined
<stokachu> maybe we just need to specify that in our api call
<rick_h_> stokachu: I see, so maybe there's something with that not carrying through the bundle.
<stokachu> yea i think we just need to pull the default series in the bundle and make sure it's set for each deploy call that doesn't have a series in the charm id
<stokachu> im just surprised i didn't hit this earlier
<rick_h_> stokachu: yea, maybe the bundle normally had series in the charm urls?
<stokachu> yea i bet all the other bundles we use have the series in the charm urls
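A sketch of the two places a series can come from in a bundle, which is the distinction stokachu's API code needs to handle: a bundle-wide default versus a series baked into each charm URL. Charm names and URLs here are illustrative:

    series: xenial              # bundle-wide default; a client driving the API may need to apply this itself
    services:
      kubernetes-worker:
        charm: cs:~containers/kubernetes-worker      # no series in the URL
        num_units: 1
      easyrsa:
        charm: cs:~containers/xenial/easyrsa         # series carried in the URL
        num_units: 1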
<charles3> hey tvansteenburgh got a hot second?
<tvansteenburgh> charles3: yup
<lazyPower> tvansteenburgh: its been a while since i've tried to co-locate a service in a bundle. I'm getting the return from amulet when trying to deploy the bundle: 2016-09-27 13:59:53 Invalid application placement easyrsa to lxd:etcd/0
<lazyPower> is this known behavior, or should I file a bug about this?
<tvansteenburgh> lazyPower: you have latest deployer?
<lazyPower> double checking, 1 sec
<lazyPower> i didn't, it just pulled an update. However i get the same result
<tvansteenburgh> lazyPower: gimme a min
<lazyPower> ack. https://gist.github.com/4447433ddce4729c88a737524ed7f0c9  -- bundle for reference
<lazyPower> magicaltrout: now that our k8s formation has kind of settled, is it time to get some of your mesos in my kubernetes? or is it time to get some of my kubernetes in your mesos
<tvansteenburgh> lazyPower: s/applications/services/
<lazyPower> ah, same bug that bit amulet bit deployer?
<lazyPower> s/bug/change/
<tvansteenburgh> lazyPower: yeah.
<tvansteenburgh> lazyPower: thanks for the heads-up, i'll file a bug
<lazyPower> tvansteenburgh: does deployer/amulet also need to be updated for the new nomenclature of colocation? s/lxc/lxd?
<lazyPower> http://paste.ubuntu.com/23242376/
<tvansteenburgh> lazyPower: no, that was already done
<lazyPower> ok cool. I'll just update the bundle for now. If you're busy i can also file that bug about s/application/services/
<tvansteenburgh> lazyPower: already filed, thanks
<lazyPower> you da man
<tvansteenburgh> lazyPower: your placement should work if you s/applications/services. let me know if it doesn't
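Roughly what tvansteenburgh is suggesting: the same placement, but under the older "services" key that deployer/amulet still expect. The charm URLs follow the bundle under discussion and may not be exact:

    services:                    # not "applications" for deployer/amulet
      etcd:
        charm: cs:~containers/etcd
        num_units: 1
      easyrsa:
        charm: cs:~containers/easyrsa
        num_units: 1
        to:
          - lxd:etcd/0           # co-locate in a container on etcd/0's machine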
<bdx_> hey whats up everyone?
<bdx_> I've got some nonsense going on here around aws spaces and subnets
<bdx_> check it out
<lazyPower> tvansteenburgh: doesn't appear to - that output was with this bundle https://gist.github.com/3bcb688d317589e502a41c734f28f734
<lazyPower> well, i commented out the lxd to get this run going, but i digress, it was uncommented and complained.
<tvansteenburgh> lazyPower: ok, looking
<rock__> Hi. we created a charm that will install and configure one of our storage drivers as a backend for openstack cinder. And it also modifies the nova.conf file. I integrated my charm with the Openstack bundle. In the relations section, how can I relate our charm to the cinder and nova services separately?
<lutostag> So who's knowledgeable about multi-series charms? (do they work in juju1? -- is there an incompatibility between juju1 and juju2 with the metadata.yaml format with series as a list vs a string?)
<bdx_> here is my space and subnet
<bdx_> http://paste.ubuntu.com/23242403/
<rick_h_> lutostag: so Juju 1.25 (.4+ I think) should auto pick the first one in the list and run with that
<bdx_> previously, as you can see here -> http://paste.ubuntu.com/23242441/
<rick_h_> lutostag: but it's not a fully supported feature as that was a 2.0 feature
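The metadata.yaml bit lutostag is asking about looks roughly like this in the 2.0 multi-series form; per rick_h_, 1.25.4+ should just pick the first entry in the list (series values illustrative):

    series:
      - xenial
      - trusty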
<rick_h_> bdx_: ? what's up?
<bdx_> well I guess juju status doesn't show the private ip, here is a screen shot of the aws console showing the instance is in the correct subnet/space -> https://postimg.org/image/pkhhfjzvh/
<hml> is there an equivalent to "juju resolved --retry" in juju 2.0 rc1?
<bdx_> so what's going on here is that my instances will no longer deploy to the space/subnet I have defined in my model
<lazyPower> rock__: there is a cinder-client layer that should provide most if not all of that template for you
<bdx_> I run the same command I ran to get the 0th instance deployed to my defined space/subnet on subsequent instance deploys, but my instances are getting deployed to random subnets now
<rick_h_> hml: just juju resolve should auto retry.
<rick_h_> bdx_: does it show in the yaml output?
<rick_h_> bdx_: /me goes to look there was a bug about not all addresses showing in status
<hml> rick_h_: thanks
<bdx_> rick_h_: this should answer all of your questions ->  http://paste.ubuntu.com/23242465/
<rick_h_> bdx_: what's juju status --format=yaml
<bdx_> rick_h_: http://paste.ubuntu.com/23242474/
<rock__> lazypower: OK. Thank you.
<rick_h_> bdx_: so the machine has multiple addresses, you're not seeing the one you want that's on the space correct?
<magicaltrout> lazyPower: sounds good although we have a go-live this week. The stuff still remains pretty much as it was; I have plans to circle back around to it next week
<bdx_> rick_h_: more than that
<bdx_> rick_h_: not only is the instance not deploying to the specified space/subnet, I'm getting the private ip in juju status
<bdx_> instead of the public like the other instances
<bdx_> instance*
<bdx_> rick_h_: this is a wild one .... sucks it had to show itself when I'm in florida setting up our dev teams with juju :-(
<rick_h_> bdx_: yes, looks like: https://bugs.launchpad.net/juju/+bug/1512875 as far as the reporting
<mup> Bug #1512875: juju 1.25.0 using MAAS 1.9-beta2 juju incorrectly reports the private address <addressability> <maas-provider> <networking> <juju:Triaged by rharding> <juju-core:Won't Fix> <https://launchpad.net/bugs/1512875>
<bdx_> rick_h_: thats 1 bug that I'm experiencing for sure
<rick_h_> maybe, bah
<rick_h_> bdx_: yea, the other thing is that you're using spaces with just machines. So are you using the spaces constraint with add-machine?
<bdx_> or some variant of it
<bdx_> rick_h_: yea, I am
<rick_h_> yea, so there's a few things. 1) juju not picking the 'preferred' address for the display. That's a known issue. 2) That we only show one address in status and should show all addresses a machine has. 3) not sure why the spaces constraint would get you a machine that's not in the space :/
<MrDan> hey, I deployed Openstack but neutron-gateway did not deploy l2, dhcp or metadata
<bdx_> I was getting machines in the correct space up till this morning when I went to run a demo
<MrDan> any known issue?
<rick_h_> MrDan: yes, known issue with the Neutron gateway charm and not liking the bridged interfaces setup by RC1
<rick_h_> bdx_: yea, sorry, that one doesn't ring any bells and not sure we've seen that one.
<MrDan> what can I do as a workaround?
<rick_h_> MrDan: https://bugs.launchpad.net/juju/+bug/1627037
<mup> Bug #1627037: rc1 bridges all nics, breaks neutron-gateway <cdo-qa-blocker> <eda> <landscape> <uosci> <v-pil> <juju:In Progress by frobware> <https://launchpad.net/bugs/1627037>
<bdx_> rick_h_: ok, I'll file a bug later today when I have a min
<bdx_> rick_h_: thanks
<rick_h_> bdx_: k, sorry man
<rick_h_> MrDan: there's a couple of notes in the bug. We're working on updating for RC2 and working with the charmers of neutron gateway to correct it soon.
<rick_h_> MrDan: best thing is to backtrack RC or look at the maas hacks in the bug.
<MrDan> ok, thanks
<lazyPower> magicaltrout: no rush, we're still launching ourselves. today is the day
<magicaltrout> just insinuated the board at the ASF was an old boys club, I expect rockets to land on my house shortly
<magicaltrout> good bye all
<MrDan> can the neutron-gateway issue be worked around if I have only one NIC configured on that host?
<rick_h_> MrDan: I don't think so as it'll still see a bridge on that one interface
<rick_h_> MrDan: and will refuse to use it
<MrDan> ah, so the issue is that the public interface, eth0 in my case, should not be bridged, as the neutron-gateway charm skips bridged networks
<rick_h_> MrDan: right, and juju auto bridges so that things work as expected when deployed into containers/etc.
<rick_h_> MrDan: the long term fix is to represent L2 interfaces in the model, but until then the charmers are looking to drop the bridge for neutron specifically
<MrDan> i see, so basically right now openstack is not deployable atm with the latest packages
<marcoceppi> MrDan: yeah, if you can beta18 is a good candidate
<hml> marcoceppi: where can we download beta18?
<marcoceppi> hml: let me check if it's still in the ppa
<rick_h_> hml: we can also look to get you a binary if you need, or build it from the tag? https://github.com/juju/juju/tree/juju-2.0-beta18
<beisner> thedac, thanks for landing the mysql c-h sync @ https://code.launchpad.net/~1chb1n/charms/trusty/mysql/newton/+merge/306554 - marcoceppi, what do we need to do to get that rev into the cs?
<hml> rick_h_: okay, if it's not in the ppa i'll build one - question though...
<rick_h_> hml: shoot
<beisner> marcoceppi, it looks like i've got perms to charm push it, just not sure of the expected process/flow on that one
<hml> rick_h_: when I do a bootstrap - it appears that juju is looking to download the latest, which makes me nervous
<rick_h_> hml: hmm, if it's custom built I thought it would not auto use the matching tools
<rick_h_> anastasiamac: can you speak to the changes wallyworld did here? ^
<hml> rick_h_: i'm using a custom build right now and got "cmd cmd.go:129 Looking for packaged Juju agent version 2.0-rc2 for amd64"
<hml> rick_h_: though in the end it uses my local juju build
<rick_h_> hml: :/ hmm maybe it's just part of the Id process?
<rick_h_> hml: it used to be you used --upload-tools to make sure it used your local binary
<rick_h_> hml: if it's finding the right binary I that's working as intended then
<anastasiamac> rick_h_: those changes are black magic for me, sorry :D
 * rick_h_ doesn't have a ton of experience with the new flow there
<rick_h_> anastasiamac: k
<rick_h_> anastasiamac: ty
<anastasiamac> rick_h_: natefinch has had more epxosure and mayb of more help here ^^
<anastasiamac> exposure even LD
<hml> rick_h_: it does in the end... but if i'm using the installed version of juju - and it's looking to upgrade every time - eeks.  :-)
<marcoceppi> hml: I have a beta18 binary if you'd like
<hml> marcoceppi: cool, how can I get it?
<natefinch> hi, sorry, was at lunch... rick_h_, anastasiamac, hml - use --build-agent to force juju to upload a locally built jujud
<natefinch> (it always rebuilds, so you need the source, and need to be able to build it, but otherwise, works like --upload-tools)
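A sketch of natefinch's suggestion; the cloud and controller names are illustrative:

    # rebuild jujud from the local source tree and upload it during bootstrap
    juju bootstrap lxd test-controller --build-agent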
<hatch> In juju 2, what is the correct way to remove an application and a unit that's in error, say a install hook error.
<rick_h_> hatch: you have to resolve it first. --retry is not built into resolve
<rick_h_> hatch: so you can juju resolved app/unit
<rick_h_> hatch: or juju resolved --no-retry if you don't want to bother
<hatch> rick_h_: so when I tried `juju resolved app/unit` it just kept retrying the hook
<hatch> I had to run --no-retry
<hatch> so to remove a unit in error I had to run `juju resolved app/unit --no-retry` a few times after `juju remove-application app`
<hatch> is this intentional?
<hatch> it's quite unintuitive
<rick_h_> hatch: so the normal wish is to retry. The thing is that it goes through each hook
<hatch> right, but the application has already been marked to be destroyed
<hatch> so why would we care if the unit is any good?
<rick_h_> hatch: so if you fail/retry a config-changed and then it hits a relation hook it'll get stuck and you have to resolved again
<rick_h_> hatch: right, but destroying it invokes hooks
<rick_h_> hatch: so it's stuck going through them thus the few resolved tries
<hatch> would it make sense to have a warning returned when you destroy an application on how to actually get it gone if any of the units are in error?
<rick_h_> hatch: the thing is that it's all async. when you go to destroy it, you don't really know what's up.
<hatch> hmm
<rick_h_> hatch: there might be a case in there to figure out sometimes, but not consistent
<hatch> yeah...ok
<hatch> this is an interesting problem
<hatch> maybe a always-on notice
<hatch> "if something fails when tearing down, do x"
<rick_h_> hatch: maybe push charm authors to test their charms :)
<rick_h_> hatch: there's definitely room for improvement
<rick_h_> something for 2.1
<hatch> lol
<hatch> yeah I was just trying what I usually do 'spam resolved'
<hatch> but that didn't work because I needed --no-retry
<rick_h_> hatch: right, that's what's new in rc1
<hatch> so I thought that was odd
<hatch> in the 'normal' case that makes sense
<rick_h_> hatch: normally you have to --retry but that's now the default because usually, you want that on
<hatch> to retry be the default
<hatch> right
<hatch> so maybe if the application is in a dying status and units are in error when you run `resolved` it just does that
<hatch> or would you want to potentially still run the hook?
<rick_h_> hatch: well for things that might need to cleanup, hit an API when going down, etc. It makes sense to keep hooks exec'ing as it goes down
<rick_h_> "hey, I'm going down, take me out of the load balancer"
<hatch> yeah, that's an excellent point
<hatch> tough problem here
<hatch> heh
<hatch> touch ux problem that is
<hatch> touch
<hatch> tough
<hatch> lol
 * hatch can't type
<cholcombe> relation_set still works in reactive land right?
<magicaltrout> hey arosales did that ibm talk you gave get recorded?
<lazyPower> magicaltrout: he's out at strata this week
<lazyPower> I see some stills from it but no media that i can tell. I'll sync with james over that and see if we have any assets from that talk
<magicaltrout> ta
<magicaltrout> looked interesting thats all
<magicaltrout> slides, video whatever
<spaok> does anyone know if you can create LXD machines from a bundle.yaml for juju deploy without having to specify machines? seems just doing a constraint for container=lxd doesn't work
<hatch_> spaok: so you want to create empty unused machines?
<spaok> naw, I want juju deploy to create containers for the services its deploying, I know can use the to:\n   - lxd:0  type of syntax, but that requires a machine to be defined first
<hatch> spaok: it does, because the placement of containers on machines is typically important
<hatch> spaok: if you use the GUI to generate the bundle placement then you might find that easier
<spaok> we are trying to setup automation for openstack deployment, so we want to just target machines tagged in maas, but we don't want to have to pre-populate a bundle file with machines, cause the counts may change
<hatch> ahhh, I'm not sure of any way around that to be honest. It might be worth an email to the list or a feature request
<rick_h_> spaok: check the constraints docs
<spaok> rick_h_: ya, gone through it backwards, forwards, up and down
<spaok> can't find a way
<rick_h_> spaok: i think theres a :lxd syntax for --to
<rick_h_> spaok: hmm ok.
<spaok> there is, but to use it you need to define machines
<spaok> ideally if I could use contraints + to: lxd
<rick_h_> spaok: so you want each unit on a new machine but in a container?
<rick_h_> spaok: ah sorry, not the constraints docs but the placement docs
<spaok> basically ya
<spaok> it works with the direct commands
<spaok> for instance, juju deploy cs:xenial/glance --to lxd --constraints tags=lxc,rack2
<spaok> will create a container on the machine tagged lxc
<rick_h_> i see gotcha
<MrDanDan> juju deploy --to=lxd:[node name]
<spaok> MrDanDan: question is, how do I do it in a bundle.yaml without needing to define all the machines
<MrDanDan> and specify the app after, of course
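For reference, the bundle syntax spaok is trying to avoid, where machines are declared up front and units are targeted into containers on them; the charm, tags, and machine IDs are illustrative:

    machines:
      "0":
        constraints: tags=lxc,rack2
    services:
      glance:
        charm: cs:xenial/glance
        num_units: 1
        to:
          - lxd:0        # new LXD container on machine 0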
#juju 2016-09-28
<siva> I am trying to deploy my charms and I find that the machine creation itself has not succeeded
<siva> it is showing the 'pending' state
<siva> How do I debug this problem?
<siva> What can be the issue?
<siva> MAAS2.0 UI shows that the status of the machine is 'Deployed'
<siva> MACHINE   STATE    DNS            INS-ID   SERIES  AZ
<siva> 0         pending  192.168.1.252  4y3hkh   trusty  default
<siva> The above state is there for hours
<siva> Any help to debug this is much appreciated
<bildz> hey I'm having an issue with conjure-up on xenial.  I have 5 machines allocated for MAAS (1) and Openstack (4).  Maas works great and I can conjure-openstack, but im having an issue with it wanting another host and there isnt one for neutron.  Do I need 5 openstack machines, instead of the 4 the previous one could work with?
<bildz> http://pastebin.com/ZfDciFgM
<stokachu> bildz: so 1 machine is always created for the 'controller' or 'juju admin' node
<stokachu> so if the bundle requires 5 machines that means you need at least 6
<bildz> I'm assuming thats the bundle requirement
<stokachu> bildz: yea
<bildz> stokachu: where would i have read that?
<bildz> I can barely find install information for 2.0-rc1
<stokachu> bildz: jujucharms.com
<stokachu> we use https://jujucharms.com/openstack-base/ for conjure-up
<stokachu> juju docs https://jujucharms.com/docs/stable/getting-started
<bildz> thank you
<bildz> thankfully, i have another to get me through the install, but i've been playing around with the cli commands and am getting familiar with managing the deployment
<stokachu> :)
<KpuCko> can somebody help me with container stuck in error/pending state?
<KpuCko> i have a machine which is hosting a kvm container (so juju says), and i want to remove the role from this server
<KpuCko> how do i do that? and how do i fix the services on the host machine which is in a broken state?
<spaok> juju resolved, but if you want it to retry the last hook use juju resolved -r  then the unit name
<spaok> KpuCko: ^^
<KpuCko> spaok http://pastebin.com/BBkFtA4D
<KpuCko> i don't know anything about this machine: 11
<KpuCko> how to recognize it?
<spaok> so, typically that happens when you run an add or deploy command and it didn't match one of the other nodes
<spaok> it's trying to add a new one
<spaok> you can juju remove-machine 11 --force
<KpuCko> mhm, command runs fine, but result is the same
<KpuCko> oh sorry, i have to wait some time after im running juju status again
<spaok> ya
<spaok> its a queue like system
<KpuCko> thanks a lot
<spaok> np, that happens a lot :)
<spaok> if you're testing stuff, using --to against one of your machine ID's will help
<KpuCko> yeah, yeah im working with juju gui
<KpuCko> but i'm trying to learn how to debug hooks
<KpuCko> how to fix them, etc
<KpuCko> many thanks for the help
<magicaltrout> alrighty, signed off to talk Juju at Pentaho Community event this year.... better sort out some more travel
<spaok> does anyone know a charm that uses the current python-basic template? trying to get a feel for writing a charm
<magicaltrout> there is a python basic template?
<magicaltrout> what you looking at spaok ?
<marosg> Hi, I am very new to Juju. I was following https://www.stgraber.org/2016/06/06/lxd-2-0-lxd-and-juju-1012/   and I am getting an error when bootstrapping controller
<marosg> test@lxd:~$ sudo apt install juju
<marosg> [sudo] password for test:
<marosg> Reading package lists... Done
<marosg> Building dependency tree
<marosg> Reading state information... Done
<marosg> The following additional packages will be installed:
<marosg>   distro-info juju-2.0
<spaok> magicaltrout: its part of the charm create command
<spaok> marosg: that should work, that's the error?
<spaok> This post assumes that you already have LXD 2.0 installed
<spaok> you got that?
<marosg> yes, LXD is working
<spaok> there's a bug with images, not sure if they fixed it
<spaok> might be that
<spaok> try sudo lxc image copy ubuntu:16.04 local: --alias ubuntu-xenial
<magicaltrout> ah just the default stuff spaok ?
<magicaltrout> most charms start with it
<magicaltrout> I've got some basic-ish stuff kicking around hold on
<spaok> ya, just reading about the services framework
<spaok> be nice to see a current example
<magicaltrout> https://github.com/buggtb/layer-mesos-master/blob/master/reactive/layer_mesos.py
<magicaltrout> thats a simple one
<spaok> cool, thanks
<magicaltrout> https://github.com/buggtb/layer-drillbit/blob/master/reactive/drillbit.py
<magicaltrout> there's a slightly more complex or messy one
<magicaltrout> depending how you look at it ;)
<marosg> spaok, thanks, looks like it did the trick
<spaok> marosg: np, I ran into that one also, I just pull the image now as ubuntu-xenial as part of my deployments
<marcoceppi> spaok: you need to update your version of charm
<marcoceppi> python-basic isn't the best template, python-reactive is
<spaok> ok
<marcoceppi> spaok: if you `sudo add-apt-repository ppa:juju/stable` and upgrade charm and charm-tools you'll get a better experience
<magicaltrout> ah that explains my confusion
 * magicaltrout didn't use python pre reactive
<spaok> I looked at https://jujucharms.com/docs/stable/tools-charm-tools
<spaok> and http://pythonhosted.org/charmhelpers/getting-started.html
<magicaltrout> read 2.0 docs
<magicaltrout> they are
<magicaltrout> fail
<spaok> hah, ya, I've had a lot of problems with things in 2.0 docs
<marcoceppi> s/fail/in progress/ :P
<spaok> fair enough
<magicaltrout> sorry i was referring to my comment
<magicaltrout> not the docs
<magicaltrout> as I said read 2.0 without clicking the link as I read stable :)
<magicaltrout> said/read
<spaok> ya, the ones I was looking at don't really change between stable and dev
<magicaltrout> coffee clearly didn't have the desired effect this morning
<spaok> heh, its 3am here
<magicaltrout> yeah i don't think charm build has changed much if at all
<magicaltrout> west coaster
<spaok> yar
<magicaltrout> where abouts?
<spaok> san jose
<spaok> heart of silicon valley pretty much
<magicaltrout> ah very nice
<magicaltrout> took a drive up that way when I was out in Pasadena last week
<spaok> nice
<spaok> some nice beach roads
<magicaltrout> indeed
<magicaltrout> nice part of the world
<spaok> this current?
<spaok> charm 2.2.0-0ubuntu1~ubuntu16.04.1~ppa2
<spaok> charm-tools 2.1.4
<spaok> cause no template found for python-reactive
<magicaltrout> more current than mine
<magicaltrout> don't you just run "charm create"?
<spaok> wasn't sure, I saw different things about using -t python, or -t python-basic
<spaok> I can create it
<magicaltrout> na
<spaok> s/create/try/
<magicaltrout>  charm create mycharm
<magicaltrout> then i see
<magicaltrout> INFO: Using default charm template (reactive-python). To select a different template, use the -t option.
<magicaltrout> which is what you want
<spaok> kk, ya
<spaok> ok, now to figure out how to populate it
<magicaltrout> before you build it
<magicaltrout> make sure you have the environment vars set
<magicaltrout> else you'll end up building stuff wonky
<magicaltrout> JUJU_REPOSITORY etc
<spaok> for local repo?
<magicaltrout> yeah it'll stomp on your charm if you aren't careful
<spaok> ok
<magicaltrout> you have to build before deploying or pushing to the charmstore
<magicaltrout> so JUJU_REPOSITORY, INTERFACE_PATH and LAYER_PATH
<magicaltrout> need setting
<spaok> cd
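A sketch of the environment magicaltrout is describing, set before running charm build; the paths are illustrative:

    export JUJU_REPOSITORY=$HOME/charms
    export LAYER_PATH=$JUJU_REPOSITORY/layers
    export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces
    charm build    # run from the charm's source layer directory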
<spaok> ok, I'm going to mess with this some more in the day light, thanks for the help magicaltrout and marcoceppi, gives me a good starting place
<magicaltrout> no problem
<magicaltrout> swing back round in the afternoon and more clued up people are around
<MrDan> hello
<MrDan> if I shutdown physical machines on which i have juju units as LXDs, when I boot up the machines again, the LXDs should come up by themselves, right?
<magicaltrout> I don't believe that is the case although I've never tried
<marcoceppi> MrDan: they should
 * marcoceppi powers off a node in maas to test
<rick_h_> MrDan: yes, they should come back up
<rick_h_> MrDan: I do it on my laptop all the time, but not tried it on a maas server
<magicaltrout> booo i'm 0 for 2 today
<magicaltrout> someone else the other day said theirs weren't coming back up
<marcoceppi> magicaltrout: it could be a bug
<magicaltrout> surely not?
<marcoceppi> but the agent should bring back not just the workload on the machine, but also the containers (if any)
<rick_h_> +1, definitely it should
<magicaltrout> fair enough
<magicaltrout> sounds sensible ;)
<magicaltrout> i'm still waiting for my kickstarter stuff to finally have a way to test MAAS properly
<magicaltrout> looking forward to it
<rick_h_> kickstarter stuff?
<magicaltrout> http://www.udoo.org/udoo-x86/
<magicaltrout> bought a bunch of these
<magicaltrout> although they now have 32GB ram not 8
<magicaltrout> if they arrive before i'm due to demo Juju at the Pentaho meetup this year I'm gonna take them and do a "MAAS" big data deployment
<rick_h_> ah nice
<zeestrat> Cool stuff. Do they have some sort of management interface so they can be managed by MAAS?
<magicaltrout> you mean powerwise zeestrat ?
<zeestrat> magicaltrout: Yes.
<magicaltrout> dunno, good question
<MrDan> Hi, testing now beta18 for the neutron-gateway issues
<magicaltrout> the maas docs are a bit of a black hole
<rick_h_> MrDan: k, the team's landing that fix today for the rc2 tomorrow as well so please watch for that to help you get back to rc
<magicaltrout> looks like there might be ipmi support zeestrat
<marcoceppi> magicaltrout zeestrat if there's IPMI it'll work for sure with maas
<zeestrat> magicaltrout: Then you are probably good to go. The MAAS docs aren't too bad. There's a list of power driver capabilities here: http://maas.io/docs/manage-power
<magicaltrout> yeah
<marcoceppi> magicaltrout: I know a guy who built a BMC with an arduino for raspberry pis. A lightweight restful api that would power on one of the GPIO pins to turn on or off an RPI
<magicaltrout> the problem with the docs is where they land on google
<magicaltrout> they don't seem to get indexed
<marcoceppi> magicaltrout: yeah, getting google juice has always been...a struggle for us on some of our properties
<marcoceppi> magicaltrout: I gave up googling and just go to the source, or use `site:jujucharms.com` or `site:maas.io`
<marcoceppi> maas.io is pretty new compared to maas.ubuntu.com
<magicaltrout> yeah but you google maas and you end up at maas.ubuntu.com but then maas.io is listed lower down but only the landing page
<magicaltrout> which is weird
<magicaltrout> anyway.... yeah these boards have an arduino built into them which is pretty crazy
<magicaltrout> interesting to see what you can do with an X86 board with an Arduino 101 on them
<MrDan> rick_h: it works, the neturon gateway services are installed
<rick_h_> MrDan: cool, hope that unblocks you until rc2.
<MrDan> yep
<MrDan> tomorrow rc2 is out on ppa?
<rick_h_> MrDan: yes, that's the plan
<MrDan> cool
<rock> Hi. We developed a "cinder storage driver" charm. In this charm we are using "subordinateconfigcontext" to pass configuration values to cinder.conf [I mean, to modify cinder.conf via the cinder charm]. we followed https://github.com/openstack/charm-cinder/blob/master/templates/mitaka/cinder.conf
<rick_h_> marcoceppi: did we have the ability to list/etc plugins before? I don't see it in the command lists but curious how we missed this when doing the cli stuff
<marcoceppi> rick_h_: `juju-1.0 help plugins`
<marcoceppi> rick_h_: it was a help topic, but I don't see why it wouldn't be list-plugins today
<rock> Similarly, using the same charm we need to modify nova.conf. We need to add use_multipath_option: true/false.
<rick_h_> marcoceppi: gotcha, ok yea. I was trying to see why we didn't have any plugin based commands make it into the new cli planning
<rick_h_> marcoceppi: but if it was under help we probably just missed it
<marcoceppi> rock: there is a way to do that without modifying the nova charm, but I don't remember what that is
<marcoceppi> rick_h_: I can build a `juju list-plugins` plugin, but it won't show up in help ;)
<rick_h_> marcoceppi: lol yea
<marcoceppi> rick_h_: which brings up an interesting point
<marcoceppi> should `help commands` show plugins as well ;)
<rick_h_> marcoceppi: yea, replying to your email thinking that through
<marcoceppi> rick_h_: cool, thanks
<rick_h_> marcoceppi: if plugins is a noun in the juju-verse...what all does that mean.
<marcoceppi> you can list-plugins or `juju plugins` which are aliases
<rick_h_> list, show, add/remove is done automatically via install. What's the command to get to a list of available plugins to install?
<marcoceppi> but I wonder about register-plugin
<rick_h_> marcoceppi: that kind of stuff
<marcoceppi> where you whitelist the plugin so it shows up in commands
<rick_h_> marcoceppi: hmm, not sure about that one
<marcoceppi> but that's a feature, listing plugins is parity
<marcoceppi> well, add-plugin, probably not register-plugin
<marcoceppi> fwiw, plugins still work
<rick_h_> marcoceppi: yea, tbh as a whole this will have to fall into a 2.0.1 atm
<marcoceppi> pfft, PATCH RELEASE?
<rick_h_> marcoceppi: exactly, we've got bugs of things that don't work that we need to get fixed for GA with it coming down to rc2 tomorrow and then a little gap for GA
<marcoceppi> jk, that sounds good
<rick_h_> marcoceppi: so will file a bug and link it in my email reply
<marcoceppi> ta
<marcoceppi> rick_h_: I made sure to cc the -dev list ;)
<rock> marcoceppi: OK. I have a question. To modify nova.conf we are trying to use https://github.com/openstack/charm-nova-compute/blob/master/templates/mitaka/nova.conf.
<marcoceppi> rock: I imagine #openstack-charms would be a better room, a lot of good openstack charm knowledge there
<rock> marcoceppi: OK. Thank you.
<aisrael> tvansteenburgh: I know I asked this recently, but wrt python-jujuclient, can we get a new release with the 2.0 rc1 support? A PPA won't work, in this case, because it's being run from fedora
<tvansteenburgh> aisrael: i'll do that now
<aisrael> tvansteenburgh: thanks!
<tvansteenburgh> aisrael: uploaded
<tvansteenburgh> (v0.53.3)
<aisrael> excellent, thanks!
<kwmonroe> hey rick_h_, if i add a user to my controller, and grant that person (let's call him tvansteenburgh) 'write' privileges to a model, should he be able to 'juju ssh X' into units of that model?
<rick_h_> kwmonroe: you'll need to add his ssh key
<rick_h_> kwmonroe: see juju add-ssh-key and such
<rick_h_> kwmonroe: since he's not admin he can't manage keys and without a key, no access
<kwmonroe> add-ssh-key?!?!?  rick_h_, you're the best.  i didn't know of such sorcery.
<rick_h_> kwmonroe: let me know how it goes
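What rick_h_ describes looks roughly like this, assuming the admin has the other user's public key on hand; the key file and Launchpad ID are illustrative:

    # add a key directly...
    juju add-ssh-key "$(cat tvansteenburgh.pub)"
    # ...or import one from Launchpad (gh: works for GitHub)
    juju import-ssh-key lp:tvansteenburgh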
<pascalmazon> hi. Can I provide --resource foo=bar when deploying a bundle? how will it know what application it is for?
<rick_h_> pascalmazon: sorry, bundles currently only support resources from the charmstore
<rick_h_> pascalmazon: as you note, the bundle deployment scenario is a bit more complicated and was pushed past rev1 atm
<pascalmazon> rick_h: ok, thanks for the info!
<kwmonroe> hey beisner, suchvenu emailed mbruzek and i with some questions about the cinder charm (stuff like cinder for Z and cinder in lxd).  who is the best contact for those?  or would it be better to ask on the juju ML?
<kwmonroe> (stand down beisner ^^; we got the routing sorted)
<beisner> woot thx kwmonroe
<tvansteenburgh> rick_h_: is there any way to tell `juju ssh` which key to offer?
<rick_h_> tvansteenburgh: hmm, with juju scp you can specify args to the underlying scp functionality with a --, but I think we redid juju ssh in an OS-agnostic way.
<rick_h_> tvansteenburgh: so honestly not sure, will have to ask around/look and see if I can see anything in the code
<tvansteenburgh> rick_h_: ok. right now i'm in a situation where i have to rename my keys to get juju ssh to work
<tvansteenburgh> rick_h_: should i file a bug about that
<tvansteenburgh> ?
<rick_h_> tvansteenburgh: yes please. Will see if there's a flag that's just not in help/etc
<tvansteenburgh> rick_h_: roger, thanks
<rick_h_> mgz: around? you were just in ssh-key land recently. any ideas on ^
<rick_h_> tvansteenburgh: looking at https://goo.gl/0q2KWq
<rick_h_> tvansteenburgh: looks like not supported :(
<tvansteenburgh> rick_h_: ack, thanks for looking
<rick_h_> bdx_: heads up, comment/etc coming in on https://bugs.launchpad.net/juju/+bug/1627554
<mup> Bug #1627554: juju binary broken on sierra <juju:Triaged by jamesbeedy> <https://launchpad.net/bugs/1627554>
<rick_h_> bdx_: let me know if you're up for seeing it through or want to punt.
<junaidali> Hi, i'm getting error 'ERROR unrecognized command:' for charm publish command. any idea?
<junaidali> charm * packages version: http://paste.ubuntu.com/23247897/
<lazyPower> junaidali: the command changed to charm release
<junaidali> thanks lazyPower. I missed the latest update
<marcoceppi> lazyPower junaidali I'm adding a silent plugin to the next charm snap, where charm publish is just an alias to charm release
<beisner> wait what, is the next charm tools command for publishing changing?
<beisner> we've got automation on that, will need to know when that will affect stable users (deb)
<firl> lazyPower, congrats on getting kubernetes enterprise support ready!
<firl> I had a question around it when you get a moment
<lazyPower> firl: sure, i'm in/out but whats up?
<firl> I saw the kibana / es / filebeat configuration with kube, awesome!
<firl> is there a way to expose services yet like gce?
<lazyPower> firl: we ship with an nginx reverse proxy, there's a demo action of microbots
<lazyPower> it has limited support for sockets, i'm still working to make that easier
<firl> gotcha, I remember that being an issue
<lazyPower> i want to add an action that will let you pick from pre-configured ingress LB's and specify a namespace and it "just works" after that
<firl> Gotcha, yes that would be wonderful
<gQuigs> are there daily builds available for juju 1.25?
<lazyPower> so we're 1/2 way there with whats in there :)
<firl> yeah, sounds awesome
<lazyPower> gQuigs: not daily, 2.0 is what would be in daily
<lazyPower> firl: i'm highly interested in your return feedback this go around, please capture it for us
<firl> yeah, I will try it out in Openstack
<lazyPower> firl: if there's *anything* yinz need over there in the short term beta cycle, i want to get that captured.
<firl> I am considering if I need to try it on bare metal
<lazyPower> well your networking will be pokey
<gQuigs> lazyPower: right, but 1.25 is still maintained and I'd like to try a fix that's committed but not released
<lazyPower> overlay in an overlay and all that
<firl> do I need juju 2 for it?
<firl> yeah the docker config would need to have 1404 mtu
<lazyPower> firl: yeah, i def need that feedback of your network findings when deployed in openstack
<lazyPower> i'm probably going to have to expose some of the guts of the container runtime for configuration there
<firl> cool. I have a few nodes to test it in
<firl> whats the repo / guide again to get it going and I will try to test it this week
<lazyPower> gQuigs: ah good point, try checking in #juju-dev
<lazyPower> firl: the readme - jujucharms.com/canonical-kubernetes
<gQuigs> will do, thanks
<firl> lazyPower perfect, I will try it with an openstack overlay. If it's working enough I might put it on a couple bare metal machines.
<firl> does nodeport work through the juju overlay also?
<firl> or do i have to do a nginx reverse
<thumper> lazyPower: hey there
<lazyPower> if you want to do nodeport, you'll need to manually open the ports for now
<lazyPower> any worker can perform as a reverse proxy for both nodeport and for ingress
<lazyPower> thumper: yo
<lazyPower> firl: but our recommendation is to use the ingress controller so it's encapsulated. for the workloads that demand nodeport, like a znc bouncer, we'll have to brainstorm and make that better
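A rough illustration of the NodePort path being discussed (the deployment name is a placeholder; the allocated port still has to be reachable on the workers, which is the manual step mentioned above):

    kubectl expose deployment my-app --type=NodePort --port=80
    kubectl get svc my-app       # shows the allocated node port (e.g. 3xxxx) to open manually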
<firl> im just thinking through if i wanted to have socket support
<firl> but this is all awesome, nice job!
<lazyPower> firl: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx
<lazyPower> its pretty extensive
<lazyPower> i mean you can throw kubelego as a pod on the ingress and get free tls with letsencrypt
<lazyPower> but thats the LB we shipped with for beta. it has the biggest portfolio of supported options without compromising support through advanced configuration.
<firl> yeah
<firl> currently I have an ssl secret with a single ssl proxy node that just acts as a termination point
<firl> just to get ssl so that might be a nice change of pace
<lazyPower> lots of options
<firl> ya, I will have to get a juju 2.0 environment working and test it out
<kwmonroe> lazyPower: how often do you refresh your boot2docker/docker-machine?  my charmbox is kernel panic'ing more frequently these days, and i realize i haven't updated since 1.11.2
<natefinch> lazyPower: is it me, or does the demo button here not work? https://jujucharms.com/canonical-kubernetes/
<natefinch> lazyPower: I presume it's supposed to bring up the demo page with canonical-kubernetes deploying/deployed, right?  for me it just brings up an empty demo window
<natefinch> lazyPower: it says "fetching bundle data" but then... nothing
<rick_h_> natefinch: known issue, gui team is updating the gui to make it work
<natefinch> rick_h_: ahh, good to know.  Sucks that it happened after the announcement
<rick_h_> natefinch: yea
<magicaltrout> Congratulations, and welcome to Apache: Big Data Europe! Your submission, "Highly Scalable Big Data Analytics with Apache Drill",  has been accepted
<magicaltrout> Seville here I come
<magicaltrout> kwmonroe we have some interesting shit to work on
<magicaltrout> sorry stuff
<magicaltrout> not allowed to swear
<kwmonroe> right on magicaltrout!  i've always wanted to do Highly Scalable Big Data Analytics with Apache Drill ;)
<magicaltrout> meh
<magicaltrout> i detect irony
<magicaltrout> well I can crack out the Big Top Stuff, Drill Stuff
<magicaltrout> at ApacheCon
<magicaltrout> I suspect it'll go down reasonably well and give me an excuse to have that upstream discussion
<kwmonroe> magicaltrout: do you know if there's any work behind https://issues.apache.org/jira/browse/BIGTOP-2001?  it hasn't been updated in forever :/
<kwmonroe> and from the drill side, even more foreverer: https://issues.apache.org/jira/browse/DRILL-114
<magicaltrout> they both seem like odd requests
<magicaltrout> which is why they're probably both open
<kwmonroe> i'm guessing they're requests for bigtop to take over drill builds (or at least pin build versions for a particular bigtop release)
<magicaltrout> hmm, yeah but you can run Drill over anything in HDFS, Hive or HBase
<magicaltrout> its not vendor specific
<kwmonroe> at the summit, we talked about a hackathon with bigtop (c0s, specifically).  when that happens, i'll see if there is any movement in bringing drill under the bigtop umbrella.
<magicaltrout> I don't see that happening
<magicaltrout> MapR have their claws in Drill
<magicaltrout> I also don't see the benefit but that's neither here nor there
<magicaltrout> especially as drill queries a bunch of non hadoop stuff
<spaok> magicaltrout: is there a list of layers or interfaces you can use?
<magicaltrout> indeed spaok
<magicaltrout> interfaces.juju.solutions
<spaok> interesting
<spaok> thanks
<spaok> magicaltrout: sorry to bug ya, just wondering, if I want to write an interface, do I just make something in charms/interfaces? not sure the difference between charms/interfaces and charms/deps/interface
<magicaltrout> no worries spaok i'm just trying to figure out where 150GB of non backedup science data has gone :)
<magicaltrout> yeah you just put it into charms/interfaces
<magicaltrout> no magic required there
<lazyPower> spaok: https://jujucharms.com/docs/stable/developer-layers-interfaces
<spaok> lazyPower: thanks
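As a rough sketch of what goes into charms/interfaces (the interface name here is made up for illustration): an interface directory typically holds an interface.yaml plus a provides.py/requires.py built on charms.reactive, along these lines:

    # charms/interfaces/my-service/requires.py  (interface.yaml sits alongside it)
    from charms.reactive import RelationBase, hook, scopes

    class MyServiceRequires(RelationBase):
        scope = scopes.GLOBAL

        @hook('{requires:my-service}-relation-{joined,changed}')
        def joined_or_changed(self):
            # Flag the relation as available so charm layers can react to it.
            self.set_state('{relation_name}.available')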
<spaok> https://jujucharms.com/docs/devel/developer-layer-example
<spaok> had a link to https://jujucharms.com/docs/devel/charms-layers-interfaces/
<spaok> which is 4040
<spaok> s/4040/404/
<lazyPower> spaok: thanks for reporting that https://github.com/juju/docs/pull/1411
#juju 2016-09-29
<spaok> how does provides work in the charms? I'm looking at https://github.com/juju-solutions/layer-docker/blob/master/metadata.yaml  but I don't see any other reference to dockerhost besides it being mentioned in the provides
<caribou> Hello, are there (known) issues with amulet testing on Xenial ?
<caribou> my tests run fine on Trusty, but on Xenial ./amulet/filesystem_data.py expects python2 to be available which is not
<magicaltrout> marcoceppi is/was in europe so you might get a quick answer to that when he's around
<marosg> I did "juju deploy ubuntu". It failed from command line with "unknown channel candidate". I am on beta15, so that's ok, I would need Beta16. However, when I did the same from Juju GUI, it worked. Just curious - is GUI using different mechanism to access charm store ?
<magicaltrout> works here on RC1 marosg
<magicaltrout> there was a bunch of changes to channel naming
<magicaltrout> which I suspect is what you're seeing
<marosg> I understand why cli does not work, it is exactly because of those name changes. I am just surprised GUI works.
<magicaltrout> you might be able to fudge it with --channel stable
<magicaltrout> or something
<magicaltrout> instead of having it ponder which to choose
<marosg> yes, --channel stable helps. But my original question was how come GUI worked. Is GUI using different mechanism to access charmstore?
<magicaltrout> marosg: it will just append that channel flag to the call
<magicaltrout> whereas your out of date juju command line client won't
<magicaltrout> :)
<magicaltrout> if you apt-get update and rebootstrap you'd see that you don't need to do that on the CLI either
<marosg> ok, now I understand, thanks
<magicaltrout> no probs
<Andrew_jedi> Hello guys, I was wondering whether this bug fix was included in the openstack oslo messaging charms for Liberty? https://bugs.launchpad.net/oslo.service/+bug/1524907
<mup> Bug #1524907: [SRU] Race condition in SIGTERM signal handler <sts> <sts-sru> <Ubuntu Cloud Archive:Fix Released> <Ubuntu Cloud Archive liberty:In Progress by hopem> <oslo.service:Fix Released> <python-oslo.service (Ubuntu):Fix Released> <python-oslo.service (Ubuntu Wily):Won't Fix>
<mup> <python-oslo.service (Ubuntu Xenial):Fix Released> <python-oslo.service (Ubuntu Yakkety):Fix Released> <https://launchpad.net/bugs/1524907>
<Andrew_jedi> jamespage: ^^
<jamespage> Andrew_jedi, there is a patch on the bug report, but its not been pulled into the SRU process for the Liberty UCA yet
<KpuCko> hello, is there any way to do charm search on the command line?
<Andrew_jedi> jamespage: Thanks, so the only way for me now is to manually apply this patch. I am not sure where should i apply this patch. Is there any other workaround?
<jamespage> not really - the patch is in the queue, just lots of other things also contending for developer time
<Andrew_jedi> jamespage: My cinder scheduler is refusing to remain in active state. Any pointer what should i do in the meantime?
<magicaltrout> KpuCko: not currently
<KpuCko> mhm, thanks
<KpuCko> another question, how to add charm from cli without deploying it?
<KpuCko> i mean i have to do some configuration before deployment?
<magicaltrout> you can't stage them, but I think you can pass configuration options along with the deploy command
<magicaltrout> https://jujucharms.com/docs/1.24/charms-config
<magicaltrout> like there
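For illustration (the charm and option names are placeholders): juju 1.x takes configuration from a YAML file at deploy time, while juju 2.x also accepts key=value pairs directly:

    # juju 1.x
    juju deploy mysql --config ./mysql.yaml
    # juju 2.x
    juju deploy mysql --config dataset-size=25%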
<jamespage> Andrew_jedi, give me an hour and I'll get it up into liberty-proposed
<KpuCko> mhm, okey will try that
<jamespage> its been kicking around a few weeks and I'm not sure why
<Andrew_jedi> jamespage: Thanks a ton :)
<Andrew_jedi> \O/
<jamespage> Andrew_jedi, track the bug - there will be an automatic comment telling you how to test it when it gets uploaded
<Andrew_jedi> jamespage: Roger that!
<bbaqar__> exit
<MrDan> hi guys
<MrDan> is rc2 out on ppa?
<rick_h_> MrDan: not yet, it'll be late today.
<MrDan> great, thanks
<rick_h_> MrDan: we're working on getting the CI run with the network fix for the NG thing through so hasn't started to build the release yet
<lazyPower> o/ Morning #juju
<pitti> hello
<pitti> I just tried to redeploy a service with juju-1.25 in Canonical's Prodstack; before it even gets to the actual charm, the agent install fails with
<pitti> 2016-09-29 13:44:00 WARNING juju.worker.dependency engine.go:304 failed to start "leadership-tracker" manifold worker: dependency not available
<pitti> 2016-09-29 13:48:30 ERROR juju.worker.uniter.filter filter.go:137 tomb: dying
<pitti> (the latter repeats over and over)
<pitti> does that ring a bell?
<pitti> (it's a xenial instance)
<lazyPower> pitti:  i'm not seeing anything in the issue tracker that looks relevant
<lazyPower> pitti: can you bug that along with the controller logs and machine-agent logs if there are any on the unit thats failing?
<pitti> lazyPower: yes, there are; I'll do that
<lazyPower> thanks, sorry about the inconvenience :/
<bdx_> lazyPower: sup
<bdx_> lazyPower: I'm going to be on a fire mission to re-write the elasticsearch charm
<bdx_> lazyPower: we require a client node architecture
<lazyPower> bdx_: I'm OK with this - but i have 2 things to put out there
<lazyPower> 1) retain all the existing relations, 2) deploy the old charm and upgrade to the new one to make sure it's a drop in replacement for existing deployments (if you're keeping trusty as the target series)
<lazyPower> (or multiseries)
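One way to run the drop-in check lazyPower describes, assuming a juju 2.x client and that the rewritten charm build lives in ./builds/elasticsearch (paths are placeholders):

    juju deploy cs:trusty/elasticsearch
    # ...relate it, load some data, then swap in the rewrite:
    juju upgrade-charm elasticsearch --path ./builds/elasticsearch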
<bdx_> ok, xenial will be my target .... do I need to support trusty too?
<lazyPower> well, there's a lot of elasticsearch deployments out there on our trusty charm
<lazyPower> so, maybe once there's a straw man poll the list and we go from there?
<lazyPower> s/straw man/straw man,/
<bdx_> perfect
<lazyPower> i doubt they will want it, as es upgrades between major versions is hairy
<lazyPower> requires data dump + data restore in most cases
<lazyPower> bdx_: are you using logstash in any of your deployments?
<bdx_> not yet
<bdx_> why, whats up with it?
<lazyPower> its about to get some TLC after i get ceph moving
<lazyPower> i need to buffer beats input
<lazyPower> i've discovered that you can tank an eS instance with beats fairly quickly
<bdx_> lazyPower: really?
<bdx_> good to know
<lazyPower> yep. i had an 8 node cluster pushing to an underpowered ES host last night that died
<bdx_> wow
<lazyPower> when i buffered it through logstash i had a better guarantee of the packets coming in at a consistent rate and it didn't tank the database
<bdx_> that makes sense
<lazyPower> what really happened is it was taking too long to respond to kibana so kibana thought the es adapter was dead
<lazyPower> its a fun set of dependencies... the logrouter is a more important thing than i gave it credit
<bdx_> I'm pretty sure I've experienced what you've described
<bdx_> I just assumed kibana was bugging out
<lazyPower> its a hair more complex than that
<lazyPower> but you had it mostly right
<bdx_> right
<lazyPower> i feel that kibana should be more intelligent with what its reporting is the issue. just saying ES Adapter Failed isn't terribly helpful when you're staring at a 502 page
<bdx_> totally
<lazyPower> like "query latency" or "omg load wtf"
<bdx_> YES
<bdx_> my plan is to create layer-elasticsearch-base
<lazyPower> well bdx_, let me show you this
<lazyPower> https://www.elastic.co/products/watcher
<lazyPower> coupled with https://www.elastic.co/products/reporting
<bdx_> oooooh
<bdx_> thats sick
<lazyPower> elastic flavored prometheus?
<lazyPower> with the capacity to email reports on a daily/weekly/monthly basis of charts you define in kibana
<bdx_> wow
<lazyPower> i have no time to write this stack up
<lazyPower> but seems interesting
<bdx_> I want it
<lazyPower> so i thought i'd put it out there
<bdx_> thx
<lazyPower> i'm happy to patch pilot you in if you want to contribute these
<bdx_> I entirely do
<lazyPower> brb refreshon coffee
<bdx_> I'm currently charming up a set of 10+ apps
<pitti> lazyPower: I filed https://bugs.launchpad.net/juju/+bug/1628946
<mup> Bug #1628946: [juju 1.25] agent fails to install on xenial node <juju:New> <https://launchpad.net/bugs/1628946>
<bdx_> I think all but 1 uses elasticsearch
<bdx_> 3 apps are already being deployed as charms .... but I am under high load atm ... not sure if I'll be able to start hacking at it for a minute yet
<lazyPower> bdx_: sounds like a good litmus. you have some interfaces already written for you (a start with requires)
<bdx_> I'm trying to finish the other 7
<lazyPower> i'm curious to see your test structure for that bundle when its done :)
<lazyPower> pitti: i've +1d the heat. thanks for getting that filed
<bdx_> canonical needs to hire you an apprentice
<bdx_> bdb
<bdx_> brb
<lazyPower> pfft canonical needs to hire me 3 more mentors. Mbruzek is tired of nacking my late night code ;)
<pitti> lazyPower: cheers
<lazyPower> i clearly havent had the bad habbits beaten out of me yet
<lazyPower> s/habbits/habits/
<magicaltrout> hobbits?
<lazyPower> with their fuzzy feetses
<lazyPower> what are mesos my troutness?
<lazyPower> eh i was reaching there, disregard that last bit
<lazyPower> jose: i haven't forgotten that i still owe you some couch db testing time. hows tomorrow looking for you?
<jose> lazyPower: owncloud. do you have time in a couple hours? have to do some.uni stuff tomorrow
<lazyPower> jose: i can stick around after hours. i'm pretty solid today with post release cleanup
<lazyPower> but i can lend a hand w/ OC tests or couch tests. take your pick
<jose> OC, couch is being checked by couch, oc is an amulet thing (old amulet)
<jose> I should be home at around noon your time, does that work?
<lazyPower> i'm still going to be full tilt on current duties but i can TAL and MP's and test run results/questions
<lazyPower> s/and M/at M/
<lazyPower> btw, again, this coffee man... :fire:
<lazyPower> i've been spacing it out so it lasts longer
<jose> lol I can get you some more soon
<smgoller> hey all, so I'm using juju to deploy the openstack bundle, and I've got an external network with a vlan tag set up. The goal is assigning VMs IP addresses directly on that network so no floating IPs involved. The network is up, and I can ping openstack's dhcp server instance from the outside world. However, the VM I launched connected to that network is unable to get metadata from nova. How should I configure the bundle so that can work?
<smgoller> according to this post: http://abregman.com/2016/01/06/openstack-neutron-troubleshooting-and-solving-common-problems/ I need to set 'enable_isolated_metadata = True' in the dhcp agent configuration file. I'm not sure if that solves the problem, but is there a way from juju to add that configuration?
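If the deployed neutron-gateway charm revision exposes an option along these lines (the option name below is a guess; check the output of the first command before relying on it), it can be set from juju rather than by hand-editing the file, which the charm would otherwise rewrite on its next hook run:

    juju config neutron-gateway | grep -i metadata       # see what the charm actually exposes
    juju config neutron-gateway enable-isolated-metadata=true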
<smgoller> thedac: any ideas? instances directly connected to an external VLAN aren't getting cloud-init data properly. I can ping openstack's dhcp server address from the outside world, so connectivity is good.
<thedac> smgoller: Hi. Are you using the neutron-gateway at all? By default this is where we run nova-api-metadata. If not it can be run on the nova-compute nodes directly
<smgoller> there's no openstack router involved if that's what you mean
<smgoller> thedac: the gateway is external to openstack on the vlan
<thedac> right
<thedac> I mean are you deploying with our charm "neutron-gateway" in the mix?
<smgoller> yeah
<smgoller> this is your openstack-base bundle
<thedac> ok
 * thedac re-reads through the details
<smgoller> only thing that's configured in the charm is bridge-mappings and data port
<smgoller> i've created the vlan network manually
<thedac> Ok, does the VM get a DHCP address?
<smgoller> according to openstack it does
<thedac> you can figure that out from nova console-log $ID
<smgoller> thedac: ok one sec.
<thedac> smgoller: you are looking for something along the lines of http://pastebin.ubuntu.com/23252275/
<thedac> in that console output
<smgoller> definitely nothing like that
<smgoller> [FAILED] Failed to start Raise network interfaces.
<smgoller> See 'systemctl status networking.service' for details.
<smgoller> [DEPEND] Dependency failed for Initial cloud... job (metadata service crawler).
<smgoller> ack
<thedac> Any chance I can see a pastebin of that?
<smgoller> thedac: http://paste.ubuntu.com/23252290/
<thedac> thanks. Let me take a look
<thedac> smgoller: ok, the VM is definitely not getting a DHCP address. Do you know if your neutron network setup commands set GRE instead of flat network for your tenant network?
<thedac> Let me find you the right commands to check. One sec
<smgoller> thedac: I set up the network via horizon and set the type to vlan.
<smgoller> but yeah, let's verify
<thedac> sorry, struggling to find the right command. Give me a few more minutes
<smgoller> sure
<smgoller> thedac: thank you so much for helping me with this.
<thedac> ah, ha. I was not admin. neutron net-show $tenant_net   What does provider:network_type say?
<thedac> no problem
<thedac> smgoller: couple more questions in the PM
<spaok> whats the best way to get the IP of the unit running my charm?
<valeech> spaok: is it in a container?
<smgoller> anyone have any ideas why juju-list-models would hang for a very long time?
<smgoller> adding and switching models happens instantly
<spaok> valeech: yes, it will be
<spaok> i was looking at unit_get('public-address')
<marcoceppi> magicaltrout: I'm back in the US :)
<magicaltrout> booooo
<magicaltrout> i hope you're jetlagged
<spaok> when I try to build my charm I get "build: Please add a `repo` key to your layer.yaml", but I added repo to the layer.yaml
<spaok> oh nevermind, doc confusered me
<lazyPower> spaok: thats a known bug. the build process is linting the layer directory and not the output charm directory. https://github.com/juju/charm-tools/pull/256
<lazyPower> but glad you got it sorted :)
<spaok> is there a way to make config options required, like juju won't deploy a service unless you set the values?
<magicaltrout> nope, but you could block up the actual install until they are met spaok
<magicaltrout> then relay a status message like "waiting for configuration you moron"
<spaok> ok, I'll try that
<spaok> is the pythonhosted api for charmhelpers the most current?
<lazyPower> yep
<spaok> http://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.rsync
<spaok> cause it just says -r as the flags
<spaok> but when it runs it uses --delete
<lazyPower> spaok: https://bugs.launchpad.net/charm-helpers if you would be so kind sir
<spaok> sure, gonna see if I set flags if that changes it
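A small sketch of the workaround being tested (paths are placeholders): passing flags explicitly, which per the discussion above overrides the default behaviour that silently included --delete:

    from charmhelpers.core.host import rsync

    # Being explicit avoids relying on the helper's undocumented default flags.
    rsync('files/app/', '/srv/app/', flags='-a')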
<x58> What's the best place to ask for JuJu features that are missing and would make my life easier?
<x58> I filed a support case with my support team too... but I can make this one public.
<x58> marcoceppi / magicaltrout ^^
<magicaltrout> x58: depends what component i guess
<x58> Juju itself.
<magicaltrout> the core platform?
<spaok> lazyPower: I confirmed that if I set the flags it removes the delete option, I'll file a bug in a bit
<magicaltrout> x58: for juju core I believe its: https://bugs.launchpad.net/juju/+filebug
<magicaltrout> filing stuff there is good, and raising it on the mailing list with a usecase is a bonus
<magicaltrout> and recommended
<x58> magicaltrout: https://gist.github.com/bertjwregeer/919fe70e8cfc5184399d83ad11df3932
<x58> I want to report that. Where do I report that feature request?
<x58> Sorry, I try to be helpful, but signing up for another mailing list is not something I really want to deal with.
<magicaltrout> yeah stick it in launchpad x58
<x58> https://bugs.launchpad.net/juju/+bug/1629124
<mup> Bug #1629124: JuJu should learn about customizing configuration by tags <juju:New> <https://launchpad.net/bugs/1629124>
<magicaltrout> then prod rick_h_ about it ;)
<x58> rick_h_: *prod* https://bugs.launchpad.net/juju/+bug/1629124
<magicaltrout> he might know a thing or two that already exist... or just say "we'll schedule that for 2.x " ;)
<x58> magicaltrout: It probably will through the support organisation too ;-) dparrish is our DSE
<magicaltrout> always helps
<lazyPower> hey x58, how's etcd treating you these days?
<x58> lazyPower: It is working well. Sometimes you need to kick it once or twice when you spin up a new one due to some SSL cert issue
<x58> but resolved -r seems to make it behave.
<x58> And removing a running instance seems to fail about 30% of the time.
<lazyPower> x58: what if i told you, we're going to replace that peering certificate with a ca
<x58> No rhyme or reason.
<lazyPower> hmm. is it the last one around?
<lazyPower> i've observed where the leader seems to get behind in unregistering units and it tanks trying to remove a member
<x58> Nope, not the last one around.
<lazyPower> if you can confirm that i think i know how to fix it, and it would be *wonderful* if you could keep an eye out for that and do a simple is-leader check on the unit
<x58> Let's say I spin up 3 - 4 of them
<x58> I then remove 1
<x58> that 1 that I remove might or might not succeed in removal. Sometimes it hangs and a resolved -r kicks it.
<lazyPower> ah ok
<lazyPower> i'll add a scaleup/down test and try to root that out
<x58> etcd just seems finicky.
<lazyPower> terribly
<lazyPower> thanks for the feedback
<x58> lazyPower: Thanks for your work on it :-)
<x58> So long as I don't touch how many we have, things are fine :P
<smgoller> thedac: ok, so I've set up a second cluster. this time I'm seeing this error constantly in the /var/log/neutron/neutron-openvswitch-agent.log: 2016-09-29 21:49:14.443 127296 ERROR oslo.messaging._drivers.impl_rabbit [req-c79a053b-8c2e-45d6-9ebc-4fddab0cf279 - - - - -] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 32 seconds.
<thedac> smgoller: during deploy those messages are normal. After everything has settled and the rabbit connection info is in /etc/neutron.conf you should no longer see those.
<smgoller> ok
<smgoller> that makes sense
<admcleod> how does one deploy a multiseries charm locally?
<thedac> admcleod: juju deploy ./$CHARM --series $SERIES
<admcleod> thedac: juju 1.25?
<thedac> ah, for juju 1.25 you need the charm in a series named directory. juju deploy $SERIES/$CHARM
<admcleod> thedac: so... ive built it as multiseries, and it goes into ./builds, you saying just copy it into ../trusty/ ?
<thedac> yes, that should work
<admcleod> thedac: thanks
<thedac> no problem
<admcleod> thedac: (it worked)
<thedac> \o/
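Summarising the above as commands, assuming the built charm is in ./builds/mycharm (names are placeholders):

    # juju 2.x
    juju deploy ./builds/mycharm --series trusty
    # juju 1.25: copy into a series-named local repository first
    mkdir -p trusty && cp -r builds/mycharm trusty/mycharm
    juju deploy local:trusty/mycharm --repository .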
<spaok> magicaltrout: do you know any charm examples doing the blocking thing you mentioned?
<magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L19 spaok something like that but instead of the decorated class call
<magicaltrout> (hookenv.config()['pdi_url'] or whatever
<magicaltrout> and make sure its set to your liking
<spaok> so would I make a def for init and put in a call to check method like that one, and if it passes set my state and use a when decorator to look for the ok state?
<magicaltrout> correct spaok
<spaok> kk, thanks
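A minimal sketch of that config-gating pattern (the state and option names are made up for illustration):

    from charms.reactive import when, when_not, set_state
    from charmhelpers.core import hookenv

    @when_not('myapp.config.valid')
    def check_config():
        # Block until the operator sets the hypothetical 'required-option'.
        if not hookenv.config().get('required-option'):
            hookenv.status_set('blocked', 'waiting for required-option to be set')
            return
        set_state('myapp.config.valid')

    @when('myapp.config.valid')
    @when_not('myapp.installed')
    def install():
        # real install work goes here
        set_state('myapp.installed')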
#juju 2016-09-30
<bbaqar_>      /join #maas
<venom3> Hello, does anyone try to deploy openstack with juju 2.0 rc2?
<rick_h_> magicaltrout: x58 consider me prodded
<KpuCko> hello, is there any way to remove a unit which is stuck in error state but whose machine/agent is lost?
<lazyPower> KpuCko: juju remove-machine # --force
<lazyPower> KpuCko: or juju resolved --no-retry application/#
<KpuCko> lazyPower thanks a lot, you saved my day
<lazyPower> KpuCko: cheers :)
<KpuCko> cheers |_|)
<smoser> hey.
<smoser> i'm trying to get a cloud-utils upload into yakkety
<smoser> its blocked on juju's dep8 test
<smoser> due to failures on ppc64
<smoser>  http://autopkgtest.ubuntu.com/packages/j/juju-core-1/yakkety/ppc64el
<smoser> and
<smoser>   http://autopkgtest.ubuntu.com/packages/j/juju-core-1/yakkety/amd64
<smoser> (linked to from http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html)
<smoser> i really do not think cloud-utils is related at all, as the only change to that package is in mount-image-callback, which i'm pretty sure is not used by juju
<smoser> anyone able to help refute or confirm that ?
<kwmonroe> smoser: you probably want #juju-dev.  the words you're using are too big for us.
<smoser> thanks :)
<kwmonroe> marcoceppi: do OSX people need to brew install charm or charm-tools (or both)?
<cclarke> With ubuntu 16.04, Maas 2.0, and juju 2.0, when I bootstrap an environment (juju bootstrap bssdev devmaas) maas takes a machine and deploys it and it becomes a controller. However, that server is not listed as a machine in juju to deploy things to it. When previously using ubuntu 14 with maas 2.0 and a previous version of juju, the new bootstrapped environment takes a machine and deploys it as a controller and you can deploy charms to
<cclarke> it in lxc containers. With the latest version, how do you bootstrap the environment in a way that it is seen as a machine in juju that charms can be deployed to?
<anastasiamac_> cclarke: I *think* u can switch to a controller model and deploy to the controller machine. If u do 'juju models" it'll tell u which model u r in and then just switch to the desired one
<cclarke> anastasiamac_: Thanks, that looks like the way to go.
<admcleod> will update-status fire on a blocked unit?
<kwmonroe> yup admcleod, though curiously it looks like it's running every 25 minutes (i thought it was every 5):  http://paste.ubuntu.com/23256225/
<admcleod> kwmonroe: beisner ah!
<admcleod> 25 is... apparently there's a back-off
<kwmonroe> yeah admcleod, that must be a thing for blocked.. my 'active' charms run update-status every 5.
<admcleod> kwmonroe: if you do a -n1000 |grep update-status ?
<x58> Is anyone here from the OpenStack charmers?
<kwmonroe> x58: admcleod and beisner are openstackers.. cargonza can rattle off more names if those folks aren't around.
<cargonza> x58 - you can reach the openstack charmers also at #openstack-charms
<x58> Gotcha. Just looking at some behaviour in one of the charms that doesn't make much sense to me. Working it through with our on-site DSE dparrish at the moment, will see if I have more questions/concerns.
<smgoller> hey all, any ideas on debugging "hook failed" errors?
<kwmonroe> sure smgoller -- first, what does 'juju version' say?
<smgoller> 2.0 beta 18
<smgoller> i may actually want to ask this question on #openstack-charmers, because it's related to neutron-openvswitch
<kwmonroe> smgoller: you've got a few options.. first, you can ...
<kwmonroe> DONT YOU LEAVE ME FOR OPENSTACKERS
<smgoller> haha
<smgoller> i'm still here
<smgoller> no worries
<kwmonroe> smgoller: as i was saying, you can "juju debug-log -i unit-<app>-<num> --replay"
<kwmonroe> so like 'juju debug-log -i unit-foo-0 --replay'
<kwmonroe> that might get you enough debug info to know why the hook failed
<smgoller> ooo
<smgoller> yup, it did
<smgoller> awesome!
<kwmonroe> smgoller: next, you can do "juju debug-hooks foo/0", and in another terminal, run "juju resolved foo/0"
<kwmonroe> the debug-hooks window will trap at a point where you can run the hook manually
<kwmonroe> so in that window, you'd run something like "./hooks/install", replacing "install" with whatever hook failed.
<kwmonroe> smgoller: and finally, if it's truly a neutron-openvswitch specific failure, #openstack-charmers would probably be the best place for help :)
<smgoller> hm. so i'm on the machine, but in the home dir
<smgoller> when i run juju debug-hooks, that is
<smgoller> so where do i go to find the hooks?
<kwmonroe> smgoller: you'll need to tell juju to retry the failed hook in another terminal..  debug-hooks will sit in the home dir until a hook fires
<smgoller> ok
<kwmonroe> and then it'll switch you to the charm dir
<smgoller> is that what resolved will do?
<kwmonroe> ah crud smgoller.. you said beta 18.
<smgoller> should i upgrade the jujus?
<kwmonroe> i think in versions < rc1, the command would be "juju resolved --retry foo/0"
<kwmonroe> the --retry is default in rc1, maybe not in beta18
<kwmonroe> smgoller: if 18 is working for you, you can keep hacking, but if you do upgrade to the latest (rc2), you won't have to type "--retry".  :)
<smgoller> kk
<marcoceppi> kwmonroe smgoller it's #openstack-charms
<kwmonroe> ack, thx marcoceppi
<kwmonroe> marcoceppi: since you're here.. what harm may come from blessing mysql admins with the grant option?  https://github.com/marcoceppi/charm-mysql/pull/6
<anita_> Hi
<kwmonroe> hi anita_
<anita_> when I am trying to get the services when relation_name.departed, I am getting 5 times the same relation name
<marcoceppi> kwmonroe: I have too many github emails to sift through
<anita_> Hi Kevin
<anita_> This i am getting as I have joined the relation 5 times and departed 5 times
<kwmonroe> no worries marcoceppi -- i'm just not versed enough in mysql to know if adding "with grant" to admins was omitted for a reason.  take your time on the sifting.
<anita_> my relation state is something like this "messaging.departed|{"relation": "messaging", "conversations": ["reactive.conversations.messaging:19.wasdummy", "reactive.conversations.messaging:24.wasdummy", "reactive.conversations.messaging:25.wasdummy", "reactive.conversations.messaging:26.wasdummy", "reactive.conversations.messaging:27.wasdummy"]}"
<anita_> when trying to get services, I am getting 5 times wasdummy as services
<kwmonroe> anita_: are there 5 wasdummy charms deployed?
<smgoller> hm.
<anita_> kwmonroe_:no
<anita_> only one
<smgoller> so, like a bobo I just upgraded juju, and now when i run 'juju status' it tells me 'ERROR "" is not a valid tag'. Any ideas?
<smgoller> I need to upgrade the controller?
<anita_> my provider relation scope is service level
<kwmonroe> anita_: so it sounds like old conversations aren't being removed.  i dunno if that's by design or not.. bcsaller, should 1 charm keep old relation conversations after joining and departing multiple times?  (see anita_'s state output from a couple minutes ago)
<kwmonroe> smgoller: did you run 'juju upgrade-juju'?
<smgoller> i did not :)
<kwmonroe> why not?
<kwmonroe> :)
<anita_> kwmonroe_:How can be removed the old conversations?
<smgoller> because i did an apt upgrade?
<smgoller> my juju-fu is weak
<kwmonroe> anita_: i'm not sure if you're supposed to.  i need to defer to bcsaller or maybe marcoceppi to know if those conversations are meant to stick around on a service scoped relation.
<smgoller> so juju upgrade-juju says no upgrades available
<kwmonroe> hmph smgoller.. that sounds fishy
<smgoller> ayup
<kwmonroe> juju version for you now shows rc2?
<smgoller> yep
<kwmonroe> smgoller: and the 2nd line of 'juju status' shows what for the version?  2.0-beta18?
<smgoller> juju status says 'ERROR "" is not a valid tag"
<kwmonroe> lol, shoot.. sorry, i forgot you already said that.
<smgoller> no worries :)
<kwmonroe> smgoller: maybe 'juju upgrade-juju --version 2.0-rc2'
<kwmonroe> smgoller: and if worse comes to worse, would you be willing to destroy the controller and rebootstrap with rc2?
<smgoller> yeah
<smgoller> it's prod-not-prod
<smgoller> :)
<kwmonroe> :)
<smgoller> so --version doesn't exist, but --agent-version does. is that what you meant?
<kwmonroe> smgoller: maybe.. i'm on rc1 and see a --version.  but --agent-version sounds good too.  i'll go to rc2 and see if that option has been renamed.
<smgoller> trying that results in "ERROR no matching tools available"
<kwmonroe> ah rats
<smgoller> o_O :)
<kwmonroe> smgoller: i have some great news:  juju rcX support upgrades going forward.  i have a bit of bad news:  juju betaX may not.
<smgoller> hahaha
<smgoller> no worries.
<kwmonroe> smgoller: if you're really closer to not-prod, i'd just 'juju destroy-controller X --destroy-all-models' and re-bootstrap.  if you're closer to prod, we might need some bigger guns to get you upgraded.
<smgoller> it's not sufficiently prod to get more involved
<kwmonroe> nice.. i haven't heard 'not sufficiently prod' before, but i'm gonna start using it.
<smgoller> forgive my ubuntu fu, but is there a way to roll back juju locally to beta18?\
<smgoller> and the answer is i can't go back. that's fine.
<smgoller> keep moving forward!
<kwmonroe> smgoller: i was poking around to try a roll back, but i don't see beta18 in the repo anymore (apt-cache madison juju).. so i'm not sure how you'd go back without finding a beta18 deb somewhere and manually creating a headache.
<smgoller> kwmonroe: yeah, it's fine. I'm just going to nuke the site from orbit
<kwmonroe> always a good decision
<smgoller> oof, i may not even be able to destroy the controller >_>
<smgoller> all right, time to nuke from maas
<kwmonroe> smgoller: if you do nuke it from maas, you'll probably want to 'juju unregister <controller-name>' so juju knows it's not around anymore
<smgoller> i blew the juju config away too :)
<smgoller> re-adding maas to a fresh juju config isn't that bad
<kwmonroe> heh.. whatever makes you happy!
<smgoller> if the differences are significant enough, it's probably best to start from scorched earth anyway :)
<kwmonroe> amen!
<kwmonroe> be funny if the only differences were using '--retry' by default and renaming '--version' to '--agent-version'
<kwmonroe> ya know, for some definition of funny
<smgoller> well
<smgoller> if the upgrade path is broken regardless, at some point i would have had to go through this pain
<smgoller> better to rip the bandaid off now
<chz8494> Hi Guys, does anyone know how juju 2.0 defines the lxd profile in a bundle yaml for xenial?
<papertigers> Can anyone tell me how juju determines what image (ami if amazon) it's trying to use when bootstrapping
<papertigers> in this case im testing on joyent and im getting
<papertigers> ERROR failed to bootstrap model: cannot start bootstrap instance: no "xenial" images in us-east-1 with arches [amd64 arm64 ppc64el s390x]
<papertigers> and there is an ubuntu certified 16.04 KVM image in that region
<stokachu> chz8494: nope
<stokachu> chz8494: not implemented
<stokachu> chz8494: though you can edit the lxd profile after it's running as long as you know the model name
<stokachu> without having ot reboot the container or anything
<chz8494> stokachu: I have predefined the default lxc profile, and the yaml-deployed services are deployed to the host's lxd containers, which use the default profile from what I observed
<stokachu> yep, if you have juju-default defined it'll use that
<chz8494> stokachu: where do you define juju-default?
<stokachu> its the default lxd profile that gets created when you do a new juju bootstrap
<stokachu> with 2.0
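Roughly what that looks like, assuming the profile juju created is the 'juju-default' one stokachu mentions (the key set below is only an example):

    lxc profile show juju-default
    lxc profile set juju-default security.privileged true
    lxc profile edit juju-default      # or edit the whole profile interactively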
<chz8494> are you talking about deploy juju bootstrap on lxd?
<stokachu> huh?
<chz8494> i'm talking about deploying openstack components to lxd
<stokachu> 16:00 < chz8494> Hi Guys, does anyone know how juju 2.0 define lxd profile in bundle yaml for  xenial?
<chz8494> I don't see juju-default in my lxd
<stokachu> i guess i missed that somewhere
<chz8494> sorry, the bundle yaml I meant was for openstack
<chz8494> not juju config yaml
<chz8494> in my test, I predefined lxd default profile, and then run yaml to deploy openstack services into lxd, but juju somehow always overwrite this profile
<chz8494> and seems the deployed lxd instance is hard pinned with lxdbr0, as if I change the profile to use my own bridge, it will complain about missing lxdbr0
<chz8494> so in juju 2.0, is there a way to define which profile to use or eth binding when deploying lxd instance?
<beisner> hi chz8494, we adjust the default lxd profile in this procedure, which might be similar to what you're trying to achieve. http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
<LeetaL_> Hi all! Dumb question, but i cannot seem to find out how to change the JUJU API address when bootstrapping with an LXD container... Someone that knows how to accomplish this? I get the following error: (2016-09-30 21:16:30 ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://172.16.0.1:8443/1.0: Unable to connect to: 172.16.0.1:8443) and there it seems like it has taken the gateway ip instead of my JUJU API addr
<kwmonroe> papertigers: not sure if it's the same for all cloud providers, but if i add '--debug' to the bootstrap command for azure, it lists available images and selects one for me, like this:
<kwmonroe> 22:13:30 INFO  juju.environs.instances image.go:106 find instance - using image with id: Canonical:UbuntuServer:16.04.0-LTS:latest
#juju 2016-10-01
<smgoller> Just like to reiterate to everyone who works on juju: Thanks. This has made my life so much easier. I was able to destroy and redeploy an openstack cluster in a few hours with little intervention.
<Shashaa> Is anybody active here ?
<Shashaa> Has anyone tried adding multiple units of openstack charms which use haproxy ?
<Shashaa> I'm seeing an error while I add multiple units of trove to openstack charm deployment, probably they are using same haproxy and juju relations are ending up in error state
<marcoceppi> Shashaa: I have not seen that, but #openstack-charms might be a better place for support about Juju OpenstStack Charms
<Shashaa> @marcoceppi thanks for that, I will join that group and scout for some help
<Shashaa> JOIN /#openstack-charms
#juju 2016-10-02
<MrDanDan> hei guys
<MrDanDan> I still have the neutron-gateway issue with rc2
<MrDanDan> when neutron-l3-agent, neutron-metadata-agent, neutron-dhcp-agent, nova-api-metadata won't get installed
<MrDanDan> i just did a deployment
#juju 2017-09-25
<chamar> Hi.  I see there's 2 way to install conjure .. apt-get install and snap.. which one should be used?
<fallenour> o/ Morning all! probably most important week of my life, so if I lose my shit at some point this week, thats probably from all the stress and pressure, just an fyi in advance
<lutostag> hey guys... edge is kinda broken, for us in CI... https://bugs.launchpad.net/juju/+bug/1719328 if anybody can take a quick peek
<mup> Bug #1719328: edge: bundle is blocked by relations not allowed to non-leader <cdo-qa> <cdo-qa-blocker> <juju:New> <https://launchpad.net/bugs/1719328>
<fallenour1> o/
<chamar> Hi. Not sure if it's the good place but I'll try. I'm trying out MAAS using virsh and when I compose a server, it always look for virtual network called either "default" or "maas".  Is there a way to use a bridge connection instead?
<fallenour> chamar: you can configure that in maas in the interfaces section
<chamar> fallenour, thanks.  once the vm is created or under the controller itself?
<fallenour> chamar: it should be in your maas configuration itself on your http://IP_ADDR/MAAS
<chamar> hum. can't find anywhere that specifies which virtual connection to use when creating a VM (I added my kvm host as a Pod.. that seems fine)
<chamar> I think I found the open bug regarding my little issue: https://bugs.launchpad.net/maas/+bug/1697108
<mup> Bug #1697108: Nodes created in virsh pod always use 'default' virtual network for NIC <cdo-qa> <foundations-engine> <pods> <MAAS:Triaged> <https://launchpad.net/bugs/1697108>
<fallenour> @rick_h @stokachu @catbus where is the openstack dashboard in the horizon container? I need it for nginx, and I cant find where its at
<catbus> fallenour: does the http://<horizon container IP>/horizon work?
<fallenour> catbus: yea it works, and it loads appropriately, what I want to do is receive all traffic inbound to parent domain and all subdomains, and forward accordingly from that nginx box to the relevant system, to include horizon
<catbus> I am not familiar with nginx, maybe someone else on the channel does.
<catbus> s/does/is.
<fallenour> does anyone know where the root directory of horizon is? I checked, and its not /var/www/html , that loads the apache index default page.
<bdx> fallenour: horizon is a django/angular app that is really a combination of these top level packages installed from the cloud archive apt repos https://github.com/openstack/charm-openstack-dashboard/blob/master/hooks/horizon_utils.py#L105,L108
<bdx> fallenour: the relevant dirs and config for openstack-dashboard (horizon) are defined here https://github.com/openstack/charm-openstack-dashboard/blob/master/hooks/horizon_utils.py#L111,L126
<fallenour> bdx: What can i point nginx at? It needs something to load. Is there a master index.html file it uses?
<bdx> fallenour nah ... why don't you just relate the haproxy or hacluster charm to it?
<fallenour> bdx: Its been a looooon two weeks my friend
<fallenour> bdx: The word proxy is like acid to my feels :'(
<bdx> fallenour: juju relate horizon-haproxy openstack-dashboard
<bdx> where horizon-haproxy is really just `juju deploy haproxy horizon-haproxy`
<fallenour> bdx: http://eduarmor.com www.eduarmor.com eduarmor.com:80/
<fallenour> this is the kinda madness ive been dealing with for a while now.
<bdx> ok, so what you are looking for is an nginx redirect to your haproxy endpoint
<bdx> fallenour: this is simple stuff, you just have to take a logical approach
<fallenour> bdx: https://www.dropbox.com/s/4xf66mm0rudvub4/WAN%20Logical%20Diagram%20for%20Nginx.pdf?dl=0
<bdx> fallenour: gotcha
<fallenour> Ive been working so hard on it, I doodled into a visio. Ooh, and got banned from Nginx. Lost my shit this morning I did. Figured I was about to. I start to get that low burning feeling, its like a flaregun
<bdx> fallenour: very familiar with this architecture ... I've been running home labs for a while my friend
<bdx> :)
<bdx> so ... what you need is an haproxy redirect
<bdx> or nginx redirect
<fallenour> bdx: I kept getting the "draft out the problem statement" from those guys. over 10 days, same damn problem
<fallenour> bdx: yeap. just gotta figure that out
<bdx> so, I see whats up, where is horizon?
<fallenour> bdx: the overall idea is to push from Nginx to HAProxy to apache
<bdx> what ip?
<bdx> yeah
<fallenour> bdx: bottom far right
<bdx> oh gotcha
<fallenour> bdx: There would have been a me crying in the corner, but I didn't fit. neither did pickle rick
<fallenour> bdx: right now the site that is live is directly in the nginx /var/www/html directory, exactly what I DIDNT want to do, but I need to be live. I'm almost out of time
<bdx> fallenour: haha, ok, check it -> https://gist.github.com/jamesbeedy/580efc6c8da25e7c4dbab0bd5c1d0657
<bdx> let me know if you can make sense of what to do with that
<bdx> on your nginx server, `sudo apt-get remove --purge nginx; sudo apt install haproxy -y`
<bdx> then put that haproxy.cfg config ^ in /etc/haproxy/haproxy.cfg
<bdx> and ensure you configure it to point to your local 10.0.0.x ip addresses
<fallenour> bdx: Oh, its been a pure week of satan my friend. balance least connect >>> round robin, default of 1? Nginx is weighted 1 by default
<fallenour> bdx: im guessing /etc/haproxy/haproxy.conf ?
<bdx> fallenour, yeah
<bdx> pay no attention to those configuration details
<fallenour> bdx: ooo but its where the magic happens o.o
<bdx> you can fine tune it once you have the base implementation down
<fallenour> bdx: yea, Ill praise the sun once its just running
<bdx> yeah thats what I'm saying
<bdx> don't worry about tuning anything until you have a straw man at least
<fallenour> if I can just Physical HAProxy >>> logical HAProxy I might actually cry
<fallenour> Or howler monkey screech, I cant tell at this point anymore.
#juju 2017-09-26
<fallenour> o/
<fallenour> bdx: you on?
<rick_h> Early fallenour
<fallenour>  rick_h Im on the HUNT! Im soooo close o.o I CAN TASTE IT!
<fallenour> rick_h: do I need to setup a relationship between openstack horizon and haproxy in order for it to process proxy requests properly? Im assuming yes, and my brain says I should, but my energy drink hasnt kicked in yet. Im assuming its juju add relationship haproxy:? openstack-dashboard ??
<thumper> magicaltrout: which room are you in?
<rick_h> fallenour: yes, you need a relation so horizon can tell haproxy about its details, port, ip, etc.
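In command form, a sketch of what bdx and rick_h describe (the 'horizon-haproxy' application name is just a label):

    juju deploy haproxy horizon-haproxy
    juju add-relation horizon-haproxy openstack-dashboard
    juju expose horizon-haproxy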
<magicaltrout> thumper: none at the moment, i've taken refuge in my room to get some stuff done before my bosses kick off, i'll be kicking around a bit later
<thumper> magicaltrout: ok, would be good to meet in person
<magicaltrout> sure thing come and grab me at some point i'm just rotating around various groups to catch up with folks and discuss use cases etc, happy to chat whenever same for you icey just accost me when you see me ;)
<thumper> magicaltrout: let me know when you decide to emerge from your room for social interaction :)
<admcleod> having trouble bootstrapping 2.2.4 / s390x, says "cannot package bootstrap agent binary" - does some build somewhere need to be kicked?
<stormmore> o/ juju world
<dannf> is there a way to include an image in the README.md such that it'll be rendered on jujucharms.com?
<kwmonroe> dannf: ~tengu-team makes some of the nicest readmes i've seen:  https://jujucharms.com/eclipse-che/.  they do it like this:  https://api.jujucharms.com/charmstore/v5/eclipse-che/archive/README.md
<dannf> kwmonroe: ah, ok - maybe the problem is that i was trying to use relative syntax, not a fully-qualified URL
<dannf> e.g. ![alt text][images/diagram.png "Diagram"]
<dannf> kwmonroe: there doesn't happen to be a way to preview jujucharms.com rendering w/o publishing, is there?
<kwmonroe> yeah dannf, if you have an image baked into the charm source, maybe you can use https://api.jujucharms.com/charmstore/v5/<charm>/archive/images/diagram.png as the url
<kwmonroe> dannf: i have yet to find a way to pre-render without pushing to the store
<kwmonroe> i mean, you can get close with any markdown renderer, but the full md syntax isn't supported on the store.  that's why i have triple digit revision counts on most of my charms ;)
<dannf> kwmonroe: :)
<rick_h> dannf: right, we just render the markdown and the relative urls don't work since the urls aren't right
<rick_h> dannf: as kwmonroe mentions you could link to the api of the image if it's in the charm or any other image link that you've got up somewhere.
<dannf> rick_h: *nod* - yeah - makes sense. relative links work in github, so i tried that before asking - but yeah, an api url will work for me. does the api url have to be versioned, or can i point to the "latest stable"?
<dannf> or latest version, or whatever
<rick_h> dannf: yes, if you leave off the version it'll auto grab the latest stable
<dannf> great
<rick_h> dannf: just be aware that it can be confusing then if someone loads an earlier version/unstable channel and such
<rick_h> dannf: since everyone will get the latest image regardless
<dannf> rick_h: yeah, understood
<fallenour> rick_h: catbus having issues executing commands on my juju controller all of a sudden, and it won't take my juju grant <username> superuser --model <model> command
<fallenour> any ideas?
<rick_h> fallenour: use --debug with the command. Have to flesh out "having issues" a bit more
<rick_h> fallenour: can you connect to it (network connectivity issues?) or is it experiencing load? or is the juju controller not responding because jujud is not running for some reason?
<fallenour> rick_h: figured it out. I had read and write, but not admin. Im guessing I fat fingered while adding other users lol, my bad man.
<rick_h> fallenour: gotcha good stuff
<nevermind> Anyone know of a good comparison doc between juju and terraform?
<marcoceppi> ejat: o/
<ejat> okie
<ejat> sorry
<ejat> im trying to add canonical k8s to rancher20
<marcoceppi> nevermind: not a doc, but the general principles are dynamic infrastructure vs static
<ejat> Unable to connect to the server: x509: certificate signed by unknown authority
<marcoceppi> ejat: where are you getting that message?
<ejat> fenris@X240:~â« kubectl apply -f https://13.76.162.103/v3/scripts/2A35D059FB34679120D5:1483142400000:CS7QDnEJ07bztggKHaToPtHmA.yaml
<ejat> Unable to connect to the server: x509: certificate signed by unknown authority
<marcoceppi> can you run kubectl with the --debug flag? does `kubectl cluster-info` work?
<fallenour> rick_h: hey whats the command to deploy an empty container to a specific machine? Ive already tried juju deploy lxd --to 10/lxd/<new container number> did i do that right?
<ejat> cluster-info work
<marcoceppi> ejat: https://13.76.162.103/v3/scripts/2A35D059FB34679120D5:1483142400000:CS7QDnEJ07bztggKHaToPtHmA.yaml that URL gives me a bad cert authority
<rick_h> fallenour: use add-machine
<marcoceppi> is that a URL from your cluster?
<fallenour> rick_h: use add machine to add another container to an already existing machine?
<rick_h> fallenour: right, you're just wanting an empty "machine" without deploying any charm/bundle to it right?
 * rick_h dbl checks he's following
<fallenour> rick_h: yea
<ejat> marcoceppi: http://paste.ubuntu.com/25621815/
<marcoceppi> it is from your cluster, okay
<fallenour> rick_h: am I crazy? isnt it juju add-machine --to 10/lxd/6
<fallenour> am I doing that right?
<fallenour> rick_h: ooh wait, no im wanting to add an additional empty lxd container
<fallenour> rick_h: I misinterpreted that
<marcoceppi> ejat: I'm not sure why, but it's using a self signed certificate for that URL. If you open it in your browser you get the cert warning - then it prompts you for a username and password
<ejat> yeap
<marcoceppi> ejat: are you able to log in?
<ejat> yes
<marcoceppi> ejat: can you just copy the YAML and save it to a local file, then kubectl apply -f <file> ?
<marcoceppi> ejat: also, if you link me the instructions for rancher20 I'd like to give it a try myself
<ejat> http://picpaste.com/Rancher20-3-zAo5Jo3a.png
<rick_h> fallenour: right, so you want an empty lxd container and you add that with juju add-machine with a placement directive
<ejat> marcoceppi: http://rancher.com/rancher2-0/
<rick_h> fallenour: so like juju add-machine --to lxd:3
<rick_h> fallenour: check out the --help on add-machine for more details
<marcoceppi> ejat: thanks, we're doing a webinar with them soon so if there are problems we'll make sure to get them smoothed out
<ejat> thanks .. is the webinar public?
<marcoceppi> ejat: yup!
<marcoceppi> I'll ping you a link when more information is published
<ejat> is it on the next meetup ?
<ejat> Attend our monthly online meetup on Thursday October 5 (1 PM US Eastern Time) when we'll demo the cool new features in Rancher 2.0.
<ejat> hopefully ill be in front of the screen while u ping me
<marcoceppi> ejat: it's not, it'll be a little after that
<ejat> okie .. thanks marcoceppi
<ejat> at least i already passed some point for you to check with rancher team
<ejat> :)
<ejat> thanks again
<dannf> kwmonroe, rick_h : nice - got it working :) https://jujucharms.com/u/dannf/scalebot-jenkins/1
<rick_h> dannf: woot woot
<rick_h> dannf: got a 404, not yet grant to everyone?
<dannf> rick_h: oh right, fixed
<dannf> rick_h: oops - granted to "all" instead of "everyone" - which didn't return an error, but was obviously wrong. really fixed now
<rick_h> dannf: yea, all might be a valid sso username who knows :)
<rick_h> dannf: ooh, look at the shiny picture :)
#juju 2017-09-27
<magicaltrout> is it time for beer yet?
<rick_h> magicaltrout: wfm
<D4RKS1D3> Hi, I am looking for some documentation of the scheduler in Juju, thanks
<rick_h> D4RKS1D3: what do you mean by the scheduler?
<admcleod_> thumper: what room are you guys in?
<thumper> admcleod_: 1409
<rick_h> I need a juju show theme song. Something to get everyone pumped up ahead of things
<rick_h> dum dum dum da da dum dum dum
<rick_h> Juju show in 51minutes reminder wheeee
<zeestrat> rick_h: Can't go wrong with some 80's glam metal: https://www.youtube.com/watch?v=CmXWkMlKFkI
 * rick_h chants "go big hair go big hair!" and clicks link
<rick_h> zeestrat: lol, there you go. I want to hear screaming fans before I click that "go live" button
<zeestrat> rick_h: Question for y'all here or on the show. What's the status of the juju snap channels. I'm a bit confused as I thought it went from edge>beta>candidate>stable, however right now stable and beta has 2.2.4 while candidate has 2.2.5+2.2-d75e780
<rick_h> zeestrat: oh hmmm...well that sounds like an oops skipping a channel there.
<rick_h> balloons: ^
<rick_h> thumper: juju show in 5 if any sprinters want to step aside and join in the fun
<balloons> zeestrat, yes, that sounds about right. We're prepping to release a beta of 2.3 that will soon hit that channel
<balloons> so for the moment, yea, it's a little odd that beta is technically behind, but we don't want to ship any further 2.2 series into that channel. So it's just been matching the stable channel
<rick_h> https://hangouts.google.com/hangouts/_/yckznpagk5e4ladwik7ah4yvk4e to join the party live and https://www.youtube.com/watch?v=_yMx129uhYc to watch
<zeestrat> balloons: Gotcha. Thanks for the info!
<rick_h> balloons: but there's 2.2.4 and 2.2.5?
<balloons> rick_h, there's no intention for a 2.2.5, but if one where to come, that's what it would be called :-)
<rick_h> balloons: yea, I was expecting that to be the same as stable tbh
<balloons> rick_h, I thought it made sense to push 2.2.5 into it for folks who might want to try it. zeestrat, rick_h, essentially we've gone with edge == tip of development, beta == "stable" development with a published agent, candidate == "tip of latest stable", stable == latest stable
<rick_h> balloons: oic, I wasn't aware of that ok.
<rick_h> bdx: you joining today?
<kwmonroe> ah crud rick_h, me chromiums burned down
<kwmonroe> restarting, biab
<kwmonroe> now to reopen 74 tabs :/
<rick_h> doh! wrecking crew through kwmonroe's house
<Keenlovel> https://www.youtube.com/watch?v=_yMx129uhYc Is this a repeat ?
<zeestrat> balloons: Are there any docs highlighting that on jujucharms.com? I just was just a bit jarring as I've been reading charm store docs where I'm interpreting those channels to be a bit differently (mind you I understand that charm store and juju core are two entirely separate things :) ) https://jujucharms.com/docs/2.1/charms-deploying#channels
<kwmonroe> oh noes rick_h!  i'm no longer allowed to join the vid.. did you guys go live?
<rick_h> Keenlovel: not yet
<rick_h> kwmonroe: so I kicked you thinking you'd rejoin
<rick_h> Keenlovel: we're about to go live once kwmonroe gets his puter to behave
<kwmonroe> i'm back!
<magicaltrout> jesus
<balloons> zeestrat, they should be mirrrored, and they page you linked has it correct
<magicaltrout> i should dial in and walk into the charm engineering room
<balloons> zeestrat, I was just attempting to stretch the idea of candidate containing a release candidate -- technically it's not yet an rc since we don't do rc's for minor point releases
<balloons> zeestrat, and there are currently no betas for juju, that's why it's the same as stable
<zeestrat> kwmonroe: Yeah, I was wondering what was up the PPA's too
<balloons> would it make more sense to you to have beta also contain 2.2.5?
<kwmonroe> +1 balloons.  if the devel ppa is to live on, i think it should at least be at the same level of the stable ppa.  and devel would get 2.2.5 before the stable ppa.
<Keenlovel> rick_h: check the question for marco c      - in the youtube chat windows - ty
<balloons> kwmonroe, devel won't get 2.2.5 before stable
<balloons> kwmonroe, that's kind of the distinction I was making
<kwmonroe> gotcha balloons -- what's the point of the devel ppa if it doesn't get the next release before stable?
<balloons> kwmonroe, it will get the next release before stable -- we just don't have any stabilized version of it yet
<kwmonroe> balloons!  you just said "kwmonroe, devel won't get 2.2.5 before stable", followed by "kwmonroe, it will get the next release before stable".  color me conflicted.
<balloons> kwmonroe, basically any X.Y.Z to X.Y.Z+1 version goes into the stable ppa directly. We don't have rc ppa's
<balloons> kwmonroe, that said X.Y to X.Y+1 will go into devel first
<thelaugher> rick_h: the new launchpad group (for jujucharm & Marco Ce ) is here https://is.gd/2YReLj - Please contact him on the mailing list to join this group as I am on mobile (please just email him for myself, thankyou)
<zeestrat> Just from the side regarding candidate channel, I expect it to be what I see in stable next as stated in [1], "candidate: used to vet uploads that should require no further code changes before moving to stable." so I was a bit confused when you say there will most likely not be a 2.2.5 release. [1]: https://snapcraft.io/docs/reference/channels#risk-level
<zeestrat> But if it's documented anywhere, that helps
<kwmonroe> ok balloons, i think i get it :)
<balloons> kwmonroe, basically 2.2 is stable, 2.3 is unstable. There are no released versions of 2.3 (yet). New versions of 2.2 are minor point releases and are just released
<kwmonroe> yeah balloons - i got ya.  it took seeing you write X.Y+1 for it to click for me
<balloons> kwmonroe, feedback welcome. I don't think you are the first to be confused. It's clear as mud for me
<balloons> zeestrat, I JUST tried the idea out with 2.2.5; so it's interesting you've noticed it already
<zeestrat> balloons: In the process of setting up some automated testing of my charms with travisci targeting different juju snap channels so that's why :)
<balloons> zeestrat, perhaps it's interesting that we would put something in rc we wouldn't end up releasing. It's certainly possible we do a 2.2.5, but it was planned as the last in the series before 2.3. However, in case we release it, having folks on the channel running it and talking about it is useful. We put the rc's for 2.2.3, 2.2.4 in there as well for the same reason. The interesting thing is now with 2.2.5, it may not see a release if 2.3 arrives as expected
<balloons> but for example, 2.2.4 wasn't as planned, per se, as 2.2.3
<SolutionJUJU> rick_h QUESTION: Where is the bug-list for the mysql jujucharm located at ?
<zeestrat> rick_h: Question: I know you're working on those magical docs with all tips and tricks for running juju in prod. Is there any work being done on the charm developer getting started guide? It could really use a bit of polish and a fully fledged example project that is kept up to date with an opinionated selection of the current best practice tooling (charmhelpers, automated functional testing with bundletester, amulet, matrix or what not). It still mentions the charm review queue for example.
<kwmonroe> https://github.com/marcoceppi/charm-mysql/issues for mysql bugs
<zeestrat> rick_h: Your screen is not in focus.
<zeestrat> kwmonroe: Can you shout at Rick ^
<zeestrat> Haha
<rick_h> zeestrat: sorry, for your question there on the docs. They're always getting some love. I don't know where the updates on the new review queue process and such are at atm. Will check it out.
<rick_h> and sorry for the screen driver fail. One of these days I'll figure out how this hangout/YouTube thing works.
<zeestrat> rick_h: Cool. You mentioned some blogpost by the kubernetes guys in the beginning? Care to share the link?
<rick_h> zeestrat: sure thing, it's https://insights.ubuntu.com/2017/09/27/patch-cdk-1-build-release/
<rick_h> and I'm updating the show notes as we speak
<kwmonroe> zeestrat: rick_h:  followup on the doc request.. i've opened https://github.com/CanonicalLtd/jujucharms.com/issues/493
<rick_h> ty kwmonroe
<kwmonroe> rick_h: https://www.youtube.com/watch?v=_yMx129uhYc&feature=youtu.be&t=1988
<rick_h> oh no.../me cringes
<zeestrat> kwmonroe: I created an issue over at juju/docs earlier (https://github.com/juju/docs/issues/2145). Not sure if that was the right place now. What's the diff between the two repos again?
<kwmonroe> zeestrat: heh, i really don't know.  i always use the jujucharms.com repo for issues with content on the web site.
<chamar> Hi.  Is it possible to use conjure-up on an already installed VM?  Just use conjure-up to deploy and bootstrap kubernetes, for example?
<marcoceppi> chamar: it is, if you install conjure-up on that VM and install LXD
<chamar> marcoceppi:  Oh.  You're the one who told me to have a look at conjure-up / MAAS on Reddit :)  But if I conjure-up on that server with LXD, it would be a local install (i.e. single-node cluster), right?
<marcoceppi> chamar: hey o/ so you want to use a few VMs of already installed Ubuntu?
<chamar> marcoceppi: Correct.  It did work perfectly fine with MAAS and a KVM backend.  It did create all the VMs and bootstrap k8s.  The issue is that MAAS looks for a virtual network called either "maas" or "default".  I want my VMs to use a bridged adapter instead.
<chamar> So I was thinking of installing my VM, and then use conjure-up to bootstrap my cluster by maybe modifying the template somehow?!
<marcoceppi> chamar: so, you /can/ do everything on a single VM
<marcoceppi> but it'd be like, for a test cluster
<marcoceppi> chamar: problem is, conjure-up doesn't really support this mode, but you can do it directly yourself, it's pretty straightforward. I'll paste the instructions here
<marcoceppi> cory_fu_: stokachu conjure-up doesn't support manual provider, right?
<chamar> marcoceppi: yeah.  I do expect some tweaking and manual steps.  but I really like the way conjure-up bootstraps everything ;)
<chamar> And the web presentation was nice, too.
<marcoceppi> chamar: yeah, it's sweeeeet, the manual steps aren't that much more  and I'll make sure to give you a good run down in like 20 mins
<chamar> no rush.  I'll stay connected for a few hours.
<cory_fu_> marcoceppi: stokachu is on vacation, FYI.  conjure-up does not support manual provider, but you can use the Application Configuration screen to tag units to specific MAAS nodes, which might be an alternate approach (especially if combined with kvm or lxd placement)
<marcoceppi> cory_fu_: cheers
<marcoceppi> chamar: I'm just about done with the blog post
<chamar> marcoceppi: sweeeet!  Thanks a ton.  I'll give that a shot this evening in my lab and report back.
<chamar> I saw that an issue was already open regarding the fact that you can't use bridge networking with MAAS / KVM backend (as a Pod)
<marcoceppi> yeah, after this I'm going to do a "MAAS + KVM" blog post where MAAS is deployed using docker on your host so it's easier to set up in small labs
<chamar> Yeah. It works fine in my lab.  But the fact that it forces you to use a NAT network makes it a bit less useful for a Prod setup.
<chamar> (the NATed network is "behind" your host)
<marcoceppi> I foolishly made a 1 core 1 GB vm
<marcoceppi> so the process is slow
<chamar> haah! I did the same .. slowiisssshh
<marcoceppi> chamar: would you be willing to try two vms? It's going to be really complicated with just one VM because of the whole CNI stuff
<chamar> marcoceppi: yup..  I might try to push it further to have at least 3 etcd
<marcoceppi> chamar: okay, cool, I'll update the blog a little bit to reflect two machines and publish shortly
<marcoceppi> it's easy to see how to scale up once you read through for two machines
<chamar> Exactly what I thought... Once I understand the pattern, I should be able to scale it ;)
<marcoceppi> perfect, and I still think once you get comfortable with this, MAAS will really help accelerate the process, but I can respect a learn to walk then run ramp up
<chamar> Yeah.  Thanks for that. btw, just got a call and they'll shut down the power at home (a tree fell on some electric cables outside)
<chamar> can you drop me a email when it's ready? (no rush, again)
<marcoceppi> chamar: yeah, send it to me in a PM
<marcoceppi> I plan on publishing tonight, if I don't publish the same day it'll be a draft forevermore
#juju 2017-09-28
<beisner> fwiw, seeing quite a few charm store timeouts today in CI.
<beisner> getting 503 Service Unavailable
<rick_h> beisner: there was a prodstack issue that's getting resolved right now
<rick_h> beisner: it appears to be all back so please let me know if you're getting anything else
<beisner> awesome thx rick_h
<el_tigro1> What's the difference between 'juju deploy ubuntu' and 'juju add-machine'? Is it that 'juju deploy' will consider a machine created with 'add-machine' to be a candidate for deploying a unit to?
<magicaltrout> yeah el_tigro1 if you're doing a manual deployment or similar you can add-machine then deploy --to X
<magicaltrout> where as deploy will just start a new instance
<el_tigro1> magicaltrout: I found that if I run 'add-machine', and then do 'juju deploy <charm>' *without* specifying '--to X', the machine created earlier with 'add-machine' is automatically used
<el_tigro1> magicaltrout: which seems to be inconsistent with 'juju deploy --help': "juju deploy mysql               (deploy to a new machine)"
<el_tigro1> magicaltrout: So if I 'juju add-machine', then 'juju deploy <charm>', then 'juju remove-application <charm>', the machine I created with add-machine is destroyed. Is that expected behavior?
<magicaltrout> well it'll use whatever resources match the constraints and aren't allocated
<el_tigro1> sounds good
<el_tigro1> Maybe a small adjustment to the "juju deploy --help" would be helpful: "juju deploy mysql               (deploy to a new machine *or a valid machine previously created with add-machine*)"
<el_tigro1> I guess I incorrectly used 'juju add-machine' when what I actually wanted was a new persistent machine with a blank ubuntu image under *my* control. So 'juju deploy ubuntu'. Not sure if it's worth clarifying that in 'add-machine --help'
<zeestrat> Hey rick_h, is there any magic in making a self-published charm available at cs:~username/charm, i.e. without revision number? I have pushed, released to stable and granted everyone.
<rick_h> zeestrat: publish it to the stable channel
<rick_h> zeestrat: what charm is it?
<zeestrat> rick_h: Publish is charm release cs:~username/charm-rev right? Did that already and it says stable=true in charm show. Trying to run the charm release again just gets me an error with some ElasticSearch info. Charm is cs:~szeestraten/slurm-node
<rick_h> zeestrat: ah, the ES info we're looking into. Another user hit it and we're trying to see what caused ES to be whiny atm
<rick_h> zeestrat: so that's not your fault
<zeestrat> Ah, yeah I saw you reported some issues earlier so I was wondering if it was just me screwing up
<zeestrat> You need detailed bug report or info or already on it?
<rick_h> zeestrat: no this is new and fresh. Was just trying to figure out what to file atm. Push works, but changing the permissions didn't. I'll keep you up to date as we figure out wtf.
<skay> for charmhelpers.contrib.python.packages, why are the options for pip restricted? I need to use --find-links and --no-index in order to use a local cache
<rick_h> zeestrat: can you make the release now?
<zeestrat> rick_h: it seems to be working.
<rick_h> zeestrat: coolio
<el_tigro1> 'juju create-backup' creates a backup on the controller and downloads it to the client's current directory. Where does the backup file exist on the controller?
<bdx> el_tigro1: I would assume it gets cleaned up
#juju 2017-09-29
<el_tigro1> bdx: It doesn't get cleaned up. According to https://jujucharms.com/docs/2.1/controllers-backup:
<el_tigro1> "As each backup is stored on the controller, you can manage backups from whatever client you can connect from, and fetch previous backups if the originally downloaded file has gone astray. You can use the following commands to manage and restore your backups:"
<el_tigro1> the backups are stored in mongodb on the controller node
<bdx> el_tigro1: nice, thanks for getting to the bottom of that
<jamespage> bdx: hello - any chance you could review https://github.com/jamesbeedy/interface-memcache/pull/3 ?
<jamespage> that's blocking some users of the gnocchi charm who want to use network spaces :-)
<zeestrat> rick_h: Regarding juju store channels, who's the guy or girl to talk to regarding amulet? Looks like it doesn't support channels.
#juju 2017-09-30
<el_tigro1> I am curious as to what goes on at a high level when you run 'juju bootstrap' and 'juju deploy <charm>'
<el_tigro1> For 'juju bootstrap' I found this helpful post https://askubuntu.com/questions/657592/what-is-going-on-during-juju-bootstrap-command
<el_tigro1> For 'juju deploy', I'm assuming it's the controller (and not the client) that creates the worker nodes (using whichever cloud api). The controller then uses scp to copy over machine/unit agent.conf files and uses ssh to install/run the jujud agents on each worker node (which is basically what 'juju add-machine' does?). From that point on, each agent authenticates with the controller's jujud (listening on port 17070). The agents then request their config from the controller through the juju api. The controller itself retrieves the configs from the mongodb. The worker nodes then download their respective charms from the charm store (or directly from the controller if configured that way)
#juju 2017-10-01
<ejat> marcoceppi: u there?
#juju 2019-09-23
<manadart> humbolt: If it is not obvious which interface to bridge, juju will not guess. Try using a space constraint for the machine.
<humbolt> No talk in this room?
#juju 2019-09-24
<manadart> humbolt: Canonical devs were at an engineering sprint last week. Most will have had Monday off.
<stickupkid> CR anyone https://github.com/juju/juju/pull/10651
<stickupkid> this was left over from last week's sprint duties...
<achilleasa> stickupkid: looking
<stickupkid> achilleasa, tbh thumper did 99% of this, i just moved it to develop code
<achilleasa> stickupkid: done
<stickupkid> achilleasa, ty
<stickupkid> achilleasa, agree with comment, I'll add that next time.
<stickupkid> manadart, we can't use a mocks folder in state, as it causes an import cycle :(
<stickupkid> manadart, any idea where we can put them, other than in the root of state
<stickupkid> manadart, maybe migration-mocks?
<manadart> stickupkid: Yeah, you can only use a mocks directory if the tests are in an external package.
<stickupkid> manadart, yeah, it's annoying
<manadart> For internal, just create a ..._mock_test.go file.
<stickupkid> yeah
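For reference, a minimal sketch of the internal-package pattern manadart describes: the generated mock lands in a *_mock_test.go file inside the package under test, so internal tests can use it without the import cycle that a separate mocks directory would cause. The interface, package path, and file names below are hypothetical placeholders, not the actual juju/state types.

```go
// This file would sit in the package under test itself (package state here),
// so the mock can reference internal types without an import cycle.
package state

// Reflect-mode mockgen directive: generate the mock into this same package,
// writing it to a *_mock_test.go file so it is only compiled for tests.
// The import path and interface name are placeholders.
//go:generate mockgen -package state -destination exporter_mock_test.go example.com/demo/state Exporter

// Exporter is a stand-in for an interface we might want to mock in
// migration-related tests.
type Exporter interface {
	Export(modelUUID string) ([]byte, error)
}
```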
<nammn_de> stickupkid achilleasa or manadart: do any of you guys still have in mind what "costs" means under "environs/instances/instancetype.go"? Are those the costs to order this machine?
<rick_h> nammn_de:  yea, in order of preference
<rick_h> nammn_de:  e.g. find the lowest cost instance that meets the users constraint requirements
<rick_h> "--constraints mem=32g" - so what's the cheapest machine that has 32gb of ram
<nammn_de> rick_h: great, i would use that as the 2nd criterion for sorting. E.g. sort by type (a1,a2...) and (cost)
<rick_h> nammn_de:  I don't think so because that'll lead to intermixing?
<rick_h> nammn_de:  e.g. a t3 and a c3 might be closer in cost? or do you mean just they should sort that way from large, xl, 2xl?
<rick_h> nammn_de:  hmm, now that I say that out loud you're probably right
<rick_h> ignore me...
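Roughly the selection logic being discussed, sketched in Go: sort the candidate instance types by cost and take the first one that satisfies the memory constraint. The struct fields only echo the shape of environs/instances/instancetype.go; they are simplified placeholders, not the real implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// InstanceType is a simplified stand-in for the provider instance-type data.
type InstanceType struct {
	Name string
	Mem  uint64 // MiB
	Cost uint64 // relative cost; lower is preferred
}

// cheapestWithMem returns the lowest-cost type with at least minMem MiB of RAM.
func cheapestWithMem(types []InstanceType, minMem uint64) (InstanceType, bool) {
	sort.Slice(types, func(i, j int) bool { return types[i].Cost < types[j].Cost })
	for _, t := range types {
		if t.Mem >= minMem {
			return t, true
		}
	}
	return InstanceType{}, false
}

func main() {
	candidates := []InstanceType{
		{"t3.large", 8192, 20}, {"c3.xlarge", 7680, 35}, {"m5.2xlarge", 32768, 80},
	}
	// e.g. --constraints mem=32G: pick the cheapest machine with 32 GiB of RAM.
	if t, ok := cheapestWithMem(candidates, 32*1024); ok {
		fmt.Println("picked:", t.Name)
	}
}
```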
<bdxbdx> happy tuesday
<bdxbdx> trying to help my guys get to the bottom of an issue
<bdxbdx> we are trying to deploy machines in our maas, everything goes smoothly until the target machine tries to download the agent binaries from the controller
<bdxbdx> when machine with ip 10.30.62.1 tries to download the agent binaries during the final post boot initialization we see https://paste.ubuntu.com/p/WXcpQwjVkd/ in the controller logs
<bdxbdx> on the ipmi viewer/console to the maas box we see the machine trying to download the tools from the controller and failing https://imgur.com/a/M58xHAq
<bdxbdx> anyone see this before?
<bdxbdx> our thoughts are that it may be a time skew between the controller and the maas node
<bdxbdx> going to do some digging
<rick_h> bdxbdx:  hmm, looking
<rick_h> bdxbdx:  yea, make sure that the clocks are good or you won't be able to do secure comms
<rick_h> bdxbdx:  hmm, though this "ctrmg6 already saved" doesn't look right
<bdxbdx> I feel like a newb today ... so I have another interesting scenario
<bdxbdx> this morning, I was added to a "manual" juju controller that has the multi-cloud feature flag enabled
<bdxbdx> the manual controller has a maas cloud defined
<bdxbdx> I register my user and have access to the models that the admin has granted me access to
<bdxbdx> 2 problems arise here that I'm unsure how to solve
<bdxbdx> well maybe 1 main issue - I can't add maas credentials to the controller
<thumper> bdxbdx: what happens when you try to add a maas credential?
<bdxbdx> so thats the second issue
<bdxbdx> thumper: I can't add the maas cloud because juju doesn't see one from my local client
<bdxbdx> errr
<bdxbdx> geh
<bdxbdx> sorry, let me see if I can phrase this correctly
<thumper> this is an area that is getting some love for 2.7
<thumper> it is a bit weird right now
<thumper> you need to add the maas cloud and credential locally before you can add to the controller I think
<bdxbdx> the only way my local client can see the maas cloud is like so https://paste.ubuntu.com/p/DyMtvJ5BJZ/
<bdxbdx> with the --controller flag
<rick_h> bdxbdx:  yea, so things "upload" from the client to the controller so best to get the client setup first
<bdxbdx> rick_h: do I need to add the pdl-maas-cloud shown in `juju clouds --controller dc--00` to my local config first ?
<rick_h> bdxbdx:  you'll need that cloud locally so you can add a credential for it :(
<bdxbdx> totally
<rick_h> bdxbdx:  otherwise you can't add the credential
<bdxbdx> totally, I think thats what I'm hitting
<bdxbdx> adding the cloud to my local client config allowed me to add a credential, which allowed me to create a model
<bdxbdx> I don't know why that was a trying experience ... seems straight forward and logical saying it out loud
<rick_h> bdxbdx:  well it's not ideal as thumper mentions. If the cloud lives on the controller you shouldn't need to worry about the local setup as well
<rick_h> but it's not "that" crazy  currently :P
<bdxbdx> totally, thanks all
<rick_h> np, did the clock sync help and get you past the earlier issue?
<bdxbdx> no
<rick_h> :(
<bdxbdx> I can curl the tools from the controller from any other box ... just not a maas deployed node
<rick_h> oic
<rick_h> networking in/out of the maas network?
<rick_h> is there a bridge/router on the edge there?
<bdxbdx> networking is all legit
<bdxbdx> yeah we have verified the network details in and out of and around the nodes
<rick_h> :/
<bdxbdx> since the juju agent isn't making it
 * rick_h has to run the boy to horseback but if you find something let me know or file a bug if you think it's borked
<bdxbdx> I think my only way into one of the boxes is to get in by way of my maas user ssh key
<rick_h> yea, you can setup ssh keys in maas that get put onto the machines for you when you provision
<bdxbdx> just about to deploy a node that should get my ssh key via my user now that I have created a model with my own maas user creds
<bdxbdx> totally
<bdxbdx> the juju controller and the maas node that is failing to curl the tools are in sync
<thumper> bdxbdx: the juju controller can't get the tools?
<thumper> or the node brought up can't get them from the controller?
<bdxbdx> thumper: the latter
<thumper> bdxbdx: can you ssh to the node?
<bdxbdx> yeah, I'm in it now
<bdxbdx> https://paste.ubuntu.com/p/yC3KJNJWGd/
<bdxbdx> ^ tried curling them manually and it seems to timeout
<thumper> firewall somewhere?
<bdxbdx> but I can curl them from another node right next to the node that can't curl them, same ip range, same switch
<thumper> that is weird
<bdxbdx> and from my local box over vpn, everywhere else I try to curl them from it works
<bdxbdx> yeah ...
<thumper> I'm sorry, but I have no idea
<thumper> mismatched MTU?
<thumper> that is one of the few things I know that can screw up some networking
<bdxbdx> the bridge on the host has 1500 mtu ... the bridge the juju controller container is behind
<bdxbdx> I bet thats it
<bdxbdx> everything else has 9000
<bdxbdx> sec
<thumper> wow, my one bit of networking knowledge seems helpful
<bdxbdx> thumper: that was it - just awesome! thanks!
<thumper> bdxbdx: yay
<ec0> it's always MTUs
<ec0> everything is always MTUs
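A tiny Go helper in the spirit of the debugging above, assuming all you want is the per-interface MTUs printed so a 1500-vs-9000 mismatch like the one bdxbdx hit stands out; it uses only the standard library and nothing juju-specific.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// List every network interface on the host with its configured MTU.
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		fmt.Printf("%-12s MTU %d\n", ifc.Name, ifc.MTU)
	}
}
```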
#juju 2019-09-25
<thumper> ec0: heh
<manadart> achilleasa: Need a review on a purely mechanical patch: https://github.com/juju/juju/pull/10653
<achilleasa> manadart: looking
<achilleasa> manadart: you also need to update the import paths for some other tests: https://jenkins.juju.canonical.com/job/github-make-check-juju/1152/console
<manadart> achilleasa: Yeah, they are fixed; just running the tests now.
<nammn_de> manadart: as we were talking about that before. Mind taking a quick look and review? https://github.com/juju/juju/pull/10652
<manadart> nammn_de: Sure.
<manadart> nammn_de: Reviewed.
<achilleasa> manadart: overall LGTM; just have two questions (see comments)
<nammn_de> thanks manadart and stickupkid
<nammn_de> gonna add a test for the sorting case, manadart: regarding the openstack provider. Would it be worthwhile to update the "flavor" of openstack? As we are using its struct https://github.com/juju/juju/blob/26d73876d4daedca2a39c3f385f98ac5040f27e0/provider/openstack/provider.go#L557 , my implementation used cost as the second value to sort. Here I could use ram though
<stickupkid> manadart, you'll like this update i'm about to post to your discourse post :D
<stickupkid> manadart, once i've written it
<stickupkid> haha
<achilleasa> stickupkid: can you post on discourse or are you getting back 500 errors?
<stickupkid> achilleasa, argh, not thought about that, let me check
<achilleasa> manadart: got a few sec for a quick question re charm upgrades?
<stickupkid> manadart, achilleasa https://discourse.jujucharms.com/t/thoughts-on-unit-testing/1451/9
<stickupkid> achilleasa worked for me
<manadart> achilleasa: One sec.
<achilleasa> stickupkid: can you post in internal?
<stickupkid> achilleasa let me check
<stickupkid> achilleasa yeah i can
<achilleasa> manadart: changes approved
<achilleasa> stickupkid: hmmm... maybe I don't have post permissions? :D
<stickupkid> achilleasa, https://media.tenor.com/images/ea2db29e89e34daa1b3d9716a7644208/tenor.gif
<manadart> achilleasa: Thanks. HO on the charm thing?
<achilleasa> manadart: omw
<achilleasa> stickupkid: so I can create a post but when I try to paste the template and save I get an internal server error :-(
<stickupkid> lol
<achilleasa> stickupkid: can you try to paste https://pastebin.canonical.com/p/twG4txHRJX/ in https://discourse.jujucharms.com/t/juju-release-process-2-6-9/2091 ?
<stickupkid> achilleasa legit 500
<achilleasa> is there a max post size limit or something?
<stickupkid> achilleasa don't think so
<stickupkid> achilleasa surely it would tell you
<achilleasa> stickupkid: pasting half the text seems to work though...
<stickupkid> achilleasa the response from XHR, is the worst - 500, killed the server
<stickupkid> achilleasa not much information to help diagnose
<achilleasa> yeap...
<achilleasa> stickupkid: it does seem like a max size limit... I managed to post up to the "homebrew" sections. Any attempts to append text after that point cause a 500
<stickupkid> lol
<stickupkid> let's check what discourse says
<stickupkid> 99k max char limit
<stickupkid> unless an admin has restricted it
<achilleasa> I could render the markdown into an image and paste that in the post :D
<stickupkid> HAHA
<stickupkid> achilleasa i know why
<stickupkid> achilleasa HO?
<achilleasa> stickupkid: omw
<manadart> achilleasa, stickupkid: Anyone able to review another trivial mechanical one? I started on the substantive patch, but thought I'd add another one to ease eventual review.
<manadart> https://github.com/juju/juju/pull/10655
<manadart> stickupkid: Were you going to have a look at https://github.com/juju/juju/pull/10655 ?
<stickupkid> manadart, aye, was fighting with mocks
<stickupkid> I won
<manadart> Did ye ay?
<stickupkid> HAHAHA
<manadart> Forgot to pull that out at the sprint.
<stickupkid> i wonder what the response to that is tbh
 * manadart realises the innuendo possibilities too late.
<stickupkid> manadart, done
<manadart> stickupkid: Ta.
<gnuoy> Could anyone tell me when 2.7 is likely to move to candidate ?
<rick_h> gnuoy:  end of Oct
<gnuoy> thanks rick_h
<rick_h> magicalt1out:  any thought on putting your post on discourse? or mind if I do? https://www.spicule.co.uk/news/post/2019-09-25-how-to-deploy-applications-at-scale-in-kubernetes
<magicalt1out> i don't mind rick_h go ahead
#juju 2019-09-26
<elox> morning
<achilleasa> manadart: still reviewing your PR. In the meantime can you double-check something for me?
<manadart> achilleasa: Yep; what is it?
<achilleasa> I believe that the state code does not currently handle endpoint additions/deletions when upgrading charms. For example, the `AllEndpointBindings` model method pulls the data from mongo. But if the new charm adds/removes bindings we will get back an incorrect list
<nammn_de> manadart and stickupkid: mind taking a look again? I added some things and left some comments there https://github.com/juju/juju/pull/10652
<manadart> Righto. nammn_de's is approved. Looking at your thing now achilleasa.
<achilleasa> manadart: out of curiosity, why are the slices in constraints/Value pointers?
<manadart> achilleasa: So the difference between not set and set to nothing can be told.
<achilleasa> manadart: I get why we want this for scalars but does it make a difference for slices (empty vs not set)?
<manadart> achilleasa: It matters for Tags, but it doesn't look like it for the others.
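To illustrate the distinction manadart is making, here is a minimal sketch; the Tags field only mirrors the idea behind constraints.Value and is not the real definition. A nil pointer means the constraint was never specified, while a pointer to an empty slice means the user explicitly cleared it.

```go
package main

import "fmt"

// Value echoes the pointer-to-slice pattern from constraints.Value.
type Value struct {
	Tags *[]string
}

func main() {
	var unset Value                   // Tags == nil: user said nothing
	empty := Value{Tags: &[]string{}} // Tags set, but deliberately empty

	fmt.Println(unset.Tags == nil)                   // true  -> constraint not set
	fmt.Println(empty.Tags != nil, len(*empty.Tags)) // true 0 -> cleared on purpose
}
```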
<stickupkid> migration has soo many tests :|
<rick_h> stickupkid:  what's the comment in your github actions PR referring to? Anything I can help out with?
<rick_h> manadart:  hah, I started reading that spaces bug last night but after a while had to mark it something to come back to
<manadart> rick_h: There is another one along similar lines in my stack. I'm looking at that code right now.
<stickupkid> rick_h, was trying one way then another, there are two ways to do github actions (nice!), but I think I've got the container started, just need to get the right location
<rick_h> stickupkid:  ah cool
<stickupkid> rick_h, also i'm going to relax what is required for running the static analysis as we don't actually need a juju
<rick_h> "need a juju"? as in building Juju?
<stickupkid> rick_h, i was messing around whilst waiting for a meeting
<rick_h> vs just static hitting the code?
<stickupkid> rick_h, yeah, exactly
<rick_h> gotcha, yea makes sense
<nammn_de> manadart: could you have a quick look again? I added your points. But Azure has additional constraints, which are alias constraints, therefore the unit tests failed (good catch!). Added a comment for you and 2 ways to solve it.  https://github.com/juju/juju/pull/10652
<stickupkid> nice got it working - finally
<manadart> nammn_de: Commented. Keep the old way for Azure. I think you can go ahead and merge it.
<stickupkid> manadart, what's provider id in terms of a subnet? is this maas only stuff?
<manadart> stickupkid: No, other providers use it - AWS, OCI, Openstack. It's literally what's on the tin - the provider's identifier for that subnet.
<stickupkid> right, ok cool
<manadart> It will now be used to uniquely identify subnets (CIDR and provider ID) being the candidate key. So we can have the same CIDR from different networks.
<manadart> CIDR plus provider ID will ensure uniqueness across all providers. For example on manual, provider ID will always be "", so CIDRs will have to be unique there.
<stickupkid> ah, that's interesting for manual
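A small sketch of that candidate-key idea, assuming a simplified model where subnets are deduplicated on (CIDR, provider ID); the types and IDs are illustrative only, not juju's actual subnet documents.

```go
package main

import "fmt"

// subnetKey is the composite key: CIDR plus the provider's subnet identifier.
type subnetKey struct {
	CIDR       string
	ProviderID string // empty on the manual provider
}

func main() {
	seen := map[subnetKey]bool{}
	add := func(cidr, pid string) {
		k := subnetKey{cidr, pid}
		if seen[k] {
			fmt.Println("duplicate:", k)
			return
		}
		seen[k] = true
		fmt.Println("added:", k)
	}
	add("10.0.0.0/24", "subnet-aaaa") // distinct provider networks may reuse a CIDR
	add("10.0.0.0/24", "subnet-bbbb")
	add("10.0.0.0/24", "") // manual: CIDR alone must be unique
	add("10.0.0.0/24", "") // rejected as a duplicate
}
```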
<manadart> nammn_de be landin'
<nammn_de> manadart: thx for the review. woohoo, first pr merged!
<rick_h> nammn_de:  woooo! congrats!
<rick_h> nammn_de:  what's awesome is that then you can see it in action in the edge snap that builds every 4ish hours
<rick_h> nammn_de:  please make sure to mark the bug fix-committed and that the milestone is set to the 2.7 one if it was landed in develop
<nammn_de> rick_h: will do :D!
<stickupkid> nammn_de: congrats
<nammn_de> rick_h and achilleasa: just saw this one https://bugs.launchpad.net/juju/+bug/1812980 as fix commited. In Trello it is assigned to me. Is it done?
<mup> Bug #1812980: try again should not be logged as an ERROR <bitesize> <juju:Fix Committed by achilleasa> <https://launchpad.net/bugs/1812980>
<rick_h> nammn_de:  hmm, I guess just double check. I must have missed the fix-committed
<rick_h> and we never got it released as it wasn't set to a milestone
<rick_h> nammn_de:  so I guess just confirm and if it's setup the right way per the bug just mark it fix released at this point
<rick_h> looks like it was done back in Jan so must be released by now :)
<achilleasa> I think this was one of the first things I worked on after joining :D
<rick_h> achilleasa:  yea, I bet it was just a missed target to a milestone
<nammn_de> achilleasa: haha okay, I will take myself off the trello then
<nammn_de> do we link the committed fix with a pr, to make things easier to trace?
<rick_h> nammn_de:  we can for sure
<rick_h> nammn_de:  and always note the bug in the commit in GH (the PR)
<rick_h> nammn_de:  so that when we go to do the changelog for a release it's easy to see if the PR that's landed ties to a bug worth highlighting
<achilleasa> nammn_de: my approach is to add a comment to the bug like "PR $link_to_pr includes the fix for $branch"
<aisrael> Is there a way to tell the Juju API server to advertise a different endpoint? i.e., I have a lxd controller with 17070 port forwarded from the host's address, and using the manual machine provider to it. I want to add that externally routable IP address to the api endpoints advertised by the controller.
<rick_h> aisrael:  setup a haproxy in front of it?
<rick_h> aisrael:  not really, it'll bind to all addresses on its host but it's not proxy-nat aware
<aisrael> rick_h: I've intercepted the provisioning script used through libjuju to inject the routable IP to the new machine, but when it first contacts the controller it updates the api address to the non-routable one. I think I need to make the controller itself aware of that address.
<rick_h> aisrael:  the main issue is that there's a generated cert that the client->controller connection uses, and that'll be locked to the ips I think?
<rick_h> aisrael:  hmmm, I mean on the client you can edit the list of controller addresses
<manadart> achilleasa, maybe stickupkid: For bundles, if a "to" directive is supplied, is that exactly the same as "juju deploy x --to y" or is there some bundle based delay between machine and unit creation?
<rick_h> aisrael:  but not sure how that'll fallout to working/not
<manadart> Oh, and I specifically mean "to" a container.
<rick_h> manadart:  so the issue with bundles is that the machines come up first and are referenced to the machines in the bundle
<rick_h> manadart:  right, so lxd:0 or the like?
<manadart> rick_h: Yep.
<rick_h> manadart:  there is a delay in that the bundle path does an addCharm I think and then "addUnit" vs hitting the main deploy API call which handles that
<manadart> rick_h: Just looking at this bug. There *should* be a determination of desired spaces based on units assigned to the container, but it's falling through.
<manadart> Just wondering if the provisioner is racing with the bundle.
<rick_h> manadart:  hmmm, yea not sure tbh
<rick_h> manadart:  the bundles getting broken down into smaller bits might be a different path but have to chase it through
<aisrael> rick_h: Okay. I'll dig in a little deeper
<stickupkid> damn, we use a different version of yaml library in description and in juju - ah fun fun fun
<rick_h> stickupkid:  :/
<achilleasa> stickupkid: also, check charms.v6
<stickupkid> the yaml output is different depending on the version
<stickupkid> grrr
<stickupkid> I'll fix it for description
<stickupkid> i have no recollection of doing this https://github.com/juju/yaml/commit/2025133c382644467541934971b571f7896de32a
<rick_h> stickupkid:  lol the git commit doesn't lie
<stickupkid> i mean i could have been drinking tbh
<rick_h> lol, I don't know I should be hearing this :P
<stickupkid> haha
<nammn_de> would love to get some input. Working on a better error message for the cli on some occasions. https://bugs.launchpad.net/juju/+bug/1843456
<mup> Bug #1843456: [2.6.8] model-config prints a cryptic error message if a file was not found <bitesize> <cdo-qa> <juju:Triaged by nammn> <https://launchpad.net/bugs/1843456>
<rick_h> nammn_de:  what's up?
 * rick_h sees a comment and reads up
<nammn_de> rick_h: was just looking into the issue and wasn't sure which way to solve would be preferred
<nammn_de> so just added a comment with possible solutions in my mind. But maybe someone has something better in mind
<rick_h> nammn_de:  so a couple of things. First, I think that if the key isn't found as a model-config key we can provide a better error message. "ERROR key "bundles/k8s-model-config.yaml" not found in model's config"
<rick_h> nammn_de:  and second, if the thing is file-like (e.g. ends in yaml...I wonder if there's a pattern we've got for that already) we can check the file is resolvable/exists and error cleanly in that case: "File "bundles/k8s-model-config.yaml" is not found"
<nammn_de> rick_h: so the code does the following right now: if the file cannot be resolved, we take it as a key and parse it like that. Before I can implement a proper file-like parser I need to know if a key can actually look like a path. Can a key e.g. contain an ending like key.yaml?
<rick_h> nammn_de:  right, so my question is can we reverse that. e.g. "does this match one of the model-config keys?"
<rick_h> nammn_de:  and if not, "does it look like a file?" and finally "if it looks like a file can I see it"
<nammn_de> rick_h: let me look, makes totally sense!
<rick_h> nammn_de:  k, updated the bug with my notes in line with that
<nammn_de> rick_h: great, thanks!
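A rough sketch of the resolution order rick_h suggests above, under the assumption that the argument is first checked against known model-config keys and only then treated as a file path; the helper name and the file-like heuristic are made up for illustration, not juju's actual CLI code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveArg checks a model-config argument as a key first, then as a file.
func resolveArg(arg string, knownKeys map[string]bool) error {
	if knownKeys[arg] {
		return nil // it's a real model-config key
	}
	// Looks file-like: has a path separator or a yaml/json suffix.
	if strings.ContainsAny(arg, "/\\") || strings.HasSuffix(arg, ".yaml") || strings.HasSuffix(arg, ".json") {
		if _, err := os.Stat(arg); err != nil {
			return fmt.Errorf("file %q not found", arg)
		}
		return nil
	}
	return fmt.Errorf("key %q not found in model config", arg)
}

func main() {
	keys := map[string]bool{"logging-config": true}
	fmt.Println(resolveArg("bundles/k8s-model-config.yaml", keys)) // clear file error
	fmt.Println(resolveArg("loging-config", keys))                 // clear key error
}
```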
#juju 2019-09-27
<manadart> Anyone about to do a review? https://github.com/juju/juju/pull/10661
<stickupkid> manadart, if you have a couple of minutes to look over my PR, before i submit it
<stickupkid> manadart, might be worth HO first?
<manadart> stickupkid: Yep. just need a few mins.
<stickupkid> manadart, nice, need to push it first anyway
<manadart> stickupkid: I am in daily.
<stickupkid> the fact we don't use go mode, kills us with github actions :|
<stickupkid> go module *
<rick_h> stickupkid: that sucks
<rick_h> nammn_de:  review in, let me know if you have any questions
<nammn_de> rick_h: great, thanks for the review! Added some questions https://github.com/juju/juju/pull/10665
<rick_h> k, looking
<rick_h> guild anyone have a chance to review https://github.com/juju/juju/pull/10660 for thumper <3
<stickupkid> not only did I get 10000 PR number, I got 10666 https://github.com/juju/juju/pull/10666
<stickupkid> manadart,
<manadart> Ja?
<manadart> \m/
<achilleasa> stickupkid: wicked!
<rick_h> stickupkid:  15m not bad! https://github.com/SimonRichardson/juju/runs/238881258
<rick_h> stickupkid:  that'll work just peachy for check jobs
<stickupkid> rick_h, yeah, that's a lot less than other stuff, we can actually turn off the pre-checks from jenkins now
<stickupkid> rick_h, well, when it lands
<rick_h> stickupkid:  right, exciting
<rick_h> stickupkid:  I foresee a discourse post in your future! :P
<stickupkid> haha, yeah, I might actually do that this afternoon,
<stickupkid> rick_h, when you get back - https://github.com/juju/juju/pull/10667
<nammn_de> rick_h: same here: https://github.com/juju/juju/pull/10665/
<nammn_de> :D
<nammn_de> If a PR gets some comments and you have addressed them. What do you prefer? Let the reviewer resolve them or resolve them yourselves so that the reviewer knows that it was implemented?
<pmatulis> shouldn't this just work (on AWS)? juju deploy -n 3 ceph-osd --storage osd-devices=3G,2 --storage osd-journals=6G,1
<pmatulis> the applications stay at 'agent initializing', presumably b/c OSD disks cannot be found. nothing in logs
<pmatulis> so how is one supposed to deploy ceph-osd charm from the store?
#juju 2019-09-28
<bdxbdx> heya, anyone want to talk secrets and volume mounts for kubernetes charms on a saturday?
<bdxbdx> :)
<bdxbdx> discourse is erroring when I try to post :{ https://imgur.com/a/3i8AoNL
<bdxbdx> rick_h: ^
<bdxbdx> I put it here for now https://gist.github.com/jamesbeedy/469f49c8cf89899ba20442e717af05e4
<erik_elox> Go to bed bdx. I am.
<bdxbdx> elox: beautiful portland saturday for charming from my sky loft -> https://imgur.com/a/CBj963O :P
