#juju 2011-10-03
<_mup_> Bug #865163 was filed: default-series option has surprising behaviour <juju:New> < https://launchpad.net/bugs/865163 >
<niemeyer> Good mornings!
<rogpeppe> niemeyer: hiya!
<fwereade> heya niemeyer
<niemeyer> Hey folks!
 * niemeyer grabs coffee before engaging into juju awesomeness
<hazmat> g'morning
<niemeyer> hazmat: Yo
<hazmat> niemeyer, greeting
<niemeyer> hazmat: Alan seemed pleasantly surprised with juju
<hazmat> niemeyer, awesome!
<hazmat> niemeyer,  they've been setting up a saas system with plone.. ploud.. i'm glad you guys had a chance to talk
<niemeyer> hazmat: Spent good part of Saturday's afternoon hacking a django "platform" charm with Sidnei.. very cool stuff
<hazmat> niemeyer, sweet!
<niemeyer> hazmat: and he was around for part of it
<hazmat> niemeyer, did you guys use bash? or python?
<niemeyer> hazmat: Bash for the low level charm integration.. but it's a surprisingly small amount of code
<niemeyer> hazmat: Checks the branch out from Launchpad, and puts it live
<hazmat> niemeyer, indeed, checking out from revision control is the main thing i used python for
<hazmat> on a wsgi charm
<hazmat> niemeyer, virtualenv for requirements.. but figuring out the db dependency from config was a little odd
<hazmat> made me want runtime dependency declarations
<niemeyer> hazmat: Ah, interesting.. we didn't go so far
<niemeyer> hazmat: We were thinking about a python-level or app-level hook for installing dependencies
<hazmat> niemeyer, yeah.. as it is right now i've got optional relations for lots of things, and i just populate a conf file with the properties for the app
<niemeyer> hazmat: Ah, sweet!
<hazmat> using gunicorn for serving up the app ( easy multi-process using a pre-fork model)
<niemeyer> hazmat: We were going in the direction of an nginx server
<hazmat> niemeyer, nginx is a good front end, but i still expect that to be a separate unit.. within a unit the socket bind, fork, saves an extra hop
<niemeyer> hazmat: Hadn't heard of gunicorn before
<hazmat> which is needed for the frontend anyways
<hazmat> niemeyer, it's a nice front end for wsgi apps, supports several worker options.. gevent, pre-fork workers, etc.
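For context, the pre-fork and gevent options mentioned here map onto a few lines of gunicorn config. A minimal sketch with illustrative values, not the actual charm's settings:

```python
# gunicorn.conf.py -- illustrative values, not the charm's actual settings
bind = "0.0.0.0:8080"    # the unit binds directly; an nginx unit can front it
workers = 4              # pre-fork worker processes
worker_class = "gevent"  # or the default "sync" for plain pre-fork workers
```

Run as: gunicorn -c gunicorn.conf.py app:application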
<niemeyer> hazmat: Nice, we should talk to sidnei about this
<hazmat> niemeyer, sounds good
<hazmat> niemeyer, just a heads up... i'm going to be working on solving transient zk disconnect issues and upstartifying things this week
<niemeyer> hazmat: Good timing.. on a call with fwereade and we were just talking about that :-)
<hazmat> fwereade, niemeyer, cool, how's the remote repo work coming?
<niemeyer> hazmat: Wanna join?
<hazmat> niemeyer, definitely
<niemeyer> hazmat: sent
<robbiew> niemeyer: fyi, I updated the call to use my conference number
<robbiew> 1075684916
<niemeyer> robbiew: Joining
<niemeyer> Lunch time!
<niemeyer> brb
<robbiew> FYI -> https://juju.ubuntu.com/Testing
<robbiew> :)
<SpamapS> cool
<SpamapS> http://summit.openstack.org/sessions/view/103
<SpamapS> lol.. look at the example they use for server templates. ;)
<SpamapS> "A server template could be used, for example, to build a server containing a pre-installed WordPress system and database"
<niemeyer> robbiew: Wow!
<niemeyer> robbiew: Integrates very neatly
 * robbiew is so happy IS added the rawhtml option to our wiki ;)
<niemeyer> robbiew: +1 :)
<m_3> robbiew: rawhtml... yay
 * rogpeppe is off. see ya tomorrow.
<robbiew> rogpeppe: cya
<niemeyer> rogpeppe: Cheers! On a call now, but will review your branches before you're back :)
<_mup_> Bug #865550 was filed: Provide automation for including apparmor profiles in charms <juju:New> < https://launchpad.net/bugs/865550 >
<niemeyer> Four hours of meetings should be good enough for a Monday
 * niemeyer tries to actually review some code now
<_mup_> juju/unit-info-cli r423 committed by kapil.thangavelu@canonical.com
<_mup_> copy host resolv.conf to container before attempting any resolutions in the customize chroot; this was being masked by the local package cache
 * hazmat heads out to doctor appt, bbiab
<niemeyer> hazmat: Good luck there! :)
<niemeyer> hazmat: When you're back, would like to ping you on a minor
<fwereade> niemeyer: ping
<niemeyer> fwereade: Yo
<fwereade> thanks for the review
<fwereade> niemeyer: one I'm about to address and merge, one addressed in a followup, one I intend to address in a followup
<niemeyer> fwereade: Cool
<niemeyer> fwereade: Branches seem to be missing pre-reqs, btw
<fwereade> really? crap, let me check
<niemeyer> fwereade: The require-default-series one is.. hmm.. interesting
<niemeyer> fwereade: Yeah, it's fine as I can imagine what the stack is
<niemeyer> fwereade: But it's good to have that in place in general
<fwereade> niemeyer: consider that one speculative, I wasn't sure you'd like it but it seemed better to implement while you were away than to moon around wondering
<fwereade> niemeyer: definitely
<niemeyer> fwereade: It's good speculation.. it seemed wrong and I was tempted to ask for more information
<fwereade> niemeyer: yeah, looks like I've missed at least one :( sorry
<niemeyer> fwereade: But pondering on the problem for a while, you have a point
<fwereade> niemeyer: it's clint's point really :)
<niemeyer> fwereade: _Today_, detecting the revision will yield surprising results more often than it will be useful
<niemeyer> fwereade: Well, SpamapS has a point then :-)
<fwereade> niemeyer: detecting the series, right?
<niemeyer> fwereade: Yeah, sorry
<niemeyer> fwereade: Detecting is really the long term solution
<niemeyer> fwereade: But, there are two points that make it suboptimal:
<niemeyer> 1) There's a single series that works today
<niemeyer> 2) We do auto-updating of the environment config, which can yield surprising results
<fwereade> niemeyer: 3) we want people to be able to run juju from non-ubuntu systems
<fwereade> niemeyer: I do feel it would be nice to be smarter about it, but I think I've convinced myself at least that Least Surprise is the right principle to follow at this point
<fwereade> niemeyer: the fact that auto-updating can *still* break this solution is (I think) a point against auto-updating, rather than against the solution
<niemeyer> fwereade: 3) is kind of irrelevant in this specific case
<fwereade> niemeyer: feels somewhat relevant to the long-term case, but I don't really feel that specific point is going to be a fruitful avenue of discussion right now ;)
<niemeyer> fwereade: The fact someone wants to be able to work in environment B isn't good reasoning for not doing something in environment A which is actually more common right now
<fwereade> niemeyer: granted
<niemeyer> fwereade: Yeah, least surprise is good too
<niemeyer> fwereade: and it's actually exactly what we were going at with the original approach, though
<niemeyer> fwereade: If you're sitting on a machine running Natty, doing things remotely with Natty is expected
<niemeyer> fwereade: The problem, and the reason why it has my sympathy, are points 1) and 2) above
<niemeyer> fwereade: For 1), natty doesn't exist
<niemeyer> (for juju)
<fwereade> niemeyer: well, indeed :)
<niemeyer> fwereade: re. 2), auto-updating means we'd get different behavior on an _existing_ environment post-bootstrap
<niemeyer> fwereade: Which is quite awkward
<fwereade> niemeyer: yep
<niemeyer> fwereade: I'm happy to move forward with the simplistic way for now..
<niemeyer> fwereade: We'll have to change this once there are other releases, but we have some time to think
<niemeyer> fwereade: I'll just check it up with hazmat
<fwereade> niemeyer: cool -- I agree it's not perfect, but it feels like a sensible short-term solution
<niemeyer> fwereade: +1
<niemeyer> fwereade: Awesome.. just reviewing docs, but I think we're settled as far as our conversation goes
<niemeyer> fwereade: We need the auto-revision bumping now, and the fake testing server
<niemeyer> fwereade: Probably in that order, so that we can get things rolling into the Ubuntu front while we sort out the testing and all
<niemeyer> fwereade: WDYT?
<fwereade> yep, sounds good to me
<fwereade> niemeyer, in case you didn't see, sounds good to me
<niemeyer> fwereade: Cool!
<niemeyer> fwereade: Thanks for checking out
<niemeyer> fwereade: I'll finish reviewing the doc branch, but we're in sync I think
<fwereade> niemeyer: cool
<fwereade> niemeyer: there will be another doc tweak at some stage, to fix up the draft I mention
<fwereade> niemeyer: but, well, it is a draft :)
<niemeyer> fwereade: Yeah, cool :-)
<Aram> hello.
<niemeyer> Aram: Hey!
<niemeyer> hazmat: Can you please check out this when you have a moment: https://code.launchpad.net/~fwereade/juju/require-default-series/+merge/77878
<niemeyer> I'm going to step out for some exercising and be back later today
<fwereade> niemeyer: (I've assumed it was; the other place is the new-user tutorial, where every extra character counts, IMO)
<fwereade> niemeyer: (and now I've convinced myself I'm wrong: the full charm url will show up in status, and it'll be clearer that the two places reference the same thing if they look like the same thing)
 * hazmat checks out branch
<niemeyer> fwereade: I'm here, but you're probably not.. :p
<SpamapS> niemeyer: so, how is the eureka push going?
<SpamapS> I've been distracted with other Ubuntu stuff all day
<niemeyer> SpamapS: Very well.. the only critical bit is the client-side store work
<niemeyer> SpamapS: fwereade's branches are in pretty good shape, though.. we'll probably have everything in by tomorrow, or wednesday at the latest
<niemeyer> SpamapS: fwereade will then work on a fake server the rest of the week so we can be sure that the store support _actually_ works
<niemeyer> SpamapS: and then finish the real store in the next couple of weeks
<SpamapS> We can always release an updated client if the API has to change
<niemeyer> SpamapS: That's cool, but at least the work we're finishing tomorrow/wednesday should really be in
<niemeyer> Aram: ping
<Aram> niemeyer: pong.
<niemeyer> Aram: Hey!
<SpamapS> niemeyer: indeed, its a bit more of an abuse of the process to SRU a whole new feature in than it is to just SRU a new API version in. :)
<niemeyer> SpamapS: Yeah, and also because my _hope_ is that we'll have zero incompatible changes in the 11.10=>12.04 time frame
<SpamapS> +1 from me on that
<niemeyer> SpamapS: So would really like to have the store work now, so that if we have to SRU changes they use the same user interface
<SpamapS> would go a long way to building up user support if they're able to smoothly transition to the PPA and/or 12.04 without having to do anything to their charms/running envs/etc.
<niemeyer> SpamapS: Exactly
<niemeyer> SpamapS: That's my hope really
<SpamapS> Seems like the last of the big structural local storage changes are in with juju-origin and the local charm repository organization.
<niemeyer> SpamapS: juju-origin is kind of minor in that regard
<SpamapS> I do think we'll have to see some backflips in code as the ZK topology changes..
<niemeyer> SpamapS: the charm url user interface is the big deal
<niemeyer> hazmat: ping
<niemeyer> SpamapS: another bit we have to sort out is being able to tag formulas as incompatible with a given revision in a nice way
<niemeyer> SpamapS: But that seems fine for an SRU
<SpamapS> yeah
<SpamapS> that's another one where it fits nicely with the SRU desire to make sure interoperability with remote services is maintained
<niemeyer> Right
<hazmat> niemeyer, pong
<niemeyer> hazmat: Yo!
<niemeyer> hazmat: How're things going there?  Still churning, or more like heading to a beer? :)
<hazmat> niemeyer, really hoping to have the unit-get cli, and addresses in relations by default, in for oneiric
<niemeyer> hazmat: Neat
<hazmat> niemeyer, both in the queue
<niemeyer> hazmat: I'll check that out today still
<SpamapS> hazmat: that would be *saweet*
<niemeyer> hazmat: Awesome
<hazmat> niemeyer, i'm still running into occasional issues with the local provider stuff, think it might be because i switch local networks so often
<niemeyer> hazmat: Hmm
<hazmat> not sure, still also seeing an occasional lxc problem when i try to ssh into a machine
<hazmat> i committed a fix for the network issue, it always manifests as xmlrpc.launchpad.net temp failure in resolution
<SpamapS> I'll gladly do a mass bug file/fix in the charm repo once that is done, as that gives us much better reliability in non-ec2 environments
<hazmat> not sure if its a good fix though
<hazmat> works for me atm though
<hazmat> pty allocation error re lxc problem
<hazmat> only a reboot fixes it once it shows up
<hazmat> niemeyer, having a look over fwereade's branches atm, still churning
<hazmat> and that's not mutually exclusive to a beer ;-)
<hazmat> tdd ftw ;-)
<hazmat> saw the new db pkg for go
<hazmat> elmo was at the surge conference, saw him the last day, we chatted a bit
<niemeyer> hazmat: :-)
<niemeyer> hazmat: On that last branch, check-latest-formulas, I think we need something like
<niemeyer>  /charm-info?charms=<urls>
<niemeyer> So that we can get metadata + bundle-sha256 at once
<niemeyer> for all the charms
<hazmat> niemeyer, sounds good, as we learned REST purism fails for efficiency
<niemeyer> Yeah
<niemeyer> It feels bad to do _3_ requests per charm download
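A sketch of what that batch lookup could look like from the client side. The endpoint shape is the one proposed above; the store host, parameter encoding, and response fields are assumptions:

```python
import json
import urllib.parse
import urllib.request

def charm_info(charm_urls, store="https://example-store"):
    """Fetch metadata and bundle-sha256 for many charms in one round trip."""
    query = urllib.parse.urlencode([("charms", u) for u in charm_urls])
    with urllib.request.urlopen("%s/charm-info?%s" % (store, query)) as resp:
        return json.loads(resp.read())

# One request instead of three per charm:
# info = charm_info(["cs:oneiric/wordpress", "cs:oneiric/mysql"])
```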
#juju 2011-10-04
 * niemeyer gets some food
<SpamapS> It's important to have a nice clear easy REST API to use.. but it's vital that you also provide optimisations for batch operations. It's why SQL is so popular.. easy to get one row, easy to get all rows.
<niemeyer> jimbaker: ping
<niemeyer> hazmat: Still around?
<hazmat>  niemeyer indeed
<niemeyer> hazmat: Cool, sorted out already, again! :-)
<niemeyer> hazmat: Review queue pretty much empty
<hazmat> niemeyer, nice
 * hazmat crashes
<niemeyer> hazmat: Cheers
<_mup_> juju/unit-info-cli r424 committed by kapil.thangavelu@canonical.com
<_mup_> remove the manual copy of host resolv.conf; since customize runs in a chroot, directly modify the resolv.conf output to point to dnsmasq; fix indentation problem
<_mup_> juju/env-origin r381 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<SpamapS> FYI r378 caused a segfault when building on natty
<SpamapS> https://launchpadlibrarian.net/81865589/buildlog_ubuntu-natty-i386.juju_0.5%2Bbzr378-1juju1~natty1_FAILEDTOBUILD.txt.gz
 * ejat just wondering … is someone doing charm for liferay :)
<hazmat> hmm
<hazmat> SpamapS, its a problem with the zk version there
<hazmat> 3.3.1 has known issues for juju
<hazmat> applies primarily to libzookeeper and python-libzookeeper
<hazmat> SpamapS, all the distro ppas (minus oneiric perhaps) should have 3.3.3
<_mup_> Bug #867420 was filed: Add section mentioning expose to the user tutorial. <juju:In Progress by rogpeppe> < https://launchpad.net/bugs/867420 >
<TeTeT> just updated my oneiric install, juju seems to have a problem:
<TeTeT> Errors were encountered while processing:
<TeTeT>  /var/cache/apt/archives/juju_0.5+bzr361-0ubuntu1_all.deb
<TeTeT> E: Sub-process /usr/bin/dpkg returned an error code (1)
<TeTeT> was a transient problem, apt-get update, apt-get -f install seemed to have fixed it
<hazmat> interestingly simulating transient disconnection of a bootstrap node for extended periods of time seems to be fine
<fwereade> heya niemeyer
<SpamapS> hazmat: ahh, we need to add a versioned build dep then
<niemeyer> Hello!
<hazmat> niemeyer, g'morning
<niemeyer> fwereade: How're things going there?
<niemeyer> hazmat: Good stuff in these last few branches
<fwereade> niemeyer: tolerable :)
<niemeyer> fwereade: ;-)
<hazmat> niemeyer, yeah.. finally fixed the local provider issue wrt to customization, so all is good there; still seeing some occasional lxc pty allocation errors, but haven't worked out a reliable reproduction strategy for upstream
<hazmat> niemeyer, i did play around with the disconnect scenarios some more, at least for a period of no active usage (no hooks executing, etc), we tolerate zookeeper nodes going away transiently fairly well
<niemeyer> hazmat: By zookeeper nodes you mean the server themselves?
<niemeyer> servers
<hazmat> niemeyer, yeah.. the zookeeper server going away
<niemeyer> hazmat: Neat!
<niemeyer> It's a good beginning :)
<niemeyer> hazmat: we should talk to rogpeppe about the issues we debated yesterday
<niemeyer> hazmat: re. making things not fail when possible
<rogpeppe> i'm here!
<hazmat> niemeyer, for the single server case, the session stays alive, if the client reconnects within the session timeout period after the server is back up. and the clients all go into poll mode every 3s when the zk server is down (roughly 1/3 session time i believe)
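In the zkpython bindings these transitions arrive as session events on the connection's global watcher. A minimal sketch of telling the recoverable state apart from expiry; the server address and timeout are illustrative:

```python
import zookeeper

def session_watcher(handle, event_type, state, path):
    # Fires for session-level transitions as well as node events.
    if state == zookeeper.CONNECTING_STATE:
        print("disconnected; libzk is polling for a server (recoverable)")
    elif state == zookeeper.CONNECTED_STATE:
        print("(re)connected within the session timeout; session survived")
    elif state == zookeeper.EXPIRED_SESSION_STATE:
        print("session expired server-side; only a fresh handle will do")

# 10s session timeout; the client heartbeats at roughly a third of it.
handle = zookeeper.init("localhost:2181", session_watcher, 10000)
```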
<rogpeppe> (afternoon, folks, BTW)
<hazmat> niemeyer, there are a few warnings in the zk docs about not trusting library implementations that do magic things for the app
<hazmat> regarding error handling
<hazmat> rogpeppe, hola
<niemeyer> hazmat: Well, sure :)
<niemeyer> hazmat: That's what the whole session concept is about, though
<niemeyer> rogpeppe: This goes a bit in the direction you were already thinking
<niemeyer> rogpeppe: You mentioned in our conversations that e.g. it'd be good that Dial would hold back until the connection is actually established
<niemeyer> rogpeppe: This is something we should do, but we're talking about doing more than that
<rogpeppe> i don't know if this is relevant, or if it's a problem with gozk alone, but i never got a server-down notification from a zk server, even when i killed it and waited 15 mins.
<niemeyer> rogpeppe: Try bringing it up afterwards! :-)
<hazmat> rogpeppe, session expiration is server governed, clients don't decide that
<niemeyer> rogpeppe: It's a bit strange, but that's how it works.. the session times out in the next reconnection
<rogpeppe> niemeyer: yeah, i definitely think it should
<hazmat> rogpeppe, the clients go into a polling reconnect mode, turning up the zookeeper debug log verbosity will show the activity
<rogpeppe> hazmat: but what if there's no server? surely the client should fail eventually?
<niemeyer> rogpeppe: So, in addition to this, when we are connected and zk disconnects, we should also block certain calls
<niemeyer> rogpeppe: Well.. all the calls
<hazmat> rogpeppe, nope.. they poll endlessly in the background, attempting to use the connection will raise a connectionloss/error
<hazmat> rogpeppe, at least until the handle is closed
<niemeyer> rogpeppe: So that we avoid these errors ^
<hazmat> rogpeppe, that's why we have explicit timeouts for connect
<niemeyer> rogpeppe: In other words, if we have a _temporary_ error (e.g. disconnection rather than session expiration), we should block client calls
<hazmat> above libzk
<rogpeppe> hazmat: but if all users are blocked waiting for one of {connection, state change}, then no one will try to use the connection, and the client will hang forever
<niemeyer> rogpeppe: Not necessarily.. as you know it's trivial to timeout and close a connection
<niemeyer> rogpeppe: I mean, on our side
<rogpeppe> so all clients should do timeout explicitly?
<niemeyer> rogpeppe: <-time.After & all
<rogpeppe> sure, but what's an appropriate time out?
<niemeyer> rogpeppe: Whatever we choose
<niemeyer> rogpeppe: But that's not what we're trying to solve now
<rogpeppe> sure
<niemeyer> rogpeppe: What we have to do is make the gozk interface bearable
<niemeyer> rogpeppe: Rather than a time bomb
<hazmat> so we're trying to subsume recoverable error handling into the client
<rogpeppe> [note to future: i'd argue for the timeout functionality to be inside the gozk interface, not reimplemented by every client]
<niemeyer> [note to future: discuss timeout with rogpeppe]
<hazmat> by capturing a closure for any operation, and on connection error, waiting till the connection is re-established and re-executing the closure (possibly with additional error detection semantics)
<rogpeppe> hazmat: are we talking about the gozk package level here?
<niemeyer> hazmat: I think there's a first step before that even
<rogpeppe> or a higher juju-specific level?
<niemeyer> rogpeppe: Yeah, internal to gozk
<hazmat> rogpeppe, which pkg isn't relevant, but yes at the zk conn level
<hazmat> http://wiki.apache.org/hadoop/ZooKeeper/ErrorHandling
<niemeyer> hazmat: Before we try to _redo_ operations, we should teach gozk to not _attempt_ them in the first place when it knows the connection is off
<hazmat> hmm
<hazmat> yeah.. thats better
<hazmat> we can basically watch session events and hold all operations
<hazmat> niemeyer, +1
<niemeyer> hazmat: Cool!
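A generic sketch of that first step, watching session events and holding client calls while the connection is known to be down, so most connection-loss errors never happen. This uses a thread event as the gate; it is not the actual gozk or txzookeeper code, which is callback and deferred based:

```python
import threading

class GatedConnection:
    """Wrap a zk connection so calls block while we're known-disconnected."""

    def __init__(self, conn):
        self._conn = conn
        self._connected = threading.Event()
        self._connected.set()

    def session_event(self, connected):
        # Hooked to the session watcher: flips the gate on (dis)connect.
        if connected:
            self._connected.set()
        else:
            self._connected.clear()

    def get(self, path):
        self._connected.wait()  # hold the call until the connection is back
        return self._conn.get(path)
```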
<hazmat> hmm
<hazmat> niemeyer, so there is still a gap
<niemeyer> rogpeppe: Does that make sense to you as well?
 * rogpeppe is thinking hard
<niemeyer> hazmat: There is in some cases, when we attempt to do something and the tcp connection crashes on our face
<hazmat> niemeyer, internally libzk will do a heartbeat effectively to keep the session alive, if the op happens before the heartbeat detects the dead connection we still get a conn error
<niemeyer> hazmat: Let's handle that next by retrying certain operations intelligently
<rogpeppe> i think the first thing is to distinguish between recoverable and unrecoverable errors
<hazmat> rogpeppe, its a property of the handle
<niemeyer> rogpeppe: That's the next thing, after the initial step we mentioned above
<hazmat> libzk exposes a method for it to return a bool
<hazmat> recoverable(handle)
<niemeyer> rogpeppe: By blocking operations on certain connection states, we're actually preventing the error from even happening
<rogpeppe> preventing the error being exposed to the API-client code, that is, yes?
<niemeyer> rogpeppe: No
<hazmat> rogpeppe, yup
<hazmat> :-)
<rogpeppe> lol
<niemeyer> rogpeppe: Preventing it from happening at all
<hazmat> the error never happens
<hazmat> because we don't let the op go through while disconnected
<niemeyer> rogpeppe: The error never happens if we don't try the call
<rogpeppe> ok, that makes sense.
<rogpeppe> but... what about an op that has already gone through
<rogpeppe> ?
<hazmat> next step is to auto-recover from the error for ops where we can do so without ambiguity, because there is still a gap in our detection of the client connectivity
<rogpeppe> and then the connection goes down
<niemeyer> rogpeppe: That's the next case we were talking about above
<niemeyer> rogpeppe: If the operation is idempotent, we can blindly retry it behind the lib client's back
<rogpeppe> niemeyer: do we need to? i thought it was important that clients be prepared to handle critical session events
<niemeyer> rogpeppe: If the operation is not idempotent, too bad.. we'll have to let the app take care of it
<hazmat> rogpeppe, effectively the only ops i've seen ambiguity around are the create scenario, and modifications without versions
<niemeyer> rogpeppe: Do we need to what?
<rogpeppe> do we need to retry, was my question.
<hazmat> so this might be better structured as a library on top of the connection that's specific to juju
<niemeyer> rogpeppe: Yeah, because otherwise we'll have to introduce error handling _everywhere_, doing exactly the same retry
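And a sketch of the follow-on step being discussed: blindly retrying idempotent operations behind the client's back, while non-idempotent ones (sequence creates, unversioned writes) still surface the error to the app. The exception name follows the zkpython bindings; the wiring is illustrative:

```python
import time
import zookeeper

def retry_idempotent(op, *args):
    """Blindly re-run an idempotent zk operation across connection loss."""
    while True:
        try:
            return op(*args)
        except zookeeper.ConnectionLossException:
            time.sleep(1)  # safe only because the caller promises idempotency

# Reads and versioned writes qualify; sequence creates do not:
# data, stat = retry_idempotent(zookeeper.get, handle, "/topology")
```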
<niemeyer> hazmat: Nah.. let's do it internally and make a clean API.. we know what we're doing
<rogpeppe> does zookeeper do a 3 phase commit?
<hazmat> niemeyer, famous last words ;-)
<rogpeppe> i.e. for something like create with sequence number, does the client have to acknowledge the create before the node is actually created?
<niemeyer> hazmat: Well, if we don't, we have larger problems ;-)
<hazmat> rogpeppe, its a paxos derivative internally. everything forwards to the active leader in the cluster
<hazmat> writes that is
<hazmat> it transparently does leader election as needed
<niemeyer> rogpeppe: The _client_ cannot acknowledge the create
<hazmat> rogpeppe, the client doesn't ack the create, but the error recovery with a sequence node is hard, because without the server response, we have no idea what happened
<rogpeppe> niemeyer: why not? i thought the usual process was: write request; read response; write ack; server commits
<niemeyer> rogpeppe: What's the difference?
<niemeyer> rogpeppe: write ack; read response; write ack; read response; write ack; read response; server commits
<rogpeppe> niemeyer: the difference is that if the server doesn't see an ack from the client, the action never happened.
<niemeyer> rogpeppe: Doesn't matter how many round trips.. at some point the server will commit, and if the connection crashes the client won't know if it was committed or not
<hazmat> ? there's client acks under the hood?
<niemeyer> hazmat: There isn't.. and I'm explaining why it makes no difference
<hazmat> ah
 * hazmat dogwalks back in 15
<niemeyer> hazmat: Cheers
<rogpeppe> if the connection crashes, the client can still force the commit by writing the ack. it's true that it doesn't know if the ack is received. hmm. byzantine generals.
<niemeyer> Yeah
<rogpeppe> i'm slightly surprised the sequence-number create doesn't have a version argument, same as write
<niemeyer> rogpeppe: Hmm.. seems to be sane to me?
<rogpeppe> that would fix the problem, at the expense of retries, no?
<niemeyer> rogpeppe: It's atomic.. it's necessarily going to be version 0
<rogpeppe> ah, child changes don't change a version number?
 * rogpeppe goes back to look at the modify operation
<niemeyer> rogpeppe: It changes, but it makes no sense to require a given version with a sequence number
<niemeyer> rogpeppe: The point of using the sequence create is precisely to let the server make concurrent requests work atomically
<niemeyer_> Hmm
<niemeyer_> Weird
<niemeyer_> Abrupt disconnection
<rogpeppe> niemeyer: but we want to do that with node contents too - that's why the version number on Set
<rogpeppe> niemeyer_: and that's the main problem with the lack of Create idempotency
<rogpeppe> anyway, we could easily document that Create with SEQUENCE is a special case
<rogpeppe> and can return an error without retrying
<niemeyer_> rogpeppe: We don't even have to document it really.. the error itself is the notice
<rogpeppe> i think it would be good if the only time a session event arrived at a watcher was if the server went down unrecoverably
<rogpeppe> actually, that doesn't work
<rogpeppe> watchers will always have to restart
<niemeyer_> rogpeppe: That's how it is today, except for the session events in the session watch
<niemeyer_> rogpeppe: Not really
<niemeyer_> rogpeppe: If the watch was already established, zk will keep track of them and reestablish internally as long as the session survives
<rogpeppe> but what if the watch reply was lost when the connection went down?
<niemeyer_> rogpeppe: Good question.. worth confirming to see if it's handled properly
<rogpeppe> i'm not sure how it can be
<rogpeppe> the client doesn't ack watch replies AFAIK
<niemeyer_> rogpeppe: There are certainly ways it can be.. it really depends on how it's done
<niemeyer_> rogpeppe: E.g. the client itself can do the verification on connection reestablishment
<niemeyer_> Another alternative, which is perhaps a saner one, is to do a 180° turn and ignore the existence of sessions completely
<niemeyer_> Hmmm..
<rogpeppe> niemeyer_: that would look much nicer from a API user's perspective
<niemeyer_> I actually like the sound of that
<niemeyer_> rogpeppe: Not even thinking about API.. really thinking about how to build reliable software on top of it
<rogpeppe> aren't those closely related things?
<niemeyer_> rogpeppe: Not necessarily.. an API that reestablishes connections and knows how to handle problems internally is a lot nicer from an outside user's perspective
<rogpeppe> niemeyer: don't quite follow
<niemeyer> rogpeppe: Don't worry, it's fine either way
 * hazmat catches up
<niemeyer> hazmat: I think we should do a U turn
<hazmat> niemeyer, how so?
<hazmat> hmm.. verifying watch handling while down sounds good
<hazmat> connection down that is
<niemeyer> hazmat: We're adding complexity in the middle layer, and reality is that no matter how complex and how much we prevent the session from "crashing", we _still_ have to deal with session termination correctly
<hazmat> session termination is effectively fatal
<rogpeppe> when does a session terminate?
<hazmat> the only sane thing to do is to restart the app
<niemeyer> hazmat: we're also constantly saying "ah, but what if X happens?"..
<hazmat> rogpeppe, a client is disconnected from the quorum for the period of session timeout
<niemeyer> hazmat: Not necessarily.. we have to restart the connection
<hazmat> niemeyer, and reinitialize any app state against the new connection
<niemeyer> hazmat: Yes
<hazmat> ie. restart the app ;-)
<niemeyer> hazmat: No, restart the app is something else
<niemeyer> hazmat: Restart the app == new process
<hazmat> doesn't have to be a process restart to be effective, but it needs to go through the entire app init
<niemeyer> hazmat: So, the point is that we have to do that anyway
<niemeyer> hazmat: Because no matter how hard we try, that's a valid scenario
<hazmat> rogpeppe, the other way a session terminates is a client closes the handle, thats more explicit
<hazmat> rogpeppe, that can be abused in testing by connecting multiple clients via the same session id, to simulate session failures
<hazmat> niemeyer, absolutely for unrecoverable errors that is required
<niemeyer> hazmat: So what about going to the other side, and handling any session hiccups as fatal?  It feels a lot stronger as a general principle, and a lot harder to get it wrong
<rogpeppe> when you say "reinitialize any app state", doesn't that assume that no app state has already been stored on the server?
<hazmat> for recoverable errors local handling inline to the conn, seems worth exploring
<rogpeppe> or are we assuming that the server is now a clean slate?
<hazmat> we need to validate some of the watch state
<hazmat> rogpeppe, no the server has an existing state
<niemeyer> hazmat: The problem is that, as we've been seeing above, "recoverable errors" are actually very hard to really figure
<hazmat> rogpeppe, the app needs to process the existing state against its own state needs and observation requirements
<niemeyer> hazmat: rogpeppe makes a good point in terms of the details of watch establishment
<rogpeppe> so presumably we know almost all of that state, barring operations in progress?
<niemeyer> hazmat: and I don't have a good answer for him
<hazmat> niemeyer, that's why i was going with a stop/reconnect/start for both error types as a simple mechanism
<hazmat> for now
 * hazmat does a test to verify watch behavior
<niemeyer> hazmat: Yeah, but the problem we have _today_, and the reason I don't feel safe doing that, is that we don't have good stop-but-stay-alive semantics in the code base
<rogpeppe> i *think* that the most important case is automatic retries of idempotent operations.
<hazmat> niemeyer, we do in the unit agents as a consequence of doing upgrades, we pause everything for it
<rogpeppe> but that's hard too.
<niemeyer> hazmat: I seriously doubt that this will e.g. kill old watches
<hazmat> niemeyer, effectively the only thing that's not observation driven is the provider agent doing some polling for runaway instances
<hazmat> niemeyer, it won't kill old watches, but we can close the handle explicitly
<niemeyer> hazmat: and what happens to all the deferreds?
<hazmat> niemeyer, they're dead, when the session is closed
<hazmat> at least for watches
<niemeyer> hazmat: What means dead?  Dead as in, they'll continue in memory, hanging?
<hazmat> niemeyer, yeah... they're effectively dead, we can do things to clean them up if that's problematic
<hazmat> dead in memory
<niemeyer> hazmat: Yeah.. so if we have something like "yield exists_watch", that's dead too..
<hazmat> we can track open watches like gozk and kill them explicitly (errback disconnect)
<niemeyer> hazmat: That's far from a clean termination
<hazmat> niemeyer, we can transition those to exceptions
<niemeyer> hazmat: Sure, we can do everything we're talking about above.. the point is that it's not trivial
<hazmat> it seems straightforward at the conn level
<hazmat> to track watches, and on close kill them
<niemeyer> hazmat: Heh.. it's straightforward to close() the connection, of course
<niemeyer> hazmat: It's not straightforward to ensure that doing this will yield a predictable behavior
<hazmat> so back to process suicide ;-)
<niemeyer> hazmat: Cinelerra FTW!
<rogpeppe> this is all talking about the situation when you need to explicitly restart a session, right?
<hazmat> rogpeppe, yes
<niemeyer> rogpeppe: Yeah, control over fault scenarios in general
<hazmat> restart/open a new session
<rogpeppe> restart is different, i thought
<rogpeppe> because the library can do it behind the scenes
<rogpeppe> and reinstate watches
<rogpeppe> redo idempotent ops, etc
<hazmat> rogpeppe, but it can't reattach the watches to all extant users?
<rogpeppe> i don't see why not
<hazmat> perhaps in go that's possible with channels and the channel bookkeeping
<hazmat> against the watches
<niemeyer> hazmat, rogpeppe: No, that doesn't work in any case
<rogpeppe> ?
<niemeyer> The window between the watch being dead and the watch being alive again is lost
<rog> of course
<rog> doh
<rog> except...
<rog> that the client *could* keep track of the last-returned state
<rog> and check the result when the new result arrives
<rog> and trigger the watcher itself if it's changed
<niemeyer> rog: Yeah, we could try to implement the watch in the client side, but that's what I was talking above
<rog> except... i don't know if {remove child; add child with same name} is legitimately a no-op
<niemeyer> rog: We're going very far to avoid a situation that is in fact unavoidable
<niemeyer> rog: Instead of doing that, I suggest we handle the unavoidable situation in all cases
<rog> force all clients to deal with any session termination as if it might be unrecoverable?
<niemeyer> rog: Yeah
<niemeyer> rog: Any client disconnection in fact
<niemeyer> rog: let's also remove the hack we have in the code and allow watches to notice temporary disconnections
<rog> this is why proper databases have transactions
<niemeyer> rog: Uh..
<niemeyer> rog: That was a shot in the sky :-)
<rog> if the connection dies half way through modifying some complex state, then when retrying, you've got to figure out how far you previously got, then redo from there.
<niemeyer> rog: We have exactly the same thing with zk.
<niemeyer> rog: The difference is that unlike a database we're using this for coordination
<niemeyer> rog: Which means we have live code waiting for state to change
<niemeyer> rog: A database client that had to wait for state to change would face the same issues
<hazmat> rog, databases still have the same issue
<rog> yeah, i guess
<rog> what should a watcher do when it sees a temporary disconnection?
<rog> await reconnection and watch again, i suppose
<hazmat> so watches don't fire if the event happens while disconnected
<rog> i wonder if the watch should terminate even on temporary disconnection.
<niemeyer> rog: It should error out and stop whatever is being done, recovering the surrounding state if it makes sense
<niemeyer> rog: Right, exactly
<rog> and is that true of the Dial session events too? the session terminates after the first non-ok event?
<rog> i think that makes sense.
<rog> (and it also makes use of Redial more ubiquitous). [of course i'm speaking from a gozk perspective here, as i'm not familiar with the py zk lib]
<hazmat> interesting, i get a session expired event.. just wrote a unit test for watch fire while disconnected: two server cluster, two clients one connected to each; one client sets a watch, shutdown its server, delete on the other client/server, resurrect the shutdown server with its client waiting on the watch.. it gets a session expired event
<hazmat> hmm. its timing dependent though
<niemeyer> rog: Yeah, I think so too
<hazmat> yeah.. this needs more thought
<niemeyer> hazmat: Yeah, the more we talk, the more I'm convinced we should assume nothing from a broken connection
<hazmat> niemeyer, indeed
<niemeyer> hazmat: This kind of positioning also has a non-obvious advantage.. it enables us to more easily transition to doozerd at some point
<niemeyer> Perhaps not as a coincidence, it has no concept of sessions
 * niemeyer looks at Aram
<hazmat> niemeyer, interesting.. i thought you gave up on doozerd
<hazmat> upstream seems to be dead afaik
<niemeyer> hazmat: I have secret plans!
<niemeyer> ;-)
<hazmat> niemeyer, cool when i mentioned it b4 you seemed down on it
<hazmat> it would be nice for an arm env to go java-less
<niemeyer> hazmat: Yeah, because it sucks on several aspects right now
<hazmat> rog, on ReDial does gozk reuse a handle?
<niemeyer> hazmat: But what if we.. hmmm.. provided incentives for the situation to change? :-)
<hazmat> niemeyer, yeah.. persistence and error handling there don't seem well known
<hazmat> niemeyer, indeed, things can change
<rog> hazmat: no, i don't think so, but i don't think it needs to.
<hazmat> that's one important difference between gozk/txzk.. the pyzk doesn't expose reconnecting with the same handle, which toasts extant watches (associated to handle) when trying to reconnect to the same session explicitly
<rog> (paste coming up)
<hazmat> libzk in the background will do it, but if you want to change the server explicitly at the app/client level its a hoser
<rog> if you get clients to explicitly negotiate with the central dialler when the connection is re-made, i think it can work.
<rog> i.e. get error indicating that server is down, ask central thread for new connection.
<rog> store that connection where you need to.
 * niemeyer loves the idea of keeping it simple and not having to do any of that :)
<rog> yeah, me too.
<rog> but we can't. at least i think that's the conclusion we've come to, right?
<niemeyer> rog: Hmm, not my understanding at least
<niemeyer> rog: It's precisely the opposite.. we _have_ to do that anyway
<niemeyer> rog: Because no matter how hard we try, the connection can break for real, and that situation has to be handled properly
<niemeyer> rog: So I'd rather focus on that scenario all the time, and forget about the fact we even have sessions
<rog> niemeyer: so you're saying that we have to lose all client state when there's a reconnection?
<niemeyer> rog: Yes, I'm saying we have to tolerate that no matter what
<rog> so even if zk has returned ok when we've created a node, we have to act as if that node might not have been created?
<niemeyer> rog: If zk returned ok, there's no disconnection
<rog> niemeyer: if it returned ok, and the next create returns an error; that's the scenario i'm thinking of
<rog> that's the situation where i think the node creator could wait for redial and then carry on from where it was
<niemeyer> rog: If it returned an error, we have to handle it as an error and not assume that the session is alive, because it may well not be
<niemeyer> rog: and what if the session dies?
<rog> i'm not saying that it should assume the session is still alive
<rog> i'm saying that when it gets an error, it could ask the central thread for the new connection - it might just get an error instead
<rog> the node creator is aware of the transition, but can carry on (knowingly) if appropriate
<niemeyer> rog: and what about the several watches that are established?
<rog> niemeyer: same applies
<niemeyer> rog: What applies?
<rog> the watch will return an error; the code doing the watch can ask for a new connection and redo the watch if it wishes.
<hazmat> niemeyer, so we're back to reinitializing the app on any connection error, disregarding recoverable
<rog> no redoing behind the scenes, but the possibility of carrying on where we left off
<niemeyer> rog: The state on which the watch was requested has changed
<niemeyer> rog: Check out the existing code base
<hazmat> niemeyer, so interestingly we can be disconnected, not know it, and miss a watch event
<niemeyer> rog: It's not trivial to just "oh, redo it again"
<rog> niemeyer: it doesn't matter because the watcher is re-requesting the state, so it'll see both the state and any subsequent watch event
<niemeyer> hazmat: Yeah, that's exactly the kind of very tricky scenario that I'm concerned about
<rog> the watcher has to deal with the "state just changed" scenario anyway when it first requests the watch
<hazmat> niemeyer, actually we get notification from a session event that we reconnected
<niemeyer> hazmat: As Russ would say, I don't really want to think about whether it's correct or not
<niemeyer> rog: No.. please look at the code base
<rog> niemeyer: sorry, which bit are you referring to?
<niemeyer> rog: We're saying the same thing, in fact.. you're just underestimating the fact that "just retry" is more involved than "request the new connection and do it again"
<niemeyer> rog: juju
<niemeyer> rog: lp:juju
<niemeyer> rog: This concept touches the whole application
<rog> niemeyer: i've been exploring it a bit this morning, but haven't found the crucial bits, i think. what's a good example file that would be strongly affected by this kind of thing?
<niemeyer> hazmat: We do.. the real problem is ensuring state is as it should be when facing reconnections
<niemeyer> rog: I'm serious.. this touches the whole app
<hazmat> niemeyer, right we always have to reconsider state on reconnection
<niemeyer> rog: Check out the agents
<rog> niemeyer: ah, ok. i was looking in state
<rog> thanks
<niemeyer> rog: state is good too
<niemeyer> rog: Since it's what the agents use and touches this concept too
<niemeyer> rog, hazmat: So, my suggestion is that the first thing we do is to unhide temporary failures in gozk
<hazmat> niemeyer, the test becomes a lot more reliable when we have multiple zks in the cluster setup for the client to connect to
<rog> sgtm
<niemeyer> rog, hazmat: Then, let's watch out for that kind of issue very carefully in reviews and whatnot, as we build a reliable version
<niemeyer> hazmat: The same problem exists, though..
<hazmat> niemeyer, indeed
<hazmat> but it minimizes total disconnect scenarios with multiple zks
<niemeyer> hazmat: Even if it _immediately_ reconnects, the interim problems may have created differences that are easily translated into bugs very hard to figure out
<niemeyer> hazmat: and again, we seriously _have_ to handle the hard-reconnect across the board
<hazmat> niemeyer, agreed
<hazmat> niemeyer, i'm all in favor of simplifying and treating them the same
<hazmat> recoverable/unrecoverable conn errors
<niemeyer> hazmat: So no matter how much we'd love to not break the session and have a pleasant API, the hard reconnects mean we'll need good failure recovery either way
<niemeyer> So we can as well plan for that at all times
<hazmat> niemeyer, its detecting the conn error that i'm concerned about atm
<niemeyer> hazmat: My understanding is that the client always notifies about temporary issues
<hazmat> niemeyer, based on its internal poll period to the server
<hazmat> niemeyer, a transient disconnect masks any client detection
<niemeyer> hazmat: Really!?
<hazmat> it seems the server will attempt to expire the client session, but i've seen once where instead it shows a reconnect
<niemeyer> hazmat: I can't imagine how that'd be possible
<niemeyer> hazmat: The client lib should hopefully notify the user that the TCP connection had to be remade
<hazmat> niemeyer, fwiw here's the test i'm playing with (can drop into test_session.py ).. http://paste.ubuntu.com/702290/
<hazmat> for a package install of zk.. ZOOKEEPER_PATH=/usr/share/java
<hazmat> for the test runner
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: That seems to test that watches work across reconnections
<niemeyer> hazmat: We know they can work
<hazmat> niemeyer, they do but we miss the delete
<niemeyer> hazmat: Or am I missing something?
<niemeyer> hazmat: Ah, right!
<hazmat> with no notice
<niemeyer> hazmat: So yeah, it's total crack
<hazmat> niemeyer, actually most of the time we get an expired session event in the client w/ the watch
<hazmat> like 99%
<hazmat> if i connect the client to multiple servers it sees the delete
<hazmat> w/ the watch that is
<niemeyer> hazmat: Hmm.. interesting.. so does it keep multiple connections internally in that case, or is it redoing the connection more quickly?
<hazmat> niemeyer, not afaict, but its been a while since i dug into that
<hazmat> niemeyer, but as an example here's one run http://paste.ubuntu.com/702291/
<hazmat> where it does get the delete event
<hazmat> but that's not guaranteed in all ops
<niemeyer> hazmat: If you _don't_ create it on restart, does it get the notification?
<niemeyer> hazmat: Just wondering if it might be joining the two events
<hazmat> niemeyer, no it still gets the deleted event if it gets an event, else it gets session expired
<hazmat> but its easy to construct it so it only sees the created event
<hazmat> if i toss a sleep in
<hazmat> perhaps not
<hazmat> it seems to get the delete event or session expiration.. i need to play with this some more and do a more thought out write up
<hazmat> in some cases it does get the created event, obviously the pastebin has that
<niemeyer> hazmat: I see, cool
<niemeyer> On a minor note, filepath.Rel is in.. can remove our internal impl. now
<hazmat> niemeyer, cool
<niemeyer> That was a tough one :)
<niemeyer> fwereade: Leaving to lunch soon.. how's stuff going there?
<niemeyer> fwereade: Can I do anything for you?
<niemeyer> jimbaker: How's env-origin as well?
<jimbaker> niemeyer, just need to figure out the specific text for the two scenarios you mention
<niemeyer> jimbaker: Hmmm.. which text?
<jimbaker> niemeyer, from apt-cache policy
<niemeyer> jimbaker: Just copy & paste from the existing test? Do you want me to send a patch?
<jimbaker> niemeyer, well it's close to being copy & paste, but the difference really matters here
<jimbaker> if you have a simple patch, for sure that would be helpful
<niemeyer> jimbaker: Sorry, I'm still not sure about what you're talking about
<niemeyer> jimbaker: It seems completely trivial to me
<niemeyer> jimbaker: Sure.. just a sec
<jimbaker> niemeyer, i was not familiar with apt-cache policy before this work. obviously once familiar, it is trivial
<niemeyer> jimbaker: I'm actually talking about the request I made in the review..
<niemeyer> jimbaker: But since you mention it, I actually provided you with a scripted version saying exactly how it should work like 3 reviews ago
<jimbaker> niemeyer, i'm going against http://carlo17.home.xs4all.nl/howto/debian.html#errata for a description of the output format
<hazmat> the python-apt bindings are pretty simple too.. i used them for the local provider.. although its not clear how you identify a repo for a given package from it
<jimbaker> niemeyer, if you have a better resource describing apt-cache policy, i would very much appreciate it
<jimbaker> hazmat, one advantage of such bindings is the data model
<hazmat> jimbaker, well.. its as simple as cache = apt.Cache().. pkg = cache["juju"].. pkg.isInstalled -> bool... but it doesn't tell you if its a ppa or distro
<hazmat> and for natty/lucid installs without the ppa that's a KeyError on cache["juju"]
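hazmat's python-apt fragment, spelled out runnably, with a guard for the KeyError he mentions (attribute spelling varies across python-apt versions, as noted):

```python
import apt

cache = apt.Cache()
try:
    pkg = cache["juju"]           # KeyError on natty/lucid without the PPA
except KeyError:
    installed = False
else:
    installed = pkg.is_installed  # spelled pkg.isInstalled in older python-apt
```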
<niemeyer> jimbaker: apt-get source apt
<niemeyer> jimbaker: That's the best resource about apt-cache you'll find
<jimbaker> niemeyer, ok, i will read the source, thanks
<niemeyer> jimbaker: Turns out that *** only shows for the current version, so it's even easier
<niemeyer>          if (Pkg.CurrentVer() == V)
<niemeyer>             cout << " *** " << V.VerStr();
<niemeyer>          else
<niemeyer>             cout << "     " << V.VerStr();
<hazmat> jimbaker, ideally the detection will also notice osx and do something sane, but we can do that later
<niemeyer> jimbaker: http://paste.ubuntu.com/702301/
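The pastes have since expired; what follows is a hedged reconstruction of the detection logic under discussion (the real version landed in the env-origin branch, and the archive URLs, line offsets, and fallbacks here are illustrative):

```python
import subprocess

def get_default_origin():
    """Guess juju-origin from `apt-cache policy juju` output."""
    try:
        output = subprocess.check_output(["apt-cache", "policy", "juju"])
    except OSError:
        return "distro"              # no apt-cache (e.g. osx): sane default
    lines = output.decode().splitlines()
    for i, line in enumerate(lines):
        if line.startswith(" ***"):  # *** marks the installed version
            for source in lines[i + 1:i + 3]:  # its archive source lines
                if "ppa.launchpad.net/juju" in source:
                    return "ppa"
                if "ubuntu.com" in source:
                    return "distro"
            break
    return "branch"                  # not installed: likely running from source
```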
<hazmat> more important to have this in now for the release
<niemeyer> hazmat: Oh yeah, please stop giving ideas! :-)
<jimbaker> hazmat, for osx, doesn't it make more sense to just set juju-origin?
<niemeyer> Please, let's just get this branch fixed..
<hazmat> jimbaker, probably does.. but the "/usr" in package path has faulty semantics with a /usr/local install on the branch as i recall
<niemeyer> I'm stepping out for lunch
<fwereade> popping out for a bit, back later
<rog> i'm off for the evening. am still thinking hard about the recovery stuff. see ya tomorrow.
<rog> niemeyer: PS ping re merge requests :-)
<niemeyer> rog: Awesome, sorry for the delay there
<niemeyer> rog: Yesterday was a bit busier than expected
<niemeyer> jimbaker: How's it there?
<jimbaker> niemeyer, it's a nice day
<niemeyer> jimbaker: Excellent.. that should mean env-origin is ready?
<jimbaker> niemeyer, i still need to figure out what specifically apt-cache policy would print
<niemeyer> jimbaker: Ok.. let's do this.. just leave this branch with me.
<jimbaker> niemeyer, i do have the source code for what prints it, but i need to understand the model backing it
<niemeyer> jimbaker: No need.. I'll handle it, thanks.
<jimbaker> niemeyer, ok, that makes sense, i know you have a great deal of background from your work on synaptic, thanks!
<niemeyer> jimbaker: That work is completely irrelevant.. the whole logic is contained in the pastebin
<jimbaker> niemeyer, ok
<niemeyer> jimbaker: and I pointed the exact algorithm to you
<_mup_> juju/remove-sec-grp-do-not-ignore-exception r381 committed by jim.baker@canonical.com
<_mup_> Simplified remove_security_group per review point
<_mup_> juju/remove-sec-grp-do-not-ignore-exception r382 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<niemeyer> hazmat: Do you have time for a quick review on top of env-origin?
<niemeyer> hazmat: http://paste.ubuntu.com/702373/
<niemeyer> hazmat: It's pretty much just that function I've shown you a while ago plus minor test tweaks
<hazmat> niemeyer, checking
<niemeyer> hazmat: The test tweaks just try a bit harder to break the logic
<niemeyer> hazmat: Hmm.. I'll also add an extra test with broken input, to ensure that's working
<hazmat> niemeyer, what's the file on disk its parsing?
<niemeyer> hazmat: output of apt-cache policy juju
<hazmat> or is that just apt-cache policy pkg?
<niemeyer> hazmat: http://paste.ubuntu.com/702301/
<niemeyer> hazmat: Yeah
<niemeyer> hazmat: That last paste has the logic generating the output
<niemeyer> hazmat: Hmmm.. I'll also do an extra safety check there, actually
<niemeyer> hazmat: It's assuming that any unknown output will fallback to branch.. that sounds dangerous
<niemeyer> hazmat: I'll tweak it so it only falls back to branch in known inputs
<niemeyer> hazmat: http://paste.ubuntu.com/702381/
<hazmat> niemeyer, why is it returning a tuple if it only cares about the line from the line generator
<hazmat> niemeyer, in general it looks fine to me, there's two pieces in the branch that i have minor concern about
<niemeyer> hazmat: Keep reading :)
<hazmat> ah. first indent
<niemeyer> hazmat: It actually cares about the indent as well
<niemeyer> hazmat: It's how we detect we've left a given version entry
<niemeyer> hazmat: What's the other bit you're worried about?
<hazmat> niemeyer, basically how does it break on osx if apt-cache isn't found.. and the notion that juju.__name__.startswith("/usr") unconditionally means a package... if i check juju out and do a setup.py install it's still a source install.. hmm.. i guess that works with the apt-cache check on installed.. so it looks like the only question is what happens if not on ubuntu.. pick a sane default
<hazmat> if apt-cache isn't there this will raise an exception it looks like
<niemeyer> hazmat: I'll take care of that
<hazmat> niemeyer, +1 then
<niemeyer> hazmat: What should we default to?
 * niemeyer thinks
<niemeyer> distro, I guess
<hazmat> niemeyer, distro seems sane
<niemeyer> Cool
<lamalex> is juju useful for deploying services like mongodb on my local dev machine?
<niemeyer> hazmat: http://paste.ubuntu.com/702393/
<niemeyer> lamalex: It is indeed
<niemeyer> lamalex: We've just landed support for that, so we're still polishing it a bit, but that's already in and is definitely something we care about
<lamalex> niemeyer, awesome!
<hazmat>  niemeyer +1
<niemeyer> hazmat: Woot, there we go
<_mup_> juju/env-origin r381 committed by gustavo@niemeyer.net
<_mup_> - Implementation redone completely.
<_mup_> - Do not crash on missing apt-cache.
<_mup_> - Exported and tested get_default_origin.
<_mup_> - Tests tweaked to explore edge cases.
<hazmat> niemeyer, should i be waiting on a second review for local-origin-passthrough or can i go ahead and merge?
<hazmat> bcsaller, if you have a moment and could look at local-origin-passthrough that would be awesome
<bcsaller> I'll do it now
<hazmat> bcsaller, awesome, thanks
<hazmat> bcsaller, i had one fix that i accidentally pushed down to unit-cloud-cli, regarding the network setup in the chroot: the way it was working before, modifying resolvconf/*/base, wasn't going to work since that's not processed for a chroot, so i ended up directly inserting dnsmasq into the output resolvconf/run/resolv.conf to ensure it's active for the chroot
<bcsaller> hazmat: why did it need to be active for the chroot?
<hazmat> bcsaller, because we install packages and software from there
<hazmat> bcsaller, most of the packages end up being cached, which caused some false starts, but doing it with juju-origin resurfaced the issue, since it had to talk to lp to resolve the branch
<bcsaller> yeah... just put that together. We might be better off with a 1 time job for juju-create
<bcsaller> upstart job I mean
<hazmat> bcsaller, it is still a one time job, and dnsmasq is the correct resolver, i just changed it to be the active one during the chroot
<niemeyer> hazmat: Hmm
<bcsaller> k
<_mup_> juju/trunk r382 committed by gustavo@niemeyer.net
<_mup_> Merged env-origin branch [a=jimbaker,niemeyer] [r=hazmat,niemeyer]
<_mup_> This introduces a juju-origin option that may be set to "ppa",
<_mup_> "distro", or to a bzr branch URL.  The new logic will also attempt
<_mup_> to find out the origin being used to run the local code and will
<_mup_> set it automatically if unset.
<niemeyer> May be worth testing it against the tweaked env-origin
<hazmat> /etc/resolv.conf symlinks to /etc/resolvconf/run/resolv.conf .. its only on startup that it gets regen'd for the container via dhcp to be the dnsmasq..
<bcsaller> hazmat: thats why I was suggesting that it could happen in startup on the first run in a real lxc and not a chroot
<hazmat> niemeyer, good point.. i think i ended up calling get_default_origin to get a sane default for local provider to pass through
<bcsaller> but the change you made should be fine
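A sketch of the fix hazmat describes above: inside the container template's rootfs, /etc/resolv.conf is a symlink to /etc/resolvconf/run/resolv.conf, so the customize step can write the dnsmasq address straight into the run file. The rootfs path is an assumption; 192.168.122.1 is libvirt's dnsmasq on the default bridge:

```python
# Assumed template rootfs path; the real code derives this per environment.
rootfs = "/var/lib/lxc/user-env-0-template/rootfs"
run_resolv = rootfs + "/etc/resolvconf/run/resolv.conf"

# /etc/resolv.conf inside the chroot symlinks to resolvconf/run/resolv.conf,
# so writing here takes effect immediately for package installs during
# customization.
with open(run_resolv, "w") as f:
    f.write("nameserver 192.168.122.1\n")  # libvirt's dnsmasq
```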
<niemeyer> hazmat: Yeah.. I've exported it and tested it
<niemeyer> hazmat: So it'll be easy to do that
<hazmat> cool
<niemeyer> hazmat: Note that the interface has changed, though
<hazmat> noted, i'll do an end to end test
<niemeyer> hazmat: It returns a tuple of two elements in the same format as parse_juju_origin
<niemeyer> hazmat, bcsaller: I've just unstuck the wtf too.. it was frozen on a "bzr update" of lp:juju for some unknown reason
<niemeyer> We should have some input about the last 3 revisions merged soonish
<niemeyer> I'm going outside for some exercising.. back later
<hazmat> niemeyer, cheers
<niemeyer> Woot! 379 is good.. 3 to go
<niemeyer> Alright, actually leaving now.. laters!
<hazmat> interesting.. Apache Ambari
<SpamapS> hey is there a tutorial for using the local provider?
<SpamapS> hmmmm
<SpamapS> latest trunk failure on PPA build
<SpamapS> https://launchpadlibrarian.net/81932606/buildlog_ubuntu-natty-i386.juju_0.5%2Bbzr378-1juju1~natty1_FAILEDTOBUILD.txt.gz
<hazmat> SpamapS, not yet
<hazmat> i'll put together some provider docs after i get these last bits merged
<hazmat> SpamapS, haven't seen those failures b4
<hazmat> they work for me disconnected on trunk
<hazmat> SpamapS, is the s3 url endpoint being patched for the package?
<hazmat> i don't see how else that test could fail, perhaps bucket dns names
<_mup_> Bug #867877 was filed: revision in charm's metadata.yaml is inconvenient <juju:New> < https://launchpad.net/bugs/867877 >
<_mup_> juju/trunk-merge r343 committed by kapil.thangavelu@canonical.com
<_mup_> trunk merge
<_mup_> juju/local-origin-passthrough r418 committed by kapil.thangavelu@canonical.com
<_mup_> merge pipeline, resolve conflict
<fwereade> that's it for me, nn all
<_mup_> juju/trunk r383 committed by kapil.thangavelu@canonical.com
<_mup_> merge unit-relation-with-address [r=niemeyer][f=861225]
<_mup_> Unit relations are now prepopulated with the unit's private address
<_mup_> under the key 'private-address'. This obviates the need for units to
<_mup_> manually set ip addresses on their relations to be connected to by the
<_mup_> remote side.
<hazmat> fwereade, cheers
 * niemeyer waves
<niemeyer> Woot.. lots of green on wtf
<niemeyer> hazmat: re. local-origin-passthrough, once you're happy with it would you mind doing a run on EC2 just to make sure things are happy there?
<hazmat> niemeyer, sure, just in progress on that
<niemeyer> hazmat: Cheers!
<jimbaker> although local-origin-passthrough doesn't work for me, hazmat believes he has a fix for it in the unit-info-cli branch
<hazmat> jimbaker, did you try it out?
<jimbaker> hazmat, unit-info-cli has not yet come up
<hazmat> jimbaker, and to be clear that's not  regarding ec2
<jimbaker> hazmat, of course not, it's local :)
<hazmat> jimbaker, pls pastebin the data-dir/units/master-customize.log
<hazmat> jimbaker, it would also be good to know if you have unit agents running or not
<hazmat> jimbaker, are you running oneiric containers?
<hazmat> jimbaker, yeah.. figuring out when its done basically needs to parse ps output
<hazmat> or check status
<jimbaker> hazmat, unfortunately this is hitting a wall of time for me - need to take kids to get their shots momentarily
<hazmat> but incrementally its easier to look at ps output
<jimbaker> hazmat, makes sense. i was just taking a look at juju status
<hazmat> jimbaker, k, i'll be around later
<jimbaker> ok, i will paste when i get back
<hazmat> jimbaker, juju status won't help if there's an error, looking at ps output shows the container creation and juju-create customization, all the output of customize goes to the customize log
 * hazmat wonders if the cobbler api exposes available classes
<hazmat> ah.. get_mgmtclasses
<_mup_> juju/local-origin-passthrough r419 committed by kapil.thangavelu@canonical.com
<_mup_> incorporate non interactive apt suggestions, pull up indentation and resolv.conf fixes from the pipeline
<_mup_> juju/trunk r384 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-origin-passthrough [r=niemeyer][f=861225]
<_mup_> local provider respects juju-origin settings. Allows for using
<_mup_> a published branch when deploying locally.
<hazmat> whoops, forgot the reviewers
<_mup_> juju/unit-info-cli r426 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-origin-passthrough & resolve conflict
<_mup_> juju/unit-info-cli r427 committed by kapil.thangavelu@canonical.com
<_mup_> fix double typo pointed out by review
#juju 2011-10-05
<_mup_> juju/trunk r385 committed by kapil.thangavelu@canonical.com
<_mup_> merge unit-info-cli [r=fwereade,niemeyer][f=863816]
<_mup_> Allow units to obtain their public and private addresses in a provider
<_mup_> independent fashion
<_mup_> Bug #867991 was filed: local provider needs documentation <juju:In Progress by hazmat> < https://launchpad.net/bugs/867991 >
<_mup_> Bug #867993 was filed: Code/module docs seem to use "j" instead of juju <juju:New> < https://launchpad.net/bugs/867993 >
<jimbaker> hazmat, here is the master-customize.log: http://pastebin.ubuntu.com/702511/
<jimbaker> i tried going against the latest in unit-info-cli, no changes i could see
<hazmat> interesting
<jimbaker> in terms of the log or juju status'
<hazmat> jimbaker, what's the output of > host archive.ubuntu.com 192.168.122.1
<hazmat> jimbaker, it looks like a network connectivity problem
<jimbaker> hazmat, here's the output: http://pastebin.ubuntu.com/702516/
<hazmat> jimbaker, one more.. what's the output of sudo cat /var/lib/lxc/yourusername-yourenvname-0-template/rootfs/etc/resolvconf/run/resolv.conf
<jimbaker> hazmat, nameserver 192.168.122.1
<hazmat> jimbaker, odd
<hazmat> everything sounds like its correct
<hazmat> jimbaker, aha
<jimbaker> hazmat, ?
<jimbaker> (if you want i can give you a login to this box)
<hazmat> jimbaker, so oneiric is still in beta, which means old packages aren't necessarily kept around
<hazmat> jimbaker, you'll need to manually update your lxc cache
<jimbaker> hazmat, ok
<hazmat> so it has the latest oneiric packages
<hazmat> jimbaker,  $ sudo chroot /var/cache/lxc/oneiric/rootfs-i386/
<hazmat> jimbaker, $ apt-get update && apt-get upgrade
<hazmat> that should fix things; otherwise i bet the pkg cache was ref'ing nonexistent packages upstream
<jimbaker> ok, doing that, but against amd64 :)
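Condensing the exchange above into one place, a minimal sketch of the cache refresh; the amd64 rootfs path is an assumption mirroring the i386 path hazmat quoted:

```bash
# Refresh a stale oneiric LXC template cache so apt stops referencing
# packages the beta archive no longer carries.
ROOTFS=/var/cache/lxc/oneiric/rootfs-amd64   # rootfs-i386 on 32-bit hosts
sudo chroot "$ROOTFS" apt-get update
sudo chroot "$ROOTFS" apt-get -y upgrade
```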
 * hazmat heads out for a dog walk
<hazmat> bbiab
<jimbaker> hazmat, makes sense, this would explain those bad refs in master-customize.log
<hazmat> jimbaker, also you can update your juju-origin to lp:juju/trunk if you want.. its all committed now
<jimbaker> hazmat, sounds good
<hazmat> jimbaker, that work for you?
<niemeyer> hazmat: Man!
<niemeyer> hazmat: wtf is happy with unit-get!
<hazmat> niemeyer, :-)
 * niemeyer dances around the chair
<hazmat> niemeyer, cool, i'm out going to go enjoy some down time
<niemeyer> hazmat: Enjoy it :)
<hazmat> niemeyer, sent out some mails to the list inviting additional local provider testing
<niemeyer> I'm going down too
<niemeyer> hazmat: Superbsome
 * niemeyer => poweroff
<jimbaker> hazmat, hmmm, the upgrade of the lxc cache was successful. looks like i have a networking issue, the agent logs (which i finally have) just report zk errors about network is unreachable
<jimbaker> (agent logs for mysql/0, wordpress/0)
<jimbaker> the other detail is that the master-customize.log was not touched. is that only used in the event of an error?
<jcastro> hazmat: oh nice, I'll try local/LXC as soon as the build is finished
<jcastro> hazmat: I've been looking forward to this.
<jcastro> anyone try the local provider? I'm having some problems getting the environment right
<jimbaker> jcastro, i have tried the local provider several times, but yet to get it running
<jcastro> I'm not even getting past the environments.yaml bits
<jimbaker> jcastro, this is my environments.yaml for lxc: http://pastebin.ubuntu.com/702557/
<jimbaker> jcastro, for me it's failing in the network at this point
<jcastro> "error: internal error Network is already in use by interface virbr0"
<jcastro> look familiar?
<jimbaker> jcastro, i have seen that. one thing to try (hate to tell you it) is to reboot your computer
<jcastro> hah, awesome.
<jimbaker> the initial lxc networking for whatever reason didn't come up properly until i did this
<jimbaker> jcastro, before doing that, might as well make sure you upgrade  the pkg cache
<jimbaker> see above, approx 2.5 hours ago
<jcastro> ah
<jcastro> jimbaker: woo! status works
<jimbaker> jcastro, cool, but the provisioning agent runs on your machine, so it's not indicative of the lxc bits
<jimbaker> (other than using local of course)
<jcastro> ah
<jimbaker> i just got past that just now, the fragility was in the package cache
<jcastro> they show up in status, but their state is still null
<jimbaker> jcastro, do something like $ ps -ef | grep juju - verify you see unit agents for mysql/0 and wordpress/0 (or whatever stack you are attempting)
<jcastro> doesn't look like it, just one for zookeeper itself, which I assume is the provisioning one
<jimbaker> jcastro, yes, you will have one zookeeper instance as part of the provisioning setup
<jimbaker> jcastro, so check for the existence of master-customize.log - this will be in the data dir, owned by root
<jimbaker> jcastro, ideally you will have logs for each of service units there too, a container log (for lxc debugging) and the unit.log itself
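jimbaker's checklist above boils down to a couple of commands; a sketch, where DATA_DIR stands in for whatever data-dir the local environment is configured with in environments.yaml:

```bash
DATA_DIR=~/.juju/local                  # assumption; match your environments.yaml
ps -ef | grep '[j]uju'                  # unit agents for mysql/0, wordpress/0, etc.
sudo ls -l "$DATA_DIR/units/"           # master-customize.log plus per-unit dirs
sudo tail "$DATA_DIR/units/master-customize.log"
```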
<jcastro> all the log dirs are empty
<jimbaker> jcastro, ok, so you are seeing something else wrong (and i assume you did the reboot)
<jcastro> yeah
<jcastro> the bootstrap runs, completes, no errors
<jcastro> jimbaker: ok heading to bed, openstack conf tomorrow, I will certainly try on the plane. :)
<jimbaker> bootstrap really means nothing unfortunately - and status is less useful here
<jimbaker> well, of node 0
<jcastro> yeah it looks like it thinks it's firing off things, but they're not happening
<jcastro> at first I was like "wow, this is fast."
<jcastro> but it doesn't appear to be actually doing anything, heh
<jimbaker> i think it might actually be fast, but yeah, still waiting to get it to work
<jimbaker> ok, enjoy the conf!
<jcastro> i feel like we're close!
<jimbaker> i think so too, there are some basic setup doc issues, possibly ones that can be scripted, to make it work, i suspect i just have one more to go through
 * hazmat catches up
<hazmat> surprised  rebooting is needed
<hazmat> jcastro, its all async same as other providers so its fast to execute commands
<hazmat> but takes a few moments for reality
<hazmat> jcastro, planes are probably a problem, network connectivity is needed for non cached package installs
<hazmat> jimbaker, so the lxc cache update didn't solve the issue for you?
<_mup_> juju/remove-sec-grp-do-not-ignore-exception r383 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<hazmat> jimbaker, the master customize log is updated only after the complete run, you should use ps aux | grep lxc to verify that
<hazmat> oh... no.. gluster acquired
<_mup_> juju/trunk r386 committed by jim.baker@canonical.com
<_mup_> merge remove-sec-grp-do-not-ignore-exception [r=niemeyer][f=863510]
<_mup_> Simplified logic for removing security groups to ensure any exceptions
<_mup_> are not ignored by being consumed by the Twisted reactor.
<jimbaker> hazmat, i got further with the lxc cache update
<jimbaker> but now i'm seeing this networking issue
<jimbaker> hazmat, so in the unit.log files (in my case mysql/0, wordpress/0), i'm seeing this repeated: Network is unreachable
<jimbaker> so something is wrong w/ the virtual network setup
<xerxas> Hi all !
<xerxas> is juju well suited for my use case:
<xerxas> I want to launch a complete infrastructure with 1 command, with services knowing each other's addresses
<xerxas> and, also, I want to specify a list of packages to install on each node, and some sed commands
<xerxas> is chef-solo better for doing this ? or cloudformation (I'm working on EC2)
<TeTeT> xerxas: from my reading I think juju can do that for you - but keep in mind it is under development. Not sure if you want to base any production sites on it right now
<xerxas> TeTeT: you mean, if I want to take my infrastructure live?
<xerxas> If I'm adding units afterwards?
<TeTeT> xerxas: adding units should work nicely, if the charm allows for it. What is your schedule for going productive?
<xerxas> now ;)
<xerxas> what's the simplest way to provision ec2 infrastructure (not instances, but infrastructures!)
<xerxas> cloud-init seems ok for instances, is cloud-formation the cloud-init of infrastructure ?
<kim0> xerxas: actually I'd say juju is exactly what you need, and is the simplest way going forward
<kim0> xerxas: it's just that it's alpha quality right now
<xerxas> kim0: why ?
<kim0> because it does what you described :)
<xerxas> kim0: I don't mind if it's alpha or not, if it boots my 3 instances, sets up a rabbitmq on one, and creates a config file on a second one containing my rabbitmq server ip address
<xerxas> kim0:  ;)
<xerxas> kim0: this is maybe why I'm asking here ;)
<xerxas> ahh , also, question , I tested juju ... but
<xerxas> does the juju client need to be ubuntu, like the juju "controller" and the juju-booted instances?
<kim0> xerxas: no client can be osx too
<xerxas> by controller I mean the instance where zookeeper is installed
<kim0> or other linux
<xerxas> kim0:  nice, I'm running osx ;)
<kim0> cool
<xerxas> but the rest needs to be ubuntu
<kim0> xerxas: I think it's available on some osx repo
<kim0> can't remember what osx folks use to install cli tools :)
<xerxas> brew
<kim0> yes that's it
<xerxas> and easy_install ;)
<TeTeT> wasn't there fink back in the day?
<TeTeT> which was a lot like apt
<kim0> think brew it is now
<xerxas> TeTeT: fink is useless now
<TeTeT> been 6 years since I worked with osx ...
<xerxas> brew is git based, with easy formula writing (a formula is metadata for building packages), social ...
<xerxas> been 6 years since I had an ubuntu based desktop ;)
<kim0> xerxas: here is a rabbitmq charm: https://code.launchpad.net/~charmers/charm/oneiric/rabbitmq-server/trunk
<xerxas> but never left ubuntu on the server ;)
<kim0> xerxas: good choice ;)
<xerxas> and will never (you're doing good work , guys ! )
<xerxas> kim0: thx for the url
<kim0> xerxas: so one instance is rabbitmq, what are the other two ?
<xerxas> some daemon I wrote in python consuming messages
<kim0> ah cool!
<xerxas> one will have some lxc containers
<jamespage> morning - anyone tried using the LXC local provider yet?
<kim0> xerxas: sounds like a cool project!
<xerxas> anyway, my python daemon has a configuration file, and in that configuration file I have an ip address, the rabbitmq server's
<xerxas> (more precisely, one of the rabbitmq servers' addresses, because we might have several of them ...)
<kim0> xerxas: I'd love to write an article about that project when you get it running .. keep me in the loop :)
<TeTeT> jamespage: it's on my plate for later this week
<kim0> TeTeT: is it already working fine ?
<TeTeT> kim0: I don't know yet, this is why I'd love to test it
<kim0> yeah
 * HarryPanda can't get local charm repositories working anymore
<hazmat> good morning
<HarryPanda> `charm create test /root/charms && juju deploy --repository /root/charms/ local:test` = no dice
<hazmat> HarryPanda, you need to use a series (like 'oneiric') as an additional directory in the repository, between the base and the charm..
<hazmat> HarryPanda, ie.. so looking at the example formulas that come with juju.. its examples/oneiric/wordpress
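In other words, the series directory sits between the repository root and the charm; a sketch based on HarryPanda's command above ('test' is his hypothetical charm):

```bash
mkdir -p /root/charms/oneiric/test      # metadata.yaml, hooks/ etc. live in here
juju deploy --repository /root/charms local:test
```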
<HarryPanda> aah, I haven't been keeping up with the changelog
<hazmat> HarryPanda, yeah.. that was recent
 * HarryPanda got confused about store.juju.ubuntu.com etc.
<hazmat> there was a mail out to the list about the change
<hazmat> HarryPanda, its basically a charm repository.. its not active yet
<hazmat> HarryPanda, yeah.. its a bit confusing on the error message
<hazmat> fwereade, ^
<hazmat> jamespage, haven't gotten any new feedback yet
<jamespage> hazmat, hey - I had a go based on your posting to juju@l.u.c
<hazmat> jamespage, cool, how'd it go?
<jamespage> hit a few errors - the environments.yaml entry wanted a type and admin-secret entry as well ( so I had a guess)
<hazmat> jamespage, doh.. yeah
<hazmat> jamespage, late night emails.. bad
<hazmat> jamespage, and then?
<hazmat> jamespage, fwiw admin-secret is anything you want it to be
<jamespage> I appear to have a running environment!
<jamespage> I've not deployed anything yet - you caught me as I was typing juju bootstrap --environment....
 * HarryPanda is slowly getting our full stack running on a mix of vmware+orchestra, ec2 and lxc :)
<hazmat> jamespage, cool.. the real work happens when you first deploy a charm.. and it starts creating lxc containers..
<hazmat> HarryPanda, fun
<jamespage> hazmat, OK about to try that
<hazmat> jamespage, its async to the cli, so giving progress on it is a little hard, you can ps aux | grep lxc to watch it happen, or tail the log file in data-dir/machine-agent.log
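Two ways of watching that async work, per hazmat's suggestion; DATA_DIR is again an assumed placeholder for the local provider's configured data-dir:

```bash
watch -n2 'ps aux | grep [l]xc'          # container creation activity on the host
tail -f "$DATA_DIR/machine-agent.log"    # or follow the machine agent's log
```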
<jamespage> hazmat: hmm - not sure the machine agent started
<jamespage> http://paste.ubuntu.com/702702/
<hazmat> jamespage, hmm.. haven't seen that one
<hazmat> jamespage, can you try bootstrapping with the verbose flag? ie. juju -v bootstrap
<hazmat> jamespage, that should give the traceback
<HarryPanda> jamespage: I'm getting the same
<HarryPanda> local/agent.py:59, changed JUJU_ORIGIN=%s to JUJU_ORIGIN=ppa
<jamespage> hazmat: http://paste.ubuntu.com/702706/
<hazmat> dah
<hazmat> foobar
<hazmat> jamespage, thanks
<jamespage> np - want me to raise a bug?
<hazmat> jamespage, i think its a trivial, i can commit it to trunk, just double checking
<hazmat> i was verifying this with a juju-origin defined in my environments.yaml.. but i didn't test without one
<hazmat> yeah.. niemeyer mentioned he'd changed the interface on this function, but i forget
<jamespage> hazmat: hmm - might also be better to depend on zookeeper rather than zookeeperd as the local provider starts its own instance
<jamespage> saves a running java process which is always a good thing
<hazmat> jamespage, good point.. libzookeeper-java should have what we need
<jamespage> yep
<jamespage> keen to test this as I have seen panics running Java stuff under lxc with openstack
<jamespage> might disappear as a result tho :-)
<hazmat> jamespage, we don't require any java under lxc.. but noted
<hazmat> fwereade, http://paste.ubuntu.com/702712/
<hazmat> could i get a +1 on that trivial
<jamespage> hazmat, well juju does not - but charms might...
<hazmat> jamespage, yup
<hazmat> no cassandra might hurt
 * hazmat sheds a tear over gluster's acquisition
<HarryPanda> redhat? 0.o
<fwereade> hazmat: sorry, I'm not quite following the paste
<fwereade> hazmat: well, just the first bit
<fwereade> hazmat: I guess, the packages need to change, so... you change the packages
<fwereade> hazmat: consider my confusion withdrawn
<fwereade> hazmat: +1
<hazmat> fwereade, yeah.. let me repaste.. the origin handling changed, but i made it to a branch copy by accident
<jamespage> hazmat: getting there - I now have a running lxc container - however its not showing up as started yet
<jamespage> I can log into it using SSH
<hazmat> fwereade, updated diff.. http://paste.ubuntu.com/702715/
<hazmat> fwereade, does a couple of things, logs juju origin when starting machine agent, fixes the handling for juju origin to reflect new return signature from get_default_origin, changes the package dep from zookeeperd to libzookeeper-java
<fwereade> hazmat: cool; have a much-happier +1 :)
<HarryPanda> jamespage: what's the 'state' for the service unit?
<jamespage> HarryPanda: null
<jamespage> I can see the failsafe upstart configuration waiting for network interfaces to come up within the lxc instance
<jamespage> http://paste.ubuntu.com/702719/
<_mup_> juju/trunk r388 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] require libzookeeper-java not zookeeperd, update handling of get_default_origin, log origin used [r=fwereade]
<hazmat> jamespage, interesting i never noticed that b4 re waiting for network within the container
<hazmat> jamespage, sadly it happens on my laptop.. sort of destroys any value to making boot faster
<hazmat> jamespage, the unit agent gets started via upstart.. it should have a symlink to its log at $data-dir/units/$unit-name/unit.log
<hazmat> the link will be broken if the unit agent never started
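A quick check for that case; the unit directory name is an assumption based on the path hazmat gives above:

```bash
UNIT_LOG="$DATA_DIR/units/wordpress-0/unit.log"
# -L but not -e: the symlink exists but its target doesn't,
# i.e. the unit agent never started.
if [ -L "$UNIT_LOG" ] && [ ! -e "$UNIT_LOG" ]; then
    echo "unit agent never started: $UNIT_LOG is a dangling link"
fi
```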
<hazmat> HarryPanda, where'd you get to with the origin=ppa change?
<HarryPanda> hazmat: enough to get it working so I could deploy stuff
<HarryPanda> I will re-checkout after lunch
<hazmat> HarryPanda, cool, and you see the units with state: started in juju status?
<hazmat> HarryPanda, cheers
<hazmat> lxc-ls shows the containers, the output is a bit odd, it will list it twice if its running, once if its defined but not running
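Given that quirk, counting duplicate lines separates the two states; a small sketch:

```bash
# count 2 = defined and running, count 1 = defined but stopped
lxc-ls | sort | uniq -c
```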
 * hazmat regen's the ppa
 * HarryPanda still has a lot of catching up to do in the meantime, like puppet integration
<hazmat> jamespage, can you pastebin your $data-dir/units/master-customize.log
<jamespage> hazmat: http://paste.ubuntu.com/702724/
<hazmat> jamespage, nice.. looks good
<hazmat> jamespage, but your unit agents aren't starting?
<jamespage> not within the instance
<hazmat> jamespage, do you have any output in $data-dir/units/$unit-name/unit.log or is it a broken link?
<jamespage> hazmat, broken link ATM
<jamespage> this instance is stalled waiting for the network interface to start
<jamespage> so I don't think the right run-level has been initiated yet
<hazmat> jamespage, if you're able to ssh into the unit, you can try manually starting the agent using its upstart file as a guideline /etc/init/$unit-name-unit-agent.conf
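A sketch of that manual start, run inside the container; the job name is inferred from the conf path above and the unit name is hypothetical:

```bash
UNIT=wordpress-0
ls /etc/init/ | grep unit-agent          # confirm the actual job name first
sudo start "${UNIT}-unit-agent"
```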
<hazmat> jamespage, hmm.. i haven't seen that one before re network start wait in the container, except on the host
 * hazmat doesn't understand why oneiric is waiting on network config
<jamespage> hazmat: so I did a sudo start xxx on the unit agent
<jamespage> now showing as started
<hazmat> jamespage, cool
<jamespage> hazmat; adding another unit has resulted in another stalled lxc instance - bah!
<hazmat> humbug
<hazmat> jamespage, had you used lxc before today?.. i'm just wondering if destroying the env, and clearing out the lxc cache would help /var/cache/lxc/oneiric/*
<hazmat> its been at least a week since i did that myself
<jamespage> hazmat, hmm - I can try that
<hazmat> jamespage, alternatively you can try manually chrooting into the cache and upgrading it.. but if i remember the bug correctly its pretty temperamental wrt package install/upgrade/removal
 * hazmat will bbiab
<niemeyer> Morning all!
<rog> niemeyer: hiya
<jamespage> hazmat, looks like /etc/rcS.d/S07resolvconf is blocking - which is stopping the rest of the lxc instance from coming up OK
<niemeyer> rog: Yo
<niemeyer> jamespage: Hey James
<niemeyer> jamespage: Very interesting ideas in your testing charms post
<jamespage> niemeyer, morning
<jamespage> thanks
<niemeyer> jamespage: Still digesting them
<jamespage> niemeyer, I did a bit of refactoring yesterday based on sabdfl's comments
<niemeyer> jamespage: We have a test suite continuously running at wtf.labix.org ATM, and it made me ponder if we could do more there
<jamespage> the charm-tester is now a bit more clever and less implicit
<jamespage> niemeyer, that might be good
<jamespage> niemeyer, whats driving the testing ATM?
<niemeyer> jamespage: A few trivial scripts.. the whole thing is in lp:juju/ftests if you'd like to check it out
<niemeyer> fwereade: ping
<fwereade> niemeyer: pong
<fwereade> niemeyer: it's turned out *much* easier than I expected
<niemeyer> fwereade: Oh, that's so great to hear
<niemeyer> fwereade: I went to sleep a bit concerned yesterday imagining if we'd have to go back
<fwereade> niemeyer: didn't stop me trying to do it wrong earlier, ofc, but you should have a fresh MP in a short while
<niemeyer> fwereade: Woohay!
<fwereade> niemeyer: yeah, it's a great relief :)
<fwereade> niemeyer: and then the auto-update on top of that should be almost trivial
<fwereade> niemeyer: EOD or before
 * fwereade crosses fingers
<niemeyer> fwereade: Superb.. I'm hopeful we can close down the client side for fixes only today still
<niemeyer> SpamapS: ^
<jamespage> hazmat: so it looks like plymouth --ping is hanging forever; hence why the failsafe config is kicking in
 * jamespage scratches his head
<SpamapS> niemeyer: if you guys can give me the revision to upload I will start some builds and tests
<niemeyer> SpamapS: Sounds great.. we just need the stuff fwereade is punching on right now
<SpamapS> niemeyer: here is r381's build failures btw  https://launchpadlibrarian.net/81962870/buildlog_ubuntu-oneiric-i386.juju_0.5%2Bbzr381-1juju1~oneiric1_FAILEDTOBUILD.txt.gz
<hazmat> haven't seen those
<hazmat> jamespage, lame.. dhcp should just work
<jamespage> hazmat, I think it kinda is - however plymouth --ping is hanging  when it tries to configure its logging
<hazmat> jamespage, we can prepopulate resolvconf/resolv.conf.d/base like we did before but for the container startup, straight dhcp should just work
 * hazmat does a  man plymouth 
<hazmat> hmm.. undocumented
<hazmat> jamespage, so plymouth --ping sounds like it just pings upstart
<hazmat> from its help
<jamespage> its the thing that multiplexes all of the output from init scripts and upstart
 * hazmat is out of his element
<hazmat> ah
<jamespage> I've pinged hallyn_ - he might have some ideas esp. with regards to why its borked in a lxc container
<hazmat> jamespage, great, just sent some updated instructions to the list and noted the plymouth hangs
 * hazmat resets his lxc cache
 * hazmat really needs to find a 7mm ssd
<fwereade> niemeyer: about "breaking" changes
<fwereade> niemeyer: it's ok to just commit something that will  not work with existing agents deployed from an older branch... or is it?
<niemeyer> fwereade: *TODAY* it is.. tomorrow, it's not. :-)
<fwereade> heh, good :)
<mdeslaur> hey guys, you should get the juju mailing list added here: https://lists.ubuntu.com/
<mdeslaur> it's kind of hard to find if you don't know what it's called
<hazmat> mdeslaur, good point
<_mup_> Bug #868391 was filed: Juju mailing list should be listed on lists.ubuntu.com <juju:New> < https://launchpad.net/bugs/868391 >
<hazmat> mdeslaur, at the moment the easiest way to find the ml is from the section at the bottom of juju.ubuntu.com
<mdeslaur> hazmat: yes, thanks
<niemeyer> rog: Great stuff indeed in update-server-interface man, thanks!
<niemeyer> rog: I've sent a few comments/suggestions, but pretty superficial stuff
<rog> niemeyer: good, i'm, erm, relieved you like it :-)
<rog> don't you hate it when you know you've made some changes somewhere, but you can't find them anywhere? makes me wonder what else i'm also missing :-)
<fwereade> niemeyer: how would you like really-isolate-formula-revisions? stacked branch on isolate-formula-revisions, both set to needs review?
<hazmat> jamespage, works for me ootb with a clean lxc cache
<hazmat> fwiw
<hazmat> jcastro, ping
<rog> niemeyer: ah, i've just realised that update-server-interface is several revisions ago.
<niemeyer> rog: Well, please do the suggested changes in the version that is up for review, and let's merge it as is
<niemeyer> rog: Then you can push a separate branch with the new content
<rog> niemeyer: i thought you were reviewing factor-out-service ...
<niemeyer> rog: Nope.. reviewing this now
<niemeyer> rog: Please do the changes in the specific revision that was actually reviewed so we can merge it
<rog> niemeyer: will do.
<niemeyer> fwereade: Hmmm
<niemeyer> fwereade: Not sure I understand your question
<niemeyer> fwereade: You have a review on the previous branch.. what is happening with it?
<fwereade> niemeyer: the trivials are fixed in that branch; the big one is fixed in the stacked branch
<fwereade> niemeyer: the actual changes are trivial but widespread
<fwereade> niemeyer: I'm just as happy to merge back into the original branch, or to add a new MP
<fwereade> niemeyer: whatever's easiest for you to review
<niemeyer> fwereade: Please just push it all on that single branch.. I'll have to review it jointly either way since the prior one has blockers without the follow up
<fwereade> niemeyer: ok, cool
<niemeyer> rog: Just added a [10] comment now that I've figured a detail in the follow up branch
<rog> niemeyer: one thing that concerns me about doing kill(pid, 0) is that it's system-specific. but i guess we're tied to linux anyway.
<niemeyer> rog: Yeah.. I'm not very concerned about WIndows to be honest :)
<niemeyer> rog: The way to fix that would be to add something in the Process interface itself
<niemeyer> rog: IsRunning or whatever
<SpamapS> rog: server side linux is a good bet. clients may be OS X or something else.
<niemeyer> rog: But that's not something I worry about myself.. I'll let the Windows folks worry about it
<rog> this is client-side too, right? i'm also slightly concerned that some users may get EPERM even though the process is actually running.
<rog> i'm not too au fait with modern unixy capability stuff
<niemeyer> rog: If they get EPERM, that's great.. it's not their process and they shouldn't be fiddling with it
 * rog nods.
<jamespage> hazmat: I have something to try - I have plymouth in debug mode on my laptop for another issue - might be causing problems
<rog> niemeyer: hmm, there's a problem with checking if server process is actually running: what do you do if the pid.txt file exists, but there's no running server? you'll have to remove or rewrite the pid.txt file, but then you no longer get the nice race-free create semantics that you've got currently, so two concurrent callers could start two servers at once.
<fwereade> niemeyer: btw, https://code.launchpad.net/~fwereade/juju/isolate-formula-revisions/+merge/78164 is ready for another look
<jamespage> hazmat: that fixed it - for some reason having plymouth:debug as a kernel boot options breaks the in-instance plymouth
<niemeyer> fwereade: Ok, let me see how's lunch looking like here
<niemeyer> fwereade: I'll review it either right now or right after lunch
<niemeyer> rog: I don't get the problem.. if the pid file exists and there's no server running, there's simply no server running?
<rog> niemeyer: yes, but there's a problem if two processes are both trying to start a server at the same time in the same directory
<rog> niemeyer: currently that situation is dealt with fine
<niemeyer> rog: One of them will fail because the port number won't be available
<rog> niemeyer: but the wrong pid might end up in the pid file
<hazmat> jamespage, awesome!
 * niemeyer thinks
<rog> niemeyer: my current thought is that we should still return an error, but that the error should reflect our knowledge of whether the server is or isn't currently running
<hazmat> kill(pid,0) should work on osx i would think
<rog> hazmat: yeah, it should.
<rog> hazmat: i was more thinking of windows.
<hazmat> ah, i don't think anyone cares about that in practice for juju ;-)
<niemeyer> rog: Please ignore the concurrency issues for now and just proceed as we discussed
<rog> e.g. "server is currently running" vs "server was terminated abnormally. Remove /.../pid.txt to force a start"
<niemeyer> rog: We can use actual file locking to sort the problem out, but that's really not for this branch
<rog> niemeyer: oh yeah, another thought: does anyone use shared filesystems these days?
<rog> niemeyer: 'cos that'll break it too.
<niemeyer> rog: I don't know, but that's not a problem to worry about now either
<hazmat> niemeyer, latest tip should work with the go branches in review?
<niemeyer> hazmat: It should
 * hazmat compiles tip and dives in for a review
<niemeyer> fwereade: Ok, I'll have lunch before finishing your review.. will be back ASAP and dive into it directly
<fwereade> niemeyer: cool, the auto-upgrade itself is simple, and works AFAICT, but testing it is a bit tedious
<hazmat> fwereade, should be much faster with local provider ;-)
<fwereade> hazmat: it's just *writing* the tests that's bugging me :)
<hazmat> fwereade, ah
<jimbaker> hazmat, i tried the latest (trunk r388), still seeing the same networking issues
<jimbaker> i'm going to just reboot this box and see if that helps
<hazmat> jimbaker, yeah.. bcsaller is as well
<hazmat> jimbaker, rebooting won't do anything for this
<bcsaller> hazmat: last I heard that was about virbr0 errors for jimbaker, thats not what I'm getting
<jimbaker> hazmat, ok, it was the ultimate solution for the first time, but definitely welcome any ideas
<hazmat> i'm clearing out my apt-cacher-ng cache, and trying again, already cleared out the lxc cache and it worked fine.. so afaics i shouldn't have anything cached locally
<hazmat> jimbaker, you're not getting virbr0 errors anymore are you?
<jimbaker> hazmat, no
<hazmat> bcsaller, so i cleared out lxc cache, apt-cacher-ng cache, and it works for me..
<jimbaker> i'm just getting network connectivity issues between the lxc instances (mysql/0, wordpress/0) and zk
<SpamapS> hazmat: we may want to add a juju-local-dev metapackage with all those extra deps
<hazmat> SpamapS, perhaps.. bootstrap will fail and tell you whats missing though
<bcsaller> hazmat: yeah, that's expected, right, it's the stale cached data that causes the issue
<SpamapS> hazmat: think upgrades
<SpamapS> hazmat: if we add more of those, or don't need more of those, we can help users add/remove them automatically :)
<hazmat> SpamapS, yeah.. but that puts the extra dep everywhere
<hazmat> SpamapS, including machines deploying units
<hazmat> SpamapS, ie require java ;-)
<hazmat> i'd rather avoid that
<SpamapS> hazmat: which is why we put it as juju-local-dev, which is only a Suggests of juju
<hazmat> SpamapS, ah, true that, yeah.. that sounds fine to me.. i think niemeyer objected last time around on that, but if you're up for doing it.. sounds good
<SpamapS> hazmat: its a distro choice really. :)
<hazmat> SpamapS, well then i leave it to some anonymous distro person ;-)
<jamespage> slightly frustratingly my local lxc instances are not picking up any dns resolvers...
<jamespage> weird
<bcsaller> hazmat: ^^
<hazmat> bcsaller, why would dnsmasq need to be specified explicitly in the container if its dhcp based
<bcsaller> hazmat: maybe if the resolvconf package isn't installed, I don't know the details, it might be a matter of competing systems there
<hazmat> bcsaller, so what do you have in resolv.conf of the master template?
<koolhead17> SpamapS: hey
<jimbaker> hazmat, finally the lxc containers came up after cleaning the caches (i assume /var/cache/lxc, /var/cache/apt-cacher-ng), but i'm still getting Network is unreachable in the unit logs: http://pastebin.ubuntu.com/702840/
<niemeyer> fwereade: Diving in again
<hazmat> jimbaker, do you have zookeeper running in the host?
<hazmat> sounds like it on a different port then what the unit agents expect
<jimbaker> hazmat, it was started by the local provider
<jimbaker> with this setting: clientPort=53699
<hazmat> interesting
<hazmat> jimbaker, what ip address does the lxc instance have?
<hazmat> lxc-ls .. grab container name.. and ping dnsmasq... host container-name 192.168.122.1
<hazmat> jamespage, what do you end up with in resolv.conf of the container? is it empty?
<jamespage> hazmat, it had the resolvconf header but no configuration
<hazmat> bcsaller, yeah.. perhaps we need both
<hazmat> i find it extremely odd that we don't pick up the dns server from dhcp
<jimbaker> hazmat, not clear what you mean here...
<hazmat> jimbaker, the container has an ip address, the easiest way to discover it (if the unit agents aren't connecting) is to query the dnsmasq with the container name
<jimbaker> i have consulted the man page/other howtos on dnsmasq. in any event, i can see the containers with lxc-ls
<jimbaker> hazmat, ok, i was getting confused about things ;) just use standard tools
<jimbaker> hazmat, do we actually setup dnsmasq with the local provider?
<hazmat> jimbaker, no its from libvirt
<hazmat> i need to grab some lunch
<hazmat> bcsaller, i guess the way to verify re dns is to add it to both run/resolv.conf and resolv.conf.d/base
<hazmat> bbiab
<jimbaker> hazmat, i do have dnsmasq running. in any event, we can debug more when you get back
<niemeyer> hazmat: Please ping me when you're bakc
<niemeyer> SpamapS: ping
<niemeyer> fwereade: ping
<fwereade> niemeyer: pong
<niemeyer> fwereade: Looks fantastic
<niemeyer> fwereade: I'm pretty surprised as well
<fwereade> niemeyer: yeah, totally unexpected
<niemeyer> fwereade: We should try to get a +1 from hazmat as well given this is such a core concept
<niemeyer> fwereade: I have just a few trivials for you meanwhile
<fwereade> niemeyer: sounds good; sounds good
<niemeyer> fwereade: Sent
<fwereade> niemeyer: cheers
<niemeyer> fwereade: Is there anything else in the pipe I can help you with while you take a look at these?
<fwereade> niemeyer: try previewing lp:~fwereade/juju/always-upgrade-local
<fwereade> niemeyer: just pushed it
<niemeyer> fwereade: I'm on it
<fwereade> niemeyer: pretty sure it works, was just setting up a local provider to try to verify a bit faster
<niemeyer> fwereade: Awesome
<fwereade> niemeyer: re: caching in a private attribute, the idea was that CharmDirectory instances should still report the correct revision even if they're up-revisioned mid-flow
<fwereade> niemeyer: on balance, that might not actually be necessary
<fwereade> niemeyer: because that *should* only happen via set_revision (or at __init__ time)
<niemeyer> fwereade: Yeah, I think that's the exception, so I'd rather delay the behavior until necessary
<niemeyer> fwereade: Opening the file all the time feels bad given we don't depend on this behavior today
<fwereade> niemeyer: in which case... yep, I'll just call it once, will simplify everything I think
<fwereade> niemeyer: cheers
<fwereade> niemeyer: others look good, ta
<niemeyer> fwereade: Thanks!
<niemeyer> fwereade: http://paste.ubuntu.com/702861/
<fwereade> niemeyer: [1] yep, clearer
<fwereade> niemeyer: [2] ambivalent; I'm not sure local bundles are common enough to be a major worry
<fwereade> niemeyer: it feels like the same class of issue as "what if the local repo is not writable"
<niemeyer> fwereade: They're not.. but if we don't take care of this case, it'll explode and render upgrade-charm non-useful with bundles
<fwereade> niemeyer: hm, true
<niemeyer> fwereade: Different case, I think
<fwereade> niemeyer: fair enough
<fwereade> niemeyer: offhand, is there a non-isinstance way to tell the difference?
<niemeyer> fwereade: Nope, I think isinstance is the way to go
<kim0> hey folks .. is this plymouth thing blocking local deployment, or can I play already
<kim0> lxc deployment I mean
 * hazmat is back
<hazmat> niemeyer, ping
<hazmat> fwereade, so the charms have revision directly on them now.. the callback into the metadata stuff was weird..
 * hazmat checks out the branch latest
<hazmat> kim0, try it out
<kim0> cool will
<hazmat> kim0, it was an issue with jamespage's setup, he had plymouth in debug mode
<fwereade> hazmat: yeah, the idea was to avoid changing the charm state format
<fwereade> hazmat: turns out, revision was never actually used from charm state
<fwereade> hazmat: (well, it was, but it came from charm_url, and we only touched metadata.revision to assert they were the same)
<hazmat> fwereade, cool
<hazmat> fwereade, so the branch looks good to me, the only thing is the file contents should be stripped before attempting to int()
<hazmat> fwereade, if people hand edit it, editors are wont to do wonky things
<fwereade> hazmat: >>> int("  \t27\n  \r\n  ")
<fwereade> 27
<fwereade> hazmat: but still, no actual disagreement
<hazmat> interesting
<fwereade> hazmat: explicit has occasionally been postulated to be superior to implicit, after all ;)
<hazmat> i definitely remember that not working at some point
<hazmat> oh well, relic of the past
<fwereade> so, hazmat, can I count that as an approve?
<hazmat> fwereade, yeah.. i'm just wondering about the extra zipfile construction in get revision on the bundle, i guess its fine since its cached, but we have the zip file already in init
<hazmat> fwereade, +1
<fwereade> hazmat: and we always call get_revision in __init__, too
<fwereade> hazmat: I'll just fix that and merge then
<fwereade> hazmat: cheers
<hazmat> fwereade, cool
<niemeyer> Woohay
<rog> see y'all tomorrow
<hazmat> rog have a good one
<niemeyer> rog: Cheers!
<niemeyer> fwereade: I'm writing an email to the list warning about this change
<fwereade> niemeyer: tyvm
<jimbaker> hazmat, so i just wanted to continue with you on the local provider debugging
<jimbaker> hazmat, whenever it's a good time, just ping me
<niemeyer> fwereade: Hey!
<niemeyer> fwereade: One thought just occurred to me
<fwereade> niemeyer: go on
<niemeyer> fwereade: While writing the email
<niemeyer> fwereade: Rather than erroring out with a missing revision file, we should create one with revision 1 automatically
<niemeyer> fwereade: In a charm directory
<fwereade> niemeyer: good plan
<fwereade> niemeyer: 0 maybe?
<niemeyer> fwereade: Sounds good.. I'm a fan of zero indexing as well ;-)
<fwereade> niemeyer: or is that just too look-at-me-I'm-a-*programmer*?
<niemeyer> fwereade: LOL
<niemeyer> fwereade: Maybe.. but memory initialized to zero by default is such a great idea!  I'm sure we can't go wrong in this case either. ;-D
<fwereade> niemeyer: anyway, sounds good, I'll pastbin you a trivial shortly, just fixing up the always-upgrade-local branch
<hazmat> i thought it was backwards compatible already?
<hazmat> only on invalid content of the metadata file would it error
<hazmat> jimbaker, ping
<fwereade> niemeyer: http://paste.ubuntu.com/702915/
<niemeyer> hazmat: It's backwards compatible, but it's a significant change in structure that deserves a note
<fwereade> hazmat: it is backwards compatible; this is a distinct case, where you create a charm and never even consider the possibility of needing a revision
<hazmat> sounds good
<niemeyer> hazmat: Otherwise people will get a crazy warning that makes no sense after an unknown point in time
<jimbaker> hazmat, hi
<niemeyer> fwereade: Hmmm.. that's not quite it I think
<niemeyer> fwereade: There's an important difference between a revision file not being present and it containing unexpected content
<hazmat> yeah
<fwereade> niemeyer: ...very good point
 * fwereade looks shifty
<hazmat> its more of a preamble to the method if not os.path.exists(revision_path)
<niemeyer> fwereade: Don't worry.. you can look shifty for the next few months without any damage to your image given how late it is there and how much you've been pushing :)
<hazmat> jimbaker, so to continue the epic, our heroes were last trying to figure out why they couldn't get to the zoo
<niemeyer> LOL
<hazmat> fwereade, and it also has to play nice with the backwards compatible  stuff, ie. what might already be in the metadata
 * niemeyer puts that in the quotes page
<hazmat> fwereade, bringing shifty back into style ;-)
<fwereade> :D
<hazmat> jimbaker, were you able to get the container address?
<jimbaker> hazmat, not certain what the procedure for that should be
<hazmat> jimbaker, query the dnsmasq server
<hazmat> jimbaker, is it running?
<hazmat> bcsaller, were you able to try out adding the dnsmasq server to base in addition to run/resolv.conf?
<jimbaker> hazmat, dnsmasq is running
<jimbaker> hazmat, with this range reported from ps, --dhcp-range 192.168.122.2,192.168.122.254
<jimbaker> so that looks as expected
<hazmat> jimbaker, lxc-ls -> list of containers.. pick container name of unit... query dnsmasq on its listen address typically 192.168.122.1
<jimbaker> hazmat, hmm, i'm missing something in my knowledge of how something like this is setup; would the query be like this: dig -b 192.168.122.1 jbaker-desktop-wordpress-0
<jimbaker> (yes, its listen address is 192.168.122.1, according to ps)
<hazmat> dig @192.168.122.1 jbaker-desktop-wordpress-0
<hazmat> or less verbosely
<hazmat> host jbaker-desktop-wordpress-0 192.168.122.1
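Putting the whole lookup together (listen address and container name are the ones from this exchange):

```bash
lxc-ls                                           # find the container's name
host jbaker-desktop-wordpress-0 192.168.122.1    # terse lookup against dnsmasq
dig @192.168.122.1 jbaker-desktop-wordpress-0    # same query, verbose output
```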
<jimbaker> hazmat, now makes perfect sense, but dnsmasq doesn't have the address: http://pastebin.ubuntu.com/702926/
<fwereade> hazmat, niemeyer: http://paste.ubuntu.com/702927/
<fwereade> slightly less trivial now but just barely qualifies IMO
<hazmat> fwereade, that seems sensible.. basically only in the absence of revision set it
<hazmat> i was wondering about transparently doing migrations for folks, but that's probably icky, they'll see the warning
<hazmat> fwereade, +1
<niemeyer> fwereade: Put this within the try:  if result >= 0:\n return result
<niemeyer> fwereade: as a follow up you'll notice you can unify the branches
<hazmat> jimbaker, odd indeed. what's the output of > brctrl show and virsh net-list --all
<hazmat> jimbaker, its like you have a different bridge setup
<niemeyer> fwereade: It all looks good though, +1
<hazmat> jimbaker, for any active on virsh net-list.. can you paste virsh net-dumpxml network-name
<jimbaker> http://pastebin.ubuntu.com/702932/
<jimbaker> hazmat, i don't have brctrl installed. is that a problem?
<fwereade> niemeyer: re try: in get_revision?
<niemeyer> fwereade: yeah
<jimbaker> in terms of not having some useful bridging utils installed?
<fwereade> niemeyer: that won't error on revisions < 0
<hazmat> jimbaker, no its just a bridge management tool..
<hazmat> although i thought libvirt used it
<jimbaker> hazmat, just checking ;)
<niemeyer> fwereade: It will if you tweak the branches below
<jimbaker> hazmat, definitely could install bridge-utils shortly
<niemeyer> fwereade: You'll need a single raise and won't have to define the spurious message upfront
<fwereade> niemeyer: ...oh, *branches*, not branches
<hazmat> jimbaker, do you have libvirt-bin installed?
<niemeyer> fwereade: LOL
 * fwereade looks shifty again
<niemeyer> fwereade: Yeah, sorry :-)
<hazmat> jimbaker, bridge-utils is a dep of it
<jimbaker> hazmat, i do, but apparently an upgrade is available
<jimbaker> hazmat, enjoying the oneiric edge for sure
<hazmat> jimbaker, right.. but you should have brctrl.. its a dependency for libvirt-bin
<jimbaker> hazmat, certainly strange
<jimbaker> hazmat, the upgrade of libvirt-bin did not install bridge-utils/brctrl, so don't know why the dependency is not being honored (or different here)
<jimbaker> hazmat, sorry, it was just suggesting bridge-utils be installed for missing brctrl, but i do have it
<jimbaker> (the bridge-utils package)
<hazmat> jimbaker, okay... so output of brctrl show would be helpful... also do you have other virtualization packages (vmware, virtualbox, etc) on the machine?
<jimbaker> hazmat, no other virtualization installed on this box
<hazmat> jimbaker, cool
<jimbaker> hazmat, still looking for a package with brctrl
<hazmat> jimbaker, so the brctrl output and the virsh net-dumpxml default output are next
<hazmat> jimbaker, sorry its brctl
<jimbaker> hazmat, virsh net-dumpxml default: http://pastebin.ubuntu.com/702935/
<jimbaker> hazmat, brctl show: http://pastebin.ubuntu.com/702936/
<hazmat> jimbaker, so all that looks good.. time to check the container config
<jimbaker> hazmat, sounds good
<fwereade> niemeyer: and at last: https://code.launchpad.net/~fwereade/juju/always-upgrade-local/+merge/78306
<hazmat> jimbaker, can you pastebin  sudo cat /var/lib/lxc/jbaker-desktop-wordpress-0/config
<fwereade> niemeyer: already addressed the points you brought up before
<niemeyer> fwereade: Cool, looking!
<jimbaker> hazmat, here it is: http://pastebin.ubuntu.com/702937/
<niemeyer> Oops
<niemeyer> fwereade: http://wtf.labix.org/
<niemeyer> fwereade: Looks like it never came up.. not sure if it's a hiccup or if it's actually broken for unknown reasons
<niemeyer> I'll kick it and run it again
<fwereade> niemeyer: blech, I'll verify here
<niemeyer> fwereade: +1 on the change
<niemeyer> fwereade: 390 is already on the pipeline running
<niemeyer> fwereade: Let's see..
<fwereade> niemeyer, something is screwed up, the cloud-init scripts fall over
<fwereade> niemeyer, but I don't *think* this was me...
<fwereade> 2011-10-05 19:24:02,144 - cc_apt_update_upgrade.py[WARNING]: Failed to install packages: ['bzr', 'byobu', 'tmux', 'python-setuptools', 'python-twisted', 'python-argparse', 'python-txaws', 'python-zookeeper', 'default-jre-headless', 'zookeeper', 'zookeeperd']
<fwereade> niemeyer: I'm seeing a lot of 403s trying to talk to http://us-east-1.ec2.archive.ubuntu.com
<hazmat>  jimbaker that's just wacky
<jimbaker> hazmat, how so? ;)
<hazmat> jimbaker, everything looks good
<jimbaker> hazmat, ahh. maybe i should try the reboot. just this once
<fwereade> niemeyer: does that sound familiar in any way?
<hazmat> reboot is for windows and kernel upgrades (/me shake fist at oracle for buying ksplice)
<jimbaker> hazmat, i know, i know
<niemeyer> fwereade: I've heard something about a name change yeah
<niemeyer> fwereade: What's strange is that the wtf broke in that specific revision
<fwereade> niemeyer: theories regarding what I did received gratefully... :/
<jimbaker> it's an act of desperation. but is everyone else who's tried the local provider seen it work?
<hazmat> jimbaker, not afaik, i think jamespage and bcsaller had it fail on a different network issue (dns wasn't working)
<niemeyer> fwereade: The previous revision looks suspicious
<niemeyer> fwereade: 388
<niemeyer> fwereade: But that'd mean something in wtf got the revision wrong somehow
<niemeyer> fwereade: Which isn't impossible
<bcsaller> hazmat: I'm testing a small diff to juju-create that I think fixed everything here
<hazmat> bcsaller, writing both base and run/resolv.conf ?
<fwereade> niemeyer: hm, that went in a good while ago, and all it hits is providers.local... I thought unused providers weren't even imported?
<bcsaller> hazmat: yeah, http://pastebin.ubuntu.com/702946/ w/ the apt-get update after the cache is in place
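The pastebin itself isn't reproduced here, but the rough shape of the fix being discussed seems to be: pin the libvirt dnsmasq as a resolver in both resolvconf locations of the template, then refresh apt inside it. A heavily hedged sketch:

```bash
ROOTFS=/var/cache/lxc/oneiric/rootfs-amd64       # assumption, as before
echo "nameserver 192.168.122.1" | sudo tee -a "$ROOTFS/etc/resolvconf/resolv.conf.d/base"
echo "nameserver 192.168.122.1" | sudo tee "$ROOTFS/etc/resolvconf/run/resolv.conf"
sudo chroot "$ROOTFS" apt-get update
```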
<niemeyer> fwereade: True..
<niemeyer> fwereade: It could be something in Oneiric itself too
<niemeyer> fwereade: I'll rollback wtf to 388, and make it test everything again from there
<fwereade> niemeyer: cheers
<hazmat> bcsaller, that's slightly dangerous.. if the environment is long running, it will hit the same problem during a beta cycle.. but ok
<niemeyer> fwereade: If 388 changes to broken, it might not be your revision
 * fwereade feels slightly hopeful
<niemeyer> fwereade: Your merge log messages are not following the convention, btw
<hazmat> bcsaller, its bewildering to me though, its like dhcp isn't being respected, we shouldn't have to manually set up dns for a dhcp setup
<niemeyer> fwereade: Do a "bzr log" on trunk to see the difference
<bcsaller> hazmat: I haven't verified that resolvconf is messing that up, /etc/dhcp/dhclient-enter-hooks.d/resolvconf makes me think it should be working as well
<SpamapS> niemeyer: pong!
<fwereade> niemeyer: damn, I'm sorry, I thought I'd retrained myself right
<bcsaller> hazmat: another issue, the way wp is coming up now its only addressable by IP, I'll look at adding an alias with the assigned name and see if that fixes it
<hazmat> bcsaller, that's not really a problem, it's only resolvable by the browser by ip
<hazmat> bcsaller, that's irrelevant imo
<niemeyer> fwereade: No worries, it's just important because it's hard to tell what was merged from e.g. http://bazaar.launchpad.net/~juju/juju/trunk/revision/389
<bcsaller> not a deal breaker, no
<niemeyer> SpamapS: Hey man
<niemeyer> SpamapS: Was going to ask you for some reviewing, but we already pushed it forward
<niemeyer> fwereade: 388 is churning
<fwereade> niemeyer: the "mfrom: (348.8.15 isolate-formula-revisions)" on that page gives us as much information as does the summary line, surely?
<fwereade> niemeyer: but I'll try to force myself to keep to the standard anyway
<fwereade> niemeyer: that's sort of good news and sort of bad news
<_mup_> juju/ftests r12 committed by gustavo@niemeyer.net
<_mup_> - Fixed butler so it doesn't jump revisions.
<_mup_> - Compute the waterfall after every revision.
<niemeyer> fwereade: Indeed
<niemeyer> fwereade: Hadn't noticed it to be honest
<fwereade> hazmat: do you have a moment for a review of https://code.launchpad.net/~fwereade/juju/always-upgrade-local/+merge/78306?
<hazmat> fwereade, sure
<SpamapS> niemeyer: ahh ok. :)
<jimbaker> hazmat, so one useful aspect of doing the reboot is looking at this scenario: what happens to the lxc containers? bootstrap reports it's still bootstrapped, but status fails
<hazmat> jimbaker, they're down
<hazmat> they don't autostart
<hazmat> phone bbiam
<jimbaker> hazmat, exactly. is there a way to restart them?
<jimbaker> hazmat, later
<hazmat> jimbaker, lxc-start -n name-of-container
<jimbaker> hazmat, sounds good. so we can support that later in some sort of automatic way i guess
<hazmat> jimbaker, i believe there are mechanisms to autostart them as well
<hazmat> jimbaker, but the zk and machine agent are down as well
<jimbaker> hazmat, in any event, the reboot did nothing (as expected)
<hazmat> jimbaker, so on reboot you'll need to destroy and bootstrap
<hazmat> the env
<hazmat> for now
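So recovery after a host reboot is, for now, a full cycle; individual containers can also be revived with the command hazmat mentions above. A sketch:

```bash
juju destroy-environment && juju bootstrap       # zk + machine agent come back
lxc-start -n jbaker-desktop-wordpress-0          # or restart one container by name
```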
<jimbaker> hazmat, correct, that's why juju status failed so badly
<jimbaker> hazmat, which is fine for now
<hazmat> jimbaker, absolutely, we're not touching the host this release with local provider
<hazmat> outside of the minimum needed to work (lxc containers)
<jimbaker> hazmat, unless you have any more ideas, i'm going to put off getting local provider running. hopefully jamespage, bcsaller, and others have better luck
<hazmat> jimbaker, well, we haven't done anything since it's been rebooted
<bcsaller> jimbaker: with the small patch I posted a short while ago its working fine for me again
<jimbaker> bcsaller, what was that?
<hazmat> jimbaker, destroy-environment and bootstrap & deploy
<hazmat> jimbaker, the issue you're seeing is different from the patch, which fixes some dns resolver issues
<jimbaker> hazmat, yes, i did that, and units were started, but the network issue is still there
<bcsaller> http://pastebin.ubuntu.com/702946/
<niemeyer> Uh oh
<hazmat> jimbaker, so you still have units unable to connect to zk?
<jimbaker> hazmat, correct, exact same issue in the unit.log (s)
<hazmat> jimbaker, i'm not entirely out of ideas yet ;-)
<niemeyer> hazmat, SpamapS: AMBER CODE
<jimbaker> hazmat, ok, then let's try them out
<hazmat> niemeyer, ready and waiting
<niemeyer> EC2 deployment is broken
<niemeyer> and it doesn't look like something we did
<niemeyer> Something probably changed in Oneiric in the last few hours
<hazmat> jimbaker, confirming (netstat -al) that zk is listening on localhost on a port known to the agents
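A concrete variant of that check; 53699 was the clientPort the local provider reported earlier, and the flags are a narrowing of hazmat's netstat -al:

```bash
netstat -lnt | grep 53699    # zookeeper should be LISTENing here
```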
<niemeyer> 388 was passing moments ago, and is now broken
<hazmat> jimbaker, i'm going to swtich off to the ec2 issue
<hazmat> niemeyer, debugging
<niemeyer> fwereade: Looks unrelated to your change indeed
<hazmat> niemeyer, could be a bad image
<niemeyer> fwereade: Do you have more details that could help us pinpoint the bug?
<niemeyer> fwereade: (from your test run)
<jimbaker> hazmat, definitely is priority
<hazmat> jimbaker, but if you could confirm that info it would be helpful
<hazmat> jimbaker, did you mention earlier i could get a login to that machine?
<jimbaker> hazmat, yes, i will create one for you
<fwereade> niemeyer: sorry, not really: the only other thing I verified was that even bzr wasn't installed
<niemeyer> fwereade, hazmat: Maybe one of the packages is missing (or has been renamed)
<hazmat> so if the ec2 repos are changing names
<hazmat> niemeyer, i'm firing up an ec2 environment more info in a few
<niemeyer> hazmat: From your log:   [trivial] require libzookeeper-java not zookeeperd
<niemeyer> hazmat: Shouldn't that be done to the EC2 provider as well?
<hazmat> niemeyer, no.. they don't install this stuff.. that's for running it on the host
<niemeyer> Ok
 * niemeyer checks the list of packages
<niemeyer> Looks ok
<niemeyer> smoser: Hey!
<niemeyer> smoser: Were the AMIs updated today by any chance?
<smoser> niemeyer, utlemming updated them. https://lists.ubuntu.com/archives/ubuntu-cloud/2011-October/thread.html
<niemeyer> smoser: What should I be looking for there?
<hazmat> niemeyer, so i get the same traceback that fwereade got
<niemeyer> hazmat: What's the traceback?
<smoser> Refreshed UEC Images of 10.04 LTS (Lucid Lynx) [20110930]
<hazmat> smoser, niemeyer http://paste.ubuntu.com/702970/
<smoser> Refreshed Cloud Images of 10.10 (Maverick Meerkat) [20111001]
<smoser> Refreshed UEC Images of 11.04 LTS (Natty Narwhal) [20111003]
<niemeyer> smoser: We're using the Oneiric images
<smoser> the dailies?
<hazmat> dailies
<smoser> they're updated every day
<niemeyer> smoser: I'll ping utl* about it
<niemeyer> smoser: Oh, ok
<niemeyer> Hmm
<niemeyer> What the heck.. what's a Pangolin?
<smoser> hazmat, you have the cloud-init-output.log now
<smoser> pastebin that
<hazmat> smoser, sure
<smoser> /var/log/cloud-init-output.log (i think that is the path we used)
<smoser> apt just failed, either archive issue or package failure
<hazmat> smoser, what's the cli for pastebin called?
<hazmat> got it
<hazmat> smoser, http://paste.ubuntu.com/702972/
<hazmat> smoser, and http://paste.ubuntu.com/702973/
<hazmat> woah.. that second one is fubar
<hazmat> forbidden on the repo?
<smoser> yeah
<hazmat> wtf
<smoser> i'm asking in #is
<niemeyer> Ugh.. that explains a ton
<niemeyer> I'll stop the waterfall for the moment
<niemeyer> fwereade: Thanks for your help on this issue, and sorry for the false alarm
<niemeyer> fwereade: Is that last branch in?
<_mup_> juju/ftests r13 committed by gustavo@niemeyer.net
<_mup_> Fixed authorized_keys handling in ec2-setup.sh when moving ~/.ssh away
<_mup_> so that the user can still log in.
<niemeyer> hazmat, smoser: Thanks for debugging this as well folks
<niemeyer> I'll step down for a cup of coffee.. biab
<hazmat> niemeyer, np
<hazmat> that sounds like a good idea
<fwereade> niemeyer, sorry, got pulled away for a while
<fwereade> hazmat: I presume the weirdness has prevented you from taking a look at that branch?
<hazmat> fwereade, it has, also trying to debug local provider on jim's machine
<hazmat> fwereade, many beautiful butterflies.. i'll stop and have a look now
<fwereade> hazmat, only if you don't know who *isn't* horribly busy atm, someone I could hit up instead?
<niemeyer> fwereade: Man, I feel bad that we're holding you back still.. that must have been a long day for you
<fwereade> niemeyer, don't worry about it
<statik> heya hazmat, whats the path to the daily ppa that I should use to try out the local provider awesomeness you posted about?
<hazmat> statik, ppa:juju/pkgs
<statik> ah ok, I didn't realize that was daily
<statik> thx
<hazmat> bcsaller, +1 on that patch btw
<hazmat> jimbaker, i'm going to pause on debugging this and switch back to a review for a bit
<fwereade> hm, niemeyer, just noticed a minor lurking bug in the deploy/upgrade changes: the error when --repository is needed but not included is entirely unhelpful
<fwereade> niemeyer: worth a minor to raise FileNotFound from resolve(), or possibly LocalCharmRepository.__init__()?
<niemeyer> fwereade: What does it look like?
<fwereade> niemeyer: NoneType has no attribute .endswith IIRC
<niemeyer> fwereade: Ouch :-)
<fwereade> niemeyer: hm, it's not quite FileNotFound though
<niemeyer> fwereade: It's CharmNotFound I guess
<fwereade> niemeyer: not even quite that, if anything it's RepositoryNotFound ;)
<niemeyer> fwereade: Hmm
<fwereade> niemeyer: or, really, RepositoryNotSpecified
<niemeyer> fwereade: Yeah, but only if the charm is local:, and if it's not yet in the env
<hazmat> fwereade, what's the leapfrog do?
<hazmat> rev 0 to rev 2?
<niemeyer> fwereade: and the problem is similar if the repository is provided, but the specific charm isn't there
<fwereade> hazmat: bumps to deployed+1, which is a jump of >1 from the one in the repo
<hazmat> fwereade, ah cool
<fwereade> hazmat: if yu can think of a beeter name I'm all ears :)
 * fwereade peers suspiciously at his keyboard
<fwereade> it's the tools, I tell you!
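(For reference, a minimal sketch of the "leapfrog" bump described above, assuming from the exchange that upgrade stamps the charm with deployed+1 whenever the repo's revision is not already ahead; names are hypothetical, not juju's actual code.)

    # Hypothetical sketch: on upgrade, jump the charm revision to
    # deployed + 1 when the repo's revision is not already ahead.
    def leapfrog_revision(repo_revision, deployed_revision):
        if deployed_revision >= repo_revision:
            return deployed_revision + 1
        return repo_revision

    # e.g. repo at rev 0, deployed at rev 2 -> next revision is 3
    assert leapfrog_revision(0, 2) == 3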
<fwereade> niemeyer: I feel like the arg parser is really the right place to catch it
<fwereade> niemeyer: at the moment anyway
<niemeyer> fwereade: I can't see how the arg parser is related to this
<niemeyer> fwereade: This is highly contextual..
<fwereade> niemeyer: that might change once we actually can specify a repo in the environment
<niemeyer> fwereade: "if not repository_path and charm is local and charm not in environment: raise foo"
<fwereade> niemeyer: maybe, but it's connected: the reason the bug exists is because the "repository" arg is no longer required
<niemeyer> fwereade: Sure.. that's quite expected given that we just removed it
<niemeyer> fwereade: We just need to consider the case where it's not there and raise an error
<niemeyer> fwereade: We can do that tomorrow, though, in a bug fix
<niemeyer> fwereade: with you relaxed ;-)
<hazmat> fwereade, review in one minor, but very awesome
<fwereade> hazmat: cool, thanks
<fwereade> niemeyer: sounds like a plan
<niemeyer> fwereade: Please just file a bug about this so you can sort it out later
<fwereade> niemeyer: will do, I meant "tomorrow" sounds like a plan :)
<niemeyer> fwereade: Awesome, thank you!
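(A minimal sketch of the check niemeyer spells out above; the class and argument names are hypothetical, and the point is only to raise a descriptive error instead of letting None reach path handling.)

    # Hypothetical names throughout; raise early instead of surfacing
    # "NoneType has no attribute endswith" from os.path code.
    class RepositoryNotSpecified(Exception):
        pass

    def check_repository(repository_path, charm_url, env_has_charm):
        if (repository_path is None
                and charm_url.startswith("local:")
                and not env_has_charm):
            raise RepositoryNotSpecified(
                "deploying %s requires --repository" % charm_url)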
<hazmat> SpamapS, m_3 .. fwereade has answered the common desire of all formula authors.. auto incrementing versions on upgrade
<hazmat> negronjl, ^
<m_3> ah nice
<SpamapS> \o/ !!!
<_mup_> Bug #868729 was filed: deploy and upgrade-charm give unhelpful errors when repository not specified <juju:New> < https://launchpad.net/bugs/868729 >
<fwereade> ...and that's it for me, nn all
<niemeyer> fwereade: Woohay!
<niemeyer> fwereade: Have a restful night! :)
<hazmat> niemeyer, bcsaller, jimbaker just a heads up, i'm moving bugs that aren't in progress over to the next milestone
<bcsaller> hazmat: k
<niemeyer> hazmat: Awesome.. if you find anything interesting on the way that could deserve some attention, please raise a flag
<hazmat> jimbaker, this one is a duplicate afaics https://bugs.launchpad.net/juju/+bug/846055
<_mup_> Bug #846055: Occasional error when shutting down a machine from security group removal <juju:New> < https://launchpad.net/bugs/846055 >
<hazmat> perhaps not
<jimbaker> hazmat, that's a txaws parsing issue, when we separate out the related bug 863510
<_mup_> Bug #863510: destory-environment errors and hangs forever <juju:Fix Released by jimbaker> < https://launchpad.net/bugs/863510 >
<hazmat> jimbaker, this one also seems closed afaics https://bugs.launchpad.net/juju/+bug/824222
<_mup_> Bug #824222: juju bootstrap should be more robust <juju:Triaged by jimbaker> < https://launchpad.net/bugs/824222 >
<hazmat> jimbaker, i only see tracebacks when in verbose mode
<jimbaker> hazmat, yes, that's correct
<hazmat> jimbaker, and this one is fixed? https://bugs.launchpad.net/juju/+bug/802678
<jimbaker> hazmat, so let's definitely close that one. i think we could consider expected errors vs not, but that can be in the next milestone
<_mup_> Bug #802678: Update watch_ports <juju:Confirmed for jimbaker> < https://launchpad.net/bugs/802678 >
<hazmat> bcsaller, i assume statusd isn't getting merged in the next day or so?
<bcsaller> hazmat: no
<jimbaker> hazmat, the test has gone away
<jimbaker> hazmat, let me see where it lives now, if at all
<jimbaker> hazmat, let's mark that as fix released; the current test is very close to what's in the attached patch, which would fail when repeated
<jimbaker> it doesn't now
<hazmat> well its invalid at this point
<jimbaker> sure, that works too
<jimbaker> hazmat, are we planning on doing jenkins for ec2 functional testing? i thought wtf covered it
<hazmat> jimbaker, fair enough.. feel free to mark it wont fix
<jimbaker> hazmat, ok
<hazmat> kim0, ping.. just curious about the orchestra docs
<jimbaker> hazmat, what do you think of bug 697093 - this purely seems to be an artifact of our testing setup
<_mup_> Bug #697093: Ensemble command should return nonzero status code for errors <cli> <juju:New> < https://launchpad.net/bugs/697093 >
<jimbaker> (at the very least i need to update the description/summary accordingly)
<hazmat> jimbaker, oh.. perhaps
<hazmat> jimbaker, yeah... ideally we could test for status codes, but if i'm reading that right, we're really just testing the mocks not the actual exit codes
 * hazmat calls it a night
<hazmat> cheers
<jimbaker> hazmat, yeah, it would be nice to get it right, but tricky for sure
<jimbaker> worthwhile for florence however
<jimbaker> hazmat, enjoy!
<hazmat> niemeyer, as far as interesting bugs we might want to still consider for eureka.. https://bugs.launchpad.net/juju/+bug/814987 and https://bugs.launchpad.net/juju/+bug/828326 the latter is a new feature though
<_mup_> Bug #814987: config-changed hook is retried on 'resolved' even when --retry is not passed <juju:New> < https://launchpad.net/bugs/814987 >
<_mup_> Bug #828326: need to be able to retrieve a service config or schema from the cli <juju:In Progress by hazmat> < https://launchpad.net/bugs/828326 >
<hazmat> re the first one.. actually the issue i'm interested in isn't really about config-hooks, but that install error retries are broken
<hazmat> for getting to a working state
<niemeyer> hazmat: Why are they broken?
<hazmat> niemeyer, because the automatic transition from install to start doesn't happen
<hazmat> afaicr
<hazmat> after the error recovery
<hazmat> the unit just stays in the installed state
<niemeyer> hazmat: Ugh
<niemeyer> hazmat: Ok
<niemeyer> hazmat: Sounds like something that would be nice to get done indeed, if we find some spare time
<niemeyer> I'll kick off the wtf
<niemeyer> Hopefully the repositories are back already
<niemeyer> Uh oh.. Steve Jobs is no more.. :(
#juju 2011-10-06
<jimbaker> indeed very sad news about steve jobs
<niemeyer> Cool: http://wtf.labix.org/
<hazmat> sadness
<hazmat> niemeyer, having some problems building go-tip
<hazmat> oh well
<niemeyer> hazmat: What's up
<niemeyer> ?
<hazmat> it error'd out. cleaned and retrying
<hazmat> niemeyer, http://pastebin.ubuntu.com/703116/
<niemeyer> Reading
<hazmat> looks like it failed the acceptance tests
 * hazmat tries to figure out scrollback buffers in tmux
<niemeyer> hazmat: It's missing the start of the traceback
<niemeyer> hazmat: Where the actual reason is mentioned
<niemeyer> hazmat: Hmm.. I'm not sure.. I've changed my keys
<niemeyer> hazmat: Mine is CTRL-X PGUP
<niemeyer> Where X is the tmux special key
<hazmat> hmm... maybe its confused by the pkg being installed
<hazmat> niemeyer, full traceback http://paste.ubuntu.com/703119/
<hazmat> hmm.. not full
<niemeyer> hazmat: Strange error that is
<niemeyer> hazmat: Looking at the code
<niemeyer> hazmat: Can you please paste the contents of your /proc/net/igmp
<hazmat> niemeyer, http://paste.ubuntu.com/703122/
<hazmat> jimbaker, i think your problems might be related to firewalling.. perhaps the firestarter pkg
 * hazmat falls asleep
<niemeyer> hazmat: Looks fine.. we can figure it tomorrow
<niemeyer> hazmat: Today was already most excellent :-)
<niemeyer> hazmat: We even have a clean wtf: http://wtf.labix.org/
<jimbaker> hazmat, i removed firestarter, but the wordpress example still does not come up; same issue of Network is unreachable by the ZK client
<jimbaker> on second thought, i believe that firestarter simply configures the firewall. so removal probably does nothing. something i'll look at again tomorrow night
<jimbaker> err, tomorrow
<jamespage> morning - I got my local juju environment working with bcsaller's patch for dns resolution
<jamespage> nice
<jamespage> just killed my laptop by running too many cassandra units - oops
<kim0> hazmat: Andreas mentioned he's already writing the docs for orchestra/juju interaction, see bug 855989
<_mup_> Bug #855989: Document usage with orchestra <juju:Confirmed for kim0> < https://launchpad.net/bugs/855989 >
<kim0> nice, lxc deployment works great!
<kim0> wordpress connects woohoo This is so awesome \o/
<hazmat> kim0, nice!
<kim0> hazmat: this is great man :)
<hazmat> kim0, indeed, much more fun
<hazmat> kim0, fwereade_ comitted some work to auto increment version when you do an upgrade, should help solve one of the common charm author gotchas
<kim0> hazmat: yes .. to actually get this lxc stuff working, I had to add revision: to metadata.yaml of some charms
<hazmat> kim0, hmm.. really? what was the error?
<hazmat> the version separation should have been backwards compatible
<kim0> hazmat: 2011-10-06 13:11:40,137 ERROR Bad data in charm info: /home/kim0/Documents/code/juju/juju/examples/oneiric/php/metadata.yaml: revision: required value not found
<kim0> there's probably a better way to handle this? like add revision in a separate file? but I didn't know about it
<hazmat> i'm not even sure what that php charm is supposed to be doing
<kim0> hazmat: it's all of em ..2011-10-06 11:30:29,797 ERROR Bad data in charm info: /home/kim0/Documents/code/juju/juju/examples/oneiric/wordpress/metadata.yaml: revision: required value not found
<kim0> all charms in local "repo" .. were being checked
<hazmat> fwereade_, ^
<hazmat> yeah.. the revision got moved to a separate file
<kim0> so like, echo 1 > revision ?
<kim0> then it needs that file added to trunk I suppose
<hazmat> kim0, it should have already been on trunk, which is why i'm curious
 * hazmat tries it out
<kim0> hazmat: I'm on 388 from ppa, too old ?
<hazmat> hmm
<hazmat> 388 doesn't have the auto upgrade work, and it shouldn't have changes to revision in the charm metadata
<hazmat> 388 works for me, trying again with trunk
<hazmat> yeah.. both seem fine
<kim0> weird
<fwereade_> hazmat, kim0: oh crap
<hazmat> fwereade_, i couldn't reproduce it
<fwereade_> hazmat: it doesn't, indeed, look like it should happen
<hazmat> i should try the package  just to be sure
<fwereade_> hazmat, kim0: the "required value not found" implies that some of the code is out of date -- before I made that optional
<kim0> fwereade_: that's the pkg: 0.5+bzr388-1juju1~oneiric1
<fwereade_> kim0: hmm, I guess that would be the problem, I'm pretty sure 389 was when the change landed, let me double-check
<fwereade_> hazmat: I guess this was another instance in which I should have rebuilt the PPA as soon as it landed?
<kim0> cool .. if anyone can force a pkg rebuild, I'd love to test again
<hazmat> fwereade_, i'm not sure why.. was there a rev that hit trunk that would be broken?
<hazmat> afaik no
<hazmat> kim0, sure, i'm just testing out the pkg first
<kim0> great
<fwereade_> hazmat: likewise, but we knew that 389 wouldn't work with <=388, so anything that would cause mixed code could trigger it
<hazmat> fwereade_, yeah.. but this is all client side
<fwereade_> hazmat: oh
<fwereade_> hazmat: I guess the old code won't work with a new-style charm, though
<fwereade_> kim0: where are the charms coming from?
<hazmat> hmm
<hazmat> kim0, are you using charms from version control and pkg from ppa?
<kim0> fwereade_: yeah .. charms are from trunk
<kim0> yes
<fwereade_> kim0: bingo
<kim0> you got it
<hazmat> fwereade_, nice catch
<kim0> I thought it would be a tiny difference :)
 * hazmat rebuilds the ppa
<hazmat> looks like someone beat me to it
<fwereade_> kim0, charms will work just fine with the revision in both places, so you can just re-add it to metadata.yaml for now; it won't hurt when you try to run with newer code
<fwereade_> kim0, it will whine at you a little but that's it
<kim0> cool! fwereade_ great work :)
<kim0> I hated manual versions too hehe
<fwereade_> kim0: (for reference: when both exist, the revision file overrides the value in metadata)
<kim0> got it
<fwereade_> kim0, cheers :)
<kim0> :)
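(A sketch of the precedence fwereade_ describes: the standalone revision file, when present, overrides any revision left in metadata.yaml. Hypothetical helper, assuming PyYAML.)

    import os
    import yaml

    def charm_revision(charm_dir):
        # The revision file wins over metadata.yaml when both exist.
        revision_path = os.path.join(charm_dir, "revision")
        if os.path.exists(revision_path):
            return int(open(revision_path).read().strip())
        with open(os.path.join(charm_dir, "metadata.yaml")) as f:
            return yaml.safe_load(f).get("revision")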
<SpamapS> Looks like r388 builds w/ clean unit tests in my PPA ..
<jcastro> SpamapS: are you in your room or in this big room?
<SpamapS> jcastro: in keynotes now
<SpamapS> jcastro: I think I'll head back up to the room shortly tho
<fwereade_> morning niemeyer
<niemeyer> fwereade_!
<hazmat> hmm.. reflowing text is more involved than i'd hoped
<niemeyer> hazmat: Hm?
<niemeyer> hazmat: Good morning :)
<hazmat> niemeyer, good morning.. doing a display of the schema
<hazmat> but there are newlines in the description which break up the formatting
<hazmat> and then i came across this.. http://www.koders.com/python/fid7886C337B44AD65C83D334544E6CA3E4FBB7E050.aspx?s=timer
<hazmat> which is a unicode aware reflow impl
<hazmat> actually its not unicode aware
<hazmat> hmm.. maybe it is.. but its still way more than i want
<hazmat> not sure i can reflow correctly if there are examples in the description either that depend on formatting
 * hazmat surveys extant charms
<niemeyer> Still waiting for the link to load, so I'm not sure about what this is really about yet
<niemeyer> hazmat: I'm lacking context I guess.. why are we getting into reflowing text at all?
<hazmat> niemeyer, just trying to pretty print the schema, pretty.print() doesn't quite do it if there are embedded newlines in the description
<hazmat> i'm using some indentation + newline to separate options, but the description field is freeform
<hazmat> and breaks up the formatting
 * hazmat goes for a punt and some newline separation for context
<hazmat> niemeyer, pls ignore -> server time :-)
<niemeyer> hazmat: Sorry, but why aren't we simply dumping the config.yaml?
<hazmat> niemeyer, try.. a pretty print on a dictionary with values that have embedded new line.. it ends up on a single line
<niemeyer> hazmat: Ok.. let me try.. $ cat config.yaml..  yeah, looks good?
<hazmat> hmm
<hazmat> i guess i could try pretty printing with yaml instead of the python dict
<hazmat> no.. still looks pretty bad
<hazmat> niemeyer, http://paste.ubuntu.com/703396/
<hazmat> first one is yaml dump with indent=4, second one is manually printing
<hazmat> keep in mind the data is not from file on disk but from the charm state
<SpamapS> niemeyer: do we expect anything else to land after r388 ?
<hazmat> SpamapS, i'd use 391 as a base
<hazmat> SpamapS, it has the auto-incrementing work for upgrades, which makes things quite a bit nicer, as well as the separation of revision into a separate file from metadata
<niemeyer> SpamapS: Yeah, 391 is the current base
<niemeyer> SpamapS: My hope is that we'll be looking after bug fixes now only
<niemeyer> hazmat: The yaml there isn't right..
<niemeyer> hazmat: It looks ugly because it's inlining the values
<niemeyer> hazmat: plugin: {description:
<SpamapS> oh ok
<niemeyer> hazmat: It can do much better than that automatically
<hazmat> niemeyer, that was with yaml.dump(schema_dict, indent=4) .. the docs on pyyaml aren't very helpful
<niemeyer> hazmat: I know.. I don't even recall how to print it nicely myself, but I'm sure it's possible and easy
<niemeyer> hazmat: Let me look
<hazmat> i'll try.. maybe turning off the default flow style
 * SpamapS triggers rebuild w/ r391
<niemeyer> hazmat: I know because I've tweaked the printing in goyaml
<niemeyer> hazmat: Which is backed by the same code
<SpamapS> really a ton of buzz this week around juju
<hazmat> niemeyer, i switched to 'default_flow_style' off.. http://paste.ubuntu.com/703402/
<hazmat> which gets the key on a separate line, but is still less usable than doing it by hand
<hazmat> imo
<niemeyer> hazmat: Do you really want to get into serializing yaml by hand? :)
 * SpamapS wonders if that would be doable on mechanical turk ;)
<hazmat> niemeyer, its just printing k,v in a dict
<hazmat> niemeyer, we lose the user formatting when we serialize to the charm config node
<niemeyer> hazmat: Hint: your manual yaml is broken.
<hazmat> niemeyer, i'm not trying to print yaml.. i'm trying to print human consumable info
<niemeyer> hazmat: Which we've agreed to print in yaml so far?
<hazmat> in an easy to digest and normal fashion
<niemeyer> hazmat: The yaml in that paste looks quite normal and easy to digest.. I'd really prefer to sustain the current convention we have of displaying human consumable information in yaml, since besides being readable and nice to digest it's also machine readable
<hazmat> niemeyer, fair enough
<niemeyer> hazmat: The decision we made in status is already paying off.. it's already being processed to make decisions based on the status
<hazmat> niemeyer, does yaml guarantee key ordering?
<niemeyer> hazmat: IIRC, maps should not assume a defined ordering
<niemeyer> hazmat: There are ordered maps as well, IIRC, but we're not using them
<niemeyer> hazmat: Well.. there's probably _anything_ you can imagine in yaml, unfortunately :-)
<niemeyer> It's the only place I've ever seen a convention for base-60 floats (!)
<SpamapS> r391 building in PPA https://code.launchpad.net/~clint-fewbar/+archive/fixes/+build/2826656
<hazmat> fun.. i'm sure doing a yaml dump will spare us some unicode problems.. just curious if we can give the user a normalized presentation of description, default, type for a given option or ordering across options
<niemeyer> hazmat: I suspect it's key-sorting on dump, but I'm not sure
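(For reference, the PyYAML behaviour under discussion, on a toy schema rather than juju's real charm config.)

    import yaml

    schema = {"plugin": {"description": "Plugin to enable.\nOne of: foo, bar.",
                         "type": "string", "default": "foo"}}

    # The default style inlines leaf maps: plugin: {default: foo, ...}
    print(yaml.dump(schema, indent=4))

    # Block style gives each key its own line, and PyYAML key-sorts maps
    # on dump, so the ordering is stable across options. Embedded newlines
    # in the description still come out escaped, which was the complaint.
    print(yaml.dump(schema, default_flow_style=False, indent=4))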
<hazmat> SpamapS, i hadn't noticed that streaming build log before, nice
<SpamapS> yeah, keeps me from angrily pacing
<niemeyer> Wow..
<niemeyer> http://blog.golang.org/2011/10/preview-of-go-version-1.html
<rog> niemeyer: lots of tasty changes there.
<fwereade_> niemeyer: btw, I have an MP for that deploy bug lying around, but I didn't target to eureka because I had a vague feeling it might be finished
<fwereade_> niemeyer: what, if anything, should I be doing about it?
<hazmat> fwereade_, eureka is still open for bug fixes
<hazmat> new feature dev is on florence
<fwereade_> hazmat: ah, cool, thanks
<niemeyer> fwereade_: What hazmat says
<hazmat> niemeyer, so do we want to start a new series or separate merge branch for florence till eureka closes, or just hold off on  florence merges
<hazmat> niemeyer, also we'll need a new kanban
<fwereade_> niemeyer: also, lp:~fwereade/juju/charm-store-hack appears to work
<fwereade_> niemeyer: python juju/charm/store -r REPO -k KEY -c CERT -p PORT
<fwereade_> niemeyer: and edit juju.charm.repository to construct a RemoteCharmRepository with the appropriate url base
<fwereade_> niemeyer: [edit] python juju/charm/store.py -r REPO -k KEY -c CERT -p PORT
<hazmat> fwereade_, nice
<niemeyer> fwereade_: Wow, sweet!
<niemeyer> hazmat: I think let's hold off at least for this week, and focus on getting the last few fixes/polishes into eureka
<SpamapS> https://launchpadlibrarian.net/82132481/buildlog_ubuntu-oneiric-i386.juju_0.5%2Bbzr391-1juju1~oneiric1_FAILEDTOBUILD.txt.gz
<SpamapS> buildd failures for 391
<SpamapS> looks like a timeout..
<niemeyer> SpamapS: Indeed
<niemeyer> SpamapS: Worth a retry
<niemeyer> SpamapS: The wtf is clean for a while, FWIW
<niemeyer> SpamapS: http://wtf.labix.org/
<SpamapS> niemeyer: yeah I don't even try the builds until that is "OK"
<niemeyer> SpamapS: Sweet
<jamespage> morning/afternoon all
<jamespage> I'm having issues with charms where the charm name contains a number
<jamespage> for example - cassandra is fine - but tomcat7 errors
<jamespage> Message: Bad charm URL 'local:oneiric/tomcat7': invalid name (URL inferred from 'local:tomcat7')
<niemeyer> jamespage: I suspect this is a bug in the regex.. rog actually brought that up.. /me checks
<niemeyer> jamespage: Yeah, it's bogus, sorry about that.. we'll fix
<jamespage> niemeyer; ta
<fwereade_> niemeyer, it's not necessarily *quite* as simple as that
<fwereade_> niemeyer: sorry, I mentioned that in an MP discussion somewhere
<niemeyer> fwereade_: It's not?
<fwereade_> niemeyer: we could fix charm names to end with a number, but then we need to make sure the final string of numbers is not preceded by a -
<hazmat> SpamapS, i've been able to run that test a couple hundred times, haven't reproduced
<fwereade_> niemeyer: cs:oneiric/foobar-7
<niemeyer> fwereade_: That sounds trivial
<SpamapS> hazmat: given that its building on a virtual host... could just be that it needs a slightly higher timeout.
<fwereade_> niemeyer: well, it is really
<niemeyer> :-)
<hazmat> SpamapS, its not really a timeout per se, its a deferred being called twice, resulting in an uncaught exception, which leads to a timeout
<hazmat> SpamapS, ie. it would always timeout with that circumstance regardless of timeout length
<fwereade_> niemeyer: it's just that I reverted to the original regex when I noticed, pending further discussion, which I guess got lost
<niemeyer> fwereade_: I lost it as well in the buzz, but I'll put another regex in place.. hold on
<hazmat> SpamapS, hmm.. actually you might be right, its a 2.6 second test as is
<hazmat> SpamapS, the timeout is causing the deferred to fire early
<hazmat> SpamapS, i'll take a look, i think we can rework the config set tests to not use the status build infrastructure which is tons more stuff than it needs
<_mup_> Bug #869272 was filed: charm names should be able to end with a digit <juju:In Progress by fwereade> < https://launchpad.net/bugs/869272 >
<fwereade_> niemeyer: target to eureka, right?
<niemeyer> fwereade_: Yeah
<fwereade_> niemeyer: cheers
<_mup_> juju/config-set-sans-status r392 committed by kapil.thangavelu@canonical.com
<_mup_> simplify setup for status tests, takes runtime from 12.5s for 4 tests to 2s
<niemeyer> fwereade_, hazmat, SpamapS, jamespage: Is this right:
<niemeyer> "^[a-z]+([a-z0-9-]+[a-z][a-z0-9]*)?$"
<fwereade_> niemeyer: I *think* so, but I haven't written any tests yet :p
<niemeyer> fwereade_: Cool, can you take this over?
<fwereade_> niemeyer: I intend to, but I need to pop out for a while in about 1 minute
<hazmat> works for the formulas i have locally
 * hazmat tries to remember the url to the latest charms from lp
<hazmat> https://api.launchpad.net/devel/charm?ws.op=getBranchTips
<fwereade_> niemeyer: I'll fix CharmURL when I get back
<niemeyer> fwereade_: Thanks!
<fwereade_> laters
<hazmat> niemeyer, works against all the extant charms on lp
<niemeyer> hazmat: Super.. and it blocks the bad cases too I suppose (starting and ending with -, starting with digit, and ending with - and digit)
 * niemeyer => lunch!
<jamespage> should the expose/unexpose commands have context with the local provider?  I assume not as they make no difference :-)
<jimbaker> jamespage, they are not meaningful for the local provider
<jamespage> jimbaker: ack - thanks for the confirmation
<hazmat> jamespage, they don't
<_mup_> Bug #869289 was filed: Simplify config set tests to reduce runtime significantly. <juju:In Progress by hazmat> < https://launchpad.net/bugs/869289 >
<hazmat> SpamapS, just put a MP into the queue which should reduce the runtime of those tests by 6x
<hazmat> that should hopefully avoid any issues on that particular test case
<_mup_> juju/config-set-sans-status r393 committed by kapil.thangavelu@canonical.com
<_mup_> remove commented out bits, oops
<negronjl> hi guys:  Is there a way to deploy different releases in the same deployment?  Something like this charm on oneiric ( maybe specify ami and size ) and another one on natty ( maybe also specify ami and size )?
<jimbaker> hazmat, i was about to ask about those commented-out bits ;)
<negronjl> I have some charms that work on oneiric and others that work on natty.  I am wondering if there is a way to use them both by just telling juju to use oneiric for some and natty for some others
<hazmat> rog, would you mind having a look at niemeyer's go branches in review?
<m_3> negronjl: +1, but I'd like to specify image-id and instance-type in charm metadata
<negronjl> m_3: I like that idea too :)
<rog> hazmat: sure. where do i find them?
<m_3> negronjl: what I've been doing to solve that is just adding logic around `lsb_release -cs` in the charm
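(A sketch of the per-series branching m_3 mentions, written in Python for illustration even though charm hooks of the era were usually bash; the package names are hypothetical.)

    import subprocess

    def ubuntu_series():
        # `lsb_release -cs` prints the series codename, e.g. "natty".
        return subprocess.check_output(["lsb_release", "-cs"]).decode().strip()

    def packages_for_series():
        # Hypothetical per-series package choices.
        if ubuntu_series() == "natty":
            return ["openjdk-6-jre-headless"]
        return ["default-jre-headless"]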
<hazmat> rog, http://j.mp/juju-eureka
<hazmat> negronjl, we've added the notion of a charm series, which i think will correspond to the distro series used to deploy them, but it's not easily intermixed atm as it's an environment setting
<negronjl> hazmat: Thanks.  Is it somewhere in your roadmap?  Do you know of any work-around to accomplish this ?
<niemeyer> fwereade_, hazmat: Hmm.. was just thinking over lunch we might do something slightly more restrictive
 * niemeyer hacks something quick
<hazmat> negronjl, i don't think its functionally that way now, and its not listed on the roadmap atm, i'd need to defer to niemeyer and fwereade_ who've been discussing this.. ie. charm series atm doesn't correspond to distro.. hmm.. i don't see any way to capture this
<hazmat> atm
<negronjl> hazmat: ok.  that's too bad but, thanks for the info.
<niemeyer> negronjl: Yes, charms are specific to a given Ubuntu series
<niemeyer> fwereade_: In fact, we'll have to address a missing aspect in that regard, I think..
<hazmat> negronjl, atm the only way to address it is via default-image-id modified between deploys
<niemeyer> hazmat, fwereade_: "^[a-z][a-z0-9]+(-[a-z0-9]*[a-z][a-z0-9]*)*$"
<hazmat> niemeyer, yeah.. atm charm series doesn't actually correspond to an image launch against a series afaics
<niemeyer> hazmat: Yeah, but the idea is really to use the charm URL series for that
<negronjl> niemeyer: It would be useful to be able to tell juju which release ( ami-id, size ) to use for diff. charms.  ATM I have a hadoop charm that only works on natty ( for now ) and a cloudfoundry charm that only works on oneiric.  It would be useful to be able to demonstrate the two of them without having to destroy the environment between them.
<niemeyer> hazmat: It's not a big deal since there's a single one today, but that's definitely the intention
<niemeyer> negronjl: The ami-id will come out of the charm URL series
<niemeyer> negronjl: Size, etc, is a different issue we'll have to address
<niemeyer> Anyway.. must go back to lunch, just wanted to propose the regex before fwereade_ worked on it
<negronjl> niemeyer: thx for the info
<hazmat> niemeyer, yeah.. we'll need some more leg work for that to happen, machine allocation happens without regard to the unit/charm being placed atm
 * niemeyer mumbles something about placement hack..
 * niemeyer => lunch, again
<hazmat> :-)
<hazmat> that's not really the problem per se, we just need to propagate more info to the machine state regarding its type/series constraints, placement is just the abstraction layer that it needs to get to
<_mup_> juju/safer-perms-on-juju-dir r392 committed by kapil.thangavelu@canonical.com
<_mup_> stricter perms when creating default juju config and juju dir
<niemeyer> hazmat: It is the problem in that the model is not suitable yet
<niemeyer> We'll get there, though
<_mup_> juju/local-provider-docs r392 committed by kapil.thangavelu@canonical.com
<_mup_> local provider docs
 * hazmat lunches
<rog> hazmat, niemeyer: reviews done
<hazmat> rog, awesome thanks
<hazmat> rog, via email?
<rog> i replied on the comment form. doesn't that send an email to interested parties?
<rog> hazmat: ^
<hazmat> rog, oh.. it does.. just also vote on the mp in addition to the comment: right underneath the textarea you can do an approve/needs fixing/etc on the merge itself
<hazmat> i missed it cause i was scanning the top of the merge proposal for votes
<rog> done
<niemeyer> rog:
<niemeyer> parts = append([]string{dir.Path}, parts...)
<niemeyer> appending to parts directly is bad behaviour because
<niemeyer> it could interfere with any callers that call join
<niemeyer> rog: I'm not really sure about what you mean there?
<rog> niemeyer: parts is an argument to the function. append has side effects.
<niemeyer> rog: This example has absolutely no side effects.. I don't really get what you mean
<rog> niemeyer: doh, sorry, i'd read it wrong
<rog> ignore!
<niemeyer> rog: Phew, cool, np :)
<niemeyer> rog: "Seems to me like the body of this loop would be better"
<niemeyer> rog: Good idea there, thanks
<Zeelot> morning
<niemeyer> Zeelot: Morning!
<rog> ok, a little question. i'm trying to trace through juju actions on bootstrap.
<Zeelot> so it seems like all the examples use one ec2 instance for each service. How bad of an idea is it to create a charm that installs a full php application onto a single instance? something like php+apache+couchdb+rabbitmq
<rog> in this file, juju/control/bootstrap.py, it looks as if it's calling the bootstrap function defined in this file: juju/providers/common/base.py
<Zeelot> is this all just preference of how we want to manage our instances?
<rog> but that function just seems to return a list of machines - how does the bootstrap process actually get initiated?
<rog> (because the run function in juju/providers/common/bootstrap.py just seems to return a list of machines, and not actually do anything)
<niemeyer> Zeelot: php+apache sounds fine.. I'd put couchdb and rabbitmq in separate charms
<niemeyer> Zeelot: Note that this is just the beginning.. we'll be adding support for multiple charms in a single machine in the future
<rog> what am i missing?
<niemeyer> rog: Let me check
<niemeyer> rog: Hmm
<Zeelot> niemeyer: ah, perfect. I'm also wondering, is this just for EC2? So if I am developing on a local environment, I currently use a VM with a basic ubuntu server. Juju isn't going to help me with anything there, is it? So I still have my vagrant + chef/puppet recipes to set up my VM but at that point, why use juju to set up my software on ec2 when I already have my recipes from chef or puppet? I might simply just be misunderstanding the goals for juju
<niemeyer> rog: I'm not sure how you got to the conclusion that it just returns a list of machines and doesn't do anything
<niemeyer> rog: The bootstrap method in base.py calls Bootstrap.run, which does more than that
<niemeyer> Zeelot: juju deploys on EC2, OpenStack, Cobbler/Orchestra (physical), and locally through LXC
<rog> it calls get_zookeeper_machines, but that didn't look like it did much more than interrogate the state for the list of machines
<Zeelot> alright, I'll have to look at those other things then because I'm not sure what they are :) thanks
<niemeyer> Zeelot: If you already have something else that deploys your software on EC2 in a way you're very happy with, I wouldn't look for anything else :-)
<niemeyer> Zeelot: juju is very different from chef and puppet, though
<Zeelot> niemeyer: we don't have anything for deploying to ec2 but we are looking into making puppet or chef recipes. If juju can help me also set up VMs, then I am fine with it
<niemeyer> Zeelot: Yeah, that's juju! :)
<Zeelot> juju looks like a much more complete solution to deploying to ec2
<niemeyer> Zeelot: Not only set up the VMs, but we're working on the full orchestration experience
<niemeyer> Zeelot: Reusable building blocks a command away
<Zeelot> excellent
<niemeyer> Zeelot: and scaling up and down, recovery, etc etc
<niemeyer> Zeelot: There's a lot to do still, of course, but the future looks unlike anything else out there
<rog> niemeyer: i can't see where the actual machine is bootstrapped. is it as a side-effect of find_zookeepers ?
<niemeyer> rog: Just follow the pipeline
<rog> i'm trying!
<niemeyer>         machines = self._provider.get_zookeeper_machines()
<niemeyer> ...
<niemeyer>         machines.addErrback(self._on_no_machines_found)
<Zeelot> any useful tips for what I should be reading to get caught up on what I need to do in order to use juju for setting up my VM? Or is that part of a future feature?
<niemeyer> rog: What happens on no_machines_found?
 * rog groans.
<hazmat> Zeelot, docs are pending in the merge queue, but there's a message on the mailing list which goes through the basics
<rog> that's a bit gross
<niemeyer> rog: Yeah, I'm happy you're unhappy about it
<rog> no wonder i couldn't see it
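(A condensed sketch of the flow rog just traced, loosely after juju/providers/common with names and details simplified: the actual bootstrap hides in the errback, which is why a top-to-bottom read of run() misses it.)

    class EnvironmentNotFound(Exception):
        """Stand-in for juju's 'no bootstrapped environment' error."""

    class Bootstrap(object):
        def __init__(self, provider):
            self._provider = provider

        def run(self):
            d = self._provider.get_zookeeper_machines()
            # Success: machines already exist and are simply returned.
            # Failure: EnvironmentNotFound routes to the real bootstrap.
            d.addErrback(self._on_no_machines_found)
            return d

        def _on_no_machines_found(self, failure):
            failure.trap(EnvironmentNotFound)
            return self._launch_zookeeper_machine()

        def _launch_zookeeper_machine(self):
            # Provider-specific machine launch would go here.
            return []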
<Zeelot> alright, thanks
<hazmat> Zeelot, https://lists.ubuntu.com/archives/juju/2011-October/000844.html
<hazmat> Zeelot, complete example of config in this one.. https://lists.ubuntu.com/archives/juju/2011-October/000847.html
<hazmat> Zeelot, that should work fine in a vm as well
<Zeelot> awesome
<Zeelot> looks like I need to learn about LXC first
<hazmat> Zeelot, its basically os level namespacing for isolation.. a jail or chroot on steroids... much less overhead than full virtualization
<Zeelot> hmm, is it possible to use a VM like virtualbox? Mostly a requirement for the developers that work on windows or mac
<hazmat> Zeelot, just to be clear it's not for setting up vms, it's for setting things up from inside the vm
<Zeelot> ah, then that's probably fine
<hazmat> it will do isolation of the different services within the vm using lxc
<Zeelot> interesting
<hazmat> its mostly meant for ubuntu physical machines, but it works fine in a virtual machine
<Zeelot> so then I could simply have a basic ubuntu server VM that I never alter, and use juju to set up dev environments for my various projects?
<hazmat> Zeelot, outside of install juju and doing deploys from within the vm, yes.. you'd be using an environment per project as well probably to get separation between them
<Zeelot> thanks for the info
<jimbaker> rog, bootstrap is an example where the inlineCallbacks style works better, especially when the logic gets convoluted - do the bootstrap *if* there's a certain error (EnvironmentNotFound)
<jimbaker> but as you see here, it's currently quite convoluted and requires a twisted way of thinking, so to speak
<rog> jimbaker: i still can't quite see what's going on. ahhh, the crux is returnValue, i think i see now
<rog> oh, no, i still don't see it
<jimbaker> rog, i'd be happy to walk you through it
<rog> actually, i *think* i see it.
<rog> but i don't understand why a callback is added that is known to return an error, only to do something on the error.
<jimbaker> rog, yeah, the logic is certainly more convoluted than what would pass a review now imho
<jimbaker> rog, also the use of the command pattern with the run, vs just having a function is something that is not helpful
<niemeyer> rog: Which callback is added that is known to return an error?
<rog> jimbaker: couldn't you just yield launch_machine or something?
<rog> _on_machines_found
<jimbaker> rog, there is nothing in Bootstrap that couldn't just be a single inlineCallbacks function, correct
<rog> AFAICS the only reason that ErrBack gets called is because _on_machines_found returns an error, which it always will (or is there some subclass subterfuge going on there?)
<niemeyer> rog: _on_machines_found doesn't return an error
<rog> niemeyer: oh. so why does errBack get called?
<niemeyer> rog: errback gets called on errors
<rog> niemeyer: so where's the error coming from?
<rog> niemeyer: because without an error, nothing happens, right?
<niemeyer> rog: Take a moment to read this: http://twistedmatrix.com/documents/current/core/howto/defer.html
<jimbaker> the errback is called from get_zookeeper_machines
<niemeyer> rog: It will help you a lot in your understanding of the code base
<jimbaker> or i should say, because of it :)
<jimbaker> so machines in the run function is a Deferred
<rog> yes, because of returnValue, right?
<rog> which is why we can call addCallback and addErrback, yes?
<jimbaker> rog, looking at its definition, but not because of returnValue
<rog> oh
<jimbaker> that's just a convention required by the inlineCallbacks decorator
<niemeyer> rog: returnValue is a hack related to inlineCallbacks
<rog>     machines = []
<rog> does that make a deferred value?
<rog> oh, findZooKeepers is inlineCallbacks
<niemeyer> rog: inlineCallbacks exploits Python generators to try to avoid the callback-based logic (addErrback, addCallback, etc) and make the code a bit more linear
<rog> i'm used to seeing inlineCallbacks functions yield things rather than returning them
<hazmat> yeah.. it's syntactic sugar using python generators under the hood
<jimbaker> rog, yes, findZookeeper uses the inlineCallbacks decorator. so it wraps the function such that its invocation returns a Deferred
<jimbaker> that Deferred in turn is returned (trivially) by get_zookeeper_machines
<niemeyer> rog: returnValue raises an exception that is caught by the inlineCallback decorator to stop the generator
<rog> jimbaker: yes, i think i kind of understand that (although i haven't read that document)
<niemeyer> rog: It's a "return" from a generator, because generators can't return
<rog> ah ok. i see. lovely stuff.
<niemeyer> rog: I'd call it syntactic salt
<jimbaker> rog, i highly recommend just looking at the source code of inlineCallbacks, it's pretty straightforward code
<rog> rofl
<jimbaker> in the sense that setting up trampolines can be
<niemeyer> jimbaker: WHAT?
<rog> jimbaker: i will some time. i get  the principle.
<niemeyer> ROTFL
<rog> it's a kinda beautiful subversion of a few language features
<jimbaker> rog, sounds good. one convention you will notice in our code is that we don't use inlineCallbacks if we are just returning a Deferred (eg the passthrough in get_zookeeper_machines)
<niemeyer> rog: http://paste.ubuntu.com/703548/
<rog> jimbaker: yeah, that confused me
<jimbaker> rog, i understand
<niemeyer> rog: Look at that, and pay attention next time jimbaker tells you something is straightforward
<rog> i understand what's going on now. the interplay between deferred stuff and exceptions wasn't... obvious.
<rog> exc_info()[2].tb_next
<rog> no that's not straightforward as i would understand the term
<jimbaker> rog, a good reason not to just pass through is if you want to do something that logically occurs after a value that you would return
<jimbaker> rog, hah, i didn't say straightforward meant easy
<jimbaker> for what it needs to do, it's straightforward
<rog> jimbaker: i'll take it from you
<niemeyer> jimbaker: Yeah, you meant straightforward as in extremely involved
<rog> well, it's less than 100 lines of code
<rog> well, having suffered that enlightenment, i shall go and cook some curry
<niemeyer> rog: and most of them are comments even ;-)
<niemeyer> rog: Do check the documentation about deferreds out later/tomorrow
<jimbaker> rog, and it also does something helpful - trying to diagnose improper use of returnValue, using the fact that the traceback can be interrogated
<niemeyer> rog: It'll help a lot on your understanding of the base
<rog> jimbaker, niemeyer: thanks for holding my hand through that. i feel.... better.
<jimbaker> again, for what it is trying to do, it's straightforward
<niemeyer> jimbaker: Yeah.. a lot more straightforward than parsing apt-cache's output for sure
<jimbaker> niemeyer, :( it helps to know what it should be, enough said. parsing is the easy part
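(The same flow in the inlineCallbacks style jimbaker recommends: the error path reads linearly instead of living in an errback. A sketch only, reusing the hypothetical names from before.)

    from twisted.internet.defer import inlineCallbacks, returnValue

    class EnvironmentNotFound(Exception):
        pass

    @inlineCallbacks
    def bootstrap(provider):
        try:
            machines = yield provider.get_zookeeper_machines()
        except EnvironmentNotFound:
            # returnValue stands in for "return", since generators
            # could not return values at the time.
            machines = yield provider.launch_zookeeper_machine()
        returnValue(machines)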
<niemeyer> OMG, what a feeling of emptiness on the eureka kanban
<_mup_> juju/local-provider-docs r393 committed by kapil.thangavelu@canonical.com
<_mup_> missing doc file, doh
<niemeyer> hazmat: Danke :)
<rog> niemeyer: ping re merge requests, BTW
<niemeyer> rog: Cool
<niemeyer> rog: Cheers
<hazmat> bcsaller, jimbaker, fwereade_ even though there are no items on the kanban, i've just been going through low hanging bugs in the general queue.. till we get some new ones
<hazmat> there's a lot to choose from
<rog> see y'all tomorrow
<jimbaker> hazmat, ok, we might want to reserve some of them
<hazmat> jimbaker, it's better to pick them as you're doing them, else we end up with reserved queues which serve no purpose except delay
<jimbaker> hazmat, should have been more precise, reserve one per person :)
<jimbaker> as in assigned
<hazmat> jimbaker, sure.. but people need to self select what they want to tackle
<jimbaker> hazmat, again sorry for my lack of clarity, this is what i meant
<niemeyer> fwereade_: It's a bit late for you.. do you want me to handle the regex stuff?
<jimbaker> hazmat, ok, i will do juju scp, low hanging and useful
<hazmat> jimbaker, hmm.. thats actually a feature not a bug
<hazmat> jimbaker, definitely would be nice, but we're trying to stay in feature freeze, bug fix only
<hazmat> till next week and oneiric is out
<jimbaker> hazmat, should this include branches that escaped review because they didn't have a milestone on it, like this trivial from SpamapS: https://code.launchpad.net/~clint-fewbar/juju/remove-default-ami/+merge/71278 ?
<hazmat> jimbaker, i just push those to the correct milestone so they show up on the kanban
<jimbaker> ok, i will mark the corresponding bug for eureka then
<niemeyer> hazmat: hmm.. what's the next milestone's name? florence?
<niemeyer> hazmat: I'll put the kanban to build
<hazmat> niemeyer, yup
<niemeyer> Cool
<jimbaker> hazmat, i think we can do a fix released on bug 810648
<_mup_> Bug #810648: The revision number in formula metadata is not very useful to users <juju:New> < https://launchpad.net/bugs/810648 >
<hazmat> niemeyer, done
<jimbaker> just passing through, this looks like a good feature for florence: bug 814974
<_mup_> Bug #814974: config options need a "file" type <juju:New> < https://launchpad.net/bugs/814974 >
<hazmat> niemeyer, actually i'm not sure
<fwereade_> niemeyer, just back: everyone's asleep, and I'd find it relaxing, if you haven't already done it
<hazmat> niemeyer, its still a problem for a regular deploy
<hazmat> niemeyer, definitely addressed part of it
<hazmat> niemeyer, it will silently use the formula already stored in the env, ignoring the one on disk
<hazmat> which may have newer changes
<jimbaker> bug 816621 looks invalid to me, based on the comments
<_mup_> Bug #816621: Juju doesn't appear to set up a complete environment while running the installation scripts and hooks <cloudfoundry:New> <juju:New> < https://launchpad.net/bugs/816621 >
<niemeyer> fwereade_: Haven't started it yet
<niemeyer> hazmat: Sorry, ECONTEXT
<niemeyer> hazmat: What are you referring to again?
<hazmat> oh.. sorry jimbaker mentioned it .. i lost econtext
<jimbaker> hazmat, i'm not certain what the context is, either, since i mentioned a number of bugs here ;)
<niemeyer> http://j.mp/juju-florence is up
<hazmat> niemeyer, great, thanks
<niemeyer> fwereade_: Please note I've suggested a follow up regex that has probably scrolled up already
<niemeyer> fwereade_: I can dig it again if you want
<niemeyer> fwereade_: It's slightly more restrictive than the initial suggestion
<niemeyer> fwereade_: But avoids things like foo------bar
<niemeyer> fwereade_: and foo-32-bar
<jimbaker> still one usage of ensemble in trunk, debian/ensemble.docs - required to be that for any reason?
<niemeyer> hazmat: have a sec for a quick review: http://paste.ubuntu.com/703602/
<hazmat> niemeyer, is that the same regex from earlier today on the channel?
<niemeyer> hazmat: It's the second one I mentioned in the channel
<niemeyer> hazmat: Not the one you've run against existing formulas
<niemeyer> hazmat: This is slightly more strict
<niemeyer> hazmat: It forbids things like a--b
<niemeyer> hazmat: and a-1-b
<niemeyer> hazmat: But allows a1 and a1-b2
<hazmat> but it allows n1-a1-n1 ?
<niemeyer> hazmat: and even a1-2b
<niemeyer> hazmat: Yeah
<niemeyer> hazmat: Just the number can't be by itself without an accompanying char
<niemeyer> hazmat: Nor can two dashes be seen next to each other
<hazmat> niemeyer, sounds good, un momento going to try and break it ;-)
<niemeyer> hazmat: Woot
<niemeyer> hazmat: It forbids single letter names as well, which doesn't sound like a bad idea :-)
<niemeyer> Hmm.. actually.. it does sound like a bad idea
<niemeyer> Because it forbids a-foo
 * niemeyer changes
<niemeyer> hazmat: Just changed the first + to a *
<hazmat> niemeyer, cool
<hazmat> niemeyer, looks good, we should have this documented in the formula author guide as well
<niemeyer> hazmat: Hmm.. maybe.. but at the same time we can just say "use sensible names" :)
<hazmat> well lower case, must begin with a letter, can use numbers following letters, can use single hyphens between characters and numbers
<hazmat> sensible is a state of mind ;-)
<niemeyer> hazmat: Will mention it
<niemeyer> (_formula_ author.. huh huh)
<niemeyer> hazmat: http://paste.ubuntu.com/703606/
<niemeyer> Hmm.. the revision should be taken out
 * niemeyer does it
<hazmat> niemeyer, sounds good
<niemeyer> hazmat: http://paste.ubuntu.com/703607/
<niemeyer> Cool, cowboying it
<hazmat> niemeyer, this thing gets bigger by the second ;-)
<hazmat> +1
<niemeyer> hazmat: Yeah, I'll commit before this beast gets out of control!
<hazmat> bcsaller, jimbaker if either of you have a moment, i'd appreciate a look over the local provider docs
<hazmat> jimbaker, did rebooting after yanking that firewall software help?
<jimbaker> hazmat, will do
<jimbaker> hazmat, i didn't try a reboot
<hazmat> https://code.launchpad.net/~hazmat/juju/local-provider-docs/+merge/78465
<_mup_> juju/trunk r392 committed by gustavo@niemeyer.net
<_mup_> Fixed charm name URL so that mambo5 works as a name. Also documented
<_mup_> the name format and removed revision from docs.  [r=hazmat]
<niemeyer> Stepping out for shaking the bones
<jimbaker> hazmat, ok, i will try rebooting. after that, if it doesn't work, time to rtfm ;)
<hazmat> jimbaker, you can probably verify pre reboot from sudo iptables --list
<hazmat> rules which drop packets might be interferring
<jimbaker> hazmat, hmmm, i guess it's possible these rules could cause issues - http://pastebin.ubuntu.com/703611/ - zk is running on the actual machine (so 192.168.0.106) iirc, whereas the agents are on lxc containers in the 192.168.122.0 network
<jimbaker> let's try a reboot anyway...
<jimbaker> back
<jimbaker> time to get coffee, i will check the local deployment shortly
<jimbaker> hazmat, guess what, the reboot worked this time
<hazmat> jimbaker, sweet!
<jimbaker> hazmat, i suppose the iptables settings are transitory, in how they are managed by firestarter
<jimbaker> still need to rtfm, but definitely on the end of the queue now
<_mup_> juju/trunk r393 committed by kapil.thangavelu@canonical.com
<_mup_> merge config-set-sans-status [r=bcsaller,jimbaker][f=869289]
<_mup_> Minor test optimization for config-set to get it under timeout threshold
<_mup_> on low powered vhost used for automated test running.
<jimbaker> hazmat, anyway, really glad we have the local stuff working, there were lots of branches involved, but the codebase specifically for local support is pretty minimal. nice.
<_mup_> juju/safer-perms-on-juju-dir r393 committed by kapil.thangavelu@canonical.com
<_mup_> update per review comments
<Aram> hi.
<hazmat> Aram, hi
<_mup_> juju/trunk r394 committed by kapil.thangavelu@canonical.com
<_mup_> merge safer-perms-on-juju-dir [r=bcsaller,jimbaker,niemeyer][f=869289]
<_mup_> Tighten up permissions up on default creation of environments.yaml and ~/.juju dir.
<hazmat> SpamapS, did you want to merge the removal of the deb packaging? i just pushed it into the review queue.. but wanted to double check
<hazmat> really need an interface repository
<jimbaker> hazmat, i think we can mark this bug as fix released too: bug 810649
<_mup_> Bug #810649: Revision number should be optional in metadata <juju:Confirmed> < https://launchpad.net/bugs/810649 >
<hazmat> jimbaker, i realized its not quite the same
<hazmat> jimbaker, the auto-increment stuff requires an upgrade
<hazmat> jimbaker, common case is to just want to deploy a new service as you edit
<hazmat> jimbaker, its probably as good as we'll have it for a little while though
#juju 2011-10-07
<jimbaker> hazmat, ok, just making sure
<jimbaker> hazmat, i did just triage bug 846055 as invalid after some more analysis
<_mup_> Bug #846055: Occasional error when shutting down a machine from security group removal <juju:Invalid> < https://launchpad.net/bugs/846055 >
<hazmat> jimbaker, cool
<jimbaker> (related to my fix of bug 863510 a little bit ago)
<_mup_> Bug #863510: destory-environment errors and hangs forever <juju:Fix Released by jimbaker> < https://launchpad.net/bugs/863510 >
 * hazmat noodles on a charm browser
<jamespage> morning all
<jamespage> local provider is working really well for me
<jamespage> however can't upgrade charms - http://paste.ubuntu.com/703911/
<jamespage> not the end of the world as destroying and restarting is only a couple of minutes :-)
<hazmat> jamespage, g'morning
<jamespage> morning hazmat
<hazmat> oh.. ugh
<hazmat> i forgot that the unit agent downloads the charm...
<hazmat> so a filesystem solution isn't going to work very well
<hazmat> since the unit and machine agent both need access and are on separate fs mounts
<hazmat> jamespage, yeah.. upgrade is definitely broken
<_mup_> Bug #869945 was filed: upgrade broken for local provider <juju:New> < https://launchpad.net/bugs/869945 >
<rog> can i ask a quick question about the juju source again, please?
<rog> i'm trying to understand the machine startup process
<hazmat> rog, go for it
<rog> in ec2/__init__.py, there are these two lines:
<rog>         constraints = machine_data.get("constraints", {})
<rog>         return EC2LaunchMachine(self, master, constraints).run(machine_id)
<hazmat> rog, so constraints are how we get the image
<hazmat> or what sort of machine we run, or where we run it
<rog> i'm wondering where the zookeepers argument to EC2LaunchMachine.start_machine comes form
<rog> s/form/from/
<hazmat> the second line starts the machine, the master param is whether we start a zookeeper and provisioning agent on the node by default. the machine id is to inform the machine of its zk machine id so its machine agent can connect back to the right place.
<rog> (and i think i've got confused over the two start_machine implementations... let me have another look)
<rog> ah, machine_id is the machine id of the zookeeper machine, not the new machine?
<hazmat> rog yes..
<hazmat> the provider machine id is only known after the instance has been launched
<hazmat> rog, some of it is a little confusing because of the desire to reuse implementation and lots of similarly named things... it looks like constraints doesn't actually determine machine type now that i look at it, just image selection.
<hazmat> rog, start_machine gets called by a base class in common/launch.py
<hazmat> from the launchmachine base class run method
<rog> hazmat: yeah, it's winding in and out of the base class.
<rog> i was confused
<rog> i think i see it now
<hazmat> rog, it uses the findzookeeper class to get the zks to populate the arg to start_machine
<hazmat> rog, cool
<rog> yeah, i had looked at that before, but hadn't made the connection
<rog> thanks
<hazmat> np
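(The rough shape of the launch path hazmat describes, heavily simplified from juju/providers/common/launch.py; signatures are approximate.)

    from twisted.internet.defer import inlineCallbacks, returnValue

    class LaunchMachine(object):
        """Base class; each provider facade supplies start_machine()."""

        def __init__(self, provider, master, constraints):
            self._provider = provider
            self._master = master
            self._constraints = constraints

        @inlineCallbacks
        def run(self, machine_id):
            # machine_id is the new machine's id in zookeeper state; the
            # provider instance id only exists after launch. The zookeeper
            # hosts are looked up here to populate start_machine's argument.
            zookeepers = yield self._provider.find_zookeepers()
            machines = yield self.start_machine(machine_id, zookeepers)
            returnValue(machines)

        def start_machine(self, machine_id, zookeepers):
            raise NotImplementedError("defined by each provider")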
<_mup_> juju/hooks-with-noninteractive-apt r395 committed by kapil.thangavelu@canonical.com
<_mup_> set debian noninteractive
<rog> hazmat: one last, Q: where's the run(machine_id) method defined?
<hazmat> rog which one?
<rog> the one in  return EC2LaunchMachine(self, master, constraints).run(machine_id)
<hazmat> rog, it's the primary entry point into a LaunchMachine class; it's invoked by the provider facade's start_machine method defined in each provider package
<hazmat> rog, its defined on the common/launch.py LaunchMachine class
<rog> so it is. grep fail.
<rog> ah, so machine_id *is* the id of the new machine, not of the zookeeper machine
<rog> thanks again
<smoser> bug 863629
<_mup_> Bug #863629: libvirt-lxc: virFileOpenTtyAt can't be called on /some/other/dev/pts <patch> <server-o-nrs> <libvirt (Ubuntu):Confirmed> < https://launchpad.net/bugs/863629 >
<fwereade> hazmat: lp:810649 (Revision number should be optional in metadata) has now been fully addressed, I think; but not in the manner suggested in the bug
<fwereade> hazmat: shall I mark it invalid?
<hazmat> fwereade, i'm not sure
<hazmat> fwereade, the common case the bug is raising isn't addressed
<hazmat> which is i modify a formula, go to deploy it, and transparently the on in storage is used instead
<hazmat> s/on/one
<fwereade> hazmat: hmm, you're right
<hazmat> fwereade, so i started to look at exposing the provider storage over http to allow units to download for upgrades
<fwereade> hazmat: oh yes?
<hazmat> fwereade, yeah.. i forgot the unit agents download the charms directly, for upgrades only
<hazmat> but i ran into the issue that the charm urls (from provider storage) aren't the same if i bind the webserver on localhost in the host
<hazmat> its localhost in the provider and 192.168.122.1 in the unit
 * hazmat wonders if he should break for coffee
<fwereade> what does 192.168.122.1 resolve to in the host?
<hazmat> ah.. i can bind it explicitly to that interface probably
<hazmat> fwereade, it is the host
<hazmat> yeah.. that's the ticket
<hazmat> fwereade, thanks
<fwereade> hazmat: yw
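(A sketch of where that lands, using twisted.web: serve the provider storage directory over HTTP bound to the libvirt bridge address, so the host and the LXC units resolve the same charm URLs. The path and port are assumptions.)

    from twisted.internet import reactor
    from twisted.web.server import Site
    from twisted.web.static import File

    STORAGE_DIR = "/var/lib/juju/storage"  # assumed location
    BRIDGE_ADDR = "192.168.122.1"          # default libvirt bridge address

    # Binding to the bridge rather than localhost makes the stored charm
    # URLs reachable both from the host and from inside the containers.
    reactor.listenTCP(8080, Site(File(STORAGE_DIR)), interface=BRIDGE_ADDR)
    reactor.run()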
<niemeyer> Morning all!
<niemeyer> Sorry, a bit late.. there was a fierce fight with bed this morning
<fwereade> heya niemeyer
<niemeyer> fwereade: yo!
<niemeyer> fwereade: Some of the fight was useful.. I woke up with a thought in my head about errors and the store
<fwereade> niemeyer: we could certainly make the error handling much more sophisticated
<niemeyer> fwereade: We need to tell people about non-existent and bad charms somehow
<fwereade> niemeyer: bad in what sense?
<niemeyer> fwereade: I think it's straightforward, but we need a patch soonish, and support on the fake thingy to see if it's working
<niemeyer> fwereade: bad as in there's content in a branch that the store can't pack
<fwereade> niemeyer: ah, ok
<fwereade> niemeyer: the fake needs some work anyway, it's only barely good enough to tell that it ought to work
<niemeyer> fwereade: My suggestion is this:
<fwereade> niemeyer: and doesn't even consider usernames
<niemeyer> fwereade: let's introduce a couple of additional keys for each entry returned through /charm-info
<niemeyer> fwereade: "warning", and "error"
<niemeyer> fwereade: The store would take these like that:
<niemeyer> Erm, sorry
<niemeyer> fwereade: The client would take these like that:
<niemeyer> 1) If there's a "warning", print it as a warning (duh) and continue using the received info normally
<fwereade> niemeyer: daring and unorthodox, but I can get behind that
<fwereade> :p
<niemeyer> 2) If there's an "error" raise a CharmError with the received string and the given charm URL
<fwereade> niemeyer: fair enough; what about multiple errors?
<fwereade> niemeyer: well, I guess we don't need to worry about them yet
<fwereade> niemeyer: API sounds sensible though
<niemeyer> fwereade: Yeah, we'll sort them out in the server side for now
<niemeyer> fwereade: Please note these go inside each individual charm's json doc
<niemeyer> fwereade: So, e.g.:
<niemeyer> fwereade: {charm_url: {"error": "no metadata.yaml found"}}
<fwereade> niemeyer: yep, they're charm info not request info
<niemeyer> fwereade: Yeah, +1
<fwereade> niemeyer: sounds good, I'll have a go at that now then
<niemeyer> fwereade: Thanks!
<niemeyer> SpamapS: We have a couple of bug fixes in the pipeline, FYI
<niemeyer> SpamapS: One is merged, the other fwereade is working on right now
<_mup_> Bug #870000 was filed: client should understand errors and warnings from the charm store <juju:In Progress by fwereade> < https://launchpad.net/bugs/870000 >
<fwereade> niemeyer: suggestion: list of warnings, rather than restricting ourselves to just one?
<niemeyer> fwereade: List of warnings and list of errors? Hmm
<fwereade> niemeyer: I'd imagined an error to be a "you're boned, processing stops now" condition
<fwereade> niemeyer: whereas if a warning doesn't stop anything, more warnings are possible
<niemeyer> fwereade: Yeah, but there is always the "you're _seriously_ boned" case
<niemeyer> fwereade: +1 on lists for both
<fwereade> niemeyer: ok, sounds good
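(A sketch of the client side as just agreed: each charm's doc in the /charm-info response may carry "warnings", which are printed before continuing, and "errors", which abort. The helper names are hypothetical.)

    import json
    import logging

    class CharmError(Exception):
        pass

    def process_charm_info(body):
        infos = {}
        for charm_url, info in json.loads(body).items():
            for warning in info.get("warnings", []):
                logging.warning("%s: %s", charm_url, warning)
            if info.get("errors"):
                raise CharmError(charm_url, info["errors"])
            infos[charm_url] = info
        return infos

    # e.g. body = '{"cs:oneiric/sample": {"errors": ["no metadata.yaml found"]}}'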
<hazmat> niemeyer, are you planning on doing a web ui on the store to start?
<niemeyer> fwereade: and calling them "warnings"/"errors" instead
<fwereade> niemeyer: indeed
<niemeyer> hazmat: Not to start.. I'm planning to maybe get it in time at all :-)
<hazmat> niemeyer, i was playing around with something yesterday, just because i needed a list of interfaces available from other formulas
<niemeyer> hazmat: Nice
<niemeyer> hazmat: Did you put the client interface to test?
<hazmat> niemeyer, i'm just querying lp and scanning bzr branches
<niemeyer> hazmat: Ah, ok
<hazmat> niemeyer, is there a store endpoint up already?
<niemeyer> hazmat: Nope
<hazmat> SpamapS, there's one more bug in progress on a fix for local provider upgrades as well
<hazmat> SpamapS, feel free to merge the deb dir removal as well
<niemeyer> hazmat, SpamapS: Erm, hold on?
<niemeyer> hazmat, SpamapS: Please don't remove the debian dir now.. the PPA depends on it, this isn't important right now I'd guess?
<hazmat> niemeyer, it's not to me.. but SpamapS had a pending branch out for a while regarding it
<hazmat> niemeyer, i pushed it to the review queue, and its currently awaiting a merge
<niemeyer> hazmat: Ok, I'm pushing it back then
<niemeyer> hazmat: and retargetting to the florence milestone
<niemeyer> hazmat: There's no reason for us to rush this in and have to fix the PPA _right now_
<hazmat> niemeyer, fine by me
<niemeyer> Review queue is empty
<niemeyer> hazmat: and man, good catch on the DEBIAN_FRONTEND
<niemeyer> Totally forgot about it
<rog> niemeyer: i'm still waiting for some feedback on the changes i made on my merge proposals in response to your comments, BTW. not that it's that crucial.
<niemeyer> rog: Yeah, I know.. I've been focusing on the release since yesterday
<rog> niemeyer: that's fine, just checking.
<niemeyer> rog: The changes to the Server interface I think should really be postponed, btw
<rog> niemeyer: you mean the factoring out of the service package?
<niemeyer> rog: Yeah
<niemeyer> rog: I'll check your branches now
<rog> niemeyer: i've gone with you on that, yes
<niemeyer> rog: Cool, cheers
<rog> niemeyer: i've merged back in the fixes that were in that branch
<niemeyer> rog: Sweet, checking it out
<niemeyer> rog: Your juju branch is ready for action, btw
<rog> ?
<rog> ah, you mean fix-tutorial-with-expose?
<rog> niemeyer: when i try to push to lp:juju, i get:
<rog> bzr: ERROR: Cannot lock LockDir(lp-82305296:///%2Bbranch/juju/.bzr/branchlock): Transport operation not possible: readonly transport
<niemeyer> rog: You probably have a wrong url there
<rog> ok
<niemeyer> rog: What's "bzr info" telling you about the push location?
<rog> i did an explicit push
<rog> % bzr push lp:juju
<rog> http://paste.ubuntu.com/703998/
 * hazmat is annoyed by twistd
<rog> odd
<jamespage> hazmat: http://paste.ubuntu.com/704017/
<jamespage> not sure libzookeeper-java is going to give us quite enough for the local provider
<hazmat> jamespage, ah.... that's why we wanted zookeeperd
<hazmat> jamespage, we can work around that
<jamespage> well zookeeper is actually enough
<hazmat> but its a bug
<jamespage> that way nothing starts - but you still get the configuration files
<hazmat> jamespage, i think zookeeperd actually sets up /etc/zookeeper/conf
<hazmat> we source it for env variables
<jamespage> I just tried on a clean server install - zookeeper is enough
<hazmat> jamespage, cool
<jamespage> zookeeperd just installs the init scripts I think
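A rough sketch of the kind of pkg check hazmat mentions updating below; package_installed is a made-up helper, and dpkg-query's exit status is only a rough proxy for "installed":

    import subprocess

    def package_installed(name):
        # dpkg-query -W exits non-zero when the package is unknown to dpkg
        return subprocess.call(
            ["dpkg-query", "-W", name],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0

    # the zookeeper package is enough for the local provider; zookeeperd
    # would additionally start the daemon via its init scripts
    assert package_installed("zookeeper"), "local provider needs zookeeper"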
<rog> QQs: what's the difference between machine_id and instance_id, and why does ec2.securitygroup.open_provider_port take both a machine and machine_id?
<rog> ( can't you get a machine_id from a machine?)
<hazmat> jamespage, i'm not sure who sets it up.. i see /etc/zookeeper/conf_example from zookeeper, but it looks like some sort of script sets up the actual directory /etc/zookeeper/conf
<jamespage> hazmat, the configuration is managed by the alternatives system
<jamespage> (I think)
 * jamespage goes to look
 * jamespage worries his memory is not what it used to be
<jamespage> hazmat: yep - http://tinyurl.com/6a88tag
 * jamespage not so worried anymore
<jamespage> I personally don't like that much - inherited from previous package maintainer
<hazmat> jamespage, well you're the maintainer now.. :-)
<hazmat> jamespage, thanks, i'll update the pkg check and docs for zookeeper
<_mup_> juju/local-provider-docs r394 committed by kapil.thangavelu@canonical.com
<_mup_> update dependency s/libzookeeper-java/zookeeper
<hazmat> jamespage, alternatively we could manually scan /usr/share/java for the ones we need (minus version numbers).. but its more of a slippery slope
<hazmat> libs we need that is
<hazmat> bcsaller, you mentioned you might have some additions to the local provider docs?
<hazmat> for some reason twistd won't daemonize for me..
 * hazmat smells a rabbit hole
<hazmat> wrong cli arg
<rog> niemeyer:
<rog> package provider
<rog> // Machine represents a running machine instance.
<rog> type Machine interface {
<rog> 	Id() string
<rog> 	DNSName() string
<rog> 	PrivateDNSName() string
<rog> }
<rog> type Port struct {
<rog> 	Proto string
<rog> 	Number int
<rog> }
<rog> type Interface interface {
<rog> 	// StartMachine asks for a new machine instance to be created.
<rog> 	// The id of the new machine is given by id.
<rog> 	// The currently running list of zookeeper machines
<rog> 	// is given by zookeepers.
<rog> 	// It returns the new Machine (which is not necessarily
<rog> 	// running yet).
<rog> 	StartMachine(id string, zookeepers []Machine) (Machine, os.Error)
<rog> 	// Machines returns the list of currently started instances.
<rog> 	Machines() ([]Machine, os.Error)
<rog> 	// OpenPort opens a new port on m to the outside world.
<rog> 	OpenPort(m Machine, p Port) os.Error
<rog> 	// ClosePort closes the port on m.
<rog> 	ClosePort(m Machine, p Port) os.Error
<rog> 	// OpenedPorts returns the list of currently open ports
<rog> 	// on m.
<rog> 	OpenedPorts(m Machine) ([]Port, os.Error)
<rog> 	// URL returns a URL that can be used to access the given file.
<rog> 	URL(file string) (string, os.Error)
<rog> 	// Get returns the contents of the given file as a string.
<rog> 	Get(file string) (string, os.Error)
<rog> 	// Put writes contents to the given file.
<rog> 	Put(file string, contents string) os.Error
<rog> 	// Destroy shuts down all machines and destroys the environment.
<rog> 	Destroy() os.Error
<rog> }
<rog> // Register registers a new provider. Name gives the name
<rog> // of the provider. The connect function is to be used to connect
<rog> // to the given provider type; attrs gives any provider-specific
<rog> // attributes; and it should return the newly created provider.Interface.
<rog> //
<niemeyer> rog: paste.ubuntu.com
<niemeyer> rog: paste.ubuntu.com
<niemeyer> rog: paste.ubuntu.com
<niemeyer> rog: paste.ubuntu.com
<niemeyer> WTF
<rog> muchos apologies folks
<rog> it seems that xclip does not work
<niemeyer> rog: It worked very well apparently! ;-)
<niemeyer> rog: Btw, your url was indeed wrong
<niemeyer> rog: The trunk branch belongs to the juju team
<niemeyer> rog: which you're part of
<niemeyer> rog: So to be able to commit/push, you need to be using it as lp:~juju/juju/trunk
<niemeyer> rog: Please be extra careful there
<rog> niemeyer: no, xclip didn't work. i'd told it to hold the URL!
<niemeyer> rog: LOL
<niemeyer> rog: update-server-interface reviewed
<niemeyer> rog: A few comments, but good stuff overall
<rog> it seems that X clipboards are fundamentally broken, a fact which i knew once, but had forgotten.
<niemeyer> rog: Yeah, I do know that one
<niemeyer> rog: For a while!
 * niemeyer => lunch
<rog> niemeyer: i'm also off for the weekend
<niemeyer> rog: Nice, enjoy!
<rog> niemeyer: have a good one!
<niemeyer> rog: Btw, warm +1 on "error"
<niemeyer> rog: Let's try to get this one in
<rog> niemeyer: yeah, i think it works ok
<rog> niemeyer: when i thought of it, i was "yeah, that works"
<niemeyer> rog: I was looking for an alternative to the ugly error.Value
<rog> niemeyer: me too.
<niemeyer> rog: But couldn't find anything else.. a standard "error" would be delicious
<rog> yup
<niemeyer> Hmmm.. delicious.. lunch!
<niemeyer> Cheers!
<niemeyer> :)
<rog> ttfn
<rog> have a good w/e
<rog> ha ha! it seems that my editor uses xclip's default clipboard, but everything else uses a different one. so i can't make xclip work with both. for god's sake.
<rog> force majeure: http://paste.ubuntu.com/704053/
<rog> that seems to work. i promise that i will try very hard not to do multiline pastes again
<hazmat> plan9?
<rog> hazmat: a plan 9 compatibility library i use to introduce some sanity into my command line
<rog> hazmat: (and my C programs, when i write them)
<rog> rc is a nice minimal shell for scripting in.
<rog> it fixes some of the fundamental problems with direct sh/csh derivatives
<rog> niemeyer: here's what i was originally trying to paste
<rog> http://paste.ubuntu.com/704063/
<rog> a sketch of what the juju provider interface might look like in Go.
<rog> any inputs as to whether or not that might be approaching sufficient would be much appreciated.
<rog> right, gotta go. see y'all on monday.
<rog> i'll leave the machine on IRC for a while though, so i'll see any comments.
<hazmat> rog have a good one
<niemeyer> rog: The interface looks pretty good
<niemeyer> The new logo is quite neat
<niemeyer> https://juju.ubuntu.com/
<hazmat> niemeyer, but i liked the juju man ;-)
<niemeyer> hazmat: Yeah, I liked it as well, but some people didn't
<niemeyer> hazmat: Do you want to have a quick look at this, given we don't have much time to get it wrong: https://code.launchpad.net/~fwereade/juju/charm-store-errors/+merge/78635
<hazmat> niemeyer, looking
<hazmat> niemeyer, i've got one last branch that i need to push as well
<hazmat> niemeyer, also if you have a chance to look over the local provider docs
<niemeyer> hazmat: I've already reviewed everything this morning
<niemeyer> hazmat: You have a few comments in the branches that were up
<hazmat> niemeyer, nice
<hazmat> niemeyer, fwiw empty tuple is also a single allocation in python ()
<hazmat> hmm.. maybenot
<hazmat> interesting.. id func in python has some strange behavior id(object()) == id(object())
<jimbaker> hazmat, that's almost certainly because the memory is immediately reclaimed, then used again
<jimbaker> if you hold a ref to the first object(), no such equality
<hazmat> jimbaker, thanks.. that was rather confusing
<hazmat> so indeed python does a single allocation for the empty tuple
<jimbaker> such object pooling is an important optimization in cpython. also  in part why unladen swallow was doomed
<jimbaker> hazmat, correct, i believe that's the behavior in jython too
<jimbaker> again, just an optimization
<jimbaker> http://docs.python.org/library/functions.html#id - Two objects with non-overlapping lifetimes may have the same id() value
<niemeyer> hazmat: Hmm.. that was actually my point
<hazmat> niemeyer, oh.. i thought you were just referencing go
<niemeyer> hazmat: No, it was a brain hiccup
<niemeyer> hazmat: There's no such thing as a () object in Go
<niemeyer> hazmat: The empty tuple is a rock. :-)
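The behaviour under discussion, condensed; these are CPython implementation details, not language guarantees:

    a = object()
    b = object()
    print(id(a) == id(b))                # False: both objects alive at once
    print(id(object()) == id(object()))  # usually True: the first object is
                                         # freed and its memory reused at once
    print(() is ())                      # True in CPython: one shared empty tuple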
<hazmat> niemeyer, this interface is a little.. we're expecting to get back json errors and warnings from the charm server?
<hazmat> embedded in a charm info?
<niemeyer> hazmat: Hmm
<hazmat> oh.. its a collection url
<niemeyer> hazmat: I'd put it slightly differently
<niemeyer> hazmat: We're expecting to get errors and warnings related to the charm as part of the charm info
<hazmat> niemeyer, what sort of errors and warnings?
<niemeyer> hazmat: Completely broken charms, for instance
<niemeyer> hazmat: Since it's a bazaar branch, there's no way to prevent them from being pushed
<hazmat> niemeyer, i noticed :-).. all kinds of style variations on the pushes
<hazmat> trunk-1, trunk, random stuff .. its really all over the place already
<hazmat> it looked like trunk-1 was an attempt to add series to existing trunks
<niemeyer> hazmat: Yeah.. the store will help making them more even
<niemeyer> hazmat: That's not entirely strange.. hmm.. I suspect it may actually have been done by LP itself at some point
<niemeyer> hazmat: Either way, we'll only be looking at /trunk for now
<hazmat> that leaves about 22 charms
<hazmat> from ~charmers
<hazmat> out of 70 some
<hazmat> numbers change.. practices evolve
<niemeyer> hazmat: Sounds ok.. easy to fix
<niemeyer> hazmat: In practice, we'll want the branch name to become invisible in the future
<hazmat> niemeyer, definitely
<hazmat> niemeyer, sometimes the end segment is actually the charm name.. only for collectd and collectd-node that i saw
<niemeyer> hazmat: You mean it's repeated? Like collectd/collectd?
 * hazmat digs out his script
<hazmat> niemeyer, yup
<hazmat> skipping r-B ~charmers/charm/oneiric/collectd/collectd
<hazmat> skipping r-B ~charmers/charm/oneiric/collectd/collectd-node
<niemeyer> hazmat: Cool.. these should be collectd/trunk and collectd-node/trunk
<niemeyer> hazmat: It also highlights the importance of a strong convention there
<hazmat> niemeyer, well i expect juju push charm_name ... will help a lot
<niemeyer> hazmat: +1
<hazmat> review in.. back to networking local provider storage
<hazmat> fwereade, ^
<niemeyer> hazmat: Cheers!
<_mup_> juju/go-charm-bits r14 committed by gustavo@niemeyer.net
<_mup_> Moved expanding logic to its own function as suggested by Rog
<_mup_> in the review.
<_mup_> juju/local-provider-storage r395 committed by kapil.thangavelu@canonical.com
<_mup_> web access to the local provider disk storage
<_mup_> juju/local-provider-storage r396 committed by kapil.thangavelu@canonical.com
<_mup_> wire in storage server into provider bootstrap and destroy-env
<_mup_> juju/go-charm-bits r15 committed by gustavo@niemeyer.net
<_mup_> Remove internal filepath.Rel. It's now upstream.
<_mup_> juju/go r12 committed by gustavo@niemeyer.net
<_mup_> Merged go-charm-bits branch [r=fwereade,rogpeppe]
<_mup_> This fixes several problems in the bundling and expansion of
<_mup_> charms in the Go port.
<hazmat> niemeyer, jimbaker, bcsaller if you have a moment the network'd local provider storage could use a look
<jimbaker> hazmat, i will take a look
<hazmat> jimbaker, thanks
<hazmat> niemeyer, just a fwiw, the zk project ended up linking directly to our out of date docs on zookeeper usage within juju
<bcsaller> hazmat: http://pastebin.ubuntu.com/702946/
<niemeyer> Hmm
<niemeyer> hazmat: Which docs are that?
<niemeyer> SpamapS: How is the openstack conf going?
<hazmat> niemeyer, the ones linked to from the zookeeper poweredby wiki page.. they linked (not at my request) to https://juju.ubuntu.com/docs/internals/zookeeper.html
<hazmat> i gave them juju.ubuntu.com as a link.. but oh well.. i did notice how out of date those docs are
<niemeyer> hazmat: Cool.. still a quite reasonable overview I guess, from the perspective of someone interested in a vague feeling of what we do
<niemeyer> hazmat: From our end, though, yeah, that's quite out of date
<_mup_> juju/hooks-with-noninteractive-apt r396 committed by kapil.thangavelu@canonical.com
<_mup_> also capture APT_LISTCHANGES_FRONTEND for noninteractive hook usage
<hazmat> bcsaller, +1 on the trivial.. we should have someone else look as well
<hazmat> jimbaker, thanks for the review
<hazmat> niemeyer, jimbaker could either of you look at that trivial ben just posted.. it forces the dnsmasq into the resolver config by inserting into head.. both jamespage and bcsaller had problems getting dns resolution working without it.. i don't understand why it's needed as it should be picked up from dhcp.. but it works and solves a real issue for some.
<hazmat> er.. fixes the issue for those who had it
<niemeyer> hazmat: Isn't it because it's being regenerated?
<hazmat> niemeyer, so we set it manually for the chroot customize script.. but when the container boots, it should pick it up from the dhcp server (dnsmasq)
<niemeyer> hazmat: Ok.. I don't understand it either, but if it fixes the problem it looks good for the moment
<niemeyer> bcsaller: Extra space after the ">" please, as usual for other similar lines in the same file
<hazmat> but for some reason it's not.. doing this insertion into /etc/resolvconf/resolv.conf.d/base ensures that it's *always* included in the generated resolv.conf
<hazmat> i wonder if its a race condition exposed by ssd vs rotating disk.
<hazmat> jimbaker, does your desktop you tested local provider with have a rotating disk?
<niemeyer> hazmat: What content ends up in the file when it doesn't work?
<hazmat> niemeyer, resolv.conf is empty
<niemeyer> Hmm
<hazmat> which makes no sense.. because their networking is running, and dnsmasq handed out the address, and can resolve the container name..
<niemeyer> hazmat: Has someone tried to re-get the dhcp information after that?
<niemeyer> hazmat: I'm wondering if the started dhcp is actually not answering DNS requests properly
<hazmat> niemeyer, not afaik
<jimbaker> hazmat, it has an ssd
<hazmat> oh well there goes that idea..
<hazmat> bcsaller, ^ you want to try and debug root cause
<jimbaker> hazmat, i certainly have not needed ben's trivial, not certain what makes my env diff
<hazmat> jimbaker, it worked for you, me, and kim0|holiday without the change.. but it didn't work for jamespage or bcsaller without it
<bcsaller> hazmat: I can look at it again, sure
<hazmat> bcsaller, we should still go ahead and commit, i'm curious if we can get a better understanding of the problem, else its a chicken ;-)
<hazmat> bcsaller, might need tcpdump to look at the queries on the wire
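What the trivial boils down to, roughly; the nameserver address is an assumption (the bridge dnsmasq listens on), and writing to resolvconf's base file is what guarantees the entry survives resolv.conf regeneration:

    BASE = "/etc/resolvconf/resolv.conf.d/base"
    ENTRY = "nameserver 192.168.122.1\n"  # assumed dnsmasq/bridge address

    with open(BASE, "a+") as f:
        f.seek(0)
        if ENTRY not in f.read():
            f.write(ENTRY)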
 * hazmat gives up on sup
<hazmat> too many random messages won't load
<_mup_> juju/local-provider-docs r395 committed by kapil.thangavelu@canonical.com
<_mup_> address review comments
<_mup_> juju/trunk r396 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-provider-docs [r=jimbaker,niemeyer][f=867991]
<_mup_> Basic usage docs for using the local provider.
<_mup_> juju/go-charm-url r16 committed by gustavo@niemeyer.net
<_mup_> Syncing regex from Python code.
<hazmat> niemeyer, btw.. it looks like the resolved install_error does indeed go all the way through to start, i must have fixed it and forgotten about it
<niemeyer> hazmat: Ah, phew.. sweet
<hazmat> so afaics the pending for eureka is good, just two important items in the review queue.
<hazmat> actually i'd consider the orchestra docs to be important but i think andreas is still at openstack conf
 * hazmat goes back to playing with a charm browser
<_mup_> juju/go-charm-url r17 committed by gustavo@niemeyer.net
<_mup_> - Changed error messages as suggested by Rog.
<_mup_> - More tests on charm URL parsing.
<_mup_> - Fixed parsing bug.
<_mup_> juju/go r13 committed by gustavo@niemeyer.net
<_mup_> Merged go-charm-url branch [r=rogpeppe,fwereade]
<_mup_> This introduces support for charm URLs in the Go port.
<niemeyer> I'll head out for some exercising
<jimbaker> niemeyer, enjoy!
<niemeyer> Will try to work a bit on that stuff over the weekend
<niemeyer> jimbaker: Cheers
<hazmat> niemeyer, cool, have a good one
<hazmat> hmm. bzr has to have a command to get out the rev id
<jimbaker> hazmat, you might want to look at the butler code in ftests
<hazmat> jimbaker, probably not
<hazmat> but thanks
<hazmat> jimbaker, i'm working against revids not revnos
<hazmat> its a hidden command in bzr.. revision-info
<jimbaker> hazmat, sounds good
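The hidden command, wrapped for script use; this assumes bzr's --directory option and the "<revno> <revid>" output format:

    import subprocess

    def branch_revid(branch_dir="."):
        # `bzr revision-info` prints "<revno> <revid>" for the branch tip
        out = subprocess.check_output(["bzr", "revision-info", "-d", branch_dir])
        return out.split()[1]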
 * niemeyer observes hazmat implementing a second store
<hazmat> race? ;-) .. more seriously i just want a web interface to see formulas and interfaces, it's rather hard to make something that can connect to multiple things without some sort of interface repo
<hazmat> unless you're writing all the charms yourself
<hazmat> niemeyer, also playing around with redis as a queue server
<hazmat> fairly lightweight.. still not sure i'm using it usefully.. but dropping everything into mongo at the end
<niemeyer> hazmat: Sure, as long as you're not planning to put that online for people to consume, that's fine
<SpamapS> niemeyer: pretty insane response to the demo w/ Jane
<niemeyer> SpamapS: Ohhh.. please tell me about it!
<SpamapS> niemeyer: the talk later was a little bit disorganized but the buzz was *HUGE*
<SpamapS> niemeyer: OpenStack people are very excited. Cisco has a similar project called Donabe that focuses more on how to define the network resources needed between apps but clearly has the same direction..
<niemeyer> SpamapS: That's very exciting for us too!
<SpamapS> niemeyer: anyway, we deployed hadoop on openstack in 5 minutes.. people were *very* impressed.
<SpamapS> and the status2gource visualization was wowing people ;)
<SpamapS> niemeyer: we also made it clear that we had deployed openstack using juju
<SpamapS> niemeyer: https://launchpad.net/donabe btw ;)
<niemeyer> SpamapS: 5 minutes?  Woah
<SpamapS> niemeyer: to be clear, openstack was already deployed.. we just spun up hadoop-master, hadoop-slave, and ganglia (and 5 additional units with add-unit hadoop-slave)
<SpamapS> niemeyer: but the commands all seemed to resonate with the audience
<SpamapS> I believe there will be video
<niemeyer> SpamapS: juju making its magic.. neat :-)
<SpamapS> niemeyer: yeah, it helped that all the nodes were on a single machine, so network transfer speed was basically 1/2 of RAM bus speed
<niemeyer> SpamapS: I bet
<SpamapS> and we had a pretty beefy 6x15krpm RAID5 backing all instance volumes
<SpamapS> niemeyer: the box we had was actually a 40 core machine w/ 128G of ram
<SpamapS> really fun to play with
<SpamapS> definitely a lot to do to make the juju on openstack experience more robust.. the EC2 stuff on nova works ok.. but often gives back responses that txaws gets confused by
<SpamapS> anyway, about to board flight back to LA
<niemeyer> SpamapS: Woah
<niemeyer> SpamapS: I've never been even close to such a machine :)
<niemeyer> SpamapS: COol, have a good flight (and weekend!)
#juju 2011-10-08
 * hazmat yawns
<_mup_> juju/go-new-revisions r14 committed by gustavo@niemeyer.net
<_mup_> Implemented new schema for charm revision handling in an
<_mup_> independent file in the Go port, including backwards
<_mup_> compatibility with the previous schema, and also SetVersion
<_mup_> methods that enable bundling and expanding charms with
<_mup_> custom revisions (necessary for store).
<_mup_> Bug #870906 was filed: Go port needs to handle new revision schema <juju:In Progress by niemeyer> < https://launchpad.net/bugs/870906 >
#juju 2011-10-09
<hazmat> niemeyer, if you have a moment my last branch in the review queue fixes an important bug for the release
<hazmat> niemeyer, https://code.launchpad.net/~hazmat/juju/local-provider-storage/+merge/78652
<_mup_> juju/trunk r399 committed by kapil.thangavelu@canonical.com
<_mup_> merge hooks-with-noninteractive-apt [r=niemeyer][f=#812343]
<_mup_> When executing hooks set appros environment variables for non-interactive
<_mup_> apt usage.
<niemeyer> hazmat: I'm around
<niemeyer> hazmat: Had already seen your branch, but I want to have another look at it more relaxed.. will do that first thing in the morning tomorrow.. we can sync with SpamapS to get it merged
<jason_> Hi guys, I'm following along with the juju orchestra instructions, and I'm caught up on admin-secret -- there's a foo value in the sample config -- don't know what the equiv of that is for my orchestra install
<jason_> I see in the irc logs that admin-secret can be anything?
<jason_> I'm getting Connection was refused by other side: 111: Connection refused.
#juju 2013-09-30
<marcoceppi> paulczar_: no
<marcoceppi> lazyPower: thanks for the bugs o/
<lazyPower> marcoceppi: anytime
<lazyPower> I'm going to have more coming, I'm going to start by charming hubot.
<lazyPower> helllooooo node
<lazyPower> Wrapping up some template edits, then back to charm school
<marcoceppi> lazyPower: there's a node.js charm, similar to the rails charm, you might want to check it out
<lazyPower> Thats the plan :)
<stokachu> fyi: https://juju.ubuntu.com/docs/config-environments.html seems to be a broken link
<stokachu> was from the getting started page https://juju.ubuntu.com/docs/getting-started.html
<stokachu> https://juju.ubuntu.com/docs/howto.html both deploy docs point to nodejs
<stokachu> ah i see where i can just branch and fix it
<stokachu> ill do that
<davecheney> stokachu: grab charm tools from
<davecheney> ppa:juju/stable
<davecheney> then you can do
<davecheney> mkdir charms
<davecheney> sorry
<davecheney> mkdir charms/precise
<davecheney> cd charms/precise
<davecheney> charm get nodejs
<davecheney> play with it
<davecheney> then
<davecheney> juju deploy --repository=$(pwd)/../.. local:precise/nodejs
<davecheney> you can also switch to your local version of the charm if you have the charmstore one deployed
<stokachu> davecheney: ah is this related to the online documentation? thats what i was referring to
<stokachu> davecheney: was going to update the docs as some of the urls are incorrect
<davecheney> stokachu: cool
<davecheney> docs are in a branch in lp:juju-core
<stokachu> davecheney: cool thanks checking it out locally now
<stokachu> are MP's the preferred way or does a bug need to be linked to it?
<davecheney> stokachu: we'll take anything we can get
<stokachu> davecheney: sounds good :D will get those done in a few minutes
<stokachu> so many broken links, should I not worry about them under the assumption those pages will eventually be added? or should i remove the link references until a page is created
<stokachu> for example, https://juju.ubuntu.com/docs/troubleshooting.html
<stokachu> davecheney: ok got a MP created for my initial pass-through
<yolanda> hi, i have a subordinate charm that will be reused for 3 different services. Do i have to deploy a different subordinate charm for each of the services?
<jamespage> yolanda, juju deploy subordinate subordinate-instance-1
<jamespage> yolanda, juju deploy subordinate subordinate-instance-2
<jamespage> yolanda, juju deploy subordinate subordinate-instance-3
<yolanda> jamespage, ok, that's what i first tried
<yolanda> but i have a problem
<jamespage> the last parameter names the instance of the subordinate service
<yolanda> i need a way to tell whether the relation is for one charm or another
<yolanda> i have a configurator charm, that updates config for gerrit, zuul, jenkins
<yolanda> so i will have to create 3 different interfaces then?
<yolanda> we have something common for the 3, and then we have something like: if relation_ids('gerrit-configurator') : ...
<yolanda> then i find that when i associate that with zuul it also has the gerrit-configurator relationship
<yolanda> so i'll try with 3 different subordinates
<stokachu> hi, just fyi i filed an MP for some juju-docs corrections
<marcoceppi> stokachu: thanks for the submission!
<jamespage> yolanda, yeah - you would need to implement three differently typed interfaces
<yolanda> jamespage, ok, that works, but i wasn't sure if that was the right way
<jcastro> heya jamespage
<jcastro> Reminder that you're down for reviewer this week
<jcastro> m_3: marcoceppi: we're still in a hole if you guys have time to dig in
<jcastro> negronjl: We miss you. :)
<negronjl> jcastro: lol ... miss you too people ... but they have me tied down like a slave here :/
<marcoceppi> jcastro: ack, I've got amulet to release, but I'll poke at the queue with a hard stick soon
<stokachu> marcoceppi: thanks, ive got a big project im working on that will drive more documentation to the public facing juju site
<m_3> jcastro: ack
<zradmin> anyone on?
<marcoceppi> zradmin: yup, though it's best to just ask your question as people might not be here right this second
<zradmin> Thanks marcoceppi, I'm still having the same issue with quantum not functioning properly. It brings up all of the other bridges except br-ex on eth1, but I have confirmed that I can manually assign an address to eth1 and talk on the external network. Is there a log file for openvswitch (or maybe the charm setup log) that I can check to see why it's failing to set up?
<rick_h_> sinzui: can you join #juju-gui for a sec? We've got a promulgation question on bundles and how to link branches to series
 * sinzui #juju-gui
<sidnei> jcastro: around?
<sidnei> or marcoceppi
<marcoceppi> sidnei: hey
<sidnei> marcoceppi: hey, just realized https://juju.ubuntu.com/Events/ is missing the charm school & talk im giving at PythonBrasil
<sidnei> not sure how to get it updated (even if quite late by now )
<marcoceppi> sidnei: we can usually do it but we're locked out ATM
<sidnei> ok, no problem
<jcastro> sidnei: I'm getting on a call, you need to mail Peter Mahnke to add it
<arosales> hazmat, what was the caveat for local provider on systems without swap?
<hazmat> arosales, with encrypted home dirs.. JUJU_HOME env var needs to be set to not be in $HOME
<hazmat> arosales, else reboot won't work and the env will need to be destroyed before using
<arosales> hazmat, but other than that, a swapless system is ok with local?
<hazmat> arosales, should be fine given enough mem to run core and mongo.. what's the context?
<arosales> hazmat, just my fragmented memory recalling a caveat with swapless systems
<arosales> hazmat, I think I just may be hitting some of the issues thumper has recently fixed
<hazmat> arosales, i can't think of a reason why a system wouldn't have swap
<arosales> hazmat, ok and thanks for the reply
<hazmat> i mean.. for a laptop.. for example no suspend to disk.. no virtual memory.. you get oom killing your processes instead of overcommit.
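hazmat's encrypted-home workaround as a sketch; the target path is just an example, the point is only that JUJU_HOME lives outside $HOME:

    import os
    import subprocess

    # keep juju state out of the encrypted home directory so it survives reboots
    os.environ["JUJU_HOME"] = "/var/lib/juju-home"  # example path, not canonical
    subprocess.call(["juju", "bootstrap"])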
<lazyPower> marcoceppi: Juju stole the show at work with the GUI. I used screenshots to present it initially, I have a functional demo scheduled next week.
<bic2k> I have an issue where the juju tools can no longer communicate with our bootstrap server (which is returning 503). SSH'ed on to the server, only thing that is odd is that the /var/log/juju/all-machines.log is rolling out tons of text, probably eating up all the free space in an hour or so. This is on AWS with juju-1.13.1-unknown-amd64 on the client and 1.14.1-precise-amd64 on the server
<bic2k> ls -l
<bic2k> and nvm. Turns out my JUJU_HOME was set differently after a .bashrc change
<sarnold> interesting, thanks..
<zradmin> bic2k: i had the same issue, you need to upgrade juju to at least 1.13.3.1 to get rid of that error, both on your main node and the bootstrap node
<bic2k> zradmin: Ya, I'm stuck on whatever is ported to brew for now. Looks like I'll be able to work around it for now
#juju 2013-10-01
<thumper> lazyPower: good to hear
<thumper> lazyPower: I'm giving a juju demo this afternoon too
<lazyPower> I'm excited to get started. The learning curve to just get started was an hour vs the 3 months i've been looking at and working off and on with competing frameworks.
<thumper> \o/
<lazyPower> Honestly, i'm surprised you don't throw that down as one of the benefits. "Get started in about an hour - no long learning curve, no extra-curricular setup required if your PC supports LXC. JUJU - it's made from magic"
<thumper> :)
<lazyPower> I suppose it's now obvious that I program for a marketing firm, looking at the marketing angle first - not the raw and lean
<lazyPower> awesomeness that's in front of me
<lazyPower> Where are you located Thumper? If you don't mind me asking.
<thumper> lazyPower: new zealand
<davecheney> east australia
<thumper> davecheney: :P
<lazyPower> seems legit
<thumper> australia is the west island
<davecheney> shit, that means I live in alice sprints
<davecheney> springs
<lazyPower> is this a bad thing?
<davecheney> lazyPower: do you like living in a desert ?
<lazyPower> I've lived in west texas, it wasn't all bad.
<lazyPower> Do i need to destroy my local environment after every reboot or am I not doing something to cleanly shut down my local juju environment?
<davecheney> lazyPower: what do you want do to ?
<davecheney> destroy your enviroment, or reboot your computer ?
<lazyPower> Well, I reboot the computer with the juju environment active out of ignorance
<lazyPower> now i cant seem to get the nodes to come back online, juju status hangs with no output.
<davecheney> lazyPower: sounds like a bug
<davecheney> it would be nice if rebooting wouldn't destroy all your work
<lazyPower> agreed
<lazyPower> I'll file it
<zradmin> everyone gone for the day?
 * davecheney waves
<zradmin> hey davecheney :), you wouldn't happen to know anything about the quantum charm would you?
<davecheney> zradmin: probably not much
<davecheney> what is your question
<zradmin> i almost have openstack deployed via juju... but my instances aren't able to receive any traffic
<zradmin> davecheney: I get a lot of these messages in the syslog: Sep 30 17:38:03 m7q49 dnsmasq-dhcp[2371]: DHCP packet received on qbr480533b5-f8 which has no address
<davecheney> zradmin: given i know nothing about the quantum charm
<davecheney> i'm still going to say that is unrelated to the charm
<davecheney> all the charm does is apt-get install and twiddle some config files
<davecheney> the rest is going to be the product itself
<zradmin> yeah... I've inspected the configs plenty of times and the physical interfaces are all active so that should be fine
<davecheney> zradmin: my best guess is that a dhcp request packet arrived on qbr480533b5-f8
<davecheney> for a host that dnsmasq has not been configured to service
<zradmin> the odd thing is I can see it create the ports etc in the syslog as I create/destroy instances
<davecheney> zradmin: is the lan isolated ?
<davecheney> remember, i know nothign about quantum
<zradmin> yeah its isolated, I may dig a bit deeper into dnsmasq to see if i can manually confirm the port has been created
<zradmin> so that helped to point me in a new direction :) also its good to vent about it a little so thanks!
<davecheney> win/win
<hazmat> zradmin, by instances you mean vms in openstack?
<hazmat> you might be able to get a little farther with the neutron admin cli
<zradmin> hazmat: im using quantum atm
<hazmat> zradmin, quantum == neutron fwiw
<zradmin> oic
<hazmat> rename due to trademark
<omgponies> anyone around able to help me with  local environment key issue ?
<zradmin> hazmat: nice
<omgponies> I get this error - Permission denied (publickey,password) when trying to 'juju debug-log' after running 'sudo juju bootstrap'
<omgponies> I have ssh keys in ~/.ssh/id_rsa, id_rsa.pub
<davecheney> omgponies: what happens when you do juju ssh 0
<omgponies> same error
<davecheney> i'm not 100% sure if debug log works on the local provider atm
<zradmin> omgponies: are you using MAAS as well?
<omgponies> i have a passphrase on the key ...  but I've used ssh-add to cache the phrase
<omgponies> nah,  but I do use EC2 with no issues
<omgponies> just so slloooow
<zradmin> omgponies: what version of juju are you on?
<davecheney> omgponies: i suspect we're not passing the right flags when we fork ssh
<omgponies> 1.14.1-raring-amd64
<davecheney> you could try changing /etc/ssh/ssh_config to always pass ForwardAgent: yes
<omgponies> doesn't seem to help
<davecheney> omgponies: would you be able to raise a bug please
<zradmin> hazmat: is that in havana btw? i can't seem to find any neutron console. 12.04 on grizzly
<davecheney> something like 'juju ssh/debug-log does not work on local provider when passphrase protected ssh key is in used'
<omgponies> will do ...   in juju-core @ launchpad ?
<davecheney> yup
<hazmat> zradmin, oh.. yeah.. it's havana
<zradmin> hazmat: is that stable at all yet or still in testing?
<hazmat> zradmin,  dunno.. i haven't used it myself outside of a lab
<hazmat> zradmin,  but the cli here should apply to quantum on grizzly http://docs.openstack.org/user-guide-admin/content/neutron_client_commands.html
<zradmin> hazmat: yeah its pretty much the same as quantums... it looks like everything creates itself properly for the vms, but i can't novnc/ssh to them because nova says it has no public address
<hazmat> zradmin, so typical setup gets a private tenant network and then a floating ip network for ingress.. maybe try create a floating ip and attach to vm
<hazmat> er. for public access
<hazmat> and by typical i mean the charm setup
<zradmin> hazmat: yeah i've done that as well, eth0 and eth1 (public) are both there but I had to manually create br-ex for some reason. Internal should have still worked as the vswitch creates an interface and attaches to br-int which is supposed to go through br-tun in a gre setup right? I never see any traffic crossing br-tun though
<hazmat> jamespage, ping ^
<zradmin> hazmat: so jamespage is who i have to hunt down?
<hazmat> zradmin, or adam_g.. although negronjl might be able to help as well
 * hazmat heads to sleep
<jamespage> hazmat, ping re https://code.launchpad.net/~james-page/juju-deployer/fixup-to-for-strings/+merge/188296
<jamespage> I also note a few other fixes for juju-deployer - if we want to push another point release this week would be good for saucy
<hazmat> jamespage, cool
<jamespage> hazmat,scratching my head over how to make the --to work well with nova-compute/ceph/swift all on the same nodes
<jamespage> I'll ping you an email to explain
<hazmat> jamespage, thanks.. i noted some fixes/bugs from last week as well.
<joachimhs> How do Juju compare with Docker.io ?
<jamespage> hazmat, email in your inbox
<joachimhs> Do Juju require that your servers have KVM, or LXC installed and setup?
<hazmat> jamespage, got a moment for g+?
<jamespage> hazmat, 20mins?
<hazmat> jamespage, sounds good
<jamespage> just need to catch lunch before it becomes late afternoon
<joachimhs> If I start up a charm.. Will I be able to SSH into it and make ad-hoc changes? (I know this isn't recommended, but is it still possible?)
<lazyPwork> GMT morning everyone.
<jamespage> hazmat, good now?
<marcoceppi> o/ lazyPwork glad your demo went well
<hazmat> jamespage, doh.. yeah.. now. was analyzing a cts issue
<adeuring> marcoceppi: another MP for charm-tools: https://code.launchpad.net/~adeuring/charm-tools/check-maintainer-branch-owner/+merge/188586
<marcoceppi> adeuring: thank you!
<vds> hello, I'm trying to configure my charm to use haproxy, I've added the hook to my charm but I keep getting:
<vds> $ juju add-relation my-service:website haproxy:reverseproxy
<vds> error: service "my-service" has no "website" relation
<marcoceppi> vds: pastebin your metadata.yaml to paste.ubuntu.com please
<vds> marcoceppi, http://paste.ubuntu.com/6179761/
<marcoceppi> vds: you have an indentation error with nrpe-external-master, that and the two lines below it need to be indented to line up with mongo
<marcoceppi> otherwise, your metadata.yaml looks fine and should connect to haproxy without issue
<vds> marcoceppi, thanks!
<marcoceppi> vds: adjust that, destroy-environment, re-bootstrap (for good measure) then deploy/relate again
<marcoceppi> it should work
<marcoceppi> if not, let us know
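A quick sanity check for the indentation problem above, assuming PyYAML; if nrpe-external-master and the two lines under it are mis-indented, the interface key lands in the wrong place and this trips:

    import yaml  # PyYAML

    with open("metadata.yaml") as f:
        meta = yaml.safe_load(f)

    for section in ("provides", "requires", "peers"):
        for name, details in (meta.get(section) or {}).items():
            assert isinstance(details, dict) and "interface" in details, (
                "%s/%s looks malformed, check the indentation" % (section, name))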
<lazyPwork> marcoceppi: Same. I've generated quite a bit of interest. I feel that I'm going to have a shift in management methodology coming once I have a functional setup in AWS to show them that juju orchestrated.
<lazyPwork> marcoceppi: have you come up with a solid way to not lose your progress in the local dev environment between reboots? I've had to completely wipe my local environment and start over every time I reboot due to kernel updates.
<bloodearnest> marcoceppi: I'm not sure what the plans are wrt amulet, but I wonder about a configurable test charm for testing relations that works a bit like https://gist.github.com/wavydavy/6779561
<marcoceppi> bloodearnest: that's covered in amulet, I'll give you an example
<marcoceppi> lazyPwork: my local environment survives reboots, is this with the vagrant image?
<bloodearnest> marcoceppi: sweet, thought it might be
<jcastro> lazyPwork: what version of juju? we fixed the local reboot thing a while back
<jcastro> also hi!
<marcoceppi> bloodearnest: https://gist.github.com/marcoceppi/6779616
<lazyPwork> jcastro: latest backport from raring
<lazyPwork> i'm not infront of my PC with juju installed, i can do some further information aggregation this evening if you're up for helping me get you the data you need
<lazyPwork> well, getting launchpad the data the juju team would need
<lazyPwork> i sometimes forget you're only superman Jorge :)
<bloodearnest> marcoceppi: thanks, that looks good. Some questions though:
<marcoceppi> bloodearnest: fire away
<bloodearnest> 1. which charm are you testing wordpress or msql?
<bloodearnest> 2. can I intercept/change what mysql:db relation supplies to wordpress:db? so I can test for how the wordpress charm handles misconfiguration for example
<marcoceppi> bloodearnest: in this example it doesn't actually matter, but this was copied from the wordpress test I was writing. Amulet automatically deploys from the store, unless JUJU_TEST_CHARM environment variable is set, if it matches the charm it's adding (wordpress) it uses ../../ (file is assumed to be in charm/tests) to do the deploy
<marcoceppi> bloodearnest: no, not at this time. It only listens, but that can certainly be a feature added
<marcoceppi> bloodearnest: something for a 1.1 release, just need to get 1.0 out first :)
<bloodearnest> marcoceppi: right, so that first answer makes sense. I think that makes sense in the wordpress setting, as a wordpress deployment will always have a mysql charm, it doesn't work any other way
<bloodearnest> marcoceppi: but haproxy, or squid, it's much more general.
<bloodearnest> it could be an arbitary charm supporting a http interface
<marcoceppi> sure
<bloodearnest> marcoceppi: cool
<bloodearnest> marcoceppi: my ideal world, I would only have to deploy a single charm-under-test, and all the rest of the relation add/removes can be faked under my control :)
<marcoceppi> bloodearnest: Right, that's something I'm also interested in, since that would be like light-weight testing. Something like spin up an LXC with the charm mounted in a pseudo-deployed state, then feed events to a pseudo hook-environment and allow for fast light weight tests. This is a little more blunt. It's using Juju Deployer to throw this up in an environment and actually hash it out
<bloodearnest> marcoceppi: right, which you'd definitely want to do to check out things like wordpresses memcached integration. And wordpress can't do anything without a db, so you'll probably always need that
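For reference, the shape of an amulet test like the gist being discussed: deploy, relate, then read the relation data back from the sentries (method names as amulet documented them around then; the timeout is arbitrary):

    import amulet

    d = amulet.Deployment(series="precise")
    d.add("wordpress")
    d.add("mysql")
    d.relate("mysql:db", "wordpress:db")
    d.expose("wordpress")
    d.setup(timeout=900)

    # inspect what mysql handed to wordpress over the db relation
    print(d.sentry.unit["mysql/0"].relation("db", "wordpress:db"))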
<melmoth> Hola.. I'm deploying cinder (grizzly) on some machine, and strangely enough, the installation proceeds without error but once the charm is installed, there is no cinder-volume created at all
<melmoth> no error in juju logs, nor in juju status
<melmoth> any idea what to do ?
<melmoth> the cinder charm has been deployed with the following config http://pastebin.ubuntu.com/6179969/
<dpb1> Hey -- has anyone tried to open up a firewall for juju-core bootstrap?  Specifically downloading the tools from the public bucket?  Are there any gotchas?
<Harsh> a
<Harsh> US?
<stokachu> question, did i get credit for this? http://bazaar.launchpad.net/~charmers/juju-core/docs/revision/137
<lazyPwork> I'm so rustled. We just got the mandate from above to move to Lync. There goes my hubot installation until I spin up a darknet.
<lazyPwork> marcoceppi: charm powered darknet? check.
<rektide> lazyPwork: what did you have your hubot doing
<AskUbuntu> Juju-Gui Charm API did not respond | http://askubuntu.com/q/352427
<lazyPwork> rektide: He hooks into janky for jenkins communication, and he executes a few of our internal jobs like kicking off backups
#juju 2013-10-02
<lazyPower> jcastro: 1.14.1-raring-amd64
<_mup_> Bug #1233924 was filed: cannot bootstrap azure because no OS image found <azure> <bootstrap> <juju-1.15.0> <juju:Triaged> <https://launchpad.net/bugs/1233924>
<ubot5`> Ubuntu bug 1233924 in juju "cannot bootstrap azure because no OS image found" [Critical,Triaged]
<_mup_> Bug #1233924: cannot bootstrap azure because no OS image found <azure> <bootstrap> <juju-1.15.0> <juju:Triaged> <https://launchpad.net/bugs/1233924>
<mhall119> so my juju-status says I don't have a bootstrapped environment, but ps says it's running local instances anyway: http://paste.ubuntu.com/6182048/
<mhall119> can somebody tell me how to clean up after juju?
<omgponies> does juju expose details like hostname, ip address  to the local hooks ?    or should I call out to `hostname` etc
<davecheney> omgponies: two secs
<davecheney> omgponies: use unit-get
<davecheney> that is the recommended way
<davecheney> so
<davecheney> unit-get private-address
<davecheney> unit-get public-address
<omgponies> sweet
<omgponies> thx
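In a hook written in Python, the recommended unit-get calls are just thin subprocess wrappers (unit-get only exists inside a hook execution environment):

    import subprocess

    def unit_get(key):
        return subprocess.check_output(["unit-get", key]).strip()

    private_address = unit_get("private-address")
    public_address = unit_get("public-address")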
<davecheney> AWS Y U SO SLOW!?
<omgponies> slow is relative ...  used to be we were thankful at a 4 week turnaround for a new server :)
<davecheney> omgponies: so relative to other cloud providers ...
<omgponies> go spend some time on azure ;)
<davecheney> omgponies: touché
<lifeless> PONIES
<omgponies> linux why!
<omgponies> oh wait ... because I told it wrong
<omgponies> me why!
<AskUbuntu> Problem with IP Openstack MAAS | http://askubuntu.com/q/352570
<AskUbuntu> MAAS login error | http://askubuntu.com/q/352573
<mattyw> is there a way I can run juju debug-hooks on a unit that has deployed successfully - the aim is not to debug the hooks - but rather to enter an interactive session with a hook environment, so I can see the env flags that get defined and I can call things like relation-get/unit-get
<joachimhs> I have a set of servers in a datacenter. On some of these machines I want to be able to separate out different applications. I have thought about using KVM, or Docker. How would Ubuntu Juju running on the local machine on LXC compare with those two?
<joachimhs> ^ I will mostly deploy custom-made software, as well as web apps via Nginx
<marcoceppi> mattyw: launch debug-hooks, run `juju set` to change a config value and it'll catch the config-changed hook
<joachimhs> I would really appreciate any comments on the above question :) ^
<mattyw> marcoceppi, thanks, I was actually looking for a generic way of being able to interact with a hooks environment rather than changing config-settings
<marcoceppi> mattyw: that's the only way atm
<mattyw> marcoceppi, ok, thanks
<joachimhs> If one of my charms requires a JVM, is that installed for the host-os, or separately for each of the charms?
<marcoceppi> joachimhs: so, for your first question, we have container support landing in Juju soon. It's currently there but pretty rough cut. The idea being you can safely isolate multiple charms on a single machine and still perform orchestration on them
<marcoceppi> joachimhs: as for the JVM, that would need to be handled by the charm. The base operating system is the same for all charms. You use the charms to build/configure the OS to meet the needs of your service
<joachimhs> marcoceppi: Thanks for your answers! So this means that each physical server will have one JVM installed, and that each charm will configure the java properties to match its requirements?
<marcoceppi> joachimhs: that, again, depends entirely on you as the charm author
<joachimhs> marcoceppi: OK. So each charm can have its own JVM (or any other installed software) ?
<marcoceppi> joachimhs: each charm can do whatever it wants. It's executed as root on the box
<joachimhs> marcoceppi: OK.
<marcoceppi> I don't manage or set up JVMs myself, so I can't speak to the limitations of that product, but if you can have multiple JVMs per physical machine, then you can do that in a charm
<joachimhs> marcoceppi: OK. You mentioned that container support isn't landed yet.. Is it preferred to run on top of KVM, or another hypervisor or is "juju switch local" a viable option?
<marcoceppi> joachimhs: we're talking about two different things. The Local provider works really well, it allows you to spin up a "cloud-like" environment on your local machine using LXC. This is meant as a development tool. Container support is the co-location of two or more charms on a single physical machine. The current juju model is one unit of a service per machine. So if you deploy mysql you'll get a single unit of mysql on a single machine and that's that.
<marcoceppi> You wouldn't necessarily (or safely) be able to put a unit of mediawiki on that same node. It's designed to be used just for MySQL. With container support coming to Juju you can deploy a unit of a service in an LXC container on a physical node. Thereby you can employ density on larger hardware by running services in containers on fewer physical machines. So say co-locating a unit of mediawiki and mysql on the same physical hardware.
<marcoceppi> Since they're in containers they can continue to "own" the machine but now share that one parent node. This whole container setup will be virtually transparent to you as the end user, you would just tell juju you want to use containers and on what machine to put them
<joachimhs> marcoceppi: OK. That makes sense
<joachimhs> marcoceppi: Thanks!
<vds> trying to debug a hook, how do I land on the target machine so I can execute commands like relation-set or unit-get?
<marcoceppi> vds: use `juju debug-hooks <unit>`
<marcoceppi> vds: that will trap hooks prior to execution, so if you have a hook in an error state and want to run it again, launch debug-hooks and then in another window type `juju resolved --retry <unit>`
<marcoceppi> then debug-hooks will catch the hook in a new tab in the tmux session, and you can run the hook again by typing "hooks/<hook-name>" or run juju commands since you're now in a hook environment
<marcoceppi> vds: https://juju.ubuntu.com/docs/authors-hooks.html
<jcastro> hear ye
<jcastro> hear ye, the Juju Charm Weekly status is happening in 2 minutes
<jcastro> http://ubuntuonair.com if you want to follow along
<jcastro> https://plus.google.com/hangouts/_/b340bb6de5946ca99622521920b9c1454b06c6e4?authuser=0&hl=en
<jcastro> ^^ if you want to participate in the hangout
<jcastro> marcoceppi: evilnickveitch m_3 ^^^
<vds> marcoceppi, thanks, it worked
<vds> it looks like in the docs (https://juju.ubuntu.com/docs/authors-hooks.html ) juju resolved is not mentioned at all
<marcoceppi> jcastro: pad http://pad.ubuntu.com/7mf2jvKXNa
<lazyPwork> crap i missed the charm status. Was it on air so i can play catchup?
<marcoceppi> lazyPwork: yup, there's a video on the juju lists
<marcoceppi> https://lists.ubuntu.com/archives/juju/2013-October/003008.html
<lazyPwork> Already on it ty
<arosales> marcoceppi, sinzui I put my findings for installing brew on the thread https://github.com/mxcl/homebrew/pull/22772
<arosales> sinzui, is 1.14.1 brew formula still failing though?
<sinzui> arosales, I think we need to make a new pull request to get 1.14.1 in. The request you commented on was closed.
<arosales> sinzui, do you think the 1.14.1 formula will build?
<sinzui> I don't know.
<sinzui> I would only know by installing the entire home brew setup and making the change to see if it builds.
<sinzui> arosales, The details of why it failed are empty http://bot.brew.sh/job/Homebrew%20Pull%20Requests/2772/
<arosales> sinzui, hmm that just gives me a Status Code: 404 no exception or stacktrace
<zradmin> I'm a bit further with my quantum issue. I can ping the router ip i create from the quantum-gateway node, and when i run ip netns on it it shows all the ports etc available. If i run it on the nova-compute node however i get a blank list instead and nothing travels over the br-tun interface... has anyone else had a similar problem?
#juju 2013-10-03
<freeflying> SSH_USER = 'juju_keystone'  where does this user come from in keystone's charm
<lazyPower> freeflying: thats an inline config that gets consumed during the install hooks at first glance.
<freeflying> lazyPower, yes, with this user, we can't sync credential to peer node
<lazyPower> I'm not real familiar with open stack - so i feel that I'm not a great resource of info on how to accomplish what you're trying to do. I literally just opened the charm and scanned the source.
<freeflying> lazyPower, thanks anyway
<lazyPower> No problem *hat tip*
<teknico> Hi, esteemed juju dev colleagues :-)
<teknico> I may be one of the first trying to deploy OpenStack's nova-volume inside an LXC container
<teknico> it's not working because the charm cannot access loop devices
<teknico> and that's because apparmor is preventing that
<teknico> and to make it work, a line needs to be uncommented in the container config:
<teknico> #lxc.aa_profile = unconfined
<teknico> and that line is commented in the config generated by /usr/share/lxc/templates/lxc-ubuntu-cloud
<teknico> so, how do I change the container config while it's being deployed?
<teknico> (phew :-) ) thank you
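The edit teknico describes, scripted; the container name and config path are placeholders, and how to hook this into the deployment itself is left open in the log:

    CONFIG = "/var/lib/lxc/<container>/config"  # hypothetical container name

    with open(CONFIG) as f:
        lines = f.read().splitlines()
    with open(CONFIG, "w") as f:
        for line in lines:
            if line.strip() == "#lxc.aa_profile = unconfined":
                line = "lxc.aa_profile = unconfined"  # stop apparmor blocking loop devices
            f.write(line + "\n")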
<Himmagery> I have a weird problem.  I'm trying to deploy a service using juju deploy --constraints "instance-type=cpu2-ram4-disk50-ephemeral20" but am getting an error "ERROR Bad 'instance-type' constraint 'm1.small': unknown instance type" (which is what the service would have used when it was first deployed).  Any ideas why it's ignoring my manually set constraints?
<melmoth> Hola, i m trying to use juju 1.14.1-precise-amd64 against a grizzly openstack installation
<melmoth> bootstrap fails with "error: required environment variable not set for credentials attribute: Region"
<melmoth> any idea what to do ? (a simple google search gave me https://bugs.launchpad.net/juju-core/+bug/1086674)
<melmoth> ahh, i put the region in environments.yaml, seems to go better
<melmoth> hola, i m hitting what looks like https://bugs.launchpad.net/juju-core/+bug/1202163
<_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <cts> <cts-cloud-review> <papercut> <Go OpenStack Exchange:Fix Committed by jameinel> <juju-core:In Progress by jameinel> <https://launchpad.net/bugs/1202163>
<melmoth> the last comment mentions: "Changed in juju-core: milestone: none → 1.16.0"
<melmoth> i dont understand exactly wich version has the fix, i installed ppa devel (1.15.0-precise-amd64)
<melmoth> but i still have the same behaviour
<melmoth> any idea what should i try ?
<jamespage> melmoth, I think the answer is that is not fixed yet
<melmoth> jamespage, yep, i switched to pyjuju for this environment.
<davecheney> melmoth: we're literally discussing this on a conf call right now
<davecheney> keep an eye on the bug, status updating REAL SOON NOW
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1202163
<_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <cts> <cts-cloud-review> <papercut> <Go OpenStack Exchange:Fix Committed by jameinel> <juju-core:In Progress by jameinel> <https://launchpad.net/bugs/1202163>
<melmoth> ok
<davecheney> bug #1234577
<_mup_> Bug #1234577: Uniter needs to support ssl-hostname-verification: false <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1234577>
<davecheney> #1234576
<_mup_> Bug #1234576: Upgrader needs to support ssl-hostname-verification: false <juju-core:Triaged> <https://launchpad.net/bugs/1234576>
<davecheney> melmoth: short version, we are REALLY, _really_, r e a l l y, hoping to get this done before the 1.16.0 release
<melmoth> any idea whe this will be ? (i m leaving the premicises on friday)
<davecheney> melmoth: if friday is tomorrow
<davecheney> it won't be done by then
<melmoth> ok
<davecheney> it *might* be done by the following monday
<melmoth> i'll stick to the pyjuju version then.
<davecheney> as we're also against the final deadline for saucy
<melmoth> anyway, i have a quantum problem that make the bootstrap not working anyway....
<davecheney> melmoth: shitter
<melmoth> so this one specific problem is not the worst i m currently facing :)
<yolanda> hi, i'm using juju-deployer to deploy a set of charms, but i find this error : error: cannot get latest charm revision: charm not found in "/home/yolanda/development/canonical-ci": local:precise/postgresql
<yolanda> shouldn't it be local:postgresql, not local:precise/postgresql ?
<AskUbuntu> How can I know which machine Juju is actually using? | http://askubuntu.com/q/353114
<AskUbuntu> Migration from generic OpenStack to Ubuntu Openstack | http://askubuntu.com/q/353127
<sinzui> gary_poster, We confirmed hazmat's hadoop charm is really not in manage.jujucharms.com. We killed some stale processes. We just restarted the queue. I see http://manage.jujucharms.com/charms/precise/hadoop now
<sinzui> ^ hazmat ping me if you think manage.charmworld.com is slow or not working again
<gary_poster> sinzui, great, thank you!
<gnuoy> when bootstrapping with the maas provider is it possible to specify which physical machine you want to use as the boostrap host ?
<freeflying> gnuoy, theoretically, yes
<freeflying> gnuoy, if you use the py version of juju, there is a constraint called maas-name; if it's the go version, you need some workarounds
<gnuoy> freeflying, I'm using juju-core/go version
<freeflying> gnuoy, in this case some constraints can still be used, like mem/cpu cores; another approach is to deploy a pure ubuntu to those servers, then deploy the service as a subordinate onto the machine
<gnuoy> the charms I'm looking to deploy aren't subordinates and juju/maas seems to be ignoring mem/cpu constraints. I've tried bootstrapping with cpu/mem options which match my target host but a different host which doesn't match keeps getting picked
<freeflying> gnuoy, or another approach enlist one, deploy one lol
<gnuoy> yeah, not ideal
<freeflying> but works well here
<jamespage> marcoceppi, urhg:
<jamespage> 2013-10-03 13:52:09 INFO juju.worker.uniter context.go:234 HOOK E: Unable to locate package charm-helper-sh
<jamespage> from mysql charm on saucy
<jamespage> gnuoy, the honest truth is that right now that is tricky
<jamespage> gnuoy, maas tag would be the way to do it once juju-core 1.16 is out
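Once tags land, the constraint form would look something like this (a sketch, assuming juju-core >= 1.16 and a node already tagged "bootstrap" in MAAS):

    # pick the tagged node for the bootstrap machine
    juju bootstrap --constraints "tags=bootstrap"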
<marcoceppi> jamespage: you're deploying mysql on saucy?
<jamespage> marcoceppi, yes
<marcoceppi> didn't think we had a mysql saucy charm
<jamespage> marcoceppi, I never deploy direct from charmstore
<jamespage> hence the charm can be any series from local
<marcoceppi> jamespage: with charm-tools 1.0 charm-helper-sh is dropped from that package
<jamespage> yeah - I remember
<marcoceppi> you'll need to add a PPA to get it to work
<jamespage> marcoceppi, what's the future of charm-helper-sh?
<marcoceppi> jamespage: gone, deprecated in favour of charm-helpers2 at lp:charm-helpers
<jamespage> marcoceppi, I really don't want to add a ppa
<marcoceppi> the MySQL charm shouldn't be using much from charm-helpers-sh
<marcoceppi> you can embed the helper files locally in the charm
<gnuoy> jamespage, ok, thanks. I'm not sure how I'm going to address this. I have 24 odd services to deploy and 3 different types of server :-(
<jamespage> gnuoy, other constraints work
<jamespage> 1.16 is due today officially but more like Monday I think
<marcoceppi> jamespage: I don't see any of the code using charm-helpers-sh anymore, except for a mention in hooks/monitors-relation-departed, but I think that should be including monitors.common.bash and not a charm-helper
<gnuoy> jamespage, so that'll be in  ppa:juju/stable on monday'ish ?
<marcoceppi> actually, sorry, monitors.common.bash and master-relation-changed are using ch_is_ip and ch_get_ip
<jamespage> gnuoy, yeah - I think so
<gnuoy> ok, that'd be great
<hazmat> sinzui, any reason why the ingest had to be restarted to make it work?
<sinzui> hazmat, there were stale procs, possibly zombies
<sinzui> new procs exit early when they think they are already running
<hazmat> sinzui, any way to monitor that (last collection update on status page)?
<sinzui> hazmat, we think so. I am going to write up recommendations for a better /heartbeat
<hazmat> sinzui, cool better monitoring is one part of the problem.. fixing the stale proc would be the other, are there any forensics on the latter?
<sinzui> hazmat, no. There is definitely still a problem too
<sinzui> hazmat, looks like mthaddon found the issue, crontab was manually put in the wrong place in production. He is fixing it
<mthaddon> sinzui: er, sorry?
<mthaddon> sinzui: what's wrong with /etc/cron.d/charmworld ?
<mthaddon> (as a location)
<sinzui> mthaddon, didn't you indicate that the charmworld crontab was in the wrong location?
<mthaddon> sinzui: no, I indicated you asked for me to look in the wrong location, but it's in /etc/cron.d/charmworld
<sinzui> :(
<mthaddon> (which is fine)
<hazmat> sinzui,  that does not equate to stale process
<sinzui> I still have no clue as to what automated piece is missing.
<hazmat> sinzui, is staging running against the full set of charms?
<sinzui> hazmat, we had stale procs a few hours ago we killed them.
<sinzui> yes it is hazmat
<sinzui> I am using it to see what is missing
<hazmat> sinzui, staging has 725, prod has 746
<sinzui> hazmat, that can always happen because the older the instance, the more deleted charms it knows about
<sinzui> hazmat compare http://staging.jujucharms.com/recently-changed to http://manage.jujucharms.com/recently-changed
<hazmat> sinzui, 21 deleted charms in a few months is highly suspect imo
<hazmat> but fair enough the ingest is paramount for triage and analysis
<mthaddon> sinzui: so do you need anything from us? I'm not sure if that cron info answers your questions
<hazmat> sinzui, mthaddon, ingest logs might be helpful to see where the stall happens
<sinzui> mthaddon, /home/charmworld/var/app.log
<hazmat> could need several days worth depending on length of stall
<hazmat> if its being rotated
<mthaddon> 1.1G... no rotation by the looks of it
<sinzui> No it doesn't
<mthaddon> can we update the charm to do that (not a critical item, but unrotated logs is a recipe for disk space problems)?
<sinzui> mthaddon, I think so
<sinzui> mthaddon, I am still preparing a list of issues to talk over with gary_poster to address what we wanted to know when the last release failed
<mthaddon> sinzui: should I log a bug about that at https://bugs.launchpad.net/charmworld/+filebug ?
<sinzui> that is good
<sinzui> mthaddon, on the mongodb we want to know if the queue has the list of charms to ingest
<sinzui> mongo juju --eval 'db["charm-queue"].count()'
<sinzui> ^ that can be 0 because ingest drains the queue. If we see a number, we can be certain we are queuing
<mthaddon> sinzui: got "1"
<sinzui> I would expect that to be empty then in a few minutes
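A quick way to keep an eye on that (a sketch using the same mongo one-liner sinzui gives above, polled once a minute; a count that never drops back to 0 suggests ingest has stalled):

    watch -n 60 "mongo juju --eval 'db[\"charm-queue\"].count()'"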
<sinzui> mthaddon, hazmat, the log shows that we have only ingested once in two days. it was the run jjo intervened with
<sinzui> mthaddon, has the queue number changed?
<mthaddon> sinzui: not yet
<mthaddon> sinzui: I'm going to have to pass you to the vanguard in the other channel (there is now one) as I have a meeting to go to
<sinzui> thank you
<AskUbuntu> How to remove a relation in Juju after destroying one of the associated services? | http://askubuntu.com/q/353231
<evilnickveitch> I got this ^ ^
<AskUbuntu> How can I deploy a new service in Juju GUI specifying the destination machine? | http://askubuntu.com/q/353262
<FilipeCifali> Hello, can anyone help me w/ Juju-core on Ubuntu-server 12.04?
<FilipeCifali> is this the right place to ask questions?
<sarnold> FilipeCifali: sure; note that irc tends to work best if you just ask questions outright :)
<FilipeCifali> I do like to be polite first :) so, I just installed, made the setup and I'm getting this same error: http://askubuntu.com/questions/351269/juju-errors-when-trying-to-deploy-to-ec2
<FilipeCifali> is the stable version not so stable? Juju is yelling at me:
<FilipeCifali> ~# juju -v deploy wordpress
<FilipeCifali> 2013-10-03 17:52:01 INFO juju.provider.ec2 ec2.go:187 opening environment "amazon"
<FilipeCifali> 2013-10-03 17:52:07 ERROR juju supercommand.go:282 command failed: no instances found
<FilipeCifali> error: no instances found
<FilipeCifali> and I have credentials fixed, have done juju bootstrap before
<FilipeCifali> ~# juju bootstrap
<FilipeCifali> error: environment is already bootstrapped
<FilipeCifali> (I can access my instance over ssh w/o problem)
<FilipeCifali> brb
<FilipeCi_> I'm back
<sarnold> welcome back FilipeCi_ :) did 'juju bootstrap' work?
<FilipeCi_> it did
<FilipeCi_> and I did it again after I found that link
<FilipeCi_> ~# juju bootstrap
<FilipeCi_> error: environment is already bootstrapped
<FilipeCi_> is there any other debug/verbose level to use?
<sarnold> FilipeCi_: you could just paste the output of juju status
<FilipeCi_> @sarnold just a sec
<FilipeCi_> hmmm Juju status showed me a DNS timeout
<FilipeCi_> gonna change my resolv.conf and try again
<FilipeCi_> 2013-10-03 18:39:24 ERROR juju supercommand.go:282 command failed: Get : 301 response missing Location header
<FilipeCi_> error: Get : 301 response missing Location header
<FilipeCi_> that's really awkward
<FilipeCi_> I found this in launchpad: https://bugs.launchpad.net/juju-core/+bug/1083017 but not sure if it's related since it's marked as fixed last year
<_mup_> Bug #1083017: Cannot bootstrap with public-tools in non us-east-1 region <ec2> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1083017>
<FilipeCi_> The funny part:     region: us-east-1
<FilipeCi_> in env.yaml being used
<FilipeCi_> damn interwebs
<FilipeCi_> any hints about that message?
<sarnold> FilipeCi_: you've missed nothing here
<FilipeCi_> :(
<FilipeCi_> maybe I should downgrade Juju?
<FilipeCi_> the core/client in my server
<FilipeCi_> clear
<marcoceppi> FilipeCi_: what version are you using?
<FilipeCi_> ~# juju version
<FilipeCi_> 1.14.1-precise-amd64
<FilipeCi_> from stable repo
<FilipeCi_> (I was following https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment-using-ec2)
<marcoceppi> can you run juju destroy-environment; then run `juju bootstrap -v --debug` and paste the output to http://paste.ubuntu.com
<FilipeCi_> just a sec and I'll post it
<FilipeCi_> http://paste.ubuntu.com/6189419/
<FilipeCi_> running ubuntu-server 12.04 LTS
<marcoceppi> According to everything I'm reading this has already been fixed
<FilipeCi_> how do I update?
<marcoceppi> FilipeCi_: this was fixed almost a year ago
<marcoceppi> FilipeCi_: change  your control bucket name to something more unique and then try again
<FilipeCi_> gonna try to put another md5 on that
<Filipe___> oh yeah, it worked
<Filipe___> TY!
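For reference, the fix amounts to editing the ec2 stanza in ~/.juju/environments.yaml (a sketch; the bucket name is a made-up example and must be globally unique across S3):

    # ~/.juju/environments.yaml (excerpt, sketch)
    environments:
      amazon:
        type: ec2
        region: us-east-1
        control-bucket: juju-d41d8cd98f00b204e9800998ecf8427e  # any globally unique name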
<dpb1> Hi -- what is the usual story around deploying juju-gui when your lab is firewalled?  can you d/l the targz directly into the charm dir?
<thumper> dpb1: I think it is to download the charm locally, then deploy as a local charm
<thumper> however you may have issues with any apt-get or http GETs the install hooks may run
<dpb1> thumper: yes, I did that.  but it's trying to curl a tar.gz from launchpad.
<thumper> :-(
<thumper> dpb1: don't suppose you can open ports to launchpad?
<dpb1> thumper: I can.  I just wish it was a package. :)
<dpb1> since that already works
<dpb1> maybe I'll file a bug/enhancement idea
 * thumper nods
<thumper> cheers
<kenn> I have an environment running on AWS, but due to budget constraints I would like to run both prod and dev on the same box. My thought was to run each inside their own LXC inside the EC2 instance. Is this possible? Is it even a good idea? I haven't been able to find a command to add an LXC to an instance.
#juju 2013-10-04
<thumper> kenn: hey
<thumper> kenn: ec2 isn't good at giving lxc containers their own ip address
<thumper> it is potentially a good idea when we can get ec2 to play nice with the lxc containers
<thumper> but this isn't there yet
<kenn> thumper: well, theoretically I don't need the LXC containers to have a public IP, as everything currently runs through an nginx proxy anyway.
<kenn> thumper: but, if EC2 instances are not capable of running several LXCs anyway, then I guess I'll have to figure something else out
<thumper> kenn: they can... if the nginx proxy is on the same machine as the lxc containers, it may work
<thumper> kenn: it is all about the networking
<kenn> thumper: I'd like to play around with it, how do I add an LXC to an EC2 instance, or even a local LXC?
<thumper> kenn: juju will not allow it (or it shouldn't or won't once we hook everything up)
<thumper> you can try: juju add-machine lxc:1
<kenn> thumper: ah ok, so currently this is not implemented?
<thumper> I think that tries to create an lxc container on machine 1
<kenn> ok, let me try that :)
<thumper> kenn: it is implemented but specifically disallowed on ec2 because it doesn't talk right
<thumper> failing that,
<thumper> you end up using the lxc commands directly
<thumper> sudo lxc-create -n some-name -t ubuntu-cloud
<thumper> etc
<kenn> haha no I will not be doing the latter :)
<kenn> thumper: adding the LXC to a running machine seemed to work. It added something anyway. Though if it doesn't work on EC2 then I think I'll solve the problem in a different way
<thumper> you can then go: juju deploy ubuntu --to 1/lxc/0
<thumper> and then : juju ssh ubuntu/0
<thumper> if that is the first one
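Putting thumper's steps together (a sketch; the machine and unit numbers assume the container is the first one created):

    juju add-machine lxc:1            # ask juju for an lxc container on machine 1
    juju deploy ubuntu --to 1/lxc/0   # place the ubuntu charm into that container
    juju ssh ubuntu/0                 # ssh to the unit once it comes up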
<kenn> I was just about to google that
<kenn> hang on, I have to deploy ubuntu?
<thumper> not exactly
<thumper> but kinda
<thumper> due to the way that addressing isn't properly hooked up yet
<kenn> ok, so sub-lxcs are kind of sort of experimental then?
<thumper> it is the only way to get the ip address shown in juju status
<thumper> kenn: they work on maas, but not on other providers yet
<thumper> as we need to hook up the networking
<kenn> right, gotcha
<kenn> thanks for the help though
<thumper> np
<kenn> thumper: as a side note, while I have the attention of a juju developer: I absolutely love the design you guys have come up with. In my previous job I did Chef on RightScale and that was such a nightmare compared to this
<thumper> thanks
<kenn> so, good job!
<thumper> I've only been on it since the start of the year
<thumper> so can't take the real credit
<thumper> but I do like it too
<thumper> I'll pass it on
<thumper> glad you're enjoying it
<kenn> please do
<kenn> I'm having trouble working with my local environment. Starting a new machine doesn't create a log for that machine, and deploying services shows up in watch, but stays pending
<kenn> I've tried destroying and bootstrapping my environment several times, including system restart. I've also manually killed a few LXC containers left over with lxc-destroy
<kenn> during environment bootstrap I get the following error in the log:
<kenn> 2013-10-04 04:29:58 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
<kenn> any ideas?
<kenn> juju version: 1.14.1-quantal-amd64
<davecheney> kenn: that means bootstrapping didn't work
<davecheney> kenn: do you have the right version of mongodb installed ?
<kenn> db version v2.2.4, pdfile version 4.5
<kenn> according to APT, that's the right version
<davecheney> ok, that is odd
<davecheney> that is the usual stumbling block
<davecheney> kenn: can your laptop talk to the internet without a proxy?
<davecheney> s/laptop/machine
<kenn> yes, I don't have a proxy
<kenn> I tried reinstalling juju and mongo via apt and also cleared all mongo's data. Still no luck, same error
<davecheney> kenn: we don't use the mongodb that the package provides
<davecheney> we start our own
<davecheney> can you please try (not sure if this will give results)
<davecheney> juju bootstrap --debug
<kenn> two logs here: http://pastebin.com/ShCQVYYC (terminal output) and http://pastebin.com/QMfC16PU (machine-0.log)
<kenn> davecheney: ^^
<davecheney> kenn: that looks fine
<davecheney> what is the issue you see
<kenn> machine-0.log: 2013-10-04 04:52:00 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
<davecheney> yeah, that happens
<davecheney> it's just part of the startup dance
<kenn> ok, when I add-machine, a machine is created, but no log file is created. When I then add a service (like mysql) to that machine, the service appears in juju stat as pending, no log file is created, and that's how it stays
<kenn> ok, it finally created the log file which states the install hook has been queued. No change for a few minutes though
<davecheney> the logs should all be in ~/.juju/local/something something/log
<davecheney> alongside where you found machine-0
<davecheney> if they are not
<davecheney> machine-0.log will have the details on what happened
<kenn> they aren't there, let me check machine-0
<kenn> no errors in machine-0.log as far as I can tell
<davecheney> can you make machine-0.log available
<davecheney> kenn, did you use sudo to bootstrap ?
<kenn> davecheney: the machine-0.log is the one I posted here  http://pastebin.com/QMfC16PU, and yes I used sudo for the bootstrap
<kenn> let me post one after starting machines and stuff
<kenn> davecheney: this is machine-0.log after trying to deploy mysql and starting a second machine: http://pastebin.com/4Pf1DgdP
<kenn> I should note that I downgraded juju to see if that would fix it
<davecheney> kenn: ok, it looks lke juju asked lxc to start two machines
<davecheney> but they never started
<kenn> ok
<davecheney> anything in dmesg
<davecheney> or /var/log/syslog
<davecheney> btw, which OS are you using ?
<davecheney> the local provider is only known to work on raring or later
<davecheney> also, what is your net connection doing ?
<davecheney> there could be a large lxc related download going on
<kenn> I'm on ubuntu quantal. What am I looking for in dmesg and syslog? I'll check my connection
<davecheney> lxc is doing a version of debootstrap for every machine you start
<davecheney> so that could explain a delay
<davecheney> or it could just be broken
<kenn> hmm, I do appear to be downloading quite a lot of stuff from zaurac.canonical.com
<kenn> ok you know what, I've probably just been incredibly impatient
<davecheney> np, that has a solution
<davecheney> and isn't typically a bug :)
<davecheney> i *think* once lxc has done its dance the first time
<davecheney> it will cache the results
<kenn> the mysql server has installed and started, and it would appear my local charm is now installing
<davecheney> so this is a one time cost
<kenn> ok, sorry about the fuss man, it's just usually so damn instant!
<kenn> I will be more patient next time :)
<kenn> Is there a place where I can find a list of all the juju api commands? Like open-port and config-get. I can't find it in the docs.
<kenn> Secondary question: what's the best way to get the IP of the box my service is being deployed to? Internal IP is fine.
<davecheney> kenn: second answer first
<davecheney> unit-get private-address
<davecheney> unit-get public-address
<davecheney> etc
<davecheney> kenn: first answer
<davecheney> https://juju.ubuntu.com/docs/authors-charm-anatomy.html
<davecheney> grep for "Hook commands for working with relations"
<kenn> davecheney: awesome, thanks, for both. Is there a more complete list somewhere? I'm ok with digging around source code if need be
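A minimal hook sketch tying those commands together (unit-get, config-get, and open-port are standard hook tools; the "port" config option is made up for illustration):

    #!/bin/sh
    # hooks/config-changed (sketch)
    set -e
    PRIVATE_ADDR=$(unit-get private-address)   # IP of the box the unit runs on
    PORT=$(config-get port)                    # hypothetical charm config option
    open-port "$PORT"                          # expose the port via juju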
<gnuoy> morning I seem to have a wedged maas environment. If I try and do destroy-environment I get "error: gomaasapi: got error back from server: 409 CONFLICT" and if I try and bootstrap I get "error: environment is already bootstrapped"
<gnuoy> any ideas ?
<AskUbuntu> how to reproduce an existing environment in juju? | http://askubuntu.com/q/353524
<gnuoy> Ok, I had a node that was stuck in the commissioning state which I think juju may have believed was the bootstrap node. Now that's cleared, I was able to destroy-environment
<marcoceppi> gnuoy: interesting, I wonder if you run those commands with -v or --debug (or both) if you'd get the actual error from MAAS other than just 409 (which maas uses quite a bit)
<gnuoy> marcoceppi, I got no additional info I don't think. Let me see if I have it in my history
<gnuoy> no, can't find it I'm afraid
<stub> $ sudo juju bootstrap
<stub> error: net: no such interface
<stub> I've busted something yesterday and no idea what.
<marcoceppi> stub: I think it's the interface for local provider
<stub> Yeah, I have 'lxc.network.link = lxcbr0' in lxc/default.conf, but that interface no longer exists :-/
<marcoceppi> stub: Try re-installing lxc
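A sketch of that reinstall-and-verify path (package and file names as on precise/raring-era Ubuntu; exact service names vary by release):

    sudo apt-get install --reinstall lxc       # recreate the default lxc setup
    ip addr show lxcbr0                        # the bridge should exist again
    grep network.link /etc/lxc/default.conf    # should read lxc.network.link = lxcbr0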
<marcoceppi> hazmat: adam_g: is there any way to tell deployer that a service should be exposed? or does that need to happen outside of deployer?
<oatman> anyone know why I get "error: cannot create log collection" when I run `sudo juju bootstrap` ?
<oatman> it seems to have stopped working since I rebooted
<melmoth> hola !
<melmoth> Whatever the service i deploy (provider openstack, grizzly), the service ends up with https://pastebin.canonical.com/98533/
<melmoth> twisted internal error.
<melmoth> this is with juju py (there s currently a problem with juju-core on openstrack provider)
<melmoth> any idea what i could do about this ?
<marcoceppi> melmoth: not really sure what's going on in the log, it's hard to say
<melmoth> indeed.
<melmoth> but it does look like a twisted error... so i guess it's something specific to pyjuju.
<melmoth> and it seems to occur even before the install hook is fired.
<marcoceppi> melmoth: well it's actually getting a 500 error to whatever it seems to be trying to do
<marcoceppi> twisted is just the API that juju < 1.0 uses for communication with various API endpoints in the providers
<marcoceppi> it's an async framework
<marcoceppi> oatman: you still having bootstrap problems?
<oatman> marcoceppi, yes I am, I've restarted a few times to no avail
<oatman> wait
<oatman> wtf it's now working
<oatman> does destroy-environment sometimes not destroy it right?
<marcoceppi> oatman: so with the local provider, it attempts to put logs in ~/.juju/local/log (or if you've named the environment something other than local, replace "local" with that name)
<oatman> ah
<marcoceppi> oatman: what version of juju are you on? 1.14.1?
<oatman> 1.14.1-raring-amd64
<marcoceppi> destroy-environment should work as expected
<oatman> I suspect I'll have it again, I'll be sure to come here when I do
<oatman> I'll try and isolate it first
<marcoceppi> oatman: when running bootstrap and getting those errors, run bootstrap with -v and --debug flags to give us a little more insight
<oatman> will do
<marcoceppi> might also help you pinpoint what's going on
<oatman> here's my script that I'm running to rebuild my env:
<oatman> sudo juju destroy-environment -y &&
<oatman> sudo juju bootstrap -v --debug
<oatman> juju deploy mysql &&
<oatman> juju expose mysql &&
<marcoceppi> yeah, that looks fine
<oatman> cool
<oatman> I'll keep on going
<oatman> I'm very impressed with juju otherwise, honest!
<marcoceppi> you might want to sleep a few seconds between destroy and bootstrap, just for good measure
<oatman> ah, ok
<marcoceppi> otherwise that should work
<oatman> I'll add that just to be safe
<marcoceppi> on "other" providers, that's not a problem, but local it might be trying to bootstrap too soon after a destroy
<oatman> I see
<oatman> that makes sense
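The script with marcoceppi's suggestion folded in (a sketch; note the original paste was also missing an "&&" after the bootstrap line):

    sudo juju destroy-environment -y &&
    sleep 10 &&                        # give the local provider time to tear down
    sudo juju bootstrap -v --debug &&
    juju deploy mysql &&
    juju expose mysql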
<kentb> so if we are discouraging the use of "deploy --to X" in favor of containerization, how soon will the containerization support be available?  I certainly like the idea rather than using "--to"
 * kentb errand then dell lab
<Makyo> jamespage, I received a question about deploying openstack, is there a bundle I can point them to?
<jcastro> marcoceppi: 20 minute warning!
<marcoceppi> jcastro: ack!
<jamespage> Makyo, not yet
<marcoceppi> jcastro: you going to fire it up?
<Makyo> jamespage, alright, thanks.
<jamespage> Makyo, I will get to it honest - we have a juju-deployer configuration we are using for testing charm work but it points to all our inflight branches
<jcastro> marcoceppi: yeah in ~6 minutes
<marcoceppi> jcastro: ack
<Makyo> jamespage, no problem, sounds good.  Just had someone asking if I could prove that the openstack deployment from the vid worked, figured that'd be easiest.
<jcastro> marcoceppi: https://plus.google.com/hangouts/_/67513881e518abb07e2d4cc3d79041dac96648b9?authuser=0&hl=en
<jcastro> . /usr/bin/byobu-reconnect-sockets
<jcastro> Ok we're having a charm school on Amulet starting nowish!
<jcastro> http://ubuntuonair.com if you wanna follow along.
<FilipeCifali> Oh nice
<FilipeCifali> my client is trolling me
<FilipeCifali> brb
<marcoceppi> jcastro: 1.0.1 uploaded to ppa
<marcoceppi> jamespage: got a second? I'm trying to patch a huge glaring flaw in the charm-tools package, I can get the source from the ppa, but I noticed in Saucy you made a few fixes. If I bzr branch lp:ubuntu/saucy/charm-tools I get 0.3 source. What should I do to get 1.0.0 so I can patch the packaging?
<jamespage> marcoceppi, pull-lp-source charm-tools
<marcoceppi> jamespage: i'm on raring, anything else to get the saucy version?
<jamespage> marcoceppi, no
<marcoceppi> jamespage: awesome, thanks. On a slightly related note, I need to submit a new version of the packaging for charm-tools in saucy
<jcastro> marcoceppi: don't forget to respond to the list wrt. the autogenerating interface docs stuff
<marcoceppi> jcastro: ack
<jamespage> marcoceppi, whats the bug?
<jamespage> if you want to fix it in saucy please raise a task for ubuntu as well
<marcoceppi> jamespage: https://bugs.launchpad.net/charm-tools/+bug/1231441
<_mup_> Bug #1231441: python-markdown missing in the deb packacge's dependencies <Juju Charm Tools:In Progress by marcoceppi> <https://launchpad.net/bugs/1231441>
<marcoceppi> jamespage: so I fixed it in the source, added a new incremental update, and built the source (and just built the package and installed in a clean lxc container), I'm about to push to the PPA
 * marcoceppi needs to read on raising task for ubuntu
<jamespage> marcoceppi, I've been trying to push to the distro first - could you push it to a personal PPA
<jamespage> then I can pull from there and update the distro + backport
<marcoceppi> jamespage: can do
<jamespage> marcoceppi, I raised the task for Ubuntu - remember to reference it in the changelog
<marcoceppi> jamespage: reference the bug?
<jamespage> (LP: #1231441) - that way the bug will be closed once it gets accepted into distro
<_mup_> Bug #1231441: python-markdown missing in the deb packacge's dependencies <Juju Charm Tools:In Progress by marcoceppi> <charm-tools (Ubuntu):New> <charm-tools (Ubuntu Saucy):New> <https://launchpad.net/bugs/1231441>
<jamespage> fwiw I'd probably have just fixed that in packaging now that we are so close to release
<marcoceppi> jamespage: when I set the release to saucy in the changelog, and go to build source it says release not found?
<marcoceppi> E: charm-tools changes: bad-distribution-in-changes-file saucy
<marcoceppi> safe to ignore?
<jamespage> yep
<marcoceppi> jamespage: uploaded to ppa:juju/stable
<jamespage> marcoceppi, urgh - how are you managing to put the same version in for all releases
<jamespage> that's generally a bad idea
<marcoceppi> jamespage: uhh, I don't know
<marcoceppi> rather, I'm really not sure what I just broke
<marcoceppi> Sadly, I've just been following the internet as best I can
<jamespage> marcoceppi, best to ask
<jamespage> marcoceppi, the problem is that the version you just uploaded to PPA is exactly the same version I have to upload to saucy
<jamespage> note the use of ~ version on the other packages in the PPA
<marcoceppi> jamespage: oh, I didn't realize that would cause an issue
<marcoceppi> jamespage: is there a guide that describes how to best manage packages in ppas?
<jamespage> marcoceppi, not really
<jamespage> marcoceppi, I normally prepare my distro upload then use backportpackage to prepare uploads for PPA
<jamespage> that way the distro is always the preference over PPA
<jamespage> marcoceppi, I uploaded that to saucy
<marcoceppi> jamespage: okay, I think I grasp that. So it'd be do everything I've done up to this point, then run backportpackage to create a backport for each release and put those in ppa?
<jamespage> yep
<marcoceppi> jamespage: thanks, hope that didn't create too much of a kerfuffle
<marcoceppi> jamespage: I'll prepare my releases for tools using that method going forward!
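A minimal sketch of that step (backportpackage ships in ubuntu-dev-tools; the series and PPA here mirror the discussion but are illustrative):

    # prepare the distro upload first, then backport it into the PPA
    backportpackage -s saucy -d precise -u ppa:juju/stable charm-tools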
<jamespage> marcoceppi, http://paste.ubuntu.com/6193324/
<jamespage> thats what I run once I've uploaded a juju-core stable release to distro
<marcoceppi> jamespage: awesome, thank you sir!
<jamespage> the script can either pull a package
<jamespage> or use the dsc locally
<marcoceppi> jamespage: if I do that right now for charm-tools am I going to hurt anything?
<marcoceppi> jamespage: I've been using the copy package link to create releases in other versions of ubuntu for the ppa, to answer your "how are  you managing" question
<rektide> is there any media outpost that links stuff such as people's juju blog posts and projects? looking for a kind of street view, hack-a-day coverage of the wider world?
<marcoceppi> rektide: we have this: https://juju.ubuntu.com/community/blog/ which aggregates a few juju charmers' blogs
<rektide> i'm familiar with the planet schema. it's great hearing from direct authors, for sure.
<rektide> i like being able to venture out to the slightly further removed too, which is harder
<rektide> that's a more critical evaluation than i'm comfortable making, please lend me your support in venturing there
<rektide> (i'm remarkably happy with that phrasing asking forebearance. so much better than an apology.)
<marcoceppi> rektide: Outside of that, I suppose Google would be best; Google for juju and ubuntu and see what people are doing. If you're interested in seeing changes to charms being made, we have http://manage.jujucharms.com/recently-changed
<rektide> haven't been attending my blogroll as much these past couple weeks- i'd missed the GUI inspector. super awesome! http://www.jorgecastro.org/2013/09/19/here-comes-the-juju-gui-inspector/
<marcoceppi> rektide: yeah, the GUI is really turning in to this awesome piece of Juju UX
<marcoceppi> I mean, it was already pretty awesome, they set the bar pretty high IMO
<rektide> ++
<Nik_> Hi. I'm not able to get anyone on the #maas channel to help with the issue I'm having. Is anyone experienced with maas available?
<marcoceppi> Nik_: we can certainly try to help you. Are you using juju with maas?
<Nik_> yes. though it's a maas tags related issue
<Nik_> I'm trying to tag nodes that have 4 or more disks
<Nik_> the rule is
<Nik_> "definition": "count(//node[starts-with(@id,\"disk\") and @class=\"disk\"]) >= 4",
<Nik_> so some machines get tagged
<Nik_> but some don't
<Nik_> and I verify against lshw output on one of the machines that don't get tagged
<Nik_> xmlstarlet sel -T -t -v 'count(//node[starts-with(@id,"disk") and @class="disk"]) >= 4' /tmp/lshw.xml
<Nik_> that outputs "true"
<Nik_> so all seems in order, but I'm not sure what approach to take to debug it
<marcoceppi> oh boy, this is way above my head sadly
<Nik_> well thanks for trying. maybe someone comes around and has an answer :)
<marcoceppi> Nik_: you can try asking on http://askubuntu.com with the MAAS tag, it'll at least show up in the MAAS channel and the question will persist
<Nik_> thanks marcoceppi. I'll try that if I don't get anything
<marcoceppi> Nik_: cheers and good luck!
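One way to narrow Nik_'s problem down is to run the tag's XPath over saved lshw output from each machine, using the same xmlstarlet invocation quoted above (a sketch; the /tmp/lshw-*.xml paths are hypothetical):

    for f in /tmp/lshw-*.xml; do
      printf '%s: ' "$f"
      xmlstarlet sel -T -t -v 'count(//node[starts-with(@id,"disk") and @class="disk"]) >= 4' "$f"
      echo
    done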
#juju 2013-10-05
<zradmin_> hey guys, trying to upgrade an environment to juju 1.15.01 and while i see the nodes are trying to run they are complaining about not being able to find the matching tools
<zradmin_> I've run sync-tools and upgrade-juju --upload-tools already
<sarnold> zradmin_: do you need to ...
<sarnold> never mind :)
<zradmin_> sarnold: :)
<zradmin_> sarnold: it looks like this http://pastebin.ubuntu.com/6194603/
<zradmin_> ok i think i found a bug... every time i try and set the version of tools to upgrade to, it increments the version the agents are trying to upgrade by .1 (i.e. 1.15.0.0 is what I'm trying to upgrade to, but after a few attempts the agents are now searching for 1.15.0.5)
<lazyPower> OMG AMULET TALK TODAY!!
<lazyPower> </fanboy>
#juju 2013-10-06
<AskUbuntu> Does the juju charm for postgresql automatically replicates if set up manually? | http://askubuntu.com/q/354344
#juju 2014-09-29
<badsyntax> can juju manage containers on ec2? i'm getting a "Sub-containers not supported" message in the juju-gui.
<lazyPower> badsyntax: 1 moment, let me bootstrap and try. Shouldn't be an issue though
<lazyPower> i admittedly haven't tried with the new GUI, and its a bit early for the GUI folks to be poking around
<lazyPower> badsyntax: appears it works without an issue on AWS - this may be a gui defect surfacing. http://paste.ubuntu.com/8453759/
<lazyPower> paste of a container started in case that comes into question: http://paste.ubuntu.com/8453794/
<mwenning> lazyPower, good morning!
<lazyPower> Morning mwenning o/
<mwenning> lazyPower, I saw your comment about the charm - my main problem was that squid-deb-proxy only allows certain repos - linux.dell.com is not one of them.
<mwenning> not sure when this went in.
<lazyPower> mwenning: you can add that to the squid config
<mwenning> lazyPower, yes that's what I did.
<lazyPower> i had to do that with ppa.ubuntu.com, as those weren't in by default either
<lazyPower> Do you need me to fish up the config you need to edit?
<mwenning> So do I just add this to the README?
<lazyPower> the fact you had to update squid?
<lazyPower> I would add it under caveats: if you're behind a squid-deb-proxy, the host needs to be added. I wouldn't think that affects many installations, but the info is there if they find an issue fetching the packages.
<mwenning> lazyPower, if people actually want to use the charm they need to know this.  So I assume I need to add something to the doc
 * lazyPower nods
<mwenning> ok cool.
<lazyPower> did you capture the test run output?
<mwenning> lazyPower, I'll ping you a bit later, it's still complaining about something.
<lazyPower> ack. Looking forward to seeing resolution on this one for ya mwenning
<mwenning> me too :-)
<tvansteenburgh> if i force-terminate a bunch of machines, should i expect juju to eventually destroy all the services on those machines?
<lazyPower> mwenning: not to be a pest, but are you documenting your papercuts as you run into them?
<lazyPower> tvansteenburgh: if you leave the bootstrap node intact you have to follow up and destroy the services.
<lazyPower> tvansteenburgh: inversely, you can juju deployer -T, which will do this for you.
<mwenning> lazyPower, is there a way to put constraints on amulet?  I've got one machine marked as "bootstrap"; I'd like to tell amulet to, um use that as the bootstrap node.
<lazyPower> mwenning: i'm assuming you mean consuming maas tags?
<mwenning> lazyPower, yup.
<mwenning> yup on your previous question as well.  main one was the squid-deb proxy
<tvansteenburgh> yes you can pass constraints to the add() method
<lazyPower> hazmat: does deployer understand maas tagging?
<mwenning> tvansteenburgh, I'm assuming that d=amulet.Deployment() is what bootstraps juju, so that's before I can use add
<lazyPower> mwenning: if deployer supports maas tagging, the same format you would put in a bundle you specify inline in the test, and it should just hand it off.
<hazmat> lazyPower, for constraints, its pass through .. ie. yes
<lazyPower> i thought so, awesome.
<hazmat> constraints: "tags=mymaastag"
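In a deployer bundle that pass-through might look like this (a sketch; the deployment, service, and tag names are made up):

    # bundle.yaml (sketch)
    my-deployment:
      series: trusty
      services:
        ubuntu:
          charm: cs:trusty/ubuntu
          constraints: "tags=mymaastag"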
<lazyPower> not sure about on the bootstrap though mwenning
<mwenning> lazyPower, hazmat, so d=amulet.Deployment( 'constraints "tag=bootstrap"');
<mwenning> then d.add("tag=")
<mwenning> to shut it back off
<mwenning> sorry d.add( set-constraints "tag=")
<mwenning> lazyPower, ok forget that for now ;-)
<mwenning> I'll dump juju status in a pastebin, stby
<lazyPower> ack
<mwenning> lazyPower, https://pastebin.canonical.com/117818
<lazyPower> mwenning: interesting. looks like the first machine choked on the series?
<mwenning> juju 1.20.8
<mwenning> lazyPower, correct.
<mwenning> relation sentry wants to be precise
<lazyPower> can you file a bug about this against amulet?  launchpad.net/amulet
<mwenning> lazyPower, sure will do.
<gnuoy> jamespage, two branches to hopefully unbreak neutron + juno
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-fix-1372893/+merge/236370
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-fix-1372893/+merge/236371
<tvansteenburgh> hazmat: i'm almost done with the deployer patch to force-terminate machines. do you want that to be the default for an env reset?
<hazmat> tvansteenburgh, sounds good to me
<hazmat> should be faster and avoids having to do the inane code for retrying every single unit..
<tvansteenburgh> hazmat: any reason to even make non-forced an option? right now i'm passing this 'forced' param around, but maybe it should just always force, thoughts?
<hazmat> tvansteenburgh, no.. force always.. its faster
<tvansteenburgh> hazmat: sweet, that simplifies things
<hazmat> tvansteenburgh, if they're tearing down the entire env.. there's not much cause to do things slowly imo
<hazmat> hmm
<hazmat> tvansteenburgh, there is some call for a non forced option.. for external resources management.. ie. aws-elb or dns charm wanting to clean up there
<tvansteenburgh> hazmat: well i'm still destroying services b4 terminating machines, doesn't that cover the cleanup?
<tvansteenburgh> i have to do that, otherwise the machines die but the services hang around forever
<hazmat> tvansteenburgh, not really. unless you wait for units to die and resolve them on error, they won't have all the lifecycle hooks invoked, which is what the current impl tries to do.. and then terminate machines after that (at which point the force doesn't matter)
<tvansteenburgh> bleh
<hazmat> tvansteenburgh, yeah.. force by default is fine for now though. the commands in deployer need to be split out at some point so they can take options without more global option pollution.
<tvansteenburgh> hazmat: sounds good
<hazmat> and orderly destroy can be revisited there.. mostly reset is about geddon.. and speed of nukes counts :-)
<tvansteenburgh> hazmat: https://code.launchpad.net/~tvansteenburgh/juju-deployer/force-terminate-machines/+merge/236381
<hazmat> tvansteenburgh, danke
<marcoceppi> cory_fu: we should talk about the services framework being the default charm template, I don't think it's a good option
<cory_fu> Why not?
<marcoceppi> I think it's too specialized
<marcoceppi> for a default
<cory_fu> I disagree.  It's a general purpose pattern that solves several issues that are common to most charms in a consistent way that makes the charms much easier to understand and follow.  It also encourages making the charms unit testable
<cory_fu> It's a new pattern, and I can see maybe not wanting to make it the default yet, until it is used more, but I would definitely recommend it for any charm.
<marcoceppi> I think it requires too much investment to get started compared to some of the other templates
<cory_fu> That doesn't make any sense
<cory_fu> It doesn't require any investment; that's the point of a template
<marcoceppi> the pattern
<marcoceppi> itself
<cory_fu> You charm create, and then you fill in the actions and requirements
<cory_fu> If anything, it requires *less* investment, because there are fewer places that you have to add code and reason about interactions
<marcoceppi> I think we should drop the notion of a default altogether and instead have it prompted on first run and saved as a user setting
<marcoceppi> IMHO ^
<cory_fu> I'm not averse to that
<cory_fu> I know that was your original idea for how the templates would work, and I'm entirely ok with that
<marcoceppi> going forward, since "services framework" isn't really clear, would this be better summarized as a declarative framework?
<marcoceppi> or rather, a declarative template?
<cory_fu> Yeah, the "name" sucks, we just hadn't come up with a better one yet.  Declarative framework is ok, but still not great.
<cory_fu> marcoceppi: How about python-managed
<bic2k> Quick question, does juju 1.20.7 support ec2 availability zone constraints per region? My region is out of a container type in the default AZ. Just need to specify us-east-1a or us-east-1c
<bic2k> looks related to this bug #1183831
<mup> Bug #1183831: unable to specify availability zone <charmers> <constraints> <ec2-provider> <landscape> <reliability> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1183831>
<bic2k> I don't see any documentation specific to ec2 machine constraints on the juju site
<lazyPower> bic2k: see the add-machine directive listed in HA docs: https://juju.ubuntu.com/docs/charms-ha.html
<bic2k> lazyPower: perfect, thanks. Didn't think about it in terms of HA since it was related to availability :-)
<lazyPower> bic2k: i have a bug open discussing where it should go: https://github.com/juju/docs/issues/187
<lazyPower> feel free to chime in
<hazmat> bic2k, juju will auto-balance units across multiple azs for a service. on a per-unit basis it's not a constraint per se but a placement directive for a given unit: --to="zone=us-east-1a"
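Spelled out (a sketch; the AZ comes from the question above, and exact --to support varies by juju-core version):

    # per-unit placement as hazmat quotes it
    juju deploy mysql --to zone=us-east-1a
    # or pre-create a machine in the zone and place onto it
    juju add-machine zone=us-east-1a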
<Subbu__> Hi I am deploying openstack with juju charms from: ~openstack-charms/charms/trusty/openstack-dashboard/next deployment to lxc:1 is all good, but horizon login fails: I see this from error.log from apache2:: RSA certificate configured for 10.0.3.36:433  does NOT include an ID which matches the server name
<arosales> mbruzek: is the VPN some thing we can share from Brian's docs
<arosales> mbruzek: just point it at the new setup for thumper?
<mbruzek> arosales: I don't know, I may be able to share it but I was given an id
<arosales> ah ok so there wasn't a general one that was provided in Brian's doc
<arosales> mbruzek: no worries on sharing your ID.
<arosales> mbruzek: do you know if akash was able to reproduce the problem on the other maas set up?
<mbruzek> I haven't talked with him in a bit now.
<arosales> mbruzek: do you know who was going to try that?
 * arosales doesn't see akash in here
<mbruzek> arosales: I do not know
 * arosales pinged in #canonical.
<arosales> mbruzek: in order to keep this thing going you may want to give that new maas a try before you eod
<mbruzek> arosales: The new one ?
<arosales> correct
<mbruzek> The one that Canonical set up or ?
<jamespage> Subbu__, you are probably getting the standard snakeoil certificate which is auto-generated
<jamespage> horizon also listens on port 80 as well
#juju 2014-09-30
 * thumper sighs
<thumper> axw: you may recall a fix for this
<thumper> upgraded an environment from 1.19.4 to 1.20.8
<thumper> now I see this:
<thumper> machine-0: 2014-09-30 05:47:53 ERROR juju.worker.instanceupdater updater.go:267 cannot set addresses on "0": cannot set addresses of machine 0: cannot set addresses for machine 0: state changing too quickly; try again soon
<thumper> machine-0: 2014-09-30 05:47:53 ERROR juju.worker runner.go:218 exited "instancepoller": cannot set addresses of machine 0: cannot set addresses for machine 0: state changing too quickly; try again soon
<thumper> machine-0: 2014-09-30 05:47:55 ERROR juju.worker runner.go:218 exited "machiner": cannot set machine addresses of machine 0: cannot set machineaddresses for machine 0: state changing too quickly; try again soon
<thumper> every 10s or so
<thumper> actually, every 5s
<thumper> actually... every 3s
<axw> thumper: hmm, I thought that was fixed...
<thumper> it is the worker restart delay
<thumper> seems not
<axw> I will investigate
<thumper> could be because I was on a dev version before
<axw> thumper: 1.19.4 was broken
<axw> https://bugs.launchpad.net/juju-core/+bug/1334773
<mup> Bug #1334773: Upgrade from 1.19.3 to 1.19.4 cannot set machineaddress <landscape> <lxc> <maas-provider> <precise> <regression> <upgrade-juju> <juju-core:Fix Released by axwalk> <juju-core 1.20:Fix Released by axwalk> <https://launchpad.net/bugs/1334773>
<thumper> ok... so I now have an environment which is broken, how do I fix it?
<axw> umm
<axw> thumper: I *think* you would have to resort to mongo surgery
<axw> renaming the "scope" fields to "networkscope"
<gnuoy> jamespage, while prepping the mp for cells I noticed that I was carrying a fix for a neutron-api/nova-cc endpoint race. I've broken it out into a small mp if you have time https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-fix-endpoint-race/+merge/236459
<underyx> hey there
<underyx> I'm writing a charm where the install hook needs to install a package hosted on another service
<underyx> is it okay to block execution in the install hook until the relation to the package server is live?
<jamespage> gnuoy, +1
<gnuoy> thanks
<jamespage> gnuoy, what does your priority list look like right now?
<gnuoy> jamespage, I've got about 1.5 to 2 hours work to do which I could really do with getting finished but then I'm free for urgent studd. does that help ?
<gnuoy> s/studd/stuff/
<jamespage> gnuoy, I was just looking through my list of features we still need to land
<jamespage> gnuoy, l2population driver and vxlan overlays being two we must do
<gnuoy> jamespage, the l2pop branches are done
<gnuoy> I can take a look at vxlan
<jamespage> gnuoy, OK - looking at l2pop now then
<jamespage> gnuoy, awesome on vxlan - I think we probably just need a toggle option in neutron-api
<jamespage> gre|vxlan
<jamespage> gnuoy, which charms does l2pop impact?
<gnuoy> jamespage,
<gnuoy> lp:~gnuoy/charms/trusty/neutron-openvswitch/next-l2-population
<gnuoy> lp:~gnuoy/charms/trusty/neutron-api/next-l2-population
<gnuoy> lp:~gnuoy/charms/trusty/quantum-gateway/next-l2-population
<jamespage> gnuoy, can you propose those please?
<gnuoy> jamespage, sorry, it looks like I didn't create mps. I think that was because I hadn't had a chance to check it was blocking the propagation of multicast
<jamespage> gnuoy, don't block on that :-)
<jamespage> trusty upstream!
<jamespage> gnuoy, ah - I just spotted that the neutron-api charm is not nsx enabled just yet - but that was not working on trusty yet so that's OK
 * jamespage makes a note to enable that once we get to trusty support with upstream
<jamespage> gnuoy, just looking through your charms - neutron-api needs:
<jamespage> mechanism_drivers = openvswitch,l2population
<gnuoy> ack
<jamespage> the l2_population flag is only used by the agents on neutron-gateway and neutron-openvswitch (which is correct)
<jamespage> gnuoy, I'd probably unconditionally enable the driver and use the agent flag to turn it on and off
<gnuoy> ok, I'll take a look at that
<jamespage> gnuoy, thanks
<underyx> huh, does the install hook have to finish execution before a relation can be added?
<jamespage> underyx, it does yes
<jamespage> underyx, well the relation can be added as soon as you deploy the charm, but its hook won't run until after install->config-changed->start has completed
<underyx> jamespage, so if I need a relation to complete installation, can I block execution in the install hook until I get the required data from a relation?
<jamespage> underyx, I'd just move what you are blocking for into the -changed hook of the relation - as soon as the remote service signals its done via a changed execution, you can complete things
<jamespage> underyx, the install hook does not need to complete the install, if you see what I mean
<jamespage> it's more 'init', rather
<underyx> alright, that makes sense
<underyx> thanks!
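A sketch of the pattern jamespage describes - defer the blocking work to the relation's -changed hook (the relation name and the "url" setting are made up; relation-get is a standard hook tool):

    #!/bin/sh
    # hooks/pkgserver-relation-changed (sketch)
    set -e
    url=$(relation-get url)
    if [ -z "$url" ]; then
        # remote service hasn't published yet; a later -changed run will finish up
        exit 0
    fi
    # complete the installation the install hook deferred
    wget -O /tmp/pkg.deb "$url"
    dpkg -i /tmp/pkg.deb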
<thumper> axw: still around?
<axw> thumper: yo
<thumper> axw: do you have the commit handy where you fixed that problem?
<axw> I'll have a look
<thumper> axw: I'm going to look at mongo surgery locally to fix my db
<thumper> I don't really want to re-deploy
<axw> thumper: https://github.com/juju/juju/commit/80ca2ac1765e5f1ec555939006d14f23325da7d8#diff-0854f7d657770b8adf41defe45fc8cd1
<thumper> axw: ta
<axw> np
<thumper> I should really write this down somewhere
<thumper> gah
<thumper> for the love of all things good...
<thumper> why does a machine have both "addresses" and "machineaddresses" ?
<thumper> and why is there overlap in them...
 * thumper thinks
<marcoceppi> for the glory of satan, of course thumper
<thumper> I think, for those following at home, that one is what the provider says
<thumper> and one is what the machine says
<thumper> o/ marcoceppi
<marcoceppi> \o
<gnuoy> jamespage, l2pop mps:
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-l2-population/+merge/236474
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/quantum-gateway/next-l2-population/+merge/236482
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-openvswitch/next-l2-population/+merge/236477
<jamespage> gnuoy, all merged - thanks!
<gnuoy> jamespage, fantastic, thanks
<gnuoy> :q
<ayr-ton> Is it possible to move a service/unit to another environment?
<underyx> has anyone ever successfully used charmhelpers.contrib.python.packages with a manually specified index URL for installation?
<underyx> because reading this code it seems to be completely broken
<underyx> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/python/packages.py#L41
<underyx> if I'm reading it correctly, it checks the keyword arguments if they are in the tuple specified here
<underyx> but an argument in python can't contain a hyphen
<underyx> have I got this completely wrong, or did this really go unnoticed for half a year?
<Odd_Bloke> underyx: You can pass hyphenated kwargs if you use ** at the call-point.
<underyx> Odd_Bloke, right, I just realized and tested if that would work
<underyx> and yep, it does
<underyx> thanks!
<Odd_Bloke> underyx: http://paste.ubuntu.com/8465842/
<Odd_Bloke> :)
<underyx> it wouldn't be an issue if I were to fix this in charmhelpers though, right?
<underyx> if I retained backwards compatibility of course
<underyx> it's just not very pythonic to have to do that
<Odd_Bloke> underyx: I agree that it's un-Pythonic; don't know the answer to your question though. :p
<jamespage> gnuoy, I pushed a trivial to neutron-api to change the default mcastport - it was conflicting with nova-cc and I ended up with a broken cluster
<gnuoy> kk
<gnuoy> jamespage, I'm thinking about adding a relation between neutron-gateway and neutron-api so that neutron-api can dictate config like l2pop and network type driver to the neutron-gateway. What do you think?
<jamespage> gnuoy, +1
<gnuoy> ta
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/quantum-gateway/add-neutron-plugin-api-rel/+merge/236533 once that has landed I'll do the vxlan change since I'll use the same mechanism as l2pop
<jamespage> gnuoy, merged - thanks!
<gnuoy> jamespage, fantastic, thanks
<lazyPower> https://github.com/juju/docs/pull/188 - large refactor on the relationship docs
<lazyPower> kwmonroe: ^
<lazyPower> marcoceppi: when you get time, i'd like your +/- 1 on this as well. ty
<kwmonroe> ack lazyPower- i'll fetch my spectacles shortly.
<marcoceppi> lazyPower: you've got feedback
<lazyPower> ty
<jamespage> gnuoy, we might need to make the neutron db migration conditional on >= juno
<gnuoy> jamespage, ok
<gnuoy> jamespage, I'll add that condition in the morning. fyi I've prepped the vxlan branches but things aren't looking too chipper when I try and deploy a guest with vxlan running on the overcloud
<jamespage> gnuoy, ta
<bloodearnest> hazmat: heya, trivial deployer MP for your consideration: https://code.launchpad.net/~bloodearnest/juju-deployer/run-build-cmds-in-shell/+merge/236596
<hazmat> bloodearnest, thanks looks good
<bloodearnest> hazmat: cool, thanks. I have a few ideas for further changes I'd like to run by you before I spend time on them. Shall I drop you an email?
<hazmat> bloodearnest, sounds good
<hazmat> bloodearnest, or we can g+ now? there's one other pending merge/pull request from yesterday
<hazmat> either way
<bloodearnest> hazmat: g+ is good - you mean chat or video?
<mwenning> stokachu, good afternoon!
<stokachu> mwenning, hey there
<mwenning> hi, trying to get cloud-install to run a second time - it worked once OK, I tore it back down with -u -
<mwenning> now when I try again it loops on error: kvm container creation failed: exit status 1
<mwenning> any ideas?
<stokachu> mwenning, what does kvm-ok report?
<mwenning> /dev/kvm exists
<mwenning> KVM acceleration can be used
<stokachu> ok hmm
<stokachu> does virsh list show an existing vm or did it get removed?
<mwenning> shows nothing
<stokachu> ok hmm
<stokachu> mwenning, does juju give you any other information other than 'kvm creation failed'?
<stokachu> probably not iirc
<bloodearnest> hazmat: thanks for that, gives me confidence to get started, as it seems we're on the same page.
<stokachu> mwenning, you could also do a juju destroy-environment local && juju bootstrap
<stokachu> manually to see what happens
<mwenning> looks like 0 was created ok, after that I get a succession of "1":   "2" etc
<mwenning> ok.
<stokachu> then run juju debug-log in a separate terminal
<stokachu> and juju deploy 'service-name'
<stokachu> see if it gives us any other information
<mwenning> stokachu, found it .  Firewall permissions had expired - It couldn't access any archives.  Sorry to bother.
#juju 2014-10-01
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/haproxy-https-multi-network/+merge/236674
<gnuoy> looking
<gnuoy> jamespage, just bikeshedding but if len(cluster_hosts) < 1: rather than if cluster_hosts: ?
<jamespage> it might have just the local address in it - in which case we should ignore it
<gnuoy> sorry, I meant if not cluster_hosts:
<gnuoy> jamespage, approved
<jamespage> gnuoy, thanks - I'll probably wait for dosaboy and xianghuis ipv6 stuff to land before landing that one
<jamespage> otherwise we'll get all twisted up
<gnuoy> sure
<bloodearnest> hazmat: another trivial deployer MP for ya: https://code.launchpad.net/~bloodearnest/juju-deployer/no-relation/+merge/236679
<hazmat>  bloodearnest thanks
<jamespage> gnuoy, could you double check me on https://code.launchpad.net/~james-page/charm-helpers/haproxy-https-multi-network/+merge/236674
<gnuoy> jamespage, looking
<gnuoy> jamespage, if/when you have a moment https://code.launchpad.net/~gnuoy/charms/trusty/openstack-dashboard/lp-1373714/+merge/236720
<gnuoy> jamespage, still looks good
<lazyPower> marcoceppi: JoshStrobl:  - https://github.com/juju/docs/pull/190
<lazyPower> JoshStrobl:  so i heard you hate needlessly recompiling the entire hash, what if i said you could get live-building fairly easily with this mod :)
<marcoceppi> feedback
<lazyPower> marcoceppi: ack, and will do - let me hack up this watch: target first and inc. fix
<marcoceppi> lazyPower: I literally was moments away from clicking Comment on "Also, could you add a make watch target"
<lazyPower> i wanted you to nuke my approach from orbit before i invested the 10 minutes getting that working ;)
<marcoceppi> watchmedo works well enough
<lazyPower> make target isn't passing the variable though
<lazyPower> watchmedo shell-command --patterns="*.md" --recursive --command='echo "${watch_src_path}"' . is what it shoudl look like
<lazyPower> the ${watch_src_path} is coming up as nil, so its building the entire tree
<marcoceppi> where is watch_src_path set?
<lazyPower> it bubbles up from watchmedo
<lazyPower> weird, apparently you cant escape $'s in a makefile
<JoshStrobl> lazyPower, commented on pull 190
<lazyPower> hmm, i like that
 * JoshStrobl edited his comment
<lazyPower> same approach, cleaner to read
<JoshStrobl> yep
<JoshStrobl> either way, it'll just call the build func once
<JoshStrobl> if there is no arg => same as it was before, otherwise just the one file we define
<lazyPower> pushed
<lazyPower> no luck on getting the watch target to populate correctly tho, its documented in that PR - moving on.
<lazyPower> JoshStrobl: marcoceppi - cleaned up, rdy 4 eyeballs
<JoshStrobl> lazyPower, thanks for the addition of documentation!
<lazyPower> thats icing on the cake
<JoshStrobl> Yea :) Everything else looks great man.
<lazyPower> i'm cowboying this marcoceppi, unless you ninja me
<marcoceppi> lazyPower: why cowboy?
<marcoceppi> Give me a chance to test, its not that urgent
<lazyPower> marcoceppi: i'm hacking on docs today. *points @ his cards on kanban* - i want this
<marcoceppi> Okay, well patch your local branch
<ayr-ton> Has anyone hit "ERROR juju.provider.common bootstrap.go:122 bootstrap failed: cannot start bootstrap instance: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No matching node is available.)"
<ayr-ton> when trying to juju bootstrap on maas?
<lazyPower> ayr-ton: are you specifing a zone, or --to when you attempt to bootstrap?
<marcoceppi> Ayr-ton. What arch is your machines?
<ayr-ton> lazyPower: no, just a bootstrap with --upload-tools
<ayr-ton> amd64
<marcoceppi> Ayr-ton are any less than 1g ram?
<ayr-ton> Here's the full log: http://paste.ubuntu.com/8473883/
<ayr-ton> marcoceppi: No. 2GB, for test purposes.
<ayr-ton> The command I tried was: juju bootstrap --constraints="mem=512M" --debug
<ayr-ton> And I also tried: juju bootstrap --upload-tools --constraints="mem=512M" --debug
<marcoceppi> 409 means Maas doesn't have any instances matching your request. Either there are none in a ready state or none that match constraints
<ayr-ton> marcoceppi: boot images? Or nodes?
<marcoceppi> Ayr-ton not sure what you mean
<ayr-ton> marcoceppi: Like, an instance matching my request under maas?
<JoshStrobl> hmm, anyone have some git foo here? merged the changes from juju/docs to my fork but it decided to create a new commit with a merge message and push it to my master (so now it is unnecessarily ahead of juju/docs). tips?
<marcoceppi> Yes, a node in maas
<ayr-ton> marcoceppi: Or a boot image matching my request? Or a node?
<ayr-ton> marcoceppi: So, it is my first try on maas. Do I need to manually add a node before juju bootstrap? Or will juju automatically add nodes?
<marcoceppi> Ayr-ton okay, yeah. You need to enlist machines in Maas first. Basically, if Maas says there are no nodes you have to add them first
<marcoceppi> Ayr-ton are these physical machines?
<ayr-ton> marcoceppi: Yep. One physical machine.
<lazyPower> JoshStrobl: git reset --hard hash
<mbruzek> OK #juju question for you.
<ayr-ton> marcoceppi: A node can be both a virtual machine and a physical machine?
<lazyPower> and re-try the merge. Should be good to just checkout master, and git pull upstream master.
<marcoceppi> Ayr-ton yes. You can have a virtual machine
<mbruzek> The Juju charm cannot download the charm payload.  But when I try the download on the Juju host it can download it.
<mbruzek> http://bb01.mariadb.net/10.0/bintar_10.0_rev4416/mariadb-10.0.14-linux-ppc64le.tar.gz
<lazyPower> mbruzek: when you say juju host, are you talking the machine that is hosting the charm?
<JoshStrobl> lazyPower, https://github.com/JoshStrobl/docs
<arosales> mbruzek: this is from the actual machine, correct?
<mbruzek> Why can the charms not see that url?
<JoshStrobl> lazyPower, doing a git reset hard isn't helpful after the commit merge has been push to origin/master :/
<ayr-ton> marcoceppi: And how do I add a physical node? Like, how do I add the machine that is running the maas master as a node? Just for tests, not for prod =x
<lazyPower> JoshStrobl: at that point, you have to force push, and make sure you're rewinding to a state that is pre-upstream so you ff-only merge.
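A rough sketch of that recovery, assuming the fork is the "origin" remote and juju/docs is "upstream" (the remote names are assumptions):

    git checkout master
    git reset --hard upstream/master    # rewind past the accidental merge commit
    git push --force origin master      # rewrite the fork's master to match upstream
    git pull --ff-only upstream master  # later syncs stay fast-forward only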
<lazyPower> mbruzek: you guys in /topic?
<marcoceppi> ayr-ton: Okay, maas is going to be painful unless you have more than 10 machines, you can't enlist the machine if it's running maas master
<JoshStrobl> lazyPower, okay, I owe you two beers now.
<lazyPower> JoshStrobl: <3
<marcoceppi> ayr-ton: maas runs on its own machine, typically it controls the DNS and DHCP for the environment
<marcoceppi> ayr-ton: so, typically, a machine is booted on the network, gets a DHCP lease from maas, maas does an enlistment of it, records its hardware, and turns off the machine, then it shows in your dashboard so you or a tool like juju can provision it
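Roughly what that flow looks like from the operator's side with the MAAS CLI of that era (profile name, URL, and key are placeholders):

    maas login myprofile http://maas-master/MAAS/api/1.0 <api-key>
    maas myprofile nodes list          # newly enlisted nodes show up here
    maas myprofile nodes accept-all    # commission the enlisted nodes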
<JoshStrobl> lazyPower, not entirely sure what the end consensus will be about https://github.com/juju/docs/pull/179 . There was a short dialog between evilnick and me, but a consensus wasn't reached in the end about removing the "revision is now deprecated" statement.
<JoshStrobl> https://github.com/juju/docs/pull/179
<JoshStrobl> oops, already added it in the main message
<JoshStrobl> I need to lay off the energy drinks
<lazyPower> Yeah, i read that one. I'm leaving it alone for now - as we're cleaning up revision files and don't have it in the template to be added to .gitignore/.bzrignore by default.
<lazyPower> so until that time that we get it cleaned up from the templates, i'm not vaping the doc page
<JoshStrobl> fair enough
<lazyPower> I FIGHT FOR THE USERS!
<ayr-ton> marcoceppi: So, in that environment: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
<JoshStrobl> whoami > flynn uname -a -> SolarOS 4 something...
<ayr-ton> marcoceppi: You do have a maas master in the bigger machine, right? And the three nodes are all nucs?
<marcoceppi> ayr-ton: the bigger machine is the maas master, but it's not very powerful, it's just an old desktop PC that acts as a network bridge to the rest of my network. So it's got two nics and is the gateway for the switch all the nucs are connected to
<lazyPower> JoshStrobl: http://goo.gl/kplXgm
<JoshStrobl> lazyPower, yea I remember seeing that a few years ago
<lazyPower> Special Features DVD
<lazyPower> or BR, respectively
<marcoceppi> ayr-ton: also, the maas master doesn't get enlisted in maas
<ayr-ton> marcoceppi: Got it. So, to have something on the maas master enlisted in maas, I need to boot up a virtual machine inside it, right?
<marcoceppi> ayr-ton: right, you can use virtual machines, I recommend virsh/kvm as it's a supported type in MAAS
<marcoceppi> ayr-ton: https://maas.ubuntu.com/docs/nodes.html#virtual-machine-nodes
<marcoceppi> those can live on maas master without incident
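A sketch of creating such a KVM node with virt-install so that it PXE-boots and MAAS can enlist it (name, sizes, and bridge are illustrative):

    virt-install --name maas-node-1 --ram 2048 --vcpus 2 \
        --disk size=20 --network bridge=br0 \
        --pxe --boot network,hd --noautoconsole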
<ayr-ton> marcoceppi: Interesting. So, I could have a storage node with like 60GB of RAM
<ayr-ton> marcoceppi: And use this for the maas master and the other nodes.
<marcoceppi> ayr-ton: depending on how big the machine that the maas master is running on is, sure
<ayr-ton> marcoceppi: Thanks. I will try that (:
<marcoceppi> ayr-ton: best of luck, it can be a bit finicky to set up. I recommend virt-manager, it makes creating KVMs with qemu/libvirt really easy
<ayr-ton> marcoceppi: thanks (:
 * marcoceppi will make a blog post about it
<mbruzek> marcoceppi: Any idea why the subordinate icons do not show up in a local deployment in the Juju GUI?
<mbruzek> marcoceppi: The local "regular" charms show up with icon.
<mbruzek> marcoceppi: ^^
<marcoceppi> mbruzek: a few reasons
<mbruzek> marcoceppi: Any of them things I can fix?
<marcoceppi> mbruzek: are you in a hangout
<mbruzek> marcoceppi: going there
<jrwren> jcastro: ping
<jcastro> yo
<jcastro> marcoceppi, hazmat, rick_h_: can you guys give this a once over? https://code.launchpad.net/~dweaver/orange-box/orange-box-robust-sync-charmstore/+merge/236755
<jcastro> jrwren, yeah
<jrwren> jcastro: you know anything about trusty/elasticsearch not opening a port after running juju expose?
<jcastro> hmm, no
<jcastro> which provider?
<jrwren> jcastro: Do you know for sure if it has worked on ec2?
<jcastro> it has for sure worked on ec2
<marcoceppi> kirkland: in re orange-box-sync, do you not need the people.canonical mirror anymore?
<jrwren> jcastro: it's not responsive for me :(
<jrwren> jcastro: precise charm works, but trusty does not.
<jcastro> jrwren, how long since you deployed it?
<jcastro> huh ... ssh into the unit and see what's up?
<kirkland> marcoceppi: we haven't gotten around to it yet;  but if that's stable, your tarball would be *much* preferred
<hazmat> marcoceppi, the code for getall looks strange: mr.list() -> config.sections
<jrwren> jcastro: hour or so.
<jrwren> jcastro: yes. I'll do that.
<marcoceppi> hazmat: yeah it tried to replicate / be compatible with the original charm getall from days gone by
<hazmat> marcoceppi, what i don't see is the call for getBranchTips to actually get all the charms
<marcoceppi> hazmat: it's in mr
<hazmat> marcoceppi, in here ? http://bazaar.launchpad.net/~charm-toolers/charm-tools/1.4/view/head:/charmtools/mr.py#L96
<marcoceppi> hazmat: yes
<hazmat> marcoceppi, where?
<marcoceppi> it does a full bzr branch, not a checkout
<marcoceppi> oh, where does it get the lists of charms
<hazmat> marcoceppi, yup
<jrwren> jcastro: i don't understand the ansible logs, which is part of my problem.
<hazmat> charm getall -> just does mr.list() for retrieving charms
<jcastro> jrwren, noodles is the author, I'd start with him. He's in europe so I recommend mail
<jrwren> jcastro: thanks.
<jcastro> when you find the problem, ask him to add it as a test
<kirkland> dweaver: marcoceppi has a minimal tarball of all charms
<kirkland> marcoceppi: what's the url for that?
<marcoceppi> hazmat: charm update has it
<kirkland> marcoceppi: and how often is it synced?
<marcoceppi> hazmat: it's super freakin convoluted
<marcoceppi> kirkland: I was just finishing it, it'll happen daily
<marcoceppi> kirkland: http://people.canonical.com/~marco/mirror/juju/charmstore/
<kirkland> marcoceppi: and the resulting tarball is like 15MB, right?
<marcoceppi> kirkland: yes
<kirkland> marcoceppi: sweet;  any chance we could get this merged into juju-gui, and at jujucharms.com (eventually)?
<marcoceppi> essentially a shallow checkout without version control of all precise and trusty promulgated charms
<jrwren> jcastro: yeah, I found the issue. I'll drop him a note. I don't think I know ansible well enough to propose a fix
<kirkland> dweaver: http://people.canonical.com/~marco/mirror/juju/charmstore/
<marcoceppi> kirkland: merged in how?
<kirkland> dweaver: http://people.canonical.com/~marco/mirror/juju/charmstore/latest.tar.gz
<kirkland> marcoceppi: for this to just be a "service" that is at jujucharms.com/latest.tar.gz or whatever
<kirkland> dweaver: so, what I'd much rather see, is that sync script just grab that tarball, and untar it
<marcoceppi> kirkland: oh, uh, I suppose. I'll talk to rick_h_ about that, but this will probably need to live directly in the charm store since that's where all the charms will be more or less
<jcastro> kirkland, I think that's more of a rick question
<kirkland> marcoceppi: jcastro: cool;  rick_h_ ?
<kirkland> dweaver: would you mind having a crack at that?
 * rick_h_ reads backlog
<dweaver> kirkland, marcoceppi thanks, that'll do the trick.
<lazyPower> jrwren: hey, i know whats up
<kirkland> dweaver: I'd imagine a wget + untar would be about 10000x more reliable and faster than bzr branch 250 times :-)
<bac> jcastro, jrwren: noodles now lives in australia
<lazyPower> jrwren: elasticsearch by default enables UFW on port 9200, which is the default elasticsearch port. You can nuke this from orbit with juju run: juju run --service elasticsearch "service ufw stop"
<dweaver> kirkland, indeed, I'll use that method, should be a lot faster too.
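The sync approach being agreed on, roughly: fetch the daily snapshot and unpack it into the local charm repository (the target path is illustrative):

    wget http://people.canonical.com/~marco/mirror/juju/charmstore/latest.tar.gz
    tar -xzf latest.tar.gz -C /srv/charm-repo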
<kirkland> dweaver: also, re: "enable-os-upgrade: false", hazmat says that'll be silently ignored on juju-core < 1.21
<kirkland> dweaver: so, good catch on the stupid \t tab
<kirkland> dweaver: but I'm going to remove the # commenting that bit out
<jrwren> lazyPower: thanks! that would do it!  Any reason to not have the charm default to that?
<kirkland> dweaver: +1, thanks
<kirkland> dweaver: cool?
<dweaver> kirkland, well, I was testing it and it might not matter, but it doesn't silently ignore it.
<kirkland> dweaver: ah, it warns, but it keeps going?
<lazyPower> jrwren: it falls under the 'secure by default' section.
<dweaver> kirkland, it prints out something like unsupported option at every juju command
<kirkland> dweaver: oh
<dweaver> kirkland, but does keep going
<kirkland> dweaver: hmm
<rick_h_> kirkland: hmm, so the goal is to get a tarball of the latest rev of all charm zips?
<kirkland> dweaver: okay, well, in the debian/control, we could depend on juju-core >= 1.21
<marcoceppi> rick_h_: yes, all promulgated charms
<jrwren> lazyPower: I thought that was expose's job. Any reason to not have the charm do that on expose?
<rick_h_> kirkland: marcoceppi bundles won't work as they're version-locked and you'd need the right version
<kirkland> dweaver: and add to the orange-box manual that you have to go and add the juju ppa
<rick_h_> marcoceppi: ok, only promulgated?
<lazyPower> jrwren: also it's documented in the readme. Given those two reasons alone, I cannot fault the author for ensuring data security.  It's worthy of filing a bug.
<jrwren> lazyPower: also, secure from what exactly? :)
<kirkland> rick_h_: oh....fudge
<jrwren> lazyPower: Huge thanks.
<lazyPower> jrwren: juju expose only interfaces with the provider firewalling solution, afaik it doesn't send any signals to the charm/service itself.
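The workaround from above, plus a more surgical alternative that leaves ufw running (the rule shown is an assumption about the desired policy):

    juju run --service elasticsearch "service ufw stop"     # disable the firewall outright
    juju run --service elasticsearch "ufw allow 9200/tcp"   # or just open the blocked port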
<rick_h_> marcoceppi: kirkland I'm tempted to just say it's a few line script to do this with the new charmstore api going live at the end of the month
<kirkland> dweaver: uh oh, that's a good point
<marcoceppi> rick_h_: right, and given that these are all deployed from local I think that no bundles working is okay?
<rick_h_> marcoceppi: right, for now.
<kirkland> dweaver: if we do the tarball thing, we'll need to remove all of the version locking of charms
<rick_h_> marcoceppi: kirkland we can move the conversation and I can tell you where we're headed
<rick_h_> marcoceppi: kirkland and maybe get you guys in on beta testing/etc but not sure about this solving the root issue atm
<kirkland> rick_h_: ok
<marcoceppi> kirkland: well, even with charm getall bundles won't work
<marcoceppi> simply because you're doing juju deploy --repository local:charm
<dweaver> kirkland, marcoceppi I'm not worried about bundles. We've been creating local versions of bundles that exist locally on the orange box and want the charms to be available locally, so we remove the version from the bundle anyway
<marcoceppi> dweaver: I figured
<marcoceppi> dweaver: the mirror will give you that
<kirkland> dweaver: right, but our current bundles in lp:orange-box-examples often specify specific versions of charms
<kirkland> dweaver:       charm: "cs:trusty/ceph-27"
<dweaver> kirkland, yes, but I've been creating local versions too that have "branch: lp:charms/ceph" instead
<kirkland> dweaver: okay
<kirkland> dweaver: well, as long as you're aware, and handle it, I think it's great
<kirkland> dweaver: we'll just need to adjust
<kirkland> dweaver: in the end, I love the idea of grabbing the snapshot tarball
<dweaver> kirkland, so do I, I'll work on including that.
<kirkland> dweaver: thanks!
<kirkland> dweaver: fyi, I've committed, pushed, and released the rest of your fixes, thanks!
<dweaver> kirkland, ty
<jcastro> marcoceppi, I get an error if I click on a person's name in the review queue
<marcoceppi> jcastro: yeah, known issue, patched, but patch isn't applying in production
<jcastro> ack
<aisrael> Hey charmers, any chance one of you could take a look at this MP? https://code.launchpad.net/~aisrael/charms/precise/cassandra/apt-gpg-key/+merge/235341
<aisrael> thanks, marcoceppi!
<ayr-ton> marcoceppi: Another question. Is it possible to deploy nova-compute on the maas master with juju, if it has enough memory? Because I can't enlist the maas master as a physical node, right?
<marcoceppi> ayr-ton: you can't do both of those
<ayr-ton> marcoceppi: So, in that environment, I would only have a storage node.
<marcoceppi> ayr-ton: but I think you can put nova-compute in a KVM
<marcoceppi> it'll be slow, but it should work
<lazyPower> marcoceppi: can i get a quick glance over - https://bugs.launchpad.net/charms/+source/haproxy/+bug/1373081
<lazyPower> this is for arosales
<ayr-ton> marcoceppi: Would the best way be to install openstack directly on the machine?
<lazyPower> dpb1, niedbalski - feel free to jump in on this as well if you've got the time.
<ayr-ton> marcoceppi: Would using MAAS make things difficult?
<marcoceppi> ayr-ton: you can't deploy nova-compute to the bare metal for maas master
<marcoceppi> ayr-ton: but you can put it in a VM
<ayr-ton> marcoceppi: Is putting nova-compute under KVM a good idea?
<lazyPower> ayr-ton: MAAS is well worth the investment of time if you're going to be orchestrating > 10 nodes. Otherwise, I've found it's just 'simpler' to get moving with KVM hosts under a manual environment, and VM snapshots to revert to a 'clean' state.
<marcoceppi> ayr-ton: if you have no physical machines, then it's a great idea.
<lazyPower> but i think marco alluded to that earlier today, so pardon my anecdotal interruption.
<ayr-ton> marcoceppi: If I use nova-compute inside a KVM I will be using nested virtualization, right?
<marcoceppi> ayr-ton: yes
<ayr-ton> marcoceppi: In your setup with NUCs, were the NUCs added as physical machines?
<marcoceppi> ayr-ton: two were, I put virtual machines on the third. One was storage, one was nova-compute, the rest on VMs
<ayr-ton> marcoceppi: So, juju deploy nova-compute deployed this on a physical machine, not a VM?
<marcoceppi> ayr-ton: in my example, yes
<ayr-ton> Awesome (:
<ayr-ton> marcoceppi, lazyPower: Okay. Thank you guys
<lazyPower> ayr-ton: happy to help in any way that i can
<niedbalski> lazyPower, ack
<marcoceppi> lazyPower: lgtm
<lazyPower> ta marcoceppi
<lazyPower> niedbalski: if you're reviewing i'll wait to address it a bit longer
<niedbalski> lazyPower, i will review it in short
<sebas5384> jose: i'm going to be at DrupalCamp Bolivia giving a session about DevOps with Drupal and Ubuntu Juju :) http://cocha2014.drupalbolivia.org/session/desarrollando-con-drupal-en-equipo
<sebas5384> if you can, please join us!
<niedbalski> lazyPower, do you have a MP for the haproxy branch?
<lazyPower> niedbalski: https://launchpad.net/~lazypower/charms/trusty/haproxy/trunk - it was a new branch, as it didn't exist in the store
<niedbalski> lazyPower, i put some comments https://bugs.launchpad.net/charms/+source/haproxy/+bug/1373081
<X-warrior> Is it possible to have more than one bootstrap machine in the same amazon region?
#juju 2014-10-02
<rick_h_> X-warrior: yes, they need their own control-bucket names I think.
<X-warrior> rick_h_, I don't think so, I just tried to bootstrap to the same region from 2 different machines and the second bootstrap removed the other bootstrap machine
<rick_h_> X-warrior: right, but in your environments.yaml you need to create a second block with a different name make sure the control-bucket field is different
<X-warrior> rick_h_, as far as I know, there is no control-bucket in environments.yaml anymore. At least not in the default options.
<X-warrior> but maybe I can add it
<rick_h_> X-warrior: right, but I bet it has a generated value. Check your .juju/environments/XXX.jenv
<X-warrior> yes, there is one in this folder
<X-warrior> and there is a control-bucket value
<rick_h_> X-warrior: I know I've got two different ec2 'environments' setup in mine and I've used this. One has a default series of precise and one trusty I use for testing.
<rick_h_> right
<rick_h_> so manually assign that in the second block in your environments.yaml to something else and you can juju bootstrap ec2 and juju bootstrap ec2again
<X-warrior> sweet I will try that
<X-warrior> ty
<X-warrior> :D
<rick_h_> good luck
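An illustrative ~/.juju/environments.yaml for what rick_h_ describes; the distinct control-bucket values are what let the two bootstraps coexist in one region (bucket names and region are placeholders; credentials omitted, since juju can read AWS keys from the environment):

    environments:
      ec2:
        type: ec2
        region: us-east-1
        control-bucket: juju-bucket-one
      ec2again:
        type: ec2
        region: us-east-1
        control-bucket: juju-bucket-two

Then "juju bootstrap -e ec2" and "juju bootstrap -e ec2again" can run side by side.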
<X-warrior> How can I update a config.yaml file from a charm with an error? I tried with upgrade-charm --force and resolved --retry but it seems that the config.yaml updates are not used
<X-warrior> Fixed it I guess. Sorry
<X-warrior> oh no :( volume-ephemeral-storage has been dropped
<X-warrior> I added a storage relation from storage to postgresql, but it seems to keep the data on the same disk as the machine not on a volume. What am I missing?
<Valduare> hows it going
<stub> How would I go about deploying cs:precise/foo to a trusty host?
<lazyPower> stub: 2 choices: --to lxc, or fork it and deploy it as a trusty charm.
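Spelled out, those two options might look like this (machine number, repository path, and charm name are placeholders):

    juju deploy cs:precise/foo --to lxc:0   # run the precise charm in a container on machine 0
    # or fork it into a local repo under trusty/ and deploy from there:
    juju deploy --repository=/path/to/repo local:trusty/foo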
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/nova-compute/fixup-secgroups/+merge/236874
<bloodearnest> huh. how do I view annotations on a service from the cli? They don't seem to be showing up in juju status like I'd expect?
<bloodearnest> has anyone written an annotate plugin?
<gnuoy> jamespage, vxlan mps:
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/vxlan-support/+merge/236891
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-openvswitch/vxlan-support/+merge/236890
<gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/quantum-gateway/vxlan-support/+merge/236889
<lazyPower> tvansteenburgh: http://pastebin.ubuntu.com/8479658/ - as i hack on charmtools i'm adding tests around my create template and i'm getting this spit back at me. looks like we have a missing dependency of yaml in our requirements.
<lazyPower> tvansteenburgh: however, PyYAML is in requirements.txt
<tvansteenburgh> lazyPower: i don't think it's a missing dep
<tvansteenburgh> lazyPower: try running the charm helpers sync script manually and see if it succeeds
<tvansteenburgh> lazyPower: derp, nvm, i missed part of the paste
<lazyPower> tvansteenburgh: i think its due to the fresh venv
<lazyPower> we're just missing a step somewhere
<Valduare> whats the diff between juju and vagrant
<lazyPower> Valduare: vagrant is a wrapper that works with different virtualization providers to make interfacing from the command line and scripting easier
<lazyPower> juju is a system orchestration tool that runs on most major cloud providers, but it also has a 'local' and 'manual' provider to work wherever you need it to work
<lazyPower> Valduare: while juju is service orchestration, vagrant has no concept of what's being deployed inside the virtualized container; it does well at enlistment/provisioning or kicking off config management tools, but it cannot, per se, relate services. This is where juju shines, as relations, lifecycle management, enlistment and provisioning are all within our wheelhouse
<Valduare> so there should be a juju provision for vagrant?
<jamespage> dosaboy, gnuoy: ok this is ready for review:
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/hacluster/mix-fixes/+merge/235675
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/swift-proxy/network-splits-https/+merge/236903
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/nova-cloud-controller/network-splits-https/+merge/236907
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/neutron-api/network-splits-https/+merge/236906
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/keystone/https-multi-network/+merge/236905
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/cinder/https-multi-network/+merge/236904
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/glance/network-splits-https/+merge/236908
<lazyPower> Valduare: actually, we consume vagrant to offer a testing platform to users on mac/windows
<lazyPower> jamespage: busy much?
<jamespage> lazyPower, just a little
<jamespage> we have a self imposed feature freeze today on new big features into the openstack charms
 * lazyPower nods
<lazyPower> i was just lamenting that that's a lot of work to be reviewed
<jamespage> that one deals with HTTPS+HA with multiple network api endpoint configurations
 * lazyPower hattips
<Valduare> lazyPower: consume vagrant? what do you mean
<lazyPower> keep up the good work
<jamespage> ta
<jamespage> ditto
<lazyPower> Valduare: we use vagrant to provide a 'ready to go' solution for investigating/working with juju
<lazyPower> Valduare: https://juju.ubuntu.com/docs/config-vagrant.html
<lazyPower> tvansteenburgh: i think i resolved it
<lazyPower> tvansteenburgh: i didnt run python setup.py develop before i kicked off 'make check'
 * lazyPower was being silly apparently
<Valduare> lazyPower: so I could use vagrant to fire up a vm and setup a config file to get juju to auto provision stuff for me?
<lazyPower> Valduare: what do you mean auto provision?
<lazyPower> i want to be clear that we're talking about the same things :)
<Valduare> I want the stuff that vagrant markets as being able to have everyone on the team running the same setup vm
<lazyPower> Valduare: You could use vagrant to distribute an identical VM for your team to consume to interact with juju. Through that localized copy of vagrant each developer will have a juju-local environment to use for development/testing.
<lazyPower> once they have that environment, they can create/consume bundles to distribute amongst one another for deployment configurations with juju. For example, developer a creates a mediawiki, haproxy, and mysql bundle - then ships that yaml file to developer b
<lazyPower> developer b loads that bundle into juju and has an identical deployment as developer a
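A hypothetical bundle of the kind described, in the deployer yaml format of the time (the top-level deployment name is arbitrary; revisions are omitted so the latest charms are used):

    envExport:
      services:
        mediawiki:
          charm: cs:precise/mediawiki
          num_units: 1
        mysql:
          charm: cs:precise/mysql
          num_units: 1
        haproxy:
          charm: cs:precise/haproxy
          num_units: 1
      relations:
        - [mediawiki:db, mysql:db]
        - [haproxy:reverseproxy, mediawiki:website]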
<jamespage> dosaboy, gnuoy; hold off on my merges for now
<jamespage> just spotted a problem
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/get-address-in-network-no-fallback/+merge/236935
<dosaboy> jamespage: approved, but i can;t merge that one
<dosaboy> good catch tho
<jcastro> kirkland, quickstart now has maas support
<jcastro> this should make bundles on the OB easier/faster
<jamespage> dosaboy, gnuoy: OK happy with those branches I pinged about earlier with the new charmhelper fix
<jamespage> ttfn
<dosaboy> jamespage: just fixed the percona issue
<dosaboy> jamespage: need to run it by you though, seems peer_echo is either misplaced or possibly broken
<jamespage> dosaboy, ?
<jamespage> which charm?
<dosaboy> jamespage: https://code.launchpad.net/~cts-engineering/charms/trusty/percona-cluster/ipv6/+merge/235582
<dosaboy> jamespage: seems to work fine now
<dosaboy> jamespage: i fixed a few things but crucially, removed peer_echo() from the cluster-relation-changed
<dosaboy> as it was screwing up the cluster relation settings
<jamespage> dosaboy, hmm that's bad
<jamespage> dosaboy, you'll lose all password data doing that
<dosaboy> hmm
<jamespage> dosaboy, you probably need to extend the default exclusions for peer echo
<dosaboy> jamespage: yeah that sounds right, i'll take a closer look then
<kirkland> jcastro: cool, what does that mean?
<jcastro> one less script on the OB
<jcastro> I'll show you in brussels,  I have some ideas on  making these more generic
<gnuoy> thedac, hi! Do you know if you can twiddle the options juju-deployer is called with for a mojo run?
<thedac> Hey, gnuoy
<thedac> gnuoy: IIRC you can set the delay time. Give me a few and I will look closer
<gnuoy> thedac, thanks.
<thedac> gnuoy: looks like it is just delay in seconds. Patches are welcome :)
<gnuoy> thedac, I just might do that :)
<gnuoy> thanks for looking, much appreciated
<thedac> no problem
#juju 2014-10-03
<rbasak> frankban: I'm looking at juju-quickstart 1.4.4. It looks like dependencies have been bumped? We don't have a new enough websocket-client nor jujuclient in Utopic right now.
<rbasak> frankban: see bug 1374335
<mup> Bug #1374335: FFe: Sync websocket-client 0.18.0-1 (universe) from Debian unstable (main), juju-deployer 0.4.2, python-jujuclient 0.18.4 <juju-deployer (Ubuntu):New> <python-jujuclient (Ubuntu):New> <websocket-client (Ubuntu):New> <https://launchpad.net/bugs/1374335>
<frankban> rbasak: yes, I mentioned it in bug 1359938 . we were told that those will be the versions of websocket-client/python-jujuclient in utopic, and so we reflected that. so I guess the exceptions are blocked until we have those in utopic?
<mup> Bug #1359938: [FFE] request to update juju-quickstart to support JUJU env var <juju-quickstart (Ubuntu):New> <https://launchpad.net/bugs/1359938>
<rbasak> frankban: that depends. Are the versions stated actually required?
<rbasak> I don't understand why the version numbers are stated to match exactly. What if one of the dependencies needs a security update? Wouldn't that break it?
<frankban> rbasak: we usually use >= with versions in deb packages
<frankban> rbasak: for development builds (in a virtualenv) we usually have precise versions for reproducible builds
<rbasak> frankban: OK, so do I need to bump the deb package version dependencies here, or will everything work with any version of the dependencies?
<frankban> rbasak: I think we need to bump the version of the dependencies. unfortunately the expected utopic version of websocket-client includes backward incompatible changes, so we needed to reflect those changes
<frankban> rbasak: so quickstart assumes websocket-client==0.18.0 and jujuclient==0.18.4 are in utopic
<frankban> well, >=
<frankban> rbasak: I built those dependencies (and quickstart 1.4.4 as well) in the juju stable ppa FWIW: https://launchpad.net/~juju/+archive/ubuntu/stable/+packages
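A hypothetical debian/control dependency line reflecting the bump under discussion (package names as in the Ubuntu archive):

    Depends: python-websocket (>= 0.18.0),
             python-jujuclient (>= 0.18.4),
             ${misc:Depends}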
<rbasak> frankban: backwards incompatible changes are worrying
<rbasak> frankban: that could impact the FFe
<rbasak> (for websocket-client)
<rbasak> Since there are other packages that depend on it too, and they could be impacted.
<frankban> rbasak: the public API of websocket client did not change a lot, IIRC it's mostly a change in how wss certs are handled by default
<frankban> perhaps hazmat could provide more info
<rbasak> python-docker also depends on python-websocket
<rbasak> If we update websocket-client, will we break python-docker?
<rbasak> frankban: apart from a newer websocket-client, does juju-quickstart 1.4.4 need a newer version of anything else?
<frankban> rbasak: the new python-jujuclient has some bug fixes, so I'd say yes, quickstart needs that too
<frankban> rbasak: urwid 1.2.1 is already in utopic universe
<frankban> so yes, basically websocket-client and jujuclient are required
<rbasak> Yeah the others look like they're present. But I still want to know what versioned dependency I need to declare.
<rbasak> This is important especially for any future backport to Trusty.
<rbasak> frankban: OK, I've updated the bugs. I don't think it's worth testing except against the newer versions, so I'll leave it for now. We need those FFes approved and the newer version dependencies uploaded, and then I can look again.
<frankban> rbasak: so our expectations were that websocket client >= 0.18.0 and jujuclient >= 0.18.4 are in utopic. for this reason, we updated quickstart to support and run on those new dependencies, and we tested quickstart 1.4.3 and 1.4.4 with those. I can quickly check if quickstart still works and tests pass after downgrading jujuclient and websocket client to trusty versions if you want
<frankban> rbasak: ok thanks
<rbasak> frankban: trouble is, given that there are changes that matter, we'd need to test again before bumping the dependencies. So probably best get a decision made on the FFes first.
<frankban> rbasak: ack, thank you for looking at those
<hazmat> rbasak, incompatible?
<hazmat> rbasak, it's actually forward compatible changes to python 3 for the most part. the pin to 0.12 previously was because it started doing ssl enforcement, but we already had 0.16 from debian that had that.
<hazmat> in utopic. the delta from 0.16 to 0.18.0 is just bug fixes
<hazmat> rbasak, jujuclient needs 0.18.0; there are some fixes in that release i contributed upstream specifically for it.
<frankban> hazmat: cool SSL enforcement is what I was wondering about
<rbasak> hazmat: I just need to know, given that we're way past feature freeze right now, that bumping the version is definitely not expected to break anything.
<rbasak> hazmat: that includes the other reverse depends that I know little about - python-docker and python-socketio-client.
<rbasak> hazmat: could you please update bug 1374335?
<mup> Bug #1374335: FFe: Sync websocket-client 0.18.0-1 (universe) from Debian unstable (main), juju-deployer 0.4.2, python-jujuclient 0.18.4 <juju-deployer (Ubuntu):New> <juju-quickstart (Ubuntu):New> <python-jujuclient (Ubuntu):New> <websocket-client (Ubuntu):New> <https://launchpad.net/bugs/1374335>
<rbasak> And then I just need the FFes to get approved, to know what versioned depends I should specify on juju-quickstart, and then I can test and upload.
<hazmat> rbasak, i'll double check them
<rbasak> Thanks!
<hazmat> re python-docker, python-socketio-client
<jamespage> gnuoy, vxlan support landed
<jamespage> gnuoy, looking at cells now
<hazmat> rbasak, so docker-py verified (0.3.2, which afaics is the utopic version) with 0.18 websocket-client .. still working on socketio-client, there's some server dep for their tests i need to figure out.. out for the next hr on family errands. bbiab
<rbasak> hazmat: thanks. No rush - we'll need the FFes approved also before I can do anything.
<jamespage> gnuoy, I appear to have a nova cells based deployed running :-)
<gnuoy> jamespage, \o/
<jamespage> gnuoy, although the updates in openstack-charm-helpers for vxlan break gre configuration
<jamespage> gnuoy, hmm
<jamespage> gnuoy, how's scheduling dealt with when all of the hypervisors in a given cell are at capacity?
<gnuoy> jamespage, cells capacities are reported back to the api cell. If a cell is full then requests should go to the next cell
<gnuoy> or some of that may be delegated with grandchildren I guess
<gnuoy> why anyone would do a grandchild deploy beats me
<jamespage> gnuoy, to layer up the message busses without overloading too much on the top layer
<jamespage> gnuoy, anyway - I think I hit some sort of race between nova pushing the scheduling request to a cell and it updating its capacity
<jamespage> gnuoy, I was generating instances in series
<gnuoy> jamespage, but with the trade-off that losing a parent cell kills off its descendants
<jamespage> gnuoy, so this is quite awesome work btw
<jamespage> I have a few questions
<gnuoy> fire away (and thanks :-) )
<jamespage> but I need to get a coffee first
<gnuoy> sure
<hazmat> rbasak, k, out of curiosity what are the rdepends on python-socketio-client
<rbasak> hazmat: I couldn't see any
<rbasak> I'm not sure why that's packaged (in Debian)
<hazmat> rbasak, the unit tests for it are dependent on a bitrotted nodejs socket server being setup manually outside of the tests.
<hazmat> rbasak, yeah.. there are tons of open issues that this python-socketio-client is broken with the socketio 1.0 spec
<hazmat> rbasak, added comments to the issue
<jamespage> gnuoy, sorry got distracted
<jamespage> gnuoy, back in a min
<rbasak> hazmat: thanks, that's very useful.
<ayr-ton> marcoceppi: One difficult question. I'm ready to start building the cloud for my project using openstack and juju. Do you think it's a good idea to make a cluster with two 16GB intel NUCs to start?
<ayr-ton> marcoceppi: Is it reliable?
<jamespage> gnuoy, I think the nova-api relation is redundant right (nova-cell charm)
<ayr-ton> marcoceppi: And for the maas master, should I use another machine? Or is virtualizing the maas master on one of the NUCs a reasonable starting approach?
<ayr-ton> marcoceppi: Like, in that reference architecture: http://www.ubuntu.com/cloud/openstack/reference-architecture, what are the minimum specs for the first hypervisors?
<ayr-ton> At this moment, the best reference architecture for starting out that I've found is this one: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
<marcoceppi> ayr-ton: if you're going to be doing openstack for real, you're going to want about 8-10 machines
<jamespage> marcoceppi, ayr-ton: hmm - I disagree
<jamespage> that really depends on how many compute nodes you need
<jamespage> (8-10) machines
<jamespage> ayr-ton, it's possible to put most things under LXC using juju - apart from the quantum-gateway, ceph, swift-proxy and nova-compute charms
<hazmat> ayr-ton, if you have a single beefy machine, doing a virtual maas (libvirt kvm) is also viable.
<hazmat> dinosaursareforever.blogspot.com/2014/06/manually-deploying-openstack-with.html
<ayr-ton> hmmm
<ayr-ton> hazmat: Okay. So if I have a big machine with a lot of ram, I could use a virtual maas. If I want to use NUCs, for example, could I start with two machines, except for ceph etc.?
<ayr-ton> And a MAAS master?
<ayr-ton> And put a lot of things inside lxc?
<ayr-ton> Except nova-compute.
<ayr-ton> hazmat: One other question, is it safe to use nested virtualization, as the dinosaur post suggests?
<hazmat> ayr-ton, if your hw supports it, sure. it's super slow though
<hazmat> ayr-ton, in terms of running anything on the guest
<frenda> Hi,
<frenda> I'm going to illustrate what I mean:
<frenda> https://nodebb.org/pricing
<frenda> https://payments.discourse.org/buy/
<frenda> https://payments.discourse.org/buy/
<frenda> etc
<marcoceppi> frenda: what do you mean?
<frenda> These open source applications have a hosted solution which makes an isolated installation per user
<frenda> Now
<frenda> Can juju do something like them for my app?
<frenda> I mean, can I use juju for auto-installation?
<marcoceppi> juju can deploy these opensource apps, and it's all free (except for the cloud provider, you have to pay for the instances); juju itself is free and opensource software with no charge associated with it
<marcoceppi> ah, auto-installation. You could, but it's not currently designed to do that. It does provide a websocket which you could use to automate deployments from something like this checkout page
<frenda> What I get: 1. I can install juju on my servers 2. I can make an auto-installation process for my app
<frenda> 3. Then, can users use my installed juju to install my app as a charm, user by user?
<frenda> As I can see here: https://jujucharms.com/precise/ghost-0/machine/?text=ghost (i.e.), I can create my own, true?
<frenda> Is https://jujucharms.com an open source software?
<rick_h_> frenda: yes, https://github.com/juju/juju-gui
<rick_h_> frenda: development is in #juju-gui if you have any questions or need a hand with anything
<frenda> rick_h_, marcoceppi: thanks
<jcastro> hey kwmonroe
<jcastro> can you check this MP first when you start your shift? https://code.launchpad.net/~bloodearnest/charms/precise/squid-reverseproxy/trunk/+merge/235429
<kwmonroe> ack jcastro
<jcastro> yo jose
<jcastro> how do we do this UOA thing?
<jcastro> marcoceppi, tvansteenburgh: wanna hop on in a minute?
<marcoceppi> jcastro: I'm ready when you are
<tvansteenburgh> jcastro: be there in a min
<jcastro> firing it up now
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYeqonNeIc-w5qNIq-whPk66nYi98w62Y1gpuwsag-owcyO3gw?authuser=0&hl=en
<jcastro> http://youtu.be/VMHNi67htM0 if you just want to follow along.
<natefinch> anyone know if we actually have documentation of what values you can pass to juju set-environment?  and/or what can go into environments.yaml?
<rick_h_> natefinch: if you find anything please let me know as I have a big interest in that for some project stuff
<natefinch> rick_h_: I'm sure I can trawl the source code to find it out.  It's sad that I have to do that.   I'll bring it up in Brussels and probably will spend time writing help docs for it.  Glaring deficiencies in help docs really bug me.
<mbruzek> jcastro: https://juju.ubuntu.com/docs/tools-amulet.html
<natefinch> (git blame cmd/juju/help_topics.go can attest to that ;)
<mbruzek> jcastro ^ link to Amulet tests
<jcastro> ack
<mbruzek> documentation
<jcastro> marcoceppi, you went dark
<marcoceppi> jcastro: sound went out
<marcoceppi> reloading
<marcoceppi> hangouts isn't loading anymore
<tvansteenburgh> doh
<jcastro> pkill your chrome processes
<jcastro> and doublecheck for zombies
<marcoceppi> inbound
<mbruzek> #ubuntuOnAir Charm testing url: http://blog.juju.solutions/cloud/juju/2014/10/02/charm-testing.html
<marcoceppi> this is ridiculous
<sebas538_> more and more we are gonna see SOAs, so an OpenTSDB charm would be awesome! :) just saying...
#juju 2014-10-04
<sebas5384> does somebody know if there's a command to shut down a charm/machine?
#juju 2014-10-05
<lazypower|Travel> sebas5384: juju run can shut down the service if there's an upstart job associated with the service
<lazypower|Travel> but juju doesn't have a concept of shutting down the machine, no.
#juju 2015-09-28
<JerryK2> pmatulis: thanks. In the end it turned out that this property has to be set at the time of deployment, not changed afterwards
<gnuoy> jamespage, I see you added tox support to glance branch. In theory I should just be able to type 'tox' and magic happens right? Because it seems to be exploding for me
<gnuoy> ERROR: InvocationError: '/home/liam/branches/merges/glance-272409/.tox/py27-trusty/bin/ostestr'
<jamespage> that sounds not good
<gnuoy> jamespage, actually, I was looking at a mp to /next which explodes. When I take /next on its own the tox unit tests run. I'll investigate
<gnuoy> (they run but some fail fwiw)
<jamespage> ooo I get a
<jamespage> ImportError: cannot import name wraps
<jamespage> gnuoy, ^^
<gnuoy> jamespage, I don't get that. I get http://paste.ubuntu.com/12601718/
<jamespage> gnuoy, lemme just refresh my tox envs
<jamespage> gnuoy, odd - I don't get that
<gnuoy> jamespage, http://paste.ubuntu.com/12601815/ fixes it for me
<gnuoy> jamespage, argh, sorry, that's not right
<jamespage> its possible its a broken test that gets exacerbated by the parallel nature of ostestr
<IceyEC> I'm trying to configure the local lxc provider with a custom network config; essentially, I'm trying to modify lxcbr0 from 10.0.2.0/24 to 10.0.3.0/24 but having a lot of trouble getting that to work
<IceyEC> anybody here done that before?
<IceyEC> (other netblock would be fine as well)
<gnuoy> jamespage, got a sec for https://code.launchpad.net/~gnuoy/charms/trusty/glance/untest-tox-fix/+merge/272578 ? (fixes the unit test failure I was seeing with tox)
<gnuoy> jamespage, nm, I have a +1 from ed
<beisner> gnuoy, thx for landing the n-g updated tests
<gnuoy> beisner, np
<beisner> thedac, gnuoy, jamespage - fyi, bug 1500552
<mup> Bug #1500552: rabbitmq services may not be started when add_vhost is attempted <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1500552>
<beisner> thedac, gnuoy jamespage - fyi i believe niedbalski will be taking that bug ^
<beisner> jamespage, after resolving the p:i next.yaml mysql/pxc charm config options, bug 1500589
<mup> Bug #1500589: precise-icehouse deploy fails (cloud-compute-relation-changed -> nova_cc_utils.py -> ssh_known_host_key -> IndexError: list index out of range) <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):New> <https://launchpad.net/bugs/1500589>
<beisner> jamespage, pxc still unhappy >= vivid:  bug 1481362
<mup> Bug #1481362: pxc server 5.6 on Vivid and Wily does not create /var/lib/mysql <amulet> <openstack> <uosci> <percona-xtradb-cluster-5.6 (Ubuntu):New> <percona-cluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1481362>
<beisner> jamespage, fyi, reverted the mysql charm config options in next.yaml as discussed, given the above ^.  re-testing precise, vivid w/ mysql proper.
#juju 2015-09-29
<mbruzek> Hello juju people I am seeing an error with juju that I have never seen before
<mbruzek> bootstrap fails
<mbruzek> ERROR the environment could not be destroyed: destroying instances: cannot remove recorded state instance-id: Access Denied
<mbruzek> ERROR cannot determine if environment is already bootstrapped.: Access Denied
<mbruzek> Any ideas on mitigation?
<marcoceppi> mbruzek: sounds like a dirty tear down? What cloud is this on?
<mbruzek> amazon
<mbruzek> I can not destroy-environment and I have already removed the amazon.yaml in the environments directory
<mbruzek> marcoceppi: I don't see anything listed in my "instances" screen on console.aws.amazon.com
<marcoceppi> mbruzek: what do you mean you've removed the amazon.yaml?
<mbruzek> marcoceppi: sorry amazon.jenv
<mbruzek> marcoceppi:  I got this error, so I tried removing the .juju/environments/amazon.jenv
<mbruzek> I tried a destroy-environment --force before I did that
<marcoceppi> mbruzek: the best way around this is to try to --force destroy that environment
<mbruzek> marcoceppi: I already tried that *before* I removed the file
<mbruzek> $ juju destroy-environment -y amazon
<mbruzek> ERROR cannot connect to API: Access Denied
<marcoceppi> mbruzek: bootstrap with the --debug flag
<mbruzek> $ juju destroy-environment -y amazon --force --debug
<mbruzek> 2015-09-29 13:56:26 INFO juju.cmd supercommand.go:37 running juju [1.25-alpha1-vivid-amd64 gc]
<mbruzek> 2015-09-29 13:56:26 INFO juju.provider.ec2 provider.go:49 opening environment "amazon"
<mbruzek> 2015-09-29 13:56:26 INFO juju.provider.common destroy.go:22 destroying environment "amazon"
<mbruzek> 2015-09-29 13:56:26 INFO juju.provider.common destroy.go:33 destroying instances
<mbruzek> 2015-09-29 13:56:35 ERROR juju.cmd supercommand.go:429 destroying instances: cannot remove recorded state instance-id: Access Denied
<marcoceppi> please use a pastebin ;)
<mbruzek> sorry
<marcoceppi> bootstrap with --debug is what I mean
<mbruzek> ok
<mbruzek> marcoceppi: I have to head into a conference, will get you the pastebin soon
<beisner> dosaboy_, jamespage - i think we're aware of this, raised a bug to track:   bug 1501065
<mup> Bug #1501065: liberty - pkg_resources.VersionConflict: (six 1.5.2 (/usr/lib/python2.7/dist-packages), Requirement.parse('six>=1.9.0')) <openstack> <uosci> <swift-proxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1501065>
<jamespage> beisner, indeed I am
<jamespage> trusty-liberty right?
<beisner> jamespage, yepper
#juju 2015-09-30
<jamespage> gnuoy, ooo
<jamespage> glance/0                active         idle        1.24.6  2       9292/tcp                                10.5.31.143    Unit is ready
<jamespage> I was told I was missing a message relation
<gnuoy> jamespage, were you?
<gnuoy> I mean: were you missing a message relation?
<jamespage> yup it flagged up as blocked until I added it
<jamespage> gnuoy, apparently so
<gnuoy> tip top
<jamespage> gnuoy, landing - https://code.launchpad.net/~danilo/charms/trusty/percona-cluster/autodetect-vip-cidr/+merge/272813
<gnuoy> jamespage, I was surprised that guessing takes preference over explicit config in that mp
<jamespage> gnuoy, that's inline with the other openstack charms
<jamespage> gnuoy, we might want to review that, but consistency == good right now
<jamespage> gnuoy, defer to next cycle
<gnuoy> agreed
<jamespage> urulama: hey - any update on wily enablement for the charm-store?
<rick_h_> urulama_: was that in the release this week? or not yet?
<urulama_> rick_h_: no
<rick_h_> urulama_: ah ok
<beisner> o/ hi all
<beisner> jamespage, re: precise-icehouse ... i reverted those config options in o-c-t.   i think p:i/next is bust in a different way:  bug 1500589   can you confirm?
<mup> Bug #1500589: precise-icehouse deploy fails (cloud-compute-relation-changed -> nova_cc_utils.py -> ssh_known_host_key -> IndexError: list index out of range) <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):New> <https://launchpad.net/bugs/1500589>
<lovea> Anyone having problems accessing api.jujucharms.com?
<marcoceppi> lovea: I haven't so far, what are you seeing?
<lovea> machine-0: 2015-09-30 14:36:46 ERROR juju.worker runner.go:223 exited "charm-revision-updater": failed updating charms: finding charm revision info: cannot get metadata from the charm store: Get https://api.jujucharms.com/charmstore/v4/meta/any?id=cs%3Atrusty%2Fjuju-gui&id=cs%3Atrusty%2Fnova-compute&include=id-revision&include=hash256: dial tcp 162.213.33.122:443: connection timed out
<lovea> can't ping it either
<marcoceppi> lovea: it loads for me
<marcoceppi> lovea: are you behind a proxy or restricted network?
<lovea> marcoceppi: OK, must be our network!
<lovea> marcoceppi: Yes, It did work so something has changed there.
<lovea> marcoceppi: Thanks for ruling out your end
<marcoceppi> lovea: no worries, gl!
<beisner> hi jamespage, can we land your mongodb branch?  (mongodb is blocking wily deploy tests atm)  https://code.launchpad.net/~james-page/charms/trusty/mongodb/pymongo-3.x/+merge/270525
<beisner> jamespage, if it's not quite ready, i can switch next.yaml to use it.  lmk.
<jamespage> I think it is
<jamespage> marcoceppi, any chance - https://code.launchpad.net/~james-page/charms/trusty/mongodb/pymongo-3.x/+merge/270525 ?
<jamespage> that unblocks that charm on newer ubuntu releases...
<marcoceppi> jamespage: lgtm, I can merge if you'd like
<jamespage> marcoceppi, yp
<marcoceppi> jamespage beisner merged
<jamespage> beisner, nice to get https://code.launchpad.net/~james-page/charms/trusty/percona-cluster/vivid/+merge/271659 landed as well
<jamespage> unblocking pxc on later releases
<jamespage> although I'd not describe it as great due to the increased ram usage...
<jamespage> thedac, gnuoy:
<jamespage> Incomplete relations: message, image Incomplete relations: database
<jamespage> that looks a little clumsy due to the repeat of Incomplete relations
<jamespage> but I like it generally so far
<thedac> jamespage: ok
<beisner> woot, thanks marcoceppi
<beisner> jamespage, do we have a solution for pxc in absence of the root-password and sst-password config options being set?
<beisner> jamespage, also, is there something in flight re: the swift six version conflict re: trusty-liberty?  (bug 1501065)
<mup> Bug #1501065: liberty - pkg_resources.VersionConflict: (six 1.5.2 (/usr/lib/python2.7/dist-packages), Requirement.parse('six>=1.9.0')) <openstack> <uosci> <swift-proxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1501065>
#juju 2015-10-01
<faceless_polpiem> hello guys
<jamespage> thedac, gnuoy: https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/status-check/+merge/273037
<jamespage> thedac, gnuoy: I'm pretty happy with that MP for rmq as a v1 for status - appears to work nicely
<thedac> jamespage: I'll review that today. I am working on some bug fixes for rabbit at the moment.
<IceyEC> with a juju bundle, if the bundle includes a charm that is already deployed in an environment, will the bundle use the existing charm or deploy a new copy?
<trondvh> The documentation for the vmware provider states "which supports VMWare's Hardware Version 8 or better", but it looks like the OVA file is hw version 10? Is there a workaround to get the provider working for ESX 5.1?
<beisner> IceyEC, i believe it will use the charm already deployed (cached on the state server), unless you do a juju upgrade-charm.   but, someone confirm i'm not speaking rubbish ... lazypower <?  :D
<lazypower> trondvh, I would recommend contacting the mailing list with that question so the core devs who have done the VMWare integration can address that. Our core team is pretty distributed and in EU timezones.
<jamespage> thedac, hey - just a minor polishing point but 'messaging' reads better than 'message' - we can push that as a trivial
<lazypower> IceyEC, juju-deployer will do so, so long as the charm revision has been updated in the bundle file.
<trondvh> lazypower: Ok. Thanks.
<lazypower> IceyEC, juju quickstart doesn't seem to diff the environment, but as i understand it support for that is on the roadmap, just not a priority at this time, as deployer handles it nicely.
<IceyEC> so if I did a juju quickstart, would it look like it would deploy new copies and then the juju-deployer would end up handling the combining?
<lazypower> it wont actually deploy
<lazypower> it errors if services exist in the topology with names that are defined in the bundle :(
<IceyEC> :(
<IceyEC> well, thanks anyways lazypower beisner
<IceyEC> !
<lazypower> however.. deployer!
<lazypower> deployer has support for that, and it will upgrade if the charm rev is > what's in the environment
<IceyEC_> cool lazypower!
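A sketch of what deployer compares: the revision pinned in the bundle versus the deployed one; when the bundle's is greater, deployer upgrades (service and revision numbers are illustrative):

    services:
      mysql:
        charm: cs:trusty/mysql-29   # environment has mysql-28 deployed -> deployer upgrades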
<thedac> jamespage: ok
<thedac> jamespage: actually, I do not have commit privs yet. Do you want MPs for the s/message/messaging change?
#juju 2015-10-02
<thumper> o/ veebers
<thumper> veebers: let me guess, you dist upgraded while having a local juju running?
<veebers> hey thumper :-) Not sure why this channel isn't in my  autojoin
<veebers> thumper: that is exactly what I did
<thumper> ok
<thumper> the current local provider made some upstart jobs for the database and mongo
<thumper> I'm pretty sure that your upgrade brought in systemd
<veebers> thumper: I'm currently on vivid
<thumper> pretty sure vivid has systemd by default
<thumper> is your local stuff important?
<thumper> or can we blow it away?
<veebers> hmm, no I guess we could just blow it away. I have a backup of the db and can deploy the same revno of the service too
<thumper> ok...
<thumper> do you have the plugin for nuking a local instance?
<thumper> if not, we can do it by hand
<veebers> thumper: I don't think I do, at least I don't remember setting anything up
<thumper> ok
<thumper> first let's clean up the local agents
<thumper> look in /etc/init/juju-*
<thumper> what do you see?
<veebers> thumper: there is no juju-* dir in /etc/init/
<thumper> oh?
<thumper> curious
 * thumper looks locally
<thumper> <veebers> a 'status juju-agent-leecj2-local' shows an error: status: Name "com.ubuntu.Upstart" does not exist
<thumper> somewhere there is a file called "juju-agent-leecj2-local.conf"
<thumper> perhaps the dist-upgrade moved the config files from /etc/init to somewhere else
<veebers> thumper: let me start a find for that file
<thomi> veebers: on my system it's a *file* in /etc/init, not a dir.
<veebers> thomi: ls /etc/init | grep juju shows nothing
<thomi> veebers: ok, sorry - just thought it was worth mentioning :D
<veebers> thumper: find didn't find anything
<veebers> thomi: oh yeah thanks for the suggestion. Anything helps at this stage :-)
<thumper> veebers: did you look all through /etc?
<veebers> thumper: used: find / -name "juju-agent-leecj2-local.conf" -print
<thumper> hmm...
<thumper> just for my peace of mind, look for any jujud processes
<veebers> thumper: I do see now "/etc/juju-leecj2-local/"
<thumper> ah...
<veebers> yep, I see a couple of jujud, i.e. "/var/lib/juju/tools/machine-2/jujud machine  . . ."
<thumper> those will be from the lxc containers
<thumper> it looks like some magic thing converted the upstart script to a systemd start thing...
<thumper> I don't know the systemd way to ask if some service is running
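For the record, the systemd equivalents of upstart's status query would be along these lines (the unit name is the hypothetical one from this session):

    systemctl status juju-agent-leecj2-local.service
    systemctl is-active juju-agent-leecj2-local.service   # prints active/inactive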
<thumper> veebers: do 'sudo lxc-ls --fancy'
<veebers> thumper: you want me to pastbin the output? I see leecj2-local-machine-{2,3,4} running (3 machines running)
<thumper> veebers: and status works?
<thumper> veebers: perhaps a hangout with screenshare would be quicker here :)
<veebers> thumper: heh, sounds good
<jose> marcoceppi: hey, any news on the aws credits?
<jamespage> thedac, you have commit access now - gnuoy, arosales and I discussed you membership this week :-)
<jamespage> gnuoy, any chance of a quick review for https://code.launchpad.net/~openstack-charmers/charms/trusty/nova-compute/lxd-updates/+merge/273104
<jamespage> urulama, rick_h_: any eta on wily support in charmstore?
<urulama> jamespage: next week, guys will do the release while we're in seattle
<jamespage> urulama, ack - good-oh
<jamespage> bundles and charms waiting to ingest!
<urulama> jamespage: yeah, seen a bunch of vivid charms ... sorry, couldn't use staging this week as it was used for the Strata big data demo
<urulama> jamespage: i'll try to push it for Monday, it's a small update
<jamespage> urulama, awesome
<adac> Trying to install openstack, but as soon as I set the openstack password, the installation process is aborting.  Any ideas on how to debug this?
<thedac> jamespage: \o/ Thanks. Also, I am going to land your rabbitmq-server branch. And I will continue the race condition hunt.
<jamespage> thedac, ta
<jamespage> I just noticed a 'update-status' hook running in my 1.26 beta env
<jamespage> interesting
<faraujo> Hi! Has anybody seen this error before when running 'juju charm proof' on a local charm? W: no copyright file
<faraujo> W: no README file
<faraujo> ERROR subprocess encountered error code 100
#juju 2016-10-03
<myeagleflies> hello
<myeagleflies> I'm having problems bootstrapping juju 2.0 on ubuntu 16.04.1
<rogpeppe> myeagleflies: what's the problem?
<myeagleflies> following https://jujucharms.com/docs/devel/getting-started
<myeagleflies> when executing: "juju bootstrap lxd-test localhost"
<myeagleflies> getting error:
<myeagleflies> ERROR failed to bootstrap model: cannot start bootstrap instance: Missing parent 'lxdbr0' for nic 'eth0'
<myeagleflies> ifconfig shows there are following interfaces: "ens160 lo openstack0"
<myeagleflies> so possibly juju expects eth0 to be still around?
<stub> myeagleflies: Do you know if you have lxd working on your machine? There is some lxd setup needed that might not be mentioned in the Juju docs ('lxd init' I think)
<myeagleflies> lxd init succeeded
<stub> myeagleflies: Do you have the older lxc installed? lxd brought in the lxdbr0 bridge, and it might be conflicting with the old lxcbr0 bridge
<myeagleflies> let me check
<myeagleflies> stub: dpkg -l lxd* shows only current version
<myeagleflies> ii  lxd                                  2.0.4-0ubuntu1~ubuntu16 amd64                   Container hypervisor based on LXC - daemon
<stub> Thats my guesses out the window then
<myeagleflies> ;)
<stub> IIRC lxcbr0 should be brought up automatically as required by lxd.
<stub> "lxc launch ubuntu:" might help narrow down if this is a juju or lxd problem
<myeagleflies> I'm experiencing some DNS resolving issues on this host. This host also runs MAAS. Can juju be bootstrapped on some other host?
<myeagleflies> I presume this should be possible?
<stub> yes. You can point it to an lxd configured to allow remote access IIRC, but I don't know the details.
<stub> (or any other cloud provider, including your MaaS if you have enough spare hardware)
<myeagleflies> what I'm saying is I'm trying to bootstrap juju on the same host that is running MAAS. I will try to bootstrap it on some other 'fresh' host and am just asking if juju is fine with being run on a different host without MAAS on it?
<stub> Yes, Juju doesn't need MAAS. I don't have it anywhere.
<cholcombe> how do you juju deploy from the edge channel?
<cholcombe> also, the docs are out of date.  the development channel no longer exists apparently
<marcoceppi> cholcombe: juju deploy <charm> --channel=edge
<cholcombe> marcoceppi, thanks. I found the docs mention it in one place but in another place it says development
<marcoceppi> cholcombe: please make sure to file bugs against the docs
<marcoceppi> esp as we're so close to 2.0 now
<cholcombe> will do
<cholcombe> cmars, do you know how to breakpoint juju?  I'm trying to track down a bug where juju refuses to deploy my charm
<cholcombe> cmars, i've got gdb and dlv installed
<cmars> cholcombe, ping #juju-dev, or i can help in now + 2-3 hr
<cholcombe> cmars, ok
<jamespage> stub, was the removal of the juju-wait bzr branch intentional? I see it's moved to git on lp - I must have missed the heads-up on that
<stub> jamespage: An unfortunate side effect of switching to git, per https://bugs.launchpad.net/launchpad/+bug/1623924
<mup> Bug #1623924: git-based source build recipes are sometimes interpreted as bzr recipes instead <Launchpad itself:New> <https://launchpad.net/bugs/1623924>
<stub> jamespage: I can put things back as they were if you can't deal with the change now, although obviously I'd rather not.
<jamespage> stub, dunno trying to find beisner to check on move
<jamespage> stub, our gate is bust right now
<papertigers> where is `juju actions` in juju2?
<papertigers> juju help doesnt show it and running it doesnt work
<magicaltrout> same place
<magicaltrout> juju list-actions
<magicaltrout> juju run-action i think
<magicaltrout> juju show-actions
<papertigers> thanks
<papertigers> also new to juju, so how do I restart services or nodes
<magicaltrout> er
<magicaltrout> i've not done that in 2 years of using it
<magicaltrout> juju ssh unit/0 restart
<magicaltrout> or something
<magicaltrout> ssh commands run as root and you can address services or units
<magicaltrout> so that would work
<magicaltrout> i don't know if there is a "proper" way
<papertigers> I only ask because a few of the services are not running properly after a deploy
<papertigers> they are in the error state
<lazyPower> magicaltrout: actions would be a proper way to do that. you should honestly get in the habit of using juju-reboot in a hook context
<lazyPower> magicaltrout: it'll ensure all the lxd containers on the host are prepared to handle the reboot as well
<lazyPower> in a hyperconverged world, that will be a big deal
<lazyPower> magicaltrout: so, in summation, stuff the reboot in an action, and when invoking that action, instead of `reboot now` it becomes `juju-reboot` or `juju-reboot --now`
<lazyPower> which also paves the way for signaling other units/applications participating in the reboot sequence, so they can then react/handle temporary routing changes if the underlying app doesn't do this natively.
<lazyPower> but thats pie in the sky talk there...
<magicaltrout> there you go then
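A minimal sketch of the pattern lazyPower describes; the action name "reboot" and the file layout are assumptions for illustration, not taken from a published charm:

    # actions/reboot -- executable action script (a matching "reboot:" entry
    # must also be declared in the charm's actions.yaml).
    #!/bin/bash
    # juju-reboot goes through the unit agent, so juju and any containers on
    # the host can prepare for the reboot, unlike a bare `reboot now`.
    juju-reboot --now

The operator would then trigger it with `juju run-action <unit> reboot`, per the run-action command magicaltrout mentions above.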
<cory_fu_> kwmonroe, petevg, kjackal: Can I get a review on https://github.com/juju-solutions/interface-spark/pull/9
<petevg> +1
<cory_fu_> petevg, kwmonroe: Hrm.  I just realized that that PR accidentally included an unrelated change as a drive-by, which makes it a more significant PR.  Maybe I should split it out
<cory_fu_> (The two commits)
<petevg> The code all made sense, and seemed to do what you intended. I guess that it could use more tests, for the classpath stuff.
<kwmonroe> cory_fu_: in SERVICE scope, don't you still need to wait to remove .joined until the last unit is departing?  https://github.com/juju-solutions/interface-spark/pull/9/files#diff-6e152090b45a29ed86305121942fb300L30
<kwmonroe> i love conversation scopes, btw.  just love them.
<cory_fu_> kwmonroe: heh.  And no, I don't think so.  For non-GLOBAL scope, the conversation when inside a hook handler should only contain that one unit.  I think
<cory_fu_> kwmonroe: Hrm.  You might be right, and that might be a bug that affects almost all interface layers.  :/
<kwmonroe> easy enough to test cory_fu_.. do you have a deployment with multiple sparks using this interface that we can break into and check conv.units in a hook?
<cory_fu_> Not up right now
<kwmonroe> aight, i can build up a spark to try that in a few minutes.. gotta sort out some docker/mac issue first though.
<cory_fu_> kwmonroe, petevg: Actually, I just remembered that the use-case I had for this (insightedge as a subordinate to spark) went away, so we could probably even just ditch that part.
<petevg> Sounds good. Less code is good code.
<bildz> is there a dedicated channel for conjure assistance?
<stokachu> bildz: o/
<marcoceppi> you found it!
<bildz> hey stokachu
<bildz> im dealing with some conjure scripts that arent working well for me atm
<stokachu> bildz: custom scripts or ones in the available spells?
<bildz> the ones in the available spells
<stokachu> bildz: ok whats up
<bildz> i'm able to get a juju controller up and all my nodes, but there is a failure during the neutron setup
<bildz> https://bugs.launchpad.net/conjure-up/+bug/1626941
<mup> Bug #1626941: neutron-gateway/0: hook failed: "config-changed" for Openstack local <conjure-up:In Progress by adam-stokes> <https://launchpad.net/bugs/1626941>
<stokachu> bildz: subprocess.CalledProcessError: Command '['ip', 'link', 'set', u'eth1', 'up']' returned non-zero exit status 1
<stokachu> thats failing
<stokachu> meaning that no eth1 exists on the system where that command was run
<bildz> well it's using a xenial image
<bildz> and as we know, the interface names have changed
<bildz> is that why it's failing?
<stokachu> could be
<stokachu> bildz: you have maas setup?
<bildz> i do
<stokachu> sorry maas 2.0?
<bildz> let me get the version i have
<bildz> ii  maas                               2.0.0+bzr5189-0ubuntu1~16.04.1
<stokachu> bildz: cool one sec, lemme get a screenshot of what i do
<bildz> thanks man
<bildz> so when I conjure-up openstack, I connect to the MAAS deployment and it spins up a server to install the juju controller
<bildz> then the rest all come up in sequence to deploy
<stokachu> bildz: yea exactly
<stokachu> here is the page: http://imgur.com/a/m4JdJ
<stokachu> you want to set that under global kernel parameters in settings
<bildz> ooo thanks
<stokachu> that'll change the default behavior of your maas nodes to not do that auto naming
<bildz> aw man thanks
<bildz> ill give this a shot now
<stokachu> bildz: cool
<stokachu> bildz: one thing - im not entirely sure whether it requires you to re-commission your nodes
<stokachu> bildz: keep that in mind if the network interfaces are still being renamed
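The same setting stokachu shows in the web UI can be made from the MAAS 2.0 CLI; "admin" below is a placeholder profile name from `maas login`:

    # Disable predictable interface naming for all deployed nodes.
    maas admin maas set-config name=kernel_opts value="net.ifnames=0"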
<cory_fu_> kwmonroe, petevg: https://github.com/juju-solutions/interface-spark/pull/9 updated
<bildz> stokachu: i've recommissioned them all and they all reflect the new settings
<bildz> thanks for the heads up though
<bildz> i've been through this process about 20 times :)
<bildz> with that said, going to add this channel to my autojoin for irssi heh
<stokachu> bildz: cool, yea i got conjure/conjure-up on highlight so just mention that and ill see it
<cory_fu_> petevg: https://github.com/juju-solutions/interface-spark/pull/9 updated
<petevg> cory_fu: +1
<gQuigs> new to Go.. how can I build juju 1.25?  the instructions seem geared towards trunk...
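No one answered gQuigs in-channel; an untested guess is that the trunk instructions apply after switching to the 1.25 branch:

    # Fetch the source as for trunk, then build from the 1.25 branch.
    go get -d -v github.com/juju/juju/...
    cd "$GOPATH/src/github.com/juju/juju"
    git checkout 1.25
    # The pinned dependencies (dependencies.tsv via godeps) may also be needed.
    go install -v github.com/juju/juju/...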
<myeagleflies> is juju 2.0 ok with interface names like 'ens160'?
<myeagleflies> "ERROR failed to bootstrap model: cannot start bootstrap instance: Missing parent 'lxdbr0' for nic 'eth0'"
<marcoceppi> myeagleflies: that interface name should be okay
<bryan_att> hi anyone that can help with a charm-tools issue I am having - narindergupta pointed me here for help. charm-tools is failing with this error "pkg_resources.DistributionNotFound: The 'paramiko<2.0.0' distribution was not found and is required by charm-tools" but paramiko 1.16.0-1~cloud0 is installed
<holocron> I'm fooling with juju lxd here, and following a reboot, none of my lxc containers will start properly. With no lxc processes running, I can run lxc list without issue, but "lxc start <container>" hangs. If I CTRL-C and check ps, there's something called "forkstart" still running, and two more processes of [lxc monitor] on the container.
<holocron> commands like "lxc list" or "juju status" hang with no output until I kill the lxc processes with SIGKILL
<holocron> anyone seen this behaviour before?
<bdx_> is there a way to specify what subnet to bootstrap to by using constraints?
<lazyPower> bryan_att: looks like you might be missing some deps that i think are in teh ppa
<lazyPower> bryan_att: do you have ppa:juju/devel and ppa:juju/stable installed? additionally are you using the snap or the deb package to get charm-tools?
<lazyPower> holocron: thats new, but totally bug worthy
<holocron> lazyPower: sigh alright i hate opening up bugs
<holocron> lazyPower: i'll see if i can duplicate it again, I'm in the process of reinstalling the base OS now ^^
<bryan_att> lazyPower: not sure about those deps, let me check with narindergupta - the JOID installer for OPNFV setup the environment
<bryan_att> lazyPower: narinder may be gone by now - how do I answer "do you have ppa:juju/devel and ppa:juju/stable installed? additionally are you using the snap or the deb package to get charm-tools?"
<cholcombe> is there a bug report needed to refresh promulgated charm code?
<petevg> bryan_att: to check for the ppas being installed, you can open a terminal and run 'ls /etc/apt/sources.list.d/ | grep juju', or open "Software and Updates" and look on the "Other Software" tab.
<petevg> bryan_att: if you installed charm tools via the ppa, then
<petevg> running 'dpkg --list | grep charm-tools' on the command line will return a result.
<bryan_att> https://www.irccloud.com/pastebin/1zesvxmG/
<petevg> bryan_att: Similarly, you can do 'snap list | grep -i charm' to check to see if you have installed the snap.
<bryan_att> https://www.irccloud.com/pastebin/tDeyOCSn/
<petevg> bryan_att: it looks like you have the stable ppa, but not the devel ppa. And it looks like you've installed charm tools from that ppa.
<bryan_att> narinder said "we use ppa:juju/stable and deb package no snaps"
<petevg> bryan_att: what command are you running when you get the error that you pasted?
<bryan_att> petevg: the JOID installer for OPNFV, the step was "charm build -s xenial -obuild src" for the Congress charm (git clone https://github.com/gnuoy/charm-congress.git xenial/charm-congress)
<petevg> Hmm ... So the error is happening when Juju is attempting to put the Python packages in the wheelhouse. Have you updated your apt packages, either via the software updater, or via "apt update && apt upgrade" recently?
<bryan_att> yes, I ran "apt-get update" and "sudo apt-get dist-upgrade -y" as directed by narinder but it didn't help
<petevg> bryan_att: Have you ever installed charm tools via pip? One possibility is that you have some crossed dependencies in your Python environment.
<bryan_att> petevg: not sure - is there a way for me to clean it up and reinstall in case?
<bryan_att> petevg: the tools are automatically installed by the JOID installer I think
<petevg> bryan_att: you could uninstall the apt packages, then do 'pip freeze | egrep -i "charm|juju"', and uninstall any packages that remained using pip. But I'm not sure that's the best course of action here.
<petevg> bryan_att: if you can, it might be best to stand up a VM and try running through the setup inside of that. If you run into the same issue, then the installer or instructions you're following are possibly broken, and you should file a ticket with the people who wrote them.
<bryan_att> petevg: they claim the issue does not occur in their environment - I'm not sure what value it would be to debug why this is happening - I just need to get beyond it; maybe just reinstall my jumphost I guess. A pain but it appears there is no clear way to clean this up. One of the reasons I abhor python...
<petevg> bryan_att: good luck! Sorry that I couldn't find a quicker fix for you.
<bryan_att> petevg: thanks for your help though - it's been a while since I ran into an issue I couldn't work around, and reinstall is a chance to upgrade to xenial I guess...
#juju 2016-10-04
<anrah> Hi guys! Couple questions regarding bootstrap phase and deploy. I'm using reactive charms and private OpenStack cloud. The question is that is there a way to modify the cloud-init file which is run at the bootstrap phase?
<anrah> Problem is that I'm using reactive charms and I can't be sure if python3 is installed on the image. The deploys fail at the very beginning, obviously, as the first commands run in the deploy phase are python3 related when using reactive charms + charm helpers
<anrah> I know that I can make my own images which include necessary packages, but is there a way to "hack" the deploy phase and install those packages before python3 is run?
<Shashaa> Hi Everyone, I'm trying to add openstack trove charm atop openstack charm platform
<Shashaa> I see that charm is getting stuck at following status: trove/2  maintenance  idle   18       10.73.96.174           Installation complete - awaiting next status
<Shashaa> Does anyone could redirect me where to look into ? just curious what is it waiting for here
<viswesn> How to backup the running environment of lxc and then restore them back
<viswesn> is it possible? in Juju
<viswesn> Any video specific to Juju  endpoint bindings
<viswesn> If so please share me the link
<jrwren> anrah: there is no way to customize the generated cloud-init data. I've always wanted that feature too. Maybe file a bug as a feature request?
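Pending such a feature, one hedged workaround is to guard the charm's own install hook before any python3 code runs, roughly what the reactive base layer's bootstrap does:

    #!/bin/bash
    # Hypothetical prelude for an install hook: make sure python3 exists
    # before the reactive framework starts.
    if ! command -v python3 >/dev/null 2>&1; then
        apt-get update -qq
        apt-get install -y python3 python3-yaml
    fi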
<cory_fu_> kwmonroe, petevg, kjackal: https://github.com/apache/bigtop/pull/137 is updated
<cory_fu_> There seems to be an odd issue with Juju 2.0 where if you remove the relation between a subordinate and its principal, the subordinate sticks around when it used to go away.  I know kjackal encountered this, but I wonder if anyone else has?
<kjackal> cory_fu_: After getting the latest code and rebuilding the subordinate was removed without any issue.
<cory_fu_> Strange.  I ran into it on the bigtop charms just now
<kjackal> Could be transient then?
<cory_fu_> Perhaps
<holocron> why is it that sometimes, when using LXD, a juju unit will sit in "waiting for machine" state
<holocron> the lxc container machine will sit in "pending" state
<rick_h_> holocron: can you juju status --format tabular
<rick_h_> and see if it has more details?
<holocron> rick_h_ not seeing anything new.. when i "lxc exec <container> /bin/bash" and run top or systemctl, it looks like the container didn't start up properly
<holocron> restarting it doesn't help, and scaling the unit doesn't help
<jrwren> holocron: is the controller the same version as juju client?  mine had problems after upgrading beta -> rc. I had to remove controller and bootstrap new so that they are same version.
<holocron> jrwren: checking but i suspect it is, i installed the OS, bootstrapped, and deployed today
<holocron> jrwren: also, i have a number of other units that are working fine, so it's kinda random so far
<holocron> i'm deploying https://jujucharms.com/u/james-page/openstack-on-lxd
<holocron> this is the 2nd time to attempt to deploy, after destroying the controller and tearing everything down. the first time it was openstack-dashboard and rabbitmq-server that sat in "waiting for machine"
<holocron> this time it's neutron-gateway
<jrwren> holocron: install the OS & bootstrapped without adding a PPA for latest juju2 beta?
<holocron> jrwren: i am running juju 2.0-rc2
<jrwren> holocron: oh, ok. in that case, I have no idea :[
<holocron> eh well, rather than just restarting the container from inside it with "shutdown -r now", i used lxc stop/start and now it appears to be provisioning the unit
<holocron> so..
 * holocron shrugs
<holocron> and now, it's doing it again...
<holocron> you know what the most annoying thing about juju is? i run a command like "juju remove-unit <unit>" and nothing changes
<holocron> like, is it going to work? when is it going to work?
<holocron> is it just me?
<rick_h_> holocron: so https://bugs.launchpad.net/juju/+bug/1626725 is a bug where we're looking into a potential cause around this.
<mup> Bug #1626725: 8+ containers makes one get stuck in "pending" on joyent <joyent-provider> <jujuqa> <lxd> <scalability> <juju:Triaged by dooferlad> <https://launchpad.net/bugs/1626725>
<rick_h_> holocron: I'm not sure if it's the one you're seeing, but something we're chasing down at the moment that looks like that.
<rick_h_> holocron: if that doesn't seem plausible please file a bug with as much detail on the setup as possible, logs from juju, lxd, etc.
<holocron> rick_h_: okay thanks, i'm attempting a repro now, if i see this again i'll file a bug
<rick_h_> holocron: ty very much
<valeech> working with juju 2.0 rc 2 and the openstack-base charm, is it best to use bindings to ensure different components operate on the correct networks or some other method?
<kwmonroe> holocron: rick_h_:  at the charmer summit, bdx noticed large bundles would result in some machines stuck in 'pending'.  iirc admcleod and beisner were helping troubleshoot that and it looked like bug 1602192.  beisner, do you know how holocron can check to see if "Too many open files" is the issue here?
<mup> Bug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <juju-core:Invalid> <lxd (Ubuntu):Confirmed> <https://launchpad.net/bugs/1602192>
<kjackal> petevg: wouldn't you prefer a seperate section to put tutorials?
<beisner> kwmonroe, the tell was that i saw 'Too many open files' in various application/service logs.  i don't think i ever saw that in the juju unit logs fwiw.
<kjackal> petevg: like the one you wrote for the earthquake data?
<petevg> kjackal: No. I think that should appear near the top of the landing page. One problem is that our "getting started" page gets very complex very fast. I wanted a simple project to get people going.
<holocron> kwmonroe: rick_h_: the behaviour might've been similar to that as described in that bug, but I'm not able to reproduce. I had forgotten to take some of the steps described in https://jujucharms.com/u/james-page/openstack-on-lxd
<beisner> hi holocron, this is the guide i'd recommend for openstack-on-lxd.  anything in personal namespaces might be old/bitrotted.  http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
<holocron> right now it looks like all the containers are started, all agents are executing properly
<holocron> beisner: ah great, thanks for this link!
<holocron> i had seen this before and totally forgot about it
<holocron> beisner: though it does appear to be the same today
<beisner> holocron, yes it's close, but still a few clicks behind what we have in the dev charm-guide.
<holocron> beisner: this is excellent, thanks again
<beisner> holocron, yw
<beisner> holocron, even with that, i've run into the too many open files thing and have had to raise max_user_instances on the host in most cases.  maybe not if it's just a deploy & destroy, but when you go to use the deployed cloud, file handles start to go wild.
<holocron> beisner: gotcha, i'll watch for that.. though i'm not planning on using the nova hosts as provided
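The knobs beisner mentions are sysctls on the LXD host; the values below are illustrative, not official recommendations:

    # Raise inotify limits so dozens of containers don't exhaust file handles.
    sudo sysctl -w fs.inotify.max_user_instances=1048576
    sudo sysctl -w fs.inotify.max_user_watches=1048576
    # Persist across reboots.
    echo fs.inotify.max_user_instances=1048576 | sudo tee -a /etc/sysctl.conf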
<beisner> amulet question:  am i wrong to expect sentry.info to always contain a private-address key?  i'm finding that it doesn't always:  ['machine', 'open-ports', 'public-address', 'service', 'workload-status', 'agent-status', 'unit_name', 'agent-state', 'unit', 'agent-version']
<cory_fu_> Some fixes for dhx for 2.0rc2 if someone wants to review: https://github.com/juju/plugins/pull/70
<holocron> beisner: tore down that previous deploy, and went with the bundle-s390x.yaml provided on github -- 3 of the 14 machines ended up in "pending" state though i don't see any errors in journal about too many open files
<beisner> holocron, so perhaps a useful check would be to bootstrap a fresh controller, add a model, then `juju deploy ubuntu -n 10` and see how that goes.  that would deploy 10 units of the ubuntu charm, which does pretty much nothing.  it's useful to check that the system, configuration and tooling are all in order.
<holocron> beisner: trying this now
<beisner> holocron, ps, are you on s390x?
<holocron> aye
<holocron> beisner: all 10 start up, going to try scaling to 20
<kjackal> kwmonroe: last week we had this problem with "WARNING: The following packages cannot be authenticated!" when installing puppet-common. Do you know why? I am getting the same problem with cassandra 2.2
<holocron> beisner: scales to 20 without a hitch
<kwmonroe> yeah kjackal.. if you're using layer-puppet-agent, something is wrong with the key in that layer (or with the repo hosted by puppetlabs).  i never dug to find a solution though.  we moved all our charms to xenial so we didn't need layer-puppet-agent anymore.
<kjackal> hm... I am getting the same error for a ppa repo for cassandra 2.2
<kjackal> this used to work two weeks ago
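An apt "cannot be authenticated" warning usually means the repository's signing key is missing on the unit; a generic fix, with the key id left as a placeholder:

    # Import the archive's signing key, then refresh the package lists.
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY_ID>
    sudo apt-get update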
<holocron> is there a "juju status" switch that'll just show me the unit table?
<x58> awk/grep? :P
<holocron> x58 yeah, or i could parse the json output :P
<x58> Sounds like you've solved your problem.
<x58> jq that output to your desire.
<holocron> x58 I didn't say I couldn't solve the problem, i asked if there was a switch for just that specific data table
<holocron> not a fan of reinventing wheels, but i can if i need to
<x58> indeed you didn't, I am sorry that I may have implied otherwise.
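One hedged possibility for holocron's question, since there is no dedicated switch; the .applications/.units keys match juju 2.0's JSON status layout as far as I recall:

    # Print just the unit entries from juju status.
    juju status --format=json | jq '.applications[].units'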
#juju 2016-10-05
<bildz> Has anyone been able to successfully deploy the neutron-gateway using 16.04?  I've tried using a global kernel parameter of "net.ifnames=0" in the MAAS global config, but i'm still failing at the same step during the autopilot deployment.
<jamespage> bildz, which juju version and which maas version
<jamespage> rc1 has some challenges with automatic bridging of unconfigured network interfaces
<jamespage> marcoceppi, hey - can I request a point release of charmhelpers? I added a get_upstream_version helper to host a while back, which we're using in legacy charms that still sync
<jamespage> but I need to have it for new reactive charms as well pls :-)
<spaok> what does juju-info relation provide? I see you can use it for subordinate charms and get private addresses, but is that it?
<stub> spaok: Its reason for existence is to support subordinate charms. You could probably stuff extra information on there, but you really should declare a real relation instead of abusing juju-info for that.
<marcoceppi> jamespage: you got it
<jamespage> marcoceppi, ta
<holocron> i'm fooling about with juju and lxd again today.. following a reboot, i cannot get more than 10 containers to start up
<bildz> jamespage: good morning
<bildz> saw your response last night/morning
<holocron> the 11th container start hangs with forkstart running
<bildz> ii  conjure-up                         2.0.1~beta2-0~201609281246~ubuntu16.04.1 all          Package runtime for conjure-up spells
<bildz> ii  juju-2.0                           2.0-rc2-0ubuntu1~16.04.1~juju1           amd64        Juju is devops distilled - client
<nicbet> Quick question: Juju 1.25 appears to be able to target vSphere clouds, did that support disappear with 2.0 or not documented yet?
<marcoceppi> nicbet: it still exists with 2.0
<marcoceppi> nicbet: just seems to be suspiciously missing documentation
<marcoceppi> larrymi_afk: could you share what you did to configure vSphere with juju 2.0? nicbet is asking
<spaok> with the reactive charms, how do I trigger on a hook? I've tried a few different patterns I've seen in charms, but when running the charm it seems to always skip the hook
<nicbet> marcoceppi: thanks! looking forward to larrymi_afk's pointers.
<larrymi_afk> marcoceppi: nicbet: sure
<larrymi_afk> nicbet: you will need an entry in your clouds definition pointing to your vcenter.
<larrymi_afk> nicbet: like this
<larrymi_afk>   vsphere:
<larrymi_afk>     type: vsphere
<larrymi_afk>     auth-types: [userpass]
<larrymi_afk>     endpoint:
<larrymi_afk>     regions:
<larrymi_afk>       dc0: {}
<larrymi_afk> nicbet: for endpoint, you'll specify the IP of your vCenter
<larrymi_afk> nicbet: in the credentials, you'll use auth-type: userpass
<larrymi_afk> then specify the password: and user: that you used to access the vCenter
<nicbet> larrymi_afk: excellent, so it appears most of the 1.25 documentation transfers to 2.0, with the change to auth-type: userpass
<larrymi_afk> nicbet: that's pretty much it .. when you bootstrap your controller, juju will ask you to export an environment variable.. then once you export it, everything is smooth sailing
<larrymi_afk> nicbet: yes, not much is different.. it's been months since I used 1.25 so I don't recall exactly what I had in my environments.yaml and I think I passed a few arguments through the command line like the datacenter
<larrymi_afk> other than that, things are relatively similar
<larrymi_afk> nicbet: I didn't  have a regions definition either (just the datacenter that I passed on the command line).. but 1.25 may have changed to be more in line with 2.0 since..
<nicbet> larrymi_afk: thanks, with the details you provided, I created the yaml file, bootstrapping the environment now...
<larrymi_afk> nicbet: cool
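Putting larrymi_afk's pieces together, a plausible end-to-end setup; the endpoint IP, datacenter name, and credentials below are all placeholders:

    # Define and register the cloud (clouds.yaml syntax per larrymi_afk above).
    cat > vsphere.yaml <<'EOF'
    clouds:
      vsphere:
        type: vsphere
        auth-types: [userpass]
        endpoint: 10.0.0.10
        regions:
          dc0: {}
    EOF
    juju add-cloud vsphere vsphere.yaml

    # Credentials go in ~/.local/share/juju/credentials.yaml, e.g.:
    #   credentials:
    #     vsphere:
    #       admin:
    #         auth-type: userpass
    #         user: administrator@vsphere.local
    #         password: secret

    # Controller name first, cloud/region second, matching the 2.0-rc
    # bootstrap syntax used elsewhere in this log.
    juju bootstrap vsphere-controller vsphere/dc0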
<nicbet> larrymi_ : no go. when bootstrapping the controller the provider brings the controller node online in vCenter, obtains IP address and then fails on SSH.
<larrymi_> nicbet: what's the error?
<nicbet> larrymi_: ERROR failed to bootstrap environment: waited for 10m0s without being able to connect: Permission denied (publickey).
<larrymi_> nicbet: looks like maybe cloud-init failed
<larrymi_> nicbet: trusty or xenial?
<nicbet> larrymi_: xenial
<Prabakaran> Hello Team, Not able to login using bzr launchpad-login "user"... getting error bzr: ERROR: Connection error: Couldn't resolve host 'launchpad.net' [Errno -3] Temporary failure in name resolution
<Prabakaran> can you please advise me on this?
<nicbet> larrymi_ : juju help bootstrap says: "Private clouds may need to specify their own custom image metadata, and
<nicbet> possibly upload Juju tools to cloud storage if no outgoing Internet access is
<nicbet> available. In this case, use the --metadata-source parameter to point
<nicbet> bootstrap to a local directory from which to upload tools and/or image
<nicbet> metadata."
<larrymi_> nicbet: there was a cloud init issue in xenial that was fixed recently.. not sure if you're pulling an older image. Try with trusty to see whether you run  into this. btw, there's a DNS issue with Xenial that you could run into after deploying but shouldn't impact the bootstrap.
<larrymi_> older image than the fix
<nicbet> larrymi_ how do I specify which OVF image juju deploys?
<larrymi_> nicbet: you could try to bootstrap by setting the image-stream.. for example, --config image-stream=daily
<larrymi_> not sure about what other values are available for image-stream or if you can point to a url
<spaok> how do I get the remote unit's ip info from a relation? when I try hookenv.relation_get('private-address', unit, relid) I get None
<lutostag> generic question... looking at https://jujucharms.com/docs/1.25/storage, what is the definition of a pool and a volume? how do they relate?
<nicbet> larrymi_ : https://bugs.launchpad.net/juju/+bug/1588041 is what I'm running into.
<mup> Bug #1588041: [2.0 rc1] juju can't access vSphere VM deployed with Xenial,  cloud-init fails to set SSH keys <ci> <jujuqa> <landscape> <oil> <oil-2.0> <vsphere> <cloud-init:New> <juju:Fix Released> <https://launchpad.net/bugs/1588041>
<larrymi_> nicbet: ah it should be fixed though. Have you tried daily?
<nicbet> larrymi_: yes didn't work for me I'm now trying to figure out to get debug logs so I can inspect which image is being pulled
<larrymi_> nicbet: if you use --debug, I think that should have the info for where it's getting it from
<konobi> howdy... i seem to be having trouble using a 4k ssh key with juju... 2.0-rc2-elcapitan-amd64
<konobi> is was reported fixed in https://bugs.launchpad.net/juju/+bug/1543283 but not seeing the same
<mup> Bug #1543283: [Joyent] 4k ssh key can not be used: "cannot create credentials: An error occurred while parsing the key: asn1: structure error: length too large" <juju:Fix Released> <https://launchpad.net/bugs/1543283>
<nicbet> larrymi_ : ova_import_manager.go:175 Downloading ova file from url: http://cloud-images.ubuntu.com/daily/server/xenial/20160930/xenial-server-cloudimg-amd64.ova which is the image with cloud-img fixed
<larrymi_> nicbet: strange, I have a controller from yesterday that worked fine.. I'll redeploy to see.
<larrymi_> nicbet: which version of vsphere?
<nicbet> larrymi_ 6.0-20160104001
<konobi> the bug is marked as "Fixed released", but i'm not seeing the fixed behaviour
<larrymi_> nicbet: worked for me for http://cloud-images.ubuntu.com/releases/server/releases/xenial/release-20160922/ubuntu-16.04-server-cloudimg-amd64.ova which is build where the fix would have gone in... now trying daily
<larrymi_> nicbet: which juju version are you using?
<larrymi_> nicbet: worked with the 9/30 build too.
<spaok> can can open-ports be gotten via hookend.relation_get?  I know privateaddress works, but I get None for open-ports
<spaok> trying to make a subordinate charm that gets the ip and ports for a service
<valeech> I have a general question about instances of charms. If the software the charm is written around requires physical access to a device, how do you deploy multiple instances and configure each instance to use the specific physical device per machine? For instance, ceph-osd. Each OSD will connect to a different "physical" drive. If I have 4 servers and the "physical" drives are not the same on all 4, how do I configure ceph-osd to
<valeech> use the proper device on the corresponding machine?
<rick_h_> valeech: so in this case there's a "leader" idea in an application to help coordinate among units and so using the leader hooks would help keep track of who's on which devices/etc.
<valeech> rick_h: makes sense! I just read up on leadership hooks. ty!
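A sketch of the leader pattern rick_h_ points at, using the shell hook tools (available since juju 1.24); the osd-map key and its value format are made up for the example:

    # Only the application leader may write leader data; every unit can read it.
    if [ "$(is-leader)" = "True" ]; then
        leader-set "osd-map=$(unit-get private-address):/dev/sdb"
    fi
    leader-get osd-map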
<nicbet> larrymi_ : version is juju 2.0-rc2-xenial-amd64
<larrymi_> nicbet: it matches what I have
<larrymi_> nicbet: try trusty to see if it's able to bootstrap completely
<larrymi_> juju bootstrap --config default-series=trusty
<nicbet> larrymi_ will try that next. every attempt takes quite a bit of time to push the ova images - it doesn't seem to cache the downloads on fail
<larrymi_> nicbet: ok.. if you look at comment 18 then this is the bug where the fix went in.
<nicbet> larrymi_ who's prividing the cloud-init metadata to the box? juju?
<larrymi_> nicbet: idk
<cholcombe> am i correct in guessing that reactive set_local requires that the value be json serializable?  I was caught by surprise when python error'd out
<nicbet> larrymi_ : according to https://github.com/juju/juju/blob/master/provider/vsphere/ova_import_manager.go line 72 the cloud init params are given as part of the OvfImportSpec
#juju 2016-10-06
<larrymi_> nicbet: ack
<konobi> is there a way to tell juju to stop asking for Ubuntu SSO creds?
<konobi> (during deploy that is)
<jrwren> publish your charm
<spaok> if I have a subordinate charm on a service with multiple units, so each service lists my subordinate charm under it, can the subordinate charm get a list of all the units in the service? seems like it only sees itself in related_units
<marcoceppi> spaok: right, it can only see its peers
<marcoceppi> spaok: what are you trying to get?
<spaok> marcoceppi: I was thinking I needed to build a list of units in the charm to pass to haproxy, but I think from what I read about the haproxy charm it will union all the units that join with the same service_name so I might be ok
<spaok> though I really wish the haproxy charm had a python example of how to send it the yaml structure it wants
<spaok> i can't seem to figure out how to set the service structure it wants
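For what it's worth, a guess at the shape the haproxy charm expects on its reverseproxy relation, sent with the shell hook tool; the exact keys should be double-checked against the haproxy charm's README:

    # services is a YAML document listing frontends and their backend servers.
    relation-set "services=$(cat <<'EOF'
    - service_name: myapp
      service_host: 0.0.0.0
      service_port: 80
      servers:
        - [myapp-0, 10.0.0.5, 8080, 'maxconn 100']
    EOF
    )"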
<stub> cholcombe: Yes. It needs to serialize the data so it can detect changes, and json was chosen as the format.
<spaok> is relation_set local, or does it send data to the related unit
<spaok> oh, interesting
<stub> spaok: relation_set sets the data on the local end (the only place you can set data), which is visible to the remote units.
<stub> spaok: A unit doesn't 'send' data on a relation. It sets the data on its local end and informs other units that it has changed.
<socceroos> hi all
<socceroos> Has anyone here tried using juju 2.0 to manage deployments on Digital Ocean?
<socceroos> or any version of juju really...
<socceroos> ping...
<SimonKLB> is it possible to run charms that require authentication with the bundletester?
<SimonKLB> and/or how can i run bundletester/amulet tests on charms that use resources?
<anshul_> #juju
<anshul_> facing an issue while testing Glance using Amulet.. Glance is loaded locally and while i execute JUJU STATUS the state does not change and it remains at:
<anshul_>   glance:
<anshul_>     charm: local:xenial/glance-150
<anshul_>     exposed: false
<anshul_>     service-status:
<anshul_>       current: unknown
<anshul_>       message: Waiting for agent initialization to finish
<anshul_>       since: 06 Oct 2016 23:12:45+05:30
<anshul_>     relations:
<anshul_>       amqp:
<anshul_>       - rabbitmq-server
<anshul_>       cluster:
<anshul_>       - glance
<anshul_> facing an issue while testing Glance using Amulet.. Glance is loaded locally and while i execute JUJU STATUS the state does not change and it remains at "Waiting for agent initialization to finish".. and if i change the series to trusty it works fine
<anshul_> Please help
<nicbet> larrymi_ what version of VMWare are you running? I just can't get this to work properly. I've tried juju 1.25-6 over night, as well as using trusty images. always the same result with the public key not being injected properly and bootstrap failing.
<petevg> bradm: did you ever get a chance to move the bip stuff to its own namespace. If so, would you go and close out the old reviews for it? (one of them is here: https://code.launchpad.net/~josvaz/charms/trusty/bip/client_side_ssl-with_helper-lp1604894/+merge/301802)
<larrymi_> nicbet: I'm also using 6.0 3620759 for ESXi and 3644788 for the vCenter. I suspect it's a different issue but you'd need to be able to get to the logs. If I had to guess, I would guess that cloud-init is not able to get to a resource (perhaps blocked by the firewall)
<nicbet> larrymi_ I'll have to do shenanigans and mount the failed bootstrap VM's hard drive to a different vm to access the logs... let me see
<larrymi_> nicbet: ok
<nicbet> larrymi_ would you mind sharing your redacted custom cloud yaml for vsphere? maybe I'm missing a config entry.
<larrymi_> nicbet:   vsphere:
<larrymi_>     type: vsphere
<larrymi_>     auth-types: [userpass]
<larrymi_>     endpoint: **.***.*.***
<larrymi_>     regions:
<larrymi_>       dc0: {}
<nicbet> pretty much what I have, for me dc0 is replaced with our datacenter name
<nicbet> larrymi_ : cloud-init.log on the mounted drive states that neither DataSourceOVFNet, nor DataSourceOVF could load any data. Notably this line appears too: ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
<nicbet> larrymi_ : did you configure something special on the vmware itself to enable OVF customization?
<shruthima> hi kwmonroe , we are charming the websphere liberty charm for Z. This charm provides httpport, httpsport, and hostname. We thought to make use of the http interface https://github.com/juju-solutions/interface-http but we noticed the http interface will provide only httpport and hostname. So we want to know: do we need to write a new interface?
<larrymi_> nicbet: I didn't have to configure anything... just the export prior to bootstrap that I mentioned earlier, but it won't bootstrap at without that.
<larrymi_> nicbet: interesting that it's disabled.. I wonder what it's keying on.
<nicbet> larrymi_ the cloudimage has open-vm-tools package installed. upon boot they are started, i see that from the logs. Then cloud init tries all different data sources, like EC2 store, Config ISO in CDROm, etc. one of them is OVFDatasource, which uses vmware tools to grab the info from the ovf template configuration
<nicbet> larrymi_ at that point it's not given anything, and logs the above line about guest customization being disabled
<nicbet> larrymi_ dumb question ... are you running juju against a vCenter or a vSphere ESXI?
<larrymi_> nicbet: a vCenter
<larrymi_> nicbet: which log are you looking at?
<nicbet> larrymi_ /var/log/cloud-init.log and /var/log/vmware-vmsvc.log together with https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceOVF.py
<larrymi_> nicbet: I do have the same Oct  5 19:57:55 ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
<larrymi_> nicbet: can probably be ignored
<nicbet> larrymi_ do you find a line that says it loaded cloud-init data from the OVF source?
<larrymi_> nicbet: yes, it then goes on to Oct  5 19:57:54 ubuntu [CLOUDINIT] __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceNoCloud.DataSourceNoCloud'>
<nicbet> larrymi_ grep 'data found' cloud-init.log prints all lines as 'no local data found from ***' where *** is the DataSource it tried
<rufus> https://www.youtube.com/watch?v=lm64uOErZ8w
<larrymi_> nicbet: yeah they're there
<larrymi_> Oct  5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: finish: init-local/search-NoCloud: SUCCESS: no local data found from DataSourceNoCloud
<larrymi_> Oct  5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: start: init-local/search-ConfigDrive: searching for local data from DataSourceConfigDrive
<nicbet> larrymi_ is there any that says data was found?
<larrymi_> nicbet: the logs don't say data was found specifically but they seem to indicate that it only fails to get the data locally.
<nicbet> larrymi_ I have a hunch that this only works with vCenter
<kwmonroe> shruthima: i wouldn't write a new interface, just open an issue and/or provide a pull request to include an https port here:  https://github.com/juju-solutions/interface-http
<shruthima> ok sure kwmonroe
<shruthima> kwmonroe: May i know when the xenial series of the current IBM-IM charm will be pulled back? so we can push the ibm-im for Z
<kwmonroe> shruthima: i'll try to complete that before my EOD, so roughly 7 hours from now.
<shruthima> oh k thanku
<larrymi_> nicbet: yes could be. I haven't tried with ESXi host as endpoint
<shruthima> kwmonroe: i have edited http interface provides.py http://paste.ubuntu.com/23285185/ requires.py http://paste.ubuntu.com/23285180/ locally and tested it is working fine. So is there any way to create a merge proposal for http interface ?
<shruthima> kwmonroe: i have seen https://github.com/juju-solutions/interface-http/issues/5 similar issue is opened for http interface also
<kwmonroe> shruthima: issue #5 would allow you to define multiple http interfaces in your charm's metadata and react to them differently (with different ports).  if that would solve your needs, you can just add a comment to issue #5 saying it would be useful to you as well.  if you instead require multiple ports for a single http interface, i think that's a separate issue.  you could run "git diff" in the directory where you made your
<kwmonroe>  edits, paste the output at http://paste.ubuntu.com, and then include that paste link in a new issue here: https://github.com/juju-solutions/interface-http/issues.
<shruthima> ok thanks kevin
<holocron> could some one take a look at this pastebin and tell me what next steps to debugging might be? http://pastebin.com/7MHZV4e3
<holocron> this, specifically: unit-ceph-1: 12:57:00 INFO unit.ceph/1.update-status admin_socket: exception getting command descriptions: [Errno 111] Connection refused
<spaok> stub: I guess what I'm confused on is, when I try to set the services structure for when haproxy joins, if I do hookenv.relation_set(relation_id='somerelid:1', service_yaml) then the joined/changed hook runs but haproxy doesn't get the services yaml, however if I do hookenv.relation_set(relation_id=None, service_yaml) then it does work, and haproxy builds the config right, but after a bit when the update-status runs it errors because relation_id isn't set
<stub> spaok: Specifying None for the relation id means use the JUJU_RELATION_ID environment variable, which is only set in relation hooks. Specifying an explicit relation id does the same thing, but will work in all hooks. Provided you used the correct relation id.
<stub> spaok: You can test this using "juju run foo/0 'relation-set -r somerelid:1 foo=bar'" if you want.
<stub> (juju run --unit foo/0 now it seems, with 2.0)
<stub> "juju run --unit foo/0 'relation-ids somerelname' " to list the relationids in play
<bdx> hows it going everyone?
<bdx> do I have the capability to bootstrap to a specific subnet?
<bdx> using aws provider
<lutostag> any way to specify "charm build" deps? (for instance in my wheelhouse I have cffi, which when the charm is built depends on having libffi-dev installed.) I have this in the README, but wondering if it's possible to enforce programmatically
<spaok> thanks stub, I'm fairly certain I have the right relid, but what I see from the haproxy side is only the private_ip and unit id, nothing else; with None, I get the services yaml. its very confusing
<spaok> I'll look at trying that command, I was wondering how to run the relation-set command
<spaok> lutostag: layers.yaml ?
<spaok> not 100% on that
<lutostag> spaok: yeah, I have deps for install time unit-side like http://pastebin.ubuntu.com/23285512/, but not sure how to do it "charm build" side
<spaok> ah gotca, not sure
<lutostag> think I'll go with a wrapper around charm build in the top-level dir, don't need to turn charm into pip/apt/npm/snap anyways
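Something like the wrapper lutostag settles on, with the -dev packages named earlier in the thread as assumptions:

    #!/bin/bash
    # Install build-time deps for wheelhouse entries with C extensions
    # (cffi, lxml), then run the real build.
    set -e
    sudo apt-get install -y build-essential python3-dev libffi-dev \
        libxml2-dev libxslt1-dev
    charm build "$@"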
<kwmonroe> hey icey - can you help holocron with his ceph connection refused?  http://pastebin.com/7MHZV4e3
<holocron> Something in dpkg giving an Input/output error
<kwmonroe> lutostag: seems odd that an entry in your wheelhouse.txt would require a -dev to be installed for charm build
<lutostag> kwmonroe: doesnt it though, it builds it locally. by compiling stuff, I guess there are c-extensions in the python package itself
<lutostag> lxml is another example
<kwmonroe> cory_fu_:  does charm build have runtime reqs dependent on the wheelhouse?
<kwmonroe> cory_fu_: (see lutostag's issue above)
<lutostag> (but I can get around that one, cause we have that deb'd up)
<holocron> kwmonroe icey going to try this http://askubuntu.com/questions/139377/unable-to-install-any-updates-through-update-manager-apt-get-upgrade
<cory_fu_> lutostag, kwmonroe: You shouldn't need a -dev package for charm-build because it *should* be installing source only and building them on the deployed unit, since we don't know the arch ahead of time.
<lutostag> ah, so I'll need these -dev's on the unit-side, good to know, but interesting.
<kwmonroe> holocron: that doesn't sound like a ceph problem then... got a log with the dpkg error?
<cory_fu_> lutostag: What's the repo for cffi so I can try building it?
<lutostag> cory_fu_: my charm or the upstream python package?
<cory_fu_> lutostag: The charm
<lutostag> cory_fu_: lemme push...
<cory_fu_> Sorry, I misread cffi as the charm name
<holocron> kwmonroe: similar messages to this: http://pastebin.com/XZ0uFfz8
<holocron> I've had make, and build-essential give the error, right now it's libdpkg-perl
<kwmonroe> holocron: when do you see that?  on charm deploy
<lutostag> cory_fu_: bzr branch lp:~lutostag/oil-ci/charm-weebl+weasyprint-dep
<holocron> kwmonroe no, the charm deployed fine yesterday, it came in as part of the openstack-on-lxd bundle. i was able to create a cinder volume even.. logged in today and saw that error in my first pastebin
<lutostag> (and I'll need to add the deps as explained too)
<holocron> i logged into the unit and did an apt-get clean and apt-get update
<holocron> now it's failing with this
<holocron> is it common practice to scale out another unit and then tear down the breaking one?
<holocron> like, should i just make that my default practice or should i try to fix this unit?
<kwmonroe> holocron: common practice is for the breaking unit not to break
<kwmonroe> especially on some nonsense apt operation
<holocron> :P yeah that's the ideal
<kwmonroe> is that unit perhaps out of disk space holocron?
<kwmonroe> or inodes?  (df -h && df -i)
<holocron> nope, lots of space
<holocron> lots of inodes
<kwmonroe> holocron: anything in dmesg, /var/log/syslog, or /var/log/apt/* on that container that would help explain the dpkg failure?
<holocron> kwmonroe probably,sorry i've got to jump to another thing now but i'll try to get back
<kwmonroe> np holocron
<Siva> I am trying to remove my application in juju 2.0 but it is not working
<Siva> I put pdb.set_trace() in my code
<Siva> Not sure if it is because of that
<Siva> juju remove-application does not remove the application
<Siva> How do I now forcefully remove it?
<Siva> Any help is much appreciated
<spaok> is there a decorator for the update-status hook? or do I use @hook?
<Siva> It is stuck at the install hook where I put pdb
<Siva> I have the following decorator for the install hook, @hooks.hook()
<spaok> Siva can you remove the machine?
<lutostag> Siva: juju resolved --no-retry <unit> # over and over till its gone
<kwmonroe> Siva: remove-application will first remove relations between your app and something else, then it will remove all units of your app, then it will remove the machine (if your app was the last unit on a machine)
<kwmonroe> Siva: you're probably trapping the remove-relation portion of remove-application, which means you'll need to continue or break or whatever pdb takes to let it continue tearing down relations / units / machines
<stub> Siva: The hook will likely never complete, so you either need to go in and kill it yourself or run 'juju remove-machine XXX --force'
<kwmonroe> so lutostag's suggestion would work -- keep resolving the app with --no-retry to make your way through the app's lifecycle.  or spaok's suggestion might be faster -- juju remove-machine X --force
<spaok> I work with containers, so I tend to do that mostly
<stub> (and haven't we all left our Python debugger statements in a hook at some point)
<Siva> I removed the machine, it shows the status as 'stopped'
<spaok> takes a sec
<kwmonroe> keep watching.. it'll go away
<spaok> also rerun the remove-application
<Siva> OK. @lutostag suggestion did the trick
<spaok> I've noticed some application ghosts when I remove machines
<Siva> Now it is gone
<Siva> Thanks
<stub> spaok: yes, its perfectly valid to have an application deployed to no units. Makes sense when setting up subordinates, more dubious with normal charms.
<Siva> One thing I noticed is that after removing the machine (say 0/lxd/14 is removed) now when you deploy it creates the machine 0/lxd/15 rather than 14
<Siva> is this expected?
<spaok> ya
<Siva> same for units as well
<spaok> yup
<spaok> makes deployer scripts fun
<Siva> Why does it not consider the freed ones so that it is in sequence?
<rick_h_> Siva: mainly because it makes things like logs more useful when the numbers are unique
<rick_h_> Siva: especially as everything is async
<rick_h_> Siva: so you can be sure any logs re: unit 15 are in fact the unit that was destroyed at xx:xx
<Siva> OK
<Siva> It looks a bit weird in the 'juju status' as the sequence is  broken
<rick_h_> Siva: yea, understand, but for the best imho
<spaok> Siva: why I have scripts to destroy and recreate my MAAS/Juju 2.0 dev env, good to reset sometimes
<Siva> One question
<Siva> I have the following entry in the config.yaml file
<Siva> tor-type:
<Siva>   type: string
<Siva>   description: Always ovs
<Siva>   default: ovs
<Siva> Now when I do: tor_type = config.get("tor_type"); print "SIVA: ", tor_type
<Siva> I expect it to print the default value 'ovs' as I have not set any value
<Siva> It prints nothing
<Siva> Is this expected?
<spaok> tor_type or tor-type?
<Siva> oops! my bad
<spaok> I put underscores in my config.yaml
<Siva> Now after making the change, I can just 'redeploy', right?
<spaok> you can test by editing the charm live if you wanted
<Siva> How do I do that?
<spaok> vi /var/lib/juju/agents/unit-charmname-id/charms/config.yaml
<spaok> kick jujud in the pants by
<spaok> ps aux |grep jujud-unit-charmname |grep -v grep | awk '{print $2}' | xargs kill -9
<spaok> should cause the charm to run
<Siva> I modified the config.yaml and restarted the jujud
<Siva> still prints None
<spaok> Siva: in my reactive charm I just config('tor_type')
<spaok> not config.get
<spaok> not sure the diff
<Siva> I removed the app and deployed it again
<Siva> config.get works
<Siva> I can try config('tor_type') and see how it goes
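Siva's bug was the hyphen/underscore mismatch: the key in config.yaml is tor-type, so config.get("tor_type") returns None. A quick way to check exactly what the charm sees, using the shell hook tool through juju run (2.0 syntax, as stub notes earlier); the application name is a placeholder:

    # Should print the default "ovs" once the key names match.
    juju run --unit myapp/0 'config-get tor-type'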
<cory_fu_> kwmonroe: Comments on https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50
<icey> hey holocron kwmonroe just seeing the messages
<icey> to me, the line saying admin_socket: connection refused is more interesting; it looks like maybe the ceph monitor is down?
<holocron> icey hey, i ended up tearing down the controller. I'm going to redeploy now and i'll drop a line in here if it happens again
<icey> holocron:  great; kwmonroeis right though, the expectation is that it wouldn't break
<Siva> @spaok, live config.yaml change testing does not work for me
<kwmonroe> cory_fu_: i like the callback idea in https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50.  but i'm not gonna get to that by tomorrow, and i really want the base hadoop charms refreshed (which depend on the pr as is).  you ok if i open an issue to do it better in the future, but merge for now?
<kwmonroe> cory_fu_: it doesn't matter what you say, mind you.  petevg already +1'd it.  just trying to fake earnest consideration.
<cory_fu_> ha
<cory_fu_> kwmonroe: I'm also +1 on it as-is, but I'd like to reduce boilerplate where we can.  But we can go back and refactor it later
<cory_fu_> kwmonroe: Issue opened and PR merged
<cory_fu_> kwmonroe: And I'm good on the other commits you made for the xenial updates
<kwmonroe> dag nab cory_fu_!  you're fast.  i was still pontificating on the title of a new issue.  thanks!!
<kwmonroe> and thanks for the xenial +1.  i'll squash, pr, and set the clock for 24 hours till i can push ;)
<kwmonroe> just think, you'll be swimming when the upstream charms get refreshed.
<kwmonroe> nice knowing you
<kwmonroe> before you go cory_fu_.. did you see mark's note to the juju ML about blocked status?  seems the slave units are reporting "blocked" even when a relation is present.  i'm pretty sure it's because of this:  https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
<kwmonroe> as in, there *is* a spec mismatch until NN/RM are ready.  what's the harm in setting status with a spec mismatch?
<kwmonroe> afaict, they'll report "waiting for ready", which seems ok to me.  unless we want to add a separate block for specifically dealing with spec mismatches, which would be some weird state between waiting and blocked to see if the spec ever does eventually match.
<cory_fu_> kwmonroe: I think the problem is one of when hooks are triggered.  I think that what's happening is that the relation is established and reported, but the hook doesn't get called on both sides right away, leaving one side reporting "blocked" even though the relation is there, simply because it hasn't been informed of it yet
<kwmonroe> i think i don't agree with ya cory_fu_... NN will send DN info early (https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L27).  but check out what's missing... https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L95
<cory_fu_> kwmonroe: Doesn't matter.  The waiting vs blocked status only depends on the .joined state: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L36
<kwmonroe> spoiler alert cory_fu_:  it's clustername.  we don't send that until NN is all the way up, so the specmatch will be false:  https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L142
<kwmonroe> cory_fu_: you crazy:  https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
<cory_fu_> kwmonroe: Ooh, I see.
<cory_fu_> We should remove that @when_none line.  There's no reason for it
<kwmonroe> great idea cory_fu_!  if only i had it 15 minutes ago.
<cory_fu_> :)
<kwmonroe> petevg: you mentioned you also saw charms blocked on missing relations (maybe zookeeper?).  could it be you saw the slaves blocked instead?
<neiljerram> Aaaargh guys, would you _please_ stop making gratuitous changes in every Juju 2 beta or rc?
<neiljerram> The latest one that has just bitten my testing is the addition of a * after the unit name in juju status output.
<neiljerram> Before that it was 'juju set-config' being changed to 'juju config'
<neiljerram> This is getting old....
<petevg> kwmonroe: Yes. I think that it was probably just the hadoop slave.
<kwmonroe> neiljerram: apologies for the headaches!  but you should see much more stability in the RCs.  at least for me, api has been pretty consistent with rc1/rc2.  rick_h_ do you know if there are significant api/output changes in the queue from now to GA?
<kwmonroe> thx petevg - fingers crossed that was the only outlier
<petevg> np
<petevg> fingers crossed.
<neiljerram> kwmonroe, tbh I'm afraid I have to say that I think things have been _less_ stable since the transition from -beta to -rc.  My guess is that there are changes people have been thinking they should do for a while, and only now that the GA is really looking likely do they think they should get them out :-)
<neiljerram> kwmonroe, but don't worry, I've had my moan now...
<kwmonroe> heh neiljerram - fair enough :)
<neiljerram> Do you happen to know what the new * means?  Should I expect to see it on every juju status line?
<kwmonroe> neiljerram: i was just about to ask you the same thing... i haven't seen the '*'
<kwmonroe> neiljerram: you on rc1, 2, or 3?
<neiljerram> kwmonroe, rc3 now; here's an excerpt from a test I currently have running:
<neiljerram>        UNIT                WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS   PORTS  MESSAGE
<neiljerram>        calico-devstack/0*  unknown   idle   0        104.197.123.208
<neiljerram>        
<neiljerram>        MACHINE  STATE    DNS              INS-ID         SERIES  AZ
<neiljerram>        0        started  104.197.123.208  juju-0f506f-0  trusty  us-central1-a
<neiljerram> kwmonroe, just doing another deployment with more units, to get more data
<kwmonroe> hmph... neiljerram i wonder if that's an attempt to truncate the unit name to a certain length.  doesn't make sense in your case, but i could see 'really-long-unit-name/0' being truncated to 'really-long-u*' to keep the status columns sane.
<kwmonroe> just a guess neiljerram
<kwmonroe> and at any rate neiljerram, if you're scraping 'juju status', you might want to consider scraping 'juju status --format=tabular', which might be more consistent.
<neiljerram> kwmonroe, BTW the reason this matters for my automated testing is that I have some quite tricky code that is trying to determine when the deployment as a whole is really ready.
<kwmonroe> ugh, not right
<kwmonroe> sorry, i meant 'juju status --format=yaml', not tabular
<neiljerram> kwmonroe, yes, I suppose that would probably be better
<neiljerram> kwmonroe, Ah, it seems that * means 'no longer waiting for machine'
<kwmonroe> neiljerram: you sure?  i just went to rc3 and deployed ubuntu... i still see:
<kwmonroe> UNIT      WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
<kwmonroe> ubuntu/0  waiting   allocating  0        54.153.95.194          waiting for machine
<neiljerram> kwmonroe, exactly - because the machine hasn't been started yet
<kwmonroe> oh, nm neiljerram, i should wait longer.. you said the '*' is....
<kwmonroe> right
<kwmonroe> i gotta say, intently watching juju status is right up there with the birth of my first child
<kwmonroe> neiljerram: i can't get the '*' after the machine is ready, nor using a super long unit name.  i'm not sure where that's coming from.
<kwmonroe> UNIT                       WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
<kwmonroe> really-long-ubuntu-name/0  maintenance  executing  1        54.153.97.184          (install) installing charm software
<kwmonroe> ubuntu/0                   active       idle       0        54.153.95.194          ready
<neiljerram> kwmonroe, do you have rc3?
<kwmonroe> neiljerram: i do... http://paste.ubuntu.com/23286448/
<neiljerram> kwmonroe, curious, I don't know then.  I'm also using AWS, so it's not because we're using different clouds.
<kwmonroe> neiljerram: i'm aws-west.  if you're east, it could be signifying the hurricane coming to the east coast this weekend.
<neiljerram> kwmonroe, :-)
<kwmonroe> rc3 backed by weather.com ;)
<kwmonroe> neiljerram: care to /join #juju-dev?  i'll ask the core devs where the '*' is coming from
<neiljerram> kwmonroe, sure, will do
<kwmonroe> for anyone following along, the '*' denotes leadership
<magicaltrout> i thought you were the leader kwmonroe
<kwmonroe> that's kwmonroe* to you magicaltrout
<magicaltrout> Texas' own Idi Amin
<kwmonroe> magicaltrout: you still in souther california?  or did you get back to the right side of the atlantic?
<magicaltrout> hehe
<magicaltrout> i'm back in the motherland for now
<magicaltrout> been instructed to report to Washington DC on the 28th November
<magicaltrout> so not for long
<magicaltrout> although i was hoping a nice sunny jaunt to ApacheCon EU was gonna be the last trip of the year
<kwmonroe> must be hard being so popular magicaltrout
<magicaltrout> trololol
<magicaltrout> whatever
<magicaltrout> kwmonroe: is cory_fu_ staying in your basement?
<kwmonroe> magicaltrout: two things you should know about central Texas:  1) it's all bedrock; no basements.  2) cory_fu_ was a track lead at the summit; he stays at the Westin.
<magicaltrout> Westin? and you lot dumped us at the Marriot?
<magicaltrout> I need to upgrade
<magicaltrout> get me a real community sponsor!
<kwmonroe> apply for track lead in Ghent next year ;)
<kwmonroe> comes with a bright orange shirt.. wearable anywhere!
<magicaltrout> they did look nice....... sadly I'll be too drunk
<magicaltrout> oh
<magicaltrout> that never stopped you guys
<magicaltrout> i could lead the "werid big data - container crossover"
<magicaltrout> s/werid/weird
<kwmonroe> pretty sure you're already leading that
<magicaltrout> hehe
<kwmonroe> i don't understand why mapreduce wasn't enough for you
<magicaltrout> yeah i've been tapping up the mesos mailing list the last few days trying to figure out what needs to be done to get LXC support in their container stack
<kwmonroe> everything can be solved with mapred.  and if it can't, map it again.
<magicaltrout> i'm not a C programmer though so it might take me a while unless my IDE-fu wins
<kwmonroe> try emacs
<magicaltrout> I use emacs actually kwmonroe :P
<magicaltrout> just not for coding :)
<kwmonroe> sure magicaltrout.. it's also good as a desktop environment and for playing solitaire.
<magicaltrout> exactly see
<magicaltrout> you know kwmonroe
<magicaltrout> you know
<magicaltrout> sorry.... kwmonroe*
<kwmonroe> so magicaltrout*, how can we help you with apachecon seville?  you've got some drill bit work i presume?
<magicaltrout> yeah. Plan to do a bigtop & drill demo
<magicaltrout> get some stuff in a bundle so willing volunteers can test etc
<magicaltrout> if that latest RC changelog isn't a lie.... drill will even work in LXC which is a bonus
<kwmonroe> roger that magicaltrout.. i'll volunteer!
<magicaltrout> for the first time ever, the Juju talk is the easier of the two. I'll knock something together next week and we can iterate over it
<magicaltrout> we've got a month or so
<magicaltrout> try and not leave it to the last minute for a change
<kwmonroe> !remindme 1 month
<magicaltrout> yeah, well for ApacheCon NA i was writing the talks on the plane
<magicaltrout> so you know....
<magicaltrout> how bad can it be?
<kwmonroe> it could be as bad as a 6 out of 10. but no worse.
<magicaltrout> thanks for the reassurance
<bdx> cmars: https://github.com/cmars/juju-charm-mattermost/pull/2/files
<cmars> bdx, why does it need nginx embedded?
<cmars> bdx, the systemd support is nice
<bdx> cmars: so I can give my users a clean fqdn with no ports hanging on the end
<bdx> cmars: plus, mattermost docs suggest it
<cmars> bdx, ok, that makes sense
<cmars> bdx, is it worth exposing the mattermost-port at all then?
<cmars> might just leave it fixed and local only..
<bdx> ok
<cmars> bdx, also, let's remove trusty from the series in metadata
<bdx> I thought i did ... checking
<cmars> bdx, ah, you did, my bad
<bdx> cmars: there ya go
<lutostag> how would I fix 2016-10-06 23:30:28 ERROR juju.worker.dependency engine.go:526 "leadership-tracker" manifold worker returned unexpected error: leadership failure: lease manager stopped... which daemon should I kick on the unit?
<lutostag> 2.0 beta7 (can't tear down and redeploy for a little while unfortunately)
<cmars> bdx, thanks, i'll have to test this out, but i could publish it soon. probably need to update the resource as well
<bdx> cmars: totally, I was thinking of adding a tls-certificates interface, so if a user desired to have ssl, they could just relate to the easyrsa charms
<cmars> bdx, oooh nice!
<bdx> actually, I feel that functionality should be a part of the nginx charm though
<cmars> bdx, do we have a LE layer yet?
<cmars> that'd be really nice for a private secure mattermost
<bdx> stokachu: ^^
<bdx> cmars, stokachu: https://jujucharms.com/u/containers/easyrsa/2
<bdx> LE layer?
<cmars> let's encrypt
<bdx> oooo
<bdx> cmars: there should be
<bdx> I know we have discussed it
<stokachu> lutostag: switch to the admin controller and ssh into machine 0
<stokachu> lutostag: then just pkill jujud and it'll restart and pick back up
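[ed: a minimal sketch of stokachu's recovery steps; the controller and model names are assumptions:]
    juju switch my-controller:controller   # hop to the controller model
    juju ssh 0                             # machine 0 hosts the controller agents
    sudo pkill jujud                       # the init system restarts jujud, which re-acquires its leases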
<stokachu> bdx: sorry some extra context?
<stokachu> ah i see nginx
#juju 2016-10-07
<lutostag> stokachu: that did it. thanks!
<stokachu> lutostag: np
<stokachu> lutostag: it's since been fixed i dont remember what beta release though
<bdx> stokachu: generating and configuring the certs is a great deal of manual overhaul I go through with almost every app that uses layer-nginx, or has a frontend or endpoint of any kind for that matter
<bdx> stokach: do you think it would be wise to add tls/ssl functionality as an option in layer-nginx?
<bdx> stokachu:^
<stokachu> bdx: yea
<stokachu> think it's a good idea
<bdx> stokachu: should the target directory to store the crt/key in be specified as a config or option?
<bdx> I would think option ... bc it is something that isn't going to be modified really ...
<bdx> post deploy
<stokachu> bdx: yea should just drop the certs in the normal locations for nginx to look
<stokachu> you can make it an option but just default to /etc/ssl/certs
<bdx> stokachu: like this -> http://paste.ubuntu.com/23287010/ ?
<stokachu> bdx: quick glance looks good
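[ed: bdx's paste isn't reproduced here; this is just the general shape of a layer option with a default, assuming charm-tools' layer-options mechanism and a made-up option name:]
    # in layer-nginx's own layer.yaml
    defines:
      ssl-cert-path:
        type: string
        default: /etc/ssl/certs   # where nginx looks by default, per stokachu
        description: directory the layer drops the crt/key into
    # in a consuming charm's layer.yaml
    includes: ['layer:nginx']
    options:
      nginx:
        ssl-cert-path: /etc/ssl/private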
<neiljerram> rc3 with GCE: lots of my overnight tests failed because of machines going to 'down' state and never coming back.  Wonder if anyone else is seeing that?  It could be a GCE problem, as much as Juju.
<zeestrat> marcoceppi: Are you still the maintainer for the Nagios charm? If not, do you know who could look at https://bugs.launchpad.net/charms/+source/nagios/+bug/1605733 ?
<mup> Bug #1605733: Nagios charm does not add default host checks to nagios <family> <nagios> <nrpe> <unknown> <nagios (Juju Charms Collection):New> <https://launchpad.net/bugs/1605733>
<herb64> Hi all, trying to bootstrap into an openstack environment, but getting "authentication failed" and I found this is because "certificate signed by unknown authority"...
<herb64> also tried juju --debug  bootstrap --config ssh-hostname-verification=false mycontroller cloudname
<herb64> any way to disable that certificate check, similar with "--insecure" options in curl .. ?
<magicaltrout> the mesos docker integration just mimics docker commands in C
<magicaltrout> how hard can it be to do the same with LXC?!
<magicaltrout> (famous last words)
<jrwren> by mimics, do you mean fork/exec/pipe?
<Spaulding> Hello!
<Spaulding> Is there any folk that has working charm with xenial + ansible?
<Spaulding> xenial does not have python2 ... and basically i'm stuck cause i don't know how to run something like a "pre-install" hook
<magicaltrout> jrwren: https://github.com/apache/mesos/blob/master/src/docker/docker.cpp#L1437
<magicaltrout> i'm not a C coder
<magicaltrout> but that looks like its just running some commandline stuff
<jrwren> magicaltrout: i agree. subprocess is a giveaway
<magicaltrout> Spaulding: i don't know of any, but i know people do do it from time to time. marcoceppi or lazypower should be able to help when they're around
<magicaltrout> indeed jrwren, so I figure fork, copy, change the commands to run LXC/LXD stuff, compile, and run LXD on Mesos ;)
<Spaulding> cheers magicaltrout !
<Spaulding> so now i need to wait ;)
<magicaltrout> Spaulding: a two pronged attack never hurt either, but people don't do it - dump a question on the juju mailing list as well
<magicaltrout> as people monitor that and not irc and vice versa
<Spaulding> magicaltrout: hmm... i might give a shot, but i also have an option to have direct help from juju dev's
<magicaltrout> well they are the people on the mailing list :)
<Spaulding> they should i guess :)
<jrwren> Spaulding: a common pattern is for hooks/install to be a shell script which installs requirements and calls python2 install.real at the end.
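[ed: the pattern jrwren describes, roughly; the package list is illustrative:]
    #!/bin/bash
    # hooks/install -- bootstrap python2 on xenial, then hand off
    set -e
    apt-get update
    apt-get install -y python python-yaml
    exec "$(dirname "$0")/install.real"   # the real python2 install hook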
<Spaulding> ok, let's give a try with mailing list :)
<Spaulding> jrwren: exactly - but basically it's like a common scenario..
<Spaulding> and i wanted just to use ansible to bootstrap anything i want
<Spaulding> but xenial is different (no python2) so basically i need to hack it dirty...
<jrwren> Spaulding: sadly, I think all of the charms I knew which were using ansible, moved over to reactive, so I have no good examples. They may never have been updated for xenial when they were ansible.
<Spaulding> hmm...
<Spaulding> we've got a really big ansible setup..
<Spaulding> and it would be much easier to use it instead of repeating the whole thing in e.g. bash
<Spaulding> i also tried to search some projects in google / github
<Spaulding> so far - no luck...
<Spaulding> btw. reactive looks really promising...
<Spaulding> i think it might be a good idea to use reactive and invoke ansible from it
<holocron> hey, sorry about the drop off yesterday, and I've since gotten a decent client that'll log the chat for me. Is there an irc chat log somewhere I can review what was said yesterday?
<magicaltrout> holocron: i have a decent backscroll 2 mins i'll pastebin what I saw
<holocron> magicaltrout thanks
<magicaltrout> https://gist.github.com/buggtb/7b96fa7f023aa3749b4c5c3cc67d3e0c
<holocron> appreciate that
<marcoceppi> Spaulding: o/
<marcoceppi> Spaulding: you can install python2 during bootstrap for reactive charms by adding the following to your layer.yaml
<marcoceppi> Spaulding: https://gist.github.com/marcoceppi/8743453bfce28be97d71d5706bda0ab8
<marcoceppi> the layer.yaml options are evaluated by the reactive framework prior to any reactive code running
<marcoceppi> this allows you to bootstrap any deps needed for either apt packages or pip packages to run hook code
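[ed: the gist itself isn't reproduced here; a rough guess at the kind of layer.yaml meant, assuming layer-basic's 'packages' option:]
    includes: ['layer:basic']
    options:
      basic:
        packages:   # apt packages installed before any reactive code runs
          - python
          - ansible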
<Spaulding> lovely!
<magicaltrout> nearly as lovely as marcoceppi himself....
<Spaulding> and then i can tell reactive to run ansible playbooks? right?
<magicaltrout> yeah you can do stuff like @when_not('myplaybook.installed')
<magicaltrout> def install_the_best_playbookever():
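[ed: a fuller, runnable sketch of that idea; the playbook path and state name are made up, and plain subprocess is used rather than any particular ansible helper:]
    # reactive/myapp.py
    import subprocess
    from charms.reactive import when_not, set_state

    @when_not('myplaybook.applied')
    def install_the_best_playbook_ever():
        # run the playbook shipped in the charm against localhost
        subprocess.check_call(['ansible-playbook', '-i', 'localhost,',
                               '-c', 'local', 'playbooks/site.yml'])
        set_state('myplaybook.applied')   # don't run it again next hook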
<Spaulding> great!
<Spaulding> finally I'm out from the dark hole..
<holocron> so, it seems that i've run into an odd problem with cs:xenial/rabbitmq-server. It starts up normally after deploy, but at some point the local nodename changes from "ubuntu" to "juju-..." and it starts to fail with "unable to connect to node rabbit@ubuntu: nodedown"
<magicaltrout> holocron: is that lxc? or something else?
<holocron> yeah, lxc.. or lxd i suppose
<magicaltrout> yeah, beisner told me they facilitate a reboot of rabbit-mq to fix that (I think, I was a bit drunk)
<magicaltrout> also RC3 supposedly has some hostname fixes that might resolve that issue also holocron
<magicaltrout> so if you're not on RC3, upgrade if you can
<holocron> i'm on rc3
<holocron> okay i'll try a reboot
<magicaltrout> thats supposedly in the charm
<holocron> just fyi: the relevant log snippet https://gist.github.com/vmorris/402e946bbf8d82c1e46e1c2123d29c7e
<magicaltrout> not some manual interaction
<holocron> ah hrm
<magicaltrout> dunno, although beisner and some others will do. Although I'm surprised RC3 doesn't resolve it if the change log wasn't lying
<magicaltrout> admcleod: you coming to ApacheCon then?
<admcleod> magicaltrout: im not sure yet
<magicaltrout> give me warning if kjackal is going
<magicaltrout> need to pack prozac
<beisner> hi magicaltrout - hmm nope no rmq reboots happening here.  but the beer was good :-)
<holocron> beisner : are you saying that no reboots are happening in the rmq charm, or in my log snip? I have a clean model with single rabbitmq-server deployed, and it seems to be okay (but so was the one that came in with the openstack-on-lxd bundle)
<beisner> holocron, in our CI, not rebooting rmq
<holocron> okay, any idea what might've gone wrong here? https://gist.github.com/vmorris/402e946bbf8d82c1e46e1c2123d29c7e
<holocron> beisner I spoke too soon, simple deploy failed in the same manner, as did a scale of the original unit
<holocron> https://gist.github.com/vmorris/4020f3299134e4e8a287e233e3d18dac
<magicaltrout> clearly too much beer
<holocron> magicaltrout: impossible
<magicaltrout> well not "too much beer", but "too much beer... to remember the conversation properly"
<holocron> lol that's definitely possible
<lutostag> cory_fu_: with the charm build thing I ran into yesterday... this fixes it for me, https://github.com/lutostag/charm-tools/commit/b41f5a584809f547adfb0db917d5e9a2cc909500
<lutostag> (trying to run the charm-tools make test, but it keeps falling over, not due to me I believe)
<lutostag> (although I was abusing the wheelhouse -- for application rather than charm deps, so went ahead and made my own instead)
<cory_fu_> lutostag: The problem with using 'download' instead of 'install' is that I don't think it's available with the pip version in trusty.
<lutostag> cory_fu_: ah, ok, I'll keep playing with it then
<cory_fu_> lutostag: We may just have to put a condition on the series, though.  But I would appreciate seeing that tested on trusty
<cory_fu_> kwmonroe: https://github.com/apache/bigtop/pull/137 updated
<kwmonroe> thanks cory_fu_!
<lutostag> anybody know about juju storage (and how to use it in 1.x)?
<lutostag> for instance, I have a postgresql charm that theoretically accepts storage, and its already deployed, how would I add storage to it?
<lutostag> s/charm/unit
<lutostag> rick_h_: ^^ who is storage-knowlegeable?
<bdx> lutostag: if/when you find some answers, will you put them on blast?
<lutostag> bdx: yeah I'll submit an askubuntu.com for sure
<cory_fu_> kwmonroe, petevg: You guys notice this item in the RC3 announcement?  "* LXD containers now have proper hostnames set"
<petevg> cory_fu: awesome! I'm gonna fire off a test of the hadoop bundle against localhost :-)
<petevg> cory_fu: sadly, it looks like our problem might not be fixed. Got a suspicious failure in my logs: http://paste.ubuntu.com/23289933/ (This is from one of the hadoop slaves, when deployed against lxd containers on xenial.)
<kwmonroe> petevg: what in the heck is unallocated.barefruit.co.uk?  is that really the name you get from running 'hostname' on that container?
<petevg> kwmonroe: that's what was in the logs ...
<kwmonroe> cory_fu_: remember how yesterday i was giving you grief about the slave unit status message being wrong because of the spec match?  well, that was true, but you were right(er).  when a charm is undergoing a long hook (like install) before -joined, the other side won't know it's .joined yet :(
<petevg> I tore down the container.  Will try again in a bit, and poke at it some more.
<cory_fu_> kwmonroe: Yeah, I knew that long-running hooks would block the .joined, but the spec issue has potential to make it inaccurate even longer.  Anyway, we were both right
<kwmonroe> cory_fu_: can you think of a way to detect a unit's relations without relying on the states being set?
<cory_fu_> kwmonroe: No.  Before the -relation-joined hook fires, I don't think there's any possible way for the charm to know about the relation.  I don't think even relation-ids would work
<PCdude> hi all :)
<PCdude> I have a fresh xenial (16.04, server version) install in a VM
<PCdude> I want to try out JUJU (yes never used before) so I followed the following instruction
<PCdude> https://jujucharms.com/docs/devel/getting-started
<PCdude> when I type "groups" it does not show me the LXD in the list
<PCdude> and when typing newgrp lxd it gives me "group lxd does not exist"
<cory_fu_> PCdude: You might have to log out and back in, or try the `newgrp lxd` command to refresh in-place
<PCdude> cory_fu_: tried both, I restarted and tried the "newgrp lxd"
<cory_fu_> PCdude: Can you confirm that lxd is installed with, e.g., `dpkg -l lxd`?
<cory_fu_> It should have been brought in as a dependency of Juju, though
<PCdude> cory_fu_: it indeed is not installed, and as u said I thought that was automatically installed
<PCdude> but apparently not haha
<PCdude> manual install?
<PCdude> sudo apt install lxd?
<cory_fu_> Yep
<kwmonroe> PCdude: what does 'juju version' say?
<PCdude> cory_fu_: lxd is present now lets continue the install and see what JUJU is capable of thanks
<PCdude> kwmonroe: let me check
<PCdude> 2.0-rc3-xenial-amd64
<kwmonroe> ok - that's good PCdude.  just making sure it was of the 2.0 flavor
<PCdude> kwmonroe: yeah, I was aware of the 2.0 version. I added the PPA and check with "apt-cache" that the right version was being installed
<kwmonroe> cool PCdude.. strange that it didn't bring lxd in as a dep
<PCdude> apart from the fact that it is solved now, how can this happen?
<holocron> I always have to "usermod -aG lxd holocron" and relog to pick up the change
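[ed: the usual sequence when the group is missing, assuming lxd simply wasn't installed:]
    dpkg -l lxd                  # confirm whether lxd is actually present
    sudo apt install lxd
    sudo usermod -aG lxd "$USER"
    newgrp lxd                   # or log out and back in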
<PCdude> personally, I think its vmware. I had strange problems with ESXI before, maybe something strange happened there
<PCdude> I used their preseed option for a change, not gonna do that again...
<holocron> oh, you didn't have LXD installed, just catching up :D
<PCdude> holocron: haha np
<PCdude> any amazing cool bundles or charms I have to check out as a newbie? :D
<holocron> what do you want to do?
<holocron> ghost is an okay one to poke at
<PCdude> well I have openstack running on ubuntu, but I want to tweak some more. Since it uses JUJU maybe something in that field?
<PCdude> holocron: from what I can see, is that something similar to wordpress?
<holocron> yeah, though it's all node.js
<holocron> PCdude, you might look into the openstack-on-lxd bundle if you're wishing to dig into openstack, juju and lxd
<PCdude> ah cool, yeah let me check that out
<holocron> use these instructions; http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
<PCdude> btw, I have the strong feeling that the autopilot function (from landscape) just uses the openstack bundle from JUJU. is somebody here that can confirm that?
<PCdude> holocron: thanks, will look into that link
<holocron> that's the impression i got as well PCdude, though i haven't actually used autopilot myself
<PCdude> holocron: well it is quick and painless, but dont start asking questions about changing something then u are stuck in landscape. easy=10 customization=2 , but I think I can go a level lower to JUJU and configure there, but not sure if autopilot and landscape like that very much
<PCdude> also for me the 10 licenses are good enough, but for  something bigger u have to pay alot
<holocron> PCdude I see. I don't use landscape either ^^ You'll find the openstack charms have a wide variety of configuration options, probably most of what you'd want to tune
<PCdude> holocron: I was thinking about the following for my openstack install: rn, I have 2 machines, which is of course way too little to run the whole infrastructure well, but I was planning on placing it on those 2 and then, as I add machines in the future, slowly moving services off those 2 onto the new ones with JUJU. Until I have, lets say, 5-6 servers running without anything virtual. Would that be possible? and I mean moving it live, so
<holocron> i suppose it's possible PCdude.. having your machines in a MAAS cluster might make it simpler, though I'm no expert in the matter
<PCdude> holocron: me neither :) , we will see. what do u use JUJU for?
<holocron> pcdude: just starting to explore it myself, but specifically i'll be using it for openstack as well, considering moving some of my workload deployment automation to charms
<PCdude> haha cool, u have a working install with something else now? or this is going to be ur first try?
<holocron> i have a few ibm cloud manager with openstack installations but they're rapidly going away
<holocron> a few custom rolled installations of mitaka being maintained by the team too at the moment
<PCdude> so u are moving away from something I guess,  what did u use before?
<holocron> as i said: ICM
<holocron> maybe the server reboot caused the message to get lost, i'll resend
<holocron> i have a few ibm cloud manager with openstack installations but they're rapidly going away
<PCdude> ah check I see
<PCdude> I got the last one
<holocron> if you're asking about what i'm using for hypervisor, it's KVM
<holocron> but really what i need is a good way to install openstack that's easy and repeatable.. juju is really attractive to me for this purpose
<PCdude> yeah, I am using ESXI right now, but wanna use openstack with KVM
<holocron> and to restate again, i'm interested in migrating some of my workload deployment automation into charms
<PCdude> amen... I so agree on that point. When I first opened the docs I thought, how is anyone in this world even capable of doing this onces haha
<holocron> so that's on the radar
<PCdude> holocron: have u looked at something else besides openstack?
<PCdude> kubernetes maybe
<holocron> kubernetes doesn't really map across to openstack imo
<holocron> it's more akin to juju or docker, i have looked at docker for some things (hyperledger specifically)
<PCdude> yeah I agree, but there have been some projects that it kind of makes it that way but with containers. I have seen some videos that makes it in the grey area.
<PCdude> uhm ok, let me check hyperledger
<holocron> oh hyperledger is not for workloads ;) it's smart contract stuff
<PCdude> haha, I was already reading and thought uhm, that cant be right
<PCdude> some servers are seriously restarting here
<holocron> they're rolling the whole freenode network
<PCdude> yeah, I guess
<PCdude> uhm, what is a "model" in JUJU?
<roadmr> PCdude: A Juju model is an environment associated with a controller
<roadmr> PCdude: https://jujucharms.com/docs/2.0/models
<PCdude> roadmr: yeah, I read that too, but it is the place where the charms are fired up?
<PCdude> its like a subnet for charms?
<PCdude> so when I type "juju list-controllers" I see 1 machine running. is that the controller of the models?
<holocron> PCdude: generally yep
<holocron> oh wait, i think i understand your question, you have 1 under "machines" ?
<holocron> try "juju status" and "juju show-machine 0" -- assuming that machine 0 is the one machine listed
<PCdude> holocron: yeah, I think that is what it is
<PCdude> deploying ghost rn
<magicalt1out> https://lists.apache.org/thread.html/7b215705d3b222336d3989782722715e43af31af720f69db7ad19911@%3Cdev.mesos.apache.org%3E i mention juju and lxc and suddenly the thread goes dead
<magicalt1out> its like they know!
<rick_h_> magicalt1out: heh, did you cause trouble?
<magicalt1out> well its a bit weird when people ask for the use case, i give it and get crickets
<rick_h_> you broke some rule of fight-club
<magicalt1out> silly people, why do people just want application containers
<magicalt1out> this maybe true rick_h_
<hml> good afternoon - i have a juju charm whose deploy failed because the machine didn't spin up.  remove-application isn't working...
<hml> and it's causing havoc: ERROR could not filter units: could not filter units: unit "juju-gui/0" has no assigned machine: unit "juju-gui/0" is not assigned to a machine (not assigned)
<hml> how do i get rid of it?  please
<rick_h_> hml: can you mark it resolved and then remove it?
<hml> rick_h_: ERROR unit "juju-gui/0" is not in an error state
<rick_h_> hml: and when you do remove-application juju-gui it gives ou the filter erro?
<hml> rick_h_: no - it gave no error - i got the filter error when trying to do a juju status of a different application.
<rick_h_> hml: maybe try juju retry-provisioning X where X is the machine that should have come up that failed?
<rick_h_> hml: and see if you can get the application up
<rick_h_> hml: and then cleanly remove it
<hml> rick_h_: the machine never came up, how do i give the retry-provision a machine?
<rick_h_> hml: well is there a machine record that would show in status that it tried and is marked with a failure status?
<rick_h_> hml: how many machines are currently deployed? Maybe try to start with what we think it might be. 0, 1, etc?
<rick_h_> hml: other idea might be to try to juju add-unit juju-gui
<rick_h_> hml: and see if you can get it to come up with a unit and help clear up the error space there
<hml> rick_h_: i'm at 16 machines...
<rick_h_> hml: k, might try the juju retry-provisioning 17 and see what it does
<rick_h_> hml: or try the add-unit trick and see if that gets things to a good place
<hml> rick_h_: ERROR cannot add unit 1/1 to application "juju-gui": cannot add unit to application "juju-gui": application is not alive
<hml> rick_h_: retry-provisioning on machines 16-21 - machine not found.
<rick_h_> hml: ok
<rick_h_> hml: add-unit trick is all I can think of from there then. Will need to file a bug and see if we can repro and make it more resilient
<hml> rick_h_: so is there any way around the filtering message?  i need to resolve issues on other charms, this is standing in the way
<rick_h_> hml: can you do juju status without any filters?
<rick_h_> hml: assuming that's also not working?
<rick_h_> hml: maybe juju status --format=yaml and see if it bypasses any of the filter work?
<hml> rick_h_: status without filters is working
<rick_h_> hml: ok, so there's nothing I can think of to bypass an error using filters. Just ways around it by using json output plus a tool like jq to get the filtering done outside of juju
<hml> rick_h_: format didn't work - i'm looking for more detail since nova-compute/0 machine is having troubles with the open vswitch
<rick_h_> hml: grep the unit log file and see there? THings like status changes/etc should be in the log
<rick_h_> hml: so juju ssh nova-compute/0 and then view /var/log/juju/unit-xxxxxx.log
<hml> rick_h_: got it
<rick_h_> where the xxxx is something like nova-computer-0
<hml> rick_h_: my favorite: '"leadership-tracker" manifold worker returned unexpected error: leadership failure: lease manager stopped'
<rick_h_> hml: :/
<hml> rick_h_: is there a bug for that one, or am i just lucky to keep hitting it?
<rick_h_> hml: https://bugs.launchpad.net/juju/+bug/1616174 ?
<mup> Bug #1616174: Juju agents cannot start: failed to start "uniter" manifold worker: dependency not available <sts> <juju:Incomplete> <https://launchpad.net/bugs/1616174>
<rick_h_> hml: has a potential thing to fix it. Sounds like we didn't get a good repro steps though if you have more details to add to the bug that'd be helpful
<hml> rick_h_: i've seen that bug, i wasn't sure how to run those steps - i found another solution, but perhaps short term: http://www.astokes.org/juju/2/common-errors
<hml> kwmonroe: the units in question are running 2.0rc1 or were created with
<kwmonroe> optimism reinstated
<kwmonroe> hey rick_h_, can i specify bundle tags?  i'm pretty sure this result only comes up because 'bigtop' is in the name:  https://jujucharms.com/q/bigtop?type=bundle
<kwmonroe> i'm looking for tags simliar to what you'd do in a charm's metadata.yaml
<kwmonroe> but i don't see where i'd specify that for a bundle
<bdx> cmars: I hooked it up -> https://github.com/cmars/juju-charm-mattermost/pull/2/files
<holocron> i give up https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271
<mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
<holocron> i know i saw in the docs how to update a unit from local charm source, now i'm missing it
<holocron> any help?
<kwmonroe> holocron: juju upgrade-charm <foo> --path /path/to/new-charm
<kwmonroe> cory_fu: tvansteenburgh:  is tests.yaml deployment_timeout in seconds or minutes?  juju-deployer -t is seconds (https://github.com/juju-solutions/bundletester/blob/610801149ec214966b80e2766ca8760eb29a6f9e/bundletester/spec.py#L137) but the bundletester readme comment for this option is minutes.
<tvansteenburgh> kwmonroe: seconds
<kwmonroe> thx
<holocron> kwmonroe thanks
<tvansteenburgh> kwmonroe: i don't see where it says minutes in the readme?
<kwmonroe> tvansteenburgh: https://github.com/juju-solutions/bundletester/commit/c88015b890cfd17fa10e375e3e394e57758b9e7d
<kwmonroe> tvansteenburgh: https://github.com/juju-solutions/bundletester/pull/62
<tvansteenburgh> kwmonroe: thanks
<kwmonroe> np
#juju 2016-10-08
<holocron> is there a way to list a local charm in a bundle file?
<holocron> answering my own q, just need to specify the path
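[ed: the shape of a local-charm entry in a 2.0-era bundle; the name and path are made up:]
    applications:
      rabbitmq-server:
        charm: ./charms/rabbitmq-server   # path relative to the bundle file
        series: xenial
        num_units: 1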
<junaidali> #juju
<PCdude> hi all
<PCdude> I have installed JUJU on my PC alongside with some charms
<PCdude> no problems there
<PCdude> I use the LXD option and everything is running locally
<PCdude> when I type "juju expose application" that seems to work, but the address is an LXD container address and thus not accessible without modification. Now, I can go in LXD and make some firewall changes and stuff, but what is the recommended way to deal with this, maybe I am overlooking some command from JUJU to do this
<PCdude> holocron: maybe an idea?
<magicalt1out> juju expose does nothing on lxd PCdude
<magicalt1out> all i do it iptables routing
<magicalt1out> sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 81 -j DNAT --to 10.106.143.112:80
<PCdude> magicalt1out: uhm ok, I was looking at this link: http://askubuntu.com/questions/749063/reach-lxd-container-from-local-network
<PCdude> which basically gives the same option as u said. Are there any downsides to bringing the LXD containers directly to the LAN? IP addresses on the LAN would not be a problem here
<PCdude> ?
<PCdude> It feels like removing an extra layer of complexity where it is not needed imho
<jrwren> PCdude: the recommended way is to use lxd only for dev/test of charms and bundles. It isn't meant to be used for production in this way.
 * magicalt1out uses it in production in that way ;)
<magicaltrout> but I agree its called lxd local for a reason
<jrwren> not much downside to bridging LXD to lan, other than you now depend on LAN DHCP. if you are on laptop and move, your containers won't be accessible.
<PCdude>  yeah good point haha, well it is in a test environment
<PCdude> jrwren: I see ur point there
<jrwren> if you do that, beware the solution in that askubuntu answer won't quite work.
<jrwren> juju uses its own lxd profile. it doesn't inherit from default.
<jrwren> you'll need to lxc profile edit juju-default
<PCdude> jrwren: which was exactly my next question :)
<PCdude> is that an easy task?
<jrwren> it is very easy
<jrwren> you just need to know that you need to do it.
<PCdude> ah ok check, will keep that in mind
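[ed: a sketch of the profile tweak jrwren means, assuming a LAN-facing bridge named br0:]
    # lxc profile edit juju-default
    devices:
      eth0:
        type: nic
        nictype: bridged
        parent: br0   # your LAN bridge; the name is an assumption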
<PCdude> so another thing is the way ipv4 and ipv6 are handled. I am not sure if this is a LXD or JUJU thing
<PCdude> I was not aware that JUJU 2.0 installed the web GUI automatically, so I installed it the old way with a charm
<PCdude> now, I have 2 questions. Why has the charm I installed been given an ipv6 address and how can I change this to an ipv4 address?
<PCdude> and where can I see the address where the "automatic" gui is located?
<jrwren> charms don't get ip addresses, machines/containers do. it is normal for it to have an ipv6 address. are you saying it does not have an ipv4 address also?
<jrwren> `juju gui` will tell you the gui address
<PCdude> juju gui gives me an ipv6 address, should that address be available from the network or is that also a container? (not talking about the charm I installed myself, but the automatic one)
<PCdude> the one I installed myself is only having an ipv6 address, but not sure. How can I check? I looked with juju status and juju status juju-gui and both only give me an ipv6 address for the juju-gui
<jrwren> it depends on the address. if it starts with fe80 its a local address
<jrwren> it sounds like the DHCP for LXD isn't working, or something.
<jrwren> Did you setup that bridge to LAN already?
<PCdude> starts with fddc so should be accessible, but it is not..
<PCdude> https://jujucharms.com/docs/stable/getting-started
<PCdude> that is what I followed to install JUJU
<PCdude> there is a bridge in "ifconfig", I am 95% sure there is
<jrwren> if you run `lxc list` what ip addresses does it show?
<PCdude> both v4 and v6
<PCdude> those names are the same as in juju status
<PCdude> let me check real quick
<jrwren> so... if you correlate the v6 address to the v4 address you should be able to use that to view the gui.
<PCdude> jrwren: yeah, was trying exactly that with the command of magicaltrout earlier, but does not seem to work
<PCdude> ok, it took me some time, but I got the problem
<PCdude> I was mapping with iptables to eth0, but of course that is not the case anymore in 16.04
<PCdude> mapped it to ens33 (or any port for that matter) and it worked
<PCdude> thanks anyway!!
<PCdude> so I see this login screen for the first time :)
<PCdude> what is the difference between login and login with usso
<PCdude> and what credentials should I use?
<PCdude> the normal credentials for login to the machine does not work
<jrwren> PCdude: juju gui --show-credentials will tell you the password
<PCdude> awesome and what is USSO?
<jrwren> Ubuntu Single Sign On.
<jrwren> I don't think that works yet. We probably should not show that button if it won't work.
<PCdude> strange, I thought that was simply called SSO, but it seems like some sort of LDAP service to separate JUJU and the user accounts?
<jrwren> SSO can be run by ANYTHING. Your google account is SSO to many google related services.
<todin> hi, is there somewhere an example of how to use install-keys in a juju bundle? somehow my yaml is not valid
<junaidali> todin: I'm not sure about bundles, but if you want to set the install-keys config, you can do that with $ juju config <charm-name> install-keys="$(cat <install_key>)"
<todin> junaidali: thanks for your help, I found my solution here https://bugs.launchpad.net/charm-helpers/+bug/1515699
<mup> Bug #1515699: configure_sources fails badly with misformatted configuration. <Charm Helpers:Triaged> <https://launchpad.net/bugs/1515699>
<junaidali> thanks todin and mup: I didn't know about that
#juju 2016-10-09
<bbaqar> junaidali
<junaidali> Hey bbaqar
<bbaqar> junaidali did your xenial charms make it to the recommended charms
<junaidali> I haven't got a reply from james yet about creating a new xenial branch on launchpad
<bbaqar> jamespage
<bbaqar> you might have to open a bug for that. Wait for jamespage to comment first.
<junaidali> bbaqar: sure
#juju 2017-10-03
<erik_lonroth> rick_h: Did you write this? http://mitechie.com/blog/2017/9/28/learning-to-speak-juju
<zeestrat> erik_lonroth: Yeah, that's Rick.
<erik_lonroth> its super!
<rick_h> erik_lonroth: thank you
<rick_h> erik_lonroth: zeestrat please make sure to share it along.
<ed____> hi there :) does anyone know the right way to get rid of a machine that's in error state where provisioning can't match a suitable machine because of constraints? I want to back out of adding one machine manually! (oops)
<rick_h> ed____: you should be able to juju destroy-machine --force
<ed____> thank you Rick, I'll give it a try
<ed____> yay :) it worked. Thank you Rick
<rick_h> np ed____
<skay> does anyone have a wsgi-layer that can deal with python 2 wsgi apps? There's https://github.com/canonical-webteam/layer-wsgi for python3
<skay> I like the idea of using a wsgi layer, but I can't upgrade my application right now
<rick_h> skay: hmm, might be tough. I'm not sure if bdx has anything as he was doing some django bits but I don't know if it was py2 or py3
<rick_h> skay: can you crib enough to make the layer-wsgi py2 compatible?
<skay> rick_h: I was wondering about that. would anyone other than me find it valuable?
<rick_h> skay: always default to "yes" on that one :P
<skay> haha
<rick_h> skay: you're not so special as to be the only one doing it hehe
<rick_h> just the only one asking about it out loud
<skay> rick_h: I'm currently upgrading a django project from 1.6 to 1.11 so I have enough to deal with without also getting it on python 3 /o\
<skay> :(
<skay> ;_; cry
<rick_h> skay: hah, yea the MAAS folks I know had to put some big time into that move
<skay> rick_h: oh hey, has any of them written an epic of their journey past 1.6 and deployment gotchas?
<skay> I've been thinking about how to do that, and wondering if I should just make a one-off deployment to get from 1.6 to 1.7 and subsequently do normal stuff
<rick_h> skay: hmm, not sure. I thought I've got something tickling my brain but can't find anything off the top of my head.
<bdx> skay, I've traced those trails ... it really helps if you are familiar with the big changes going into 1.11 (e.g. write/deploy some small app in 1.11 before you do the port, just so you are familiar with the changes)
<bdx> also
<bdx> skay, https://github.com/jamesbeedy/layer-django-base/blob/master/templates/django-gunicorn.service.tmpl
<skay> bdx: I'm thinking for now I might get to 1.8 first, even though it ends next year
<bdx> i use that^ works for python 2 or 3
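[ed: not bdx's actual template, just the general shape of such a systemd unit, with made-up paths:]
    [Unit]
    Description=gunicorn for the django app
    After=network.target

    [Service]
    WorkingDirectory=/srv/app
    ExecStart=/srv/app/venv/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8000
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target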
<skay> bdx: thanks
<rick_h> skay: yea, I'm pretty sure maas took that route
<skay> I'll take a look
<rick_h> bdx: http://mitechie.com/blog/2017/9/28/learning-to-speak-juju for ya
<rick_h> please spread far and wide :)
<rick_h> will get it on insights in another couple of hours but I think it looks better on my blog :)
<bdx> skay, heres the simple handler that writes it out https://github.com/jamesbeedy/layer-django-base/blob/master/reactive/django_base.py#L237,L247
<skay> also, I inherited a juju1 charm written in bash that was from ages of yore
<bdx> rick_h: that is beautiful
<skay> so basically I would like to dump it
<rick_h> skay: ouch yea that's not really a migration but a rewrite there
 * skay nods, sadly
<bdx> skay: you found an artifact for the juju museum?
<skay> if not that, then the juju hospice
<bdx> skay: lets get you a fresh new juju 2 version and send that bad boy to the grave
<skay> bdx: small comment. you have mutable args, and that is a gotcha. https://github.com/jamesbeedy/layer-django-base/blob/master/lib/charms/layer/django_base.py#L121
<skay> http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments
<bdx> that function is a dangler, I should remove it
<bdx> but thanks for the heads up on that
<skay> yeah, I noticed it has a note for not used
<skay> wasn't sure if it will be used in the future, etc
<skay> anyway, scrub elsewhere too. I just noticed it in render_settings_py
<skay> that gotcha bit me a while back
<bdx> Im not sure I see the issue
<bdx> trying to grok that ^ link now
<bdx> ahh I see "This means that if you use a mutable default argument and mutate it, you will and have mutated that object for all future calls to the function as well."
<skay> I think you may be okay since you are not mutating secrets unless I missed it
<skay> but it's better never to use a mutable arg
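[ed: the gotcha in miniature; the names are illustrative:]
    def render_settings(secrets=[]):      # the [] is created once, at def time
        secrets.append('generated-key')
        return secrets

    render_settings()   # ['generated-key']
    render_settings()   # ['generated-key', 'generated-key'] -- state leaks across calls

    def render_settings_safe(secrets=None):
        secrets = [] if secrets is None else secrets
        return secrets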
<bdx> yeah, I see that
<bdx> ok, well nice little wake up call
<bdx> thanks @skay
<bdx> I can go fix all my code now
<bdx> I'm sure I have that everywhere
<bdx> shoots
<bdx> lol
<skay> haha
<rick_h> OSS ftw :)
<bdx> skay: thanks again for pointing those out, I fixed up those in the django-base-layer
<skay> bdx: this is handy. I'm learning about actions.
<bdx> skay, yeah that update-app action comes in really handy for sure
<skay> bdx: also, I may have a small PR to tidy something. for gen_short_sha_for_current_comment you could use git rev-parse --short HEAD, so perhaps you could merely pass a short=True to the other method. any objection/comment?
<skay> bdx: it's not critical.
<bdx> skay: those django layers are new, I'm working on them daily - would gladly accept pull requests
<bdx> skay: theres a lot of ground to cover with django .... its so open ended that it makes making an opinionated charm difficult
<bdx> but my basic goal is to come up with some best practices for making it more extensible
<bdx> the settings files are one thing
<bdx> the way I have a separate file for each thing, then the templates/settings.py.tmpl has to track those imports
<bdx> at the bottom
<bdx> I think a better way would be to write all juju generated settings files to a "juju_django_settings/" directory
<bdx> then glob all the files *.py in that dir
<bdx> or something to that effect
<bdx> skay: I'm working on getting a protocol going for things like that
<bdx> but yeah, send me some prs
<skay> bdx: do you need the long git sha for any reason? perhaps just use the short one all the time
<bdx> yeah, so that's kind of a personal thing we can remove
<bdx> I used to like to have a REVISION file in my project root
<bdx> that contained the full sha
<bdx> it just kind of became a practice of mine and worked its way into the charm
<skay> I've got a thing that runs bzr revno (actually it's in that old charm)
<bdx> I dont really use it anymore, and its just hanging out, so we can nix it
<skay> so I know what you mean I think
<skay> hey, it's informative
<skay> at least keep the log line
<zeestrat> Hey rick_h, what's the status on getting bash autocompletion for snapped juju?
<rick_h> zeestrat: not sure. balloons  any insight?
<skay> rick_h: oh hey, you still around? how bad is this approach? I don't like it but if something like this works to support python2 use then I might suggest it https://github.com/codersquid/layer-wsgi/commit/448ba8d669f87328ee36512ba57c9cb2b3d86713
<skay> rick_h: also, charmhelpers contrib has python package support, but with a limited set of options, so I can see why this layer is not using it
<balloons> zeestrat, it's already there
<balloons> zeestrat, it should be working for you
 * rick_h uses zsh and missed it was working doh
<balloons> rick_h, ahh.. well snap is landing support for all shells bash completion, we can 'upgrade' the snap to include support once it lands
<zeestrat> balloons: Is there a trick to it with environment variables or something? Can't seem to get it to work on a clean 14.04 or 16.04 vm after "sudo snap install juju --classic". Tried logging in and out. snap version on both is 2.27.6
<balloons> zeestrat, bash completion is a bit wonky honestly
<balloons> zeestrat, I have had to source the completion script manually or ensure bash-completion is installed and running
<balloons> it's /etc/bash_completion and /usr/share/bash-completion/completions/juju
<zeestrat> balloons: Ah, installing bash-completion and relogging did the trick. That should be documented somewhere.
<balloons> right, if you don't have bash completion, it won't work :-)
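[ed: what worked for zeestrat, plus the manual fallback balloons mentions:]
    sudo apt install bash-completion   # then log out and back in, or:
    . /usr/share/bash-completion/completions/juju   # source the snap's script directly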
<zeestrat> rick_h: Regarding https://github.com/juju/charmstore/issues/774, adding some extra-info to the charm store seems to be easy enough, but are there any fields that will be displayed on jujucharms.com? There used to be a revision box but I can't find that anymore.
<rick_h> zeestrat: no, only the homepage and bugs url are shown.
<zeestrat> rick_h: Alright. Is that a hard rule or can I put in a request somewhere?
<rick_h> zeestrat: you can ask, what are you thinking?
<zeestrat> rick_h: Was thinking about a commit tag or hash as mentioned in the issue.
<zeestrat> rick_h: If I'm reading the issue correctly, that seems to be what the submitter is asking for to.
#juju 2017-10-04
<hallyn> arosales: hey - question.  https://jujucharms.com/docs/2.1/help-vmware  says vmware hardware version 8 or higher is required for juju+vsphere.  But juju uses a hardcoded ubuntu.ovf using 'vmx-10', which is a lot newer (and newer than most of my boxes)
<hallyn> arosales: who would know whether the v10 is actually required, or whether that's a bug in the code, and maybe just maybe how to specify a different version (different ovf) without re-compiling juju :)
<thumper> hallyn: axw probably knows
<thumper> hallyn: btw, don't use 2.1, use 2.2
<hallyn> thumper: hm actually i'm using 2.0.2
<thumper> hallyn: oh for the love of god please upgrade
<hallyn> guess i'm on xenial.
<hallyn> why isn't the archive uptodate :)
<thumper> reasons...
<hallyn> ppa:juju/ppa ?
<thumper> I think it is ppa:juju/stable
<thumper> but let me check
<thumper> yeah, the stable one
<hallyn> ok thanks, will do that now.  But having looked at github I assume it won't help with my problem :)
<thumper> wallyworld: do you know about the vsphere reqs?
<arosales> ah, thumper thanks for the reply
<wallyworld> arosales: hallyn: IIANM we do only support vmx-10; i think it's to do with API compatibility but not sure
<hallyn> :(
<hallyn> that makes me very sad
<hallyn> wallyworld: in that case, https://jujucharms.com/docs/2.1/help-vmware is a bug iiuc :)
<hallyn> but so, this lab i have can't be used this way.  that kinda sucks.
<hallyn> wallyworld: who would know for sure ?  (i.e. what features are missing etc)
<wallyworld> hallyn: that would be andrew but he's away for another day or so; i'll be asking him when he gets back
<wallyworld> i must admit, i thought we did support v8
<wallyworld> but it could be recent sdk changes forced us to 10
<wallyworld> but i'll need to check
<wallyworld> it's certainly in the code that it must be 10
<wallyworld> but the reason i do not know
<hallyn> wallyworld: ok, thanks.  Hm, I guess I emailed andrew earlier today, as he was the one who pushed that ovf file
<wallyworld> ok, i'll follow up with him when he's back in the office
<hallyn> thanks- \o
<wallyworld> we do now make use of datastore apis to manage image caching so it may be related to that
<wallyworld> i'd hope though we could still support v8
<hallyn> Hm, created a 'datacenter' with just the single machine that's new enough.  Error changed - http://paste.ubuntu.com/25671564/
<hallyn> heading to sleep - thanks for the confirmation so far :)
<narinder> Hi Leann
<ed___> hi there juju people! I'm looking at running canonical kubernetes. Can you recommend how I'd make a clean setup we can store the juju bundle [or whatever the right terms would be] in git, and, we'd like to add a docker image repository to the setup. We're experimentally trying this in MAAS at the moment.
<tvansteenburgh> ed___: the bundle is just yaml, so you can store it anywhere
<rick_h> ed___: howdy, so the k8s page walks through install steps: https://jujucharms.com/canonical-kubernetes/ and it'll work with maas. As far as storing a bundle in git or the like I don't think you'll need to. Check out http://mitechie.com/blog/2017/9/28/learning-to-speak-juju for 'terminology' introduction.
<ed___> thank you tvansteenburgh & rick
<tvansteenburgh> ed___: regarding docker registry in your k8s: https://medium.com/@tvansteenburgh/private-docker-registries-and-the-canonical-distribution-of-kubernetes-31cb05a9b61c
<skay> the resources doc mentions using a resource per dependency, but I have ~90 python packages I use. should I bundle that many or have 90 resources?
<tvansteenburgh> skay: bundle them. i assume this is so you can install offline?
<skay> tvansteenburgh: yes
<tvansteenburgh> i certainly wouldn't have 90 resources, lol
<skay> haha me either. I hadn't noticed that bit in the docs until today
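[ed: one way to roll ~90 python deps into a single resource; the resource name 'wheels' is an assumption:]
    pip download -r requirements.txt -d wheels/   # fetch everything for offline install
    tar czf wheels.tgz wheels/
    juju attach myapp wheels=./wheels.tgz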
<skay> are nested dictionaries okay for yaml configs?
<skay> py yaml is okay with them
#juju 2017-10-05
<skay> I have some resource questions
 * skay has multiple questions and is now stuck deciding which one to ask first
<skay> I want to do upgrades and installs based on the availability of a resource. I have something that checks for the existence or change of a resource and sets a state
<skay> I can have @when_not checking that state for me, so that it triggers an install or upgrade
<skay> but, I can also have that check inside of the upgrade hook, because in the docs, the resource section says that juju attach triggers the upgrade
<skay> which is a better design or are they the same, or is there another approach you'd suggest?
<skay> another question. why isn't there a hook specific to juju attach?
<skay> also, it would be handy to have a decorator for when a resource changes. I could use @when_file_changed for this, but it means I would need to pass a callback that gets the resource of interest and returns its path. Is that a discouraged approach compared to something that sets a flag?
<skay> also, I can check this in the src code later, but if someone knows off the top of their head it would be useful... The docs mentioned that juju attach triggers the upgrade-charm hook, but do not specify whether juju deploy in conjunction with a resource flag does
<skay> s/flag/option
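[ed: roughly the check-and-set-state pattern skay describes, with made-up resource and state names:]
    import hashlib
    from charmhelpers.core import hookenv, unitdata
    from charms.reactive import set_state

    def check_resource():
        path = hookenv.resource_get('app-tarball')   # False if nothing attached
        if not path:
            return
        digest = hashlib.md5(open(path, 'rb').read()).hexdigest()
        if digest != unitdata.kv().get('resource.app-tarball.md5'):
            unitdata.kv().set('resource.app-tarball.md5', digest)
            set_state('resource.app-tarball.changed')   # handlers react to this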
<bdx> "another question. why isn't there a hook specific to juju attach?" - let me see if I can find the mailing list post I sent out for this, possibly you can put heat on it
<bdx> skay: https://lists.ubuntu.com/archives/juju/2017-September/009515.html
<skay> bdx: ack. just followed up
<bdx> thanks
<skay> whoops, I replied all and I'm not subscribed to juju-dev
<bdx> skay: I just got your email
<bdx> you might get a bounce back from juju-dev though
<skay> bdx: I kept it pretty short
<skay> I did
<bdx> skay: I feel like the decorator would be serving a similar purpose as a upgrade-resource-myresource hook though right
<skay> yes
<bdx> we just need something decoupled from the upgrade-charm hook
<skay> I agree with that part
<skay> I'm not sure whether hooks are preferred over reactive flags
<skay> s/reactive flags/reactions?
<bdx> skay: yeah so, using the reactive framework, you set and unset "flags" that cause your "reactive handlers" to fire when the sufficient rules are met
<skay> correct
<bdx> skay: hooks are the underlying mechanism (legacy juju 1 uses these) that we are abstracting away by using the reactive framework on top
<skay> correct, so I thought it followed that the recommended way for charmers going forward is to avoid using hooks and to use flags
<skay> and the reactive decorators
<bdx> entirely
<skay> this is why I want a decorator like resource-changed
<skay> or for a resource being attached
<bdx> skay: I see what you are saying .... my suggestion of having an upgrade-resource-myresource hook is dealing with legacy-ish juju
<bdx> skay: so possibly I misspoke, it should be a upgrade-resource-myresource flag that is set upon resource upgrade/attach
<skay> I see. btw, wouldn't it be simpler to have an upgrade-resource hook rather than creating hooks on the fly?
<bdx> not sure I follow
<skay> I think I misunderstood. do you want @upgrade_resource("myresource") ?
<bdx> I'm chasing the capability to handle a resources' lifecycle ops on an per-resource basis
<bdx> skay: @when('upgrade-resource-myresource')
<skay> or for juju to discover the resources from the charm and create @upgrade_resource_myresource() based on the resources listed
<skay> that makes more sense than what I thought you meant
<bdx> ha
<bdx> awesome
<bdx> yeah, something similar to https://github.com/jamesbeedy/interface-db-info/blob/master/provides.py#L9
<bdx> how the interface hooks are generated there
<Durgeoble> is it normal that juju gets stuck in 'waiting for address'?
#juju 2017-10-06
<parlos> Good Morning Juju!
<erik_lonroth> bdx: I'm trying to put down all the notes we have done so far on the django charming part and perhaps it could end up in a blog post when it starts becoming more robust. Its such an awesome work youre doing.
<erik_lonroth> bdx: I shared the google doc with you for now and will perhaps move it to github or something later.
<junaidali> Hi guys, with bundle can I specify an application to be deployed in lxd on a machine where another specific application is deployed?
<junaidali> for example, if I want to deploy keystone in lxd on a machine where glance is deployed
<rick_h> junaidali: sure thing, as long as the bundle is deploying glance as well. You can specify the to: and reference things in that way. Are you using
<rick_h> junaidali: this bundle does it by a machine number (a new machine that is) https://api.jujucharms.com/charmstore/v5/kubernetes-core/archive/bundle.yaml
 * rick_h looks for a closer example
<junaidali> rick_h: I'm looking for placement directive like the one mentioned here -> https://jujucharms.com/docs/2.0/charms-bundles
<junaidali> "to: lxd:wordpress/0"
<junaidali> so it will create an lxd where ever wordpress/0 exists
<junaidali> but it is not working for me
<rick_h> junaidali: yea, sec looking
<junaidali> `juju deploy` gives me this error -> ERROR The charm or bundle "test.yaml" is ambiguous.
<junaidali> sure
<rick_h> junaidali: k, so try this:
<rick_h> junaidali: nvm, ok so yea I'd suggest specifying a machine and then using the to: lxd:0
<rick_h> junaidali: and then having glance to: 0
<junaidali> rick_h: Yeah, i think I have to go that way. Thanks for the help.
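[ed: the placement rick_h lands on, sketched as bundle yaml; charm URLs abbreviated:]
    machines:
      '0':
        series: xenial
    applications:
      glance:
        charm: cs:glance
        num_units: 1
        to: ['0']
      keystone:
        charm: cs:keystone
        num_units: 1
        to: ['lxd:0']   # a fresh container on glance's machine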
<bdx> erik_lonroth: awesome, thanks!
<bdx> erik_lonroth: yeah .... I will be documenting the layers soon .... sorry ... dragging on the most important part
<junaidali> can we force a charm to a different series in a bundle?
#juju 2017-10-07
<ybaumy> is there updated documentation for vsphere in the 2.3 tree
<ybaumy> or are the features that are documented the same
<ybaumy> meh
<pmatulis> ybaumy, is there something you think is missing?
#juju 2017-10-08
<chamar> hi folks, is there any issue with running the Juju Controller + MAAS on the same box?
<ybaumy> pmatulis: i don't know if there is something missing. i just read on the mailing list that the vsphere provider is being reworked. so i thought there are changes but i couldn't find any change in the documentation. that's why i asked
<boolman> hey, what is the firewall requirements for juju ? I'm having some problems I think is related to my fw
<chamar> is there a place we can make suggestions for Juju?
<rick_h> chamar: file a bug ?
<rick_h> boolman: in what way? It needs to reach streams.canonical.com for some software bits, api.jujucharms.com for public charms in the store.
<chamar> rick_h, Yeah..it's not a bug, merely a suggestion / nice to have.
<rick_h> chamar: that's what the wishlist tag on the bugs are for
<rick_h> chamar: definitely go write it out and speak up if there's something on your mind
<chamar> ha cool. Wasn't aware. will do them.  Thx.
<chamar> haha! You sound like a psychologist ;P
<rick_h> ouch, damn...no need to get rude about it :P
<chamar> in a friendly manner.  ;)
<chamar> now I might have hit a real bug.  oops
<pmatulis> ybaumy, understood. i'll ask around
<Fallenour> O/
<Fallenour> Need to ssh into a juju instance but cant. Keep getting public key denied on ssh juju keystone/0 ; Any thoughts?
<pmatulis> Fallenour, are you the controller admin?
<pmatulis> and you meant 'juju ssh keystone/0' ?
<pmatulis> Fallenour, reading: https://jujucharms.com/docs/stable/users-auth
<Fallenour> @pmatulis Yes. That was the command I executed. Ill check out the doc. Its not responding to any juju commands of any kind, but it is up
<pmatulis> Fallenour, what other juju commands fail?
<Fallenour> Hostname, ifconfig, apt get update, the most basic of commands
<pmatulis> Fallenour, i don't get it. is that on a juju machine?
<Fallenour> @pmatulis Yes.
<pmatulis> Fallenour, so you *can* ssh to the machine?
<Fallenour> And I dont either
<Fallenour> I can ssh to the "machine" but not the juju container
<pmatulis> ohh
<Fallenour> My keystone service is what I need to restore so I can clear its error messages and my heat relation issue
<Fallenour> I already added the relations with it, but i'm thinking since it reads as "error: lost" it's not working
<Fallenour> And by it, ii mean keystone
<pmatulis> yeah. i don't think i can help. perhaps we need a special section in the docs on ssh'ing to containers
<pmatulis> it *should* all be abstracted
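[ed: for reference, both forms normally reach a containerised unit when its agent is healthy; the machine id here is hypothetical:]
    juju ssh keystone/0   # by unit name
    juju ssh 0/lxd/3      # by container machine id, as shown in 'juju status'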
<Fallenour> So Im guessing blow it away for now?
<Fallenour> And rebuild?
<pmatulis> you might want to deploy a simpler installation (that at least uses LXD) and try to ssh to a container. i'm not sure what though
<pmatulis> what are you using to deploy?
<Fallenour> Generic openstack xenial for keystone
<pmatulis> ok, but how are you installing? most people use an installer of some kind
#juju 2019-09-30
<jhobbs> \/wg 8
<thumper> chunky review for someone: https://github.com/juju/juju/pull/10660
<nammn_de> achilleasa: im looking into https://bugs.launchpad.net/juju/+bug/1812980 whether it has been resolved or not. How do you e.g. force a restart of a controller?
<mup> Bug #1812980: try again should not be logged as an ERROR <bitesize> <juju:Fix Committed by achilleasa> <https://launchpad.net/bugs/1812980>
<achilleasa> nammn_de: I've tracked down the PR for this: https://github.com/juju/juju/pull/9695. Take a look at the QA steps for one way to force a controller restart. TLDR version: use lxc to restart the controller box
<nammn_de> achilleasa: thanks, good to know!
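[ed: the TLDR restart, assuming a lxd-hosted controller; the container name pattern is as 'lxc list' shows it:]
    lxc list | grep juju      # find the controller machine, e.g. juju-xxxxxx-0
    lxc restart juju-xxxxxx-0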
<stickupkid> I had to split our install-deps in the make file as we don't need snap go, just the mongo deps. https://github.com/juju/juju/pull/10667
<stickupkid> rick_h, already checked it and ticked it, anyone give it a quick glance
<stickupkid> macos github actions coming soon :D
<stickupkid> jam, why did we never use/setup MONGO_PATH for juju?
<jam> stickupkid: not sure what you mean. Have a separate variable than PATH for finding a "mongod" ? or ?
<stickupkid> how much do i hate osx atm
<stickupkid> rate 0-10
<stickupkid> about 100000
<hml> stickupkid: oh no!
<stickupkid> you can't symlink to /usr/bin
<stickupkid> :(
<stickupkid> SIP blocks me
<hml> stickupkid: what version…. i just did it
<stickupkid> El Capitan
<stickupkid> or what ever macOS-latest is tbh
<hml> stickupkid:  soft link….  i'm on Mojave
<hml> stickupkid: i linked /usr/bin to my home dir.  perhaps it's where you're trying to put the link?
<stickupkid> hml, run this "csrutil status"?
<hml> stickupkid: enabled
<stickupkid> hml, just doing "ln -s /usr/local/mongodb/mongod /usr/bin/mongod" with and without sudo
<hml> stickupkid: oh… i was doing it the wrong way… just a sec.  no mongo
<hml> stickupkid: now i see what you're seeing….  why can't i change the path?
<stickupkid> https://support.apple.com/en-us/HT204899
<hml> stickupkid: poopy.
<hml> stickupkid: … so mongodb install didn't put a copy in a bin dir the path already uses?
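Background: with SIP enabled, /usr/bin is read-only even for root, while /usr/local/bin is writable and already on the default PATH. A hedged sketch of the workaround, assuming mongod lives at the path mentioned above:
    sudo ln -s /usr/local/mongodb/mongod /usr/local/bin/mongod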
<pmatulis> i just had a bad experience with storage on 2.7-beta1. dunno if that's expected
<pmatulis> essentially, ceph-osd could not talk to storage volumes
<bdxbdx> yo
<bdxbdx> how's it going?
<bdxbdx> is discourse giving others issues too?
<pmatulis> works here
<rick_h> yea went to reply but he's gone
<pmatulis> rick_h, did you see my comment re storage on the beta?
#juju 2019-10-01
<hpidcock> thumper:  https://github.com/juju/juju/pull/10660 LGTM
<thumper> hpidcock: thanks, still fixing some bugs in it though, just worker tests
<hpidcock> happy to look at it again when you're done, but it worked well for me on lxd and k8s, though I accidentally deployed something into the controller model and didn't get a log file, which looks to be what you wanted
<thumper> hpidcock: yeah, that was planned
<thumper> I'm just sitting here a little confused
<thumper> I had test failures which I didn't understand
<thumper> it seems like the test is flakey
<thumper> but I can't tell why
<thumper> I don't like flakey tests
<hpidcock> flakey tests make me sad
<babbageclunk> I love 'em!
<thumper> although the tests now pass...
<hpidcock> babbageclunk: keeps things exciting I guess :P
<babbageclunk> problem solved!
<thumper> they are also insanely long tests
<thumper> I think there may be a bug in the reporting of the tests
<thumper> that would explain the weird test results
 * thumper pokes
 * thumper waits for things to pass
<hpidcock> https://github.com/juju/testing/pull/147 small one for someone
<nammn_de> someone want to take a quick look?  https://github.com/juju/charm/pull/293, one-liner
<stickupkid> na
 * stickupkid just fat fingered that
<nammn_de> timClicks: thx for the quick look
<timClicks> nammn_de: np only 21:50 here
<stickupkid> jam, this is interesting, I was playing around with github actions for windows tests, but we fail because we don't have a windows 2019 server for the os/series package. That's fine, the issue is that the windows tests aren't run, we should fix that! https://github.com/juju/os/pull/13
<jam> stickupkid: any other gaps than just 2019? There seems to be some big gaps in the dates
<stickupkid> jam, i'm unsure, probably
<stickupkid> jam, weirdly not - 2012, 2016, 2019
<stickupkid> jam, seems like they have long LTS support
<stickupkid> jam, they do have a lot of annual releases, but we probably don't support that directly
<stickupkid> jam, this is my reference - not the best tbh - https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_versions
<ociuhandu> On windows, different versions should be treated differently, i.e. one cannot consider Windows 2019 Datacenter the same as Windows 2019 Standard
<ociuhandu> imho there should be different entries for those in the definition (e.g. win2019dc / win2019std / win2019hv and so on)
<ociuhandu> also, there is windows 2019 (which is the LTS version) and there are the frequent releases that get only 6 months support and are numbered based on year and month of release (i.e. 1809 or 1903 and so on)
<stickupkid> we should probably window the windows versions like the linux versions tbh
<stickupkid> ociuhandu, we only use these to track whether we support the OS; we don't deploy to windows directly and only the juju client runs on the OS, so I don't think we need to change.
<stickupkid> manadart, i'm back on cmr-migrations, i'm unsure about the attach method from your playground example
<nammn_de> timClicks: still around?
<stickupkid> manadart, it inverts the dependency which causes us not to be able to move the migration to a new folder
<stickupkid> manadart, https://play.golang.org/p/GC1tbWd272e <- playground in question
<manadart> stickupkid: Can we not export the parts that allow attachment?
<stickupkid> manadart, this is the problem "func (m RemoteEntitiesMigration) Attach(mig *ExportStateMigration) {"
<stickupkid> manadart, the ExportStateMigration knows too much info i.e. *State
<stickupkid> manadart, if we make it an interface, that means we then have to make shims for complex types, which we may have too
<stickupkid> manadart, let me have a think
<stickupkid> manadart, we're sooooo close
<manadart> stickupkid: OTP with Atos. Let's HO in a mo'.
<stickupkid> manadart, sure
<nammn_de> hey guys, are we/the customers using `juju list-clouds --format yaml` in an automated way? I'm thinking of adding an additional top-level key for bug: https://bugs.launchpad.net/juju/+bug/1826957
<mup> Bug #1826957: YAML produced by list-clouds is not acceptable to update-cloud <bitesize> <papercut> <update-clouds> <juju:Triaged by nammn> <https://launchpad.net/bugs/1826957>
<nammn_de> else i could change the yaml parser from `juju update-cloud ..` to accept values without the highest key being `clouds: `
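The shape mismatch the bug describes, roughly (a hedged sketch, not exact output):
    # `juju list-clouds --format yaml` puts cloud names at the top level:
    localhost:
      type: lxd
    # while `juju update-cloud` insists on a `clouds:` wrapper:
    clouds:
      localhost:
        type: lxd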
<manadart> stickupkid: I am good to go here if you've time.
<stickupkid> manadart, sure
<stickupkid> do it
<manadart> stickupkid: Hanging out in daily.
<stickupkid> nammn_de, isn't it a case of you can't update localhost because it's built in
<nammn_de> stickupkid: for me it works to update localhost if i modify the yaml file. Update in general does not work. Try to follow the steps from here: https://bugs.launchpad.net/juju/+bug/1826957 maybe we mean something different
<mup> Bug #1826957: YAML produced by list-clouds is not acceptable to update-cloud <bitesize> <papercut> <update-clouds> <juju:Triaged by nammn> <https://launchpad.net/bugs/1826957>
<achilleasa> jam: is there a recommended pattern for extracting lengthy code from inside buildTxn blocks?
<jam> achilleasa: its just a function, so have helper functions. They tend to be called "doStuffOps"
<achilleasa> jam: makes sense. thanks
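A minimal Go sketch of that pattern (all names hypothetical): the buildTxn closure stays thin and the op construction lives in a separate "...Ops" helper, in the style of gopkg.in/mgo.v2/txn:

    package widgets

    import "gopkg.in/mgo.v2/txn"

    type widgetDoc struct {
        Id   string `bson:"_id"`
        Name string `bson:"name"`
    }

    // addWidgetOps keeps the op construction out of the retry closure.
    func addWidgetOps(w *widgetDoc) []txn.Op {
        return []txn.Op{{
            C:      "widgets",
            Id:     w.Id,
            Assert: txn.DocMissing,
            Insert: w,
        }}
    }

    func buildTxnFor(w *widgetDoc) func(int) ([]txn.Op, error) {
        return func(attempt int) ([]txn.Op, error) {
            if attempt > 0 {
                // on retry, re-read state here and decide whether to abort
            }
            return addWidgetOps(w), nil
        }
    }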
<nammn_de> rick_h: got a min to talk about the bug you just replied?
<rick_h> nammn_de:  otp atm, will ping when free
<nammn_de> Okay, im grabbing lunch we can talk after daily if that works for you :)
<nammn_de> rick_h: ^
<nammn_de> what is the https://github.com/juju/juju/blob/9039181e06f6988b1c62d3e71d28fbc031c0d32f/juju/osenv/old_home.go#L19 "juju dir" used for? "home/<name>/.juju"
<nammn_de> stickupkid: im going to add an acceptance test into /tests/suites . Should i add this under the folder " cli" as this relates to the commandline?
<stickupkid> nammn_de, yeah, we can always move it
<stickupkid> "coverage: 100.0% of statements"
<stickupkid> this is a nice thing :D
<nammn_de> stickupkid: how do you run the suite? Feel like I am overlooking something important. `./main.sh test_static_analysis_go` does not work for me. Did we document that somewhere?
<stickupkid> nammn_de, yeah, because you're running a task there not a suite
<nammn_de> stickupkid: oops, meant that. just wanna run a task but it seems not to work for me because of permissions
<stickupkid> nammn_de, run `./main.sh static_analysis test_static_analysis_go`
<nammn_de> but thats how I run a task, right?
<stickupkid> also ./main -h
<stickupkid> ./main.sh -h
<stickupkid> see the "Examples" section
<nammn_de> ahh  you need to define the <folder> and <file>
<stickupkid> nammn_de, yeah, similar to how other test things work
<stickupkid> nammn_de, go test <folder> -check.f <test>
<nammn_de> stickupkid: got it
<stickupkid> nice
<nammn_de> stickupkid: any readme/quick function to create a model? I could only find destroy_model in the examples
<stickupkid> nammn_de, juju add-model :D
<stickupkid> nammn_de, ho?
<nammn_de> stickupkid: oh makes sense :D Thought maybe we did add some convenience things
<nammn_de> stickupkid: sure!
<nammn_de> stickupkid: probably faster to get into
<stickupkid> nammn_de, yeah, just use normal juju cli
<hpidcock> https://github.com/juju/juju/pull/10669 PR for the person who reads this :) It's a short one.
<timClicks> urgh why do I get notifications?
<hpidcock> timClicks: thanks :D
#juju 2019-10-02
<bdx> well that was a journey ... finally got my web irc back up
<bdx> updated thelounge snap and charm along the way in case anyone is interested https://snapcraft.io/theloungeirc & https://jaas.ai/u/omnivector/thelounge
<timClicks_> thumper: I've been thinking a little bit about the pinned repositories feature that is presented at https://github.com/juju
<timClicks_> imo we should only have github.com/juju/juju there.. and provide some sort of index in our documentation that explains the rest
<timClicks_> there's also an email address listed that's out of date: juju-dev@lists.ubuntu.com
<timClicks_> I would also like to create a juju-quickstart repo that contains working tutorials
<bdx> +1 ^^^
<thumper> timClicks_: unless we pin some of the more reusable repos there:
<thumper> juju/errors
<thumper> juju/ansiterm
<thumper> juju/loggo
<thumper> juju/schema
<thumper> timClicks_: quick start repo sounds good too, what format were you thinking for tutorials?
<timClicks_> those repos are also good candidates
<timClicks_> ideally, we would have something like a series of Jupyter notebooks..
<hpidcock> timClicks_: other things of note, the https://github.com/juju page doesn't link to jaas.ai || discourse.jujucharms.com or even have a description
<hpidcock> also the  "juju" text in the logo is superfluous, I think the normal logo would work better
<wallyworld> thumper: small PR to adjust action tags to numbers https://github.com/juju/names/pull/98
<thumper> wallyworld: looks like we can blame andrew two years ago
<thumper> no idea why we hadn't caught this earlier
<thumper> nor why it isn't a problem in more places
<thumper> I feel like I'm missing something
<thumper> but I don't know what
<thumper> I can see where we go from a dict with ID keys
<thumper> to a slice of slices
<thumper> and it is iterating randomly over the dict
<thumper> but I can't see why this isn't causing more problems
<wallyworld> hmmmm
 * thumper has a thought
 * thumper checks code some more
<AvondZon> can anyone point me to the anbox-cloud bundle?
<wallyworld> babbageclunk: since thumper sucks https://github.com/juju/names/pull/98
<babbageclunk> wallyworld: looks super-delish
<wallyworld> babbageclunk: \o/ ty
 * thumper is writing a test for equality...
<thumper> I think I worked it out
<thumper> we are losing the space ID
<thumper> never matches
<manadart> thumper: Probably me. What's up?
<thumper> manadart: otp, chat later
<manadart> Anyone for a simple review? Just a dep update: https://github.com/juju/juju/pull/10671
<hpidcock> manadart: done
<manadart> hpidcock: Ta.
<nammn_de> where do we save the charm archive again?
<stickupkid> manadart, CR these changes https://github.com/juju/juju/pull/10662
<stickupkid> this seems to be failing a lot atm "TestCreateVirtualMachineRootDiskSize"
<stickupkid> i'm having a look
<stickupkid> intermittent  failure running - go test -v ./provider/vsphere/internal/vsphereclient/... -check.v -check.f=clientSuite.TestCreateVirtualMachineRootDiskSize -count=100
<manadart> stickupkid: OTP. Will look in a mo'
<stickupkid> manadart, ta buddy
<nammn_de> Any fast way to get the charm version of a charm (beside mongo db shell)?
<ventura> I live in Brazil and Canonical's Juju is completely unknown to DevOps folks and developers here
<ventura> I am new to DevOps, so I would like to know if someone may help me to identify
<AvondZon> can anyone point me to the anbox-cloud bundle?
<ventura> 1. What is the product niche (if compared to Ansible, Chef, Circle CI, Puppet, Terraform)
<ventura> 2. What is its main competitor?
<zeestrat> Hey ventura, recommend checking out the forums discourse.jujucharms.com which answers some of your questions.
<rick_h> AvondZon:  hmmm, not showing up in the charmstore. Have to check with morphis
<rick_h> ventura:  howdy, so our main niche is operating complex software over time. e.g. we really take seriously longer term operations, upgrade strategies, etc.
<rick_h> ventura:  I think as far as competitors it's kind of a lot. Folks not using Juju are generally stitching together their own chain of tools, processes, etc.
<rick_h> morphis:  AvondZon was asking about getting pointed to the "anbox-cloud bundle"
<ventura> zeestrat: thx, i am gonna read it right now
<ventura> rick_h: that the point i was arguing with my manager and CTO
<morphis> AvondZon: I guess you read https://ubuntu.com/blog/running-android-in-the-cloud-with-amazon-ec2-a1-instances, if there is interest please reach out through the linked contact form
<ventura> they hired a company that is writing its own solution of scripts for managing/creating/etc. containers over terraform
<ventura> the idea was having several scripts to quickly create a distributed cloud platform
<ventura> and (of course) publish as an OSS solution to the world
<ventura> when i saw the presentation i said "hey, have you met juju?"
<ventura> rick_h: you mentioned " I think as far as competitors it's kind of a lot. Folks not using Juju are generally stitching together their own chain of tools, processes, etc."
<ventura> As far as I understood, Juju is more like an admin/script anabolic solution we can use over standard cloud technologies (Ansible, Chef, Circle CI, Docker, Kubernetes, Puppet, Terraform, etc)
<ventura> Several years ago I worked creating IBM Linux VM from scratch using shell-script and Python.
<ventura> Maybe I am really charmed by Juju due to my previous mindset
<rick_h> ventura:  sec, on the phone so slow to respond
<ventura> rick_h: take your time :-)
<AvondZon> morphis: yes saw the vid on juju discourse. Thank you (& rick_h), will do.
<rick_h> ventura:  so the charms are like the scripts and I would compare to things like k8s operators or some newer ideas in ansible to have more complex sets of scripts for things
<rick_h> ventura:  but the idea around juju of modeling things at a higher level, really getting away from anything that requires getting into the details of network addresses, etc is the key to juju
<nammn_de> stickupkid: you mind taking a look at that pr? https://github.com/juju/juju/pull/10673
<stickupkid> sure
<ventura> rick_h: that is one of the reasons why the contractor we hired was justifying writing a whole new devops framework
<ventura> "but the idea around juju of modeling things at a higher level, really getting away from anything that requires getting into the details of network addresses, etc is the key to juju"
<Mudchains> Hi all, i am trying to deploy a "Charmed Distr. of Kubernetes" with conjure-up, deploying to an on-premise vSphere environment. Even when I enter the IP address I am getting ([Errno -2] Name or service not known). The IP address and DNS of vCenter are reachable.. anyone a clue?
<Mudchains> Ok, I added https:// to the hostname/IP. When I enter it without https it's working.
<stickupkid> Mudchains, seems like a bug, as we should at least have a better error message
<Mudchains> stickupkid yes, that would have saved me a lot of time haha :D
<stickupkid> Mudchains, https://bugs.launchpad.net/juju/+bugs
<Mudchains> stickupkid i will make a report later :)  thanks for the link (don't have an Ubuntu One account etc, and worktime is over)
<nammn_de> stickupkid: thanks for the review and patch! Just regarding the added line "test_local_charms", where does bash get this function from? https://github.com/nammn/juju/blob/b56727e88030282bdc15a5602abe8d259dba69c9/tests/suites/cli/task.sh#L13
<stickupkid> nammn_de, https://github.com/nammn/juju/blob/b56727e88030282bdc15a5602abe8d259dba69c9/tests/suites/cli/use_local_charm.sh#L63
<nammn_de> stickupkid: ah nvm the patch didnt change the name, i changed it again and then i thought maybe i was wrong and reverted that :D
<stickupkid> nammn_de, approved
<nammn_de> stickupkid: 🦸
<stickupkid> nammn_de, CR please https://github.com/juju/juju/pull/10675
<achilleasa> stickupkid: is it legal to include a dummy txn.Op in a transaction that only asserts something?
<stickupkid> achilleasa, yeah
<stickupkid> achilleasa, like, docMissing etc
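i.e. an op that sets only C, Id and Assert is a pure guard: it changes nothing but aborts the transaction if the assertion fails. A hedged Go sketch (collection and id are examples, mgo/txn import assumed):

    // guardAppExists returns an assert-only op: no Insert/Update/Remove.
    func guardAppExists(appId string) []txn.Op {
        return []txn.Op{{
            C:      "applications",
            Id:     appId,
            Assert: txn.DocExists,
        }}
    }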
<achilleasa> stickupkid: I think I am doing something wrong with the ids because I get an excessive contention error
<stickupkid> linky dink?
<achilleasa> stickupkid: https://pastebin.canonical.com/p/D3DNqzYCwk/
<achilleasa> stickupkid: ahhh crap... field has no hyphen
<stickupkid> d'ho
<pmatulis> can one start a machine that is in the 'down' state?
<rick_h> pmatulis:  so down just means the agent isn't running so it might be that you can ssh to the machine and start the jujud
<rick_h> pmatulis:  or if the machine is off, then no, you'd need to turn it on from the cloud
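On juju 2.x machines the agent runs as a systemd unit named after the machine, so the first case looks roughly like this (machine number is an example):
    juju ssh 3
    sudo systemctl start jujud-machine-3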
<pmatulis> rick_h, thank you
<rick_h> pmatulis:  saw your storage issue. I've not had a chance to look into it yet. Did you say that it "works" under 2.6?
<rick_h> pmatulis:  or just didn't work when testing 2.7 and not sure if it ever did but thought it should?
<rick_h> timClicks:  if you have time wanted to see if you were up to chat juju show for next week maybe
<pmatulis> rick_h, just 2.6.9.  it was the first time trying with 2.7. i just assumed it would work
<rick_h> pmatulis:  it does work with 2.6.9?
<pmatulis> rick_h, correct
<rick_h> pmatulis:  ok cool, did you file a bug? /me dbl checks
<pmatulis> rick_h, no. i thought it may have been expected at this time
<pmatulis> hence my question on irc
<rick_h> pmatulis:  no, a storage change like that sounds like a regression (though we've not hacked on storage so not sure why...) we need to look into before 2.7 beta
<pmatulis> rick_h, ok, i will file
<rick_h> pmatulis:  ty!
#juju 2019-10-03
<timClicks> thumper: PR up to document root-disk-source for openstack https://github.com/juju/docs/pull/3497
<pmatulis> interesting setup guys :) ^^^
<stickupkid> nammn_de, when you get a second, can you review this https://github.com/juju/juju/pull/10675
<stickupkid> can some CR this one? https://github.com/juju/juju/pull/10676
<achilleasa> stickupkid: looking
<achilleasa> stickupkid: done
<achilleasa> jam: or manadart can you please take a look at https://github.com/juju/juju/pull/10680?
<Mudchains> stickupkid I submitted a bugreport : https://bugs.launchpad.net/juju/+bug/1846487 :)
<mup> Bug #1846487: Conjure-up Kubernetes vpshere <juju:New> <https://launchpad.net/bugs/1846487>
<manadart> achilleasa: QA is good.
<pmatulis> so when a relation is missing 'juju status' shows a note about it. after adding the relation the note goes away. but if i remove the relation the note does not return. normal?
<stickupkid> pmatulis, probably, think i've seen that - is it right? not sure tbh
<rick_h> pmatulis:  which note do you mean?
<pmatulis> rick_h, stuff like "Missing relations: database"
<rick_h> pmatulis:  that's from the charm though right?
<pmatulis> rick_h, that's my understanding yes
<pmatulis> so "charm problem"?
<rick_h> pmatulis:  well just that yea, it's not re-outputting the status once it loses the relation. It's on the charm's logic to say "hey I want this" based on config/etc.
<pmatulis> makes sense
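Concretely, that status text is whatever the charm last passed to the status-set hook tool; re-emitting it when the relation departs is the charm's job, e.g. (a sketch of the hook-tool call):
    status-set blocked 'Missing relations: database'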
<timClicks> which tracker should feature requests be filed for github.com/juju/cmd/?
<babbageclunk> timClicks: it must be a feature request for juju right? The implementation might be a feature in juju/cmd, but there'll be some visible change in juju as a result.
#juju 2019-10-04
<thumper> simple review for someone... https://github.com/juju/juju/pull/10681
<Sou> Hey all, I am pretty new to the entire juju thing. We used juju charms to setup openstack
<Sou> Is there any way to regenerate a configuration file for a unit using juju?
<Sou> any help would be great
<thumper> Sou: I'm not sure I understand what you are asking for
<thumper> what sort of configuration file are you expecting or wanting?
 * thumper needs to head off for the week
<thumper> Sou: you might want to consider asking the question on our discourse (link in topic)
<thumper> https://discourse.jujucharms.com
<Sou> Ohh. Lemme elaborate what I meant. I had set up a vault unit (number of containers: 3). For vault HA to work, I had to add etcd (count 3) and easyrsa (count 1). I had to reOS the host machine which was running the easyRSA container. But that broke the etcd cluster. It was throwing a bad tls error. So I removed all etcd units, and readded them. But the
<Sou> vault configuration file (which uses etcd) has old details about etcd. So I was finding out if there is a way to regenerate the configuration files of a container via juju. Another option is to edit it manually. But then I am not sure if juju will create any issues
<thumper> Sou: if vault didn't update the config for the new etcd it smells like a bug in the vault charm
<Sou> ohh
<thumper> To get the right eyes on it, either file a bug in launchpad against the vault charm, or ask in discourse and I can tag the openstack charmers
<Sou> okay thanks
<thumper> juju doesn't hold the config of apps
<thumper> that is the responsibility of the charms themselves
<thumper> have a good weekend folks
 * thumper out
<Sou> Ohh okay. Please correct me if I am understanding it wrong. Juju charms are used to set up the application in containers. And after that, do the charms still keep an eye on the changes being made?
<Sou> also, does it mean we shouldn't manage anything inside the containers created by juju?
<babbageclunk> Sou: yes, the charm also manages the running application and lets you configure it using juju commands
<babbageclunk> Sou: In general, you shouldn't be changing things in the container  directly because then the charm might be out of sync with what you've changed.
<Sou> Okay. Thanks a lot babbageclunk. Wrt openstack nova-compute charm, there are many nova related configuration options which I can't see when I do a "juju config <app_name>"
<Sou> Is there any way to add such configuration options to the containers which run the unit
<babbageclunk> Sou: it might be that some of those are managed by the charm in response to other units being related to the application.
<babbageclunk> What kinds of options do you mean? (I'm not an openstack expert though)
<Sou> $ sudo juju config nova-compute  | grep instance_name_template~$
<Sou> My apologies for the typo
<Sou> one variable name is instance_name_template
<Sou> I can't modify that variable via charms
<Sou> in a big setup if I want to make sure such a variable is managed, I might have to integrate the containers (or units) created  by juju with ansible or puppet
<Sou> But then it would make the setup complex
<Sou> Is that a suggested way of doing things?
<babbageclunk> Sou: I don't think I understand what you're trying to do - you want to have Juju-created machines be managed with ansible or puppet? I'm not sure how that would work.
<Sou> Managing juju created machines with ansible or puppet came to my thought when I was not able to manage a config parameter of an application via juju.
<babbageclunk> Sou: I think people in #openstack-charmers would be able to help you with your instance_name_template question
<Sou> Thanks @babbageclunk I will post the same in that conf
<babbageclunk> Sou: I think I see what you mean - use ansible to make post-deployment changes to a unit to tweak that setting? I think it would be better to change the charm to expose the config you need.
<Sou> Yeah, I think making changes in charm will make things easier
<nammn_de> stickupkid: currently looking into the bug where, when a user calls "juju /foo", our code tries to create a fork and fails. You worked on the "similar" cmds last time. I could either return a proper error "file does not exist" or we could run your "similar" code again. What would you prefer?
<stickupkid> don't remember what I did, any pointers?
<nammn_de> https://github.com/juju/juju/blob/develop/cmd/juju/commands/plugin.go#L77
<nammn_de> stickupkid:
<nammn_de> stickupkid: ^ the code where, if a command is not found, you try to find the most similar command and return something like "foo does not exist, did you mean gui"
<stickupkid> nammn_de, yeah, that imo, but best to ask rick_h
<nammn_de> okey makes sense, rick_h: ^ above, but I will update launchpad to have it written down
<achilleasa> manadart: I have finished reviewing your bridge policy PR and will start the QA steps next
<manadart> achilleasa: Great; ta.
<stickupkid> achilleasa, thumper pointed out an issue with the introspection stuff https://github.com/juju/juju/pull/10682
<achilleasa> stickupkid: I have a question (see comment)
<stickupkid> achilleasa, responded
<achilleasa> stickupkid: are you sure that the command is interpreted as 'xargs "CMD" > out' instead of 'xargs "CMD > out"'?
<stickupkid> achilleasa, tested it locally :D
<achilleasa> bash or zsh?
<stickupkid> juju bootstrap lxd test
<stickupkid> juju enable-ha
<stickupkid> well "sh"
<stickupkid> let me triple check
<achilleasa> No you are actually right, you have to explicitly quote the commands to get the redirect bit for each command
<achilleasa> wait. let me doublecheck this :D
<achilleasa> yes, it works as you expect. Sorry for the confusion
<stickupkid> achilleasa, yeah don't worry, I also had to check
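The distinction being checked, in shell terms: an unquoted redirect applies once, to xargs itself, while a per-command redirect must be quoted into the command that xargs runs (illustrative only):
    ls | xargs cat > out                      # one redirect, handled by the outer shell
    ls | xargs -I{} sh -c 'cat {} > {}.out'   # one redirect per invoked command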
<stickupkid> nammn_de, updated per your comments https://github.com/juju/juju/pull/10675
<nammn_de> stickupkid: ensure will create a model in the bootstrapped controller as well, if it does not exist, right?
<stickupkid> sort of, it'll name the default model something else
<stickupkid> i'll add that
<nammn_de> stickupkid: 🦸‍♂️
<stickupkid> nammn_de, done
<nammn_de> stickupkid: approved
<stickupkid> achilleasa, regarding the series stuff, you can't use head as that's for 2.7, I believe you'll need to make a 2.6 branch and add the new macOS version there
<stickupkid> unless we back port what we did to 2.7 to 2.6, which i don't think is wise
<achilleasa> stickupkid: I think 2.6.10 will be the last release and the sha is already out. We can fix it for 2.7 though...
<stickupkid> achilleasa, sure sure
<achilleasa> they did merge the PR btw
<stickupkid> ah, that's fine then
<achilleasa> yeah, I saw the response this morning. Yesterday I was thinking that they would say we cannot accept this as the tests don't pass :D
<stickupkid> probably don't care for betas
<rick_h> nammn_de:  what's the link to the bug number again? I want to see the use case in the bug that caused folks to file it
<nammn_de> rick_h: https://bugs.launchpad.net/juju/+bug/1747040
<mup> Bug #1747040: Invoking juju with no verb but a path results in confusing error messages <bitesize> <cli> <ui> <juju:Triaged by nammn> <https://launchpad.net/bugs/1747040>
<nammn_de> rick_h: added the PR for more description. Can always update the PR. Just open for discussion
<rick_h> nammn_de:  that works for me, ty
<nammn_de> If thats the case, would love to get a review from someone. Pretty small one, fast to test rick_h stickupkid https://github.com/juju/juju/pull/10683
<manadart> I think we may have a problem here.
<manadart> If we need to run an upgrade to a version that causes a break in the allwatcher/modelcache code without upgrade steps having been run, we get into a deadlock.
<manadart> modelcache error-cycles getting a new watcher, API can't come up, machine agent can't connect to API. Upgrade does not run.
<gQuigs> does anyone have any tricks for referencing the machine_name in a juju run command?
 * gQuigs wants to do juju run --all "command --batch "machine_name"
<pmatulis> gQuigs, i guess you would need to translate machine name to machine ID prior to 'juju run'
<gQuigs> pmatulis: hoping to make it more like a one liner, so we don't have to give a script to customers..   command in question is sosreport :)
<gQuigs> I guess I could just hope the machine name has the id consistently in it..
<pmatulis> gQuigs, and i guess a "support charm" on all machines is too heavy right?
<pmatulis> but such a thing could be useful in other imaginative ways i suppose
<fallenour> hey, when watching juju status, what color settings should I use to ensure the colors stay the same with watch as they do with juju status
<fallenour> it all comes back grey when I do: watch -c color=auto juju status
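One combination that usually preserves the colours (hedged; exact flags depend on the watch and juju versions): force juju to emit ANSI codes even without a TTY, and tell watch to interpret them:
    watch -c 'juju status --color'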
<gQuigs> pmatulis: yea
<pmatulis> gQuigs, then you could have various actions ('sos-all', 'sos-maas', 'misc-support')
<gQuigs> sos already determines what plugins to run automatically :)
<pmatulis> bah :)
<gQuigs> I think I'll just use run and ask them to provide a list of names and machine ids
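A hedged alternative: `juju run` hands the command to a shell on each target, so a command substitution in single quotes expands remotely. Sketch only; the sosreport flags are assumptions:
    juju run --all 'sosreport --batch --name "$(hostname)"'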
<Fallenour> hey guys, I keep getting this message from juju: failed to start machine 1/lxd/3 (acquiring LXD image: no matching image found), retrying in 10s (10 more attempts)
<Fallenour> Im using maas, and I have all of the 18.04LTS images downloaded. Does anyone have any idea what causes this issue? One machine already built out 3 lxd containers, so I dont know why its giving this error.
<Fallenour> Im currently using juju version 2.6.9
<Fallenour> I found the issue. its a dns error with juju. where can I go or what can I do to fix... I think I fixed it.
<Fallenour> I updated the MAAS DNS addresses, and it ... nope, not fixed. How do I update juju dns info for lxd containers?
<Fallenour> WARNING juju.provisioner incomplete DNS config found, discovering host's DNS config
<Fallenour>   is the error I keep seeing in juju debug-log. I keep finding a lot of complaints about this, but no solution. Does anyone have an idea about a work around?
<Fallenour> Does conjure-up allow me to manage the systems built via juju, or do I need to manage those somewhere else?
<davecore> Fallenour: Once you deployed using conjure-up, the rest of the management is done with juju
#juju 2019-10-05
<Fallenour> Im having issues getting lxd containers to install in my juju instances. I see this has been a bug in the past from 2017, and appears to still be a problem. Can anyone make any recommendations?
<Fallenour> is anyone on
<Fallenour> Ive run into a pretty serious issue with juju, and I could really use some help. Its completely blocking me from being able to build or deploy anything.
<pmatulis> Fallenour, check logs, look for errors
<Fallenour> there arent any
<Fallenour> Thats the weirdest part. All it says is the same error over and over again, Image cant be found
<pmatulis> using MAAS as the backing cloud?
<Fallenour> I tried Juju, I tried conjure up, I even tried just deploying a basic lxd container. All failed, All same issue.
<Fallenour> yea, howd you know?
<Fallenour> I saw several tickets outstanding for the issue, but no solution
<pmatulis> go to MAAS and try to Deploy on a node. so no Juju, just MAAS
<Fallenour> I swear by the old gods and the new, I will get this working, and i will deploy it as part of my hybrid cloud solution I made, even if I have to fly to canonicals doorstep and kick it in, drag out the engineers, and build it in the parking lot.
<Fallenour> It deploys nodes just fine
<Fallenour> its only the containers
<Fallenour> juju is magic, I just want it to work :(
<Fallenour> they all fail when deploying containers
<pmatulis> enter the deployed MAAS node and try to use LXD manually
<Fallenour> bare metal, and 18.04LTS and 16.04LTS deployments of the base OS works fine.
<Fallenour> I tried that. It failed.
<Fallenour> same error when I use juju
<pmatulis> so it's not a Juju problem then
<Fallenour> its gotta be a juju issue
<Fallenour> when I ssh into the box, and use LXD locally, it works
<pmatulis> well that's what i asked you to do. you said it didn't work
<Fallenour> And all my systems deployed by maas outside of juju deploy LXD containers just fine
<Fallenour> oooo
<Fallenour> sorry
<Fallenour> yea, maas let me build them just fine
<Fallenour> its actually why I migrated the existing lxd systems into juju, because they worked.
<Fallenour> wonky with ceph, and wont work in clustered modes with ceph directly, but they do work, so I figured Id figure that issue out later.
<pmatulis> pastebin exactly what you're trying with Juju. exact commands and output. starting from the bootstrap command
<Fallenour> maas and juju clouds are already built, and work fine
<Fallenour> ill start with the add-model
<Fallenour> juju add-model cloud-000-000001
<Fallenour> then I do
<Fallenour> juju deploy cs:bundle/openstack-base-61
<Fallenour> it begins by deploying the 4 maas servers, installing ubuntu 18.04LTS, which succeeds and juju shows as deployed. Then it begins deploying services
<Fallenour> everything outside of containers deploys successfully, but anything in the containers, or the containers themselves, fails
<Fallenour> Ive got this error: WARNING juju.provisioner incomplete DNS config found, discovering host's DNS config
<Fallenour> but when I check maas, its got several DNS servers set
<Fallenour> and I can ping google, both from the device deploying to juju, and from the machine itself
<Fallenour> I tested that by doing: juju ssh 0
<Fallenour> and then pinging google.
#juju 2019-10-06
<pmatulis> Fallenour, did you read https://jaas.ai/openstack-base/bundle/61
<pmatulis> this bundle should be downloaded locally and edited before usage
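A hedged sketch of that workflow, assuming the charm-tools `charm pull` command is available (paths are examples):
    charm pull openstack-base ./openstack-base   # fetch the bundle locally
    # edit ./openstack-base/bundle.yaml for the local MAAS setup, then:
    juju deploy ./openstack-base/bundle.yaml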
<thumper> babbageclunk: https://github.com/juju/juju/pull/10681
