#juju 2011-09-23
<koolhead17> hi all
<rog> koolhead17: hi
<koolhead17> hi rog
<niemeyer> Good morning charmers and jujuers
<koolhead17> hi niemeyer
<rog> hey gustavo!
<niemeyer> koolhead17, rog: Yo
<rog> niemeyer: the gozk server-starting code is working, BTW. i'm just making the test code use it.
<niemeyer> rog: Woohay!
* niemeyer changed the topic of #juju to: http://j.mp/juju-eureka | http://juju.ubuntu.com/docs | http://afgen.com/juju.html | http://j.mp/irclog
* niemeyer changed the topic of #juju to: http://j.mp/juju-eureka http://j.mp/juju-docs http://afgen.com/juju.html http://j.mp/irclog
<rog> things are always easier when you're adapting some other code. for the difficult bits you do the same and blame the other code; for the rest you can simplify according to taste.
<niemeyer> rog: :-)
<rog> niemeyer: BTW in gocheck, is it ok if C.Log is called concurrently?
<niemeyer> rog: It is
<niemeyer> rog: I actually intend to support parallel tests as well
<niemeyer> rog: Well.. hmm.. I'm not entirely sure about that right now
<niemeyer> rog: It can be called concurrently across multiple tests, but I don't recall if it's atomic, which is likely your actual question
<rog> i'd just come to the opposite conclusion, yeah
<rog> i wanted to use it for the logging output of the server.
<rog> rather than sending to the rather obscure stdout.txt
<niemeyer> rog: I've used it that way often.. there's already a trivial logger wrapper in the suite
<niemeyer> rog: Hmm.. maybe not in that one
<niemeyer> rog: Hold on
<niemeyer> rog: http://paste.ubuntu.com/695648/
<niemeyer> rog: I use this in a few suites already
<niemeyer> rog: E.g. mgo.SetLogger((*cLogger)(c))
<rog> erm, that doesn't help the atomicity requirement
<rog> AFAICS
<niemeyer> rog: Sure.. this should be done within gocheck
<niemeyer> rog: But that's a red herring right now
<niemeyer> rog: Nothing will explode if lines are logged concurrently
<rog> it might, because bytes.Buffer won't like it
<niemeyer> rog: Heh
<niemeyer> rog: Just move on.. I'll address this in gocheck later
<rog> ok, i'll put a lock outside
<niemeyer> rog: Man.. ignore the problem until you have it.. :)
<niemeyer> rog: gocheck can easily be fixed
 * rog prefers to fix concurrency issues before they heisenbug
<rog> but actually we should be ok with GOMAXPROCS=1
<niemeyer> rog: I'm telling you I'll fix the problem in gocheck. You don't have a problem.
<rog> ok, cool
<niemeyer> hazmat: Just +1d machine-with-type, with a couple of trivial suggestions.. let me know what you think please
<niemeyer> jimbaker: ping
 * rog has some failing tests now, hurray!
<niemeyer> rog: Hah :-)
<niemeyer> rog: Good stuff
<rog> niemeyer: the test in TestRetryChangeFailsSetting is failing because RetryChange is returning a nil error. why should RetryChange be failing here?
<rog> (not yet familiar with zk retry semantics i'm afraid)
<niemeyer> rog: Have you changed it?
<niemeyer> rog: The documentation for the function is pretty good, btw
<rog> no, i haven't changed it AFAIK. and i just noticed the read-only thing going on above. hmm.
<rog> (that's one of only two tests that are failing now)
<niemeyer> rog: Yeah, but I suspect this is related to your changes in the error handling
<niemeyer> rog: Have you introduced those already?
<niemeyer> rog: How is the test failing?
<rog> i think you're right. RetryChange is succeeding inappropriately
<rog> the only other test that fails is TestExists, where Exists of "/zookeeper" is returning 0 children, not 1 as expected.
<jimbaker> niemeyer, hi
<avoine> hello
<niemeyer> avoine, jimbaker: Heys!
<niemeyer> rog: That could mean a difference between the zk versions.. perhaps
<niemeyer> rog: /zookeeper is a default node, so it's not a great assertion
<niemeyer> jimbaker: Hey!
<niemeyer> jimbaker: How're things going there?
<jimbaker> correct about /zookeeper, it's a very modest ping
<jimbaker> niemeyer, my daughter just turned 11, which is very cool
<rog> ok
<jimbaker> rog, if you are looking for a node that says something about juju, /topology or /initialized is probably what you want
<niemeyer> jimbaker: That's very cool indeed
<niemeyer> jimbaker: No, ECONTEXT :)
<niemeyer> jimbaker: He's working on gozk
<jimbaker> ah, one level down
<rog> ah, i did change RetryChange quite a bit, must have got my code transformation wrong...
<niemeyer> rog: Hmm..
<niemeyer> rog: I hope you're planning to provide several independent merge proposals..
<rog> niemeyer: gotcha!
<rog> niemeyer: i'm hoping that it's fairly obvious
<niemeyer> rog: Yeah, but several obvious things add up to a gigantic branch and several bugs
<rog> yeah, i should probably separate out the server stuff
<rog> but i couldn't test the other stuff until i'd put it in...
<niemeyer> rog: I wasn't expecting the things we talked about to break RetryChange, for instance
<rog> the error stuff changed things quite a bit
<rog> for instance, error comparison became simpler
<niemeyer> rog: Still, it sounds relatively trivial..
<rog> which allowed RetryChange to become simpler
<niemeyer> rog: Cool.. simpler is great
<rog> 6 lines shorter
<rog> (and easier to understand, i'd contend, but YMMV :-])
<rog> anyway, i've found the bug
<rog> (i think!)
<niemeyer> and broken!
 * niemeyer hides
<niemeyer> jimbaker: So, what have you been working on?
<rog> return nil -> return err
<rog> that was the bug
<niemeyer> rog: Was joking :)
<rog> anyway, the only failing test now is the number of nodes under /zookeeper
<rog> :-)
<rog> have you got an alternative suggestion for a test?
<niemeyer> rog: I'm pretty certain this is a bad expectation from the test
<niemeyer> rog: Let's set up a node of our own
<rog> yeah
<rog> i was gonna suggest that
<niemeyer> rog: Create /node and /node/subnode
<niemeyer> rog: and check that instead
<jimbaker> niemeyer, i've been looking at the lxc work and trying to get that to work for me so i can review some of the branches in queue. i've also been finalizing env-origin. i really need to get the unittests fix in as well
<niemeyer> jimbaker: Ok, great.. we don't have a lot of time as you know, so please leave the lxc stuff for hazmat and bcsaller and focus on the things on your plate at the moment
<jimbaker> niemeyer, ok
<rog> niemeyer: to be honest, NumChildren is checked elsewhere. we could just remove that assert from the TestExists test and all would be ok.
<niemeyer> rog: Works too
<rog> passes all tests. woo. time for lunch.
 * hazmat wakes up and grabs a coffee and finishes review on fwereade series branch
<niemeyer> hazmat: Morning!
<hazmat> jimbaker, if you could fix the help output for set config to be a one-liner when listing commands, that would be helpful
<hazmat> its currently the only one that spews on juju --help.. if you switch it to the other help value it only shows on juju set --help
<jimbaker> hazmat, ok
<hazmat> jimbaker, i'm curious what issues you had with the lxc stuff though, did you get a traceback?
<fwereade> cheers hazmat :)
<hazmat> the tests for it should run if you have libvirt-bin and lxc running
<hazmat> s/running/installed
<jimbaker> hazmat, sure, one moment
<niemeyer> Lunch here.. biab
<jimbaker> hazmat, here's the result from running one test - http://pastebin.ubuntu.com/695695/
<jimbaker> hazmat, originally i was trying this against bcsaller's branch for using apt-cacher-ng, but i then decided to step back and verify against trunk. i am running oneiric, beta 2
<jimbaker> to my eye, this sort of spew is basically a simple setup problem
<fwereade> hazmat: re [3], good point
<fwereade> hazmat: I guess it's not too much work to add a new series codename to the list every 6 months :)
<fwereade> hazmat: I remember that from the first review now, sorry I missed it before
<jimbaker> hazmat, also are there any docs on working w/ lxc?
<fwereade> hazmat: it does seem to me that the right point to validate the series is at environment load time, though, given that it's an environment property
<fwereade> hazmat: ...and I've suddenly changed my mind, mainly because I like using made-up series names in tests
<fwereade> hazmat: I think it's a matter of ensuring we wrap the errors nicely, to make it clear what the problem is, rather than restricting the acceptable strings and having to change them ourselves (even if it is only every 6 months)
<rog> niemeyer: i've sent the branch to you for review... i think. still getting to grips with the bzr/lp stuff.
<bcsaller> jimbaker: it looks like its trying to install the distro package of juju. I don't think its in the distro yet, right? That should be set to ppa for now I expect
<jimbaker> bcsaller, what do we need to do to config then for this? i'm just running the test against trunk
<bcsaller> jimbaker: juju/lib/lxc/tests/test_lxc.py   s/distro/ppa should only come up one place
<hazmat> fwereade, that sounds good, but its not clear how for example orchestra could validate that value
<hazmat> fwereade, for lxc the error from an incorrect value is coming from a script that's also setting up the container, and is pretty hard to distinguish a priori from other installation errors
<fwereade> hazmat: well, I think it comes down to whether or not cobbler has images with the appropriate codename available
<fwereade> hazmat: and that's something we can check
<fwereade> hazmat: (er, in theory, we don't do anything like that yet: it's oneiric or nothing)
<hazmat> fwereade, i guess the question is whether we can unambiguously report such an error, or whether we need to provide a list
<jimbaker>  bcsaller, does this mean that lxc testing currently has a dependency on juju being installed as a ppa? ie what is tested in the container is something outside of the branch and thus similar to what we see when executing on say ec2?
<fwereade> hazmat: I think we can, if nothing else because the supported-series list is a property of the cobbler setup
<bcsaller> jimbaker: there is an origin policy like what is being worked on elsewhere. distro, ppa and branch, where does the container get its version of juju? it defaults to distro on trunk but the package isn't in the distro (though it should be after today?)
<bcsaller> that was the only failure there though
<fwereade> hazmat: us having a list including (say) puissant doesn't help if the administrator hasn't made images available
<bcsaller> honestly the lxc-library branch does a bit more in the container and is more worth checking out than whats in trunk wrt lxc
<fwereade> hazmat: so I actually think we have to go by what we can get from the environment
<jimbaker> bcsaller, ok, i just wanted to follow up on what i was seeing. it now makes perfect sense and is even related to what i'm trying to finish up in terms of juju-origin
<jimbaker> (env-origin branch)
<bcsaller> jimbaker: yeah, the lxc stuff supports that in a very similar way to whats happening elsewhere. Honestly I hope we can just have cloudinit run the juju-create script as root in the container
<jimbaker> bcsaller, seems like the right place to do it
<bcsaller> jimbaker: its written in such a way that it sources its config from /etc/juju/juju.conf which can be written by any of the providers
<bcsaller> and then we'd have common image building code
<hazmat> fwereade, good point, sounds good
<fwereade> hazmat: somewhat academic until we can select series in orchestra at all, ofc :)
<jimbaker> hazmat, bcsaller - take a look at this lp:~jimbaker/juju/set-help-trivial - i tried to preserve the original text + some copy editing. but i think it could still stand some improvements. for example, we probably want concrete examples (or none at all)
<jimbaker> hazmat, bcsaller, in particular, it distinguishes between the longer help message for doing juju set --help, vs the summary help with juju --help
<bcsaller> jimbaker: agree that we need a better pattern for this
<jimbaker> also the spacing is currently inconsistent. probably best to use textwrap to control, but manually. this is only necessary if we want a list of examples, otherwise we don't need RawDescriptionHelpFormatter, which is all-or-nothing for the subparser
<jimbaker> bcsaller, agreed on that!
<SpamapS> Gooooood morning juju team
<SpamapS> Done yet? ;)
<SpamapS> hazmat: is bug 829880 still open? I thought we'd worked around it somehow.
<_mup_> Bug #829880: object store doesn't like key with '/'  <juju:Triaged by hazmat> <OpenStack Compute (nova):Confirmed> <ensemble (Ubuntu):New> < https://launchpad.net/bugs/829880 >
<hazmat> SpamapS, that should be closed
<hazmat> SpamapS, its not a bug against ensemble anymore, its still valid for openstack afaik
<SpamapS> hazmat: right so Fix Released or Invalid ?
<hazmat> SpamapS, just marked fixed release against juju, for the  ensemble package...
<koolhead17> ./
<niemeyer> rog: ping
<niemeyer> rog: Review for gozk delivered
<niemeyer> biab
<koolhead17> SpamapS: hello
<rog> niemeyer: where do i find your review?
<jimbaker> hazmat, niemeyer - i would like to verify that we want the user to be able to specify an arbitrary PPA? just wanted to understand comments re the logic being preferred in the trunk version of cloudinit
<rog> niemeyer: ah, slow mail fetch
<rog> thanks
<SpamapS> koolhead17: hey
<SpamapS> koolhead17: I noticed you applied for charmers.. but you haven't submitted any charms.
<koolhead17> SpamapS: i did one dotproject
<koolhead17> but i don`t think its of much use as its not getting done from the oneiric repo
<SpamapS> koolhead17: You need to submit a 'new-charm' bug with it so we can review it. :)
<koolhead17> SpamapS: yes sir!! :D
<koolhead17> SpamapS: https://bugs.launchpad.net/debian/+source/dbconfig-common/+bug/807038
<_mup_> Bug #807038: dbconfig-common fails to preseed phpmyadmin on natty/lucid <dbconfig-common> <phpmyadmin> <dbconfig-common (Ubuntu):New> <dbconfig-common (Debian):New> < https://launchpad.net/bugs/807038 >
<koolhead17> i want some one to shpw little love to this :D
<koolhead17> *show
<SpamapS> koolhead17: yeah, dbconfig-common is a little bit weird.. :-/
<koolhead17> SpamapS: that is one reason am skipping  our repository :D
<koolhead17> i want some bugs which does not require dbconfig-common
<koolhead17> *charms in wishlist
<SpamapS> koolhead17: huh? you can just write your own config file, you don't have to use dbconfig-common's values
<koolhead17> SpamapS: ooh!! i was dumb all this while then. will dig up on it later once am home. thanks for that. :)
<SpamapS> koolhead17: your dotproject charm looks cool tho
 * koolhead17 pats himself for few sec :D
<niemeyer> jimbaker: That's not critical right now
<SpamapS> koolhead17: https://launchpad.net/charm/+filebug .. create a 'new-charm' bug and I'll take a look. I usually don't grant membership in charmers until somebody has contributed something significant.. that charm would count. :)
<koolhead17> SpamapS: so can i call myself a charmer now!! :D
<koolhead17> SpamapS: thanks. i will add few more in the list !! :D
<SpamapS> koolhead17: you can call yourself sally if you want. We'll call you a charmer when you have a charm in the collection. :)
<SpamapS> koolhead17: and btw, thanks for the participation and enthusiasm
 * koolhead17 bows to SpamapS 
<koolhead17> ok catch you guys from home
<jimbaker> niemeyer, you mean, we only need to support the standard juju ppa at this time, correct?
<niemeyer> jimbaker: Right.. that's the problem we have to solve right now. distro/ppa/branch
<jimbaker> niemeyer, sounds good
<niemeyer> rog: The numbering of reviews is a convention we follow for a while so that we can exchange ideas about points
<niemeyer> rog: "Ah, what about [3] on bar"
<rog> ah ok.
<rog> i do miss Rietveld. i was toying with the idea of using the bzr plugin for that.
<rog> then i thought, no, get used to it!
<niemeyer> rog: Yeah, .. well, kind of..
<niemeyer> rog: There's a third option: helping to convince the LP folks to improve it :)
<rog> niemeyer: from what you were saying, it sounds like they've got something in the works
<niemeyer> rog: Yeah, there's significant UI changes coming
<niemeyer> rog: Not sure about reviews, specifically
<niemeyer> rog: But once these changes land, we'll be in a better position to request some improvements on the area
<rog> the reviews are really important. i think the Go community would not be nearly so involved without the codereview site
<niemeyer> rog: LP is developed within LP as well, so it's not like they'll resist the idea of making it better :)
<niemeyer> rog: Agreed
<rog> niemeyer: they might not be into the whole code review thing though
<SpamapS> is Rietveld a review tool?
<SpamapS> never heard of it
<niemeyer> rog: Every branch in Canonical going into Launchpad, Landscape, U1, juju, etc.. is reviewed before merging, and it's been like that since the beginning of times :)
<niemeyer> rog: Well before Rietveld even existed
<SpamapS> btw, whats the word on the 'ensemble:formula' in charms? if its not there, will anything break?
<niemeyer> SpamapS: No
<rog> SpamapS: see codereview.appspot.com
<niemeyer> SpamapS: juju removes the need for headers
<SpamapS> niemeyer: cool.. going to do a mass update on charm once juju is uploaded.
<SpamapS> niemeyer: what about $ENSEMBLE_REMOTE_UNIT ?
<niemeyer> SpamapS: JUJU_*
<niemeyer> SpamapS: Please see the email I sent to the list.. it details all the compatible and incompatible changes
<niemeyer> SpamapS: and it's a nice guide for the mass update
<SpamapS> Oh I must have missed the full details
<SpamapS> Been a few long days since then :p
<rog> niemeyer: just addressed the code review points
<rog> (and replied)
<rog> (i'm presuming that replying to the sender address also sends an email to you)
<niemeyer> rog: Yeah
<niemeyer> rog: Thanks, will check it out
<rog> i'm done for the day. see ya monday.
<_mup_> juju/machine-with-type r366 committed by kapil.thangavelu@canonical.com
<_mup_> address review comments, use hypens for settings key values
<niemeyer> rog: Have a good weekend!
<rog> niemeyer: will do. mmm curry.
<_mup_> juju/trunk r358 committed by kapil.thangavelu@canonical.com
<_mup_> merge machine-with-type [r=niemeyer][f=854230]
<_mup_> An environment now stores the machine provider-type as a settings property.
<hazmat> ugh.. i broke a test on trunk
<SpamapS> thats the developer equivalent of a cheerleader dropping the spirit stick
<niemeyer> LOL
<hazmat> niemeyer, we still need the lxc-library-clone to get reviewed and merged for the local dev stuff to work fully
<hazmat> bcsaller, any updates on it?
<niemeyer> hazmat: Cool, that's at the top of my review pipeline right now actually
<bcsaller> hazmat: I think its working where it is, wasn't sure from the conversation here if you were ok with the changes to __init__ and clone yet though
<hazmat> bcsaller, oh i didn't see your followup.. having  a look now
<AlanBell> hi all
<koolhead17> hello AlanBell
<AlanBell> so if I set up an application like the wordpress example using juju over several instances, say one database server and 3 web servers doing the PHP stuff, the example ends up with a plain unthemed wordpress. What happens when I want to install a theme?
<AlanBell> do I have to do that on all the web servers manually and manage the sync of that myself?
<kim0> AlanBell: nope
<kim0> AlanBell: you should relate to a shared storage service (such as NFS)
<kim0> which will handle that for you
<kim0> assuming the wordpress formula is written to use that
<kim0> there are other shared storage services .. ceph, gluster ...etc which I hope someone will charm soon :)
<AlanBell> ok
<AlanBell> and wordpress is about the least appropriate example I can think of to run on juju, but it serves as a technical example!
<kim0> yeah! it's actually not that bad :)
<kim0> when you get slashdotted .. you'll want to scale it up :)
<koolhead17> kim0: we had drupal one somewhere
<koolhead17> as well
<AlanBell> meh, been slashdotted with wordpress running on a smallish vps
<AlanBell> install supercache, slashdot away at will
<kim0> woot .. slashdot must be losing in popularity :)
<AlanBell> I have also been knocked over by slashdot in the past :)
<AlanBell> but it only lasts a day anyway
<AlanBell> mediawiki is a slightly more realistic example, but things don't start out needing a multi-server architecture
<AlanBell> you would have a single server mediawiki that has grown over time and now has thousands of users and the server is beginning to creak a bit
<AlanBell> then you want to split out the database and add multiple web servers etc
<kim0> AlanBell: yeah .. the idea is about the management model of services and how they relate to each other
<kim0> how they all collaborate towards an end state
<kim0> that's the beauty
<kim0> having one service tied to being one ec2 node, is only a current limitation
<kim0> once we have LXC working (super soon) .. my understanding is that we'll be able to run multiple services on one box
<AlanBell> so you could do a scaleable architecture starting on a single node?
<kim0> which could even be your laptop
<kim0> yep
<AlanBell> that would make sense
<AlanBell> otherwise include a migration story
<kim0> hazmat: any idea when the local dev (lxc) story is landing please
<hazmat> kim0, as soon as possible, but no sooner ;-) .. more seriously we've got a few branches in flight that should land hopefully today.. that will setup the basics, but there's some additional work to make things work across all the ensemble commands (ssh, debug-hooks) which won't land till next week realistically
<kim0> hazmat: you said the e word
<hazmat> mostly it relates to the lxc containers (service units) will have distinct ip addresses that need to be reported back to the end user
<hazmat> kim0, i like the e word ;-)
<kim0> haha :) thanks a lot
 * hazmat wonders if he can script an auto replace for that into his irc client
<kim0> juju
<kim0> :)
<koolhead17> kim0: charms :P
<koolhead17> hazmat: hey
<hazmat> koolhead17, hi
<koolhead17> hazmat: how have you been?
<niemeyer> hazmat, bcsaller: Review delivered for https://code.launchpad.net/~bcsaller/juju/lxc-library-clone/+merge/75833
<hazmat> koolhead17, busy
<hazmat> koolhead17, looking forward to going home this weekend, its been a long travel the last few weeks
<bcsaller> niemeyer: thanks
<niemeyer> bcsaller: Please let me know if you'd like to talk online about any of the points there
<_mup_> juju/config-juju-origin r357 committed by jim.baker@canonical.com
<_mup_> Initial commit
<niemeyer> bcsaller: Will be happy to help getting this in faster in any way possible
<koolhead17> hazmat: awesome.  :D
<_mup_> juju/local-unit-deploy r387 committed by kapil.thangavelu@canonical.com
<_mup_> address review comments, switch to explicit passing of unit namespace to the unit container deployment
<niemeyer> hazmat: ping
<hazmat> niemeyer, pong.. almost done getting local-unit-deploy ready for another look
<cole> what's up jujuers
<cole> question!
<niemeyer> hazmat: Just answering your points on the merge proposal instead, for reference
<hazmat> niemeyer, cool, thanks
<cole> what's the current thinking re dynamic scaling with juju?
<niemeyer> hazmat: Please see if it makes sense: https://code.launchpad.net/~niemeyer/juju/go-formula-bundle/+merge/75247
<niemeyer> cole: juju add-unit FTW
<niemeyer> cole: The whole system architecture was designed with that kind of use case in mind
<niemeyer> cole: So it's pretty natural to do it, and we'll be adding more features as we go
<cole> niemeyer: cool, one thing we are thinking of is extending m-collective to be able to talk to ensemble.
<niemeyer> cole: Nice!
<cole> but if there are plans to introduce that functionality… we won't bother.
<cole> niemeyer: if that does sound interesting to you I'd be happy to set something up with Teyo.
<hazmat> niemeyer, re the comment, so its not clear to me why we need a method to read the whole bundle into memory, if we just expose the file path.. the consumer of the bundle can do whatever is appropriate to its usage
<niemeyer> cole: It depends on what's the functionality you're really looking for
<niemeyer> hazmat: What file path?  If I have a bundle in memory I have no file path
<hazmat> niemeyer, why would you have a bundle only in memory?
<niemeyer> hazmat: Also, what you say is not entirely accurate
<cole> niemeyer: basically let m-collective gather stats on a host and determine threshold and then (since m-collective has no knowledge of relationships) ask ensemble to go scale appropriately.
<niemeyer> hazmat: This is not a method to read the whole bundle into memory
<niemeyer> hazmat: This is a method that takes a sequence of bytes that _are_ in memory, and interprets them as a bundle
<SpamapS> cole: I thought mcollective was just a message bus + some agents
<niemeyer> hazmat: Because, e.g., it was read from a database
<cole> SpamapS: it has the concept of gathering load / performance data.
<niemeyer> cole: Yeah, I'd say don't bother
<hazmat> cole, we'll need to expose an api endpoint on juju to make that sort of use case easier
<niemeyer> cole: We'll eventually have that internally
<cole> niemeyer:that's the answer I was hoping for!
<niemeyer> cole: Unless you're desperate, of course
<niemeyer> :)
<cole> niemeyer: auto-scaling is a HUGE customer requirement and our default answer is always juju without telling them the auto part of scaling doesn't exist yet ;)
<SpamapS> the auto part is impossible to define for any one workload
<niemeyer> cole: Well, you're not saying something entirely off actually, even with the current state
<SpamapS> What you need is an intelligent rules engine.
<niemeyer> cole: The way that the system works right now actually does quite a lot of "auto" things that other systems can't do
<SpamapS> good point..
<niemeyer> cole: E.g. "juju add-unit appserver"
<niemeyer> cole: That's really all it takes..
<SpamapS> its the responsiveness thats missing, but the setting up and configuring things, that it already does.
<SpamapS> cole: right now you could simply use regular graphing/monitoring and when appserver load is too high add-unit .. and when its too low, remove-unit ..
<hazmat> niemeyer, the local-unit-deployment is ready for another look
<niemeyer> hazmat: WOot
<hazmat> niemeyer, back to formula-bundle-dir.. ah.. i understand the ReadAt method now.. that is kinda cool.. re the ReadBundleBytes, ic re the use case
<niemeyer> hazmat: Yeah, that's the orthogonality between types and methods that feels so nicely done there
<niemeyer> hazmat: Thanks for the review.. the bits stuff was great catch there
<niemeyer> hazmat: Will have a look at this over the weekend
<hazmat> niemeyer, cool
<_mup_> juju/machine-with-type-merge r378 committed by kapil.thangavelu@canonical.com
<_mup_> merge machine with type
<_mup_> juju/provider-determines-placement r390 committed by kapil.thangavelu@canonical.com
<_mup_> merge machine-with-type and resolve conflict
<_mup_> juju/unit-with-private-ip r391 committed by kapil.thangavelu@canonical.com
<_mup_> merge pipeline and resolve conflicts
<_mup_> juju/local-unit-deploy r389 committed by kapil.thangavelu@canonical.com
<_mup_> update initialization test to note that settings are now present
<niemeyer> hazmat: So, units are under the namespace.. but what about everything else?
<jimbaker> hazmat, re juju-origin, i have a rewritten version (in lp:~jimbaker/juju/config-juju-origin). however, one question i have is whether we should have the default origin be _DISTRO yet (using the names in juju.providers.common.cloudinit). right now, that wouldn't work
<niemeyer> hazmat: e.g. zookeeper, log files, ...
<niemeyer> jimbaker: Yes, it should be distro.. we have to release this code into Ubuntu.
<hazmat> niemeyer, each env has its own zookeeper running on the host on a random port, the user can configure the storage for log, pids, via a config option data-dir in the environments.yaml
<niemeyer> hazmat: Yeah, what I don't get is why are we namespacing a particular aspect, rather than having a root that contains everything
<hazmat> jimbaker, yeah.. the distro installation should be sorted  out shortly as soon as we have a s/ensemble/juju packakage downstream
<jimbaker> niemeyer, ok, as is, we have an impasse until the release. it means between the point of merging this branch, assuming it is good, and the release, environments.yaml would need to point to the ppa with juju-origin to retain the current behavior
<jimbaker> and have anything work at all
<hazmat> niemeyer, effectively the data-dir is that root, but lxc itself wants a flat storage of containers into /var/lib/lxc
<hazmat> hence the namespace value
<hazmat> s/value/prefix
<niemeyer> hazmat: I see.. but it looks like we're using a JUJU_HOME like:
<niemeyer> juju_directory="/var/lib/juju"
<hazmat> niemeyer, that's in the container for the unit agent.. for the machine agent juju home is the data dir configured in environments.yaml
<niemeyer> Or maybe not.. /me looks
<niemeyer> hazmat: I see.. hmmm
<niemeyer> hazmat: It just feels like we're adding yet another concept there, which doesn't map to something concrete (a unit namespace)
<jimbaker> niemeyer, should i submit this as a diff merge proposal and pull the one for env-origin? it's taking a different approach, much closer to trunk, given the simpler usage of ppa (and no deb repository), and the removal of juju-version
<niemeyer> hazmat: Ok, question:
<niemeyer> hazmat: Would it work if we had JUJU_CONTAINER=<the container name> instead, explicitly?
<niemeyer> hazmat: Or is there any other reason to have the UNIT_NS information?
<niemeyer> jimbaker: Hmm
<hazmat> niemeyer, we're passing namespace to the machine agent which is creating multiple containers
<aram> moin.
<niemeyer> hazmat: Ok
<niemeyer> hazmat: Still feel a bit strange about it.. feels like if two users try to use it there are clashing resources
<hazmat> niemeyer, the namespace includes the user name.. where's the clash?
<niemeyer> hazmat: Maybe it's just my impression.. let me see
<niemeyer> +    def get_upstart_unit_job(self, machine_id, zookeeper_hosts):
<niemeyer> +        """Return a string containing the  upstart job to start the unit agent.
<niemeyer> +        """
<niemeyer> +        environ = self.get_environment(machine_id, zookeeper_hosts)
<niemeyer> +        # Keep qualified locations within the container for colo support
<niemeyer> +        environ["JUJU_HOME"] = "/var/lib/juju"
<niemeyer> Sorry.. longer than expected
<niemeyer> hazmat: /var/lib/juju?
<hazmat> niemeyer, that's for the unit agent in the container
<niemeyer> hazmat: Will this path be used in any way?
<hazmat> niemeyer, it will be used in the same manner its used for non local dev
<niemeyer> hazmat: Right, but if there are two different environments for two different users, will they conflict given that the setting is the same, or is it ok?
<hazmat> ie.. the unit agent will store pid, log files there
<hazmat> niemeyer, those paths are inside the unit container
<niemeyer> hazmat: Ah, inside, ok
<hazmat> actually i switched the unit agent in the container to log to /var/log/juju  so its mostly just workflow state that's in juju home..
<niemeyer> hazmat: Cool
<niemeyer> hazmat: Somewhat unrelated,
<niemeyer> hazmat: Shouldn't this depend on a setting:
<niemeyer> +            {"JUJU_ORIGIN": "ppa",
<niemeyer> +             "JUJU_SOURCE": "ppa:juju/ppa"},
<niemeyer> hazmat: It's also wrong, btw.. ppa:juju/pkgs
<hazmat> niemeyer, aha, i was wondering why that failed.. there's a number of fixes for the origin work in bcsaller lxc-library-clone work
<niemeyer> hazmat: Cool
<hazmat> i ended up just merging his work when i want to test the whole thing functionally at the end of the pipeline
<hazmat> also fwiw i'm going to be attending my son's track meet for the next 2.5 hrs.. i'm taking a hotspot with me.. but the connectivity might be spotty
<niemeyer> hazmat: Alright! Let's move forward with this then!
<hazmat> niemeyer, sweet!
<niemeyer> hazmat: I feel like stuff is going over my head due to the heavy system interaction
<niemeyer> hazmat: But what really matters at this point is that this is working, and it's a base we can improve on
<hazmat> niemeyer, its pretty cool when you see it working
<hazmat> bcsaller, any chance you'll be able to address the lxc-clone-lib review items today?
 * hazmat steps out for a few
<niemeyer> I'll get some coffee too
<bcsaller> hazmat: fully expect to
<niemeyer> jimbaker: ping
<jimbaker> niemeyer, hi
<niemeyer> jimbaker: So
<niemeyer> jimbaker: What we can try to do is to detect the default behavior, in case the option is missing
<jimbaker> niemeyer, not certain what you mean
<niemeyer> jimbaker: A good way to do it is checking if there's an installed package that comes from the ppa
<niemeyer> jimbaker: Check the output of "apt-cache policy juju"
<niemeyer> jimbaker: Use the following heuristics:
<jimbaker> niemeyer, hmmm... so you mean, implement the following workaround: as part of the cloud init, run a script that tests for a necessary dependency (so checking output above), and if not there, adds the repository
<niemeyer> 1) Grab the installed version from the "Installed:" line
<niemeyer> 2) If this is "(none)" return "branch"
<niemeyer> 3) Otherwise, store the version in a variable, and keep parsing
<niemeyer> 4) Find the version table, and find the installed version
<niemeyer> 5) For each line in the given version, if it contains "ppa.launchpad.net/juju/pkgs", return "ppa"
<niemeyer> 6) If you reach the end of the installed version, return "distro"
<niemeyer> jimbaker: name the function get_default_origin()
<niemeyer> jimbaker: Rather than "branch", it should actually return a proper value for the origin setting
<niemeyer> jimbaker: "lp:juju", I suppose?
<jimbaker> niemeyer, lp:juju seems reasonable as a workaround, this would correspond to our old policy pre ppa
<niemeyer> jimbaker: Cool.. I suppose that's a valid setting for "juju-origin: lp:juju" once your branch goes in, right?
<jimbaker> niemeyer, yes, that's a valid juju-origin setting
<niemeyer> jimbaker: Cool
<niemeyer> jimbaker: In case of errors or unexpected input, always fall back to "distro"..
<niemeyer> jimbaker: We can easily fix the ppa or trunk, but can't fix the distro
<niemeyer> (that quickly)
<jimbaker> ok, this seems like a reasonable workaround and helps the user run what they expect w/o having to set anything
<niemeyer> jimbaker: Super
<niemeyer> SpamapS, m_3: Any comments on the heuristics?
<niemeyer> jimbaker: Also, please log a line on "bootstrap" pointing out the origin being used, so the user is aware of it
<SpamapS> niemeyer: readnig
<SpamapS> reading even
<jimbaker> niemeyer, sounds good
<SpamapS> niemeyer: +1 looks good, there's a python library for that apt information.. but execing apt-cache policy is probably simpler for the immediate needs.
<niemeyer> SpamapS: and more portable as well
<niemeyer> SpamapS: Cool, thanks
<niemeyer> Alright folks
<niemeyer> I'm going to step out for some time.. will be working a bit over the weekend to push things forward
<SpamapS> niemeyer: one final bit, you may want to verify that the juju thats running is actually from the package..
<niemeyer> SpamapS: Ah, good point
<SpamapS> dpkg -S isn't fast, but for one boot-time check it might be worth not picking the wrong behavior when something else has pulled in juju for some other reason.
<niemeyer> jimbaker: At the forefront of the function, check if juju.__file__ comes from /usr.. if it doesn't, we can use the branch as well
<jimbaker> niemeyer, makes sense
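The heuristics niemeyer lists above (steps 1-6, plus SpamapS's check that the running juju actually comes from the package) could be sketched roughly as below. The details of the `apt-cache policy` output assumed here ("***" marking the installed version, 8-space-indented source lines) are an assumption about apt's format, not taken from juju's actual implementation:

```python
import subprocess

def get_default_origin(policy_output=None):
    """Guess the default juju-origin per the heuristics above.

    Returns "ppa", "distro", or "lp:juju".  On errors or unexpected
    input it falls back to "distro", since that's the one thing that
    can't be fixed quickly.  policy_output lets tests inject canned
    "apt-cache policy juju" output.
    """
    if policy_output is None:
        try:
            import juju
            # The running juju doesn't come from the /usr package tree:
            # it's a branch checkout, so install from the branch.
            if not juju.__file__.startswith("/usr"):
                return "lp:juju"
            policy_output = subprocess.check_output(
                ["apt-cache", "policy", "juju"], text=True)
        except Exception:
            return "distro"

    installed = None
    in_installed = False
    for line in policy_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Installed:"):
            installed = stripped.split(":", 1)[1].strip()
            if installed == "(none)":
                return "lp:juju"   # nothing installed: assume branch
        elif installed is None:
            continue
        elif stripped.startswith("***"):
            in_installed = True    # version-table entry for the installed version
        elif not line.startswith("        "):
            in_installed = False   # a different version's entry (or a header)
        elif in_installed and "ppa.launchpad.net/juju/pkgs" in stripped:
            return "ppa"
    return "distro"
```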
<niemeyer> Alright, I'm heading out.. have a good weekend folks!
<niemeyer> or see you tomorrow :)
<_mup_> juju/unittests r6 committed by jim.baker@canonical.com
<_mup_> Allow churn to run standalone
<zirpu> is juju only going to support AWS, or will it support other cloud providers in the future? rackspace, etc.
<_mup_> juju/unittests r7 committed by jim.baker@canonical.com
<_mup_> Better error handling
<zirpu> arg.  charm-tools is busted.  /usr/bin/charm -> /build/buildd/charm-tools-0.2+bzr68~natty1/debian/charm-tools/usr/share/charm-tools/charm
<zirpu> as is principia
<SpamapS> zirpu: yeah I'm working on that right now actually
<zirpu> ah. good. i won't bother trying to figure out where to open a bug then. :)
<zirpu> didn't seem to be anything for charm-tools w/in juju. separate package?
<SpamapS> it will work in about 10 minutes
<SpamapS> charm-tools is a stop-gap until juju has native repo support
<zirpu> ah. ok. thanks.
<SpamapS> zirpu: and for your other question, aws is just the beginning. :)
<zirpu> excellent!
<SpamapS> zirpu: bare metal is already available via orchestra (documentation almost done ;)
<SpamapS> zirpu: and I'm sure somebody will do another cloud provider if they need one. :)
<SpamapS> zirpu: a new charm-tools is building now.. should be available in the ppa in the next 10 minutes or so
<zirpu> cool. thanks.
<zirpu> does orchestra use virtualbox/VMware/etc? or just for bare metal provisioning?
<SpamapS> zirpu: orchestra is cobbler + ubuntuisms , and cobbler has some libvirt integration
<SpamapS> zirpu: but really.. if you want to control VM's w/ it.. just use openstack
<SpamapS> zirpu: in theory it will work w/ Eucalyptus too.. tho nobody's been testing that lately :p
<SpamapS> hazmat: hey, is this the test you broke: juju.providers.ec2.tests.test_provider.ProviderTestCase.test_config_environment_extraction
<zirpu> heh
<SpamapS> zirpu: just curious, what were you wanting to do with juju?
<zirpu> i'm just looking into ways of deploying to the various providers.  juju seems fairly clean so far.  i don't want to write my own set of scripts.
<zirpu> i'll probably use openstack for the vbox server i have at home for testing.
<SpamapS> zirpu: we want you to write your own set of scripts.. that are called by juju ;)
<SpamapS> zirpu: the dev team is landing LXC container integration right now actually.. so you can do local dev work w/o VM's .. just little LXC containers.
<zirpu> right. charms and environments right?
<zirpu> ooh. lxc would be better.
<SpamapS> yeah, charms are the way a service is defined. environments are runtime collections of services, relations, machines and charms
<SpamapS> zirpu: eventually all the services will run inside LXC containers so its easier to re-use an available AWS machine w/o terminating/provisioning a new one.
<zirpu> lxc on aws? :-) turtles! all the way down!
<SpamapS> LXC works great on AWS
<SpamapS> its how we've been testing openstack .. :)
<SpamapS> or at least, one way
<SpamapS> zirpu: btw, charm-tools works now
<zirpu> cool. i'll upgrade.
<zirpu> about to head out to dinner. will work on it later though.
<zirpu> i want to build a replica set of mongodb's to test failover and such. that's my current test app.
<SpamapS> zirpu: lp:charm/oneiric/mongodb .. :)
<zirpu> yep, already looked at that.
#juju 2011-09-24
<hazmat> SpamapS, no, re the broken test: it was an lxc provider test that broke, fix queued for the next branch to merge
<SpamapS> hazmat: Ok cool I think i mis-copied/pasted that test that failed.. the lxc provider tests are the ones failing for me.
<_mup_> juju/local-unit-deploy r390 committed by kapil.thangavelu@canonical.com
<_mup_> update machine agent to use global settings to determine provider type
<_mup_> juju/trunk r360 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-unit-deployment [r=niemeyer][f=855146]
<_mup_> Deploys service units on the local machine with lxc.
<hazmat> niemeyer, interesting zk competitor, experimental http://www.osrg.net/accord/
<niemeyer> hazmat: Pretty interesting indeed
<niemeyer> hazmat: Wow, and it has transaction support
<niemeyer> On the other hand a few other areas are on the weak side
<koolhead17> hi all
<niemeyer> Morning all
<niemeyer> We have an empty review queue!
<_mup_> juju/go r7 committed by gustavo@niemeyer.net
<_mup_> Merged the go-formula-config branch [r=hazmat]
<_mup_> This is the initial support for parsing of config.yaml in the Go port.
<_mup_> juju/go r8 committed by gustavo@niemeyer.net
<_mup_> Merged go-formula-config-validation branch [r=hazmat]
<_mup_> This complements the initial config.yaml handling with validation support.
<_mup_> juju/go-formula-dir r22 committed by gustavo@niemeyer.net
<_mup_> Bundle filepath.Rel as filepath_Rel while the submission upstream
<_mup_> doesn't go in (http://codereview.appspot.com/4981049).
<_mup_> juju/go r9 committed by gustavo@niemeyer.net
<_mup_> Merged go-formula-dir branch [r=hazmat]
<_mup_> This introduces support for handling formula directories in the Go port.
<_mup_> This includes the bundling of them, needed for the store.
<_mup_> Bug #858267 was filed: Bundling and unbundling must support perm bits in the Go port <juju:In Progress by niemeyer> < https://launchpad.net/bugs/858267 >
<_mup_> juju/go r10 committed by gustavo@niemeyer.net
<_mup_> Merged go-formula-bundle branch [r=hazmat]
<_mup_> This introduces support for formula bundle files in the Go port.
<_mup_> There's an unhandled review point in this branch regarding formula bits
<_mup_> that will be addressed in a follow up branch coming soon.  The problem
<_mup_> was described in bug #858267.
<_mup_> juju/go r11 committed by gustavo@niemeyer.net
<_mup_> Applied the juju/charm renaming to the Go code base.
<_mup_> Dropped need for the silly "header" field in metadata.yaml.
<_mup_> Bug #858282 was filed: support for wireless USB modems <juju:New> < https://launchpad.net/bugs/858282 >
<hazmat> niemeyer, the lxc-library-clone stuff is ready for another look
<hazmat> niemeyer, awesome re review queue
<niemeyer> hazmat: Awesome, thanks!
<hazmat> niemeyer, also i wanted to talk about the placement stuff
<niemeyer> hazmat: Sure
 * hazmat is preparing to be in airports for 28hrs starting in a few
<hazmat> ugh
<niemeyer> hazmat: Ugh indeed
<niemeyer> hazmat: Do you want to talk now about it?
<hazmat> niemeyer, sure
<hazmat> niemeyer, so the get_placement_policy on the provider is to ensure that the provider gets final choice on the placement policy.. the user currently has two options for selecting the policy: environment and command line; failing that, most providers default to unassigned, except local, which always picks local
<hazmat> the 'preference' value is basically the user's cli preference if specified, but the  provider is responsible for returning the actual value, which is why it delegates there and for the base provider implementation does if preference: return preference
<hazmat> else the base provider consults the environment, and failing that returns the default
<hazmat> ah ic
<hazmat> so the reason to not push the if preference is none check to the calling site, is that some providers (local) only support a single policy
<niemeyer> hazmat: Still, the way we figure the actual preference feels a bit convoluted
<niemeyer> hazmat: The provider shouldn't be responsible for analyzing the user preference and failing out
<niemeyer> hazmat: The current implementation also makes this more obvious since all it's doing is returning the provided value without any analysis
<niemeyer> hazmat: I suggest having a get_placement_policies() in the provider interface instead, ordered by the provider's preference
<hazmat> niemeyer, but then the user can't do a cli selection
<niemeyer> hazmat: Why not?
<hazmat> niemeyer, un moment on phone with bcsaller
<niemeyer> hazmat: Ok.. will get some coffee meanwhile..
<hazmat> niemeyer, so it should be doing some analysis at least to validate the user option
<hazmat> and it does when it goes to place the unit
<hazmat> niemeyer, i don't really understand your suggestion, doing an intersection against the provider policies to the user preference at the call site?
<hazmat> ie.. get_supported_placement_policies on the provider, intersect to the user preference and environment section at the call site?
<niemeyer> hazmat: The user selection shouldn't be a "preference"
<niemeyer> hazmat: Either it is supported by the provider, in which case it should be honored
<niemeyer> hazmat: or it's not supported, in which case the request should error out
<niemeyer> hazmat: DOing this is the responsibility of the command line management
<niemeyer> hazmat: Not the provider
<niemeyer> hazmat: The provider simply has a list of supported placement policies
<niemeyer> hazmat: Which the user request must be compared against
<hazmat> niemeyer, sounds good
<niemeyer> hazmat: We don't even need to default to unassigned
<niemeyer> hazmat: If the user request is None, pick the first entry from the provider's list
<niemeyer> which will generally be unassigned
<hazmat> i also think the placement stuff shouldn't be in state, but just in machine or provider package, but that's a different topic
<hazmat> niemeyer, okay.. i'll restructure that a bit then
<niemeyer> hazmat: That's tricky.. either we support a user selection when the provider has multiple options, or we don't
<niemeyer> hazmat: If we do, we need to store the option somewhere
<_mup_> juju/unittests r8 committed by jim.baker@canonical.com
<_mup_> Use proper path for build between butler and churn
<niemeyer> hazmat: I'm on the unix permission stuff, btw
<niemeyer> Fixing Go's archive/zip
<niemeyer> Well.. improving
<niemeyer> There's no bug.. just doesn't support it yet
<hazmat> niemeyer, so current local-dev plan is bcsaller's going to do some work merging the two endpoints (lxc-library-clone w/ local-provider-config), and add support for cloning containers into local unit deploy. i'm going to use that as a base for doing the public/private address stuff in the unit
<niemeyer> hazmat: Sounds awesome
<hazmat> cool
<bcsaller> yeah, applying updates relative to branch evolution since the sprint
<niemeyer> bcsaller: Heyo
<bcsaller> :)
<hazmat> my son's first web game.. http://src.objectrealms.net/kaleb/web-asteroid-defense/
<hazmat> cute
<_mup_> juju/unittests r9 committed by jim.baker@canonical.com
<_mup_> Clean pyc before running unittests
<niemeyer> hazmat: Indeed, quite neat
<niemeyer> hazmat: Good trigonometry lessons I bet :)
<hazmat> bcsaller, can you push, and let me know the name of the branch that merges
<bcsaller> hazmat: I haven't tested it all yet, but I can push it
<hazmat> bcsaller, sounds good, i just want the origin of my merge point to have your branch as an ancestor
<hazmat> i'm going to be heading out in about 30m, and will have spotty connectivity till i get to the sfo lounge
<hazmat> in about 6hrs
<ejat> kim0: how to set juju using micro instance .. and changing the region
<bcsaller> hazmat: still around?
<SpamapS> ejat: in .juju/environments.yaml   default-instance-type: t1.micro
<SpamapS> ejat: it needs to be indented at the same level as 'type: ec2'
<SpamapS> ejat: to change the region, do 'region: xxxxx'
<ejat> default-instance-type: t1.micro type: ec2
<ejat> mean like that ?
<ejat> before or after the type ?
<ejat> region in which line ?
<SpamapS> Doesn't matter where, just that its indented the same
<ejat> owh okie
<SpamapS> environments:
<SpamapS>   sample:
<SpamapS>     type: ec2
<SpamapS> environments:
<SpamapS>   sample:
<SpamapS>     type: ec2
<SpamapS> oops
<ejat> ?
<SpamapS> ejat: so if that were the first few lines of your environments.yaml .. then anywhere after sample: you just add
<ejat> okie
<SpamapS>     default-instance-type: t1.micro
<SpamapS> Note that once you deploy something, the service will always have that instance type even if you change it and do add-unit
<ejat> owh okie .. this is for the testing purpose ..
<ejat> but if without it .. it will default using small instance right ?
<SpamapS> yeah
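Putting SpamapS's fragments together, a full environments.yaml would look something like this ("sample" and the region value are placeholders):

```yaml
environments:
  sample:
    type: ec2
    # must be indented at the same level as "type:"
    default-instance-type: t1.micro
    region: us-west-1   # placeholder; omit to use the default region
```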
<ejat> WARNING: Charm is using obsolete 'str' type in config.yaml. Rename it to 'string'.
<ejat> is it from the charm ?
<ejat> mediawiki charm
<ejat> SpamapS: i cant view my mediawiki after i launch it .. is it because of the security group policy ?
<SpamapS> ejat: yes
<SpamapS> ejat: you need to expose it
<SpamapS> ejat: juju expose name-of-mediawiki-service
<ejat> owh .. okie
<SpamapS> ejat: if any charms in your repo have 'str' instead of 'string' you will get that warning. Its one of my TODO's to fix them all.. feel free to submit merge proposals :)
<ejat> i just destroy the environment .. means that .. we need to expose the services ?
<SpamapS> ejat: sorry, what?
<ejat> ill try later ... sleepy .... :)
<ejat> SpamapS: will ya be at UDS ?
<ejat> u need to fix all the charm ?
<SpamapS> ejat: yes I'll be at UDS.. quite a few of us will be. Are you coming?
<SpamapS> ejat: yeah we need to fix all the charms that use 'str' to use 'string'
<SpamapS> but warnings < failures ... so failures first.. :)
<ejat> hopefully .. .. need to renew my passport n make the visa ..
<SpamapS> ejat: its in just over 1 month.. do it *soon*
<ejat> SpamapS: yeah .. thanks for the reminder .. a lots of todo starting next week need to be done ..
<ejat> coz . its my 1st time joining the uds ..
 * ejat a lot of thing need to be learn n catch up .. 
<hazmat> bcsaller, i'm here
<_mup_> juju/provider-determines-placement r392 committed by kapil.thangavelu@canonical.com
<_mup_> add an error for invalid placement policies
<ejat> http://paste.ubuntu.com/696384/
<ejat> anyone can help me?
#juju 2011-09-25
<_mup_> juju/unit-with-addresses r395 committed by kapil.thangavelu@canonical.com
<_mup_> unit pub/priv address retrieval on ec2, orchestra, and lxc
<_mup_> juju/unit-with-addresses r396 committed by kapil.thangavelu@canonical.com
<_mup_> service unit state accessors/mutators for public/private addresses
<koolhead17> hi all
<_mup_> juju/unit-with-addresses r397 committed by kapil.thangavelu@canonical.com
<_mup_> dummy unit address for tests
<koolhead17> SpamapS: hey
<hazmat> mornings
<koolhead17> hi hazmat
<koolhead17> although its evening for me :D
<koolhead17> https://juju.ubuntu.com/docs/write-formula.html
<koolhead17> now, should i replace ensemble with juju
<koolhead17> ensemble: formula == juju: charm
<_mup_> juju/go-charm-bits r12 committed by gustavo@niemeyer.net
<_mup_> Improvements and fixes in charm directory packing:
<_mup_> - Ignore hidden files as the Python version.
<_mup_> - Use the new filepath.Walk interface.
<_mup_> - Pack unix file mode into the charm bundle.
<_mup_> The last change requires this Go CL:
<_mup_>   http://codereview.appspot.com/5124044/
<_mup_> Bug #859151 was filed: Improvements and fixes in charm directory packing <juju:In Progress by niemeyer> < https://launchpad.net/bugs/859151 >
<hazmat> niemeyer, do you want another look at the placement stuff?
<hazmat> niemeyer, also i think bcsaller addressed the review comments on the clone-lib
<niemeyer> hazmat: Have you changed the placement stuff since we last talked?
<hazmat> we've basically got things working and factored out, just landing things into the queue and trunk in the right order
<hazmat> niemeyer, yes.. its basically redone to use a provider provided list of supported policies, intersected with the cli option and environment option
<hazmat> niemeyer, just pushed the latest
<niemeyer> hazmat: Unless it'd make a difference for bcsaller (as in, he'd work on it now), I'll review it first thing on my morning tomorrow before he comes online
<niemeyer> hazmat: Sure, I can check it out
<hazmat> niemeyer, he'd work on it now.. he's pending on pushing another branch into review, but its got a two headed pre-req
<bcsaller> I'll push the merge tip now, but I could really use the rest of the day off :)
<hazmat> with lxc-provider-config (which is held up on placement) and lxc-lib-clone
<hazmat> so we need to get at least one of the pre-reqs in or the merge proposal diff is going to be messy
<hazmat> and then i've got the status/ssh/debug-hooks working with that as a pre-requisite
<niemeyer> bcsaller: Sure.. I imagined that was the case.. I'll review it tomorrow during my morning then
<hazmat> ie.. (provider-placement->lxc-provider-config && lxc-library-clone ) -> lxc-omega -> units-with-addresses -> local-cli-support
<niemeyer> hazmat, bsaller: two-headed pre-reqs are not a big deal.. just mention the puzzle in the summary so that I can sort out the base
<hazmat> niemeyer, cool
<hazmat> bcsaller, cool, then if you're comfortable with lxc-omega can you put that into review, i'll push the last two into the review queue, i've got some minor work to finish on the local-cli-support (debug-hooks tests, and a functional test round)
<hazmat> it doesn't need to be in the review queue, but its just nice for pre-reqs
<hazmat> its been interesting to see how bzr/lp work with concurrent dev
<hazmat> lots of interconnected dev
<_mup_> juju/trunk r361 committed by gustavo@niemeyer.net
<_mup_> Added .dir and hooks/install in repository/dummy test data.
<_mup_> This is just to sync up with the Go port.
<_mup_> [trivial]
<_mup_> Bug #859180 was filed: LXC driven local development story <juju:New> < https://launchpad.net/bugs/859180 >
<koolhead17> hazmat: niemeyer hello
<hazmat> koolhead17, hello yes re replace in docs ensemble/juju formula/charm
<hazmat> koolhead17, canonical does require a contributor agreement though .. http://www.canonical.com/contributors
<koolhead17> hazmat: you mean i cannot contribute to juju without signing that? :P
<koolhead17> hazmat: also wordpress/hooks/db-relation-changed
<koolhead17> can i use this in case of other charms too ? hostname=`curl http://169.254.169.254/latest/meta-data/public-hostname`
<koolhead17> ?
<hazmat> koolhead17, yeah.. that's how contrib agreements work.. actually they've become quite common in corporate or foundation backed projects (openstack, plone, etc) all use them
<koolhead17> hazmat: will sign it once i start contributing, all this while i been only reporting bugs :D
<hazmat> ah.. those are welcome, but if you're going to submit a patch to the core it will be needed
<koolhead17> hazmat: point noted!!
<hazmat> koolhead17, charms don't count.... those are your own unless derived from an existing
<niemeyer> hazmat: We can move forward with this given timing
<niemeyer> hazmat: But I have a pretty bad feeling about the back and forth going on
<hazmat> koolhead17, a patch to the docs would count towards a core contrib though the way things are structured in the repo
<niemeyer> hazmat: command line knows about provider, placement, and env; placement knows about env and provider; provider knows about env and placement
<niemeyer> hazmat: and in the end we're solving an extremely simple problem, that right now would work fine with "placement = name" in the provider class
<hazmat> niemeyer, i pushed into placement because, else we'd be duplicating the logic in the commands
<hazmat> the pick_policy stuff is just a factoring out of what the cli uses to do just that
<niemeyer> hazmat: We don't need any logic in the commands either right now
<niemeyer> hazmat: We'd be totally fine just hardcoding the placement in the provider
<hazmat> niemeyer, then the user has no selection?
<hazmat> niemeyer, yeah.. i could drop list_policies easily
<niemeyer> hazmat: Yep.. that sounds fine right now
<niemeyer> hazmat: I mean, giving the user no selection
<niemeyer> hazmat: Each provider has only a single option that works
<hazmat> hmmm
<niemeyer> hazmat: the --placement option is a lie, pretty much
<hazmat> i'd like to introduce new policies in the cycle
<hazmat> niemeyer, only against local
<hazmat> a min/max machines policy if we can get lxc working on ec2 would be sweet for example
<niemeyer> hazmat: I also feel bad with state.policy poking into provider.config
<niemeyer> hazmat: It knows about the provider schema, when it shouldn't
<hazmat> yeah..
<niemeyer> Sorry, state.placement
<hazmat> niemeyer, with the provider not having selection anymore.. somebody has to poke at the config
<niemeyer> hazmat: This is the default..
<hazmat> ?
<niemeyer> hazmat: The first entry in the preferences list
<niemeyer> hazmat: get_placement_policies()
<hazmat> niemeyer, it is the default in the absence of user choice
<hazmat> or configuration
<niemeyer> hazmat: If we're putting the user choice in the provider's configuration, the provider should validate the user choice, not state.placement
<hazmat> i'm crazy tired btw... i might be a little dense picking things up.. i've had 3hrs sleep over the last 48
<niemeyer> hazmat: Ouch
<hazmat> actually i exaggerate its more like 36
<koolhead17> hazmat: go sleep :D
<hazmat> koolhead17, much to do for the release
<koolhead17> ooh. when is it happening?
<niemeyer> hazmat: So, suggestion to clean things up:
<hazmat> koolhead17, well we're trying to get into the oneiric cycle so we need to be there with some testing time, at this point its monday
<niemeyer> (btw, again, I'm fine with doing this later if you'd rather not work further on that)
<hazmat> niemeyer, okay.. suggest away
<koolhead17> waoo. ok
<niemeyer> 1) Remove the placement option from the command line entirely
<niemeyer> 2) Support it in the provider's configuration
<niemeyer> 3) Validate it just like we validate the rest of the provider configuration
<niemeyer> 4) Make Provider.get_placement_policy() a single item again, which is either the default preferred for the provider, or the placement config
<niemeyer> 5) Kill pick_policy()
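Steps 4 and 5 above reduce the provider side to very little. A sketch under assumed names (default_placement, config, and the class names are illustrative, not juju's actual classes):

```python
class ProviderBase:
    # Hypothetical attribute; real provider classes may differ.
    default_placement = "unassigned"

    def __init__(self, config):
        # config is the already schema-validated environment section.
        self.config = config

    def get_placement_policy(self):
        """A single value again: the "placement" key from the provider's
        environment configuration, else the provider's default."""
        return self.config.get("placement", self.default_placement)

class LocalProvider(ProviderBase):
    # The local provider only ever places units on the local machine.
    default_placement = "local"
```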
<hazmat> niemeyer, so basically revert to previous and drop the preference arg
<niemeyer> hazmat: Well, you've also moved to support config in the env
<hazmat> niemeyer, ? that support was already there (placement config in env yaml)
<niemeyer> hazmat: I missed it then
<hazmat> niemeyer, thats why the previous usage was doing things like serializing the env, so it could pick out the value on the cli
<hazmat> actually i think i yanked that a few branches back
<niemeyer> hazmat: Hmmm..
<niemeyer> hazmat: Yeah, I'm pretty sure it wasn't in the branch I reviewed last
<niemeyer> hazmat: Either way, it looks like a good idea.. certainly more sensible than the command line option
<niemeyer> hazmat: It will also avoid the wide distribution of knowledge going on
<niemeyer> hazmat: and probably become trivial
<hazmat> so we'll rely on env yaml schema for validation.. yeah.. that sounds reasonable
<hazmat> bcsaller, you still around.. just curious if you had any concern over dropping cli placement
<hazmat> niemeyer, okay i'll have a look at reverting and simplifying after i finish up the cli support and do some end-to-end testing
<hazmat> i think i'm gonna have a nap first though
<niemeyer> hazmat: Awesome, thanks a lot for that, and please do.. well deserved
<niemeyer> hazmat: I'm pushing some additional tweaks to the Go bundling fixes
<niemeyer> hazmat: It wasn't packing directories, now it will
<hazmat> doh..
<hazmat> niemeyer, cool saw some of the commits floating by
<niemeyer> hazmat: Not in the sense you probably think, though
<niemeyer> hazmat: zip files work fine without intervening directories
<bcsaller> hazmat: people have used placement as a kludge for some things, like doing deployments all to one machine apparently
<niemeyer> hazmat: e.g. "foo/bar" is unpacked fine
<bcsaller> but I'm not really opposed
<niemeyer> hazmat: but after a while they started to pack "foo/" as an entry too
<niemeyer> hazmat: as a consequence the Go port wasn't handling empty directories
<hazmat> bcsaller, but they could do that via env.yaml config just as well afaics
<niemeyer> Yeah
<bcsaller> hazmat: yes, but if you can picture them having to change that file between deploys to get the effect they want then we have other work to do ;)
<_mup_> juju/go-charm-bits r13 committed by gustavo@niemeyer.net
<_mup_> Bundle directories into the zip as well, so that empty directories are
<_mup_> handled properly.
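The zip behavior niemeyer describes is easy to see with Python's zipfile: a member path like "a/b.txt" implies its parent directory at unpack time, but an empty directory only survives bundling if it's packed as its own trailing-slash entry, which is what the Go fix above adds:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    # "a/" gets no entry of its own; unpacking still recreates it
    # because the member path implies it.
    z.writestr("a/b.txt", "hi")
    # An *empty* directory must be packed explicitly to survive:
    z.writestr("empty/", "")

with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # ['a/b.txt', 'empty/']
```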
<hazmat> bcsaller, better to come up with a min/max placement to solve the issue i think
<hazmat> bcsaller, afaics there's two usage modes.. fast deploy against local
<bcsaller> hazmat: I think SpamapS and adam_g have both used it and would be better to ask than me. I don't strongly feel the need to keep it but I don't want to make it harder to do something people can already do
<hazmat> actually that it... i don't think folks are doing it adhoc
<hazmat> ie per deploy
<bcsaller> you might be right
<bcsaller> the intention was to expand on the allowed models though
<hazmat> if we had min/max placement, with some pre-booted background machines, and max cost of machines (max machines), i think people would use that in preference to the kludge usage
<bcsaller> to pair a cpu bound and i/o bound set of services for example
<hazmat> bcsaller, local placement doesn't do that
<hazmat> although i see your point
<bcsaller> hazmat: right, its not about local
<niemeyer> Done!
<niemeyer> Dirs, modes, etc.. all works well.  Tested with "zip" as well to make sure everything is compatible.
<bcsaller> niemeyer: that is such a satisfying word.. Done
<niemeyer> I'll go outside for a while to check out a bit of day light..
<niemeyer> bcsaller: Very true!
<hazmat> bcsaller, hmmm.. maybe its magic pie in the sky.. but ideally juju could sample and do that for us, rebalancing units as needed
<hazmat> to get max efficiency out machines
<hazmat> crazy talk
<bcsaller> hazmat: our model has always been to build the primitive first and then think about controlling layers later
<hazmat> bcsaller, but that sort of manual placement isn't easily encoded in a policy
<bcsaller> manual or strategic ... agreed that modeling that is hard
<hazmat> bcsaller, that's admin knowledge of machine usage and assigned units, with manual assignment to a particular machine when deploying
<bcsaller> and the basis is usually cost-savings
<hazmat> or performance benefits
<bcsaller> utilization
<hazmat> ie deploy the namenode on a machine with these characteristics
<bcsaller> we are not really prepared to assist on performance as we intend to isolate in VMs anyway
<hazmat> bcsaller, its more a question of capacity and machine characteristics guided by usage constraint
<bcsaller> and things move during their lifetime, it gets complex, like machine 1 goes down and those 2 services end up where?
<hazmat> when deploying a service
<hazmat> bcsaller, volume management will open up new doors on this stuff
<bcsaller> agreed, it will help
<hazmat> its just not clear that placement helps here
<bcsaller> but the pairing thing is often about faster IPC which we don't naturally aid in
<hazmat> because its either trying to describe machine constraints of capacity, or against current usage assuming sampling
<hazmat> for this scenario
<hazmat> hmm
<bcsaller> hazmat: the placement strategies were supposed to be smarter than they are now, but yeah, in their current form they don't allow that kind of reasoning
<hazmat> cross-az placement
<hazmat> is interesting here though
<hazmat> deploy these units... for this unit deploy in a different az
<bcsaller> yeah
<bcsaller> which the openstack stuff wants anyway
<hazmat> but we're overloading placement policies, we'll want to have multiple policies with that scenario given our current usage of placement
<hazmat> or overlapping responsibility
<bcsaller> sounds like the choice is grow or die, I'd say grow, but we need more info before we do that
<bcsaller> ok, off  for a while again
<hazmat> yeah.. sleep time
<hazmat> bbiab
#juju 2013-09-16
<marcoceppi> Azendale: you still having issues with keystone?
<AskUbuntu> Not abel to destory service as agent state is down | http://askubuntu.com/q/346228
<marcoceppi> m_3: I know you're on review this week, but I wasn't able to get through the queue much last week. I'm going to plow through most of it today to leave you in a bit of a better state
<m_3> marcoceppi: cool... I'm off for part of today and tomorrow
<m_3> marcoceppi: so that works out well
<m_3> :)
<AskUbuntu> juju error: hook failed: "config-changed" | http://askubuntu.com/q/346276
<kentb> so what exactly does a "HOOK error: no relation id specified" mean?  I had to restart my nova-cloud-controller node and now it's sitting at a hook error like this along with my glance service (with the same error).
<marcoceppi> kentb: is that in the log?
<kentb> marcoceppi: yep.  In the unit logs for the nova-cloud-controller and glance services
<marcoceppi> kentb: juju 1.13?
<kentb> marcoceppi: yes sir
<marcoceppi> it's a weird error to be getting
<marcoceppi> kentb: it means relation-(get/set/id) is being called out of band of a relation hook and is not getting the JUJU_RELATION_ID
<kentb> marcoceppi: ok. so how doth one correct that?  juju resolved got the service back on line, but, didn't really seem to fix anything in that keystone and authentication services are totally hosed.
<marcoceppi> kentb: right, you'll probably want to use `juju resolved --retry` next time to have the hook retry itself. Which hook actually failed? config-changed ?
<kentb> marcoceppi: yep. config-changed
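The error marcoceppi describes — relation-get/set run without JUJU_RELATION_ID in the environment — can be avoided by passing an explicit relation id with `-r`. A minimal sketch; `relation-ids` and `relation-set` are stubbed as shell functions here because the real tools only exist inside a juju hook environment, and "identity-service" is just an example interface name:

```shell
#!/bin/bash
# Stubs standing in for juju's hook tools, which only exist inside a
# hook environment; the real commands accept the same -r flag.
relation-ids() { echo "identity-service:0"; }
relation-set() { echo "relation-set called with: $*"; }

# Outside a relation hook JUJU_RELATION_ID is unset, so relation-get/set
# must be told which relation to act on via -r.
for rid in $(relation-ids identity-service); do
    relation-set -r "$rid" ready=true
done
```

Running relation-set without `-r` outside a relation-* hook is exactly what produces the "no relation id specified" error.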
<kurt__> marcoceppi: My back search function is horrible in my IRC client - can you give me the link for the bug you created for me on Friday?
<sinzui> kurt__, is this the bug you were looking for? https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160
<_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error <nova-cloud-controller (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1225160>
<kentb> agh! I hit that on Friday...worked around it by including "search master" in my resolv.conf going forward
<kurt__> sinzui: yes it was.  thank you!
<kurt__> kentb: you could also add your hostnames to the affected node's /etc/hosts
<kentb> kurt__: yep. indeed.  What I ended up doing was adding the "option domain-search 'master';" line to /etc/maas/dhcpd.conf and restarting the service so all future machines would get the new settings.
<kurt__> kentb: good idea
<dalek49> does juju support vps yet?  I can find outdated answers that say no
<mhall119> can someone tell me if the postgres charm will create a schema and user for me, or if I have to specify those in my Django app's charm?
<kurt__> jcastro: are you on?
<marcoceppi> kurt__: he's out sick, anything I can help with?
<marcoceppi> mhall119: it /should/ iirc, but it also allows you to send the schema name
<kurt__> marcoceppi: nope.  I just wanted to thank him for the threads he sent my way
<marcoceppi> kurt__: ah, yeah, he's a bit under the weather today
<kurt__> marcoceppi: that's fine.  I got camera/mic set up fixed too.  So next hangout session should go smoother.
<marcoceppi> kurt__: excellent, how's the openstack setup going?
<kurt__> taking a little break today :)
 * kentb got his rebuilt and humming along
<marcoceppi> kentb: nice!
<kurt__> kentb: with networking and vm's running with horizon 100% ?
<kentb> kurt__: so far...that's where I left off on Friday before a hardware failure over the weekend basically forced me to rebuild it all
<kentb> kurt__: I've got to upload a couple of images but it should all be working as well as it did before catastrophe hit :)
<kurt__> kentb: yeah I managed to get everything but the ability to spin up and access a VM before.  So I hope you get further
<kentb> marcoceppi: thanks! If the hardware on these old servers don't die on me again, I hope to be in good shape :)
<marcoceppi> kentb: exciting! good luck
<sinzui> hi davecheney
<sinzui> davecheney, I agree a video call is not needed for the steps.
#juju 2013-09-17
<zradmin> has anyone been able to get the nova-ccc up and running properly? It keeps erroring out for me when I join a nova service to it
<kurt__> zradmin: have you gotten a relationship successfully between nova-cc and nova-cs?
<zradmin> kurt__: that's when nova-ccc turns red
<zradmin> kurt__: the log only shows something about the ssh keys for the compute node and then fails the hook, http://pastebin.ubuntu.com/6117251/
<kurt__> right, you are running in to the same bug as we are
<kurt__> you have two work arounds
<kurt__> are you doing maas?
<zradmin> yes
<kurt__> from kentb: kurt__: yep. indeed.  What I ended up doing was adding the "option domain-search 'master';" line to /etc/maas/dhcpd.conf and restarting the service so all future machines would get the new settings.
<kurt__> that's one way
<kurt__> the other way is to ensure your nodes all have /etc/hosts entries for each other
<kurt__> you'll have to ensure the nodes are up via maas with a valid image, then go in and update them manually
<kurt__> kentb's method is probably easier.  I've tested neither yet
<zradmin> ah ok so adding to dhcpd, restart the service and reboot the affected nodes?
<kurt__> well, make changes to dhcpd, then just destroy and re-add nova-cc
<kurt__> then add relationship
<kurt__> you keep dropping from the room
<zradmin> ok I'll play around with it a bit
<kurt__> but the key is obviously, the nova-cc node needs to re-up via dhcp to get the info
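The workaround kentb and kurt__ are trading amounts to a fragment like this in /etc/maas/dhcpd.conf; only the domain-search line comes from the conversation, the subnet values below are placeholders, and the exact file location and service name may differ between MAAS versions:

```
# /etc/maas/dhcpd.conf (excerpt) -- placeholder subnet, real option line
subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-search "master";
    # ...existing range/router/dns options...
}
```

After editing, restart the MAAS DHCP service (named maas-dhcp-server or similar, as kentb notes) so newly commissioned nodes pick up the search domain on their next DHCP lease.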
<kurt__> back in a few
<zradmin> sounds good
<marcoceppi> FYI jamespage adam_g: this is a bug that's affecting a lot of people trying to deploy openstack on MAAS https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160
<_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error <nova-cloud-controller (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1225160>
<adam_g> marcoceppi, can you please test with the py rewrite branches in lp:~openstack-charmers?
<marcoceppi> adam_g: I certainly can, I'll update the bug report tomorrow with my findings
<adam_g> marcoceppi, can't promise it's going to work any better, actually
<adam_g> marcoceppi, issue is that MAAS/users need to ensure DNS works when resolving non-FQDN hostnames. not just for the charm to deploy okay, but for ssh block migration to work.. nova attempts to initiate that by pointing a libvirt migration to qemu+ssh://$HOSTNAME/system, not qemu+ssh://$FQDN/system
<marcoceppi> adam_g: that's fine, I don't mind getting you as much information as possible to fix it. I've got a 4 hour flight tomorrow so I might just try to find where in the code it's dropping the tld and patch the bash charm for now
<marcoceppi> adam_g: So this is more a bug in maas, where the workaround outlined in the bug report needs to be applied to maas and not the charm?
<adam_g> marcoceppi, i believe you can work around better by just ensuring /etc/resolv.conf contains 'search master'
<adam_g> marcoceppi, or getting maas to ensure that when its giving out DHCP
<marcoceppi> adam_g: this is the current workaround: https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160/comments/1
<_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error <nova-cloud-controller (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1225160>
<adam_g> marcoceppi, oh, didn't see that. but yeah, that is correct
<marcoceppi> I mean, this is a big problem for users deploying on maas, I just want to know who I have to annoy to get this fixed
<marcoceppi> so if it's not at the charm level, maybe this needs to find its way into MAAS, either during the MAAS setup or as part of the documentation
<adam_g> marcoceppi, i'd think it would be a reasonable default to set the domain-search  in MAAS?
<adam_g> marcoceppi, alternatively, you can just disable live-migration in nova-compute
<marcoceppi> adam_g: I agree as far as maas
<adam_g> if DNS is borked, thats not going to work anyway
<adam_g> (live migration)
<jose> hey guys, anyone around here?
<davecheney> jose: yes
<thumper> yes
<jose> hey, I'd like to know if any of you know how could I add hook triggering on my charm
<thumper> jose: what do you mean by that?
<jose> on http://bazaar.launchpad.net/~jose/charms/precise/postfix/trunk/files I have hooks such as add-ssl, and would like to know if there is a way to trigger them like an option change
<jose> maybe something like 'juju set charmname option true'
<davecheney> jose: yes, handle that in config-changed
<jose> davecheney: and are there any pages that can give me a clue on that? I'd like to implement it
<davecheney> jose: to restate your request, you would like the enable ssl with a config flag ?
<davecheney> is that correct ?
<jose> enable ssl and update ssl certs, which are the two hooks I have
<davecheney> something like
<davecheney> [[ $(config-get option) == "true" ]] && hooks/ssl-update
<jose> so, if that line's in there, would the person be able to do 'juju set postfix ssl-update' and get the hook running? because as far as I can understand, that line would need to have something changed in config.yaml
<davecheney> jose: ssl-update is not a hook name
<davecheney> the names of hooks are fixed
<davecheney> juju set will fire the config-changed hook
<davecheney> so you can insert logic inside that hook to check the content of $(config-get ssl-update)
<jose> oooooh, got it now!
<jose> so basically if I say it to check if the command is 'juju set postfix ssl-update' and that is true, then it can trigger it
<davecheney> jose: close
<davecheney> you check the value of the config item, ssl-update
<jose> great, I think I now have an idea on what to do. Thanks a bunch, really!
 * jose runs and fixes his code
<davecheney> jose: don't forget to define ssl-update in your charm's config.yaml
<jose> yep, thanks!
<davecheney> and also, if $(config-get ssl-update) is false, don't just skip it
<davecheney> you need to remove any existing ssl configuration from the service
<jose> hm, bug appears
<jose> although it'd be best to add a 'remove ssl' config option, as I find it quite difficult for me to get around a solution on that
<davecheney> jose: hmm, i think that will be harder
<davecheney> all config values have a default
<davecheney> that is to say, there is no way that config-get will return nothing
<davecheney> so make it a bool param
<davecheney> and if inside the config-changed hook you find 'ssl-update' is false, then do whatever logic you need to disable ssl support
<davecheney> that logic should expect to be run multiple times
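davecheney's advice could be sketched as a config-changed hook like the one below; `config-get` is stubbed with an environment-variable-backed function so the sketch runs outside a juju hook environment, and `enable_ssl`/`disable_ssl` are hypothetical placeholders for the real work:

```shell
#!/bin/bash
# Stub for juju's config-get, driven by $SSL_UPDATE so this runs standalone;
# inside a real hook the juju-provided config-get tool is used instead.
config-get() { echo "${SSL_UPDATE:-false}"; }

enable_ssl()  { echo "enabling ssl"; }   # placeholder: install certs, reload service
disable_ssl() { echo "disabling ssl"; }  # placeholder: must be safe to re-run

# config-changed fires on every `juju set`, so branch on the current value
# and keep both branches idempotent -- the hook may run many times.
if [ "$(config-get ssl-update)" = "true" ]; then
    enable_ssl
else
    disable_ssl
fi
```

The key point from the conversation: `juju set` does not call a hook by name; it fires config-changed, and the hook decides what to do from the option's current value.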
<jose> I don't understand why people would like to disable ssl, but I'll try to find a workaround
<jose> also, is it possible to end an if/elif without an else?
<zradmin> anyone around still?
<jose> I am, though I'm not that of a coder
<thumper> zradmin: I'm here, and so are a few ozzies
<thumper> but I don't follow this all the time as I have about 30 channels open
<zradmin> do you know anything about setting the api ips in openstack? glance is giving me some trouble and only coming up with one of the two nodes instead of the HA IP (I'm following the ubuntu HA guide using juju for this)
<jose> hey thumper, do you by chance know if an if/elif can end without an else?
<jose> or zradmin, ^
<thumper> sorry don't know anything about openstack
<thumper> jose: in what language?
<jose> bash
<jose> forgot to mention :)
<thumper> sure, just don't code the else
<jose> great, thanks! :)
<zradmin> thumper: no problems, I'm sure I'll figure out whats going on with it eventually
<jose> thumper: also, if you have a min, do you know a command for juju to return the charm name deployed on a node? not wordpress/1 but wordpress, as an example.
<thumper> jose: command from where?
 * thumper doesn't really know much about charms
<jose> such as config-get
<thumper> so from the client machine
<thumper> using the juju command line, or a library, or what?
<jose> juju CL
<jose> second, let me grab a link
<jose> urgh, can't find it
<jose> https://juju.ubuntu.com/docs/authors-charm-anatomy.html here, in hook environment
<jose> something like $JUJU_UNIT_NAME
<thumper> so inside the hook context?
<jose> yeah
<thumper> no, it doesn't look like it
<thumper> but you could do an rsplit on the slash
<jose> rsplit? /me googles
<thumper> right split
<thumper> so take foo-bar/2
<thumper> and split on the last /
<thumper> which should be the only slash
<thumper> to get foo-bar and 2
<thumper> the service name is the first bit
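thumper's split-on-the-last-slash needs no external tool inside a hook: plain shell parameter expansion on $JUJU_UNIT_NAME does it. juju sets the variable in the hook environment; it is assigned by hand here so the sketch runs standalone:

```shell
#!/bin/bash
# juju sets JUJU_UNIT_NAME inside hooks; hard-coded here for the sketch.
JUJU_UNIT_NAME="wordpress/1"

service_name=${JUJU_UNIT_NAME%/*}    # strip from the last "/" on: "wordpress"
unit_number=${JUJU_UNIT_NAME##*/}    # keep what follows the last "/": "1"

echo "$service_name $unit_number"    # prints "wordpress 1"
```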
<thumper> unfortunately I have to go
<jose> no worries
<jose> thanks!
<jose> marcoceppi, jcastro: http://manage.jujucharms.com/review-queue is 404
<jose> ah
* jose changed the topic of #juju to: Share your infrastructure, win a prize: https://juju.ubuntu.com/charm-championship/ || OSX users: We're in homebrew! || Review Calendar:  http://goo.gl/uK9HD || Review Queue: http://manage.jujucharms.com/tools/review-queue || http://jujucharms.com || Reviewer: ~charmers
<mattyw> is it ok to reboot units after the install charm has run? if say you edit users and groups?
<marcoceppi> mattyw: you can reboot units, they are designed to survive restarts
<mattyw> marcoceppi, just doing reboot in the hook?
<marcoceppi> mattyw: I really /really/ wouldn't recommend a reboot, I don't know how it will handle during a hook execution. Why do you need to run a reboot in the first place?
<davecheney> marcoceppi: mattyw it would be great if you didn't reboot a unit
<marcoceppi> davecheney: that's my gut feeling wrt this
<mattyw> davecheney, marcoceppi I have a feeling it's something I don't want to be doing
<mattyw> I don't think I really need to be doing it, but wondered if other were doing it in charms already
<davecheney> mattyw: no, it would be without precedent
<davecheney> i'm fairly sure it would raise merry hell
<mattyw> davecheney, my favourite kind of hell
<davecheney> mattyw: go on, blow up the world
<mattyw> marcoceppi, davecheney, I was playing around with trying to do a docker charm last night, part of the install instructions suggests creating a docker group and adding the ubuntu user to the group so you don't have to do sudo docker all the time, but it appears that it isn't working
<mattyw> I was going to try a last ditched reboot to see what happens (out of desperation rather than thought)
<marcoceppi> mattyw: hooks are executed as root, but new groups won't take effect until "next login" of that user
<marcoceppi> there's a newgrp command that might help, but you shouldn't need to do anything with the ubuntu user if you do everything as root
<mattyw> marcoceppi, it's basically the "why sudo?" section here I was following http://docs.docker.io/en/latest/use/basics/
<marcoceppi> mattyw: since hooks are run as root, you can just pretend like that's not an issue. Alternatively, create a new user (like docker) to run all the docker commands under
<davecheney> mattyw: the bbb is rather nice
<davecheney> blows the doors off the RPI
<davecheney> shame to turn it into a freebsd builder
<mattyw> davecheney, have you tried running ubuntu on it?
<davecheney> mattyw: not tried nothing, it was waiting at the door when I got home
<davecheney> will have a play this weekend
<mattyw> davecheney, so it's not building yet but will be?
<davecheney> yeah
<davecheney> getting a stable freebsd 10 image is the next challenge
<mattyw> davecheney, I'm still trying to find an lcd screen to connect to my pi that doesn't suck or cost too much
<davecheney> mattyw: there are a few SPI based Oled ones
<davecheney> freetronics has one
<mattyw> davecheney, this is actually the sort of thing I was after https://store.tinygreenpc.com/monitors-displays/7-hdmi-touch-monitor-without-enclosure.html
<mattyw> davecheney, see if I can turn my pi into some budget tablet thing
<mattyw> so here's a question, has anyone tried upgrading the kernel in a charm?
<davecheney> mattyw: yes, and no
<davecheney> you'll find that most hosts expect to be rebooted due to the implicit apt-get update && apt-get upgrade that cloud-init does on your behalf
<davecheney> mattyw: it is important to recognise that juju is a _service_ management tool, not a _host_ management tool
<davecheney> as such, juju expects a few things to be already done for it
<davecheney> ie, we expect the host to work
<mattyw> davecheney, yeah sure, understood
<davecheney> and we expect the host to have working networking
<davecheney> etc, etc
<mattyw> but if the service requires a special version of the kernel that means it's not really suitable to be charmed?
<davecheney> mattyw: we'd handle that with a constraint
<mattyw> davecheney, oh right
<davecheney> in the same way that you would say 'this service should be deployed on machines with a big /dev/sdb'
<davecheney> well, if we had working constraints, that would be what we use them for
<mattyw> davecheney, ok understood
<bac> hi sinzui, would it do any good for you to review my rewriterules branch or should i just wait for a webops to do it?
<sinzui> I can in  a few minutes
<bac> thanks sinzui, it is at https://code.launchpad.net/~bac/canonical-is-charm-configs/remove-head-redirects/+merge/185927
<sinzui> bac, we are removing the HEAD redirects because charmworld supports juju-gui urls?
<bac> sinzui: yes
<bac> sinzui: revisionless urls return head
<lifeless> o/
<sinzui> didn't juju-gui also need to understand revisionless urls to route to the details
<sinzui> Hi lifeless
<sinzui> ^ bac
<bac> sinzui: unsure, let's ask gary_poster ^^
<rick_h_> sinzui: no, the gui is kind of revision agnostic https://jujucharms.com/precise/mysql
<sinzui> rick_h_, bac: fab!
<gary_poster> we want both revisionless and revision...-full fwiw
<gary_poster> revisionless works now.  with revision works 1/4 way
<rick_h_> 1/4 way?
<gary_poster> it doesn't fall over but ignores revision
<gary_poster> will change upcoming
<rick_h_> oh right
<rick_h_> yea, right the gui just takes whatever it's given as the charm id and passes it to charmworld. Making sense/providing the right data comes from there.
<mhall119> jcastro: are you familiar with the gunicorn charm?
<mhall119> jcastro: and do I need to have this charm I'm developing in a ./precise/ directory?
<marcoceppi> mhall119: yes it needs to be in a series directory if you want to deploy it
<marcoceppi> mhall119: the gunicorn charm is being deprecated I think. let me check
<adeuring> marcoceppi: could you please have a look here: https://code.launchpad.net/~adeuring/charm-tools/check-config-yaml/+merge/186066 ?
<marcoceppi> adeuring: see my reply
<mhall119> marcoceppi: I have a limited amount of space on /, but a lot of it on /home/, how can I get juju's local provider to put it's instance on a different partition?
<adeuring> marcoceppi: thanks -- and sorry for not seeing your reply ;)
<marcoceppi> adeuring: no worries! thanks for the submission
<marcoceppi> adeuring: give me two seconds to push the latest to the python-port branch
<adeuring> sure
<marcoceppi> adeuring: actually, latest is up there. This is the 1.0.0 release. While I probably won't get your merge in for that, I'll have it for 1.1.0
<marcoceppi> So it'll land in the daily ppa, and in stable in about a week
<marcoceppi> mhall119: you'll need to configure lxc to do that. I believe it's outside the reach of juju
<marcoceppi> mhall119: either way, I've not tried
<marcoceppi> mhall119: first thing that comes to mind is to just symlink the containers into your home directory, from /var/lib/lxc*
<marcoceppi> * I think that's where the containers are put
<mhall119> ok, I'll give that a try
<mhall119> marcoceppi: should I be able to run juju debug-log immediately after bootstrapping
<mhall119> ?
<mhall119> or is that only usable after deploying
<marcoceppi> mhall119: no, debug-log does not work on local provider
<marcoceppi> mhall119: you can find all the logs in ~/.juju/local/log
<marcoceppi> mhall119: that should be in the docs but is not
<marcoceppi> mhall119: I'll make sure it gets added
<mhall119> yeah, it just returns a 255 error code
<marcoceppi> mhall119: you also can't juju ssh 2; where 2 is the machine number, you must always use the unit-name
<marcoceppi> mhall119: because the debug-log is on the bootstrap node, and the bootstrap node is your machine. so the bootstrap node has the IP address of the lxc bridge
<marcoceppi> mhall119: you just tried to ssh into the gateway of the LXC bridge, hence the 255.
<mhall119> 2013-09-17 15:30:50 ERROR juju.container.lxc lxc.go:161 lxc container creation failed: container "mhall-local-machine-1" is already created
<mhall119> 2013-09-17 15:30:50 ERROR juju.provisioner provisioner_task.go:341 cannot start instance for machine "1": container "mhall-local-machine-1" is already created
<mhall119> even after juju destroy-environment -e local, I still have /var/lib/lxc/mhall-local-machine-1
<mhall119> I don't think jcastro is *actually* sick, I think he just heard that I was going to be trying Juju and decided to hide from me :)
<marcoceppi> mhall119: that's odd
<marcoceppi> mhall119: about to board a flight, best of luck, but if you dont' get it working I'll be back online in a few hours
<adeuring> marcoceppi: new attempt: https://code.launchpad.net/~adeuring/charm-tools/python-port-check-config/+merge/186080
<bac> sinzui: did that MP look ok?  i see moon did your previous reviews.  is he the webop i should try to get to review it?
<sinzui> bac: yep
<sinzui> bac: sorry, I was ambiguous. moon127 or any webops can do the review.
<bac> sinzui: oh, ok
<sinzui> bac I don't understand the user rewrite rule.
<bac> sinzui: getting on a call.  can chat in 15 minutes.
<sinzui> bac: http://manage.jujucharms.com/~abentley/precise/charmworld is an example of a user charm url
<sinzui> user urls don't have "charms" in them
<bac> sinzui: good catch.  we had thought there had been URLs of that form before.  i'll remove that rule.
<mhall119> so on the local provider, can I just go into /var/lib/lxc/{machine}/rootfs/ and see what it has?
<mhall119> marcoceppi: ^
<mhall119> is there documentation on setting up a clean environments.yaml?
<jcastro> hey mhall119
<jcastro> I ran into this this weekend
<jcastro> I just blew away /var/lib/lxc/jorge-whatever
<jcastro> and rebootstrapped
<mhall119> jcastro: I tried that,then it complained about stuff in /etc/lxc/auto/
<mhall119> jcastro: I think I need to blow away my environments.yaml, it currently has http://paste.ubuntu.com/6120281/
<mhall119> what's the cleanest way to get a new one with properly configured local
<mhall119> ?
<mhall119> hope you're starting to feel better, btw :)
<jcastro> move it out of the way
<jcastro> and do `juju init` to make a new one
<jcastro> it being the old environments.yaml file
<mhall119> thanks jcastro, seems to be getting further now
<mhall119> jcastro: can local instances get to external websites (like LP)?
<jcastro> yeah
<jcastro> otherwise charms wouldn't work
<mhall119> man, I can't type "unit" without my fingers automaticaly putting a "y" at the end :(
<jcastro> yeah, it took me 6 months to fix that
<mhall119> jcastro: ok, so my install hook failed, I made some changes and added a bunch of juju-log commands, how to I get rid of that instance and deploy again using the new charm?
<tasdomas> mhall119, if your install and upgrade-charm hooks point to the same code, you can do juju upgrade-charm service --force
<tasdomas> I have a question of my own - is it normal that juju-core leaves the instance running after a destroy-service ?
<kurt_> tasdomas: no
<kurt_> tasdomas: watch the status / debug-log to ensure the service is in fact truly getting destroyed
<kurt_> my guess is it is not
<mhall119> tasdomas: it seems mine are different, what are my options in that case?
<tasdomas> mhall119, 'juju resolved service/unit' - repeat until the service is in a non-error state
<tasdomas> then do a destroy-service and redeploy
<mhall119> thanks tasdomas
<tasdomas> mhall119, hope that helps
<mhall119> tasdomas: trying it now
<mhall119> we shall see
<mhall119> well the service is gone, but the machine is still there, is that expected?
<mhall119> do I need to destroy the machine before I redeploy?
<mhall119> where would I find the output from juju-log?
<mhall119> I'm not seeing *any* output in ~/.juju/local/log/ that looks like it's coming from juju-log commands
<marcoceppi> mhall119: is this on a local deployment still?
<marcoceppi> mhall119: the machine staying behind is expected
<mhall119> marcoceppi: local still yes
<mhall119> marcoceppi: where does juju-log write to?
<mhall119> marcoceppi: can you look at http://paste.ubuntu.com/6120826/
<mhall119> "instance-state: missing" looks suspiciously bad
<marcoceppi> mhall119: on the unit itself in /var/log/juju/unit-(service)-#.log however, they will be in .juju/(env-name)/log/unit-(service)-#.log
<marcoceppi> for local environments
<marcoceppi> mhall119: that just means it can't query state. happens on some openstack local and Maas deployments
<mhall119> marcoceppi: mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls /var/lib/lxc/mhall-local-machine-3/rootfs/var/log/juju/
<mhall119> mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls /var/lib/lxc/mhall-local-machine-2/rootfs/var/log/juju/
<mhall119> mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls /var/lib/lxc/mhall-local-machine-1/rootfs/var/log/juju/
<mhall119> none of the 3 machines I've created so far have anything in their rootfs/var/log/juju/ directory
<mhall119> mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls ~/.juju/local/log/
<mhall119> machine-0.log  machine-1.log  machine-2.log  machine-3.log  unit-api-website-0.log
<mhall119> all I have in my ~/.juju/local/log is machine logs, no unit ones
<marcoceppi> unit-api-website-0.log looks like it
<marcoceppi> mhall119: FYI, that directory is loop mounts from the VM. if you ssh to it you'll see the logs, but they are in that .juju folder so nbd
<mhall119> marcoceppi: ah, didn't see that in there before
<marcoceppi> mhall119: if you want to mimic debug-log just run `tail -f .juju/local/log/unit-*.log`
<mhall119> thanks marcoceppi, I see some useful messages in that log, I'll be back if I get stuck again
<marcoceppi> mhall119: cheers
<jaywink> hey guys and gals ... read about GreenQloud (https://greenqloud.com) in Iceland offering cloud instances with a fully EC2 compatible API ... shouldn't that mean that Juju could be hacked to work with GC quite easily or would it be hard to tweak it?
<sarnold> jaywink: you might get away with just changing the URL..
<jaywink> sarnold, yeah I thought of trying but the environments.yaml has no url. Will check the code where it could be defined. Just thought I'd ask if someone has any quick "NO cannot be done"'s :)
<sarnold> jaywink: ec2-uri: https://juju-docs.readthedocs.org/en/latest/provider-configuration-ec2.html
<mhall119> marcoceppi: does juju keep a cached copy of my charm?  I changed my install hook but it doesn't seem to be using the changed version
<sarnold> jaywink: oh, looks like you'll also need s3-uri
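Per the provider-configuration-ec2 page sarnold links, a pyjuju (0.x) environments.yaml pointed at an alternative EC2-compatible cloud would look roughly like the fragment below. The endpoint URLs and bucket/secret values are placeholders, not verified GreenQloud values, and these keys only apply to pyjuju — as jaywink later finds, juju-core does not support ec2-uri/s3-uri:

```
environments:
  greenqloud:
    type: ec2
    ec2-uri: https://ec2.example-cloud.com   # placeholder endpoint
    s3-uri: https://s3.example-cloud.com     # placeholder endpoint
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-some-unique-bucket
    admin-secret: some-secret
```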
<sarnold> mhall119: iirc, juju deploy -u  will help with rapid iteration of charm development
<mhall119> thanks sarnold, I'll give it a try
<jaywink> sarnold, ok thanks, will try that
<mattyw> are they any plans to allow deploying via github: something like juju deploy github:mattyw/charms/mycharm?
<jaywink> sarnold, after setting ec2-uri and s3-uri to the endpoints in greenqloud, bootstrap gives 'error: The AWS Access Key Id you provided does not exist in our records.'. I noticed GC has a separate EC2 key and separate S3 key for the API - so maybe that... will try to grep the code once I find it :)
<marcoceppi> mhall119: yes, it does
<kurt_> marcoceppi: still trying to figure out how to work around that nova-cc problem
<kurt_> I thought kentb had a more elegant solution than adding it in to /etc/hosts
<kurt_> but I can't get the dhcpd.conf to accept it.  Any ideas?
<marcoceppi> kurt_: check the bug report?
<marcoceppi> kurt_: https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160
<kurt_> yeah, that doesn't work
<_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error <nova-cloud-controller (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1225160>
<marcoceppi> according to adam_g this is actually a problem with MAAS and not the charm
<kurt_> http://pastebin.ubuntu.com/6121109/
<marcoceppi> kurt_: http://irclogs.ubuntu.com/2013/09/17/%23juju.html#t00:58
<mhall119> marcoceppi: ok, so how do I deploy gunicorn to work with my django?
<mhall119> mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ juju deploy --to 9 gunicorn
<mhall119> error: cannot use --num-units or --to with subordinate service
<marcoceppi> kurt_: try `option domain-search "example.com";`
<marcoceppi> mhall119: right, because by nature subordinates live on other services
<mhall119> marcoceppi: ok, so how to I get it playing together with my "api-website" service?
<marcoceppi> so you `juju deploy gunicorn`, then `juju add-relation gunicorn api-website`
<kurt_> marcoceppi: maas by default sets up .master as its domain
<marcoceppi> mhall119: apparently not, given the errors you're seeing
<marcoceppi> s/example.com/.master/
<kentb> kurt_: what marcoceppi said, and afterward restart the maas-dhcp-server (or whatever it's called) service
<marcoceppi> in my example
<kurt_> I tried that too
<kurt_> still borked
<adam_g> kurt_, are you really planning on using ssh block live migration? if you are not, just turn it off on nova-compute charm config, and you should not have the DNS error during the hook execution.
<marcoceppi> kurt_: I'm just echoing what's in the man page
<adam_g> kurt_, if you are planning on using it, you need reliable DNS
<kurt_> adam_g: ok. but there's a gap here that will prevent maas working with juju the way things are. so maybe it needs to be carefully captured in documentation then?
<kurt_> let me qualify that - in HA situations as described in jamespage's document where live migration via ssh is concerned
<marcoceppi> kurt_: I plan on re-targeting the bug for maas, it's not too much to ask to have the maas setup include this step
<kurt_> marcoceppi: thanks and adam_g thanks for chiming in
<marcoceppi> kurt_: adam_g: re-targeted to maas, https://bugs.launchpad.net/maas/+bug/1225160, adam_g thanks for the help earlier!
<_mup_> Bug #1225160: MAAS doesn't add tld to DHCP domain-search <MAAS:Confirmed> <nova-cloud-controller (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1225160>
<mhall119> marcoceppi: sarnold: tasdomas_afk: thanks for all your help, I'm very close now!
<marcoceppi> mhall119: yahoo!
<mhall119> marcoceppi: the add-relation between gunicorn and api-website failed because of a missing python-jinja2 package, which I assume gunicorn uses
<mhall119> why wouldn't gunicorn be installing that?  Do I have to install any gunicorn dependencies in my charm?
<marcoceppi> mhall119: it's possibly a dependency? I've not used that charm. If it is a gunicorn dep, the gunicorn charm needs to install it
<marcoceppi> jinja2 is a templating system for django, so it might not be a gunicorn dep
<mhall119> well api-website doesn't use it...
<marcoceppi> mhall119: I'm simply googling around :\ what does the log for the unit say?
<mhall119> unless it's somehow still buried in the remnants of the charm I'm basing this off of
<mhall119> mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/udn$ grep -i error ~/.juju/local/log/unit-gunicorn-0.log
 * marcoceppi shrugs, it could
<mhall119> 2013-09-17 20:42:45 INFO juju.worker.uniter context.go:234 HOOK ImportError: No module named jinja2
<mhall119> 2013-09-17 20:42:45 ERROR juju.worker.uniter uniter.go:356 hook failed: exit status 1
<marcoceppi> mhall119: everything I'm reading suggests jinja2 is married to django and not gunicorn
<marcoceppi> mhall119: typically, a charm should install its own dependencies. For the sake of moving you along, I'd say have your api-website install the dependency
<mhall119> marcoceppi: hmmm, I see it in both the postgres and gunicorn logs: http://paste.ubuntu.com/6121191/
<marcoceppi> wat.
<marcoceppi> why is postgres doing anything?
<marcoceppi> http://i.imgur.com/kUie3s2.gif
<jaywink> hmmm stupid question if someone can help. I'm looking to tweak locally how juju uses creds towards ec2. It imports txaws module to do the magic. But I cannot find this module on my system no matter where I look?
<marcoceppi> jaywink: what version of juju are you using?
<jaywink> marcoceppi, 1.13.3-raring-amd64
<mhall119> marcoceppi: well I deployed postgres and added it as a relation to api-website
<mhall119> not sure why it's using jinja though
<marcoceppi> mhall119: I have no idea why postgresql is installing anything
<marcoceppi> jaywink: well, 1.13.3 is written in Go-lang if you didn't know already, so juju is compiled.
<mhall119> marcoceppi: I'm using the IS postgres charm, if that matters
<marcoceppi> mhall119: probably, I've not looked at that one
<jaywink> marcoceppi, ok so I took the wrong sources from launchpad haha.... sigh.. that explains it
<jaywink> thanks
<marcoceppi> jaywink: yeah, juju-core is juju version > 1.0
<marcoceppi> jaywink: just "juju" is version 0.7 and below
 * marcoceppi makes a note to update the juju project page
<jaywink> :)
<mhall119> marcoceppi: yeah, the postgres charm uses jinja in its hooks
 * marcoceppi tilts head
<mhall119> marcoceppi: which service do I juju expose, gunicorn or api-website?
<mhall119> marcoceppi: http://paste.ubuntu.com/6121351/ shouldn't that mean I should be able to go to http://10.0.3.94:8081/ and see something?
<mhall119> hmm, more missing deps it looks like
<hazmat> marcoceppi, so charm-tools still depends/recommends pyjuju
<hazmat> marcoceppi, http://pastebin.ubuntu.com/6121393/
<hazmat> mhall119, jinja2 gets pulled by charm-helpers
<hazmat> which postgres charm might be using
<jaywink> juju-core doesn't seem to support ec2-uri and s3-uri (grepped the code too) - any idea if this support is coming or any branches with it included?
<AskUbuntu> Deploying with juju-core on non-ec2 ec2 clouds | http://askubuntu.com/q/346860
<hazmat> marcoceppi, is there a separate place for python charm-tools?
<hazmat> oh.. python-port branch
#juju 2013-09-18
<AskUbuntu> Scaled down Openstack Grizzly installation with Juju and Maas | http://askubuntu.com/q/346898
<AskUbuntu> Juju GUI public IP | http://askubuntu.com/q/346991
<jamespage> dosaboy, around? let's discuss ceph/cinder here
<dosaboy> jamespage: howdi
<dosaboy> congratz btw
<dosaboy> so it is clear that a wider rework is in order
<jamespage> dosaboy, ta
<dosaboy> for now, as you suggest, i'll add  the rep count to the cinder config
<jamespage> dosaboy, agreed; I want to rework the cinder integration anyway so let's do something tactical for the time being
<jamespage> dosaboy, I made another comment on the MP for the replica count stuff
<jamespage> dosaboy, for the rework I want to re-implement the ceph/cinder integration as a separate 'backend' charm
<jamespage> as a single cinder instance can support multiple backends >= grizzly
<dosaboy> yup
<jamespage> dosaboy, so the ceph integration becomes a subordinate charm for cinder
<jamespage> dosaboy, alongside other backend sub charms
<dosaboy> makes sense
<jamespage> existing compat would be retained still for existing deployments
<dosaboy> jamespage: i assume the problem we see here also applies to the glance charm
<jamespage> dosaboy, yup
 * dosaboy checks
<dosaboy> i'll apply same interim patch there too
<jamespage> dosaboy, +1
<jamespage> if you want to do those I can ack them through
<jamespage> (well this morning - not around PM today)
<jamespage> but in tomorrow
<dosaboy> doing it now
<freeflying> does juju-core specifically bridge to eth0 when bootstrapping with maas?
<dosaboy> jamespage: cinder and glance patches should be ready for review now
<mattyw> I can't see what command I need to run to actually run charm tests https://juju.ubuntu.com/docs/authors-testing.html
<jamespage> dosaboy, odd - I get an access denied when the cinder charm tries to create the pool
<dosaboy> damn i suspected as much, maybe the 'ceph osd' command requires different perms than the rados command
<dosaboy> jamespage: can you show me the permissions for the cinder client?
<jamespage> dosaboy, maybe
<jamespage> osd rwx
<jamespage> mon r
<jamespage> dosaboy, ^^
<dosaboy> jamespage: hmm that should be enough
<dosaboy> or maybe * is needed for admin
<dosaboy> i can test if you gimme a few mins
<jamespage> dosaboy, just trying a fix now
<jamespage> dosaboy, needs "mon rw"
<dosaboy> aha!
<dosaboy> jamespage: the user is created by the ceph charm right?
<jamespage> dosaboy, yes
<jamespage> ceph.py
<dosaboy> are you going to patch or do you want me to?
<jamespage> dosaboy, trying to figure out how to upgrade permissions
<dosaboy> jamespage: it may be yucky
<dosaboy> 'capabilities'
<marcoceppi> hazmat: in precise it does, shouldn't in the ppa
<jamespage> dosaboy, ceph auth caps
<jamespage> dosaboy, ceph auth caps client.<SERVICE_NAME> mon 'allow rw' osd 'allow rwx'
<jamespage> specifically
<Dotted^> i have a local juju setup, but trying to ssh into any of my machines i get Permission denied (publickey,password); however there is no difference between id_rsa.pub on my host user and the authorized_keys on the lxc machines
<rick_h_> Dotted^: so I had to add username ubuntu@ to get into things. I should setup an ssh config block to auto use that user
<rick_h_> Dotted^: try ssh'ing into one of the ips manually: ssh ubuntu@ip
<Dotted^> that did the trick, thanks
<Dotted^> a bit unfortunate that "juju ssh" won't work
<rick_h_> Dotted^: yea, so to make that work you'd have to edit .ssh/config and add a block for your lxc ip range http://paste.ubuntu.com/6123709/
<Dotted^> well ssh <ip> now works without adding ubuntu@, juju ssh is still broken it seems
<rick_h_> Dotted^: hmm, :( not sure then. Maybe marcoceppi knows more when he's around.
<Dotted^> alright, well thanks - at least i can get into the machines now :)
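A block along the lines rick_h_ describes (the pasted example is external) might look something like this; 10.0.3.* is the default LXC bridge subnet for the local provider, and the host-key lines are a common optional addition for containers that get recreated often:

```
# Hypothetical ~/.ssh/config block for the local provider's default
# LXC address range, so a plain `ssh <ip>` logs in as the ubuntu user
Host 10.0.3.*
    User ubuntu
    # Optional: containers come and go, so skip host-key bookkeeping
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```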
<jcastro> hey evilnickveitch
<mattyw> does someone have a moment to answer a couple of questions?
<evilnickveitch> jcastro, hey
<jcastro> mattyw: no, get out.
<jcastro> of course we have a moment!
<jcastro> evilnickveitch: hey, I removed the warning on the debugs hook page
<mattyw> jcastro, I was just grabbing my coat ;)
<jcastro> the one that is like "this doesn't actually work yet."
<jcastro> evilnickveitch: the other is, I'm listed as maintainer for lp:maas-website, can we move that over to a team or something?
<evilnickveitch> jcastro, oh, cool. that is being rewritten
<mattyw> jcastro, this page - on charm testing: https://juju.ubuntu.com/docs/authors-testing.html it doesn't actually say how I can run the tests
<mattyw> jcastro, also, in a charm's config.yaml is it possible for options to depend on other options in the file? For example, set a BaseDir and then have other values use that?
<jcastro> mattyw: 1 is a ceppi question, this page looks new to me.
<jcastro> 2. is I don't think so? But I am not sure.
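jcastro's hunch is right: config.yaml has no syntax for one option to reference another. The usual workaround is to declare only the base value as an option and derive the rest inside the hook. A hypothetical sketch (the hard-coded value stands in for what `config-get` would return inside a real hook environment):

```shell
#!/bin/bash
# In a real hook this line would be: base_dir=$(config-get base_dir)
# config-get only exists inside a juju hook environment, so a made-up
# literal stands in for it here.
base_dir="/srv/myapp"

# Derive the dependent paths in the hook itself, since config.yaml
# cannot interpolate one option's value into another.
log_dir="$base_dir/logs"
static_dir="$base_dir/static"
echo "$log_dir"
```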
<jcastro> mattyw: any other questions I can not know the answer to for ya? :)
<mattyw> jcastro, are there any plans to allow you to deploy from github like juju deploy github.com/mattyw/myAwesomeCharmCollection myWickedCharm
<jcastro> mattyw: so I've seen some that use multiple configs
<jcastro> but I don't know about enforcing usage of one over the other other than just not working
<jcastro> yes, there are plans to deploy from anywhere, just not github
<jcastro> ideally we can pull from any internal or external source
<mattyw> jcastro, just not or not just?
<jcastro> not just, sorry!
<jcastro> <-- case of the mondays
<jcastro> juju deploy someinternalresource blah
<mattyw> jcastro, one more question
<mattyw> framework charms
<mhall119> jcastro: why are there machine folders in /var/lib/lxc/ even after I destroy-machine them?
<jcastro> I don't know I need to ask thumper about that
<mattyw> well actually - maybe not framework charms, but it would be great to be able to send a charm a message to run an arbitrary hook
<jcastro> mhall119: I noticed that this weekend for the first time, I hope it's not a bug
<mhall119> jcastro: should I wait for my service units to come up before calling add-relation, or will it queue that until they are ready?
<jcastro> mhall119: once the bootstrap is up the entire thing is async, you can go at whatever pace you like
<jcastro> mattyw: from within a charm or just manually?
<jcastro> mhall119: you pulled in the "juju-local" package too I assume?
<mattyw> jcastro, externally
<mattyw> jcastro, well actually let me rephrase
<jcastro> mattyw: hmmm
<jcastro> mhall119: are you on 1.13 or 1.14 of juju? "juju version" will tell you
<mattyw> the rails charm: if I understand correctly it will install rails then set it up for a specific app?
<jcastro> yeah
<mhall119> 1.13.3-saucy-i386
<mattyw> jcastro, so how do you do that specific stuff - is that done with the config?
<jcastro> yeah
<mattyw> ok cool
<jcastro> the setup for the specific app you mean?
<mattyw> that's right
<jcastro> Right now it's just like, the repo URL and stuff though
<jcastro> I don't think it's doing anything too fancy yet
<marcoceppi> Dotted^: how are you trying to ssh? `juju ssh <machine-number>`?
<Dotted^> yes
<marcoceppi> Dotted^: that does not work on the local provider, please use `juju ssh <unit>` instead, so juju ssh wordpress/0 for example
<marcoceppi> Dotted^: this is a known issue
<Dotted^> ahh thanks, that works
<jcastro> hey marcoceppi, both mhall119 and I saw an issue where the lxc stuff isn't being cleaned up with destroy-environment
<marcoceppi> jcastro: that's a bug then
<mhall119> jcastro: destroy-machine, I haven't done destroy-environment yet
<marcoceppi> mhall119: destroy-machine doesn't remove the machine?
<mhall119> it removed it from juju status
 * marcoceppi bootstraps a local
<mhall119> but not from /var/lib/lxc/
<jcastro> I had the same thing happen
<jcastro> agent-state-info: 'rror: container "jorge-local-machine-1" is already created)'
<jcastro> ok I'm going to file a bug since I can reproduce it
<marcoceppi> jcastro: cool, I'll try to get more debug information
<jcastro> https://bugs.launchpad.net/juju-core/+bug/1227145
<_mup_> Bug #1227145: Juju isn't cleaning up destroyed LXC containers <juju-core:New> <https://launchpad.net/bugs/1227145>
<jcastro> mhall119: can you confirm the bug please?
<mhall119> jcastro: confirmed
<mhall119> "Internal Server Error" Progress!
<mhall119> sure would be nice if it actually wrote the internal error message to the error log file :(
<marcoceppi> mhall119: is that from juju or the application?
<mhall119> marcoceppi: from gunicorn
<yolanda> hi, i'm receiving this error after a juju bootstrap, when trying to deploy a service, or even with juju status: error: cannot log in to admin database: auth fails
<yolanda> environment was working fine until this afternoon
<mhall119> so...juju destroy-environment failed
<mhall119> now I have a postgres instance in "down" agent-state that I can't get rid of
<x-warrior> Should I worry about updating juju when I already have a environment deployed?
<x-warrior> an*
<kurt_> marcoceppi: ping
<marcoceppi> kurt_: pong
<kurt_> marcoceppi: are there any good guides available for setting up openstack once horizon is in place?  Or any guides that are WIP?
<marcoceppi> x-warrior: you can run juju upgrade-juju to upgrade the deployed version of juju. The client should be compatible with most deployed versions of juju since 1.12
<marcoceppi> kurt_: you should be able to follow any instructions/guides on the openstack site
<marcoceppi> mhall119: that's so weird
<kurt_> ok.  it would be nice if there were one published in conjunction with the settings for jamespage's openstack manifesto.
<kurt_> that links up his configurations with IP addressing schema, etc
<marcoceppi> kurt_: there are so many ways you can take the configuration of openstack after setting up the charms though, it's really about what you want to do at that point I imagine
<kurt_> agreed, but a demo path with a working configuration would be cool.  I am headed that way anyways.  The whole network configuration taken to a working finished model IMHO is one of the biggest challenges with openstack.
<mattyw> marcoceppi, what command do you call to run charm tests? https://juju.ubuntu.com/docs/authors-testing.html
<jcastro> charm status hangout in just a few minutes!
<arosales_> Charm Weekly hangout starting in a few
<arosales_> hangout out URL at: https://plus.google.com/hangouts/_/edb0a66997812f8ade7263387d24623dc7094011?authuser=0&hl=en
<arosales_> if you want to join us.
<jcastro> arosales_: I need the youtube URL
<mhall119> marcoceppi: After destroying some other services and machines, destroy-environment finally worked
<mhall119> I've manually deleted all the old lxc container data in /var/lib/lxc/
<mhall119> starting fresh now
<marcoceppi> mattyw:  You need to install the juju-test plugin, it's in the juju-plugins project
<jcastro> http://pad.ubuntu.com/7mf2jvKXNa
<jcastro> for the notes
<jcastro> http://ubuntuonair.com if you wanna follow along!
<mattyw> marcoceppi, this one? https://launchpad.net/juju-plugins
<marcoceppi> mattyw: yes, no ppa/installer for it yet. Just copy juju_test.py to juju-test somewhere in path for now
<mhall119> omg omg omg!  I got something working!
<mhall119> jcastro: how can I get a local shell of a LXC deployed instance?
<marcoceppi> mhall119: juju ssh api-website/0
<mhall119> so I finally got it all working, only to find out I need to put it all behind an Apache proxy :(
<mhall119> marcoceppi: is there an apache subordinate that can automagically serve static files *and* act as a proxy for my gunicorn subordinate?
<marcoceppi> mhall119: no, but someone had a similar question, I can't remember what I did to get it working with them
<marcoceppi> mhall119: there's an apache2 charm
<marcoceppi> mhall119: http://irclogs.ubuntu.com/2013/07/12/%23juju.html might or might not help
<mhall119> thanks marcoceppi
<Dotted^> is there a way to get the juju debug-log on lxc containers?
<sarnold> Dotted^: I believe I read that the easy way is to read the files through the host filesystem
<Dotted^> where are they located?
<sarnold> Dotted^: try .juju/local/log/unit-*.log and /var/lib/lxc/
<Dotted^> nice thanks
<Dotted^> are there any issues with wordpress? combining it with mysql seems to work, wordpress and memcache and nfs is fine, but it explodes when trying to add relations to all 3
<marcoceppi> Dotted^: the LXC containers all mount /var/log/juju/ on each machine to .juju/local/log - so just read them there as sarnold pointed out
<marcoceppi> Dotted^: there's a known issue with memcached and wordpress not working
<marcoceppi> Dotted^: NFS and MySQL work fine
<sarnold> marcoceppi: aha :) thanks!
<marcoceppi> https://bugs.launchpad.net/charms/+source/wordpress/+bug/1057212 https://bugs.launchpad.net/charms/+source/wordpress/+bug/1170034
<_mup_> Bug #1057212: Memcached relation fails if wp install isn't complete <wordpress (Juju Charms Collection):In Progress by marcoceppi> <https://launchpad.net/bugs/1057212>
<_mup_> Bug #1170034: integration with memcached broke <wordpress (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1170034>
<Dotted^> so its only because wp isnt setup yet?
<marcoceppi> Dotted^: no, memcached doesn't work regardless of the state of wordpress
<marcoceppi> Dotted^: it's a bug with the upstream plugin
<Dotted^> ah
<marcoceppi> Dotted^: I've been meaning to fix it, just haven't found the time just yet
<mhall119> jcastro: marcoceppi: can you guys sanity check https://code.launchpad.net/~mhall119/ubuntu-api-website/canonical-is-charm/ for me?
<Dotted^> hmm adding nfs to wordpress breaks wordpress, the log complains about "mount.nfs: access denied by server while mounting 10.0.3.201:/srv/data"
<jcastro> mhall119: do README.md or README.rst
<Dotted^> 10.0.3.201:/srv/data/wordpressfolder even
<jcastro> mhall119: in whichever format you prefer.
<mhall119> jcastro: what's wrong with just README?
<jcastro> it needs an extension
<jcastro> hmm, did the boilerplate not give it an extension?
<mhall119> jcastro: this is copied from another charm
<jcastro> ok
<mhall119> an internal one that IS uses
<mhall119> this will also only be used by Canonical IS
<jcastro> oh ok!
<jcastro> then it doesn't matter
<kentb> what's a good way to explain 'admin-secret' in the environments.yaml file?  I know you can pretty much use anything you want, but, do any of the supported environments (maas, ec2, etc) actually make use of it for anything (other than logging into juju-gui)?
<marcoceppi> kentb: it's only really used for API access, it's internal to juju
<kentb> marcoceppi: ok. thanks!
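For context, admin-secret is just a free-form string in the environment stanza; a minimal made-up local-provider example might look like this (the value shown is arbitrary):

```yaml
# Hypothetical environments.yaml fragment; the admin-secret value is
# arbitrary and is used by juju internally for API authentication
# (and as the juju-gui login password), not by the cloud provider.
environments:
  local:
    type: local
    admin-secret: 772b99f2c2f24e5f972efa1e20f63f0b
    default-series: precise
```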
<kurt_> kentb: how far have you gotten?
<kurt_> I managed to get an instance running, but am now stuck on getting an IP allocated to the instance with "an error occurred" - trying to figure out the logging for that - where it is
<blairbo> Hey All. I'm trying to set up a local juju server.  The documentation is a little light on the network setup prior to installing juju, mongodb and bootstrapping. Anyone have experience with this or know of a good walkthrough? I've been googling and many step-by-steps seem old and outmoded. Specifically I'm interested in the host network setup. I have three NICs, and am wondering what the best
<blairbo> practices are for setting those up, and any VLANs I should be setting up on the switch.
<kentb> kurt_: Had a bunch of other important stuff jump in front of openstack today, but, when I had it all up and running the first time last week, I could allocate IP addresses to my instances and ping both the internal namespace address as well as the public-facing one.
<kurt_> kentb: wow, I really need to see your configuration
<marcoceppi> blairbo: for local server, there is no additional networking setup. If you want to access the juju machine outside of the host running the local provider you'll need to configure the LXC bridge, but that's outside of the juju documentation
<Dotted^> im doing some benchmarks and im getting about 600 requests per second for wordpress on nginx tuned for single, that seems a bit low to me or is it supposed to be so low ?
<Dotted^> nvm
<blairbo> marcoceppi
<blairbo> marcoceppi: ok.. i must be having other issues then
<marcoceppi> blairbo: could you describe your problem?
<ZonkedZebra> Hello. I have bootstrapped my AWS env and see the bootstrap instance in the AWS dashboard, but when I run juju deploy juju-gui the new machine (#0) never spins up, just says "pending" forever. Where should I be looking for errors?
<ZonkedZebra> Log files i should be looking at?
<ZonkedZebra> Hello, anyone awake? It's not limited to juju-gui; any charm I attempt to deploy creates a new machine and never gets past the pending state
<sarnold> ZonkedZebra: what size instances are you using? I understand micros can take forever to come up..
<ZonkedZebra> sarnold: I haven't specified, the bootstrap node came up on a m1.small instance, none of the other machines even show up as pending or initializing or anything at all in the AWS dashboard
<sarnold> ZonkedZebra: they don't even show up in the dashboard? hrm..
<ZonkedZebra> and /var/log/juju/all-machines.log is helpfully entirely empty
<marcoceppi> ZonkedZebra: could you paste your juju status output?
<marcoceppi> to paste.ubuntu.com
<ZonkedZebra> http://paste.ubuntu.com/6125937/
<marcoceppi> ZonkedZebra: you have a few other problems! Somehow you've got a bootstrap node (which is machine 0) that's pending but you can actually get a juju status which is illogical
<ZonkedZebra> marcoceppi: after bootstrapping juju status reports 0 machines and 0 services
<ZonkedZebra> i had assumed that the bootstrap node was excluded from the list
<marcoceppi> ZonkedZebra: yeah, that makes no sense. There is always 1 machine, the bootstrap node, machine 0
<marcoceppi> ZonkedZebra: Could you ssh on to the bootstrap node and see if you have any files in /etc/init/juju-* ?
<ZonkedZebra> yep
<ZonkedZebra> juju-db.conf appears to be the only juju related file in /etc/init
<marcoceppi> ZonkedZebra: everything about this situation makes me believe something is very wrong. I'd recommend `juju destroy-environment` and then try bootstrapping again. Once you bootstrap run juju status and make sure you have a machine 0 that is in a started state
<ZonkedZebra> marcoceppi: I have, this is my third go around
<marcoceppi> ZonkedZebra: what does `juju version` say?
<ZonkedZebra> 1.11.2-unknown-amd64
<marcoceppi> ZonkedZebra: Ah! i know the problem
<marcoceppi> ZonkedZebra: are you on Mac OSX?
<ZonkedZebra> I am
<marcoceppi> ZonkedZebra: the current release of juju is 1.14.0, so when you bootstrap, instead of it detecting the client version you're using it's setting up 1.14. I could have sworn the brew recipe was updated to deploy the latest juju version, did you recently do a brew install?
<marcoceppi> ZonkedZebra: or is this `brew install` from a while ago?
<marcoceppi> arosales: ^
<ZonkedZebra> I tried the homebrew method, got some crazy go compile error, so used the package from github linked to from here: https://github.com/juju/juju-core/releases
<marcoceppi> ZonkedZebra: yeah, release is pretty outdated
<marcoceppi> ZonkedZebra: we opted to use brew since we can keep it up-to-date rather than trying to upload every release
<blairbo> marcoceppi: I went through the setup guide and ran the test procedure.  agent-state for machine 0 goes down and charm agent-state remains as pending for wordpress and mysql
<dalek49> Tangentially related to juju, can openstack run on a vps, or do I have to run it on bare metal?
<marcoceppi> blairbo: if agent-state for machine 0 is down, then juju isn't working. On your local machine try to restart the juju-* services in /etc/init. If it's still down after a few mins pastebin (http://paste.ubuntu.com/) the files from /var/log/upstart/juju-*
<marcoceppi> dalek49: there's "devstack" which can be set up on a single machine. Depending on how the VPS is virtualized, yes you can, but beware of performance issues
<ZonkedZebra> marcoceppi: brew install juju installs go with no issues (1.1.2) but during the go install command runtime/cgo throws this error: "clang: error: no such file or directory: 'libgcc.a'"
<arosales> marcoceppi, reading backscroll
<marcoceppi> ZonkedZebra: yikes, I don't have a mac osx machine to test what you need
<ZonkedZebra> its using "juju-core_1.12.0-1.tar.gz", should it be using 1.14?
<arosales> ZonkedZebra, ya we need to remove that from github. jcastro put it there before brew.
<arosales> ZonkedZebra, here is the info on homebrew
<arosales> [1] https://github.com/mxcl/homebrew/blob/master/Library/Formula/juju.rb
<arosales> [2] https://github.com/mxcl/homebrew/pull/21858
<marcoceppi> ZonkedZebra: try running brew install for juju with "--use-llvm" option
 * marcoceppi grabs at sticks
<arosales> ZonkedZebra, https://lists.ubuntu.com/archives/juju/2013-August/002847.html is the thread
<arosales> ZonkedZebra, we just released 1.14 today so we probably need to get 1.14 added to that brew though
<ZonkedZebra> marcoceppi: ha-zahh! --use-llvm did the trick
<marcoceppi> ZonkedZebra: WOO! So happy that worked
<marcoceppi> ZonkedZebra: that should give you a juju version of 1.12.0, which should work with 1.14.0 for bootstraps
<marcoceppi> arosales: I'll update the docs with that work around for the brew install command in the mac osx section
<arosales> marcoceppi, thank you
<marcoceppi> afterifinishtestingglusters
<arosales> marcoceppi, ack
<arosales> I'll also reach out to rodrigo and see if he can update the brew recipe.
<marcoceppi> arosales: we should be able to open a pull req
<marcoceppi> actually, I might just do that now
<marcoceppi> arosales: we should have the release team do it as part of the stable releases
<arosales> marcoceppi, +1
<marcoceppi> arosales: just update the URLs in this file on github and open pull req https://github.com/mxcl/homebrew/blob/master/Library/Formula/juju.rb
<arosales> marcoceppi, email sent to sinzui
<marcoceppi> arosales ZonkedZebra: https://github.com/mxcl/homebrew/pull/22669 Submitted update to brew for latest stable/devel. Hopefully they will be merged soon
<marcoceppi> ZonkedZebra: are you getting a better juju status this time around?
<ZonkedZebra> Yep, everything going smoothly so far, just waiting on the juju-gui agent to start
<marcoceppi> ZonkedZebra: excellent, if you're looking to save money in the future, you can co-locate the juju-gui charm on the bootstrap node, so you can get your money's worth on that node; `juju deploy --to 0 juju-gui`; for the next time around :)
<ZonkedZebra> wonderful, thanks for all the help
<marcoceppi> ZonkedZebra: np, the room is most active during business hours in the UK/US. So if it's "after hours" and no one is here feel free to post your question to either http://askubuntu.com tagged with Juju or our mailing list juju@lists.ubuntu.com, that way we can get back to you in a more permanent form
<blairbo> marcoceppi: when i tab after typing "service" the list does not include either of the two juju services
<marcoceppi> blairbo: what does `initctl list` show? does it show any juju- services?
<blairbo> marcoceppi: I did "initctl list | grep -i juju" and i got these two - juju-agent-juju-local stop/waiting
<blairbo> juju-db-juju-local start/running, process 1151
<marcoceppi> blairbo: start the juju-agent-juju-local
<marcoceppi> blairbo: that's the API that controls the provisioning
<marcoceppi> blairbo: `sudo start juju-agent-juju-local` should do it
<marcoceppi> blairbo: after you start it, run juju status; if it still says agent down and juju-agent-juju-local is stopped, pastebin the log from /var/log/upstart/juju-agent-juju-local.log
<ZonkedZebra> marcoceppi: quick question. Im attempting to extend the node-app charm to support my preferred database (postgres) an addition to mongodb, Any examples of that kicking around (multiple databases) in an existing charm?
<marcoceppi> ZonkedZebra: not that I know of, let me check
<marcoceppi> ZonkedZebra: you'd basically just add the bits and bobs that the postgresql charm requires to the node-app
<marcoceppi> ZonkedZebra: there's no way to say, "only need one or the other", in relations, so if someone adds a mongodb and a postgresql, the charm will process both of them
<sarnold> ZonkedZebra: this one looks like it can handle postgresql or mysql or mongodb: http://manage.jujucharms.com/charms/precise/rails
<marcoceppi> sarnold: ZonkedZebra: however, that charm also does everything via chef scripts, so it's not so easy to follow
<sarnold> oh man :/
<sarnold> sorry ZonkedZebra :)
<ZonkedZebra> sarnold, marcoceppi : well its better then nothing, thanks
<blairbo> marcoceppi: it issued a process ID, but when i list again it's still in stop/waiting state, and juju status shows agent-state still down
<marcoceppi> blairbo: can you install pastebinit and run `cat /var/log/upstart/juju-agent-juju-local.lot | pastebinit`
<blairbo> marcoceppi: also there's no log file with the path specified.  There is one for the db, but not the agent.
<sarnold> .lot ?
<blairbo> he meant .log
<sarnold> just checking:) hehe
<blairbo> :)
#juju 2013-09-19
<blairbo> btw thanks for your help
<blairbo> $ ls /var/log/upstart
<blairbo> console-setup.log       module-init-tools.log           rsyslog.log
<blairbo> container-detect.log    mongodb.log                     ureadahead.log
<blairbo> dbus.log                network-interface-eth2.log      ureadahead-other.log
<blairbo> juju-db-juju-local.log  procps-static-network-up.log
<blairbo> lxc-net.log             procps-virtual-filesystems.log
<blairbo> sorry should have pastebin'd that.. my bad.  http://pastebin.com/Emu7v3Z8
<marcoceppi> blairbo: weird. `juju destroy-environment` and re-bootstrap
<marcoceppi> blairbo: see if that helps
<blairbo> marcoceppi: will do. stand by
<blairbo> marcoceppi: result - http://pastebin.com/Eu1D5qcv
<marcoceppi> blairbo: how does juju status look now?
<blairbo> marcoceppi: status - http://pastebin.com/hw4GJc9b
<marcoceppi> blairbo: looks better, try deploying something now
<blairbo> marcoceppi: which is how it looked before I deployed mysql last time.. let's give this a shot..
<marcoceppi> blairbo: what's the deploy command you're using?
<blairbo> marcoceppi: juju deploy wordpress (local is the default environment in my environments.yaml file)
<marcoceppi> blairbo: k
<blairbo> marcoceppi: then "juju deploy mysql", then
<blairbo> "juju add-relation mysql wordpress"
<marcoceppi> blairbo: how's that going now? getting any oddness in juju status?
<marcoceppi> blairbo: also, what does `juju version` say?
<blairbo> marcoceppi: same oddness - http://pastebin.com/hw4GJc9b version is 1.14.0-precise-amd64
<marcoceppi> blairbo: old link
<blairbo> marcoceppi: oops - http://pastebin.com/8p7Z5is9
<marcoceppi> blairbo: this is a bug then, I'm still using 1.13.3 so I can't confirm atm
<blairbo> marcoceppi: I am not opposed to starting from bare metal again. Should I revert?
<marcoceppi> blairbo: you could try 1.13.3
<blairbo> marcoceppi: are you on precise?
<marcoceppi> blairbo: no, raring on this machine
<blairbo> marcoceppi: ok. I'll give that a shot. I'm also having a hard time with my 2nd and 3rd NICs on precise - they refuse to pull an IPV4 address from DHCP.
<sarnold> blairbo: what's your /etc/network/interfaces look like?
<sarnold> blairbo: (I think the juju lxc provider requires all lxc guests to be on the same NATted bridge, so those extra nics may not even be useful to you as you're intending to use them..)
<blairbo> sarnold: they're not configured for auto intentionally - I am trying to get them up manually to no avail.  http://pastebin.com/RwJGPfp4
<sarnold> blairbo: Wouldn't you want some "iface eth1 inet dhcp" and "iface eth0 inet dhcp" lines in there, even without the 'auto' bit, so 'ifup eth0' or 'ifdown eth1' works?
<blairbo> sarnold: yes, that would make life easier
<sarnold> blairbo: of course if you just leave them out entirely, I'd hope you could run 'ifconfig eth0 up ; dhclient -i eth0 -whatever-else-is-needed...'
<blairbo> sarnold
<blairbo> sarnold: sorry, itchy pinky finger - no love on the dhclient as before
<sarnold> blairbo: aww. :/
<blairbo> sarnold: did the same thing to me during the OS install
<blairbo> sarnold: i'm thinking it may be a 32/64 bit issue with the hardware, as they are on PCI slots
<blairbo> pardon the OT
<sarnold> blairbo: hrm, seems unlikely to me, 64 bit support isn't exactly new.. :) hehe -- maybe check ethtool output for those other nics? maybe check the dhcp server logs if you can get to those?
<kentb> is there a way to cap constraints in juju, like for CPU memory, etc? I know that it'll work off minimums, but, wasn't sure if there was a way to say "don't pick this machine if it's got more than this amount of memory" for example.
<sidnei> kentb: it will start from the minimum in constraints and try to pick the smallest one that satisfies the constraints
<kentb> ok
<sidnei> kentb: note that there were bugs at some point that caused it to pick a random number of cpus for example if you only informed mem constraint
<sidnei> kentb: the 1.14 release has the fixes
<kentb> sidnei: ah ok...all the more reason to upgrade :)  Thanks
<sidnei> the actual bug was for openstack, where it only sorted by mem and then picked whatever was at the top
<kentb> ok. that also explains things, too
<ZonkedZebra> So I'm writing a charm and using juju-log extensively. Where does one see that output? debug-log shows the hook is being called but I don't see the messages passed to juju-log anywhere
<kentb> one more question...for juju-bootstrap I believe you have to do a sync-tools prior to bootstrapping. If you are stuck behind a firewall that makes sync-tools difficult, is --upload-tools acceptable, especially if all you're doing is testing / non-production work?
<marcoceppi> ZonkedZebra: that goes in /var/log/juju/unit-*.log on the machine
<marcoceppi> kentb: --upload-tools is meant only for dev, maybe fwereade could explain more about --upload-tools
<marcoceppi> kentb: I'm still confused by --upload-tools and sync-tools, other than we recommend sync-tools
<kentb> marcoceppi: ok, yeah, I figured sync-tools is what is strongly advised to use.  That said, is there a way to get the tools source from elsewhere, put them on your firewalled-off machine and point juju locally to those tools?
<marcoceppi> kentb: --upload-tools will build the tools locally on your machine, so you need a go build environment, IIRC, to use it
<marcoceppi> kentb: but it could be an alternative
<kentb> marcoceppi: ok got it
<ZonkedZebra> marcoceppi: I see things like "juju.worker.uniter modes.go:109 awaiting error resolution for "config-changed" hook" but i don't see any of the actual messages I'm passing to juju-log
<jaywink> been wondering about juju logging too, on bootstrap phase. I see in the code (running development version) lots of logging calls in various parts but no idea where the log goes.. any ideas? :)
<ZonkedZebra>  /var/log/juju on each node has some log files. as well as juju debug-log seems to collect some log data from across all nodes
<jaywink> ZonkedZebra, don't even have /var/log/juju but I haven't bootstrapped yet :)
<jaywink> there are logging calls in the bootstrap phase too
<ZonkedZebra> adding -v to any command should up the console logging but enough to figure out whats going on
<kurt_> marcoceppi: have you messed with getting the vnc console set up on openstack-dashboard, or no of someone who successfully has?
<marcoceppi> kurt_: no idea
<marcoceppi> ZonkedZebra: you don't see any juju-log data?
<ZonkedZebra> marcoceppi: I see some; things that error seem to log the error message, but only if I'm currently in debug-hooks
<marcoceppi> ZonkedZebra: if you're running debug-hooks and your hooks are in bash, just add `set -ex` to the top of the script
<marcoceppi> ZonkedZebra: it'll be super verbose
<jaywink> ZonkedZebra, oh thanks, didn't know about -v, don't see any mention of it with "juju help" :P
<marcoceppi> jaywink: if you run juju help bootstrap you should see all the options
<marcoceppi> jaywink: bah, it's not there. -v and --debug are two additional options you can use
<marcoceppi> that are global, but not in any of the command line help files
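For reference, the two global flags marcoceppi mentions attach to any juju command (illustrative invocations against a live environment):

```shell
juju bootstrap -v          # verbose console output for any juju command
juju deploy mysql --debug  # full debug logging, more detail than -v
```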
<jaywink> marcoceppi: thanks, a lot more verbose now :)
<ZonkedZebra> marcoceppi: thanks. Any experience or advice on deployment ssh keys (using git)
<marcoceppi> ZonkedZebra: how so? Like placing SSH keys on a service/unit
<ZonkedZebra> yep, best way to provide a key to all nodes and to be used with the git clone/pull
<marcoceppi> ZonkedZebra: via a configuration option
<marcoceppi> juju set <service> deploy-key=`cat ~/.ssh/the-key-you-want`
<AskUbuntu> How can I debug a juju client command? | http://askubuntu.com/q/347651
<ZonkedZebra> and then have the script write out the key file, use it, then delete it again?
<ZonkedZebra> script=charm
<marcoceppi> ZonkedZebra: just store the key in your users home directory on the charm, so during your hooks/config-changed hook, do something like this:
<marcoceppi> ZonkedZebra: http://paste.ubuntu.com/6129743/
<ZonkedZebra> okay, but wouldn't that conflict if I run multiple services on the same node? they all run under the ubuntu user yes?
<marcoceppi> ZonkedZebra: they all run as the root user, and yes, if multiple charms are co-located they would conflict; we don't recommend co-location of services until containerization is fixed
<ZonkedZebra> marcoceppi: fair enough. thanks
<marcoceppi> ZonkedZebra: if that's a problem, just save the file as ".ssh/${JUJU_UNIT_NAME/\//-}"
<marcoceppi> instead of id_rsa, you'll just need to make sure you specify that key file as the one to use for git cloning
<marcoceppi> ZonkedZebra: that'll take something like "service-name/0" to "service-name-0" which is a friendlier file name
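A minimal sketch of the naming trick above, assuming a bash config-changed hook. JUJU_UNIT_NAME is stubbed here (juju exports it in a real hook), and the juju-only steps are shown in comments:

```shell
#!/bin/bash
# Stub: in a real hook juju exports JUJU_UNIT_NAME, e.g. "service-name/0"
JUJU_UNIT_NAME="service-name/0"

# Replace the "/" with "-" to get a friendlier per-unit key file name
KEY_FILE=".ssh/${JUJU_UNIT_NAME/\//-}"
echo "$KEY_FILE"   # -> .ssh/service-name-0

# In the hook itself you would then (not runnable outside juju):
#   config-get deploy-key > ~/"$KEY_FILE"
#   chmod 600 ~/"$KEY_FILE"
#   and point git at that key file when cloning
```

The `${var/pattern/string}` substitution is a bashism, so the hook needs a bash shebang rather than plain sh.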
<adam_g> hazmat, around?
<kentb> marcoceppi: so what exactly are the tools that get sync'ed / uploaded prior to bootstrapping?  Is it just API-related, internal-to-juju stuff?
<zradmin> has anyone had issues with the glance charm working properly?
<ZonkedZebra> marcoceppi: is there an easy way to pass files along with the charm to use as templates instead of embedding them in the bash script?
<marcoceppi> kentb: it's actually uploading the juju client to be installed on all the machines. The juju client for the services is uploaded and deployed from storage to keep the same version between machines
<marcoceppi> ZonkedZebra: you can use things like cheetah
<ZonkedZebra> marcoceppi: link?
<marcoceppi> ZonkedZebra: there's a few charms that do that
<marcoceppi> ZonkedZebra: there's an old bash charm helper hanging around that does this
<marcoceppi> ZonkedZebra: https://github.com/marcoceppi/discourse-charm/blob/master/lib/file.bash
<marcoceppi> ZonkedZebra: but that gives you the cheetah formatting
<marcoceppi> ZonkedZebra: here's an example template file, you just use $VARIABLE in the file https://github.com/marcoceppi/discourse-charm/blob/master/contrib/upstart/discourse-sidekiq.conf.tpl
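As a stand-in for the cheetah helper, the same $VARIABLE substitution can be sketched with plain sed (the paths, template contents, and variable names here are made up for illustration, not taken from the discourse charm):

```shell
#!/bin/bash
# Write a tiny $VARIABLE-style template, like the .tpl file linked above
cat > /tmp/demo.conf.tpl <<'EOF'
description "sidekiq for $APP_NAME"
exec sudo -u $APP_USER sidekiq
EOF

# Render it by substituting each $VARIABLE with a concrete value
sed -e 's|\$APP_NAME|discourse|g' -e 's|\$APP_USER|www-data|g' \
    /tmp/demo.conf.tpl > /tmp/demo.conf

cat /tmp/demo.conf
```

In a real hook the values would come from `config-get` or relation data rather than being hardcoded.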
<kentb> marcoceppi: excellent. just what I needed...thanks!
<hazmat> adam_g, yes
<adam_g>  hazmat i merged darwin into lp:juju-deployer earlier in the month, noticed some more changes hitting darwin since.  any chance we can move development to lp:juju-deployer? not sure what needs to happen in LP
<hazmat> adam_g, I see. i think it's just a question of getting the word out / an announcement, and then updating the various ppa/pkg builders to point to the new origin. there's still a few new merge proposals in the queue against darwin; getting those merged, and we can either unlink the series or point the series to trunk also on lp.
#juju 2013-09-20
<blairbo> the lxc bridge juju created assigned itself a subnet that already exists in my network. How do I change that so I can connect to "exposed" charms from outside the host?
<sarnold> blairbo: I think there's an /etc/default/lxc or lxc-net file that you can fiddle with; be aware though that juju's local provider is pretty touchy about what can and can't be changed in the lxc configuration, not all variables there can be changed and still have things work
<blairbo> sarnold: well it's not worth a whole lot if I can't access the environment outside of the host. there must be some logic as to how it chooses the network address for the host bridge.
<sarnold> blairbo: it's mostly intended for local development...
<blairbo> sarnold: what would the ideal setup be for a private cloud?
<sarnold> blairbo: I think 'private cloud' would probably be served better by maas or openstack, though I think the end goal of local provider or ssh provider might eventually give you what you'd like today..
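The file sarnold is thinking of is typically /etc/default/lxc-net; a hypothetical edit to move the bridge onto a non-conflicting subnet looks like this (example values only; restart lxc-net afterwards, and as noted the local provider may not tolerate every change):

```shell
# /etc/default/lxc-net -- example values, pick a subnet unused on your network
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.5.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.5.0/24"
LXC_DHCP_RANGE="10.0.5.2,10.0.5.254"
LXC_DHCP_MAX="253"
```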
<AskUbuntu> Should MaaS and Juju get installed on one of my servers or on a client system? | http://askubuntu.com/q/347866
<gnuoy> what process writes to all-machines.log on the bootstrap node ? The file is growing at an alarming rate, >2.5G in the past 30mins
<gnuoy> weirdly, if I do  "head -1 all-machines.log" I get a seemingly endless stream to stdout.
<gnuoy> I've stopped rsyslogd and wiped the file and started it up again and I have messages from ~5 hours ago
<gnuoy> starting rsyslog for 1s results in 10M of log file which vi reckons is all on one line
<gnuoy> wc reckons it's 0 lines so I guess there's no eol
<gnuoy> the messages seem to be prefixed with the unit name, which seems to be the bootstrap node for all messages I've checked. I did try deploying some charms for monitoring to the bootstrap node, I wonder if they're not playing nicely
<marcoceppi> gnuoy: what environment are you using? HP Cloud?
<gnuoy> private openstack
<gnuoy> marcoceppi, I'm doing a redeploy without the charms to the bootstrap node and I'm seeing the same thing, all-machines.log is 2.2G and growing
<gnuoy> urgh, sorry, one did go to the bootstrap node. let me redeploy
<avoine> is there a new way to set the log level on units?
<avoine> must be either development or logging-config environment variable
<jcastro> sidnei: hey, how's your puppet/juju these days?
<jcastro> noodles775: same question!
<noodles775> jcastro: sounds like a loaded question :P. I'm not using puppet other than to update internal stuff occasionally (I've got a branch somewhere of a puppet version of a test charm for an internal service that I did a while ago - it's not been touched for months though).
 * noodles775 might still have the charm-helpers branch that added puppet support (masterless), just for declaring machine state etc.
<kurt_> jcastro: who is working on the quantum-gateway charm?
<kurt_> is that jamespage or adam_g?
<jamespage> kurt_, me
<kurt_> jamespage: is there a bug with being able to delete floating IPs?
<kurt_> in quantum
<jamespage> kurt_, not that I'm aware of
<jamespage> what's the problem? what are you trying todo which is failing?
<kurt_> I'll give you a paste bin in a sec - putting it together
<jamespage> is the ip still assigned to an instance? if it is quantum won't let you delete it
<kurt_> it isn't
<kurt_> jamespage: http://pastebin.ubuntu.com/6133418/
<jamespage> kurt_, quantum floatingip-list ?
<kurt_> that's there...
<kurt_> http://pastebin.ubuntu.com/6133425/
<jamespage> kurt_, you need to delete that first
<kurt_> I guess quantum doesn't support more than 1 floating IP list at a time
<kurt_> I created another and it didn't give me errors
<kurt_> jamespage: anyways, thanks.  I will give that a try
<jamespage> kurt_, ?
<kurt_> jamespage: huh?
<jamespage> "<kurt_> I guess quantum doesn't support more than 1 floating IP list at a time"
<kurt_> right
<kurt_> more than 1 subnet
<jamespage> ah
<kurt_> it would seem it should
<jamespage> kurt_, well a router is normally associated with a subnet
<jamespage> and a range of floating-ip's can be associated with the subnet as well
<jamespage> kurt_, the nova-cloud-controller installs a helper for this
<jamespage> quantum-ext-net
<kurt_> I'll read up on that
<kurt_> I'm loving quantum btw.  the more I use it, the more I like it
<jcastro> hey guys
<jcastro> due to some bw issues
<jcastro> we're going to postpone the charm school for today
<Therion87> Hello
<Therion87> I'm using Juju with AWS, just set it up, haven't deployed anything, and I already have an instance after running juju bootstrap
<Therion87> Why is this?
<kurt_> Therion87: when you bootstrap, you automatically create your first instance
<kurt_> this is instance "0"
<kurt_> aka your root node
<kurt_> do a 'juju status' and you will see this
<Therion87> Yea
<Therion87> It was more a question of the purpose of it
<kurt_> That is the core juju node.  You need this for all of the main juju functionality
<Therion87> Ok
<kurt_> You can use that node for other purposes too, like deploying other juju charms on
<kurt_> RTFM on the "--to" functionality
<adam_g> jamespage, is there a ceph charm change floating around to support setting the pg count by client when making pool?
<kurt_> WOOT!  Fully working openstack instance with MAAS/juju/VMWare Fusion on mac osx. Finally!
<kurt_> it took a little fiddling with quantum networking and the vnc console to get things working, but its up and running
<kurt_> Yes, MAAS and juju work on VMWare Fusion, as well as on KVM.  I just proved it.
<hatch> when following the lxc setup guide on juju.ubuntu.com and then deploying/exposing the GUI the ip it's exposed to is an ip local to that machine - has there been any documentation on how to expose these services to the host machine?
<marlinc> Juju is trying to access something on my MAAS server that doesn't exist..
<marlinc> - - [20/Sep/2013:23:20:54 +0200] "GET /MAAS/api/1.0/files/provider-state/ HTTP/1.1" 404 276 "-" "Go http package"
<marlinc> I just tried to run the juju bootstrap command
<marlinc> I'm sorry. I meant the status command: error: file 'provider-state' not found
<marlinc> What could it be.. running juju bootstrap doesn't work either. error: no tools available
<kurt_> marlinc: try running again.  sometimes I've seen this on the first try
<marlinc> Well I've ran it 5 times
<kurt_> ah, try juju sync-tools first
<marlinc> Well I've tried that too and that throws: error: environment has no access-key or secret-key
<kurt_> you may get a EOF on one of the tool packages, at which time you should try again
<marlinc> Well the sync command doesn't run at all
<kurt_> juju destroy-environment; juju sync-tools
<marlinc> Still the same problem
<marlinc> https://gist.github.com/Marlinc/4461c6a36ddb36bd3840
<marlinc> This is my environments file
<marlinc> Ah it appears the MAAS tutorial isn't very up-to-date
<kurt_> you don't have ssh keys generated
<marlinc> I had to install charm-tools
<marlinc> Now I can run bootstrap without issues
<kurt_> are you doing this on MAAS?
<marlinc> Yes
<kurt_> ok, so you are good then?
<marlinc> Yes for now
<kurt_> do your MAAS nodes have access to the internet?
<marlinc> They should have
<kurt_> juju can be a little fiddly in the bootstrap process
<kurt_> you need to generate ssh keys too
<marlinc> I already got those :)
<kurt_> it wasn't in your environments.yaml
<kurt_> needs to be there
<marlinc> Ah okay
<marlinc> Well I'm following the quick start: http://maas.ubuntu.com/docs/juju-quick-start.html
<utlemming> on raring, can you have multiple lxc environments using the devel ppa?
<utlemming> I am seeing a case where I can use one environment, but if I try to use the other, it fails
<dalek49> exit
<dalek49> (apologies)
#juju 2013-09-21
<Zack_dee> in order to keep myself logged in, I should have the screen command running..
<Zack_dee> exit
#juju 2013-09-22
<AskUbuntu> juju can't reaching servers | http://askubuntu.com/q/348702
<HappyPete> Wow, I'm just reading about juju for the first time. All of these tools--chef, juju, etc.--are rocking my world. My son wants help running a craftbukkit server--so I've hand-created one in EC2. So far so good, right?
<thumper> sure
<thumper> what's craftbukkit?
<HappyPete> What I really want is to create a singleton service that will run behind an ELB [actually did that for the one machine] so that at any moment, an instance is running.
<HappyPete> Sorry, It's a minecraft variant, lots of plugins, etc. I saw someone had written a juju charm for minecraft, so I figured  tailoring one for cb should be not so terrible.
<thumper> sure, shouldn't be too hard I guess
<HappyPete> so the next step is watching the health check on the load balancer, terminating the instance if it goes bad, instantiating a new one, attaching the EBS volume if I'm in the same AZ; instantiating a volume from a new snapshot if I'm in a different AZ, etc.
<HappyPete> Have I wandered out of juju land and into "wow...sounds hard, man..." land?
<thumper> HappyPete: yes
<thumper> :)
<HappyPete> Yeah, I was afraid of that...I'm studying cloudwatch and autoscaling groups
#juju 2014-09-15
<lazyPower> fuzzy_: still wrestling with mongodb?
<fuzzy_> lazyPower: yes
<fuzzy_> Following, https://maas.ubuntu.com/2012/11/30/lets-shard-something/ using the command "juju add-relation mongos:mongos-cfg configsvr:configsvr" I get "ERROR service "mongos" not found".  What am I doing wrong?
<lazyPower> do you have a service named 'mongos' deployed?
<fuzzy_> not that I see
<fuzzy_> that's why the howto doesn't make sense
<lazyPower> fuzzy_: are you trying to deploy the sharded/replicated cluster? there is a bundle for it
<lazyPower> that howto looks really old
<lazyPower> https://jujucharms.com/bundle/mongodb/5/cluster/
<lazyPower> take care to note that there are 3 replicated units per shard/configsvr
<lazyPower> that's 13 total units listed in that bundle
<fuzzy_> thank you
<lazyPower> fuzzy_: the mongodb charm is notoriously finicky as well. I've put some elbow grease into it in the past - if you run into any issues with it please file bugs here:   https://bugs.launchpad.net/charms/+source/mongodb
<fuzzy_> i'm just going to reroll everything and try one more time
<lazyPower> ok. I typically have to do that when i botch a mongodb deploy as well, start over from scratch
<lazyPower> there appears to be a race condition in the charm I haven't tracked down on one of the roles.
<lazyPower> sorry about the weak experience with the charm :( MongoDB has been a problem child for me when triaging/maintaining
<lazyPower> I'm open to any suggestions on how we can make it a better experience, any documented cases / examples will definately help as we move forward with the next iteration.
<fuzzy_> i'm curious why python-yaml isn't part of a machine bootstrap
<lazyPower> I've run into this on 2 substrates - DigitalOcean and Linode
<lazyPower> its standard on AWS, HPCloud, and Joyent
<fuzzy_> i figured that is sort of lsb stuff
<fuzzy_> ah
<lazyPower> i think they are using modified cloud images, but i have no data to back up my claim
<lazyPower> I've had to go back through and add the python-yaml dependency to any of the python based charms as i run across them testing on DO
<fuzzy_> you hooked the do api
<fuzzy_> is it fully automated yet?
<lazyPower> mostly - it's a faux provider
<lazyPower> https://github.com/kapilt/juju-digitalocean
<fuzzy_> because linode does have an api
<fuzzy_> and it's not hard to hook
<lazyPower> mhm, i've mentioned this approach to you before
<fuzzy_> i just didn't know how hard the juju side of it would be
<fuzzy_> i know you did
<fuzzy_> i just lost the link
<lazyPower> well that plugin is about all i can reference for you - i'm not familiar enough with core to make a recommendation.
<fuzzy_> do you use it to admin your do boxes?
<lazyPower> however hazmat has all teh goods when he's around. he's the author of that plugin.
<lazyPower> i do. I've got ~ 6  nodes running on DO right now
<fuzzy_> hmmmmmmm
<lazyPower> (including the api server)
<fuzzy_> damn that drives a hard bargain
<fuzzy_> are you using digital oceans private network to do the deed?
<fuzzy_> lamont:
<fuzzy_> ack
<fuzzy_> lazyPower:
<lazyPower> I'm watching ;) no need to ping
<lazyPower> Yeah, it automatically adds the private networking to the droplet.
<fuzzy_> *sigh* I hope this is worth it
 * fuzzy_ crosses fingers
<fuzzy_> lazyPower: see if you got any digital ocean promo codes for new accounts so you can get hooked up with this
<lazyPower> i do actually, 1 moment
<lazyPower> https://www.digitalocean.com/?refcode=b6ac387b6f36 - $10 free credit with a new account
<fuzzy_> ok I think I got a do account now
<lazyPower> awesome
<fuzzy_> awww crap
<lamont> lol
<fuzzy_> stupid do
<lazyPower> ?
<fuzzy_> this might not actually work
<fuzzy_> remember when they screwed up nyc2 for all those weeks last year?
<lazyPower> mhmm, the juniper upgrade
<fuzzy_> well i moved everything off of them
<fuzzy_> but one of the droplets kept running until it racked up a bill
<lazyPower> whoops
<fuzzy_> and I told them no way I was paying it, I wasn't even using the box it was a left over from an idle
<fuzzy_> now they just took $5 out of paypal and locked my account
<fuzzy_> before i could make a droplet
<fuzzy_> oh nope there it goes
<fuzzy_> whooo
<fuzzy_> ok i'm in business
<lazyPower> the DO guys are really reasonable.
<lazyPower> Every time i've reached out for support i get an answer in an hour or less, and 90% of the time its resolution.
<fuzzy_> actually nope
<fuzzy_> i'm locked
<fuzzy_> lets seee
<fuzzy_> but now i'm limited to 1 droplet
<lazyPower> I'd open a ticket and explain whats going on
<lazyPower> they should crank you up to 5 instances for your first month or so
<fuzzy_> I just did
<fuzzy_> now I wait
<fuzzy_> https://github.com/charms/mongodb when I try juju deploy mongos i get failure
<fuzzy_> weird it took that time
<fuzzy_> lazyPower: http://hastebin.com/dadetadepu.sm
<fuzzy_> it claims to be running, but i can't seem to find the exposed console for mongo
<fuzzy_> yea i don't get it, netstat on that box shows it's listening to 28017, i made sure to disable it's firewall and i still can't see it remotely
<fuzzy_> http://hastebin.com/patadakepo.hs
<fuzzy_> makes no sense
<fuzzy_> So reading the juju log on mongos/0 i get a lot of permission denied.  I think that might be because of ufw
<fuzzy_> i'm going to tear it down and retry without ufw and ssh configured differently
<fuzzy> yea it was ufw fighting me i got a mongo console lazyPower
<jamespage> gnuoy`, nudge - https://code.launchpad.net/~james-page/charms/trusty/nova-cloud-controller/cluster-sync-fix/+merge/234475
<gnuoy`> ack
<marcoceppi> Review Queue is going down for an upgrade
<mbruzek> Hey marcoceppi since we have automated testing now, we should start requiring all new charms to have tests.
<whit> mbruzek, that's what I thought the policy was... I didn't realize it hadn't been more widely disseminated
<marcoceppi> mbruzek: that's not a bad idea, we should propose this policy to the list
<whit> jose, sorry about that :[
<marcoceppi> There's a bunch of previous talks about it on the list, I'll round them up and make a new post
<jose> whit: don't worry, not a prob :)
<marcoceppi> Should we consider unit testing sufficient?
<marcoceppi> mbruzek lazyPower jose ^? et al
<jose> what would you define as unit testing?
<marcoceppi> anything that tests the hooks without being deployed
<jose> I would be good with something saying 'hey! I deployed mycharm successfully, even though I haven't accessed it it didn't error while deploying+relating!"
<jose> would that be unit testing?
<marcoceppi> so, some charms have python unit tests
<mbruzek> marcoceppi, I don't see how we can ask authors to write python tests if they don't know it.
<marcoceppi> would that be enough to satisfy a new policy that all charms need tests
<marcoceppi> mbruzek: I'm saying, is unit testing enough to satisfy testing or should we require functional/integration testing as well?
<mbruzek> marcoceppi, could they write a bash deployment or a bundle
<marcoceppi> that's not unit testing, that's functional testing
<marcoceppi> and that's basically what we think of testing today, a la bundle/amulet tests
<mbruzek> marcoceppi, I think we should unit test at minimum but HIGHLY suggest having a bundle or amulet test that deploys a working unit.
<jose> I have to leave for uni. Giving some input later.
<marcoceppi> jose: thanks o/
<marcoceppi> mbruzek: right, that's my question, is unit testing enough to satisfy the testing requirement in policy
<mbruzek> yes.
<mbruzek> I don't like only unit tests because I still may not know how to deploy said charm.
<marcoceppi> cool, I'll wait for feedback from jose and lazyPower and draft the policy
<marcoceppi> mbruzek: that's a readme issue
<marcoceppi> and nackable based on that alone
<mbruzek> yes.
<lazyPower> marcoceppi: i think unit tests or integration tests at bare minimum. i've been treating it as an either/or
<lazyPower> marcoceppi: my reasoning is we still have a ton of bash charms in the store, and bash hasn't exhibited any ground breaking unit test frameworks.
<mbruzek> +1 lazyPower
#juju 2014-09-16
<fuzzy> Hi there, I've been having a problem getting mongodb to deploy.  I believe that the mongodb charm needs an 'apt-get install python-yaml' before calling hooks.py in this file: https://github.com/charms/mongodb/blob/trusty/hooks/install based on this bug https://bugs.launchpad.net/charms/+source/haproxy/+bug/1302642
<mup> Bug #1302642: haproxy missing install dependency <landscape> <haproxy (Juju Charms Collection):Confirmed for mbruzek> <https://launchpad.net/bugs/1302642>
<fuzzy> I've tried now from linode with manual provisioning.  I ended up adding it to my node create script there.  Today I tried to follow the Digital Ocean Juju steps and deploying to 3 nodes fails all in the same spot asking for python-yaml
<fuzzy> http://hastebin.com/xukaxitalu.sm
<fuzzy> https://github.com/kapilt/juju-digitalocean
<lazyPower> fuzzy: you're going to need to patch the charm, or submit a PR so it can be patched upstream in the charm store
<fuzzy> I don't have the first clue on how to do that properly
<lazyPower> take a look at hooks/install
<fuzzy> Yes I see that
<fuzzy> is it just a parsed bash script?
<fuzzy> Thats what I assume from that launch pad haproxy bug
<lazyPower> nothing parsed about it, it is by definition, a bash script.
<fuzzy> well it's missing the bang at the top so I didn't know
<lazyPower> are you looking at precise or trusty?
<lazyPower> i've got a copy of precise, and it has the shebang
<fuzzy> I have linode with precise and do with trusty
<fuzzy> I'm experiencing the same problem in both places
<fuzzy> I haven't looked at the precise github yet
<fuzzy> https://github.com/charms/mongodb/blob/precise/hooks/install
<fuzzy> it's missing there too
<lazyPower> looks like its a symlink, and the precursor bash script was removed
<lazyPower> interesting
<lazyPower> LOL and I did that
<lazyPower> nice
<fuzzy> ......
<fuzzy> Well at least I know who to talk to about it
<lazyPower> file a bug, I cant push a fix for that tonight
 * lazyPower cannot push his own changes
<fuzzy> Where, lp or github?
<lazyPower> LP
<fuzzy> do you know that link by heart yet?
<sarnold> heh, fuzzy, do you? :)
<lazyPower> http://bugs.launchpad.net/charms/+source/mongodb
<fuzzy> sarnold: ye
<fuzzy> so lazyPower what is juju-mongodb then?
<lazyPower> a juju fork of mongodb for systems that dont have the 'mongodb' package installed already.
<fuzzy> ah
<sarnold> (the juju-mongodb has been neutered to not do javascript, us buzzkills from the security team don't want to support N different javascript implementations.)
<fuzzy> https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1369792
<mup> Bug #1369792: Python yaml missing from mongodb install on precise and trusty <mongodb (Ubuntu):New> <https://launchpad.net/bugs/1369792>
<sarnold> hmm, that looks like the mongodb source package, not the mongodb charm
<lazyPower> https://bugs.launchpad.net/charms/+source/mongodb
<lazyPower> fuzzy: make sure you open that bug against the proper project. You *did* open that against the ubuntu mongo package
<fuzzy> wonderful
<lazyPower> I moved it for you
<sarnold> hooray, that was easier than I feared
<lazyPower> sarnold: if you run across any more bugs like that, you change the project from Ubuntu to Juju Charms Collection
<sarnold> lazyPower: thanks
<lazyPower> and if you want confirmation on any of the moved bugs, feel free to ping me with the URL
<fuzzy> well that made that easy
<fuzzy> I didn't see a delete button anywhere
<lazyPower> we don't typically delete bugs, we mark them as invalid
<lazyPower> or incomplete
<fuzzy> So when should I try again?
<lazyPower> subscribe to bug mail and you'll be notified when your bug is resolved.
<lazyPower> if you want a brute force fix for now, just install python-yaml on the hosts as they error, then re-run the hook
<lazyPower> juju resolved -r service/#
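The brute-force workaround lazyPower describes, spelled out (the unit name is illustrative):

```shell
# Install the missing dependency on the failing unit, then retry the hook
juju ssh mongodb/0 'sudo apt-get install -y python-yaml'
juju resolved --retry mongodb/0   # same as: juju resolved -r mongodb/0
```

These commands need a live environment with the failed unit, so they are shown for shape only.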
<fuzzy> I'm actually going to go try now on linode again now that I understand how that charm works better and hope for the best
<fuzzy> lazyPower: you are actually back now or just sudo back?
<lazyPower> watching TV, enjoying my evening, chatting on my laptop and triaging bugs
<lazyPower> so take that under consideration
<fuzzy> Where is the data for charms stored on a node?
<lazyPower> /var/lib/juju/unit-service-#/charm
<lazyPower> i may have missed a directory in there, but thats the round-about path
<fuzzy> so basically all juju stuff on a drone would be in /var/lib/juju ?
<fuzzy> The reason I ask, is I would like to move it to zfs so mongodb used zfs in the backend
<lazyPower> you're speaking greek to me
<lazyPower> we have no concept of zfs enablement on our charms to date.
<lazyPower> and just because the charm's payload is /var/lib/juju/* doesn't mean that's where mongodb is going to place its data
<lazyPower> you'll need to evaluate a deployed mongodb host to glean that, i think it stores data in /var/lib/mongo  but don't quote me on that.
<fuzzy> Oh ok, so they aren't in some kind of docker container on the node as well.  They run right native on the thing
<fuzzy> Awesome
<lazyPower> nope, its all native on the node unless you specify
<lazyPower> --to lxc:#  when you deploy
<lazyPower> eg: juju deploy mongodb --to lxc:1  (which will place mongodb in an lxc container on node 1)
<fuzzy> lazyPower: Alright so I've managed to get a 3 replica node mongodb going and expose it @ linode with little effort
<fuzzy> http://hastebin.com/iladimaxah.sm
<lazyPower> fuzzy: whats the tl;dr?
<fuzzy> http://hastebin.com/uboqezexob.sm
<fuzzy> I got precise / trusty mixed nodes with 3 in a working replica doing mongodb and 3 doing meteor.js
<lazyPower> there ya go
<lazyPower> looks like you have found the secret sauce
<sarnold> ooo
<fuzzy> almost
<fuzzy> hmmmm
<fuzzy> Do I have to expose mongodb for it to work correctly?
<lazyPower> nope, it should be linked/communicating over the private network
<jose> whit: hey, mind a quick PM?
<lazyPower> jose: he's out of the day and probably for the rest of the week
<jose> lazyPower: oh, ok :(
<whit> jose, I about to sign off, wanna catch up in the morning?
<jose> whit: sure thing, not a prob :) have a good night!
<whit> jose: you too!
<jose> marcoceppi: I believe that unit testing is fine unless it *requires* something to function. like, in the case of wordpress, it would be integration testing with mysql as a bare minimum, but in subway, which doesn't require anything, unit testing would be fine for me
<mbruzek> good morning Jose
<jose> hello, mbruzek :)
<rick_h_> mbruzek: hey, question for you
<mbruzek> rick_h_, go ahead
<rick_h_> mbruzek: if I have an orange box, and I've installed openstack on it, is there a note/doc on what I need to get from that openstack install to configure it as a provider in juju?
<mbruzek> rick_h_, I only know of Kirkland's orange box set up document that talks about how to get maas working with Juju.
<mbruzek> rick_h_, if you have openstack already running I would start with https://juju.ubuntu.com/docs/config-openstack.html
<rick_h_> mbruzek: yea, that doesn't seem to have the info I'm looking for there. Figured I'd bug you about it in case you guys did anything with it in your sprint
<mbruzek> rick_h_, we did some things with it, but not setting up openstack.
<rick_h_> mbruzek: cool thanks
<mbruzek> rick_h_, if you have specific questions post them here and perhaps we can learn together.
<jose> rick_h_: I guess it's just a matter of deploying openstack and then setting the credentials in node 0's environments.yaml
<rick_h_> jose: yea, I assume somewhere in all these openstack services are the secret/info I need for the juju provider
<rick_h_> jose: I was hoping to cheat instead of doing a bunch of research, by making mbruzek tell me :P
<jose> rick_h_: hehe, I guess the docs are the best option then :P
<mbruzek> sorry I don't have that information for you rick_h_
<rick_h_> mbruzek: all good, will carry on
<marcoceppi> jose: I think you're missing the point of unit testing
<marcoceppi> rather the definition of unittesting
<marcoceppi> what you've described with wp and mysql is integration testing; you can unit test even if you need an additional service by simply mocking what mysql /would/ be providing
<jrwren_> rick_h_: if horizon is running, go get the credentials from the openrc file?
<kirkland> rick_h_: hi
<rick_h_> kirkland: howdy
<kirkland> rick_h_: so open stack provider on the orange box...  the only way I've done it so far, is through the Landscape OpenStack Installer
<rick_h_> kirkland: ok, cool
<kirkland> rick_h_: from a bare openstack, you could do it too, but you'll need to take a few more steps
<kirkland> rick_h_: namely, creating networks and routers
<kirkland> rick_h_: otherwise, your instances won't be reachable
<rick_h_> kirkland: ok, no problem then. I was just curious as I had things this far I was tempted to try to see if I could take it the next step
<kirkland> rick_h_: we don't have docs yet on how to do that
<rick_h_> thanks
<kirkland> rick_h_: but, if you were to compile that howto, you'd be a hero ;-)
<rick_h_> save me some time on this searching through various dashboard/interfaces for the bits required
<rick_h_> heh, well I've only got it for today and we're still qa'ing the gui stuff so not sure i'll get that done. Maybe some sprint though it'd be cool to have one of these on site and work that through.
<jose> marcoceppi: unit testing is fine for me, as I said
<marcoceppi> jrwren_: you deployed with juju?
<jrwren_> marcoceppi: no. I think I misunderstood rick_h_'s question.
<marcoceppi> oh, there was more above the fold
<rm1> Hi there. I would like to setup an image to use port other than 22 for ssh as it is blocked by security.
<rm1> the target would be AWS
<rm1> or would it possibly be able to define a custom VPC as this would be very useful
<marcoceppi> rm1: what do you mean port 22 is blocked?
<rcj> For charm testing with amulet, is there a way to fire hooks as I would from juju-run?
<mbruzek> rcj, Other than adding relations and the normal hook firing?
<rcj> mbruzek, yes.  I'd like to run something on a specific unit just as I would with juju-run.  I have code that runs periodically in the charm and is triggered in that fashion.
<mbruzek> rcj there is a UnitSentry.run(command) method.  Can you call it that way?
<mbruzek> d.sentry.unit['ubuntu/0'].run('whoami')
<rcj> mbruzek, yeah.  I think that will work, thanks.
<mbruzek> rcj glad to help
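For reference, amulet's `UnitSentry.run` hands back an `(output, exit_code)` tuple. A rough local stand-in of that contract (this sketch shells out on the local machine instead of a deployed unit, which is an assumption for illustration only):

```python
import subprocess

def run(command):
    """Mimic the shape of amulet's UnitSentry.run: return
    (output, exit_code), executing locally rather than on a juju unit."""
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    return out.decode().strip(), proc.returncode

# e.g. the d.sentry.unit['ubuntu/0'].run('whoami') call above would
# return something like ('ubuntu', 0).
output, code = run('echo hello')
assert (output, code) == ('hello', 0)
```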
<rcj> Is there a way to know what environment the charm test is running in so that I can provide different configs for each (different config options and constraints)
<mbruzek> rcj, JUJU_ENV maybe? https://juju.ubuntu.com/docs/charms-environments.html
<mbruzek> rcj, the problem is someone could name hp-cloud as hp-mbruzek so the name is not really a deterministic way to do it.
<mbruzek> rcj, We *just* wrote this documentation at our sprint: https://juju.ubuntu.com/docs/reference-environment-variables.html
<rcj> mbruzek, exactly.  I would like to know how the automated tests are run for promulgated charms so that I can provide environment specific constraints
<rcj> mbruzek, so JUJU_ENV won't be sufficient.
<rcj> or I'll just have to nerf the testing
<rcj> but I can do more if I know where it's running at runtime
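A sketch of what rcj describes: picking per-cloud test constraints from `JUJU_ENV`, with mbruzek's caveat baked in — environment names are user-chosen, so substring matching is only a heuristic. The constraint values here are made up:

```python
import os

# Hypothetical per-cloud constraints; the name hints are only a
# heuristic, since users can call their environments anything.
CONSTRAINTS = {
    'ec2': 'instance-type=m3.medium',
    'hp': 'mem=2G',
}

def pick_constraints(default='mem=1G'):
    env = os.environ.get('JUJU_ENV', '')
    for hint, constraints in CONSTRAINTS.items():
        if hint in env:
            return constraints
    return default

os.environ['JUJU_ENV'] = 'amazon-ec2-test'
assert pick_constraints() == 'instance-type=m3.medium'
```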
<mbruzek> rcj I have been given guidance to make the tests run on all environments.
<mbruzek> tvansteenburgh (who should be back tomorrow) should be able to answer that question better than I can.
<rcj> mbruzek, it will, but there is a config option to set up ephemeral storage. The number of volumes available will depend on instance type and device name will depend on the cloud.
<mbruzek> rcj he is working on the automated testing.
<rcj> mbruzek, thanks.
<bloodearnest> rick_h_: yo dude, do bundles still need to use cs charms? Any option yet for local ones? (for demo purposes)
<rick_h_> bloodearnest: yes, afaik they do still.
<rick_h_> bloodearnest: next cycle we'll hopefully have fixes for that into place across everything.
 * rick_h_ apologizes for non-awesome answer
<bloodearnest> rick_h_: ok, thanks. No worries, difficult problem. Will try this personal namespace in the charmstore thing that lazyPower blogged about
<rick_h_> bloodearnest: definitely +1 there
<bloodearnest> rick_h_: am doing a django stack demo (python conference), is hadoop still the best general purpose demo?
<rick_h_> bloodearnest: I think jcastro and marcoceppi have put a lot of time into the elasticsearch demo
<rick_h_> bloodearnest: might be useful if you can do some fulltext search on there
<bloodearnest> nice, will look at that
<rick_h_> bloodearnest: there was some scripts to preload data and such
<natefinch> why the heck do you have to resolve a hook failure before you can destroy a service?
<rick_h_> natefinch: you tell me :P
<natefinch> rick_h_: no idea, but it's dumb as rocks.
<rick_h_> natefinch: +1 would love to see that go
<jrwren_> natefinch: so that the departed hooks can run.
<jrwren_> it would be nice if there was a --force
<natefinch> jrwren_: I guess I assume that if the hooks are broken, they're broken, and making me resolve them is not going to make anything better.  Especially true if it's the install hook that has failed.
<natefinch> jrwren_: plus, the really bad part is that destroy-service doesn't actually fail.  It just silently doesn't do anything except set the life of the service to dying, but it'll sit there forever in dying unless you resolve it.
<jrwren_> natefinch: definitely true when doing charmdev. Less true when install fails for reasons such as the network being temporarily unavailable, where --retry will succeed.
<natefinch> jrwren_: but destroy service should destroy the service.
<natefinch> jrwren_: but yes, a --force option would probably fix most of this
<jrwren_> natefinch: remember destroy is just an alias for remove. If I'm in a state such that install did succeed, then I'd expect the departed hooks to run on remove.
<jrwren_> natefinch: and maybe if never installed, --force automatically?
<natefinch> jrwren_: yeah, definitely if there's a situation where we know it's "safe" to do --force automatically, just do it.  That's one thing that bugs me about a lot of the juju command line commands... they rarely "just do the right thing" when the right thing is obvious.
<jrwren_> jrwren_: Same here. It makes for a steeper learning curve for a new juju user or charm author too.
<natefinch> jrwren_: we're working on it.  My team is actively trying to make life easier for charm authors in particular right now.  There's some cool stuff coming down the pipeline.
<jrwren_> jrwren_: YAY! Its a great time for juju IMO :)
<jrwren_> and autocompleting the wrong name means I've had a long enough day :)
<natefinch> marcoceppi: wow, holy crap, it finally works: http://54.160.155.32/
<fuzzy> congrats
<natefinch> man that was a pain in the butt.... totally not anything to do with charming, just.... random programming crap
<fuzzy> The other side of the cutting edge
<fuzzy> You gotta watch it, it's sharp and loves the taste of blood, sweat, and most of all, frustration
<natefinch> and it only takes 20 minutes to install.... geez.
<fuzzy> I made two linode scripts that make manual provisioning and deployment a lot easier
<fuzzy> I've got bootstrap with drone nodes down to less than 10 minutes
<natefinch> what's funny is that this is a docker image I'm deploying, I would expect it to be super fast, but it looks like they're doing a boatload of stuff when they start the docker instance, so.... yeah.
<aisrael> It looks like I'm running into an issue with nfs and the local provider. Any known issues with that?
<mbruzek> aisrael, Yes
<mbruzek> aisrael, lazyPower has indicated that nfs on LXC does not work but I googled this problem and found several things that claimed to be workarounds.
<aisrael> I found a related bug that lazyPower commented on, but that workaround doesn't seem to work (https://bugs.launchpad.net/charms/+source/nfs/+bug/1251619)
<mup> Bug #1251619: nfs charm fails in install hook <nfs (Juju Charms Collection):New> <https://launchpad.net/bugs/1251619>
<mbruzek> aisrael, I tried what was described here: http://technuts.tru.my/2013/08/28/how-to-mount-nfs-in-lxc-container/
<mbruzek> But I was not able to get it working, so I moved on.  Theoretically, if something needs to be done on the client side, the charm could run those commands.
<aisrael> Ok. I'm going to try to get it working, and then add a caveat to the nfs README so it's documented.
<mbruzek> The web page suggests doing something on the server side too.
<aisrael> Fixing apparmor on the lxc host is a fix i've seen a couple times, but hasn't worked for me (yet)
<mbruzek> aisrael, if anyone can get it working you would be the man to do it!
 * mbruzek is interested in a solution too.
<aisrael> heh, thanks for the vote of confidence :D
<mbruzek> aisrael, if you get it working please let me know. That has been a thorn in our sides for quite some time, and is preventing us from writing some charm tests.
<aisrael> mbruzek: ack. I'll find a way to make this work.
<mbruzek> aisrael, are you running both the host and server in LXC or just one?
<mbruzek> s/host/client
<aisrael> host is running in a vagrant vm, hosts in lxc.
<aisrael> s/hosts/clients
#juju 2014-09-17
<Odd_Bloke> How do tests get run against charms; trying to understand how I can (a) include unit testing in test runs, and (b) replicate what is done elsewhere.
<lazyPower> Odd_Bloke: there is a project called bundletester which is pip installable (in a venv) - and it sniffs out the tests in the charm
<lazyPower> Odd_Bloke: the unit test setup is different between charms, so bundletester evaluates a makefile to find the test targets
<lazyPower> Additionally, if you have the package `charm-tools` installed - you can issue 'charm add tests' and it will setup a test scaffolding for integration tests using amulet.
<Odd_Bloke> lazyPower: Thanks!
<lazyPower> No problem
<Odd_Bloke> I'm hitting a traceback with bundletester: https://gist.github.com/anonymous/389baade22b077abd190
<Odd_Bloke> The tests are those generated by "juju charm create".
<lazyPower> tvansteenburgh: are we tracking bundletester bugs on github or launchpad?
<tvansteenburgh> lazyPower: let's use github
<lazyPower> Odd_Bloke: can you file a bug against the project here: https://github.com/juju-solutions/bundletester
<Odd_Bloke> https://github.com/juju-solutions/bundletester/issues/2
<lazyPower> Thanks Odd_Bloke
<tvansteenburgh> Odd_Bloke: for a workaround you might try downgrading juju-deployer to 3.8
<tvansteenburgh> Odd_Bloke: i don't have time to test it right now but that's what i'd try first
<Odd_Bloke> Trying that now.
<Odd_Bloke> tvansteenburgh: lazyPower: Same traceback.
<tvansteenburgh> Odd_Bloke: gimme a min, i'll give it a try
<Odd_Bloke> tvansteenburgh: Thanks. :)
<natefinch> man, digital ocean is so much faster than AWS
<natefinch> deploying my discourse charm takes 21 minutes on AWS and 7 on DO
<marcoceppi> natefinch: welcome to the world of SSDs
<tvansteenburgh> Odd_Bloke: i haven't been able to repro that error. i have some other things i need to do atm but will circle back
<marcoceppi> natefinch: also, congrats on the deployment
<marcoceppi> natefinch: finally, haha at it taking forever with docker :P
<natefinch> marcoceppi: right?  It looks like they do a crapload of stuff after the docker image is deployed... no idea why
<marcoceppi> natefinch: because why bother using imaged based workflows ;)
<natefinch> marcoceppi: "I: relation website has no hooks"  .... I don't actually know if I should have a hook for that relation or not?  is there something I would need to do with that?  Are there docs I can look at for what that should do?
<marcoceppi> natefinch: no docs, just examples of charms that implement that relation at the moment. We're working on a way to document relations
<natefinch> marcoceppi: IIRC we talked about documenting relations a year ago ;)
 * marcoceppi gets the long list of core features that were discussed a year ago and don't exist
<marcoceppi> ;)
<lazyPower> zing!
<marcoceppi> natefinch: here's an example http://bazaar.launchpad.net/~charmers/charms/precise/wordpress/trunk/view/head:/hooks/website-relation-joined
<natefinch> marcoceppi: ok, so it looks like I could just copy and paste that, right?
<marcoceppi> natefinch: and replace the port with the right value
<marcoceppi> typically it's 80, depends on the service
<natefinch> marcoceppi: what's the right value?
<marcoceppi> natefinch: whatever port the service runs on
<natefinch> marcoceppi: so just the value the service is listening on?
<natefinch> marcoceppi: ok
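The wordpress example linked above boils down to publishing a hostname and port on the relation. A hedged sketch of such a hook in Python — the runner and address lookup are injected so the snippet can be exercised without juju's `relation-set`/`unit-get` hook tools on the path:

```python
import subprocess

def website_relation_joined(port, run=subprocess.check_call,
                            get_address=lambda: '10.0.0.1'):
    """Publish where this unit's HTTP service can be reached.
    In a real hook, get_address would call `unit-get private-address`
    and run would invoke the real relation-set hook tool."""
    run(['relation-set',
         'hostname=%s' % get_address(),
         'port=%d' % port])

# Exercise with a recording fake instead of the real hook tool.
calls = []
website_relation_joined(80, run=calls.append)
assert calls == [['relation-set', 'hostname=10.0.0.1', 'port=80']]
```

As marcoceppi says, the only charm-specific part is the port: whatever the service actually listens on.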
<lazyPower> jcastro, mbruzek: https://github.com/juju/docs/pull/171
<jcastro> I don't like the term "user space charms"
<jcastro> Just say "charms in your personal namespace"
<jcastro> otherwise that's like another new term people won't understand
<lazyPower> jcastro: updated
<rick_h_> jcastro: or even just your published charms?
<jcastro> yeah
<rick_h_> jcastro: lazyPower just to start to beat the drum on the publishing idea as we move that forward.
<lazyPower> rick_h_: I'm going to bake you a cake when 'publish' lands.
<lazyPower> and i'll send a print copy of the reworked doc page when that happens.
<natefinch> space charms sound pretty cool though
 * natefinch whistles the star trek theme
<josepht> to boldly deploy where no charm has gone before
<natefinch> jcastro: maybe I'm not understanding what you mean by version specific stuff in charms.  Actions won't ever work in 1.18, therefore it's a version specific feature in a charm, isn't it?
<jcastro> right
<jcastro> I think having version specific anything in charms is a bad idea
<natefinch> but my point is, if there's ever a new feature we want to support in a charm, it'll be version specific
<jcastro> right
<jcastro> I think if you were to tell a charm author
<natefinch> and we need new features, unless you want to tell Mark S we can't do actions :)
<jcastro> "check out these new features, but 1.20 only"
<jcastro> my answer would be "well I'm not going to add it to my charm until someone asks for it."
<jcastro> I would tell people "upgrade to 1.20"
<jcastro> I wouldn't make my charm more complex
<natefinch> jcastro: I guess I look at it differently.  I'd see all the new functionality in 1.20 and say "hey, this could make my charm a lot more useful to people", so I'd update it and tell people to get off their butts and upgrade ;)
<jcastro> sure
<jcastro> until you realize that most people are and will remain on 1.18
<jcastro> I would just be like "this charm requires 1.20"
<natefinch> jcastro: that's what we're doing
<jcastro> yeah but this lets you pick and choose afaict.
<natefinch> jcastro: in implementation, yes, but in actuality... feature X and Y will require 1.20, and feature Z will require 1.21, so if you use all three, your charm effectively requires 1.21
<jcastro> right, so why not just something simpler like "> 1.21"
<jcastro> like how packages do it?
<jcastro> I feel like the granularity will just end up with different permutations for everything
<jcastro> in the worst place to have it, the store
<natefinch> I think that's perfectly valid.  It wasn't my design.... I think it was William's... who is conveniently out today :)
<natefinch> I can guess about why it was done this way - so that if you backport a feature, the charms that use that feature all of a sudden work in the old version, rather than having a hard-coded revision number
<jcastro> well, I think of it this way
<natefinch> but it is significantly less obvious to the end user what versions will support any particular charm... they have to go look up the documentation for each feature and see what version it supports
<jcastro> we know X version of juju is in Ubuntu version Z
<jcastro> those charms are series'ed
<jcastro> so I know on precise I'll have Juju version X with charm version Z
<jcastro> when I move to trusty, I'll have X+1 and Z+1
<jcastro> the charm might or might not have new features based on what corresponding version of juju comes with that new version of ubuntu
<jcastro> basically, charm series are already tied to juju core versions we ship
<jcastro> so if I want to use juju actions I'll use utopic charms
<jcastro> but I wouldn't update the precise charm to use actions
<natefinch> that's a very interesting point
<jcastro> and if you use a PPA on an older series
<jcastro> blam, you use the newer series charm as well, even on the old distro
<jcastro> then it's easier for us to say
<natefinch> the only problem then, is that you're now requiring that charms with new features use a non-LTS host OS
<jcastro> "if you want to use actions, you need to put them in charms utopic and newer"
<jcastro> not really, we're only doing LTS series anyway
<natefinch> wait, so are you saying you'd deploy the utopic charm to Trusty?
<jcastro> well we don't have utopic charms, that was an example
<jcastro> it'd be whatever the next LTS is
<natefinch> ok, so no new features until 2016 then?
<jcastro> land as many as you want
<jcastro> but like the "trusty" and "precise" stores are pretty much locked to the published version of juju in them
<jcastro> which is why in the mail I think moving to getting newer juju's backported to those is a nicer win
<jcastro> because then you'd bring them up along with you instead of having them hold you back
<jcastro> (I realize that's also another set of problems)
<natefinch> yeah... believe me, we'd love to have 1.20 in precise and trusty
<natefinch> I think we're working on getting it into trusty, actually
<jcastro> I just really think having version fragmentation in the store will be bad
<jcastro> I mean, the hard truth is our customers and users use LTSes, and don't want radical change
<jcastro> shifting the change from juju to the store seems just as bad to me, if not worse
<jcastro> "don't worry, juju itself won't change, now we'll just put all the churn in the charm instead!"
<natefinch> the problem is really that we just can't force people to upgrade
<natefinch> I'm sure there are people out there running merrily along with a 1.14 juju server, and someday they're going to say "hey, you know what? I want X new charm" and do juju deploy foo ... and it'll not work, in some weird and non-obvious way
<natefinch> the point of this code is to make it not work in an obvious and immediate way
<marcoceppi> woo hoo! pprint lands in trunk https://github.com/juju/juju/pull/757
<marcoceppi> juju status --format=oneline
<natefinch> nice!
<aisrael> lazyPower: are there test lab credentials re https://bugs.launchpad.net/charms/+bug/1325700 that you can share, or should I assign it to you to re-review?
<mup> Bug #1325700: New Charm: Dell OpenManage Server Administrator (OMSA) <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1325700>
<aisrael> Has relation-list been deprecated?
<arosales> aisrael: not sure, but be a good question to pop into #juju-dev
<aisrael> ack
<arosales> got a DB question from the folks working on the Zend charm.
<arosales> "When I add MySQL relation I need 2 seperate DBs. One for Zend Server cluster data and one for the application. When I ask for "db_db=`relation-get database`" I get a database name "zend-server". Is it possible to get another DB from the same MySQL unit for Magento?"
<arosales> hints/tips/tricks on how to create two DBs from one relation?
<fuzzy_> in SQL it's called views
<fuzzy_> where you can build a view of a dataset
<fuzzy_> But I think your question is more Zend related :)
<arosales> fuzzy_: thanks for the comments. I may have to see how the zend folks are trying to use this in their charm.
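One common workaround for arosales' question is to create the extra database from the hook itself, using the credentials the relation already granted — assuming those credentials have enough privileges, which the mysql charm may not grant by default. A sketch with the SQL execution injected so it can run without a server:

```python
def ensure_second_db(execute, name='magento'):
    """Create an extra database alongside the one the relation
    provides.  `execute` stands in for a real DB-API cursor.execute,
    connected with the credentials from relation-get."""
    execute("CREATE DATABASE IF NOT EXISTS `%s` "
            "CHARACTER SET utf8" % name)

# Record the statement instead of hitting a live mysql.
statements = []
ensure_second_db(statements.append)
assert statements == [
    "CREATE DATABASE IF NOT EXISTS `magento` CHARACTER SET utf8"]
```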
<fuzzy_> If you have sql questions, feel free to ask
 * arosales taking a look at the review queue
 * arosales reviewing nginx
<arosales> filed https://github.com/marcoceppi/review-queue/issues/9  -- no auto charm testing comment
<jcastro> if I have a directory in my charm
<jcastro> say "files/"
<jcastro> where I want to put like random config files I want on the units
<jcastro> when calling it from say an install hook, what's the pwd?
<jcastro> will "cp -f files/foo destination" work?
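On jcastro's question: hooks run with the charm directory as the working directory (it is also exposed as `$CHARM_DIR`), so a relative `cp -f files/foo destination` should work; referencing the variable is more explicit. A small self-contained illustration — the paths here are temp-dir stand-ins for a real charm tree:

```python
import os
import shutil
import tempfile

# Stand-in for a charm tree: $CHARM_DIR/files/foo
charm_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(charm_dir, 'files'))
with open(os.path.join(charm_dir, 'files', 'foo'), 'w') as f:
    f.write('config contents\n')

os.environ['CHARM_DIR'] = charm_dir  # juju sets this for real hooks

# What an install hook would do: copy a bundled file into place
# (dest here is a temp path standing in for e.g. /etc/foo.conf).
src = os.path.join(os.environ['CHARM_DIR'], 'files', 'foo')
dest = os.path.join(charm_dir, 'foo.installed')
shutil.copyfile(src, dest)

with open(dest) as f:
    assert f.read() == 'config contents\n'
```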
<arosales> mbruzek: can I get you to kick off a tests for the items in the review queue?
<arosales> s/a//
<mbruzek> sure
<arosales> mbruzek: specifically the ones that don't have a test status
<mbruzek> arosales: when I tried this yesterday it did not work for me let me try again today.
<arosales> ah ok, let me know if it still fails
<mbruzek> arosales: I did the Review Queue with Jose yesterday and told him about the new feature.
<mbruzek> But I was unable to kick new tests off
<mbruzek> arosales: I was incorrect, it does seem to work today.
<arosales> mbruzek: ah ok, good to hear
<mbruzek> specifically which ones?
<mbruzek> arosales: all of them in the queue?
<mbruzek> kicked them all off.
<arosales> yes, I think they all need a test before ack
<arosales> mbruzek: thanks
<ayr-ton> Two quick questions, maybe it's good to have these on askubuntu later. In the future will it be possible to add relations between environments? And a friend of mine asked about official support for centos deployments.
<alexisb> ./join #docker-dev
<ayr-ton> alexisb, docker-dev?
<alexisb> ayr-ton, yeah I added the "." by mistake
<ayr-ton> Ah, ok.
<arosales> ayr-ton: re your question, it is on the road map to have juju have relations between environments.  Its been dubbed cross environment relaitons.
<arosales> s/relaitons/relations
<arosales> charmer needed to update https://code.launchpad.net/~jorge/charms/trusty/mysql/fix-proof/+merge/234020
<ayr-ton> interesting.
<arosales> it's been superseded by https://code.launchpad.net/~a.rosales/charms/trusty/mysql/add-default-keys/+merge/235064
<arosales> charmer also needed to update the MP status on https://code.launchpad.net/~tvansteenburgh/charms/precise/block-storage-broker/fix-tests/+merge/234168
#juju 2014-09-18
<bradm> anyone about?  I'm having issues with getting a juju bootstrap to work, looking at the debug logs its in a loop around trying to ssh in, even though the ssh command it's trying to use it working when I do it manually
<sarnold> is there anything in the unit logs?
<bradm> well, no, because juju can't seem to talk to the unit
<bradm> there's no juju installed, no evidence that juju has done anything to it
<sarnold> but you could ssh in manually and look, right?
<bradm> this is using juju+maas, so the instance is a physical machine
<bradm> sure can
<bradm> there's no /var/lib/juju or /var/log/juju on the unit, there's no juju package installed
<bradm> oh hang on, the bootstrap node has juju 1.20, but the unit can only see 1.18, that doesn't seem good
<sarnold> you've rapidly gone beyond my experience :) but maybe the --upload-tools thing can help?
<bradm> maybe, and if I can add the juju stable ppa to the maas preseed as well, might help
<bradm> if I create the /var/lib/juju/nonce.txt by hand, everything starts moving forward
<ayr-ton> panic: Unexpected <nil> in storage-port: <nil>. Expected float64 or int. When juju status. juju 1.21-alpha1-trusty-amd64
<ayr-ton> Someone knows how to fix it?
<jose> marcoceppi: hey, I'm having this problem, "Failed to request test // Only charmers can initiate reviews"
<Xiaoqian> nobuto: ping
<zaargy> does juju autoheal services? if so can someone point me to how it works? thanks
<Odd_Bloke> Is there a programmatic way of destroying a service without having to guess when to resolve errors?
<Odd_Bloke> By "programmatic" I mean "I'm writing a shell script using the Juju CLI".
<Odd_Bloke> Use case is this: I want to spin up a service and if it fails to deploy, remove it completely.
<Odd_Bloke> (Such that deploying again won't get an 'already deployed' error)
<Odd_Bloke> (Side note: is there documentation of the possible agent states?)
<Mmike> Odd_Bloke, can't you destroy your environment and try again?
<Odd_Bloke> Mmike: These are ephemeral services which I would expect to be deployed in to an existing environment.
<Mmike> Odd_Bloke, from what I can see you can only do 'destroy-machine' with --force option - that would remove all units deployed on that machine
<Odd_Bloke> Mmike: Cool, I'll give that a try. Thanks!
<Mmike> Odd_Bloke, sure thing - let us know if that worked for you
<Odd_Bloke> Will do.
<Odd_Bloke> If I change configuration options within hooks, should I expect the changed values to show up in a 'juju get'?
<rcj> Odd_Bloke, in my experience, no.  If a unit changes a service config value, that change is not reflected to other units or to the juju CLI/GUI
<rcj> I'm having that problem too.
<rcj> Is there a way to surface arbitrary data/status from a charm back to the cli?
<Odd_Bloke> "juju ssh cat ..."? ¬.¬
<Odd_Bloke> Mmike: A first test suggests that blowing away the machine works. :)
<lazyPower> Odd_Bloke: juju destroy-machine # --force
<lazyPower> then juju destroy-service
<Odd_Bloke> Cool, that's working for me.
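The workaround Odd_Bloke confirmed can be scripted. A sketch with the juju CLI invocation injected so the flow can be checked without a live environment; the lookup of which machines host the service is elided and passed in as an argument:

```python
def force_remove(service, machines, run):
    """Workaround for units stuck in a hook error: force-destroy the
    machines first, then destroy the service so it can actually leave
    the 'dying' state.  `run` stands in for invoking the juju CLI."""
    for machine in machines:
        run(['juju', 'destroy-machine', '--force', machine])
    run(['juju', 'destroy-service', service])

# Check the call sequence with a recording fake instead of real juju.
calls = []
force_remove('mysql', ['3'], calls.append)
assert calls == [
    ['juju', 'destroy-machine', '--force', '3'],
    ['juju', 'destroy-service', 'mysql'],
]
```

In a real script, `run` would be `subprocess.check_call` and the machine list would come from parsing `juju status`.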
<lazyPower> mwenning: ping
<mwenning> lazyPower, pong
<lazyPower> I know this is low priority atm, any updates on the dell cluster?
<mwenning> lazyPower, dell cluster?
<lazyPower> mwenning: apologies for being the master of no context. The Dell OpenManage server charm was pending tests when I left off.  I believe kent handed this off to you?
<mwenning> lazyPower, ah.   yes he did.  Looks like Adam Israel looked at it yesterday and said the charm tests pass cleanly.
<mwenning> My TODO list today says to contact you and ask what the next steps are :-)
<lazyPower> aisrael: i haven't looked at the charm since kent and I last spoke. the tests that passed cleanly were unit tests I'm assuming?
<aisrael> No. Charm proof passed cleanly, but I was unable to run the tests
<mwenning> aisrael, why couldn't you run the tests?
<lazyPower> mwenning: i'm one of the few people that has access to that lab, and even then i had a heck of a time getting in
<aisrael> Looking back through my notes. I see: 2014-09-17 15:33:16 INFO config-changed Starting Systems Management Data Engine:
<aisrael> 2014-09-17 15:33:16 INFO config-changed Failed to start because system is not supported
<mwenning> lazyPower, aisrael, I see.  OK let me play with it.  aisrael, what system were you using?
<aisrael> mwenning: Testing using the local provider under vagrant
<lazyPower> aisrael: it's not going to run there, as it depends on a specific dell package, installed to manage the hardware, that has access to control fans, etc.
<mwenning> oh.  Don't think OMSA will work there, you probably need real dell hardware
<lazyPower> its a hardware level api
<lazyPower> yeah, i already went through all of this, which is what sparked the request for the lab
<mwenning> Maybe I can log a clearer error message
<aisrael> Right, that's what I figured. I meant to re-assign the ticket to you to review, but lp wouldn't let me
<lazyPower> aisrael: no worries. I've had this on my recurring todo list for a couple weeks now since we've had a bit of back and forth about it
<mwenning> kentb, good morning!
<kentb> mwenning: mornin!
<mwenning> They are discussing your charm on the internal #juju -
<mwenning> <lazyPower> mwenning: any insight on if this works for other servers or if its limited to an implicit series such as the M R and G (i think)
<mwenning> I said it should work on any Dell server that supports OMSA, is this correct?
<kentb> correct, so, PowerEdge C is the only line that doesn't work, but, Dell will tell customers that as well.
<mwenning> kentb, cool, thx
<kentb> mwenning: np.  my pleasure :)
<lazyPower> kentb: o/
<kentb> o/
<mwenning> kentb, thx,  one more step closer...
<kentb> yep...it's been a long road on this one.  Thanks for helping out with carrying it to the goal line.
<lazyPower> kentb: we're literally feet away from the finish line. I need to evaluate the tests being run and wrap up on the quorum of implications this has on our CI and it's a wrap.
<kentb> lazyPower: awesome.  Thanks for all your help and review.
<lazyPower> No problem, thanks for being a responsive charm author :)
<lazyPower> and leaving me in good hands post-facto. mwenning deserves a beer.
<kentb> you are certainly welcome. mwenning does deserve a beer.  Fortunately, he's just down the road from me.
<jose> jcastro: updating the Ubuntu on Air! calendar
<natefinch> jcastro: got a minute to talk about feature flags?
<mbruzek> Does anyone in #juju know what charm can relate to couchdb's "db" relation?
<marcoceppi> natefinch: I do
 * marcoceppi was off yesterday
<natefinch> ahh cool
<natefinch> so we had a juju core team lead meeting this morning, and we talked about feature flags for a good proportion of our hour long meeting
<marcoceppi> fun!
<natefinch> what we decided on was that having a flag per feature was just going to get too unwieldy and too confusing for everyone, so we should just mark charms very simply as "needs juju x.y or higher"
<natefinch> and that perhaps this has some drawbacks compared to feature flags, but that they were worth the tradeoff just to make things easier for everyone to understand.
<marcoceppi> natefinch: I'm a +1 with needs juju x.y and if the field is omitted it's a safe-to-use charm
<natefinch> yep
<natefinch> cool
<marcoceppi> as that begins to land I'll make sure the review docs are cleared up to make sure people using newer features have this in metadata.yaml
<natefinch> yep, I'll sync with you so that we make sure the docs and the feature land together
<marcoceppi> natefinch: \o.
<lazyPower> this is re: healthchecks/actions/etc?
<natefinch> ish, yes.  Basically, a way to use features that will not be backwards compatible with older jujus without totally screwing the older jujus (i.e. deploy and nothing happens)
<natefinch> it'll also mean that if you make a new version of a charm and mark it with a minimum juju version, older versions of juju will still use the old charm, and not try to deploy the new one that won't work
<natefinch> ..... where new version means one for a newer series
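What the minimum-version idea discussed here might look like in `metadata.yaml`. The field name `min-juju-version` is an assumption at the time of this log (the feature is described above as still landing), so check the current docs before relying on it:

```yaml
name: my-charm
summary: Example of declaring a required juju version
description: |
  A charm using actions or other newer features would declare the
  oldest juju release it works with; if the field is omitted, the
  charm is treated as safe for any version.
min-juju-version: 1.21.0
```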
<lazyPower> awesome
<lazyPower> +1 from me
<natefinch> cool
<marcoceppi> natefinch: given how much of a proponent jcastro was to this yesterday you may want to double check with him, but I think this should appease him
<natefinch> marcoceppi: cool. Yes, I wanted to talk with him about it. Hopefully this will be good enough.  I'll make sure to catch up with him next time he's available.
<jcastro> I AM APPEASED!
<natefinch> haha
<jcastro> natefinch, sorry I didn't respond I just got back
<jcastro> Min version seems like the best approach to me
<natefinch> jcastro: how I viewed that last message: http://2.bp.blogspot.com/-FFpUo771xbc/UaNn0M9Ap3I/AAAAAAAAAWU/CBNxgExQRfg/s1600/51987-Are-You-Not-Entertained-1a5I.jpeg
<natefinch> jcastro: cool
<jcastro> min is nice and easy
<jcastro> because we can easily search one field
<natefinch> yep
<natefinch> and there's no magic to figuring it out
<jcastro> instead of "ok search for charms which support feature X"
<jcastro> then we'd have to have a feature-to-version grid, etc.
<natefinch> yup
<jcastro> hey marcoceppi
<jcastro> you were going to generate a tarball of all the charms for the OBs right?
<marcoceppi> hey jcastro
<jcastro> so we don't have to wait all day for charm get all, iirc
<marcoceppi> jcastro: I do do this, yes
<jcastro> link?
<marcoceppi> jcastro: http://people.canonical.com/~marco/mirror/juju/charmstore/
<marcoceppi> it needs to be refreshed now that I look at it
<marcoceppi> un momento
#juju 2014-09-19
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/quantum-gateway/fixup-utopic/+merge/235254
<jamespage> fixup one for juno
<jamespage> juno on utopic that is
<gnuoy> jamespage, \o/   looking now
 * fabrice going for lunch with friends
<JoshStrobl> hey arosales_ still planning on making some of those modifications to the juju/docs?
<jcastro> hey, who touched openerp last?
<jcastro> http://www.theopensourcerer.com/2014/09/how-to-install-openerp-odoo-8-on-ubuntu-server-14-04-lts/
<arosales_> JoshStrobl, yes I need to get rolling on the charm guidelines. There were also some ideas to overhaul the charm walkthrough
<lazyPower> o/ JoshStrobl
<lazyPower> Juju Users! How many of you are using DO? We could use YOUR votes! https://www.digitalocean.com/community/projects/juju-digitalocean-provider
<arosales> charmer needed for simple review on icon + category
<arosales> https://code.launchpad.net/~a.rosales/charms/precise/puppet/add-charm-icon-category/+merge/234359
<arosales> https://code.launchpad.net/~a.rosales/charms/precise/puppetmaster/add-icon-category/+merge/234362
<arosales> little self promotion, but coming up next on the review
<lazyPower> ack, thanks arosales - will get these jammed out
<lazyPower> aisrael: have a moment to talk about https://code.launchpad.net/~aisrael/charms/trusty/vem/lint-cleanup/+merge/234406? you do? awesome!
<aisrael> lazyPower: bring it
<lazyPower> When you tested this, were you attempting to test against LXC inside of Vagrant? is that the core of the issue you outline in your comment
<aisrael> Correct. As I've discovered the past few days, there's some trickiness to getting some modules to work in the local environment.
<lazyPower> yeah, especially this charm
<lazyPower> the VEM charm is a software routing charm that bolts into openstack
 * aisrael needs to add a caveats section to the local provider documentation
<aisrael> So all I did in the end with that charm is clean up the lint errors and move the tests to run under a virtualenv
<lazyPower> i see this. do you have a cloud environment you can use to validate tests?
<lazyPower> I typically run tests locally, if they are totally fubar'd i'll attempt at least once in a cloud provider - (aws, or HPCloud) - and if it fails there then i nack it.
<lazyPower> niedbalski: ping
<aisrael> lazyPower: Yeah, I do now. I have DO working (and should have AWS soon)
<lazyPower> ok. just a bit of a caveat when kicking off tests is we can't rely solely on the local provider as charmers. +1 for the feedback and the merge. I'm going to follow up with niedbalski to kick these tests off in the lab he has access to for the cisco stuff.
<lazyPower> thanks again aisrael
<aisrael> ack, good point
<arosales> charmers, some other reviews in the queue that have a community review and/or small change that could use a quick look:
<arosales> https://code.launchpad.net/~tvansteenburgh/charms/precise/block-storage-broker/fix-tests/+merge/234168
<arosales> https://code.launchpad.net/~a.rosales/charms/trusty/mysql/add-default-keys/+merge/235064
<arosales> https://code.launchpad.net/~jorge/charms/trusty/mysql/fix-proof/+merge/234020
<arosales> https://code.launchpad.net/~wesmason/charms/precise/mongodb/fix-volumes/+merge/234670
<arosales> jcastro: fyi on https://code.launchpad.net/~jorge/charms/trusty/elasticsearch/remove-sets-metadata/+merge/234358 there is an outstanding question from noodles on the sets
<jcastro> I remember this
<jcastro> marcoceppi, you told me this was old metadata, can you confirm?
<arosales> lazyPower:  for some reason auto charm testing didn't pass on https://code.launchpad.net/~a.rosales/charms/trusty/mongodb/jay-wren-faster-install/+merge/234241 but the log is not available. So I am going to re-run locally.
<lazyPower> arosales: ack. ty for taking a look at it.
<arosales> lazyPower: np. I'll let you know what I find.
<arosales> lazyPower: if you have some time during your review I also posted some reviews above that have some community reviews on them and/or small changes that could use a charmer to take a look.
<lazyPower> arosales: weren't those tests giving you a headache at the sprint, and you needed/wanted me to take a look at them?
<lazyPower> ack, will look at this directly after gitlab. nearing completion of the MP testing
<arosales> lazyPower: I thought I got some help on them, but let me see what bundle tester is saying.
<marcoceppi> jcastro: which is old metadata? sets?
<jcastro> yeah
<jcastro> the one you had me remove
<jcastro> that's what the MP is about
<arosales> marcoceppi: this is related to the MP @ https://code.launchpad.net/~jorge/charms/trusty/elasticsearch/remove-sets-metadata/+merge/234358
<marcoceppi> arosales: right, I've left a comment
 * arosales fail should have checked first
<arosales> marcoceppi: thanks!
<arosales> marcoceppi: per your comment in https://code.launchpad.net/~jorge/charms/trusty/elasticsearch/remove-sets-metadata/+merge/234358 could you ack that MP?
<jcastro> wow, pyju, that is old
<lazyPower> whoa, first time i've seen this out of the gitlab charm in MONTHS
<lazyPower> http://ec2-54-167-177-28.compute-1.amazonaws.com/
<lazyPower> hi5 to the contributor that fixed this
<lazyPower> mbruzek: good looking out, i think you empowered this fix. https://bugs.launchpad.net/charms/+source/gitlab/+bug/1360594
<mup> Bug #1360594: install hook failed - curl init.d/gitlab does not handle redirect <audit> <gitlab (Juju Charms Collection):Fix Released by martin-hilton> <https://launchpad.net/bugs/1360594>
<mbruzek> lazyPower: both curl -L and wget worked.  Thanks lazyPower
<mwenning> hi juju team, I'm trying to bootstrap juju on a maas node here, I see the node come up and then disconnect from the network
<mwenning> There is an error on the console Cannot find device "br0"  ; Error getting hardware address for "br0"
<mwenning> apparently the error causes it to not bring eth0 back up so I juju can't contact it after that
<mwenning> Anyone seen this?
<natefinch> mwenning: sorry, no idea, and searching for bugs doesn't bring up much
<natefinch> mwenning: maybe try in #maas ?
<natefinch> marcoceppi: do you have any time today to talk about testing charms?
<mbruzek> mwenning: check for syntax errors in /etc/network/interfaces
<mwenning> natefinch, thx.   I'm looking at #1271144, somewhat similar to what I'm seeing.
<mup> Bug #1271144: br0 not brought up by cloud-init script with MAAS provider <canonical-is> <cloud-installer> <landscape> <local-provider> <lxc> <maas> <regression>
<mup> <juju-core:Fix Released by axwalk> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Trusty):Confirmed> <https://launchpad.net/bugs/1271144>
<mbruzek> mwenning: I had a problem where eth0 would not come up on its own and it turned out to be an incorrect key in that file.
<mwenning> natefinch, I can actually start the node up OK in maas
<marcoceppi> natefinch: I do
<marcoceppi> I always have time for charm testing
<natefinch> mwenning: what version of juju are you using?
<mwenning> natefinch, 1.20.7
<natefinch> mwenning: nice
<natefinch> marcoceppi: mostly I have no idea what I'm doing testing a charm.  Like... do I really have to install the thing?  That seems like it's destined to be a problem, especially for a charm like mine that takes 20+ minutes to install, depending on where it's being deployed
<natefinch> mwenning: at the bottom of that bug, a developer is asking if the primary network interface is called something other than eth0... if so, you need to set the network-bridge attribute in environments.yaml (to whatever the primary interface is)
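For reference, the setting natefinch mentions would sit in environments.yaml roughly like this (the interface name `em1`, addresses, and keys are illustrative placeholders, not values from this conversation):

```yaml
# environments.yaml (illustrative excerpt)
environments:
  maas:
    type: maas
    maas-server: 'http://10.0.0.1/MAAS/'
    maas-oauth: '<maas-api-key>'
    # if the primary NIC is not eth0, name it here so the
    # bridge gets created on the correct interface
    network-bridge: em1
```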
<marcoceppi> natefinch: so there are varying forms of testing. A blanket that will cover all types of charms is integration testing which does actually deploy the charm and there are a few frameworks that exist to interact with a deployed environment to verify it's working correctly (a la Amulet). Then there's just plain old unit testing which works as well
<natefinch> marcoceppi: I guess the "has some tests" requirement for getting promulgated as a trusty charm is just very vague to me.  I'm sure that's at least somewhat on purpose.  I can definitely do several unit testy things pretty easily.  But the key "does juju deploy actually work?" test is going to take 20+ minutes to run  if it's run on amazon.   I don't know a way around that.
<mwenning> natefinch, primary network meaning the one that maas is on?   I have eth0 going out to the internet, eth1 is the internal maas network
<marcoceppi> natefinch: well, the minimum requirement for testing is unit tests (this is the golang charm?). Having integration tests is great as we can make sure it works constantly. That, and we have an automated testing framework that we're running charms through
<mbruzek> mwenning: if br0 is disconnecting we need to figure out why.  Where do you see that message?
<mwenning> mbruzek, on the node console.
<mbruzek> mwenning: and you do not see that happen on a normal maas start ?
<mwenning> mbruzek, correct.
<mwenning> unfortunately it rolled off the console screen now, I can run it again and try to write down the details
<natefinch> marcoceppi: yeah, this is for the go discourse charm.  I can certainly hook up some Go unit tests to the charm tests.  Charm tests just need a *.test executable under the tests directory, right?  I can put a script in there to install go and run go test.
<mwenning> mbruzek, looks like some script does an ifdown on eth0, then tries to configure br0, fails, and never brings eth0 back up again
<marcoceppi> natefinch: yeah, the pattern for how testing works is mutating actually, now we have more entry points. Like if you have a Makefile with a test or unit_test target it'll get run
<marcoceppi> natefinch: but any executable file (not just files with .test) will be executed with the testing harness as well
<natefinch> marcoceppi: makefiles, cute :)
<mwenning> mbruzek, I'll start it up again and get more info
 * marcoceppi makes a note to update testing docs
<natefinch> marcoceppi: so any executable in the tests directory?
<marcoceppi> natefinch: yes
<marcoceppi> so many people complained about .py extensions that it opened to all executables
<natefinch> haha
 * marcoceppi rolls eyes
<mbruzek> mwenning: I would be surprised if juju was doing the ifdown.  I think you have something wrong there that I just don't have enough information on yet.
<marcoceppi> natefinch: so you could, if you want to forego a makefile, put 100-unit.test which was just +x and the bash required to unit test the hooks
<marcoceppi> that'd be sufficient for the testing bits. We've gotten a lot of feedback about standing up a charm and environment each time you want to test
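A file like the `100-unit.test` marcoceppi describes might look like the sketch below. The filename and the `parse_port` helper are made up for illustration; the point is just that any `+x` file under tests/ gets run, assuming the usual convention that a nonzero exit status marks failure:

```python
#!/usr/bin/env python3
# Hypothetical tests/100-unit.test -- any executable file under tests/
# is run by the harness; exit 0 means pass, nonzero means fail.
import sys

def parse_port(raw, default=8080):
    """Example hook helper under test: tolerate bad config values."""
    try:
        port = int(raw)
    except (TypeError, ValueError):
        return default
    return port if 0 < port < 65536 else default

def main():
    checks = [
        parse_port("80") == 80,
        parse_port(None) == 8080,      # unset config falls back
        parse_port("999999") == 8080,  # out-of-range falls back
    ]
    if not all(checks):
        sys.exit(1)  # nonzero exit tells the harness the test failed
    print("ok")

if __name__ == "__main__":
    main()
```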
<natefinch> marcoceppi: actually, I had a thought,  I think I can precompile a testing executable and check it in
<marcoceppi> natefinch: that'd be badass
<natefinch> marcoceppi: haha... it even defaults to being called .test
<marcoceppi> natefinch: it was meant to be ;)
<mwenning> mbruzek, it'll be a few minutes to come back up.  If I hit the timing just right I think I can enable the console login and get more info
<natefinch> marcoceppi: sweet, now all I have to do is write the tests.
<marcoceppi> natefinch: hah, no sweat! ;)
<arosales> lazyPower: my mongo tests keep timing out on me http://paste.ubuntu.com/8381939/
<arosales> I may have to go in and increase their timeout value
<natefinch> good old mongo.... destroyer of timeouts
<mwenning> mbruzek, https://pastebin.canonical.com/117305/
<mwenning> don't know if that enlightens at all..
<lazyPower> arosales: ack. let me take a look at it when i wrap the BD work i'm in the middle of
<lazyPower> if you're seeing timeouts, then CI will be, and everyone else will be too.
<arosales> lazyPower: thanks
<mbruzek> mwenning: This is outside my area of expertise.  There is a br0 problem, but I can not tell where.
<mwenning> mbruzek, ok thx.    I'll keep at it.   I'm still trying to get in thru the console.
<lazyPower> mwenning: whats in /etc/network/interfaces?
<lazyPower> do you have a bridge device specified in there? or elsewhere that would be recognized by the system?
<mwenning> lazyPower, it just defines lo, eth0 as dhcp (external internet), and eth1 as static (internal maas network)
<mwenning> no bridges
<lazyPower> that's why it can't find the bridge device br0
<aisrael> Hmm. Looks like the cassandra charm is broken.
<lazyPower> aisrael: quoruming?
<aisrael> lazyPower: http://pastebin.ubuntu.com/8382625/
<lazyPower> mwenning: http://paste.ubuntu.com/8382626/
<aisrael> Tracking it down now. Looks like it's using the wrong gpg key
<lazyPower> take a look there, i implicitly define my network bridge and tie it to a physical interface
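The config lazyPower pastes, with a bridge implicitly defined and tied to a physical interface, looks roughly like this (addresses follow his later description of a 10.0.10.x network and are illustrative, not copied from his paste):

```
# /etc/network/interfaces (illustrative sketch)
auto eth1
iface eth1 inet manual

auto br0
iface br0 inet static
    address 10.0.10.2
    netmask 255.255.255.0
    gateway 10.0.10.1
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
```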
<lazyPower> aisrael: thats new emergent behavior compared to what i've seen cause failures in the past. must be an updated GPG key
<lazyPower> carry on sir
<aisrael> lazyPower: testing the fix now, but is there somewhere upstream I can check to verify _why_ the gpg key changed?
<lazyPower> aisrael: Just the cassandra issue tracker, which i think is in jira
<aisrael> Yup, that fixed it
<arosales> lazyPower: since https://code.launchpad.net/~a.rosales/charms/trusty/mysql/add-default-keys/+merge/235064 was merged can you update https://code.launchpad.net/~jorge/charms/trusty/mysql/fix-proof/+merge/234020 to invalid or superseded so it drops off the queue?
<lazyPower> sure thing, sorry i missed that.
<arosales> lazyPower: no worries. I think you got most of the other low hanging bits --thanks :-)
<lazyPower> i did my best. a few of the changes were a bit in depth to review as no tests were in the charm, however - they were solid merges
<lazyPower> I'll take a look at MongoDB after BD - it may get moved to Monday
<lazyPower> depends on how late the BD merges take me into the night w/ bundles
<lazyPower> actually, lets make a card for that and I'll dive into it monday morning.
<arosales> lazyPower: I am still trying to get mongo tests not to time out :-/
<lazyPower> i bet i know whats going on there, its the sheer volume of the instances.
<arosales> lazyPower: there is still this one https://code.launchpad.net/~jorge/charms/trusty/elasticsearch/remove-sets-metadata/+merge/234358
<lazyPower> its deploying 8 or 9 servers iirc. and that can be trimmed down to a single shard, w/ replicas
<lazyPower> all the functionality would be done there
<arosales> but I thought marcoceppi was going to update this one since it was +1ed by the community and he commented
<arosales> but still needs a charmer to take action on it.
<arosales> lazyPower: failing for me in aws too
<arosales> marcoceppi: any objections to merging https://code.launchpad.net/~jorge/charms/trusty/elasticsearch/remove-sets-metadata/+merge/234358 ?
<lazyPower> ack. Let me wrangle it now that we have a clear path forward with how our CI infra handles testing.
<lazyPower> it works with juju test, but not bundletester is what i'm seeing, and its going to take some refactoring.
<lazyPower> arosales: i'm already on it.
<arosales> lazyPower: any recommendation on passing larger constraints to bundle tester?
<arosales> or increasing timeout?
<marcoceppi> arosales: none
<lazyPower> marcoceppi: merging and closing
<arosales> marcoceppi: lazyPower: thanks :-)
<lazyPower> arosales: i'm going to send another batch update with the work i've done in the BD stack this week re: rev queue update
<lazyPower> as thats been a majority of the work i've done since i've been back from the sprint
<arosales> lazyPower: ack and thanks for the reviews
<lazyPower> mwenning: any update on Dell OMSA today?
<lazyPower> now that i've waited until EOD time for you, let me poke you about something :P
<lazyPower> sorry about that, i'll try to remember to poke you earlier in the day on Monday if there's no change.
<mwenning> lazyPower, I've been trying to get this juju cluster up to run it.
<mwenning> np.
<lazyPower> :| anything I can do to assist you?
<mwenning> hmm, actually I managed to run it on another smaller cluster here, it came back with an error.
<mwenning> give me a second, I'll get it in a pastebin
<mwenning> https://pastebin.canonical.com/117311
<lazyPower> the deployment timed out
<lazyPower> the hourglass in the response code says amulet killed the setup before it finished
<lazyPower> mwenning: 2 things. 1) change line 16 to 1800,
<lazyPower> oh it has d.sentry.wait()
<lazyPower> so just 1 thing
<lazyPower> if you can do that and re-run, it should pass, it looks like it was nearly finished when amulet pulled the plug
<mwenning> d.sentry.wait() means wait forever?
<lazyPower> negative.
<lazyPower> d.sentry.wait() is an internal command to amulet, it says "halt progression until no further hooks are being fired in the environment"
<lazyPower> it assures that at least at one single ms during the deployment, no hooks are queued to be fired, and it has "settled"
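The "settled" behavior lazyPower describes can be sketched in plain Python. This is not amulet's actual implementation, just the polling semantics: block until a probe reports zero queued hooks, or give up at the timeout.

```python
import time

def wait_until_settled(queued_hooks, timeout=600, interval=0.5):
    """Poll until the environment reports no queued hooks ("settled"),
    in the spirit of amulet's d.sentry.wait(); a sketch, not the real code.
    `queued_hooks` is a callable returning the number of pending hooks."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if queued_hooks() == 0:
            return True  # settled: at this instant no hooks are queued
        time.sleep(interval)
    raise TimeoutError("environment did not settle within %ss" % timeout)
```

For example, with a fake probe that reports 3, then 1, then 0 pending hooks, the function returns True on the third poll.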
<mwenning> what's 2)  ;-)
<lazyPower> d.sentry.wait() was going to be 2
<lazyPower> but thats in there, i jumped the gun
<mwenning> k, so timeout=1800
<lazyPower> yep
<mwenning> running...
 * lazyPower crosses fingers
<marcoceppi> lazyPower: it's charm test pulling the plug
<marcoceppi> mwenning: ^
<marcoceppi> lazyPower mwenning charm test -e maas --timeout 1800 10-deploy.test
<lazyPower> ahhh
<marcoceppi> yeah, I'm going to bump the default for that when I do some work on amulet next
<marcoceppi> 600 is way too close
<marcoceppi> mwenning: also, -v may help illuminate more what's going on with charm test
<mwenning> marcoceppi, I changed the d.setup(timeout=1800) in tests/10-deploy.test, will that do the same thing?
<marcoceppi> mwenning: nope, this is the difference between the timeout for a test and for the test runner, slight distinction, in our jenkins infrastructure the charm test timeout is 2 hours, but the CLI defaults to 600 seconds.
<marcoceppi> amulet and charm-test are decoupled from each other so there's a bunch of different timeouts running around and no one is talking to each other
<mwenning> rats.  ok.  killing and starting again
<mwenning> lazyPower, when I try a variant of your  /etc/network/interfaces file, it comes up with the route default pointing to the maas server
<mwenning> so nothing gets routed out to the internet
<mwenning> Am I doing something else wrong?
<lazyPower> mwenning: well all of the networking config there is subject to modification for your specific use. In this case 10.0.10.1 is a router on my network, and 10.0.10.2 is my MAAS region/cluster controller.
<lazyPower> mwenning: http://paste.ubuntu.com/8383171/ here's my route table on that server
<lazyPower> not sure how much help thats going to be for you, but this is what i have setup
<themonk> after changing a config variable X and do something if for some other reason config-changed hook runs will i still get updated value of X?
<mwenning> lazyPower, ok, ours is set up a bit differently - eth0 is connected to the outside world, eth1 is connected to all the maas nodes.
<mwenning> maas server ip-forwards everything
<lazyPower> mwenning: in my specific setup, the br0 attached to eth1 is whats making my vmaas cluster visible to the network
<lazyPower> that interface handles the anding to reach my units.
<lazyPower> (is anding the proper term here? i think i just failed in networking lingo)
<lazyPower> i digress, it's handling the masquerade and forwarding
<mwenning> lazyPower, this is going to take some time, hope you're not hanging around for me.
<lazyPower> mwenning: nah, i'm around for other reasons :) poking you and helping where i can is a perk
<mwenning> ok preciate it.
<mwenning> my "small" cluster has 2 nodes, unfortunately it needs three for the test.  The "big" one is having all these problems, I'm trying to rebuild the maas server
<lazyPower> ahh
<lazyPower> mwenning: well if you've got other stuff to do this can wait. If its not an easy fire and forget thing
<lazyPower> i can lend a hand next week as well, i've got a few windows i can block out to help
<mwenning> lazyPower, I'm going to try to get it running a little while longer - I may leave it running over the weekend.
<lazyPower> ack. i'll be around about another hour before i dip out for the weekend
<mwenning> sounds about right.
<jose> marcoceppi: hey, I'm still having this 'only charmers can initiate reviews' prob on the revq
<marcoceppi> jose: it's a known issue
<marcoceppi> an update is going live soon
<jose> oh, ok
<jose> sorry about that, then
<marcoceppi> testing button is also broken ;)
<marcoceppi> but that's being fixed too
<jose> \o/
<jose> thanks for all your work!
#juju 2014-09-20
<marcoceppi> review queue is about to get a big update, it'll be down for about 20 minutes
<marcoceppi> jose: so, the testing button works, but the refresh profile (and refreshing of the queue is broken) so the button still won't work for you yet
<jose> marcoceppi: thanks! lemme check
<marcoceppi> jose: _won't_ work for you yet still
<jose> ah
<marcoceppi> yeah, hope to have it resolved soon
<jose> no worries
<jaywink> hi. any idea if there is any way to set some config values for charms which are not a part of the charm normal config? what I am thinking about is backing up postgresql backup dumps using the jujubackup charm, which relies on the backed up charm to have backup instructions as config. this is difficult in the case of say a charm from the store - would rather not fork all charms just to make them talk to jujubackup
<jaywink> or maybe if I add those values to postgresql charm and the correct hooks to interact with jujubackup, wondering if there is any chance of that being accepted into the main charm code?
<jaywink> well I guess there is only one way to find out :)
<jaywink> btw, added swift storage for jujubackup, works nicely in an openstack environment now for charm backups
<marcoceppi> jaywink: well, the postgresql charm already does regular dumps
<marcoceppi> and puts them in a standard location
<jaywink> marcoceppi, yeah, but would be nice to be able to deploy jujubackup as a subordinate and have the backups be zipped to swift for example .. I know it's easy to add a crontab entry :)
<marcoceppi> jaywink: well you could do that as it is, just deploy jujubackup and have jujubackup include logic to sniff out common charms backup procedures
<jaywink> but now reading more carefully postgres charm also has some kind of swiftwal backup functionality
<jaywink> marcoceppi, yes was thinking about that too. though I guess the idea behind jujubackup was (reading it) that it would not require config and the other charm would instead have the config. but it's a bit heavy for common charms in the store for example
<jaywink> any idea what is the risk level of doing a "juju upgrade-charm postgresql --switch --repository="/path"" operation on a postgresql charm deployed from the charm store, if the version of the charm is the same, eg I've branched the code out and just want to start using a local version?
<marcoceppi> jaywink: not much, I mean --switch is pretty hulk smashy but if you're using a modified version of the currently deployed charm not much harm
<marcoceppi> it's actually a great way to switch to a patched version of the charm while you wait for the store to be updated
<jaywink> ok thanks! I have backups ;)
<jaywink> hmm I wonder if anyone uses the SwiftWAL feature of postgresql charm? I don't get the crontab entry installed and looking at the code I can't see how it could work - the configs are never used to init the crontab part OR swiftwal config template. Been trying to look for a development version branch but no luck so far
<jaywink> any idea why charm_helpers_sync.py isn't installed as a script when installing charmhelpers from pip for example? I can see from the setup that it isn't, just wondering why - seems like it would be much easier to use :)
<lazyPower> jaywink: we *just* got charmhelpers in pip
<lazyPower> there's still a lot of work that needs to go into shoring it up. if you file a bug against charmhelpers that will help remind us that there are papercuts left around that are low hanging fruit
<lazyPower> http://launchpad.net/charmhelpers
<jaywink> lazyPower, ok thanks, figured there was probably no reason for it not being installed :) will file a bug
<jaywink> can't quite put my finger to why, but I tried to install some swiftwal requirements in the postgresql charm by adding charmhelpers.contrib.python and using that pip_install command - but it just doesn't do things the same way as I run a subprocess.call for the same packages in the same part of the charm .. but I couldn't really figure out quickly why, probably due to some versions conflicting which it was not able to solve but pip cli was able to solve
#juju 2014-09-21
<Odd_Bloke> So I've installed juju-quickstart and juju-core from the juju/stable PPA; how do I go about deploying a bundle from the command line?
<Odd_Bloke> "juju bundle get" gives me "unrecognized arguments: --bundle"
<rick_h_> Odd_Bloke: juju quickstart bundle:mongodb/5/cluster
<rick_h_> Odd_Bloke: for instance
<rick_h_> Odd_Bloke: you can get the names/strings from jujucharms.com like https://jujucharms.com/bundle/mongodb/5/cluster/
<rick_h_> Odd_Bloke: if you have a file locally you can specify it as well
<rick_h_> juju quickstart bundles.yaml
<rick_h_> Odd_Bloke: or point to one online at a url
<rick_h_> Odd_Bloke: https://jujucharms.com/bundle/mongodb/5/cluster/#deploy for instance goes through the two tools (quickstart and the deployer)
<rick_h_> Odd_Bloke: and any bundle you find there should have copy/paste instructions
<rick_h_> Odd_Bloke: I'm in/out as it's sunday but let me know if that helps or you hit any issues getting it working.
<lazyPower> hazmat: digital ocean isn't making this easy - the provisioner on their side is being pokey today and it keeps timing out when i'm trying to record the bootstrap :(
<lazyPower> is there a way for me to crank up the timeout wait for the juju-docean plugin to wait longer than the default 5 minutes? it's taking close to 8 to get an instance back
<lazyPower> nvm haxed the source and cranked up the loop count
#juju 2015-09-14
<David_Orange> Hello juju masters
<David_Orange> I had a problem with my local juju install, if someone can help it would be great. It is a local bootstrap with LXC
<David_Orange> my bootstrap machine has 2 interfaces, and my bridge is on the second interface. The worker on my machine is trying to join the WSS server on the first network, but this one is not routable to the bridge. How can I set the WSS IP?
<jamespage> beisner, hey - https://code.launchpad.net/~james-page/charms/trusty/cinder/tox/+merge/265008
<jamespage> that's the tox merge for cinder
<beisner> jamespage, ta
<jamespage> beisner, running things under testr reveals some interesting things :-)
<lazypower> David_Orange, hey there, did you receive help w/ setting your WSS url?
<David_Orange> lazypower: nope, maybe it was too early
<David_Orange> but I found a workaround, I unplugged my first interface during the bootstrap creation
<David_Orange> the WSS ip sent to nodes is configurable somewhere?
<David_Orange> lazypower: but thanks for asking, 5 hours back in irc log it is a good community following :)
<lazypower> yeah, you can edit that WSS url in the agent confit
<lazypower> *config
<David_Orange> OK, it is the apiaddresses variable I think. Thanks for your support
#juju 2015-09-15
<coreycb> gnuoy, jamespage: we're moving common action-managed upgrade code to charm-helpers, could one of you review this?  https://code.launchpad.net/~corey.bryant/charm-helpers/action-managed-upgrade/+merge/270998
<jrwren> I just filed https://bugs.launchpad.net/juju-core/+bug/1496016 I'm available if anyone wants to help. jujud using 1.5GB RSS seems bad to me.
<mup> Bug #1496016: jujud uses too much memory <juju-core:New> <https://launchpad.net/bugs/1496016>
#juju 2015-09-16
<ennoble> I just ran into panic, "panic: runtime error: invalid memory address or nil pointer dereference" how do I go about root causing this. I've got a bunch of backtraces in different goroutines, but I can't figure out where the nil dereference occurred.
<amit213> hello everyone.. I have an Ubuntu 14.04 with stock kernel.   Installed juju from stable repo.  juju-local environment.  Attempting to deploy "juju deploy nova-compute"  (it's using LXC underneath) -- I run into this error :    Hook failed "install"
<amit213> juju-log server.go:254 Couldn't acquire DPKG lock. Will retry in 10 seconds.
<amit213> any tips?
<jamespage> gnuoy, hey - I'm setting up a sync process to push /next branches to https://code.launchpad.net/~openstack-charmers-next
<jamespage> so that they can be addressed via the charm store rather than people having to directly use the branches
<gnuoy> jamespage, ok, sounds good
<jamespage> gnuoy, it would mean we can publish a liberty bundle earlier than we might with just /next branches
<gnuoy> kk
<jamespage> gnuoy, some reviews for liberty fixes:
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/liberty/+merge/271245
<jamespage> and
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/neutron-api/liberty/+merge/271247
<jamespage> gnuoy, if you have a few cycles
<gnuoy> sure, just need to clear a couple of things first
<gnuoy> jamespage, looks like you have a lint fail for https://code.launchpad.net/~james-page/charms/trusty/neutron-api/liberty/+merge/271247
<jamespage> gnuoy, urgh - sorry
<gnuoy> np
<jamespage> gnuoy, fixed
<gnuoy> jamespage, approved (assuming osci +1's too)
<coreycb> jamespage, the merge conflicts are fixed up for the action-managed upgrade code:  https://code.launchpad.net/~corey.bryant/charm-helpers/action-managed-upgrade/+merge/270998
<natefinch> rick_h_: juju quickstart requires the bundle file to be .yaml but not .yml?  juju-quickstart: error: unable to open the bundle: invalid bundle URL: /home/nate/bundle.yml
<natefinch> renaming it to .yaml makes it work
<frankban> natefinch: yes, it must have yaml ext
<natefinch> frankban: why?
<frankban> natefinch: for disambiguation, an arbitrary /x/y is considered to be a jujucharms bundle url, like /mediawiki-single or /u/myuser/mybundle
<frankban> natefinch: it's trivial to add .yml to that list, but currently it's .yaml and .json
<natefinch> frankban: why not just see if there's a valid bundle file at that local location, and if not, then look on jujucharms?  or make the disambiguation explicit, like it is with charms, ie cs:mysql vs. local:mysql ?
<frankban> natefinch: "juju-quickstart --help" for all the supported forms
<frankban> natefinch: how to disambiguate all the possible entity identifiers is currently under discussion. a local bundle is different from a local bundle yaml though. please feel free to add a bug to quickstart with what you'd expect
<natefinch> frankban: will do.  Thanks.
<frankban> natefinch: ty!
<natefinch> frankban: while I have you.  I'm trying to repro a bug... I'm trying to pare down a bundle to only deploy the stuff I need to repro the bug... but I'm getting unexpected results. http://pastebin.ubuntu.com/12427722/
<natefinch> frankban: that bundle doesn't actually deploy haproxy
<natefinch> frankban: but also doesn't post an error
<frankban> natefinch: are you doing that with quickstart?
<natefinch> frankban: yes
<natefinch> frankban: 2.2.1
<natefinch> w/ juju 1.24
<mgz> natefinch: have you determined that quickstart is needed to get the bug?
<frankban> natefinch: so quickstart uses the GUI server to deploy the bundle, and the GUI server in turns uses the juju deployer
<mgz> natefinch: I was going to try just deployer-ing the bundle to then cut it down
<frankban> natefinch: do you have errors in the GUI?
<natefinch> mgz: that's a good idea.  I don't know how to deployer stuff though
<natefinch> frankban: looking
<frankban> natefinch: and what's the content of https://{juju-gui-address}/gui-serve-info ?
<frankban> natefinch: sorry  https://{juju-gui-address}/gui-server-info
<natefinch> frankban: {"uptime": 6266, "deployer": [{"Status": "completed", "DeploymentId": 0, "Error": "'NoneType' object has no attribute 'items'", "Time": 1442413931}], "apiversion": "go", "sandbox": false, "version": "0.6.0", "debug": false, "apiurl": "wss://10.0.3.1:17070"}
<frankban> natefinch: "NoneType' object has no attribute 'items'" is your error as returned by juju-deployer :-/
<natefinch> fantastic
<natefinch> probably doesn't like my empty list of relations or something
<frankban> natefinch: using the deployer directly would show at least a traceback, so that you can find out what the deployer expects and juju-core does not provide (I am trying to guess here)
<natefinch> frankban: is there documentation on how to use deployer?  I've never used it.
<frankban> natefinch: http://bazaar.launchpad.net/~juju-deployers/juju-deployer/trunk/view/head:/README  and http://bazaar.launchpad.net/~juju-deployers/juju-deployer/trunk/view/head:/HACKING
<natefinch> frankban: looks like it just doesn't like the empty relations value at the end of my bundle
<frankban> natefinch: in this case, this is a bug of deployer, it should handle that more gracefully
<natefinch> frankban: indeed
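The failure mode above is easy to reproduce outside the deployer. YAML parses a bare `relations:` key to None (not an empty list), so code that blindly iterates it or calls `.items()` on it blows up exactly like the "'NoneType' object has no attribute 'items'" error shown. The bundle content below is a made-up minimal example, and `iter_relations` is a hypothetical defensive helper, not deployer code:

```python
# What an empty "relations:" line in a bundle YAML becomes after parsing:
bundle = {
    "services": {"haproxy": {"charm": "cs:trusty/haproxy", "num_units": 1}},
    "relations": None,  # YAML maps a valueless key to None, not []
}

def iter_relations(bundle):
    """Defensive reading: treat a missing or empty relations key as []."""
    return list(bundle.get("relations") or [])

print(iter_relations(bundle))  # [] instead of an AttributeError
```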
<cholcombe> aisrael, is the juju benchmarker able to do continuous performance monitoring?
<aisrael> cholcombe: Not at the moment, no. Benchmarks are executed as actions, so they're point in time performance snapshots
<cholcombe> i see
<cholcombe> i'll have to modify my approach a little for that.  no big deal :)
<thomi> tvansteenburgh: Hi, I wonder if I could ask you to review a couple of MPs for juju-deployer please?
<tvansteenburgh> thomi: sure
<thomi> tvansteenburgh: thanks! I have https://code.launchpad.net/~thomir/juju-deployer/trunk-remove-env-log/+merge/271229 and https://code.launchpad.net/~thomir/juju-deployer/trunk-remove-charm-error/+merge/271368
<thomi> Both fix small, but annoying bugs
<hermanbergwerf> Anyone who successfully installed the juju client on a slackware distro? I'm on openSUSE and I got to make install-dependencies but there are Ubuntu/Debian commands there.
<tvansteenburgh> thomi: for the 2nd one, what's an example of a charm path from the deployer file?
<thomi> let me find one...
<thomi> tvansteenburgh: the charm is specified as: "charm: apache2" for example
<thomi> I can send you an entire services file if you like
<tvansteenburgh> nah, that's enough, thanks
<thomi> no worries
#juju 2015-09-17
<blahdeblah> jose: Sorry I didn't get back to you the other day; other work preempted the charm I was working on.  I ended up basing mine on a newer python-based charm.
<Odd_Bloke> Could a charmer have a look at https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/hotfix-leader-election/+merge/270838, please?
<Odd_Bloke> marcoceppi: lazypower: $others: ^
<gnuoy> jamespage, would you mind taking a look at https://code.launchpad.net/~gnuoy/charm-helpers/1496746/+merge/271443 when you have a moment?
<jamespage> gnuoy, For pagesize, size, min_size and nr_inodes options, you
<jamespage> can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo.
<gnuoy> jamespage, ok, but that leaves me with some horrible code converting that to bytes.
<jamespage> gnuoy, hmm
<jamespage> gnuoy, well we need to support human usable config somewhere in the stack
<jamespage> gnuoy, but I don't think that's exposed just yet right?
<jamespage> gnuoy, some sort of general K M G functions would be good I think
<gnuoy> ok
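The "general K M G functions" jamespage suggests could be sketched roughly like this (a hedged illustration only; the function name and placement are made up here, not actual charm-helpers API, which may have grown its own helper since):

```python
# Sketch of a human-size-to-bytes helper for config values like
# "512M" or "4g", as discussed above. Name and behavior are
# hypothetical, not the actual charm-helpers implementation.

def human_to_bytes(value):
    """Convert strings like '512M', '4g', or '100' to a byte count."""
    suffixes = {'k': 1024, 'm': 1024 ** 2, 'g': 1024 ** 3}
    value = value.strip()
    suffix = value[-1].lower()
    if suffix in suffixes:
        return int(value[:-1]) * suffixes[suffix]
    return int(value)
```

A bare integer is treated as a byte count already, so the caller does not need to special-case unsuffixed config values.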
<gnuoy> jamespage, would you mind taking another look pls?
<jamespage> gnuoy, +1
<gnuoy> jamespage, thanks
<gnuoy> jamespage, got a sec for https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/1496746/+merge/271451 ? Mainly just a charmhelper sync plus setting the new set_shmmax
<bloodearnest> anyone know of a charm with amulet tests that test the upgrade step from previous version of the charm?
<bloodearnest> I can just install from the charmstore, right? Like cs:trusty/gunicorn-0 ?
<Odd_Bloke> jose: marcoceppi: lazypower: Any chance you could have a look at the merge I linked above?
<Odd_Bloke> There's also https://code.launchpad.net/~cjwatson/charms/trusty/ubuntu-repository-cache/inline-release/+merge/271107
<rick_h_> Odd_Bloke: I think they're sprinting today so might be slow to respond
<Odd_Bloke> rick_h_: Ah, thanks for the heads up.
<Odd_Bloke> I'll try in all caps in a bit, I'm sure that'll help. ;)
<rick_h_> Odd_Bloke: wfm :)
<jose> Odd_Bloke: lazypower says that he'll take a look later today and merge, we're in a summit right now
<Odd_Bloke> jose: lazypower: <3 thanks.
<jose> thanks to you!
<jamespage> thedac, can you tip the video down a bit
<gnuoy> thehi
<gnuoy> thedac, hi (is what I meant
<thedac> hi
<wolverineav> hey, this is aditya
<jamespage> thedac, question sounds like its about branching strategy?
<thedac> yes
<thedac> You also have options when you branch a charm. bzr branch lp:charms/trusty/$CHARM
<jamespage> thedac, this https://github.com/openstack-charmers is probably what the charms will look like once migrated to git
<wolverineav> I have a question very specific to openstack neutron - the subordinate charm for DHCP and L3 agent is deployed on the controller node (the neutron-api / gateway).
<wolverineav> now, is there a way to deploy this on the compute node instead? (at scale, the controller node becomes a bottleneck when VMs are brought up and it requests DHCP and L3). so i would like to disable DHCP and L3 on controller and enable it only on compute. (or enable it on all nodes)
<jamespage> 15.10
<thedac> thanks
<jamespage> thedac, we'll switchover once we release from bzr
<gnuoy> wolverineav, not at the moment. You can have dhcp and metadata on the compute nodes but only if the gateway is not deployed
<jamespage> probably
<wolverineav> neutron-gateway would always have to be deployed, right?
<gnuoy> wolverineav, right
<thedac> jamespage: is the hangout public?
<jamespage> probably not
<gnuoy> wolverineav, you can use DVR to reduce traffic through the gateway
<jamespage> wolverineav, erm not sure that's accurate
<gnuoy> fwiw the link quality isn't good enough to follow everything that's being said
<ddellav> we've got some people here that want to screen share so can we switch to a public hangout so they can be invited?
<ddellav> or how else should we accomplish this?
<gnuoy> I'm happy to switch
<bdx> jamesbeedy@gmail.com
<wolverineav> ah, so the bottleneck I'm talking about is pre-DVR
<wolverineav> I haven't tried it with DVR enabled.
<coreycb> wolverineav, https://git.launchpad.net/~sdn-charmers/charms/+source/openvswitch-odl/tree/
<coreycb> wolverineav, http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/ovs-odl/ovs-odl.yaml
<ddellav> are you guys seeing the juju-gui jamespage gnuoy ? He's got a full openstack deployment but juju-gui is not displaying any charms
<ddellav> we've got mbruzek helping him but it's pretty bizarre
<gnuoy> I see the gui, not sure whats causing the issue though
<rick_h_> ddellav: gui issue? hatch can help if you need a hand with something.
<hatch> hello
<ddellav> hey hatch we have a charm partner that's having issues with the gui
<ddellav> he has the gui deployed to an lxc container alongside a full openstack deployment but the gui is showing no charms or machines
<ddellav> it previously worked but in the last few weeks it started doing this
<hatch> ddellav: can you get them to open the browser console to see if there are any errors?
<hatch> also, do they still have to log into the gui? (it didn't get switched to sandbox mode by accident?)
<thedac> hatch: he does log in
<ddellav> hatch, no errors in the browser console
<hatch> hmm
<hatch> and the `juju status` in the cli shows the gui?
<thedac> Yes as well as the rest of the openstack deploy
<hatch> ok give me a moment to think about this one :)
<thedac> sure. One other point is this is a MAAS deploy
<hatch> thedac: which browser are they using?
<ddellav> chrome
<cory_fu> Merlijn_S: Welcome.  https://jujucharms.com/docs/devel/authors-charm-composing
<hatch> ddellav: thedac ok I'm going to investigate locally here and will get back to you
<Merlijn_S> cory_fu: Thanks
<thedac> hatch: thanks
<stub> tvansteenburgh1: http://juju-ci.vapour.ws:8080/view/Juju%20Ecosystem/job/charm-bundle-test-aws/735/console is looking promising. Formatting a little off with the update, but close enough.
<jamespage> thedac, coreycb: for those interested - https://review.openstack.org/#/c/224797/
<jamespage> gnuoy, ^^
<jamespage> pulled my finger out
<gnuoy> jamespage, tip top. thanks
<tvansteenburgh1> stub: \o/
<coreycb> jamespage, when do you think we'll have base charms available for SDN charm composing?
<coreycb> wolverineav, https://jujucharms.com/docs/devel/authors-charm-composing
<jamespage> coreycb, context?
<jamespage> coreycb, probably this month
<jamespage> need to make some time
<ddellav> thanks hatch
<tvansteenburgh> stub: i have some follow-up questions/comments to your latest remarks, but will get to those later today. thanks for taking the time
<coreycb> jamespage, I'm chatting with wolverineav from big switch and showing him the odl charm
<jamespage> coreycb, right
<ddellav> hatch, not sure if this helps but here's a screenshot: http://cl.ly/image/2z1F3Y100I12
<hatch> ddellav: very odd, thanks - I'm just spinning up a local env here to see if I can reproduce
<hatch> ddellav: can you check `juju get juju-gui` and look for the 'sandbox' property value
<jamespage> wolsen, dosaboy, Tribaal: https://review.openstack.org/#/c/224797/2
<bdx> hatch: http://paste.ubuntu.com/12438918/
<ddellav> hatch, bdx is the user with the gui issue, he's pasted that output you wanted
<hatch> ahh alright
<hatch> yeah so this is very weird
<ddellav> hatch, can you let us know the best place to submit a bug report for this issue?
<hatch> bdx: ddellav https://bugs.launchpad.net/juju-gui
<hatch> but this shouldn't be possible :)
<ddellav> love those impossible bugs haha
<hatch> bdx: can you run `app.env.getAttrs()`
<hatch> it shouldn't contain any sensitive information, just take a look before pasting
<hatch> bdx: does the machine view show your other maas machines? have you tried to deploy something? If you deploy a simple charm via the gui, like say Ghost, does it deploy as expected?
<hatch> ddellav: yeah I could see this happening if there was some exception raised, but with no errors in the browser console it's quite odd
<bdx> hatch: no, the juju-gui shows no sign of anything from my juju env.
<bdx> omp
<bdx> hatch: I don't know how you want this formatted: http://paste.ubuntu.com/12438981/
<bdx> or I don't know how to get it into a better format
<hatch> :) no that's fine
<hatch> thanks
<hatch> bdx: and the ip of your bootstrap node is 10.16.100.73?
<bdx> hatch: .73 is my juju-gui
<hatch> ahh alright, so bdx what I would recommend trying is destroying this instance and deploying a new one
<hatch> it appears that the connection between the GUI and your bootstrap node has somehow become broken
<hatch> bdx: are you able to destroy/deploy the GUI to see if it resolves?
<thedac> hatch: he has destroyed and redeployed several times. Same result.
<thedac> he is at lunch at the moment
<hatch> understood
<hatch> ok
<hatch> maybe the best case would be to file a bug...the issue is that the gui server is not able to communicate with the bootstrap node to get the environment details
<hatch> so we need to surface that error
<hatch> as there isn't anything the GUI can do to resolve it
<mbruzek> hatch: I am working with bdx on this problem.
<mbruzek> We will file a bug on this when he returns from lunch.
<mbruzek> hatch: I had him destroy the service and redeploy with the same result.  The juju status shows all the deployed units, but the gui does not.
<hatch> mbruzek: great thanks - The only way I was able to reproduce was to block the gui server from accessing the bootstrap node
<hatch> so I'm assuming that there is something from the guiserver > bootstrap node which is blocking that communication
<mbruzek> We did find an error in the web browser console a connection problem.
<hatch> ahh there we go
<mbruzek> Is there anything else that I can check.
<hatch> did you happen to copy it somewhere I could see the error?
<mbruzek> He is still at lunch
<hatch> mbruzek: well the GUI thinks that it's connected
<mbruzek> So ssh to the gui and see if I can ping the bootstrap node?
<hatch> mbruzek: yeah that would be a good place to start
<hatch> also, if there is an error in the console that might explain it
<wolsen> jamespage: woot! on it
<lazypower> Odd_Bloke, o/
<lazypower> having a look now
<Odd_Bloke> lazypower: Yo!  Thanks!
<lazypower> Odd_Bloke, this merge makes me a little sad but I understand why its necessary
<lazypower> Odd_Bloke, did you run into instances where this code was not being run on >= 1.23.2
<Odd_Bloke> lazypower: The problem is that other parts of the code assume that 'lowest numbered == leader'.
<lazypower> ah
<lazypower> was this the only merge? or was there another one?
<Odd_Bloke> lazypower: So, for example, unit 0 would go "I'm not the leader, I don't need to sync stuff down" and other units would still try to sync from unit 0.
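The "lowest numbered == leader" assumption Odd_Bloke describes can be sketched as a pure function (names here are illustrative, not the charm's actual code):

```python
# Illustrative sketch of the pre-leader-election heuristic: the unit
# with the lowest unit number among its peers considers itself the
# leader. Function and variable names are hypothetical.

def unit_number(unit_name):
    """'ubuntu-repository-cache/3' -> 3"""
    return int(unit_name.split('/')[1])

def is_lowest_unit(my_unit, peer_units):
    """True if my_unit has the lowest unit number of the peer group."""
    all_units = [my_unit] + list(peer_units)
    return unit_number(my_unit) == min(unit_number(u) for u in all_units)
```

With real leader election (juju 1.23+), the elected leader can change over the service lifecycle, which is exactly why this static heuristic breaks down: unit 0 may decide it is not the leader while its peers still sync from it.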
<Odd_Bloke> https://code.launchpad.net/~cjwatson/charms/trusty/ubuntu-repository-cache/inline-release/+merge/271107
<Odd_Bloke> lazypower: ^
<lazypower> Odd_Bloke, yeah, that makes sense that the rest of the charm would need to be updated to use leader-set receivers on where they should sync from
<Odd_Bloke> lazypower: Yeah, precisely.
<lazypower> Odd_Bloke, as the leader may change in flight over the course of the lifetime
<lazypower> *service lifecycle
<Odd_Bloke> Yeah.
<Odd_Bloke> So we do have a way of coping with the leader changing, but it isn't plumbed in to the leader election stuff yet.
<lazypower> yeah this works for me as a conceptual fix until proper leader election bits can be landed
<Odd_Bloke> (And, actually, given the response on that hook bug earlier, it might actually never get called.)
<Odd_Bloke> OK, great!
<lazypower> Odd_Bloke, not sure what should be done to the bug triage here if anything. https://bugs.launchpad.net/launchpad/+bug/804252
<mup> Bug #804252: Please support InRelease files <Launchpad itself:In Progress by cjwatson> <ubuntu-archive-publishing:In Progress by cjwatson> <https://launchpad.net/bugs/804252>
<lazypower> i'm going to head back into the charmer summit. Ping if you need anything else
<lazypower> o/
<Odd_Bloke> lazypower: I'll just let cjwatson know. :)
<hatch> mbruzek: hey, any luck?
<bdx> hatch: whats up
<bdx> hatch: so what I was able to ascertain about the juju-gui not showing env issue is that.... it doesn't look like it is requesting to the bootstrap node....it seems to be requesting to itself....http://cl.ly/image/2q001V0F3v1o
<bdx> 10.16.100.73 is the address of the juju-gui
<bdx> 10.16.100.151 is the ip of the bootstrap node
<hatch> bdx: ok thanks, I'll look into that code to see if I can figure out why it might be doing that
<bdx> awesome, thanks
<hatch> bdx: what version of juju are you using?
<hatch> the gui used to work then just stopped? Can you remember any changes to the env made around that time?
<hatch> the issue appears to be with the gui's websocket connection to the gui's server
#juju 2015-09-18
<blr> Would anyone happen to know if it is possible to debug-hooks both a subordinate and its parent together? Starting debugging for the second service complains that the tmux session already exists.
<ejat> deployed horizon dashboard through juju ... may i know what is the default login n password ?
<ejat> openstack-dashboard timeout when i tried to login
<ejat> what should i do ?
<ejat> marcoceppi: r u there?
<jrwren> can I cancel a pending action?
<jrwren> use case is: i ran `juju action do db/0 dump` which takes a while. I accidentally ran it twice. I'd like to cancel the pending job.
<rick_h_> jrwren: I know you can see the queue, thuoght there was a cancel api
<jrwren> rick_h_: i can't find it.
<rick_h_> jrwren: hmm, wonder if it made the api but not the cli
<jrwren> rick_h_: must be. cli has only do, fetch, status
<aisrael> jrwren: rick_h_: we have a meeting next week about action 2.0 features. I'll add a cancel command to the list
<rick_h_> aisrael: ah ok, isn't there a method to see the queue and such?
<rick_h_> I know it was part of the spec
<aisrael> rick_h_: `juju action status` will show you everything, pending or not, but there's no queue management afaik
<jrwren> rick_h_: `juju action status` shows all of them.
<rick_h_> ah ok
<rick_h_> cool
<thedac> jamespage: question about ceph and cinder charms. Can we specify different pools like this example http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/
<jamespage> thedac, I think so - let me check
<thedac> thanks
<jamespage> thedac, yes - the ceph pool is aligned to the service name - so
<jamespage> juju deploy cinder-ceph cinder-ceph-sata
<jamespage> juju deploy cinder-ceph cinder-ceph-ssd
<jamespage> for example
<thedac> ah, cool
<jamespage> thedac, the trick is that the ceph charm does not support special placements (yet)
<jamespage> thedac, so the backend pools should be pre-created in ceph by hand first - cholcombe has some stuff inflight to enhance pool management
<firl> so I could set this up by hand manually within ceph, then have cinder-ceph relation charm take care of the relations, and use possibly what cholcombe has to link it between cinder volumes and the ceph pools?
<jamespage> firl, kinda
<jamespage> if I deploy: juju deploy cinder-ceph cinder-ceph-sata
<jamespage> the backend pool must == 'cinder-ceph-sata'
<jamespage> so if you pre-create or re-create the pool directly in ceph with the required characteristics it will work ok
<jamespage> juju add-relation cinder-ceph-sata cinder
<jamespage> and juju add-relation cinder-ceph-sata ceph
<jamespage> are required of course
<firl> so I would have 2 cinder-ceph relations
<firl> and 2 separate ceph charms for the environment
<firl> ?
<coreycb> firl, here's a bundle to reference for deploying from source - http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/source/default.yaml
<firl> thanks!
<bdx> firl:http://paste.ubuntu.com/12449099/
<coreycb> firl, https://bugs.launchpad.net/charms/+source/nova-compute
<firl> thedac: https://bugs.launchpad.net/charms/+bug/1497308
<mup> Bug #1497308:  local repository for all Openstack charms <Juju Charms Collection:New> <https://launchpad.net/bugs/1497308>
<beisner> wolverineav, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1504
<wolverineav> hey, neutron question - when I enable DVR, the DHCP and L3 agent are deployed on the compute node. I'd like to disable the L3 agent completely. Is there a way to do that in the neutron-api charm?
<wolverineav> or, what would be the way to go about it?
<coreycb> jamespage, gnuoy, any idea on this ^
<jamespage> wolverineav, DVR enables metadata and l3-agent I think
<jamespage> there is an extra toggle to enable dhcp as well
<Guest90652> does juju-core 1.24.2 support CentOS7 on EC2?
<jamespage> there is no way to do that in the charm right now, as it's assumed from the charm choices you're making that you want the ml2/ovs driver
<jamespage> wolverineav, whats the use case?
<wolverineav> jamespage, yes right. we're currently moving towards pulling the various agents into the big switch controller. the current release supports L3 and the next one will have DHCP and Metadata.
<jamespage> wolverineav, ok - so in this case, you don't want to use the neutron-openvswitch charm - I'd suggest a neutron-bigswitch charm that dtrt for a big switch deployment
<jamespage> as you really just want ovs right?
<jamespage> not all of the neutron agent scaffolding around it
<wolverineav> i'll be doing something like the ODL charm which deploys its own virtual switch. I would not be deploying the vanilla OVS
<ejat> jamespage: openstack-dashboard charm , do i need manually change at the local_setting.py for the keystone host ?
<jamespage> ejat, no you just add a relation to keystone
<jamespage> wolverineav, sounds like a neutron-bigswitch charm is the right approach then
<ejat> already did the relation
<jamespage> wolverineav, openswitch-odl is the way forward from a frameworks perspective
<ejat> should i change :
<ejat> From:
<ejat> OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
<ejat> To
<ejat> OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
<jamespage> no
<ejat> or let as it is
<jamespage> you should not need to change anything
<wolverineav> jamespage, so it would be a neutron-api-bigswitch kinda thing. ah, i see
<jamespage> wolverineav, kinda
<jamespage> the bit for nova-compute == like openvswitch-odl charm
<jamespage> the bit for neutron-api == like neutron-api-odl
<wolverineav> got it
<ejat> it cant communicate
<ejat> on azure
<ejat> from dashboard cant ping keystone
<ejat> because it take the public dns
<ejat> how to restart/reboot on of the service machine
<ejat> jamespage: http://paste.ubuntu.com/12449394/
<jamespage> ejat, blimey openstack ontop of azure?
<ejat> yups ..
<ejat> demo purpose
<ejat> jamespage: http://picpaste.com/Screen_Shot_2015-09-18_at_10.56.08_PM-a0Td6kaK.png
<Slugs_> I've followed the Ubuntu openstack single installer guide located here - http://openstack.astokes.org/guides/single-install - Every service starts up except for Glance - Simplestream Image Sync.  This should not hinder me from logging in to horizon but for some reason I can't authenticate with my username as ubuntu and the password I have as 'openstack'.  I have been able to stop the container, start the container, login to the container and
<Slugs_> check juju logs but I would like some more clarification on this to make sure I'm doing this correctly.
<jingizu_> Hi all! When I try to juju remove a service (e.g. quantum-gateway, fundamental part of openstack) and re-deploy it to a different server (e.g. --to lxc:3)... I notice that it does remove it from juju, but the actual services themselves (i.e. all the neutron python servers on the origianl host) are still running... it's as if it just removed it from the juju
<jingizu_> database but did not actually stop the services themselves
<jingizu_> Of note is that the service is running on a system deployed bare-metal that is also running other juju services, so juju couldn't just tear down the lxc or tell MAAS to kill the bare metal machine altogether (Which obviously would kill the services too)
<marcoceppi> ejat: what's your question?
<ejat> openstack-dashboard charm
<ejat> add-relation putting the public dns for keystone host in dashboard
<ejat> jamespage: said i should not change anything in local_setting.py
<jamespage> ejat, sorry - I'm not that familiar with how public dns works in azure; you should not have to change anything in settings.py normally but ymmv on anything other than MAAS (or OpenStack itself)
<ejat> i cant login the dashboard
<ejat> openstack.informology.my/horizon
<ejat> login then timeout
<firl> thedac: http://paste.ubuntu.com/12450237/
<firl> thedac: http://paste.ubuntu.com/12450337/
<amit213> jingizu_ : on your question about removing a service, you'll also have to first do remove-relation on that service (which you're trying to remove) for all its peer services. Once the removal of relations is done, the remove-service should go smoothly. there is also a --force flag that can be used.
<jingizu_> amit213: Thanks for the reply. At this point I have managed to remove the service(s) in question. Like I mentioned, the service is no longer listed in juju status. However, the underlying programs that correspond to the service are still running... Any ideas why it would not delete said programs, configs, etc. when removing the service?
<ennoble> with juju deployer or with the juju client add_machine call, can you specify a specific machine?
<firl> ennoble: you can with constraints, and depending on the machined environment you can even use tags
<ennoble> firl: I'm using maas. Is it possible to specify a specific machine my-server-1.foo? What about with the manual provider? ssh:root@my-server-that-maas-hates?
<firl> ennoble: I don't know anything about the manual provider. here is the tags information https://maas.ubuntu.com/docs/tags.html
<firl> depending on how many nodes and what not, I sometimes just "acquire" the nodes in MaaS so that I don't have things deployed to those containers
<firl> So with juju deployer you can use the tags constraints as with the add machines
<firl> for service units you can just do a "to"
<ennoble> firl: so you acquire the nodes in maas? add tags to them there, and then deploy to them with juju deployer?
<firl> "acquire" nodes in maas just means that juju deployer can't use them to pull from (it's a hack I use)
<firl> but you can just add tags via the maas cli  ( in the maas gui in 1.8 ) to the physical machines and then use the deployer
<firl> and everyone from ubuntu is probably at a party or traveling because they just finished up a summit in DC
<nodtkn> ennoble: you can add a machine to any existing environment with juju add-machine ssh:ubuntu@<hostname>
<ennoble> nodtkn: thanks, I can do that, I'm wondering after I do that can I make juju deployer use it?
<mwenning> hi, looking for a quick answer - I moved a bundle from one system to another and ran juju-deployer --config=lis-test-bundle.yaml -e maas
<mwenning> It returned with something about must specify deployment, what did I forget?
<mwenning> 'Deployment name must be specified'
#juju 2015-09-19
<JoshStrobl> Is there a tags overview page on jujucharms, mainly to see all the different "popular" tags (like databases, storage, etc) and more quickly drill down into a specific tag rather than needing to click on an existing item under the "all" section
<JoshStrobl> I'm really just trying to see if there are any better alternatives to supervisor / supervisord (process control system) that are available as charms so I can quickly deploy and test them.
<rick_h_> JoshStrobl: no, there's not. Unfortunately the charms aren't tagged as thoroughly as to do that at this point.
<rick_h_> JoshStrobl: the best thing would be trying search terms hoping that summary/names/etc are written well to find things.
<rick_h_> JoshStrobl: but to your question I'm not aware of any generic process mgt tools like supervisord in there. uwsgi, gunicorn, would be closest I can think of.
<rick_h_> JoshStrobl: you can also search for relation interfaces if you know the interface to look for.
<jingizu_> Does anyone know the username/password of LXC instances (Ubuntu) deployed by juju via MAAS?
<jingizu_> I am trying to ssh into the LXC instances from a host other than my juju deployment machine (because it is down) so I can't `juju ssh` right now, can only ssh directly, or lxc-console on the LXC host machine
#juju 2015-09-20
<ejat> hi .. how to restart juju machine/services once change the setting/configuration ?
<bdx> ejat: juju help run
<ejat> ok thanks
<bdx> ejat: it would be something like - juju run --unit nova-compute/0 "service nova-compute restart"
<ejat> im trying to deploy charm openstack base
<ejat> once completed .. i cant login
<bdx> ejat: ensure you are using the credentials you specified for admin-password in your charmconf.yaml
<bdx> ejat: for keystone
<ejat> i just use default
<ejat> its redirected to /horizon/admin but timeout
<bdx> so are you extrapolating the randomly generated password from the keystone service in that case?
<bdx> ejat: your horizon default login should be un: admin, pw: <randomly generated password unless set>
<bdx> https://jujucharms.com/keystone/trusty/28
<ejat> the default pw : openstack right ?
<bdx> ejat: why would you think that?
<ejat> its in services config
<bdx> ejat: oh...are you deploying a bundle then I take it?
<bdx> ahh gotcha
<bdx> can you paste the url you are using?
<ejat> yeah the bundle
<ejat> url for my deployment or the url for the bundle ?
<bdx> ejat: also can you paste the url generated by running the command "juju status --format=tabular | pastebinit"
<bdx> for the bundle
<ejat> https://jujucharms.com/openstack-base/36
<bdx> ejat: ok, I see what you are saying.
<ejat> bdx : http://paste.ubuntu.com/12503691/
<ejat> i have to set /etc/hosts in openstack-dashboard the internal IP
<ejat> because inside openstack-dashboard putting the public dns to connect to keystone
<bdx> ejat: can you run "juju resolved nova-cloud-compute/0 --retry" and then "juju status --format=tabular | pastebinit"
<bdx> and paste me the url
<ejat> i try several time to retry .. its up then down again
<ejat> fenris@macbuntu:~$ juju resolved nova-cloud-compute/0 --retry
<ejat> ERROR unit "nova-cloud-compute/0" not found
<ejat> fenris@macbuntu:~$ juju status nova-cloud-compute
<ejat> environment: azure2
<ejat> machines: {}
<ejat> services: {}
<ejat> ok .. its up .. wait
<ejat> http://paste.ubuntu.com/12503779/
<ejat> tried to login dashboard : The connection was reset
<bdx> ejat: ok, ...I just wanted to get rid of the error state of the hook on the nova-cloud-controller before we move forward
<bdx> ejat: You're telling me you can login now?
<ejat> nope
<bdx> ok
<ejat> still cant login
<bdx> ejat: what if any config modifications are you making to openstack-dashboard concerning dns?
 * ejat not modify .. just adding the host in /etc/hosts
<ejat> keystone host in dashboard /etc/hosts
<bdx> ok
<ejat> keystone/0
<ejat>     IP Address: 10.0.0.46
<ejat>     Status: started
<ejat>     Public Address: 10.0.0.46
<ejat> its doesnt get the right public dns : juju-azure2-j41rc8jcm9.cloudapp.net
<bdx> ah ok, so whats going on here is you also need to add the entry for openstack-dashboard in keystone
<ejat> vise versa ?
<bdx> so do the same thing you did for keystone but for openstack-dashboard, does that make sense?
<bdx> ya
<ejat> let me try
<bdx> once you have made the change, run "juju run --unit openstack-dashboard/0 "service apache2 restart""
<ejat> un : admin , pw: openstack ?
<bdx> yea
<ejat> still ... connecting ...........
<ejat> redirected to /horizon/admin with The connection was reset
<ejat> is it need to define all machine the local/internal IP ?
<ejat> :(
<ejat> bdx :
<bdx> can you paste the url from "juju run --unit openstack-dashboard/0 "apt install pastebinit > /dev/null 2>&1 && cat /etc/openstack-dashboard/local_settings.py | pastebinit""
<bdx> ejat: possibly
<bdx> hopefully not, we will see
<ejat> http://paste.ubuntu.com/12504087/
<bdx> ejat: also, what happens if you run "juju run --unit openstack-dashboard/0 "host juju-azure2-j41rc8jcm9.cloudapp.net""
<ejat> juju run --unit openstack-dashboard/0 "host juju-azure2-j41rc8jcm9.cloudapp.net"
<ejat> juju-azure2-j41rc8jcm9.cloudapp.net has address 207.46.155.248
<ejat> if i didnt add inside /etc/hosts ... dashboard cant ping to that address
<bdx> ejat, hmmm did you add "207.46.155.248 juju-azure2-j41rc8jcm9.cloudapp.net" to /etc/hosts?
<ejat> in dashboard ?
<bdx> or the internal 10.0.0 ?
<ejat> 10.0.0.x
<bdx> yea?
<ejat> not public
<ejat> u want me to add that ?
<bdx> ok, lets try using the public ip in the hosts files on for keystone and openstack-dashboard
<ejat> one second
<ejat> then restart apache2?
<bdx> so in keystone /etc/hosts -> "23.99.125.58 juju-azure2-3a6jyl44mj.cloudapp.net"
<bdx> and in openstack-dashboard /etc/hosts "207.46.155.248 juju-azure2-j41rc8jcm9.cloudapp.net"
<bdx> once you have made the change, run "juju run --unit openstack-dashboard/0 "service apache2 restart""
<ejat> done .. trying to login the dashboard
<ejat> still the same
<ejat> redirected to /horizon/admin then timed out
<bdx> darn
<bdx> if you are not opposed, we can redeploy the dashboard
<ejat> mean .. destroy n redeploy?
<bdx> "juju destroy-service openstack-dashboard && juju destroy-machine 26 --force"
<bdx> yea
<ejat> :)
<ejat> ok
<bdx> then
<ejat> im ok ..
<bdx> lets specify a charmconf.yaml for openstack-dashboard before we redeploy
<ejat> ok ..
<ejat> got sample for charmconf.yaml ?
<bdx> yea omp
<ejat> omp ?
<ejat> on manual page ?
<ejat> dont destroying the dashboard
<bdx> this is a kilo deploy right?
<ejat> done*
<bdx> ok
<ejat> not sure what r the charm get the version
<bdx> http://paste.ubuntu.com/12504309/
<bdx> put that into a file e.g. charmconf.yaml
<bdx> then run these commands
<bdx> "juju deploy openstack-dashboard --config charmconf.yaml && juju add-relation openstack-dashboard keystone"
<bdx> or command I should say :-)
<bdx> and then paste me the url from "juju status --format=tabular | pastebinit"
<ejat> http://paste.ubuntu.com/12504424/
<ejat> then expose ?
<bdx> ummm
<bdx> hold on
<ejat> okie
<bdx> run this again "juju status --format=tabular | pastebinit"
<ejat> http://paste.ubuntu.com/12504545/
<ejat> waiting agnet
<ejat> agent*
<bdx> ok
<bdx> let me know when its finished and in idle state, then paste me the url
<bdx> sorry
<ejat> ok
<ejat> no worries .. thanks for ya assistant :)
<bdx> np
<bdx> run this this time "juju status | pastebinit"
<bdx> and?
<ejat> http://paste.ubuntu.com/12504742/
<ejat> not yet :)
<bdx> darn...
<bdx> ha
<ejat>  installing charm software
<ejat> basic instanse hware
<ejat> its up
<ejat> expose?
<bdx> run this this time "juju status | pastebinit"
<ejat> http://paste.ubuntu.com/12504940/
<bdx> run this this time "juju status | pastebinit"
<ejat> http://paste.ubuntu.com/12504948/
<bdx> ok yea now expose
<ejat> http://paste.ubuntu.com/12504988/
<ejat> what is the pw ?
<bdx> it should be un: admin, pw: openstack
<bdx> but im trying to login atm...no dice
<ejat> :(
<bdx> what are the internal ips for keystone and openstack-dashboard?
<ejat> openstack-dashboard/0
<ejat>     IP Address: 10.0.0.52 | Ports 80/tcp, 443/tcp
<ejat>     Status: started
<ejat>     Public Address: juju-azure2-6yl2jtw74n.cloudapp.net | Ports 80/tcp, 443/tcp
<bdx> also, lets try this in /etc/openstack-dashboard/local_settings.py I want you to modify OPENSTACK_HOST to be the 10.0.0.x ip for keystone
<ejat> hmm .. now its get the right public dns
<bdx> and keystone?
<ejat> keystone/0
<ejat>     IP Address: 10.0.0.46
<ejat>     Status: started
<ejat>     Public Address: 10.0.0.46
<ejat> is the bundle charm not updated?
<bdx> ok, so set OPENSTACK_HOST=10.0.0.46
<ejat> ok
<bdx> in local_settings.py
<ejat> ok
<bdx> and then "juju run --unit openstack-dashboard/0 "service apache2 restart""
<bdx> yea?
<ejat> done restart
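The edit bdx is describing can be sketched as follows, demonstrated on a throwaway copy of the file; on the real unit the path is /etc/openstack-dashboard/local_settings.py, the sed needs sudo, and the restart goes through `juju run` as shown in the comment:

```shell
# Point Horizon at keystone's internal IP by rewriting OPENSTACK_HOST.
# Demonstrated on a temp copy; adjust the path and IP to your deployment.
settings=$(mktemp)
printf 'OPENSTACK_HOST = "127.0.0.1"\n' > "$settings"

keystone_ip=10.0.0.46
sed -i "s/^OPENSTACK_HOST = .*/OPENSTACK_HOST = \"$keystone_ip\"/" "$settings"

cat "$settings"   # OPENSTACK_HOST = "10.0.0.46"

# Then restart apache on the unit so Horizon picks up the change:
#   juju run --unit openstack-dashboard/0 "service apache2 restart"
```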
<bdx> ok
<bdx> do you have the openrc file?
<ejat> http://paste.ubuntu.com/12505116/
<ejat> nope .. i think ..
<ejat> havent get the openrc yet
<ejat> and scp it
<bdx> if so, can you "juju scp openrc keystone/0:~/ && juju run --unit keystone/0 "source openrc && keystone user-list"
<bdx> ok so now on your local machine run "ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.52"
<bdx> then try again
<ejat> mkstemp: Permission denied
<bdx> ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.52 ?
<bdx> oooh
<bdx> you are root?
<bdx> err
<bdx> ok
<bdx> so
<bdx> sudo ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.52
<ejat> openrc i dont have :)
<bdx> ohh ok
<ejat> $ sudo ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.52
<ejat> mkstemp: No such file or directory
<bdx> ok
<bdx> so lets do this as a normal user
<bdx> not root
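What the ssh-keygen call above does is drop the stale host key for 10.0.0.52 from a known_hosts file. A sketch against a throwaway file (bdx's command targets /root/.ssh/known_hosts, hence the sudo; running it as a normal user against ~/.ssh/known_hosts avoids the mkstemp errors entirely):

```shell
# Build a throwaway known_hosts with two valid entries, then remove one host.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key"           # throwaway key material
printf '10.0.0.52 %s\n' "$(cat "$tmp/key.pub")" >  "$tmp/known_hosts"
printf '10.0.0.46 %s\n' "$(cat "$tmp/key.pub")" >> "$tmp/known_hosts"

# -R removes only the entries for the named host; the rest are kept
# (the original file is saved alongside as known_hosts.old).
ssh-keygen -f "$tmp/known_hosts" -R 10.0.0.52
```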
<ejat> tenant name = admin
<ejat> keystone = internal ip ?
<ejat> for the openrc ?
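For reference, a minimal openrc answering these questions might look like the following. The tenant/user/password are the charm defaults mentioned earlier in the conversation; the auth URL (keystone's internal IP, port 5000, v2.0 API) is an assumption for this era of OpenStack — check your actual keystone endpoint:

```shell
# Write a minimal openrc in a scratch directory, then source it.
cd "$(mktemp -d)"
cat > openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://10.0.0.46:5000/v2.0
EOF

# Sourcing it exports the variables the keystone client reads:
. ./openrc
echo "$OS_AUTH_URL"   # http://10.0.0.46:5000/v2.0
```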
<bdx> ok
<bdx> I know exactly whats up
<bdx> if you want horizon to be able to use keystone's public url you have to expose it
<bdx> lol
<ejat> horizon expose ?
<ejat> or keystone?
<ejat> both ?
<bdx> otherwise you need to set OPENSTACK_HOST=10.0.0.46 in local_settings.py on the openstack-dashboard
<bdx> horizon is already exposed
<bdx> but notice in local_settings.py it is trying to talk to keystone's public address which is not exposed
<bdx> so
<bdx> if you want public access to your api endpoints (which I'm sure you probably will) then expose keystone
<ejat> $ juju scp openrc keystone/0:~/ && juju run --unit keystone/0 "source openrc && keystone user-list"
<ejat> /tmp/juju-exec096226571/script.sh: line 1: openrc: No such file or directory
<ejat> ERROR subprocess encountered error code 1
<bdx> ok
<ejat> the openrc is in the path
<bdx> I know
<bdx> its
<bdx> ok
<bdx> my bad
<bdx> if so, can you "juju scp openrc keystone/0:/home/ubuntu/ && juju run --unit keystone/0 "source /home/ubuntu/openrc && keystone user-list"
<bdx> but that wont work until you expose keystones public endpoint
<ejat> \0/ user-list
<bdx> did you expose keystone?
<ejat> not yet
<bdx> ok
<ejat> hehe wait
<bdx> are you reading what I'm telling you
<bdx> if you look in the openrc it has keystone's public endpoint, which is not accessible unless you expose it
<bdx> so once you expose it
<bdx> "juju run --unit openstack-dashboard/0 "keystone user-list""
<bdx> and you should have success
<ejat> http://paste.ubuntu.com/12505350/
<ejat> :(
<bdx> grrr
<ejat> change back to the internal IP in local_settings.py ?
<bdx> yeah
<bdx> if you exposed keystone
<ejat> keystone/0
<ejat>     IP Address: 10.0.0.46
<ejat>     Status: started
<ejat>     Public Address: 10.0.0.46
<ejat> after exposed ..
<ejat> its still the same
<bdx> oooh
<bdx> ok
<ejat> i think something wrong with the bundle openstack base :(
<bdx> is that the value of OPENSTACK_HOST? in local_settings.py?
<ejat> yups
<bdx> 10.0.0.46 ?
<ejat> still the same
<bdx> ok
<bdx> now "juju ssh openstack-dashboard/0"
<bdx> can you do that?
<ejat> can
<bdx> ok
<bdx> inside of openstack-dashboard run "sudo service apache2 restart"
<ejat> restarted
<ejat> $ keystone user-list
<ejat> Expecting an auth URL via either --os-auth-url or env[OS_AUTH_URL]
<bdx> source openrc
<bdx> then keystone user-list
<bdx> ejat: inside of openstack-dashboard
<bdx> "sudo cat /var/log/apache2/error.log | pastebinit && sudo cat /var/log/apache2/access.log | pastebinit"
<ejat> http://paste.ubuntu.com/12505464/
<ejat> http://paste.ubuntu.com/12505480/
<ejat> http://paste.ubuntu.com/12505481/
<bdx> ejat: http://paste.ubuntu.com/12505480/ line 347
<bdx> now what I want you to try
<bdx> from openstack-dashboard
<bdx> edit openrc
<bdx> to reflect keystone's internal ip
<bdx> 10.0.0.46
<bdx> http://paste.ubuntu.com/12505530/
<ejat> already .46
<ejat> in dashboard
<bdx> ok
<bdx> now edit the openrc to look like http://paste.ubuntu.com/12505530/
<ejat> owh no regionone
<bdx> huh
<bdx> if you want to share a chrome desktop session with me I can fix this for you a lot faster, probably
<bdx> chrome remote
<bdx> do you know about that
<ejat> ok
<bdx> https://chrome.google.com/webstore/detail/chrome-remote-desktop/gbchcmhmhahfdphkhkmpfmihenigjmpp?hl=en
<ejat> one moment
<bdx> ready when you are
<ejat> bdx : PM
<ejat> ?
<ejat> the code is expiring
<bdx> yeah send it
<bdx> pm me
<ejat> bdx : thanks u so much ... for keep trying
<blr> I have a web service subordinate to squid, with a requires interface defined, however when adding a relation between the web service and gunicorn, _both_ the web service and gunicorn declare that they are subordinate to eachother, which clearly isn't correct.
<rick_h_> blr: the issue is that gunicorn is a subordinate: https://api.jujucharms.com/charmstore/v4/trusty/gunicorn-0/archive/metadata.yaml
<rick_h_> blr: it's meant to be used to serve out/hook up with your app as a full charm vs a subordinate
<rick_h_> blr: and juju does not support a subordinate on a subordinate
<blr> rick_h_: oh it doesn't? :/
<rick_h_> blr: no, because a subordinate is meant to stick onto something and apply to that machine
<blr> I need both squid and my web service on the same unit however
<blr> err hopefully you can parse that hah
<rick_h_> blr: so there we'd suggest using colocation
<rick_h_> blr: so I'd do your service as its own charm
<rick_h_> and colocate it using the --to command
<rick_h_> blr: juju supports containers and 'hulk smash' colocation methods you could use to have them all running on one machine
<rick_h_> blr: https://jujucharms.com/docs/stable/charms-deploying#deploying-to-specific-machines-and-containers
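rick_h_'s colocation suggestion, sketched as a bundle fragment. The charm names and machine numbers are illustrative, not from blr's actual setup:

```yaml
# Hypothetical placement: squid and the web service as two full
# (non-subordinate) charms hulk-smashed onto one machine via "to".
services:
  squid:
    charm: cs:trusty/squid-reverseproxy
    num_units: 1
    to: ["0"]
  my-web-service:            # your service as its own charm
    charm: ./my-web-service
    num_units: 1
    to: ["0"]                # same machine; "lxc:0" would use a container
machines:
  "0":
    series: trusty
```

The CLI equivalent of the placement is `juju deploy ./my-web-service --to 0`.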
<blr> thanks rick_h_, I'll try that again.
<rick_h_> blr: np, hope that helps some
#juju 2016-09-19
<magicaltrout> what the hell
<magicaltrout> why is juju deploy trying to open a browser
<magicaltrout> on a server which runs headless
<magicaltrout> balls
<magicaltrout> now i  juju logout
<magicaltrout> and can't juju login
<magicaltrout> okay logged back in
<magicaltrout> still getting a browser prompt
<lazypower> magicaltrout: that should give you the url you need to copy/paste if its in headless mode
<magicaltrout> lazypower: yes it did after a delay
<magicaltrout> but then when i run that url on a server that isn't my remote one, what difference does it make?
<magicaltrout> or do I curl it?
<magicaltrout> in the end i logged in with lynx
<lazypower> magicaltrout: should be token based. its polling/waiting on a socket to get that auth code back. Pasting that into your workstation browser should have gotten you through.
<lazypower> if its not, we need to tag and bag that bug
<rock> Hi. Can't we install juju2.0 on ubuntu 14.04.1(trusty)?
<rock> If we can, could anyone please provide me a reference link for that.
<rock> Actually, I want to do [MAAS+OpenStack base bundle] setup on physical servers. I have taken 5 servers. On one server I configured MAAS 1.9.4 on trusty(14.04). From MAAS UI , I commissioned all remaining four nodes successfully.
<rock> Now I want to deploy the OpenStack base bundle. What exactly do I need to do?
<rock> I integrated our cinder-storage driver charm with the OpenStack bundle, and I pushed this bundle to the juju store as our own bundle. So now I want to deploy our bundle on the MAAS nodes. What exactly do I need to do here? Please provide me the clear information.
<junaidali> Hi everyone, I'm having an issue with juju 2.0 on xenial. The lxds are getting IPs from lxdbr0 bridge rather than the Openstack management network. Any idea what might be the cause?
<junaidali> I'm using maas 2.1.0alpha3 and juju 2.0-beta18
<beisner> hi junaidali, is this when deploying on top of openstack using juju?
<junaidali> yes
<junaidali> in /var/log/lxd/<lxd-name>/lxc.conf, lxc.network.link = lxdbr0
<junaidali> might this be the issue?
<beisner> junaidali, it is a known issue.  https://bugs.launchpad.net/juju/+bug/1615917  we do a lot of juju deploys on top of openstack, and have great success in just placing all units in their own nova instance.
<mup> Bug #1615917: juju openstack provider --to lxd results in unit behind NAT (unreachable) <openstack-provider> <uosci> <juju:Triaged> <https://launchpad.net/bugs/1615917>
<junaidali> beisner, when you say deploying on top of openstack, do you mean deploying openstack over openstack?
<beisner> junaidali, are you deploying to maas with juju and getting that network issue?  or do you have an openstack deployed, where you are deploying some other thing on top of that with juju?
<junaidali> i'm deploying to maas with juju
<junaidali> Actually this is happening in a fresh  deployment
<junaidali> sorry for the confusion
<beisner> ok, so i'm confused
<beisner> the openstack networking won't be in play at that point
<rick_h_> junaidali: if MAAS is setup to provide dhcp the lxd containers should come up with IP addresses on the network and be reachable across the hosts.
<junaidali> machines are getting correct IPs, it's just the lxds that have the issue
<junaidali> yes, but in my case, it is not getting IP from maas.
<junaidali> I was previously using an RC release of maas. The error came up when I upgraded to maas 2.1.0 alpha3
<junaidali> I deleted the maas and recreated the whole environment but it didn't help
<junaidali> lxc.conf for an lxd (/var/log/lxd/<lxd-name>/lxc.conf) http://paste.ubuntu.com/23202676/
<rick_h_> junaidali: maybe check out https://bugs.launchpad.net/juju/+bug/1566791 where not all interfaces get bridged ootb
<mup> Bug #1566791: VLANs on an unconfigured parent device error with "cannot set link-layer device addresses of machine "0": invalid address <2.0> <4010> <cpec> <network> <juju:In Progress by dimitern> <https://launchpad.net/bugs/1566791>
<rick_h_> junaidali: a fix for that is on the way, but not 100% sure it's what you're hitting
<coreycb> junaidali, I'm hitting similar issues but not sure it's the same as yours.  here's something you can fix possibly: https://lists.ubuntu.com/archives/juju/2016-September/007801.html
<coreycb> junaidali, also when creating your model try this: juju add-model --config enable-os-upgrade=false --config enable-os-refresh-update=false <model-name>
<junaidali> thanks coreycb, let me try your suggestion
<coreycb> junaidali, hopefully it helps. I've not gotten around my issue yet so I'll keep you posted on any results.
<marcoceppi> coreycb junaidali you can also make that the default in beta18 `juju model-defaults`
<coreycb> junaidali, fyi I think rc1 comes out tomorrow for juju and from what I understand the above bugs are fixed in it
<coreycb> junaidali, actually the model issue may be an images or cloud-init bug, not sure
<junaidali> I tried the suggestions, still hitting the same issue.  I hope this is fixed in rc1 now
<kjackal> cory_fu kwmonroe: I will be doing the kafka ingestion bundle again with bigtop charms this time. Where should this bundle live? I think it should be outside the bigtop source tree since it will have apache-flume in it. What do you think?
<kwmonroe> kjackal: i think it should live in bigtop-deploy, because i expect it to be updated to use bigtop-flume once we get that charmed
<kwmonroe> i don't think there will be too much concern if we have a non bigtop charm in a bigtop bundle, as long as the expectation is that all charms will eventually be bigtopped
<kjackal> kwmonroe: I see your point, so we put it inside bigtop and as soon as we get flume ready we create a PR
<kjackal> sounds good
<cory_fu> kjackal: In particular, I would include a comment in the bundle yaml next to the apache-flume charm URLs saying that they will be swapped out for the Bigtop versions ASAP
<lazypower> http://imgur.com/a/xrOmO -- not sure if juju is telling prophecies or coincidence
<cory_fu> @kwmonroe @kjackal How do you feel about changing our repo naming convention to follow the charm-, layer-, interface- prefix convention (where we currently use layer- for both charm layers and base layers)?
<kjackal> cory_fu: isn't there a case where we might build a new charm over what we consider a charm layer now?
<kjackal> cory_fu: for example could someone at any point use the client layer to put something ontop?
<cory_fu> kjackal: Yes, there is a possibility that a charm (layered or non-layered) could serve as a base layer, but it is less common and I think it's still useful to indicate that the "base" is a charm in its own right
<cory_fu> i.e., layer- would be reserved for things that could never function as a charm on their own
<cory_fu> Hrm.  I thought there were more repos in juju-solutions that followed that convention, but I only actually see one.
<cory_fu> marcoceppi: Was I correct in recalling that we wanted to encourage that convention?  ^
<marcoceppi> cory_fu: we never really have enforced /that/
<marcoceppi> cory_fu: I've been doing layer-* reuse and charm-* as either a top layer or built charm
<cory_fu> marcoceppi: I was suggesting charm-* as top layer.
<marcoceppi> we also have some layers in the index that are actually charm layers and probably shouldn't be
<marcoceppi> cory_fu: right, but I also use it for classic charms that have not been layered
<cory_fu> Indeed.  IOW, anything that can be deployed (possibly after running through `charm build`)
<marcoceppi> cory_fu: this whole "options not defined" bug is killing me
<smgoller> Hey all, I'm trying to deploy openstack on maas 2 with juju 2 beta 18 using ubuntu 16.04 everywhere. Everything comes up, but I can't get networking to work. The bundle configuration references eth1 as presumably the physical interface connected to the provider network, which is eno2 in xenial-land. I can't seem to figure out what's going on. any of gnuoy, thedac, or tinwood awake?
<marcoceppi> yes
<cory_fu> marcoceppi: Really?  I thought we sorted that out?
<marcoceppi> cory_fu: we just found where it happens, never patched it
<marcoceppi> smgoller: I hit that earlier today, but someone else fixed it and I'm not sure how
<thedac> smgoller: I'll get back to you in just a minute when my meeting is done. OK?
<lazypower> cory_fu: marcoceppi - +1 to options undefiend bug
<smgoller> thedac: awesome! thanks
<lazypower> workaround works for now, but its not obvious
<lazypower> *undefined
<cory_fu> marcoceppi, lazypower: Somebody should fix that.  >_>
<smgoller> afk for a sec to run to the toilet
<lazypower> cory_fu: you fix it in the repo, i'll fix it in charmbox :D
<smgoller> back
<kjackal> cory_fu: kwmonroe: am going to push to bigdata-dev a build of the plugin on trusty with openjdk optional. The reason is that we end up with this deployment http://pastebin.ubuntu.com/23203686/ where the plugin must be on trusty to relate to flume and it also needs openjdk
<cory_fu> kjackal: I think the plugin might require some additional work, since it currently expects java to be proxied through the principal
<kwmonroe> hold up kjackal
<kjackal> cory_fu: kwmonroe: didn't we make openjdk optional in the base layer?
<kjackal> then I might need to use the apache plugin
<cory_fu> We did, but the plugin was previously a special case (because we had a misunderstanding about how relations between two subordinates work)
<cory_fu> kjackal, kwmonroe: Apparently, we can't have a bundle and a charm with the same name.  What should we call the insightedge bundle?  insightedge-core?
<marcoceppi> cory_fu: I tried, and failed, the logic is too complex for me
<thedac> smgoller: Hi, so Xenial has slightly more unpredictable interface names based on the device type. There are a handful of configuration knobs you can set in the charms to work around this. Let me get you a list. One sec
<kwmonroe> kjackal: can you try this one? http://jujucharms.com/u/bigdata-dev/bigtop-plugin-trusty
<kjackal> thanks kwmonroe.
<kwmonroe> kjackal: cory_fu: ^^ that's a bigtop plugin, specifically built for trusty.. it pulls in puppet from puppetlabs since the trusty archive won't have a new enough puppet.
<smgoller> thedac: yeah, maas tries to keep it a little more consistent, but xenial instances i've brought up on esxi have names like ens160, ens192...
<kjackal> cory_fu: how about insightedge-bundle or -solution? "core" sounds like "limited functionality, only for internal use"
<cory_fu> kjackal: I wish we could rename the charm and just call the bundle "insightedge" but that's impossible (within the bigdata-dev namespace) now
<kwmonroe> kjackal: cory_fu, bigtop-plugin-trusty should *not* need openjdk.  that's the charm i was using during the summit specifically because some of our big data charms weren't xenial... i'm like 92% sure it worked without java.
<kjackal> ah that is my doing (I think) we cannot rename a charm :(
<cory_fu> kjackal: Also, I was thinking "insightedge-core" like "kubernetes-core" since we will want to build on the bundle to include things other than just Spark
<cory_fu> kwmonroe: We can't mix Bigtop and vanilla plugins, can we?
<thedac> smgoller: In neutron-gateway set the ext-port:
<thedac> https://github.com/openstack/charm-neutron-gateway/blob/master/config.yaml#L85
<thedac> If you are running HA set vip_iface and ha-bind interface
<thedac> https://github.com/openstack/charm-keystone/blob/master/config.yaml#L186
<kwmonroe> cory_fu: i didn't try
<thedac> https://github.com/openstack/charm-keystone/blob/master/config.yaml#L198
<thedac> And the corosync_bindiface for hacluster
<cory_fu> I'm sure we can't
<thedac> https://github.com/openstack/charm-hacluster/blob/master/config.yaml#L30
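thedac's list of knobs, gathered into a single deploy-time config file of the kind passed with `--config`. The option names come from the config.yaml files linked above; the interface values are illustrative, since on Xenial they vary per machine:

```yaml
# Interface settings for Xenial's less predictable NIC names
# (interface values here are examples only).
neutron-gateway:
  ext-port: eno2             # interface on the external/provider network
keystone:                    # HA-only settings
  vip_iface: eno1
  ha-bindiface: eno1
hacluster:
  corosync_bindiface: eno1
```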
<cory_fu> kwmonroe: I was somehow under the impression that kjackal was dealing with vanilla Hadoop
<smgoller> thedac: we're not doing HA, this is just a basic install to get things rolling
<cory_fu> I think I read something backwards, though
<cory_fu> kwmonroe: Thoughts on the insightedge bundle name?
<thedac> smgoller: ok, then it should just be the neutron-gateway ext-port.
<smgoller> ok, so i've tried a bunch of different bundles, and the last bundle i tried was your stable one on github.
<smgoller> I've done the ext-port change and that one didn't seem to make a difference.
<thedac> smgoller: oh, ok, then I need more info on *how* things are breaking
<kwmonroe> cory_fu: what's in the bundle?  spark + insightedge?
<smgoller> thedac: heh, that's a good question. I'm trying to figure that out myself.
<cory_fu> kwmonroe: And Zeppelin
<cory_fu> kwmonroe: Basically, it recreates what the InsightEdge release provides.
<smgoller> but first, this is the one i've been using most recently. https://github.com/openstack-charmers/openstack-bundles/tree/master/stable/openstack-base
<cory_fu> (But in a more modular way)
<thedac> smgoller: and you are changing ext-port for the gateway charm? That is needed.
<kwmonroe> cory_fu: i think i agree with kjackal.. -core kinda implies just the minimum.  i wouldn't expect zepp in a core bundle.  unless that's the only interface into insightedge
<smgoller> http://pastebin.com/peLb4Zme
<smgoller> do I need to set ext-port as well?
<smgoller> because the docs say that was deprecated in favor of the options in this one
<smgoller> which is why I switched to this bundle over the one on jujucharms.com
<thedac> ah, let me validate that for you. One sec
<smgoller> Yeah, that data-port line is the only thing I changed from github, from eth1 to eno2
<thedac> smgoller: sorry, I had to read the docs myself. I am trying to figure out if data-port is *only* used in a flat-network setup. I'd like to run a quick test on my end.
<smgoller> ok
<thedac> smgoller: in the meantime, can you say one way or the other that only external networking is failing?
<smgoller> well, I can't ping the openstack router provider interface.
<thedac> ok
<thedac> smgoller: `ip netns exec qrouter-$ID ping $INSTANCE_IP`
<thedac> try that ^^. That will tell us if the internal ovs is working
<smgoller> on the compute node?
<thedac> on the gateway node
<smgoller> got it
<smgoller> nope
<smgoller> no pings
<thedac> ok
<thedac> smgoller: let me run this test then and get back to you in just a bit
<smgoller> ok
<smgoller> thanks!
<kwmonroe> cory_fu: back to your naming convention, we would then have 'layer-bigtop-base' and 'charm-hadoop-namenode'?
<cory_fu> kwmonroe: charm-hadoop-namenode because it is deployed (after being built)
<cory_fu> kwmonroe: It's unclear whether we would have charm-hadoop-datanode or layer-hadoop-datanode, since we use it as a base layer and don't publish it in the store, but it could conceivably be built and deployed directly
<cory_fu> kwmonroe: Actually, to be pedantic, we wouldn't have either charm-hadoop-namenode nor layer-hadoop-namenode because that lives in the Bigtop repo.  ;)
<cory_fu> kwmonroe, kjackal: I went ahead and used the charm- prefix for https://github.com/juju-solutions/charm-insightedge-core since I was renaming it anyway
<kwmonroe> very well
<kwmonroe> i'll let you decide how much confusion you've just injected to the datanode/nodemgr naming.
<cory_fu> :)
<cory_fu> I don't plan on changing them
<cory_fu> And anyway, I was just considering this for future repos
<cory_fu> Though all of our charm repos should live upstream anyway, so it shouldn't really make any difference
<thedac> smgoller: ok, so our test setup does use data-port however, it sets it to the MAC address rather than the interface name ie: br-ex:fa:16:3e:ec:79:d5 Can you test setting the MAC address of the "external" interface? And we can go from there
<smgoller> ok, how would I do that?
<smgoller> oh
<smgoller> i see
<smgoller> the config line explicitly sets a mac address? or do I need it to match to something else
<thedac> With MAAS you can tag a host as the gateway. Then use constraints: tags=$GATEWAY_TAG. Then find the MAC address of the interface and in the bundle have data-port: br-ext:$MAC
<smgoller> ok, but since i've already got something deployed i should just ssh in and grab the mac and redeploy?
<smgoller> or do i need to tear this down and deploy from scratch?
<thedac> smgoller: you could test with the current deploy as a first step
<smgoller> i guess i can just set config on gateway-api?
<thedac> Grab the mac and juju set neutron-gateway data-port="br-ext:$MAC"
<smgoller> ok, that's done.
<smgoller> do I need to do anything to the charm to get it to reconfigure?
<thedac> Let that settle a moment and see if you can ping the neutron router
<smgoller> still no go. you mean the mac address of eno2, the physical interface connected to the external physical network, yes?
<smgoller> just to be clear.
<thedac> yes, correct
<smgoller> should i reboot the machine?
<thedac> ok, can I see `ovs-vsctl show` from the gateway and one of the compute nodes?
<thedac> I don't think that will help
<smgoller> http://paste.ubuntu.com/23203969/
<smgoller> that's the gateway
<coreycb> junaidali, fyi I just tested with a pre-release of juju rc1 and it fixed the lxd bridge issues I was hitting
<thedac> ok, looking
<smgoller> http://paste.ubuntu.com/23203974/
<smgoller> compute node
<thedac> smgoller: simple test. From 172.21.0.4 can you ping 172.21.0.7? Just making sure the tunnels should be expected to work.
<smgoller> yes
<thedac> thanks
<thedac> smgoller: so I am not finding a smoking gun; the ovs output looks good.
<smgoller> ok.
<thedac> here is a doc I often follow from ODS for troubleshooting neutron http://www.slideshare.net/SohailArham/troubleshoot-cloud-networking-like-a-pro
<thedac> I think the next step would be a re-deploy and if that still does not work follow this doc until we have a smoking gun
<smgoller> Sounds good. Thank you so much for your help! I'll report back once it's redeployed.
<thedac> smgoller: great. And do set the data-port with the MAC in the bundle
<smgoller> thedac: yup, i just set that.
<thedac> cool, I'll hear from you soon
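The data-port-by-MAC form thedac recommends, as it might appear in the bundle. The bridge name follows his earlier example and the MAC is illustrative — substitute the MAC of your gateway's external NIC:

```yaml
# Bind the external bridge to a NIC by MAC address instead of by name,
# sidestepping Xenial's shifting interface names (MAC is an example).
neutron-gateway:
  options:
    data-port: br-ex:fa:16:3e:ec:79:d5
```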
<kjackal> cory_fu: kwmonroe: After some debugging... Seems we are hitting a permission problem on HDFS "Permission denied: user=root, access=WRITE, inode="/user/flume/...."
<kjackal> cory_fu: kwmonroe: I do not see any way of configuring the output directory of flume-hdfs https://jujucharms.com/u/bigdata-dev/apache-flume-hdfs/trusty/34
<kjackal> Is it acceptable to ask the user to change the permissions of that dir or should we use the apache-hadoop?
<cory_fu> kjackal: I remember when kwmonroe originally hit that permissions issue and it was sorted out, I thought in the flume charm.
<cory_fu> kjackal: That's not an on-disk path, that's an HDFS path.  The Flume charm should be creating and managing that directory inside HDFS
<kwmonroe> kjackal: do an 'hdfs dfs -ls -R /user' and see if the /user/flume dir is there
<cory_fu> kjackal: https://github.com/juju-solutions/layer-apache-flume-base/blob/master/lib/charms/layer/apache_flume_base.py#L122
<kjackal> cory_fu: true. but bigtop hdfs pre-creates all directories with permissions we do not like for this usecase
<kjackal> kwmonroe: yes, /user/flume is there and is owned by flume in the hadoop group
<kwmonroe> cool kjackal, now you need to find out why 'root' is trying to write there.. writes from flume-hdfs should be coming from the 'flume' user
<kjackal> I see!
<kjackal> Back to debugging :)
<kwmonroe> fwiw kjackal, the flume source will be setting the output dir based on the 'event_dir'.  so apache-flume-syslog defaults that to 'flume-syslog', which would appear in hdfs as '/user/flume/flume-syslog'
<kjackal> kwmonroe: yeap
<junaidali> thanks coreycb
<smgoller> so i have a maas machine that failed deployment according to maas. Can I recover from this from juju's standpoint or do I need to destroy the model and start over?
<smgoller> juju doesn't think the machine is in an error state but maas is pissed
<rick_h_> smgoller: so if it's a deployment you can destroy the application and redeploy?
<rick_h_> smgoller: need more info on what happened I guess.
<rick_h_> smgoller: does juju status --format=yaml show more details on the machine?
<smgoller> so I did a deployment of a bundle, which required 4 machines
<smgoller> i blew the model away and redeployed.
<smgoller> but for the record, one of the machines failed to deploy according to maas
<rick_h_> smgoller: k, if it happens again let us know and we can try to help see what's up.
<smgoller> and juju status showed that machine as "error" state. but I'm sorry I was too impatient. :) If it happens again i'll leave it :)
<kjackal> kwmonroe: cory_fu: need some help with bundles and the store
<cory_fu> kjackal: Sure, what's up?
<kjackal> there is no charm build step for bundles, right?
<kjackal> cory_fu: charm show cs:~bigdata-dev/bundle/kafka-ingestion-0
<kjackal> cory_fu: I pushed the bundle but I am not sure where did it go in the store
<kjackal> cory_fu: I do not see it here: https://jujucharms.com/q/bigdata-dev?type=bundle
<kjackal> I must be missing something, e.g. a metadata.yaml
<cory_fu> kjackal: 1) Did you do `charm release cs:~bigdata-dev/bundle/kafka-ingestion-0`?  2) Did you do `charm grant cs:~bigdata-dev/bundle/kafka-ingestion everyone`?
<cory_fu> kjackal: (Hint, you forgot to grant)
<cory_fu> https://jujucharms.com/u/bigdata-dev/kafka-ingestion/0
<kjackal> http://pastebin.ubuntu.com/23204426/
<cory_fu> kjackal: Odd.  http://pastebin.ubuntu.com/23204431/
<kjackal> So cory_fu, is this a permissions issue?
<kjackal> Is the series = bundle correct?
<cory_fu> kjackal: Oh, I think what happened is that you granted before you released, and the stable channel didn't exist yet, so the grant ended up being a no-op
<cory_fu> kjackal: Try granting again
<kjackal> ok
<cory_fu> kjackal: Yes, the commands look fine to me
<kjackal> https://jujucharms.com/u/bigdata-dev/kafka-ingestion
<kjackal> Awesome, thanks!
<smgoller> thedac: ok, I've got it redeployed. I've created the external network according to the docs again. I've also got a manually deployed host as well.
<smgoller> thedac: at this point, the manual host can ping the router. however, pinging via ip netns exec on the qrouter fails to ping the router.
<smgoller> the manual host is mainly just to validate layer1 connectivity
<thedac> smgoller: hmm, ok.
<thedac> I guess I need you to unpack "manually deployed host".
<thedac> Is that a booted instance on the cloud or something else
<smgoller> a host manually deployed with maas with ubuntu on it
<smgoller> outside openstack
<thedac> but outside the software defined network .. ok
<smgoller> basically something else connected to the provider network that's not the router
<smgoller> i have not created a internal network
<thedac> Being able to ping the router at all is a good sign. I would finish the config and boot an instance and we can debug from there
<smgoller> this is what i used to create the external network "./neutron-ext-net -g 10.118.28.1 -c 10.118.28.0/24 -f 10.118.28.10:10.118.28.254 ext_net"
<thedac> smgoller: remember there are probably secgroups also at play. You might consider making the default secgroup wide open for early testing.
<smgoller> the default secgroup seems to be wide open by default
<smgoller> downloading an ubuntu image so i can launch an instance.
<smgoller> thedac: do you want me to go ahead and create an internal network and launch an instance? Or should we continue trying to debug the external part of this?
<thedac> I would go ahead and create the internal network and boot an instance.
<thedac> Having said that, when you say the manual host can ping the router, do you mean 10.118.28.1 or 10.118.28.10 (the likely SDN router IP)
<stokachu> cory_fu: https://github.com/battlemidget/charm-layer-ghost/pull/5
<stokachu> when you get a chance
<smgoller> 10.118.28.1
<smgoller> thedac: the upstream router, not the router in openstack. my apologies.
<thedac> smgoller: which is an external device correct? ... ok, I was confused
<cory_fu> stokachu: LGTM, but I didn't test it
<smgoller> yes
<stokachu> cory_fu: thanks, im testing it now
<thedac> smgoller: what does neutron router-list show as the IP of the SDN router?
<smgoller> 10.118.28.10
<stokachu> cory_fu: im noticing 5 minute waits between the APT layer running ensure_package_status
<stokachu> not sure what that's about
<thedac> smgoller: ok, and can you ping that?
<thedac> either from the manual host or inside the netns?
<smgoller> sudo ip netns exec qrouter-c2a5b6f2-8006-4a78-ad07-c82f8c1fd7ef ping 10.118.28.10
<smgoller> succeeds.
<stokachu> you can see that here: http://paste.ubuntu.com/23204650/
<smgoller> thedac: fails from the manual host
<thedac> and what is the IP of the manual host?
<cory_fu> stokachu: Strange.  Sounds like maybe it's queueing it but not acting on it during that hook, so it gets processed during the next hook (presumably update-status, after 5 min)
<smgoller> 10.118.28.8
<thedac> smgoller: ok, on the neutron-gateway host are eth0 and eth1 in the same VLAN?
<thedac> smgoller: they would have to be for that to work
<stokachu> cory_fu: im not sure what package it's queueing as everything was installed on line 973
<stokachu> stub: ^ any idea on that?
<smgoller> thedac: you mean the physical interfaces on the machine running neutron-gateway?
<thedac> yes
<cory_fu> stokachu: Where do you see the 5 minute gap?
<smgoller> the physical interfaces are both untagged, but they are on separate vlans on the physical switch they're connected to
<stokachu> cory_fu: line 1118 and line 1119
<thedac> smgoller: Because 10.118.28.8 and 10.118.28.10 are both in the 10.118.28.0/24 network they need to be in the same broadcast domain. Or you need a different set of network address space for the ext interface
<thedac> make sense?
<cory_fu> stokachu: It's not actually installing anything there.  That's just update-status running after 5 min of doing nothing.  The apt layer always logs that it's initializing at the start of every hook, even if there's nothing for it to do
<smgoller> thedac: I'm misinterpreting something. the manual host's ethernet and the "external" interface (eno2) on the machine running neutron-gateway are in the same vlan
<cory_fu> stokachu: Probably worth filing a bug for stub to remove that log message, as it doesn't seem useful
<stokachu> cory_fu: hmm, ghost is sitting at installing NPM dependencies
<smgoller> thedac: my initial answer was for the two physical interfaces on the machine running neutron-gateway
<thedac> smgoller: ooh, ok, so it has a second ethernet interface in the same vlan as the neutron-gateway's external interface?
<cory_fu> stokachu: Guessing that there's a handler order dependency in how you update your status
<smgoller> it being the manual host? yes.
<thedac> yes, ok
<cory_fu> stokachu: Give me a few and I'll take a closer look.  It's probably actually ready
<stokachu> cory_fu: ok
<thedac> smgoller: so we expect that ping to work. But to humor me. Will you set the default secgroup wide open with: http://pastebin.ubuntu.com/23204687/
<smgoller> thedac: on it
<thedac> in particular the icmp bit
<smgoller> thedac: still no go
<thedac> ok, ... /me thinks for a bit
<cory_fu> marcoceppi: Is this the issue you were hitting with resources not downloading?  http://pastebin.ubuntu.com/23204673/
<cory_fu> If so, will it eventually recover?
<marcoceppi> cory_fu: I didn't hit it, lazypower and mbruzek did
<marcoceppi> cory_fu: and no, there was no recover from what I remember
<cory_fu> Ok, then mbruzek, how would you handle that?
<cory_fu> i.e., do I have to redeploy the app, start a new model, or tear down the entire controller?
<mbruzek> cory_fu:  is that with juju 2.0 ?
<cory_fu> Yes
<mbruzek> Did you juju attach the resource
<cory_fu> mbruzek: Yes, it's in the store.  This worked several times before this one choked
<cory_fu> Oh, wait
<cory_fu> I deployed from local this time
<cory_fu> ha!
<cory_fu> Thanks, mbruzek
<mbruzek> we have code that checks the size of a resource
<mbruzek> and I do resource-get in a try catch
<cory_fu> mbruzek: Does it sometimes return an empty file?
<mbruzek> try/except
<mbruzek> We did this so we can specify zero byte resources
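An editor's sketch of the pattern mbruzek describes: wrap the resource fetch in try/except and size-check the result, so a deliberately zero-byte resource can be told apart from a failed download. In a real charm the getter would be `charmhelpers.core.hookenv.resource_get` (or the `resource-get` tool); here it is passed in as a plain callable, and all names are illustrative, not the actual charm code.

```python
import os

def fetch_resource(resource_get, name, min_size=1):
    """Fetch a charm resource defensively (illustrative helper).

    resource_get is the fetching function -- in a real charm,
    charmhelpers.core.hookenv.resource_get -- which returns a filesystem
    path, or a falsy value / raises when nothing is attached.  A file
    smaller than min_size bytes is treated as "not really attached",
    which lets a charm that *wants* zero-byte placeholder resources
    simply pass min_size=0.
    """
    try:
        path = resource_get(name)
    except Exception:
        # resource-get fails when no resource has been attached/uploaded
        return None
    if not path or not os.path.isfile(path):
        return None
    if os.path.getsize(path) < min_size:
        return None
    return path
```

In a charm this would be invoked as something like `fetch_resource(hookenv.resource_get, 'installer')`.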
<thedac> smgoller: ok, looking at the neutron-ext-net script it defaults to network-type GRE. For your external network you actually want --network-type flat. So, I would remove the router and networks and re-run neutron-ext-net with --network-type flat
<smgoller> aha
<thedac> smgoller: you can prove this to yourself with neutron net-show ext-net
<thedac> look at  provider:network_type
<smgoller> the version of the script i have doesn't support that argument
<thedac> just to clear up. Are you using the openstack-charm-testing repo to get that script?
<smgoller> no, but I can grab it.
<smgoller> is that on launchpad or github?
<thedac> interesting, so before I send you there. What does neutron net-show ext-net show for provider:network_type
<thedac> lp:~ost-maintainers/openstack-charm-testing/trunk/
<thedac> That has our version of the script in bin
<smgoller> | provider:network_type     | gre                                  |
<thedac> ok, so that is still a problem.
<thedac> Let me track down the neutron commands directly so we are not depending on a script. One sec
<stokachu> cory_fu: yea something is causing dpkg to go into a unconfigured state
<smgoller> thedac: to be clear, that was the old one. I'm checking out that one from launchpad now
<thedac> ok
<cory_fu> marcoceppi, tvansteenburgh: To get the new version of jujuclient on Xenial, do you recommend `pip install --upgrade` over the one provided by python-jujuclient, or is there a ppa I should use instead?
<smgoller> thedac: that was it. I can now ping from the netns to the provider router
<thedac> ok, great. I think that should get you unblocked
<smgoller> and i can ping the manual host as well
<smgoller> thank you so much. So should I be basing my work of the bundle there as well?
<stokachu> huh, charm build layer-nginx puts the built dir in ~/charms/trusty, whereas building my ghost charm places it in ~/charms/build/ghost
<thedac> smgoller: no, the one you are working with is the state of the art
<smgoller> thedac: roger that.
<marcoceppi> cory_fu: think about what you just said.
<smgoller> but the internal network as gre should be fine, yes?
<thedac> that is correct
<smgoller> awesome.
<cory_fu> marcoceppi: pip install, got it.  ;)
<cory_fu> marcoceppi: What is the ppa, then?
<marcoceppi> https://launchpad.net/~tvansteenburgh/+archive/ubuntu/ppa
<smgoller> thedac: you rock sir, thank you so much for your time.
<thedac> smgoller: in that repo you can look at profiles/default for the commands run and use that as a crib sheet
<thedac> no problem
<smgoller> thedac: will do
<cory_fu> marcoceppi: Thank you!
<cory_fu> marcoceppi: Just for that, I'll go ahead and fix dhx for 2.0
<marcoceppi> cory_fu: what about charm build ;)
<cory_fu> marcoceppi: meh
<cory_fu> marcoceppi: I probably won't get dhx fixed tonight either, actually.
<bdx_> hows it going all?
<bdx_> can someone point me at an example of how to bootstrap to rackspace pls?
<stokachu> cory_fu: im thinking it's something with the apt layer, i can't get my nginx layer to deploy either
<cargill> hi, is there an example how to use charmhelpers.core.services.helpers.TemplateCallback or an equivalent?
<bdx_> stokachu: one thing that got me hung up with layer apt when included by a bottom layer, is that the configs specified in config.yaml for the bottom layer weren't making it into the built artifact
<stokachu> hmm
<bdx_> stokachu: I had to add them to the top layer to get them to persist through to the built charm
<stokachu> ugh
 * stokachu sad
<bdx_> was that it?
<stokachu> bdx_: no lol
<stokachu> it pulls the package from my layer.yaml but then sits in this loop checking ensure_package_status
<stokachu> nothing ever finishes after the apt layer does its thing
<bdx_> stokachu: I modeled layer-nginx-passenger after your nginx layer, it uses layer-apt -> https://github.com/jamesbeedy/layer-nginx-passenger
<bdx_> stokachu: are you using it differently than I?
<stokachu> bdx_: have you done a charm build/juju deploy recently?
<bdx_> errr, not in the last day or so
<stokachu> nah i just built the nginx layer and tried to deploy it locally
<bdx_> ahh changes in layer-apt then
<stokachu> going to download yours and try it
<bdx_> layer apt hasn't changed
<stokachu> bdx_: yea so im not sure whats going on
<bdx_> this is how im using it -> https://github.com/jamesbeedy/layer-nginx-passenger/blob/master/reactive/nginx_passenger.py#L22,L24
<bdx_> not sure if that is correct or not, but it works
<stokachu> bdx_: deploying your layer now to see what happens
<stokachu> bdx_: yours fails too
<stokachu> http://paste.ubuntu.com/23204830/
 * stokachu super sad
<cholcombe> anyone remember the cmd to get the reactive states that are set when debugging?
<smgoller> thedac: launched an instance and was able to ssh into it just fine.
<thedac> fantastic!
<smgoller> thedac: so the host can't really resolve anything, including its own hostname. I'm guessing my options are either define a DNS server to use when I create the internal network, or maybe install designate and designate-bind to get route53-like functionality?
<smgoller> s/DNS server/external DNS server/
<thedac> smgoller: yes, when you define the internal network set a DNS server. As long as it can route there that will work.
<smgoller> would designate/designate-bind fulfill that as well?
<thedac> designate is a whole other ballgame. More if you want to serve DNS as a service
<smgoller> ok
<smgoller> thedac: it seems like if I set designate up and link it to nova-compute, then when instances came up, their names would resolve, plus they'd be able to resolve things on the internet normally? Like, I have an instance called "xenial-test". It believes that is its hostname, but it doesn't resolve to anything. designate seems like it would allow that to work.
<smgoller> "out of the box" that is
<bdx_> stokachu: mine fails similarly to yours?
<thedac> smgoller: I am not going to stop you from using designate. It is a good solution but it is adding complexity to a fairly simple problem
<thedac> if you find that interesting, go for it
<smgoller> thedac: a very diplomatic answer. Thank you. :) I'll stick with defining DNS.
<smgoller> and leave designate for another day
<smgoller> thedac: one last question before I let you escape: Is it possible to configure the bundle such that the openstack console for instances works?
<smgoller> I see you can pass custom configuration to nova-compute, but it feels like the configuration for console needs knowledge of its own ip address for it to work, which I don't know if you can model in a bundle
<thedac> smgoller: it has been a while since I have played with that.
<smgoller> ok, then i won't worry about it.
<smgoller> thanks!
<thedac> smgoller: if the charm is missing anything, patches are welcome :)
<smgoller> oh for sure
<smgoller> If I come up with anything I'll contribute back, definitely.
<bdx_> stokachu: I figured it out to some extent
<bdx_> stokachu: add a series tag to your metadata.yaml
<bdx_> stokachu: xenial is what I used that worked
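For reference, the fix bdx_ describes is a series list in the top-level charm layer's metadata.yaml; a minimal sketch (the charm name is a placeholder):

```yaml
# metadata.yaml of the top-level charm layer (sketch)
name: ghost      # placeholder charm name
series:
  - xenial
```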
#juju 2016-09-20
<hatch> If I have a multi-series subordinate, should I be able to relate that subordinate to multiple applications of different series as long as they are in the supported series list?
<cory_fu> hatch: Unfortunately, no.  The series for a subordinate is set when it is deployed and it can then only relate to other applications with the same series
<hatch> cory_fu: np thanks for confirming
<rts-sander> Hello, I'm trying to get juju actions defined on my charm; but it says no actions defined: https://justpaste.it/yird
<rts-sander> did I miss something? I added actions.yml, have a command in the actions directory...
<rts-sander> Oh.. I've got it: actions.yml => actions.yaml :>
<user_____> hi
<user_____> hi! need your help
<user_____> juju add-machine takes forever
<user_____> : cloud-init-output.log shows "Setting up snapd (2.14.2~16.04) ..." (last line in file)
<user_____> how to fix this?
<rock> Hi. I have MAAS 1.9.4 on trusty. And 4 physical servers commissioned. Now I want to test our Openstack bundle on MAAS. On MAAS node I installed juju 2.0. And then I followed https://jujucharms.com/docs/2.0/clouds-maas. MAAS cloud juju model bootstrapping failed. Issue details: http://paste.openstack.org/show/581940/. Can anyone help me with this.
<cholcombe> we need some docs for debugging reactive charms
<cholcombe> everything i'm getting is stuff random people remember about how to poke at it
<marcoceppi> cholcombe: what are you trying to debug? I gave a whole lightning talk on this at the summit ;)
<marcoceppi> (I intend on turning that into a document)
<cholcombe> marcoceppi, my state isn't firing and i'm trying to figure out why
<marcoceppi> cholcombe: is it set? `charms.reactive get_states`
<cholcombe> marcoceppi, it's not.  so i'm working backwards
<cholcombe> marcoceppi, my interface should be setting a state i'm waiting for so i suspect one of the keys the interface is waiting on is None
<marcoceppi> cholcombe: did you write the interface?
<cholcombe> marcoceppi, i did.  can i breakpoint it?
<marcoceppi> you can with pdb, sure
<marcoceppi> cholcombe: link to it? I've gotten good at finding oddities in interface layers
<cholcombe> marcoceppi, any idea where reactive puts the interface file?
<marcoceppi> cholcombe: hooks/interfaces/*
<cholcombe> cool
<cholcombe> marcoceppi, https://github.com/cholcombe973/juju-interface-ceph-mds/blob/master/requires.py
<cholcombe> it worked until i added admin_key
<cholcombe> looks like hooks/interfaces doesn't exist
<marcoceppi> cholcombe: it's in hooks
<marcoceppi> could be relations
<marcoceppi> cholcombe: also,  admin_key  isn' tin the auto-accessors
<cholcombe> marcoceppi, heh helps to have more eyes on it doesn't it
<cholcombe> marcoceppi, thanks :D
<marcoceppi> cholcombe: np, it might be better to just use get_conv instead
<marcoceppi> which will get you all the relation data, regardless of auto-accessors
<cholcombe> marcoceppi, yeah i think i should switch to that
<cholcombe> too much magic going on
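For illustration, roughly what reading via the conversation buys over auto-accessors: any key the remote unit set on the relation is reachable, whether or not it is listed in auto_accessors. `Conversation` here is a minimal stand-in for the charms.reactive object (whose real accessor is `get_remote`), and the class and key names are placeholders, not cholcombe's actual interface code.

```python
class Conversation:
    """Minimal stand-in for a charms.reactive conversation object.

    The real one exposes get_remote() to read whatever the remote unit
    published on the relation wire; here the data is just a dict.
    """
    def __init__(self, remote_data):
        self._remote = remote_data

    def get_remote(self, key, default=None):
        return self._remote.get(key, default)


class CephMDSRequires:
    """Sketch of the requires side of the interface (names illustrative).

    Reading via the conversation reaches *any* key the remote set,
    whereas auto-accessors only cover fields explicitly listed in
    auto_accessors -- which is why admin_key came back None above.
    """
    def __init__(self, conversation):
        self._conv = conversation

    def conversation(self):
        return self._conv

    def admin_key(self):
        return self.conversation().get_remote('admin_key')
```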
<marcoceppi> *magic.gif*
<kjackal> kwmonroe: is this the revision we should be testing cs:bundle/hadoop-processing-9   ?
<kwmonroe> yup kjackal
<kjackal> kwmonroe: not good....
<kwmonroe> oh?
<kjackal> something went wrong, with ganglia
<kjackal> let me see
<kwmonroe> kjackal: error on the install hook with ganglia-node?
<kwmonroe> kjackal: i saw that.. looks like it's trying to run install before the charm is unpacked.. give it a couple minutes and it should work itself out
<kwmonroe> it's not really red unless it's red for > 5 minutes ;)
<kjackal> Indeed!!! It recovered!
<kwmonroe> and that, my friend, is ganglia.
<kjackal> Selfhealing!
<kwmonroe> :)
<cory_fu> kjackal, kwmonroe: I was able to deploy cs:hadoop-processing-9 on GCE and run smoke-test on both namenode and resourcemanager without issue
<kwmonroe> w00t.  thx cory_fu
<cory_fu> admcleod_: ^
<kjackal> kwmonroe: cory_fu: is the smoke-test doing a terasort?
<kjackal> I thought there was a separate action for terasort
<cory_fu> kjackal: smoke-test does a smaller terasort.  I can run the bigger one.  One min
<bdx> hows it going all? Can storage be provisioned via provider, and attached to an instance w/o also being mounted?
<bdx> using `juju storage`
<bdx> lets say I want to deploy the ubuntu charm and give it external storage, but not have the storage mount to anything
<bdx> then, subsequently configure and deploy the lxd charm over ubuntu
<kjackal> cory_fu: kwmonroe: admcleod_: Terasort action finished here as well. On canonistack
<kwmonroe> oh sweet baby carrots.  thanks kjackal cory_fu.  i kinda wish i would have dug into the broken env more, but we're 3 for 3 today.. so it's ready to ship ;)
<cory_fu> kwmonroe: How are you seeing the error manifest?  My second smoke-test on resourcemanager seems to be hung
<cory_fu> kwmonroe: Hrm.  I seem to have lost my NodeManager on 2 of 3 slaves, too
<kjackal> cory_fu: if you ssh to the slave node you should find only one java process (Datanode). There should be two java processes there. The Namenode process is missing
<cory_fu> Ok, I'm seeing that now
<cory_fu> No errors in the hadoop-yarn logs, though
<cory_fu> kwmonroe: 2016-09-20 15:55:25,059 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 9965 for container-id container_1474386123726_0003_01_000001: -1B of 2 GB physical memory used; -1B of 4.2 GB virtual memory used
<cory_fu> -1B??
<cory_fu> tvansteenburgh: You pushed the fix for the missing diff... Will it only apply if a new rev is added?
<tvansteenburgh> cory_fu: once the fix is deployed, you'll need to close and resubmit the review
<cory_fu> Oh.
<tvansteenburgh> cory_fu: and to be clear, i'm not in the process of deploying it
<cory_fu> tvansteenburgh: Also, +1 on removing that whitespace in the textarea.  :p
<cory_fu> tvansteenburgh: Fair enough
<tvansteenburgh> i knew you'd appreciate that
<kjackal> kwmonroe: cory_fu: any luck on the namenode issue? I am trying different configs but with no success
<cory_fu> kjackal: I've been focusing on InsightEdge and waiting for kwmonroe to get back.  I don't see anything useful in the logs, so I have no idea what's happening
<kjackal> kwmonroe: I might have a set of params that seem to make namenode stable (until it breaks again)
<cory_fu> kjackal: What were the params?
<kjackal> just a asec
<kjackal> cory_fu: kwmonroe: http://pastebin.ubuntu.com/23208123/ these go to the yarn-site.xml
<kjackal> I have been running terasort with a single slave 3-4 consecutive times
<cory_fu> kjackal: Do we have any idea why we're seeing these failures on the Bigtop charms and not (presumably) the vanilla Apache charms?
<kjackal> cory_fu: not yet
<kwmonroe> kjackal: cory_fu:  we should have shipped when we had the chance
<kwmonroe> and yes, nodemgr death is what i saw in yesterday's failure
<kwmonroe> kjackal: do those values go into yarn-site.xml on the resourcemanager, slaves, or both?
<kjackal> kwmonroe: I have them on the slave
<kwmonroe> also cory_fu kjackal, i wonder if the addition of ganglia-node and rsyslogd ate enough resources to cause the slaves to run out of memory.. that's one thing different between the bigtop and vanilla bundles
<kwmonroe> furthermore, didn't we have a card on the board to watchdog these procs?  if not, we should add one... or at least check for each proc before we report status.
<kwmonroe> it'd be nice to see at a glance that status says "ready (datanode)" and know that something went afoul with nodemanager
<kjackal> kwmonroe: cory_fu: nooo... it died again....after 5 terasorts...
<cory_fu> +1.  We could have a cron job that checks for the process and uses juju-run to call update-status if it goes away
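A sketch of that watchdog idea as a cron entry; the unit name, process pattern, and hook path are placeholders, and the exact juju-run invocation may differ by Juju version:

```
# /etc/cron.d/nodemanager-watchdog (illustrative only)
*/5 * * * * root pgrep -f NodeManager >/dev/null || juju-run slave/0 hooks/update-status
```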
<kjackal> yeap, this fire-and-forget policy we have for services could improve. kafka may also fail to start and we never report that (we have a card for this)
<cory_fu> kwmonroe, kjackal: There's an issue with the idea of relying on the base layer to set the series.  When doing charm-build, if the series isn't defined in the top-level metadata.yaml, it will default to trusty: https://github.com/juju/charm-tools/blob/master/charmtools/build/builder.py#L190
<kwmonroe> yup cory_fu.. the workaround is to be explicit with charm build --series foo
<cory_fu> Yeah, but that's a bit of a hassle
<kwmonroe> cory_fu: i'm not married to the base layer defining the series.. if we want to leave it up to each charm, i'm +1
<kwmonroe> but let's decide that now before i make final changes to push to bigtop
<cory_fu> I'd like to fix charm-build, but I don't see an easy way to do so
<kjackal> cory_fu: kwmonroe: if you have a single series then it will always place the charm under build-directory/trusty/mycharm
<cory_fu> Without a fix for charm-build, I don't think it's reasonable for our charms to not "just work" with `charm build` by default
<kjackal> even if the charm is for xenial
<cory_fu> kjackal: That's not true.  If you define a single series in the metadata.yaml, it will use that
<kwmonroe> yeah, pretty sure that ^^ is correct, but it has to be in the top layer metadata.yaml
<cory_fu> kjackal: Specifically, if you define *any* series in metadata.yaml, it will output to builds/.  Otherwise, it will output to trusty/
<kjackal> cory_fu: kwmonroe: I see, so I need to set the series on the top level metadata.yaml to move the output to builds/
<kjackal> that will not play well with the series on the base layer
<cory_fu> kwmonroe, kjackal: Looks like a (slightly hacky) work-around is to put an empty list for series in the top-level charm layer
<cory_fu> Ah, damnit.  Nevermind, that doesn't work, either
<kwmonroe> 2 Ms in dammit, goose
<cory_fu> Though, making that work-around work would be pretty easy
<kjackal> since we are in this subject ... I tried to remove the slave we have in the bundle (juju remove-application) and  add an older one from trusty. The trusty one never started the namenode, probably never related to resource manager
<cory_fu> kjackal: Looks like we weren't the only one to be hitting this: https://github.com/juju/charm-tools/issues/257
<kjackal> cory_fu: Oh.. I totally forgot to continue with this issue. Got consumed with the hadoop thing
<kjackal> trying now the bundle in trusty
<cory_fu> tvansteenburgh: Have you run in to issues with charms that use_venv and the python-apt package?
<cory_fu> marcoceppi: ^
<tvansteenburgh> cory_fu: no
<marcoceppi> cory_fu: I don't have a use_venv charm
<cory_fu> grr
<tvansteenburgh> cory_fu: i don't know what the issue actually is, but have you tried installing apt from pypi instead?
<cory_fu> tvansteenburgh: I don't actually need python-apt but it gets pulled in automatically whenever charmhelpers.fetch is imported.  The problem with installing from pypi is that it adds a lot of dependencies into the wheelhouse that I don't need, and it's already installed on the system anyway.
<cory_fu> tvansteenburgh: I remembered, though, that I could use include_system_packages, and that's working for me
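For the record, the option cory_fu settled on lives in the charm's layer.yaml; a sketch assuming the standard basic-layer options:

```yaml
# layer.yaml of the charm layer (sketch)
includes: ['layer:basic']
options:
  basic:
    use_venv: true
    include_system_packages: true  # lets the venv see system packages like python-apt
```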
<tvansteenburgh> cool
<kjackal> kwmonroe: cory_fu: I am testing the hadoop processing we have for truty. I see a different behavior there. The namenode does not die but we get some jobs failing
<kwmonroe> kjackal: that's odd.. are they long running jobs?  can you tell from the DN or RM logs if they are being killed for a particular reason (like memory exceeds threshold)?
<kjackal> kwmonroe: looking
<kwmonroe> also kjackal, the failure earlier with ganglia-node was a bug.  the charm uses '#!/usr/bin/python' which doesn't exist on xenial.  it "fixed" itself because rsyslog-forwarder-ha installs 'python'  (http://bazaar.launchpad.net/~charmers/charms/trusty/rsyslog-forwarder-ha/trunk/view/head:/hooks/install), so once rsyslog-forwarder finished its install hook, the subsequent ganglia install hook would succeed.
<kwmonroe> i'm pushing a similar install hook change for ganglia-node, so we shouldn't see that again.
<beisner> thedac, tvansteenburgh - i've been trying to figure out what changed to cause our juju-deployer + 1.25.6 machine placement to break.  it looks like 0.9.0 is causing our ci grief.  https://launchpad.net/~mojo-maintainers/+archive/ubuntu/ppa
<beisner> basically, bundles that used to deploy to 7 machines with all sorts of lxc placement now end up asking for 18 machines, while still placing some apps in containers.  very strange.
<beisner> is there a known issue?
<beisner> i've found that we got deployer 0.9.0 from the mojo ppa of all places
<tvansteenburgh> beisner: they got it from my ppa
<tvansteenburgh> https://launchpad.net/~tvansteenburgh/+archive/ubuntu/ppa
<tvansteenburgh> nothing has changed with placement in a quite a while
<tvansteenburgh> feel free to file a bug on juju-deployer though
<tvansteenburgh> https://bugs.launchpad.net/juju-deployer
<beisner> tvansteenburgh, at a glance, here's a bundle and the resultant model: http://pastebin.ubuntu.com/23205474/
<tvansteenburgh> beisner: sorry, i don't even have time to glance right now, can you put that paste in a bug?
<kjackal> kwmonroe: this is odd http://pastebin.ubuntu.com/23208612/
<beisner> tvansteenburgh, ok np.  first i need to revert to a working state and block 0.9.0 pkgs.  we're borked atm.
<kwmonroe> kjackal: http://stackoverflow.com/questions/31780985/hive-could-not-initialize-class-java-net-networkinterface.. sounds like our old friend "datanode ip is not reverse-resolvable".  what substrate are you on?
<kjackal> I am on canonistack
<kjackal> kwmonroe: ^
<kjackal> let me check the resolutions
<kwmonroe> kjackal: can you try adding an entry to your namenode and resourcemanager /etc/hosts files that includes your slave IP and `hostname -s`?
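i.e. something like the following in /etc/hosts on the namenode and resourcemanager, where the IP and short name are placeholders for the slave's address and its `hostname -s` output:

```
# /etc/hosts addition (illustrative values)
10.5.0.12   slave-0
```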
<kjackal> yeap just a sec
<kjackal> kwmonroe: it seems i have a ghost slave...
<kjackal> kwmonroe: I have 5 consecutive successful terasorts
<kwmonroe> kjackal: i got to 4 before my terasort hung.. http://imgur.com/a/maMAM  it's not dead yet, but i have little faith that it will return :/
<kjackal> kwmonroe: I am on the ninth successful now!
<kwmonroe> kjackal: my "Lost nodes" count is rising on my RM :(  i think i'm toast.
<kjackal> kwmonroe: 10 successful!
<kjackal> So here is what the setup looks like: started from hadoop-processing-6, updated the /etc/hosts to have reverse lookups
<kjackal> I believe what makes the difference is the trusty host :(
<kjackal> What I do not fully get is why do we still have the hosts issue, I thought we have a workaround for it.
<kjackal> kwmonroe: ^
<kwmonroe> right kjackal -- and especially on clouds that have proper ip/dns mapping, which aws and cstack have
<kjackal> kwmonroe: http://imgur.com/a/VUTSZ
<kwmonroe> however, kjackal, we have only ever worked around the NN->DN reverse ip issue with the hadoop datanode-ip-registration param set to allow non-reversible registration.. perhaps there's an issue with RM->NM that we're not considering.
<kjackal> Ok, kwmonroe, next step for me is to force the latest charms to deploy on trusty. The fact that the namenode is not crashing on trusty is promising
<kjackal> kwmonroe: (even if the jobs fail)
<beisner> thedac, tvansteenburgh - updated with examples and attachments.  it's definitely a thing.  https://bugs.launchpad.net/juju-deployer/+bug/1625797
<mup> Bug #1625797: (juju-deployer 0.9.0 + python-jujuclient 0.53.2 + juju 1.25.6) machine placement is broken <uosci> <juju-deployer:New> <mojo:New> <python-jujuclient:New> <https://launchpad.net/bugs/1625797>
<kwmonroe> kjackal: what timezone are you in this week?
<kjackal> kwmonroe: I am in DC
<kwmonroe> ah, very good kjackal.  you can keep working ;)
<kjackal> kwmonroe: -5 I think
<kwmonroe> yeah -- just as long as you're not back in Greece
<kwmonroe> cory_fu: fyi, deploying bigtop zeppelin also apt installs spark-core-1.5.1
<cory_fu> Makes sense
#juju 2016-09-21
<huhaoran> hey, charmers, my juju seems to get stuck, is there some way to restart juju itself?
<magicaltrout> I'VE FINALLY FOUND A DONALD TRUMP SUPPORTER!
<magicaltrout> my life is complete
<Mian> Hi, does anyone here know something about  the Xenial version of mongodb charm?  it's not available in the charm store right now
<Mian> is there a schedule or calendar as to when we will release mongodb charm on Xenial?   appreciate any advice/clue on it
<magicaltrout> there was some movement on it not too long ago
<magicaltrout> but clearly hasn't landed
<magicaltrout> it needs rewriting and updating
<magicaltrout> i think marcoceppi was hacking on it
<Mian> I'd like to deploy Openstack totally with Xenial release, Ceilometer has Xenial charm but it will be great if the underlying mongodb charm is available
<Mian> magicaltrout++
<magicaltrout> can't help you directly, but if you ask on the juju mailing list you'd get a response
<magicaltrout> it appears the lazy folk aren't around, but the mailing list gives them a chance for an offline response and also registers some interest in making it happen
<Mian> appreciate your advice :), that's what I need and it's helpful
<Mian> will inquire on the mailing list
<magicaltrout> no problem
<pragsmike> greets
<marcoceppi> o/ pragsmike
<marcoceppi> Mian: I have the start of a new Xenial MongoDB charm, I'm just unable to finish it
<pragsmike> morning.  or is it evening?
<marcoceppi> it's morning where I am
<pragsmike> back home yet?
<pragsmike> i'm just writing up my notes from the charmer's summit
<pragsmike> it's taking me days to unpack all that
<Mian> marcoceppi:  thanks for this update, it's great to know that we're working on it
<Mian> marcoceppi:  A customer is trying to deploy with MongoDB of Xenial series in their lab, it would be even better if we had an estimate of when the Xenial MongoDB charm might be released :)
<marcoceppi> Mian: well, I have 0 time to work on it, despite my desire to build a better mongodb charm. If someone wants to push it the rest of the way, it's on GH https://github.com/marcoceppi/layer-mongodb
<Mian> marcoceppi: I can understand it :)
<Mian> I guess encouraging the customer to get involved in tinkering with it in their lab might be a win-win strategy if they are desiring it
<Mian> marcoceppi: thanks very much for this update
<junaidali> has the set-config option changed in juju 2.0rc1? I'm not able to set config using the juju set-config command
<junaidali> also juju get-config is not working
<junaidali> got it.. the command is now  $ juju config
<pragsmike> junaidali: reading the scrollback it looks like you were having the same problem with container networking that i am
<junaidali> yes pragsmike, rc1 has the fix.
<pragsmike> hey, released 4 hours ago!  I'm going to go try it!
<pragsmike> hot damn! The openstack bundle is deploying with containers on the correct network, for the first time on my NUC cluster!  Thanks to all!
<magicaltrout> boom!
<marcoceppi> pragsmike: nice!
<pragsmike> Now I can sleep :)
<pragsmike> I find that I have to go kick a container or two on each deploy, as they get stuck "pending", but logging into the container host and doing lxc stop / lxc start fixes that
<fernari> Hello all! Quite new charmer here and one question came to mind while writing my own charms
<fernari> I am using reactive charms and building my custom software and it's dependencies with juju
<fernari> one key dependency is consul and I've managed to build base layer which configures consul and redis cluster correctly
<fernari> then i'm using that layer on my actual charm and that works correctly, let's call that "control-software"
<fernari> Now I have a problem as "client-software" needs to join the consul cluster and I can't get the consul-layer's peer information out which I need to correctly join the existing cluster
<fernari> I'm trying to get the related units like this: cluster_rid = hookenv.relation_ids('control')
<fernari> that returns nothing on the client-charm and cluster:0 on the control-charm and after that I'm able to get ip-addresses of the peers
<marcoceppi> fernari: howdy! it might help if you pastebin (paste.ubuntu.com or gist.github.com) some of the metadata.yaml files for your control-software and client (redacted of senstaive information)
<fernari> marcoceppi: cheers mate! http://paste.ubuntu.com/23211107/
<fernari> I think that the problem is that I am building the peer-relationship on the layer below the "control-plane"
<marcoceppi> fernari: so, peers, as I imagine you've found, is just for the units deployed in your charm. So if you have three units of control-plane, they'll all talk to each other
<marcoceppi> fernari: you'll also need to "provide" the raft interface, so it can be connected to worker/client
<fernari> yep
<marcoceppi> and in doing so, you can publish on the relation wire the data you need for it to join the cluster
<marcoceppi> fernari: http://paste.ubuntu.com/23211109/
<marcoceppi> something like that, you can't use the relation name "control" again, since it's already declared as a peer, but if you can think of another name for the relation replace my placeholder with it
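Since marcoceppi's paste isn't reproduced here, a hedged sketch of the metadata.yaml shape he describes; the provides relation name is a placeholder, and the interface name follows fernari's pastebin discussion:

```yaml
# control-software metadata.yaml (sketch; names are placeholders)
peers:
  control:
    interface: raft     # units of this charm coordinating with each other
provides:
  cluster-join:         # must differ from the peer relation's name
    interface: raft     # client-software joins the cluster over this
```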
<fernari> Right, I'll try that!
<fernari> Learning curve for those relations is quite steep, but slowly getting forward :)
<marcoceppi> fernari: it's the most powerful aspect of juju, but also one of the hardest to wrap a head around
<marcoceppi> fernari: let us know if we can be of any additional assistance!
<fernari> sure thing :)
<kjackal> kwmonroe: cory_fu: The hadoop bundle deployed on trusty seems pretty stable http://imgur.com/a/amvac lets talk at the sync what are we going to do with it
<jamespage> marcoceppi, hey - could I request a release of charm-helpers?
<jamespage> I've pushed a few changes to support use of application-version across our openstack charms, and need to work on pulling the same feature into reactive charms
<rick_h_> jamespage: he was on site at a customier thing this morning so might be best to go email on that
<jamespage> rick_h_, ack
<jamespage> marcoceppi, alternatively I'd be happy to RM charm-helpers releases if you wanna give me perms
<geetha> Hi, `juju attach` command is taking a long time to upload resources on a s390x machine. Can anybody please suggest what would be the issue here?
<kwmonroe> hi geetha -- how large is the resource?
<rick_h_> geetha: just the normal networking type issues one might see tbh.
<geetha> kwmonroe:Resource size is 1.5 G. But in x86 machine it's not taking that much time.
<junaidali> Hey guys, how can we set an install_key in juju. I want to pass a GPG-KEY. I've tried "juju set <charm-name> install_keys="$(cat GPG-KEY)"", also tried using a yaml but erroring out " yaml.scanner.ScannerError: mapping values are not allowed here"
<rick_h_> geetha: can/have you tested the network between the different machines? maybe checking mb/s on a curl or rsync or something?
<junaidali> I'm able to manually add the GPG-key using apt-key add command
<junaidali> on the node
<rick_h_> junaidali: what charm is this?
<junaidali> Hey rick_h_, its a plumgrid-charm http://bazaar.launchpad.net/~junaidali/charms/trusty/plumgrid-director/trunk/files
<junaidali> install_key config is available in charmhelpers
<junaidali> so instead of writing my own code, i'm using charmhelper to parse and add the key
<junaidali> rick_h_, fyi PLUMgrid is an SDN solution provider
<rick_h_> junaidali: hmm, yea so it's a string, not sure what the command turns into with newlines/etc
<geetha> rick_h_: I kept resources on the same host machine.
<geetha> EX: juju attach ibm-was-base ibm_was_base_installer=/root/repo/WAS_BASE/was.repo.9000.base.zip
<rick_h_> geetha: ? the client is on the same machine as the controller?
<junaidali> this is what 'juju get' outputs for install_keys when i pass the value using a yaml file: http://paste.ubuntu.com/23211867/
<kjackal> kwmonroe: how did you find out about this healthcheck resource manager is doing on the namenode? Is it on the RM's logs?
<junaidali> rick_h_: this is when i  pass the value using the actual GPG key file ' juju set <charm> install_keys="$(cat GPG-KEY)"' http://paste.ubuntu.com/23211872/
<kjackal> kwmonroe: I was thinking about the other thing that you said about not timing the start of the service correctly. That should affect trusty as well, but since I do not see it now, perhaps there is some timing issue there that is not always present
<rick_h_> beisner: do you recall who worked on that charm and any idea how to package that up for the key to go through there? ^
<kwmonroe> kjackal: i learned about it here: http://johnjianfang.blogspot.com/2014/10/unhealthy-node-in-hadoop-two.html  and then i saw in the nodemanager logs stuff like "received SIGTERM, shutting down", which seems to be what happens when the RM detects that the NM is unhealthy.
<kjackal> kwmonroe: thanks, interesting
<kwmonroe> kjackal: and "unhealthy" means it either failed to respond to a heartbeat, or it violated resource constraints (like "using 2.6GB of 2.1GB vmem", which i also saw in the NM logs)
<kjackal> kwmonroe: Ah I see where you are going with that
<kwmonroe> kjackal: so i hypothesized that either the network connectivity was failing the heartbeat, or our yarn.nodemanager.vmem-check-enabled: false setting was not taking effect when the NM started up (hence failing the resource constraint)
<geetha> rick_h_, We are using local lxd provider.
<kwmonroe> kjackal: both of which seem to be resolved by rebooting the slave
<kwmonroe> geetha: what does 'juju version' say?
<geetha> kwmonroe: Tried with beta-15, beta-18 and now using juju 2.0-rc1.
<kwmonroe> ok geetha, but what does the actual output return?  for example, mine is:
<kwmonroe> $ juju version
<kwmonroe> 2.0-beta18-xenial-amd64
<geetha> It's 2.0-rc1-xenial-s390x
<kwmonroe> cool geetha -- i just wanted to double check the series and arch were xenial-s390x
<beisner> hi rick_h_ /me searches backscroll for context of 'the charm'
<geetha> ok kevin :)
<kwmonroe> geetha: i'm not sure what would be causing a significant slowdown on s390 vs x64.  1.5GB is a really large resource though.. how long has the 'juju attach' been running?
<beisner> rick_h_, i can't tell where the code repo is from the cs: charm pages for plumgrid.  hence, not able to view authors/contributors.  sorry i don't have more info on that.
<kwmonroe> hey dooferlad -- you had the good insights into resources being slow prior to beta-12 (https://bugs.launchpad.net/juju/+bug/1594924).  can you think of any reason why geetha's 1.5GB resource would take significantly longer to attach on s390x vs x86?  (both using 2.0-rc1 in lxd)
<mup> Bug #1594924: resource-get is painfully slow <2.0> <resources> <juju:Fix Released by dooferlad> <https://launchpad.net/bugs/1594924>
<geetha> I have 5 resources to be attached as part of the WAS deployment; 1.5GB is the largest one. It's taking more than 15 min, and the other resources are around 1 GB. In total it's taking around 1 hr for deployment.
<kwmonroe> geetha: and how long does the same deploy typically take on x86?
<geetha> On x86, I have tried with beta-18 and it's taking around 20 min for deployment.
<junaidali> Hi beisner: the value from install_key is actually handled by charmhelpers. But passing a value gives an error, whether setting it using a yaml file or using the key file itself. It seems like either there is another way of setting the value, or charmhelpers should have a better way of handling this.
<junaidali> for a multi-line value, i usually use a yaml file to set a config but that isn't working in the present case.
<kwmonroe> gotcha geetha.. so 1.5GB in 15 minutes is roughly 13Mbps.  perhaps the s390x is not tuned very well to xfer that much data over the virtual network or to dasd.  can you benchmark the s390x as rick_h_ suggested?  perhaps try to do a normal scp of the local resource to the controller's /tmp directory.  you can find the controller ip with "juju show-status -m controller", and then "scp </path/to/resource.tgz> ubuntu@<controller-ip>/tmp"
<beisner> junaidali, unfortunately, i'm not familiar with those specific charms or using that config option with them.
<junaidali> thanks beisner, np. I will try again to figure out a way.
<kwmonroe> geetha: forgot a colon in that scp.. it would be something like "scp resource.tgz ubuntu@1.2.3.4:/tmp"
<beisner> junaidali, i see install_keys is a string type option, so it should be usable in a bundle yaml by passing a string value.  what that string value should be, i'm not sure.
<junaidali> beisner, key usually looks like this one http://paste.ubuntu.com/23212091/    but fails at this line in charmhelpers http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/__init__.py#L120
<geetha> ok kevin, let me try that.
<beisner> junaidali, so are you saying this doesn't work? http://paste.ubuntu.com/23211872/
<junaidali> yes, that was the juju get command output
<beisner> junaidali, is there a specific error you're seeing?
<junaidali> let me share you the output please
<junaidali> http://paste.ubuntu.com/23212121/
<junaidali> beisner: http://paste.ubuntu.com/23212121/
<thedac> This feels like a quoting problem
<thedac> stub: if you are around do you know the proper way to send a GPG key for charmhelpers.fetch.configure_sources?
<thedac> the yaml load is choking on the ':' in Version: GnuPG v1.4.11 (GNU/Linux)
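As thedac notes, the scanner error comes from the bare `Version: GnuPG v1.4.11 (GNU/Linux)` line being parsed as a YAML mapping. One way around it is to emit the key as a YAML literal block scalar (`|`), so colons inside the key body stay plain text. A minimal sketch; the charm and option names are taken from the log above, and the helper function is hypothetical:

```python
# Sketch: wrap a multi-line GPG key in a YAML literal block scalar ("|")
# so colons inside the key body are treated as text, not as mappings.
def yaml_block_scalar(text, indent="    "):
    lines = text.strip().splitlines()
    return "|\n" + "\n".join(indent + line for line in lines)

key = """-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.11 (GNU/Linux)

...
-----END PGP PUBLIC KEY BLOCK-----"""

snippet = "plumgrid-director:\n  install_keys: " + yaml_block_scalar(key)
print(snippet)
```

The resulting file should then load cleanly when passed to `juju set` via `--config`, since every line of the key is indented under the block scalar.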
<thedac> junaidali: I can tell you add_source has had much more developer attention than configure_source. It might be a big ask but moving to add_source is probably the best option. http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/ubuntu.py#L212
<thedac> meaning using add_source directly rather than configure_sources.
<CorvetteZR1> hello.  i'm trying to do juju bootstrap --upload-tools using xenial image but get connection refused on port 22
<CorvetteZR1> google turned up some old posts with people having similar issues, but i haven't found any solution
<CorvetteZR1> i can see the container is running, but bootstrap can't auth using ssh key-auth.  any suggestions?
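A quick way to confirm the "connection refused" symptom independently of juju is a plain TCP probe against the container's SSH port (the address here is just an example; substitute whatever `lxc list` reports):

```python
import socket

# Minimal TCP probe: is the container's SSH port reachable at all?
def port_open(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example address; substitute the IP shown by `lxc list`.
print(port_open("192.168.0.41", 22))
```

If this returns False while the container shows RUNNING, the problem is networking (bridge config, firewall) rather than the juju tools themselves.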
<cory_fu> @kjackal Per your comment on https://github.com/juju-solutions/layer-apache-kafka/pull/13 I realized that I did it the other way for Apache Zeppelin, and I'm leaning toward changing Apache Kafka to have a single, arch-independent resource, since we expect it to work on most platforms.  If you want to deploy to a platform that requires a custom build for some reason, then you would have to attach it at deploy time.  Seem reasonable?
<cory_fu> kwmonroe: ^
<cory_fu> Downside is if we find a platform that we *do* want to support that always requires a custom build, but the upside is that we are more optimistic about platforms and open it up to deploy on, e.g., Power
<kjackal> ok cory_fu
<kjackal> cory_fu: sorry for the delay, I am in the scala world
<kwmonroe> kjackal: cory_fu and i chatted about layer-hadoop-client (https://github.com/juju-solutions/layer-hadoop-client/pull/10).  i think we've agreed that the silent flag is no longer needed, and layer-hadoop-client will *only* report status when deploying the 'hadoop-client' charm.  since you authored the 'silent' flag in the first place, care to weigh in?
<cory_fu> ha, I already weighed in by merging it.  :p  If you have a -1, we can talk about reverting it
<kjackal> cory_fu: kwmonroe: looks good! Thank you
<kwmonroe> lol cory_fu.. i think the only place we'd be exposed is if we had a charm that included layer-hadoop-client and did not set the 'silent' flag.  that charm might rely on hadoop-client to set the 'waiting for plugin'
<cory_fu> Except that it should now be optional anyway, so it's moot
<kwmonroe> your optimism inspires me
<kwmonroe> (and i was talking about plugin, not java, which is not optional for hadoop-client)
<cory_fu> :)
<cory_fu> Oh, the plugin
<cory_fu> Huh
<cory_fu> Yeah, that's probably going to bite us.
<kwmonroe> well, i did some scouring, and all the charms that include hadoop-client have something like this:  https://github.com/juju-solutions/layer-apache-flume-hdfs/blob/master/reactive/flume_hdfs.py#L8
<cory_fu> But I think those charms should be fixed.  I think status messages should really only be set in the charm layer
<kwmonroe> so they're doing their own blocking if/when hadoop isn't ready
<cory_fu> Well, that's nice
<kwmonroe> and when i say "i did some scouring", i mean i grepped the 5 charms on my laptop.
<kwmonroe> but i'm sure the other 50 are fine
<kwmonroe> optimism.
<theoeprator> struggling to bootstrap, log here: http://paste.ubuntu.com/23213182/
<surtin> anyone around that can explain why quickstart gives me an error that it can't find the default model, even though it's clearly there? running 16.04. just trying to install landscape-scalable
<surtin> just end up with juju-quickstart: error: environment land-test:admin@local/default not found
<theoeprator> bueller?
<surtin> and when i first ran the command i got juju-quickstart: error: error: flag provided but not defined: -e
<tvansteenburgh> surtin: you don't need quickstart, just `juju deploy landscape-scalable
<petevg> theoeprator: it looks like juju is having trouble talking to the controller -- see the error connecting to port 22 in the first few lines of your log. If you do "lxc list", what do you get in response?
<surtin> alright
<theoeprator> theoperator@hype:~$ lxc list
<theoeprator> +-------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> | NAME  |  STATE  |        IPV4         |                    IPV6                     |    TYPE    | SNAPSHOTS |
<theoeprator> +-------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> | admin | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0         |
<theoeprator> +-------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> | vpn   | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0         |
<theoeprator> +-------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> theoperator@hype:~$
<theoeprator> those are both containers I set up separately
<surtin> hmm nope, says the charm or bundle not found. guess the instructions on the page are outdated or something
<petevg> theoeprator: It looks like juju wasn't able to create the container. Are you running out of disk space? Do you have any interesting messages in /var/log/lxd/lxd.log?
<tvansteenburgh> surtin: which page?
<surtin>  https://help.landscape.canonical.com/LDS/JujuDeployment16.06
<tvansteenburgh> surtin: dpb1 is the landscape guy, he might be able to help. i'm not sure which bundle you should use
<tvansteenburgh> surtin: the only one i see in the store is this https://jujucharms.com/q/landscape?type=bundle
<surtin> yeah i saw that as well
<dpb1> surtin: http://askubuntu.com/search?q=deploy+landscape+bundle+andreas
<dpb1> blah
<dpb1> surtin: http://askubuntu.com/questions/549809/how-do-i-install-landscape-for-personal-use
<dpb1> that one
<dpb1> notice the higher rated answer.  talks about using juju as well.
<theoeprator> nothing interesting in the lxd log.  all just lvl=info stuff
<surtin> ok will try it out
<surtin> ty
<theoeprator> i donât think Iâm out of disk space, i donât have any quotas setup and this is a clean box with ~2TB free
<petevg> surtin: are you using juju 1, or juju 2? (juju --version should tell you).
<theoeprator> the last error on the bootstrap output is 2016-09-21 21:34:20 ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://192.168.0.1:8443/1.0: Unable to connect to: 192.168.0.1:8443
<theoeprator> whereâs it getting that 192.168.0.1 from?
<theoeprator> thatâs not the address of the host
<theoeprator> it appears to have the container running at one point, as it runs several apt commands inside it
<petevg> theoeprator: it looks like your lxd machines are setup to use your physical network, rather than using a private network. Is that something you setup when you setup the other lxd machines?
<petevg> We're moving beyond my grade level as far as juju and lxd internal go, but juju may be getting confused by that network config.
<theoeprator> sorry, wifi failed
<petevg> theoeprator: no worries. Did you see my last message? (I was puzzled to see lxd machines talking directly to what looks like your physical network, rather than using virtual ips, and asked whether that was something that you had setup when you setup the other containers.)
<theoeprator> yep, the containers are bridged to the host's network
<petevg> theoeprator: that might be what's breaking things. We're unfortunately moving beyond the realm of things that I know a lot about, but juju typically expects to be able to herd its machines about on a private/virtual network, and then selectively expose them to the real world.
<petevg> I could be very wrong, though ... you might try bootstrapping again, and watching "lxd list", to see if the machine gets created.
<petevg> theoeprator: another possibility is that you've setup your router to block port 22 -- do you have strict security rules setup for your local network?
<petevg> *lxc list, I mean.
<theoeprator> running bootstrap again, the container is up right now
<theoeprator> theoperator@hype:~$ lxc list
<theoeprator> +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> |     NAME      |  STATE  |        IPV4         |                    IPV6                     |    TYPE    | SNAPSHOTS |
<theoeprator> +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> | admin         | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0         |
<theoeprator> +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> | juju-f71b38-0 | RUNNING | 192.168.0.41 (eth0) | fd00:fc:8df3:eb62:216:3eff:fe20:8e53 (eth0) | PERSISTENT | 0         |
<theoeprator> +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> | vpn           | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0         |
<theoeprator> +---------------+---------+---------------------+---------------------------------------------+------------+-----------+
<theoeprator> theoperator@hype:~$
<theoeprator> now got the same error as before and the container is gone again
<blahdeblah> theoeprator: Please use pastebin.ubuntu.com for more than 2 lines
<theoeprator> my apologies
<blahdeblah> No problem - it just makes it a bit noisy for those monitoring a lot of channels
<petevg> theoeprator: hmmm ... my guess is that juju is having trouble with the network settings, but I'm not sure quite what. If I were troubleshooting, my next steps would be the ssh into the container, and check out the logs (check /var/log/juju).
<petevg> theoeprator: failing that, you might try posting to the mailing list -- someone who knows more than I do should get back to you.
<theoeprator> ok, thanks Pete
<petevg> you're welcome! Sorry that I didn't have an immediate fix for you.
<surtin> petevg: juju 2
<petevg> surtin: you def. want to avoid quickstart then. It's a juju 1 tool. Sounds like those instructions are in need of an update ...
<surtin> yeah seems that way
<magicaltrout> definitely going to get to play with some Juju stuff @ JPL and Darpa after a few meetings this week
<magicaltrout> might also get some buyin for JPL to extend marathon for LXC/LXD support, although that's a longer shot
<petevg> magicaltrout: awesome news :-)
#juju 2016-09-22
<valeech> Not sure if anyone is around. I blew away juju beta 18 since I need to try the fixes in rc1. I have bootstrapped the controller on a maas 2.0 instance. I am trying to enable-ha with 2 additional controller nodes. The nodes get deployed by maas but they never get started in juju. they just stay at pending forever.
<gennadiy> hi, we have private openstack cloud and we want to use different subnetworks for deployed software. is it possible to do it with juju 2.0 ?
<stub> thedac: https://github.com/stub42/layer-apt has an example
<zeestrat> Hey, what happened to juju set-model-config and get-model-config in rc1? I guess it's replaced by just juju config, however that seems to only work for applications.
<Laney> hi!
<Laney> What's the right way to re-run a relation-changed hook if there's no error state? (1.25)
<Laney> I'm trying to deploy a website as a subordinate of apache2 and I want to do some iterating on the relation-changed thingy
<Laney> ok, juju run works
<SimonKLB> anyone here could tell me how to wait for a hook to finish using amulet? right now I'm testing setting a new config value and want to wait for that to finish before doing anything else, sentry.wait() doesn't seem to do the trick
<RAJITH> Hello, while doing bootstrap I am getting ERROR failed to bootstrap model: cannot start bootstrap instance: Error calling 'lxd forkstart juju-a0744c-0 /var/lib/lxd/containers /var/log/lxd/juju-a0744c-0/lxc.conf': err='exit status 1'
<beisner> hi SimonKLB - the best way is to have your charm set juju status when it knows it's ready.   then your amulet test can wait/block until it sees that status.
<SimonKLB> beisner: i tried that too, the problem is that it doesn't have time to change to the "unready" status before i run the wait_for_status/wait_for_message function, so it thinks it's already done
<SimonKLB> beisner: i also have the opposite problem where i'm testing unrelate->relate and the relate complains about the relation still existing because the unrelate doesn't have time to complete. in that case i tried waiting for the status as well, but it seems like the actual remove-relation operation isn't completely done before the status is set
<SimonKLB> or at least the relation isn't cleaned up so that it's possible to add the same relation again
<beisner> SimonKLB, could there be a bug in the charm's status updating logic?  ie. is it unsetting the status immediately when things start to shift?
<SimonKLB> beisner: the status is set when the relation interface is no longer accessible, in this case: @when_not("identity-admin.available")
<SimonKLB> could that be too early, and if so, what should i look for instead?
<beisner> SimonKLB, in the openstack charms, we wait and check for all expected running processes on the units.  but that might not translate to every type of application.
<SimonKLB> beisner: could you show me an example?
<beisner> SimonKLB, the aodh charm is one reactive example.  https://github.com/openstack/charm-aodh/blob/master/src/reactive/aodh_handlers.py   which leverages a library: https://github.com/openstack/charms.openstack/blob/master/charms_openstack/charm.py#L768
<SimonKLB> beisner: right, so i guess the difference is that you're not really basing your statuses on charm relations but rather on the actual services running on the machine?
<beisner> SimonKLB, no not really. things like @reactive.when('shared-db.connected') trigger status to be reassessed
<SimonKLB> beisner: i don't seem to find a when_not('shared-db.connected'), how do you handle the case where the relation is lost?
<SimonKLB> or you want to change to a different database charm perhaps
<SimonKLB> isn't it always going to think it's connected after it's been connected once?
<beisner> SimonKLB, it looks like that charm doesn't handle that case.  probably because operationally it doesn't make sense to ever disconnect the service's database.
<beisner> SimonKLB, but i think the principle illustrated there is:  you should re-assess status and set it, in nearly, if not all events.
<SimonKLB> beisner: i guess not, perhaps im testing a case that actually isn't an issue, but i'm trying to fill the test requirement from the docs "Adding, removing, and re-adding a relation should work without error."
<SimonKLB> beisner: yea, I agree, i think im not really taking the actual services into regard as much as you guys do though, im more looking at the state of the charm rather than the state of the services
<SimonKLB> and that might be the issue
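The race SimonKLB describes (a plain wait succeeds immediately because the unit hasn't yet flipped to its intermediate "unready" status) can be avoided with a two-phase wait: first wait for the status to leave the target, then wait for it to come back. A generic sketch under that assumption, not tied to amulet's API:

```python
import time

def wait_for_transition(get_status, target, timeout=60, interval=0.01):
    """Wait until the status leaves `target` once, then returns to it.

    Avoids the race where a plain wait-for-status returns immediately
    because the unit has not yet entered its intermediate state.
    """
    deadline = time.time() + timeout
    left = False
    while time.time() < deadline:
        status = get_status()
        if not left:
            left = status != target          # phase 1: wait to leave target
        elif status == target:
            return True                      # phase 2: target reached again
        time.sleep(interval)
    return False

# Simulated unit status stream: active -> maintenance -> active
states = iter(["active", "active", "maintenance", "active"])
print(wait_for_transition(lambda: next(states, "active"), "active"))  # True
```

The same shape works with any status source, e.g. a callable that polls a sentry unit's workload status inside an amulet test.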
<aisrael> If a deployed unit has two network interfaces, how does Juju select which one is the public and which one is the private?
<jcastro> heya balloons
<jcastro> I'm having a problem with adding credentials in a snapped juju
<jcastro> actually, it all works, the question is semi-related
<jcastro> do I need to mess with any of the juju bits in ~/snap
<jcastro> or is .local/share/juju still the place to be?
<balloons> jcastro, it's in the snap
<jcastro> ok so let's say I add creds but then I want to manually mangle the file
<jcastro> I would search in ... ?
<balloons> jcastro, you would use juju cli commands ;-)
<balloons> jcastro, I'm not sure off the cuff exactly where it is actually.
<jcastro> right, but that's not what I am asking
<jcastro> I was just thinking of adding it to the docs
<jcastro> so like, yes, use the CLI obviously, but for reference we write config to foo, bar, baz, was my idea
<jcastro> balloons: hah, everyone is going to make fun of me for filing this: https://bugs.launchpad.net/juju-core/+bug/1626576
<mup> Bug #1626576: credential v. credentials is confusing <juju-core:New> <https://launchpad.net/bugs/1626576>
<jrwren> jcastro: I did the same thing with controller v. controllers yesterday, but I thought it was just because I was tired.
<rock_> Can't we remove published charm from store completely?
<jcastro> you'd be surprised how worthless I am with juju now without bash-completion
<lazyPower> rock_: You can set the ACL's of the charm as restricted, but i don't beleive there is a way to wholesale remove something, no.
<lazyPower> rock_: eg: charm revoke
<rock_> lazypower: oh. Thank you.
<rock_> Can anyone please provide me detailed info of  juju networking.
<rick_h_> rock_: what are you looking for in particular?
<rock_> rick_h_: I am looking for the basic juju networking model.
<rock_> rick_h_: how does juju manage networking inside and outside containers?
<jrwren> rock_: other than what you see here: https://jujucharms.com/docs/stable/network-spaces  I think the answer is that it doesn't. It all depends on the provider. (aws, gce, azure, etc)
<beisner> rick_h_, can you point me back to that bug about the home url not rendering on the cs side bar?
<beisner> search foo is failing me ;-)
<rock_> jrwren: ok. thank you.
<balloons> jcastro, ~/snap/juju/*/.local/share/juju
<jcastro> so ... ~/snap/juju/current/.local/share/juju should always just work then I figure
<balloons> jcastro, I wouldn't make fun of you
<balloons> jcastro, I don't seem to have current myself, so no..
<balloons> it's a bit odd actually
<jcastro> huh, none of mine do either
<jcastro> did something change?
<pweston> hello, does any one have a clear link for setting up juju to work with a pre-existing openstack
<pweston> the simplestreams are killing me
<hml> hi! i just bootstrapped juju2.0 to an private openstack cloud - the controller instance appears to be working okay, but when i deploy a charm, i end up with a unbuntu instance with no juju tools on it.  anyone else seen this?  know what to do?
<rick_h_> hml: hmm, is there internet access to get the tools? is there anything in the debug-log on the controller (juju switch controller && juju debug-log)
<hml> yes - i did a wget of a outside web page onthe charm instance
<hml> didnât see anything in the debug-log
<rick_h_> hml: maybe check the log on the unit then. look for logs in /var/log/juju/unit*****
<hml> there is no /var/log/juju on the unit :-)
<hml> nor a /var/lib/juju
<rick_h_> hml: oh hmm, makes sense if you never got the agents installed.
<hml> rick_h_: i went thru some debugging yesterday with thedac, he was thinking the tools didn't get installed and sent me here
<rick_h_> hml: so how about juju logs in that directory on the controller unit
<hml> rick_h_: let me double check
<hml> rick_h_: i found the place in the machine-0.log where the new charm was deployed
<hml> rick_h_: nothing is jumping out as a big error etc - is there a better log file to look at?
<kwmonroe> hey #juju.. shilpa asked about an nginx charm, and i could have sworn we had a recommended version in the store... but i can't find it.  does such a thing exist?
<kwmonroe> i only see nginx-passenger when searching (https://jujucharms.com/q/nginx), though i do see 10 other community charms when i deliberately 404:  https://jujucharms.com/nginx/
<rick_h_> hml: no, maybe try debug-log with --replay to get everything?
<kjackal> Is there a way to tell a controller (local in my case) to setup a IPv4 network instead of IPv6?
<hml> rick_h_: looking
<hml> rick_h_: this has happened 2 of 2 times so far - if i deploy another charm - what would be the best way to watch for errors as it goes?
<lazyPower> kwmonroe: no nginx charm itself, we have layernginx though
<lazyPower> (hours later)
<lazyPower> https://github.com/battlemidget/layer-nginx  just fyi
<kwmonroe> gratis lazyPower
<kwmonroe> i think that url is https://github.com/battlemidget/juju-layer-nginx
<petevg> cory_fu: do you have time to jump into the hangout? Zookeeper is misbehaving, and I'd like a second set of eyes on it.
<cory_fu> Sure
<cory_fu> Give me 1 sec
<cory_fu> petevg: I'm in dbd
<lazyPower> sure, whichever doesn't 404 :P
<arosales> mbruzek: does the latest kubernetes charm set workload version?
<mbruzek> arosales: you bet
<arosales> or does anyone have a charm handy that sets workload version
<arosales> ah ok
<arosales> mbruzek: thanks
<lazyPower> arosales: several
<lazyPower> :D
<lazyPower> https://jujucharms.com/u/lazypower/canonical-kubernetes
<mbruzek> https://github.com/juju-solutions/kubernetes/blob/master-node-split/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L121
<lazyPower> every charm in this bundle thats not a beats charm sets workload version
<lazyPower> (oh and kibana because kibana)
<arosales> lazyPower: thanks
<arosales> and mbruzek thanks for the line ref
<mbruzek> arosales: lazyPower did all the work, I just found the line he added
 * lazyPower flexes
<marcoceppi> arosales: the ubuntu charm sets workload version ;)
<arosales> marcoceppi: ah, that may be a simple enough charm to take a look at
<arosales> thanks marcoceppi
<arosales> man I did a charm helper sync and now "get_platform" and now charmhelpers.osplatform module is not found
<arosales> . . . and no results at http://pythonhosted.org/charmhelpers/search.html?q=get_platform&check_keywords=yes&area=default#
<arosales> or http://pythonhosted.org/charmhelpers/search.html?q=platform&check_keywords=yes&area=default#
 * arosales sigh
<marcoceppi> arosales: wtf are you doing charmhelper-sync for
<marcoceppi> arosales: it's also probably because you need to change your charm-helpers.yaml definitions
<arosales> looking at adding workload status to the mariadb
<marcoceppi> arosales: i already did
<arosales> marcoceppi: which charm
<marcoceppi> mariadb
<marcoceppi> rev 5
 * arosales looking at cs:~bigdata-dev/mariadb
<marcoceppi> https://api.jujucharms.com/charmstore/v5/mariadb/archive/hooks/start
#juju 2016-09-23
<marcoceppi> well, don't use that one. Use the promulgated one ;)
<arosales> gah
<marcoceppi> either way, that's all you need to do
<marcoceppi> arosales: and I added this hook https://api.jujucharms.com/charmstore/v5/mariadb/archive/hooks/update-status
<arosales> marcoceppi: thanks
<marcoceppi> arosales: I'd be curious why there's a bigdata-dev version of a promulgated charm and if we can get those patches upstream
<arosales> I'll work with petevg to get his bigdata-dev/mariadb charm merged with the promulgated one
<arosales> marcoceppi: petevg  was working on it for xenial s390 support
<marcoceppi> cool, I'll make sure it's obvious where the source is
<arosales> I think he was trying to get in contact with the mariadb maintainer to push his updates
<marcoceppi> arosales: I'm in the mariadb-charmers team, and can help get fixes landed
<petevg> Yeah. The maintainer wasn't getting back to me.
<marcoceppi> well, I'm a maintainer (implicitly) and I'll listen to you
<marcoceppi> petevg: ...for a price ;)
<marcoceppi> charm teams ftw
<petevg> marcoceppi: the price is I don't wag my finger at you for ignoring me before :-p
<arosales> I think marcoceppi's currency is measured in volume
<marcoceppi> it's measured in ABV
<arosales> ah
<arosales> :-)
<arosales> marcoceppi: when is update-status hook ran?
<marcoceppi> arosales: every 5 mins
<marcoceppi> give or take the hooke queue
<arosales> ok
<arosales> marcoceppi: et all
<arosales> do you think mariadb-ghost bundle would be a better getting started bundle than wiki-simple?
<arosales> in regards to https://github.com/juju/docs/issues/1382
<arosales> or now that you have these easy hook to drop in, we could add them  to wiki-simple
<marcoceppi> petevg arosales I've updated the code hosting and bugs URL for the charm, pull requests welcome
<petevg> marcoceppi: awesome. I will put one together for you :-)
<marcoceppi> petevg: prepared to be ignored ;)
<marcoceppi> petevg: I care about mariadb becase I want to write ONE mysql-base layer that I can base the oracle-mysql and mariadb charm on
<marcoceppi> since they are like 95% identical
<marcoceppi> and I use mariadb in production
<petevg> Makes sense :-)
<marcoceppi> speaking of playing with fire, I just ran juju upgrade-charm on that environment
 * marcoceppi crosses fingers
<arosales> marcoceppi: in production
<arosales> good luck
<arosales> of course its in production first try, I forgot who I was chatting with
<marcoceppi> welp, didn't crash
<arosales> of course it didn't :-)
<huhaoran> pweston a pre-existing openstack: http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html, I am trying this
<huhaoran> pweston, sorry, I misunderstood you...
<venom3> Hello, we are trying to deploy the openstack-dashboard charm with the shared-db relation. It doesn't work due to a network address issue (I guess).
<venom3> In Percona the horizon user has  IP 10.40.0.X
<venom3> in local_settings.py there is another IP 10.40.103.X/24
<venom3> so horizon fails the authentication
<venom3> and the db is not populated.
<venom3> inside the horizon-hook i found
<venom3>         try:
<venom3>             # NOTE: try to use network spaces
<venom3>             host = network_get_primary_address('shared-db')
<venom3> but I wonder If network space is implemented for horizon
<venom3> Any suggestion?
<jamespage> tvansteenburgh, https://code.launchpad.net/~james-page/juju-deployer/bug-1625797/+merge/306595
<jamespage> I think that fixes beisner's issue - just testing now
<jamespage> tvansteenburgh, needed a tickle for 1 and 2 compat afaict
<jamespage> Machine vs machine
<jamespage> beisner, ok I think that fixes up 1.25 compat in juju-deployer for placements
<jamespage> 2.0 placement with v3 formats is still broken - the placements don't accommodate machine 0 properly
<jamespage> i.e. really we should add machine 0 prior to deploying any services, to ensure that it exists and we don't end up with machine 0 services being smudged with machine 1 services as it stands right now
<huhaoran> Hey, I have a silly question about Juju, :)
<huhaoran> Is Juju suitable in a VM?
<babbageclunk> huhaoran: Not a silly question! :)
<babbageclunk> huhaoran: I think the answer is yes, but also a bit "it depends".
<babbageclunk> But you can definitely run juju in a VM.
<huhaoran> babbageclunk, Thanks
<aisrael> huhaoran, Juju is great with containers. Juju 2 has native support for lxd.
<aisrael> It depends on your use case, but the juju controller can be run inside a VM but deploy machines to physical hardware. You can even co-locate units inside the vm.
<marcoceppi> jamespage: ugh, what a lame thing from the API rename
<jamespage> marcoceppi, there may be other impacts but I've not seen anything functionally broken with that fix in my testing
<marcoceppi> jamespage: it gets an initial +1 from me, I'll have tvansteenburgh look at it when he gets up
<jamespage> marcoceppi, that should at least get 1.25 support working again
<tvansteenburgh> jamespage, marcoceppi: looking now
<SimonKLB> is it possible to retrieve the charm config with amulet?
<rock_> Hi. I have a question. On Juju version 2.0-rc1-xenial-amd64, "$juju config" works but "$juju set-config" does not. But on Juju version 2.0-beta15-xenial-amd64, "$juju set-config" works but "$juju config" does not. So is Juju version 2.0-rc1-xenial-amd64 still under development?
<rick_h_> rock_: sorry, the documentation there needs updating
<rick_h_> set-config was replaced with just juju config
<rick_h_> so juju config key=value
<rock_> rick_h_: juju versions rc1, rc2 - does that mean they are development versions?
<rick_h_> rock_: no, rc1 is stable and we're doing bug fixes, but it's not development
<rick_h_> rock_: that change was done in beta 18 and was carried to rc1
<jrwren> rock_: rc = release candidate
<rock_> rick_h_/jrwren: Oh. I have one more question. Do all juju 1.x versions have only one configuration-setting command, i.e. "$juju service set"? And in all juju 2.x versions, can we use the "$juju config" command?
<rick_h_> rock_: the juju config command is only in 2.x
<rock_> rick_h_: Hi. That is exactly what I am asking: "$juju config" will be used in all 2.x versions, right? and "$juju service set" will work on all 1.x versions, right?
<rock_> rick_h_: "$juju config" is not working on 2.0-beta15-xenial-amd64. It was showing command not recognized.
<rock_> rick_h_: Actually, we developed a "cinder storage driver" charm. We are preparing the README.md file in a detailed manner. We tested our charm in different ways with different combinations.
<rock_> rick_h_: So I want to know things clearly.
<tvansteenburgh> rock_: juju config works in 2.0-beta18 and forward
<tvansteenburgh> rock_: so yes, 'juju service set' for juju1, and 'juju config' for juju2
<rock_> tvansteenburgh : Oh. Thanks.
<rock_> rick_h_/jrwren: Thank you
<rick_h_> rock_: sorry, was on the phone. Yes, what tvansteenburgh says. Thanks tim for the assist
<rock_> rick_h_: No problem. thank you
<lazyPower> SimonKLB: what you'll typically see is the test/bundle will define the configuration, and what we do is inspect the end-location file renderings, flags, etc. for the existence of those config values
<lazyPower> SimonKLB: but off the top of my head, i don't think amulet has any amenities to pull the charm config from the self.deployment object. I may be incorrect though
<autonomouse> hi, I posted this question to the internal juju channel without thinking about it before (so sorry about that whom it may concern), but I have a question about scopes that I'd really appreciate some help with. I put it here if anyone's interested in giving it a shot: http://askubuntu.com/questions/828732/can-anyone-explain-scopes-in-reactive-charms-to-me-please-juju-2-0
<SimonKLB> lazyPower: so usually you can access the config values through the charmhelpers package via the hookenv, but when running an amulet test that's going to be different, right?
<lazyPower> correct, amulet is an external thing, its poking at juju through the api and juju run
<SimonKLB> yea, perhaps there is a better way of solving it, but currently i have the ports available as config values, and i need to know them when testing if the service is up and running during the amulet test
<SimonKLB> perhaps i should put some default ports in a python module and override it with the charm config?
<SimonKLB> that way i could still test stuff with the default settings and still let the user choose which ports it should run on
<SimonKLB> but it feels a bit messy
<tvansteenburgh> SimonKLB: i'd just do a little subprocess wrapper around `juju config`
<tvansteenburgh> arguably that would be a handy addition to amulet
<tvansteenburgh> SimonKLB: or you can just set the ports in the amulet test so you know what they are
<SimonKLB> tvansteenburgh: yea, im just trying to refrain from having configuration options in multiple places
<tvansteenburgh> SimonKLB: but it's a test. it's reasonable to set some config and then act on those values
<SimonKLB> yea, that is the way im doing it right now, i was just curious if you could grab the default config values somehow and be rid of the extra configuration in the test
<tvansteenburgh> SimonKLB: well you can deploy without setting config and it'll use the defaults. to actually retrieve the defaults though, you'd need to use `juju config`
<SimonKLB> yea, it's the retrieving part im having trouble with, since i need some stuff from the config to access the charm - not sure if a subprocess wrapper or setting the config values i need "externally" in the test is cleaner
<tvansteenburgh> SimonKLB: if it were me i'd do the latter
<SimonKLB> tvansteenburgh: thanks, ill go with that then!
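A rough sketch of the subprocess wrapper tvansteenburgh suggests: shell out to `juju config <application> --format=json` and flatten the settings into a plain dict. The `"settings"`/`"value"` JSON layout is an assumption about juju 2.x output here; verify it against your juju version before relying on it.

```python
import json
import subprocess


def fetch_raw_config(application):
    """Shell out to `juju config` and return its JSON output as text."""
    return subprocess.check_output(
        ["juju", "config", application, "--format=json"]
    ).decode("utf-8")


def parse_config_values(raw_json):
    """Flatten the assumed {"settings": {name: {"value": ...}}} layout
    into a plain {name: value} dict."""
    data = json.loads(raw_json)
    return {name: option.get("value")
            for name, option in data.get("settings", {}).items()}


def charm_config(application):
    """Convenience wrapper: retrieve and flatten a charm's live config."""
    return parse_config_values(fetch_raw_config(application))
```

In an amulet test this keeps options like the ports defined in exactly one place, e.g. `port = charm_config("mycharm")["port"]` (charm and option names hypothetical).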
<balloons> Does anyone know how I might be able to remove a github key by name? juju remove-ssh-key blah@github or blah@github/12345 or gh:blah all fail
<petevg> cory_fu, kwmonroe, kjackal: got some PRs for you: https://github.com/juju-solutions/interface-zookeeper-quorum/pull/5 and https://github.com/juju-solutions/bigtop/pull/46
<petevg> Together, they give us rolling restart automagic in the new zookeeper charm.
<kwmonroe> good news cory_fu, bigtop zepp doesn't need hadoop-plugin.  it just so happens that bigtop defaults the spark interpreter to yarn-client (which is why i thought it needed the plugin in the first place).  we can override that with a proper spark master and make plugin optional.
<kwmonroe> petevg: do you have a build of the zookeeper charm that includes your PRs?
<petevg> kwmonroe: I caught an issue and I've been debugging it (I think I may have a fix), so I haven't pushed a build yet.
<petevg> kwmonroe: if this test that I'm running right now works out, though, I can push what I've got to bigdata-dev
<kwmonroe> oh good petevg: i was fixin to say that all the letters look right, but frankly, you and cory_fu were putting me to sleep with all that remote_ conversation babble.
<petevg> Heh. It's kind of sleep inducing. I think that my mega comment explaining it probably makes for good bedtime reading.
<petevg> The bug is actually in the relation stuff: if you remove-unit the Juju leader, it still tries to orchestrate while it is shutting down, and then it throws errors because it doesn't have the relation data any more.
<kwmonroe> hm, that sounds like a -departed vs -broken thing... as in, don't do conversation stuff during -broken because you don't have the relation data any more.
<petevg> Yep.
<petevg> kwmonroe: am I missing a design pattern that gets around it?
<petevg> Is it just @when_not('{relation-name}.broken')?
<kwmonroe> no petevg, i just checked peers.py on interface-zookeeper.. it's reacting to -departed (which is good).  if it were reacting to -broken, you'd be in trouble.
<petevg> Got it.
<hml> question: why would the containers be so small that charm deploy fails?  this just started happening with juju 2.0-rc1 and conjure-up 2.0 - openstack-novalxd - xenial
<petevg> kwmonroe: here's a charm for you: cs:~bigdata-dev/zookeeper-10
<kwmonroe> thanks petevg!
<petevg> np!
<hml> my containers appear to be using zfs this time and i didn't before
<kwmonroe> hml: is your zpool out of space (sudo zpool list)?
<kwmonroe> i seem to recall a juju ML post about the default size being small (like 10G)
<hml> kwmonroe: I took all of the defaults when i did the lxd init
<hml> kwmonroe: the biggest container is only 2G, i do remember something about 10G during the init but i don't have a log
<kwmonroe> hml: speculating here, but i think this might be your fix: https://github.com/lxc/lxd/pull/2364/files  lxd init used to do 10g by default, and now it does something like 20% of your disk or 100G (whichever is smaller)
<kwmonroe> tych0: any idea when that might make it into a lxd package? ^^
<hml> kwmonroe: just looked at the pool - it's maxed.  do you know if something changed recently?  i've been deploying and reinstalling a lot in the past month and haven't run into this.
<tych0> kwmonroe: i think it will be released into yakkety on tuesday
<tych0> i'm not sure about backporting to xenial
<hml> kwmonroe: is it easy to increase the zpool size?  i have lots of extra disk space to give it
<kwmonroe> hml: not sure if something changed recently, but if you weren't using zfs before and you are now, that might explain it.  i think bootstrap asks if you want to use file-backed or zfs-backed containers.
<hml> kwmonroe: hrm.. must have done something different.  :-/ which is better to use?  file-backed or zfs-backed?
<stokachu> hml: small?
<kwmonroe> hml: if by "better" you mean "new, shiny, fast", then zfs :)
<stokachu> hml: are you using ZFS?
<kwmonroe> hml: i don't know how to expand an existing zpool.. tych0, do you?
<hml> hml: at this point, i just want something that works.  :-)
<stokachu> you can't
<stokachu> you need to remove the lxc images
<stokachu> and rerun lxd init
<stokachu> make the zfs pool size bigger, i usually do like 100G
<hml> stokachu: i went back to a snapshot of my vm - i've been around this block too many times.  :-)
<stokachu> :D
<hml> stokachu: i don't care if it's new and shiny, i just need it to work.  this config isn't going into production or the like
<stokachu> hml: then just use the dir for the storage backend
<stokachu> you still need to re-run lxd init
<hml> stokachu: i'll give it a go.  thank you!
<stokachu> yw
<petevg> kwmonroe: here's a version of the zookeeper charm that should avoid some redundant restarts: cs:~bigdata-dev/zookeeper-11 (I also updated the PR with the code from this charm.)
<kwmonroe> ack petevg
<petevg> kwmonroe: the "redundant restarts" might be paranoia on my part. I refactored to prevent two similar routines from essentially executing the same code twice ... but it would only cause an extra restart if the timing was unfortunate. If you're using version 10, it's probably okay. Version 11 is just a little bit tidier.
<petevg> ... also, the issue would only happen after you'd removed a zookeeper unit, so you're unlikely to run into it in a demo.
<kwmonroe> frankly petevg, i'm not versed enough in zookeeper to know what effect zk restarts have on connected services.. like what if spark/kafka/namenode are asking zk something and it restarts?  do they ask again?  melt?  get the answer from /dev/random?  ear-regardless, eliminating extra restarts seems good.
<petevg> kwmonroe: we do the rolling restart so that zookeeper handles that well.
<kwmonroe> petevg: speaking of demos, you comfy enough to put zookeeper-11 in the spark-processing bundle?  or shall we stick to the older zk for strata?
<petevg> kwmonroe: hmmm ... I'm not less confident of zookeeper-11 than I am of zookeeper-9
<kwmonroe> heh, nm.  it's friday afternoon.  there's no way we vet zk-11 in time for strata.
<petevg> kwmonroe: also, to clarify, zookeeper will only restart when you add or remove nodes, and it has to restart. The "extra" restarts won't happen in the middle of normal use.
<petevg> kwmonroe: On the other hand, it might be nice to put zookeeper through a trial by fire ourselves, rather than wait for someone else to do it, post strata.
<cmars> mbruzek, lazyPower hi, i'm trying out the master-node-split bundle.. how do i get a client to connect to my cluster?
<cmars> awesome stuff, btw
<petevg> ... unless you've already finished a lot of testing on the bundle. (In which case, I'd stick with zookeeper-10 -- you're not going to run into the issue unless you're adding and removing a bunch of nodes on the fly.)
<lazyPower> cmars: mkdir -p ~/.kube &&  juju scp kubernetes-master/0:config ~/.kube/config && kubectl get nodes
<cmars> lazyPower, awesome, thanks
<lazyPower> cmars: that command is destructive if you're already connected to a kubernetes cluster
<lazyPower> so be mindful of blindly overwriting, it may have unintended consequences
<cmars> lazyPower, ack. all good, i'll tweak this & use the env var
<cmars> lazyPower, so i'm going to hack on the kubernetes-master layer. where can i get the resource it needs, if i deploy it locally?
<lazyPower> cmars: i wonder, can we fetch resources from the charm store?
<lazyPower> i dont think i've tried
<lazyPower> other than when juju deploy happens
<cmars> lazyPower, maybe, i don't know the incantations
<lazyPower> same
<lazyPower> 1 sec
<lazyPower> https://gist.github.com/935f03ce936bc221ca7cdcc4c974fbd7  - courtesy of mbruzek
<lazyPower> fetch a release tarball from github.com/kubernetes/kubernetes/releases
<lazyPower> we're good on deploy on latest beta10
<lazyPower> since beta3 was our first attempt?
<lazyPower> yeah, we're good in the range of betas from 3-10
<lazyPower> and 1.3.x
<cmars> lazyPower, ok, great!
<cmars> lazyPower, last question. if i set different relation data keys on each side of a relation, does that erase the keys i don't specify, or is it additive?
<lazyPower> wat
<cmars> using reactive framework
<lazyPower> different relation data keys... meaning?
<mbruzek> cmars: If it is a different key, then it adds, if you set the same key then it would replace
<cmars> mbruzek, ok, perfect
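A toy model of the merge behavior mbruzek describes (real hooks go through `relation-set` / `hookenv.relation_set`; this plain-dict version just illustrates that setting keys is additive, and only an identical key gets replaced):

```python
def relation_set(bag, **settings):
    """Merge new settings into a copy of the relation data bag,
    mimicking the additive semantics of juju relation data:
    unspecified keys are kept, identical keys are replaced."""
    updated = dict(bag)
    updated.update(settings)
    return updated

bag = relation_set({}, hostname="10.0.0.1")  # one side sets hostname
bag = relation_set(bag, port="5432")         # later set adds port, keeps hostname
bag = relation_set(bag, port="5433")         # same key: value replaced
```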
<lazyPower> that
<cmars> never done a two-way handshaking relation
<mbruzek> cmars: are you messing with our interface? What are you adding?
<lazyPower> haha
<cmars> mbruzek, i'm not changing your existing interfaces, i'm writing a new one
 * mbruzek is worried all of a sudden
<cmars> don't worry, it's an "experiment"
<mbruzek> Well good luck I hope it is a success
<cmars> thanks. fun stuff :)
<cmars> will def share
<cmars> mbruzek, lazyPower i've got master-worker-split running in lxd, with the worker in a manually added kvm machine
<cmars> \m/
<cmars> awesome, awesome
<lazyPower> \m/,
<mbruzek> sweet
<mbruzek> good experiment
<lazyPower> cmars: word of caution. etcd will choke if you put it on an ipv6 only host
<lazyPower> known caveat
<lazyPower> s/etcd/the etcd charm/
<cmars> lazyPower, interesting, ok. i disable ipv6 on my lxd bridge, but good to know
<cmars> i just like lxd for developing, so much faster
<lazyPower> i hear ya
<mbruzek> cmars: Have you run any docker workloads inside the lxd  yet? Can you confirm they work without adding a profile or something?
<cmars> mbruzek, i can't run the worker in lxd. that's why i manually added the kvm instance. i'm launching those with uvt-kvm and then using add-machine to attach them
<cmars> routing between the kvm and lxd subnets works
<mbruzek> oh yeah
<cmars> tried the docker profile, no luck
<mbruzek> cmars that is why we split the master/workers so we could use lxd
<mbruzek> for the control plane
<mbruzek> So in theory you could deploy --to lxd:<kvm machine>
<cmars> mbruzek, in the worker, i get some read-only filesystem errors for stuff under /proc/sys/...
<cmars> specifically, https://paste.ubuntu.com/23222264/
<cmars> that's after adding docker to the worker container's lxd profile and restarting it
<cmars> oh well..
<mbruzek> cmars: I meant the master could be deployed to the lxd on the worker yes?
<mbruzek> (where the worker is kvm)
<mbruzek> or whatever.
<cmars> ah.. sure could
#juju 2016-09-24
<samba35> how much ram is require for juju and maas server
#juju 2016-09-25
<samba35> do i have to create lxdbr0 manually to install/configure juju ?
 * D4RKS1D3 Hi
<D4RKS1D3> Hi, is it possible to log into a machine with a user and password via the terminal?
<thumper> no
<D4RKS1D3> the juju ssh 3 give me an error
<thumper> what error?
<D4RKS1D3> Can I only access this machine via certificate?
<thumper> ssh key only
<thumper> however you can add ssh keys
<thumper> providing you have admin access
<thumper> which version of juju?
<D4RKS1D3> ssh_exchange_identification: Connection closed by remote host
<D4RKS1D3> 1.25.3
<D4RKS1D3> but yesterday i can access without problems
<thumper> that is strange
<D4RKS1D3> I am trying to reboot the machine but the state persists
<thumper> you can take juju out of the loop by using ssh directly
<thumper> can you run status to get the ip address of the machine?
<D4RKS1D3> I tried that way too and i receive the same error
<thumper> which cloud substrate?
<D4RKS1D3> What does substrate mean in juju?
<thumper> lxc, aws, gce, azure, maas?
<D4RKS1D3> I have maas juju and on top openstack an odl
<thumper> is it just one machine that is having problems, or all of them?
<D4RKS1D3> Only the node 3
<D4RKS1D3> Is my controller node in openstack
<thumper> what changed on that machine since you could last access it?
<D4RKS1D3> Nothing, i only went to sleep
<D4RKS1D3> I am trying ssh to the node 0
<D4RKS1D3> and I can access
<D4RKS1D3> is strange
<thumper> yes, this is strange
<D4RKS1D3> Thanks thumper for your help, when i know what happened i will tell you
<thumper> cheers
#juju 2017-09-18
<fallenour> @stokachu @rick_h @catbus @dmitri-sh hey guys, I am looking to build a cluster of web servers, and Im trying to find a prepackaged loadbalancing apache2 with haproxy or nginx with or without mysql included. I know it exists but I cant find it. Can you lend me a hand?
<bdx> fallenour: there are many ways to skin that cat ;)
<rick_h> fallenour: heh yea bdx is the proxy loadbalancing master atm :)
<fallenour> @bdx Id like to have three apache2 servers, 2 loadbalancers and reverse proxies with Memcached, with master/slave databases. I know that means MySQL with either NGinx or HAProxy, using LXD for the web sites themselves on the Apache2 servers, with the HAProxy / Nginx boxes loadbalancing to the Apache2 server clusters, leveraging the MySQL master databases to do forced replication to the slave databases which will be used by the sites.
<fallenour> I know thats a tall order, but I do know that a charm exists already for it, I just cant find the damn thing.
<fallenour> effective work flow End User >>> Load Balancer 1/2 >>> Apache2 Server 1,2,3 >>> Relevant WebServer in LXD Container 1,2,3
<fallenour> Preferably a round robin basis if possible, it's the easiest, and allows me to bring 1 server down at a time for maintenance without outage time
<bdx> fallenour:  the quantity is arbitrary, just assume the services can scale, and focus on getting a bundle put together with just a singleton architecture
<fallenour> @bdx so build a custom charm then? Memcached + HAProxy + Apache2 + lxd?
<fallenour> ooo! Plus MySql Master/Slave?
<bdx> fallenour: a custom *bundle*
<bdx> yeah, but don't worry about the slave part yet
<fallenour> @bdx k. Do you think it would be well received? If so, I wouldn't mind publishing it. I think it'd be really cool to finally give something to the Web Devs. They've saved my ass on more than one hellfire moment
<bdx> fallenour: yeah ... so the idea is such that you release your charm or bundle to the charm store once it is  super solid/complete
<fallenour> @bdx LOOOL. BDX: "Yeah...." Me: Ohh gwd, I said something stupid again XD
<bdx> fallenour: see if you can find a bundle or two to use as an example
<bdx> you will see how there are sections to specify the charms, and relations, and machines
<bdx> fallenour: no worries, its not always so obvious, different ways of sharing things in different communities/ecosystems, it was more of a "how do I phrase this" "yea...."
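To make the sections bdx mentions concrete, here is a minimal hypothetical bundle.yaml of that shape: one section for the charms (applications), one for machines, one for relations. The charm names, option, placements, and relation endpoints are all illustrative assumptions, not a tested bundle.

```yaml
# Hypothetical singleton-architecture bundle sketch.
# Endpoint names and the peering_mode option are assumptions;
# check the actual charms before deploying.
applications:
  haproxy:
    charm: cs:haproxy
    num_units: 1
    options:
      peering_mode: active-active
    to: ["0"]
  apache2:
    charm: cs:apache2
    num_units: 1
    to: ["lxd:1"]
machines:
  "0":
    series: xenial
  "1":
    series: xenial
relations:
  - ["haproxy:reverseproxy", "apache2:website"]
```

Scaling up later is then just `juju add-unit` against the deployed applications, as discussed further down.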
<fallenour> @All Please be aware, this is misconfigured. I have a localized cache of all of the Ubuntu Repo, and it exceeds 110 GB of storage currently. Please be aware you will need to extend the threshold from 80 to 150 for stability reasons. Can someone please inform the developer? https://jujucharms.com/ubuntu-repository-cache/24
<fallenour> Other than that, its OUTSTANDING! Im glad someone made building it a lot easier, because the initial build was a huge pain in the ass.
<bdx> fallenour: there's a "submit a bug" button on the ^ web page that will aid you in submitting a bug for the charm
<fallenour> @bdx I never saw that LOL!
<bdx> yeah we should make it a big red button huh?
<fallenour> @bdx Please dont, Im red/blue color blind :(
<knobby> set irc.server.canonical.ssl_verify yes
<knobby> bah
<stormmore> o/ juju world
<fallenour> o/ @stormmore
<Catalys> @beisner Just did an upgrade to Pike, however I did come across a few issues such as the DB migration failing and having to trigger one manually. Is it safe to do that with "nova-manage db sync" afterwards though?
<rick_h> beisner: ^
<rick_h> oh nvm /me drinks more coffee
<el_tigro1> Is there a way to backup/restore a juju client for a single controller? The 'tar' method described in the documentation is an all or nothing approach:
<el_tigro1> https://jujucharms.com/docs/2.1/controllers-backup
<rick_h> el_tigro1: a single juju model?
<el_tigro1> The client for a single controller including all its models
<el_tigro1> At least that's what I think I mean
<el_tigro1> the info seems to be stored in these 4 files in ~/.local/share/juju: accounts.yaml, controllers,yaml, bootstrap-config.yaml, models.yaml
<rick_h> el_tigro1: oic, no. I'd love to have one. I've always wanted to do a juju export/import for the client to sync my machines up together
<el_tigro1> exactly
<el_tigro1> rick_h: Thanks
<rick_h> el_tigro1: I cheat instead and I copy/paste the creds and use a different user account for each machine so I can juju register xxxxx and have valid users  and cached data on each machine
<el_tigro1> rick_h: Thanks, that helps
<cory_fu> tinwood: https://github.com/juju-solutions/charms.reactive/pull/126
<fallenour> @rick_h hey whats the juju command to force remove a machine? I've tried juju remove-machine and remove-machine --force, but they still show up as down. They never built, so I want to remove them to reduce clutter in juju status output
<fallenour> nevermind, they randomly cleared up this time, im guessing the --force option got it this time
<fallenour> @rick_h @catbus @stokachu @bdx Having issues deploying more than one apache instance, keeps giving me an "application already exists error". I dont receive this error from other charms, or at least none that ive noticed. Any ideas on how to deploy my other two apache instances (apache2/1, apache2/2) on lxd containers on machines 8 and 9?
<bdx> fallenour: once you deploy the application, you have to use the `add-unit` command to scale it up
<bdx> fallenour: e.g. `juju deploy apache2; juju add-unit apache2;`
<fallenour> @bdx juju add-unit --to lxd:8 && juju add-unit --to lxd:9 ?
<bdx> fallenour: :thumbsup:
<fallenour> ooh! juju add-unit apache2 --to lxd*
<bdx> hmmm, not sure about ^
<fallenour> sorry, full context: juju add-unit apache2 --to lxd:8 && juju add-unit apache2 --to lxd:9
<bdx> ya, you can do that
<fallenour> Also, @BDX how do I cluster a pair of HAProxy systems? I tried to do that in juju gui, but it didnt work. Did I miss something?
<rick_h> fallenour: yea, that way your application is tracking config/etc in a common place and you're just scaling it up and up with more units
<bdx> fallenour: so ... I'm not sure haproxy really *clusters* as (to the extent of my knowledge) one haproxy instance isn't aware of the others
<fallenour> @bdx damn. Is there an alternative that you would recommend? Its very important that I have a clustered loadbalancer.
<bdx> fallenour: so you can have pools of haproxy servers
<fallenour> @bdx pointing to more than one apache2 server I take it? And then have dns load balancing im assuming off of the haproxy servers?
<bdx> like you can loadbalance across haproxy servers in different fashions
<bdx> yeah exactly
<bdx> I do a very similar thing in many configurations
<bdx> pool of haproxy servers -> application servers (nginx -> gunicorn)
<bdx> so with haproxy you can configure it to do active or passive loadbalancing
<bdx> fallenour: it sounds like you want active-active
<fallenour> So for websites I take it you take a similar approach, and drop it onto an apache2 or gunicorn box?
<bdx> fallenour: if this is so, you would need to configure the 'peering_mode' config for the haproxy charm
<fallenour> @bdx yea I want to be able to randomly reconfigure and reshape my traffic geographically
<fallenour> @bdx some of the projects I am supporting do VPN and encryption, and if I need to run their traffic and data out of the country immediately to keep them from jail time or execution, I need to be able to do it immediately
<fallenour> @bdx right now some countries are on the war path against crypto or vpn of any kind, regardless of legit or illegit use
<fallenour> clear
<bdx> aha
<bdx> yeah
<bdx> fallenour: what you are looking for is a SDN platform/controller
<bdx> fallenour: check this out https://www.opendaylight.org/
<fallenour> @bdx this thing looks insane o.o
<bdx> fallenour: it takes some extensive deep diving to come to terms with the full capability and feature set of what something like opendaylight can do ... don't feel bad if it doesn't all 'click' right off the bat
<bdx> but I feel like you get the big picture
<fallenour> @bdx yea it looks like and sounds like it can do L2 / L3 / DDNS centralized orchestration. Is that hitting close to the ballpark?
<fallenour> @bdx one of the apache2 instances hung. juju add-unit adds a specified unit, what deletes one?
<bdx> `juju remove-unit`
<bdx> fallenour: yeah you are on the right track
<fallenour> @bdx sweet. Next question, my machine numbers are super messed up right now, 0,1,2,3,5,6,7,8,10 is it possible to renumber them?
<bdx> the numbers are arbitrary ... you shouldn't concern yourself with the ordering
<bdx> correction *the machine number will always increment each time you deploy a machine*
<fallenour> @bdx ok. me personally I dont care, but some people do
<bdx> fallenour: there is no way to keep track of the things if they don't have unique ids
<bdx> if you want the numbering to start over you have to create a new model
<bdx> fallenour: I'll stress that you really shouldn't concern yourself with the ordering of the machine numbers
<bdx> because as you scale things up and down, you may add and remove units and machines
<fallenour> @bdx yea Ive already experienced some unique "features" lol
<bdx> fallenour: a common take on how to look at this is, try to think of your infrastructure as cattle not cows
<bdx> we don't care what number a cow has
<bdx> only what type of cow it is
<rick_h> Cattle not pets :p
<bdx> awww
<fallenour> @bdx yeap, and I'll send Ole Bessie to the slaughter if she mooes just a tad over my stereo
<bdx> rick_h: to the save
<bdx> ha
<bdx> haha
<fallenour> clear
<rick_h> Pets get names and love and attention. Cattle get herded and such...
<fallenour> @rick_h nah, cattle get love too. That way they taste better when you're eating them :D
<bdx> oh no
<bdx> :)
<xarses_> wagu
<xarses_> =)
<rick_h> fallenour: hah, "I don't buy steaks without a name tag on them. I want to know who's so yummy!"
<tychicus> ^^ agreed
<xarses_> Waiter: "Tonight we are serving Stephanie, a 4 yr old Texan from yummy tummy farms..."
<xarses_> ok, so I know that juju has the idea of being able to restrict access, but it doesn't appear to do any restrictions with a maas cloud. is there a way to make it interface with iptables or another driver?
<fallenour> @xarses_ you can deploy pfsense for starters, restrict a lot that way.
<xarses_> deploy pfsense?
<fallenour> @xarses_ Yeap, Check here:  https://www.pfsense.org/download/    You can also deploy OpenVPN: https://jujucharms.com/openvpn/6
<fallenour> @xarses_ you can deploy PfSense easily with Openstack, and then also use juju for OpenVPN
<bdx> fallenour: pfsense should sit above your openstack
<bdx> fallenour: you can deploy it *in* openstack but .....
<xarses_> I'm not sure we are on the same page, I want the maas cloud to require things like `expose` to have the ports exposed outside of the model
<fallenour> @bdx true, but if you deploy it, and then leverage it as a primary gateway, you can use it in openstack
<fallenour> @bdx its dirty, but it works.
<fallenour>  @xarses_ then you will most definitely need to do port forwarding on either a firewall or a modem, depending on if you have a bridged modem or not
<xarses_> for example, the amqp port is exposed on the network, but the service isn't exposed
<fallenour> @xarses_ So you want Internet USer >>> Port 22 >>> Your PFSense Firewall 22 >>> Port Forward SSH Service to "X":22
<fallenour> Is that correct?
<xarses_> no
<fallenour> @xarses_ so then you want an exposed port with no service behind it? So it detects as services active, but not actually there?
<fallenour> @xarses_ almost like a false flag honeypot?
<fallenour> @xarses_ For Instance, Internet USer >>> Port 22 >>> Your PFSense Firewall 22 >>> Port Forward to Service that doesnt exist, or is a FIFO Connection Completion Shell Process for Nagios Trigger Alert
<xarses_> I want juju to secure my maas cloud machines the same way it does for openstack cloud machines
<xarses_> well, it can't be the same, but I want it to do the iptables, or go tell it what service I want it to use
<fallenour> @xarses_ but maas doesn't do that, at least not to my knowledge. I think you are referring to Heat in Openstack, which is an inline proxy service, which does work, but MaaS is just a Metal as a Service software. All it can do is deploy ISOs or Yamls at best. It's more so the Yaml that secures your machine, not so much MaaS itself.
<xarses_> no I'm not referring to heat in openstack
<fallenour> @xarses_ are you referring to the security modules built into Yaml scripts deployed by MaaS?
<xarses_> I'm referring to juju using neutron security groups to enforce security between units to exposed ports
<fallenour> @xarses_ ahhh!
<fallenour> @xarses_ for something like that, you'll be much better off, and have much better granular control, if you put a protocol proxy inline, something like ThrottleProxy
<xarses_> https://github.com/mistakster/throttle-proxy ?
<xarses_> ya, I still think we aren't connecting here
<xarses_> when juju deploys a model on the openstack cloud, it creates a 100% restricted locked down security group
<xarses_> and then alters based on relations
<xarses_> and if the unit is exposed
<xarses_> it does 0 of that on the maas cloud, with bare machines, or lxd units
<xarses_> what can I hook in there to do something similar, i.e. iptables
<xarses_> or otherwise expose the data that made the similar decisions when deploying to the openstack cloud
<xarses_> preferably, I'd plugin Calico-Felix for it to update, but that seems quite far fetched
<zeestrat> Hey rick_h, I'm writing a controller charm and an accompanying agent charm that scales out and need a reality check on how the relations work. I made an interface that provides on the controller and requires on the agents, which works. Adding new agents also works as the joined/changed fires on both the controller and the new agent, however the other agents also need to do some things when a new agent joins, but they
<zeestrat> don't get any joined events from the controller or other agents that I can use. What's the design pattern for such a case? Am I missing something with the regular provide-request interface relations? Do I need to use the peer interface on the agents so they know about each other when scaling out, or is there something else?
<rick_h> zeestrat: sorry, I'm literally heading out the door to get family from the train station. I'll parse and respond when I get back
<zeestrat> rick_h: No worries. Thanks!
<fallenour> @bdx ok so this might be crazy, but what happens if I do this with haproxy: haproxy 0,1,2 (active-passive) >>> ha proxy 3,4,5 (active-active) >>> apache2 0,1,2
<fallenour> thoughts?
<fallenour> @bdx a loadbalanced roundrobin with failover of the loadbalancer at layer 1
<bdx> fallenour: yeah, nice .... it all depends on your use case, and what the best practice for accommodating that use case is
<bdx> fallenour: if you are proxying to a web application you are going to want a different setup than if you are using it as a network function or general routing utility
<bdx> that goes for the software and the hardware underneath
<bdx> fallenour: all that to say .... I'm sure there are plenty of haproxy/nginx based NFV in a box things out there
<xarses_> is there a `juju` way to move the bootstrap metadata cluster_F from wherever you started to somewhere sane on the controller after getting it bootstrapped?
<fallenour> is anyone else having issues getting gitlab to deploy? https://jujucharms.com/gitlab/precise/5
<fallenour> @bdx @rick_h "no matching agent binaries available"
<bdx> fallenour: https://jujucharms.com/u/spiculecharms/gitlab-server/8
<bdx> fallenour: the *supported* gitlab is not supported anymore, ^ that should work for you
<bdx> fallenour: sometimes, it's helpful to click the "show community results" button in the charm store to see the other options (especially when the charm you are looking at only supports < lts ubuntu)
<bdx> ^ supported gitlab charm* isn't supported anymore
<rick_h> bdx: fallenour yea, precise is out of support so there's no Juju agents for precise. We should pull that series I guess but there's extended support and such.
<bdx> rick_h: so contact the maintainers and ask them to contact the juju admins to take their charm out of the store?
<rick_h> bdx: well, for something like that I think we'd just not show/etc the series. precise-only charms yea should probably be yanked/deprecated in some way
<bdx> rick_h: ahh totally, like an underlying filter in the charmstore that will filter out < lts charms?
<bdx> or put them in a deprecated category or something
<rick_h> bdx: yea
<rick_h> zeestrat: so yea, the peer relationship is all about them knowing about each other
<rick_h> zeestrat: I think that's all you need
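rick_h's suggestion (a peer relation so agents learn about each other) is declared in the charm's metadata.yaml. A hedged sketch for zeestrat's agent charm; the charm, relation, and interface names here are hypothetical, not from the conversation:

```yaml
# metadata.yaml of the hypothetical agent charm: besides the `requires` side
# of the controller interface, a `peers` relation means every agent unit gets
# joined/changed events when another agent unit is added or removed.
requires:
  controller:
    interface: agent-control   # hypothetical interface name
peers:
  cluster:
    interface: agent-peer      # hypothetical; peer units see each other
```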
<zeestrat> rick_h: Alright. Figured so after going through most of the other charms and interfaces. Thanks a bunch.
<rick_h> zeestrat: np, let me know if there's something to play with. I'd love to see what you're up to :P
<zeestrat> rick_h: Will do :) Gotta sort out this peering and some more automated testing. Speaking of testing, what's the state of matrix?
<rick_h> zeestrat: hmm, not sure tbh. tvansteenburgh cory_fu is there any update on the latest on the testing train?
<tvansteenburgh> no one is working on matrix at the moment
<fallenour> @bdx @rick_h does that mean that anything trusty isn't going to work? IE: https://jujucharms.com/owncloud/trusty/4
<rick_h> fallenour: no, precise
<rick_h> fallenour: trusty should be ok for a little bit longer I think
<rick_h> fallenour: yea, trusty EOL is april 2019
<fallenour> @rick_h ok, I was just curious. It's been installing for quite some time now, and has been hung on allocating / waiting on machine (lxd machines), so I'm not sure why it's taking so long
<rick_h> fallenour: ah, so if it's a new series it'll have to d/l the trusty lxd image and such
<rick_h> fallenour: normally that's shown in status and the lxd logs I think
<fallenour> @rick_h I can't get the bad install of owncloud to uninstall. I've tried juju remove-unit owncloud as well as juju remove-application owncloud, and it still shows up as error on 7/lxd/8. What do I need to do to get rid of it to rebuild?
<rick_h> fallenour: what's the issue? It's in error state? I wonder if you can remove-machine --force a container
<fallenour> @rick_h owncloud didn't install for some reason. Hook failed install, but when I run remove-unit and remove-application to kill it and start over, it won't go away, and it says the application is still installed, even though I know I removed it
<fallenour> @rick_h @catbus @bdx is there a command to remove bundles? maybe that is my issue. I did initially install owncloud as a bundle.
<rick_h> fallenour: try the remove-machine --force on the container id
<fallenour> @rick_h HAZAAH!
<fallenour> @rick_h SUCCESS!!
<rick_h> fallenour: woot
<el_tigro1> According to the help page, you can use 'juju register <url>' instead of 'juju register <blob from add-user>'. I'm guessing this is to register an existing user/controller with a new client?
<el_tigro1> When I try it out I get this error: "ERROR unable to connect to API: x509: certificate signed by unknown authority"
<el_tigro1> Shouldn't juju be expecting an unknown certificate since the ca-cert is unique to the controller?
<thumper> I'm not sure about the register url sorry
<el_tigro1> essentially I just want to register a controller with another client without having to use 'add-user`. So that I can have different clients authenticating as the same user. Does that make sense or is it a bad idea? I guess I could always do it manually by editing the config files
<thumper> no it isn't a bad idea, and it is a deficiency in the current system
<el_tigro1> thumper: thanks for the clear and direct answer
<fallenour> @el_tigro1 have you tried using certutil to add it to approved CAs list?
<el_tigro1> fallenour: I haven't
<thumper> el_tigro1: we are actually looking at this behaviour at the moment with the plan to clean it up in the 2.4 cycle
<fallenour> @el_tigro1 do that. once you add it to the system as an approved CA, it should fix the issue. I've experienced similar issues before with FreeIPA
<el_tigro1> fallenour: thanks, I'll look into it
<fallenour> has anyone been able to get owncloud to install successfully?
<bdx> fallenour: I gotchu, check it
<bdx> fallenour: at a basic level, this is all it takes to create an owncloud charm https://github.com/jamesbeedy/layer-owncloud
<bdx> fallenour: here is a working owncloud charm -> `juju deploy cs:~jamesbeedy/owncloud-1`, https://jujucharms.com/u/jamesbeedy/owncloud/1
<bdx> `juju deploy cs:~jamesbeedy/owncloud-2`
<bdx> fallenour: with a tidbit of polish https://jujucharms.com/u/jamesbeedy/owncloud/4
#juju 2017-09-19
<fallenour> @bdx Number 2 is asking me to log in as a specific user (you?); it didn't work for me. As for number 4, it gave a 404 not found error
<bdx> fallenour: `juju deploy cs:~jamesbeedy/owncloud-4` ?
<fallenour> @bdx OOOOO!!!!
<fallenour> @bdx MAAYBE!
<fallenour> @bdx I will say that the charm page for it gave a 404 on the /4 page, so maybe an issue? I'm not sure
<bdx> hmm, thats odd
<bdx> fallenour: this https://jujucharms.com/u/jamesbeedy/owncloud/4 ?
<fallenour> @bdx I do know that if this works, Ive gotten cloud data, pictures are next, and then the FreeIPA box, and then I gotta turn on SSL on everything
<fallenour> @bdx that loaded, much appreciated my friend
<bdx> np
<fallenour> @bdx now let's just hope that DDNS with Afraid works well, and then HAProxy fine tuning, and then hopefully I can finally get some damn sleep. It's been 16 hour days for... what month is it?
<bdx> hah - yea
<bdx> fallenour: I should document in the readme, the owncloud charm exposes the http endpoint
<fallenour> @bdx yea, I was hoping for a simple juju expose owncloud && juju add-relation owncloud haproxy
<bdx> fallenour: so you could `juju deploy cs:~jamesbeedy/owncloud-4; juju deploy haproxy; juju relate haproxy owncloud;`
<bdx> to get the ssl termination/front end
<bdx> thats kindof the *basic* workflow as far as relating things to haproxy goes
<fallenour> @BDX YEAAA! That's what I'm talkin' bout, nice and easy. I'm hoping with ssl it'll be a smooth transition. FreeIPA lets me do PKI and generate my own CA infrastructure. I'm hoping it just accepts life and lets me do whatever I want without resistance.
<bdx> fallenour: the most simple way to get the haproxy charm to terminate ssl for you, is to deploy it similar to this https://gist.github.com/jamesbeedy/d587cbf048038fb274ef4cd55c4ee3dd
<bdx> fallenour: where the ssl_key, and ssl_cert are base64 encoded strings e.g. (`cat mycert.crt | base64; cat mykey.key | base64;`)
<bdx> fallenour: once you fill in the ssl_key and ssl_cert configs with your own cert/key you can put ^ in `haproxy.yaml`, then deploy haproxy with `juju deploy haproxy --config haproxy.yaml`
<bdx> fallenour: then when you `juju relate owncloud haproxy` haproxy will redirect to https and terminate the ssl for you
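The prep step bdx describes (base64-encoding the cert/key pair into a haproxy.yaml for `juju deploy haproxy --config haproxy.yaml`) can be sketched in a few lines. The `ssl_cert`/`ssl_key` option names come from the conversation and the linked gist; the helper function and file layout below are assumptions, so check your haproxy charm's actual config options.

```python
# Sketch: build a haproxy.yaml config file with base64-encoded cert and key,
# equivalent to `cat mycert.crt | base64; cat mykey.key | base64;` by hand.
import base64

def make_haproxy_config(cert_pem: str, key_pem: str) -> str:
    ssl_cert = base64.b64encode(cert_pem.encode()).decode()
    ssl_key = base64.b64encode(key_pem.encode()).decode()
    # Top-level key matches the application name used at deploy time.
    return (
        "haproxy:\n"
        f"  ssl_cert: {ssl_cert}\n"
        f"  ssl_key: {ssl_key}\n"
    )

if __name__ == "__main__":
    with open("mycert.crt") as c, open("mykey.key") as k:
        with open("haproxy.yaml", "w") as out:
            out.write(make_haproxy_config(c.read(), k.read()))
```

After writing haproxy.yaml, the deploy/relate steps are the ones bdx lists above.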
<bdx> if you run into issues just give a shout
<tnx> Question: is there a way to change cloud controller endpoint url after juju controller has been bootstrapped?
<tnx> I've got a testing environment where I happened to change maas dns settings some and that broke juju environment that was using old dns names...
<tnx> In this environment I could just wipe the controller, but out of interest, I would assume there is a way to update controller endpoint uri without wiping the controller and redoing bootstrap
<xarses_> can someone explain how your supposed to log in to a fresh keystone application?
<tnx> Well, jury-rigged it by adding additional dns names with: "maas admin dnsresources create name=@ domain=test.domain.tld ip_addresses=a.b.c.d"
<fallenour_> @bdx @rick_h @catbus It is Time o.o
 * rick_h ponders what that means?
<fallenour_> Wait! DDNS! and Then! It is Time o.o
<fallenour_> oh shit :(
<fallenour_> I totally forgot to FQDN everything in maas o.o
<fallenour_> So....if I randomly start changing hostnames all over the place in systems, is that gonna kill MaaS and its ability to communicate with the hardware?
<fallenour_> or will it ignore it, and continue to utilize IPMI?
<jacekn> hello. I'm dealing with a production incident, experiencing high load on our juju controller. It looks like: https://bugs.launchpad.net/juju/+bug/1703675
<mup> Bug #1703675: PingBatcher could gracefully handle "DuplicateKeyError" <mongodb> <presence> <retry> <juju:Triaged> <juju 2.2:Triaged> <https://launchpad.net/bugs/1703675>
<jacekn> is there any workaround?
<fallenour_> @jacekn you can build additional juju controllers to share the load across multiple models, or put them in a shared load configuration
<jacekn> fallenour_: we already have 3 controllers
<fallenour_> @jacekn add 3 more: https://jujucharms.com/docs/2.1/controllers-ha
<fallenour_> @jacekn I know it sounds crazy, but believe in the Juju, it works.
<jacekn> fallenour_: is there any official documentation on this? ISTR that last time I checked juju HA did not cause it to scale horizontally (mongo writes were still going through a single node)
<fallenour_> @jacekn "You cannot repair the cluster as outlined above if fewer than half of the original controllers remain available because the MongoDB replica set will not have the quorum necessary to elect a new master. You must restore from backups in this case."
<fallenour_> have you verified your original controllers in the cluster are available?
<jacekn> fallenour_: my problem is not to do with quorum, all 3 controllers are still available it's just that one has high load
<fallenour_> otherwise tear down your cluster, rebuild your cluster, resync the HA, that may fix your loadbalancing issue on your controllers
<fallenour_> @jacekn I know that, but what that tells me is that if only 1 of them has a high load, it's not load balancing across the controllers.
<fallenour_> @jacekn hang on, I found your issue
<fallenour_> "multiple redundant controllers are instantiated while the single active controller is defined as the master. Automatic failover occurs should the master lose connectivity."
<fallenour_> @jacekn I think this means it's not actually an "HA Cluster". It's more like a hot spare.
<fallenour_> its a feature?
<fallenour_> @jacekn a workaround would be to configure your systems to use a specified controller by DNS name, and then use HAProxy to load balance between the controllers that way. I'm not sure if that'll work though. @stokachu @catbus @bdx @rick_h Your thoughts? Is it possible to load balance between controllers via hostname?
<cory_fu> tinwood: You available?
<fallenour_> @cory_fu whats up?
<tinwood> cory_fu, omw
<cory_fu> fallenour_: Hey, what's up?
<BlackDex> Hello there. Is it possible for juju, if I have all the charms locally available, to install charms when fully offline (no connection to the internet)?
<BlackDex> I think that I need to have an apt-mirror for the ubuntu version I want, I also need a local mirror of the cloud repo
<BlackDex> but what i don't know is if and how it should be possible for the LXD images to be downloaded correctly
<fallenour> @blackdex make a localized repo copy
<fallenour> @blackdex I do localized repos due to my 6/1 connection at my current location. I carry servers over to a friend's house, download from his 1Gb connection, and then carry the servers back.
<fallenour> @blackdex There is a juju charm for a localized repo, please see here: https://jujucharms.com/docs/1.21/howto-offline-charms
<fallenour> correction: That is what you are looking for, there is also a charm for a localized repo for debian/ubuntu packages, which I recommend you also get.
<BlackDex> but stuff for OpenStack like neutron, nova etc... will not support that i think
<fallenour> @blackdex it will, just use HAProxy or Nginx with proper DNS and FQDN practices
<BlackDex> so you say that i need to catch the fqdn's of the repo's to point to a local mirror :)
<BlackDex> so if I use maas, and I have a local repo on the maas server available, I just point archive.ubuntu.com to 127.0.0.1 ;)
<cory_fu> tinwood: proposed tag updated
<tinwood> thanks cory_fu :)
<tvansteenburgh> what's the equivalent of `juju unregister` for a model?
<fallenour> @blackdex I would recommend you simply use local.repo.local, and use your dns/dhcp server to point it to a local 10. ip space
<cory_fu> tvansteenburgh: You can `juju unregister` a controller, but I don't think you can do it on a per-model basis
<cory_fu> Maybe ungrant yourself permissions?
<rick_h> tvansteenburgh: there isn't. bdx and I were lamenting that and there's conversations with core about how to deprecate the idea of "owner" that users can't change or work around
<tvansteenburgh> rick_h: cory_fu: i just want a way to clean up my `juju models` list when i know a thing doesn't exist any more, but juju thinks it does
<mattyw> can someone remind me where I get the charm build plugin from?
<mattyw> cory_fu, tvansteenburgh can you remember ^^?
<tvansteenburgh> mattyw: snap install charm
<mattyw> tvansteenburgh, https://pastebin.canonical.com/198727/
<mattyw> tvansteenburgh, ah right, it's a problem related to snap/ ppa interaction
<mattyw> sigh
<mattyw> tvansteenburgh, ok thanks
<tvansteenburgh> np
<mattyw> tvansteenburgh, if multiple set_states are called in a reactive charm do you know which order they get executed in? stack or queue?
<kwmonroe> mattyw: cory_fu might be able to help with that ^^
<cory_fu> mattyw: The order is specifically undefined.
<cory_fu> mattyw: If  you need to enforce an ordering for handlers, you should do so by chaining flags (states).
<cory_fu> That is, handler 1 reacts to flag A and then sets flag B.  Then, handler 2 reacts to flag B.
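cory_fu's flag-chaining pattern can be shown with a toy dispatcher. This is not charms.reactive itself (whose real decorators are `@when` from `charms.reactive`); it is a self-contained simulation of the idea that handler 2 only runs after handler 1 has set the flag it reacts to.

```python
# Toy flag dispatcher illustrating ordering via chained flags:
# handler1 reacts to flag "A" and sets flag "B"; handler2 reacts to "B",
# so handler2 is guaranteed to run after handler1 regardless of
# registration order.
flags = set()
log = []
handlers = []  # list of (required_flag, function)

def when(flag):
    def register(fn):
        handlers.append((flag, fn))
        return fn
    return register

@when("A")
def handler1():
    log.append("handler1")
    flags.add("B")  # chaining: setting "B" makes handler2 eligible

@when("B")
def handler2():
    log.append("handler2")

def dispatch():
    # Run each eligible handler once; keep looping while new flags appear.
    ran, progress = set(), True
    while progress:
        progress = False
        for flag, fn in handlers:
            if flag in flags and fn not in ran:
                fn()
                ran.add(fn)
                progress = True

flags.add("A")
dispatch()
# log is now ["handler1", "handler2"]
```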
<fallenour> months of working, years of planning, boiled down to 20 lines of code. Either it'll work, or it'll work when I threaten it with a hammer.
<stormmore> o/ juju world
<fallenour> ok, down to the last item
<xarses_> I'm attempting to destroy a model that has machines that failed to deploy, and the destroy-model is stuck waiting for the machines, is there a way to kick this down the road?
<fallenour> I need to get nginx to forward from internal IP to internal server with the relevant service, and I feel like I'm missing something
<fallenour> @xarses_ @catbus @rick_h @bdx I've got DDNS pointed with all my subdomains to my WAN IP. My Router forwards all 80 and 443 stuff to my Nginx box.
<xarses_> ?
<fallenour> If I use my browser to go to panda.eduarmor.com, it loads my nginx server html welcome message, but not the application it's supposed to serve on panda.eduarmor.com/horizon
<fallenour> example: http://panda.eduarmor.com
<xarses_> @rick_h: https://jujucharms.com/docs/search/?text=zone is giving an error, but something like https://jujucharms.com/docs/search/?text=charm works fine
<xarses_> good god, why is the handling of availability-zone so frigging hard
<rick_h> xarses_: filed a bug on that. Not sure but clearly something in the search for zones has caused the app to go ape
<rick_h> xarses_: and AZ are supposed to be automatically handled across multiple units. It's a bit too fine grained for most use so I don't think a lot of work's been put into exposing those knobs in a clean way.
<xarses_> 'asdf' has a similar response, but maybe because there are no responses
<rick_h> xarses_: what are you trying to do?
<rick_h> xarses_: yea, but zone should have stuff in there I'd imagine
<xarses_> ya, thats why I pointed that one out
<rick_h> xarses_: https://github.com/juju/docs/search?utf8=%E2%9C%93&q=zone&type= heh
<xarses_> rick_h: my network and availability zone must be passed together (provider networks before routed-provider-networks support)
<xarses_> network is set on the model
<xarses_> there is no valid constraint for zone
<xarses_> zone can be passed on --to
<xarses_> or as zone= on a add-machine
<xarses_> but there is no way to specify a zone when there is more than one machine being asked for
<xarses_> or the charm wants to spin up multiple members
<xarses_> or when deploying a bundle
<rick_h> so the --to constraint works for machines and you can specify them in the bundle and then the application to go on which machine
<rick_h> but yea, if you ask for 5 things then it splits one into each AZ
<xarses_> ya, thats invalid for my network
<xarses_> (zone:one, nets: [one-vlan11, one-vlan12])(zone:two nets:[two-vlan11, two-vlan12])
<xarses_> you can not specify them in the bundle
<xarses_> there is no respected arg for that
<xarses_> and exporting it always loses whatever was manually set when deploying
<xarses_> it just keeps coming back to, I have a constraint of network and az pair, and I can't specify it
<thumper> hmm...
<thumper> certainly seems like a failing in the ability to easily place things
<gaurangt> hi, I'm currently having issues scaling charm units. When I scale any app with the add-unit command, the new unit gets deployed but goes into an error state.
<gaurangt> That makes all of the subordinate charm deployments wait until the parent charm error is resolved
<gaurangt> is there any way to force the subordinate charm deployment even if the parent is in error?
<stormmore> so weird question of the day, when looking at juju status <application>, is the relationship section supposed to only show the relationships for <application>?
 * stormmore is looking at the output from juju status in 2.3 from the edge snap
<xarses_> thumper: certainly is, won't be resolved anytime soon, the network is too complex. at a minimum have to switch to a new SDN, or upgrade from mitaka to at least okata
<xarses_> and I can't switch to the new SDN until I unwind and change some of these juju charms, which I can't do in my lab easily because I have to specify the zone in odd ways because it isn't treated as a constraint
#juju 2017-09-20
<stormmore> this is really odd, I have deployed 3 ubuntu units and 2 agents are in a failed state because they aren't leader?
<tychicus> does anyone know why the default series for beats core is trusty?  is it known not to work in xenial? https://api.jujucharms.com/charmstore/v5/beats-core/archive/bundle.yaml
<xarses_> probably because the charm is old, and hasn't been updated since
<xarses_> its still on rev #1
<xarses_> neat
<xarses_> there is a bug here
<tychicus> xarses_: ok, thanks, so there is no reason that it "shouldn't" work on xenial, since all of the specific charms support xenial
<xarses_> there are two beats-core modules from containers
<tychicus> ah yes
<tychicus> I've noticed that
<tychicus> and different individual charms reference different beats-core
<tychicus> One seems to include packetbeat and the other does not
<xarses_> @rick_h: because I don't know who else to ping: there are two "beats core" bundles and the second can't be opened, it goes to the first
<xarses_> https://jujucharms.com/u/containers/
<tychicus> from https://jujucharms.com/filebeat/7 juju deploy ~containers/bundle/beats-core
<tychicus> from https://jujucharms.com/u/containers/packetbeat/5 juju deploy cs:bundle/beats-core
<xarses_> tychicus: as to 'should work with xenial' you should inspect each subordinate charm in the bundle, if they have xenial releases then you can likely do all xenial, in some cases one might be old still so you might just deploy with the one old one
<rick_h> xarses_: /me is confused. Is this in a search on jujucharms.com?
<xarses_> I'm looking at the listing of the containers user
<xarses_> there are two results for "beats core"
<rick_h> ok, /me looks at that
<xarses_> and one result points to the other
<rick_h> xarses_: oooh, hah
<rick_h> xarses_: there's a bunch of doubles in there
<xarses_> oh, I didn't even notice
<rick_h> yea, fishy
<stormmore> o/
<rick_h> xarses_: k, filed a bug will see what's up.
<xarses_> aight, thanks
<tychicus> one more question on bundles, I don't completely understand the syntax for relations, but I am working on getting elastic stack set up to do some monitoring
<tychicus> https://gist.github.com/roll4life/3017292adec8098a256cf37b645d9212
<xarses_> neat, the openstack-dashboard ones?
<tychicus> I want to do log monitoring for all of my openstack hosts, i figured I would start with dashboard because it is easy
<rick_h> tychicus: the best thing to do is to be explicit and do something like this: https://pastebin.canonical.com/198904/
<rick_h> tychicus: which is "add a relation, from application:endpoint to application:endpoint
<tychicus> rick_h, my ubuntu one account does not have access to that pastebin
<rick_h> tychicus: doh sorry
<rick_h> http://paste.ubuntu.com/25580465/
<tychicus> rick_h: I am not entirely sure that I understand
<tychicus> the example given in the packetbeat charm is as follows
<tychicus> juju deploy cs:bundle/beats-core
<tychicus> juju deploy ubuntu
<tychicus> juju add-relation filebeat:beats-host ubuntu
<tychicus> juju add-relation topbeat:beats-host ubuntu
<tychicus> juju add-relation packetbeat:beats-host ubuntu
<magicaltrout> morning, are there any hacks to change the controllers detected IP address?
<magicaltrout> cause I need to put my controller on a floating IP and its currently on an internal ip in an openstack cluster
<rick_h> tychicus: yea, so that's a short hand for letting juju finding the right endpoint on the other end
<tychicus> so in the context of defining these relationships in a bundle, what would the second "application:endpoint" be?
<rick_h> tychicus: that could also be written juju add-relation filebeat:beats-host ubuntu:juju-info
<rick_h> tychicus: check out the relation info in the charm details page on the right: https://jujucharms.com/filebeat
<rick_h> tychicus: it's basically saying that the beats-host will wire up to anything that has juju-info and juju-info is the one generic endpoint all charms have
<rick_h> tychicus: so you can leave it off and juju figures it out, but I prefer to be explicit for this very reason
<tychicus> ah ok
<rick_h> magicaltrout: ... so there's all the cached controller files of users, the list of mongodb addresses for HA and such internally, the addresses of agents...
<rick_h> magicaltrout: I'm not sure there's a simple update this thing to do. might have to try the juju list for that tbh sorry
<magicaltrout> fair enough no probs
<tychicus> rick_h: so in my case the correct way to define the relationships is:
<tychicus> relations:
<tychicus>   - - "kibana:rest"
<tychicus>     - "elasticsearch:client"
<tychicus>   - - "filebeat:elasticsearch"
<tychicus>     - "elasticsearch:client"
<tychicus>   - - "topbeat:elasticsearch"
<tychicus>     - "elasticsearch:client"
<tychicus>   - - "packetbeat:elasticsearch"
<tychicus>     - "elasticsearch:client"
<tychicus>   - - "filebeat:beats-host"
<tychicus>     - "openstack-dashboard:juju-info"
<tychicus>   - - "topbeat:beats-host"
<tychicus>     - "openstack-dashboard:juju-info"
<tychicus> *updated gist* https://gist.github.com/roll4life/64fcb01ddc674d6c6dab45eb2b446300
<stormmore> hey rick_h so last night I caught an odd ball in juju edge channel, just deploying 3 ubuntu instances and 2 of the complained about not being leader!
<rick_h> stormmore: huh...I didn't know the ubuntu charm supported any leadership ideas
<rick_h> stormmore: oh, I wonder if you're hitting that bug I saw someone talking about
<rick_h> stormmore: does it look like https://bugs.launchpad.net/juju/+bug/1706340 ?
<mup> Bug #1706340: [2.2.2] Failed unit agents: "leadership-tracker" manifold worker returned unexpected error: leadership failure: lease manager stopped <canonical-bootstack> <cdo-qa-blocker> <cpec> <new-york> <juju:Triaged> <https://launchpad.net/bugs/1706340>
<rick_h> tychicus: looks right-ish
<tychicus> rick_h: thanks, I'll try it and report back
<tychicus> rick_h: relation ["filebeat:beats-host" "openstack-dashboard:juju-info"] refers to application "openstack-dashboard" not defined in this bundle
<tychicus> sounds like you can only define relationships in a bundle for applications that are deployed by the bundle
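The error tychicus hit ("refers to application ... not defined in this bundle") boils down to a simple check: every endpoint in a bundle's `relations` must name an application defined in the same bundle. A minimal sketch of that check, using a plain dict in place of parsed bundle YAML; the function is invented for illustration, not juju's actual code.

```python
# Sketch of the validation juju applies to bundle relations: flag any
# relation endpoint whose application isn't defined in this bundle.
def undefined_relation_apps(bundle: dict) -> list:
    defined = set(bundle.get("applications", {}))
    missing = []
    for pair in bundle.get("relations", []):
        for endpoint in pair:
            app = endpoint.split(":", 1)[0]  # "filebeat:beats-host" -> "filebeat"
            if app not in defined:
                missing.append(app)
    return missing

bundle = {
    "applications": {"filebeat": {}, "elasticsearch": {}},
    "relations": [
        ["filebeat:elasticsearch", "elasticsearch:client"],
        # openstack-dashboard lives in the model but not in this bundle,
        # so this relation is rejected at deploy time:
        ["filebeat:beats-host", "openstack-dashboard:juju-info"],
    ],
}
print(undefined_relation_apps(bundle))  # ['openstack-dashboard']
```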
<stormmore> rick_h: I don't think that is related although I am aware of that one
<stormmore> rick_h: this is one that talks about not being able to setup relationships due to not being leader
<stormmore> rick_h: first time I saw it was with swift so I put it down to that, but then I saw it happen to a simple base ubuntu image that I didn't know had leader tracking either! so I dropped from edge to candidate juju and the issue disappeared
<catbus> tychicus: or juju won't be able to relate two services if juju isn't aware of the service. Did you deploy openstack-dashboard with Juju?
<tychicus> catbus: yes
<tychicus> It looks like the limitation is with how bundles work
<catbus> agreed. Any reason why openstack services and beat-core bundle can't be combined?
<tychicus> since openstack-dashboard is not being deployed in the bundle I specified, I can't make the association
<tychicus> I guess it's the same reason that you can't use the --to flag in conjunction with bundles
<tychicus> or specify in the bundle
<tychicus> to:
<tychicus> - lxd:2
<tychicus> since machine 2 is not defined in the bundle, the bundle can't be sure that machine 2 exists, and won't spin up a new lxd container on that machine
<tychicus> maybe I could specify something with the machines: directive
<rick_h> tychicus: no, bundles can't affect what's already in a model. Otherwise the bundles would not be reusable because they'd have to have the model set up in a specific way to work
<rick_h> tychicus: in this way, bundles are meant to be a reusable-sharable definition and if there's specific things your need for your infra then it's best to script via cli, the juju library, etc
<tychicus> rick_h: got it, totally makes sense
<bdx> rick_h: what's up with the maintainers of elasticsearch?
<bdx> the charm is so broken
<bdx> ever since that team took it over
<bdx> I'm filing bugs, no response
<bdx> I honestly feel that charm needs very active maintainership
<bdx> I had put a lot of effort into fixing up bits and pieces to get it to work
<bdx> now this team has taken over, won't accept my changes (the ones that fix the things that are broken)
<bdx> ^ an honest to god piss off
<rick_h> bdx: sorry, I'm not sure. I've not seen anything since the email thread which seemed all good.
<bdx> yeah
<bdx> its cool
<bdx> I just don't know how that happened
<bdx> I'm maintaining my own fork
<bdx> but its just a piss off that they took it over and its sitting there all broken
<bdx> and no one is active enough to give my fix a look over in a weeks time
<rick_h> bdx: yea that's no good. /me goes to look for the bug tracker/homepage of the charm
<bdx> rick_h: thx
<rick_h> bdx: can you reply to tom's feedback here? https://bugs.launchpad.net/elasticsearch-charm/+bug/1714393
<mup> Bug #1714393: ERROR! lookup plugin (dns) not found <Elasticsearch Charm:New> <https://launchpad.net/bugs/1714393>
<rick_h> bdx: did you setup the pull request there on that one or do you mean some other PR?
<bdx> rick_h: I'll find the pr omp, I'm more or less just bummed that the new maintainers aren't so active :(
<rick_h> bdx: understand, I just want to get ducks in a row and such looking at it
 * rick_h does the whole dance of it's a bunch of folks doing a lot of different charms that are just people and might need prodding and such
<bdx> the fact that they moved the code/development of the charm to launchpad git/launchpad is sad enough
<bdx> yeah
<bdx> I know
<bdx> grrrr
<rick_h> bdx: well it's how that team operates. It's our internal folks
<bdx> I see
<rick_h> bdx: at least it's git LP vs bzr LP :)
<bdx> i know ... its equally as bad I feel
<rick_h> bdx: but yea, most of the folks on that team are internal folks at canonical with long long histories of managing all the things through LP
<bdx> got it
<bdx> I feel like they are isolating that charm and its development
<bdx> it needs the opposite
<bdx> it needs more eyes, and more hands and feet
<bdx> it needs a ton of work/help right now, including an entire rewrite
<bdx> its just harder to get to on launchpad/git+lp
<fallenour> o/
<fallenour> having an issue with ssh keys today
<fallenour> I've got the pub key of interest onto the server, into the /root/.ssh directory, but it's still telling me permission denied (public key), which doesn't make any sense to me. Any thoughts? I already checked the config on the distant server, and it says that publickey is set
<fallenour> the oddest thing is the key is already in the authorized_keys file
#juju 2017-09-21
<fallenour> asdf
<fallenour> o/
<fallenour> o/
<fallenour> created a second account for backup admin, juju controller isn't present, I know I need to use juju register to connect. Is there a manual for this?
<rick_h> fallenour: a bunch :) https://jujucharms.com/docs/stable/tut-users and https://jujucharms.com/docs/stable/users
<fallenour> @rick_h I see that the guide shows me how to create a new controller via bootstrap, but what I specifically need to do is add my second admin account to the current controller. is there a guide specific for that?
<rick_h> fallenour: you create a user and grant them admin rights on the controller model
<rick_h> fallenour: you said you added a user and the guide goes through granting access
<magicaltrout> anyone tried changing docker storage driver in CDK?
<fallenour> rick_h: how do I identify the controller model though? I'm on the box with both admin accounts; 1 can see the controller and all the models, 1 cannot. Do I add the account for the other account?
<rick_h> fallenour: I'm not following. The model is called controller. If you juju list-models it has the names and the grant command takes the model name as an argument to the command
<rick_h> fallenour: I'm not sure what identification you're looking for other than the name "controller"
<rick_h> fallenour: https://jujucharms.com/docs/2.2/users-models#controller-access might be a bit more specific in the docs around this use case
<rick_h> fallenour: I see, I think you're looking for granting superuser to the second account
<fallenour> rick_h: Built out as follows: MAAS00 <> Juju00 <> >>> Admin1 has full juju access >>> Admin2 has no juju access at all, cant see controllers, models, clouds
<fallenour> rick_h: I think so?
<rick_h> fallenour: right, so please look at the link above and see if that makes sense
<fallenour> rick_h: ok so I gave my 2nd admin account read and write access to the model of admin1, as well as I gave them superuser on the controller, but I still dont see the model listed. Ideas?
<fallenour> "ERROR current model for controller conjure-up-cloud-maas-a50-f6c not found"
<rick_h> fallenour: so juju whoami
<fallenour> rick_h: Controller:  conjure-up-cloud-maas-a50-f6c Model:       <no-current-model> User:        lhicks
<rick_h> http://paste.ubuntu.com/25586651/
<rick_h> fallenour: ^
<cory_fu> marcoceppi: https://github.com/juju/charm-tools/pull/345
<cory_fu> When you have a minute
<cory_fu> What in the world does this error mean?
<cory_fu> ERROR cannot upload charm: verification failed: failed to decrypt caveat 1 signature: decryption failure
<xarses_> is your system time way off?
<fallenour> rick_h: Ok so it updated finally, I see the models, now i just need to figure out how to connect to the specific one, and Im golden?
<rick_h> fallenour: so you should be able to juju switch whatever you see in the list models output
<cory_fu> When did the unit logs for Juju stop including the stderr from hooks?
<fallenour> rick_h: just juju switch <modelID>
<fallenour> ?
<rick_h> fallenour: yes, whatever name is in juju list-models should work.
<rick_h> fallenour: and when you have multiple controllers you can juju switch controller:model
<rick_h> To jump directly to models in different controllers
<fallenour> uuughhH~
<fallenour> rick_h: Can you be my ~/ ^___^
<fallenour> rick_h: its finally almost working, 1 apache config away, and ive got magic
<xarses_> is there a pre-defined way to build stub|shim charms that are simply data providers to satisfy relations to systems that are deployed outside of juju?
#juju 2017-09-22
* urulama changed the topic of #juju to: Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms | JAAS login issues
* urulama changed the topic of #juju to: Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
<sfeole> hey all, is there a way to build charms in launchpad?? push the source to a branch and have a recipe that builds the charm for you?
<bdx> sfeole: there is a charm ci workflow for jenkins that exists somewhere, I'm now curious where its at too
<bdx> sfeole: possibly this is it https://github.com/juju-solutions/bundle-cwr-ci
<bdx> there may be another though
<bdx> sfeole: https://github.com/juju-solutions/bundle-cwr-ci#test-charm-when-the-repository-changes
<bdx> kwmonroe: +1
<sfeole> bdx, thanks, i'll take a look .
<bdx> sfeole: np
<xarses_> (re-ask, I think it wasn't responded to) Is there a pre-defined way to build stub|shim charms that are simply data providers to satisfy relations to systems that are deployed outside of juju?
<bdx> xarses_: yeah ... its not really pre-defined but there is a simple path
<kwmonroe> xarses_: i think you're getting at the idea of a proxy charm... eg, faux-db-charm might allow a user to "juju config faux-db-charm ip=<real-db-ip> user=<real-user> pass=<real-password>" and would also provide the db relation.  this would allow you to proxy the db connection details via juju config to a db relation that could then be related to other charms that require a db.
<xarses_> yep, something like that
<kwmonroe> of course, the faux-db-charm would need to do the actual work of relation-setting the configured data.
<kwmonroe> so, i don't know of any docs or pre-defined workflows for proxy charms, but it seems pretty simple in theory (famous last words).
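As a rough sketch of the proxy-charm idea kwmonroe describes (hooks-based rather than reactive; the charm name and config keys are hypothetical), a db-relation-joined hook could simply forward the operator-supplied values into the relation:

```shell
#!/bin/sh
# hooks/db-relation-joined for a hypothetical faux-db-charm:
# pass the connection details set via "juju config" into the db relation.
set -e
relation-set \
    host="$(config-get ip)" \
    user="$(config-get user)" \
    password="$(config-get pass)"
```

A config-changed hook would also need to re-run relation-set across existing relations so later `juju config` changes propagate to related applications.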
<xarses_> ya, it seems simple, so simple that there should be a pattern and or charm-generator for it
<bdx> kwmonroe, xarses_: looking back at the areas where I've had to have my charms connect to an external service I've implemented it custom in each charm, probably +10 places where I've copied my external service handlers around
<bdx> xarses_: I'm right there with you, +1^
<bdx> I originally thought subordinate charms might be a fit for the external service use case, but I don't think they are
<xarses_> oh, I was going down that route for the relations I currently need to model, but you think no?
<bdx> well... from what I know about subordinates, I think they have to be related to a primary charm to be able to run code
<bdx> so I don't think you could just have a subordinate deployed to your model that isn't related to a primary, that is just handing out config
<bdx> xarses_: I created this https://github.com/jamesbeedy/interface-db-info/blob/master/provides.py#L17
<bdx> simple way to set and get db info from one application to the next
<bdx> xarses_: so then you could have the leader of one application set the dbinfo, and other applications can relate to it to get the db-info
<bdx> xarses_: you can then just make the db-info config values in your primary charm that get set into the db-info relation
<xarses_> ya, thats about what I figured
<bdx> xarses_: not sure if this will get you what you want, but it definitely reinforces "so simple that there should be a pattern and or charm-generator for it"
<bdx> reinforces that we need something* ^
<bdx> xarses_: because the majority of the time (despite what we may like to think) NOT everything is deployed via Juju right
<xarses_> not even that
<xarses_> not everything will be deployed by juju
<xarses_> or even be deployed in this model
<xarses_> or managed by this controller
<bdx> right
<bdx> so
<bdx> I've hit my head on this all too much too
<bdx> xarses_: here's something I came up with a while back https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/config.yaml
<bdx> you have to have barbican deployed
<bdx> and keystone
<bdx> then you set your keystone creds in the barbican-client
<bdx> then you can relate your charms via the barbican-client and have them get the secrets from barbican
<bdx> all that to say
<bdx> that is a lot of work to get something "so simple that there should be a pattern and or charm-generator for it"
<bdx> :)
<bdx> xarses_: I dropped the barbican idea because I didn't like all of the things having a hard dep on having a connection to barbican
<bdx> and then trying to facilitate getting the information from the relations if the service exists and is deployed via juju, or if not deployed via juju get the info from barbican .... it was becoming more than I could chew, so I just ditched it in favor of setting charm configs for external things
<bdx> I'm not sure if any of that is best practice for anything
<bdx> but I think you're totally right ... we need some sort of generic external application proxy manager or something
<bdx> the guys working on the CAAS stuff might be thinking about a similar problem trying to present k8s application endpoints
<bdx> mmcc:^?
<xarses_> or something with some relation and config factory generators that are just like take "this relation" and give me something that I can relate, and set the config that was passed in the relation
<bdx> right
#juju 2017-09-24
<bdx> is there a way to exclude configs from lower layers?
#juju 2019-09-16
<rick_h> thedac:  jam checked and the addresses aren't sorted from network-get within the space (they're assumed to be equal) so I filed https://bugs.launchpad.net/juju/+bug/1844148 to at least sort them by default so it's more consistent.
<mup> Bug #1844148: aws cloud addresses not ordered and predictable <juju:Triaged> <https://launchpad.net/bugs/1844148>
<thedac> rick_h: thanks, that will help.
<kelvinliu> hi wallyworld got these two PRs, could u take a look if u got time later? thanks! https://github.com/juju/charm-tools/pull/544 https://github.com/juju/juju/pull/10635
#juju 2019-09-17
<stickupkid> FYI: i'm updating jenkins :p
<jam> stickupkid: our recent merge commits have started mixing line endings (\r\n vs \n). so if you do 'git log --first-parent' you'll see a bunch of \r for the description parts.
<jam> stickupkid: is that from the plugin change? or did we break something else?
<danboid> The juju api uses port 37017 (via IP4) to communicate to its controllers right?
<danboid> It seems 17070 (as stated in the docs) is IP6 only
<danboid> My juju controller is only listening on 37017 for IP4
<danboid> Should my juju controller be listening on port 17070 for IP4?
<danboid> If so, how do I make it?
<danboid> I expect I need to tell the juju controller to only use IPv4 do I? How do I do that?
<danboid> Ah! Seems 37017 is just for mongo
<danboid> I've tried disabling IPv6 in LXD but thats only had the effect of preventing the juju controller starting anything on 17070. Now I only see mongodb ports on my controller
<babbageclunk> danboid: sorry, everyone in the juju team is at a team meetup at the moment so we haven't been watching the channel. If your controller isn't listening on 17070 then it sounds like it's not getting to the point of the API server starting up. If you look in the logfile /var/log/juju/machine-0.log, do you see any errors about api-server?
<danboid> babbageclunk, Will do if it happens again. I ended up deleting the controller container and creating a new one but running juju bootstrap has paused at `Waiting for address`
<danboid> Its been waiting for address for a few minutes now
<danboid> Not a good sign I'd imagine
<danboid> I have disabled IPv6 in LXD now
<danboid> lxc network set lxdbr0 ipv6.address none
<babbageclunk> danboid: if you run `lxc list` can you see the container running?
<babbageclunk> (and an IP address?)
<danboid> yes
<danboid> There is a new container but it has no IP
<babbageclunk> ah, ok - it sounds like that'll be the problem
<danboid> Stop a delete it?
<danboid> *and
<babbageclunk> if you launch a lxd container with `lxc launch ubuntu:` does it start ok and get an address?
<danboid> It created fine...
<babbageclunk> hmm
<danboid> no address tho
<danboid> The host machine is using IP4
<danboid> I created lxdbr0 with lxc init
<danboid> Creating juju controllers was working before I disabled IPv6. Maybe I should re-enable it
<danboid> re-enable it in lxd that is
<babbageclunk> I think juju requires it to be turned off - but I don't remember specifically how it should be configured
<danboid> This is where I got the idea from https://github.com/juju/docs/issues/2965
<babbageclunk> danboid: I think that's right - it's the same error juju will report if it detects ipv6 turned on on lxd
<danboid> I don't remember seeing any such error
<babbageclunk> It looks like we configure the bridge to turn ipv6 off if we create it (because we run lxd init ourselves)
<babbageclunk> danboid: can you run `lxc network show lxdbr0` for me?
<danboid> babbageclunk, It was like this:
<danboid> babbageclunk, https://gist.github.com/danboid/6ce3fd2b0f298fc76630985c8d25d6fd
<danboid> I have since deleted that container and its rebooting
<danboid> What I thought I'd try is
<danboid> lxc network set lxdbr0 ipv6.nat false
<danboid> but it wasn't letting me, hence the reboot :)
<babbageclunk> danboid: just looked at my config - I have ipv6.nat set to true
<danboid> but ip6 addressing disabled? Seems a bit odd
<babbageclunk> that's the only difference I see from your config
<babbageclunk> yeah, I wouldn't expect it has any effect
<babbageclunk> danboid: sorry, this isn't an area I know much about - I'm tempted to suggest uninstalling and reinstalling lxd.
<babbageclunk> danboid: I have to go now unfortunately - sorry to leave you in the lurch!
<danboid> babbageclunk, OK, thanks for your help!
<danboid> Fixed it!
<danboid> I had to add a grub kernel boot parameter on the lxd host machine to disable ipv6
<danboid> ipv6.disable=1
<danboid> update grub, reboot... tada!
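danboid's fix as commands (assumes a stock Ubuntu host where GRUB_CMDLINE_LINUX is still empty; if it already has content, edit /etc/default/grub by hand instead of using sed):

```shell
# Append ipv6.disable=1 to the kernel command line, regenerate grub.cfg, reboot.
sudo sed -i 's/^GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="ipv6.disable=1"/' /etc/default/grub
sudo update-grub
sudo reboot
```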
<danboid> How long is juju destroy-controller supposed to take?
<danboid> I'm trying to destroy some controllers that were never used
<danboid> I deleted their containers first tho, maybe thats the problem?
<danboid> I presume it would delete their containers if I'd not already done that
<danboid> I don't think its working
<danboid> `juju controllers --refresh` doesn't seem to work either
<rick_h> dannf:  yea, if the containers are already gone just use "juju unregister"
<rick_h> sorry danboid ^
<rick_h> who's gone...
#juju 2019-09-18
<atdprhs> hello everyone, do anyone know how to enable `externalTrafficPolicy: Local` on ingress by juju deployment, i tried to enable on the service but i get >> `Service "default-http-backend-kubernetes-worker" is invalid: spec.externalTrafficPolicy: Invalid value: "Local": ExternalTrafficPolicy can only be set on NodePort and LoadBalancer service
<atdprhs> not sure which service am I missing here?
<atdprhs> Anyone enabled that setting before?
<atdprhs> Have anyone at least worked around enabling ingress to see the real client ip?
<atdprhs> or forward the real client ip?
<rick_h> achilleasa:  can you shoot me that error message so I can tweak it in the cli doc please?
<achilleasa> rick_h: current version: https://pastebin.canonical.com/p/rp9bS24hgX/
<rick_h> achilleasa:  ty
<danboid> I think I may have broke juju or found a bug
<danboid> It doesn't seem to let you destroy a controller if you've already removed its container. Is this a known bug?
<danboid> How can I clean up controllers that have already had their containers removed? I can reinstall juju if thats the only solution but I'd like to know the proper way, if there is one
<danboid> I have tried `juju destroy-controller -y --destroy-all-models --destroy-storage juju1` but it never completes, nor does it print any errors
<danboid> It should be pretty much instant as it never actually got used
<rick_h> danboid:  so you use juju unregister to remove the details of a controller from your local cache
<rick_h> danboid:  if you go to the cloud and remove the VMs, then there's nothing for your juju cli tool to speak to handle the destroy process
<rick_h> danboid:  so you just delete it from your local cache of controllers with unregister
<rick_h> danboid:  juju doesn't auto do that because we can't tell a network outage vs a gone machine/etc.
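rick_h's suggestion as a command, using the controller name from the log:

```shell
# Drop the stale controller from the local cache only; the cloud is not touched.
juju unregister juju1
```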
<danboid> rick_h, Sorted it. `juju kill-controller juju1` got rid with issue, I think
<danboid> *without issue
<danboid> There's no downside to using kill-controller, is there?
<danboid> As I say, it was never used anyway so I'm just housekeeping
<danboid> I presume kill-controller unregisters it too
<danboid> I suppose I should tidy up in the cleanest way that works
<danboid> The docs do say to use kill controller as a last resort, just wondering if there are any drawbacks?
<danboid> rick_h, OK so juju unregister worked for me too. I should use that over kill-controller but what is the difference
<rick_h> danboid:  kill-controller will try to reach out to the cloud and kill off any machines behind the controller's back
<rick_h> danboid:  it doesn't promise to leave things in a clean state and such
<rick_h> danboid:  unregister doesn't touch the cloud at all and just removes the yaml in your local cache
<danboid> rick_h, but in my case where all my controllers never got used, it wouldn't make any difference
<danboid> rick_h, Thanks for explaining
<rick_h> danboid:  right
<danboid> I've not been able to get juju register to work so that I can login to a controller from a remote machine
<danboid> I get prompted for passwords and the controller name then it just waits and waits and eventually times out
<danboid> So, the register code encodes what exactly? My controller is running in an LXD container on my MAAS controller so how is it trying to contact it?
<danboid> I have disabled IPv6 on my MAAS/juju controller so that shouldn't be an issue now
<danboid> Do I need to create some LXD proxy devices (port forwarding rules) to forward the juju api port from the MAAS controller/ LXD server to the container on the IP?
<danboid> *to the container running the controller
<danboid> Luckily I think I know how to do that but the docs should really give an example as I'm sure it's quite a common scenario
<danboid> I expect juju bootstrap doesn't do any network config for the controller container. So after configuring the networking of the controller container (lets say we can SSH into it and port 17070 is open), at that point juju register should work without any LXD proxy devices?
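The LXD proxy-device idea danboid floats could look like this ("juju-abc123-0" is a hypothetical container name; find the real one with `lxc list`):

```shell
# Forward the host's port 17070 to the controller container's API port,
# so remote clients can reach the controller via the host's address.
lxc config device add juju-abc123-0 juju-api proxy \
    listen=tcp:0.0.0.0:17070 connect=tcp:127.0.0.1:17070
```

After that, `juju register` from a remote machine would need the host's address rather than the container's.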
<danboid> Can IPv6 be disabled on the juju controller without disabling IPv6 entirely on the host machine via a kernel argument?
<danboid> I did that and it seems to have broken MAAS
<danboid> I tried disabling IPv6 in LXD but that didn't help
<rick_h> danboid:  the issue is that the client needs to be able to contact the controller on its API port
<rick_h> danboid:  so if you've got a controller in a lxd container, then the client running "juju register" should be able to telnet to the controller ip:port
<rick_h> if it can't you'll get a failure as you describe
<rick_h> danboid:  no, juju just puts the controller on the lxd container. Since you bootstrap from the machine you're talking to lxd on they're normally routable
<rick_h> danboid:  but from another machine you'll need to have a path from the root device that's hosting the lxd container from the outside
<danboid> Do you know if MAAS requires IPv6? We aren't using v6 but after disabling it (to get juju controllers to stop using v6 addresses) I can no longer login to MAAS
<danboid> I disabled it with a kernel argument
<danboid> The juju controllers were assigning 17070 to an IPv6 address otherwise
<rick_h> danboid:  hmm, not sure. I think it would depend on what was setup on the network if ipv4 was setup correct or the like
<danboid> but you don't know of any juju controller config option to say "only use IPv4"?
<rick_h> danboid:  on lxd juju throws an error if it's setup with ipv6
<rick_h> danboid:  so you shouldn't need a config for that
<danboid> It hasn't done that for me
<rick_h> danboid:  what Juju are you using? snap or deb and version?
<danboid> snap. I've been using `juju bootstrap lxd Juju1` to create controller containers
<danboid> 2.6.8
<danboid> Under 18.04
<danboid> Then if I shell into the container and run netstat, I see 17070 is using an IPv6 address, unless I disable IPv6 with a kernel arg
<danboid> listening on an IPv6 port even
<rick_h> danboid:  hmm, I wonder what's up with that. We threw an error suggesting you use lxd-configure or the like to set it up w/o ipv6 enabled.
<danboid> I tried disabling ipv6 addressing and NAT but it didn't help. Is this documented?
<danboid> Disabling it on the LXD networking level sorry
<rick_h> danboid:  https://github.com/juju/docs/issues/2965
<rick_h> danboid:  basically when you run `lxd init` it walks you through the setup
<danboid> rick_h, Yes, I saw that bug yesterday and did what it advises but it didn't fix it for me. Even after destroying my controllers and creating new ones with LXD IPv6 disabled beforehand
<rick_h> danboid:  huh? do you disabled lxd ipv6 and still got an ipv6 address on containers brought up with juju?
<danboid> Disabling IPv6 at the kernel level on the host DOES fix it but it seems that means I cannot run MAAS and LXD/juju controller on the same box
<danboid> Yes, I'll show you my LXD network config
<danboid> https://gist.github.com/danboid/96fddd9ae6b95c91f6afc1dce63cf686
<danboid> I went the extra step over what that bug advised by disabling ipv6 NAT too
<danboid> With IPv6 disabled, I didn't get a port 17070 at all
<danboid> Enable v6, and I get 17070 listening on v6
<babbageclunk> danboid: did you manage to get things working?
<danboid> babbageclunk, I solved my IPv6 problem by uninstalling lxd from the 18.04 repo and installing lxd from snap
<babbageclunk> danboid: oh great
<danboid> It does seem that the MAAS web UI doesn't start without IPv6
<babbageclunk> I didn't think to ask your lxd version, that would have been a good hint sorry
<crodriguez> I've tried to bootstrap a controller with juju, and it hangs at the step 'Running machine configuration script'. After 2h, I cancelled the whole thing, and now juju won't return anything for juju status. Any idea how to fix juju?
#juju 2019-09-19
<rick_h> crodriguez:  if it hung there please make sure your client can reach the VM that came up and ssh to that VM and check the cloud-init logs on how it got along.
<hpidcock> wallyworld: https://github.com/juju/juju/pull/10643
<pmatulis> crodriguez, did you fix the bootstrap issue?
<crodriguez> no. It's probably an issue in the dns of my vms vs the host, but even once I make sure my dns works on the vm, the script stays hanging. I'm trying to find the realtime logs from that step..
<crodriguez> pmatulis: I see that this function in juju's code has a ProgressWriter pointed to ctx.GetStderr(), but I can't find where this may be stored
<crodriguez> nvm I think it just means that it'll print out only errors
<pmatulis> crodriguez, maybe if you describe your env. where is juju client and what cloud type?
<crodriguez> pmatulis: I'm trying to bootstrap an openstack controller from a cloud I deployed with the fce tool. I can access my vms, they have dns and internet access. Just being able to find the detailed bootstrap logs would help a lot. The --debug flag doesn't make explicit what is blocking
<pmatulis> crodriguez, you can try passing --keep-broken and then inspect the instance
<pmatulis> (if you can)
<pmatulis> default behaviour of juju is to destroy the instance if it cannot be reached
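pmatulis's suggestion as a command ("mycloud" is a hypothetical cloud name):

```shell
# Keep the failed instance around for inspection instead of destroying it.
juju bootstrap mycloud --keep-broken --debug
# If bootstrap fails, ssh into the instance and inspect
# /var/log/cloud-init-output.log to see where configuration stalled.
```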
<danboid> When I'm creating a new juju controller, can I get it to use a different LXD (notwork) profile?
<danboid> *network
<danboid> Changing the controller profile after doesn't seem to work
<danboid> Maybe I'll just have to edit the default LXD profile to meet my requirements?
<manadart> danboid: Changing the default profile is the way to do it currently.
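One way to do what manadart suggests, pointing the default profile's NIC at an existing host bridge ("br0" is hypothetical):

```shell
# Replace the default profile's eth0 device with one bridged to br0.
lxc profile device remove default eth0
lxc profile device add default eth0 nic nictype=bridged parent=br0
```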
<danboid> manadart, OK thanks
<danboid> When running juju bootstrap, what does
<danboid> ERROR no addresses match
<danboid> mean?
<danboid> I've found a couple of bug reports indicating juju bootstrap doesn't work well if you are not using lxdbr0
<danboid> I need to use a macvlan profile
<danboid> The other option is creating the controller using lxdbr0 but then what do I need to modify within the controller container to change the addresses?
<danboid> I would change it to use my macvlan profile after creating it
<danboid> I've already tried doing this but it didn't update the controller address
<danboid> I suppose the other option would be to not use LXD at all right?
<danboid> How do I juju bootstrap to 'bare metal'?
<danboid> Is that an option?
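For reference, Juju's manual cloud can bootstrap straight onto an existing machine over SSH (address, user and controller name below are hypothetical; the target needs to be reachable via SSH and running Ubuntu):

```shell
juju bootstrap manual/ubuntu@10.0.0.10 mycontroller
```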
<danboid> Apparently macvlan doesn't work with juju https://discourse.jujucharms.com/t/manual-network-setup-for-lxd-clustering/261
<danboid> I've never been able to access containers using an lxdbr0 bridge from outside of the LXD host, which is what I need to do
<danboid> and its why I was trying macvlan as that does enable external hosts to communicate
<danboid> Am I missing something with lxdbr0?
<danboid> manadart, I think you've answered my question here https://discourse.jujucharms.com/t/manual-network-setup-for-lxd-clustering/261
<danboid> That bridge config is anything but obvious so it seems like that should be covered / linked to in the juju docs. before / o the same page as controller bootstrapping, presuming it works
<danboid> * before / on
<danboid> The default lxdbr0 is only going to be useful to those who have all their juju containers running on the one host, it seems
<kelvinliu> wallyworld: +1 plz https://github.com/juju/charm-tools/pull/547.  thanks!
<danboid> I've tried following that guide but when lxd init asks if I'd like to use an existing bridge or interface it doesn't accept br0 nor the interface name
<danboid> Oops!
<danboid> I was trying to input br0 when the answer was yes :)
<danboid> Time to go home I think :)
#juju 2019-09-20
<pmatulis> i'm trying to upgrade my controller model. there is no output at all and it's hanging. any hints?
<rick_h> pmatulis:  your vpn isn't connected?
<rick_h> pmatulis:  sounds like the client isn't reaching the controller and waiting to time out but not sure
<pmatulis> rick_h, yeah, it did time out. need to figure this out. ty
<jam> simple review: https://github.com/juju/juju/pull/10648
<cloudsigma-belos> Hello, how are you?
<cloudsigma-belos> Could you please help me with an error which I get when I am trying to bootstrap an image with Juju?
#juju 2019-09-22
<ordo> hi everybody
<ordo> does someone know how to force juju to use a proxy ?
<ordo> hello?
<humbolt> Hi there!
<humbolt> I am having quite some trouble with Juju/MAAS network spaces.
<humbolt> I keep getting this weird error:
<humbolt> machine-0: 01:42:57 ERROR juju.worker.provisioner cannot start instance for machine "0/lxd/1": unable to setup network: no obvious space for container "0/lxd/1", host machine has spaces: "boot", "storage", "storage-clients"
<humbolt> Seems lxd does not know which bridges to create and which underlying interfaces to use for them.
<humbolt> Does anybody have any experience with this?
