#juju 2012-04-16
<imbrandon> hazmat / SpamapS ( i know neither of you are around yet, this is for later ) : can their be custom "hybrid" providers , e.g. unholy marriages between like google storage ( that provides s3 api and even uses s3cmd for its cli tool ) and ec2 or an openstack compute , and if so how would one such env.yaml look ?
<imbrandon> s/their/there
<SpamapS> imbrandon: it would need to be written, but it wouldn't be hard to make happen
<SpamapS> imbrandon: I've suggested a few times that we should consider decoupling storage from compute
<SpamapS> imbrandon: also I think we can remove the S3 dependency entirely
<SpamapS> imbrandon: just fire up something for storage on node 0, and use ec2 groups to find it.
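A hedged, purely illustrative sketch of what a hybrid environments.yaml entry might look like if storage and compute were decoupled as discussed here; none of the storage-* keys below existed in juju at the time, they are invented only to show the shape of the idea.

    # hypothetical fragment of ~/.juju/environments.yaml, under the "environments:" key
      hybrid:
        type: ec2                                          # compute from EC2 or an OpenStack compute API
        control-bucket: juju-hybrid-example
        admin-secret: 0123456789abcdef
        storage-endpoint: https://storage.googleapis.com   # invented key: S3-compatible object store
        storage-access-key: EXAMPLEACCESSKEY               # invented key
        storage-secret-key: EXAMPLESECRETKEY               # invented key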
<imbrandon> SpamapS: awesome
<imbrandon> SpamapS: next question is do we do that in go or python, i'm kinda curious to get started with some go
<imbrandon> SpamapS: btw i did a lil screencast about blitz'ing omgubuntu today , go like or plus one it or whatever it's called :)
<imbrandon> :)
<SpamapS> imbrandon: nice
<SpamapS> imbrandon: so we're stuck in a weird spot right now
<SpamapS> imbrandon: python dev is supposed to cease...
<SpamapS> imbrandon: but go is still not really useable
<SpamapS> imbrandon: they'll be usable soon.. and caught up I'm sure in a "months" measurable time frame
<bkerensa> =o
<SpamapS> yeah, I am =o too :-P
<bkerensa> SpamapS: python dev will cease wha?
<SpamapS> bkerensa: yeah, all new features will go into the go version
<bkerensa> oh
<SpamapS> but..
<bkerensa> so python-dev will not be carried as a package anymore?
<SpamapS> I dunno.. they seem very far behind.
<SpamapS> bkerensa: no we'll keep shipping python-dev in Ubuntu.. ;)
<bkerensa> oh
<bkerensa> ok then :D
<SpamapS> bkerensa: or are you meaning, python-juju ?
<SpamapS> That version is the only working version of juju right now, so it will continue to be the one that goes into Ubuntu until the go version can do everything it can.
<bkerensa> python2.7-dev
<bkerensa> :P
<SpamapS> right ok well I'm too tired to know if we're talking in circles or I'm just out of brain power
<SpamapS> so.. I'll just sleep. :-P
<imbrandon> bkerensa: we're only talking of python dev ceasing in relation to juju core, not ubuntu as a whole, juju is being rewritten in go
<imbrandon> SpamapS: gnight , see ya when ya wake :)
<Daviey> hola
<niemeyer> Hello all
 * koolhead17 hola backs Daviey :P
<koolhead17> niemeyer, hi
<fwereade__> niemeyer, heyhey
<fwereade__> niemeyer, is it meeting time?
<niemeyer> fwereade__: It was 14UTC, but if everybody is ready we can do it now
<fwereade__> niemeyer, I'm easy, just hadn't picked up on being 2 hours off instead of 1
<jcastro> fwereade__: hey, did we forget to announce subordinates on the mailing list?
<fwereade__> jcastro, if you mean constraints, then ...er, probably, there hasn't been anything since the warning mail a couple of weeks ago
<jcastro> hmm, did I miss the mail on subordinates then?
<fwereade__> jcastro, ...and if you do mean subordinates then I'm afraid I have no idea, I've been up to my eyebrows in constraints exclusively, sorry
<jcastro> ok I think I am mixing you up with someone else. :)
<jcastro> but hey a quick "hey guys this is finished, here are the docs" wouldn't hurt for constraints either.
<jcastro> ah, subordinates, that's hazmat and bsaller right?
<jcastro> hazmat: ping
<fwereade__> jcastro, thanks, good plan
<imbrandon> hrm i deleted this branch long ago, http://jujucharms.com/~imbrandon/oneiric/quickdrop
<imbrandon> should it not get cleaned up ?
<jcastro> when?
<imbrandon> on cron ?
<imbrandon> as in the branch has been gone for weeks
<jcastro> did you just now remove it?
<jcastro> oh
<jcastro> sounds like a bug then
<imbrandon> maybe 2 weeks
<jcastro> sec
<imbrandon> kk
<jcastro> I'll file it
<imbrandon> kk ty ty :)
<jcastro> it's lp:~charmworld for the web UI btw
<imbrandon> ahhh kk
<imbrandon> btw i installed hub, makes git work with github like bzr does lp , soooo nice
<imbrandon> git clone drupal6 == clones my fork of drupal6 or , git clone jcastro/drupal6 == for yours :)
<hazmat> jcastro, pong
<hazmat> yeah.. i don't think we've announced the subordinate capability
<hazmat> they landed end of last week
 * jcastro nods
<SpamapS> we should
<jcastro> wouldn't hurt to post it this week
<SpamapS> I've already written puppet as a subordinate charm
<jcastro> constraints was just announced!
<hazmat> and they're even documented.. https://juju.ubuntu.com/docs/subordinate-services.html
<SpamapS> Yeah the docs are very good
 * imbrandon is working on his first sub charm now, newrelic
<SpamapS> though the 'juju-info' relation is not super discoverable in them
<hazmat> hmm.. good point
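A minimal sketch of what a subordinate charm declares in its metadata.yaml, based on the subordinate-services doc linked above; the charm name and relation name are placeholders, and only `subordinate: true`, the juju-info interface, and `scope: container` are the parts under discussion.

    # placeholder metadata.yaml for a subordinate charm (e.g. a monitoring agent)
    name: my-subordinate
    summary: attaches to any principal service via the implicit juju-info interface
    description: illustrative only
    subordinate: true
    requires:
      general-info:
        interface: juju-info    # implicit interface that every deployed service provides
        scope: container        # container scope makes the unit deploy beside its principal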
<SpamapS> oh yeah we need to make an RDS sub charm
<SpamapS> and an ELB sub charm would be cool too
<imbrandon> hell yea
<SpamapS> also a mysql proxy sub charm with the 'is this a write or a read' ruleset that can then send writes or reads to masters/slaves
<SpamapS> <-- full of ideas, out of time ;)
<hazmat> SpamapS, why would those be sub charms? they're both standalones
<hazmat> rds and elb
<SpamapS> hazmat: you're going to run an entire instance, to make API calls?
<SpamapS> hazmat: external services work great as subs
<hazmat> SpamapS, or i propagate ec2 credentials everywhere?
<SpamapS> or, should anyway
<SpamapS> hazmat: thats why Amazon has the sub-creds stuff
<imbrandon> sub keys
 * SpamapS forgets the acronym
<hazmat> jcastro, so i'm not entirely clear that we ever allow 'deletion' of a charm
<imbrandon> IAM
<imbrandon> hpcloud has it too
<hazmat> niemeyer, does the charm store ever delete a charm because its upstream branch in lp is gone?
<niemeyer> hazmat: Nope
<SpamapS> hazmat: I happen to agree with you, but short of placement: local .. we really shouldn't waste an instance on stuff like that.
<SpamapS> doh
<imbrandon> hazmat: the charm in question is mine, i renamed it and the old one is gone for good
<SpamapS> we need like, a daily delete cron or something
<imbrandon> like for weeks now
<imbrandon> i'm sure there are others though
<SpamapS> deletes.. the bane of all syncing systems
<imbrandon> http://jujucharms.com/~imbrandon/oneiric/quickdrop
<imbrandon> --delete-after :)
<jcastro> Do we care if they're deleted or not, maybe just "don't display"?
<imbrandon> well it dont display now, i found it via the interfaces
<imbrandon> well dont display on the main cs
<imbrandon> clicked interfaces, then http, then there i was :)
<imbrandon> heh
<hazmat> jcastro, imbrandon the problem is someone may have already deployed it
<jcastro> oh
<hazmat> and at that point deleting it is.. is really bad for a user of the charm
<imbrandon> but yes ultra hide seems ok since they are essentially namespaced by dev, so even someone with the same name charm would be ok later
<jcastro> ok so I would think we'd do that for cs:foo, but not cs:somedude/foo
<imbrandon> hazmat: not knowing its un-maintained might be as bad though
<imbrandon> and if the branch is gone , whats the hurt in deleting it, it cant be deployed anyhow
<hazmat> imbrandon, true... for the browser we could hide it from listings and search, but show it on direct nav
<hazmat> and perhaps highlight its unmaintained status..
<jono> SpamapS, you around?
<jcastro> hazmat: that's what I was thinking
<imbrandon> hazmat: yea i'd say highlight its unmaintained too
<imbrandon> and even put that in juju status if able to
<SpamapS> jono: I am! wassup?
<jcastro> "branch not found, there's a good chance this might be unmaintained, here's a link to the official one in this same namespace..." or something
<jono> SpamapS, I have these nodes that just arrived at my place, I am going to bring them in this afternoon, but will need a hand getting them in the venue
<jono> can you help me?
<jcastro> jono: he's not there, you want zul or Daviey
<SpamapS> jono: I arrive Thursday morning
<imbrandon> heh
<jono> ahhhh np
<SpamapS> jono: when I said I'm "around" .. I'm .. like.. fat, and spilling all over my chair. ;)
<jono> zul, Daviey can either of you guys help?
<jono> SpamapS, lulz
<imbrandon> they should have made openstack closer to uds so i coulda went :)
<imbrandon> hahah
<SpamapS> imbrandon: it has become the pre-drinking event to get your liver ready for UDS
<imbrandon> hahah
<hazmat> jono, i'm around
<hazmat> jono, i can rope in folks as needed to help out
<imbrandon> i'm good then, no need to see jono bottle dance after the first time
<imbrandon> although i am hoping he drags along a les paul or some such this time since its not that far :)
<jono> hazmat, that would be cool
<jono> I am going to go and see if they fit in my car in a few mins
<SpamapS> heh.. jono is juju's roadie
<imbrandon> hah
<SpamapS> man, we need to revive the effort to have '--set' passable to deploy. That was a great idea.
<marcoceppi> --set for like, initial config options?
<SpamapS> yeah
<marcoceppi> +1
<SpamapS> Though I see a lot of really bad practice with that
<SpamapS> people writing "this will only be applied at install time'
<SpamapS> I am pondering making config-get in 'install' a warning in charm proof. configs should really only be applied in config-changed
<SpamapS> otherwise they lose their magical powers to morph the service as needed
<hazmat> SpamapS, well they could get defaults in install, you can set config at deploy
<hazmat> its a fail though for charms that require config at deploy imo
<SpamapS> hazmat: that doesn't make any sense though. It can be changed, so it should be handled *only* in config-changed.
<SpamapS> even if it is the default
<SpamapS> install is for stuff that *always* has to happen
<hazmat> SpamapS, true and we do execute config-changed always before start
<SpamapS> exactly
<marcoceppi> SpamapS: wouldn't it eventually be handled in config-changed even during first install
<SpamapS> marcoceppi: yes
<marcoceppi> so, yeah
<SpamapS> so anything that you do 'config-get' for in install, should be moved to config-changed
 * marcoceppi goes back through his charms
<SpamapS> I'm sure we've all made the mistake :)
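A hedged sketch of the hook layout SpamapS is describing: install only does what always has to happen, and anything read with config-get lives in config-changed (which always runs before start anyway). The service name and the "port" option are placeholders, not from any real charm.

    # hooks/install -- no config-get here
    #!/bin/bash
    set -eu
    apt-get install -y my-service        # the part that always has to happen

    # hooks/config-changed -- re-read config every time it changes
    #!/bin/bash
    set -eu
    PORT=$(config-get port)
    sed -i "s/^port=.*/port=${PORT}/" /etc/my-service.conf
    service my-service restart || service my-service start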
<imbrandon> SpamapS: tells SpamapS minus one year there is a json interface :)
<imbrandon> :)
<SpamapS> inorite
<SpamapS> SpamapS minus one year had no config-get
<imbrandon> hehe
<robbiew> jcastro: going to send the shirts today...to our west coast warehouse, aka jono's house
<jcastro> nod
<jcastro> I'm bringing the micro right?
<robbiew> yes
<robbiew> I'm bringing another, but we need a murphy's law machine
<robbiew> ;)
<imbrandon> two if jcastro is responsible for one
 * imbrandon dips his head back into the code
<jcastro> imbrandon: you've published things on OSX right?
<jcastro> imbrandon: maybe we should look at the OSX juju work?
<imbrandon> yea
<imbrandon> and yea
<jcastro> https://juju.ubuntu.com/OSX
<jcastro> any help here would be <3
<imbrandon> ohh awesome the hard part is done
<imbrandon> sweet
<imbrandon> yea i'll get this on wrap sometime in the next 24
<jcastro> you have 10 days actually. :)
<imbrandon> infact looking at this, i can get it down to one command
<jcastro> but getting it all sorted before then would be wicked
<imbrandon> jcastro: great cuz my laptop only runs osx atm , that might change at uds
<imbrandon> but lol
<imbrandon> nah i'll be dual booting
<imbrandon> but yea for sure in 10 days, i can get this down to one command i can allready tell
<imbrandon> sweet
<jcastro> imbrandon: if we need to host some stuff on officialish resources I can help you with that
<jcastro> we _need_ to work awesome on OSX
<imbrandon> nah we want it in the official brew tree
<jcastro> oh ok, well, whatever you kids are into
<imbrandon> that way a user literally says
<imbrandon> brew install juju and is done
<jcastro> that would be nice
<imbrandon> and no sef respecting devops doesn't already have brew installed, but if not its just one more command
<imbrandon> self*
<imbrandon> but yea, i'm on it
<imbrandon> oh wow these are old too, this is a few months out of date
<imbrandon> sweet ok, /me is happy now
<imbrandon> infact jcastro i may take you up on the hosting thing
<imbrandon> the more i think about it i can also produce dmg installers
<imbrandon> i forget i'm an "official" apple dev and can code sign packages :)
<imbrandon> lol
<jcastro> I will get you anything you need
<imbrandon> sweet yea, it would only be the equiv of a .deb
<imbrandon> but with 10.8 they have to be code signed with an approved key
<imbrandon> damn apple walled garden, but luckily i have such a key for macos and ios for at least the next 10 months
<imbrandon> ( $100 bux a year each , but i just renewed )
<imbrandon> SpamapS: remember that , its the "Seem exclusive but not Be exclusive", like a stripper :)
<SpamapS> yeah, thats what new-charm is :)
<SpamapS> I think anyway
<imbrandon> :)
<m_3> SpamapS: when you get a sec, I'd love to go over precise charms
<SpamapS> m_3: any glaring problems?
<m_3> SpamapS: nope, not at all
<m_3> SpamapS: just leaning towards _not_ making any big changes through launchpad this week
<imbrandon> jcastro: wont happen in 10 days but i can get it into the app store too ( approval process takes longer than 10 days )
<m_3> SpamapS: just push and demo against cs:~charmers/precise/<blah> for what's needed
<SpamapS> m_3: hrm
<SpamapS> m_3: where's your sense of adventure?
<m_3> SpamapS: I don't have a
<m_3> ha!
<m_3> don't have any specific problems with upgrading across the board other than lots of moving parts and the risk of breakage
<m_3> demo faeries(sp?) and such
<m_3> more charms "just worked" than I expected
<jcastro> well that's a good sign at least.
<m_3> jcastro: yeah, the process went really well... I'm just pretty skiddish when it comes to changes during big demo time
<m_3> skittish?  hell, I need more coffee
<imbrandon> switch to IV, works better i found
<jcastro> well hey, if we want to flip the switch next week then that means we can promulgate the next three in the queue right? :)
<SpamapS> m_3: the MaaS demo isn't using cs: is it?
<SpamapS> robbiew: ^^ ?
<m_3> imbrandon: I bet
<m_3> SpamapS: it sounded like they were excited about showing the charmstore
<robbiew> It would be nice, not a requirement
<robbiew> tbh, we'll probably script the deployment anyway
<jcastro> well, if we don't have to fix it now we have to fix it by release
<jcastro> so we might as well fix it now
<robbiew> and walk folks through it
<m_3> SpamapS: in discussing it last week, it sounded like they were ok with cs:~charmers/precise/<blah> though
<m_3> 'juju deploy cs:~charmers/precise/hadoop' -vs- 'juju deploy hadoop'
<robbiew> I wouldn't worry about affecting the demo...hell, we might use local charms just to avoid any surprises
 * robbiew goes to ship boxes...back l8r
<m_3> SpamapS: ok, well it sounds like we should pull the lp switch to promote them to precise then
<jcastro> m_3: charm school tomorrow for QA folks!
<m_3> jcastro: yup... is there anything in particular that peeps wanna see?
<jcastro> nope, it sounds more like it'll be a Q+A of their questions,
<m_3> yeah... cool
<m_3> jcastro: are there few enough to do it in a hangout?
<m_3> jcastro: or should I spin up juju-classroom
<jcastro> I asked for G+ but they want phone+IRC I guess
<jcastro> so maybe it's more than 10
<m_3> jcastro: don't see anything about phone... is there a conference thingy?
<jcastro> not obvious to me, I'll follow up over email, you'll be CCed
<m_3> jcastro: cool thanks
<m_3> jcastro: re the openerp6.1 split... there can be as many openerp charms in the store as people want
<jcastro> right I get that, what I am wondering is if it makes sense to do that
<m_3> sorry, not in the store... in the repo
<jcastro> oh right
<m_3> in the store, we need to have one
<m_3> imo
<m_3> I'd like them to just play together and figure out how to combine them
<m_3> but I understand that may be unrealistic
<m_3> so either combine, or pick one to keep in the store and one to keep in a separate repo
<m_3> jcastro: cool with you?
<jcastro> sure
<jcastro> I just didn't want them to accidentally duplicate work
<m_3> right
<jcastro> http://www.rackspace.com/blog/next-generation-rackspace-cloud-servers/
<jcastro> anyone try juju on this yet?
<jcastro> marcoceppi: new toys ^
<marcoceppi> jcastro: I signed up for their beta through my company, never heard back from them.
<avoine> jcastro: I need to setup an awesome proxy first but I'll give it a try
<avoine> marcoceppi: strange I got a reply the next day
<jcastro> SpamapS: I'll hit lunch then we can G+? We can talk charm store too with m_3 if we want
<SpamapS> jcastro: can we G+ first?
<SpamapS> jcastro: I'm ready right now
<jcastro> SURE
<imbrandon> win 7
<jcastro> fire it up
<imbrandon> doh
<SpamapS> Ok I'll initiate
<jcastro> SpamapS: invite the others too if people want to hang out
<SpamapS> btw, but 983313
<jcastro> I am feeling very "light at end of the tunnel let's hug a lot."
<SpamapS> bug 983313
<_mup_> Bug #983313: New Charm: puppet + puppetmaster <new-charm> <Juju Charms Collection:New> < https://launchpad.net/bugs/983313 >
<SpamapS> jcastro: me too
<SpamapS> but with mayonnaise
<imbrandon> i'll hang, if nothing else to listen
<SpamapS> imbrandon: its mostly logistics around our demo Thursday.. right after the hugs
<imbrandon> lol
<imbrandon> k /me goes back to fixing the lxc
<m_3> SpamapS: hey
<m_3> SpamapS: looks like it worked... aliases are pointing at precise
<SpamapS> oh they are?
<m_3> SpamapS: yay, ok, so I'll start manual changes for ones that're not supposed to be there in precise
<m_3> test out 'juju deploy <blah>' if you get a chance... here're the ones we expect to work http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/
<m_3> I'll do the same after fixing some links
<m_3> SpamapS: perhaps we need another '--fix' round from charmtools?  /me looking to see if bzr keeps the actual branch or the alias
<SpamapS> m_3: alias
<SpamapS> m_3: or at least, it should
<SpamapS>   parent branch: bzr+ssh://bazaar.launchpad.net/+branch/charms/subway/
<imbrandon> jcastro: woot i now have the ability to blitz with 625 concurrent users ( standard free acct is 250 )
<imbrandon> looks like others liked the video too :)
<imbrandon> Brandon,
<imbrandon> Cool screencast! We've added one-time +250 credits to your account so you free plan gets bumped to 500 concurrent users. Of course, with referral credits you can get this up to a max of 750!
<imbrandon> from blitz.io in my email
<imbrandon> :)
<SpamapS> imbrandon: thats cool
<imbrandon> hellz yea :) /me is even more happy now
<imbrandon> i already had 125 referal credits
<imbrandon> so 625 now :)
<SpamapS> Ok, folks PRECISE is the new dev focus for charms.
<SpamapS> charm promulgate will now promulgate to precise
<SpamapS> lp:charms/* is now precise
<SpamapS> all branches have been copied forward
<SpamapS> oneiric, may ye rest in stability.. aaaahhhmmeeen
<jcastro> SpamapS: ok so on precise "juju deploy mysql" will do precise?
<SpamapS> yep
<SpamapS> its like we planned it this way :)
<m_3> SpamapS: jcastro:technical difficulties... one sec
<SpamapS> m_3: clearly :-P
<koolhead17> SpamapS, are you not at ODS?
<SpamapS> koolhead17: no, I go on Thursday
<koolhead17> SpamapS, okey. We have a Juju/MaaS/openstack demo there?
<SpamapS> niemeyer: an hour ago we copied all the oneiric charms to precise, but they're not showing up as cs:precise/xxxx ... what criteria does the charm store use to determine if it should import a charm?
<niemeyer> SpamapS: Can you give me a sample of a charm that isn't showing but should?
<niemeyer> SpamapS: If you copied all of them at once, it might be that it's still importing
<niemeyer> SpamapS: I can't check that without help from someone with superpowers
<matsubara> hello there. I'm trying to juju ssh to a running instance but I'm getting this error: Incompatible juju protocol versions (found 1, want 2). Is this a known error/bug? Is there a workaround?
<matsubara> I'm using juju: 0.5+bzr531-1juju5~precise1 fwiw
<m_3> f/win 3
<m_3> SpamapS: so maybe metadata[maintainer]... and have promulgate create-or-add-to a "<charm>-maintainer" group.  Ok, I guess that's simpler than I was thinking
<m_3> s/<charm>-maintainer/<charm>-maintainers/, but keep the metadata field singular so there's a single point-of-contact
<m_3> I'm getting a... bzr: ERROR: Permission denied: "~charmers/oneiric/znc/trunk/" when trying to backport a charm fix to oneiric
<m_3> SpamapS: who were you working with in lp with super powers?
<m_3> (the precise lp:charms/znc pushed just fine though)
<m_3> jcastro: hey, can you send me the final of the charmschool handout?  I wanna make sure the node charm is named / behaves the same... I've used two versions over time
<jcastro> m_3: in your inbox!
<m_3> jcastro: thankyouverymuch
<m_3> It's an aubergine world :)
<m_3> negronjl: hey man, please test out 3-node mongodb replsets on precise when you get a chance... it's front-and-center in the juju handout we'll be using at ODS and I haven't run it on anything but oneiric  (lp:charms/mongodb now points to lp:~charmers/charms/precise/mongodb/precise).
<negronjl> m_3:  I'll do that sometime today
<negronjl> m_3:  Let me finish some MaaS testing first
<m_3> negronjl: thanks!
<m_3> jcastro: crap... typo
<m_3> jcastro: "juju add-unit -n20 hadoop hadoop-slavecluster" should read "juju add-unit -n20 hadoop-slavecluster"
<m_3> jcastro: at least it's not really a big one at all
<jcastro> didn't you review this like 3 times already?
<m_3> I reviewed an early version... the google doc
<jcastro> that was the final draft I sent you, I don't know what's actually on the paper at ODS right now
<SpamapS> m_3: any losa in #launchpad-dev can do it. thedac helped me out
<m_3> SpamapS: thanks
<jcastro> m_3: jamespage reviewed it too
<jcastro> m_3: I am hoping what I sent you is just wrong. :)
<SpamapS> matsubara: re your problem, are you by any chance spawning oneiric machines with 'juju-origin: distro' ?
<m_3> jcastro: no biggie... it's a small one.  We can catch it before the nxt batch gets printed up.  nothing to do before ods
<SpamapS> niemeyer: I tried 'statusnet' and 'subway'
<matsubara> SpamapS, this is my environment.yaml for that env: https://pastebin.canonical.com/64366/. answering your question, no, I spawned a precise machine and have no juju-origin: distro in my environment.yaml
<matsubara> SpamapS, that environment was bootstrapped on 2012-02-29 and have been running since then
<SpamapS> matsubara: ugh, you may have to re-bootstrap
<matsubara> SpamapS, really? how about the services I already have running there? re-deploy them?
<matsubara> well, to be fair, I only have one service running on that env, it's the MAAS jenkins instance
<matsubara> but even so I wasn't expecting that a juju version change would require me to re-deploy my services :-(
<matsubara> should I file a bug about this?
<m_3> SpamapS niemeyer: I just noticed that things are aliased to lp:~charmers/charms/precise/<charm>/precise and not /trunk... is that going to cause problems?
<matsubara> SpamapS, I can get access to the ec2 instance by ssh directly to its public dns address. Would that help?
<SpamapS> matsubara: I don't believe there's any migration code to morph your version 1 zk tree to version 2
<SpamapS> m_3: yes thats likely the problem, DOH
<SpamapS> matsubara: we won't do that once the release is stable.. but during dev re-factoring ZK had to be done unfortunately
<jcastro> SpamapS: hah idea for charm school. How about towards the end we let people who set up their laptops deploy to the cloud right there
<jcastro> SpamapS: like, it's one thing for you to be deploying to it, it'd be sweet if people from the audience can just deploy something
<SpamapS> jcastro: yeah that should work
<SpamapS>     if len(u) < 5 || u[1] != "charms" || u[4] != "trunk" || len(u[0]) == 0 || u[0][0] != '~' {
<SpamapS>         return "", nil, fmt.Errorf("unwanted branch name: %s", name)
<SpamapS>     }
<SpamapS> m_3: curses
<SpamapS> m_3: we'll need to have a special argument of some kind for branch-distro, or update the charm store.
<SpamapS> silly launchpad.. why do pushes store in 'trunk' but branch-distro stores in $series? :-P
<m_3> SpamapS: we could move them again to charms/precise/<charm>/trunk right?
<SpamapS> I just re-promulgated statusnet as 'trunk'
<SpamapS> waiting to see if it appears as cs:precise/statusnet
<m_3> we really could do it manually if necessary... the hard part is the aliases
<SpamapS> Well I'm waiting to see if manually fixes it
<SpamapS> niemeyer: how often does the store pull from launchpad?
 * m_3 didn't realize it was more than a pull-through directly to lp
<m_3> good to know it caches... that'll help debugging stuff later
<niemeyer> m_3: Yep, that won't work
<niemeyer> m_3: "trunk" is the only name pulled from
<SpamapS> looks like statusnet is there now
<SpamapS> so rename/repromulgate should work
<m_3> SpamapS: you wanna do that or shall I?
<SpamapS> m_3: if it floats your boat, go for it!
<SpamapS> m_3: should go in charm tools
<m_3> the repromulgate is the problem for me
<m_3> I'm concerned the aliases will just always push to .../precise branches now though
<m_3> don't we have to have those changed somewhere?
<m_3> hell, lemme just try it on one and see how far I get
<SpamapS> m_3: promulgate changes the alias
<m_3> SpamapS: it won't let me delete the precise branch
<SpamapS> m_3: don't delete it!
<SpamapS> m_3: push --remember, then promulgate
<m_3> SpamapS: ok, that worked.... scripting it now
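A hedged sketch of what scripting that fix might look like, following the "push --remember, then promulgate" recipe above; the charm list and paths are illustrative only.

    # re-home each copied branch at .../trunk and re-promulgate it
    for charm in statusnet subway sbuild; do
        bzr branch "lp:~charmers/charms/precise/${charm}/precise" "${charm}"
        cd "${charm}"
        bzr push --remember "lp:~charmers/charms/precise/${charm}/trunk"   # add --overwrite if trunk has diverged
        charm promulgate        # from charm-tools; repoints the lp:charms/<charm> alias
        cd ..
    done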
<m_3> it looks like we'll have to go in and delete the /precise branches though
<m_3> they're dangling atm
<m_3> (for example, sbuild)
<m_3> oh, wait... it shows it as stacked on /precise
<m_3> SpamapS: ok, I need help... this is pooched
<m_3> at the end of this process we'd have lp:charms aliases pointing to /trunk branches stacked on /precise branches that can't be removed
<m_3> perhaps branch them locally, delete the /precise, and _then_ push/promulgate?  more dangerous
<negronjl> m_3: having issues with juju/precise at the moment.  Once I bootstrap, I cannot connect.  Have you seen this before?
<m_3> negronjl: yes... watch cli version -vs- 'juju-origin'
<negronjl> cli version: 0.5+bzr531-1juju5~precise1
<negronjl> juju-origin: ppa
<negronjl> m_3: ^^
<m_3> negronjl: it's possibly broken again recently... lemme look at my versions
<SpamapS> m_3: there should be no need to branch locally, and I don't mind having the old stacked-on branch there
<SpamapS> m_3: really the stacked stuff is just an optimization
<m_3> SpamapS: ok... we can keep danglers then
<m_3> SpamapS: discussing this in #launchpad-dev atm
<SpamapS> oh good
<m_3> negronjl: creating a new environment... few more minutes to verify
<negronjl> m_3: cool .. thx
<imbrandon> thats one of the things i dont like about hg/bzr , branches in their own dir, why cant they use hardlinks like git :(
<imbrandon> to me thats not really a branch, its a fork when its 100% separate like that
 * negronjl will be back in a few
<_mup_> Bug #983530 was filed: charm store should publish all 'Mature' branches. <juju:New> < https://launchpad.net/bugs/983530 >
<SpamapS> niemeyer: ^^ an idea, I'd appreciate your feedback.
<m_3> win 2
<m_3> geez, I've been leaking irssi commands all day
<niemeyer> SpamapS: Looking
<jcastro> SpamapS: can you blogify your constraints email? It's too cool to not just be on the project list.
<SpamapS> jcastro: will do
<SpamapS> reminds me.. I need to upgrade my blog to precise
 * SpamapS considers doing it with juju
<m_3> ok, three more charm branches to fix...
<SpamapS> jcastro: http://fewbar.com/2012/04/juju-constraints-unbinds-your-machines/
<m_3> SpamapS: we need to not run branch-distro again...
<m_3> it's taking pre-existing branches and stacking them on top of the copied over oneiric branches
<m_3> I don't know a clean way to fix rabbitmq-server for example
<SpamapS> m_3: eh?
<m_3> there's merge history in lp:~charmers/charms/precise/rabbitmq-server/trunk
<SpamapS> m_3: rabbit qas forked
<SpamapS> was forked even
<SpamapS> it shouldn't have been touched
<m_3> so ideally we'd just promulgate that branch to lp:charms/rabbitmq-server
<SpamapS> oh weird
<m_3> but we just dropped lp:~charmers/charms/oneiric/rabbitmq-server/trunk right into lp:~charmers/charms/precise/rabbitmq-server/precise
<SpamapS> m_3: we probably just need to push --overwrite and then rename that one/delete the old one
<m_3> so now the precise .../trunk and .../precise are stacked
<SpamapS> m_3: yeah I see that
<SpamapS> thats bad, mmkay
<m_3> but trunk's not on the bottom, so we can't delete /precise
<SpamapS> m_3: right so we have to --overwrite. I don't know if it will work since it is stacked on tho
<SpamapS> m_3: let me try
<m_3> BTW, while looking at that... one extra part
<SpamapS> my typing is godawful today ugh
<m_3> I brought the .../precise branch up to 17 in hopes that I could do something from there
<SpamapS> well then it looks fine, just rename
<m_3> can't do that b/c there's already a branch of that name
<SpamapS> rename that one too
<m_3> ok
<m_3> we're losing info on the pre-existing precise branch (merge history)
<SpamapS> where?
<m_3> comparing:
<SpamapS> Oh the merge proposals?
<m_3> https://code.launchpad.net/~charmers/charms/precise/rabbitmq-server/trunk
<m_3> with:
<SpamapS> such is life
<m_3> https://code.launchpad.net/~charmers/charms/precise/rabbitmq-server/precise
<m_3> right
<m_3> ok with this particular case... but we wanna not have to do this next time
<imbrandon> SpamapS: if you get a half sec my bash foo is failing me, and i want to do something similar in a charm but cant figure out the quoting ... http://paste.ubuntu.com/933268/    , no rush, just if you have a moment now or later
<imbrandon> or m_3 :)
<SpamapS> #!/usr/bin/env bash
<SpamapS> really?
<imbrandon> heh habit
<SpamapS> you have a platform that doesn't have bash in /bin?
<SpamapS> :)
<imbrandon> i do it for python php etc
<imbrandon> so its just habit :)
<SpamapS> imbrandon: whats the question?
<imbrandon> see the last filename transformation
<imbrandon> its all messed up, truncates the first half
<imbrandon> the one with spaces
<m_3> imbrandon: add a new line 'local crushed_filename=$(basename...)' then reference that variable below
<imbrandon> i tried all diff combos of quotes and pullin hair out
<imbrandon> hrm , not thought of that, kk
<m_3> should reduce the num of "'s you're having to deal with on any given line
<imbrandon> heh yea
<imbrandon> i hate bash for this reason alone, if i could get my head around the spaces better i would love bash scripting probably more than anything else
<m_3> imbrandon: still sucks for functions and scoping though imo
<imbrandon> yea but as far as gen stuff its very very versatile
<imbrandon> and everywhere
<SpamapS> bash scripts longer than 25 lines are just prototypes for perl/python/ruby
<imbrandon> heh true
<m_3> oh, that's impressive... we now have lp:charms/rabbitmq and lp:charms/rabbitmq-server
<SpamapS> m_3: how did that happen?
<imbrandon> i would say php,perl,ruby in that order but yea :)
<m_3> when I renamed the old precise/rabbitmq-server/trunk to precise/rabbitmq-server/old-trunk
<imbrandon> heh
<SpamapS> imbrandon: you are the only one who wants to write CLI php.
<m_3> it popped up on the list :)
<imbrandon> haha no i'm not , i see it daily
<SpamapS> imbrandon: people do it because they have to
 * m_3 narrowly averted religio... uh... _language_ discussion.... whew!
<imbrandon> heh :)
<m_3> ok, so I kinda like the name rabbitmq better than rabbitmq-server :)
<m_3> ... but ours is not to judge
<imbrandon> i've come to embrace most people dont like php, thats ok though it keeps me employed and i'm not super annoyed by it
<imbrandon> frak ok, i'm missing something here, i got the same exact results just using a var now
<SpamapS> m_3: 1:1 package:charm is appealing
<imbrandon> crushed_filename=$(basename "$file" .jpg)-crushed.jpg
<imbrandon> still truncates anything before the last space ina filename
<m_3> remove the ""
<m_3> basename $file .jpg
<imbrandon> pretty sure i tried that
<imbrandon> lemme check
<imbrandon> basename: extra operand `Sing'
<m_3> basename \"$file\" .jpg
<imbrandon> hrm
<imbrandon> kk
<imbrandon> nope, ok i'll let you guys back at it, you were busy, i'll hit the adv bash guide
<imbrandon> thanks though, dont wanna tie ya up for this
<m_3> maybe two steps if necessary.... base=$(basename $file); crushed_filename=${base%%.jpg}-crushed.jpg
<imbrandon> hrm, not done that before
<m_3> that should work _around_ spaces ok
<imbrandon> i'll have to look it up if it works
<m_3> might have to replace basename with ${base/.*\//} (not exact syntax though... and you gotta make sure it's a greedy match)
<imbrandon> ohhhh ok
<imbrandon> i see
<imbrandon> rockin and thats workin
<imbrandon> ty
<m_3> which one?
<imbrandon> first
<m_3> basename and then %% subst
<m_3> cool
<imbrandon> yup
<imbrandon> ok foood time, ty very much
<imbrandon> then to publish the newrelic charm
<m_3> crap...
<m_3> SpamapS: how can I remove an alias without deleting the branch?
<imbrandon>   crushed_filename=${file%%.jpg}-crushed.jpg
<imbrandon>   du -sh "$file"
<imbrandon>   echo jpegtran -copy none -optimize -perfect "$file" -outfile $crushed_filename
<imbrandon>   echo du -sh $crushed_filename
<imbrandon> shit , sorry
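The snippet pasted above still expands $crushed_filename unquoted, which is the same space bug being chased; a hedged, space-safe version of the whole transformation (the loop and the jpegtran invocation are assumed from context):

    #!/bin/bash
    set -eu
    for file in *.jpg; do
        base=$(basename "$file")                      # quoted, so spaces survive
        crushed_filename="${base%%.jpg}-crushed.jpg"  # strip .jpg, append -crushed.jpg
        du -sh "$file"
        jpegtran -copy none -optimize -perfect "$file" -outfile "$crushed_filename"
        du -sh "$crushed_filename"
    done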
<SpamapS> m_3: I don't know.. probably need the API
<SpamapS> m_3: why not just delete the branch?
<m_3> adam's got other stuff stacked on top of it... I can't (and shouldn't) delete it
<m_3> I'm reading through promulgate to see if I can find the right lplib stuff
<m_3> I'm thinking that one's well and truly screwed... it got promulgated twice as two separate names
<m_3> (not me!)
<m_3> I'm sure it happened in the early days of promulgation
<imbrandon> promstrangulte
 * m_3 likes that word better
<m_3> gonna grab food... bbiab
<SpamapS> m_3: ok we'll figure out a way to resolve this later
<SpamapS> hazmat: When you're around, we should change http://jujucharms.com/charms so it breaks up the series more. Perhaps even only show the current active series.
#juju 2012-04-17
<negronjl> m_3:  I'll look into the mongodb/juju/precise thing a bit later ... Can't get juju to work with precise yet.
<imbrandon> hazmat: where does the code for charmworld live on lp ? i peeked at https://code.launchpad.net/charmworld
<imbrandon> hazmat: never mind, i'm an idiot
<imbrandon> err maybe not, i thought i found it
<imbrandon> i guess not though
<hazmat> imbrandon, its not published
<hazmat> SpamapS, sounds reasonable, is there a way to identify the active series on lp?
<imbrandon> ahh that would explain it, just not published or closed ? i dont mind either i was just gonna try and lend a hand with the sorting and maybe upgrade it to bootstrap 2.0 with responsive-ness and make sure it was all in like with the new branding updates like i did on the community sites
<imbrandon> :)
<imbrandon> s/like/line*
<imbrandon> hazmat: so if i can give a hand with it just ping
<m_3> negronjl: I've got 531 locally bringing up 531 ppa on local provider for precise... seems to be running ok so far
<negronjl> m_3:  Can you pastebin a copy of your environments.yaml file ?
<negronjl> m_3: and if you used any odd commands or switches for juju, I'll take those too :)
<negronjl> m_3:  I'll look at it a bit later tonight
<SpamapS> hazmat: yes the distro object will show you
<m_3> negronjl: http://paste.ubuntu.com/933424/ is what I was using from that precise instance to launch other local precise instances
<huwshimi> Hi, I'm trying to deploy the openstack-dashboard charm locally (http://charms.kapilt.com/charms/precise/openstack-dashboard). I have set up my environment for local use, but when I do 'juju deploy openstack-dashboard' it says "Error processing 'cs:precise/openstack-dashboard': entry not found". Do I need to do something special to get it to find that charm?
<imbrandon> try cs:oneiric/openstack-dashboard instead
<huwshimi> imbrandon: Same problem :(
<imbrandon> is there an openstack-dashboard listed in the charm store ? /me looks
<huwshimi> This is the full output: http://paste.ubuntu.com/933781/
<imbrandon> bot sure, you'll have to bug hazmat m_3 or SpamapS when one of them gets in this morning
<imbrandon> not*
<huwshimi> imbrandon: No problems. Thanks :)
<imbrandon> try mysql cuz i know that one works, to make sure your env is right
<imbrandon> afk a few
<huwshimi> yeah, mysql worked
<imbrandon> ok yea its breakage from the switch to precise then likely yesterday and one of those 3 will fix it up here in a few hours i'm sure, they are normally active here pretty soon
<huwshimi> No problems, I can wait
<hazmat> imbrandon, its likely because its not 'officially' released yet.. ie it hasn't been reviewed. it can still be installed from cs:~charmers/precise/openstack-dashboard
<imbrandon> hazmat: ahh
<negronjl> m_3: ping
<m_3> yo
<m_3> negronjl: yo
<negronjl> m_3:  mongodb works :)
<m_3> cool... thanks for checking it out!  just wanted to make sure we explicitly went through the handout/demo examples after promoting everything to precise
<negronjl> m_3: no worries ... glad it works
<jcastro> negronjl: you did test cloudfoundry on precise right?
 * jcastro runs 
<negronjl> jcastro: not yet ...
<m_3> it looks like parts of it are still failing for some reason... probably packages
<negronjl> m_3: parts of cloudfoundry ?
<m_3> http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/ (only a tmp url)
<m_3> unfortunately, it wasn't capturing logs on these initial precise runs
<negronjl> m_3:  I'll work on some troubleshooting as I get time ...
<m_3> I was going through yesterday knocking them one-by-one to green
<m_3> negronjl: cool... thanks
<SpamapS> FYI, pending approval of FFE, I'll be uploading bzr531 to precise later today or early tomorrow. Please give it a good vigorous testing everybody.
<jcastro> SpamapS: hey, busy?
<jcastro> krish is at openstack and he wants to do a live G+ over the air about juju
<jcastro> want to hop in with me?
<jcastro> and/or m_3
<robbiew> jcastro: right now?
<jcastro> yeah he just pinged me on twitter
<robbiew> jcastro: does he know you and clint will be there Thu?
<jcastro> yeah
<robbiew> interesting
<jcastro> wanna hop in?
<jcastro> I want to talk about the charm school and stuff
<robbiew> nope
<robbiew> lol
 * robbiew has to start blueprint wrangling for 11.10...and do some I-am-leaving-my-wife-with-you-young-boys-this-week chores :P
<robbiew> /s/-you-/-two-
<SpamapS> jcastro: sorry I was grabbing an early lunch
<jcastro> no worries he hasn't responded back
<jcastro> he says he'll ping me with a suitable time when he gets there
<jcastro> SpamapS: I was thinking we could just talk about the problems we're trying to fix, etc. Basically, the stuff we talk about every week anyway
<SpamapS> sure
<SpamapS> I should like.. shower so I don't look like a bum
<jono> jamespage, hey, FYI - I had to get all this data off my 1TB drive which is taking a while, and then I can kick off the mirror
<jono> the drive was formatted for a Mac (it was used to record our album), so I will get the data off, format it, and then download the mirror so we have a backup
<m_3> jcastro: hey... I was in a meeting
<m_3> jcastro: any update on time?
<jcastro> no worries, he hasn't given me a time yet
<m_3> jcastro: cool
<jcastro> try to stay open today. (hah!)
<m_3> :)
<m_3> dude, that was a rude awakening this morning!
<jcastro> yeah sorry
 * m_3 open eyes... crawl outta bed... check irc... charmschool in 20mins!
<m_3> I'm gonna go get some breakfast... bbiab
<jcastro> SpamapS: and then we should probably talk about charmschool ODS again
<SpamapS> jcastro: indeed, I have rsyslog written though it has a bug that I'm fighting with
<jamespage> jono, great - I think the guys are pulling one locally as well for redundancy
<jono> cool
<jcastro> SpamapS: I know you're in the zone but gitolite promulgation would be swell as well. Looks like shazzner's fixed all your issues
<SpamapS> jcastro: promulgation needed for demo? :)
<jcastro> that would be a cool demo
<yagey> could someone suggest a charm that installs from a cvs source that I can use as a template please?
<marcoceppi> yagey: like an actual CVS source, or just installs from some form of version control?
<yagey> cvs
<yagey> to install the virgo kernel: http://www.eclipse.org/virgo/download/
<marcoceppi> I don't think we have any that install from cvs source, but it would be similar to if you were doing an install, with regards to the commands used to branch the repo
<marcoceppi> yagey: why not just grab the zip download instead of cvs?
<yagey> ok
<yagey> any examples of a zip based install?
<marcoceppi> yagey: sure, quite a few do
<marcoceppi> let me find one
<marcoceppi> yagey: owncloud, statusnet, roundcube, and a few others grab some form of archive from a remote server for installation
<yagey> thx
<marcoceppi> They all make use of a charm-helper called "ch_get_file" which takes two parameters: a url, and a hash of the payload ( in order to do payload validation )
<m_3> yagey: node-app installs from git... shouldn't be too different to change that to cvs
<yagey> k
<m_3> yagey: but there are more examples out there of ch_get_file
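A hedged install-hook fragment showing the ch_get_file usage marcoceppi describes (a URL plus a hash of the payload); the helper's on-disk location, the download URL, and the hash are assumptions for illustration, so check the owncloud/statusnet/roundcube charms for the real pattern.

    #!/bin/bash
    set -eu
    source /usr/share/charm-helper/sh/net.sh            # assumed location of the sh charm-helpers
    ZIP_URL="http://example.org/virgo-kernel.zip"       # placeholder URL
    ZIP_SHA="<sha256-of-the-zip>"                       # placeholder hash
    payload=$(ch_get_file "$ZIP_URL" "$ZIP_SHA")        # assumption: prints the local path of the verified file
    unzip -o "$payload" -d /opt/virgo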
<SpamapS> yagey: cvs, *wow*
<yagey> ?
<SpamapS> shocking to hear that anybody has cvs anywhere anymore
<SpamapS> svn is well over 10 years old
<SpamapS> bzr is better than svn
<SpamapS> git is more popular than bzr
<SpamapS> so many options
<SpamapS> cvs is.. just so bad
<SpamapS> oh
<SpamapS> yagey: they seem to be on git :)
<SpamapS> yagey: https://github.com/eclipse/virgo.apps ...
<yagey> seems they are spread over git/svn/cvs
<SpamapS> yagey: I'd recommend targeting your charm at a released version to start with. You can add a config option to switch to git later.
<yagey> yes, doing that.  thx
<yagey> ch_get_file is helpful
<yagey> sigh, my connection to ec2 has started just hanging, any troubleshooting advice please?
<SpamapS> yagey: are you sure its the ec2 connection?
<SpamapS> yagey: sometimes doing 'juju -v ...' will reveal more
<yagey> ah, thanks.  claims it's still initializing.  but the host logs show it's finished
<yagey> the juju-admin is failing, unrecognized constraints
<SpamapS> yagey: that is a version mismatch between whats on your machine and what your 'juju-origin' is set to.
<yagey> "juju-admin: error: unrecognized arguments: --constraints-data="
<SpamapS> yagey: most likely you have 'juju-origin: distro'
<SpamapS> yagey: or juju incorrectly guessed that you were using the distro version.
<SpamapS> yagey: dpkg -l juju
<SpamapS> yagey: I suggest putting an explicit 'juju-origin: ppa' in your environments.yaml
<yagey> on my localhost: 0.5+bzr519-1ju  on the EC2 bootstrap 0.5+bzr398-0ubuntu1
<SpamapS> yagey: yikes!
<SpamapS> yagey: that 398 version is the distro version from oneiric
<SpamapS> yagey: I suggest making sure you have the latest from the ppa, and then adding a 'juju-origin: ppa' to the environment
<yagey> ok
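A hedged example of the explicit setting SpamapS suggests; everything except the juju-origin line is a placeholder.

    # fragment of ~/.juju/environments.yaml, under the "environments:" key
      sample:
        type: ec2
        default-series: precise
        juju-origin: ppa          # pin the agents to the PPA build instead of letting juju guess
        control-bucket: juju-some-unique-bucket
        admin-secret: some-secret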
<SpamapS> yagey: also you should consider deploying on precise. :)
<yagey> what is precise?
<SpamapS> yagey: Ubuntu 12.04
<SpamapS> oneiric == 11.10
<yagey> ah
<marcoceppi> Anyone ever seen this before? http://paste.ubuntu.com/934414/
<SpamapS> marcoceppi: no .. weird
<SpamapS> marcoceppi: newer juju has a more robust zookeeper though, so its possible it would have exploded much more spectacularly before
<marcoceppi> Running the command again appears to hang
<m_3> marcoceppi: nope... I'm actually excited to see it... usually a connection loss has to be inferred :)
 * marcoceppi checks ec2 status page
 * marcoceppi runs with -v
<marcoceppi> http://paste.ubuntu.com/934420/
<marcoceppi> it just keeps going
<SpamapS> marcoceppi: zk not started yet
<SpamapS> or failed somehow
<marcoceppi> SpamapS: this environment has been bootstrapped for about 5 hours
<SpamapS> marcoceppi: slight chance also that your local ssh is bombed out somehow
<marcoceppi> *it is the current omgubuntu environment*
<marcoceppi> Though, this might explain why wp can't connect to mysql which is running on that bootstrap
<SpamapS> marcoceppi: ssh directly in?
<marcoceppi> aye, I can
<SpamapS> zk running?
<marcoceppi> I guess this is bad: /dev/xvda1            7.9G  7.5G     0 100% /
<marcoceppi> SpamapS: it's not
<SpamapS> ooohhh fuuuuuudddddddgggge
<jcastro> hah
<jcastro> http://www.sadtrombone.com/
<marcoceppi> There shouldn't be 7.5G of MySQL databases on there
<marcoceppi> yeah, /var/lib/mysql is only 567M
<SpamapS> http://www.youtube.com/watch?v=Yii2mcNq2Zw
<SpamapS> marcoceppi: I bet its the juju agent logs or something silly like that
<SpamapS> or ZK
<yagey> so with ppa and precise, now it refuses to connect:   579: Socket [127.0.0.1:42204] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
<jcastro> m_3: I have an idea.
<jcastro> m_3: feel like experimenting with something?
<marcoceppi> 5.9G	./log
 * marcoceppi moves log to /mnt
<jcastro> daddy needs centralized logging!
<marcoceppi> it's actually /var/log/mysql
<marcoceppi> has a crap ton of 101M files mysql-bin.XXXXXX
<marcoceppi> all have like, json in them
<SpamapS> binary logging
<SpamapS> marcoceppi: flush logs;
<SpamapS> marcoceppi: if you're not going to use replication, you can turn that off
<SpamapS> with log-bin
<SpamapS> marcoceppi: and you can pretty much delete all of them
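A hedged sketch of that cleanup, run on the mysql unit; it assumes client credentials are already available (for example via /etc/mysql/debian.cnf or ~/.my.cnf).

    mysql -e 'FLUSH LOGS;'
    mysql -e 'PURGE BINARY LOGS BEFORE NOW();'    # let the server delete the old binlogs safely
    # if replication is not used, disable binary logging entirely and restart mysql
    sudo sed -i 's/^log_bin/#log_bin/' /etc/mysql/my.cnf
    sudo service mysql stop && sudo service mysql start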
<m_3> jcastro: hey... sure
<jcastro> http://googledevelopers.blogspot.com/2012/04/add-spdy-support-to-your-apache-server.html
<jcastro> We totally should just toss that in a charm right now and tell people about it
<marcoceppi> SpamapS: Okay, that's fixed. For ZK, just `restart zookeeper`?
<jcastro> "I read something cool and I want to try it right now."
<SpamapS> marcoceppi: yeah that should be fine
<SpamapS> marcoceppi: service zookeeper restart btw .. don't call restart directly
<marcoceppi> good to know
<jcastro> m_3: they have packages, etc. And the byobu-classroom charm you just used uses mod_ssl already afaict.
<SpamapS> marcoceppi: and actually .. for oneiric..  stop/start is better
<SpamapS> marcoceppi: in 12.04 I fixed 'service' to do stop/start .. in 11.10 and earlier it uses the broken headed upstart 'restart' command
<SpamapS> which has a few bugs and does weird stuff
<marcoceppi> sweet juju status works again
<marcoceppi> whew!
<marcoceppi> that was fun
<SpamapS> marcoceppi: time to create a troubleshooting.rst
<jcastro> m_3: so I was thinking, find a charm that people would want to play with, toss spdy on it, demo!
<marcoceppi> yup
<SpamapS> what are the available options for spdy?
<m_3> jcastro: are you trying to show this at ods?
<SpamapS> mod_spdy? nginx?
<jcastro> no, this just got published today this mod_spdy snapshot from google
<marcoceppi> SpamapS: will MySQL just keep on chugging with binary logging, or can I set like a ceiling?
<SpamapS> marcoceppi: its designed to be an audit log.. so you should turn it off
<SpamapS> marcoceppi: it will increase write performance by about 5x ;)
<jcastro> my idea is basically tell the "I saw something neat on the internet and want to mess with it, modify a charm, done."
<jcastro> it just so happens this came out right now
<marcoceppi> SpamapS: too bad the database rarely gets touched :)
<SpamapS> marcoceppi: enough to fill the disk ;)
<marcoceppi> SpamapS: it's weird because we don't have this problem on the previous environment
<marcoceppi> Though, we just fixed an issue with apc that was causing php to go apeshit
<m_3> jcastro: dude... I've really gotta get automated tests working on ec2.  lemme see what I can put together in between deployments this afternoon
<SpamapS> marcoceppi: did you set tuning on the mysql service?
<SpamapS> marcoceppi: I believe "fast" turns off log-bin
<jcastro> m_3: I'm not saying do it, I'm just thinking out loud!
<marcoceppi> SpamapS: perfect I'm going to use that instead
<m_3> jcastro: testing got bumped over the weekend to upgrade charms to precise
<SpamapS> marcoceppi: yeah for them, the super fast tuning is the way to go.
<jcastro> m_3: actually let me back up, do you think it'd be a cool demo?
<m_3> jcastro: yeah, of course it's a good story
<marcoceppi> SpamapS: <3 the MySQL config options
<m_3> the faster on the heels of the tool announcement the better too
<marcoceppi> It was rather cool, the database for omgubuntu was down for about 20 mins, and the site just kept serving from cache
<jcastro> marcoceppi: never let em see you sweat
<marcoceppi> I don't think anyone knew, I only knew because I just happened to try to log in to the admin panel
<m_3> marcoceppi: that's awesome
<SpamapS> marcoceppi: hrm, no.. fast doesn't turn it off
<SpamapS> marcoceppi: but actually, it shouldn't be on by default anyway
<m_3> jcastro: the openstack thing is a whole panel?
<marcoceppi> SpamapS: I just turned it off manually. I can add a patch to the charm in a few mins. Should it be a config option like log-bin true|false then by default put the bins on /mnt ?
<SpamapS> ahhh.. $JUJU_RELATION_ID is quite handy
<jcastro> m_3: it's going to be more like shooting the breeze with Krish.
<m_3> jcastro: email sounded like an actual panel
<jcastro> what email?
<SpamapS> marcoceppi: I think it should be a config option, string, default to "", and if you have 'safest' tuning, then it sets it to /var/log/mysql/mysql-binlog or whatever it is now as the default...
<m_3> can't see who else was copied... was a g+ mail
<SpamapS> marcoceppi: and if you have 'fast' or 'unsafe' then it should be left off
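A hedged config.yaml fragment capturing that idea; the option name and wording are illustrative, not taken from the actual mysql charm.

    # illustrative fragment of the mysql charm's config.yaml, under its "options:" key
      binlog-path:
        type: string
        default: ""
        description: |
          Path for MySQL binary logs. Left empty, binary logging stays off unless the
          tuning option is 'safest', in which case a default under /var/log/mysql is used.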
<m_3> jcastro: PM
<marcoceppi> SpamapS: then the config option would be to what? over-ride the default path?
<jimbaker> SpamapS, glad to hear that about $JUJU_RELATION_ID
<yagey_> still can't run juju status with ppa and precise, and suggestions please?  ^C2012-04-17 12:39:17,262:9336(0x7f4e3126c700):ZOO_ERROR@handle_socket_error_msg@1603: Socket [127.0.0.1:57177] zk retcode=-4, errno=112(Host is down): failed while receiving a server response
<yagey_> interesting, the host halted itself and refusing connections
<SpamapS> yagey_: I'd guess that ZK did not start properly
<SpamapS> jimbaker: yeah, its useful because its consistent through the entire relation lifecycle and can be used to run relation-get outside of hooks. :)
<SpamapS> jimbaker: so for instance in the rsyslog-forwarder charm I just wrote, I use it to control the name of the per-remote-host config file I add.. the broken hook can just 'rm -f /etc/rsyslog.d/80-$JUJU_RELATION_ID.conf'
<SpamapS> jimbaker: before, broken hooks were always hard to write because they had no context available.
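A hedged sketch of the pattern being described: name the per-relation config file after $JUJU_RELATION_ID so the -broken hook can clean up without needing relation-get; the file name and rsyslog rule are illustrative, not copied from the actual rsyslog-forwarder charm.

    #!/bin/bash
    set -eu
    case $(basename "$0") in
      *-relation-joined|*-relation-changed)
        remote_host=$(relation-get private-address)
        echo "*.* @@${remote_host}:514" > "/etc/rsyslog.d/80-${JUJU_RELATION_ID}.conf"
        service rsyslog restart
        ;;
      *-relation-broken)
        rm -f "/etc/rsyslog.d/80-${JUJU_RELATION_ID}.conf"   # JUJU_RELATION_ID is still set here
        service rsyslog restart
        ;;
    esac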
<SpamapS> jcastro: speaking of charms that need reviewing.. I have submitted 3 in the last 2 days, and I have 2 more
<jcastro> \o/
<jimbaker> SpamapS, sounds like it worked as intended. very nice indeed!
<bkerensa> jcastro: you did not require signature on the UPS shipments did you?
<SpamapS> 2012-04-17 13:08:06,922 INFO Charm deployed as service: 'mod-spdy'
 * SpamapS waits patiently while his instances start up
<jcastro> don't forget to push!
<jcastro> I want to see how you did it
<jcastro> so I can learn
<jcastro> since I see the first one, and then yours.
<jcastro> hah, can you believe I work at an engineering company? I should be killed.
<SpamapS> lp:~clint-fewbar/charms/precise/mod-spdy/trunk
<bkerensa> =o
<jcastro> \o/
<jcastro> SpamapS: ah I like how you did the arch thing there
<SpamapS> jcastro: they should have an apt source.. or a PPA :)
<jcastro> SpamapS: I think what we do is on the very first instance we see where their package puts the sources.list for their thing
<jcastro> and just put that in the charm
<nteon> okay, so I'm new to juju.  I'd like to use it to deploy our cluster of machines
<bkerensa> nteon: What would you like to know?
<nteon> including a couple of postgres machines
<nteon> bkerensa: can I tell juju about things like which disks/set of disks make up a software raid array, and have them attached and then mounted on an ec2 box?
<nteon> I'm not super clear on where juju falls into the stack I guess
<bkerensa> I dont think juju does that yet but I could be wrong
<jcastro> nteon: I have some videos you might want to check out
<bkerensa> ^
<jcastro> http://www.brighttalk.com/webcast/6793/41933
<nteon> jcastro: awesome, I will check it out in a min
<bkerensa> jcastro is the juju guy
<bkerensa> :D
<jcastro> nteon: we're likely above the disk/array level
<_mup_> juju/apt-proxy-support r532 committed by kapil.thangavelu@canonical.com
<_mup_> update cloudinit renderer for apt proxy url
<SpamapS> hrm, open-port from a subordinate might not work :-/
<SpamapS> 2012-04-17 13:19:58,221 unit:mod-spdy/1: unit.hook.api DEBUG: opened 443/tcp
<SpamapS> Yeah, doesn't work
<SpamapS> opens the port for the subordinate, but the subordinate has no watch for that so it never gets opened for the unit
<SpamapS> bcsaller1: ^^ thoughts?
<bcsaller1> SpamapS: it should be better documented, the principal service controls the networking for the subordinate as its really running the container, but some adjustment there is most likely needed.
<hazmat> marcoceppi, re the connection loss, we guard against that for the agents but not for the cli which is a short lived connection
<nteon> jcastro: jcastro: watching now
<SpamapS> bcsaller1: indeed, subordinates will need to open ports
<SpamapS> I appreciate the conservative take on it.. but subordinates are adding to the primary, so they should definitely be allowed to open ports
<bcsaller1> SpamapS: but exposing a port for the subordinate from the principal works
<bcsaller1> thats just counter intuitive
<hazmat> yeah.. the port stuff is setup to watch against the service unit assigned to the machine, in a subordinate case that distinction is never made
<SpamapS> the use case is quite simple to imagine. I want to add a management daemon that gets queried from outside the network.
<hazmat> bcsaller1, it also defeats the purpose of subordinates..
<hazmat> ie. adding out of band functionality to a principal
<SpamapS> thats.. the entire point of subordinates
<SpamapS> I get why it doesn't do it today
<SpamapS> but its going to be desired
<SpamapS> already is actually :)
<bcsaller1> easily done
<hazmat> SpamapS, i'd like to turn down the logging for the ssl stuff, its a bit inane in its verbosity 4 messages for the same thing
<bcsaller1> SpamapS: mind adding a bug on it?
<hazmat> SpamapS, this is  patch to clean it up http://paste.ubuntu.com/934535/
<SpamapS> hazmat: its being clear about each vulnerability. -l ERROR gets rid of it.
<SpamapS> bcsaller1: I will definitely
<bcsaller1> SpamapS: great, thanks
<SpamapS> hazmat: I'd like it to log 20 messages :)
<SpamapS> but I only had enough text for 3
<SpamapS> ugh
<SpamapS> lodgeit is just broken
<SpamapS> ok.. lets try statusnet
<SpamapS> jcastro: lodgeit seems broken in precise, should be investigated
<jcastro> k
<jcastro> ditto byobu-classroom
<nteon> heh.  I don't really see the point of LTS releases if you're going to deploy a bunch of unpackaged software on top of it.  in that case what is left that is 'stable'? libc and the kernel?
<nteon> (that wasn't in response to anything in particular)
<SpamapS> hazmat: btw, thanks for the 'open-port' script in juju-jitsu. Totally works perfectly!
<jcastro> nteon: this is additive, the charms have an option of doing that if you want to
<jcastro> nteon: so like, if you want to use the upstream version of wordpress instead of what's packaged you can do that, etc.
<SpamapS> jcastro: https://ec2-107-22-22-168.compute-1.amazonaws.com/
<SpamapS> *SPDY*
<nteon> jcastro: well this video talks about how charms can help keep your software up to date when you're running on an LTS release... in that case why not just use whatever the latest ubuntu release is?
<jcastro> I see it's served via spdy, but it's a 404
<jcastro> nteon: you could do that if you want, but people do want a base OS that is very stable and long lived but then have their application on top of it be new
<SpamapS> jcastro: cs:~clint-fewbar/precise/mod-spdy *should* work :)
<SpamapS> jcastro: well, once the store picks it up
<SpamapS> I have to run now
<jcastro> <3
<jcastro> is the add-relation normal?
<SpamapS> jcastro: included a README on how to make it work
<jcastro> ah
<jcastro> thanks!
<SpamapS> jcastro: also open a bug against lodgeit.. its totally busted
<jcastro> k
<nteon> jcastro: yea, I guess I just disagree with that line of thinking :P  but thats neither here nor there
<nteon> is there a way of having a new juju environment pick up a copy of the DB from somewhere?
<jcastro> nteon: it's ok, it's up to you how to run your cloud. :)
<jcastro> which db?
<nteon> any?  right now mysql, in a month or two, postgres
<jcastro> oh, well right now both of those charms just apt-get install mysql or postgres
<SpamapS> nteon: sure, thats how omgubuntu does it, they dump to s3 and new deployments import that dump
<SpamapS> ok gone
<jcastro> oh I see what you mean
<nteon> SpamapS: thanks, let me look at the omgubuntu stuff
<jcastro> http://jujucharms.com/~marcoceppi/oneiric/omgubuntu-wp
<m_3> jcastro: nice, when did we get jujucharms.com?
<jcastro> a few weeks ago, kapilt'ed
<m_3> that rocks
<nteon> yes, I'm looking at db_relation_changed right now
<imbrandon> heya robbiew , I got juju working on OSX , i need to do a bit more cleanup on the pkg and such but it should be ready for semi public testing in the next 24 hours i'd say, i just had to do some very very wrong things to get zkpython to play nice that i need to work out
<imbrandon> but overall its "working" :)
<m_3> imbrandon: cool
<imbrandon> m_3: it will be avail via "brew install juju" too :) charm-tools and such are next :)
<robbiew> imbrandon: nice!
<hazmat> the old charms.kapilt.com redirects transparently to the new domain
<nteon> okay, so I have 2 things I'm interested in: having postgres on a raid, and having a new postgres instance be initialized from an existing db dump.  it looks like the answer is that I would do both with hooks?
<imbrandon> the existing db dump will for sure
<imbrandon> the raid , it _might_ be better to start with a specific ami ? not sure
<nteon> imbrandon: okay, so I would roll my own ami in that case I guess
<imbrandon> nteon: cloud servers are "cheap" in that sense though, i think the more natural way to ensure raid levels of service nowadays would be full server replication
<imbrandon> e.g. run 2 or 3 postgres instances instead of one monster
<nteon> imbrandon: its not space, its speed.  individual ebs disks are too slow
<imbrandon> that way it can be scaled with juju to :)
<imbrandon> nteon: sure i understand that, read replicas are fast
<imbrandon> nteon: and i'm not 100% sold on the idea, just was the first thing that came to mind
<nteon> imbrandon: writes are too slow (and unpredictable) on individual ebs volumes
<imbrandon> nteon: true at high service levels but at that point i'm looking at memcache persistant to the db or RDS like services
<imbrandon> nteon: but yea a raided ami would likely work
<nteon> imbrandon: well, RDS does striping over several ebs volumes in the background for you, you just don't have access to a shell on that machine
<imbrandon> exactly, thus i was talking about doing that too :)
<imbrandon> not trying to sell ya on it though, just was making sure you had thought about the other options ;)
<hazmat> SpamapS, the top link on the browser ('charms') goes straight to the current series now, hopefully that suffices
<imbrandon> and "juju deploy postgres" to scale out more is much sexier than playing with raid imho :)
<imbrandon> but yea, in fact __one__ way to use an existing dump is the way marcoceppi has set up for when we deploy omgubuntu.co.uk , you can find the charm here to see it as an example, there is a lot of omg-specific stuff in there just ignore it, and its mysql but i'm sure you can translate the diffs http://jujucharms.com/~marcoceppi/oneiric/omgubuntu-wp
<imbrandon> nteon: ^^
<imbrandon> nteon: it grabs the latest backup from s3 and uncompresses it and loads it up if the db is currently blank ( new deploy etc )
<imbrandon> but when you're just adding webheads it checks and skips the restore
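A rough sketch of that restore-if-empty flow, not the actual omgubuntu-wp hook: the bucket, file name, and relation keys below are placeholders, and the relation-get key names depend on the interface the charm actually uses.

    #!/bin/bash
    # db-relation-changed style fragment: restore the latest S3 dump only when
    # the freshly related database is empty (i.e. a brand new deployment).
    set -e
    DB_HOST=$(relation-get host)
    DB_NAME=$(relation-get database)
    DB_USER=$(relation-get user)
    DB_PASS=$(relation-get password)

    TABLES=$(mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" -N -e 'SHOW TABLES' "$DB_NAME" | wc -l)
    if [ "$TABLES" -eq 0 ]; then
        s3cmd get --force s3://example-backups/latest.sql.gz /tmp/latest.sql.gz
        gunzip -c /tmp/latest.sql.gz | mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME"
    fi
    # When the service is only gaining webheads the table count is non-zero,
    # so the restore is skipped.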
<imbrandon> jcastro: btw when i went to the doc's today i stopped at microcenter and got me a "real" keyboard :) lol
<hazmat> nteon, we don't have a good way to expose that at the moment re ebs raid volumes, ideally volume management should be in the core
<hazmat> given the available tech, it can either be done inline to the charm at the cost of pushing ec2 creds there
<hazmat> or via an ebs charm that would allow the creation and mapping of the volumes
<hazmat> its really provider-specific functionality, so ideally the framework would abstract that away from the charm's perspective
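For the "inline in the charm" option hazmat mentions, a hedged sketch of what it could look like with euca2ools; the config option names, size, and device are made up, and the cost is exactly the one noted above: EC2 credentials end up in the charm's config.

    #!/bin/bash
    # Create and attach an EBS volume from inside a hook (sketch only).
    export EC2_ACCESS_KEY=$(config-get ec2-access-key)   # hypothetical config options
    export EC2_SECRET_KEY=$(config-get ec2-secret-key)

    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)

    VOL_ID=$(euca-create-volume -s 100 -z "$ZONE" | awk '{print $2}')
    euca-attach-volume -i "$INSTANCE_ID" -d /dev/xvdf "$VOL_ID"
    # Repeat for several volumes and assemble them with mdadm before pointing
    # postgres' data directory at the resulting array.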
<marcoceppi> ahhhh, I can't turn off binary dumping for some reason
<imbrandon> marcoceppi / m_3 : added you both to the "juju-hackers" group on github so you should both have direct commit access to the juju brew formula ( and later charm-tools ) etc , if you have other juju-related brews feel free to put the formulas in with the others
<imbrandon> and i'll make sure they are easy to install etc just like with juju
<imbrandon> if anyone else has a github acct and wants to help with the OSX juju hacking just ping me and i'll add ya
<imbrandon> ( marcoceppi and m_3 are full admins too and can add ppl as needed as well )
<jono> jamespage, you there?
<m_3> imbrandon: cool... thanks, I'll try it out when I get home and have access to an osx box
<imbrandon> m_3: kk, the instructions at http://juju-hackers.github.com/homebrew-juju/ are a little ahead of themselves
<imbrandon> e.g. the pull req isnt done yet, but you can just use brew install ./Libraries/Formula/juju.rb
<imbrandon> for tonight
<jamespage> jono: am now - wassup?
#juju 2012-04-18
<SpamapS> hazmat: yeah I think going to the current series makes the most sense
<_mup_> Bug #984484 was filed: subordinate charms should be able to open ports <juju:New> < https://launchpad.net/bugs/984484 >
 * SpamapS feels so lonely.. oh so ronery
<imbrandon> oh wow, my aws bill last month was only $0.08 , nice
<imbrandon> i think they might have messed up tho lol
<imbrandon> SpamapS: http://aws-s3.assets-online.com/pixeldrop/fun/macpro-apple-online-store.png
<SpamapS> haha
<imbrandon> SpamapS: and what do ya think about apache mod_spdy only on 443 and nginx only on 80 , hrm :)
<SpamapS> mine is hovering around $80 - $100 these days
<SpamapS> I should probably destroy-env soon.. have had 4 m1.smalls running since Friday
<imbrandon> yea normaly mines just under 100
<SpamapS> imbrandon: I love that idea. I think I can make that work just with the subordinate
<SpamapS> imbrandon: I was thinking that I'd change it to just proxy to the local port 80. :)
<imbrandon> SpamapS: cool cool — wont work though, already thought about that
<imbrandon> heh
<imbrandon> spdy needs end to end, cant even have nginx reverse proxy 443 to it
<imbrandon> so no one-second cache love for spdy
<imbrandon> until nginx gains support
<imbrandon> been reading up on it the last hour or so
<imbrandon> but thats my extent lol
<SpamapS> imbrandon: I was wondering about that
<imbrandon> but yea, i am thinking a major reworking of the omg nginx stuff might be in my pipeline soon — i've already begun to formulate it in my head and on my dev server. some of the things we did hastily, while running awesome right now, i've been tuning and refining different ideas to make them so much better
<imbrandon> but it will be between now and uds, and likely i'll grab a hallway session with you and marcoceppi and whoever else and go over it more in person
<imbrandon> then MAYBE implement it after
<SpamapS> imbrandon: if we can make it generic enough, it should be a good 'nginx' charm :)
<imbrandon> right thats my ultimate goal
<imbrandon> is to have a "full stack" but all be little sub charms where it makes sense
<imbrandon> that way parts can be dropped out too, like the php dropped out and rails dropped in
<imbrandon> and still share the good bits
<imbrandon> but i think its gonna take me the next two weeks of iterations and stuff to have it be as compartmentalized as i'd like, and that means it will be a good time to "present" a semi- if not fully-working "stack" at uds between those of us with interest in such things
<imbrandon> and published "public" after that ( like blogged about etc, i'm sure it will be in the charm store much sooner )
<SpamapS> imbrandon: I don't actually see how mod_proxy can't be used
 * SpamapS is trying it right now
<imbrandon> least thats the .plan i've been working from the last week
<imbrandon> SpamapS: let me find the link — on the spdy forum an official dev said it needed end to end to work right
<imbrandon> one sec
<imbrandon> here is where i got that info, it may be old or wrong
<imbrandon> https://groups.google.com/forum/?fromgroups#!topic/mod-spdy-discuss/XCQG4w0plaE
<imbrandon> the orig question is deleted now
<imbrandon> it was there earlier
<imbrandon> but the important part the response is still there
<imbrandon> the orig question was about using nginx to proxy back to 443 spdy and then serve on 80
<imbrandon> or something
<imbrandon> or no, the orig is there, i just had it collapsed
<imbrandon> SpamapS: ^^
<imbrandon> am i reading that wrong ,or are they just wrong, or its old
<imbrandon> from feb 4
<SpamapS> imbrandon: I don't think thats definitive
<imbrandon> perfect, i would LOVE to be able to do it
<imbrandon> in fact my "ultimate" setup that i run on "my" sites, even though i purport nginx, is ...
<imbrandon> apache-based zend server on 8000 , zend php-fpm on 127.0.0.1:9000, and then nginx reverse proxying with a 1 second cache to localhost 8000
<imbrandon> but thats because i actually pay the money to have zend server and zend studio
<imbrandon> although zend server community edition might be nice too in a genericized charm
<SpamapS> why apache zend on 8000?
<imbrandon> arbitraty how i set it up years ago and just keep doing it that way
<imbrandon> no diff between 8080 or whatever
<SpamapS> no I mean why apache?
<imbrandon> just cant or shouldnt use 9000 , as fcgi and fpm use it by default, and dont use 10080-10088 cuz zend server uses those for the "gui"
<imbrandon> oh zend server is built on apache
<SpamapS> fpm seems to scale better than mod_php at this point.
<imbrandon> its all one "package" setup
<imbrandon> yea i use fpm
<imbrandon> with apache
<imbrandon> :)
<imbrandon> ohhhhh i noticed too, apparently tcp fpm WAY outperforms unix socket fpm
<imbrandon> i need to run some benchmarks to confirm but i've seen it multi places now
<SpamapS> what?
<imbrandon> yea seems bas ackwards
<SpamapS> that doesn't make much sense at all
<imbrandon> so like i said i need to see it with my own eyes, but the word is php-fpm over unix sockets doesnt scale as nicely as php-fpm on 127.0.0.1:9000 tcp
<imbrandon> i think omg might be a good test for that theory at some point
<imbrandon> it has the traffic to get "real" numbers and shouldent hinder the site too bad if one is not as good as the other
<imbrandon> but thast another "when i have time to test correctly " type thing
<imbrandon> but yea i use apache because , well 1) if you do it right apache CAN be nearly as nice as nginx, and i get all the .htaccess rewrite stuff out of the box etc , and 2) probably the bigger reason is its what is supported and bundled with zend server and the GUI tightly integrates with it — even though you CAN use other servers with zend php, its not the "package" deal
<imbrandon> but nginx makes a killer reverse proxy on the same box, and even serves some vhosts directly
<imbrandon> with the 1 sec cache. so personally on my server i get the best of both worlds so to speak
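A sketch of the nginx front half of that setup, assuming the Apache/Zend backend on port 8000 described above; the cache path, zone name, and server_name are placeholders.

    # "one second cache" reverse proxy in front of the backend on :8000
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=256m;

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_cache microcache;
            proxy_cache_valid 200 1s;   # cache good responses for one second
        }
    }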
<SpamapS> ok proxy does not work
<SpamapS> definitively :-P
<SpamapS> [Wed Apr 18 01:33:30 2012] [warn] proxy: No protocol handler was valid for the URL /favicon.ico. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
<elmo> SpamapS: enable mod_proxy_http too ?
<SpamapS> oo perhaps I forgot that
<imbrandon> heya elmo :)
<elmo> imbrandon: ey
<SpamapS> oh ok that *was it*
<SpamapS> elmo: works!
<elmo> eh, typing hard
<elmo> imbrandon: hey even
<SpamapS> https://ec2-107-22-22-168.compute-1.amazonaws.com/
<elmo> SpamapS: cool
<imbrandon> :)
<SpamapS> of course, statusnet is broken cause I forgot to configure it
<imbrandon> nice
<imbrandon> can you pipe it to 80 ?>
<imbrandon> i'd love to be able to do spdy over 80
<imbrandon> i only have ssl cert for one domain, and i dont even use that domain  nor have the ssl cert installed atm
<imbrandon> lol
<SpamapS> imbrandon: well no apparently SPDY implies HTTPS
<imbrandon> https://wwwbox.co is the only cert i have
<imbrandon> SpamapS: yea i know, its cuz google is an ass about it, no tech reason it cant work on 80, AND there is more initial latency on https even though spdy makes up for it later
<imbrandon> spdy on 80 would be even faster :)
<SpamapS> imbrandon: I think google is thinking about your privacy :)
<SpamapS> or even about oppressive governments
<SpamapS> self signed works fine
<imbrandon> yea but if i'm gonna do it i dont want the user to have popups or warnings
<imbrandon> so i'd buy one for brandonholtsclaw.com
<imbrandon> if i get a referal code i can pick a godaddy.com cert up for like 16$ a year on sale
<imbrandon> :)
<SpamapS> anyway, I'll have to wrap it up later tonight
<imbrandon> ok cool, ping me when your working on it, i'm very intrested and doing related stuffs
<imbrandon> i'll be on late tonight
<SpamapS> I think we can make it work, and awesome, by simply having the primary charm feed back where it stores static files so mod_spdy can serve that stuff directly... eventually we can go with a 'worker' mpm apache and it should at least be able to try to keep up with nginx's crazy speed
<SpamapS> imbrandon: lp:~clint-fewbar/charms/precise/mod-spdy/trunk
<imbrandon> yea and i want a little help with my first sub too, when ya get back — i got the bones of it done and it might be ready for the store
<imbrandon> but not sure 100% and no way to test for a few hrs
<SpamapS> imbrandon: just have to add a ProxyPass and ProxyPassReverse to the top of files/all-ssl and it works
<imbrandon> kk
<imbrandon> i'll try thaat here in a few moments
<SpamapS> imbrandon: oh, and 'a2enmod proxy proxy_http'
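Pulling the last few lines together, a hedged sketch of the vhost change being described: mod_spdy terminates SSL on 443 and everything is proxied to the app already listening on local port 80. The snakeoil cert paths are the Ubuntu defaults standing in for a real certificate; the actual charm edits files/all-ssl as SpamapS says.

    # requires: a2enmod ssl proxy proxy_http (plus the mod-spdy module itself)
    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:80/
        ProxyPassReverse / http://127.0.0.1:80/
    </VirtualHost>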
<imbrandon> yea i have those already
<SpamapS> anyway, out
<imbrandon> l8tr
<imbrandon> yea apache with only the BARE minimum modules installed can come very close to nginx; nginx still beats it in the very very heavy traffic places, but i'm talking 100+ req a sec or more
<imbrandon> until that point it can keep up great if configured right
<imbrandon> and even more with varnish etc, but i'm loving nginx and kinda dropped varnish and its vcl's from my toolbag in favor of learning nginx better to do the same job and more
<imbrandon> nginx with a 2gb ramdisk ( in a beefy server with like 8+ gb ram ) and pointing it fcgi_cache or proxy_cache to the ramdisk, wow wee
<imbrandon> speed out the yang, it actually becomes a cpu bottleneck
<imbrandon> before anything else
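A small sketch of that ramdisk idea, assuming a 2GB tmpfs; the mount point and sizes are arbitrary examples.

    mkdir -p /var/cache/nginx-ram
    mount -t tmpfs -o size=2g tmpfs /var/cache/nginx-ram
    # then point the cache at it in nginx.conf, e.g.:
    #   proxy_cache_path /var/cache/nginx-ram levels=1:2 keys_zone=ramcache:50m max_size=1800m;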
<jcastro> SpamapS: imbrandon: upstream nginx on twitter says "May" for nginx/spdy.
<imbrandon> just now getting to the point in my devops experience where until now faster was better ALWAYS, but a few of the sites ( like grammy.com ) have challenged that notion for me lately — a slightly slower config that can handle a more sustained load is actually better, whereas even other high-traffic sites i've engineered, like pets.com, never hit that mark
<jcastro> imbrandon: I would say a good way to get ready is add an upstream/ppa flag to the soon-to-exist-should-already-be-in-progress nginx subordinate
<imbrandon> jcastro: yup thats the idea, working on doing that tonight with some help/overlap from SpamapS
<jcastro> so when it's released you just flip the one bit
<jcastro> May is relatively eons from now anyway
<imbrandon> yup yup, i like that idea
<imbrandon> yea but it will get pushed sooner, once more ppl pickup on the new spdy release
<imbrandon> more will have intrest and work on it more even upstream as things pickup
<imbrandon> so it will come out sooner than whats there now
<jcastro> I think it's a cool use case
<imbrandon> its probably gauged on the current level of dev contributions; some hotrod will come in and get it 80% of the way and a core dev will be semi-forced to fix the other 20% early :)
<imbrandon> ( nginx that is )
<imbrandon> thats how it seems in my head atm :) who konws i'm often wrong
<imbrandon> lol
<jcastro> "ok this new thing is out, I want to play with it and stuff, but I don't want to mess with all this stuff. Oh, I'll just redeploy on staging with this new charm and try it."
<jcastro> imbrandon: if this would have been nginx support you know we would have stayed up all night, heh
<imbrandon> but yea i fully intend to do that, my intentions are to "own" the nginx and some other related charms for the long haul *as much as anything is owned in ubuntu — we all contribute across, and i'm sure SpamapS's work and experience with the stack and marcoceppi with his other php stuff, as well as many others, but you know my meaning: not "hoarding" ownership, but more like being the guy that takes the time to research every little bit and test and code etc , not j
<imbrandon> hahaha well with it reverse proxy it can be
<imbrandon> hehe thats what me and clint were just talking about and i think i am gonna be up all night here working on it LOL
<imbrandon> its pseudo support but the result is the same
<imbrandon> :)
<imbrandon> AND we'll be the first, not only in ubuntu but the greater internets from what i can tell searching since you posted about it today
<jcastro> yeah
<imbrandon> btw did you get my feed on there ?
<jcastro> yeah
<imbrandon> yup, i am, just checked — plenty of my adderall prescription left, i'm pulling an all-nighter and gonna get some rough cuts pushed
<imbrandon> lol
<jcastro> imbrandon: hey so something SpamapS and I mentioned over the phone was perhaps doing the apache subordinate thing for charms that right now are using the built in node webserver, etc.
<jcastro> like the more developer-y charms
<jcastro> subway just uses the built in node thing afaict, etc.
<jcastro> I think subway is interesting because it's a 2 way "application" that needs lower latency, and built in ssl for your chat would be a nice win anyway
<lifeless> imbrandon: we have no issues with apache for LP, and we're past 100rps
<jcastro> SpamapS: your statusnet instance is still running
<imbrandon> lifeless: yea but youve taken the time to configure it right, i'm talking about joe blow
<imbrandon> and i am not saying apache falls on its face there,
<imbrandon> only that nginx starts to pull ahead alot more
<imbrandon> at that point
<imbrandon> jcastro: yea thats very very common
<imbrandon> re: the subway thing with apache
<imbrandon> in fact its the principle generally behind things like php-fpm really running on port 9000 serving php, and python wsgi — they are very very different but the general principle is the same
<imbrandon> so sticking apache/nginx in front of subway and putting it on 80, while letting nodejs take direct connections too, is common
<imbrandon> well maybe not for subway but that general type of app
<imbrandon> nodejs is good at longpoll and other stuff http servers arent really made for
<imbrandon> but they are still useful combined, and thats where ppl like us come in — its easy to do all this crap standalone, but making it all work together in the best way is the hard part. things like juju and chef distribute that to more devops folks, but thats recent; until now, learning what we know has mostly been black magic handed down from mentor to mentee like blacksmithing, sometimes over years of working together
<imbrandon> that knowhow is becoming a commodity now, so for ppl like me to stay relevant in 5 more years i need to be on the tech end that makes the things that make it irrelevant, and then do it all again — it goes in 3 to 5 year cycles like this, it has since i've been active online in the mid 90's, and likely since the 50's and early sixties with the advent of unix and SysV+
 * imbrandon gets back to charming 
<SpamapS> jcastro: yes I know, but now it shows the statusnet error on 443 ;)
<SpamapS> imbrandon: FYI, Ryan Dahl created node.js specifically *not* to need any frontend proxy for low latency needing apps
<imbrandon> SpamapS: yea i just ment it was common to do for those type things
<imbrandon> not really nodejs specific
<SpamapS> like, his whole point was that you can write an app that will be highly coherent between requests.. "sessions suck"
<SpamapS> Heh... looks like statusnet is just plain broken
<imbrandon> yea but in type i'm including google dev appserver and python and perl and non_mod_php
<imbrandon> all the kinda non standard plain http on 80 apps
<imbrandon> ;)
<imbrandon> nodejs just kinda got lumped in there
<SpamapS> we really need to wrap up "real" automated tests.. these hooks pass install/config-changed but they're not really error checking
<imbrandon> well some node apps
<SpamapS> ugh
<SpamapS> we need a 'switch-charm' command
<SpamapS> can't improve any cs: charms in place.. :-P
<imbrandon> ?
<imbrandon> hey ok so if i ssh into a juju instance , can i manually fire off relation-get and stuff ?
<imbrandon> to see the actual output, instead of charm-upgrading and echoing it or something to see the output in a log
<SpamapS> imbrandon: you can but you need to know the relation id
<imbrandon> can i get that from status , or ?
<SpamapS> $JUJU_RELATION_ID in the charm
<SpamapS> and relation-ids command
<imbrandon> k
<SpamapS> you also need to set JUJU_SOCKET
<imbrandon> yea thats the error i got was something about the socket
<imbrandon> k
<imbrandon> is it in /tmp or something
<SpamapS> I foret
<SpamapS> I forget even
<imbrandon> kk
<SpamapS> probably /var somewhere
<imbrandon> mmmm
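A rough sketch of doing this by hand, with the caveat that the socket variable name and path here are assumptions (neither is pinned down above); 'juju debug-hooks <unit>' sets all of this up automatically and is usually the easier route.

    # on your workstation
    juju ssh wordpress/0

    # then on the unit, as root
    export JUJU_UNIT_NAME=wordpress/0
    export JUJU_AGENT_SOCKET=/path/to/the/unit/agent.socket   # assumed name; find the real path under /var/lib/juju
    relation-ids db                   # list relation ids for the 'db' relation
    relation-get -r db:0 - mysql/0    # dump what the remote unit has set (flag syntax may differ in this juju version)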
<SpamapS> Ok, this definitely *feels* faster in chromium than in firefox  https://ec2-23-21-39-39.compute-1.amazonaws.com/main/login
<imbrandon> firefox does spdy too
<SpamapS> imbrandon: but its not turned on
<imbrandon> ahh ok
<SpamapS> statusnet is seriously a very weird webapp
<SpamapS> you can't have a nickname of 'SpamapS', only 'spamaps'
<imbrandon> doh
<imbrandon> registration not allowed
<SpamapS> also always builds absolute links
<SpamapS> I hate apps that do that
<imbrandon> ha
<imbrandon> yea me too
<imbrandon> wordpress kinda does, it at least uses whats in the db
<imbrandon> so if the db is changed they all change
<imbrandon> still shitty tho
<SpamapS> I don't think it does that for page references though
<SpamapS> it will redirect you..
<SpamapS> but I can work around that
<SpamapS> this.. this is building the page with <a href="http://...."
<SpamapS> which is *stupid*
<SpamapS> "/foo" would be fine
<imbrandon> Resource interpreted as Font but transferred with MIME type application/octet-stream: "http://ec2-23-21-39-39.compute-1.amazonaws.com/theme/neo/fonts/lato-italic-webfont.woff".
<SpamapS> and make it work properly
<SpamapS> imbrandon: yeah, its not an optimized app
<imbrandon> yea and a href="//amazon.com/blah" would be even better if they NEED absolute links
<imbrandon> putting http{,s}: on links is dumb just for this reason :)
<imbrandon> for like off-site resources. the only time i do it is if the site doesnt have https too, then i try to find another link or just build whatever it is myself
<imbrandon> lol
<SpamapS> imbrandon: right but in this case, they're just pulling in HTTP_HOST and building the whole link
<SpamapS> thats just lazy
<imbrandon> but i havent found one in a while that dont except ga.js and it does just with a diff name, due to a ie6 bug, but if you dont care about ie6 then its all good ( hint: i dont, not even on commercial gigs ) IE8+ only
<imbrandon> SpamapS: my martha plugin grabs http host
<imbrandon> and still works
<SpamapS> but there's no reason to put the host in there
<SpamapS> or even a leading / usually
<imbrandon> well the leading slash i can see
<imbrandon> templates can be used
<imbrandon> and that nessesitates it incase subdir etc
<SpamapS> .. works fine :)
<imbrandon> sure, if you want to put logic in the template to tell what dir you're in
<imbrandon> or just use root-relative links and all are happy :)
 * SpamapS tries subway now
<imbrandon> ( css too , like when using less, you never know where the final product will be )
<SpamapS> imbrandon: I usually use    image_path('foo.jpg')  which does in fact figure out where the request was made and build a relative link.
<imbrandon> besides, ../ traversal in php is code smell for an audit imho :)
<SpamapS> imbrandon: thats how symfony did it anyway :)
<SpamapS> imbrandon: thats not in code, that is going to be emitted in the html.
<SpamapS> and browsers are happy to use it
<imbrandon> yea its similar in drupal and zend too but the end result thats rendered is a root-relative link
<SpamapS> lazy
<imbrandon> smart imho
<imbrandon> less headache
<SpamapS> can't sub-host though :(
<imbrandon> less going back to fix little shit
<imbrandon> sure ya can
<SpamapS> anyway, whats a good charm that exists now that has tons of on screen assets?
<imbrandon> why not ?
<SpamapS> ThinkUp maybe?
<imbrandon> hrm
<imbrandon> omg-wp ? hahaha j/k
<imbrandon> never used thinkup
<SpamapS> sucks down your twitter feeds and facebook and G+ and puts it on one site
<imbrandon> oh nice
<imbrandon> nagios , hrm that really dont have any assets
<imbrandon> stackmobile is the only other one maybe, never seen its gui tho so i duno
<koolhead17> hi all
<imbrandon> ello
<SpamapS> imbrandon: http://ec2-23-22-28-42.compute-1.amazonaws.com/
<SpamapS> simple GUI
<imbrandon> nice tho
<imbrandon> could use a bit of sprucing up but a lot better than most
<imbrandon> i hate "flat" buttons like that, they dont have to go all-out css3-gradient crazy but that is one of my pet peeves. hell, leave it to the ui toolkit if you're gonna make it flat :)
<imbrandon> hell i bitch alot
<imbrandon> ohh and they use bootstrap on their own site, smart guys and gals
<imbrandon> :)
<imbrandon> and h5bp, wow, i'm impressed for a floss web app
<imbrandon> :)
<SpamapS> h5bp ?
<imbrandon> html5boilerplate
<SpamapS> ah
<imbrandon> sorry was reading some of the other code
<SpamapS> <-- ui ignorant by choice
<imbrandon> its best practices from tons of experts , like paul irish etc
<imbrandon> there is a huge community around it, and they put a TON TON TON of thought into every single byte in the boilerplate
<imbrandon> everything is there for a reason and in a certain order for a reason etc etc
<imbrandon> and what makes it so cool is 1) its all run by build scripts etc, no runtime or language deps, e.g. rails php python etc all can use it
<imbrandon> and 2) they explain WHY everything is the way it is
<imbrandon> and CONSTANTLY update it
<imbrandon> like many times a day
<imbrandon> they hang out here on freenode in #html5, and here read a tad bit of this in their issue queue to see how much thought goes into each little part, and this is a tame one
<imbrandon> https://github.com/h5bp/html5-boilerplate/issues/378
<imbrandon> i have hella respect for those guys
<SpamapS> its nice to work with stuff that is clearly maintained out of a sense of duty :)
<imbrandon> yea
<imbrandon> ok so i can forgive the button for all the other goodness this app has :)
<imbrandon> hehe
<imbrandon> SpamapS: know if nginx can directly serve content from the configs ? like i used to use a snippet that would actually serve /robots.txt from an apache vhost config if one wasnt on the filesystem
<imbrandon> hrm
<imbrandon> probably have to look it up /me goes to do that
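nginx can do the same trick straight from its config; a minimal sketch of a fallback robots.txt (the return-with-a-body form needs a reasonably recent nginx):

    location = /robots.txt {
        default_type text/plain;
        return 200 "User-agent: *\nDisallow:\n";
    }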
<SpamapS> hard to call a winner with this one https://ec2-23-22-28-42.compute-1.amazonaws.com/
<imbrandon> ?
<SpamapS> but... mod-spdy to the rescue, no mods to thinkup to get it SSL wrapped
<SpamapS> imbrandon: SPDY doesn't help much with such a tiny well designed site. :)
<imbrandon> hehe right
<imbrandon> well it would if it say had 1000000 little images like flickr
<imbrandon> but yea
<imbrandon> tiny images i tend to base64 encode and put in a data uri anyhow , like the bullets for ul's etc — only exception is the font icons i use
<imbrandon> that way its cached with the css
<SpamapS> Oh, mediagoblin would be a good one
<imbrandon> and only one http req total
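For illustration, the data-URI trick looks something like this in the stylesheet; the base64 payload here is a truncated placeholder, not a real image.

    /* tiny bullet image shipped (and cached) inside the CSS itself */
    ul.fancy li {
        list-style-image: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...");
    }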
<SpamapS> I don't even know if my old mediagoblin charm will work
<imbrandon> heh
<imbrandon> wordpress has tons of images
<imbrandon> if you toss a theme in it
<SpamapS> yeah but I hatezorz our default wordpress charm ;)
<imbrandon> like generic wp + a gaudy theme
<imbrandon> hehe
<imbrandon> hey do drupal
<SpamapS> and it will probably just redirect me to http://
<imbrandon> it needs prom anyhow
<imbrandon> and will be a good test
<SpamapS> I'll be reviewing it on Friday
<SpamapS> or maybe tomorrow
<imbrandon> thats cool but it does have lots of little images
<SpamapS> need some solid apps for charm school on Thursday
<imbrandon> yea you'll be like the 5th hehe , seems like everyone looks at it once and never returns hahahahahahahhaha
<koolhead17> imbrandon, not me :D
<imbrandon> but yea, i've installed it a few times now on micros, it works, still lots of room for improvement :)
<imbrandon> koolhead17: heh
<imbrandon> koolhead17: not sure i've met you :) Hi 0/
<koolhead17> hi imbrandon :)
<SpamapS> imbrandon: so maybe call it "thering"
<SpamapS> while you review it, the phone rings
<imbrandon> :)
<imbrandon> its ok i'm in no hurry , just got it done at a bad time, everyone heading to os conf
<imbrandon> and such
 * koolhead17 rushes 4 owrk
<koolhead17> *work
<SpamapS> alright, sleep time
<imbrandon> gnight
<imbrandon> hrm ok i really need to finish this charm so i can get back to researching this spdy stuff :)
<bkerensa> SpamapS: Do you know why I would be getting lots of merge notifications from Juju's LP :P
<_mup_> Bug #984640 was filed: Unsatisfied constraints are not reported back to the user <juju:New> < https://launchpad.net/bugs/984640 >
 * koolhead17 assumes SpamapS is sleeping :P
<marcoceppi> somehow my ~/.juju folder is 128GB
<marcoceppi> marco@marco-g72:~/.juju$ du -sh
<marcoceppi> 123G	.
<fwereade_> marcoceppi, could it be cached charms?
<fwereade_> marcoceppi, but, yeah, that would be a lot of charms...
<marcoceppi> fwereade_: it looks like local was still bootstrapped, and machine-agent was 120+ gb
<fwereade_> marcoceppi, ha, phew
<marcoceppi> but this laptop has been restarted several times, I didn't think local lxc containers survived restarts
<fwereade_> marcoceppi, still seems like a lot
<fwereade_> marcoceppi, the containers survive, but they don't restart
<marcoceppi> well, this would have been a bootstrap from March 29th
<marcoceppi> So, three weeks worth of machine-agent.log
<fwereade_> marcoceppi, ouch, I wonder if we already have a bug for that
<marcoceppi> I mean, what's the bug there? logrotate should be rotating machine-agent log?
<fwereade_> marcoceppi, I think so, partly; but do you have a lot of zookeeper spam in there as well?
<fwereade_> marcoceppi, we should probably be reducing that as well ;)
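As a sketch of the logrotate half of the fix, something along these lines would keep a runaway machine-agent.log in check; the glob is only a guess at the local provider's data-dir layout and will differ per setup.

    /home/*/cloud/*/machine-agent.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }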
<marcoceppi> fwereade_: I torched the file, since I was at 100% disk space
<fwereade_> marcoceppi, heh, can't blame you ;)
<marcoceppi> From what I remember it was a lot of zk
<fwereade_> marcoceppi, yeah, looking at mine, there's quite a lot
<fwereade_> marcoceppi, and, hmm, quite a lot of it is the machine agent (which does restart) whining that it can't find zookeeper (which doesn't :/)
<Omega> SpamapS: https://cloud.torproject.org/ is only for EC2 though
<imbrandon> SpamapS: wouldnt it be good to use this as the replacement for s3 for finding the metadata? scroll down to the first use case ( not example ) — it looks like what we're wanting to accomplish, almost identically, if we can abstract it http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html#AMI-launch-index-examples
<imbrandon> hazmat: ^^ ( re: dropping need for s3 for the instance info metadata )
<hazmat> imbrandon, launch indexes?
<hazmat> thats for disambiguating metadata when launching multiple instances in a single api call
<hazmat> for the s3 usages its not contextually relevant
<imbrandon> no it sets the data at launch
<imbrandon> and can be called later via the api to figure out which one is which
<imbrandon> see the use case for the 4 mysql servers where one needed to be the "master"
<imbrandon> hazmat: ^^ e.g our bootstrap
<imbrandon> she set "store-size=123PB backup-every=5min | replicate-every=1min | replicate-every=2min | replicate-every=10min | replicate-every=20min" and each server got the chunk of metadata between one pair of pipes, as well as the details of the instance it ended up belonging to
<imbrandon> i think it sets it via the "tags" , and hpc has those too, but not sure if its openstack or just a similar feature
 * SpamapS reads
<imbrandon> GET http://169.254.169.254/latest/user-data  user_data.split('|')[ami_launch_index]
<imbrandon> SpamapS: very bottom is the gist, the top is fluf about stuff we wont use i think
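A shell sketch of the trick imbrandon is pointing at: every instance started in the same RunInstances call sees the same user-data, and each one picks out its own chunk using its launch index.

    IDX=$(curl -s http://169.254.169.254/latest/meta-data/ami-launch-index)
    DATA=$(curl -s http://169.254.169.254/latest/user-data)
    # cut fields are 1-based, launch indexes are 0-based
    echo "$DATA" | cut -d'|' -f$((IDX + 1))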
<SpamapS> imbrandon: thats not actually helpful no. You need *clients* to be able to find it
<SpamapS> not the instance itself
<imbrandon> yea the client run it
<imbrandon> or you me the conductor ?
<SpamapS> the client can't get to 169.254.169.254 :)
<imbrandon> ahh ok i was thinking clients as in instances
<SpamapS> every time you type 'juju status' or 'juju deploy' .. there's a method that has to go *find* the ZK node
<imbrandon> it can be run for any of them tho so it would just need to pick a random one to connect to
<SpamapS> in ec2, the way that is done is by asking the S3 control bucket where it is.
<SpamapS> imbrandon: I think we can do it with groups
<imbrandon> kk i fond that looking for groups docs
<SpamapS> node 0 is always in juju-$envname-0
<SpamapS> and in fact, is always *alone* in that group
<imbrandon> wow, ok yea
<imbrandon> lets just do that then and have it store the data local
<SpamapS> when node 0 becomes HA .. we can just make a group for ZK nodes
<imbrandon> thats super easy it seems
<imbrandon> yea
<imbrandon> hrm yea thats almost too easy
<imbrandon> got to be a catch :)
<imbrandon> heh
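A sketch of the group-based lookup being described, assuming the ec2-api-tools client and an environment named "myenv"; the group naming convention is the one quoted above.

    # node 0 is always the only member of the juju-<envname>-0 security group
    ec2-describe-instances --filter "group-name=juju-myenv-0" \
                           --filter "instance-state-name=running"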
<SpamapS> imbrandon: I do think we should cache that locally, and the SSH key of the instance locally, so that we don't have to keep doing 49 round trips all the time
<SpamapS> imbrandon: we'd only need to re-query if the SSH to the box failed.
<SpamapS> or, once its a REST API, the https ping
<imbrandon> right, yea
<imbrandon> i dont wanna use zk locally though, that seems like overkill — what about sqlite or couchdb ?
<imbrandon> since it is just a cache anyhow
<SpamapS> zk is fundamental
<imbrandon> right i mean on the conductors machine
<SpamapS> no I wasn't going to run zk locally
<imbrandon> e.g. wher juju status
<SpamapS> I'm saying, cache *the lookup of the node 0*
<SpamapS> for status we've talked about just having a 'statusd' running along side the provisioning agent that keeps an up to date yaml of status and we can just spit that back at the user when they request status
<imbrandon> yea. thats where i mean a small cache of info in like sqlite or similar, no biggie if its dropped or dumped, only keeping that info so we dont have to re-look it up
<imbrandon> 49 times
<imbrandon> :)
<SpamapS> status is *slow*
<SpamapS> because it basically walks the entire ZK tree from your machine
<imbrandon> like in ~/.juju/$env-datacache.db
<imbrandon> and if its not there we just got to do the full round again
<imbrandon> etc
<imbrandon> think of it like html5 local browser storage
<imbrandon> thats how i see it used
<SpamapS> imbrandon: but you want the up to date one, so its not enough to just cache it
<imbrandon> if its there cool, if not ok lets do the expensive looksups
<imbrandon> well yea, i'm simplifying it here a bit, there would need to be some checks to make sure its not stale
<imbrandon> etc
<imbrandon> but the general idea
<SpamapS> which is best done with a small daemon watching the entire ZK tree and keeping a status yaml up to date.
<imbrandon> heh well i'm thinking of juju clients everywhere, e.g. iPads, where thats not feasible
<imbrandon> as well as my dev machine that may or may not be on all the time
<SpamapS> imbrandon: that daemon runs on the provisioning node(s). So your client just asks the daemon for its yaml.
<imbrandon> or be one of 5 i use to manage the instance — think about a small team all with juju constantly pinging zk for updates
<imbrandon> SpamapS: yea, and i'm only talking about the local storage of that yaml
<imbrandon> thats alll
<SpamapS> to what end?
<imbrandon> i think we're just criss crossed here
<imbrandon> i figured there would be more data than a flat file would be useful for
<SpamapS> basically what I'm saying is, we can have a materialized view of the status-important bits of ZK
<imbrandon> and sqlite or couch can also be easily used by other clients like a js web interface or whatever may come up down the road, without writing yet another parser
<SpamapS> if you want to cache that, so its available offline, cool.. but.. I think offline juju is a long long way from being a reality. :)
<imbrandon> not really offline, but not needing to make a req for every tidbit of info if its just status info from 5 minutes ago
<imbrandon> or something
<SpamapS> imbrandon: so you're saying you want to write a client that doesn't need to know ZK. You and everybody else. A REST API to replace client<->ZK direct access is *very* high on the priority list.
<imbrandon> and update it in the bg
<imbrandon> yea
<imbrandon> yea
<imbrandon> :)
<imbrandon> ok i dont feel so dumb then :)
<SpamapS> We just have to help the go guys get done fast so we can crank up feature dev again :)
<imbrandon> right
<imbrandon> i'm wiling to help where ever, i got not much going on but juju until after uds :)
<imbrandon> found out today that i might be relocating to the bay perm too
<SpamapS> imbrandon: wezt coast is the bezt coast
<imbrandon> yea i've been out there a few times, and i lived in Reno NV for a few years
<imbrandon> so i was on the coast alot then, went to sac alot for concerts
<imbrandon> :)
<imbrandon> i actually loke boths coasts . NYC is probably my fav, but only because my office was on 19th and 8th in Manhattan
<imbrandon> but yea either coast is cool, but i always end up back here in KC
<imbrandon> ugh
<imbrandon> :)
<SpamapS> Sac doesn't count as the coast :)
<SpamapS> In fact, its on the other side of all the faults.. and when CA falls into the pacific, SAC will be the new SF
<imbrandon> when your born in KC and lived here till 18, moved all over the us for IT work for 10 to 12 years then back to KC, sac counts to us :)
<imbrandon> lol
<imbrandon> hell reno and taho count to us :)
<imbrandon> lol
 * SpamapS just realized he is *3 days* behind on inbox 0. *damnit*.
<jono> Daviey, jamespage just to let you know, I have about half of the mirror downloaded, I will be heading out in about four/five hours and I doubt it will be finished by then
<jono> as such, I may only have it ready for tomorrow morning (I will probably need tonight to download it)
<SpamapS> jono: drive to Mountain View and ask google if you can d/l it from their datacenter :)
<jono> SpamapS, hah
<jono> I need beefier internet it seems
<jamespage> jono: OK - I'll see where the local mirror download here has got to
<jamespage> it was running overnight so may be good...
<jono> jamespage, cool
<imbrandon> jono: apt-mirror ? hehe
<rigved> hi everyone. i am testing juju on my local machine. when i try to deploy to a local lxc, i get the error: "No repository specified". what is the default repo name that i should put here?
<imbrandon> jono: for real tho if you use apt mirror you can grab just the arches you need and no source packages if you dont need em, its like 30gb per arch for bin only + noarch packages
<jono> imbrandon, we are doing that
<jono> I am downloading 72GB
<imbrandon> jono: but i'm a tad bias as i'm upstream tooo so take it with a grain of salt :)
<imbrandon> nice cool cool :)
<imbrandon> rigved: put it in where ? you shouldnt have to , other than specifying if its ppa or distro
<imbrandon> but thats the extent of repo choice iirc
<jono> :-)
<rigved> imbrandon: i am using this: https://juju.ubuntu.com/docs/getting-started.html. in the local environ section, it says that i have to put local: before each service name.
 * imbrandon needs to move apt-mirror off sourceforge to github someday soonish, been talking about it for a year
<imbrandon> right, local would be where the word sample is
<rigved> if i do not put anything, the lxc containers never start. so, i am now trying with local: but it gives me that error.
<imbrandon> in those examples
<imbrandon> well local is the env name, so you would use it like "juju bootstrap -e local"
<imbrandon> to begin
<imbrandon> ok, so lets back up a tad, where did you start, and can you pastebin your environments.yaml to paste.ubuntu.com ?
<rigved> imbrandon: ok. one moment
<imbrandon> kk
<imbrandon> and after that i'm assuming you installed the pre req, right ? e.g.
<imbrandon> sudo add-apt-repository ppa:juju/pkgs
<imbrandon> sudo apt-get update && sudo apt-get install juju
<rigved> imbrandon: http://paste.ubuntu.com/935774/
<rigved> imbrandon: i thought i did not need to do that as i am using precise.
<imbrandon> not sure if its 100% in sync yet, i would still use the ppa personally
<imbrandon> and also here are the other deps for lxc
<imbrandon> do this while i look at your config
<imbrandon> sudo apt-get install libvirt-bin lxc apt-cacher-ng libzookeeper-java zookeeper juju
<rigved> imbrandon: oh ok. i will add the ppa.
<rigved> imbrandon: yes. i installed all those.
<imbrandon> kk
<rigved> but not using the ppa.
<imbrandon> also change juju-origin: distro
<imbrandon> to juju-origin: ppa when you do
<imbrandon> and if you havent rebooted since you installed the lxc stuff — yea, unfortunately its like windows on this one, gotta reboot for the network to fully work right
<imbrandon> so if not, do that too and i'll still be here; we'll walk through getting ya up on the first one, then you'll be good — looks like you're 90% there
<rigved> imbrandon: ok. will do. i did reboot as suggested in the docs
<imbrandon> kk good
<imbrandon> ok when you're ready for the next bit let me know
<imbrandon> gonna grab a soda ask 1 min
<imbrandon> afk*
<imbrandon> btw i would change "sample:" to something more memorable too, like ummm localtest or something , but not required , and the rest of your config looks good for just one env and local
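For reference, a sample stanza along the lines being discussed (local LXC provider, renamed from "sample:", juju-origin switched to the ppa); the values are placeholders and the authoritative key list is in the getting-started docs linked earlier.

    environments:
      localtest:
        type: local
        data-dir: /home/rigved/juju-local
        admin-secret: some-long-random-secret
        default-series: precise
        juju-origin: ppa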
<rigved> imbrandon: ok. added the ppa. dist-upgrading now.
<imbrandon> kk
<rigved> imbrandon: ok.
<marcoceppi> rigved: depending on your network, it can take a few minutes for the first instance to pop up on lxc
<imbrandon> yea , as in mine on fairly decent cable took about an hour
<imbrandon> for the very first one
<imbrandon> after that its much faster
<imbrandon> marcoceppi: hi 0/
<marcoceppi> \o
<rigved> marcoceppi: yes i know. earlier, when i did juju status, it showed mysql and wordpress instances were pending. i left it like that for abt 2 hours. still it showed pending. also, i checked with nethogs. there was no network activity. also, htop did not report any activity.
<imbrandon> rigved: it can take a very long time the first time and you cant reboot in the middle or you have to destroy and start over
<imbrandon> rigved: if you got that far your config and setup are good ( the ppa wont hurt, its nearly whats in precise )
<imbrandon> you just need to bootstrap one of anything, mysql etc etc, just something and let it 100% get done
<imbrandon> then move on , but leave it in the background and do other crap — you'll kill yourself waiting on it
<imbrandon> and after that first one, even with destroys etc all the rest are fairly fast
<rigved> imbrandon: hmmm. ok. so, i have finished updating to the ppa. now, should i start. bootstrap, mysql and then wait before moving on to wordpress?
<imbrandon> well have you
<imbrandon> rebooted since you started the wp and mysql
<rigved> imbrandon: not yet.
<imbrandon> if so they are dead and you would have to destroy them
<imbrandon> ok then no
<imbrandon> just check status every 30 to 45 min
<imbrandon> and it will eventually get to ready
<rigved> imbrandon: ok. destroyed. starting a-new, with changed juju-origin to ppa.
<imbrandon> no idea why its so slow, i know its not 100% network, but yea the first one is ungodly slow
<imbrandon> kk
<imbrandon> yea just do one too, in case they are fighting for resources
<imbrandon> for the first one
<imbrandon> not certain thats the case but it wont hurt, and the second one will take minutes once the other is done
<marcoceppi> rigved: there's a log you can tail to watch for activity (and breakage) during local deployments
<marcoceppi> Let me see if I can find the path
<rigved> imbrandon: ok. so, i typed the deploy command for mysql.
<rigved> imbrandon: juju status shows pending.
<rigved> marcoceppi: is it juju debug-log ?
<imbrandon> rigved: and just an fyi about the local ones: say you deploy and leave it in the background and forget, and reboot tomorrow — once booted, the env will look ok but it wont start and wont be right; the only way to recover is destroy and redeploy after a reboot
<imbrandon> but thats only on local
<rigved> imbrandon: ah. ok.
<imbrandon> rigved: juju debug-log is good to have open too, but i think he means another one
<imbrandon> but yea i'd keep a term or screen session with juju debug-log in it off to the side to check once in a bit
<marcoceppi> juju-debug starts a byobu session already
<imbrandon> ahh i'm normally always in one already so never paid attn
<marcoceppi> rigved: it's machine-agent.log (I believe) buried in your /home/administrator/cloud folder. I don't have my precise laptop with me
<imbrandon> marcoceppi: whats the email addy you want me to use for newrelic? i'm gonna add you as an admin on the ohso acct so you can see all the historic data too, not just those 30 minute graphs on my blog
<imbrandon> @ubuntu one ?
<marcoceppi> imbrandon: marco@ceppi.net
<imbrandon> kk
<marcoceppi> I keep forgetting I have the @ubuntu one, and I forget where it even routes
<imbrandon> look for a newrelic info in a few min, they send login key to let you set your own pass and stuff
<imbrandon> lol
<rigved> marcoceppi: ok. got it. i'm tailing it now. it says container started. last line is "Started service unit mysql/0"
<imbrandon> i have like 11 emails i normally use, all going to one gmail business account
<marcoceppi> rigved: that's good news, what's juju status show?
<imbrandon> ( marcoceppi btw it routes to your promary email addy on LP too, so just change that to whatever you want it to route to )
<imbrandon> primary*
<rigved> marcoceppi: still shows: "agent-state: pending"
<rigved> marcoceppi, imbrandon: here's the full output of juju status: http://paste.ubuntu.com/935804/
<imbrandon> yup that looks right
<imbrandon> i'd say give it 3 or so hours tops, depending on your hardware and nic
<imbrandon> its gotta download an ubuntu image, then boot it , and update and install software, THEN run the hooks and stuff for the charm ( this first time )
<imbrandon> like i said i dont know exactly how long mine took, but i'm on fairly fast cable, and i'm on a quad core i7 2.4ghz with 8GB ram and a ssd+hdd in this mac mini, and it took the better part of an evening , like i started late afternoon and it was done about bed time
<imbrandon> but now its quick to drop a new one etc
<imbrandon> rigved: if you're wanting to kick the tires before it gets done, try an amazon t1.micro — they give you a free linux and a free windows one with enough hours to run constantly all month
<imbrandon> 750 each i think
<imbrandon> or you can run 2 linuxs for just a few hours etc
<imbrandon> free
<imbrandon> micros are definitely not ideal but will let you poke at it while the lxc finishes
<imbrandon> just add a second stanza to the environments.yaml
<rigved> imbrandon: ok. i have an old dual core with a 2 Mbps line. let's see. it's getting late here. so, i'll just leave it for the night.
<imbrandon> from sample: on down
<rigved> imbrandon: yes. i'll try amazon too later.
<imbrandon> and then use"juju something -e name" to pick what one to do the command on
<imbrandon> name being what ever you put in the "sample:" spot
<imbrandon> kk
<rigved> imbrandon: ohh ok.
<imbrandon> so you can have more than one env going at a time
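In other words, once environments.yaml has more than one stanza, -e picks the target, e.g. (stanza names here are just examples):

    juju bootstrap -e localtest
    juju status -e localtest
    juju bootstrap -e ec2test
    juju deploy -e ec2test mysql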
<rigved> imbrandon: does juju work with some other cloud providers? like rackspace?
<imbrandon> anyhow yea, this week is kinda nuts, a lot of ppl are out for the openstack conf
<imbrandon> but a lot are still around too, so if ya run into more issues or that doesnt finish
<imbrandon> then someone like marcoceppi or me or tons of others are regularly here
<rigved> imbrandon: cool. i tried the #ubuntu-cloud channel earlier, but no one was there. more people here. but as i understand, this channel is for juju devs, not support.
<imbrandon> yea the dual core might be hurting you more than the line, iirc the base image is less than 100mb
<imbrandon> i think
<imbrandon> its the same lot of us, here and there
<imbrandon> just a diff name, and #juju-dev is where more of the core dev-ish stuff happens; a lot of support still happens here if you're willing to work at it a bit and dont just want someone to do it for ya — the devs love first-hand bug reports when they have the time to actually work with ya on them, the next 2 weeks that might be dicey but generally its not bad
<imbrandon> speaking of, i got something i started here i need to finish up or i'm gonna be a liar
<imbrandon> hilight if ya need something :)
<imbrandon> oh and marcoceppi you should have mail
<imbrandon> lemme know if you got any probs getting in, you have full admin, not that there is much to change etc but just in case
<imbrandon> that account can deploy to as many apps too so if we want to set up a sep one for staging or something someday we can
<imbrandon> jcastro: where is the juju mailing list
<SpamapS> lists.ubuntu.com
<rigved> imbrandon, marcoceppi: i got it working! the culprit was my firewall. so, i just disabled it and started fresh. this time, the mysql unit took only a few minutes to start up. now, continuing with wordpress...
<imbrandon> nice
<imbrandon> rigved: :)
<rigved> imbrandon, marcoceppi: thanks for your help! :)
<imbrandon> jcastro: sent, sorry i had to signup for the list and everything , i thought i was on it but i guess not,i'm on so many damn email lists
<imbrandon> rigved: no worries , yw :)
<imbrandon> rigved: also if yor firewall setting seem fairly common
<imbrandon> you might make a not of that on the wiki to warn others
<imbrandon> :)
<imbrandon> note*
<imbrandon> SpamapS: hah just catching up on the list — you think sru is cumbersome ? that used to be one of my fav areas, doing srus and backports
<imbrandon> SpamapS: so i'll fulfill the dirty-work role for that at least till 12.04.1 since i dont mind doing it anyhow, it will be a good primer to get me back into the old flow
<SpamapS> imbrandon: I'm on the SRU team. It *should* be more cumbersome :)
<imbrandon> i am as well, and swat and backporter, that was like 60% of my ubntu time was doing that
<SpamapS> imbrandon: the policy is clear, a small patch that does one thing and is verifiable
<imbrandon> SpamapS: i thought you ment un-needely so
<SpamapS> imbrandon: I prefer to go the micro-release exception process
<SpamapS> where you can just take whatever upstream says is bugfix-only
<imbrandon> SpamapS: yup yup, probably one of the only, infact i'm positive the old policy i helped creat in ubuntu :)
<imbrandon> olny*
<imbrandon> gawd
<imbrandon> and i even got a new keyboard
<imbrandon> kitterman still doing sru's too ?
<SpamapS> when you say "doing" srus
<SpamapS> do you mean uploading them, or approving them?
<imbrandon> at the time i think me him and dholbach were the only ones that took em seriously , i do like that -backports is on by default now though, its a good out for non sru worthy changes
<SpamapS> Because when I joined in april 2010, only pitti was doing the approving
<imbrandon> SpamapS: reviewing and approving , then uploading
<SpamapS> Ok no the policy is different probably now
<imbrandon> SpamapS: universe
<SpamapS> ubuntu-sru members have to approve from the queue, all, not just main
<imbrandon> and yea pitti approved the main ones
<SpamapS> and AFAIK, there are only 5 members of that team, only 3 active.
<SpamapS> since ScottK asks me to approve his SRU uploads, I can only assume that no, he is not on the ubuntu-sru team
<imbrandon> yea me and kitterman handled universe and pitti main , but there was mucho overlap and we all kinda worked as one person taking "days" to do them
<imbrandon> e.g. it was scott's day then mine etc
<imbrandon> SpamapS: he was at one time, maybe not anymore
<SpamapS> perhaps
<SpamapS> I have not been a good SRU team member lately.. need to go through the queue at some point
<imbrandon> SpamapS: but yea me and scott and pitti and dholbach came up with what was the old policy, i need to go look it over
<imbrandon> mostly cuz scott wanted to do the clamav exceptions
<imbrandon> and no one was doing any of them at the time
<imbrandon> and there was no clear process
<imbrandon> so we made one :)
<imbrandon> and yea this was 6.06
<imbrandon> so it likely has changed
<imbrandon> jesus crimany , why am i on some of this crap, i've never touched alot of this
<imbrandon> https://launchpad.net/~imbrandon/+participation
 * imbrandon mumbles something about Launchpad
<imbrandon> SpamapS: looks like the backport team and the security team for main and universe are the only relevant ones , i'm not sure we had an LP team back then but it was pitti and — actually the more i think about it — one other person, mdz maybe, doing main approvals, and me and scott and dhol doing the universe ones
<imbrandon> but yea i'm not so much concerned about the ability to approve them, was more of a "hey i'll review and ack them or prepare and get them ack'd as needed if no one else wants to cuz i dont mind that kinda work" offer :)
<imbrandon> baby steps :)
<imbrandon> kees, thats who
<imbrandon> now that i think about it
<imbrandon> pitti and kees :)
<imbrandon> add_header Cache-Control "public, must-revalidate, proxy-revalidate";
<imbrandon> ugh
<SpamapS> Ok so I'm inspired to improve on debug-hooks
<SpamapS> I think we should build in the way to push your fixes back from debug-hooks into the charm
<SpamapS> And I think we should have like, a 'create charm' that starts by spawning a node and running debug-hooks with 'install'
<SpamapS> so you write install until it is "correct", then save it.. then write config-changed until its correct, then save it.. etc. etc.
<marcoceppi> Would Travis-CI make a good charm?
<imbrandon> i'm not real sure, it has its own kinda charm + puppet and vagrant
<imbrandon> so likely
<imbrandon> but alot of overlap
<imbrandon> marcoceppi: would make a nice project though, but not a quick one
<imbrandon> marcoceppi: iirc when i looked up their process ( its all spelled out on their wiki ) they make the vm images in virtualbox and then use vagrant and puppet to deploy them ( like we do with juju ) on demand
<imbrandon> to a host
<imbrandon> and then they take commands from a .travis.yaml in the main github dir
<imbrandon> that tell it what to install and test like our hooks
<imbrandon> so yea, its a cobbled-together system made out of all those parts before juju or anything like it existed
<imbrandon> but they are github fans and all the base images are ubuntu
<imbrandon> so we might even get them to use it officially if we did it well enough
<imbrandon> had never thought about it though as its a big project thats got multiple moving parts, i'd say akin to a mini openstack, its that many parts and variables
<imbrandon> they are awesome about documenting the whole setup though and iirc have a freenode chan too
<imbrandon> would be cool to untie it from github
<rigved> imbrandon: ok. so everything is running fine. ok, i will add a note to the wiki about the firewall.
<rigved> thanks again! bye
<imbrandon> np, glad its working for ya
<imbrandon> marcoceppi: you around ? there is an askubuntu question that i know half the answer to and i know you know the rest cuz its in omg — got a sec to clarify it with me so i can get myself some more ask ubuntu points hehe ( not even to 20 yet, lol )
<marcoceppi> link?
<imbrandon> http://askubuntu.com/questions/98588/juju-and-keys-for-multiple-administrators
<imbrandon> i havent re-answered it yet, its possible now since SpamapS originally answered
<imbrandon> about having multiple ssh keys like we do for all of us
<imbrandon> on omg
<marcoceppi> Just look at authorized-keys: key in the environment stanza
<imbrandon> yea i got that part, where is the hook part
<imbrandon> in install ?
<imbrandon> or is one not needed , just that
<imbrandon> and it "transfers" when being used
<imbrandon> ( i konw it dont stay )
<marcoceppi> It does the key setup on bootstrap, it's part of the actual juju core
<marcoceppi> and subsequent deploys
<imbrandon> ahh rockin, thats what i needed, ty
<marcoceppi> yeah, it's not charm specific
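For the Ask Ubuntu answer being discussed, a hedged example of that stanza key with more than one admin's key; the keys are truncated placeholders, other required provider settings are omitted, and this assumes the value is written through to authorized_keys as-is so newline-separated keys work.

    environments:
      myenv:
        type: ec2
        # ...other required provider keys omitted...
        authorized-keys: |
          ssh-rsa AAAAB3Nza...truncated... brandon@example.com
          ssh-rsa AAAAB3Nza...truncated... marco@ceppi.net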
<imbrandon> ahh ok i thought you had some extra magic in there
<imbrandon> that rocks, ok now i can get above 20 copper maybe :)
<SpamapS> note that ssh key management needs way more thought
<SpamapS> we need to make it something that is updated on all machines when it is changed in the env
<marcoceppi> SpamapS: juju -e <env> add-key "ssh-rsa ...", juju -e <env> add-key -f ~/.ssh/id_rsa.pub, juju -e <env> add-key <LP-ID> would be sweeeeeet
<SpamapS> marcoceppi: exactly
<imbrandon> okies go vote me up and vote that SpamapS guy down :) HAHA! no really go vote me up tho
<imbrandon> :)
<marcoceppi> and I *guess* list-keys and remove-key would be pretty cool to have
<SpamapS> marcoceppi: we can get it with subordinates now.. been thinking about creating a charm for doing mass execution.
<marcoceppi> imbrandon:  noooooooooo
<SpamapS> I'm still not entirely happy with putting everything in zookeeper. :-P Like, its communication isn't even authenticated.. so.. its a huge problem.
<marcoceppi> you made it a community wiki, lol
<marcoceppi> You don't get rep from a community wiki answer
<imbrandon> oops
<imbrandon> no idea
<marcoceppi> Delete and re-add it
<imbrandon> k
<marcoceppi> you can't un-wiki something
<marcoceppi> oh wait, moderators can unwiki things now
<imbrandon> lol
<imbrandon> already deleted
<imbrandon> new one posted
<marcoceppi> mm, saw
<imbrandon> woot, i can chat now
<imbrandon> not like i need another place to talk
<imbrandon> but i like the points
 * imbrandon looks for other low hanging fruit
<marcoceppi> imbrandon: http://askubuntu.com/unanswered/tagged/?tab=newest
<imbrandon> SpamapS: do you want to be added to the newrelic account to peek in on the ohso data now and then ? before i close out the tab, i sent invites to the other fellas
<SpamapS> no
<imbrandon> kk
<SpamapS> thx
<jcastro> SpamapS: I'd like to bring up multi-person juju things during UDS sessions
<jcastro> I don't like copying and pasting environment stanzas around, heh
<SpamapS> yeah
<SpamapS> jcastro: environments.yaml is supposed to only be the bits you need to find your juju environment.
<SpamapS> jcastro: the other stuff is all hacks.
<imbrandon> imho that is part of the env
<imbrandon> but i look at it like its a one time setup
<SpamapS> the SSH keys should be part of the bootstrap commandline, and then we need commands to manage the keys in the running env.
<marcoceppi> SpamapS: +1
<imbrandon> like the bootstrap pkgs
<imbrandon> :)
<jcastro> SpamapS: do I want something like "juju -ethisenvironment add marco"?
<imbrandon> marco@lp marco@sf.net marco@github
<imbrandon> plz think of the kittahs
<SpamapS> jcastro: well, I'd say 'marco_id_rsa.pub', but yeah
<marcoceppi> jcastro: I recommended this earlier: http://paste.ubuntu.com/936048/
<jcastro> well, anything that doesn't involve pasting in a huge string
<SpamapS> imbrandon: please, oh please, make a second implementation of an SSH key listing service, and we will add it to ssh-import-id :)
<imbrandon> ok
<marcoceppi> github would be a good second IMO
<marcoceppi> for ssh-import-id
<imbrandon> should be fairly easy with another similar project i got on google app engine, even in python
<SpamapS> Yeah, presumably they already have a ton of keys
<imbrandon> yup
<imbrandon> and an api :)
<imbrandon> i'm on it later with my energy for the day, that does sound fun
<imbrandon> you know too
<SpamapS> the API we need isn't really an API... https://something/~someuser/+sshkeys
<imbrandon> someone should check out the branch of code for ubuntuwire on lp
<jcastro> so spoiled by ssh-import-id, heh
<imbrandon> i wrote ssh-import-id like 3 years before that one, its in the ubuntuwire.com bzr repo on lp, just no one knew about it i guess
<imbrandon> heh
<imbrandon> and see if any of it can be merged in
<imbrandon> ;)
<SpamapS> jcastro: you in SFO now?
<jcastro> SpamapS: 2 hours out.
<imbrandon> in fact it grabbed whole groups, does the new one do that ? if not i could merge that in, like i could ssh-add ubuntu-dev
<jcastro> SpamapS: I had surprisingly few problems getting the extra HP microserver in my carryon past security.
<imbrandon> and it would grab the whole ~ubuntu-dev team
<jcastro> SpamapS: it's pretty awesome, the entire thing fits in my carry on, I think it'll end up being the nova node for the charm school
<imbrandon> how much are they ?
<imbrandon> i know you got promos i mean normally
<imbrandon> if it's not less than a mac mini, i dunno bout that :)
<imbrandon> they make pretty rockin nodes
<imbrandon> and you can cram 3 hdds in them now
<imbrandon> ( gotta remove the wifi and bluetooth card but no need for that in a server anyhow )
<_mup_> Bug #985232 was filed: libpq include path is wrong <juju:New> < https://launchpad.net/bugs/985232 >
<SpamapS> jcastro: hah cool
<bkerensa> SpamapS: OpenPhoto charm is halfway to RC
<bkerensa> :D
<SpamapS> bkerensa: sweeeet
 * SpamapS needs to spend a little time promulgating before tomorrow
<hasp> hazmat, hows it goin
<imbrandon> SpamapS: doh sooo close `curl -u "bholtsclaw" -i https://api.github.com/user/keys`
<imbrandon> it works , returns json with all MY ssh keys, no love for random id's
<imbrandon> :(
<imbrandon> curl -u "bholtsclaw" -i https://api.github.com/user/keys/2203247 gets a single key
<imbrandon> but again only mine, unless i can find another way to get that ID and then hope they let me cuz the docs say nothing about it
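(For what it's worth, GitHub does appear to expose per-user public key listings without authentication, which is closer to the "not really an API" style mentioned above; the username is just the one from the curl attempts, and whether both endpoints were available at the time is an assumption.)
    # plain-text listing, one public key per line
    curl https://github.com/bholtsclaw.keys
    # JSON listing of another user's public keys via the v3 API
    curl -i https://api.github.com/users/bholtsclaw/keys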
<jono> zul, you there?
<zul> jono:  indeed
<imbrandon> you know, apple will be a trillion dollar company in the next 3 years, they have already proven `well enough` they can get on without jobs, for at least until his visions for the projects started by the people he surrounded himself with then brainwa^Wmolded with his dna ... but i can guarantee no one sees it yet, well 90% dont, they will guess ipads or iphones or blah blah, nope its simple, they made buying apps easy and now its an addiction, there
<imbrandon> i need to make a blog post about it ....
<imbrandon> ( i say this after looking at my 118$ bill for ios and mac app store apps this month and my less than $10 aws cloud svc's + ebook monthly bill combined )
<SpamapS> haha
<SpamapS> imbrandon: amazon is much better at extracting profit from those purchases though
<SpamapS> Apple just has crazy high margins, even on their cheap easy app store purchases
<imbrandon> SpamapS: i dunno, apple has to pay what 250mil in advertising and 100m in datacenter cost to make 3 bil profit off 22bil sales last quarter in app store alone
<imbrandon> i'm guessing that datacenter delivery and e content delivery amazon has some of the same margins if not more
<SpamapS> imbrandon: the amazon way will sustain longer. It might not matter, as Apple is in a position where they can make every mistake known to man and still have cash, but amazon will keep sucking cash out of peoples' wallets because they're so low price.
<imbrandon> i hear book publishers paying upwards of 30% to aws
<imbrandon> true
<imbrandon> but
<imbrandon> same thing with MS
<imbrandon> a decade ago
<SpamapS> From what I understand, Amazon can make a profit off sales as small as $1
<imbrandon> ms didn't have something that could sustain but didn't need to with the coffers built up
<SpamapS> Whereas Apple needs you to buy 3 or 4 $0.99 things to start seeing profit
<SpamapS> MS still has that cash
<SpamapS> and they're still profitable
<SpamapS> and will be for a long time
<imbrandon> SpamapS: but they are into every transaction at that point in 3 years, not just apps
<imbrandon> restaurants
<imbrandon> newspapers
<imbrandon> movies
<imbrandon> apps
<imbrandon> itunes
<imbrandon> grocery store
<imbrandon> corner gas
<imbrandon> it will be like the 80's credit cards "do you take diners club?" only do you take iPay ?
<imbrandon> dont think they wont, i bet its coming , look at iAd
<imbrandon> and all the others
<imbrandon> SpamapS: and MS built that long term stuff this last decade with lic activesync and things like it
<imbrandon> 10 or 15 years ago ms was not a long term sustainable model
<imbrandon> it was a cash cow
<imbrandon> but not long term
<imbrandon> now it is
<imbrandon> but only cuz they had the cash cow to get them there, same with apple, aws isn't going anywhere, i just dont think they can compete like them and google think they can
<imbrandon> they will remain around and making a ton of money, just not at the scale or the pull
<imbrandon> least that's what my fortune telling is saying to me heheheh, i am almost never 50% right
<imbrandon> :)
<SpamapS> I saw the other day Home Depot takes paypal
<SpamapS> at the register
<imbrandon> yea i did notice that too,
<imbrandon> it's because square and the like are forcing them to innovate
<imbrandon> if paypal had been innovating the last 5 years they would OWN the transaction market, iphones would be buying apps with paypal
<SpamapS> I'd love to be able to whip out my iphone and just have it figure out where I'm eating.. and ask me my name.. and I can just pay the bill.
<imbrandon> think about how long paypal has had the pull and ability to engineer real world physical payments
<imbrandon> but never did till now
<SpamapS> Paypal got killed by ebay I think
<SpamapS> they couldn't innovate anymore
<imbrandon> yup
<SpamapS> just became ebay's bitch
<imbrandon> exactly
<imbrandon> hahahah
<imbrandon> yea the deal was killer for ebay, sucked to be paypal
<imbrandon> i dunno i'm probably way off, i'm no economist, but i know i'm not 100% wrong, mark my words mr inventor :)
<imbrandon> btw someone with some /topic powers should tidy that up a bit :)
<imbrandon> lol
<imbrandon> SpamapS: you have no osx huh ? damn man i'm bringing you a lion disk
<imbrandon> need more brew testers
<imbrandon> actually while i was looking up brew syntax yesterday SpamapS there is a port for windows and linux too, i might package it up for ubuntu, it would make a great supplemental pkg mgr for developers if it's used as that and not an apt replacement
<hazmat> hasp, it goes ;-)
<hazmat> SpamapS, you in sf now?
<SpamapS> hazmat: no I fly in tomorrow morning early
 * SpamapS should probably look at his Itinerary so he knows what city he's flying to.. OAK or SFO
 * imbrandon is flying to sfo
<SpamapS> btw, juju updated to 531 in precise
<SpamapS> w0000t
 * SpamapS thinks we should probably announce subordinates
<hazmat> SpamapS, nice!
<imbrandon> ohh /me will update the formula, it's using 504
<hazmat> SpamapS, and relation addressability is probably worth a shout out
<SpamapS> hazmat: *definitely*
<imbrandon> gawd i love nginx , SpamapS http://paste.ubuntu.com/936120/
<SpamapS> woot, A29..
<SpamapS> landing at 0720 ..
<SpamapS> at SFO
<lifeless> will there be a coexist-like-subordinates option ?
<lifeless> or subordinates that can scale independently?
<SpamapS> lifeless: "placement" is the single word moniker for that. Not that I know of.
<SpamapS> lifeless: shouldn't be too complicated though
<SpamapS> argh, depwait for juju
<imbrandon> new dep ?
<SpamapS> have to wait for python-txzookeeper 0.9.5 to be published
<SpamapS> yeah new version of txzookeeper needed
<imbrandon> great thats what i was fighting with all last night
<imbrandon> on osx
<imbrandon> :(
<imbrandon> k i need a greek god, female preferred as its a vm, i got hera, athena, zeus, ares, and one more i can't think of at this moment and it's powered down
<imbrandon> hrm
<SpamapS> imbrandon: its on pypi
<SpamapS> imbrandon: Artemis
<imbrandon> yea it didn't want to find zookeeper.h tho
#juju 2012-04-19
<imbrandon> cool, ty
<imbrandon> SpamapS: oh and about the mouse thing, i use photoshop and other mouse tools WAY too much not to be a mouse guy
<imbrandon> actually on my desk i have a mouse, a bluetooth multitouch track pad, ( for my left hand while mouse in the right in photoshop ) and i use my ipad off to the side connected with wifi to change brushes etc with adobe connect and then a bamboo pen tablet wireless i place in front of my keyboard when i use it , so many more "hand movement" tools i use too much but i've got my own kinda gestures and shortcuts i do with them
<imbrandon> oh and my guitar hero usb controller at the side of my chair for when i can't rock out on the telecaster ( /me wants a new les paul :( )
<imbrandon> heh
<SpamapS> imbrandon: perhaps consider Mavis Beacon?
<SpamapS> ok, time to go do real world stuff
<imbrandon> :)
<imbrandon> i land one of those two i was telling ya about and i'll take a full real class and mavis :) just for you
<hasp> hazmat, wasn't sure if you saw but Euca provided a possible patch for bug 907450
<_mup_> Bug #907450: juju does not work with Walrus when s3-uri has a suffix <Eucalyptus:New> <juju:New> <txAWS:New> < https://launchpad.net/bugs/907450 >
<hazmat> hasp, yeah.. i've been talking to brian
<hazmat> hasp, i'm at a conf at the moment, what we can do about the archives first thing next week
<hazmat> er. but i'll see ^
<hasp> cool
<hasp> thanks man
<hasp> hows the conference btw
<lazyPower> I'm having some trouble trying to deploy-test my first charm and every piece of literature i find tells me the same command set that i'm using - however it's not finding my in-dev charm.
<marcoceppi> lazyPower:  what's your command?
<lazyPower> http://paste.ubuntu.com/936340/
<lazyPower> i suspect i haven't got something setup correctly
<marcoceppi> lazyPower: quick fix, inside your repo folder, create an oneiric folder and put your charms in their
<marcoceppi> there*
<lazyPower> well that caused it to barf with a different message, \o/ thanks marco
<marcoceppi> np, what's the new message?
<lazyPower> http://paste.ubuntu.com/936345/
<marcoceppi> well, that's interesting, pastebin your metadata.yaml
<marcoceppi> You can also install charm-tools, then in the terria folder run charm proof .
<lazyPower> I did that but didnt understand the output
<lazyPower> let me man it
<lazyPower> i dont have a provides: statement
<lazyPower> that'll do it
<SpamapS> marcoceppi: oneiric? You mean precise!!
<SpamapS> be bold, taste the beta
<lazyPower> SpamapS: Just trying to get it working at this point - i'll worry about targeting control in the next iteration.
<lazyPower> if i cant even deploy the service thats cart before the horse isnt it? :)
<SpamapS> lazyPower: newer versions of juju will default to precise anyway
<lazyPower> its specified in my environment.yaml
<lazyPower> so it'll be looking for oneiric
<SpamapS> lazyPower: if you're using the latest from the PPA, and you don't have 'default-series: oneiric' , then you will be spawning precise anyway
<SpamapS> ahh ok
<SpamapS> lazyPower: btw, the provides is just a suggestion. Anything W: or I: is not going to break your charm.
<lazyPower> in the provides: statement can i be arbitrary with my definition or are there guidelines for it that i'm missing?
<SpamapS> lazyPower: provides is a structured dictionary. It needs two levels deep under it. the first level is the name of the relation, the second is the interface
<SpamapS> provides:
<SpamapS>   foo:
<SpamapS>     interface: bar
<lazyPower> Ok i see, nice.
<SpamapS> meaning "I provide something I call foo, and it can talk to things that require an interface: bar
<lazyPower> Theres something wrong here - let me push this upstream to my github repo and maybe you can tell me why juju is barfing on the deploy statement.
<SpamapS> lazyPower: but you should be able to deploy without a provides
<lazyPower> proofing is coming back with only a w
<SpamapS> can you pastebin the barf too?
<lazyPower> certainly
<SpamapS> ahh
<bkerensa> marcoceppi: I noticed the gravatars are broken on omg
<SpamapS> lazyPower: your structure should be /home/ubuntu/repo/oneiric/terraria/metadata.yaml
<lazyPower> is there a charm command i need to run on ~/repo to make it charm friendly that I may have missed?
<SpamapS> lazyPower: no
<SpamapS> its just a dir
<SpamapS> with the releases of ubuntu under it as dirs
<SpamapS> then each charm gets a dir
<SpamapS> with metadata.yaml, revision, and the hooks dir
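(Putting that together with the path mentioned above, the expected layout is roughly:)
    /home/ubuntu/repo/
      oneiric/
        terraria/
          metadata.yaml
          revision
          hooks/
            install
            config-changed
(and it gets deployed with something along the lines of `juju deploy --repository=/home/ubuntu/repo local:oneiric/terraria`.)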
<lazyPower> i thought that - just analyzing the situation
<marcoceppi> what's your metadata.yaml look like?
<SpamapS> lazyPower: the 'charm create' tool in charm-tools definitely creates a nice skeleton charm, but it doesn't manage the repo for you
<marcoceppi> err, if you're going to push the whole charm to github, that works too
<lazyPower> incoming
<lazyPower> http://paste.ubuntu.com/936352/
<SpamapS> oh that reminds me...
<SpamapS> I should push the latest charm-tools into precise before the release
<lazyPower> i used the bitlbee charm as a guideline, and the documentation to write it - it all appears valid to me.
<marcoceppi> It appears fine
<lazyPower> ok the latest charm revisions are up on github as well.
<lazyPower> i suspect this is something funky going on with my setup not so much the charm itself unless i'm just horribly daft.
<marcoceppi> cool, I'll try a quick deploy
<marcoceppi> lazyPower: easy fix, metadata.yaml "name" needs to be lowercase
<marcoceppi> the name needs to match the directory name of the charm
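(So the top of repo/oneiric/terraria/metadata.yaml wants a lowercase name matching that directory, roughly like the following; the summary and description wording here is made up:)
    name: terraria
    summary: Terraria game server
    description: |
      Deploys and runs a Terraria server.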
<lazyPower> oh...
<lazyPower> derp!
<marcoceppi> <3
<lazyPower> well on the bright side i'm finally getting around to pulling the latest precise updates on this server
<marcoceppi> heh
<lazyPower> so the charm trouble had purpose then
<SpamapS> marcoceppi: lets put that in charm proof!
<SpamapS> thats an E: for sure
<lazyPower> nice
<lazyPower> just deployed successfully
<SpamapS> lazyPower: ^5
<lazyPower> that is hot sauce, gentlemen, hi5 indeed.
<SpamapS> marcoceppi: about done w/ a patch
<lazyPower> interesting, is it common to see nothing in the deub log when you have juju-log commands in the install hook?
<lazyPower> *debug
<SpamapS> lazyPower: you should see stuff in the debug-log as soon as the unit starts its agent
<SpamapS> lazyPower: IIRC, there is a bug where debug logging isn't turned on until after the install hook runs.. but I thought that was fixed
<lazyPower> ooohhh - so i think what i'm experiencing is the nice lag in AWS on spinning up a new instance from the controller then
<lazyPower> makes sense.
<lazyPower> i had a machine that was pending - so it just assumed things were deployed.
<SpamapS> marcoceppi: https://code.launchpad.net/~clint-fewbar/charm-tools/name-must-match-dir/+merge/102617
<SpamapS> lazyPower: you also have to wait for juju to install itself.. :)
<SpamapS> lazyPower: in my experience, on m1.small's, it takes about 2 minutes
<lazyPower> well i'm refreshing on my AWS panel - and no new instance spun up
<SpamapS> lazyPower: still "pending" ?
<SpamapS> lazyPower: you should also see any logs from the provisioning agent
<SpamapS> lazyPower: I have to go afk for a bit.. but I'll be back to check on you soon. :)
<lazyPower> Thanks :)
 * marcoceppi reviews
<marcoceppi> SpamapS: The only thing I would say, shouldn't it be higher than a warn? as it's a blocker by Juju standards
<lazyPower> awesome. I think i was a victim of AWS Roulette on my last controller. Just deployed a brand new controller and issued a deploy for my charm and the debug log is logging up a storm.
<lazyPower> This is exactly what i needed guys, thanks for the patience.
<SpamapS> marcoceppi: its not though, you can be submitting it to lp:..... and have it in "foo-bar-baz-bang"
<SpamapS> marcoceppi: its just a helpful warning
<marcoceppi> SpamapS: cool
<marcoceppi> SpamapS: approved
<bac> Hi, we've got some scripts that wait for our charm to be deployed before proceeding.  It involves polling 'juju status' waiting for the agent-state to transition to 'started'.  When deploying to ec2 we see occasional errors such as: "ERROR Connection to the other side was lost in a non-clean fashion: Connection lost.".  Is this just a problem with the ssh layer that we should live with or a problem juju should solve?
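(One way to keep that kind of poll going despite an occasional dropped connection is to treat a failed status call as "not started yet" and simply retry; a minimal sketch, with the grep pattern and sleep interval as assumptions:)
    #!/bin/sh
    # keep polling until some unit reports agent-state: started;
    # a transient status failure just falls through to another retry
    until juju status 2>/dev/null | grep -q 'agent-state: started'; do
      sleep 10
    done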
<matsubara> hello! We have a community contributor testing out the MAAS installation and juju deployment but he's running into a timeout error bootstrapping the environment. The error he gets is: Failure: juju.errors.ProviderInteractionError: Unexpected TimeoutError
<matsubara> interacting with provider: User timeout caused connection failure.
<matsubara> 2012-04-19 10:43:45,297 ERROR Traceback (most recent call last):
<matsubara> Failure: juju.errors.ProviderInteractionError: Unexpected TimeoutError
<matsubara> interacting with provider: User timeout caused connection failure. Is there a place to see more details about this error?
<m_3> matsubara: not that I've heard of... most of the hw crew are at the openstack summit atm (talking about maas)
<SpamapS> matsubara: juju -v bootstrap might help a bit
<SpamapS> and yes, we're all at openstack
<SpamapS> keynotes going now
<matsubara> SpamapS, http://pastebin.ubuntu.com/937068/
<SpamapS> matsubara: seems rather clear, timed out trying to connect to ... something ;)
<matsubara> SpamapS, is there any other juj log or more verbose option I could look at to see what that something is?
<SpamapS> matsubara: not really in this case no.
<SpamapS> matsubara: whatever is raising that ProviderError is being very terse, and should have included a message
<SpamapS> sabdfl about to take stage at openstack conference :)
<jcastro> \o/
<jcastro> current .... keynote .... boring.
<jcastro> losing .... consciousness ...
<marcoceppi> hah
<AlanBell> http://openstack.org/conference/san-francisco-2012/ like the new wheels jcastro
<jcastro> thanks!
<bkerensa> AlanBell: I wanted to go to that event so bad
<oarcher> hi, i'm deploying openstack with juju, on ubuntu precise. 'juju deploy mysql' is ok, but 'juju deploy rabbitmq' gives Error processing 'cs:precise/rabbitmq': entry not found
<oarcher> i'm using doc from here: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<_mup_> Bug #985812 was filed: Juju commands return SSL errors <juju:New> < https://launchpad.net/bugs/985812 >
<oarcher> and bzr branch lp:charms/rabbitmq  give "bzr: ERROR: Not a branch: "http://bazaar.launchpad.net/~charmers/charms/precise/rabbitmq-server/precise/"."  Strange, because it's a copy/paste from https://code.launchpad.net/~charmers/charms/precise/rabbitmq-server/trunk (Get this branch)
<flepied> I have rebooted my laptop after an lxc juju setup. how do I restart it ?
<oarcher> and "Browse the code" (https://code.launchpad.net/~charmers/charms/precise/rabbitmq-server/trunk)  on lp give 404 !  Is launchpad charms broken ?
<imbrandon> flepied: juju destroy the env and re-bootstrap it
<flepied> imbrandon, I don't want to lose what I have done. there is no other way ?
<flepied> I managed to modify bootstrap to not stop and restart at least zookeeper. now I need to start the containers...
<imbrandon> not with lxc no
<SpamapS> flepied: its an open bug
<flepied> whith ec2 how would it work ?
<SpamapS> flepied: with EC2 you can reboot the nodes, though if the zookeeper node changes IP, things stop working.
<flepied> SpamapS: ok
<flepied> SpamapS, what is the bug number ?
<marcoceppi> SpamapS: you can change the EIP and things still work
<marcoceppi> juju ssh unit/# stops working, but juju ssh machine# still works. The data for services falls out of sync, but machine data stays up to date
<SpamapS> marcoceppi: you can't change the internal IP of the zookeeper node. Bad things happen.
<marcoceppi> SpamapS: Oh, for the zk node
<marcoceppi> I thought it was any instance
<SpamapS> flepied: bug #955576
<_mup_> Bug #955576: 'local:' services not started on reboot <juju:New> <juju (Ubuntu):Confirmed> < https://launchpad.net/bugs/955576 >
<marcoceppi> I guess I need to test my newest charm against precise now
<marcoceppi> SpamapS: did you see sabdfl's keynote?
<imbrandon> is there video >
<imbrandon> ?
<SpamapS> imbrandon: eventually
<SpamapS> marcoceppi: yes, it was awesome
<SpamapS> not AWSome.. but Awesome :)
<imbrandon> lol
<SpamapS> m_3: http://charmtests.markmims.com/ ... can I get some precise up in here?
<marcoceppi> Is it just me, or should make be a part of build-essential?
<marcoceppi> grr
<marcoceppi> charm-upgrade didn't quite do what I expected, nvm
<SpamapS> marcoceppi: agreed, I'm shocked that make isn't part of build-essential
 * SpamapS tried to write a charm using make the other day. If there had been a puppy around I'd have kicked it
<marcoceppi> haha
<marcoceppi> I just kicked the jr. dev over here instead, and by kicked I mean shot with nerf guns
<marcoceppi> In other news, I think the shelr.tv charm is ready to be reviewed, and it uses mongodb!
<marcoceppi> just waiting for this last test to finish
<imbrandon> nice
<marcoceppi> SpamapS:  should I make a bug for these: 2012-04-19 21:55:11,412:14468(0x7f66884a5700):ZOO_WARN@zookeeper_interest@1461: Exceeded deadline by 86ms they seem excessive
<imbrandon> why can no one build a fskin sane php ppa ? fskin grumbles and goes off to do it himself possibly
<hazmat> marcoceppi, there is one extant
<marcoceppi> hazmat: cool, thanks
<hazmat> marcoceppi, its unfortunate they chose to do log that as a warning, cause its really not meaningful
<marcoceppi> yeah, I always scratch my head at it like "oh, okay?"
<imbrandon> SpamapS: you know after a couple of days of looking re: ssh key server, there really isn't a universal one, or one period etc except LP, and me being the way I am I think I'm just gonna write one ... in Go ... public , secure , and indep not tied to github sf.net lp etc ( but will likely make use of them for imports or syncing )
<imbrandon> kinda like a gpg keyserv
<imbrandon> bad idea ? i'm kinda on the fence about it
<SpamapS> imbrandon: lol
<SpamapS> imbrandon: glad you did the due diligence
<imbrandon> i figured if nothing else it will give me a "project" to learn Go with
<imbrandon> SpamapS: i tend to retain more that way , and maybe later i can get my hands dirty with juju go easier
<m_3> SpamapS: yup... working on getting charmtests up to precise... charmrunner's gotta take environment params first
<m_3> SpamapS: congrats... I hear the talk went well today!
<marcoceppi> Does juju run with the locale env?
<hazmat> marcoceppi, i think not in the local provider
<marcoceppi> I'm wondering if that's why I'm getting this error, the command fails from the install hook, but not from the terminal
<hazmat> the fix
<marcoceppi> hum, doesn't appear to have LANG set, I just tried to echo it from the install hook
<marcoceppi> I guess the easy way would be to just set it?
<marcoceppi> So two things, A) That fixed it B) what, if any ENV variables are included during execution?
<imbrandon> marcoceppi: jsut evho env
<imbrandon> echo
<imbrandon> from the install hook and and it should show
<imbrandon> err run env
<imbrandon> not echo
<imbrandon> /usr/bin/env
<marcoceppi> eh, I will if I have to push an update
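(For reference, the workaround being described is just a couple of lines at the top of the hook; the specific locale chosen here is an assumption:)
    #!/bin/sh
    set -e
    # the local provider doesn't seem to pass a locale through to hooks,
    # so set one explicitly before anything locale-sensitive runs
    export LANG=en_US.UTF-8
    # while debugging, dump whatever environment the hook actually gets
    # (hook output lands in the unit's charm log)
    env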
<marcoceppi> before I submit a patch for this, can someone confirm that this is a bug? https://bugs.launchpad.net/charms/+source/mongodb/+bug/985939
<_mup_> Bug #985939: Mongodb charm does not expose the port on relation-joined <mongodb (Juju Charms Collection):New> <mongodb (Charms Oneiric):New> <mongodb (Charms Precise):New> < https://launchpad.net/bugs/985939 >
<imbrandon> jcastro / m_3 : Juju on OSX updated , run "brew update && brew upgrade juju" if you have it installed to get juju-0.5+bzr531 :)
<m_3> imbrandon: cool
<imbrandon> marcoceppi: when you get a moment can you pass me the two current stanzas we're using ?
<marcoceppi> shelr.tv charm \o/ http://ec2-50-19-40-77.compute-1.amazonaws.com/records/4f909d76c2183956b1000002
<marcoceppi> imbrandon: there's nothing in the new stanzas
<marcoceppi> everything's back on the old setup
<imbrandon> ahh ok, is there a current staging ? or not right now ?
<marcoceppi> there's no staging
<marcoceppi> atm
<imbrandon> kk
<m_3> marcoceppi: shelr's cool
<marcoceppi> yeah, I love the work they've been doing
<marcoceppi> and it's another charm we can add to the list that uses mongo
<marcoceppi> need to clean up a few things and add a copyright, then it'll be up for review
<m_3> yup
<imbrandon> yea i did your charm juju install of omg playback via the shell on osx
<imbrandon> that is soooooooo cool
<imbrandon> marcoceppi: ^^
<imbrandon> a lot smoother that way too
#juju 2012-04-20
<mrevell> Hi
<oarcher> is there any pbs with charm store ? With every attempt, juju deploy says "Error processing 'cs:precise/glance': entry not found"
<hazmat> mrevell, hello
<hazmat> oarcher, it doesn't appear glance is 'blessed' in the 'official' charm namespace.. ie.. been through review... you can deploy it from the user/ppa namespace via juju deploy cs:~charmers/precise/glance
<oarcher> thanks, hazmat . And is it true for all the other charms cited here : https://help.ubuntu.com/community/UbuntuCloudInfrastructure  ? ( glance keystone nova-cloud-controller nova-compute nova-volume openstack-dashboard rabbitmq-server )
<hazmat> oarcher, i couldn't say offhand without checking each individually, but yes that prefix will work regardless.
<oarcher> I've got an error with the last:  $juju deploy cs:~charmers/precise/rabbitmq-server gives ERROR
<oarcher> What is strange is that lp give an error while trying to browse the code here: https://code.launchpad.net/~charmers/charms/precise/rabbitmq-server/trunk
<hazmat> oarcher, interesting, that is a bug
<hazmat> really odd
<oarcher> is it a bug from launchpad, or a bug from juju ?
 * hazmat checks the charm
<hazmat> oarcher, imo its a juju bug, trying to identify the reason atm
<hazmat> oh.. the charm name doesn't match the charm name in its metadata
<hazmat> still seems odd
<hazmat> huh.. its still referencing ensemble: formula
<hazmat> something is very wrong in denmark with this charm
<_mup_> juju/rabbitmq-server r18 committed by kapil.thangavelu@canonical.com
<_mup_> update metadata
<hazmat> seems to work if it's local in a dir but deploying from the charm store it looks like there is an additional sanity check
<oarcher> seems to work in local, except rabbitmq-server, because i can't get the branch. (So I've tried with an old oneiric branch for rabbitmq)
<hazmat> oarcher, ack.. that should be okay.. branch is bzr branch lp:charms/rabbitmq-server
<hazmat> there's automation around local dir/repo pulls with charm-tools pkg
<oarcher> ok, i've got the branch
<hazmat> i've pushed a fix upstream, but the charm store has a delay of 1m
<hazmat> er.. 15m
<hazmat> bedtime... back in 6h
<oarcher> You mean that the charm store will be up in 15m ?
<hazmat> oarcher, the change i just committed will be reflected in a new charm store packaging of that charm in 15m
<oarcher> ok, thanks. coffee time for me
 * flepied is trying to experiment with subordinate charms without success :-(
<fwereade_> flepied, I'm about as far from being an expert as I can be, but perhaps I can help?
<flepied> fwereade_, is it supposed to work as documented ? or are there tricks to know ?
<fwereade_> flepied, it *should* work as documented, but there's something that I recall being less obvious (about writing them) that I'm racking my brains about
<fwereade_> flepied, what are you trying to do exactly?
<flepied> just experimenting so nothing real. I took the example of rsyslog and trying to implement it
<fwereade_> flepied, you never know: if you pastebin the config.yaml I *might* notice something
<flepied> fwereade_, http://pastebin.com/iRgHXVCY
<fwereade_> flepied, I think you may need a "requires: juju-info" in there; and I'm not sure you want the provides: to be scope: container
<fwereade_> flepied, on the basis that an rsyslog client will surely be providing the log data to an rsyslog-server somewhere, which is surely not in the same container
<flepied> fwereade_, what is juju-info ?
<fwereade_> flepied, it's an interface provided implicitly by all charms
<fwereade_> flepied, https://juju.ubuntu.com/docs/drafts/implicit-relations.html
<flepied> fwereade_, I had a requires previously for the remote rsyslog connection but I removed it to see if it was causing troubles
<flepied> fwereade_, ok I'll try with juju-info
<fwereade_> flepied, gl, let me know how it goes :)
<flepied> fwereade_, still the same: it creates a real vm not a subordinate. perhaps it doesn't work in local (lxc) ?
<fwereade_> flepied, hmm, sorry, I missed something else: you need "subordinate: true" at the top level
<flepied> oh
<fwereade_> flepied, scope:container is still meaningful for non-subordinates so they can communicate with subordinates
<flepied> I didn't see this in the doc, sorry
<fwereade_> flepied, for example, declaring a logging-directory interface (which is ofc only meaningful to something that can see the directory by virtue of being in the container)
<fwereade_> flepied, haha no worries, I didn't spot it in your charm ;p
<fwereade_> flepied, and nor did I spot it while rereading the docs just now :)
<flepied> I'll try it with it and tell you
<fwereade_> flepied, we should probably have a subordinate example charm really
<fwereade_> flepied, gl again :)
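(Roughly what an example subordinate's metadata.yaml could look like for this rsyslog case; the relation names here are illustrative rather than taken from any existing charm:)
    name: rsyslog-forwarder
    subordinate: true
    requires:
      # container-scoped relation that glues the forwarder onto its principal
      logging-host:
        interface: juju-info
        scope: container
      # ordinary relation to the central rsyslog server on another machine
      aggregator:
        interface: syslog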
<fwereade_> flepied, any joy?
<flepied> fwereade_, sorry I was in meeting, I try asap
<fwereade_> flepied, np at all, no rush; just wanted to check you weren't silently cursing us ;p
<flepied> fwereade_, much better now ! thx !
<fwereade_> flepied, awesome :D
<fwereade_> flepied, a pleasure
<flepied> fwereade_, I have another problem now: the subordinates got attached to the 2 sides while I want it to be attached only on one side. it makes my rsyslog server send logs to itself in a loop :-(
<fwereade_> heya flepied
<fwereade_> flepied, would you pastebin me the config please?
<fwereade_> flepied, config*s* I guess ;)
<flepied> fwereade_, http://pastebin.com/7k9nm6j1
<flepied> fwereade_, I wanted to remove line 14 but I have an error if I do it
<fwereade_> flepied, hmm, would you pastebin me a transcript of what you deployed as well please?
<flepied> fwereade_, you mean juju status output ?
<fwereade_> flepied, that would probably cover everything I need, yeah :)
<flepied> fwereade_, http://pastebin.com/YMWRuYYD
<flepied> in fact I don't want a subordinates between rsyslog-server and rsyslog-client and I get one...
<fwereade_> flepied, yeah, it does look like it's all tied to line 14, what was the error?
<flepied> fwereade_, labeled subordinate but lacking scope:container `requires` relation',
<fwereade_> flepied, aha, yes, I think the provides/requires are the wrong way round
<fwereade_> flepied, switch them in rsyslog-client and propagate the changes through the others
<flepied> fwereade_, ok I try
<m_3> fwereade_: yeah, we're working on that
<fwereade_> m_3, awesome :)
<m_3> flepied: so looking at your metadata files, I'm seeing more explicit relations than I would expect just offhand
 * m_3 checking out the raw charms
<fwereade_> flepied, listen to this man, he knows more about writing charms than I ever will probably ;)
<flepied> fwereade_, ok :)
<m_3> fwereade_: dunno man... I'm thinking the opposite actually :)
<fwereade_> m_3, I'm good at some things, but writing charms? I'd concede before we even started :)
<m_3> flepied: one point of subordinates was that we didn't wanna specify _every_ possible subordinate relation in each charm (every charm in the repo would have logger, monitor, nfs-client, etc)
<flepied> m_3, yes I understand, I just wanted to experiment
<m_3> flepied: I'm looking for a good example of it running... I know clint's got a few
<m_3> he's probably at the airport today though
<flepied> m_3, don't worry that's not a big deal
<flepied> just learning
<m_3> flepied: me too!
<m_3> flepied: perhaps remove 'logging' from haproxy as a start, then I'm wondering about the "container" scope in the rsyslog-client's syslog-server relation
<flepied> m_3: yes that's what I tried first but I got an error if I remove it
<m_3> flepied: I found clint's versions...
<m_3> https://code.launchpad.net/~clint-fewbar/charms/precise/rsyslog-forwarder/trunk
<m_3> and
<m_3> https://code.launchpad.net/~clint-fewbar/charms/precise/rsyslog/trunk
<m_3> it's using the 'juju-info' sort of anonymous relation
<m_3> rsyslog-forwarder's README shows usage
<m_3> I think this is one of the first real subordinates anywhere in the repo... nothing in the store yet
<flepied> m_3, there is no scope: container in the syslog relation. that's what I wanted to do
<m_3> flepied: so you want haproxy/rsyslog-client on one instance... then rsyslog-server also on the same instance or on a dedicated rsyslog-server instance?
<flepied> rsyslog-server on a dedicated instance
<flepied> I wanted to implement a central log server
<m_3> flepied: ok, then I think that this setup should do that
<flepied> yes I will try his files
<m_3> the "aggregator" relation (using the "syslog" interface) is the rsyslog-forwarder talking to the rsyslog server on another machine... this is an ordinary juju relation
<marcoceppi> m_3: I don't understand the juju-info interface
<marcoceppi> err, relation
<m_3> the scope:container relation is the "juju-info" one that's getting the rsyslog-forwarder talking to haproxy locally
<m_3> marcoceppi: I haven't caught up on all the docs, but I think it's juju's version of an anonymous interface that's used when a subordinate is glommed onto a primary service
<m_3> as far as I understand
<marcoceppi> m_3: but how does the sub-ordinate know which service to attach to? it just does juju deploy "subordinate" from what I see
<m_3> marcoceppi: I think there's only a single primary possible on a given service unit... then it talks 'juju-info' with any attached subordinates
 * m_3 has gotta rtfm about it this weekend
<marcoceppi> as do I, I'd like to setup a wordpress site -> wordpress -> nginx subordinate chain thing soon
<m_3> I gotta get the charmtester as jenkins-master talking to jenkins-slave with subordinate charmtester
<m_3> marcoceppi: pretty sure the thinking was that I just can't glom on subs to a primary... they've got to be _related_ as far as juju is concerned... juju-info is that "internal" relation
<marcoceppi> so, when you deploy...it doesn't actually do anything until  you relate it?
<fwereade_> marcoceppi, that's right
<m_3> marcoceppi: I doubt any info goes over that relation... or that we can even hook against it
<marcoceppi> OH, there's this key in the metadata: subordinate: true
<marcoceppi> I was like OMG HOW DOES JUJU KNOW?!?
<m_3> yuppers
<m_3> marcoceppi: but I'm actually sad about that field... I really wanted to do something like a single "rsyslog" that'd just be either a standalone server or a forwarder depending on context
<bcsaller> no deployment could happen till we knew where to put things and that comes from having a relation
<marcoceppi> m_3: like a juju deploy --subordinate=servicecurrentlydeployed rsyslog
<m_3> bcsaller: thanks btw!  very excited to read up and convert a bunch of stuff over to subs soon
<m_3> marcoceppi: some variation yeah
<bcsaller> m_3: :)
<marcoceppi> this way makes sense now, I was very confused earlier. However, what happens if subordinates need to communicate to their parent services?
<flepied> m_3, his version works but I still don't understand why mine doesn't :-(
<m_3> marcoceppi: dunno yet
<bcsaller> marcoceppi: they can do that over the relation they establish (the container relation) or via a traditional relation
<bcsaller> marcoceppi: in many use cases the principal doesn't know about the subordinate and won't have anything special prepared to tell it
<bcsaller> so we can use juju-info then
<bcsaller> but if they do, the relation works as normal
<m_3> flepied: rsyslog-client's rsyslog-server interface doesn't make sense as container scope
<flepied> exact
<m_3> bcsaller: does it ever make sense to impl juju-info-relation-xxx hooks?
<bcsaller> m_3: what would go in them that a) isn't implicit in all interfaces already and b) wouldn't be better covered by a more specific interface?
<m_3> bcsaller: nothing... that's why I was thinking juju-info wouldn't be hookable
<bcsaller> m_3: logically maybe, but the subordinate could do things on its join side, just knowing its properly connected to the principal at that point
<bcsaller> sometimes presence is enough
<m_3> bcsaller: right
 * m_3 still trying to come up with a clever term for the opposite of hookable
<m_3> neutered?
<m_3> incarcerated?
<m_3> bcsaller: I could see a case where a primary would maybe be _super_ careful and do things like restart a service (like apache) every time any subordinate connected... but that's still a pretty out-there example I guess
<bcsaller> yeah, some sub's might be related to things like the apache and others might be related to things like logging or other container related things.  I don't think you'd want to react blindly like that
<m_3> right
<m_3> although bouncing apache is to lamp stacks as rebooting is to windows
<bcsaller> fixes everything :)
<m_3> magic even
<m_3> hazmat: btw, rabbit's lp aliases are screwed up... been waiting for adam to finish with ods, then we'll fix them
<marcoceppi> What is ODS?
<m_3> marcoceppi: something like openstack design(?) summit
<marcoceppi> Ah, gotchya
<m_3> marcoceppi: big juju/maas demos... went really well from what I heard
<marcoceppi> excellent!
<SpamapS> m_3: the juju+maas demo went great, and so did charm school
<SpamapS> I felt like we had a really interesting discussion in charm school about what juju does and why it is relevant.
<m_3> SpamapS: awesome!
<SpamapS> as usual.. a few things went wrong.. aws fail rate has been high in us-east-1 .. I think I'll start using us-west-2 all the time now
<SpamapS> network partitioned for a while so zk was timing out
<m_3> SpamapS: yeah, I can totally feel when east-coast primetime starts just about every day
<SpamapS> This was subtle. debug-log just stopped
<m_3> SpamapS: hey, do you know of a way to remove the lp alias from a branch w/o deleting the whole branch?
<SpamapS> m_3: no
<SpamapS> m_3: thats the way, IIRC
 * m_3 sad face
<SpamapS> m_3: branch it, delete it, put it back
<SpamapS> m_3: the LP API may have a way to unpromulgate
<SpamapS> in fact we need that command
<m_3> gotta fix the openstack branches
<SpamapS> m_3: are they super messed up?
<m_3> there're several stacked branches, so I can't delete willynilly
<m_3> and it's sorta... um... rude :)
<m_3> SpamapS: just note for next time that we gotta be more careful with existing dev branches when bumping series
<m_3> SpamapS: oh, btw... I'm thinking two separate blueprints: juju-intelligent-infrastructure and juju-integration
<SpamapS> m_3: hm?
<m_3> for Q
<m_3> you're putting out a juju-release-process or something equiv right?
<SpamapS> m_3: yes, there's a charm store release process blueprint already
<m_3> SpamapS: cool
<m_3> let's hang monday and go over others we want?  be thinking about it in the mean time
<SpamapS> m_3: I think we should probably spark up the discussion on the mailing list first.
<SpamapS> m_3: Lets get our positions in order so the face to face time is spent resolving the differences rather than articulating them.
<m_3> SpamapS: sure... does it hurt to just submit blueprints?  I assume they can be rejected or not approved without any hassle
<SpamapS> m_3: its fine, in fact its a good idea to submit it, then send a link to the list to invite discussion
<m_3> SpamapS: doh... sent already... I'll add them this afternoon
<SpamapS> m_3: doesn't really matter what the sequence is :)
<SpamapS> Just that we make sure we use the time at UDS wisely
<marcoceppi> is it normal for output from the hook to be labeled as @ERROR ? http://paste.ubuntu.com/938714/
<bkerensa> jcastro: these Juju shirts are pretty darn cool
<bkerensa> :D
<ninjix> jcastro: I got my t-shirt yesterday. Thanks! very cool
<marcoceppi> anyone around for a review?
<marcoceppi> https://bugs.launchpad.net/charms/+bug/985894
<_mup_> Bug #985894: Charm needed: Shelr.tv <new-charm> <Juju Charms Collection:Fix Committed by marcoceppi> < https://launchpad.net/bugs/985894 >
<m_3> marcoceppi: that is normal, yes... guess it's pumping to stderr for some reason.
<m_3> think there's a bug for that... prob low priority tho
<m_3> marcoceppi: bug #955209
<_mup_> Bug #955209: charm.log uses 'Error' for all stderr messages <juju:Confirmed> < https://launchpad.net/bugs/955209 >
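(In other words the Error label comes from the stream rather than the content; something like this in a hook shows the difference, assuming juju-log's default level isn't error:)
    # gets recorded as Error in charm.log purely because it went to stderr
    echo "just an informational message" >&2
    # the same text via juju-log doesn't get flagged as an error
    juju-log "just an informational message"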
<m_3> marcoceppi: got a dangling mongodb-relation-changed in addition to the db-relation-changed
<m_3> marcoceppi: shelr.tv reviewed
<marcoceppi> m_3: what's a dangling mongodb-relation?
<marcoceppi> OH, you mean a file that doesn't need to be there
<marcoceppi> Also, in my experience in running locally, sudo was needed for about all the commands :\
<marcoceppi> m_3: thanks for the review!
<marcoceppi> I'm guessing this is bad? http://paste.ubuntu.com/938872/
<marcoceppi> Or does it take a while for a promulgated charm to appears in the cs?
<m_3> marcoceppi: hmmm
<m_3> ok, so there's a problem
<m_3> the charm store is looking at precise now
<m_3> I bet it's looking for .../precise/shelr.tv/trunk
<m_3> so... try pushing explicitly to lp:~charmers/charms/oneiric/shelr.tv/trunk
<m_3> marcoceppi: ^
<marcoceppi> m_3: I've pushed to both <3
<marcoceppi> http://code.launchpad.net/charms
<marcoceppi> I hope it's not the stupid .
<m_3> marcoceppi: no, that looks good... so lp:charms/shelr.tv aliases to lp:~charmers/charms/precise/shelr.tv/trunk
<m_3> and then lp:~charmers/charms/oneiric/shelr.tv/trunk is there
<m_3> marcoceppi: so try a 'juju deploy cs:oneiric/shelr.tv'
<m_3> once we verify that works, then it's time to test out the precise version on precise since it's already in the precise store
<SpamapS> also consider trying 'bzr push lp:charms/oneiric/shelr.tv'
<SpamapS> we may need a --series argument for charm promulgate
<spidersddd> Does anyone know if a REST API is going to be used/developed/being developed for juju?
<m_3> spidersddd: it's been proposed... don't know when it's slated to be developed.  We'll know more after the Ubuntu Developer Summit in two weeks (where we plan out what'll be in 12.10)
<SpamapS> spidersddd: I can get you a bug # to subscribe to ...
<SpamapS> spidersddd: bug #804284
<_mup_> Bug #804284: REST API for managing ensemble environments, aka expose cli as ensemble daemon <juju:Confirmed> < https://launchpad.net/bugs/804284 >
 * SpamapS retitles... ensemble...
<SpamapS> Man, we need to do that
<SpamapS> we can get rid of the problem where you have to share environments.yaml
<SpamapS> Just know the address of the juju server and off you go
#juju 2012-04-21
<_mup_> Bug #983530 was filed: "charms" needs branch name consistency <juju:New> <Launchpad itself:Invalid> < https://launchpad.net/bugs/983530 >
#juju 2013-04-15
<trevorj> Hey guys, how's the status on the HA juju charms for grizzly openstack going? Last time I tested it out, there were a couple things I had to change to get grizzly a-workin for a certain packages, but they worked very well ;)
<trevorj> s/a certain/a few/
<AskUbuntu> juju, dnsmasq and `.localdomain` | http://askubuntu.com/q/281628
<AskUbuntu> Juju not seeing the MaaS slaves... at least not after some time? | http://askubuntu.com/q/281640
<kirminas> Hey , maybe some of you could help. Thanks in advance http://askubuntu.com/questions/281640/juju-not-seeing-the-maas-slaves-at-least-not-after-some-time
<kirminas>  Hey , maybe some of you could help. Thanks in advance http://askubuntu.com/questions/281640/juju-not-seeing-the-maas-slaves-at-least-not-after-some-time
<sidnei> hazmat: environment snapshot alternative?
<wedgwood> is the openstack provider expected to be working in juju-core?
<wedgwood> I ask because I get an error when I try to bootstrap: 'error: secret-key: expected nothing, got "<the key>"'
<wedgwood> mramm: is the openstack provider working in juju-core?
<mramm> wedgwood: The basic openstack support is in, so you can deploy charms, add relationships, etc
<wedgwood> mramm: when I try to bootstrap: 'error: secret-key: expected nothing, got "<the key>"'
<wedgwood> I've copied over my (working) config from pyju
<mramm> the config file has changed somewhat
<mramm> and go juju does not accept some of the python keys
<wedgwood> ah ok. I spit out the example, but I'm not sure I see where to plug in some of the values
<mramm> ok
<mramm> did you spit it out with juju generate-config -w
<mgz> wedgwood: for this particular case, your issue is probably that keypair auth isn't in yet
<mgz> you can use userpass auth for now
<wedgwood> mgz: do you know if juju-core has been used against canonistack?
<mramm> wedgwood: yes, it has
<mramm> that was the first testing platform
<mramm> there are some issues with not having available public IP addresses there
<sidnei> wedgwood: last i heard it still requires a public ip address per machine, which makes it unusable with canonistack
<mramm> there is a workaround
<evilnickveitch> wedgwood, there is a funny glitch in the parsing that will give you an error for supplying the key value, even if you choose "userpass"
<wedgwood> mramm: perhaps avoiding expose?
<mramm> it is more than that, since juju core actually communicates between the agent and the server on the public ip addresses
<wedgwood> anyone have notes? I'd like to give it a try.
<wedgwood> if AWS is a path of less resistance, I can get started there, but I'd like to start trying things out in openstack
<Erik_> Trying to deploy wordpress.  /var/lib/juju/units/wordpress-1/charm/hooks/config-changed failed with exit code 1.  The output of hook scripts get logged anywhere?
<Erik_> I think `unit-get private-address` is failing.  Anyone know how I can lookup the CLIENT_ID so I can run `unit-get` from within the container?
<sarnold> Erik_: check /var/log/juju -- iirc, there are logs in there..
<ev> is there a known issue with pyjuju where it forever sits with "instance-id: pending" without actually creating an instance in openstack (lcy01)?
<ev> It seems no matter how many times I try to create a three node cassandra cluster, this happens for third node. This wasn't always the case, though.
<ev> and there it is: ProviderInteractionError: Unexpected 413: '{"overLimit": {"message": "SecurityGroupLimitExceeded: Quota exceeded, too many security groups.", "code": 413}}'
<ev> apols, clearly that's on my end
<sidnei> ev: indeed, but i think the failure mode could be improved. i also had similar issues where i had api quota exceeded or some other error that caused the instance to end up in ERROR state but show as pending in pyjuju.
<ev> sidnei: *nods*
<sidnei> by api quota exceeded i mean openstack's api rate limiting, where if you do too many api calls in sequence it forces you to back off
<ev> oh interesting. I wasn't even aware that existed.
<ev> I've always associated the error state with the canonistack node running out of free memory :)
<sidnei> if you're a light user you might notice it, but if you do add-unit -n 15 or something more abusive. ;)
<ev> heh
<ev> canonistack tends to fall over long before I approach 15 units. It's great fun trying to run lp:error-tracker-deployment (~12 units).
<sidnei> lol
#juju 2013-04-16
<marcoceppi> What causes "ERROR Cannot find machine: i-xxxxxxx" during a destroy-environment?
<evilnickveitch> marcoceppi, don't know, but when you find out let me know - seems like one worth adding to the docs!
<marcoceppi> evilnickveitch: I'll let you know if I find a reason or solution
<evilnickveitch> marcoceppi, thanks
<stoggi> Hello, is it possible to create your own provider for Juju?
<mgz> stoggi: yes, but you probably want to ask yourself why you'd be doing that first
<sarnold> perhaps he works for gandi or hetzner or another company with a giant pile of machines and an already-existing cloudy api? :)
<stoggi> mgz: Well, there are a handful of provider configurations, but if I want to use Juju with the latest and greatest IaaS (x). Can I write a "driver" to connect to x's API?
<stoggi> sarnold: bingo, not that I work for those companies, but I am interested in how the community can support new providers.
<arosales> utlemming, looks like your firefox charm went to "fix released" who hoo, nice work.
<arosales> marcoceppi, thanks for the review
<mgz> stoggi: you can indeed write a provider there, but the cloud needs to be non-insane
<sarnold> stoggi: I had idly considered trying to write one myself, just as an exercise to learn juju, but figured it'd be one heck of an exercise and decided to try some charms first.. hehe :)
<mgz> a bunch of these little clouds lack some rather basic things that juju really needs, like the ability to pass in a ssh public key, or doing cloud-init stuff on startup and so on
<stoggi> mgz: OK, great. So assuming the cloud API is useful enough, is it a lot of work to write the provider in Juju?
<mgz> it's programming, but it's not that painful.
<stoggi> Sounds like fun. Does the provider have to be part of the main juju code, or is there a way of providing a plug in.
<mgz> stoggi: there's not been any call for plugins yet, you can just develop in your own branch
<stoggi> mgz: OK, I'll look into adding my own provider into /juju/providers to try it out. Thank you for your help.
<marcoceppi> arosales: Yup it's promulgated and everything!
#juju 2013-04-17
<popey> how do i file bugs against http://jujucharms.com/charms/precise/gitlab (the page, not the charm)...
<popey> "This charm provides gitlab from http://http://gitlab.org. " (note the url)
<danilos> popey, it seems the problem is in the charm README.md: http://bazaar.launchpad.net/~charmers/charms/precise/gitlab/trunk/view/head:/README.md
<AskUbuntu> How to use juju with a VPS host? | http://askubuntu.com/q/282393
<marcoceppi> popey: Hey, I pinged you in the AU chat rooms, but it would probably be easier just to relay the info here. Could you move the "Edit solved" portion of this question to its own answer? http://askubuntu.com/questions/281530/running-juju-on-a-local-server
<popey> marcoceppi: done
<marcoceppi> popey: thanks!
<popey> http://askubuntu.com/q/282393 could probably be done with the same method
<popey> I'll bulk the answer up later with snippets from my config
<marcoceppi> popey: awesome, I'm actually going to give it a go here in a bit. WRT other question, it appears his VPS host might not support LXC on it but it'd be worth double checking
<popey> yeah, i might test on my vps, but the problem will be for me, getting more IP addresses
<marcoceppi> ah, yeah
<b1tbkt> with maas/juju how do you make juju aware of a new node once declared in maas?
<bkerensa_> \o/ charm school
<marcoceppi> o/
#juju 2013-04-18
<b1tbkt> how to add a machine to juju?
<marcoceppi> b1tbkt: What do you mean?
<b1tbkt> i have maas & juju. In maas, I've commissioned an additional (bare metal) machine but that is not reflected, for instance, in 'juu status'
<marcoceppi> b1tbkt: You need to deploy a service. Juju will only add a machine in its status once it has a reason to. Otherwise it leaves the machine out there in case something else uses it
<b1tbkt> ahh okay. makes perfect sense. tks!
<marcoceppi> So try deploying something and Juju should then requisition that node for use and it'll thus show up in the status
<b1tbkt> is juju communicating with maas then to identify available resources?
<b1tbkt> or, i should say, is it relying on maas to know about additional (machine) resources?
<marcoceppi> b1tbkt: Yes, it relies on the provider to give it a machine
<marcoceppi> b1tbkt: It's been forever and a day since I used MAAS and Juju so I don't know the answer, but I assume Juju gives an error when MAAS has no more machines available
<b1tbkt> hrmm. okay.  i appreciate the insight. any particular technical reason that you haven't used both together lately or just circumstance?
<marcoceppi> b1tbkt: Purely circumstance, I don't have enough hardware to throw at MAAS at the moment. I hope to use it more in the coming months when I get more physical hardware to play with
<b1tbkt> ack. tks. just test driving it now for the first time. fortunately I've got a spare dozen boxes at my disposal. definitely a high barrier to entry w/ maas.
<marcoceppi> b1tbkt: cheers! Good luck
<b1tbkt> tks. only other problem so far is that ipmi startup doesn't seem to be working quite right. credentials get populated inside the server ipmi interface. not sure what's going on there but it's likely a problem for another day ;)
<mfisch> is this the correct way to deploy local changes to a charm?  juju deploy --repository=trunk local:precise/tracks  I ask because it's not seeing my changes
<marcoceppi> mfisch: is trunk a directory? Could you run a tree inside of the trunk directory?
<mfisch> my charm is in here, trunk/precise/tracks/
<mfisch> it looks right to me based on the docs
<mfisch> I destroyed everything and will try one more time
<marcoceppi> mfisch: cool, just checking. You can drop the precise/ from the local:precise/tracks
<marcoceppi> just local:tracks should be sufficient since the series is determined by the environment.yaml file
<mfisch> ok
<mfisch> I should see pass or fail in about 2 more mins
<marcoceppi> When you run the deploy, it should confirm that it's using the local repository over the charm store (cs)
<marcoceppi> by confirm, I mean it should just echo back that it's doing so, not actually prompt you
<mfisch> marcoceppi: I'm fixing your review comment, it was a dumb mistake
<mfisch> INFO Searching for charm local:precise/tracks in local charm repository: /home/mfisch/experiments/tracks/trunk
<marcoceppi> So, /home/mfisch/experiments/tracks/trunk has the following structure: /home/mfisch/experiments/tracks/trunk/precise/tracks; where tracks is the charm?
<mfisch> yes
<marcoceppi> cool
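(A quick sketch of the two equivalent invocations being discussed, using the repository path from the log line above:)
    # explicit series
    juju deploy --repository=/home/mfisch/experiments/tracks/trunk local:precise/tracks
    # series picked up from default-series in environments.yaml, as suggested above
    juju deploy --repository=/home/mfisch/experiments/tracks/trunk local:tracks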
<mfisch> see this is still wrong cd tracks-2.2.1
<mfisch> it should be a mv blah and then cd tracks
<mfisch> 18 unzip tracks.zip
<mfisch> 19 mv tracks-${VERSION} tracks
<mfisch> 20 cd tracks
<marcoceppi> So, the charm you're deploying doesn't reflect what was actually deployed?
<mfisch> it does not appear that way
<mfisch> I'm sshed in and rechecking
<marcoceppi> Are you deploying to local or another cloud provider?
<mfisch> marcoceppi: cloud provider
<mfisch> marcoceppi: wait
<marcoceppi> hum, and you said you're destroying the environment between tests?
 * marcoceppi waits
<mfisch> yeah
<mfisch> so the install file here looks right
<mfisch> unzip tracks.zip
<mfisch> mv tracks-${VERSION} tracks
<mfisch> cd tracks
<marcoceppi> so the plot thickens
<mfisch> but the logs
<mfisch> here's the unzip finishing
<mfisch>  extracting: tracks-2.2.1/vendor/query_trace.tar.gz
<mfisch> and then ERROR: + cd tracks-2.2.1
<mfisch> thats the old code
<marcoceppi> pastebin the whole charm.log file
<marcoceppi> It's late(early) here, but another pair of eyes might help
<mfisch> yeah it's getting late here even (Colorado)
<mfisch> marcoceppi: http://paste.ubuntu.com/5717798/
<mfisch> marcoceppi: the old code did cd tracks-${VERSION} which is what I see in the log
<marcoceppi> huh. The output indicates that it's using your local version
<marcoceppi> If you're around tomorrow I can try to deploy your latest changes from here and try to replicate the issue.
<marcoceppi> It's about time for me to retire to the bedroom
<mfisch> marcoceppi: sure, I'll be here thanks
<mfisch> marcoceppi: I'm going to sign off too, I have baby duty
<marcoceppi> mfisch: cheers, good luck with that
<AskUbuntu> Extending Apache charm to include Apache Modules | http://askubuntu.com/q/282660
<bcmfh> Has anyone succesfully bootstrapped juju with devstack?
<marcoceppi> bcmfh: I tried about 5-8 months ago and wasn't able to get it to work quite right
<bcmfh> marcoceppi, yeah, I'm a bit hampered by my lack of Python understanding, I'll try the ask-ubuntu site
<ahasenack> hey guys, are there no quantal i386 builds of juju-core (golang)? Why?
<ahasenack> https://launchpad.net/~juju/+archive/devel/+packages only amd64
<orospakr> hey, where is the agent source code for the golang version of juju?  I'm looking around in the bzr branch lp:juju-core, but not finding anything eminently obviously the agent.
<orospakr> s/agent source code/source code of the agent/
<marcoceppi> orospakr: I think the agent code is part of the whole juju-core package
<orospakr> alright, I figured it might be that :)
<marcoceppi> orospakr: I could be wrong but that's kind of how the py-juju version works. A lot of the go-juju guys hang out in #juju-dev - you might be able to get more detailed answers there
<orospakr> another nub question: it seems like the canonical (heh, see what I did there) representation of your deployment structure lives inside the first instance.  can it be dumped and recreated on a different cloud provider (or on a different account on the same provider?)
<marcoceppi> orospakr: It can, it's not currently a part of the core juju project, but there's a side project called juju-jitsu which adds commands like jitsu export and jitsu import which take a snapshot of your running environment and allow you to move that structure to another provider. The downside is: any data in the previous cloud doesn't move with your deployment.
<orospakr> fair enough. :)
<marcoceppi> Depending on your deployment, a lot of time it's pretty easy to wrap the jitsu export/import to also sync your data
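A rough sketch of the export/import flow described above; the exact arguments and the environment names (ec2, hp) are assumptions, not taken from the log:

    jitsu export -e ec2 > snapshot.json    # capture the running deployment structure
    juju bootstrap -e hp                   # bring up the target environment
    jitsu import -e hp snapshot.json       # recreate services and relations there
    # note: application data does not move; it has to be synced separately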
<orospakr> what's the general strategy for handing backups of juju managed services that store persistent data? is there a common pattern charm writers tend to use for enabling data export?
<marcoceppi> Well
<marcoceppi> orospakr: There are charms like ceph, nfs, etc that can handle pooled storage across multiple nodes. But as it stands now we don't quite have a backup charm or anything in core that handles this. I know there's a lot of talk about making a "backup charm" that would
<orospakr> okay, fair enough! :)
<marcoceppi> be configured to know what data to pull and where to store it, say S3 or something else. Then know when to pull it down
<orospakr> it would have to cooperate with the charms that are responsible for things that store data, like mongodb, postgresql, etc.
<marcoceppi> orospakr: Right, so it'd probably be a subordinate charm that would reside on these units and can then be configured by either the parent charm (mongodb, psql, mysql, etc) or via a config option of what paths/files to back up
<orospakr> that sounds like a great design to me!
<marcoceppi> So, immediately it's a pretty large/robust charm, which is why it hasn't really been tackled yet, I assume
<marcoceppi> I hope in this next upcoming cycle we can spend some time figuring out how to handle things like backups and the like
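For illustration, a subordinate charm along the lines discussed above would declare itself in metadata.yaml roughly like this; the charm name and summary are invented for the example:

    name: backup
    summary: hypothetical subordinate that ships configured paths to off-unit storage
    subordinate: true
    requires:
      backup-host:
        interface: juju-info
        scope: container    # attaches one unit of the subordinate to each parent unit

The paths to back up and the destination (for example an S3 bucket) would then be exposed through config.yaml options or passed over the relation by the parent charm.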
<orospakr> so, if the deployment structure is basically sticky (excepting any clever use of jitsu import/export) to the single cloud account, what is the usual method for having a separate staging instance of all of your stuff discrete from production? or is the notion that most of the logic is in charms themselves, so having a duplicate deployment isn't such a big deal?
<marcoceppi> orospakr: it's the latter, almost all the logic is in the charms. So you can juju deploy -e staging your_service which would be identical to juju deploy -e production your_service, where the staging environment might be a private MAAS, OpenStack, or even your local machine and production is any other cloud
<orospakr> hmm, nice.
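A heavily abbreviated environments.yaml along the lines described above (real environments need provider credentials and other keys not shown here):

    environments:
      staging:
        type: local          # or maas / a private openstack
        default-series: precise
      production:
        type: ec2
        default-series: precise

    juju deploy -e staging your_service
    juju deploy -e production your_service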
<orospakr> alright, here's a doozy: what is the AGPL understood to consider as a derivative work?  particularly, are charms considered derivative work of juju?
<marcoceppi> orospakr: IANAL and I think SpamapS knows better than I do, but no, charms are not considered derivative works of Juju
<orospakr> ok! IANAL is understood ;)
<marcoceppi> Considering charms have their own licenses, I'm going to go with not derivative works
<SpamapS> marcoceppi: IANAL too btw :)
<SpamapS> charms have their own licenses
<marcoceppi> SpamapS: true, but I feel you have a far better grasp on the licensing aspect
 * marcoceppi bows
<SpamapS> they are like python scripts.. python scripts are not derivative of python
<marcoceppi> Cool, that's what I figured
<orospakr> I figured I'd best ask; some interpreters of GPL license versions consider API consumption to be a derivative work
<orospakr> hm, is it possible to use more than one cloud provider in a single juju environment?
<marcoceppi> orospakr: Not at the moment. There's talk of cloud federation as being a core feature for juju but I don't think any work has started on it
<orospakr> hm, cool.
<orospakr> does juju-jitsu work with the golang version of juju?
<marcoceppi> orospakr: not sure, but I'm going to venture a guess of no
<lifeless> SpamapS: ^
<SpamapS> orospakr: some bits of jitsu might work fine w/ juju-core (the go port) but most will not as they directly call the python API's from juju.
#juju 2013-04-19
<AndyBear> hi all. I have a problem and I just cannot find anything online. my juju deploy just never finishes and is forever in the state pending.
<AskUbuntu> What Is Juju and Charm Store | http://askubuntu.com/q/283139
<ahasenack> hi, do you guys know if gojuju can be used with canonistack? I remember something about "s3" (swift)
<ahasenack> I'm getting 404 errors with juju status, and other errors with bootstrap (might be related to what was just said in the mailing list about gojuju not bootstrapping in raring)
<marcoceppi> ahasenack: I believe it can, you might want to verify in the #juju-dev room
<mfisch> marcoceppi: morning
<marcoceppi> mfisch: o/
<mfisch> marcoceppi: you a skins fan?
<marcoceppi> eh, I guess so. Football isn't quite my bag
<mfisch> heh, okay, that was my team growing up in WV
<mfisch> and family living in Alexandria
<marcoceppi> mfisch: ah, I'm over in Falls Church
<mfisch> my uncle used to live there
<mfisch> only my sister is left there and she moved to maryland for political reasons
<mfisch> marcoceppi: anyway, did you try that new install file?
<marcoceppi> mfisch: not yet, I was swamped yesterday didn't get a chance to try it out
<mfisch> marcoceppi: okay, I'm not in a rush, I have a spare hour today
<b1tbkt> 'cs:quantal/mysql': entry not found
<marcoceppi> b1tbkt: what's the full command/error?
<b1tbkt> juju deploy mysql
<b1tbkt> ERROR Error processing 'cs:quantal/mysql': entry not found
<marcoceppi> Ah, probably because there aren't many quantal charms. You'll likely want to set your default series to precise in your environments.yaml file
<marcoceppi> Most charms are developed against the LTS release
<b1tbkt> ack thanks. had it set to quantal for a reason but was just able to point it at the precise charm manually
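For reference, either change the default series or point at the precise charm directly; a minimal sketch of both options discussed above:

    # environments.yaml, under the environment in use
    default-series: precise

    # or deploy the precise charm explicitly
    juju deploy cs:precise/mysql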
<orospakr> hm, how does one handle ubuntu OS updates, and upgrades between LTS versions?
<orospakr> hm, I stopped my juju instances from EC2's console directly, and restarted them.  juju status works, but at least one of the workloads has failed because /mnt/ramdisk is missing.
<orospakr> i would like my installation to be robust against, say, my EC2 VMs stopping for whatever reason.
<orospakr> and these have ebs root device, so, I'd expect it to work. am I missing something obvious?
#juju 2013-04-20
<mwhudson> hi
<mwhudson> i'm trying to follow https://maas.ubuntu.com/docs/quantal/juju-quick-start.html
<mwhudson> and having a few problems
<marcoceppi> mwhudson: what are the issues you're having?
<mwhudson> marcoceppi: ah, hi
<mwhudson> marcoceppi: i overcame my initial issues but now i get
<mwhudson> root@calxeda02-11-00:~# juju  bootstrap
<mwhudson> 2013-04-19 21:41:45,622 INFO Bootstrapping environment 'maas' (origin: distro type: maas)...
<mwhudson> 2013-04-19 21:41:47,124 ERROR No matching node is available.
<marcoceppi> mwhudson: Just so you know, I glanced over those docs and they're pretty outdated
<mwhudson> marcoceppi: i sort of suspected as much
<mwhudson> i'm wondering if this is some kind of series (precise vs quantal) thing
<mwhudson> but it's all pretty opaque to me :)
<marcoceppi> Did you install Juju from the ppa or from the distro?
<mwhudson> ppa
<marcoceppi> excellent, do you have nodes in maas that are 'ready' (or whatever the state is that maas displays)?
<mwhudson> yeah
<mwhudson> and clicking 'start node' on one of them worked
<mwhudson> (eventually :-)
<marcoceppi> mwhudson: cool, let me pore over my notes from when I tried this about a year ago
<mwhudson> i've got as far as seeing "MAASAPINotFound: File not found" and "NodesNotAvailable: No matching node is available." in the maas.log file when i try to bootstrap
<mwhudson> but i don't know what arguments are being passed...
<mwhudson> was poking around the juju source
<mwhudson> marcoceppi: fwiw, this is an armhf setup
<marcoceppi> heh, so all bets are off ;)
<mwhudson> i couldn't possibly comment!
<marcoceppi> When I tested this, I used virtual box to simulate maas nodes, so what I got working probably isn't a standard setup
<marcoceppi> mwhudson: A couple of things, are the nodes assigned to the same user whose oauth key you're using?
<mwhudson> marcoceppi: the nodes are not assigned at all yet
<mwhudson> there is only one user set up in maas yet
<marcoceppi> mwhudson: I think that user needs to "own" them to use them, which might be why you're getting the no matching node available
<mwhudson> oh ok
<marcoceppi> I found this: http://askubuntu.com/questions/172011/no-matching-node-is-available-error-when-trying-to-bootstrap-juju
<mwhudson> maas-cli default node acquire node-.* sort of thing?
<mwhudson> well
<mwhudson> commissioning is complete
<marcoceppi> mwhudson: I wouldn't be able to say
<mwhudson> hm
<mwhudson> the api method that bootstrapping is calling (and failing on) is acquire
<marcoceppi> ah, so it acquires them for the user
<mwhudson> yeah
<marcoceppi> All the questions on Ask Ubuntu about that error say that means there are no machines "ready" but as you've said that's not the case
<mwhudson> right
<mwhudson> i'm about >< this far from editing maas source to add some logging methods :)
<mwhudson> ah hahahaha
<mwhudson> POST:<QueryDict: {u'mem': [u'512'], u'cpu_count': [u'1'], u'arch': [u'amd64'], u'op': [u'acquire']}>,
<mwhudson> i think i can see the problem here
<marcoceppi> OH, right it defaults to amd64
<marcoceppi> For all cloud providers
<marcoceppi> You should be able to change that with --constraints "arch=armhf" in the bootstrap and subsequent commands
<marcoceppi> mwhudson:  https://juju.ubuntu.com/docs/constraints.html
<mwhudson> i can't put that into environments.yaml?
<mwhudson> hm
<mwhudson> progress
<mwhudson> LookupError: SSH authorized/public key not found.
<mwhudson> oh
<mwhudson> right
<mwhudson> the key isn't on this node...
<marcoceppi> mwhudson: unfortunately no, you'll have to specify it for each bootstrap, and then either keep specifying it for each deploy or use set-constraints to change the default
<mwhudson> ok
<mwhudson> well, i presume *something* is happening :)
<marcoceppi> mwhudson: if bootstrap exited cleanly, then there's a good chance something _is_ happening; after a few mins you'll find a node somewhere has turned on and is ready to roll
<mwhudson> yeah
<mwhudson> ah yes, life on the serial console of the allocated node
<marcoceppi> huzzah
<mwhudson> i guess i'll be wanting to set up a local archive mirror if i'm going to do this a lot...
<marcoceppi> mwhudson: probably a good idea
<marcoceppi> maas especially can be pretty package hungry
<mwhudson> although configuring packages seems to be taking the vast bulk of the time
<mwhudson> i wonder if anything came of that "dd an image onto the disk" idea...
<mwhudson> frigging ssh host key checking
<mwhudson> marcoceppi: seems that the arch constraint was automatically propagated into the environment
<marcoceppi> mwhudson: they may have changed the default behavior of bootstrapping with constraints. It does make sense to make that the default
<mwhudson> right, two more nodes pending
<mwhudson> time to go home
<mwhudson> marcoceppi: thanks for your help!
<marcoceppi> mwhudson: o/ no problem!
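For reference, the arch constraint discussed above ends up looking roughly like this (mysql here is just a placeholder service):

    juju bootstrap --constraints "arch=armhf"
    juju deploy --constraints "arch=armhf" mysql
    juju set-constraints arch=armhf    # make it the environment default afterwards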
#juju 2013-04-21
<ormalud> is there any documentation of jitsu out there? I'm interested in how the deploy-to command actually works. does it deploy services to "bare metal" machines or does it use lxc to provide some isolation of services on a machine?
<AskUbuntu> Specify ami in Juju | http://askubuntu.com/q/284010
#juju 2014-04-14
<jose> hey ValDuare
<raywang> hello juju team, how can I disable the DEBUG info from "juju debug-log", and just leave INFO from the output? :)
<cory_fu> Hrm.  My install hook (running under amulet on LXC) seems to be hanging on "ldconfig deferred processing now taking place" and just sits there with no apparent activity going on.
<cory_fu> Any suggestions on how to figure out what's going on, or should I just jack up the timeout and hope for the best?
<cory_fu> arosales, perhaps?
<arosales> cory_fu, hello
<cory_fu> Was hoping you might have some advice on my question.  :-)
<marcoceppi> cory_fu: what are you installing?
<arosales> ldconfig . . . .
<cory_fu> Running the deploy test on my allura charm.  The allura unit hangs and times out during the install hook
<arosales> cory_fu, got a pointer to your amulet test and a pastebin of your tests?
<cory_fu> Actually, it doesn't even seem like it's getting into the hook at all.  I'll put my log file up
<cory_fu> http://pastebin.ubuntu.com/7250318/
<cory_fu> So I'm wrong, it is getting into the install hook
<marcoceppi> cory_fu: yeah, it's hanging on openjdk install
<cory_fu> Why would that happen, and can I fix it?
<marcoceppi> no idea, and that's entirely up to you
<cory_fu> Bah.  :-p
* lazyPower changed the topic of #juju to: Weekly Reviewer: lazyPower || Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
<arosales> as a quick test to get a charm running on a more recent kernel I tried
<arosales> http://paste.ubuntu.com/7250984/
<arosales> ideally it would be nice to have a config option to specify this, but what do folks think about rebooting in the install hook? Is there a better place?
<arosales> That tested OK for me, but I wanted to see if any other folks had better suggestions on where a reboot should be.
<lazyPower> arosales: It makes sense: since it's a pre-dependency, the last step of the install hook should be that reboot so we ensure everything else has been run.
<lazyPower> eg: i wouldn't want my node to reboot every time config-changed runs, as at that point we can assume the node has already joined a cluster of the service. +1 from me on the current implementation
<arosales> lazyPower, thanks for the feedback. I did add that as the last command in the install hook
<kirkland> I have a bundle, which currently references a bunch of cs: charms
<kirkland> I need to switch all of those cs: to local: charms
<kirkland> I have copies of all of the charms locally
<kirkland> what else do I need to do, to tell the bundle where my local copy of the charm store, is located?
<kirkland> marcoceppi: jcastro: ^
<marcoceppi> kirkland: set JUJU_REPOSITORY before running the bundle
<kirkland> marcoceppi: export JUJU_REPOSITORY=/srv/charmstore/
<kirkland> marcoceppi: and then change cs: to local: ?
<marcoceppi> kirkland: change charm: cs:... to just charm: <charm>
<kirkland> marcoceppi: so charm: "cs:precise/mysql-29" becomes ... what?
<marcoceppi> kirkland: charm: mysql
 * marcoceppi checks
<marcoceppi> yeah, that should do it
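A minimal before/after sketch of the bundle change described above (the surrounding bundle structure and service name are assumed):

    export JUJU_REPOSITORY=/srv/charmstore/

    # before
    mysql:
      charm: "cs:precise/mysql-29"

    # after -- resolved against $JUJU_REPOSITORY/<series>/mysql
    mysql:
      charm: mysql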
<jose> lazyPower: if you're up for some reviewing, I pushed a new fix for the owncloud charm
<jose> guys, what should I do if I want to be the maintainer of a charm?
<ghartmann> I haven't found it yet, but do we have an ipython notebook juju charm?
<ghartmann> do you think that this would be worth working on ?
#juju 2014-04-15
<Kupo24z> Hey all, I've setup a flat-networking openstack cluster with juju however the networking config area seems to be missing in horizon. Nova network shows running on all compute nodes however. Any ideas?
<jose> negronjl: ping
<negronjl> jose: hey
<jose> negronjl: hey! I've been working on the Seafile charm you had made, and I have it finished now. I'm just missing a copyright file, which would have to come from you
<jose> not sure if you could put one in your branch so I can do a pull and take care of sending it to the Charm Store
<negronjl> jose: submit it and I'll add it
<jose> negronjl: you mean, should I push my branch and you'll put an MP to add it?
<negronjl> jose: you can add this one: https://github.com/haiwen/seafile/blob/master/LICENCE.txt
<negronjl> jose: also, add your name to the copyright ( for the charm )
<jose> negronjl: I just did some tweaks, you're actually the author :)
<negronjl> jose: that should suffice then, submitted for review and we can see how it goes
<negronjl> jose: add my name to the copyright as well ... push it and I'll review it
<negronjl> jose: or just MP the thing and I'll take it from there
<jose> ok, I'm finishing up the README and the icon and I'm pushing it
<negronjl> jose ... oh ... and thanks for fixing it :)
<jose> no worries :)
<jose> negronjl: https://code.launchpad.net/~jose/charms/precise/seafile/trunk if you want to take a look
<negronjl> jose: will you do an MP? That way I'll do the review and we can carry on from there
<jose> sure
<jose> (the bug with charmers for inclusion in the charm store is already open)
<negronjl> jose: I'll look at it in a bit
<jose> great, thanks :)
<negronjl> jose: Thanks for the changes.  I merged the changes
<jose> np, looking forward to seeing it on the store soon
<negronjl> jose: I added you to the maintainers as well.
<jose> awesome, thanks
<jose> I'm planning on adding memcached and postfix support in the future, looks promising
<benonsoftware> Hiya
<benonsoftware> I'm currently making a charm for folding@home / origami and I'm not sure if I understand the 'provides:' section in metadata.yaml
<Kupo24z> Hey all, anyone know how to add a fixed range in juju openstack without it being overwritten (nova.conf)?
<timrc> Hrm when I deploy a local jenkins environment using a mixture of precise and trusty containers, the precise containers just hang :( 1.18.1-trusty-amd64
<timrc> I tried creating an lxc container by hand with -r precise to confirm that was the problem
<timrc> The precise containers are forever in a "Pending" state
<jose> I think there was a bug about that
<mattyw> anyone seen this error before? WARNING failed to write bootstrap-verify file: cannot make S3 control bucket: A conflicting conditional operation is currently in progress against this resource. Please try again.
<mattyw> during a bootstrap
<jose> mattyw: that's an S3 error, is your bucket name unique and you have enough privileges for bucket creation?
<mattyw> jose, everything was working until I tried switching regions about 30 mins ago, I switched back now and get this problem
<mattyw> I've logged into my account, deleted all buckets
<jose> according to Amazon, that's a 409 Conflict error, with code OperationAborted
<smarter> so, I'm running juju on precise and doing a local deployment with saucy machines, and they all fail with some apparmor error: http://sprunge.us/XgPJ
<smarter> previously, I was deploying with raring machines and it worked fine
<jose> mattyw: apparently bug #1183571 is related to that
<_mup_> Bug #1183571: bootstrap fails: A conflicting conditional operation... <cmdline> <hours> <juju-core:Triaged> <https://launchpad.net/bugs/1183571>
<mattyw> jose, I changed my control bucket id and it seems to work ok
<jose> awesome then
<smarter> any idea?
<mattyw> jose, thanks for the link to the bug
<mattyw> anyone seen this error when trying to deploy a local charm? ERROR error uploading charm: cannot update uploaded charm in state: not okForStorage
<jcastro> lazyPower, you're on review this week iirc?
<lazyPower> jcastro: yeah, it's going to be a repeat of the last 2 weeks - if you run into some high priority stuff please make a note on the board for me and i'll hit it up EOD
<jcastro> This afternoon I'll try to hit the easy ones and at least +1
<lazyPower> Much appreciated my man
<smarter> is there any way to disable apparmor in the local instances to work around http://sprunge.us/XgPJ ?
<smarter> oh apparently, the problem was fixed in dbus: https://launchpad.net/ubuntu/saucy/+source/dbus/+changelog
<smarter> is there any way to run apt-get update && apt-get dist-upgrade on the lxc containers spawned by juju, before anything else?
<marcoceppi> smarter: you can run juju add-machine to enlist a bunch of machines for deploying to, then run an ssh loop to run those commands
<smarter> okay that could work, is there any way to automate that?
<smarter> wait actually saucy already has dbus 1.6.12-0ubuntu10 so that's probably not the cause of my problem
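A rough sketch of the add-machine/ssh loop marcoceppi suggests; the machine numbers are placeholders and this assumes the containers come up far enough to accept juju ssh:

    for i in 1 2 3; do juju add-machine; done          # pre-enlist some machines
    for m in 1 2 3; do
      juju ssh $m "sudo apt-get update && sudo apt-get -y dist-upgrade"
    done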
<jcastro> marcoceppi, reminder to put your juju logging plugin thing in github pls
<smarter> marcoceppi: I can't even juju ssh to the lxc to disable apparmor, because ssh to lxc containers is broken...
<marcoceppi> smarter: no it's not. What version of juju are you using?
<marcoceppi> that was fixed in 1.18
<smarter> 1.16.x
<smarter> ah
<smarter> maybe I should try 1.18 again, but it broke other stuff iirc
<smarter> sigh
<lazyPower> marcoceppi: i wish juju pprint was included by default
<lazyPower> such a handy plugin
<jcastro> ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"local\"")
<jcastro> anyone  see that before? Fresh deploy on LXC on trusty
<jcastro> lazyPower, actually I'd prefer normal juju status to be not so wordy by default
<jose> guys, what should be the process in case I want to be the maintainer of a charm?
<marcoceppi> jose: is the charm unmaintained?
<jose> marcoceppi: no, but the maintainer hadn't pushed a fix for ~1.5 years
<marcoceppi> jose: we don't really have good criteria for what constitutes an unmaintained charm, which one is it?
<jose> marcoceppi: owncloud
<jcastro> jose, I would snag it, but send nathwill a courtesy email
<jose> jcastro: will do
<cory_fu> Why do I occasionally get this error: http://pastebin.ubuntu.com/7256493/
<cory_fu> Seems like it's failing on the log command
<cory_fu> But it only happens every once in a while
<smarter> okay, so I have this error http://sprunge.us/XgPJ when trying to do a local deployment from a precise host to either saucy or trusty instances
<smarter> but raring works fine
<jamespage> this might be a dumbass question but why when I do juju upgrade-charm for a charm deployed from a local branch does juju not just replace whats on disk in the units with what I have locally?
<marcoceppi> it pretty much does that, in that it zips the contents, uploads to the juju storage service, then has all the units pull the zip down, extract over your CHARM_DIR and run upgrade-charm hook
<marcoceppi> jamespage: ^
<jamespage> marcoceppi, hmm
<jamespage> marcoceppi, I'm having to use --switch to force a wholesale replacement of the charm
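For reference, the two variants being compared look roughly like this (charm and repository names are placeholders):

    # normal upgrade: re-zips the local charm and pushes it to the units
    juju upgrade-charm --repository=/path/to/repo mycharm

    # forcing a wholesale swap to a different charm URL
    juju upgrade-charm --switch=local:precise/mycharm mycharm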
<cory_fu> Getting an error when trying to commit in bzr:
<cory_fu> bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/~johnsca/charms/precise/apache-allura/refactoring-with-tests/.bzr/branch/lock): Transport operation not possible: http does not support mkdir()
<cory_fu> How do I fix that?
<cory_fu> I did set my launchpad-login already
<cory_fu> Ah.  bzr bind lp:... instead of just setting the push branch, fixed it
<cory_fu> I'm now consistently getting this error: http://pastebin.ubuntu.com/7257083/
<cory_fu> The juju-log helper works fine, and then a few steps later in the hook suddenly fails.
<cory_fu> Here is the hook for reference: http://bazaar.launchpad.net/~johnsca/charms/precise/apache-allura/refactoring-with-tests/view/head:/hooks/install
<cory_fu> mbruzek, can you perhaps help me with that ^?
<mbruzek> Yeah
<mbruzek> Just dug up the url for you
<mbruzek> cory_fu, https://docs.google.com/a/canonical.com/document/d/19JyDGyiVqFi4K66yCp3iYH-TbrZvBmZceq-LnmPbj0w/edit#
<cory_fu> Thanks
<mbruzek> cory_fu, Do you want to get on a G+ hangout ?
<cory_fu> Sure
<timrc> marcoceppi, All but one of my services are starting when I bootstrap and deploy a new local environment.  Looks like juju is only able to start one machine.  The rest sit in pending forever.  Have you encountered this recently?
<timrc> marcoceppi, experiencing this with both 1.17.7 and 1.18.1
<marcoceppi> timrc: I have not, but I have not used local in quite a while
<timrc> ah hm
<marcoceppi> timrc: what does all-machine.log in ~/.juju/local/log look like?
<timrc> marcoceppi, I see a lot of this: machine-0: 2014-04-15 19:39:05 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
<timrc> marcoceppi, http://pastebin.ubuntu.com/7257403/ -- I removed just the bit of log that shows the one service deploying to the only machine that started
<timrc> marcoceppi, machine-0 and machine-1 seem to start without issue, but machine-2 and on do not... no mention of them in the log either
<timrc> marcoceppi, Looks like you're having some connection issues... did you get my pastebin?
<lazyPower> jcastro: VEM charm is promulgated
<lazyPower> any other high priority items in the queue that need callout? otherwise i'm going top to bottom
<jcastro> lazyPower, those mysql ones would be nice
<lazyPower> jcastro: the 3 you linked earler have already been reved/acked
<jcastro> oh awesome, <3
<lazyPower> :) Context switching all day my man. i know whats up
<jose> lazyPower: hey, mind reviewing an MP I just did to unattended-upgrades? it's pretty simple and straightforward, just 3 lines modified
<jose> well, 4
<lazyPower> jose: link to MP?
<jose> lazyPower: https://code.launchpad.net/~jose/charms/precise/unattended-upgrades/add-categories-readme-markdown/+merge/215966
<lazyPower> jose: nix the $'s from the commands in the markdown
<lazyPower> use CODE output, which you have, and i'll ack this
<jose> ok, I'll remove it
<jose> pushed
<jamespage> marcoceppi, figured out my problem - old maas install suffering from nonces issue creating problems
<marcoceppi> huh
<jamespage> gnz
<jamespage> lazyPower, thanks for the review
<lazyPower> jamespage: thanks for the high quality submission
<lazyPower> :)
<jamespage> lazyPower, can't take all the credit - yolanda and ivoks both worked on that one as well
<lazyPower> the merge history was looonnnnggg on that one.
<jamespage> lazyPower, fwiw that charm also works just fine on trusty as well
<jamespage> it can deal with 12.04 rabbit and 14.04 rabbit
<jose> lazyPower: thank you :)
<marcoceppi> jamespage: good to know
<lazyPower> hey mbruzek
<mbruzek> yes?
<lazyPower> i'm down to the new tomcat submission. Just so i'm 100% clear - this is a brandy new charm intended to deprecate tomcat6 and tomcat7 charms, correct?
<lazyPower> not an update to either/or
<mbruzek> new charm based on Robert Ayres code
 * lazyPower cracks knuckles
<lazyPower> here we go then, let the review...COMMENCE!
<lazyPower> btw ty for this <3
<jose> :P
<fro0g> is there a Juju changelog document somewhere describing what's new in the different versions?
<jose> fro0g: I think http://changelogs.ubuntu.com/changelogs/pool/universe/j/juju-core/juju-core_1.18.1-0ubuntu1/changelog is what you're looking for
<fro0g> jose: indeed, thanks!
<smarter> is "juju debug-log" supposed to work with juju 1.18 and local deployments now?
<rick_h_> smarter: I think with 1.19 now
<smarter> ok :)
<lazyPower> Congrats to mbruzek on his newly promulgated Tomcat charm! This is another example of a high quality submission - https://bugs.launchpad.net/charms/+bug/1295710
<_mup_> Bug #1295710: Create a new Apache Tomcat Juju Charm. <Juju Charms Collection:Fix Released> <https://launchpad.net/bugs/1295710>
 * jose claps
<davecheney> lazyPower: what happens if something dies while obtain_tomcat_lock is held ?
<lazyPower> davecheney: actually - good question. Its a sentinel file, so you'd have to manually release the lock.
<davecheney> lazyPower: what does the lock, lock ?
<lazyPower> davecheney: as i understand the source, it locks tomcat from doing things while it's reconfiguring. so the app server continues to churn away while the hooks do their thing, afterwards it recycles the app server
<davecheney> lazyPower: this could be fixed by running each hook in that big block in a ( ) block
<lazyPower> why "fixed"? i dont think theres anything wrong with it.
<lazyPower> i may be getting tripped up on semantics though - if that's a feature you would want out of the tomcat charm, i'd file it so the author can implement it.
<davecheney> if a hook fails
<davecheney> it'll leave tomcat locked
<davecheney> how can an administrator see this ?
<lazyPower> let me redeploy it and evaluate the behavior
<lazyPower> davecheney: also thanks for the feedback
<davecheney> lazyPower: no probs
<lazyPower> davecheney: subsequent runs take control of the file descriptor lock, and if a hook fails, it's shown in the status output
#juju 2014-04-16
<lazyPower> i did some chaos monkey'ing around in here, it seems to be in order
<sarnold> achievement unlocked: chaos monkey!
<davecheney> lazyPower: fairy nuff
<lazyPower> sarnold: thats my middle name ;D
<sarnold> lazyPower: hahaha
<lazyPower> or was it murphys law....
<lazyPower> i forget, either way :P
<jcastro> hey lazyPower
<jcastro> did we test tomcat on trusty too?
<lazyPower> jcastro: nope
<lazyPower> just precise
<lazyPower> its a good candidate for trusty though, it has a really extensive suite of tests
<jcastro> yeah, I'll ask tomorrow
<jcastro> might be a nice one to kick it off
<jose> guys, after config-changed the service isn't stopped/started, right?
<davecheney> jose: no
<jose> good then, thanks!
<marcoceppi> jcastro: I just kicked off a test, fyi
<marcoceppi> against trusty
<marcoceppi> hum, so we don't have trusty images in hpcloud
<marcoceppi> sinzui: is CI testing juju-core against trusty?
<marcoceppi> haha, so you can deploy a subordinate charm, ie: cs:trusty/unattended-updates and relate it to a precise deployed service
<davecheney> marcoceppi: urk
<davecheney> that's bad
<sarnold> haha
<marcoceppi> ¯\_(ツ)_/¯
<davecheney> series, what series
<lazyPower> shhhhhh, just relate
<sarnold> marcoceppi: nice find :)
<davecheney> sarnold: we're entering a world where we have more that one series
<davecheney> this is new stuff
<marcoceppi> a wholleee newwww worrrllddd
<sarnold> davecheney: yeah, the fact that it hasn't been an issue for two years means it could run some pretty deep ruts in the brain :)
<davecheney> http://www.tickld.com/cdn_image_thing/662763.jpg
 * davecheney has lost sleep over this
<marcoceppi> http://i.imgur.com/L2ASIya.gif
<lazyPower> the dutch uzi queen
<lazyPower> boom chakalaka
<davecheney> mmm, dutch uzi
<hobbyBobby> I got some mysterious networking issues trying to run lxc on a virtualbox and expose that to my physical host
<davecheney> hobbyBobby: i can see how that wouldn't work out of the box
<davecheney> -ETOOMANYNATS
<hobbyBobby> saw the tutorials on exposing containers to my network, but I think there's a second level of indirection I must follow
<davecheney> hobbyBobby: so, you've got your machine, running virtualbox, with ubuntu inside, and inside that are some lxc containers ?
<hobbyBobby> yep
<marcoceppi> davecheney: i was mistaken
<hobbyBobby> davecheney: trying to access from host
<marcoceppi> cannot add relation principal and subordinate services series must match
<davecheney> hobbyBobby: which host
<davecheney> marcoceppi: \o/
<davecheney> writing software for the win!
<hobbyBobby> davecheney: the virtualbox host
<davecheney> hobbyBobby: the host inside virtualbox, or the host hosting virtual box
<davecheney> sorry to be a pedant
<davecheney> when it comes to networking
<davecheney> this matters
<hobbyBobby> davecheney: its ok, it's confusing, the physical honest to goodness metal
<hobbyBobby> davecheney: verified juju-gui comes up when I don't try to change networking
<hobbyBobby> davecheney: but when I do, it bootstraps ok but juju-gui fails to start
<davecheney> hobbyBobby: change networking ?
<hobbyBobby> davecheney: using this as a guide http://askubuntu.com/questions/281530/how-do-i-run-juju-on-a-local-server
<davecheney> hobbyBobby: that guide assumes you're running ubuntu on the host
<davecheney> not inside virtual box
<hobbyBobby> davecheney: and using a bridged network inside the juju vm. I kinda figured that as soon as we started chatting
<davecheney> still won't help i'm afraid
<davecheney> all the local machines will have 10/8 addresses
<davecheney> and there is no routing from your host to those
<davecheney> even if they are on a bridged network
<hobbyBobby> davecheney: so no way to say bridge the guest network to the guest lxc network
<hobbyBobby> *host
<davecheney> hobbyBobby: in theory yes, in practice, not
<davecheney> no
<hobbyBobby> davecheney:  wow, bad times, there goes my dream of a self-contained cluster I can show off
<davecheney> hobbyBobby: you could try adding a route to 10/8 from your host to the bridge network that virtual box sets up
<davecheney> that might work
<davecheney> not tested
<marcoceppi> is anyone else having problems connecting to bazaar?
<hobbyBobby> davecheney: ok, yea, I definitely could try that, thanks
<lazyPower> hobbyBobby: do you need it exposed beyond the dev cycle?
<lazyPower> sshuttle would be a good answer to this problem
<lazyPower> marcoceppi: jose is. he just opened an IS ticket
<jose> not actually an IS ticket, but reported and they're investigating the issue
<marcoceppi> ugh, all I want to do is work
<jose> I was so happy branching and pushing :(
<lazyPower> its all these mp's i just acked
<marcoceppi> fwiw, it's back
<lazyPower> lp knew something was up and went away to force us to take a break
<hobbyBobby> lazyPower: not just now, but I would like it to be able to run off of windows
<jose> apparently, marcoceppi is magic
<lazyPower> hobbyBobby: sshuttle is not a permanent fix then. The idea behind sshuttle is a cheap VPN bridge into the remote network.
<marcoceppi> davecheney lazyPower second trusty charm promulgated, tomcat
<lazyPower> and works wonders when just doing dev work or securing the odd hotel-wifi connection.
<lazyPower> marcoceppi: AWESOME!
<lazyPower> matt's going to be jazzed
<lazyPower> marcoceppi: maybe follow up on my email :3
<marcoceppi> lazyPower: going forward, new charms should be trusty tested and considered for promotion at promulgation time
<lazyPower> be like "and this is how easy it is to get acked to trusty with good tests"
<marcoceppi> well, I found a million problems in amulet
 * lazyPower flips tables
<marcoceppi> as in, without a lot of gentle but forceful massaging it doesn't play nice with trusty
<marcoceppi> but that's because we don't have a lot of trusty charms
<marcoceppi> and it tries really hard
<marcoceppi> but it's just not good enough
<lazyPower> probably need to map out that fix during our sprint
<marcoceppi> hopefully next cycle we can get some more resources on charm-tools and amulet
<hobbyBobby> lazyPower: thanks, maybe that kind of thinking will bring me out of the box
<lazyPower> i'd love to pick up the banner on that... but i dont think i'm ever going to get out of the queue at the rate its going.
<lazyPower> which is an awesome problem to have... dont get me wrong
<lazyPower> marcoceppi: would chef-server be considered an application-server or an application? I'm leaning towards application... but i could also just be pedantic....
<marcoceppi> lazyPower: does it matter?
<lazyPower> well, it's not really serving anything
<lazyPower> so it doesn't make sense to be an application-server... and if we want the categories to be clean cut with what's put in there.. i mean you don't want postgres living in misc do you?
<lazyPower> marcoceppi: i'm getting picky... don't mind me
<sinzui> marcoceppi, CI tests that trusty deploys and upgrades. It runs the unit tests on trusty too
<marcoceppi> sinzui: yeah, I realized there was a default-series key, but I couldn't get a bootstrap on HP Cloud
<sinzui> marcoceppi, HP doesn't support trusty yet
<marcoceppi> sinzui: cool, just wanted to make sure
<sinzui> marcoceppi, I have not gotten trusty to work on it, though I last tried 2 weeks ago
<marcoceppi> well, it didn't work last night
<jcastro> mbruzek, hey so what's the workflow for the tomcat charm
<jcastro> like .... let's say I have a war file
<mbruzek> juju deploy tomcat
<mbruzek> juju deploy openmrs
<mbruzek> juju add-relation tomcat openmrs
<mbruzek> juju deploy mysql
<mbruzek> juju add-relation openmrs:db mysql:database
<mbruzek> profit!
<jcastro> ok so it basically deploys other charms that use tomcat?
<jcastro> does it do like, my tomcat apps?
<mbruzek> openmrs is a subordinate
<jcastro> got it
<jcastro> so the apps need to be made into subordinates to use it?
<mbruzek> Robert Ayres had a j2eedeployer charm in his personal space where you could put a random war in and deploy
<mbruzek> I haven't scoped that one out yet, but that is a good idea to add to the list
<jcastro> ok
<jcastro> so this charm is for people who want to charm up their tomcat apps
<mbruzek> j2ee-deployer  is the charm name.
<mbruzek> Yes.  My thought is the j2ee-deployer charm may be too generic
<lazyPower> :) mbruzek congrats on that awesome charm
<mbruzek> Thanks lazyPower
<jcastro> hey so j2ee-deployer is a sub
<jcastro> could you plop your war file in there and then relate that to tomcat?
<mbruzek> In my opinion the j2ee-deployer charm was too generic though.  Often WAR files (such as openmrs) need a database connection, and it is very difficult to do "generic" database relations for j2ee-deployer
 * jcastro nods
<jcastro> ok so let's say I have my app I wrote, myapp
<jcastro> how does this charm help me
<mbruzek> jcastro, Yes that is the model for j2ee-deployer, but only if the WAR file does not need a db or other relations, if it does they need to charm up their own charm
 * jcastro nods
<mbruzek> The tomcat charm offers some more advanced configuration options, for clustering, JNDI, and a robust container for subordinate charms that want to deploy inside it
<jcastro> ok I have pretty much given up us ever having nice URLs
<jcastro> so I am linking to the manage pages. :(
<bbcmicrocomputer> mbruzek: j2ee-deployer can deploy custom web apps with DB relations fine
<jcastro> should we demote tomcat6 and tomcat7 now in the charm store or .... ?
<mbruzek> Well you wrote it so you would know
<jcastro> ok, so if people want to just deploy a war file, then j2ee-deployer is for them
<jcastro> that sound right?
<mbruzek> jcastro, that was my plan to demote the other 2.
<bbcmicrocomputer> jcastro: sure, if they work within the framework of the charm, then they can deploy their custom apps fine
<jcastro> ok
<jcastro> we should likely finish that up and put that in the store
<jcastro> bbcmicrocomputer, how's your time these days? :)
<bbcmicrocomputer> jcastro: j2ee-deployer works where the user just wants to drop their existing app in a Juju environment, but doesn't want to write a charm
<jcastro> ok so we've got both use cases covered then,
<bbcmicrocomputer> jcastro: anything more specific and they can roll their own charm
<bbcmicrocomputer> jcastro: sure
<bbcmicrocomputer> jcastro: I have no time atm unfortunately
<bbcmicrocomputer> jcastro: would love to finish off my work on the Java charm eco-system
<cory_fu> Is there a charmhelper to temporarily change the current directory (ideally, a context manager)?
<marcoceppi> cory_fu: not that I know of
<marcoceppi> cory_fu: you would write one :)
<marcoceppi> cory_fu: there is an os.environ['CHARM_DIR']
<marcoceppi> cory_fu: so you can always find your way home
<ppetraki> cory_fu, probably, but it's just as fast to write one
<cory_fu> Yeah, it was pretty simple.  I'll probably still add it to charmhelpers, though, because it seems useful.
<lazyPower> there's no place like os.environ['CHARM_DIR']
<ppetraki> cory_fu, save cwd, mkdtemp, chdir td
<lazyPower> ba dum psh
<ppetraki> cory_fu, try this, http://stackoverflow.com/questions/169070/python-how-do-i-write-a-decorator-that-restores-the-cwd
<ppetraki> oh nm, path is a new module
<ppetraki> just roll your own
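A minimal sketch of the kind of chdir context manager cory_fu ends up writing; this is an illustration, not the helper that was actually added to charmhelpers:

    import os
    from contextlib import contextmanager

    @contextmanager
    def chdir(path):
        """Temporarily change the working directory, then restore it."""
        prev = os.getcwd()
        os.chdir(path)
        try:
            yield path
        finally:
            os.chdir(prev)

    # usage: always come back to the charm directory afterwards
    # with chdir('/tmp/build'):
    #     ...   # do work in /tmp/build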
<timrc> Is anyone else having issues creating machines using a local provider?  I'm running on trusty with latest lxc and juju-core/local... machines sit in an indefinite pending state
<timrc> no action in /var/lib/lxc (e.g. the containers are not actually being created)
<timrc> lxc 1.0.3-0ubuntu3 and juju-core/local 1.19.0-0ubuntu1~14.04.1~juju1
<lazyPower> timrc: there could be a myriad of reasons why it's not booting. Have you tried removing the cached image file and starting from scratch?
<lazyPower> also, is there any output in the all-machines/machine-0 log?
<timrc> lazyPower, yeah plenty of output.. machine 0 seems to "start" fine... there is no log that I can find, not even the cloud-init log in the .local file, that even mentions any of the other machines it's supposed to be starting
<lazyPower> well machine 0 isn't an lxc machine
<lazyPower> it occupies space on the host
<timrc> I have tried removing everything from /var/cache/lxc as well as /var/lib/lxc
<timrc> lazyPower, Right, I'm just saying that there is plenty of logging for machine 0 and absolutely no mention of any other system even though juju status shows them all "Pending"
<lazyPower> but it should tell you more information about what's going on under the radar - like "Unit failed address assignment, not booting" is what i normally see since my network setup is ridonk
<timrc> I'm going to try to teardown apparmor and try again
<lazyPower> ok sorry i wasn't more help :(
 * timrc yearns for the day when this Just Works (tm) longer than a week before it breaks
 * timrc switched back to a older "known good" version of juju-core/local and still having the issue, so I'm thinking it might be an issue with lxc.  Will try switching back to an older version of that now
<jamespage> lazyPower, do you have any objection to me pushing and promulgating the rabbitmq charm for trusty?
<jamespage> try to get a full openstack deploy from charmstore on 14.04 for release tomorrow
<lazyPower> jamespage: I dont have a problem with it,  you're known for testing and stressing charms
<lazyPower> if there's an outcry for blood do you mind if i signpost them to you?
<lazyPower> jamespage: there's a series of tests that mbruzek wrote for rabbitmq i believe
<lazyPower> if you could get those included, then there wouldn't be any extra special need for that - it would satisfy the store reqs. I don't think those tests made it into the charm yet though
<jamespage> lazyPower, not yet no
<jamespage> hmm - I have the same issue with mysql and mongodb
<lazyPower> i can tell you the mongodb tests i wrote will fail - it worked prior to a merge in the charm
<lazyPower> and i haven't had a chance to get back to triage those tests
<lazyPower> jamespage: if you've got some high priority stuff you need me to expedite just ping me with the MP and i'll take a look
<marcoceppi> jamespage: I'm writing tests for MySQL
<marcoceppi> jamespage: they should be in before trusty
<marcoceppi> and MySQL in trusty charm is a priority for me in general
<cory_fu> Why does charm test continue if 00-setup fails?  Is there any way to prevent that?
<cory_fu> Ah, --set-e.  Seems like 00-setup should be a special case that always implies --set-e
<jcastro> https://juju.ubuntu.com/docs/reference-release-notes.html
<jcastro> hey so I found out this exists ^^^^
<lazyPower> jcastro: evilnick emailed the list about that on monday
<lazyPower> cory_fu: not necessarily. Its predeps for the host env running the tests.
<jcastro> can someone help me get my local provider working?
<jcastro> ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"local\"")
<jcastro> I keep getting this
<jcastro> I've tried every trick reseting the local environment I can think of
<jamespage> marcoceppi, coolio
<jamespage> marcoceppi, do you know if anyone is working on mongodb for trusty?
<marcoceppi> jamespage: not that I'm aware
<jamespage> it works ok on trusty albeit in a single node
<lazyPower> jamespage: the clustering is only protocol stuff, so it should be bueno if it works single node. I'd be game to work with the scaled bundle for testing after my current work queue is complete
<jamespage> lazyPower, cool
<psivaa_> hazmat: hey, i am hitting an issue similar to bug 1287718 in the cts cloud. curious if there is any workaround?
<_mup_> Bug #1287718: jujud on machine 0 stops listening to port 17070/tcp WSS api <cts-cloud-review> <mongodb> <state-server> <juju-core:Triaged> <https://launchpad.net/bugs/1287718>
<hazmat> psivaa_, it should auto restart, ie transient issue
<hazmat> psivaa_, if its not restarting.. please attach /var/log/juju/machine-0.log to the ticket
<psivaa_> hazmat: ack, it's not for me. i'll attach the log to the bug
<hazmat> psivaa_, re workaround.. you can restart the juju agent on that machine
<hazmat> ie. sudo service jujud-machine-0 restart
<psivaa_> hazmat: tried restarting the machine, mongodb, and jujud-machine-0 but without any improvement :/
<psivaa_> i've commented on the bug
<hazmat> psivaa_, that's strange
 * hazmat takes a look at the log
<psivaa_> hazmat: 1.18.1-0ubuntu1~14.04.1~juju1 is my juju version, if that matters
<psivaa> hazmat: http://paste.ubuntu.com/7262521/ contains the logs after juju agent restart.
<psivaa> i'll attach that too to the bug
<hazmat> psivaa, thanks. sorry helping someone else right now.. free in 15m
<psivaa> hazmat: np, i'll wait :)
<hazmat> psivaa, back
<psivaa> hazmat: 10.230.21.113 is the ip of the bootstrap machine, in cts lab
<hazmat> psivaa, what architecture are you on?
<psivaa> hazmat: amd64 on both local and the instance
<hazmat> psivaa, ic the restarts but only one case where there's a panic.. what exactly are your symptoms ?
<hazmat> psivaa, ie you can't run juju status?
<psivaa> hazmat: i keep getting 'ERROR state/api: websocket.Dial wss://192.168.1.2:17070/: dial tcp 192.168.1.2:17070: connection timed out'
<hazmat> psivaa hmm. that's odd.. is there a network firewall between you and the state server?
<hazmat> psivaa, ic the state server listening on all ip addresses on that machine
<psivaa> hazmat: there are some additional restrictions for the instances. let me pm the details pls
<hazmat> psivaa, so let's verify connectivity to the state server from the host, and then verify off host..
<hazmat> ie. from the machine.. telnet 192.168.1.2 17070
<hazmat> and then do the same from off host
<psivaa> hazmat: http://pastebin.ubuntu.com/7262626 is the netstat
<hazmat> psivaa, if you're going through firewalls/bastions to get to your machine
<hazmat> you may need to sshuttle into it to create a pseudo vpn for connectivity
<hazmat> psivaa, yup.. so its doing the right thing and trying to listen on all net addresses
<psivaa> hazmat: i am able to telnet to that ip
<hazmat> psivaa, from outside of the machine?
<psivaa> hazmat: no, one sec pls. i misunderstood
<hazmat> psivaa, wanted to try both.. so its good to know it can be connected from on machine as well.
<psivaa> hazmat: should i be able to telnet to  192.168.1.2 from my local host
<psivaa> hazmat: i dont have anyother instances in that network to try within it
<hazmat> psivaa, if you expect juju client to be able to talk to it.. yes
<hazmat> psivaa, ie. underlying issue your having is network connectivity.. sshuttle can help you bridge the connectivity
<psivaa> hazmat: ack, let me try that. thanks
<psivaa> hazmat: sorry to bother you again.. the same issue is present even after sshuttling in to the instance: http://pastebin.ubuntu.com/7262777/
<hazmat> psivaa-afk, you're not sshuttling the correct address range
<hazmat> ie you need 192.168.2.0/24
<hazmat> you're sshuttling 10.x addresses, when it's clearly connecting to 192.168.x addresses
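For reference, the sshuttle bridge hazmat is describing would look roughly like this; the login user is an assumption and the subnets follow the addresses mentioned above:

    sshuttle -r ubuntu@10.230.21.113 192.168.1.0/24 192.168.2.0/24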
<nuclearbob> ahoy!  I've deployed postgresql using juju, and I'm trying to get connected, but I don't think I have a user or a database yet.  Can I create those without connecting another charm to postgres?
<lazyPower> nuclearbob: it creates those for you at time of relationship creation
<lazyPower> and provides it via the relationship hooks
<nuclearbob> lazyPower: since I don't have any other instances, how do I create that relationship?
<lazyPower> nuclearbob: of?
<nuclearbob> lazyPower: I want to connect using psql on my workstation
<lazyPower> ahhhh ok.
<lazyPower> I'm fairly certain there's some administrative credentials in the log after  deployment.. marcoceppi ^ insight?
<marcoceppi> nuclearbob: check the README and ask stub
<nuclearbob> marcoceppi: is this the README, or something else: https://manage.jujucharms.com/charms/precise/postgresql
<marcoceppi> nuclearbob: the README is on that page
<marcoceppi> nuclearbob: there's also a psql charm, that is a service you can deploy to attach to
<nuclearbob> marcoceppi: I'd like to avoid deploying more instances than I need to since my openstack provider is chronically oversubscribed, but I can try the psql charm if it comes to that
<marcoceppi> nuclearbob: yeah, stub the author would be able to help you out
<nuclearbob> marcoceppi: thanks, I'll see if stub replies
<rharper> I've got a deployed jenkins instance (1.16.6 on precise) and the log file repeating  this: juju runner.go:220 worker: exited "uniter": failed to initialize uniter for "unit-jenkins-0": listen unix /var/lib/juju/agents/unit-jenkins-0/run.socket: bind: address already in use
<rharper> how do I go about resetting/restarting the agents?
<marcoceppi> rharper: you can ssh to the machine, juju ssh jenkins/0; then run `sudo restart jujud-unit-jenkins-0`
<rharper> marcoceppi: yeah -- I did that; it still complains with the same error; I guess something else has that socket file open ?
<marcoceppi> rharper: but, 1.18.1 has been released, which is a much newer stable release
<marcoceppi> maybe
<rharper> marcoceppi: yeah; I tried an upgrade-juju, but it just says WARNING juju 1.17.0 compat mode
<marcoceppi> interesting
<rharper> possibly because it was bootstrapped with upload-tools... but that's just a guess
<rharper> marcoceppi: any idea what happens if I nuke the run.socket in the agent dir ?
<marcoceppi> rharper: I'd stop the agents first
<rharper> right, I have it stopped now
<rharper> since it was just generating that message, I had about 1G of it; didn't know it was doing that
<rharper> that seems to have helped ; stop agent; rm run.socket; restart jujud for the service
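Roughly the sequence rharper describes, with the unit name and socket path taken from the error earlier in the log:

    juju ssh jenkins/0
    sudo stop jujud-unit-jenkins-0
    sudo rm /var/lib/juju/agents/unit-jenkins-0/run.socket
    sudo start jujud-unit-jenkins-0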
<psivaa-afk> hazmat: ohh? i couldn't sshuttle to 192.168.2.1, ssh'ing to it does not work.
<hazmat> psivaa, how did you bootstrap?
<hazmat> psivaa, it should be picking the floating ip address by default
<psivaa> hazmat: 'juju bootstrap --metadata-source /home/sivatharman/mthood-metadata  --constraints=mem=1G --debug --upload-tools'
<hazmat> psivaa, you can manually mod the env state cache to point to the public ip
<hazmat> psivaa, are you attaching the ip address after the instance boot?
<psivaa> hazmat: i am using 'use-floating-ip: true' in the environments.yaml
<psivaa> hazmat: and the floating ip is attached by default
<psivaa> hazmat: 2010cf3b-dc41-49b9-b0b1-8c0d77a84fe0 | juju-mthood-machine-0 | ACTIVE | None       | Running     | int_net=192.168.1.2, 10.230.21.113
<hazmat> psivaa, i'd file a bug on that then, the client should be using the floating ip address for communication back to the state server.
<psivaa> hazmat: ack, thanks. would help if you could pls let me know the bug no for ref
<psivaa> hazmat: reported bug #1308767 for this
<_mup_> Bug #1308767: juju client is not using the floating ip to connect to the state server <juju-core:New> <https://launchpad.net/bugs/1308767>
<psivaa> hazmat: i was prompted about a possible dupe bug #1248674 for the above though
<_mup_> Bug #1248674: setting a floating address on the state server prevents new agent connections <addressability> <canonistack> <juju-core:Triaged> <https://launchpad.net/bugs/1248674>
<tvansteenburgh> when i deploy a service in an amulet test, am i right to assume that the default config.yaml for the service is used?
<tvansteenburgh> i've added some stuff to config.yaml but i'm not seeing it applied in the amulet test
<tvansteenburgh> although using the same config.yaml to deploy manually (outside of amulet) works as expected
<tvansteenburgh> oh, hrm. i wonder if the test is using the charm from the store instead of my local copy...
<tvansteenburgh> marcoceppi: do i need to explicitly tell amulet to use the local charm?
<marcoceppi> tvansteenburgh: if you're running the test directly, without the charm test runner, yes
<marcoceppi> tvansteenburgh: set the environment variable JUJU_TEST_CHARM to the charm name
<marcoceppi> which will tell amulet that the cwd that it's running from is the charm and use that to deploy
<tvansteenburgh> marcoceppi: brilliant, tyvm
<marcoceppi> tvansteenburgh: you'll also need to run the tests from the CHARM_DIR
<marcoceppi> in order for that to work
<tvansteenburgh> ok cool. i'll check the amulet docs and get this stuff added if it's not there
<marcoceppi> it's definitely not there
<marcoceppi> a lot of the stuff from 1.4 didn't make it in to the docs
<tvansteenburgh> the only reason i'm running the test directly is so i can use ipdb
 * marcoceppi nods
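Putting marcoceppi's two requirements together, running a single test directly looks roughly like this (charm name and test filename are placeholders):

    cd ~/charms/precise/mycharm        # must run from the charm directory
    export JUJU_TEST_CHARM=mycharm     # makes amulet deploy the local copy, not the store charm
    python tests/10-deploy-test.py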
<jcastro> lazyPower, ping
<jcastro> https://bugs.launchpad.net/charms/+bug/1291783
<jcastro> can you hit this up first thing manana?
<_mup_> Bug #1291783: Cisco N1KV VSM charm review process (cwchang) <Juju Charms Collection:In Progress> <https://launchpad.net/bugs/1291783>
<lazyPower> jcastro: ackd
<lazyPower> to review first thing
#juju 2014-04-17
<stokachu> do bundles support --to machine?
<marcoceppi> stokachu: yes
<stokachu> marcoceppi: whats the syntax
<stokachu> in the yaml file
<stokachu> machine: 1?
<marcoceppi> stokachu: http://pythonhosted.org/juju-deployer/config.html#placement
<stokachu> marcoceppi: cool thanks
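A minimal placement sketch in juju-deployer bundle syntax; the service names and machine number are placeholders, the exact form varies between deployer versions, and the linked docs cover the full set of placement directives:

    my-stack:
      services:
        mysql:
          charm: cs:precise/mysql
          to: "0"              # pin to machine 0
        wordpress:
          charm: cs:precise/wordpress
          to: mysql            # co-locate with the mysql service unit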
<thumper> o/ marcoceppi
<marcoceppi> \o thumper
<thumper> marcoceppi: tried out the new debug-log?
<marcoceppi> nope
<marcoceppi> what am I instore for?
<thumper> marcoceppi: fun :-)
<marcoceppi> thumper: is it in 1.19?
<thumper> yep
<thumper> works with local now
<lazyPower> ooooo
 * lazyPower installs
<jose> hey lazyPower, mind giving me a hand with a test I'm writing?
<jose> I just want to confirm my python code is not bad
<lazyPower> link?
<lazyPower> thumper: niiiiiiice
<jcastro> o/ thumper!
<jcastro> hey, marco and I had an idea today
<jcastro> <thumper> That sounds amazing, how can I implement it?
<jcastro> well, we were thinking that the resolved --retry thing in debug-hooks is painful
<jcastro> so how about you debug-hooks
<thumper> jcastro: ?!
<jcastro> and from inside the unit you do like "hulk smash"
<jcastro> and it does the equivalent of a resolved --retry
<thumper> jcastro: we can talk about improving charm dev experience in vegas
<thumper> jcastro: I'm assuming you are coming to vegas?
<jcastro> I am ready to buy many beers
<jcastro> yep
<thumper> jcastro: I *want* to make dev experience awesome
<thumper> so lets do it!
 * thumper goes to put that row on the spreadsheet
<jose> lazyPower: http://paste.ubuntu.com/7265206/
<lazyPower> jose: while that works, os.system really isnt seen anywhere in our code. we use subprocess.popen or subprocess.call
<jose> hmm, how should I call that?
 * jose googles
<lazyPower> otherwise, looks fine at first glance. i'm assuming you're just testing to see if its up and available? A more semantic approach would be to use python.requests to fetch the url and do some validation on whats returned
<lazyPower> eg: the title of the webpage we expect to see. Just because a server responds to a ping doesn't mean the http interface is acting as it should.
<jose> yeah, as it's a telnet server all the test needs to do is check if it's up
<lazyPower> ah, ok
<lazyPower> when i saw nyancat i just assumed it was a website...
<jose> :P
<lazyPower> i haven't actually interfaced with the nyancat charm
<jose> it's a telnet server which tells you for how many seconds you have nyaned
<jose> I assume that by replacing that os.system line for `subprocess.call([ping -c 1] + hostname)` it should do good?
<lazyPower> jose: the problem I have with just pinging the server is you're not really testing that the service is there. You're just validating that juju provided you a machine
<jose> hmm, that's right
<lazyPower> and we know juju works
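For reference, the subprocess form jose sketches needs each argument as a separate list element; a minimal corrected version (with the caveat lazyPower raises that a ping only shows the machine is reachable, not that the service works):

```python
import subprocess

# Placeholder address; in the test this would come from the unit's public-address.
hostname = '10.0.3.42'
returncode = subprocess.call(['ping', '-c', '1', hostname])
assert returncode == 0, 'host did not answer ping'
```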
<jose> and a telnet command would make it endless(?)
<lazyPower> jose: https://docs.python.org/2/library/telnetlib.html
<lazyPower> use a telnet library
 * jose checks
<jose> whoops, I need to run, will be back in 30
<lazyPower> now you can validate that not only did you get a telnet connection, but you can validate the response you get too, search for 'nyan' and boom. its validated.
<jose> awesome!
<jose> remain assured that I'll be making a test for that one, or at least try :)
<lazyPower> Looking forward to seeing it in the queue :)
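A small sketch of the telnetlib-based check lazyPower suggests: connect to the unit and verify the response mentions 'nyan'. The address and port are placeholders; a real amulet test would read them from the sentry info.

```python
import telnetlib

host = '10.0.3.42'                            # placeholder for the unit's public address
tn = telnetlib.Telnet(host, port=23, timeout=30)
banner = tn.read_until(b'nyan', timeout=10)   # read_until wants a bytes pattern on Python 3
tn.close()
assert b'nyan' in banner, 'telnet service did not respond as expected'
```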
<mwhudson> what do you do with machines that have got stuck?
<mwhudson> juju destroy-unit doesn't seem to have done anything
<marcoceppi> mwhudson: what version of juju?
<mwhudson> marcoceppi: trusty
<mwhudson> er i guess it's probably 1.18
<mwhudson> 1.18.1-trusty-amd64
<marcoceppi> mwhudson: you can use the --force flag
<mwhudson> error: flag provided but not defined: --force
<marcoceppi> mwhudson: terminate-machine *
<mwhudson> ah
<mwhudson> thanks!
<thumper> jcastro: still around?
<jose> hey davechen1y, do you have a min?
<davechen1y> jose: shoot
<jose> I need a hand with an amulet test which is giving me a weird error, let me pastebin
<jose> my test is http://paste.ubuntu.com/7265781/ and it returns the following error http://paste.ubuntu.com/7265782/
<davechen1y> jose: hmm, that isn't juju
<davechen1y> i'm not sure i can help
<jose> yeah, it's python-ish
<jose> np
<davechen1y> tn = telnetlib.Telnet("%s" % d.sentry.unit['nyancat/0'].info['public-address'])
<davechen1y> isn't info['public-address'] already a string ?
<jose> hmm, not sure
<jose> afaik d.sentry.unit['nyancat/0'].info['public-address'] returns the public address
<davechen1y> what type is that though ?
<davechen1y> some vague googling suggests that error message is due to some difference between strings and unicode strings in python
<jose> hmm, will check then, thanks
<jose> turns out the problem is on the response that tn.read_until is giving
<davechen1y> jose: interesting
<jose> I'm going to bed now but will fix and push tomorrow morning
<jose> night!
<davechen1y> ok
<stub> nuclearbob: If you just want the PostgreSQL service standalone, you can use the admin_addresses configuration item. This should allow you to connect from the specified IP addresses directly to the PostgreSQL cluster. 'juju status' will give you the IP address and port (almost certainly port 5432)
<stub> marcoceppi: Do you think the config file needs to be documented in the README too, or should the charm store be generating documentation using the descriptions in config.yaml?
<stub> Hmm.... if I scroll down enough and click on 'config details', I get a fixed width and colorized rendering of the config yaml.
<stub> https://manage.jujucharms.com/charms/precise/postgresql/config
<stub> nuclearbob: per Ubuntu packaging, you should be able to connect as the 'postgres' user to the 'postgres' database, and create your users and databases from there using psql or pgadmin or whatever.
<stub> nuclearbob: If that doesn't work, file a bug because this setup should be supported.
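A hedged sketch of the direct connection stub describes, assuming psycopg2 is available and that the client host is listed in the charm's admin_addresses; the address and port are placeholders taken from `juju status`, and the charm's actual authentication settings may require a password.

```python
import psycopg2

# Placeholders: host/port come from `juju status`; auth details depend on the charm config.
conn = psycopg2.connect(host='10.0.3.50', port=5432, user='postgres', dbname='postgres')
with conn.cursor() as cur:
    cur.execute('SELECT version()')
    print(cur.fetchone()[0])
conn.close()
```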
<mdunc> Hi.  Got a question.  Fresh MAAS/Juju install on Ubuntu 12.04.  Deploying Keystone fails when it tries to do `keystone-manage db_sync` with the error "ImportError: No module named sql".
<mdunc> Is anyone able to reproduce my problem?
<mdunc> I was able to successfully deploy keystone yesterday.
<mdunc> jamespage: you work on the keystone charm, right?  any idea?
<jamespage> mdunc: give me 15
<mdunc> jamespage:  ok.  just tried keystone-31 and it works.
<mdunc> jamespage:  thanks for looking in  to it
<jamespage> mdunc, ok - that's a regression then
<jamespage> we just landed a large piece of work for keystone
<jamespage> mdunc, what openstack-origin config are you using?
<mdunc> jamespage:  i'm not using any special configuration.  all i did was deploy mysql, rabbitmq-server, and keystone so far all with default settings.
<mdunc> jamespage: ah, I guess that setting would be "distro"
<jamespage> mdunc, ok - that's essex then
<jamespage> mdunc, I would recommend that you use a later openstack release
<jamespage> essex is supported still
<jamespage> but its 5 versions old
<jamespage> mdunc, openstack-origin=cloud:precise-havana
<jamespage> mdunc, or from today cloud:precise-icehouse
<jamespage> mdunc, let me fix this up tho
<mdunc> jamespage: alright, i'll give that a shot
<jamespage> mdunc, OK - I see the issue
<jamespage> keystone < grizzly does not support a sql backend for policies
<mdunc> Ah, good to know.  I just started playing with OpenStack a couple weeks ago and this week is my first time trying installation with Juju.  I still have a lot to learn :)
<jamespage> mdunc, I'm really interested to hear your experience
<jamespage> mdunc, I have a lot of users who have been using the charms for a while now - so someone fresh to them is a good checkpoint!
<mdunc> jamespage: so far, it's been great!  juju definitely seems like the way to go for my company as we're all pretty busy and many people don't have the time to learn the ins and outs of openstack to set it up manually.  i gave a demo earlier and they all loved it.
<vila> hi there, I can't bootstrap on hp cloud anymore: 2014-04-17 07:53:52 ERROR juju.cmd supercommand.go:300 cannot start bootstrap instance: no "trusty" images in region-a.geo-1 with arches [amd64 arm64 armhf i386]
<jamespage> mdunc, if this is for a new deployment I'd strongly suggest using either cloud:precise-icehouse with 12.04 (3 years of support left) OR using trusty
<vila> I used to have precise instances, did that change recently and how can I express that I want to stick to precise ?
<jamespage> which has icehouse as default
<jamespage> vila, hmm
<mdunc> jamespage: the only thing that kind of caught me off guard is when removing a charm, it doesn't remove the installed services and juju doesn't seem to want to deploy to it again unless i manually tell it to.  overall though, it's pretty awesome!  you guys are doing great work!
<jamespage> utlemming, ^^
<jamespage> mdunc, I generally don't recycle machines like that
<jamespage> mdunc, destroy-service then terminate-machine
<jamespage> mdunc, thanks for the praise!
<vila> jamespage, utlemming : If that helps, I did bootstrap yesterday afternoon successfully so ~14h ago
<vila> hmm, make that ~16h sorry
<mdunc> jamespage: i'll try that next time. thanks!  and yeah, we're definitely going to go with trusty.  maas is already running trusty, but haven't set up juju on trusty yet.
<vila> jamespage, utlemming : for completeness: https://pastebin.canonical.com/108759/
<jamespage> vila, utlemming is probably not up yet but he'll see this when he awakes
<jamespage> mdunc, OK - I pushed the fix for the regression to the keystone charm - it should sync out to the charmstore ~1hr
<jamespage> mdunc, however I know its already OK with >= grizzly
<vila> jamespage: ack, is there a way to force precise in environments.yaml or something ?
<jamespage> vila, yeah default-series: precise
<jamespage> vila, what version of ubuntu and which version of juju are you using to bootstrap?
<vila> doh, I tried series ;) Thanks for the hint ! It seems to go further
<mdunc> jamespage: thanks for the quick fix :)
<jamespage> mdunc, always on the lookout for regressions when we land such a big change
<vila> jamespage: trusty freshly updated so juju-1.18.1-trusty-amd64 according to the output of juju bootstrap --debug
<jamespage> mdunc, the keystone charm basically got re-written this cycle inline with the other openstack charms
<jamespage> vila, hmm - I think 1.18.1 might have introduced using the latest lts
<jamespage> vila, but I may be wrong
<mdunc> jamespage: i'll be testing them all pretty thoroughly over the next few days/weeks as i write up some documentation for rest of my team here.  is there an official place to file bugs against charms if i find any?
<vila> jamespage: no worries, I'm pushing to upgrade to trusty anyway, I just need something that works now ;) So I'm good, thanks for *default-*series trick ;)
<jamespage> mdunc, launchpad.net/charms
<vila> jamespage: where are those options documented by the way ?
<mdunc> jamespage: cool, thanks!  alright, i got to get back to work.  have a good day/evening/morning!
<jamespage> vila, that specific one is mentioned in the 1.18.1 release notes - https://juju.ubuntu.com/docs/reference-release-notes.html
<jamespage> but its been there for a while
<vila> ha thanks, I was on the site but didn't find/think about the release notes
<vila> right, perfectly explained (as well as why this happens)
<timrc> https://bugs.launchpad.net/ubuntu/+source/jenkins/+bug/1294005 <--- this make me sad
<_mup_> Bug #1294005: Please remove jenkins from trusty <jenkins (Ubuntu):Fix Released> <https://launchpad.net/bugs/1294005>
<timrc> I wish there was more of a transition period here because this breaks us pretty abruptly
<jamespage> timrc, sorry - but its old and full of security vulnerabilities
<jamespage> timrc, the charm supports switching to use the upstream repositories; suggest that happens by default
<caribou> has anyone encountered the situation where mysql refuses to allow connection from other services when colocated on the same machine ?
<caribou> I'm deploying mysql then keystone on the same machine. When adding the relation, I get mysql to refuse connection to the keystone charm
<caribou> this doesn't happen if they're not on the same machine
<caribou> (local provider btw)
<jam1> caribou: note that the default mysql configuration is to consume 80% of your RAM with its buffer
<jcastro> hey lazyPower
<jam1> caribou: there is a known bug that mysql doesn't play nice on local because of that
<jcastro> I subscribed ~charmers to the jenkins bug
<jam1> you can change it in config, though
<jcastro> we need to sort it before promoting jenkins to trusty
<caribou> jam1: I'm not worried about that, it's just for charm testing
<jam1> caribou: sure, but mysql w/ local fails because of that config, so you can change the config to do your test
<caribou> jam1: and afaik, it used to work
<caribou> jam1: ah, ok
<jcastro> juju set mysql dataset-size="1G"
<jam1> caribou: https://lists.ubuntu.com/archives/juju/2014-February/003442.html
<jcastro> ^^ you want something like that
<caribou> jcastro: thanks
<jam1> caribou: I noticed that deploying mysql locally ended up giving like 15GB of ram and then mysql couldn't start
<jcastro> yeah mine was doing 12G
<jcastro> we should find a way to be like "If I am in LXC don't do that."
<jam1> caribou: jcastro: https://bugs.launchpad.net/juju-core/+bug/1255242/comments/15
<_mup_> Bug #1255242: HP cloud requires 4G to do what AWS does in 2G mem <ci> <hp-cloud> <intermittent-failure> <upgrade-juju> <juju-core:Invalid> <mysql (Juju Charms Collection):New> <https://launchpad.net/bugs/1255242>
<jam1> is my comment on it
<jam1> note, it also fails on HP because of that
<jam1> on a default size machine
<jam1> apparently 80% of 2GB or whatever doesn't leave enough room for the OS overhead
<jam1> so on HP you have to deploy --constraints=mem=4G if you want the default charm 80% to work
<caribou> jam1: yep, mine has 7G of VSZ
<jcastro> jamespage, what's the recommendation for precise/jenkins?
<jamespage> jcastro, use lts
<caribou> excuse my ignorance, but why would the memory size have anything to do with remote connectivity ?
<jam1> caribou: because it doesn't actually start
<jamespage> jcastro, that version is really old - maybe not the same security issues tho
<jam1> it tries to lock that much RAM but can't
<caribou> jam1: it does start, I can connect to it locally
<jam1> caribou: weird, mine would just fail
<jam1> I don't know why it would affect remote connectivity
<jamespage> jcastro, I put a branch up for trusty - https://bugs.launchpad.net/ubuntu/+source/jenkins/+bug/1294005
<_mup_> Bug #1294005: Please remove jenkins from trusty <audit> <jenkins (Ubuntu):Fix Released> <jenkins (Charms Trusty):New> <https://launchpad.net/bugs/1294005>
<caribou> my feeling is that it has something to do with hostname resolution
<caribou> if I try to connect remotely using the IP address, I get a normal failure to connect because of the password
<caribou> if I use the hostname, then I get a mysql rejection message saying that the host is not allowed to connect
<jcastro> jamespage, I've added it to the queue; since it's come up, does it by chance have tests?
<jamespage> jcastro, no
<caribou> jam1: anyway, i worked around it by having mysql on its own machine for the time being, I need my time to investigate other things
<caribou> jam1: but I'm happy to know that I'm not missing something obvious
<jcastro> jamespage, ok I'll see if we can prioritize it and get some tests, we have some new people now that can help.
<jamespage> jcastro, excellent - thanks
<jamespage> jcastro, my branch is functional for trusty - just finished testing it
<jcastro> ack
<stokachu> http://pythonhosted.org/juju-deployer/config.html#placement
<stokachu> it says machine id 0 is only supported
<stokachu> does that mean i can't deploy to say machine 2 which contains another node?
<mattyw> sinzui, I'm having real trouble getting local provider to work in precise, I keep getting container failed to start errors, any thoughts? I believe my kernel is up to date
<marcoceppi> stub: typically, we recommend you include config documentation in readme. manage.jujucharms.com is going away and the gui doesn't really illuminate that much about configs except a small excerpt
<stokachu> marcoceppi: do bundles only support deploying to machine 0?
<marcoceppi> stub: for machine id, yes
<marcoceppi> stokachu: ^
<marcoceppi> because that's the only guaranteed machine
<marcoceppi> stokachu: you can still deploy to other services
<stokachu> what about with kvm
<stokachu> if i have 2 kvm instances and want to auto-deploy services to both machines that can't be done with bundles right?
<marcoceppi> stokachu: what do you mean, 2kvm instances?
<stokachu> marcoceppi: if i do juju add-machine
<stokachu> it brings up 2 kvm instances in local provider
<sinzui> mattyw, I had container errors last year when I had stale cloud images http://curtis.hovey.name/2013/11/16/restoring-network-to-lxc-and-juju-local-provider/
<stokachu> machine 1 and 2
<stokachu> i wanted to be able to deploy services to both machines
<marcoceppi> stokachu: so those are LXC, not KVM, and yes, you can't really do that in deployer, there is no add-machine concept
<stokachu> charms*
<marcoceppi> stokachu: however, you can do this, deploy ubuntu charm to two units
<stokachu> marcoceppi: not kvm?
<marcoceppi> stokachu: then do placement ubuntu/0
<sinzui> mattyw, I recommend you try removing the cache.
<marcoceppi> stokachu: it's not kvm, local provider uses LXC
<stokachu> marcoceppi: it also uses kvm
<marcoceppi> unless this is something that landed in 1.19, local provider only uses lxc
<stokachu> uh its been there since 1.17.x
<marcoceppi> stokachu: okay
<mattyw> sinzui, that might suggest I was able to boot a trusty container but not a precise one
<marcoceppi> are you using the local provider, or are you trying to deploy --to kvm: ?
<stokachu> using the local provider
<marcoceppi> the local provider /is/ lxc
<stokachu> machine 0 maybe but machines 1 and 2 are kvm
<marcoceppi> sinzui: can the local provider use kvm instead of lxc?
<hazmat> stokachu, deployer can spec service colocation .. including kvm: / lxc:
<hazmat> marcoceppi, yes
<hazmat> it can use kvm instead of lxc
<nuclearbob> stub: thanks for the info, I can't seem to get that to work, so I'll file a bug
<marcoceppi> hazmat: wtf, why isn't this documented anywhere
 * marcoceppi loses his mind
<stokachu> hazmat: so i list machine 0,1,2 all kvm
<stokachu> i just want to deploy to machineX
<stokachu> not kvm on machine 0
<hazmat> marcoceppi, markdown it and i'll work on it :-)
<marcoceppi> hazmat: I don't even know where to start
<hazmat> stokachu, so deployer doesn't let you reference arbitrary machines, because that's not reproducible. it will let you specify colocation with other services
<hazmat> marcoceppi, what do you mean, i though the md stuff was in flight?
<sinzui> marcoceppi, yes, add container: kvm
<stokachu> ok
<marcoceppi> hazmat: it is, I was referring to I have no idea how to switch local to kvm
<marcoceppi> hazmat: this would have saved a lot of lxc headaches
<hazmat> stokachu, if you have a syntax for add-machine i'm game to add it.. i've been wanting something cause i use manual tons..
<hazmat> marcoceppi, lxc headaches?
<hazmat> marcoceppi, because of restrictions?
<stokachu> hazmat: if i set the container: kvm in environments.yaml i just do a juju add-machine like normal
<stokachu> can set constraints too
<marcoceppi> hazmat: lxc hasn't been playing nice on one of my machines
<marcoceppi> well, lxc/local provider
 * hazmat nods
<hazmat> i'm still using my jury rig manual + lxc which has been solid..
<stokachu> kvm is really solid too
<hazmat> marcoceppi, i'd check to make sure its not using aufs.. by default.. there was a version of juju during 1.17 dev that was doing aufs by default which caused issues.
<jcastro> wait a minute
<jcastro> are you telling me, I've been messing around with LXC this whole time
<jcastro> and I could have been using KVM
<marcoceppi> jcastro: yeah
<marcoceppi> go figure
<jcastro> Well, I'm pretty much out of options, I have asked over and over for core to document stuff
<stokachu> jcastro: i happened to stumble across it while looking through the juju code
<stokachu> some of the help commands talk about kvm too iirc
<lazyPower> jcastro: roger
<jcastro> well I know you can do kvm: blah
<stokachu> yea
<jcastro> but if I can wholesale switch to it
<jcastro> that would be awesome
<stokachu> jcastro: thats what i do
<stokachu> kvm for everything
<stokachu> works great
<jcastro> ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: EOF
<stokachu> jcastro: http://astokes.org/juju-deploy-to-lxc-and-kvm-in-the-local-provider/
<jcastro> I've been getting this all week with the local provider
<stokachu> according to some of the juju devs this isn't a supported scenario
<stokachu> but it works too for mixing both
<jcastro> mind if I steal that?
<stokachu> jcastro: go for it
<jcastro> stokachu, I think the big one there is "network-bridge"
<stokachu> yea definitely
<mattyw> sinzui, didn't seem to fix my problem :/
<jcastro> deploy --to kvm:blah we have written down
<stokachu> jcastro: i think https://bugs.launchpad.net/juju-core/+bug/1304530 would be good to have too
<_mup_> Bug #1304530: nested lxc's within a kvm machine are not accessible <addressability> <cloud-installer> <kvm> <local-provider> <lxc> <juju-core:Triaged> <https://launchpad.net/bugs/1304530>
<jcastro> stokachu, hey so if I deploy to KVMs with the little bridge there
<jcastro> are those KVMs accessible from other machines on the network?
<stokachu> just on the network bridge
<jcastro> ah
<stokachu> you'd have to add a network bridge device in libvirt
<stokachu> so you could setup br0 to bridge your eth0
<stokachu> then have libvirt use br0
<stokachu> then it would be accessible throughout your network
<jcastro> ok I'll just crosslink to some libvirt docs on that
<stokachu> http://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29
<stokachu> that should help
<stokachu> theres a debian/ubuntu section
<stokachu> hazmat: im about to start using your judo stuff
<stokachu> their pricing is awesome with the per hour charge and cap of the monthly fee
<hazmat> stokachu, cool.. you're in good company there.
<themonk> is there any way to restart a unit? i just want to rerun start hook.
<jcastro> you can `juju debug-hooks` to it to rerun the start hook
<jcastro> `juju debug-hooks yourservice/#` the # being the machine
<jcastro> then in another terminal do "juju resolved --retry yourservice"
<jcastro> then in the debug-hooks terminal do `hooks/start`
<jcastro> and it will give you the exact error
<themonk> jcastro, thanks but my problem is different, i am using local lxc, when i restart my local machine i get start hook error, but when i deploy there is no error
<themonk> jcastro, i have a error log will you see it?
<jcastro> themonk, sure
<jcastro> which charm btw?
<themonk> jcastro, can't disclose my charm's name yet, it will be open source soon :)
<themonk> jcastro, company policy :)
<jcastro> oh ok, I was going to say, if it was mysql we know that can break in LXC
<jose> jcastro: hey, would you mind giving me a hand with some python-ish error I get when writing an amulet test?
<rharper> failed bootstrap returned this:  ERROR waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt contents do not match machine nonce
<themonk> jcastro, no its not mysql
<jcastro> jose, marco is your man for amulet
<jose> then, marcoceppi: ping
<jose> :)
<rharper> looking for any help on what that might mean
<marcoceppi> rharper: bootstrapped timedout trying to do something, rharper run with --debug to get more information
<marcoceppi> jose: best to just post the issue than to ask to post :)
<rharper> marcoceppi: ok
<jose> when doing http://paste.ubuntu.com/7265781/ I get http://paste.ubuntu.com/7265782/ as a response, it looks like the telnet response contains something that doesn't match the character encoding, but I have no idea on how to fix it
<jcastro> themonk, you can send me your info if you want me to take a look
<themonk> jcastro, i am preparing in pastebin :)
<jcastro> hey lazyPower
<jcastro> I see you're down on the spreadsheet for jenkins.
<lazyPower> i've done some work on the tests, but they didn't pass CI
<marcoceppi> jose: you're using python2 formatting for python3 code
<marcoceppi> jose: try "nyaned".encode() in tn.read_until
<jose> "nyaned".encode() instaed of just "nyaned"?
<marcoceppi> yes
<jose> ok
<jose> damn, I just realized I did that on my /tmp folder and didn't have another backup, pastebin lifesaver
<rharper> marcoceppi: on the 10 minute out; is that controllable?  I using juju deployer and I set the timeout in the config to 1800 seconds -- did that value change or is it not being honored any more ?
<jcastro> heh, are you writing tests for the nyancat charm?
<rharper> marcoceppi: re bootstrap timeout
<jose> yeah, seemed like an easy one :P
<marcoceppi> rharper: bootstrap timeout is configurable last I checked, it's a timeout in juju not in deployer
<rharper> marcoceppi: hrm, I though juju deployer passed through the config to juju
 * rharper looks at code
<marcoceppi> rharper: no, bootstrap-timeout is an environments.yaml configuration
<rharper> only?
<marcoceppi> juju help bootstrap
<marcoceppi> yes
<rharper> ok
<rharper> is there such a thing as retry on bootstrap timeout?  dealing with some flaky hardware
<rharper> probably would need to do that in deployer; ok -- thanks for the help
<themonk> jcastro, pastebin is on heavy load please wait
<timrc> marcoceppi, fyi, my issue with juju not starting machines locally had to do, I think, with a stale lock in/var/lib/juju/locks :/
<marcoceppi> timrc: lame! but interesting find
<themonk> jcastro, http://pastebin.com/7LifaphW
<jcastro> huh, that's a new one
<smarter> Is it normal that when I do "juju destroy-environment local", I can't do "juju bootstrap" again until I kill the mongod process manually?
<smarter> using juju 1.18.1 on trusty
<themonk> jcastro, have you found any thing
<jcastro> no, that's a new one for me
<jcastro> marcoceppi, have you seen that error before? http://pastebin.com/7LifaphW
<jcastro> smarter, no that's not normal, but I am having that problem as well
<jose> woohoo, test passed!
<lazyPower> jcastro: what specifically do you need done for the jenkins charm? just a trusty audit or do you need the full deep dive into the relationship issue and getting the tests passing?
<jcastro> lazyPower, jamespage has a branch for trusty/jenkins
<jcastro> which we could use, but if we could get tests in there that would be swell
<lazyPower> i have tests pending that failed ci but work when running them locally
<lazyPower> soooooo
<lazyPower> its part of that forever todo item i've got to circle back to CI and figure out why they are failing in CI
<themonk> marcoceppi, i have a question: if i restart the lxc container machine, the apache2 charm's sites-enabled/default link goes missing, even though i enabled it with a2ensite default in my apache2 subordinate charm (the subordinate charm just installs an apache mod)
<themonk> marcoceppi, when i deploy it works fine but after restarting the machine it does not
<jcastro> lazyPower, yeah so basically he ported it to trusty, but if we can get it in with tests as part of the audit, that would be better
<lazyPower> ack. I'll try to squeeze that in to the schedule. No promises - but i'll set it as a stretch goal
<jcastro> nod
<jose> jcastro: just to confirm, we're having a charm school tomorrow at 15 your time
<jcastro> yep
<themonk> jcastro, i have a question: if i restart the lxc container machine, the apache2 charm's sites-enabled/default link goes missing, even though i enabled it with a2ensite default in my apache2 subordinate charm (the subordinate charm just installs an apache mod). when i deploy it works fine but after restarting the machine it does not
<jcastro> me and lazyPower
<jose> cool
<jcastro> themonk, yeah I am unsure how well supported restarting an LXC container is
<marcoceppi> themonk: that shouldn't happen
<marcoceppi> themonk: no idea why it would be doing that
<themonk> marcoceppi, my one observation is that after restart juju runs config-changed and start for a normal charm, right
<marcoceppi> themonk: it shouldn't that's news to me
<marcoceppi> mgz: ^^?
<mgz> ...I can barely parse that
<marcoceppi> mgz: if you restart a machine deployed by juju, does it run the config-changed and start hooks again?
<mgz> start only runs once
<mgz> it will probably run config-changed
<marcoceppi> mgz: then, I think what themonk is seeing is config-changed isn't running for the subordinate, which is causing issues as the main charm reverts settings that the subordinate sets
<marcoceppi> themonk: is that about right?
<mgz> restarting lxc containers may be a little dodgy
<themonk> marcoceppi, i dont think so
<mgz> themonk: looking at the unit logs should tell you what got run
<themonk> mgz, yes it calls config-changed then rel-join then rel-changed
<mgz> that seems fine then.
<themonk> mgz, but i get a start hook error after restart
<mgz> themonk: then you need to debug that in the charm
<themonk> mgz, start hook is ok no bug :)
<themonk> mgz, and how do i debug when my machine is booting
<mgz> themonk: debug-hooks will still work I'd think
<themonk> mgz, it only happens after reboot
<mgz> so? run debug-hooks, trigger the agent restart, see what happens
<themonk> mgz, how i use debug-hooks when lxc container is loading during reboot
<mgz> just reboot that container
<mgz> not your whole machine (I'm assuming local provider)
<themonk> how?
<themonk> yes mine is local provider
<mgz> `juju ssh thatservice/unitnumber "sudo shutdown -r now"`
<themonk> hmm great :) thanks :)
<stokachu> any plans on getting mysql charms into trusty soon?
<marcoceppi> stokachu: yes
<stokachu> marcoceppi: possible eta?
<marcoceppi> stokachu: early next week
<stokachu> openstack relies on it and they're on trusty
<marcoceppi> stokachu: I know
<stokachu> marcoceppi: anything i can do to make it happen faster
<marcoceppi> stokachu: earliest I can do is tomorrow
<stokachu> marcoceppi: that would be awesome
<blahRus> So I shouldn't waste time trying to deploy openstack with juju today on 14.04?
<marcoceppi> blahRus: no, you can deploy openstack on 14.04 you just need to deploy mysql from a local source
<blahRus> marcoceppi: kk, any other charms not ready?
<blahRus> all prep'ed for icehouse?
<marcoceppi> blahRus: most all are there, I think rabbitmq-server is in the same boat as mysql though
<marcoceppi> these should all be sorted by next week
<blahRus> great, hopefully we can get the ISO's soon ;)
<hackedbellini> hi all. I'm trying to branch the gerrit charm (http://manage.jujucharms.com/~canonical-ci/precise/gerrit) since I need to do some modifications to use it, but I cant access the code
<hackedbellini> trying to do a "bzr branch lp:~canonical-ci/charms/precise/gerrit/trunk" gives me a "lp:~canonical-ci/charms/precise/gerrit/trunk"
<hackedbellini> also, it appears that this page (https://code.launchpad.net/~canonical-ci) is private now. I could access it 2 weeks ago
<marcoceppi> sinzui: ^?
<hackedbellini> I tried to paste the error and ended up pasting the command again. The error is this one: http://pastebin.ubuntu.com/7269665/
<hackedbellini> this url for example (https://bazaar.launchpad.net/~canonical-ci/charms/precise/gerrit/trunk/files) gives me an "Unauthorized" error
<hackedbellini> this is the "Repository" link in the charm page
<sinzui> hackedbellini, marcoceppi: ouch
<hackedbellini> sinzui: do you know what is the problem?
<sinzui> hackedbellini, marcoceppi: That team certainly is private.
<sinzui> I no longer have super privs
<sinzui> hackedbellini, I think we can both see that the team exists because it is subscribed to public bugs or branches.
<sinzui> hackedbellini, I think you need to find another branch to work with
<hackedbellini> sinzui: wow, really? that's really sad...
<marcoceppi> sinzui: I thought you led the canonical-ci team, who do I have to bug to have them upstream their charms?
<hackedbellini> sinzui: do you know why they made it private?
<sinzui> marcoceppi, I was rather thorough about securing Lp's teams. I cannot see who is involved in the team
<marcoceppi> sinzui: you're too good for our own good!
<marcoceppi> blahRus stokachu jamespage mysql is promulgated to trusty
<stokachu> marcoceppi: woot
<stokachu> marcoceppi++
<jamespage> marcoceppi, +1 thanks
<blahRus> :)
<blahRus> tyvm
<jose> lazyPower: do we have a spreadsheet containing which tests are being worked on and which arent?
<jose> it'd be nice so we don't have duplicate efforts
<lazyPower> jose: sent it via private message
<jose> awesome, thanks
<hackedbellini> When trying to add a new machine, I'm getting this: http://pastebin.ubuntu.com/7270191/
<hackedbellini> the only log I can get is this one: http://pastebin.ubuntu.com/7270192/ (from /var/log/juju/machine-12)
<hackedbellini> I tried googling for both the agent-state-info error and the error on the log, but found nothing
<hackedbellini> do anyone here have any idea on how to solve this?
<themonk> i will upgrade to 14.04, is it ok for juju 1.18.1?
<lazyPower> hackedbellini: looking now - 1 moment
<lazyPower> hackedbellini: is this local provider?
<Kupo24z> cool
<hackedbellini> lazyPower: yes it's local provider, using lxc containers
<lazyPower> hmm, machine #12 seems to tank, whereas 1 - 11 are fine, correct?
<hackedbellini> lazyPower: yes, exactly! All other machines are running fine... I have some services running on them (jenkins, mediawiki, postgresql, etc) and they are running fine. But I cant add a new machine
<hackedbellini> doing a "juju add-machine" triggers the problem
<lazyPower> i'm looking for an answer to this, i'm not positive but i think there is an upper limit to the number of machines you can utilize on LXC
<hackedbellini> lazyPower: really? D:
<lazyPower> dont take that as the answer yet though, i dont have any proof to back it up
<lazyPower> hackedbellini: whats your ram usage look like?
<hackedbellini> lazyPower: hrm, I see... That machine is the 10th one. There are 9 currently running
<hackedbellini> lazyPower: http://pastebin.ubuntu.com/7270712/
<lazyPower> hackedbellini: confirmed there is no hard coded limit to the # of machines
<lazyPower> hackedbellini: i would file a bug including the output of your diagnostics such as mem, and attach any relevant logs.
<hackedbellini> lazyPower: ok, I'll do that. The problem is that I don't have sudo here, so there are some logs that I can't see :(
<hackedbellini> one question: you said that there might be a maximum number of lxc containers I could run at the same time... if that's true, where could I try to check that number? I understand almost nothing about lxc
<lazyPower> hackedbellini: confirmed there is no limit for you.
<lazyPower> i dont remember where i saw that, but its incorrect.
<lazyPower> sorry for the confusion
<hackedbellini> lazyPower: ok, no problem!
<hackedbellini> one quick question: how can restart juju in a way that it looks like to it that I restarted the server?
<hackedbellini> I didn't find any juju service, so I restarted lxc... but I don't know if I should restart anything else, especially because I did a juju upgrade-juju today
<hackedbellini> one last question**
<lazyPower> hackedbellini: not sure on local provider, as it bootstraps to the HOST environment.
<stokachu> does juju automatically install 42-juju-proxy-settings on the deployed machines?
<stokachu> even though i dont set a apt-http-proxy in my environments.yaml
<hackedbellini> lazyPower: np! Thanks anyway for the help :)
<lazyPower> np, sorry i wasn't more help. its been weeks since i've gotten my hands in the local provider. I switched feet and want full force on MAAS / cloud deployments
<lazyPower> s/want/went
<cwchang> charles there ?
<marcoceppi> lazyPower: ^
<lazyPower> cwchang: greetings
<cwchang> lazyPower is that Charles ?
<cwchang> hi
<cwchang> jsut try out irc free node stuff
<lazyPower> ah, welcome!
<cwchang> thanks !
<cwchang> sorry we are not that used to those IRC tools
<cwchang> I am always the first one to try
<cwchang> I will let Marga join this to lead some discussion for VSM charm review
<lazyPower> Ok. I'm pinning it pending an openstack charmer review as i noted in the ticket. It creates an interesting dependency loop on openstack / vsm.
<cwchang> ok
<cwchang> We will be back in a jiffy
<lazyPower> ack. I'll be here cwchang
<psivaa> hello, could i know how to workaround 'src/launchpad.net/juju-core/utils/ssh/ssh_gocrypto.go:84: undefined: ssh.ClientConn' when running 'go install -v launchpad.net/juju-core/...' pls?
<psivaa> go get -u -v launchpad.net/juju-core/... gives the following logs: http://pastebin.ubuntu.com/7270917/
<lazyPower> psivaa: looks like you're trying to build juju from source? try #juju-dev, thats where the core team hangs out.
<psivaa> lazyPower: ack, thx. yea i was trying to build from trunk.
<lazyPower> cwchang: i'm about to EOD. I'll be floating around but not actively monitoring. If you need anything immediately, feel free to ping me.
<jose> lazyPower: deploying the VSM charm on EC2 will not help, right?
<jose> because if it, I can help with trial deployments and leave it for the queue
<lazyPower> jose: how well do you know openstack?
<jose> lazyPower: not that well, let's say
<lazyPower> jose: next time my friend.
<jose> np :)
<lazyPower> I appreciate the volunteering though. Your enthusiasm is infectious :)
<jose> if it's within my possibilities to help, I'm glad to
<stokachu> should all the openstack charms for trusty be working?
#juju 2014-04-18
<stokachu> ah just needed a little more memory
<marcoceppi> hazmat: cand you define a series per service or is it all or nothing with deployer?
<hazmat> marcoceppi, you can do series per service, its not obvious.. the series at the service container level was inherited.
 * hazmat looks up the mechanics
<marcoceppi> hazmat: oh, awesome. Everytime I think I hit a snag in deployer, you come by to remind me how awesome it is
<hazmat> oh.. its got warts.. but it does the trick more often that not
<hazmat> marcoceppi, so it can't do vcs from multiple series.. but it can pull charmstore charms from multiple series
<hazmat> marcoceppi, also you can override series on the cli
<hazmat> but for mixed mode usage you need charm store charms
<marcoceppi> hazmat: :\ so vcs (branch) is tied to parent series level? I'm trying to to mixed vcs
<hazmat> marcoceppi, not yet.. it looks like its a pretty simple fix though.. if you want to file a bug.. i can work on it tomorrow during my flight.
<marcoceppi> hazmat: sure, we may also be able to spend a few cycles working on the bug
<hazmat> marcoceppi, no worries.. i've got some other pending items i should take care of on it tomorrow as well
<marcoceppi> hazmat: https://bugs.launchpad.net/juju-deployer/+bug/1309274
<_mup_> Bug #1309274: Can't override series for branch: defined services <juju-deployer:New> <https://launchpad.net/bugs/1309274>
<hazmat> marcoceppi, thanks
<marcoceppi> thank you!
<hazmat> marcoceppi, but i'm not tackling tonight if your inclined that way ;-)
<hazmat> np
<marcoceppi> hah, that's fine, I'm too busy with the little doc fixes, still trying to wrap my head around python-markdown extensions
<sridher> Will the installed local lxc trusty cache freshly download the stable release, or upgrade the old cache downloaded 3 days ago
<sridher> ?
<sridher> Download/replace or upgrade?
<strikov> Hi guys. I'm using juju with openstack. I have use-floating-ip: 'true' which means that all instances get floating ip addresses during bootup. Is it possible to ask juju and charms to use these floating addresses to communicate between instances? Right now one of my instances tries to connect to bootstrap instance by its local ip and fails. But it'd connect fine if it uses floating ip of the bootstrap instance. Thanks.
<Tug> Hi, I can't access the doc at https://juju.ubuntu.com/docs/getting-started.html
<timrc> Tug, docs/ seems completely missing
<timrc> :(
<Tug> yep :(
<timrc> jcastro, ^^^
<jcastro> fixing it now
<jcastro> there was a problem with the bzr branch
<jcastro> give us like 5 minutes
<Tug> no pb, thx :)
<jcastro> Tug, all set
<Tug> yes, working fine :)
<jose> jcastro: everything's set for today's charm school, ubuntuonair.com will be updated 1h before the session
<jcastro> <3
<Tug> is juju supported on ubuntu 14.04 ?
<Tug> I'm having errors :(
<mbruzek> ping hazmat
<marcoceppi> Tug: it is, what problems are you having?
<mbruzek> cory_fu and I are working on an amulet bug where the sentries are not created with the right series.  You and marcoceppi talked about a deployer bug that would not accept series.
<cory_fu> https://bugs.launchpad.net/juju-deployer/+bug/1309274
<_mup_> Bug #1309274: Can't override series for branch: defined services <juju-deployer:New> <https://launchpad.net/bugs/1309274>
<marcoceppi> mbruzek: I think he may be traveling
<mbruzek> Oh
<mbruzek> marcoceppi, cory_fu and I came up with this JSON output after our code change.
<mbruzek> http://pastebin.ubuntu.com/7276643/
<mbruzek> marcoceppi, Does this look right?
<mbruzek> It does not work, but we suspect that is due to the deployer bug.
<marcoceppi> sure, but it's not going to work because of deployer
<marcoceppi> mbruzek: you guys might be able to patch deployer while hazmat works on the bug
<Tug> no it's alright, I needed to export GOPATH
<mbruzek> marcoceppi, that is what we suspect.  Are we building the JSON correctly?
<marcoceppi> mbruzek: ¯\_(ツ)_/¯ looks good to me
<ghartmann> did anyone else came across an error "cannot start machine X: no matching tools available" ?
<lazyPower> ghartmann: series? juju version?
<ghartmann> ubuntu 13.10 -> juju 1.18.1-saucy-amd64
<lazyPower> hmm. which environment are you working against?
<ghartmann> local
<lazyPower> give me a few to spin up a VM. I need to fetch an iso. I'll see if i can reproduce
<ghartmann> I can check for logs if you want
<lazyPower> if i cant, i'm going to blame the cached stuff. You've successfully bootstrapped before right? this is a new'ish problem?
<ghartmann> yeap
<ghartmann> how can I clean the cache ?
<ghartmann> this env is my test env so I don't care about reinstalling it
<lazyPower> ghartmann: interesting... my dev environment is running 13.10 as well
<ghartmann> I think it might be related with the upgrade
<lazyPower> and apparently i just upgraded to 1.19
<lazyPower> ¯\_(ツ)_/¯
<ghartmann> I had an env setup with 1.17 and upgraded 1.18
<lazyPower> ghartmann: well, i can verify its working with 1.19
<ghartmann> you are using the new ppa ?
<lazyPower> i am, i have the dev ppa enabled
<ghartmann> I will make the upgrade now
<Tug> what are simplestreams ?
<Tug> ok found the definition here :) https://juju.ubuntu.com/docs/howto-privatecloud.html
<ghartmann> well it isn't solved
<ghartmann> how do I clear juju cache ? I am planning to try a full reinstall
<Tug> is it not the ~/.juju folder ?
<ghartmann> ah, I had tried it
<ghartmann> I am going for remove/purge
<ghartmann> restart/reinstall
<ghartmann> back in a few seconds
<Tug> $ juju-quickstart
<Tug> juju-quickstart: error: cannot use the amazon environment:
<Tug> a value is required for the control bucket field
<Tug> I have a value for "control-bucket" in /home/polo/.juju/environments/amazon.jenv if that's what it's talking about
<Tug> I copied it when running juju-quickstart -i and it seems to work
<Tug> I'm not sure why juju-quickstart saw it empty, maybe it used a different file
<rick_h_> Tug: I think it's checking the environments.yaml field. I'm not sure if it looks into the jenv files as those can be recreated and such
<Tug> oh no I see! in fact it has overwritten my environments.yaml
<Tug> apparently it did copy my keys but not the region
<rick_h_> Tug: what overwrote it?
<Tug> juju-quickstart
<rick_h_> Tug: please file that as a bug please. It should not do that
<rick_h_> if it missed a field it needs to be updated to correct for that.
<Tug> when running juju-quickstart -i it was able to somehow retrieve my current keys to connect to amazon but it ignored the other parameters
<rick_h_> https://bugs.launchpad.net/juju-quickstart
<rick_h_> oh, it ignores ones it doesn't know about, but if it lost or changed one it should not have, that's a bug
<Tug> yeap probably
<Tug> I can't access https://bugs.launchpad.net/juju-quickstart though
<rick_h_> hmm, seems launchpad is having issues atm
<Tug> yeah probably due to everyone updating their ubuntu ?
<rick_h_> Tug: launchpad is back up and responding for me if you've still got time to file that bug
<Tug> ok doing it now
<rick_h_> thanks Tug, really appreciate it
<Tug> not sure I did things correctly
<Tug> https://bugs.launchpad.net/juju-quickstart/+bug/1309678
<_mup_> Bug #1309678: a value is required for the control bucket field <juju-quickstart:New> <https://launchpad.net/bugs/1309678>
<Tug> first time reporting a bug in launchpad
<rick_h_> Tug: sweet, congrats.
<Tug> :) thx
<lazyPower> Hey Jose, looks like jcastro isn't going to be back in time for this charm school
<lazyPower> do you mind being my stand in announcer?
<jose> sure, no worries, just let me set up my environment and I'll be there in 2mins
<arosales> jose: to confirm are you setting up ubuntu on air for the upcoming juju plugins charm school?
<jose> arosales: you mean, hosting, or setting it up?
<jose> I can do both
<arosales> for sure setting up, but hosting too if you are free
<lazyPower> if anyone else wants to join the party.. speak up
<jose> arosales: cool. I'll be hosting then
<lazyPower> it's going to be a quick one. juju run is hyper handy but it'll be difficult to fill up the hour with just run
<jose> arosales: just remind me the dates? I don't see it on my calendar
<arosales> jose: roughly in 4 minutes
<arosales> lazyPower: correct?
<jose> arosales: isn't that one for juju run?
<lazyPower> yup
<arosales> jose: Ya the OSX workflow got rescheduled due to some juju box issues
<jose> ok, all good then
<arosales> so lazyPower was going to do a charm school on juju plugins and juju run
<arosales> jose: in about 2 minutes :-)
<jose> it's all good
<arosales> jose: could you also share the hangout link if folks want to join directly
<jose> https://plus.google.com/hangouts/_/ytl/vjVQa1PpcFP7bsYUNPvWzGCOUoWT528fqaHA2KZS4zU= that is
<arosales> cory_fu: ^
<arosales> jose: thanks
<mbruzek> Question for lazyPower:  Where would I put scripts if I want to be able to run them with juju run?
<mbruzek> juju run --unit mediawiki/0 'pwd'
<mbruzek> lazyPower, ^^
<jcastro> hi guys, sorry I am late!
<arosales> jose: nice back ground btw
<jose> arosales: thank you :)
<arosales> jcastro: no worries jose has us rolling
<jose> jcastro: any questions about juju run?
<jcastro> not really, I <3 it
<lazyPower> woo
<arosales> jose: thanks for hosting us
<lazyPower> lightning fast with the juju run
<mbruzek> great job Jose!
<jose> no worries :)
<lazyPower> juju run jose-hoster
<arosales> lazyPower: thanks for the educations
<lazyPower> COMMAND BROKE 9000
<lazyPower> so... arosales i was thinking
<lazyPower> its about time for me to bequeath the title of charm bot 5000 to Jose... based on his RevQ work, and now he's starting to look at the audit efforts and writing tests
<lazyPower> any objections?
<arosales> lazyPower: just a name right?
<lazyPower> oh dude, its more than just a name
<lazyPower> its a birth right
<arosales> lol
<lazyPower> he's earned it
<arosales> bequeath away then
<mbruzek> I second that Jose is awesome
<jose> :)
<arosales> +1 thanks jose for you work on charms
<jose> no worries, glad I could help :)
 * jose registers CharmBot5000 on freenode
<mbruzek> I want to know where jose  got that nice orange background
<lazyPower> oh hey, i missed the timeout flag on juju run. so go ahead and run juju rum vim /etc/hosts -- it'll time out in 5 minutes by default.
<lazyPower> s/rum/run
<lazyPower> can you tell where my head is? its 5pm somewhere.
<jose> mbruzek: orange wall was painted by myself, design was made by the Canonical Design Team (it's the same they have in London), and printed in vinyl
<mbruzek> very nice
<jose> there's a Juju pictogram somewhere around it
<lazyPower> jose: how much would you charge me to come paint my office with that gnarly pictogram?
<jose> lazyPower: plane ticket + hotel + food + $300
<lazyPower> mbruzek: you got my tab right?
<jose> and a trip to DisneyLand
<mbruzek> I will cover lazyPower's expenses up to $20
<lazyPower> woo
<jose> lazyPower: that warning you got about adding IPs to known host list, it was because it was the first time connecting to the instance via ssh
<lazyPower> Learning all kinds of good stuff today
<jose> lazyPower: have a min?
<lazyPower> jose: sfeole <-
<jose> sfeole: https://code.launchpad.net/~jose/charms/precise/teamspeak3/1297650-fix - lp:~jose/charms/precise/teamspeak3/1297650-fix
<sfeole> jose: another thing i just remembered is that when EC2 creates instances make sure that the security group assigned to that instance is not blocking port 9987
<jose> sfeole: will deploy again and test in a minute
<jose> sfeole: found the error, port was udp
<sfeole> jose: great!  were you able to connect?
<jose> sfeole: yep
<sfeole> jose: awesome!  you changed that in the EC2 Network & Security panel ?
<jose> sfeole: nope, the error was on the charm, and the EC2 security group had the port in both tcp and udp opened, so I could connect
<jose> oh, no, it only had tcp opened
<jose> I opened udp by doing open-port
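For context, what jose describes amounts to calling the open-port hook tool with a UDP protocol suffix; a minimal sketch as it might appear in a Python hook (this only works inside a hook context, and 9987 is just the TeamSpeak port mentioned above):

```python
import subprocess

# open-port is a juju hook tool; the PORT/PROTOCOL form requests a UDP rule.
subprocess.check_call(['open-port', '9987/udp'])
```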
<lazyPower> jose: hi5 on finding the fix
<jose> o/
 * sarnold whistles
<sarnold> what a charmbot
<jose> haha, hey sarnold
<sarnold> :)
 * lazyPower snickers
<axisys> trying to start a instance on ec2 and getting this error
<axisys> $ juju deploy --constraints="instance-type=m1.large" hadoop hadoop-master
<axisys> error: invalid value "instance-type=m1.large" for flag --constraints: unknown constraint "instance-type"
<axisys> what gives
<axisys> ?
<jose> axisys: try ec2-instance-type
<jose> otherwise, let me find the other way's details
<axisys> error: invalid value "ec2-instance-type=m1.large" for flag --constraints: unknown constraint "ec2-instance-type"
<axisys> following this
<axisys> http://javacruft.wordpress.com/2012/08/16/charming-hadoop/
<lazyPower> axisys: constraints have changed since 2012
<lazyPower> axisys: juju.ubuntu.com/docs -- constraints now use cpu and memory mapping to determine the instance type
<jose> ok, I'm finding the values
<lazyPower> so, specify the mem limit, which i think is 4g on a large
<axisys> lazyPower: I am running from ubuntu 14.04 64bit lts
<axisys> lazyPower: ok
<jose> also, I don't see an  m1.large, it's m3.large
<jose> --constraints "mem=4G" should do
<jose> (if that's the case)
<axisys> worked!
<lazyPower> boom!
<jose> awesome :)
<axisys> I tried with 1G
<axisys> thanks
<lazyPower> axisys: it's all abstracted so they work from provider to provider without much mucking with flags.
<lazyPower> so whatever you use to deploy to amazon will work with maas, openstack, etc.
<lazyPower> sarnold: nice namedrop on the charmbot 5000 reference
<sarnold> I think there's a typo on https://juju.ubuntu.com/docs/charms-constraints.html -- "cpu-power=0 cpu-power=0"
<sarnold> lazyPower: hehe thanks :)
<jose> sarnold: let me check and I'll commit a fix
<jose> on the work
<sarnold> woo
<axisys> lazyPower: will it work with lxc ?
<lazyPower> axisys: lxc doesn't really have "constraints" as it shares resources with the host
<axisys> lazyPower: right.. make sense..
<axisys> lazyPower: so what is the environemnt for lxc ? local?
<axisys> I do not see it in ~/.juju/environment.yaml
<lazyPower> yeah, LXC is the local environment
<lazyPower> make sure you have the juju-local package installed if you want to use it.
<axisys> its there
<axisys> I am on 14.04 .. I guess it comes with it?
<lazyPower> Possible.
#juju 2014-04-19
<axisys> so how do I switch my hadoop instance from ec2 to local? is it possible?
<lazyPower> What do you mean? Instead of working on EC2 work on your local machine?
<lazyPower> juju switch local && juju bootstrap && juju deploy hadoop
<axisys> right
<jose> if anyone's got privileges: https://github.com/juju/docs/pull/66
<lazyPower> jose: landed
<axisys> agent-state: down after switch and bootstrap
<jose> thanks lazyPower!
<lazyPower> axisys: are you running the 1.18 rev's of juju?
<axisys>     agent-version: 1.18.1.1
<axisys> 1.18.1-trusty-amd64
<lazyPower> axisys: is the machine still listed as pending?
<axisys> lazyPower: http://dpaste.com/1785728/ yep
<lazyPower> axisys: yeah i saw that behavior today as well from 1.19 - i'm not sure why thats happening. Can you pastebin the contents of your ~/.juju/local/log/machine-0.log?
<axisys> ok
<axisys> http://paste.ubuntu.com/7279918/
<lazyPower> axisys: i'm not positive but i think there's some hijinx going on in the tools. The log lines repeating about no proper tool versions available lead me to believe... let me try something and i'll report back
<axisys> lazyPower: thanks for your help.. juju feels magic..
 * axisys running some errands
<lazyPower> axisys: https://bugs.launchpad.net/juju-core/+bug/1309805
<_mup_> Bug #1309805: LXC / Local provider machines do not boot in 1.18 / 1.19 series <juju-core:New> <https://launchpad.net/bugs/1309805>
<lazyPower> if you want to +1 that bug for some additional traction. I'm pretty sure its a simple fix - but I don't have access to fix it.
<lazyPower> and most of the core team is out for the holiday weekend.
<lazyPower> jose: can you confirm you have this behavior with the local provider?
<jose> lazyPower: hmm, I'm not in trusty, but I can try and confirm with saucy
<lazyPower> jose: if you're running 1.18 or 1.19 branch it will show the same behavior
<jose> I'm running 1.18
<lazyPower> i've validated on trusty and saucy
<jose> let's go for it
<jose> lazyPower: btw, I'm still facing the old 'lxcbr0: no such interface' error where I have to uninstall and reinstall
<jose> but it's got a simple workaround
<lazyPower> hmmm thats strange
<lazyPower> the juju-local package should be adding that network bridge interface for you
<lazyPower> i'd open a bug against it if you haven't already
<jose> it does, but when I reboot my machine that error pops up
<jose> will do
<jose> let me bootstrap and deploy some instances first
<lazyPower> ack
<jose> woohoo, it's so fun playing with local and ec2 at the same time :P
<lazyPower> daily life on the juju solutions team. swapping from env to env
<jose> I think I can confirm the behaviour, machines are listed as pending and they're making no efforts to download any images or anything
<jose> http://paste.ubuntu.com/7280053/
<jose> lazyPower: ^
<jose> I'll be back in 5 in case you need me
<lazyPower> yep, missing tools all around. seems to be the majority of the issue.
<jose> at least it's not giving a standard message
<jose> \o/
<jose> statusnet charm = broken
<axisys> lazyPower: +1'd
<lazyPower> jose: file a bug against it, maybe the maintainer will update it.
<jose> lazyPower: will do
<jose> lazyPower: and the TeamSpeak charm is pushed for review!
<lazyPower> Boom!
<lazyPower> Great work jose
<jose> :)
<lazyPower> Thanks for submitting a fix
<jose> np, I'll see if there's any other charm I can work in
<jose> lazyPower: still around? I get some errors on my amulets tests
<henninge> Hi! I have a problem that is probably quite common so I would be grateful for a pointer where this is documented.
<henninge> I had to restart machine 0 (the state server) on EC2 and now it has a new ip address.
<henninge> All the agents are down now. I am aware that they need to be informed about the new ip address of machine 0.
<henninge> What is the best way to fix that?
<henninge> They are running 1.16.3
<ghartmann> is there something wrong with the trusty tools when using juju local
<ghartmann> ?
<jose> ghartmann: there's a bug currently open
<ghartmann> ah ok, I just wonder if it was known already
<ghartmann> tks
<balboah> testing out MAAS with juju through virtualbox. I got it bootstrapped, but it doesn't seem destroy-environment actually stops the machine, mongo and juju is still running on that instance, it just doesn't know about them anymore. Is that expected?
<hobbyBobby> good day - need some help networking if anyone is willing
#juju 2014-04-20
<lazyPower> balboah: Yep. Considering MAAS has no idea how to communicate with the machines through a virtualbox's hypervisor. If you were testing with KVM you could setup the machines power management via virsh and it would properly start/stop the instances.
<jose> it looks like someone's not partying this weekend
<jose> lazyPower: checking how to implement that
<lazyPower> eh i'm only around for a bit longer. Doing a Digital Ocean juju deployment of owncloud and documenting the process
<lazyPower> we need to get SSL support on this charm before we can make it a suggested path forward
<jose> that's what I meant, I'll check and see what can I do with the charm
<lazyPower> also, i would nack your deb install - because you removed the option to install from source :(
<lazyPower> it should be extended, with the deb being the default
<jose> I haven't removed the option to install from source?
<lazyPower> wat
<lazyPower> let me double check, when i scanned the MP i saw red over the source installations
<jose> sure
<lazyPower> ok, thats what i get for not looking closer
<lazyPower> you added the option. right on
<jose> yeah :P
<lazyPower> jose: i'll nack it just to make you ask questions, cuz i'm a spaz like that ;P
<jose> :P
<lazyPower> jose:  you dont remove the .migrating file in here after restoring the data.
<lazyPower> looking at http://bazaar.launchpad.net/~jose/charms/precise/owncloud/port-change+repo-support/view/head:/hooks/upgrade-charm
<jose> oh, whoops, will fix that now
<jose> fixed!
<jose> lazyPower: a problem I find with SSL is the cert, but if you have to go I think we can discuss this on Monday
<lazyPower> Certs are a solved problem
<jose> huh?
<jose> self-signed?
<lazyPower> yep
<lazyPower> Look at charm helpers SSL module
<lazyPower> and other charms that have a ssl-everywhere branch. They use charmhelpers to build the self signed certs
<lazyPower> i plugged the modification into Nagios about a month ago
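For illustration, a minimal sketch of generating a self-signed certificate the way described above; the exact charmhelpers module path and helper signature are assumptions here, so check the vendored charmhelpers copy in the charm before relying on them:

```python
# Sketch only - not the owncloud/nagios charm code.
# Assumption: charmhelpers ships a self-signed cert helper under
# charmhelpers.contrib.ssl; verify the name/signature in your copy.
from charmhelpers.contrib.ssl import generate_selfsigned
from charmhelpers.core import hookenv

KEY = '/etc/ssl/private/owncloud.key'
CERT = '/etc/ssl/certs/owncloud.crt'

def ensure_self_signed_cert():
    # Use the unit's public address as the CN so the cert at least
    # matches the hostname clients were pointed at.
    cn = hookenv.unit_get('public-address')
    generate_selfsigned(KEY, CERT, cn=cn)
```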
<jose> hmm, I'll check
<jose> I should mention, charm-tools failed to build on the PPA
<lazyPower> jose: charmhelpers != charm tools
<jose> ack
<jose> oh, do you have a minute to check on my amulet test?
<jose> seems to be failing, no idea why
<lazyPower> link to code + output?
<jose> code is https://code.launchpad.net/~jose/charms/precise/subway/add-tests
<jose> and I'm running the test now to get the output
<lazyPower> haha, you took my approach of using splinter
<lazyPower> awesome
<lazyPower> theres one problem with this approach - it will fail CI consistently
<lazyPower> for whatever reason the dependencies never get loaded properly
<jose> hmm, maybe that's the problem
<jose> I remember something about a file not being found
<lazyPower> I would just use python requests and parse the output for abs match's
<lazyPower> eg: <title>Subway</title>
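For illustration, a rough sketch of the requests-based check suggested here, assuming an amulet test with a deployed subway unit; the sentry lookup and port are assumptions and vary by amulet version and charm config:

```python
# Sketch of an amulet test method; `self.d` is assumed to be an
# amulet.Deployment that has already run setup() and sentry.wait().
import requests

def test_title(self):
    # Older amulet exposes units as d.sentry.unit['name/0']; newer
    # releases use d.sentry['name'][0]. Adjust to your version.
    addr = self.d.sentry.unit['subway/0'].info['public-address']
    # The port is whatever the charm exposes (assumption here).
    page = requests.get('http://{}:3000/'.format(addr))
    page.raise_for_status()
    # Parse the output for an absolute match, per the suggestion above.
    assert '<title>Subway</title>' in page.text
```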
<jose> hmm, ok, I'll investigate that
<lazyPower> hah cool
<lazyPower> this seems to work pretty well
<lazyPower> re: owncloud on DO with a 1GB memory provider and a single user
<lazyPower> jose: did you add the desktop client and interface with this at all when you were testing?
<jose> lazyPower: desktop client and interface? huh? what?
<lazyPower> jose: there's a desktop client for owncloud, akin to the ubuntu1 desktop client for sync'ing content to owncloud
<jose> I didn't test it with that
<jose> I can test it now, though
<lazyPower> http://i.imgur.com/LE4yd7F.jpg
<lazyPower> i'm putting it through the ropes as we speak
<lazyPower> syncing about 8gb worth of content
<jose> \o/
<jose> thank you
<jose> my upload speed is 20KBps
<lazyPower> jose: yeah i've got ya beat. this is with the sync in progress - http://www.speedtest.net/my-result/3450193516
<jose> I want those speeds! :(
<lazyPower> and, done
<lazyPower> not bad
<lazyPower> with a single client this setup works pretty well. I need to make some files collide
<lazyPower> see how it handles it, but i'm pretty happy with the results of this
<jose> awesome!
<jose> I tried with two users on the same instance, but we didn't get to upload too many files
<ghartmann> my juju local provider machines lock on pending state and never get delivered, I tried finding hints on the logs but I couldn't find anything
<jose> ghartmann: it's a known bug (bug 1309805)
<_mup_> Bug #1309805: LXC / Local provider machines do not boot in 1.18 / 1.19 series <juju-core:Confirmed> <https://launchpad.net/bugs/1309805>
<jose> can you mark that it affects you, please?
<ghartmann> I am using local providers to validate my charms
<ghartmann> so it is a blocking issue
<ghartmann> what alternatives do I have ?
<ghartmann> do we have an older ppa that would point to an working version ?
<jose> ghartmann: not exactly, you may use other providers such as EC2
<jose> if you are new to EC2 I can help you configure it with juju for free usage for a year (they give a free tier)
<ghartmann> I have used EC2 for a few years it's just that I am prototyping locally now
<ghartmann> I was very happy on how quickly I could spin dbs and apps on my local server
<ghartmann> so I moved from using vagrant+lxc to juju
<jose> yeah, unfortunately that bug's preventing us from deploying on local
<jose> unless... let me try something
<jose> nope, I don't know of a workaround
<ghartmann> ah, well ...
<ghartmann> it's go now right ?
<jose> what?
<jose> you mean, the language juju-core is written in?
<ghartmann> yes
<jose> yes, golang
<ghartmann> I will see if I can find out why it is getting in this state
<jose> cool :)
<jose> #juju-dev may have people who know about it more than myself
<ghartmann> not so familiar with go though
<ghartmann> ha thanks .. didn't know about this channel
#juju 2015-04-13
<cory_fu> Spads: http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
<apuimedo> can juju bundle use "charm: 'local:....'"? Cause bundle proof does not recognize it
<apuimedo> marcoceppi_: ^^
<marcoceppi_> apuimedo: not charm: local, but branch: <bzr branch> will work, and I think local: may work as well
<apuimedo> oh good
<apuimedo> let me check
<apuimedo> local: makes the proofer throw an error (just like branch: does)
<apuimedo> I think the proofer supports charm:
<apuimedo> since it fails with KeyError: 'charm'
<marcoceppi_> can you post your whole bundle
<marcoceppi_> you can't use local for charm-store bundles, just for local bundles
<apuimedo> marcoceppi_: it's a local bundle atm
<lazyPower> apuimedo: if this is just for testing/deployment, thats not an issue. When you go for charm store promulgation you'll want to refactor out the local: and plug in cs:series/service-revision
<apuimedo> indeed ;-)
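For reference, a rough deployer-style bundle fragment along the lines described above, using branch: while testing locally and a cs: URL for promulgation; names and paths are placeholders:

```yaml
# Illustrative v3 (juju-deployer) bundle fragment - names are placeholders.
my-bundle:
  series: trusty
  services:
    myservice:
      # While testing a local bundle, juju-deployer can pull the charm
      # from a branch; for charm-store submission swap this for e.g.
      # charm: cs:trusty/myservice-1
      branch: lp:~me/charms/trusty/myservice/trunk
      num_units: 1
```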
<apuimedo> lazyPower: One thing that should be done is to find some way to document the protocols
<apuimedo> for the openstack services it could be quite useful
<lazyPower> when you say protocols you mean the relationship interfaces?
<apuimedo> not when it's simple setting or getting
<apuimedo> interface protocols, yes
<lazyPower> We started doing that in our charm readmes/doc sites
<apuimedo> but when it's two-or-more steps
<lazyPower> http://chuckbutler.github.io/docker-charm/user/configuration.html
<lazyPower> i've adopted the pattern of: sends variables (or data) / receives variables (or data) - i need to normalize that nomenclature, but its handy to have as a reference.
<apuimedo> I was trying to think of something that could present the information of complicated services
<apuimedo> but the best I could come up with was flow charts
<lazyPower> you know, i did that with the DNS charm as well
<apuimedo> specially when there are config variables involved
<lazyPower> https://github.com/chuckbutler/DNS-Charm/tree/master/docs
<lazyPower> but yeah - i understand what you're getting at. this is something i've been noodling/experimenting with since i started
<lazyPower> i feel our current "grep the source" model is for the birds.
<lazyPower> we were talking about doing code analysis to scan the relationships and update a listing somewhere. I still might do this, and any charms that dont make it through the lexer are manually parsed
<apuimedo> lazyPower: those are cool graphs :-)
<lazyPower> draw.io - i can take no credit :)
<apuimedo> :-)
<apuimedo> that looks easier than my graphviz approach
<schkovich> nice work lazyPower :)
<lazyPower> thanks schkovich
<jose> marcoceppi_, lazyPower, mbruzek: I won't be able to join you for the Juju Office Hours this time, I'll be in class
<mbruzek> OK.
<mbruzek> Study hard!
<mbruzek> Or Pay attention!
<jose> :P
<jose> sure, sure...
<schkovich> Is it safe to assume that JUJU_ENV_NAME will be always set?
<cory_fu> cmars: http://big-data-charm-helpers.readthedocs.org/en/latest/
<cory_fu> cmars: lp:~bigdata-dev/charm-helpers/framework
<marcoceppi_> schkovich: so long as you're in a hook context, yes
<AskUbuntu> Juju - Can't access juju charm store | http://askubuntu.com/q/608792
<Guest36694> is there a good mail server charm?
<lazyPower> There is a postfix charm Guest36694
<lazyPower> admittedly i have limited interaction with it
<lazyPower> https://jujucharms.com/postfix/precise/2
<VijayT_> Hi
<Guest36694> hmm. looked for it.  jujucharms.com search doesn't return anything for 'postfix' or 'mail'
<VijayT_> do we need to install juju as root?
<VijayT_> using root login
<lazyPower> Guest36694: that link should have shown the postfix charm in the store
<lazyPower> VijayT_: you do not, juju is happy running as a user. if it requires elevated privs it will prompt you for a sudo password
<Guest36694> I see it.  just letting you know.
<Guest36694> it will return if I separate 'post fix'
<lazyPower> interesting, i searched for postfix. I think they are in the middle of a deployment
<lazyPower> I'll follow up with our webops/ui engineering team however, thanks for the heads up Guest36694
<VijayT_> thanks lazypower
<msbrown> How does one prevent juju-deployer from giving identical iSCSI initiator names to hosts?
<lazyPower> msbrown: thats a very specific question, and i'm not sure i have an answer. That might be worthwhile to ping the list
<lazyPower> msbrown: juju@lists.ubuntu.com - there's a sprint going on with the juju teams so they are mostly in EU timezones right now
<blahdeblah> lazyPower: I don't suppose you got a chance to look at my questions about the quassel-core amulet tests? :-)
<lazyPower> blahdeblah: i have not, but i can take a look
<blahdeblah> lazyPower: I don't think I put them in the MP; I pinged you a few days ago in canonical #juju
<lazyPower> https://bugs.launchpad.net/charms/+bug/999439 >
<mup> Bug #999439: Need charm for quassel-core <new-charm> <Juju Charms Collection:Fix Committed by paulgear> <https://launchpad.net/bugs/999439>
<lazyPower> oh
<blahdeblah> lazyPower: I can find them again if needed
<lazyPower> that would be brilliant if you dont mind
<blahdeblah> lazyPower: The main question is why the deployment setup has to be a class method instead of an instance method, since this seems to prevent running of multiple deployments from the same test.
<lazyPower> when issued through bundletester I was getting weird side effects from the deployment, and this is a byproduct of using instance methods. Converting it to a class kept the deployment on point in the test - but its a worthwhile discussion to bring up on the list to see if we can mod the testing tools to support both.
<blahdeblah> Now that I know how to access class methods & variables in python, this is less of a big deal than it was. ;-)
<lazyPower> While it works in some cases, i've run into cases where it just flat out side-effects the test into working once, then failing on the next run :|
<lazyPower> i wish i had a better answer, but i can describe the effect vs the fix -  this was actually suggested by mbruzek to me about 3 weeks ago when i ran into the issue
<blahdeblah> OK
<blahdeblah> Next Q: is there a good way to tighten the test loop, preferably by running locally somehow?  It's a bit frustrating having to wait for hours/days to see the test results, only to find it's some stupid by-product of my python inexperience.
<lazyPower> surely, you can kick off the test against the local provider using bundletester. my preferred method is bundletester -F -l DEBUG -v -e local - you can use any substrate vs CI
<lazyPower> standing up quassel + Mysql should be fairly quick on the local provider
<blahdeblah> Any doc on the setup required?
<lazyPower> https://github.com/juju-solutions/bundletester
<blahdeblah> ta
<lazyPower> np :)
<lazyPower> if you need further clarification on the testing loop with charms, I have a session coming up this week with an ISV
<lazyPower> and i'll see about adding you to it if they are OK with having a third party sit in
<blahdeblah> lazyPower: That would be great; I'm in UTC+10, though, so don't make it a big deal if timing is an issue.
<lazyPower> Its friday 11-12pm EDT
<blahdeblah> thanks again, lazyPower
<lazyPower> np blahdeblah, happy to help.
<lazyPower> I'll ship an email over to the team and circle back if you're still interested @ the time difference.
<lazyPower> s/@/with/
<blahdeblah> thanks
<blahdeblah> what's the UTC offset of EDT?
<lazyPower> 5 hours
<lazyPower> UTC Offset: UTC -4:00
<lazyPower> forgot, UTC doesn't observe DST
<blahdeblah> That's kinda the whole point of UTC. ;-)
<lazyPower> its been a long day :D
<blahdeblah> :-)
<blahdeblah> That time would probably work reasonably well for me
<blahdeblah> thanks
<lazyPower> np, i'll ping once i've got confirmation
#juju 2015-04-14
<marcoceppi_> stub: https://github.com/juju/amulet/pull/67
<nevermam> Hello..
<nevermam> I have a question regarding amulet test
<nevermam> I am trying to automate my test, which involves deploying 2 charms to same machine
<nevermam> I used below command:
<nevermam> d.add('odm' , constraints=OrderedDict([     ("to", mac) ]))
<nevermam> I got error like this:
<nevermam> Error deploying service 'odm' 2015-04-13 23:27:42 Command (juju deploy -e local --config /tmp/tmp9gvxMa --constraints to=58 --repository=. local:trusty/odm odm) Output:   error: invalid value "to=58" for flag --constraints: unknown constraint "to"
<nevermam> How can I make my code , deploy to an existing machine..
<nevermam> any help will be greatly appreciated
<blahdeblah> nevermam: try just "--to=58"
<blahdeblah> (i.e. take out "constraints ")
<nevermam> 2blahdeblah Do u mean something like this--  d.add('odm' , ("to", mac))
<nevermam> I tried      d.add('odm' ,to= mac) TypeError: add() got an unexpected keyword argument 'to'
<nevermam> and
<nevermam>     d.add('odm' ,--to= mac) SyntaxError: keyword can't be an expression
<nevermam> Is it possible to deploy a charm to machine 0 ?
<nevermam> IN local setup it is not possible..In openstack or somewhere else is it possible ?
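For illustration, the distinction that trips this up: "to" is a placement directive, not a constraint. A sketch of the workaround, with the amulet keyword treated as an assumption since placement support varies by amulet version:

```python
# Sketch only. Colocation is a *placement*, not a constraint, which is
# why --constraints to=58 fails with "unknown constraint" above.
import amulet

d = amulet.Deployment(series='trusty')
# Assumption: newer amulet accepts a placement argument on add();
# if yours does not, deploy by hand instead:
#   juju deploy --to 58 local:trusty/odm odm
d.add('odm', placement='58')
d.setup(timeout=900)
```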
<aisrael> charmers, this needs some merge love: https://code.launchpad.net/~nicopace/charms/trusty/python-django/allowed_host_patch/+merge/254958
<lovea> I've just commented on https://bugs.launchpad.net/juju-core/+bug/1434437?comments=all as I have a current problem addressed by this fix (I hope).
<mup> Bug #1434437: juju restore failed with "error: cannot update machines: machine update failed: ssh command failed: " <backup-restore> <maas-provider> <juju-core:Invalid> <juju-core 1.22:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1434437>
<lovea> It's a pressing issue, could anyone elighten me as to how I upgrade an existing juju state server?
<lovea> At the moment I cannot get the state server service to listen on port 17070 due to the TLS handshake error loop.
<cmars> waigani, http://big-data-charm-helpers.readthedocs.org/en/latest/
<dpb_other> Tribaal, can you ping me here?
<bdx> Are you all aware of the jujucharms.com issue??
<bdx> Go to jujucharms.com and use the search field in the upper right to search for solutions
<bdx> I receive an error page.....is this intended?
<aisrael> bdx: Sorry about that! We're looking into it now.
<aisrael> rick_h_: ^^
<bdx> aisrael: Great, thanks. Also....the entire site starts strobe flashing if I login
<bdx> lol
<aisrael> That would be the experimental 'disco ball' charm. :D
<bdx> It looks like the ultimate party
<AskUbuntu> Juju - Proxy issue | http://askubuntu.com/q/609240
<catbus1> Hi, visiting demo.jujucharms.com gives this error: Charm API error of type: load
<lazyPower_> catbus1: o/
<lazyPower_> catbus1: There's some backend issues going on there that our webops team is aware of and working to fix. Thank you for reporting the issue thought
<lazyPower_> *though
<catbus1> ok, thanks
<rick_h_> catbus1: yes, sorry data center issue at the moment. Very sorry for the trouble
<rick_h_> the team is on it
<redelmann> Hi, is charmstore down? or im having some internal problems?
<redelmann> ahh
<redelmann> ok
<redelmann> didn't read las message!
<redelmann> last
<catbus1> just checked. it's working for me now.
<rick_h_> redelmann: catbus1 yes, we had an outage in the DC today
<ctlaugh> I've got a problem with my mysql instance that is part of a larger openstack deployment.  I had to power off and move all of the servers and, when powered back on, juju status is reporting an error for mysql/0.  When I look at the juju log, it shows: http://paste.ubuntu.com/10823765/.  I have tried 'juju resolved mysql/0', but no luck.  Is there something I can do to recover.
<catbus1> ctlaugh: it says No such file or directory: '/var/lib/mysql/mysql.passwd'. If mysql is deployed successfully, the file should be created.
<ctlaugh> catbus1: It _was_ deployed and functioning correctly... for months.  I started getting this error after I had to shut down and restart the host system.
<ctlaugh> catbus1: I'm hoping there is a way to have this file be recreated, or to recover in some way.  I really don't want to have to start from scratch.
<catbus1> ctlaugh: https://jujucharms.com/mysql/trusty/24, how about creating the file manually sudo touch /var/lib/mysql/mysql.passwd and set the password in the file?
<catbus1> Once deployed, you can retrieve the MySQL root user password by logging in to the machine via juju ssh and reading the /var/lib/mysql/mysql.passwd file. To log in as root MySQL User at the MySQL console you can issue the following: juju ssh mysql/0 mysql -u root -p`sudo cat /var/lib/mysql/mysql.passwd` Seems like the password is plain text in /var/lib/mysql/mysql.passwd
<catbus1> after creating that file, run juju resolved --retry mysql/0
<ctlaugh> catbus1: Don't I need the right password to put back into that file?  That file is not there to look at.
<lazyPower_> ctlaugh: you can also reset the administrative password manually, and place it in that file
<lazyPower_> https://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html
<lazyPower_> >  B.5.4.1.2 Resetting the Root Password: Unix Systems   should get you moving
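Roughly, the recovery being suggested looks like the sequence below (run on the mysql unit, following the MySQL docs section linked above); the password is a placeholder and the steps are a sketch, not charm-supported tooling:

```bash
# On the unit: juju ssh mysql/0
sudo service mysql stop
sudo mysqld_safe --skip-grant-tables &
mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"
sudo mysqladmin -u root -p'newpass' shutdown   # stop the temporary instance
sudo service mysql start

# Recreate the file the charm reads its root password from:
echo newpass | sudo tee /var/lib/mysql/mysql.passwd

# Back on the client, re-run the failed hook (note --retry):
juju resolved --retry mysql/0
```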
<vijayt> while doing juju sync tools , I am hitting error https://gist.github.com/vijaytripathi/6012e812a876ef2a62e2
#juju 2015-04-15
<AskUbuntu> juju state server upgrade | http://askubuntu.com/q/609569
<jcastro> marcoceppi_, can you pass this around in  nuremberg? ^^
<marcoceppi_> jcastro: ack
<drbidwell> Is there a way to import/export a MAAS configuration from/to a yaml file?
<marcoceppi_> drbidwell: what do yoy mean maas configuration?
<drbidwell> marcoceppi: the networks, cluster, and zones information.  The stuff that I need to beat into by hand when I have to rebuild it.
<drbidwell> marcoceppi: The Mirantis fuel software has a process for feeding this kind of information into the system from a yaml file.  It was handy to have.
<AskUbuntu> How many nodes to use Maas/Juju for Openstack (Juno)? | http://askubuntu.com/q/609589
<puffi> Not sure if this is really a juju question, but i just did an openstack install on ubuntu 14.04 using juju. I rebooted the machine after install. It's not clear what needs to be started? It's a single instance
<cory_fu> bcsaller: https://insights.ubuntu.com/2015/04/15/using-the-services-framework-to-implement-your-charms-intent/
<cory_fu> et al.
<bcsaller> cory_fu: tl;dr
<bcsaller> jk
<cory_fu> :)
<bcsaller> cory_fu: that section on rewriting charms with the framework is gold
<cory_fu> bcsaller: Agreed
<Spads> bcsaller: thanks!
<bcsaller> :)
<bcsaller> Spads: thank you
<lukasa> If a charm relation is changed, can I determine exactly which fields in the relation changed?
<bcsaller> lukasa: not without keeping state in the unit. Its common to be able to process the whole state and take what ever actions are possible given the data you have
<lukasa> bcsaller: Yeah, sadly I'm working on charm that has a particularly annoying component in it
<lukasa> I have a follow-on question, but it works best if you know about the config file infrastructure the openstack charms have
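As noted, juju itself won't say which fields changed; the usual trick is to cache the previous copy of the relation data on the unit and diff it each hook run. A minimal sketch using charmhelpers, assuming a plain hook environment:

```python
# Sketch: work out which relation settings changed since the last run
# by caching the previous copy in the unit's local kv store.
from charmhelpers.core import hookenv, unitdata

def changed_relation_fields():
    db = unitdata.kv()
    rid = hookenv.relation_id()
    unit = hookenv.remote_unit()
    current = hookenv.relation_get(rid=rid, unit=unit) or {}
    key = 'last-settings.{}.{}'.format(rid, unit)
    previous = db.get(key, {})
    changed = {k: v for k, v in current.items() if previous.get(k) != v}
    db.set(key, current)
    db.flush()
    return changed
```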
<cmars> wwitzel3, i;ve got some pull requests into juju-pyramid for you, when you get a chance
<redelmann> Hi, im using bootstraped in remote maas
<redelmann> and "juju debug-log"
<redelmann> says: ERROR cannot open log file: open /var/log/juju/all-machines.log: no such file or directory
<redelmann> strange, why it is reading local file?
<redelmann> mh.. ok, i just realized that juju is telling me there is no all-machines.log on machine 0
<ctlaugh> lazyPower_, catbus1: I did a manual reset, put the password in that file, and run 'juju resolved mysql/0', but now I am getting all kinds of errors regarding authorization failures from horizon: "Unable to retrieve public images", "Unable to retrieve images for the current project" when I try to launch instances or create volumes.  Is the mysql root password used by the other charms in some way?
<lazyPower_> ctlaugh: it may be that horizon is using an administrative credential, which would denote it needs the password update as well
<lazyPower_> beisner: Heyo, Can i tap your brain for a second with regard to ctlaugh's issue? ^
<ctlaugh> lazyPower_, beisner: I looked in the horizon logs and they are full of the authorization errors from glance.  So, then, I looked in the glance logs, and they are full of this: "WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token".
<beisner> o/ hi lazyPower_ ctlaugh
<lazyPower_> beisner: the TLDR; on whats going on - ctlaugh joined yesterday and was missing /var/lib/mysql.passwd
<ctlaugh> So, I've been looking in the keystone configs and logs trying to see what might be the cause, but haven't found it yet.  Keystone's config looks like it uses a 'keystone' user, which shouldn't have changed.
<lazyPower_> to get root access back, i linked to MySQL docs on how to do a single-user-admin reset, this appears to have exploded into dropping auth for the other openstack components (assuming they were consuming the db-admin role)
<catbus1> I just want to point out, run juju resolved with --retry so the hook script will be re-run. running it without --retry only sets the status to resolved.
<beisner> could this be bug 1423153 ?
<mup> Bug #1423153: /var/lib/mysql/mysql.passwd no longer exist <mysql (Juju Charms Collection):Fix Released by hopem> <percona-cluster (Juju Charms Collection):Fix Released by hopem> <https://launchpad.net/bugs/1423153>
<beisner> ^ ctlaugh, lazyPower_    helps if I tag ya
<ctlaugh> beisner:, lazyPower_ - I saw that bug, but wasn't sure if it was related to my situtation or not.  I deployed this cloud months ago, and it's been working fine since then... until I powered the whole system off and moved it.  That's when I started getting the errors and the file was missing (well, not sure if it was ever there since I had never looked for it, but the charm was complaining that it was missing).
<beisner> ctlaugh, lazyPower_ - the only deployment i have up at the moment to poke on uses percona cluster instead of mysql.  on that, there is no /var/lib/mysql.passwd file.   that is with the 15.01 charms i believe.
<beisner> ctlaugh, so do the usual api commands fail?    keystone user-list && keystone token-get && nova list && cinder list && glance image-list  ...etc?
<beisner> +     keystone catalog  &&  nova service-list
<beisner> ctlaugh, lazyPower_ - i'd definitely start by looking at keystone, rabbit and mysql logs.   if those three aren't happy, the rest of the stack will be full of errors.
<lazyPower_> beisner: thanks for taking a look
<ctlaugh> beisner: lazyPower_ : some work, some don't: http://paste.ubuntu.com/10827803/
<beisner> lazyPower_, sure welcome!
<lazyPower_> ctlaugh: which revision of the keystone charm are you using?
<ctlaugh> lazyPower_:     charm: cs:trusty/keystone-14
<ctlaugh>     can-upgrade-to: cs:trusty/keystone-20
<ctlaugh> It does appear to be keystone that's having the problem, but I see nothing that looks like an error in /var/log/keystone/keystone.log.
<ctlaugh> I take that back...
<ctlaugh> Well, I see a few errors related to not being able to connect to mysql, but I think that is from when it was restarting.
<beisner> ctlaugh, basically, until    keystone token-get    works, everything else will be problematic.     i'd restart the keystone service, whilst watching its log, to see if it is connecting.
<beisner> connecting to mysql that is.
<beisner> ctlaugh, gotta step out for ~30, but will return.   let us know re:  keystone:mysql
<ctlaugh> beisner: ok, thank you.  I'm trying, but keystone.log is showing NO errors when token-get is failing.  Just lots and lots of debug spam.
<ctlaugh> beisner: I'm about to leave for lunch as well, but will be back soon.
<beisner> ctlaugh,    keystone --debug token-get
<bitchecker> hi
<ctlaugh> beisner: http://paste.ubuntu.com/10828233/
<beisner> ctlaugh, can you pastebin the keystone log?
<ctlaugh> beisner: working on it.  it's choking on the massive file size.  i am going to split it.
<ctlaugh> beisner: http://svartalfheim.laughlins.org/share/keystone.txt
<beisner> ctlaugh, sorry, pulled in another direction.   things to check:  what openstack environment variables are set?   it looks like you're using a token.  i generally don't, i use something like this, where x.x.x.x is the keystone IP.   you may get more meaningful output from   keystone --debug token-get      and keystone --debug catalog   if you set your environment like so.
<beisner> http://paste.ubuntu.com/10828404/
<ctlaugh> beisner: I took the token out, made sure I have all the EVs set like in your example (I had them all there already, but also had the token one) and I get this: Expecting a token provided via either --os-token or env[OS_SERVICE_TOKEN]
<ctlaugh> (from keystone --debug token-get)
<beisner> ctlaugh, what does this show?:     env | grep OS     ... be sure to mask your actual password.
<ctlaugh> beisner: http://paste.ubuntu.com/10828440/
<beisner> ctlaugh, dpkg-query --show python-keystoneclient   ?
<beisner> +:        apt-cache policy python-keystoneclient     ?
<ctlaugh> beisner: http://paste.ubuntu.com/10828453/
<AskUbuntu> OpenStack Juno and restarting it | http://askubuntu.com/q/609718
<beisner> ctlaugh - do ANY keystone cmds work?     keystone catalog      keystone user-list    keystone service-list     keystone endpoint-list
<ctlaugh> beisner: not after removing the token EV.  with the token, these work: endpoint-list, user-list, service-list
<ctlaugh> beisner: keystone catalog fails with this, even with the token: 'NoneType' object has no attribute 'has_service_catalog'
<beisner> ctlaugh,  weird.  i'm not sure what's going on.  i was hoping to see some meaningful keystone errors.    any hints from mysql logs?
<beisner> failed auth, errors connecting, etc?
<ctlaugh> beisner: nothing that I see
<beisner> ctlaugh, are these all deployed with juju?   any changes made to the stack outside of using juju config options?
<beisner> rather, charm config options
<ctlaugh> beisner: no, all with Juju -- no change to any charm config outside of that
<ctlaugh> beisner: I was hoping some kind of solution was going to jump out, but I'm thinking now it might be easier to ust blow it all away and start over.
<beisner> ctlaugh, so did the mysql password end up getting manually reset?
<ctlaugh> Yes, I manually reset it (the root password), and verified I could log in using it.  I put that password in the mysql.passwd file and run juju resolved --retry.
<ctlaugh> beisner: I looked in the config files of nova, keystone, glance, etc. etc and if there is a database URL, they all have separate users -- never root. So, I can't see how a root password change could have affected that.
<beisner> ctlaugh, right, i would think the same.
<ctlaugh> beisner: Is there something to do to force the keystone charm to re-do whatever it does to see if it can correct itself?
<beisner> ctlaugh, there is a debug hooks developer mode that can re-trigger individual hooks, but it's generally for charm development.
<arosales> anyone available to kick an automated charm test for me on https://bugs.launchpad.net/charms/+bug/1441622
<mup> Bug #1441622: Review queue:  xCAT  <Juju Charms Collection:New> <https://launchpad.net/bugs/1441622>
<ctlaugh> beisner: I really appreciate all your time today and all your attempts to help.  I am going to redeploy my openstack setup -- hopefully that'll be quicker to get back up and running.  I didn't have anything critical in it... it was just used for an Openstack CI setup I was creating.
<MitchM> Can I use Juju / MaaS to provision barebone servers with any image (not just openstack) ?
<beisner> ctlaugh, you're welcome - sorry i couldn't help dig deeper today.
<AskUbuntu> How to edit machine hardware detail deployed by Juju? | http://askubuntu.com/q/609741
<ctlaugh> beisner: I redeployed from scratch, and now all the keystone commands are working.  The only thing I see that is still failing (both in horizon and command-line) is 'glance image-list'  -- keeps returning an "Authorization failed for token".  Very strange... still looking.
#juju 2015-04-16
<ctlaugh_> If I follow instructions and do NOT specify an admin password for the keystone charm when I deploy it, how do I figure out what it is so that I can use it for authentication later?
<ctlaugh_> beisner: I'm still looking at this Openstack deployment trying to figure out what in the world is going on.  Is there a chance that something (in the charm(s), etc) that could have changed affecting how it gets deployed?  I have a simple set of scripts (I can share if necessary) that deploys the charms, creates relations between them, and that's about it.  I used them repeatedly earlier this year with no
<ctlaugh_> trouble at all.  Now, I can't get a Juju-deployed Openstack to work.
<cory_fu> bloodearnest: https://github.com/niedbalski/juju-deployerizer
<bloodearnest> cory_fu, awesome, thanks
<lazyPower> blahdeblah: just got confirmation. You're invited to tomorrow's lesson if you're still up for it
<mgz> cory_fu: got it, runSSHKeyImport in apiserver/keymanager/keymanager.go cn generate more error results than import keys
<lazyPower> mgz: so we know whats causing the panic now?
<cory_fu> mgz: It can generate more results, error or not, because there are more than one key per user.  But to make the functionality work, it has to return more results, since the keyInfo is what's used to write the valid keys as well
<cory_fu> So I definitely should have reved the api as well.  :/
<cory_fu> lazyPower: Yeah, it's my change like I thought at first
<lazyPower> doh
<lazyPower> allright, at least we know whats going on. Appreciate the attention on that bug :)
<cory_fu> mgz: Even worse (or better, depending on your point of view), the mismatch in result count could cause it to report an incorrect key as the source of the error without panicing
<cory_fu> The only time it will panic is if the one in error is near the end of the result list and there was at least one ID that had multiple keys
<francokaerntna> Hello, I am using the CLI to upload a bundle, but I only get the error, that the command 'juju bundle' is not known. I have the latest version of juju (1.22.1-0) and juju-quickstart (2.0.1) installed. Can anybody tell me how to deploy a bundle through the CLI?
<lazyPower> francokaerntna: surely  juju-quickstart bundlename.yaml should get you running if the bundle is valid
<lazyPower> francokaerntna: its a positionial argument to juju-quickstart, and you can get detailed help from juju-quickstart -h
<francokaerntna> lazyPower: ok, I will try that. But the docu clearly states 'juju bundle proof ..' https://jujucharms.com/docs/1.22/charms-bundles
<lazyPower> francokaerntna: thats a completely different utility coming from the charm-tools package :)
<lazyPower> juju-quickstart is it's own project as well, and has different nomenclature
<lazyPower> so if you're missing juju bundle, and would like to proof it - apt-get install charm-tools
<francokaerntna> lazyPower: thank you! Now it's working
<lazyPower> Glad to hear it :)
<francokaerntna> I'm trying to deploy the openstack charm on 3 maas hosts. Is it enough to edit the bundle.yaml (add 'to: "lxc:X"' where X is a number from 0 to 3) file and then run juju-quickstart?
<lazyPower> francokaerntna: it can be you need a bare-minimum of 2 machines to deploy openstack
<lazyPower> 1 to warehouse all the ancillary services, and 1 to dedicate to your VM host
<lazyPower> see http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/ as a jumping off point.
<francokaerntna> lazyPower: ok. But that one is without quantum. And I tried adding it afterwards but it didn't work. Can you give me some assistance?
<francokaerntna> anybody experience with deploying neutron into openstack with juju?
<lazyPower> francokaerntna: i know you have a limitation on the number of machines
<lazyPower> but we do have a bundle for this in the store: https://jujucharms.com/openstack
<lazyPower> you should be able to derive  what needs to be done from either the README or the bundle.
<lazyPower> blahdeblah: ping
<francokaerntna> lazyPower: I tried to deploy the openstack-base one, but it will always try to acquire 17 machines. Do you mean I should download the configuration, change the base.yaml file (so that every service contains a 'to: lxc:X' field) and then deploy with juju-quickstart?
<lazyPower> Thats an option
<francokaerntna> lazyPower: sorry, I read your answer too quickly. I will try to humanly parse the configs and reproduce what's written in them.
<lazyPower> well, you can also use the bundle
<lazyPower> adding to: lxc:# should work
<lazyPower> its no different using the bundle than deploying by hand, and arguably less error prone :)
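For illustration, the kind of edit being discussed: each service in the bundle gets a to: directive pointing at a container on one of the MAAS machines. This fragment is a placeholder, not the real openstack-base bundle, and the machine numbers are arbitrary:

```yaml
# Fragment only; the real openstack-base bundle has many more services
# and options. Placement syntax follows the discussion above.
openstack:
  series: trusty
  services:
    mysql:
      charm: cs:trusty/mysql
      num_units: 1
      to: "lxc:0"
    keystone:
      charm: cs:trusty/keystone
      num_units: 1
      to: "lxc:1"
    nova-compute:
      charm: cs:trusty/nova-compute
      num_units: 1
      # the hypervisor itself should sit on bare metal, not in a container
      to: "2"
```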
<jshieh> Hey, has anyone had a chance to look at the Power PPA bugs listed under:
<jshieh> https://bugs.launchpad.net/charms/+bugs?field.tag=ppc64el
<jcastro> jose, hey do you think you can host the hangout today? I am having some display/xorg issues today and my laptop isn't very reliable
<jcastro> https://plus.google.com/events/c4t2adthcqk723kbd6j6otr1orc
<jcastro> can everyone see that event?
<marcoceppi_> yes
<jcastro> marcoceppi_, paste the URL in here
<marcoceppi_> https://plus.google.com/hangouts/_/hoaevent/AP36tYdOBGmgZLcV7P1GJ_IWCxHj3S0wWsTbC4UXQy5bAJ5EgrAriQ?authuser=0&hl=en
<marcoceppi_> If you want to watch, https://plus.google.com/events/c7kaoc26rs816cpf14mrpnpuiio
<marcoceppi_> jcastro: http://youtu.be/WWGJpRtZ2H0
<lazyPower> To anyone following along the hangout on air and wants the release notes: https://lists.ubuntu.com/archives/juju/2015-March/005118.html
<jcastro> https://jujucharms.com/docs/stable/reference-release-notes
<beisner> o/ popping in and out whilst testing a ton o stuff.
<beisner>    ++tabular lazyPower !
<lazyPower> aww yeah
<lazyPower> tabular  = godmode for watching your deployments
<lazyPower> cheats++ :D
<lazyPower> make sure you hi5 katco for landing that bit of awesomeness
<lazyPower> https://insights.ubuntu.com/2015/04/16/expediting-local-isolation-with-docker-and-juju/ - if anyone wanted to review the charm context switching on insights
<jw4> lazyPower: great article... thanks
<blahdeblah> lazyPower: pong - Saw your comment on https://bugs.launchpad.net/bugs/999439; I haven't got back to it this week.  Is there a way I can force a run through the testing regime to see results without you guys having to review & approve?
<mup> Bug #999439: Need charm for quassel-core <new-charm> <Juju Charms Collection:In Progress by paulgear> <https://launchpad.net/bugs/999439>
#juju 2015-04-17
<jose> jcastro: I have classes, we need to coordinate so I can help you out
<beisner> ping ctlaugh_
<xenon_> hi there!
<gnuoy> jamespage, have you got a sec for https://code.launchpad.net/~gnuoy/charm-helpers/nrpe-proxy/+merge/256626 ?
<cmars> hi cory_fu
<cory_fu> Hey
<cmars> cory_fu, i think you're looking for code.google.com/p/rog-go/exp/cmd/godef
<cory_fu> Thanks
<jamespage> gnuoy, +1
<gnuoy> Thanks
<jamespage> gnuoy, just re-testing the freyes revised pxc HA improvements
<gnuoy> kk
<gnuoy> I've just upgraded from 1.22.1 to 1.23.0 (from proposed ppa) and after the upgrade I seem to have the same symptoms as Bug #1438489 which is fix committed.
<mup> Bug #1438489: juju stop responding after juju-upgrade <upgrade-juju> <juju-core:Fix Committed by johnweldon4> <https://launchpad.net/bugs/1438489>
<gnuoy> jw4, am I missing something or could that bug still be present?
<jw4> gnuoy: I'm concerned that 1.23.0 doesn't have the latest 1.23 fixes
<gnuoy> ah, that would not be ideal
<jw4> gnuoy: that change is in the 1.23 branch, but if you're seeing that error then the version of juju you upgraded to can't have that revision ....
<jw4> I overheard an issue this morning where the deployed tarball was the wrong version
<jw4> so it may be an issue that will be fixed soon
<jw4> gnuoy: hmm; If I'm reading the milestones and releases right it might be fixed in 1.23.1 instead of 1.23.0
<gnuoy> jw4, Does that mean that the tools juju plucked from streams.canonical.com  when I did the upgrade might not have had the fix
<gnuoy> oh
<jw4> gnuoy: no... the 1.23.0 tag DOES include my fix, so a recent install of 1.23.0 should work (if the tarball was right)
<gnuoy> jw4, the tar ball you're referring to is the tools downloaded by juju from streams.c.c ?
<jw4> gnuoy: I think so.. whichever mechanism delivers the actual jujud that is upgraded to.  One thing I noticed in your bug report... did you see that error message only once? or many times?
<gnuoy> let me check, then env is still there
<jw4> The 'fix' that I put in converted the hard error into just an error being logged once in the logfile but then continuing normally
<gnuoy> s/then/the/
<jw4> (so any other symptoms you're seeing may be unrelated to that specific bug)
<gnuoy> jw4, I see that message once on each unit, and that appears to be the last message each unit reports
<jw4> gnuoy: ok.. so it looks like my fix is in.
<jw4> gnuoy: otherwise the message would just keep repeating infinitely.
<jw4> gnuoy: what are the follow on symptoms you're seeing?  Hooks not firing at all?
<gnuoy> jw4, but that message appears to be terminal, nothing happens after
<gnuoy> jw4, right, silence in logs and no hook execution
<jw4> gnuoy: I'll see if I can reproduce that.
<gnuoy> jw4, thank you for all your help
<jw4> gnuoy: thanks for reporting :)
<jw4> gnuoy: I've reproduced the symptoms
<jw4> gnuoy: I'll re-open that bug
<gnuoy> jw4, that was quick, thanks!
<jw4> gnuoy: thank the team when it's fixed ! :)
<francokaerntna> hey, anybody of you know what I need to change in my juju config, so that I get 'public' ips in my maas environment?
<francokaerntna> I entered both networks in maas 'network' tab, added 2 interfaces (one of each network) to the cluster, and even 'juju status' is showing me both networks. any clue?
#juju 2015-04-18
<francokaerntna> hey, anybody of you know what I need to change in my juju config, so that I get 'public' ips in my maas environment? I entered both networks in maas 'network' tab, added 2 interfaces (one of each network) to the cluster, and even 'juju status' is showing me both networks. any clue?
<wurde> juju and maas seems promising, wish there were more tutorials of the two used side by side
<francokaerntna> wurde: it is indeed! Works nearly out of the box (still needed 3 days to make it work)
<wurde> hey franco
<wurde> I feel that way, but it's off my surface level impressions and I'm reading the maas docs.
<jackweirdy> Lets say I have a service that I deploy to production with Juju, and I do my development locally. Is it feasible to do development inside a local juju deployment, then commit the code, and have my CI server deploy to production via Juju? Or is that crazy talk. I ask because we have a complicated mesh of dependencies, whose configuration and deployment Juju solves quite nicely. Seems weird to have that benefit for
<jackweirdy>  production but not development
<jackweirdy> Hello everyone, btw :)
#juju 2015-04-19
<lazyPower> jose: you were talking about working w/ teh docker charm in the near future - you'll want to track the status of this
<lazyPower> https://bugs.launchpad.net/charms/+source/docker/+bug/1445995
<mup> Bug #1445995: Docker latest=true displays transient failure <papercut> <docker (Juju Charms Collection):Confirmed for lazypower> <https://launchpad.net/bugs/1445995>
<lazyPower> looks like the update to 1.6 caused some issues
<francokaerntna> hey, how can I change the virtualisation type from qemu to kvm? the virt_type in nova.conf changes nothing
<francokaerntna> cpu has vmx flag set
#juju 2016-04-18
<jamespage> gnuoy, do you need me to work through the mitaka keystone v3 updates?
<gnuoy> jamespage, yes please. just be carefull, there are two with pending results from full rechecks
<gnuoy> jamespage, oh, actually just one needs us to wait
<jamespage> gnuoy, I'm assuming you have exercised them with v3 as the amulet tests don't right?
<gnuoy> jamespage, I did, yes. I've marked the spreadsheet as such
<jamespage> gnuoy, \o/ +1000 thankyou
<jamespage> gnuoy, ok they all look good - https://review.openstack.org/#/c/306861/ pending full recheck..
<gnuoy> kk
<gnuoy> stub, do you write unit tests for any of juju interfaces on http://interfaces.juju.solutions/ ?
<stub> gnuoy: No. I haven't used them either ;)
<gnuoy> jamespage, got a sec for https://review.openstack.org/#/c/306861/ ?
<jamespage> gnuoy, done
<jamespage> gnuoy, about to clone release notes btw
<gnuoy> thanks
<gnuoy> ok, ta
<jamespage> gnuoy, just a straight copy from 16.01 for now
<jamespage> https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1604
<gnuoy> jamespage, is there a bug for the dashboard upgrade issue you mentioned?
<jamespage> yes
<jamespage> rockstar, is https://review.openstack.org/#/c/305896 ready to go? I was wondering how your got your commit message into that state?
<beisner> hi gnuoy, a lil ++debug @ https://code.launchpad.net/~1chb1n/openstack-mojo-specs/fix-git-checkout-stable/+merge/292078 if you will
<beisner> ready to land if you're +!
<beisner> +1 even ;-)
<gnuoy> beisner, approved
<beisner> gnuoy, thx sir
<gnuoy> np
<beisner> gnuoy, fyi, i'm going to merge your os-mojo precise bundle update then follow up with another mp for some other things.  thx for IDing the issue :)
<beisner> also, o-c-t bundles updated for precise re: nova no longer neutroning
<gnuoy> kk, thanks
<rockstar> jamespage: commit message into what state?
<rockstar> jamespage: it is ready to go, but requires a dependent o-c-t branch which I apparently never pushed on Friday, but thought I had. It's up for review now.
<gQuigs> can we close (or mark incomplete) all the bugs in pyjuju - I always end up looking https://bugs.launchpad.net/juju  instead of juju-core
<jamespage> rockstar, re commit message - two change-id's ?
<rockstar> jamespage: it's two commits squished into one.
<rockstar> (because openstack)
<jamespage> rockstar, not sure I understand 'because openstack'?
<rockstar> jamespage: they like you to squish your patch into a single commit. I don't much care for that workflow, but that's the way they want it.
<jamespage> rockstar, ok
<jamespage> rockstar, have you read https://wiki.openstack.org/wiki/GitCommitMessages ?
<rockstar> jamespage: with respect, it seems that that wiki entry is contradictory to what it wants for gerritt.
<jamespage> rockstar, all I've really been looking for across other reviews is a decent summary title, some details of what and why the change, closure of bugs and the Change-Id...
<rockstar> jamespage: I can remove the other Change-Id when I rebase, if that helps. I've just been leaving whatever it gives me.
<jamespage> rockstar, sure - tbh I was trying to figure out how you got two change-ids?  did you squash two commits together?
<rockstar> Yeah, that's exactly what happened.
<rockstar> Breaking the "one logical change per commit" rule.
<suchvenu> Hi cory_fu
<cory_fu> Hello
<suchvenu> I am getting the below error while calling the set_db_details function in my interface
<suchvenu> 2016-04-18 14:37:02 INFO config-changed   File "/usr/local/lib/python3.4/dist-packages/charms/reactive/relations.py", line 256, in conversation 2016-04-18 14:37:02 INFO config-changed     raise ValueError('Unable to determine default scope: no current hook or global scope') 2016-04-18 14:37:02 INFO config-changed ValueError: Unable to determine default scope: no current hook or global scope
<suchvenu> http://pastebin.ubuntu.com/15914037/
<suchvenu> has the interface code
<cory_fu> suchvenu: That error means that your relation is scope SERVICE or UNIT and you are not looping over a list of conversations.  Remember that your db2 charm could have any number of consumers asking for a database at a given time.  Your charm code should have a for loop over each consumer and then do the set up and call set_db_details for each one, passing which consumer in as a param which is given to self.conversation(consumer) to pick up the right
<cory_fu> one
<cory_fu> suchvenu: See https://github.com/johnsca/juju-relation-mysql/blob/master/provides.py#L85
<mbruzek> bdx: I know this is a deep "call back" but regarding the tls layer you mentioned a desire for a provides/requires relation. I created an issue against the repository and was hoping you could comment on that https://github.com/mbruzek/interface-tls/issues/3
<cory_fu> suchvenu: And the pastebin I sent you in the email (http://pastebin.ubuntu.com/15851388/) is slightly wrong.  Line 7 should include $service right before $db_name
<suchvenu> ok.
<suchvenu> When my scope is SERVICE, should i always use
<mbruzek> bdx: I just want to make sure I captured your request properly and if you have any more information to please add it.
<suchvenu>  conversation = self.conversation()         conversation.remove_state or conversation.set_state  always ?
<suchvenu> does self.remove_state and self.set_state also work ?
<simonklb> Hey! What would be the best course of action to find the provided hooks from interfaces listed on http://interfaces.juju.solutions/ ?
<simonklb> is it simply to read the code provided in the repo?
<cory_fu> suchvenu: You must always use the conversation, but also have to give it an argument unless you're in a @hook decorated method.  So, set_db_details would need to use: conv = self.conversation(consumer)
<simonklb> or can you list the hooks somewhere?
<suchvenu> I mean what is the difference between both ways of calling ?
<cory_fu> suchvenu: Sorry, I'm going to be a few minutes for a meeting right now.  Should be done in ~ 15 min, but I may be slow to respond until then
<suchvenu> ok, no issues
<cory_fu> suchvenu: When inside @hook, there is only ever one conversation (for that specific hook)
<cory_fu> And you don't know ahead of time what it is, so you just have to use self.conversation() (with no arg) to have it figure it out from the @hook
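Pulling the two points above together, a sketch of what the provides side and its callers look like; class, state, and parameter names are illustrative, modeled loosely on the mysql provides.py linked earlier rather than copied from it:

```python
# provides.py - sketch of a SERVICE-scoped interface layer.
from charms.reactive import RelationBase, scopes, hook


class DB2Provides(RelationBase):
    scope = scopes.SERVICE

    @hook('{provides:db2}-relation-{joined,changed}')
    def changed(self):
        # Inside a @hook there is exactly one current conversation,
        # so self.conversation() needs no argument.
        conv = self.conversation()
        conv.set_state('{relation_name}.requested')

    def requested_databases(self):
        # Outside a @hook, every consuming service has its own
        # conversation; the charm layer loops over these.
        return self.conversations()

    def set_db_details(self, consumer, db_name, user, password, host):
        # The charm says which consumer it is answering; look up that
        # consumer's conversation explicitly (conv.scope is the remote
        # service name for SERVICE-scoped relations).
        conv = self.conversation(consumer)
        conv.set_remote(database=db_name, user=user,
                        password=password, host=host)
        conv.remove_state('{relation_name}.requested')
```

In the charm layer this pairs with something like `for conv in db2.requested_databases(): db2.set_db_details(conv.scope, ...)`, one pass per consumer.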
<BrunoR> marcoceppi: charm push does not work for me, also charm login raise an error, but in Ubuntu-SSO webconsole charm shows up as application?
<marcoceppi> BrunoR: have you logged into jujucharms.com yet?
<BrunoR> marcoceppi: ah, mein fehler
<mbruzek> Hi simonklb a well written layer may not write hooks at all.  What is your requirement here?
<marcoceppi> BrunoR: it's a weird thing, I'll make sure to document it on the site
<simonklb> mbruzek: I'm currently just getting to know juju - right now I'm looking at setting up a relation with keystone
<simonklb> I was under the impression that you used interfaces which provided hooks for you to accuire data from other charms
<mbruzek> simonklb: First of all welcome.
<simonklb> thanks!
<simonklb> acquire* :)
<mbruzek> simonklb: It might help to read this document: https://jujucharms.com/docs/devel/developer-event-cycle
<simonklb> ah, so the hooks are usually named in a specific way
<simonklb> thats good to know!
<simonklb> I thought it was more arbitrary
<mbruzek> simonklb: To answer your question directly any method decorated by the @hook decorator would signify a hook. So you could search the code for @hook.
<simonklb> right
<mbruzek> simonklb: However with well written reactive layers we have found less of a need to use @hook decorators and rely only on states to make writing layers more natural
<simonklb> perhaps I should be looking at using the openstack-API layer instead
<simonklb> but it felt a bit of overkill including rabbitmq and mysql when the only thing I really need is to talk to the nova api
<mbruzek> simonklb: I don't know if the openstack team has rewritten nova or keystone in layers yet. You may have to look at the traditional hooks in the mean time.
<mbruzek> simonklb: If you are looking at the traditional charms (non layered) you should not worry about the interfaces.juju.solutions webpage.
<simonklb> mbruzek: they have this - http://interfaces.juju.solutions/layer/openstack-api/
<mbruzek> They have it but I don't know if they *use* it for any charms yet.
<simonklb> ah I see
<simonklb> heading home for today, thanks for the help!
<simonklb> I'll probably pop in here from time to time :)
<mbruzek> I heard they are moving that direction but I don't know if they have written any charms that way
<mbruzek> yet
<mbruzek> simonklb: anytime
<mbruzek> simonklb: let me know if you have any other questions when you get home.
<cory_fu> suchvenu: Did that solve your problem?
<suchvenu> cory_fu: I didn;t check it still
<suchvenu> If its GLOBAL scope, then we can directly call self.set_state ?
<suchvenu> I mean only for GLOBAL we can call like that ?
<cory_fu> Only for GLOBAL, yes
<cory_fu> Because GLOBAL scope means that everything shares the same conversation, no matter what
<suchvenu> ok, for SERVICE and UNIT, conversation is required
<suchvenu> I will try and let you know. So in the reactive layer, i should call the set_db_details in a loop
<jamespage> beisner, gnuoy: https://review.openstack.org/#/c/307076/ ready for review
<beisner> jamespage, thx, on it
<beisner> gnuoy, jamespage (fyi cholcombe ) - got a new one.  bug 1571782 (n-api db migration fail) ... complicating the other t-i one we were drilling down (blocked on block  device detection).
<mup> Bug #1571782: Trusty-Icehouse neutron-api is failing shared-db-relation-changed for mysql:shared-db <uosci> <neutron-api (Juju Charms Collection):New> <https://launchpad.net/bugs/1571782>
<beisner> and weeee!:  bug 1571789
<mup> Bug #1571789: install hook failing on Xenial with unmet dependency on mysql-client <uosci> <percona-cluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1571789>
<beisner> (pxc: your amulet tests will look a lot different after ODS)
<kwmonroe> cory_fu: when i have relation decorators "@when x; @when y; def foo(x, y)", what's the order of foo() params?  it seems to be (y, x) which is odd to me.
<cory_fu> kwmonroe: It follows Python decorator ordering which is a bit odd.  Basically, decorators are processed inside-out (a.k.a. bottom up) and then the args are processed left-to-right.
<kwmonroe> oh cool - TIL.  thx cory_fu.
<cory_fu> So @when x; @when y z; def foo(y, z, x):
<cory_fu> It tripped me up just the other day, in fact
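Spelled out, the ordering described above means the handler arguments line up like this (a toy example; each argument is the relation instance supplied by the corresponding state, where one exists):

```python
from charms.reactive import when

# Decorators apply bottom-up, then each decorator's arguments are read
# left-to-right, so the handler receives (y, z, x) in that order.
@when('x')
@when('y', 'z')
def foo(y, z, x):
    pass
```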
<BrunoR> after charm push and publish, the named store still shows the old revision of my charm. is there a step which I overlooked?
<cory_fu> BrunoR: What's the charm store URL?
<cory_fu> BrunoR: You might need to set the acl, or if you're referring to a promulgated charm then it requires a charmer and another step for promulgation
<BrunoR> cory_fu: https://jujucharms.com/u/3-bruno/ , e.g. the quobyte-registry charm is linked in revision 2, I would expect revision 4
<cory_fu> BrunoR: Hrm.  That's odd.  https://jujucharms.com/u/3-bruno/quobyte-registry/ points to 4 and everything looks fine in `charm show cs:~3-bruno/trusty/quobyte-registry id published perm`
<cory_fu> But your namespace isn't updated
<cory_fu> The search page is up to date as well: https://jujucharms.com/q/quobyte-registry
<cory_fu> BrunoR: Looks like an issue with namespaces
<cory_fu> Looks like it's affecting ~bigdata-charmers as well
<cory_fu> BrunoR: Please file a bug at https://github.com/CanonicalLtd/jujucharms.com/issues
<jamespage> beisner, I swear I installed a pxc on xenial this morning...
<beisner> jamespage, yah i'm puzzled too.  the only thing i can think is an image rev, and a dropped pkg
<beisner> ie. remnants of the mysql 5.6 thing
<beisner> jamespage, fyi pushed unit test update
<jamespage> beisner, ack - look shortly
<beisner> thx jamespage
<jamespage> beisner, rockstars lxd stuff is not landing - I suspect that its the double Change-Id
<beisner> lolz
<jamespage> beisner, trying again - https://review.openstack.org/#/c/305896/
<gnuoy> beisner, my gut feel for Bug #1571782 is that you've somehow got an old version of nova-cloud-controller charm. If you hit it again can you attach the /var/log/juju/* logs from neutron-api and nova-cloud-controller please?
<mup> Bug #1571782: Trusty-Icehouse neutron-api: Table 'quotas' already exists <uosci> <neutron-api (Juju Charms Collection):New> <https://launchpad.net/bugs/1571782>
<beisner> jamespage, sure enough, yesterday, a xenial image had mysql-client-5.6 pkg avail:  http://pastebin.ubuntu.com/15920829/
<beisner> gnuoy, http://10.245.162.36:8080/view/Dashboards/view/Mojo/job/mojo_runner_baremetal/623/consoleFull
<beisner> 00:14:39.627 2016-04-18 19:18:27 [INFO] cloning git://github.com/openstack/charm-nova-cloud-controller
<beisner> fyi ^ and yes indeed, more runs underway.
<icey> any chance on  quick C-H review? https://code.launchpad.net/~chris.macnaughton/charm-helpers/use-lsblk-to-check-mount/+merge/292199
<icey> cholcombe: can I get a community review on ^^
<cholcombe> icey, i'm on it
<icey> thanks!
<beisner> icey, what are your intentions with such a thing?  ;-)   you wanna resync some things dontchya?
<icey> beisner: I so do!
<icey> the ceph/ceph-osd charms can
<icey> can't do encryption without that
<icey> because C-H was wrong about detecting if something is mounted
<icey> :-D
<beisner> icey, is there a blocking bug on our 16.04 release checklist for that?
<icey> no....?
<beisner> at this point, i'm only inclined to land things that directly unblock those blockers
<icey> beisner: the charm as it stands today will happily deploy itself to broken :)
<icey> your choice ;)
<beisner> ie.  this would be a fix to land in next/master after 16.04, then do a stable charm update / backport.
<beisner> dood where's your bug?
<beisner> what targets/combos are affected?
<icey> uhm, anything hitting infernalis +, so wily+xenial?
 * beisner waits for bug
<icey> it'll solve https://bugs.launchpad.net/charms/+source/ceph-osd/+bug/1513009
<mup> Bug #1513009: With MAAS 1.9a2, lvm disks are not correctly skipped when in-use by ceph disk prepare. <cloud-install-failure> <landscape> <ceph (Juju Charms Collection):Triaged by xfactor973> <ceph-osd (Juju Charms Collection):Triaged by xfactor973> <https://launchpad.net/bugs/1513009>
<cholcombe> beisner, yeah the lvm one is a long standing bug that I wasn't able to fix
<cholcombe> it too stems from is_device_mounted not going deep enough
<icey> https://bugs.launchpad.net/charms/+source/ceph/+bug/1571840 beisner
<mup> Bug #1571840: Ceph will enter an error state when attempting to add a new encrypted block device <ceph (Juju Charms Collection):New> <https://launchpad.net/bugs/1571840>
<icey> beisner: https://bugs.launchpad.net/charms/+bug/1571842
<mup> Bug #1571842: ceph-osd error encryption <Juju Charms Collection:New> <https://launchpad.net/bugs/1571842>
 * icey EODs
<beisner> cholcombe, icey - please bring those all to the daily tomorrow am.
<beisner> jamespage, if you're still around, full pass on pxc @ https://review.openstack.org/#/c/307414/
#juju 2016-04-19
<blahdeblah> Anyone got a good example of something that consumes an interface layer like https://github.com/juju-solutions/interface-http or https://github.com/juju-solutions/interface-juju-info ?
<blahdeblah> The documentation on interface layers says they're the most misunderstood part, and I'm still misunderstanding even after reading the doc 4-5 times.
<marcoceppi> blahdeblah: possibly, what are you misunderstanding?
<blahdeblah> marcoceppi: How I can consume the private-address (and add public-address) in https://github.com/juju-solutions/interface-juju-info, for starts.
<blahdeblah> I couldn't find anything that showed how to get at the things defined in auto_accessors
<marcoceppi> blahdeblah: so, I can whip up an example
<blahdeblah> marcoceppi: Don't go to any special trouble.  I was just hoping to find some examples of charms which used either of those layers so I could get a feel for how they're used.
<marcoceppi> blahdeblah: private-address is the only thing available in auto-accessors for juju-info
<marcoceppi> also, this layer has a few typos.
<blahdeblah> marcoceppi: I know; I wanted to add public-address as well
<marcoceppi> blahdeblah: ah, I see
<blahdeblah> marcoceppi: yeah - I might have fixed them in my fork
<blahdeblah> https://github.com/paulgear/interface-juju-info
<blahdeblah> I started playing with it and then got stuck on how to actually use it.
<marcoceppi> blahdeblah: https://gist.github.com/marcoceppi/fb911c63eac6a1db5c649a2f96439074
<blahdeblah> marcoceppi: Something else that confused me was that interface uses scopes.GLOBAL, but the doc says "All connected services and units for this relation will share a single conversation. The same data will be broadcast to every remote unit, and retrieved data will be aggregated across all remote units and is expected to either eventually agree or be set by a single leader."  So it seemed to me that there wouldn't be an opportunity to get the
<blahdeblah> private-address (or public-address, assuming I've done that right) from every unit.
<blahdeblah> ^ hope that didn't get cut off
<marcoceppi> blahdeblah: right, scopes.GLOBAL is wrong, you'd want scopes.UNIT
<blahdeblah> marcoceppi: OK; that gist is pretty simple
<blahdeblah> So then if I want to gather a list of public-address values from every unit, that would need to be added per my branch, then each unit would need to send the gathered data across the peer relation to the leader?
<marcoceppi> blahdeblah: so there are a few things - one are you sure you want to use the juju-info interface? or are you creating a new interface?
<blahdeblah> marcoceppi: I can't see any reason not to use juju-info, as long as it works.  I just want to gather a list of all the public-addresses of all the units associated.
<blahdeblah> There might be a better way to do that.
<blahdeblah> e.g. does the subordinate charm automatically get the public-address of the associated primary charm?  If so, I may be able to just ask the peer relation for it.
<marcoceppi> blahdeblah: it does not.
<marcoceppi> blahdeblah: so you don't want to add this to juju-info
<blahdeblah> marcoceppi: So what do I want? :-)
<marcoceppi> blahdeblah: one min otp
<blahdeblah> no worries
<marcoceppi> blahdeblah: so what's your end goal?
<marcoceppi> blahdeblah: because you can't just add features to an interface
<marcoceppi> esp the juju-info interface
<blahdeblah> marcoceppi: juju-info does actually provide public-address, from what I've been able to tell
<blahdeblah> I could be wrong, though
<blahdeblah> marcoceppi: End goal is to resurrect the spirit (if not the flesh) of lazyPower's DNS-Charm and implement the autogenerated part.
<blahdeblah> (as well as a provider for the Dynect DNS API, and do it all with appropriate layers & interfaces)
<marcoceppi> blahdeblah: well, technically, yes, because of spaces in juju
<marcoceppi> blahdeblah: but practically, unit-get public-address will be the same on the primary and the subordinate
<blahdeblah> marcoceppi: I don't understand that "technically..." part
<marcoceppi> blahdeblah: ohh, this sounds cool - though I'd wish juju just grow dns natively
<blahdeblah> I actually think charms are a better place for it
<blahdeblah> Because then they're user-customisable and don't need compiled code
<marcoceppi> blahdeblah: because we have net-spaces in Juju, I think public-address was added since you can bind a netspace to the relation
<marcoceppi> blahdeblah: let me check something. if it exists in 1.25 and 2.0 then it's safe to add to the relation
<blahdeblah> marcoceppi: I have no plans to target 2.0 yet, but I guess I do want forward compatibility with it
<marcoceppi> blahdeblah: well the idea is - if it exists in 1.25 and 2.0 (public-address in relation data) I don't see why we can't have it in the juju-info interface layer
<blahdeblah> yeah - if it works I'll drop you a PR from my branch
<blahdeblah> marcoceppi: The primary goal is to have DNS work fully automatically given a very small amount of configuration on a subordinate charm, and have the elected leader update Dynect without the end user having to touch anything when you add or remove units.
<blahdeblah> marcoceppi: And a secondary goal of me actually understanding how layers & interfaces work.
<marcoceppi> blahdeblah: first goal sounds fucking awesome
<blahdeblah> I'm kind of more motivated by the 2nd goal ;-)
<blahdeblah> (although, I do hate editing DNS records, too ;-)
<marcoceppi> blahdeblah: sadly, in 1.25, only private-address exists
<blahdeblah> marcoceppi: So does that mean there's no way to get at the public-address at all?  Because it's been stored in juju and reported in juju status since forever?
<marcoceppi> blahdeblah: but, you could/should create a peer relation, and each unit can run `unit-get public-address` and the leader can just get those addresses
<blahdeblah> That's possibly marginally easier
<marcoceppi> blahdeblah: the subordinates live on the same host, and so unit-get will work as if you were on the primary
<blahdeblah> Is that exposed in charmhelpers as well as bash?
<marcoceppi> blahdeblah: even if juju-info had public-address, you'd still have to use a peer relation
<marcoceppi> blahdeblah: yup
<marcoceppi> blahdeblah: scope: container is a super special type of relation, it basically means that communication will only happen between this unit and its counterpart, unlike standard relations
<marcoceppi> blahdeblah: you've always needed a peer ;)
<blahdeblah> Yeah - I knew I would.  Otherwise there would be no way for any one unit to know about all the others.
<marcoceppi> blahdeblah: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html#charmhelpers.core.hookenv.unit_get
<blahdeblah> yeah - found that
<blahdeblah> marcoceppi: I didn't follow that last part about scope: container, though.
<marcoceppi> blahdeblah: it just reaffirms what you've said before, that you've always needed a peer. The reason is because of what scope:container means. Typically, in juju, every unit of each side of the relation has a channel of communication with each other. scope:container does not, it only has a channel of communication with the unit it is attached to physically
<marcoceppi> blahdeblah: so you couldn't, from one subordinate unit, via juju-info, query the private-address of another unit's primary service unit
<marcoceppi> blahdeblah: whereas, in non scope:container situations, you could
<blahdeblah> OK - makes sense
<blahdeblah> Thanks a lot for your help; I think that's clarified it to the point where I might be able to make something vaguely coherent next week when I work on this. :-)
<blahdeblah> marcoceppi: ^ Just in case you looked away somewhere :-)
<marcoceppi> blahdeblah: cool, feel free to ping if you have questions!
<blahdeblah> marcoceppi: much appreciated :-)
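A minimal sketch of the approach marcoceppi outlines above, assuming a hypothetical peer interface (called 'dns-peer' here) whose set_address()/all_addresses() helpers are made up for illustration:

```python
from charms.reactive import when
from charmhelpers.core.hookenv import unit_get, is_leader


@when('dns-peer.joined')
def publish_address(peers):
    # unit-get public-address returns the same value on a subordinate as on
    # its primary, since they share the host.
    peers.set_address(unit_get('public-address'))   # hypothetical helper


@when('dns-peer.joined')
def update_dns(peers):
    if not is_leader():
        return
    addresses = peers.all_addresses()               # hypothetical helper
    # ... the leader pushes the full address list to the DNS provider here
```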
<simonklb> could anyone recommend one or more charms that follow the very latest patterns?
<simonklb> as a newbie it's hard to know which charms are up to date and which are using old ways
<lathiat> simonklb: generally i would suggest looking at the openstack and bigdata charms
<lathiat> simonklb: and then look into layers, as that is newer
<lathiat> i'm not aware of any specific charms that are a better example, hopefully someone else has some ideas, maybe marcoceppi
<simonklb> lathiat: thanks!
<simonklb> is it possible to combine @when and @when_not ?
<simonklb> nvm, it is, neat :)
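For anyone else wondering, stacking the two decorators looks like this (state names are just examples):

```python
from charms.reactive import when, when_not, set_state


@when('database.available')
@when_not('myapp.configured')
def configure(db):
    # runs only while 'database.available' is set and 'myapp.configured'
    # is not; setting the state below keeps it from running again
    set_state('myapp.configured')
```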
<jamespage> gnuoy, hmm long queues upstream to get stuff landed is not awesome...
<simonklb> was the api-endpoints command removed from juju in a recent update?
<simonklb> I'm getting: ERROR unrecognized command: juju api-endpoints
<jamespage> urulama, hey - around? I'm struggling with publishing bundles to the charm store...
<urulama> jamespage: hey
<urulama> jamespage: what error do you get?
<jamespage> urulama, well I don't get one, using the push/publish commands
<jamespage> charm push . cs:~openstack-charmers-next/openstack-base-trusty-mitaka
<jamespage> worked ok
<jamespage> and so did
<jamespage> charm publish ~openstack-charmers-next/openstack-base-trusty-mitaka-0
<jamespage> however I don't see them on jujucharms.com
<urulama> jamespage: it's private, set only to ~openstack-charmers-next, if you havent granted it to "everyone"
<jamespage> urulama, oh I thought read public was the default
<jamespage> urulama, letme fix that
<urulama> jamespage: publish doesn't change permissions
<jamespage> urulama, okies...
<urulama> jamespage: ok, i see the charm now
<urulama> sorry, bundle
<urulama> uf, we need to add series to bundles to avoid this, i know the work has already started
<jamespage> urulama, yeah - setting --acl read everyone
<jamespage> urulama, I just pushed an update which did push ok but I got a
<jamespage> ERROR cannot add extra information: unauthorized: access denied for user "james-page"
<jamespage> at the end of the push operation
<urulama> hm, thought that was resolved already
<simonklb> charm test seem to be removed as well, or am I missing something?
<simonklb> ERROR unrecognized command: charm test
<jamespage> urulama, hmm
<jamespage> urulama, so I'm seeing dupes of bundles I've switched from bzr ingestion to direct publishing here:
<jamespage> https://jujucharms.com/u/openstack-charmers-next/
<jamespage> urulama, I also see a lot of 'access denied' messages for push and publish which go away if I keep re-running the commands...
<urulama> jamespage: it seems there's a charm store unit in production that is misbehaving, and when ha-proxy switches to it, you get an error. we'll ask webops to solve it
<jamespage> urulama, thanks!
<urulama> jamespage: is ~openstack-charmers-next/bundle/openstack-base-43 published and ~openstack-charmers-next/bundle/openstack-base-40 ingested?
<jamespage> urulama, yes
<urulama> jamespage: would you please do "charm publish ~openstack-charmers-next/bundle/openstack-base-43" ... i'd like to see if ingestion changes that pointer from revision 43 to 40 every time it runs
<jamespage> urulama, ok done
<gnuoy> jamespage, giving these three the once over when you have a sec?
<gnuoy> https://review.openstack.org/#/c/307643/
<gnuoy> https://review.openstack.org/#/c/307564/
<gnuoy> https://review.openstack.org/#/c/307387/
<gnuoy> s/giving/would you mind giving/
<jamespage> gnuoy, yes
<gnuoy> ta
<urulama> jamespage: yep, that's the case. ingestion overrides manual publishing. that'll be a high priority bug. https://github.com/CanonicalLtd/jujucharms.com/issues/250
<jamespage> urulama, awesome
<jamespage> thanks
<jamespage> urulama, would removing the original branches help in this case?
<urulama> jamespage: yes it would
 * jamespage goes to do that then
<urulama> jamespage: you'll have to do "charm publish" again to reset the pointer to the revision that you want
<jamespage> urulama, ac
<jamespage> k
<jamespage> gnuoy, those all look reasonable
<jamespage> gnuoy, I see you have functionally tested them - thank you
<gnuoy> thats how I roll
<jamespage> gnuoy, I don't see the need to wait for the full recheck on these ones - amulet does not do upgrade tests anyway
<gnuoy> jamespage, good point
<jamespage> gnuoy, ok landing when the queue catches up...
<gnuoy> ta
<jamespage> gnuoy, queue appears to have caught up - all last night's approved changes have now landed...
<stub> I use amulet to do upgrade tests by telling it to deploy an old known-good revision then running 'juju upgrade-charm --switch' myself
<gnuoy> stub, fwiw those changes were around upgrading the packages rather than the charm itself but thanks for the pointer
<Garyx> Does anyone here know if there is documentation on gettings juju 2.0 to work with MAAS 2.0
<jamespage> Garyx, that's known not to work right now - dev team are working on it
<jamespage> Juju 2.0 with MAAS 1.9 is OK
<Garyx> oks, been looking around where it actually says that juju 2.0 is not working with maas 2.0 so been banging my head to a rock a little with that one.
<stub> I'm going to use an action to upgrade packages, and I think Amulet has grown action support recently.
<stub> But I guess I'd need two ppas to write an integration test for that.
<jamespage> beisner, for some reason juju 2.0 beta-1 was the default juju on lescina
<jamespage> I've pushed it back to 1.25.x for now
<simonklb> anyone else here running juju 2.0 beta4 and is having troubles with the testing?
<simonklb> it seems it still looks for environment.yaml but environments are called models now
<simonklb> ah I see it's still under development https://github.com/juju/amulet/issues/116
<stub> simonklb: As far as I can tell, testing does not work with 2.0 at the moment as amulet depends on juju-deployer and juju-deployer is only working against the 1.x series.
<stub> simonklb: There may be unpackaged versions around but I don't know how much further you will get.
<simonklb> actually amulet master isn't even up to snuff: https://github.com/juju/amulet/blob/master/amulet/helpers.py#L168
<stub> simonklb: That code path may not be called if the version is being sniffed (although that would still stick you with the juju-1 vs juju executable issue)
<simonklb> yea it's unfortunate, because I was told to start with juju 2.0 right away - but without testing it's going to be difficult
<stub> simonklb: Yes, you are stuck with just unit testing (which means I'm stuck on 1.x, since I have a lot of integration tests to keep happy)
<stub> simonklb: I think the problem is expected to be fixed by the time you get to writing integration tests :)
<simonklb> well it's also a convenient way to test your charm while you're writing it, instead of deploying it in a real environment
<simonklb> maybe there is some way to make that easy with your normal juju deployment too?
<stub> integration testing isn't convenient due to the setup and teardown times, but I agree you want it once you get to a certain point.
<simonklb> how would you go about testing simple relations?
<simonklb> for example fetching host and port from mysql
<stub> You could mock that sort of thing, but I agree that is best as an integration test against a real model. But first, you can do unit tests of the helpers and logic used by your relation hooks which can help the slow integration test work go much smoother.
<simonklb> the problem is that I'm not entirely sure what every relation returns, maybe that is documented somewhere?
<stub> The interfaces are documented in the charms that implement them, and it is of variable quality. The reactive framework and relation stubs are trying to address that problem, but it is early days for that.
<stub> 'juju debug-hooks' is great for exploring, as you can poke the relation interactively.
<simonklb> ah, thanks!
<simonklb> right now I'm looking to build a reactive charm, from what I've gathered you're supposed to use the @when and @when_not decorators to execute when the charm enters different states
<simonklb> however, the normal install, start and stop hooks etc still look like they are required, right?
<stub> You usually have an @hook('install') to kick things off, yes. The trick is to keep the @hook decorated handlers to a minimum, and hand off to the @when decorated handlers as soon as possible.
<simonklb> yea, then I feel I'm on the right track at least :)
<stub> @when('config.changed') has recently replaced many uses of @hook('config-changed') too
<simonklb> nice nice
<simonklb> but to get the @hook decorator, do I still need to get the charm helpers package?
<simonklb> or can I get that from the reactive package somehow?
<stub> No, you import the decorator from charms.reactive
<simonklb> great
<simonklb> might be a good idea to include that in the charm create boilerplate, since they automatically include @when('X.installed') but not @hook('install')
<stub> I haven't seen that boilerplate yet.
<stub> simonklb: oic. Yes, that boilerplate doesn't need the @hook
<simonklb> I assume it's this one: https://github.com/juju-solutions/template-reactive-python
<simonklb> so I can run it without the install hook?
<stub> The handler is invoked because the state is not set (@when_not('foo.installed')), and at the end of the handler it sets the foo.installed state so it doesn't get reinvoked a second time.
<stub> yes, you can run without the install hook
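Putting stub's explanation together, a sketch of the pattern the reactive boilerplate relies on - no install hook at all, just a state that gets set once ('myapp' is a placeholder name):

```python
from charms.reactive import when, when_not, set_state


@when_not('myapp.installed')
def install():
    # dispatched on any hook while the state is unset (including the very
    # first install hook), so an explicit @hook('install') isn't needed
    # ... install packages / unpack resources here
    set_state('myapp.installed')


@when('myapp.installed', 'config.changed')
def reconfigure():
    # re-render configuration whenever the charm config changes after install
    pass
```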
<simonklb> I think I had some error like "missing hook: install" or similar when I deployed it before
<simonklb> I'll give it another try
<stub> You need to 'charm build' it before it can be deployed. That adds a lot of the missing boiler plate, by pulling in the layers you declare in layers.yaml
<simonklb> that might have been it, because now it runs :)
<marcoceppi> simonklb: https://jujucharms.com/docs/devel/developer-getting-started and you should check out kubernetes charm or any of the bigdata charms on https://bigdata.juju.solutions
<simonklb> marcoceppi: any eta on juju 2.0 support in amulet?
<marcoceppi> simonklb: it should be there, tvansteenburgh has the latest on that
<simonklb> marcoceppi: it looks like it's still looking for environments and not models
<beisner> jamespage, gnuoy - wow upstream merge queue/activities seem to be taking many hrs
<simonklb> is there any newer version than the master on github?
<tvansteenburgh> simonklb: you need the latest deployer and jujuclient
<simonklb> tvansteenburgh: where can I get them? ppa or source?
<tvansteenburgh> simonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
<tvansteenburgh> simonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient
<simonklb> tvansteenburgh: thanks
<simonklb> tvansteenburgh: still getting the same error - it's trying to fetch the environment.yaml from the old juju home path
<simonklb> looking at the amulet code - this seems to be where I end up https://github.com/juju/amulet/blob/master/amulet/helpers.py#L168
<simonklb> I also saw that there was an issue for this: https://github.com/juju/amulet/issues/116
<tvansteenburgh> simonklb: paste the traceback please
<tvansteenburgh> simonklb: i see the problem
<tvansteenburgh> simonklb: my juju2 branch was never merged :/
<marcoceppi> simonklb: that is features missing in amulet for juju 2.0, not amulet supporting juju 2.0
<tvansteenburgh> simonklb: okay it was just merged
<simonklb> yea sorry for being unclear
<marcoceppi> tvansteenburgh: weird, I reviewed it and never merged it
<tvansteenburgh> simonklb: please repull master and try again
<tvansteenburgh> marcoceppi: np
<simonklb> tvansteenburgh: seem to be working now, thanks!
<cory_fu> Can I get a review of CLI for layer options: https://github.com/juju-solutions/layer-basic/pull/58
<urulama> jamespage: if you want to get rid of revision 40 in /u/openstack-charmers-next (which is marked as current development charm), you'll have to do "charm publish ~openstack-charmers-next/bundle/openstack-base-43 --channel=development" which will stop listing revision 40 as alternative development charm
<jamespage> urulama, ack
<urulama> same for any duplicated stuff that you want to get rid of (not duplicate for development revisions)
<jamespage> urulama, I see something niggly for charms published with series in metadata
<jamespage> urulama,
<jamespage> cs:~openstack-charmers-next/ntp
<jamespage> is one of those; but I need to write bundles which are backwards compat with 1.25.5
<jamespage> which needs cs:~openstack-charmers-next/xenial/ntp
<jamespage> which is resolvable, but I can't push a bundle with that in it
<urulama> jamespage: on the phone, gimme 10min
<jamespage> urulama, np
<jamespage> rockstar, we decided that placing control plane components in lxd containers on compute nodes running nova-lxd was a no-no right?
<rockstar> jamespage: yes, at least for right now.
<jamespage> rockstar, okay
<rockstar> If we want to spend the time to fix that bug, we can probably support that.
<jamespage> rockstar, later
<jamespage> rockstar, crazy thing is that it works fine with lxc containers on 1.25
<jamespage> cause lxc != lxd from a storage perspective...
<rockstar> As I recall, the problem was more of a charm specific problem, right? We're messing with the storage underneath AFTER we created some containers.
<ReSam> good morning!
<ReSam> is it possible to migrate the state server to a different host?
<cory_fu> kwmonroe, kjackal: Did you guys see my PR for the puppet layer?  https://github.com/juju-solutions/layer-puppet/pull/2
<kwmonroe> on it cory_fu
<kjackal> cory_fu you are an artist! Nice work!
<bdx> layer-puppet-peeps: I've already worked out a lot of the kinks ... see here -> https://github.com/jamesbeedy/layer-puppet-agent
<cory_fu> Dude.  How did we miss that.
<cory_fu> bdx: Does that layer support masterless puppet?
<bdx> it doesn't currently, although it could very easily  .... I should add a flag for masterless
<jcastro> bdx: you're in portland right?
<jcastro> http://www.devopsdays.org/events/2016-portland/
<bdx> jcastro: yea
<bdx> jcastro: crazy! I'm all about it!
<jcastro> ok, holla at me if there's like a hotel cost or something
<bdx> jcastro: perfect, I could use a night out on the town ... the site is kindof wimpy though ... where/how do I sign up?
<bdx> ooooh nm 'propose'
<bdx> got it
<bdx> charmers: what am I doing wrong here -> https://github.com/jamesbeedy/interface-memcache/blob/master/requires.py
<marcoceppi> bdx: what error are you getting/what are you expecting?
<bdx> marcoceppi: I am not even getting an error, I'm just not getting anything returned from memcache_hosts()
<jamespage> beisner, hmmm
<jamespage> beisner, working out the release procedure with git/gerrit
<jamespage> I was kinda expecting to delete the stable branch and then re-create it
<jamespage> but apparently deleting branches is beyond my powers...
 * jamespage thinks...
<mgz> jamespage: can you move branches?
<icey> beisner: jamespage any chance on getting that c-h change merged?
<firl> anyone know the password/user combo for curtain installs with MAAS? I am trying to diagnose something and I forgot
<beisner> thedac, this looks ready to land.  ready?  https://review.openstack.org/#/c/307480/
<thedac> beisner: yes, please
<bdx> marcoceppi: got an error for ya -> http://paste.ubuntu.com/15935177/
<marcoceppi> bdx: interesting
<bdx> marcoceppi: heres where/how I'm using it -> https://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py
<bdx> https://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py#L131-142
<jamespage> mgz, not sure
<jamespage> like a rename?
<mgz> jamespage: yeah, I'd expect the perms on a rename to be the same as delete, but maybe not?
<mgz> jamespage: also, unrelated, I'm interested in any results/issues you have with testing using the 'lxd' name in bundles with 2.0
<jamespage> mgz, well that generally worked ok
<jamespage> mgz, apart from the fact that juju managing lxd containers and the lxd charm we have for nova-lxd changing storage config underneath it rather exploded...
<beisner> jamespage, yah i've not jumped into how upstream manages release tags and branches.  how goes?
<beisner> gnuoy, trusty-icehouse n-api looks good now on metal;  cholcombe we're now back to the block devices things and I've got that caught by the tail on metal atm.
<cholcombe> beisner, sweet, can i log in and look?
<beisner> cholcombe, http://pastebin.ubuntu.com/15935632/
<cholcombe> beisner, what's the ceph yaml file look like that you deployed with?
<beisner> cholcombe, http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/baremetal7-next.yaml
<cholcombe> so it should've just used sdb
<cholcombe> as partition of sdb is mounted
<beisner> that's what it does everywhere but trusty-icehouse
<cholcombe> hmm
<kjackal> cory_fu, kwmonroe: Do we want the bigtop plugin to be a subordinate of ANY other charm (juju-info interface) or only the ones implementing the hadoop-plugin interface, or both? (I vote for only the ones having the hadoop-plugin interface)
<cory_fu> Definitely NOT juju-info.  :)
<cory_fu> Yes to hadoop-plugin
<cory_fu> Though, if we're going to extend that to support other services, I guess we'll need to rename it from "hadoop" plugin
<cory_fu> But we can do that down the road
<cholcombe> beisner, can you pgrep ceph-osd ?
<cholcombe> that's the function i believe it's calling to determine if it should block on no block devices found
<beisner> empty-handed, no process
<cholcombe> that's the problem
<cholcombe> something happened that prevented ceph-osd from starting
<cholcombe> can you dump the logs?
<beisner> cholcombe, fwiw if you're after logs, we've got all logs for all units on all jobs
<cholcombe> yeah we might want to just keep moving on testing and save the logs for me to poke at
<cholcombe> beisner, something weird seems to be happening with the hardening function
<cholcombe> so it started working on osdizing the disk and then it seemed to have stopped only to think later that the /dev/sdb is in use and bails
<cholcombe> beisner, i'm not entirely sure after it formats the disk why it seems to bail and then come back later confused about the disk being in use
<beisner> cholcombe, i'll leave the enviro up till this eve, will be out of pocket this afternoon.
<cholcombe> beisner, ok thanks
<cholcombe> beisner, if you need to destroy it go ahead though.  I don't want to hold the pipeline up
<beisner> cholcombe, this (bug) is pretty much the place to be, i think all other metallic issues are sorted ;-)
<stormmore> this seems odd, I have setup vlans on my MAAS server and used juju bootstrap to create my first node. However, when I ssh into the node and try and ping all the maas interfaces they are not all pingable
<cholcombe> beisner, awesome.  i'll keep digging
<cory_fu> marcoceppi: I replied to your last comment on the layer options CLI.  I'd like to get a first pass through, but if you think chaining / nested options is important for bash charms, I can add that.
<marcoceppi> cory_fu: we have until oct for this to land, so there's time
<cory_fu> marcoceppi: Eh?
<cory_fu> IBM needs this ASAP
<marcoceppi> cory_fu: this is totes a 2.2 milestone, yeah?
<marcoceppi> OH
<cory_fu> And it's in the base layer
<marcoceppi> IT"S A LAYER
<cory_fu> :)
<stormmore> OK so I am going to rule out juju being the source of my network drama by doing a basic deploy in MAAS
<freak> hi
<freak> hi everyone
<freak> i need help regarding ceph-osd
<freak> i successfully deployed openstack earlier
<freak> but yesterday i switched off my nodes
<freak> now today when i powered them up
<freak> all components came up successfully
<freak> but ceph-osd is showing error
<freak> hook failed "update status"
<marcoceppi> freak: can you try to `juju resolved ceph-osd/#` where # is the number of the failed unit?
<marcoceppi> freak: also, /var/log/juju/unit-ceph-osd-#.log would be helpful
<freak> ok let me try that...although from juju-gui i also clicked resolve options but it didn't work..ok this time i try from cli
<freak> and also share log
<bdx> marcoceppi: I found the problem with my unit ids!
<marcoceppi> bdx: \o/
<bdx> marcoceppi: when my memcache-relation-joined hook runs, it errors 'relation not found'
<bdx> marcoceppi: it runs `relation-list --format=json -r memcache:93`
<bdx> marcoceppi: when I debug-hooks into the instance and run `relation-list --format=json -r memcache:95` it succeeds
<bdx> so what am I to gather from this, that memcached charm is setting an incorrect relation id?
<freak> marcoceppi  i exeuted the command juju resolved here is the output  http://paste.ubuntu.com/15937564/
<freak> it says already resolved
<freak> but my unit is still in hook failed update status
<jose> freak: the log should give us a bit more info, can you pastebin it please? should be located in /var/log/juju/unit-ceph-osd-0.log
<jose> inside the ceph-osd machine, that is
<freak> ok jose let me take the log
<jose> thank you :)
<freak> dear jose/marcoceppi here is the log file output http://paste.ubuntu.com/15937716/
<jose> I have to leave but maybe Marco can give you a hand later :)
<freak> ok no issue
<bdx> wtf is going on in juju-dev
<bdx> someone kick that guy
<alexisb> bdx, we are working on it
<cholcombe> beisner, i think if we revert c94e0b4b on ceph-osd we should be in the clear.  I think I have an idea for a fix but there's no way I can code it and complete it by tonight
<cholcombe> beisner, do you have a bug for that ceph-osd fail?  I can link the revert task to it
<beisner> cholcombe plz see the spreadsheet
<cholcombe> beisner, ok.  I don't see anything for ceph-osd though
<cholcombe> i'll write up a new bug if that's ok?
<cholcombe> beisner, i put in the revert but of course it has a merge conflict lol
<freak> cholcombe  when i can expect that this ceph-osd issue be resolved
<beisner> cholcombe. Mojo tab. Trusty-Icehouse
<cholcombe> freak, well I can revert the change and we should be fine.  However the feature is going to need a little more work.  I didn't realize that ceph would fail the ceph osd crush create-or-move command if a bucket didn't exist
<freak> ok thanks cholcombe
<cholcombe> ceph crush internally i believe has all the default buckets even if you don't specify them but on firefly the command is failing
<firl> anyone know if there is an easy way to have multiple ceph zones with juju charms and visible in horizon?
<cholcombe> firl, what do you mean by zones?
<firl> cholcombe: availability zones
<firl> like one zone for ssd, one zone for sata spin, one for scsi
<cholcombe> ah yes.  There's a patch that we're going to work on for that but it's not ready yet
<cholcombe> firl, if you'd like to give it a shot to write the patch yourself i'd be happy to guide you
<cholcombe> firl, what i mean is I haven't started on that patch yet but I'm happy to get more help
<firl> gotcha, I don't have the cycles right now :( just didn't know if it was available yet.
<cholcombe> i have a juju action that I wrote to make new pools ( availability zones )
<firl> nice!
<cholcombe> so if you had a crush rule that was just ssd's or just spinners it could use that when creating the pool
<cholcombe> i think i wasn't properly understanding what you meant by az.  In ceph speak i think you're referring to a pool
<firl> i would be ok with different pools, or even different servers servicing Openstack Availability zones
<cholcombe> firl, that action is part of ceph-mon-next and should land in a few days in stable once 16.04 is released
<cholcombe> you'll have to hand make the crush rule for now but in the future the charm will have support to create custom rules
<cholcombe> firl, https://github.com/openstack/charm-ceph-mon/blob/master/actions/create-pool
<firl> hrmm i was thinking more of having multiple ceph installs across the cluster
<cholcombe> https://github.com/openstack/charm-ceph-mon/blob/master/actions.yaml#L49 profile name is what you're looking to pass in
<cholcombe> oh i see
<cholcombe> there's no reason you couldn't deploy the same charm twice with a different name for another ceph cluster
<firl> yeah, but not sure how the relationships would work or show up
<cholcombe> i'm not either.  i haven't tried that yet
<firl> but I think you are right, having named pools also show up would be nice
<cholcombe> they should show up and relate just fine but i'm not certain of it
#juju 2016-04-20
<firl> charmstore seems to be down
<firl> and the openstack bundle doesn't seem to work because of a keystone charm issue
<hatch> firl: it apears to be working for me - https://jujucharms.com/q/?text=apache does this not work for you?
<firl> nope http://picpaste.com/Screen_Shot_2016-04-19_at_7.44.25_PM-y01ZdjPE.png
<firl> hatch
<hatch> huh....interesting
<firl> yeah, thought it was weird also
<hatch> one moment I'll try from another network
<hatch> alright I was able to get the failure
<hatch> odd it doesn't fail for me
<hatch> one moment I'll look into this
<firl> ok
<firl> I can proxy through a diff net and help diagnose if you want
<hatch> it appears to actually be down here now too
<firl> hah ok
<firl> hope I didn't break it
<firl> ;)
<hatch> yup all you!
<firl> haha
<hatch> ;)
<firl> for keystone / OS bundle issue, should I just mail the juju list or open a bug with the system directly?
<hatch> Up to you, I'd probably create the bug then email the list :)
<firl> kk
<hatch> firl: back up
<hatch> it's
<firl> sweet
<firl> "FATAL ERROR: Could not determine OpenStack codename for version 8.1.0" - just creating a bug for this now
<beisner> hi firl
<beisner> there was a keystone SRU (package update) which requires an accompanying charm upgrade for keystone
<beisner> fyi, updated bug 1572358 re: keystone 8.1 SRU info
<mup> Bug #1572358: keystone FATAL ERROR: Could not determine OpenStack codename for version 8.1.0 <keystone (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1572358>
<firl> thanks beisner
<firl> so I just need to update the bundle to 253?
<beisner> firl, yw.  fwiw, we've got an openstack-base bundle update in flight.
<beisner> yep for exactly that
<firl> ahh ok
<firl> that makes much more sense
<firl> thanks, I really appreciate it
<beisner> firl, happy to help.  thanks for raising that.  SRUs can have domino effects.  ideally, we'd have had that bundle revved before the package landed.
<stub> bdx: I updated wal-e in my PPA to the newly released 0.9.1, which might help your radosgw issue. Although (assuming it isn't just ceph/radosgw configuration and admin issues) the more critical piece is probably what version of python-swiftclient is installed.
<urulama> can someone in ~openstack-charmers-next push an update on LP? I'd like to see if an issue in old charm store is resolved.
<urulama> jamespage: ^
<marcoceppi> urulama: jamespage was up pretty late, I wouldn't expect a ping back for a few hours
<urulama> marcoceppi: i guess any LP change in ~charmers would do as well
<jamespage> urulama, morning
<urulama> jamespage: morning
<urulama> jamespage: we've had some issues with the legacy charm store, the service was down over weekend until yesterday, and it seems it's not ingesting charms still. if you can update a charm in  ~openstack-charmers-next, it'll be easier to track what's going on
<jamespage> urulama, ack - will do shortly - tricky to update directly otherwise I'll break the whole reverse mirror from github.com
<urulama> jamespage: no need to do it directly, just so that at least one charm gets updated also on LP
<urulama> jamespage: github + sync to LP is just fine
<jamespage> gnuoy, we need to decide what to do re bug 1570960
<mup> Bug #1570960: ceph-osd status blocked on block device detection <uosci> <ceph-osd (Juju Charms Collection):New> <https://launchpad.net/bugs/1570960>
<jamespage> I think revert is the correct course of action today
<gnuoy> ok
<jamespage> gnuoy, dealing with that now
<gnuoy> jamespage, am I going mad or has the nic device naming just changed in mitaka?
<jamespage> gnuoy, ?
<jamespage> mitaka or xenial?
<gnuoy> jamespage, sorry, yes, xenial
<jamespage> gnuoy, its done the not eth thing for a while now
<jamespage> gnuoy, MAAS however remaps things back to eth based naming for consistency...
<gnuoy> jamespage, so how have any of the mojo specs been working then ? The juju set ext-port port to eth1
<gnuoy> s/The/They/
<jamespage> gnuoy, well I think it is still eth1 on a cloud instance
<gnuoy> jamespage, ah ! it isn't now!
<jamespage> gnuoy, oh fantastic...
<gnuoy> oh...fiddlesticks
<jamespage> gnuoy, updated cholcombe broken revert - https://review.openstack.org/#/c/308057/
<gnuoy> ack
<tinwood> morning gnuoy, jamespage
<gnuoy> hi tinwood
<tinwood> gnuoy, do you know much about ceph-radosgw on mitaka?  Having fixed the test 201 bug, I'm now hitting a mitaka only ceph pool bug: https://pastebin.canonical.com/154738/  -- Any pointers or thoughts?
<gnuoy> tinwood, I'd start off by gathering what data has been set between the two services with relation-{ids,list,get}
<tinwood> gnuoy, okay, will do.
<dosaboy_> tinwood: that's odd since the .rgw pool should have been created by https://github.com/openstack/charm-ceph-radosgw/blob/master/hooks/ceph.py#L232
 * tinwood is taking a look
<dosaboy> tinwood: unless infernalis is deleting that pool but i'd be very surprised
<tinwood> dosaboy, and it *only* happens on mitaka, in the amulet test.
<dosaboy> that's odd, let me kick off a mitaka run to see if I can flesh it out
<tinwood> okay, but there's a fix in test 201 needed to get the ceph-radosgw to pass: the test in test_201 needs to be flipped from if not any(ret) -> if any(ret). Thx!
<jamespage> gnuoy, whats the scope of impact of this ens2 thing on our codebase?
<gnuoy> jamespage, thinking
<jamespage> gnuoy, for context the image on the 12th did not do this, the one on the 17th did...
<jamespage> gnuoy, also - https://review.openstack.org/#/c/308057/
<jamespage> not proposing doing a full recheck for that one...
<gnuoy> jamespage, The mojo specs add a nic to the neutron-gateway and/or the nova-compute nodes to act as the external port. The tests assume that a) The ext port will be eth1 and b) The ext port will be named consistently across the units of a service. Neither of those are now true so there is no access to guests that are booted as part of testing. The second nic I added during my last test came up as ens6. Our HA testing assumes that ha-bindiface is eth0 which
<gnuoy>  is no longer true. It *may* be safe to assume that on xenial the first nic is ens2 but it doesn't feel like a safe assumption. I don't think amulet tests are affected as they don't set the ext port or perform ha deployments.
<gnuoy> We can work around the ext-port by having mojo figure out the ext ports and set it via mac addresses
<gnuoy> I'm more worried about the HA testing tbh
<gnuoy> I guess we'd need to add an ha-bindiface: auto option
<gnuoy> Those are the impacts of the cloud instance change.
<gnuoy> I think users should be ok since MAAS and lxc use the old nic naming scheme I believe ?
<jamespage> gnuoy, yeah - that at least is true
<jamespage> good-ole maas
<jamespage> gnuoy, confirmed - lxd also names eth0...
<gnuoy> tip top
<jamespage> gnuoy, what have we not tested yet which will be impacted by this
<jamespage> gnuoy, just trying to figure out whether we can realistically release tommorow with confidence...
<jamespage> ha-bindiface: auto makes sense btw
<jamespage> but not something we commit for tomorrow...
<gnuoy> jamespage, I don't know what's going on with xenial/mitaka ssl as someone has wiped the info from that cell but other than that all xenial/mitaka scenarios have been tested and have either passed using master or have passed using in-flight fixes which have since landed. What I was hoping to do today was rerun the mitaka tests using only master.
<jamespage> gnuoy, that's going to be tricky...
<gnuoy> jamespage, I think I can have a ext-port fix ready in ~ 90mins for mojo. The ha tests I may just rerun taking a guess at the device name
<jamespage> gnuoy, it does appear to be consistently ens2 in the deploy I just did
<gnuoy> ok, then that should be good enough
<stub> pkg_resources.RequirementParseError: Invalid requirement, parse error at "'-lxc==0.'"
<stub> Anyone seen anything like that?
<stub> Its from tox setting up a venv so I can run unit tests in my built charm, so it is some dependency pulled in from base layer but I have no hint as to what.
<stub> Hmm.... looks like it might be https://github.com/pypa/setuptools/issues/502
<stub> Nope, same indecipherable traceback, different problem. Pinning setuptools doesn't help.
<jamespage> urulama, ok a change has worked its way into ~openstack-charmers-next branches...
<urulama> jamespage: ok, ty. we've identified issue. it's corrupted mongo db on legacy charm store that is preventing any ingestion. we're working on it
<urulama> jamespage: worst case if it can't get resolved, this means that charms will have to be pushed manually if they need to be in the store ... but we have a few more days left for ODS before such a resolution is required
<jamespage> urulama, well...
<jamespage> urulama, trunk charms go in the same way via ~openstack-charmers atm
<jamespage> urulama, maybe we should make the switch now to using charm push/publish...
<urulama> give us a day, please, to see what can be done
<jamespage> urulama, have a charm release to get out tomorrow :)
<urulama> jamespage: ok, if we don't resolve it in the next 6-8h, then that'll be the best option
<jamespage> urulama, ok working a backup plan now then...
<dosaboy> tinwood: ive got a mitaka deploy and .rgw is missing but unfortunately there is an issue with the broker logging which makes it hard to debug
<dosaboy> tinwood: https://bugs.launchpad.net/charms/+source/ceph/+bug/1572491
<mup> Bug #1572491: ceph-broker printing incorrect pool info in logs <ceph (Juju Charms Collection):In Progress by hopem> <ceph-mon (Juju Charms Collection):In Progress by hopem> <https://launchpad.net/bugs/1572491>
<dosaboy> tinwood: i'm gonna redeploy with ^^ to see whats going on
<tinwood> dosaboy, ah, I see.  That explains why I'm not really getting anywhere with it.  I'll have a look at that too.
<jamespage> urulama, did the bad backend for charmstore get dropped yet? I'm still seeing alot of unauthorized messages
<dosaboy> tinwood: found le problem - http://paste.ubuntu.com/15944935/
<tinwood> dosaboy, interesting.  is that fixed on mitaka but not previously?
<dosaboy> tinwood: i suspect the issue is that in previous releases of rgw that pool was autocreated by radosgw-admin on install
<dosaboy> but now it is not, so we finally hit the bug
<dosaboy> its probably a pool that is not actually required by rgw but its still a bug in the charm code
<tinwood> dosaboy, ah, I think I see.  I'll stare at the code to get an understanding of that bit.
<dosaboy> tinwood: i'll submit a patch shortly
<gnuoy> jamespage, beisner I have a mp up for mojo mitaka ext-port  issue https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ext-port-rejig/+merge/292363 . It's marked as a wip until i've taken it for a spin
<jamespage> gnuoy, ack
<dosaboy> tinwood: https://code.launchpad.net/~hopem/charm-helpers/lp1572506/+merge/292365
<gnuoy> dosaboy, +1
<gnuoy> dosaboy, can you land the charmhelper fix ?
<gnuoy> You're a charmer, right?
<dosaboy> gnuoy: yeap
<tinwood> dosaboy, that's great.  I'll wait for the merge, and then submit my fix.  Thanks for your help!
<urulama> jamespage: if you're working on using charm push/publish, please make sure it won't get new revision every time cron job runs
<jamespage> urulama, no worries we won't
<simonklb> Anyone know what I'm doing wrong when using the docker layer getting this error when building the charm: OSError: Unable to load tactics.docker.DockerWheelhouseTactic
<simonklb> I tried debugging it a bit and found that it's trying to load that tactic for both the docker layer and the basic layer
<simonklb> So it's like it remembers tactics from earlier layers and tries to load those in the other layers as well
<icey> dosaboy the ceph/ceph-osd merge that you judst asked for a unit test for, there is a file full of tests for that function in charmhelpers...
<dosaboy> icey: yeah sorry i revoked my comment after i realised it is a sync
<dosaboy> icey: but you have new -1 re your commit message format :)
<icey> dosaboy: yeah, working on it now :-)
<dosaboy> tx
<icey> jamespage dosaboy commits are re-upped
<beisner> gnuoy, jamespage - wow yah that late eth dev naming thing in xenial is a real suckerpunch lol
<beisner> gnuoy, flipping osci to run on your branch for a few validation runs @ X
<beisner> thx for the mp
<gnuoy> beisner, ta
<jamespage> beisner, I have bundles tested ready for charm store tomorrow for trusty-mitaka xenial-mitaka and lxd
<jamespage> lxd will need a tweak for devices landing
<jamespage> but I'm blocked on charm store ingestion atm
<jamespage> beisner, urulama is investigating but we may need to switch to charm push/publish for tomorrow - working that as a backup plan atm
<beisner> jamespage, ack.  fyi we did update o-c-t alongside the gerrit review.  and that's all passing a-ok.
<jamespage> beisner, awesome
<jamespage> beisner, I also managed to get Nova-LXD running on an all-in-one under LXD on my laptop....
<jamespage> parity with KVM ftw...
<beisner> sweet
<jamespage> beisner, do you run tests directly from lescina for the bare-metal lab, or do you just access MAAS directly from a bastion?
<beisner> jamespage, osci-slave/10 is pinned to the metal lab.  i never do my jujuing from lescina directly fwiw
<jamespage> beisner, that's probably wise...
<beisner> osci has its own oauth, etc
<beisner> jamespage, re: charm upload -- that'd put us one step closer to nixing LP branches and syncs
<jamespage> beisner, yah
<jamespage> we really could have done with series in metadata for this but its not the end of the world...
<beisner> jamespage, yah the time box for doing that was between last Wednesday and now (ie. 1.25.5's existence)
<jamespage> dosaboy, do you have icey's reviews covered?
<dosaboy> jamespage: sure
<icey> dosaboy: doing a recheck-full right now
<dosaboy> icey: 10-4
<simonklb> what is the best way of accessing data provided by a relation interface?
<simonklb> for example when I get config-changed and I want to use the relation data again to render the configuration
<simonklb> is it possible to call the function that get the interface data again somehow?
<simonklb> or should I store the data?
<simonklb> I have a charm that uses the keystone-admin interface to retrieve the credentials
<simonklb> at the moment I use the keystone-admin interface to retrieve the credentials with the @when("identity-admin.connected") event
<simonklb> but I'm not sure what the best practice would be to get the credentials again - for example when some other config values are changed and I need to render the configuration file
<jacekn> anybody know if "scope" for juju interface layers has to match on both sides? I'd like to have the provide side "global" (same data sent/received to all services) but on the require side "SERVICE" seems like a better fit
<lazyPower> jacekn - doesn't have to be the same scope on opposing sides of the relationship
<jacekn> thanks
<lazyPower>  simonklb - interfaces provide some level of caching for you already. The conversation data is stored in unitdata, so if you're referencing the conversation scope directly, you can re-use those values, directly off the interface.
<lazyPower> simonklb - in other words, no need to maintain your own cache unless thats really what you want to do :)
<simonklb> lazyPower: that's great!
<simonklb> thanks
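A rough illustration of what lazyPower describes: an interface layer's conversation data is persisted in unitdata, so the charm can read it back off the interface in any later handler. The class, relation name, and keys below are assumptions loosely modelled on typical interface layers, not the actual keystone-admin interface:

```python
from charms.reactive import RelationBase, scopes, hook


class KeystoneAdminRequires(RelationBase):
    scope = scopes.GLOBAL

    @hook('{requires:keystone-admin}-relation-changed')
    def changed(self):
        if self.conversation().get_remote('service_password'):
            self.set_state('{relation_name}.connected')

    def credentials(self):
        # Safe to call from any later handler (e.g. after config-changed):
        # the conversation is serialized in unitdata between hook
        # invocations, so no separate cache is needed in the charm.
        conv = self.conversation()
        return {
            'username': conv.get_remote('service_username'),
            'password': conv.get_remote('service_password'),
        }
```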
<simonklb> btw, did you see what I wrote about the docker layer before?
<lazyPower> I'm not certain, its been undergoing a lot of churn these days
<lazyPower> refresh me?
<simonklb> basically when I try to build the charm using the docker layer I'm getting "OSError: Unable to load tactics.docker.DockerWheelhouseTactic"
<simonklb> and from the looks of it the cause of the error is that it tries to load the docker tactics from the basic layer path as well
<simonklb> it's very possible that I've just configured something incorrectly though
<lazyPower> simonklb nope :(
<lazyPower> you got bit by our latest work in master
<lazyPower> we were investigating a fix for offlining docker-compose and its required deps in the wheelhouse, but it requires a custom tactic, which has some bugs in the current version of charm-tools. Let me double check those commits were reverted
<simonklb> lazyPower: ah, well that's something you have to deal with when you're living on the bleeding edge :)
<simonklb> let me know if you know how to fix it, otherwise it would be great if I could follow an issue tracker somewhere so that I can be notified when it's fixed :)
<lazyPower> simonklb - yeah but I really should have vetted that more before i threw that up in tip :) or i should adopt tagging / pointing interfaces.juju.solutions @ tags when i rev
<simonklb> no worries!
<lazyPower> simonklb https://github.com/juju-solutions/layer-docker/issues/46
<jamespage> gnuoy, https://review.openstack.org/#/c/308372/
<jamespage> tested ok locally for me
<jamespage> on xenial...
<lazyPower> I'll get a patch out for that after i drop out of this hangout
<jamespage> | wsrep_incoming_addresses     | 10.0.8.54:3306,10.0.8.105:3306,10.0.8.178:3306 |
<simonklb> lazyPower: thanks!
<beisner> gnuoy, approved @ https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ext-port-rejig/+merge/292363
<beisner> thx again!
<gnuoy> np
<thedac> gnuoy: see my last post on https://bugs.launchpad.net/charms/+source/heat/+bug/1571830 Should we revert just the relation_set bits? It only affects upgrading from stable to next
<mup> Bug #1571830: Heat has status blocked missing database relation <heat (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1571830>
<thedac> or trigger a rerun of shared-db-joined?
<gnuoy> thedac, can we discuss after the standup? I'm failing to get my head around it atm
<thedac> sure, no problem
<simonklb> lazyPower: my first attempt at using the unitdata wasn't successful - however I noticed that the @when("identity-admin.connected") function was invoked every time the "config-changed" event fired
<simonklb> any idea why that is?
<lazyPower> if the state is set, the decorated method will be triggered
<jamespage> thedac, that ha change also broke IPv6 support...
<thedac> jamespage: ok, I'll check that as well
<simonklb> lazyPower: does this happen every time a hook is fired? or what makes it trigger the decorated functions?
<simonklb> is there some periodic interval that executes the current state over and over?
<lazyPower> simonklb - so in reactive, the idea is to work with synthetic states. There's a bus that executes every decorated method at least once, and if the state changes - the bus re-evaluates and re-calculates what needs to be executed
<lazyPower> so yeah its basically an event loop, that knows when its states have changed, and then inserts/removes methods to be invoked appropriately.
<lazyPower> when its executed everything it had in its execution list, it ends context until the next hook is invoked
<lazyPower> simonklb - did that help clarify?
<simonklb> I see, do you still think I should use the cached values or would it be ok to simply use the fact that it will be in the state where the relation data can be fetched every time?
<lazyPower> the latter. Always opt for pulling the data right off the wire vs caching. Caching means you're responsible for keeping your cache of a cache in sync
<lazyPower> as we dont really go over the wire anymore :) its all serialized in unitdata in the conversation objects.
<simonklb> great, thanks!
<simonklb> I'm starting to see the reasoning behind getting away from hooks and only working with states
<simonklb> you probably want to keep track of data changing or not though, so that you don't have to reconfigure your charms every time?
<lazyPower> there's a decorator for charm config, but relation data... i dont think we have any indicators that the relation data has changed, no
<simonklb> but could the unitdata be used for that?
<simonklb> comparing the previous relation data to the current one?
<lazyPower> it could. cory_fu may already be tracking if relation data has changed in the conversation bits
<cory_fu> simonklb: I would recommend using data_changed: https://pythonhosted.org/charms.reactive/charms.reactive.helpers.html#charms.reactive.helpers.data_changed
<simonklb> cory_fu: awesome, thanks
<gnuoy> jamespage, I see your percona change has Canoical CI +1, I feel like thats good enough to land it.
<jamespage> gnuoy, yes
<jamespage> agreed
<gnuoy> kk
<cory_fu> lazyPower: It's not tracked automatically, because we expect the interface layers to provide an API on top of the raw relation data, and the data from the API is what should be passed in to data_changed
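A short example of the data_changed() helper cory_fu links to; the interface's credentials() method and the render_config() helper are assumptions for this charm, not part of the library:

```python
from charms.reactive import when
from charms.reactive.helpers import data_changed


@when('identity-admin.connected')
def write_config(keystone):
    creds = keystone.credentials()        # assumed interface-layer API
    if data_changed('keystone.credentials', creds):
        # only re-render and restart when the credentials actually differ
        # from what was seen on the previous invocation
        render_config(creds)              # hypothetical helper in this charm
```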
<freak_> hi
<freak_> hi everyone
<freak_> yesterday i raised the issue regarding ceph-osd
<freak_> any resolution uptill on that?
<freak_> upon startup ceph-osd showing hook failed update status
<freak_>  
<cory_fu> lazyPower: Still having issues with that custom tactic, eh?
<lazyPower> i havent had a chance ot circle back to bens patch
<cory_fu> ah
<marcoceppi> aisrael: are you also going to patch https://github.com/juju/charm-tools/issues/190 ?
<aisrael> marcoceppi: yup, I'll take it
<jamespage> gnuoy, beisner: does this - http://paste.ubuntu.com/15952999/
<jamespage> look like a reasonable set of changes for the stable branches post release?
<beisner> jamespage, ack those are the stable bits to flip wrt c-h sync and amulet tests
<jamespage> beisner, so creating the stable branches doe not require a review
<jamespage> beisner, so I think our process looks like
<jamespage> 1) create branches
<jamespage> 2) resync git -> bzr
<jamespage> 3) proposed stable updates as above
<jamespage> they can work through over time...
<beisner> jamespage, feel like slipping a rm tests/*junu* ?
<beisner> juno of course
<jamespage> beisner, did we not do that already?
<beisner> we've only blacklisted it from running.  we've not done a full sweep
<beisner> of reviews
<jamespage> beisner, ah
<jamespage> beisner, did you see my comments re not re-using the current 'stable' branch
<jamespage> but going for a 'stable/16.04' branch instead
<beisner> jamespage, yah.  we'll have some automation and helper updates to make for that i think.
<jamespage> beisner, yes
<beisner> right now we're binary on stable||master
<jamespage> beisner, yup
<jamespage> beisner, I did have some thoughts on how we might make this easier
<jamespage> beisner, but they rely on us moving the push to charmstore post-commit for all branches...
<jamespage> which we should do anyway
<jamespage> beisner, for now /next and /trunk charm branches on launchpad will remain analogous to master||stable
<beisner> ++charm-pusher to our list of things we need, and we shall be wise to work in catch/retry/thresholds in the upload/publish cmds.
<beisner> jamespage, we can trigger that as a new job when gerrit 'foo merged' messages are heard.
<beisner> so rather than a full pass scheduled thing, it'll just tack onto the end of the dev process
<jamespage> beisner, that would be ++ for me
<urulama> jamespage, beisner: so, we're restoring the old broken DB for legacy store, but that'll take 10h it seems, so even if (and it's not certain) that works, ingestion will only start tomorrow again
<jamespage> urulama, ok
<jamespage> urulama, will check in first thing and then make a decision on process for our charm release...
<urulama> jamespage, beisner: but everything seems borked, so, we redirected legacy charm store to jujucharms.com and turned off ingestion. it just might be its end, just a bit too soon
<urulama> jamespage: ok, will let you know first thing in the morning
<skay> hey, I upgraded my laptop to xenial, and have aliased juju-1 to juju since I have scripts that call juju commands (and I use mojo)
<skay> when I ran a spec, I got an error message about there being no current controller
<skay> I think that would be something in juju-2, right?
<skay> I guess having a zsh alias isn't going to work
<skay> is there a proper workaround for this?
<skay> instead of my hacky one
<LiftedKilt> will model creation via the gui be fixed in tomorrow's release?
<cholcombe> how do you force remove a service in a failed status without destroying the machine ?
<cholcombe> freak_, is wondering this ^^
<freak_> :)
<lazyPower> cholcombe - thats not so easy to do. best bet is to juju destroy-service and juju resolved the failures all the way down
<cholcombe> lazyPower, yeah I know this is tricky
<lazyPower> there's a reaping setting on machines too, i believe by default, it will reap the machine once the service is off it
<cholcombe> lazyPower, freak_, has a deploy with several lxc's on metal
<cholcombe> i'd love to say force destroy the machine but that will cause collateral damage
<lazyPower> well i should amend that statement to say all services/containers are off of it.
<lazyPower> yeah
<lazyPower> so long as there's containers on that metal it'll keep the machine around if you destroy && resolve spam it
<cholcombe> well all he did was reboot the machine and it got stuck in this error state :-/
<lazyPower> jinkies
<lazyPower> if you recycle the agent it doesn't un-stick it?
<cholcombe> i'm not sure
<freak_> either i destroy unit or destroy service
<freak_> it remains stuck in hook failed state
<freak_> doesn't get removed
<lazyPower> freak_ - is there a relation in error state that is keeping it around?
<lazyPower> thats the likely culprit
<freak_> error is the hook failed "update status",,, otherwise no error
<freak_> not even on single link or relation
<beisner> jamespage, before we can do the charm upload in the flow though, i'd like to see the full set of commands known-to-work to do the upload and publish.  i've got some bits and pieces but not the whole picture i think.
<icey> freak_: have you tried to mark it resolved? `juju resolved ceph-osd/0`
<freak_> yes , i tried from GUI as well as from cli, but no use,,still in hook failed state
<freak_> icey , it says : ERROR cannot set resolved mode for unit "ceph-osd/0": already resolved
<icey> freak_: what juju version are you running?
<freak_> icey ,  1.25.3-trusty-amd64
<marcoceppi> urulama: yeah, we're still going to need a 30-day period of notice before we actually kill ingestion
<icey> lazyPower: how would freak_ go about recycling the agent?
<urulama> marcoceppi: well, if the old service is dead and we can't bring it back, then, ingestion dies with it
<marcoceppi> urulama: this is less than ideal.
<urulama> marcoceppi: yep. we'll know when repairing process finishes
<marcoceppi> urulama: thanks
<marcoceppi> urulama: we'll have a public beta of new review queue out next week, if this comes back we can then do the 30 day phase out for the month of may
<urulama> marcoceppi: we'll see tomorrow EU early morning, and try to restore it asap ... but in the meantime, manual publishing is the only way
<marcoceppi> urulama: ack
<lazyPower> icey : there's a service entry for it - /etc/init/jujud-unit-<service>-<number>
<lazyPower> icey: so stop/start that service
<icey> awesome, thanks lazyPower; freak_ have you tried that, and if not, can you please?
<freak_> icey , here is the output of that unit file  http://paste.ubuntu.com/15954684/
<freak_> can you please guide me further,,,
<lazyPower> freak_ - service jujud-unit-ceph-osd-0 restart
<freak_> ......still in hook failed state
<lazyPower> freak_ - you're only interested in destroying the ceph-osd service on the machine correct?
<freak_> for the time being ..yes...main goal is to delete it and install ceph-osd again
<lazyPower> ok, are you attached to debug-hooks on the unit?
<lazyPower> if not, can you be?
<ReSam> I'm trying to deploy ceph-radosgw to a container - but it is stuck at: "Waiting for agent initialization to finish" - how can I trigger it?
<freak_> u mean run the command juju debug-hook
<freak_> it simply takes me to the node 0
<lazyPower> freak_ - yes, juju debug-hooks ceph-osd/0
<freak_> that takes me to the cli of node 0
<lazyPower> freak_ - now, cycle the juju agent, and you should be placed in another tmux session with hook context.
<lazyPower> or juju resolved --retry ceph-osd/0
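A rough sketch of the two-terminal workflow being described here, assuming juju 1.25 and the trusty/upstart agent naming mentioned above:

    # terminal 1: attach to the unit so the next hook runs inside a tmux debug session
    juju debug-hooks ceph-osd/0
    # terminal 2: re-queue the failed hook so it fires inside that session
    juju resolved --retry ceph-osd/0
    # or, from the machine itself, bounce the unit agent
    sudo service jujud-unit-ceph-osd-0 restart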
<freak_> tried both......
<freak_> http://paste.ubuntu.com/15954803/
<freak_> http://paste.ubuntu.com/15954807/
<lazyPower> freak_ no change in the debug-hooks session?
<freak_> no, the last output was of start and stop which i shared
<freak_> after this no msg appeared
<lazyPower> I'm not certain what to recommend then if you cannot trap a hook, or resolve the service's hooks getting caught in error until it successfully removes.
<lazyPower> freak_ - it would be helpful to collect logs on this and file a bug. icey if you have any further things to try?
<icey> lazyPower: freak_ I don't have any more ideas, agree with lazyPower about grabbing logs and opening a bug
<freak_> let me share log with you guys regarding this service
<freak_> may be that can help
<beisner> dosaboy, i suspect cli difference for infernalis radosgw-admin cmds in the ceph-radosgw amulet test.  http://pastebin.ubuntu.com/15955129/   re: https://review.openstack.org/#/c/308339/
<beisner> cholcombe, icey fyi ^
<icey> beisner: dosaboy that wouldn't surprise me, we're working on something that'll let us stop using the cli so much to send these commands to ceph
<beisner> dosaboy, frankly that cmd isn't used to inspect anything (Yet), the test is just checking that it returns zero.  i'd be ok with rm'ing that from the test.
<beisner> via #comment for later revisiting of course
<freak_> icey, lazyPower , check these logs:
    ubuntu@node0:/var/log/juju$ sudo cat unit-ceph-osd-0.log
    2016-04-17 16:09:50 INFO juju.cmd supercommand.go:37 running jujud [1.25.5-trusty-amd64 gc]
    2016-04-17 16:09:50 DEBUG juju.agent agent.go:482 read agent config, format "1.18"
    2016-04-17 16:09:50 INFO juju.jujud unit.go:122 unit agent unit-ceph-osd-0 start (1.25.5-trusty-amd64 [gc])
    2016-04-17 16:09:50 INFO juju.network network.go:242 setting pr
<freak_> https://gist.github.com/anonymous/e82d5d143f67397d2f6667e1313776b1
<lazyPower> simonklb - Just landed a fix for #46, that should unblock you :)
<beisner> dosaboy, fyi imagonna ++patchset on your change and recheck full
<arosales> kwmonroe: did you have a bundle you wanted some extra testing on?
 * arosales has aws and lxd on beta5 bootstrapped atm
<beisner> icey, http://docs.ceph.com/docs/infernalis/man/8/radosgw-admin/
<beisner> according to that `radosgw-admin regions list` should still be valid
<beisner> are the docs stale?
<beisner> or is our binary fuxord?
<beisner> seems one of the two
<kwmonroe> yeah arosales!  if you don't mind, could you fire this off on lxd? https://jujucharms.com/u/bigdata-dev/bigtop-processing-mapreduce/bundle/1
<kwmonroe> we had to work around some networking issues and i'm curious how the slave->namenode relation works on lxd.
<arosales> sure, waiting for realtime-syslog-analytics to come back, and then I'll fire off that bundle
<kwmonroe> thanks!
<arosales> kwmonroe: thanks for the bundle :-)
<icey> they could be, we found an issue with the erasure coded pool stuff a few days ago
<beisner> icey, ack.  deploying, will poke at it first hand
<beisner> -> back in a bit
<beisner> icey, yah that ceph manpage is cruft.  http://pastebin.ubuntu.com/15955807/
<beisner> and i always chuckle at `radosgw-admin bucket list`
<icey> beisner: that doesn't surprise me
<beisner> icey, bucket list appears to be broken in infernalis as well
<beisner> 2016-04-20 20:15:37.510473 7f12f1ed1900  0 RGWZoneParams::create(): error creating default zone params: (17) File exists
<beisner> icey, dosaboy - i suspect the infernalis packages revved or landed -after- the ceph-radosgw mitaka amulet tests initially passed their fulls.
<icey> beisner: xenial/mitaka is not jewel
<beisner> icey, yah.  i'm talking infernalis here.
<icey> now*
<beisner> it was jewel when these tests initially flew though ,yah?
<beisner> or hammer or some cephy thing ;-)
<beisner> added patchset @ https://review.openstack.org/#/c/308339/
<beisner> that passes in a one-off run, we'll let it churn on a full rerun
<icey> probably infernalis, if it was over a week ago
<jamespage> beisner, rockstar: can you take a look at https://review.openstack.org/#/c/308489
<jamespage> beisner, icey: xenial/mitaka has been in jewel rc's for several weeks now
<jamespage> its possible an rc change behaviour...
<beisner> jamespage, yah but ceph-radosgw full amulet hasn't had a reason to run in that timeframe i think.  and i wasn't forcing full reruns out of the blue yet (we are now).
<jamespage> beisner, yup
<jamespage> quite...
<beisner> special.  :-)
<beisner> yay for ++unit tests jamespage
<jamespage> beisner, always like to add a test...
<jamespage> or three :-)
<jamespage> beisner, tested OK  for multiple config-changed executions post deployment...
<beisner> okie cool
<arosales> so I am being dense here, but what's the syntax to juju deploy a <bundle> from a personal namespace?
 * arosales looking at https://jujucharms.com/docs/devel/charms-deploying
<arosales> my combinations are working atm, http://paste.ubuntu.com/15956415/
<arosales> sorry my combinations aren't working atm, http://paste.ubuntu.com/15956415/
<beisner> icey, onto the next fail on rgw test_402.  seems like swift client usage adjustments will be necessary
<arosales> hmm juju deploy https://jujucharms.com/u/bigdata-dev/bigtop-processing-mapreduce/bundle/1 seemed to do the trick
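For reference, a sketch of the equivalent charm-store id form for that bundle; the owner, name and revision are taken from the URL above, and the assumption is that both forms resolve to the same bundle:

    # the URL form that worked
    juju deploy https://jujucharms.com/u/bigdata-dev/bigtop-processing-mapreduce/bundle/1
    # the cs: id form (assumed equivalent)
    juju deploy cs:~bigdata-dev/bundle/bigtop-processing-mapreduce-1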
<jamespage> rockstar, remind me to do a minor refactor on the amulet test bundle for lxd...
<jamespage> but later.. not today..
<rockstar> jamespage: specifics?
<jamespage> rockstar, well it's testing using nova-network
<jamespage> rockstar, thats crazy talk...
<jamespage> :-)
<rockstar> jamespage: heh
<jamespage> that said it does avoid the need for another two services...
<beisner> jamespage, yes it needs neutron foo.  that was my january here's-something instead of nothing test, which is intended to be extended with network and an ssh check to the fired up instances.
<beisner> jamespage, nova-compute amulet needs the same love
<jamespage> beisner, okies...
<jamespage> add them to the list of debts....
<jamespage> beisner, I need to crash - can you keep an eye on https://review.openstack.org/#/c/308489/
<jamespage> I did a recheck - the amulet test failure appears unrelated to my changes, and not proposing to fix the world tonight...
<beisner> jamespage ack will do & thx
<thedac> OSCI +1'ed. Anyone for a review https://review.openstack.org/#/c/307492/
<thedac> wolsen: dosaboy: This is in your interests ^^
<thedac> beisner: jamespage: ^^ Really anyone :)
<LiftedKilt> will model creation via juju-gui still be broken in tomorrow's release?
<beisner> wolsen can you give that a review plz?  tia.  & thx thedac
<wolsen> thedac, do I missed that comment - beisner will do
<wolsen> s/do/doh
<thedac> wolsen: thanks
<thedac> wolsen: it is just reverting a mistaken change from earlier
<wolsen> thedac: ok
<wolsen> thedac, I'm slow but its +2 +W
<thedac> wolsen: sweet. Thank you.
<wolsen> thedac, oh no - its a thank you
<thedac> heh
<thedac> True that would have been a problem
<wolsen> lol
<arosales> kwmonroe: the bigtop-processing-mapreduce bundle is looking good on AWS on juju 2.0 beta5
<arosales> http://paste.ubuntu.com/15957451/
<arosales> kwmonroe: I can't get  lxd to give me machines in 2.0-beta5 so I am still testing there
<arosales> kwmonroe: juju fetch <unit-id> should return additional information correct?
#juju 2016-04-21
<beisner> dosaboy, wolsen, thedac or any os-charmers around:  this is all clear, ready for landing/review:  https://review.openstack.org/#/c/308339/
<wolsen> beisner: let me take a look
<beisner> wolsen, woohoo thx
<beisner> cholcombe, trusty-icehouse metal is back in the green. :-)
<jamespage> gnuoy, tinwood: good morning!
<jamespage> urulama|____, morning
<urulama> jamespage: hey. ingestion still not working (we're failing to restore parts of DB) ...
<gnuoy> jamespage, morning
<leon-tseng> Hi, is there anybody successfully use OpenStack as provider in juju 2.0 ?
<tinwood> jamespage, morning :)
<jamespage> tinwood, gnuoy, cargonza: pluggin details into https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1604
<gnuoy> jamespage, argh, still says Immutable Page for me
<jamespage> gnuoy, you need to login...
<gnuoy> ...I...have...logged...in....
<gnuoy> I...will...logout..and...login..again...
<gnuoy> Ok, it helps if you let u1 tell the wiki about your team membership
<jamespage> urulama, I'm reading that as well have to use push/publish for our charm release today?
<urulama> jamespage: yes
<tinwood> gnuoy, yes, I just found a little tick-box on the login to get the page to be editable.  However, it's not like Gdocs - we can't all edit it at once?
<jamespage> tinwood, no its a wiki :-)
<gnuoy> dosaboy, I've added a "Ceph Backup charm" section if you fancy adding some wordage ?
<jamespage> gnuoy, +1
<jamespage> add nova-lxd and opendaylight options as well
<leon-tseng_> Does nova-compute-lxd work well in Canonical OpenStack now?
<gnuoy> jamespage, do you think I should call out the switch to keystone in apache for liberty+ or is that just an implementation detail?
<jamespage> gnuoy, implementation detail
<jamespage> leon-tseng_, on xenial yes...
<urulama> jamespage: when do you want to push it out? do you have another 2h? it looks like we've fixed it
<urulama> (currently looks like it's ok)
<jamespage> gnuoy, actually...
<gnuoy> yes...
<jamespage> urulama, we won't push until later today so that's find
<jamespage> gnuoy, I think we should mention that
<gnuoy> ok
<jamespage> gnuoy, it would impact someone who has monitoring setup outside of juju for example
<gnuoy> ack
<jamespage> gnuoy, how are we on testing?
<jamespage> gnuoy, if we're all done, I can cut the 16.04 branches now and re-open master for development...
<gnuoy> jamespage, reading backscroll it looks like there were some issues with the Xenial + Mitaka + SSL test, but I've just run it myself and it worked...
<jamespage> gnuoy, ack
<gnuoy> jamespage, I think from the non-ceph side we're good
<jamespage> gnuoy, I see 0 outstanding reviews...
<gnuoy> jamespage, I'd like beisner to give the official green light but I see no harm in cutting the branches and we can backport any late breaking changes in the unlikely event that there are any...
<jamespage> gnuoy, agreed - we'll hold the actual push to the charm store until later today anyway
<gnuoy> kk
<jamespage> we have to update the git->bzr migration tool to make that happen
<jamespage> gnuoy, beisner: fell at the first hurdle
<jamespage> can't create a stable/16.04 as 'stable' already exists...
<jamespage> gnuoy, beisner: ok we'll not have stable branches for a bit - worked a migration plan out with yolanda
<jamespage> gnuoy, beisner: ok implemented
<jamespage> note - anyone who does a pull will need to "git remote prune origin"
<jamespage> to get rid of the old stable branch references
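A small sketch of that cleanup for anyone with an existing clone, using the branch names from the discussion:

    git fetch origin
    git remote prune origin   # drops the remote-tracking ref for the removed 'stable' branch
    git branch -r             # should now list stable/16.04 rather than stable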
<jamespage> gnuoy, can you cut a new stable branch for charm-helpers please?
<gnuoy> sure
<gnuoy> jamespage, done
<jamespage> gnuoy, awesome
<jamespage> beisner, gnuoy: when we are ready to release:
<jamespage> https://github.com/openstack-charmers/migration-tools/pull/3
<jamespage> then git->bzr sync will pull from the new charm branches for stable/16.04
<gnuoy> jamespage, fantastic, thank you for prep'ing the release
<jamespage> gnuoy, this does not cover hacluster - will need todo that old-skool
<gnuoy> jamespage, I'll prep hacluser.
<jamespage> gnuoy, ta
<gnuoy> jamespage, beisner, hacluster mp is ready  https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/1604/+merge/292493
<jamespage> gnuoy, +1 ack thanks
<jamespage> gnuoy, I have commits ready on all of the stable/16.04 branches to switchover to stable ch and stable testing...
<jamespage> but won't submit until we do our first sync from stable/16.04
<gnuoy> ok...
<jamespage> otherwise they will all just fail...
<jamespage> urulama, how are we looking for ingestion-based release today?
<urulama> jamespage: unfortunately not good. manual publishing remains the only way.
<jamespage> urulama, just prepping for push - how do we push/publish an 'official' charm?
<jamespage> urulama, i.e. one that was promulgated
<urulama> jamespage: ok, which one will be first, let's do it step by step
<jamespage> urulama, ok lets use ceilometer as an example
<jamespage> urulama, currently cs:trusty/ceilometer
<jamespage> oh I think I answered my own question
<urulama> jamespage: ok. is it a multiseries charm?
<jamespage> charm push . cs:trusty/ceilometer
<urulama> noooo
<jamespage> urulama, well it is but series in metadata landed way too late to get it in
<urulama> jamespage: charm push . ~openstack-charmers/trusty/ceilometer
<jamespage> urulama, ok so the current owner...
<jamespage> check
<urulama> jamespage: yes
<urulama> jamespage: that'll give you revision
<urulama> jamespage: publish that
<urulama> jamespage: it should be promulgated already
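Putting urulama's steps together, a minimal sketch of the manual push/publish flow for one charm, assuming the charm source is the current directory and that revision 42 is whatever number the push actually returns:

    cd ceilometer
    charm push . ~openstack-charmers/trusty/ceilometer
    #   -> url: cs:~openstack-charmers/trusty/ceilometer-42
    charm publish cs:~openstack-charmers/trusty/ceilometer-42
    # no promulgation step needed here: the charm is already promulgated, so the
    # promulgated cs:trusty/ceilometer pointer follows the published revision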
<subbie> freak yo !
<jamespage> urulama, I have a handful of new charms to promulgate as well
<jamespage> urulama, what's the promulgation command these days?
<urulama> jamespage: you don't have to do it if a charm is already promulgated
<freak_> subbie , welcome to the group
<jamespage> urulama, these are not
<urulama> ceilometer?
<urulama> ceilometer is ...
<jamespage> urulama, well yes - but I have 26 charms to push out today
<jamespage> 4 of them are not promulgated just yet...
<urulama> jamespage: ok, the ones that are not, you should ping marcoceppi
<urulama> jamespage: he has access to promulgate them
<urulama> jamespage: when you're left with only non-promulgated ones, ping rogpeppe1, he'll help you promulgate them
<jamespage> urulama, http://paste.ubuntu.com/15963061/ look ok to you?
<urulama> jamespage: charm grant is not needed
<jamespage> urulama, default read acl is everyone right?
<urulama> jamespage: it's set to that already
<urulama> jamespage: it's not per revision
<jamespage> urulama, for existing charms only, or does that apply to new charms as well?
<urulama> jamespage: correct, just for existing
<urulama> jamespage: that script looks ok
<jamespage> urulama, ack - thanks for looking...
<urulama> jamespage: i'd run it on one charm though first :)
<jamespage> urulama, yup
<jamespage> urulama, I suspect all our xenial series pushes will need promulgation as well...
<jamespage> as we're not series in metadata yet
<urulama> jamespage: all series have promulgation, it's not per series, it's per username + charm name
<urulama> jamespage: so no need to promulgate xenial ones
<jamespage> urulama, ok should be good there then - thanks for confirming
<gnuoy> jamespage, do you know of a bug tracking the alarming ceilometer charm issue? I see nothing against the charm
<jamespage> gnuoy, that aodh is not included by default right?
<gnuoy> jamespage, right
<jamespage> gnuoy, I don't think there is a bug
<gnuoy> jamespage, I've added harening to the doc. If thats ok we should probably ask wolsen to add a few words
<gnuoy> * Hardening
<deanman> Hi, i just deployed munin and it seems that the charm is failing to configure properly the access rights. Anyone used this particular charm in the past?
<ReSam> is there any more progress on https://bugs.launchpad.net/juju-core/+bug/1469193?
<ReSam> I'm having the same problem, since my controller has 3 interfaces, and it always picks the wrong one
<mup> Bug #1469193: juju selects wrong address for API <kvm> <local-provider> <lxc> <network> <sts> <juju-core:Expired> <https://launchpad.net/bugs/1469193>
<jcastro> office hours in 2 hours!
<jcastro> hatch: any of you ui folks wanna give us an update today?
<jcastro> cory_fu: kwmonroe you guys got anything you been working on to show during office hours?
<jamespage> gnuoy, idea for next cycle - action on keystone charm to get you a novarc
<marcoceppi> jamespage: you need something promulgated?
<jamespage> marcoceppi, not just yet
<jamespage> but we will
<jamespage> urulama, final check - ingestion?
<urulama> jamespage: still dead as a dodo
<jcastro> cherylj: do you want to join us for office hours today?
<beisner> gnuoy, jamespage - ready, set, green light go?
<jamespage> beisner, I think gnuoy and I are +1 - just waiting on you, as 3 makes a release...
<beisner> jamespage, gnuoy - yep i don't have any outstanding blockers :-)
<beisner> fyi, i'll be working on ci mods for the new stable branch pivot today
<jamespage> beisner, okies - so the first thing todo is
<jamespage> beisner, https://github.com/openstack-charmers/migration-tools/pull/3
<jamespage> beisner, you ok with that change?
<beisner> man too bad we can't just cut the lp sync right now yah
 * gnuoy holds on to his seat
<jamespage> beisner, still need it for amulet...
<beisner> yes, that is the only thing afaik
<jamespage> beisner, indeed...
<beisner> jamespage, those changes look sound
<jamespage> beisner, hit merge then :-)
<beisner> jamespage, poked
<jamespage> beisner, I also have https://code.launchpad.net/~james-page/openstack-charm-testing/16.04-release/+merge/292497 for you to review
<jamespage> beisner, ok - next step run the sync
<jamespage> I'll do that now
<beisner> dood sweet jamespage ! -- one thing on my afternoon list you've already proposed.  thx!
<jcastro> beisner: buddy, you got time for office hours today? In 1.5 hours?
<jamespage> beisner, I can sed with the best of them...
<beisner> gnuoy, we'll need to do similar for os-mojo-specs ^
<beisner> jcastro, i'm a maybe-to-probably-not with release day activities and post-charm release CI mod needs stacking up.
<gnuoy> beisner, say what now?
<beisner> gnuoy, we'll need to update mojo spec stable foo
<jcastro> beisner: ok, I'll just summarize the mitaka charm release notes .. which I can find ... ?
<bbaqar> hey guys .. using lxds .. when i run sudo -u nova lxc list  i get /root/.config/lxc/config.yml: permission denied  ... any idea wihch charm configures this file?
<gnuoy> beisner, s!stable!stable/16.04! you mean ?
<beisner> jcastro, they're still cooking.  i imagine we'll publish something late today
<beisner> yyyyah gnuoy ;-)
<gnuoy> beisner, on it!
<jcastro> ack, do you have a draft I can stumble through?
<beisner> gnuoy, rock on
<jcastro> by still cooking you probably mean "I haven't started yet." :)
<gnuoy> https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1604
<beisner> jcastro, ^ :)
<jcastro> thanks
<jcastro> marcoceppi: ok so the community team guys inadvertently announced a Q+A for the same time as our office hours but didn't see us on the calendar, how do you feel about a combo session?
<marcoceppi> jcastro: seems odd, we'd probably just be sitting quietly while everyone asks about desktop stuff
<jcastro> yeah, but the thing is, we announced so it's not like I can move it
<jcastro> I guess I'll just ask them to delay an hour
<jcastro> gnuoy: beisner: man, you guys did a TON this cycle
<jcastro> it would take me an entire session just to glaze over the new stuff
<gnuoy> I'm tired just thinking about it
 * jcastro hugs
<gnuoy> :-)
<cory_fu> jcastro: kwmonroe is out today.  I think we'd want to talk about the conversion of the big data charms to layers, and the benefits to maintainability that provides, the in-progress Bigtop charms, and maybe the in-progress HA work on the Apache Hadoop charms.  (Both in-progress items are close to being done, so I think I can get them deployed.)
<cory_fu> Though I haven't deployed the Bigtop charms yet.
<jcastro> cory_fu: yeah that would be awesome, we're going live at 11am EDT if you can join us that would be great.
<gnuoy> beisner, https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-1604/+merge/292526
<cory_fu> Uh, any idea how to fix this?  ERROR failed to bootstrap model: cannot start bootstrap instance: no "xenial" images in us-east-1 with arches [amd64 arm64 armhf i386 ppc64el s390x]
<gnuoy> beisner, I'll do something a bit cleverer soon, i.e. checkout the most recent stable branch, then there should be no need for an MP for each release
<jamespage> bbaqar, hey - you've caught us right in the middle of release - I would suspect that it's probably the lxd or nova-compute charms, given they use the lxc command
<beisner> gnuoy, ack thx
<jamespage> beisner, gnuoy: all of the github charms are trusty+xenial right?
<jamespage> hmm
<beisner> jamespage, p, t, w, x
<jamespage> beisner, pxc is not p I think
<jamespage> love codes...
<beisner> right, it isn't.  good catch
<jamespage> I think that's the exception...
<jamespage> all others are everywhere
 * beisner rechecks, sec..
<jamespage> hacluster is trusty+xenial
<gnuoy> Is that true for hacluster?
<gnuoy> yeah
<beisner> jamespage, *odl* is diff too
<jamespage> yeah no precise testing I guess...
<beisner> jamespage, lxd, x only
<jamespage> right
<jamespage> ok I need a map of this
<beisner> other than that, p t w x is the norm
<beisner> yah we do
<jcastro> ERROR failed to bootstrap model: waited for 10m0s without being able to connect: Permission denied (publickey,password).
<jcastro> anyone know what would cause that on the lxd provider?
<marcoceppi> jcastro: looks like you've got permission denied.
<jcastro> marcoceppi: I know that. :) I am just wondering how I got into this state.
<jcastro> jose: you said you wanted to host today?
<jcastro> cherylj: I need to update the Juju release notes section for ubuntu, I am assuming it'll be beta4 for 16.04 today and then updated to final in a future date?
<cherylj> jcastro: it should be beta5.  balloons - can you confirm?
<jcastro> yeah because I see beta4 on my system still
<balloons> beta5 is in proposed still I believe
<jamespage> beisner, ok git->bzr ran - lxd and ceph-mon skipped due to missing /trunk branches - created those by hand ...
<jamespage> from /next branches...
<jamespage> beisner, next I'll put the reviews up for the stable branch changes
<beisner> jamespage, i'd have expected cinder-backup to have the same issue.  or did we release that previously?
<beisner> ah i think it was part of 16.01
<jamespage> beisner, ok this will create some UOSCI noise - but does not gate release of charms - just housekeeping...
<beisner> jamespage, actually i think i'll plug its ears for a few hrs
<cherylj> balloons: so we'll have beta4 in xenial?
<jcastro> I will say beta4 for now
<balloons> I'm trying to find its status.
<balloons> Tests should pass and it should get into the archive proper
<balloons> welp, it was rejected looks like.. hmm
<beisner> jamespage, ok, osci will ignore gerrit msgs until i flip that back on.  just don't want a kazillion amulet jobs firing while we scoot things around.
<jamespage> beisner, ok leave it off for now - seeming merge conflicts that don't make sense...
<beisner> small impact is that if anyone devs @ master, we'll just have to do rechecks after we light it back up
<jamespage> beisner, sure
<cherylj> jcastro: and I guess I should go to the office hours to let people know that they're going to be getting xenial automagically now  :)
<cherylj> jcastro: when is it?
<jcastro> cherylj: ~30 minutes, top of the hour
<jcastro> what do you mean by xenial automagically?
<cherylj> jcastro: if you don't specify a default series, today it's going to be xenial
<cherylj> happy release day
<jcastro> !!
<jcastro> but if xenial/charm doesn't exist, will it fall back to trusty/charm?
<jamespage> jcastro, how many xenial charms do we have in charmstore?
<jcastro> not many
<cherylj> jcastro: no, no fallback
<jamespage> urulama, is the charmstore api eventually consistent?
<jamespage> I have trouble uploading and then setting info on the charm I just uploaded...
<urulama> bac: ^
<urulama> sorry OTP
<jcastro> cherylj: ok so what happens when I `juju deploy mediawiki`
<bac> jamespage: are you getting an error on the push?
<jamespage> urulama, ack
<jamespage> bac, yeah
<jamespage> bac, https://github.com/openstack-charmers/release-tools/blob/master/push-and-publish
<jamespage> script we're using for the publish
<jamespage> bac,  I see different problems
<jamespage> bac, sometimes
<bac> jamespage: the charm binary in the PPA has a bug.
<jamespage> bac, does the charm binary in xenial?
<bac> jamespage: if you can build your own from master it'll work.  sorry for the inconvenience.
<bac> jamespage: yes
<cherylj> jcastro: if you have a controller that was bootstrapped before today, its default series will still be trusty (unless it was specified by the user already)
<jamespage> bac, point me at master...
<cherylj> jcastro: but new bootstraps today will set their default-series to xenial if you don't override it
<cherylj> jcastro: so 'juju deploy mediawiki' will try to deploy with xenial
<jcastro> uggggggggggggg
<bac> jamespage: https://github.com/juju/charmstore-client
<cherylj> unless you have specified default-series=trusty
<bac> jamespage: go get -u github.com/juju/charmstore-client
<jamespage> bac doing so now!
<jcastro> cherylj: so what happens exactly, an error or what?
<bac> marcoceppi: ^^ ETA for charm ppa update in regards to the email i sent yesterday?
<marcoceppi> bac: I want to do the right thing and have it in xenial first
<marcoceppi> bac: so I'm applying it for an SRU
<bac> marcoceppi: ok.  guess on how long?
<marcoceppi> bac: minimum of 7 days?
<bac> ok.  fun.
<bac> thanks!
<jamespage> bac,
<jamespage> package github.com/juju/charmstore-client: no buildable Go source files in /home/jamespage/src/go/src/github.com/juju/charmstore-client
<jamespage> ?
<jcastro> jose: there?
<cherylj> jcastro: I'm still bootstrapping a new controller to verify
<bac> jamespage: looking
<jcastro> I am concerned because this is a huge change in default behavior
<jcastro> users will only be able to deploy a handful of charms, if that
<jcastro> I mean, we'd have to go re-edit all the docs, blog posts, etc.
<jose> jcastro: getting things set up
<jcastro> <3 thanks
<jose> np
<cherylj> jcastro: looks like there may be a fallback.  new bootstrap deployed mediawiki
<jcastro> ok
<jcastro> on trusty?
<cherylj> jcastro: it fell back to using trusty
 * jamespage stops holding his breath
<jcastro> ok
<jcastro> man, I was going to pass out.
<jose> jcastro: I'll have a link for you in just a sec
<jcastro> awesome! finishing up a meeting
<mbruzek> Hi jose!
<jose> hey mbruzek :)
<mbruzek> Are you going to be at our office hours hangout?
<jose> yep
<jose> getting it set up
<mbruzek> awesome
<jose> of course, if google weren't so slow...
<bac> jamespage: http://pastebin.ubuntu.com/15966511/
<mbruzek> Thanks for doing that, I just wanted to say hi, and figure out the next time I would see you
<jcastro> jose: come on come on!
<bac> jamespage: breathe....
<jamespage> bac, thanks for the reminder...
<mbruzek> jcastro:  it looks like jose is working on it
<jamespage> ;-)
<jose> you can join here https://plus.google.com/hangouts/_/hoaevent/AP36tYehhq8M1IOZr6T12q9bSs6tEn6l76nERbIE9gCP_NqwtnbK2A
<jose> jcastro: ^
<jamespage> bac, http://paste.ubuntu.com/15966593/
<jamespage> any ideas?
<bac> frankban_: ^^
<bac> jamespage: `which charm`?  `hash -d charm` ?
<bac> should be $GOPATH/bin/charm
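A minimal sketch of the from-source build bac suggests, assuming a standard Go workspace at ~/go:

    export GOPATH=$HOME/go
    export PATH=$GOPATH/bin:$PATH
    go get -u github.com/juju/charmstore-client
    # if that reports "no buildable Go source files" (as above), fetching the command
    # package directly is one workaround to try (an assumption, not from bac's paste):
    # go get -u github.com/juju/charmstore-client/cmd/charm
    which charm   # should now resolve to $GOPATH/bin/charm rather than the PPA binary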
<mbruzek> For those who want to join office hours: https://plus.google.com/hangouts/_/hoaevent/AP36tYehhq8M1IOZr6T12q9bSs6tEn6l76nERbIE9gCP_NqwtnbK2A
<urulama> jamespage: it wasn't pushed https://api.jujucharms.com/charmstore/v5/~openstack-charmers/xenial/swift-storage/meta/any
<gil> hi does anyone have the link for the juju office hours going on now please, thanks!?
<mbruzek> gil: https://plus.google.com/hangouts/_/hoaevent/AP36tYehhq8M1IOZr6T12q9bSs6tEn6l76nERbIE9gCP_NqwtnbK2A
<urulama> jamespage: https://api.jujucharms.com/charmstore/v5/~openstack-charmers/trusty/swift-storage/meta/any this one works
<jose> ubuntuonair.com if you want to watch
<mbruzek> The ntp charm is available in xenial! That is awesome. Who maintains the ntp charm?
<AuroraAvenue> QUESTION: I'm not a coder, but how do I get a 'Mediagoblin' charm going for Juju ? https://wiki.mediagoblin.org
<jcastro> AuroraAvenue: ok I'll address that after marco
<AuroraAvenue> jcsackett: np
<AuroraAvenue> whoops
<AuroraAvenue> jcastro: np
<jcsackett> happens all the time. :)
<AuroraAvenue> jcsackett: We all love you too :)
 * jcsackett laughs
<randall-buzzgen-> QUESTION: Now that Ubuntu 16.04 is out, what is the current best/easiest way for people who have never been "devops" folks or developers to get started with deploying Charms and bundles? (People new to this may want to quickly "kick the tires" and see what Juju is all about.)
<AuroraAvenue> jcastro: So basically mediagoblin and https://ghost.org/ is all that I'm looking for in charms :)  for blogging etc.
<jose> https://jujucharms.com/ghost/
<AuroraAvenue> cheers for the ghost bit, cheers.
<mbruzek> AuroraAvenue: https://jujucharms.com/q/ghost
<mbruzek> The official ghost charm is precise only. Other ones are available for trusty
<bdx> office-hours: how can I use JAAS to make charms that I develop under my namespace publicly accessible, do I need to push to a certain channel?
<jcastro> hatch: update ghost to xenial pls
<mbruzek> those are out of a personal namespace
<cory_fu> There is also an old mediagoblin charm https://jujucharms.com/u/clint-fewbar/mediagoblin/precise/1 but it has not been reviewed and doesn't look like it's been maintained for some time.
<hatch> jcastro: lol, maybe it needs to be a multi-series :)
<jcastro> bdx: https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1604
<thedac> beisner: gnuoy tinwood: If anyone has time https://code.launchpad.net/~thedac/openstack-mojo-specs/16.04-changes/+merge/292545
<bdx> jcastro: awesome!
<gnuoy> thedac, +1
<thedac> gnuoy: ta
<marcoceppi> bdx: https://jujucharms.com/docs/devel/authors-charm-store
<gil> matt I have some questions but I was not able to unmute not sure why....
<gil> ok it finally unmted - I can ask questions after corey
<mbruzek> gil:  you don't look muted to me. Type your question here and Jorge can read it
<gil> (1) I need the command to just push out the lxc-trusty-template to a maas-managed node - I saw this command somewhere but forgot to bookmark it and it's a hard one to find online.  this is just pushing out the template without pushing a charm
<gil> (2) regarding openstack austin next week, what one day would be the best one to go for what I am interested in, and who from the charm team will be at austin?
<gil> (3) need an update on doing charms that need to download secured content - for example oracle software which requires a couple of export-restriction checkboxes...
<AuroraAvenue> QUESTION: is it possible to run juju on one computer yet? in production i mean ?
<jcastro> yep
<AuroraAvenue> k , cheers.
<jcastro> you would deploy in LXD containers
<gil> corey/cory
<AuroraAvenue> jcastro: So you can run it on one server, easily ?
<bbaqar> jcastro: is it possible to run bootstrap on the node where juju-core is configured?
<jcastro> AuroraAvenue: I hope that answers your question
<AuroraAvenue> jcastro: Yes that is cool-beans, cheers.
<AuroraAvenue> "comical" Ha! freudian slip !
<marcoceppi> AuroraAvenue: ;)
<AuroraAvenue> :)
<jamespage> urulama, I must be missing something about setting meta data
<jamespage> urulama, http://paste.ubuntu.com/15966593/
<jamespage> there is a charm push just before I try to set it...
<jamespage> mbruzek, that was me (ntp) we use it in all of our bundles...
<urulama> jamespage: xenial charm doesn't exist
<mbruzek> Thank you jamespage
<urulama> jamespage: https://api.jujucharms.com/charmstore/v5/~openstack-charmers/xenial/swift-storage/meta/any {"Message":"no matching charm or bundle for cs:~openstack-charmers/xenial/swift-storage","Code":"not found"}
<jamespage> urulama, "charm push swift-storage '~openstack-charmers/xenial/swift-storage'"
<jose> gil: https://insights.ubuntu.com/event/openstack-summit-austin/ <-- you may find that useful
<jamespage> urulama, does it have to be published first?
<urulama> jamespage: no, push is first, publish next
<jamespage> urulama, yes I know
<jamespage> urulama, before I can charm set, do I have to charm publish
<mbruzek> gil: juju@lists.ubuntu.com
<urulama> jamespage: if you don't publish it, then ~openstack-charmers/xenial/swift-storage doesn't get resolved
<urulama> jamespage: charm set '~openstack-charmers/xenial/swift-storage-0' would work in that case
<jamespage> urulama, metadata is tied to revisions of a charm?
<urulama> jamespage: ~openstack-charmers/xenial/swift-storage uses stable pointer to find revision
<urulama> jamespage: you haven't published it yet, so there is no pointer, hence it doesn't know which revision to use and says it doesn't exist
<jamespage> that's all I needed to know...
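In other words, a sketch of the ordering urulama describes; the revision number and the homepage key are illustrative:

    charm push swift-storage ~openstack-charmers/xenial/swift-storage
    #   -> cs:~openstack-charmers/xenial/swift-storage-0
    # before publishing, only the explicit revision resolves for charm set:
    charm set ~openstack-charmers/xenial/swift-storage-0 homepage=https://example.com
    # once published, the unversioned id resolves as well:
    charm publish ~openstack-charmers/xenial/swift-storage-0
    charm set ~openstack-charmers/xenial/swift-storage homepage=https://example.com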
<frankban_> hatch: could you please triage https://github.com/juju/juju-gui/issues/1581 , https://github.com/juju/juju-gui/issues/1582 and https://github.com/juju/juju-gui/issues/1583 ?
<frankban_> ... and wrong channel
<urulama> :)
<hatch> frankban_: I can!
<hatch> :)
<frankban_> :-)
 * hatch reaches for the duct tape
<beisner> gnuoy, ah bugger.  our tempest testing and config scripts in o-c-t are suffering the ens2 change in xenial too.
<beisner> rockstar, zul fyi ^
<jamespage> bac, urulama: OK so I'm using charm built from master branch
<urulama> jamespage: cool
<jamespage> I've re-ordered our push-publish script to push/publish and then set metadata...
<zul> beisner: craaaap
<jamespage> and acls on new charms...
<urulama> jamespage: sounds good
 * beisner borrows duct tape from hatch
<jamespage> urulama, however I still keep hitting
<jamespage> ERROR cannot post archive: unauthorized: access denied for user "james-page"
<jose> jamespage: charm login maybe?
<hatch> :)
<jamespage> jose, I would love if that was the case
<jamespage> jose, running exactly the same command again may or may not work...
<jose> *sighs*
<jamespage> ;)
<beisner> zul, i'll be testing and proposing bundles/lxd* and profiles changes @ o-c-t
<urulama> jamespage: yeah, we have update of services in the IS queue :-/
<marcoceppi> Austin Convention Center - Level 4 - MR 12 A/B
<jose> https://insights.ubuntu.com/event/openstack-summit-austin/
<urulama> jamespage: i guess we could say "all hell broke loose" this week ...perfect timing
<jamespage> urulama, that sounds like a great synopsis
<jamespage> urulama, I'm not quite sure how I'm going to get our 96 charms pushed to the store without writing a script that dos's the API...
<AuroraAvenue> jose, Could you update the calendar for the next juju meeting? cheers
<urulama> jamespage: i think it won't ... ingestion was much more invasive
<urulama> jamespage: i'm more concerned that you'll hit the "unauthorized" errors which will be hard to track
<aisrael> brb, going to Austin
<aisrael> pop-up Austin sprint
<mbruzek> They are giving away orange box! : https://pages.ubuntu.com/orange-box.html?utm_source=Insights&utm_medium=Landing%20Page&utm_campaign=FY17_ODS_Austin_OrangeBox&
<AuroraAvenue> nice show guys.
<AuroraAvenue> jose, jcastro Well done !
<jcastro> thanks!
<jamespage> urulama, ok I have written a retry handler around all my charm commands...
<urulama> jamespage: kk, i'm also trying to pinpoint a unit that's misbehaving ...
<jamespage> urulama, ok running now
<jamespage> at least 96 charm publishing events to detect it :-)
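A rough sketch of that kind of retry handler, wrapping each charm command and retrying on the intermittent "unauthorized" failures; the wrapper is hypothetical, not the actual script from the paste:

    retry() {
        local n
        for n in $(seq 1 10); do
            "$@" && return 0
            echo "attempt $n failed: $*" >&2
            sleep 5
        done
        return 1
    }

    retry charm push . "~openstack-charmers/trusty/ceilometer"
    retry charm publish "~openstack-charmers/trusty/ceilometer-42"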
<urulama> jamespage: track them here https://api.jujucharms.com/charmstore/v5/changes/published?start=2016-04-21
<urulama> jamespage: mh, it stopped ... did script break?
<AuroraAvenue> jcastro: I'd really like to see mediagoblin as a charm, but as I say - I'm not a coder at all. Any chances of doing it yourself - if it only takes half an hour ?
<jamespage> urulama, yes - needed to refine my grep a little for when charm push fails...
<jamespage> urulama, running again now
<urulama> jamespage: ok, few more charms added
<jamespage> urulama, I seem to be spinning on charm push ceph-radosgw '~openstack-charmers/precise/ceph-radosgw'
<urulama> yeah ,that was last in store
<jcastro> AuroraAvenue: I am also not a coder, but I'll be on a lookout for someone who can help
<AuroraAvenue> jcastro: Cheers jorge - It'll really help - Is there a discourse JuJu too ?
<AuroraAvenue> https://jujucharms.com/q/discourse Can't see it.
<AuroraAvenue> seems strange, as we have http://discourse.ubuntu.com/
<jcastro> the charm is abandoned and old
<jcastro> the discourse is run by the discourse guys so it's not juju deployed
<jcastro> at the time discourse was going through a ton of change and keeping the charm up to date was difficult
<urulama> jamespage: precise/ceph-radosgw is pushed and published, but failed to update revisions. maybe it's spinning there.
<jamespage> hmm
<marcoceppi> AuroraAvenue jcastro it'd be nice, now that we have a docker layer, to re-do the discourse stuff
<urulama> jamespage: maybe it gets error and retries with publishing over and over?
<jcastro> it would indeed
<urulama> jamespage: sorry, not publishing, pushing
<jamespage> urulama, ok running again...
<jamespage> urulama, hitting those unauthorized errors alot
<jamespage> urulama, 10 tries on every charm command atm
<jamespage> one succeeded on the 10th...
<urulama> jamespage: uf, that is weird :-/
<jamespage> urulama, charm push must make a couple of api call right? sometimes the upload succeeds and then there is a
<jamespage> ERROR cannot add extra information: unauthorized: access denied for user "james-page"
<jamespage> upload succeeded == I get a url and status back...
<urulama> jamespage: correct
<urulama> jamespage: ok, precise charms are up :)
<jamespage> urulama, yup
<jamespage> marcoceppi, hey - I need the following charms under openstack-charmers promulgated please
<jamespage> openvswitch-odl
<jamespage> cinder-backup
<jamespage> lxd
<jamespage> neutron-api-odl
<jamespage> ceph-mon
<marcoceppi> jcastro: np, on it
<marcoceppi> jamespage: are they all in the stable channels, etc?
<jamespage> marcoceppi, yes
<urulama> marcoceppi: fyi, thanks to jamespage's uploads, i was able to pinpoint the unit that was giving problems, so you shouldn't see any more "unauthorized" errors
<marcoceppi> urulama: \o/
<marcoceppi> jamespage: promulgated
<jamespage> marcoceppi, ta
<jamespage> rockstar, zul: congrats you are now live - https://jujucharms.com/lxd/
<rockstar> \o/
<urulama> jamespage, marcoceppi: verified that all those new promulgated are properly resolved by charmstore, all looks good
<jamespage> urulama, yeah but tomorrow
<jamespage> urulama, I was hoping to do that pm today, but...
<jamespage> re bundles...
<beisner> nice spike, jamespage urulama - thx much!
<beisner> marcoceppi, you too ;-)
<zul> jamespage: cool thanks
<jasondotstar> qq about amulet- I'm in the middle of writing a test, and I'm attempting to make sure that I have the relation line in the code working. running into this error: ValueError: All relations must be explicit, service:relation
<jasondotstar> how can i find out what the 'legal' relation values are for a given charm?  Is it the list of 'connects from' services from the charm documentation?
<cory_fu> jasondotstar: The charm store lists the relations that a charm supports.  You can also see it in metadata.yaml
<jasondotstar> cory_fu: thx
<cory_fu> jasondotstar: E.g., on https://jujucharms.com/ganglia/ if you click See More under Connects from, you can see that ganglia-node would support :node, :ganglia-node, :master, and also from Connects to, :website and :head
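A quick way to eyeball those endpoint names locally, assuming the charm source is checked out in ./ganglia (the provides/requires/peers sections of metadata.yaml list the legal relation names cory_fu mentions):

    grep -A 10 -E '^(provides|requires|peers):' ganglia/metadata.yaml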
<beisner> jamespage, looks like after a recheck, your stable change merge conflicts are resolved.  i just triggered recheck on all of them.  if those come back +1 from infra CI, clear to land?
<beisner> thedac, jamespage - with stable proposals stabilized, i'm turning the bot back on.   stable/xx.nn amulet tests, etc., may not be quite right just yet though.
<beisner> thedac, jamespage - ok a zillion stable merges are heading in now.  c-h yaml and amulet stable bit flips.
<thedac> beisner: ok
<beisner> zul, o-c-t updated
<thedac> is there a topic: list of them?
<beisner> thedac, https://review.openstack.org/#/q/topic:16.04-updates
<thedac> thanks
<beisner> o/  i've gotta head out, back in later this eve
<beisner> thedac, those won't hit the charmstore yet naturally.  we still have the push-a-charm-when-something-lands thing to nail down.
<thedac> understood
#juju 2016-04-22
<jamespage> gnuoy, doing sweepups from yesterday's release - bugs targeted for 16.04 marked fix released; all others bumped to 16.07
<gnuoy> jamespage, fantastic, ta
<ReSam> nice job on the recent release! how do I get the new charms with xenial support to show up in my juju-gui search?
<jamespage> ReSam,  I think that should just work...
 * jamespage looks
<ReSam> jamespage: I'm only seeing ceph for trusty - but if I search on the jujucharms.com site, I find the new charm for all supported ubuntu versions
<jamespage> rogpeppe, hey - can you do a promulgation for me?  rabbitmq-server owned by openstack-charmers - supersedes the ~charms ones
<jamespage> ReSam, might just be a search issue - https://jujucharms.com/ceph/xenial
<jamespage> its there :-)
<jamespage> rogpeppe, ~charmers one rather...
<ReSam> yes - on jujucharms.com --- but not in my local juju-gui search...
<ReSam> or do I need to update the catalog or something?
<ReSam> I'm running juju2beta5 from the devel-ppa
<jamespage> ReSam, not sure tbh - not actually used that feature :-)
<jamespage> ReSam, oh thats in the juju-gui itself? urulama might be able to help there...
<jamespage> morning urulama :-)
<ReSam> jamespage: yes - my locally installed juju-gui, which shipped since juju2beta4 I think
<jamespage> ReSam, sorry - long day yesterday - brain not quite functional just yet :-)
<ReSam> jamespage: np - I'm really thankful for your hard work - makes my life a lot easier -- hopefully :)
<jamespage> ReSam, well let us know if that's not the case :-)
<ReSam> If I search for "cs:xenial/ceph-0" I can find it - but not if I only search "ceph", then only the trusty charm shows up
<jamespage> ReSam, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1604
<jamespage> might be useful for you if you've not already seen them...
<ReSam> jamespage: thanks - already skimmed through it.
<urulama> jamespage, ReSam: morning ... OTP, be back in 10min
<ReSam> jamespage: I'm having problems deploying new charms:
<ReSam>  ERROR juju.worker.dependency engine.go:526 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-ceph-21/charm: stat /var/lib/juju/agents/unit-ceph-21/charm: no such file or directory
<ReSam> the unit us stuck at: "Waiting for agent initialization to finish"
<rogpeppe> jamespage: sorry, was in a call
<rogpeppe> jamespage: i don't think i have privs to promulgate
<jamespage> rogpeppe, no problem :-)
<jamespage> rogpeppe, uh - urulama pointed me at you yesterday...
<jamespage> incase marcoceppi was not around...
<rogpeppe> jamespage: i can give you a command that you can execute to do it if you have privs
<urulama> rogpeppe: just to provide the bhttp instructions, jamespage has charmers rights, i believe :)
<rogpeppe> urulama: will do
<urulama> ReSam: looking
<rogpeppe> jamespage: first: go get github.com/rogpeppe/bhttp
<rogpeppe> jamespage: (assuming you've got a go env set up)
<rogpeppe> jamespage: then bhttp put -j https://api.jujucharms.com/charmstore/v5/$charmid/promulgate Promulgate:=true
<urulama> ReSam: indeed, series are missing. that's a regression :-/
<urulama> ReSam: i suggest in gui you search for "ceph xenial" ... first hit will provide you xenial charm
<ReSam> urulama: thanks - yes I already found it - still a bit confusing.
<urulama> ReSam: it is
<urulama> filing a bug
<ReSam> also xenial is missing in the dropdown menu for "series" in the search view
<urulama> all series information is missing
<ReSam> urulama: any idea about my other problem: "stat /var/lib/juju/agents/unit-ceph-21/charm: no such file or directory" ?
<urulama> ReSam: that is after deploying the charm?
<ReSam> yes
<urulama> for that i'll have to redirect you to jamespage and the openstack guys
<ReSam> jamespage: maybe it is proxy related? my machines do not have direct internet access...
<jamespage> urulama, I am a charmer yes...
<jamespage> rogpeppe, how do I resolve the charm id?
<jamespage> sorry being dumb
<jamespage> ReSam, hmm - that looks familiar
<rogpeppe> jamespage: what charm are you trying to promulgate?
<jamespage> are there erros in the machine log on the unit
<ReSam> jamespage: on all my machines in this file: /var/log/juju/unit-ceph-21.log (with different ids of course)
<jamespage> rogpeppe, https://jujucharms.com/u/openstack-charmers/rabbitmq-server
<urulama> jamespage: you want to move if from ~charmers to ~openstack-charmers space?
<jamespage> urulama, yes please
<urulama> rogpeppe: i think you'll have to explain how to build bhttp as well
<rogpeppe> urulama: i did, i think
<rogpeppe> jamespage: https://api.jujucharms.com/charmstore/v5/openstack-charmers/rabbitmq-server/promulgate
<jamespage> rogpeppe, awesome, ta
<rogpeppe> jamespage: i think
<jamespage> rogpeppe, gotcha on the build..
<jamespage> my go foo is good enough for that these days
<rogpeppe> jamespage: caveat: i haven't actually tried this command for real, 'cos i don't have promulgate privs :)
<rogpeppe> jamespage: but it *should* work :)
<jamespage> rogpeppe, needed a ~ for the team name
<rogpeppe> jamespage: oops, good point!
<jamespage> rogpeppe, hmm I might need to unpromulgate the charmers one first...
<urulama> jamespage: no need
<urulama> jamespage: charmstore does the switch
<jamespage> urulama, hmm
 * urulama corrects that ... *should do the switch*
<jamespage> urulama, now I broke it all...
<urulama> ?
<jamespage> urulama, https://jujucharms.com/rabbitmq-server/
<jamespage> not found...
<urulama> jamespage: what was the bhttp command that you used?
<jamespage> urulama, bhttp put -j https://api.jujucharms.com/charmstore/v5/~openstack-charmers/rabbitmq-server/promulgate Promulgate:=true
<jamespage> urulama, I also did a
<jamespage> bhttp put -j https://api.jujucharms.com/charmstore/v5/~charmers/rabbitmq-server/promulgate Promulgate:=false
<jamespage> maybe that was bad of me...
<jamespage> gnuoy, do we still need to do the hacluster charm -> charm store ?
<urulama> let's try bhttp put -j https://api.jujucharms.com/charmstore/v5/~openstack-charmers/xenial/rabbitmq-server/promulgate Promulgate:=true
<gnuoy> jamespage, looks like its still awaiting a merge https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/1604/+merge/292493 ...
<urulama> rogpeppe: isn't it "Promulgate=true" not "Promulgate:=true"?
<rogpeppe> urulama: nope
<urulama> and Promulgated:=True not Promulgate
<urulama> jamespage: bhttp put -j https://api.jujucharms.com/charmstore/v5/~openstack-charmers/xenial/rabbitmq-server/promulgate Promulgated:=true
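Pulling rogpeppe's and urulama's pieces together, the sequence that ends up working below (it requires ~charmers privileges):

    go get github.com/rogpeppe/bhttp
    bhttp put -j https://api.jujucharms.com/charmstore/v5/~openstack-charmers/xenial/rabbitmq-server/promulgate Promulgated:=true
    # unpromulgating the old ~charmers charm first is not required; per urulama,
    # the charm store switches the promulgated pointer itself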
<simonklb> anyone have had any problems with LXD containers not getting the hostname set in /etc/hosts ?
<jamespage> urulama, ok that worked...
<jamespage> urulama, still showing as owned by charmers in the UI, but its the right charm...
<urulama> jamespage: yes, wrong owner resolution. charmstore shows proper one https://api.jujucharms.com/charmstore/v5/rabbitmq-server/meta/owner
<urulama> jamespage: fix will be deployed after ODS (don't want to touch production during if not critical)
<jamespage> gnuoy, ok hacluster merged and direct pushed for trusty and xenial to the charm store
<gnuoy> thanks
<jamespage> gnuoy, https://jujucharms.com/hacluster/xenial
<rogpeppe> jamespage: ah, cool, it worked
<jamespage> rogpeppe, yes all good now
<rogpeppe> jamespage: grand
<ReSam> jamespage: any chance you remember from where my problems looks familiar? is there an open issue or maybe even a solution?
<jamespage> ReSam, it was something to do with stale charms on the controller - did you happen to use multiple models under juju 2.0?
<ReSam> jamespage: not that I know of
<ReSam> can I clean this up somehow? by deleting a cache or so?
<jamespage> ReSam, can you check the  /var/log/juju/machine-X.log file please
<jamespage> rogpeppe, does promugation for bundles work in the same way? our current openstack-base and openstack-telemetry bundles are under ~charmers - I'd like to move those over to ~openstack-charmers and switch the pointer to the promugated bundle ...
<rogpeppe> jamespage: yes, it should do
<ReSam> jamespage: seems to be ok... lots of "block devices change" messages though...
<jamespage> ReSam, I was looking for something related to hash sum mismatches
<jamespage> the machine agent downloads the charm that the unit agent then uses
<jamespage> is there anything in /var/lib/juju/agents/ ?
<ReSam> jamespage: no hash mismatch lines.
<jamespage> hmmm
<ReSam> jamespage: yes, /var/lib/juju/agents contains a folder for the machine and ceph-unit
<ReSam> jamespage: including agent.conf's for both
<jamespage> ReSam, what's in /var/lib/juju/agents/unit-ceph-21
<jamespage> ?
<ReSam> both are also running process. so that all works
<ReSam> ls /var/lib/juju/agents/unit-ceph-21/
<ReSam> agent.conf  metrics-send.socket  run.socket  state/
<ReSam> seems my state server has 3 ip addresses - and therefore 3 api endpoints. two of which are unreachable though. so I get 2 errors in the logs when it tries to connect to the wrong endpoint
<ReSam> I can see correct proxy settings in the log - so that should also be fine
<ReSam> jamespage: seems one of the downloads is misbehaving: I have this file: /var/lib/juju/agents/unit-ceph-21/state/bundles/downloads/inprogress-377015309 and inprogress-005101612
<jamespage> ReSam, this is a juju problem of some description that I've not seen before - lets see if the juju-dev team know about this...
<jamespage> dimitern, frobware: either of you two seen anything like this before with juju 2.0 betas ? ^^
<frobware> jamespage: reading scrollback
<jamespage> frobware, it maybe that the ip address that units are getting for the state server is not one they can actually connect over...
<frobware> jamespage: are the units in containers?
<jamespage> frobware, I'll have to defer that to ReSam
<ReSam> frobware: no - directly in the machine root
<ReSam> jamespage: yes - so my state server is reachable via 3 interfaces - but only 1 is reachable from the machines. I can see the machine trying to connect to the two others - and get "connection timeout" in the logs. but then I guess the third try succeeds.
<Garyx> Hey guys, anyone had any luck bootstrapping MAAS 2.0, I've tried Beta5 and Beta4 and get a runtime error
<Garyx> panic: runtime error: invalid memory address or nil pointer dereference
<ReSam> frobware: yes - looks like it is using the wrong api endpoint for downloading - although before that is uses the correct one to establish a connection: https://paste.ubuntu.com/15980208/
<frobware> ReSam: are you using a bundle to deploy? Just wanted to repro locally
<frobware> ReSam: would it be possible to share and collect some logs? Also, the output of `juju status' and `juju show-machines'
<Garyx> Is the MAAS 2.0 support still a wip?
<ReSam> frobware: no bundle - just: juju deploy cs:xenial/ceph-0 -n 5
<frobware> ReSam: ok that helps. can you share the juju status output?
<ReSam> frobware: https://paste.ubuntu.com/15980650/
<frobware> ReSam: and the machine NIC configuration? how many NICs, VLANs, et al?
<ReSam> frobware: https://paste.ubuntu.com/15980661/
<ReSam> frobware: machines have 2 active interfaces on the same ip subnet
<ReSam> state server has 3 interfaces - only one is connected to the machines (172.24.32.2)
<frobware> ReSam: and can I get access to the machine-0.log and from the machines too?
<ReSam> sure
<ReSam> frobware: you mean the state server or the machine with ceph?
<frobware> ReSam: all logs really, would be quicker than for me to keep on asking
<BrunoR> 'juju-quickstart bundle.yaml' dies with "juju-quickstart: error: unable to connect to the Juju API server on wss://x.y.z/api: 'module' object has no attribute 'default_timeout" (juju 1.25.5)
<Makyo_> Hi BrunoR - do you wind up with a stack trace from that?
<BrunoR> Makyo: http://paste.ubuntu.com/15983523/ you are welcome
<Makyo> BrunoR: Thanks, I'm digging into the code, and it looks a bit like a problem with websocket-client.  What version do you get when you run `pip show websocket-client`?
<BrunoR> Makyo: this shows an error, it looks like this modules is installed via deb-package python-websocket (0.18.0-0ubuntu0.14.04~ppa5 from http://ppa.launchpad.net/juju/stable/ubuntu/)
<Makyo> BrunoR: ah, alright, thank you.
<Makyo> BrunoR: may I please see the output from `python -c "import websocket;print dir(websocket)"`?  I'm curious as to why default_timeout is missing from the websocket module.
<BrunoR> Makyo: http://paste.ubuntu.com/15983924/
<Makyo> That's supremely weird.
<Makyo> BrunoR: I'll file a bug against quickstart and work on it.  In the mean time, you can try `sudo pip install websocket-client` and see if the version you have is overwritten by the one installed by pip.
<Makyo> (You might need to do `sudo pip install --upgrade websocket-client` due to the versions being the same)
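The check-and-workaround sequence Makyo walks through, consolidated (Python 2 era; the broken module comes from the python-websocket deb in the juju/stable PPA):

    pip show websocket-client
    python -c "import websocket; print dir(websocket)"   # default_timeout should be listed
    sudo pip install --upgrade websocket-client          # shadow the packaged copy with pip's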
<BrunoR> Makyo: pip install --upgrade moved websocket-client from 0.35.0 to 0.37.0 ~ trying `juju-quickstart bundle.yaml` again ~ looks like a problem in deb-package python-websocket?
<Makyo> BrunoR: yeah, that's what I'm seeing now that I play around with it.  Did you install quickstart from the juju/stable PPA, or from the default repo?
<BrunoR> Makyo: no, same error
<BrunoR> Makyo: `$ apt-cache madison juju-quickstart` says 2.2.4+bzr147+ppa42~ubuntu14.04.1 from  http://ppa.launchpad.net/juju/stable/ubuntu/
<Odd_Bloke> aisrael: Are you on Ubuntu and seeing https://bugs.launchpad.net/cloud-images/+bug/1573058?  Or OS X?
<mup> Bug #1573058: Ubuntu 16.04 current not booting in Vagrant (gurumeditation) <cloud-images:Invalid by daniel-thewatkins> <https://launchpad.net/bugs/1573058>
<aisrael> Odd_Bloke: OS X. I hadn't refreshed the page before I posted that comment.
<Odd_Bloke> aisrael: No worries; just making sure there's nothing more we can do. :)
<BrunoR> Makyo: my google-foo found something similar https://bugs.launchpad.net/juju-quickstart/+bug/1484158
<mup> Bug #1484158: juju quickstart fails "unable to connect to the Juju API Server" <quickstart> <vagrant> <websocket> <juju-quickstart:Triaged> <Juju Vagrant images:Fix Released> <https://launchpad.net/bugs/1484158>
<Makyo> BrunoR: aha, good catch. I
<Makyo> BrunoR: I'll tag on to that, and see if I can provide a more sensible default when things go wrong like this.
<Makyo> BrunoR: thank you
<Gil> deployed flannel units with juju, units are stuck in pending state?  Any pointers on where to look for debug and fix information?  I looked in /var/log/juju but the logs don't seem to have any errors, here are the messages (which keep updating)
<Gil> checking flannel/6 for flannel leadership
<Gil> flannel/6 confirmed for flannel leadership until 2016-04-22 16:25:13.080001473 +0000 UTC
<Gil> flannel/6 will renew flannel leadership at 2016-04-22 16:24:43.080001473 +0000 UTC
<Gil> those three lines repeat in the log every few minutes  - which looks all good afaik
<Gil> so not sure why flannel showing in pending state...
<LiftedKilt> if I ssh to a machine that is running lxc containers with charms installed, how do I view the containers that are running?
<LiftedKilt> nothing shows up in lxc list / lxc info
<jrwren> LiftedKilt: lxc-ls
<LiftedKilt> jrwren: what's the difference between lxc list and lxc-ls?
<jrwren> LiftedKilt: lxc is the lxd client which talks to the lxd sever.  lxc-ls is lower level lxc (no lxd involved) tool.
<LiftedKilt> jrwren: gotcha - thanks
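A quick way to see both views side by side, as a sketch assuming both the lxd client and the legacy lxc tools are installed on that machine:

    import subprocess

    # Containers managed by the LXD daemon (what `lxc list` queries).
    subprocess.call(["lxc", "list"])
    # Containers created with the lower-level LXC tools, which is where
    # juju-created containers show up per jrwren's explanation above.
    subprocess.call(["lxc-ls", "--fancy"])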
<BrunoR> how do 'charm push' and 'charm publish' work for bundles? I just pushed/published/granted but the bundle only partly shows up at jujucharms.com? urulama|afk?
<BrunoR> how do 'charm push' and 'charm publish' work for bundles? I just pushed/published/granted but the bundle only partly shows up at jujucharms.com? urulama|afk? marcoceppi?
<lazyPowe_> BrunoR what did you publish? i'm happy to take a look
<BrunoR> lazyPowe_: https://jujucharms.com/u/3-bruno/ or cs:~3-bruno/bundle/demo-0
<BrunoR> lazyPowe_: according to 'charm show' it should be accessible
<freak_> hi everyone
<freak_> i need help regarding instance creation in openstack
<freak_> i have created my first instance
<freak_> and it is showing error
<freak_> http://imgur.com/krBt0H9
<freak_> can you please take a look here is the picture of error msg http://imgur.com/krBt0H9
<lazyPowe_> BrunoR ok, taking a look now
<lazyPowe_> BrunoR i see your bundle up here...
<lazyPowe_> 6 services, 12 units, on 4 machines?
<BrunoR> lazyPowe_: yes
<lazyPowe_> https://jujucharms.com/u/3-bruno/demo/
<lazyPowe_> it's totally in the store then :)
<lazyPowe_> so i guess i'm not understanding what you're asking "how does push and publish work for bundles?"  it works exactly like it does for charms.  if you upload the bundle and dont set public acl's, it'll only be accessible to you (the uploader)
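A rough outline of the push/publish/grant flow described above, using the command names from this conversation; exact arguments vary by charm-tools version, so treat it as a sketch rather than the canonical invocation:

    import subprocess

    bundle_url = "cs:~3-bruno/bundle/demo"
    # Push the local bundle directory, publish the uploaded revision, then
    # open the read ACL so the bundle is visible to everyone, not just the
    # uploader.
    subprocess.check_call(["charm", "push", ".", bundle_url])
    subprocess.check_call(["charm", "publish", bundle_url + "-0"])
    subprocess.check_call(["charm", "grant", bundle_url, "everyone"])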
<BrunoR> lazyPowe_: I see an image of the services in the bundle but can't reach the README.md or deploy it.
<lazyPowe_> BrunoR - the README does look truncated, however - http://paste.ubuntu.com/15988639/
<lazyPowe_> BrunoR i was able to deploy just fine using juju 2.0 beta5, i did not test on 1.25.5 however
<cholcombe> thedac, coreycb could you give freak_ a hand?
<thedac> I'll take a look
<freak_> hi thedac
<freak_> i created my first instance in openstack from horizon dashboard
<freak_> earlier i created network, image, storage volume,,
<freak_> then i created instance
<freak_> but upon startup it's showing an error
<BrunoR> lazyPowe_: `$ juju deploy cs:~3-bruno/bundle/demo` yields 'ERROR expected a charm URL, got bundle URL "cs:~3-bruno/bundle/demo-0"'
<freak_> can you please take a look
<freak_> http://imgur.com/krBt0H9
<thedac> freak_: looking at it now. This was juju deployed?
<thedac> freak_: can I see the bundle used?
<freak_> thedac ,  i used this bundle https://jujucharms.com/u/charmers/openstack-base/bundle/40
<thedac> ok. So it is unclear from the image what precisely is failing. Can you grab the novarc and try some things from command line?
<thedac> I would start with checking juju status and make sure nothing is in error or blocking state
<freak_> yes i will,,
<lazyPowe_> BrunoR weird...
<freak_> here is the status  http://paste.ubuntu.com/15988934/
<thedac> thanks, looking
<freak_> here is the novarc  http://paste.ubuntu.com/15988999/
<thedac> freak_: so that looks good. If you have novaclient installed you might try nova show $INSTANCE_ID and see if we get any more info. And nova console-log $INSTANCE_ID
<thedac> freak_: oh, you have that in horizon.
<thedac> See the log and console tabs
<BrunoR> lazyPowe_: thx anyway
<lazyPowe_> BrunoR yeah sorry i dont know off hand why its doing that
<lazyPowe_> BrunoR what version of JUJU are you running?
<freak_> thedac ,  when i click log tab it shows "  Unable to get log for instance "9fc16f89-9d81-457b-9c93-8ac70e6f87ed"."
<thedac> ok, how about the console tab?
<freak_> and in console tab it shows console is currently unavailable
<kjackal> cory_fu: Have you ever seen this error "line 3: /usr/local/bin/charms.reactive: Permission denied"  when calling an action?
<coreycb> thedac, freak_: the nova-scheduler.log should have some more info on nova-cloud-controller
<freak_> thedac, is there any set of procedures after the openstack bundle installation or can we directly start making instances from horizon
<kjackal> cory_fu: This action works on trusty and fails on wily
<cory_fu> kjackal: No.  Can you point me to the charm that's failing?
<thedac> freak_: after creating images, networks and keystone users you should be able to launch instances
<freak_> i have created volume, image and network
<freak_> but didn't modified anything in users section
<thedac> freak_: coreycb is correct looking at the nova-scheduler.log on nova-cloud-controller is the next step.
<thedac> freak_: you are probably using admin which is fine
<kjackal> cory_fu: just a sec to push what I have
<freak_> thedac, can you specify the location of nova-scheduler.log coz its not in /var/log/juju
<thedac> /var/log/nova
<freak_> ok got it
<freak_> here is the output and it speaks a lot :)    http://paste.ubuntu.com/15989242/
<kjackal> cory_fu: this import under wily raises an exception https://github.com/juju-solutions/layer-apache-spark/blob/sys-init/actions/stop-spark-job-history-server#L5
<thedac> freak_: ok, that seems to suggest that mysql is broken
<freak_> but in juju status there's no such thing, it's in ready state
<thedac> freak_: understood. But the communication between nova-cc and mysql "appears" to be broken.
<cory_fu> kjackal: I don't think the error is coming from that
<thedac> freak_: I am going to run that bundle locally and see if I can re-create this problem.
<thedac> freak_: but that is going to take a while
<cory_fu> kjackal: Do you have a full stack trace handy?
<freak_> that would be great,,i'll wait no issue
<kjackal> cory_fu: let me look into this in more. I will get back to you if I hit a wall.
<lazyPowe_> BrunoR - i'm headed out to travel back home. Feel free to ping lazyPower  and i'll resume this when I get home if you're still around
<freak_> thedac, i have noticed that it gets the error message when it is doing block device mapping task
<freak_> and then shows max retries reached
<thedac> freak_: ok, good hint
<thedac> freak_: are you doing anything special like booting off ceph volumes? You might also look at ceph health on the ceph nodes.
<freak_> how to check that?
<thedac> juju run --unit ceph/0 'ceph health' or 'ceph status'
<freak_> thedac , here is the output  http://paste.ubuntu.com/15989760/
<thedac> freak_: ok, so ntp has not converged yet enough for ceph to be happy. When you boot an instance are you immediately attaching a volume?
<thedac> icey: cholcombe: any advice on the clock skew issue with ceph ^^
<icey> that's not a problem, it will resolve on its own
<freak_> from horizon when i click the option create instance from there i select boot from volume and then specify the volume there
<thedac> freak_: ok. So to rule out ceph can you boot an instance without booting from a volume?
<freak_> what should i select in instance boot source option?
<freak_> it doesn't allow to launch instance without any selection
<thedac> ok, yeah, I need to get this stack up to answer that, sorry
<freak_> i selected the option boot from image and selected the image then it shows this error http://imgur.com/rPY3w1u
<thedac> ok, interesting
<thedac> that still seems to be related to booting from a volume
<thedac> freak_: I am deploying now. I need to run out for lunch. I'll touch base with you in about an hour. sound good?
<freak_> ok. good
<blahdeblah> freak_, thedac: Does your ntp charm have iburst enabled?  That would speed up convergence during deploy.
<freak_> how to check?
<blahdeblah> juju get ntp
<blahdeblah> pastebin the results if there's nothing sensitive in there
<freak_> ok
<freak_> blahdeblah , here is the ntp output http://paste.ubuntu.com/15990466/
<blahdeblah> freak_: Looks like it's pretty much unconfigured; what does "juju run --service ntp 'cat /etc/ntp.conf; ntpq -pn'" show?  (Again, be sure to check for sensitive data before pastebinning.)
<LiftedKilt> is anyone else having problems with percona/mysql on xenial?
<LiftedKilt> I can't get the charms to install - when I attach to the container, there appears to be problems with the install itself
<freak_> blahdeblah ,  here is the output http://paste.ubuntu.com/15990619/
<LiftedKilt> mysql won't start
<cory_fu> Hey, all.  I'd like to get some input on an issue with how config.changed works.  During the install hook, all config options are considered "changed" because they have no previous value.  In the ibm-base layer, this is causing these handlers to be triggered: https://bazaar.launchpad.net/~ibmcharmers/layer-ibm-base/trunk/view/head:/reactive/ibm-base.sh#L94
<cory_fu> My question is, should we change the basic layer to not set config.changed.X if the option has no previous value, or should we change the ibm-base layer to account for the fact that they are "changed" from their previous non-existent value?
<blahdeblah> freak_: That looks to me like there's a firewall config (or maybe lack of juju expose) on your MAAS boxes stopping them from talking to each other on the ntp port.  Also, you're going to need some extra settings to make NTP do anything useful.
<blahdeblah> freak_: (I'm not 100% sure how juju expose works with MAAS)
<cory_fu> marcoceppi, kjackal, anyone else who wants to chime in
<blahdeblah> cory_fu: changing from nothing to something seems like a change to me :-)
<freak_> but after installing bundle i haven't exposed any openstack component manually from juju gui
<cory_fu> blahdeblah: On the one hand, I agree with you.  On the other, the handlers that I linked to seem entirely reasonable, and it would be unfortunate to have to add additional logic in there to see if they "really changed" (from an actual, non-empty previous value)
<cory_fu> Also, I'm not sure how useful it is to react to a "change" when you are only just seeing the values for the first time anyway
<cory_fu> bcsaller: Thoughts?
<cory_fu> I suppose we could fix it in the ibm-base layer fairly easily using config.set.curl_url but I still feel like the current behavior is a little questionable
<blahdeblah> cory_fu: Charms have always had to expect config-changed to happen pretty early on in their existence.  At least with reactive you'll only have to do it once.
<kjackal> cory_fu: no preference here. I tend to agree with blahdeblah on "changing from nothing is a change"
<cory_fu> Actually, config.set.curl_url wouldn't help in this case because the user is setting the value before install.  Hrm
<cholcombe> thedac, clock skew?
<cory_fu> So that means that the current behavior, while arguably correct, is onerous to work around.  Maybe we could compromise by changing how config.changed.X works and adding a config.new.X?
<blahdeblah> freak_: I'll leave the answer about how exposing works when using the GUI, but for NTP to work, you'll need at least to give it some sources, e.g. "juju set ntp source='0.CC.pool.ntp.org 1.CC.pool.ntp.org 2.CC.pool.ntp.org 3.CC.pool.ntp.org'" (where CC is your country code).
<cholcombe> freak_, thedac yeah ceph freaks out if your clocks are not ntp sync'd to within a certain amount because paxos depends on message ordering
<blahdeblah> cholcombe: 50ms, by default
<cholcombe> yeah it's not much
<blahdeblah> cholcombe: In NTP terms, that's quite a lot ;-)
<cholcombe> :D
<blahdeblah> freak_: and I'd also recommend "juju set ntp auto_peers=false" - that caused us some issues in production, and it's deprecated in recent versions of the charm.
<freak_> done both things you just said
<bcsaller> cory_fu: doesn't it mean "I have a value you haven't seen before" and thus should fire to give the code a chance to pick it up?
<blahdeblah> freak_: You'll also need to make sure that your systems can actually reach the NTP pool servers (i.e. no firewall blocking them).
<blahdeblah> freak_: If all that's good, then re-do the "juju run ..." from above and we should see the time starting to sync
<cory_fu> bcsaller: That's certainly what it means now.  I'm wondering if that is the most useful meaning of it, or if this corner case might be better handled some other way (such as the new / changed split I mentioned)
<ReSam> is there any way I can disable an api endpoint I don't want to be used? (specific IP)
<cory_fu> bcsaller: The problem in the ibm-base layer is that they want to trigger a handler when: it hasn't been run before OR one of the options has changed
<blahdeblah> cory_fu: So isn't that what it will do right now?
<cory_fu> There isn't a clean way of doing that with decorators on a single handler, because we don't have an OR type construct that can do both @when and @when_not
<bcsaller> cory_fu: I am not opposed to the framework being able to generate all the delta detection in the keys, so 'new' is an option.
<LiftedKilt> yeah the mysql and percona charms are broken
<bcsaller> cory_fu: I do recall that we thought about making the triggers use a better expression language, @when('x OR y AND NOT z'), maybe its something we look at for 2.0
<cory_fu> blahdeblah: To get around the OR problem, the ibm-base layer has additional handlers for the other cases, but now they are getting called *as well* as the original handler, if the value is set before the install hook, but *not* if the config is changed after the install hook
<cory_fu> That inconsistency seems wrong
<cory_fu> bcsaller: Yeah, that's a possibility, though we wanted to avoid that for clarity's sake.  Perhaps we've made things less clear in the end
<bcsaller> cory_fu: it looks like the real world usage points that way :-/
<cory_fu> Still, I think the new / changed split can fix this, while keeping an easy way to get the current behavior (@when_any('config.changed.X', 'config.new.X')) and I kind of doubt anyone is actually depending on the current behavior
<cory_fu> Though, I could be wrong
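To make the trade-off concrete, a small sketch in Python reactive style; config.new.X is only the split being proposed in this conversation, not a state the basic layer sets today, and curl_url stands in for whichever option a layer cares about:

    from charms.reactive import when, when_any

    # Today: config.changed.curl_url is also set during the install hook,
    # because the option has no previous value, so this fires on the very
    # first run as well as on any later change.
    @when('config.changed.curl_url')
    def fetch_installer():
        pass  # fetch (or re-fetch) the installer

    # With the proposed split, "changed" would mean a real change from a
    # previous value and "new" would cover a value seen for the first time;
    # the old behaviour would stay one decorator away.
    @when_any('config.changed.curl_url', 'config.new.curl_url')
    def fetch_installer_either_way():
        pass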
<freak_> blahdeblah , is there any command through which i can see ntp sources
<blahdeblah> freak_: "juju get ntp" shows you what's configured, and the "juju run ..." command from earlier shows you the running configuration and current status.
<blahdeblah> freak_: what you're looking for is the "reach" (reachability) column (line 28 in your last pastebin) to go non-zero, and the "offset" column to be something less than 50.
<blahdeblah> freak_: Anyway, it's the weekend for me, and I'm off to get some breakfast; good luck!
<freak_> blahdeblah , here is the updated output http://paste.ubuntu.com/15991217/
<freak_> ok
<blahdeblah> freak_: Seems like you could pick some closer and more reliable servers, but eventually that config should get you to a reasonable point
<freak_> blahdeblah , actually i'm located in asia so I selected nearest servers
<magicaltrout> lazyPower: I'm working on doing my demo for the Juju <-> Data management stuff tomorrow
<magicaltrout> so i'll have some stuff for you to look at next week
<blahdeblah> freak_: the delay says that they're not as close as you might think ;-)
<blahdeblah> > 400 ms delay means they're half-way round the planet
<magicaltrout> should probably stand up my fake data management stack so i can actually charm it up
<blahdeblah> freak_: Keep watching the offsets from "juju run --service ntp 'ntpq -pn'" and when they get under 50, ceph should become more happy.
<blahdeblah> freak_: If you're going to rely on this long-term, you should deploy the ntpmaster charm on 4+ bare metal hosts in your cluster in order to insulate your ceph hosts from poor connectivity to the upstream NTP servers.
 * blahdeblah bails out for real now
<freak_> ok.thanks
<magicaltrout> LiftedKilt: got a log anywhere I'm curious?
<LiftedKilt> magicaltrout: my juju debug-log is broken unfortunately
<magicaltrout> booo!
<LiftedKilt> magicaltrout: it returns ERROR invalid entity name or password
<LiftedKilt> magicaltrout: on the container itself, mysql-server wouldn't start, and a dpkg --configure was complaining about errors with mysql-server and mysql-server-5.7
<LiftedKilt> doing an apt remove/install didn't clear the errors, nor did reconfiguring with dpkg
<magicaltrout> sorry LiftedKilt its been a while since i last logged onto my dev env and need to bootstrap it, bear with me i'll see what happens
<LiftedKilt> no worries
<LiftedKilt> magicaltrout: I am redeploying with mariadb in a trusty container in the meantime
<LiftedKilt> where I noticed it was with the openstack-lxd bundle's percona-cluster charm, but when I swapped that charm out with the xenial mysql charm and still had the same issues, I thought I would do some poking around and see if anyone else had run into the same issues
<magicaltrout> no problem, anything to stop me doing my real job is preferable! ;)
<LiftedKilt> haha
<Gil> i installed etcd and flannel.  both flannel units are stuck in pending.  flannel/6 looks ok based on logs in /var/log/juju but flannel/7 shows message : "2016-04-22 15:34:38 INFO juju.utils.fslock fslock.go:146 attempted lock failed "uniter-hook-execution", flannel/7: executing operation: run install hook, currently held: initialise-lxc
<Gil> any idea how to fix so that flannel units will exit pending state?
<ReSam> why can't I create a backup of my manual controller?
<marcoceppi> cory_fu: it seems we would just want to, on bootstrap, query the config values to seed the config database
<marcoceppi> ReSam: manual provider is a second class citizen when it comes to providers
<marcoceppi> that's probably why?
<cory_fu> marcoceppi: Not sure what you mean
<marcoceppi> cory_fu: during the bootstrap code, arguably happens during first hook execution but before reactive runs, can't we just config-get and snapshot that so we don't get an erroneous config.changed on first run?
<cory_fu> marcoceppi: We could, but that's not materially different than my PR.  It's more about the change in behavior and whether people expect one or the other.
<marcoceppi> cory_fu: well yours adds a new state, which will only ever be set once?
<cory_fu> marcoceppi: Yes.  The new state is so that you can replicate the current behavior if you really need it.  Maybe not necessary, if no one is relying on the current behavior
<LiftedKilt> jamespage: having some issues with the percona-cluster charm - it fails to deploy with the openstack-lxd bundle as well as by itself in a separate model on Xenial. Have you run into similar issues?
<LiftedKilt> magicaltrout: did you happen to get anywhere with it?
#juju 2016-04-23
<zengine> Does it matter that the commissioning OS is the same as the one that i try to deploy with? For instance, does it matter if i commission a server with 14.04 and then try to install 16.04 on it?
<zengine> more of a maas question i guess but i am trying to use juju to bootstrap for a maas env
<jamespage> LiftedKilt, make sure you're using cs:xenial/percona-cluster
<jamespage> openstack-charmers-next one is out-of-date
<ReSam> marcoceppi: well, a mongodb backup should be possible regardless of the provider - or is there a significant difference between them?
<magicaltrout> lazyPower: if you're around today at some point, let me know, I have a demo plan I want to run by someone before heading too far down the path of no return ;)
<lazyPower> magicaltrout hit me via email. lazypower@ubuntu.com
#juju 2016-04-24
<magicaltrout> woop https://ibin.co/2erMUamOWgs2.png DCOS/Mesos master managed by Juju \o/
<magicaltrout> now for the agents
<magicaltrout> fscking charm tools
<magicaltrout> wtf
<magicaltrout> marcoceppi: is there any way to kick/destroy a unit that is still(very much stuck) in its install phase?
<magicaltrout> my charm keeps getting completely bloody wedged for some reason
<cherylj> magicaltrout: if it's the only unit on the machine,  you can try to force destroy the machine  (juju destroy-machine --force)
<magicaltrout> yeah cherylj thanks i do do that but i suspect i leave a lot of detached ec2 nodes for marcoceppi to clean up later ;)
<cherylj> magicaltrout: that should release the instance as well
<magicaltrout> my charm downloads an archive extracts it then doesn't continue, but only like 75% of the time, the other 25% it seems to work fine, and wget works
<magicaltrout> not sure whats going on, gonna do it manually for now I think
<cherylj> :(  good luck
<magicaltrout> and whats most annoying is all the downloads and extraction works, juju just appears to give up \o/
<magicaltrout> bug reporting landing at some point
<delucia> Trying to install openstack with juju.  A container fails on startup and hence the services cannot start.  How do I get a container restarted via juju so the service can come up?
<magicaltrout> i take it back, python's untarring speed is awful and seems to fluctuate depending on which EC2 node its running on \o/
#juju 2017-04-17
<korean101> only support single nova-consoleauth? (https://jujucharms.com/nova-cloud-controller/#charm-config-single-nova-consoleauth)
<korean101> i use Newton Releases
<korean101> 3 controller nodes + HAProxy
<korean101> start nova-consoleauth on all 3 nodes, checking token false
<korean101> but start nova-consoleauth on only 1 node, checking token true
<korean101> how can i use 3 nova-consoleauth?
<korean101> i already tried the [cache] section in nova.conf
<Budgie^Smore> are we having fun yet
<rick_h> Budgie^Smore: but of course! Go Monday!
<Budgie^Smore> rick_h lol :)
<rick_h> lazyPower: ping
<rick_h> lazyPower: ignore me, emailed and such since that makes more sense :)
<spaok> hello, is there a way to rename a deployed application?
<rick_h> spaok: no, unfortunately not
<spaok> if i deploy a new service, is there a way to move machines? or do I need to redeploy them?
<rick_h> spaok: you'd need redeploy. It's built on the cloud model of you can add/remove units basically
<spaok> ok, thanks rick_h
#juju 2017-04-18
<filiplt> Hello! Is it possible to move juju 'units' deployed in LXD containers between machines?
<filiplt> i mean move containers
<sparkiegeek> well if your units are stateless, it's better to just "juju add-unit --to lxd:<TARGET_MACHINE> <MY_APP>" and "juju destroy-unit <OLD_UNIT>"
<filiplt> Thank you, sparkiegeek. That was solution i thought of
<Budgie^Smore> o/ juju world
<Zic> hi here
<rick_h> party
<Zic> just for info, for one of my 1.5.3 CDK clusters upgrading to 1.6.1, I had a strange issue with kube-dns claiming that its pod cannot mount its volume (kube-dns has a volume??): http://paste.ubuntu.com/24407777/
<Zic> don't know if it's normal
<Zic> http://paste.ubuntu.com/24408100/
<Zic> kubernetes-dashboard is also unavailable
<Zic> (it's a test cluster, no urgency, but just to let you know if somebody on the CDK team already saw that kind of problem before submitting a bug)
<Zic> hmm, seems it's the return of https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238
<cory_fu> Does anyone know how to change the default bootstrap config options for juju?  I managed to get enable-os-update turned off by default and it's preventing me from bootstrapping without manual intervention
<sparkiegeek> cory_fu: perhaps you set it using "juju model-defaults" ?
<cory_fu> sparkiegeek: That helps for creating new units inside models, but it requires a controller to already be bootstrapped.  I'm trying to influence the default config of the controller during bootstrap.
<cory_fu> sparkiegeek: I can do it per-bootstrap with `juju bootstrap --config enable-os-update=true` but I'm trying to figure out how I ended up with it defaulting to false
<cory_fu> Oh, wait
<cory_fu> Of course.  I created an alias that's sending those options.
<cory_fu> >_<
<sparkiegeek> haha
<marcoceppi> cory_fu: wtg ;)
<firl> mbruzek you around?
<mbruzek> Sure
<firl> I am looking at doing an install of 1.6.1 k8s
<firl> on top of openstack, didn't know the best way you would recommend it. just use conjure-up? https://jujucharms.com/canonical-kubernetes/
<mbruzek> You only need conjure-up if you want to deploy to LXD. Otherwise just make sure Juju can talk to your OpenStack and you should be good with "juju deploy canonical-kubernetes".
<firl> juju 2.0?
<mbruzek> conjure-up just calls Juju for you. Yep 2.x
<firl> and does it "just work" with ingress and openstack?
<mbruzek> Networking is complicated. If you are able to reach your OpenStack VMs without Kubernetes then you should be fine. In my test cases the VMs did not have ingress access.
<firl> so will I be able to reach my services some how though?
<lazyPower> firl: so, the way it works is your workers deploy ingress controllers on port 80/443 respectively
<firl> yes,
<firl> like juju doesn't block the ports to prevent me from putting up a ha_proxy for example I mean
<lazyPower> firl: so long as you have a route to those vm's and you can reach port 80/443, the rest will be handled by the ingress objects you declare with your applications.
<firl> ok, I just remember juju not exposing those ports
<lazyPower> firl: correct, you can expose/unexpose the workers respectively, but yeah.
<firl> so only port 80/443 support right now?
<lazyPower> firl: what you'll find that's slightly more complicated is if you want to use the nodeport networking model
<lazyPower> right, you'll wind up needing to do a juju run --application kubernetes-worker "open-port 6000" for example
<lazyPower> thats the only caveat, is you have to manually expose those ports
<firl> gotcha
<firl> that's perfectly acceptable, I just remember the first time I tried 8 months ago I couldn't get that going
<firl> is the  `juju run --application kubernetes-worker "open-port 6000"` documented anywhere?
 * lazyPower checks the readme
<lazyPower> i'm not positive we documented that
<lazyPower> yeah, undocumented behavior at the moment firl, but i'll file a bug and get that added for the next release
<firl> sweet
<firl> I will go through and see what I can do, I think I have to adjust my environment to accept juju 2.0 first
<firl> I will report back here if you guys want on how it went
<lazyPower> sounds good firl, make sure you ping me :)
<firl> sweet, thanks again as always
<lazyPower> I monitor #juju but less actively than previously
<firl> gotcha
<firl> I can imagine, looks like you guys have been busy with juju as a service too
<firl> Is the hope to get it integrated with the kubernetes deployments there to kind of make it an easier deployment than the current azure one?
<lazyPower> firl: i'm not sure i understand the question?
<firl> https://jujucharms.com/jaas
<firl> for example the default kubernetes in azure doesn't allow for scaling post install or attaching to a scaling group etc
<lazyPower> Juju deployed kubernetes certainly supports both of those cases (however instead of scaling groups, we use an autoscaler or manual scaling)
<lazyPower> firl: one such autoscaler exists by a community submission. SimonKLB wrote the elastisys autoscaler charm so you get all the autoscaling goodies it brings with it.
<firl> I will have to check that out. It's nice to see you guys progressing towards that
<firl> anyone know where the config data for juju2 is stored locally?
<blahdeblah> firl: .local/share/juju
<firl> ty
<firl> anyone know of a juju 2.0 environment generator for openstack I am having issues specifying the network id
#juju 2017-04-19
<kklimonda> how do I give someone ssh access to the controller?
<kklimonda> I've done juju grant -c [controller] [user] superuser
<kklimonda> and that did *something* but now I can't figure out how to access it
<kklimonda> juju ssh -m controller 0 says that there is no such model ctrl:[user]/controller
<kklimonda> I've tried juju ssh -m admin/controller 0 but that also didn't work
<lazyPower> kklimonda: I dont think add-user actually adds the ssh key. There's a juju add-ssh-key command that has to be run in order for that to work. rick_h would know best though
<rick_h> lazyPower: correct, atm the admin has to add the key for the user
<rick_h> lazyPower: kklimonda it's a known issue and there's a task for the future to make keys end user manageable
<lazyPower> ty for the alley oop rick_h
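A sketch of the admin-side step being described, assuming the controller model is simply named 'controller' and that the other user's public key is readable at the path shown (both are illustrative):

    import subprocess

    # Add the user's public key to the controller model so they can run
    # `juju ssh -m controller 0`; juju add-ssh-key takes the key text as
    # its argument.
    with open("/home/kklimonda/.ssh/id_rsa.pub") as f:
        pubkey = f.read().strip()
    subprocess.check_call(["juju", "add-ssh-key", "-m", "controller", pubkey])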
<bdx> I'm experiencing some extreme crazyness
<bdx> c4 type instances have some issue with lxd from what I can tell
<bdx> not sure if its juju, or lxd or what
<bdx> the issue is happening with t2 instances too
<Zic> lazyPower: hi, long time with no problems but today I have one :) on our test cluster (happily...) upgraded from 1.5.3 to 1.6.1, kube-dns keeps crashing with kubernetes-dashboard, saying that kind of things: http://paste.ubuntu.com/24413931/
<Zic> juju status is all green
<Zic> seems like a problem of endpoint services which do not respond (last part of my pastebin)
<Zic> as it's a test cluster, I tried to reboot every single machines composing it, with no more luck
<Zic> http://paste.ubuntu.com/24413949/ <= same kind of message for a kubectl logs on kube-dns container
<bdx> lxd is failing across the board for me right now .... on aws instances
<bdx> http://paste.ubuntu.com/24413976/
<bdx> ^ is something I've been doing on a daily basis
<magicalt1out> does look pretty broken
<bdx> I woke up early to test out some newnew, and thats what I get
<bdx> yeah ... at first I thought it was specific to instance type ... but its happening on all instance types (at least the 5 I've tried)
<bdx> then I thought it might be a juju 2.1.2 thing .... as I just created my first model on 2.1.2 .... but I just verified its happening on 2.0.3 models as well
<bdx> @team, what is going on here?
<lazyPower> bdx: we're going to need bare minimum a bug report with a juju-crashdump log (you can report skinny, we dont need the charm artifacts)
<bdx> lazyPower: is crashdump a plugin?
<lazyPower> bdx: snap install juju-crashdump --classic, juju-crashdump -s      should get you moving
<bdx> nice
<bdx> thx
<bdx> lazyPower: http://paste.ubuntu.com/24414014/
<lazyPower> lutostag: ping
<lutostag> pong
<lutostag> son of a
<lazyPower> lutostag: i think we found a scenario where crashdump is misbehaving because of unstarted units
<lutostag> bdx: --edge
<lazyPower> bdx: to be clear, snap refresh juju-crashdump --edge --classic
<lutostag> (fixed that bug, need to release it to stable)
<lazyPower> lutostag: ty <3
<magicalt1out> bdx_: 16:10 < lazyPower> bdx: to be clear, snap refresh juju-crashdump --edge --classic
<lazyPower> rip
<bdx_> crashdump now spams me with
<bdx_> http://paste.ubuntu.com/24414051/
<bdx_> lol
<bdx_> oh no
<bdx_> lazyPower: I appreciate the willingness to help out none the less
<lazyPower> bdx_: thats fine
<lazyPower> bdx_: the spam is expected
<bdx_> ok
<lazyPower> its doing a lot of subprocess calls, and that cgo bit is golang doing what it does best
<bdx_> gotcha .. nice
<lazyPower> its that or snap, i'm unconvinced on which level is spamming that
<lazyPower> but its known and expected all the same, it takes a bit to grab everything on a large deployment, i hope you passed -s or --skinny so it doesn't spend forever nabbing all the charm source
<lazyPower> the idea behind crashdump is we've professionalized nabbing state and debug/status messaging so we can tease apart the deployment artifacts and find root causes. Feel free to inspect the package and see what we're grabbing
<lazyPower> any ideas on improvement are welcome
<bdx_> oh ...
<bdx_> ha
<bdx_> I shall, thx
<Zic> (lazyPower: did you see my last messages or they afraid you so much that I must have been cursed? :D)
<bdx_> lazyPower: these models are on beta controller
<lazyPower> Zic: totally missed it, whats up?
<lazyPower> bdx_: so something went fubar during collection or...?
<bdx_> lazyPower: do you think there is a possibility that juju-crashdump can't collect the info it needs because my user doesn't have permission?
<Zic> lazyPower: (repasting my messages & pastes here: http://paste.ubuntu.com/24414104/)
<lazyPower> lutostag: have we tested crashdump with jaas?
<bdx_> no ... its just spamming hard though with "runtime/cgo: pthread_create failed: Resource temporarily unavailable"
<lazyPower> bdx_: it takes a while, seriously. its nabbing a ton of data
<bdx_> ok
<lazyPower> on a 4 unit small k8s cluster the collection can take ~ 5 minutes.
<Zic> to sum up: seems I have an Service/Endpoint problem on my K8s-test cluster upgraded to 1.6
<lazyPower> but i didn't pass --skinny.
<lazyPower> Zic: looking now
<Zic> thx
<lazyPower> Zic: check on flannel on the unit running the dashboard, is the flannel.1 interface up?
<lazyPower> Zic: also, check kube-proxy service is started and not in error
<Zic> http://paste.ubuntu.com/24414128/
<Zic> Flannel is OK but kube-proxy is crashed
<lazyPower> Zic: thats why its failing
<lazyPower> lets dig into why kube-proxy is dead, anything in the logs?
<bdx_> lazyPower, lutostag: the last message it gave after 5 mins of spam was "runtime/cgo: need to run as root or suid"
<bdx_> I'm guessing it needs to be ran as root?
<bdx_> hmmm
<Zic> lazyPower: I'm running journalctl -u kube-proxy but nothing except start/stop/backoff of systemd, do I have a better logs somewhere else?
<lazyPower> Zic: can you just recycle the daemon? does it stick or does it immediately crash?
<bdx_> alright ... running again as root
<lazyPower> bdx_: hang on, you shouldnt' need to run it as root
<lazyPower> lutostag: ^ wat
<Zic> lazyPower: http://paste.ubuntu.com/24414138/ <= logs from a fresh restart
<Zic> error code 203 :x
<lazyPower> Cynerva: ryebot  -- post standup, lets dig into this together ^
<ryebot> lazyPower: +1
<lazyPower> Zic: need you on ice for a bit while we do standup and will return to ask more questions
<bdx_> heres the bug https://bugs.launchpad.net/juju/+bug/1684143
<mup> Bug #1684143: applications deployed to lxd on aws instances failing <juju:New> <https://launchpad.net/bugs/1684143>
<bdx_> I'll attach crashdump output when I can get it working
<Zic> lazyPower: no problem, thanks :)
<Zic> lazyPower: I found this in plain-text syslog: syslog.1:Apr 18 15:20:10 ig1-k8s-04 systemd[1163]: kube-proxy.service: Failed at step EXEC spawning /usr/local/bin/kube-proxy: No such file or directory
<lazyPower> Zic: oooo snap, that looks like a stale hash. it should be spawning from /snap/bin/kube-proxy
<Zic> the log is from yesterday, I'm looking at the .service systemd unit to see if it's really the case
<Zic> hmm, I have similar logs for our restart test earlier
<Zic> http://paste.ubuntu.com/24414178/
<Zic> the ExecStart is wrong so :)
<Zic> -r--r--r-- 1 root root 425 Feb 16 11:15 /lib/systemd/system/kube-proxy.service
<Zic> not touched by the snap upgrade
<Zic> seems I hit the spot! :D
<lazyPower> Zic: before you update that hang on
<lazyPower> the snaps have a different system exec scheme, they use bash wrappers and a systemd script that gets installed on snap install.
<Zic> (for info kube-proxy is dead on all kubernetes-worker units, I just checked that, not only on kube-dns/kubernetes-dashboard nodes)
<lazyPower> Zic: systemctl status snap.kube-proxy.daemon
<Zic> http://paste.ubuntu.com/24414247/
<lazyPower> xref with https://github.com/kubernetes/kubernetes/issues/26003
<lazyPower> Zic: are you using network policies?
<Zic> this test-cluster is not customized at all, the only parameter we change was docker_from_upstream
<Zic> changed*
<lazyPower> hmm
<lazyPower> ok still in standup, will circle back in a sec
<Zic> (docker_from_upstream was set to "true" before the upgrade to 1.6)
<lazyPower> Zic: this is in reference to your workload objects
<lazyPower> Zic: sudo iptables --list
<lazyPower> lets see if it even created the iptables rulechains to do the serviceip forwarding
<Zic> http://paste.ubuntu.com/24414272/
<lutostag> hmm, bdx, this is with a jaas deployment?
<lutostag> I'll see if I can get a one-off run to test that real quick
<lazyPower> that seems fine...
 * lazyPower ponders
<Cynerva> Zic: i just remembered hitting something like this during my upgrade testing. What eventually got me in a working state was to recreate the pods that are failing
<Zic> Cynerva: was my first attempt :)
<lazyPower> Zic: which templates did you use?
<lazyPower> Zic: the ones found in /etc/kubernetes?
<Zic> the one at ~/cdk
<Zic> oops
<Zic> precisely at ~/snap/cdk-addons/current/addons :)
<lazyPower> ok
<Cynerva> dang, okay
<Zic> through a kubectl replace -f
<Cynerva> hmm i wonder if that recreates the pods? or just the deployment objects?
<lazyPower> it *should* have recreated the pods
<lazyPower> in nuke/repave style
<lazyPower> it doesn't blue/green unless you specify a rolling update
<Cynerva> okay
<lazyPower> Zic: for grins on the worker
<lazyPower> can you curl the http endpoint for your kubernetes-apiserver VIP?
<lazyPower> curl https://10.152.183.1
<Zic> (I tried something more agressively: http://paste.ubuntu.com/24414342/)
<Zic> (about kubectl replace)
<lazyPower> ok
<Zic> don't know if all this error are ignorable
<lazyPower> so, that tells me any attempt to replace has failed
<Zic> yup :(
<lazyPower> you'll need to kubectl rm -f
<lazyPower> and then reschedule
<lazyPower> this *may* fix the issue
<lazyPower> but i doubt it
<Zic> ah, I didn't try this one, I will immediately
<Zic> kubectl rm seems to not exist (?)
<Zic> delete?
<lazyPower> ya
<lazyPower> just checking if you're awake ;)
<Zic> :D
<Zic> http://paste.ubuntu.com/24414356/
<Zic> Container still creating
<Zic> I'm waiting a bit
<Zic> http://paste.ubuntu.com/24414359/ <= the second line is strange
<Zic> about the curl test: root@ig1-k8s-01:~# curl https://10.152.183.1
<Zic> Unauthorized
<lazyPower> OH
<lazyPower> Well thats good!
<lazyPower> if your VIP is responding, its not a networking issue
<lazyPower> and we expect that since you don't have the tls key on that curl command. had you included the k8s key(s) on that curl request it would 404 (i think) you. As it begins at /api.
<Zic> http://paste.ubuntu.com/24414374/ ContainerCreating finished but... it's another bad state now :(
<Zic> don't know why it reached an ImagePullBackOff
<lazyPower> Zic: give it a sec
<lazyPower> that can happen when ther's issues hitting the gcr.io registry
<lazyPower> temporary networking issue, saturation, noisy neighbors, etc.
<Zic> 24s  24s  1  kubelet, ig1-k8s-05  spec.containers{influxdb}  Warning  Failed  Failed to pull image "gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1": rpc error: code = 2 desc = Error pulling image (v1.1.1) from gcr.io/google_containers/heapster-influxdb-amd64, Get https://gcr.io/v1/images/55d63942e2eb6a74ea81cbfccd95ef0f44f599a04ed4a46a41dc868a639c847d/ancestry: dial tcp 64.233.166.82:443: i/o timeout
<Zic> seems like
<lazyPower> yeah
<Zic> oh, except grafana, all pods are now Running
<lazyPower> i suspect your'e experiencing an outage atm. let me check here
<lazyPower> \o/
<lazyPower> nice
<lazyPower> so it self resolved
<Zic> kube-system   kube-dns-806549836-w842j                2/3       CrashLoopBackOff   3          6m        10.1.79.7    ig1-k8s-02
 * lazyPower chalks it up to internet gremlins
<Zic> kube-system   kubernetes-dashboard-2917854236-qmvn3   0/1       Error              5          6m        10.1.36.7    ig1-k8s-04
<Zic> speaks too fast :'(
<lazyPower> Zic: you're playing with my heart man
<Zic> :'(
<lazyPower> ok, lets start with dns
<Zic> was blocked in CLBO for so much time I was too happy to see a Running state :(
<lazyPower> whats the story with dns clbo?
<lazyPower> failed health check, failed to reach apiserver?
<Zic> 1m  1m  2  kubelet, ig1-k8s-02  Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubedns pod=kube-dns-806549836-w842j_kube-system(c5838bc9-2514-11e7-b7ef-005056949324)"
<Zic> let me do a kubectl logs on it
<Zic> http://paste.ubuntu.com/24414398/
<Zic> grafana hits Running and stayed in Running. but kube-dns & kubernetes-dashboard are stuck in CLBO now
<Zic> http://paste.ubuntu.com/24414411/ <= for dashboard
<lazyPower> hmm
<lazyPower> i'm uncertain why the dashboard isn't able to reach the VIP
<lazyPower> but i'm still concerned about kube-dns
<lazyPower> looks like the sidecar for dnsmasq metrics is whats causing it to fail
<lazyPower> Zic: give me a repeat describe for the dns pods now that they are out of errimgpull
<Zic> I got more info about kubernetes-dashboard through a direct `docker logs` at local worker: Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.152.183.1:443/version: dial tcp
<Zic> 10.152.183.1:443: i/o timeout
<Zic> don't know why it got a timeout if I can curl it...
<lazyPower> right
<lazyPower> I'm not sure whats fishy there but somethings up
<lazyPower> and to make this all the more interesting, our upgrade tests didn't surface this, the addons upgraded without issue
<Zic> http://paste.ubuntu.com/24414438/
<Zic> lazyPower: you know that I'm cursed and love to hit all the bugs that nobody else has :D
<lazyPower> so the primary issue here is the kubednsmasq pod is still failing to pull.
<Cynerva> Zic: can you paste journalctl logs for snap.kubelet.daemon, snap.kube-proxy.daemon, and flannel?
<lazyPower> Zic: additionally, on any unit, try this:   docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
<lazyPower> well, any worker. the master doesn't have docker so you'll figure out real quick to not do it there.
<Zic> Cynerva: http://paste.ubuntu.com/24414459/
<Cynerva> thanks
<Zic> lazyPower: http://paste.ubuntu.com/24414470/
 * lazyPower blinks
<lazyPower> thats *literally* the manual interaction of what that stupid kubelet operation is trying to make happen
 * lazyPower flips tables
<lazyPower> Zic: juju run --application kubernetes-worker "docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1"
<lazyPower> pre-load all the workers with that image. if it resolves itself, again, i dont know why, but gremlins.
<Zic> ok, it's loading :)
<Cynerva> nothing interesting in the service logs aside from the stream errors, and those aren't telling us much O.o
<lazyPower> Cynerva: i see we're missing the conntrack bin. we should probably add that and pack it into kube-proxy
<lazyPower> that'll be needed for large scale deployments so it properly tracks and terminates stale connections. conntrack bits were causing rimas problems before on another distro. I want to learn from that mistake if we can.
<Zic> lazyPower: http://paste.ubuntu.com/24414492/
<Zic> problem on one of the units
<Cynerva> hmm that's weird, kube-proxy is classic confinement
<Zic> seems very likely that gcr.io has an issue
<lazyPower> Zic:   UnitId: kubernetes-worker/1 <-- so we need to figure out why that unit is having connectivity issues
<Zic> kubernetes-worker/1       active    idle   10       ig1-k8s-01
<Zic> it's ig1-k8s-01, I will do a manual check
<Zic> at least, it can ping 64.233.166.82
<Zic> http://paste.ubuntu.com/24414505/
<Zic> wtf :>
<lazyPower> ah looks like it might have been 3
<lazyPower> i misread the yaml
<lazyPower> kubernetes-worker/3
<Zic> oops, I did not check too :D
<Zic> ok so it's kubernetes-worker/3*      active    idle   12       ig1-k8s-03
<lazyPower> Zic: again, just making sure you're awake
<Zic> :)
<Zic> pinging is OK, it's pulling now, in progress...
<Zic> http://paste.ubuntu.com/24414515/
<Zic> it stopped
<lazyPower> so either there's a network issue on that unit, or gcr.io is having trouble
<lazyPower> i wouldn't be surprised of either
<lazyPower> if you retry does it succeed or does it keep getting rejected?
<Zic> ig1-k8s-03 has the exact same network configuration of other 4 kubernetes-worker units (they are all NATed by our hypervisor through the same public IP)
<Zic> Status: Downloaded newer image for gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
<Zic> just work at the second attempt...
<Zic> silly gcr.io
<lazyPower> Zic: did that resolve the deployment?
<Zic> lazyPower: hmm, saw in describe pod kube-dns that it tries to redownload the docker image
<lazyPower> now that the image is cached on all the workers, it shouldn't be complaining about image pull sync
<Zic> even if it's already pulled :(
<Zic> 21s  21s  1  kubelet, ig1-k8s-03  spec.containers{kubedns}  Normal  Pulling  pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1"
 * lazyPower sighs
<lazyPower> it probably has pull: always in the manifest
<lazyPower> because lets DDOS our registry sounds like a great plan.
<Zic> http://paste.ubuntu.com/24414538/
<Zic> huhu
<Zic> it's sidecar now
<lazyPower> Zic: edit the manifest for kube-dns and set the stupid image pull policy from imagePullPolicy: Always to imagePullPolicy: IfNotPresent
<lazyPower> and reschedule kubedns
<lazyPower> (delete and recreate)
<lazyPower> mind you this is all a work-around to whatever networking issue we're seeing
<Zic> kubedns-cm.yaml          kubedns-controller.yaml  kubedns-sa.yaml          kubedns-svc.yaml
<lazyPower> i'm not convinced
<Zic> at the controller?
<lazyPower> kubedns-controller.yaml
<Budgie^Smore> o/ juju world :)
<lazyPower> ya
<lazyPower> Budgie^Smore: o/
<lazyPower> Budgie^Smore: did you bring your rocket launcher? We're on a bug hunt
<Zic> lazyPower: in fact there is 0 ImagePullPolicy at the controller :D
<Zic> so it must be the default value, which is... IfNotPresent
<Zic> I don't understand :D
 * lazyPower flips tables
<lazyPower> Zic: i dont know what to recommend at this point
<lazyPower> i've given every thought i can to work around this issue, the crux is the connectivity of grabbing that image for kubedns
<Budgie^Smore> lazyPower no rocket launcher... pop corn to watch the show though ;-)
<lazyPower> and i have no clue why the dashboard pod is unable to contact the VIP if the host machine can contact the VIP
<lazyPower> you did however give us some clues that our removal was not working as expected and have a fix en-route for that
<Zic> lazyPower: sidecar just finished to pull... but health check is not good: http://paste.ubuntu.com/24414621/
<lazyPower> well, progress
<lazyPower> whats in the logs for the pod?
<lazyPower> (s)
<lazyPower> same thing where dns cant reach the service vip of kube-apiserver?
<Zic> http://paste.ubuntu.com/24414651/
<Zic> seems like it yup
<Zic> it times out on the VIP
<Zic> like the dashboard :(
<Zic> reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.152.183.1:443/api/v1/services?resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
<lazyPower> yah, i see that :(
<lazyPower> so we've resolved the other nit-noid issues but the core of why it cant find the vip is still alien to me
<lazyPower> if the host can see it, the container should see it
<lazyPower> Zic: can you fire up an ubuntu pod and attempt the same curl test?
<lazyPower> Zic: from within the container, via kubectl exec
<Zic> http://paste.ubuntu.com/24414704/ <= re-doing the test, it answered
<lazyPower> Budgie^Smore: share that popped corn
<Zic> lazyPower: yup, trying that
<Budgie^Smore> lazyPower come and get it :P
<lazyPower> Open source the corn, man
<Zic> hum
<Zic> lazyPower: no network inside a container
<Zic> no network at all
<lazyPower> Zic: boom
<lazyPower> progress
<Zic> can't do an apt install curl :(
<lazyPower> now lets figure out why the container has no network
<lazyPower> whats in /etc/default/docker?
<Zic> http://paste.ubuntu.com/24414713/
<Zic> pretty empty
<Budgie^Smore> lazyPower lol now you going to get me to code a corny corn popper ;-) pun most assuredly intended!
<Zic> hmm
<Zic> lazyPower: strange things: this new container has no network
<Zic> but I tried a kubectl exec at an ingress controller
<Zic> network is up there
<Zic> why ingress-controller has network and the new container doesn't :o
<Budgie^Smore> (me being lazy) you have tried killing the container and having it start on another node right?
<Zic> lazyPower: http://paste.ubuntu.com/24414727/
<Zic> don't understand this part...
<Zic> ubuntu is running at kubernetes-worker/0
<Zic> the ingress-controller I used is running at kubernetes-worker/4
<lazyPower> Zic: whats the age on that ingress controller?
<lazyPower> is it from pre-upgrade?
<Zic> 1d
<Zic> so after-upgrade
<Zic> (was 62days before)
<Zic> default       default-http-backend-35bpm              1/1       Running            1          62d       10.1.80.5    ig1-k8s-01
<Budgie^Smore> Zic, lazyPower have you checked the IPs and iptables yet? could it be a flannel / overlay network?
<Zic> this, however, is up since 62d
<Zic> and is also located at ig1-k8s-01
<lazyPower> Budgie^Smore: it could be, however it should fallback to the default docker network driver iirc.
<lazyPower> Zic: try again but watch the kubelet log
<lazyPower> see if anything leaps out at you there
<Zic> lazyPower: for info, pods of kube-system has no network also
<Zic> tried in the grafana-influxdb pod
<Zic> no network
<Zic> seems like just ingress-controllers have network :o
<Cynerva> Zic: ingress controllers have hostNetwork: true, so i think they bypass flannel/cni entirely
<lazyPower> Zic: i'm at an impasse now, but we've gotten deeper into the issue that seems like yet another symptom, but not the root cause
<Cynerva> not entirely sure how that works, but they're definitely a special case
<lazyPower> Cynerva: that would be the case; if it specifies host network it doesn't use any of the containerd networking bits. it's binding on the host's tcp stack.
<Zic> lazyPower: could it be tied to our use of docker_from_upstream ?
<Zic> I can switch it to false if you want
<lazyPower> Zic: quite possible, if you switch it back to archive, do things work?
<Zic> I will try now
<lazyPower> Cynerva: ryebot - i dont think we've tested with upstream docker in quite some time... is this true yeah?
<Cynerva> lazyPower: yeah, we haven't that i'm aware of
<lazyPower> I thought so, i might actually submit a PR this week to remove that option from the k8s charms as it's inherited from layer-docker.
<lazyPower> if we're not extensively testing it, we shouldn't offer it
<Zic> lazyPower: we have a serious garbage collection issue in our prod-cluster with the Ubuntu-archive version of Docker :(
<Zic> it's why we switched to PPA version
<lazyPower> Zic: thats unfortunate if this resolves the issue
<Zic> yup :( with the docker version from the Ubuntu archive, we often got dockerd stuck at garbage collecting
<lazyPower> if it doesn't i'm not really sure where to go from here either, as this makes no sense to me that your container network just falls out
<Zic> switching docker_from_upstream resolved this issue immediately
<lazyPower> seriously?
<Zic> yup :(
<lazyPower> welp
<lazyPower> nothing to do here
 * lazyPower jetpacks away
<Zic> it was Kibana containers which crashed Docker garbage collection
<lazyPower> hmmm
<lazyPower> Zic: its 1.11.x coming from archive correct?
<Zic> lazyPower: careful, saying that for our production-cluster, for the test-cluster we're debugging, downgrading is in progress
<Zic> lazyPower: downgrading is finished and... all my pods are Running and have network connectivity
<Zic> :o
<lazyPower> Zic: perhaps it was just recycling docker that did it?
<Zic> to recap what I said: we used docker_from_upstream as we hit a severe garbage collection bug with dockerd in production with heavy, usage-intensive containers like Kibana; with the version from the docker.com PPA, it was fixed (in 1.5.3)
<Zic> but it seems that this docker version of docker.com breaks network in 1.6
<lazyPower> i'm running a deploy with install_from_upstream=true right now
<Zic> (to be clear, as we mixed our conversation about two different clusters earlier)
<lazyPower> yep, i follow you now
<Zic> for now, the test-cluster we're debugging here is now fixed
<Zic> with docker_from_upstream sets to false
<lazyPower> Zic: prior to doing that, did you attempt to restart the docker daemon?
<lazyPower> was that part of your troubleshooting?
<Zic> it's a bit lame as we are now using docker_from_upstream=true at production :/
<Zic> lazyPower: after the downgrading, yeah, I restarted docker
<lazyPower> Zic: i meant before
<Zic> ah, yeah, rebooted the whole cluster too
<lazyPower> Zic: well, i just deployed and upgraded
<lazyPower> so far so good
<lazyPower> to be clear - deployed with docker from archive, enabled install_from_upstream, things are still running
<Zic> did you enable docker_from_upstream at 1.5.3, then upgrade to 1.6 ? :D
<Zic> was the correct path
<Zic> don't know if that plays a part
<Zic> lazyPower: the *exact* path was: switching to docker_from_upstream=true, look at juju status and when it's ended, restart docker on every kubernetes-worker unit (as the juju scenario doesn't handle this part) <some days passed> -> upgrade to 1.6 with the Ubuntu Insights tutorial -> CLBO at kube-dns+kubernetes-dashboard after the upgrade / no network in container
<lazyPower> Zic: running another deploy through the upgrade scenario
<lazyPower> but i got networking with upstream docker from a fresh deployment
<lazyPower> so, murky water here...
<lazyPower> Zic: looks like Cynerva may have confirmed the behavior
<lazyPower> still debugging but yeah, we're close to identifying the symptom
<Zic> lazyPower: great! I'm leaving my office to go back home, I will read my backlog later if you find something else :)
<bdx_> lazyPower: the issue seems to be with us-east-1a  ..... the only way I can get an instance to deploy to us-east-1a is by spaces constraint, where the subnet in the space is in 1a
<bdx_> otherwise, `juju deploy ubuntu -n10` will not deploy anything to 1a
<bdx_> its the instances that I get into 1a with the spaces constraint that exhibit the issue of failing lxd
<lazyPower> Zic: the only thing to note here is that with that upstream version of docker (1.28 API) is well beyond whats been tested by upstream. In the 1.6 release notes Drop the support for docker 1.9.x. Docker versions 1.10.3, 1.11.2, 1.12.6 have been validated. Anything outside of that is likely to have gremlins, as we're finding.
<dockerer> Hi
#juju 2017-04-20
<anrah_> Quick question about the juju instances hostnames
<anrah_> Is the format always juju-<someid>-<modelname>-<instance number> ?
<anrah_> thing is that I require name of the current model to be resolved within instances, and agent.conf file shows only uuid and I can't find any other appropriate way (besides talking to API) to get the model where the instance is running
<Zic> lazyPower: saw your message of yesterday, ack, so I will need to downgrade docker at our production-cluster also, before upgrading to K8s 1.6 :(
<Zic> but I'm afraid of garbage collecting problem with Docker that we had with the version from Ubuntu archive
<Zic> the Docker version from docker.com private repository fixed that
<Zic> a bug is open at Docker for that, but the error message is so generic that this bug stayed open since April 2016 with no other news than "We upgraded to the latest version of Docker and that fixed everything!" :'(
<iatrou> hi, I am testing cdk using snapd 2.22.6 conjure-up 2.1.5 and the kubernetes-master is stuck for a while in waiting:  http://paste.ubuntu.com/24420144/
<stokachu> iatrou: yea ive seen that before, lazyPower ^
<lazyPower> iatrou: can you pastebin the output of `kubectl get po --all-namespaces`? You can execute that on the master itself.
<iatrou>  here is /var/log/juju/unit-kubernetes-master-0.log http://paste.ubuntu.com/24420169/
<iatrou> lazyPower: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
<lazyPower> iatrou: i see that
<lazyPower> iatrou: do you get that same error message when attempting to issue the kubectl command on the master?
<zeestrat> Does anyone know of a way to use dynamic variables such as env variables when deploying bundles with the native juju deploy in 2.x?
<lazyPower> zeestrat: you would need to use something like envtemplate to render the bundle and substitute the variables, then deploy that generated bundle
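Not envtemplate itself, but a minimal stand-in for the same render-then-deploy idea using only the standard library; the template filename and the variables it references are made up for the example:

    import os
    import string
    import subprocess

    # Substitute environment variables such as $CHARM_CHANNEL into the
    # bundle template, write the rendered copy, then deploy it.
    with open("bundle.yaml.tmpl") as f:
        rendered = string.Template(f.read()).safe_substitute(os.environ)
    with open("bundle.yaml", "w") as f:
        f.write(rendered)
    subprocess.check_call(["juju", "deploy", "./bundle.yaml"])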
<iatrou> lazyPower: that was from kubernetes-master, from the conjure-up host, for kubectl.conjure-canonical-kubern-f24 get po --all-namespaces I get 502 Bad Gateway
<lazyPower> iatrou: whats the status of the apiserver when you `systemctl status kube-apiserver`?
<iatrou> lazyPower: inactive (dead)
 * magicalt1out wants to update to the latest CDK but my underlying openstack hardware is so woeful I don't currently dare =/
<lazyPower> iatrou: we've found the culprit, can you attempt a restart of that service? `systemctl restart kube-apiserver`  then re-insepct to make sure its active
<stokachu> lazyPower: this problem has happened to me a few times too
<lazyPower> stokachu: any indicator or collection of logs so we can scrutinize what actually happened?
<lazyPower> i suspect a race condition but i'm not certain of that
<stokachu> im running more tests today and will get you those logs
<lazyPower> ok, thanks stokachu
<iatrou> lazyPower: I must be doing something wrong: on kubernetes-master I get Failed to restart kube-apiserver.service: Unit kube-apiserver.service not found.
<lazyPower> oh, wait, i see what i did there. my mistake
<lazyPower> i gave you the wrong service name
<lazyPower> iatrou: systemctl status snap.kube-apiserver.daemon
<lazyPower> i'm going to have to un-learn that muscle memory
<magicaltrout> or get a mac with those silly bars, so you can put it on a hot key ;)
<lazyPower> ^ that
<lazyPower> only, can we get an updated x1 carbon with a silly bar?
<Zic> lazyPower: hey, I just upgraded a production-cluster this time and... except that I needed to kubectl delete -f <all_cdk_addons> && kubectl create -f <all_cdk_addons> like earlier, it works well
<magicaltrout> hehe
<Zic> it's one of my two K8s clusters, the smaller one
<lazyPower> Zic: and this is with install_from_upstream=true?
<iatrou> lazyPower: OK, still dead, but for the "right" reason this time, the restart "fixed" the waiting on the master, but  not the workers...
<Zic> lazyPower: nope, I downgraded just before
<lazyPower> iatrou: ok, so i suspect kubelet needs restarted on the workers
<Zic> I do not attempt with it :(
<lazyPower> iatrou: its that or wait for update-status to run and it should attempt to reconverge
<lazyPower> Zic: ok, good
<lazyPower> Zic: i know the GC issue is a thing for you, but we don't have the bandwidth to enable that level of testing at this time if the upstream project isn't doing it :(
<lazyPower> Zic: i suspect with 1.7/1.8 those newer dockers will be addressed, however at the release of 1.6 it was only vetted against 1.10-1.13
<Zic> lazyPower: before the kubectl create/delete cdk-addons, my kube-dns was stuck at ContainerCreating with a Mount option error
<Zic> lazyPower: for this smaller customer, he has no Docker garbage collecting error
<Zic> so I downgraded without any drawbacks
<Zic> it's the bigger one where it's a problem
 * lazyPower nods
<Zic> in practice, the GC issue plays out like this: some containers of a pod (the ZooKeeper one) have heavy usage which can busy the kubernetes-worker unit's machine at 100% of its capacity
<Zic> - docker stuck
<Zic> - docker will eventually come alive again, but with orphan container errors and GC in a loop
<Zic> - kubelet cannot schedule any new pod on the node of this docker
<Zic> -> solution, restart dockerd
<Zic> it happens twice a days
<Zic> -s
<Zic> the 17.04-ce version of Docker is not affected
<lazyPower> Zic: catch-22, the 17.04-ce edition of docker breaks networking as we've discovered
<Zic> yep :(
<Zic> in 1.6.1
<Zic> worked well in 1.5.3 :)
<Zic> a possible mitigation: I recommend to my customer to set Kubernetes "limits" on his manifest for ZooKeeper pods
<Zic> of CPU/RAM
<Zic> to never hit the 100% spot
<lazyPower> Zic: that sounds like the proper way to do this
<lazyPower> you might have to scale zookeeper.... but if its related to resource consumption and why it gets stuck GC'ing...
<Zic> as the GC issue with dockerd seems to trigger when the kubernetes-worker units machine is under heavy load-average
<lazyPower> ah
<lazyPower> ok
<lazyPower> well thats certainly something to explore and try
<lazyPower> i would def. give it resource limits and see if that resolves it
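A sketch of setting those CPU/RAM limits programmatically with the kubernetes Python client (the statefulset name, namespace and values here are assumptions; editing the pod manifest directly and re-applying it works just as well):

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()
    # hypothetical zookeeper statefulset; cap it below the machine's capacity
    patch = {"spec": {"template": {"spec": {"containers": [{
        "name": "zookeeper",
        "resources": {"requests": {"cpu": "1", "memory": "2Gi"},
                      "limits": {"cpu": "2", "memory": "4Gi"}}}]}}}}
    apps.patch_namespaced_stateful_set(name="zookeeper",
                                       namespace="default",
                                       body=patch)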
<lazyPower> Zic: also if you want to file a bug to track this issue with us, i'm sure someone else is bound to hit this and it would be good to have it documented somewhere
<Zic> https://github.com/kubernetes/kubernetes/issues/39028 <= it's this kind
<Zic> with this kind of error:
<Zic> Mar 31 00:08:15 mth-k8s-01 kubelet[21200]: E0331 00:08:15.911907   21200 kubelet.go:1128] Container garbage collection failed: operation timeout: context deadline exceeded
<lazyPower> iatrou: any change in status on your workers?
 * lazyPower looks @ linked issue from zic
<Zic> no clear identification / solution of the problem :s
<Zic> the OP seems to have this issue with a heaby Prometheus pod
<Zic> heavy
<lazyPower> ok i see the linked issue now as well relating to gc
<lazyPower> this is a hairy issue if we're seeing perf gc lock from the underlying container daemon, we're running heapster, which should be sending data back to kubelet to do GC at runtime
<lazyPower> but if the container process fails to GC, yeah we're kind of dead in the water here too
 * lazyPower sighs
<lazyPower> if its not one thing its another amirite Zic?
<Zic> I needed to Google "amirite" xD </French_guy>
<Zic> lazyPower: most of the time, it's Zookeeper pods
<Zic> I never saw anything else, except one time, a Kibana one, but that seems to be an isolated case
<lazyPower> Zic: fair enough, i'm speaking internet gibberish
<Zic> :D
<lazyPower> so i dont think its english guy vs french guy lingo, its just me being full of trash talk from the internet
<Zic> never kube-system pods or default pods like ingress-controller failed at this GC problem
<lazyPower> Zic: that kibana pod gc seems unrelated, kibana is client side. I would expect that from ES though.
<lazyPower> like kibana has *some* Server side jruby code, but for the most part its client side
<Zic> yep
<Zic> I also have ES pods
<lazyPower> and those dont give you trouble w/ GC?
<Zic> they never provoke this GC issue
 * lazyPower does the shenanigans dance
<Zic> it always starts with this ZK, and sometimes other pods begin to fail
<lazyPower> i suspect its ZK bleeding into other areas because of the GC issue
<Zic> (I think it's unrelated, they are just crashing; auto-healing tries to heal them but cannot, as kubelet cannot reschedule new containers)
<lazyPower> yeah
<lazyPower> that
<lazyPower> thats my bold prediction as well
<Zic> in conclusion, with the docker_from_upstream=false and for that smaller cluster which don't have any GC issue, the only "problem" I got during after the upgrade was the need of delete/create all cdk-addons YAML manifests
<Zic> don't know if "all" was really required, was just kube-dns which failing
<Zic> but just in case, I did it for all cdk-addons
<Zic> s/during//
<Zic> (I think you already identified this problem yesterday)
<iatrou> lazyPower: nope, the workers are still in waiting
<Zic> don't know if it's a good advice but, when juju status is stuck at something with "waiting", and if I see no action at all at a "juju debug-log" or in the units concerned by the "waiting", I just try to reboot the juju controller and/or the unit concerned :>
<Zic> 90% of the time it works
<Zic> 10% of the other time, I keep this error until lazyPower wake up :>
 * Zic => exit
<lazyPower> iatrou: ok, lets remote into one of the workers and see whats happening. do you have a minute to troubleshoot?
<iatrou> lazyPower: sure, what am I looking for?
<lazyPower> iatrou: systemctl status snap.kubelet.daemon
<lazyPower> is it dead? any indicator in the logs of a failure scenario?
<iatrou> inactive (dead), snap[1510]: cannot change profile for the next exec call: No such file or directory
<iatrou> restart seems to recover it
<iatrou> lazyPower: ^^
<lazyPower> iatrou: excellent, sorry you ran into this. was this an upgrade or a fresh deploy?
<erik_lonroth> Hello! I've spent some time writing a "Hello World" tutorial about getting started with juju development. If you like to give me feedback to it, I'd be glad. I intend to expand with more tutorials as I learn from this myself. https://github.com/erik78se/juju/wiki/The-hello-world-charm
<lazyPower> erik_lonroth: certainly, thanks for sharing. Another good place to cross post that would be the juju mailing list "juju@lists.ubuntu.com"
<erik_lonroth> Oh, thanx for that tip.
<iatrou> lazyPower: fresh install
<lazyPower> iatrou: ah interesting. What cloud was this deployed to?
<iatrou> lazyPower: localhost
<lazyPower> ok, thats interesting. There appears to be a race that you hit and i'm not sure why
<lazyPower> iatrou: if you could snag the contents of the kube-apiserver logs, i think that's good enough to diagnose what happened
<iatrou> lazyPower: from the master? where are they located?
<lazyPower> iatrou: from master
<lazyPower> iatrou: in standup give me 5
<magicaltrout> bet you're not stood up
<lazyPower> magicaltrout: i have a convertible desk
<lazyPower> i stand for meetings sometimes :)
<bdx> jam: ping
<lazyPower> iatrou: ok, are you familiar with juju-crashdump?
<lazyPower> iatrou: it would probably be easier to just collect the env so i dont have to come back for more logs down the road.
<iatrou> lazyPower: apologies otp, getting back to you in a bit
<lazyPower> iatrou: no worries, ping when you're available
<magicaltrout> otp? off to the pub?
<aisrael> on the phone, but the pub sounds good too. ;)
<magicaltrout> ah
<magicaltrout> yeah the pub is the better one
<magicaltrout> or
<magicaltrout> otpitp
<magicaltrout> is acceptable if you need to combine work and alcohol
<Zic> lazyPower: found a minor "bug" -> kube-apiloadbalancer drop my kubectl executed remotely after 90s
<Zic> maybe because of the       proxy_read_timeout      90;
<Zic> at /etc/nginx/sites-enabled/apilb
<Zic> s/executed/exec executed/
<lazyPower> Zic: you can try increasing that to some absurdly large number like 99999999;
<lazyPower> you cannot wholesale disable the read timeout though, that much i'm 90% certain of.
<Zic> I switched it to 600s/10min instead
<Zic> but I don't know if it will be overwritten at the next Juju check, or just at the next juju upgrade-charm kube-apiloadbalancer (not important)
<mbruzek> Zic - Then you can edit the template used to write that file. /var/lib/juju/agents/unit-kubeapi ... find that template file and edit it there
<Zic> if it's just at next-upgrade, it's not a very big problem, and actually, next-upgrade is switching to HAProxy right? :)
<Zic> if it's actively overwritten, yeah, I will edit the template, thanks for the trick
<mbruzek> Not sure when haproxy is coming, we had problems with the tls keys
<mbruzek> passing through the traffic as a layer 4 router, the addresses were incorrect
<lazyPower> o/ mbruzek
<mbruzek> heyo
<lazyPower> mbruzek: i saw you on steam last night and got giddy :D
<mbruzek> Firewatch is a great linux game
<lazyPower> +1 to that
<lazyPower> so is *Drumroll* ARK
<lazyPower> but i'm clearly biased ;)
<mbruzek> Ah ark
<lazyPower> i started playing the even more difficult version: Scorched Earth
<lazyPower> i need to find some good persian music as a backdrop while playing
<lazyPower> or maybe just bolero on a loop
<kwmonroe> aisrael: marcoceppi:  will one of you confirm the right benchmark doohickey to import:
<kwmonroe> >>> from charmhelpers.contrib.benchmark import Benchmark
<kwmonroe> >>> from charms.benchmark import Benchmark
<kwmonroe> it's the last one, right?
<aisrael> the last one, yeah
<kwmonroe> gracias
<aisrael> charms.contrib.benchmark was the initial version, before we spun it out
<kwmonroe> ack
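A minimal sketch of how that import is typically used (assuming the charm exposes a benchmark action, here hypothetically named 'perf'; the hook name is illustrative):

    from charms.benchmark import Benchmark

    def install():
        # advertise the benchmark action(s) this charm provides so the
        # benchmarking tooling can discover and run them
        Benchmark(['perf'])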
<Zic> the ambiance of Firewatch is brilliant :>
<lazyPower> Zic: +1 to that, Panic! writes great software
<lazyPower> a bit pricey, but excellent tools all the same
<siva> I am seeing an issue where 'juju status' command just hangs
<Guest31335> How do I resolve this issue?
<Guest31335> Both the 'juju status' and 'juju list-models' command just hangs
<Guest31335> ubuntu@juju-api-client:/var/log$ juju --version 2.0.2-xenial-amd64
<Guest31335> How do I resolve this one>
<Guest31335> How do I resolve this one?
#juju 2017-04-21
<jianghuaw> hi does charm-nova-compute support VMware?
<jianghuaw> or have to use this charm - nova-compute-vmware? It seems nova-compute-vmware has no update since 2014-06-24. https://code.launchpad.net/~openstack-charmers/charms/precise/nova-compute-vmware/trunk
<jianghuaw> @jamespage
<jianghuaw> jamespage, Do you have an answer about the above question? thanks.
<blahdeblah> jianghuaw: It will probably be a few more hours before he's around (assuming he's not travelling at the moment)
<jianghuaw> blahdeblah, thanks for the information.
<jam> menn0: if you're around https://github.com/juju/txn/pull/32 allows us to use RemoveAll($in) instead of Bulk when Bulk isn't really supported
<kjackal> Good morning Juju world!
<SimonKLB> is it possible to access relation conversations from multiple charms related using the same interface?
<SimonKLB> or is it only possible to access one conversation at a time?
<kjackal> SimonKLB: I think you have access to a list of all conversations
<kjackal> let me grab an example of that
<kjackal> SimonKLB: https://github.com/juju-solutions/interface-dfs/blob/master/provides.py#L37
<SimonKLB> kjackal: this is only going to grab the conversations of the units, not multiple charms, right?
<SimonKLB> of the units (in one charm)*
<SimonKLB> i believe a mix of relation_ids() and relation_get() will do the trick
<kjackal> SimonKLB: you are right this will give you the conversations the unit participates in. I've never seen a unit grabbing conversations of other units.
<SimonKLB> kjackal: no that is not what i want, i want all of the conversations between a charm and other charms related with a specific relation
<SimonKLB> for example, if wordpress was related to both mysql and mariadb, i would like the username and password from both databases in wordpress
<kjackal> i see
<SimonKLB> looks like you're blocked from accessing the relation when it's not in the correct context though
<lazyPower> SimonKLB: so long as you're in reactive, and using reactive states, you can iterate through the cached conversation data objects
<lazyPower> thats something cory work on when implementing reactive, was to remove the requirement to be in context to access that data.
<lazyPower> s/work/worked/
<lazyPower> SimonKLB: an example case: https://github.com/juju-solutions/interface-elasticsearch/blob/master/requires.py#L48-L51
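A minimal non-reactive sketch of the relation_ids()/relation_get() approach mentioned above, using charmhelpers (the 'db' endpoint and the 'user'/'password'/'host' keys are assumptions matching the wordpress/mysql example):

    from charmhelpers.core.hookenv import relation_ids, related_units, relation_get

    def gather_db_credentials(endpoint='db'):
        creds = []
        for rid in relation_ids(endpoint):        # one relation id per related application
            for unit in related_units(rid):       # each remote unit on that relation
                data = relation_get(rid=rid, unit=unit)
                if data and data.get('user') and data.get('password'):
                    creds.append({'unit': unit,
                                  'user': data['user'],
                                  'password': data['password'],
                                  'host': data.get('host')})
        return creds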
<kjackal> cory_fu: question on conjure-up. Is it possible for a partner to create a spell that would modify an already deployed k8s cluster?
<cory_fu> So, there's this: https://github.com/conjure-up/conjure-up/issues/624
<cory_fu> That was planned for 2.2 but I really doubt that's going to happen.
<cory_fu> But it's definitely in the plan.
<cory_fu> Though, I realize that maybe that's not exactly what you were asking about.  Are you talking about capturing a non-Juju deployed k8s cluster?
<cory_fu> kjackal: ^
<cory_fu> I guess it's actually targeted at 2.2.1 and not 2.2.0
<lazyPower> cory_fu: well its targeted at milestone: later
<lazyPower> is it later yet?
<cory_fu> Oh yeah, apparently I can't read.
<lazyPower> <3
<lazyPower> just poking at ya buddy :)
<cory_fu> :)
<kjackal> cory_fu: lazyPower: so if a partner just wants to "kubectl create -f <my_special_yaml>" he could write up a spell to do so?
<lazyPower> I believe that's possible today, you skip the yaml def and only declare the scripted spell componentry, right cory_fu?
<lazyPower> or am i making too many assumptions here, because there are a lot compacted into that statement
<cory_fu> lazyPower: I don't know if it's possible to skip the deployment portion completely, but you also can't choose an existing deployed model in the GUI yet, either.
<lazyPower> ah right thats only headless with flags
<cory_fu> That was going to be part of that issue
<cory_fu> Right
<lazyPower> well, for POC they could get started...
<lazyPower> but you're right the curses portion is going to be a trail of tears
<cory_fu> lazyPower: There's nothing technically challenging about that feature, it's just a matter of designing the user experience and the prioritization of it, though stokachu might correct me on that
<stokachu> yea thats pretty much it
<lazyPower> cory_fu: CORYYYYYY
<lazyPower> got a sec? :)
<cory_fu> omg
<cory_fu> What's up?
<lazyPower> cory_fu: can you join us in the group hangout?
<cory_fu> Yeah, one sec
<cory_fu> Erin's on the phone in the background, but I'll use headphones so hopefully it won't be an issue
#juju 2017-04-22
<jac_cplane> does anyone have an example of an openstack bundle where the novnc service is on an public interface (non-maas) interface
<jac_cplane> I'm using binding option, but I must be missing something
<Guest93120> Hello, I keep getting this error while trying to create a controller with the "juju bootstrap" command, ERROR error reading current controller: open /home/host/.local/share/juju/controllers.yaml: permission denied
<Guest93120> hello ?
<Guest93120> help?
#juju 2018-04-16
<jam> wallyworld: balloons: thumper: it seems Bionic bootstrap broke over the weekend. I have one bug that may have been there for a while, and one where it looks like they dropped an old package from bionic
<jam> It seems a different package provides what we wanted, but we just always installed it from the old place
<wallyworld> jam: do you have details?
<jam> wallyworld: bug #1764264 I have a patch up
<mup> Bug #1764264: bionic cloud-init 18.2 refuses Juju's 'runcmd' stanza <bionic> <juju> <cloud-init:New> <juju:Triaged> <juju 2.3:Triaged> <https://launchpad.net/bugs/1764264>
<jam> wallyworld: sorry, wrong bug
<jam> bug #1764267
<mup> Bug #1764267: python-software-properties not found on bionic <bionic> <bootstrap> <juju:In Progress by jameinel> <juju 2.3:In Progress by jameinel> <https://launchpad.net/bugs/1764267>
<jam> wallyworld: https://github.com/juju/juju/pull/8602
<wallyworld> looking
<jam> (the former is also an issue on bionic, I'm trying to sort out how big of a deal it is)
<wallyworld> jam: lgtm, a small change :-)
<jam> wallyworld: thanks.
<balloons> wallyworld, jam, so roll both changes into 2.3.6 or ?
<jam> balloons: what are you doing awake ? :)
<balloons> Good morning! ;)
<wallyworld> balloons: i still haven't had a chance to look at the potential dry-run regression
<wallyworld> no commits i can see since 2.3.5 touch that area, but i could be wrong
<balloons> I was thinking perhaps do the 2.4-beta1 first, since we have a queue to do
<balloons> Yea, the dry run regression is weird. We fixed that
<balloons> jam, will you roll back the juju version in a PR so we're 2.3.6?
<jam> balloons: yeah, I can roll it back.
<jam> balloons: though we have a number of actual bugs with real Bionic support I'm finding out, so maybe we don't want to block 2.3.6 on that
<jam> and instead we just say "not fully supported" yet, and wait for 2.3.7 or something
<balloons> Yea, the goal was to not break on bionic, not so much support it
<balloons> So 2.3.6 is already primed at that commit. Easy to finish
<jam> balloons: so basic support is just broken on bionic with https://launchpad.net/bugs/1764267
<mup> Bug #1764267: python-software-properties not found on bionic <bionic> <bootstrap> <juju:In Progress by jameinel> <juju 2.3:Fix Committed by jameinel> <https://launchpad.net/bugs/1764267>
<jam> balloons: at least, I see we try to install a package that just isn't there anymore
<balloons> Yea, that one is pretty straightforward
<jam> balloons: so, should we roll back to 2.3.6 and then release with the package fix, or just release and say bionic not supported in 2.3.6?
<jam> wallyworld: so I'm trying to merge 2.3 into develop, but 'cmd/juju/commands/resolved.go' was deleted in 2.4. it got moved to application/resolved.go
<balloons> jam, I think it's a hard call. If 2.4 is delayed too long, saying "upgrade for support" is harder
<jam> wallyworld: do you know how your NoRetry is supposed to be in 2.4? Did you land the fix directly there?
<balloons> But that was the original intent
<jam> balloons: well, I've always wanted us to support bionic in 2.3 and netplan support *is* there. Just I found about 4 other bionic support bugs while testing it today
<balloons> Yea everyone else wanted to stick with xenial, which 2.3.6 as is does do
<jam> wallyworld: it looks like your fix was to pass !c.NoRetry instead of c.NoRetry, which looks to already be done in 2.4
<jam> manadart: just to confirm, if I just resolve the lxd conflicts in favor of the 2.4 code, you're ok with that, right?
<manadart> jam: Yeah, I can interrogate and apply what is required.
<jam> manadart: container/lxd/lxd_test.go seems like makeManager is a pretty big ball of differences
<jam> in 2.4 it starts taking a string param, and you changed it to take a baseConfig() param
<jam> manadart: although, it looks like the name string was always ignored
<jam> in 2.4
<jam> ... weird
<manadart> jam: Ignored in 2.3 as well. I think my change should be OK after-the fact if you resolve in favour of 2.4.
<manadart> From my PR that should leave lxd.go and lxd_test.go untouched.
<manadart> Should still build/pass yes?
<jam> manadart: so I'm just doing "git co HEAD lxd.go lxd_test.go" so it forces it specifically to 2.4. merging lxd.go was trying to pull in "Remote" from the 'lxdclient' which wasn't being imported into container/lxd.go anymore so I'm just punting
<jam> manadart: export_test.go also needed to be reverted.
<manadart> jam: Ah, yes.
<jam> manadart: https://github.com/juju/juju/pulls/8606 is on its way to be merged. You could base your work off of that if you wanted, or you can wait for it to land
<jam> but the container/lxd stuff is just upstream/develop so you can probably work from there already
<manadart> jam: Thanks.
<jam> manadart: it has now landed
<jam> manadart: heads up (can you review) https://github.com/juju/juju/pull/8609
<manadart> jam: Yep.
<jam> manadart: its trying to make a much smaller patch vs upstream lxd, so we can more easily transition.
<jam> wpk's patch was rejected in favor of a different approach, so eventually we'll have to follow along. But presumably we can't do anything until we can update to master tip of lxd
<manadart> jam: Approved.
<jam> manadart: https://github.com/juju/replicaset/pull/6 and https://github.com/juju/replicaset/pull/7 if you could be so kind
<manadart> jam: OK. Opened https://github.com/juju/juju/pull/8610 over here too.
<manadart> jam: Approved both. 1 trivial comment.
<bdx> charm build failure alert https://paste.ubuntu.com/p/PfBYVHMFy8/
<bdx> anyone else failing to get the setuptools wheel?
<bdx> https://files.pythonhosted.org/packages/20/d7/04a0b689d3035143e2ff288f4b9ee4bf6ed80585cc121c90bfd85a1a8c2e/setuptools-39.0.1-py2.py3-none-any.whl
<bdx> seems to be working now
<rick_h_> bdx: so python community rolled over to the new pypi site today
<rick_h_> bdx: first redeploy/new software in 10yrs
<rick_h_> bdx: so there's going to be some rough spots in python community packaging access today heh
<bdx> rick_h_: ahhh good to know, thanks
<rick_h_> yea, I know a few things getting bit by the big upgrade today
<bdx> also hitting this in apt install charm package https://github.com/juju/charm/issues/240
<rick_h_> bdx: hah, yea that packaging is a new upgraded pip version
<bdx> I'll keep that in mind as I go about my way
<rick_h_> I wonder if that hit as well
<rick_h_> cory_fu: ^
<rick_h_> I wonder if the pip is the distro one, from pypi, or something else used there
<cory_fu> rick_h_, bdx: I hit that today as well when running some tests.  Doing a manual `sudo pip install --upgrade pip` to pick up 10.0.0 fixed it for me.  We might need to rebuild the charm snap
<bdx> cory_fu: cool, the charm snap doesn't exhibit ^
<cory_fu> bdx: If you got the error during charm build, then I think it's the version of pip inside the charm snap that's causing the issue
<bdx> ahh, but I only get it with apt installed snap
<bdx> geh
<cory_fu> bdx: Mainly because that's what the venv is seeded with
<bdx> * apt installed charm
<cory_fu> Hrm
<cory_fu> I see
<cory_fu> So yeah, apt package of charm is pretty outdated
<bdx> yeah, the snap has the pip wheel statically defined in there
<admcleod_> so i have a MAAS controller deployed, and ive also manually added an s390x machine to it (lets call that machine 0)
<admcleod_> in my bundle i have 2 charms, charm-x86 and charm-s390x. i use constraints: arch=s390x for the latter charm
<admcleod_> when i deploy the bundle, it attempts to request a machine of s390x arch from MAAS, rather than use the manually added machine
<admcleod_> is this expected behaviour?
<admcleod_> that behaviour is the same whether i use map-machine=existing or not
<admcleod_> however, if i just deploy the charm directly, it works.
<admcleod_> guess ill log a bug
<ejat> hi ...
<cory_fu> ejat: Can you repeat the OpenStack credential error you were getting when trying to use your OpenStack with Juju?
<ejat> Please ensure the credentials are correct. A common mistake is
<ejat> to specify the wrong tenant. Use the OpenStack "project" name
<ejat> for tenant-name in your model configuration.
<cory_fu> admcleod_: You might need to use the --map-machines option to juju deploy to get it to use the pre-created machine
<cory_fu> admcleod_: Otherwise, requesting a new machine for the bundle is the expected behavior
<admcleod_> cory_fu: yeah, it doesnt work with the bundle
<admcleod_> cory_fu: with or without map-machines
<cory_fu> admcleod_: Odd.  I would have expected --map-machines to do what you want
<admcleod_> cory_fu: same. bug on its way
<cory_fu> ejat: And, did you add the OpenStack credential using `juju add-credential`, `juju autoload-credentials`, or `conjure-up`?
<ejat> juju add-credential
<ejat> add cloud 1st then add-credential
<cory_fu> ejat: Hrm.  I don't have an OpenStack to test with, but the error message makes it sound like something was entered incorrectly.  Are you sure you chose the correct auth-type and typed everything in correctly?  You could try using `juju add-credential <cloud> --replace` and type it in again
<ejat> tenant-name == project name ?
<ejat> or using the project ID ?
<cory_fu> Yeah, the message said project name, so I'd try that if you used something different previously
<ejat> bugs 1543262
<mup> Bug #1543262: keystone V3 support needed <openstack-provider> <uosci> <Go OpenStack Exchange:Fix Committed by wallyworld> <juju:Fix Released by wallyworld> <juju 2.1:Fix Released by wallyworld> <https://launchpad.net/bugs/1543262>
<ejat> its almost the same
<ejat> but the bugs is fixed
<cory_fu> ejat: You could check the output of `juju show-cloud openstack` and `juju list-credentials --format=yaml openstack` and see if anything looks odd there
<hml> ejat:  if youâve sourced your nova-rc file, juju autoload-credentials sometimes works better for OpenStack
<admcleod_> ejat: if you are using keystone v3, you need to make sure your novarc is also keystone v3
<ejat> admcleod_: using novarc is fine
<admcleod_> ejat: what do you mean
<bdx> I *think* he means that juju doesn't look for the same environment variables that are set by the novarc for kv3
<bdx> using autoload-credentials
<ejat> thanks bdx
<bdx> I hit it this weekend too
<ejat> bdx: so how u counter it
<bdx> 1) download novarc from horizon, 2) source novarc on local machine, 3) add openstack cloud in juju (`juju add-cloud myopenstack`), 4) run `juju autoload-credentials` and select the openstack cloud
<bdx> ^ something like that I think
<bdx> ejat: I'll try to reproduce ^ in the next day or so and get a bug filed
<ejat> owh let me try
<ejat> bdx: result still the same
<admcleod_> ejat: can you pastebin the novarc without passwords etc?
<hml> bdx:  what is the difference between the env vars set by novarc for kv3 and the novarc contents downloaded from horizon?
<ejat> admcleod_: http://paste.ubuntu.com/p/BGXMWKq6Qr/
<admcleod_> ejat: ok, thanks, looks fine
<ejat> anyone can give advise?
<thumper> ejat: sure
<ejat> thumper: really much appreciate
<ejat> still can't auth with openstack :(
<thumper> personally I don't know much about openstack, but others here do
<thumper> however you probably need to give more information
<thumper> which openstack
<thumper> how are the creds defined
<thumper> what error are you getting
<thumper> etc
<ejat> im using openstack queens bundle with a little bit of customization to include heat + telemetry
<ejat> using MAAS
<ejat> added the openstack into the cloud list
<ejat> then juju add-credential
<ejat> then tried to bootstrap
<ejat> i got this :
<ejat> ERROR authentication failed.
<ejat> Please ensure the credentials are correct. A common mistake is
<ejat> to specify the wrong tenant. Use the OpenStack "project" name
<ejat> for tenant-name in your model configuration.
<thumper> ejat: are you able to use those same credentials without juju in the mix and have it work?
<ejat> thumper: for openstack cli works fine
<thumper> hml: any ideas around openstack creds and queens?
<hml> thumper: no..  i was looking at the nova rc file ejat put in a pastebin earlier
<hml> ejat: can you give me the results of 'juju credentials --format yaml <cloudname>'?
<hml> ejat:  also run with --show-secrets and verify your password
<ejat> hml: https://paste.ubuntu.com/p/cxrydwmtmQ/
<hml> ejat: it looks good - the only thing i see is that domain-name is also set, which mine doesn't have... should be okay
<hml> ejat:  can you get me the output of juju bootstrap --debug please?  (dash dash is auto correcting on me to an em dash)
<ejat> hml: https://paste.ubuntu.com/p/sGQgNDcf9x/
 * hml looking
<hml> egat: what do you get from 'wget http://172.15.1.102:5000/v3/auth/tokens'?
<hml> ejat ^^
<ejat> hml: https://paste.ubuntu.com/p/xQTGQXvTMF/
<hml> ejat: so it looks like there is something wrong with the credentials as the bootstrap output says...  let me try something on my box...
<hml> ejat:  it appears that juju doesn't like domain-name in the credentials... this is a bug.  please file one!
<hml> ejat:  i can also give you a work around until it's fixed
<hml> ejat:  juju credentials --format yaml --show-secrets <cloudname> > /tmp/creds.yaml
<ejat> https://bugs.launchpad.net/juju/+bug/1764550
<mup> Bug #1764550: can't authenticate juju credential with openstack queens <juju:New> <https://launchpad.net/bugs/1764550>
<hml> ejat:  edit /tmp/creds.yaml to remove 'local-' from the first line - and remove the domain-name line as well
<ejat> can u help to comment the work around on LP ?
<hml> ejat: then juju add-credential --replace <cloudname> -f /tmp/creds.yaml
<hml> ejat: sure
<ejat> hml: thanks a lot .. maybe bdx can refer it too
<ejat> i guess he also facing the same issue
<hml> ejat: how did you create the juju credentials?  that'd be helpful to know in the bug - as well as which novarc you were using?  from k3 or horizon
<bdx> ejat: I can auth to openstack just fine, mine was a totally different thing I was experiencing with autoload-credentials
<ejat> bdx: owh sorry
<bdx> ejat you can't just curl or wget the endpoint like you are in that bug
<ejat> misunderstood you
<bdx> you have to get the token using the openstack client etc etc
<bdx> it is meant to fail if you try to interact with it like you are there
<ejat> hml: from horizon
<hml> ejat: interesting.
<hml> ejat:  did you use autoload-credentials, or add-credentials?
<hml> ejat: iâm EOD for now.. will be online tomorrow though.  i tried the cause and work around myself on an openstack v3 - so hopefully youâre good to bootstrap!
<ejat> thanks a lot hml
<ejat> see u tomorrow
#juju 2018-04-17
<jam> manadart: you around?
<manadart> jam: I am.
<manadart> In Oracle HO.
<jam> omw
<jam> https://github.com/juju/replicaset/pull/8
<jam> manadart: ^^
<jam> manadart: https://github.com/juju/juju/pull/8618
<elmaciej> Hi
<elmaciej> I wrote a charm based on the docker layer - https://jujucharms.com/docs/1.24/authors-charm-with-docker
<elmaciej> and I have an issue - it's not installing anything - juju.worker.uniter.operation runhook.go:116 skipped "install" hook (missing)
<elmaciej> and it's copy paste from the article
<elmaciej> can somebody help me with this
<rick_h_> elmaciej: hmm, did you run charm build? normally that's because you edit the layer stuff but don't charm build or deploy the built charm (that generates the hooks)
<elmaciej> well I did charm build ./mycharm
<elmaciej> and then juju deploy ./mycharm --to kvm:0
<manadart> jam: https://github.com/juju/juju/pull/8619/files
<manadart> Going to scan the current known intermittent failures.
<jam> manadart: one change definitely needed, and a discussion of 2 or 3 things to think about
<elmaciej> rick_h_: this is my output : https://pastebin.com/0Bmgf0fM
<elmaciej> rick_h_: should I add more things ? I just want to deploy a docker, no relations, no interfaces ...
<manadart> jam: Implemented all of your suggestions; much nicer. Tests using that code pass repeatedly without the buffer, so it is gone.
<rick_h_> elmaciej: ok and you deployed "/mnt/c/Users/devep/PycharmProjects/charms/aiakafka/builds/aiakafka" so that the built charm, with the install hook, is what gets run?
<elmaciej> yes, I deployed this one using : juju deploy ./aiakafka
<elmaciej> and it deploys but nothing is happening inside this charm
<elmaciej> so thats weird
<rick_h_> elmaciej: right, but it needs to be juju deploy ./builds/aiakafka
<elmaciej> ohhhhh'
<rick_h_> assuming your terminal is at the directory you're writing the charm from
<elmaciej> yep
<elmaciej> ok, will try now
<rick_h_> <3
<ejat> ello hml
<hml> ejat: hi
<ejat> your work around works ..
<hml> ejat: good news, i tried it locallly, but confirmation is nice
<ejat> but ....
<ejat> later i got this : found 0 image metadata in default cloud images
<ejat> do i need to configure the simplestream ?
<hml> ejat: yes you do
<ejat> its a must ?
<ejat> then bootstrap with --config swift_URL ?
<hml> ejat: for OpenStack yes - juju pulls the images from glance and needs to know how to find the ones you want to use
<ejat> i did the simplestream
<ejat> but its doesnt work for me :(
<hml> ejat: just a sec
<hml> ejat: did you follow the instructions here: https://jujucharms.com/docs/stable/howto-privatecloud
<hml> ejat:  --config image-metadata-url=$SWIFT_URL
<hml> and you need a network defined as well for bootstrap
<hml> ejat: usually
<ejat> https://paste.ubuntu.com/p/9DdRCcmVDR/
<hml> ejat: try âwget http://172.15.1.99/swift/v1/simplestreams/images/streams/v1/index.json'
<ejat> im having problem to add this endpoint
<ejat> openstack endpoint create --region $REGION --publicurl $SWIFT_URL/simplestreams/images \
<ejat>    --internalurl $SWIFT_URL/simplestreams/images product-streams
<hml> ejat: do you have admin permissions for the project in openstack?
<ejat> yes
<hml> ejat: but you canât create an endpoint yes?
<ejat> parameter error
<hml> ejat: which parameter is causing the problem?
<ejat> --region <region-id>
<ejat> https://paste.ubuntu.com/p/G8GBXxpX4G/
<ejat> and some of the parameter are differ from the documentation
<ejat> there is no --internalurl
<ejat> --publicurl
<hml> ejat: which version of the openstack client are you using?
<ejat> 3.14.0-0ubuntu1
<hml> ejat: while i figure this out, perhaps the client options have changed... you can specify a local path to the streams you created
<hml> ejat: use --metadata-source
<ejat> --metadata-source=~/simplestream ?
<ejat> using local works
<ejat> thanks
<hml> ejat: k
<hml> ejat: which version of OpenStack are you using?
<ejat> hml: queens
<hml> ejat: i see the problem in the doc - the instructions are for identity v2, not identity v3
<ejat> hml: i suspected that
<hml> ejat: https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/endpoint.html#endpoint-create
<ejat> i guess the everything need to be change toward v3
<ejat> btw, how to bootstrap the controller with public ip ?
<ejat> include the config network ?
<hml> ejat: juju show-cloud --include-config <openstack-cloud-name>
<hml> will show the openstack config options for bootstrap and models
<hml> youâll need to define a network (private) and âuse-floating-ip
<hml> âexternal-network if you wish
<ejat> ok yeah
<ejat> --use-floating-ip
<hml> ejat: can you test something for me please... to update the docs: see if 'openstack endpoint create --region RegionOne product-streams public http://172.15.1.99/swift/v1/simplestreams/images' works
<ejat> yeah
<hml> ejat: ty
<ejat> https://paste.ubuntu.com/p/gJMrqQwMSp/
<hml> youâll need to do run again replacing public with internal - that should setup the endpoint for you
 * ejat wondering is it only me using queens now :) 
<ejat> both work public n internal
<ejat> how to use the --use-floating-ip ?
<hml> ejat:  see if the wget line works now please?
<ejat> image-metadata-url works
<ejat> after create public n internal endpoint
<hml> ejat: perfect
<ejat> how to use the --use-floating-ip ?
<ejat> --config image-metadata-url=xxxxx use-floating-ip
<hml> ejat: âconfig use-floating-ip=true --config image-metadata-url=xxxxx
<ejat> owh
<hml> ejat: once you have nailed down what config items to bootstrap with - you can add them to the cloud config
<ejat> ok
<hml> ejat: since you setup the product-streams endpoint - juju will find it automagically - so you can leave that off the bootstrap if you wish
<ejat> noted
<hml> ejat: i bootstrap openstack with the following config: https://paste.ubuntu.com/p/FYPSRvfPJM/  - so i just have to do 'juju bootstrap serverstack-os tuesday' for example
<ejat> 00:02:30 DEBUG juju.provider.common bootstrap.go:564 connection attempt for 172.16.5.206 failed: ssh: connect to host 172.16.5.206 port 22: Connection timed out
<ejat> okie my bad
<ejat> its my network access
<hml> ejat: that happens a couple times
<hml> okay..
<hml> you do see it a few times sometimes while the instance is coming up.. unless juju fails you can ignore it usually
<ejat> ok thanks
<hml> ejat: iâm creating a pr to update the doc - thanks for your help there
<ejat> u r most welcome .. thanks to u too :)
<ejat> btw, will you be at vancouver next month ?
<hml> ejat: I will not, Iâm not on the openstack team :-)  I work on juju itself
<ejat> owh ..
<ejat> it's been quite a few years since i stopped writing my 1st charm
 * ejat now more being a user :) 
 * ejat assist @ contribute as much as i can ... 
<hml> :-)
<ejat> asked question , file bugs
 * ejat missed UDS :) 
<ejat> thats where i met all the juju guru
<ejat> guru @ ninja
<hml> ejat: there should be some at vancouver
<ejat> yeah ..
<Guest57485> hi
<acNEt> Hello just started working with JUJU for an openstack install can someone please tell me what the correct filesystem layout should be for the dashboard.
<bdx> acNEt: what do you mean?
<bdx> acNEt: `juju deploy openstack-dashboard`
<bdx> acNEt: ^ try that and see what you get by default
<acNEt> I deployed the dashboard but I keep getting permissions issues.  Is it supposed to be /var/lib/openstack-dashboard or /usr/share/openstack-dashboard?
<bdx> acNEt: context?
<bdx> to what you are trying to do
#juju 2018-04-18
<jam> babbageclunk: a thought. Another possibility is to have leader epochs. So when unit A becomes leader it gets 10, and then when B supersedes it, it gets 11, etc. And then in the DB we store what epoc we think of the leader. Sort of equiv to the raft log entry, but less specifically tied to raft
<babbageclunk> jam: yeah - that makes sense - I'll use that.
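A toy sketch of the epoch idea (nothing to do with Juju's actual implementation): a write only applies if it carries the epoch the store currently attributes to the leader, so a superseded leader's writes are rejected.

    class EpochStore:
        def __init__(self):
            self.leader_epoch = 0
            self.data = {}

        def promote(self):
            # a new leader gets the next epoch (unit A -> 10, unit B -> 11, ...)
            self.leader_epoch += 1
            return self.leader_epoch

        def write(self, epoch, key, value):
            # reject writes from a stale leader
            if epoch != self.leader_epoch:
                raise RuntimeError('stale leader epoch %d, current is %d'
                                   % (epoch, self.leader_epoch))
            self.data[key] = value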
<jam> manadart: I don't see an update on 2/7, shall we punt again today?
<manadart> jam: Fine with me.
<srihas> hi, can we install juju controller on the same MAAS node which hosts the Rack controller+region controller ?
<zeestrat> srihas: Yes, if you setup some KVM VM's on the region/controller and add it to MAAS as nodes (see https://docs.maas.io/2.3/en/nodes-add#kvm-guest-nodes). Then you can use those VM's when you bootstrap your juju controller using constraints
<srihas> zeestrat: I have MAAS controller running on a Physical node, in UCS cluster
<zeestrat> srihas: Right. I'm not so familiar with UCS, but if you install KVM on that host and create some VM's on there according to the link above then you get some MAAS nodes for your juju controllers.
<srihas> I have a MAAS region+rack controller already, won't it work?
<zeestrat> srihas: Not out of the box unfortunately. Juju needs a ready MAAS node to bootstrap a controller on. In order to do that you can use physical nodes or some VM's. Easiest is probably to colocate KVM VM's on the MAAS controller or use VMWare VM's if you have that infrastructure.
<srihas> zeestrat: ok, what do you mean by a "ready MAAS node" ?
<zeestrat> srihas: Just a machine/node that is managed by MAAS and commissioned and ready to go: https://maas.io/how-it-works
<srihas> zeestrat: ok, thank you :) I have deployed ubuntu on all commissioned nodes
<srihas> so the process is like step1: add nodes; step2: commission; step3: Juju controller; and then deploy ubuntu? or its fine eventhough I have the deployed nodes to provision OpenStack ?
<zeestrat> srihas: Aha, I see. You can skip the last step deploy ubuntu unless you really want blank ubuntu machines. If you want to use them with juju you'll need to leave them as ready.
<zeestrat> Is the end goal to deploy OpenStack with Juju on MAAS?
<srihas> zeestrat: yes thats the goal. aha, only ready :(
<zeestrat> Have you checked out https://docs.openstack.org/charm-deployment-guide/latest/?
<srihas> zeestrat: yeah, I am at this https://docs.openstack.org/charm-deployment-guide/latest/install-juju.html#testing-the-environment step and that's where the doubts arose
<zeestrat> srihas: Cool. Do you end up with a functioning juju controller?
<srihas> zeestrat: I think no, because I haven't bootstrapped yet
<zeestrat> srihas: Ok. So as the guide mentions with `juju bootstrap --constraints tags=juju maas maas-controller` it expects a MAAS node tagged with `juju`. If you add the `juju` tag to a machine in MAAS then you can try to bootstrap to that machine.
<srihas> zeestrat: and I cannot use that machine in future for swift for example ?
<zeestrat> Nope, that machine will be occupied by Juju then. That's why it's recommended to create some VM's on the MAAS controller so you don't lose more physical nodes.
<zeestrat> Then you can add the `juju` tag to those VM's in MAAS so it uses those instead of physical nodes.
<srihas> zeestrat: nice, thank you :) what's the purpose of this juju node ?
<srihas> to act as controller for juju ops?
<zeestrat> srihas: No problemo. Yes, the bootstrapped juju controller is the master controller which controls juju deployments
<zeestrat> So your juju client talks to the juju controller which again talks to deployments
<srihas> nice :)
<jam> manadart: since you've been in the area, I have a RFC pr: https://github.com/juju/juju/pulls/8624
<jam> I'd appreciate your feedback if you feel it is clearer/cleaner. Or whether you prefer the old styel.
<manadart> jam: Looking.
<jam> manadart: thoughts/comments ?
<manadart> jam: I like it. Just adding some comments.
<thim> I have a problem using bindings with the openstack-charms. For example if I try to deploy keystone I get an error "machine-0: 12:39:55 WARNING juju.provisioner failed to start machine 0/lxd/0 (unable to setup network: no obvious space for container "0/lxd/0", host machine has spaces: "admin-space", "default", "internal-space"), retrying in 10s (10 more attempts)".  Using MAAS 2.3.0 and Juju 2.3.5. Anyone else having these problems?
<thim> part of yaml: https://pastebin.com/54isF2sJ
<manadart> jam: Done.
<manadart> jam: BTW, wouldn't mind a 2 minute chat if youv'e the time.
<rmcd> Hello all, just a couple of questions regarding Apache Drill/Zookeeper. 1) How do I check the IP address of nodes with Amulet? and 2) How do I look up files on a node with Amulet? magicaltrout reckons there should be a way to do it with the deployment object... Thanks for the help!
<manadart> thim: I believe you need to bind to the correct space. Something like: juju deploy <charm> --bind <space> --to lxd:0
<thim> Yeah, done that - `juju deploy --bind "public=default admin=admin-space internal=internal-space shared-db=internal-space" cs:keystone -n 1 --to lxd:0`
<thim> Same problem as using the yaml-file.
<jam> manadart: I can join a HO if you like
 * manadart heads to standup.
<jam> thim: if you're deploying from a bundle to a substrate that has multiple spaces, then you need to edit the bundle to map your spaces for the various applications
<jam> thim: easiest is to set all bindings for an application to a given space with a stanza:
<jam> bindings:
<jam>  "": internal-space
<jam> but if you're doing openstack and you want internal vs admin vs etc, then you probably need to set multiple values
<thim> @jam I've done that see - https://pastebin.com/54isF2sJ
<thim> jam: also tried without the "": default
<jam> thim: are you putting keystone into a container?
<thim> jam: yeah, but I tried a couple of other charms as well - the error is still the same.
<zeestrat> thim: Just to confirm, have you setup those spaces in maas and are they available for the nodes you're deploying on?
<magicaltrout> kjackal: et al, got any pattern for checking files on a unit within amulet for my minions rmcd
<kjackal> magicaltrout: what do you mean by checking?
<magicaltrout> ah
<magicaltrout> amulet sentry directory_listing
<magicaltrout> or soemthing
<magicaltrout> I want to check a config file is actually set correctly
<magicaltrout> file_contents
<magicaltrout> I believe
<kjackal> I cannot think of anything fancy other than getting the file and looking at its content
<magicaltrout> https://github.com/juju/amulet/blob/master/amulet/sentry.py#L153 that?
<thim> zeestrat: Correct - https://pastebin.com/GtFzCLfL
<thim> zeestrat: `juju spaces` is populated from MAAS
<kjackal> sure magicaltrout, looks ok. Although you would better start using libjuju ;)
<magicaltrout> don't make me sad kjackal
<magicaltrout> whats the deal with testing these days
<magicaltrout> amulet? or libjuju? or a munge of both?
<kjackal> https://github.com/juju/python-libjuju  forget about amulet
<magicaltrout> ya'll make me so sad
<kjackal> https://github.com/juju/python-libjuju/blob/37a7500245ae61fc2c103d1728a65b99485a9ef4/tests/integration/test_machine.py#L67
<magicaltrout> thanks kjackal
<magicaltrout> we're all very sad
<rmcd> can confirm ^
<jam> thim: so the values you have there look correct to me in passing, with the caveat that you probably need to do it for all applications. also, what specific version of Juju are you running. IIRC we fixed a particular bug where space constraints weren't being set early enough for containers
<jam> so if the host machine launches fast enough and you're deploying a bundle, the container could try to start before the spaces were set.
<jam> I *think* that is fixed in 2.3.5 but I'd have to look through changelogs to know for sure
<jam> in the very short term, you can add a "constraints: spaces=X,Y,Z" to get around it.
<thim> Yeah, I'm using 2.3.5 with MAAS 2.3.0. I'm currently just deploying Keystone because of time saving when destroying the machines. Already using constraints as well. `constraints: "tags=controller spaces=admin-space,internal-space,public-space"`
<thim> *public-space = default
<thim> just pasted from the wrong file.
<kjackal> magicaltrout: rmcd: how can we cheer you up?
<thim> jam: are you talking about constraints on the charm or the machine btw?
<TheAbsentOne> kjackal: Is it actually possible with python-libjuju to deploy charms from within a charm in the same model? :/ Haven't used it so far
<kjackal> TheAbsentOne: that charm should be able to talk to the controller, right?
<kjackal> Havent tried it either
<zeestrat> TheAbsentOne: Yeah, but it's a bit clunky. You'll need to create a juju service user in the model then talk to that. I asked about that on the last juju show and it's something that the team would like to allow for inside the charms in the future
<TheAbsentOne> kjackal: euhm yes, I suppose. I'm on a heavy theorycrafting trip and I kinda want a charm to decide what service should be deployed. I'll give an example. Imagine a charm that represents a mysql OR a postgres service. The charm then should deploy one of the 2 services itself all automatically.
<TheAbsentOne> Ahn so it's a kinda a request zeestrat ?
<TheAbsentOne> So as a summary: I want to create a charm that listens on an incoming relation, takes the requirements, chooses a good service and deploys the chosen service all in one kjackal zeestrat. As I see it now, it seems pretty impossible with juju
<zeestrat> TheAbsentOne: Yeah. You can check out the latest episode for a bit more eloquent details. I asked about better interfacing charms and juju in order to scale and deploy things and it's on their list.
<TheAbsentOne> Good to know!
<kjackal> TheAbsentOne: I know of one charm that talks to the juju controller: https://jujucharms.com/charmscaler/2
<zeestrat> TheAbsentOne: Regarding your summary, if you use an juju service account then you should be able to talk to juju with libjuju just fine and use any logic you want inside your charm. It's a bit greenfield at the moment but not impossible.
<zeestrat> TheAbsentOne: Yeah, the charm above uses that approach (although a bit old).
<TheAbsentOne> Thanks for the insight, big help guys!
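A rough sketch of that approach with python-libjuju, assuming the juju service account's endpoint, model uuid and credentials were handed to the charm via charm config; the config keys and connect arguments here are assumptions, not a published interface.

    import asyncio
    from juju.model import Model

    async def deploy_backend(cfg, charm='cs:mysql'):
        model = Model()
        # connect as the dedicated juju user that was granted access to this model
        await model.connect(endpoint=cfg['juju-endpoint'],
                            uuid=cfg['model-uuid'],
                            username=cfg['juju-username'],
                            password=cfg['juju-password'],
                            cacert=cfg.get('juju-cacert'))
        try:
            # the charm's own logic decides which service to deploy (mysql vs postgresql)
            await model.deploy(charm)
        finally:
            await model.disconnect()

    # e.g. asyncio.get_event_loop().run_until_complete(deploy_backend(my_config))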
<jam> thim: so bug #1737565 was where we ran into this in the past, and that says it was fixed in 2.3.2 so it should be in 2.3.5
<mup> Bug #1737565: no obvious space for container <cdo-qa> <cdo-qa-blocker> <foundations-engine> <juju:Fix Released by wpk> <https://launchpad.net/bugs/1737565>
<jam> a proper "bindings" section should be sufficient, so if it isn't working, we'd like to get logs of the controller preferably with: juju model-config logging-config="<root>=DEBUG;juju.rpc=INFO;juju.apiserver=INFO"
<thim> jam: Yeah, I previously ran 2.3.2 and didn't work there either - so I upgraded to 2.3.5.
<TheAbsentOne> Just one small question zeestrat what exactly do you mean with a "juju service account". Could you point me to some documentation?
<rick_h_> TheAbsentOne: so the model credential work that's follow up to the current trust (cloud credentials access) isn't there atm
<rick_h_> TheAbsentOne the way it's been done so far is to create a juju account with model access and set via model config
<rick_h_> err, not model config, but the charm config
<rick_h_> TheAbsentOne: so that the charm can behave as a juju user with access to the model to make changes over the API (libjuju) like deploy things
<TheAbsentOne> I see, thanks for that clarificiation rick_h_ !
<rick_h_> np, morning party people
<thim> jam: Hmm.. I think i found the problem. It's the Juju GUI that probably parses the yaml file incorrectly.
<thim> `juju add-machine -n 1 --constraints 0=t 1=a 2=g 3=s 4== 5=c 6=o 7=n 8=t 9=r 10=o 11=l 12=l 13=e 14=r 15= 16=s 17=p 18=a 19=c 20=e 21=s 22== 23=a 24=d 25=m 26=i 27=n 28=- 29=s 30=p 31=a 32=c 33=e 34=, 35=i 36=n 37=t 38=e 39=r 40=n 41=a 42=l 43=- 44=s 45=p 46=a 47=c 48=e 49=, 50=d 51=e 52=f 53=a 54=u 55=l 56=t --series xenial`
<rick_h_> TheAbsentOne: what's this?
<TheAbsentOne> I think you wanted to tag thim rick_h_ ? Or am I missing something? x)
<thim> jam: When using juju cli it seem to work, but the state on the LXD is pending, but container is started and I can ssh to the machine.
<rick_h_> TheAbsentOne: oh sorry, tab complete fail
<TheAbsentOne> hehe no problemo
<bdx> thim: if you haven't yet noticed, you were giving incorrect space names above
<bdx> that's why your space bindings were failing
<bdx> "host machine has spaces: "admin-space", "default", "internal-space""
<bdx> or, excuse me, the way you were specifying them looks incorrect
<bdx> `juju deploy <charm> --constraints "spaces=admin-space,internal-space,public-space"`
<bdx> should be
<magicaltrout> libjuju why does unit object only have a public address?
<magicaltrout> am I missing something obvious?
<bdx> thim: `juju deploy keystone --constraints "spaces=admin-space,internal-space,public-space" --bind default,public=default,admin=admin-space,internal=internal-space,shared-db=internal-space`
<TheAbsentOne> magicaltrout: to be fair it feels I'm missing sanity and knowledge, when it comes to understanding libjuju x) I too was wondering before what unit really represented
<bdx> TheAbsentOne: https://jujucharms.com/docs/2.3/charms-working-with-units
<bdx> it just represents a unit of an applications
<bdx> application
<TheAbsentOne> yeah bdx exactly but it was as a remark on what magicaltrout said. A unit in libjuju doesn't seem to know anything about the application
<bdx> I see, is there an "application" object?
<bdx> TheAbsentOne: possibly what you are looking for is https://github.com/juju/python-libjuju/blob/master/juju/application.py#L26
<admcleod_> hrm, anyone have any ideas why unit-get private-address is returning a hostname instead of an ip?
<admcleod_> cory_fu: ^ ? :)
<cory_fu> admcleod_: Unfortunately, it depends on the cloud.  There are some new fields in there that might be of help, but I'm not sure if any are guaranteed to be an IP
<admcleod_> cory_fu: curious, cos its manual provider and im sure it was working until recently. new fields such as?
<cory_fu> admcleod_: Fields related to the networking primitives: https://jujucharms.com/docs/stable/developer-network-primitives
<TheAbsentOne> ahn thx bdx that might help
<cory_fu> admcleod_: But like I said, I don't know if any of those will end up being helpful, but I think you are supposed to use them instead of private-address
<admcleod_> ah ok
<admcleod_> thanks
<admcleod_> cory_fu: erm. do you know how i can list the bindings?
<cory_fu> admcleod_: It's not integrated into reactive yet, unfortunately, but there are charmhelpers for it.  I have to admit, though, I haven't really used it.  I know stub understands it, but I'm sure he's asleep now.
<admcleod_> ...
<admcleod_> good lord
<cory_fu> admcleod_: I *think* those fields might also be present in the relation data.  It's worth doing a raw dump of the relation data
<admcleod_> all i want to do is get the private address of the machine. really shouldnt be that hard should it  :}
<cory_fu> admcleod_: Try seeing if there are other auto-keys in the relation data
<admcleod_> what if i only have one unit of one app deployed?
<TheAbsentOne> cory_fu: not trying to meddle in the conversation but now that you are talking about it. What is the best way to dump all relation data actually, according to you?
<cory_fu> TheAbsentOne: One-off / by hand for debugging?  I usually use: juju run --unit <unit/n> -- relation-get -r <rel:id> - <remote_or_local_unit/n>
<cory_fu> TheAbsentOne: You can get the <rel:id> with: juju run --unit <unit/n> -- relation-list <rel_name>
<cory_fu> TheAbsentOne: You could also always use debug-hooks and put a breakpoint in your charm code.
<TheAbsentOne> Right cool, and could you access a list or something inside hooks or is a breakpoint the way to go
<TheAbsentOne> ohn you answered already, thx cory_fu
<admcleod_> cory_fu: ill go back to trying network-get (ill let you know what happens)
<cory_fu> admcleod_: Thanks.
<jam> balloons: bug #1765096 looks like our "updated" jujud-versions.yaml holds the wrong data
<mup> Bug #1765096: 2.3.6 bootstrap fails <juju:New> <https://launchpad.net/bugs/1765096>
<balloons> yea, I was just noticing
<jam> balloons: the code seems to expect a version.Binary which includes series and architecture, but it only contains the version #
<balloons> yea, I'm certain I broke it in the new snapcraft build; not the one that was pending
<balloons> I'll push the right revision, trying to find it now
<balloons> jam, ohh, nvm. hmm. it's just reading version=$(jujud version)
<balloons> so that's odd
<balloons> jam, yea it was the experimental snapcraft changes in the end, but I'm no longer sure why :-)
<balloons> ahh, diff reveals all
<admcleod_> cory_fu: i cant list the spaces, cos its maas. so i cant specify a binding...
<jam> balloons: manadart: thumper: for whoever would like "juju remove-machine 0" now works on controllers for Mongo: https://github.com/juju/juju/pull/8625
<jam> with caveats, but step along the path
<admcleod_> cory_fu: sorry that shouldve been 'not maas'
<cory_fu> admcleod_: My understanding was that the ingress-address should fall back to private-address when spaces aren't being used.
<cory_fu> admcleod_: Also, I don't think maas / not maas is relevant, I think it depends on whether your charm defines network spaces.
<cory_fu> admcleod_: But the ingress-address stuff is also relevant for cross-model relations, irrespective of network spaces.  wallyworld is also on the other hemisphere, though, unfortunately
<admcleod_> cory_fu: well if i cant list spaces, i cant specify a binding, and i am told 'spaces not supported' when trying to list them with manual provider. i dont think i should have to change the charm to get the private ip address.
<cory_fu> admcleod_: Did you check the relation data?
<admcleod_> nope
<admcleod_> cory_fu: if thers 1 unit and no relations, will that still work?
<cory_fu> admcleod_: Oh, I was confused.  I thought you were trying to get the private address for a relation, but you want the local unit.
<admcleod_> right, thats why unit-get originally
<cory_fu> admcleod_: So, if you really need an IP, you have to do something like this: https://github.com/juju-solutions/jujubigdata/blob/master/jujubigdata/utils.py#L427-L441
<cory_fu> admcleod_: But if you recall, that was a source of errors for us with the big data charms
<admcleod_> cory_fu: lol
<admcleod_> cory_fu: i thought i remembered something like that
<cory_fu> Yeah
<admcleod_> cory_fu: fwiw... wtf.
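The jujubigdata helper linked above essentially resolves private-address to an IP when the provider hands back a hostname; a rough, hedged approximation (not the actual jujubigdata code) could look like this:

    # Sketch only: turn `unit-get private-address` into an IPv4 address,
    # falling back to DNS resolution when the provider returns a hostname.
    import socket
    from charmhelpers.core import hookenv

    def resolve_private_address():
        addr = hookenv.unit_get('private-address')
        try:
            socket.inet_aton(addr)          # already an IPv4 literal
            return addr
        except OSError:
            return socket.gethostbyname(addr)   # may still fail, as noted above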
<admcleod_> cory_fu: meanwhile. i seem to be having a pip3 issue with charm build. 'no such option: -d'
<cory_fu> admcleod_: Are you using the snap?
<admcleod_> cory_fu: yeah - i just refreshed it, then i refreshed with --beta
<admcleod_> cory_fu: i would paste the error but my laptop is going crazy
<admcleod_> cory_fu: oh of course i also have the apt version installed.
<cory_fu> admcleod_: Ah yeah, I can only see that error coming from the apt version, since the snap is strictly confined and bundles its deps
<admcleod_> been a while since i used this
<wallyworld> cory_fu: admcleod_ : ingress-address will just be "private address" normally. with cmr, it may instead be set to "public address". network-get now talks generically in terms of ingress-address, bind-address, egress-subnets instead of  private/public address which have little semantic meaning
<wallyworld> cory_fu: did you see the mysql interface bug? it seems relation-broken hook is, well, broken
<cory_fu> wallyworld: Yeah, the -broken hook handling with RelationBase-based interface layers is known to be broken in general.  The easiest fix would be to update that interface layer to use Endpoints
<wallyworld> cory_fu: is that on the todo list for anyone? i'd not be very productive trying to do it at my end
<cory_fu> wallyworld: Not really, but it should be a fairly easy conversion.
<cory_fu> wallyworld: With network-get, does it only work if a network space has been created?  How do you know what bind name to use?
<wallyworld> cory_fu: network-get works with or without. from memory there's a --binding arg
<cory_fu> wallyworld: Not according to https://jujucharms.com/docs/2.3/developer-network-primitives#network-get and admcleod_'s comment from earlier (he's EOD now)
<cory_fu> wallyworld: The charmhelpers wrapper also requires the binding name
<wallyworld> cory_fu: just looked at code, network-get takes a positional arg for bindingname
<cory_fu> wallyworld: So, I'm unclear on what one would pass to that?
<wallyworld> cory_fu: i think you pass the endpoint name for which you want the address info
<wallyworld> https://pastebin.ubuntu.com/p/YPTTYzFMbM/
<wallyworld> cory_fu: if there's questions, best to send an email to #juju-dev since john will be able to give a much better answer. i didn't write the original code (just added the new args) so am not totally familiar with the finer detail
<cory_fu> wallyworld: I guess the "what address I should listen on" really depends on who is going to be connecting to you, i.e. a relation (at least, an endpoint), and that's the bit that's confusing for me
<cory_fu> But I think that calling it a "binding name" is confusing if it's in fact the endpoint name
<wallyworld> cory_fu: right. juju knows if you are in a relation context or not, so it will vary what it gives to be the thing that is needed to listen on
<cory_fu> wallyworld: About the mysql interface; are you hitting that?
<wallyworld> cory_fu: generally, what you bind to will be independent of relation. what you advertise to the other side to send to (ingress address), will be relation dependent
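As a rough illustration of the above (the endpoint name 'website' is made up, and the exact document keys should be checked against the network primitives doc linked earlier), a charm can read the same information through the charmhelpers wrapper around network-get:

    # Sketch: pull ingress information for an endpoint via network-get.
    from charmhelpers.core import hookenv

    def ingress_for(endpoint='website'):
        info = hookenv.network_get(endpoint)     # wraps `network-get <endpoint>`
        # Keys below are assumptions; inspect the real output on your Juju version.
        addresses = info.get('ingress-addresses', [])
        return addresses[0] if addresses else None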
<cory_fu> I've just been called to dinner
<wallyworld> cory_fu: i am not sure why it's called binding instead of endpoint, i didn't write the code :-(
<wallyworld> cory_fu: yeah, am hitting the bug
<wallyworld> in the interface layer
<wallyworld> in my caas charm
<cory_fu> wallyworld: I'll take a look at the mysql interface tomorrow
<wallyworld> yay, ty
<wallyworld> enjoy dinner
<cory_fu> o/
#juju 2018-04-19
<aperez900907> Hello, any work in progress to integrate the new lxd cluster features in the juju orchestration system?  for example deploy charms to the cluster lxd 3.0 with abstraction on the host target and autoscale?  thanks
<thumper> aperez900907: yes work is in progress
<thumper> not autoscale though
<aperez900907> cool, any detail on whether it will be supported on the xenial release or 18.04?
<thumper> there will be a two phase support...
<thumper> phase one, which we are hoping to get into 2.4 of Juju, is using existing lxd provider, but if you are on a machine that is part of the cluster, it just uses the cluster
<thumper> phase two is supporting remote clusters and dealing with machines as "zones"
<aperez900907> cool
<aperez900907> any chance to support autoscale in the future given limits metrics?
<aperez900907> will xenial be supported in the new phases??
<thumper> yes xenial will be supported
<thumper> unlikely that juju itself will autoscale, but that doesn't mean someone won't write something on top
<kelvinliu_> veebers:  is our citest running on py3 or py2?
<veebers> kelvinli_: py2, py3 should pretty much be supported but I don't think that's been verified yet
<kelvinliu_> i saw py3 in makefile for testing jujupy, but seems assess*.py are all py2?
<veebers> kelvinli_: aye, all assess are py2, there is some new stuff that is py3, but I can't recall off the top of my head.
<veebers> kelvinli_: from memory jujupy/* is all py compatible. There was a test that was py3 that required that, but at this stage the assess scripts are assumed to be py2
<veebers> that being said I don't believe that would be a big job to get it over the line to all be py3 compat (not that that helps you right now :-))
<kelvinliu_> veebers: acceptancetests/jujupy/workloads.py breaks on py3.
<kelvinliu_> veebers: ic, thx. I will test it more to see how it goes.
<veebers> kelvinli_ d'oh, that'll be my fault. That's a recent addition (well, code move around) that I didn't make py3 compat
<kelvinliu_> veebers: i found an import error at `wait_condition.py`
<veebers> kelvinli_: as in it's broken regardless of py2/py3?
<kelvinliu_> veebers: no, it's an incorrect import path
<veebers> kelvinli_: oh, that's not good. I imagine that's my fault too with a recent change :-\
<kelvinliu_> veebers: would you mind to have quick hangout?
<veebers> kelvinli_: sure thing, lets use the standup one
<kelvinliu_> yup cool
<stub> admcleod_: That would be https://bugs.launchpad.net/juju-core/+bug/1557769 and https://github.com/juju/docs/issues/1409 , which will never be fixed with the old API. The new Juju 2 network stuff will always give you IP addresses (network-get, egress and ingress addresses on relation data, etc.)
<mup> Bug #1557769: private-address returns name, not ip <canonical-is> <manual-provider> <network> <Charm Helpers:In Progress by stub> <juju:Fix Released> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <PostgreSQL Charm:Fix Released by stub> <Telegraf Charm:Triaged> <cassandra (Juju Charms
<mup> Collection):Fix Released by stub> <https://launchpad.net/bugs/1557769>
<stub> Well, it could be fixed in charm helpers (FSVO fixed) if someone approved that old MP and landed it in the git repo. But all charms need to be updated with newer networking anyway, to support cross model relations.
<stub> I think the only valid use of the private-address now is in a peer relation, and that is probably laziness on my part.
<stub> cory_fu, wallyworld : I found pulling the necessary addresses from the relation easier in practice than using network-get, especially converting an old RelationBase implementation to an Endpoint (which was really easy, and much nicer)
<wallyworld> stub: fair enough. ostensibly, network-get info should be the same as what's put into the relation data bag
<stub> (which is charmhelpers.core.hookenv.ingress_address(...) and charmhelpers.core.hookenv.egress_subnets(...) )
<stub> Yeah, just when writing charms relation-get gives you what you need, while network-get requires a bit more digging to get the bits you need out of the document.
<wallyworld> stub: that i agree with from what little i've been directly exposed to in that area. i guess network-get is sometimes needed outside of a relation context
<stub> yeah, I'll need it when updating Cassandra (what IPs it binds to is quite complex for customer related reasons, but will be much much better with network spaces and network-get)
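A small, hedged sketch of stub's approach, using the two charmhelpers wrappers he names; it assumes it runs inside a relation hook, and the fallback behaviour described in the comments should be verified against the charmhelpers source:

    # Sketch: read the remote unit's addressing info from the relation data.
    from charmhelpers.core import hookenv

    def remote_network_info():
        rid = hookenv.relation_id()      # current relation context
        unit = hookenv.remote_unit()
        return {
            # falls back to private-address when ingress-address is absent
            'ingress': hookenv.ingress_address(rid=rid, unit=unit),
            # falls back to address-derived subnets when egress-subnets is absent
            'egress': hookenv.egress_subnets(rid=rid, unit=unit),
        }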
<jam> manadart: any chance you could have a look at https://github.com/juju/juju/pull/8625 ?
<parlos> Good Morning
<jam> morning
<manadart> jam: Looking now.
<jam> there is also https://github.com/juju/juju/pull/8627 if you're interested in a slightly orthogonal but very related one
<manadart> jam: Doesn't look like there are any new OCI provider changes. Want to jump on that call for a quick chat about this?
<jam> manadart: sure
<jamespage> stub: hey - I've pushed vault 0.9.6 up to the candidate channel in the snapstore; 0.10.0 is in edge as well
<jamespage> stub: that's testing ok on a pure bionic deployment including pgsql 10
<wallyworld> jam: here's a small PR to update catacomb to use tomb.v2. part of the work to get our deps all uptodate and consistent across all out repos https://github.com/juju/juju/pull/8628
<jam> wallyworld: shouldn't we be passing the func() into tomb.Go() rather than creating a goroutine that only exists until done is closed?
<jam> wallyworld: specifically, we are closing "done" when our loop exits, but that's exactly the same time that we would be having the loop exit
<jam> is it just that func() wasn't returning an error?
<jam> eg, func() error vs func() ?
<stub> jamespage: Cool. Releasing 0.9.6 to stable should be fine. What were you thinking about 0.10.0 ? I'm leaning towards stable too, rather than juggling channels with zero-dot versions
<jamespage> stub: I think so - there is not a huge diff to 0.10.0
 * stub investigates running the mojo charm tests with an alternative snapstore channel
<jamespage> stub: ok so.... the changes we've been landing into the vault charm are appearing under cs:~openstack-charmers-next/vault
<jamespage> stub: that will include at some time today support for a 'channel' configuration option
<jamespage> which supersedes the hardcoding of 'stable' in layer options
<cnf> hi jamespage :P
<jamespage> hey cnf
<stub> oh, ta. That will be enough for me or tom to tweak our bionic CI tests
<cnf> how are things?
<jamespage> cnf: busy as always
<cnf> hehe, that's not bad, is it?
<jamespage> stub: if you can give me a +1 on the 0.9.6 I'll shove that to stable, and then promote 0.10.0 to candidate
<jamespage> cnf: nope!
<stub> I haven't come up with a good solution for alternative channels with the snap layer. Maybe a snapchannel config option added standard, and a layer.yaml option to hardcode which snaps that applies to.
<stub> jamespage: It is working fine for me in devmode
<stub> jamespage: It looks like there are going to be cli incompatibilities with 0.10. I'm still leaning towards a single stable channel though.
<stub> (0.9.6 is working fine I mean)
<jamespage> Cynerva, ryebot, cory_fu: https://github.com/juju-solutions/interface-etcd/pull/12 if you have cycles - modelled on something similar in the rabbitmq interface
<jamespage> stub: ta - promoting to stable
<jamespage> stub: if you're interested - https://review.openstack.org/#/q/project:openstack/charm-vault is the queue of landed and inflight changes
<jam> manadart: something I just ran into, bug #1765342. I'm not sure how I feel about it yet.
<mup> Bug #1765342: 2.4 enable-ha complains about no address w/ unstarted machines <enable-ha> <juju:Triaged> <https://launchpad.net/bugs/1765342>
<jam> but "juju enable-ha && juju enable-ha" complains about not having IP addresses for the machines that are just starting
<jam> note that would also happen for "juju enable-ha -n3 && juju enable-ha -n5", and I'm a little concerned if they have controllers that are broken during startup.
<jam> manadart: also, when you get a chance, https://github.com/juju/juju/pull/8627
<manadart> jam: Ah, yes. It knows the controller is there, but it has no addresses yet. Hmm. Doosie.
<wallyworld> jam: thanks for review, your idea is much better, my brain just didn't see the simple way. PR updated
<jam> wallyworld: lgtm with a small caveat, would we want to return the error rather than always returning nil? but either way I think it's correct. I think the difference is that we probably wouldn't have to call .Kill() if we returned the error from runSafely
<jam> but I'm happy landing as is, as well.
<wallyworld> jam: thanks ok, i'll take a look and see if that change works
<wallyworld> jam: yeah, we do need the Kill() as we need to pass that through to the embedded tomb (otherwise tests hang/timeout). I think I did try a variant of that in my testing and concluded I needed the Kill()
<jam> wallyworld: are we doing a beta2 or an RC1? I'd like to create a milestone in LP for bugs
<jam> wallyworld: the calling tomb isn't the embedded tomb, and they aren't linked? k
<wallyworld> jam: *I* think we'll need another beta
<wallyworld> we can always rename the milestone?
<jam> wallyworld: probably
<jam> worst case we rename the targets and delete one we never released.
<wallyworld> i'm hopeful we won't need beta2 but we always seem to
<wallyworld> the calling tomb is the embedded tomb, but the tests fail, i can dig in a bit to see why
<wallyworld> it may be to do with that the catacomb's tomb is not embedded but an attribute of the catacomb struct
<wallyworld> and so the outer Kill() is needed
<elmaciej_> Hi! I have simple question I guess - how can I get using reactive current IP of the machine on which I'm deploying - I need to pass it as variable to some command
<jam> elmaciej_: I don't know the reactive function offhand, the juju command would be "network-get". to get the address to advertise for other applications to use. grepping that from reactive may point you in the right direction.
<jam> https://github.com/juju/juju/pulls/8631 for a fairly straightforward logic fix
<jam> manadart: externalreality_ ^^
<jam> balloons: want to give a go/no-go on https://github.com/juju/juju/pulls/8607 (stripping built binaries)
<elmaciej_> jam: Thanks, I'm looking how to use this from reactive as I'm using docker layer
<jam> elmaciej_: I'm pretty sure they wrap the function, I just don't know the name of it.
<balloons> jam, sure
<elmaciej_> jam: looks like there is from charmhelpers.contrib.openstack.utils import get_host_ip
<stub> Is there a .deb of charm-tools for bionic tucked away in a PPA somewhere?
<jam> hml: it looks like in the container you can do "apt install eatmydata && echo /usr/lib/*/libeatmydata.so | sudo tee /etc/ld.so.preload"
<jam> hml: you can do web searches for 'libeatmydata' and/or 'eatmydata' to see if it helps.
<hml> jam: ty
<balloons> jam, thanks. hml, it's worth just tacking that on and kicking off a run to see if it helps without us doing more work. If so, :-)
<jam> manadart: thanks for catching the %2 thing. It was the *whole point* of that patch, and I just hadn't finished turning the test into its final form and seeing it actually pass.
<manadart> jam: Cool.
<jam> manadart: 8627 has the updated test and update math.
<manadart> jam: Approved it.
<jam> manadart: thanks
<balloons> hml, so can your ci-test PR land?
<hml> balloons: if i can get an approval  ;-)  - i do want to squash the commits if folks are finished with the viewing
<balloons> hml, my approval still stands :-) If you've addressed Chris's bugs, I'm still +1
<balloons> s/bugs/comments
<hml> balloons: it's not showing as approved in github - yes, i've addressed all comments etc
<balloons> ohh, maybe I didn't +1. In which case, I'll fix
<balloons> hml, feel free to merge once you are happy with commits
<hml> balloons: thx
<balloons> hml, also feel free to add to functional tests and push once it lands. Thanks!
<hml> balloons: will do
<admcleod_> stub: so what if i dont have a relation?
<cory_fu> wallyworld: When you're around, can you please take a look at and test https://github.com/johnsca/juju-relation-mysql/pull/4
<cory_fu> wallyworld: It should resolve that issue you were hitting.
<jam> balloons: manadart: https://github.com/juju/juju/pull/8632 removes all the 'enable-ha to demote a machine' code.
<jam> balloons: your email client did the strange formatting again
<balloons> jam I know.. I feel stupid for not using the web client. I forgot until just after I sent
<balloons> 2.4 notes are especially terrible. I'm sorry
<balloons> I think it's the conversion from gdoc to plain text formatting I enforce in my client
<balloons> The message looks fine in draft..
<elmaciej> Hi! It's actually a stupid question - has anyone installed the charm-tools package on windows with pycharm?
<cory_fu> elmaciej: One of the folks from CloudBase might have, but I don't know any of their IRC handles.  Is there a specific issue you're hitting?
<elmaciej> cory_fu: yes, I cant do pip install charm-tools, it's failing
<cory_fu> elmaciej: Can you pastebin me the error?
<elmaciej> cory_fu: I have vm with all setup done on ubuntu but just checking , here is the error https://pastebin.com/swxWvFV9
<elmaciej> cory_fu: same thing worked on ubuntu well. and also under ubuntu on windows it's working fine
<elmaciej> cory_fu: I'm demoing something tomorrow and just want to "sell" the idea that you can develop and build a charm on any platform
<elmaciej> of course juju works on windows so you can deploy, but just checking if the "you can develop too" statement is true
<cory_fu> elmaciej: To be honest, I'm not sure that pip installing charm-tools would be enough to develop anyway.  You would also want https://github.com/juju/charmstore-client
<cory_fu> I really don't understand that error from pip, though.  I wish it gave some indication on where the invalid character came from
<elmaciej> yea, but the funny thing is I can do charm-build on my windows using the built-in ubuntu subsystem. Just the stupid IDE won't work as it requires importing packages to compile
<elmaciej> cory_fu: anyway thanks for you time, if I'll find solution I'll paste it here
<cory_fu> elmaciej: Thanks!
<balloons> thumper, can I get a quick look at https://github.com/juju/juju/pull/8633. I'd like to land it with the failing tests to unblock CI runs (I believe)
<balloons> veebers, ^^ 1.25 enablement
<veebers> balloons: LGTM, looks to be largely bringing the 1.25 makefile up to speed with recent changes.
<cory_fu> wallyworld: Did you see my request to review https://github.com/johnsca/juju-relation-mysql/pull/4 ?  I tested it with the Ghost charm and it seems ok, but I wanted to make sure it resolves the issue you were hitting without introducing any others
<wallyworld> cory_fu: hey, sorry i didn't yet, looking
<wallyworld> thanks so much for fixing
<wallyworld> we'll test today asap
<cory_fu> wallyworld: Thanks.  Unfortunately, I'm well past EOD but if it tests ok, I'll merge it first thing tomorrow.
<wallyworld> cory_fu: no worries at all. timezones suck. enjoy your evening, will leave a comment in the PR
#juju 2018-04-20
<stub> admcleod_: You fall back to network_get. With no relation, your charm has decisions to make since it may have several IP addresses and egress ranges to choose from
<admcleod_> stub: how would i get the interface associated with the default route woth network get?
<stub> admcleod_: You would use the Python stdlib or something from pypi. That question has nothing to do with Juju, so no point asking Juju for an answer.
<stub> I suspect it is the wrong question.
<admcleod_> stub: i feel like thats a bit of a cop out
<stub> I think you want the address bound to a particular network space, which is controlled by Juju
<stub> So you bind to the IP address that the operator configured, rather than whatever AWS or whatnot happened to set as the default route.
<stub> https://jujucharms.com/docs/stable/developer-network-primitives#network-get
<admcleod_> ill have a look tomorrow, thanks stub
<parlos> Good Morning
<magicaltrout> hello folks
<magicaltrout> kjackal: or someone
<magicaltrout> can you deploy a local charm using libjuju and also attach resources to it it requires at install time?
<kjackal> magicaltrout: let me see...
<kjackal> magicaltrout: I see the https://github.com/juju/python-libjuju/blob/978f35cc4f19ba98af86ca73a387c9997ad6b24c/examples/localcharm.py localdeployment
<kjackal> but local resources are in the todo list: https://github.com/juju/python-libjuju/blob/978f35cc4f19ba98af86ca73a387c9997ad6b24c/juju/model.py#L1079
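For context, a hedged sketch of the local-charm deployment that the linked example demonstrates; the charm path, application name, and series are made up, the exact connect call varies between libjuju versions, and (per the TODO above) local resources cannot yet be attached through Model.deploy:

    # Sketch: deploy a charm from a local directory with python-libjuju.
    import asyncio
    from juju.model import Model

    async def main():
        model = Model()
        await model.connect()            # current controller/model
        try:
            await model.deploy('./builds/my-charm',
                               application_name='my-charm',
                               series='bionic')
        finally:
            await model.disconnect()

    asyncio.get_event_loop().run_until_complete(main())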
<cory_fu> @jamespage Minor ask on https://github.com/juju/layer-index/pull/21 and then I'll merge
<magicaltrout> you make me sad kjackal
<cory_fu> @jamespage Sorry, I'm bad at copy-and-paste.  https://github.com/juju/layer-index/pull/21
<magicaltrout> we switch from amulet and still can't test stuff :)
<cory_fu> Oh wait, I got it right the first time.  heh.
<magicaltrout> i found juju.application.attach with a raise noimplemented exception
<magicaltrout> sad times
<jamespage> cory_fu: done
<cory_fu> jamespage: Thanks
<cory_fu> jamespage: Merged.
<kjackal> magicaltrout: would it work if you juju attach resource from the command line
<cory_fu> @jamespage How are you finding the Endpoint pattern vs the older RelationBase?  Any feedback?
<jamespage> cory_fu: only half a day into that - was going to ask you to peek at my interface parts
<jamespage> cory_fu: https://github.com/openstack-charmers/charm-interface-vault-kv
<jamespage> I think I see the changes and I think I grokked the docs
<kjackal> magicaltrout: can you push your charms in to the edge channel and test it from there?
<cory_fu> jamespage: At first glance, it looks good, though I do notice you're going to hit https://github.com/juju-solutions/charms.reactive/pull/168 but hopefully I can get a release out with a fix for that ASAP
<jamespage> cory_fu: I did some of the old style flags for compat with some of the relation state assessment code we have in other charms
<jamespage> but generally it was all fairly painless
<jamespage> cory_fu: I did notice something a little odd - you always get a return value for endpoint_from_flag with new style interfaces
<jamespage> that may be intentional and I was able to deal with it with a is_flag_set as well
<cory_fu> jamespage: _get_first_value can be replaced with just using self.all_joined_units.received[key]
<cory_fu> jamespage: https://github.com/juju-solutions/charms.reactive/pull/173  ;)
<cory_fu> jamespage: As stub said, I can see it going either way, but I think it makes more sense to match the existing behavior.  But I do think that is_flag_set is more explicit and thus better
<jamespage> cory_fu: https://review.openstack.org/#/c/563091/1/src/reactive/vault_handlers.py is the requires side handler implementation for that interface
<cory_fu> jamespage: You mean provides
<cory_fu> jamespage: Do you really want to publish the info for every unit across every relation: https://github.com/openstack-charmers/charm-interface-vault-kv/blob/master/provides.py#L46-L49
<cory_fu> jamespage: Also, this check is going to fail if the requiring charm sends isolated=False: https://github.com/openstack-charmers/charm-interface-vault-kv/blob/master/provides.py#L60-L61
<cory_fu> You might want to use "isolated is not None"
<cory_fu> jamespage: One pattern I have started to use is to have one of the values in the list of requests be an abstract (at least from the charm's perspective) "request ID" that the interface layer can translate back to relation and unit, if needed.  And as long as the ID is abstract from the charm's perspective, it could easily just be the unit object itself (which has a back reference to its relation), or some serialization of the relation ID + unit ID,
<cory_fu> if you prefer.
<cory_fu> jamespage: And if there are even just a few fields, wrapping it in a simple FooRequest class makes it even easier
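To make the pattern being discussed concrete, here is a rough sketch of an Endpoint-based requires side; the class, flag, and key names are invented for illustration and are not taken from the vault-kv interface, so treat the details as assumptions:

    # Sketch: minimal Endpoint-style interface layer (requires side).
    from charms.reactive import Endpoint, when, set_flag, clear_flag

    class SecretsRequires(Endpoint):

        @when('endpoint.{endpoint_name}.changed')
        def _changed(self):
            # all_joined_units.received aggregates data from all remote units;
            # the first non-empty value for a key wins.
            if self.all_joined_units.received.get('vault_url'):
                set_flag(self.expand_name('{endpoint_name}.available'))
            else:
                clear_flag(self.expand_name('{endpoint_name}.available'))

        @property
        def vault_url(self):
            return self.all_joined_units.received.get('vault_url')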
<cory_fu> jamespage: There've been a few changes against charm-helpers recently.  Wondering if we should cut a release of that soon.
<magicaltrout> kjackal: if you deploy a local charm
<magicaltrout> does/should juju attach still work like usual?
<cory_fu> tinwood: We were talking yesterday about goal-state, and lo and behold, Juju 2.4-beta1 was updated today with it!
<cory_fu> Oh, I guess that was yesterday.
<cory_fu> Talk about turn-around time. ;)
<rick_h_> cory_fu: I was just talking with core about finding some examples to put together a blog/juju show and such around how to use it and why it's useful
<rick_h_> cory_fu: would love if you're interested to help put together the demo stuff to highlight the feature
<tinwood> cory_fu, wow, that made it in.  Interesting.
<cory_fu> rick_h_: stub and tinwood have good real-world examples.  A simpler, but also a bit more trivial, example would just be showing how you can get more accurate charm status for the period between when relations were added vs when the hook fires in Juju.
<cory_fu> rick_h_: But they actually need it to make the charm logic a lot cleaner and the charms work much more efficiently
<rick_h_> cory_fu: right, I think the big thing will be boiling it down to a couple of the simplest cases to think of
<cory_fu> rick_h_: yeah, charm status for relations is about the simplest case I can come up with and is pretty demo-able.
<jamespage> cory_fu: I do mean provides yes ;)
<jamespage> cory_fu: OK I see what you mean - refactoring a bit
<kjackal> it should magicaltrout
<magicaltrout> yeah there's something amazing going on with libjuju
<magicaltrout> when you deploy a local charm on the commandline I can do juju resources <my-charm>
<magicaltrout> and it lists my resources
<magicaltrout> but model.deply('/local/charm')
<magicaltrout> lists no resources
<cory_fu> magicaltrout: Yeah, I hit that a couple of weeks ago: https://github.com/juju/python-libjuju/issues/223
<cory_fu> magicaltrout: There's an API call missing from Model.deploy to register the resources
<magicaltrout> arse
<magicaltrout> cheers cory_fu
<cory_fu> magicaltrout: Should be an easy fix, though I'm probably not going to have time to work on it for a couple of weeks.  I'm a little confused as to why the controller doesn't register them automatically, since it clearly knows about them from the CharmInfo response
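Until Model.deploy grows that call, one hedged workaround along the lines of kjackal's suggestion is to shell out to the CLI after deploying; the application/resource names and path below are made up, and the exact verb (juju attach vs juju attach-resource) depends on the Juju version:

    # Sketch: attach a local resource via the juju CLI after a libjuju deploy.
    import subprocess

    def attach_local_resource(app, resource, path, model=None):
        cmd = ['juju', 'attach', app, '%s=%s' % (resource, path)]
        if model:
            cmd[2:2] = ['-m', model]     # e.g. 'mycontroller:default'
        subprocess.check_call(cmd)

    # e.g. attach_local_resource('my-charm', 'software', './software.tgz')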
<magicaltrout> thanks cory_fu
<magicaltrout> bought any new hats for summer?
<cory_fu> magicaltrout: :)  I got one for xmas but it's a bit too big.  It's hard to find nice hat stores these days
<magicaltrout> http://cdn.mutually.com/wp-content/uploads/2017/06/84900949.jpg
<magicaltrout> I'll keep an eye out for you cory_fu !
<cory_fu> magicaltrout: lol.  I'm compelled to point out that that's not a fedora (it's a trilby), but I don't imagine that helps my case.  ;)
<cory_fu> It's also too big for that person's head.
<magicaltrout> thanks cory_fu you just gave the office a chuckle ;)
<cory_fu> :)
<sSeBBaSs> hi guys, need some help, i'm trying to conjure-up kubernetes in an already bootstrapped controller on a custom model, with a user other than the default admin, and I keep receiving a credentials not found error message :(
<xnox> i had broken my lxd, and now recovered. but the bootstrap node appears to have lost authentication with lxd
<xnox> since the bootstrap container is generating
<xnox> Apr 20 19:25:09 sochi lxd[3165]: ip=10.71.156.2:46084 lvl=warn msg="rejecting request from untrusted client" t=2018-04-20T19:25:09+0100
<xnox> is there a way to force juju to re-authenticate it self with lxd environment?
<xnox> i think i somehow need to reimport certificate.....
<xnox> but i'm not sure which certificate juju in the bootstrap node is using
<xnox> i think https://jujucharms.com/docs/2.2/clouds-lxd-resources did the trick
<pmatulis_> thank goodness for those docs
<balloons> Nice!
<kwmonroe> +1 pmatulis_ !!
<kwmonroe> and +100 for backporting stable where it makes sense
#juju 2018-04-21
<tlyng> Anybody have experience launching K8S charm on localhost/lxd? All components seem to boot, but kubernetes are not able to launch pods and worker nodes are still in a NotReady state.
#juju 2020-04-13
<gsamfira> Hello folks. Couple of questions.
<gsamfira> any chance we can get win2019 tools in simplestreams for the latest juju release?
<gsamfira> and the other question is: How would I go about updating the documentation for the manual provider to include Windows?
<rick_h> gsamfira:  second one is easy, the docs are in the discourse "Docs" category and can leave comments on there that are picked up and such.
<rick_h> gsamfira:  first one will have to look into.
<gsamfira> rick_h: great! Will look in discourse. Need to send a few PRs before I update the docs. A few small fixes to the manual provider.
<gsamfira> rick_h: for the second one, we built the tools locally, so not a huge blocker, but would rather use the ones in simplestreams.
<rick_h> gsamfira:  yea totally. We're a bit understaffed today with the holiday but will look into it.
<gsamfira> rick_h: no rush at all. Whenever you get around to it. Thanks for the info!
<wallyworld> babbageclunk: bug 1790185 was from 2018, but i think we're still lacking zone constraint support?
<mup> Bug #1790185: There is no way of specifiying on which cluster charm should be deployed vSphere <cpe-onsite> <vsphere-provider> <juju:Triaged> <https://launchpad.net/bugs/1790185>
<wallyworld> could they use resource-groups?
<wallyworld> babbageclunk: actually, there's discussion in #juju
<babbageclunk> wallyworld: ok looking
<wallyworld> kelvinliu: any chance of a review on a small PR to merge 2.7 into devel? https://github.com/juju/juju/pull/11434
<kelvinliu> looking
<kelvinliu> looking good to me if the ignore leader check change for upgrade-charm is an expected change wallyworld
<wallyworld> kelvinliu: yeah, added it as a driveby
<wallyworld> it was a bug in 2.8
<kelvinliu> ic
<wallyworld> it caused those leadership errors when upgrading
<kelvinliu> ah right
#juju 2020-04-14
<babbageclunk> wallyworld: since Kelvin's not around can you review https://github.com/juju/juju/pull/11439?
<babbageclunk> wallyworld: thanks!
<wallyworld> babbageclunk: nice to get that fix done :-) you forward port to develop?
<babbageclunk> wallyworld: yup, doing it now
<wallyworld> \o/
<babbageclunk> wallyworld: https://github.com/juju/juju/pull/11440
<wallyworld> looking
<wallyworld> babbageclunk: ty, hardest review i've done in ages
<babbageclunk> :)
<manadart> stickupkid: https://github.com/juju/juju/pull/11441
<stickupkid> manadart, this has grown legs, mind CR whilst I fix da tests https://github.com/juju/juju/pull/11425
<manadart> stickupkid: Sure.
<achilleasa> hml: did you try the caas upgrade after landing the relation changes? Do we have any other pending bits?
<hml> achilleasa:  yes… that was a card you missed.  i did caas upgrades, migration and a few bits of clean up.
<hml> achilleasa:  the only pending bits i know of are the two prs i have up.
<achilleasa> hml: looking at the panic one atm. Think we can get a test in to ensure that the errors bubble up properly?
<hml> achilleasa:  there is a test which bubbles up some.  they won't go all the way easily as other code checks should prevent them from occurring.
<hml> the same reason it's difficult to test
<hml> achilleasa:  that's for the Implicit relations… pondering if the Join when dying can be tested for bubble up
<manadart> stickupkid: Daily in a couple? Just need to grab a drink.
<stickupkid> manadart, yarp
<achilleasa> rick_h: these are the two new errors that the operator can get when we turn on support for rel limits: https://pastebin.canonical.com/p/QsQ59vynwh/ any tweak suggestions for the wording?
<rick_h> achilleasa:  that looks good. Ideally we'd be able to hint to the charm's metadata yaml because this is kind of new/out of the operators hands
<achilleasa> rick_h: for the second error you mean? Something like 'the metadata for the new charm version imposes...'?
<rick_h> achilleasa:  no I mean in general. Ignore me. It's wishful thinking
<hpidcock> tlm: tlm[m]: can you have a look at https://bugs.launchpad.net/juju/+bug/1871894 you might know whats up
<mup> Bug #1871894: Workload pod deployed by Juju 2.8 can't create secret <juju:Triaged by tlmiller> <https://launchpad.net/bugs/1871894>
<tlm> k hpidcock
<hpidcock> It might not have anything to do with the admission stuff
<tlm> very strange
<babbageclunk> I love the message "existing secret does not exist"
<tlm> hpidcock: that is a bug with the admission controller
<tlm> will try and cover it with a test
#juju 2020-04-15
<wallyworld> hpidcock: "juju-run python3" doesn't work with upgrade-charm PR
<wallyworld> so still something to follow up on
<hpidcock> wallyworld: can you link me the charm you are using and how you are deploying it please on the bug
<wallyworld> will do, just reviewing PR, and also have a zoom meeting at 1 so after that
<wallyworld> it's just my mariadb-k8s charm
<thumper> babbageclunk, wallyworld: just thinking... does backup work with k8s controllers? And if so, do we have any idea how we're going to restore?
<thumper> I was thinking about this when thinking about mongo 4.0 restore
<babbageclunk> thumper: I haven't tried backup on a k8s controller
<wallyworld> backup should work. restore no, falls into the HA category for next cycle
<thumper> wallyworld: have you discussed with babbageclunk the mongo 4.0 restore question?
<wallyworld> yes
<babbageclunk> Was trying to think about how k8s restore would work - the core restore is the same, but the stuff around the outside (stopping and starting agents and rewriting configs) is all different
<wallyworld> exactly
 * thumper nods
<thumper> probably want to have a check somewhere
<thumper> to say "sorry"
<babbageclunk> yeah true - I'll add that
<babbageclunk> I mean, it'll be hard to follow the directions anyway - you can't scp the backup and juju-restore to the controller machine
<thumper> :)
<thumper> yeah... we'll need another whole set of instructions for restoring a k8s controller
<thumper> babbageclunk: any thoughts on the last comment in this bug? https://bugs.launchpad.net/bugs/1858693
<mup> Bug #1858693: ANARCHY!!!!!!! Entirely leaderless application spotted in the wild <juju:Triaged> <https://launchpad.net/bugs/1858693>
<babbageclunk> thumper: yeah - that sounds like an entirely different problem. (Or I guess the same symptoms caused by something totally different.)
 * thumper nods
<babbageclunk> thumper: reading some code...
<thumper> why would it timeout enqueing?
<babbageclunk> I think that's coming from the raft apply command - presumably the raft engine loop is blocked somehow?
<thumper> hmm...
<tlm> hpidcock: https://github.com/juju/juju/pull/11443 if you got 5 minutes
<hpidcock> sure
<tlm> wallyworld: got time for HO ?
<wallyworld> tlm: just finishing a PR, give me 5
<tlm> ll
<tlm> kk
<wallyworld> hpidcock: finally +1 on PR. i think we'll need a followup to deal with the new local attributes and controller state
<wallyworld> tlm: ready for HO now if you're free
<tlm> k
<wallyworld> hang on, stupid 2fa etc
<hpidcock> wallyworld: the local state attributes are resolver local state not operation local state. Resolver local state is not saved.
<wallyworld> hpidcock: ah no worries
<hpidcock> wallyworld: when you have a spare two seconds https://github.com/juju/juju-db-snap/pull/1
<wallyworld> sure
<wallyworld> hpidcock: so 4.0.18 was officially the latest?
<wallyworld> and AGPL not SSL?
<hpidcock> yep, but either we choose 17 or 18 and need to test it
<hpidcock> its confusing because SSL is the mongo license but we include both mongo and mongo-tools (which is apache)
<wallyworld> yeah
<wallyworld> IANAL :-)
<hpidcock> well the charm itself has a license
<hpidcock> sorry the snap
<hpidcock> the snap code is under whatever license we want
<wallyworld> yup, just about to say that
<hpidcock> technically the snap contains more than just mongo stuff
<hpidcock> openssl etc
<stickupkid> manadart, I got it working, just wrapping up the tests for a review
<stickupkid> here is a casual observation, if you're creating mocks for anything outside of your package, but within juju, then you're doing it wrong
<manadart> stickupkid: Will look in a mo'. OTP with thumper.
<achilleasa> stickupkid: any interesting examples of this happening?
<stickupkid> achilleasa, as in, doing it badly (got lots of those), doing it well, not so much
<achilleasa> stickupkid: more interested in the doing it badly ;-)
<stickupkid> achilleasa, anything to do with networking common, model backend...
<stickupkid> achilleasa, if you're taking on a bigger interface from somewhere else, then you're tied to every other place that uses it. You no longer have a vertical boundary and are lock-stepped to that other component/entity etc.
<achilleasa> stickupkid: ah, so you are basically talking about duplicating (or defining a restricted version of) the interface in your package and mocking that?
<stickupkid> achilleasa, yeap
<stickupkid> achilleasa, my usual spiel
<manadart> stickupkid: You can get 3 shims down to 2 by pushing the raw state shim into environs/space like this: https://pastebin.canonical.com/p/fRT68XZhSQ/
<manadart> But that damn modelmanager backend is a bastard.
<stickupkid> manadart, yeah, I've not got there yet, as I'm writing tests for spaces reload API
<stickupkid> manadart, I've got rid of my horrid life hack as well, which has made me really happy
<elox> Good morning/evening.
<elox> I've bumped into a problem and I can't tell whether it is a bug or something I can resolve myself. I've sent a discourse post about it here: https://discourse.juju.is/t/error-cert-pool-creation-failed-cannot-parse-certificate/2927  ... hope to get some help on it so I can get my controller back.
<elox> In short, I get this: "ERROR cert pool creation failed: cannot parse certificate..."
<elox> Followed by...... asn1: syntax error: data truncated
<elox> Right, I don't know what has happened. But my resolution to ^ was to remove the controller entry from controller.yaml and then log back in as admin: "juju login -u admin 192.168.2.12:17070 -c snowflake-maas"   After being prompted with a password challenge and a fingerprint accept, the controller is again available to my client.
<achilleasa> elox: looks like you were hitting https://github.com/openssl/openssl/issues/1381
<achilleasa> elox:  openssl asn1parse -inform der -strparse 40 -in $yourcert manages to parse the ASN1 data (fails without the strparse flag)
<elox> I'll have a look and update my post on discourse.
<elox> Thanx!
<manadart> achilleasa: Can you review this small one?
<manadart> https://github.com/juju/juju/pull/11445
<achilleasa> manadart: looking
<elox> @achilleasa: https://discourse.juju.is/t/error-cert-pool-creation-failed-cannot-parse-certificate/2927/2
<achilleasa> elox: great that you could get back the controller without having to do any cert surgery ;-)
<elox> achilleasa: Yeah, the horror.
<stickupkid> manadart, updated in my last commit
<elox> Finally I was able to test the latest version of my Nextcloud charm on my own system and feel comfortable releasing it to stable: https://jujucharms.com/u/erik-lonroth/nextcloud/4
<manadart> stickupkid: OK.
<achilleasa> stickupkid: rick_h proposed change for fixing the weird defaults when parsing charm meta: https://github.com/juju/charm/pull/309
<achilleasa> can someone please take a look at https://github.com/juju/juju/pull/11446?
<stickupkid> manadart, ping you now or in the morning about the new stuff?
<manadart> stickupkid: Morning. I've EoD'd
<stickupkid> manadart, ha
<achilleasa> stickupkid: re your wording comment; should I change it to 'as'?
<stickupkid> achilleasa, yeah, that's better
<stickupkid> achilleasa, tick
<achilleasa> stickupkid: still need to patch some more stupid tests with the correct limit expectations :-(
<stickupkid> achilleasa, 9 year old code, 9 years old
<stickupkid> achilleasa, it's like clippy, "do you want to limit this?" <- no clippy, NO!
<achilleasa> btw, what is the correct command for bootstraping a focal instance?
<achilleasa> shouldn't --bootstrap-series=focal --force do the trick?
<stickupkid> achilleasa, you need --config image-stream=daily
<stickupkid> achilleasa, not released yet, so you need to point the image-stream to the right place
<achilleasa> aha! thanks
<achilleasa> rick_h: got a focal with 4.0.18 from the candidate snap https://paste.ubuntu.com/p/kpNNWSQ9VV/. Will work on getting the conf settings through (bit harder since they need to be injected into the agent conf) but should have a PR ready for review by EOD tomorrow
<rick_h> achilleasa:  sounds like great progress ty!
<elox> I'm trying to help a friend adding AWS credentials to JAAS, and we can't do it. It complains about a credential tag. https://pasteboard.co/J3Xx4Pr.png
<elox> We have been trying to follow the guide, but it simply doesn't work
<elox> Can this be related to already existing credentials.yaml for him ?
<pmatulis> elox, which guide exactly?
#juju 2020-04-16
<babbageclunk> wallyworld: can you take a look at this? https://github.com/juju/juju-restore/pull/14
<wallyworld> sure
<hpidcock> Very quick PR https://github.com/juju/rpcreflect/pull/1
<babbageclunk> hpidcock: approved
<babbageclunk> I didn't read it all
<babbageclunk> hpidcock: is that pulled out of the juju/rpc package?
<babbageclunk> (I mean rpcreflect, not the licence)
<hpidcock> I'm not sure ask Simon
<babbageclunk> but he's always asleep!
<babbageclunk> and you're right there
<hpidcock> but the files themselves already have a license
<hpidcock> doing a quick license audit
<babbageclunk> oh right
<hpidcock> another very quick PR https://github.com/juju/jsonschema-gen/pull/5
<hpidcock> babbageclunk?
<babbageclunk> hpidcock: looking
<babbageclunk> approved
<hpidcock> thanks
<hpidcock> sorry babbageclunk because you are so awesome, I missed something https://github.com/juju/jsonschema-gen/pull/6
 * babbageclunk clicks the button
 * hpidcock appreciates babbageclunk
<babbageclunk> :)
<wallyworld> babbageclunk: if you are free, given the circumstances, a second +1 and even QA on this would be awesome https://github.com/juju/juju/pull/11449
<babbageclunk> wallyworld: looking
<wallyworld> hpidcock: a few tweaks if you have a chance sometime today https://github.com/juju/juju/pull/11444
<hpidcock> looking
<babbageclunk> turns out a lot of things get sad when you delete the credential for a cloud.
<babbageclunk> I mean controller model
<wallyworld> true that. i did my testing with a different model
<wallyworld> to simulate the field issue
<babbageclunk> yeah, I'll try again with that
<babbageclunk> suddenly couldn't get status or ssh to the machines
<wallyworld> status won't work with cred missing from any model
<babbageclunk> I just kept my ssh sessions open this time
<wallyworld> babbageclunk: if you still have a model, remove-application --destroy-storage is a test i should have done
<wallyworld> since that will fail without --force
<babbageclunk> controller's still up
<babbageclunk> removing the bad model is difficult, since the storage is still there
<wallyworld> babbageclunk: what i did was rename the cred id so i still had it to rename back after
<wallyworld> then i could cleanup as normal
<babbageclunk> ah right - turns out I could update it back into the controller
<wallyworld> yeah, that would work too
<babbageclunk> ok, redeploying postgres and trying again
<babbageclunk> how did the credential go away in the first place?
<babbageclunk> wallyworld: so if I do `remove-application --destroy-storage` without --force after renaming the cred, it should fail? Or will the cleanup fail?
<wallyworld> babbageclunk: with destroy storage, the cleanup has to be able to talk to the cloud which will fail. i am guessing there will be errors logged and the app will hang around in dying state
<babbageclunk> but then redoing the remove-application with --force should still clean it up
<babbageclunk> right?
<wallyworld> yeah
<wallyworld> assuming --force works as expected
<hpidcock> wallyworld can I grab the charm you were using for the juju-run bug?
<wallyworld> oh bollocks, sorry forgot
<babbageclunk> wallyworld: I don't get how you are updating the _id? Did you just paste it in again?
 * babbageclunk just does that
<wallyworld> babbageclunk: i loaded the record to a doc variable using FindOne(), updated the doc._id, inserted into collection, then removed the old record. then later I updated the doc._id again and inserted/removed again
<babbageclunk> from go code?
<babbageclunk> I'm in the shell
<babbageclunk> wallyworld: ok, that seemed to work fine
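Purely for the record, a heavily hedged sketch of the rename-the-_id dance described above, written with pymongo; the collection name is a guess, and poking a live controller database like this is only sensible in a throwaway test environment:

    # Sketch (pymongo): temporarily rename a credential document's _id.
    # usage: rename_credential(MongoClient(...)['juju'], old_id, new_id)
    def rename_credential(db, old_id, new_id):
        coll = db['cloudCredentials']        # collection name is an assumption
        doc = coll.find_one({'_id': old_id})
        if doc is None:
            raise LookupError('no credential with _id %r' % old_id)
        doc['_id'] = new_id
        coll.insert_one(doc)                 # insert the renamed copy first...
        coll.delete_one({'_id': old_id})     # ...then remove the original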
<babbageclunk> wallyworld_: did you see my comments on the PR?
<wallyworld> babbageclunk: thanks for QA etc, I've attempted to answer your questions on PR, see if they make sense
<wallyworld> one core issue is that we've over-applied the use of --force
<wallyworld> in general
<babbageclunk> wallyworld: makes sense to me - just checking
<hpidcock> wallyworld: the problem is that on caas, juju-run always runs the commands on the remote, not in the operator https://github.com/juju/juju/commit/d780faaf774bddd0b904e4428ae6c33076683f55#diff-89a10e4adacaacbb2dabc98d1b50011e
<hpidcock> the reason it was working for me is I misunderstood the question
<hpidcock> we will need to add a --operator flag to juju-run I think
<wallyworld> hpidcock: ah, doh. of course. for charms with deploymentMode=operator, it should only ever run on --operator
<hpidcock> well we have no juju-run --operator flag
<hpidcock> for operator charms and normal iaas charms there isn't a problem at the moment
<wallyworld> right, but as for run-action, we can handle the deployment mode case without the flag first up
<hpidcock> yeah, well juju run and  actions we already have the information we need
<hpidcock> but juju-run has no arg for operator or workload
<wallyworld> so you saying it works as expected for deploymentMode=operator charms?
<wallyworld> it's just workload charms that need any changes?
<hpidcock> I'm not sure, depends on the changes you made
<wallyworld> yeah, i'd need to check the code as well
<hpidcock> ok so action and juju-run (the action made by juju run) should handle this
<hpidcock> juju-run doesn't handle either case properly
<wallyworld> sounds about right
<wallyworld> juju-run should execute on the pod it was invoked from
<wallyworld> without any --operator flag, now that i think about it
<hpidcock> can't juju-run call cross units/applications though?
<wallyworld> yes, i mean more operator vs workload
<wallyworld> as a start anyway
<stickupkid_> manadart, trolling you via a git diff
<manadart> stickupkid_: Cunning.
<stickupkid_> manadart, give me 5-10 minutes and wanna discuss VIPs?
<manadart> stickupkid_: Sure.
<manadart> stickupkid_: Be there in a sec.
<stickupkid_> was scoffing my breakfast down tbh, just there now
<elox> pmatulis: This guide https://juju.is/docs/aws-cloud
<elox> Good morning
<manadart> stickupkid_, achilleasa: Anyone able to tick a little race fix? https://github.com/juju/juju/pull/11452
<stickupkid_> manadart, tick
<manadart> stickupkid_: Ta.
<stickupkid_> manadart, https://github.com/juju/juju/pull/11454
<manadart> stickupkid_: Yep gimme a couple; got one to swap you.
<stickupkid_> manadart, runaway
<stickupkid_> manadart, https://i.imgflip.com/3wva2g.jpg
<manadart> stickupkid_: https://github.com/juju/juju/pull/11455. It's the real fix for the one you approved.
<stickupkid_> manadart, well that's a horrid test :( // so TearDownTest does not reattempt.
<stickupkid_> I have serious question, if github actions is saying use "ubuntu-latest", but that comes with a ton of other services that aren't supported by ubuntu, is that not a trademark issue?
<stickupkid_> I'm getting angry at github for saying something is ubuntu, when it clearly isn't just ubuntu ehre
<stickupkid_> manadart, WINNER -> https://github.com/juju/juju/pull/11456
<stickupkid_> CR please
<achilleasa> mongo snap PR is ready for review: https://github.com/juju/juju/pull/11453
<achilleasa> QA-ing this will be hard so please spend some time to try different things...
<hml> stickupkid_: https://github.com/juju/juju/pull/11457. review pls?
<stickupkid_> hml, ah this makes sense, it's because I turned them on for aws the other month by default
<hml> stickupkid_: i think i caught them all
<stickupkid_> hml, ticked
<hml> stickupkid_: cool.  ty!
<stickupkid_> damn it, why won't anything land
<hml> stickupkid_: i'm seeing that mongo not installing on the static analysis
<hml> sorry - client tests… analysis is good.  :-)
<manadart> stickupkid_: This fixes one of the intermittent failures: https://github.com/juju/juju/pull/11458
<stickupkid_> manadart, I'm seeing two failures
<manadart> stickupkid_: There are loads.
<achilleasa> can someone remind me how to 'juju add-machine' and get an eoan one?
<rick_h> achilleasa:  juju add-machine --series=eoan
<achilleasa> rick_h: thanks... thought that was only for charms :D
<achilleasa> rick_h: AFAICT the eoan image comes with an lxd snap; this means that the lxd-snap-channel flag that I am adding will only affect focal+ (if lxd is already installed on the host we do nothing)
<achilleasa> is that ok?
<rick_h> achilleasa:  it's not going ot be that way in focal?
<rick_h> achilleasa:  I'd expect it to be part of the focal testing forward with the mongodb snap
<achilleasa> rick_h: it's a separate PR but currently install-lxd is a noop if lxd is already installed. So the channel will only do something if lxd is not already there
<achilleasa> testing now... brb
<rick_h> achilleasa:  right, but if you change the model config and juju add-machine it should use the track for the new machine in focal since focal is getting it from a snap pre-seeded?
<achilleasa> rick_h: that's the expectation. Just wanted to point out that it will have no effect when spinning up machines with series < focal
<rick_h> achilleasa:  yep understand
<achilleasa> (iow, we won't force it to use the channel if lxd is already there)
<hml> stickupkid_: something is definitely horked in ci for make check and the merge job.  my make check job failed at a 60 min timeout… the merge job queue is 3 long
<hml> stickupkid_: i found a place where we lost 15 min… but still
<stickupkid_> hml, we need to get manadarts patch to land
<hml> stickupkid_: which one?
<stickupkid_> https://github.com/juju/juju/pull/11458
<stickupkid_> although there is an issue with it worker/caasunitprovisioner/worker_test.go:47:22: undefined: "github.com/juju/juju/worker/caasunitprovisioner".MockProvisioningStatusSetter
<hml> stickupkid_: i was gunna say… that could be a problem
<achilleasa> hmm... are we installing lxd when we create new machines? I have patched container/lxd/initialization_linux.ensureDependencies and added logging. That code path gets triggered when adding an lxd container to the machine. It seems like the machine already has the lxd snap installed for focal even though if I 'lxc launch ubuntu:20.04 focal' I don't see it installed. Any ideas?
<achilleasa> There doesn't seem to be anything related in the cloudinit bits (just apt pkgs)
<stickupkid_> achilleasa, lxc launch needs daily as well
<stickupkid_> achilleasa, lxc launch ubuntu-daily:focal focal
<achilleasa> stickupkid_: doesn't ' lxc launch images:ubuntu/focal focal' work for you?
<stickupkid_> achilleasa, it might now, it didn't back when i was testing it
<achilleasa> stickupkid_: let me try with the image you suggested. Is that the one we pull in juju?
<achilleasa> stickupkid_: ok... mystery solved; that one has lxd
<achilleasa> rick_h: not sure how to proceed ^
<rick_h> achilleasa:  stickupkid_ sorry was otp...I'm confused? Yes you need image streams for focal still. It'll be cleaner once the images start rolling out of daily in the coming week
<achilleasa> rick_h: what I meant was that the focal image seems to have the lxd snap pre-installed
<stickupkid_> achilleasa, that's the case
<achilleasa> rick_h: I could change the code to 'snap refresh lxd' though
<rick_h> achilleasa:  yes, that's expected. Yes, I think that's the goal to snap refresh to the track in the config as part of our install/provision bits.
<achilleasa> rick_h: ok. that probably needs additional changes to juju/packaging so I will need to revisit this on Tuesday
<rick_h> achilleasa:  ok, is the other bits landing for the mongodb changes?
<rick_h> we can pick up and see what steps we need to update/move forward for focal tests/etc
<achilleasa> rick_h: PR is up; lots of timeout issues with tests (keep !!build!! to get it green) but it is ready for review/QA. Will fire an email to the list in a few min
<achilleasa> hml: small PR for you https://github.com/juju/packaging/pull/8
<hml> achilleasa:  looking
<hml> achilleasa:  approved.
<achilleasa> rick_h: got it working with snap refresh: https://paste.ubuntu.com/p/mgkhwrxwWK. Will push a PR on Tuesday
<rick_h> achilleasa:  ok, got EOD and run. Thanks
<rick_h> sorry, go EOD I mean
<achilleasa> ;-)
<elox> pmatulis: So, we actually got through it now. But there are tons of errors and warnings in this step which makes it very uncertain whether the adding of credentials was successful or not.
<elox> pmatulis: We also had to include "juju add-model foobar aws/eu-west-1 --credential credname". Omitting that seems not to upload the credentials.
<wallyworld_> babbageclunk: the PR from yesterday, here's the forward port :-) https://github.com/juju/juju/pull/11451
<babbageclunk> wallyworld_: sorry otp with thumper - no conflicts presumably
<wallyworld_> nah, easy peasy
<babbageclunk> Well, that was a thoroughly brain-melting discussion
<hpidcock> sounds like it was fun
<hpidcock> wallyworld_: would love some feedback on https://github.com/juju/juju/pull/11462 before I jump into writing unit tests
#juju 2020-04-17
<tlm> wallyworld_: can I get 5 minutes after standup ?
<hpidcock> tlm: https://media1.tenor.com/images/2a8ac396850c720fd6084c4e94f9d582/tenor.gif?itemid=15201782
<wallyworld_> hpidcock: just otp be there soon
<wallyworld_> babbageclunk: ^^^^
<tlm> otp two ticks
<hpidcock> wallyworld_: another review pls sir https://github.com/juju/juju/pull/11462
<wallyworld_> ok
<babbageclunk> wallyworld_: for now I'm disallowing restoring a backup from a version of Juju greater than the current one. That seems reasonable right?
<wallyworld_> i think so cause the newer agents to match the db might not be available right?
<babbageclunk> well, they should be in the db...
<babbageclunk> but not on the controller machine disks
<babbageclunk> maybe
<wallyworld_> yeah, that's what i meant sorry. and it's work to extract and use, so for now can disallow
<wallyworld_> IMO
<babbageclunk> cool cool
<babbageclunk> that was what I figured.
<wallyworld_> hpidcock: lgtm, ty
<wallyworld_> hpidcock: feel like creating a go mod file for juju/juju?
<hpidcock> Awesome thankyou
<hpidcock> Sure why not
<wallyworld_> would like to get that done for beta or rc1
<wallyworld_> includes makefile changes, jenkins build jobs, etc etc
<hpidcock> Then some lucky person can fix the debs
<wallyworld_> yep
<hpidcock> wallyworld_: all required for go mod https://github.com/juju/yaml/pull/2 https://github.com/juju/raft/pull/1 https://github.com/juju/lumberjack/pull/3 https://github.com/juju/gosigma/pull/1
<thumper> wallyworld_: has the mongo db channel branch landed?
<thumper> actually it looks like it hasn't again
<thumper> hpidcock, wallyworld_: the merge job is being killed for taking too long
<thumper> not sure what it is doing
<thumper> but I'm well past EOD
<wallyworld_> hpidcock: looking
<wallyworld_> hpidcock: all lgtm, perhaps PR descriptions a little light on
<wallyworld_> stickupkid: heya, we've had trouble landing stuff today and it still appears broken. first there was a disk space issue on the jenkins master node which hpidcock fixed and there were stale lxc containers on grumpig which i deleted.  but now there's an issue accessing the  lxd socket when the merge jobs run. past eod here, perhaps you could take a peek and $$mrrge$$ the 3 or so approved PRs? pretty please :-)
<wallyworld_> or manadart if stickupkid is away?
<manadart> wallyworld_: Yeah, I'm looking at it now.
<wallyworld_> tyvm
<stickupkid_> who broke lxd in jenkins?
<stickupkid> manadart, when you've got a sec, can i grab you for my idea of where to store origin correctly
<manadart> stickupkid: In Daily
<stickupkid> manadart, hold on fighting bluetooth
<gnuoy> joedborg, do you know if this is the right channel to ask if anyone can land  https://github.com/juju-solutions/interface-tls-certificates/pull/22 ? or is there a separate juju-solutions type channel ?
<joedborg> gnuoy: Iâm not sure actually.  Iâd probably just use the internal k8s channel, weâre all hanging out there
<gnuoy> joedborg, kk
<pmatulis> can a bundle point to a locally stored charm?
<rick_h> pmatulis:  yes with a ./ path
<pmatulis> rick_h, thank you
#juju 2020-04-19
<hpidcock> wallyworld: https://github.com/juju/lumberjack/pull/3 https://github.com/juju/gosigma/pull/1 https://github.com/juju/raft/pull/1 https://github.com/juju/yaml/pull/2 need to be manually merged. I can't merge them.
