[01:27] <wgrant> ericsnow: Is there a PPA around with a vivid-compatible juju?
[01:32] <marcoceppi> wgrant: doesn't look like it. I just checked the devel and stable PPAs; they all stop at utopic
[01:33] <wgrant> marcoceppi: Well, utopic's not a problem, but they all seem to stop at 1.22
[01:33] <wgrant> vivid needs a 1.23 pre-release.
[01:33] <wgrant> (because of systemd)
[01:34] <marcoceppi> wgrant: you'll want to talk to sinzui and the release team about that
[01:37] <wgrant> marcoceppi: Ah, thanks.
[08:29] <apuimedo> Hi everybody
[08:29] <apuimedo> I'm getting an error bootstrapping
[08:29] <apuimedo> http://paste.ubuntu.com/10579001/
[08:30] <apuimedo> (it happened twice)
[08:36] <apuimedo> marcoceppi: ^^
[08:46] <apuimedo> marcoceppi: I also wanted to know if https://jujucharms.com/cassandra/precise/17 is going to have a trusty version, so that we (midonet) can rely on it.
[09:11] <Murali> apuimedo: please check your proxy or firewall settings
[09:12] <apuimedo> hey Murali! did you collect those logs?
[09:13] <Murali> we had issues while deploying; we got those resolved and the openstack services installed using juju-quickstart
[09:13] <Murali> but relations were not added; we are now looking into it
[09:13] <stub> apuimedo: I've got a rewrite of Cassandra for trusty up for review.
[09:14] <apuimedo> stub: that's great!
[09:14] <apuimedo> do you know when it could be released?
[09:14] <stub> apuimedo: lp:~stub/charms/trusty/cassandra/spike should do the trick
[09:14] <stub> apuimedo: When it gets through the review queue
[09:14] <stub> (How long is a piece of string)
[09:15] <Murali> apuimedo: we will try to send today once after midonet-component deploys
[09:16] <apuimedo> Murali: good
[09:16] <apuimedo> Murali: can you be more specific on what you had to fix on proxy/firewall settings?
[09:17] <Murali> we had some firewall rules on our gateway. it was blocking connections to canonical sites
[09:18] <apuimedo> ah, ok
[10:00] <lazyPower> stub: err, i don't see the cassandra spike in the revq - http://review.juju.solutions/
[10:00] <lazyPower> stub: how long ago was that submitted?
[10:01] <stub> lazyPower: It got pushed to the very bottom
[10:01] <lazyPower> aaahhh
[10:01] <stub> https://bugs.launchpad.net/charms/+bug/1419116
[10:01] <mup> Bug #1419116: New trusty/cassandra charm <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1419116>
[10:01] <lazyPower> yeah i wasn't looking for cassandra, i was looking for stub
[10:01] <lazyPower> herp derp, lazy needs coffee
[10:02] <stub> But swings and roundabouts - I woke up an old PostgreSQL branch and it ended up at the very top ;)
[10:02] <lazyPower> sweet action
[10:37] <apuimedo> turns out I made the wrong assumption that bootstrapping could be done without a connection to the internet :P
[10:37] <apuimedo> I had to do some masquerading on the maas machine
[10:39] <apuimedo> lazyPower: how long does the review process typically take?
[10:40] <lazyPower> Depends entirely on the size of the queue, the people available during the week to give reviews, and how long they can work reviews without being interrupted. We strive for a week or less, but that has been slipping lately with all the demo work that's been passed through the ~charmers team
[10:41] <apuimedo> :-)
[10:41] <apuimedo> thanks for the info
[10:41] <apuimedo> lazyPower: what's the stance on puppet/ansible/salt for doing the charms work?
[10:42] <lazyPower> we <3 that - please do use configuration management tools in your charm
[10:42] <apuimedo> cool
[10:42] <apuimedo> and for installing the configuration management, is there any preferred way?
[10:42] <apuimedo> i.e., should each charm of a bundle try to install puppet?
[10:42] <lazyPower> We have a few charm helpers for some of the services like ansible.
[10:43] <lazyPower> there's a template for chef charms
[10:43] <apuimedo> unfortunately no puppet, and that's what we use at the moment
[10:43] <apuimedo> but I guess I can infer
[10:44] <lazyPower> but if you're looking at introducing puppet, we don't have a good boilerplate charm for that. Typically what i've seen is one of two approaches: either the install hook takes care of the pre-dependencies and what would normally go in install is moved to config-changed, or a script is called pre-bootstrap to set up the CM framework and any additional logic is placed in a secondary install script - but remember it's only run once.
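A minimal sketch of the first pattern lazyPower describes. The charm layout under /tmp/democharm and the puppet commands are illustrative assumptions (the echoes stand in for the real apt-get/puppet invocations so the sketch runs anywhere): install only bootstraps the CM tool, and the work that would normally go in install lives in config-changed so it re-runs.

```shell
mkdir -p /tmp/democharm/hooks

cat > /tmp/democharm/hooks/install <<'EOF'
#!/bin/sh
set -e
# Bootstrap the CM framework only; this hook runs exactly once.
command -v puppet >/dev/null || echo "would run: apt-get install -y puppet"
EOF

cat > /tmp/democharm/hooks/config-changed <<'EOF'
#!/bin/sh
set -e
# What would normally go in install lives here, so it re-runs on config changes.
echo "would run: puppet apply /tmp/democharm/manifests/site.pp"
EOF

chmod +x /tmp/democharm/hooks/install /tmp/democharm/hooks/config-changed
/tmp/democharm/hooks/install && /tmp/democharm/hooks/config-changed
```

The point of the split is that config-changed fires again whenever settings change, while install never does.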
[10:45] <apuimedo> good
[10:45] <lazyPower> apuimedo: if you want one of us to take a look at your charm construction once you've got the puppet delivery done i'd be happy to
[10:46] <lazyPower> i'm the current author/maintainer of the chef bits - they're fairly similar from what i understand
[10:46] <apuimedo> lazyPower: thanks, right now I was just trying to decide if I can remove a charm that we have that is just using puppet to install repos
[10:46] <apuimedo> and that charm must be deployed to each machine, which is a bit bothersome
[10:47] <lazyPower> seems like that could be abstracted
[10:47] <apuimedo> yes, probably a single puppet-midonet charm that configures puppet for the other services in relation-joined
[10:47] <lazyPower> apuimedo: what would be nice is if we had composability in charms, so you could just inherit from that.
[10:48] <apuimedo> like, puppet-midonet config.yaml has a setting called 'repo' that points to which repo/release you want
[10:48] <apuimedo> then, when deploying midonet-agent, it would do nothing
[10:49] <apuimedo> until you do the relation joined with the puppet-midonet controller
[10:49] <lazyPower> yeah, i've got similar logic in the docker/flannel-docker charms
[10:49] <apuimedo> that would tell it which puppet config to write to the midonet-agent puppet
[10:49] <lazyPower> i needed to set/pass data out of band that was dependent on etcd having joined w/ flannel before we did anything with the networking.
[10:50] <apuimedo> yup, sounds similar
[10:51] <apuimedo> I'll have to make a charm eventually that gives nova-docker ;-)
[10:51] <apuimedo> (with midonet, of course :P )
[10:55] <lazyPower> Interesting :)
[10:56] <lazyPower> I just wrapped up my breakdown of the work we did earlier this year with docker (completely negating anything we're doing now with kubernetes... that post is forthcoming) - http://blog.dasroot.net/2015-charming-with-docker.html   Feel free to give it a go and leave any comments about the future of this stack on the list :) We're interested in feedback
[11:04] <apuimedo> cool, thanks
[11:04] <apuimedo> lazyPower: what does juju do if a hook file does not exist, i.e., if install is missing?
[11:04] <lazyPower> skips it with exit 0
[11:06] <apuimedo> cool
[11:06] <apuimedo> I want to avoid placeholder files :-)
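As lazyPower says, a missing hook file is simply skipped as if it exited 0, so placeholder files aren't needed. A runnable sketch of that dispatch behaviour (the /tmp/nohooks layout and run_hook helper are made up for illustration):

```shell
mkdir -p /tmp/nohooks/hooks   # note: no install hook is ever created

# Mimic juju's behaviour: run the hook if it exists and is executable,
# otherwise treat it as a successful no-op.
run_hook() {
  hook="/tmp/nohooks/hooks/$1"
  if [ -x "$hook" ]; then
    "$hook"
  else
    echo "$1: no hook file, skipping (exit 0)"
  fi
}

run_hook install
```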
[13:25] <rbasak> sinzui: o/
[13:25] <sinzui> hi rbasak
[13:25] <rbasak> sinzui: I'm looking into getting new Juju uploads to Vivid done.
[13:25] <rbasak> sinzui: but I'm not clear on what version we want to upload right now, as that's changed a few times.
[13:25] <rbasak> sinzui: (and also Trusty)
[13:26] <rbasak> sinzui: do you know what the current request is?
[13:26] <sinzui> 1.21.3 is current, but we expect to propose 1.22.0 this week for general release next week
[13:26] <rbasak> Will you want 1.22.0 in Trusty?
[13:26] <sinzui> rbasak, eventually
[13:26] <rbasak> But not yet?
[13:27] <rbasak> As in - simultaneously with your general release next week, or some time after that?
[13:27] <sinzui> rbasak, I think we should let 1.22.0 sit in the wild for a bit
[13:27] <rbasak> How do you feel about putting 1.22.0 into Vivid, but leaving Trusty for now?
[13:27] <sinzui> rbasak, +1
[13:28] <rbasak> OK - so I'll wait for your 1.22.0 proposed release - thanks.
[13:29] <sinzui> rbasak, I am scheduled to propose it tomorrow
[13:29] <rbasak> Sounds good! We can try and release simultaneously to Vivid - I have someone helping me on this one.
[13:30] <rbasak> sinzui: oh, one more question. Will 1.22.0 have systemd support? We're failing dep8 tests on Vivid I think for this reason right now.
[13:30] <rbasak> So if not we'll need a solution as Vivid is now systemd.
[13:31] <sinzui> rbasak, once the streams are published, I have a week to certify the ubuntu packages, but note that it won't work with the default streams, so you need to delay or use proposed
[13:31] <sinzui> I mean ubuntu proposed where people know to do something special to use it
[13:31] <rbasak> sinzui: we'll hold in vivid-proposed until you've published streams - no problem.
[13:32] <sinzui> rbasak, 1.23-beta1 will be the first juju to use systemd, and I can say the test results are mixed this morning. I will be working with ericsnow to discuss my vivid setup or his new features
[13:33] <rbasak> sinzui: OK - but then in that case, is there much point in having 1.22 in Vivid at all - either in Ubuntu or in your PPA?
[13:34] <rbasak> I suppose in your PPA users could still manually switch to upstart.
[13:43] <sinzui> rbasak, yeah, that was my own concern. For myself, I have a juju env running 1.21.3, so I can no longer provision vivid machines without upstart.
[13:43] <sinzui> rbasak, so to summarise, juju-core is fine to deploy into clouds, but the juju-local package may need to depend on upstart
[13:43] <sinzui> rbasak, and for juju 1.23.x, we change juju-local to depend on systemd
[15:01] <turicasto> hi guys!
[15:02] <turicasto> I'm writing a charm, can I ask some questions about the command "relation-get"?
[15:03] <marcoceppi> turicasto: you certainly can, it's best in this channel to just ask your question
[15:04] <rbasak> sinzui: can I just confirm I understand that please?
[15:04] <turicasto> marcoceppi: thank you
[15:04] <rbasak> sinzui: juju < 1.23 can deploy vivid machines by requesting upstart?
[15:05] <rbasak> And that's automatic?
[15:13] <turicasto> marcoceppi: Can I get the public address of a loadbalancer linked to my charm, only in "loadbalancer-relation-*" hook? or in other hook too?
[15:14] <marcoceppi> I don't think the loadbalancer advertises its public-address on the http interface; you can get its private address by doing `relation-get private-address`
[15:19] <turicasto> marcoceppi: ok, but can I get some information (like private-address) about a loadbalancer in all hooks? For example, can I get the loadbalancer's private address in "db-relation-joined"? Is there a location where I can find all the information about the services linked to my charm?
[15:21] <lazyPower> turicasto: it can be tricky to get information out of band. You have to know the relationship id, and use relation-get -r relation:id
[15:21] <lazyPower> turicasto: it's in our docs under hook environment authoring here:  https://jujucharms.com/docs/authors-hook-environment
[15:25] <turicasto> lazyPower: thanks!
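The out-of-band pattern lazyPower describes can be sketched like this. relation-ids and relation-get only exist inside a real hook context, so the sketch stubs them on PATH; the relation id and the address are made up, and haproxy/0 is a hypothetical remote unit name:

```shell
# Stub the hook tools so the sketch runs outside a real hook (assumption):
mkdir -p /tmp/hooktools
printf '#!/bin/sh\necho "loadbalancer:1"\n' > /tmp/hooktools/relation-ids
printf '#!/bin/sh\necho "10.0.3.17"\n'      > /tmp/hooktools/relation-get
chmod +x /tmp/hooktools/relation-ids /tmp/hooktools/relation-get
export PATH="/tmp/hooktools:$PATH"

# From any hook: look up the relation id first, then pass it with -r
# (relation-get -r <relation:id> <key> <unit>).
rid=$(relation-ids loadbalancer | head -n1)
relation-get -r "$rid" private-address haproxy/0
```

Inside the loadbalancer-relation-* hooks themselves the -r flag is unnecessary, since the relation context is implicit there.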
[15:27] <sinzui> rbasak, the juju client doesn't use upstart or systemd. The init system is only relevant to what you deploy. There are no official utopic or vivid charms, so juju on vivid will just work. But developers/testers may want to run juju on their local host; in that case the juju-local package will also need to depend on upstart or systemd as needed
[15:28] <rbasak> sinzui: OK, but what about when you deploy a vivid system with juju? What versions of juju will support that as vivid defaults to systemd?
[15:28] <rbasak> sinzui: is the answer to that "none, until 1.23?"
[15:28] <sinzui> rbasak, 1.23-beta1 and above
[15:29] <rbasak> OK, got it. Thanks.
[15:29] <rbasak> sinzui: we're expecting a stable 1.23 before Vivid releases, right?
[15:30] <rbasak> So we can expect to ship that in Vivid, and thus upon release Vivid juju will be able to deploy Vivid on systemd without breaking?
[15:30] <sinzui> rbasak, it is very close to the deadline.
[15:30] <rbasak> OK
[15:30] <apuimedo> lazyPower: if you have several instances of a service running
[15:30] <sinzui> rbasak, I will bring this up in the meeting I am going into now
[15:30] <rbasak> Thanks
[15:30] <apuimedo> so for example several keystones
[15:30] <apuimedo> when you add-relation between something and keystone
[15:30] <lazyPower> apuimedo: this is where the ids come into play. The relationship name plus the id of the relationship is specific to a host.
[15:31] <apuimedo> yes, but when you do juju add-relation myservice keystone
[15:31] <apuimedo> it establishes relation between all keystones and myservice?
[15:31] <lazyPower> correct
[15:32] <apuimedo> so relation-joined and after that relation-changed will be called several times, right?
[15:33] <lazyPower> correct, at least once for every service - possibly more if something is relation-set to send over the wire
[15:33] <lazyPower> *for every unit in the service(s)
[15:34] <apuimedo> that's nice
[15:34] <apuimedo> thanks again ;-)
[15:34] <lazyPower> cheers :)
[15:37] <turicasto> lazyPower: Is it normal that the hook "loadbalancer-relation-joined" does not run when I link a loadbalancer to my charm?
[15:38] <rbasak> sinzui: when is 1.23-beta1 scheduled for please?
[15:38] <lazyPower> that should run in the context of the relationship first being made (as in, it executes once - the services are not yet in a bi-directional communication pipeline)
[15:38] <lazyPower> so, that's odd if it's not.
[15:39] <sinzui> rbasak, either thursday or monday. It really depends on 1.22.0 entering proposed first
[15:39] <rbasak> sinzui: OK, thanks!
[15:54] <apuimedo> lazyPower: is there some place where I can see all the hook commands (like relation-list) and some sample output, or do I just have to make dummy charms and try it out?
[16:08] <lazyPower> apuimedo: it's a short list of commands, iirc - relation-get, relation-set, and unit-get
[16:08] <lazyPower> and config-get
[16:09] <lazyPower> that's all that's rattling around up in my head - but we do appear to be missing an agent reference sheet in the docs
[16:10] <apuimedo> lazyPower: what would be most useful is to have examples of their outputs in both the json and the plain text formats
[16:10] <lazyPower> https://github.com/juju/docs/issues/283
[16:11] <apuimedo> so charm developers can code up the parsing earlier on
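To illustrate the plain-vs-json distinction apuimedo is asking about: the hook tools accept --format=json for machine-readable output (plain text is the default). config-get isn't available outside a hook, so this stub stands in for it, and the config values here are made up:

```shell
mkdir -p /tmp/hooktools
cat > /tmp/hooktools/config-get <<'EOF'
#!/bin/sh
# Stub: a real charm would get these values from juju; they are invented here.
if [ "$1" = "--format=json" ]; then
  echo '{"repo":"stable","port":8080}'
else
  echo "stable"
fi
EOF
chmod +x /tmp/hooktools/config-get
export PATH="/tmp/hooktools:$PATH"

config-get repo            # plain text
config-get --format=json   # machine-readable, easy to parse in hook code
```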
[16:11] <apuimedo> lazyPower: tells me "this is not the page you are looking for"
[16:12] <lazyPower> well thats odd
[16:12] <lazyPower> it's a bug to track the missing agent command reference for inclusion into the docs
[16:12] <apuimedo> lazyPower: https://jujucharms.com/docs/authors-hook-environment has only the plain text output
[16:13] <apuimedo> lazyPower: ok
[16:13] <lazyPower> apuimedo: I highly encourage you to file bugs against our docs for anything you feel would enhance your experience looking them over. We are continually working on the docs trying to improve them - and your feedback is invaluable in that process :)
[16:13] <lazyPower> https://github.com/juju/docs/issues/
[16:15] <apuimedo> lazyPower: very well, now it loaded and I commented on the issue
[16:18] <lazyPower> stellar, thanks for that
[16:19] <apuimedo> lazyPower: I take it that charm-helpers is already committed to backwards compatibility, right?
[16:32] <Muntaner> hey guys
[16:32] <Muntaner> I have a problem
[16:33] <Muntaner> a hook is never called. It is a "relation-joined". How can I investigate? Could I have a problem in my yamls (config or metadata)?
[16:46] <lazyPower> apuimedo: you are correct
[16:46] <apuimedo> good
[16:46] <lazyPower> Muntaner: -joined is not running, but -changed is?
[16:50] <Muntaner> lazyPower: none is running
[16:51] <lazyPower> Muntaner: can you point me at a repository of your charm,  and show me the commands you're running in a pastebin?
[16:58] <rogpeppe> marcoceppi: hiya
[16:59] <rogpeppe> marcoceppi (or anyone else): do you know what the current rules are for determining whether a given bundle is promulgated?
[17:01] <marcoceppi> rogpeppe: it must be owned by charmers
[17:02] <marcoceppi> It does not operate like charms
[17:02] <rogpeppe> marcoceppi: right, thanks
[17:02] <rogpeppe> marcoceppi: that was my understanding, ta
[17:02] <marcoceppi> Np, cheers
[17:21] <gsamfira1> also, I will change the comment I made in the function
[17:23] <apuimedo> lazyPower: marcoceppi: on my 14.04 box juju help-tool relation-list does not show any example output
[17:25] <apuimedo> sorry if I'm being obtuse and I should be passing something extra
[17:27] <lazyPower> apuimedo: nah, i think we're just being unreasonably difficult about getting a listing in the docs under a heading that's less obscure than "How the innards of juju works - inflect on what that means pleb"
[17:27] <apuimedo> :P
[17:30] <narindergupta> jose: hazmat marcoceppi jamespage: Nuage network wrote the charm and they want to merge the latest working code. Do we know which branch they can merge into their charm that will work?
[17:30] <jose> narindergupta: which charm?
[17:30] <jose> also, no need to highlight us all :)
[17:30] <narindergupta> jose: openstack charms
[17:31] <narindergupta> jose: was just wondering who can answer the query?
[17:31] <lazyPower> beisner: ^
[17:31] <jose> narindergupta: if you have a suggestion, then make a merge proposal against lp:charms/charmnamehere for precise, and lp:charms/trusty/charmnamehere for trusty
[17:31] <marcoceppi> jose: it's different for openstack-charms
[17:31] <jose> ah, sorry then
[17:32] <jose> thought they had the aliases too
[17:32] <marcoceppi> they do, but things must first land in a dev branch
[17:32] <jose> ah, huh
[17:33] <narindergupta> marcoceppi: jose: so what should Nuage do? They have charms in https://code.launchpad.net/~nuage-canonical/ and are finding an issue with nova-compute, and they want to merge the charm code into their code so they can test and verify.
[17:34] <narindergupta> marcoceppi: jose: but they are not sure which branch to take. I suggested starting with the release branch but I might be wrong.
[17:34] <marcoceppi> narindergupta: I'm not sure I understand the question
[17:34] <jose> marcoceppi: they wanna do an MP against an openstack charm
[17:35] <marcoceppi> do they though?
[17:35] <narindergupta> marcoceppi: jose: a merge proposal is already made: https://code.launchpad.net/~nuage-canonical
[17:35] <jose> nope, https://code.launchpad.net/~nuage-canonical/charms/trusty/nova-compute/next/+merge/249410
[17:35] <jose> ^^^ has the MP
[17:36] <marcoceppi> right, so I don't think we understand the query
[17:36] <jose> marcoceppi: they wanna know if the MP they did is good or if they should choose another target branch
[17:36] <jose> let's say myself, I've fixed an openstack charm and wanna open an MP for peer review
[17:36] <jose> where should I point it to? lp:charms/precise/nova-compute for nova-compute?
[17:36] <marcoceppi> jose narindergupta they want to target the next branch of the openstack charm
[17:36] <marcoceppi> not the current one
[17:36] <jose> gotcha.
[17:37] <jose> narindergupta: so, your current MPs are not going to get processed since they are targeting the wrong branches
[17:37] <jose> let me get you the right ones
[17:37] <narindergupta> jose: marcoceppi: i am confused. Can someone look into those MPs and say whether everything is ok or a different MP is required?
[17:37] <marcoceppi> narindergupta: beisner and jamespage would be the best to confirm, but from what I understand unless it's a bug fix everything should target next
[17:38] <narindergupta> marcoceppi: what about charm-helpers, which does not have a next branch?
[17:39] <marcoceppi> narindergupta: no, it doesn't
[17:39] <beisner> narindergupta, please see the openstack charm development policy @ https://wiki.ubuntu.com/ServerTeam/OpenStackCharms
[17:39] <marcoceppi> beisner: ta for the link
[17:40] <beisner> marcoceppi, yw
[17:54] <dpb1> hi -- if I run 'go test -gocheck.v github.com/juju/juju/...' I only get one thing tested. Without the -gocheck.v, I get all the test suites executed with a bunch of failures (and no output). How can I see output from the failing test cases?
[17:57] <apuimedo> thanks evilnickveitch
[17:58] <mgz> dpb1: the arguments and ordering for go test is finickity
[17:58] <evilnickveitch> apuimedo, np
[17:59] <beisner> hi jose, fyi, it's also worth noting that openstack charms are different in that we really only target 1 series (trusty), and it is intended to be backwards/forward compatible with all currently-supported versions of Ubuntu and OpenStack (except Essex).
[17:59] <jose> gotcha
[17:59] <jose> thanks!
[17:59] <narindergupta> beisner: marcoceppi: Nuage is asking: do we know the last known stable version of next for the openstack charms? The team wants to test against those first to make sure everything is good.
[18:00] <beisner> hi narindergupta - the syntax for any given "next" (development) branch of an openstack charm is:
[18:01] <beisner> lp:~openstack-charmers/charms/trusty/cinder/next
[18:01] <mgz> dpb1: the easiest option tends to cd into github.com/juju/juju and `go test ./... -gocheck.v`
[18:01] <dpb1> mgz: yes, but I think that ignores the gocheck.vv arg
[18:01] <dpb1> at least, I don't see any extra output
[18:02] <narindergupta> beisner: ok, so I will ask them to merge the changes from this branch and test; if successful, then send the merge proposal to next itself.
[18:02] <dpb1> just FAIL testname..., etc
[18:02] <mgz> dpb1: you get more output on the failed tests
[18:02] <dpb1> mgz: I'll paste
[18:03] <beisner> narindergupta, so that is the example from the wiki link, and it's for cinder.   you'll need to substitute the charm name you're working with in place of cinder.
[18:03] <narindergupta> beisner: gotcha, thanks
[18:03] <beisner> narindergupta, yw.  holler with any ?s.
[18:03] <dpb1> mgz: http://paste.ubuntu.com/10581353/
[18:04] <narindergupta> beisner: sure and thanks. Also hot right now is the merge proposal into charm-helpers.
[18:04] <narindergupta> beisner: like this one https://code.launchpad.net/~nuage-canonical/charm-helpers/charm-helpers
[18:04] <narindergupta> beisner: i am hoping this MP is valid?
[18:05] <mgz> dpb1: the tests didn't fail, the build failed
[18:05] <beisner> narindergupta, i think you only need to propose against lp:charm-helpers
[18:05] <dpb1> mgz: ok, I'm a go newb, obviously. :)
[18:05] <mgz> dpb1: did you run `godeps -u dependencies.tsv` first?
[18:06] <dpb1> yes, ran that, but it produced no output, let me check it again
[18:08] <narindergupta> beisner: ok, deleted the merge proposal against stable now. Hope we are good to go from here. I'm not sure how to make this progress: once this MP is complete, I need to resync the other openstack charms and resubmit the merge proposals. Also, what is the good way to sync the latest code into existing charms?
[18:08] <dpb1> mgz: http://paste.ubuntu.com/10581379/
[18:09] <dpb1> mgz: same result with your ./... syntax
[18:09] <dpb1> I'm working off this, btw: https://github.com/juju/juju/blob/master/CONTRIBUTING.md
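The workflow dpb1 is debugging is, per the conversation, roughly the following; the GOPATH layout is the conventional one and the sketch only prints a note when no juju source tree is present, so the real commands are shown but not forced to run:

```shell
export GOPATH="${GOPATH:-$HOME/go}"
JUJU_SRC="$GOPATH/src/github.com/juju/juju"

if [ -d "$JUJU_SRC" ]; then
  cd "$JUJU_SRC"
  godeps -u dependencies.tsv   # pin every dependency to the revision juju expects
  go build ./...               # a stale dependency fails here, looking like a test failure
  go test ./... -gocheck.v     # verbose per-test gocheck output
else
  echo "juju source tree not present; commands shown for reference only"
fi
```

As mgz points out below, a compile error against a stale checkout (here, code.google.com/p/go.crypto/ssh) surfaces as a FAIL line even though no test ever ran, which is why godeps must run before go test.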
[18:09] <beisner> narindergupta, do you mean the /next/ charm code?   or the charmhelpers code?
[18:10] <narindergupta> beisner: both, as charm-helpers needs to merge first; only then can the /next charm code MPs be sent.
[18:11] <mgz> dpb1: yeah, it's not test running related; just `go test -i github.com/juju/juju` should fail for you
[18:12] <mgz> dpb1: for whatever reason, the copy of code.google.com/p/go.crypto/ssh it's building against is wrong
[18:13] <mgz> cd into that and see what mercurial says, versus what's in dependencies.tsv
[18:13] <dpb1> ok, checking now
[18:20] <dpb1> ok, all those tests pass (in $GOPATH/src/code.google.com/p/go.crypto/ssh)... checking some more
[18:20] <dpb1> hg summary says I'm at tip
[18:21] <dpb1> but hm
[18:21] <dpb1> let me get rid of my whole $GOPATH and do over
[18:21] <dpb1> (juju is the only go thing I have)
[18:21] <beisner> narindergupta, we re-sync charmhelpers across all openstack charms a few times each cycle.  so if the change makes it into charmhelpers, we can push it out to all of the /next/ charms.
[18:21] <beisner> narindergupta, how many nuage changes to charms are just a charmhelper sync?
[18:23] <narindergupta> beisner: they are an SDN, so there are additional plugin changes in charm-helpers
[18:23] <beisner> narindergupta, let me re-phrase:   are there any changes to openstack charms OTHER THAN what's in charmhelpers for nuage?
[18:24] <narindergupta> beisner: but then they have quantum-gateway, neutron-api, openstack-dashboard, keystone, glance, nova-compute, nova-cloud-controller, and cinder
[18:24] <narindergupta> beisner: which need a sync based on charmhelpers
[18:25] <narindergupta> beisner: plus nuage has three additional charms for openstack to implement their SDN
[18:25] <beisner> narindergupta, see https://code.launchpad.net/~nuage-canonical/charms/trusty/nova-cloud-controller/next/+merge/249411/comments/622308
[18:25] <beisner> narindergupta, that is also my question.
[18:25] <beisner> narindergupta, what we're saying is:   don't worry about doing MPs for all of the charms --if-- it is just a charmhelper sync.
[18:26] <narindergupta> beisner: yes that was done and i proposed a charm-helpers changes
[18:26] <narindergupta> beisner: ok, so i will drop the MPs for those charms that do not need changes.
[18:27] <narindergupta> beisner: the neutron-api charm is another charm which needs more than charmhelper changes
[18:27] <beisner> narindergupta, ok great.  i think that will help with clarity.  i apologize - i'm not overly familiar with the project and i'm just digging into each branch to see what's going on.
[18:27] <beisner> narindergupta, right, so keep those MPs where there are add'l changes.
[18:27] <narindergupta> beisner: no problem
[18:27] <narindergupta> beisner: ok i will clean up MP a bit then
[18:31] <beisner> narindergupta, thanks.  i'll have a look at their neutron-api MP.
[18:32] <narindergupta> beisner: ok, neutron-api, quantum-gateway
[18:32] <narindergupta> beisner: and a few others
[18:33] <narindergupta> beisner: i cleared it up and now we have a total of 5 MPs: neutron-api, charm-helpers, quantum-gateway, nova-compute and nova-cloud-controller
[18:34] <beisner> narindergupta, ok thank you, i'll post a comment on them individually.
[18:34] <narindergupta> beisner: thanks
[18:52] <dpb1> mgz: thx for the help.  blowing away GOPATH/* and re-going through things fixed it.
[18:52] <dpb1> mgz: now, I just need to increase my /tmp since 2G is not enough for juju test apparently
[18:52] <dpb1> :)
[18:54] <mgz> dpb1: cool :)
[19:05] <beisner> narindergupta, the charmhelpers proposal has some pep8 / python syntax issues.  see comments on the MP.
[19:06] <narindergupta> beisner: ok, looking into it. Also, on this http://paste.ubuntu.com/10179495/ it seems the upstream charm also fails this test on my machine without any modification
[19:07] <beisner> narindergupta, no, i'm talking about the charmhelpers proposal, not a charm proposal.
[19:08] <beisner> narindergupta, i've added comments to MPs:  neutron-api, charm-helpers, quantum-gateway, nova-compute and nova-cloud-controller
[19:08] <narindergupta> beisner: yeah for that i am checking right now
[19:09]  * beisner will bbiab
[19:20] <narindergupta> beisner: for charmhelpers, the file charmhelpers/contrib/openstack/context.py line 189 already has the issue, which I cannot fix: context.py:189:80: E501 line too long (81 > 79 characters)
[19:20] <narindergupta> but the other two issues I am fixing
[19:35] <narindergupta> beisner: after fixing the errors I have resubmitted the MP on charm-helpers. Unfortunately I cannot resolve this error:   narinder:$ flake8 charmhelpers/contrib/openstack/context.py
[19:35] <narindergupta> charmhelpers/contrib/openstack/context.py:189:80: E501 line too long (81 > 79 characters), which already exists in upstream charm-helpers
[19:38] <adalbas> jcastro, arosales , mchasal : i have just submitted the gpfs charm to trunk
[19:38] <arosales> adalbas: ah good to hear :-)
[19:38] <arosales> adalbas: I don't see it yet @ http://review.juju.solutions/
[19:38] <adalbas> arosales, i believe this will be under your eyes for review as well, right?
[19:38] <adalbas> or does it require other steps?
[19:39] <adalbas> arosales, it says it requires about 15 min to be available, is that right?
[19:39] <mchasal> arosales, jcastro did mention about a 15 minute lag before it actually shows up.
[19:39] <arosales> adalbas: not just my eyes but the juju community's, and the ~charmers for the final promotion into the recommended portion of the charm store
[19:39] <arosales> adalbas: did you follow https://jujucharms.com/docs/authors-charm-store#submitting
[19:40] <adalbas> arosales, yes
[19:40] <arosales> adalbas: ah great, then it will show up in the queue shortly and folks will add a review in its turn
[19:40] <adalbas> great!
[19:40] <arosales> adalbas: thanks for your contribution, good milestone
[19:41] <adalbas> arosales, and mchasal as well. one question i still have: people would need gpfs packages to run this.
[19:41] <adalbas> arosales, is there anyone in your teams that have the license for that?
[19:42] <adalbas> arosales, btw, looking forward for the feedback.
[19:42] <arosales> adalbas: not on the canonical team, but I would suggest documenting in the readme how to obtain a license and get a package?
[19:43] <arosales> adalbas: does the charm assume the package is placed in a specific directory?
[19:43] <adalbas> arosales, we have documented how to create a repo for those packages
[19:43] <adalbas> but yes, good point on how to get the license
[19:44] <mchasal> I'm not sure we'll be able to document anything other than "contact your IBM sales rep" but we'll look into it.
[19:44] <arosales> adalbas: yes, as long as someone can read the readme and get GPFS running that should be a good starting point. We'll have to see in the charm on how to inject the license and GPFS package.
[19:49] <beisner> narindergupta, yep i noticed that pre-existing issue in c-h too.  it's ok to just fix the things which are relevant to your changes.
[19:50] <narindergupta> beisner: ok fixed now and resubmitted the MP
[19:51] <narindergupta> beisner: for the rest of the charms i would like Nuage to work on it, as they are in the process of merging the new changes from next, so it impacts the other changes as well.
[19:53] <beisner> narindergupta, next I would re-sync charmhelpers (from your proposed branch) on each of the charm branches.
[19:53] <beisner> narindergupta, for example, on the neutron-api charm, you would temporarily modify Line 1 here:  http://bazaar.launchpad.net/~nuage-canonical/charms/trusty/neutron-api/next/view/head:/charm-helpers-sync.yaml#L1
[19:54] <beisner> narindergupta, use your custom charm-helpers branch there.   then run:     make sync
[19:54] <beisner> narindergupta, on  nuage's neutron-api, quantum-gateway, nova-compute and nova-cloud-controller branches.
[19:54] <narindergupta> beisner: gotcha
[19:55] <beisner> narindergupta, once they all have the lint fixes, then change Line 1 back to default @ http://bazaar.launchpad.net/~nuage-canonical/charms/trusty/neutron-api/next/view/head:/charm-helpers-sync.yaml#L1
[19:56] <narindergupta> beisner: do i need to create the file charm-helpers-sync.yaml
[19:56] <beisner> narindergupta, no.  the file exists in each charm already.
[19:57] <narindergupta> beisner: in which directory? i am not able to find it
[19:57] <beisner> narindergupta, which charm?
[19:58] <narindergupta> beisner: say, nova-cloud-controller
[19:58] <narindergupta> beisner: http://bazaar.launchpad.net/~nuage-canonical/charms/trusty/nova-cloud-controller/next/files
[19:58] <narindergupta> i am seeing charm-helpers-hooks.yaml and charm-helpers-tests.yaml
[19:58] <narindergupta> not sync
[19:59] <beisner> narindergupta, ah.  if that file doesn't exist, look for the  charm-helpers-hooks.yaml  file.
[20:00] <narindergupta> beisner: yeah that there
[20:01] <beisner> narindergupta, but in no case should you have to create a charm-helpers-????.yaml file, make sense?
[20:01] <narindergupta> yeah
[20:03] <beisner> narindergupta, there is a good reason for the differing file names btw.  though it's unrelated to what we're working on here.
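The sync dance beisner describes above can be sketched like this. The yaml contents below are a minimal made-up stand-in for a charm's charm-helpers-hooks.yaml, not the real nova-cloud-controller file:

```shell
# A minimal stand-in for a charm's charm-helpers-hooks.yaml (contents invented):
cat > /tmp/charm-helpers-hooks.yaml <<'EOF'
branch: lp:charm-helpers
destination: hooks/charmhelpers
include:
  - core
  - contrib.openstack
EOF

# Temporarily point line 1 at the custom charm-helpers branch under review...
sed -i '1s|.*|branch: lp:~nuage-canonical/charm-helpers/charm-helpers|' \
    /tmp/charm-helpers-hooks.yaml
head -n1 /tmp/charm-helpers-hooks.yaml

# ...then run `make sync` in the charm to pull in the fixed helpers, and
# change line 1 back to the default lp:charm-helpers before proposing.
```

Repeating this on each of the neutron-api, quantum-gateway, nova-compute and nova-cloud-controller branches propagates the charm-helpers lint fixes without hand-editing the vendored copies.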
[20:04] <ctlaugh> I have a change to the nova-compute charm that I would like to request to have merged.  Is there anyone particular that I need to add as a reviewer in the request?
[20:04] <narindergupta> beisner: gotcha, yeah i can see: those sync the directories from charm-helpers, and it makes sense
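The temporary edit-and-restore flow beisner outlines above can be sketched with a throwaway copy of the file. The sample file contents and the `/tmp/sync-demo` path are illustrative approximations, not taken from the real branch, and in some charms the file is named `charm-helpers-hooks.yaml` instead:

```shell
# Local simulation of the "edit line 1, sync, restore" workflow.
mkdir -p /tmp/sync-demo
cd /tmp/sync-demo

# Approximate shape of a charm's charm-helpers-sync.yaml:
cat > charm-helpers-sync.yaml <<'EOF'
branch: lp:charm-helpers
destination: hooks/charmhelpers
include:
    - core
    - contrib.openstack
EOF

# Temporarily point line 1 at the custom charm-helpers branch:
sed -i '1s|.*|branch: lp:~nuage-canonical/charm-helpers/charm-helpers|' \
    charm-helpers-sync.yaml
head -1 charm-helpers-sync.yaml
# make sync    # (in a real charm checkout, re-pulls hooks/charmhelpers)

# Once the lint fixes are committed, restore the default target:
sed -i '1s|.*|branch: lp:charm-helpers|' charm-helpers-sync.yaml
head -1 charm-helpers-sync.yaml
```

The `make sync` step is commented out here because it needs a real charm checkout with network access to Launchpad.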
[20:04] <beisner> hi ctlaugh
[20:04] <ctlaugh> beisner: hi
[20:06] <beisner> ctlaugh, first stop should be to read up @ https://wiki.ubuntu.com/ServerTeam/OpenStackCharms
[20:06] <beisner> ctlaugh, ie. to make sure you're basing and proposing against the right branch.
[20:06] <beisner> ctlaugh, for the reviewer, please use  "OpenStack Charmers"
[20:07] <beisner> narindergupta, can you please add OpenStack Charmers to your MPs?   (request another review)
[20:08] <narindergupta> beisner: means?
[20:08] <beisner> narindergupta, with each merge proposal, you request a reviewer.   instead of 1 human as a reviewer, we need to have the whole team requested to review it.
[20:09] <mchasal> arosales, still not seeing that gpfs charm after about 30 minutes. Guess we didn't get something quite right.
[20:10] <narindergupta> beisner: ok i did not add any reviewers before but just now added the openstack-charmers to https://code.launchpad.net/~nuage-canonical/charm-helpers/charm-helpers/+merge/252644
[20:10] <ctlaugh> beisner: ok, looks like I need to redo -- I didn't start off of /next
[20:10] <narindergupta> beisner: hope that should be ok
[20:10] <ctlaugh> beisner: thank you for the wiki link
[20:10] <beisner> ctlaugh, lp:~openstack-charmers/charms/trusty/nova-compute/next   is the dev branch
[20:10] <arosales> mchasal: do you have a link to the launchpad branch?
[20:10] <mchasal> https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-mq/devel
[20:10] <mchasal> oops not that
[20:10] <beisner> ctlaugh, you may be able to just do a bzr merge lp:~openstack-charmers/charms/trusty/nova-compute/next on your branch (might try, see how it shakes out).
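beisner's suggestion, spelled out as a bzr session sketch (branch names are taken from the discussion; merge conflicts may still need manual resolution, and this is not guaranteed to be the exact sequence needed):

```
# In a local checkout of the arm64-patch branch:
bzr merge lp:~openstack-charmers/charms/trusty/nova-compute/next
bzr commit -m "Merge nova-compute /next"
bzr push lp:~clark-laughlin/charms/trusty/nova-compute/arm64-patch
# Then re-propose the branch in Launchpad against /next, with
# "OpenStack Charmers" requested as the reviewer.
```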
[20:11] <narindergupta> beisner: once i will fix the other openstack charms then i will resubmit
[20:11] <mchasal> arosales, https://code.launchpad.net/~ibmcharmers/charms/trusty/gpfs/trunk
[20:12] <arosales> mchasal: taking a look
[20:12] <mchasal> thanks
[20:12] <arosales> marcoceppi: what is the ingestion time on the review queue?
[20:12] <beisner> narindergupta, perfect, thank you.
[20:12] <mchasal> I do see 20 minutes listed on the review page as the "don't bother us until it's been this long"
[20:17] <arosales> mchasal: also depends on when that 20 min ingestion starts
[20:18] <mchasal> Sure, I was assuming that meant it would have happened by then no matter where the batch kicks off, but yeah, could be. If we should wait longer, that's fine, just want to fix the problem if there is one.
[20:31] <aisrael> tvansteenburgh: When running bundletester against amulet-driven tests, should allocated machines be automatically terminated?
[20:32] <tvansteenburgh> aisrael: yes, unless you specify not to in tests.yaml
[20:32] <tvansteenburgh> aisrael: reset=false iirc
[20:39] <aisrael> tvansteenburgh: ack, thanks
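The `reset` knob tvansteenburgh mentions lives in the charm's `tests.yaml`; a minimal config sketch (the `reset` key is the one named in the conversation, recalled "iirc", so treat it as unverified):

```yaml
# tests.yaml at the root of the charm, read by bundletester
reset: false    # keep allocated machines between/after test runs
```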
[20:56] <ctlaugh> beisner: I think I am messing something up trying to merge.  When I tried, I ended up getting an email that listed a long list of changes that weren't a part of my change.  I went to my branch, clicked "propose for merging" and took the default branch that was selected.  Is that right?
[20:56] <beisner> ctlaugh, can you paste your branch link that has the changes you're wanting to propose?
[20:57] <ctlaugh> beisner: https://code.launchpad.net/~clark-laughlin/charms/trusty/nova-compute/arm64-patch
[21:01] <narindergupta> beisner: have one question: should i use the charm-helpers stable branch or the charm-helpers branch for changing the other charms? after syncing i am seeing multiple changes in the charmhelpers directory of my openstack charms
[21:13] <arosales> adalbas: mchasal: I think you need to propose your branch
[21:13] <arosales> "To submit your charm for 14.04: bzr push lp:~your-launchpad-username/charms/trusty/your-charms-name/trunk"
[21:13] <arosales> step 9 on https://jujucharms.com/docs/authors-charm-store#submitting
[21:13] <beisner> narindergupta, if you sync from lp:charm-helpers, you will indeed pull in a lot of changes.   but that's not what you want.  you want just the nuage changes.
[21:14] <apuimedo> lazyPower: is it possible to read the config of a relation?
[21:14] <apuimedo> *of a charm you are related to
[21:15] <adalbas> arosales, that is what i have: https://code.launchpad.net/~ibmcharmers/charms/trusty/gpfs/trunk
[21:15] <apuimedo> nevermind, sorry. I think it is quite obvious that it is not :P
[21:16] <adalbas> arosales, i'm using a group from people that write the charms
[21:16] <arosales> understood. I am confirming that with a new charm you may also need to follow
[21:16] <arosales> https://jujucharms.com/docs/authors-charm-store#recommended-charms
[21:17] <mchasal> arosales, thanks, but we were assuming that the team name could stand in for the lp user name.
[21:17] <arosales> mchasal: that bit is fine
[21:18] <adalbas> arosales, ok, so that works a bit differently from having it with your own name
[21:18] <adalbas> i ll look over it
[21:18] <arosales> adalbas: no, whether it's a team name or an individual it should be the same
[21:18] <arosales> lp treats team and folks (in this context) the same.
[21:19] <mchasal> Right, so what bit is not fine here?
[21:19] <adalbas> arosales, ok, so that is also needed for individual users
[21:19] <arosales> adalbas: the main point here is you don't have a target branch to propose against, so you'll need to create an lp bug and attach your GPFS branch to that bug per https://jujucharms.com/docs/authors-charm-store#recommended-charms
[21:19] <adalbas> got it.
[21:19] <adalbas> i understood this was a step further after review.
[21:19] <arosales> adalbas: sorry this could be a bit more clear
[21:19] <mchasal> Ah, step 1 there. Thanks.
[21:19]  * arosales makes note of that
[21:20] <adalbas> arosales, no worries!
[21:20] <arosales> adalbas: now that should get it into the queue.
[21:20] <adalbas> ok!
[21:20] <arosales> adalbas: thanks :-)
[21:21] <adalbas> thank you!
[21:30] <beisner> ctlaugh, thanks again for the nova-compute arch contribution, looks like the merge proposal is all set for /next/.
[21:30] <ctlaugh> beisner: Thank you for all of your help!
[21:32] <beisner> ctlaugh, sure thing, happy to help.
[22:35] <mchasal> arosales, GPFS charm is there now, thanks for the help!
[23:20] <arosales> mchasal: good to hear :-)
[23:20] <arosales> G . P . F . S in the queue :-)
[23:55] <bdx> charmers: https://ask.openstack.org/en/question/58473/heat-access-created-vm-permission-denied-publickey/
[23:55] <bdx> jamespage: Can we make a configuration parameter for heat that allows the "instance_user" to be specified?
[23:56] <bdx> jamespage: I am running into this error https://ask.openstack.org/en/question/58473/heat-access-created-vm-permission-denied-publickey/
[23:56] <bdx> charmers: Can we get some support for the heat charm?
[23:57] <bdx> charmers: I am experiencing this issue https://ask.openstack.org/en/question/58473/heat-access-created-vm-permission-denied-publickey/ and would like to use heat in my juju deployed openstack cloud
[23:58] <bdx> charmers: Unfortunately the aforementioned issue is disallowing the ubuntu user from sshing into any instance deployed with heat
[23:59] <bdx> charmers, jamespage: The issue is entirely holding up my deployments. could we get a default of the ubuntu user for heat's "instance_user"??
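What bdx is asking for would look roughly like this in the heat charm's `config.yaml` (entirely hypothetical: the option did not exist in the charm at the time, and the `instance-user` name, default, and wording are assumptions for illustration):

```yaml
options:
  instance-user:
    type: string
    default: "ubuntu"
    description: |
      Value written to instance_user in heat.conf; the account
      cloud-init provisions on heat-created instances for SSH access.
```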