[08:46] <jamespage> gnuoy, are https://review.openstack.org/#/q/owner:liam.young%2540canonical.com+status:open ready for review?
[08:46] <jamespage> well the ones passing verification at least...
[08:48] <gnuoy> jamespage, yes, the neutron-gateway problem is similar to the one I had yesterday; I don't think neutron-plugin-metering-agent is a thing in icehouse onwards
[08:53] <jamespage> gnuoy, comments on rmq
[08:54] <jamespage> gnuoy, cinder +2'ed and landing
[08:54] <gnuoy> jamespage, lovely, ta
[11:37] <neiljerram> Morning all!  Charm store appears to have gone offline - any idea when it will be back?
[11:38] <urulama__> neiljerram: we're working on it
[11:38] <urulama> neiljerram: but no estimates for the moment
[11:38] <neiljerram> Thank you - good to know that.
[11:50] <urulama> neiljerram: charm store is back
[11:50] <neiljerram> Thanks!
[12:21] <BrunoR> Hi! a charm using juju block storage deployed to aws does not create ebs-volumes (as expected) but loop-devices ~ juju storage pool config is unchanged/default ~ someone an idea how to fix this?
[12:39] <BlackDex> Hello there
[12:39] <BlackDex> is there a way that i can restart juju agents?
[12:39] <BlackDex> they claim their connection is lost, but it isn't
[12:58] <lazyPower> Greetings from DC everyone o/
[12:58] <lazyPower> Day 2 of the Charm Community team sprint is underway, we invite you to participate. More information here: https://lists.ubuntu.com/archives/juju/2016-April/006966.html
[13:43] <BlackDex_> how can i get all the machines which are used in juju?
[13:44] <rick_h_> BlackDex_: ? in what way? juju status shows all the machines?
[13:44] <rick_h_> BlackDex_: or do you mean in the cloud control panel?
[13:44] <BlackDex_> rick_h_: i need to do a `juju run` on all machines
[13:45] <BlackDex_> currently i can only do that with --machine=ID
[13:45] <BlackDex_> so i want to grep or something to extract only the machines
[13:46] <rick_h_> BlackDex_: oh hmm, thinking
[13:46] <BlackDex_> i need to restart all the jujud agents
[13:46] <BlackDex_> and i don't want to go to every machine and run the script
[13:47] <D4RKS1D3> Hi, can someone help me launch services in openstack with Juju?
[13:47] <BrunoR> BlackDex_: 'juju run --all ...' does not work?
[13:47] <BlackDex_> BrunoR: That runs on all
[13:47] <BlackDex_> also the lxc stuff
[13:48] <BrunoR> BlackDex_: ah ok
[13:52] <rick_h_> BlackDex_: what version of Juju?
[13:52] <rick_h_> BlackDex_: will have to do something with juju status --format=yaml and grep out the ids with bash-foo me thinks
[13:55] <BlackDex_> rick_h_: 1.25.3
[13:56] <marcoceppi> BlackDex_: you'd have to do some fun stuff with awk, but it's possible
[13:56] <magicaltrout> nothing involving awk is fun
[13:56] <BlackDex_> haha
[13:56] <marcoceppi> BlackDex_: I'll get ya a one liner, give me a min
[13:57] <BlackDex_> i now have the list of machine id's in a {1,2,3} ;)
[13:57] <BlackDex_> that works
[13:58] <BlackDex_> for MACHINE_ID in {136,137,138,139,140,141,142,143}; do echo "Machine $MACHINE_ID"; juju run --machine=$MACHINE_ID 'for JUJUD in `find /etc/init -type f -name "jujud-*" -exec sh -c '"'"'basename "$0" | cut -f1 -d. '"'"' {} \;`; do sudo status $JUJUD; done'; done
[13:58] <rick_h_> wheeeeee, that looks like a party
[13:59] <marcoceppi> BlackDex_: I'll get you a juju-run-all-machines plugin
[14:00] <BlackDex_> marcoceppi: That would be nice ;)
[14:00] <rick_h_> BlackDex_: a bug for a --all-machines option to core would be cool
[14:00] <BlackDex_> Would be even better if juju run --machine will just do it when no id is given
[14:00] <BlackDex_> rick_h_: Or that ;)
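The "grep out the ids with bash-foo" idea rick_h_ floated can be sketched roughly like this. A hedged sketch: the sample YAML below is invented stand-in output (machine IDs 136/137); on a live 1.25 controller the pipeline would instead be fed from a real `juju status --format=yaml` call.

```shell
# Invented sample of `juju status --format=yaml` output; real output has
# far more fields, but machine IDs appear as quoted keys under "machines:".
status_yaml='machines:
  "136":
    agent-state: started
  "137":
    agent-state: started
services:
  mysql:
    charm: cs:trusty/mysql'

# Extract the two-space-indented quoted machine IDs under "machines:".
machine_ids=$(printf '%s\n' "$status_yaml" \
  | sed -n '/^machines:/,/^[a-z]/p' \
  | sed -n 's/^  "\([0-9][0-9]*\)":.*/\1/p')

echo "$machine_ids"   # prints 136 then 137, one per line

# With live juju, each ID could then feed `juju run`, e.g.:
#   for id in $machine_ids; do juju run --machine="$id" 'uptime'; done
```

This only picks up top-level machines, not containers nested under them, which matches BlackDex_'s wish to skip the lxc entries.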
[14:06] <BlackDex> rick_h_: If i report that to juju-core in LP that would be fine right?
[14:06] <rick_h_> BlackDex: yes please
[14:06] <BlackDex> on it's way
[14:09] <BlackDex> I'll also add a request for the ability to restart all the jujud agents ;)
[14:17] <marcoceppi> BlackDex: you're not going to like this ;)
[14:23] <BlackDex> i'm waiting to see the horror
[14:25] <marcoceppi> BlackDex: I simplified one line for complexity in another https://gist.github.com/marcoceppi/2b86e80f376ed790198aafdcaf9271e9
[14:26] <marcoceppi> BlackDex: I updated it for readability, https://gist.github.com/marcoceppi/2b86e80f376ed790198aafdcaf9271e9
[14:27] <BlackDex> i like the initctl list part
[14:27] <BlackDex> lets keep it at that ;)
[14:28] <BrunoR> I still have problems using Juju storage https://lists.ubuntu.com/archives/juju/2016-April/006979.html
[14:28] <BlackDex> but thx marcoceppi :)
[14:30] <BlackDex> forked it
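The gist itself isn't reproduced in this log, but the `initctl list` approach BlackDex liked can be sketched. Hedged sketch: the sample output below is invented, following upstart's output format on a 1.25-era machine; real job names depend on the deployment.

```shell
# Invented sample of what `initctl list | grep jujud` might show on one
# machine: one machine agent plus one unit agent, alongside other jobs.
initctl_out='jujud-machine-1 start/running, process 1234
jujud-unit-mysql-0 start/running, process 2345
ssh start/running, process 999'

# Pick out just the jujud job names; on a live machine each would then
# get `sudo restart "$job"` (upstart) to bounce the agent.
jobs=$(printf '%s\n' "$initctl_out" | awk '/^jujud-/ {print $1}')
echo "$jobs"
```

Filtering on the `jujud-` prefix restarts both machine and unit agents without touching unrelated upstart jobs.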
[14:31] <A-Kaser> Hi
[14:33] <D4RKS1D3> Hi, can someone help me? I received "2016-04-05 14:31:55 ERROR juju.cmd supercommand.go:429 cannot connect to API servers without admin-secret"
[14:33] <D4RKS1D3> Does someone know how to solve it?
[14:33] <D4RKS1D3> It works properly for me with the maas environment, but when I change to openstack it does not work
[14:34] <D4RKS1D3> Thanks
[14:45] <jamespage> gnuoy, thedac: quick one? - https://code.launchpad.net/~james-page/charm-helpers/ovs-datapath-type/+merge/290999
[14:45] <thedac> jamespage: sure, I'll take a look
[14:46] <D4RKS1D3> Hi jamespage , could you help me?
[14:50] <jamespage> D4RKS1D3, not something I've seen before
[14:51] <D4RKS1D3> Thanks jamespage
[14:52] <jamespage> D4RKS1D3, looks like some sort of auth problem - which juju version?
[14:52] <thedac> jamespage: merged.
[14:53] <thedac> jamespage: if you have time neutron-gateway apparmor is ready https://review.openstack.org/#/c/299670/
[14:53] <jamespage> thedac, ta
[14:53] <jamespage> thedac, endeavouring to get my dpdk updates done today and then will switch back to reviews...
[14:53] <D4RKS1D3> 1.25.1 james
[14:53] <thedac> understood
[14:55] <BlackDex> D4RKS1D3: which provider do you use or is configured for openstack?
[14:55] <BlackDex> it's complaining that there is no admin-secret in your environments.yaml
[14:55] <BlackDex> for the openstack environment
[14:56] <D4RKS1D3> I have the field password:
[14:56] <D4RKS1D3> the field admin-secret does not exist in the documentation
[14:56] <D4RKS1D3> is this the token of openstack?
[15:00] <D4RKS1D3> BlackDex, if I download the openstack environment from the openstack dashboard I see this information http://pastebin.com/KjMkPEL7
[15:02] <cory_fu> c0s: Have you been using the charmbox for the deployments you've been doing so far?
[15:03] <lazyPower> charmbox \o/
[15:03] <cory_fu> c0s: https://github.com/juju-solutions/charmbox
[15:04] <lazyPower> recommend you pull charmbox:devel if you're testing on 2.0
[15:12] <cory_fu> lazyPower: Why does the charmbox not set {LAYER,INTERFACE}_PATH?
[15:14] <lazyPower> cory_fu - dev does, doesn't look like i got that ported into -stable   https://github.com/juju-solutions/charmbox/blob/devel/install-review-tools.sh#L18
[15:14] <cory_fu> Ah.  Ok
[15:15] <lazyPower> our 1.25 box is going to need a little bit more love before i shelve it and supplant with whats in the devel flavor
[15:15] <lazyPower> cory_fu - if ya file bugs i'll make sure to clean those up before we tag and archive the box
[15:15] <cory_fu> c0s: The starting docs are at https://jujucharms.com/docs/devel/developer-getting-started and all the other items under Developer Guide in the sidebar on that page.
[15:16] <c0s> thanks cory_fu
[15:20] <BlackDex> D4RKS1D3: i need to add admin-secret to that list
[15:21] <D4RKS1D3> I resolved the problem now
[15:21] <D4RKS1D3> The problem was the project name or tenant
[15:21] <D4RKS1D3> and the user and pass were "incorrect"
[15:22] <D4RKS1D3> now I am having other problems, but I can log in properly
[15:22] <D4RKS1D3> thanks for your support BlackDex and jamespage
[15:48] <BlackDex> yw
[16:13] <jcastro_> rick_h_, we tried publish again today and everything worked
[16:13] <jcastro_> \o/
[16:13] <jcastro_> jrwren fixed things
[16:13] <jcastro_> urulama, ^^
[16:13] <rick_h_> jcastro_: <3
[16:14]  * rick_h_ finds a new way to irritate jcastro_ 
[16:14] <jrwren> i'll take the credit, but I swear, I didn't do anything :]
[16:14] <rick_h_> jrwren: always take the credit
[16:17] <jcastro_> rick_h_, "Home Submit a Bug" is my new irritant.
[16:17] <jcastro_> https://jujucharms.com/u/jorge/wiki-scalable/bundle/0
[16:17] <lazyPower_> i'm in ur html'z, removing your <p> tags
[16:18] <rick_h_> lazyPower_: wfm, pr's welcome :P
[16:18] <lazyPower_> rick_h_ if you have me working on any of your HTML i'm dropping support for every browser but lynx
[16:18] <rick_h_> lazyPower_: :)
[16:21] <lazyPower_> it'll be the ugliest throw back to 1993 we've ever seen, but man it'll load fast.
[16:41] <narindergupta1> gnuoy: hi, there is a query on the openstack ovs integration charm from an OPNFV member: where do I see how OVSs are connected to each other?
[16:42] <narindergupta1> gnuoy: do we know which charm is responsible for and how it is done?
[16:46] <thedac> narindergupta1: this is vanilla ovs not ovs-odl?
[16:47] <narindergupta1> thedac: correct vanilla ovs
[16:47] <thedac> vanilla ovs is neutron-openvswitch which runs on nova-compute as a subordinate
[16:47] <narindergupta1> thedac: question was how do they interconnect for east west traffic?
[16:49] <thedac> Between nova-compute and neutron-gateway neutron-openvswitch builds tunnels (GRE or VXLAN). You can show these with `ovs-vsctl show`
[16:49] <thedac> s/Between nova-compute/Between all the nova-compute(s)
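As an illustration of inspecting those tunnels with `ovs-vsctl show`: the snippet below is invented sample output (IPs and port names are made up), but it follows the shape of what the Neutron OVS agent creates on a compute node's br-tun bridge.

```shell
# Invented sample of `ovs-vsctl show` output for one VXLAN tunnel port;
# the agent encodes the remote IP (10.0.10.11) in hex in the port name.
ovs_out='    Bridge br-tun
        Port "vxlan-0a000a0b"
            Interface "vxlan-0a000a0b"
                type: vxlan
                options: {local_ip="10.0.10.10", remote_ip="10.0.10.11"}'

# List the remote endpoints of each tunnel to see which hosts this node
# peers with for east-west traffic.
printf '%s\n' "$ovs_out" | sed -n 's/.*remote_ip="\([0-9.]*\)".*/\1/p'
```

Each nova-compute and neutron-gateway host shows one such port per tunnel peer, so listing the remote_ip values maps out the mesh.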
[16:51] <narindergupta1> thedac: do we have code where tunnel is built?
[16:51] <thedac> openvswitch and neutron-api (neutron-server) handle building the tunnels
[16:51] <thedac> Not our code directly
[16:52] <narindergupta1> thedac: ok, thanks for letting me know and i will pose any further queries
[16:52] <thedac> no problem
[16:52] <narindergupta1> if i get it from
[17:52] <jcastro_> hatch, is there a way to disable having to use the commit button in the ui? like a flag or config setting?
[17:53] <hatch> jcastro_: so you want to auto deploy as soon as you add something to the canvas?
[17:53] <jcastro_> yeah
[17:54] <hatch> no there isn't
[17:55] <jcastro_> if I wishlist it what are the chances we'd consider it?
[17:55] <hatch> jcastro_: are you running into issues with the deployment summary?
[17:55] <jcastro_> for like development, etc.
[17:55] <hatch> jcastro_: slim to none :)
[17:55] <jcastro_> there's just too many clicks
[17:55] <jcastro_> it's like, let me model
[17:55] <jcastro_> and THEN can I commit
[17:56] <hatch> with the new deployment summary adding the 'immediate deploy' back requires us to make a number of assumptions about what the user wants to do
[17:56] <jcastro_> I just want to cut down on the number of dialogs I have to click through
[17:57] <jcastro_> like, it's actually easier to edit bundles by hand than try to make a bundle in the ui
[17:58] <hatch> jcastro_: but don't you add everything to the canvas once then click deploy/commit?
[17:59] <jcastro_> I'm trying to test to show you
[17:59] <jcastro_> but it seems demo.j.c is having some issues?
[17:59] <hatch> hmm appears to be working for me
[17:59] <hatch> or did I just lie...
[18:00] <hatch> oh no, was just slow
[18:00] <hatch> looks like it's working
[18:00] <jcastro_> yeah the search seems to take a long time
[18:00] <hatch> yeah it does
[18:23] <jcastro_> kwmonroe, I notice you guys do like "bundle-local.yaml" etc in your bundles
[18:23] <jcastro_> is that to make it convenient when you're locally deploying or is there a way to use multiple bundle.yaml's in the same bundle?
[18:25] <rick_h_> jcastro_: you can have as many yaml files in there as you want. Juju will just look for the one
[18:27] <jcastro_> right, so the store will only ever show the actual one
[18:27] <jcastro_> I was thinking how we could have a concept of like, flavors for a bundle
[18:27] <jcastro_> for example, I am working on wiki bundles
[18:27] <jcastro_> I have wiki-simple, ,wiki-scalable, and wiki-smooshed
[18:27] <jcastro_> it would be neat if I could have those as multiple yamls
[18:28] <lazyPower> wiki-smooshed :D
[18:28] <jcastro_> and then like, `juju deploy wiki:scalable` or something
[18:28] <jcastro_> `juju deploy wiki --flavor smooshed` or whatever if you don't like the colon
[14:29] <marcoceppi> rick_h_: the idea being that we'd have a clear set of overrides available per flavor: basically constraints and placement, unlike before where overrides were _anything_
[14:30] <marcoceppi> I could have a GCE and Amazon flavor bundle, each with explicit storage and instance-type constraints
[18:31] <jcastro_> yep!
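The flavor idea sketched above isn't an existing juju feature, but the naming convention could look like this. Hypothetical sketch: the flavor names, file layout, and wrapper logic are all invented for illustration.

```shell
# Invented "flavor" naming scheme: one bundle.yaml per flavor, keyed by
# a suffix, mirroring jcastro_'s wiki-simple/scalable/smooshed example.
flavor="smooshed"
bundle_file="bundle-${flavor}.yaml"
echo "$bundle_file"   # prints bundle-smooshed.yaml

# A hypothetical `juju deploy wiki --flavor smooshed` would resolve to
# deploying that file from the bundle's directory.
```

The store would keep showing only bundle.yaml as the default, as rick_h_ notes, with the other files carrying per-flavor constraint and placement overrides.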
[18:44] <kjackal_> cory_fu kwmonroe , just moved the jujusolutions-pack to juju-solutions. Many thanks to c0s who offered the first README version. https://github.com/juju-solutions/jujubigdata-pack
[18:48] <c0s> yay! I am helping ;)
[19:21] <kwmonroe> yeah jcastro_, we have a bundle[-dev|local].yaml for all our bundles.  we use -dev and -local for rapid deployment (though local is now screwed because of what a bare charm name means now)
[19:23] <kwmonroe> for us, -dev pulls latest charms from ~bigdata-dev namespace with no revision, so you know every time you deploy it, it's pulling the latest charms from the store.
[19:23] <c0s> kwmonroe cory_fu I am looking into this marriage of Juju and Bigtop, specifically the resources.yaml file: while installing from a repository (e.g. a Bigtop Hadoop stack release) we don't need to list all the packages for the installation; only a repo URL is needed.
[19:23] <magicaltrout> bah my new website must be rising up google
[19:24] <magicaltrout> because i'm being spammed to f$ck
[19:28] <kwmonroe> c0s: i'm +1 for moving to repos vs juju-resources where it makes sense.. couple questions though:  should the repo url be configurable for bigtop charms?  and does the repo have multiple hadoop versions (like 2.7.1 and 2.7.2) that would suggest each  charm might need a version string?
[19:31] <c0s> kwmonroe: at least in Bigtop we don't mix different versions in the same repo, but sure it is possible, so great point! As for configurable: are the locations of current resources configurable? Looks like they are hard-coded
[19:34] <lazyPower> magicaltrout - ah the woes of being popular
[19:34] <c0s> going to grab a bite. BB in 10
[19:34] <magicaltrout> i wish
[19:35] <magicaltrout> but I can get a good supply of under the counter meds
[19:39] <kwmonroe> yeah c0s, they are hard coded in resources.yaml, which gives us a pseudo configurable option that can be set for deployment, but not changed afterwards.  we had to do that so mbruzek wouldn't nack us for immutable config :)  if we do make the repo configurable, we have to handle a change for the lifecycle of the charm -- that may be ok, but is something to consider (ie: what happens if a user changes the slave repo, but not the namenode).
[19:48] <mbruzek> kwmonroe: immutable config is against the rules, and breaks the user experience.
[19:49] <mbruzek> If you set a configuration variable you would expect it to actually change something in the charm
[19:54] <kwmonroe> see c0s? ^^  that's the wrath we're trying to avoid.
[19:55] <lazyPower> kwmonroe - you're an arbiter of that squirrely wrath too ya know, mr ~charmer
[19:59] <c0s> kwmonroe: mramm I see your life is full of joy~
[21:31] <c0s> kwmonroe cory_fu with changing the format of the artifacts we'll have to be changing juju-solutions/jujuresources as well
[21:32] <cory_fu> c0s: If the artifacts are packages, we can drop jujuresources altogether.  That was just there to make it a bit easier to handle more or less arbitrary .tar.gz files
[21:32] <cory_fu> Also, the plan was to drop it anyway in favor of 2.0 resources
[21:32] <c0s> or just apt-get directly from the handler?
[21:33] <c0s> damn, I suck at Python
[21:35] <cory_fu> c0s: You can call apt-get directly from a handler, but I would recommend using a helper such as https://pythonhosted.org/charmhelpers/api/charmhelpers.fetch.html#charmhelpers.fetch.add_source or the apt layer: https://git.launchpad.net/layer-apt/tree/README.md
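For concreteness, here is a hedged sketch of what adding an apt source boils down to under those helpers. The repo URL and release are placeholders, not a real Bigtop repo; a charm would take them from config or layer options.

```shell
# Placeholder values for illustration only.
repo_url="http://example.com/bigtop/apt"
src_line="deb ${repo_url} trusty main"
echo "$src_line"   # prints: deb http://example.com/bigtop/apt trusty main

# charmhelpers.fetch.add_source(src_line) (or the apt layer) effectively
# writes a line like this under /etc/apt/, after which an apt-get update
# makes the repo's packages installable without listing them as resources.
```

This is why c0s's point holds: with a repo-based install, the charm only needs the one URL rather than per-package entries in resources.yaml.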
[21:35] <c0s> ah, thanks cory_fu - it's getting better by the minute ;)
[21:40] <c0s> cory_fu: interestingly, with Bigtop puppet we don't even need to configure apt sources - it is all done in the Puppet ;)
[21:51] <cory_fu> c0s: Oh, well that's convenient.  You can use that in the charms, right?
[21:51] <c0s> if we call 'puppet apply' from charms - then yes
[21:52] <c0s> still peeling off the layers of juju
[22:44] <terje> Is it possible to specify a separate swift username and secret key in an environment.yaml file?