[01:42] <wgrant> Can I convince Juju to somehow use an alternate port for SSH to a machine?
[01:42] <wgrant> I need something other than OpenSSH exposed on a unit's port 22, and it would be nice to not have to use a firewall to NAT that.
[01:50] <marcoceppi> bdx: I'm able to get maas to boot machines that have something other than eth0
[01:50] <marcoceppi> (em1, eg)
[05:50] <AskUbuntu_> MAAS - Cannot provision node with interface other than eth0 | http://askubuntu.com/q/598139
[07:10] <AskUbuntu_> Juju - Openstack service charm networking configurations and limitations | http://askubuntu.com/q/598156
[08:00] <stub> wgrant: I have never seen an option to change the ssh port, and if you changed it you would break 'juju run', since it uses ssh and has no option for a different port.
[08:09] <wgrant> stub: Right, I can always change the ssh port manually, but I was wondering about adjusting 'juju run' etc.
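One partial workaround in this direction, sketched under the assumption that the unit has a separate internal address juju can keep using (10.0.0.5 is made up): bind sshd to the internal address only, freeing port 22 on the public interface for the other service. The fragment is only rendered to a temp file here for illustration.

```shell
# Hedged sketch: keep `juju run` working by binding sshd to the
# machine's internal address (10.0.0.5 is a made-up example) so the
# public interface's port 22 is free for another service.
# This just writes the sshd_config fragment to a temp file.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
# /etc/ssh/sshd_config fragment
Port 22
ListenAddress 10.0.0.5
EOF
grep '^ListenAddress' "$cfg"
```

Whether juju's own ssh traffic actually arrives via the internal address depends on the provider, so this is a sketch, not a guarantee.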
[11:27] <apuimedo> gnuoy: ping
[11:27] <gnuoy> apuimedo, hello
[11:28] <apuimedo> gnuoy: Hi!
[11:28] <apuimedo> that was fast ;-)
[11:29] <apuimedo> I made a patch that adds midonet support to charm-helpers (similar to the one for calico and n1kv)
[11:29] <apuimedo> and now I'm trying to do the part for neutron-api
[11:29] <gnuoy> ah, ok
[11:29] <apuimedo> *neutron-api charm
[11:30] <apuimedo> gnuoy: (offtopic, cool how your last name includes a reverse gnu)
[11:30] <gnuoy> apuimedo, thanks, a lucky chance of fate :)
[11:30] <apuimedo> the thing is that the midonet plugin needs to write two configs
[11:30] <apuimedo> whereas the config field for neutron_plugins is just a single string
[11:31] <apuimedo> we basically need to put some config in /etc/neutron/dhcp_agent.ini as well
[11:33] <lukasa_work> apuimedo: I think your individual neutron charm will maintain that config
[11:33] <gnuoy> apuimedo, it sounds like you're going to need to update the quantum-gateway charm
[11:33] <apuimedo> so I was wondering if I should just add it to neutron_api_utils.py:register_configs or resource_map
[11:33] <lukasa_work> Oh yes, quantum-gateway is the right one
[11:33] <lukasa_work>  /headdesk
[11:33] <gnuoy> :)
[11:34] <lukasa_work> (FYI, I'm the maintainer of the various Calico charms)
[11:34] <apuimedo> ;-)
[11:36] <gnuoy> apuimedo, have you taken a look at the quantum-gateway charm? It already has n1kv logic in the dhcp_agent.ini template if you want to look at an example
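For reference, the kind of per-plugin additions a dhcp_agent.ini template carries looks roughly like this; a hedged fragment, using real dhcp-agent option names but not taken from the actual n1kv template:

```ini
; Hypothetical dhcp_agent.ini fragment of the sort a plugin-aware
; template might render (option names are standard neutron dhcp-agent
; settings; the values are illustrative):
[DEFAULT]
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
```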
[11:39] <apuimedo> gnuoy: Oh! I'll look into it. I was originally modelling the neutron-api box to have also the metadata and the dhcp agent
[11:39] <apuimedo> that's why I was targeting the neutron-api charm
[11:40] <lukasa_work> apuimedo: You can always deploy quantum-gateway on the same machine as neutron-api
[11:40] <apuimedo> lukasa_work: won't the fact that it is not defined as a subordinate charm prevent that?
[11:41] <gnuoy> lxc to the rescue
[11:41] <lukasa_work> apuimedo: Not necessarily. So long as the charms don't step on each other's toes you can do it.
[11:41] <apuimedo> good to know. I assumed that without 'subordinate: true' juju would block it
[11:41] <lukasa_work> At the moment you can deploy most of the OpenStack 'control' charms to the same node if you want to
[11:42] <gnuoy> apuimedo, which charm are you saying is a subordinate ?
[11:43] <apuimedo> gnuoy: no, I was saying that I thought that the fact that neutron-api and quantum-gateway are not subordinates would prevent them from being deployed together in the same box
[11:43] <gnuoy> ah, I see
[11:44] <gnuoy> apuimedo, If you want to house multiple services on the same box (to save metal) I'd suggest using lxc fwiw
[11:44] <lukasa_work> LXCs are definitely safer
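The LXC route above can be sketched as follows; the machine number 0 is an assumption, and the juju commands are printed here rather than executed:

```shell
# Hedged sketch: colocate neutron-api and quantum-gateway on the same
# metal by targeting LXC containers on machine 0 (machine number is an
# assumption). The deploy commands are echoed, not run.
for charm in neutron-api quantum-gateway; do
  echo "juju deploy $charm --to lxc:0"
done
```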
[11:44] <apuimedo> ok ;-)
[11:45] <lukasa_work> Though as I say I've not had trouble dumping various OpenStack components on the same node
[11:45] <lukasa_work> I wouldn't rely on that state of affairs to continue
[11:46] <apuimedo> thanks lukasa ;-)
[11:46] <lukasa_work> NP =)
[11:47] <apuimedo> lukasa_work: do you run the metadata agent in the quantum-gateway as well?
[11:49] <gnuoy> apuimedo, yes. neutron-server is the only thing on the neutron-api charm
[11:49] <lukasa_work> +1
[11:49] <apuimedo> ok
[11:50] <gnuoy> apuimedo, but from a charm's point of view most neutron settings are exposed via the neutron-api charm and it pushes them out to the other charms
[11:52] <apuimedo> understood
[11:53] <apuimedo> gnuoy: one thing that left me a bit puzzled about neutron-api is that nsx adds a few configs to config.yaml
[11:54] <apuimedo> one of them, specifically 'nsx-controllers'
[11:54] <apuimedo> wouldn't it have been better for those addresses to be retrieved by adding a relation?
[11:55]  * gnuoy goes and peeks at the charm
[11:56] <apuimedo> I guess that's because you can't deploy nsx controllers with a charm, but in my case the midonet api endpoint is deployed with a charm
[11:56] <lukasa_work> I'd add a relation, apuimedo
[11:56] <lukasa_work> I did that for calico-acl-manager
[11:57] <apuimedo> I was thinking that probably the best would be to add a relation between midonet-api and neutron-api that uses the interface neutron-api
[11:57] <apuimedo> and then neutron-api when that relation joins, if it is configured to use midonet, it updates the /etc/neutron/plugin/midonet.ini
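That relation-joined idea can be sketched as a minimal hook fragment; the relation key `midonet_url`, the example URL, and the paths are all hypothetical, and the config is written into a temp dir here instead of /etc/neutron:

```shell
# Hedged sketch of a relation-joined hook on neutron-api reacting to a
# hypothetical midonet-api relation: read the endpoint off the relation
# and render the plugin config. All names are assumptions.
conf_dir="$(mktemp -d)"
# In a real hook this would be: midonet_url="$(relation-get midonet_url)"
midonet_url="http://midonet.example:8080/midonet-api"
cat > "$conf_dir/midonet.ini" <<EOF
[MIDONET]
midonet_uri = $midonet_url
EOF
grep 'midonet_uri' "$conf_dir/midonet.ini"
```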
[11:58] <apuimedo> lukasa_work: which relation did you use? Are you using the plain neutron-api charm?
[12:00] <apuimedo> lukasa_work: I see that you use the regular neutron-api
[12:00] <apuimedo> but I don't understand how `juju add-relation calico-acl-manager neutron-api` works
[12:01] <apuimedo> they don't share any interface that would tell juju which kind of relation is being fulfilled
[13:02] <apuimedo> gnuoy: the template for the midonet plugin referred to in http://bazaar.launchpad.net/~celebdor/charm-helpers/midonet/revision/337 charmhelpers/contrib/openstack/neutron.py
[13:03] <apuimedo> should be in charm-helpers too ( charmhelpers/contrib/openstack/templates/midonet.ini ) or in the neutron-api templates?
[13:06] <apuimedo> I'd put it into neutron-api/templates/midonet.ini
[14:22] <AskUbuntu_> Ceilometer deployment | http://askubuntu.com/q/598297
[15:10] <schkovich> @marcoceppi In the Nginx charm description it is mentioned that when "combined with nginx-site, nginx-php, or nginx-python will allow you to deploy independant VirtualHosts and scale those out." I can't find any of the mentioned charms. :(
[15:11] <marcoceppi> schkovich: https://jujucharms.com/u/hp-discover/website/trusty/3 https://jujucharms.com/u/marcoceppi/php-website/trusty/1
[15:11] <marcoceppi> there is no "nginx-python" yet, that was optimistic of me
[15:12] <schkovich> ok
[15:12] <schkovich> those ones are just named differently
[15:12] <schkovich> i thought that might be the case :)
[15:17] <schkovich> @marcoceppi The Nginx version installed is 1.4.6, which is affected by several security flaws. Adding an option to install Nginx from the stable PPA would be a great improvement. I checked the code and adding the PPA should not be a big deal. Would you welcome a PR, or in bazaar wording, a request to merge?
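The proposed option could look roughly like this install-hook fragment; the option name `install_from_ppa` is hypothetical, and the hook is only written to a temp file here, not executed:

```shell
# Hedged sketch of an nginx charm install-hook fragment that enables
# the nginx stable PPA behind a hypothetical "install_from_ppa" option.
# Rendered to a temp file for illustration rather than run.
hook="$(mktemp)"
cat > "$hook" <<'EOF'
#!/bin/bash
set -e
if [ "$(config-get install_from_ppa)" = "True" ]; then
    add-apt-repository -y ppa:nginx/stable
fi
apt-get update -qq
apt-get install -y nginx
EOF
chmod +x "$hook"
```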
[15:18] <marcoceppi> I always welcome and appreciate merge requests
[15:25] <schkovich> @marcoceppi what type should a configuration option be to get a checkbox in the gui?
[15:26] <marcoceppi> schkovich: boolean
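A matching config.yaml entry, hedged and with a made-up option name, would look like:

```yaml
# Hypothetical config.yaml fragment: a boolean option renders as a
# checkbox in the GUI (the option name "install_from_ppa" is made up).
options:
  install_from_ppa:
    type: boolean
    default: false
    description: Install nginx from the nginx stable PPA.
```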
[15:26] <schkovich> of course, what else :(
[16:04] <my_chiguai> lazyPower: Thanks for the pointer to the elastic search charm. Did you have any notes on what the development process is?
[16:13] <lazyPower> my_chiguai: what are you looking for in terms of dev process? thats kind of a broad subject.
[16:13] <my_chiguai> indeed
[16:14] <my_chiguai> I like the idea of using ansible with juju
[16:14] <my_chiguai> https://micknelson.wordpress.com/2013/11/08/juju-ansible-simpler-charms/
[16:14] <my_chiguai> and states
[16:14] <my_chiguai> https://micknelson.wordpress.com/2013/06/24/easier-juju-charms-with-python-helpers/
[16:14] <my_chiguai> and of course the starter template
[16:14] <my_chiguai> https://jujucharms.com/docs/authors-charm-writing
[16:15] <my_chiguai> My basic path at the moment (and this will be a slow path)
[16:15] <my_chiguai> is going from github repo, development and testing, and deployment
[16:16] <my_chiguai> possibly with vagrant: https://jujucharms.com/docs/config-vagrant
[16:17] <my_chiguai> a lot of moving parts :)
[16:17] <my_chiguai> were there specific resources you found helpful?
[16:18] <my_chiguai> I am trying to pull together "The Guide" for me. Maybe make it a readlist ( http://readlists.com )
[16:22] <my_chiguai> http://readlists.com/6b321992
[16:22] <my_chiguai> in progress
[16:23] <pdobrien> hi @lazypower @asanjar - having a problem getting the hdp-hadoop bundle to install
[16:26] <pdobrien> looks like it's because it can't verify the package signatures
[16:32] <my_chiguai> hmm looks like a number of pages on jujucharms.com are broken. The search results are all 404s. https://jujucharms.com/docs/search/?text=amulet
[16:45] <my_chiguai> some cached links are available
[16:45] <my_chiguai> http://webcache.googleusercontent.com/search?q=cache:Dkl2vQQqiBkJ:https://jujucharms.com/docs/tools-amulet+&cd=1&hl=en&ct=clnk&gl=us&client=safari
[16:48] <rick_h_> my_chiguai: apologies, release is in progress the qa site is up atm http://qa.storefront.theblues.io:6543/docs/1.20/tools-amulet
[16:50] <my_chiguai> no problem and thanks for the updated link!
[16:52] <my_chiguai> anyone know if the github charm mirrors are up to date?
[16:57] <my_chiguai> on http://askubuntu.com/questions/432187/how-can-i-deploy-my-local-juju-charm-with-amulet-framework a comment by Marco Ceppi noted they were, but that was March of last year and elastic search is much more recent than that. :)
[17:01] <marcoceppi> my_chiguai: they are not
[17:01] <my_chiguai> ah thanks marcoceppi
[17:01] <marcoceppi> my_chiguai: we kind of abandoned keeping them in sync as we work towards a better charm store model
[17:02] <marcoceppi> my_chiguai: you can follow these instructions to convert a bzr repo to a git one if that better suits your workflow
[17:03] <marcoceppi> my_chiguai: http://paste.ubuntu.com/10621984/
[17:03] <marcoceppi> my_chiguai: you may also need to run "git reset --hard" at the end
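The paste above holds the actual instructions; a commonly used recipe of that general shape, sketched here with placeholder directory names and the commands printed rather than executed:

```shell
# Hedged sketch (the pastebin above is the authoritative recipe): a
# common bzr-to-git conversion pipes bzr fast-export into git
# fast-import, then the "git reset --hard" mentioned above.
# Directory names are placeholders; printed, not run.
cat <<'EOF'
git init charm-git && cd charm-git
bzr fast-export --plain ../charm-bzr | git fast-import
git reset --hard
EOF
```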
[17:03] <lazyPower> pdobrien: Have you filed a bug? :(
[17:04] <lazyPower> pdobrien: sorry you ran into that, we've been having some issues with the hortonworks repos as of late. They just came back online after an extended outage last week - and they probably just updated.
[17:06] <lazyPower> my_chiguai: ah, sorry about the delay in reply. i just read scrollback
[17:06] <lazyPower> my_chiguai: Actually - it was a learn as you go process for me as well. if you're running ubuntu native - going from concept => staging => production is pretty seamless with bundles, if you're using your personal namespace
[17:08] <pdobrien> lazyPower: did not file a bug yet, wanted to see if it was a known issue.
[17:09] <pdobrien> lazyPower: I was able to get it to deploy by creating a machine, manually updating the repo key, and then deploying the service
[17:09] <lazyPower> pdobrien: ah ok, so its just the repository key that has changed?
[17:09] <my_chiguai> marcoceppi: thanks much!
[17:10] <pdobrien> lazyPower: not sure if it's changed, I found the key id in the install log, then ran sudo apt-key adv --recv-key --keyserver keyserver.ubuntu.com B9733A7A07513CAD to install
[17:10] <my_chiguai> marcoceppi: I'll have to look into updating charms
[17:11] <pdobrien> lazyPower: so not sure if the key changed, or if the charm just isn't looking in the right location anymore
[17:11] <lazyPower> pdobrien: yeah if you could get a bug filed on that i'll make sure it gets routed to the right people and we get a fix in place quickly.
[17:12] <pdobrien> lazyPower: on a somewhat related note, I'm trying to deploy the hdp-hadoop-hive-mysql-4 bundle, and the gui says: "Unable to deploy the bundle. The server returned the following error: invalid request: bundle "bundle-deploy" not found"
[17:12] <pdobrien> lazyPower: will do
[17:13] <lazyPower> pdobrien: https://bugs.launchpad.net/charms/+source/hdp-hadoop - link for filing the charm bug
[17:13] <lazyPower> pdobrien: can you link me to the instructions you're following for my clarification?
[17:14] <pdobrien> lazyPower: for hdp-hadoop?  Just deploying the charm via the gui
[17:15] <lazyPower> pdobrien: i was referring to the hdp-hive-mysql bundle
[17:15] <lazyPower> "bundle "bundle-deploy"" sounds like it may be an incorrect copy/paste stanza somewhere
[17:17] <pdobrien> lazyPower: all I did was find it in the gui and click "Deploy this bundle" - didn't try via cli
[17:18] <lazyPower> ok, let me stand up an env really quick and investigate, ta for the info
[17:25] <lazyPower> pdobrien: this bundle, correct? https://demo.jujucharms.com/bundle/data-analytics-with-sql-like-6/?text=sql-like
[17:25] <pdobrien> lazyPower: I was using https://jujucharms.com/u/lazypower/hdp-hadoop-hive-mysql/4
[17:27] <lazyPower> pdobrien: ah, that's a precursor to the bundle i listed above.
[17:27] <pdobrien> lazyPower: just tried the bundle you linked, and I get the same error
[17:31] <lazyPower> pdobrien: confirmed the bug on my end
[17:39] <lazyPower> pdobrien: I've filed a bug against juju-gui wrt this bug. If you want to follow along you can subscribe on the right.  https://bugs.launchpad.net/juju-gui/+bug/1433706
[17:39] <mup> Bug #1433706:  invalid request: bundle "bundle-deploy" not found <juju-gui:New> <https://launchpad.net/bugs/1433706>
[17:42] <pdobrien> lazyPower: thanks!
[17:49] <murphyslawbbs> Hi, I'm hitting bug https://bugs.launchpad.net/ubuntu/+source/software-properties/+bug/1089389, I was wondering if there is a workaround and if so how I can implement it. Is there a way to add "local" scripts or code so I can set the proxy?
[17:49] <mup> Bug #1089389: juju bootstrap fail behind a proxy when a gpg key must be imported <amd64> <apport-bug> <cloud> <precise> <running-unity> <software-properties (Ubuntu):Triaged> <https://launchpad.net/bugs/1089389>
[17:56] <lazyPower> murphyslawbbs: the only thing that comes to mind is a subordinate - or juju-run against the service. the subordinate doesn't sound helpful as the deployments will fail until the subordinate is run
[17:57] <murphyslawbbs> lazyPower: is there some kind of order to charms so that the proxy would be set before other stuff runs?
[17:57] <lazyPower> so juju-run is probably the best bet - and there's no guarantee of the run executing before the hooks are fired, even if you are judicious about attaching to the node before the hooks fire. (small window between agent coming online and hook execution)
[17:58] <lazyPower> murphyslawbbs: not unless you fork the charms and update the install hook :|
[17:58] <lazyPower> murphyslawbbs: that is one option, however: do that, and when a fix is released move back to the store charms with juju upgrade-charm --switch cs:series/service
[17:59] <murphyslawbbs> lazyPower: ok i'll let that sink in a bit thanks
[18:00] <lazyPower> murphyslawbbs: i just noticed the bug says this is failing on bootstrap
[18:01] <lazyPower> that's not going to work either :( if the bootstrap node can't be stood up it's pretty much dead in the water unless you manually provision the node, update the proxy, then attempt to bootstrap to that existing node that has the proxy config set up
[18:02] <lazyPower> murphyslawbbs: i've done this in the past successfully using maas tagging, and passing that as a constraint.  juju bootstrap --constraints="tags=bootstrap"
[18:03] <murphyslawbbs> lazyPower: does that somehow alter the maas preseed?
[18:04] <murphyslawbbs> lazyPower: If so, wouldn't it be a problem that the deployments inside the bootstrapped machines using lxc wouldn't have the changes?
[18:04] <lazyPower> negative. it's a workaround until it's properly patched, and isn't easily reproducible since there's a lot of manual intervention dependency in that method.
[18:05] <lazyPower> ah yeah, that would more than likely be problematic too
[18:06] <murphyslawbbs> lazyPower: maybe easier all round to create my own trusty image with the keyring and the proxies setup
[18:07] <lazyPower> yeah, sorry i didn't have a better answer murphyslawbbs
[18:08] <murphyslawbbs> lazyPower: oh no it's cool, I have an answer :)
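On the proxy thread above: juju of this era also exposes proxy settings in environments.yaml, though whether they reach the gpg import that the bug covers is exactly what's in question. A hedged, hypothetical fragment:

```yaml
# Hypothetical environments.yaml fragment with juju's proxy settings
# (environment name and proxy URL are made up):
myenv:
  type: maas
  http-proxy: http://proxy.example:3128
  apt-http-proxy: http://proxy.example:3128
```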
[18:26] <pdobrien> lazyPower: I did a fresh deployment of hdp-hadoop and it appears that the gpg key issue is resolved.... probably was a transient issue due to the issues hortonworks was having.  so I won't open a bug.
[18:27] <lazyPower> pdobrien: all right, that much is known. Thank you for trying to repro and following up that it's ok. We've got a known issue where the hortonworks repository dependency constantly goes offline. We'll be working to keep a more reliable mirror in the near future
[18:27] <lazyPower> There's a launchpad group if you're interested in joining the development efforts in terms of feedback and early releases - would you like a link?
[21:14] <AskUbuntu_> New Openstack Autopilot stuck at: In progress - Configure availability zones - 98% | http://askubuntu.com/q/598434
[22:33] <barchetta> anyone solid in juju agent debugging?