[08:50] <AskUbuntu> How to get a juju charm's parameters | http://askubuntu.com/q/354747
[11:14] <jamespage> jcastro, marcoceppi: do you know if it's possible to make the juju-gui display icons for locally launched charms?
[11:14] <jamespage> trying to make my demo for ceph day on Wednesday look good
[11:29] <rick_h_> jamespage: no, it's hard-coded logic; only the promulgated charms in the store get their icons displayed
[11:29] <jamespage> rick_h_, any way I can hack that locally?
[11:29] <rick_h_> jamespage: the only way to force that would be to run a local charm store and ingest your local charms, force them to be promulgated in the charmworld db
[11:30] <rick_h_> jamespage: might be worth filing a bug, for demo purposes, for a feature flag that's "display all icons", since the logic is in the client-side code of the gui
[11:30] <rick_h_> I *think* we could still do that, but not 100% sure off the top of my head
[11:31] <rick_h_> jamespage: but even then, we don't have access to the icon files for locally deployed charms. Juju doesn't send that data to the gui
[11:31] <rick_h_> jamespage: so they still have to be in a running charmworld instance, they just don't have to be promulgated at that point
[11:33] <jamespage> rick_h_, ah - I see
[13:21] <marcoceppi> rick_h_: is there any documentation on "setting up your own charmworld"?
[13:21] <rick_h_> marcoceppi: there's a charmworld charm?
[13:21] <rick_h_> marcoceppi: with a readme on setting it up?
[13:22] <rick_h_> marcoceppi: there's also the docs in the charmworld source tree for hacking purposes. Mirrored to RTD http://charmworld.readthedocs.org/en/latest/
[13:22] <marcoceppi> rick_h_: thanks!
[13:23] <marcoceppi> rick_h_: wait, charmworld != charmstore, is it?
[13:23] <rick_h_> marcoceppi: no, not really. People refer to it that way sometimes. charmstore is a confusing mess
[13:23] <rick_h_> charmworld == manage.jujucharms.com which ingests from LP + juju-core charm store
[13:24] <marcoceppi> rick_h_: yeah, so there are no real docs on running your own charm store are there?
[13:24] <rick_h_> marcoceppi: the juju-core one? no idea. Never thought to try it or look
[13:25] <marcoceppi> rick_h_: there's this in the docs, which makes me think you _can_, but I've not seen any way how to https://juju.ubuntu.com/docs/charms-deploying.html
[13:25] <marcoceppi> rick_h_: under "changing the defaults"
[13:26] <rick_h_> interesting
[13:26]  * marcoceppi rummages around
[13:55] <jcastro> man, the incoming queue is crushing us
[13:55] <jcastro> marcoceppi: paul c submitted sensu server and agent!
[14:04] <marcoceppi> jcastro: I've got time today to tend to the queue, since we're post-release for stuff
[14:09] <jcastro> marcoceppi: can you do logstash/kibana first?
[14:09] <jcastro> then the sensu stuff?
[14:11] <marcoceppi> jcastro: ack
[14:12] <jcastro> arosales: out of curiosity I brought up our planning/BP problem to jono as we were talking on Friday
[14:13] <jcastro> and I tossed out "we could just toss everything out and start from scratch"
[14:13] <jcastro> and he heavily +1ed
[14:13] <jcastro> so, that means I don't have baggage if you don't want to for this next cycle
[14:33] <jamespage> jcastro, sorry - I managed to do zero reviews last week
[14:33] <jamespage> work is a bit crazy right now
[14:33] <jamespage> I owe dholbach the same apology
[14:44] <adeuring> marcoceppi: could you have a look at my MP?
[14:44] <marcoceppi> adeuring: yeah, can do
[14:45] <adeuring> marcoceppi: thanks!
[15:07] <sinzui> charmers, a new manage.charmworld.com is being built on gojuju. We are dumping the db to get a copy of featured and qa collections. Any changes you make between now and probably tomorrow will be lost. Do you need to feature any charms or QA any charms in the next 24 hours?
[15:09] <rick_h_> marcoceppi: ^^ since you were talking about hitting the queue
[15:10] <marcoceppi> sinzui rick_h_: we won't be doing any featuring, and this won't affect charm promulgation, etc., correct?
[15:10] <sinzui> correct marcoceppi
[15:10] <rick_h_> marcoceppi: no, ingest should catch/keep up with that fine.
[16:12] <jcastro> jamespage: yeah, I should have not scheduled you on review so close to release, that was my bad.
[18:44] <sconklin> I'm unable to bootstrap the juju environment on my raring maas server. Best I can tell from searching, it's because all my nodes are "allocated to root", and not "ready". How can I return them to ready status?
[19:33] <adam_g> anyone aware of any common issues wrt ssh key auth not working for local containers?
[19:33] <rick_h_> adam_g: yea, the username is ubuntu and juju ssh doesn't seem to work. A manual ssh ubuntu@ip.addr.x.y will work
[19:34] <adam_g> rick_h_, yeah, still no luck tho
[19:34] <rick_h_> adam_g: oh, in that case no. Not seen that
[19:47] <kurt_> jamespage: I've been thinking further about our discussion a few weeks ago about consolidating charms. Is it unwise to colocate the quantum-gateway with any other charms?
[19:53] <kurt_> Here is the layout I'm thinking of. If anyone else has comments on why they think it wouldn't work, or a better way to consolidate, please chime in.
[19:53] <kurt_> http://pastebin.ubuntu.com/6206435/
[19:53] <kurt_> let me repaste that into pastebin
[19:54] <kurt_> There we go: http://pastebin.ubuntu.com/6206449/
[20:10] <kurt_> Comments anyone? :)
[21:02] <_mup_> Bug #1236590 was filed: juju destroy-machine leaves orphaned security groups <juju:New> <https://launchpad.net/bugs/1236590>
[21:20] <jamespage> kurt_, hey
[21:20] <kurt_> jamespage: hi - I think I can consolidate even further
[21:21] <jamespage> kurt_, most likely; I've just re-deployed one of our internal test environments using MAAS and the LXC containers feature in 1.14.1
[21:22] <kurt_> jamespage: I've not played with lxc yet, but have gotten pretty far without needing it.
[21:22] <kurt_> as long as I stick to the rules you laid out before
[21:22] <jamespage> machine 0 runs pretty much everything that can be containerized; mysql, rabbit, cinder, glance, nova-cloud-controller, swift-proxy and keystone
[21:22] <jamespage> with quantum-gateway running alongside the juju bootstrap node on the bare metal
[21:22] <kurt_> but cinder and glance will conflict, right?
[21:22] <kurt_> outside of a container
[21:23] <jamespage> kurt_, not under LXC - all services have their own filesystem and network namespaces
[21:23] <kurt_> and without lxc? there's a problem, right?
[21:23] <jamespage> oh - and the dashboard (under lxc that is)
[21:23] <jamespage> kurt_, yup
[21:24] <_mup_> Bug #1236598 was filed: Machine stuck in juju status if the machine doesn't start <juju:New> <https://launchpad.net/bugs/1236598>
[21:24] <kurt_> ok, if I first want to try this out without containers, give me a sec and I will pastebin you my proposed layout
[21:26] <kurt_> jamespage: http://pastebin.ubuntu.com/6206818/
[21:26] <kurt_> the only thing I really have concerns about is co-locating quantum-gateway on cloud-controller
[21:27] <kurt_> and remember this is really all on VMs
[21:27] <kurt_> (not that that matters)
[21:29] <jamespage> kurt_, just to give you an idea of what I am doing - http://paste.ubuntu.com/6206832/
[21:29] <jamespage> kurt_, the quantum gateway writes /etc/nova/nova.conf so will conflict with the cloud controller charm
[21:30] <kurt_> Ok, so is there a good candidate otherwise, or should it go to its own node?
[21:31] <jamespage> kurt_, if you are not using containers - its own node
[21:31] <jamespage> kurt_, but adding containers is easy
[21:31] <jamespage> juju add-machine lxc:0
[21:31] <jamespage> adds a new lxc container to machine 0
[21:31] <jamespage> which you can then "juju deploy --to 0/lxc/0 mysql"
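The container workflow jamespage lays out above can be sketched as a dry run (the commands are only echoed here, not executed; the machine and container numbers are the examples from the conversation, using juju 1.14.x-era syntax):

```shell
#!/bin/sh
# Dry-run sketch of the LXC placement workflow described above.
# Machine/container numbers are examples from the conversation.
machine=0
container=0

# "juju add-machine lxc:0" creates a new LXC container on machine 0.
add_machine_cmd="juju add-machine lxc:${machine}"

# "juju deploy --to 0/lxc/0 mysql" deploys mysql into that container.
deploy_cmd="juju deploy --to ${machine}/lxc/${container} mysql"

echo "$add_machine_cmd"
echo "$deploy_cmd"
```

Each further `juju add-machine lxc:0` bumps the container index (0/lxc/1, 0/lxc/2, and so on), which is how several services end up packed onto the same physical node.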
[21:32] <kurt_> nice.  But there are still some deployment issues for containers, right?
[21:33] <kurt_> My strategy was to get everything working on regular VMs first, then dive in to containers.  :)
[21:33] <kurt_> I'm about half way through blogging it all.
[21:40] <kurt_> jamespage: thanks for the intro on containers.  I will check it out.  I appreciate the feedback.
[21:40] <jamespage> kurt_, with maas containers are OK
[21:40] <jamespage> should get even better with 1.16.1
[21:41] <adam_g> anyone know the correct way to inspect logs with juju 1.15.0.1? i apparently missed the memo
[21:42] <jamespage> adam_g, urgh
[21:42] <jamespage> adam_g, no idea on that one
[21:42] <jamespage> adam_g, but I just figured out why juju-core does not like talking to the compute api from within serverstack
[21:43] <jamespage> it's not dealing with packet fragmentation well
[21:43] <jamespage> ip link set eth0 mtu 1546
[21:43] <jamespage> and everything comes alive again!
[21:43] <adam_g> jamespage, hmph
[21:43] <sarnold> kurt_: oh cool, where can I find the blog post(s?) when you're done? I want to be better at juju and it'd be nice to learn from your experience -- you've put in a ton of work :)
[21:46] <kurt_> sarnold: sure.  I'm restructuring it now.  I will welcome feedback from everyone when it's ready.
[21:47] <jamespage> adam_g, I think I might need to work in the bits and pieces to drop the mtu in instances using dnsmasq on the gateway nodes
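One common way to do what jamespage describes, having the gateway's dnsmasq hand instances a lower MTU, is DHCP option 26 (interface MTU). A hypothetical config fragment (the path and the MTU value here are assumptions, not from the conversation):

```
# /etc/dnsmasq.d/mtu.conf (hypothetical path; 1454 is an example value)
# DHCP option 26 is "interface MTU"; clients apply it when they take a lease.
dhcp-option-force=26,1454
```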
[21:48] <sarnold> kurt_: thanks :D I'm looking forward to it
[21:49] <kurt_> cheers.  It's taken a lot of effort.
[21:50] <adam_g> jamespage, https://lists.ubuntu.com/archives/juju/2013-September/002998.html FYI
[21:51] <jamespage> sinzui, fyi I just upgraded 1.14.1 running agents to 1.15.1 OK on our openstack deployment
[21:51] <sinzui> oh goody
[21:53] <sinzui> jamespage, was the tools-url: https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60/juju-dist/tools
[21:53] <jamespage> sinzui, well it was not canonistack
[21:53] <sinzui> okay
[21:53] <jamespage> http://10.98.191.34:8080/v1/AUTH_79699f6f71e245b186720f1e2bc03cf0/juju-dist/tools
[21:54] <jamespage> sinzui, but it looks much the same
[21:54] <sinzui> That does match the pattern I expect
[21:59] <jamespage> sinzui, upgrade-juju with maas provider not so happy
[21:59] <sinzui> :(
[22:03] <jamespage> sinzui, hmm - can't juju status any longer...
[22:04] <sinzui> oh, that is worse
[22:04] <sinzui> I have waited 2 hours after a call to upgrade and I still had access using old and new juju
[22:09] <jamespage> sinzui, hmm - looks like the old agent is looking for the new tools in tools/
[22:10] <jamespage> rather than consuming simplestreams
[22:10] <jamespage> that might be my bad
[22:10] <sinzui> jamespage, I think that is correct. 1.14.1 does not know about streams
[22:12] <sinzui> jamespage, Though I wondered if the upgraded bootstrap agent pointed the unit agents where to find the new juju. Since I saw the bootstrap upgrade from tools/, but not the units, maybe the units went to a different location.
[22:14] <jamespage> sinzui, I can't upgrade the bootstrap either with maas
[22:15] <jamespage> sinzui, can't figure out how to get 1.15.1 into the right location
[22:15] <jamespage> if I sync with 1.14.1 it ignores the 1.15.1 tarballs
[22:15] <jamespage> if I do it with 1.15.1 it just pushes tools/releases and tools/streams
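For reference, the layout being discussed is the simplestreams tools tree that 1.15-era clients publish. A rough sketch (the tarball and metadata file names are illustrative, not verified):

```
tools/
  releases/
    juju-1.15.1-precise-amd64.tgz    # example tools tarball
  streams/
    v1/
      index.json                     # simplestreams index
      ...                            # product metadata describing the tarballs
```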
[22:16] <sinzui> yeah, the updated release-public-tools does a lot of fixing up to support old and new
[22:18] <sinzui> jamespage, This is what I have been doing to collect, extract, and organise a tree that can be synced: http://pastebin.ubuntu.com/6206992/
[22:18] <sinzui> ^ This is also why I wonder if I am doing something wrong.
[22:18] <jamespage> sinzui, well I can do that with openstack OK as I just push the tree into swift
[22:18] <jamespage> sinzui, but for maas I have to use sync-tools
[22:18] <sinzui> oh?
[22:19] <sinzui> We will need some legacy support then in sync-tools I think
[22:20] <jamespage> I need that fixed for 1.16.1
[22:20] <jamespage> I need that fixed for 1.16.0 rather