[05:24] <stub> lazyPower: Ta muchly ;)
[07:18] <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/cell-support/+merge/239357 if you get a moment. I'd also like to port it to stable as it addresses a bug which affects cells
[09:46] <jamespage> gnuoy, I took a libbo and pushed two trivial fixes to stable and next for glance
[09:46] <gnuoy> kk
[09:46] <jamespage> the api configuration was defining workers twice - once with {{ workers }} and a second time with 1
[09:46] <jamespage> not good
[10:03] <jamespage> gnuoy, one thing I have noticed is that I have to do this:
[10:03] <jamespage> -mechanism_drivers = openvswitch,hyperv,l2population
[10:03] <jamespage> +mechanism_drivers = openvswitch,hyperv
[10:04] <jamespage> to really fully disable l2pop
[10:04] <jamespage> otherwise neutron-api continues to send fdb adds, even though the edges ignore them :-)
[10:04] <gnuoy> oh, interesting. that's not right
[10:11] <jamespage> gnuoy, going to raise a bug for that
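[Editor's note: the diff above removes l2population from the comma-separated mechanism_drivers value in neutron's ml2_conf.ini. A minimal plain-Python sketch of the same filtering, purely for illustration (the helper name is made up, this is not the charm's template code):]

```python
def disable_l2pop(mechanism_drivers):
    """Strip the l2population mechanism driver from a comma-separated
    mechanism_drivers value, as used in neutron's ml2_conf.ini."""
    drivers = [d.strip() for d in mechanism_drivers.split(',')]
    return ','.join(d for d in drivers if d != 'l2population')
```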
[10:12] <Mmike> Hola, lads. When one does 'juju status', I get a 'charm:' line that says, for instance: "charm: cs:trusty/mongodb-3". What does -3 stand for?
[10:12] <Mmike> (I did plain: juju deploy trusty/mongodb)
[10:13] <Odd_Bloke> Mmike: I believe it's the version of the charm that's deployed.
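[Editor's note: the trailing number is the charm store revision of the deployed charm, bumped each time a new version is published. A trivial illustrative parse of such a URL (plain Python, hypothetical helper name):]

```python
def charm_revision(charm_url):
    """Extract the store revision from a charm URL like 'cs:trusty/mongodb-3'."""
    _, _, rev = charm_url.rpartition('-')
    return int(rev)
```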
[10:22] <gnuoy> jamespage, do you object to the keystone admin password being stored in the the peer db?
[12:53] <jamespage> gnuoy, context?
[12:53] <jamespage> why I guess
[12:54] <gnuoy> jamespage, Bug#1385105
[12:54] <jamespage> bug 1385105
[12:54] <mup> Bug #1385105: keystone identity-admin relation does not support updates to admin-password <keystone (Juju Charms Collection):In Progress by gnuoy> <https://launchpad.net/bugs/1385105>
[12:56] <gnuoy> jamespage, I don't have to store it in peer db. If the leader dies I could update it to whatever the new leader thinks it should be
[12:57] <gnuoy> jamespage, mea culpa for merging the identity-admin branch in this state tbh
[12:58] <jamespage> gnuoy, still think I'm missing something
[12:59] <gnuoy> jamespage, well, problem 1) If admin-password is set it is not set in the identity-admin relation
[12:59] <gnuoy> 2) If there is more than one keystone unit then they each set a different password down the relation
[13:00] <gnuoy> and point the client to their own individual ip
[13:00] <jamespage> gnuoy, oh because its always retrieved from configuration right?
[13:00] <jamespage> in 1)
[13:00] <gnuoy> jamespage, yeah
[13:01] <jamespage> so it's ok so long as a) you don't cluster and b) you don't provide an admin password
[13:01] <gnuoy> spot on
[13:02] <gnuoy> I have a branch to set the password regardless of how it was generated and to use resolve_address(ADMIN) to set the endpoint
[13:02] <gnuoy> s/endpoint/service_hostname/
[13:02] <gnuoy> but I have two options for the admin password. 1) Have the leader set it and share it via peer_db
[13:03] <gnuoy> 2) Change it every time the leader changes, which I think will be fragile
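[Editor's note: option 1 (the leader generates the password once and shares it through the peer relation) can be sketched roughly as below. peer_store/peer_retrieve here are a plain-Python stand-in mimicking the shape of charmhelpers' peerstorage API, not actual charm code, and the password generator is a modern substitute for whatever the charm used at the time:]

```python
import secrets

# Stand-in for the juju peer relation's shared key/value storage.
_peer_db = {}

def peer_store(key, value):
    _peer_db[key] = value

def peer_retrieve(key):
    return _peer_db.get(key)

def get_admin_password(is_leader, config_password=None):
    """An explicitly configured password always wins; otherwise the leader
    generates one once and shares it via peer storage, so followers read
    the same value and a leader change does not regenerate the password."""
    if config_password:
        return config_password
    password = peer_retrieve('admin-password')
    if password is None and is_leader:
        password = secrets.token_hex(16)
        peer_store('admin-password', password)
    return password
```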
[13:09] <thedellster> Hi any openstack charmers online and willing to help with a openstack deployment?
[13:12] <jamespage> thedellster, wassup?
[13:12] <thedellster> Hi james,
[13:13] <thedellster> Trying to figure out how to deploy a cinder driver from a storage vendor. And how to use the multiple interfaces on my servers. Particularly want to segregate iSCSI traffic out from the rest of it. Each of the servers in the cluster has 4 NICs.
[13:17] <jcastro> heya noodles775
[13:17] <jcastro> can one of you guys check this out? https://code.launchpad.net/~s-matyukevich/charms/trusty/elasticsearch/elasticsearch-dns-bug-fix/+merge/239547
[13:18] <thedellster> I sent an email to the list on the advice of someone here last night. But it says it's awaiting approval.
[13:20] <jcastro> niemeyer, can you check the mailing list queue for approvals?
[13:20] <jamespage> thedellster, ok - so for a cinder driver - you probably need to write a new backend charm - see cinder-ceph or cinder-vmware for examples
[13:21] <jamespage> thedellster, in terms of using a specific network interface for iscsi traffic - the charms did just grow features for splitting out internal/admin/public and data traffic onto different nets - however that does not include iscsi from cinder nodes
[13:21] <niemeyer> jcastro: I have been doing that every day.. which one did I miss?
[13:21] <jamespage> it probably just needs plumbing in
[13:22] <jcastro> thedellster, which email address did you send from?
[13:22] <niemeyer> jcastro: Which mailing list, that is
[13:22] <thedellster> ndell@nddit.com
[13:22] <niemeyer> Which mailing list?
[13:22] <thedellster> juju@lists.ubuntu.com
[13:22] <niemeyer> thedellster: Sorry, you'll need to resend your mail
[13:23] <niemeyer> The juju mailing list for some reason (its name, maybe?) is a hot spot for spam
[13:23] <niemeyer> If it did not reach the list, it means I discarded it in a batch
[13:23] <noodles775> jcastro: sure
[13:23] <thedellster> should I resend now?
[13:23] <niemeyer> I'm considering changing the rule for that one list to auto-discard
[13:24] <niemeyer> Or rather, auto-reject
[13:24] <niemeyer> thedellster: Yes please
[13:24] <thedellster> Resent
[13:24] <niemeyer> thedellster: If you want to make sure you never lose a message, subscribe to the list with the respective email first
[13:24] <jcastro> noodles775, does your team have charm powers? or do they go in the queue like everyone else's?
[13:24] <niemeyer> thedellster: But if you send right now I can make sure it's properly filtered
[13:24] <niemeyer> and it already has more spam
[13:25] <niemeyer> thedellster: Done.. I have also whitelisted your email
[13:25] <niemeyer> thedellster: So it won't get held, even if you're not subscribed
[13:25] <thedellster> Thanks niemeyer!
[13:25] <niemeyer> thedellster: No problem
[13:26] <thedellster> jamespage it sounds like I might need to look at a different deployment …  The problem is that the storage is on a different switching infrastructure.
[13:26] <jamespage> thedellster, which driver?
[13:27] <noodles775> jcastro: we've been able to land changes to charm-helpers, I've not tried landing changes to a charmstore charm.
[13:27] <jcastro> ok
[13:27] <thedellster> Pure storage, also have  clients with various hp products like lefthand, 3par
[13:27] <thedellster> and a few with nexenta
[13:28] <thedellster> and netapp.
[13:28] <thedellster> But the one I need to action on right away is pure .
[13:31] <jamespage> thedellster, so do the pure storage arrays present iscsi directly?
[13:31] <jamespage> or via a cinder head?
[13:33]  * jamespage reads a bit
[13:33] <jamespage> thedellster, so this is all in Juno by the looks of things - and all the cinder server needs is a few bits of config
[13:33] <jamespage> thedellster, this is pretty much how the cinder-vmware charm works
[13:34] <jamespage> thedellster, so I'd recommend basing a cinder backend charm off of that for your purposes
[13:34] <jamespage> thedellster, in terms of access over a different network - I guess so long as your compute nodes are cabled up and configured correctly, you should be good
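[Editor's note: the backend-charm pattern jamespage points at (cinder-ceph, cinder-vmware) boils down to a subordinate charm assembling a cinder.conf backend stanza and handing it to the cinder charm over its storage-backend relation. A rough sketch of building such a blob is below; the driver path and option names are hypothetical, not Pure's actual ones, and the exact relation data format should be checked against the cinder-ceph charm:]

```python
import json

def build_backend_config(backend_name, driver, options):
    """Assemble a 'subordinate_configuration'-style blob: a named
    cinder.conf section containing the volume_driver plus whatever
    key/value pairs the vendor driver needs."""
    stanza = [('volume_driver', driver)] + sorted(options.items())
    return json.dumps({
        'cinder': {
            '/etc/cinder/cinder.conf': {
                'sections': {backend_name: stanza},
            }
        }
    })
```

A backend charm would compute this in its relation-joined hook and set it as relation data for the principal cinder charm to render.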
[13:39] <thedellster> james I believe the cinder head directs traffic to the array. They require the multipath driver on the nova compute nodes
[13:42] <thedellster> The Pure Storage Volume Driver selects a FlashArray iSCSI port that it can reach and always uses that port when first establishing a connection with the array. If the OpenStack compute nodes have access to the same SAN subnet as the node where the driver is running, then the compute nodes are able to connect to that port as well.
[13:42] <thedellster> Confirm the following:
[13:42] <thedellster> • From each OpenStack node where the volume driver will run, confirm that you can ping the array's management port and the array's iSCSI ports.
[13:42] <thedellster> • From each of the OpenStack compute nodes that will connect to the FlashArray, confirm that you can ping the array's iSCSI ports.
[13:43] <thedellster> Sorry for the SPAM…. Pasted that in
[13:43] <sarnold> pastebins are your friend :)
[13:44] <thedellster> sarnold yeah, I think they are the channel's friend too. Been a while since I used IRC… Feel like I'm in high school again
[13:45] <sarnold> hehe :)
[13:50] <thedellster> jamespage in terms of the different network, would I just log on to the hosts in question and carve out that NIC? Or should I describe them in a juju charm?
[15:38] <rbasak> marcoceppi: about amulet, I'm pushing ahead with some hacky shell scripts in the meantime, which are working OK for now. So no rush on that issue for me.
[15:39] <marcoceppi> rbasak: cool, there's a new release landing today and we should have another in a week or so. tvansteenburgh has been doing a lot of the development on it as of late so features are once again landing
[15:51] <noodles775> jcastro: I just approved another elasticsearch MP, which shows me as community, so I guess I don't have charm powers :) https://code.launchpad.net/~evarlast/charms/trusty/elasticsearch/add-version-config/+merge/237916
[15:51] <jcastro> ack
[15:53] <jcastro> lazyPower, if you've got time to check out those elasticsearch fixes that would be <3
[15:56] <jcastro> noodles775, what other charms do you guys maintain?
[15:56] <jcastro> maybe we can give you ownership of the ones that you do
[15:59] <noodles775> jcastro: hrm, bloodearnest is helping with the maintenance of the gunicorn charm. I don't think there are any others that our team maintains.
[16:01] <bloodearnest> jcastro: if Patrick (avoine) is ok with it, I'll happily take it on. I have a bunch of changes planned anyway.
[16:01] <jcastro> yeah I think it would just be easier if you guys didn't have to gate fixes on us
[16:01] <jcastro> like how we do for the openstack guys
[16:01] <jcastro> I'll bring it up during our next team meeting
[16:03] <jcastro> bloodearnest, do you guys have a team?
[16:03] <jcastro> or we can do ~elasticsearch-charmers or something
[16:04] <noodles775> jcastro: might be better if we create ~onlineservices-charmers or similar
[16:04] <jcastro> ok
[16:04] <jcastro> if you create the team and put the right people in it
[16:05] <jcastro> I'll propose ES and gunicorn on the juju list to move over to you?
[16:05] <marcoceppi> noodles775: whatever group you create just make sure ~charmers is a member and admin as well
[16:05] <marcoceppi> bloodearnest: ^ (as well)
[16:06] <bloodearnest> kk
[16:09] <noodles775> jcastro, marcoceppi: https://launchpad.net/~onlineservices-charmers
[16:20] <jcastro> noodles775, bloodearnest: proposal out and I've made it verbally on our team list and it seems to be universally agreed, after our call we'll do the approval bits.
[16:20] <noodles775> Great, thanks.
[16:23] <bloodearnest> awesome
[16:36] <marcoceppi> tvansteenburgh: it's in pip, charm-tools, backportpackage for ppa uploading is complaining that distro data is out of date but the package isn't in trusty updates yet
[16:36] <marcoceppi> so I can't publish to ppa
[16:36] <tvansteenburgh> pypi is enough for me
[16:37] <tvansteenburgh> i'll update testing as soon as amulet hits pypi
[17:03] <jrwren> how does one join onlineservices-charmers? :]
[17:08] <marcoceppi> jrwren: you must first walk over the pit of broken glass
[17:09] <jcastro> hazmat, what's your holiday schedule? I have "charm testing" as a charm school on 12/19 but we did it already, I would like to do "Juju on Digital Ocean" with you if you're interested
[17:11] <jrwren> marcoceppi: no problem, already did that 3 times this morning.
[17:15] <marcoceppi> tvansteenburgh: any issue with moving amulet from my namespace to juju-solutions?
[17:18] <tvansteenburgh> marcoceppi: not at all, great idea
[17:21] <hazmat> jcastro, i'm game.. but might want to push to jan
[17:21] <marcoceppi> tvansteenburgh: 1.8.0 is on pypi
[17:22] <tvansteenburgh> marcoceppi: excellent, thanks!
[17:22] <marcoceppi> Will email the list after ppa uploading starts working again
[17:22] <marcoceppi> I'm out for the afternoon o/
[17:22] <tvansteenburgh> later man
[17:22] <jcastro> hazmat, ok, I'm kind of looking for this calendar year, I'll put you in the next-year pile.
[17:32] <jcastro> jose, I want to make sure I join the right hangout thing this time
[17:32] <jcastro> do I sign in as the UoA user?
[17:36] <noodles775> marcoceppi: Automated tests on charm branches are *awesome* - thanks. I've not done any changes in a while, and was just thinking I had to create HP/ec2 accounts.
[17:59] <aisrael> There isn't a charmhelpers (python) helper to run a command as a specific user, is there?
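[Editor's note: the question goes unanswered here. A self-contained sketch of what such a helper could look like, using subprocess with a preexec hook that drops group then user privileges; the helper name is made up, and switching to a *different* user requires running as root:]

```python
import os
import pwd
import subprocess

def run_as_user(username, cmd):
    """Run cmd (a list of args) as the given user and return its stdout.

    The child process drops its gid before its uid (the reverse order
    would lose the permission to change groups).
    """
    pw = pwd.getpwnam(username)

    def demote():
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)

    return subprocess.check_output(cmd, preexec_fn=demote).decode()
```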
[18:22] <jose> jcastro: I'll handle it, don't worry
[18:22] <jcastro> ok
[18:55] <jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYfGTmFkqf0B3qhcmaf_yL1vnc0vvhpiuuCbM530FY0z4Zfjlg
[18:55] <jcastro> here's the hangout for anyone who wants to hang out
[18:55] <jcastro> we'll be doing "Relationships" as a charm workshop here in a few minutes.
[18:57] <Erazm> Hi, I have a question regarding juju-quickstart. Will juju-quickstart attempt to recreate a local environment.jenv file in case it is missing or corrupted? e.g. I have valid AWS credentials and a juju environment running on AWS but a missing .jenv file, so I cannot execute command line juju commands..
[19:01] <sinzui> WTF look at the signed windows downloads https://launchpad.net/juju-core/+milestone/1.20.10
[19:01] <sinzui> We had 225 downloads for 1.20.9
[19:19] <jcastro> sinzui, that can't be right
[19:20] <sinzui> jcastro, The mac downloads are double so I expect some 550 win downloads
[19:20] <jcastro> yeah but windows says 22,715
[19:20] <jcastro> that can't be right can it?
[19:20] <jcastro> I mean, if so, then awesome!
[19:21] <sinzui> jcastro, 1.20.5 was stable for about 1 month https://launchpad.net/juju-core/+milestone/1.20.5
[19:22] <sinzui> and 1.18.4 was stable for months https://launchpad.net/juju-core/+milestone/1.18.4
[19:23] <sinzui> That 22,715 implies we are getting 1000s of downloads a day
[21:17] <marcoceppi> noodles775: you can thank tvansteenburgh he's been spearheading that for quite some time