=== CyberJacob|Away is now known as CyberJacob
=== ianous is now known as Phoenix
=== Phoenix is now known as ianous
=== roadmr is now known as roadmr_afk
=== CyberJacob is now known as CyberJacob|Away
[05:24] lazyPower: Ta muchly ;)
=== CyberJacob|Away is now known as CyberJacob
[07:18] jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/cell-support/+merge/239357 if you get a moment. I'd also like to port it to stable as it addresses a bug which affects cells
=== CyberJacob is now known as CyberJacob|Away
[09:46] gnuoy, I took a libbo and pushed two trivial fixes to stable and next for glance
[09:46] kk
[09:46] the api configuration was defining workers twice - one with {{ workers }} and a second with 1
[09:46] not good
[10:03] gnuoy, one thing I have noticed is that I have to do this:
[10:03] -mechanism_drivers = openvswitch,hyperv,l2population
[10:03] +mechanism_drivers = openvswitch,hyperv
[10:04] to really fully disable l2pop
[10:04] otherwise neutron-api continues to send fdb adds, even though the edges ignore them :-)
[10:04] oh, interesting. that's not right
[10:11] gnuoy, going to raise a bug for that
[10:12] Hola, lads. When one does 'juju status', I get a 'charm:' line that says, for instance: "charm: cs:trusty/mongodb-3". What does -3 stand for?
[10:12] (I did plain: juju deploy trusty/mongodb)
[10:13] Mmike: I believe it's the version of the charm that's deployed.
[10:22] jamespage, do you object to the keystone admin password being stored in the peer db?
[12:53] gnuoy, context?
[12:53] why I guess
[12:54] jamespage, Bug#1385105
[12:54] bug 1385105
[12:54] Bug #1385105: keystone identity-admin relation does not support updates to admin-password
[12:56] jamespage, I don't have to store it in peer db. If the leader dies I could update it to whatever the new leader thinks it should be
[12:57] jamespage, mea culpa for merging the identity-admin branch in this state tbh
[12:58] gnuoy, still think I'm missing something
[12:59] jamespage, well, problem 1) If admin-password is set it is not set in the identity-admin relation
[12:59] 2) If there are more than one keystone units then they each set a different password down the relation
[13:00] and point the client to their own individual ip
[13:00] gnuoy, oh because its always retrieved from configuration right?
[13:00] in 1)
[13:00] jamespage, yeah
[13:01] so its ok so long as a) you don't cluster and b) you don't provide an admin password
[13:01] spot on
[13:02] I have a branch to set the password regardless of how it was generated and to use resolve_address(ADMIN) to set the endpoint
[13:02] s/endpoint/service_hostname/
[13:02] but I have two options for the admin password. 1) Have the leader set it and share it via peer_db
[13:03] 2) Change it every time the leader changes, which I think will be fragile
[13:09] Hi any openstack charmers online and willing to help with an openstack deployment?
[13:12] thedellster, wassup?
[13:12] Hi james,
[13:13] Trying to figure out how to deploy a cinder driver from a storage vendor. And how to use the multiple interfaces on my servers. Particularly want to segregate iscsi traffic out from the rest of it. Each of the servers in the cluster has 4 nics.
[13:17] heya noodles775
[13:17] can one of you guys check this out? https://code.launchpad.net/~s-matyukevich/charms/trusty/elasticsearch/elasticsearch-dns-bug-fix/+merge/239547
[13:18] I sent an email to the list on the advice of someone here last night. But it says waiting approval.
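
A rough sketch of option 1 from the keystone admin-password exchange above (the leader generates the password once and publishes it on keystone's peer relation, so every unit hands the same credential down identity-admin). This is illustrative only, not the actual branch discussed: the hookenv calls are standard charmhelpers functions, but is_leader() is a placeholder for whichever leader-election check the charm really uses, and the 'cluster' peer relation name is an assumption.

    # Illustrative sketch: leader-generated admin password shared over the
    # peer relation so all keystone units advertise the same credential.
    # is_leader() is a placeholder; the 'cluster' relation name is assumed.
    import uuid

    from charmhelpers.core.hookenv import (
        config,
        related_units,
        relation_get,
        relation_ids,
        relation_set,
    )


    def is_leader():
        # Placeholder: substitute the charm's real leader-election check.
        raise NotImplementedError


    def get_admin_password():
        """Return the cluster-wide admin password, generating it on the leader."""
        if config('admin-password'):
            return config('admin-password')
        # Reuse a password any peer has already published.
        for rid in relation_ids('cluster'):
            for unit in related_units(rid):
                stored = relation_get('admin-password', unit=unit, rid=rid)
                if stored:
                    return stored
        if not is_leader():
            return None  # non-leaders wait for the leader to publish one
        password = uuid.uuid4().hex
        for rid in relation_ids('cluster'):
            relation_set(relation_id=rid,
                         relation_settings={'admin-password': password})
        return password
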
[13:20] niemeyer, can you check the mailing list queue for approvals?
[13:20] thedellster, ok - so for a cinder driver - you probably need to write a new backend charm - see cinder-ceph or cinder-vmware for examples
[13:21] thedellster, in terms of using a specific network interface for iscsi traffic - the charms did just grow features for splitting out internal/admin/public and data traffic onto different nets - however that does not include iscsi from cinder nodes
[13:21] jcastro: I have been doing that every day.. which one did I miss?
[13:21] it probably just needs plumbing in
[13:22] thedellster, which email address did you send from?
[13:22] jcastro: Which mailing list, that is
[13:22] ndell@nddit.com
[13:22] Which mailing list?
[13:22] juju@lists.ubuntu.com
[13:22] thedellster: Sorry, you'll need to resend your mail
[13:23] The juju mailing list for some reason (its name, maybe?) is a hot spot for spam
[13:23] If it did not reach the list, it means I discarded it in a batch
[13:23] jcastro: sure
[13:23] should I resend now?
[13:23] I'm considering changing the rule for that one list to auto-discard
[13:24] Or rather, auto-reject
[13:24] thedellster: Yes please
[13:24] Resent
[13:24] thedellster: If you want to make sure you never get a message lost, subscribe to the list with the respective email first
[13:24] noodles775, does your team have charm powers? or do they go in the queue like everyone else's?
[13:24] thedellster: But if you send right now I can make sure it's properly filtered
[13:24] and it already has more spam
[13:25] thedellster: Done.. I have also whitelisted your email
[13:25] thedellster: So it won't get held, even if you're not subscribred
[13:25] subscribed
[13:25] Thank niemeyer!
[13:25] thedellster: No problem
[13:26] jamespage it sounds like I might need to look at a different deployment … The problem is that the storage is on a different switching infrastructure.
[13:26] thedellster, which driver?
[13:27] jcastro: we've been able to land changes to charm-helpers, I've not tried landing changes to a charmstore charm.
[13:27] ok
[13:27] Pure storage, also have clients with various hp products like lefthand, 3par
[13:27] and a few with nexenta
[13:28] and netapp.
[13:28] But the one I need to action on right away is pure.
[13:31] thedellster, so do the pure storage arrays present iscsi directly?
[13:31] or via a cinder head?
[13:33] * jamespage reads a bit
[13:33] thedellster, so this is all in Juno by the looks of things - and all the cinder server needs is a few bits of config
[13:33] thedellster, this is pretty much how the cinder-vmware charm works
[13:34] thedellster, so I'd recommend basing a cinder backend charm off of that for your purposes
[13:34] thedellster, in terms of access over a different network - I guess so long as your compute nodes are cabled up and configured correctly, you should be good
[13:39] james I believe the cinder head directs traffic to the array. They require the multipath driver on the nova compute nodes
[13:42] The Pure Storage Volume Driver selects a FlashArray iSCSI port that it can reach and always uses that
[13:42] port when first establishing a connection with the array. If the OpenStack compute nodes have access
[13:42] Best Practices
[13:42] Dedicated Purity User Account
[13:42] Page 26 August 20, 2014
[13:42] to the same SAN subnet as the node where the driver is running, then the compute nodes are able to
[13:42] connect to that port as well.
[13:42] Confirm the following:
[13:42] • From each OpenStack node where the volume driver will run, confirm that you can ping the
[13:42] array's management port and the array's iSCSI ports.
[13:42] • From each of the OpenStack compute nodes that will connect to the FlashArray, confirm that you
[13:42] can ping the array's iSCSI ports.
[13:43] Sorry for the SPAM…. Pasted that in
[13:43] pastebins are your friend :)
[13:44] Sarnold yeah I think they are the channel's friend too. Been a while since I used irc… Feel like I'm in high school again
[13:45] hehe :)
[13:50] jamespage in terms of the different network. Would I just log on to the hosts in question and carve out that nic. Or should I describe them in a juju charm?
=== roadmr_afk is now known as roadmr
[15:38] marcoceppi: about amulet, I'm pushing ahead with some hacky shell scripts in the meantime, which are working OK for now. So no rush on that issue for me.
[15:39] rbasak: cool, there's a new release landing today and we should have another in a week or so. tvansteenburgh has been doing a lot of the development on it as of late so features are once again landing
[15:51] jcastro: I just approved another elasticsearch MP, which shows me as community, so I guess I don't have charm powers :) https://code.launchpad.net/~evarlast/charms/trusty/elasticsearch/add-version-config/+merge/237916
[15:51] ack
[15:53] lazyPower, if you've got time to check out those elasticsearch fixes that would be <3
[15:56] noodles775, what other charms do you guys maintain?
[15:56] maybe we can give you ownership of the ones that you do
[15:59] jcastro: hrm, bloodearnest is helping with the maintenance of the gunicorn charm. I don't think there are any others that our team maintains.
[16:01] jcastro: if Patrick (avoine) is ok with it, I'll happily take it on. I have a bunch of changes planned anyway.
[16:01] yeah I think it would just be easier if you guys didn't have to gate fixes on us
[16:01] like how we do for the openstack guys
[16:01] I'll bring it up during our next team meeting
[16:03] bloodearnest, do you guys have a team?
[16:03] or we can do ~elasticsearch-charmers or something
[16:04] jcastro: might be better if we create ~onlineservices-charmers or similar
[16:04] ok
[16:04] if you create the team and put the right people in it
[16:05] I'll propose ES and gunicorn on the juju list to move over to you?
[16:05] noodles775: whatever group you create just make sure ~charmers is a member and admin as well
[16:05] bloodearnest: ^ (as well)
[16:06] kk
[16:09] jcastro, marcoceppi: https://launchpad.net/~onlineservices-charmers
[16:20] noodles775, bloodearnest: proposal out and I've made it verbally on our team list and it seems to be universally agreed, after our call we'll do the approval bits.
[16:20] Great, thanks.
[16:23] awesome
[16:36] tvansteenburgh: it's in pip, charm-tools, backportpackage for ppa uploading is complaining that distro data is out of date but the package isn't in trusty updates yet
[16:36] so I can't publish to ppa
[16:36] pypi is enough for me
[16:37] i'll update testing as soon as amulet hits pypi
=== roadmr is now known as roadmr_afk
[17:03] how does one join onlineservices-charmers? :]
[17:08] jrwren: you must first walk over the pit of broken glass
[17:09] hazmat, what's your holiday schedule? I have "charm testing" as a charm school on 12/19 but we did it already, I would like to do "Juju on Digital Ocean" with you if you're interested
[17:11] marcoceppi: no problem, already did that 3 times this morning.
[17:15] tvansteenburgh: any issue with moving amulet from my namespace to juju-solutions?
[17:18] marcoceppi: not at all, great idea
[17:21] jcastro, i'm game.. but might want to push to jan
[17:21] tvansteenburgh: 1.8.0 is on pypi
[17:22] marcoceppi: excellent, thanks!
[17:22] Will email the list after ppa uploading starts working again
[17:22] I'm out for the afternoon o/
[17:22] later man
[17:22] hazmat, ok, I'm kind of looking for this calendar year, I'll put you in the next-year pile.
=== urulama is now known as urulama___
=== roadmr_afk is now known as roadmr
=== tvansteenburgh is now known as tvan|lunch
[17:32] jose, I want to make sure I join the right hangout thing this time
[17:32] do I sign in as the UoA user?
[17:36] marcoceppi: Automated tests on charm branches is *awesome* - thanks. I've not done any changes in a while, and was just thinking I had to create HP/ec2 accounts.
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[17:59] There isn't a charmhelpers (python) helper to run a command as a specific user, is there?
=== tvan|lunch is now known as tvansteenburgh
[18:22] jcastro: I'll handle it, don't worry
[18:22] ok
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[18:55] https://plus.google.com/hangouts/_/hoaevent/AP36tYfGTmFkqf0B3qhcmaf_yL1vnc0vvhpiuuCbM530FY0z4Zfjlg
[18:55] here's the hangout for anyone who wants to hang out
[18:55] we'll be doing "Relationships" as a charm workshop here in a few minutes.
=== tdc_ is now known as tdc
[18:57] Hi, I have a question regarding juju-quickstart. Will juju-quickstart attempt to recreate the local environment.jenv file in case it is missing or corrupted? e.g. I have valid AWS credentials and a juju environment running on AWS but the .jenv file is missing, so I cannot execute command line juju commands..
[19:01] WTF look at the signed windows downloads https://launchpad.net/juju-core/+milestone/1.20.10
[19:01] We had 225 downloads for 1.20.9
=== roadmr is now known as roadmr_afk
[19:19] sinzui, that can't be right
[19:20] jcastro, The mac downloads are double so I expect some 550 win downloads
[19:20] yeah but windows says 22,715
[19:20] that can't be right can it?
[19:20] I mean, if so, then awesome!
[19:21] jcastro, 1.20.5 was stable for about 1 month https://launchpad.net/juju-core/+milestone/1.20.5
[19:22] and 1.18.4 was stable for months https://launchpad.net/juju-core/+milestone/1.18.4
[19:23] That 22,715 implies we are getting 1000s of downloads a day
=== roadmr_afk is now known as roadmr
=== roadmr is now known as roadmr_afk
[21:17] noodles775: you can thank tvansteenburgh he's been spearheading that for quite some time
=== CyberJacob|Away is now known as CyberJacob
=== roadmr_afk is now known as roadmr
=== CyberJacob is now known as CyberJacob|Away
=== jcw4 is now known as jw4
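
On the [17:59] question about running a command as a specific user: a minimal standard-library sketch is below. It is not an existing charmhelpers API; it only uses pwd, os and subprocess, and it has to run as root so the setgid/setuid calls in the child are permitted.

    # Minimal sketch: run a command as another user via subprocess.
    # Not a charmhelpers helper; stdlib only, must be invoked as root.
    import os
    import pwd
    import subprocess


    def check_call_as_user(cmd, username):
        """Run cmd (a list of arguments) as `username`, raising on non-zero exit."""
        pw_record = pwd.getpwnam(username)

        def demote():
            # Runs in the child just before exec: drop group first, then user.
            os.setgid(pw_record.pw_gid)
            os.setuid(pw_record.pw_uid)

        env = os.environ.copy()
        env.update({'HOME': pw_record.pw_dir,
                    'USER': username,
                    'LOGNAME': username})
        subprocess.check_call(cmd, preexec_fn=demote, env=env)


    # Example usage: check_call_as_user(['whoami'], 'ubuntu')
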