[03:52] jamespage, gnuoy, wolsen - even with happy A/PTR records, Precise-Icehouse n-c-c/next is broken when ssh migration auth is set. if i substitute the n-c-c stable charm into next.yaml, the problem goes away. break is at or after n-c-c/next rev188.
[03:52] https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1480677
[03:52] Bug #1480677: oslo.messaging.rpc.dispatcher AttributeError: 'Connection' object has no attribute 'connection_errors'
[03:53] bahh wrong link.
[03:53] jamespage, gnuoy, wolsen - https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1500589
[03:53] Bug #1500589: n-c-c/next fails cloud-compute-relation-changed when migration-auth-type set for Precise-Icehouse
[09:12] beisner, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/lp1500589/+merge/273508 should fix Bug #1500589
[09:12] Bug #1500589: n-c-c/next fails cloud-compute-relation-changed when migration-auth-type set for Precise-Icehouse
[10:16] thedac, https://code.launchpad.net/~james-page/charms/trusty/ceph/status - will work on osd and radosgw as well
[10:17] that one works OK - needs unit tests
[10:17] gnuoy, beisner: I fixed up my crappy failing unit tests - apologies for that
[10:17] insufficient patching
[10:17] thedac, http://paste.ubuntu.com/12696040/
[10:18] 'Unit is active' has become policy ....
[10:19] thedac, and - http://paste.ubuntu.com/12696048/
=== IceyEC is now known as Icey
=== lukasa is now known as lukasa_away
=== lukasa_away is now known as lukasa
=== lukasa is now known as lukasa_away
[12:51] hi jamespage, gnuoy, thedac - http://paste.ubuntu.com/12696720/ i'm noticing that with next.yaml deploys, glance status stays in blocked 'Missing relations: messaging' - although the deploy completes successfully otherwise. indeed there's no amqp relation. do we need one?
[12:52] jamespage, ty for the quick n-c fix btw
[12:52] beisner, we should probably just add that
[12:52] it's used for ceilometer notifications
[12:52] jamespage, I'm not sure I agree
[12:53] IS, for one, don't use ceilometer
[12:53] jamespage, right, but i think 'blocked' will eventually really block us when we do more clever 'my thing is really deployed' logic. i imagine deployer, amulet and juju itself will start watching that soon.
[12:53] jamespage, I think we should move it to an optional relation
[12:53] gnuoy, beisner: ok so maybe the glance messaging relation should be optional
[12:53] I'm easy either way
[12:53] so, optional relation then.
[12:53] ;-) yeah that
[12:53] but +1 on it being optional
[12:55] jamespage, gnuoy - either way, since default/next yamls include ceilo, should we relate it to glance in those bundles?
[12:56] I would think so
[12:56] i haven't dug into the code. is glance aware of whether or not ceilo is present
[12:56] ?
[13:01] beisner, nope
[13:01] beisner, +1 to adding it to the bundle tho
[13:03] thedac, gnuoy: how does this look - http://paste.ubuntu.com/12696803/ ?
[13:03] that's ceph-osd
[13:04] jamespage, it looks wonderful, I'd like to hug it
[13:05] gnuoy, unit tests of the atlantic and MPs by the end of today
[13:05] gnuoy, it's basic but I think it's all fairly sound
[13:05] kk, thanks
[13:06] ok, i'll adjust o-c-t bundles.
[13:09] fyi, raised for tracking: bug 1503272
[13:09] Bug #1503272: workload status: amqp relation should be optional
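For context on the 'optional relation' idea above, here is a minimal hedged sketch of how a charm's status assessment can treat amqp as optional. status_set() and relation_ids() are real charm-helpers calls; the interface maps and assess_status() itself are illustrative only, not the actual glance charm code.

# Hedged sketch: report workload status while treating amqp as optional.
from charmhelpers.core.hookenv import status_set, relation_ids

REQUIRED_INTERFACES = {
    'database': ['shared-db'],
    'identity': ['identity-service'],
}
OPTIONAL_INTERFACES = {
    'messaging': ['amqp'],  # only needed for ceilometer notifications
}

def assess_status():
    # Block only on interfaces the charm genuinely cannot run without.
    missing = [name for name, rels in REQUIRED_INTERFACES.items()
               if not any(relation_ids(rel) for rel in rels)]
    if missing:
        status_set('blocked', 'Missing relations: {}'.format(', '.join(missing)))
        return
    # Absent optional interfaces (e.g. amqp on glance) are reported, not blocking.
    optional_missing = [name for name, rels in OPTIONAL_INTERFACES.items()
                        if not any(relation_ids(rel) for rel in rels)]
    msg = 'Unit is ready'
    if optional_missing:
        msg += ' (optional relations not joined: {})'.format(', '.join(optional_missing))
    status_set('active', msg)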
=== lukasa_away is now known as lukasa
[14:02] jamespage: is it possible to limit the amount of units of a charm?
=== lukasa is now known as lukasa_away
=== lukasa_away is now known as lukasa
[14:28] jamespage: wrt ceph, fantastic.
[14:28] wrt glance, +1 to moving messaging to optional. In fact if no one has done it already I will
[14:31] ha! /me just saw gnuoy hug a pastebin
[14:51] jcastro: jose: is there an official way to request a charm be written?
[14:52] the developer portal could really benefit from using Haystack, but I don't see a charm for it in the charm store
[14:57] gnuoy: care to review? https://code.launchpad.net/~thedac/charms/trusty/glance/optional-amqp/+merge/273542
[15:00] thedac, approved, would you mind doing the landing?
[15:00] will do
[15:00] ta
[15:01] I'll wait on osci amulet before doing so
[15:14] gnuoy, fyi, you'll be getting 2 amulet tests for kicking the bot while he/she/it was already running an amulet test on behalf of n-c-c ;-)
[15:15] Can I kick it? It seems I can
[15:15] new commit = retest
[15:15] so the amulet result that is there was for rev 199
[15:16] there will be a new set for rev 200
[15:16] beisner, so I think it fixes the bug you found either way
[15:16] lol
[15:16] yes thanks!
[15:16] np
[15:29] coreycb, thedac - i'm on a normalize-makefiles mission for *os-charms. noticing that some charms aren't lint checking the actions/ dir. when i enable that, some of them have an unused uuid import in actions/openstack_upgrade.py
[15:30] and when i remove that import, unit tests start to fail with 'actions/openstack_upgrade.py'> does not have the attribute 'uuid'
[15:30] beisner, I'd mark that low priority for now
[15:30] agreed
[15:31] coreycb, thedac - if i move forward, we'll either have failing lint checks or failing unit checks.
[15:31] i can just ignore the actions dir for now?
[15:31] beisner, yeah I'd vote for that and a bug
[15:32] Yes, I would not make the change (yet) that checks the actions directory
[15:32] fwiw, some do have it enabled, though for the pause/resume actions
[15:32] be aware ^
[15:32] so that'll force the next guy to make it right ;-)
[15:32] on those ones
[15:37] ok, squashing actions/ dir normalization wrt lint checks for now, will add affected charms to bug/1503340
[15:37] Bug #1503340: actions dir not checked for lint; unused import when checked; unit test fails when lint is resolved.
[15:38] ie. if it's already checking actions/, it will keep checking, but i won't force it to just yet.
[15:48] beisner, sounds good, thanks
[15:48] coreycb, yw
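For the record, the unit-test failure described just above is the usual mock-patching trap: the tests patch the module-level uuid name, so deleting the unused import also removes the patch target. A hedged reconstruction follows; the file and test names are assumptions rather than the actual charm code.

# Hypothetical sketch of the lint-vs-unit-test conflict noted above.
# actions/openstack_upgrade.py (assumed) carries an unused import:
import uuid  # flagged by flake8 once the actions/ dir is linted

# ...while a unit test (assumed) patches that exact module attribute, so
# deleting the import makes mock raise
# "AttributeError: <module ...> does not have the attribute 'uuid'".
from mock import patch

@patch('actions.openstack_upgrade.uuid')
def test_openstack_upgrade(mock_uuid):
    pass

# The unused import and the @patch decorator have to be removed together,
# which is why the cleanup was deferred to bug 1503340 instead of the
# makefile normalization batch.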
[15:51] jamespage, gnuoy, thedac, coreycb - while i'm batch-updating, any input on changing maintainers in metadata.yaml to something generic? it'd be an easy change now. atm, we have a mix of yolanda, adam g, jamespage.
[15:52] Do we have an openstack-charmers@ mail alias? or something like it?
[15:52] ls
[15:52] :)
[15:53] thedac,
[15:53] -rw-rw-r-- 1 ubuntu ubuntu 467 Oct 6 14:09 metadata.yaml
[15:53] ;-)
[15:53] wow, my system is lagging
[15:54] not sure re: mail alias. that's what we should probably do i think
[16:00] coreycb, can you check if you just rcvd a test email from my gmail acct to the openstack charmers list?
[16:05] beisner, I'm not seeing anything
[16:09] beisner, coreycb, thedac - i can take a look at those uuid and lint issues with the upgrade actions just as soon as i'm done with what i'm working on here.
[16:12] coreycb, hrm. no reject or awaiting-moderation reply on my end. i wonder if the public can send to that?
[16:12] https://launchpad.net/~openstack-charmers
[16:13] jamespage, gnuoy, thedac, coreycb - i'm going to ignore the maintainers field for this batch, but let's figure that out before release and i can push another batch to update all of the oscharms.
[16:31] lazypower, ping?
[16:34] mattyw, pong
[16:34] lazypower, hey hey - I'm not sure who else to ping - but if you're busy feel free to move me along...
[16:35] lazypower, wondering why my merge request isn't appearing in the queue https://code.launchpad.net/~mattyw/charms/trusty/mongodb/mongodb-backup/+merge/273544
[16:37] mattyw, it's only an hour old, it's likely to be pending an ingest into review.juju.solutions
[16:37] lazypower, ok cool, so be patient grasshopper is the message :)
[16:47] o/ lazypower, mattyw - our openstack testing bot cares about mongodb (because of ceilometer) and it does some automated testing on proposals there. i've added a pre-review review ;-)
[16:47] https://code.launchpad.net/~mattyw/charms/trusty/mongodb/mongodb-backup/+merge/273544
[16:48] beisner, thanks very much - I've just seen it, I'll make those changes
[16:48] mattyw, cool, thanks!
[16:48] beisner, there was already an action - whose implementation I basically followed - would you like me to fix that up as well?
[16:48] beisner, <3 :)
[16:48] mattyw, /me looks back...
[16:51] mattyw, lazypower - indeed, there was 1 existing action, outside the coverage of lint checks.
[16:52] http://bazaar.launchpad.net/~charmers/charms/trusty/mongodb/trunk/annotate/head:/actions/perf
[16:54] beisner, I'm EODing now, anything you'd like me to do just shove a message on the pr
[16:54] beisner, if I don't see anything new I'll go ahead with the linting fixes
[16:54] mattyw, thanks. i'll pull a few heads together and one of us will advise there.
[16:55] beisner, perfect
[16:55] night all
[17:47] o/ coreycb - look out! here's that batch to normalize makefiles and amulet test dependencies @ http://paste.ubuntu.com/12698366/ as discussed, did not address upgrade actions lint or maintainer bits yet.
[18:12] beisner, ok
[18:24] is charmhelpers built for python 2.x?
[18:40] coreycb, i believe you said pulls are welcome? :) https://code.launchpad.net/~xfactor973/charm-helpers/status_set-enum
[18:40] coreycb, oh wait, that has some junk from another branch in there. i need to clean it up
[18:41] cholcombe, hmm dunno if I said that or not, but they are!
[18:42] :)
[18:42] cholcombe, actually I can't land to charm-helpers though
[18:42] coreycb, who lands that stuff?
[18:42] cholcombe, for openstack, typically gnuoy, jamespage or dosaboy
[18:43] coreycb, oh i meant for charm-helpers
[18:43] cholcombe, right, I was referring to openstack-related charm-helpers code
[18:43] gotcha
[18:44] coreycb, alright i'll add one of them to the review. thanks!
[18:44] cholcombe: there's a group of charm-helpers maintainers that curate that
[18:44] marcoceppi, even better
=== lukasa is now known as lukasa_away
=== lukasa_away is now known as lukasa
[21:53] core, dev: hey, what's going on everyone? I'm wondering which part of cinder (api, volume, scheduler) provides the image-service interface?
[21:54] core, dev: I'm thinking it must be cinder-api..
[21:54] or scheduler, or both...
[22:01] bdx: not sure I understand your question
[22:02] bdx: the cinder charm provides image-service which connects to glance
[22:04] marcoceppi: In the case you deploy cinder in a decoupled fashion, with separate volume units..... you end up with different units running different cinder services...
[22:04] bdx: right, but we don't have a decoupled charm (this is where my knowledge of the openstack charm ecosystem begins to fade)
[22:04] marcoceppi: https://jujucharms.com/cinder/trusty/29
[22:05] bdx: beisner, coreycb or ddellav might be able to help more
[22:05] bdx: OIC
[22:05] it's probably api
[22:05] but that's a guess
[22:06] marcoceppi: how could I perform introspection into this? ...there are a few different relations made to cinder that I have questions about
[22:06] bdx: well, a better readme would be my guess
[22:08] beisner, coreycb, ddellav, marcoceppi: so .... I have 3 availability zones .... in order to use cinder across all 3
[22:08] I need to run the cinder volume service in the 3 separate locations
[22:09] in which I configure each of the 3 cinder volume services to include a config-flag that specifies each zone
[22:10] such that "cinder availability-zone-list" shows all three zones are enabled
[22:12] what I am trying to figure out is what relations should be made from the cinder component's interfaces to what other charms
[22:13] bdx: well it probably wouldn't hurt to connect them all to glance, tbh
[22:13] bdx: let me see if I can find a bundle with cinder spun off
[22:18] marcoceppi: here's a small script of what I'm doing to give an idea https://gist.github.com/jamesbeedy/6429af503c3581ba9e7e
[22:19] I have most of the relations dialed... I feel like I should just test the last few to see what happens
[22:21] bdx: whoa that is, whoa
[22:21] ha
[22:21] bdx: I may be able to help make that script a little more manageable
[22:21] oh... do tell
[22:21] bdx: what does charmconf.yaml have in it?
[22:23] marcoceppi: http://paste.ubuntu.com/12699829/
[22:23] bdx: yeah, you basically want a bundle
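Picking up the 'you basically want a bundle' point: a hedged sketch of modelling the three-AZ, volume-only cinder services declaratively with amulet (which the testing bot above already uses) rather than a shell script of deploy/set/add-relation calls. The service names, the enabled-services option and the storage_availability_zone flag are assumptions drawn from the conversation, not a verified bundle; check the cinder charm's config.yaml before relying on them.

# Hedged sketch only: one api/scheduler cinder service plus a volume-only
# cinder service per availability zone, modelled with amulet rather than a
# script. Option names are assumptions, not verified against the charm.
import amulet

d = amulet.Deployment(series='trusty')

d.add('glance')
d.add('cinder', units=1)
d.configure('cinder', {'enabled-services': 'api,scheduler'})
d.relate('cinder:image-service', 'glance:image-service')

for zone in ('az1', 'az2', 'az3'):
    name = 'cinder-volume-{}'.format(zone)
    d.add(name, charm='cinder', units=1)
    d.configure(name, {
        'enabled-services': 'volume',
        'config-flags': 'storage_availability_zone={}'.format(zone),
    })
    # The volume services still need the usual shared-db/amqp/identity-service
    # relations; only glance is shown here to keep the sketch short.
    d.relate('{}:image-service'.format(name), 'glance:image-service')

d.setup(timeout=900)  # deploys the whole model and waits, no manual sleeps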
[22:24] Hi! Please, is there anyone able & willing to help me with maas & juju networking? Trying to figure out how to expose juju openstack services to the public network
[22:25] JerryK_: I can certainly try
[22:31] thanks! The thing is I have the maas controller connected to internet + private network. Nodes are connected to private network + public. Everything I install through juju is exposed on the private network but I would like to move some of those services to the public one ...
[22:32] JerryK_: what maas version do you have?
[22:32] Or maybe I just misunderstood how to make "the right" network layout ...
[22:32] 1.8.2
[22:33] and juju 1.24.6
[22:35] e.g. can't find any doc on how to work with networks in juju. I see the networks section with the CIDR as provided from maas in juju status, but that's all
[22:58] bdx: okay, took forever, https://gist.github.com/marcoceppi/1029c03170a35ca48a10 I got lazy and didn't add all the relations, but we have a way to model large scale deployments without having to write scripts and waits
[22:58] bdx: doesn't answer your question about relations, but it should be something that's easier to maintain over time
[22:59] JerryK_: so networking in maas is still a bit lite, do you have multiple interfaces in the services deployed with juju? or is it just a single private network?
[22:59] JerryK_: fwiw, 1.9 of MAAS makes networking and storage a first-class tenant, it'll be so much easier in the next release
[23:02] marcoceppi: that is awesome. thanks!
[23:03] marcoceppi: yeah, heard about that. The "1.9.alpha" code still feels a little bit unstable but I'll give it a try
[23:05] marcoceppi: juju goes directly for the network managed by maas. The node has two NICs - one for the private (maas-managed) network and a second for the public network