[06:16] === CyberJacob is now known as zz_CyberJacob
=== zz_CyberJacob is now known as CyberJacob
=== frankban|afk is now known as frankban
=== ionutbalutoiu_ is now known as ionutbalutoiu
=== rohit___ is now known as rohit__
=== mmcc_ is now known as mmcc
[09:08] could somebody check why my MP is not showing on review.juju.solutions? https://code.launchpad.net/~canonical-sysadmins/charms/trusty/apache2/apache2-storage/+merge/298617
[09:09] jacekn: it's probably just broken, although I think there is movement on the new review queue
[09:09] marcoceppi is in Europe at the mo I believe, so he might be able to shed some light
[09:10] yes, some info would be good; I know there were problems with the review queue months ago but they must have been fixed by now
[09:41] jamespag`: We need to connect a charm to keystone. Is there a layer for that relation?
[09:41] aisrael, there is an interface for the keystone relation, which is probably what you want to use
[09:41] it's in the index
[09:42] aisrael, context?
[09:42] jamespag`: openmano charming session w/marks
[09:42] aisrael, right - so does it need to do endpoint registration into the keystone server catalog?
[09:43] aisrael, if it does not, you might want to use the keystone-credentials interface to just get some credentials - really depends
[09:44] there is also an interface for that
[09:44] jamespag`: ack, that's a good starting point, thanks. I'm not up to speed on how deep the keystone integration goes.
[09:46] aisrael, ok - I'm around most of the day - will be out for lunch at about 11:30 UTC
[10:09] jacekn: we're moments away from deploying a new review-queue, but it's stalled because it's not on IS infrastructure and I'm having problems fetching remote resources in our qa environment
[10:12] marcoceppi: thanks, good news
=== jamespag` is now known as jamespge
=== jamespge is now known as jamespage
[13:13] The link to https://jujucharms.com/docs/devel/config-local from https://jujucharms.com/docs/devel/developer-getting-started is broken
[13:26] Hi, I deployed a charm making use of juju resources. I have pushed empty packages to the charm store, and when I deploy the charm I see that it is stuck at "Unknown package_status None"
[13:26] Can anyone help with why we get this status message?
[14:30] ahoy jamespage -- do you know much about this bundle?
[14:30] https://jujucharms.com/openstack-midonet-liberty/bundle/0/
[14:30] we have an incompatible zookeeper change, and it looks like this bundle is the only one using zk.
[14:32] also jamespage, you're the current maintainer of zookeeper, so if you have a sense of how much your charm is used, that will help us in deciding how much effort to put in to make it upgradable / back compat / etc..
[14:37] jamespage: When kwmonroe says "incompatible" he means it's layered, so you can't directly upgrade, and that it adds a required java relation. Otherwise, the interfaces are the same.
[14:40] see kwmonroe told you we defer to cory_fu for explanations ;)
[14:40] lol
[14:41] kwmonroe, cory_fu: hmm - I know midonet uses it - and I think contrail does as well - bbcmicrocomputer would be able to confirm
[14:41] * jamespage feels sheepish about 'current maintainer' status
[14:42] kwmonroe: yes, contrail uses zookeeper
[14:44] jamespage: I think we're comfortable taking over maintainership since it will be a Bigtop charm now, but we're concerned about the transition
[14:44] thx for the info -- we'll run through https://jujucharms.com/requires/zookeeper and see how deep the rabbits live.
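For readers following along, consuming the keystone-credentials interface jamespage mentions looks roughly like the sketch below in a reactive charm. This is a minimal, hedged sketch: the 'credentials.available' state and the accessor names are assumptions, and write_config() is a hypothetical helper - check the interface's README for the real API.

    # Hedged sketch: reactive handler consuming keystone-credentials.
    # State and accessor names are assumptions, not the verified API.
    from charms.reactive import when

    @when('credentials.available')
    def configure_keystone_auth(keystone):
        # write_config() is a hypothetical charm helper
        write_config(
            auth_host=keystone.credentials_host(),
            auth_port=keystone.credentials_port(),
            username=keystone.credentials_username(),
            password=keystone.credentials_password(),
        )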
[14:46] cory_fu, jamespage tells me that you've been discussing actions in the context of reactive. I was wondering if you had an exemplar of how much of reactive to bring up when running an action? Thanks
[14:58] Where do I find how to set the default directory to put lxd containers in? I am using an existing xfs file system instead of a zfs or btrfs file system.
[14:59] Hi lazyPower, are you around?
[14:59] neil__ i am
[14:59] (Just poking the new etcd charm...)
[15:00] Awesome! How's it goin?
[15:00] lazyPower, I've deployed the new charm into my OpenStack cluster, and so far things are not fully hooking up...
[15:00] lazyPower, ...which is I think as expected, because I need to do some work in the charms that use etcd client proxies.
[15:01] how so?
[15:01] Well, I have this charm 'neutron-api', which includes installing etcd to act as a proxy.
[15:02] And right now it's not getting the initial cluster string properly.
[15:02] neil__ https://github.com/juju-solutions/layer-etcd/blob/master/reactive/etcd.py#L156
[15:03] i found the issue. We must have missed a merge in the final bits before moving for release. the interface was updated, but it looks like the invocation block never made it
[15:03] Ah, I was wondering about those lines being commented out :-)
[15:04] give me a couple minutes and i'll get you a fixed revision
[15:04] But then I also wondered if the code under hooks/relations/etcd-proxy was intended to be a better alternative to those lines...
=== caribou_ is now known as caribou
[15:07] How is the code under hooks/relations/etcd-proxy hooked into the rest of the charm? I couldn't find any explicit references to the EtcdProvider class.
[15:07] that code is the interface code :)
[15:10] Oh I see. I think EtcdHelper in the commented-out code should in fact be EtcdProvider (with an appropriate import at the top of that file), because there's nothing else in the charm that has a provide_cluster_string method.
[15:13] One other thing - could the key be 'cluster' instead of 'cluster_string'? 'cluster' is what my two proxy client charms are expecting.
[15:15] (And I believe previous iterations of the etcd charm used 'cluster' - e.g. my latest version before your recent work, at https://github.com/projectcalico/charm-etcd/blob/master/hooks/hooks.py#L80)
[15:31] neil__ - negative, the cluster strings are now computed from the active members coming back from etcdctl
[15:31] the etcdhelper is fully deprecated; it was harder to understand than modeling the client utility.
[15:33] what i'll wind up doing is pulling the member list of who's actively participating in the cluster and sending that over with client credentials. i'm not finding that branch, so i think we legitimately missed porting this
[15:42] neil__ https://github.com/juju-solutions/interface-etcd-proxy/pull/3
[15:44] Understood, I think. There's also what looks like reasonable code in hooks/relations/etcd-proxy/README.md (in case you'd forgotten that).
[15:45] That PR LGTM - thanks.
[15:53] neil__ - https://github.com/juju-solutions/layer-etcd/pull/32
[16:03] petevg: that suggestion to use bigtop::jdk_preinstalled: false may not work.. zk builds off of the bigtop base layer, which will report blocked when_not java.ready :/
[16:03] Darn :-/
[16:03] so we'd have to tweak our java check in the base layer
[16:03] That sounds entertaining.
[16:03] Actually, since we're passing in overrides to the Bigtop class, it might be able to check for that value.
[16:04] let's see what cory_fu comes up with after his current meeting.. it may be a non-issue if we vote to rename zk to zk-bigdata or something
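To make the interface discussion above concrete: the code under hooks/relations/etcd-proxy is the provides half of the relation conversation, and a provides class in that style looks roughly like this. A hedged sketch only - method names and the relation key (note the 'cluster' vs 'cluster_string' question above) should be checked against the real interface-etcd-proxy code.

    # Hedged sketch of a provides-side class like EtcdProvider. The charm
    # layer computes the cluster string (from etcdctl member output, per
    # the discussion above) and publishes it to related proxy units.
    from charms.reactive import RelationBase, hook, scopes

    class EtcdProvider(RelationBase):
        scope = scopes.GLOBAL

        @hook('{provides:etcd-proxy}-relation-{joined,changed}')
        def joined_or_changed(self):
            self.set_state('{relation_name}.connected')

        def provide_cluster_string(self, cluster_string):
            # 'cluster' is the key the proxy client charms above expect
            self.set_remote(data={'cluster': cluster_string})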
[16:04] Cool.
[16:05] I missed the context for this. Are we suggesting not making ZK depend on the external java relation?
[16:10] Can we make the charm use the Bigtop java install by default, but let the java relation override / switch the java when related?
[16:12] lazyPower, Just in a meeting at the mo - will look properly at your layer-etcd PR soon after that finishes.
[16:28] kwmonroe, petevg, kjackal: So it sounds like we'll make the bigtop zookeeper charm xenial-only and leave the existing charm for trusty
[16:28] With that, we could keep the required java relation.
[16:29] lazyPower, Is it a concern that the new integration code in the PR is different from what is suggested in https://github.com/juju-solutions/interface-etcd-proxy/blob/master/README.md ? Should the latter be updated?
[16:29] The relation name changes wouldn't be an issue either
[16:29] We'd have to have separate tests for trusty vs xenial
[16:30] there's a pickle in there cory_fu
[16:31] let's say i deploy hadoop-processing, which is currently gonna use all trusty charms.. including openjdk. then i add xenial zk and it says "blocked waiting for java".
[16:31] so i'm all like "cool, juju add-relation zk openjdk"
[16:31] and then juju's all like "ERROR cannot add relation "openjdk:java hive:java": principal and subordinate applications' series must match"
[16:31] tinwood: The majority of the discussion around actions & reactive is at: https://github.com/juju-solutions/charms.reactive/pull/66
[16:32] cory_fu, yes, jamespage pointed me to that - interesting read. I'm trying to work out if it's possible to get the relation data whilst I'm doing an action. Hmm.
[16:33] kwmonroe: That's a problem with subordinates in general. It would be an issue in mixed-series deployments even if both ZK series were the same charm underneath
[16:33] cory_fu, i.e. I have some data set from the other end, and I want to check/use it in an action.
[16:34] lazyPower, sorry, will be AFK again for a while now - back later
[16:34] tinwood: The goal of that PR would be to be able to use @when and @action together, so your action would be predicated on the relation state and would get the relation instance
[16:35] cory_fu, that would work nicely. In the meantime, can you think of any way to do it with a 'plain' action?
[16:36] anyone familiar with a problem where the bootstrap node constantly has *very* high load? I'm not sure if it's the cause or a symptom, but mongodb is hammering the logs
[16:36] by very high, I mean up around 400-500
[16:36] restarting juju-db brings it down temporarily, but it creeps back up pretty quickly
[16:37] cory_fu, could I set a state, run reactive.main() and then pick up that state and a relation state?
[16:37] plars: i haven't seen that, but i think #juju-dev might know better
[16:37] kwmonroe: thanks
=== frankban is now known as frankban|afk
[16:39] tinwood: You could, and there are some examples of charms that do that. You could also just use RelationBase.from_state('state.name') to get the relation instance directly (or None)
[16:40] cory_fu, that latter thing is JUST the ticket. Thx! (I thought I'd stared at the code, but I must have missed that). I'll try to code it so I can revert to an @action() if it lands. Thanks for your help.
[16:42] tinwood: We don't tend to promote using from_state directly, because it's generally better to make your preconditions clear with the decorators.
[16:42] And no problem, glad to help
[16:43] cory_fu, yes, i understand that - I'm going to try to get it as early as possible so that it can be converted to a 'proper' @action() @when('state') def... form.
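A sketch of the 'plain' action approach tinwood settles on: calling RelationBase.from_state() directly from an action script, with from_state() returning the relation instance or None as cory_fu describes. The state name 'db.available' and the connection_string() accessor are placeholders for illustration.

    #!/usr/bin/env python3
    # actions/report-db - hedged sketch of a plain action using from_state().
    import sys
    sys.path.insert(0, 'lib')  # wherever the charm vendors its Python deps

    from charms.reactive import RelationBase
    from charmhelpers.core.hookenv import action_fail, action_set

    rel = RelationBase.from_state('db.available')
    if rel is None:
        # the relation has not reached the expected state yet
        action_fail('db relation is not ready')
    else:
        # connection_string() is a hypothetical interface accessor
        action_set({'connection-string': rel.connection_string()})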
[16:49] neil__ - sounds good. https://github.com/juju-solutions/interface-etcd-proxy/pull/4
=== X-Istence is now known as x58
=== sarnold_ is now known as sarnold
[18:02] lazyPower, your PRs all look good to me now. Is there a way I can easily pull them together into a complete new charm to try out?
[18:03] neil__ in standup, give me 15 and i'll push an assembled version to my namespace
[18:03] lazyPower, thank you - sorry to keep bothering you!
[18:37] neil__ url: cs:~lazypower/etcd-18 - if this revision works for you, ping me back and i'll propose it against the proper charm
[18:37] lazyPower, thanks, will try that out now
[18:38] lazyPower, the cycle will take about 45 mins though, so please don't hold your breath!
[18:39] :) I'm in no rush, I'm moving this week. I have to admit, Pods + movers = the way to do it.
[18:48] Hi, I have uploaded a resource to the controller using the `juju attach` command, but when I try to fetch the resource using `resource-get`, it keeps on trying to fetch it.. (log: http://pastebin.ubuntu.com/18187857/)
[18:49] geetha - the pastebin looks truncated. can you paste the full log, or use a utility like `pastebinit` (which is apt-get installable) to send that paste over to paste.ubuntu.com?
[18:50] hey guys, I'm having an issue deploying a charm that I've written. It keeps spitting out the following error: ERROR POST https://10.18.150.54:17070/model/3362bfce-179e-44b0-8c02-b6b7bb238d85/charms?series=trusty: URL has invalid charm or bundle name: "local:trusty/charm-DataCenterTopology-0" Does anyone know what I should be investigating to fix this?
[18:50] mskalka - try removing charm- from the name in metadata
[18:51] lazyPower, same error as before
[18:52] mskalka - can you link me to your layer/charm?
[18:53] lazyPower - like to the repo?
[18:53] yep
[18:53] i'll clone it and see if i can reproduce here, and try to help you find a fix
[18:54] lazyPower: you can see the full log here: http://paste.ubuntu.com/18188302/, it keeps on going..
[19:00] lazyPower (etcd), hmm: hook failed: "proxy-relation-joined" for neutron-api:etcd-proxy - just going to investigate further.
[19:01] geetha - it looks like it ran on line 1752
[19:01] i see voting bits below, which tells me the charm went idle
[19:01] how large is the resource you are trying to fetch, geetha?
[19:02] lazyPower (etcd) - http://pastebin.com/r4UiF3LL
[19:02] doh
[19:02] neil__ - sorry, i know what i did there... duh
[19:02] my python is weak today
[19:06] cory_fu: petevg, hive is in the same boat as zookeeper.. the current promulgated hive is non-layered, so it would have a broken upgrade path. fortunately, that one is only precise, so we should be able to drop in a xenial/trusty version without much hurt.
[19:07] so here's what i propose.. we find all the services that might conflict with an existing promulgated non-upgradable charm, then fire off a note to the list. if people need precise hive, they'll need to be explicit in their deployment instructions and bundles. same for zk.
[19:07] lazyPower - that did it! Thanks again!
[19:07] mskalka happy to help :)
[19:08] kwmonroe: I think that makes sense.
[19:11] lazyPower: The resource size is 163M... I have tested it already, it was not taking much time.
[19:13] I am deploying the charm from the charm store and attaching the resource using the 'juju attach' command
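For reference, the attach-and-fetch flow geetha describes goes roughly like this. The application and resource names are placeholders (only the unit path mentioned later in the log suggests an ibm-im charm).

    # Upload a resource to the controller for a deployed application:
    juju attach ibm-im installer=./installer.tar.gz

    # Inside a hook on the unit, resource-get downloads the resource (if
    # needed) and prints the local path of the cached file:
    RESOURCE_PATH=$(resource-get installer) || {
        status-set blocked "installer resource not available"
        exit 0
    }
    tar -xzf "$RESOURCE_PATH" -C /opt/installer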
[19:19] geetha - yeah, without more information i'm not much help :( sorry. This looks like you've followed the correct process. You may have uncovered a bug - can you investigate the logs on your controller to see if there are any errors scrolling in there while it's attempting to fetch the resource?
[19:23] geetha: is it still running?
[19:24] geetha: you might want to 'juju ssh' to the unit and have a look in /var/lib/juju/agents/unit-ibm-im/charm/resources to see if the file is still being fetched
[19:27] for anyone else seeing slow resource fetches, logs and observations would be great in bug 1594924
[19:27] Bug #1594924: resource-get is painfully slow
[19:27] cory_fu: promulgation issue https://github.com/juju/charm/issues/214
[19:27] cory_fu: feel free to add any examples to that issue specifically if you find it occurs with zookeeper
[19:30] neil__ sorry about the delay, i got a bit distracted. cs:~lazypower/etcd-19 - i deployed a hollow charm and verified the data on the wire looked good this time around, no more python errors :)
[19:42] lazyPower, thanks, will give that a go now...
[20:07] lazyPower, while that's coming up, could I ask you about my next step?
[20:07] sure
[20:10] lazyPower, thanks. So, on the machines where I have etcd-using units, I need an etcd proxy that connects over the network on its db side (using 'cluster') and allows localhost access only on its client side. I'm not yet sure if I need TLS security for the localhost connections.
[20:10] you won't, but you'll need to provide the client keys in your proxy configuration
[20:11] you won't be able to communicate with the primary etcd cluster without that key.
[20:11] lazyPower, Yes indeed, I definitely need the proxy to have keys on its connection to the non-proxies. I believe I can get that using the requires part of interface-etcd-proxy - right?
[20:13] neil__ - yep. that interface is reactive-based, so i don't know that you can consume it out of the box in your charm.. you're supporting a non-layered approach, right?
[20:13] lazyPower, Is there a cast-iron argument for saying that TLS security isn't needed for the localhost connections? (I'll be very happy if there is, but I'm not greatly experienced yet in security questions.)
[20:13] neil__ - with that being said, the keys are all there for you: cluster, and the 3 ssl keys.
[20:13] ah, i don't think it's required - that's kind of the point of the proxy, right?
[20:14] that's solely up to you in the implementation details of that proxy
[20:14] i just gave you the pipeline to do so :)
[20:14] Thanks.
[20:16] At the moment, the etcd proxy function is integrated into the charms that need it (neutron-api and neutron-calico). But I was thinking it might be nicer to have a separate etcd-local-proxy charm, and then just to deploy that alongside the other units that need it, and remove the etcd code from neutron-api and neutron-calico.
[20:17] If I did that, I guess it might be quite easy to do the new charm using reactive...?
[20:18] yep!
[20:19] and you can grab that interface code too - the conversation is done for you, just follow the placement guide. You'll need to update the systemd defaults template for etcd so you can declare the TLS options. that's all that's different that i can think of
[20:19] Yes, that's what I was thinking. But what do you mean by the placement guide?
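If the proxy does become its own reactive charm, the requires side of interface-etcd-proxy would be consumed along these lines. A hedged sketch: the state name and the accessors for the cluster string and the three SSL keys are assumptions, and save_certs() is a hypothetical helper.

    # Hedged sketch of an etcd-local-proxy charm consuming the cluster
    # string and client TLS material over the etcd-proxy relation.
    from charms.reactive import when
    from charmhelpers.core.templating import render

    @when('etcd-proxy.available')
    def configure_local_proxy(etcd):
        # accessor names are assumptions; the interface carries the
        # cluster string plus the client cert, key and CA
        save_certs(etcd.client_ca(), etcd.client_cert(), etcd.client_key())
        render('etcd.defaults', '/etc/default/etcd',
               {'cluster': etcd.cluster_string()})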
[20:21] hey everyone, is there any way to specify machines to deploy to outside of the 'deploy --to' command? Like pre-configured in a config file or something similar
[20:22] mskalka, In the bundle, yes.
[20:23] neil__, thanks, I'll check that out
[20:25] mskalka, The complication is, I think, that the supported declarations depend on what version of Juju you're using; and it's not well documented.
[20:25] this may be a bit off topic, but does ubuntu openstack autopilot leverage juju to deploy the services on maas devices?
[20:26] neil__, haha, 'not well documented' seems to be the phrase of the month. It gives me a place to work from though.
[20:26] mskalka, But for Juju 2 the bundle can say, for example: "to: [ bird ]", which means "put the first unit of this service in the same place as bird"
[20:26] neil__, bird in this example being another service?
[20:27] mskalka, yes
[20:27] neil__, that's exactly what I'm looking for actually
[20:27] mskalka, some doc claims that you can specify machine numbers, but I've not got that to work reliably
[20:28] neil__, I'm trying to get a charm deployed to every machine another charm operates on
[20:30] mskalka, hmm, I think that's possible. But you might also want to look up 'subordinate' charms
[20:32] lazyPower, looks like your latest etcd charm is good: on the etcd proxy client machine I now have /etc/default/etcd with the correct ETCD_INITIAL_CLUSTER.
[20:32] neil__, I don't need to access any of the other charms on the machine, just gather information about the machine and its relation to other machines. It's basically a charm to create CRUSH maps for Ceph without actually touching the Ceph charm
[20:33] lazyPower, the etcd proxy is still failing to connect to the cluster, but that's expected because I haven't implemented code to get the keys from the relation and add those into /etc/default/etcd.
[20:34] mskalka, Sounds like you just need 'juju status' then.
[20:35] neil__, if only.. it's got to be able to do some code execution
[20:55] neil__ - excellent
[20:56] neil__ - add the TLS keys to the defaults file and you should be in like Flynn
[20:56] do you need those? i'm pretty sure i have them in my design notes
[20:56] do I need which?
[20:56] the TLS flags to provide in the defaults file
[20:56] sorry, I'm afraid I'm lost...
[20:59] neil__ https://gist.github.com/chuckbutler/3649337495ca3fa95bb6a3bdb11fc70f
[21:01] sorry, still not quite understanding - what is it that supports those flags?
[21:02] neil__: that's what you'll want to set in the defaults file for flags to etcd
[21:03] ran a 40 m cat 5 cable the length of my garden..... then found it's broken
[21:03] woop
[21:03] so, receive and write out those client certificates (the interface code does this for you if you're using that), and use the path where you saved those client certs to populate those flags in the defaults file, and you should be g2g w/ the new etcd implementation
[21:03] magicaltrout - blarg, hardware problems
[21:03] indeed
[21:04] lazyPower, Right, got it, thanks.
[21:06] lazyPower, But I see two sets of possible settings: ETCD_CERT_FILE, ETCD_KEY_FILE, ETCD_TRUSTED_CA_FILE; and ETCD_PEER_CERT_FILE, ETCD_PEER_KEY_FILE, ETCD_PEER_TRUSTED_CA_FILE. For a proxy connecting to a non-proxy, do you know which ones I need? (I would guess the ones without _PEER_, but not at all sure about that.)
[21:06] the ones without PEER assume a server installation
[21:06] those keys are client keys, not server keys :)
[21:07] so i'm 90% certain you want the PEER flags
=== urulama is now known as urulama|___
[21:09] lazyPower, Thanks, I think you're right - I found a note elsewhere that says "etcd proxies communicate with the cluster as peers so they need to have peer certificates." (http://docs.projectcalico.org/en/1.3.0/securing-calico.html)
[21:10] solid :) sounds like you're all set. Let me know how you make out with it. I'm going to be taking off in a bit
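Putting the thread together, the proxy's defaults file would end up looking something like this. A hedged sketch: paths, endpoints and the cluster string are placeholders, and the exact flag set should be checked against the etcd version in use.

    # /etc/default/etcd - sketch for a proxy member
    ETCD_PROXY="on"
    ETCD_INITIAL_CLUSTER="etcd0=https://10.0.0.10:2380,etcd1=https://10.0.0.11:2380"
    ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379"   # localhost-only client side
    # proxies talk to the cluster as peers, hence the _PEER_ variants:
    ETCD_PEER_CERT_FILE="/etc/ssl/etcd/client-cert.pem"
    ETCD_PEER_KEY_FILE="/etc/ssl/etcd/client-key.pem"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/ssl/etcd/ca.pem"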
[21:14] thedac, mojo question for ya
[21:14] cholcombe: sure
[21:15] thedac, i'm working on a mojo spec for gluster. I'm wondering what i should put in the MOJO_SERIES field of `deploy config=gluster-default.yaml delay=0 wait=True target=${MOJO_SERIES}`
[21:15] it says it can't find 'trusty'
[21:15] thedac, i can show you the whole spec if you'd like. i can push it up to a branch
[21:16] target is the name in the config file (gluster-default.yaml) that juju-deployer uses to deploy. In the openstack specs these are named like trusty-liberty and xenial-mitaka
[21:16] But it can be anything you've named it in the bundle file
[21:16] i see
[21:17] thedac, cool, i think that helps :)
[21:17] cool, let me know if it does not
[21:18] thedac, ah yes, this was the thing i was hitting. i ran mojo again and it says 'gluster' not found, available: base. When i try putting base, that also fails
[21:19] thedac, i should prob push this up to lp so i can show ya
[21:19] cholcombe: just as an example of multiple targets in a single bundle file: http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/full.yaml
[21:19] sure, it is probably the bundle file
=== rye is now known as ryebot
[21:27] thedac, http://bazaar.launchpad.net/~xfactor973/mojo/gluster/revision/263
[21:27] ok, I'll take a look
[21:28] thedac, i'm a n00b, i don't know what the heck i'm doing lol
[21:31] cholcombe: based on gluster-default.yaml you want target=base. Are you using juju 2.0? I don't think mojo is 2.0-ready
[21:31] thedac, i believe i'm using juju 1.x
[21:31] ok
[21:31] thedac, does it need a clean juju environment?
[21:32] Well, at least one without gluster deployed
[21:32] ok cool
[21:32] i think it got further
[21:32] it's complaining now about no charm metadata for precise/gluster/metadata.yaml
[21:32] i need to restrict it to trusty/xenial somehow
[21:32] ah, we need some more info in the bundle file.
[21:33] You can set series:
[21:33] cholcombe: let me fix the bundle a bit. :)
[21:33] thedac, much appreciated :)
[21:37] cholcombe: try this: http://pastebin.ubuntu.com/18196928/
[21:37] thedac, interesting
[21:38] thedac, it's doing something \o/
[21:38] :)
=== setuid_ is now known as setuid
[22:09] thedac, that got me to the testing part, where it says it can't find my test script
[22:10] cholcombe: let me look again
[22:10] i assume it copies the mojo dir onto the unit somewhere?
[22:11] yes: /srv/mojo/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE
[22:11] hmm, nothing is on there
[22:11] cholcombe: on your local machine, not the instance
[22:11] ok
[22:12] thedac, i think i have a typo
[22:12] ah yes, test_gluster_store.py vs test_gluster.py
[22:13] right
[22:13] looks like once the service is up it's quick to change stuff and retest
[22:13] Yes, the deploy is the slowest bit
[22:14] thedac, success!
[22:14] \o/
[22:14] :)
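thedac's pastebin has since expired, but from the conversation the fix was a juju-deployer bundle along these lines: a target named 'base' with an explicit series so deployer stops resolving the charm as precise. Service details here are placeholders.

    # gluster-default.yaml - hedged reconstruction; 'base' is the
    # target= name the mojo deploy step refers to
    base:
      series: trusty
      services:
        gluster:
          charm: gluster
          num_units: 3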
[22:17] thedac, alright cool, i'm going to put this up for review
[22:17] sounds good. I am heads down the rest of my afternoon, but I can take a look tomorrow
[22:17] sure thing
[22:17] thedac, thanks a bunch for the help :)