[09:08] <jacekn> could somebody check why my MP is not showing on review.juju.solutions? https://code.launchpad.net/~canonical-sysadmins/charms/trusty/apache2/apache2-storage/+merge/298617
[09:09] <magicaltrout> jacekn: its probably just broken, although I think there is movement on the new review queue
[09:09] <magicaltrout> marcoceppi is in europe at the mo I believe so might be able to shed some light
[09:10] <jacekn> yes some info would be good, I know there were problems with the review queue months ago but they must have been fixed by now
[09:41] <aisrael> jamespag`: We need to connect a charm to keystone. Is there a layer for that relation?
[09:41] <jamespag`> aisrael, there is an interface for the keystone relation which is probably what you want to use
[09:41] <jamespag`> its in the index
[09:42] <jamespag`> aisrael, context?
[09:42] <aisrael> jamespag`: openmano charming session w/marks
[09:42] <jamespag`> aisrael, right - so does it need to do endpoint registration into the keystone server catalog?
[09:43] <jamespag`> aisrael, if it does not you might want to use the keystone-credentials interface to just get some credentials - really depends
[09:44] <jamespag`> there is also an interface for that
[09:44] <aisrael> jamespag`: ack, that's a good starting point, thanks. I'm not up to speed on how deep the keystone integration goes.
[09:46] <jamespag`> aisrael, ok well around most of the day - will be out for lunch at about 11:30 UTC
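[Editor's note] The interfaces jamespag` points at are pulled into a layered charm via layer.yaml and wired up in metadata.yaml. A minimal sketch, assuming the interface name from the interfaces index and a hypothetical relation name:

```yaml
# layer.yaml -- hypothetical charm that only needs credentials, not
# endpoint registration, so it uses the keystone-credentials interface
includes:
  - 'layer:basic'
  - 'interface:keystone-credentials'

# metadata.yaml (fragment) -- the relation name 'identity-credentials'
# is illustrative; check the keystone charm's provides section
requires:
  identity-credentials:
    interface: keystone-credentials
```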
[10:09] <marcoceppi> jacekn: we're moments away from deploying a new review-queue, but it's stalled because it's not on IS infrastructure and I'm having problems fetching remote resources in our qa environment
[10:12] <jacekn> marcoceppi: thanks, good news
[13:13] <eeemil> Link to https://jujucharms.com/docs/devel/config-local from https://jujucharms.com/docs/devel/developer-getting-started is broken
[13:26] <shilpa> Hi, I deployed a charm making use of juju resources, I have pushed empty packages to the charm store, when i deploy the charm, i see that the charm is stuck at Unknown package_status None
[13:26] <shilpa> Can anyone help why we get this status message ?
[14:30] <kwmonroe> ahoy jamespage -- do you know much about this bundle?
[14:30] <kwmonroe> https://jujucharms.com/openstack-midonet-liberty/bundle/0/
[14:30] <kwmonroe> we have an incompatible zookeeper change, and it looks like this bundle is the only one using zk.
[14:32] <kwmonroe> also jamespage, you're the current maintainer of zookeeper, so if you have a sense of how much your charm is used, that will help us in deciding how much effort to put in to make it upgradable / back compat / etc..
[14:37] <cory_fu> jamespage: When kwmonroe says "incompatible" he means it's layered, so you can't directly upgrade, and that it adds a required java relation.  Otherwise, the interfaces are the same.
[14:40] <magicaltrout> see kwmonroe told you we defer to cory_fu for explanations ;)
[14:40] <kwmonroe> lol
[14:41] <jamespage> kwmonroe, cory_fu: hmm - I know midonet uses it - and I think contrail does as well - bbcmicrocomputer would be able to confirm
[14:41]  * jamespage feels sheepish about 'current maintainer' status
[14:42] <bbcmicrocomputer> kwmonroe: yes, contrail uses zookeeper
[14:44] <cory_fu> jamespage: I think we're comfortable taking over maintainership since it will be a Bigtop charm now, but we're concerned about the transition
[14:44] <kwmonroe> thx for the info -- we'll run through https://jujucharms.com/requires/zookeeper and see how deep the rabbits live.
[14:46] <tinwood> cory_fu, jamespage tells me that you've been discussing actions in the context of reactive.  I was wondering if you had an exemplar of how much of reactive to bring up when running an action? Thanks
[14:58] <drbidwell> Where do I find how to set the default directory to put lxd containers?  I am using an existing xfs file system instead of a zfs or btrfs file system.
[14:59] <neil__> Hi lazyPower, are you around?
[14:59] <lazyPower> neil__ i am
[14:59] <neil__> (Just poking the new etcd charm...)
[15:00] <lazyPower> Awesome! Hows it goin?
[15:00] <neil__> lazyPower, I've deployed the new charm into my OpenStack cluster, and so far things are not fully hooking up...
[15:00] <neil__> lazyPower, ...which is I think as expected because I need to do some work in the charms that use etcd client proxies.
[15:01] <lazyPower> how so?
[15:01] <neil__> Well I have this charm 'neutron-api', which includes installing etcd to act as a proxy.
[15:02] <neil__> And right now it's not getting the initial cluster string properly.
[15:02] <lazyPower> neil__ https://github.com/juju-solutions/layer-etcd/blob/master/reactive/etcd.py#L156
[15:03] <lazyPower> i found the issue. We must have missed a merge in the final bits before moving for release. the interface was updated, but looks like the invocation block never made it
[15:03] <neil__> Ah, I was wondering about those lines being commented out :-)
[15:04] <lazyPower> give me a couple minutes and i'll get you a fixed revision
[15:04] <neil__> But then I also wondered if the code under hooks/relations/etcd-proxy was intended to be a better alternative to those lines...
[15:07] <neil__> How is the code under hooks/relations/etcd-proxy hooked into the rest of the charm?  I couldn't find any explicit references to the EtcdProvider class.
[15:07] <lazyPower> that code is the interface code :)
[15:10] <neil__> Oh I see, I think EtcdHelper in the commented out code should in fact be EtcdProvider (with an appropriate import at the top of that file), because there's nothing else in the charm that has a provide_cluster_string method.
[15:13] <neil__> One other thing - could the key be 'cluster' instead of 'cluster_string'?  'cluster' is what my two proxy client charms are expecting.
[15:15] <neil__> (And I believe previous iterations of the etcd charm used 'cluster' - e.g. my latest version before your recent work, at https://github.com/projectcalico/charm-etcd/blob/master/hooks/hooks.py#L80)
[15:31] <lazyPower> neil__ - negative, the cluster strings are now computed from the active members coming back from etcdctl
[15:31] <lazyPower> the etcdhelper is deprecated fully, it was harder to understand than modeling the client utility.
[15:33] <lazyPower> what i'll wind up doing is pulling the member list of who's actively participating in the cluster and send that over with client credentials. i'm not finding that branch, so i think we legitimately missed porting this
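[Editor's note] A runnable sketch of what lazyPower describes — computing the cluster string from the active members reported by etcdctl. The line format parsed here is an assumption based on etcd v2 tooling output, not the charm's actual code, and the function name is illustrative:

```python
import re

def initial_cluster_from_member_list(output):
    """Build an ETCD_INITIAL_CLUSTER-style 'name=peerURL,...' string
    from `etcdctl member list` output lines."""
    pairs = []
    for line in output.splitlines():
        # each member line carries name=... and peerURLs=... tokens
        name = re.search(r'name=(\S+)', line)
        peer = re.search(r'peerURLs=(\S+)', line)
        if name and peer:
            pairs.append('{}={}'.format(name.group(1), peer.group(1)))
    return ','.join(pairs)

if __name__ == '__main__':
    sample = (
        'a1b2: name=etcd0 peerURLs=http://10.0.0.1:2380 '
        'clientURLs=http://10.0.0.1:2379 isLeader=true\n'
        'c3d4: name=etcd1 peerURLs=http://10.0.0.2:2380 '
        'clientURLs=http://10.0.0.2:2379 isLeader=false'
    )
    print(initial_cluster_from_member_list(sample))
    # -> etcd0=http://10.0.0.1:2380,etcd1=http://10.0.0.2:2380
```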
[15:42] <lazyPower> neil__ https://github.com/juju-solutions/interface-etcd-proxy/pull/3
[15:44] <neil__> Understood I think.  There's also what looks like reasonable code in the hooks/relations/etcd-proxy/README.md (in case you'd forgotten that).
[15:45] <neil__> That PR LGTM - thanks.
[15:53] <lazyPower> neil__ - https://github.com/juju-solutions/layer-etcd/pull/32
[16:03] <kwmonroe> petevg: that suggestion to use bigtop::jdk_preinstalled: false may not work.. zk builds off of the bigtop base layer, which will report blocked when_not java.ready :/
[16:03] <petevg> Darn :-/
[16:03] <kwmonroe> so we'd have to tweak our java check in the base layer
[16:03] <petevg> That sounds entertaining.
[16:03] <petevg> Actually, since we're passing in overrides to the Bigtop class, it might be able to check for that value.
[16:04] <kwmonroe> let's see what cory_fu comes up with after his current meeting.. it may be a non issue if we vote to rename zk to zk-bigdata or something
[16:04] <petevg> Cool.
[16:05] <cory_fu> I missed the context for this.  Are we suggesting not making ZK depend on the external java relation?
[16:10] <cory_fu> Can we make the charm use the Bigtop java install by default but let the java relation override / switch the java when related?
[16:12] <neil__> lazyPower, Just in a meeting at the mo - will look properly at your layer-etcd PR soon after that finishes.
[16:28] <cory_fu> kwmonroe, petevg, kjackal: So it sounds like we'll make the bigtop zookeeper charm xenial only and leave the existing charm for trusty
[16:28] <cory_fu> With that, we could keep the required java relation.
[16:29] <neil__> lazyPower, Is it a concern that the new integration code in the PR is different from what is suggested in https://github.com/juju-solutions/interface-etcd-proxy/blob/master/README.md ?  Should the latter be updated?
[16:29] <cory_fu> The relation name changes wouldn't be an issue either
[16:29] <cory_fu> We'd have to have separate tests for trusty vs xenial
[16:30] <kwmonroe> there's a pickle in there cory_fu
[16:31] <kwmonroe> let's say i deploy hadoop-processing, which is currently gonna use all trusty charms.. including openjdk.  then i add xenial zk and it says "blocked waiting for java".
[16:31] <kwmonroe> so i'm all like "cool, juju add-relation zk openjdk"
[16:31] <kwmonroe> and then juju's all like "ERROR cannot add relation "openjdk:java hive:java": principal and subordinate applications' series must match "
[16:31] <cory_fu> tinwood: The majority of the discussion around actions & reactive is at: https://github.com/juju-solutions/charms.reactive/pull/66
[16:32] <tinwood> cory_fu, yes, jamespage pointed me to that - interesting read.  I'm trying to work out if it's possible to get the relation data whilst I'm doing an action. Hmm.
[16:33] <cory_fu> kwmonroe: That's a problem with subordinates in general.  It would be an issue in mixed series deployments even if both ZK series were the same charm underneath
[16:33] <tinwood> cory_fu, i.e. I have  some data set from the other end, and I want to check/use it in an action.
[16:34] <neil__> lazyPower, sorry, will be AFK again for a while now - back later
[16:34] <cory_fu> tinwood: The goal of that PR would be to be able to use @when and @action together, so your action would be predicated on the relation state and would get the relation instance
[16:35] <tinwood> cory_fu, that would work nicely.  In the meantime, can you think of any way to do it with a 'plain' action?
[16:36] <plars> anyone familiar with a problem where the bootstrap node constantly has *very* high load? I'm not sure if it's the cause or a symptom, but mongodb is hammering the logs
[16:36] <plars> by very high, I mean up around 400-500
[16:36] <plars> restarting juju-db brings it down temporarily, but it creeps back up pretty quickly
[16:37] <tinwood> cory_fu, could I set a state, run reactive.main() and then pick up that state and a relation state?
[16:37] <kwmonroe> plars: i haven't seen that, but i think #juju-dev might know better
[16:37] <plars> kwmonroe: thanks
[16:39] <cory_fu> tinwood: You could, and there are some examples of charms that do that.  You could also just use RelationBase.from_state('state.name') to get the relation instance directly (or None)
[16:40] <tinwood> cory_fu, that latter thing is JUST the ticket.  Thx!  (I thought I'd stared at the code, but I must have missed that).  I'll try to code it so I can revert to an @action() if it lands.  Thanks for your help.
[16:42] <cory_fu> tinwood: We don't tend to promote using from_state directly because it's generally better to make your preconditions clear with the decorators.
[16:42] <cory_fu> And no problem, glad to help
[16:43] <tinwood> cory_fu, yes, i understand that - I'm going to try to get it as early as possible so that it can be converted to a 'proper' @action() @when('state') def... form.
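[Editor's note] A sketch of the 'plain' action cory_fu describes, using `RelationBase.from_state` to fetch the relation instance directly. This only runs inside a charm's hook environment; the state name 'etcd.available' and the `get_connection_string` method are hypothetical stand-ins for whatever the real interface layer provides:

```python
#!/usr/bin/env python3
# actions/report-cluster -- hypothetical plain action script
import sys
sys.path.append('lib')  # layered charms ship their libs here

from charms.reactive import RelationBase
from charmhelpers.core.hookenv import action_fail, action_set

def main():
    # from_state returns the relation instance if the state is set,
    # or None if the remote end hasn't completed the conversation yet
    etcd = RelationBase.from_state('etcd.available')
    if etcd is None:
        action_fail('etcd relation not ready')
        return
    action_set({'connection-string': etcd.get_connection_string()})

if __name__ == '__main__':
    main()
```

As cory_fu notes, the decorator form (`@when` + `@action`, once that PR lands) is preferred because it makes the preconditions explicit; this is the interim workaround.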
[16:49] <lazyPower> neil__ - sounds good. https://github.com/juju-solutions/interface-etcd-proxy/pull/4
[18:02] <neil__> lazyPower, your PRs all look good to me now.  Is there a way I can easily pull them together into a complete new charm to try out?
[18:03] <lazyPower> neil__ in standup, give me 15 and i'll push an assembled version to my namespace
[18:03] <neil__> lazyPower, thank you - sorry to keep bothering!
[18:37] <lazyPower> neil__ url: cs:~lazypower/etcd-18  if this revision works for you, ping me back and i'll propose it against the proper charm
[18:37] <neil__> lazyPower, thanks, will try that out now
[18:38] <neil__> lazyPower, cycle will take about 45 mins though, so please don't hold your breath!
[18:39] <lazyPower> :) I'm in no rush, I'm moving this week. I have to admit, Pods + movers = the way to do it.
[18:48] <geetha> Hi, I have uploaded a resource to the controller using the `juju attach` command but when I try to fetch the resource using `resource-get`, it keeps on trying to fetch the resource.. (log: http://pastebin.ubuntu.com/18187857)
[18:49] <lazyPower> geetha - the pastebin looks truncated. can you paste the full log, or use a utility like `pastebinit` (which is apt-get installable) to send that paste over to paste.ubuntu.com?
[18:50] <mskalka> hey guys, I'm having an issue deploying a charm that I've written. It keeps spitting out the following error: ERROR POST https://10.18.150.54:17070/model/3362bfce-179e-44b0-8c02-b6b7bb238d85/charms?series=trusty: URL has invalid charm or bundle name: "local:trusty/charm-DataCenterTopology-0" Does anyone know what I should be investigating to fix this?
[18:50] <lazyPower> mskalka - try removing charm- from the name in metadata
[18:51] <mskalka> lazyPower, same error as before
[18:52] <lazyPower> mskalka - can you link me to your layer/charm?
[18:53] <mskalka> lazyPower - like to the repo?
[18:53] <lazyPower> yep
[18:53] <lazyPower> i'll clone it and see if i can reproduce here, and try to help you find a fix
[18:54] <geetha> lazyPower: you can see the full log here: http://paste.ubuntu.com/18188302/, it keeps on going..
[19:00] <neil__> lazyPower (etcd), hmm: hook failed: "proxy-relation-joined" for neutron-api:etcd-proxy  Just going to investigate further.
[19:01] <lazyPower> geetha - it looks like it ran on line 1752
[19:01] <lazyPower> i see voting bits below, which tells me the charm went idle
[19:01] <lazyPower> how large is the resource you are trying to fetch geetha?
[19:02] <neil__> lazyPower (etcd) - http://pastebin.com/r4UiF3LL
[19:02] <lazyPower> doh
[19:02] <lazyPower> neil__ - sorry i know what i did there... duh
[19:02] <lazyPower> my python is weak today
[19:06] <kwmonroe> cory_fu: petevg, hive is in the same boat as zookeeper.. the current promulgated hive is non-layered, so it would have a broken upgrade path.  fortunately, that one is only precise, so we should be able to drop in a xenial/trusty version without much hurt.
[19:07] <kwmonroe> so here's what i propose.. we find all the services that might conflict with an existing promulgated non-upgradable charm, then fire off a note to the list.  if people need precise hive, they'll need to be explicit in their deployment instructions and bundles.  same for zk.
[19:07] <mskalka> lazyPower - that did it! Thanks again!
[19:07] <lazyPower> mskalka happy to help :)
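[Editor's note] The rule behind mskalka's error: Juju charm names must be lowercase (letters, digits, hyphens), so "local:trusty/charm-DataCenterTopology-0" was rejected. A hypothetical metadata.yaml fragment showing the fix lazyPower suggested (the summary reflects mskalka's later description of the charm):

```yaml
# metadata.yaml -- names must be lowercase-with-hyphens; drop the
# 'charm-' prefix and the CamelCase from 'charm-DataCenterTopology'
name: datacenter-topology
summary: Builds Ceph crushmaps from machine topology
description: |
  Gathers information about a machine and its relation to the other
  machines in the model, without touching the Ceph charm itself.
```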
[19:08] <petevg> kwmonroe: I think that makes sense.
[19:11] <geetha> lazyPower: The resource size is 163M...I have tested it already, it was not taking much time.
[19:13] <geetha> I am deploying charm from charm store and attaching resource using 'juju attach' command
[19:19] <lazyPower> geetha - yeah without more information i'm not much help :( sorry. This looks like you've followed the correct process. You may have uncovered a bug, can you investigate the logs on your controller to see if there are any errors scrolling in there while it's attempting to fetch the resource?
[19:23] <kwmonroe> geetha: is it still running?
[19:24] <kwmonroe> geetha: you might want to 'juju ssh' to the unit and have a look in /var/lib/juju/agents/unit-ibm-im/charm/resources to see if the file is still being fetched
[19:27] <kwmonroe> for anyone else seeing slow resource fetches, logs and observations would be great in bug 1594924
[19:27] <mup> Bug #1594924: resource-get is painfully slow <resources> <juju-core:Triaged by dooferlad> <https://launchpad.net/bugs/1594924>
[19:27] <arosales> cory_fu: promulgation issue https://github.com/juju/charm/issues/214
[19:27] <arosales> cory_fu: feel free to add any examples to that issue specifically if you find it occurs with zookeeper
[19:30] <lazyPower> neil__ sorry about the delay i got a bit distracted. cs:~lazypower/etcd-19 - i deployed a hollow charm and verified the data on the wire looked good this time around, no more python errors :)
[19:42] <neil__> lazyPower, thanks, will give that a go now...
[20:07] <neil__> lazyPower, while that's coming up, could I ask you about my step?
[20:07] <lazyPower> sure
[20:10] <neil__> lazyPower, thanks. So, on the machines where I have etcd-using units, I need an etcd proxy that connects over the network on its db side (using 'cluster') and allows localhost access only on its client side.  I'm not yet sure if I need TLS-security for the localhost connections.
[20:10] <lazyPower> you wont, you'll need to provide the client keys on your proxy configuration however
[20:11] <lazyPower> you wont be able to communicate with the primary etcd cluster without that key.
[20:11] <neil__> lazyPower, Yes indeed, I definitely need the proxy to have keys on its connection to the non-proxies.  I believe I can get that using the requires part of interface-etcd-proxy - right?
[20:13] <lazyPower> neil__ - yep. that interface is reactive based, so i dont know that you can consume it out of the box in your charm.. you're supporting a non layered approach right?
[20:13] <neil__> lazyPower, Is there a cast-iron argument for saying that TLS security isn't needed for the localhost connections?  (I'll be very happy if there is, but I'm not greatly experienced yet in security questions.)
[20:13] <lazyPower> neil__ - with that being said, the keys are all there for you. cluster, and the 3 ssl keys.
[20:13] <lazyPower> ah, i dont think its required, thats kind of the point of the proxy right?
[20:14] <lazyPower> thats solely up to you in the implementation details of that proxy
[20:14] <lazyPower> i just gave you the pipeline to do so :)
[20:14] <neil__> Thanks.
[20:16] <neil__> At the moment, the etcd proxy function is integrated into the charms that need it (neutron-api and neutron-calico).  But I was thinking it might be nicer to have a separate etcd-local-proxy charm, and then just to deploy that alongside the other units that need it, and remove the etcd code from neutron-api and neutron-calico.
[20:17] <neil__> If I did that I guess it might be quite easy to do the new charm using reactive...?
[20:18] <lazyPower> yep!
[20:19] <lazyPower> and you can grab that interface code too, the conversation is done for you, just follow the placement guide. You'll need to update the systemd defaults template for etcd so you can declare the tls options. thats all thats different that i can think of
[20:19] <neil__> Yes, that's what I was thinking.  But what do you mean by the placement guide?
[20:21] <mskalka> hey everyone, is there any way to specify machines to deploy to outside of the 'deploy --to' command? Like pre-configured in a config file or something similar
[20:22] <neil__> mskalka, In the bundle, yes.
[20:23] <mskalka> neil__, thanks, I'll check that out
[20:25] <neil__> mskalka, Complication is, I think, that the supported declarations depend on what version of Juju you're using; and it's not well documented.
[20:25] <valeech> this may be a bit off topic, but does ubuntu openstack autopilot leverage juju to deploy the services on maas devices?
[20:26] <mskalka> neil__, haha 'not well documented' seems to be the phrase of the month. It gives me a place to work from though.
[20:26] <neil__> mskalka, But for Juju 2 the bundle can say, for example: "to: [ bird ]", which means "put the first unit of this service in the same place as bird"
[20:26] <mskalka> neil__, bird in this example being another service?
[20:27] <neil__> mskalka, yes
[20:27] <mskalka> neil__, that's exactly what I'm looking for actually
[20:27] <neil__> mskalka, some doc claims that you can specify machine numbers, but I've not got that to work reliably
[20:28] <mskalka> neil__, I'm trying to get a charm deployed to every machine another charm operates on
[20:30] <neil__> mskalka, hmm, I think that's possible.  But you might also want to look up 'subordinate' charms
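[Editor's note] The "to: [ bird ]" placement neil__ describes looks like this in a Juju 2 bundle. The service names and charm URLs here are hypothetical; the point is that the first unit of one service is co-located with the first unit of another:

```yaml
# hypothetical Juju 2 bundle fragment: "to: [ bird ]" places the first
# unit of machine-info on the same machine as the first unit of bird
services:
  bird:
    charm: cs:trusty/bird
    num_units: 1
  machine-info:
    charm: ./trusty/machine-info
    num_units: 1
    to: [ bird ]
```

For mskalka's "run on every machine another charm is on" case, a subordinate charm (related with a scoped relation) is the usual fit, since placement directives only pin individual units.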
[20:32] <neil__> lazyPower, looks like your latest etcd charm is good: on the etcd proxy client machine I now have /etc/default/etcd with correct ETCD_INITIAL_CLUSTER.
[20:32] <mskalka> neil__, I don't need to access any of the other charms on the machine, just gather information about the machine and its relation to other machines. It's basically a charm to create crushmaps for Ceph without actually touching the Ceph charm
[20:33] <neil__> lazyPower, the etcd proxy is still failing to connect to the cluster, but that's expected because I haven't implemented code to get the keys from the relation and add those into /etc/default/etcd.
[20:34] <neil__> mskalka, Sounds like you just need 'juju status' then.
[20:35] <mskalka> neil__, if only.. it's got to be able to do some code execution
[20:55] <lazyPower> neil__ - excellent
[20:56] <lazyPower> neil__ - add the TLS keys to the defaults file and you should be in like flynn
[20:56] <lazyPower> do you need those? i'm pretty sure i have them in my design notes
[20:56] <neil__> do I need which?
[20:56] <lazyPower> the tls flags to provide the defaults file
[20:56] <neil__> sorry, I'm afraid I'm lost...
[20:59] <lazyPower> neil__ https://gist.github.com/chuckbutler/3649337495ca3fa95bb6a3bdb11fc70f
[21:01] <neil__> sorry, still not quite understanding - what is it that supports those flags?
[21:02] <lazyPower> neil__ : thats what you'll want to set in the defaults file for flags to etcd
[21:03] <magicaltrout> ran a 40 m cat 5 cable the length of my garden..... then found it's broken
[21:03] <magicaltrout> woop
[21:03] <lazyPower> so, receive and write out those client certificates (the interface code does this for you if you're using that), and use the path you saved those client certs to populate those flags in the defaults file, and you should be g2g w/ the new etcd implementation
[21:03] <lazyPower> magicaltrout - blarg, hardware problems
[21:03] <magicaltrout> indeed
[21:04] <neil__> lazyPower, Right, got it, thanks.
[21:06] <neil__> lazyPower, But I see two sets of possible settings: ETCD_CERT_FILE, ETCD_KEY_FILE, ETCD_TRUSTED_CA_FILE; and ETCD_PEER_CERT_FILE, ETCD_PEER_KEY_FILE, ETCD_PEER_TRUSTED_CA_FILE.  For a proxy connecting to a non-proxy, do you know which ones I need?  (I would guess the ones without _PEER_, but not at all sure about that.)
[21:06] <lazyPower> the ones without PEER assume server installation
[21:06] <lazyPower> those keys are client keys not server keys :)
[21:07] <lazyPower> so i'm 90% certain you want the PEER flags
[21:09] <neil__> lazyPower, Thanks, I think you're right - I found a note elsewhere that says "etcd proxies communicate with the cluster as peers so they need to have peer certificates." (http://docs.projectcalico.org/en/1.3.0/securing-calico.html)
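[Editor's note] Putting this exchange together, the proxy's defaults file would carry the cluster string plus the peer TLS flags. Addresses and key paths below are hypothetical; the variable names follow etcd's documented environment-variable convention:

```sh
# /etc/default/etcd -- hypothetical proxy fragment; a proxy talks to
# the cluster as a peer, so it presents the peer certificate set
ETCD_PROXY="on"
ETCD_INITIAL_CLUSTER="etcd0=https://10.0.0.1:2380,etcd1=https://10.0.0.2:2380"
ETCD_PEER_CERT_FILE="/etc/ssl/etcd/client.crt"
ETCD_PEER_KEY_FILE="/etc/ssl/etcd/client.key"
ETCD_PEER_TRUSTED_CA_FILE="/etc/ssl/etcd/ca.crt"
```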
[21:10] <lazyPower> solid :) sounds like you're all set. Let me know how you make out with it. I'm going to be taking off in a bit
[21:14] <cholcombe> thedac, mojo question for ya
[21:14] <thedac> cholcombe: sure
[21:15] <cholcombe> thedac, i'm working on a mojo spec for gluster.  I'm wondering what i should put in the MOJO_SERIES field of the deploy step: deploy config=gluster-default.yaml delay=0 wait=True target=${MOJO_SERIES}
[21:15] <cholcombe> it says it can't find 'trusty'
[21:15] <cholcombe> thedac, i can show you the whole spec if you'd like.  i can push it up to a branch
[21:16] <thedac> target is the name in the config file (gluster-default.yaml) that juju deployer uses to deploy. In openstack specs these are names like trusty-liberty and xenial-mitaka
[21:16] <thedac> But it can be anything you named it in the bundle file
[21:16] <cholcombe> i see
[21:17] <cholcombe> thedac, cool i think that helps :)
[21:17] <thedac> cool, let me know if it does not
[21:18] <cholcombe> thedac, ah yes this was the thing i was hitting.  i ran mojo again.  it says 'gluster' not found, available: base.  When i try putting base that also fails
[21:19] <cholcombe> thedac, i should prob push this up to lp so i can show ya
[21:19] <thedac> cholcombe: just as an example of multiple targets in a single bundle file http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/full.yaml
[21:19] <thedac> sure, it is probably the bundle file
[21:27] <cholcombe> thedac, http://bazaar.launchpad.net/~xfactor973/mojo/gluster/revision/263
[21:27] <thedac> ok, I'll take a look
[21:28] <cholcombe> thedac, i'm a n00b i don't know what the heck i'm doing lol
[21:31] <thedac> cholcombe: based on gluster-default.yaml you want target=base. Are you using juju 2.0? I don't think mojo is 2.0 ready
[21:31] <cholcombe> thedac, i believe i'm using juju 1.x
[21:31] <thedac> ok
[21:31] <cholcombe> thedac, does it need a clean juju environment?
[21:32] <thedac> Well, at least one without gluster deployed
[21:32] <cholcombe> ok cool
[21:32] <cholcombe> i think it got further
[21:32] <cholcombe> it's complaining now about no charm metadata for precise/gluster/metadata.yaml
[21:32] <cholcombe> i need to restrict it to trusty/xenial somehow
[21:32] <thedac> ah, we need some more info in the bundle file.
[21:33] <thedac> You can set series:
[21:33] <thedac> cholcombe: let me fix the bundle a bit. :)
[21:33] <cholcombe> thedac, much appreciated :)
[21:37] <thedac> cholcombe: try this http://pastebin.ubuntu.com/18196928/
[21:37] <cholcombe> thedac, interesting
[21:38] <cholcombe> thedac, it's doing something \o/
[21:38] <thedac> :)
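[Editor's note] Based on thedac's pointers — the deploy step's target= must name a target defined in the bundle file, and a series: key keeps deployer from defaulting to precise — the fixed gluster-default.yaml likely resembles the following. Unit counts and the charm path are hypothetical:

```yaml
# gluster-default.yaml -- hypothetical juju-deployer bundle; the mojo
# deploy step 'target=base' selects the 'base' target below, and
# 'series: trusty' avoids the precise/gluster/metadata.yaml error
base:
  series: trusty
  services:
    gluster:
      charm: gluster
      num_units: 3
```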
[22:09] <cholcombe> thedac, that got me to the testing part where it says it can't find my test script
[22:10] <thedac> cholcombe: let me look again
[22:10] <cholcombe> i assume it copies the mojo dir onto the unit somewhere?
[22:11] <thedac> yes. /srv/mojo/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE
[22:11] <cholcombe> hmm nothing is on there
[22:11] <thedac> cholcombe: on your local not the instance
[22:11] <cholcombe> ok
[22:12] <cholcombe> thedac, i think i have a typo
[22:12] <thedac> ah yes, test_gluster_store.py vs test_gluster.py
[22:13] <cholcombe> right
[22:13] <cholcombe> looks like once the service is up it's quick to change stuff and retest
[22:13] <thedac> Yes, the deploy is the slowest bit
[22:14] <cholcombe> thedac, success!
[22:14] <thedac> \o/
[22:14] <cholcombe> :)
[22:17] <cholcombe> thedac, alright cool i'm going to put this up for review
[22:17] <thedac> sounds good. I am heads down the rest of my afternoon. but I can take a look tomorrow
[22:17] <cholcombe> sure thing
[22:17] <cholcombe> thedac, thanks a bunch for the help :)