[10:48] <gnuoy> tinwood, got a sec for https://github.com/openstack-charmers/charms.openstack/pull/10 ?
[10:48] <tinwood> gnuoy, lemme take a look.
[10:48] <gnuoy> ta
[10:53] <tinwood> gnuoy, done.
[10:53] <gnuoy> tinwood, thanks
[10:59] <gnuoy> tinwood, updated
[11:02] <tinwood> gnuoy, looks good to me +1
[11:12] <gnuoy> tinwood, can you hit the button?
[11:12] <tinwood> gnuoy, I don't know - I'll give it a go.
[11:13] <tinwood> gnuoy, sadly not.  I don't have write access to the openstack-charmers repo.  I probably need to be elected to a group?
[12:21] <tinwood> gnuoy, thanks for the invite :)
[12:46] <beisner> o/
[12:51] <gnuoy> narindergupta, https://review.openstack.org/#/c/327638/
[13:03] <shruthima> hi kwmonroe, I have tried to deploy the IBM-IM charm that you have proposed for merge request but am facing some issues. Please can we have a discussion for 5-10 min on these?
[13:59] <shruthima> hi mbruzek/kwmonroe, I have tried to deploy the IBM-IM charm that you proposed for merge request but am facing some issues. Please can we have a discussion for 5-10 min on these?
[13:59] <mbruzek> Sure
[13:59] <mbruzek> shruthima: What error did you see?
[14:00] <shruthima> actually when we deploy it is not asking for the license to accept
[14:00] <shruthima> and it is going to unknown state
[14:01] <mbruzek> shruthima: please run "juju list-agreements" and pastebin the output
[14:02] <shruthima> root@ptcvm2:~# juju list-agreements Press return to select a default value. Username:  (it is asking for username and password)
[14:03] <shruthima> when we use launchpad id it is showing incorrect email/password
[14:04] <mbruzek> shruthima: what launchpad id are you using?
[14:04] <shruthima> mbruzek: ERROR failed to list user agreements: cannot get discharge from "https://api.jujucharms.com/identity/v1/discharger": cannot start interactive session: cannot get token: Provided email/password is not correct.
[14:05] <shruthima> salmavar
[14:07] <mbruzek> https://launchpad.net/~salmavar shows the user is "Shruthima" is the email wrong on this page?
[14:09] <beisner> hi bcsaller, interested in your take on this behavior we're seeing:  https://github.com/juju/charm-tools/issues/220    tldr;  while trying to automate charm build in openstack ci, we're seeing lower layer ignores impact higher layer files.
[14:09] <shruthima> mbruzek: username is Shruthima .. email is salmavar
[14:10] <mbruzek> beisner: bcsaller is on the west coast, and it is 6 am there. I would not expect him in yet.
[14:11] <mbruzek> beisner: sorry 7am
[14:11] <beisner> mbruzek, ah thx.  :)  do you have any ideas on that^?
[14:13] <mbruzek> beisner: I saw this bug come in; no idea why the dot files don't make it to the built charm. Ben told us we could look at "Tactics" to handle different kinds of files. We can include Tactics files in our layers that govern how to copy/compose specific file types. Look at the code regarding Tactics.
[14:14] <beisner> mbruzek, actually the dotfile is a separate issue
[14:14] <beisner> mbruzek, that dotfile (hidden files) issue is @ https://github.com/juju/charm/issues/201
[14:15] <beisner> https://github.com/juju/charm-tools/issues/220 is what we're blocked on
[14:16] <mbruzek> beisner: Ah I see
[14:22] <mbruzek> shruthima: what version of juju do you have? "juju version"
[14:22] <shruthima> juju 2.0
[14:24] <mbruzek> shruthima: What specific version? For example I am on 2.0-beta8-xenial-amd64
[14:25] <mbruzek> beisner: perhaps you can overwrite that ignore statement in the base layer. I wouldn't know how to do that (of course) but there might be a way
[14:25] <shruthima> 2.0-beta7-xenial-amd64
[14:26] <beisner> mbruzek, i think the expected behavior is that each layer should be able to declare its own set of ignores, and layers above it should operate only on their own declared ignores.
[14:26] <mbruzek> beisner: Yeah that makes sense, but something tells me Ben had thought of this, and you can remove ignores or overwrite them somehow.
[14:26] <shruthima> mbruzek: Now I am able to authenticate; the juju list-agreements command is showing []
[14:27] <mbruzek> shruthima: What commands did you run to get to that state?
[14:28] <shruthima> i was checking juju status
[14:28] <shruthima> and I am checking juju debug-log; it is not even getting to reactive
[14:30] <jcastro> So the jenkins charm, on the store page points to launchpad, but the branch itself lives upstream
[14:31] <jcastro> are we manually syncing these because the one from the store seems old?
[14:31] <mbruzek> shruthima: Can you deploy ibm-im now? Does it ask for terms?
[14:31] <marcoceppi> beisner: dot files were a known issue, I thought they had sorted it though
[14:32] <shruthima> yup, after authentication I tried it; it is not asking for terms
[14:32] <mbruzek> marcoceppi: His current problem is the README.md not making it to the final charm
[14:32] <shruthima> i have used juju deploy /root/charms/trusty/ibm-im  --series trusty --resource ibm_im_installer=/root/repo/agent.installer.linux.gtk.x86_64_1.8.3000.20150606_0047.zip
[14:32] <marcoceppi> mbruzek: I'm aware, but it might not be a build issue
[14:33] <mbruzek> shruthima: pastebin the juju list-agreements command after you authenticate
[14:33] <beisner> marcoceppi, so the hidden dot file thing is an issue, but not a current blocker.  our blocker is that lower layer ignore declarations are causing higher layers to drop files (it appears).
[14:33] <marcoceppi> beisner: right, which makes sense in our current design but might not be the end goal.
[14:33] <marcoceppi> beisner: why even bother having the ignore in the first place? since it'll just be overwritten with the top layer
[14:33] <shruthima> mbruzek: root@ptcvm2:~/charms/trusty# juju list-agreements []
[14:34] <shruthima> it is displaying []
[14:34] <beisner> marcoceppi, this seems inert (just a readme file), but we also need to ignore unit tests from lower layers so they don't make it into higher layers or the built charm.
[14:34] <marcoceppi> beisner: that's a bigger issue
[14:35] <beisner> marcoceppi, it is.
[14:41] <shruthima> mbruzek: http://paste.ubuntu.com/17144275/
[14:44] <mbruzek> shruthima: Ah!
[14:45] <marcoceppi> beisner: going to work on a fix today for 2.1.3
[14:45] <marcoceppi> err, 2.14
[14:45] <marcoceppi> 2.1.4
[14:48] <beisner> marcoceppi, rock on.  much appreciated :)
[14:48] <shruthima> mbruzek: juju debug-log http://paste.ubuntu.com/17144408/
[14:49] <shruthima> actually I am not understanding where it is getting stuck, because deploy is going smoothly but installation is not happening; even reactive is not getting called, according to the logs I have checked
[14:50] <shruthima> terms is not displaying though; could you please suggest what may be the reason?
[14:50] <mbruzek> shruthima: Pastebin a juju status
[14:51] <gnuoy> jamespage, https://review.openstack.org/#/c/327638/
[14:51] <shruthima> mbruzek: http://paste.ubuntu.com/17144275/
[14:54] <mbruzek> shruthima: The reason you didn't get prompted to agree to the terms is because you are deploying the local charm. Terms is only prompted by charms deployed from the charm store.
[14:55] <mbruzek> shruthima: If you reset that environment and juju deploy cs:~kwmonroe/trusty/ibm-im  you should be prompted for the terms
[14:56] <shruthima> is there any way to check local charms with terms
[14:56] <shruthima> ok il check
[14:57] <kwmonroe> shruthima: it doesn't make much sense to enforce a term with a local charm, because if you have the charm locally, you could just edit metadata.yaml and remove the "terms" key.
[14:57] <cmars> shruthima, distribution of a charm through the charmstore is gated by capturing user agreement to terms
[14:57] <kwmonroe> so terms are only enforced when deploying from the charm store
[14:57] <jamespage> gnuoy, two nits
[14:57] <kwmonroe> heh cmars, i was just about to make you jump in :)
[14:58] <cmars> shruthima, we can't effectively gate distribution if that distribution is happening out of band with a local charm deploy
[14:58] <cmars> shruthima, nor would we have any legal basis for knowing who's agreed to what terms for a given charm deployment
[14:59] <cmars> shruthima, i recommend publishing the charm to the charmstore, possibly with access permissions restricted if it's not ready for general public consumption
[15:00] <shruthima> yes, I agree it won't make sense for local charms, but I mean in terms of testing juju terms before deploying charms
[15:01] <shruthima> thanks for clearing that up for me
[15:01] <gnuoy> jamespage, nits swatted
[15:51] <cory_fu> bcsaller: Can you take a look at beisner's issue above (https://github.com/juju/charm-tools/issues/220)?  I'm not really familiar with the intended behavior of "ignores"
[15:55] <beisner> hi cory_fu, bcsaller - fyi marcoceppi is also looking.  i think this is where that behavior change was introduced:  https://github.com/juju/charm-tools/issues/85
[15:55] <cory_fu> From a quick glance at the code, though, the ignore property uses rget (https://github.com/juju/charm-tools/blob/master/charmtools/build/config.py#L90) which seems pretty explicit that ignores are combined from all layers before being handled, rather than being per-layer like the issue expects.
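The merged-ignores behavior cory_fu describes can be sketched as a minimal, hypothetical model (layer names and file lists below are invented for illustration; this is not charm-tools' actual code):

```python
from fnmatch import fnmatch

# Hypothetical model of the current behavior: every layer's "ignore"
# patterns are combined into one global list (as the rget-based
# property suggests) before any files are composed into the charm.
layers = [
    {"name": "layer-basic", "ignore": ["unit_tests"],
     "files": ["unit_tests/test_base.py", "lib/base.py"]},
    {"name": "layer-top", "ignore": [],
     "files": ["unit_tests/test_top.py", "reactive/top.py"]},
]

merged = [p for layer in layers for p in layer["ignore"]]

def ignored(path, patterns):
    # Treat a pattern as matching the path itself or a leading directory.
    return any(fnmatch(path, p) or path.startswith(p + "/") for p in patterns)

built = [f for layer in layers for f in layer["files"]
         if not ignored(f, merged)]
print(built)  # the top layer's unit_tests are dropped too, as in issue 220
```

With a single merged list, the base layer's `unit_tests` ignore silently discards the top layer's `unit_tests/test_top.py` as well, which is the surprise beisner is hitting.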
[15:55] <marcoceppi> cory_fu: yes, but I think the issue is valid
[15:56]  * marcoceppi has a suggestion he's writing in the bug
[15:56] <cory_fu> beisner: I'm not sure.  From the implementation of the ignores property, it seems like it was that way from the beginning.  I do think your interpretation of how it should work is better, though
[15:58] <bcsaller> maybe it should work that way, sure, but having to ignore a file you intend/expect to override in a later layer is also a little much since overwrite is the default behavior
[15:58] <cory_fu> Although, there's also some ambiguity in whether an ignore in a given layer should ignore that file from only the current layer (and allow the same file from a lower layer to come through), or if it should block the file from only lower layers (i.e., preventing merges with the file as defined in the current layer), or block it from the current layer and below, having no file be given to higher layers
[15:58] <bcsaller> like a base layer shouldn't have to say ignore: readme
[15:58] <bcsaller> that's just a dead chicken
[15:59] <cory_fu> That's also a good point
[16:00] <beisner> bcsaller, we would like to ignore the unit_tests dir.  that's a little more concrete example than a readme file.
[16:00] <marcoceppi> cory_fu: https://github.com/juju/charm-tools/issues/220#issuecomment-224941776
[16:01] <marcoceppi> bcsaller: ^^
[16:01] <cory_fu> That removes the ambiguity, I like it
[16:02] <cory_fu> Do we need to change the key name, though?  Why not just change the behavior of "ignore"?
[16:04] <marcoceppi> cory_fu: because how would I distinguish between ignoring on my layer and ignoring on layers below me?
[16:04] <marcoceppi> I suppose if the key is a string vs a dict
[16:04] <marcoceppi> but then it's a list of things
[16:05] <cory_fu> Yeah, I guess I'm saying the current behavior isn't really useful, and I don't think it's used much at all, so just change it to be a map and re-use the existing key name
[16:06] <cory_fu> I guess if we want to be backwards compatible, we can detect list vs map
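That list-vs-map detection could be sketched as follows (a hypothetical normalization step, not charm-tools' actual API; the function name and normalized shape are assumptions):

```python
# Hypothetical sketch of the backward-compat idea: keep the existing
# "ignore" key in layer.yaml and normalize its value when loaded.
# A list keeps the old meaning (patterns scoped to this layer); a map
# is the proposed per-layer form, keyed by layer name.
def normalize_ignore(value, layer_name):
    if isinstance(value, dict):
        return value                    # new style: {layer: [patterns]}
    if isinstance(value, list):
        return {layer_name: value}      # old style: scope to this layer
    raise TypeError("ignore must be a list or a map")

print(normalize_ignore(["unit_tests", "*.pyc"], "layer-basic"))
print(normalize_ignore({"layer-basic": ["unit_tests"]}, "layer-top"))
```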
[16:06] <marcoceppi> cory_fu: that's fine, I suppose we could create a tactic that rewrote the key if it was a list?
[16:06] <marcoceppi> and gave a warning
[16:06] <cory_fu> Perhaps, though I think the value might be processed before a tactic has a chance to rewrite it
[16:07] <marcoceppi> cory_fu: good point
[16:07] <marcoceppi> so just do it transparently to the user?
[16:08] <cory_fu> Yeah.  I don't think that feature is even documented anywhere.  It's certainly not in https://github.com/juju/charm-tools/blob/master/doc/source/build.md
[16:08] <bcsaller> only apply ignores to layers previous to itself? then clear the list
[16:08] <bcsaller> maybe
[16:09] <kwmonroe> admcleod: you were right about zeppelin getting trigger happy.  it does indeed fail because it thinks hadoop is ready earlier than it actually is.  i didn't notice it before because it does eventually become ready.. is this similar to what you saw? http://paste.ubuntu.com/17146568/
[16:10] <beisner> bcsaller, that seems like a useful and simple default behavior.   i like marcoceppi's idea of having finer granularity with a different thing - but that can be a different thing, yah?
[16:11] <beisner> bcsaller, well <= self actually
[16:14] <beisner> strike -1 prev statement ;-)
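The "<= self" semantics beisner and bcsaller converge on could be sketched like this (purely illustrative Python, with invented layer data; not charm-tools code):

```python
# Hypothetical sketch: a layer's ignores prune its own files and
# anything accumulated from lower layers, then are cleared so that
# higher layers are unaffected (apply to <= self, never upward).
def build(layers):
    composed = {}
    for layer in layers:
        # Prune what lower layers contributed (< self)...
        for pat in layer["ignore"]:
            composed = {f: src for f, src in composed.items()
                        if not (f == pat or f.startswith(pat + "/"))}
        # ...and skip this layer's own matching files (== self).
        for f in layer["files"]:
            if not any(f == p or f.startswith(p + "/") for p in layer["ignore"]):
                composed[f] = layer["name"]
    return composed

layers = [
    {"name": "base", "ignore": ["unit_tests"],
     "files": ["unit_tests/test_base.py", "lib/base.py"]},
    {"name": "top", "ignore": [],
     "files": ["unit_tests/test_top.py"]},
]
print(sorted(build(layers)))  # the top layer's unit_tests survive this time
```

Under these semantics the base layer can ignore its own `unit_tests` without dropping the top layer's, which is exactly the behavior the issue expects.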
[17:28] <kwmonroe> anyone (kjackal?) have thoughts on bundle naming conventions?  for big data specifically, i'm thinking <engine>-<task>-<highlight>, like hadoop-analytics-hive, or spark-processing-kafka, or ignite-visualization-zeppelin
[17:35] <beisner> coreycb, some overdue housekeeping along with a cleaner way to split legacy (precise):  https://code.launchpad.net/~1chb1n/openstack-charm-testing/pxc-vs-mysql-vs-precise-config-options/+merge/296963     validating @ osci (will retrigger your precise proposed test run as one of those steps of validation).
[17:38] <coreycb> beisner, thanks!
[17:38] <beisner> coreycb, yw
[17:38] <marcoceppi> kwmonroe: why not the other way? <engine>-<highlight>-<task> ?
[17:39] <marcoceppi> spark-kafka-processing
[17:39] <marcoceppi> the <task> in the middle feels weird
[18:10] <beisner> thedac, can you also do the honors on this? (fyi: jamespage, tinwood) need to unignore things for now to proceed with automating things.   https://github.com/openstack-charmers/charm-layer-openstack/pull/7
[18:10] <thedac> sure
[18:12] <beisner> thedac, thanks again
[18:14] <thedac> beisner: merged
[18:14] <beisner> off to the races then, thx thedac :)
[18:23] <kwmonroe> ack marcoceppi, thx.  <task> may be superflous too.. like if you already know what spark and kafka are, you probably don't need somebody to say "processing" or whatever
[18:24] <kwmonroe> yes cory_fu, i misspelled superfluous ^ get over it
[18:29] <marcoceppi> cory_fu: I've got questions
[19:05] <marlinc> I'm currently trying to run OpenStack in LXD containers (just to try it out). I'm currently running into the following error: RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: mount --make-shared /var/run/netns failed: Permission denied
[19:05] <marlinc> I'm not sure how to allow the LXC container to create that mount point
[19:11] <cory_fu> marcoceppi: Sorry, I was out for a bit.  What question do you have?
[19:13] <cory_fu> kwmonroe, petevg, kjackal, admcleod: Any objection to me creating a Jira and a patch for the Spark charm layer?
[19:13] <petevg> cory_fu: sounds good to me.
[19:27] <kwmonroe> +1 cory_fu
[19:27] <cory_fu> kwmonroe: For https://github.com/juju-solutions/layer-openjdk/issues/5 do we need to be worried about the non-zero exit code?  The test failure doesn't look like it was actually choking on the output
[19:29] <cory_fu> Also, if the grep does fail, maybe we should report something more useful than "Unexpected return code"
[19:31] <kwmonroe> hmph, you're right cory_fu.. we weren't testing the output for anything, so the ssh host warning shouldn't have mattered.
[19:31] <kwmonroe> i wonder if the rest of that output was "warning...java not found"
[19:32] <cory_fu> I don't actually even understand why the command exited non-zero.
[19:32] <cory_fu> kwmonroe: Actually, looking at the code, that print dumps all of the output, so it doesn't look like the `java -version` command even generated any output
[19:33] <kwmonroe> yeah, agreed
[20:18] <cmars> cherylj, didn't see test timeouts due to npipe listener Close, but I also forgot to capture stderr. However, I did find some other test failures -- which I also saw when testing in my own KVM. Opened LP:#1590947
[20:18] <mup> Bug #1590947: TestCertificateUpdateWorkerUpdatesCertificate failures on windows <juju-core:New> <https://launchpad.net/bugs/1590947>
[20:19] <cmars> running again, this time with stderr redirected properly...
[20:19] <xilet_> So, I can't find much documentation on it so far, can anyone explain how expose is supposed to work? (Trying to expose openstack-dashboard)
[20:19] <cherylj> thanks, cmars
[20:41] <cory_fu> xilet_: It should be pretty straightforward.  The charm has to call `open-port <port>` to open a port, which would then be listed in `juju status`.  If you call `juju expose <service>` on that service, then you should be able to connect to that unit's public address (also listed in juju status) on any of the ports the charm opened.
[20:41] <xilet_> Yeah, I see exposed: yes, but no public addresses listed
[20:41] <xilet_> I am using 2.0-beta7-xenial-amd64
[20:42] <arosales> xilet_: are you using maas or lxd as the cloud?
[20:43] <xilet_> http://pastebin.com/StVS70z5  [for juju status]
[20:43] <xilet_> lxd
[20:44] <cory_fu> xilet_: The public-address for openstack-dashboard/0 is 10.125.232.72
[20:44] <cory_fu> xilet_: It's listed under [Units]
[20:44] <arosales> xilet_: what I see also
[20:44] <arosales> and exposed is true
[20:44] <arosales> under [Services]
[20:45] <arosales> IP under [Units]
[20:45] <xilet_> Right, the question I had is what does it actually 'do', because it was on that IP before I exposed it
[20:45] <cory_fu> And it has ports 80 and 443 open, so you should be able to go to http://10.125.232.72/ or https://10.125.232.72/
[20:45] <xilet_> some of the documentation mentioned doing the firewall rules to make it publicly available
[20:45] <cory_fu> xilet_: It was on that IP before, but the firewall rules blocked all external traffic to it
[20:45] <xilet_> ah ok, so you need to manually set up a bridge to the network to actually make expose work?
[20:46] <cory_fu> You shouldn't.  The lxd provider should manage the bridge for you
[20:46]  * cory_fu hasn't used the lxd provider, though.
[20:47] <magicaltrout> expose has no effect on lxd local
[20:47] <xilet_> Ok, because right now nothing else on the general network (10.19.40.0/24) can reach that (10.125.232.0/24) subnet
[20:48] <magicaltrout> but you will have to do some funky bridging on ports for stuff on that box which you want making public
[20:48] <cory_fu> magicaltrout: I know that's true with the 1.25 local provider, but I thought it was needed for 2.0 lxd provider
[20:48] <cory_fu> If that's true, it seems like a bug in the lxd provider
[20:49] <magicaltrout> dunno then, I run beta7 and I do some IPtables natting to get local services exposed on the box
[20:49] <magicaltrout> the networking for LXD is on a 10.x subnet
[20:49] <magicaltrout> which isn't available to external processes
[20:50] <magicaltrout> of course i could have just missed something in a release note or similar
[20:50] <cory_fu> magicaltrout: Oh, are you talking about getting lxd provided services to be accessible from outside the machine that it's bootstrapped on?  I was assuming access directly from the machine that did the bootstrap
[20:50] <magicaltrout> ah no
[20:50] <magicaltrout> that is certainly fine
[20:51] <magicaltrout> but if you have a remote box running lxd local, and want that service "exposed"
[20:51] <magicaltrout> you do some iptables natting
[20:51] <magicaltrout> but my understanding is "expose" in juju does nothing in lxd local because there is no firewall unlike AWS etc
[20:51] <xilet_> Yeah, that is what I am trying to accomplish: remote server, everything running contained on that system, wanting direct https access without a ssh tunnel to reach it from other systems
[20:52] <magicaltrout> yeah you'll need to do some natting xilet_
[20:52] <magicaltrout> that said, i just slapped openvpn on the box to make my life easier
[20:53] <magicaltrout> cory_fu: demoed some juju big data stuff to the JPL guys this week; they loved it
[20:53] <xilet> magicaltrout: thanks, I just wanted to make sure I was not missing something obvious.
[20:53] <magicaltrout> not that i'm aware of xilet
[20:53] <magicaltrout> xilet: https://www.digitalocean.com/community/tutorials/getting-started-with-lxc-on-an-ubuntu-13-04-vps
[20:53] <magicaltrout> that was my reference
[20:54] <magicaltrout> the iptables call down the bottom
[20:54] <magicaltrout> there may be other/better ways
[20:54] <xilet> Thanks it is a start. I was thinking of being really lazy and setting up apache proxies
[20:55] <cory_fu> magicaltrout: Awesome.  :)  Did they have any specific feedback?
[20:56] <magicaltrout> mostly stuff like "this is freaking awesome, look how quickly all that stuff is configured" ;)
[20:56] <magicaltrout> the usual
[20:57] <magicaltrout> i've sent a few of them jcastro 's charmer summit mailshot
[20:57] <magicaltrout> as they're in the same place I'm hoping to drag a few along
[21:00] <xilet> magicaltrout: worked like a charm, thank you!
[21:00] <cory_fu> That would be awesome
[21:00] <magicaltrout> no problem xilet
[21:00] <magicaltrout> cory_fu: https://github.com/SciSpark/SciSpark they make a lot of use of scispark at JPL
[21:01] <magicaltrout> I'll see if we can get them to build a charm for it, as they stand up scispark, hadoop and ipython/zeppelin stuff
[21:29] <xilet> so one more stupid question, again on 2.0beta7 of openstack: there was never a prompt for setting a user account for horizon. keystone user-list does show an admin user; if I just reset the password for that one user account, is that the best way to gain access, or did I miss a default password somewhere?
[21:33] <xilet> Nevermind, did the smart thing and just added a new user.
[22:34] <xilet> Sorry for the set of openstack questions, but if any of you have it working, how do you attach cinder to an LVM group local to the physical machine? The host can see it fine.