[04:25] <xilet> Another block device question, with lxc I have /etc/lxc/defaults.conf with lxc.cgroup.devices.allow = b 43:* rwm.  If I start an lxc container manually I can attach a storage device and use it normally inside the container. However if I deploy a juju container I can attach the device, it shows up but won't let me access it.
[04:25] <xilet> Is there another place with juju charms that defines the lxc defaults for those sorts of settings to allow device access?
[04:25] <xilet> Juju 2.0
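For reference, the cgroup rule in question looks like this (hedged: the stock file is usually /etc/lxc/default.conf, and with Juju 2.0's LXD-based local provider, device access is instead granted through an LXD profile, e.g. `lxc profile device add`, rather than this file):

```
# allow read/write/mknod on block devices with major number 43
lxc.cgroup.devices.allow = b 43:* rwm
```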
[06:59] <admcleod> magicaltrout: noooo
[10:00] <admcleod> kjackal: so i'm thinking about our bigtop charms, and the principal and any subordinates will both unpack the bigtop repo to /home/ubuntu/bigtop.deploy right?
[10:01] <kjackal> yes
[10:02] <admcleod> kjackal: so this means if we're writing any values to hiera files we have to consider them stateless - we write, we apply, we imagine they're gone (because they might be)
[10:04] <kjackal> yes, true
[10:05] <kjackal> now if these values propagate to a resource that is shared across bigtop roles (eg a shared hadoop-core.xml file) then we might get into trouble
[10:08] <admcleod> kjackal: yeah. well. it would have to be specific values in that file, e.g. hdfs-site.xml ... and i think the top layer's value should take precedence
[10:58] <admcleod> hey kwmonroe_, your bigtop smoke-test stuff works ootb with sqoop without any template mods (as long as i hardset the env var)
[11:08] <kjackal_> admcleod: how long does it take to run?
[11:13] <kjackal_> 3 minutes, looks fine
[12:37] <petevg> admcleod, kjackal: stateless hiera files are the reason that I stashed the list of Zookeeper nodes in a .json file under the Zookeeper charm's resources directory. Whenever I run puppet, I read from that  file, and pass it in as an override. I'm not sure whether that's a best practice, though.
[13:01] <admcleod> petevg: what is it you're putting in that json file again?
[13:26] <petevg> admcleod: the list of zk peers. We override the ensemble var in hieradata, and it ends up getting written to the zookeeper config.
[13:29] <admcleod> petevg: so you write the peers to the hieradata and then run puppet apply every time the list changes?
[13:29] <petevg> admcleod: yes.
[13:29] <admcleod> petevg: cool
[13:30] <petevg> :-)
[13:31] <admcleod> petevg: why was it you said you chose not to use leader settings?
[13:32] <petevg> admcleod: the list is different on each box (each node lists itself first), and it has to get updated when a node joins, right before it figures out who the leader is.
[13:32] <petevg> ... so it was either throw in a bunch of waits that made things confusing, or just stick the data somewhere else.
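The pattern petevg describes can be sketched roughly like this (a hedged illustration: the file location, hiera key, and helper names are all assumptions for clarity, not the actual Zookeeper charm code):

```python
# Persist the Zookeeper peer list in a JSON file under the charm's
# resources dir, then read it back and pass it to puppet as a hiera
# override on every `puppet apply` -- surviving the stateless hiera files.
import json
import os

PEERS_FILE = "resources/zkpeers.json"  # hypothetical location

def save_peers(peers):
    os.makedirs(os.path.dirname(PEERS_FILE), exist_ok=True)
    with open(PEERS_FILE, "w") as f:
        json.dump(peers, f)

def load_peers():
    if not os.path.exists(PEERS_FILE):
        return []
    with open(PEERS_FILE) as f:
        return json.load(f)

def hiera_override(local_unit):
    # Each node lists itself first (as mentioned above), then the rest.
    peers = load_peers()
    ordered = [local_unit] + [p for p in peers if p != local_unit]
    return {"hadoop_zookeeper::server::ensemble": ordered}  # hypothetical key
```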
[13:33] <admcleod> petevg: hmm when you say 'figures out who the leader is' are you talking about juju or zookeeper 'figuring'?
[13:33] <petevg> juju
[13:33] <petevg> Zookeeper has its own ideas about leadership, which are separate.
[13:34] <admcleod> right.. i didn't think the leadership election took very long though. if there's one node, it's the leader, and if another one joins the first one is still the leader. or so i thought
[13:35] <petevg> Yes. But the node that is joining doesn't figure that out right away, and I get errors trying to write to the leader.
[13:36] <petevg> The wait probably wouldn't be a long one ... it might be worth revisiting and refactoring, now that I've got the basic flow of stuff working.
[13:36] <admcleod> petevg: you might even be able to use stub's leadership layer to wait until election has completed
[13:38] <admcleod> petevg: https://launchpad.net/layer-leadership
[13:38] <petevg> admcleod: the tricky thing is that I need to persist the state right away. If the process exits because it is waiting for something, then I lose the state.
[13:38] <petevg> I'll play around with it a bit.
[14:29] <neiljerram> Does anyone know how Juju 2's idea of the current controller/model is stored?
[14:30] <neiljerram> If I have a long-running test script in one terminal window, where model 'm1' is the default, can I do 'juju add-model m2 && juju switch m2' in another window, without disturbing the first test?
[14:39] <cherylj> neiljerram: the current controller is stored in your JUJU_DATA (~/.local/share/juju), so if you're using juju switch, it will take effect in all terminals
[14:39] <neiljerram> Thanks cherylj.
[14:39] <cherylj> neiljerram: you can use the JUJU_MODEL env var
[14:40] <cherylj> that should just be local to the terminal it's set in
[14:40] <neiljerram> Ah, great.
[14:40] <neiljerram> I was thinking that I should modify my scripts to put an explicit "-m <model>" parameter in every Juju command.  But JUJU_MODEL would be much simpler.
[14:42] <cherylj> yeah, the commands will look at JUJU_MODEL first, before inspecting what the current model was switched to with 'juju switch'
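That resolution order can be sketched like this (an assumption-laden simplification of the CLI's behaviour, not its actual code; the real client reads the switched-to model from YAML under JUJU_DATA):

```python
# Model resolution order as described above: an explicit -m flag wins,
# then the JUJU_MODEL environment variable, then whatever `juju switch`
# last recorded under JUJU_DATA (~/.local/share/juju).
def resolve_model(env, switched_model, flag_model=None):
    """env: dict of environment variables; switched_model: the current
    model recorded by `juju switch`; flag_model: a -m/--model argument."""
    if flag_model:
        return flag_model
    return env.get("JUJU_MODEL") or switched_model
```

This is why exporting JUJU_MODEL in one terminal leaves a long-running test in another terminal undisturbed, even if something calls `juju switch` in between.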
[14:47] <neiljerram> This is very nearly perfect... :-)  One slight snag, that I just discovered, is that 'juju add-model' implicitly does a 'juju switch' as well - which means there is a risk of disturbing a test that is already running.
[14:50] <neiljerram> I suggest it would be better if 'juju add-model' did not do that.  Then I could do 'juju add-model M2; export JUJU_MODEL=M2' in another window, without any disturbance of the existing test.
[15:28] <Prabakaran> Hello  Team, Can i get any sample layered charm to check peer relation in bash? I am asking this for my learning
[15:35] <arosales> Hello
[15:35] <arosales> charmers: have a question for you
[15:35] <arosales> I would like to contribute to https://jujucharms.com/mediawiki-single
[15:36] <arosales> as the readme is incorrect
[15:36] <arosales> I follow the contribute link to https://code.launchpad.net/~charmers/charms/bundles/mediawiki/bundle
[15:36] <arosales> which does not match the download zip
[15:36] <arosales> :-/
[15:36] <arosales> so I am guessing this was pushed from a different copy than what the contribute link is pointing to
[15:36] <marcoceppi> arosales: mediawiki-single is old, wiki-simple is the new one
[15:37] <arosales> marcoceppi: question still valid but
[15:37] <arosales> I ask as mediawiki-single is the example on  https://jujucharms.com/get-started
[15:41] <arosales> marcoceppi: should we update https://jujucharms.com/get-started to use mediawiki-single and un-promulgate mediawiki-single ?
[15:41] <marcoceppi> arosales: yes, jcastro has been trying to do this for a few weeks now
[15:41] <arosales> sorry update https://jujucharms.com/get-started to use mediawiki-simple and un-promulgate mediawiki-single
[15:42] <arosales> marcoceppi: I was also updating CWR to test the bundle at /get-started
[15:42] <marcoceppi> it's wiki-simple
[15:42] <arosales> marcoceppi: given jcastro has been trying for weeks is there a github bug I can follow up on?
[15:43] <marcoceppi> https://jujucharms.com/wiki-simple/
[15:43] <marcoceppi> probably
[15:43] <marcoceppi> arosales: https://github.com/CanonicalLtd/jujucharms.com/issues/242
[15:45] <valeech> Hello. What are some troubleshooting steps I could take to determine why juju 2.0 beta9 gets stuck bootstrapping maas 2.0 beta 7 at the fetching tools stage?
[15:45] <arosales> wow april
[15:46] <arosales> marcoceppi: so should we un-promulgate mediawiki-single then?
[15:46] <marcoceppi> arosales: yes, when the get-started page gets updated
[15:46] <marcoceppi> otherwise we just break the new user experience even more
[15:47] <arosales> marcoceppi: ok, I'll work on that
[15:47] <arosales> marcoceppi: last question
[15:47] <arosales> http://status.juju.solutions/bundle/cwr-test-410
[15:47] <arosales> failing on mysql
[15:47] <marcoceppi> arosales: https://lists.ubuntu.com/archives/juju/2016-April/007132.html
[15:47] <arosales> should we update wiki-simple to use mariadb?
[15:48] <marcoceppi> ugh, it's the openstack tests.
[15:48] <marcoceppi> the charm works
[15:48] <marcoceppi> the tests don't
[15:48] <arosales> marcoceppi: awesome on the lists and bugs. Just need to follow up on seeing this done
[15:49] <arosales> marcoceppi: I think we can un-promulgate mediawiki-scalable per the mail list post
[15:49] <arosales> and I'll work on updating /get-started so we can unpromulgate mediawiki-single
[15:49] <marcoceppi> arosales: yup
[15:50] <arosales> marcoceppi: can you un-promulgate if I ask nicely?
[15:50] <marcoceppi> arosales: already doing it
[15:50] <arosales> marcoceppi: thanks
[15:50] <arosales> marcoceppi: re mysql the bigdata team is hitting the same thing in their tests
[15:50] <marcoceppi> arosales: because we have openstack tests mixed in the bunch
[15:50] <marcoceppi> I'll update the charm, but I don't think the OS team will appreciate it
[15:51] <arosales> easiest way forward is to use mariadb to show green, but the correct way forward is to not run openstack tests
[15:51] <arosales> marcoceppi: we should keep the openstack tests, but figure out a way to not run them in non-openstack contexts
[15:51] <marcoceppi> well, not having them in the charm is a good start
[15:52] <arosales> beisner: is openstack using mysql or percona as the sql db?
[15:52] <arosales> marcoceppi: sure, we just need to give openstack an alternative to testing mysql in openstack
[15:52] <beisner> hi arosales, marcoceppi - percona-cluster is the primary focus.  we may still have some test bundles with the mysql charm in play, but i'd say it's safe to remove the keystone bits from the mysql amulet tests.
[15:54] <beisner> marcoceppi, arosales - what's the status of mongodb for xenial in the charm store?   seems like i saw some convo around that recently but i don't find one avail.
[15:54] <marcoceppi> beisner: in progress
[15:55] <beisner> marcoceppi, ack thx.  fwiw, we're a bit blocked on cs: bundles for xenial-mitaka as ceilometer requires mongodb.
[15:56] <marcoceppi> beisner: but why not just deploy trusty mongodb?
[15:56] <arosales> beisner: hopefully by end of month we will have an updated mongo as system Z also needs that
[15:57] <beisner> marcoceppi, arosales - hmm, gonna try that now.    this is for system z s390x openstack validation this wk.
[15:57] <arosales> marcoceppi: beisner so where do we stand on mysql openstack tests?
[15:57] <marcoceppi> arosales: sounds like I can just pull the tests
[15:57] <arosales> beisner: I think we need a special system z binary for mongo on z
[15:57] <beisner> arosales, marcoceppi - i'd say it's safe to remove the keystone bits from the mysql amulet tests.
[15:57] <arosales> not yet in the charm
[15:57] <beisner> i'm about to find out :)
[15:57] <arosales> beisner: can marcoceppi pull all the openstack tests or just keystone?
[15:58] <arosales> beisner: no I am telling you re mongo :-)
[15:58] <beisner> arosales, marcoceppi - refactor mysql tests to suit
[15:58] <marcoceppi> beisner: \o/
[15:58] <beisner> dammit arosales :)
[15:58] <marcoceppi> arosales: I'll have it updated today
[15:59] <arosales> beisner: but perhaps some happy coincidence has occurred last time I looked. I just know the IBM system z folks were working on a mongo ppa for Z
[15:59] <arosales> dannf: do you know the status of mongo and xenial or ppa?
[15:59]  * arosales also searching
[15:59] <dannf> arosales: well, the guy doing the work on ibm's side left the company. he has a replacement, but i haven't seen a drop from him yet
[16:01] <arosales> dannf: gotcha, but stock mongo doesn't work on xenial, correct?
[16:01] <arosales> dannf: and no current s390 mongo ppa that you know of
[16:01] <dannf> arosales: correct (not on s390x)
[16:02] <arosales> beisner: ^ :-/
[16:02] <dannf> arosales: correct. i made one: https://launchpad.net/~ubuntu-s390x-community/+archive/ubuntu/mongodb
[16:02] <dannf> arosales: but mongo FTBFS. i'm sure *we* could fix the build issues, but IBM was supposed to
[16:03] <arosales> dannf: ok thanks for the update I'll email IBM and see how we can move this forward
[16:03] <dannf> arosales: i suspect it's just a missing build-dep fwiw
[16:03] <dannf> arosales: let me forward you the last thread on this...
[16:03] <arosales> beisner: I'll cc you on mail to IBM in working to get a mongo for s390 we can put into charms
[16:03] <arosales> dannf: thanks I'll follow up from there
[16:04] <beisner> arosales, ack.  appreciate it.  be aware that without mongodb, we have no ceilometer.
[16:04] <dannf> arosales: that, and java seems stalled too :(
[16:05] <dannf> arosales: i sent them a git tree w/ fixes for their java packages, but *PLONK*
[16:05] <dannf> (OT here though i suppoe)
[16:06] <beisner> arosales, dannf - sure enough. no mongodb pkgs in ubuntu-ports s390x packages.
[16:07] <dannf> beisner: yeah, main reason for that is that s390x needs a new upstream version - and upgrading mongo in general in ubuntu is an issue, because upstream mongo doesn't support upgrading from the old version we have to current
[16:08] <dannf> beisner: solution for that is to version the mongo packages, so that upgrading isn't an issue, but i don't know of anyone working on that
[16:08] <dannf> s/solution/proposed solution/
[16:10] <andrey-mp> hi all. can I remove my charm from charm store?
[16:19] <beisner> arosales, dannf - fyi, raised for tracking and reference in our current validation docs:  https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1595242
[16:19] <mup> Bug #1595242: mongodb xenial s390x packages are needed (blocks ceilometer) <s390x> <uosci> <mongodb (Ubuntu):New> <ceilometer (Juju Charms Collection):New> <ceilometer-agent (Juju Charms Collection):New> <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1595242>
[16:28] <stub> andrey-mp: You can revoke access to it. You can't remove them yet.
[16:29] <andrey-mp> stub: ok, thanks. i already revoked access.
[16:32] <stub> Does open-port take effect immediately, or only if the hook terminates successfully?
[16:44] <arosales> beisner: thanks
[16:55] <aisrael> tvansteenburgh: Do we know for sure bundletester is working with beta 8? This might entirely be pebkac, but it's acting wonky on me. Hanging when a unit hits an error state. Huh, and failing on `juju api-endpoints -e local.reviewqueue:default` because api-endpoints is gone.
[16:57] <tvansteenburgh> aisrael: what's the output of juju list-controllers --format yaml
[16:58] <aisrael> tvansteenburgh: http://pastebin.ubuntu.com/17705681/
[16:59] <tvansteenburgh> aisrael: okay, i'll need to see the full output of the test run
[17:00] <aisrael> tvansteenburgh: http://pastebin.ubuntu.com/17705768/
[17:01] <aisrael> I don't think it should matter, but this is a fresh xenial install, running in lxd (nested)
[17:02] <tvansteenburgh> aisrael: and you have latest juju-deployer installed?
[17:04] <aisrael> Hm. That might be a problem. I installed it from archives, and that's 0.6.4, and I installed via pip and that's 0.8. Removing the older one will remove amulet, too, but I can pip install that
[17:05] <tvansteenburgh> aisrael: if you want to install the deb you need the one in ppa:tvansteenburgh/ppa
[17:05] <tvansteenburgh> aisrael: same for python-jujuclient
[17:05] <aisrael> tvansteenburgh: ok, I think this is related to pip installing everything in ~/.local
[17:05] <tvansteenburgh> aisrael: you can also pip install both of those if you want, latest are on pypi too
[17:08] <petevg> Following this conversation w/ interest. I saw bundletester hang on an error just now ... the only version of juju-deployer I have is the one from pip, though (0.8.0).
[17:08] <aisrael> tvansteenburgh: thanks. Let me get this pip stuff straightened out, and I may grab you for a few minutes after standup if I'm still stuck.
[17:14] <aisrael> petevg: what version of juju?
[17:16] <petevg> aisrael: 2.0 beta8
[17:18] <petevg> aisrael: I installed it from the archive, because beta7 was the only thing in my apt cache. I'm also running on xenial. I think that the only major difference in our environments is that I did a hung and destroy session for stray Python packages yesterday.
[17:19] <petevg> whoops: "hung" -> "huge search and"
[17:19] <aisrael> I'm going to downgrade to beta7 and see if that makes any difference
[17:20] <petevg> Cool.
[17:36] <aisrael> So far, beta7 is working much better
[17:41] <petevg> Cool. Beta 7 has that issue where it occasionally gets upset when you destroy a model and immediately create another one. If I get frustrated with test hangs, I'll give it a try, though.
[18:02] <beisner> arosales, do you know - is the manual provider still a thing with juju2?   i'm not succeeding in finding usage/docs.
[18:02] <arosales> beisner: it is
[18:03] <beisner> aha https://jujucharms.com/docs/master/clouds-manual
[18:03] <arosales> beisner: also at https://jujucharms.com/docs/devel/clouds under "manual"
[18:04] <beisner> arosales, if i need to do both manual machines and containers, do i need to stand up the containers and bring those in the same way?
[18:05] <arosales> beisner: yes I believe so as you still need the resource
[18:05] <arosales> and manual won't set up a lxc container for you
[18:05] <beisner> right, makes sense.  thx arosales
[18:07] <icey> can interfaces on interfaces.juju.solutions point to other repositories besides github now?> I have a gitlab server where I've been storing stuff
[18:10] <cory_fu> petevg: Hey, have you started on / finished the restart action for Zookeeper yet?
[18:10] <cory_fu> I just finished a discussion with bcsaller about how we want to handle the actions that would be relevant
[18:11] <petevg> cory_fu: I finished, tested and pushed.
[18:11] <petevg> But I can refactor if we have something better :-)
[18:11] <petevg> (Just realized that I forgot to move the card to review -- just did that.)
[18:15] <cory_fu> petevg: Actually, looking at your action, nevermind.  Your action is fine the way it is, save for a couple of minor, unrelated comments I will add to the PR.
[18:15] <petevg> Cool.
[18:44] <bdx> icey: lol ..... trying to introduce a dependency on your personal gitlab for all? - Not that I doubt its functionality, availability, or capability ... if people could just add arbitrary repos ... doesn't that seem like something that would decrease the stability of the framework as a whole?
[18:45] <icey> frankly, I think the whole interfaces.juju.solutions needs some way of specifying "This is still in dev!"
[18:45] <icey> I can't get our CI to test things that aren't there  :)
[18:45] <bdx> aaah
[18:45] <icey> also, I'm attributing these to myself, if you trust me to know what I'm doing, by all means, use it ;-)
[18:46] <bdx> like a stamp of approval, or "supported" - something to that effect?
[18:46] <icey> bdx: long term, these things /should/ move under github.com/openstack but for now we haven't merged them in :)
[18:46] <bdx> interfaces and layers?
[18:47] <icey> bdx: yeah, I'm working on replacing the current ceph* charms with layers
[18:47] <jhobbs> Is there a working juju daily ppa somewhere? https://launchpad.net/~juju/+archive/ubuntu/daily looks like it's out of date
[18:47] <icey> which means 2 new layers for ceph-mon (ceph-base and ceph-mon), as well as 4 new interfaces
[18:47] <jhobbs> I need to get a tip or close to tip juju and I was hoping there was a PPA I could use so I don't have to learn how to build it
[18:47] <icey> SO, I'm going to have these (not yet tested or peer reviewed) layers + interfaces going onto interfaces.juju.solutions
[18:48] <bdx> icey: I was looking over those ... super cool
[18:49] <bdx> I see, what is the protocol for testing layers and interfaces? Is there one?
[18:49] <icey> well
[18:49] <icey> the openstack team is using gerrit (with jenkins) to run tests
[18:54] <kwmonroe> anyone know the "callout" box syntax for charm readmes?  docs say "!!! Note: foo" should do it, but it doesn't when the readme is rendered in the store.
[21:25] <valeech> arosales: Thank you for the help! Got it figured out with the help from #maas
[21:25] <valeech> Any idea how to get juju 2.0 to deploy trusty on maas machines when doing an add-machine? Everything I have tried deploys xenial even though maas has the default commission and deploy set to trusty.
[21:30] <ockra> Help. Trying to deploy juju within juju on a localhost (LXD) cloud.
[21:30] <ockra> Error: Failed to change ownership of: /var/lib/lxd/containers/juju-d8b754-0/rootfs
[21:39] <ockra> I used --keep-broken for juju bootstrap to read logs
[21:39] <ockra> Only two lines in there were "read uid map: type u nsid 0 hostid 100000 range 65536"
[21:40] <ockra> and read uid map: type g nsid 0 hostid 100000 range 65536
[21:41] <magicaltrout> SaMnCo: the chap from Mesosphere seems pretty interested, thanks for the intro
[21:42] <magicaltrout> be nice to work with them to smooth out the extraction of DC/OS into an archive I can upgrade easier
[21:53] <arosales> valeech: the maas guys rock :-) glad you got a setup working :-)
[22:11] <ockra> The error propagates from lxc/lxd "C.shiftowner(cbasepath, cpath, C.int(uid), C.int(gid))"
[22:32] <aisrael> tvansteenburgh: I may have a ci job hung up. I don't think this should be running for 4+ hours: http://juju-ci.vapour.ws/job/charm-bundle-test-lxc/4678/console
[22:45] <DenverParaFlyer> Hello all
[22:45] <DenverParaFlyer> Really having a hell of a time getting Kubernetes running on AWS using juju
[22:46] <DenverParaFlyer> anyone seen this? ubuntu@612c225cd992:~$ juju deploy local:trusty/kubernetes ERROR unknown schema for charm URL "local:trusty/kubernetes"
[22:46] <SaMnCo> magicaltrout: cool, let me know if we can help in any way...
[22:47] <DenverParaFlyer> ahh maybe just need to remove the local:
[22:48] <bdx> DenverParaFlyer: use relative paths e.g. `juju deploy ./../../wordpress`
[22:48] <bdx> DenverParaFlyer: if you're using 2.0 ... otherwise 1.x uses the 'local:'
[22:48] <bdx> prefix
[22:49] <DenverParaFlyer> thanks. I was following the instructions here: https://jujucharms.com/kubernetes/trusty
[22:52] <aisrael> DenverParaFlyer: replace local: with cs:
[22:52] <aisrael> juju deploy cs:trusty/kubernetes
[22:53] <aisrael> local: assumes you have a local copy of the charm. cs: will download one from the charm store
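The rules bdx and aisrael describe can be summed up in a small sketch (a simplified classifier for illustration, not Juju's actual charm URL parser):

```python
# Charm reference rules discussed above: "cs:" always means the charm
# store; "local:" is a Juju 1.x schema for a local repository and is
# rejected by Juju 2.0, which takes a filesystem path instead.
def charm_source(ref, juju_major=2):
    if ref.startswith("cs:"):
        return "charm-store"
    if ref.startswith("local:"):
        if juju_major >= 2:
            # matches the error DenverParaFlyer hit above
            raise ValueError('unknown schema for charm URL "%s"' % ref)
        return "local-repository"
    if ref.startswith(("/", "./", "../")):
        return "local-path"
    return "charm-store"  # bare names default to the store
```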
[22:59] <DenverParaFlyer> thanks @aisrael!
[23:00] <DenverParaFlyer> any idea now how I fix this? " hook failed: "etcd-relation-joined" for etcd:client"
[23:01] <DenverParaFlyer> shows up when I do a 'juju status'
[23:02] <DenverParaFlyer> already tried "juju deploy trusty/etcd juju deploy local:trusty/kubernetes juju add-relation kubernetes etcd" from the guide @  https://jujucharms.com/kubernetes/trusty
[23:06] <DenverParaFlyer> similar to this issue? https://github.com/juju-solutions/bundle-observable-kubernetes/issues/17