=== valeech_ is now known as valeech
[04:25] Another block device question: with lxc I have /etc/lxc/defaults.conf with lxc.cgroup.devices.allow = b 43:* rwm. If I start an lxc container manually I can attach a storage device and use it normally inside the container. However, if I deploy a juju container I can attach the device, and it shows up, but it won't let me access it.
[04:25] Is there another place with juju charms that defines the lxc defaults for those sorts of settings to allow device access?
[04:25] Juju 2.0
[06:59] magicaltrout: noooo
=== bodie__ is now known as bodie_
=== urulama_ is now known as urulama
=== rogpeppe1 is now known as rogpeppe
=== frankban|afk is now known as frankban
=== BlackDex_ is now known as BlackDex
[10:00] kjackal: so I'm thinking about our bigtop charms, and the principal and any subordinates will both unpack the bigtop repo to /home/ubuntu/bigtop.deploy, right?
[10:01] yes
[10:02] kjackal: so this means if we're writing any values to hiera files we have to consider them stateless - we write, we apply, we imagine they're gone (because they might be)
[10:04] yes, true
[10:05] now if these values propagate to a resource that is shared across bigtop roles (e.g. a shared hadoop-core.xml file) then we might get into trouble
[10:08] kjackal: yeah. well. it would have to be specific values in that file, e.g. hdfs-site.xml ... and I think the top layer's value should take precedence
[10:58] hey kwmonroe_, your bigtop smoke-test stuff works ootb with sqoop without any template mods (as long as I hardset the env var)
[11:08] admcleod: how long does it take to run?
[11:13] 3 minutes, looks fine
[12:37] admcleod, kjackal: stateless hiera files are the reason that I stashed the list of Zookeeper nodes in a .json file under the Zookeeper charm's resources directory. Whenever I run puppet, I read from that file, and pass it in as an override. I'm not sure whether that's a best practice, though.
[13:01] petevg: what is it you're putting in that json file again?
[13:26] admcleod: the list of zk peers. We override the ensemble var in hieradata, and it ends up getting written to the zookeeper config.
[13:29] petevg: so you write the peers to the hieradata and then run puppet apply every time the list changes?
[13:29] admcleod: yes.
[13:29] petevg: cool
[13:30] :-)
[13:31] petevg: why was it you said you chose not to use leader settings?
[13:32] admcleod: the list is different on each box (each node lists itself first), and it has to get updated when a node joins, right before it figures out who the leader is.
[13:32] ... so it was either throw in a bunch of waits that made things confusing, or just stick the data somewhere else.
[13:33] petevg: hmm, when you say 'figures out who the leader is', are you talking about juju or zookeeper 'figuring'?
[13:33] juju
[13:33] Zookeeper has its own ideas about leadership, which are separate.
[13:34] right.. I didn't think the leadership election took very long, though. If there's one node, it's the leader, and if another one joins, the first one is still the leader. Or so I thought.
[13:35] Yes. But the node that is joining doesn't figure that out right away, and I get errors trying to write to the leader.
[13:36] The wait probably wouldn't be a long one ... it might be worth revisiting and refactoring, now that I've got the basic flow of stuff working.
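For reference, a rough sketch of the stash-and-override approach petevg describes above. The paths, the hiera key, and the puppet invocation are assumptions for illustration, not the charm's actual code:

    #!/bin/bash
    # Persist the peer list outside hook/hiera state, since the unpacked
    # bigtop tree (and any hiera values written into it) may be recreated
    # at any time. Both paths below are hypothetical.
    STASH=/var/lib/zookeeper-charm/zk_peers.json
    OVERRIDE=/etc/puppet/hieradata/zookeeper_override.yaml

    # On a peer joining or leaving, rewrite the stash (this unit listed first).
    printf '%s\n' '["10.0.0.5:2888:3888", "10.0.0.6:2888:3888"]' > "$STASH"

    # Before every puppet run, regenerate the override from the stash.
    {
      echo "hadoop_zookeeper::server::ensemble:"   # hiera key name assumed
      python3 -c 'import json, sys; [print("  -", p) for p in json.load(open(sys.argv[1]))]' "$STASH"
    } > "$OVERRIDE"
    puppet apply manifests/site.pp                 # illustrative invocation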
[13:36] petevg: you might even be able to use stub's leadership layer to wait until election has completed
[13:38] petevg: https://launchpad.net/layer-leadership
[13:38] admcleod: the tricky thing is that I need to persist the state right away. If the process exits because it is waiting for something, then I lose the state.
[13:38] I'll play around with it a bit.
=== stub` is now known as stub
[14:29] Does anyone know how Juju 2's idea of the current controller/model is stored?
[14:30] If I have a long-running test script in one terminal window, where model 'm1' is the default, can I do 'juju add-model m2 && juju switch m2' in another window, without disturbing the first test?
[14:39] neiljerram: the current controller is stored in your JUJU_DATA (~/.local/share/juju), so if you're using juju switch, it will take effect in all terminals
[14:39] Thanks cherylj.
[14:39] neiljerram: you can use the JUJU_MODEL env var
[14:40] that should just be local to the terminal it's set in
[14:40] Ah, great.
[14:40] I was thinking that I should modify my scripts to put an explicit "-m " parameter in every Juju command. But JUJU_MODEL would be much simpler.
[14:42] yeah, the commands will look at JUJU_MODEL first, before inspecting what the current model was switched to with 'juju switch'
=== tasdomas` is now known as tasdomas
[14:47] This is very nearly perfect... :-) One slight snag, that I just discovered, is that 'juju add-model' implicitly does a 'juju switch' as well - which means there is a risk of disturbing a test that is already running.
[14:50] I suggest it would be better if 'juju add-model' did not do that. Then I could do 'juju add-model M2; export JUJU_MODEL=M2' in another window, without any disturbance of the existing test.
[15:28] Hello team, can I get a sample layered charm that checks a peer relation in bash? I am asking this for my learning.
[15:35] Hello
[15:35] charmers, have a question for you
[15:35] I would like to contribute to https://jujucharms.com/mediawiki-single
[15:36] as the readme is incorrect
[15:36] I follow the contribute link to https://code.launchpad.net/~charmers/charms/bundles/mediawiki/bundle
[15:36] which does not match the download zip
[15:36] :-/
[15:36] so I am guessing this was pushed from a different copy than what the contribute link is pointing to
[15:36] arosales: mediawiki-single is old, wiki-simple is the new one
[15:37] marcoceppi: question still valid, but
[15:37] I ask as mediawiki-single is the example on https://jujucharms.com/get-started
[15:41] marcoceppi: should we update https://jujucharms.com/get-started to use mediawiki-single and un-promulgate mediawiki-single?
[15:41] arosales: yes, jcastro has been trying to do this for a few weeks now
[15:41] sorry, update https://jujucharms.com/get-started to use mediawiki-simple and un-promulgate mediawiki-single
[15:42] marcoceppi: I was also updating CWR to test the bundle at /get-started
[15:42] it's wiki-simple
[15:42] marcoceppi: given jcastro has been trying for weeks, is there a github bug I can follow up on?
[15:43] https://jujucharms.com/wiki-simple/
[15:43] probably
[15:43] arosales: https://github.com/CanonicalLtd/jujucharms.com/issues/242
[15:45] Hello. What are some troubleshooting steps I could take to determine why juju 2.0 beta9 gets stuck bootstrapping maas 2.0 beta 7 at the fetching tools stage?
[15:45] wow april
[15:46] marcoceppi: so should we un-promulgate mediawiki-single then?
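A minimal sketch of the per-terminal targeting cherylj and neiljerram work out above (behaviour as described in the conversation; model names are placeholders):

    # terminal 1: the long-running test pins its own model explicitly
    juju status -m m1

    # terminal 2: create and target a second model without touching terminal 1
    juju add-model m2        # note: this also switches the shared current model
    export JUJU_MODEL=m2     # shell-local; checked before the 'juju switch' state
    juju status              # now targets m2 in this terminal only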
[15:46] arosales: yes, when the get-started page gets updated
[15:46] otherwise we just break the new user experience even more
[15:47] marcoceppi: ok, I'll work on that
[15:47] marcoceppi: last question
[15:47] http://status.juju.solutions/bundle/cwr-test-410
[15:47] failing on mysql
[15:47] arosales: https://lists.ubuntu.com/archives/juju/2016-April/007132.html
[15:47] should we update wiki-simple to use mariadb?
[15:48] ugh, it's the openstack tests.
[15:48] the charm works
[15:48] the tests don't
[15:48] marcoceppi: awesome on the lists and bugs. Just need to follow up on seeing this done
[15:49] marcoceppi: I think we can un-promulgate mediawiki-scalable per the mailing list post
[15:49] and I'll work on updating /get-started so we can un-promulgate mediawiki-single
[15:49] arosales: yup
[15:50] marcoceppi: can you un-promulgate if I ask nicely?
[15:50] arosales: already doing it
[15:50] marcoceppi: thanks
[15:50] marcoceppi: re mysql, the bigdata team is hitting the same thing in their tests
[15:50] arosales: because we have openstack tests mixed in the bunch
[15:50] I'll update the charm, but I don't think the OS team will appreciate it
[15:51] easiest way forward is to use mariadb to show green, but the correct way forward is to not run openstack tests
[15:51] marcoceppi: we should keep the openstack tests, but figure out a way to not run them in non-openstack contexts
[15:51] well, not having them in the charm is a good start
[15:52] beisner: is openstack using mysql or percona as the sql db?
[15:52] marcoceppi: sure, we just need to give openstack an alternative to testing mysql in openstack
[15:52] hi arosales, marcoceppi - percona-cluster is the primary focus. We may still have some test bundles with the mysql charm in play, but I'd say it's safe to remove the keystone bits from the mysql amulet tests.
[15:54] marcoceppi, arosales - what's the status of mongodb for xenial in the charm store? Seems like I saw some convo around that recently, but I don't find one available.
[15:54] beisner: in progress
[15:55] marcoceppi, ack, thx. fwiw, we're a bit blocked on cs: bundles for xenial-mitaka as ceilometer requires mongodb.
[15:56] beisner: but why not just deploy trusty mongodb?
[15:56] beisner: hopefully by end of month we will have an updated mongo, as system Z also needs that
[15:57] marcoceppi, arosales - hmm, gonna try that now. This is for system z s390x openstack validation this week.
[15:57] marcoceppi: beisner: so where do we stand on mysql openstack tests?
[15:57] arosales: sounds like I can just pull the tests
[15:57] beisner: I think we need a special system z binary for mongo on z
[15:57] arosales, marcoceppi - I'd say it's safe to remove the keystone bits from the mysql amulet tests.
[15:57] not yet in the charm
[15:57] I'm about to find out :)
[15:57] beisner: can marcoceppi pull all the openstack tests or just keystone?
[15:58] beisner: no, I am telling you re mongo :-)
[15:58] arosales, marcoceppi - refactor mysql tests to suit
[15:58] beisner: \o/
[15:58] dammit arosales :)
[15:58] arosales: I'll have it updated today
[15:59] beisner: but perhaps some happy coincidence has occurred since I last looked. I just know the IBM system z folks were working on a mongo ppa for Z
[15:59] dannf: do you know the status of mongo on xenial, or of a ppa?
[15:59] * arosales also searching
[15:59] arosales: well, the guy doing the work on IBM's side left the company. He has a replacement, but I haven't seen a drop from him yet
[16:01] dannf: gotcha, but stock mongo doesn't work on xenial, correct?
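Along the lines of what marcoceppi suggests above, a stopgap is to pull the existing trusty charm into the otherwise xenial model, and to check first what the archive actually ships for the target series and architecture; the relation endpoint and the rmadison usage here are illustrative:

    juju deploy cs:trusty/mongodb
    juju add-relation ceilometer mongodb     # endpoint names may need to be given explicitly
    # confirm which mongodb builds exist per release/architecture
    rmadison mongodb                         # rmadison comes from the devscripts package
    apt-cache policy mongodb-server          # on the target (e.g. s390x/xenial) box itself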
[16:01] dannf: and no current s390 mongo ppa that you know of
[16:01] arosales: correct (not on s390x)
[16:02] beisner: ^ :-/
[16:02] arosales: correct. I made one: https://launchpad.net/~ubuntu-s390x-community/+archive/ubuntu/mongodb
[16:02] arosales: but mongo FTBFS. I'm sure *we* could fix the build issues, but IBM was supposed to
[16:03] dannf: ok, thanks for the update. I'll email IBM and see how we can move this forward
[16:03] arosales: I suspect it's just a missing build-dep, fwiw
[16:03] arosales: let me forward you the last thread on this...
[16:03] beisner: I'll cc you on the mail to IBM about getting a mongo for s390 we can put into charms
[16:03] dannf: thanks, I'll follow up from there
[16:04] arosales, ack. Appreciate it. Be aware that without mongodb, we have no ceilometer.
[16:04] arosales: that, and java seems stalled too :(
[16:05] arosales: I sent them a git tree w/ fixes for their java packages, but *PLONK*
[16:05] (OT here though, I suppose)
[16:06] arosales, dannf - sure enough. No mongodb pkgs in ubuntu-ports s390x packages.
[16:07] beisner: yeah, the main reason for that is that s390x needs a new upstream version - and upgrading mongo in general in ubuntu is an issue, because upstream mongo doesn't support upgrading from the old version we have to current
[16:08] beisner: solution for that is to version the mongo packages, so that upgrading isn't an issue, but I don't know of anyone working on that
[16:08] s/solution/proposed solution/
[16:10] hi all. Can I remove my charm from the charm store?
[16:19] arosales, dannf - fyi, raised for tracking and reference in our current validation docs: https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1595242
[16:19] Bug #1595242: mongodb xenial s390x packages are needed (blocks ceilometer)
[16:28] andrey-mp: You can revoke access to it. You can't remove it yet.
[16:29] stub: ok, thanks. I already revoked access.
[16:32] Does open-port take effect immediately, or only if the hook terminates successfully?
=== kwmonroe_ is now known as kwmonroe
[16:44] beisner: thanks
[16:55] tvansteenburgh: Do we know for sure bundletester is working with beta 8? This might entirely be pebkac, but it's acting wonky on me. Hanging when a unit hits an error state. Huh, and failing on `juju api-endpoints -e local.reviewqueue:default` because api-endpoints is gone.
[16:57] aisrael: what's the output of juju list-controllers --format yaml
[16:58] tvansteenburgh: http://pastebin.ubuntu.com/17705681/
[16:59] aisrael: okay, I'll need to see the full output of the test run
[17:00] tvansteenburgh: http://pastebin.ubuntu.com/17705768/
[17:01] I don't think it should matter, but this is a fresh xenial install, running in lxd (nested)
[17:02] aisrael: and you have the latest juju-deployer installed?
[17:04] Hm. That might be a problem. I installed it from the archives, and that's 0.6.4, and I installed via pip and that's 0.8. Removing the older one will remove amulet, too, but I can pip install that
[17:05] aisrael: if you want to install the deb you need the one in ppa:tvansteenburgh/ppa
[17:05] aisrael: same for python-jujuclient
[17:05] tvansteenburgh: ok, I think this is related to pip installing everything in ~/.local
[17:05] aisrael: you can also pip install both of those if you want, the latest are on pypi too
[17:08] Following this conversation w/ interest. I saw bundletester hang on an error just now ... the only version of juju-deployer I have is the one from pip, though (0.8.0).
[17:08] tvansteenburgh: thanks. Let me get this pip stuff straightened out, and I may grab you for a few minutes after standup if I'm still stuck.
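For anyone untangling the same version mix, a rough sketch of getting the newer deployer stack tvansteenburgh points at (package sources as mentioned in the conversation; adjust to your setup):

    # drop the older archive build, then take the one from the PPA
    sudo apt-get remove juju-deployer
    sudo add-apt-repository ppa:tvansteenburgh/ppa
    sudo apt-get update && sudo apt-get install juju-deployer python-jujuclient
    # or install everything from PyPI instead
    pip install --user juju-deployer jujuclient amulet bundletester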
[17:14] petevg: what version of juju?
[17:16] aisrael: 2.0 beta8
[17:18] aisrael: I installed it from the archive, because beta7 was the only thing in my apt cache. I'm also running on xenial. I think that the only major difference in our environments is that I did a hung and destroy session for stray Python packages yesterday.
[17:19] whoops: "hung" -> "huge search and"
[17:19] I'm going to downgrade to beta7 and see if that makes any difference
[17:20] Cool.
=== frankban is now known as frankban|afk
[17:36] So far, beta7 is working much better
[17:41] Cool. Beta 7 has that issue where it occasionally gets upset when you destroy a model and immediately create another one. If I get frustrated with test hangs, I'll give it a try, though.
=== zz_CyberJacob is now known as CyberJacob
[18:02] arosales, do you know - is the manual provider still a thing with juju2? I'm not succeeding in finding usage/docs.
[18:02] beisner: it is
[18:03] aha, https://jujucharms.com/docs/master/clouds-manual
[18:03] beisner: also at https://jujucharms.com/docs/devel/clouds under "manual"
[18:04] arosales, if I need to do both manual machines and containers, do I need to stand up the containers and bring those in the same way?
[18:05] beisner: yes, I believe so, as you still need the resource
[18:05] and manual won't set up an lxc container for you
[18:05] right, makes sense. thx arosales
[18:07] can interfaces on interfaces.juju.solutions point to other repositories besides github now? I have a gitlab server where I've been storing stuff
[18:10] petevg: Hey, have you started on / finished the restart action for Zookeeper yet?
[18:10] I just finished a discussion with bcsaller about how we want to handle the actions that would be relevant
[18:11] cory_fu: I finished, tested and pushed.
[18:11] But I can refactor if we have something better :-)
[18:11] (Just realized that I forgot to move the card to review -- just did that.)
[18:15] petevg: Actually, looking at your action, never mind. Your action is fine the way it is, save for a couple of minor, unrelated comments I will add to the PR.
[18:15] Cool.
[18:44] icey: lol ..... trying to introduce a dependency on your personal gitlab for all? - Not that I doubt its functionality, availability, or capability ... if people could just add arbitrary repos ... doesn't that seem like something that would decrease the stability of the framework as a whole?
[18:45] frankly, I think the whole interfaces.juju.solutions needs some way of specifying "This is still in dev!"
[18:45] I can't get our CI to test things that aren't there :)
[18:45] aaah
[18:45] also, I'm attributing these to myself, so if you trust me to know what I'm doing, by all means, use it ;-)
[18:46] like a stamp of approval, or "supported" - something to that effect?
[18:46] bdx: long term, these things /should/ move under github.com/openstack but for now we haven't merged them in :)
[18:46] interfaces and layers?
[18:47] bdx: yeah, I'm working on replacing the current ceph* charms with layers
[18:47] Is there a working juju daily ppa somewhere? https://launchpad.net/~juju/+archive/ubuntu/daily looks like it's out of date
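A short sketch of the manual-provider flow beisner asks about above (hostnames and addresses are placeholders): existing machines are enlisted over SSH, and since manual won't create containers for you, a container you stand up yourself gets enlisted the same way:

    juju add-machine ssh:ubuntu@bare-metal-1.example.com    # becomes machine 0
    juju add-machine ssh:ubuntu@bare-metal-2.example.com    # becomes machine 1
    # containers: create them on the host yourself, then enlist them by address
    juju add-machine ssh:ubuntu@10.20.30.40                 # the container's IP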
[18:47] which means 2 new layers for ceph-mon (ceph-base and ceph-mon), as well as 4 new interfaces
[18:47] I need to get a tip or close-to-tip juju and I was hoping there was a PPA I could use so I don't have to learn how to build it
[18:47] So, I'm going to have these (not yet tested or peer reviewed) layers + interfaces going onto interfaces.juju.solutions
[18:48] icey: I was looking over those ... super cool
[18:49] I see, what is the protocol for testing layers and interfaces? Is there one?
[18:49] well
[18:49] the openstack team is using gerrit (with jenkins) to run tests
[18:54] anyone know the "callout" box syntax for charm readmes? Docs say "!!! Note: foo" should do it, but it doesn't when the readme is rendered in the store.
=== urulama is now known as urulama|__
[21:25] arosales: Thank you for the help! Got it figured out with the help from #maas
[21:25] Any idea how to get juju 2.0 to deploy trusty on maas machines when doing an add-machine? Everything I have tried deploys xenial, even though maas has the default commission and deploy set to trusty.
[21:30] Help. Trying to deploy juju within juju on a localhost (LXD) cloud.
[21:30] Error: Failed to change ownership of: /var/lib/lxd/containers/juju-d8b754-0/rootfs
=== valeech_ is now known as valeech
[21:39] I used --keep-broken for juju bootstrap to read logs
[21:39] The only two lines in there were "read uid map: type u nsid 0 hostid 100000 range 65536"
[21:40] and "read uid map: type g nsid 0 hostid 100000 range 65536"
[21:41] SaMnCo: the chap from Mesosphere seems pretty interested, thanks for the intro
[21:42] it'd be nice to work with them to smooth out the extraction of DC/OS into an archive I can upgrade more easily
[21:53] valeech: the maas guys rock :-) glad you got a setup working :-)
[22:11] The error propagates from lxc/lxd "C.shiftowner(cbasepath, cpath, C.int(uid), C.int(gid))"
=== redir is now known as redir_afk
[22:32] tvansteenburgh: I may have a CI job hung up. I don't think this should be running for 4+ hours: http://juju-ci.vapour.ws/job/charm-bundle-test-lxc/4678/console
[22:45] Hello all
[22:45] Really having a hell of a time getting Kubernetes running on AWS using juju
[22:46] anyone seen this? ubuntu@612c225cd992:~$ juju deploy local:trusty/kubernetes ERROR unknown schema for charm URL "local:trusty/kubernetes"
[22:46] magicaltrout: cool, let me know if we can help in any way...
[22:47] ahh, maybe I just need to remove the local:
[22:48] DenverParaFlyer: use relative paths, e.g. `juju deploy ./../../wordpress`
[22:48] DenverParaFlyer: if you're using 2.0 ... otherwise 1.x uses the 'local:'
[22:48] prefix
[22:49] thanks. I was following the instructions here: https://jujucharms.com/kubernetes/trusty
[22:52] DenverParaFlyer: replace local: with cs:
[22:52] juju deploy cs:trusty/kubernetes
[22:53] local: assumes you have a local copy of the charm. cs: will download one from the charm store
=== CyberJacob is now known as zz_CyberJacob
[22:59] thanks @aisrael!
[23:00] any idea how I fix this? "hook failed: "etcd-relation-joined" for etcd:client"
[23:01] shows up when I do a 'juju status'
[23:02] already tried "juju deploy trusty/etcd; juju deploy local:trusty/kubernetes; juju add-relation kubernetes etcd" from the guide @ https://jujucharms.com/kubernetes/trusty
[23:06] similar to this issue? https://github.com/juju-solutions/bundle-observable-kubernetes/issues/17
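Some usual first steps for a failed relation hook like the one above (the unit names are assumed from the deploy commands shown; check `juju status` for the unit actually in error):

    juju debug-log --replay | grep -i etcd    # find the hook traceback
    juju debug-hooks kubernetes/0             # re-run hooks interactively on the failing unit
    juju resolved kubernetes/0                # mark the failed hook resolved once the cause is fixed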