[00:33] kwmonroe: Pretty sure the Puppet scripts should handle the formatting of the namenode. https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop/manifests/init.pp#L639
[01:22] firl sorry i dropped off there. ping me when you're back around and i'm happy to resume where we left off
[01:22] firl - there are 2 ingress options, traefik and nginx. i have rc defs for both; i'm undecided which is the better option at this point
[01:22] both have some problems with session affinity i've noticed
[01:23] lazyPower - no worries
[01:23] I just remember that juju left the security groups blocked and no ports opened last time I tried.
[01:24] Is there a way to have it accessible externally now?
[01:26] ( svc equiv )
[01:30] I've got a todo item to work on a daemon to read the ingress and open ports accordingly
[01:30] it hasn't been completed yet, it's still very much a juju run open-port operation at the moment :(
[01:30] but, we're aware and moving towards fixing it. I suspect we'll have something for you to look at there within the next month or so, assuming we don't get reprioritized
[01:33] :)
[01:33] haha ok
[01:34] I thought you were saying you were planning on having traefik be the ingress object, like the GCE load balancer implementation for juju
[01:34] Is the thought just to get nodeport working?
[01:50] well we have nodeport basically working, minus the firewalling
[01:51] if you open port 443/80, and stuff in the nginx/traefik ingress controllers, that handles a good chunk of the workloads
[01:51] ya
[01:51] what that leaves out in the cold, however, is socket-based services like irc bouncers, rabbitmq, and workloads like that, where odd ports may need connectivity. NGINX isn't the best middle man for those workloads. I've been considering building a socat container to handle some of those middlewares
[01:51] socat is pretty good at proxying connections...
[01:52] I haven't tested traefik load balancing / ws connections
[01:52] the LB works, WSS seemed to fall down if it required session affinity
[01:52] but the nginx ingress controller handled it beautifully
[01:52] gotcha
[01:53] even though traefik says they support it, i am unconvinced
[01:53] and it's likely PEBKAC
[01:53] or picnic, take your pick :(
[01:53] I am using nginx proxy RCs right now. ployst/nginx-ssl-proxy
[01:54] however I still put a svc in front so that I can keep the same ip effortlessly
[01:54] right, that's how you do it
[01:54] the svc gives you that iptables forward rule so that no matter where you enter the cluster it routes accordingly, which is somewhat nice even if overly complex
[01:55] yeah, from a production standpoint it's a non-starter for me
[01:55] ( if it's not there, that is )
[01:57] so, we should sync in the very very near future again, so we can check the checklist together
[01:57] did you get that ss i sent over?
[01:58] brb
[01:58] just saw it ( requested permission to see it )
[02:37] lazyPower: ping
[02:42] thumper pong
[02:42] lazyPower: hey, looking to test migration of a unit with payloads
[02:42] do you know of any?
[02:42] even fake ones?
[02:42] thumper - we gutted it, older versions of etcd have payloads though
[02:42] let me check my namespace
[02:44] thumper charm show cs:~lazypower/etcd-21
[02:45] there's one for ya, including the payload(s)
[02:45] ta
[02:45] np
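
An aside on the payload registration discussed here (the thread picks back up below, where thumper lands on deploying anything and running the register command by hand): a rough sketch of what that registration might look like from inside a hook context. charmhelpers wraps the payload hook tools, but the helper name and argument order here are from memory and should be checked against your charmhelpers version, and the payload class must match a payload entry in the charm's metadata.yaml.

    # hedged sketch: register a fake payload from a hook so a controller
    # migration test has something to carry across; 'docker' is the payload
    # type, 'idlerpg' the class declared in metadata.yaml, and the id is
    # whatever identifies the running workload (e.g. a container id)
    from charmhelpers.core import hookenv

    def register_test_payload(container_id='deadbeef'):
        hookenv.payload_register('docker', 'idlerpg', container_id)
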
[02:49] firl :| i'm not the doc owner. so whenever matt gets that mail :P
[02:49] no worries
[02:51] ah but i can edit the sharing perms. give it another go
[02:51] should be able to get in now
[02:55] i can see it
=== natefinch-afk is now known as natefincgh
=== natefincgh is now known as natefinch
[03:27] lazyPower: can I use that charm in the lxd provider?
[03:27] and how do I get it to register some payloads?
[03:33] thumper yes
[03:33] and what do you mean register some payloads?
[03:34] OH! i misread that as resources... really sorry chap
[03:34] my mistake
[03:34] it is after all after 10pm
[03:34] :)
[03:35] lazyPower: all I need is for it to register one payload
[03:35] so I can migrate it to another controller
[03:35] and make sure the payloads are still there :)
[03:42] lazyPower: so does that charm use resources or payloads?
[03:49] it uses resources
[03:49] :(
[03:49] https://jujucharms.com/u/lazypower/idlerpg
[03:49] fwiw install hook failed
[03:49] but it won't run in lxd
[03:50] it doesn't have to "run" but does need to register a payload
[03:50] I'm guessing it won't
[03:50] it's going to have to "run" to register a payload
[03:51] it's pulling in a docker image
[03:51] that's what it would register as the payload
[03:55] hmmm...
[03:55] I suppose I could deploy anything then just juju run the register command, right?
=== frankban|afk is now known as frankban
[09:24] is there an example of how to use https://github.com/juju/juju/tree/master/api in a charm?
[09:32] rts-sander: Hi there, not sure if this helps, but there is this python library https://launchpad.net/python-jujuclient
[09:33] rts-sander: if you are trying to make a charm talk to juju you can look at what the juju gui is doing
[09:33] kjackal, I tried https://github.com/kapilt/python-jujuclient but it only works for juju 1, I'm using 2
[09:33] I'm reading through the juju-gui charm now to see if I can find how they do it
[09:34] I thought the jujuclient lib also supports juju2 since it has this "juju2" path...
[09:39] yeah the code looks more up to date than the code on github
[09:58] hi all, I installed juju 2.0 on ubuntu 16.04 by following this link https://jujucharms.com/docs/stable/getting-started
[10:00] I deployed two charms, they are still in pending state. here i pasted the juju status http://paste.openstack.org/show/562911/
[10:02] and in the log i am getting this http://paste.openstack.org/show/562912/
[10:02] please, can someone help?
=== Guest20905 is now known as CyberJacob
=== rogpeppe1 is now known as rogpeppe
[11:41] is there an error in the juju go project? http://pastie.org/10939526
[11:42] did go get and am trying to use the api but it doesn't even compile
[11:59] rts-sander: there is a makefile and godeps which must be used.
[12:20] cheers jrwren, godeps did it
[12:28] balloons: ok so we just need them to push a new snap-confine with our LXD fixes and we should be good to go wrt that fix for running lxd with snappy juju, right?
[12:47] Hi all. For writing a Juju charm, which language is preferred by the community? Actually I want to develop a charm in shell script.
[12:56] ram____: shell script is OK, python seems to be what everyone uses
[12:58] marcoceppi : Ok. Thank you. How can I enable a customized charm as part of an Ubuntu Autopilot installation? I mean, how can we integrate our customized juju charm with an autopilot openstack deployment?
[12:59] ram____: what charm are you customizing?
[13:03] marcoceppi: I want to configure cinder to change the backend to one of our storage drivers. So now we are developing a charm to configure that.
[13:05] ram____: so you don't need to modify cinder, your charm would be like the other cinder backends. However, to get into the autopilot the charm must first join our OIL program: http://partners.ubuntu.com/programmes/openstack
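
On marcoceppi's point above that shell is fine but Python is what everyone uses: a minimal reactive handler, purely illustrative (the state names and install logic are placeholders, not anyone's actual charm), showing the shape a Python charm such as the cinder backend being discussed might take.

    # an illustrative handler in the charms.reactive style; the
    # 'mybackend.*' states and the install body are hypothetical
    from charms.reactive import when_not, set_state
    from charmhelpers.core import hookenv

    @when_not('mybackend.installed')
    def install_backend():
        hookenv.status_set('maintenance', 'installing storage backend')
        # install packages / render the driver config for cinder here
        set_state('mybackend.installed')
        hookenv.status_set('active', 'backend ready')
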
[13:07] here are a few examples of charms that are similar to what you describe: https://jujucharms.com/u/marcoceppi/cinder-xtremio https://jujucharms.com/u/marcoceppi/cinder-vnx
[13:07] hey! trying to deploy the openstack bundle on aws using juju 2.0 beta15, looks like it is panicking because it's using lxc instead of lxd containers - does this need to be updated in the bundle configuration?
[13:08] SimonKLB: that's odd, juju 2.0 should translate lxc -> lxd automatically
[13:08] if it's not, it's a bug. Though, a quick fix would be to update the bundle to include lxd: instead of lxc
[13:08] 2016-08-24 11:32:23 INFO juju.provisioner container_initialisation.go:98 initial container setup with ids: [6/lxc/0 6/lxc/1 6/lxc/2]
[13:08] 2016-08-24 11:32:23 INFO juju.worker runner.go:262 stopped "6-container-watcher", err: worker "6-container-watcher" exited: panic resulted in: runtime error: invalid memory address or nil pointer dereference
[13:09] SimonKLB: I think that lxc line is a red herring
[13:09] the second line is definitely interesting
[13:09] yea i might be mistaken, but i thought it was caused by: https://github.com/juju/juju/blob/master/worker/provisioner/container_initialisation.go#L102
[13:10] SimonKLB: possibly, I'd poke the developers in #juju-dev about that one
[13:10] it might be quite a serious bug
[13:10] will do!
[13:11] marcoceppi: Thank you.
[13:18] marcoceppi: is it possible to remove a whole bundle or do you have to remove the applications individually?
[13:18] SimonKLB: each application individually
[13:18] okok!
=== redelmann is now known as redelmann_wfh
[13:41] marcoceppi : how can we certify a juju charm? How much time will it take to certify?
[13:41] ram____: that's something that you should inquire about with the OIL folks
[13:48] marcoceppi : Ok. How can I get in contact with the OIL folks?
[13:48] ram____: http://partners.ubuntu.com/programmes/openstack
[13:50] marcoceppi : OK. Thank you.
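
An aside on the lxc/lxd placement issue SimonKLB hit above: marcoceppi's quick fix of swapping lxc: for lxd: in the bundle can be scripted. A hypothetical sketch using PyYAML; the bundle filename and the 'services' top-level key are assumptions about that era's bundle format.

    # hypothetical helper: rewrite lxc: container placements to lxd: in a
    # bundle before deploying, per marcoceppi's suggested workaround
    import yaml

    with open('bundle.yaml') as f:
        bundle = yaml.safe_load(f)

    for app in bundle.get('services', {}).values():
        if 'to' in app:
            app['to'] = [str(p).replace('lxc:', 'lxd:') for p in app['to']]

    with open('bundle-lxd.yaml', 'w') as f:
        yaml.safe_dump(bundle, f, default_flow_style=False)
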
[13:51] kwmonroe, cory_fu: With kwmonroe's help last night, I think that I know why I'm seeing namenode failures: When we set up the openjdk relation, we set JAVA_HOME in /etc/environment. This happens before puppet runs. It looks like Bigtop doesn't do this when it installs java by itself, which means that hadoop_java_home never gets set by the puppet script, and hdfs fails to start. (We don't set up JAVA_HOME in /etc/defaults/bigtop-utils until after puppet runs, so even if puppet has a fallback to that value if it can't find it in /etc/environment, it won't have that fallback until after it has tried and failed to start hdfs.)
[13:53] petevg: I'm not sure I understand. You're saying it fails because we *do* set up JAVA_HOME correctly?
[13:53] cory_fu: nope. I'm saying it fails because bigtop *doesn't*
[13:53] Also, pretty sure Bigtop & Puppet ignore /etc/environment entirely
[13:54] petevg: We've been using the java relation with Bigtop Hadoop this entire time. Why is it only failing now?
[13:54] cory_fu: it's not failing when we use the openjdk charm.
[13:54] It's failing when we don't use it.
[13:54] This is me testing the "make java relation optional" stuff.
[13:55] marcoceppi: Which version of Autopilot should we use to deploy Liberty OpenStack? Do you have any idea?
[13:55] cory_fu: I know that bigtop is *supposed* to ignore /etc/environment, but I don't think that it is doing so.
[13:55] I really don't understand. The puppet scripts and Bigtop don't look at or care about /etc/environment, AFAIK. The fact that we update that is just an artifact of how we were doing Java handling prior to Bigtop
[13:56] ram____: I'm not sure, but Mitaka is what we currently deploy.
[13:56] cory_fu: I suspect that it's an artifact that was masking a bug in bigtop (or a bug in the way that we're asking Bigtop to set up namenode).
[13:56] Also, if you're saying that it fails when we *don't* use the java relation, then how does /etc/environment come into it at all? It should just be using the built-in Puppet installation of java at that point
[13:57] I know we have deployed it with that built-in java management before, because we used that prior to adding the java relation
[13:57] cory_fu: it does use it. And hdfs fails to start, complaining that JAVA_HOME is not set.
[13:58] cory_fu: there's some context that you're missing -- see kwmonroe's and my convo from yesterday evening, around 18:20, Eastern time.
[13:58] marcoceppi: For Mitaka, which autopilot version is used?
[13:58] ram____: the latest? I'm not 100% sure
[13:59] marcoceppi : OK. Thank you.
[13:59] cory_fu: basically, namenode is failing to start hdfs, and the clue to what's happening lives in nn.format.log, which is very short: http://paste.ubuntu.com/23085142/
[14:01] petevg: We have definitely run this w/o the java relation successfully before.
[14:03] cory_fu: Maybe I need to pass a third value in to puppet, beyond jdk_preinstalled and jdk_package_name? (If so, I don't see where -- I'm looking at the relevant puppet script now.)
[14:07] cory_fu: does passing in hadoop_java_home sound familiar?
[14:07] No
[14:08] This is the relevant line from hadoop-env.sh:
[14:08] http://paste.ubuntu.com/
[14:08] That's set to undef in puppet/modules/hadoop/manifests/init.pp
[14:09] initialized to undef, I should say.
[14:10] petevg: I'm pretty sure that the *only* thing we did prior to adding support for the java relation was set bigtop::jdk_package_name. So, that should be what we do if the relation is attached.
[14:10] Sorry, if the relation is *not* attached
[14:12] petevg: Also, that link isn't what you meant to send
[14:13] marcoceppi: How can we test the newly created cinder-storage driver charm locally? Any idea?
[14:14] ram____: you can juju deploy openstack onto LXD then deploy your cinder-storage charm and relate it to cinder
[14:14] ram____: https://github.com/openstack-charmers/openstack-on-lxd
[14:15] cory_fu: whoops. http://paste.ubuntu.com/23085177/
[14:15] cory_fu: I have a couple of mini-fixes on cwr, should I submit a PR just to show them to you?
[14:19] petevg: I wonder if we could tell bigtop to go and install whatever java it sees fit, instead of us setting the java package name in the config.
[14:20] kjackal: that would be nice. It did not do so when I tried, though. I initially just set jdk_preinstalled to false, and didn't discover jdk_package_name until I was trying to figure out why that failed.
[14:21] I think that it just tries to do "apt install jdk" in that case, which isn't a valid package name.
[14:25] that would be an easy fix upstream
[14:28] marcoceppi: I followed the provided link https://github.com/openstack-charmers/openstack-on-lxd. I am getting an error. I pasted the juju status error log: http://paste.openstack.org/show/563013/.
[14:28] ram____: at this point, you should probably join #openstack-charms for support
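
To summarize the java handling petevg and kjackal are converging on: a hedged sketch of the two puppet inputs involved. The bigtop::jdk_preinstalled and bigtop::jdk_package_name keys come from the conversation above; the helper function and the concrete package name are hypothetical.

    # hypothetical sketch of choosing bigtop's java-related hiera overrides;
    # the key names are from the discussion above, the package name is a guess
    def java_overrides(java_relation_attached):
        if java_relation_attached:
            # the openjdk charm already installed java and set JAVA_HOME
            return {'bigtop::jdk_preinstalled': True}
        # without a package name, jdk_preinstalled=False makes puppet try
        # `apt install jdk`, which isn't a valid package (per petevg above)
        return {
            'bigtop::jdk_preinstalled': False,
            'bigtop::jdk_package_name': 'openjdk-8-jdk-headless',
        }
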
[14:29] kjackal: possibly. The script is in the distro-agnostic bits of Bigtop; I'm not sure that you can drop in a string that will make a good default across distros.
[14:29] marcoceppi: OK. Thank you.
[15:15] petevg: Link from HO: https://github.com/puppetlabs/puppetlabs-java
[15:15] cory_fu: thx
[15:22] cory_fu: I also looked at why cwr does not work with the mongodb test plan
[15:24] Running a fresh install of juju 2.0beta15 bootstrapped to a MAAS 2.0 rc4. Via the juju gui, I added the openstack base bundle to the canvas. When I do this, it sets all the application names to xenial-X where X is the next letter available. Is there something I need to do to get it to keep the application names (i.e. ceph-mon, neutron-gateway)?
[15:24] cory_fu: it seems that the issue is not with cwr. Mongodb fails/hangs when tested through bundletester (apt-get install permissions)
[15:25] kjackal: Good to know. We should change the example to something that actually works
[15:53] cory_fu, i have a ceph relation that sets a parameter that has dashes in it. It's a param in a dict. auto accessors won't work for me. Is there a workaround for that? I'd like to avoid adding another relation if i can
[15:54] Hi. I followed https://jujucharms.com/docs/stable/getting-started. I deployed the wiki charm. It was giving an error; pasted error log: http://paste.openstack.org/show/563091/. please provide me the solution.
[15:55] cholcombe: You can always use conversation.get_remote() directly, but auto-accessors also translate hyphens to underscores, so you should be able to access prop-foo as rel.prop_foo()
[15:55] cory_fu, oh ok. i'll try the underscores first
[16:00] Hi all,
[16:05] ram_____: all of your containers are stuck in pending
[16:05] ram_____: can you paste the output of `juju status --format yaml`
[16:20] Hi all, I have deployed Openstack Liberty with github openstack charms - branch 16.07/stable - in HA. After deployment I am hitting one issue with the Nova cloud controller.
=== frankban is now known as frankban|afk
[16:21] Is this the right place to ask questions regarding openstack charms?
[16:24] sunny: yeah, ask away!
[16:24] marcoceppi: pasted output of `juju status --format yaml`: http://paste.openstack.org/show/563096/
[16:25] ram_____: "Failed to get device attributes: no such file or directory" that's an interesting error
=== mup_ is now known as mup
[16:30] I have an HA deployment of Liberty with Openstack charms branch 16.07/stable. After the deployment I am hitting an issue which I think is with the HA Nova cloud controller (NCC). My compute service (nova service-list) is flapping, meaning UP/DOWN, and that depends on which of my HA units of Nova cloud controller is up. If the request goes to unit NCC/0 (when this unit is UP) then it says the nova-compute state is UP, but when NCC/1 services are UP then it reports as DOWN
[16:32] This is causing VM spin up to fail as it complains "no valid host found" (at times when the nova-compute state is down). Can you please point me to why I am seeing that issue?
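
A quick illustration of the two access patterns cory_fu describes for cholcombe above. The interface class is made up; the real behavior is that charms.reactive auto-accessors map hyphens to underscores, while conversation.get_remote() takes the literal key.

    # illustrative charms.reactive interface showing both ways to read a
    # remote value whose key contains dashes; the class itself is hypothetical
    from charms.reactive import RelationBase, scopes

    class CephClient(RelationBase):
        scope = scopes.GLOBAL
        auto_accessors = ['prop-foo']   # generates a prop_foo() method

        def prop(self):
            conv = self.conversation()
            via_accessor = self.prop_foo()                 # hyphens -> underscores
            via_get_remote = conv.get_remote('prop-foo')   # literal key
            return via_accessor or via_get_remote
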
[16:36] jcastro: http://paste.ubuntu.com/23085632/ take a look at this link
[16:36] I'm not an openstack charmer but one of them should be able to take a look
[16:37] cargonza: any of you fellas around to take a look?
[16:37] Thanks a lot, and please let me know if you guys need any other details as well.
[16:52] Hi, after I bootstrapped juju, can I ssh to the juju controller?
[16:52] bdx: Hey. Did we discuss changing the license of the puppet layer to Apache or similar instead of AGPL? https://github.com/jamesbeedy/layer-puppet-agent/blob/master/LICENSE
[16:52] catbus1: juju ssh -m controller 0
[16:54] cory_fu, kjackal: do you have any updates to last week's review queue doc? Just realized that I never sent it out, but I still only see my stuff in it.
[16:55] petevg: I do not
[16:55] petevg: I got caught up in other things (I think working on the new RQ) and didn't get the charm I was looking at finished
[16:58] jhobbs: thank you
[17:30] hi sunny, can you hop on #openstack-charms? also, first questions will be: can you pastebin a juju status output? and are there 3 units of each service which is in HA?
[17:57] Good evening :)
[17:58] If someone can point me in the right direction on the following error that would be awesome!
[17:58] juju add-relation neutron-gateway mysql
[17:58] ERROR no relations found
=== jesse__ is now known as Randleman
[18:01] welp, hard to answer when you leave :(
[18:03] i didn't
[18:03] changed my nick :D
[18:03] lazyPower:
[18:03] ah
[18:04] Randleman - neutron-gateway doesn't implement a mysql relation
[18:04] https://jujucharms.com/neutron-gateway/
[18:04] the listed relations are on the right side of the store listing above the file list
[18:04] it only implements hacluster, and neutron-plugin
[18:04] so, i have an old canonical workbook?
[18:05] beisner thedac - do we know if neutron-gateway had, at one time, a mysql relation?
[18:05] Randleman - sorry, i'm not an openstack charmer so i'm not terribly familiar with the history of the charms
[18:06] alright, well thanks anyway :) now i can at least continue my deployment.
[18:27] lazyPower, it did, prior to the 16.04 charm versions. https://github.com/openstack/charm-neutron-gateway/commit/00f0edc70d68ce846db928ec2304d79fc6d1a5ae
[18:28] Randleman - ah, seems like that's the case. The latest revisions of the charms changed, so there are a few options: use the older charms, or see if there's newer documentation
[18:28] beisner - thanks for taking a look
[18:29] yw lazyPower
[18:31] lazyPower, Randleman - it looks like the neutron-gateway readme didn't get a necessary update on that. i'll be proposing a readme change shortly. tldr; it's now safe to just not relate neutron-gateway to the database, as db ops now happen via rpc.
[18:32] i'd recommend using the latest stable charm release
[18:54] cory_fu, kwmonroe: are either of you available to jump into the hangout? I want to point at something and see if it makes sense to you.
[18:55] yup petevg, omw
[18:55] thx :-)
[19:08] thanks beisner
[19:09] Got another issue :D yay
[19:09] i deployed the neutron-gateway charm and it's up and running
[19:09] except for the fact that i can't see it anywhere in the openstack services list.
[19:09] Nor is there a neutron user created...
[19:10] all the other services are running fine.
[19:13] it looks like the environment doesn't know about the existence of neutron-gateway
[19:16] hi Randleman - please have a look at this reference bundle and its relations to check against your neutron* relations and config options: https://jujucharms.com/openstack-base/
[19:16] https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml
=== mup_ is now known as mup
[19:22] Thanks beisner - weird, the bundle file shows me that neutron-gateway has a relation with mysql
[19:23] But that shouldn't matter.
[19:23] I got all the relations it needs... but the neutron/network doesn't show up anywhere.
[19:29] This could be a thing..
[19:30] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 32 seconds.
[19:32] also not very nice
[19:32] l3-agent cannot contact neutron server to retrieve service plugins enabled.
[19:45] hi Randleman - i don't see that there is a shared-db neutron-gateway relation in the current bundle. neutron-api, yes.
[19:53] hi marcoceppi - do you know when the https://github.com/juju/charm-tools/issues/220 fixes will hit pypi?
[19:55] tinwood, looks like that merged into master ~1hr ago. you could confirm manually by using git-foo for charm-tools master in requirements as a check.
[19:56] beisner, yes, I'll do that to start with. I've got a review up for interface-keystone with unit tests, WIP at the moment, too.
[19:57] tinwood, feel free to temporarily flip charm-tools to master in that gerrit review just to exercise it :)
[19:57] beisner, I think we also need to do some project-foo on it too, to enable a testing gate - it's only got pep8. I'll take a look at that too.
[19:59] tinwood, oh yes, likely so.
[20:00] beisner: it's in the snap in the edge channel ;)
[20:00] beisner, np, will do. going back to #openstack-charms now.
[20:01] marcoceppi, test runners live on trusty (!snap) until jenkins-slave gains xenial foo.
[20:01] marcoceppi, otherwise :cat2:
[20:01] :)
[20:02] beisner: it's not a patch release, it'll be a 2.2, which isn't scheduled until October
[20:02] beisner: but, I'm sure we can drop 2.2 sooner
=== mup_ is now known as mup
[20:12] marcoceppi, ok. it does seem like a legit bugfix patch, as the existing ignore logic is unusable in that it makes ignores from any 1 layer apply to all layers globally.
[20:14] lazyPower: i'm looking at https://github.com/juju-solutions/charmbox/issues/37, but local builds are working fine for me (docker build -t charmbox .). how can i reproduce the env causing failures on docker hub? (https://hub.docker.com/r/jujusolutions/charmbox/builds/bwfmghxnj8xbj85fptqnhw9/)
[20:18] kwmonroe i hope you're priming your liver
[20:18] :)
=== redelmann_wfh is now known as rudi_brb
=== mup_ is now known as mup
[20:40] hi all, i'm new to openstack, but i'm very interested in learning. i have a ubuntu maas setup with a controller and 4 nodes deployed, but i don't know what's next... and i can't find any good documentation. can someone tell me how to find good documentation on getting juju installed correctly?
[20:46] tls-peeps: https://gist.github.com/jamesbeedy/c20d91bd0087b32dbc0aa0956cde5ed8
[20:46] does that^ look legit?
[20:48] lazyPower, mbruzek, ^^
[20:49] I'm getting this error -> http://paste.ubuntu.com/23086229/
[20:49] looking
[20:52] bdx: hrmm that is strange
[20:53] right
[20:55] here is all of feed.py, shouldn't really matter though
[20:55] https://gist.github.com/jamesbeedy/4ce0224642ae11df473771b83c5e3506
[20:55] https://github.com/juju-solutions/layer-tls/blob/master/lib/tlslib.py#L60
[20:56] bdx That tells me that your cred is not there
[20:57] mbruzek: http://paste.ubuntu.com/23086275/
[20:57] bdx: can you do an ls /var/lib/juju/agents/unit-feed-14/charm/easy-rsa/easyrsa3/pki/private/
[20:58] empty
[20:58] bdx: I need more context on how this is deployed. Are there 13 other feed peers?
[20:58] bdx: run "is-leader"
[20:59] mbruzek: lol, no. I've been iterating
[20:59] there's only one
[20:59] OK so that must be the leader.
[20:59] is-leader returns 'true'
[21:00] Can you gist a "tree" command in /var/lib/juju/agents/unit-feed-14/charm/easy-rsa/easyrsa3
[21:00] bdx: So it has been a while since I used the tls layer. I remember the leader is the CA and the signer
[21:01] bdx: so maybe there is another error earlier on?
[21:02] http://paste.ubuntu.com/23086284/
[21:03] totally .. I think I should try a barebones top layer that includes tls and just simply writes out the keys, so I can isolate that as the issue
[21:04] bdx it looks to me that you don't have a CA, unless it did not show the stuff in the private directory.
[21:04] yeah, I def don't
[21:04] In the log are there any earlier errors?
[21:05] not that I can see -> http://paste.ubuntu.com/23086294/
[21:06] oooo
[21:06] line 2124
[21:06] Yeah
[21:08] I wonder if it has something to do with nginx-passenger
[21:08] or the phusion repo being enabled
[21:08] It looks like you get an error there on the cnf file, have not seen that one before.
[21:09] For the latest uses of the tls layer, check out this https://github.com/mbruzek/layer-k8s/blob/master/reactive/k8s.py#L98
[21:09] thanks
[21:09] I've thoroughly looked over that though
[21:09] lol
[21:09] I'm doing nothing different
[21:09] I only use the user/password parameters when you want non-root
[21:09] oooh
[21:10] tru
[21:10] But yeah, other than that
=== natefinch is now known as natefinch-afk
[21:10] bdx as you suggested, build a simple layer with just tls, and if you find that it is a bug in my code please create an issue against layer-tls and I will fix it asap
[21:11] totally, thanks for your insight here
[21:11] bdx: but I totally think the earlier error is giving you problems down the line
[21:11] totally
[21:12] But again, I have not seen that error before. The tree output shows the file exists, but the error says it is not there.
[21:12] I don't know what is going on
[21:13] alright
[21:13] thx
[21:16] bdx: Chuck is using the tls layer in swarm https://github.com/juju-solutions/layer-swarm/blob/master/reactive/swarm.py#L219
[21:17] But it looks like you are using it correctly
[21:17] so I don't know, I suspect those earlier errors. If that cnf file is not there or readable I guess that would be a problem.
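
For reference alongside the layer-k8s and layer-swarm examples mbruzek links above, a rough sketch of consuming the tls layer from a top layer. The reactive state name and the tlslib helper signatures are recalled from that 2016-era layer and may not match newer versions; the paths and naming are made up, so treat this as illustrative only.

    # illustrative top-layer handler using layer-tls, modeled loosely on
    # the layer-k8s usage linked above; '/srv/myapp' is a placeholder
    from charms.reactive import when
    from tlslib import server_cert, server_key  # helpers from layer-tls's lib/tlslib.py

    @when('tls.server.certificate available')
    def write_certs():
        # copy the unit's server certificate and key to where the workload
        # expects them; passing None lets tlslib use its default source
        server_cert(None, '/srv/myapp/server.crt', user='root', group='root')
        server_key(None, '/srv/myapp/server.key', user='root', group='root')
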