[00:36] nicbet: ack
=== skr is now known as Guest61072
[04:32] is there a way to tell juju to stop asking for Ubuntu SSO creds?
[04:33] (during deploy that is)
[04:41] publish your charm
[04:59] if I have a subordinate charm on a service with multiple units, so each service lists my subordinate charm under it, can the subordinate charm get a list of all the units in the service? seems like it only sees itself in related_units
[06:07] spaok: right, it can only see its peers
[06:07] spaok: what are you trying to get?
[06:36] marcoceppi: I was thinking I needed to build a list of units in the charm to pass to haproxy, but I think from what I read about the haproxy charm it will union all the units that join with the same service_name, so I might be ok
[06:37] though I really wish the haproxy charm had a python example of how to send it the yaml structure it wants
[06:39] i can't seem to figure out how to set the service structure it wants
=== frankban|afk is now known as frankban
[07:39] cholcombe: Yes. It needs to serialize the data so it can detect changes, and JSON was chosen as the format.
[08:15] is relation_set local, or does it send data to the related unit?
[08:18] oh, interesting
=== ant_ is now known as Guest48995
[08:31] spaok: relation_set sets the data on the local end (the only place you can set data), which is visible to the remote units.
[08:32] spaok: A unit doesn't 'send' data on a relation. It sets the data on its local end and informs other units that it has changed.
[10:09] hi all
[10:10] Has anyone here tried using juju 2.0 to manage deployments on Digital Ocean?
[10:10] or any version of juju really...
[10:22] ping...
[12:10] is it possible to run charms that require authentication with bundletester?
[12:11] and/or how can i run bundletester/amulet tests on charms that use resources?
[12:39] #juju
[12:41] facing an issue while testing Glance using Amulet. Glance is loaded locally, and when i execute juju status the state does not change; it stays at: glance: charm: local:xenial/glance-150 exposed: false service-status: current: unknown message: Waiting for agent initialization to finish since: 06 Oct 2016 23:12:45+05:30 relations: amqp: - rabbitmq-server cluster: - glance
[12:42] facing an issue while testing Glance using Amulet. Glance is loaded locally, and when i execute juju status the state does not change; it stays at "Waiting for agent initialization to finish". if i change the series to trusty it works fine
[12:43] Please help
[12:45] facing an issue while testing Glance using Amulet. Glance is loaded locally, and when i execute juju status the state does not change; it stays at "Waiting for agent initialization to finish". if i change the series to trusty it works fine
=== scuttle|afk is now known as scuttlemonkey
[13:53] larrymi_ what version of VMware are you running? I just can't get this to work properly. I've tried juju 1.25-6 overnight, as well as using trusty images. always the same result, with the public key not being injected properly and bootstrap failing.
[14:14] bradm: did you ever get a chance to move the bip stuff to its own namespace? If so, would you go and close out the old reviews for it? (one of them is here: https://code.launchpad.net/~josvaz/charms/trusty/bip/client_side_ssl-with_helper-lp1604894/+merge/301802)
[14:16] nicbet: I'm also using 6.0 3620759 for ESXi and 3644788 for the vCenter. I suspect it's a different issue but you'd need to be able to get to the logs. If I had to guess, I would guess that cloud-init is not able to get to a resource (perhaps blocked by the firewall)
[14:17] larrymi_ I'll have to do shenanigans and mount the failed bootstrap VM's hard drive to a different vm to access the logs... let me see
[14:18] nicbet: ok
[14:20] larrymi_ would you mind sharing your redacted custom cloud yaml for vsphere? maybe I'm missing a config entry.
[14:22] nicbet: vsphere:
[14:22]   type: vsphere
[14:22]   auth-types: [userpass]
[14:22]   endpoint: **.***.*.***
[14:22]   regions:
[14:22]     dc0: {}
[14:23] pretty much what I have, for me dc0 is replaced with our datacenter name
[15:06] larrymi_ : cloud-init.log on the mounted drive states that neither DataSourceOVFNet nor DataSourceOVF could load any data. Notably this line appears too: ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
[15:07] larrymi_ : did you configure something special on the vmware itself to enable OVF customization?
[15:09] hi kwmonroe, we are charming a websphere liberty charm for Z. This charm provides httpport, httpsport, and hostname. we thought to make use of the http interface https://github.com/juju-solutions/interface-http but we noticed the http interface will provide only httpport and hostname. So, we want to know, do we need to write a new interface?
[15:19] nicbet: I didn't have to configure anything... just the export prior to bootstrap that I mentioned earlier, but it won't bootstrap at all without that.
[15:19] nicbet: interesting that it's disabled.. I wonder what it's keying on.
[15:21] larrymi_ the cloud image has the open-vm-tools package installed. upon boot they are started, i see that from the logs. Then cloud-init tries all the different data sources, like the EC2 store, config ISO in CDROM, etc. one of them is OVFDatasource, which uses vmware tools to grab the info from the ovf template configuration
[15:21] larrymi_ at that point it's not given anything, and logs the above line about guest customization being disabled
[15:23] larrymi_ dumb question ... are you running juju against a vCenter or a vSphere ESXi?
[15:24] nicbet: a vCenter
[15:24] nicbet: which log are you looking at?
[15:25] larrymi_ /var/log/cloud-init.log and /var/log/vmware-vmsvc.log together with https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceOVF.py
[15:28] nicbet: I do have the same Oct 5 19:57:55 ubuntu [CLOUDINIT] DataSourceOVF.py[DEBUG]: Customization for VMware platform is disabled.
[15:28] nicbet: can probably be ignored
[15:29] larrymi_ do you find a line that says it loaded cloud-init data from the OVF source?
[15:30] nicbet: yes, it then goes on to Oct 5 19:57:54 ubuntu [CLOUDINIT] __init__.py[DEBUG]: Seeing if we can get any data from
[15:31] larrymi_ grep 'data found' cloud-init.log prints all lines as 'no local data found from ***' where *** is the DataSource it tried
[15:33] https://www.youtube.com/watch?v=lm64uOErZ8w
[15:34] nicbet: yeah they're there
[15:34] Oct 5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: finish: init-local/search-NoCloud: SUCCESS: no local data found from DataSourceNoCloud
[15:34] Oct 5 19:57:55 ubuntu [CLOUDINIT] handlers.py[DEBUG]: start: init-local/search-ConfigDrive: searching for local data from DataSourceConfigDrive
[15:34] larrymi_ is there any that says data was found?
[15:40] nicbet: the logs don't say data was found specifically, but they seem to indicate that it only fails to get the data locally.
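Looping back to the unanswered amulet/bundletester questions earlier in the log ([12:10]–[12:45]): a minimal amulet smoke test for a locally built charm usually looks something like the sketch below. The charm path, the relation endpoint, and the glance-api service poke are placeholders rather than anything confirmed in the channel, and the timeout is only an example.

```python
#!/usr/bin/env python3
# Minimal amulet smoke-test sketch (charm path and relation names are placeholders).
import amulet

d = amulet.Deployment(series='xenial')
d.add('glance', charm='/path/to/built/glance')   # hypothetical local build path
d.add('rabbitmq-server')
d.relate('glance:amqp', 'rabbitmq-server:amqp')

try:
    # setup() blocks until the agents settle or the timeout expires; a charm
    # stuck at "Waiting for agent initialization to finish" surfaces here.
    d.setup(timeout=900)
    d.sentry.wait()
except amulet.helpers.TimeoutError:
    amulet.raise_status(amulet.SKIP, msg='Environment was not stood up in time')

# Poke the deployed unit to confirm something actually came up
# (the service name checked here is a guess, adjust for the charm under test).
unit = d.sentry['glance'][0]
output, code = unit.run('service glance-api status')
print(output, code)
```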
[15:45] larrymi_ I have a hunch that this only works with vCenter
[15:54] shruthima: i wouldn't write a new interface, just open an issue and/or provide a pull request to include an https port here: https://github.com/juju-solutions/interface-http
[15:55] ok sure kwmonroe
[15:59] kwmonroe: May i know when the xenial series of the current IBM-IM charm will be pulled back? so we can push the ibm-im for Z
[16:00] shruthima: i'll try to complete that before my EOD, so roughly 7 hours from now.
[16:01] oh k thanku
[16:11] nicbet: yes, could be. I haven't tried with an ESXi host as the endpoint
[16:14] kwmonroe: i have edited the http interface provides.py http://paste.ubuntu.com/23285185/ and requires.py http://paste.ubuntu.com/23285180/ locally and tested it is working fine. So is there any way to create a merge proposal for the http interface?
[16:16] kwmonroe: i have seen https://github.com/juju-solutions/interface-http/issues/5 a similar issue is opened for the http interface also
[16:48] shruthima: issue #5 would allow you to define multiple http interfaces in your charm's metadata and react to them differently (with different ports). if that would solve your needs, you can just add a comment to issue #5 saying it would be useful to you as well. if you instead require multiple ports for a single http interface, i think that's a separate issue. you could run "git diff" in the directory where you made your
[16:48] edits, paste the output at http://paste.ubuntu.com, and then include that paste link in a new issue here: https://github.com/juju-solutions/interface-http/issues.
[16:50] ok thanks kevin
[17:01] could someone take a look at this pastebin and tell me what next steps to debugging might be? http://pastebin.com/7MHZV4e3
[17:03] this, specifically: unit-ceph-1: 12:57:00 INFO unit.ceph/1.update-status admin_socket: exception getting command descriptions: [Errno 111] Connection refused
=== frankban is now known as frankban|afk
[17:21] stub: I guess what I'm confused on is, when I try to set the services structure for when haproxy joins: if I do hookenv.relation_set(relation_id='somerelid:1', service_yaml) then the joined/changed hook runs but haproxy doesn't get the services yaml; however if I do hookenv.relation_set(relation_id=None, service_yaml) then it does work, and haproxy builds the config right, but after a bit when update-status runs it errors because relation_id isn't set
[17:23] spaok: Specifying None for the relation id means use the JUJU_RELATION_ID environment variable, which is only set in relation hooks. Specifying an explicit relation id does the same thing, but will work in all hooks. Provided you used the correct relation id.
[17:25] spaok: You can test this using "juju run foo/0 'relation-set -r somerelid:1 foo=bar'" if you want.
[17:28] (juju run --unit foo/0 now it seems, with 2.0)
[17:28] "juju run --unit foo/0 'relation-ids somerelname'" to list the relation ids in play
[17:29] how's it going everyone?
[17:29] do I have the capability to bootstrap to a specific subnet?
[17:29] using the aws provider
[17:31] any way to specify "charm build" deps? (for instance in my wheelhouse I have cffi, which when the charm is built depends on having libffi-dev installed. I have this in the README, but wondering if that was possible to enforce programmatically)
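On shruthima's interface-http question above ([15:09], [16:14]): interface layers generally expose extra fields by adding them to the relation data the provides side publishes. The sketch below only illustrates that pattern; it is not the contents of the pastebins or of the real interface-http code, and the https_port field name and the configure() signature are assumptions.

```python
# provides.py -- illustrative sketch of an http-style interface layer
# publishing an extra https port (field and parameter names are assumptions).
from charms.reactive import RelationBase, hook, scopes


class HttpProvides(RelationBase):
    scope = scopes.GLOBAL

    @hook('{provides:http}-relation-{joined,changed}')
    def changed(self):
        self.set_state('{relation_name}.available')

    @hook('{provides:http}-relation-{broken,departed}')
    def broken(self):
        self.remove_state('{relation_name}.available')

    def configure(self, port, hostname=None, https_port=None):
        # Everything set here becomes relation data the remote side can read.
        relation_info = {'hostname': hostname, 'port': port}
        if https_port is not None:
            relation_info['https_port'] = https_port
        self.set_remote(**relation_info)
```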
[17:31] thanks stub, I'm fairly certain I have the right relid, but what I see from the haproxy side is only the private_ip and unit id and something else; with None, I get the services yaml. it's very confusing
[17:31] I'll look at trying that command, I was wondering how to run the relation-set command
[17:32] lutostag: layers.yaml?
[17:32] not 100% on that
[17:34] spaok: yeah, I have deps for install time unit-side like http://pastebin.ubuntu.com/23285512/, but not sure how to do it "charm build" side
=== alexisb is now known as alexisb-afk
=== fginther` is now known as fginther
[17:37] ah gotcha, not sure
[17:38] think I'll go with a wrapper around charm build in the top-level dir, don't need to turn charm into pip/apt/npm/snap anyways
[17:53] hey icey - can you help holocron with his ceph connection refused? http://pastebin.com/7MHZV4e3
[17:57] Something in dpkg giving an Input/output error
[18:00] lutostag: seems odd that an entry in your wheelhouse.txt would require a -dev to be installed for charm build
[18:02] kwmonroe: doesn't it though, it builds it locally. by compiling stuff, I guess there are c-extensions in the python package itself
[18:02] lxml is another example
[18:02] cory_fu_: does charm build have runtime reqs dependent on the wheelhouse?
[18:02] cory_fu_: (see lutostag's issue above)
[18:03] (but I can get around that one, cause we have that deb'd up)
[18:04] kwmonroe icey going to try this http://askubuntu.com/questions/139377/unable-to-install-any-updates-through-update-manager-apt-get-upgrade
[18:04] lutostag, kwmonroe: You shouldn't need a -dev package for charm-build because it *should* be installing source only and building them on the deployed unit, since we don't know the arch ahead of time.
[18:05] ah, so I'll need these -dev's on the unit side, good to know, but interesting.
[18:05] holocron: that doesn't sound like a ceph problem then... got a log with the dpkg error?
[18:06] lutostag: What's the repo for cffi so I can try building it?
[18:06] cory_fu_: my charm or the upstream python package?
[18:07] lutostag: The charm
[18:07] cory_fu_: lemme push...
[18:07] Sorry, I misread cffi as the charm name
[18:08] kwmonroe: similar messages to this: http://pastebin.com/XZ0uFfz8
[18:08] I've had make and build-essential give the error, right now it's libdpkg-perl
[18:10] holocron: when do you see that? on charm deploy?
[18:10] cory_fu_: bzr branch lp:~lutostag/oil-ci/charm-weebl+weasyprint-dep
[18:11] kwmonroe no, the charm deployed fine yesterday, it came in as part of the openstack-on-lxd bundle. i was able to create a cinder volume even.. logged in today and saw that error in my first pastebin
[18:11] (and I'll need to add the deps as explained too)
[18:11] i logged into the unit and did an apt-get clean and apt-get update
[18:11] now it's failing with this
[18:12] is it common practice to scale out another unit and then tear down the breaking one?
[18:12] like, should i just make that my default practice or should i try to fix this unit?
[18:12] holocron: common practice is for the breaking unit not to break
[18:13] especially on some nonsense apt operation
[18:13] :P yeah that's the ideal
[18:13] is that unit perhaps out of disk space holocron?
[18:13] or inodes? (df -h && df -i)
[18:13] nope, lots of space
[18:14] lots of inodes
[18:16] holocron: anything in dmesg, /var/log/syslog, or /var/log/apt/* on that container that would help explain the dpkg failure?
[18:18] kwmonroe probably, sorry i've got to jump to another thing now but i'll try to get back
[18:19] np holocron
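For spaok's haproxy question above ([17:21]–[17:25]): stub's advice amounts to passing an explicit relation id so the call also works outside relation hooks (e.g. from update-status). A rough charmhelpers sketch follows; the service and backend names and ports are invented, and the exact `services` schema is whatever the haproxy charm's README documents, so treat this as the shape of the call rather than a definitive structure.

```python
# Sketch: publish a haproxy-style `services` structure on every
# reverseproxy relation, from any hook (names/ports are made up).
import yaml
from charmhelpers.core import hookenv


def configure_reverseproxy():
    services = [{
        'service_name': 'my-app',          # hypothetical
        'service_host': '0.0.0.0',
        'service_port': 80,
        'servers': [
            ['my-app-0', hookenv.unit_private_ip(), 8080, 'check'],
        ],
    }]
    # relation_ids() makes this safe outside -joined/-changed hooks too,
    # which is exactly where relation_id=None (JUJU_RELATION_ID) falls over.
    for rid in hookenv.relation_ids('reverseproxy'):
        hookenv.relation_set(
            relation_id=rid,
            relation_settings={'services': yaml.safe_dump(services)},
        )
```

The juju run commands stub and lutostag mention are the quickest way to verify what actually landed on the relation from the haproxy side.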
[18:36] I am trying to remove my application in juju 2.0 but it is not working
[18:36] I put pdb.set_trace() in my code
[18:36] Not sure if it is because of that
[18:36] juju remove-application does not remove the application
[18:37] How do I now forcefully remove it?
[18:37] Any help is much appreciated
[18:38] is there a decorator for the update-status hook? or do I use @hook?
[18:40] It is stuck at the install hook where I put pdb
[18:40] I have the following decorator for the install hook, @hooks.hook()
[18:41] Siva can you remove the machine?
[18:41] Siva: juju resolved --no-retry # over and over till it's gone
[18:41] Siva: remove-application will first remove relations between your app and something else, then it will remove all units of your app, then it will remove the machine (if your app was the last unit on a machine)
[18:42] Siva: you're probably trapping the remove-relation portion of remove-application, which means you'll need to continue or break or whatever pdb takes to let it continue tearing down relations / units / machines
[18:43] Siva: The hook will likely never complete, so you either need to go in and kill it yourself or run 'juju remove-machine XXX --force'
[18:43] so lutostag's suggestion would work -- keep resolving the app with --no-retry to make your way through the app's lifecycle. or spaok's suggestion might be faster -- juju remove-machine X --force
[18:43] I work with containers, so I tend to do that mostly
[18:44] (and haven't we all left our Python debugger statements in a hook at some point)
[18:44] I removed the machine, it shows the status as 'stopped'
[18:44] takes a sec
[18:44] keep watching.. it'll go away
[18:45] also rerun the remove-application
[18:45] OK. @lutostag's suggestion did the trick
[18:45] I've noticed some application ghosts when I remove machines
[18:45] Now it is gone
[18:46] Thanks
[18:47] spaok: yes, it's perfectly valid to have an application deployed to no units. Makes sense when setting up subordinates, more dubious with normal charms.
[18:48] One thing I noticed is that after removing a machine (say 0/lxd/14 is removed), when you deploy again it creates the machine 0/lxd/15 rather than 14
[18:48] is this expected?
[18:49] ya
[18:49] same for units as well
[18:49] yup
[18:49] makes deployer scripts fun
[18:49] Why does it not reuse the freed ones so that it is in sequence?
[18:50] Siva: mainly because it makes things like logs more useful when the numbers are unique
[18:50] Siva: especially as everything is async
[18:51] Siva: so you can be sure any logs re: unit 15 are in fact about the unit that was destroyed at xx:xx
[18:51] OK
[18:52] It looks a bit weird in 'juju status' as the sequence is broken
[18:52] Siva: yea, understand, but for the best imho
[19:03] Siva: that's why I have scripts to destroy and recreate my MAAS/Juju 2.0 dev env, good to reset sometimes
[19:05] One question
[19:05] I have the following entry in the config.yaml file
[19:05] tor-type: type: string description: Always ovs default: ovs
[19:06] Now when I do: tor_type = config.get("tor_type"); print "SIVA: ", tor_type
[19:06] I expect it to print the default value 'ovs' as I have not set any value
[19:06] It prints nothing
[19:06] Is this expected?
[19:06] tor_type or tor-type?
[19:07] oops! my bad
[19:07] I put underscores in my config.yaml
[19:08] Now after making the change, I can just 'redeploy', right?
[19:08] you can test by editing the charm live if you wanted
[19:08] How do I do that?
[19:09] vi /var/lib/juju/agents/unit-charmname-id/charms/config.yaml
[19:09] kick jujud in the pants by
[19:09] ps aux | grep jujud-unit-charmname | grep -v grep | awk '{print $2}' | xargs kill -9
[19:09] should cause the charm to run
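On Siva's config question ([19:05]–[19:07]) and the earlier, unanswered update-status decorator question ([18:38]): charmhelpers looks options up with exactly the key declared in config.yaml (hyphens included) and returns the declared default when nothing has been set, and update-status is registered like any other classic hook. A small sketch, with the tor-type option taken from the discussion and the rest of the charm wiring purely illustrative:

```python
# Sketch: reading a config default and registering an update-status
# handler with charmhelpers (the surrounding charm wiring is illustrative).
import sys

from charmhelpers.core import hookenv

hooks = hookenv.Hooks()


@hooks.hook('install')
def install():
    # config() takes the key exactly as written in config.yaml, so an
    # option declared as 'tor-type:' is read as 'tor-type', not 'tor_type'.
    tor_type = hookenv.config('tor-type')   # -> 'ovs' if unset (the default)
    hookenv.log('tor-type is {}'.format(tor_type))


@hooks.hook('update-status')
def update_status():
    # There is no special decorator for update-status in the classic
    # Hooks() style; it is registered like any other hook name.
    hookenv.status_set('active', 'ready')


if __name__ == '__main__':
    hooks.execute(sys.argv)
```

In a reactive charm (spaok's [19:30] comment), calling `config('tor-type')` from charmhelpers.core.hookenv behaves the same way.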
[19:19] I modified the config.yaml and restarted jujud
[19:19] still prints None
=== alexisb-afk is now known as alexisb
[19:30] Siva: in my reactive charm I just use config('tor_type')
[19:30] not config.get
[19:30] not sure the diff
[19:56] I removed the app and deployed it again
[19:56] config.get works
[19:57] I can try config('tor_type') and see how it goes
[20:24] kwmonroe: Comments on https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50
[20:25] hey holocron kwmonroe just seeing the messages
[20:25] to me, the line saying admin_socket: connection refused is more interesting; it looks like maybe the ceph monitor is down?
[20:27] icey hey, i ended up tearing down the controller. I'm going to redeploy now and i'll drop a line in here if it happens again
[20:28] holocron: great; kwmonroe is right though, the expectation is that it wouldn't break
[20:30] @spaok, live config.yaml change testing does not work for me
=== rcj` is now known as rcj
=== rcj is now known as Guest46240
=== Guest46240 is now known as rcj
[20:40] cory_fu_: i like the callback idea in https://github.com/juju-solutions/layer-apache-bigtop-base/pull/50. but i'm not gonna get to that by tomorrow, and i really want the base hadoop charms refreshed (which depend on the pr as is). you ok if i open an issue to do it better in the future, but merge for now?
[20:41] cory_fu_: it doesn't matter what you say, mind you. petevg already +1'd it. just trying to fake earnest consideration.
[20:41] ha
[20:42] kwmonroe: I'm also +1 on it as-is, but I'd like to reduce boilerplate where we can. But we can go back and refactor it later
[20:44] kwmonroe: Issue opened and PR merged
[20:44] kwmonroe: And I'm good on the other commits you made for the xenial updates
[20:48] dag nab cory_fu_! you're fast. i was still pontificating on the title of a new issue. thanks!!
[20:50] and thanks for the xenial +1. i'll squash, pr, and set the clock for 24 hours till i can push ;)
[20:50] just think, you'll be swimming when the upstream charms get refreshed.
[20:50] nice knowing you
[20:58] before you go cory_fu_.. did you see mark's note to the juju ML about blocked status? seems the slave units are reporting "blocked" even when a relation is present. i'm pretty sure it's because of this: https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
[20:59] as in, there *is* a spec mismatch until NN/RM are ready. what's the harm in setting status with a spec mismatch?
[21:01] afaict, they'll report "waiting for ready", which seems ok to me. unless we want to add a separate block specifically dealing with spec mismatches, which would be some weird state between waiting and blocked to see if the spec ever does eventually match.
[21:08] kwmonroe: I think the problem is one of when hooks are triggered. I think that what's happening is that the relation is established and reported, but the hook doesn't get called on both sides right away, leaving one side reporting "blocked" even though the relation is there, simply because it hasn't been informed of it yet
[21:11] i think i don't agree with ya cory_fu_... NN will send DN info early (https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L27). but check out what's missing... https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L95
[21:12] kwmonroe: Doesn't matter. The waiting vs blocked status only depends on the .joined state: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L36
[21:12] spoiler alert cory_fu_: it's clustername. we don't send that until NN is all the way up, so the specmatch will be false: https://github.com/juju-solutions/bigtop/blob/bug/BIGTOP-2548/xenial-charm-refresh/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py#L142
[21:13] cory_fu_: you crazy: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py#L22
[21:13] kwmonroe: Ooh, I see.
[21:13] We should remove that @when_none line. There's no reason for it
[21:14] great idea cory_fu_! if only i had it 15 minutes ago.
[21:14] :)
[21:15] petevg: you mentioned you also saw charms blocked on missing relations (maybe zookeeper?). could it be you saw the slaves blocked instead?
[21:16] Aaaargh guys, would you _please_ stop making gratuitous changes in every Juju 2 beta or rc?
[21:17] The latest one that has just bitten my testing is the addition of a * after the unit name in juju status output.
[21:17] Before that it was 'juju set-config' being changed to 'juju config'
[21:17] This is getting old....
[21:23] kwmonroe: Yes. I think that it was probably just the hadoop slave.
[21:23] neiljerram: apologies for the headaches! but you should see much more stability in the RCs. at least for me, the api has been pretty consistent with rc1/rc2. rick_h_ do you know if there are significant api/output changes in the queue from now to GA?
[21:24] thx petevg - fingers crossed that was the only outlier
[21:24] np
[21:24] fingers crossed.
[21:25] kwmonroe, tbh I'm afraid I have to say that I think things have been _less_ stable since the transition from -beta to -rc. My guess is that there are changes that people have been thinking they should do for a while, and only now that the GA is really looking likely do they think they should get them out :-)
[21:27] kwmonroe, but don't worry, I've had my moan now...
[21:27] heh neiljerram - fair enough :)
[21:28] Do you happen to know what the new * means? Should I expect to see it on every juju status line?
[21:29] neiljerram: i was just about to ask you the same thing... i haven't seen the '*'
[21:29] neiljerram: you on rc1, 2, or 3?
[21:30] kwmonroe, rc3 now; here's an excerpt from a test I currently have running:
[21:30] UNIT                WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS   PORTS  MESSAGE
[21:30] calico-devstack/0*  unknown   idle   0        104.197.123.208
[21:30]
[21:30] MACHINE  STATE    DNS              INS-ID         SERIES  AZ
[21:30] 0        started  104.197.123.208  juju-0f506f-0  trusty  us-central1-a
[21:32] kwmonroe, just doing another deployment with more units, to get more data
[21:32] hmph... neiljerram i wonder if that's an attempt to truncate the unit name to a certain length. doesn't make sense in your case, but i could see 'really-long-unit-name/0' being truncated to 'really-long-u*' to keep the status columns sane. just a guess neiljerram
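The fix kwmonroe and cory_fu_ converge on above (dropping the @when_none guard so a slave can still report status before the remote side is fully ready) is a common reactive pattern. The sketch below is a generic illustration of that shape, not the actual bigtop hadoop_status.py code; the state names, relation name, and messages are all placeholders.

```python
# Generic sketch of a reactive status handler like the one discussed
# for layer-hadoop-slave (states and messages here are illustrative).
from charms.reactive import when, when_not
from charmhelpers.core.hookenv import status_set


@when_not('namenode.joined')
def blocked_without_namenode():
    # No relation at all: the operator has to add one, so 'blocked' fits.
    status_set('blocked', 'missing required namenode relation')


@when('namenode.joined')
@when_not('namenode.ready')
def waiting_for_namenode():
    # Relation exists but the remote side hasn't finished coming up (e.g. it
    # hasn't published its clustername yet); 'waiting' is the honest status,
    # and no extra spec-mismatch guard is needed to report it.
    status_set('waiting', 'waiting for namenode to become ready')


@when('namenode.joined', 'namenode.ready')
def ready():
    status_set('active', 'ready')
```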
[21:33] and at any rate neiljerram, if you're scraping 'juju status', you might want to consider scraping 'juju status --format=tabular', which might be more consistent.
[21:33] kwmonroe, BTW the reason this matters for my automated testing is that I have some quite tricky code that is trying to determine when the deployment as a whole is really ready.
[21:33] ugh, not right
[21:34] sorry, i meant 'juju status --format=yaml', not tabular
[21:34] kwmonroe, yes, I suppose that would probably be better
[21:36] kwmonroe, Ah, it seems that * means 'no longer waiting for machine'
[21:44] neiljerram: you sure? i just went to rc3 and deployed ubuntu... i still see:
[21:44] UNIT      WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
[21:44] ubuntu/0  waiting   allocating  0        54.153.95.194          waiting for machine
[21:44] kwmonroe, exactly - because the machine hasn't been started yet
[21:44] oh, nm neiljerram, i should wait longer.. you said the '*' is....
[21:44] right
[21:45] i gotta say, intently watching juju status is right up there with the birth of my first child
[21:51] neiljerram: i can't get the '*' after the machine is ready, nor using a super long unit name. i'm not sure where that's coming from.
[21:51] UNIT                       WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
[21:51] really-long-ubuntu-name/0  maintenance  executing  1        54.153.97.184          (install) installing charm software
[21:51] ubuntu/0                   active       idle       0        54.153.95.194          ready
[21:52] kwmonroe, do you have rc3?
[21:52] neiljerram: i do... http://paste.ubuntu.com/23286448/
[21:55] kwmonroe, curious, I don't know then. I'm also using AWS, so it's not because we're using different clouds.
[21:56] neiljerram: i'm aws-west. if you're east, it could be signifying the hurricane coming to the east coast this weekend.
[21:56] kwmonroe, :-)
[21:56] rc3 backed by weather.com ;)
[21:57] neiljerram: care to /join #juju-dev? i'll ask the core devs where the '*' is coming from
[21:57] kwmonroe, sure, will do
[22:13] for anyone following along, the '*' denotes leadership
[22:29] i thought you were the leader kwmonroe
[22:29] that's kwmonroe* to you magicaltrout
[22:30] Texas' own Idi Amin
[22:30] magicaltrout: you still in southern california? or did you get back to the right side of the atlantic?
[22:31] hehe
[22:31] i'm back in the motherland for now
[22:31] been instructed to report to Washington DC on the 28th of November
[22:31] so not for long
[22:32] although i was hoping a nice sunny jaunt to ApacheCon EU was gonna be the last trip of the year
[22:34] must be hard being so popular magicaltrout
[22:34] trololol
[22:34] whatever
[22:34] kwmonroe: is cory_fu_ staying in your basement?
[22:35] magicaltrout: two things you should know about central Texas: 1) it's all bedrock; no basements. 2) cory_fu_ was a track lead at the summit; he stays at the Westin.
[22:37] Westin? and you lot dumped us at the Marriott?
[22:37] I need to upgrade
[22:37] get me a real community sponsor!
[22:37] apply for track lead in Ghent next year ;)
[22:38] comes with a bright orange shirt.. wearable anywhere!
[22:38] they did look nice....... sadly I'll be too drunk
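Back on neiljerram's problem of deciding when a deployment is really ready ([21:33], and the '*' that turned out to mean leadership at [22:13]): scraping the tabular output is fragile, and the machine-readable formats are easier to check from a test harness. A rough sketch follows; the key names are assumed from juju 2.x output and can vary between releases, so verify against your own `juju status --format=json` before relying on them.

```python
# Sketch: poll `juju status --format=json` instead of scraping the tabular
# output (key names assumed from juju 2.x; check your own output first).
import json
import subprocess
import time


def deployment_ready(model=None):
    cmd = ['juju', 'status', '--format=json']
    if model:
        cmd += ['-m', model]
    status = json.loads(subprocess.check_output(cmd).decode())
    for app in status.get('applications', {}).values():
        for unit in (app.get('units') or {}).values():
            workload = unit.get('workload-status', {}).get('current')
            # 'unknown' covers charms that never call status-set,
            # like the calico-devstack unit in the excerpt above.
            if workload not in ('active', 'unknown'):
                return False
    return True


if __name__ == '__main__':
    while not deployment_ready():
        time.sleep(10)
```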
[22:38] oh
[22:38] that never stopped you guys
[22:39] i could lead the "werid big data - container crossover"
[22:39] s/werid/weird
[22:40] pretty sure you're already leading that
[22:40] hehe
[22:40] i don't understand why mapreduce wasn't enough for you
[22:40] yeah i've been tapping up the mesos mailing list the last few days trying to figure out what needs to be done to get LXC support in their container stack
[22:41] everything can be solved with mapred. and if it can't, map it again.
[22:41] i'm not a C programmer though so it might take me a while unless my IDE-fu wins
[22:43] try emacs
[22:44] I use emacs actually kwmonroe :P
[22:44] just not for coding :)
[22:44] sure magicaltrout.. it's also good as a desktop environment and for playing solitaire.
[22:44] exactly see
[22:44] you know kwmonroe
[22:44] you know
[22:45] sorry.... kwmonroe*
[22:45] so magicaltrout*, how can we help you with apachecon seville? you've got some drill bit work i presume?
[22:47] yeah. Plan to do a bigtop & drill demo
[22:47] get some stuff in a bundle so willing volunteers can test etc
[22:47] if that latest RC changelog isn't a lie.... drill will even work in LXC, which is a bonus
[22:47] roger that magicaltrout.. i'll volunteer!
[22:48] for the first time ever, the Juju talk is the easier of the two. I'll knock something together next week and we can iterate over it
[22:49] we've got a month or so
[22:49] try and not leave it to the last minute for a change
[22:49] !remindme 1 month
[22:49] yeah, well for ApacheCon NA i was writing the talks on the plane
[22:49] so you know....
[22:49] how bad can it be?
[22:50] it could be as bad as a 6 out of 10. but no worse.
[22:51] thanks for the reassurance
[23:24] cmars: https://github.com/cmars/juju-charm-mattermost/pull/2/files
[23:25] bdx, why does it need nginx embedded?
[23:26] bdx, the systemd support is nice
[23:26] cmars: so I can give my users a clean fqdn with no ports hanging on the end
[23:27] cmars: plus, the mattermost docs suggest it
[23:27] bdx, ok, that makes sense
[23:27] bdx, is it worth exposing the mattermost-port at all then?
[23:28] might just leave it fixed and local only..
[23:28] ok
[23:29] bdx, also, let's remove trusty from the series in metadata
[23:29] I thought i did ... checking
[23:30] bdx, ah, you did, my bad
[23:32] cmars: there ya go
[23:32] how would I fix 2016-10-06 23:30:28 ERROR juju.worker.dependency engine.go:526 "leadership-tracker" manifold worker returned unexpected error: leadership failure: lease manager stopped... which daemon should I kick on the unit?
[23:33] 2.0 beta7 (can't tear down and redeploy for a little while unfortunately)
[23:36] bdx, thanks, i'll have to test this out, but i could publish it soon. probably need to update the resource as well
[23:39] cmars: totally, I was thinking of adding a tls-certificates interface, so if a user desired ssl, they could just relate to the easyrsa charm
[23:39] bdx, oooh nice!
[23:40] actually, I feel that functionality should be part of the nginx charm though
[23:40] bdx, do we have an LE layer yet?
[23:40] that'd be really nice for a private secure mattermost
[23:40] stokachu: ^^
[23:40] cmars, stokachu: https://jujucharms.com/u/containers/easyrsa/2
[23:40] LE layer?
[23:41] let's encrypt
[23:41] oooo
[23:42] cmars: there should be
[23:42] I know we have discussed it
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[23:57] lutostag: switch to the admin controller and ssh into machine 0
[23:57] lutostag: then just pkill jujud and it'll restart and pick back up
[23:57] bdx: sorry some extra context?
[23:57] ah i see nginx