[01:05] something is super fishy ....
[01:05] http://paste.ubuntu.com/23091222/
[01:05] shows 'feed/19'
[01:06] but ssh into the instance, and cat the .juju-charm file -> http://paste.ubuntu.com/23091226/
[01:07] shows 'local:xenial/feed-24'
[01:07] is there some kind of weird version mismatch going on here
[01:21] oooh my bad
[01:21] 19th deploy, 24th build
[03:00] lazyPower, mbruzek, stokachu: here's one for ya'
[03:00] lazyPower, mbruzek, stokachu: when I include layer-ruby and layer-tls, I get this error -> http://paste.ubuntu.com/23091435/
[03:01] lazyPower, mbruzek, stokachu: if I don't include layer-ruby, my certs are there and I don't get the error
[03:02] lazyPower, mbruzek, stokachu: I feel like the inclusion of layer-ruby is somehow preventing layer-tls from generating the certs
[03:02] lazyPower, mbruzek, stokachu: I've been stuck on this for a few days; can one of you build something with layer-ruby AND layer-tls and verify this for me?
[05:30] lazyPower, mbruzek, stokachu: figured it out -> https://github.com/battlemidget/juju-layer-ruby/pull/6/files
[05:30] lazyPower, mbruzek, stokachu: os.chdir was changing the directory context for everything
[05:30] needed the context manager
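The fix in the linked PR boils down to scoping the directory change. A minimal sketch of the pattern, assuming a helper named chdir and an illustrative build path (neither is necessarily what the PR itself uses):

    import os
    from contextlib import contextmanager

    @contextmanager
    def chdir(path):
        """Temporarily switch the working directory, restoring it on exit."""
        prev = os.getcwd()
        os.chdir(path)
        try:
            yield
        finally:
            os.chdir(prev)

    # The directory change is now scoped to this block instead of leaking
    # to the whole charm process (the leak is what broke layer-tls certs).
    with chdir('/tmp/ruby-build'):
        pass  # run build steps here; cwd is restored even on error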
=== frankban|afk is now known as frankban
[07:12] Hey guys, what did quantum-security-groups change into?
[07:12] ERROR unknown option "quantum-security-groups"
[08:54] quantum is neutron now
[09:00] Got that, but it throws me an error even when using neutron-security-groups in the nova-cloud-controller yaml
[09:00] though neutron-security-groups works in the neutron-api yaml
[09:00] so I guess nova-cloud-controller doesn't need that option anymore?
[09:00] as it has a relation to the neutron-api?
[10:16] it looks like juju overwrites /home/ubuntu/.ssh/authorized_keys - is that expected behaviour?
[10:16] (1.25)
[10:32] having some issues with rackspace auth https://pastebin.canonical.com/164061/
[10:33] do you know what the problem could be? the user & password work when using curl or python pyrax
[11:07] admcleod: Well done on the accepted presentation on openstack and bigdata!
[11:09] kjackal: thanks :)
[11:10] When/where are you presenting?
[11:13] OpenStack Summit Barcelona, October
[11:25] Nice!
=== freyes__ is now known as freyes
[12:55] kjackal admcleod - not sure if you two used charmbox, but I've just unblocked the builders. Apparently there's a diff in the shipped packages between FROM ubuntu on the hub and FROM ubuntu locally (which makes no sense)
[12:55] if you use charmbox/charmbox:devel and encounter any issues please lmk so I can address accordingly
[12:56] marcoceppi: Hi. Based on yesterday's discussion, I created a cinder-storageDriver charm using a shell script which passes external config data to the relation instead of to cinder.conf directly. This is the standard way. It is working fine for us. We want to certify and integrate our charm with Autopilot. But I have a question: how compatible is our cinder-storageDriver charm with Autopilot? Our conversation log: ht
[12:56] ok lazyPower, thank you. I haven't used charmbox recently
[12:58] marcoceppi: http://paste.openstack.org/show/563919/.
[13:08] Hi, can containers have FQDNs?
=== Anita is now known as Guest51400
[13:11] can containers have FQDNs?
[14:14] Hello cory_fu, running tests on cwr all day today, I seem to be hitting this behavior: http://pastebin.ubuntu.com/23093330/ have you seen this before?
[14:15] kjackal: I hadn't run into that before, but that makes sense. I didn't realize that jujuclient was using SIGALRM or that it didn't work in threads. That's annoying.
[14:16] I guess I'll have to pull out the threading logic and either make it multi-process or not parallel
[14:20] cory_fu: that's for resetting the environment. What about the error at the end, where the bundletester output does not have a "test"? Is this expected?
[14:21] kjackal: I'm pretty sure that's caused by the other exception. Because of the first exception, it doesn't get the output it's expecting
[14:22] ok, let me see where the threading is done
[15:02] cory_fu: just to confirm: removing the threads fixes the issue I was seeing
[15:03] kjackal: Are you saying that you confirmed that?
[15:03] cory_fu: yes.
[15:03] Ok, cool
[15:03] cory_fu: I removed the threads and it worked
[15:04] Yeah, I went with threads optimistically without much confidence that it was actually threadsafe. TBH, I'm not sure why it's not just using subprocess and the CLI for bundletester anyway. If we do that, threads would be fine
[15:08] let me try to see what happens if we use processes... it should work
[15:23] cory_fu: replacing threads with processes seems to work; let me get the observable kubernetes test plan to pass and I will update the PR to cwr
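A minimal sketch of why swapping threads for processes clears the error, with run_test as a hypothetical stand-in for the bundletester invocation (not actual cwr code): signal.signal and signal.alarm, which jujuclient relies on for timeouts, raise ValueError when called off the main thread, but each worker process gets its own main thread.

    import signal
    from concurrent.futures import ProcessPoolExecutor

    def run_test(bundle):
        # In a worker thread this raises
        # "ValueError: signal only works in main thread";
        # in a worker process this *is* the main thread, so it succeeds.
        signal.signal(signal.SIGALRM, lambda signum, frame: None)
        signal.alarm(30)  # arm a timeout, roughly as jujuclient does
        try:
            return 'result for %s' % bundle  # real test run would go here
        finally:
            signal.alarm(0)  # disarm

    if __name__ == '__main__':
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(run_test, ['bundle-a', 'bundle-b'])))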
[15:26] kwmonroe - moving you here, I hear you had questions about the hub builders?
[15:27] yeah lazyPower, related to https://github.com/juju-solutions/charmbox/issues/37, what do you make of the charmbox success in ci.containers? you reckon it was the inclusion of libssl-dev? any insight as to why docker hub wasn't able to build right?
[15:27] kwmonroe - I added libssl-dev to the install manifests
[15:27] so I still have no idea why it was failing; the only thing I can think is that the base images are different somehow
[15:28] hmph, roger that lazyPower. happy to see a new build.
[15:28] kwmonroe - we're pulling them into a CI solution instead of relying on the hub to do our builds as well
[15:28] ideally we'll get some smoke tests around the containers and actually verify they work
[15:29] yup, understood lazyPower. what's the mechanism between ci builds and docker hub? can I still docker pull jujusolutions/charmbox to get the latest from ci?
[15:29] little do you know, that's already been rolled over ;)
[15:29] as of about 7am this morning
[15:38] How do I get the fqdn of a container?
[15:44] Hi
[15:44] Anita_: does 'hostname -f' give you what you want?
[15:44] How do I get the FQDN of a container, or is there any way to set the FQDN of containers?
[15:44] kwmonroe_: yes
[15:45] kwmonroe_: it gives just juju-2cf5ba-19, without any domain name
[15:47] Anita_ - that's a great question to relay to the juju mailing list - juju@lists.ubuntu.com
[15:48] lazyPower_: ok
[15:51] lazyPower_: when I ping from one container to another container it resolves with the ".lxd" suffix, like "juju-2cf5ba-19.lxd", but that does not qualify as an FQDN I think.
[15:52] lazyPower_: Will mail juju@lists.ubuntu.com
[15:52] Anita_ - Juju doesn't manage DNS at present; this is why I said to ask on the list, as there may be workable solutions I'm not aware of, but the plain answer is: not today.
[15:53] lazyPower_: ok. Thank you...
[15:56] Anita_: you may also want to add your thoughts/requirements to this bug: https://bugs.launchpad.net/juju/+bug/1590961
[15:56] Bug #1590961: Need consistent resolvable name for units
[15:56] sounds like juju will have some sort of dns provision in 2.1.0
[15:58] kwmonroe_: ok
[15:59] kwmonroe_: Thank you
=== mup_ is now known as mup
=== frankban is now known as frankban|afk
[16:38] when I charm publish a new charm to the store, how long does it typically take to show up? I published to the stable channel
[16:39] cory_fu: do you have time for a quick chat on the daily regarding cwr?
[16:40] Sure
[16:40] thanks
[17:00] cholcombe: usually less than a minute. did you also "charm grant cs:~blah/foo everyone"?
[17:00] ^^ that gives everyone read perms, which is required to be visible in the store
[17:00] kwmonroe, oh.. no I didn't. that's prob it
[17:08] hi, when I deploy lxd containers to a host with juju deploy --to lxd:X, I see there is a file created on the host, /var/lib/lxd/zfs.img, which is 100G. What happens if my physical disk is smaller than 100G? I think I may have run into a situation where my containers filled up the disk. I'm not sure the best way to recover
[17:08] right now zpool status says one or more devices are faulted in response to IO failures, https://gist.github.com/raema/337581a825687bf7774715dc925fff31
[19:38] bdx: Can you have a look at my change for tls? https://github.com/juju-solutions/layer-tls/pull/47
[20:56] hi mattrae - I recall a recent convo about that being the case, and it's not ideal for that very reason. here's where that logic lives. bugworthy imo. https://github.com/juju/juju/blob/11682c54646fbe625120c0368b41b3349f04df77/container/lxd/initialisation.go#L124
[21:25] mattrae, bug 1617460
[21:25] Bug #1617460: zfs.img sparse file size is fixed, assumes at least 100GB free space on host
[21:58] bdx: I'm in a pickle. puppetlabs doesn't have ppc64le support at http://apt.puppetlabs.com/dists/, so including layer-puppet-agent fails like an absolute madman on that arch.
[21:58] bdx: apt update eventually fails like this: "W: Failed to fetch http://apt.puppetlabs.com/dists/trusty/Release Unable to find expected entry 'dependencies/binary-ppc64el/Packages' in Release file"
[21:59] bdx: so what's a good fix for layer-puppet-agent? I was thinking an easy fix would be to check cpu arch here: https://github.com/jamesbeedy/layer-puppet-agent/blob/master/lib/charms/layer/puppet.py#L117 and, if ppc64le, don't bother with apt.add_source since we know it'll fail.
[22:00] but that means we'll fall back to the archive to handle self.puppet_pkgs, and if somebody wants v4 on ppc64le, the archive won't have it.
[22:01] I'll open an issue on the layer, but wanted to give you a heads up so you can spend your entire weekend thinking about this. kthx.
[22:16] bdx: in your free time ;) https://github.com/jamesbeedy/layer-puppet-agent/issues/9
=== zeestrat_ is now known as zeestrat
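To make the arch guard proposed at [21:59] concrete, a hedged sketch against layer-puppet-agent: skip the puppetlabs apt source on ppc64el and fall back to the Ubuntu archive. The function name, source line, and package list are loose assumptions from the conversation, not a verified patch:

    import platform
    from charmhelpers.fetch import add_source, apt_install, apt_update

    # Illustrative source line; the real layer derives this from its config.
    PUPPET_APT_SOURCE = 'deb http://apt.puppetlabs.com trusty PC1'

    def install_puppet(puppet_pkgs):
        # dpkg spells the arch ppc64el; platform.machine() reports ppc64le.
        if platform.machine() != 'ppc64le':
            add_source(PUPPET_APT_SOURCE)
        # On ppc64el the puppetlabs repo is skipped entirely, so only the
        # archive's packages are available -- no Puppet 4 on that arch.
        apt_update()
        apt_install(puppet_pkgs)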