[06:00] Hello Juju World!
=== gnuoy` is now known as gnuoy
=== frankban|afk is now known as frankban
[08:06] o/
[08:30] https://ibin.co/2sQKsBt5MB9f.jpg blind IRC for Lasek people
[08:30] officially the largest IRC client in the world I reckon
=== ant_ is now known as Guest67209
[08:41] Hi, I'm using Juju 2.0-beta15-0ubuntu1~16.04.1~juju1 with the openstack provider. There are multiple networks defined so I'm using "--config network=" when bootstrapping. However, I don't see a way to set this as the default when deploying applications in the model. "juju model-defaults" does not list 'network' as a configurable option. Trying to set it results in 'key "network" is not defined in the known model configuration'
[08:42] I'll report a bug unless I'm doing something obviously wrong?
[08:49] ok, so I can set the default when creating the model rather than adding the default afterwards, that's ok. I see network in model-defaults now too. Seems like a bug that you can't add a value to the model defaults after creation
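
A sketch of the sequence that ends up working above ([08:41]-[08:49]), with NET_UUID standing in for a real network ID; flag spellings and bootstrap argument order shifted between the 2.0 betas, so treat this as illustrative:

    # Pin the network at bootstrap time, as at [08:41]:
    juju bootstrap mycontroller myopenstack --config network=NET_UUID

    # Per [08:49], the key is also accepted at model-creation time,
    # after which it shows up in model-defaults:
    juju add-model mymodel --config network=NET_UUID
    juju model-defaults network
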
[08:59] Hello there. Is it possible to have lxc deployed charms have a static ip instead of dhcp? This in case the dhcp is offline and you need to reboot/restart stuff?
[09:05] awww
[09:06] marcoceppi: I think the charm dev aws stuff has run out of room :'(
[09:06] one of those days when I could do with a MAAS server under my desk
[09:17] okay I have no clue
[09:17] every machine in aws and lxd ends up in error state
[09:27] kjackal: where can I look to find some logs regarding why my machines are failing to bootstrap?
[09:27] s/bootstrap/get allocated
[09:27] let's see..
[09:28] magicaltrout: which provider are you using?
[09:28] I've tried AWS, now trying LXD
[09:28] I get bootstrapped
[09:28] but if I try and deploy something it just lands in an error state
[09:28] Juju keeps logs of all machines under /var/log/juju/all-machines.log (I think)
[09:28] I thought it was because the AWS for Charm Devs might have been full up
[09:28] on the bootstrap node kjackal ?
[09:29] yes, on the coordinator
[09:29] how do I ssh to that these days?
[09:29] wait, wait
[09:29] on your local client not the coordinator
[09:29] hmm
[09:29] do you have anything under /var/log/juju ?
[09:30] I don't have a /var/log/juju to start with
[09:30] oooh
[09:30] hold on
[09:30] formatting status to json
[09:30] gives me an error
[09:31] thats a bit $hit
[09:31] surely those errors should be more obvious
[09:31] what juju version are you using?
[09:31] beta15
[09:32] I was using 9 earlier when my problems began
[09:32] ls -ld /var/log/juju*
[09:32] It worked yesterday on AWS fine
[09:32] today it doesn't like me
[09:32] no such file or directory
[09:33] nothing under var log!
[09:33] no
[09:33] i have no logs
[09:33] I'm old enough to know where to look by default ;)
[09:34] thats on my client
[09:34] on the controller I'm sure there are logs
[09:34] :) I am sorry did not mean any offence
[09:34] hehe I'm only messing
[09:34] i looked there first, clearly on the units etc they exist when stuff breaks
[09:34] but it doesn't seem to aggregate on the client
[09:35] anyway, it claimed there were missing tools, so I've rebootstrapped with --upload-tools
[09:35] see if it unbreaks it
[09:35] I don't understand why the tabular status doesn't tell you the error message though
[09:35] that seems a bit silly
[09:36] I am still on Juju 1.25. The lxd provider on Juju 2.0 had a bug that must have been fixed by now, but I haven't tested it yet
[09:36] you going to pasadena this year kjackal ?
[09:37] Yeap, will we see you there?
[09:37] I've been using betaX and LXD for 6 months, normally pretty stable
[09:37] hope so kjackal else someone's wasted a lot of money on flights for me
[09:37] yeah, I submitted a couple of talk proposals
[09:37] so hopefully I'm demoing something
[09:38] I worked most of yesterday on finishing up DC/OS for Mesosphere EU on the 31st
[09:38] :) The problem we had with juju 2.0 and lxd was that the machine could not immediately resolve its hostname. We had to wait or reboot the container. Have you noticed any similar behavior?
[09:38] ah yeah
[09:38] but I get that in AWS as well now
[09:38] I saw that yesterday
[09:40] magicaltrout: Well done on Mesosphere EU!
[09:40] hehe
[09:40] its pretty crazy this month
[09:41] I'm doing Amsterdam on the 31st for Mesos, London on the 1st for Pentaho, then Pasadena for the Charmers summit
[09:41] I've got 2 or 3 more talk submissions to write for this year as well
[09:41] Big Data Spain and ApacheCon EU
[09:41] oh and I'm doing the Pentaho Europe Community Meetup in November
[09:42] BigDataWeek London is too close for you? :)
[09:43] yeah thats a Bigstep thing, Meteorite runs its servers on Bigstep but they've lagged on Ubuntu support which is a pain for development
[09:44] they told me the other week they were getting Xenial tested so hopefully I can turn our Bigstep servers into Juju managed DC/OS clusters soon
[09:44] But overall your schedule is crazy!!!
[09:44] lol
[09:44] thats pretty normal ;)
[09:45] I blame jcastro he said "submit some talks on Juju and I'll help you get to them"
[09:45] so I did
[09:45] and they got accepted ;)
[09:46] hmm the BDW CFP is still open
[09:46] maybe I shall submit a paper ;)
[09:46] Lol!!!
[09:47] Oh, I have a challenge for you! http://bigdata.ieee.org/ its for next year!
[09:48] cool
[09:48] I'll think of something
[09:49] I'm happy to talk at conferences though so if people have good ones for a non canonical employee to speak at I'm happy to pitch a talk
[09:51] I'm also giving a presentation to the JPL team when i'm in Pasadena next month
[09:51] so I plan to show off the DC/OS, Kubernetes stuff as I'm getting them all involved in docker stuff
[09:51] but currently they deploy to single hosts
[09:52] and I'm trying to get them down to the summit, but I need jcastro to publish the schedule so I can tempt them
[09:56] crazy!
[09:58] I get bored when I just work on one thing
[09:58] so doing a bunch of different stuff and going to conferences at least keeps my schedule varied
[10:16] jamespage: Hi! I've asked a question in the https://review.openstack.org/#/c/348336/6/metadata.yaml - how to connect glance-charm to cinder-charm in case of implementing one additional relation with subordinate property when no other configuration is needed.
[10:17] andrey-mp, hey!
[10:17] andrey-mp, sorry for the silence - I've been away for the last week or so...
[10:17] andrey-mp, could you join #openstack-charms please?
[10:18] sure
[10:44] hi guys. i need to add an interface to the lxc created by juju and connect it to an underlying bridge
[10:47] If i add "lxc.network.type = veth lxc.network.link = br2 lxc.network.flags = up" to /var/lib/lxc/juju-machine-8-lxc-8/config
[10:47] would that do the trick
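
Written out one directive per line, the addition quoted at [10:47] looks like this; the extra lxc.network.ipv4 line is one way to get the static addressing asked about at [08:59]. A sketch: it assumes the br2 bridge already exists on the host, the address is a placeholder, and the container needs a restart to pick up the new interface:

    # Append a second veth interface, attached to host bridge br2,
    # to the container's LXC config:
    sudo tee -a /var/lib/lxc/juju-machine-8-lxc-8/config <<'EOF'
    lxc.network.type = veth
    lxc.network.link = br2
    lxc.network.flags = up
    # optional: pin a static address instead of relying on DHCP
    lxc.network.ipv4 = 10.0.3.50/24
    EOF
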
[10:53] rick_h_: can you or somebody explain multi series charm publishing please :)
[12:05] guys i configured cinder using its juju charms correctly .. i then removed the charms because i wanted to deploy the service somewhere else .. now when i am deploying the charms again keystone is not passing the required relation data to cinder
[12:05] Missing required data: admin_user service_port admin_tenant_name auth_port admin_password auth_host service_host
[12:08] guys i configured cinder using its juju charms correctly .. i then removed the charms because i wanted to deploy the service somewhere else .. now when i am deploying the charms again keystone is not passing the required relation data to cinder
[12:08] Missing required data: admin_user service_port admin_tenant_name auth_port admin_password auth_host service_host
[12:11] magicaltrout: sure thing
[12:12] magicaltrout: so the only trick is that you claim the charm supports multiple series and when you publish the charmstore stores it under those different urls
[12:12] so, I tell it it supports Wily and Xenial, but then when I "push" I push the Wily one
[12:13] is that correct? or am I going wrong somewhere?
[12:13] I suspect I've ballsed up somewhere
[12:13] charm push . cs:~spicule/dcos-master
[12:15] magicaltrout: leave the series out of the push
[12:15] magicaltrout: yes, just push like that, without any series and let the charmstore figure out where to put it
[12:15] interesting
[12:15] so then if I want to push xenial
[12:15] I just go to the xenial directory
[12:15] and do the same?
[12:15] magicaltrout: if it's a multi-series charm you just push it once. You declare in the metadata.yaml which series you support
[12:15] or does it happen in one push?
[12:16] okay, cool
[12:16] magicaltrout: and then when you push, the charmstore reads that file and puts it in all the right places
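
rick_h_'s answer, condensed into a sketch (the series list is illustrative):

    # metadata.yaml inside the charm declares every supported series:
    #   series:
    #     - xenial
    #     - wily
    # A single push, with no series in the URL, then covers all of them;
    # the charmstore reads metadata.yaml and files the charm accordingly:
    charm push . cs:~spicule/dcos-master
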
[12:31] Hi all, I got the following error message when typing "juju --debug status"
[12:31] https://www.irccloud.com/pastebin/babPt2OC/
[12:32] is there any way to fix it? I'd appreciate any hint or help. Thanks!
[12:32] It seems the juju bootstrap node is gone forever.... ?
[12:33] magicaltrout: heya, did the jpl guys confirm if they're coming to the summit? If so can you have them register so I have the food count right?
[12:40] jcastro: I don't know yet, have you got a schedule together (even something half done)? I want to blast around an email today or tomorrow but I'd like some content for the sales pitch, not just "turn up it'll be cool" ;)
[12:40] I'll have a schedule for you today
[12:41] I just drafted it on friday
[12:41] thanks
[12:45] hi marcoceppi, tvansteenburgh - do you have an eta for the next charm-tools pypi release? there are some fixes in master that we need to unblock osci (virtualenvs).
[13:11] beisner: i defer to marcoceppi on that one
=== Guest37793 is now known as zeus
[14:05] magicaltrout - i'm here whenever you're ready to talk dee cee oh ess
[14:06] hmm sooooo lazyPower
[14:06] I tweaked some stuff
[14:06] some things just plain don't work yet, but the main subsystems work
[14:06] Its a start right? :)
[14:06] I did find they've dropped K8S support for now, which is annoying =/
[14:06] oh really?
[14:07] thats interesting, there are still mesosphere people attending their sig groups. I wonder what brought that on
[14:07] yeah the more recent versions of DC/OS Mesos have some incompatibilities
[14:07] but they are trying to garner support on the GH project
[14:07] must be pending additional work w/ the scheduler
[14:07] so its not gone, its just missing stuff
[14:07] anyway
[14:07] you can spin up dcos-master & dcos-agents
[14:07] they should both be in the CS
[14:08] I lie
[14:08] the agents aren't
[14:08] 2 mins I'll push them
[14:08] don't try it in lxd you won't get very far
[14:08] i'm painfully aware of that
[14:08] * lazyPower points at the very expensive k8s bundle that won't properly run in lxd today
[14:09] okay agents should be alive
[14:10] you have to have 1, 3 or 5 masters
[14:10] and any number of agents
[14:10] magicaltrout - no bundle?
[14:10] not yet I have a day job :P
[14:10] not even a minimal "use this to kick the tires" formation?
[14:11] the most minimal is 1 master - 1 agent and a relation
[14:11] even you can manage that ;)
[14:12] You're giving me a lot of credit
[14:12] hehe
[14:12] anyway, I have a bunch of stuff on my backlog, like, I ripped out a master today to see what happened
[14:12] it locked me out :)
[14:12] we face similar issues with the k8s bundle
[14:13] nuke a master and you lose PKI
[14:13] that said, if you have masters fail in DC/OS proper, you can't add new ones, so I reckon I'm on feature parity there ;)
[14:13] the fact you can add-unit on the masters is already something you can't do in DC/OS officially
[14:14] magicaltrout https://gist.github.com/chuckbutler/ae49b395648a07222b149978c27c5402
[14:14] mind pushing that up @ your namespace? :)
[14:14] ta
[14:15] feel free to remix the machine constraints
[14:15] that might be the slowest dc/os cluster you ever deploy in your life
[14:15] seconded only by.... rpiv1's
[14:15] hehe
[14:16] * lazyPower gives it a whirl
[14:16] here goes something
[14:25] magicaltrout - are these in a github repo somewhere?
[14:26] yup
[14:27] https://github.com/buggtb/dcos-master-charm
[14:27] apologies in advance for the crazy hacks and bad code quality, I've not had a chance to tidy it up yet ;)
[14:28] https://github.com/buggtb/dcos-agent-charm
[14:30] no stress amigo
[14:30] just filing bugs as i find them so we have somewhere to start :)
[14:30] cool
[14:31] I've not tried it outside of wily by the way, so your xenial test is the first blast at something a bit more modern
[14:31] heh
[14:31] thats an interesting predicament
[14:32] when i started using it Xenial images weren't in EC2 and Trusty doesn't work
[14:32] so technically it should "probably" work :)
[14:33] it was an upstart/systemd thing
[14:36] ack
[14:36] looks like it needs a bump for pip out the gate
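
A hypothetical bundle for the minimal formation described at [14:11]: one master, one agent, and a relation. The charm URLs, application names, and the bare relation pair are guesses based on the charms mentioned above, so expect to adjust them:

    cat > dcos-minimal.yaml <<'EOF'
    series: xenial
    services:
      dcos-master:
        charm: cs:~spicule/dcos-master
        num_units: 1
      dcos-agent:
        charm: cs:~spicule/dcos-agent
        num_units: 1
    relations:
      - ["dcos-master", "dcos-agent"]
    EOF
    juju deploy ./dcos-minimal.yaml
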
[15:03] hi all, I am trying to deploy openstack using the juju "openstack-base bundle"; all services are in pending state
[15:03] how much time will it take to deploy openstack?
[15:03] please someone help
[15:04] sra some 30+ minutes I think
[15:04] Depends on your substrate, as well.
[15:05] rick_h: I am deploying juju openstack on a vm which has 6 GB RAM and 80 GB Disk
[15:05] will it cause any issues
[15:06] sra: 6gb of ram is very light imo
[15:06] When I've deployed it using lxd, I've seen it sit at 8-10GB of RAM.
[15:06] Odd_Bloke: you deployed on a VM?
[15:07] This was lxd containers on hardware.
[15:07] Odd_Bloke: can we deploy on a VM
[15:07] sra: Are you saying you're trying to deploy it on to one VM? Or you want to deploy on to multiple VMs with those specifications?
[15:08] Odd_Bloke: trying to deploy it on a single VM
[15:09] sra: With 6GB of RAM, you aren't going to get something that works very well.
[15:09] If it works at all.
[15:13] Odd_Bloke: I started my deployment 1 hour back
[15:14] still all the services are showing agent-state "pending"
[15:15] Odd_Bloke: are you around
[15:15] sra: How are you deploying them?
[15:16] Using the openstack-base bundle
[15:16] from juju-gui
[15:16] sra: Right, but what substrate? EC2? lxd?
[15:17] Odd_Bloke: lxc
[15:18] sra: OK, so you should be seeing that machine under a lot of load ATM.
[15:18] sra: As I said, I don't think 6GB of RAM is going to work.
[15:18] OpenStack is too complex a beast to fit in 6GB of RAM.
[15:19] Odd_Bloke: So please provide me the requirements for deploying OpenStack on a single VM using the juju OpenStack-base bundle
[15:20] sra: What are you actually trying to achieve? I've had it work on a 16GB NUC, but you wouldn't actually have wanted to use that for anything serious.
[15:23] Odd_Bloke: I want to make changes to the cinder charm and then test whether the changes applied or not
[15:23] sra: Ah, OK, I see.
[15:23] sra: So I haven't actually used the bundles for development of OpenStack.
[15:24] jamespage: Perhaps you would be able to point sra at someone (or some docs) on how to get set up to do OpenStack charm development?
[15:36] sra - bare minimum you will need 12GB of ram on that unit
[15:36] sra - and likely 4+ cores if you expect it to work with any reasonable efficiency
[15:36] sra - additionally, if you're seeing lxd units in 'pending', can you do me a favor? run lxc list and see if the juju templates have been created. they should be listed clearly: with the phrase juju and xenial in the image name
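
The check lazyPower is asking for, from a shell on the host; the lxc-ls form is the rough equivalent for the Juju 1.25/lxc setup that turns out to be in play below:

    # Juju 2.x with the lxd provider:
    lxc list
    # Juju 1.x with the local (lxc) provider:
    sudo lxc-ls --fancy
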
[15:38] my Base VM has the ubuntu 14.04 OS, will it work?
[15:38] sra - i highly recommend you move to xenial so you can use the latest bits for lxd
[15:39] lazyPower: Juju<2 would be using lxc not lxd, right?
[15:39] Odd_Bloke - i havent tried juju2 on trusty in quite some time
[15:40] so, for completeness sake, i recommend xenial
[15:40] better to have them on a series thats got more eyes on it, know what i mean? :)
[15:40] lazyPower: Right, but if sra is on trusty then I wonder if they are using Juju 1.x (and therefore lxc), rather than Juju 2 (and therefore lxd). :)
[15:40] i assume that to be the case
[15:41] sra can you confirm? ^
[15:42] lazyPower: yes
[15:43] i am using ubuntu 14.04
[15:43] sra: Which version of juju are you using?
[15:43] jamespage thedac wolsen - any known blockers on using juju 1.25 with lxc for openstack-base bundle deployments?
[15:43] 1.25.6-trusty-amd64
[15:43] lazyPower, no, in fact that's what we verify on still
[15:43] ok
[15:43] sra - no need to upgrade according to the potentate of our charms :)
[15:43] oh wait - its 16.04 based, not 14.04 based
[15:44] ah
[15:44] welp
[15:44] perhaps upgrade and still install juju-1
[15:44] sra, openstack-base is not deployable in a single vm
[15:45] its very much designed to be deployed on multiple servers using MAAS
[15:45] https://jujucharms.com/openstack-base/
[15:45] the README has the details for the requirements
[15:45] if you want to do an all-in-one, https://github.com/openstack-charmers/openstack-on-lxd is your best route
[15:46] jamespage: can i deploy openstack by dragging individual components from the juju gui in ubuntu 14.04
[15:47] sra, well you can but its a lot of clicking
[15:47] a bundle is a much better option
[15:47] jamespage: for a bundle we need ubuntu 16.04?
[15:48] sra, the latest openstack-base will deploy a 16.04 based openstack cloud
[15:48] sra, openstack-on-lxd requires a 16.04 host for the deployment
[15:48] so yeah I guess it does - sorry
[15:52] I want a basic openstack deployment for modifying cinder and testing the changes using juju. So for this, what do I have to do for an OpenStack deployment on ubuntu 14.04?
[15:54] jamespage: are you around
[15:54] sra, I am
[15:54] jamespage: I want a basic openstack deployment for modifying cinder and testing the changes using juju. So for this, what do I have to do for an OpenStack deployment on ubuntu 14.04?
[15:55] sra, I saw :)
[15:55] well the charms will support 14.04, its just that the bundles we publish are all baselined on 16.04
[15:56] sra, https://code.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk
[15:56] has all of the bundles that most of the openstack-charmers team use for development of charms; they deploy sparse (not using LXD/LXC containers) and are designed to be deployed on top of a cloud; we happen to use OpenStack as the base cloud as well.
[15:57] if you propose a change against one of the openstack charms, its the same cloud that gets used to verify the changes...
[15:57] http://docs.openstack.org/developer/charm-guide/ might be useful as a reference as well
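
For the goal stated at [15:52], testing a modified cinder charm, a sketch of the local-charm deploy workflow; paths and series are illustrative:

    # Juju 1.25 (sra's version): deploy from a local charm repository
    # laid out as ./trusty/cinder:
    juju deploy --repository=$PWD local:trusty/cinder

    # On Juju 2.0 the equivalent is a plain path:
    juju deploy ./cinder
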
[16:08] kjackal, arosales: Can you remind me why I needed to email the Bigtop list about ppc64 artifacts again? The apache-bigtop-base layer already has a repo configuration listed for ppc64el (https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/layer.yaml#L42)
[16:10] cory_fu: If I remember correctly, when we were writing the ticket we couldn't find the actual *.deb packages.
[16:10] cory_fu: arosales I can try to deploy something on power and see what happens if you grant me access to any such machine
[16:13] cory_fu: as I recall it the only .debs were xenial and weren't in the latest bigtop release, perhaps that has changed
[16:16] cory_fu: if you have a xenial ppc64el .deb we can use from bigtop today then no need to email
[16:16] I'm not sure. kwmonroe: Did you test this at one point?
[16:17] arosales: Do we not have access to Power machines anymore?
[16:17] cory_fu: we do
[16:17] arosales: siteox? stilson?
[16:21] cory_fu: sorry, i can't seem to recall the need for a ppc artifact email. perhaps it was just to verify where we should be pulling ppc debs from, but you already know where to get them for vivid and xenial.
[16:22] petevg: I got this error on the namenode: "dpkg-query: package 'openjdk-8-jdk' is not installed" looking into it
[16:24] cory_fu: stilson
[16:28] kjackal: I thought that I had tested things on trusty, but I may have run my Zookeeper tests on xenial.
[16:29] I was just playing w/ stuff on a vagrant vm, and it looks like jdk-8 isn't in trusty by default.
[16:30] petevg: let me understand something. In case we skip openjdk, what layer should deploy java?
[16:31] The base layer?
[16:31] kjackal: apache-bigtop-base should install it.
[16:31] kjackal: if you just got rid of the relations, you'd need to make sure that all the charms without the relation were built on top of the updated apache-bigtop-base layer.
[16:32] magicaltrout: https://docs.google.com/spreadsheets/d/1czOlxejWRkE5tHnX8c04Xo5ZhxVe5auiDoCBqR4mN90/edit
[16:36] petevg: another question
[16:37] we set the default value for the bigtop_jdk config param to openjdk 8
[16:38] kjackal: ?
[16:38] in the bigtop base layer we ask bigtop to install the jdk by doing this: 'bigtop::jdk_preinstalled': not bigtop_jdk
[16:38] good monday morning all
[16:39] kjackal: yes. So ...
[16:39] (Good morning, bdx)
[16:39] I've an implementation question if anyone wants to chime in
[16:39] when does this "not bigtop_jdk" evaluate to false?
[16:39] kjackal: Bigtop will install openjdk if jdk_preinstalled is *not* true.
[16:40] sorry, when does this "not bigtop_jdk" evaluate to true?
[16:40] kjackal: When we have no value set there. When we override the value in options with an empty string.
[16:41] By default, it should evaluate to False, which means that we do want Bigtop to install the version of jdk we specify in the config.
[16:41] I don't like the backwards logic a whole lot, but that is how it kind of needs to work.
[16:41] bdx: I will chime in on stuff -- don't mind kjackal and I working through some other stuff at the same time :-)
[16:42] bdx: Quest away
[16:42] petevg: yes, agreed. Let me try to find when we empty the self.options.get('bigtop_jdk')
[16:43] I have 3 private applications, 2 of which are rails apps, 1 is a php app. Each app has its own stack of supporting micro services (e.g. redis, postgres, es, resqueue, etc, etc), averaging to 5 extra supporting services (or 5 extra instances) per app
[16:43] all three apps talk to each other, and all 3 apps have been charmed up
[16:44] petevg: or is this (setting the bigtop_jdk to '') expected to be a deploy-time decision?
[16:44] kjackal: no. It is an implementation time decision.
[16:45] petevg: by implementation you mean?
[16:45] kjackal: basically, with this change, by default, your bigtop based charms will not require a relation to openjdk, per cory_fu's request. You can override that in a charm by overriding the option in that charm's layer.yaml.
[16:45] I have been experimenting with service placement; deploying everything to lxd minus the database(s) - which works well for my use case for these apps
[16:45] heres the crux
[16:46] petevg: at build time?
[16:46] petevg: I see
[16:46] petevg: s/implementation/orchestration/
[16:47] bdx: got it. thx for the correction :-)
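
A sketch of the override petevg describes at [16:45]: a charm built on apache-bigtop-base can blank bigtop_jdk in its own layer.yaml, flipping jdk_preinstalled to true so that Bigtop skips the JDK and the charm keeps using the openjdk relation. Option and layer names are taken from the discussion; the surrounding structure is illustrative:

    # The consuming charm's layer.yaml (shown as a heredoc for clarity):
    cat > layer.yaml <<'EOF'
    includes:
      - 'layer:apache-bigtop-base'
    options:
      apache-bigtop-base:
        # empty string => "not bigtop_jdk" is true => jdk_preinstalled: true,
        # so Bigtop leaves the JDK to the relation
        bigtop_jdk: ''
    EOF
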
[16:47] we currently deploy everything to aws, and use opsworks/docker to get the apps to the containers
[16:48] yeah thanks jcastro i'll blast around an email
[16:50] I have revised these apps to be juju deployed, but am having trouble determining how to orchestrate the apps, i.e. how to get my apps similarly deployed to lxd containers at a provider agnostic level
[16:52] this set of 3 apps needs to be deployed to rackspace and aws, and soon an openstack cloud per customer requirements
[16:52] so, basically I feel like I've worked myself into a rat hole
=== frankban is now known as frankban|afk
[16:54] I've charmed up our apps, but have no way to orchestrate with containers using Juju :-(
[16:54] I've plans to make use of lxd-openstack
[16:55] but that doesn't help when I have to do a KPI comparison for the apps being juju deployed vs. non-juju deployed
[16:55] I'd write it up and send it to the list
[16:55] see what other people are doing
[16:55] jcastro: sure thing
[16:58] petevg: do you see where this is going at least?
[17:04] bdx: I got pulled into a meeting. Catching up ...
[17:06] bdx: I agree that posting it to the list makes sense. I'm not sure how to untangle the containers in containers issue :-/
[17:29] marcoceppi, do you have an eta for the next charm-tools pypi release? there are some fixes in master that we need to unblock osci (virtualenvs).
[17:29] beisner: 10 mins
[17:29] wooo! marcoceppi
[17:29] it's not going to be all of master, it'll be a 2.1.3 patch
[17:29] kk thx
[17:30] PRs #219, #204, and #248 included
[17:33] beisner: 2.1.4 is on pypi
[17:35] Can we do the remote_get function during relation_departed time?
[17:36] sorry, get_remote() during relation_departed
[17:37] Anita_: potentially, I can't remember. I know you can't during broken, but departed may still have remote data
[17:38] ok that means during *relation_departed* we can get the values?
[17:39] bdx: Sorry, I also got caught up in something else. When you say you want to deploy the apps to lxd containers, how is what you're looking for different than using lxc placement directives in a bundle? https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives
[17:40] marcoceppi_: I can call the relation_call function during departed and get the values?
[17:42] marcoceppi_: but does the producer program need to set the values for the *relation.departed* state?
[17:43] marcoceppi_: currently my producer application sets the values for *relation_joined/changed* states only... So i am not sure if I can get the values as a consumer charm during the *relation_departed* state
[17:45] marcoceppi_: can you please confirm?
[17:56] marcoceppi_: currently my provider charm sets the values for *relation_joined/changed* states only... So i am not sure if I can get the values as a consumer charm during the *relation_departed* state
[17:56] marcoceppi_: can you please confirm?
[18:09] cory_fu: ^?
[18:13] beisner: is 2.1.4 working for you?
[18:24] marcoceppi: Anita_ signed off, apparently, but the answer is that you can do get_remote (aka relation-get) during -departed but you probably shouldn't do relation-set or set new states (rather, just remove them)
[18:24] cory_fu: figured, thanks
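
cory_fu's answer at [18:24], sketched as a plain hook script; the hook and key names are illustrative, borrowed from the keystone/cinder data seen earlier:

    #!/bin/bash
    # hooks/identity-service-relation-departed (name illustrative)
    # relation-get still works during -departed, so values the remote unit
    # published during -joined/-changed remain readable here:
    admin_user=$(relation-get admin_user)
    juju-log "departing; last published admin_user=${admin_user}"
    # But avoid relation-set or raising new states in -departed;
    # only remove state at this point.
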
[18:25] marcoceppi, a couple of manual checks look good. thank you
[18:31] beisner: I've pressed a charm snap and charm-tools deb update
[18:33] confirm it marcoceppi !!! confirrrrrrrm it!
[18:35] lazyPower: one thing I did want to mull over with you before MesosCon
[18:35] is Logstash and DC/OS
[18:36] to offer some logging
[18:36] and I'll wire up some nagios stuff hopefully
[18:36] to demo some relations stuff
[18:39] cory_fu: the only provider supporting lxd is openstack
[18:39] and thats not even juju lxd
[18:40] as far as juju is concerned, the only provider to support lxd is maas
[18:52] magicaltrout - ack, when is mesoscon?
[18:55] bdx: Is that true? I thought most clouds supported lxc container placement. Perhaps I'm still not understanding what you mean by "support lxd"
[19:05] 31st lazyPower ;)
[19:05] cory_fu: try placing a lxd on an aws instance
[19:06] :-(
[19:06] bdx: http://pastebin.ubuntu.com/23079266/
[19:06] That's lxc and not lxd, though. I think we may be talking about different things
[19:36] magicaltrout - ack, we have a bit of wiggle room then. let me finish up this week's demo prep and i can context switch over to getting you hooked up with the elastic stack
[19:43] cool lazyPower i should have a few spare evenings this week to sort a bunch of the backlog out
[19:58] marcoceppi, fleet of :boats: - thanks again!
[20:14] magicaltrout \oo/, rock on man
[20:24] not really
[20:24] i'm wiring up a CAS server for web app authentication
[20:24] its very tedious :O
[20:25] hi, have you guys seen this error in a reactive charm? http://pastebin.ubuntu.com/23079428/
[20:25] just wondering if it's a bug in how the charm is using that layer, or in the layer itself
[20:26] I filed a bug against the postgresql charm for now
[20:33] ahasenack - i've seen that when i've rebuilt a charm using local layers, and i didn't keep my clone in sync with what's upstream
[20:34] namely, it didn't pull in a new interface it expected to have
[20:34] I see
[20:35] yeah, looks like a "bzr add" was forgotten or something
[20:35] ahasenack - what i suggest is peek at the interface archive, give the charm a build locally using the following switches: `charm build -r --no-local-layers` and see if that interface pops up in the assembled charm
[20:36] the archive peek is to verify the interface exists and implements the missing class
[22:00] beisner: good, because it :boat:'d a while ago
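
lazyPower's suggested check at [20:35]-[20:36], spelled out; the output path depends on JUJU_REPOSITORY and the build defaults, so the final ls is illustrative:

    # Rebuild against the published layers only, ignoring stale local clones:
    charm build -r --no-local-layers
    # Then confirm the interface landed in the assembled charm; reactive
    # interfaces end up under hooks/relations/<interface-name>/:
    ls $JUJU_REPOSITORY/trusty/<charm-name>/hooks/relations/
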