[06:00] <kjackal> Hello Juju World!
[08:06] <jamespage> o/
[08:30] <magicaltrout> https://ibin.co/2sQKsBt5MB9f.jpg blind IRC for Lasek people
[08:30] <magicaltrout> officially the largest IRC client in the world I reckon
[08:41] <gnuoy> Hi, I'm using Juju 2.0-beta15-0ubuntu1~16.04.1~juju1 with the openstack provider. There are multiple networks defined so I'm using "--config network=<network UUID>" when bootstrapping. However, I don't see a way to set this as the default when deploying applications in the model. "juju model-defaults" does not list 'network' as a configurable option. Trying to set it results in 'key "network" is not defined in the known model configuration'
[08:42] <gnuoy> I'll report a bug unless I'm doing something obviously wrong?
[08:49] <gnuoy> ok, so I can set the default when creating the model rather than adding the default afterwards, thats ok. I see network in model-defaults now too. Seems like a bug that you can't add a value to the models defaults after creation
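For reference, the behaviour gnuoy describes can be sketched like this (placeholder UUID; cloud/controller names are placeholders too, and argument order varied across the 2.0 betas):

```shell
NET_UUID=00000000-0000-0000-0000-000000000000   # placeholder neutron network UUID

# setting the default at bootstrap time works:
juju bootstrap --config network=$NET_UUID <cloud> <controller>

# as does setting it when the model is created:
juju add-model mymodel --config network=$NET_UUID

# ...but adding it to an existing model's defaults fails on this beta:
juju model-defaults network=$NET_UUID
#   => key "network" is not defined in the known model configuration
```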
[08:59] <BlackDex> Hello there. Is it possible to have lxc deployed charms have a static ip instead of dhcp? This is in case the dhcp is offline and you need to reboot/restart stuff.
[09:05] <magicaltrout> awww
[09:06] <magicaltrout> marcoceppi: I think the charm dev aws stuff has run out of room :'(
[09:06] <magicaltrout> one of those days when I could do with a MAAS server under my desk
[09:17] <magicaltrout> okay I have no clue
[09:17] <magicaltrout> every machine in aws and lxd ends up in error state
[09:27] <magicaltrout> kjackal: where can I look to find some logs regarding why my machines are failing to get allocated?
[09:27] <kjackal> let's see..
[09:28] <kjackal> magicaltrout: which provider are you using?
[09:28] <magicaltrout> I've tried AWS, now trying LXD
[09:28] <magicaltrout> I get bootstrapped
[09:28] <magicaltrout> but if I try and deploy something if just lands in an error state
[09:28] <kjackal> Juju keeps logs for all machines under /var/log/juju/all-machines.log (I think)
[09:28] <magicaltrout> I thought it was because the AWS for Charm Devs might have been full up
[09:28] <magicaltrout> on the bootstrap node kjackal ?
[09:29] <kjackal> yes, on the coordinator
[09:29] <magicaltrout> how do I ssh to that these days?
[09:29] <kjackal> wait, wait
[09:29] <kjackal> on your local client not the coordinator
[09:29] <magicaltrout> hmm
[09:29] <kjackal> do you have anything under /var/log/juju ?
[09:30] <magicaltrout> I don't have a /var/log/juju to start with
[09:30] <magicaltrout> oooh
[09:30] <magicaltrout> hold on
[09:30] <magicaltrout> formatting status to json
[09:30] <magicaltrout> gives me an error
[09:31] <magicaltrout> thats a bit $hit
[09:31] <magicaltrout> surely those errors should be more obvious
[09:31] <kjackal> what juju version are you using?
[09:31] <magicaltrout> beta15
[09:32] <magicaltrout> I was using 9 earlier when my problems began
[09:32] <kjackal> ls -ld  /var/log/juju*
[09:32] <magicaltrout> It worked yesterday on AWS fine
[09:32] <magicaltrout> today it doesn't like me
[09:32] <magicaltrout> no such file or directory
[09:33] <kjackal> nothing under var log!
[09:33] <magicaltrout> no
[09:33] <magicaltrout> i have no logs
[09:33] <magicaltrout> I'm old enough to know where to look by default ;)
[09:34] <magicaltrout> thats on my client
[09:34] <magicaltrout> on the controller I'm sure there are logs
[09:34] <kjackal> :) I am sorry, I did not mean any offence
[09:34] <magicaltrout> hehe I'm only messing
[09:34] <magicaltrout> i looked there first, clearly on the units etc they exist when stuff breaks
[09:34] <magicaltrout> but it doesn't seem to aggregate on the client
[09:35] <magicaltrout> anyway, it claimed there were missing tools, so I've rebootstrapped with --upload-tools
[09:35] <magicaltrout> see if it unbreaks it
[09:35] <magicaltrout> I don't understand why the tabular status doesn't tell you the error message though
[09:35] <magicaltrout> that seems a bit silly
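A few ways to dig the actual error out from the client, assuming Juju 2.0 beta behaviour (logs live on the controller, not on the local client):

```shell
# full status, including machine-level error details the tabular view omits
juju status --format=yaml

# stream the controller's aggregated logs from the client
juju debug-log --replay

# or look at the logs directly on the controller machine
juju ssh 0 'ls /var/log/juju && tail -n 50 /var/log/juju/*.log'
```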
[09:36] <kjackal> I am still on Juju 1.25. The lxd provider on Juju 2.0 had a bug that must have been fixed by now, but haven't tested it yet
[09:36] <magicaltrout> you going to pasadena this year kjackal ?
[09:37] <kjackal> Yeap, will we see you there?
[09:37] <magicaltrout> I've been using betaX and LXD for 6 months, normally pretty stable
[09:37] <magicaltrout> hope so kjackal else someones wasted a lot of money on flights for me
[09:37] <magicaltrout> yeah, I submitted a couple of talk proposals
[09:37] <magicaltrout> so hopefully I'm demoing something
[09:38] <magicaltrout> I worked most of yesterday on finishing up DC/OS for Mesosphere EU on the 31st
[09:38] <kjackal> :) The problem we had with juju 2.0 and lxd was that the machine could not immediately resolve its hostname. We had to wait or reboot the container. have you noticed any similar behavior?
[09:38] <magicaltrout> ah yeah
[09:38] <magicaltrout> but I get that in AWS as well now
[09:38] <magicaltrout> I saw that yesterday
[09:40] <kjackal> magicaltrout: Well done on Mesosphere EU!
[09:40] <magicaltrout> hehe
[09:40] <magicaltrout> its pretty crazy this month
[09:41] <magicaltrout> I'm doing Amsterdam on the 31st for Mesos, London on the 1st for Pentaho, then Pasadena for Charmers summit
[09:41] <magicaltrout> I've got 2 or 3 more talk submissions to write for this year as well
[09:41] <magicaltrout> Big Data Spain and ApacheCon EU
[09:41] <magicaltrout> oh and I'm doing Pentaho Europe Community Meetup in November
[09:42] <kjackal> BigdataWeek London is too close for you? :)
[09:43] <magicaltrout> yeah thats a Bigstep thing, Meteorite runs its servers on bigstep but they've lagged on Ubuntu support which is a pain for development
[09:44] <magicaltrout> they told me the other week they were getting Xenial tested so hopefully I can turn our Bigstep servers into Juju managed DC/OS clusters soon
[09:44] <kjackal> But overall your schedule is crazy!!!
[09:44] <magicaltrout> lol
[09:44] <magicaltrout> thats pretty normal ;)
[09:45] <magicaltrout> I blame jcastro he said "submit some talks on Juju and I'll help you get to them"
[09:45] <magicaltrout> so I did
[09:45] <magicaltrout> and they got accepted ;)
[09:46] <magicaltrout> hmm the BDW CFP is still open
[09:46] <magicaltrout> maybe I shall submit a paper ;)
[09:46] <kjackal> Lol!!!
[09:47] <kjackal> Oh, I have a challenge for you! http://bigdata.ieee.org/ it's for next year!
[09:48] <magicaltrout> cool
[09:48] <magicaltrout> I'll think of something
[09:49] <magicaltrout> I'm happy to talk at conferences though so if people have good ones for a non canonical employee to speak at I'm happy to pitch a talk
[09:51] <magicaltrout> I'm also giving a presentation to the JPL team when i'm in Pasadena next month
[09:51] <magicaltrout> so I plan to show off the DC/OS, Kubernetes stuff as I'm getting them all involved in docker stuff
[09:51] <magicaltrout> but currently they deploy to single hosts
[09:52] <magicaltrout> and I'm trying to get them down to the summit, but I need jcastro to publish the schedule so I can tempt them
[09:56] <kjackal> crazy!
[09:58] <magicaltrout> I get bored when I just work on one thing
[09:58] <magicaltrout> so doing a bunch of different stuff and going to conferences at least keeps my schedule varied
[10:16] <andrey-mp> jamespage: Hi! I've asked a question in the https://review.openstack.org/#/c/348336/6/metadata.yaml - how to connect glance-charm to cinder-charm in case of implementing one additional relation with subordinate property when no other configuration is needed.
[10:17] <jamespage> andrey-mp, hey!
[10:17] <jamespage> andrey-mp, sorry for the silence - I've been away for the last week or so...
[10:17] <jamespage> andrey-mp, could you join #openstack-charms please?
[10:18] <andrey-mp> sure
[10:44] <bbaqar> hi guys. i need to add an interface to the lxc created by juju and connect it to an underlying bridge
[10:47] <bbaqar> If i add "lxc.network.type = veth lxc.network.link = br2 lxc.network.flags = up"  to  /var/lib/lxc/juju-machine-8-lxc-8/config
[10:47] <bbaqar> would that do the trick
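One hedged sketch, assuming LXC 1.x as the /var/lib/lxc path implies: the three lines from the question add the bridged interface, and an extra lxc.network.ipv4 line (an addition not in the question) would pin a static address so the container comes back without DHCP. Worth checking whether juju rewrites this file on redeploy.

```
# appended to /var/lib/lxc/juju-machine-8-lxc-8/config  (LXC 1.x syntax)
lxc.network.type = veth
lxc.network.link = br2
lxc.network.flags = up
lxc.network.ipv4 = 10.0.3.50/24   # hypothetical address on br2's subnet
```

The container has to be restarted (lxc-stop / lxc-start) before the new interface appears.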
[10:53] <magicaltrout> rick_h_: can you or somebody explain multi series charm publishing please :)
[12:05] <bbaqar> guys i configured cinder using its juju charms correctly .. i then removed the charms because i wanted to deploy the service somewhere else .. now when i am deploying the charms again keystone is not passing the req relation data to cinder
[12:05] <bbaqar> Missing required data: admin_user service_port admin_tenant_name auth_port admin_password auth_host service_host
[12:11] <rick_h_> magicaltrout: sure thing
[12:12] <rick_h_> magicaltrout: so the only trick is that you claim the charm supports multiple series and when you publish the charmstore stores it under those different urls
[12:12] <magicaltrout> so, I tell it it supports Wily and Xenial, but then when I "push" I push the Wily one
[12:13] <magicaltrout> is that correct? or am I going wrong somewhere?
[12:13] <magicaltrout> I suspect I've ballsed up somewhere
[12:13] <magicaltrout> charm push . cs:~spicule/dcos-master
[12:15] <rick_h_> magicaltrout: leave the series out of the push
[12:15] <rick_h_> magicaltrout: yes, just push like that, without any series and let the charmstore figure out where to put it
[12:15] <magicaltrout> interesting
[12:15] <magicaltrout> so then if I want to push xenial
[12:15] <magicaltrout> I just go to the xenial directory
[12:15] <magicaltrout> and do the same?
[12:15] <rick_h_> magicaltrout: if it's a multi-series charm you just push it once. You declare in the metadata.yaml which series you support
[12:15] <magicaltrout> or does it happen in one push?
[12:16] <magicaltrout> okay, cool
[12:16] <rick_h_> magicaltrout: and then when you push, the charmstore reads that file and puts it in all the right places
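rick_h_'s explanation can be sketched like this (hypothetical charm name, summary, and namespace, based on the push command magicaltrout quoted above):

```shell
# A multi-series charm declares every series it supports once, in metadata.yaml:
cat > metadata.yaml <<'EOF'
name: dcos-master
summary: DC/OS master node
description: Charm for the DC/OS master components.
series:
  - xenial
  - wily
EOF

# Then push once, with no series in the URL; the charm store reads the
# series list and files the charm under each one:
#   charm push . cs:~spicule/dcos-master
grep -A3 '^series:' metadata.yaml
```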
[12:31] <godleon> Hi all, I got following error message when type "juju --debug status"
[12:31] <godleon> https://www.irccloud.com/pastebin/babPt2OC/
[12:32] <godleon> is there any way to fix it? I'd be very appreciated for any hint or help. Thanks!
[12:32] <godleon> It seems juju bootstrap node was gone forever.... ?
[12:33] <jcastro> magicaltrout: heya, did the jpl guys confirm if they're coming to the summit? If so can you have them register so I have the food count right?
[12:40] <magicaltrout> jcastro: I don't know yet, have you got a schedule together (even something half done)? I want to blast around an email today or tomorrow but I'd like some content for the sales pitch, not just "turn up it'll be cool" ;)
[12:40] <jcastro> I'll have a schedule for you today
[12:41] <jcastro> I just drafted it on friday
[12:41] <magicaltrout> thanks
[12:45] <beisner> hi marcoceppi, tvansteenburgh - do you have an eta for the next charm-tools pypi release?  there are some fixes in master that we need to unblock osci (virtualenvs).
[13:11] <tvansteenburgh> beisner: i defer to marcoceppi on that one
[14:05] <lazyPower> magicaltrout - i'm here whenever you're ready to talk dee cee oh ess
[14:06] <magicaltrout> hmm sooooo lazyPower
[14:06] <magicaltrout> I tweaked some stuff
[14:06] <magicaltrout> some things just plain don't work yet, but the main subsystems work
[14:06] <lazyPower> Its a start right? :)
[14:06] <magicaltrout> I did find it annoying that they've dropped K8S support for now =/
[14:06] <lazyPower> oh really?
[14:07] <lazyPower> thats interesting, there are still mesosphere people attending their sig groups. I wonder what brought that on
[14:07] <magicaltrout> yeah the more recent versions of DC/OS Mesos have some incompatibilities
[14:07] <magicaltrout> but they are trying to garner support on the GH project
[14:07] <lazyPower> must be pending additional work w/ the scheduler
[14:07] <magicaltrout> so its not gone, its just missing stuff
[14:07] <magicaltrout> anyway
[14:07] <magicaltrout> you can spin up dcos-master & dcos-agents
[14:07] <magicaltrout> they should both be in the CS
[14:08] <magicaltrout> I lie
[14:08] <magicaltrout> the agents aren't
[14:08] <magicaltrout> 2 mins I'll push them
[14:08] <magicaltrout> don't try it in lxd you won't get very far
[14:08] <lazyPower> i'm painfully aware of that
[14:08]  * lazyPower points at the very expensive k8s bundle that wont properly run in lxd today
[14:09] <magicaltrout> okay agents should be alive
[14:10] <magicaltrout> you have to have 1, 3 or 5 masters
[14:10] <magicaltrout> and any number of agents
[14:10] <lazyPower> magicaltrout - no bundle?
[14:10] <magicaltrout> not yet I have a day job :P
[14:10] <lazyPower> not even a minimal "use this to kick the tires" formation?
[14:11] <magicaltrout> the most minimal is 1 master - 1 agent and a relation
[14:11] <magicaltrout> even you can manage that ;)
[14:12] <lazyPower> You're giving me a lot of credit
[14:12] <magicaltrout> hehe
[14:12] <magicaltrout> anyway, I have a bunch of stuff on my backlog, like, I ripped out a master today to see what happened
[14:12] <magicaltrout> it locked me out :)
[14:12] <lazyPower> we face similar issues with the k8s bundle
[14:13] <lazyPower> nuke a master and you lose PKI
[14:13] <magicaltrout> that said, if you have masters fail in DC/OS proper, you can't add new ones, so I reckon I'm on feature parity there ;)
[14:13] <magicaltrout> the fact you can add-unit on the masters is already something you can't do in DC/OS officially
[14:14] <lazyPower> magicaltrout https://gist.github.com/chuckbutler/ae49b395648a07222b149978c27c5402
[14:14] <lazyPower> mind pushing that up @ your namespace? :)
[14:14] <magicaltrout> ta
[14:15] <lazyPower> feel free to remix the machine constraints
[14:15] <lazyPower> that might be the slowest dc/os cluster you ever deploy in your life
[14:15] <lazyPower> seconded only by.... rpiv1's
[14:15] <magicaltrout> hehe
[14:16]  * lazyPower gives it a whirl
[14:16] <lazyPower> here goes something
[14:25] <lazyPower> magicaltrout - are these in a github repo somewhere?
[14:26] <magicaltrout> yup
[14:27] <magicaltrout> https://github.com/buggtb/dcos-master-charm
[14:27] <magicaltrout> apologies in advance for the crazy hacks and bad code quality, I've not had chance to tidy it up yet ;)
[14:28] <magicaltrout> https://github.com/buggtb/dcos-agent-charm
[14:30] <lazyPower> no stress amigo
[14:30] <lazyPower> just filing bugs as i find them so we have somewhere to start :)
[14:30] <magicaltrout> cool
[14:31] <magicaltrout> I've not tried it outside of wily by the way, so your xenial test is the first blast at something a bit more modern
[14:31] <lazyPower> heh
[14:31] <lazyPower> that's an interesting predicament
[14:32] <magicaltrout> when i started using it Xenial images weren't in EC2 and Trusty doesn't work
[14:32] <magicaltrout> so technically it should "probably" work :)
[14:33] <magicaltrout> it was an upstart/systemd thing
[14:36] <lazyPower> ack
[14:36] <lazyPower> looks like it needs a bump for pip out the gate
[15:03] <sra> hi all, I am trying to deploy openstack using the juju "openstack-base bundle"; all services are in pending state
[15:03] <sra> how much time it will take to deploy openstack?
[15:03] <sra> please someone help
[15:04] <rick_h_>  sra some 30+minutes I think
[15:04] <Odd_Bloke> Depends on your substrate, as well.
[15:05] <sra> rick_h: I am deploying juju openstack on vm which has 6 GB RAM and 80 GB Disk
[15:05] <sra> will it cause any issues?
[15:06] <rick_h_> sra: 6gb of ram is very light imo
[15:06] <Odd_Bloke> When I've deployed it using lxd, I've seen it sit at 8-10GB of RAM.
[15:06] <sra> Odd_Bloke: you deployed on VM?
[15:07] <Odd_Bloke> This was lxd containers on hardware.
[15:07] <sra> Odd_Bloke: can we deploy on Vm
[15:07] <Odd_Bloke> sra: Are you saying you're trying to deploy it on to one VM?  Or you want to deploy on to multiple VMs with those specifications?
[15:08] <sra> Odd_Bloke: trying to deploy it on single VM
[15:09] <Odd_Bloke> sra: With 6GB of RAM, you aren't going to get something that works very well.
[15:09] <Odd_Bloke> If it works at all.
[15:13] <sra> Odd_Bloke: I started my deployment 1 hour back
[15:14] <sra> still all the services it is showing agent-state in "pendig"
[15:15] <sra> Odd_Bloke:  are you around
[15:15] <Odd_Bloke> sra: How are you deploying them?
[15:16] <sra> Using openstack-base bundle
[15:16] <sra> from juju-gui
[15:16] <Odd_Bloke> sra: Right, but what substrate?  EC2?  lxd?
[15:17] <sra> Odd_Bloke: lxc
[15:18] <Odd_Bloke> sra: OK, so you should be seeing that machine under a lot of load ATM.
[15:18] <Odd_Bloke> sra: As I said, I don't think 6GB of RAM is going to work.
[15:18] <Odd_Bloke> OpenStack is too complex a beast to fit in 6GB of RAM.
[15:19] <sra> Odd_Bloke: So please give me better requirements for deploying OpenStack on a single VM using the Juju openstack-base bundle
[15:20] <Odd_Bloke> sra: What are you actually trying to achieve?  I've had it work on a 16GB NUC, but you wouldn't actually have wanted to use that for anything serious.
[15:23] <sra> Odd_Bloke: I want to do changes to cinder charm and have to test the changes applied or not
[15:23] <Odd_Bloke> sra: Ah, OK, I see.
[15:23] <Odd_Bloke> sra: So I haven't actually used the bundles for development of OpenStack.
[15:24] <Odd_Bloke> jamespage: Perhaps you would be able to point sra at someone (or some docs) of how to get set up to do OpenStack charm development?
[15:36] <lazyPower> sra - bare minimum you will need 12GB of ram on that unit
[15:36] <lazyPower> sra - and likely 4+ cores if you expect it to work with any reasonable efficiency
[15:36] <lazyPower> sra - additionally, if you're seeing lxd units in 'pending', can you do me a favor? run lxc list and see if the juju templates have been created. they should be listed clearly: with the phrase juju and xenial in the image name
[15:38] <sra> my base VM has ubuntu 14.04 OS, will it work?
[15:38] <lazyPower> sra - i highly recommend you move to xenial so you can use the latest bits for lxd
[15:39] <Odd_Bloke> lazyPower: Juju<2 would be using lxc not lxd, right?
[15:39] <lazyPower> Odd_Bloke - i havent tried juju2 on trusty in quite some time
[15:40] <lazyPower> so, for completeness sake, i recommend xenial
[15:40] <lazyPower> better to have them on a series thats got more eyes on it, know what i mean? :)
[15:40] <Odd_Bloke> lazyPower: Right, but if sra is on trusty then I wonder if they are using Juju 1.x (and therefore lxc), rather than Juju 2 (and therefore lxd). :)
[15:40] <lazyPower> i assume that to be the case
[15:41] <lazyPower> sra can you confirm? ^
[15:42] <sra> lazyPower: yes
[15:43] <sra> i am using ubuntu 14.04
[15:43] <Odd_Bloke> sra: Which version of juju are you using?
[15:43] <lazyPower> jamespage thedac wolsen - any known blockers on using juju 1.25 with lxc for openstack-base bundle deployments?
[15:43] <sra> 1.25.6-trusty-amd64
[15:43] <jamespage> lazyPower, no, in fact that's what we verify on still
[15:43] <lazyPower> ok
[15:43] <lazyPower> sra - no need to upgrade according to the potentate of our charms :)
[15:43] <jamespage> oh wait - its 16.04 based, not 14.04 based
[15:44] <lazyPower> ah
[15:44] <lazyPower> welp
[15:44] <lazyPower> perhaps upgrade and still install juju-1
[15:44] <jamespage> sra, openstack-base is not deployable in a single-vm
[15:45] <jamespage> its very much designed to be deployed on multiple servers using MAAS
[15:45] <jamespage> https://jujucharms.com/openstack-base/
[15:45] <jamespage> README has the details for the requirements
[15:45] <jamespage> if you want todo an all-in-one; https://github.com/openstack-charmers/openstack-on-lxd is your best route
[15:46] <sra> jamespage: can i deploy openstack by dragging individual components from juju gui in ubuntu14.04
[15:47] <jamespage> sra, well you can, but it's a lot of clicking
[15:47] <jamespage> a bundle is a much better option
[15:47] <sra> jamespage: for bundle we need ubuntu16.04?
[15:48] <jamespage> sra, the latest openstack-base will deploy a 16.04 based openstack cloud
[15:48] <jamespage> sra, openstack-on-lxd requires a 16.04 host for the deployment
[15:48] <jamespage> so yeah I guess it does - sorry
[15:52] <sra> I want a basic openstack deployment for modifying cinder and testing the changes using juju. So for this, what do I have to do for an OpenStack deployment on ubuntu 14.04?
[15:54] <sra> jamespage: are you around
[15:54] <jamespage> sra, I am
[15:54] <sra> jamespage: I want a basic openstack deployment for modifying cinder and testing the changes using juju. So for this, what do I have to do for an OpenStack deployment on ubuntu 14.04?
[15:55] <jamespage> sra, I saw :)
[15:55] <jamespage> well the charms will support 14.04, its just that the bundles we publish are all baselined on 16.04
[15:56] <jamespage> sra, https://code.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk
[15:56] <jamespage> has all of the bundles that most of the openstack-charmers team use for development of charms; they deploy sparse (not using LXD/LXC containers) and are designed to be deployed on top of a cloud; we happen to use OpenStack as the base cloud as well.
[15:57] <jamespage> if you propose a change against one of the openstack charms, its the same cloud that gets used to verify the changes...
[15:57] <jamespage> http://docs.openstack.org/developer/charm-guide/ might be useful as a reference as well
[16:08] <cory_fu> kjackal, arosales: Can you remind me why I needed to email the Bigtop list about ppc64 artifacts again?  The apache-bigtop-base layer already has a repo configuration listed for ppc64el (https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/layer.yaml#L42)
[16:10] <kjackal> cory_fu: If I remember correctly, when we were writing the ticket we couldn't find the actual *.deb packages.
[16:10] <kjackal> cory_fu: arosales I can try to deploy something on power and see what happens if you grant me access to any such machine
[16:13] <arosales> cory_fu: as I recall it the only .debs were xenial and weren't in the latest bigtop release, perhaps that has changed
[16:16] <arosales> cory_fu: if you have a xenial ppc64el .deb we can use from bigtop today then no need to email
[16:16] <cory_fu> I'm not sure.  kwmonroe: Did you test this at one point?
[16:17] <cory_fu> arosales: Do we not have access to Power machines anymore?
[16:17] <arosales> cory_fu: we do
[16:17] <cory_fu> arosales: siteox?  stilson?
[16:21] <kwmonroe> cory_fu: sorry, i can't seem to recall the need for a ppc artifact email.  perhaps it was just to verify where we should be pulling ppc debs from, but you already know where to get them for vivid and xenial.
[16:22] <kjackal> petevg: I got this error on the namenode: "dpkg-query: package 'openjdk-8-jdk' is not installed" looking into it
[16:24] <arosales> cory_fu: stilson
[16:28] <petevg> kjackal: I thought that I had tested things in trusty, but I may have run my Zookeeper tests on xenial.
[16:29] <petevg> I was just playing w/ stuff on a vagrant vm, and it looks like jdk-8 isn't in trusty by default.
[16:30] <kjackal> petevg: let me understand something. In case we skip openjdk, what layer should deploy java?
[16:31] <kjackal> The base layer?
[16:31] <petevg> kjackal: apache-bigtop-base should install it.
[16:31] <petevg> kjackal: if you just got rid of the relations, you'd need to make sure that all the charms without the relation were built on top of the updated apache-bigtop-base layer.
[16:32] <jcastro> magicaltrout: https://docs.google.com/spreadsheets/d/1czOlxejWRkE5tHnX8c04Xo5ZhxVe5auiDoCBqR4mN90/edit
[16:36] <kjackal> petevg: another question
[16:37] <kjackal> we set the default value for the bigtop_jdk config param to openjdk 8
[16:38] <petevg> kjackal: ?
[16:38] <kjackal> in the bigtop base layer we ask bigtop to install the jdk by doing this: 'bigtop::jdk_preinstalled': not bigtop_jdk
[16:38] <bdx> good monday morning all
[16:39] <petevg> kjackal: yes. So ...
[16:39] <petevg> (Good morning, bdx)
[16:39] <bdx> I've an implementation question if anyone wants to chime in
[16:39] <kjackal> when does this "not bigtop_jdk" evaluate to false?
[16:39] <petevg> kjackal: Bigtop will install openjdk if jdk_preinstalled is *not* true.
[16:40] <kjackal> sorry, when does this "not bigtop_jdk" evaluate to true?
[16:40] <petevg> kjackal: When we have no value set there. When we override the value in options with an empty string.
[16:41] <petevg> By default, it should evaluate to False, which means that we do want Bigtop to install the version of jdk we specify in the config.
[16:41] <petevg> I don't like the backwards logic a whole lot, but that is how it kind of needs to work.
[16:41] <petevg> bdx: I will chime in on stuff -- don't mind kjackal and I working through some other stuff at the same time :-)
[16:42] <cory_fu> bdx: Quest away
[16:42] <kjackal> petevg: yes, agreed. Let me try to find when we empty the self.options.get('bigtop_jdk')
[16:43] <bdx> I have 3 private applications, 2 of which are rails apps, 1 is a php app. Each app has its own stack of supporting micro services (e.g. redis, postgres, es, resqueue, etc.), averaging 5 extra supporting services (or 5 extra instances) per app
[16:43] <bdx> all three apps talk to each other, and all 3 apps have been charmed up
[16:44] <kjackal> petevg: or is this (setting the bigtop_jdk to '') expected to be a deploy-time decision?
[16:44] <petevg> kjackal: no. It is an implementation time decision.
[16:45] <kjackal> petevg: by implementation you mean?
[16:45] <petevg> kjackal: basically, with this change, by default, your bigtop based charms will not require a relation to openjdk, per cory_fu's request. You can override that in a charm by overriding the option in that charm's layer.yaml.
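A sketch of the override petevg describes, in the consuming charm's layer.yaml (the option name is taken from this discussion; the exact key layout follows standard layer options and is an assumption):

```yaml
includes: ['layer:apache-bigtop-base']
options:
  apache-bigtop-base:
    # '' makes "not bigtop_jdk" True, so bigtop::jdk_preinstalled is set:
    # Bigtop skips the JDK install and the charm needs a relation to openjdk
    bigtop_jdk: ''
```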
[16:45] <bdx> I have been experimenting with service placement; deploying everything to lxd minus the database(s) - which works well for my use case for these apps
[16:45] <bdx> heres the crux
[16:46] <kjackal> petevg: at build time?
[16:46] <kjackal> petevg: I see
[16:46] <bdx> petevg: s/implementation/orchestration/
[16:47] <petevg> bdx: got it. thx for the correction :-)
[16:47] <bdx> we currently deploy everything to aws, and use opsworks/docker to get the apps to the containers
[16:48] <magicaltrout> yeah thanks jcastro i'll blast around an email
[16:50] <bdx> I have revised these apps to be juju deployed, but am having trouble determining how to orchestrate them: how to get my apps similarly deployed to lxd containers at a provider agnostic level
[16:52] <bdx> this set of 3 apps needs to be deployed to rackspace and aws, and soon an openstack cloud, per customer requirements
[16:52] <bdx> so, basically I feel like I've worked myself into a rat hole
[16:54] <bdx> I've charmed up our apps, but have no way to orchestrate with containers using Juju :-(
[16:54] <bdx> I've plans to make use of lxd-openstack
[16:55] <bdx> but that doesn't help when I have to do a KPI comparison for the apps being juju deployed vs. non-juju deployed
[16:55] <jcastro> I'd write it up and send it to the list
[16:55] <jcastro> see what other people are doing
[16:55] <bdx> jcastro: sure thing
[16:58] <bdx> petevg: do you see where this is going at least?
[17:04] <petevg> bdx: I got pulled into a meeting. Catching up ...
[17:06] <petevg> bdx: I agree that posting it to the list makes sense. I'm not sure how to untangle the containers in containers issue :-/
[17:29] <beisner> marcoceppi, do you have an eta for the next charm-tools pypi release?  there are some fixes in master that we need to unblock osci (virtualenvs).
[17:29] <marcoceppi> beisner: 10 mins
[17:29] <beisner> wooo! marcoceppi
[17:29] <marcoceppi> it's not going to be all of master, it'll be a 2.13 patch
[17:29] <marcoceppi> 2.1.3*
[17:29] <beisner> kk thx
[17:30] <marcoceppi> #219, #204, and #248 PR included
[17:33] <marcoceppi> beisner: 2.1.4 is on pypi
[17:35] <Anita_> Can we do remote_get function during relation_departed time?
[17:36] <Anita_> sorry get_remote() during relation_departed
[17:37] <marcoceppi> Anita_: potentially, I can't remember. I know you can't during broken, but departed may still have remote data
[17:38] <Anita_> ok that means during *relation_departed* we can get the values?
[17:39] <cory_fu> bdx: Sorry, I also got caught up in something else.  When you say you want to deploy the apps to lxd containers, how is what you're looking for different than using lxc placement directives in a bundle (https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives)?
[17:40] <Anita_> marcoceppi_: I can call relation_call function during departed and get the values?
[17:40] <cory_fu> That link again without the parens messing it up: https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives
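For reference, a container placement directive in a bundle looks roughly like this (hypothetical application names and charm URLs; "lxc:" is the spelling used in the linked 2.0 docs):

```yaml
services:
  rails-app:
    charm: cs:~bdx/rails-app     # hypothetical charm URL
    num_units: 1
    to: ["lxc:0"]                # co-locate in a container on machine 0
  postgresql:
    charm: cs:postgresql
    num_units: 1
    to: ["0"]                    # database on the bare machine
machines:
  "0": {}
```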
[17:42] <Anita_> marcoceppi_: but does the producer program need to set the values for the *relation.departed* state?
[17:43] <Anita_> marcoceppi_: currently my producer application sets the values for *relation_joined/changed* states only... So i am not sure, if I can get the values as a consumer charm during *relation_departed* state
[17:45] <Anita_> marcoceppi_:can you please confirm?
[17:56] <Anita_> marcoceppi_: currently my provider charm sets the values for *relation_joined/changed* states only... So i am not sure, if I can get the values as a consumer charm during *relation_departed* state
[17:56] <Anita_> marcoceppi_:can you please confirm?
[18:09] <marcoceppi> cory_fu: ^?
[18:13] <marcoceppi> beisner: is 2.1.4 working for you?
[18:24] <cory_fu> marcoceppi: Anita_ signed off, apparently, but the answer is that you can do get_remote (aka relation-get) during -departed but you probably shouldn't do relation-set or set new states (rather, just remove them)
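cory_fu's answer, sketched as a plain hook (hypothetical relation name; the hook tools only work inside a running hook context, and get_remote in charms.reactive wraps relation-get):

```shell
#!/bin/sh
# hooks/identity-service-relation-departed   (hypothetical relation name)
# relation-get still returns the departing unit's data at this point:
admin_user=$(relation-get admin_user "$JUJU_REMOTE_UNIT")
juju-log "departing unit had admin_user=${admin_user}"
# ...but avoid relation-set or raising new states here; only remove
# states that depended on this relation.
```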
[18:24] <marcoceppi> cory_fu: figured, thanks
[18:25] <beisner> marcoceppi, a couple of manual checks look good.  thank you
[18:31] <marcoceppi> beisner: I've pushed a charm snap and charm-tools deb update
[18:33] <magicaltrout> confirm it marcoceppi !!! confirrrrrrrm it!
[18:35] <magicaltrout> lazyPower: one thing I did want to mull over with you before MesosCon
[18:35] <magicaltrout> is Logstash and DC/OS
[18:36] <magicaltrout> to offer some logging
[18:36] <magicaltrout> and I'll wire up some  nagios stuff hopefully
[18:36] <magicaltrout> to demo some relations stuff
[18:39] <bdx> cory_fu: the only provider supporting lxd is openstack
[18:39] <bdx> and thats not even juju lxd
[18:40] <bdx> as far as juju is concerned, the only provider to support lxd is maas
[18:52] <lazyPower> magicaltrout - ack, when is mesoscon?
[18:55] <cory_fu> bdx: Is that true?  I thought most clouds supported lxc container placement.  Perhaps I'm still not understanding what you mean by "support lxd"
[19:05] <magicaltrout> 31st lazyPower ;)
[19:05] <bdx> cory_fu: try placing a lxd on an aws instance
[19:06] <bdx> :-(
[19:06] <cory_fu> bdx: http://pastebin.ubuntu.com/23079266/
[19:06] <cory_fu> That's lxc and not lxd, though.  I think we may be talking about different things
[19:36] <lazyPower> magicaltrout - ack, we have a bit of wiggle room then. let me finish up this weeks demo prep and i can context switch over to getting you hooked up with the elastic stack
[19:43] <magicaltrout> cool lazyPower i should have a few spare evenings this week to sort a bunch of the backlog out
[19:58] <beisner> marcoceppi, fleet of :boats: - thanks again!
[20:14] <lazyPower> magicaltrout \oo/,  rock on man
[20:24] <magicaltrout> not really
[20:24] <magicaltrout> i'm wiring up a CAS server for web app authentication
[20:24] <magicaltrout> its very tedious :O
[20:25] <ahasenack> hi, have you guys seen this error in a reactive charm? http://pastebin.ubuntu.com/23079428/
[20:25] <ahasenack> just wondering if it's a bug in how the charm is using that layer, or in the layer itself
[20:26] <ahasenack> I filed a bug against the postgresql charm for now
[20:33] <lazyPower> ahasenack - i've seen that when i've rebuilt a charm using local layers, and i didn't keep my clone in sync with whats upstream
[20:34] <lazyPower> namely, it didn't pull in a new interface it expected to have
[20:34] <ahasenack> I see
[20:35] <ahasenack> yeah, looks like a "bzr add" was forgotten or something
[20:35] <lazyPower> ahasenack - what i suggest is peek at the interface archive, give the charm a build locally using the following switches:  `charm build -r --no-local-layers` and see if that interface pops up in the assembled charm
[20:36] <lazyPower> the archive peek is to verify the interface exists and implements the missing class
[22:00] <marcoceppi> beisner: good, because it :boat:'d a while ago