kjackalHello Juju World!06:00
=== gnuoy` is now known as gnuoy
=== frankban|afk is now known as frankban
magicaltrouthttps://ibin.co/2sQKsBt5MB9f.jpg blind IRC for Lasek people08:30
magicaltroutofficially the largest IRC client in the world I reckon08:30
=== ant_ is now known as Guest67209
gnuoyHi, I'm using Juju 2.0-beta15-0ubuntu1~16.04.1~juju1 with the openstack provider. There are multiple networks defined so I'm using "--config network=<network UUID>" when bootstrapping. However, I don't see a way to set this as the default when deploying applications in the model. "juju model-defaults" does not list 'network' as a configurable option. Trying to set it results in 'key "network" is not defined in the known model configuration'08:41
gnuoyI'll report a bug unless I'm doing something obviously wrong?08:42
gnuoyok, so I can set the default when creating the model rather than adding the default afterwards, that's ok. I see network in model-defaults now too. Seems like a bug that you can't add a value to the model's defaults after creation08:49
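For reference, gnuoy's working path (set the network when the model is created) sketches out like this on a Juju 2.0-era client; the model name is illustrative and the UUID placeholder is left as in the discussion:

```
openstack network list                                    # find the network UUID
juju add-model my-model --config network=<network UUID>
juju model-config network                                 # confirm the default took
```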
BlackDexHello there. Is it possible for lxc deployed charms to have a static ip instead of dhcp? This is in case the dhcp is offline and you need to reboot/restart stuff?08:59
magicaltroutmarcoceppi: I think the charm dev aws stuff has run out of room :'(09:06
magicaltroutone of those days when I could do with a MAAS server under my desk09:06
magicaltroutokay I have no clue09:17
magicaltroutevery machine in aws and lxd ends up in error state09:17
magicaltroutkjackal: where can I look to find some logs regarding why my machines are failing to bootstrap?09:27
magicaltrouts/bootstap/get allocated09:27
kjackallet's see..09:27
kjackalmagicaltrout: which provider are you using?09:28
magicaltroutI've tried AWS, now trying LXD09:28
magicaltroutI get bootstrapped09:28
magicaltroutbut if I try and deploy something it just lands in an error state09:28
kjackalJuju keeps logs for all machines under /var/log/juju/all-machines.log (I think)09:28
magicaltroutI thought it was because the AWS for Charm Devs might have been full up09:28
magicaltrouton the bootstrap node kjackal ?09:28
kjackalyes, on the coordinator09:29
magicaltrouthow do I ssh to that these days?09:29
kjackalwait, wait09:29
kjackalon your local client not the coordinator09:29
kjackaldo you have anything under /var/log/juju ?09:29
magicaltroutI don't have a /var/log/juju to start with09:30
magicaltrouthold on09:30
magicaltroutformatting status to json09:30
magicaltroutgives me an error09:30
magicaltroutthats a bit $hit09:31
magicaltroutsurely those errors should be more obvious09:31
kjackalwhat juju version are you using?09:31
magicaltroutI was using 9 earlier when my problems began09:32
kjackalls -ld  /var/log/juju*09:32
magicaltroutIt worked yesterday on AWS fine09:32
magicaltrouttoday it doesn't like me09:32
magicaltroutno such file or directory09:32
kjackalnothing under var log!09:33
magicaltrouti have no logs09:33
magicaltroutI'm old enough to know where to look by default ;)09:33
magicaltroutthats on my client09:34
magicaltrouton the controller I'm sure there are logs09:34
kjackal:) I am sorry, I didn't mean any offence09:34
magicaltrouthehe I'm only messing09:34
magicaltrouti looked there first, clearly on the units etc they exist when stuff breaks09:34
magicaltroutbut it doesn't seem to aggregate on the client09:34
magicaltroutanyway, it claimed there were missing tools, so I've rebootstrapped with --upload-tools09:35
magicaltroutsee if it unbreaks it09:35
magicaltroutI don't understand why the tabular status doesn't tell you the error message though09:35
magicaltroutthat seems a bit silly09:35
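For what it's worth, the detail missing from the tabular status is available on the client with Juju 2.x commands like these (a sketch; both need a live controller):

```
juju status --format=yaml    # full per-machine/unit error messages
juju debug-log --replay      # aggregated logs streamed from the controller
```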
kjackalI am still on Juju 1.25. The lxd provider on Juju 2.0 had a bug that must have been fixed by now, but I haven't tested it yet09:36
magicaltroutyou going to pasadena this year kjackal ?09:36
kjackalYeap, will we see you there?09:37
magicaltroutI've been using betaX and LXD for 6 months, normally pretty stable09:37
magicaltrouthope so kjackal else someones wasted a lot of money on flights for me09:37
magicaltroutyeah, I submitted a couple of talk proposals09:37
magicaltroutso hopefully I'm demoing something09:37
magicaltroutI worked most of yesterday on finishing up DC/OS for Mesosphere EU on the 31st09:38
kjackal:) The problem we had with juju 2.0 and lxd was that the machine could not immediately resolve its hostname. We had to wait or reboot the container. have you noticed any similar behavior?09:38
magicaltroutah yeah09:38
magicaltroutbut I get that in AWS as well now09:38
magicaltroutI saw that yesterday09:38
kjackalmagicaltrout: Well done on Mesosphere EU!09:40
magicaltroutits pretty crazy this month09:40
magicaltroutI'm doing Amsterdam on the 31st for Mesos, London on the 1st for Pentaho, then Pasadena for Charmers summit09:41
magicaltroutI've got 2 or 3 more talk submissions to write for this year as well09:41
magicaltroutBig Data Spain and ApacheCon EU09:41
magicaltroutoh and I'm doing Pentaho Europe Community Meetup in November09:41
kjackalBigdataWeek London is too close for you? :)09:42
magicaltroutyeah thats a Bigstep thing, Meteorite runs its servers on bigstep but they've lagged on Ubuntu support which is a pain for development09:43
magicaltroutthey told me the other week they were getting Xenial tested so hopefully I can turn our Bigstep servers into Juju managed DC/OS clusters soon09:44
kjackalBut overall your schedule is crazy!!!09:44
magicaltroutthats pretty normal ;)09:44
magicaltroutI blame jcastro he said "submit some talks on Juju and I'll help you get to them"09:45
magicaltroutso I did09:45
magicaltroutand they got accepted ;)09:45
magicaltrouthmm the BDW CFP is still open09:46
magicaltroutmaybe I shall submit a paper ;)09:46
kjackalOh, I have a challenge for you! http://bigdata.ieee.org/ - it's for next year!09:47
magicaltroutI'll think of something09:48
magicaltroutI'm happy to talk at conferences though so if people have good ones for a non canonical employee to speak at I'm happy to pitch a talk09:49
magicaltroutI'm also giving a presentation to the JPL team when i'm in Pasadena next month09:51
magicaltroutso I plan to show off the DC/OS, Kubernetes stuff as I'm getting them all involved in docker stuff09:51
magicaltroutbut currently they deploy to single hosts09:51
magicaltroutand I'm trying to get them down to the summit, but I need jcastro to publish the schedule so I can tempt them09:52
magicaltroutI get bored when I just work on one thing09:58
magicaltroutso doing a bunch of different stuff and going to conferences at least keeps my schedule varied09:58
andrey-mpjamespage: Hi! I've asked a question in the https://review.openstack.org/#/c/348336/6/metadata.yaml - how to connect glance-charm to cinder-charm in case of implementing one additional relation with subordinate property when no other configuration is needed.10:16
jamespageandrey-mp, hey!10:17
jamespageandrey-mp, sorry for the silence - I've been away for the last week or so...10:17
jamespageandrey-mp, could you join #openstack-charms please?10:17
bbaqarhi guys. i need to add an interface to the lxc created by juju and connect it to an underlying bridge10:44
bbaqarIf i add "lxc.network.type = veth lxc.network.link = br2 lxc.network.flags = up"  to  /var/lib/lxc/juju-machine-8-lxc-8/config10:47
bbaqarwould that do the trick10:47
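Written out one setting per line, the config bbaqar quotes would look like this in /var/lib/lxc/&lt;container&gt;/config (LXC 1.x syntax; br2 is his existing bridge, and the container typically needs a restart to pick it up):

```
lxc.network.type = veth
lxc.network.link = br2
lxc.network.flags = up
```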
magicaltroutrick_h_: can you or somebody explain multi series charm publishing please :)10:53
bbaqarguys i configured cinder using its juju charms correctly .. i then removed the charms because i wanted to deploy the service somewhere else .. now when i am deploying the charms again keystone is not passing the required relation data to cinder12:05
bbaqarMissing required data: admin_user service_port admin_tenant_name auth_port admin_password auth_host service_host12:05
rick_h_magicaltrout: sure thing12:11
rick_h_magicaltrout: so the only trick is that you claim the charm supports multiple series and when you publish the charmstore stores it under those different urls12:12
magicaltroutso, I tell it it supports Wily and Xenial, but then when I "push" I push the Wily one12:12
magicaltroutis that correct? or am I going wrong somewhere?12:13
magicaltroutI suspect I've ballsed up somewhere12:13
magicaltroutcharm push . cs:~spicule/dcos-master12:13
rick_h_magicaltrout: leave the series out of the push12:15
rick_h_magicaltrout: yes, just push like that, without any series and let the charmstore figure out where to put it12:15
magicaltroutso then if I want to push xenial12:15
magicaltroutI just go to the xenial directory12:15
magicaltroutand do the same?12:15
rick_h_magicaltrout: if it's a multi-series charm you just push it once. You declare in the metadata.yaml which series you support12:15
magicaltroutor does it happen in one push?12:15
magicaltroutokay, cool12:16
rick_h_magicaltrout: and then when you push, the charmstore reads that file and puts it in all the right places12:16
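rick_h_'s flow, sketched: declare every series once in metadata.yaml and push a single time from the charm root (the series names here are just the ones magicaltrout mentions):

```yaml
# metadata.yaml (excerpt)
name: dcos-master
series:
  - xenial
  - wily
```

A single `charm push . cs:~spicule/dcos-master` then publishes it under both series URLs; no per-series directories or repeated pushes.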
godleonHi all, I got the following error message when typing "juju --debug status"12:31
godleonis there any way to fix it? I'd appreciate any hint or help. Thanks!12:32
godleonIt seems the juju bootstrap node is gone forever....?12:32
jcastromagicaltrout: heya, did the jpl guys confirm if they're coming to the summit? If so can you have them register so I have the food count right?12:33
magicaltroutjcastro: I don't know yet, have you got a schedule together (even something half done)? I want to blast around an email today or tomorrow but I'd like some content for the sales pitch, not just "turn up it'll be cool" ;)12:40
jcastroI'll have a schedule for you today12:40
jcastroI just drafted it on friday12:41
beisnerhi marcoceppi, tvansteenburgh - do you have an eta for the next charm-tools pypi release?  there are some fixes in master that we need to unblock osci (virtualenvs).12:45
tvansteenburghbeisner: i defer to marcoceppi on that one13:11
=== Guest37793 is now known as zeus
lazyPowermagicaltrout - i'm here whenever you're ready to talk dee cee oh ess14:05
magicaltrouthmm sooooo lazyPower14:06
magicaltroutI tweaked some stuff14:06
magicaltroutsome things just plain don't work yet, but the main subsystems work14:06
lazyPowerIts a start right? :)14:06
magicaltroutI did find it annoying that they've dropped K8S support for now =/14:06
lazyPoweroh really?14:06
lazyPowerthats interesting, there are still mesosphere people attending their sig groups. I wonder what brought that on14:07
magicaltroutyeah the more recent versions of DC/OS Mesos have some incompatibilities14:07
magicaltroutbut they are trying to garner support on the GH project14:07
lazyPowermust be pending additional work w/ the scheduler14:07
magicaltroutso its not gone, its just missing stuff14:07
magicaltroutyou can spin up dcos-master & dcos-agents14:07
magicaltroutthey should both be in the CS14:07
magicaltroutI lie14:08
magicaltroutthe agents aren't14:08
magicaltrout2 mins I'll push them14:08
magicaltroutdon't try it in lxd you won't get very far14:08
lazyPoweri'm painfully aware of that14:08
* lazyPower points at the very expensive k8s bundle that wont properly run in lxd today14:08
magicaltroutokay agents should be alive14:09
magicaltroutyou have to have 1, 3 or 5 masters14:10
magicaltroutand any number of agents14:10
lazyPowermagicaltrout - no bundle?14:10
magicaltroutnot yet I have a day job :P14:10
lazyPowernot even a minimal "use this to kick the tires" formation?14:10
magicaltroutthe most minimal is 1 master - 1 agent and a relation14:11
magicaltrouteven you can manage that ;)14:11
lazyPowerYou're giving me a lot of credit14:12
magicaltroutanyway, I have a bunch of stuff on my backlog, like, I ripped out a master today to see what happened14:12
magicaltroutit locked me out :)14:12
lazyPowerwe face similar issues with the k8s bundle14:12
lazyPowernuke a master and you lose PKI14:13
magicaltroutthat said, if you have masters fail in DC/OS proper, you can't add new ones, so I reckon I'm on feature parity there ;)14:13
magicaltroutthe fact you can add-unit on the masters is already something you can't do in DC/OS officially14:13
lazyPowermagicaltrout https://gist.github.com/chuckbutler/ae49b395648a07222b149978c27c540214:14
lazyPowermind pushing that up @ your namespace? :)14:14
lazyPowerfeel free to remix the machine constraints14:15
lazyPowerthat might be the slowest dc/os cluster you ever deploy in your life14:15
lazyPowerseconded only by.... rpiv1's14:15
* lazyPower gives it a whirl14:16
lazyPowerhere goes something14:16
lazyPowermagicaltrout - are these in a github repo somewhere?14:25
magicaltroutapologies in advance for the crazy hacks and bad code quality, I've not had chance to tidy it up yet ;)14:27
lazyPowerno stress amigo14:30
lazyPowerjust filing bugs as i find them so we have somewhere to start :)14:30
magicaltroutI've not tried it outside of wily by the way, so your xenial test is the first blast at something a bit more modern14:31
lazyPowerthats an interesting predicament14:31
magicaltroutwhen i started using it Xenial images weren't in EC2 and Trusty doesn't work14:32
magicaltroutso technically it should "probably" work :)14:32
magicaltroutit was an upstart/systemd thing14:33
lazyPowerlooks like it needs a bump for pip out the gate14:36
srahi all, I am trying to deploy openstack using the juju "openstack-base" bundle; all services are in pending state15:03
srahow much time will it take to deploy openstack?15:03
sraplease someone help15:03
rick_h_sra: some 30+ minutes I think15:04
Odd_BlokeDepends on your substrate, as well.15:04
srarick_h: I am deploying juju openstack on a VM which has 6 GB RAM and 80 GB disk15:05
srawill it cause any issues?15:05
rick_h_sra: 6gb of ram is very light imo15:06
Odd_BlokeWhen I've deployed it using lxd, I've seen it sit at 8-10GB of RAM.15:06
sraOdd_Bloke: you deployed on VM?15:06
Odd_BlokeThis was lxd containers on hardware.15:07
sraOdd_Bloke: can we deploy on a VM?15:07
Odd_Blokesra: Are you saying you're trying to deploy it on to one VM?  Or you want to deploy on to multiple VMs with those specifications?15:07
sraOdd_Bloke: trying to deploy it on single VM15:08
Odd_Blokesra: With 6GB of RAM, you aren't going to get something that works very well.15:09
Odd_BlokeIf it works at all.15:09
sraOdd_Bloke: I started my deployment 1 hour back15:13
srastill all the services are showing agent-state "pending"15:14
sraOdd_Bloke:  are you around15:15
Odd_Blokesra: How are you deploying them?15:15
sraUsing openstack-base bundle15:16
srafrom juju-gui15:16
Odd_Blokesra: Right, but what substrate?  EC2?  lxd?15:16
sraOdd_Bloke: lxc15:17
Odd_Blokesra: OK, so you should be seeing that machine under a lot of load ATM.15:18
Odd_Blokesra: As I said, I don't think 6GB of RAM is going to work.15:18
Odd_BlokeOpenStack is too complex a beast to fit in 6GB of RAM.15:18
sraOdd_Bloke: So what are the recommended requirements for deploying OpenStack on a single VM using the Juju openstack-base bundle?15:19
Odd_Blokesra: What are you actually trying to achieve?  I've had it work on a 16GB NUC, but you wouldn't actually have wanted to use that for anything serious.15:20
sraOdd_Bloke: I want to do changes to cinder charm and have to test the changes applied or not15:23
Odd_Blokesra: Ah, OK, I see.15:23
Odd_Blokesra: So I haven't actually used the bundles for development of OpenStack.15:23
Odd_Blokejamespage: Perhaps you would be able to point sra at someone (or some docs) of how to get set up to do OpenStack charm development?15:24
lazyPowersra - bare minimum you will need 12GB of ram on that unit15:36
lazyPowersra - and likely 4+ cores if you expect it to work with any reasonable efficiency15:36
lazyPowersra - additionally, if you're seeing lxd units in 'pending', can you do me a favor? run lxc list and see if the juju templates have been created. they should be listed clearly: with the phrase juju and xenial in the image name15:36
sramy Base VM has ubuntu14.04 OS will it work?15:38
lazyPowersra - i highly recommend you move to xenial so you can use the latest bits for lxd15:38
Odd_BlokelazyPower: Juju<2 would be using lxc not lxd, right?15:39
lazyPowerOdd_Bloke - i havent tried juju2 on trusty in quite some time15:39
lazyPowerso, for completeness sake, i recommend xenial15:40
lazyPowerbetter to have them on a series thats got more eyes on it, know what i mean? :)15:40
Odd_BlokelazyPower: Right, but if sra is on trusty then I wonder if they are using Juju 1.x (and therefore lxc), rather than Juju 2 (and therefore lxd). :)15:40
lazyPoweri assume that to be the case15:40
lazyPowersra can you confirm? ^15:41
sralazyPower: yes15:42
srai am using ubuntu 14.0415:43
Odd_Blokesra: Which version of juju are you using?15:43
lazyPowerjamespage thedac wolsen - any known blockers on using juju 1.25 with lxc for openstack-base bundle deployments?15:43
jamespagelazyPower, no, in fact that's what we verify on still15:43
lazyPowersra - no need to upgrade according to the potentate of our charms :)15:43
jamespageoh wait - its 16.04 based, not 14.04 based15:43
lazyPowerperhaps upgrade and still install juju-115:44
jamespagesra, openstack-base is not deployable in a single-vm15:44
jamespageits very much designed to be deployed on multiple servers using MAAS15:45
jamespageREADME has the details for the requirements15:45
jamespageif you want todo an all-in-one; https://github.com/openstack-charmers/openstack-on-lxd is your best route15:45
srajamespage: can i deploy openstack by dragging individual components from the juju gui on ubuntu 14.04?15:46
jamespagesra, well you can but it's a lot of clicking15:47
jamespagea bundle is a much better option15:47
srajamespage: for bundle we need ubuntu16.04?15:47
jamespagesra, the latest openstack-base will deploy a 16.04 based openstack cloud15:48
jamespagesra, openstack-on-lxd requires a 16.04 host for the deployment15:48
jamespageso yeah I guess it does - sorry15:48
sraI want a basic openstack deployment for modifying cinder and testing the changes using juju. So for this, what do I have to do to deploy OpenStack on ubuntu 14.04?15:52
srajamespage: are you around15:54
jamespagesra, I am15:54
srajamespage: I want a basic openstack deployment for modifying cinder and testing the changes using juju. So for this, what do I have to do to deploy OpenStack on ubuntu 14.04?15:54
jamespagesra, I saw :)15:55
jamespagewell the charms will support 14.04, its just that the bundles we publish are all baselined on 16.0415:55
jamespagesra, https://code.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk15:56
jamespagehas all of the bundles that most of the openstack-charmers team use for development of charms; they deploy sparse (not using LXD/LXC containers) and are designed to be deployed on top of a cloud; we happen to use OpenStack as the base cloud as well.15:56
jamespageif you propose a change against one of the openstack charms, its the same cloud that gets used to verify the changes...15:57
jamespagehttp://docs.openstack.org/developer/charm-guide/ might be useful as a reference as well15:57
cory_fukjackal, arosales: Can you remind me why I needed to email the Bigtop list about ppc64 artifacts again?  The apache-bigtop-base layer already has a repo configuration listed for ppc64el (https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/layer.yaml#L42)16:08
kjackalcory_fu: If I remember correctly when we were writing the ticket we couldn't find the actual *.deb packages.16:10
kjackalcory_fu: arosales I can try to deploy something on power and see what happens if you grant me access to any such machine16:10
arosalescory_fu: as I recall it the only .debs were xenial and weren't in the latest bigtop release, perhaps that has changed16:13
arosalescory_fu: if you have a xenial ppc64el .deb we can use from bigtop today then no need to email16:16
cory_fuI'm not sure.  kwmonroe: Did you test this at one point?16:16
cory_fuarosales: Do we not have access to Power machines anymore?16:17
arosalescory_fu: we do16:17
cory_fuarosales: siteox?  stilson?16:17
kwmonroecory_fu: sorry, i can't seem to recall the need for a ppc artifact email.  perhaps it was just to verify where we should be pulling ppc debs from, but you already know where to get them for vivid and xenial.16:21
kjackalpetevg: I got this error on tha nemenode: "dpkg-query: package 'openjdk-8-jdk' is not installed" looking into it16:22
arosalescory_fu: stilson16:24
petevgkjackal: I thought that I had tested things in trusty, but I may have run my Zookeeper tests on xenial.16:28
petevgI was just playing w/ stuff on a vagrant vm, and it looks like jdk-8 isn't in trusty by default.16:29
kjackalpetevg: let me understand something. In case we skip openjdk, what layer should deploy java?16:30
kjackalThe base layer?16:31
petevgkjackal: apache-bigtop-base should install it.16:31
petevgkjackal: if you just got rid of the relations, you'd need to make sure that all the charms without the relation were built on top of the updated apache-bigtop-base layer.16:31
jcastromagicaltrout: https://docs.google.com/spreadsheets/d/1czOlxejWRkE5tHnX8c04Xo5ZhxVe5auiDoCBqR4mN90/edit16:32
kjackalpetevg: another question16:36
kjackalwe set the default value for the bigtop_jdk config param to openjdk 816:37
petevgkjackal: ?16:38
kjackalin the bigtop base layer we ask bigtop to install the jdk by doing this: 'bigtop::jdk_preinstalled': not bigtop_jdk16:38
bdxgood monday morning all16:38
petevgkjackal: yes. So ...16:39
petevg(Good morning, bdx)16:39
bdxI've an implementation question if anyone wants to chime in16:39
kjackalwhen does this "not bigtop_jdk" evaluate to false?16:39
petevgkjackal: Bigtop will install openjdk if jdk_preinstalled is *not* true.16:39
kjackalsorry, when does this "not bigtop_jdk" evaluate to true?16:40
petevgkjackal: When we have no value set there, i.e. when we override the value in options with an empty string.16:40
petevgBy default, it should evaluate to False, which means that we do want Bigtop to install the version of jdk we specify in the config.16:41
petevgI don't like the backwards logic a whole lot, but that is how it kind of needs to work.16:41
petevgbdx: I will chime in on stuff -- don't mind kjackal and I working through some other stuff at the same time :-)16:41
cory_fubdx: Quest away16:42
kjackalpetevg: yes, agreed. Let me try to find when we empty the self.options.get('bigtop_jdk')16:42
bdxI have 3 private applications, 2 of which are rails apps and 1 is a php app. Each app has its own stack of supporting micro services (e.g. redis, postgres, es, resqueue, etc.), averaging 5 extra supporting services (or 5 extra instances) per app16:43
bdxall three apps talk to each other, and all 3 apps have been charmed up16:43
kjackalpetevg: or is this (setting the bigtop_jdk to '') expected to be a deploy-time decision?16:44
petevgkjackal: no. It is an implementation time decision.16:44
kjackalpetevg: by implementation you mean?16:45
petevgkjackal: basically, with this change, by default, your bigtop based charms will not require a relation to openjdk, per cory_fu's request. You can override that in a charm by overriding the option in that charm's layer.yaml.16:45
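petevg's override, sketched as a downstream charm's layer.yaml (assuming the usual layer-option namespacing; the empty string is the trigger discussed above):

```yaml
includes:
  - 'layer:apache-bigtop-base'
options:
  apache-bigtop-base:
    bigtop_jdk: ''   # empty => 'not bigtop_jdk' is true => jdk_preinstalled: true, so Bigtop skips installing openjdk
```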
bdxI have been experimenting with service placement; deploying everying to lxd minus the database(s) - which works well for my use case for these apps16:45
bdxheres the crux16:45
kjackalpetevg: at build time?16:46
kjackalpetevg: I see16:46
bdxpetevg: s/implementation/orchestration/16:46
petevgbdx: got it. thx for the correction :-)16:47
bdxwe currently deploy everything to aws, and use opsworks/docker to get the apps to the containers16:47
magicaltroutyeah thanks jcastro i'll blast around an email16:48
bdxI have revised these apps to be juju deployed, but am having trouble determining how to get them similarly deployed to lxd containers at a provider-agnostic level16:50
bdxthis set of 3 apps need be deployed to rackspace and aws, and soon an openstack cloud per customer requirements16:52
bdxso, basically I feel like I've worked myself into a rat hole16:52
=== frankban is now known as frankban|afk
bdxI've charmed up our apps, but have no way to orchestrate with containers using Juju :-(16:54
bdxI've plans to make use of lxd-openstack16:54
bdxbut that doesn't help when I have to do a KPI comparison for the apps being juju deployed vs. non-juju deployed16:55
jcastroI'd write it up and send it to the list16:55
jcastrosee what other people are doing16:55
bdxjcastro: sure thing16:55
bdxpetevg: do you see where this is going at least?16:58
petevgbdx: I got pulled into a meeting. Catching up ...17:04
petevgbdx: I agree that posting it to the list makes sense. I'm not sure how to untangle the containers in containers issue :-/17:06
beisnermarcoceppi, do you have an eta for the next charm-tools pypi release?  there are some fixes in master that we need to unblock osci (virtualenvs).17:29
marcoceppibeisner: 10 mins17:29
beisnerwooo! marcoceppi17:29
marcoceppiit's not going to be all of master, it'll be a 2.13 patch17:29
beisnerkk thx17:29
marcoceppi#219, #204, and #248 PR included17:30
marcoceppibeisner: 2.1.4 is on pypi17:33
Anita_Can we do remote_get function during relation_departed time?17:35
Anita_sorry get_remote() during relation_departed17:36
marcoceppiAnita_: potentially, I can't remember. I know you can't during broken, but departed may still have remote data17:37
Anita_ok that means during *relation_departed* we can get the values?17:38
cory_fubdx: Sorry, I also got caught up in something else.  When you say you want to deploy the apps to lxd containers, how is what you're looking for different than using lxc placement directives in a bundle (https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives)?17:39
Anita_marcoceppi_: I can call relation_call function during departed and get the values?17:40
cory_fuThat link again without the parens messing it up: https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives17:40
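Those placement directives look roughly like this in a Juju 2.0 bundle (app names hypothetical; on Juju 1.x the container type is lxc rather than lxd):

```yaml
services:
  myapp:
    charm: cs:xenial/myapp
    num_units: 1
    to: ["lxd:0"]    # in a container on machine 0
  postgresql:
    charm: cs:postgresql
    num_units: 1
    to: ["1"]        # directly on machine 1
machines:
  "0": {}
  "1": {}
```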
Anita_marcoceppi_:but does the producer program need to set the values for the *relation_departed* state?17:42
Anita_marcoceppi_: currently my producer application sets the values for *relation_joined/changed* states only... So i am not sure, if I can get the values as a consumer charm during *relation_departed* state17:43
Anita_marcoceppi_:can you please confirm?17:45
Anita_marcoceppi_: currently my provider charm sets the values for *relation_joined/changed* states only... So i am not sure, if I can get the values as a consumer charm during *relation_departed* state17:56
Anita_marcoceppi_:can you please confirm?17:56
marcoceppicory_fu: ^?18:09
marcoceppibeisner: is 2.1.4 working for you?18:13
cory_fumarcoceppi: Anita_ signed off, apparently, but the answer is that you can do get_remote (aka relation-get) during -departed but you probably shouldn't do relation-set or set new states (rather, just remove them)18:24
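cory_fu's rule of thumb, made concrete. The real objects come from charms.reactive; FakeRelation below is a hypothetical stand-in so the do's and don'ts are runnable outside a charm:

```python
# Sketch of what a -departed handler may safely do. FakeRelation fakes the
# relevant bits of a charms.reactive relation object (an assumption, not the
# real API surface).

class FakeRelation:
    """Mimics the pieces of a reactive relation used in a -departed hook."""
    def __init__(self, remote_data):
        self._remote = dict(remote_data)
        self.states = {'db.connected', 'db.available'}

    def get_remote(self, key, default=None):
        # relation-get: the departing unit's data is usually still readable here
        return self._remote.get(key, default)

    def remove_state(self, state):
        # Safe in -departed: remove states; avoid setting new ones or relation-set
        self.states.discard(state)


def on_db_departed(rel):
    """What a handler reacting to the departed state may safely do."""
    host = rel.get_remote('host')        # OK during -departed
    rel.remove_state('db.available')     # OK: tearing down, not setting
    return host
```

The same point holds at the hook-tool level: relation-get works in -departed, while relation-set (and raising new states) is what to avoid.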
marcoceppicory_fu: figured, thanks18:24
beisnermarcoceppi, a couple of manual checks look good.  thank you18:25
marcoceppibeisner: I've pushed a charm snap and charm-tools deb update18:31
magicaltroutconfirm it marcoceppi !!! confirrrrrrrm it!18:33
magicaltroutlazyPower: one thing I did want to mull over with you before MesosCon18:35
magicaltroutis Logstash and DC/OS18:35
magicaltroutto offer some logging18:36
magicaltroutand I'll wire up some  nagios stuff hopefully18:36
magicaltroutto demo some relations stuff18:36
bdxcory_fu: the only provider supporting lxd is openstack18:39
bdxand thats not even juju lxd18:39
bdxas far as juju is concerned, the only provider to support lxd is maas18:40
lazyPowermagicaltrout - ack, when is mesoscon?18:52
cory_fubdx: Is that true?  I thought most clouds supported lxc container placement.  Perhaps I'm still not understanding what you mean by "support lxd"18:55
magicaltrout31st lazyPower ;)19:05
bdxcory_fu: try placing a lxd on an aws instance19:05
cory_fubdx: http://pastebin.ubuntu.com/23079266/19:06
cory_fuThat's lxc and not lxd, though.  I think we may be talking about different things19:06
lazyPowermagicaltrout - ack, we have a bit of wiggle room then. let me finish up this weeks demo prep and i can context switch over to getting you hooked up with the elastic stack19:36
magicaltroutcool lazyPower i should have a few spare evenings this week to sort a bunch of the backlog out19:43
beisnermarcoceppi, fleet of :boats: - thanks again!19:58
lazyPowermagicaltrout \oo/,  rock on man20:14
magicaltroutnot really20:24
magicaltrouti'm wiring up a CAS server for web app authentication20:24
magicaltroutits very tedious :O20:24
ahasenackhi, have you guys seen this error in a reactive charm? http://pastebin.ubuntu.com/23079428/20:25
ahasenackjust wondering if it's a bug in how the charm is using that layer, or in the layer itself20:25
ahasenackI filed a bug against the postgresql charm for now20:26
lazyPowerahasenack - i've seen that when i've rebuilt a charm using local layers, and i didn't keep my clone in sync with whats upstream20:33
lazyPowernamely, it didn't pull in a new interface it expected to have20:34
ahasenackI see20:34
ahasenackyeah, looks like a "bzr add" was forgotten or something20:35
lazyPowerahasenack - what i suggest is peek at the interface archive, give the charm a build locally using the following switches:  `charm build -r --no-local-layers` and see if that interface pops up in the assembled charm20:35
lazyPowerthe archive peek is to verify the interface exists and implements the missing class20:36
marcoceppibeisner: good, because it :boat:'d a while ago22:00

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!