=== CyberJacob is now known as CyberJacob|Away | ||
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== _sEBAs_ is now known as sebas5384 | ||
sebas5384 | hey! | 01:30 |
---|---|---|
thumper | o/ | 01:31 |
sebas5384 | hey thumper o/ | 01:31 |
sebas5384 | was looking for someone to talk to about the mac osx workflow | 01:31 |
sebas5384 | the other day i was doing an experiment with vagrant, vbox and network bridges | 01:32 |
sebas5384 | configuring the lxc to use them | 01:32 |
sebas5384 | so the containers would appear directly in the host | 01:33 |
sebas5384 | and it worked! hehehe | 01:33 |
sebas5384 | when i say mac os x, i mean every vagrant workflow | 01:34 |
sebas5384 | lazyPower: hi o/ | 01:35 |
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
stub | tvansteenburgh: I've sorted lazr.authentication on pypi btw, so the scary pip options will no longer be needed. | 05:05 |
=== fuzzy_ is now known as Fuzai | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== scuttlemonkey is now known as scuttle|afk | ||
=== erkules_ is now known as erkules | ||
stub | tvansteenburgh: lazr.authentication should be correctly registered in pypi now | 12:10 |
tvansteenburgh | stub, that's great news, thanks! \o/ | 13:05 |
lazyPower | stub: incoming beer vouchers | 13:07 |
stub | Just what I need, more virtual beers | 13:07 |
lazyPower | you can exchange them for virtual pizza vouchers as well | 13:08 |
lazyPower | Do you attend any of the cloud sprints? | 13:08 |
stub | lazyPower: If someone invites me ;) | 13:09 |
stub | I'm pretty much a team of one, so only go to other people's sprints. | 13:10 |
arbrandes | jamespage, hey there! I'm trying to set up highly available OpenStack with Juju, but without MaaS. I believe I'm hitting https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1391784 ( HA failure when no IP address is bound to the VIP interface). | 13:21 |
mup | Bug #1391784: HA failure when no IP address is bound to the VIP interface <openstack> <cinder (Juju Charms Collection):In Progress> <glance (Juju Charms Collection):In Progress> <keystone (Juju Charms Collection):In Progress> <neutron-api (Juju Charms Collection):In Progress> <nova-cloud-controller | 13:21 |
mup | (Juju Charms Collection):In Progress> <openstack-dashboard (Juju Charms Collection):In Progress> <percona-cluster (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1391784> | 13:21 |
arbrandes | Is there any documentation on how to properly set up what the charms require in terms of containers and network interfaces? | 13:22 |
arbrandes | I tried following https://wiki.ubuntu.com/ServerTeam/OpenStackHA with 23 nodes, and had no luck. Ideally, there'd be a way to do this with containers on fewer nodes. | 13:23 |
arbrandes | Note: I'm trying to run this on OpenStack itself, so I have to deal with port security on the underlying "hardware". | 13:24 |
ezobn | Hi, is it possible to set constraints when juju creates a KVM machine on an existing maas-provisioned machine? I have tried the --constraints option, but it is not working ... | 13:39 |
lazyPower | ezobn: juju should hand off those constraints to MAAS and MAAS will do its best to fulfill the request | 13:41 |
lazyPower | ezobn: what were your constraints? assuming typical ones like memory=2G? | 13:41 |
ezobn | yep - based on existing tags on physical machines | 13:41 |
ezobn | but I want to set mem=8G, not just the 512M that seems to be the default | 13:42 |
ezobn | lazyPower: creating machine-22: 2014-12-03 14:38:28 INFO juju.provisioner.kvm kvm-broker.go:103 started kvm container for machineId: 22/kvm/3, juju-machine-22-kvm-3, arch=amd64 cpu-cores=1 mem=512M root-disk=8192M | 13:43 |
lazyPower | ezobn: which version of MAAS/Juju? | 13:43 |
ezobn | lazyPower: juju:1.18.4-trusty-amd64, maas:1.5.4+bzr2294-0ubuntu1.1 | 13:48 |
lazyPower | ezobn: 1 moment while i check the release notes - i'm not sure that 1.18 supports maas tagging (but it may, i'm not certain) | 13:51 |
ezobn | lazyPower: I am using tags on maas via juju, but can't use them when creating kvm VMs | 13:53 |
ezobn | lazyPower: so just wondering, is it possible to use them when creating KVM units on a physical machine? or is some other means supposed? | 13:55 |
lazyPower | ezobn: as i understand it you define the constraints when doing enlistment to maas | 13:55 |
lazyPower | ezobn: so i dont think any of those constraints will be handed off | 13:55 |
ezobn | lazyPower: those tags are working well. But I need somehow to tell the juju worker to create the VM with more memory ;-) | 13:57 |
lazyPower | ezobn: juju doesn't tell maas to actually 'create' the vm | 13:57 |
lazyPower | ezobn: juju requests a machine from maas, and maas has a pool of VMs already enlisted and available. it just returns a vm from that pool to juju | 13:57 |
ezobn | lazyPower: yes, understood. But then juju uses libvirt to create KVM units on the added machine. | 13:59 |
ezobn | lazyPower: so by default only these constraints are used: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M | 14:00 |
lazyPower | ahh so you're doing juju deploy --to 1:kvm correct? | 14:00 |
lazyPower | as an example | 14:00 |
ezobn | lazyPower: yes | 14:00 |
lazyPower | ok my mistake, i thought this was a layer above in the request | 14:01 |
lazyPower | Good question, I haven't done that. I've only used lxc with deploy --to | 14:01 |
ezobn | lazyPower: juju add-machine --constraints="root-disk=64G mem=8G" --to kvm:22 f.e. ;-) | 14:02 |
lazyPower | ezobn: bootstrapping and investigating - give me a bit to look into it | 14:05 |
ezobn | lazyPower: yes, lxc is a good option, but it all works well with KVM too ... so just options ;-) I'm trying to get the metaswitch clearwater charms working... and they have some custom kernels, as I have learnt ... | 14:06 |
lazyPower | Indeed they do, i was on the early team working with them and their solutions | 14:06 |
lazyPower | the nice part about kvm containers is you have dedicated resources vs sharing with the lxc containers - and this *should* work | 14:07 |
jamespage | arbrandes, I hope not - that was a very specific edge case causing that specific bug | 14:07 |
ezobn | lazyPower: will be glad to hear any advice ;-) Good to know that I am on the right track with the metaswitch charms ;-) | 14:09 |
arbrandes | jamespage, I hope so too. I guess my question is: if I'm using the trusty juno charms in manual mode, deploying to VMs running on an OpenStack cloud, should I expect any trouble trying to get HA working for all services? | 14:16 |
arbrandes | In other words, no MaaS here. | 14:16 |
jamespage | arbrandes, most likely yes | 14:16 |
=== scuttle|afk is now known as scuttlemonkey | ||
jamespage | arbrandes, we use openstack internally to test HA but we do funky things with neutron; specifically disabling all port level security in the cloud | 14:17 |
jamespage | which allows us to float IPs and have that just work | 14:17 |
jamespage | neutron security groups would by default just stop that from happening | 14:17 |
arbrandes | jamespage, that's what I feared. :) | 14:17 |
jamespage | ditto nova ones | 14:17 |
arbrandes | jamespage, I suppose you use containers for testing HA, which is why you need to disable port security. What if I just deploy everything to "actual" nodes? I actually did try this, btw, and the first roadblock I hit was the Keystone charm complaining it couldn't bind the VIP address. | 14:19 |
arbrandes | (Though in practice it was already bound). | 14:20 |
jamespage | arbrandes, hmm that's odd | 14:20 |
jamespage | I would expect the charms to still dtrt but the vips would just be inaccessible | 14:20 |
jamespage | arbrandes, the VIP must be in the same subnet as an existing configured network interface | 14:20 |
arbrandes | They're accessible because I used the allowed-address-pairs neutron extension. | 14:20 |
arbrandes | In other words, everything that the keystone + hacluster charms deploy works in practice, but the ha-relation-changed hook fails. | 14:21 |
arbrandes | Which sucks because then further actions fail. | 14:21 |
arbrandes | Anyway, what I'm trying to understand at this point is if I need anything else on a node that will receive HA Keystone besides a NIC configured on said subnet (there is one, btw - plus 2 more NICs for the data and external nets). | 14:24 |
darknet | hello guys, I've a problem with Juju, is there someone can help me? please | 14:24 |
marcoceppi | darknet: probably, it's best to just ask your question | 14:25 |
lazyPower | ezobn: my openstack provider is being pokey this morning - still investigating | 14:27 |
ezobn | lazyPower: Thank you ! I am using juju for my openstack to test :-) | 14:29 |
darknet | marcoceppi_: do you have any idea to resolve that? | 14:39 |
lazyPower | darknet: unless i'm mistaken you didn't ask a question - what seems to be the trouble? | 14:40 |
=== jcw4 is now known as jw4 | ||
lazyPower | ezobn: it appears you've uncovered a bug | 14:49 |
lazyPower | ezobn: i can reproduce the same behavior | 14:49 |
jamespage | arbrandes, sooooo | 14:50 |
jamespage | arbrandes, are you adding all your relations to deployed services at deployment time? | 14:50 |
arbrandes | jamespage, yes - basically all in one go. | 14:50 |
ezobn | lazyPower: I'm glad that it's not just my setup ;-) Thank you for your help ! | 14:51 |
jamespage | arbrandes, right, so this sucks atm and it's something we have a focus on this cycle | 14:51 |
arbrandes | jamespage, it's not a bundle. I just have a script that does juju add-machine for all 23 nodes, then a series of juju deploy + juju relationship blocks. | 14:51 |
jamespage | arbrandes, but you'll need to do a phased deployment for ha right now | 14:51 |
arbrandes | jamespage, interesting! How would that work? | 14:51 |
lazyPower | ezobn: np - sorry I didn't have a better message for you. If you could - would you mind filing a bug against juju-core? | 14:51 |
jamespage | arbrandes, I happen to be working on one of these right now let me dig it out | 14:52 |
ezobn | lazyPower: yep, I will do it | 14:52 |
arbrandes | jamespage, awesome, thanks! | 14:52 |
lazyPower | ezobn: brilliant. paste me the link when done so i can track it :) | 14:52 |
ezobn | lazyPower: OK | 14:52 |
roadmr | hm, we've had trouble trying to set all relations in one go, it seems like if the charm isn't race-condition-resistant then things fail | 14:52 |
darknet | marcoceppi, any idea? | 14:52 |
ezobn | lazyPower: just here ? | 14:53 |
roadmr | we have resorted to waiting until all units are up, then adding relations one by one with a X-second sleep interval between each | 14:53 |
jamespage | arbrandes, I've also not yet tested this; I was literally working on the bundles when I saw your ping | 14:53 |
jamespage | arbrandes, http://bazaar.launchpad.net/~canonical-server/+junk/serverstack/view/head:/deployment/serverstack5.yaml | 14:53 |
lazyPower | ezobn: that'll work | 14:53 |
ezobn | lazyPower: OK | 14:53 |
roadmr | darknet: what's the problem you're seeing? sorry, I missed it, but maybe I can help | 14:53 |
jamespage | arbrandes, I have some gaps right now due to the fact we are about to re-ip our networking and I don't have all the details yet | 14:53 |
jamespage | arbrandes, the idea is that you deploy 'serverstack-base' first, and let that deploy and settle (no hooks executing) | 14:54 |
jamespage | and then you do serverstack-relations | 14:54 |
darknet | anyone have any idea to resolve that error with juju? | 14:55 |
jamespage | arbrandes, there is quite a bit in that bundle which is MAAS specific; we make a lot of use of LXC which is not possible under openstack | 14:55 |
lazyPower | darknet: none of us have seen a link to a pastebin or reference to your error. Can you provide some insight as to where we should look? | 14:55 |
arbrandes | jamespage, I understand it's a WIP, but thanks anyway - I might be able to extract what I need from that bundle. | 14:57 |
jamespage | arbrandes, fwiw I am driving towards a single line deployment for HA this cycle; juju should be delivering a few new features to be able to help us achieve that | 14:58 |
jamespage | arbrandes, right now it's tricky for a unit of a service to know categorically how many peers it's going to have from the point of first execution, which makes electing a leader a bit clumsy and race-prone right now | 14:58 |
arbrandes | jamespage, that would be fantastic. Just so you guys keep my use-case in mind when working on this: I'm trying to do Openstack-on-Openstack with Juju for the purposes of training. In this case, I'm trying to demonstrate how the Juju charms do HA, and to do it, I fire up one Heat stack per student, a stack that contains a Juju bootstrap node and as many other nodes as needed. | 15:00 |
arbrandes | jamespage, I can't really use MaaS for this setup, so I need Juju (or a bundle) to do its thing without it, and for HA to work *with* port security enabled. | 15:02 |
arbrandes | jamespage, if doing a phased deployment works, it's good enough for me. I'll just have students wait for a prerequisite deployment to settle. | 15:02 |
jamespage | arbrandes, adding the extra allowed-address stuff will work OK for HA VIPs (I think) | 15:03 |
darknet | so, no one can help me to resolve that? | 15:03 |
jamespage | arbrandes, however you may come unstuck with the quantum-gateway charm | 15:04 |
jamespage | arbrandes, that really does need port security disabled to do its neutron networking foo for routers etc... | 15:04 |
jamespage | it generates new MAC addresses and other things that won't work with allowed-address-pairs | 15:04 |
roadmr | darknet: we can help but only if you show us the problem. So far you have only said you have a problem, you have not detailed what it is. | 15:06 |
arbrandes | jamespage, yes, the allowed-address-pairs thing does work for accessing the Pacemaker VIP. quantum-gateway works fine as well. My only problem is having the hacluster hook finish running | 15:06 |
darknet | I've already posted it | 15:06 |
jamespage | arbrandes, ok - do you have the exact error from the unit log? | 15:06 |
arbrandes | jamespage, I just tore down my stack and am in the process of rebuilding it. I'll post the error as soon as I reach that point again. | 15:07 |
jamespage | arbrandes, great - fwiw the bundle I pointed you at is the cloud that we test openstack on top of | 15:08 |
darknet | roadmr, anyway, I get an error when I try to run the command "juju add-machine vnode -e maas" on a maas environment. The result is that the vnode starts, but after a few seconds it goes down, reboots, and I receive an error. I've posted that on askubuntu (http://askubuntu.com/questions/556605/juju-ver-1-20-13-cannot-run-instances-gomaasapi-got-error-back-from-server). Both Juju and MaaS are installed via ppa stable | 15:09 |
arbrandes | jamespage, awesome. I'll get two stacks up and on one reproduce the bug, and on the other try to fiddle with the phased deployment. | 15:09 |
roadmr | darknet: ah cool! thanks for that. By the way, it's the first time I see it since you first mentioned you had a problem (less than an hour ago), so maybe your first message got lost. | 15:10 |
roadmr | darknet: so you say you've done this before and it worked? was it the exact same procedure? | 15:11 |
darknet | same for me!!! I've installed them in the past but it's the first time I've received that, after making the upgrade via ppa | 15:12 |
roadmr | darknet: I've never used manual provisioning, but it seems to me as if Node1 should already be up when you try to add it. Juju won't create it | 15:12 |
darknet | roadmr: it's already up and juju-gui has been deployed without problem, | 15:13 |
darknet | roadmr: I've a problem when adding another vnode to the environment | 15:13 |
roadmr | darknet: ahh ok, and are you 100% sure the name you're giving is correct? what juju is saying is it's unable to find the node by that name | 15:13 |
roadmr | CloudMaaSRCNode1.maas | 15:14 |
darknet | roadmr: yes | 15:14 |
roadmr | darknet: are you maybe missing an S? CloudMaaSSRCNode1.maas | 15:14 |
roadmr | (though the first node CloudMaaSRCNode0.maas looks like it has the correct name...) | 15:14 |
darknet | roadmr: it's strange, for CloudMaaSRCNode0.maas everything has worked well. | 15:14 |
roadmr | darknet: hey do you have the maas cli configured? you could do maas maas nodes list (or equivalent command, I'm assuming your profile is also named "maas") and post that or use that to double-check the nodes you're giving to juju are known to maas | 15:16 |
darknet | roadmr: the vnode continues to make the reboot and on juju status it results in pending... | 15:16 |
darknet | all vnode are present on MaaS and they are in ready status. | 15:17 |
roadmr | darknet: I believe you... that's quite weird | 15:18 |
* roadmr goes out for a bit, brb | 15:18 | |
=== roadmr is now known as roadmr_afk | ||
darknet | roadmr: It's the second time I've re-installed everything, and I receive the same error with juju... the environment has been installed on ubuntu 14.04 | 15:20 |
darknet | roadmr: I've received this type of error after making the upgrade to MaaS 1.7 and Juju 1.20 | 15:23 |
darknet | roadmr: do you have any idea? | 15:28 |
darknet | roadmr: I was thinking that maybe it's because I'm using ubuntu 14.04 with MaaS and Juju upgraded???? | 15:45 |
darknet | roadmr: because I'm testing the previous release on ubuntu 14.04 without the upgrade and it works well!!!!!! | 15:48 |
arbrandes | jamespage, I just managed to reproduce the error: "unit-keystone-0: 2014-12-04 15:46:40 INFO ha-relation-changed ValueError: Unable to resolve a suitable IP address based on charm state and configuration". This is with keystone -n 2. But if I SSH into the leader, I can see the VIP bound to the proper interface, and `crm status` looks good. | 15:53 |
jamespage | arbrandes, hmm | 15:54 |
arbrandes | jamespage, if you're interested, I can get you SSH access to that environment. | 15:54 |
jamespage | arbrandes, can you pastebin the full stacktrace? | 15:54 |
jamespage | oh ssh is good as well | 15:54 |
=== roadmr_afk is now known as roadmr | ||
roadmr | darknet: hm, I suspect a name resolution issue, could you maybe ssh into the bootstrap node and see if it can ping cloudmaasrcnode1.maas? | 16:18 |
sebas5384 | balloons: hey o/ | 16:23 |
LinStatSDR | Hello all. | 16:23 |
balloons | sebas5384, hey! ;-) | 16:23 |
sebas5384 | balloons: just confirming our meeting 17 UTC :) | 16:24 |
balloons | yep, I should be all ready | 16:24 |
lazyPower | o/ sebas5384 | 16:24 |
sebas5384 | hey lazyPower o/ | 16:24 |
lazyPower | i was driving back yesterday when you pinged, so belated greetings. | 16:24 |
sebas5384 | lazyPower: np! :) | 16:25 |
sebas5384 | balloons: I was having a problem with a bug in the charm helpers | 16:26 |
balloons | sebas5384, ohh.. did you try and get a charm for it already? | 16:27 |
sebas5384 | but i resolved it by getting an old version | 16:27 |
darknet | roadmr: already done it, I've tried ssh from host to vnode CloudMaaSRCNode0 and it works perfectly. | 16:27 |
sebas5384 | https://bugs.launchpad.net/bugs/1397134 | 16:28 |
mup | Bug #1397134: Python's Six dependency <oil> <Charm Helpers:In Progress by stub> <https://launchpad.net/bugs/1397134> | 16:28 |
roadmr | darknet: ok, and once you're in cloudmaasrcnode0, can you do something like "ping cloudmaasrcnode1.maas"? | 16:28 |
sebas5384 | balloons: didn't understand your question, sorry :P | 16:28 |
balloons | sebas5384, no worries. See you in 30 mins | 16:29 |
sebas5384 | balloons: great then! | 16:29 |
darknet | roadmr: it's impossible to do that because cloudmaasrcnode1.maas doesn't finish booting. it starts and after a few seconds goes down. | 16:30 |
roadmr | darknet: true, sorry | 16:32 |
darknet | roadmr: will you be here tomorrow? I've to go out of the office in about 10 minutes | 16:32 |
roadmr | darknet: yes, I will be here, and there's other people who may be able to help too | 16:32 |
roadmr | darknet: just remember to put the URL for your askubuntu question, that's very well-explained, thanks for that! | 16:33 |
darknet | ok, thank you for your support; in that case see you tomorrow | 16:33 |
darknet | roadmr: have a nice day bye | 16:33 |
roadmr | darknet: enjoy! | 16:33 |
sebas5384 | balloons: I updated a new version recently, so I'm waiting for the change to update in the charm store | 17:02 |
balloons | sebas5384, ok. | 17:03 |
balloons | so we ready? | 17:04 |
sebas5384 | balloons: some things changed, and the charm wasn't expecting that | 17:04 |
sebas5384 | yeah sure | 17:04 |
sebas5384 | hangout ? | 17:04 |
balloons | sebas5384, https://plus.google.com/hangouts/_/canonical.com/nicholas-sebas?authuser=1 | 17:04 |
sebas5384 | balloons: let me try again | 17:05 |
sebas5384 | hangout is always trolling me | 17:07 |
balloons | sebas5384, I have the same issue at times :-) | 17:08 |
sebas5384 | i'm going to use another browser | 17:08 |
sebas5384 | balloons: yeah if you can get it from the launchpad | 17:08 |
sebas5384 | till it updates to the new revision | 17:08 |
sebas5384 | :) | 17:09 |
balloons | shall we carry on via IRC sebas5384 ? | 17:10 |
sebas5384 | balloons: i'm installing the plugin into safari | 17:10 |
balloons | ohh, right, the dreadful plugin | 17:11 |
lazyPower | i thought hangouts went html5 last week | 17:17 |
lazyPower | boo @ the dreaded plugin | 17:17 |
balloons | sebas5384, https://github.com/nskaggs/isotracker.git | 17:28 |
mbruzek1 | Hey marcoceppi, jose just pointed out the precise drupal6 charm is not owned by charmers! | 17:40 |
mbruzek1 | $ charm get cs:precise/drupal6 | 17:40 |
mbruzek1 | Branching drupal6 (lp:~lynxman/charms/precise/drupal6/trunk) to /tmp/precise/drupal6 | 17:40 |
mbruzek1 | marcoceppi: What is wrong here and how can we fix that? | 17:41 |
marcoceppi | unpromulgate, bzr init ~charmers, push, promulgate | 17:41 |
jackweirdy | Hey all o/ | 17:41 |
mbruzek1 | thanks marcoceppi, I will work with jose to fix that. | 17:42 |
jose | cool, thanks marcoceppi and mbruzek1! | 17:42 |
jackweirdy | I have a machine set up with a neo4j charm I'm building - testing on localhost at the moment. when I `juju ssh` into the machine I can `telnet` and see the port is open, but from the host I don't seem to be able to. The right port is exposed and I'm connecting to the IP reported by juju. I think I'm missing something obvious here; any ideas? | 17:43 |
jackweirdy | Doh! I bet I know what it is. I don't think I set Neo4j to accept public connections. I'll put my dunce hat on :) | 17:44 |
balloons | sebas5384, http://iso.qa.ubuntu.com/ | 17:48 |
lazyPower | jackweirdy: we've all done that to be sure :) | 17:49 |
jackweirdy | I get a bit carried away with some of the magic and forget I have to do some work myself xD | 17:49 |
lazyPower | jackweirdy: thats a byproduct of how awesome we all are when charming :) (shameless self plug there) | 17:51 |
=== arbrandes_ is now known as arbrandes | ||
jose | marcoceppi, mbruzek1: found another of those branches, this time it's for the juju charm. should I move it to ~charmers like I did with drupal6? | 18:20 |
marcoceppi | jose: which charm? | 18:20 |
marcoceppi | sorry, who owns it*? | 18:20 |
jose | marcoceppi: juju, owned by Marc Cluet | 18:20 |
marcoceppi | yeah | 18:20 |
jose | ack, doing that now | 18:20 |
jose | marcoceppi: changes pushed and charm promulgated, thanks! | 18:25 |
marcoceppi | o7 | 18:25 |
=== CyberJacob|Away is now known as CyberJacob | ||
jcastro | hey mbruzek1 | 19:41 |
mbruzek1 | yo | 19:41 |
jcastro | calendar says charm school tomorrow on fat charms | 19:41 |
mbruzek1 | yes. | 19:42 |
mbruzek1 | I still have to prepare for that | 19:42 |
jcastro | ok I'll send a reminder now | 19:42 |
mbruzek1 | 3pm est? | 19:42 |
jcastro | yeah | 19:42 |
jcastro | hey, you know what would be cool, since the power machines are firewalled | 19:42 |
mbruzek1 | OK will have to set aside some time to prepare | 19:42 |
jcastro | maybe do the thing on the power machines? | 19:42 |
mbruzek1 | jcastro: great idea | 19:42 |
mbruzek1 | Can you get a fresh system from smoser? | 19:42 |
jcastro | I can work it now | 19:42 |
mbruzek1 | Please do | 19:42 |
mbruzek1 | I would love to show something like that off, but we need a fresh system that does not have the proxies set up | 19:43 |
mbruzek1 | I will work on preparations after our standup | 19:43 |
jcastro | ok, I'll go ask | 19:43 |
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
=== roadmr is now known as roadmr_afk | ||
=== scuttlemonkey is now known as scuttle|afk | ||
=== scuttle|afk is now known as scuttlemonkey | ||
=== roadmr_afk is now known as roadmr | ||
noise][1 | anyone in here familiar with the apache-openid charm? (https://jujucharms.com/u/caio1982/apache-openid/trusty/4) | 22:21 |
noise][1 | and/or if it's better to just hack it together manually in the apache2 vhost conf? | 22:21 |
lazyPower | noise][1: i haven't used it unfortunately - however it looks like a userspace charm so ymmv | 22:38 |
noise][1 | lazyPower: so better to just hack up the vhost file for the main apache2 charm directly? | 22:39 |
lazyPower | noise][1: but just looking at the config it generates, it seems feasible | 22:39 |
lazyPower | i would test it in a sandbox before relying on it | 22:39 |
noise][1 | :) | 22:39 |
lazyPower | it may have quirky behavior with different charm configurations. I don't like racy configs | 22:39 |
lazyPower | as this looks like it's appending, and if the parent charm has a config-changed that updates the vhost - you're asking for loss of OpenID | 22:40 |
lazyPower | so that would be my litmus test: ensuring the template updates aren't atomic | 22:40 |
noise][1 | oh, interesting | 22:40 |
lazyPower | if they are, it's a no-go | 22:40 |