magicaltrout | although that seems to be haproxy only and not apache2 | 00:04 |
magicaltrout | isn't there a webpage with a list of available interfaces on it? | 00:07 |
* magicaltrout can't wait for the updated docs and stuff to get properly indexed | 00:07 | |
magicaltrout | interfaces.juju.solutions that bad boy | 00:09 |
lazyPower | magicaltrout - you hit the nail on the head. that works as expected due to the interface being provided for you | 01:02 |
lazyPower | apache2 being an older charm has no such interface documentation, and it's painful to integrate with charms that don't use interface layers, as competing implementations of the same interface exist | 01:03 |
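[For contrast, reactive charms declare the interfaces they speak in `layer.yaml` and pull a ready-made implementation from interfaces.juju.solutions. A minimal sketch, with illustrative layer and interface names:]

```yaml
# layer.yaml for a hypothetical reactive charm. The interface:http line
# pulls the shared "http" interface layer from interfaces.juju.solutions,
# so the relation conversation is implemented for you -- unlike older
# charms such as apache2, where each charm hand-rolls its own relation hooks.
includes:
  - layer:basic
  - interface:http
```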
blahdeblah | wallyworld: ping - quick question: do you guys have a standard recommendation for how to manually move a bootstrap node to another system? | 05:14 |
blahdeblah | cc: ^ anastasiamac axw | 05:22 |
axw | blahdeblah: sounds like a job for backup/restore | 05:26 |
blahdeblah | axw: You do know that is one of our trigger words, right? :-) | 05:26 |
axw | blahdeblah: had bad experiences with it? there are known issues, slated for fixing in 2.1 I believe | 05:27 |
axw | blahdeblah: but I think it's the only "standard recommendation" we have | 05:27 |
blahdeblah | axw: Ask wallyworld to tell you the story sometime :-) | 05:27 |
axw | blahdeblah: ah, I think I may know what you're referring to :) | 05:28 |
blahdeblah | So should it basically work on 1.24.7? | 05:28 |
blahdeblah | actually, no; I think it's 1.25.1.2 | 05:28 |
* blahdeblah checks | 05:28 | |
axw | blahdeblah: I'm not aware of any major bugs with it that would, say, delete all your machines. just usability issues | 05:29 |
blahdeblah | Cool - thanks; I'll have a read of the doco later | 05:30 |
=== scuttlemonkey is now known as scuttle|afk | ||
ryotagami | jamespage: I have a quick question, if you have time. | 08:58 |
admcleod1 | i have a leadership question.. i have 2 nodes in a peer relation, and both report is_leader false (in the charm, and in a debug hook). apparently both were denied leadership: http://pastebin.ubuntu.com/15204742/ | 10:41 |
admcleod1 | and http://pastebin.ubuntu.com/15204752/ | 10:43 |
admcleod1 | hmm maybe looks like https://bugs.launchpad.net/juju-core/+bug/1465307 | 10:49 |
mup | Bug #1465307: 1.24.0: Lots of "agent is lost, sorry!" messages <landscape> <regression> <juju-core:Incomplete> <https://launchpad.net/bugs/1465307> | 10:49 |
magicaltrout | right sod it | 11:33 |
magicaltrout | its friday | 11:33 |
magicaltrout | I'm bored | 11:33 |
* magicaltrout rents a cheap server to try Jorge's Xenial blog stuff | 11:33 | |
magicaltrout | woop shiny new server with 6TB of storage | 11:57 |
magicaltrout | should keep me going | 11:57 |
magicaltrout | boom xenial here i come | 12:20 |
jamespage | beisner, zul : I raised merges for mongodb and mysql compat for xenial btw | 14:03 |
jamespage | mgz, I guess jelmer is a good person to poke right? | 14:29 |
jamespage | beisner, I'm getting nowhere fast with the bzr fast-import problem | 14:32 |
mgz | jamespage: well, he does know more about the import process than me, though not sure he has any more time | 14:38 |
jamespage | mgz, ok - that's what I thought but it's worth a punt | 14:38 |
jamespage | mgz, trying to import git repos back to bzr - all but one of the 24 repositories works just fine | 14:39 |
jamespage | beisner, I can think of ways to workaround this but they are not pretty | 14:41 |
mgz | and it's ghost revs or some other form of odd history? | 14:41 |
mgz | jamespage: one option would be working out what's odd about the history, redo the git import with that part rewritten, then you'll get a clean import back | 14:42 |
jamespage | mgz, | 14:43 |
jamespage | ABORT: exception occurred processing commit :393 | 14:43 |
jamespage | bzr: ERROR: An inconsistent delta was supplied involving '<unknown>', 'havana-20160226144309-u3xhs67nd4l2ygof-220' | 14:43 |
jamespage | mgz, how do I resolve that back to the original bzr export git import? | 14:46 |
mgz | jamespage: not clear just from that, what's the diff of that commit | 14:49 |
jamespage | mgz, http://paste.ubuntu.com/15206137/ | 14:50 |
jamespage | hmm - I wonder whether it's those symlinks... | 14:50 |
mgz | those do look like some odd filemoves | 14:51 |
jamespage | yes agreed | 14:51 |
jamespage | mgz, it's like the ordering on those fast-import statements is wrong | 14:52 |
jamespage | http://paste.ubuntu.com/15206157/ | 14:52 |
mgz | D templates/havana | 14:53 |
mgz | bet it's an issue from git not versioning directories | 14:53 |
mgz | so, we lost the rev info in the export/import roundtrip through git | 14:53 |
jamespage | mgz, I see the same if I just try to re-import the original export directly back into bzr | 14:54 |
jcastro | aisrael: xenial vagrant boxes do exist: http://cloud-images.ubuntu.com/xenial/current/ | 14:54 |
jcastro | they're just not listed on the vagrant page | 14:54 |
aisrael | jcastro: Ahh. I tend to use the vagrant-registered ones because that supports versioning | 14:55 |
mgz | jamespage: so, my preferred solution would be manipulating the original export to merge/munge the problem history | 14:55 |
mgz | so you're clean from there on | 14:55 |
jcastro | indeed | 14:55 |
jcastro | I think it was just an oversight, I let gaughen know | 14:55 |
jcastro | or they may be doing it on purpose because it's a beta | 14:56 |
jamespage | mgz, so the data in the original fast export file ? | 14:56 |
mgz | jamespage: yeah, there are also rewrite options to the fast export command | 14:58 |
mgz | and I think on the git import side | 14:58 |
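[One way to do that munging is a small filter over the fast-export stream: take each commit's file commands and reorder them so directory deletes land last. This is purely a sketch of the approach mgz describes; whether reordering is actually the right fix depends on what is odd about commit 393's history, and the helper name and sample paths below are hypothetical:]

```python
def reorder_file_commands(commands):
    """Given the file-command lines of one fast-import commit (e.g.
    'M 100644 :12 path', 'R old new', 'D templates/havana'), move the
    delete commands after the modifies/renames.  This is one possible
    munge for deltas that bzr rejects as inconsistent when git, which
    does not version directories, emits a directory delete alongside
    renames into that directory."""
    deletes = [c for c in commands if c.startswith("D ")]
    others = [c for c in commands if not c.startswith("D ")]
    return others + deletes


# Example: the delete of templates/havana is pushed to the end,
# after the rename and modify that touch paths under it.
ops = [
    "D templates/havana",
    "R templates/havana/ceph.conf templates/ceph.conf",
    "M 100644 :42 templates/ceph.conf",
]
print(reorder_file_commands(ops))
```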
beisner | o/ | 14:59 |
gaughen | jcastro, you're specifically looking for it in the beta? because we're building dailies but it does seem to be missing from beta 1 | 15:08 |
jcastro | gaughen: I was just wondering why we're not listing it on vagrantcloud.com | 15:16 |
jcastro | I was assuming because we're not released yet, which would make sense | 15:19 |
gaughen | jcastro, I'll follow up on that specific item | 15:35 |
apuimedo | jamespage: I have the mem-manager with amulet and unit tests | 15:36 |
apuimedo | should I add it to the existing bugs? | 15:36 |
apuimedo | s/bugs/bug/ | 15:36 |
apuimedo | for review and promulgation | 15:36 |
jamespage | apuimedo, new charm bug please | 15:38 |
apuimedo | alright | 15:40 |
apuimedo | jamespage: https://bugs.launchpad.net/charms/+bug/1550394 | 15:44 |
mup | Bug #1550394: New charm: mem-manager <Juju Charms Collection:New> <https://launchpad.net/bugs/1550394> | 15:44 |
apuimedo | I tried assigning it to you but I don't think I'm allowed to | 15:44 |
apuimedo | I ran the amulet tests with the local lxc provider | 15:44 |
apuimedo | and tried it on ha | 15:44 |
apuimedo | as well | 15:44 |
jamespage | apuimedo, let it run through the normal review process - with my current workload it's likely someone else will pick it up first... | 15:46 |
apuimedo | jamespage: understood | 15:47 |
apuimedo | thanks | 15:47 |
apuimedo | cory_fu: did you merge that cassandra change again? | 15:49 |
apuimedo | (wondering if I should update my bundles) | 15:49 |
cory_fu | Not yet. I was going to shortly | 15:49 |
jamespage | if anyone has a spare moment - https://code.launchpad.net/~james-page/charms/trusty/mongodb/forward-compat-xenial/+merge/287312 | 15:50 |
jamespage | fixes up compat of the mongodb charm with xenial | 15:50 |
apuimedo | ok, let me know, cause I'll also have to send changes to cs:trusty/midonet-api amulet tests and also cs:trusty/midonet-agent and cs:trusty/neutron-agents-midonet | 15:51 |
apuimedo | cory_fu: ^^ | 15:51 |
lazyPower | jamespage - approved and merged | 15:55 |
jamespage | lazyPower, thank you | 15:55 |
lazyPower | thanks beisner and ci :) the passing results made that a no-brainer merge | 15:56 |
admcleod1 | jamespage: hey james, did you have any ideas about the leadership stuff i mentioned earlier in here? | 15:59 |
jamespage | admcleod1, missed that | 16:09 |
* jamespage reads backscroll | 16:09 | |
cory_fu | apuimedo: Ok, I'm going to merge now. Link me to the MPs for the test fixes and I'll do those as well | 16:16 |
apuimedo | ok, just a moment | 16:17 |
apuimedo | cory_fu: do you know how to put the three sources on a string for amulet? | 16:23 |
apuimedo | I only got your multiline example for bundle/config file | 16:24 |
apuimedo | should I add "\n - " | 16:24 |
apuimedo | between the different sources | 16:24 |
cory_fu | apuimedo: You can also do "[source, source, source]" | 16:25 |
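[Either form is just YAML for the same list: block style with a "\n  - " per entry, or the flow style cory_fu mentions. A sketch of building the flow-style string in Python (the source values here are made up):]

```python
# Build a YAML flow-style sequence so a list-valued charm config option
# (like a "source" list) can be passed to amulet as a single string.
# The equivalent block style would join entries with "\n  - ".
sources = [
    "ppa:midonet/stable",
    "deb http://example.org/midonet stable main",  # hypothetical repo line
]
flow_style = "[{}]".format(", ".join(sources))
print(flow_style)
# -> [ppa:midonet/stable, deb http://example.org/midonet stable main]
```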
apuimedo | cool | 16:25 |
pmatulis | 'juju sync-tools' is apparently deprecated in juju-core 2. where do its features go? what does one do if the agents do not have internet access? | 16:27 |
apuimedo | cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/midonet-api/cassandra/+merge/287334 | 16:29 |
apuimedo | I'll do the other two now | 16:29 |
magicaltrout | oooh jcastro not putting all your signups on bcc, that's a bit naughty! :P | 16:32 |
stokachu | so juju 2.0 has accounts which list admin@local as the admin user for a controller | 16:32 |
stokachu | however, you can't login to the api with admin@local only user-admin | 16:33 |
stokachu | is that because the api doesn't support the login from accounts yet? | 16:33 |
* magicaltrout returns to bashing his head on an LXD shaped wall | 16:34 | |
jcastro | magicaltrout: ack | 16:34 |
jcastro | magicaltrout: it's email, it's all spam | 16:34 |
magicaltrout | hehe | 16:34 |
admcleod1 | jamespage: any ideas? | 16:35 |
jcastro | I am hoping the next beta of juju will fix my lxd problems | 16:35 |
apuimedo | cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/neutron-agents-midonet/cassandra/+merge/287336 | 16:35 |
lazyPower | rick_h_ SHARED MODELS?! | 16:35 |
magicaltrout | aye well lxd beta2 and juju trunk don't work but I don't know if it's me messing something up or elsewhere | 16:35 |
magicaltrout | because when I roll back to alpha1 and stuff it still seems broken | 16:36 |
jcastro | it's a known issue, they're working on a release now | 16:36 |
jcastro | let me find the bug for you | 16:36 |
jcastro | https://bugs.launchpad.net/juju-core/+bug/1547268 | 16:36 |
mup | Bug #1547268: Can't bootstrap environment after latest lxd upgrade <juju-core:In Progress by jameinel> <https://launchpad.net/bugs/1547268> | 16:36 |
jcastro | is what you want | 16:36 |
jcastro | magicaltrout: core tells me new beta early next week with this resolved, so this is the one thing we're waiting on | 16:37 |
magicaltrout | well | 16:37 |
magicaltrout | yes to the api_compat bit | 16:37 |
apuimedo | cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/midonet-agent/cassandra/+merge/287338 | 16:38 |
magicaltrout | so I got the various LXD/LXC beta2 packages and installed them | 16:38 |
apuimedo | there, that was the last one | 16:38 |
jcastro | are you on trusty? | 16:38 |
magicaltrout | so i've downgraded which gets rid of api_compat | 16:38 |
stokachu | wallyworld: do you know when logging into the api will use the admin user from the accounts (admin@local) instead of 'user-admin'? | 16:38 |
* apuimedo taking a 30min break | 16:38 | |
magicaltrout | I just bought a random server in the cloud and walked through your blog, so i'm sat in xenial | 16:38 |
magicaltrout | but with a downgraded lxc lxd stack | 16:38 |
magicaltrout | so I don't get the api_compat error | 16:39 |
jcastro | oh, I don't think that would work | 16:39 |
jcastro | jam: ^^ that shouldn't work right? | 16:39 |
cory_fu | apuimedo: Ok, I'll get to them shortly | 16:39 |
magicaltrout | well, I figured that as well, so I rolled the juju source right back | 16:39 |
magicaltrout | and it made no real difference | 16:39 |
magicaltrout | the bootstrap node comes up | 16:39 |
magicaltrout | but then can't authenticate against it and fails | 16:39 |
apuimedo | thanks | 16:40 |
magicaltrout | maybe i didn't rollback far enough, I got to 2.0 alpha 1 | 16:40 |
jcastro | I think beta2 is the one you want | 16:42 |
jcastro | or did you try that one too? | 16:42 |
magicaltrout | hold on, LXD/LXC beta 2? | 16:42 |
jcastro | no, juju | 16:42 |
magicaltrout | hmm | 16:42 |
magicaltrout | dunno, i'll go dig out a tag | 16:42 |
magicaltrout | beta2? not alpha2? | 16:43 |
magicaltrout | cause it didn't get into 1.25 did it | 16:43 |
magicaltrout | so it ended up on the devel ppa which became 2.0 | 16:43 |
jcastro | I'm sorry, I meant juju _alpha_2 | 16:45 |
magicaltrout | lol | 16:45 |
magicaltrout | okay | 16:45 |
magicaltrout | building... let's see what she does | 16:49 |
magicaltrout | na exactly the same connection refused error | 16:50 |
magicaltrout | which makes me wonder what on earth its doing | 16:50 |
tvansteenburgh1 | frankban: when you have time can you put the latest juju-deployer from pypi into juju/stable please? | 16:51 |
=== tvansteenburgh1 is now known as tvansteenburgh | ||
frankban | tvansteenburgh: sure, I'll do that next week | 16:54 |
=== scuttle|afk is now known as scuttlemonkey | ||
tvansteenburgh | frankban: ta | 16:55 |
=== Makyo_ is now known as Makyo | ||
=== freeflying__ is now known as freeflying | ||
=== alexisb is now known as alexisb-afk | ||
=== lazypower_ is now known as lazyPower | ||
=== natefinch is now known as natefinch-lunch | ||
=== CyberJacob is now known as zz_CyberJacob | ||
=== mhall119_ is now known as mhall119 | ||
=== natefinch-lunch is now known as natefinch | ||
=== alexisb-afk is now known as alexisb | ||
stokachu | anyone on the juju team able to answer my question(s) to the mailing list? | 19:17 |
alexisb | stokachu, yes, I would like wallyworld to answer that one, the short answer is yes it will be there but target depends on progress for the new bootstrap/controller work his team is doing | 19:21 |
alexisb | stokachu, I will loop him into the thread to make sure he sees it | 19:21 |
stokachu | alexisb: ok perfect thank you! | 19:21 |
=== dpm is now known as dpm-afk | ||
firl | you around lazyPower ? | 20:36 |
lazyPower | firl you betchya | 20:36 |
firl | got it up and running | 20:37 |
lazyPower | awww yeaaaa | 20:37 |
firl | had a question about services though, how to get an external ip mapped to them | 20:38 |
lazyPower | We've been talking through this ourselves, mbruzek and I | 20:38 |
lazyPower | we had some success with consul as service discovery and putting that behind a reverse proxy | 20:39 |
lazyPower | the other option is to launch a pod with host port mappings which expose them on the network of the machine, like docker run -p 80:80 style | 20:39 |
mbruzek | firl: TL;DR: it is hard | 20:39 |
firl | lol | 20:40 |
firl | what about openstack | 20:40 |
firl | make it even harder? ;) | 20:40 |
firl | what about implementing the tcp load balancer service? | 20:41 |
lazyPower | i like this idea | 20:42 |
magicaltrout | talk of loadbalancers and reverse proxies is banned in this channel | 20:43 |
lazyPower | firl - i took a look here http://kubernetes.io/v1.1/docs/user-guide/services.html#type-loadbalancer, see the subsection about External IPs | 20:43 |
lazyPower | looks like we just pass it config and the kubes-router does all the iptables magic | 20:44 |
firl | ya | 20:44 |
firl | it's up to the kubernetes implementation to implement it | 20:44 |
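[The externalIPs mechanism from that doc needs no cloud-provider integration: the cluster simply trusts that the listed address is routed to a node, and kube-proxy's iptables rules forward the traffic to the selected pods. A sketch of such a Service manifest (Kubernetes v1.1 era, as in the linked docs; names and addresses are placeholders):]

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-frontend          # illustrative name
spec:
  selector:
    app: my-frontend
  ports:
    - port: 80               # port exposed on the external IP
      targetPort: 8080       # port the pods listen on
  externalIPs:
    - 203.0.113.10           # an address the operator has routed to a node
```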
lazyPower | ok so, lets talk through this - thats an integration with neutron right? | 20:44 |
firl | magicaltrout :) | 20:45 |
firl | lazyPower ya | 20:45 |
firl | juju has access to the networking IDs | 20:46 |
firl | juju already does this for the maas / lxc implementation also right? | 20:47 |
lazyPower | yeah, but this is also apples/oranges too, juju has deep integration with lxd/lxc | 20:49 |
lazyPower | docker is only being modeled by the charm, so all that is on us, and how our charm talks to the components, so the comparison there was a bit off in terms of what's being given automatically | 20:49 |
firl | well my thought is, doesn’t juju also expose the network side of openstack to the juju subsystem | 20:50 |
firl | if juju had a way to map an IP address from neutron to the container, you can easily forward it | 20:50 |
lazyPower | yeah, i haven't done a lot with network spaces *yet* | 20:50 |
lazyPower | but its there in 2.0 | 20:50 |
firl | juju already controls the security side | 20:50 |
lazyPower | right | 20:50 |
lazyPower | have you looked at juju network spaces docs yet? | 20:51 |
firl | for 2? no I haven't | 20:51 |
lazyPower | we should look at that and figure out how to do this :) | 20:51 |
firl | I have seen some stuff for LACP | 20:51 |
firl | I think that would be one of the right ways | 20:51 |
firl | the other thing you could do is just implement the service to spawn up a haProxy charm | 20:51 |
lazyPower | Right | 20:52 |
lazyPower | there's also vulcand, nginx w/ consul-template or etcd/confd | 20:52 |
firl | i think vulcand has done more with kube also | 20:52 |
firl | ( If I remember correctly ) | 20:53 |
lazyPower | I think it really depends on the workload, and this is going to take a few bundles to get the right options together | 20:53 |
lazyPower | we looked into this before, and our best success was with the reverse proxy and template watchers | 20:53 |
lazyPower | but thats been 6 or 7 months ago | 20:53 |
firl | template watchers being when the kube services change? | 20:53 |
lazyPower | yeah, as they come up, down, etc. | 20:53 |
firl | +,- pods? | 20:53 |
firl | kk | 20:53 |
firl | yeah I saw a great article on it | 20:54 |
lazyPower | the containers registered in consul, and consul-template was rewriting an nginx config | 20:54 |
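[That pattern looks roughly like the consul-template fragment below: consul-template watches a service in Consul and rewrites an nginx upstream as containers register and deregister, then reloads nginx. The service name and file paths are invented for illustration:]

```
# nginx.conf.ctmpl -- consul-template input (hypothetical service name "app").
# Run with something like:
#   consul-template \
#     -template "nginx.conf.ctmpl:/etc/nginx/conf.d/app.conf:nginx -s reload"
upstream app {
{{ range service "app" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
```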
firl | I can’t seem to find the article, but ya essentially what you have mentioned | 20:58 |
firl | so in the meantime until that gets resolved / figured out, how should I create a mapping? create a private subnet route to the network server and do iptables to map to the internal ip? | 20:59 |
=== natefinch_ is now known as natefinch | ||
mbruzek | firl: That looks like it would work, if you get that working I would love to read more about it | 21:02 |
firl | it’s easier for me because I have pfsense as the backend and can do that via simple routes | 21:02 |
firl | but that solution doesn’t lend itself to most people | 21:03 |
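[pfsense routes aside, the generic version of the mapping firl proposes is a pair of NAT rules on whichever box owns the external address. A sketch only; all addresses are documentation placeholders:]

```
# Forward 203.0.113.10:80 (external) to a service/pod IP 10.0.0.5:80.
iptables -t nat -A PREROUTING  -d 203.0.113.10 -p tcp --dport 80 \
         -j DNAT --to-destination 10.0.0.5:80
# Masquerade so return traffic comes back through this box.
iptables -t nat -A POSTROUTING -d 10.0.0.5 -p tcp --dport 80 -j MASQUERADE
```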
lazyPower | I hear ya firl | 21:09 |
firl | I think the proper solution would be to leverage the juju networking stack | 21:10 |
firl | for exposing kube services, however being able to do ssl termination of load balancers would be a nice add with a juju bundle | 21:10 |
bdx | core, charmers: I'm experiencing behavior I can't understand or explain when deploying to the openstack provider using 1.25.3 released/released .... when I `juju deploy postgresql`, my machine goes into error state. juju-env -> http://paste.ubuntu.com/15209811/ , machine-0.log -> http://paste.ubuntu.com/15209834/ , nova-api-os-compute.log -> http://paste.ubuntu.com/15209842/ | 21:24 |
bdx | core, charmers: but when I `juju deploy postgresql <any name other than postgresql>` I have successful deploys .... has anyone heard of anything like this? | 21:26 |
bdx | core, charmers: my machine-0.log is also getting spammed with "2016-02-26 20:49:17 ERROR juju.rpc server.go:573 error writing response: EOF" | 21:31 |
bdx | core, charmers: I can reproduce this to no end -> http://paste.ubuntu.com/15209967/ | 21:40 |
firl | lazyPower is there a way to specify what internal networking namespace to use? | 23:21 |
firl | ( I see it ) | 23:21 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!