[00:04] although that seems to be haproxy only and not apache2
[00:07] isn't there a webpage with a list of available interfaces on it?
[00:07] * magicaltrout can't wait for the updated docs and stuff to get properly indexed
[00:09] interfaces.juju.solutions that bad boy
[01:02] magicaltrout - you hit the nail on the head. that works as expected due to the interface being provided for you
[01:03] apache2 being an older charm has no such interface documentation, and it's painful to integrate with charms that don't use interface layers, as competing implementations of the same interface exist
[05:14] wallyworld: ping - quick question: do you guys have a standard recommendation for how to manually move a bootstrap node to another system?
[05:22] cc: ^ anastasiamac axw
[05:26] blahdeblah: sounds like a job for backup/restore
[05:26] axw: You do know that is one of our trigger words, right? :-)
[05:27] blahdeblah: had bad experiences with it? there are known issues, slated for fixing in 2.1 I believe
[05:27] blahdeblah: but I think it's the only "standard recommendation" we have
[05:27] axw: Ask wallyworld to tell you the story sometime :-)
[05:28] blahdeblah: ah, I think I may know what you're referring to :)
[05:28] So should it basically work on 1.24.7?
[05:28] actually, no; I think it's 1.25.1.2
[05:28] * blahdeblah checks
[05:29] blahdeblah: I'm not aware of any major bugs with it that would, say, delete all your machines. just usability issues
[05:30] Cool - thanks; I'll have a read of the doco later
=== scuttlemonkey is now known as scuttle|afk
[08:58] jamespage: I have a quick question, if you have time.
[10:41] i have a leadership question.. i have 2 nodes in a peer relation, and both report is_leader false (in the charm, and in a debug hook). apparently both were denied leadership: http://pastebin.ubuntu.com/15204742/
[10:43] and http://pastebin.ubuntu.com/15204752/
[10:49] hmm maybe looks like https://bugs.launchpad.net/juju-core/+bug/1465307
[10:49] Bug #1465307: 1.24.0: Lots of "agent is lost, sorry!" messages
[11:33] right sod it
[11:33] it's friday
[11:33] I'm bored
[11:33] * magicaltrout rents a cheap server to try Jorge's Xenial blog stuff
[11:57] woop shiny new server with 6TB of storage
[11:57] should keep me going
[12:20] boom xenial here i come
[14:03] beisner, zul: I raised merges for mongodb and mysql compat for xenial btw
[14:29] mgz, I guess jelmer is a good person to poke right?
[14:32] beisner, I'm getting nowhere fast with the bzr fast-import problem
[14:38] jamespage: well, he does know more about the import process than me, though not sure he has any more time
[14:38] mgz, ok - that's what I thought but it's worth a punt
[14:39] mgz, trying to import git repos back to bzr - all but one of the 24 repositories works just fine
[14:41] beisner, I can think of ways to work around this but they are not pretty
[14:41] and it's ghost revs or some other form of odd history?
[14:42] jamespage: one option would be working out what's odd about the history, redo the git import with that part rewritten, then you'll get a clean import back
[14:43] mgz,
[14:43] ABORT: exception occurred processing commit :393
[14:43] bzr: ERROR: An inconsistent delta was supplied involving '', 'havana-20160226144309-u3xhs67nd4l2ygof-220'
[14:46] mgz, how do I resolve that back to the original bzr export git import?
[14:49] jamespage: not clear just from that, what's the diff of that commit
[14:50] mgz, http://paste.ubuntu.com/15206137/
[14:50] hmm - I wonder whether it's those symlinks...
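(On the leadership question above, where both peer units report is_leader false: a minimal hook sketch of how leadership is normally checked with the Juju leadership hook tools. This assumes a bash hook, and the "master" setting name is made up for illustration. If the unit agents are losing their state-server connection - the "agent is lost, sorry!" symptom in bug #1465307 - neither unit can hold the lease, and both will report False regardless of what the hook does.)

```bash
#!/bin/bash
# peer-relation-changed hook (sketch); the "master" key is hypothetical.
set -e

if [ "$(is-leader)" = "True" ]; then
    # Only the current leader may call leader-set.
    leader-set master="$(unit-get private-address)"
    juju-log "this unit holds leadership"
else
    juju-log "not the leader; master is $(leader-get master)"
fi
```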
[14:51] those do look like some odd file moves
[14:51] yes agreed
[14:52] mgz, it's like the ordering on those fast-import statements is wrong
[14:52] http://paste.ubuntu.com/15206157/
[14:53] D templates/havana
[14:53] bet it's an issue from git not versioning directories
[14:53] so, we lost the rev info in the export/import roundtrip through git
[14:54] mgz, I see the same if I just try to re-import the original export directly back into bzr
[14:54] aisrael: xenial vagrant boxes do exist: http://cloud-images.ubuntu.com/xenial/current/
[14:54] they're just not listed on the vagrant page
[14:55] jcastro: Ahh. I tend to use the vagrant-registered ones because that supports versioning
[14:55] jamespage: so, my preferred solution would be manipulating the original export to merge/munge the problem history
[14:55] so you're clean from there on
[14:55] indeed
[14:55] I think it was just an oversight, I let gaughen know
[14:56] or they may be doing it on purpose because it's a beta
[14:56] mgz, so the data in the original fast export file?
[14:58] jamespage: yeah, there are also rewrite options to the fast export command
[14:58] and I think on the git import side
[14:59] o/
[15:08] jcastro, you're specifically looking for it in the beta? because we're building dailies but it does seem to be missing from beta 1
[15:16] gaughen: I was just wondering why we're not listing it on vagrantcloud.com
[15:19] I was assuming because we're not released yet, which would make sense
[15:35] jcastro, I'll follow up on that specific item
[15:36] jamespage: I have the mem-manager with amulet and unit tests
[15:36] should I add it to the existing bugs?
[15:36] s/bugs/bug/
[15:36] for review and promulgation
[15:38] apuimedo, new charm bug please
[15:40] alright
[15:44] jamespage: https://bugs.launchpad.net/charms/+bug/1550394
[15:44] Bug #1550394: New charm: mem-manager
[15:44] I tried assigning it to you but I don't think I'm allowed to
[15:44] I ran the amulet tests with the local lxc provider
[15:44] and tried it on HA
[15:44] as well
[15:46] apuimedo, let it run through the normal review process - with my current workload it's likely someone else will pick it up first...
[15:47] jamespage: understood
[15:47] thanks
[15:49] cory_fu: did you merge that cassandra change again?
[15:49] (wondering if I should update my bundles)
[15:49] Not yet. I was going to shortly
[15:50] if anyone has a spare moment - https://code.launchpad.net/~james-page/charms/trusty/mongodb/forward-compat-xenial/+merge/287312
[15:50] fixes up compat of the mongodb charm with xenial
[15:51] ok, let me know, cause I'll also have to send changes to cs:trusty/midonet-api amulet tests and also cs:trusty/midonet-agent and cs:trusty/neutron-agents-midonet
[15:51] cory_fu: ^^
[15:55] jamespage - approved and merged
[15:55] lazyPower, thank you
[15:56] thanks beisner and ci :) the passing results made that a no-brainer merge
[15:59] jamespage: hey james, did you have any ideas about the leadership stuff i mentioned earlier in here?
[16:09] admcleod1, missed that
[16:09] * jamespage reads backscroll
[16:16] apuimedo: Ok, I'm going to merge now. Link me to the MPs for the test fixes and I'll do those as well
[16:17] ok, just a moment
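(A rough sketch of the bzr/git roundtrip jamespage and mgz were debugging above, assuming the bzr fastimport plugin is installed. The repository and file names below are placeholders, and the hand-editing step is only mgz's suggested direction, not a verified fix for the inconsistent-delta error.)

```bash
# Export the git history to a fast-import stream (run inside the git clone).
cd openstack-charm-git
git fast-export --all > ../charm.fi
cd ..

# Optionally inspect/rewrite the offending commit (e.g. mark :393) in
# charm.fi before re-importing, per the suggestion to munge the history.

# Import the stream into a fresh bzr shared repository.
bzr init-repo charm-bzr
cd charm-bzr
bzr fast-import ../charm.fi
```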
[16:23] cory_fu: do you know how to put the three sources on a string for amulet?
[16:24] I only got your multiline example for bundle/config file
[16:24] should I add "\n - "
[16:24] between the different sources
[16:25] apuimedo: You can also do "[source, source, source]"
[16:25] cool
[16:27] 'juju sync-tools' is apparently deprecated in juju-core 2. where do its features go? what does one do if the agents do not have internet access?
[16:29] cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/midonet-api/cassandra/+merge/287334
[16:29] I'll do the other two now
[16:32] oooh jcastro not putting all your signups on bcc, that's a bit naughty! :P
[16:32] so juju 2.0 has accounts which list admin@local as the admin user for a controller
[16:33] however, you can't log in to the api with admin@local, only user-admin
[16:33] is that because the api doesn't support the login from accounts yet?
[16:34] * magicaltrout returns to bashing his head on an LXD-shaped wall
[16:34] magicaltrout: ack
[16:34] magicaltrout: it's email, it's all spam
[16:34] hehe
[16:35] jamespage: any ideas?
[16:35] I am hoping the next beta of juju will fix my lxd problems
[16:35] cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/neutron-agents-midonet/cassandra/+merge/287336
[16:35] rick_h_ SHARED MODELS?!
[16:35] aye well lxd beta2 and juju trunk don't work but I don't know if it's me messing something up or elsewhere
[16:36] because when I roll back to alpha1 and stuff it still seems broken
[16:36] it's a known issue, they're working on a release now
[16:36] let me find the bug for you
[16:36] https://bugs.launchpad.net/juju-core/+bug/1547268
[16:36] Bug #1547268: Can't bootstrap environment after latest lxd upgrade
[16:36] is what you want
[16:37] magicaltrout: core tells me new beta early next week with this resolved, so this is the one thing we're waiting on
[16:37] well
[16:37] yes to the api_compat bit
[16:38] cory_fu: https://code.launchpad.net/~celebdor/charms/trusty/midonet-agent/cassandra/+merge/287338
[16:38] so I got the various LXD/LXC beta2 packages and installed them
[16:38] there, that was the last one
[16:38] are you on trusty?
[16:38] so i've downgraded which gets rid of api_compat
[16:38] wallyworld: do you know when logging into the api will use the admin user from the accounts (admin@local) instead of 'user-admin'?
[16:38] * apuimedo taking a 30min break
[16:38] I just bought a random server in the cloud and walked through your blog, so i'm sat in xenial
[16:38] but with a downgraded lxc/lxd stack
[16:39] so I don't get the api_compat error
[16:39] oh, I don't think that would work
[16:39] jam: ^^ that shouldn't work right?
[16:39] apuimedo: Ok, I'll get to them shortly
[16:39] well, I figured that as well, so I rolled the juju source right back
[16:39] and it made no real difference
[16:39] the bootstrap node comes up
[16:39] but then can't authenticate against it and fails
[16:40] thanks
[16:40] maybe i didn't roll back far enough, I got to 2.0 alpha 1
[16:42] I think beta2 is the one you want
[16:42] or did you try that one too?
[16:42] hold on, LXD/LXC beta 2?
[16:42] no, juju
[16:42] hmm
[16:42] dunno, i'll go dig out a tag
[16:43] beta2? not alpha2?
[16:43] cause it didn't get into 1.25 did it
[16:43] so it ended up on the devel ppa which became 2.0
[16:45] I'm sorry, I meant juju _alpha_2
[16:45] lol
[16:45] okay
[16:49] building... let's see what she does
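(Referring back to the amulet install-sources exchange at 16:23-16:25: a charm config option that takes a YAML list can be set as a single string, which is what cory_fu's "[source, source, source]" answer amounts to, and amulet's Deployment.configure() passes the same kind of string value through. A hedged example - the charm name, option name and source values below are placeholders, not the midonet charms' actual options.)

```bash
# Hypothetical example of passing three sources as one string-valued option.
juju set midonet-api install_sources="[ppa:example/stable, cloud:trusty-kilo, ppa:example/extras]"
```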
[16:50] na exactly the same connection refused error
[16:50] which makes me wonder what on earth it's doing
[16:51] frankban: when you have time can you put the latest juju-deployer from pypi into juju/stable please?
=== tvansteenburgh1 is now known as tvansteenburgh
[16:54] tvansteenburgh: sure, I'll do that next week
=== scuttle|afk is now known as scuttlemonkey
[16:55] frankban: ta
=== Makyo_ is now known as Makyo
=== freeflying__ is now known as freeflying
=== alexisb is now known as alexisb-afk
=== lazypower_ is now known as lazyPower
=== natefinch is now known as natefinch-lunch
=== CyberJacob is now known as zz_CyberJacob
=== mhall119_ is now known as mhall119
=== natefinch-lunch is now known as natefinch
=== alexisb-afk is now known as alexisb
[19:17] anyone on the juju team able to answer my question(s) to the mailing list?
[19:21] stokachu, yes, I would like wallyworld to answer that one, the short answer is yes it will be there but target depends on progress for the new bootstrap/controller work his team is doing
[19:21] stokachu, I will loop him into the thread to make sure he sees it
[19:21] alexisb: ok perfect thank you!
=== dpm is now known as dpm-afk
[20:36] you around lazyPower?
[20:36] firl you betchya
[20:37] got it up and running
[20:37] awww yeaaaa
[20:38] had a question about services though, how to get an external ip mapped to them
[20:38] We've been talking through this ourselves, mbruzek and I
[20:39] we had some success with consul as service discovery and putting that behind a reverse proxy
[20:39] the other option is to launch a pod with host port mappings which expose them on the network of the machine, like docker run -p 80:80 style
[20:39] firl: TL;DR; it is hard
[20:40] lol
[20:40] what about openstack
[20:40] make it even harder? ;)
[20:41] what about implementing the tcp load balancer service?
[20:42] i like this idea
[20:43] the talk of load balancers and reverse proxies is banned in this channel
[20:43] firl - i took a look here http://kubernetes.io/v1.1/docs/user-guide/services.html#type-loadbalancer, see the subsection about External IPs
[20:44] looks like we just pass it config and the kubes-router does all the iptables magic
[20:44] ya
[20:44] it's up to the kubernetes implementation to be able to implement it
[20:44] ok so, let's talk through this - that's an integration with neutron right?
[20:45] magicaltrout :)
[20:45] lazyPower ya
[20:46] juju has access to the networking ids
[20:47] juju already does this for the maas / lxc implementation also right?
[20:49] yeah, but this is also apples/oranges too, juju has deep integration with lxd/lxc
[20:49] docker is only being modeled by the charm, so all that is on us, and how our charm talks to the components, so the comparison there was a bit off in terms of what's being given automatically
[20:50] well my thought is, doesn't juju also expose the network side of openstack to the juju subsystem
[20:50] if juju had a way to map an IP address from neutron to the container, you can easily forward it
[20:50] yeah, i haven't done a lot with network spaces *yet*
[20:50] but it's there in 2.0
[20:50] juju already controls the security side
[20:50] right
[20:51] have you looked at juju network spaces docs yet?
[20:51] for 2?
no I haven't
[20:51] we should look at that and figure out how to do this :)
[20:51] I have seen some stuff for LACP
[20:51] I think that would be one of the right ways
[20:51] the other thing you could do is just implement the service to spawn up a haProxy charm
[20:52] Right
[20:52] there's also vulcand, nginx w/ consul-template or etcd/confd
[20:52] i think vulcand has done more with kube also
[20:53] ( If I remember correctly )
[20:53] I think it really depends on the workload, and this is going to take a few bundles to get the right options together
[20:53] we looked into this before, and our best success was with the reverse proxy and template watchers
[20:53] but thats been 6 or 7 months ago
[20:53] template watchers being when the kube services change?
[20:53] yeah, as they come up, down, etc.
[20:53] +,- pods?
[20:53] kk
[20:54] yeah I saw a great article on it
[20:54] the containers registered in consul, and consul-template was rewriting an nginx config
[20:58] I can't seem to find the article, but ya essentially what you have mentioned
[20:59] so in the mean time until that gets resolved / figured out. how should I create a mapping? create a private subnet route to the network server and do iptables to map to the internal ip?
=== natefinch_ is now known as natefinch
[21:02] firl: That looks like it would work, if you get that working I would love to read more about it
[21:02] it's easier for me because I have pfsense as the backend and can do that via simple routes
[21:03] but that solution doesn't lend itself to most people
[21:09] I hear ya firl
[21:10] I think the proper solution would be to leverage the juju networking stack
[21:10] for exposing kube services, however being able to do ssl termination of load balancers would be a nice add with a juju bundle
[21:24] core, charmers: I'm experiencing behavior I can't understand or explain when deploying to the openstack provider using 1.25.3 released/released .... when I `juju deploy postgresql`, my machine goes into error state. juju-env -> http://paste.ubuntu.com/15209811/ , machine-0.log -> http://paste.ubuntu.com/15209834/ , nova-api-os-compute.log -> http://paste.ubuntu.com/15209842/
[21:26] core, charmers: but when I `juju deploy postgresql ` I have successful deploys .... has anyone heard of anything like this?
[21:31] core, charmers: my machine-0.log is also getting spammed with "2016-02-26 20:49:17 ERROR juju.rpc server.go:573 error writing response: EOF"
[21:40] core, charmers: I can reproduce this to no end -> http://paste.ubuntu.com/15209967/
[23:21] lazyPower is there a way to specify what internal networking namespace to use?
[23:21] ( I see it )
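(On the external-IP discussion between firl and lazyPower earlier: per the v1.1 services doc linked above, a service can carry an externalIPs list, and traffic arriving on those addresses is routed to the service endpoints by the node-level proxying. A minimal sketch - the service name, selector and the 192.0.2.10 address are placeholders for whatever is actually routed to the kube nodes.)

```bash
# Hypothetical service manifest exposing port 80 on an external IP.
cat <<'EOF' > guestbook-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook
spec:
  selector:
    app: guestbook
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 192.0.2.10
EOF
kubectl create -f guestbook-svc.yaml
```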