[00:00] marcoceppi: Indeed that was the problem. Now it's working.
[00:00] Thanks for responding
[00:00] agunturu: cool, np - sorry it was a bit delayed
[00:15] so I just noticed an oddity
[00:15] I deployed the openstack-base bundle and it looks fine except for some reason I now have 41 machine requests
[00:38] stormmore: that's a bit excessive
[00:39] stormmore: it should only use 4
=== alexlist` is now known as alexlist
=== axino` is now known as axino
[07:24] any openstackers on?
=== caribou_ is now known as Caribou
[08:09] hi firl_, what's up?
[08:10] So every now and then one of my nodes goes down for some odd reason
[08:10] this case it was a node with neutron-api
[08:10] and a compute resource was on this
[08:11] when the reboot of the physical machine completed, networking would not come up for the instances until I moved neutron-api to another juju host
[08:11] ( kept getting timeouts, no dhcp packets on the qrouter etc )
[08:12] Is there a better way to fix the issue than moving neutron-api to and from another host? I have had to do this several times between different environments
[08:13] Was there anything useful in the neutron-server log on the neutron-api host?
[08:14] jamespage, can you remind me what that bug was with tunnels not being recreated?
[08:19] I destroyed the neutron-api host, I will gather that next time if I see it.
[08:20] Is there a way to force the recreation of the qrouter/qdhcp ip namespaces?
=== Odd_Blok1 is now known as Odd_Bloke
=== BigJ64_ is now known as BigJ64
[10:22] regarding reactive, does @when_file_changed work in conjunction with, say, @when?
[10:36] jamespage: ping
[10:38] trying to use lxd in xenial with zfs
[10:38] http://paste.ubuntu.com/15073291/
[10:43] I was following http://www.jorgecastro.org/2016/02/12/super-fast-local-workloads-with-juju/
[10:52] apuimedo, still here
[10:52] ?
[10:52] apuimedo, yeah you're hitting the lxd 2.0 problem that I'm patching into my master branch built from source
[10:53] jcastro, are you deploying on 14.04 or 16.04?
[10:53] apuimedo, that might be the difference - the way lxd takes config updates changed, and I think 14.04 still has the older version
[10:53] apuimedo, still reviewing midonet - hope to get through it today
[10:53] I'm using 16.04
[10:53] I have a few fixes for tests - I'll give them back to you once I know they work
[10:53] as the post was describing
[10:54] apuimedo, hmm
[10:54] jcastro, can you confirm that all's good and that your deployment is actually using zfs?
[10:55] jamespage: was your patch done to address this config issue?
[10:57] apuimedo, yes
[10:57] apuimedo, I run with master + https://github.com/juju/juju/pull/4131.patch
[10:57] jamespage: can I haz amd64 binary package with the fix?
[11:57] Good morning!
[11:59] I wanted to ask about the current upstreams for nova-compute, nova-cloud-controller and neutron-api
[12:00] In other words, if I'm working on enhancements to those charms - as I am for Liberty support for Calico networking - against exactly what upstreams should I propose changes?
[13:33] jamespage: I am deploying 14.04
[13:33] jamespage: I noticed over the weekend though that the controller can be really flaky, like if I fire one up and leave it, it works
[13:34] but tearing it down and setting it back up over and over again eventually fails and I need to kill the container by hand, etc.
[13:43] jamespage: I also noticed that destroying just models messes up the controller, like I have to kill the entire controller every time.
[13:53] dpb1: you around?
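On the 08:20 question about forcing recreation of the qrouter/qdhcp namespaces: neutron ships a `neutron-netns-cleanup` utility for exactly this, and a blunter approach is to delete the stale namespaces and restart the L3/DHCP agents so they rebuild them on their next sync. A rough, hedged sketch of the blunt approach (run as root on the affected host; the namespace prefixes are the standard neutron ones):

```python
# Rough sketch, not charm tooling: delete stale qrouter-*/qdhcp-* namespaces
# so neutron-l3-agent / neutron-dhcp-agent recreate them after a restart.
# Use with care on a live node; neutron-netns-cleanup is the stock alternative.
import subprocess


def neutron_namespaces():
    out = subprocess.check_output(['ip', 'netns', 'list']).decode()
    # each line of `ip netns list` starts with the namespace name
    names = [line.split()[0] for line in out.splitlines() if line.strip()]
    return [n for n in names if n.startswith(('qrouter-', 'qdhcp-'))]


for ns in neutron_namespaces():
    subprocess.check_call(['ip', 'netns', 'delete', ns])
```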
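On the 10:22 reactive question: handler decorators in charms.reactive can be stacked, with each one adding a precondition. A minimal sketch, where the state name 'myservice.installed' and the config path are invented for illustration:

```python
# Minimal sketch assuming the charms.reactive framework. The handler runs
# only when the state is set AND the watched file's contents have changed
# since the last dispatch loop.
from charms.reactive import when, when_file_changed
from charmhelpers.core.host import service_restart


@when('myservice.installed')
@when_file_changed('/etc/myservice/myservice.conf')
def restart_on_config_change():
    service_restart('myservice')  # pick up the new config
```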
[14:03] apuimedo, https://code.launchpad.net/~james-page/charms/trusty/midonet-agent/trunk/+merge/286059 and -agent is good to go
[14:15] thanks jamespage. I'll review them now
=== bpierre_ is now known as bpierre
[14:31] jamespage: one question about the lsb_release and get_os_codename
[14:33] doesn't this change make it more difficult to test trusty and xenial codepaths once both are supported?
[14:33] what do you usually do for the openstack-charmers charms?
[14:34] apuimedo, well we generally mock everything out - I'm running your tests on xenial - so they currently fail
[14:34] unit tests should be deterministic across the underlying ubuntu release
[14:34] if you want to test xenial, have specific tests to cover that with appropriate mocking.
[14:35] ok
[14:35] jamespage: I'm actually running the tests on arch linux :P
[14:35] \o/
[14:35] jamespage: the other thing is
[14:36] apuimedo, I try to avoid anything that relies on the host os
[14:36] I guess that with the rmmod thing you are just checking if we are in a container
[14:36] and refuse to do the action if we are, is that right?
[14:36] is that related to running on lxd?
=== mbruzek is now known as mbruzek-holiday
[14:57] jamespage: merged
[14:57] apuimedo, awesome
[14:57] jamespage: thanks for the suggestions
[15:14] does anyone know where the office hours were streamed? I can't find them on ubuntuonair
[15:16] jose: yes, they were on the onair site. It's on the page here https://www.youtube.com/channel/UCSsoSZBAZ3Ivlbt_fxyjIkw
[15:16] rick_h__: uh, ok. we have a channel for livestreams, but looks like you guys used another one. np though, thanks for the pointer!
[16:10] jcastro: any idea about the error I posted earlier following the steps on your blog?
[16:11] http://paste.ubuntu.com/15073291/
[16:11] in the bootstrap step
=== scuttle|afk is now known as scuttlemonkey
[16:15] hmm, no idea on that one
[16:16] did you perhaps launch some containers in the pool before setting up the config?
[16:16] nope
[16:16] not even the one example in the post
[16:17] jcastro: clean xenial install too
[16:17] hmm, no idea on this one
[16:17] have you posted to the list?
[16:18] jcastro: http://paste.ubuntu.com/15076104/
[16:18] there are people more expert than me on the list
[16:18] no, not yet
[16:18] I wanted to see first with you if there was something I was missing
[16:18] that looks the same as what I have
[16:19] http://paste.ubuntu.com/15076139/
[16:19] apuimedo: what exactly is going on?
[16:20] I don't have it as home pool though
[16:20] I used "juju" as the name
[16:20] ok I just read the pastes, let's see...
[16:20] jose: I can't bootstrap juju on lxd
[16:20] (with zfs backend)
[16:20] apuimedo: would you mind running `sudo lxc-ls --fancy`?
[16:21] jose: empty
[16:21] ubuntu@lxd:~$ sudo lxc-ls --fancy | pastebinit
[16:21] You are trying to send an empty document, exiting.
[16:21] ubuntu@lxd:~$
[16:22] so there's that image, error says 'image or container' is using the pool. would it be much to ask to delete that image and then retry bootstrapping?
[16:23] <_Sponge> jose, Are there any videos going up today?
[16:23] _Sponge: sorry?
[16:23] <_Sponge> Are there any videos being published on Juju or UbuntuOnAir channels, today?
[16:23] <_Sponge> BRBack.
[16:24] I... don't think so?
[16:24] it's a US holiday today as well
[16:24] and I don't think there's any announced broadcasts
[16:26] hola
[16:26] I mean, yes
[16:26] wrong conversation
[16:26] xD
[16:26] :P
[16:32] jose: http://paste.ubuntu.com/15076545/
[16:33] what the...
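A sketch of the mocking approach jamespage describes at 14:34, so unit tests stay deterministic regardless of the host release. 'charm_utils' and its get_os_codename are hypothetical stand-ins for the module under test, not real charmhelpers paths:

```python
# Sketch: pin lsb_release so both trusty and xenial codepaths are exercised
# on any host (xenial, trusty, or even arch linux).
from unittest import TestCase
from mock import patch  # the python-mock library used by the charm test suites

import charm_utils  # hypothetical module under test


class TestCodenamePaths(TestCase):

    @patch('charm_utils.lsb_release')
    def test_trusty_path(self, lsb_release):
        lsb_release.return_value = {'DISTRIB_CODENAME': 'trusty'}
        self.assertEqual(charm_utils.get_os_codename(), 'trusty')

    @patch('charm_utils.lsb_release')
    def test_xenial_path(self, lsb_release):
        lsb_release.return_value = {'DISTRIB_CODENAME': 'xenial'}
        self.assertEqual(charm_utils.get_os_codename(), 'xenial')
```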
[16:33] isn't juju supposed to download the image and create an instance and all of that?
[16:34] jose: jcastro had pulling the image as a step
[16:34] that's why it was on the list
[16:35] I'm not too familiar with lxd deployments, I was trying to do some basic debugging. but apparently the error messages contradict themselves...
[16:35] apuimedo, jose: no, you have to do lxd-images import first
[16:35] ohai marcoceppi
[16:36] https://jujucharms.com/docs/devel/config-LXD#images
[16:36] apuimedo, hmm - did something change in the key import code? I'm getting timeouts on the key import right now
[16:37] trying to figure out whether it's environmental or else...
[16:38] jamespage: nope
[16:38] unless we are having problems with our servers
[16:39] * jamespage scratches his head...
[16:39] apuimedo, I think the interaction is with keyserver.ubuntu.com, but puppet is not exactly verbose about what timed out...
[16:51] marcoceppi: jose: that's what I had done
[16:51] to get the images
[16:51] and it exploded anyway
[16:57] lol
[16:57] I repeated the sync and then the bootstrap again
[16:57] and, for no particular reason, it worked
[16:57] after all the weekend crashing
[17:04] woot woot
[17:04] glad things are working now :)
[17:05] jose: it gives me a bad feeling when things are so nondeterministic
[17:06] but I guess it comes with the alpha state
[17:06] jamespage: how do you keep several environments in lxd without re-bootstrapping?
[17:06] apuimedo, create-model
[17:08] gnuoy, I need to switch lint -> pep8 in the tox configurations across the charms - ok if I do that as a trivial/
[17:08] >
[17:08] ?
[17:08] jamespage: and then juju switch I guess
[17:08] ok, done for the day
[17:08] thank you all ;-)
[17:23] hello everybody
[17:24] do charms need to be changed in any way for 2.0? I've got one charm I've been using for a while now for testing, but it doesn't even get downloaded to the machine
[17:24] I'm using lxd
[17:24] latest alpha + xenial
=== bladernr_ is now known as bladernr
[18:07] jamespage: marcoceppi: is it possible that there's no amulet in juju-devel?
[18:11] apuimedo: yes, amulet is only in ppa:juju/stable
[18:11] marcoceppi: so I can't run amulet tests with juju 2.0?
[18:11] apuimedo: you can, you just need to also add ppa:juju/stable
[18:12] I hope I won't get conflicts :P
[18:12] you won't
[18:12] it's safe to combine the devel and stable ppas. You'll get the devel version of juju but all the other packages from stable
[18:13] ok
[18:21] marcoceppi: I must be doing something lame http://paste.ubuntu.com/15079852/
[18:22] apuimedo: let me take a look
[18:24] apuimedo: it's not you, it' sme
[18:24] I've just uploaded it for xenial, it was only available on wily and older
[18:24] apuimedo: give it about 10 mins to show up
[18:25] thanks marcoceppi ;-)
[18:25] marcoceppi: it's smee? https://www.youtube.com/watch?v=bnh6ZDKOVOI
[18:46] marcoceppi: I think I will move the reactive framework PostgreSQL charm to git://git.launchpad.net/postgresql-charm and stop taking merge proposals on the old branch.
[18:46] stub: +1
[18:46] stub: I'd also probably just delete the branch as soon as charm push comes out
[18:48] marcoceppi: Delete which branch? The lp:charms/trusty/postgresql one?
[18:50] stub: yeah
[18:51] charm push will be able to hold a copy of the built charm, but I'd like a git branch to hold a copy too (for sites that can't use the store). Is it a stupid idea to keep that in the same git repository as the main branch?
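For reference after the 18:11 exchange: amulet comes from ppa:juju/stable and its tests are plain Python scripts along these lines ('postgresql' is just an arbitrary example charm here, and the psql check is illustrative):

```python
#!/usr/bin/env python3
# Minimal amulet test sketch: deploy a single charm and assert a command
# succeeds on the deployed unit.
import amulet

d = amulet.Deployment(series='trusty')
d.add('postgresql')
d.expose('postgresql')

try:
    d.setup(timeout=900)  # deploy and wait for the environment to settle
except amulet.helpers.TimeoutError:
    amulet.raise_status(amulet.SKIP, msg='Environment was not stood up in time')

unit = d.sentry['postgresql'][0]
output, code = unit.run('sudo -u postgres psql -c "SELECT 1"')
assert code == 0, output
```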
[18:55] marcoceppi: yeah it is excessive but the bundle only shows 4 so I am not sure where the 41 phantom machines came from
[18:56] well actually 31, forgetting about the 10 containers that I am using
[18:59] apuimedo: it's in xenial now
[18:59] thanks marcoceppi ;-)
[18:59] marcoceppi: installed!
=== mbruzek is now known as mbruzek-holiday
=== bdx_ is now known as bdx
[19:37] writing a reactive charm without help, this is where I find out how much effect the alcohol had, or didn't.....
[19:40] :P
[19:41] well sounds like you should be fine since that sentence was cohesive
[19:41] not this evening's alcohol, that is only just beginning
[19:41] i'm wondering how much information i actually persisted in belgium :P
[20:15] thedac: what's up
[20:20] charmers, openstack-charmers: I need to get some input on how haproxy configs are rendered to /etc/haproxy/haproxy.cfg by the openstack services
[20:22] charmers, openstack-charmers: for example, I see here -> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L165
[20:23] that the haproxy context is generated by context.HAProxyContext
[20:24] charmers, openstack-charmers: but where in the codebase is the context written to /etc/haproxy/haproxy.cfg?
[20:40] from what I can tell, http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L320
[20:40] takes care of rendering the context into templates that are defined in the resource_map
[20:48] charmers, openstack-charmers: but how is the haproxy.cfg rendered for percona-cluster?
[22:40] hi guys
[22:41] i've been trying to set up a multi-user environment with no success :(
[22:41] im getting environment "fred-local" not found
[22:42] i diligently read the documentation and followed the Managing multi-user environments document :)
=== thumper is now known as thumper-dogwalk
[23:12] schkovich: which version of juju?
[23:22] marcoceppi: 1.25.3-trusty-amd64
[23:23] marcoceppi: it's supported since 1.21? right?
[23:23] schkovich: uh, I'm not sure
[23:24] marcoceppi: managing multi-user environments is in the 1.24 docs
[23:24] then, yes
[23:24] * magicaltrout wonders when the, let's run alpha and trunk builds in search of cool stuff, ethos will come back to haunt him :)
[23:25] magicaltrout: longer than you'd think but sooner than you'd want
[23:25] hehe
[23:25] indeed!
[23:25] schkovich: so, what steps are you taking?
[23:26] i followed the docs, juju user add fred -o /tmp/fred-local.jenv and so on
[23:27] marcoceppi: this document https://jujucharms.com/docs/1.25/juju-multiuser-environments
[23:27] schkovich: let me install 1.25 and give it a whirl
[23:27] marcoceppi: thanks a loooot! :)
[23:28] marcoceppi: let me know if i can provide more information
[23:28] marcoceppi: unfortunately there is nothing in the logs :(
[23:30] marcoceppi: i can confirm that user fred is added and enabled; however when i su as fred im getting environment not found
[23:30] schkovich: I think you need to give the fred user something
[23:32] marcoceppi: something like? a pint of bear? ;)
[23:33] schkovich: hah, I think the admin needs a pint of beer - fred needs to know the environment's endpoint though
[23:33] marcoceppi: lol; he does; confirmed :)
[23:33] a pint of bear?! sounds grizzly....
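On the 20:20-20:48 haproxy question: the common pattern across the OpenStack charms is that register_configs() maps each target file (via the resource_map) to a list of context generators, and a later write()/write_all() call renders the matching template with those contexts. A condensed sketch, with the release string and templates dir as illustrative values rather than any specific charm's code:

```python
# Condensed sketch of the charmhelpers rendering pattern used by the
# OpenStack charms: register /etc/haproxy/haproxy.cfg against
# HAProxyContext, then render it from a hook.
from charmhelpers.contrib.openstack import context, templating

HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'


def register_configs():
    configs = templating.OSConfigRenderer(templates_dir='templates/',
                                          openstack_release='liberty')
    # HAProxyContext gathers the cluster peer addresses for the template
    configs.register(HAPROXY_CONF,
                     [context.HAProxyContext(singlenode_mode=True)])
    return configs


# typically invoked from config-changed / cluster-relation hooks:
register_configs().write(HAPROXY_CONF)
```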
[23:34] marcoceppi: that is in the jenv file
[23:34] marcoceppi: the addresses variable
[23:35] marcoceppi: though that diverges from the documentation
[23:35] I just got a 1.25 juju environment bootstrapped
[23:36] marcoceppi: ok; im not going to bug you any more
[23:39] schkovich: okay, I can confirm what you're seeing
[23:39] let me see if I can get this working
[23:40] schkovich: interesting. It's not reading from the .jenv cache
[23:43] schkovich: it's not reading $FRED_HOME/.juju/environments?
[23:44] marcoceppi: perhaps some environment variables are needed?
[23:44] schkovich: it expects ~/.juju/environments/.jenv
[23:44] but it's not even getting that far.
[23:44] marcoceppi: exactly
[23:46] marcoceppi: the same problem is present in 1.24; i tried to set it up in early january but did not have time to dig into the problem
[23:49] schkovich: yeah, going to try in 2.0-alpha2
[23:49] see if that's any better
[23:54] marcoceppi: i moved a step further: 2016-02-15 23:53:30 WARNING juju.api api.go:140 discarding API open error: invalid entity name or password
[23:54] schkovich: it works really well in 2.0-alpha2 :\
[23:54] marcoceppi: ha
[23:55] schkovich: yeah, i got that as well after moving things like state-server and such to the jenv file
[23:55] marcoceppi: yes that's what i did as well
[23:55] marcoceppi: is 2.0-alpha2 stable and reliable? in production?
[23:55] no and nope
[23:55] it's an alpha :\
[23:56] it'll be released in ~April though
[23:56] and will be the recommended version then
[23:57] schkovich: will i have to change my charms?
[23:57] schkovich: no, charms written for 1.x are 2.0 compatabile
[23:57] compatible*
[23:57] marcoceppi: nice
[23:57] schkovich: 2.0 is because some of the APIs and commands are changing
[23:58] marcoceppi: :( i have a staging environment running on virtual maas and production in rackspace
[23:59] marcoceppi: anyway, thanks; shall i file a bug report?
[23:59] schkovich: I would
[23:59] marcoceppi: will 1.* be maintained after 2.* is out?
[23:59] 1.25.X will be, for a bit
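A hedged helper for the fred-local problem above: before su'ing to fred, check that the copied .jenv actually carries the fields 1.25 needs. The key list here (user, password, state-servers, ca-cert, environ-uuid) is an assumption from typical 1.25 .jenv files, not a verified schema:

```python
# Hedged sketch: sanity-check a shared .jenv before handing it to a new user.
# The expected keys are assumptions based on typical juju 1.25 .jenv files.
import os
import yaml

EXPECTED = ('user', 'password', 'state-servers', 'ca-cert', 'environ-uuid')


def check_jenv(path):
    with open(path) as f:
        env = yaml.safe_load(f)
    missing = [k for k in EXPECTED if k not in env]
    if missing:
        print('%s is missing: %s' % (path, ', '.join(missing)))
    else:
        print('%s looks complete' % path)


check_jenv(os.path.expanduser('~/.juju/environments/fred-local.jenv'))
```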