[00:00] <agunturu> marcoceppi: Indeed that was the problem. Now it’s working.
[00:00] <agunturu> Thanks for responding
[00:00] <marcoceppi> agunturu: cool, np - sorry it was a bit delayed
[00:15] <stormmore> so I just noticed an oddity
[00:15] <stormmore> I deployed the openstack-base bundle and it looks fine, except for some reason I now have 41 machine requests
[00:38] <marcoceppi> stormmore: that's a bit excessive
[00:39] <marcoceppi> stormmore: it should only use 4
[07:24] <firl_> any openstackers on?
[08:09] <gnuoy> hi firl_, whats up?
[08:10] <firl_> So every now and then one of my nodes goes down for some odd reason
[08:10] <firl_> this case it was a node with neutron-api
[08:10] <firl_> and a compute resource was on this
[08:11] <firl_> when the reboot of the physical machine completed, networking would not come up for the instances until I moved neutron-api to another juju host
[08:11] <firl_> ( kept getting timeouts, no dhcp packets on the qrouter etc )
[08:12] <firl_> Is there a better way to fix the issue than moving neutron-api to and from another host? I have had to do this several times across different environments
[08:13] <gnuoy> Was there anything useful in the neutron-server log on the neutron-api host?
[08:14] <gnuoy> jamespage, can you remind me what that bug was with tunnels not being recreated?
[08:19] <firl_> I destroyed the neutron-api host, I will gather that next time if I see it.
[08:20] <firl_> Is there a way to force the recreation of the qrouter/qdhcp ip namespaces?
[10:22] <admcleod-> regarding reactive, does @when_file_changed work in conjunction with say @when?
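For background on the question above: in charms.reactive a handler can carry several guard decorators, and (as I understand it) the handler only fires when all of them are satisfied — though checking the current charms.reactive docs is advisable, since the framework was evolving at the time. A minimal pure-Python sketch of that composition idea (an illustration only, NOT the real charms.reactive implementation; every name below is invented):

```python
# Toy model of stacking reactive-style guard decorators: the handler
# runs only when every predicate attached to it evaluates true.
# This mimics combining @when with @when_file_changed, but is NOT
# the actual charms.reactive machinery.

def guarded(predicate):
    """Attach a predicate to a handler function."""
    def decorator(fn):
        fn._preds = getattr(fn, "_preds", []) + [predicate]
        return fn
    return decorator

def should_run(fn):
    """True only if every attached predicate passes."""
    return all(pred() for pred in getattr(fn, "_preds", []))

# Hypothetical state for the example.
active_states = {"config.rendered"}
file_changed = True

@guarded(lambda: "config.rendered" in active_states)  # stands in for @when
@guarded(lambda: file_changed)                        # stands in for @when_file_changed
def restart_service():
    return "restarted"

result = restart_service() if should_run(restart_service) else None
```

The point of the sketch is just that stacked guards accumulate on the same handler, so both conditions must hold before it runs.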
[10:36] <apuimedo> jamespage: ping
[10:38] <apuimedo> trying to use lxd in xenial with zfs
[10:38] <apuimedo> http://paste.ubuntu.com/15073291/
[10:43] <apuimedo> I was following http://www.jorgecastro.org/2016/02/12/super-fast-local-workloads-with-juju/
[10:52] <jamespage> apuimedo, still here
[10:52] <apuimedo> ?
[10:52] <jamespage> apuimedo, yeah you're hitting the lxd 2.0 problem that I'm patching into my master branch built from source
[10:53] <jamespage> jcastro, are you deploying on 14.04 or 16.04?
[10:53] <jamespage> apuimedo, that might be the difference - the way lxd takes config updates changed, and I think 14.04 still has the older version
[10:53] <jamespage> apuimedo, still reviewing midonet - hope to get through today
[10:53] <apuimedo> I'm using 16.04
[10:53] <jamespage> I have a few fixes for tests - I'll give them back to you once I know they work
[10:53] <apuimedo> as the post was describing
[10:54] <jamespage> apuimedo, hmm
[10:54] <jamespage> jcastro, can you confirm that all's good and that your deployment is actually using zfs?
[10:55] <apuimedo> jamespage: was your patch done to address this config issue?
[10:57] <jamespage> apuimedo, yes
[10:57] <jamespage> apuimedo,  I run with master + https://github.com/juju/juju/pull/4131.patch
[10:57] <apuimedo> jamespage: can I haz amd64 binary package with the fix?
[11:57] <neiljerram> Good morning!
[11:59] <neiljerram> I wanted to ask about the current upstreams for nova-compute, nova-cloud-controller and neutron-api
[12:00] <neiljerram> In other words, if I'm working on enhancements to those charms - as I am for Liberty support for Calico networking - against exactly what upstreams should I propose changes?
[13:33] <jcastro> jamespage: I am deploying 14.04
[13:33] <jcastro> jamespage: I noticed over the weekend though that the controller can be really flaky - if I fire one up and leave it, it works
[13:34] <jcastro> but tearing it down and setting it back up over and over again eventually fails and I need to kill the container by hand, etc.
[13:43] <jcastro> jamespage: I also noticed that destroying just models messes up the controller, like I have to kill the entire controller every time.
[13:53] <tvansteenburgh> dpb1: you around?
[14:03] <jamespage> apuimedo, https://code.launchpad.net/~james-page/charms/trusty/midonet-agent/trunk/+merge/286059 and -agent is good to go
[14:15] <apuimedo> thanks jamespage. I'll review them now
[14:31] <apuimedo> jamespage: one question about the lsb_release and get_os_codename
[14:33] <apuimedo> doesn't this change make it more difficult to test trusty and xenial codepaths once both are supported?
[14:33] <apuimedo> what do you usually do for the openstack-charmers charms
[14:34] <jamespage> apuimedo, well we generally mock everything out - I'm running your tests on xenial - so they currently fail
[14:34] <jamespage> unit tests should be deterministic across underlying ubuntu release
[14:34] <jamespage> if you want to test xenial, have specific tests to cover that with appropriate mocking.
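A minimal sketch of the mocking approach jamespage describes, so a release-dependent codepath stays deterministic regardless of the host OS. The helper and its behavior here are hypothetical stand-ins, not the actual charm-helpers API:

```python
# Sketch: unit tests stay deterministic across Ubuntu releases by
# mocking the release lookup instead of touching the host OS.
# `Host.get_os_codename` is a stand-in for whatever helper the
# charm really uses.
from unittest import mock


class Host:
    @staticmethod
    def get_os_codename():
        # A real implementation would shell out to lsb_release;
        # the tests below never call it.
        raise NotImplementedError("never touch the host OS in unit tests")


def pick_init_system():
    # Hypothetical release-dependent codepath. Lexicographic
    # comparison works here because Ubuntu codenames are alphabetical
    # by release ("trusty" < "xenial").
    return "systemd" if Host.get_os_codename() >= "xenial" else "upstart"


# Cover both codepaths with explicit mocks, as suggested above.
with mock.patch.object(Host, "get_os_codename", return_value="xenial"):
    assert pick_init_system() == "systemd"

with mock.patch.object(Host, "get_os_codename", return_value="trusty"):
    assert pick_init_system() == "upstart"
```

With the lookup mocked, the same test suite passes whether it runs on trusty, xenial, or (as apuimedo notes below) Arch Linux.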
[14:35] <apuimedo> ok
[14:35] <apuimedo> jamespage: I'm actually running the tests on arch linux :P
[14:35] <jamespage> \o/
[14:35] <apuimedo> jamespage: the other thing is
[14:36] <jamespage> apuimedo, I try to avoid anything that relies on the host os
[14:36] <apuimedo> I guess that with the rmmod thing you are just checking if we are in a container
[14:36] <apuimedo> and refuse to do the action if we are, is that right?
[14:36] <apuimedo> is that related to running on lxd?
[14:57] <apuimedo> jamespage: merged
[14:57] <jamespage> apuimedo, awesome
[14:57] <apuimedo> jamespage: thanks for the suggestions
[15:14] <jose> does anyone know where the office hours were streamed? I can't find them on ubuntuonair
[15:16] <rick_h__> jose: yes, they were on the onair site. It's on the page here https://www.youtube.com/channel/UCSsoSZBAZ3Ivlbt_fxyjIkw
[15:16] <jose> rick_h__: uh, ok. we have a channel for livestreams, but looks like you guys used another one. np though, thanks for the pointer!
[16:10] <apuimedo> jcastro: any idea about the error I posted earlier following the steps on your blog?
[16:11] <apuimedo> http://paste.ubuntu.com/15073291/
[16:11] <apuimedo> in the bootstrap step
[16:15] <jcastro> hmm, no idea on that one
[16:16] <jcastro> did you perhaps launch some containers in the pool before setting up the config?
[16:16] <apuimedo> nope
[16:16] <apuimedo> not even the one example in the post
[16:17] <apuimedo> jcastro: clean xenial install too
[16:17] <jcastro> hmm, no idea on this one
[16:17] <jcastro> have you posted to the list?
[16:18] <apuimedo> jcastro: http://paste.ubuntu.com/15076104/
[16:18] <jcastro> there are people more expert than me on the list
[16:18] <apuimedo> no, not yet
[16:18] <apuimedo> I wanted to see first with you if there was something I was missing
[16:18] <jcastro> that looks the same as what I have
[16:19] <apuimedo> http://paste.ubuntu.com/15076139/
[16:19] <jose> apuimedo: what exactly is going on?
[16:20] <apuimedo> I don't have it as home pool though
[16:20] <apuimedo> I used "juju" as name
[16:20] <jose> ok I just read the pastes, let's see...
[16:20] <apuimedo> jose: I can't bootstrap juju on lxd
[16:20] <apuimedo> (with zfs backend)
[16:20] <jose> apuimedo: would you mind running `sudo lxc-ls --fancy`?
[16:21] <apuimedo> jose: empty
[16:21] <apuimedo> ubuntu@lxd:~$ sudo lxc-ls --fancy | pastebinit
[16:21] <apuimedo> You are trying to send an empty document, exiting.
[16:21] <apuimedo> ubuntu@lxd:~$
[16:22] <jose> so there's that image, error says 'image or container' is using the pool. would it be much to ask to delete that image and then retry bootstrapping?
[16:23] <_Sponge> jose, ARe there any videos going up today ?
[16:23] <jose> _Sponge: sorry?
[16:23] <_Sponge> Are there any videos being published on Juju or UbuntuOnAir channels, today ?
[16:23] <_Sponge> BRBack.
[16:24] <jose> I... don't think so?
[16:24] <jose> it's a US holiday today as well
[16:24] <jose> and I don't think there's any announced broadcasts
[16:26] <apuimedo> hola
[16:26] <apuimedo> I mean, yes
[16:26] <apuimedo> wrong conversation
[16:26] <apuimedo> xD
[16:26] <jose> :P
[16:32] <apuimedo> jose:  http://paste.ubuntu.com/15076545/
[16:33] <jose> what the...
[16:33] <jose> isn't juju supposed to download the image and create an instance and all of that?
[16:34] <apuimedo> jose: jcastro had pulling the image as a step
[16:34] <apuimedo> that's why it was on the list
[16:35] <jose> I'm not too familiar with lxd deployments, I was trying to do some basic debugging. but apparently the error messages contradict themselves...
[16:35] <marcoceppi> apuimedo, jose: no, you have to do lxd-images import first
[16:35] <jose> ohai marcoceppi
[16:36] <marcoceppi> https://jujucharms.com/docs/devel/config-LXD#images
[16:36] <jamespage> apuimedo, hmm - did something change in the key import code? I'm getting timeouts on the key import right now
[16:37] <jamespage> trying to figure out whether it's environmental or something else...
[16:38] <apuimedo> jamespage: nope
[16:38] <apuimedo> unless we are having problems with our servers
[16:39]  * jamespage scratches his head...
[16:39] <jamespage> apuimedo, I think the interaction is with keyserver.ubuntu.com, but puppet is not exactly verbose about what timed out...
[16:51] <apuimedo> marcoceppi: jose: that's what I had done
[16:51] <apuimedo> to get the images
[16:51] <apuimedo> and it exploded anyway
[16:57] <apuimedo> lol
[16:57] <apuimedo> I repeated the sync and then bootstrap again
[16:57] <apuimedo> and, for no particular reason, it worked
[16:57] <apuimedo> after all the weekend crashing
[17:04] <jose> woot woot
[17:04] <jose> glad things are working now :)
[17:05] <apuimedo> jose: it gives me a bad feeling when things are so nondeterministic
[17:06] <apuimedo> but I guess it comes with the alpha state
[17:06] <apuimedo> jamespage: how do you keep several environments in lxd without re-bootstrapping?
[17:06] <jamespage> apuimedo, create-model
[17:08] <jamespage> gnuoy, I need to switch lint -> pep8 in the tox configurations across the charms - ok if I do that as a trivial?
[17:08] <apuimedo> jamespage: and then juju switch I guess
[17:08] <apuimedo> ok, done for the day
[17:08] <apuimedo> thank you all ;-)
[17:23] <bogdanteleaga> hello everybody
[17:24] <bogdanteleaga> do charms need to be changed in any way for 2.0? I've got one charm I've been using for a while now for testing, but it doesn't even get downloaded to the machine
[17:24] <bogdanteleaga> I'm using lxd
[17:24] <bogdanteleaga> latest alpha+xenial
[18:07] <apuimedo> jamespage: marcoceppi: is it possible that there's no amulet in juju-devel?
[18:11] <marcoceppi> apuimedo: yes, amulet is only in ppa:juju/stable
[18:11] <apuimedo> marcoceppi: so I can't run amulet tests with juju 2.0?
[18:11] <marcoceppi> apuimedo: you can, you just need to also add ppa:juju/stable
[18:12] <apuimedo> I hope I won't get conflicts :P
[18:12] <marcoceppi> you won't
[18:12] <marcoceppi> it's safe to combine devel and stable ppa. You'll get devel of juju but all the other packages
[18:13] <apuimedo> ok
[18:21] <apuimedo> marcoceppi: I must be doing something lame http://paste.ubuntu.com/15079852/
[18:22] <marcoceppi> apuimedo: let me take a look
[18:24] <marcoceppi> apuimedo: it's not you, it' sme
[18:24] <marcoceppi> I've just uploaded it for xenial, it was only available on wily and older
[18:24] <marcoceppi> apuimedo: give it about 10 mins to show up
[18:25] <apuimedo> thanks marcoceppi ;-)
[18:25] <apuimedo> marcoceppi: it's smee? https://www.youtube.com/watch?v=bnh6ZDKOVOI
[18:46] <stub> marcoceppi: I think I will move the reactive framework PostgreSQL charm to git://git.launchpad.net/postgresql-charm and stop taking merge proposals on the old branch.
[18:46] <marcoceppi> stub: +1
[18:46] <marcoceppi> stub: I'd also probably just delete the branch as soon as charm push comes out
[18:48] <stub> marcoceppi: Delete which branch? The lp:charms/trusty/postgresql one?
[18:50] <marcoceppi> stub: yeah
[18:51] <stub> charm push will be able to hold a copy of the built charm, but I'd like a git branch to hold a copy too (for sites that can't use the store). Is it a stupid idea to keep that in the same git repository as the main branch?
[18:55] <stormmore> marcoceppi: yeah it is excessive but the bundle only shows 4 so I am not sure where the 41 phantom machines came from
[18:56] <stormmore> well actually 31 forgetting about the 10 containers that I am using
[18:59] <marcoceppi> apuimedo: it's in xenial now
[18:59] <apuimedo> thanks marcoceppi ;-)
[18:59] <apuimedo> marcoceppi: installed!
[19:37] <magicaltrout> writing a reactive charm without help, this is where I find out how much effect the alcohol had, or didn't.....
[19:40] <redelmann> :P
[19:41] <stormmore> well sounds like you should be fine since that sentence was cohesive
[19:41] <magicaltrout> not this evenings alcohol, that is only just beginning
[19:41] <magicaltrout> i'm wondering how much information i actually persisted in belgium :P
[20:15] <bdx> thedac: whats up
[20:20] <bdx> charmers, openstack-charmers: I need to get some input on how haproxy configs are rendered to /etc/haproxy/haproxy.cfg by the openstack services
[20:22] <bdx> charmers, openstack-charmers: for example, I see here -> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L165
[20:23] <bdx> that the haproxy context is generated by context.HAProxyContext
[20:24] <bdx> charmers, openstack-charmers: but where in the codebase is the context written to /etc/haproxy/haproxy.cfg ?
[20:40] <bdx> from what I can tell, http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L320
[20:40] <bdx> takes care of rendering the context into templates that are defined in the resource_map
[20:48] <bdx> charmers, openstack-charmers: but how is the haproxy.cfg rendered for percona-cluster?
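To sketch the pattern bdx is tracing: in the OpenStack charms a context class gathers the template variables, and a renderer (charmhelpers' Jinja-based templating in the real code) writes them into the files declared in the resource_map, such as /etc/haproxy/haproxy.cfg. A stdlib-only illustration of that context-to-template flow — the class name echoes the real HAProxyContext, but the values, template, and render helper here are invented for the example:

```python
# Minimal illustration of the context -> template -> config-file
# pattern used by the OpenStack charms. The real code lives in
# charmhelpers (contrib.openstack context and templating modules);
# this is a stdlib-only sketch of the same idea.
from string import Template


class HAProxyContext:
    """Assemble template variables (a 'context') for haproxy.cfg."""

    def __call__(self):
        # A real charm would pull these from relation data and config.
        return {"frontend_port": 9696, "backends": "10.0.0.2:9686"}


# Stand-in for the haproxy.cfg template shipped with a charm.
HAPROXY_TEMPLATE = Template(
    "listen neutron-api\n"
    "    bind *:$frontend_port\n"
    "    server backend1 $backends check\n"
)


def render(template, context_cls):
    """Instantiate the context, call it, and substitute into the template."""
    return template.substitute(context_cls()())


cfg = render(HAPROXY_TEMPLATE, HAProxyContext)
```

For percona-cluster, presumably the same charmhelpers machinery applies — each charm declares its own mapping of target files to contexts — but the charm's own resource map is the place to confirm that.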
[22:40] <schkovich> hi guys
[22:41] <schkovich> i've been trying to set up a multi-user environment with no success :(
[22:41] <schkovich> im getting environment "fred-local" not found
[22:42] <schkovich> i diligently read the documentation and followed the Managing multi-user environments document :)
[23:12] <marcoceppi> schkovich: which version of juju?
[23:22] <schkovich> marcoceppi: 1.25.3-trusty-amd64
[23:23] <schkovich> marcoceppi: it's supported since 1.21? right?
[23:23] <marcoceppi> schkovich: uh, I'm not sure
[23:24] <schkovich> marcoceppi: managing multi-user environments is in the 1.24 docs
[23:24] <marcoceppi> then, yes
[23:24]  * magicaltrout wonders when the, lets run alpha and trunk builds in search of cool stuff, ethos will come back to haunt him :)
[23:25] <marcoceppi> magicaltrout: longer than you'd think but sooner than you'd want
[23:25] <magicaltrout> hehe
[23:25] <magicaltrout> indeed!
[23:25] <marcoceppi> schkovich: so, what steps are you taking?
[23:26] <schkovich> i followed docs, juju user add fred -o /tmp/fred-local.jenv and so on
[23:27] <schkovich> marcoceppi: this document https://jujucharms.com/docs/1.25/juju-multiuser-environments
[23:27] <marcoceppi> schkovich: let me install 1.25 and give it a whirl
[23:27] <schkovich> marcoceppi: thanks a loooot! :)
[23:28] <schkovich> marcoceppi: let me know if i can provide more information
[23:28] <schkovich> marcoceppi: unfortunately there is nothing in logs :(
[23:30] <schkovich> marcoceppi: i can confirm that user fred is added and enabled; however when i su as fred im getting environment not found;
[23:30] <marcoceppi> schkovich: I think you need to give the fred user something
[23:32] <schkovich> marcoceppi: something like? a pint of bear? ;)
[23:33] <marcoceppi> schkovich: hah, I think the admin needs a pint of beer - fred needs to know the environments endpoint though
[23:33] <schkovich> marcoceppi: lol; he does; confirmed :)
[23:33] <magicaltrout> a pint of bear?! sounds gizzly....
[23:34] <schkovich> marcoceppi: that is in jenv file
[23:34] <schkovich> marcoceppi: variable addresses
[23:35] <schkovich> marcoceppi: though that diverts from documentation
[23:35] <marcoceppi> I just got a 1.25 juju environment bootstrapped
[23:36] <schkovich> marcoceppi: ok; im not going to bug you any more
[23:39] <marcoceppi> schkovich: okay, I can confirm what you're seeing
[23:39] <marcoceppi> let me see if I can get this working
[23:40] <marcoceppi> schkovich: interesting. It's not reading from the .jenv cache
[23:43] <schkovich> marcoceppi: it's not reading $FRED_HOME/.juju/environments ?
[23:44] <schkovich> marcoceppi: perhaps some environment variables are needed?
[23:44] <marcoceppi> schkovich: it expects ~/.juju/environemts/<env>.jenv
[23:44] <marcoceppi> but it's not even getting that far.
[23:44] <schkovich> marcoceppi: exactly
[23:46] <schkovich> marcoceppi: same problem is present in 1.24 i tried to set it up in early january but did not have time to dig into the problem
[23:49] <marcoceppi> schkovich: yeah, going to try in 2.0-alpha2
[23:49] <marcoceppi> see if that's any better
[23:54] <schkovich> marcoceppi: i moved a step further: 2016-02-15 23:53:30 WARNING juju.api api.go:140 discarding API open error: invalid entity name or password
[23:54] <marcoceppi> schkovich: it works really well in 2.0-alpha2 :\
[23:54] <schkovich> marcoceppi: ha
[23:55] <marcoceppi> schkovich: yeah, i got that as well after moving things like state-server and such to the jenv file
[23:55] <schkovich> marcoceppi: yes that's what i did as well
[23:55] <schkovich> marcoceppi: is 2.0-alpha2 stable and reliable? in production?
[23:55] <marcoceppi> no and nope
[23:55] <marcoceppi> it's an alpha :\
[23:56] <marcoceppi> it'll be released in ~ April though
[23:56] <marcoceppi> and will be the recommended then
[23:56] <schkovich> marcoceppi: will i have to change charms?
[23:57] <marcoceppi> schkovich: no, charms written for 1.x are 2.0 compatible
[23:57] <schkovich> marcoceppi: nice
[23:57] <marcoceppi> schkovich: it's 2.0 because some of the APIs and commands are changing
[23:58] <schkovich> marcoceppi: :( i have staging environment running on virtual maas and production in rackspace
[23:59] <schkovich> marcoceppi: anyway, thanks; shall i file a bug report?
[23:59] <marcoceppi> schkovich: I would
[23:59] <schkovich> marcoceppi: will 1.* be maintained after 2.* is out?
[23:59] <marcoceppi> 1.25.X will for a bit