[09:27] <jamespage> D4RKS1D3, I am
[09:29] <urulama> jamespage: morning ... fyi, we'll rollout new release of charm store this week, which will support xenial charms that you guys have there already
[09:31] <jamespage> urulama, awesome...
[09:31] <jamespage> gnuoy, ^^ will need that for 16.04 release
[09:32] <gnuoy> ack
[10:27] <stub> tvansteenburgh: AWS job got stuck over the weekend, unreaped. And lxc is failing with the can't-bootstrap-port-in-use problem again.
[11:07] <D4RKS1D3> Hi jamespage , morning
[11:07] <D4RKS1D3> I am having problems with add ml2_odl into the ml2_conf.ini file
[11:09] <jamespage> D4RKS1D3, ok - so you need to help me out a bit here
[11:09] <jamespage> D4RKS1D3, are you using the neutron-api-odl charm with neutron-api?
[11:09] <D4RKS1D3> Yes
[11:10] <jamespage> D4RKS1D3, ok so the neutron-api-odl charm writes that section to ml2_conf.ini
[11:10] <jamespage> D4RKS1D3, can you check whether you have
[11:12] <D4RKS1D3> I have the file but, without ml2_odl config
[11:13] <D4RKS1D3> In the log, i have "juju-log Writing file /etc/neutron/plugins/ml2/ml2_conf.ini root:root 444"
[11:13] <D4RKS1D3> and the file has changed, but without this section
[11:19] <D4RKS1D3> Is it maybe the relation between them?
[11:19] <D4RKS1D3> I tested with both, but neither works
[12:05] <jamespage> D4RKS1D3, sorry - someone distracted me
[12:05] <jamespage> D4RKS1D3, right - so can you check
[12:05] <jamespage> manage-neutron-plugin-legacy-mode=False on the neutron-api charm
[12:05] <jamespage> otherwise it will keep overwriting the file written by the neutron-api-odl charm
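(For context, a sketch of how the option jamespage refers to can be checked and set on the Juju 1.x CLI in use here; the service name `neutron-api` matches the deployment under discussion.)

```shell
# Inspect the current charm settings for the neutron-api service
juju get neutron-api | grep -A 4 manage-neutron-plugin-legacy-mode

# Disable legacy mode so neutron-api stops rewriting ml2_conf.ini,
# leaving that file to the neutron-api-odl subordinate
juju set neutron-api manage-neutron-plugin-legacy-mode=False
```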
[12:05] <D4RKS1D3> yes
[12:07] <jamespage> D4RKS1D3, yes, it's set like that?
[12:07] <D4RKS1D3> I am looking
[12:09] <D4RKS1D3> I have this option set to false
[12:11] <jamespage> D4RKS1D3, good
[12:11] <jamespage> so that means that neutron-api-odl should be writing the ml2_conf.ini file
[12:12] <D4RKS1D3> but I already had this option set to false before you told me
[12:12] <jamespage> 'should'
[12:12] <jamespage> ;-)
[12:13] <jamespage> D4RKS1D3, do you have a bundle you are using?
[12:14] <D4RKS1D3> no
[12:14] <D4RKS1D3> I'm using the openstack-charms-next (3 charms available in juju store)
[12:15] <D4RKS1D3> it seems that the neutron-api-odl plugin is not being executed even though the relations are properly set
[12:15] <jamespage> D4RKS1D3, well maybe - can I see the juju status output please
[12:17] <D4RKS1D3> Yes, one second, please
[12:20] <D4RKS1D3> - neutron-api/0: 10.1.23.165 (started) 9696/tcp  - neutron-api-odl/2: 10.1.23.165 (error)
[12:22] <D4RKS1D3> - neutron-api/0: 10.1.23.165 (started) 9696/tcp  - neutron-api-odl/2: 10.1.23.165 (started)
[12:22] <D4RKS1D3> (this is with a juju resolved)
[12:22] <D4RKS1D3> the configuration is not updated anyway
[12:24] <D4RKS1D3> all services are "started"
[12:31] <jamespage> D4RKS1D3, I need the full status output please "juju status" - pastebin is a good idea
[12:31] <D4RKS1D3> http://paste.ubuntu.com/14662888/
[12:46] <D4RKS1D3> jamespage, this is the link
[12:58] <jamespage> D4RKS1D3, problem 1 - don't use neutron-openvswitch and openvswitch-odl
[12:58] <jamespage> for ODL, drop use of neutron-openvswitch
[12:59] <jamespage> that might resolve your issue - pls try
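(A sketch of that fix on the Juju 1.x CLI, assuming the service names from the status output above; the relation endpoint shown is illustrative and may differ per charm.)

```shell
# Drop the Open vSwitch service that conflicts with openvswitch-odl
juju destroy-service neutron-openvswitch

# Relate openvswitch-odl where neutron-openvswitch used to sit,
# e.g. to nova-compute (illustrative endpoint)
juju add-relation openvswitch-odl nova-compute
```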
[12:59] <D4RKS1D3> Okey, one minute
[13:32] <D4RKS1D3> jamespage, now works!
[13:33] <D4RKS1D3> Do you know if the configuration of the compute node needs to be changed?
[13:34] <D4RKS1D3> In theory I think it should be changed like the other file
[14:19] <apuimedo> jamespage: ping
[14:20] <apuimedo> would http://paste.openstack.org/show/484872/ address your review comment?
[14:22] <tych0> lazypower: pong
[14:23] <lazypower> tych0 - hey there. is there anything special i need to do when working with lxd remotes?
[14:23] <lazypower> i spun up 2 VMs to sample lxd migration, and was unable to add the remote lxd instance. i'm unsure if it's networking, ssl keys, or otherwise
[14:23] <tych0> lazypower: you have to turn on HTTPS listening on the remote, but then it should work as usual
[14:23] <lazypower> aaahhh
[14:23] <lazypower> now that explains why i was losing hair over this
[14:24] <tych0> https://github.com/lxc/lxd#how-to-enable-lxd-server-for-remote-access
[14:25] <lazypower> ta
[14:44] <jamespage> apuimedo, looking
[14:46] <jamespage> apuimedo, config is not dict-like I'm afraid - but it does cache so "if config('midonet-origin') and" would be fine
[14:47] <apuimedo> are you sure? I thought it's just a json.loads
[14:48] <apuimedo> jamespage: ^^
[14:50] <jamespage> apuimedo, I am
[14:50] <jamespage> def config(scope=None):
[14:50] <jamespage> if you get the entire dict, you could do that
[14:50] <jamespage> config = config()
[14:50] <jamespage> config.get('midonet-origin', '')
[14:51] <jamespage> don't copy my ugly code that overrides the symbol config :-=)
[14:52] <apuimedo> yeah.. I was now looking at the class
[14:54] <apuimedo> jamespage: I'm probably misreading, but in hookenv.py:config
[14:54] <D4RKS1D3> jamespage, could you answer my question?, Thanks
[14:55] <apuimedo> nothing... I was ignoring the "except" ::P
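(A minimal, self-contained sketch of the `charmhelpers` `hookenv.config()` semantics being discussed: called with no scope it returns the whole config; called with a key it returns that key's value, or None when missing - the "except" apuimedo mentions. The `_cache` dict and its contents are stand-ins for the real hook-tool call, purely for illustration.)

```python
# Stand-in for the cached charm config; the real implementation reads
# config via hook tools and caches the result (hypothetical values here).
_cache = {'midonet-origin': 'cloud:trusty-liberty'}

def config(scope=None):
    if scope is None:
        return dict(_cache)      # whole config as a dict
    return _cache.get(scope)     # single value; None if the key is unset

# The guard pattern jamespage suggested: config() caches, so a per-key
# call is cheap, and a truthy check safely handles an unset option.
if config('midonet-origin'):
    midonet_origin = config('midonet-origin')
```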
[14:55] <jamespage> D4RKS1D3, I'd set the same config option on nova-compute as I said for neutron-api (sorry, missed your question)
[14:56]  * jamespage watches too many channels
[14:57] <lazypower> tych0 one thing i noticed was that BTRFS performance was a night/day comparison with lxd. i assume it's the same with zfs?
[14:58] <apuimedo> jamespage: http://paste.openstack.org/show/484880/
[14:59] <jamespage> apuimedo, +1
[15:00]  * jamespage likes defensive code
[15:00] <apuimedo> cool
[15:00] <apuimedo> I'll update the proposal
[15:01] <tych0> lazypower: in comparison with lxd?
[15:01] <tych0> you mean the default directory backend?
[15:01] <lazypower> tych0 - lxd using btrfs snapshots vs regular ext4 operation
[15:01] <tych0> yes, naturally :)
[15:01] <tych0> should be the same with zfs
[15:01] <lazypower> it reduced container spinup from ~ 30 seconds to sub-2 seconds
[15:02] <tych0> yep
[15:02] <tych0> should also reduce snapshotting, etc.
[15:03] <apuimedo> jamespage: https://code.launchpad.net/~celebdor/charm-helpers/liberty/+merge/283707
[15:03] <apuimedo> updated and ready ;-)
[15:04] <jamespage> apuimedo, ack
[15:17] <tiagogomes> Hi, I am trying to bootstrap Juju in OpenStack. Is swift required?
[15:20] <jamespage> tiagogomes, yes
[15:20] <tiagogomes>  /o\ Can I get away with using nova object store?
[15:49] <tiagogomes> Anyone know what this means: 2016-01-25 15:47:33 ERROR juju.cmd supercommand.go:429 failed to bootstrap environment: index file has no data for cloud {RegionOne http://10.24.100.10:5000/v2.0/} not found
[16:01] <rick_h_> tiagogomes: it can't access the streams file. There's a json file from which it reads what images/etc are available
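(For a private OpenStack with no published streams, Juju 1.x lets you generate the image metadata yourself; a sketch assuming the `juju metadata` plugin of that era, with a placeholder image id and directory - flags may differ by minor version.)

```shell
# Generate simplestreams image metadata for this cloud's region/endpoint
# (<image-id> is a placeholder for a Glance image id)
juju metadata generate-image -i <image-id> -s trusty \
    -r RegionOne -u http://10.24.100.10:5000/v2.0/ -d ~/simplestreams

# Bootstrap pointing at the locally generated metadata
juju bootstrap --metadata-source ~/simplestreams
```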
[16:13] <apuimedo> thanks for the merge jamespage
[16:13] <apuimedo> jamespage: should I request a backport to stable?
[16:13] <jamespage> apuimedo, needs to all land in /next first
[16:13] <apuimedo> ok
[16:14] <jamespage> then we can talk about a backport (we have a release going out on thurs for openstack charms so will need to be after that)
[16:17] <apuimedo> jamespage: is it already frozen, the Thursday release?
[16:24] <jrwren> anyone know offhand how to import a wily image into lxd for use with juju lxd provider?
[16:35] <lovea> Hi, Trying to use ProLiant DL160 Gen9 servers in a MAAS set up. They have hpdsa drivers detected. So far so good. Then I use Juju to deploy a 15.10 Wily charm and the deploy fails because the hpdsa drivers are requested from http://downloads.linux.hp.com/SDR/repo/ubuntu-hpdsa/dists/wily/main/binary-amd64/Packages. This doesn't exist. HP only has a 14.04 Trusty repo. Any ideas how I can proceed?
[16:35] <jhobbs> narindergupta: ^^
[16:35] <jamespage> apuimedo, yes
[16:35] <lovea> HP is saying "HPE is supporting only LTS releases (14.04, 16.04).    hpdsa driver does use dkms, so, while it
[16:35] <lovea> may be possible to upgrade to Wily, it has not been tested in our lab."
[16:36] <narindergupta> lovea, jhobbs: HP is not building hpdsa any more, so the best way is to enable AHCI mode on the controller in the BIOS
[16:37] <lovea> narindergupta: I was wondering about that. Thanks for the tip, AHCI mode it is then.
[16:38] <narindergupta> lovea: correct
[16:38] <narindergupta> lovea: in ahci mode it will use SATA driver
[16:39] <lovea> Which is fine for me
[16:40] <lovea> narindergupta: Many thanks
[16:40] <narindergupta> lovea: I hope you are aware of how to change to AHCI mode?
[16:41] <lovea> narindergupta: I just need to navigate the DL160 BIOS/iLO config utility!!
[16:47] <jrwren> ah, i was missing the lxd image alias, added the alias and all is good.
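(The fix jrwren describes, sketched with `lxc image` commands of the era; the alias name the juju lxd provider expects and the fingerprint are assumptions/placeholders.)

```shell
# Find the fingerprint of the already-imported wily image
lxc image list

# Give it the alias the provider looks up, e.g. ubuntu-wily
lxc image alias create ubuntu-wily <fingerprint>
```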
[16:54] <D4RKS1D3> jamespage, to what shall I connect the juju-info interface required by openvswitch-odl? P.S.: could you share a juju status with me? It would be much appreciated, to let me explore this information. Thanks a lot in advance.
[16:56]  * jmalcaraz Hi
[19:00] <bdx> hey whats up everyone? Is there a maas/next repo for xenial?
[19:02] <marcoceppi> bdx: probably
[19:03] <bdx> marcoceppi: from what I've gathered, it should be up next week
[23:07] <bolthole> Hi!. I'm new to juju, and have a couple questions about getting started on ubuntu 15.10 if someone might help me
[23:18] <rick_h_> bolthole: ask away
[23:27] <bolthole> kewl. well, first off... juju-quickstart has a bug, I should mention :-/
[23:28] <bolthole> among other things, it tries to start up lxc-net service... but that service will fail to start, because dnsmasq makes it fail. but...
[23:28] <bolthole> if you reboot, then lxc-net gets started before dnsmasq (i guess) and so it starts working from that point.
[23:32] <bolthole> erm.. i was going to give details about how deploy of juju-gui hangs for me. but.. it might actually be working this time. haha. ha.
[23:47] <bdx> marcoceppi: check it --> https://github.com/jamesbeedy/layer-django/blob/modularize_and_refactor/reactive/django.py
[23:50] <bdx> marcoceppi: and here --> http://bazaar.launchpad.net/~jamesbeedy/charms/trusty/django-nginx/dev/files