[03:19] <stub> marcoceppi: I don't know and I don't care. Should I care?
[03:19] <marcoceppi> stub: we accidentally merged it with precise
[03:20] <stub> Hmmm...
[03:21] <stub> It should work - it just relies on charmhelpers and the PG stuff is common, and the default PG version for precise is still coded in there.
[03:21] <marcoceppi> stub: we can revert, but it seems to be working
[03:22] <stub> Ok. I was kind of hoping to just drop support for precise (since we don't need it any more), but we can drop support for this version rather than the previous version
[03:24] <stub> If the basic deployment works, the rest will work the same as or better than the previous version. Replication or some extensions like wal_e might be wonky, but the older version would be wonkier.
[03:25] <stub> So let's leave it.
[03:30] <stub> I'll try and have the reactive rewrite up soon :)
[09:21] <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/odl-controller/new-tests/+merge/277258 is ready for another review if you have a moment
[09:28] <jamespage> gnuoy, ok looking now
[09:37] <jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/lp1515008/+merge/277325
[09:40] <jamespage> gnuoy, I'll just wait for amulet to pass that before landing
[09:40] <gnuoy> kk
[09:40] <jamespage> gnuoy, can we get osci pointed at the odl-controller branches?
[09:41] <gnuoy> yep
[09:47] <gnuoy> jamespage, when the branches exist I think https://code.launchpad.net/~gnuoy/ubuntu-openstack-ci/odl/+merge/277327 should do the trick
[09:47] <gnuoy> beisner, ^
[09:56] <jamespage> gnuoy, can we create /next branches now from the current source branches and propose against those? running things by hand is nasty
[09:56] <gennadiy> hi all. i need to create a specific aws ec2 instance type for my service.
[09:57] <gennadiy> i know about juju set-constraints --service <name> instance-type=m3.xlarge
[09:57] <gnuoy> jamespage, yes, but I don't know what needs to happen to get osci to pick up the change to lp:ubuntu-openstack-ci
[09:57] <gnuoy> we can wait for beisner if you like
[09:57] <jamespage> gnuoy, sure
[09:57] <jamespage> if your branch tests out ok I'll land it
[09:57] <gennadiy> but when i add a unit from juju-gui it creates the default instance type
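A minimal sketch of the CLI flow gennadiy describes, assuming the Juju 1.x client and a hypothetical service name `myapp` (constraints apply to machines provisioned after the constraint is set; units added through the GUI may not pick up the same service constraints):

```shell
# Set the service-level constraint (Juju 1.x syntax, as in the question above)
juju set-constraints --service myapp instance-type=m3.xlarge

# Units added from the CLI after this point should be provisioned
# on m3.xlarge machines
juju add-unit myapp

# Verify what constraints the service currently carries
juju get-constraints myapp
```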
[10:01] <jamespage> gnuoy, we need to plumb in AMULET_ODL_LOCATION=http://10.245.161.162/swift/v1/opendaylight/distribution-karaf-0.2.3-Helium-SR3.tar.gz as well
[10:02] <gnuoy> yep
[10:08] <gnuoy> jamespage, added setting AMULET_ODL_LOCATION to the mp
[10:48] <dpm_> hi all, anyone around who is familiar with https://jujucharms.com/python-django and can help with a couple of questions?
[10:56] <apuimedo> frobware: Hi. I was told it is you who works on juju networking and using OpenStack for providing machines to Juju
[10:56] <frobware> apuimedo, yes
[10:56] <frobware> apuimedo, (and the team!)
[10:56] <apuimedo> frobware: that was fast :P
[10:56] <apuimedo> frobware: who's the team?
[10:56] <apuimedo> *in
[10:56] <frobware> apuimedo, dimitern, voidspace, dooferlad
[10:57] <apuimedo> nice to meet you guys ;-)
[10:57] <dooferlad> hi
[10:58] <frobware> apuimedo, want to briefly HO - I saw you had some questions earlier in the week
[10:58] <apuimedo> HO?
[10:58] <frobware> apuimedo, google hangout
[10:58] <apuimedo> sounds good
[10:58] <frobware> apuimedo, https://plus.google.com/hangouts/_/canonical.com/juju-sapphire
[10:59] <dooferlad> frobware: there in a couple of minutes
[11:03] <apuimedo> frobware: I'm getting a google hangout error trying to join
[11:03] <apuimedo> when requesting permission to join
[11:04] <jamespage> frobware, you'll have to allow external participants as thats under a canonical.com hangout
[11:04] <frobware> apuimedo, OK, let's just try here in IRC
[11:04] <apuimedo> frobware: let me create a meeting on ho
[11:05] <apuimedo> frobware: https://plus.google.com/hangouts/_/midokura.com/juju_openstack
[11:08] <dimitern> hey apuimedo
[11:08] <apuimedo> hey ;-)
[11:19] <jamespage> gnuoy, juju-test INFO    : Results: 4 passed, 0 failed, 0 errored
[11:19] <jamespage> awesome
[11:19] <gnuoy> \o/
[11:26] <jamespage> gnuoy, ok landed all of that
[11:26] <gnuoy> ta
[11:26] <jamespage> gnuoy, also snuck in the tox bits needed for if we want to upstream this charm to /openstack
[11:26] <gnuoy> k
[11:27] <jamespage> gnuoy, I think we could also run func tests under tox as well
[11:33] <apuimedo> dimitern: is there any trick that would make the openstack provider set up the bridges as if it were maas?
[11:33] <apuimedo> (I can disable the arp anti spoofing filter in my openstack provider)
[11:34] <gnuoy> jamespage, I'm guessing I need to
[11:35] <gnuoy> create the /next branches and delete the originals
[11:35] <gnuoy> or is there a smarted way
[11:35] <jamespage> yeah
[11:35] <gnuoy> smarted? smarter
[11:35] <gnuoy> kk
[11:35] <jamespage> just branch them and mark the old ones as deprecated
[11:36] <gnuoy> jamespage, I see Abandoned is that the one?
[11:36] <jamespage> yah
[11:36] <gnuoy> ta
[11:36] <jamespage> gnuoy, I'd create the trunk and next branches for all of them now under openstack-charmers
[11:37] <gnuoy> will do
[15:50] <gnuoy> beisner, I've probably missed something but if you get a sec https://code.launchpad.net/~gnuoy/ubuntu-openstack-ci/odl/+merge/277327
[16:51] <thomnico> Hello
[16:51] <thomnico> is anyone familiar with add_source(source, key=None) in the Juju charmhelpers Python library?
[16:51] <thomnico> I'm trying to pass a block with the GPG key starting with -----BEGIN PGP PUBLIC KEY BLOCK----- and it fails in safe_loader ..
[16:53] <marcoceppi> thomnico: I think you need a key ID not a key file, not sure though
[16:54] <lazypower> correct, it polls the configured keyserver for the key
[16:55] <thomnico> checking the code, it shows I should be able to add a keyfile ..
[16:55] <lazypower> or at least thats how i've used it
[16:55] <thomnico> and I won't have access to the hard-coded keyserver.ubuntu.com
[16:56] <lazypower> when live gives you lemons, write a bash script and use subprocess *ducks from impending object trajectory*
[16:56] <lazypower> s/live/life/
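The subprocess workaround lazypower suggests could be sketched roughly as follows; the function names are hypothetical (this is not the charmhelpers API), and it assumes the process runs as root so `apt-key` can modify the keyring:

```python
import subprocess

ARMOR_HEADER = "-----BEGIN PGP PUBLIC KEY BLOCK-----"


def looks_like_armored_key(key):
    """True if `key` is an ASCII-armored key block rather than a key ID."""
    return key.strip().startswith(ARMOR_HEADER)


def add_key(key):
    """Hypothetical helper: pipe an armored key block straight into
    `apt-key add -`, avoiding any keyserver lookup; otherwise treat the
    string as a key ID and fetch it from a keyserver."""
    if looks_like_armored_key(key):
        subprocess.run(["apt-key", "add", "-"],
                       input=key.encode(), check=True)
    else:
        subprocess.run(["apt-key", "adv",
                        "--keyserver", "keyserver.ubuntu.com",
                        "--recv-keys", key],
                       check=True)
```

The armored-block branch is what matters for an environment with no keyserver access: the key material travels inline rather than by ID.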
[16:56] <thomnico> hehehe you guys are the python fans
[16:56] <lazypower> I'm a pragmatist, I'm a fan of what works reliably
[16:57] <thomnico> so do I lazypower (but you know already)
[16:57] <lazypower> <3
[16:57] <thomnico> where should I raise bug on helpers please ??
[16:57] <lazypower> launchpad.net/charm-helpers
[16:58] <thomnico> It might pretty well be me not putting the expected syntax though ..
[16:58] <lazypower> thats possible, but if there's a bug for it we can get it on the docket to take a closer look
[16:58] <lazypower> there very well may be a bug in there
[16:58] <thomnico> ok ...
[17:40] <bdx> jamespage: nice presentation! How can I start testing nova-compute-lxd? I'm checking out the lxd charm now....is there a method or procedure you have defined for how you are doing this?
[17:49] <jamespage> bdx, https://jujucharms.com/u/openstack-charmers-next/openstack-lxd
[17:51] <bdx> ooooohhh nicceeeee!! thx!
[18:05] <thedac> bdx: fyi, jamespage fixed a bug with DVR. This should help you. https://bugs.launchpad.net/charms/+source/neutron-openvswitch/+bug/1515008
[18:05] <mup> Bug #1515008: L3 agent missing on compute node in DVR setup <backport-potential> <openstack> <neutron-openvswitch (Juju Charms Collection):In Progress by james-page> <https://launchpad.net/bugs/1515008>
[18:41] <dpm_> Anyone knowledgeable on the python-django charm? I'm trying to use fabric as described on https://jujucharms.com/python-django , but whenever I try to execute a task with 'fab', I'm being asked for "Login password for 'ubuntu': "
[18:41] <dpm_> lazypower perhaps? ^
[18:54] <bdx> thedac: YES!
[18:55] <bdx> jamespage: ^^
[18:55] <thedac> bdx: Fix already landed in next and the backport to stable is on the way
[19:01] <bdx> thedac, jamespage: nice fix! ... it totally makes sense that was the issue...I can't stop smiling
[19:08] <bdx> thedac: thanks for your help over the last few days investigating that
[19:09] <thedac> bdx: no problem. Sorry I did not catch it sooner
[19:10] <bdx> thedac: you're a good man... ditto
[19:21] <beisner> gnuoy, jamespage - fyi uosci is now feeding on the odl trio;  please see proposals on those charms for a few cosmetic adjustments.
[19:31] <lazypower> dpm_ sorry, i'm not sure what's going on there. thumper is the current maintainer of the django charm. Since he's in NZ timezones, I think a mail to the list would be a good path forward to getting support with that particular error :(
[19:31] <lazypower> sorry i'm not of more help
[19:33] <jcastro> lazypower: remind me, did you do the charm testing and debugging at the last summit?
[19:33] <lazypower> negatory
[19:34] <jcastro> ah, that was you mbruzek iirc
[19:35] <mbruzek> yeah that was me
[19:35] <jcastro> ok you're doing one for cfgmgmntcamp. :)
[19:35] <dpm_> lazypower, no worries, I ended up posting on http://askubuntu.com/questions/697318/how-to-use-fabric-with-juju and pinging cory_fu - will ask thumper on e-mail if all else faile
[19:35] <dpm_> *fails
[19:36] <jcastro> that reads like it's a key issue doesn't it?
[19:37] <jcastro> like, you should be able to just ssh in there without any prompts
[19:39] <cory_fu> jcastro: Yes, though per the instructions in the README it ought to just work.  I'm not sure how the charm is supposed to tell fab to use the Juju SSH key, nor even how it tells it how to resolve a unit ID (foo/0) into a host name
[19:40] <jcastro> I'm going to stab in the dark and I bet thumper exports a bunch of environment variables for fab that juju consumes or the other way around
[19:40]  * thumper has burning ears
[19:42] <cory_fu> Oh, no, it's the fabfile.py in the charm.  You have to have that locally.  I haven't used Fabric before.  :)
[19:43] <dpm_> cory_fu, yeah, you bzr branch the charm, and then you can point fab to the fabfile
[19:43] <dpm_> if you're running fab from the charm's code directory, it finds the fabfile.py automatically too
[19:44] <dpm_> I've also added
[19:44] <thumper> what you do need?
[19:44] <cory_fu> thumper: https://askubuntu.com/questions/697318/how-to-use-fabric-with-juju
[19:44] <jcastro> http://askubuntu.com/questions/697318/how-to-use-fabric-with-juju
[19:44] <dpm_> thanks guys :)
[19:44] <cory_fu> It's prompting for the ubuntu user's password
[19:44] <thumper> lazypower: that reminds me, I need to propose another change to the django charm as I've deployed celery in prod now
[19:45] <cory_fu> Not using Juju's ssh key file
[19:45] <thumper> lazypower: and I need to support upgrading django so I can get to 1.8
[19:47] <thumper> dpm_: I'm unclear as to what you are attempting
[19:49] <cory_fu> dpm_: Ok, so if your SSH public key is on Launchpad, you can get this to work by importing your key into the Juju deployment: juju authorized-keys import <launchpad-username>
[19:49] <dpm_> thumper, essentially to be able to run "fab -R python-django/0 manage:collectstatic" from my desktop PC
[19:49] <cory_fu> thumper: I think either that needs to be added to the Fabric section of the README, or it should somehow use the juju_id_rsa key
[19:50] <cory_fu> ("that" being the auth-keys import instructions)
[19:50] <thumper> hmm... not at all familiar with fabric
[19:51] <thumper> why does this not just translate through a juju run thing?
[19:51] <dpm_> cory_fu, that worked nicely, thanks! Now the actual command failed, but at least key authentication worked
[19:52] <cory_fu> thumper: It apparently uses normal ssh.  There might be a way to have it use `juju ssh` as its ssh command, but I don't know how that would work
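One way to avoid the password prompt without importing keys would be to point Fabric at the client key `juju ssh` already uses. A hypothetical sketch, assuming the Juju 1.x default layout of `~/.juju/ssh/juju_id_rsa` (the Fabric 1.x `env` wiring is shown only in comments, since it needs Fabric installed):

```python
import os


def juju_identity(juju_home="~/.juju"):
    """Path to the client SSH key `juju ssh` uses.
    Assumes the Juju 1.x default JUJU_HOME layout."""
    return os.path.expanduser(os.path.join(juju_home, "ssh", "juju_id_rsa"))


# In a fabfile this could be wired up as (Fabric 1.x API):
#   from fabric.api import env
#   env.user = "ubuntu"
#   env.key_filename = juju_identity()
```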
[19:53] <dpm_> I'm not familiar with fabric, either, I just used it as the charm's documentation mentions it as the way to do what I was trying to do. If there is an equivalent way to do it with juju, I'd be more than happy to try that instead
[19:55] <cory_fu> These functions should really be redone as Juju actions, but that would require work on the charm
[19:59] <dpm_> cory_fu, in any case, your suggestion worked, so if you want to add it to the Ask Ubuntu question, I'll check it as the answer
[19:59] <cory_fu> Done
[20:01] <dpm_> \o/ thanks!
[20:03] <cory_fu> dpm_: What error did you get from the command failure?  Anything useful?
[20:06] <dpm_> cory_fu, it seems not all of the fabfile.py commands work. Luckily, the one I'm interested in (manage:collectstatic) does. But here is an example of one that doesn't: http://pastebin.ubuntu.com/13241329/
[20:06] <lazypower> thumper so many wants, so many todos, so little time
[20:06] <dpm_> it seems to use ubucon_site instead of the expected gunicorn as the service name
[20:06] <lazypower> thumper i know those feels :-|
[20:09] <jcastro> man, authorized-keys import. I didn't even know that existed
[20:10] <cory_fu> dpm_: What does this give you: juju ssh python-django/0 -- ls /etc/init/ubucon_site.conf
[20:10] <cory_fu> dpm_: Scratch that.  This instead: juju ssh ubucon_site/7 -- ls /etc/init/ubucon_site.conf
[20:12] <dpm_> cory_fu, there is not such a file. ubucon-site is not a service, it's created using a ubucon-site.yaml config for the python-django charm
[20:14] <cory_fu> dpm_: The way the charm looks like it works is that it creates an Upstart job conf file in /etc/init based on the name of the deployed service, from the unit name (in your case, ubucon-site/7).  So if the site were up and running, and thus could be reloaded, there should be an /etc/init/ubucon-site.conf file on the unit
[20:16] <cory_fu> Is there a config option that you haven't set for the charm to start the service?
[20:16] <cory_fu> (thumper might be of more use there)
[20:16] <cory_fu> I don't really know much about that charm
[20:17] <dpm_> cory_fu, the site is up and running, but the charm seems to set up only the gunicorn upstart job as means of reloading the site: http://paste.ubuntu.com/13241388/
[20:17] <dpm_> and http://paste.ubuntu.com/13241402
[20:22] <cory_fu> dpm_: Ah, I think the issue is that the Fabric code pre-dates the wsgi / gunicorn change
[20:23] <dpm_> aha
[20:25] <cory_fu> dpm_: You could edit the reload function in fabfile.py and hard-code the service name to "gunicorn"...
[20:25] <cory_fu> -    sudo('service %s reload' % env.sanitized_service_name)
[20:25] <cory_fu> +    sudo('service gunicorn reload')
[20:26] <dpm_> yeah, I was toying with the idea :)
[20:26] <cory_fu> That should be changed in the charm to handle whatever wsgi subordinate was used, but I'm not sure how that would work
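A hypothetical version of that charm-side fix (not code from the charm itself): prefer the Upstart job named after the deployed service, and fall back to the `gunicorn` job that the wsgi subordinate actually installs:

```python
import os


def wsgi_service_name(sanitized_name, init_dir="/etc/init"):
    """Hypothetical fabfile helper: return the Upstart job to reload.
    Uses the service-named job if it exists, else falls back to the
    gunicorn job the wsgi subordinate installs."""
    conf = os.path.join(init_dir, "%s.conf" % sanitized_name)
    return sanitized_name if os.path.exists(conf) else "gunicorn"


# The reload task would then run something like:
#   sudo('service %s reload' % wsgi_service_name(env.sanitized_service_name))
```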
[20:27] <dpm_> That is way beyond my charm-fu, but for now I'm happy that it's more or less working :)
[20:31] <cory_fu> Glad we could help
[20:32] <dpm_> indeed, thanks :)
[22:22] <blahdeblah> Hi all - anyone able to tell me what happened here?  http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1392/console
[22:22] <blahdeblah> Looks like a problem with the test infrastructure, not the MP.