[00:33] <skylerberg> So I have a change pending review to merge into nova-compute to add an interface for my charm, but the review is awaiting my charm being available.
[00:33] <skylerberg> However, I have a test in my charm that depends on the change in nova-compute.
[00:34] <skylerberg> Would it be preferable to publish with a failing test for the interface or upload without the interface having any test?
[00:50] <beisner> hi skylerberg - can you point us at the test you're referring to?
[00:52] <skylerberg> beisner: I decided to just upload as is. I was working on a test in my charm that would use the tintri-nova interface, but I think I will wait to finish writing that test until after that interface is added to nova-compute.
[00:55] <beisner> skylerberg, it may be worth looking at one of the existing openstack charm test dirs to see how we define and exercise the spectrum of release combos.  there are a lot of existing test helpers too, which may come in handy.
[00:55] <beisner> ex.  http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/cinder-ceph/next/files/head:/tests/
[00:55] <beisner> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/cinder/next/files/head:/tests/
[00:58] <beisner> skylerberg, fwiw - ppl have work-in-progress branches in lp all the time, i don't think that's an issue.
[00:59] <skylerberg> beisner: Good to know and thanks for the pointers.
[01:00] <skylerberg> I went ahead and pushed mine https://code.launchpad.net/~sberg-l/charms/trusty/cinder-tintri/trunk.
[01:14] <beisner> skylerberg, cool - oh, probably an easier example is the Makefile and tests/ dir of the lp:charms/trusty/ubuntu charm
[08:17] <jamespage> gnuoy, I think we're at the point where we can start testing liberty again; can we do a charm-helpers sync to ensure all charms have the new version detection code?
[08:17] <jamespage> I can run that - but I know you have a process
[08:18] <gnuoy> jamespage, sure, I'll do it, it'll help kick me to automate it :-)
[08:18] <jamespage> gnuoy, +1
[08:18] <jamespage> thanks
[08:31] <jamespage> gnuoy, hold a sec
[08:31] <jamespage> I think there is something up with my regex and unit tests
[08:31] <gnuoy> ack
[08:34] <jamespage> gnuoy, yeah my regex for digits is not greedy enough
[08:34] <jamespage> I don't know why the unit test is actually passing
[08:34]  * jamespage looks
[08:43] <jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/fixup-detection/+merge/270016
[08:44] <jamespage> gnuoy, basically the code only worked with single digit versions, and the dict for testing had multiple entries for nova-common
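[Editor's note: the merge proposal itself isn't quoted, but the class of bug jamespage describes can be sketched. This is an illustrative reconstruction, not the actual charm-helpers code: a digit group that matches exactly one digit silently mangles multi-digit version components.]

```python
import re

# Illustrative patterns only (assumption): the real fix lives in the
# linked charm-helpers branch. A single-digit group "works" on "1.2"
# but quietly corrupts "12.10".
buggy = re.compile(r'(\d)\.(\d)')    # one digit per component: wrong
fixed = re.compile(r'(\d+)\.(\d+)')  # one-or-more digits: correct

version = "12.10"
print(buggy.search(version).groups())  # ('2', '1') - silently mangled
print(fixed.search(version).groups())  # ('12', '10')
```

This is also why a unit test can pass by accident, as jamespage found: if the test data only contains single-digit versions (or duplicate dict entries mask the failure), the buggy pattern still produces a match.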
[08:44] <jamespage> doh!
[08:48] <gnuoy> jamespage, merged
[08:49] <jamespage> gnuoy, ta - ok I think we can resync now then
[08:50] <gnuoy> jamespage, I'll just let the mojo job run through with the previous version of charmhelpers then I'll sync
[09:45] <gnuoy> jamespage, charm helpers sync'd
[09:49] <jamespage> gnuoy, awesome - thanks
[09:50] <jamespage> gnuoy, I have a few trivial updates to fix up other things - a number of charms were not using -common packages for version detection
[09:50] <gnuoy> ok, sounds good
[09:50] <gnuoy> jamespage, do you have a sec for a second opinion on https://code.launchpad.net/~thedac/charm-helpers/legacy_leadership_peer_retrieve_fix/+merge/269400
[09:50] <gnuoy> not sure if I'm being paranoid
[09:51] <sparkiegeek> hmm
[09:51] <jamespage> gnuoy, hmm - I thought that a unit should always rely on what it set on the peer relation
[09:51] <sparkiegeek> gnuoy: http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/swift-storage/next/revision/78 worries me
[09:52] <sparkiegeek> in https://code.launchpad.net/~adam-collard/charms/trusty/swift-storage/fix-unconditional-service-restart/+merge/269744 there were failures with the fixed charm-helpers
[09:52] <sparkiegeek> (see explanation on the MP)
[09:52] <gnuoy> jamespage, that's what I thought. I need to understand why that change fixes rabbit
[09:57] <gnuoy> sparkiegeek, I'm probably being slow but I don't quite follow what you are saying. The latest charm helper sync which isn't in your branch yet is causing a problem?
[09:58] <sparkiegeek> gnuoy: nope :)
[09:58] <sparkiegeek> lemme try and explain a bit better
[09:58] <sparkiegeek> https://bugs.launchpad.net/charms/+source/swift-storage/+bug/1489770
[09:58] <mup> Bug #1489770: openstack_upgrade_available incorrectly always returns True for swift charms <backport-potential> <landscape> <Charm Helpers:Fix Committed by adam-collard> <swift-storage (Juju Charms Collection):In Progress by adam-collard> <https://launchpad.net/bugs/1489770>
[09:59] <sparkiegeek> ^ this got fixed in charm-helpers first, and then I made a branch for swift-storage
[09:59] <sparkiegeek> the change to charm-helpers inadvertently caused amulet tests to fail in swift-storage (further digging shows that they were incorrect all along, and were essentially asserting that that bug exists)
[10:00] <sparkiegeek> so I am worried that mojo passed those tests with the charmhelpers synced, because when I ran them, they failed
[10:00] <sparkiegeek> (my MP for swift-storage fixes them)
[10:01] <sparkiegeek> gnuoy: ^^ any clearer? :)
[10:01] <gnuoy> sparkiegeek, the mojo charmhelper sync job doesn't run the amulet tests
[10:01] <sparkiegeek> gnuoy: ah ha!
[10:02] <gnuoy> sparkiegeek, although it should really
[10:02] <sparkiegeek> ok, so I expect we'll now see OSCI throwing up when it runs the amulet tests
[10:02] <sparkiegeek> unless some kind, generous sole in ~openstack-charmers reviews/merges my branch?
[10:02] <gnuoy> kk
[10:02] <sparkiegeek> *soul
[10:02] <sparkiegeek> I mean, you might have nice feet too
[10:02] <gnuoy> sparkiegeek, I shall don my review trousers
[10:04] <sparkiegeek> then we can talk about backporting to stable :)
[10:04] <sparkiegeek> za boog seems pretty important to me
[10:08] <sparkiegeek> I expect other swift-* charms are broken too
[10:09] <sparkiegeek> gnuoy: ^^ my focus is on swift-storage at the moment
[10:09] <gnuoy> sparkiegeek, do you have time to propose it to stable? fwiw stable uses lp:~openstack-charmers/charm-helpers/stable
[10:09] <sparkiegeek> gnuoy: the idea would be to cherry-pick the necessary change in to stable for both c-h and the charm?
[10:09] <gnuoy> sparkiegeek, yep
[10:09] <sparkiegeek> gnuoy: sure, will do
[10:10] <gnuoy> ta
[10:11] <apuimedo> late morning!
[10:17] <sparkiegeek> gnuoy: https://code.launchpad.net/~adam-collard/charm-helpers/openstack-upgrade-available-swift-stable/+merge/270028
[10:38] <sparkiegeek> gnuoy: and .. https://code.launchpad.net/~adam-collard/charms/trusty/swift-storage/conditional-service-restart-stable/+merge/270030
[10:39] <gnuoy> sparkiegeek, ta
[12:37] <coreycb> jamespage, I see you and gnuoy were looking at some liberty c-h bits.  I have this mp if you haven't already fixed things up.  https://code.launchpad.net/~corey.bryant/charm-helpers/liberty/+merge/269979
[12:38] <coreycb> I should merge tip
[12:39] <coreycb> tip merged
[13:33] <pmatulis> what are these juju "tools" i keep reading about?
[13:52] <lazyPower> pmatulis: charm tools maybe?
[13:53] <lazyPower> pmatulis: or are you referring to the juju tools that are fetched from streams / built on bootstrap and live on the state server?
[14:06] <pmatulis> lazyPower: the latter i think. i see it mentioned for bootstrap and for upgrade-juju
[14:06] <pmatulis> i downloaded something [1] and there was a single file: 'jujud'
[14:06] <pmatulis> [1]: https://streams.canonical.com/juju/tools/
[14:07] <lazyPower> pmatulis: yep, those are the actual bins that empower juju
[14:07] <lazyPower> the juju daemon, agent, et-al
[14:08] <pmatulis> lazyPower: so each machine and unit have a jujud or is it just something that stays on the state server?
[14:09] <lazyPower> As I understand it, yes. That's the agent component for every service
[14:11] <hazmat> dimitern: did the network vpc support get merged?
[14:12] <dimitern> hazmat, not yet, it turned out to be a bigger rat hole than I imagined
[14:12] <hazmat> dimitern: ack, bummer
[14:12] <dimitern> hazmat, I have a working fix, but it will take some refactoring to merge it without breaking other stuff
[14:26] <marcoceppi> hey rbasak, I've got another packaging question for you. So I've got a .install file that has this line "usr/lib/python2.*/dist-packages/stuf*/" because this python project, when built, puts a tests directory in dist-packages. However, when I run the build I get "dh_install: python-stuf missing files (usr/lib/python2.*/dist-packages/stuf*/), aborting"
[14:26] <marcoceppi> not sure how to go about removing the tests directory
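[Editor's note: one common debhelper approach, sketched here under the assumption that the package builds with a dh-style `debian/rules` and stages into `debian/tmp` - the paths are illustrative, not taken from the actual python-stuf packaging.]

```make
# debian/rules (sketch) - delete the stray tests directory after the
# upstream build installs it, before dh_install runs its globs.
# Recipe lines must be tab-indented in a real Makefile.
override_dh_auto_install:
	dh_auto_install
	rm -rf debian/tmp/usr/lib/python2.*/dist-packages/stuf/tests
```

With the tests directory removed at this stage, the `.install` glob no longer needs to account for it.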
[15:05] <natefinch> lazyPower, marcoceppi:  I'm deploying wordpress in a juju environment, like, for production.  I noticed that the max media upload size is 2 MB, and googling says the only way to modify that is to edit some .ini file manually...
[15:07] <natefinch> lazyPower, marcoceppi: so, two questions:  1.) why is the default 2 MB?  That seems tiny.  2.) it seems like this is something we could help people adjust without having to go find some file to twiddle... e.g. juju set wordpress maxuploadsize=12mb
[15:07] <lazyPower> natefinch: iirc that file is already under cheetah template control and that should be a pretty simple patch
[15:07] <lazyPower> natefinch: can you file a bug and we'll try to circle back on it? thats a good point.
[15:08] <lazyPower> I know we haven't taken a serious look at the wordpress charm in some time, there's probably some room for improvement there
[15:08] <natefinch> lazyPower: cool
[15:13] <mbruzek1> Someone wants to use relations to share files between charms in Juju. Can you help me brainstorm a solution?  It appears Juju only puts the public keys on the units so they can not scp with each other.  Is there another way?
[15:14] <mbruzek1> I wrote a charm that serves the files via http.
[15:14] <mbruzek1> but I am looking for other options.  Ideas?
[15:20] <marcoceppi> mbruzek1: you could exchange directories and ssh-keys
[15:20] <mbruzek1> via the relation data?
[15:20] <marcoceppi> and use rsync/scp
[15:20] <marcoceppi> rsync ftw, if you do go that route
[15:21] <marcoceppi> mbruzek1: is this peer or provides/requires scenario?
[15:21] <mbruzek1> relation-set private-key="binaryhexdatahere"
[15:21] <marcoceppi> mbruzek1: basically, so you could do the exchange one of two ways
[15:21] <mbruzek1> marcoceppi: provides/requires scenario.
[15:22] <marcoceppi> mbruzek1: you could have the end that wants to connect send its public key - it's safer that way. Then the one serving the files simply adds that key to authorized_keys for the user that has the files on the system and sends back a path
[15:22] <mbruzek1> Oh yeah
[15:22] <marcoceppi> mbruzek1: or you could have the charm serving the files generate private-keys and send them out
[15:22] <marcoceppi> that way it's a one-way communication, but it's a little more...hinky that way
[15:23] <marcoceppi> either way I'd definitely make sure the charm serving the files creates a non-privileged user on the system to house the files, and have all the other services use that user when accessing the server, to minimize security risks
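[Editor's note: a minimal Python hook sketch of the exchange marcoceppi describes. `relation-set` and `relation-get` are real Juju hook tools; the relation keys, user name, and file paths are illustrative assumptions.]

```python
import subprocess

def relation_set(**kwargs):
    """Thin wrapper over Juju's relation-set hook tool."""
    pairs = ['%s=%s' % kv for kv in kwargs.items()]
    subprocess.check_call(['relation-set'] + pairs)

def relation_get(key):
    """Thin wrapper over Juju's relation-get hook tool."""
    return subprocess.check_output(['relation-get', key]).decode().strip()

def add_authorized_key(existing, pubkey):
    """Pure helper: append pubkey to authorized_keys content, once."""
    lines = [l for l in existing.splitlines() if l.strip()]
    if pubkey not in lines:
        lines.append(pubkey)
    return '\n'.join(lines) + '\n'
```

On the connecting side's relation-joined hook, the charm would `relation_set(**{'public-key': key_text})`; on the serving side's relation-changed hook, it would read the key, merge it into the non-privileged user's `authorized_keys` with `add_authorized_key`, and `relation_set(path='/home/fileshare/files')` in reply.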
[16:15] <Odd_Bloke> If someone has a minute to review https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-cron-template/+merge/270083 it would be much appreciated.
[16:16] <Odd_Bloke> We're trying to roll-out in-cloud mirrors in a few places and this is (hopefully) the one remaining blocker. :)
[16:18] <lazyPower> @Odd_Bloke white space fix on a template?
[16:18] <lazyPower> ah i see in the commit message the intention here
[16:20] <lazyPower> @Odd_Bloke - I see a linked issue which currently has no resolution. Is this MP going to resolve the problem if you're still rendering w/ jinja2 from charm-helpers?
[16:22] <Odd_Bloke> lazyPower: Yep, jinja2's default rendering strips only the _last_ newline (not all trailing newlines), so extra blank lines survive.
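[Editor's note: the distinction Odd_Bloke draws - assuming jinja2's default trailing-newline handling as he describes it - can be shown with plain string operations; the cron line below is a made-up example.]

```python
rendered = "*/5 * * * * root do-thing\n\n\n"  # template output with stray blank lines

# What Odd_Bloke says jinja2 does by default: drop only the single final newline.
strip_last = rendered[:-1] if rendered.endswith("\n") else rendered

# What a well-formed cron file wants: no trailing blank lines, one final newline.
strip_all = rendered.rstrip("\n") + "\n"

assert strip_last == "*/5 * * * * root do-thing\n\n"
assert strip_all == "*/5 * * * * root do-thing\n"
```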
[16:23] <lazyPower> i'm deploying now to verify, should have this merged shortly
[16:23] <Odd_Bloke> lazyPower: Great, thanks!
[16:27] <lazyPower> @Odd_Bloke merged.
[16:29] <Odd_Bloke> lazyPower: Thanks!
[16:29] <g3naro> joooooojooooooooooo
[16:55] <ntpttr> Hi, I was here with this issue yesterday, but I'm still having some trouble today and I'm not getting the same error. I have the juju-gui charm deployed and exposed with
[16:55] <ntpttr> a public address of node0vm0.maas, but when I go there in a browser it hangs on "connecting to the juju environment"
[16:55] <ntpttr> I do have a proxy set up, but to my knowledge it's configured correctly
[16:56] <lazyPower> ntpttr: which browser are you using?
[16:57] <ntpttr> lazyPower: I've tried firefox and chrome - on firefox going to https://node0vm0.maas hangs on the connecting screen and on chrome it says ERR_CONNECTION_CLOSED
[16:57] <lazyPower> rick_h_: ^
[16:58] <lazyPower> ntpttr: I've seen some strange issues out of browsers like safari and self signed certificiates, it accepts for the http session but not the websocket connection. But w/ chrome you should be g2g
[16:59] <ntpttr> lazyPower: If I curl the address here's what it gives me. Does it matter that it's 'moved permanently'? http://pastebin.com/wHwSkPR3
[16:59] <rick_h_> ntpttr: it's redirecting port 80 to port 443 for ssl
[16:59] <rick_h_> ntpttr: so both ports need to be open and reachable via the proxy
[17:00] <ntpttr> lazyPower: Because in chrome the details on the error say that it might have moved permanently, but not sure where
[17:00] <rick_h_> ntpttr: if you open chrome dev toolbar there's a network tab where you can see the GUI reaching out (the spinning wheel part) to the juju state server
[17:00] <rick_h_> ntpttr: and the proxy needs to be allowing a secure websocket
[17:00] <rick_h_> ntpttr: wss://
[17:00] <rick_h_> ntpttr: it sounds like there's a proxy issue.
[17:01] <ntpttr> rick_h_: Hmm I thought I had my proxy configured this same way a few days ago when it was working...
[17:01] <rick_h_> ntpttr: are there any details in the network tools?
[17:02] <rick_h_> frankban: is there a way to 'curl' a wss connection to test it out from the cli or the like?
[17:02] <ntpttr> rick_h_: The only thing in the network tab in chrome is saying failed to load resource: connection closed
[17:02] <rick_h_> ntpttr: for what url?
[17:02] <frankban> rick_h_: reading backscroll
[17:02] <rick_h_> ntpttr: I'm guessing the proxy is closing things on you as 'not allowed'
[17:03] <rick_h_> ntpttr: but it is a guess, apologies
[17:03] <ntpttr> rick_h_: The URL is node0vm0.maas, it's what the orange box example bootstrap script sets up
[17:04] <rick_h_> ntpttr: ok, so that's the url of the state server juju is running?
[17:04] <ntpttr> rick_h_: I have '.maas' in my no_proxy list
[17:04] <rick_h_> ntpttr: but the connection should be to the domain that the juju-gui charm is on?
[17:04] <frankban> rick_h_: the easiest way is using the python websocket client, e.g. create_connection(url) and check that it works
[17:04] <rick_h_> ntpttr: are they colocated on the same machine/network?
[17:04] <frankban> rick_h_: url being wss://{GUI-ADDRESS}/ws
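[Editor's note: frankban's suggestion uses the third-party websocket-client package (`from websocket import create_connection; create_connection("wss://node0vm0.maas/ws")`). As a stdlib-only illustration of what such a probe actually sends, the WebSocket opening handshake is just an HTTP GET with Upgrade headers (RFC 6455); the host and path below are the ones from this discussion.]

```python
import base64, os

def ws_handshake_request(host, path="/ws"):
    """Build the client half of a WebSocket opening handshake (RFC 6455)."""
    key = base64.b64encode(os.urandom(16)).decode()  # random nonce per RFC 6455
    return ("GET {path} HTTP/1.1\r\n"
            "Host: {host}\r\n"
            "Upgrade: websocket\r\n"
            "Connection: Upgrade\r\n"
            "Sec-WebSocket-Key: {key}\r\n"
            "Sec-WebSocket-Version: 13\r\n"
            "\r\n").format(path=path, host=host, key=key)

req = ws_handshake_request("node0vm0.maas")
```

For wss:// this request would be sent over a TLS-wrapped socket; a proxy that blocks the Upgrade header or the wss scheme will close the connection at exactly this point, which matches the symptom ntpttr is seeing.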
[17:05] <ntpttr> rick_h_: the service is being run on a VM in the machine I'm trying to access the URL on
[17:05] <rick_h_> ntpttr: right, but are they on the same DNS name then?
[17:05] <rick_h_> ntpttr: e.g. are both http://node0vm0.maas/ ?
[17:06] <ntpttr> rick_h_: Hmm I think so, I'm sorry I'm a little confused
[17:06] <rick_h_> ntpttr: so 10.14.100.1 is the GUI, and it redirects to 10.14.100.1:443 and then it'll try to build a wss to 10.14.100.1.
[17:06] <rick_h_> ntpttr: I'm wondering if there's a collision on that IP address from your point of view
[17:06] <rick_h_> ntpttr: e.g. we think connecting to 10.14.100.1 is the GUI but really it's juju itself? or something else deployed?
[17:07] <ntpttr> rick_h_: Oh I'm not sure, no other services are deployed
[17:07] <rick_h_> ntpttr: how did you deploy the juju-gui?
[17:08] <ntpttr> 'juju deploy --to 0 --repository=/srv/charmstore/ local:trusty/juju-gui' after bootstrapping node0vm0
[17:08] <ntpttr> and then 'juju expose juju-gui'
[17:09] <rick_h_> ntpttr: right, you deployed it to the same server juju itself is running on. Try changing the juju-gui port to something non-standard.
[17:09] <rick_h_> ntpttr: https://jujucharms.com/juju-gui/#charm-config-port
[17:10] <ntpttr> rick_h_: Okay I'll give that a shot
[17:10] <rick_h_> frankban: what's the wss port by default? We don't normally collide with juju when colocated right?
[17:10]  * rick_h_ is having sprint mental fatigue. 
[17:10] <frankban> rick_h_: 443 by default
[17:10] <ntpttr> rick_h_: The odd thing though is that it worked before with this setup, and this is the default way orange box does it
[17:10] <frankban> rick_h_: or based on a gui charm option
[17:11] <rick_h_> ntpttr: yea, sorry. I'm brain farting at the moment trying to think.
[17:11] <frankban> ntpttr: does it work using incognito mode?
[17:11] <ntpttr> frankban: No
[17:12] <rick_h_> ntpttr: ok, so if you hit "ctrl-shift-j" you get the chrome dev tools. In there you get a nav and click "Network"
[17:12] <rick_h_> ntpttr: then reload the page, and click the 'filter' icon that looks like a funnel and pick "WebSockets" from the list
[17:13] <rick_h_> ntpttr: and can you screenshot what that looks like please?
[17:14] <ntpttr> rick_h_: how should I share the screenshot? The screen is blank when I pick websockets
[17:14] <rick_h_> ntpttr: after reload?
[17:15] <rick_h_> ntpttr: I guess then it'd be interesting to see the list unfiltered then. The websocket should be there really early in the page load
[17:16] <ntpttr> rick_h_: the only things showing up are data:image/png and 'node0vm0.maas     (failed)'
[17:16] <rick_h_> ntpttr: on a fresh reload?
[17:17] <ntpttr> rick_h_: yeah
[17:17] <ntpttr> rick_h_: one second I'm uploading a screenshot
[17:19] <ntpttr> rick_h_: http://postimg.org/image/sseqzhpy9
[17:20] <rick_h_> ntpttr: so looking at that, there's something taking https://node0vm0.maas to https://node0vm0/:1 ?
[17:21] <ntpttr> rick_h_: hmm I don't know what that would be about...
[17:21] <rick_h_> ntpttr: me either :/
[17:22] <lazyPower> rick_h_: i've not had the gui collide with the bootstrap server in my experience (late i know, but confirmation)
[17:22] <rick_h_> lazyPower: yea
[17:22] <rick_h_> lazyPower: but if there was a 'proxy' setup then I was wondering if things are nested/etc
[17:22] <lazyPower> ah, yeah
[17:25] <rick_h_> lazyPower: ntpttr I've got to run for a sec. Sorry, but just not sure what's up. ntpttr I'd like to see if you can curl the main index.html page and get that back, as I can't tell the actual urls from your network tab. It's cut off. But it looks like you're not getting a spinning circle, but no files are able to come through
[17:25] <rick_h_> ntpttr: the one thing that you might check is that the gui is running (maybe it died?) by ssh'ing to the unit and checking if guiserver is in the ps output
[17:26] <ntpttr> rick_h_: Okay I'll try that out thanks for the help
[18:13] <skylerberg> Is there a way to retrigger uosci-testing-bot?
[18:16] <pmatulis> for 'juju upgrade-juju', is the new software d/l'd to the client which then sends it to the state server which then distributes it to instances? or what?
[18:27] <ntpttr> Are there logs for this IRC channel?
[18:30] <wolsen> skylerberg, beisner can help you with that - or you can move the merge proposal into WIP for ~40 minutes then back to Ready for Review - OSCI bot runs on a timer for now and won't re-run against unchanged merge proposals
[18:30] <skylerberg> ntpttr: Yes, http://irclogs.ubuntu.com/
[18:33] <skylerberg> wolsen: Thanks. Going with the WIP solution.
[18:33] <wolsen> skylerberg, ack
[18:49] <natefinch> lazyPower: juju's wordpress seems to have a fairly critical bug - you can't upload pictures for your posts etc.   Is there some configuration I need to set up to enable that?
[19:27] <marcoceppi> natefinch: what's tuning set to?
[19:33] <natefinch> marcoceppi: the default, uh... single
[19:33] <marcoceppi> you should be able to upload, what error are you getting?
[19:34] <natefinch> marcoceppi: there's a panel on the right that says UPLOADING  and under it "foo.jpg    HTTP error"
[19:35] <natefinch> marcoceppi: btw, can repro easily on local provider, but saw it first on a manually provisioned digital ocean machine
[19:35] <natefinch> marcoceppi: I'm happy to look for log messages, but I couldn't figure out where WP was keeping logs (if it does)
[19:36] <marcoceppi> natefinch: if you're using nginx as the engine, there's a php-fpm log either in /mnt or /var/log
[19:36] <marcoceppi> the wordpress charm is...crufty
[19:38] <natefinch> marcoceppi: lol... I'm figuring that out.
[19:40] <natefinch> marcoceppi: ahh, nginx log has it in /var/log/nginx/error.log:
[19:40] <natefinch> 2015/09/03 14:24:00 [error] 31196#0: *161 client intended to send too large body: 1326272 bytes, client: 10.0.3.1, server: _, request: "POST /wp-admin/async
[19:40] <natefinch> -upload.php HTTP/1.1", host: "10.0.3.202", referrer: "http://10.0.3.202/wp-admin/customize.php?return=%2Fwp-admin%2F"
[19:40] <natefinch> marcoceppi: so, some nginx configuration problem, given that wordpress says 2megs is ok, and I'm only sending 1.3megs
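[Editor's note: the nginx error natefinch pasted ("client intended to send too large body") is nginx rejecting the request body before PHP ever sees it; nginx's `client_max_body_size` defaults to 1m, which explains a 1.3 MB upload failing despite WordPress advertising a 2 MB limit. A sketch of the fix - the 10m value is illustrative:]

```nginx
# /etc/nginx/nginx.conf (http block), or per-server/per-location - sketch.
# The default client_max_body_size is 1m; anything larger gets a 413 and
# the "client intended to send too large body" error-log line above.
http {
    client_max_body_size 10m;
}
```

PHP's own `upload_max_filesize` and `post_max_size` in php.ini (the 2 MB ceiling WordPress reports) would also need raising for uploads above 2 MB.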
[19:43] <natefinch> marcoceppi: https://bugs.launchpad.net/charms/+source/wordpress/+bug/1491995
[19:43] <mup> Bug #1491995: Can't upload files to wordpress <wordpress (Juju Charms Collection):New> <https://launchpad.net/bugs/1491995>
[21:16] <aisrael> Database relation question (using postgresql): I'm testing a charm, destroying and re-deploying it occasionally. What's the proper way to cleanup the database when the relation is broken, so that tables aren't left behind owned by the previous user? IOW, I have tables owned by db_1_foo so a new deploy assigned the user db_2_foo can't change said tables.
[21:20] <aisrael> stub: ^^ might be a question for you
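[Editor's note: for the ownership problem aisrael describes, PostgreSQL's `REASSIGN OWNED` / `DROP OWNED` statements are the usual tools. The role names below are the ones from his example; whether the charm's relation hooks can run these depends on the privileges the postgresql charm grants, so treat this as a sketch.]

```sql
-- Run as a superuser (or a role that is a member of both) in the charm's database.

-- Option 1: hand the old role's objects to the newly assigned role.
REASSIGN OWNED BY db_1_foo TO db_2_foo;

-- Option 2: on teardown, drop everything the departing role owned,
-- then drop the role itself, so nothing is left behind.
DROP OWNED BY db_1_foo;
DROP ROLE db_1_foo;
```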