[00:33] So I have a change pending review to merge into nova-compute to add an interface for my charm, but the review is awaiting my charm being available.
[00:33] However, I have a test in my charm that depends on the change in nova-compute.
[00:34] Would it be preferable to publish with a failing test for the interface or upload without the interface having any test?
[00:50] hi skylerberg - can you point us at the test you're referring to?
[00:52] beisner: I decided to just upload as is. I was working on a test in my charm that would use the tintri-nova interface, but I think I will wait to finish writing that test until after that interface is added to nova-compute.
[00:55] skylerberg, it may be worth looking at one of the existing openstack charm test dirs to see how we define and exercise the spectrum of release combos. there are a lot of existing test helpers too, which may come in handy.
[00:55] ex. http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/cinder-ceph/next/files/head:/tests/
[00:55] http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/cinder/next/files/head:/tests/
[00:58] skylerberg, fwiw - ppl have work-in-progress branches in lp all the time, i don't think that's an issue.
[00:59] beisner: Good to know and thanks for the pointers.
[01:00] I went ahead and pushed mine https://code.launchpad.net/~sberg-l/charms/trusty/cinder-tintri/trunk.
=== scuttlemonkey is now known as scuttle|afk
[01:14] skylerberg, cool - oh, probably an easier example is the Makefile and tests/ dir of the lp:charms/trusty/ubuntu charm
[08:17] gnuoy, I think we're at the point where we can start testing liberty again; can we do a charm-helpers sync to ensure all charms have the new version detection code?
[08:17] I can run that - but I know you have a process
[08:18] jamespage, sure, I'll do it, it'll help kick me to automate it :-)
[08:18] gnuoy, +1
[08:18] thanks
[08:31] gnuoy, hold a sec
[08:31] I think there is something up with my regex and unit tests
[08:31] ack
[08:34] gnuoy, yeah my regex for digits is not greedy enough
[08:34] I don't know why the unit test is actually passing
[08:34] * jamespage looks
[08:43] gnuoy, https://code.launchpad.net/~james-page/charm-helpers/fixup-detection/+merge/270016
[08:44] gnuoy, basically the code only worked with single digit versions, and the dict for testing had multiple entries for nova-common
[08:44] doh!
[08:48] jamespage, merged
[08:49] gnuoy, ta - ok I think we can resync now then
[08:50] jamespage, I'll just let the mojo job run through with the previous version of charm-helpers then I'll sync
[09:45] jamespage, charm-helpers sync'd
[09:49] gnuoy, awesome - thanks
[09:50] gnuoy, I have a few trivial updates to fix up other things - a number of charms were not using -common packages for version detection
[09:50] ok, sounds good
[09:50] jamespage, do you have a sec for a second opinion on https://code.launchpad.net/~thedac/charm-helpers/legacy_leadership_peer_retrieve_fix/+merge/269400
[09:50] not sure if I'm being paranoid
[09:51] hmm
[09:51] gnuoy, hmm - I thought that a unit should always rely on what it set on the peer relation
[09:51] gnuoy: http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/swift-storage/next/revision/78 worries me
[09:52] in https://code.launchpad.net/~adam-collard/charms/trusty/swift-storage/fix-unconditional-service-restart/+merge/269744 there were failures with the fixed charm-helpers
[09:52] (see explanation on the MP)
[09:52] jamespage, that's what I thought. I need to understand why that change fixes rabbit
[09:57] sparkiegeek, I'm probably being slow but I don't quite follow what you are saying.
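The version-detection bug described above (a digit pattern that is not greedy enough, so it only works with single-digit versions) can be illustrated with a hypothetical pattern; the actual charm-helpers regex is not shown here and may differ:

```python
import re

# Hypothetical illustration of the bug: a single-digit-per-component
# pattern appears to work on small versions, but silently mis-parses
# multi-digit versions instead of failing loudly.
buggy = re.compile(r'(\d)\.(\d)\.(\d)')      # one digit per component
fixed = re.compile(r'(\d+)\.(\d+)\.(\d+)')   # greedy enough: one or more

assert buggy.search('2.3.1').groups() == ('2', '3', '1')    # looks fine
assert buggy.search('12.0.0').group(0) == '2.0.0'           # drops the leading '1'
assert fixed.search('12.0.0').groups() == ('12', '0', '0')  # correct
```

This also shows why a unit test can pass spuriously: if the test data only exercises single-digit versions (or has duplicate entries masking the failure), the truncation goes unnoticed.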
The latest charm-helpers sync, which isn't in your branch yet, is causing a problem?
[09:58] gnuoy: nope :)
[09:58] lemme try and explain a bit better
[09:58] https://bugs.launchpad.net/charms/+source/swift-storage/+bug/1489770
[09:58] Bug #1489770: openstack_upgrade_available incorrectly always returns True for swift charms
[09:59] ^ this got fixed in charm-helpers first, and then I made a branch for swift-storage
[09:59] the change to charm-helpers inadvertently caused amulet tests to fail in swift-storage (further digging shows that they were incorrect all along, and were essentially asserting that that bug exists)
[10:00] so I am worried that mojo passed those tests with the charm-helpers synced, because when I ran them, they failed
[10:00] (my MP for swift-storage fixes them)
[10:01] gnuoy: ^^ any clearer? :)
[10:01] sparkiegeek, the mojo charm-helpers sync job doesn't run the amulet tests
[10:01] gnuoy: ah ha!
[10:02] sparkiegeek, although it should really
[10:02] ok, so I expect we'll now see OSCI throwing up when it runs the amulet tests
[10:02] unless some kind, generous sole in ~openstack-charmers reviews/merges my branch?
[10:02] kk
[10:02] *soul
[10:02] I mean, you might have nice feet too
[10:02] sparkiegeek, I shall don my review trousers
[10:04] then we can talk about backporting to stable :)
[10:04] za boog seems pretty important to me
[10:08] I expect other swift-* charms are broken too
[10:09] gnuoy: ^^ my focus is on swift-storage at the moment
[10:09] sparkiegeek, do you have time to propose it to stable? fwiw stable uses lp:~openstack-charmers/charm-helpers/stable
[10:09] gnuoy: the idea would be to cherry-pick the necessary change into stable for both c-h and the charm?
[10:09] sparkiegeek, yep
[10:09] gnuoy: sure, will do
[10:10] ta
[10:11] late morning!
[10:17] gnuoy: https://code.launchpad.net/~adam-collard/charm-helpers/openstack-upgrade-available-swift-stable/+merge/270028
[10:38] gnuoy: and ..
https://code.launchpad.net/~adam-collard/charms/trusty/swift-storage/conditional-service-restart-stable/+merge/270030
[10:39] sparkiegeek, ta
=== apuimedo is now known as apuimedo|lunch
[12:37] jamespage, I see you and gnuoy were looking at some liberty c-h bits. I have this MP if you haven't already fixed things up: https://code.launchpad.net/~corey.bryant/charm-helpers/liberty/+merge/269979
[12:38] I should merge tip
[12:39] tip merged
[13:33] what are these juju "tools" i keep reading about?
[13:52] pmatulis: charm tools maybe?
[13:53] pmatulis: or are you referring to the juju tools that are fetched from streams / built on bootstrap and live on the state server?
[14:06] lazyPower: the latter i think. i see it mentioned for bootstrap and for upgrade-juju
[14:06] i downloaded something [1] and there was a single file: 'jujud'
[14:06] [1]: https://streams.canonical.com/juju/tools/
[14:07] pmatulis: yep, those are the actual bins that empower juju
[14:07] the juju daemon, agent, et al
[14:08] lazyPower: so does each machine and unit have a jujud, or is it just something that stays on the state server?
[14:09] As I understand it, yes. That's the agent component for every service
[14:11] dimitern: did the network vpc support get merged?
[14:12] hazmat, not yet, it turned out to be a bigger rat hole than I imagined
[14:12] dimitern: ack, bummer
[14:12] hazmat, I have a working fix, but it will take some refactoring to merge it without breaking other stuff
[14:26] hey rbasak, I've got another packaging question for you. So I've got a .install file that has this line "usr/lib/python2.*/dist-packages/stuf*/" because this python project, when built, puts a tests directory in the dist-packages.
However when I run the build I get "dh_install: python-stuf missing files (usr/lib/python2.*/dist-packages/stuf*/), aborting"
[14:26] not sure how to go about removing the tests directory
=== scuttle|afk is now known as scuttlemonkey
[15:05] lazyPower, marcoceppi: I'm deploying wordpress in a juju environment, like, for production. I noticed that the max media upload size is 2 MB, and googling says the only way to modify that is to edit some .ini file manually...
[15:07] lazyPower, marcoceppi: so, two questions: 1.) why is the default 2 MB? That seems tiny. 2.) it seems like this is something we could help people adjust without having to go find some file to twiddle... e.g. juju set wordpress maxuploadsize=12mb
[15:07] natefinch: iirc that file is already under Cheetah template control and that should be a pretty simple patch
[15:07] natefinch: can you file a bug and we'll try to circle back on it? that's a good point.
[15:08] I know we haven't taken a serious look at the wordpress charm in some time, there's probably some room for improvement there
[15:08] lazyPower: cool
[15:13] Someone wants to use relations to share files between charms in Juju. Can you help me brainstorm a solution? It appears Juju only puts the public keys on the units, so they cannot scp to each other. Is there another way?
[15:14] I wrote a charm that serves the files via http.
[15:14] but I am looking for other options. Ideas?
[15:20] mbruzek1: you could exchange directories and ssh-keys
[15:20] via the relation data?
[15:20] and use rsync/scp
[15:20] rsync ftw, if you do go that route
[15:21] mbruzek1: is this a peer or provides/requires scenario?
[15:21] relation-set private-key="binaryhexdatahere"
[15:21] mbruzek1: basically, so you could do the exchange one of two ways
[15:21] marcoceppi: provides/requires scenario.
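A minimal sketch of the ssh-key exchange idea above, assuming the provides/requires scenario. The hook tools (relation-set, relation-get) are Juju's, but the relation keys ("public-key", "path"), the fileshare user, and all paths are illustrative, not taken from an actual charm:

```python
import subprocess

def requirer_joined(pubkey_path="/home/fileshare/.ssh/id_rsa.pub"):
    # Requiring side: publish this unit's public key on the relation.
    with open(pubkey_path) as f:
        pubkey = f.read().strip()
    subprocess.check_call(["relation-set", "public-key=%s" % pubkey])

def provider_changed(auth_keys="/home/fileshare/.ssh/authorized_keys"):
    # Providing side: authorize the peer's key for the unprivileged
    # fileshare user, then advertise the directory to rsync/scp from.
    pubkey = subprocess.check_output(["relation-get", "public-key"])
    pubkey = pubkey.decode().strip()
    if pubkey:
        with open(auth_keys, "a") as f:
            f.write(pubkey + "\n")
        subprocess.check_call(["relation-set", "path=/home/fileshare/data"])
```

Having the connecting side send its public key (rather than shipping private keys over the relation) keeps the communication one-way and avoids private key material in relation data.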
[15:22] mbruzek1: you could have the end that wants to connect send its public key (it's safer that way), then the one serving the files simply adds that key to authorized_keys for the user that has the files on the system and sends back a path
[15:22] Oh yeah
[15:22] mbruzek1: or you could have the charm serving the files generate private keys and send them out
[15:22] that way it's a one-way communication, but it's a little more... hinky that way
[15:23] either way I'd definitely make sure the charm serving the files creates a non-privileged user on the system to house the files and have all the other services use that user when accessing the server to minimize security risks
=== xwwt_ is now known as xwwt
[16:15] If someone has a minute to review https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-cron-template/+merge/270083 it would be much appreciated.
[16:16] We're trying to roll out in-cloud mirrors in a few places and this is (hopefully) the one remaining blocker. :)
[16:18] @Odd_Bloke whitespace fix on a template?
[16:18] ah i see in the commit message the intention here
[16:20] @Odd_Bloke - I see a linked issue which currently has no resolution. Is this MP going to resolve the problem if you're still rendering w/ jinja2 from charm-helpers?
[16:22] lazyPower: Yep, jinja2 by default removes just one trailing newline from the rendered template, and so only strips the _last_ newline (not all trailing newlines).
[16:23] i'm deploying now to verify, should have this merged shortly
[16:23] lazyPower: Great, thanks!
[16:27] @Odd_Bloke merged.
[16:29] lazyPower: Thanks!
[16:29] joooooojooooooooooo
[16:55] Hi, I was here with this issue yesterday, but I'm still having some trouble today and I'm not getting the same error.
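The trailing-newline behaviour discussed above can be emulated in plain Python (this is an illustration of the described behaviour, not jinja2's implementation): by default jinja2 drops exactly one trailing newline from a rendered template, so any extra blank lines at the end survive and have to be stripped by the caller.

```python
def strip_one_trailing_newline(text):
    # What default jinja2 rendering effectively does to a template's tail.
    return text[:-1] if text.endswith('\n') else text

def strip_all_trailing_newlines(text):
    # What a cron fragment actually needs: exactly one final newline.
    return text.rstrip('\n') + '\n'

rendered = "0 5 * * * root /usr/bin/sync-mirror\n\n\n"  # illustrative cron line
assert strip_one_trailing_newline(rendered).endswith("\n\n")  # blank line survives
assert strip_all_trailing_newlines(rendered) == "0 5 * * * root /usr/bin/sync-mirror\n"
```

jinja2's Environment also exposes keep_trailing_newline (and trim_blocks/lstrip_blocks) to control whitespace at render time rather than post-processing.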
[16:55] I have the juju-gui charm deployed and exposed with a public address of node0vm0.maas, but when I go there in a browser it hangs on "connecting to the juju environment"
[16:55] I do have a proxy set up, but to my knowledge it's configured correctly
[16:56] ntpttr: which browser are you using?
[16:57] lazyPower: I've tried firefox and chrome - on firefox going to https://node0vm0.maas hangs on the connecting screen and on chrome it says ERR_CONNECTION_CLOSED
[16:57] rick_h_: ^
[16:58] ntpttr: I've seen some strange issues out of browsers like safari and self-signed certificates, it accepts for the http session but not the websocket connection. But w/ chrome you should be g2g
[16:59] lazyPower: If I curl the address here's what it gives me. Does it matter that it's 'moved permanently'? http://pastebin.com/wHwSkPR3
[16:59] ntpttr: it's redirecting port 80 to port 443 for ssl
[16:59] ntpttr: so both ports need to be open and reachable via the proxy
[17:00] lazyPower: Because in chrome the details on the error say that it might have moved permanently, but not sure where
[17:00] ntpttr: if you open the chrome dev toolbar there's a network tab where you can see the GUI reaching out (the spinning wheel part) to the juju state server
[17:00] ntpttr: and the proxy needs to be allowing a secure websocket
[17:00] ntpttr: wss://
[17:00] ntpttr: it sounds like there's a proxy issue.
[17:01] rick_h_: Hmm I thought I had my proxy configured this same way a few days ago when it was working...
[17:01] ntpttr: are there any details in the network tools?
[17:02] frankban: is there a way to 'curl' a wss connection to test it out from the cli or the like?
[17:02] rick_h_: The only thing in the network tab in chrome is saying failed to load resource: connection closed
[17:02] ntpttr: for what url?
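On the "curl for wss" question above, one option is the third-party websocket-client package (pip install websocket-client); the URL shape below is an assumption matching the wss://{GUI-ADDRESS}/ws endpoint used by the GUI:

```python
def gui_ws_url(address):
    # Build the GUI websocket endpoint from a unit address (assumed shape).
    return "wss://%s/ws" % address

def check_ws(url, timeout=5):
    # Open and close a websocket; raises if the proxy or server refuses.
    # Requires the third-party websocket-client package.
    from websocket import create_connection
    ws = create_connection(url, timeout=timeout)
    ws.close()
```

For example, check_ws(gui_ws_url("node0vm0.maas")) should raise quickly if the proxy is blocking secure websockets, and return cleanly if the handshake works.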
[17:02] rick_h_: reading backscroll
[17:02] ntpttr: I'm guessing the proxy is closing things on you as 'not allowed'
[17:03] ntpttr: but it is a guess, apologies
[17:03] rick_h_: The URL is node0vm0.maas, it's what the orange box example bootstrap script sets up
[17:04] ntpttr: ok, so that's the url of the state server juju is running on?
[17:04] rick_h_: I have '.maas' in my no_proxy list
[17:04] ntpttr: but the connection should be to the domain that the juju-gui charm is on?
[17:04] rick_h_: the easiest way is using the python websocket client, e.g. create_connection(url) and check that it works
[17:04] ntpttr: are they colocated on the same machine/network?
[17:04] rick_h_: url being wss://{GUI-ADDRESS}/ws
[17:05] rick_h_: the service is being run on a VM in the machine I'm trying to access the URL on
[17:05] ntpttr: right, but are they on the same DNS name then?
[17:05] ntpttr: e.g. are both http://node0vm0.maas/ ?
[17:06] rick_h_: Hmm I think so, I'm sorry I'm a little confused
[17:06] ntpttr: so 10.14.100.1 is the GUI, and it redirects to 10.14.100.1:443 and then it'll try to build a wss to 10.14.100.1.
[17:06] ntpttr: I'm wondering if there's a collision on that IP address from your point of view
[17:06] ntpttr: e.g. we think connecting to 10.14.100.1 is the GUI but really it's juju itself? or something else deployed?
[17:07] rick_h_: Oh I'm not sure, no other services are deployed
[17:07] ntpttr: how did you deploy the juju-gui?
[17:08] 'juju deploy --to 0 --repository=/srv/charmstore/ local:trusty/juju-gui' after bootstrapping node0vm0
[17:08] and then 'juju expose juju-gui'
=== rogpeppe2 is now known as rogpeppe
[17:09] ntpttr: right, you deployed it to the same server juju itself is running on. Try changing the juju-gui port to something non-standard.
[17:09] ntpttr: https://jujucharms.com/juju-gui/#charm-config-port
[17:10] rick_h_: Okay I'll give that a shot
[17:10] frankban: what's the wss port by default?
We don't normally collide with juju when colocated, right?
[17:10] * rick_h_ is having sprint mental fatigue.
[17:10] rick_h_: 443 by default
[17:10] rick_h_: The odd thing though is that it worked before with this setup, and this is the default way orange box does it
[17:10] rick_h_: or based on a gui charm option
[17:11] ntpttr: yea, sorry. I'm brain-farting at the moment trying to think.
[17:11] ntpttr: does it work using incognito mode?
[17:11] frankban: No
[17:12] ntpttr: ok, so if you hit "ctrl-shift-j" you get the chrome dev tools. In there you get a nav and click "Network"
[17:12] ntpttr: then reload the page, and click the 'filter' icon that looks like a funnel and pick "WebSockets" from the list
[17:13] ntpttr: and can you screenshot what that looks like please?
[17:14] rick_h_: how should I share the screenshot? The screen is blank when I pick websockets
[17:14] ntpttr: after reload?
[17:15] ntpttr: I guess then it'd be interesting to see the list unfiltered then. The websocket should be there really early in the page load
[17:16] rick_h_: the only things showing up are data:image/png and 'node0vm0.maas (failed)'
[17:16] ntpttr: on a fresh reload?
[17:17] rick_h_: yeah
[17:17] rick_h_: one second I'm uploading a screenshot
[17:19] rick_h_: http://postimg.org/image/sseqzhpy9
[17:20] ntpttr: so looking at that, there's something taking https://node0vm0.mas to https://node0vm0/:1 ?
[17:21] rick_h_: hmm I don't know what that would be about...
[17:21] ntpttr: me either :/
[17:22] rick_h_: i've not had the gui collide with the bootstrap server in my experience (late i know, but confirmation)
[17:22] lazyPower: yea
[17:22] lazyPower: but if there was a 'proxy' setup then I was wondering if things are nested/etc
[17:22] ah, yeah
[17:25] lazyPower: ntpttr I've got to run for a sec. Sorry, but just not sure what's up. ntpttr I'd like to see if you can curl the main index.html page and get that back, as I can't tell the actual urls from your network tab.
It's cut off. But it looks like you're not getting a spinning circle; no files are able to come through
[17:25] ntpttr: the one thing that you might check is that the gui is running (maybe it died?) by ssh'ing to the unit and checking if guiserver is in the ps output
[17:26] rick_h_: Okay I'll try that out, thanks for the help
[18:13] Is there a way to retrigger uosci-testing-bot?
[18:16] for 'juju upgrade-juju', is the new software d/l'd to the client, which then sends it to the state server, which then distributes it to instances? or what?
[18:27] Are there logs for this IRC channel?
[18:30] skylerberg, beisner can help you with that - or you can move the merge proposal into WIP for ~40 minutes then back to Ready for Review - the OSCI bot runs on a timer for now and won't re-run against unchanged merge proposals
[18:30] ntpttr: Yes, http://irclogs.ubuntu.com/
[18:33] wolsen: Thanks. Going with the WIP solution.
[18:33] skylerberg, ack
[18:49] lazyPower: juju's wordpress seems to have a fairly critical bug - you can't upload pictures for your posts etc. Is there some configuration I need to set up to enable that?
[19:27] natefinch: what's tuning set to?
[19:33] marcoceppi: the default, uh... single
[19:33] you should be able to upload, what error are you getting?
[19:34] marcoceppi: there's a panel on the right that says UPLOADING and under it "foo.jpg HTTP error"
[19:35] marcoceppi: btw, I can repro easily on the local provider, but saw it first on a manually provisioned digital ocean machine
[19:35] marcoceppi: I'm happy to look for log messages, but I couldn't figure out where WP was keeping logs (if it does)
[19:36] natefinch: if you're using nginx as the engine, there's a php-fpm log either in /mnt or /var/log
[19:36] the wordpress charm is... crufty
[19:38] marcoceppi: lol... I'm figuring that out.
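The 2 MB media cap and the upload "HTTP error" discussed above usually trace to two separate limits: PHP's upload_max_filesize (whose default is 2M, matching the WordPress message) and nginx's client_max_body_size (whose default is 1m). An illustrative nginx override, with an example value; where the charm actually renders this file is not shown here:

```nginx
# client_max_body_size defaults to 1m; raise it to accept larger uploads.
server {
    client_max_body_size 16m;
}
```

PHP's own limits (upload_max_filesize, and post_max_size which must be at least as large) live in php.ini or the php-fpm pool config and would need a matching bump for uploads to succeed end to end.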
[19:40] marcoceppi: ahh, the nginx log has it in /var/log/nginx/error.log:
[19:40] 2015/09/03 14:24:00 [error] 31196#0: *161 client intended to send too large body: 1326272 bytes, client: 10.0.3.1, server: _, request: "POST /wp-admin/async-upload.php HTTP/1.1", host: "10.0.3.202", referrer: "http://10.0.3.202/wp-admin/customize.php?return=%2Fwp-admin%2F"
[19:40] marcoceppi: so, some nginx configuration problem, given that wordpress says 2 megs is ok, and I'm only sending 1.3 megs
[19:43] marcoceppi: https://bugs.launchpad.net/charms/+source/wordpress/+bug/1491995
[19:43] Bug #1491995: Can't upload files to wordpress
[21:16] Database relation question (using postgresql): I'm testing a charm, destroying and re-deploying it occasionally. What's the proper way to clean up the database when the relation is broken, so that tables aren't left behind owned by the previous user? IOW, I have tables owned by db_1_foo, so a new deploy assigned the user db_2_foo can't change said tables.
[21:20] stub: ^^ might be a question for you
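One way to handle the leftover-owner problem above, sketched as SQL run by a sufficiently privileged admin role. The role names follow the db_1_foo/db_2_foo example from the question; whether a charm's relation-broken hook has the privileges to run this depends on how the postgresql charm grants access:

```sql
-- Either hand everything the old role owns to the new deploy's role...
REASSIGN OWNED BY db_1_foo TO db_2_foo;
-- ...or drop everything it still owns, then remove the role itself.
DROP OWNED BY db_1_foo;
DROP ROLE db_1_foo;
```

REASSIGN OWNED keeps the tables and their data; DROP OWNED removes them, so the two branches suit re-deploys that should inherit versus start clean.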