=== urulama is now known as uru-afk === uru-afk is now known as urulama [11:15] is it ok for a charmworldlib to have series hardcoded to "precise" in a charm.py? [11:37] ah, i see that default series in core/store is "precise" as well [11:44] urulama: is this in a charm? [11:44] rogpeppe: hi [11:44] urulama: hiya [11:45] rogpeppe: charmworldlib source [11:45] urulama: is that the server side of charmworld? [11:45] rogpeppe: and in juju-core/store/ go source [11:46] rogpeppe: yes, looking at server side [11:46] urulama: juju-core/store ? [11:47] go juju-core code ... /launchpad.net/juju-core/store/server.go has DefaultSeries set to "precise" as well [11:53] urulama: ah, launchpad.net/juju-core is no longer a thing [11:53] github? [11:53] urulama: github.com/juju/charmstore is the new place for it [11:53] great :/ [11:53] urulama: juju-core has moved to github.com/juju/juju [11:54] morning [11:54] morning there [11:54] rick_h__: hiya [11:54] charmworldlib is the python client for the charmworld api [11:55] urulama: it looks as if the store DefaultSeries is just a holdover from old code and is not actually used anywhere [11:56] urulama: the new logic chooses the most recent LTS series (see byPreferredSeries) [11:57] * urulama waits to dl the code from git [12:04] morning ya'll [12:10] rogpeppe: went to see if there is a warning that github code should be used for core ... [12:10] morning, jrwren [12:10] urulama: there definitely should be some kind of redirect message [12:11] * urulama might be blind ... https://launchpad.net/juju-core [12:12] rogpeppe: this is the only thing that i found about gridfs limitations ... "Working Set: Serving files along with your database content can significantly churn your memory working set. 
If you would not like to disturb your working set it might be best to serve your files from a different mongodb server" [12:13] rogpeppe: (regarding the charm/bundle talk we had with frankban) [12:13] urulama: we should definitely preserve that possibility, but there are definite advantages for an environment to use the same (automatically HA'd) mongo server for the charm store too [12:14] jrwren: bac you guys up for a call this morning? [12:14] rick_h__: sure [12:15] rick_h__: now? when? [12:15] * bac fetches headset [12:15] bac: was hoping now, waiting to see if jrwren sees our pings [12:20] urulama: so re: the gridfs stuff. In a normal small env I don't think there's much file serving going on. mainly some icon files really [12:20] urulama: and moving this to theblues channel [12:21] rick_h__: just ping me when you're ready [12:21] bac: let's hangout then and you guys can sync up later [12:21] * rick_h__ wants to go get some breakfast/coffee [12:21] bac: just the friday standup room please [12:24] rick_h__: yes, I'm up for a call. [12:24] jrwren: https://plus.google.com/hangouts/_/calendar/cmljay5oYXJkaW5nQGNhbm9uaWNhbC5jb20.t3m5giuddiv9epub48d9skdaso?authuser=1 [13:23] * rogpeppe lunches [13:29] rick_h__: you were right. through the azure dashboard i could see there was no endpoint created for :80 even though it is open and exposed in juju. [13:30] bac: :( [13:30] rick_h__: opened it via azure and now can get in [13:31] bac: I had to manually open our CI ports for saucelabs (8888 and 8889) so had been poking in there before [13:31] rick_h__: their separation between 'cloud services' and 'vms' [13:31] is weird [13:32] yea, I really am not a fan of a lot of their dashboard stuff. 
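The byPreferredSeries behaviour mentioned earlier (picking the most recent LTS series rather than a hardcoded "precise" default) can be sketched in shell. This is an illustrative sketch only, not the actual Go code in github.com/juju/charmstore; the function name is made up, though the series/version mapping is real Ubuntu data from the period:

```shell
#!/bin/sh
# Hedged sketch: prefer the most recent LTS among a set of candidate series.
# Non-LTS series (quantal, raring, saucy, ...) are ignored entirely.
preferred_series() {
    for s in "$@"; do
        case "$s" in
            precise) echo "12.04 $s" ;;  # LTS
            trusty)  echo "14.04 $s" ;;  # LTS
            *) ;;                        # not LTS: skip
        esac
    done | sort -rn | head -n 1 | cut -d' ' -f2
}

preferred_series saucy precise trusty   # prints "trusty"
```

With only "precise" available it falls back to precise, which matches the old hardcoded default in the common case.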
[13:33] it's confusing and sometimes hard to get back to where you were at one point [13:33] * rick_h__ migrates to the coffee shop afk for a few min [13:44] rick_h__: neither hatch nor i have been able to figure out why the stupid event to clear the inspector when the charm won't load isn't being picked up. [13:44] or at least, i infer he didn't find anything. [13:44] rick_h__: you want me to just mothball this card for now and move on to an MV thing, since even the simple fix is taking a while? [13:50] jcsackett: sounds good [14:00] rick_h__: ok. [14:00] mothballing. [14:00] bac: you had to open it in cloud services or in the VM part? [14:01] also, sounds like a bug. open-port should do that, IMO [14:01] bac: is this setup without the availability sets turned on to allow you to do this colocating? [14:02] bac: or can you place via the cli even with the availability sets enabled [14:02] jrwren: vm section [14:02] jrwren: bug 1340756 filed [14:02] <_mup_> Bug #1340756: [azure] Service deployed to existing machine do not expose properly [14:02] cool. [14:03] rick_h__: availability sets are on, per the default. (meaning i didn't disable them) [14:03] rick_h__: i guess the --to is only disallowed for 0 ? [14:03] bac: I think it's only blocked via the api :( [14:03] rick_h__: actually, i was kind of surprised it let me do it [14:03] but only after the fact [14:03] bac: and since the --to cli is working around that it works but failed for us in the gui [14:03] bac: yea, just clicked in my head [14:04] right, since QS uses api [14:04] right [14:04] rick_h__: if we were smart, we'd redo the whole setup to use availability sets properly... [14:04] if we were really smart we'd save the hassle and move to ec2 [14:05] bac: that'd require smarts :) and AS don't do us good with jenkins and such since we'd need to deal with multiple units and charms like jenkins don't do anything with multiple units that I'm aware of [14:05] * bac buries head in sand. 
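For reference, the flow under discussion goes like this: the charm declares its port in a hook, and the user exposes the service, which is supposed to make the provider open the firewall/endpoint (bug 1340756 above is that azure does not do this when the unit was placed with --to). A minimal sketch against a live environment; `open-port` and `juju expose` are real commands, the service name is just the one from the conversation:

```shell
# In a charm hook (e.g. hooks/start): declare the workload's port to juju.
open-port 80

# From the juju client: place the service on an existing machine (the case
# that triggers the bug above), then expose it.
juju deploy juju-gui --to 1
juju expose juju-gui
```

The workaround bac used was opening the endpoint manually in the azure dashboard.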
[14:05] bac: and yea, I keep justifying the pita with azure by saying we're using a rarely used cloud, doing some good qa, getting a discount, etc [14:05] do we typically use ec2 autoscaling groups? [14:05] rick_h__: we are being good corporate citizens. [14:05] but it's not easy to swallow sometimes and you take the brunt of it right now [14:06] and we've improved azure provider through our immense pain [14:06] jrwren: the only stuff *we* use (UI Engineering) is this azure thing [14:06] we test and such in ec2, but don't have anything running [14:06] jrwren: our production stuff runs on prodstack, an internally run openstack cluster maintained out of london DC [14:07] jrwren: so no, we don't use ec2 autoscaling groups or anything [14:07] what about juju-core support? it supports the azure availability sets, does it also support autoscaling groups? [14:08] jrwren: not that I'm aware of. We're only recently getting into cloud specific tooling. We've tried to be agnostic, but users want constraints based on machine types/etc [14:08] so work on ec2 machine types, azure availability sets, openstack networking, are more recent tasks [14:08] got it. Thanks. [14:19] jenkins seems to be grumpy. Build failed right away with permission denied errors on the build-prod directory. [14:19] guihelp: On a related note: very small QA needed: https://github.com/juju/juju-gui/pull/429 [14:19] kadams54: hmm, maybe related to what bac was doing [14:19] bac: ^ [14:20] hey, that sounds like me [14:21] kadams54: sorry about that. should work now. [14:21] np [14:23] rick_h__: on jenkins juju-gui-merge step, you have an extra build step that marks the merge as a failure. what specifies that it is run only on failure? [14:23] * luca has a lovely stack of changes to machine view [14:24] luca: and you've got a 'release blocker' and 'non-release blocker' for each? 
[14:25] bac: looking [14:25] bac: it's that log text [14:26] bac: if that text appears in the build log, it's a failed build [14:26] oh [14:26] rick_h__: possibly…. if possible I would like to jump on your standup and talk through them quickly [14:26] luca: let's do a post-standup with just MV folks [14:26] no need for everyone to go through tbh [14:26] rick_h__: ok, sounds good [14:27] luca: I'll ping once we're wrapping up if that's cool? [14:30] rick_h__: fine by me [14:30] my watch is still stuck at ups....ugh [14:31] hatch: heh, well my new watchband for mine shows up today [14:31] get rid of this default goop [14:32] have you used the watch yet? [14:32] hatch: yea, been using it all week [14:32] got it last friday? [14:32] no, when I got back from holiday, tues [14:32] jujugui: the landing step for juju-gui on jenkins has been updated to serve the new branch. please keep your eyes peeled for any funny business on the next few landings. apologies in advance for any implosions. [14:32] so day 4 [14:33] bac: thanks, I'll update the jujugui.org DNS [14:34] bac: and once we know it's working we can figure out how to move the comingsoon url [14:34] bac: is it looking for specific dns? [14:34] bac: if so can you add jujugui.org as a hostname? [14:34] bac: this doesn't seem good: http://cl.ly/image/1G2F1K0S1k0b Currently have a build stuck because it's waiting for an available executor. [14:34] rick_h__: yes, not yet [14:35] kadams54: hmm, looking [14:35] kadams54: restarted. [14:36] kadams54: dumb error message: no space left on device. i think it was a permissions problem instead. [14:36] OK - build seems to be going now, thanks. [14:36] hatch: what watch did you get? [14:36] the LG one [14:36] if it ever friggen shows up [14:36] hatch: oh, cool [14:36] rick_h__: what are you going to do with jujugui.org? 
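The failure-marking step described above, where a build is failed whenever a given string appears in its log, is typically done with the Jenkins Text Finder plugin; the same idea can be sketched as a shell build step. The marker string here is invented for illustration:

```shell
#!/bin/sh
# Hedged sketch of "fail the build if this text appears in the log".
# The real job likely uses the Jenkins Text Finder plugin; the marker
# string below is an assumption, not the actual one in the job config.
check_build_log() {
    if grep -q "FAILED TO MERGE" "$1"; then
        echo "failure marker found; marking build as failed"
        return 1
    fi
    echo "build log clean"
}
```

A nonzero exit from a build step is what actually marks the Jenkins build failed; the grep is only there to decide.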
[14:37] bac: I was going to point jujugui.org to point to the same host as the ci one [14:37] rick_h__: when we're ready, we just need to move comingsoon.jujucharms.com [14:37] rick_h__: oh, you want to not use comingsoon anymore? [14:37] so ci. is ci, no ci is the qa site off of ci [14:37] bac: I figured we'd move that as well, but figured this was easy to do [14:37] and can be shorter :) [14:37] but comingsoon is all over the company so not wanting to change that at the moment [14:37] rick_h__: ok, well let me know what you want the fqdn to be so i can update apache ServerName [14:39] bac: hmm, nvm. Looks like hover won't let me do a CNAME for the root [14:39] that's kind of a pita oh well. [14:39] bac: so ignore me, let's just go with comingsoon and carry on [14:39] bac: maybe we should look at a static ip in azure though for this [14:39] rick_h__: you can currently get to the jenkins version as ci.jujugui.org:80 [14:39] bac: so that we don't have to update dns if we need to bring up a new jenkins/etc [14:40] rick_h__: ok, i'll look at that after everything is stable [14:40] bac rgr [14:40] where does apache fit into this? [14:41] bac: I get a permission denied error looking at that [14:41] jrwren: the GUI needs to be served out, apache is being used to serve the production juju gui files [14:41] oh the gui! :) [14:41] jrwren: for your needs you don't have the same requirements [14:42] jrwren: but bac is doing this process with the gui so you two are a bit in parallel [14:42] Machine view enables you to customise the deployment of your services. Drag and drop unplaced units from the new units column to add services to a machine. [Create a machine] [14:42] luca: you don't say. :) [14:42] rick_h__: would that work as onboarding? [14:43] luca: thinking [14:44] There's a confusion of units and services in there. Thinking of what is off or what can be more clear [14:44] "Drag and drop the unplaced units of your services"? 
[14:44] getting long :/ [14:44] rick_h__: yeah, it isn't building now. errors from smash for d3 stuff [14:44] bac: ah ok [14:45] rick_h__: does that make sense to you? [14:45] bac: you had some issue with the file ownership of the deps? [14:45] d3 deps [14:46] rick_h__: well i did a chown [14:46] bac: so no, it's not clear, just that you told me it was 403'ing because it was building and stuff is going on [14:48] Makyo: can i pick your brain for a few on the "block deployment for unresolved conflicts" card? [14:48] jcsackett that's going to be a little complicated [14:49] hatch: lovely. but: it's the only MV release related card on the board, so off i go. [14:49] hatch: can i pick *your* brain, then? [14:49] yep [14:49] joining the standup room [14:49] one sec [14:49] hatch: standup hangout? [14:49] jcsackett: feel free to poke at the backlog on deck as well tbh [14:49] ...great minds, or something? :p [14:49] rick_h__: is stuff there likely to be more important? [14:50] haha [14:50] rick_h__: or is this a release requirement? [14:50] jcsackett: looking sec [14:50] jcsackett: i'm debating on the release requirement. I think so, but it's not very visible as it only happens in real environments with multiple simultaneous users [14:50] jcsackett I don't seee youuuuu [14:50] jcsackett: so as far as requirements go it's down the list as far as getting sign off on MV [14:50] hatch: not there yet, rick_h__ might be throwing a different card my way. :p [14:51] ok ping when ready [14:51] hatch: ok. sorry for the confusion.:p [14:51] * rick_h__ joins call [14:51] jcsackett: my fault [14:55] jujugui call in 5 kanban please [14:56] rick_h__: http://ci.jujugui.org/ should work now, for now. [14:56] bac: rock on [14:56] permissions is hard [14:57] kadams54: bac please coordinate to track your branch and landing that and making sure it auto updates then [14:57] hatch: or jcsackett can you peek at kadams54's branch to help move that forward [14:57] ? 
[14:57] I think it's just an asset change right? [14:57] jujugui: what's the url for determining the git version being served? [14:58] juju-ui/version.js [14:58] bac: juju-ui/version.js [14:58] ty [14:58] jujugui call in 2 [14:58] Makyo: or hatch can one of you guys run the call today please, fighting a headache and have a bunch of background noise. Not up to it [14:58] Sure [14:59] heh yeah a coffee shop probably isn't the best place for that :) [14:59] hatch: hoping some daylight and getting out of the house helped [15:00] am I really the first one there? [15:00] jujugui call in 1/now jrwren Makyo hatch [15:00] jrwren: wrong url, friday is different for the longer call [15:00] oh [15:00] jrwren: check the calendar link please [15:05] kadams54: have you marked it :shipit: yet? [15:06] bac: No, waiting for QA from someone. *ahem* jcsackett or hatch :-) [15:06] ah, ok [15:06] kadams54 it's going to be a little tough for me to do a qa on it [15:06] kadams54: i'll get to it just as soon as call ends. [15:06] branches are kind of messed up atm [15:06] thanks jcsackett [15:11] luca: ping for standup [15:13] luca: going once, going twice [15:16] and....gone! [15:16] luca: ok, called off for now [15:16] that coffee just cost you [15:17] So apparently MI will be hit by a polar vortex next week. [15:17] For which I blame hatch. [15:17] <3 [15:17] ...so did everyone leave the standup room while i restarted ff, or is google just screwing with me now? [15:17] no 90 degree days for me [15:17] polar vortex....rofl you must be watching American news [15:17] jcsackett: ^ we called it off [15:17] rick_h__: cool. [15:17] hatch: yea, going to get a 20F drop in temps for a couple of days [15:17] woto! [15:18] woot! [15:18] hatch: well, he does live in the US, so it stands to reason. [15:18] damn, send the vortex this way. [15:18] jcsackett: it'll be Canada invading the South. [15:19] lol [15:19] kadams54: if it means it's not 101 w/ the heat index i'll take it. 
[15:19] rick_h__: sorry, was sorting the images with Spencer [15:19] rick_h__: I’ll be ready in 2 mins [15:19] luca: need everyone? [15:19] jcsackett: I'm seeing highs in the mid 80s for you by mid next week. [15:19] rick_h__: who is everyone? [15:19] luca: or can we delegate a couple of short straws? [15:20] rick_h__: I don’t mind [15:20] luca: kadams54 hatch Makyo and jcsackett [15:20] rick_h__: I’m happy for whoever to come along [15:20] ok, two volunteers to rejoin the hangout? [15:20] nowhammy nowhammy nowhammy! [15:20] oki joining [15:20] rick_h__: it’s looking a lot nicer >:D [15:20] I'll join [15:20] ty hatch [15:20] luca: :/ this doesn't sound like releasing next week levels of changes [15:20] ty kadams54 [15:20] ok i'm here and there is no one else there [15:20] :'( [15:21] rick_h__: that’s not the smiley I wanted lol [15:21] Are we back in the same room? [15:21] kadams54: rgr [15:21] luca: ok we're all in the room waiting [15:21] again... :P [15:21] haha I'm so funny [15:21] rick_h__: hatch just waiting for the images, 2 secs! [15:23] rick_h__: hatch just tying up loose ends [15:23] rick_h__: hangout linky [15:24] https://plus.google.com/hangouts/_/calendar/cmljay5oYXJkaW5nQGNhbm9uaWNhbC5jb20.t3m5giuddiv9epub48d9skdaso?authuser=1 [15:25] rick_h__: you're good, or you need more to jump in? [15:27] i'll take silence as "good". :p [15:33] bac: shipped my PR. Let's see what blows up :-) [15:34] * bac ducks. covers. [15:36] jcsackett: all good [15:43] haha [15:43] rick_h__: lol [15:43] luca: sorry, hit the wrong button [15:43] anyway, thanks for the updates [15:44] rick_h__: no worries, I’ll send it over [15:44] it does look nicer [15:50] yup I'm also a fan [16:04] lunch. 
bbiab [16:07] kadams54: whoo: http://ci.jujugui.org/juju-ui/version.js [16:08] bac: kadams54 awesome [16:08] bac: once you get the DNS updated, please let hazmat know and we're moved and off [16:08] and owe hazmat some drinks in germany for running that for so long [16:08] rick_h__: by that you mean, once i look into fixed ip address? [16:08] wohoo! [16:08] bac: ah, yes sorry [16:09] jumping around too much [16:09] rick_h__: i doubt we can transition to fixed ip in situ [16:09] hazmat: on the drinks or getting to shut down the instance :P [16:09] bac: situ? [16:09] rick_h__: i mean, we may have to tear down and rebuild to get a fixed ip. [16:10] dunno. i'll see after lunch [16:10] bac: oh, I'd hoped it was like ec2 where you can get a floating IP and associate it with the instance [16:10] and if we bring up a new instance you just move the floating IP to point to the new one [16:10] rick_h__: maybe it is. [16:10] and DNS stays the same without updates [16:10] bac: k, thanks [16:10] rt [16:11] * rick_h__ goes home from the coffee shop. biab [16:22] lazyPower hey, whenever you get a moment could you have a chat with your compatriots about the requirement for port 80 for the Ghost charm? It's kind of in limbo right now and I think that's the only blocker for promulgation [16:23] port 80 is not a requirement for promulgation. was the last interaction with jose? [16:24] if that's the case, don't worry about it. i'll be in the queue today - i've been slammed all week. and i'll get the final review done. 
yeah - ok cool thanks - no rush [16:25] lazyPower I just wanted to open up a discussion about it if port 80 was indeed a blocker because I didn't really want to include a server in the charm [16:25] the idea of your app is that it deploys behind a LB anyway, regardless of an nginx charm or apache - so any fuss about port 80 is moot [16:25] right [16:26] just make sure that you have thought about any asset pipeline mojo [16:26] yeah I include the ghost blog asset in the charm [16:26] I'll be adding the ability to supply a different version once I have at least a single external user haha [16:27] perfect - by the end of following the deployment tut in the readme if i have a working ghost app, it's good. (you don't run as root, you took care of shenanigans, and you're not using AWS Metadata right?) should be g2g [16:27] kewl [16:27] it 'should' just be drag and drop to deploy [16:28] but the only crappy part is that the user 'must' read the readme for configuration of the actual ghost account [16:28] but I imagine that's a similar situation for WP [16:30] jcsackett are you working on #1339798 ? 
[16:30] <_mup_> Bug #1339798: When using Firefox and you drag a unplaced unit onto any drop zone the page shows a bad request [16:30] wp has a setup phase post deployment [16:30] if you don't configure a user, you still get the setup wizard [16:30] if you set a user, it runs the wizard for you [16:30] ahh, ghost doesn't really have a wizard [16:30] yeah, it's a config file edit iirc [16:30] you have to visit a special url in your app to set it up [16:31] juju actions should make that setup easier :) [16:31] hatch: you can include a script to be used via juju-run [16:31] corey_fu has done that with the allura charm - which is a stop-gap pending juju actions [16:32] yeah - well really it's just that the first time they use it they need to visit a special url, so they would even have to know about the script by the readme [16:34] you'll have the same problem with the action [16:34] or is the gui going to expose the actions? [16:35] * lazyPower has no concept of what's coming from the gui wrt actions. [16:35] lazyPower that's one of the plans [16:36] I thought i heard that at vegas, but there's been a whole ocean of information since then [16:36] i find myself crossing wires often about what i remember vs what's been decided post sprint [16:37] haha yeah - well, it was talked about in vegas, who knows what's happened since :D [16:38] hatch: sorry about the delay with the review, it'll get done today though. You'll have a response of +1 or 'wait whaaaat' today. [16:38] lol np [17:04] jujugui have a nice weekend everyone, c u === urulama is now known as uru-afk [17:04] you too, cya urulama [17:04] you too [17:15] 70 lines of test setup....5 lines per test....aww yeah [17:16] maybe this function is doing too much.... [17:16] ;) [17:23] jujugui lf a review and qa on https://github.com/juju/juju-gui/pull/430 [17:23] plz and thx [17:25] happy weekends all [17:25] rogpeppe: have a good weekend [17:31] jcsackett you around? 
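The juju-run suggestion above (shipping a post-deploy setup helper inside the charm as a stop-gap until juju actions) would look roughly like this; the unit name and script path are hypothetical, but `juju run --unit` is the real client syntax of the era:

```shell
# Run a setup helper shipped with the charm, on a deployed unit.
# juju run executes the command in the unit's hook context, so the
# script can use hook tools like config-get if it needs to.
juju run --unit ghost/0 "/usr/local/bin/ghost-setup"
```

The user still has to learn about the script from the readme, which is the limitation hatch points out next.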
[17:31] jujugui fighting this headache and going to go afk. If you need something shoot me a message in hangouts (chat) and it'll ring [17:31] cya rick_h__ get betta [17:34] feel better (not putting your nick, cuz I don't want to ping you) [18:15] bac: can I push to ~yellow/juju-gui/ci-environment/ or does that need to be reviewed? [18:16] jrwren: it is pretty informal. but if you'd like me to have a look that'd be ok too. [18:16] jrwren: since it is just a private branch, we can't really do a merge proposal in launchpad [18:18] jrwren: what changes have you made? have you pulled from LP to merge the changes i've pushed? [18:24] it's just adding that mongodb package to that tools list. nothing else. [18:24] everything else I did was jenkins config. [18:24] now I just need to figure out how to open the port. [18:25] jrwren: ah, that must be done via the azure web interface [18:26] jrwren: if you just changed the bundle then please merge in my changes (i made lots of changes to the bundle) and then push it directly. [18:26] jrwren: since it has keys and stuff please ensure you're pushing to that private branch, not one in your namespace. [18:27] jrwren: do you have access to the azure dashboard? [18:27] i'll find out. [18:29] i do not [18:34] jrwren: what port/s do you want opened [18:34] 8080 [18:34] on trusty-slave, right? [18:35] yes [18:39] jrwren: try now [18:39] 8080 connection refused [18:41] boo [18:42] jrwren: and that is the message you got before [18:43] not sure, let me see if I tried before. [18:44] jrwren: on trusty-slave i see a listener on 127.0.0.1:8080. i think it needs to be listening on the eth0 interface 10.0.0.36 [18:44] oh. [18:44] yup, thanks. [18:45] i'm looking right at that. [18:45] i guess it's been a long week [18:45] hrm, well maybe this shouldn't be internet exposed? [18:50] ok, looks good, thanks for the help [18:51] jrwren: great. 
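The "connection refused" diagnosis above comes down to the bind address, and is easy to check with standard tools; the port is the one from the conversation, and the daemon flag named in the comment is an assumption about how the jenkins slave was started:

```shell
# Show listening TCP sockets and the addresses they are bound to.
netstat -tln | grep ':8080'
# 127.0.0.1:8080 -> loopback only; connections from other hosts are refused
# 0.0.0.0:8080   -> all interfaces, including eth0 (10.0.0.36 here)

# Fix is daemon-specific: e.g. jenkins accepts --httpListenAddress=0.0.0.0
# (assumed relevant here) instead of the loopback-only default/config.
```

As noted in the channel, binding wider also means deciding whether the port should be internet exposed at all.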
[18:53] jrwren: what does the colon do here: [18:53] rm -rf $charmstoregopath || : [18:53] true [18:54] i should have spelled out || true [18:54] first time run that path won't exist and it would fail, so this is that case. [18:54] ya know, I'm going to change that. [18:55] i should test for dir and rm and if rm fails, let it fail. [18:55] i don't think it will fail with -rf but being safe is better [18:56] jrwren: i do wonder about putting it in /tmp, though, since it could be deleted on reboot. [18:56] jrwren: want to do a quick hangout? [18:56] 2 fail cases I ran into: 1. DNE, rm will return 1 and because of set... script ends. 2. other user (ubuntu) owns dir and rm fails [18:56] sure, lets chat [18:57] jrwren: i'm in https://plus.google.com/hangouts/_/canonical.com/daily-standup [19:12] jujugui: can someone look at https://github.com/juju/juju-gui/pull/431 for me? [20:33] jujugui: can someone look at my PR? https://github.com/juju/juju-gui/pull/431 [21:14] jcsackett still need that review? [21:14] looks like it [21:14] :) [21:24] hatch: i'm looking at yours still as well. got started a bit ago and then got sidetracked. i'm about to QA it. [21:25] cool, I hope you didn't get too much of a headache with yours [21:25] I knew what the issue was when you started but you musta been offline [21:26] and replied to your comment [21:33] hatch: no headache at all. just spent some time getting re-acquainted with mv code. [21:34] coolio [21:34] I will qa yours soon, having some technical issues heh [21:34] hatch: follow up is good. [21:34] I'll land it if it's all good [21:34] (in ref to your reply) [21:34] excellent [21:36] hatch: 1 qa thing. when i deploy after scale up, shouldn't the text field reset? it stays at whatever number i put in. [21:37] seems odd that it doesn't reset to the state it was in when i opened it. [22:03] hatch: marked your PR as QA OK, but we should file a follow up to deal with scale out UI thing. 
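The refactor jrwren describes (test for the directory, and if rm itself fails, let the script fail instead of masking it with `|| :`) could be sketched like this; the path is a stand-in for the real `$charmstoregopath`:

```shell
#!/bin/sh
# Hedged sketch of the safer cleanup discussed above. The `|| :` idiom
# swallows every rm failure; under `set -e` this version tolerates only
# a missing directory, while a real failure (e.g. files owned by another
# user) still aborts the script.
set -e
clean_dir() {
    dir=$1
    [ -d "$dir" ] || return 0   # case 1: dir doesn't exist -> nothing to do
    rm -rf "$dir"               # case 2: rm failure propagates and aborts
}
clean_dir /tmp/charmstore-go-example   # stand-in path, safe to remove
```

This covers both fail cases jrwren lists: a nonexistent path no longer errors, and a permission failure is no longer hidden.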
[22:04] i am going AFK, so i'll :shipit: on mine later if it's QA OK.