[00:44] gary_poster: around?
[00:44] or do I start the email
[00:50] ok, well email replied to
[01:20] rick_h_: you still around? i see heartbeat is happy now. thanks for getting it up again!
[01:20] rick_h_: i read the backlog on webops but i'm not sure what happened
[01:25] rick_h_: review done
[01:30] bac: thanks
[01:31] bac: yea, there was confusion. It was changing that url in the charmtools + restarting supervisord that runs the ingest process
[01:31] bac: but took a while to find someone and to debug that there's not an issue with the version of charmtools on the server
[01:32] bac: ok, I'm submitting the branch now. So in the morning we need to do another deploy to the rev after this lands (assuming the lander doesn't hate me)
[01:33] bac: the tricky part is that the production-overrides should be used to build a new config, but config changes don't happen all that often so cross your fingers on that part
[01:33] ok, i followed it all but that last part
[01:33] bac: so the production_overrides.ini
[01:34] bac: where the port number was added?
[01:34] rick_h_: if it lands do i need to do anything but request a deploy?
[01:34] bac: hopefully no. The cowboy will be erased by upgrading charmtools and the deploy should go smoothly
[01:35] rick_h_: thanks again for getting this figured out. let's hope tomorrow goes smoother.
[01:35] bac: rgr, we're all good. If we can get the charm stuff into shape we'll be good to land this baby and have a long good weekend :)
[01:35] rick_h_: rt. good night.
[01:35] thanks for getting that branch going. I'm going to go turn the brain off for the night. See ya tomorrow
[01:58] thank you both rick_h_ and bac
[04:50] evening
[04:55] hatch: Hello
[04:55] hows it going?
[04:55] Good thanks, yourself?
[04:57] good good
[04:58] trying to fix my vm
[04:58] doesn't look like parallels supports ubuntu 13.10 no matter what
[05:01] Ouch
[05:01] hatch: Dual boot?
[05:01] I'll have to look into that on the mac mini
[05:01] hatch: Buy a new computer? :P
[05:02] haha - I willllll but I have pretty much convinced myself I need the 15" with the graphics card
[05:02] heh
[05:02] did you end up picking up a new one?
[05:02] Yeah, well, it's in the post
[05:03] Should arrive next week
[05:03] cool what did you go with?
[05:03] hatch: Air, i7, 8GB RAM, 256GB SSD
[05:04] oh cool that's the other option I was thinking of going with
[05:04] So couldn't just pick up in store
[05:04] that thing should fly :)
[05:04] Yeah, hopefully
[05:05] well this thing has the 2.3 I5 with the intel 3000 and 16GB of ram
[05:05] hatch: In one of the stores the guy was trying to convince me of the retina 13" MBP as for $100 more you get faster CPU plus retina, but it weighs 400 grams more.
[05:05] hatch: Nice. How much?
[05:05] no idea I bought it in 2011
[05:05] haha
[05:05] it only supports 8GB of ram
[05:06] :)
[05:06] Oh right, your current computer
[05:06] yeah
[05:07] hatch: How much would the new MBP cost?
[05:07] so this thing -is- fast enough for everything i need it for, but no games
[05:07] lots.....lol
[05:07] heh
[05:07] sec
[05:07] $2600+tax
[05:08] assuming I don't upgrade the cpu or storage
[05:08] compared to the $1500 for the air (equipped like yours)
[05:08] so the question is....do I need to game for that $1100 more? :D
[05:09] hatch: haha
[05:09] it would cost me that for a gaming rig
[05:09] so really....
[05:09] hatch: Or a console
[05:10] true, but I can't take the console with me
[05:10] at least not very easily
[05:10] hatch: Although the CPU will mean running tests faster etc.
[05:10] the single core speed isn't that much faster on the pro than the air
[05:10] it just has 2 more cores
[05:10] and our tests are single threaded right now
[05:11] true
[05:11] so it'll only be like 10% faster
[05:11] hatch: But you're running VMs etc. as well
[05:11] are you trying to convince me to buy the mbp?
[05:11] you're a bad influence
[05:11] haha
[05:12] nope, I mean, I think, uh...
[05:13] ok when you get yours you can be my test monkey
[05:14] you can run the tests and maybe some games for me to see if they run :P
[05:14] heh
[05:15] hatch: I bought it through a local store which meant a saving of $140 vs the online Apple store, but it means I don't have an online tracker to refresh every five minutes
[05:15] haha
[05:15] small price to pay for the $140 discount :)
[05:15] yeah
[05:16] I'm going to try and hold out until a few weeks before the next sprint so I'll have some time to decide
[05:16] I actually didn't realise you could get them cheaper than the Apple store
[05:16] I didn't either
[05:16] heh
[12:07] morning
[12:13] howdy frankban, did you tinker with benji's branch at all?
[12:21] rick_h_: yes. I made some changes, trying now bootstrap + deploy
=== _mup__ is now known as _mup_
[12:21] frankban: ok cool, in looking at his branch I think the missing thing is that we need to send a dict to the deployer call vs the string from sanitize_constraints
[12:22] normally the deployer uses a string because it passes it to the cli as `juju deploy --constraints=""'
[12:22] frankban: ok, let me know how it goes and what I can do to help. I'll stop messing with his branch for now then.
[12:23] and grab coffee...
[12:23] rick_h_: I added that and moved the sanitize logic from base to views, so that, if a constraint is passed but not supported, we can deny the bundle deployment and send to the GUI the error message
[12:24] I also created a new deployer tarball with the fix to the deploy() call, adding the missing parameter
[12:24] rick_h_: ^^^
[12:36] sounds great frankban
[12:36] thank you
=== BradCrittenden is now known as bac
[12:38] gary_poster: staging has been updated with the changes rick landed last night. it looks happy.
[12:39] awesome bac. thanks
[12:39] gary_poster: i shall now proceed with an RT to get production updated and less stupid
[12:39] heh, great
[12:39] gary_poster: you're aware of our decision to temporarily fork charm-tools for expediency until we can remove it entirely?
[12:41] yes, bac. I'd like to talk more about the "remove it entirely decision" once the heat of the moment has passed and see what other options we can pursue, but yes, I know we have a three-stage story of "hack production" (last night), "hack the branch with a fork" (last night and today), and "make it better" (ASAP, such as next week).
[12:41] frankban: awesome
[12:43] gary_poster: yeah, that's a more rational statement.
[12:44] cool
[12:44] rational? are we still at rational? :P
[12:44] we have next week for that :-)
[12:52] gary_poster, rick_h_: it seems to work, writing some missing tests and proposing
[12:52] frankban: rgr, will look at the MP when it hits
[12:53] awesome!
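The 12:21-12:24 exchange above is about converting the user-supplied constraints string into the dict form the deployer call expects. A minimal sketch of that conversion is below; the function name and error handling are illustrative assumptions, not the actual sanitize_constraints code.

    # Minimal sketch: turn a juju-style constraints string such as
    # "mem=2G cpu-cores=4" into a dict for a deployer-style call.
    # The function name and error handling are illustrative assumptions.
    def constraints_to_dict(constraints):
        """Parse space-separated key=value pairs into a dict."""
        result = {}
        for pair in constraints.split():
            key, separator, value = pair.partition('=')
            if not separator:
                raise ValueError('malformed constraint: {}'.format(pair))
            result[key.strip()] = value.strip()
        return result

    # Example: constraints_to_dict('mem=2G cpu-cores=4')
    # -> {'mem': '2G', 'cpu-cores': '4'}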
[12:59] btw, new macbook arrived this morning, left ignored in its box
[12:59] rick_h_: the new deploy produced https://pastebin.canonical.com/100054/
[12:59] frankban: we all appreciate the sacrifice
[12:59] i think it is not a problem but causes the webops to be concerned
[12:59] bac: ?!
[12:59] bac: yea, I'm looking for the hook to see what it's doing
[13:01] jujugui: post deploy charmworld is good: http://manage.jujucharms.com/heartbeat
[13:01] bac: cool, waiting to see if the bundles get processed vs blocked
[13:02] bac: yea, not sure on that migrate thing. I don't understand why it's doing that
[13:02] ok
[13:02] bac: well, not sure what prepare-upgrade --init is
[13:02] rick_h_: i'm making a card about the queueing of multiple revisions per sinzui's comments yesterday
[13:02] ooh, let's see if my heartbeat auto reload worked
[13:03] bac: ok, I think we're fine. In my testing last night we were around 8ish minutes to process the queue.
[13:03] so we're well under the 15min window for now
[13:03] yay for auto reloading heartbeat page!
[13:03] * rick_h_ is fascinated by the little things in life
[13:04] cool bac
[13:04] gah, docstrings or bust
[13:05] bac: all 0 so yay!
[13:05] rick_h_: yes, not waiting to timeout due to egress rules speeds up the processing dramatically. go us.
[13:05] :)
[13:06] bac: it's almost like...you called that one lol
[13:06] gary_poster: webops wants us to move staging off canonistack so it is more production-like
[13:06] bac: I've got a card for that
[13:06] bac ack on call
[13:06] oh good
[13:06] bac: it'll need to be a dupe since we won't be able to auto upgrade it from CI
[13:07] it'll be more a 'practice deploy' than anything from what I can tell
[13:07] oh, ok
[13:07] dearlordletitwork.jujucharms.com
[13:08] lol
[13:32] bac: oh, that whole output was just to test if the prepare-upgrade flag was available to the migrate script
[13:33] bac: I guess it's an artifact that on the first time you update the charm/code they need to be in sync. this protects them from being out of sync
[13:33] bac: since the charm doesn't want to run migrations prepare-upgrade if that's not a valid command
[13:33] rick_h_: hmm, can it be squelched or a message saying "ignore the following non-error"
[13:34] bac: i bet -h goes to stdout or something. I think the best thing is to just remove the check from the charm
[13:35] we're past a point where it's a useful check
[13:35] rick_h_: ok. i made a card for it.
[13:35] bac: rgr
[13:41] morning rick_h_, what's the constraint sanitization status?
[13:45] benji: frankban has a branch he's getting up for review
[13:45] rick_h_: great!
[14:01] * gary_poster running. back in 1.5 or 1.75 hours. (what a week)
[14:06] gary_poster, rick_h_, benji: the branch is up for review and qa: https://codereview.appspot.com/22810043
[14:06] frankban: I'll be glad to do one or both; I'll start on the review.
[14:08] frankban: looking
[14:08] benji, rick_h_: thank you both
[14:11] frankban: I thought that we wanted to ignore invalid constraints, not generate an error. Have I misunderstood the requirements or your code?
[14:12] benji: we were ignoring when it was in the jujuclient because we could not get errors out of it to the user
[14:12] benji: are these errors able to get to the user from the new location?
[14:12] frankban: ^
[14:17] rick_h_: all the bundle validation errors are immediately reported as a GUI notification
[14:17] frankban: ok cool then.
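Related to the 14:11-14:17 question above (ignore invalid constraints vs. report them): a minimal sketch of validating constraint names against a supported set and returning an error message that can be surfaced to the user. The supported list and the function name are assumptions, not the branch under review.

    # Sketch only: reject unsupported constraint names up front and return a
    # message suitable for reporting back to the user as a notification.
    SUPPORTED_CONSTRAINTS = frozenset(
        ['arch', 'cpu-cores', 'cpu-power', 'mem', 'root-disk'])

    def validate_constraints(constraints):
        """Return an error string for unsupported keys, or None if all good."""
        unsupported = sorted(set(constraints) - SUPPORTED_CONSTRAINTS)
        if unsupported:
            return 'unsupported constraints: {}'.format(', '.join(unsupported))
        return None

    # Example: validate_constraints({'mem': '2G', 'cpu-zones': '2'})
    # -> 'unsupported constraints: cpu-zones'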
[14:20] benji, rick_h_: I am not sure the GUI server is the right place for an extensive bundle validation (I also added an XXX), and we can change it later. however ISTM good to have at least a weak firewall before starting all the deployer/ProcessExecutor machinery
[14:21] frankban: yes, we've talked about trying to create a validation library re-used by all entry points for users. (charmtools for proof, deployer for a manual deploy of a bundle)
[14:21] frankban: so long term I think we'd piggyback off the deployer doing validation
[14:21] frankban: since it's already doing some very limited validating currently
[14:22] I'm QAing the branch now.
[14:23] rick_h_: such a bundle validation library could help also in quickstart
[14:23] frankban: +1
[14:24] rick_h_: yes, please
[14:24] frankban: replied with a couple of comments, I didn't see the versions update in the hard coded places in the utils.py ?
[14:27] frankban: ignore me, I did see it. :)
[14:27] even commented on it. /me sips more coffee
[14:33] heh, thanks rick_h_, re deployer version, is it ok if instead I add a comment in the hardcoded versions in utils, also mentioning the source branch for that sdist?
[14:33] frankban: yea, that's cool. Just something so we remember it's our fork
[14:34] this morning I found out that you can stop g+ from showing posts from a community but still be part of the community
[14:34] frankban: since it's not obvious looking at things
[14:34] best feature ever
[14:34] rick_h_: great, and good point
[14:53] wooo, it's working!
[14:54] will check a failure here in a second.
[15:01] hehe oh Chrome
[15:01] "C:\fakepath\maarten.yaml"
[15:01] that's one pretty crazy fakepath :D
[15:02] gary_poster: so the only QA thing is that the two services were stacked on each other. I thought mysql wasn't deployed. :/
[15:02] frankban: got the error when I had invalid constraints, deploy worked on jcastro's bundle
[15:02] gary_poster: for when you are back, I think we are hitting this: http://bugs.python.org/issue1692335
[15:03] rick_h_: http://i.imgur.com/1skzJ3u.gif
[15:03] lol
[15:04] benji: did you also get the layout issue in qa?
[15:04] rick_h_: I had some non-branch related QA issues; redeploying the charm now
[15:04] benji: ah ok
[15:05] benji: do me a favor and don't touch the layout as it comes up then and see if you get the same stacked service issue. I moved it before the relations were set so not sure if it'd do a redraw or anything later in the process
[15:05] rick_h_: k
[15:06] frankban: ok, LGTM'd with qa ok and notes on the qa process.
[15:07] cool
[15:18] rick_h_: the services didn't pile up, but I think the canvas scrolled to make it look like they did
[15:20] benji: hmm, ok.
[15:20] ok, so time to release the hounds?
[15:22] gary_poster: confirmed the problem is http://bugs.python.org/issue1692335 . easily fixable in more than one way, let's have a quick call when you are back
[15:24] rick_h_: we should be able to add a quick fix to the charm/deployer error handling before releasing the charm. AFAIK the GUI is ready to be released
[15:25] frankban: k
[15:26] uh-oh, searching on charmworld for '~bac' makes for some unpleasantness
[15:26] lol
[15:26] oops
[15:27] bac: requested the log file from webops and we'll see what the traceback is
[15:27] although should be able to dupe locally I guess
[15:28] rick_h_: yeah. i'll look at it. we'll probably get hate mail from curtis soon.
[15:33] rick_h_: my favorite part of the traceback: Encountered " "~bac "
[15:34] who writes an error message like that?
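On the '~bac' search breakage above: the traceback fragment suggests the raw query string reaches a search query parser that treats characters like '~' as operators. One common mitigation is escaping reserved characters before building the query; the sketch below is a hypothetical illustration of that idea, and the reserved-character set and function are assumptions, not charmworld's actual search code.

    # Hypothetical sketch: escape characters that many full-text query parsers
    # treat as operators before passing user input through as a query string.
    import re

    SEARCH_RESERVED = r'+-&|!(){}[]^"~*?:\/'

    def escape_search_terms(terms):
        """Backslash-escape reserved query-syntax characters in user input."""
        return re.sub(
            '([{}])'.format(re.escape(SEARCH_RESERVED)), r'\\\1', terms)

    # Example: escape_search_terms('~bac') -> '\\~bac'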
[15:34] bac: heh, not sure. I haven't gotten that far yet
[15:35] bac: I got https://pastebin.canonical.com/100094/ from webops. I don't see the search error, but seeing ingest errors, but things are working so confused.
[15:36] bac: if you get a sec can you update your bundle and push and make sure it's getting ingested?
[15:36] heartbeat isn't hanging, but wtf is with the connection error in the logs?
[15:37] connection refused
[15:37] yea, so it's not working. The url isn't good.
[15:37] port 2464?
[15:37] i thought you had a different port for production
[15:38] yea, that's not legit. That's the wrong port. ok, hitting up webops
[15:38] bac: have that RT from the deploy?
[15:38] bac: just the # for it
[15:39] it's in the card
[15:39] * bac looks
[15:39] 65750
[15:39] bac: ah cool, I'll look then. I'll work with webops. If you can update your bundle as our canary in the coal mine it'd be great
[15:40] oh, sure
[15:40] the "parse string constraints into a dict in deployer" card is not correct -- right? -- and can be deleted
[15:43] frankban, great. makes sense. want to call?
[15:44] gary_poster: yes thanks
[15:45] gary_poster: https://plus.google.com/hangouts/_/7ecpir0k4g087sfq5aj9gfos3o?hl=en
[15:49] ugh fail
[15:50] jujugui call in 10
[15:51] for all those people who hunger for fewer clicks when attending the standup, I present your salvation: http://tinyurl.com/gui-hangout-2
[15:51] benji: card was in the branch frankban just merged. So it's good and can go away
[15:52] rick_h_: well, I suppose that's one way to look at it, but we didn't do what the card says needs to be done (as best I can tell)
[15:54] benji: ok, well something did it. maybe I'm missing the point, but strings go in, dicts go out to go.
[15:54] twitter up by $20 since opening. this will be a "successful" ipo even though the company left all that money on the table.
[15:55] man I wish I could start a company that's losing money hand over fist and sell it for billions
[15:57] yeah, that second step is the challeng
[15:57] s/challeng/challenge/
[15:57] I think the steps are
[15:57] develop product
[15:57] hype hype hype hype
[15:57] ipo
[15:57] hype > profits
[15:58] twitter was worse. develop awful, crashy product. hype. crash more. hype. hire some decent engineers. hype. profit.
[15:59] jujugui call in 1
[15:59] lol
[16:16] jcsackett: hey you around?
[16:21] Running to get prescription. Bringing Air with, there's a coffee shop if it takes a while.
[16:22] rick_h_: the reason there are no tests in my branch is because it's already tested by other tests
[16:22] just fyi
[16:23] hatch: a live test in a real env deploying services and ingested bundles could really help. the development charm URL is cs:~juju-gui/precise/juju-gui
[16:23] frankban: ok cool - are we going to be doing a charm release alongside the gui release then?
[16:23] hatch: so basically it is juju deploy + expose the charm above, and "juju set juju-gui juju-gui-source=lp:juju-gui"
[16:24] hatch: yes
[16:24] ok cool
[16:24] hatch: we always have to do that, because GUI releases are now included in the charm
[16:24] hatch: i am.
[16:24] hatch: hah, ok. Heading that one off at the pass
[16:24] that's how I did the demo at the conf on Tuesday - worked well then haha
[16:25] hatch: next demo use quickstart ;-)
[16:25] jcsackett: so the apache front end has stalled, mostly because I don't have enough time
[16:25] that's what gary said haha
[16:25] hatch: dig.
[16:25] :-)
[16:25] hatch: i'm happy to explore that.
[16:25] jcsackett: tbh I'm more interested in a nginx front
[16:25] but I don't know if we have an nginx charm....
[16:26] well not a promoted one anyways
[16:26] damn chunk mismatch
[16:26] hatch: yeah, doesn't look like there's a reviewed one of those.
[16:27] but we could have both so nothing saying we couldn't do the apache one first then do nginx after
[16:27] hatch: i'm interested in apache b/c we *do* have a reviewed one of those, so that's a better story.
[16:27] right
[16:27] I'm going to be pretty swamped with life stuff at least until next weekend
[16:27] hatch: indeed, i totally think we can have both. there's an issue for supporting both.
[16:27] oh?
[16:27] yeah, this weekend is sort of booked for me, but i have some evenings free.
[16:28] hatch: https://github.com/hatched/ghost-charm/issues/11
[16:28] ohhh 'issue' meaning 'ticket'
[16:28] I thought you meant there was a problem with supporting both :)
[16:28] hatch: come on man, you wrote a new module. How can 'there are already tests for this'?
[16:29] because the import code is already tested, all I did was move it out
[16:29] I could repurpose the tests to point to the module directly
[16:29] if you'd like
[16:29] hatch: no, just that the desired thing is already captured.
[16:30] is there a juju story for deploying services to a single machine then splitting them off that machine later?
[16:30] so we could have a bundle to deploy apache/ghost/mysql to a single machine then give them the ability to break one out if needed
[16:32] maybe the story there is simply to deploy another mysql/apache/ghost and then change the relationship
[16:33] gary_poster: do you have a hangout?
[16:33] luca, alejandraobregon what hangout?
[16:33] :-)
[16:34] luca https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.t3m5giuddiv9epub48d9skdaso
[16:36] jcsackett: do you have a live site up with the ghost charm yet?
[16:37] * bac lunches
[16:37] hatch: no, but only for lack of time.
[16:38] frankban: so I `juju deploy cs:~juj-gui/precise/juju-gui` and it's taking forever to get back to the console...
[16:38] is this normal?
[16:39] I guess ec2 could be taking forever to respond
[16:39] yeah juju status is hanging too
[16:39] hatch: yea, welcome to the real cloud
[16:40] glad this didn't happen during my demo, that would have sucked haha
[16:40] hatch: it'll hang until bootstrapping is done and a machine is brought up/setup
[16:42] hatch: ok, one qa feedback on your branch
[16:42] """ERROR cannot log in to admin database: unauthorized mongo access: auth fails"""
[16:42] that's an odd one
[16:42] hatch: :(
[16:43] rick_h_: re the merge files - for some reason the parser does not pick up those deps, I have it on my list to fix that parser but that's been on the list for 6 months :)
[16:44] hatch: rgr
[16:44] It's using the loader to generate the deps
[16:44] so it SHOULD pick those up
[16:44] but who knows
[16:45] hatch: all good, the only thing I'd love to see changed pre-land is the titles for tooltips
[16:45] I've spent all of 20s looking at it
[16:45] yeah that's a good idea
[16:46] oh rick_h_ the solution to that error message was to use `sudo` ...
[16:46] I think I may have a bug on my system
[17:10] rick_h_: what should I write in the tests requirements.pip to make it install the stuff in deps?
[17:11] frankban: it's in the pip command not in the requirements.txt file
[17:11] err, requirements.pip
[17:11] frankban: it's that snippet I linked from charmworld's makefile
[17:11] we already have --find-links
[17:11] rick_h_: so, just the name of the package in the requirements
[17:11] frankban: right, but it'll still try to go online if it doesn't see it, or if the version is a > XXX and such
[17:12] frankban: the name==version
[17:12] rick_h_: cool, I'll try
[17:12] frankban: --no-index --no-dependencies are the other two options to force it to use offline-only cache
[17:13] rick_h_: I don't think we want offline only cache. we want all the other packages (e.g. mock, selenium) to be retrieved from pypi
[17:14] nananananananananananananaannanana qa man.....qa man......qa man
[17:14] frankban: sure, but you were talking of splitting those into a different file and could be a different pip install command?
[17:14] frankban: but maybe that's step #2 I guess
[17:15] frankban: I just know that if you don't go full-offline-this-directory-only-dammit mode eventually you'll hit an inconsistent version between one and another
[17:16] rick_h_: yeah, what if the dependency is unversioned? who takes precedence?
[17:17] frankban: if it's not versioned then that's when it'll go to the net anyway. pip isn't really great at this 'no network' idea
[17:18] frankban: if I recall correctly that is.
[17:18] rick_h_: hum? ok then, and FYI, we are going to also have a customized jujuclient tarball in the charm
[17:18] frankban: so we used pip freeze to generate an initial list of deps + versions
[17:19] man our app rocks!
[17:19] hatch: lol
[17:20] yea, it's fun to see how excited martin is getting with this feature. A bit scary how everyone's handing out comingsoon urls to potential customers :/ but cool
[17:20] yeah - mark did that in his keynote haha
[17:21] maybe we should put a warning on comingsoon :)
[17:31] abentley: ping, working on an upgrade to the charm on production and jjo and I are trying to figure out if the lp_credentials change will bite us in any way. I see it's a config value, and a file is written, but not seeing anyone using that file?
[17:32] rick_h_: lp_credentials should be in use. Looking...
[17:36] jcsackett: do you know about that? ^ I see it was your branch mentioning a job that needs it but grepping for lp_cred or charmbot in charmworld brings up no hits
[17:36] gary_poster: https://bugs.launchpad.net/juju-gui/+bug/1249026 https://bugs.launchpad.net/juju-gui/+bug/1249028 and https://bugs.launchpad.net/juju-gui/+bug/1249030
[17:36] <_mup_> Bug #1249026: On a real environment destroying a pending service throws console error
[17:36] <_mup_> Bug #1249028: Hovering over the service with your mouse should turn the cursor to a pointer
[17:36] <_mup_> Bug #1249030: After destroying a related service the GUI still shows the relation.
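On the 17:11-17:18 pip discussion above (a local deps directory plus pinned versions, so installs do not silently fall back to the network for an unpinned package): a small hypothetical helper, not part of the charm, that flags requirements that are not pinned as name==version.

    # Sketch: report requirement lines that are not pinned to an exact version.
    # The requirements file name used in the example is an assumption.
    import re
    import sys

    def unpinned(path):
        """Return the requirement lines that are not pinned as name==version."""
        bad = []
        with open(path) as requirements:
            for line in requirements:
                line = line.strip()
                if not line or line.startswith('#') or line.startswith('-'):
                    continue  # skip blanks, comments and pip options
                if not re.match(r'^[A-Za-z0-9._-]+==[A-Za-z0-9.+-]+$', line):
                    bad.append(line)
        return bad

    if __name__ == '__main__':
        problems = unpinned('requirements.pip')
        if problems:
            sys.exit('unpinned requirements: {}'.format(', '.join(problems)))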
[17:38] on call will see soon
[17:40] sure np https://bugs.launchpad.net/juju-gui/+bug/1249033
[17:40] <_mup_> Bug #1249033: Long charm names cause the related charms layout to break in fullscreen
[17:47] https://bugs.launchpad.net/juju-gui/+bug/1249039
[17:47] <_mup_> Bug #1249039: Exporting from real environment exports juju-gui as well
[17:51] https://bugs.launchpad.net/juju-gui/+bug/1249042
[17:51] <_mup_> Bug #1249042: Destroying a service doesn't give any indication that the machine is sticking around
[17:51] hatch: that one is a long running juju bug/discussion
[17:51] rick_h_: the gui bug is that we don't tell the user
[17:51] so if they are a gui user then they will never know
[17:52] we should say 'hey btw this machine is still there'
[17:52] but feel free to add input to the ticket
[17:52] *cough* machine view *cough* but yea, understood
[17:52] I'm just creating them - there may be duplicates
[17:52] :)
[17:56] ok looking now
[17:59] cool
[17:59] hatch ack thanks. Please let me know what you think about the following. #1 sounds nasty but I suspect it is longstanding and not a release blocker. #2 Arguable (since you can drag the service) and not a blocker. #3 same as #1. #4 same as #1 but sounds easy to fix. :-P. #5 uh...working as designed? :-) not a blocker IMO. #6 yeah that's on the "wouldn't it be nice to fix this" tasklist
[17:59] So, so far, either known or non-blockers. Agree?
[18:00] rick_h_: the code that rolled back uses that file.
[18:00] jcsackett: ah!
[18:00] yep I wouldn't consider any of these blockers unless of course someone already has an idea on how to fix #1/3
[18:00] the presence of that file however shouldn't cause any problems.
[18:00] yeah
[18:00] jcsackett: ok, so it's safe for me to deploy the charm then it sounds like
[18:00] it's been running on staging safely.
[18:00] absent any code touching it.
[18:00] jcsackett: awesome
[18:00] jcsackett: so the charm was updated on staging just not production?
[18:00] I do like that I can go `juju destroy-machine 1 2 3 4 5 6 7` :D
[18:01] rick_h_: correct, as far as i knwo.
[18:01] s/knwo/know/
[18:01] jcsackett: cool, thanks
[18:01] gary_poster: I'm just going to run a real bundle import on a real env then It should be QAOK
[18:01] awesome! thanks hatch
[18:02] I need a quick lunch (and to decompress :-P) and then will be right back
[18:02] :)
[18:12] Makyo: are you around?
[18:12] Yis
[18:13] cool ok quick q
[18:13] I exported a bundle from my real env
[18:13] deleted the gui from it
[18:13] then re-imported it (after emptying out the old stuff)
[18:13] both services have x/y coords
[18:13] but they were placed on top of each other
[18:13] exactly on top of each other
[18:13] so you couldn't even tell they were there
[18:14] https://gist.github.com/hatched/7359271
[18:14] here is the resulting bundle that was imported
[18:14] is there anything wrong with that bundle?
[18:14] so my question is....is this a bundle import or export issue :)
[18:16] Works for me?
[18:16] this is on a real env...
[18:16] I can destroy and try again see if I can reproduce
[18:16] hatch: I had this happen with one of jcastro's bundles in a real env as well. wordpress and mysql were stacked and couldn't see both
[18:16] ok so it's definitely a bug thanks
[18:16] Import, then.
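For context on the stacked-services import bug above: the exported bundle in the gist is not reproduced here, but as a rough illustration of what the import side needs to do with saved placement data, the sketch below re-applies per-service position annotations after the services exist. The bundle layout and the 'gui-x'/'gui-y' key names are assumptions based on the discussion, not a confirmed schema.

    # Rough sketch: after importing a bundle, re-apply any saved placement so
    # the services do not all land at the default position (stacked).
    def apply_placements(bundle, set_position):
        """Call set_position(service_name, x, y) for each annotated service."""
        for name, service in bundle.get('services', {}).items():
            annotations = service.get('annotations', {})
            if 'gui-x' in annotations and 'gui-y' in annotations:
                set_position(
                    name,
                    float(annotations['gui-x']),
                    float(annotations['gui-y']))

    def show(name, x, y):
        print('{} -> ({}, {})'.format(name, x, y))

    sample = {
        'services': {
            'wordpress': {'annotations': {'gui-x': '600', 'gui-y': '200'}},
            'mysql': {'annotations': {'gui-x': '300', 'gui-y': '400'}},
        },
    }
    apply_placements(sample, show)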
[18:17] ok cool
[18:17] will file
[18:20] gary_poster: one more https://bugs.launchpad.net/juju-gui/+bug/1249051 also not a blocker
[18:20] <_mup_> Bug #1249051: Importing a bundle stacks services regardless of x/y annotations on real env
[18:20] all-in-all I think this is a pretty darn stable release
[18:23] that's not ideal, but we can live with it for now
[18:23] thank you very much hatch
[18:25] no problemo
[18:25] guihelp, python reviewers: I need one review + QA for a really really nice to have for the release: https://codereview.appspot.com/23000043
[18:26] rick_h_, you happen to be available ^^^? about to have my call with benji
[18:26] or bac?
[18:26] gary_poster: working with webops on charm upgrade atm. I can look in a few if all goes well
[18:26] ok thank you rick_h_
[18:27] rick_h_: thanks
[18:30] frankban: i can now
[18:30] rick_h_: ^^
[18:30] bac: thanks, mjc upgrade is in process and wheee
[18:31] i don't know what that means
[18:31] bac: upgrading the charm for prodstack mjc to fix the port/ini issue
[18:31] bac: so thanks for looking at frankban's branch
[18:33] abentley: jcsackett so charm is upgraded. Because that's a new config there's some hackery needed to get the initial value going.
[18:34] abentley: jcsackett just a heads up for when that code comes back and things need to be updated
[18:37] stepping out for lunch, bbl
[18:38] frankban: code looks good. QA now
[18:39] bac: great thanks
[18:41] rick_h_: what hackery? the default value should be an empty string, which rights out to an empty file.
[18:41] s/rights/writes/
[18:42] jcsackett: so the first time it runs the default doesn't get written I guess? If another charm deploy happens between now/then it should be ok I think
[18:42] jcsackett: I was pointed at #1244841
[18:42] <_mup_> Bug #1244841: support atomic upgrade-charm --config var=val ...
[18:42] rick_h_: so just no file gets written?
[18:42] jcsackett: right, my understanding is that right now the file doesn't exist
[18:43] huh.
[18:43] jcsackett: I'll verify when I check the log in a sec
[18:43] jcsackett: but it's a limitation of juju upgrading a charm when the upgrade contains a config value that didn't exist before is my understanding
[18:43] well, if it's going to do something unexpected, "nothing" is one of the better options. :p
[18:43] :)
[18:44] rick_h_: oh, i see; this would have been hidden on staging b/c we did a juju set right after to load the credentials.
[18:44] and we haven't updated the IS docs b/c right now there's no need to set that option/write that file.
[18:45] jcsackett: hmm ok, well that didn't come up as an option in talking. I'm not following 100% I guess. Anyway, heads up there be dragons I guess.
[18:45] rick_h_: there be no dragons.
[18:45] ok, then awesomeness
[18:46] to get the file to write out, you need to do `juju set charmworld lp_credentials=$BIG_DAMN_STRING`
[18:46] and then on config changed it'll write it out.
[18:47] but, like i said, IS documentation isn't updated b/c the failure mode is do nothing. so actually, this is everything running as expected. :-)
[18:59] gary_poster: charm is all set, charmworld all good. What's next for release I should look at?
[19:02] rick_h_, ack 1 sec
[19:03] rick_h_, I've been unable to make changelog because of calls etc. you could take that over. docs/process.rst has a recommended way to generate for a release. This is what I have so far:
[19:05] rick_h_, http://pastebin.ubuntu.com/6377908/
[19:05] gary_poster: rgr, looking
[19:06] rick_h_, thank you. card is in maintenance but you can drag down to bundle and claim
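A rough sketch of the config-changed behaviour described in the 18:41-18:47 lp_credentials exchange above: only write the credentials file when the value has been set via `juju set charmworld lp_credentials=...`, so the default empty string means the hook does nothing. config-get is the standard juju hook tool; the file path and function name are assumptions, not the charm's actual code.

    # Hypothetical config-changed hook fragment. The target path is assumed.
    import subprocess

    CREDENTIALS_PATH = '/etc/charmworld/lp-credentials.txt'  # hypothetical

    def write_lp_credentials():
        credentials = subprocess.check_output(
            ['config-get', 'lp_credentials']).decode('utf-8').strip()
        if not credentials:
            # Default (empty) value: leave the filesystem untouched.
            return
        with open(CREDENTIALS_PATH, 'w') as cred_file:
            cred_file.write(credentials)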
[19:06] bac, in hangout
[19:15] rick_h_: what rev did you release for production charmworld? cleaning up our kanban.
[19:15] jcsackett: 84
[19:15] jcsackett: oh, charmworld was 450
[19:20] frankban, you are still here! are you landing "Improve bundle deployment error handling" or should we?
[19:21] gary_poster: waiting for bac's QA, and then I'll land it
[19:21] oh cool. thank yo ufr
[19:21] thank you frankban, that is ;-)
[19:21] tab completion on ufr did not work as expected
[19:22] :-)
[19:24] frankban: done. all good.
[19:25] bac: thanks, landing
[19:42] man our traffic is nuts
[19:44] done, have a nice evening, and a nice release!
[19:44] thanks frankban !
[19:49] rick_h_, we are ready for release, and you have the charm in. you doing it, or shall we look for other volunteers?
[19:49] * hatch hides
[19:49] gary_poster: I'm working through the gui release notes.
[19:49] awesome thanks rick_h_ !
[19:49] gary_poster: I've got the changelog, tests, xz and working on how to open it up to qa it
[19:49] *phew*
[19:49] I see you hatch! :-)
[19:49] gary_poster: if someone wants to do the charm +1 from me :)
[19:50] * rick_h_ looks at hatch
[19:50] are there release notes? I've never done the charm
[19:50] rick_h_, ack, we do that after the tarball release though
[19:50] gary_poster: rgr, ok. know off the top of your head the tar flags for xz files?
[19:50] ah, -J
[19:50] rick_h_, use tar xvaf
[19:50] rick_h_: https://xkcd.com/1168/
[19:51] rick_h_, a means "use the file extension to figure it out, buster"
[19:51] lol
[19:51] gary_poster: ah, good stuff
[19:51] gary_poster: ok, plodding through notes. Man this is a looong list :)
[19:51] rick_h_, :-) on the bright side a lot of the qa has been done
[19:51] rick_h_: I found using the trunk's commit notes to build the changelog
[19:52] is the best way
[19:52] hatch: yea, got that part. Wasn't that bad since it was only around 40 commits (thought it'd be bigger/longer)
[19:52] and most summarize as "Welcome to bundles"
[19:52] if it was on git it would be :) haha
[19:53] so I'm up for whatever right now...if anyone has any tasks they need completed
[19:57] oh bah, the charm is part of this. doh!
[19:59] you can pass that off rick_h_ . :-)
[19:59] hatch, I suggest tackling one of the bugs you found, or working on one of https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0AtC9etoysSQldDQxMVdmTDB4dm1XXzA0NFlLSUQ4Mmc#gid=0
[19:59] * rick_h_ looks at how far this list goes
[19:59] preferring ones near the top :-)
[20:00] gary_poster: alrighty
[20:00] gary_poster: got it, should go through this at some point and looks like I'm mostly through it so far
[20:00] rick_h_, yeah you are getting near-ish to the end. :-)
[20:00] We might be able to remove some of the more paranoid bits at some point
[20:09] running charm tests failed due to dep building. Missing apt-pkg/hashes.h file. Looking for missing system dep now
[20:10] that's an odd one
[20:10] I've never seen that before
[20:10] looks like libapt-pkg-dev
[20:11] re-running make lint from a clean charm env
[20:12] hmm, that's in the sysdeps make target. So it's a new step in the process.
[20:13] ok, take 3
[20:14] third time is the.....charm
[20:14] ...see what I did there...
[20:14] sorry, wasn't looking. What was that again?
[20:14] lol u suck
[20:15] so after next week's talk on Go/JS I can get started on learning Python well enough to contribute - any good resources for learning about the ecosystem?
[20:16] go to pycon, it's in CA :P
[20:16] California?
[20:16] ohh Canada
[20:16] Montreal
[20:16] nope, bit more north
[20:16] wow...it's 8 days long
[20:17] make that 9
[20:17] well tutorials, conference, and sprints. I go for the conference and sprints myself
[20:17] ok, make test now in progress
[20:18] bah, now it's making fun of my environments.yaml file
[20:18] so they make you sign up for an account before telling you how much it is.....fail
[20:19] wtf, it's complaining about a different environment
[20:19] oh I was looking at the wrong place
[20:19] * hatch failed
[20:21] yea, reminds me I need to sign up asap. Guess I'll be doing that tonight
[20:21] flights to Montreal cost me the same as it does to fly to SFO haha
[20:29] gary_poster: crap, getting functional test failures
[20:29] oh man, it wants FF in my lxc?
[20:30] grrrr
[20:30] * rick_h_ sees what happens when you try to run firefox in a headless lxc
[20:30] unicorns happen
[20:33] rick_h_, so you are addressing? If not, and it is too difficult, we can find other options
[20:33] gary_poster: working on it
[20:33] k
[20:33] gary_poster: installing firefox now, pulling all of X and loveliness so taking a few
[20:33] heh
[20:33] k
[20:33] will have to restart functional tests, destroying old env now
[20:38] now I know why hatch ducked for cover :P
[20:38] haha
[20:38] I've never actually done a charm release
[20:56] still running functional tests, running on test_branch_source forever
[20:56] or 10minish
[20:56] if forever is a bit vague :)
[20:57] hmm.....delete 100+ LOC, no test failures......win?
[20:57] heh, that's what we call 'test coverage time'
[20:58] ohh the ones which would have failed were already skipped
[20:58] *phew*
[20:58] I was a little scared there haha
[20:58] python /home/rharding/bin/juju-test --timeout=120m -v -e ec2 --upload-tools 20-functional.test
[20:58] 120minutes?!
[20:58] just in case lol
[20:58] well, 20/120 I guess
[21:00] this is the last step before I can merge/push stuff! grrr go go go ec2 go
[21:04] rick_h_, if you have to go, and you have manually qa'd the charm with the new release, I would be fine with you making the release. Francesco ran the tests earlier today. All you've done is change the tarball.
[21:09] gary_poster: working on it. Didn't try to deploy it as a local charm yet since I put it all in a tmp dir and I can't figure out how to get juju to deploy it unless it's in a directory named precise.
[21:10] rick_h_, make deploy.
[21:10] gary_poster: do you want me to mark tasks complete somehow on this spreadsheet?
[21:10] oh duh
[21:10] rick_h_, http://jujugui.wordpress.com/2013/10/15/if-you-want-to-run-a-custom-gui/
[21:11] except all you have to do is make deploy in this case :-)
[21:11] hatch, mark as Done in spreadsheet
[21:11] cool
[21:11] in status column hatch
[21:11] gary_poster: trying, getting an error out of make deploy about username/pass/project-name.
[21:12] rick_h_, :-( worked for me last week
[21:12] gary_poster: ve to do is make deploy in this case :-)
[21:12] bah
[21:12] bad paste
[21:12] rick_h_, you can push charm to ~juju-gui
[21:12] http://paste.mitechie.com/show/1068/
[21:12] rick_h_, then test
[21:12] rick_h_, and if that is ok then merge to ~juju-gui-deployers
[21:13] does that make sense?
[21:13] woah lbox just exploded on me...
[21:13] gary_poster: yea, makes some sense. I'd like to not have this make deploy error before I go pushing.
[21:14] rick_h_, never seen :-(
[21:14] that traceback I mean
[21:15] rick_h_, I will try locally
[21:15] gary_poster: rgr
[21:16] rick_h_, do you want to push up your charm somewhere or if I test juju-gui trunk is that sufficient?
[21:16] gary_poster: I'll push it up to my home I guess. I'd be curious to start with trunk
[21:16] has anyone seen this when lboxing? sphinx error? https://gist.github.com/hatched/ad0695b48078503825f7
[21:18] hatch: no, but something about getting version from release? Is that our own code?
[21:18] fresh trunk with js and handlebars template removal
[21:19] rick_h_, running make deploy from ~juju-gui trunk
[21:19] bah, can I not push this without it going into the store and ingested? /me tries to recall how +junk worked
[21:19] I think bzr just blew up? np?
[21:19] no
[21:19] juju status
[21:20] gary_poster: was that to me?
[21:20] rick_h_, you push it but don't call it trunk and it will not be ingested
[21:20] hatch no sorry
[21:20] gary_poster: ah, ok.
[21:23] hatch dupe on trunk. investigating. I think it has to do with CHANGES file...
[21:23] gary_poster: sorry landing a fix now
[21:23] I think
[21:23] gary_poster: pushed it to lp:~rharding/+junk/charmers-trunk as it is now. Tests are at 42min run time.
[21:23] it's rick_h_'s fault
[21:23] :P
[21:23] oh cool hatch thanks. fix in CHANGES, yeah?
[21:23] bah, did I mess up the changes? i tried to dupe the system
[21:23] gary_poster: yeah the 'unreleased' was missing the trailing :
[21:24] at least I THINK that's the issue
[21:24] hatch sounds right thanks
[21:24] ah, crap. thanks hatch. Sorry you hit that
[21:24] pretty fragile system
[21:24] rick_h_: ahh no problem :)
[21:24] at least I could do enough python to debug it haha
[21:25] rick_h_: If you have any ideas on how we can streamline our release process I'd be interested to hear/implement them
[21:26] hatch: taking notes now on the things I've hit along the way. As for streamlining, just trying to get it to work once before I do that. :)
[21:26] :)
[21:28] gary_poster: rick_h_ fyi the missing : was the culprit
[21:28] hatch: haha, with one character I can bring down your whole branch!
[21:28] * rick_h_ pockets that away
[21:28] lol - maybe that means that we should require an lbox on the 'CHANGES' changes ;)
[21:29] because that got into trunk because of the step that says 'push to trunk'
[21:29] we effectively do IIRC
[21:29] it's that the deployment instructions do not run lbox
[21:29] https://codereview.appspot.com/23240043/ if anyone is avail for a quick review/qa
[21:29] right
[21:29] rick_h_, I have trunk up with make deploy
[21:30] trying
[21:30] gary_poster: :( ok I guess. I can't find any reference to "project-name" that was in my traceback.
[21:30] rick_h_, ok. I will kill this one and merge your branch and do it again...
[21:31] gary_poster: k
[21:31] test run almost hitting an hour. Is that normal from previous deploys?
[21:32] rick_h_, yes, think so
[21:32] ok
[21:32] charms are painful beasts
[21:32] to test
[21:32] yea, understood. /me does a s/test_/wait_ so it feels more natural :)
[21:32] :-)
[21:35] hatch, on it
[21:40] hatch, uh, I'm struggling to figure out what to do to qa. :-P
[21:40] the app seems to work? :-)
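For context on the "missing trailing :" failure above: a hedged sketch of the kind of naive version extraction that breaks this way. The header format shown is an assumption for illustration, not the project's actual CHANGES layout or its release tooling.

    # Hypothetical illustration of why a missing trailing colon on the
    # "unreleased" header line can break naive version extraction.
    def extract_version(changes_text):
        """Return the version from a header line like '0.13.0 (unreleased):'."""
        first_line = changes_text.splitlines()[0].strip()
        if not first_line.endswith(':'):
            raise ValueError(
                'malformed CHANGES header (missing trailing colon): %r'
                % first_line)
        return first_line.rstrip(':').split()[0]

    print(extract_version('0.13.0 (unreleased):\n* Welcome to bundles.\n'))
    # -> 0.13.0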
[21:40] gary_poster: deploy two services with the same name
[21:41] you should get a notification
[21:41] sorry :)
[21:41] oh ok
[21:41] I found if I removed too much - notifications would stop happening :)
[21:41] Morning
[21:41] mornin
[21:41] morning
[21:41] hatch QAOK
[21:42] thanks
[21:42] rick_h_, site is up; trying
[21:43] rick_h_, WFM :-)
[21:43] rick_h_, ship it (~juju-gui and ~juju-gui-charmers)
[21:43] rick_h_, https://ec2-54-205-201-186.compute-1.amazonaws.com/
[21:43] gary_poster: ok, cool then
[21:44] gary_poster: I'm happy to have our calls anytime from now.
[21:44] thank you huwshimi! maybe in a few minutes, then? would be good
[21:44] gary_poster: It looks like we've both changed daylight savings recently...
[21:44] gary_poster: Sure!
[21:44] heh, that's right I forgot
[21:45] cool
[21:46] shall I join or is this just 1:1?
[21:46] ok, both branches of the charm pushed, waiting to check on the new revs hitting
[21:51] awesome thanks rick_h_ . I have a blog post and email waiting in the wings.
[21:51] gary_poster: rgr
[21:52] gary_poster: http://paste.mitechie.com/show/1069/ are my notes for the deploy steps I hit. Done for today, but will look at tweaking those as follow up
[21:52] Are we releasing now!?
[21:52] huwshimi: yes
[21:52] woah! Nice.
[21:52] huwshimi: this way - if it breaks you can fix it while we are out drinking
[21:52] mohohahaha
[21:52] Boo
[21:53] :-)
[21:53] hatch: It's my Friday night before it's yours :P
[21:53] :-)
[21:53] darn....
[21:53] thank you rick_h_ !
[21:54] +1 internets to rick_h_
[21:56] thanks for the patience. Waiting for ingest at top of the hour and then running away
[21:56] ah cool
[21:57] huwshimi, hatch, https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.dd77sn7kjl6unba21lutdr0p70 when ready?
[21:57] gary_poster: on way
[22:05] ok, ingested https://jujucharms.com/precise/juju-gui-78/
[22:05] night all
[22:10] congrats! night!
[22:11] rick_h_: Night Rick, thanks!
[22:30] woot! and with the new juju fixing lxc deployed a bundle from the new charm in lxc. only issue is that stacked service block problem yay!
[22:31] <3 having lxc powers back
=== gary_poster is now known as gary_poster|away