[00:44] <rick_h_> gary_poster: around?
[00:44] <rick_h_> or do I start the email
[00:50] <rick_h_> ok, well email replied to 
[01:20] <bac> rick_h_: you still around?  i see  heartbeat is happy now.  thanks for getting it up again!
[01:20] <bac> rick_h_: i read the backlog on webops but i'm not sure what happened
[01:25] <bac> rick_h_: review done
[01:30] <rick_h_> bac: thanks
[01:31] <rick_h_> bac: yea, there was confusion. It was changing that url in the charmtools + restarting supervisord that runs the ingest process
[01:31] <rick_h_> bac: but it took a while to find someone and to confirm there wasn't an issue with the version of charmtools on the server
[01:32] <rick_h_> bac: ok, I'm submitting the branch now. So in the morning we need to do another deploy to the rev after this lands (assuming the lander doesn't hate me)
[01:33] <rick_h_> bac:  the tricky part is that the production-overrides should be used to build a new config, but config changes don't happen all that often so cross your fingers on that part
[01:33] <bac> ok, i followed it all but that last part
[01:33] <rick_h_> bac: so the production_overrides.ini
[01:34] <rick_h_> bac: where the port number was added?
[01:34] <bac> rick_h_:  if it lands do i need to do anything but request a deploy?
[01:34] <rick_h_> bac: hopefully no. The cowboy will be erased by upgrading charmtools and the deploy should go smoothly
[01:35] <bac> rick_h_: thanks again for getting this figured out.  let's hope tomorrow goes smoother.
[01:35] <rick_h_> bac: rgr, we're all good. If we can get the charm stuff into shape we'll be good to land this baby and have a long good weekend :)
[01:35] <bac> rick_h_: rt.  good night.
[01:35] <rick_h_> thanks for getting that branch going. I'm going to go turn the brain off for the night. See ya tomorrow
[01:58] <gary_poster> thank you both rick_h_ and bac
[04:50] <hatch> evening
[04:55] <huwshimi> hatch: Hello
[04:55] <hatch> hows it going?
[04:55] <huwshimi> Good thanks, yourself?
[04:57] <hatch> good good
[04:58] <hatch> trying to fix my vm
[04:58] <hatch> doesn't look like parallels supports ubuntu 13.10 no matter what
[05:01] <huwshimi> Ouch
[05:01] <huwshimi> hatch: Dual boot?
[05:01] <hatch> I'll have to look into that on the mac mini
[05:01] <huwshimi> hatch: Buy a new computer? :P
[05:02] <hatch> haha - I willllll but I have pretty much convinced myself I need the 15" with the graphics card
[05:02] <huwshimi> heh
[05:02] <hatch> did you end up picking up a new one?
[05:02] <huwshimi> Yeah, well, it's in the post
[05:03] <huwshimi> Should arrive next week
[05:03] <hatch> cool what did you go with?
[05:03] <huwshimi> hatch: Air, i7, 8GB RAM, 256GB SSD
[05:04] <hatch> oh cool that's the other option I was thinking of going with
[05:04] <huwshimi> So couldn't just pick up in store
[05:04] <hatch> that thing should fly :)
[05:04] <huwshimi> Yeah, hopefully
[05:05] <hatch> well this thing has the 2.3 I5 with the intel 3000 and 16GB of ram
[05:05] <huwshimi> hatch: In one of the stores the guy was trying to convince me of the retina 13" MBP as for $100 more you get faster CPU plus retina, but it weighs 400 grams more.
[05:05] <huwshimi> hatch: Nice. How much?
[05:05] <hatch> no idea I bought it in 2011
[05:05] <hatch> haha
[05:05] <hatch> it only supports 8GB of ram
[05:06] <hatch> :)
[05:06] <huwshimi> Oh right, your current computer
[05:06] <hatch> yeah
[05:07] <huwshimi> hatch: How much would the new MBP cost?
[05:07] <hatch> so this thing -is- fast enough for everything i need it for, but no games
[05:07] <hatch> lots.....lol
[05:07] <huwshimi> heh
[05:07] <hatch> sec
[05:07] <hatch> $2600+tax
[05:08] <hatch> assuming I don't upgrade the cpu or storage
[05:08] <hatch> compared to the $1500 for the air (equipped like yours)
[05:08] <hatch> so the question is....do I need to game for that $1100 more? :D
[05:09] <huwshimi> hatch: haha
[05:09] <hatch> it would cost me that for a gaming rig
[05:09] <hatch> so really....
[05:09] <huwshimi> hatch: Or a console
[05:10] <hatch> true, but I can't take the console with me
[05:10] <hatch> at least not very easily
[05:10] <huwshimi> hatch: Although the CPU will mean running tests faster etc.
[05:10] <hatch> the single core speed isn't that much faster on the pro than the air
[05:10] <hatch> it just has 2 more cores
[05:10] <hatch> and our tests are single threaded right now
[05:11] <huwshimi> true
[05:11] <hatch> so it'll only be like 10% faster
[05:11] <huwshimi> hatch: But you're running VMs etc. as well
[05:11] <hatch> are you trying to convince me to buy the mbp?
[05:11] <hatch> you're a bad influence
[05:11] <huwshimi> haha
[05:12] <huwshimi> nope, I mean, I think, uh...
[05:13] <hatch> ok when you get yours you can be my test monkey
[05:14] <hatch> you can run the tests and maybe some games for me to see if they run :P
[05:14] <huwshimi> heh
[05:15] <huwshimi> hatch: I bought it through a local store which meant a saving of $140 vs the online Apple store, but it means I don't have an online tracker to refresh every five minutes
[05:15] <hatch> haha
[05:15] <hatch> small price to pay for the $140 discount :)
[05:15] <huwshimi> yeah
[05:16] <hatch> I'm going to try and hold out until a few weeks before the next sprint so I'll have some time to decide
[05:16] <huwshimi> I actually didn't realise you could get them cheaper than the Apple store
[05:16] <hatch> I didn't either
[05:16] <huwshimi> heh
[12:07] <rick_h_> morning
[12:13] <rick_h_> howdy frankban, did you tinker with benji's branch at all?
[12:21] <frankban> rick_h_: yes. I made some changes, trying now bootstrap + deploy
[12:21] <rick_h_> frankban: ok cool, in looking at his branch I think the missing thing is that we need to send a dict to the deployer call vs the string from sanitize_constraints
[12:22] <rick_h_> normally the deployer uses a string because it passes it to the cli as `juju deploy --constraints=""`
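The dict-vs-string distinction above (the CLI takes a constraints string, the deployer API call wants a dict) can be sketched in a few lines. The helper name and the space-separated key=value parsing are assumptions for illustration, not the project's actual code:

```python
def constraints_to_dict(constraints):
    """Sketch: turn a constraints string like "mem=2G cpu-cores=4" into
    the dict form a deployer API call would take, passing dicts through
    untouched. Assumes space-separated key=value pairs; real juju
    constraint strings may allow other separators.
    """
    if isinstance(constraints, dict):
        return constraints
    result = {}
    for pair in constraints.split():
        key, _, value = pair.partition('=')
        result[key] = value
    return result
```

So the CLI path keeps the string form (`--constraints="mem=2G"`), while the API side would receive `{'mem': '2G'}`.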
[12:22] <rick_h_> frankban: ok, let me know how it goes and what I can do to help. I'll stop messing with his branch for now then. 
[12:23] <rick_h_> and grab coffee...
[12:23] <frankban> rick_h_: I added that and moved the sanitize logic from base to views, so that, if a constraint is passed but not supported, we can deny the bundle deployment and send to the GUI the error message
[12:24] <frankban> I also created a new deployer tarball with the fix to the deploy() call, adding the missing parameter
[12:24] <frankban> rick_h_: ^^^
[12:36] <gary_poster> sounds great frankban
[12:36] <gary_poster> thank you
[12:38] <bac> gary_poster: staging has been updated with the changes rick landed last night.  it looks happy.
[12:39] <gary_poster> awesome bac.  thanks
[12:39] <bac> gary_poster: i shall now proceed with an RT to get production updated and less stupid
[12:39] <gary_poster> heh, great
[12:39] <bac> gary_poster: you're aware of our decision to temporarily fork charm-tools for expediency until we can remove it entirely?
[12:41] <gary_poster> yes, bac.  I'd like to talk more about the "remove it entirely decision" once the heat of the moment has passed and see what other options we can pursue, but yes, I know we have a three-stage story of "hack production" (last night), "hack the branch with a fork" (last night and today), and "make it better" (ASAP, such as next week).
[12:41] <rick_h_> frankban: awesome
[12:43] <bac> gary_poster: yeah, that's a more rational statement.
[12:44] <gary_poster> cool
[12:44] <rick_h_> rational? are we still at rational? :P
[12:44] <gary_poster> we have next week for that :-)
[12:52] <frankban> gary_poster, rick_h_: it seems to work, writing some missing tests and proposing
[12:52] <rick_h_> frankban: rgr, will look at the MP when it hits
[12:53] <gary_poster> awesome!
[12:59] <frankban> btw, new macbook arrived this morning, left ignored in its box
[12:59] <bac> rick_h_: the new deploy produced https://pastebin.canonical.com/100054/
[12:59] <rick_h_> frankban: we all appreciate the sacrifice
[12:59] <bac> i think it is not a problem but causes the webops to be concerned
[12:59] <rick_h_> bac: ?!
[12:59] <rick_h_> bac: yea, I'm looking for the hook to see what it's doing
[13:01] <bac> jujugui: post deploy charmworld is good: http://manage.jujucharms.com/heartbeat
[13:01] <rick_h_> bac: cool, waiting to see if the bundles get processed vs blocked
[13:02] <rick_h_> bac: yea, not sure on that migrate thing. I don't understand why it's doing that
[13:02] <bac> ok
[13:02] <rick_h_> bac: well, not sure what prepare-upgrade --init is 
[13:02] <bac> rick_h_: i'm making a card about the queueing of multiple revisions per sinzui's comments yesterday
[13:02] <rick_h_> ooh, let's see if my heartbeat auto reload worked
[13:03] <rick_h_> bac: ok, I think we're fine. In my testing last night we were around 8ish minutes to process the queue. 
[13:03] <rick_h_> so we're well under the 15min window for now
[13:03] <rick_h_> yay for auto reloading heartbeat page!
[13:03]  * rick_h_ is fascinated by the little things in life
[13:04] <benji> cool bac
[13:04] <rick_h_> gah, docstrings or bust 
[13:05] <rick_h_> bac: all 0 so yay!
[13:05] <bac> rick_h_: yes, not waiting to timeout due to egress rules speeds up the processing dramatically.  go us.
[13:05] <rick_h_> :)
[13:06] <rick_h_> bac: it's almost like...you called that one lol
[13:06] <bac> gary_poster: webops wants us to move staging off canonistack so it is more production-like
[13:06] <rick_h_> bac: I've got a card for that
[13:06] <gary_poster> bac ack on call
[13:06] <bac> oh good
[13:06] <rick_h_> bac: it'll need to be a dupe since we won't be able to auto upgrade it from CI
[13:07] <rick_h_> it'll be more a 'practice deploy' than anything from what I can tell
[13:07] <bac> oh, ok
[13:07] <bac> dearlordletitwork.jujucharms.com
[13:08] <rick_h_> lol
[13:32] <rick_h_> bac: oh, that whole output was just to test if the prepare-upgrade flag was available to the migrate script
[13:33] <rick_h_> bac: I guess it's an artifact that on the first time you update the charm/code they need to be in sync. this protects them from being out of sync
[13:33] <rick_h_> bac: since the charm doesn't want to run migrations prepare-upgrade if that's not a valid command
[13:33] <bac> rick_h_: hmm, can it be squelched, or can we add a message saying "ignore the following non-error"
[13:34] <rick_h_> bac: i bet -h goes to stdout or something. I think the best thing is to just remove the check from the charm
[13:35] <rick_h_> we're past a point where it's a useful check
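The probe being discussed (run the migrate script's help and see whether prepare-upgrade shows up, remembering rick_h_'s point that -h output may land on stdout) could look roughly like this; the function name and example commands are illustrative, not the charm's real code:

```python
import subprocess
import sys

def supports_flag(cmd, flag):
    """Return True if `cmd --help` mentions `flag`.

    Checks both streams, since help text lands on stdout for some tools
    and on stderr for others. Returns False if the command can't be run
    at all.
    """
    try:
        proc = subprocess.run(
            cmd + ['--help'], capture_output=True, text=True, check=False)
    except OSError:
        return False
    return flag in (proc.stdout + proc.stderr)

# Illustrative probes (names are made up):
#   supports_flag(['python', 'migrate.py'], 'prepare-upgrade')
#   supports_flag([sys.executable], '--version')
```

A check like this is noisy exactly as described in the log: the full help text ends up in the hook output even when the flag is present.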
[13:35] <bac> rick_h_: ok.  i made a card for it.
[13:35] <rick_h_> bac: rgr
[13:41] <benji> morning rick_h_, what's the constraint sanitization status?
[13:45] <rick_h_> benji: frankban has a branch he's getting up for review
[13:45] <benji> rick_h_: great!
[14:01]  * gary_poster running.  back in 1.5 or 1.75 hours.  (what a week)
[14:06] <frankban> gary_poster, rick_h_, benji: the branch is up for review and qa: https://codereview.appspot.com/22810043
[14:06] <benji> frankban: I'll be glad to do one or both; I'll start on the review.
[14:08] <rick_h_> frankban: looking
[14:08] <frankban> benji, rick_h_: thank you both
[14:11] <benji> frankban: I thought that we wanted to ignore invalid constraints, not generate an error.  Have I misunderstood the requirements or your code?
[14:12] <rick_h_> benji: we were ignoring when it was in the jujuclient because we could not get errors out of it to the user
[14:12] <rick_h_> benji: are these errors able to get to the user from the new location?
[14:12] <rick_h_> frankban: ^
[14:17] <frankban> rick_h_: all the bundle validation errors are immediately reported as a GUI notification
[14:17] <rick_h_> frankban: ok cool then. 
[14:20] <frankban> benji, rick_h_: I am not sure the GUI server is the right place for an extensive bundle validation (I also added an XXX), and we can change it later. however ISTM good to have at least a weak firewall before starting all the deployer/ProcessExecutor machinery
[14:21] <rick_h_> frankban: yes, we've talked about trying to create a validation library re-used by all entry points for users. (charmtools for proof, deployer for a manual deploy of a bundle)
[14:21] <rick_h_> frankban: so long term I think we'd piggy back off the deployer doing validation 
[14:21] <rick_h_> frankban: since it's already doing some very limited validating currently
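The "weak firewall" frankban describes — rejecting unsupported constraint names up front so the error reaches the GUI as a notification instead of being swallowed — might look like this sketch. The supported set and function name are assumptions; real juju supports more constraint keys:

```python
# Assumed subset of constraint names for illustration; the real
# supported set is larger.
SUPPORTED_CONSTRAINTS = frozenset(['arch', 'cpu-cores', 'mem'])

def validate_constraints(constraints):
    """Raise ValueError naming any unsupported constraint keys.

    Sketch of an early check run before the deployer/ProcessExecutor
    machinery starts, so a bad bundle fails fast with a message the GUI
    can show.
    """
    unsupported = sorted(set(constraints) - SUPPORTED_CONSTRAINTS)
    if unsupported:
        raise ValueError(
            'unsupported constraints: %s' % ', '.join(unsupported))
```

Raising (rather than silently dropping the bad keys, as the old jujuclient path did) is what lets the error propagate to the user.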
[14:22] <benji> I'm QAing the branch now.
[14:23] <frankban> rick_h_: such a bundle validation library could help also in quickstart
[14:23] <rick_h_> frankban: +1
[14:24] <bac> rick_h_: yes, please
[14:24] <rick_h_> frankban: replied with a couple of comments, I didn't see the version updated in the hard-coded places in utils.py?
[14:27] <rick_h_> frankban: ignore me, I did see it. :)
[14:27] <rick_h_> even commented on it. /me sips more coffee
[14:33] <frankban> heh, thanks rick_h_, re deployer version, is it ok if instead I add a comment in the hardcoded versions in utils, also mentioning the source branch for that sdist? 
[14:33] <rick_h_> frankban: yea, that's cool. Just something so we remember it's our fork 
[14:34] <hatch> this morning I found out that you can stop g+ from showing posts from a community but still be part of the community
[14:34] <rick_h_> frankban: since it's not obvious looking at things
[14:34] <hatch> best feature ever
[14:34] <frankban> rick_h_: great, and good point
[14:53] <rick_h_> wooo, it's working!
[14:54] <rick_h_> will check a failure here in a second. 
[15:01] <hatch> hehe oh Chrome
[15:01] <hatch> "C:\fakepath\maarten.yaml"
[15:01] <hatch> that's one pretty crazy fakepath :D
[15:02] <rick_h_> gary_poster: so the only QA thing is that the two services were stacked on each other. I thought mysql wasn't deployed. :/
[15:02] <rick_h_> frankban: got the error when I had invalid constraints, deploy worked on jcastro's bundle 
[15:02] <frankban> gary_poster: for when you are back, I think we are hitting this: http://bugs.python.org/issue1692335
[15:03] <frankban> rick_h_: http://i.imgur.com/1skzJ3u.gif
[15:03] <rick_h_> lol
[15:04] <rick_h_> benji: did you also get the layout issue in qa?
[15:04] <benji> rick_h_: I had some non-branch related QA issues; redeploying the charm now
[15:04] <rick_h_> benji: ah ok
[15:05] <rick_h_> benji: do me a favor and don't touch the layout as it comes up then and see if you get the same stacked service issue. I moved it before the relations were set so not sure if it'd do a redraw or anything later in the process
[15:05] <benji> rick_h_: k
[15:06] <rick_h_> frankban: ok' LGTM'd with qa ok and notes on the qa process. 
[15:07] <frankban> cool
[15:18] <benji> rick_h_: the services didn't pile up, but I think the canvas scrolled to make it look like they did
[15:20] <rick_h_> benji: hmm, ok. 
[15:20] <rick_h_> ok, so time to release the hounds?
[15:22] <frankban> gary_poster: confirmed the problem is http://bugs.python.org/issue1692335 . easily fixable in more than one ways, let's have a quick call when you are back
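For context, the linked CPython bug (issue 1692335) is the long-standing problem that exception subclasses whose __init__ takes extra required arguments can't be unpickled, which bites whenever an error object has to cross a process boundary (as in the deployer's process machinery). A minimal reproduction, with a made-up exception class:

```python
import pickle

class DetailedError(Exception):
    """Made-up exception with a required second argument."""
    def __init__(self, message, details):
        # Only `message` ends up in self.args, so unpickling re-calls
        # DetailedError(message) and dies on the missing `details`.
        super(DetailedError, self).__init__(message)
        self.details = details

err = DetailedError('bad bundle', {'key': 'value'})
data = pickle.dumps(err)  # pickling succeeds...
try:
    pickle.loads(data)    # ...but unpickling raises TypeError
except TypeError as exc:
    print('unpickle failed:', exc)
```

One common workaround is to pass every constructor argument through to Exception.__init__ so that args matches the __init__ signature.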
[15:24] <frankban> rick_h_: we should be able to add a quick fix to the charm/deployer error handling before releasing the charm. AFAIK the GUI is ready to be released
[15:25] <rick_h_> frankban: k
[15:26] <bac> uh-oh, searching on charmworld for '~bac' makes for some unpleasantness
[15:26] <rick_h_> lol
[15:26] <rick_h_> oops
[15:27] <rick_h_> bac: requested the log file from webops and we'll see what the traceback is
[15:27] <rick_h_> although should be able to dupe locally I guess
[15:28] <bac> rick_h_: yeah.  i'll look at it.  we'll probably get hate mail from curtis soon.
[15:33] <bac> rick_h_: my favorite part of the traceback:  Encountered " <FUZZY_SLOP> "~bac "
[15:34] <bac> who writes an error message like that?
[15:34] <rick_h_> bac: heh, not sure. I haven't gotten that far yet
[15:35] <rick_h_> bac: I got https://pastebin.canonical.com/100094/ from webops. I don't see the search error, but I am seeing ingest errors; still, things are working, so I'm confused. 
[15:36] <rick_h_> bac: if you get a sec can you update your bundle and push and make sure it's getting ingested?
[15:36] <rick_h_> heartbeat isn't hanging, but wtf is with the connection error in the logs?
[15:37] <bac> connection refused
[15:37] <rick_h_> yea, so it's not working. The url isn't good. 
[15:37] <bac> port 2464?
[15:37] <bac> i thought you had a different port for production
[15:38] <rick_h_> yea, that's not legit. That's the wrong port. ok, hitting up webops
[15:38] <rick_h_> bac: have that RT from the deploy?
[15:38] <rick_h_> bac: just the # for it
[15:39] <bac> it's in the card
[15:39]  * bac looks
[15:39] <bac> 65750
[15:39] <rick_h_> bac: ah cool, I'll look then. I'll work with webops. If you can update your bundle as our canary in the coal mine it'd be great
[15:40] <bac> oh, sure
[15:40] <benji> the "parse string constraints into a dict in deployer" card is not correct -- right? -- and can be deleted
[15:43] <gary_poster> frankban, great.  makes sense.  want to call?
[15:44] <frankban> gary_poster: yes thanks
[15:45] <frankban> gary_poster: https://plus.google.com/hangouts/_/7ecpir0k4g087sfq5aj9gfos3o?hl=en
[15:49] <rick_h_> ugh fail
[15:50] <Makyo> jujugui call in 10
[15:51] <benji> for all those people who hunger for fewer clicks when attentding the standup, I present your salvation: http://tinyurl.com/gui-hangout-2
[15:51] <rick_h_> benji: card was in the branch frankban just merged. So it's good and can go away
[15:52] <benji> rick_h_: well, I suppose that's one way to look at it, but we didn't do what the card says needs to be done (as best I can tell)
[15:54] <rick_h_> benji: ok, well something did it. maybe I'm missing the point, but strings go in, dicts go out to go. 
[15:54] <bac> twitter up by $20 since opening.  this will be a "successful" ipo even though the company left all that money on the table.
[15:55] <hatch> man I wish I could start a company that's losing money hand over fist and sell it for billions
[15:57] <bac> yeah, that second step is the challenge
[15:57] <hatch> I think the steps are
[15:57] <hatch> develop product
[15:57] <hatch> hype hype hype hype
[15:57] <hatch> ipo
[15:57] <hatch> hype > profits
[15:58] <bac> twitter was worse.  develop awful, crashy product.  hype.  crash more.  hype.  hire some decent engineer. hype. profit.
[15:59] <Makyo> jujugui call in 1
[15:59] <hatch> lol
[16:16] <hatch> jcsackett: hey you around?
[16:21] <Makyo> Running to get prescription. Bringing Air with, there's a coffee shop if it takes a while.
[16:22] <hatch> rick_h_: the reason there are no tests in my branch is that the code is already covered by other tests
[16:22] <hatch> just fyi
[16:23] <frankban> hatch: a live test in a real env deploying services and ingested bundles could really help. the development charm URL is cs:~juju-gui/precise/juju-gui
[16:23] <hatch> frankban: ok cool - are we doing to be doing a charm release along side the gui release then?
[16:23] <frankban> hatch: so basically it's juju deploy + expose the charm above, and "juju set juju-gui juju-gui-source=lp:juju-gui"
[16:24] <frankban> hatch: yes
[16:24] <hatch> ok cool
[16:24] <frankban> hatch: we always have to do that, because GUI releases are now included in the charm
[16:24] <jcsackett> hatch: i am.
[16:24] <rick_h_> hatch: hah, ok. Heading that one off at the pass 
[16:24] <hatch> that's how I did the demo at the conf on Tuesday - worked well then haha
[16:25] <frankban> hatch: next demo use quickstart ;-)
[16:25] <hatch> jcsackett: so the apache front end has stalled, mostly because I don't have enough time
[16:25] <hatch> that's what gary said haha
[16:25] <jcsackett> hatch: dig.
[16:25] <frankban> :-)
[16:25] <jcsackett> hatch: i'm happy to explore that.
[16:25] <hatch> jcsackett: tbh I'm more interested in a nginx front
[16:25] <hatch> but I don't know if we have an nginx charm....
[16:26] <hatch> well not a promoted one anyways
[16:26] <rick_h_> damn chunk mismatch
[16:26] <jcsackett> hatch: yeah, doesn't look like there's a reviewed one of those.
[16:27] <hatch> but we could have both, so nothing's saying we couldn't do the apache one first and then do nginx after
[16:27] <jcsackett> hatch: i'm interested in apache b/c we *do* have a reviewed one of those, so that's a better story.
[16:27] <hatch> right
[16:27] <hatch> I'm going to be pretty swamped with life stuff at least until next weekend
[16:27] <jcsackett> hatch: indeed, i totally think we can have both. there's an issue for supporting both.
[16:27] <hatch> oh?
[16:27] <jcsackett> yeah, this weekend is sort of booked for me, but i have some evenings free.
[16:28] <jcsackett> hatch: https://github.com/hatched/ghost-charm/issues/11
[16:28] <hatch> ohhh 'issue' meaning 'ticket'
[16:28] <hatch> I thought you meant there was a problem with supporting both :)
[16:28] <rick_h_> hatch: come on man, you wrote a new module. How can 'there are already tests for this'?
[16:29] <hatch> because the import code is already tested, all I did was move it out
[16:29] <hatch> I could repurpose the tests to point to the module directly
[16:29] <hatch> if you'd like
[16:29] <jcsackett> hatch: no, just that the desired thing is already captured.
[16:30] <hatch> is there a juju story for deploying services to a single machine then splitting them off that machine later?
[16:30] <hatch> so we could have a bundle to deploy apache/ghost/mysql to a single machine then give them the ability to break one out if needed
[16:32] <hatch> maybe the story there is simply to deploy another mysql/apache/ghost and then change the relationship
[16:33] <luca> gary_poster: do you have a hangout?
[16:33] <gary_poster> luca, alejandraobregon what hangout?
[16:33] <gary_poster> :-)
[16:34] <gary_poster> luca https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.t3m5giuddiv9epub48d9skdaso
[16:36] <hatch> jcsackett: do you have a live site up with the ghost charm yet?
[16:37]  * bac lunches
[16:37] <jcsackett> hatch: no, but only for lack of time.
[16:38] <hatch> frankban: so I `juju deploy cs:~juju-gui/precise/juju-gui` and it's taking forever to get back to the console...
[16:38] <hatch> is this normal?
[16:39] <hatch> I guess ec2 could be taking forever to respond
[16:39] <hatch> yeah juju status is hanging too
[16:39] <rick_h_> hatch: yea, welcome to the real cloud
[16:40] <hatch> glad this didn't happen during my demo, that would have sucked haha
[16:40] <rick_h_> hatch: it'll hang until bootstrapping is done and a machine is brought up/setup
[16:42] <rick_h_> hatch: ok, one qa feedback on your branch
[16:42] <hatch> """ERROR cannot log in to admin database: unauthorized mongo access: auth fails"""
[16:42] <hatch> that's an odd one
[16:42] <rick_h_> hatch: :(
[16:43] <hatch> rick_h_: re the merge files - for some reason the parser does not pick up those deps, I have it on my list to fix that parser but that's been on the list for 6 months :)
[16:44] <rick_h_> hatch: rgr
[16:44] <hatch> It's using the loader to generate the deps
[16:44] <hatch> so it SHOULD pick those up
[16:44] <hatch> but who knows
[16:45] <rick_h_> hatch: all good, the only thing I'd love to see changed pre-land is the titles for tooltips
[16:45] <hatch> I've spent all of 20s looking at it
[16:45] <hatch> yeah that's a good idea
[16:46] <hatch> oh rick_h_ the solution to that error message was to use `sudo` ...
[16:46] <hatch> I think I may have a bug on my system
[17:10] <frankban> rick_h_: what should I write in the tests requirements.pip to make it install the stuff in deps?
[17:11] <rick_h_> frankban: it's in the pip command, not in the requirements.txt file
[17:11] <rick_h_> err, requirements.pip
[17:11] <rick_h_> frankban: it's that snippet I linked from charmworld's makefile
[17:11] <frankban> we already have --find-links
[17:11] <frankban> rick_h_: so, just the name of the package in the requirements
[17:11] <rick_h_> frankban: right, but it'll still try to go online if it doesn't see it, or if the version is a > XXX and such
[17:12] <rick_h_> frankban: the name==version
[17:12] <frankban> rick_h_: cool, I'll try
[17:12] <rick_h_> frankban: --no-index --no-dependencies are the other two options to force it to use offline-only cache
[17:13] <frankban> rick_h_: I don't think we want offline only cache. we want all the other packages (e.g. mock, selenium) to be retrieved from pypi
[17:14] <hatch> nananananananananananananaannanana qa man.....qa man......qa man
[17:14] <rick_h_> frankban: sure, but you were talking of splitting those into a different file and could be a different pip install command?
[17:14] <rick_h_> frankban: but maybe that's step #2 I guess
[17:15] <rick_h_> frankban: I just know that if you don't go full-offline-this-directory-only-dammit mode eventually you'll hit an inconsistent version between one and another
[17:16] <frankban> rick_h_: yeah, what if the dependency is unversioned? who takes precedence?
[17:17] <rick_h_> frankban: if it's not versioned then that's when it'll go to the net anyway. pip isn't really great at this 'no network' idea
[17:18] <rick_h_> frankban: if I recall correctly that is. 
[17:18] <frankban> rick_h_: hum? ok then, and FYI, we are going to also have a customized jujuclient tarball in the charm 
[17:18] <rick_h_> frankban: so we used pip freeze to generate an initial list of deps + versions
[17:19] <hatch> man our app rocks!
[17:19] <rick_h_> hatch: lol
[17:20] <rick_h_> yea, it's fun to see how excited martin is getting with this feature. A bit scary how everyone's handing out comingsoon urls to potential customers :/ but cool
[17:20] <hatch> yeah - mark did that in his keynote haha
[17:21] <hatch> maybe we should put a warning on comingsoon :)
[17:31] <rick_h_> abentley: ping, working on an upgrade to the charm on production and jjo and I are trying to figure out if the lp_credentials change will bite us in any way. I see it's a config value, and a file is written, but not seeing anyone using that file?
[17:32] <abentley> rick_h_: lp_credentials should be in use.  Looking...
[17:36] <rick_h_> jcsackett: do you know about that? ^ I see it was your branch mentioning a job that needs it but grepping for lp_cred or charmbot in charmworld brings up no hits
[17:36] <hatch> gary_poster: https://bugs.launchpad.net/juju-gui/+bug/1249026 https://bugs.launchpad.net/juju-gui/+bug/1249028 and https://bugs.launchpad.net/juju-gui/+bug/1249030
[17:36] <_mup_> Bug #1249026: On a real environment destroying a pending service throws console error <juju-gui:New> <https://launchpad.net/bugs/1249026>
[17:36] <_mup_> Bug #1249028: Hovering over the service with your mouse should turn the cursor to a pointer <juju-gui:New> <https://launchpad.net/bugs/1249028>
[17:36] <_mup_> Bug #1249030: After destroying a related service the GUI still shows the relation. <juju-gui:New> <https://launchpad.net/bugs/1249030>
[17:38] <gary_poster> on call will see soon
[17:40] <hatch> sure np https://bugs.launchpad.net/juju-gui/+bug/1249033
[17:40] <_mup_> Bug #1249033: Long charm names cause the related charms layout to break in fullscreen <juju-gui:New> <https://launchpad.net/bugs/1249033>
[17:47] <hatch> https://bugs.launchpad.net/juju-gui/+bug/1249039
[17:47] <_mup_> Bug #1249039: Exporting from real environment exports juju-gui as well <juju-gui:New> <https://launchpad.net/bugs/1249039>
[17:51] <hatch> https://bugs.launchpad.net/juju-gui/+bug/1249042
[17:51] <_mup_> Bug #1249042: Destroying a service doesn't give any indication that the machine is sticking around <juju-gui:New> <https://launchpad.net/bugs/1249042>
[17:51] <rick_h_> hatch: that one is a long running juju bug/discussion
[17:51] <hatch> rick_h_: the gui bug is that we don't tell the user
[17:51] <hatch> so if they are a gui user then they will never know
[17:52] <hatch> we should say 'hey btw this machine is still there'
[17:52] <hatch> but feel free to add input to the ticket
[17:52] <rick_h_> *cough* machine view *cough* but yea, understood
[17:52] <hatch> I'm just creating them - there may be duplicates
[17:52] <hatch> :)
[17:56] <gary_poster> ok looking now
[17:59] <hatch> cool
[17:59] <gary_poster> hatch ack thanks.  Please let me know what you think about the following.  #1 sounds nasty but I suspect it is longstanding and not a release blocker.  #2 Arguable (since you can drag the service) and not a blocker.  #3 same as #1.  #4 same as #1 but sounds easy to fix. :-P. #5 uh...working as designed?  :-) not a blocker IMO.  #6 yeah that's on the "wouldn't it be nice to fix this" tasklist
[17:59] <gary_poster> So, so far, either known or non-blockers.  Agree?
[18:00] <jcsackett> rick_h_: the code that rolled back uses that file.
[18:00] <rick_h_> jcsackett: ah!
[18:00] <hatch> yep I wouldn't consider any of these blockers unless of course someone already has an idea on how to fix #1/3
[18:00] <jcsackett> the presence of that file however shouldn't cause any problems.
[18:00] <gary_poster> yeah
[18:00] <rick_h_> jcsackett: ok, so it's safe for me to deploy the charm then it sounds like
[18:00] <jcsackett> it's been running on staging safely.
[18:00] <jcsackett> absent any code touching it.
[18:00] <rick_h_> jcsackett: awesome
[18:00] <rick_h_> jcsackett: so the charm was updated on staging just not production?
[18:00] <hatch> I do like that I can go `juju destroy-machine 1 2 3 4 5 6 7` :D
[18:01] <jcsackett> rick_h_: correct, as far as i know.
[18:01] <rick_h_> jcsackett: cool, thanks
[18:01] <hatch> gary_poster: I'm just going to run a real bundle import on a real env then It should be QAOK
[18:01] <gary_poster> awesome!  thanks hatch
[18:02] <gary_poster> I need a quick lunch (and to decompress :-P) and then will be right back
[18:02] <hatch> :)
[18:12] <hatch> Makyo: are you around?
[18:12] <Makyo> Yis
[18:13] <hatch> cool ok quick q
[18:13] <hatch> I exported a bundle from my real env
[18:13] <hatch> deleted the gui from it
[18:13] <hatch> then re-imported it (after emptying out the old stuff)
[18:13] <hatch> both services have x/y coords
[18:13] <hatch> but they were placed on top of each other
[18:13] <hatch> exactly on top of each other
[18:13] <hatch> so you couldn't even tell they were there
[18:14] <hatch> https://gist.github.com/hatched/7359271
[18:14] <hatch> here is the resulting bundle that was imported
[18:14] <hatch> is there anything wrong with that bundle?
[18:14] <hatch> so my question is....is this a bundle import or export issue :)
[18:16] <Makyo> Works for me?
[18:16] <hatch> this is on a real env...
[18:16] <hatch> I can destroy and try again see if I can reproduce
[18:16] <rick_h_> hatch: I had this happen with one of jcastro's bundles in a real env as well. wordpress and mysql were stacked and couldn't see both
[18:16] <hatch> ok so it's definitely a bug thanks
[18:16] <Makyo> Import, then.
[18:17] <hatch> ok cool
[18:17] <hatch> will file
[18:20] <hatch> gary_poster: one more https://bugs.launchpad.net/juju-gui/+bug/1249051 also not a blocker
[18:20] <_mup_> Bug #1249051: Importing a bundle stacks services regardless of x/y annotations on real env <juju-gui:New> <https://launchpad.net/bugs/1249051>
[18:20] <hatch> all-n-all I think this is a pretty darn stable release
[18:23] <gary_poster> that's not ideal, but we can live with it for now
[18:23] <gary_poster> thank you very much hatch
[18:25] <hatch> no problemo
[18:25] <frankban> guihelp, python reviewers: I need one review + QA for a really really nice to have for the release: https://codereview.appspot.com/23000043
[18:26] <gary_poster> rick_h_, you happen to be available ^^^? about to have my call with benji
[18:26] <gary_poster> or bac?
[18:26] <rick_h_> gary_poster: working with webops on charm upgrade atm. I can look in a few if all goes well
[18:26] <gary_poster> ok thank you rick_h_ 
[18:27] <frankban> rick_h_: thanks
[18:30] <bac> frankban: i can now
[18:30] <bac> rick_h_: ^^
[18:30] <rick_h_> bac: thanks, mjc upgrade is in process and wheee
[18:31] <bac> i don't know what that means
[18:31] <rick_h_> bac: upgrading the charm for prodstack mjc to fix the port/ini issue
[18:31] <rick_h_> bac: so thanks for looking at frankban's branch
[18:33] <rick_h_> abentley: jcsackett so charm is upgraded. Because that's a new config there's some hackery needed to get the initial value going. 
[18:34] <rick_h_> abentley: jcsackett just a heads up for when that code comes back and things need to be updated
[18:37] <hatch> stepping out for lunch, bbl
[18:38] <bac> frankban: code looks good.  QA now
[18:39] <frankban> bac: great thanks
[18:41] <jcsackett> rick_h_: what hackery? the default value should be an empty string, which rights out to an empty file.
[18:41] <jcsackett> s/rights/writes/
[18:42] <rick_h_> jcsackett: so the first time it runs the default doesn't get written I guess? If another charm deploy happens between now/then it should be ok I think
[18:42] <rick_h_> jcsackett: I was pointed at #1244841
[18:42] <_mup_> Bug #1244841: support atomic upgrade-charm --config var=val ... <canonical-webops> <config> <upgrade-charm> <juju-core:Triaged> <https://launchpad.net/bugs/1244841>
[18:42] <jcsackett> rick_h_: so just no file gets written?
[18:42] <rick_h_> jcsackett: right, my understanding is that right now the file doesn't exist
[18:43] <jcsackett> huh.
[18:43] <rick_h_> jcsackett: I'll verify when I check the log in a sec
[18:43] <rick_h_> jcsackett: but it's a limitation of juju upgrading a charm when the upgrade contains a config value that didn't exist before is my understanding
[18:43] <jcsackett> well, if it's going to do something unexpected, "nothing" is one of the better options. :p
[18:43] <rick_h_> :)
[18:44] <jcsackett> rick_h_: oh, i see; this would have been hidden on staging b/c we did a juju set right after to load the credentials.
[18:44] <jcsackett> and we haven't updated the IS docs b/c right now there's no need to set that option/write that file.
[18:45] <rick_h_> jcsackett: hmm ok, well that didn't come up as an option in talking. I'm not following 100% I guess. Anyway, heads up there be dragons I guess. 
[18:45] <jcsackett> rick_h_: there be no dragons.
[18:45] <rick_h_> ok, then awesomeness
[18:46] <jcsackett> to get the file to write out, you need to do `juju set charmworld lp_credentials=$BIG_DAMN_STRING`
[18:46] <jcsackett> and then on config changed it'll write it out.
[18:47] <jcsackett> but, like i said, IS documentation isn't updated b/c the failure mode is do nothing. so actually, this is everything running as expected. :-)
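The hook behavior jcsackett describes — empty default writes nothing, so the failure mode is "do nothing" — can be sketched roughly like this. The function and path names are illustrative only, not the actual charmworld hook code:

```python
# Hypothetical sketch of the config-changed behavior described above:
# with the default (empty) lp_credentials value, no file is written at
# all, so nothing breaks. Names here are illustrative, not the real
# charmworld charm code.
def write_credentials(lp_credentials, path):
    """Write the credentials file only when the option is non-empty."""
    if not lp_credentials:
        return False  # empty default: leave no file behind
    with open(path, "w") as f:
        f.write(lp_credentials)
    return True
```

After a `juju set charmworld lp_credentials=...`, the config-changed hook would fire with the new value and the file would then be written out, matching jcsackett's description.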
[18:59] <rick_h_> gary_poster: charm is all set, charmworld all good. What's next for release I should look at?
[19:02] <gary_poster> rick_h_, ack 1 sec
[19:03] <gary_poster> rick_h_, I've been unable to make changelog because of calls etc.  you could take that over.  docs/process.rst has a recommended way to generate for a release.  This is what I have so far:
[19:05] <gary_poster> rick_h_, http://pastebin.ubuntu.com/6377908/
[19:05] <rick_h_> gary_poster: rgr, looking 
[19:06] <gary_poster> rick_h_, thank you.  card is in maintenance but you can drag down to bundle and claim
[19:06] <gary_poster> bac, in hangout
[19:15] <jcsackett> rick_h_: what rev did you release for production charmworld? cleaning up our kanban.
[19:15] <rick_h_> jcsackett: 84
[19:15] <rick_h_> jcsackett: oh, charmworld was 450
[19:20] <gary_poster> frankban, you are still here! are you landing "Improve bundle deployment error handling" or should we?
[19:21] <frankban> gary_poster: waiting bac's QA, and then I'll land it
[19:21] <gary_poster> oh cool.  thank yo ufr
[19:21] <gary_poster> thank you frankban, that is ;-)
[19:21] <gary_poster> tab completion on ufr did not work as expected
[19:22] <frankban> :-)
[19:24] <bac> frankban: done.  all good.
[19:25] <frankban> bac: thanks, landing
[19:42] <hatch> man our traffic is nuts
[19:44] <frankban> done, have a nice evening, and a nice release!
[19:44] <gary_poster> thanks frankban !
[19:49] <gary_poster> rick_h_, we are ready for release, and you have the charm in.  you doing it, or shall we look for other volunteers?
[19:49]  * hatch hides
[19:49] <rick_h_> gary_poster: I'm working through the gui release notes. 
[19:49] <gary_poster> awesome thanks rick_h_ !
[19:49] <rick_h_> gary_poster: I've got the changelog, tests, xz and working on how to open it up to qa it
[19:49] <hatch> *phew*
[19:49] <gary_poster> I see  you hatch! :-)
[19:49] <rick_h_> gary_poster: if someone wants to do the charm +1 from me :)
[19:50]  * rick_h_ looks at hatch 
[19:50] <hatch> is there release notes? I've never done the charm
[19:50] <gary_poster> rick_h_, ack, we do that after the tarball release though
[19:50] <rick_h_> gary_poster: rgr, ok. know off the top of your head the tar flags for xz files?
[19:50] <rick_h_> ah, -J
[19:50] <gary_poster> rick_h_, use tar xvaf
[19:50] <hatch> rick_h_:  https://xkcd.com/1168/
[19:51] <gary_poster> rick_h_, a means "use the file extension to figure it out, buster"
[19:51] <hatch> lol
[19:51] <rick_h_> gary_poster: ah, good stuff
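The flag gary_poster points at can be demonstrated in a few lines — a minimal sketch, assuming GNU tar and xz-utils are installed. The `-a`/`--auto-compress` flag infers the compressor from the file extension, so there is no need to remember `-J` for .xz, `-z` for .gz, or `-j` for .bz2:

```shell
# GNU tar's -a flag picks the compressor from the archive's suffix,
# on both create and extract.
set -e
workdir=$(mktemp -d)
cd "$workdir"
echo "hello" > sample.txt
tar caf archive.tar.xz sample.txt   # create: -a selects xz from .xz
rm sample.txt
tar xaf archive.tar.xz              # extract: -a auto-detects again
cat sample.txt                      # the original contents come back
```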
[19:51] <rick_h_> gary_poster: ok, plodding through notes. Man this is a looong list :)
[19:51] <gary_poster> rick_h_, :-) on the bright side a lot of the qa has been done
[19:51] <hatch> rick_h_: I found using trunk's commit notes to build the changelog
[19:52] <hatch> is the best way
[19:52] <rick_h_> hatch: yea, got that part. Wasn't that bad since it was only around 40 commits (thought it'd be bigger/longer)
[19:52] <rick_h_> and most summarize as "Welcome to bundles"
[19:52] <hatch> if it was on git it would be :) haha
[19:53] <hatch> so I'm up for whatever right now...if anyone has any tasks they need completed
[19:57] <rick_h_> oh bah, the charm is part of this. doh!
[19:59] <gary_poster> you can pass that off rick_h_ .  :-)
[19:59] <gary_poster> hatch, I suggest tackling one of the bugs you found, or working on one of https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0AtC9etoysSQldDQxMVdmTDB4dm1XXzA0NFlLSUQ4Mmc#gid=0
[19:59]  * rick_h_ looks at how far this list goes
[19:59] <gary_poster> preferring ones near the top :-)
[20:00] <hatch> gary_poster: alrighty
[20:00] <rick_h_> gary_poster: got it, should go through this at some point and looks like I'm mostly through it so far
[20:00] <gary_poster> rick_h_, yeah you are getting near-ish to the end. :-)
[20:00] <gary_poster> We might be able to remove some of the more paranoid bits at some point
[20:09] <rick_h_> running charm tests failed due to dep building. Missing apt-pkg/hashes.h file. Looking for missing system dep now
[20:10] <hatch> that's an odd one
[20:10] <hatch> I've never seen that before
[20:10] <rick_h_> looks like libapt-pkg-dev
[20:11] <rick_h_> re-running make lint from a clean charm env
[20:12] <rick_h_> hmm, that's in the sysdeps make target. So it's a new step in the process.
[20:13] <rick_h_> ok, take 3
[20:14] <hatch> third time is the.....charm
[20:14] <hatch> ...see what I did there...
[20:14] <rick_h_> sorry, wasn't looking. What was that again?
[20:14] <hatch> lol u suck
[20:15] <hatch> so after next week's talk on Go/JS I can get started on learning Python well enough to contribute - any good resources for learning about the ecosystem?
[20:16] <rick_h_> go to pycon, it's in CA :P
[20:16] <hatch> California?
[20:16] <hatch> ohh Canada
[20:16] <hatch> Montreal
[20:16] <rick_h_> nope, bit more north
[20:16] <hatch> wow...it's 8 days long
[20:17] <hatch> make that 9
[20:17] <rick_h_> well tutorials, conference, and sprints. I go for the conference and sprints myself
[20:17] <rick_h_> ok, make test now in progress
[20:18] <rick_h_> bah, now it's making fun of my environments.yaml file
[20:18] <hatch> so they make you sign up for an account before telling you how much it is.....fail
[20:19] <rick_h_> wtf, it's complaining about a different environment
[20:19] <hatch> oh I was looking at the wrong place
[20:19]  * hatch failed
[20:21] <rick_h_> yea, reminds me I need to sign up asap. Guess I'll be doing that tonight
[20:21] <hatch> flights to Montreal cost me the same as it does to fly to SFO haha
[20:29] <rick_h_> gary_poster: crap, getting functional test failures
[20:29] <rick_h_> oh man, it wants FF in my lxc?
[20:30] <rick_h_> grrrr
[20:30]  * rick_h_ sees what happens when you try to run firefox in a headless lxc
[20:30] <hatch> unicorns happen
[20:33] <gary_poster> rick_h_, so you are addressing?  If not, and it is too difficult, we can find other options
[20:33] <rick_h_> gary_poster: working on it
[20:33] <gary_poster> k
[20:33] <rick_h_> gary_poster: installing firefox now, pulling all of X and lovelyness so taking a few
[20:33] <gary_poster> heh
[20:33] <gary_poster> k
[20:33] <rick_h_> will have to restart functional tests, destroying old env now
[20:38] <rick_h_> now I know why hatch ducked for cover :P
[20:38] <hatch> haha
[20:38] <hatch> I've never actually done a charm release
[20:56] <rick_h_> still running functional tests, running on test_branch_source forever
[20:56] <rick_h_> or 10minish
[20:56] <rick_h_> if forever is a bit vague :)
[20:57] <hatch> hmm.....delete 100+ LOC, no test failures......win?
[20:57] <rick_h_> heh, that's what we call 'test coverage time'
[20:58] <hatch> ohh the ones which would have failed were already skipped
[20:58] <hatch> *phew*
[20:58] <hatch> I was a little scared there haha
[20:58] <rick_h_> python /home/rharding/bin/juju-test --timeout=120m -v -e ec2 --upload-tools 20-functional.test
[20:58] <rick_h_> 120minutes?!
[20:58] <hatch> just in case lol
[20:58] <rick_h_> well, 20/120 I guess
[21:00] <rick_h_> this is the last step before I can merge/push stuff! grrr go go go ec2 go
[21:04] <gary_poster> rick_h_, if you have to go, and you have manually qa'd the charm with the new release, I would be fine with you making the release.  Francesco ran the tests earlier today.  All you've done is change the tarball.
[21:09] <rick_h_> gary_poster: working on it. Didn't try to deploy it as a local charm yet since I put it all in a tmp dir and I can't figure out how to get juju to deploy it unless it's in a directory named precise. 
[21:10] <gary_poster> rick_h_, make deploy.  
[21:10] <hatch> gary_poster: do you want me to mark tasks complete somehow on this spreadsheet?
[21:10] <rick_h_> oh duh
[21:10] <gary_poster> rick_h_, http://jujugui.wordpress.com/2013/10/15/if-you-want-to-run-a-custom-gui/
[21:11] <gary_poster> except all you have to do is make deploy in this case :-)
[21:11] <gary_poster> hatch, mark as Done in spreadsheet
[21:11] <hatch> cool
[21:11] <gary_poster> in status column hatch
[21:11] <rick_h_> gary_poster: trying, getting an error out of make deploy about username/pass/project-name. 
[21:12] <gary_poster> rick_h_, :-( worked for me last week
[21:12] <rick_h_> gary_poster: ve to do is make deploy in this case :-)
[21:12] <rick_h_> bah
[21:12] <rick_h_> bad paste
[21:12] <gary_poster> rick_h_, you can push charm to ~juju-gui
[21:12] <rick_h_> http://paste.mitechie.com/show/1068/
[21:12] <gary_poster> rick_h_, then test
[21:12] <gary_poster> rick_h_, and if that is ok then merge to ~juju-gui-deployers
[21:13] <gary_poster> does that make sense?
[21:13] <hatch> woah lbox just exploded on me...
[21:13] <rick_h_> gary_poster: yea, makes some sense. I'd like to not have this make deploy error before I go pushing. 
[21:14] <gary_poster> rick_h_, never seen :-(
[21:14] <gary_poster> that traceback I mean
[21:15] <gary_poster> rick_h_, I will try locally
[21:15] <rick_h_> gary_poster: rgr
[21:16] <gary_poster> rick_h_, do you want to push up your charm somewhere or if I test juju-gui trunk is that sufficient?
[21:16] <rick_h_> gary_poster: I'll push it up to my home I guess. I'd be curious to start with trunk
[21:16] <hatch> has anyone seen this when lboxing? sphinx error? https://gist.github.com/hatched/ad0695b48078503825f7
[21:18] <rick_h_> hatch: no, but something about getting version from release? Is that our own code? 
[21:18] <hatch> fresh trunk with js and handlebars template removal
[21:19] <gary_poster> rick_h_, running make deploy from ~juju-gui trunk
[21:19] <rick_h_> bah, can I not push this without it going into the store and ingested? /me tries to recall how +junk worked
[21:19] <gary_poster> I think bzr just blew up? np?
[21:19] <gary_poster> no
[21:19] <gary_poster> juju status
[21:20] <hatch> gary_poster: was that to me?
[21:20] <gary_poster> rick_h_, you push it but don't call it trunk and it will not be ingested
[21:20] <gary_poster> hatch no sorry
[21:20] <rick_h_> gary_poster: ah, ok. 
[21:23] <gary_poster> hatch dupe on trunk.  investigating.  I think it has to do with CHANGES file...
[21:23] <hatch> gary_poster: sorry landing a fix now
[21:23] <hatch> I think
[21:23] <rick_h_> gary_poster: pushed it to lp:~rharding/+junk/charmers-trunk as it is now. Tests are at 42min run time. 
[21:23] <hatch> it's rick_h_'s fault
[21:23] <hatch> :P
[21:23] <gary_poster> oh cool hatch thanks.  fix in CHANGES, yeah?
[21:23] <rick_h_> bah, did I mess up the changes? i tried to dupe the system
[21:23] <hatch> gary_poster: yeah the 'unreleased' was missing the trailing :
[21:24] <hatch> at least I THINK that's the issue
[21:24] <gary_poster> hatch sounds right thanks
[21:24] <rick_h_> ah, crap. thanks hatch. Sorry you hit that
[21:24] <hatch> pretty fragile system
[21:24] <hatch> rick_h_: ahh no problem :)
[21:24] <hatch> at least I could do enough python to debug it haha
[21:25] <hatch> rick_h_: If you have any ideas on how we can streamline our release process I'd be interested to hear/implement them
[21:26] <rick_h_> hatch: taking notes now on the things I've hit along the way. As for streamlining, just trying to get it to work once before I do that. :)
[21:26] <hatch> :)
[21:28] <hatch> gary_poster: rick_h_ fyi the missing : was the culprit
[21:28] <rick_h_> hatch: haha, with one character I can bring down your whole branch!
[21:28]  * rick_h_ pockets that away
[21:28] <hatch> lol - maybe that means that we should require an lbox on the 'CHANGES' changes ;)
[21:29] <hatch> because that got into trunk because of the step that says 'push to trunk'
[21:29] <gary_poster> we effectively do IIRC
[21:29] <gary_poster> it's that the deployment instructions do not run lbox
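The one-character failure hatch hit is the kind of error a strict changelog parser produces when a section header like `unreleased:` loses its trailing colon. This is a hypothetical reconstruction of such a parse, not the actual juju-gui release tooling:

```python
# Hypothetical sketch: a CHANGES parser that expects every unindented
# line to be a section header ending in ":" (e.g. "unreleased:"), and
# fails loudly when the colon is missing -- the fragility hatch hit.
def parse_changes(text):
    sections = {}
    current = None
    for line in text.splitlines():
        if line and not line[0].isspace():
            if not line.endswith(":"):
                raise ValueError("malformed section header: %r" % line)
            current = line[:-1]
            sections[current] = []
        elif line.strip() and current is not None:
            sections[current].append(line.strip())
    return sections
```

Running such a check in lbox (or a pre-push hook) would have caught the missing colon before it reached trunk, which is the point hatch makes above.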
[21:29] <hatch> https://codereview.appspot.com/23240043/ if anyone is avail for a quick review/qa
[21:29] <hatch> right
[21:29] <gary_poster> rick_h_, I have trunk up with make deploy
[21:30] <gary_poster> trying
[21:30] <rick_h_> gary_poster: :( ok I guess. I can't find any reference to "project-name" that was in my traceback. 
[21:30] <gary_poster> rick_h_, ok.  I will kill this one and merge your branch and do it again...
[21:31] <rick_h_> gary_poster: k
[21:31] <rick_h_> test run almost hitting an hour. Is that normal from previous deploys? 
[21:32] <gary_poster> rick_h_, yes, think so
[21:32] <rick_h_> ok
[21:32] <gary_poster> charms are painful beasts
[21:32] <gary_poster> to test
[21:32] <rick_h_> yea, understood. /me does a s/test_/wait_ so it feels more natural :)
[21:32] <gary_poster> :-)
[21:35] <gary_poster> hatch, on it
[21:40] <gary_poster> hatch, uh, I'm struggling to figure out what to do to qa. :-P
[21:40] <gary_poster> the app seems to work? :-)
[21:40] <hatch> gary_poster: deploy two services with the same name
[21:41] <hatch> you should get a notification
[21:41] <hatch> sorry :)
[21:41] <gary_poster> oh ok
[21:41] <hatch> I found if I removed too much - notifications would stop happening :)
[21:41] <huwshimi> Morning
[21:41] <hatch> mornin
[21:41] <gary_poster> morning
[21:41] <gary_poster> hatch QAOK
[21:42] <hatch> thanks
[21:42] <gary_poster> rick_h_, site is up; trying
[21:43] <gary_poster> rick_h_, WFM :-)
[21:43] <gary_poster> rick_h_, ship it (~juju-gui and ~juju-gui-charmers)
[21:43] <gary_poster> rick_h_, https://ec2-54-205-201-186.compute-1.amazonaws.com/
[21:43] <rick_h_> gary_poster: ok, cool then
[21:44] <huwshimi> gary_poster: I'm happy to have our calls anytime from now.
[21:44] <gary_poster> thank you huwshimi!  maybe in a few minutes, then?  would be good
[21:44] <huwshimi> gary_poster: It looks like we've both changed daylight savings recently...
[21:44] <huwshimi> gary_poster: Sure!
[21:44] <gary_poster> heh, that's right I forgot
[21:45] <gary_poster> cool
[21:46] <hatch> shall I join or is this just 1:1?
[21:46] <rick_h_> ok, both branches of the charm pushed, waiting to check on the new revs hitting
[21:51] <gary_poster> awesome thanks rick_h_ .  I have a blog post and email waiting in the wings.
[21:51] <rick_h_> gary_poster: rgr
[21:52] <rick_h_> gary_poster: http://paste.mitechie.com/show/1069/ are my notes for the deploy steps I hit. Done for today, but will look at tweaking those as follow up
[21:52] <huwshimi> Are we releasing now!?
[21:52] <rick_h_> huwshimi: yes
[21:52] <huwshimi> woah! Nice.
[21:52] <hatch> huwshimi: this way - if it breaks you can fix it while we are out drinking
[21:52] <hatch> mohohahaha
[21:52] <huwshimi> Boo
[21:53] <gary_poster> :-)
[21:53] <huwshimi> hatch: It's my Friday night before it's yours :P
[21:53] <gary_poster> :-)
[21:53] <hatch> darn....
[21:53] <gary_poster> thank you rick_h_ !
[21:54] <hatch> +1 internets to rick_h_
[21:56] <rick_h_> thanks for the patience. Waiting for ingest at top of the hour and then running away
[21:56] <gary_poster> ah cool
[21:57] <gary_poster> huwshimi, hatch, https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.dd77sn7kjl6unba21lutdr0p70 when ready?
[21:57] <huwshimi> gary_poster: on way
[22:05] <rick_h_> ok, ingested https://jujucharms.com/precise/juju-gui-78/
[22:05] <rick_h_> night all
[22:10] <gary_poster> congrats!  night!
[22:11] <huwshimi> rick_h_: Night Rick, thanks!
[22:30] <rick_h_> woot! and with the new juju fixing lxc deployed a bundle from the new charm in lxc. only issue is that stacked service block problem yay!
[22:31] <rick_h_> <3 having lxc powers back