[00:02] <rick_h_> Makyo: yay video
[00:02] <rick_h_> looking cool
[00:03] <rick_h_> Makyo: do we have an account to upload the videos to?
[00:04]  * rick_h_ looks and notices it's the normal account
[00:04] <rick_h_> Makyo: ok cool, I'll check with the marketing and eco teams whether there's a good 'juju home' we should point stuff towards.
[00:04] <Makyo> rick_h_, I've been uploading them to my account and then passing the link on.  I think there's a canonical account, though, can forward the video to...someone...?  I don't know
[00:04] <Makyo> Yeah, sounds good.
[00:05] <rick_h_> now we just need hatch to do the blog post matching something like that, get the ghost folks excited about it :) and almost there
[00:06] <Makyo> I'll have the de-chirped version up in a few.
[00:06] <hatch> yeah I was thinking of releasing the post next week
[00:07] <rick_h_> hatch: well wait until thurs
[00:07] <rick_h_> hatch: really that's the wait for all of this
[00:07] <hatch> ok sounds good, lots of time
[00:07] <rick_h_> :P
[00:07] <rick_h_> procrastinator
[00:07] <rick_h_> though Makyo's video brought out one more bug to file :/
[00:08] <rick_h_> hmm, though maybe not I guess, Makyo you used the flag to override the settings?
[00:08] <rick_h_> yep, see it in the url. Ok, no new bug yay!
[00:14] <Makyo> rick_h_, yep
[00:14] <Makyo> rick_h_, only thing I found was the X on the cookie warning didn't show up on this laptop.
[00:14] <Makyo> Will try and repro
[00:15] <rick_h_> Makyo: k
[00:15] <rick_h_> I assumed it was hidden behind the onboarding or something
[00:15] <rick_h_> but didn't look too close
[00:15] <Makyo> Oh, good point
[00:38] <Makyo> https://www.youtube.com/watch?v=rEiwKLfzlX8 Updated video without the chirp
[00:39] <rick_h_> woot
[00:39] <rick_h_> Makyo: can you send that out to peeps please?
[00:39] <Makyo> Sure thing
[00:39] <rick_h_> Makyo: and we'll work on pulling together our content and that way we've got a url down somewhere to remember
[00:42] <Makyo> rick_h_, sounds good.  Can forward the mp4 to anyone in marketing who needs it, too, if it belongs on another account
[00:42] <rick_h_> Makyo: rgr, thanks. We probably won't know more about that until Sally gets back Monday
[00:43] <Makyo> Yep, sounds good
[01:22] <huwshimi> I can't figure out a good way to do sorting.
[01:22] <rick_h_> huwshimi: the 0 padding doesn't help?
[01:23] <huwshimi> rick_h_: Oh, I actually don't know what that is :)
[01:24] <rick_h_> huwshimi: oh, that's what frankban was saying
[01:24] <rick_h_> he was suggesting that if the thing was 10new
[01:24] <rick_h_> and you had 1
[01:24] <rick_h_> that if you padded it 01 and 10 they'd sort correctly
[01:25] <rick_h_> so basically fill in things with 0's so that they're similar and will sort as strings
[01:25] <rick_h_> so machine 0 turns into 00000 I guess. 
[01:26]  * rick_h_ loads pr to look at frankban's comments again
[01:26] <huwshimi> rick_h_: Actually that gives me another idea: we could just add 1 to all ints; the numbers will still sort correctly, and zero will become 1 so it will be truthy when compared against strings
[01:27] <rick_h_> huwshimi: right, but overall if things are just always strings, and you prefix the names with 0000's then they'll sort correctly, even things like 12 vs 13new
[01:28] <rick_h_> huwshimi: try both and see what 'works' to the eye test and we can go from there
[01:28] <huwshimi> oh I see what you mean now :)
[01:29] <rick_h_> huwshimi: so the longest one in the previous example was 5 characters long, so if we did '00000' + 1, 2, 3, 4, 5 they'd sort with 'new10' just fine
[01:29] <rick_h_> well, I guess that's four zeros needed to make them all 5 long, /me wonders if we even need to add more than one 0, or just add '0' + name
[01:29] <rick_h_> to get a string out of it 
[01:30] <huwshimi> rick_h_: I guess we don't know how long a number someone might use though
[01:30] <rick_h_> huwshimi: right, but even if they use 12343243243543543 and we add a 0 to 1, 2, 3, we'd be sorting '01', '02', '03', and '012334...'
[01:31] <rick_h_> huwshimi: ah right, so we'd need to use as many 0's as missing from the longest string
[01:31] <huwshimi> yeah
[01:31] <rick_h_> because otherwise we'd still have issues
[01:31] <huwshimi> yep
[01:31] <rick_h_> right, so we'd have to find the len of the longest one, and then do a longest-len(name) * '0' + name
[01:31] <rick_h_> well in python, have to convert that to JS
[01:44] <huwshimi> rick_h_: Actually the issue might not just be about zeros. It occurs if I add machines in this way too: 
[01:44] <huwshimi> app.db.machines.add([{id: '3'}, {id: 'new3'}, {id: '10'}, {id: '10'}, {id: '2'}, {id: 'new1'}, {id: 'new11'}, {id: 'new42'}, {id: '1'}, {id: 'new21'}]);
[01:45] <huwshimi> something about the lowest number being after two strings?
[01:46] <rick_h_> huwshimi: right, so you need to update everything to be 5 chars long
[01:46] <rick_h_> so you need to make it 
[01:47] <rick_h_> '00003', '0new3', '00010', '00010', '00002', '0new1', 'new42', '00001', '0new21'
[01:47] <rick_h_> and those should all sort properly
[01:48] <huwshimi> ouch
[01:48] <rick_h_> it's not as bad as it seems
[01:49] <huwshimi> rick_h_: It is because I need to figure out how to loop through everything and store the highest value inside the model comparator method :)
[01:49] <rick_h_> huwshimi: if it's heading off into the weeds for you feel free to put the card back with your branch and we can update it
[01:50] <rick_h_> huwshimi: yea, and it'll have to be updated as new machines come into play, so it really needs to be done at the time of the sort button being pressed
[01:50] <rick_h_> and can't be done ahead of time
[01:50] <huwshimi> yep
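The padding scheme hashed out above can be sketched in JS (a rough sketch under the chat's assumptions; `padIds` is an illustrative helper, not an actual juju-gui function):

```javascript
// Left-pad every machine id with '0' to the length of the longest id,
// so a plain lexicographic string sort orders '1', '2', '10' and 'new3'
// sensibly. The longest length is computed at sort time, since new
// machines may have been added since the last sort.
function padIds(ids) {
  var longest = ids.reduce(function(max, id) {
    return Math.max(max, id.length);
  }, 0);
  return ids.map(function(id) {
    return new Array(longest - id.length + 1).join('0') + id;
  });
}

var ids = ['3', 'new3', '10', '2', 'new1', 'new42', '1', 'new21'];
// → ['00001', '00002', '00003', '00010', '0new1', '0new3', 'new21', 'new42']
console.log(padIds(ids).sort());
```

Note that numeric ids sort ahead of the uncommitted `newN` ids because ASCII digits compare lower than letters, which matches the desired ordering in the chat.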
[01:57] <huwshimi> rick_h_: Can we actually have custom names at the moment?
[01:58] <huwshimi> If not then users can't currently add machines with names out of order
[01:58] <huwshimi> I'm just thinking I could land this as is with a follow up card to add the zero padding.
[02:00] <hatch> evening all
[02:43] <rick_h_> huwshimi: no, not yet. It's on the todo with some other stuff
[02:43] <rick_h_> huwshimi: but we have the issue of the sorting with 1 and 13 currently?
[02:43] <rick_h_> huwshimi: but yea, just want to make sure we have things 'working' and the 0 was the issue atm
[02:44] <huwshimi> rick_h_: Is that the issue with new1 and new13?
[02:44] <rick_h_> not really, those can go at the end and that's not an issue as they'll come back with real numbers once committed
[02:45] <huwshimi> That's true
[02:45] <rick_h_> 0 and the 1 and 13 are the two issues currently
[02:45] <huwshimi> I'm not sure what the 1 and 13 issue is...
[02:46] <rick_h_> huwshimi: call?
[02:46] <huwshimi> sure!
[02:46] <rick_h_> standup url?
[02:46] <rick_h_> evening hatch 
[02:46] <huwshimi> rick_h_: What's the standup url?
[02:47] <rick_h_> https://plus.google.com/hangouts/_/canonical.com/daily-standup?authuser=1
[02:47] <huwshimi> thanks!
[02:47] <rick_h_> adjust the authuser to your accounts
[08:04] <rogpeppe1> mornin' all
[08:13] <urulama> morning rogpeppe1
[08:13] <rogpeppe1> urulama: yo!
[08:14]  * urulama puts on a big golden chain, big hat ... sends yo back :D
[10:58] <rick_h_> morning everyone
[10:59] <urulama> rick_h_: fabrice will play with MV, he has dev gui up and running ... what's the link to the MV?
[11:00] <rick_h_> urulama: /:flags:/mv
[11:00] <urulama> fabrice: ^
[11:01] <urulama> ty, rick_h_
[11:02] <fabrice> morning
[11:02] <fabrice> what's the url to get to mv ?
[11:03] <fabrice> I should read before typing :)
[11:03] <fabrice> rick_h_: thanks
[11:03] <fabrice> in fact time for a break
[11:46] <rbasak> rick_h_: o/
[11:46] <rbasak> I'm just catching up on landing Juju etc. in Trusty for 1.18.4. It's in -proposed, as is juju-quickstart.
[11:46] <rick_h_> rbasak: what's up? otp atm 
[11:47] <rbasak> The pending bugs for juju-quickstart are https://bugs.launchpad.net/juju-quickstart/+bug/1309678 and https://bugs.launchpad.net/juju-core/+bug/1306537.
[11:47] <mup> Bug #1309678: a value is required for the control bucket field <verification-needed> <juju-quickstart:Fix Released by bac> <juju-quickstart (Ubuntu):Fix Released> <juju-quickstart (Ubuntu Trusty):Fix Committed> <https://launchpad.net/bugs/1309678>
[11:47] <mup> Bug #1306537: LXC local provider fails to provision precise instances from a trusty host <deploy> <local-provider> <lxc> <verification-needed> <juju-core:Fix Released by wallyworld> <juju-core 1.18:Fix Released by wallyworld> <juju-quickstart:Fix Released by frankban> <juju-quickstart (Ubuntu):Fix
[11:47] <mup> Released> <juju-quickstart (Ubuntu Trusty):Fix Committed> <https://launchpad.net/bugs/1306537>
[11:49] <rick_h_> rbasak: looking
[11:49] <rbasak> rick_h_: sorry I got distracted.
[11:50] <rick_h_> rbasak: so this is for utopic vs trusty?
[11:50] <rbasak> So we just need to check that the package in -proposed does fix these two bugs, and comment to explain the testing and mark verification-done. This is for Trusty.
[11:50] <rbasak> I have a 1.18.4 in trusty-proposed now (probably didn't before).
[11:50] <rick_h_> rbasak: ok, I'll see if we can get these tested out. 
[11:50] <rbasak> I just verified that basic functionality works with -proposed enabled.
[11:50] <rick_h_> urulama: do you think jrwren_ has the bandwidth to look at verifying the two bugs today? ^
[11:51] <rick_h_> rbasak: rgr, will get someone on it
[11:51] <rbasak> rick_h_: thanks! I'm also going to look at the Juju bugs today, and hopefully we can get the update landed in Trusty very soon.
[11:51] <rbasak> I'm sorry this is so late. I got distracted by feature freeze issues for Utopic, and also have had to be away for a while.
[11:53] <rick_h_> rbasak: rgr, will get it done today
[11:54] <urulama> jrwren_: when you join, ping rick_h_ for quickstart issues, please
[12:02] <rick_h_> frankban: running a couple of min late
[12:02] <frankban> rick_h_: np
[12:31]  * frankban lunches
[12:31]  * rick_h_ goes to find breakfast now that morning calls are through 
[12:47] <fabrice> I have some question about mv views
[12:47] <fabrice> Is there a remove unit ?
[12:48] <rick_h_> fabrice: yes, you have to click onto the machine
[12:48] <rick_h_> fabrice: and then the units are listed out in the container column on the right
[12:48] <rick_h_> fabrice: and you can hover over the units and get a 'more menu' with a destroy
[12:49] <fabrice> kool the hover menu !
[12:49] <fabrice> I have found a bug I think also
[12:49] <fabrice> 2 in fact
[12:51] <fabrice> one more question: the change log does not indicate on which machine a unit will be placed
[12:51] <fabrice> is that intentional?
[12:52] <rick_h_> fabrice: yes, though it's a design/idea I've questioned as well
[12:53] <rick_h_> fabrice: the goal is to get some feedback. The idea is that it's showing things that are more important/direct to the environment and pocketbook (adding a new machine costs $$)
[12:53] <rick_h_> fabrice: and it's a bit less busy
[12:54] <fabrice> I found 3 bugs so I will play with launchpad now
[12:55] <rick_h_> fabrice: rgr make sure to check the kanban board (or I guess launchpad works) to make sure they're new vs existing ones 
[12:55] <fabrice> rick_h_: good suggestions :)
[13:29] <jcsackett> jujugui: can i get two reviews and qa on https://github.com/juju/juju-gui/pull/564 please?
[13:30] <kadams54> jcsackett: taking a look
[13:30] <jcsackett> thanks, kadams54.
[13:35] <jcsackett> rick_h_: i'm reviewing/qa-ing huw's service icon delete branch, and i'm a little confused. he's setting the service blocks in service view to blue border on delete, as when they're uncommitted deploys. is that really what we want to do?
[13:35] <jcsackett> s/delete/destroy
[13:36] <rick_h_> jcsackett: I asked about that. I asked him to check the designs for any note on if we have feedback on how to show that. 
[13:36] <rick_h_> jcsackett: /me looks at the branch to see if he calls any of that out
[13:36] <jcsackett> rick_h_: there's no indication of design docs, and i'm not seeing anything that looks like this.
[13:36] <rick_h_> jcsackett: rgr looking
[13:37] <rick_h_> luca...how dare you not be in my irc channel when I want to ping you
[13:37] <rick_h_> jcsackett: yea, I'm not a fan of that. I think this might be huw's best path to do *something* but not sure without asking
[13:37] <rick_h_> jcsackett: I think we need to push up to UX on this. 
[13:38] <rick_h_> jcsackett: ah, but I did mention to look at how we show a remove relation and that's how it's done
[13:38] <jcsackett> rick_h_: ok, i'll note as much in my qa notes and avoid stamping qa ok for now, and we'll harass luca when he's around.
[13:38] <jcsackett> rick_h_: oh, really?
[13:38] <rick_h_> jcsackett: I think this can go forward but we need to bring the inconsistency up with design. 
[13:39] <rick_h_> jcsackett: in a relation, an uncommitted relation is a grey line
[13:39] <rick_h_> but a removing one is a blue line with blue circle
[13:39] <kadams54> rick_h_: FYI, I checked all the destroy cards (all have my face on them) and they all seem to be local env only. That is, I couldn't replicate in EC2.
[13:39] <jcsackett> rick_h_: so maybe we move away from blue border for services.
[13:39] <rick_h_> kadams54: awesome, is there any hint as to the issue?
[13:39] <rick_h_> jcsackett: that seems like a big change to do :/
[13:39] <jcsackett> ok, i'll qa ok the branch and then email to bring up design discussion.
[13:39] <rick_h_> jcsackett: as far as general idea
[13:40] <jcsackett> rick_h_: right, i'm not advocating. :p
[13:40] <rick_h_> jcsackett: yea, let's go ahead and qa/land as is and hopefully there's a small follow up to tweak
[13:40] <rick_h_> based on design feedback
[13:41] <kadams54> rick_h_: It seems fairly likely that it's a problem in fakebackend.js or sandbox.js, but I've been striking out with everything I've looked at so far. None of the places where we explicitly call "db.machines.remove" are being invoked.
[13:42] <rick_h_> kadams54: rgr, ok mark them as low and we'll try to move the other stuff blocking release more first
[13:42] <rick_h_> kadams54: and take a look at the removal stuff as it'll show on jujucharms.com when we update it
[13:46] <rick_h_> kadams54: are you able to look at the 'saving configuration setting creates white box' issue next?
[13:46] <kadams54> rick_h_: sure
[13:52] <kadams54> jcsackett: https://github.com/juju/juju-gui/pull/564 looks good
[13:53] <lazyPower> rick_h_: wait... I have access to work with charm-admin? O_O
[13:56] <jcsackett> kadams54: thanks.
[13:59] <lazyPower> rick_h_: when you get about 5 that you can spare for me, ping me please.
[14:01] <rick_h_> lazyPower: what's up?
[14:01] <rick_h_> lazyPower: I've got 29min to spare atm
[14:01] <fabrice> rick_h_: I added comments in the bug you marked as incomplete
[14:02] <lazyPower> Hey, the email from 10 days ago is referencing a tool I can't access - charm-admin.  I thought the process was to file an rt-ticket, did I misunderstand?
[14:02] <rick_h_> fabrice: ty, appreciate the QA, I'll just bug you for more detailed bug filings to help those who follow afterwards
[14:02] <lazyPower> and wait, i've just earned my black belt in no-context-fu again
[14:02] <rick_h_> charm-admin, oh the script on the IS thing
[14:02] <lazyPower> woo and you get a gold star for picking up on my no-context clues
[14:02] <rick_h_> lazyPower: ok, so we don't have access to it, we need IS to run that on the machine running the charmstore
[14:02] <rick_h_> lazyPower: thus the RT
[14:02] <fabrice> rick_h_: hope the 2 other ones are filed with enough details
[14:03] <rick_h_> lazyPower: once that is done, we DO have access to remove from charmworld, as ~charmers can login and hit the button on there
[14:03] <lazyPower> ok I thought the RT was the appropriate move forward. I received an email from a community member wanting removal of their personal namespace charm as well - so we have 2 items in the queue for removal.
[14:03] <rick_h_> lazyPower: rgr
[14:03] <lazyPower> when i went back to look for the instructions, i saw the charm-admin command and insta-confused myself
[14:03]  * lazyPower doffs hat
[14:03] <lazyPower> you are a gentleman and a scholar
[14:03] <rick_h_> lazyPower: with the new charmstore stuff we'll have control of that so it'll get better
[14:04] <lazyPower> no worries, I just wanted to make sure i'm not opening tickets and waiting for nothing.
[14:06] <lazyPower> rick_h_: is it a problem if i CC you on the RT tickets, so when they are removed you get notice to nuke them from charmworld?
[14:06] <rick_h_> lazyPower: no prob at all
[14:06] <lazyPower> ta
[14:20] <fabrice> I have a question: the OS (precise or trusty) is not indicated in machine view. Was that discussed already?
[14:21] <rick_h_> fabrice: yes, I think there's a bug about that. The series isn't labeled because UX-wise it's normally just repeat info across the machines
[14:21] <rick_h_> fabrice: but there are times it's useful, there was talk of adding it and show/hiding via the more menu but it's something we've not decided
[14:24] <fabrice> It would be kool to have Network + Machine in a canvas view 
[14:25] <fabrice> Horizon displays networks like that, for example
[14:25] <fabrice> http://www.sebastien-han.fr/images/horizon-network-topology.jpg
[14:25] <rick_h_> fabrice: :) as juju supports various network ideas you can be sure we'll be thinking about showing network info
[14:26] <rick_h_> one might even say there should be a 'network view' to go with the 'service view' and 'machine view' 
[14:36] <fabrice> rick_h_: Added a comment for https://bugs.launchpad.net/juju-gui/+bug/1371127
[14:36] <mup> Bug #1371127: Able to commit a unit added to a machine without choosing the subcontainer <juju-gui:Invalid> <https://launchpad.net/bugs/1371127>
[14:37] <fabrice> rick_h_: I think this is an issue
[14:37] <rick_h_> fabrice: rgr ty
[14:39] <jrwren_> rick_h_: do you know anything more about https://bugs.launchpad.net/juju-core/+bug/1306537  I do not know how to get precise to be used at all.
[14:39] <rick_h_> jrwren_: deploy a precise charm?
[14:39] <hatch> that works here
[14:39] <hatch> I do it all the time
[14:40] <jrwren_> rick_h_: juju-gui used to be a precise charm and is now a trusty charm?
[14:40] <rick_h_> jrwren_: yes, it has both
[14:40] <rick_h_> you just specify which you want
[14:40] <rick_h_> juju deploy precise/juju-gui
[14:40] <jrwren_> quickstart just picks one, how to force?
[14:41] <jrwren_> --gui-charm-url maybe?
[14:41] <frankban_> jrwren_: yes, that should work
[14:41] <jrwren_> i'll try that. 
[14:41] <jrwren_> thanks frankban_ 
[14:41] <frankban_> jrwren_: otherwise, for example in ec2 where the GUI is colocated in the bootstrap node, if machine 0 is precise then the precise charm should be used
[14:41] <jrwren_> should short form charm url work?  cs:precise/juju-gui   ok?
[14:42] <frankban_> jrwren_: it should IIRC
[14:42] <jrwren_> frankban_: makes sense, I just did not know how to force it on a new bootstrap.
[14:42] <jrwren_> frankban_: thanks.
[14:42] <frankban_> yw
[14:43] <jrwren_> juju-quickstart: error: charm URL has invalid revision: gui  <-- I wonder if I should file a bug
[14:43] <frankban_> jrwren_: no, it's not a bug, now that I remember, when using a customized charm url, you need to specify the revision
[14:44] <jrwren_> ok, easy enough to use browser to find latest rev.
[14:48] <rbasak> jrwren_: re: bug 1306537, I think juju-gui deploys precise to run itself, doesn't it? So if "lsb_release -a" says on the juju gui machine that it's precise, then the bug is fixed.
[14:49] <rbasak> I think that's how the original bug triggered.
[14:49] <rbasak> (and thus breaking juju-quickstart)
[14:49] <jrwren_> rbasak: It did not deploy precise by default for me. It chose trusty.
[14:49] <rbasak> jrwren_: ah, perhaps that has changed now.
[14:49] <rick_h_> jrwren_: do you have a default series defined?
[14:49] <jrwren_> rick_h_: yes, precise.
[14:50] <rick_h_> I thought this bug was around what happened without a default series; we hit a bug in juju-core. So the thing now is that core's released a fix, we've released a fix, it might be hard to re-dupe
[14:52] <frankban_> yeah, IIRC that was mainly a core bug
[14:53] <rick_h_> jrwren_: so I think a fair thing to do here is to note that you cannot replicate the bug with this version of quickstart
[14:53] <rbasak> +1
[14:53] <rbasak> And I think an explanation is sufficient to then mark it verification-done.
[14:54] <rbasak> "Bug no longer exists because $reasons" is perfect for verification-done.
[14:54] <rick_h_> jujugui call in 7 kanban please
[14:55] <jrwren_> rick_h_: I did that, but I wanted to try forcing precise to make sure that it does not hang, and I did that successfully too.
[14:55] <rick_h_> jrwren_: awesome
[14:55] <rbasak> jrwren_: sorry, I think I've muddled things here.
[14:55] <rbasak> Looking again, the key bug was in Juju. juju-quickstart had a task, and you moved to Trusty from Precise, which also eliminated the bug.
[14:56] <rick_h_> rbasak: right
[14:56] <hatch> oh man lp not including the link in bug emails is incredibly frustrating 
[14:56] <rbasak> So I don't think you can reproduce without using both juju-core from !proposed and also juju-quickstart from !proposed.
[14:56] <rick_h_> hatch: it does have links, what bug email did you get without one?
[14:56] <rbasak> As long as proposed juju works with proposed juju-quickstart, we should be verification-done.
[14:56] <jrwren_> rbasak: hehehe
[14:57] <rbasak> I'll just c&p this explanation and mark verification done.
[14:58] <jcsackett> rick_h_: may be a moment late, helping ant out with an issue.
[14:58] <rbasak> jrwren_: done. Thanks for testing, and sorry for the confusion.
[14:58] <rick_h_> jujugui call in 2 prepare prepare
[14:59] <jrwren_> rbasak: no worries. Thanks for jumping in.
[15:00] <rick_h_> frankban_: ant__ ^
[15:30] <hatch> we are getting new cash regists^h^h^h^h^h speed cameras 
[15:40] <hatch> jujugui lf reviews and qa https://github.com/juju/juju-gui/pull/566
[15:41] <hatch> luca: I'm thinking that if the blue circle is turning yellow in mv it should also turn yellow on the service icons?
[15:42] <luca> hatch: yeah
[15:43] <hatch> it's not as noticeable because it just sits there instead of changing names or anything
[15:43] <hatch> but I'll add a card to get to at some point
[15:43] <hatch> Makyo: my current branch makes some changes to the ecs so you might want to take a peek at #566 just to make sure it's not going to conflict
[15:44] <hatch> and while you're there - you might as well review it :P
[15:46] <Makyo> hatch, will do
[16:24] <jrwren_> TIL: you can mismatch juju and juju-core package versions.
[16:24] <hatch> ohh yeah that happens
[16:25] <hatch> I dun that before
[16:25] <hatch> you get some weird errors
[16:29] <jrwren_> especially when core is 1.21 and quickstart can't read the version
[16:29] <jrwren_> the juju cmd was working, but doing weird things: juju version says 1.18, but it uses the 1.21 tools when bootstrapping.
[16:37] <jrwren_> rbasak: ping?
[16:37] <rbasak> jrwren_: pong
[16:37] <jrwren_> rbasak: https://bugs.launchpad.net/juju-quickstart/+bug/1309678  Tested with juju and juju-quickstart from proposed and maybe found a new bug :(
[16:38] <jrwren_> I think it is a fixed bug in newer juju, but not fixed in proposed?
[16:39] <jrwren_> rbasak: oh, i just realized you aren't on that bug. 
[16:40] <rbasak> jrwren_: does this new potential bug affect any Juju user hitting EC2 or OpenStack? Or is it really specific to the reproduction steps in this bug?
[16:41] <jrwren_> rbasak: afaik any ec2 user. I am unsure about openstack.
[16:47] <jrwren_> rbasak: wait, It may be because I am using python-websocket package from a juju stable ppa
[16:47] <rbasak> jrwren_: we need to find out if this package in trusty-proposed will regress users in Trusty.
[16:47] <jrwren_> rbasak: exactly what I'm making sure is not the case.
[16:47] <hatch> anyone else available for a review?
[16:47] <hatch> https://github.com/juju/juju-gui/pull/566
[16:47] <rbasak> jrwren_: thanks, you're ahead of me :)
[16:48] <hatch> jujugui ^
[16:48] <jrwren_> rbasak: would be a pretty nasty and obvious bug, so I think it is me.
[16:51] <jrwren_> rbasak: confirmed it was ME and not a real bug. Sorry about that. i should have figured it out sooner.
[16:52] <rbasak> jrwren_: no problem. Thank you for being diligent.
[16:52] <rbasak> (about flagging potential issues)
[16:52] <rbasak> Would rather have it that way round than push a bug to -updates :)
[16:52] <jrwren_> indeed.
[17:20] <hatch> rick_h_: you around yet?
[17:31] <rick_h_> hatch: just back
[17:31] <rick_h_> what's up?
[17:32] <hatch> rick_h_: the config 'pick original value' stuff - did we want them to do this at any time or only when there is a conflict?
[17:32] <rick_h_> hatch: only when there is a conflict
[17:32] <rick_h_> hatch: just that the conflict UI shows 3 values in the select box vs the 2 I think
[17:33] <hatch> alright - available for a preimp?
[17:33] <rick_h_> sure thing, standup room?
[17:33] <hatch> yup'
[18:14] <rick_h_> jcastro: what's the cross team thing next week? Should we be thinking of showing off machine view there?
[18:14] <jcastro> what cross team thing?
[18:15] <rick_h_> jcastro: the email you sent out about a cross team presentation next week
[18:15] <jcastro> oh that's the cloud cross team
[18:15] <rick_h_> oh it was canceled for this week nvm
[18:15] <jcastro> It's more high-level than specific tools
[18:15] <rick_h_> gotcha ok cool
[18:15] <rick_h_> just checking
[18:15] <jcastro> that's the once a month one
[18:22] <hatch> jcastro: since you're working hard on learning the tools - do you know if, on a multi proc instance, you can deploy two units to it and specify that each one gets one proc?
[18:23] <jcastro> I haven't tried that specifically
[18:23] <jcastro> but I don't see why that wouldn't work
[18:25] <hatch> how would you do it?
[18:26] <jcastro> machine constraints
[18:26] <hatch> juju deploy --to doesn't work with constraints, though
[18:26] <jcastro> how else?
[18:26] <jcastro> oh, I guess if we set the constraint beforehand?
[18:26] <rick_h_> jcastro: hatch http://www.cyberciti.biz/tips/setting-processor-affinity-certain-task-or-process.html I'd have the charm do it
[18:26] <rick_h_> jcastro: hatch and it'd have to be some sort of config on the charm which cpu to have affinity with
[18:27] <hatch> interesting - I'm working on a new blog post "easy horizontal scaling of SOA" so doing some research and not having much luck hah
[18:27] <hatch> rick_h_: that does seem like a cool technique....but I feel like juju should handle this
[18:27] <hatch> maybe kvm instances?
[18:27] <hatch> are they heavy?
[18:27]  * hatch knows little of kvm
[18:28] <rick_h_> hatch: right, but currently it doesn't really. I'm not sure how lxc containers get cpu time, but assume it's less manual than 'you get core 0, you get core 1'
[18:30] <hatch> if setting cpu affinity is as easy as it shows in that post then adding it to the charm separate from the users real service would be an acceptable workaround imho
[18:36] <Makyo> jujugui going to walk the dogs over lunch, stepping away for a bit.
[18:38] <jrwren_> what is the benefit?
[18:39] <hatch> jrwren_: say you had two services running on the same machine but one was more cpu intensive - you may still want to give the other one a full core to use regardless
[18:40] <jrwren_> I've never thought about affinity and that case.
[18:40] <hatch> that's essentially what you're doing when you get 2 ec2 smalls instead of a medium
[18:40] <hatch> but assume that you don't have a choice about the hardware
[18:41] <jrwren_> with Xen as your affinity system.
[18:42] <hatch> I'm not sure what you mean
[18:42] <jrwren_> the hypervisor manages it for you when you get 2 smalls instead of a medium.
[18:44] <hatch> right - but it would be nice if you could deploy 4 charms to a 4 core machine and assign each one a core
[18:44] <jrwren_> I see what you mean. That would be pretty cool.
[18:45] <jrwren_> it prevents starvation, but it also prevents allowing natural balancing of short bursts of usage >1
[18:45] <jrwren_> I'd want it profiled and a real-world case documented before I actually deployed a production service that way :)
[18:46] <jrwren_> hatch: you could use rlimits to do the same thing with memory.
[18:47] <jrwren_> hatch: actually, if you don't actually care about affinity, and just usage, you could do the whole thing with rlimits and continue to let the kernel execute the process wherever it wants.
[18:51] <jrwren_> hatch: you have me distracted thinking about this.
[18:54] <jrwren_> hatch: at first I was thinking choosing a good linux scheduler could help... but then i stumbled on http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/cgroups/cpusets.txt
[18:55] <jrwren_> so cgroups supports this. no idea if lxc utilizes them.
[18:55] <urulama> jrwren_, hatch: was just trying to suggest cgroups
[18:55] <jrwren_> looks like lxc has supported it for a couple of years.
[18:55] <urulama> jrwren_: lxc actually sits on top of cgroups
[18:55] <urulama> jrwren_: iirc
[18:55] <jrwren_> urulama: right. I didn't know if lxc used cpusets though.
[18:55] <rick_h_> yep
[18:56] <jrwren_> http://serverfault.com/questions/444232/limit-memory-and-cpu-with-lxc-execute
[18:56] <jrwren_> lxc-cgroup -n foo cpuset.cpus "0,3" 
[18:56] <urulama> jrwren_: nice, tnx
[18:56] <jrwren_> wow, it's all there and can be done at the lxc level instead of the per-process level. That is awesome.
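The runtime `lxc-cgroup` command quoted above can also be made persistent in the container's config file (a sketch; the container name `foo`, the core list, and the memory cap are placeholder values):

```
# /var/lib/lxc/foo/config -- pin the container to cores 0 and 3
lxc.cgroup.cpuset.cpus = 0,3
# a memory cap can be applied through the same cgroup mechanism
lxc.cgroup.memory.limit_in_bytes = 512M
```

With this in place the limits apply from container start, rather than being set after the fact with `lxc-cgroup`.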
[18:57] <urulama> jrwren_: did you just propose to make more detailed choices with MV? :D
[18:58] <jrwren_> urulama: nope. Just thinking out loud.
[18:59] <hatch> that's awesome
[18:59] <hatch> so now the question is whether juju would support passing those flags
[18:59] <hatch> unless it can be done from within the container
[18:59] <urulama> jrwren_: then i will :) allocate your charm per core per cpu. just a machine is not good enough :D
[19:00]  * urulama spent too much time playing with toys ... shall get serious now
[19:00] <hatch> hah - this has been really helpful :)
[19:01] <hatch> didn't even know where to start looking so apparently I was looking in the wrong spots
[19:01] <jrwren_> i'd be surprised if it can be done inside container.
[19:03] <jrwren_> i guess it would be if root inside the container is not restricted.
[19:04] <hatch> if it can't be then it would have to be functionality added to juju
[19:05] <urulama> juju would be awesome if something like OSv would be used (or implemented)
[19:05] <urulama> http://osv.io
[19:06] <jrwren_> urulama: ha!
[19:07] <urulama> that kind of virtualization and better vm io was something i was working on before joining canonical, but it would make things really useful in juju land
[19:10] <jrwren_> i kind of like the "ubuntu is our platform" approach to juju.
[19:10] <jrwren_> all these "kernel is our platform" VM/container systems are forgetting that all those system services are there for a reason.
[19:11] <jrwren_> but... i'm just an old curmudgeon sysadmin
[19:13] <hatch> haha no I am with you there too
[19:15] <urulama> sure, agree. just that "ubuntu" can be really small :D
[19:15] <hatch> yeah 58MB or something
[19:16] <jrwren_> it can be 58? really?
[19:17] <jrwren_> I thought cloudimg was about as small as it got.
[19:18] <hatch> https://plus.google.com/+JeffPihach/posts/id9zyd8CZsd
[19:19] <hatch> 63MB sorry
[19:22] <jrwren_> cool.
[19:22] <jrwren_> and it's not even bzip2 or xz! :)
[19:24] <jrwren_> 209M extracted, so this is almost cloudimg without a kernel?
[19:26] <hatch> I have no idea
[19:26] <hatch> haha
[19:29] <jrwren_> 15yrs ago I played with a redhat variant and its --excludedocs option to make a pretty minimal size core distro.
[19:30] <jrwren_> I can't remember why. I was targeting something with limited storage, but I don't remember what.
[19:35] <jrwren_> use better compression: 42M     ubuntu-core-14.04.1-core-amd64.tar.xz
[19:40] <hatch> haha
[19:51] <urulama> jrwren_, hatch: if interested, i think this is a good read 
[19:51] <urulama> https://lwn.net/Articles/524952/
[19:53] <urulama> and part2
[19:53] <urulama> https://plus.google.com/+OsvIo/posts/fgzsepcScTa
[19:53] <hatch> will check it out
[19:54] <jrwren_> same ideas as https://coreos.com AFAICT
[19:54] <hatch> and coreos has a bus with a gopher on it
[19:54] <urulama> jrwren_: yas
[19:54] <urulama> yes even :)
[19:55] <urulama> it's sometimes nice to remember all the abstractions that are going on within the "cloud" ... and not take them for granted
[19:55] <urulama> like these
[19:55] <urulama> orm/x86-server-virtualization-technology/
[19:55] <urulama> ah
[19:55] <urulama> http://www.cubrid.org/blog/dev-platform/x86-server-virtualization-technology/
[19:56] <jrwren_> head exploding. :)
[20:03] <urulama> that's always a good thing :)
[20:04] <hatch> you mean I can't just go to BestBuy and pick up a cloud and plug it in and go?
[20:05] <urulama> just buy some chemtrails spray and create one in your room, as big as you want :)
[20:17] <urulama> night all
[20:31] <jrwren_> head exploding more: http://www.openmirage.org
[20:35]  * rick_h_ steps away until evening AU calls
[22:18] <huwshimi> Morning
[22:21] <rick_h_> morning huwshimi 
[22:32] <huwshimi> rick_h_, hatch: Call time?
[22:32] <rick_h_> huwshimi: couple min late hang on
[22:32] <huwshimi> rick_h_: np
[22:32] <rick_h_> hatch: is going to be out for tonight
[22:32] <huwshimi> ok
[22:37] <rick_h_> huwshimi: joining now
[22:38] <huwshimi> rick_h_: on way
[23:08] <huwshimi> rick_h_: Great videos, nice to see things really working!
[23:14] <huwshimi> *video