[00:02] Makyo: yay video [00:02] looking cool [00:03] Makyo: do we have an account to upload the videos to? [00:04] * rick_h_ looks and notices it's the normal account [00:04] Makyo: ok cool, I'll check with the marketing and eco teams if there's a good 'juju home' we should get stuff towards. [00:04] rick_h_, I've been uploading them to my account and then passing the link on. I think there's a canonical account, though, can forward the video to...someone...? I don't know [00:04] Yeah, sounds good. [00:05] now we just need hatch to do the blog post matching something like that, get the ghost folks excited about it :) and almost there [00:06] I'll have the de-chirped version up in a few. [00:06] yeah I was thinking of releasing the post next week [00:07] hatch: well wait until thurs [00:07] hatch: really that's the wait for all of this [00:07] ok sounds good, lots of time [00:07] :P [00:07] procrastinator [00:07] though Makyo's video brought out one more bug to file :/ [00:08] hmm, though maybe not I guess, Makyo you used the flag to override the settings? [00:08] yep, see it in the url. Ok, no new bug yay! [00:14] rick_h_, yep [00:14] rick_h_, only thing I found was the X on the cookie warning didn't show up on this laptop. [00:14] Will try and repro [00:15] Makyo: k [00:15] I assumed it was hidden behind the onboarding or something [00:15] but didn't look too close [00:15] Oh, good point [00:38] https://www.youtube.com/watch?v=rEiwKLfzlX8 Updated video without the chirp [00:39] woot [00:39] Makyo: can you send that out to peeps please? [00:39] Sure thing [00:39] Makyo: and we'll work on pulling together our content and that way we've got a url down somewhere to remember [00:42] rick_h_, sounds good. Can forward the mp4 to anyone in marketing who needs it, too, if it belongs on another account [00:42] Makyo: rgr, thanks.
We probably won't know more about that until Sally gets back Monday [00:43] Yep, sounds good [01:22] I can't figure out a good way to do sorting. [01:22] huwshimi: the 0 padding doesn't help? [01:23] rick_h_: Oh, I actually don't know what that is :) [01:24] huwshimi: oh, that's what frankban was saying [01:24] he was suggesting that if the thing was 10new [01:24] and you had 1 [01:24] that if you padded it 01 and 10 they'd sort correctly [01:25] so basically fill in things with 0's so that they're similar and will sort as strings [01:25] so machine 0 turns into 00000 I guess. [01:26] * rick_h_ loads pr to look at frankban's comments again [01:26] rick_h_: Actually that gives me another idea, we could just add 1 to all ints, the numbers will still sort correctly and zero will become 1 so it will be truthy when compared against strings [01:27] huwshimi: right, but overall if things are just always strings, and you prefix the names with 0000's then they'll sort correctly, even things like 12 vs 13new [01:28] huwshimi: try both and see what 'works' to the eye test and we can go from there [01:28] oh I see what you mean now :) [01:29] huwshimi: so the longest one in the previous example was 5 characters long, so if we did '00000' + 1, 2, 3, 4, 5 they'd sort with 'new10' just fine [01:29] well, I guess that's four zeros needed to make them all 5 long, /me wonders if we even need to add more than one 0, or just add '0' + name [01:29] to get a string out of it [01:30] rick_h_: I guess we don't know how long a number someone might use though [01:30] huwshimi: right, but even if they use 12343243243543543 and we add a 0 to 1, 2, 3, we'd be sorting '01', '02', '03', and '012334...'
[01:31] huwshimi: ah right, so we'd need to use as many 0's as missing from the longest string [01:31] yeah [01:31] because otherwise we'd still have issues [01:31] yep [01:31] right, so we'd have to find the len of the longest one, and then do a longest-len(name) * '0' + name [01:31] well in python, have to convert that to JS [01:44] rick_h_: Actually the issue might not just be about zeros. It occurs if I add machines in this way too: [01:44] app.db.machines.add([{id: '3'}, {id: 'new3'}, {id: '10'}, {id: '10'}, {id: '2'}, {id: 'new1'}, {id: 'new11'}, {id: 'new42'}, {id: '1'}, {id: 'new21'}]); [01:45] something about the lowest number being after two strings? [01:46] huwshimi: right, so you need to update everything to be 5 chars long [01:46] so you need to make it [01:47] '00003', '0new3', '00010', '00010', '00002', '0new1', 'new42', '00001', 'new21' [01:47] and those should all sort properly [01:48] ouch [01:48] it's not as bad as it seems [01:49] rick_h_: It is because I need to figure out how to loop through everything and store the highest value inside the model comparator method :) [01:49] huwshimi: if it's heading off into the weeds for you feel free to put the card back with your branch and we can update it [01:50] huwshimi: yea, and it'll have to be updated as new machines come into play, so it really needs to be done at the time of the sort button being pressed [01:50] and can't be done ahead of time [01:50] yep [01:57] rick_h_: Can we actually have custom names at the moment? [01:58] If not then users can't currently add machines with names out of order [01:58] I'm just thinking I could land this as is with a follow up card to add the zero padding. [02:00] evening all [02:43] huwshimi: no, not yet. It's on the todo with some other stuff [02:43] huwshimi: but we have the issue of the sorting with 1 and 13 currently?
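The padding scheme being batted around here can be sketched quickly. This is a rough illustration in Python (the real comparator lives in the GUI's JS models; this just demonstrates the idea with the ids from huwshimi's example):

```python
def sort_machine_ids(ids):
    """Left-pad every machine id with '0' to the length of the longest id,
    so that plain string comparison orders '1', '13' and 'newN' ids sensibly."""
    width = max(len(i) for i in ids)
    return sorted(ids, key=lambda i: i.rjust(width, '0'))

print(sort_machine_ids(['3', 'new3', '10', '2', 'new1', 'new11', 'new42', '1', 'new21']))
# -> ['1', '2', '3', '10', 'new1', 'new3', 'new11', 'new21', 'new42']
```

As the conversation notes, the padding width depends on the current longest id, so it has to be recomputed at the time the sort runs rather than stored ahead of time.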
[02:43] huwshimi: but yea, just want to make sure we have things 'working' and the 0 was the issue atm [02:44] rick_h_: Is that the issue with new1 and new13? [02:44] not really, those can go at the end and that's not an issue as they'll come back with real numbers once committed [02:45] That's true [02:45] 0 and the 1 and 13 are the two issues currently [02:45] I'm not sure what the 1 and 13 issue is... [02:46] huwshimi: call? [02:46] sure! [02:46] standup url? [02:46] evening hatch [02:46] rick_h_: What's the standup url? [02:47] https://plus.google.com/hangouts/_/canonical.com/daily-standup?authuser=1 [02:47] thanks! [02:47] adjust the authuser to your accounts === uru_ is now known as urulama [08:04] mornin' all [08:13] morning rogpeppe1 [08:13] urulama: yo! [08:14] * urulama puts on a big golden chain, big hat ... sends yo back :D [10:58] morning everyone [10:59] rick_h_: fabrice will play with MV, he has dev gui up and running ... what's the link to the MV? [11:00] urulama: /:flags:/mv [11:00] fabrice: ^ [11:01] ty, rick_h_ [11:02] morning [11:02] what's the url to get to mv ? [11:03] I should read before typing :) [11:03] rick_h_: thanks [11:03] in fact time for a break === fabrice is now known as fabrice|lunch [11:46] rick_h_: o/ [11:46] I'm just catching up on landing Juju etc. in Trusty for 1.18.4. It's in -proposed, as is juju-quickstart. [11:46] rbasak: what's up? otp atm [11:47] The pending bugs for juju-quickstart are https://bugs.launchpad.net/juju-quickstart/+bug/1309678 and https://bugs.launchpad.net/juju-core/+bug/1306537. [11:47] Bug #1309678: a value is required for the control bucket field [11:47] Bug #1306537: LXC local provider fails to provision precise instances from a trusty host Released> [11:49] rbasak: looking [11:49] rick_h_: sorry I got distracted. [11:50] rbasak: so this is for utopic vs trusty?
[11:50] So we just need to check that the package in -proposed does fix these two bugs, and comment to explain the testing and mark verification-done. This is for Trusty. [11:50] I have a 1.18.4 in trusty-proposed now (probably didn't before). [11:50] rbasak: ok, I'll see if we can get these tested out. [11:50] I just verified that basic functionality works with -proposed enabled. [11:50] urulama: do you think jrwren_ has the bandwidth to look at verifying the two bugs today? ^ [11:51] rbasak: rgr, will get someone on it [11:51] rick_h_: thanks! I'm also going to look at the Juju bugs today, and hopefully we can get the update landed in Trusty very soon. [11:51] I'm sorry this is so late. I got distracted by feature freeze issues for Utopic, and also have had to be away for a while. [11:53] rbasak: rgr, will get it done today [11:54] jrwren_: when you join, ping rick_h_ for quickstart issues, please [12:02] frankban: running a couple of min late [12:02] rick_h_: np [12:31] * frankban lunches [12:31] * rick_h_ goes to find breakfast now that morning calls are through === fabrice|lunch is now known as fabrice [12:47] I have some questions about mv views [12:47] Is there a remove unit ? [12:48] fabrice: yes, you have to click on the machine [12:48] fabrice: and then the units are listed out in the container column on the right [12:48] fabrice: and you can hover over the units and get a 'more menu' with a destroy [12:49] kool the hover menu ! [12:49] I have found a bug I think also [12:49] 2 in fact [12:51] one more question: the change log does not indicate on which machine a unit will be placed [12:51] is that intentional? [12:52] fabrice: yes, though it's a design/idea I've questioned as well [12:53] fabrice: the goal is to get some feedback.
The idea is that it's showing things that are more important/direct to the environment and pocketbook (adding a new machine costs $$) [12:53] fabrice: and it's a bit less busy [12:54] I found 3 bugs so I will play with launchpad now [12:55] fabrice: rgr make sure to check the kanban board (or I guess launchpad works) to make sure they're new vs existing ones [12:55] rick_h_: good suggestions :) [13:29] jujugui: can i get two reviews and qa on https://github.com/juju/juju-gui/pull/564 please? [13:30] jcsackett: taking a look [13:30] thanks, kadams54. [13:35] rick_h_: i'm reviewing/qa-ing huw's service icon delete branch, and i'm a little confused. he's setting the service blocks in service view to blue border on delete, as when they're uncommitted deploys. is that really what we want to do? [13:35] s/delete/destroy [13:36] jcsackett: I asked about that. I asked him to check the designs for any note on if we have feedback on how to show that. [13:36] jcsackett: /me looks at the branch to see if he calls any of that out [13:36] rick_h_: there's no indication of design docs, and i'm not seeing anything that looks like this. [13:36] jcsackett: rgr looking [13:37] luca...how dare you not be in my irc channel when I want to ping you [13:37] jcsackett: yea, I'm not a fan of that. I think this might be huw's best path to do *something* but not sure without asking [13:37] jcsackett: I think we need to push up to UX on this. [13:38] jcsackett: ah, but I did mention to look at how we show a remove relation and that's how it's done [13:38] rick_h_: ok, i'll note as much in my qa notes and avoid stamping qa ok for now, and we'll harass luca when he's around. [13:38] rick_h_: oh, really? [13:38] jcsackett: I think this can go forward but we need to bring the inconsistency up with design.
[13:39] jcsackett: in a relation, an uncommitted relation is a grey line [13:39] but a removing one is a blue line with blue circle [13:39] rick_h_: FYI, I checked all the destroy cards (all have my face on them) and they all seem to be local env only. That is, I couldn't replicate in EC2. [13:39] rick_h_: so maybe we move away from blue border for services. [13:39] kadams54: awesome, is there any hint as to the issue? [13:39] jcsackett: that seems like a big change to do :/ [13:39] ok, i'll qa ok the branch and then email to bring up design discussion. [13:39] jcsackett: as far as general idea [13:40] rick_h_: right, i'm not advocating. :p [13:40] jcsackett: yea, let's go ahead and qa/land as is and hopefully there's a small follow up to tweak [13:40] based on design feedback [13:41] rick_h_: It seems fairly likely that it's a problem in fakebackend.js or sandbox.js, but I've been striking out with everything I've looked at so far. None of the places where we explicitly call "db.machines.remove" are being invoked. [13:42] kadams54: rgr, ok mark them as low and we'll try to move the other stuff blocking release more first [13:42] kadams54: and take a look at the removal stuff as it'll show on jujucharms.com when we update it [13:46] kadams54: are you able to look at the 'saving configuration setting creates white box' issue next? [13:46] rick_h_: sure [13:52] jcsackett: https://github.com/juju/juju-gui/pull/564 looks good [13:53] rick_h_: wait... I have access to work with charm-admin? O_O [13:56] kadams54: thanks. [13:59] rick_h_: when you get about 5 that you can spare for me, ping me please. [14:01] lazyPower: what's up? [14:01] lazyPower: I've got 29min to spare atm [14:01] rick_h_: I added comments in the bug you marked as incomplete [14:02] Hey, the email from 10 days ago is referencing a tool I can't access - charm-admin. I thought the process was to file an rt-ticket, did I misunderstand?
[14:02] fabrice: ty, appreciate the QA, just will bug for more detailed bug filings to help those that follow afterwards [14:02] and wait, i've just earned my black belt in no-context-fu again [14:02] charm-admin, oh the script on the IS thing [14:02] woo and you get a gold star for picking up on my no-context clues [14:02] lazyPower: ok, so we don't have access to it, we need IS to run that on the machine running the charmstore [14:02] lazyPower: thus the RT [14:02] rick_h_: hope the 2 other ones are filed with enough details [14:03] lazyPower: once that is done, we DO have access to remove from charmworld, as ~charmers can login and hit the button on there [14:03] ok I thought the RT was the appropriate move forward. I received an email from a community member wanting removal of their personal namespace charm as well - so we have 2 items in the queue for removal. [14:03] lazyPower: rgr [14:03] when i went back to look for the instructions, i saw the charm-admin command and insta-confused myself [14:03] * lazyPower doffs hat [14:03] you are a gentleman and a scholar [14:03] lazyPower: with the new charmstore stuff we'll have control of that so it'll get better [14:04] no worries, I just wanted to make sure i'm not opening tickets and waiting for nothing. [14:06] rick_h_: is it a problem if i CC you on the RT tickets, so when they are removed you get notice to nuke them from charmworld? [14:06] lazyPower: no prob at all [14:06] ta [14:20] I have a question: the OS is not indicated (precise or trusty) in machine view, was it discussed already? [14:21] fabrice: yes, I think there's a bug about that.
The series isn't labeled because UX-wise it's normally just repeat info across the machines [14:21] fabrice: but there are times it's useful, there was talk of adding it and show/hiding via the more menu but it's something we've not decided [14:24] It would be kool to have Network + Machine in a canvas view [14:25] horizon displays the network like that for example [14:25] http://www.sebastien-han.fr/images/horizon-network-topology.jpg [14:25] fabrice: :) as juju supports various network ideas you can be sure we'll be thinking about showing network info [14:26] one might even say there should be a 'network view' to go with the 'service view' and 'machine view' [14:36] rick_h_: Added a comment for https://bugs.launchpad.net/juju-gui/+bug/1371127 [14:36] Bug #1371127: Able to commit a unit added to a machine without choosing the subcontainer [14:37] rick_h_: I think this is an issue [14:37] fabrice: rgr ty [14:39] rick_h_: do you know anything more about https://bugs.launchpad.net/juju-core/+bug/1306537 I do not know how to get precise to be used at all. [14:39] Bug #1306537: LXC local provider fails to provision precise instances from a trusty host Released> [14:39] jrwren_: deploy a precise charm? [14:39] that works here [14:39] I do it all the time [14:40] rick_h_: juju-gui used to be a precise charm and is now a trusty charm? [14:40] jrwren_: yes, it has both [14:40] you just specify which you want [14:40] juju deploy precise/juju-gui [14:40] quickstart just picks one, how to force? [14:41] --gui-charm-url maybe? [14:41] jrwren_: yes, that should work [14:41] i'll try that. [14:41] thanks frankban_ [14:41] jrwren_: otherwise, for example in ec2 where the GUI is colocated in the bootstrap node, if machine 0 is precise then the precise charm should be used [14:41] should short form charm url work? cs:precise/juju-gui ok? [14:42] jrwren_: it should IIRC [14:42] frankban_: makes sense, I just did not know how to force it on a new bootstrap. [14:42] frankban_: thanks.
[14:42] yw [14:43] juju-quickstart: error: charm URL has invalid revision: gui <-- I wonder if I should file a bug [14:43] jrwren_: no, it's not a bug, now that I remember, when using a customized charm url, you need to specify the revision [14:44] ok, easy enough to use browser to find latest rev. [14:48] jrwren_: re: bug 1306537, I think juju-gui deploys precise to run itself, doesn't it? So if "lsb_release -a" says on the juju gui machine that it's precise, then the bug is fixed. [14:48] Bug #1306537: LXC local provider fails to provision precise instances from a trusty host Released> [14:49] I think that's how the original bug triggered. [14:49] (and thus breaking juju-quickstart) [14:49] rbasak: It did not deploy precise by default for me. It chose trusty. [14:49] jrwren_: ah, perhaps that has changed now. [14:49] jrwren_: do you have a default series defined? [14:49] rick_h_: yes, precise. [14:50] I thought this bug was around what happened without a default series; we ran into a bug with juju-core. So the thing now is that core's released a fix, we've released a fix, it might be hard to repro [14:52] yeah, IIRC that was mainly a core bug [14:53] jrwren_: so I think a fair thing to do here is to note that you cannot replicate the bug with this version of quickstart [14:53] +1 [14:53] And I think an explanation is sufficient to then mark it verification-done. [14:54] "Bug no longer exists because $reasons" is perfect for verification-done. [14:54] jujugui call in 7 kanban please [14:55] rick_h_: I did that, but I wanted to try forcing precise to make sure that it does not hang, and I did that successfully too. [14:55] jrwren_: awesome [14:55] jrwren_: sorry, I think I've muddled things here. [14:55] Looking again, the key bug was in Juju. juju-quickstart had a task, and you moved to Trusty from Precise, which also eliminated the bug.
[14:56] rbasak: right [14:56] oh man lp not including the link in bug emails is incredibly frustrating [14:56] So I don't think you can reproduce without using both juju-core from !proposed and also juju-quickstart from !proposed. [14:56] hatch: it does have links, what bug email did you get without one? [14:56] As long as proposed juju works with proposed juju-quickstart, we should be verification-done. [14:56] rbasak: hehehe [14:57] I'll just c&p this explanation and mark verification done. [14:58] rick_h_: may be a moment late, helping ant out with an issue. [14:58] jrwren_: done. Thanks for testing, and sorry for the confusion. [14:58] jujugui call in 2 prepare prepare [14:59] rbasak: no worries. Thanks for jumping in. [15:00] frankban_: ant__ ^ === fabrice is now known as fabrice|familyti [15:30] we are getting new cash regists^h^h^h^h^h speed cameras [15:40] jujugui lf reviews and qa https://github.com/juju/juju-gui/pull/566 [15:41] luca: I'm thinking that if the blue circle is turning yellow in mv it should also turn yellow on the service icons? [15:42] hatch: yeah [15:43] it's not as noticeable because it just sits there instead of changing names or anything [15:43] but I'll add a card to get to at some point [15:43] Makyo: my current branch makes some changes to the ecs so you might want to take a peek at #566 just to make sure it's not going to conflict [15:44] and while you're there - you might as well review it :P [15:46] hatch, will do === fabrice is now known as fabrice|family [16:24] TIL: you can mismatch juju and juju-core package versions. [16:24] ohh yeah that happens [16:25] I dun that before [16:25] you get some weird errors [16:29] especially when core is 1.21 and quickstart can't read the version [16:29] juju cmd was working. doing weird things like juju versions says 1.18, but using 1.21 tools when bootstrapping. [16:37] rbasak: ping?
[16:37] jrwren_: pong [16:37] rbasak: https://bugs.launchpad.net/juju-quickstart/+bug/1309678 Tested with juju and juju-quickstart from proposed and maybe found a new bug :( [16:37] Bug #1309678: a value is required for the control bucket field [16:38] I think it is a fixed bug in newer juju, but not fixed in proposed? [16:39] rbasak: oh, i just realized you aren't on that bug. [16:40] jrwren_: does this new potential bug affect any Juju user hitting EC2 or OpenStack? Or is it really specific to the reproduction steps in this bug? [16:41] rbasak: afaik any ec2 user. I am unsure about openstack. [16:47] rbasak: wait, it may be because I am using the python-websocket package from a juju stable ppa [16:47] jrwren_: we need to find out if this package in trusty-proposed will regress users in Trusty. [16:47] rbasak: exactly what I'm making sure is not the case. [16:47] anyone else available for a review? [16:47] https://github.com/juju/juju-gui/pull/566 [16:47] jrwren_: thanks, you're ahead of me :) [16:48] jujugui ^ [16:48] rbasak: would be a pretty nasty and obvious bug, so I think it is me. [16:51] rbasak: confirmed it was ME and not a real bug. Sorry about that. i should have figured it out sooner. [16:52] jrwren_: no problem. Thank you for being diligent. [16:52] (about flagging potential issues) [16:52] Would rather have it that way round than push a bug to -updates :) [16:52] indeed. [17:20] rick_h_: you around yet? [17:31] hatch: just back [17:31] what's up? [17:32] rick_h_: the config 'pick original value' stuff - did we want them to do this at any time or only when there is a conflict? [17:32] hatch: only when there is a conflict [17:32] hatch: just that the conflict UI shows 3 values in the select box vs the 2 I think [17:33] alright - able for a preimp? [17:33] sure thing, standup room? [17:33] yup [18:14] jcastro: what's the cross team thing next week? Should we be thinking of showing off machine view there? [18:14] what cross team thing?
[18:15] jcastro: the email you sent out about a cross team presentation next week [18:15] oh that's the cloud cross team [18:15] oh it was canceled for this week nvm [18:15] It's more high-level than specific tools [18:15] gotcha ok cool [18:15] just checking [18:15] that's the once a month one [18:22] jcastro: since you're working hard on learning the tools - do you know, if you have a multi proc instance, whether you can deploy two units to it and specify that each one gets one proc? [18:23] I haven't tried that specifically [18:23] but I don't see why that wouldn't work [18:25] how would you do it? [18:26] machine constraints [18:26] juju deploy --to doesn't also work with constraints [18:26] how else? [18:26] oh, I guess if we set the constraint beforehand? [18:26] jcastro: hatch http://www.cyberciti.biz/tips/setting-processor-affinity-certain-task-or-process.html I'd have the charm do it [18:26] jcastro: hatch and it'd have to be some sort of config on the charm which cpu to have affinity with [18:27] interesting - I'm working on a new blog post "easy horizontal scaling of SOA" so doing some research and not having much luck hah [18:27] rick_h_: that does seem like a cool technique....but I feel like juju should handle this [18:27] maybe kvm instances? [18:27] are they heavy? [18:27] * hatch knows little of kvm [18:28] hatch: right, but currently it doesn't really. I'm not sure how lxc containers get cpu time, but assume it's less manual than 'you get core 0, you get core 1' [18:30] if setting cpu affinity is as easy as it shows in that post then adding it to the charm separate from the user's real service would be an acceptable workaround imho [18:36] jujugui going to walk the dogs over lunch, stepping away for a bit. [18:38] what is the benefit?
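For reference, the kind of pinning the taskset article above describes can also be done from Python's standard library on Linux. A minimal sketch (the choice of core 0 is purely illustrative; a charm would presumably read the core number from its config, as rick_h_ suggests):

```python
import os

def pin_to_cpu(cpu):
    """Pin the calling process (pid 0 = ourselves) to a single CPU core.
    Same effect as `taskset -pc <cpu> $$`; child processes inherit it.
    Linux-only: os.sched_setaffinity is not available on other platforms."""
    os.sched_setaffinity(0, {cpu})
    return os.sched_getaffinity(0)

print(pin_to_cpu(0))
```

A charm hook could run something like this (or shell out to taskset) for the service it manages, which is the workaround being discussed.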
[18:39] jrwren_: say you had two services running on the same machine but one was more cpu intensive - you may still want to give the other one a full core to use regardless [18:40] I've never thought about affinity and that case. [18:40] that's essentially what you're doing when you get 2 ec2 smalls instead of a medium [18:40] but assume that you don't have a choice about the hardware [18:41] with Xen as your affinity system. [18:42] I'm not sure what you mean [18:42] the hypervisor manages it for you when you get 2 smalls instead of a medium. [18:44] right - but it would be nice if you could deploy 4 charms to a 4 core machine and assign each one a core [18:44] I see what you mean. That would be pretty cool. [18:45] it prevents starvation, but it also prevents allowing natural balancing of short bursts of usage >1 [18:45] I'd want it profiled and a real world case documented before I actually deployed a production service that way :) [18:46] hatch: you could use rlimits to do the same thing with memory. [18:47] hatch: actually, if you don't actually care about affinity, and just usage, you could do the whole thing with rlimits and continue to let the kernel execute the process wherever it wants. [18:51] hatch: you have me distracted thinking about this. [18:54] hatch: at first I was thinking choosing a good linux scheduler could help... but then i stumbled on http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/cgroups/cpusets.txt [18:55] so cgroups supports this. no idea if lxc utilizes them. [18:55] jrwren_, hatch: was just trying to suggest cgroups [18:55] looks like lxc has supported it for a couple of years. [18:55] jrwren_: lxc actually sits on top of cgroups [18:55] jrwren_: iirc [18:55] urulama: right. I didn't know if lxc used cpusets though.
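The rlimits approach jrwren_ mentions is also reachable from Python's stdlib. A hedged sketch, capping total CPU seconds for the current process (the 60/120 values are arbitrary for illustration; the kernel sends SIGXCPU at the soft limit and kills the process at the hard one):

```python
import resource

def cap_cpu_time(soft, hard):
    """Apply an RLIMIT_CPU rlimit, in CPU-seconds, to this process.
    Children inherit it; the kernel enforces it wherever they're scheduled."""
    resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))
    return resource.getrlimit(resource.RLIMIT_CPU)

print(cap_cpu_time(60, 120))
# -> (60, 120)
```

Swapping RLIMIT_CPU for RLIMIT_AS would cap address-space (memory) usage the same way, which is the memory case raised above. Note this limits usage, not placement, matching jrwren_'s point about letting the kernel schedule the process wherever it wants.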
[18:55] yep [18:56] http://serverfault.com/questions/444232/limit-memory-and-cpu-with-lxc-execute [18:56] lxc-cgroup -n foo cpuset.cpus "0,3" [18:56] jrwren_: nice, tnx [18:56] wow, it's all there and can be done at the lxc level instead of per process level. That is awesome. [18:57] jrwren_: did you just propose to make more detailed choices with MV? :D [18:58] urulama: nope. Just thinking out loud. [18:59] that's awesome [18:59] so now the question is whether juju would support passing those flags [18:59] unless it can be done from within the container [18:59] jrwren_: then i will :) allocate your charm per core per cpu. just a machine is not good enough :D [19:00] * urulama spent too much time playing with toys ... shall get serious now [19:00] hah - this has been really helpful :) [19:01] didn't even know where to start looking so apparently I was looking in the wrong spots [19:01] i'd be surprised if it can be done inside container. [19:03] i guess it would be if root inside the container is not restricted. [19:04] if it can't be then it would have to be functionality added to juju [19:05] juju would be awesome if something like OSv would be used (or implemented) [19:05] http://osv.io [19:06] urulama: ha! [19:07] such virtualization and better vm io was something i was working on before joining canonical, but it would make things really useful in juju land [19:10] i kind of like the "ubuntu is our platform" approach to juju. [19:10] all these "kernel is our platform" VM/container systems are forgetting that all those system services are there for a reason. [19:11] but... i'm just an old curmudgeon sysadmin [19:13] haha no I am with you there too [19:15] sure, agree. just that "ubuntu" can be really small :D [19:15] yeah 58MB or something [19:16] it can be 58? really? [19:17] I thought cloudimg was about as small as it got. [19:18] https://plus.google.com/+JeffPihach/posts/id9zyd8CZsd [19:19] 63MB sorry [19:22] cool. [19:22] and it's not even bzip2 or xz!
:) [19:24] 209M extracted, so this is almost cloudimg without a kernel? [19:26] I have no idea [19:26] haha [19:29] 15yrs ago I played with a redhat variant and its --excludedocs option to make a pretty minimal size core distro. [19:30] I can't remember why. I was targeting something with limited storage, but I don't remember what. [19:35] use better compression: 42M ubuntu-core-14.04.1-core-amd64.tar.xz [19:40] haha [19:51] jrwren_, hatch: if interested, i think this is a good read [19:51] https://lwn.net/Articles/524952/ [19:53] and part2 [19:53] https://plus.google.com/+OsvIo/posts/fgzsepcScTa [19:53] will check it out [19:54] same ideas as https://coreos.com AFAICT [19:54] and coreos has a bus with a gopher on it [19:54] jrwren_: yas [19:54] yes even :) [19:55] it's sometimes nice to remember all the abstractions that are going on within the "cloud" ... and not take them for granted [19:55] like these [19:55] orm/x86-server-virtualization-technology/ [19:55] ah [19:55] http://www.cubrid.org/blog/dev-platform/x86-server-virtualization-technology/ [19:56] head exploding. :) [20:03] that's always a good thing :) [20:04] you mean I can't just go to BestBuy and pick up a cloud and plug it in and go? [20:05] just buy some chemtrails spray and create one in your room, as big as you want :) [20:17] night all === urulama is now known as urulama-afk [20:31] head exploding more: http://www.openmirage.org [20:35] * rick_h_ steps away until evening AU calls [22:18] Morning [22:21] morning huwshimi [22:32] rick_h_, hatch: Call time? [22:32] huwshimi: couple min late hang on [22:32] rick_h_: np [22:32] hatch: is going to be out for tonight [22:32] ok [22:37] huwshimi: joining now [22:38] rick_h_: on way [23:08] rick_h_: Great videos, nice to see things really working! [23:14] *video