[00:00] so in the sandbox with the simulator I've got two mysqls in one lxc container mysql/0 and mysql/5
[00:01] huwshimi: if you get a sec toss your card on the board please
[00:02] rick_h_: Oh yes, sure.
[00:02] huwshimi: thanks, will try to look after we settle this fire
[00:02] rick_h_: It's all good. Let me know if I can help with anything.
[00:06] running tests
[00:07] hatch: rgr
[00:08] hatch, rick_h_ just curious, what's stopping us from using LXCs? I've deployed to them on EC2 before; is this an openstack thing?
[00:08] Makyo: yes, the demo hardware is openstack running on openstack
[00:08] Makyo: not openstack on bare metal via maas like we thought
[00:08] Boo, so no containers?
[00:08] Ah, crap.
[00:08] Makyo: yep, none at all, lxc or kvm
[00:08] That's where I was confused
[00:09] so worst worst case we swap the drop target to the create-new-machine header and just deploy a new service on a new machine via machine view, but if we can colocate on bare metal we can at least demo that as closer to what we want to show
[00:11] ok rick_h_ it's up
[00:11] hatch: rgr, updating my two envs
[00:14] distfile building
[00:14] dave in #juju thinks this won't work as --to is a hack and the api won't support it like we're trying to use it
[00:15] lol
[00:15] Can we just be pretty?
[00:15] hatch: Makyo so probably be prepared to just try to create a new machine. I think we can move the drop target to the machine one, use the last commit, and just remove the machine parent?
[00:16] to deploy a new unit on a new machine (no parent machine id) and then, once that machine is up, bring up the unit
[00:16] * rick_h_ is still holding out hope as these damn distfiles build
[00:16] or what do you guys think?
[00:17] so drop on the new machine header?
[00:17] I thought the colo was the big deal?
[00:17] rick_h_, if that works, I'm all for it. It sounds like a valid path, too. "Let's deploy Nagios, just drag it onto a new machine..."
[00:17] hatch: I did too until the last 30min
[00:18] hatch: I'm cranky this isn't working out like we wanted, I'll be honest
[00:19] Makyo: yea, that's my thought. It's a small step code-wise from where we were with lxc
[00:19] good thing I saved that code
[00:19] lol
[00:19] * hatch planned for this
[00:19] haha
[00:19] hatch: yea, thank you git
[00:19] ok I'll make the machine header the droppable one
[00:19] hatch: lol, plan for "the issue is going to be"
[00:20] haha
[00:20] hatch: thanks, I'll still work on this qa here just to be sure
[00:20] damn this distfile stuff is slow when you're watching it
[00:21] oh npm wheee
[00:21] I'm going to nuke that damn site one of these days
[00:21] var container = db.machines.add({id: 'Hi Openstack!'});
[00:22] kehehehe
[00:22] lol
[00:22] just don't make it "Fuuuuuuuuuuuuuuuuuuuuuuuuuu"
[00:23] do you know what the container type for bare metal is?
[00:23] null? Makyo any idea?
[00:24] I'll try undefined and see if it breaks
[00:24] no container type
[00:24] No parentId, no containerType
[00:24] hatch: rm the attr?
[00:24] coo
[00:24] yeah
[00:24] k
[00:25] hehe apparently it doesn't like my machine name :D
[00:25] lol
[00:27] Problem I'm getting now (probably fixed by having MS log in beforehand) is that on login, it displays a broken GUI rather than the login screen
[00:27] Until you refresh
[00:27] Then it offers the login screen
[00:27] ok it's working but not showing the icon when it first creates the token
[00:27] Oh, flags.
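A minimal sketch of the bare-metal placement settled on above, assuming the sandbox database exposed as db in the chat: a bare-metal machine simply omits the parentId and containerType attributes mentioned at 00:24. The id values and the contrasting lxc record are illustrative, not the GUI's actual handler code.

    // Bare metal: no parentId, no containerType on the record at all.
    var machine = db.machines.add({
      id: '5'  // a plain top-level machine id
    });

    // For contrast, the lxc container we can no longer demo would carry both:
    var container = db.machines.add({
      id: '5/lxc/0',
      parentId: '5',
      containerType: 'lxc'
    });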
[00:28] I had flags in the URL before loading login
[00:31] So, fwiw, everything works on EC2 if I just create a new machine with the unplaced unit
[00:31] Makyo: ?
[00:31] rick_h_, if I drag the unplaced unit to a new machine (what I'd guessed was the path, correct me if I'm wrong)
[00:31] Makyo: ok, this is with hatch's latest?
[00:32] rick_h_, I think so, within the last 15 minutes.
[00:32] that shouldn't work heh, maybe you're on the version HEAD~2?
[00:32] Makyo: yea, what's version.js say?
[00:33] hatch: Makyo let's hop in a hangout?
[00:33] fire me a link, will be in in a min
[00:33] just need to get the dogs in
[00:33] k
[00:33] acb5d2a
[00:33] https://plus.google.com/hangouts/_/g4qxjkx7wpe5uiwrl7eahe6fb4a?hl=en
[00:34] Makyo: ok, so that's the latest "colocates on primary machine"
[00:35] jcsackett: Did you implement the units on machines/containers? It might be our simulator, but it never shows more than one unit on a container/bare metal, even if there are more units showing for the machine than there are containers: https://dl.dropboxusercontent.com/u/1826926/units.png
[00:36] If we're colocating, we might need to fix that if it's not a simulator issue
[00:40] Oh, maybe our unit mapping is wrong, the extra units have a different machine id
[00:43] huwshimi: yea, we'll have to look into it. Something's not right. I noticed it when doing the mongodb bundle
[00:54] oh, it's matching the first part of the id, so a unit.machine = 12 will render on machine.id=1 and unit.machine = 25 will render on machine.id=2
[00:55] huwshimi: heh, that seems ungood :)
[00:55] Yeah... :)
[00:55] good catch
[00:56] ok, bootstrapped new lxc and ec2 envs on my desktop so I can throw moar power at the problem!
[01:04] ok the machine token is being created before the unit is assigned
[01:04] that's why it's not rendering properly
[01:05] so when the machine is assigned we're unlazy-updating-relazying?
[01:05] can the machine token watch the unit for changes?
[01:05] and update accordingly?
[01:05] the unfortunately named '_smartUpdateList' isn't very smart
[01:05] lol
[01:05] and falls over when called manually
[01:06] I'm not sure why it fails
[01:06] maybe I can manually clear it out
[01:07] so updateMachines and smartUpdateList appear to only be for the unplaced unit list
[01:08] oh never mind
[01:08] misreading that
[01:08] hatch: so if you call updateMachines it won't work?
[01:08] _updateMachines that is
[01:08] nope, just trying to look into why
[01:09] hatch: what's _updateMachineWithUnitData for?
[01:09] the unit doesn't yet know what machine it's being assigned to
[01:09] so that doesn't help
[01:09] so it's passed an empty list
[01:10] hatch: when we place the unit we don't know the machine to updateMachineWithUnitData?
[01:11] hatch: what I'm getting at is that it's broken for a split second, then updates
[01:18] hatch: want to push up what you've got and I can pull down and help poke at it?
[01:20] yeah one sec
[01:20] k thx
[01:23] I'm pretty much back to https://github.com/hatched/juju-gui/commit/2f7a580bb5c0b81e154438422378b95d3adb2d7a
[01:23] it's the closest I've gotten
[01:24] hatch: cool, update the pr and I'll pull down and stab at it some
[01:25] GOT IT
[01:26] fffffuuuuuuuuuuuu
[01:26] hah
[01:26] rick_h_
[01:26] I'll clean this mess up and push it up
[01:26] huh?
[01:27] I got it
[01:27] it renders the icon
[01:28] woot
[01:28] it doesn't render the container though until after deploying
[01:29] hatch: Makyo stand down, we've been bumped
[01:29] But....I.....Just.....
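The unit mapping bug huwshimi spots at 00:54 boils down to a prefix match on machine ids. A rough sketch of the broken check next to a stricter one that compares only the top-level machine id; the function names and the filter approach are illustrative assumptions, not the GUI's actual helpers.

    // Buggy: prefix match, so unit.machine '12' wrongly lands on machine '1'.
    function unitsForMachineBuggy(units, machineId) {
      return units.filter(function(unit) {
        return String(unit.machine).indexOf(String(machineId)) === 0;
      });
    }

    // Stricter: compare the top-level id exactly; '12' no longer matches '1',
    // while a unit placed on container '1/lxc/0' still maps to machine '1'.
    function unitsForMachine(units, machineId) {
      return units.filter(function(unit) {
        return String(unit.machine).split('/')[0] === String(machineId);
      });
    }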
[01:29] I even just got the container to render
[01:29] hatch: https://plus.google.com/hangouts/_/canonical.com/daily-standup?authuser=1
[01:30] rick_h_ we r in
[01:35] juju gui machine view demo killed, chat with me if you've got any questions.
[01:36] rick_h_: :(
[01:36] huwshimi: actually jump in ^ for a minute please
[01:37] rick_h_ https://github.com/hatched/juju-gui/tree/unit-drop-n-place-machine
[01:38] that one, when dropping the unplaced unit on the machine header, creates a machine token and a container for the bare metal
[01:41] hatch: thanks
[01:41] hatch: make it a pr for me please
[01:41] hatch: just mark it WIP so I can see the diff/qa-pr it easier
[01:42] thanks man, see you all later in the week
[01:42] rick_h_: Cheers man, have a good one.
[01:42] rick_h_ https://github.com/juju/juju-gui/pull/318
[02:27] * huwshimi has lunch
[11:55] morning
[11:57] o/
[12:15] morning
[12:25] * rick_h_ takes the boy off to day care, biab
[12:28] ooo, google chat has circle icons now
[12:49] ooh, fancy
[12:49] * rick_h_ is back
[12:53] morning all.
[12:57] jcsackett: can you check out huw's branches this morning please?
[12:57] rick_h_: already looking.
[12:57] jcsackett: and take over if necessary, he's out this week I believe
[12:57] rick_h_: got it.
[12:57] jcsackett: ty much, I'll take a peek at yours here. Jeff's will hold for a bit
[12:57] rick_h_: thanks.
[12:58] redir: let's set up a chat later today to go through your doc comments. Are you still going through the doc or are you set now?
[13:16] rick_h_: i'll go back through it some more. and will draw some pictures from my notes.
[13:16] redir: ok cool
[13:16] redir: when you're set and ready then let's set up a call to walk through it
[13:16] reading the store code and setting up bzr branches under go -- which is new to me
[13:16] rick_h_: cool will do
[13:17] redir: heh, yea welcome to the dual dvcs life :)
[13:21] jcsackett: i'd like to get your feedback on my local charm ingestion branch when you have some time. lp:~bac/charmworld/ingest-local-charms
=== BradCrittenden is now known as bac
[13:21] bac: sure.
[13:21] jujugui: i've just discovered i'm not part of ~juju-gui-peeps on launchpad anymore. assuming that's still the group to be in, can someone add me?
[13:21] jcsackett: ping me when you are free. call it a mid-flight, pre-impy thing
[13:22] bac: i'll ping you when i finish looking over huw's branch.
[13:22] jcsackett: will do
[13:23] jcsackett: added
[13:23] rick_h_: thanks.
[13:23] go ~yellow
[13:23] :)
[13:43] bac: looking at a diff of yours now. you want to hangout when i've read it?
[13:44] yes, please
[13:47] bac: this is incorporating the work you've been doing for the demo into the charmworld base, yeah?
[13:47] y
[13:48] awesome.
[13:48] one sec, lemme find a headset and i'll send you a hangout.
[13:53] bac: https://plus.google.com/hangouts/_/gwcncgfm72mxid5ajoocxnc2cua?authuser=2&hl=en
[13:57] sorry jon, i'd gone to get a tea
[13:57] joining
[13:58] jcsackett: ^^
[13:59] bac: can I bug you about locations.conf and setting up bzr later? after your call
[14:00] redir: sure
[14:09] hi redir
[14:11] yo
[14:12] wanna hangout bac?
[14:13] redir: sure. paste an url here
[14:13] redir: we have seven minutes before the keynote
[14:13] https://plus.google.com/hangouts/_/gshmjqp7flvrgad5bny2rybvwua?authuser=3&hl=en
[14:20] reminder mark's keynote is coming up next
[14:20] jujugui ^
[14:21] jujugui no machine view, but should be cool
[14:21] where's the keynote?
[14:21] link...
[14:22] https://www.openstack.org/home/Video/
[14:22] jujugui^
[14:22] tx
[14:22] so we're running a little late...
[14:22] yea
[14:24] go solidfire!
[14:48] *fingers crossed*
[14:49] yay, the layout worked.
[14:50] :-D
[14:54] wow
[14:56] great work guys, you all rock
[14:56] Yeesh
[14:56] no spoilers. i've got about a 20 second lag. :)
[14:56] hah
[14:57] My "yeesh" was mostly the thought of using juju across centos, ubuntu, and windows. Craziness.
[14:57] rofl
[14:58] lol
[14:59] Anyone know if Mark's going to be demoing anything at the November 3rd summit in Paris?
[15:00] jujugui: standing up nowish?
[15:01] bac: rgr, on my way
[15:16] what was that short lpad thing? lpad.in/something
[15:22] hmm, not sure.
[15:22] I'd just stick the url in there, it's all good
[15:22] already done
[15:33] redir: for the lightweight co set up, this is all i need in my locations.conf: http://paste.ubuntu.com/7458051/
[15:43] jcastro when I click the up button on HN the button just goes away, does that mean i've voted?
[15:43] hatch: yep
[15:43] you're good
[15:43] yes
[15:43] (shitty UX)
[15:43] :)
[15:43] thx
[15:43] we were on the front page, now we're at the top of the 2nd page
[15:44] must have moar clicks!
[15:44] hatch: that is hacker new not ui designer news
[15:44] s/new/news
[15:45] hatch, if it makes you feel better their search also sucks!
[15:45] lol
[15:46] rick_h_: let me know if there is enough context in the merge request attached to that card
[15:46] redir: rgr will do
[15:47] my new amt ready intel nucs arrived so now trying to swap hardware so I can send back the first set :)
[17:03] marcoceppi any progress on the GH to LP stuff for charms? I'd like to get the Ghost charm in the store
[17:19] jcsackett: Got some questions for you when you have some time…
[17:23] hatch: I should be getting a proof of concept for my sync stuff by end of week
[17:24] got derailed getting more charms into trusty
[17:28] thanks for the pointers bac, I have a working lightweight checkout now... forgot I had installed cobzr some time ago and had to remove it...
[17:33] redir: great. stab cobzr.
[17:36] hatch: I caught up on some of last night's backlog when I came in this morning but was wondering: what's the latest with https://github.com/juju/juju-gui/pull/316 ?
[17:39] Awesome. I can consistently crash juju-gui:develop in Canary to the point of needing a kill -9.
[17:39] Tab won't even close
[17:49] kadams54: hatch is out today. I'm going to work on merging the two branches and see if we can get them landable.
[17:49] kadams54: what's up?
[17:50] That branch has a lot of overlap with my stuff so I just want to make sure I coordinate
[17:52] kadams54: ok. I'll be a bit. So just take a peek and we'll try to get yours in and update to match.
[17:53] Yeah… I'm actually working in Jeff's branch right now, so if you set up your own branch let me know so I can track the changes there.
[17:53] k
[17:56] redir: is qa still bin/enque followed by ingest?
[17:57] rick_h_: make run and in a diff window
[17:58] ./bin/es-ingest IIRC
[17:58] let me double check
[17:58] bac: is bin/enque used any more? Looks like --debug got removed but the code using it didn't catch up
[17:58] redir: es-update?
[17:58] well that's just the fulltext update
[18:00] rick_h_: http://pastebin.ubuntu.com/7458731/
[18:01] gotcha, yea I think bin/enqueue is gone and just hadn't been removed
[18:01] rick_h_: it may be used by the charm.
ingest-queued now does what the big enqueue does as stand-alone
[18:01] I've got too much old knowledge of how it used to work in my head
[18:01] rick_h_: I've got too much ignorance.
[18:01] bac: well bin/enque just fails because there's an if args.debug
[18:01] but no debug
[18:01] It is bliss :)
[18:01] arg is left
[18:02] so guess the charm isn't using it either
[18:02] big report:)
[18:02] s/big/bug
[18:02] bug even
[18:03] rick_h_: no, charm isn't using it
[18:03] bac: rgr, thanks for peeking
[18:07] rick_h_: when you get the chance, we should talk more about Jeff's branch and my card. I *think* my card may be resolved within the branch, but I can't verify in the current state.
[18:07] kadams54: ok
[18:07] shoot me a link
[18:08] https://plus.google.com/hangouts/_/gw7gremxriufe4kurfpnffyo3ya
[18:09] rick_h_: when are your comp/swap days
[18:09] redir: when are mine? someday
[18:09] not sure atm
[18:11] k
[18:24] how many charms are there?
[18:24] ~200
[18:25] 350 ish last I looked
[18:25] but that number is a little odd because there are charms for different series which are the same, and then a lot that aren't in the store
[18:25] oh wow that many? it's grown fast
[18:25] k. importing them in core-store to try reproducing a bug and up to 378...
[18:26] just wondering if it is near the end.
[18:26] probably near it, but with trusty and such maybe not :)
[18:43] redir bac: is there something to wipe the index? running es-update I get pyelasticsearch.exceptions.ElasticHttpError: (400, u'MapperParsingException[Analyzer [n3_20grams] not found for field [ngrams]]')
[18:44] rick_h_: yes, let me look, though redir may have it handy
=== BradCrittenden is now known as bac
[18:45] rick_h: this should work: http DELETE 'http://localhost:9200/charms'
[18:45] bac thx, retrying
[18:46] bac: why do you keep becoming BradCrittenden?
[18:46] jcsackett: connection drops for a tiny bit, irc reconnects but sees 'bac' is already taken
[18:47] irc nick timeout is greater than my outage time
[18:47] huh; i don't actually see you disconnecting/reconnecting.
[18:48] bac has quit from #juju-gui ; Ping timeout: 252 seconds
[18:49] jcsackett: i can get momentarily booted when a large norovirus incubator goes by
[18:50] 521 charms...
[18:50] woot
[18:51] bac: ... airplane?
[18:52] cruiseship
[18:56] jcsackett: i wouldn't want to be on that airplane
[18:56] ah.
[18:56] that makes more sense
[18:57] redir: bac man I'm having all kinds of issues trying to ingest and get stuff loaded. http://paste.ubuntu.com/7458986/
[18:59] I'm going to EOD soon, but just fyi, I'm trying to get things down to QA and having a series of elasticsearch issues.
[19:00] hmm, bet that's from the clearing of the db
[19:01] rick_h_: that exception you pasted is in the 'save' method in the error handling section. so saving threw an error and the corrective action is to delete the charm, which then errored b/c it did not exist. i've never seen that before.
[19:01] another restart going without errors
[19:02] bac: yea, I got impatient and tried to kill stuff and do qa without letting ingest finish
[19:02] probably left stuff in a bad place
[19:02] oh
[19:02] starting over again
[19:02] that error handling could be more robust
[19:02] anyway, will let it run tonight while I'm out and hopefully be good tomorrow
[19:02] * rick_h_ runs away a little bit early today. have a good night all
[19:03] rick_h_: k
[19:03] have a great eve
[19:32] what's a QA day?
[19:33] and what's an exploratory qa day
[19:33] ?
[19:33] redir a day where it's your job to find a bug in the GUI
[19:33] find & file
[19:33] we used to adhere to it pretty strictly, but lately it's been too much of a cram :)
[19:33] oh shit, i was supposed to do that today.
[19:33] hmmm
[19:33] * jcsackett makes note to find a bug tomorrow.
[19:34] like just take a day and use the gui looking for bugs?
[19:34] what is the diff btwn qa and exploratory qa?
[19:38] redir exploratory qa means you have to go and try and break the app
[19:39] do things in different ways, pretend you're a user who doesn't know what's going on etc
[19:39] redir not really the day, just part of the day
[19:39] tx, hatch
[19:40] you'll be surprised what you find when you try and use the app in unintended ways :)
[19:40] and with all the work that's been landing lately we no doubt created some bugs :)
[21:05] oops, I had that on my stand up list to talk to jcsackett about and he wasn't there so missed it
[21:07] hatch: who were you schooling on red velvet? http://www.nytimes.com/2014/05/14/dining/red-velvet-cake-from-gimmick-to-american-classic.html
=== BradCrittenden is now known as bac
[21:07] haha I don't remember
[21:07] huw I think
[21:08] oh man it needs to rain here
[21:08] so dusty
[21:08] bah, take ours
[21:08] gladly take some :)
[21:09] it comes with car-alarm-setting-off thunder too
[21:12] hah sounds like your shock sensor is a little too sensitive
[21:13] that landscape cloud builder thing is pretty darn cool
[21:13] I'm re-watching the keynote heh
[21:21] hatch: not my car. all of the other cars on the street...
[21:21] * bac walkies
[21:21] ahhh - is car crime bad there?
[21:22] ha, all crime is bad here
[21:22] not my little enclave, though
[21:23] haha ok
[21:23] it also helps to not be a drug dealer or gang member. for some reason they tend to be overrepresented in the statistics.
[21:27] guihelp https://github.com/juju/juju-gui/pull/320 is ready for a look-see. It's a WIP so not yet ready to land.
[21:28] bac lol
[21:30] kadams54 does it work?
[21:30] Yes
[21:30] no units should fire a change event by default
[21:30] because they are just objects
[21:30] or did my wip branches land?
[21:31] No, your WIP branch did not land
[21:31] But I've seen it so I know placeUnit will revive the unit :-)
[21:31] env.placeUnit() that is
[21:32] ohh ok so the placeUnit has the changes in it to revive
[21:32] cool
[21:32] :)
[21:32] QA instructions include console commands to grab a unit, revive it, and then set the machine ID.
[21:32] I'm not really set up atm to do any qa/reviews
[21:32] just glanced through
[21:33] what do you think of your smart updater stuff being able to diff the old list and new list and update the tokens which need updating?
[21:33] that would be my preference....of course that's going to be more work to implement
[21:33] :)
[21:36] I agree that function needs to handle changes and not just the current additions/deletions
[21:36] But I think there's still a need for a function that can just hit a single machine in the list
[21:37] you think so?
[21:37] Yeah, I was talking with rick_h_ earlier and there are going to be a lot of events that update/change a single machine
[21:37] what I'm thinking is that at scale, we'll want to batch the changes in the UI
[21:39] Seems like, even if changes were batched, you'd still need a way to update a single token
[21:39] I suppose this could actually be used by the smart renderer
[21:39] yeah
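A minimal sketch of the diff-and-update approach hatch and kadams54 are weighing at the end here: compare the previous and current machine lists and only touch the tokens that actually changed. diffMachineLists, renderToken, updateToken, and removeToken are hypothetical names rather than functions from the GUI, and the JSON.stringify comparison is a crude stand-in for a real field-by-field check.

    function diffMachineLists(oldList, newList, handlers) {
      var oldById = {};
      oldList.forEach(function(machine) { oldById[machine.id] = machine; });
      newList.forEach(function(machine) {
        var previous = oldById[machine.id];
        if (!previous) {
          handlers.renderToken(machine);    // added: create a new token
        } else if (JSON.stringify(previous) !== JSON.stringify(machine)) {
          handlers.updateToken(machine);    // changed: re-render just this token
        }
        delete oldById[machine.id];
      });
      // Anything left over was removed from the new list.
      Object.keys(oldById).forEach(function(id) {
        handlers.removeToken(id);
      });
    }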