[00:16] this is only install tests, but precise charms are looking good so far... http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/
[00:32] m_3: sweeeeet
[00:33] m_3: we're going to need a whole new test methodology for subordinates
[00:35] man, juju status seems way slower with the subordinate stuff
[00:35] lots more to look at in zk I guess
[00:36] m_3: http://paste.ubuntu.com/930324/ .. status looks a bit wonky.. so waiting for 'started' may be harder
[00:37] ok, time to get back to Saturday-ing
[00:41] yeah, subordinates are a whole other world of tests
[01:01] marcoceppi: is the supercache still looking at memcache maybe?
=== _mup__ is now known as _mup_
[13:46] anybody know what to do when there's no "accept & commission" button? (https://bugs.launchpad.net/maas/+bug/981845)
[13:46] <_mup_> Bug #981845: "Accept & Commission" button does not exist < https://launchpad.net/bugs/981845 >
[14:10] melmoth: I think daviey said there's a weird hacky thing you have to do...
[14:11] 12:49 < Daviey> sudo mount /var/lib/maas/ephemeral/precise/server/amd64/20120328/disk.img /mnt/ ; sudo chroot /mnt ; sudo apt-get update ; sudo apt-get install cloud-init ; exit ; sudo umount /mnt
[14:11] 12:49 < Daviey> restart the node.
[14:19] SpamapS, tried it (i saw his post on irc), without success.
[14:20] melmoth: bummer
[14:20] the "new" version of cloud-init i installed in the disk.img was 0.6.3-0ubuntu1 (that's the one the repo i used had), maybe it still does not contain the fix?
[14:21] cloud-init | 0.6.3-0ubuntu1 | precise | source, all
[14:21] melmoth: that's the latest
[14:21] melmoth: after doing that, did you start over?
[14:21] yep
[14:22] perhaps the nodes need to be reinstalled from scratch
[14:22] i wanted to remove the nodes and put them back, but i can't remove them; the button is greyed out and it says they are being used
[14:22] i added a new node, still no button either
[14:23] if i reboot the nodes, they are reinstalled and end up in the "ready" state, but no button....
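For reference, a step-by-step rendering of the workaround Daviey is quoted with at [14:11]. This is only a sketch: the image path and datestamp come straight from that quote and will differ on another MAAS install.

    # Path copied from the [14:11] quote; adjust the series/arch/datestamp
    # to whatever ephemeral image your MAAS server actually has.
    IMG=/var/lib/maas/ephemeral/precise/server/amd64/20120328/disk.img

    sudo mount "$IMG" /mnt/                         # mount the commissioning image
    sudo chroot /mnt apt-get update                 # refresh package lists inside the image
    sudo chroot /mnt apt-get install -y cloud-init  # pull in the newer cloud-init
    sudo umount /mnt                                # release the image so MAAS serves the updated copy
    # finally, restart the node so it boots the updated commissioning image

As the rest of the log shows, melmoth still hit bug #981845 after doing this, so treat it as a best-effort workaround rather than a guaranteed fix.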
[14:32] <_mup_> Bug #982353 was filed: destroying a subordinate service after destroying its primary services results in a traceback... < https://launchpad.net/bugs/982353 >
[15:24] SpamapS: when you get a sec... we've got a decent picture of breakages: http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/
[15:26] SpamapS: most of the failing charms are either a) failing in oneiric too, or b) expected (hadoop, cloudfoundry)
[15:27] m_3: cool!
[15:27] SpamapS: I'd be fine with pulling the switch at any time... so let's just figure out what makes sense from a release perspective
[15:27] m_3: one question.. why do we have failing charms in oneiric?
[15:27] ha!
[15:28] good question... just haven't gotten to fixing them. mostly package/repo barfs I think (node, nfs)
[15:29] you can see details of most oneiric charms at charmtests.markmims.com (maybe a week out of date)
[15:29] ok, doing weekend stuff now.. will check it out later
[15:30] cool man
[15:31] I won't push succeeding charms to precise branches... that makes sense to do through lp
[15:31] I'll work on fixing failing ones today to make the oneiric branches work for precise
[15:45] m_3: that reminds me that we need to sort out which hadoop charms are going to transfer to precise
[15:47] jamespage: yup
[15:47] and whether I should be pushing the current precise one to oneiric - it works just fine - but seems a little pointless now
[15:47] jamespage: I think it'll just be easiest to transfer all the oneiric ones and then manually remove the links for the ones we want changed
[15:48] m_3: yeah - I guess that makes sense
[15:48] it's closest to the approach we would take in the ubuntu archive
[15:48] new release == old release; then `make changes`
[15:48] or rm -Rf
[15:48] not sure how the lp auto-changeover scripts'll work... it might make sense to push it to oneiric first, then let the script treat it as normal
[15:48] but that also might break the script too :)
[15:49] m_3: because the branch already exists for precise?
[15:49] right
[15:49] OK
[15:49] we'll find out though... there're several situations like that already I think
[15:49] I love some live testing!
[15:50] oh yeah!
[15:50] it looks like most charms are doing fine just being auto-promoted though
[15:50] s/are doing/will do/
[15:56] g'morning
[15:56] morning hazmat!
[15:57] mornin K
[15:57] m_3, just testing out the new recorder, it's env agnostic now, uses sftp to fetch logs
[15:58] hazmat: whoohoo!
[16:01] awesome... I've been testing precise charms, but I'm itching to get back to testing against other providers
=== almaisan-away is now known as al-maisan
[16:10] m_3: cool, re subordinates we just sample them against a known set of interesting other charms
=== al-maisan is now known as almaisan-away
[16:16] hazmat: guided planning then?
[16:17] m_3, partly; the deps for the sub are still resolved normally, but a set of known host services is used to associate them.. but yes, effectively it is guided planning
[16:18] m_3, technically with the default juju-info we'd have to do a pairwise construction of a sub against all other non-sub charms known to pass tests to get full coverage/compat checking
[16:18] er.. default `juju-info` relation that gets established for subs
[16:19] wanna maybe start with a --parent param to plan? then we can expand from there?
[16:20] <_mup_> Bug #982422 was filed: unable to simulate a failed hook with debug hooks. < https://launchpad.net/bugs/982422 >
[16:21] m_3, i'd rather pass it via env var; it feels silly to pass a param for all charms on the basis that some may possibly use it, when most won't
[16:21] but parameterization sounds good
[16:21] cool with either
[16:22] we'll choose something other than wordpress as the primary to attach to :)
[16:24] I guess metadata[subordinate] will be the best way to determine if we should be looking for a primary to glom on to
[16:25] didn't see any other ways of doing it in the docs, but haven't read too carefully wrt this
[16:26] m_3, yeah, that's the primary way to ask the question: is this a subordinate charm
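A rough sketch of the guided-planning idea being discussed here: read metadata[subordinate] to decide whether a charm needs a primary to attach to, and pass that primary via an environment variable rather than a per-charm flag. The charm name, repository layout, and the JUJU_TEST_PARENT variable are illustrative placeholders, not real charmrunner options.

    # Illustrative only: "some-sub", the repository layout, and JUJU_TEST_PARENT
    # are placeholders, not part of charmrunner.
    CHARM=some-sub
    PARENT=${JUJU_TEST_PARENT:-mysql}    # known-good primary to attach the sub to

    # metadata[subordinate] is how you tell a subordinate charm from a normal one
    if grep -q '^subordinate:[[:space:]]*true' "precise/$CHARM/metadata.yaml"; then
        juju deploy "$PARENT"
        juju deploy --repository=. "local:$CHARM"
        # with no interface named, the implicit juju-info relation for subordinates applies
        juju add-relation "$PARENT" "$CHARM"
    else
        juju deploy --repository=. "local:$CHARM"
    fi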
[16:50] What is the meaning of the node statuses "ready" and "commissioning"?
[16:51] I still do not have the "accept and commission" button, but if i start a node, it ends up in the ready state.
[16:51] but still, juju refuses to bootstrap
[17:01] flacoste, ^
[17:02] melmoth: there is a problem with the commissioning process as present in the archive
[17:02] melmoth: it uses an image which doesn't support commissioning :-(
[17:03] melmoth: the work-around is to manually update cloud-init in the commissioning images
[17:03] unfortunately, i don't have step-by-step instructions on how to do that
[17:03] Daviey and smoser could provide this
[17:03] but i'm not sure they are around
[17:04] flacoste, i did update cloud-init
[17:04] ah, ok
[17:04] https://bugs.launchpad.net/maas/+bug/981845 (see my comment 2)
[17:04] <_mup_> Bug #981845: "Accept & Commission" button does not exist < https://launchpad.net/bugs/981845 >
[17:04] i even reinstalled from scratch and updated the cloud-init thingy before adding the nodes, just in case.
[17:04] melmoth: btw, if you start an instance, you cannot use it with juju as it's supposed to be available for "self-admin"
[17:05] so you need nodes in the 'Ready' state
[17:05] so that completed the commissioning process
[17:05] i see them in the ready state (after having booted them up manually, that is)
[17:05] melmoth: ah
[17:05] there is another bug
[17:05] commissioning should shut down the instance :-)
[17:06] https://bugs.launchpad.net/maas/+bug/981116
[17:06] <_mup_> Bug #981116: MAAS should power down the node once the commissioning process is successful < https://launchpad.net/bugs/981116 >
[17:06] manually power them down
[17:06] and then they should be good to go with juju
[17:06] i do all of this within kvm (maas and nodes); am i supposed to tell maas to use virsh instead of wake-on-lan then, i guess?
[17:06] yes
[17:06] you should do that
[17:07] but that still doesn't make the node power down at the end of commissioning
[17:07] the problem is that the node is 'Ready'
[17:07] but still running
[17:07] and the only way to get a node to contact the MAAS to know what to do
[17:07] is by booting it
[17:07] so when you juju bootstrap
[17:07] you'll just wait and wait
[17:07] for a node to come up
[17:08] probably power-cycling the node that has been assigned as the juju bootstrap node
[17:08] would work just as well
[17:08] but i'd simply power down all 'Ready' nodes
[18:13] Soo, i added 5 nodes, all in ready state, all powered off (i tried with 2 powered on too, just in case)
[18:13] juju bootstrap fails with internal server error. maas.log says http://pastebin.com/LryTtLNP
[20:34] <_mup_> charmrunner/charmrunner r31 committed by kapil.foss@gmail.com
[20:34] <_mup_> recorder is now provider type agnostic and can work against any specified env, update watcher for new status format
=== jkyle_ is now known as jkyle
[23:03] <_mup_> juju/trunk r530 committed by kapil.thangavelu@canonical.com
[23:03] <_mup_> [trivial] update status to utilize JUJU_ENV if set [f=981387]
[23:09] <_mup_> juju/trunk r531 committed by kapil.thangavelu@canonical.com
[23:09] <_mup_> [trivial] update terminate-machine to utilize JUJU_ENV if set [f=981387]
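Per the r530/r531 commit messages above, juju status and terminate-machine now honour JUJU_ENV when it is set. A minimal usage sketch, where the environment name "staging" is just an example from a hypothetical environments.yaml:

    # "staging" must be an environment defined in ~/.juju/environments.yaml
    export JUJU_ENV=staging
    juju status                  # runs against the "staging" environment
    juju terminate-machine 3     # likewise picks up JUJU_ENV

    # one-off form without exporting
    JUJU_ENV=staging juju status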