m_3 | these are only install tests, but precise charms are looking good so far... http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/ | 00:16 |
---|---|---|
SpamapS | m_3: sweeeeet | 00:32 |
SpamapS | m_3: we're going to need a whole new test methodology for subordinates | 00:33 |
SpamapS | man juju status seems way slower with the subordinate stuff | 00:35 |
SpamapS | lots more to look at in zk I guess | 00:35 |
SpamapS | m_3: http://paste.ubuntu.com/930324/ .. status looks a bit wonky.. so waiting for 'started' may be harder | 00:36 |
SpamapS | ok time to get back to Saturday-ing | 00:37 |
m_3 | yeah, subordinates are a whole other world of tests | 00:41 |
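On SpamapS's point above about waiting for 'started' getting harder: a minimal sketch of the kind of wait loop a test harness would use, assuming pyjuju's YAML status output with a per-unit `agent-state` key (the polling interval and lack of timeout are illustrative):

```sh
# Block until no unit still reports a pre-'started' agent state.
# Assumes pyjuju's YAML `juju status` output; with subordinates, their
# units appear in the output too and must settle before this exits.
while juju status | grep -qE 'agent-state: (pending|installed)'; do
    sleep 10
done
```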
imbrandon | marcoceppi: is the supercache still looking at memcache maybe? | 01:01 |
=== _mup__ is now known as _mup_ | ||
melmoth | anybody know what to do when there's no "accept & commission" button? (https://bugs.launchpad.net/maas/+bug/981845) | 13:46 |
_mup_ | Bug #981845: "Accept & Commission" button does not exist <MAAS:Invalid> < https://launchpad.net/bugs/981845 > | 13:46 |
SpamapS | melmoth: I think daviey said there's a weird hacky thing you have to do... | 14:10 |
SpamapS | 12:49 < Daviey> sudo mount /var/lib/maas/ephemeral/precise/server/amd64/20120328/disk.img /mnt/ ; sudo chroot /mnt ; sudo apt-get update ; sudo apt-get install cloud-init ; exit ; sudo umount /mnt | 14:11 |
SpamapS | 12:49 < Daviey> restart the node. | 14:11 |
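Note that pasted as a single line, everything after `sudo chroot /mnt` would run on the host once the interactive chroot shell exits; a sketch that applies the update inside the image (same path as quoted above, resolv.conf setup omitted):

```sh
# Refresh cloud-init inside the MAAS commissioning image itself.
sudo mount /var/lib/maas/ephemeral/precise/server/amd64/20120328/disk.img /mnt
sudo chroot /mnt sh -c 'apt-get update && apt-get install -y cloud-init'
sudo umount /mnt
# then restart the node so it boots the updated image
```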
melmoth | SpamapS, tried it (i saw his post on irc), without success. | 14:19 |
SpamapS | melmoth: bummer | 14:20 |
melmoth | the "new" version of cloud init i installed in the disk.img was 0.6.3-0ubuntu1 (that s the one the repo i used had), may be it s still does not contain the fix ? | 14:20 |
SpamapS | cloud-init | 0.6.3-0ubuntu1 | precise | source, all | 14:21 |
SpamapS | melmoth: that's the latest | 14:21 |
SpamapS | melmoth: after doing that did you start over? | 14:21 |
melmoth | yep | 14:21 |
SpamapS | perhaps the nodes need to be reinstalled from scratch | 14:22 |
melmoth | i wanted to remove the nodes and put them back, but i can't remove them; the button is greyed out and it says they are being used | 14:22 |
melmoth | i added a new node, still no button either | 14:22 |
melmoth | if i reboot the nodes, they are reinstalled and end up in the "ready" state, but no button.... | 14:23 |
_mup_ | Bug #982353 was filed: destroying a subordinate service after destroying its primary services results in a traceback... <juju:New> < https://launchpad.net/bugs/982353 > | 14:32 |
m_3 | SpamapS: when you get a sec... we've got a decent picture of breakages http://ec2-23-20-58-1.compute-1.amazonaws.com:8080/ | 15:24 |
m_3 | SpamapS: most of the failing charms are either a.) failing in oneiric too, or b.) expected (hadoop,cloudfoundry) | 15:26 |
SpamapS | m_3: cool! | 15:27 |
m_3 | SpamapS: I'd be fine with pulling the switch at any time... so let's just figure out what makes sense from a release perspective | 15:27 |
SpamapS | m_3: one question.. why do we have failing charms in oneiric? | 15:27 |
m_3 | ha! | 15:27 |
m_3 | good question... just haven't gotten to fixing them. mostly package/repo barfs I think (node,nfs) | 15:28 |
m_3 | you can see details of most oneiric charms in charmtests.markmims.com (maybe a week out of date) | 15:29 |
SpamapS | ok, doing weekend stuff now.. will check it out later | 15:29 |
m_3 | cool man | 15:30 |
m_3 | I won't push succeeding charms to precise branches... that makes sense to do through lp | 15:31 |
m_3 | I'll work on fixing failing ones today to make the oneiric branches work for precise | 15:31 |
jamespage | m_3: that reminds me that we need to sort-out which hadoop charms are going to transfer to precise | 15:45 |
m_3 | jamespage: yup | 15:47 |
jamespage | and whether I should be pushing the current precise one to oneiric - it works just fine - but seems a little pointless now | 15:47 |
m_3 | jamespage: I think it'll just be easiest to transfer all the oneiric ones and then manually remove the links for ones we want changed | 15:47 |
jamespage | m_3: yeah - I guess that makes sense | 15:48 |
jamespage | it's closest to the approach we would take in the ubuntu archive | 15:48 |
jamespage | new release == old release; then `make changes` | 15:48 |
jamespage | or rm -Rf | 15:48 |
m_3 | not sure how the lp auto-changeover scripts'll work... it might make sense to push it to oneiric first, then let the script treat it as normal | 15:48 |
m_3 | but that also might break the script too :) | 15:48 |
jamespage | m_3: because the branch already exists for precise? | 15:49 |
m_3 | right | 15:49 |
jamespage | OK | 15:49 |
m_3 | we'll find out though... there're several situations like that already I think | 15:49 |
jamespage | I love some live testing! | 15:49 |
m_3 | oh yeah! | 15:50 |
m_3 | it looks like most charms are doing fine just being auto-promoted though | 15:50 |
m_3 | s/are doing/will do/ | 15:50 |
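As a concrete version of the transfer being discussed, a hedged sketch of promoting one charm by hand with bzr; the Launchpad branch paths follow the per-series charm layout but are illustrative here, not the auto-changeover script's mechanism:

```sh
# Copy a charm's oneiric branch to the precise series by hand.
# Branch paths are illustrative; substitute the real charm branches.
bzr branch lp:~charmers/charms/oneiric/hadoop/trunk hadoop
cd hadoop
# ...apply any precise-specific fixes, then publish under the new series:
bzr push lp:~charmers/charms/precise/hadoop/trunk
```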
hazmat | g'morning | 15:56 |
jamespage | morning hazmat! | 15:56 |
m_3 | mornin K | 15:57 |
hazmat | m_3, just testing out the new recorder, its env agnostic now, use sftp to fetch logs | 15:57 |
m_3 | hazmat: whoohoo! | 15:58 |
m_3 | awesome... I've been testing precise charms, but I'm itching to get back to testing against other providers | 16:01 |
=== almaisan-away is now known as al-maisan | ||
hazmat | m_3, cool; re subordinates, we just sample them against a known set of interesting other charms | 16:10 |
=== al-maisan is now known as almaisan-away | ||
m_3 | hazmat: guided planning then? | 16:16 |
hazmat | m_3, partly; the deps for the sub are still resolved normally, but a set of known host services is used to associate them.. but yes, effectively it is guided planning | 16:17 |
hazmat | m_3, technically with the default juju-info we'd have to do a pair wise construction of a sub against all other non sub charms known to pass tests to get full coverage/compat checking | 16:18 |
hazmat | er.. default `juju-info` relation that gets established for subs | 16:18 |
m_3 | wanna maybe start with a --parent param to plan? then we can expand from there? | 16:19 |
_mup_ | Bug #982422 was filed: unable to simulate a failed hook with debug hooks. <juju:New> < https://launchpad.net/bugs/982422 > | 16:20 |
hazmat | m_3, i'd rather pass it via env var, it feels silly to pass a param for all charms on the basis that some may possibly use it, when most won't | 16:21 |
hazmat | but parameterization sounds good | 16:21 |
m_3 | cool with either | 16:21 |
m_3 | we'll choose something other than wordpress as the primary to attach to :) | 16:22 |
m_3 | I guess metadata[subordinate] will be the best way to determine if we should be looking for a primary to glom on to | 16:24 |
m_3 | didn't see any other ways of doing it in the docs, but haven't read too carefully wrt this | 16:25 |
hazmat | m_3, yeah. that's the primary way to ask the question "is this a subordinate charm" | 16:26 |
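Putting the two pieces together (the metadata flag plus an env-var parameter, per hazmat's preference), a sketch; `CHARM_TEST_PRIMARY` is a hypothetical variable name, not an existing charmrunner option:

```sh
#!/bin/sh
# If metadata.yaml marks the charm as subordinate, deploy a known-good
# primary from the (hypothetical) CHARM_TEST_PRIMARY env var and relate
# the two; the implicit juju-info relation makes any primary eligible.
charm_dir="$1"
charm="$(basename "$charm_dir")"
if grep -q '^subordinate: *true' "$charm_dir/metadata.yaml"; then
    primary="${CHARM_TEST_PRIMARY:?set CHARM_TEST_PRIMARY to a primary service}"
    juju deploy "$primary"
    juju deploy "$charm"
    juju add-relation "$primary" "$charm"
fi
```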
melmoth | What is the meaning of the node statuses "ready" and "commissioning"? | 16:50 |
melmoth | I still do not have the "accept and commission" button, but if i start a node, it ends up in a ready state. | 16:51 |
melmoth | but still, juju refuses to bootstrap | 16:51 |
hazmat | flacoste, ^ | 17:01 |
flacoste | melmoth: there is a problem with the commissioning process as present in the archive | 17:02 |
flacoste | melmoth: it uses an image which doesn't support commissioning :-( | 17:02 |
flacoste | melmoth: the work-around is to manually update cloud-init in the commissioning images | 17:03 |
flacoste | unfortunately, i don't have step-by-step instructions on how to do that | 17:03 |
flacoste | Daviey and smoser could provide this | 17:03 |
flacoste | but i'm not sure they are around | 17:03 |
melmoth | flacoste, i did update cloud-init | 17:04 |
flacoste | ah, ok | 17:04 |
melmoth | https://bugs.launchpad.net/maas/+bug/981845 (see my comment 2) | 17:04 |
_mup_ | Bug #981845: "Accept & Commission" button does not exist <MAAS:Invalid> < https://launchpad.net/bugs/981845 > | 17:04 |
melmoth | i even reinstalled from scratch and updated the cloud-init thingy before adding the nodes, just in case. | 17:04 |
flacoste | melmoth: btw, if you start an instance, you cannot use it with juju as it's supposed to be available for "self-admin" | 17:04 |
flacoste | so you need nodes in the 'Ready' state | 17:05 |
flacoste | i.e., ones that completed the commissioning process | 17:05 |
melmoth | i see them in ready state (after having booted them up manually, that is) | 17:05 |
flacoste | melmoth: ah | 17:05 |
flacoste | there is another bug | 17:05 |
flacoste | commissioning should shutdown the instance :-) | 17:05 |
flacoste | https://bugs.launchpad.net/maas/+bug/981116 | 17:06 |
_mup_ | Bug #981116: MAAS should power down the node once the commissioning process is successful <MAAS:New> < https://launchpad.net/bugs/981116 > | 17:06 |
flacoste | manually power them down | 17:06 |
flacoste | and then they should be good to go with juju | 17:06 |
melmoth | i do all of this within kvm (maas and nodes); am i supposed to tell maas to use virsh instead of wake-on-lan then, i guess. | 17:06 |
flacoste | yes | 17:06 |
flacoste | you should do that | 17:06 |
flacoste | but that still doesn't make the node power-down at the end of commissioning | 17:07 |
flacoste | the problem is that the node is 'Ready' | 17:07 |
flacoste | but still running | 17:07 |
flacoste | and the only way to get a node to contact the MAAS to know what to do | 17:07 |
flacoste | is by booting it | 17:07 |
flacoste | so when you juju bootstrap | 17:07 |
flacoste | you'll just wait and wait | 17:07 |
flacoste | for a node to come up | 17:07 |
flacoste | probably power-cycling the node that has been assigned as the juju bootstrap node | 17:08 |
flacoste | would work just as well | 17:08 |
flacoste | but i'd simply power-down all 'Ready' nodes | 17:08 |
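For an all-KVM setup like melmoth's, that power-down can be done from the host with virsh until bug 981116 lands; the libvirt domain names below are hypothetical:

```sh
# Hard power-off each commissioned node VM so it sits 'Ready' but off;
# MAAS (with the virsh power type) can start it again when juju asks.
for node in maas-node-1 maas-node-2 maas-node-3; do
    sudo virsh destroy "$node"
done
```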
melmoth | Soo, i added 5 nodes, all in ready state, all powered off (i tried with 2 powered on too, just in case) | 18:13 |
melmoth | juju bootstrap fails with an internal server error. maas.log says http://pastebin.com/LryTtLNP | 18:13 |
_mup_ | charmrunner/charmrunner r31 committed by kapil.foss@gmail.com | 20:34 |
_mup_ | recorder is now provider type agnostic and can work against any specified env, update watcher for new status format | 20:34 |
=== jkyle_ is now known as jkyle | ||
_mup_ | juju/trunk r530 committed by kapil.thangavelu@canonical.com | 23:03 |
_mup_ | [trivial] update status to utilize JUJU_ENV if set [f=981387] | 23:03 |
_mup_ | juju/trunk r531 committed by kapil.thangavelu@canonical.com | 23:09 |
_mup_ | [trivial] update terminate-machine to utilize JUJU_ENV if set [f=981387] | 23:09 |
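Usage implied by r530/r531, with an illustrative environment name from environments.yaml:

```sh
# JUJU_ENV picks the target environment for these subcommands
# without editing the default; 'staging' is illustrative.
JUJU_ENV=staging juju status
JUJU_ENV=staging juju terminate-machine 3
```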