[00:00] Destreyf_: it's an actual charm that deploys nothing
[00:01] Destreyf_: you could deploy one of your regular charms as the primary.. but this keeps them all on the same level as subordinates.
[00:01] when i run that command i get an "2012-05-14 17:55:48,163 ERROR Error processing 'cs:precise/ubuntu': entry not found"
[00:02] do i need to specify lp:charms/precise/ubuntu ?
[00:04] Destreyf_: that charm is broken, I forgot. Try 'bzr branch lp:charms/precise/ubuntu' into your local repo and then deploy it with 'local:ubuntu'
[00:06] :P i didn't think i was crazy
[00:06] i just got everything setup again so i had just tried it once
[00:07] kk time to do the magic of booting :P
[00:07] * SpamapS lands maintainer checks for 'charm proof'
[00:07] btw powerwake does not ever work for me, though that may be the supermicro being a POS :P
=== vednis is now known as mars
[00:24] https://code.launchpad.net/~clint-fewbar/juju/add-maintainer/+merge/105738
[00:24] docs update for the maintainer field
[00:25] SpamapS: you're a busy person :P
[00:30] SpamapS: :P I can't merge ^ you guys fixed the team privileges
[00:30] either way looks good
[00:49] bkerensa: who fixed the team privileges?
[00:50] damnit, let's make up our mind!
[00:51] bkerensa: no, I just did it wrong
[00:51] so confusing that there is an lp:juju/docs different from lp:~juju/juju/docs
[00:51] bkerensa: You don't need to merge other peoples' merge proposals. Just +1 them
[00:52] bkerensa: https://code.launchpad.net/~clint-fewbar/juju/add-maintainer/+merge/105742 .. *that* one you can Mark as Approved :)
[00:55] SpamapS, fixed thanks for bringing that to my attention
=== zirpu-away is now known as zirpu
[01:14] can someone explain to me the difference between -departed and -broken?
The docs don't make sense to me
[01:15] in particular I want to run something each time a unit leaves the relation so that I can drop its address from a config file
[01:15] I don't care how the unit left in this case
[01:24] hmm, it looks like -departed is what I want, but it doesn't get run on remove-relation?
[01:35] and it seems I can't relation-get in the relation-departed hook to find out the info about what to remove?
[02:08] m_3, SpamapS: lp:~james-w/charms/precise/nagios-nrpe-server/trunk has a sketch of what I was thinking for the nrpe charm. The main concept still missing is how the charm tells nagios what to check and when
[02:09] I think it's a case where we may want to move to structured data in the relation info, as it's fairly complex to specify all of the needed info (check name, command, frequency etc.)
[02:39] james_w, departed means a remote unit is gone
[02:39] james_w, broken means the relation is gone
[02:40] ie. remove-unit vs remove-relation
[02:48] hazmat, and there is not one that does both?
[02:48] so I should symlink them or something to handle both cases?
[04:19] james_w: you can't really handle departed the same as broken usually
[04:19] james_w: when broken fires, all the units are already gone. All you have is the relation ID
[04:20] SpamapS, ok
[04:20] so it should clear everything
[04:20] what about getting the relation info when departed fires?
[04:21] james_w: right, if you look at what I just recently did with the nagios charm.. I prefixed everything on disk with $JUJU_RELATION_ID, and on broken, I just rm -f /etc/nagios3/conf.d/$JUJU_RELATION_ID-*.cfg
[04:21] SpamapS, cool idea
[04:22] simple is what makes the charm world go round :)
[04:22] SpamapS, I'll have to amend slightly in this case, as it's a single value to append to, so I'll have to use puppet or something as a layer of indirection
[04:23] james_w: dotdee works if puppet feels like too big of a hammer
[04:23] or cat :-)
[04:24] james_w: re the structured data..
this is needed for the general monitoring case as well.. I wonder if we can make use of the same interface
[04:24] SpamapS, that would be cool
[04:25] the structure in this case is {'check_name': ..., 'script': ..., 'frequency': ...}
[04:25] if other things can consume the script which would have nagios plugin semantics then it could be re-used
[04:26] NRPE does make things complicated in this light... hrm
[04:27] I think I could spend 2 weeks straight making the monitoring story really solid.
[04:27] I feel like that, and backups, need to get much much better
[04:27] the sketch I have has a service->nrpe interface and an nrpe<->nagios interface
[04:27] that would rock
[04:28] I can't decide if the nrpe<->nagios interface is any more than the interface you put in the nagios charm, but that one isn't currently extensible
[04:29] having a generic monitoring interface would be *fantastic*
[04:29] So what you really want is for a service to provide its own plugin
[04:29] which is definitely something NRPE was made for
[04:29] that's my primary use case currently, yeah
[04:30] For that case, I can see NRPE just swallowing and running whatever you give it from your service. I still want that to map to something that we can generically identify on Nagios.
[04:31] yeah
[04:31] the nrpe->nagios interface can just list check_nrpe!check_foo things for the host in question, and it should all just work
[04:32] whereas non-nrpe would be things like check_ssh
[04:32] so it seems like there is one 'nagios-checks' interface that would suffice for both
[04:33] in addition to magic for http/juju-info/etc. interfaces
[04:33] so primary service sets this to nrpe: plugin=/usr/share/foo/plugin.py monitor_type=unique_to_this_service .. then nrpe says to nagios monitor_type=nrpe args=unique_to_this_service .. and nagios says "Oh I know how to do NRPE" and just monitors that
[04:33] yeah, sounds like that would work
[04:33] I think a description would be helpful too.
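The two patterns discussed above (write an nrpe.d check entry handed over the relation, and prefix everything on disk with $JUJU_RELATION_ID so relation-broken can clean up) can be sketched together. This is only an illustration: NRPE_D stands in for /etc/nagios/nrpe.d so the sketch is self-contained, and NAME/COMMAND would come from relation-get in a real hook; all names here are hypothetical.

```shell
#!/bin/sh
# Sketch of the nrpe-side hooks, assuming illustrative names throughout.
set -e
NRPE_D="$(mktemp -d)"            # stand-in for /etc/nagios/nrpe.d
JUJU_RELATION_ID="monitors:0"    # juju exports this inside real relation hooks
NAME="check_foo"                 # would come from relation-get
COMMAND="/usr/share/foo/plugin.py"

# relation-changed side: one config file per check, prefixed with the
# relation id so it can be found again later
echo "command[$NAME]=$COMMAND" > "$NRPE_D/$JUJU_RELATION_ID-$NAME.cfg"

# relation-broken side: the remote units are already gone and only the
# relation id survives, so remove everything this relation put on disk
rm -f "$NRPE_D/$JUJU_RELATION_ID-"*.cfg
```

The prefix is what makes relation-broken workable: since no unit data is available there, the relation ID alone has to be enough to identify everything that needs cleaning up.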
[04:33] rather, alias
[04:34] but I'd call it description. Basically "What to show the pager duty guy at 3am"
[04:35] echo "command[$NAME]=$COMMAND" >> /etc/nagios/nrpe.d/plugin-$NAME.cfg
[04:35] You can stick the $JUJU_RELATION_ID right there :)
[04:35] yeah
[04:35] it's the other side that requires the indirection
[04:35] "append private-address to allowed_hosts"
[04:37] other relations sorry
[04:37] right
[05:34] SpamapS: your charm shows as latest in http://jujucharms.com/charms/precise/nagios
[05:34] Not sure if that's yours or hazmat's doing
[05:40] aaaand, the latest Nagios charm looks good: members skypress-1,skypress-0
[05:40] ...until I find something else broken. ;-)
[06:49] SpamapS: what's the proper way to use run-as-hook that would drop me in a shell in say the config-changed hook context ?
=== almaisan-away is now known as al-maisan
[09:20] grumble...I was thinking about trying to charm condor...
[09:20] https://bugs.launchpad.net/ubuntu/+source/condor/+bug/919671
[09:20] <_mup_> Bug #919671: Please remove condor from ubuntu precise < https://launchpad.net/bugs/919671 >
[09:21] manual installation seems to be a bit painful... got to find something else...
[09:21] anyone know of a scheduler similar to condor that is already packaged on 12.04 ?
=== TheMue_ is now known as TheMue
=== rog is now known as Guest16956
=== Guest16956 is now known as rogpeppe
[13:28] hi there.
[13:29] I have a simple question for you guys, I think. If installing a charm with juju is "like installing a package", then ... what do I need charm for? ;-)
[13:29] or juju, for that matter
[13:29] maybe I'm just not seeing the obvious.
[13:31] Madkiss, to deploy machines.
[13:31] like, you want a reverse proxy, n http server backends and 1 mysql db used by those.
[13:32] once your charms are written for those services, you can install them as easily as you would apt-get install something.
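The deploy-and-relate workflow just described looked roughly like this with the juju CLI of the time (service names are illustrative; the wordpress/mysql pair is the stock example from the docs):

```shell
juju bootstrap                      # start the environment
juju deploy mysql                   # deploy the database service
juju deploy wordpress               # deploy the app service
juju add-relation wordpress mysql   # wire them together; hooks fire on both sides
juju expose wordpress               # open it up to outside traffic
```

The add-relation step is where juju goes beyond package installation: it triggers the back-and-forth hook exchange (create the db, publish credentials, configure the app) on both services automatically.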
[13:32] (well, you need to deploy them, and then define their relationships, that's about it)
[13:32] Okay. So it's mainly about getting the right configuration stuff into my system?
[13:32] plural
[13:32] into systemS
[13:33] that are deployed in your cloud.
[13:33] Okay. And what's the relation between Juju and Orchestra?
[13:34] orchestra has been ...cancelled.
[13:34] the new stuff is now called MAAS.
[13:35] and with MAAS, you can use juju to deploy bare metal boxes.
[13:35] so you deploy your cloud infrastructure with juju, and then you deploy your vms with juju
[13:43] Madkiss: the reason for charms is that in the modern computing world, you want to run things across many machines, not just one.
[13:45] SpamapS: what's the TL;DR status on hpcloud/juju?
[13:45] Madkiss: getting things setup to talk between two machines often involves multiple steps where you need to run things on one box, and then on another... and back and forth. So, ask for a database to be created, and a user, create a database user and the database, then create the schema. This distributed configuration is easy w/ juju.
[13:46] jcastro: HP cloud does not expose S3, so it cannot store charms or the "map" that we use to find the bootstrap machine.
[13:47] I knew that part
[13:47] jcastro: the 5 line change I had to allow us to use a non HP S3 is pretty much a hack.. but I may propose it for trunk.
[13:47] ok so we're actively working on it though right?
[13:51] jcastro: yes :)
=== nathwill_will is now known as nathwill
[14:00] negronjl: here's "the big list" distro uses: http://reqorts.qa.ubuntu.com/reports/sponsoring/
[14:00] SpamapS: so we're going to have to move to a "subscribe a team" instead of "use a tag" for charms
[14:00] which I think is fine
[14:01] jcastro: totally. We can even have a bug bot that automatically converts new-charm to the subscription.
[14:02] https://launchpad.net/ubuntu-sponsoring is the code
[14:03] SpamapS: in this case, it's ~charmers instead of ~ubuntu-sponsors right?
[14:04] we don't need another team do we?
[14:05] good question
[14:05] I think we do
[14:05] jcastro: we want to be able to unsubscribe the team when we want it off the queue
[14:06] jcastro: there may be good reasons to subscribe ~charmers that don't include sponsorship
[14:06] tho I can't think of one now
[14:07] well, in cases where I want a charmer to look at something ... feels "queueish" to me
[14:13] jcastro: test it out now.. because I think ~charmers has an implicit subscription to all the bugs anyway
[14:13] jcastro: so that may be one reason
[14:14] Indeed
[14:14] ok, so in cases like this, I am sure this was discussed at length when they did it for distro
[14:15] so like, why deviate, it's probably like that for a reason
[14:16] I suspect launchpad limitations before "sane rational thought" ;)
[14:19] Hi guys, any word on landing bug 958312 in precise? A dev on my team using the distro version of juju was bit by a runaway log file this morning.
[14:19] <_mup_> Bug #958312: Change zk logging configuration < https://launchpad.net/bugs/958312 >
[14:27] mars: that's enough of a push for me. I'll start working on an SRU
[14:29] jcastro, did you get my mail, sir? :)
[14:31] yes, forwarded it on
[14:31] SpamapS, ok, thanks for looking into it
[14:32] jcastro, cool. :)
[14:39] imbrandon: run-as-hook is jimbaker's thing.. but I believe you just run 'jitsu run-as-hook ...'
[14:41] imbrandon: to be clear, you have to run it on the box with the unit agent
[15:05] SpamapS, imbrandon - you can use 'jitsu run-as-hook' on a juju machine OR a client box that's running the juju cli
[15:06] jimbaker: that's a bit crackful
[15:06] jimbaker: and as soon as we implement real ACLs, that won't work
[15:06] running it on a juju machine could be useful for working with cron, for example; on your client box, for doing introspection or triggering exec to run as a debug hook
[15:07] jimbaker: you're relying on wide-open-zk
[15:07] SpamapS, it's just a tool ;)
[15:08] jimbaker: yes, and I like that it goes beyond the usual limits
[15:08] but.. a bit crackful nonetheless
[15:08] SpamapS, zk acls change this, and that's good. but it could still be useful then
[15:08] actually you'll probably just spin up w/ the admin secret ;)
[15:09] SpamapS, yes, certainly for the admin side of things. and for running on a juju machine, presumably restricted by the same acls as that user agent
=== al-maisan is now known as almaisan-away
[16:18] I am trying to test juju on a private openstack cloud and am wondering how to set the ec2 api target. Can anyone help?
[16:19] spidersddd: if you have essex dashboard setup it will give you a pre-populated environments.yaml
[16:20] This is diablo with keystone, but I have an Essex test cloud I can get it from
[16:20] What is the process?
[16:20] spidersddd: it's in the same place as you get your credentials from
[16:20] I will check it out. Thank you.
[16:21] spidersddd: http://askubuntu.com/questions/94150/how-do-i-use-openstack-and-keystone-with-juju
=== niemeyer__ is now known as niemeyer
[17:03] negronjl: mira, I subscribed you to the bug about "the big list"
[17:56] how does one clone the juju charm repo for the latest ubuntu
[18:11] senior7515: you can use 'charm getall' from charm-tools
[18:11] senior7515: it's *very* slow
[18:11] senior7515: there are 78 charms now.. but we expect there will be 100's soon..
thousands some day
[18:12] senior7515: you can get individual charms with 'charm get name-of-charm'
[18:12] jimbaker: can you please add something to https://bugs.launchpad.net/juju/+bug/992329 explaining why it's important and what the impact of the bug is? I need that for SRU's.
[18:12] <_mup_> Bug #992329: Ensure Invoker.start is called from UnitRelationLifecycle usage. < https://launchpad.net/bugs/992329 >
[18:19] I have been working on getting juju working with an OpenStack Diablo install with keystone and having no luck. Can someone help me out with a more verbose method in juju?
[18:19] jcastro: ok
[18:20] All I am getting back is :
[18:20] 2012-05-15 11:16:56,425 DEBUG Initializing juju status runtime
[18:20] Traceback (most recent call last):
[18:20] Failure: twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
[18:20] 2012-05-15 11:16:56,510 ERROR Traceback (most recent call last):
[18:20] Failure: twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
[18:20] Connection was refused by other side: 111: Connection refused.
[18:20] 2012-05-15 11:16:56,510 ERROR Connection was refused by other side: 111: Connection refused.
[18:21] I know the packets are making it to port 3333 and the response is making it all the way back to the juju initiating host.
[18:22] can one specify the VPS deployment subnet with juju on EC2
[18:22] VPC**
[18:23] jujutest: we have not tried VPC
[18:23] jujutest: I suspect it does not work
[18:23] got you. Thanks a lot!
[18:24] negronjl: hey so all the code exists, etc. I guess we just need to integrate it?
[18:24] spidersddd: that means zookeeper isn't working most likely.
[18:24] jcastro: even better :)
[18:24] spidersddd: I assume you're getting that message back after bootstrap succeeded?
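For reference, a pyjuju environments.yaml aimed at an EC2-compatible OpenStack endpoint looked roughly like the sketch below. The field names follow pyjuju's ec2 provider of that era, but all hostnames, ports, and values here are illustrative placeholders, not a tested configuration:

```yaml
environments:
  openstack:
    type: ec2
    # nova's EC2 API endpoint (the "ec2 api target" asked about above)
    ec2-uri: http://cloud.example.com:8773/services/Cloud
    # nova-objectstore or swift's S3 frontend; juju needs this to store
    # charms and the bootstrap "map" (port 3333 in the discussion above)
    s3-uri: http://cloud.example.com:3333
    access-key: YOUR-EC2-ACCESS-KEY
    secret-key: YOUR-EC2-SECRET-KEY
    control-bucket: juju-some-unique-name
    admin-secret: some-long-random-string
    default-series: precise
```

The "Connection refused" traceback above is consistent with bootstrap having started but zookeeper on the bootstrap node being unreachable, which is a separate failure from the S3/object-store configuration.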
[18:24] negronjl: also, jono ended up with the ~charmers bottle of rum, so he's holding onto it for us
[18:24] spidersddd: also, do you have an S3 component to your diablo?
[18:24] spidersddd: juju needs an S3 (nova-objectstore will suffice)
[18:27] We have swift backed glance.
[18:28] spidersddd: ok, so are you pointing s3-uri to the swift S3 frontend?
[18:28] Glance frontend
[18:28] spidersddd: glance is not S3 :)
[18:29] Got it. Swift it is.
[18:29] spidersddd: http://docs.openstack.org/trunk/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html
[18:30] spidersddd: specifically you need your swift proxy server to have the snippet listed there
[18:46] SpamapS, i would recommend folks use the store instead of charm get.. unless they specifically want to develop a charm
[18:46] Thank you. No S3 support in our setup.
[18:46] hazmat: I would not
[18:46] hazmat: until there is a switch-charm .. the store is just for playing IMO
[18:46] hazmat: inability to fix your charm == fail in production
[19:07] <_mup_> juju/trunk r536 committed by kapil.thangavelu@canonical.com
[19:07] <_mup_> [trivial] add relation topology check was only verifying one endpoint for user exc instead of topo exc [r=jimbaker]
[19:16] SpamapS: you online?
[19:16] Destreyf: I am, though I am dist-upgrading so I may disappear ;)
[19:17] lol, figures.
Have you ever heard of the MAAS interface saying "Duplicate Mac" with no nodes appearing in the list (i used the enlist function and the node never showed up, but now i can't add manually either)
[19:20] Or do you know who i can talk to in order to get some feedback on the whole MAAS provisioning, as i can't get them to install properly 50% of the time, (grub rescue prompt, with out of disk when accessing (hda0,2)/boot/)
[19:28] Destreyf: I've never used MaaS
[19:29] Destreyf: #ubuntu-server , ping roakasox or Daviey
[19:30] SpamapS: how do you use Juju then :P
[19:30] (i don't have amazon ec2)
[19:35] Destreyf: ec2
[19:35] Destreyf: and the local provider for testing stuff
[19:35] Ah, i never got local provider to work either :P
[19:35] it just always hung on the first deploy command when it built the container
[19:40] destreyf: disable ufw and see if it makes a difference
[19:40] already did that
[19:41] ah
[19:41] no such luck
[19:41] then i got no clue
[19:41] that was on my home machine
[19:41] inside of virtualbox on an SSD array
[19:41] but, i'm not playing with the local containers :P i'm working on a MAAS deployment.
[19:41] Destreyf: hung or errored out?
[19:42] Hung
[19:42] left it running for 6 and 1/2 hours
[19:42] Destreyf: that sounds like errored out, not hung :)
[19:42] just didn't report the error where you could find it ;)
[19:42] well the juju -v debug-log didn't show anything, neither did the machine-agent.log inside of the data-dir
[19:43] and that was in an attempt to deploy just mysql+wordpress as the examples show
[19:50] Destreyf: there are like, 4 more logs
[19:50] Destreyf: a known problem w/ the local provider
[19:50] debug-log needs to be all inclusive
[19:51] Destreyf: there's master-customize.log , and then the unit logs..
:-P
[19:51] Ah, i hadn't looked there, however i tried several more times to no avail :P
[19:52] Destreyf: you are not alone :)
[19:52] But i did see a lot of people saying that the local provider was buggy and to just try a couple times :P
[20:04] it's just juju, zookeeper, and zookeeperd that i need for Juju correct?
[20:05] Destreyf: for local? You don't even need zookeeperd
[20:05] * SpamapS heads to lunch
[20:06] well i'm trying a from-scratch install of stuff
[20:06] so i'll be dealing with MAAS
[20:20] jcastro, ping
[20:32] MarkDude: hi
[20:36] Hey dude. I figure you may be recovered enough from UDS now. Now you get the after-UDS fun
[20:36] * MarkDude wants to see how to get the Juju in Fedora thing rolling
[20:37] I have one guy already signed on, and also Brandon said he would help
[20:37] I'm gonna need a bit more. I will need at least 2 points of contact for Juju folks. I'm guessing you are one
[20:38] I can then see about getting the rest of the details sorted on my side
[20:42] <_mup_> juju/local-cloud-img r489 committed by kapil.thangavelu@canonical.com
[20:42] <_mup_> cloud init file is passed through to lxc lib layer
[20:42] MarkDude: when you say point of contact
[20:42] do you mean technical or just otherwise?
[20:43] Well both of those would be nice
[20:43] also is it possible to put a list as a point of contact?
[20:43] sure, put me down for one
[20:43] hazmat: you fine being the technical POC for fedora folks for pyjuju?
[20:43] But mostly I figure having a few people would be good.
[20:44] * MarkDude assumes he should join some sort of mailing list for juju (never have too many of those)
[20:44] https://lists.ubuntu.com/mailman/listinfo/juju
[20:48] Ty
[20:49] jcastro, sure
[20:51] * MarkDude likes the idea of more stuff like this
[20:51] MarkDude: here's your guy: https://launchpad.net/~hazmat
[20:51] We are all on the same penguin team :D
[20:52] My LP account is sooooooo not updated.
[20:52] Perfect guys.
[20:53] so if we're currently submitting a charm, would y'all want we should add "maintainer" to avoid a need to update later?
[20:53] yeah it's probably a good idea to do that now
[20:54] * MarkDude is going to complete his report on going to UDS. And then see about doing some more public thing on my side of it, hopefully more interested people.
[20:54] k.
[20:54] The report starts, it was hella fun :D
[20:54] nathwill, you should have been at UDS
[20:54] * MarkDude will see you at OSCON
[20:55] markdude, i know :) half a dozen people have been on me about missing it :P
[20:55] markdude, yes you will
[20:55] we'll be the ones throwing eggs at you ;)
[20:56] Sure, I deserve it :P
[20:56] I will Scotchgard my penguin suit
[20:56] Thx jcastro hazmat
[20:58] nathwill, yeah.. the charm lint tool requires it now for things going into the 'official' namespace ( as opposed to ~personal)
[20:59] jcastro, here is the link for the Open Pixel Cup, the open source game thing - http://lpc.opengameart.org/ It's a great project that needs a bit more publicity
[20:59] hazmat, y'all got that pushed already? nice :)
[20:59] wink wink. nudge nudge :D
[21:00] nathwill, SpamapS did the magic
[21:00] ttyl
[21:03] well cool beans folks. thx :) i'll get the pending ones updated asap
[21:05] <_mup_> juju/unit-address-changes r513 committed by kapil.thangavelu@canonical.com
[21:05] <_mup_> unit address updates on agent restart, periodically checks for changes, and updates relations accordingly
[21:11] <_mup_> juju/docs/rest-api-spec r24 committed by kapil.thangavelu@canonical.com
[21:11] <_mup_> old rest spec revisited
[21:33] nathwill: I'll be sending out emails to maintainers this week before assigning them to all the charms in the store...
[21:33] nathwill: IIRC you have at least one charm in the store.. right?
[21:41] SpamapS, i have 2 in new-charm queue, none in store yet
[21:42] nathwill: alright good, then def add them ASAP.
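The maintainer field being discussed goes in the charm's metadata.yaml. A minimal sketch, with an illustrative name and values (only the field names reflect charm metadata conventions of the time):

```yaml
name: my-charm
summary: One-line summary of the service this charm deploys
maintainer: Jane Doe <jane.doe@example.com>
description: |
  Longer description of what the charm does and how to use it.
```

Adding it at submission time avoids a follow-up update once 'charm proof' starts enforcing it for the official namespace.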
:)
[21:42] :) yeah, will do as soon as i get to my box w/ my ssh keys
=== niemeyer__ is now known as niemeyer
[22:10] m_3: got a second?
[22:24] thomi: hey
[22:25] m_3: Hi, thanks for reviewing the quassel-core charm (at: https://bugs.launchpad.net/charms/+bug/999439). I've updated the branch with the requested changes, but I'm not sure if I need the relation and interface bit.
[22:25] <_mup_> Bug #999439: Need charm for quassel-core < https://launchpad.net/bugs/999439 >
[22:25] the only thing that can connect to the core is the 'quassel-client' package, so it's probably not reusable
[22:26] ...I also don't understand what an 'interface' is in this context.
[22:28] thomi, it's the communication protocol for a relation
[22:29] ahh ok
[22:29] thomi: you can probably not provide an implementation for quassel client since it looks like they're all GUI and that's a rare case..
[22:29] thomi, ie.. the 'mysql' interface defines a protocol where on join of a new service, it creates a db, user and sets that on its relation, and the other side can read that info in its rel changed hook
[22:29] ok, that makes sense
[22:30] SpamapS: right, since the client isn't ever going to be on the cloud, it doesn't make sense to define that interface I guess
[22:30] well that's saying a lot I think
[22:30] what if you wanted to make a robotic tester for the client?
[22:31] pop up a cloud instance with VNC as the X server and some program that remote-controlled it as part of automated testing
[22:31] ahh, OK
[22:31] or what about bare metal for an IRC kiosk? :)
[22:31] *rare* cases
[22:31] but not "never" :)
[22:31] fair enough :)
[22:31] I'll get on it :)
[22:32] seems like the server part of it is pretty tiny
[22:34] SpamapS: hey ninja, I saw a branch be proposed for subordinates and ports.
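The 'mysql' interface protocol hazmat describes above (provider publishes db/user/password on the relation; consumer reads them in its relation-changed hook) can be sketched as follows. relation-set and relation-get are real juju hook commands, but here they are stubbed as shell functions (with underscores) so the sketch runs standalone; the key/value names are illustrative.

```shell
#!/bin/sh
# Sketch of the 'mysql' interface exchange, with juju's hook tools stubbed out.
set -e
STATE="$(mktemp)"
relation_set() { printf '%s\n' "$@" >> "$STATE"; }   # stand-in for relation-set
relation_get() { sed -n "s/^$1=//p" "$STATE"; }      # stand-in for relation-get

# mysql (provider) side, on join of a new service: it would create the db
# and user, then publish them on the relation
relation_set database=wordpress user=wp password=s3cr3t

# wordpress (consumer) side, in its relation-changed hook: read what the
# provider published and configure the app with it
DB_NAME="$(relation_get database)"
DB_USER="$(relation_get user)"
```

The point of naming the interface is exactly this contract: any charm that speaks "mysql" agrees on which keys appear on the relation and which side sets them.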
[22:34] * jcastro wants to demo mod_spdy every day
[22:36] heh
[22:38] quassel-client is also on android and windows too, so the only thing in the archive
[22:38] but not the only thing
[22:41] thomi: I was thinking maybe a detachable client that might live in the cloud. it's not that big of a deal, but it's probably worth doing. biggest thing was to remove the template relations and then make the metadata match the set of hooks you implement
[22:42] m_3: yup, makes sense now that I know what it does
[23:14] anyone know what the cause of http://paste.ubuntu.com/989853/ is ?
[23:14] oh oh! I bet bash isn't installed
[23:15] ah, not at /usr/bin/bash
[23:19] heh.. is it ever there?
[23:29] SpamapS: /usr/local/bin/bash on this FreeBSD box :P
[23:30] SpamapS, no :-)
[23:30] SpamapS, I hadn't noticed before though as my config-changed hook wasn't executable, and so not running
[23:31] james_w: charm proof should have squawked at you for that
[23:31] W: all charms should provide at least one thing
[23:31] heh
[23:32] that's it?
[23:32] it should warn about non executable hooks
[23:32] * SpamapS checks
[23:32] aha!
[23:32] it checks install, start, and stop
[23:33] but not config-changed
[23:34] fixed in trunk
[23:38] SpamapS, cool thanks
[23:38] plus I'd fixed it already
[23:39] but I wanted to see what it said, and found that to be a rather amusing warning
[23:42] SpamapS, would it then be considered bad form to include a non-executable data file in hooks?
[23:43] nathwill: put it somewhere else in the charm
[23:43] like /data maybe
[23:43] nathwill: but not hard/fast rule... just my pref to only have hooks in hooks/
[23:43] hrm. alright... i only needed one file and was trying to avoid making extra top-level dirs, but..
definitely makes sense
[23:44] adding to my to-fix list, lol
[23:44] :)
[23:45] CWD of hook during execution is top-level charm dir (aka $CHARM_DIR)
[23:51] also avail as env var of the name noted
[23:51] juju should be warning about non-exec hooks as well
[23:51] i noticed go-juju just transparently makes them exec
[23:53] https://bugs.launchpad.net/charms/+bug/999990
[23:53] <_mup_> Bug #999990: New charm: tarmac < https://launchpad.net/bugs/999990 >
[23:54] \o/ bug number
[23:54] so close
[23:54] quick everyone try and get the good ones!
[23:54] hazmat: several charms have hooks that're just links to a single non-hook-named file.. usually in the hooks/ dir... just a note that we probably don't wanna break that
[23:55] james_w: wow... makes you wanna submit a few more bugs to see what happens with rollover huh?
[23:55] m_3, noted
[23:55] pg sequences go for a while beyond 100000!
[23:55] we'll find out in ~30 minutes, so hopefully it doesn't go boom!
[23:55] but it would be fun to claim the num ;-)
[23:56] yup
[23:56] 999993
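The layout advice from the end of the discussion (data files outside hooks/, reached via $CHARM_DIR, and hooks made executable or they silently never run) can be sketched as below. This builds a throwaway charm directory so it is self-contained; the data/ directory and file names are conventions for illustration, not anything juju mandates.

```shell
#!/bin/sh
# Sketch: charm layout with a data file outside hooks/, assuming illustrative names.
set -e
CHARM_DIR="$(mktemp -d)"   # juju sets and exports this for real hooks
export CHARM_DIR
mkdir -p "$CHARM_DIR/hooks" "$CHARM_DIR/data"
echo "key=value" > "$CHARM_DIR/data/app.conf"

# a hook that reads the bundled data file via $CHARM_DIR
printf '#!/bin/sh\ncat "$CHARM_DIR/data/app.conf"\n' > "$CHARM_DIR/hooks/config-changed"
chmod +x "$CHARM_DIR/hooks/config-changed"   # without this, the hook never runs

OUTPUT="$("$CHARM_DIR/hooks/config-changed")"
```

Since the hook's working directory is $CHARM_DIR, the same file could also be read with a relative path like data/app.conf from inside the hook.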