[01:39] hazmat: right, any juju after "reboot" support cannot spawn nodes for a juju before reboot support.
[01:40] hazmat: so distro support is incompatible in oneiric
=== almaisan-away is now known as al-maisan
[11:52] hello, can someone help me?
[11:53] I'm trying to deploy openstack on VMs and got a weird error.
[11:56] I have deployed all charms successfully, except nova-volume and openstack-dashboard
[11:56] I'm getting these errors:
[11:56] Error processing 'cs:precise/nova-volume': entry not found
[11:57] Error processing 'cs:precise/openstack-dashboard': entry not found
[11:57] Any ideas?
[12:01] vladas, those charms are not in the store
[12:01] vladas, they need to be deployed from bzr
[12:04] Oh, I see. Thanks, will try that.
=== al-maisan is now known as almaisan-away
=== vladas is now known as hitexlt
=== carif_ is now known as carif
=== zyga is now known as zyga-food
=== zyga-food is now known as zyga-food]
=== zyga-food] is now known as zyga-food
=== zyga-food is now known as zyga
=== almaisan-away is now known as al-maisan
=== al-maisan is now known as almaisan-away
=== mrevell_ is now known as mrevell
[15:34] gooooooooooood morning juju town!
[15:35] gooood morning to you! (sorry for the reduction in O's)
[15:36] It's like an echo.. it can't sustain.. ;)
[16:09] gd morning
[16:11] marcoceppi: done with wordpress? Are you done yet? How about now? done..?
[16:11] ;)
[16:11] no yes! nope not yet
[16:11] it's in bzr now though
[16:12] Ugh, that reminds me I need to finish nagging the last of the un-maintained charms
[16:13] SpamapS, met someone at cls12 who is working on the newest owncloud charm
[16:13] koolheadd17: oh?
[16:14] koolheadd17: did you hand it off to them or they just wanted to make it better?
[16:14] SpamapS, he wanted to make it better
[16:14] jimbaker: jitsu --help looks *amazing* now btw. THANK YOU
[16:14] koolheadd17: *awesome*
[16:15] he is working on integrating NFS-based support too, i guess bkerensa knows him
[16:21] cool
[16:22] i have a few charms in baking state though :)
[16:22] cool
[16:22] 'morning all
[16:23] I'm working on making nagios more awesome
[16:23] SpamapS, waoo :)
[16:23] and then hopefully collectd too
[16:23] SpamapS, is someone working on Monit?
=== salgado is now known as salgado-lunch
[16:23] koolheadd17: https://bugs.launchpad.net/charms/ ... look there
[16:24] k
[16:24] SpamapS, so jcastro's google doc list and all is primitive now
[16:26] koolheadd17: it has been dead for months now
[16:26] it should have been deleted if it hasn't been already
[16:30] ejat, around
[16:30] koolheadd17 : yups ..
[16:30] ejat, are you coming to china for the openstack asia pacific event
[16:31] hmmm not sure yet …
[16:31] how about u ? confirm ?
[16:31] not yet, need to talk to my manager on the same, and the visa might be difficult
[16:31] ejat, you should definitely go there
[16:32] * ejat in the midst of changing employment … so could not give a say yet .. but hopefully …
[16:33] ejat, ol
[16:33] k
[16:33] SpamapS : is there any tutorial to deploy OS using juju after installing via MAAS ?
[16:34] * ejat means for a production environment ..
[16:34] or a how-to wiki .. .
[16:39] ejat: yes there's something on wiki.ubuntu.com
[16:43] https://help.ubuntu.com/community/UbuntuCloudInfrastructure <-- ok thanks
[16:55] SpamapS, thanks. over at oscon with m_3 working on demo stuff
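For reference, a minimal sketch of the "deploy from bzr" route suggested at [12:01] for charms that are not yet in the store. The branch URL is a placeholder and ~/charms is just an example; pyjuju expects a <repository>/<series>/<charm> layout:

# grab the charm branch into a local repository (URL is a placeholder)
mkdir -p ~/charms/precise
bzr branch <nova-volume-branch-url> ~/charms/precise/nova-volume
# deploy with the local: prefix instead of cs:
juju deploy --repository ~/charms local:precise/nova-volume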
[16:55] jimbaker: woot
[17:04] m_3: so the dependency should not be tight according to the upstream author, so I removed the ppa portion
[17:05] jimbaker, m_3 rackspace is a fail btw
[17:05] jimbaker, m_3 they implement their own version of credential passing
[17:05] instead of keystone v2, and even fixing that there are some other issues
[17:05] jimbaker, m_3 i'd go with ec2 or hpcloud
[17:06] hazmat: I love you man... `jitsu export | jitsu import -enewenv -` worked
[17:07] bkerensa: thanks
=== salgado-lunch is now known as salgado
[17:22] SpamapS: got a chance to look further re: haproxy-overhauled ?
[17:28] negronjl: I want to wrap up this nagios thing I did, and then I'll dive back into it
[17:29] SpamapS: you hate me :)
[17:29] hazmat: wtf?! no keystone?
[17:37] negronjl: because you're beautiful
[17:44] <_mup_> Bug #1025382 was filed: Add a generic constraint "persistent-root" < https://launchpad.net/bugs/1025382 >
[18:06] SpamapS, its a bastardized keystone
[18:07] SpamapS, mostly its just different credential passing
[18:07] hazmat: ahh the fun of apache licensing
[18:07] SpamapS, openstack is a kit for building your own snowflakes ;-)
[18:07] "Let's create a single place to dump code that nobody runs"
[18:07] btw.. 'juju status | ccze -A' is purty
[18:08] we should like, build in --color to the juju cli
[18:08] SpamapS, don't even get me started on how broken rspace is about their client tools.. the nova client has had broken packages for months, and their swift client doesn't work with their custom keystone either.
[18:08] 'started' is green, 'error' is red..
[18:09] nice
[18:09] * hazmat should file a bug on status improvements
[18:09] hostnames are blue
[18:10] and known things are purple.. which is weird.. nrpe and mysql are purple.. but nagios is not
[18:11] wow
[18:11] jitsu export is.. like, everything I've been looking for
[18:11] ^5's hazmat
[18:11] SpamapS, :-)
[18:11] That should be like, feature #1 to go into goju
[18:11] SpamapS, i haven't pimped it out to folks yet, cause people hate features..
[18:11] but perhaps you guys can
[18:11] It's enough to prove the concept is valid
[18:12] we can move forward from there once we have a permanent home for the feature :)
[18:12] SpamapS, this is pretty much the exact syntax from the austin/lxc sprint ml discussion
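The jitsu round-trip from [17:06], spelled out a little. This assumes juju-jitsu is installed and that "newenv" is a second bootstrapped environment in ~/.juju/environments.yaml; the intermediate file name is made up, and the trailing "-" telling import to read stdin is taken from the command shown in the log:

# dump the current environment's services and relations, then replay them elsewhere
jitsu export > env-topology.json
jitsu import -e newenv - < env-topology.json
# or as the single pipe shown above:
jitsu export | jitsu import -e newenv -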
[18:41] bcsaller, is the repairs branch ready for review?
[18:42] hazmat: since we are not supporting restarts I was just testing changes that use start-ephemeral to see if its faster for people
[18:42] bcsaller, pls stop adding additional features to the bug fix, its much, much more important to release the fix for the original bug than to add extraneous things.
[18:43] moreover most of those things are inappropriate for an SRU
[18:44] and at the very least need to have separate bugs/documentation
[18:44] hey, before that bug you were the one that had me working on this ;)
[18:44] bcsaller, i wanted a fix for the bug, not for any of the other things that are in that branch
[18:45] some of those things can definitely go into an SRU, but they're still more appropriate and better documented with separate branches and bugs.
[18:46] and specifically for the download feedback, that was in the context of informing of downloads when doing cloud-images, not tying the existing ma into desktop-notifications.
[18:47] feels much more like pending stuff that's been discussed and you decided to put into this branch, but it was never part of the expectation for the bug in question
[18:48] a good SRU btw, would be to *just* remove the 'start on' from the upstart jobs
[18:48] * hazmat nods
[18:49] so you can land whatever fix you want
[18:49] Once its fixed in quantal, I can do a much simpler patch for SRU :)
[18:52] actually for the upstart jobs, its better to just not have them if it doesn't survive restart, and just fork the relevant processes.
[18:52] rather than dropping files into sys dirs that are irrelevant
[18:53] yes that would be great
[18:54] tho its nice to be able to easily restart them
[18:54] hazmat: upstart has a notion of user jobs
[18:54] but I think they're off by default IIRC
[19:33] hrm, getting lots of tracebacks when running relation-get -r$relid in an upgrade-charm hook
[19:34] http://paste.ubuntu.com/1095430/
[19:38] Ok, I *think* I have working nagios+nrpe in a generic fashion...
[19:39] to the point where this is all that's needed to add monitoring to wordpress:
[19:39] http://paste.ubuntu.com/1095441/
[19:41] cool
[19:41] james_w: I haven't forgotten custom nagios plugin support either :)
[19:42] :-)
[19:42] james_w: lp:~clint-fewbar/charms/precise/nrpe/trunk ... but.. I'd wait for my README updates and blog post.. there's a *LOT* there :)
[19:43] time for lunch
=== salgado is now known as salgado-afk
[21:37] is there a limit to the number of lxc clients that juju could effectively start?
[21:40] dpb___: ZooKeeper keeps everything in RAM.. and each one needs at least 400MB of disk space...
[21:40] dpb___: so, yes, your box will crumble well before any logical limits are reached
[21:41] oh, nice.
[21:42] dpb___: I've done 7 at a time before.. it made my SSD feel slow. :)
[21:42] All that dpkg :-P
[21:42] ya, I just tried to spin up 10 and its zookeeper is throwing errors in debug-log
[22:09] new one for me: 2012-07-16 22:08:28,265 ERROR Invalid Remote Path provider-state
[22:17] SpamapS: even the local ZK keeps a full FS and not just pointers to it ? so ... in LXC a single NRPE deploy would be 1.2GB of disk just to store the charm
[22:18] ... that seems wrong.
[22:18] ok, for future reference, data-dir cannot do ~ substitution. :)
[22:18] ( also seems wrong there is 400MB of data in the charm ... )
[22:22] imbrandon: no it doesn't keep *EVERYTHING* in it
[22:22] actually more than 2.4GB ... 400 in the local: repo ~/.juju/charmname, 400 more, and then the ZK bootstrap lxc has 400mb on the zk fs, and then it deploys to another lxc that adds 400 in /var/lib/juju/service/*, and finally another in /mnt or wherever its running from, the final 400mb...
[22:22] imbrandon: it keeps everything that is in the ZK tree in RAM
[22:22] right, i'm just saying a 400mb charm is 2.4gb deployed in LXC
[22:22] seems ... off
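The [22:18] data-dir gotcha, as a config sketch: pyjuju's local (LXC) provider wants an absolute path for data-dir, since ~ is not expanded there. The key names are the usual pyjuju ones, but treat the values, and the file as a whole, as illustrative only:

# write a minimal local-provider environments.yaml (overwrites the file -- a sketch only)
cat > ~/.juju/environments.yaml <<'EOF'
environments:
  local:
    type: local
    data-dir: /home/ubuntu/juju-local-data   # absolute path, not ~/juju-local-data
    admin-secret: change-me
    default-series: precise
EOF
juju bootstrap -e local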
[22:25] SpamapS: btw got me a shiny new fileserver powered on and in the spare bedroom ( my in-home DC , lol )
[22:28] now i have enough disks spun up, and in the right manner, that i can migrate things from 4 other machines I have around doing various things all to that one + any minor crons etc they ran, and I've already purchased a rsync.net account just to offsite-backup that one box, GREATLY simplifying my @home setup that has become a frankenstein over the years ... one server, running minimal services, with 12TB of raided storage and an offsite backup ready to go
[22:29] BUT the major win out of the whole thing , and what caused me to finally do it this weekend ? heheh in-house MAAS with OpenStack on 5 nodes ( got a lil laptop to use as the controller ) with Gigabit speeds
[22:30] woot woot
[22:30] :)
[22:31] hoping by Wed or so I'll have everything copied into place and verified etc etc + the boxes reprovisioned fo sum fun hehh
[22:31] ( i keep offsite backups of family members' data too so i can't afford to goof it up , lol )
[23:06] imbrandon: where do you get 2.4gb from 400MB?
[23:06] imbrandon: you want CoW for the charm on local? Thats a bit far reaching. ;)
[23:06] imbrandon: some day.. not today
[23:08] SpamapS: yea , i know its not fixable ( well, not in the current context, other things need to fall into place first )
[23:09] just had no real strong feeling about more data in the charm than config templates until just now tho
[23:09] and i realized that
[23:09] 400MB would be a bit weird
[23:09] i mean i didn't like it but was like meh, now it seems like a very very bad idea
[23:09] and yea, 2.4
[23:10] count it with me , maybe i'm wrong ...
[23:10] 400 in the charm.. 400 into the file storage.. 400 on the disk laid down.. thats 1.2G not 2.4
[23:11] ok so you have /var/lib/charm/400.mb then you juju bootstrap and it copies it to ~/.juju/400.mb and then again to the lxc zk/400.mb then you "juju deploy charm" and zk copies it to the new node at /var/lib/charm/400.mb and then the hooks fire and unpack it to /mnt/400.mb
[23:12] whats that add up to ... all on your laptop ... but even on ec2 thats a bit extreme
[23:12] oh, and there is also a cached copy in s3 as well ...
[23:15] imbrandon: I don't believe it copies it to ~/.juju if its a local:
[23:15] imbrandon: and it never copies the charm into zk
[23:15] zk is structure only
[23:15] imbrandon: /var/lib/charm also doesn't exist :)
[23:15] ahh ok so i was mixing up my zk/400.mb with s3/400.mb
[23:16] and thats where i deploy local: form
[23:16] from* so just used it in my head
[23:17] * imbrandon echo "JUJU_REPOSITORY=/var/lib/charms/precise/" >> /etc/profile.d/juju-charms
[23:17] :)
[23:18] Hrm, bug in subordinates
[23:19] nice clean place outside of my ~/Projects/juju/charms/ to let "charm getall" live and not mix unintentionally with the ones in Projects i'm actively developing :)
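The [23:17] one-liner with two small fixes: /etc/profile.d only sources *.sh files, and the variable needs an export. The path is illustrative; the repository root is the directory that contains the series directory (precise/), and with it set, deploying local: charms should not need --repository each time:

# write a profile.d snippet that actually gets sourced (note the .sh suffix and the export)
echo 'export JUJU_REPOSITORY=/var/lib/charms' | sudo tee /etc/profile.d/juju-charms.sh
# after a new login shell:
juju deploy local:precise/some-charm   # looked up under /var/lib/charms/precise/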
[23:19] if I destroy the primary service.. I never see a depart for the unit of the subordinate
[23:21] hrm
[23:21] race ?
[23:22] something like that
[23:22] 2012-07-16 16:17:03,362 unit:nrpe/5: statemachine DEBUG: relationworkflowstate: transition complete depart (state departed) {}
[23:22] i never took the time to dig into pyjuju enough to be effective even trying to look, since i'm just concentrating on other things waiting for go
[23:22] Thats the depart from the sub<->primary relationship
[23:23] ok ...
[23:23] so its possible it just commits hari-kari and never cleans anything else up
[23:23] so thats right ...
[23:23] oh well sure , well kinda
[23:23] it should be departing all of its non-primary relations first
[23:23] isn't that the point of its async callbacks, it can do it anytime, so it is right? its just sloppy
[23:24] like i said, not dug enough into this bit to be really informed
[23:24] just thought thats how it was tho
[23:24] Yeah, unless one of your callbacks does its own little self-suicide instead of telling the reactor to exit when its all done
[23:24] right, so its a bug but not quite the same kind
[23:24] this is all speculation
[23:25] just a bad implementation not accounting for any order
[23:25] I have a reproducible problem
[23:25] right right
[23:25] which causes my nagios stuff to never clean up after itself :-/
[23:25] just trying to help ya play strawman :)
[23:25] which sucks
[23:25] but you destroyed it
[23:25] why care ?
[23:25] because I've been able to do it without regenerating the whole nagios config every time
[23:25] I destroyed it
[23:25] so now I need to remove it from the nagios configs
[23:25] oh THAT hook isn't going ?
[23:26] thats the hook that isn't going
[23:26] but i thought that they all fire, just late ...
[23:26] or ... ok one sec let me re-read that above
[23:26] confused myself i think lol
[23:26] there's a relationship between nrpe<->nagios .. and one between wordpress<->nagios .. and when I destroy wordpress, its nrpe/X is gone.. but never departs from the nagios<->nrpe relationship
[23:27] ummm
[23:27] it shouldn't
[23:27] oh wait ...
[23:27] it /should/
[23:28] but didn't you tell me before that that might happen
[23:28] with something i did in the 1st newrelic ones
[23:28] <_mup_> Bug #1025478 was filed: destroying a primary service does not cause subordinate to depart < https://launchpad.net/bugs/1025478 >
[23:28] one*
[23:28] yes, but in this case the relation hasn't been broken.. nrpe and nagios are still related.. just a *unit* departed
[23:29] right, i didn't care cuz i dont work without it
[23:29] you could still try to be working ... if they both reported to the same nrpe ... but honestly thats the incorrect way to deploy nagios, i was taught
[23:31] here let me run through a line of how i was shown to deploy nagios when first learning about it at GSI long long ago, and this is from memory, but i think it will not be affected by the bug even if the bug needs fixed
[23:31] ok soooo
[23:31] service, we say a simple html app on apache
[23:31] is_principal = not (yield self._service.is_subordinate())
[23:31] I just love inlineCallbacks :-P
[23:31] easy, one check on port 80, nrpe checking it
[23:32] i actually do ... specifically recursive ones :(
[23:32] heheh
[23:32] anyhow so you got one nrpe on 80
[23:32] imbrandon: sorry, wtf are you getting on about nagios?
[23:32] you run 3 nagios in your scenario
[23:32] in the right way
[23:32] like how jenkins is run
[23:33] jenkins.qa.ubuntu.com doesn't run any jobs, other jenkins report their results into it
[23:33] so, in my world, NRPE is for checking things local to the box, and everything else the nagios server does direct against the host address.
[23:34] and then nsca goes the other way.. pushes things back from server -> nagios
[23:34] imbrandon: if you're talking about scaling out nagios.. we're not there yet. Let's just *configure* nagios first.
[23:35] nrpe local -> to one nagios per service --> one more nagios for the whole environment
[23:35] yeah
[23:35] slow your roll
[23:35] thats later
[23:35] yea, but if you do that then the bug don't matter
[23:35] yes
[23:35] is what i was pointing out
[23:35] yes it does
[23:35] because I'd still be monitoring a now deceased box
[23:35] no, because both services die then
[23:35] what do I need to do to push the charm review forward ?
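A toy illustration of the split described at [23:33] — NRPE for checks that are local to the box, the nagios server hitting everything else directly. The host address and thresholds are made up; the plugin paths are the stock Ubuntu ones from nagios-plugins / nagios-nrpe-plugin:

# on the monitored unit, an nrpe.cfg command for a check only the box itself can run:
#   command[check_disk_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
#
# from the nagios server, the two styles side by side:
/usr/lib/nagios/plugins/check_nrpe -H 10.0.0.5 -c check_disk_root   # local check, via NRPE
/usr/lib/nagios/plugins/check_http -H 10.0.0.5 -u /                 # remote check, direct to the host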
[23:35] You're going to run *ALL OF NAGIOS* on the box?!
[23:36] no , well thats how i said it, but not in reality
[23:36] I'm reminded of the perl joke here.
[23:36] it was for easy explaining
[23:36] lifeless: m_3 is supposed to pilot tomorrow, but I suspect he'll be busy prepping for OSCON demos .. so I might take his spot
[23:37] imbrandon: ok, so you're going to run a nagios for every service?
[23:37] but you end up with nrpe plugins all over , maybe even many on one box , then one nagios wherever per service name its related to; if it gets another relation name then it fires up another daemon of nagios , and then both report to a 3rd
[23:38] imbrandon: hi
[23:38] maybe even on that same box as well, but that 3rd will be the one the customer "reads"
[23:38] SpamapS: he is very busy preparing ^
[23:39] SpamapS: not just every service but every service relation, but they can all still just be on one "nagios" machine physically
[23:39] heya bkerensa , hows it goin
[23:39] imbrandon: so if I have 40 services, I have 40 /usr/sbin/nagios running? All due respect, but that sounds bat@!$% crazy
[23:40] imbrandon: nagios is perfectly capable of scaling one nagios daemon up to thousands of service checks
[23:40] lifeless: i'll have some time tonight and tomorrow as well, i'll review it and at minimum give ya feedback if I think its too complex for me to +1 alone
[23:40] :)
[23:40] imbrandon: nothing much, just hanging with all the juju folks except for SpamapS :D
[23:41] SpamapS: yea and no , as in yes 40, and no it's the norm, or at least how i've actually seen nagios deployed realworld
[23:41] imbrandon: also for this bug.. I have a workaround.. which is to just trash anything that belongs to the primary service even if the sub relationship still thinks it should be there.
[23:41] but seriously, 40 services in one env ?
[23:41] imbrandon: even 5.. I see no reason to do that.
[23:41] sure, nagios is designed just exactly for that and its light
[23:41] imbrandon: I've never seen nagios deployed that way or even heard of anyone doing it that way, FWIW
[23:41] imbrandon: and it would still suffer the same problem, because the nrpe relationship would still be left dangling
[23:41] (until now)
[23:42] its like built into the setup that one is normally a "collector" and gathers the others
[23:42] SpamapS: but it would only break one instance of the daemon
[23:42] that don't matter at that point anyhow
[23:42] imbrandon: yes thats one thing nagios can do, but thats for aggregating several monitoring boxes into one.. not for "the norm"
[23:43] sorry, i dont follow what you mean there
[23:43] heya elmo :)
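A hypothetical sketch of the cleanup workaround SpamapS mentions at [23:41] — trash anything that belongs to the departed primary's unit even if the subordinate relationship still lists it. This is not the actual nrpe/nagios charm code; the hook name, file layout, and one-config-file-per-remote-unit convention are all assumptions:

#!/bin/sh
# hooks/monitors-relation-departed on the nagios side (relation name is hypothetical)
set -e
# JUJU_REMOTE_UNIT is set by juju for relation hooks, e.g. "nrpe/5"
cfg="/etc/nagios3/conf.d/juju-$(echo "$JUJU_REMOTE_UNIT" | tr / -).cfg"
rm -f "$cfg"
service nagios3 reload || true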
[23:43] I could see an issue where nagios would be heavily single threaded and need more processes to scale up onto one box with lots of cores..
[23:43] but nagios is pretty well written and is almost always just waiting on slow network stuff and disk
[23:44] nagios already spawns processes to distribute check load
[23:44] Yeah, I've seen pretty moderate boxes keep up just fine with thousands of checks defined and running at pretty regular intervals
[23:44] elmo: yea, thats what i'm trying to unsuccessfully explain, that its very common to break nagios up like that
[23:44] or other arbitrary ways
[23:45] SpamapS: yea, i know for a fact there is a dell 2650 in the basement reporting about 10k checks :) [ its my brother-in-law's machine heh ]
[23:45] and those are only like c2d 1.6ghz or something silly
[23:47] ( that dumb dude went nuts and has like one website checked like for real 8 ways or something , like tcp, then telnet, then http 80, then text exists etc etc etc
[23:47] i think he was just bored or was trying to make it break )
[23:47] heh
[23:50] * imbrandon sets up 2 checks for php/python/rails apps ( on each node direct ), one to pull a txt file from /health-check/plain.txt and make sure its 200 with body "OK", and one to pull /health-check/dynamic.php{or .py or .erb/.rb} to get a 200 and an "OK" body thats printed by the code
[23:51] that should cover about anything as far as if the server is working ( not counting something that slips past lint and other checks at build and/or deploy and is an app error )
[23:52] imbrandon: IMO the really important monitor is the one that verifies traffic is flowing
[23:53] imbrandon: artificial checks are great, but I want to know that requests are happening at normal levels
[23:53] right, thats a whole nother class of check , as well as the disk space one, dunno how many times i've seen a log fill the box and totally hose everything
[23:54] even if most all logs are on other partitions or something , someone fraks up and writes their own, or logrotate dies, or some crap
[23:54] never fails
[23:55] * imbrandon thinks damn near every log should be sent to /dev/null on prod machines anyhow, with the exception of the auth log and dmesg for hardware failures etc
[23:56] but all services/daemons supporting services and apps should not need or really imho be forcefully sent to /dev/null in prod
[23:59] imbrandon: I think I'll stick with the "should be sent to a central logging host asynchronously", not /dev/null
[23:59] better ways to clicktrack or anything else they might provide, and you should have an identical setup in staging to debug issues, not similar, identical, that way if something is up its in a hardware log/syslog still cuz its hardware ( or more likely dell openmanage has already told nagios that a hdd has been spun and you just get the report in the NoC HUDs and call dell to add another when they show up in their 4 hrs and it gets hotswapped )
[23:59] imbrandon: if that host decides to devnull them, thats fine, but I want to be able to see them
[23:59] oh i know , just ranting at this point
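A sketch of the "central logging host, asynchronously" approach from [23:59], using rsyslog's forwarding; the log host name is hypothetical, and you'd normally keep local copies as well:

# forward all facilities/priorities to a central host (single @ = UDP, @@ = TCP)
echo '*.* @logs.example.com' | sudo tee /etc/rsyslog.d/90-forward.conf
sudo service rsyslog restart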