imbrandon | s3 | 00:21 |
---|---|---|
imbrandon | yea | 00:21 |
imbrandon | hazmat: s3, gonna switch to Akamai ( rackspace cloudfiles ) real soon ( probably have the update pushed by UDS ) was just talking to Joey and marcoceppi earlier about it | 00:22 |
imbrandon | real cdn, faster, same price | 00:22 |
imbrandon | lots of JS optimization coming too | 00:23 |
imbrandon | now that the server side is manageable | 00:23 |
hazmat | imbrandon, cool, although frankly just having more points of presence than s3 eu would be a big win (i.e. any cdn would be a big improvement) | 00:24 |
lifeless | hazmat: how many pops does s3 eu have ? | 00:24 |
hazmat | lifeless, doesn't really matter when i'm not in the eu ;-) | 00:24 |
imbrandon | heh | 00:25 |
imbrandon | lifeless: not sure but s3 is only served from the region it's stored in | 00:25 |
imbrandon | unless you use cloudfiles service | 00:25 |
hazmat | or cloudfront | 00:25 |
imbrandon | then it's more like a cdn , origin pull that uses s3 as origin | 00:25 |
imbrandon | err yea cloudfront | 00:25 |
imbrandon | bah | 00:26 |
imbrandon | names all so close :) | 00:26 |
hazmat | imbrandon, mix and match clouds is a confusing biz ;-) | 00:26 |
imbrandon | heh yea | 00:26 |
imbrandon | i think that's how it may be with omg tho until we grow | 00:26 |
imbrandon | OSAPI support | 00:27 |
imbrandon | rack for cdn, aws for deploy + s3 db dump backups | 00:27 |
imbrandon | least that's the tentative plan i came up with today | 00:27 |
imbrandon | gotta sell it to the others | 00:27 |
imbrandon | but it seems like the right thing for the moment | 00:28 |
imbrandon | then when juju can move so can we, very very easy | 00:28 |
imbrandon | but yea we've squeezed pretty much all we're gonna get out of pure httpd enhancements, maybe a bit more, but the rest will mostly be code and front end, right now 80% of the page req 0-to-done is front end rendering anyhow | 00:30 |
imbrandon | thus we need to work on the JS badly | 00:30 |
imbrandon | and that's another place where newrelic shines , man i love that company | 00:32 |
imbrandon | i'm almost as much a newrelic fanboy as i am apple and github HAHHAHA | 00:32 |
imbrandon | monitor.us support has gotten much better and closer to the newrelic level of service here recently too, but i've not had a chance to try it myself yet | 00:33 |
lifeless | "Unlimited Application Insight at Light Speed!" wtf | 00:34 |
imbrandon | oh and if you ever consider exceptionhub for a project, dont, not even worth the $9 a quarter it costs | 00:34 |
lifeless | imbrandon: have you used tracelytics.com ? | 00:35 |
imbrandon | it's so bad i actually started a self hosted opensource version and have it 80% complete, just to spite them | 00:35 |
imbrandon | lifeless: nah, not heard of them | 00:35 |
imbrandon | getexceptional and exceptionhub | 00:35 |
lifeless | imbrandon: js exceptions - I need that glued into LP OOPS someday | 00:36 |
imbrandon | are the only two that i found quickly that had php APIs and js APIs | 00:36 |
lifeless | Just haven't gotten around to it | 00:36 |
lifeless | already have analysis console of course ;) | 00:36 |
imbrandon | lifeless: i can get ya the glue, its very easy | 00:36 |
imbrandon | i am not great on the py part for lp; you smooth the edges on that and i'll snag my bits out and give 'em to ya, it's a very simple idea these places do | 00:37 |
imbrandon | once it's captured it's just JSON'd and compressed then posted to the server | 00:37 |
imbrandon | server does all the work | 00:37 |
lifeless | so, basically I need a dict sent to a web service | 00:38 |
imbrandon | yup | 00:38 |
lifeless | http://bazaar.launchpad.net/~canonical-launchpad-branches/python-oops/trunk/view/head:/oops/config.py#L54 | 00:38 |
lifeless | describes the keys we use today | 00:38 |
imbrandon | kk | 00:38 |
lifeless | they are all optional but the more the better | 00:38 |
imbrandon | oh that's not many | 00:38 |
imbrandon | i was expecting 54 | 00:39 |
imbrandon | lol | 00:39 |
imbrandon | :) | 00:39 |
lifeless | we use bson for the rabbitmq glue / disk store - (but json would be fine too, obviously can't do binary in that case) | 00:39 |
imbrandon | yea, well there might be a lib but it would add weight to the thing | 00:39 |
imbrandon | not sure if the diff is worth it | 00:39 |
lifeless | I'd use json to start with | 00:39 |
lifeless | lingua franca of the open web | 00:40 |
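lifeless's binary caveat is easy to demonstrate: JSON can't carry raw bytes, so anything binary has to be wrapped (base64, say) before it goes over the wire, whereas BSON carries it natively. A tiny sketch with a made-up payload:

```python
import base64
import json

payload = {"id": "abc123", "attachment": b"\x89PNG\r\n..."}  # illustrative only

# json.dumps(payload) would raise TypeError (bytes aren't JSON serializable),
# so binary values get base64-encoded first; BSON would carry them natively.
payload["attachment"] = base64.b64encode(payload["attachment"]).decode("ascii")
print(json.dumps(payload))
```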
imbrandon | wouldn't be hard to try / untry once in place anyhow | 00:40 |
imbrandon | right | 00:40 |
lifeless | ok, time to put on the hacking music and put my head down for a bit. | 00:40 |
imbrandon | yea every place i've seen just gzip encodes a json req, no matter how the lang glue captures the exceptions | 00:40 |
imbrandon | then posts it, the server gunzips it and ties it to an account via the api key sent along | 00:41 |
imbrandon | and stuffs it into a db for analytics later | 00:41 |
lifeless | right | 00:42 |
lifeless | so id is a guid or other random thing | 00:43 |
* imbrandon really needs to bite the bullet and get comfy with python, i mean i know the basics , honestly more than some novices i know but still .... mid-new-year's resolution ? heh | 00:43 | |
lifeless | reporter should be configurable, e.g. LP would use 'Launchpad-js-prod' | 00:43 |
imbrandon | right | 00:43 |
lifeless | topic would probably be also configurable (e.g. LP might supply the topic in the page body, for correlation purposes) | 00:44 |
lifeless | ditto branch_nick and revno, LP would inject that | 00:44 |
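The flow lifeless and imbrandon are describing — a dict of optional keys, gzipped JSON, POSTed to a collector — might look roughly like this in Python (the endpoint, auth header, and extra fields are made up; only the key names come from the conversation, and the full list is in the python-oops config.py linked earlier):

```python
import gzip
import json
import uuid
import urllib.request

# Hypothetical report using the keys discussed above (all optional).
oops = {
    "id": str(uuid.uuid4()),           # "a guid or other random thing"
    "reporter": "Launchpad-js-prod",   # configurable per deployment
    "topic": "some-page-id",           # e.g. supplied in the page body
    "branch_nick": "launchpad-prod",   # injected by the app
    "revno": "15123",
    "message": "TypeError: foo is undefined",  # illustrative extra field
}

# gzip-encode the JSON body and POST it; the server side gunzips, ties the
# report to an account via the API key, and stores it for later analysis.
body = gzip.compress(json.dumps(oops).encode("utf-8"))
req = urllib.request.Request(
    "https://oops-collector.example.com/report",  # hypothetical endpoint
    data=body,
    headers={
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",
        "X-Api-Key": "your-api-key",              # hypothetical auth header
    },
)
urllib.request.urlopen(req)
```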
imbrandon | https://github.com/bholtsclaw/exceptional-php/blob/master/exceptional/remote.php | 00:44 |
imbrandon | is the send bits i would convert to yer need | 00:44 |
imbrandon | to give ya an idea | 00:45 |
lifeless | ok, well I have one other thing I *have* to get done today | 00:45 |
imbrandon | kk | 00:45 |
lifeless | I might whip up the web service end of this after that, can't promise it tho. we'll see. | 00:45 |
imbrandon | yea i'll muck with this in between my charms , i never speed too much at one moment , task flip | 00:45 |
imbrandon | sure thing | 00:46 |
imbrandon | spend* | 00:46 |
imbrandon | it is very very nice once it's even halfway in place tho | 00:46 |
imbrandon | it's like, man why did i never do this 5 years ago | 00:47 |
lifeless | yeah | 00:47 |
lifeless | the zope and django versions are invaluable, key tools in LP problem diagnosis | 00:47 |
imbrandon | hrm | 00:47 |
imbrandon | zope /me runnnnnnnnnns | 00:48 |
imbrandon | lol jk | 00:48 |
lifeless | https://errors.ubuntu.com/ uses a common substrate | 00:48 |
imbrandon | isn't LP based on zope heavily, or was to begin with ? | 00:48 |
lifeless | yes, LP was one of the first zope3 things built | 00:48 |
imbrandon | ohhhh nice | 00:48 |
imbrandon | ( the ui ) | 00:48 |
lifeless | I wouldn't advise folk to get into zope3 tho | 00:49 |
lifeless | it meets a very niche set of requirements | 00:49 |
imbrandon | heh nah, that enews i just converted , well reimplemented , was zope something, never looked at the code only the business req , into drupal | 00:49 |
imbrandon | fun fun | 00:49 |
imbrandon | was solid and working for 10+ years tho | 00:49 |
imbrandon | prior to them contracting me to do that | 00:50 |
lifeless | prob zope2 then | 00:50 |
lifeless | different beast | 00:50 |
imbrandon | if you can tell from output its http://enews.penton.com | 00:50 |
lifeless | many things inherited from it, but many different and not necessarily better :P | 00:50 |
imbrandon | mine is http://enewspro.penton.com ;) | 00:50 |
lifeless | Server: Zope/(Zope 2.7.4-0, python 2.3.5, linux2) ZServer/1.1 | 00:51 |
imbrandon | :) | 00:51 |
lifeless | the pro one wants a password :> | 00:51 |
imbrandon | yea i never even had to open the code on that one, year long project with 6 guys under me full time | 00:51 |
imbrandon | still never touched it | 00:52 |
imbrandon | only the brd | 00:52 |
imbrandon | i was happy | 00:52 |
imbrandon | oh yea , prod i don't have a pass for | 00:52 |
imbrandon | you could hit http://dev.enews-pro.gotpantheon.com | 00:52 |
imbrandon | and use "admin" and "admin1" | 00:52 |
imbrandon | :) super secure | 00:52 |
imbrandon | dev server | 00:52 |
lifeless | Content Encoding Error | 00:53 |
lifeless | The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression. | 00:54 |
imbrandon | and no i didn't pick the pink and blue, it was handed to me, i only was tech engagement lead , i.e. head coder with ass on line | 00:54 |
lifeless | lololol | 00:54 |
imbrandon | Nice | 00:54 |
imbrandon | what browser ? | 00:54 |
lifeless | ff | 00:54 |
imbrandon | just a norm one ? | 00:54 |
lifeless | Accept-Encoding:gzip, deflate | 00:54 |
imbrandon | hrm | 00:54 |
lifeless | yeah, normal | 00:54 |
imbrandon | wow | 00:54 |
imbrandon | strange | 00:54 |
lifeless | Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 | 00:54 |
imbrandon | never had that from anyone else but the dev is on totally diff hardware / software than stage and prod too | 00:55 |
lifeless | Content-Encoding:gzipContent-Length:2522 | 00:55 |
imbrandon | dev == some funky outsourced nginx from india with mongodb cache | 00:55 |
imbrandon | heh | 00:55 |
lifeless | ctrl-shift-refresh fixed it | 00:55 |
imbrandon | stage and prod was in house over there | 00:55 |
imbrandon | ahh prob tried to send gz headers without telling ya | 00:56 |
imbrandon | first time | 00:56 |
imbrandon | or somethn | 00:56 |
imbrandon | anyhow it's a pretty simple app, builds out newsletters that go out daily for 100+ subcompanies | 00:56 |
imbrandon | something like 1.3 mil emails a month from that thing | 00:57 |
imbrandon | and it's really just a wysiwyg editor for a farm of F5's blasting constantly | 00:57 |
imbrandon | well only prod, dev wont actually send anything | 00:57 |
imbrandon | legs cut out from it on back end | 00:58 |
imbrandon | it thinks it does tho :) | 00:58 |
imbrandon | but yea all UI choices on that were not me at all | 00:58 |
imbrandon | i mean i put them there but was handed them | 00:58 |
imbrandon | and no input on changing them | 00:58 |
imbrandon | none | 00:58 |
imbrandon | grrr | 00:58 |
imbrandon | but yea pretty much all functional code there i either wrote or designed and hand picked the dev to complete it | 00:59 |
imbrandon | then did the review for merge | 00:59 |
imbrandon | along with another trusted senior dev | 00:59 |
imbrandon | to check me own arse | 00:59 |
imbrandon | considering how much that thing sends tho, and then archives forever ( in an archive db but same ui) | 01:00 |
imbrandon | is impressive to me even having done it | 01:00 |
imbrandon | the real one's probably been in production full blast since mid january and i bet around 5 to 8 mil nodes are already in the archives | 01:01 |
imbrandon | but i finished the proj and opted to find another late feb | 01:02 |
imbrandon | so not 100% on that one | 01:02 |
imbrandon | dev is on my farmed out server btw, you're not hackin the gibson :) | 01:02 |
imbrandon | i need to take the code down really | 01:03 |
imbrandon | yea so if you ever get an "Industry" newsletter of some type , like trucker magazine or Win SQLPro | 01:05 |
imbrandon | look at the bottom for "Penton Media" its those freaks | 01:06 |
imbrandon | they own like every industry mag possible and are borderline spammers, legit but only barely, thus me choosing to move on once the project was in a state i did not mind handing off | 01:07 |
imbrandon | mmmm i do like that ui a lot ( not meaning the standard ubuntu branding parts, altho those are bad ) but what chart widget/lib/whatever is that | 01:10 |
imbrandon | is it open ? or avail at least ? | 01:10 |
imbrandon | would be kinda nice to grab some of the new relic info and sanitize it then use that to represent it via their api | 01:11 |
imbrandon | instead of the embedded charts they give | 01:11 |
imbrandon | was gonna use google charts but that looks much slicker and doesn't make my cpu fan kick on | 01:11 |
imbrandon | lol | 01:11 |
lifeless | imbrandon: yes, all open | 01:12 |
imbrandon | nice i'll have to add that to my note pad to check out later then | 01:12 |
* imbrandon keeps an oldschool pen and paper otherwise shiny would overrule work | 01:13 | |
imbrandon | for todo's later in the week | 01:13 |
lifeless | SpamapS: land your patch | 01:13 |
lifeless | SpamapS: for txaws | 01:13 |
imbrandon | wow wait a min, LP is fully open now ? | 01:14 |
imbrandon | wow i really need to get out of my little hole in the sand and look around a bit | 01:14 |
lifeless | LP has been fully open for a few years | 01:15 |
imbrandon | SpamapS: is awsome even deployable, i can't find any actual code or instructions to piece existing code together if i wanted to use it | 01:15 |
imbrandon | lifeless: yea, just now noticing, how dumb of me | 01:16 |
imbrandon | heh | 01:16 |
* imbrandon remembers when some ppl would sign NDAs to help with it prior | 01:16 | |
imbrandon | in fact william grant, isn't he on the LP team now | 01:16 |
imbrandon | bah, JS code | 01:17 |
lifeless | yes | 01:17 |
lifeless | he is | 01:17 |
imbrandon | cool yea he was an excellent contributor when we used to be working on the same bits of ubuntu | 01:18 |
imbrandon | glad for him | 01:18 |
imbrandon | very young too iirc ( well then , likely a full man now ) | 01:18 |
imbrandon | lifeless: ohh this is much easier than i thought it would be | 01:19 |
imbrandon | i _might_ have this ready in just a few | 01:20 |
imbrandon | def not long tho | 01:20 |
imbrandon | $json_ret = $u->url_get_contents($apiurl); | 01:32 |
imbrandon | bah | 01:32 |
imbrandon | lifeless: i got to run for a bit, promised someone i'd help them IRL for a few , i've got a good chunk of this ripped out , it's pretty isolated to begin with as i built it from 2 or 3 others' ideas i had evaluated, anyhow i'll jot ya an email if you're not around when i get back | 02:08 |
imbrandon | probably only an hour or so | 02:08 |
=== Furao_ is now known as Furao | ||
=== hspencer is now known as hspencer[afk] | ||
SpamapS | lifeless: I thought I did | 05:10 |
SpamapS | lifeless: I don't see any pending reviews for txaws anyway | 05:15 |
bkerensa | SpamapS: can you have a look at https://code.launchpad.net/~bkerensa/charms/precise/locker/trunk | 05:17 |
bkerensa | I have a bug open for it | 05:17 |
SpamapS | bkerensa: you might not have noticed this, but precise's nodejs is only one patch level behind upstream | 05:22 |
bkerensa | SpamapS: oh | 05:22 |
SpamapS | bkerensa: I bet you can use the stock nodejs/npm from precise | 05:22 |
bkerensa | SpamapS: Yeah I will go ahead and fix that | 05:23 |
SpamapS | bkerensa: also you open port 8042 twice. The one after 'start' is far more appropriate :) | 05:23 |
bkerensa | :D | 05:24 |
SpamapS | bkerensa: other than that it looks pretty clean. :) You may want to explain in the README that scaling out is not possible. | 05:25 |
SpamapS | anyway, about to pass out.. so tired | 05:26 |
lifeless | SpamapS: the indicators one ? | 05:35 |
lifeless | SpamapS: if so, cool | 05:35 |
=== lynxman- is now known as lynxman | ||
=== almaisan-away is now known as al-maisan | ||
=== al-maisan is now known as almaisan-away | ||
=== daker__ is now known as daker | ||
SpamapS | lifeless: yeah, in trunk aws-status is an indicator now :) | 13:36 |
SpamapS | https://bugs.launchpad.net/charms/+bug/991980 | 14:29 |
SpamapS | doh | 14:29 |
_mup_ | Bug #991980: Oneiric official branches are all locked <Juju Charms Collection:New> < https://launchpad.net/bugs/991980 > | 14:29 |
senior7515 | SpamapS: did you guys, for lack of a better word, push the precise charms to the correct repo? | 14:50 |
senior7515 | with m_3 | 14:51 |
m_3 | senior7515: yes, the lp:charms/hadoop is in the right place... and no, we haven't deprecated lp:charms/hadoop-{master,slave,mapreduce} yet | 14:59 |
m_3 | senior7515: those were the ones you were interested in right? I recommend using lp:charms/hadoop on precise | 15:00 |
m_3 | senior7515: that combo passed basic tests this morning | 15:01 |
senior7515 | m_3: thanks a lot. | 15:01 |
senior7515 | yeah | 15:01 |
senior7515 | about to do some testing on that. | 15:01 |
senior7515 | thanks | 15:01 |
m_3 | senior7515: cool... lemme know how it goes. love to know more about your project sometime too if it's not proprietary | 15:03 |
senior7515 | m_3: basically just trying to compute averages of a large mongo collection on the fly.. | 15:04 |
senior7515 | averages per day, hour, month, year, graphs, etc. | 15:04 |
m_3 | senior7515: oh cool... are you planning to fork the lp:charms/hadoop to work with mongo? it's pure hdfs atm | 15:05 |
senior7515 | well, not sure how this works yet.. :) But basically there is this mongo-hadoop lib | 15:06 |
senior7515 | you drop it in the jar directory of hadoop | 15:07 |
m_3 | senior7515: shouldn't be too much work... just thinking offhand... mongo's pretty easy to work with in general, and the mongo charm manages replsets pretty well | 15:07 |
senior7515 | and it gives you a streaming api | 15:07 |
m_3 | awesome!... yeah, then that shouldn't be hard to change the charm to do then | 15:07 |
senior7515 | ohh i see, is a charm some sort of config file ? | 15:08 |
m_3 | senior7515: it might even make sense to make that a config option in the primary hadoop charm... have to think about it | 15:08 |
senior7515 | true | 15:08 |
senior7515 | m_3: any idea why juju ssh doesn't work ? | 15:09 |
m_3 | senior7515: yeah, poke around in the charm... the key places to look are hooks/install and hooks/datanode-relation-changed | 15:09 |
m_3 | no clue... there are several situations where 'juju ssh' can be broken depending on your provider | 15:10 |
m_3 | this is more a problem with the bare metal or openstack providers | 15:10 |
senior7515 | i'm on ec2 | 15:10 |
m_3 | ec2 and lxc providers shouldn't have that problem | 15:10 |
m_3 | ah, then you should be good | 15:10 |
senior7515 | it doesn't work though :) | 15:10 |
m_3 | make sure your client version is up to date and matches what your environments.yaml says for 'juju-origin' | 15:11 |
m_3 | i.e., both should be 'distro' or both should be 'ppa' | 15:12 |
senior7515 | ohh k checking | 15:12 |
senior7515 | m_3: sooo question what does it mean that the client matches juju-origin. I mean I only have one env set up 'prod' and it has juju-origin:distro | 15:14 |
senior7515 | what else should be there ? | 15:14 |
senior7515 | i just did an update on the system and didn't update juju… so I assumed it's up to date | 15:14 |
m_3 | ah, so on your client machine you can install juju from the universe archives (just apt-get install juju) | 15:14 |
m_3 | or from a ppa (a more recent version of juju) | 15:15 |
senior7515 | I did… my client is a server on ec2 cuz the mac juju is broken .. I reported the bug | 15:15 |
senior7515 | but besides the point | 15:15 |
senior7515 | yeah | 15:15 |
senior7515 | is updated from the ppa | 15:15 |
senior7515 | the client machine is 10.04.4 LTS soo I had to use the ppa | 15:16 |
m_3 | ah, ok... then if your client machine is installed from the ppa, your environments.yaml file should have 'juju-origin: ppa' | 15:16 |
m_3 | this makes sure that the version on your client matches the version installed on the instances | 15:16 |
m_3 | shouldn't matter too much, but that might explain an ssh breakage | 15:17 |
m_3 | oh...hmmm | 15:17 |
m_3 | if your client is on ec2, then I recommend precise | 15:17 |
m_3 | 10.04 is pretty old for juju | 15:17 |
senior7515 | hmm i see... | 15:17 |
m_3 | safer route to take... especially if you spun it up just for this... definitely use precise | 15:18 |
senior7515 | ok got you.. soo basically spawn up precise new instance. install juju there… | 15:18 |
senior7515 | will it reattach to the hadoop-master/0 | 15:19 |
senior7515 | if I do juju bootstrap ? | 15:19 |
senior7515 | k so changing to ppa didn't work, spawning new instance for juju | 15:22 |
senior7515 | so basically juju is a client, and it also spawns a server on ec2 or whatev… with juju software installed. so the client sends commands to the juju server that it spawns and then it does the dirty work of installing and configuring instances etc ? | 15:23 |
m_3 | senior7515: safest, if you can do it, is just drop it all... then spin up a precise client... then bring up a new stack of services based on the lp:charms/hadoop charm.. and test from there | 15:23 |
senior7515 | ok... | 15:23 |
senior7515 | will do. then... | 15:23 |
m_3 | senior7515: yes, great description | 15:23 |
senior7515 | ok off to killing and spawning | 15:26 |
m_3 | that's totally my life lately... waiting on ec2 :) | 15:27 |
=== TheMue_ is now known as TheMue | ||
senior7515 | m_3: soo ok I finally terminated everything and i'm up and running with juju server and client installed on precise | 16:13 |
senior7515 | do I just do juju deploy hadoop-master ? | 16:13 |
senior7515 | or I have to clone the charms repo | 16:13 |
senior7515 | and install from a local dir ? | 16:13 |
m_3 | senior7515: sorry.. in a meeting atm... clone lp:charms/hadoop, not hadoop-master | 16:15 |
m_3 | the one hadoop charm itself is the one to use | 16:15 |
m_3 | the README file in lp:charms/hadoop has some great walkthroughs | 16:16 |
* m_3 finding link | 16:16 | |
m_3 | https://code.launchpad.net/~charmers/charms/precise/hadoop/trunk | 16:17 |
m_3 | senior7515: ^^ | 16:17 |
m_3 | senior7515: and http://jujucharms.com/charms/precise/hadoop | 16:18 |
m_3 | the last link shows the README with the examples | 16:18 |
* m_3 gotta run | 16:18 | |
senior7515 | m_3: thanks a lot! | 16:19 |
=== avalanch_ is now known as avalanche123|w | ||
jcastro | arosales: https://blueprints.launchpad.net/sprints/uds-q?searchtext=juju | 17:36 |
jcastro | arosales: ok so I made all the juju sessions people wanted | 17:36 |
jcastro | but then I realized that I have super launchpad powers so they become autoapproved. | 17:36 |
m_3 | jcastro: thanks man! | 17:36 |
jcastro | arosales: what's supposed to happen is you need to approve the ones under servercloud-q-juju | 17:36 |
* arosales is looking | 17:37 | |
jcastro | so I think to make up for that you can just tell me which juju ones you think you'd like to see consolidated, etc. | 17:37 |
jcastro | or if you think they're good to go | 17:37 |
jcastro | the only one I'm confused about is this intelligent brain thing | 17:37 |
arosales | 17 for servercloud track . . . | 17:37 |
jcastro | m_3: ^^^ more PhD research? :) | 17:38 |
arosales | jcastro: does that fit into the schedule ok? | 17:38 |
jcastro | yep, it does | 17:38 |
m_3 | jcastro: can you kill the first versions "/juju" of the two blueprints that were already there? | 17:38 |
jcastro | I think I'd rather overbook a little bit because we can always say "we don't need this, remove it." on the fly | 17:38 |
arosales | jcastro: if it fits into the schedule ok, its ok with me. | 17:38 |
jcastro | whereas "oh no we need to find room for 5 more sessions this week" would be tough | 17:38 |
m_3 | jcastro: that stuff's just about autoscaling tools | 17:38 |
m_3 | we can bump it to next series if you want | 17:38 |
m_3 | people have been asking for it | 17:39 |
arosales | m_3: and SpamapS probably have a better pulse on things that can consolidated though | 17:39 |
jcastro | m_3: done | 17:39 |
m_3 | danke | 17:39 |
jcastro | m_3: ok I can fix the description so that's more obvious | 17:39 |
m_3 | by all means | 17:40 |
arosales | are these dups? | 17:40 |
arosales | https://blueprints.launchpad.net/juju/+spec/servercloud-q-juju-intelligent-infrastructure | 17:40 |
arosales | https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-juju-intelligent-infrastructure | 17:40 |
jcastro | the first one is for the wrong project, I just declined it | 17:40 |
jcastro | so we're good | 17:40 |
arosales | jcastro: same with https://blueprints.launchpad.net/juju/+spec/servercloud-q-juju-integration | 17:41 |
m_3 | arosales: yeah, I'd mistakenly put them under /juju instead of /ubuntu | 17:41 |
arosales | and https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-juju-integration | 17:41 |
jcastro | yep, killed that one too | 17:41 |
arosales | ah ok | 17:41 |
jcastro | I just refiled them under ubuntu that's why they're there, though I wonder why LP doesn't drop them from the list | 17:41 |
arosales | jcastro: thanks for documenting those blueprints | 17:42 |
m_3 | hulk should be happier now | 17:42 |
jcastro | pro tip for you guys: | 17:42 |
jcastro | always schedule early, even if the bp is empty. | 17:42 |
m_3 | he's totally gotta be watching for that name now :) | 17:42 |
jcastro | because when the slots fill you are doomed | 17:42 |
jcastro | so I like to claim my space early | 17:42 |
m_3 | jcastro: good to know | 17:42 |
jcastro | then go back and fill it with content, agenda, etc. | 17:42 |
arosales | jcastro: thanks for the info, that makes sense | 17:43 |
jcastro | m_3: when these schedule in about 15 minutes, what you'll want to do | 17:43 |
jcastro | is go into each session from the schedule | 17:43 |
jcastro | and like, layout an agenda and stuff | 17:43 |
arosales | jcastro: if we need to we can shorten the mysql ultils session to just one hour. | 17:43 |
jcastro | m_3: that way you can just walk into the session and know what to do, as I always forget and UDS is a brain shuffling spaz fest, so I put my notes in the etherpad before I get on the plane. | 17:44 |
arosales | jcastro: if servercloud spots become cramped | 17:44 |
m_3 | jcastro: you're critical in a couple of those too | 17:44 |
jcastro | yeah | 17:44 |
jcastro | i've criticalled you guys on some of mine as well | 17:45 |
jcastro | arosales: http://summit.ubuntu.com/uds-q/2012-05-07/display | 17:45 |
jcastro | looks like only about 1/3 of the rooms have been booked per hour | 17:45 |
jcastro | so we should be good | 17:45 |
arosales | jcastro: ok, thanks. | 17:45 |
jcastro | arosales: I'm going to try to make Thursday as juju empty as I can so we can have time for contest/demo planning, etc. | 17:46 |
m_3 | yeah, we totally have a lot of overlap | 17:46 |
jcastro | I'm just waiting an hour for the schedule to settle, then I'll make ours nice and smooth | 17:46 |
m_3 | charm release process -vs- charmstore maintenance | 17:46 |
m_3 | juju best practices -vs- charm workflow | 17:47 |
m_3 | cool | 17:47 |
jcastro | I want workflow to be more about fixing the 10 step process | 17:47 |
jcastro | than the actual best practices for authors | 17:47 |
* m_3 just wants to go to desktop icon sessions all week | 17:48 | |
arosales | jcastro: cool, thanks. | 17:48 |
arosales | jcastro: also it looks like http://summit.ubuntu.com/uds-q/meeting/20357/servercloud-q-cloud-imgages/ | 17:48 |
arosales | is having some issues | 17:48 |
arosales | I think utlemming filed a bug | 17:48 |
arosales | it's linking to the wrong blueprints, and thus not scheduling with the correct folks who have limited availabilit | 17:49 |
arosales | availability | 17:49 |
jcastro | I think the misspelling threw it off at first, I'll check | 17:49 |
arosales | jcastro: thanks for taking a look. | 17:50 |
arosales | jcastro: if need be, can we manually move it to monday or tuesday | 17:50 |
jcastro | k | 17:50 |
arosales | jcastro: thanks for all help, and documenting the juju blueprints | 17:51 |
jcastro | arosales: tuesday, 16:15 | 17:54 |
arosales | I think that works, I'll confirm with utlemming in ubuntu-server | 17:55 |
m_3 | jcastro: there were two on charm testing... James had a charm-testing in general | 18:01 |
m_3 | in addition to the unit-tests | 18:01 |
jcastro | m_3: oh those I combined, you need them separate? | 18:01 |
m_3 | no, combined is fine with me | 18:02 |
m_3 | also juju-upstart-integration was william's... | 18:02 |
m_3 | don't know if James added himself on that or you did | 18:02 |
m_3 | jcastro: Barton's asking for you | 18:05 |
jcastro | ok, hopping on | 18:05 |
m_3 | you need the #? | 18:05 |
flepied | how can we customize the installed packages on all systems ? to use a configuration engine like puppet or chef, I need it pre-installed on all system... | 18:11 |
m_3 | flepied: it's easy to add to the top of the install hook... there're discussions of adding some packages to metadata, but it's easy to do in the install hook | 18:13 |
m_3 | flepied: you can add ppas or other package sources there too.. it's just shell script | 18:14 |
flepied | m_3, I don't want to do apt-get in the install hook, I want to use puppet for example... | 18:15 |
m_3 | flepied: right, I'm saying you can bootstrap puppet in the install hook | 18:15 |
m_3 | that's easy with puppet... chef's a little more work, but it's easy to add too | 18:16 |
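A rough sketch of what m_3 describes — bootstrapping puppet at the top of a charm's install hook and handing off to a manifest bundled with the charm. Hooks are just executables, so plain shell works the same way; the manifest paths here are hypothetical:

```python
#!/usr/bin/env python
# hooks/install (sketch): bootstrap puppet, then apply a manifest shipped
# with the charm in masterless ("puppet apply") mode.
import subprocess

subprocess.check_call(["apt-get", "install", "-y", "puppet"])
subprocess.check_call([
    "puppet", "apply",
    "--modulepath=hooks/puppet/modules",    # hypothetical charm layout
    "hooks/puppet/manifests/site.pp",
])
```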
adam_g | jimbaker: i saw something in my scrollback re that relation-get bug i hit last week. has that landed yet? | 18:17 |
flepied | m_3, in which install hook ? | 18:17 |
m_3 | flepied: actually, that might make a great subordinate charm... | 18:18 |
m_3 | that can be deployed alongside of any charm | 18:19 |
m_3 | flepied: but note that juju does the "service orchestration" or coordination... best integration so far is puppet in masterless or chef in solo mode | 18:19 |
m_3 | flepied: this is a topic for the ubuntu developer summit next week | 18:19 |
m_3 | flepied: (juju integration in general) | 18:20 |
m_3 | capistrano's in there too | 18:20 |
jimbaker | adam_g, not yet | 18:20 |
jimbaker | however, you can try out that branch and see that it works for you or not - i just need to add appropriate testing before it can land | 18:21 |
adam_g | jimbaker: ah, ok. sounds good | 18:21 |
jimbaker | adam_g, you can try out a version of the change in lp:~jimbaker/juju/debug-relation-hook-context, just set juju-origin as usual | 18:22 |
flepied | m_3, I don't think it will work to use a subordinate as it will be too late to be able to use puppet to install what is needed | 18:23 |
m_3 | flepied: we have charms that apply puppet manifests from within hooks | 18:25 |
m_3 | that works great | 18:25 |
m_3 | especially with templates | 18:25 |
m_3 | "install" hooks are called before "started" hooks | 18:26 |
m_3 | so you can make sure anything you need is in there | 18:26 |
flepied | m_3, yes but if you want to use puppet to have system independence then you don't want to install it via apt-get as it'll break this independence | 18:28 |
m_3 | flepied: sorry, not sure I understand | 18:28 |
m_3 | flepied: install hooks can install stuff from other gems or even directly from github for that matter | 18:29 |
m_3 | flepied: jenkins is a good example where the install source is a config parameter for the charm | 18:30 |
m_3 | flepied: something similar can be done with the puppet install itself | 18:30 |
flepied | m_3, I have an example here: https://code.launchpad.net/~flepied/charms/precise/mongodb/puppet | 18:32 |
m_3 | flepied: yeah, I see | 18:34 |
m_3 | certainly putting something in the metadata that's a pre-dep of the charm would be a nice feature for this | 18:34 |
m_3 | but you could certainly install puppet and deps at the top of the install hook | 18:35 |
m_3 | packages or from a frozen repo | 18:35 |
m_3 | there's some way to do a preseed for the cloud images too | 18:35 |
m_3 | (that's used in MaaS) | 18:35 |
m_3 | but that depends on your provider... I've been thinking in this conversation we've been talking about ec2 | 18:36 |
flepied | m_3, yes but I would like to have the charm independent from the system to ease the adoption of juju from other systems than apt based ones | 18:36 |
adam_g | support for user-definied cloud-init /w juju would be great here :) | 18:36 |
m_3 | adam_g: yup | 18:36 |
flepied | adam_g, yes that's what I would like | 18:36 |
m_3 | flepied: until then, I'm thinking the install hook installs from other than apt packages | 18:37 |
m_3 | but add a bug request for it unless it's already there | 18:37 |
adam_g | flepied: this is obviously a chicken and egg scenario. if you don't have control over your machine provider to have that installed pre-juju, you'll need to install it manually in the install hook | 18:37 |
flepied | adam_g, it could be cool to find a way to add a script to be run at provision time imho | 18:38 |
m_3 | flepied: oh, one other option | 18:38 |
adam_g | flepied: and if you're really running this on multiple distros, that would be the only distro-specific bit in any of your hooks (if puppet is used everywhere else). a simple shell switch statement would make it easy to install via the correct package manager | 18:39 |
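A sketch of adam_g's suggestion: one small shim picks the right package manager so every other hook can stay distro-agnostic (shown in Python rather than a shell case statement; the package-manager list is illustrative, not exhaustive):

```python
import shutil
import subprocess

def bootstrap_puppet():
    """Install puppet with whichever package manager the host has."""
    if shutil.which("apt-get"):
        subprocess.check_call(["apt-get", "install", "-y", "puppet"])
    elif shutil.which("yum"):
        subprocess.check_call(["yum", "install", "-y", "puppet"])
    else:
        raise RuntimeError("no supported package manager found")
```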
m_3 | you can specify ami... so that can be custom if you want... not a good answer, but... | 18:39 |
adam_g | flepied: is this EC2 or ? | 18:39 |
flepied | adam_g, no just a general question about juju | 18:39 |
m_3 | again, jenkins is a good example of that config | 18:40 |
flepied | m_3, jenkins install hook uses apt-get so I don't see how it solves the independence issue | 18:41 |
m_3 | flepied: one of its options is installing directly from upstream | 18:43 |
m_3 | we have ones that use tarballs even | 18:43 |
m_3 | but it's an easy switch | 18:43 |
m_3 | that's configurable in config.yaml | 18:43 |
m_3 | (first entry in jenkins config.yaml iirc) | 18:44 |
flepied | m_3, I don't see it but I think it'll not solve the issue for system independence as we need to bootstrap the configuration engine from what is already installed | 18:46 |
flepied | and it'll not always be possible | 18:48 |
adam_g | flepied: forgetting about juju, how do you usually ensure puppet is installed pre-first boot? i've always used cloud-init | 18:49 |
flepied | adam_g, yes and I think we need the same thing in juju | 18:52 |
flepied | specified at the environment level | 18:52 |
adam_g | flepied: so (on ec2) either a custom AMI with puppet pre-installed, or a way to add that package to what gets installed via the cloud-init Juju generates? | 18:55 |
flepied | adam_g, yes that would be cool for all the providers | 18:56 |
_mup_ | Bug #992153 was filed: juju deploy stuck on Starting container... <juju:New> < https://launchpad.net/bugs/992153 > | 19:31 |
senior7515 | m_3: are you around ? | 20:35 |
senior7515 | m_3: sorry had to fix a bug in prod :) back to devops. | 20:35 |
senior7515 | m_3: soo this line doesn't work any ideas.. I can post my juju status if that helps but the machines deployed correctly, it's just the names that don't work: juju add-relation hadoop-master:namenode hadoop-slavecluster:datanode | 20:36 |
senior7515 | actually the last thing could be answered by anyone, posting my status… 2 secs | 20:36 |
senior7515 | http://paste.ubuntu.com/958429/ | 20:38 |
senior7515 | i deleted the dns names but other than that | 20:38 |
senior7515 | I'm not sure why associating the nodes doesn't work. | 20:39 |
senior7515 | perhaps spamaps knows :) | 20:39 |
m_3 | senior7515: it looks like you're still trying to deploy using the hadoop-master charm | 20:39 |
m_3 | senior7515: you need a single charm for hadoop... lp:charms/hadoop... nothing else | 20:40 |
m_3 | senior7515: you deploy it according to instructions in the README for that charm | 20:40 |
senior7515 | hmmm yeah | 20:40 |
m_3 | the "hadoop-master" that it references is just the service _name_, not the charm | 20:40 |
senior7515 | ohhh lol | 20:40 |
senior7515 | k | 20:40 |
senior7515 | you are right | 20:40 |
m_3 | it's still just deployed from the lp:charms/hadoop | 20:41 |
m_3 | there's a big difference between the two | 20:41 |
senior7515 | sweet thanks. err.. that was silly of me. | 20:41 |
m_3 | senior7515: cool! | 20:41 |
m_3 | senior7515: np.. it's confusing... we need to figure out the best way to deprecate charms | 20:41 |
* m_3 note to self to try to really prevent charm renaming :) | 20:42 | |
senior7515 | hehe soo this command is awesome juju destroy-environment | 20:42 |
m_3 | totally | 20:43 |
senior7515 | hmm wish amazon was faster | 20:46 |
senior7515 | m_3: ok soo that was easier.. all I had to do was copy and paste the commands and add --repository ~/charms … sweet. soo when I expose it | 20:47 |
senior7515 | do I just expose the master ? right | 20:47 |
senior7515 | or I need to expose all of them | 20:47 |
nathwill | hey all, having a bit of a problem with juju becoming confused when trying to process config values of either "YES" or "NO" (puts them out as "True","False", then chokes), no matter what i set the config key type as "string", "boolean", even tried regex, with validator 'YES|NO' | 20:53 |
nathwill | is this something worth reporting a bug about? or is this expected behavior? | 20:53 |
nathwill | i.e. here's the charm that works fine on its own, http://bazaar.launchpad.net/~nathwill/charms/precise/vsftpd/trunk/files, and here's a config override that makes juju blow up: http://pastebin.ubuntu.com/958460/ | 20:56 |
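For context on nathwill's report: bare YES/NO are booleans in YAML 1.1, which matches the True/False coercion described above; a quick PyYAML illustration (the key name is made up, and whether juju's own parser is the exact culprit isn't confirmed here):

```python
import yaml

# YAML 1.1 treats bare YES/NO (and yes/no, on/off) as booleans...
print(yaml.safe_load("ssl_enable: YES"))    # {'ssl_enable': True}
# ...while quoting the value keeps it a plain string.
print(yaml.safe_load("ssl_enable: 'YES'"))  # {'ssl_enable': 'YES'}
```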
m_3 | senior7515: only expose master yes | 21:06 |
senior7515 | ohh ok how to unexpose.. :) | 21:06 |
senior7515 | i exposed all | 21:07 |
m_3 | nathwill: yes, it's expected behavior, and yes perhaps it's something to report a bug about | 21:07 |
m_3 | senior7515: actually, I have no idea | 21:07 |
m_3 | senior7515: i'd have to look... one sec | 21:07 |
nathwill | m_3, ok. i'll send in a bug report... | 21:07 |
senior7515 | just that | 21:07 |
senior7515 | juju unexpose | 21:07 |
senior7515 | i think | 21:07 |
senior7515 | i just looked at the help | 21:08 |
senior7515 | but will it be able to communicate between master and slaves | 21:08 |
senior7515 | without exposed ports ? | 21:08 |
senior7515 | that's cool | 21:08 |
m_3 | nathwill: I find it quite annoying myself | 21:09 |
m_3 | nathwill: strict type checking for no real reason | 21:10 |
m_3 | senior7515: expose is just to the outside world... within ec2 they can talk to each other | 21:10 |
senior7515 | ohh cool… soo i basically never have to expose | 21:10 |
senior7515 | anything if all my stuff runs on ec2 | 21:11 |
senior7515 | except 80 or whatev I need cool good to know | 21:11 |
senior7515 | thanks | 21:11 |
senior7515 | unexposing | 21:11 |
_mup_ | Bug #992237 was filed: juju fails to override charm config correctly when values are "YES" or "NO", treats as boolean even when setting type as string <juju:New> < https://launchpad.net/bugs/992237 > | 21:29 |
senior7515 | how does one install more than one package on a host | 22:10 |
senior7515 | say I deploy package x | 22:10 |
senior7515 | then I also want package/charm y on the same host that x deployed to ? | 22:10 |
chmac | Can I use juju to install a handful of applications, users, etc on a standalone dedicated server? Like puppet / chef? | 22:14 |
chmac | Ok, not yet according to my reading of the FAQ | 22:22 |
=== avalanche123|w is now known as avalanche123 | ||
senior7515 | err… after reading faq.. only one service per machine | 22:29 |
m_3 | senior7515: you can deploy some services subordinate to others on the same machine, but not two primary services | 22:48 |
m_3 | senior7515: it's for things like loggers, monitors, storage clients, etc | 22:48 |
senior7515 | m_3: ahh how come ? just haven't gotten around to it or something purposely not included ? perhaps dependency management is a pain.. dunno. | 22:52 |
m_3 | senior7515: subordinate services just landed a couple of weeks ago | 22:57 |
m_3 | senior7515: lots of stuff that we have planned for development | 22:58 |
senior7515 | m_3: is it possible to deploy hadoop on one computer for dev only ? | 23:50 |
senior7515 | with juju ? | 23:50 |
senior7515 | the instructions spin up a bunch of nodes | 23:50 |
senior7515 | not sure if possible | 23:50 |