[00:15] hazmat: yeah I think going to the current series makes the most sense [00:43] <_mup_> Bug #984484 was filed: subordinate charms should be able to open ports < https://launchpad.net/bugs/984484 > [01:06] * SpamapS feels so lonely.. oh so ronery [01:06] oh wow, my aws bill last month was only $0.08, nice [01:07] i think they might have messed up tho lol [01:09] SpamapS: http://aws-s3.assets-online.com/pixeldrop/fun/macpro-apple-online-store.png [01:09] haha [01:09] SpamapS: and what do ya think about apache mod_spdy only on 443 and nginx only on 80, hrm :) [01:09] mine is hovering around $80 - $100 these days [01:09] I should probably destroy-env soon.. have had 4 m1.smalls running since Friday [01:10] yea normally mine's just under 100 [01:10] imbrandon: I love that idea. I think I can make that work just with the subordinate [01:10] imbrandon: I was thinking that I'd change it to just proxy to the local port 80. :) [01:11] SpamapS: cool cool, but that won't work, already thought about that [01:11] heh [01:11] spdy needs end to end, can't even have nginx reverse proxy 443 to it [01:11] so no one-second cache love for spdy [01:11] until nginx gains support [01:12] been reading up on it the last hour or so [01:12] but that's my extent lol [01:14] imbrandon: I was wondering about that [01:14] but yea, i am thinking a major reworking of the omg nginx stuff might be in my pipeline soon, as in i've already begun to formulate it in my head and on my dev server. some of the hasty things we did, while running awesome right now, i've been tuning and refining different ideas to make it sooo much better [01:15] but it will be between now and uds, and likely i'll grab a hallway session with you and marcoceppi and whomever else and go over it more in person [01:15] then MAYBE implement it after [01:15] imbrandon: if we can make it generic enough, it should be a good 'nginx' charm :) [01:15] right that's my ultimate goal [01:16] is to have a "full stack" but all be little sub charms where 
it makes sense [01:16] that way parts can be dropped out, like the php, and rails dropped in [01:16] and still share the good bits [01:18] but i think its gonna take me the next two weeks of iterations and stuff to have it be as compartmentalized as i'd like, and that means it will be a good time to "present" a semi if not fully working "stack" at uds between us that have interest in such things [01:18] and published "public" after that ( like blogged about etc, i'm sure it will be in the charm store much sooner ) [01:19] imbrandon: I don't actually see how mod_proxy can't be used [01:19] * SpamapS is trying it right now [01:19] least that's the .plan i've been working from the last week [01:19] SpamapS: let me find the link, on the spdy forum an official dev said it needed end to end to work right [01:19] one sec [01:21] here is where i got that info, it may be old or wrong [01:21] https://groups.google.com/forum/?fromgroups#!topic/mod-spdy-discuss/XCQG4w0plaE [01:21] the orig question is deleted now [01:21] it was there earlier [01:21] but the important part, the response, is still there [01:22] the orig question was about using nginx to proxy back to 443 spdy and then serve on 80 [01:22] or something [01:22] or no, the orig is there, i just had it collapsed [01:22] SpamapS: ^^ [01:23] am i reading that wrong, or are they just wrong, or is it old [01:23] from feb 4 [01:23] imbrandon: I don't think that's definitive [01:23] perfect, i would LOVE to be able to do it [01:24] in fact my "ultimate" setup i run on "my" sites even though i purport nginx is ... [01:24] apache based zend server on 8000, zend php on 127.0.0.1:9000 fpm, and then nginx reverse proxy with a 1 second cache to localhost 8000 [01:25] but that's because i actually pay the money to have zend server and zend studio [01:26] although zend server community edition might be nice too in a genericized charm [01:26] why apache zend on 8000? 
[01:26] arbitrary, just how i set it up years ago and i keep doing it that way [01:26] no diff between 8080 or whatever [01:27] no I mean why apache? [01:27] just can't or shouldn't use 9000, as fcgi and fpm use it by default, and don't use 10080-10088 cuz zend server uses those for the "gui" [01:27] oh zend server is built on apache [01:27] fpm seems to scale better than mod_php at this point. [01:27] its all one "package" setup [01:28] yea i use fpm [01:28] with apache [01:28] :) [01:28] ohhhhh i noticed too, apparently tcp fpm WAY out performs unix:/ socket fpm [01:28] i need to run some benchmarks to confirm but i've seen it multiple places now [01:29] what? [01:29] yea seems bas ackwards [01:29] that doesn't make much sense at all [01:30] so like i said i need to see with my own eyes, but the word is php unix sockets don't scale as nicely as php fpm on 127.0.0.1:9000 tcp [01:30] i think omg might be a good test for that theory at some point [01:31] it has the traffic to get "real" numbers and shouldn't hinder the site too badly if one is not as good as the other [01:31] but that's another "when i have time to test correctly" type thing [01:33] but yea i use apache because, well, 1) if you do it right apache CAN be nearly as nice as nginx, and i get all the .htaccess rewrite stuff out of the box etc, and 2) probably the bigger reason is it's what is supported and bundled with zend server, and the GUI tightly integrates with it; even though you CAN use other servers with zend php it's not the "package" deal [01:33] but nginx makes a killer reverse proxy on the same box, and even serves some vhosts directly [01:34] with the 1 sec cache. so personally on my server i get the best of both worlds so to speak [01:35] ok proxy does not work [01:35] definitively :-P [01:36] [Wed Apr 18 01:33:30 2012] [warn] proxy: No protocol handler was valid for the URL /favicon.ico. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule. 
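That "No protocol handler" warning usually means mod_proxy is loaded without its protocol submodule. A sketch of the fix the channel converges on below; the vhost details here are invented for illustration:

```apache
# Enable both the core proxy module and the http protocol handler first:
#   a2enmod proxy proxy_http
# Then, in the SSL vhost, hand requests to the app on local port 80:
<VirtualHost *:443>
    SSLEngine on
    ProxyPass        / http://127.0.0.1:80/
    ProxyPassReverse / http://127.0.0.1:80/
</VirtualHost>
```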
[01:36] SpamapS: enable mod_proxy_http too ? [01:37] oo perhaps I forgot that [01:37] heya elmo :) [01:38] imbrandon: ey [01:38] oh ok that *was it* [01:38] elmo: works! [01:38] eh, typing hard [01:38] imbrandon: hey even [01:38] https://ec2-107-22-22-168.compute-1.amazonaws.com/ [01:38] SpamapS: cool [01:38] :) [01:38] of course, statusnet is broken cause I forgot to configure it [01:39] nice [01:39] can you pipe it to 80 ? [01:39] i'd love to be able to do spdy over 80 [01:40] i only have an ssl cert for one domain, and i don't even use that domain nor have the ssl cert installed atm [01:40] lol [01:40] imbrandon: well no, apparently SPDY implies HTTPS [01:40] https://wwwbox.co is the only cert i have [01:41] SpamapS: yea i know, it's cuz google is an ass about it, no tech reason it can't work on 80, AND there is more initial latency on https even though spdy makes up for it later [01:41] spdy on 80 would be even faster :) [01:43] imbrandon: I think google is thinking about your privacy :) [01:43] or even about oppressive governments [01:43] self signed works fine [01:44] yea but if i'm gonna do it i don't want the user to have popups or warnings [01:44] so i'd buy one for brandonholtsclaw.com [01:44] if i get a referral code i can pick a godaddy.com cert up for like $16 a year on sale [01:44] :) [01:44] anyway, I'll have to wrap it up later tonight [01:45] ok cool, ping me when you're working on it, i'm very interested and doing related stuff [01:45] i'll be on late tonight [01:45] I think we can make it work, and awesome, by simply having the primary charm feed back where it stores static files, and then mod_spdy can serve stuff directly... 
eventually we can go with a 'worker' mpm apache and it should be able to at least try to keep up with nginx's crazy speed [01:46] imbrandon: lp:~clint-febar/charms/precise/mod-spdy/trunk [01:46] yea and i want a little help with my first sub too, when ya get back, i got the bones of it done and it might be ready for the store [01:46] but not sure 100% and no way to test for a few hrs [01:46] imbrandon: just have to add a ProxyPass and ProxyPassReverse to the top of files/all-ssl and it works [01:46] kk [01:46] i'll try that here in a few moments [01:46] imbrandon: oh, and 'a2enmod proxy proxy_http' [01:46] yea i have those already [01:47] anyway, out [01:47] l8tr [01:48] yea apache with only the BARE minimum modules installed can come very close to nginx, nginx still beats it in the very very heavy traffic places, but i'm talking 100+ req a sec or more [01:49] until that point it can keep up great if configured right [01:50] and even more with varnish etc, but i'm loving nginx and kinda dropped varnish and its vcl's from my toolbag in favor of learning nginx better to do the same job and more [01:51] nginx with a 2gb ramdisk ( in a beefy server with like 8+ gb ram ) and pointing fastcgi_cache or proxy_cache at the ramdisk, wow wee [01:51] speed out the yang, it actually becomes a cpu bottleneck [01:52] before anything else [01:54] SpamapS: imbrandon: upstream nginx on twitter says "May" for nginx/spdy. 
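The ramdisk cache setup described above might look roughly like this; the mount point, zone name, and sizes are invented for the example:

```nginx
# tmpfs mount assumed, e.g. in /etc/fstab:
#   tmpfs /var/cache/nginx-ram tmpfs size=2g 0 0
proxy_cache_path /var/cache/nginx-ram levels=1:2 keys_zone=ramcache:64m
                 max_size=1900m inactive=10m;

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8000;   # the backend app server
        proxy_cache ramcache;
        proxy_cache_valid 200 1s;           # the "1 second cache" trick
    }
}
```

With a 1s validity, a burst of identical requests only ever hits the backend once per second, which is where the cpu-bound behavior mentioned above comes from.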
[01:54] just now getting to the point in my devops experience where, until now, faster was better ALWAYS, but a few of the sites ( like grammy.com ) challenged that notion for me lately, where a slightly slower config that's able to handle a more sustained load is actually better, whereas even other high traffic sites i've engineered, like pets.com, never hit that mark [01:55] imbrandon: I would say a good way to get ready is add an upstream/ppa flag to the soon-to-exist-should-already-be-in-progress nginx subordinate [01:55] jcastro: yup that's the idea, working on doing that tonight with some help/overlap from SpamapS [01:55] so when it's released you just flip the one bit [01:55] May is relatively eons from now anyway [01:55] yup yup, i like that idea [01:56] yea but it will get pushed sooner, once more ppl pick up on the new spdy release [01:56] more will have interest and work on it more, even upstream, as things pick up [01:56] so it will come out sooner than what's there now [01:57] I think it's a cool use case [01:57] it's probably gauged on the current level of dev contributions; some hotrod will come in and get it 80% of the way and a core dev will be semi forced to fix the other 20% early :) [01:57] ( nginx that is ) [01:58] that's how it seems in my head atm :) who knows, i'm often wrong [01:58] lol [01:59] "ok this new thing is out, I want to play with it and stuff, but I don't want to mess with all this stuff. Oh, I'll just redeploy on staging with this new charm and try it." 
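The upstream/ppa flag suggested above could be exposed as a charm config option; a hypothetical config.yaml sketch (the option name and wording are invented, not from any real charm):

```yaml
options:
  source:
    type: string
    default: distro
    description: >
      Where to install nginx from. "distro" uses the Ubuntu archive;
      "ppa" uses an upstream PPA so the spdy-capable release can be
      picked up the moment it lands.
```

Then "flipping the one bit" is just `juju set nginx source=ppa` and letting config-changed re-run the install.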
[02:01] imbrandon: if this would have been nginx support you know we would have stayed up all night, heh [02:01] but yea i fully intend to do that, my intentions are to "own" the nginx and some other related charms for the long haul *as much as is done in ubuntu, we all contrib cross and like i'm sure SpamapS' work and experience with the stack and marcoceppi with his other php stuff as well as many others, but you know my meaning, not "hoarding" owning it, but more like the guy that takes the time to research every little bit and test and code and etc, not j [02:01] hahaha well with it reverse proxied it can be [02:02] hehe that's what me and clint were just talking about and i think i am gonna be up all night here working on it LOL [02:02] it's pseudo support but the result is the same [02:02] :) [02:03] AND we'll be the first, not only in ubuntu but the greater internets from what i can tell searching since you posted about it today [02:03] yeah [02:03] btw did you get my feed on there ? [02:04] yeah [02:04] yup, i am, just checked, plenty of my adderall prescription left, i'm pulling an all-nighter and gonna get some rough cuts pushed [02:04] lol [02:05] imbrandon: hey so something SpamapS and I mentioned over the phone was perhaps doing the apache subordinate thing for charms that right now are using the built in node webserver, etc. [02:05] like the more developer-y charms [02:05] subway just uses the built in node thing afaict, etc. 
[02:06] I think subway is interesting because it's a 2 way "application" that needs lower latency, and built in ssl for your chat would be a nice win anyway [02:09] imbrandon: we have no issues with apache for LP, and we're past 100rps [02:16] SpamapS: your statusnet instance is still running [02:19] lifeless: yea but you've taken the time to configure it right, i'm talking about joe blow [02:19] and i am not saying apache falls on its face there, [02:19] only that nginx starts to pull ahead a lot more [02:19] at that point [02:20] jcastro: yea that's very very common [02:20] re: the subway thing with apache [02:21] in fact it's generally the principle behind things like php fpm, which really runs on port 9000 serving php, and python wsgi; now those are very very different but the general principle is the same [02:22] so sticking apache/nginx in front of subway and putting it on 80, then letting nodejs take direct connections too, is common [02:22] well maybe not for subway but that general type of app [02:22] nodejs is good at longpoll and other stuff http servers aren't really made for [02:24] but they are still useful combined, and that's where ppl like us come in, it's easy to do all this crap standalone, but making it all work together and in the best way is the hard part, things like juju and chef distribute that to more devops, but that's recent; mostly what we know until now has been a black magic, handed down from mentor to mentee like blacksmithing, over years, sometimes working together [02:26] that knowhow is becoming a commodity now, so for ppl like me to stay relevant in 5 more years i need to be on the tech end that makes the things that make it irrelevant, and then do it all again, it goes in 3 to 5 year cycles; it's been like this since i've been active online in the mid 90's, likely since the 50's and early sixties with the advent of unix and SysV+ [02:27] * imbrandon gets back to charming [04:14] jcastro: yes I know, but now it shows the statusnet error on 443 ;) 
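The "nginx in front on 80, app on a local port" pattern described above, as a rough nginx sketch; the ports and socket path are conventional examples, not taken from the log:

```nginx
server {
    listen 80;

    # node-style app on a local high port; buffering off helps long-poll
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_buffering off;
    }

    # php-fpm backend; tcp vs unix socket is the trade-off debated earlier
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        # fastcgi_pass unix:/var/run/php5-fpm.sock;  # the socket alternative
    }
}
```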
[04:18] imbrandon: FYI, Ryan Dahl created node.js specifically *not* to need any frontend proxy for low latency needing apps [04:19] SpamapS: yea i just meant it was common to do for those type things [04:19] not really nodejs specific [04:19] like, his whole point was that you can write an app that will be highly coherent between requests.. "sessions suck" [04:22] Heh... looks like statusnet is just plain broken [04:22] yea but in that type i'm including google dev appserver and python and perl and non_mod_php [04:22] all the kinda non standard plain http on 80 apps [04:22] ;) [04:22] nodejs just kinda got lumped in there [04:22] we really need to wrap up "real" automated tests.. these hooks pass install/config-changed but they're not really error checking [04:22] well some node apps [04:31] ugh [04:31] we need a 'switch-charm' command [04:31] can't improve any cs: charms in place.. :-P [04:32] ? [04:33] hey ok so if i ssh into a juju instance, can i manually fire off relation-get and stuff ? [04:34] to see the actual output, instead of charm upgrading and echoing it or something to see the output in a log [04:34] imbrandon: you can but you need to know the relation id [04:34] can i get that from status, or ? 
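For reference, manually firing hook tools from a unit shell ends up looking roughly like the sketch below; the service name, relation name, and socket path are guesses for illustration, not from the log:

```
$ juju ssh myservice/0
$ sudo -i
# export JUJU_UNIT_NAME=myservice/0
# export JUJU_SOCKET=/var/lib/juju/units/myservice-0/.juju.hookcli.sock  # guessed path
# relation-ids db                                      # list ids for the 'db' relation
# JUJU_RELATION_ID=db:0 relation-get - otherservice/0  # dump the remote unit's settings
```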
[04:35] $JUJU_RELATION_ID in the charm [04:35] and the relation-ids command [04:35] k [04:35] you also need to set JUJU_SOCKET [04:35] yea that's the error i got, was something about the socket [04:35] k [04:35] is it in /tmp or something [04:36] I foret [04:36] I forget even [04:36] kk [04:37] probably /var somewhere [04:56] mmmm [05:28] Ok, this definitely *feels* faster in chromium than in firefox https://ec2-23-21-39-39.compute-1.amazonaws.com/main/login [05:28] firefox does spdy too [05:28] imbrandon: but its not turned on [05:28] ahh ok [05:30] statusnet is seriously a very weird webapp [05:30] you can't have a nickname of 'SpamapS', only 'spamaps' [05:30] doh [05:30] registration not allowed [05:32] also always builds absolute links [05:32] I hate apps that do that [05:32] ha [05:32] yea me too [05:32] wordpress kinda does, it at least uses what's in the db [05:32] so if the db is changed all change [05:32] still shitty tho [05:33] I don't think it does that for page references though [05:33] it will redirect you.. [05:33] but I can work around that [05:33] this.. this is building the page with absolute URLs, which is *stupid* [05:33] "/foo" would be fine [05:33] Resource interpreted as Font but transferred with MIME type application/octet-stream: "http://ec2-23-21-39-39.compute-1.amazonaws.com/theme/neo/fonts/lato-italic-webfont.woff". [05:33] and make it work properly [05:34] imbrandon: yeah, its not an optimized app [05:34] yea and a href="//amazon.com/blah" would be even better if they NEED absolute links [05:35] putting http{,s}: on links is dumb just for this reason :) [05:36] for like off site resources. 
the only time i do it is if the site doesn't have https too, then i try to find another link or just build whatever it is myself [05:36] lol [05:37] imbrandon: right but in this case, they're just pulling in HTTP_HOST and building the whole link [05:37] that's just lazy [05:37] but i haven't found one in a while that doesn't accept ga.js, and it does just with a diff name, due to an ie6 bug, but if you don't care about ie6 then it's all good ( hint: i don't, not even on commercial gigs ) IE8+ only [05:37] SpamapS: my martha plugin grabs http host [05:38] and still works [05:38] but there's no reason to put the host in there [05:38] or even a leading / usually [05:38] well the leading slash i can see [05:38] templates can be used [05:39] and that necessitates it in case of a subdir etc [05:39] .. works fine :) [05:39] sure, if you want to put logic in the template to tell what dir you're in [05:39] or just use root relative links and all are happy :) [05:40] * SpamapS tries subway now [05:40] ( css too, like when using less, you never know where the final product will be ) [05:41] imbrandon: I usually use image_path('foo.jpg') which does in fact figure out where the request was made and build a relative link. [05:41] besides, ../ traversal in php is code smell for an audit imho :) [05:41] imbrandon: that's how symfony did it anyway :) [05:41] imbrandon: that's not in code, that is going to be emitted in the html. [05:41] and browsers are happy to use it [05:41] yea it's similar in drupal and zend too but the end result that's rendered is a root relative link [05:42] lazy [05:42] smart imho [05:42] less headache [05:42] can't sub-host though :( [05:42] less going back to fix little shit [05:42] sure ya can [05:42] anyway, what's a good charm that exists now that has tons of on screen assets? [05:42] why not ? [05:42] ThinkUp maybe? [05:42] hrm [05:42] omg-wp ? 
hahaha j/k [05:43] never used thinkup [05:43] sucks down your twitter feeds and facebook and G+ and puts it on one site [05:43] oh nice [05:44] nagios, hrm that really don't have any assets [05:45] stackmobile is the only other one maybe, never seen its gui tho so i dunno [05:48] hi all [05:49] ello [05:49] imbrandon: http://ec2-23-22-28-42.compute-1.amazonaws.com/ [05:49] simple GUI [05:50] nice tho [05:50] could use a bit of spruce but a lot better than most [05:51] i hate "flat" buttons like that, they don't have to go all out css3 gradient crazy but that is one of my pet peeves. hell, leave it to the ui toolkit if you're gonna make it flat :) [05:51] hell i bitch a lot [05:53] ohh and they use bootstrap on their own site, smart guys and gals [05:53] :) [05:53] and h5bp, wow, i'm impressed for a floss web app [05:53] :) [05:55] h5bp ? [05:59] html5boilerplate [05:59] ah [05:59] sorry was reading some of the other code [05:59] <-- ui ignorant by choice [05:59] its best practices from tons of experts, like paul irish etc [06:00] there is a huge community around it, and they put a TON TON TON of thought into every single byte in the boilerplate [06:00] everything is there for a reason and in a certain order for a reason etc etc [06:00] and what makes it so cool is it's all 1) run by build scripts etc, no runtime or language deps, eg rails php python etc all can use it [06:01] and 2) they explain WHY everything is why it is [06:01] and CONSTANTLY update it [06:01] like many times a day [06:02] they hang out here on freenode in #html5, and here, read a tad bit of this in their issue queue to see how much thought goes into each little part, and this is a tame one [06:02] https://github.com/h5bp/html5-boilerplate/issues/378 [06:02] i have hella respect for those guys [06:04] its nice to work with stuff that is clearly maintained out of a sense of duty :) [06:04] yea [06:06] ok so i can forgive the button for all the other goodness this app has :) [06:06] hehe [06:10] SpamapS: 
know if nginx can directly serve content from the configs ? like i used to use a snippet that would actually serve the /robots.txt from an apache vhost config if one wasn't on the filesystem [06:10] hrm [06:10] probably have to look it up /me goes to do that [06:11] hard to call a winner with this one https://ec2-23-22-28-42.compute-1.amazonaws.com/ [06:12] ? [06:12] but... mod-spdy to the rescue, no mods to thinkup to get it SSL wrapped [06:12] imbrandon: SPDY doesn't help much with such a tiny well designed site. :) [06:12] hehe right [06:12] well it would if it say had 1000000 little images like flickr [06:13] but yea [06:13] tiny images i tend to base64 encode and put them in a data uri anyhow, like the bullets for ul's etc, only exception is the font icons i use [06:14] that way its cached with the css [06:14] Oh, mediagoblin would be a good one [06:14] and only one http req total [06:14] I don't even know if my old mediagoblin charm will work [06:14] heh [06:14] wordpress has tons of images [06:14] if you toss a theme in it [06:14] yeah but I hatezorz our default wordpress charm ;) [06:15] like generic wp + a gaudy theme [06:15] hehe [06:15] hey do drupal [06:15] and it will probably just redirect me to http:// [06:15] it needs prom anyhow [06:15] and will be a good test [06:15] I'll be reviewing it on Friday [06:15] or maybe tomorrow [06:15] that's cool but it does have lots of little images [06:15] need some solid apps for charm school on Thursday [06:16] yea you'll be like the 5th hehe, seems like everyone looks at it once and never returns hahahahahahahhaha [06:16] imbrandon, not me :D [06:16] but yea, i've installed it a few times now on micros, it works, still lots of room for improvement :) [06:16] koolhead17: heh [06:17] koolhead17: not sure i've met you :) Hi 0/ [06:17] hi imbrandon :) [06:18] imbrandon: so maybe call it "thering" [06:18] while you review it, the phone rings [06:18] :) [06:20] its ok i'm in no hurry, just got it done at a bad time, everyone 
heading to os conf [06:20] and such [06:20] * koolhead17 rushes 4 owrk [06:20] *work [06:22] alright, sleep time [06:23] gnight [06:23] hrm ok i really need to finish this charm so i can get back to researching this spdy stuff :) [07:38] SpamapS: Do you know why I would be getting lots of merge notifications from Juju's LP :P [07:42] <_mup_> Bug #984640 was filed: Unsatisfied constraints are not reported back to the user < https://launchpad.net/bugs/984640 > [08:53] * koolhead17 assumes SpamapS is sleeping :P [10:57] somehow my ~/.juju folder is 128GB [10:57] marco@marco-g72:~/.juju$ du -sh [10:57] 123G . [11:02] marcoceppi, could it be cached charms? [11:02] marcoceppi, but, yeah, that would be a lot of charms... [11:02] fwereade_: it looks like local was still bootstrapped, and machine-agent was 120+ gb [11:02] marcoceppi, ha, phew [11:02] but this laptop has been restarted several times, I didn't think local lxc containers survived restarts [11:02] marcoceppi, still seems like a lot [11:03] marcoceppi, the containers survive, but they don't restart [11:03] well, this would have been a bootstrap from March 29th [11:03] So, three weeks worth of machine-agent.log [11:07] marcoceppi, ouch, I wonder if we already have a bug for that [11:11] I mean, what's the bug there? logrotate should be rotating machine-agent log? [11:19] marcoceppi, I think so, partly; but do you have a lot of zookeeper spam in there as well? 
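A logrotate rule along the lines discussed for the runaway machine-agent.log would look something like this; the log path is a guess for a local-provider setup and the file name (/etc/logrotate.d/juju-local) is invented:

```
/home/*/.juju/*/machine-agent.log {
    daily
    rotate 7
    compress
    copytruncate   # the agent keeps its fd open, so truncate in place
    missingok
    notifempty
}
```

This caps growth but doesn't address the zookeeper chatter itself; reducing the log level of that spam is the other half of the fix discussed.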
[11:19] marcoceppi, we should probably be reducing that as well ;) [11:19] fwereade_: I torched the file, since I was at 100% disk space [11:20] marcoceppi, heh, can't blame you ;) [11:20] From what I remember it was a lot of zk [11:21] marcoceppi, yeah, looking at mine, there's quite a lot [11:22] marcoceppi, and, hmm, quite a lot of it is the machine agent (which does restart) whining that it can't find zookeeper (which doesn't :/) [12:44] SpamapS: https://cloud.torproject.org/ is only for EC2 though === carif_ is now known as carif === al-maisan is now known as almaisan-away [14:55] SpamapS: wouldn't it be good to use this as the replacement for s3 to find the metadata? scroll down to the first use case ( not example ), it looks like what we're wanting to accomplish almost identically if we can abstract it http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html#AMI-launch-index-examples [14:55] hazmat: ^^ ( re: dropping need for s3 for the instance info metadata ) [15:01] imbrandon, launch indexes? 
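The AMI-launch-index pattern from those docs boils down to splitting one shared user-data string per instance. A minimal sketch; the user-data string mirrors the docs' mysql example, and the function name is made up:

```python
# Each instance launched in one RunInstances call shares the same
# user-data; its own ami-launch-index picks out its slice.
user_data = ("store-size=123PB backup-every=5min | replicate-every=1min | "
             "replicate-every=2min | replicate-every=10min | replicate-every=20min")

def settings_for(ami_launch_index: int) -> str:
    # On a real instance, both values would come from the metadata service:
    # http://169.254.169.254/latest/user-data and .../meta-data/ami-launch-index
    return user_data.split("|")[ami_launch_index].strip()

print(settings_for(0))  # store-size=123PB backup-every=5min
```

Index 0 gets the "master" settings, the rest get replica settings, which is the disambiguation trick the channel is discussing.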
[15:02] that's for disambiguating metadata when launching multiple instances in a single api call [15:02] for the s3 usages it's not contextually relevant [15:12] no, it sets the data at launch [15:13] and can be called later via api to figure out which one is which [15:13] see the use case for the 4 mysql servers where one needed to be the "master" [15:13] hazmat: ^^ e.g. our bootstrap [15:15] she set "store-size=123PB backup-every=5min | replicate-every=1min | replicate-every=2min | replicate-every=10min | replicate-every=20min" and each server got the chunk of metadata between each pipe, as well as the details of the instance it ended up belonging to [15:16] i think it sets it via the "tags", and hpc has those too, but not sure if it's openstack or just a similar feature [15:16] * SpamapS reads [15:16] GET http://169.254.169.254/latest/user-data user_data.split('|')[ami_launch_index] [15:17] SpamapS: very bottom is the gist, the top is fluff about stuff we won't use i think [15:17] imbrandon: that's not actually helpful, no. You need *clients* to be able to find it [15:17] not the instance itself [15:17] yea the client runs it [15:18] or you mean the conductor ? [15:18] the client can't get to 169.254.169.254 :) [15:18] ahh ok i was thinking clients as in instances [15:18] every time you type 'juju status' or 'juju deploy' .. there's a method that has to go *find* the ZK node [15:18] it can be run from any of them tho so it would just need to pick a random one to connect to [15:18] in ec2, the way that is done is by asking the S3 control bucket where it is. [15:19] imbrandon: I think we can do it with groups [15:19] kk i found that looking for groups docs [15:19] node 0 is always in juju-$envname-0 [15:19] and in fact, is always *alone* in that group [15:19] wow, ok yea [15:19] lets just do that then and have it store the data local [15:20] when node 0 becomes HA .. 
we can just make a group for ZK nodes [15:20] that's super easy it seems [15:20] yea [15:20] hrm yea that's almost too easy [15:20] got to be a catch :) [15:20] heh [15:20] imbrandon: I do think we should cache that locally, and the SSH key of the instance locally, so that we don't have to keep doing 49 round trips all the time [15:21] imbrandon: we'd only need to re-query if the SSH to the box failed. [15:21] or, once it's a REST API, the https ping [15:21] right, yea [15:21] i don't wanna use zk locally though, that seems overkill, what about sqlite or couchdb ? [15:22] since it is just a cache anyhow [15:22] zk is fundamental [15:22] right i mean on the conductor's machine [15:22] no I wasn't going to run zk locally [15:22] e.g. where juju status runs [15:22] I'm saying, cache *the lookup of the node 0* [15:23] for status we've talked about just having a 'statusd' running alongside the provisioning agent that keeps an up to date yaml of status, and we can just spit that back at the user when they request status [15:23] yea. that's where i mean a small cache of info in like sqlite or similar, no biggie if it's dropped or dumped, only keeping that info to not have to re-look it up [15:23] 49 times [15:23] :) [15:23] status is *slow* [15:24] because it basically walks the entire ZK tree from your machine [15:24] like in ~/.juju/$env-datacache.db [15:24] and if it's not there we just have to do the full round again [15:24] etc [15:25] think of it like html5 local browser storage [15:25] that's how i see it used [15:25] imbrandon: but you want the up to date one, so it's not enough to just cache it [15:25] if it's there cool, if not ok let's do the expensive lookups [15:26] well yea, i'm simplifying it here a bit, there would need to be some checks to make sure it's not stale [15:26] etc [15:26] but the general idea [15:26] which is best done with a small daemon watching the entire ZK tree and keeping a status yaml up to date. [15:27] heh well i'm thinking of juju clients everywhere e.g. 
iPads where that's not feasible [15:27] as well as my dev machine that may or may not be on all the time [15:28] imbrandon: that daemon runs on the provisioning node(s). So your client just asks the daemon for its yaml. [15:28] or be one of 5 i use to manage the instance, think about a small team all with juju constantly pinging the zk for updates [15:28] SpamapS: yea, and i'm only talking about the local storage of that yaml [15:28] that's all [15:28] to what end? [15:28] i think we're just criss crossed here [15:29] i figured there would be more data than a flat file would be useful for [15:29] basically what I'm saying is, we can have a materialized view of the status-important bits of ZK [15:29] and sqlite or couch can also be easily used by other clients like a js web interface or whatever may come up down the road, without writing yet another parser [15:30] if you want to cache that, so its available offline, cool.. but.. I think offline juju is a long long way from being a reality. :) [15:30] not really offline, but not needing to make a req for every tidbit of info if it's just status info from 5 minutes ago [15:30] or something [15:30] imbrandon: so you're saying you want to write a client that doesn't need to know ZK. You and everybody else. A REST API to replace client<->ZK direct access is *very* high on the priority list. [15:31] and update it in the bg [15:31] yea [15:31] yea [15:31] :) [15:31] ok i don't feel so dumb then :) [15:31] We just have to help the go guys get done fast so we can crank up feature dev again :) [15:31] right [15:32] i'm willing to help wherever, i've got not much going on but juju until after uds :) [15:32] found out today that i might be relocating to the bay perm too [15:33] imbrandon: wezt coast is the bezt coast [15:34] yea i've been out there a few times, and i lived in Reno NV for a few years [15:35] so i was on the coast a lot then, went to sac a lot for concerts [15:35] :) [15:36] i actually like both coasts. 
NYC is probably my fav, but only because my office was on 19th and 8th in Manhattan [15:36] but yea either coast is cool, but i always end up back here in KC [15:36] ugh [15:36] :) [15:36] Sac doesn't count as the coast :) [15:36] In fact, it's on the other side of all the faults.. and when CA falls into the pacific, SAC will be the new SF [15:37] when you're born in KC and lived here till 18, moved all over the us for IT work for 10 to 12 years, then back to KC, sac counts to us :) [15:37] lol [15:37] hell reno and tahoe count to us :) [15:37] lol [15:41] * SpamapS just realized he is *3 days* behind on inbox 0. *damnit*. === Furao_ is now known as Furao === zyga is now known as zyga-food === jkyle_ is now known as jkyle === andreas__ is now known as ahasenack [17:11] Daviey, jamespage: just to let you know, I have about half of the mirror downloaded, I will be heading out in about four/five hours and I doubt it will be finished by then [17:11] as such, I may only have it ready for tomorrow morning (I will probably need tonight to download it) [17:12] jono: drive to Mountain View and ask google if you can d/l it from their datacenter :) [17:13] SpamapS, hah [17:13] I need beefier internet it seems [17:15] jono: OK - I'll see where the local mirror download here has got to [17:15] it was running overnight so may be good... [17:15] jamespage, cool === zyga-food is now known as zyga-afk [17:41] jono: apt-mirror ? hehe [17:42] hi everyone. i am testing juju on my local machine. when i try to deploy to a local lxc, i get the error: "No repository specified". what is the default repo name that i should put here? 
[17:42] jono: for real tho, if you use apt-mirror you can grab just the arches you need and no source packages if you don't need 'em; it's like 30GB per arch for binary-only + noarch packages [17:43] imbrandon, we are doing that [17:43] I am downloading 72GB [17:43] jono: but i'm a tad biased as i'm upstream too, so take it with a grain of salt :) [17:43] nice cool cool :) [17:44] rigved: put it in where? you shouldn't have to, other than specifying if it's ppa or distro [17:44] but that's the extent of repo choice iirc [17:45] :-) [17:45] imbrandon: i am using this: https://juju.ubuntu.com/docs/getting-started.html. in the local environ section, it says that i have to put local: before each service name. [17:45] * imbrandon needs to move apt-mirror off sourceforge to github someday soonish, been talking about it for a year [17:45] right, local would be where the word sample is [17:45] if i do not put anything, the lxc containers never start. so, i am now trying with local: but it gives me that error. [17:46] in those examples [17:46] well local is the env name, so you would use it like "juju bootstrap -e local" [17:46] to begin [17:47] ok, so let's back up a tad. where did you start, and can you pastebin your environments.yaml to paste.ubuntu.com ? [17:47] imbrandon: ok. one moment [17:47] kk [17:48] and after that i'm assuming you installed the prereqs, right? e.g. [17:48] sudo add-apt-repository ppa:juju/pkgs [17:48] sudo apt-get update && sudo apt-get install juju [17:48] imbrandon: http://paste.ubuntu.com/935774/ [17:48] imbrandon: i thought i did not need to do that as i am using precise. [17:49] not sure if it's 100% in sync yet, i would still use the ppa personally [17:49] and also here are the other deps for lxc [17:49] do this while i look at your config [17:49] sudo apt-get install libvirt-bin lxc apt-cacher-ng libzookeeper-java zookeeper juju [17:49] imbrandon: oh ok. i will add the ppa. [17:49] imbrandon: yes. i installed all those.
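The per-arch, binary-only mirroring mentioned above works in apt-mirror by prefixing `deb` lines with the arch and simply omitting `deb-src` lines. A minimal `/etc/apt/mirror.list` along those lines might look like this; the paths and suites are illustrative, not a recommendation:

```
# /etc/apt/mirror.list -- illustrative apt-mirror config
set base_path    /var/spool/apt-mirror
set nthreads     20

# one arch only (the deb-amd64 / deb-i386 prefixes select the arch);
# no deb-src lines, so no source packages are mirrored
deb-amd64 http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
deb-amd64 http://archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse

clean http://archive.ubuntu.com/ubuntu
```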
[17:49] kk [17:49] but not using the ppa. [17:50] also change juju-origin: distro [17:50] to juju-origin: ppa when you do [17:51] and if you haven't rebooted since you installed the lxc stuff, yea, unfortunately it's like windows on this one, gotta reboot for networking to fully work right [17:51] so if not, do that too and i'll still be here. we'll walk through getting ya up on the first one, then you'll be good. looks like you're 90% there [17:52] imbrandon: ok. will do. i did reboot as suggested in the docs [17:52] kk good [17:52] ok when you're ready for the next bit let me know [17:52] gonna grab a soda, afk 1 min [17:54] btw i would change "sample:" to something more memorable too, like ummm localtest or something, but not required, and the rest of your config looks good for just one env and local [17:54] imbrandon: ok. added the ppa. dist-upgrading now. [17:54] kk [17:54] imbrandon: ok. [17:54] rigved: depending on your network, it can take a few minutes for the first instance to pop up on lxc [17:55] yea, as in mine on fairly decent cable took about an hour [17:55] for the very first one [17:55] after that it's much faster [17:55] marcoceppi: hi 0/ [17:56] \o [17:56] marcoceppi: yes i know. earlier, when i did juju status, it showed mysql and wordpress instances were pending. i left it like that for abt 2 hours. still it showed pending. also, i checked with nethogs. there was no network activity. also, htop did not report any activity.
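For anyone following along, the kind of environments.yaml being described here (a single local LXC environment, with a memorable name where the docs put `sample:`, and `juju-origin` switched from `distro` to `ppa`) would look roughly like the sketch below. Field names are from memory of the precise-era local provider and the paths and secrets are placeholders, so treat it as illustrative rather than authoritative:

```yaml
# ~/.juju/environments.yaml -- illustrative single-env config for the
# precise-era local (LXC) provider; "localtest" replaces the docs' "sample:"
environments:
  localtest:
    type: local
    data-dir: /home/youruser/juju-local   # any writable scratch dir
    admin-secret: some-long-random-string
    default-series: precise
    juju-origin: ppa                      # rather than "distro"
```

With that in place, `juju bootstrap -e localtest` starts the environment.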
[17:57] rigved: it can take a very long time the first time, and you can't reboot in the middle or you have to destroy and start over [17:57] rigved: if you got that far your config and setup is good ( ppa won't hurt, it's nearly what's in precise ) [17:58] you just need to bootstrap one of anything, mysql etc etc, just something, and let it 100% get done [17:58] then move on, but leave it in the background and do other crap, you'll kill yourself waiting on it [17:58] and after that first one, even with destroys etc, all the rest are fairly fast [17:59] imbrandon: hmmm. ok. so, i have finished updating to the ppa. now, should i start. bootstrap, mysql and then wait before moving on to wordpress? [17:59] well have you [17:59] rebooted since you started the wp and mysql [17:59] imbrandon: not yet. [17:59] if so they are dead and you would have to destroy them [17:59] ok then no [17:59] just check status every 30 to 45 min [18:00] and it will eventually get to ready [18:00] imbrandon: ok. destroyed. starting anew, with juju-origin changed to ppa. [18:00] no idea why it's so slow, i know it's not 100% network, but yea the first one is ungodly slow [18:00] kk [18:00] yea just do one too, in case they are fighting for resources [18:01] for the first one [18:01] not certain that's the case but it won't hurt, and the second one will take minutes once the other is done [18:02] rigved: there's a log you can tail to watch for activity (and breakage) during local deployments [18:02] Let me see if I can find the path [18:02] imbrandon: ok. so, i typed the deploy command for mysql. [18:02] imbrandon: juju status shows pending. [18:03] marcoceppi: is it juju debug-log ? [18:03] rigved: and just an fyi about the local ones: say you deploy and leave it in the background and forget, and reboot tomorrow. once booted, the env will look ok but not start and not be right; the only way to recover is to destroy and redeploy after a reboot [18:03] but that's only on local [18:03] imbrandon: ah. ok.
[18:03] rigved: juju debug-log is good to have open too, but i think he means another one [18:04] but yea i'd keep a term or screen session with juju debug in it off to check on once in a bit [18:05] juju debug starts a byobu session already [18:05] ahh, i'm normally always in one already so never paid attention [18:05] rigved: it's machine-agent.log (I believe) buried in your /home/administrator/cloud folder. I don't have my precise laptop with me [18:07] marcoceppi: what's your email addy you want me to use for newrelic? i'm adding you as an admin on the ohso acct so you can see all the historic data too, not just those 30 minute graphs on my blog [18:07] @ubuntu one ? [18:08] imbrandon: marco@ceppi.net [18:08] kk [18:09] I keep forgetting I have the @ubuntu one, and I forget where it even routes [18:09] look for a newrelic info in a few min, they send a login key to let you set your own pass and stuff [18:09] lol [18:09] marcoceppi: ok. got it. i'm tailing it now. it says container started. last line is "Started service unit mysql/0" [18:09] i have like 11 emails i normally use, all going to one gmail business account [18:10] rigved: that's good news, what's juju status show?
[18:10] ( marcoceppi btw it routes to your primary email addy on LP too, so just change that to whatever you want it to route to ) [18:10] marcoceppi: still shows: "agent-state: pending" [18:14] marcoceppi, imbrandon: here's the full output of juju status: http://paste.ubuntu.com/935804/ [18:15] yup that looks right [18:15] i'd say give it a few hours, 3 or so tops, depending on your hardware and nic [18:16] it's gotta download an ubuntu image, then boot it, and update and install software, THEN run the hooks and stuff for the charm ( this first time ) [18:18] like i said i don't know exactly how long mine took, but i'm on fairly fast cable, and i'm on a quad core i7 2.4GHz with 8GB ram and an ssd+hdd in this mac mini, and it took the better part of an evening. like i started late afternoon and it was done about bed time [18:18] but now it's quick to drop a new one etc [18:19] rigved: if you're wanting to kick the tires before it gets done, try an amazon t1.micro; they give you a free linux and free windows one with enough hours to run constantly all month [18:19] 750 each i think [18:19] or you can run 2 linuxes for just a few hours etc [18:19] free [18:20] micros are definitely not ideal but will let you poke at it while the lxc finishes [18:20] just add a second stanza to the environments.yaml [18:20] imbrandon: ok. i have an old dual core with a 2 Mbps line. let's see. it's getting late here. so, i'll just leave it for the night. [18:20] from sample: on down [18:20] imbrandon: yes. i'll try amazon too later. [18:21] and then use "juju something -e name" to pick which one to do the command on [18:21] name being whatever you put in the "sample:" spot [18:21] kk [18:21] imbrandon: ohh ok. [18:21] so you can have more than one env going at a time [18:21] imbrandon: does juju work with some other cloud providers? like rackspace?
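The "second stanza" being suggested is just another named environment alongside the first; a sketch of what that might look like follows. The names and the ec2 field values are illustrative placeholders (the ec2 stanza needs your real AWS credentials), so check the juju docs for the exact keys your version expects:

```yaml
# illustrative: two environments side by side in ~/.juju/environments.yaml
environments:
  localtest:          # the local LXC env being debugged above
    type: local
    # ... local provider settings ...
  ec2test:            # a second stanza for t1.micro tire-kicking on EC2
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
    control-bucket: juju-ec2test-some-unique-suffix
    admin-secret: another-long-random-string
    default-instance-type: t1.micro
    default-series: precise
```

Then `juju bootstrap -e ec2test` or `juju status -e localtest` selects the environment per command, exactly as described in the chat.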
[18:22] anyhow yea, this week is kinda nuts, a lot of ppl are out for the openstack conf [18:22] but a lot are still around too, so if ya run into more issues or that don't finish [18:22] then someone like marcoceppi or me or tons of others are regularly here [18:23] imbrandon: cool. i tried the #ubuntu-cloud channel earlier, but no one was there. more people here. but as i understand, this channel is for juju devs, not support. [18:23] yea the dual core might be hurting you more than the line, iirc the base image is less than 100mb [18:23] i think [18:24] it's the same lot of us, here and there [18:25] just a diff name, and #juju-dev is where more of the core devish stuff happens. a lot of support still happens here if you're willing to work at it a bit and don't just want someone to do it for ya. the devs love first-hand bug reports when they have the time to actually work with ya on them; next 2 weeks that might be dicey but generally it's not bad [18:26] speaking of, i got something i started here i need to finish up or i'm gonna be a liar [18:26] highlight if ya need something :) [18:27] oh and marcoceppi you should have mail [18:27] lemme know if you got any probs getting in. you have full admin, not that there is much to change etc, but just in case [18:28] that account can deploy to as many apps too, so if we want to set up a sep one for staging or something someday we can [18:37] jcastro: where is the juju mailing list [18:41] lists.ubuntu.com [19:03] imbrandon, marcoceppi: i got it working! the culprit was my firewall. so, i just disabled it and started fresh. this time, the mysql unit took only a few minutes to start up. now, continuing with wordpress... [19:04] nice [19:04] rigved: :) [19:04] imbrandon, marcoceppi: thanks for your help!
:) [19:04] jcastro: sent. sorry, i had to sign up for the list and everything. i thought i was on it but i guess not; i'm on so many damn email lists [19:05] rigved: no worries, yw :) [19:06] rigved: also, if your firewall settings seem fairly common [19:06] you might make a note of that on the wiki to warn others [19:06] :) [19:09] SpamapS: hah, just catching up on the list. you think sru is cumbersome? that used to be one of my fav areas, doing srus and backports [19:11] SpamapS: so i'll fulfill the dirty work role for that at least till 12.04.1, since i don't mind doing it anyhow. will be a good primer to get me back into the old flow [19:14] imbrandon: I'm on the SRU team. It *should* be more cumbersome :) [19:14] i am as well, and swat and backporter. that was like 60% of my ubuntu time, doing that [19:15] imbrandon: the policy is clear: a small patch that does one thing and is verifiable [19:15] SpamapS: i thought you meant un-needfully so [19:15] imbrandon: I prefer to go the micro-release exception process [19:15] where you can just take whatever upstream says is bugfix-only [19:15] SpamapS: yup yup. probably one of the only, in fact i'm positive, the old policy i helped create in ubuntu :) [19:16] gawd [19:16] and i even got a new keyboard [19:16] kitterman still doing sru's too ? [19:17] when you say "doing" srus [19:17] do you mean uploading them, or approving them?
[19:17] at the time i think me, him, and dholbach were the only ones that took 'em seriously. i do like that -backports is on by default now though; it's a good out for non-sru-worthy changes [19:17] Because when I joined in april 2010, only pitti was doing the approving [19:17] SpamapS: reviewing and approving, then uploading [19:18] Ok no, the policy is probably different now [19:18] SpamapS: universe [19:18] ubuntu-sru members have to approve from the queue, all of it, not just main [19:18] and yea pitti approved the main ones [19:18] and AFAIK, there are only 5 members of that team, only 3 active. [19:18] since ScottK asks me to approve his SRU uploads, I can only assume that no, he is not on the ubuntu-sru team [19:18] yea me and kitterman handled universe and pitti main, but there was mucho overlap and we all kinda worked as one person taking "days" to do them [19:19] e.g. it was scott's day, then mine, etc [19:19] SpamapS: he was at one time, maybe not anymore [19:19] perhaps [19:19] I have not been a good SRU team member lately..
need to go through the queue at some point [19:19] SpamapS: but yea, me and scott and pitti and dholbach came up with what was the old policy. i need to go look it over [19:20] mostly cuz scott wanted to do the clamav exceptions [19:20] and no one was doing any of them at the time [19:20] and there was no clear process [19:20] so we made one :) [19:22] and yea this was 6.06 [19:22] so it likely has changed [19:24] jesus criminy, why am i on some of this crap? i've never touched a lot of this [19:24] https://launchpad.net/~imbrandon/+participation [19:25] * imbrandon mumbles something about Launchpad [19:26] SpamapS: looks like the backport team and the security team for main and universe are the only relevant ones. i'm not sure we had an LP team back then, but it was pitti and, actually the more i think about it, one other person, mdz maybe, doing main approvals, and me and scott and dhol doing the universe ones [19:28] but yea i'm not so much concerned about the ability to approve them. was more of a "hey, i'll review and ack them, or prepare and get them ack'd as needed if no one else wants to, cuz i don't mind that kinda work" offer :) [19:28] baby steps :) [19:29] kees, that's who [19:29] now that i think about it [19:29] pitti and kees :) [19:29] add_header Cache-Control "public, must-revalidate, proxy-revalidate"; [19:29] ugh [19:37] Ok so I'm inspired to improve on debug-hooks [19:38] I think we should build in a way to push your fixes back from debug-hooks into the charm [19:38] And I think we should have like, a 'create charm' that starts by spawning a node and running debug-hooks with 'install' [19:38] so you write install until it is "correct", then save it.. then write config-changed until it's correct, then save it.. etc. etc. [19:50] Would Travis-CI make a good charm?
[19:54] i'm not real sure. it has its own kinda charm + puppet and vagrant [19:54] so likely [19:54] but a lot of overlap [19:54] marcoceppi: would make a nice project though, but not a quick one [19:55] marcoceppi: iirc when i looked up their process ( it's all spelled out on their wiki ) they make the vm images in virtualbox and then use vagrant and puppet to deploy them ( like we do with juju ) on demand [19:55] to a host [19:56] and then they take commands from a .travis.yml in the main github dir [19:56] that tells it what to install and test, like our hooks [19:56] so yea, it's a cobbled-together system that was made before juju or anything like it, out of all the parts [19:56] but they are github fans and all the base images are ubuntu [19:57] so we might even get them to use it officially if we did it well enough [19:57] had never thought about it though, as it's a big project with multiple moving parts. i'd say akin to a mini openstack, it's that many parts and variables [19:58] they are awesome about documenting the whole setup though, and iirc have a freenode chan too [19:58] would be cool to untie it from github [20:19] imbrandon: ok. so everything is running fine. ok, i will add a note to the wiki about the firewall. [20:19] thanks again! bye [20:20] np, glad it's working for ya [20:25] marcoceppi: you round? there is an ask ubuntu question that i know half the answer to and i know you know the rest cuz it's in omg. got a sec to clarify with me on it so i can get myself some more ask ubuntu points hehe ( not even to 20 yet, lol ) [20:26] link? [20:26] http://askubuntu.com/questions/98588/juju-and-keys-for-multiple-administrators [20:26] i haven't re-answered it yet; it's possible now, since SpamapS originally answered [20:27] about having multiple ssh keys like we do for all of us [20:27] on omg [20:27] Just look at the authorized-keys: key in the environment stanza [20:27] yea i got that part. where is the hook part? [20:27] in install ?
[20:28] or is one not needed, just that [20:28] and it "transfers" when being used [20:28] ( i know it don't stay ) [20:28] It does the key setup on bootstrap, it's part of the actual juju core [20:28] and subsequent deploys [20:28] ahh rockin, that's what i needed, ty [20:29] yeah, it's not charm specific [20:29] ahh ok, i thought you had some extra magic in there [20:29] that rocks. ok now i can get above 20 copper maybe :) === Furao_ is now known as Furao [20:39] note that ssh key management needs way more thought [20:39] we need to make it something that is updated on all machines when it is changed in the env [20:40] SpamapS: juju -e add-key "ssh-rsa ...", juju -e add-key -f ~/.ssh/id_rsa.pub, juju -e add-key would be sweeeeeet [20:41] marcoceppi: exactly [20:41] okies, go vote me up and vote that SpamapS guy down :) HAHA! no really, go vote me up tho [20:41] :) [20:41] and I *guess* list-keys and remove-key would be pretty cool to have [20:41] marcoceppi: we can get it with subordinates now.. been thinking about creating a charm for doing mass execution. [20:42] imbrandon: noooooooooo [20:42] I'm still not entirely happy with putting everything in zookeeper. :-P Like, its communication isn't even authenticated.. so.. it's a huge problem. [20:42] you made it a community wiki, lol [20:42] You don't get rep from a community wiki answer [20:43] oops [20:43] no idea [20:43] Delete and re-add it [20:43] k [20:43] you can't un-wiki something [20:44] oh wait, moderators can unwiki things now [20:45] lol [20:45] already deleted [20:46] new one posted [20:46] mm, saw [20:46] woot, i can chat now [20:46] not like i need another place to talk [20:46] but i like the points [20:48] * imbrandon looks for other low hanging fruit [20:48] imbrandon: http://askubuntu.com/unanswered/tagged/?tab=newest [20:57] SpamapS: you want added to the newrelic account to peek in on the ohso data now and then?
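The `add-key` / `remove-key` / `list-keys` commands being wished for here didn't exist in juju at the time; this is purely a wishlist. As a rough sketch of the proposed semantics (all names hypothetical), the core would just be idempotent edits to an authorized_keys blob that juju would then push to every machine in the env:

```python
# Hypothetical sketch of the proposed `juju add-key` / `remove-key` /
# `list-keys` semantics: idempotent edits to the env's authorized_keys text.
def list_keys(authorized_keys):
    """Return the non-empty, non-comment key lines."""
    return [ln.strip() for ln in authorized_keys.splitlines()
            if ln.strip() and not ln.startswith("#")]

def add_key(authorized_keys, key):
    """Add `key` unless an identical line is already present."""
    keys = list_keys(authorized_keys)
    if key.strip() not in keys:
        keys.append(key.strip())
    return "\n".join(keys) + "\n"

def remove_key(authorized_keys, key):
    """Drop any line exactly matching `key`."""
    keys = [k for k in list_keys(authorized_keys) if k != key.strip()]
    return "\n".join(keys) + "\n"

blob = "ssh-rsa AAAA...one clint@host\n"
blob = add_key(blob, "ssh-rsa AAAA...two marco@host")
blob = add_key(blob, "ssh-rsa AAAA...two marco@host")  # idempotent
print(len(list_keys(blob)))  # 2
```

The hard part, as SpamapS notes, isn't the bookkeeping but propagating the change to all running machines securely.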
before i close out the tab; i sent invites to the other fellas [20:57] no [20:57] kk [20:58] thx [20:59] SpamapS: I'd like to bring up multi-person juju things during UDS sessions [21:00] I don't like copying and pasting environment stanzas around, heh [21:00] yeah [21:00] jcastro: environments.yaml is supposed to only be the bits you need to find your juju environment. [21:01] jcastro: the other stuff is all hacks. [21:01] imho that is part of the env [21:01] but i look at it like it's a one time setup [21:01] the SSH keys should be part of the bootstrap commandline, and then we need commands to manage the keys in the running env. [21:01] SpamapS: +1 [21:01] like the bootstrap pkgs [21:01] :) [21:02] SpamapS: do I want something like "juju -ethisenvironment add marco"? [21:02] marco@lp marco@sf.net marco@github [21:03] plz think of the kittahs [21:03] jcastro: well, I'd say 'marco_id_rsa.pub', but yeah [21:03] jcastro: I recommended this earlier: http://paste.ubuntu.com/936048/ [21:03] well, anything that doesn't involve pasting in a huge string [21:04] imbrandon: please, oh please, make a second implementation of an SSH key listing service, and we will add it to ssh-import-id :) [21:04] ok [21:04] github would be a good second IMO [21:04] for ssh-import-id [21:04] should be fairly easy with another similar project i got on google app engine, even in python [21:04] Yeah, presumably they already have a ton of keys [21:04] yup [21:04] and an api :) [21:05] i'm on it later with my energy for the day, that does sound fun [21:05] you know too [21:05] the API we need isn't really an API... https://something/~someuser/+sshkeys [21:05] someone should check out the branch of code for ubuntuwire on lp [21:05] so spoiled by ssh-import-id, heh [21:06] i wrote ssh-import-id like 3 years before that one; it's in the ubuntuwire.com bzr repo on lp, just no one knew about it i guess [21:06] heh [21:06] and see if any of it can be merged in [21:06] ;) [21:06] jcastro: you in SFO now?
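A GitHub backend for an ssh-import-id-style key lister could look roughly like this. Note the hedge: the curl attempts later in this log show that `api.github.com/user/keys` only returns the *authenticated* user's keys; this sketch instead assumes GitHub's public per-user endpoint, `GET https://api.github.com/users/<login>/keys`, which (in later API versions at least) returns a JSON array of `{"id": ..., "key": "ssh-rsa ..."}` objects for any account:

```python
import json
from urllib.request import urlopen

# Sketch of a GitHub-backed key-listing backend for something like
# ssh-import-id, assuming the public endpoint
# GET https://api.github.com/users/<login>/keys exists and returns
# [{"id": 123, "key": "ssh-rsa AAAA..."}, ...].
def keys_to_authorized_lines(payload, login):
    """Turn the JSON key list into authorized_keys-style lines."""
    return ["%s %s@github" % (entry["key"], login)
            for entry in json.loads(payload)]

def fetch_github_keys(login):
    """Fetch and format a user's public keys (needs network access)."""
    with urlopen("https://api.github.com/users/%s/keys" % login) as resp:
        return keys_to_authorized_lines(resp.read(), login)

# Offline demo with a canned response:
canned = '[{"id": 2203247, "key": "ssh-rsa AAAAB3Nza..."}]'
print(keys_to_authorized_lines(canned, "bholtsclaw"))
```

The parsing half is the whole backend; everything else is the same append-to-authorized_keys logic ssh-import-id already has.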
[21:07] SpamapS: 2 hours out. [21:07] in fact it grabbed whole groups. does the new one do that? if not i could merge that in, like i could ssh-add ubuntu-dev [21:07] SpamapS: I had surprisingly little problems getting the extra HP microserver in my carryon past security. [21:07] and it would grab all of the ~ubuntu-dev team [21:08] SpamapS: it's pretty awesome, the entire thing fits in my carry on, I think it'll end up being the nova node for the charm school [21:08] how much are they ? [21:08] i know you got promos, i mean normally [21:09] if it's not less than a mac mini, i dunno bout that :) [21:09] they make pretty rockin nodes [21:09] and you can cram 3 hdds in them now [21:10] ( gotta remove the wifi and bluetooth card but no need for that in a server anyhow ) [21:22] <_mup_> Bug #985232 was filed: libpq include path is wrong < https://launchpad.net/bugs/985232 > [21:26] jcastro: hah cool [21:29] SpamapS: OpenPhoto charm is halfway to RC [21:29] :D [21:30] bkerensa: sweeeet [21:31] * SpamapS needs to spend a little time promulgating before tomorrow [21:53] hazmat, how's it goin [21:58] SpamapS: doh, sooo close: `curl -u "bholtsclaw" -i https://api.github.com/user/keys` [21:59] it works, returns json with all MY ssh keys, no love for random ids [21:59] :( [22:01] curl -u "bholtsclaw" -i https://api.github.com/user/keys/2203247 gets a single key [22:02] but again only mine, unless i can find another way to get that ID, and then hope they let me, cuz the docs say nothing about it [22:13] zul, you there? [22:14] jono: indeed [23:20] you know, apple will be a trillion dollar company in the next 3 years. they have already proven `well enough` they can get on without jobs, for at least until his visions for the projects started by the people he surrounded himself with then brainwa^Wmolded with his dna ...
but i can guarantee no one sees it yet. well, 90% don't; they will guess ipads or iphones or blah blah. nope, it's simple: they made buying apps easy and now it's an addiction, there [23:20] i need to make a blog post about it .... [23:24] ( i say this after looking at my $118 bill for ios and mac app store apps this month and my less than $10 aws cloud svcs + ebook monthly bill combined ) [23:25] haha [23:25] imbrandon: amazon is much better at extracting profit from those purchases though [23:25] Apple just has crazy high margins, even on their cheap easy app store purchases [23:26] SpamapS: i dunno, apple has to pay what, 250mil in advertising and 100m in datacenter cost to make 3 bil profit off 22bil sales last quarter in the app store alone [23:27] i'm guessing that with datacenter delivery and e-content delivery amazon has some of the same margins if not more [23:27] imbrandon: the amazon way will sustain longer. It might not matter, as Apple is in a position where they can make every mistake known to man and still have cash, but amazon will keep sucking cash out of peoples' wallets because they're so low price. [23:27] i hear book publishers paying upwards of 30% to aws [23:27] true [23:27] but [23:27] same thing with MS [23:27] a decade ago [23:27] From what I understand, Amazon can make a profit off sales as small as $1 [23:28] ms didn't have something that could sustain, but didn't need to with the coffers built up [23:28] Whereas Apple needs you to buy 3 or 4 $0.99 things to start seeing profit [23:28] MS still has that cash [23:28] and they're still profitable [23:28] and will be for a long time [23:28] SpamapS: but they are into every transaction at that point in 3 years, not just apps [23:28] restaurants [23:28] newspapers [23:28] movies [23:28] apps [23:28] itunes [23:28] grocery store [23:28] corner gas [23:29] it will be like the 80's credit cards "do you take diners club?" only "do you take iPay?"
[23:29] don't think they won't; i bet it's coming, look at iAd [23:29] and all the others [23:30] SpamapS: and MS built that long term stuff this last decade with licensing activesync and things like it [23:30] 10 or 15 years ago ms was not a long term sustainable model [23:30] it was a cash cow [23:30] but not long term [23:30] now it is [23:31] but only cuz they had the cash cow to get them there. same with apple. aws isn't going anywhere, i just don't think they can compete like them and google think they can [23:32] they will remain around and making a ton of money, just not at the scale or the pull [23:32] least that's what my fortune telling is saying to me heheheh, i am almost never 50% right [23:32] :) [23:33] I saw the other day Home Depot takes paypal [23:33] at the register [23:33] yea i did notice that too, [23:33] it's cause square and the like are forcing them to innovate [23:34] if paypal had been innovating the last 5 years they would OWN the transaction market; iphones would be buying apps with paypal [23:34] I'd love to be able to whip out my iphone and just have it figure out where I'm eating.. and ask me my name.. and I can just pay the bill. [23:34] think about how long paypal has had the pull and ability to engineer real world physical payments [23:34] but never did till now [23:34] Paypal got killed by ebay I think [23:34] they couldn't innovate anymore [23:35] yup [23:35] just became ebay's bitch [23:35] exactly [23:35] hahahah [23:35] yea the deal was killer for ebay, sucked to be paypal [23:36] i dunno, i'm probably way off. i'm no economist, but i know i'm not 100% wrong. mark my words, mr inventor :) [23:36] btw someone with some /topic powers should tidy that up a bit :) [23:36] lol [23:37] SpamapS: you have no osx huh ?
damn man, i'm bringing you a lion disk [23:37] need more brew testers [23:39] actually while i was looking up brew syntax yesterday SpamapS, there is a port for windows and linux too. i might package it up for ubuntu; it would make a great supplemental pkg mgr for developers if it's used as that and not an apt replacement [23:39] hazmat, it goes ;-) [23:39] SpamapS, you in sf now? [23:39] hazmat: no I fly in tomorrow morning early [23:39] * SpamapS should probably look at his Itinerary so he knows what city he's flying to.. OAK or SFO [23:40] * imbrandon is flying to sfo [23:40] btw, juju updated to 531 in precise [23:40] w0000t [23:41] * SpamapS thinks we should probably announce subordinates [23:41] SpamapS, nice! [23:41] ohh /me will update the formula, it's using 504 [23:41] SpamapS, and relation addressability is probably worth a shout out [23:42] hazmat: *definitely* [23:42] gawd i love nginx, SpamapS: http://paste.ubuntu.com/936120/ [23:42] woot, A29.. [23:42] landing at 0720 .. [23:42] at SFO [23:43] will there be a coexist-like-subordinates option ? [23:43] or subordinates that can scale independently? [23:44] lifeless: "placement" is the single word moniker for that. Not that I know of. [23:49] lifeless: shouldn't be too complicated though [23:55] argh, depwait for juju [23:55] new dep ? [23:55] have to wait for python-txzookeeper 0.9.5 to be published [23:55] yeah, new version of txzookeeper needed [23:55] great, that's what i was fighting with all last night [23:55] on osx [23:55] :( [23:58] k, i need a greek god, female preferred as it's a vm. i got hera, athena, zeus, ares, and one more i can't think of at this moment, and it's powered down [23:58] hrm [23:59] imbrandon: it's on pypi [23:59] imbrandon: Artemis [23:59] yea it didn't want to find zookeeper.h tho
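The "1 second cache" nginx front end that keeps coming up in this channel (nginx reverse-proxying a backend on localhost:8000, plus the `add_header Cache-Control` line pasted at 19:29) can be sketched as a microcache config like the one below. The zone name, cache path, and sizes are made up for the example; this is an illustration of the idea, not the actual paste:

```nginx
# Illustrative nginx "1 second microcache" reverse proxy, in the spirit of
# the setup discussed above (nginx in front of a backend on localhost:8000).
# Zone names, paths, and sizes are placeholders.
proxy_cache_path /var/cache/nginx/micro levels=1:2
                 keys_zone=microcache:10m max_size=100m inactive=10s;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_cache microcache;
        proxy_cache_valid 200 301 302 1s;   # the "one second cache"
        proxy_cache_use_stale updating;     # serve stale while refreshing
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
}
```

Even a 1-second TTL collapses a traffic spike into roughly one backend request per second per URL, which is why it pairs well with a heavier apache/php backend.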