/srv/irclogs.ubuntu.com/2012/06/25/#juju.txt

lifelesshttps://juju.ubuntu.com/docs/faq.html#is-it-possible-to-deploy-multiple-services-per-machine might need updates01:31
lifelesslinks at the bottom of https://juju.ubuntu.com/docs/faq.html are broken/obsolete too - https://www.jujucharms.com/ and https://juju.ubuntu.com/kanban/dublin.html01:32
lifelessis there an elasticsearch charm ?01:33
imbrandonyea01:36
imbrandonits not in the store yet01:36
imbrandonbut its in the review queue01:37
imbrandonnot tried it myself01:37
imbrandonbut i think jorge may have01:37
imbrandonhe was talking about it01:37
imbrandonthere are elb and rds ones as well01:37
imbrandonboth need a tiny bit of polish but function01:37
imbrandonif you can handle rough edges :)01:38
imbrandonlifeless: ohh i read that as elasticache01:38
imbrandonnot elasticsearch01:38
imbrandonno i dont think so01:38
imbrandonnot that i have seen tho i haven't looked specifically01:38
lifelessimbrandon: I would love ones for elasticsearch and opentsdb; it may be time for me to get into it :.01:39
imbrandonbtw i found what looks to be all the info i need for the osx stuff earlier , no time to go over it yet but it looked completish01:39
imbrandonso ty01:40
imbrandonhehe yea, elasticsearch would likely be a good starter charm01:40
imbrandonas its probably going to be simple. most external services that are charmed are01:40
imbrandonyou could see my newrelic charm or clints elb charm as examples01:41
imbrandonof two and they are both pretty simple01:41
imbrandonwell mine is VERY simple , like 10 lines of code01:41
imbrandonbut his is not much more but still relates etc01:41
imbrandonbut yea for your first one, that's actually a good idea; to a total beginner i might not say that, but you have a solid pragmatic head on you and are semi familiar with it already ( via just interaction is all i mean )01:43
imbrandonso i think you should be able to seriously pick it up and even have it done in an hour's time or so01:43
* imbrandon afk01:44
lifelessbtw02:19
lifelesshttps://juju.ubuntu.com/docs/user-tutorial.html02:19
lifelessfails to talk the user through environments.yaml config02:19
lifelessfirst time I was looking into this stuff, it was -painful- as a result. Fear and confusion all around.02:20
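A minimal environments.yaml sketch of the sort the tutorial could walk through (key names as used by the EC2 provider at the time; the values are placeholders and the exact set is an assumption):
    environments:
      sample:
        type: ec2
        access-key: YOUR-AWS-ACCESS-KEY       # placeholder
        secret-key: YOUR-AWS-SECRET-KEY       # placeholder
        control-bucket: juju-abc123           # any unique S3 bucket name
        admin-secret: some-long-random-string
        default-series: precise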
lifelesshttps://juju.ubuntu.com/Charms is a little incoherent on 'starting a new charm' - it says to push before describing how to init-repo etc :> {I figured it out, others may be more puzzled}02:41
lifelesshazmat: your branch generates configs like: Acquire::HTTP::Proxy "{'apt-proxy': 'http://192.168.1.102:8080/'}";03:06
lifelesshazmat: this isn't quite what you intended, I think :)03:06
lifelesshazmat: not sure if its the cloud-init support, or the juju code yet, looking into it.03:06
lifelessmetadata service claims - 'apt_proxy: {apt-proxy: 'http://192.168.1.102:8080/'}'03:12
lifelessso I think juju03:12
lifelessdefinitely: (Pdb) print machine_data03:14
lifeless{'apt-proxy': {'apt-proxy': 'http://192.168.1.102:8080/'}, 'machine-id': '0', 'constraints': <juju.machine.constraints.Constraints object at 0x348c190>}03:14
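For reference, the config line presumably intended here is the bare proxy URL rather than the repr of the nested dict, roughly:
    Acquire::HTTP::Proxy "http://192.168.1.102:8080/";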
lifelesshazmat: also, it looks like you override it if not set, which breaks images with that preconfigured. I will see about fixing that too03:14
lifelesshazmat: and done - https://code.launchpad.net/~lifeless/juju/apt-proxy-support/+merge/111763 - for merging into your WIP branch03:38
imbrandonlifeless: cool ( re: the env.y thing, i'll see if i can't get some clarification there and be a bit more verbose about some of the things that yea it looks to barely touch, if at all, even for getting started )04:24
imbrandonty04:24
imbrandonif not i'll minimally add a trello task for it so someone will ( hopefully heh )04:25
lifeless:)04:26
lifelessimbrandon: I haven't gotten to actually doing the charm yet, more yak shaving happened.04:27
lifelessimbrandon: is there a howto for charms? e.g. step-by-step including deploying from local tree ?04:29
imbrandonas for one that's 100% complete, i'm not sure or optimistic04:36
imbrandonbut i do bet that there is a recorded charm school04:36
imbrandonthat does tho04:36
* imbrandon looks for where they are stored04:36
lifelessthanks04:40
imbrandonyea , there are a few listed there it looks like04:43
imbrandonhttps://juju.ubuntu.com/CharmSchool04:43
imbrandonin the webinars04:43
imbrandon( skip to the end of the first one to get a list04:43
imbrandonof all the current and upcoming ones )04:43
imbrandonafter a once over they look to be fairly complete04:44
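A rough sketch of the local-tree deploy step those webinars cover (repository layout and charm name are placeholders, assuming the then-current --repository/local: syntax):
    # layout assumed: $HOME/charms/<series>/<charm>/{metadata.yaml,hooks/,revision}
    juju deploy --repository=$HOME/charms local:precise/mycharm
    juju status    # watch the unit come up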
lifelessthis stuff -really- needs to be in the main docs04:44
lifelesssince we want folk doing it easily.04:44
imbrandonyea , i am working on that , so are jorge and mims04:44
lifelesscool04:44
lifelessI realise folk are working hard04:44
lifelessI guess I'm ensuring that this is on the radar04:44
imbrandontrying to move EVERYTHING to the docs04:44
imbrandonyop yup04:45
imbrandonmany and big docs improvements are on most of our lists04:45
imbrandoni know mine and jorge's for sure, but clint and mims and kapil i know all spend copious amounts of time on them too at times04:46
imbrandonheh04:46
imbrandoni think this last thursday we had clint me kapil mimms jorge and there was one more , or juan04:46
imbrandonall working on them at the same time04:46
imbrandonfor a good solid 4 to 6 hours04:47
imbrandonlots done, still lots to do tho :)04:47
imbrandonhehe04:47
imbrandondefinitely was an interesting time to see that many all at once on a docs setup like we have04:47
imbrandon:)04:47
imbrandonjujucharms.com/docs is much more updated thanks to that day too compared to the normal docs04:48
imbrandonwe have an IS ticket in to fix the packages on the one with the main docs but its not completed yet, so they are a week or so outdated, but that week saw a metric ton of updates04:49
imbrandonso hopefully someone can fix it up tomorrow ( just needs -backports enabled and then packages updated that are already installed ) [ if you could add a lil weight to get er done quicker that would be awesome and I could fix more faster :P heheh ]04:50
lifelesswhats the RT # ?04:50
imbrandonumm i need to look in the irc back log, clint filed it04:50
imbrandonbut he told us in the chan about ~18 hours ago04:51
imbrandonor so04:51
* imbrandon looks quickly04:51
imbrandonlastlog rt04:51
imbrandonbah04:51
lifelessaiee04:52
lifelessimbrandon: http://jujucharms.com/docs/ is a 40304:55
lifeless'forbidden'04:55
imbrandonnice04:56
* imbrandon pulls the branch to see if its building 04:56
imbrandonnot sure where the output of the cron build for juju.ubuntu.com/docs is either04:57
imbrandonif there is one :)04:57
imbrandonbut i know its supposed to build them every 15 min on cron and did for a long time till we broke it; we have 0.6.4 and need 1.0+ ( 1.1.3 is current )04:58
* imbrandon checks the build currently uploaded04:58
imbrandonthursday we considered "dogfooding" it like mmims brought up ( a good idea i think personally ) to charm juju.ubuntu.com ( and this cleanup too ) and then just redirect the page to it on ec2/hpcloud etc05:00
imbrandonheh05:00
imbrandonnot sure how far it would "actually" fly if it was attempted05:01
imbrandonbut its worth a thought later :)05:01
imbrandonlifeless: http://jujucharms.com/docs/ fixed up05:34
imbrandonmade a typo in my last commit :) but that is what the official site will also look like as soon as builds resume on a newer sphinx than the Lucid default ships05:36
imbrandon:)05:36
imbrandonand hopefully shuffling some of the content to the docs from juju.ubuntu.com itself also so its all central in one place05:37
imbrandon( as well as sexy looking too I think, but i'm just a tad biased; definitely easier to read/navigate )05:38
imbrandon:P05:38
lifelessdid you file the rt #?05:38
imbrandonclint did05:38
lifelesss/file/find/05:38
imbrandonnot yet05:38
imbrandoni am actively looking tho05:39
lifelessheh05:39
imbrandon:)05:39
lifelessok http://jujucharms.com/docs/write-charm.html is *much* better05:39
lifelesskudos05:39
imbrandon:) ty05:40
lifelesshttp://jujucharms.com/docs/write-charm.html - uses oneiric05:40
imbrandonyea there are still many oneiric refs05:41
imbrandoni am afraid to sed them out05:41
imbrandonbut mostly that can be done05:41
imbrandonproviders is my next section to hit up hard i think05:42
imbrandonlots of pain areas there esp with env.y config options etc05:42
lifelesshttp://jujucharms.com/docs/write-charm.html doesn't know that revision is a separate file yet either.05:43
imbrandonnice, so VERY outdated05:44
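A sketch of the layout that page should describe, with the revision as its own file (field names are the standard charm metadata ones; the rest is a minimal assumption):
    mkdir -p mycharm/hooks
    cat > mycharm/metadata.yaml <<'EOF'
    name: mycharm
    summary: one-line summary
    description: longer description of the charm
    EOF
    echo 1 > mycharm/revision   # revision lives in its own file, not in metadata.yaml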
bkerensaSpamapS: I think I borked your merge05:44
imbrandonfixes that one now05:44
imbrandonbkerensa: nice going slick heh05:44
imbrandonbkerensa: j/k , something i can help sort ?05:44
imbrandonyou're golden, i once-overed the commit and it looks correct, i'll look closer when i make my next docs commit here in a few myself too06:01
imbrandonbut i'm fairly certain you're solid :)06:01
=== almaisan-away is now known as al-maisan
=== wrtp is now known as rogpeppe
=== zyga_ is now known as zyga
=== al-maisan is now known as almaisan-away
sidneiSpamapS, added a comment on #920454, seems like precise might be missing some patches on libvirt13:25
_mup_Bug #920454: juju bootstrap hangs for local environment under precise on vmware <local> <juju:Confirmed> < https://launchpad.net/bugs/920454 >13:25
SpamapSsidnei: ahh! good sleuthing.. maybe its already reported in precise?13:43
=== zyga is now known as zyga-food
sidneiSpamapS, couldn't find a !juju bug for precise no13:44
imbrandonSpamapS: do you know what these correspond to for HPcloud13:55
imbrandon    project-id:13:55
imbrandon    nova-uri:13:55
imbrandon    auth-mode:13:55
imbrandon?13:55
SpamapSimbrandon: no, I don't even know what that is. :)13:55
imbrandonopenstack hpcloud ( not openstack_s3 ) gonna try pure first ( i hope )13:55
imbrandonenv.y settings13:56
SpamapSimbrandon: ohhh13:56
imbrandoni pulled them from conf13:56
imbrandonbut those are the only ones i couldn't match up13:56
imbrandonto something i knew13:56
SpamapSsidnei: Well I do think thats a libvirt problem, and it looks like there might be a patch13:56
SpamapSimbrandon: I'd suspect auth-mode is sort of an enum13:56
imbrandonsaid proj id is an int, wonder if its the tenant id13:57
imbrandonthats an int13:57
SpamapSimbrandon: perhaps HPCLOUD will give you those when you get your account?13:57
SpamapSI've still never signed up13:57
imbrandonyea i got a whole screen of credentials13:57
SpamapStho with the new ostack provider maybe I will :)13:58
imbrandonbut not sure what they map to, as the names are slightly off, so i was hoping to at least get bootstrap working so i could document them13:58
imbrandonheh13:58
imbrandonyea i'm trying the ostack provider now13:58
imbrandonwhen you do let me know and i'll pastebin my whole env.y and give ya a head start13:59
imbrandonerr if13:59
SpamapSit shouldn't be this hard13:59
SpamapSperhaps the provider needs better docs13:59
imbrandonyea14:00
SpamapSor HPcloud needs a good slapping if they aren't using the standard terms14:00
imbrandonthere are -0- now, i'm reading code14:00
SpamapSoh wtf?14:00
SpamapSnothing in docs?14:00
imbrandonwell i think they do, i think the provider is off14:00
imbrandonits a bit diff than ec2 as well14:00
imbrandonbut yea14:00
imbrandon-0- docs14:00
sidneiSpamapS, yes, i agree. so mark the bug as affecting libvirt, invalid in juju? :)14:00
imbrandoni'm reading source to figure this stuff out14:00
SpamapSsidnei: I'm not 100% sure its invalid in juju, so I'm waiting to do that14:01
SpamapSsidnei: there may be a workaround we could apply after all14:01
sidneioki14:04
imbrandonSpamapS: http://cl.ly/HdVH14:09
imbrandonfor your ref too, so you have an idea should you need it in a pinch14:09
SpamapSimbrandon: I bet tenant-id is project-id14:10
imbrandonyou'll likely have to click it to zoom to read anything and it should be safe to not have to worry about hoarding ( got bits blurred )14:10
imbrandonkk14:10
SpamapSI remember hearing that the name in the API was up for consideration to be renamed14:10
imbrandonheh14:10
imbrandonthat page should stay there indefinitely too if you wanna bookmark that14:11
imbrandonfor ref later or something14:11
SpamapSNo I think I'll just sign up :)14:11
imbrandonits actually my upload account14:11
imbrandonkk14:11
imbrandonkool, cuz those other two are optional14:11
imbrandonso with that i *should* have a full stanza14:12
imbrandonof pure OSAPI on hpcloud14:12
imbrandon*crosses fingers and prepares to fail*14:12
SpamapSHave they at least moved up from Diablo to Essex yet?14:13
imbrandonno idea14:13
imbrandoni'm pretty sure its essex14:14
imbrandonbut dunno how to tell14:14
imbrandoni'm pretty ignorant on openstack14:14
imbrandononly have the absolute minimum in my head for it so far14:14
SpamapSI don't know how to tell either14:14
imbrandonreally not even  that14:14
imbrandonheh14:14
imbrandonthats one reason i'm using hp and not rak, zomg their rest api is the suxors14:15
SpamapSwell, its OSTACK now IIRC14:15
imbrandon8 rest calls to create one mysql db on the new mysql service14:15
imbrandonat rak14:15
imbrandonnah14:15
imbrandonnot all of it14:15
SpamapSOh they're not using red dwarf?14:16
imbrandonthey have joey on half and half , mine only has db access14:16
imbrandonthen i got a toy acct that only has new access14:16
imbrandonyea no, its all helter skelter in their control panel right now, feels like aws but with 2 competing back ends14:16
imbrandonheh14:16
=== zyga-food is now known as zyga
imbrandonpersonally i would avoid rak for like another month id say and let them settle some dust14:17
imbrandonheh14:17
imbrandonbtw i do those "full page screenshots + annotations on save" with a BA chrome extension "Awesome Screenshot" heh its perfect to get a whole page when it's like that14:19
imbrandonwith one click14:20
imbrandonnova_project_id == tenant name14:24
imbrandonwoot14:24
imbrandonbah, these forums are such a pita to use, then i grok some of the source to see why, its d7 heh non-custom d7 at that , heh no wonder14:25
imbrandon:)14:26
imbrandonthese forums == help docs at hpcloud , d7 == drupal 714:26
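Pulling the above together, a hedged sketch of the openstack-provider stanza being assembled here (key names are the ones mentioned in this conversation; exact spellings and the auth-mode values are assumptions against the provider branch under review):
    environments:
      hpcloud:
        type: openstack
        nova-uri: https://example.compute.hpcloudsvc.com/v1.1/   # hypothetical endpoint from the HP Cloud credentials page
        project-id: my-tenant-name    # == tenant name, per the note above
        auth-mode: keypair            # assumption; other modes may exist
        # plus whatever credential keys the provider expects from the HP Cloud credentials screen
        admin-secret: some-long-random-string
        default-series: precise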
* imbrandon gets ready to bootstrap env hpc.1-az.1-regon-a.geo114:27
imbrandonwow14:27
* robbiew senses a disturbance in the cloud force...velocity and google IO this week.14:30
imbrandonheheh14:30
imbrandoni always love IO but i always watch it like 3 weeks late as videos14:31
imbrandon:)14:31
* m_3 sad to be missing out on both events :(14:42
twobottuxaujuju: Is juju specific to ubuntu OS on EC2 [closed] <http://askubuntu.com/questions/149952/is-juju-specific-to-ubuntu-os-on-ec2>15:27
hazmatmorning15:36
m_3o/15:46
=== salgado is now known as salgado-lunch
jimbaker`SpamapS, i took a look at the failing test on buildd, juju.lib.tests.test_format.TestPythonFormat.test_dump_load. this test normally takes on the order of 0.001s on my laptop and involves no resources other than a json serializer/deserializer. i just ran it for 10000 loops and i didn't see it explode16:58
jimbaker`SpamapS, just looking at the test and the code it exercises, i would not expect resource starvation failures. maybe in similar code in test_invoker, which involves processes. but not here17:00
SpamapS/wi/win 717:09
SpamapSarghhh!!17:10
SpamapSbursty network.. yargh17:11
SpamapSjimbaker`: So perhaps it is a race of some kind?17:11
jimbaker`SpamapS, in this specific code, no. twisted can report problems in the wrong place, so that could be the problem17:12
jimbaker`in terms of the twisted trial test runner17:13
jimbaker`SpamapS, i assume this failing test is stable?17:13
jimbaker`as reported on buildd?17:14
SpamapSjimbaker`: I'm not sure, I'll retry a few of the builds.17:17
SpamapSjimbaker`: it may have been transient. quantal succeeded on retry17:18
SpamapSjimbaker`: I'm retrying all the others. If they succeed, we can chalk this up to a buildd problem I think17:19
jimbaker`SpamapS, again, this is not a test i would expect to fail transiently, since it's not async. but again, it's completely possible for twisted trial to point the wrong finger from some other transient bug17:20
jimbaker`so given that, i think this is the best strategy for now17:20
sidneiuhm, is there a 'juju scp' counterpart to 'juju ssh'? if not, it could be handy17:20
jimbaker`there was one other error reported, which was a zk closing connection problem17:21
jimbaker`sidnei, yes17:21
jimbaker`sidnei, just to be clear, juju scp exists, not that it would be hypothetically handy ;)17:22
sidneiah, i totally missed it in juju --help17:22
=== salgado-lunch is now known as salgado
SpamapSsidnei: its quite useful for pulling down modified hooks actually. :)17:33
sidneiSpamapS, i'm trying to figure out if i can get rsync to work with that, using 'juju ssh' as the remote shell, but the unit name has '/' so it gets interpreted as a path instead of a machine name17:33
SpamapSsidnei: IMO we need a 'juju get-public-hostname' so you can just do rsync `juju get-public-hostname foo/3`:/path/to/file17:35
SpamapSsidnei: I think juju-jitsu might have that actually17:35
sidneiah, that could work too17:35
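A quick sketch of the 'juju scp' usage being described (unit names resolve to the right host; the remote paths here are assumptions):
    # pull a unit's log down locally
    juju scp mysql/0:/var/log/juju/unit-mysql-0.log .
    # push a modified hook back up (unit charm dir path is an assumption)
    juju scp hooks/install mysql/0:/var/lib/juju/units/mysql-0/charm/hooks/install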
SpamapSbkerensa: Hey17:44
SpamapSbkerensa: that was a pretty MASSIVE merge proposal you merged into lp:juju/docs17:45
SpamapSbkerensa: I'd have liked to hear more than just your +1 ;)17:45
lifelessSpamapS: hazmat: morning guys; hope my spam overnight wasn't an issue ;)19:08
imbrandonheya19:09
imbrandonSpamapS: whats the rt# for that docs ticket? lifeless would like to know :)19:09
lifelesshey!19:09
hazmatlifeless, no that was awesome19:17
hazmatlifeless, i'm running around today at velocity conf with meetings, i'm going to try and circle back to your branches/bugs this evening.19:17
lifelesscool, thanks!19:17
lifelesshazmat: the main one I'd like info on is the use of ip; that could be contentious for reasons other than code.19:17
hazmatlifeless, yeah.. that changes the display in status as well.19:18
hazmatlifeless, ideal would be to capture both, and use the appropriate one if the other is missing19:18
lifelesshazmat: in a good way IMNSHO :)19:18
lifelesshazmat: 10.0.0.3 is much more useful than 'server-2345'19:18
hazmatlifeless, so do you have a valid dns name in your context, or is it just ip19:18
hazmathmm.. so you have invalid dns names19:18
lifelesshazmat: I assert that no one running openstack outside of rackspace and perhaps hp has valid public DNS names.19:19
lifelesshazmat: its not even part of the deploying openstack guidelines yet.19:19
lifelesslet alone labs etc19:19
hazmatlifeless, but everyone running in a public cloud or maas probably does19:19
hazmatand for them i would posit its better to have things displayed by name than ip19:20
lifelesshazmat: public cloud will, maas *may* (if they route DNS to the maas controller)19:20
lifelesshazmat: maas seed clouds though won't, same issue as openstack.19:20
lifelessdns will be usable w/in the cloud, but not by machines outside it.19:20
hazmatlifeless, i've seen and assisted with several maas demos where dns does work fwiw19:21
lifelesshazmat: from a machine that isn't the cloud controller ?19:21
lifelesshazmat: because, that machine will be specially configured to use the dnsmasq instance on the controller.19:21
hazmatlifeless, yes, its a machine on the same network though19:21
hazmatlifeless, probably19:22
lifelessI'll match your probably and raise you an almost certainly ;)19:22
lifelessanyhow, we can gather data if we care.19:22
hazmatlifeless, no takers :-)19:22
lifelessI'm not sure that we -care- about the dns name though; what does it offer the user?19:22
hazmata nicer symbolic name19:23
hazmatlifeless, think about configuring a vhost in apache19:23
lifelesshazmat: have you seen the public names ec2 uses ?19:23
hazmator browsing to an app..19:23
hazmatlifeless, true19:23
lifelessthese are not the same as symbolic names19:23
lifelessthey are bulk loaded static mappings19:23
lifelessper region19:24
lifelesssuch as ec2-184-73-10-99.compute-1.amazonaws.com19:24
hazmatagreed realistically, true dns management for services is already separate from the provider dns entries19:25
hazmater. a separate concern19:26
hazmatlifeless, okay i'm convinced19:30
lifelesshazmat: cool, thanks for taking the time; chat more when you're done @velocity19:39
SpamapSlifeless: I'm with you on this. The DNS causes more problems than it solves.19:48
=== Guest73293 is now known as dpb_
=== dpb_ is now known as dpb__
=== dpb__ is now known as dpb___
negronjlSpamapS: ping20:04
negronjlSpamapS: can you give me the commands that you use to run gource with jitsu ?20:05
negronjlSpamapS: I am trying to show the "pretty deploying stuff screen"20:05
SpamapSjitsu gource --run-gource20:12
jimbaker`the gource integration definitely demos well, i used it as part of my demo for usenix two weeks ago. nothing like seeing some good pictures of what's going on20:14
SpamapSYeah I really think a web app showing a network diagram will play even better.. if we can ever get such a thing :)20:15
jimbaker`SpamapS, yes, i'm sure that will be quite nice :)20:15
jimbaker`it might be nice to also demo the checking off of expectations of jitsu watch, here's how service orchestration happens20:17
lifelessdoes one need to wait for a deploy to complete (via status) before doing add-relation ?20:38
lifelessor will it Just Work if you run add-relation immediately after the deploy returns ?20:39
jimbaker`lifeless, you can do juju add-relation as soon as you have deployed the two services20:42
lifelessjimbaker`: as soon as the juju cli returns, you mean ?20:42
jimbaker`it just works because add-relation records this setup in zookeeper; it's the responsibility of the agents to carry out this policy20:43
lifelessjimbaker`: right, but deploy returns before the agents are even running20:43
jimbaker`yes, as soon as the juju cli returns20:43
lifelessjimbaker`: which is why I'm probing for specifics ;)20:43
jimbaker`yes, even in that case20:43
lifelesscool20:43
lifelessthanks20:43
jimbaker`because the agents carry out the policy, as recorded as it is in ZK20:43
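So, concretely, a sequence like this is safe; the relation is recorded in ZK and the agents apply it whenever the units come up:
    juju deploy mysql
    juju deploy wordpress
    juju add-relation wordpress mysql   # fine to run as soon as the deploys return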
lifelesshow can one recover from a wedged node ?20:44
lifelessby which I mean20:44
jimbaker`lifeless, basically what you see with the juju cli returning is that the update to zk has been made20:44
lifelesswhen I run 5 or 6 deploys in quick succession, openstack is throwing a tanty and gets an unspecified 'error' on one of the nodes - it gets an instance id and never comes up20:44
lifelessHow can I tell Juju 'btw that machine, it didn't, so toss it away and start clean'20:45
jimbaker`lifeless, sounds like a bug with the provider and possibly the provisioning agent20:45
jimbaker`lifeless, so if you can get the provision agent log (on machine 0), that would be helpful. or use juju debug-log20:46
lifelessjimbaker`: I'm sure its accentuated for me here and now, local openstack install. *but*, it can happen in e.g. ec2, when service disruption happens, that a reservation request goes into limbo or even fails asynchronously.20:46
lifelessjimbaker`: the API call to provision the instance fails.20:46
lifelessbah20:46
lifelesssucceeds*20:46
lifelessits a cloud backend failure. Async.20:47
jimbaker`lifeless, it should eventually succeed20:47
jimbaker`it's a bug if it doesn't20:47
lifelessTo a degree, I agree. Ideally we could say 'its a bug over there, go fix that'20:47
jimbaker`so some sort of occasional failure is expected20:47
lifelessbut, ^^ that.20:47
lifelesshow do I recover without wiping the whole juju environment of 10 instances20:47
jimbaker`we just tried to engineer the provisioning agent so that it does appropriate retries20:47
lifelessjimbaker`: but it doesn't retry if the API call succeeds?20:48
jimbaker`lifeless, iirc, it does do dead machine detection for cases like that20:48
lifelesshow long does it wait? Perhaps I wasn't patient enough20:49
jimbaker`lifeless, without logs, i'm afraid i can only speculate :)20:49
lifelessok, well I didn't capture any (didn't know how)20:49
lifelessjuju debug-log; how do I get the provision agent log ?20:49
jimbaker`lifeless, it should just be there, see https://juju.ubuntu.com/docs/user-tutorial.html#starting-debug-log20:50
lifelessjimbaker`: how do I get the whole log though? I mean, this is some time back presumably... I don't notice instantly.20:51
jimbaker`lifeless, you can also just grab it from the machine 0 box20:53
lifelessjimbaker`: thats what I was asking20:53
lifelesswhere on the box :)20:54
jimbaker`lifeless, regrettably i don't have it committed to memory, but i am looking right now20:54
lifelessthanks; Perhaps a doc patch :>20:54
jimbaker`/var/log/juju/provision-agent.log20:55
lifelesscool20:55
lifelessI will look there next time it happens and see what I can see20:55
lifelessthank you20:55
jimbaker`lifeless, sounds good20:56
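A small sketch of grabbing that log off the bootstrap node (machine 0):
    juju ssh 0
    sudo less /var/log/juju/provision-agent.log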
lifeless\o/ finally apparently have hdfs + hadoop up, ready to iterate on stuff using it ;>20:56
lifelesssuper not-simple20:56
jimbaker`lifeless, i did check juju debug-log; two things, 1) the check against the flag it sets in ZK executes too late for the provisioning agent upon first install; 2) the PA still does not seem to use debug logging even if restarted. so for now, /var/log/juju/provision-agent.log seems to be the way to go21:08
jimbaker`also this doesn't mix output, which is probably why i actually use the file when debugging the PA21:09
lifelessfirst install - of a new service, or first install of the environment ?21:09
jimbaker`first install of the environment21:09
jimbaker`specifically the bootstrap node21:09
jimbaker`(aka machine 0)21:09
lifelessthat seems hard to avoid, as the zk runs on that node21:10
lifelessif it fails to come up, ....21:11
jimbaker`lifeless, actually i'm going to back out that earlier statement. you are going to miss some changes to the agent log, but there is a watcher in place for such global env settings (currently only debug log). so it appears to be just a fault in the PA's logging21:16
jimbaker`lifeless, not certain why, the log setup is fairly generic in terms of adding handlers21:17
=== salgado is now known as salgado-afk
lifelessoh, lalalalalala22:22
lifelesshazmat: SpamapS: you'll love this:22:22
lifeless2012-06-26 10:21:39,616 INFO Connecting to unit hbase-regioncluster-01/0 at server-10.novalocal22:22
lifelessssh: Could not resolve hostname server-10.novalocal: Name or service not known22:22
lifeless(juju ssh)22:22
lifelessnovalocal is the local search domain on the instance, not globally resolvable.22:22
lifelessINSTANCE        i-0000000a      ami-00000004    server-10       server-10       running         0               m1.small        2012-06-25T20:36:32.000Z        nova                            monitoring-disabled     10.0.0.7        10.0.0.7                       instance-store22:23
lifelessis what the EC2 API returns :>22:23
lifelesshas anyone here worked with the hbase charm? I want to create a table from a different charm, which requires running the hbase binary (which is only on hbase units...)22:28
SpamapSlifeless: double doh on the IP vs hostname22:38
lifelessSpamapS: the rabbit hole gets deeper :) To me, this validates my first point :> Also, split view DNS sucks.22:41
SpamapSlifeless: split view DNS is just a relic of the way EC2 has done things..22:45
SpamapSlifeless: you know, there's an openstack provider in review right now..22:46
SpamapSlifeless: have you been able to try it out?22:46
lifelessSpamapS: nope22:46
lifelessSpamapS: I'm not actually trying to hack on juju at all, just use it ;)22:47
lifelessSpamapS: so I'm running precise, etc; just keep hitting yak shave events.22:47
lifelessSpamapS: what I *want* to do, is try out opentsdb22:47
lifelessand logstash (with elasticsearch)22:47
SpamapSoh right22:47
SpamapSlifeless: but you're also trying it against a local openstack.....22:48
SpamapSwhich.. complicates matters. :)22:48
lifelessSpamapS: a little22:48
lifelessbut shouldn't break charms as inside the stack dns work22:48
lifelesss22:48
lifelessits just the outside thing, exactly like canonistack22:49
SpamapSyeah22:49
lifelesse.g.22:51
lifelessubuntu@server-8:~$ host server-8.novalocal22:51
lifelessserver-8.novalocal has address 10.0.0.322:51
lifelessand22:51
lifelessubuntu@server-8:~$ host server-9.novalocal22:51
lifelessserver-9.novalocal has address 10.0.0.422:51
lifelessso hbase's tools decided they couldn't find the master.22:51
SpamapSlifeless: I assume the relationship sets up the necessary configs for that?22:53
lifelesshttp://jujucharms.com/charms/precise/hbase22:53
lifelessI followed the bouncing ball there22:53
lifelessjuju deploy hbase hbase-master22:54
lifelessjuju deploy hbase hbase-regioncluster-0122:54
lifelessjuju deploy zookeeper hbase-zookeeper22:54
lifelessjuju add-relation hbase-master hbase-zookeeper22:54
lifelessjuju add-relation hbase-regioncluster-01 hbase-zookeeper22:54
lifelessjuju deploy --config example-hadoop.yaml hadoop hdfs-namenode22:54
lifelessjuju deploy --config example-hadoop.yaml hadoop hdfs-datacluster-0122:54
lifelessjuju add-relation hdfs-namenode:namenode hdfs-datacluster-01:datanode22:54
lifelessjuju add-relation hdfs-namenode:namenode hbase-master:namenode22:54
lifelessjuju add-relation hdfs-namenode:namenode hbase-regioncluster-01:namenode22:54
lifelessjuju add-relation hbase-master:master hbase-regioncluster-01:regionserver22:54
lifeless^ is exactly what I ran22:54
lifelesswith a minute or so between deploys22:54
lifelessand nothing between relation adding22:54
lifeless(sorry for the Spam)22:54
SpamapSwhy minutes between deploys?22:56
SpamapSYou should have been able to run all of the deploys all at once22:56
SpamapSand all the relations22:56
SpamapSand then wait for everything to finish. :)22:56
lifelessSpamapS: see my chat with jimbaker` above about vagaries of local stack deploys with one compute node22:58
lifelesssomething whinges when I allocate 32GB of ram etc all in one batch22:59
SpamapSlifeless: I see. Thats a bit disappointing given how "scalable" openstack is supposed to be...23:01
lifelesswell23:01
lifelessI mean, I have one node here with 16GB ram23:02
lifelessbut yes, there is some glitch in there, and I've already shaved enough yaks on this exercise.23:02
lifelessthe local install is v useful considering latency to canonistack, f'instance23:02
SpamapSindeed23:03
SpamapSlifeless: I think a native OpenStack provider, and more people banging on OpenStack's with juju, will help this go a lot more smoothly.23:03
lifelesswell, I'm beyond that bit, workarounds R us.23:04
lifelessSpamapS: whats a good charm that knows how to create db's on demand ?23:06
lifelessI presume the wordpress <-> mysql pair does that ?23:07
jimbaker`lifeless, that pairing would be a good choice23:09
lifelessjimbaker`: my suspicion/concern is that it does that using the network protocol23:11
lifelesshmmm23:11
* lifeless needs to know more hbase ops23:11
jimbaker`lifeless, i'm not certain what you mean by that re "network protocol".  i will say that it does an exchange of relation settings to accomplish the desired service orchestration23:13
lifelesshah23:15
lifelessso schema details23:15
lifelessI'd be surprised if schema migrations were passed via zk23:15
lifelessrather than directly.23:15
lifelesswhere directly == connect to the mysql network port.23:15
jimbaker`lifeless, the theory of juju, if there is one ;), is that this is outside of the scope of the charm, unless orchestration is required23:17
jimbaker`not trying to reinvent how mysql coordinates, just solve certain issues23:17
lifelessjimbaker`: sure23:17
* hazmat pauses from juju talk to catchup23:17
lifelessits orchestration though23:17
lifelessschema application + upgrades is a massive orchestration issue23:17
jimbaker`lifeless, then i can see that under the scope of the charm23:18
lifelessincluding the trivial-enough first-bringup case.23:18
lifelessthe charm needs to facilitate it23:18
lifelesswhich is what I'm trying to figure out for hbase :23:18
lifeless>23:18
jimbaker`so if the services need to coordinate on add-relation/remove-relation or add-unit/remove-unit, then sure, use juju there23:19
lifelessI think we're talking past each other23:20
SpamapSlifeless: the charm gets you credentials and a place to connect to. But no doubt, if a relationship is more specific, schema details and dependencies across multiple charms could be orchestrated very easily.23:21
SpamapSlifeless: such a thing just has yet to surface.23:21
jimbaker`lifeless, in particular, this can be done through an advanced form of service orchestration, which orchestrates with respect to relations. adam_g has been doing this for the openstack charms fwiw23:22
jimbaker`lifeless, you might need to do this. i guess we just need to settle on something concrete to say one way or the other :)23:23
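For the concrete wordpress/mysql case above, a hedged sketch of the consuming side's db-relation-changed hook: the mysql charm publishes connection settings on the relation and the other charm reads them with relation-get (the exact key names are an assumption):
    #!/bin/sh
    # db-relation-changed (sketch): read what the mysql charm set on the relation
    set -e
    host=$(relation-get private-address)
    database=$(relation-get database)
    user=$(relation-get user)
    password=$(relation-get password)
    # ...then write them into the application's config and restart the service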
* hazmat gets back to the audience questions23:25
zirpuvelocity talk went well. woot23:55
m_3whoohoo!23:57
