[01:31] https://juju.ubuntu.com/docs/faq.html#is-it-possible-to-deploy-multiple-services-per-machine might need updates
[01:32] links at the bottom of https://juju.ubuntu.com/docs/faq.html are broken, obsolete too - https://www.jujucharms.com/ and https://juju.ubuntu.com/kanban/dublin.html
[01:33] is there an elasticsearch charm ?
[01:36] yea
[01:36] its not in the store yet
[01:37] but its in the review queue
[01:37] not tried it myself
[01:37] but i think jorge may have
[01:37] he was talking about it
[01:37] there are elb and rds ones as well
[01:37] both need a tiny bit of polish but function
[01:38] if you can handle rough edges :)
[01:38] lifeless: ohh i read that as elasticache
[01:38] not elasticsearch
[01:38] no i dont think so
[01:38] not that i have seen, tho i havent looked specifically
[01:39] imbrandon: I would love ones for elasticsearch and opentsdb; it may be time for me to get into it :.
[01:39] btw i found what looks to be all the info i need for the osx stuff earlier , no time to go over it yet but it looked completish
[01:40] so ty
[01:40] hehe yea, elasticsearch would likely be a good starter charm
[01:40] as its probably going to be simple. most external services that are charmed are
[01:41] you could see my newrelic charm or clints elb charm as examples
[01:41] of two and they are both pretty simple
[01:41] well mine is VERY , like 10 lines of code
[01:41] but his not much more but still relates etc
[01:43] but yea to be your first one, thats actually a good idea. to a total beginner i might not say that, but you have a solid pragmatic head on you and are semi familiar with it already ( via just interaction is all i mean )
[01:43] so i think you should be able to seriously pick it up and even have it done in an hours time or so
[01:44] * imbrandon afk
[02:19] btw
[02:19] https://juju.ubuntu.com/docs/user-tutorial.html
[02:19] fails to talk the user through environments.yaml config
[02:20] first time I was looking into this stuff, it was -painful- as a result. Fear and confusion all around.
[02:41] https://juju.ubuntu.com/Charms is a little incoherent on 'starting a new charm' - it says to push before describing how to init-repo etc :> {I figured it out, others may be more puzzled}
[03:06] hazmat: your branch generates configs like: Acquire::HTTP::Proxy "{'apt-proxy': 'http://192.168.1.102:8080/'}";
[03:06] hazmat: this isn't quite what you intended, I think :)
[03:06] hazmat: not sure if its the cloud-init support, or the juju code yet, looking into it.
[03:12] metadata service claims - 'apt_proxy: {apt-proxy: 'http://192.168.1.102:8080/'}'
[03:12] so I think juju
[03:14] definitely: (Pdb) print machine_data
[03:14] {'apt-proxy': {'apt-proxy': 'http://192.168.1.102:8080/'}, 'machine-id': '0', 'constraints': }
[03:14] hazmat: also, it looks like you override it if not set, which breaks images with that preconfigured. I will see about fixing that too
[03:38] hazmat: and done - https://code.launchpad.net/~lifeless/juju/apt-proxy-support/+merge/111763 - for merging into your WIP branch
[04:24] lifeless: coool ( re: the env.y thing, i'll see if i cant get some clarification there and be a bit more verbose about some of the things that yea it looks to barely touch, if at all, even for getting started )
[04:24] ty
[04:25] if not i'll minimally add a trello task for it so someone will ( hopefully heh )
[04:26] :)
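For anyone hitting the same wall with the tutorial: the missing piece is just a small file at ~/.juju/environments.yaml. A minimal EC2 sketch (pyjuju-era key names from memory; every value below is a placeholder) looks something like:

    environments:
      sample:
        type: ec2
        access-key: YOUR-AWS-ACCESS-KEY    # these two can be omitted if AWS_ACCESS_KEY_ID /
        secret-key: YOUR-AWS-SECRET-KEY    # AWS_SECRET_ACCESS_KEY are exported in the shell
        control-bucket: juju-some-globally-unique-bucket
        admin-secret: some-long-random-string
        default-series: precise

With something like that in place, juju bootstrap against EC2 is at least worth trying.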
[04:27] imbrandon: I haven't gotten to actually doing the charm yet, more yak shaving happened.
[04:29] imbrandon: is there a howto for charms? e.g. step-by-step including deploying from local tree ?
[04:36] 100% complete i'm not sure or optimistic
[04:36] but i do bet that there is a recorded charm school
[04:36] that does tho
[04:36] * imbrandon looks for where they are stored
[04:40] thanks
[04:43] yea , there are a few listed there it looks like
[04:43] https://juju.ubuntu.com/CharmSchool
[04:43] in the webinars
[04:43] ( skip to the end of the first one to get a list
[04:43] of all the current and upcoming ones )
[04:44] after a once over they look to be fairly complete
[04:44] this stuff -really- needs to be in the main docs
[04:44] since we want folk doing it easily.
[04:44] yea , i am working on that , so is jorge and mims
[04:44] cool
[04:44] I realise folk are working hard
[04:44] I guess I'm ensuring that this is on the radar
[04:44] trying to move EVERYTHING to the docs
[04:45] yup yup
[04:45] many and big docs improvements are on most of ours
[04:46] i know mine and jorges for sure, but clint and mims and kapil i know all spend copious amounts of time on them too at times
[04:46] heh
[04:46] i think this last thursday we had clint me kapil mims jorge and there was one more , or juan
[04:46] all working on them at the same time
[04:47] for a good solid 4 to 6 hours
[04:47] lots done, still lots to do tho :)
[04:47] hehe
[04:47] definitely was an interesting time to see that many all at once on a docs setup like we have
[04:47] :)
[04:48] jujucharms.com/docs is much more updated thanks to that day too compared to the normal docs
[04:49] we have an IS ticket in to fix the packages on the one with the main docs but its not completed yet so they are a week or so outdated but that week saw a metric ton of updates
[04:50] so hopefully someone can fix it up tomorrow ( just needs -backports enabled and then packages updated that are already installed ) [ if you had a lil weight to get er done quicker that would be awesome and I could fix more faster :P heheh ]
[04:50] whats the RT # ?
[04:50] umm i need to look in the irc back log, clint filed it
[04:51] but he told us in the chan about ~18 hours ago
[04:51] or so
[04:51] * imbrandon looks quickly
[04:51] lastlog rt
[04:51] bah
[04:52] aiee
[04:55] imbrandon: http://jujucharms.com/docs/ is a 403
[04:55] 'forbidden'
[04:56] nice
[04:56] * imbrandon pulls the branch to see if its building
[04:57] not sure where the output of the cron build for juju.ubuntu.com/docs is either
[04:57] if there is one :)
[04:58] but i know its supposed to build them every 15 min on cron and did for a long time till we broke on 0.6.4 and need 1.0+ ( 1.1.3 is current )
[04:58] * imbrandon checks the build currently uploaded
[05:00] thursday we considered "dogfooding" it like mims brought up ( and a good idea i think personally ) to charm ( and this cleanup too ) juju.ubuntu.com and then just redirect the page to it on ec2/hpcloud etc
[05:00] heh
[05:01] not sure how far it would "actually" fly if it was attempted
[05:01] but its worth a thought later :)
[05:34] lifeless: http://jujucharms.com/docs/ fixed up
[05:36] made a typo in my last commit :) but that is what the official site will also look like as soon as builds resume on a newer sphinx than the Lucid default ships
[05:36] :)
[05:37] and hopefully shuffling some of the content to the docs from juju.ubuntu.com itself also so its all central in one place
[05:38] ( as well as sexy looking too I think, but i'm just a tad biased, definitely easier to read/navigate )
[05:38] :P
[05:38] did you file the rt #?
[05:38] clint did
[05:38] s/file/find/
[05:38] not yet
[05:39] i am actively looking tho
[05:39] heh
[05:39] :)
[05:39] ok http://jujucharms.com/docs/write-charm.html is *much* better
[05:39] kudos
[05:40] :) ty
[05:40] http://jujucharms.com/docs/write-charm.html - uses oneiric
[05:41] yea there are still many oneiric refs
[05:41] i am afraid to sed them out
[05:41] but mostly that can be done
[05:42] providers is my next section to hit up hard i think
[05:42] lots of pain areas there esp with env.y config options etc
[05:43] http://jujucharms.com/docs/write-charm.html doesn't know that revision is a separate file yet either.
[05:44] nice, so VERY out dated
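As a rough answer to the earlier "howto for charms, step-by-step including deploying from local tree" question, and to show the separate revision file just mentioned: a bare-bones charm is only a directory with metadata, a revision number, and executable hooks. Everything below is an illustrative sketch (charm name, package, and paths are made up), not a charm from the store:

    mkdir -p ~/charms/precise/mycharm/hooks
    cd ~/charms/precise/mycharm
    echo 1 > revision                    # the revision now lives in its own file
    cat > metadata.yaml <<'EOF'
    name: mycharm
    summary: example charm
    description: skeleton for illustration only
    EOF
    printf '#!/bin/sh\nset -e\napt-get -y install some-package\n' > hooks/install
    chmod +x hooks/install
    # deploy from the local tree (repository layout is <repo>/<series>/<charm>):
    juju deploy --repository=~/charms local:mycharm

The charm school recordings linked above cover the same ground in much more depth.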
[05:44] SpamapS: I think I borked your merge
[05:44] fixes that one now
[05:44] bkerensa: nice going slick heh
[05:44] bkerensa: j/k , something i can help sort ?
[06:01] you're golden, once-overed the commit and it looks correct, i'll look closer when i make my next docs commit here in a few myself too
[06:01] but i'm fairly certain you're solid :)
=== almaisan-away is now known as al-maisan
=== wrtp is now known as rogpeppe
=== zyga_ is now known as zyga
=== al-maisan is now known as almaisan-away
[13:25] SpamapS, added a comment on #920454, seems like precise might be missing some patches on libvirt
[13:25] <_mup_> Bug #920454: juju bootstrap hangs for local environment under precise on vmware < https://launchpad.net/bugs/920454 >
[13:43] sidnei: ahh! good sleuthing.. maybe its already reported in precise?
=== zyga is now known as zyga-food
[13:44] SpamapS, couldn't find a !juju bug for precise no
[13:55] SpamapS: do you know what these correspond to for HPcloud
[13:55] project-id:
[13:55] nova-uri:
[13:55] auth-mode:
[13:55] ?
[13:55] imbrandon: no, I don't even know what that is. :)
[13:55] openstack hpcloud ( not openstack_s3 ) gonna try pure first ( i hope )
[13:56] env.y settings
[13:56] imbrandon: ohhh
[13:56] i pulled them from conf
[13:56] but its the only ones i couldnt match up
[13:56] to something i knew
[13:56] sidnei: Well I do think thats a libvirt problem, and it looks like there might be a patch
[13:57] imbrandon: I'd suspect auth-mode is sort of an enum
[13:57] said proj id is an int, wonder if its the tenant id
[13:57] thats an int
[13:57] imbrandon: perhaps HPCLOUD will give you those when you get your account?
[13:57] I've still never signed up
[13:57] yea i got a whole screen of credentials
[13:58] tho with the new ostack provider maybe I will :)
[13:58] but not sure what they map to, as the names are slightly off, so i was hoping to at least get bootstrap working so i could document them
[13:58] heh
[13:58] yea i'm trying the ostack provider now
[13:59] when you do let me know and i'll pastebin my whole env.y and give ya a head start
[13:59] err if
[13:59] it shouldn't be this hard
[13:59] perhaps the provider needs better docs
[14:00] yea
[14:00] or HPcloud needs a good slapping if they aren't using the standard terms
[14:00] there is -0- now, i'm reading code
[14:00] oh wtf?
[14:00] nothing in docs?
[14:00] well i think they do, i think the provider is off
[14:00] its a bit different than ec2 as well
[14:00] but yea
[14:00] -0- docs
[14:00] SpamapS, yes, i agree. so mark the bug as affecting libvirt, invalid in juju? :)
[14:00] i'm reading source to figure this stuff out
[14:01] sidnei: I'm not 100% sure its invalid in juju, so I'm waiting to do that
[14:01] sidnei: there may be a workaround we could apply after all
[14:04] oki
[14:09] SpamapS: http://cl.ly/HdVH
[14:09] for your ref too, so you have an idea should you need it in a pinch
[14:10] imbrandon: I bet tenant-id is project-id
[14:10] you'll likely have to click it to zoom to read anything and it should be safe to not have to worry about hoarding ( got bits blurred )
[14:10] kk
[14:10] I remember hearing that the name in the API was up for consideration to be renamed
[14:10] heh
[14:11] that page should stay there indefinitely too if you wanna bookmark that
[14:11] for ref later or something
[14:11] No I think I'll just sign up :)
[14:11] its actually my upload account
[14:11] kk
[14:11] kool, cuz those other two are optional
[14:12] so with that i *should* have a full stanza
[14:12] of pure OSAPI on hpcloud
[14:12] *crosses fingers and prepares to fail*
[14:13] Have they at least moved up from Diablo to Essex yet?
[14:13] no idea
[14:14] i'm pretty sure its essex
[14:14] but dunno how to tell
[14:14] i'm pretty ignorant on openstack
[14:14] only have the absolute minimum in my head for it so far
[14:14] I don't know how to tell either
[14:14] really not even that
[14:14] heh
[14:15] thats one reason i'm using hp and not rak, zomg their rest api is the suxors
[14:15] well, its OSTACK now IIRC
[14:15] 8 rest calls to create one mysql db on the new mysql service
[14:15] at rak
[14:15] nah
[14:15] not all of it
[14:16] Oh they're not using red dwarf?
[14:16] they have joey on half and half , mine only has db access
[14:16] then i got a toy acct that only has new access
[14:16] yea no, its all helter skelter in their control panel right now, feels like aws but with 2 competing back ends
[14:16] heh
=== zyga-food is now known as zyga
[14:17] personally i would avoid rak for like another month id say and let them settle some dust
[14:17] heh
[14:19] btw i do those "full page screenshots + annotations on save" with a BA chrome extension "Awesome Screenshot" heh its perfect to get a whole page when its like that
[14:20] with one click
[14:24] nova_project_id == tenant name
[14:24] woot
[14:25] bah, these forums are such a pita to use, then i grok some of the source to see why, its d7 heh, non-custom d7 at that , heh no wonder
[14:26] :)
[14:26] these forums == help docs at hpcloud , d7 == drupal 7
[14:27] * imbrandon gets ready to bootstrap env hpc.1-az.1-regon-a.geo1
[14:27] wow
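To make the guesswork above concrete, the env.y stanza being assembled for the pure-OpenStack provider presumably ends up shaped roughly like this. The key names (project-id, nova-uri, auth-mode) are the ones pulled out of the provider source earlier; every value is a placeholder to be filled from the HP Cloud credentials page, and none of it has been verified against a working bootstrap:

    # ~/.juju/environments.yaml (hypothetical, unverified)
    environments:
      hpcloud:
        type: openstack
        auth-mode: userpass           # guessing this is an enum; check the provider source
        nova-uri: https://REGION-ENDPOINT-FROM-CREDENTIALS-PAGE/
        project-id: YOUR-TENANT-NAME  # per above, nova_project_id == tenant name
        access-key: YOUR-ACCESS-KEY   # or username/password, depending on auth-mode
        secret-key: YOUR-SECRET-KEY
        control-bucket: juju-hpcloud-something-unique
        admin-secret: some-long-random-string
        default-series: precise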
[14:30] * robbiew senses a disturbance in the cloud force...velocity and google IO this week.
[14:30] heheh
[14:31] i always love IO but i always like it like 3 weeks late as videos
[14:31] :)
[14:42] * m_3 sad to be missing out on both events :(
[15:27] aujuju: Is juju specific to ubuntu OS on EC2 [closed]
[15:36] morning
[15:46] o/
=== salgado is now known as salgado-lunch
[16:58] SpamapS, i took a look at the failing test on buildd, juju.lib.tests.test_format.TestPythonFormat.test_dump_load. this test normally takes on the order of 0.001s on my laptop and involves no resources other than a json serializer/deserializer. i just ran it for 10000 loops and i didn't see it explode
[17:00] SpamapS, just looking at the test and the code it exercises, i would not expect resource starvation failures. maybe in similar code in test_invoker, which involves processes. but not here
[17:09] /wi/win 7
[17:10] arghhh!!
[17:11] bursty network.. yargh
[17:11] jimbaker`: So perhaps it is a race of some kind?
[17:12] SpamapS, in this specific code, no. twisted can report problems in the wrong place, so that could be the problem
[17:13] in terms of the twisted trial test runner
[17:13] SpamapS, i assume this failing test is stable?
[17:14] as reported on buildd?
[17:17] jimbaker`: I'm not sure, I'll retry a few of the builds.
[17:18] jimbaker`: it may have been transient. quantal succeeded on retry
[17:19] jimbaker`: I'm retrying all the others. If they succeed, we can chalk this up to a buildd problem I think
[17:20] SpamapS, again, this is not a test i would expect to fail transiently, since it's not async. but again, it's completely possible for twisted trial to point the wrong finger from some other transient bug
[17:20] so given that, i think this is the best strategy for now
[17:20] uhm, is there a 'juju scp' counterpart to 'juju ssh'? if not, it could be handy
[17:21] there was one other error reported, which was a zk closing connection problem
[17:21] sidnei, yes
[17:22] sidnei, just to be clear, juju scp exists, not that it would be hypothetically handy ;)
[17:22] ah, i totally missed it in juju --help
=== salgado-lunch is now known as salgado
[17:33] sidnei: its quite useful for pulling down modified hooks actually. :)
[17:33] SpamapS, im trying to figure out if i can get rsync to work with that, using 'juju ssh' as the remote shell, but the service name has '/' so it gets interpreted as a path instead of a machine name
[17:35] sidnei: IMO we need a 'juju get-public-hostname' so you can just do rsync `juju get-public-hostname foo/3`:/path/to/file
[17:35] sidnei: I think juju-jitsu might have that actually
[17:35] ah, that could work too
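Spelling out the scp/rsync exchange above for later readers (the paths are from memory, so treat this as a sketch): pyjuju keeps a unit's charm under /var/lib/juju/units/<service>-<n>/charm on the target machine, and the unit's public address from juju status can stand in for the missing 'juju get-public-hostname':

    # pull a modified hook back down; the '/' in unit names is fine for juju scp
    juju scp wordpress/0:/var/lib/juju/units/wordpress-0/charm/hooks/install ./install

    # rsync workaround: skip 'juju ssh' as the transport and point rsync at the unit's
    # public address from 'juju status' directly (any reachable hostname or IP works)
    rsync -av ./hooks/ ubuntu@ec2-184-73-10-99.compute-1.amazonaws.com:/var/lib/juju/units/wordpress-0/charm/hooks/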
[17:44] bkerensa: Hey
[17:45] bkerensa: that was a pretty MASSIVE merge proposal you merged into lp:juju/docs
[17:45] bkerensa: I'd have liked to hear more than just your +1 ;)
[19:08] SpamapS: hazmat: morning guys; hope my spam overnight wasn't an issue ;)
[19:09] heya
[19:09] SpamapS: whats the rt# for that docs ticket? lifeless would like to know :)
[19:09] hey!
[19:17] lifeless, no that was awesome
[19:17] lifeless, i'm running around today at velocity conf with meetings, i'm going to try and circle back to your branches/bugs this evening.
[19:17] cool, thanks!
[19:17] hazmat: the main one I'd like info on is the use of ip; that could be contentious for reasons other than code.
[19:18] lifeless, yeah.. that changes the display in status as well.
[19:18] lifeless, ideal would be to capture both, and use the appropriate one if the other is missing
[19:18] hazmat: in a good way IMNSHO :)
[19:18] hazmat: 10.0.0.3 is much more useful than 'server-2345'
[19:18] lifeless, so do you have a valid dns name in your context, or is it just ip
[19:18] hmm.. so you have invalid dns names
[19:19] hazmat: I assert that no one running openstack outside of rackspace and perhaps hp has valid public DNS names.
[19:19] hazmat: its not even part of the deploying openstack guidelines yet.
[19:19] let alone labs etc
[19:19] lifeless, but everyone running in a public cloud or maas probably does
[19:20] and for them i would posit its better to have things displayed by name than ip
[19:20] hazmat: public cloud will, maas *may* (if they route DNS to the maas controller)
[19:20] hazmat: maas seed clouds though won't, same issue as openstack.
[19:20] dns will be usable w/in the cloud, but not by machines outside it.
[19:21] lifeless, i've seen and assisted with several maas demos where dns does work fwiw
[19:21] hazmat: from a machine that isn't the cloud controller ?
[19:21] hazmat: because, that machine will be specially configured to use the dnsmasq instance on the controller.
[19:21] lifeless, yes, its a machine on the same network though
[19:22] lifeless, probably
[19:22] I'll match your probably and raise you an almost certainly ;)
[19:22] anyhow, we can gather data if we care.
[19:22] lifeless, no takers :-)
[19:22] I'm not sure that we -care- about the dns name though; what does it offer the user?
[19:23] a nicer symbolic name
[19:23] lifeless, think about configuring a vhost in apache
[19:23] hazmat: have you seen the public names ec2 uses ?
[19:23] or browsing to an app..
[19:23] lifeless, true
[19:23] these are not the same as symbolic names
[19:23] they are bulk loaded static mappings
[19:24] per region
[19:24] such as ec2-184-73-10-99.compute-1.amazonaws.com
[19:25] agreed realistically, true dns management for services is already separate than the provider dns entries
[19:26] er. a separate concern
[19:30] lifeless, okay i'm convinced
[19:39] hazmat: cool, thanks for taking the time; chat more when you're done @velocity
[19:48] lifeless: I'm with you on this. The DNS causes more problems than it solves.
=== Guest73293 is now known as dpb_
=== dpb_ is now known as dpb__
=== dpb__ is now known as dpb___
[20:04] SpamapS: ping
[20:05] SpamapS: can you give me the commands that you use to run gource with jitsu ?
[20:05] SpamapS: I am trying to show the "pretty deploying stuff screen"
[20:12] jitsu gource --run-gource
[20:14] the gource integration definitely demos well, i used it as part of my demo for usenix two weeks ago. nothing like seeing some good pictures of what's going on
[20:15] Yeah I really think a web app showing a network diagram will play even better.. if we can ever get such a thing :)
[20:15] SpamapS, yes, i'm sure that will be quite nice :)
[20:17] it might be nice to also demo the checking off of expectations of jitsu watch, here's how service orchestration happens
[20:38] does one need to wait for a deploy to complete (via status) before doing add-relation ?
[20:39] or will it Just Work if you run add-relation immediately after the deploy returns ?
[20:42] lifeless, you can do juju add-relation as soon as you have deployed the two services
[20:42] jimbaker`: as soon as the juju cli returns, you mean ?
[20:43] it just works because add-relation records this setup in zookeeper; it's the responsibility of the agents to carry out this policy
[20:43] jimbaker`: right, but deploy returns before the agents are even running
[20:43] yes, as soon as the juju cli returns
[20:43] jimbaker`: which is why I'm probing for specifics ;)
[20:43] yes, even in that case
[20:43] cool
[20:43] thanks
[20:43] because the agents carry out the policy, as recorded as it is in ZK
[20:44] how can one recover from a wedged node ?
[20:44] by which I mean
[20:44] lifeless, basically what you see with the juju cli returning is that the update to zk has been made
[20:44] when I run 5 or 6 deploys in quick succession, openstack is throwing a tanty and gets an unspecified 'error' on one of the nodes - it gets an instance id and never comes up
[20:45] How can I tell Juju 'btw that machine, it didn't, so toss it away and start clean'
[20:45] lifeless, sounds like a bug with the provider and possibly the provisioning agent
[20:46] lifeless, so if you can get the provision agent log (on machine 0), that would be helpful. or use juju debug-log
[20:46] jimbaker`: I'm sure its accentuated for me here and now, local openstack install. *but*, it can happen in e.g. ec2, when service disruption happens, that a reservation request goes into limbo or even fails asynchronously.
[20:46] jimbaker`: the API call to provision the instance fails.
[20:46] bah
[20:46] succeeds*
[20:47] its a cloud backend failure. Async.
[20:47] lifeless, it should eventually succeed
[20:47] it's a bug if it doesn't
[20:47] To a degree, I agree. Ideally we could say 'its a bug over there, go fix that'
[20:47] so some sort of occasional failure is expected
[20:47] but, ^^ that.
[20:47] how do I recover without wiping the whole juju environment of 10 instances
[20:47] we just tried to engineer the provisioning agent so that it does appropriate retries
[20:48] jimbaker`: but it doesn't retry if the API call succeeds?
[20:48] lifeless, iirc, it does do dead machine detection for cases like that
[20:49] how long does it wait? Perhaps I wasn't patient enough
[20:49] lifeless, without logs, i'm afraid i can only speculate :)
[20:49] ok, well I didn't capture any (didn't know how)
[20:49] juju debug-log; how do I get the provision agent log ?
[20:50] lifeless, it should just be there, see https://juju.ubuntu.com/docs/user-tutorial.html#starting-debug-log
[20:51] jimbaker`: how do I get the whole log though? I mean, this is some time back presumably... I don't notice instantly.
[20:53] lifeless, you can also just grab it from the machine 0 box
[20:53] jimbaker`: thats what I was asking
[20:54] where on the box :)
[20:54] lifeless, regrettably i don't have it committed to memory, but i am looking right now
[20:54] thanks; Perhaps a doc patch :>
[20:55] /var/log/juju/provision-agent.log
[20:55] cool
[20:55] I will look there next time it happens and see what I can see
[20:55] thank you
[20:56] lifeless, sounds good
[20:56] \o/ finally apparently have hdfs + hadoop up, ready to iterate on stuff using it ;>
[20:56] super not-simple
[21:08] lifeless, i did check juju debug-log; two things, 1) the check against the flag it sets in ZK executes too late for the provisioning agent upon first install; 2) the PA still does not seem to use debug logging even if restarted. so for now, /var/log/juju/provision-agent.log seems to be the way to go
[21:09] also this doesn't mix output, which is probably why i actually use the file when debugging the PA
[21:09] first install - of a new service, or first install of the environment ?
[21:09] first install of the environment
[21:09] specifically the bootstrap node
[21:09] (aka machine 0)
[21:10] that seems hard to avoid, as the zk runs on that node
[21:11] if it fails to come up, ....
[21:16] lifeless, actually i'm going to back out that earlier statement. you are going to miss some changes to agent log, but there is a watcher in place for such global env settings (currently only debug log). so it appears to be just a fault of logging the PA log
[21:17] lifeless, not certain why, the log setup is fairly generic in terms of adding handlers
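For the next person chasing a wedged node, the retrieval discussed above boils down to something like this (assuming juju scp accepts a machine id the way juju ssh does; if not, juju ssh 0 and read the file in place):

    # follow the aggregated log stream from the workstation
    juju debug-log

    # or grab the provisioning agent's own log off the bootstrap node (machine 0)
    juju scp 0:/var/log/juju/provision-agent.log .
    less provision-agent.log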
=== salgado is now known as salgado-afk
[22:22] oh, lalalalalala
[22:22] hazmat: SpamapS: you'll love this:
[22:22] 2012-06-26 10:21:39,616 INFO Connecting to unit hbase-regioncluster-01/0 at server-10.novalocal
[22:22] ssh: Could not resolve hostname server-10.novalocal: Name or service not known
[22:22] (juju ssh)
[22:22] novalocal is the local search domain on the instance, not globally resolvable.
[22:23] INSTANCE i-0000000a ami-00000004 server-10 server-10 running 0 m1.small 2012-06-25T20:36:32.000Z nova monitoring-disabled 10.0.0.7 10.0.0.7 instance-store
[22:23] is what the EC2 API returns :>
[22:28] has anyone here worked with the hbase charm? I want to create a table from a different charm, which requires running the hbase binary (which is only on hbase units...)
[22:38] lifeless: double doh on the IP vs hostname
[22:41] SpamapS: the rabbit hole gets deeper :) To me, this validates my first point :> Also, split view DNS sucks.
[22:45] lifeless: split view DNS is just a relic of the way EC2 has done things..
[22:46] lifeless: you know, there's an openstack provider in review right now..
[22:46] lifeless: have you been able to try it out?
[22:46] SpamapS: nope
[22:47] SpamapS: I'm not actually trying to hack on juju at all, just use it ;)
[22:47] SpamapS: so I'm running precise, etc; just keep hitting yak shave events.
[22:47] SpamapS: what I *want* to do, is try out opentsdb
[22:47] and logstash (with elasticsearch)
[22:47] oh right
[22:48] lifeless: but you're also trying it against a local openstack.....
[22:48] which.. complicates matters. :)
[22:48] SpamapS: a little
[22:48] but shouldn't break charms as inside the stack dns works
[22:49] its just the outside thing, exactly like canonistack
[22:49] yeah
[22:51] e.g.
[22:51] ubuntu@server-8:~$ host server-8.novalocal
[22:51] server-8.novalocal has address 10.0.0.3
[22:51] and
[22:51] ubuntu@server-8:~$ host server-9.novalocal
[22:51] server-9.novalocal has address 10.0.0.4
[22:51] so hbase's tools decided they couldn't find the master.
[22:53] lifeless: I assume the relationship sets up the necessary configs for that?
[22:53] http://jujucharms.com/charms/precise/hbase
[22:53] I followed the bouncing ball there
[22:54] juju deploy hbase hbase-master
[22:54] juju deploy hbase hbase-regioncluster-01
[22:54] juju deploy zookeeper hbase-zookeeper
[22:54] juju add-relation hbase-master hbase-zookeeper
[22:54] juju add-relation hbase-regioncluster-01 hbase-zookeeper
[22:54] juju deploy --config example-hadoop.yaml hadoop hdfs-namenode
[22:54] juju deploy --config example-hadoop.yaml hadoop hdfs-datacluster-01
[22:54] juju add-relation hdfs-namenode:namenode hdfs-datacluster-01:datanode
[22:54] juju add-relation hdfs-namenode:namenode hbase-master:namenode
[22:54] juju add-relation hdfs-namenode:namenode hbase-regioncluster-01:namenode
[22:54] juju add-relation hbase-master:master hbase-regioncluster-01:regionserver
[22:54] ^ is exactly what I ran
[22:54] with a minute or so between deploys
[22:54] and nothing between relation adding
[22:54] (sorry for the Spam)
[22:56] why minutes between deploys?
[22:56] You should have been able to run all of the deploys all at once
[22:56] and all the relations
[22:56] and then wait for everything to finish. :)
[22:58] SpamapS: see my chat with jimbaker` above about vagaries of local stack deploys with one compute node
[22:59] something whinges when I allocate 32GB of ram etc all in one batch
[23:01] lifeless: I see. Thats a bit disappointing given how "scalable" openstack is supposed to be...
[23:01] well
[23:02] I mean, I have one node here with 16GB ram
[23:02] but yes, there is some glitch in there, and I've already shaved enough yaks on this exercise.
[23:02] the local install is v useful considering latency to canonistack, f'instance
[23:03] indeed
[23:03] lifeless: I think a native OpenStack provider, and more people banging on OpenStacks with juju, will help this go a lot more smoothly.
[23:04] well, I'm beyond that bit, workarounds R us.
[23:06] SpamapS: whats a good charm that knows how to create db's on demand ?
[23:07] I presume the wordpress <-> mysql pair does that ?
[23:09] lifeless, that pairing would be a good choice
[23:11] jimbaker`: my suspicion/concern is that it does that using the network protocol
[23:11] hmmm
[23:11] * lifeless needs to know more hbase ops
[23:13] lifeless, i'm not certain what you mean by that re "network protocol". i will say that it does an exchange of relation settings to accomplish the desired service orchestration
[23:15] hah
[23:15] so schema details
[23:15] I'd be surprised if schema migrations were passed via zk
[23:15] rather than directly.
[23:15] where directly == connect to the mysql network port.
[23:17] lifeless, the theory of juju, if there is one ;), is that this is outside of the scope of the charm, unless orchestration is required
[23:17] not trying to reinvent how mysql coordinates, just solve certain issues
[23:17] jimbaker`: sure
[23:17] * hazmat pauses from juju talk to catchup
[23:17] its orchestration though
[23:18] schema application + upgrades is a massive orchestration issue
[23:18] lifeless, then i can see that under the scope of the charm
[23:18] including the trivial-enough first-bringup case.
[23:18] the charm needs to facilitate it
[23:18] which is what I'm trying to figure out for hbase :>
[23:19] so if the services need to coordinate on add-relation/remove-relation or add-unit/remove-unit, then sure, use juju there
[23:20] I think we're talking past each other
[23:21] lifeless: the charm gets you credentials and a place to connect to. But no doubt, if a relationship is more specific, schema details and dependencies across multiple charms could be orchestrated very easily.
[23:21] lifeless: such a thing just has yet to surface.
[23:22] lifeless, in particular, this can be done through an advanced form of service orchestration, which orchestrates with respect to relations. adam_g has been doing this for the openstack charms fwiw
[23:23] lifeless, you might need to do this. i guess we just need to settle on something concrete to say one way or the other :)
[23:25] * hazmat gets back to the audience questions
[23:55] velocity talk went well. woot
[23:57] whoohoo!
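As a footnote to the wordpress/mysql exchange above: the "credentials and a place to connect to" arrive in the consuming charm's relation hook, so trivial first-bringup schema work ends up looking roughly like the sketch below. relation-get is a real hook tool; the setting names follow the usual mysql-style db interface and may differ in detail, and the table is purely illustrative:

    #!/bin/sh
    # hooks/db-relation-changed (sketch)
    set -e
    DB_HOST=$(relation-get host)
    DB_NAME=$(relation-get database)
    DB_USER=$(relation-get user)
    DB_PASS=$(relation-get password)
    # settings may be empty on the first invocation; the hook fires again once they are set
    [ -z "$DB_HOST" ] && exit 0
    mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" \
      -e 'CREATE TABLE IF NOT EXISTS example (id INT PRIMARY KEY);'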