[01:31] <lifeless> https://juju.ubuntu.com/docs/faq.html#is-it-possible-to-deploy-multiple-services-per-machine might need updates
[01:32] <lifeless> links at the bottom of https://juju.ubuntu.com/docs/faq.html are broken/obsolete too - https://www.jujucharms.com/ and https://juju.ubuntu.com/kanban/dublin.html
[01:33] <lifeless> is there an elasticsearch charm ?
[01:36] <imbrandon> yea
[01:36] <imbrandon> its not in the store yet
[01:37] <imbrandon> but its in the review queue
[01:37] <imbrandon> not tried it myself
[01:37] <imbrandon> but i think jorge may have
[01:37] <imbrandon> he was talking about it
[01:37] <imbrandon> there are elb and rds ones as well
[01:37] <imbrandon> both need a tiny bit of polish but function
[01:38] <imbrandon> if you can handle rough edges :)
[01:38] <imbrandon> lifeless: ohh i read that as elasticache
[01:38] <imbrandon> not elasticsearch
[01:38] <imbrandon> no i dont think so
[01:38] <imbrandon> not that i have seen, tho i haven't looked specifically
[01:39] <lifeless> imbrandon: I would love ones for elasticsearch and opentsdb; it may be time for me to get into it :.
[01:39] <imbrandon> btw i found what looks to be all the info i need for the osx stuff earlier, no time to go over it yet but it looked completish
[01:40] <imbrandon> so ty
[01:40] <imbrandon> hehe yea, elasticsearch would likely be a good starter charm
[01:40] <imbrandon> as its probably going to be simple. most external services that are charmed are
[01:41] <imbrandon> you could see my newrelic charm or clint's elb charm as examples
[01:41] <imbrandon> of two and they are both pretty simple
[01:41] <imbrandon> well mine is VERY simple, like 10 lines of code
[01:41] <imbrandon> but his is not much more, but it still relates etc
[01:43] <imbrandon> but yea to be your first one, that's actually a good idea, to a total beginner i might not say that, but you have a solid pragmatic head on you and are semi familiar with it already ( via just interaction is all i mean )
[01:43] <imbrandon> so i think you should be able to seriously pick it up and even have it done in an hour's time or so
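A starter charm like the one being discussed is mostly a metadata file plus a few executable hook scripts. A rough sketch of the layout (the elasticsearch name and contents are just placeholders, not an existing charm):
    elasticsearch/
        metadata.yaml      # name, summary, description, provides/requires interfaces
        revision           # a single integer, bumped on every change
        hooks/
            install        # e.g. apt-get install the package, fetch a tarball, etc.
            start
            stop
            config-changed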
[01:44]  * imbrandon afk
[02:19] <lifeless> btw
[02:19] <lifeless> https://juju.ubuntu.com/docs/user-tutorial.html
[02:19] <lifeless> fails to talk the user through environments.yaml config
[02:20] <lifeless> first time I was looking into this stuff, it was -painful- as a result. Fear and confusion all around.
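For context, the kind of thing the tutorial skips over is a minimal environments.yaml; a sketch for the EC2 provider of that era, with placeholder values (key names per the juju docs of the time, the exact set may vary):
    environments:
      sample:
        type: ec2
        access-key: <AWS access key>
        secret-key: <AWS secret key>
        control-bucket: <globally unique S3 bucket name>
        admin-secret: <any hard-to-guess passphrase>
        default-series: precise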
[02:41] <lifeless> https://juju.ubuntu.com/Charms is a little incoherent on 'starting a new charm' - it says to push before describing how to init-repo etc :> {I figured it out, others may be more puzzled}
[03:06] <lifeless> hazmat: your branch generates configs like: Acquire::HTTP::Proxy "{'apt-proxy': 'http://192.168.1.102:8080/'}";
[03:06] <lifeless> hazmat: this isn't quite what you intended, I think :)
[03:06] <lifeless> hazmat: not sure if its the cloud-init support, or the juju code yet, looking into it.
[03:12] <lifeless> metadata service claims - 'apt_proxy: {apt-proxy: 'http://192.168.1.102:8080/'}'
[03:12] <lifeless> so I think juju
[03:14] <lifeless> definitely: (Pdb) print machine_data
[03:14] <lifeless> {'apt-proxy': {'apt-proxy': 'http://192.168.1.102:8080/'}, 'machine-id': '0', 'constraints': <juju.machine.constraints.Constraints object at 0x348c190>}
[03:14] <lifeless> hazmat: also, it looks like you override it if not set, which breaks images with that preconfigured. I will see about fixing that too
[03:38] <lifeless> hazmat: and done - https://code.launchpad.net/~lifeless/juju/apt-proxy-support/+merge/111763 - for merging into your WIP branch
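The symptom above is the whole apt-proxy mapping being interpolated into the apt config template rather than just its value; what gets generated versus what was presumably intended:
    # generated (broken):
    Acquire::HTTP::Proxy "{'apt-proxy': 'http://192.168.1.102:8080/'}";
    # presumably intended:
    Acquire::HTTP::Proxy "http://192.168.1.102:8080/";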
[04:24] <imbrandon> lifeless: coool ( re: the env.y thing, i'll see if i can't get some clarification there and be a bit more verbose about some of the things that yea it looks to barely touch, if at all, even for getting started )
[04:24] <imbrandon> ty
[04:25] <imbrandon> if not i'll minimally add a trello task for it so someone will ( hopefully heh )
[04:26] <lifeless> :)
[04:27] <lifeless> imbrandon: I haven't gotten to actually doing the charm yet, more yak shaving happened.
[04:29] <lifeless> imbrandon: is there a howto for charms? e.g. step-by-step including deploying from local tree ?
[04:36] <imbrandon> 100% complete? i'm not sure, or optimistic
[04:36] <imbrandon> but i do bet that there is a recorded charm school
[04:36] <imbrandon> that does tho
[04:36]  * imbrandon looks for where they are stored
[04:40] <lifeless> thanks
[04:43] <imbrandon> yea , there are a few listed there it looks like
[04:43] <imbrandon> https://juju.ubuntu.com/CharmSchool
[04:43] <imbrandon> in the webinars
[04:43] <imbrandon> ( skip to the end of the first one to get a list
[04:43] <imbrandon> of all the current and upcoming ones )
[04:44] <imbrandon> after a once over they look to be fairly complete
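On the 'deploying from a local tree' part of the question, the usual pattern at the time was a local charm repository laid out by series, roughly like this (the charm name is a placeholder):
    mkdir -p ~/charms/precise/elasticsearch   # metadata.yaml, revision and hooks/ go in here
    juju deploy --repository=~/charms local:precise/elasticsearch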
[04:44] <lifeless> this stuff -really- needs to be in the main docs
[04:44] <lifeless> since we want folk doing it easily.
[04:44] <imbrandon> yea , i am working on that , so are jorge and mims
[04:44] <lifeless> cool
[04:44] <lifeless> I realise folk are working hard
[04:44] <lifeless> I guess I'm ensuring that this is on the radar
[04:44] <imbrandon> trying to move EVERYTHING to the docs
[04:45] <imbrandon> yup yup
[04:45] <imbrandon> many big docs improvements are on most of our plates
[04:46] <imbrandon> i know mine and jorge's for sure, but clint and mims and kapil i know all spend copious amounts of time on them too at times
[04:46] <imbrandon> heh
[04:46] <imbrandon> i think this last thursday we had clint, me, kapil, mimms, jorge and there was one more, or juan
[04:46] <imbrandon> all working on them at the same time
[04:47] <imbrandon> for a good solid 4 to 6 hours
[04:47] <imbrandon> lots done, still lots to do tho :)
[04:47] <imbrandon> hehe
[04:47] <imbrandon> definitely was an interesting time to see that many all at once on a docs setup like we have
[04:47] <imbrandon> :)
[04:48] <imbrandon> jujucharms.com/docs is much more updated thanks to that day too compared to the normal docs
[04:49] <imbrandon> we have an IS ticket in to fix the packages on the one with the main docs but its not completed yet so they are a week or so outdated, but that week saw a metric ton of updates
[04:50] <imbrandon> so hopefully someone can fix it up tomorrow ( just needs -backports enabled and then packages updated that are already installed ) [ if you could lend a lil weight to get 'er done quicker that would be awesome and I could fix more faster :P heheh ]
[04:50] <lifeless> whats the RT # ?
[04:50] <imbrandon> umm i need to look in the irc back log, clint filed it
[04:51] <imbrandon> but he told us in the chan about ~18 hours ago
[04:51] <imbrandon> or so
[04:51]  * imbrandon looks quickly
[04:51] <imbrandon> lastlog rt
[04:51] <imbrandon> bah
[04:52] <lifeless> aiee
[04:55] <lifeless> imbrandon: http://jujucharms.com/docs/ is a 403
[04:55] <lifeless> 'forbidden'
[04:56] <imbrandon> nice
[04:56]  * imbrandon pulls the branch to see if its building 
[04:57] <imbrandon> not sure where the output of the cron build for juju.ubuntu.com/docs is either
[04:57] <imbrandon> if there is one :)
[04:58] <imbrandon> but i know its supposed to build them every 15 min on cron and did for a long time till we outgrew sphinx 0.6.4 and need 1.0+, 1.1.3 is current
[04:58]  * imbrandon checks the build currently uploaded
[05:00] <imbrandon> thursday we considered "dogfooding" it like mmims brought up ( a good idea i think personally ), to charm juju.ubuntu.com ( and this cleanup too ) and then just redirect the page to it on ec2/hpcloud etc
[05:00] <imbrandon> heh
[05:01] <imbrandon> not sure how far it would "actually" fly if it was attempted
[05:01] <imbrandon> but its worth a thought later :)
[05:34] <imbrandon> lifeless: http://jujucharms.com/docs/ fixed up
[05:36] <imbrandon> made a typo in my last commit :) but that is what the official site will also look like as soon as builds resume on a newer sphinx than the Lucid default ships
[05:36] <imbrandon> :)
[05:37] <imbrandon> and hopefully shuffling some of the content to the docs from juju.ubuntu.com itself also so its all central in one place
[05:38] <imbrandon> ( as well as sexy looking too I think, but i'm just a tad biased; definitely easier to read/navigate )
[05:38] <imbrandon> :P
[05:38] <lifeless> did you file the rt #?
[05:38] <imbrandon> clint did
[05:38] <lifeless> s/file/find/
[05:38] <imbrandon> not yet
[05:39] <imbrandon> i am actively looking tho
[05:39] <lifeless> heh
[05:39] <imbrandon> :)
[05:39] <lifeless> ok http://jujucharms.com/docs/write-charm.html is *much* better
[05:39] <lifeless> kudos
[05:40] <imbrandon> :) ty
[05:40] <lifeless> http://jujucharms.com/docs/write-charm.html - uses oneiric
[05:41] <imbrandon> yea there are still many oneiric refs
[05:41] <imbrandon> i am afraid to sed them out
[05:41] <imbrandon> but mostly that can be done
[05:42] <imbrandon> providers is my next section to hit up hard i think
[05:42] <imbrandon> lots of pain areas there esp with env.y config options etc
[05:43] <lifeless> http://jujucharms.com/docs/write-charm.html doesn't know that revision is a separate file yet either.
[05:44] <imbrandon> nice, so VERY outdated
[05:44] <bkerensa> SpamapS: I think I borked your merge
[05:44] <imbrandon> fixing that one now
[05:44] <imbrandon> bkerensa: nice going slick heh
[05:44] <imbrandon> bkerensa: j/k , something i can help sort ?
[06:01] <imbrandon> you're golden, gave the commit a once-over and it looks correct, i'll look closer when i make my next docs commit here in a few myself too
[06:01] <imbrandon> but i'm fairly certain you're solid :)
[13:25] <sidnei> SpamapS, added a comment on #920454, seems like precise might be missing some patches on libvirt
[13:25] <_mup_> Bug #920454: juju bootstrap hangs for local environment under precise on vmware <local> <juju:Confirmed> < https://launchpad.net/bugs/920454 >
[13:43] <SpamapS> sidnei: ahh! good sleuthing.. maybe its already reported in precise?
[13:44] <sidnei> SpamapS, couldn't find a !juju bug for precise no
[13:55] <imbrandon> SpamapS: do you know what these correspond to for HPcloud
[13:55] <imbrandon>     project-id:
[13:55] <imbrandon>     nova-uri:
[13:55] <imbrandon>     auth-mode:
[13:55] <imbrandon> ?
[13:55] <SpamapS> imbrandon: no, I don't even know what that is. :)
[13:55] <imbrandon> openstack hpcloud ( not openstack_s3 ) gonna try pure first ( i hope )
[13:56] <imbrandon> env.y settings
[13:56] <SpamapS> imbrandon: ohhh
[13:56] <imbrandon> i pulled them from conf
[13:56] <imbrandon> but they're the only ones i couldn't match up
[13:56] <imbrandon> to something i knew
[13:56] <SpamapS> sidnei: Well I do think thats a libvirt problem, and it looks like there might be a patch
[13:56] <SpamapS> imbrandon: I'd suspect auth-mode is sort of an enum
[13:57] <imbrandon> said proj id is an int, wonder if its the tenant id
[13:57] <imbrandon> that's an int
[13:57] <SpamapS> imbrandon: perhaps HPCLOUD will give you those when you get your account?
[13:57] <SpamapS> I've still never signed up
[13:57] <imbrandon> yea i got a whole screen of credentials
[13:58] <SpamapS> tho with the new ostack provider maybe I will :)
[13:58] <imbrandon> but not sure what they map to, as the names are slightly off, so i was hoping to at least get bootstrap working so i could document them
[13:58] <imbrandon> heh
[13:58] <imbrandon> yea i'm trying the ostack provider now
[13:59] <imbrandon> when you do let me know and i'll pastebin my whole env.y and give ya a head start
[13:59] <imbrandon> err if
[13:59] <SpamapS> it shouldn't be this hard
[13:59] <SpamapS> perhaps the provider needs better docs
[14:00] <imbrandon> yea
[14:00] <SpamapS> or HPcloud needs a good slapping if they aren't using the standard terms
[14:00] <imbrandon> there is -0- now, i'm reading code
[14:00] <SpamapS> oh wtf?
[14:00] <SpamapS> nothing in docs?
[14:00] <imbrandon> well i think they do, i think the provider is off
[14:00] <imbrandon> its a bit diff than ec2 as well
[14:00] <imbrandon> but yea
[14:00] <imbrandon> -0- docs
[14:00] <sidnei> SpamapS, yes, i agree. so mark the bug as affecting libvirt, invalid in juju? :)
[14:00] <imbrandon> i'm reading source to figure this stuff out
[14:01] <SpamapS> sidnei: I'm not 100% sure its invalid in juju, so I'm waiting to do that
[14:01] <SpamapS> sidnei: there may be a workaround we could apply after all
[14:04] <sidnei> oki
[14:09] <imbrandon> SpamapS: http://cl.ly/HdVH
[14:09] <imbrandon> for your ref too, so you have an idea should you need it in a pinch
[14:10] <SpamapS> imbrandon: I bet tenant-id is project-id
[14:10] <imbrandon> you'll likely have to click it to zoom to read anything, and it should be safe so you don't have to worry about hoarding ( got the bits blurred )
[14:10] <imbrandon> kk
[14:10] <SpamapS> I remember hearing that the name in the API was up for consideration to be renamed
[14:10] <imbrandon> heh
[14:11] <imbrandon> that page should stay there indefinitely too if you wanna bookmark that
[14:11] <imbrandon> for ref later or something
[14:11] <SpamapS> No I think I'll just sign up :)
[14:11] <imbrandon> its actually my upload account
[14:11] <imbrandon> kk
[14:11] <imbrandon> kool, cuz those other two are optional
[14:12] <imbrandon> so with that i *should* have a full stanza
[14:12] <imbrandon> of pure OSAPI on hpcloud
[14:12] <imbrandon> *crosses fingers and prepares to fail*
[14:13] <SpamapS> Have they at least moved up from Diablo to Essex yet?
[14:13] <imbrandon> no idea
[14:14] <imbrandon> i'm pretty sure its essex
[14:14] <imbrandon> but dunno how to tell
[14:14] <imbrandon> i'm pretty ignorant on openstack
[14:14] <imbrandon> only have the absolute minimum in my head for it so far
[14:14] <SpamapS> I don't know how to tell either
[14:14] <imbrandon> really not even  that
[14:14] <imbrandon> heh
[14:15] <imbrandon> thats one reason i'm using hp and not rak, zomg their rest api is the suxors
[14:15] <SpamapS> well, its OSTACK now IIRC
[14:15] <imbrandon> 8 rest calls to create one mysql db on the new mysql service
[14:15] <imbrandon> at rak
[14:15] <imbrandon> nah
[14:15] <imbrandon> not all of it
[14:16] <SpamapS> Oh they're not using red dwarf?
[14:16] <imbrandon> they have joey on half and half , mine only has db access
[14:16] <imbrandon> then i got a toy acct that only has new access
[14:16] <imbrandon> yea no, its all helter skelter in their control panel right now, feels like aws but with 2 competing back ends
[14:16] <imbrandon> heh
[14:17] <imbrandon> personally i would avoid rak for like another month i'd say and let them settle some dust
[14:17] <imbrandon> heh
[14:19] <imbrandon> btw i do those "full page screenshots + annotations on save" with a BA chrome extension "Awesome Screenshot" heh its perfect to get a whole page when it's like that
[14:20] <imbrandon> with one click
[14:24] <imbrandon> nova_project_id == tenant name
[14:24] <imbrandon> woot
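Piecing the above together, a hedged sketch of what an HP Cloud stanza for the in-review openstack provider looked like it would need (key names are the ones pulled out of the provider source earlier; whether all are required, and the auth-mode value, are guesses):
    environments:
      hpcloud:
        type: openstack
        nova-uri: <nova endpoint URL from the HP Cloud credentials page>
        project-id: <tenant name>    # per the nova_project_id == tenant name finding above
        auth-mode: <e.g. userpass>   # guessed value, not confirmed in channel
        default-series: precise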
[14:25] <imbrandon> bah, these forums are such a pita to use, then i grok some of the source to see why, its d7 heh non-custom d7 at that , heh no wonder
[14:26] <imbrandon> :)
[14:26] <imbrandon> these forums == help docs at hpcloud , d7 == drupal 7
[14:27]  * imbrandon gets ready to bootstrap env hpc.1-az.1-regon-a.geo1
[14:27] <imbrandon> wow
[14:30]  * robbiew senses a disturbance in the cloud force...velocity and google IO this week.
[14:30] <imbrandon> heheh
[14:31] <imbrandon> i always love IO but i always like it like 3 weeks late as videos
[14:31] <imbrandon> :)
[14:42]  * m_3 sad to be missing out on both events :(
[15:36] <hazmat> morning
[15:46] <m_3> o/
[16:58] <jimbaker`> SpamapS, i took a look at the failing test on buildd, juju.lib.tests.test_format.TestPythonFormat.test_dump_load. this test normally takes on the order of 0.001s on my laptop and involves no resources other than a json serializer/deserializer. i just ran it for 10000 loops and i didn't see it explode
[17:00] <jimbaker`> SpamapS, just looking at the test and the code it exercises, i would not expect resource starvation failures. maybe in similar code in test_invoker, which involves processes. but not here
[17:09] <SpamapS> /wi/win 7
[17:10] <SpamapS> arghhh!!
[17:11] <SpamapS> bursty network.. yargh
[17:11] <SpamapS> jimbaker`: So perhaps it is a race of some kind?
[17:12] <jimbaker`> SpamapS, in this specific code, no. twisted can report problems in the wrong place, so that could be the problem
[17:13] <jimbaker`> in terms of the twisted trial test runner
[17:13] <jimbaker`> SpamapS, i assume this failing test is stable?
[17:14] <jimbaker`> as reported on buildd?
[17:17] <SpamapS> jimbaker`: I'm not sure, I'll retry a few of the builds.
[17:18] <SpamapS> jimbaker`: it may have been transient. quantal succeeded on retry
[17:19] <SpamapS> jimbaker`: I'm retrying all the others. If they succeed, we can chalk this up to a buildd problem I think
[17:20] <jimbaker`> SpamapS, again, this is not a test i would expect to fail transiently, since it's not async. but again, it's completely possible for twisted trial to point the wrong finger from some other transient bug
[17:20] <jimbaker`> so given that, i think this is the best strategy for now
[17:20] <sidnei> uhm, is there a 'juju scp' counterpart to 'juju ssh'? if not, it could be handy
[17:21] <jimbaker`> there was one other error reported, which was a zk closing connection problem
[17:21] <jimbaker`> sidnei, yes
[17:22] <jimbaker`> sidnei, just to be clear, juju scp exists, not that it would be hypothetically handy ;)
[17:22] <sidnei> ah, i totally missed it in juju --help
[17:33] <SpamapS> sidnei: its quite useful for pulling down modified hooks actually. :)
[17:33] <sidnei> SpamapS, im trying to figure out if i can get rsync to work with that, using 'juju ssh' as the remote shell, but service name has '/' so it gets interpreted as a path instead of a machine name
[17:35] <SpamapS> sidnei: IMO we need a 'juju get-public-hostname' so you can just do rsync `juju get-public-hostname foo/3`:/path/to/file
[17:35] <SpamapS> sidnei: I think juju-jitsu might have that actually
[17:35] <sidnei> ah, that could work too
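For anyone reading later: juju scp takes the same unit syntax as juju ssh, and the rsync idea would look something like the second line below once a resolvable name is available (get-public-hostname is the command proposed above; whether jitsu actually shipped it is not confirmed here):
    juju scp wordpress/0:/path/to/some/file .                      # pull a file down from a unit
    rsync -av `jitsu get-public-hostname foo/3`:/path/to/file .    # proposed pattern, hypothetical command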
[17:44] <SpamapS> bkerensa: Hey
[17:45] <SpamapS> bkerensa: that was a pretty MASSIVE merge proposal you merged into lp:juju/docs
[17:45] <SpamapS> bkerensa: I'd have liked to hear more than just your +1 ;)
[19:08] <lifeless> SpamapS: hazmat: morning guys; hope my spam overnight wasn't an issue ;)
[19:09] <imbrandon> heya
[19:09] <imbrandon> SpamapS: whats the rt# for that docs ticket? lifeless would like to know :)
[19:09] <lifeless> hey!
[19:17] <hazmat> lifeless, no that was awesome
[19:17] <hazmat> lifeless, i'm running around today at velocity conf with meetings, i'm going to try and circle back to your branches/bugs this evening.
[19:17] <lifeless> cool, thanks!
[19:17] <lifeless> hazmat: the main one I'd like info on is the use of ip; that could be contentious for reasons other than code.
[19:18] <hazmat> lifeless, yeah.. that changes the display in status as well.
[19:18] <hazmat> lifeless, ideal would be to capture both, and use the appropriate one if the other is missing
[19:18] <lifeless> hazmat: in a good way IMNSHO :)
[19:18] <lifeless> hazmat: 10.0.0.3 is much more useful than 'server-2345'
[19:18] <hazmat> lifeless, so do you have a valid dns name in your context, or is it just ip
[19:18] <hazmat> hmm.. so you have invalid dns names
[19:19] <lifeless> hazmat: I assert that no one running openstack outside of rackspace and perhaps hp has valid public DNS names.
[19:19] <lifeless> hazmat: its not even part of the deploying openstack guidelines yet.
[19:19] <lifeless> let alone labs etc
[19:19] <hazmat> lifeless, but everyone running in a public cloud or maas probably does
[19:20] <hazmat> and for them i would posit its better to have things displayed by name than ip
[19:20] <lifeless> hazmat: public cloud will, maas *may* (if they route DNS to the maas controller)
[19:20] <lifeless> hazmat: maas seed clouds though won't, same issue as openstack.
[19:20] <lifeless> dns will be usable w/in the cloud, but not by machines outside it.
[19:21] <hazmat> lifeless, i've seen and assisted with several maas demos where dns does work fwiw
[19:21] <lifeless> hazmat: from a machine that isn't the cloud controller ?
[19:21] <lifeless> hazmat: because, that machine will be specially configured to use the dnsmasq instance on the controller.
[19:21] <hazmat> lifeless, yes, its a machine on the same network though
[19:22] <hazmat> lifeless, probably
[19:22] <lifeless> I'll match your probably and raise you an almost certainly ;)
[19:22] <lifeless> anyhow, we can gather data if we care.
[19:22] <hazmat> lifeless, no takers :-)
[19:22] <lifeless> I'm not sure that we -care- about the dns name though; what does it offer the user?
[19:23] <hazmat> a nicer symbolic name
[19:23] <hazmat> lifeless, think about configuring a vhost in apache
[19:23] <lifeless> hazmat: have you seen the public names ec2 uses ?
[19:23] <hazmat> or browsing to an app..
[19:23] <hazmat> lifeless, true
[19:23] <lifeless> these are not the same as symbolic names
[19:23] <lifeless> they are bulk loaded static mappings
[19:24] <lifeless> per region
[19:24] <lifeless> such as ec2-184-73-10-99.compute-1.amazonaws.com
[19:25] <hazmat> agreed realistically, true dns management  for services is already separate than the provider dns entries
[19:26] <hazmat> er. a separate concern
[19:30] <hazmat> lifeless, okay i'm convinced
[19:39] <lifeless> hazmat: cool, thanks for taking the time; chat more when you're done @velocity
[19:48] <SpamapS> lifeless: I'm with you on this. The DNS causes more problems than it solves.
[20:04] <negronjl> SpamapS: ping
[20:05] <negronjl> SpamapS: can you give me the commands that you use to run gource with jitsu ?
[20:05] <negronjl> SpamapS: I am trying to show the "pretty deploying stuff screen"
[20:12] <SpamapS> jitsu gource --run-gource
[20:14] <jimbaker`> the gource integration definitely demos well, i used it as part of my demo for usenix two weeks ago. nothing like seeing some good pictures of what's going on
[20:15] <SpamapS> Yeah I really think a web app showing a network diagram will play even better.. if we can ever get such a thing :)
[20:15] <jimbaker`> SpamapS, yes, i'm sure that will be quite nice :)
[20:17] <jimbaker`> it might be nice to also demo the checking off of expectations of jitsu watch, here's how service orchestration happens
[20:38] <lifeless> does one need to wait for a deploy to complete (via status) before doing add-relation ?
[20:39] <lifeless> or will it Just Work if you run add-relation immediately that the deploy returns ?
[20:42] <jimbaker`> lifeless, you can do juju add-relation as soon as you have deployed the two services
[20:42] <lifeless> jimbaker`: as soon as the juju cli returns, you mean ?
[20:43] <jimbaker`> it just works because add-relation records this setup in zookeeper; it's the responsibility of the agents to carry out this policy
[20:43] <lifeless> jimbaker`: right, but deploy returns before the agents are even running
[20:43] <jimbaker`> yes, as soon as the juju cli returns
[20:43] <lifeless> jimbaker`: which is why I'm probing for specifics ;)
[20:43] <jimbaker`> yes, even in that case
[20:43] <lifeless> cool
[20:43] <lifeless> thanks
[20:43] <jimbaker`> because the agents carry out the policy, as recorded as it is in ZK
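So in practice the whole sequence can be fired off back-to-back; a standard example (generic services, not from this channel):
    juju deploy mysql
    juju deploy wordpress
    juju add-relation wordpress mysql   # safe to run as soon as the deploys return
    juju expose wordpress
    juju status                         # the agents converge on the relations recorded in ZK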
[20:44] <lifeless> how can one recover from a wedged node ?
[20:44] <lifeless> by which I mean
[20:44] <jimbaker`> lifeless, basically what you see with the juju cli returning is that the update to zk has been made
[20:44] <lifeless> when I run 5 or 6 deploys in quick succession, openstack is throwing a tanty and gets an unspecified 'error' on one of the nodes - it gets an instance id and never comes up
[20:45] <lifeless> How can I tell Juju 'btw that machine, it didn't, so toss it away and start clean'
[20:45] <jimbaker`> lifeless, sounds like a bug with the provider and possibly the provisioning agent
[20:46] <jimbaker`> lifeless, so if you can get the provision agent log (on machine 0), that would be helpful. or use juju debug-log
[20:46] <lifeless> jimbaker`: I'm sure its accentuated for me here and now, local openstack install. *but*, it can happen in e.g. ec2, when service disruption happens, that a reservation request goes into limbo or even fails asynchronously.
[20:46] <lifeless> jimbaker`: the API call to provision the instance fails.
[20:46] <lifeless> bah
[20:46] <lifeless> succeeds*
[20:47] <lifeless> its a cloud backend failure. Async.
[20:47] <jimbaker`> lifeless, it should eventually succeed
[20:47] <jimbaker`> it's a bug if it doesn't
[20:47] <lifeless> To a degree, I agree. Ideally we could say 'its a bug over there, go fix that'
[20:47] <jimbaker`> so some sort of occasional failure is expected
[20:47] <lifeless> but, ^^ that.
[20:47] <lifeless> how do I recover without wiping the whole juju environment of 10 instances
[20:47] <jimbaker`> we just tried to engineer the provisioning agent so that it does appropriate retries
[20:48] <lifeless> jimbaker`: but it doesn't retry if the API call succeeds?
[20:48] <jimbaker`> lifeless, iirc, it does do dead machine detection for cases like that
[20:49] <lifeless> how long does it wait? Perhaps I wasn't patient enough
[20:49] <jimbaker`> lifeless, without logs, i'm afraid i can only speculate :)
[20:49] <lifeless> ok, well I didn't capture any (didn't konw how)
[20:49] <lifeless> juju debug-log; how do I get the provision agent log ?
[20:50] <jimbaker`> lifeless, it should just be there, see https://juju.ubuntu.com/docs/user-tutorial.html#starting-debug-log
[20:51] <lifeless> jimbaker`: how do I get the whole log though? I mean, this is some time back presumably... I don't notice instantly.
[20:53] <jimbaker`> lifeless, you can also just grab it from the machine 0 box
[20:53] <lifeless> jimbaker`: thats what I was asking
[20:54] <lifeless> where on the box :)
[20:54] <jimbaker`> lifeless, regrettably i don't have it committed to memory, but i am looking right now
[20:54] <lifeless> thanks; Perhaps a doc patch :>
[20:55] <jimbaker`> /var/log/juju/provision-agent.log
[20:55] <lifeless> cool
[20:55] <lifeless> I will look there next time it happens and see what I can see
[20:55] <lifeless> thank you
[20:56] <jimbaker`> lifeless, sounds good
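For the record, the agent logs live on the bootstrap node, so grabbing the provisioning agent log looks roughly like:
    juju ssh 0     # machine 0 is the bootstrap node
    sudo tail -f /var/log/juju/provision-agent.log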
[20:56] <lifeless> \o/ finally apparently have hdfs + hadoop up, ready to iterate on stuff using it ;>
[20:56] <lifeless> super not-simple
[21:08] <jimbaker`> lifeless, i did check juju debug-log; two things, 1) the check against the flag it sets in ZK executes too late for the provisioning agent upon first install; 2) the PA still does not seem to use debug logging even if restarted. so for now, /var/log/juju/provision-agent.log seems to be the way to go
[21:09] <jimbaker`> also this doesn't mix output, which is probably why i actually use the file when debugging the PA
[21:09] <lifeless> first install - of a new service, or first install of the environment ?
[21:09] <jimbaker`> first install of the environment
[21:09] <jimbaker`> specifically the bootstrap node
[21:09] <jimbaker`> (aka machine 0)
[21:10] <lifeless> that seems hard to avoid, as the zk runs on that node
[21:11] <lifeless> if it fails to come up, ....
[21:16] <jimbaker`> lifeless, actually i'm going to back out that earlier statement. you are going to miss some changes to the agent log, but there is a watcher in place for such global env settings (currently only debug log). so it appears to be just a fault of the logging in the PA
[21:17] <jimbaker`> lifeless, not certain why, the log setup is fairly generic in terms of adding handlers
[22:22] <lifeless> oh, lalalalalala
[22:22] <lifeless> hazmat: SpamapS: you'll love this:
[22:22] <lifeless> 2012-06-26 10:21:39,616 INFO Connecting to unit hbase-regioncluster-01/0 at server-10.novalocal
[22:22] <lifeless> ssh: Could not resolve hostname server-10.novalocal: Name or service not known
[22:22] <lifeless> (juju ssh)
[22:22] <lifeless> novalocal is the local search domain on the instance, not globally resolvable.
[22:23] <lifeless> INSTANCE        i-0000000a      ami-00000004    server-10       server-10       running         0               m1.small        2012-06-25T20:36:32.000Z        nova                            monitoring-disabled     10.0.0.7        10.0.0.7                       instance-store
[22:23] <lifeless> is what the EC2 API returns :>
[22:28] <lifeless> has anyone here worked with the hbase charm? I want to create a table from a different charm, which requires running the hbase binary (which is only on hbase units...)
[22:38] <SpamapS> lifeless: double doh on the IP vs hostname
[22:41] <lifeless> SpamapS: the rabbit hole gets deeper :) To me, this validates my first point :> Also, split view DNS sucks.
[22:45] <SpamapS> lifeless: split view DNS is just a relic of the way EC2 has done things..
[22:46] <SpamapS> lifeless: you know, there's an openstack provider in review right now..
[22:46] <SpamapS> lifeless: have you been able to try it out?
[22:46] <lifeless> SpamapS: nope
[22:47] <lifeless> SpamapS: I'm not actually trying to hack on juju at all, just use it ;)
[22:47] <lifeless> SpamapS: so I'm running precise, etc; just keep hitting yak shave events.
[22:47] <lifeless> SpamapS: what I *want* to do, is try out opentsdb
[22:47] <lifeless> and logstash (with elasticsearch)
[22:47] <SpamapS> oh right
[22:48] <SpamapS> lifeless: but you're also trying it against a local openstack.....
[22:48] <SpamapS> which.. complicates matters. :)
[22:48] <lifeless> SpamapS: a little
[22:48] <lifeless> but shouldn't break charms as inside the stack dns works
[22:49] <lifeless> its just the outside thing, exactly like canonistack
[22:49] <SpamapS> yeah
[22:51] <lifeless> e.g.
[22:51] <lifeless> ubuntu@server-8:~$ host server-8.novalocal
[22:51] <lifeless> server-8.novalocal has address 10.0.0.3
[22:51] <lifeless> and
[22:51] <lifeless> ubuntu@server-8:~$ host server-9.novalocal
[22:51] <lifeless> server-9.novalocal has address 10.0.0.4
[22:51] <lifeless> so hbase's tools decided they couldn't find the master.
[22:53] <SpamapS> lifeless: I assume the relationship sets up the necessary configs for that?
[22:53] <lifeless> http://jujucharms.com/charms/precise/hbase
[22:53] <lifeless> I followed the bouncing ball there
[22:54] <lifeless> juju deploy hbase hbase-master
[22:54] <lifeless> juju deploy hbase hbase-regioncluster-01
[22:54] <lifeless> juju deploy zookeeper hbase-zookeeper
[22:54] <lifeless> juju add-relation hbase-master hbase-zookeeper
[22:54] <lifeless> juju add-relation hbase-regioncluster-01 hbase-zookeeper
[22:54] <lifeless> juju deploy --config example-hadoop.yaml hadoop hdfs-namenode
[22:54] <lifeless> juju deploy --config example-hadoop.yaml hadoop hdfs-datacluster-01
[22:54] <lifeless> juju add-relation hdfs-namenode:namenode hdfs-datacluster-01:datanode
[22:54] <lifeless> juju add-relation hdfs-namenode:namenode hbase-master:namenode
[22:54] <lifeless> juju add-relation hdfs-namenode:namenode hbase-regioncluster-01:namenode
[22:54] <lifeless> juju add-relation hbase-master:master hbase-regioncluster-01:regionserver
[22:54] <lifeless> ^ is exactly what I ran
[22:54] <lifeless> with a minute or so between deploys
[22:54] <lifeless> and nothing between relation adding
[22:54] <lifeless> (sorry for the Spam)
[22:56] <SpamapS> why minutes between deploys?
[22:56] <SpamapS> You should have been able to run all of the deploys all at once
[22:56] <SpamapS> and all the relations
[22:56] <SpamapS> and then wait for everything to finish. :)
[22:58] <lifeless> SpamapS: see my chat with jimbaker` above about vagaries of local stack deploys with one compute node
[22:59] <lifeless> something whinges when I allocate 32GB of ram etc all in one batch
[23:01] <SpamapS> lifeless: I see. Thats a bit disappointing given how "scalable" openstack is supposed to be...
[23:01] <lifeless> well
[23:02] <lifeless> I mean, I have one node here with 16GB ram
[23:02] <lifeless> but yes, there is some glitch in there, and I've already shaved enough yaks on this exercise.
[23:02] <lifeless> the local install is v useful considering latency to canonistack, f'instance
[23:03] <SpamapS> indeed
[23:03] <SpamapS> lifeless: I think a native OpenStack provider, and more people banging on OpenStack's with juju, will help this go a lot more smoothly.
[23:04] <lifeless> well, I'm beyond that bit, workarounds R us.
[23:06] <lifeless> SpamapS: whats a good charm that knows how to create db's on demand ?
[23:07] <lifeless> I presume the wordpress <-> mysql pair does that ?
[23:09] <jimbaker`> lifeless, that pairing would be a good choice
[23:11] <lifeless> jimbaker`: my suspicion/concern is that it does that using the network protocol
[23:11] <lifeless> hmmm
[23:11]  * lifeless needs to know more hbase ops
[23:13] <jimbaker`> lifeless, i'm not certain what you mean by that re "network protocol".  i will say that it does an exchange of relation settings to accomplish the desired service orchestration
[23:15] <lifeless> hah
[23:15] <lifeless> so schema details
[23:15] <lifeless> I'd be surprised if schema migrations were passed via zk
[23:15] <lifeless> rather than directly.
[23:15] <lifeless> where directly == connect to the mysql network port.
[23:17] <jimbaker`> lifeless, the theory of juju, if there is one ;), is that this is outside of the scope of the charm, unless orchestration is required
[23:17] <jimbaker`> not trying to reinvent how mysql coordinates, just solve certain issues
[23:17] <lifeless> jimbaker`: sure
[23:17]  * hazmat pauses from juju talk to catchup
[23:17] <lifeless> its orchestration though
[23:17] <lifeless> schema application + upgrades is a massive orchestration issue
[23:18] <jimbaker`> lifeless, then i can see that under the scope of the charm
[23:18] <lifeless> including the trivial-enough first-bringup case.
[23:18] <lifeless> the charm needs to facilitate it
[23:18] <lifeless> which is what I'm trying to figure out for hbase :>
[23:19] <jimbaker`> so if the services need to coordinate on add-relation/remove-relation or add-unit/remove-unit, then sure, use juju there
[23:20] <lifeless> I think we're talking past each other
[23:21] <SpamapS> lifeless: the charm gets you credentials and a place to connect to. But no doubt, if a relationship is more specific, schema details and dependencies across multiple charms could be orchestrated very easily.
[23:21] <SpamapS> lifeless: such a thing just has yet to surface.
[23:22] <jimbaker`> lifeless, in particular, this can be done through an advanced form of service orchestration, which orchestrates with respect to relations. adam_g has been doing this for the openstack charms fwiw
[23:23] <jimbaker`> lifeless, you might need to do this. i guess we just need to settle on something concrete to say one way or the other :)
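For the concrete mysql case above: the mysql charm creates the database and publishes credentials as relation settings, and the consuming charm reads them with relation-get and then connects over the network itself to do whatever schema work it needs. A minimal sketch of the consuming side (the setting names are the conventional ones for the mysql interface, treat them as illustrative):
    #!/bin/sh
    # hooks/db-relation-changed (sketch)
    host=`relation-get host`
    database=`relation-get database`
    user=`relation-get user`
    password=`relation-get password`
    # settings may not be populated on the first invocation; the hook fires again when they are
    [ -z "$host" ] && exit 0
    # from here the charm connects to $host itself and creates/migrates its schema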
[23:25]  * hazmat gets back to the audience questions
[23:55] <zirpu> velocity talk went well. woot
[23:57] <m_3> whoohoo!