[05:19] <SpamapS> anybody up late charming? :)
[05:27] <ejat> :)
[05:27] <ejat> SpamapS: any plan ?
[05:34] <marcoceppi> SpamapS: You know it
[05:34] <marcoceppi> SpamapS: since you're up
[05:34] <marcoceppi> I've got questions
[05:35] <marcoceppi> relation-set, it works from any hook now right?
[06:06] <ejat> marcoceppi: u at ya room ?
[06:06] <marcoceppi> ejat: no, I'm downstairs
[06:06] <ejat> with ?
[06:06] <ejat> at lobby ?
[06:26] <SpamapS> marcoceppi: yes
[06:26] <SpamapS> marcoceppi: you just have to give it a relation id with -r
[06:26] <SpamapS> marcoceppi: this allows full orchestration now :)
[06:26] <SpamapS> service A changes something, B sees change, reacts by informing C
[06:33] <SpamapS> marcoceppi: working on anything juicy?
[08:17] <_mup_> Bug #995823 was filed: Machines occasionally fail to start the machine agent correctly because of an unhandled ConnectionTimeoutException <juju:New> < https://launchpad.net/bugs/995823 >
[08:23]  * SpamapS rings the bell
[08:23] <SpamapS> new charm promulgated, mumble-server!
[08:23] <SpamapS> well done Kees Cook. :)
[12:21] <Leseb> hi everyone!
[12:30] <benji> hi Leseb
[12:32] <Leseb> benji: I'm running MAAS + Juju and I have some questions, do u have time to help me a little? please
[12:33] <benji> Leseb: I will give it a shot, but I don't know anything about MAAS yet.
[12:35] <Leseb> ok, first I just want to be sure about the meaning of "juju bootstrap"
[12:35] <Leseb> It means setting up a new environment, right?
[12:35] <Leseb> does it also mean launching an instance?
[12:39] <benji> Leseb: when running in EC2 "juju bootstrap" will start one instance which is used for administrative purposes
[12:39] <benji> then deploying charms will launch more instances
[12:40] <benji> (there are plans to make the control instance shared with other instances so as to reduce the instance count, but that's not ready yet)
[12:41] <Leseb> ok, what do you mean by "for administrative purposes"? the node doesn't run services?
[12:42] <benji> Leseb: exactly
[12:44] <Leseb> ok, thank you, that was really helpful :)
[12:48] <benji> my pleasure
[13:00] <Leseb> benji: so this same command also copies the public ssh keys?
[13:00] <Leseb> to the newly created instance?
[13:01] <benji> Leseb: I /think/ temporary ssh keys are generated for instances, you can use the "juju ssh" command to ssh into an instance.
[13:02] <benji> once in you could add your real ssh keys if you so desired
[13:46] <marcoceppi> SpamapS: with -r for relation-set, can i just give it the interface or relation, instead of a unit? like `relation-set -r db foo=bar`
[16:00] <SpamapS> marcoceppi: no, you need a relation id (as given as $JUJU_RELATION_ID in the joined/changed/departed/broken hooks). That's basically the bucket of information that you want to update. -r db wouldn't have enough context because there might be more than one db relationship
[16:00] <SpamapS> in fact I think relation-id is the wrong term
[16:01] <SpamapS> relations are the things you define, relationships are the things you establish
[16:01] <marcoceppi> SpamapS: so relationship-id is a better descriptor?
[16:01] <SpamapS> marcoceppi: yeah
[16:01] <SpamapS> but thats something we'll have to work out long-term
[16:02] <SpamapS> relation-ids is already part of the vernacular
[16:02] <marcoceppi> sure, np
[16:02] <marcoceppi> SpamapS: I just threw this up too, not sure if it's a good idea for charm-helper-sh
[16:02] <marcoceppi> https://code.launchpad.net/~marcoceppi/charm-tools/unit-parsing/+merge/104929
[16:08] <SpamapS> marcoceppi: +1 , thats nice actually.
[16:09] <SpamapS> marcoceppi: I had some good success moving juju-jitsu to autotools so we can address the path issues btw.
[16:09] <marcoceppi> SpamapS: just wanted to make sure the movement from peer.sh and copyright were okay
[16:09] <marcoceppi> excellent!
[16:09] <SpamapS> marcoceppi: we'll move to autotools and then we can have @scriptdir@ in code and just let 'make install' work that stuff out.
[16:09] <marcoceppi> awesome, I felt dirty hard coding paths ;)
[16:09] <SpamapS> marcoceppi: that does make it hard to test.. the hardcoded path..
[16:10] <SpamapS> marcoceppi: perhaps do . "${CHARMTOOLS_PATH:-/usr/share/charm-tools}/scripts" ?
[16:10] <SpamapS> marcoceppi: just so we can override it
[16:11] <marcoceppi> I was starting to do this weird bash hack to figure out working directory, and decided it would be just best to push the hard code for now
[16:11] <marcoceppi> yeah, I'll go ahead and update that
[16:12] <marcoceppi> SpamapS: . "${CHARM_HELPER_SH_PATH:-/usr/share/charm-helper/sh}/unit.sh"
[16:12] <SpamapS> marcoceppi: yeah that should work. Please make sure there is at least a test that it parses.
[16:12] <SpamapS> marcoceppi: as long as we're touching it.. moar tests!
[16:13] <marcoceppi> test that parses it?
[16:13] <marcoceppi> You mean create a unit test?
[16:13] <marcoceppi> I can do that prior to the merge
[16:14] <SpamapS> Yeah look at the other tests
[16:14] <marcoceppi> seems easy enough
[16:14] <marcoceppi> I'll add that now and push it up
[16:14] <SpamapS> like seriously as long as it does '. ...'
[16:15] <SpamapS> so we don't ship a package that has unparsable shell code :p
[16:18] <marcoceppi> SpamapS: I noticed that unit tests use $HELPERS_HOME, would that be a better env variable than $CHARM_HELPER_SH_PATH ?
[16:19] <marcoceppi> inside peer.sh
[16:21] <SpamapS> marcoceppi: yeah that makes sense
[16:21] <marcoceppi> awesome, just about done
[16:21] <SpamapS> marcoceppi: peer.sh is not always the best thing to copy though :)
[16:21] <SpamapS> marcoceppi: its a bit ambitious :)
[16:22] <marcoceppi> aye it is, I just want to parse unit names/numbers quickly
[16:22] <SpamapS> exactly, that stuff belongs in its own tight lib
[16:24] <marcoceppi> hence unit.sh :)
[16:36] <marcoceppi> SpamapS: something odd is happening with the unit test, it's only running the first test and not carrying on
[16:38] <marcoceppi> disregard
[16:38] <SpamapS> done
[16:39] <marcoceppi> Unit tests are good indeed
[16:41] <SpamapS> always nice to know it works :)
[16:43] <marcoceppi> Okay, unit test is up, passing, and pushed if you want to take a final look
[16:43]  * SpamapS pulls and plays
[17:59] <SpamapS> FYI, there is a juju related session starting at UDS in Room 201
[17:59] <SpamapS> http://summit.ubuntu.com/uds-q/meeting/20509/servercloud-q-juju-resource-map/
[18:00] <SpamapS> http://icecast.ubuntu.com:8000/room-201.ogg
[20:44] <ahasenack> I have two environments in my .juju/environments.yaml file, how do I tell juju which one to use without changing "default" in that file all the time?
[20:44] <ahasenack> hm, -e
[20:55] <SpamapS> ahasenack: also $JUJU_ENV
[20:56] <SpamapS> ahasenack: another useful one, $JUJU_REPOSITORY
[21:00] <marcoceppi> SpamapS: is there a way to lower the verbosity of the juju command?
[21:00] <marcoceppi> IE mute the WARNINGS
[21:00] <marcoceppi> wall of warning during a deploy demo is always awkward to explain
[21:00] <marcoceppi> "PAY NO ATTENTION TO THE MAN BEHIND THIS CURTAIN"
[21:08] <SpamapS> juju -l ERROR
[21:09] <SpamapS> marcoceppi: what are the warnings? Bad charms in your repo?
[21:09] <SpamapS> I think we should drop that to INFO actually
[21:09] <SpamapS> having a half-written charm in a repo is a normal, but notable event
[21:09] <SpamapS> not a warning IMO
[21:12] <marcoceppi> SpamapS: http://paste.ubuntu.com/974479/
[21:13] <SpamapS> marcoceppi: add 'ssl-hostname-verification: true' to your environment.
[21:13] <SpamapS> marcoceppi: that *is* a legitimate warning. :)
[21:13] <marcoceppi> I know, i know :)
[21:14] <SpamapS> marcoceppi: we made it a warning, instead of a fail, so you'd have time to make sure all your SSL endpoints verify :)
[21:14] <SpamapS> The 'honolulu' release will make it true by default
[21:24] <koolhead17> SpamapS, sir when can we see you at UDS :)
[21:25] <SpamapS> koolhead17: I arrive in the morning
[21:26] <koolhead17> cool. :)
[21:46] <SpamapS> marcoceppi is about to present charms at UDS http://video.ubuntu.com/live
[21:58] <Tribaal> Hi all
[21:59] <SpamapS> why does everybody want to deploy multiple things to one box.. one box in the cloud == fail
[22:05] <james_w> SpamapS, it's more reliable than multiple boxes :-)
[22:06] <SpamapS> james_w: if by more reliable you mean "fails more reliably", then yes. :)
[22:06] <james_w> lower system failure rate
[22:06] <lifeless> SpamapS: multiple systems multiply their failure rates
[22:06] <lifeless> SpamapS: not everything is single node failure resilient
[22:07] <lifeless> SpamapS: secondly, many things need to exist on the same box to cooperate sanely (e.g. take advantage of local IO, or process local logs) - and making them all tightly bound to another charm doesn't make sense in folks' heads.
[22:08] <lifeless> SpamapS: a good question to ask is why folk don't *want* to do it the way the Juju design thinks is best
[22:11] <SpamapS> lifeless: that's exactly what I'm asking. Why are people rejecting the model that juju is selling? I think it's the same reason people default to single-threaded code... multiple threads, multiple servers, is hard.
[22:13] <james_w> IME it's cost that drives it initially. For a trivial app paying for three machines is far more than is needed to handle the request load
[22:15] <lifeless> SpamapS: thinking of 'threads are for developers that don't understand state machines' ?
[22:16] <lifeless> SpamapS: FWIW I don't think it's that; there's a wide range of things where coexisting is much easier to reason about (and pay for)
[22:16] <lifeless> SpamapS: where reasoning about includes dealing with network failure modes, avoiding NFS or similar tools
[22:18] <SpamapS> lifeless: could it just be force of habit?
[22:19] <lifeless> SpamapS: I don't think so
[22:19] <lifeless> SpamapS: I've had what, 2 years of Juju exposure, and its still my second most desired feature
[22:20] <lifeless> SpamapS: (the #1 being security)
[22:20] <SpamapS> lifeless: can you explain a deployment that you want to do where m1.small is too big?
[22:21] <SpamapS> or is that the real problem? I think it's resource-based, but is it deeper than that?
[22:21] <lifeless> SpamapS: size isn't the issue for me - james_w brought up size.
[22:21] <lifeless> I acked that paying for things does appear
[22:21] <lifeless> but you could use micro I presume for really really tiny things
[22:22] <SpamapS> Ok, so its a fail rate thing. Hm.
[22:23] <lifeless> fail rate + fail mode
[22:23] <lifeless> if you have a log tailer
[22:23] <lifeless> for instance, trivial thing
[22:23] <lifeless> do you want to do that from a different box ?
[22:23] <lifeless> even a different 'virtual box' using LXC ?
[22:23] <SpamapS> well a log tailer is a subordinate
[22:24] <SpamapS> which goes inside the same container
[22:24] <SpamapS> (which in most cases means inside the same bare VM/machine)
[22:26] <SpamapS> lifeless: subordinates solves the case for anything that you always want deployed at a 1:1 ratio together
[22:26] <lifeless> consider oops-tools UI then
[22:26] <lifeless> let me tell you about oops-tools
[22:27] <lifeless> it has:
[22:27] <lifeless>  - a blob store, which is just a directory on disk where oops files are stored. They get there via an AMQP consumer
[22:28] <lifeless>  - a postgresql DB, which indexes the blob store, it is populated by the same AMQP consumer
[22:28] <lifeless>  - an AMQP consumer, which receives OOPSes in near-realtime, writes them to the blob store and postgresql
[22:28] <lifeless>  - a gc process which removes things from both the postgresql DB and the blob store
[22:29] <lifeless>  - a wsgi web UI that shows things from the blob store + postgresql
[22:29] <lifeless>  - some helper scripts that run out of cron to do reports and the like
[22:29] <SpamapS> alright
[22:29] <SpamapS> all sounds good
[22:30] <lifeless> now, that's /nearly/ fully distributed
[22:30] <lifeless> if the blob store were e.g. s3
[22:30] <lifeless> but to use juju today, why would I reach for 3 machines vs 1 ?
[22:31] <SpamapS> You would not. And in that case, thats all 1:1
[22:31] <lifeless> perhaps I don't understand subordinates then
[22:31] <lifeless> I wouldn't want every postgresql server to have apache+rabbit+oops-tools-ui etc
[22:32] <SpamapS> right, you just want one, the one that does oops , right?
[22:32] <SpamapS> assuming we convert oops-tools to a subordinate..
[22:32] <SpamapS> juju deploy pgsql
[22:32] <SpamapS> juju deploy oops-tools
[22:32] <lifeless> for this part of the picture yes, but then I also want postgresql for LP :P
[22:32] <SpamapS> juju add-relation oops-tools pgsql
[22:32] <SpamapS> that puts oops-tools on the pgsql service
[22:32] <SpamapS> lifeless: so you want to make *one* server special?
[22:33] <lifeless> SpamapS: I'm not trying to move the goalposts, honestly.
[22:33] <SpamapS> well is the pgsql service one server, or +1 ?
[22:33] <lifeless> one for oops-tools, slony cluster for LP
[22:34] <SpamapS> It sounds like you have an over-powered pgsql server and want to take advantage of that...
[22:34] <lifeless> probably a slony cluster for other ancillary components of LP once some refactorings get done.
[22:34] <SpamapS> are the pgsql servers for oops and lp the same though?
[22:34] <lifeless> nope
[22:34] <lifeless> different
[22:34] <SpamapS> Ok then yeah, subs solves this nicely *IF* you always want oops-tools to be on the pgsql instance.
[22:34] <SpamapS> thats where I don't like the rigidity of oops-tools
[22:35] <lifeless> don't want to impact production LP behaviour with data-analytics from oops-tools
[22:35] <SpamapS> err
[22:35] <SpamapS> rigidity of subordinates
[22:36] <lifeless> essentially, LP is headed towards a pattern where each cooperating service is a bundle of (pg, amqp-publishers, amqp-consumers, one or more wsgi apps)
[22:36] <lifeless> and LP as a whole is a group of such bundles
[22:36] <lifeless> some bundles will be HA (via slony, haproxy etc), others (like oops-tools) will be best-effort (and the rest of the system handles their absence in some fashion)
[22:37] <lifeless> e.g. gpg key verification - if down, ppa uploads get queued, alerts go off, we bring it back up, but it doesn't have to be up 100% of the time
[22:37] <lifeless> (and bringing it up should be self healable, in fact)
[22:37] <SpamapS> lifeless: we have this problem w/ nova too. Nova can work w/ its own local sqlite, or a remote mysql server.
[22:37] <SpamapS> lifeless: for devs, the local sqlite is the simple case for smoke testing.. but no sane deployment uses it.
[22:38] <lifeless> SpamapS: sure, for clarity though, I'm talking the prod layout; local dev uses test fixtures to bring up transient services
[22:38] <lifeless> like, we want separate pg clusters, to mitigate failures
[22:38] <lifeless> thats a place where we *do* want multiple machines
[22:39] <lifeless> its very unclear to me atm how you run 5 or 6 different slony clusters in one juju environment
[22:39] <SpamapS> lifeless: juju deploy slony slony-1 ; juju deploy slony slony-2   ??
[22:39] <lifeless> the way I'm thinking about juju use today, each environment will be extremely narrowly targeted, and we'll export its public surface and manually inject it into other environments.
[22:40] <SpamapS> lifeless: you can even do this on ec2   juju deploy slony slony-a --constraints ec2-zone=a ; juju deploy slony slony-b --constraints ec2-zone=b
[22:40] <lifeless> https://juju.ubuntu.com/docs/user-tutorial.html#deploying-service-units
[22:40] <lifeless> probably wants to mention that optional parameter then ;)
[22:41] <SpamapS> Yeah its a common problem
[22:46] <SpamapS> lifeless: I'm fixing that particular page to be more clear now. Really good suggestion.
[22:46] <lifeless> would that address the subordinates thing?
[22:46] <lifeless> if you do deploy pgsql oops-pg
[22:47] <SpamapS> yes!
[22:47] <lifeless> or would oops-tools still land on all pg server s?
[22:47] <SpamapS> no it would only be on oops-pg
[22:47] <SpamapS> the only bummer about that is that you now always have to deploy oops as a sub
[22:47] <lifeless> presumably because deploy oops-tool does nothing, and add-relation oops-tool oops-pg triggers the actual install
[22:47] <lifeless> ?
[22:47] <SpamapS> I'd really prefer there to be a way to have runtime subordination
[22:48] <lifeless> SpamapS: we have runtime insubordination already :P
[22:48] <SpamapS> lifeless: thats exactly how it works
[22:48] <lifeless> you might like to touch up https://juju.ubuntu.com/docs/subordinate-services.html a bit while you are there
[22:48] <lifeless> it reads like an advert for a coming up feature
[22:48] <lifeless> not something that exists
[22:49] <lifeless> it doesn't explain the 1:1 thing, nor the need to name things to get them to be M:N
[22:50] <SpamapS> lifeless: indeed, its basically the original spec. We need to add a subordinate section to the tutorials
[22:51] <lifeless> SpamapS: I hope this is useful feedback
[22:51] <lifeless> I feel as though I've been a bit curmudgeonly
[22:51] <SpamapS> its unbelievably useful
[22:52] <SpamapS> we're all so close to the problem
[22:52] <SpamapS> we don't always see how it relates to the real world
[23:00] <SpamapS> lifeless: https://code.launchpad.net/~clint-fewbar/juju/docs-clarify-service-name/+merge/104994
[23:00] <SpamapS> lifeless: I intend to do more, but that at least will fix the cited page
[23:01] <lifeless> there is a lot in that diff
[23:01] <SpamapS> Yeah I'm not sure why
[23:01] <SpamapS> whoa heh I think I branched the wrong thing :p
[23:02] <lifeless> rather than database-service, perhaps you could say wordpress-db or something
[23:02] <lifeless> I suspect admins will try to name things so they can remember them
[23:02] <SpamapS> lifeless: yeah I am not super happy with that name now that I read it
[23:02] <lifeless> and that will ring truer
[23:02] <SpamapS> ah the merge target is wrong
[23:03] <SpamapS> interesting, I think lp-propose doesn't work right
[23:03] <SpamapS> 'bzr lp-propose lp:juju/docs' merged against the subdir of lp:juju called docs
[23:04] <SpamapS> not the docs series
[23:04] <SpamapS> we really need to delete that dir from lp:juju
[23:05] <SpamapS> lifeless: calling it 'wordpress-db' changes the juxtaposition a bit though.. as now we're suggesting that we couldn't relate non-wp things to it.
[23:05] <lifeless> SpamapS: do you do that in the example ?
[23:05] <lifeless> SpamapS: and wearing your paranoid sysadmin hat - I know you have it somewhere - would you ever do that?
[23:06] <lifeless> one db server, one app
[23:06] <SpamapS> my paranoid sysadmin hat yells at me from its place in my closet..
[23:06] <lifeless> -much- easier to think about query behaviour, caching patterns, db recovery, failover, etc
[23:06] <lifeless> I mean, you are right, yes it changes the slant.
[23:07] <SpamapS> Yes indeed, though I have made multi-tenancy work fine in the past... its a whole different ballgame.
[23:07] <lifeless> and rule #1 of devops is KISS :)
[23:07] <lifeless> this is the same argument juju makes about machines
[23:07] <lifeless> I think the difference is a matter of intent
[23:08] <lifeless> sharing a machine with different components of one usecase -> fair call
[23:08] <lifeless> sharing a machine with components for different usecases -> world of confusion
[23:08] <lifeless> you can s/machine/DB-server/ there with no change in semantics
[23:08] <lifeless> and - aha moment - this is what I've been trying to get at with juju and multiple machines
[23:09] <lifeless> when you're delivering one use case, it is often (particularly when you aren't gluing full SOA services together, but are homebrewing or whatever), easier to work in a local environment not networked.
[23:10] <SpamapS> alright pushed s/database-service/wordpress-db/g
[23:10] <lifeless> when you're delivering two or more use cases, at that point you definitely want things to not stomp on each other
[23:10] <SpamapS> looks much better
[23:10] <SpamapS> sentences read more clearly
[23:10] <lifeless> so if you come to me and talk about LP + juju, I do want multiple machines, for the things that compete for resources, but I also want single machines, for the bits that really are a single unit (like the librarian with its associated helper code)
[23:11] <SpamapS> lifeless: right, I think a lot of things are well written as a subordinate
[23:11] <lifeless> its not that they won't in future want and need horizontal scaling etc, its a minimum complexity, maximum gain kindof thing
[23:11] <SpamapS> lifeless: btw, we just pushed a new primary service into the charm store, called 'ubuntu' .. so any subordinate can be deployed alone by itself too :)
[23:11] <lifeless> nice hack
[23:12] <SpamapS> its also useful for pre-allocating machines in ec2/maas
[23:24] <gogodoit> hi there, trying to install juju on OSX 10.7 via brew, and getting this error: Error: Failed executing: python setup.py install (juju.rb:31)
[23:24] <gogodoit> (this is with a clean brew installation)
[23:24] <gogodoit> any ideas what I'm missing?
[23:33] <Laney> Hey jujuers. I'm trying to play with juju using canonistack but it appears to want to use the hostname of the servers when doing commands (actually, I only tried 'juju status'). Can I make it use the IP? Or, can you give me a hint to get the dns resolving?
[23:38] <m_3_> Laney: on openstack installs, you need to either configure an ssh proxy into the cloud or assign a public address to the bootstrap node
[23:39] <Laney> m_3_: I can ssh using the IP address, yes
[23:39] <m_3_> Laney: often also need to add patterns into .ssh/config in order for it to use the proxies
[23:39] <Laney> but juju tries to use server-nnnn which I cannot resolve
[23:40] <m_3_> Laney: alias them in .ssh/config
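A sketch of the kind of ~/.ssh/config pattern being suggested; the gateway host and user here are hypothetical stand-ins for whatever machine can reach and resolve the server-* names inside the cloud:

```
# Hypothetical ~/.ssh/config fragment: make server-* names reachable
# by proxying through a host inside the cloud that can resolve them.
Host server-*
    User ubuntu
    ProxyCommand ssh -q ubuntu@gateway.example.com nc %h %p
```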
[23:42] <Laney> let me try that
[23:43] <SpamapS> can't you just run juju on the node 0 that gets bootstrapped?
[23:43] <SpamapS> just copy your environments.yaml and ssh key
[23:47] <gogodoit>    sorry, I didn't rtfm, found the osx brew fix here https://github.com/jujutools/homebrew-juju/issues/5
[23:48] <SpamapS> gogodoit: thanks, imbrandon is the author of that, he's at the ubuntu dev summit today and so probably not watching IRC
[23:48] <Laney> SpamapS: yes, but that is more manual than I like
[23:48] <Laney> I got it working with a pattern in .ssh/config
[23:48] <SpamapS> Laney: yeah that usually works
[23:48] <Laney> I think ...
[23:48] <Laney> ERROR SSH forwarding error: nc: getaddrinfo: Name or service not known
[23:48]  * Laney eyes things
[23:57] <Laney> ah, works
[23:57] <Laney> does canonistack have a secure transport?