=== almaisan-away is now known as al-maisan
=== al-maisan is now known as almaisan-away
[05:19] anybody up late charming? :)
[05:27] :)
[05:27] SpamapS: any plan ?
[05:34] SpamapS: You know it
[05:34] SpamapS: since you're up
[05:34] I've got questions
[05:35] relation-set, it works from any hook now, right?
[06:06] marcoceppi: are you at your room?
[06:06] ejat: no, I'm downstairs
[06:06] with ?
[06:06] at the lobby?
[06:26] marcoceppi: yes
[06:26] marcoceppi: you just have to give it a relation id with -r
[06:26] marcoceppi: this allows full orchestration now :)
[06:26] service A changes something, B sees the change and reacts by informing C
[06:33] marcoceppi: working on anything juicy?
[08:17] <_mup_> Bug #995823 was filed: Machines occasionally fail to start the machine agent correctly because of an unhandled ConnectionTimeoutException < https://launchpad.net/bugs/995823 >
[08:23] * SpamapS rings the bell
[08:23] new charm promulgated, mumble-server!
[08:23] well done Kees Cook. :)
[12:21] hi everyone!
[12:30] hi Leseb
[12:32] benji: I'm running MAAS + Juju and I have some questions, do you have time to help me a little? please
[12:33] Leseb: I will give it a shot, but I don't know anything about MAAS yet.
[12:35] ok, first I just want to be sure about the meaning of "juju bootstrap"
[12:35] It means setting up a new environment, right?
[12:35] does it also mean launching an instance?
[12:39] Leseb: when running in EC2, "juju bootstrap" will start one instance which is used for administrative purposes
[12:39] then deploying charms will launch more instances
[12:40] (there are plans to make the control instance shared with other instances so as to reduce the instance count, but that's not ready yet)
[12:41] ok, what do you mean by "for administrative purposes"? the node doesn't run a service?
[12:42] Leseb: exactly
[12:44] ok, thank you, that was really helpful :)
[12:48] my pleasure
[13:00] benji: so this same command also copies the public ssh keys?
[13:00] into the newly created instance?
[13:01] Leseb: I /think/ temporary ssh keys are generated for instances; you can use the "juju ssh" command to ssh into an instance.
[13:02] once in, you could add your real ssh keys if you so desired
[13:46] SpamapS: with -r for relation-set, can I just give it the interface or relation, instead of a unit? like `relation-set -r db foo=bar`
=== carif_ is now known as carif
[16:00] marcoceppi: no, you need a relation id (as given as $JUJU_RELATION_ID in the joined/changed/departed/broken hooks). That's basically the bucket of information that you want to update. -r db wouldn't have enough context because there might be more than one db relationship
[16:00] in fact I think relation-id is the wrong term
[16:01] relations are the things you define, relationships are the things you establish
[16:01] SpamapS: so relationship-id is a better descriptor?
[16:01] marcoceppi: yeah
[16:01] but that's something we'll have to work out long-term
[16:02] relation-ids is already part of the vernacular
[16:02] sure, np
[16:02] SpamapS: I just threw this up too, not sure if it's a good idea for charm-helper-sh
[16:02] https://code.launchpad.net/~marcoceppi/charm-tools/unit-parsing/+merge/104929
[16:08] marcoceppi: +1, that's nice actually.
[16:09] marcoceppi: I had some good success moving juju-jitsu to autotools so we can address the path issues, btw.
[16:09] SpamapS: just wanted to make sure the move from peer.sh and the copyright were okay
[16:09] excellent!
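To make the relation-set exchange above (13:46-16:02) concrete: a minimal hook sketch, assuming a charm with a relation named "db" and the relation-ids hook tool that accompanies -r; the state=ready key/value pair is purely illustrative.

    #!/bin/sh
    # Inside a db-relation-changed (or joined/departed/broken) hook,
    # the relationship id is already in the environment:
    relation-set -r "$JUJU_RELATION_ID" state=ready

    # From a non-relation hook such as config-changed, enumerate the
    # relationship ids first; "-r db" alone would be ambiguous because
    # there may be more than one db relationship:
    for rid in $(relation-ids db); do
        relation-set -r "$rid" state=ready
    done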
[16:09] marcoceppi: we'll move to autotools and then we can have @scriptdir@ in code and just let 'make install' work that stuff out.
[16:09] awesome, I felt dirty hard-coding paths ;)
[16:09] marcoceppi: that does make it hard to test.. the hardcoded path..
[16:10] marcoceppi: perhaps do . "${CHARMTOOLS_PATH:-/usr/share/charm-tools}/scripts" ?
[16:10] marcoceppi: just so we can override it
[16:11] I was starting to do this weird bash hack to figure out the working directory, and decided it would be best to just push the hard-coded path for now
[16:11] yeah, I'll go ahead and update that
[16:12] SpamapS: . "${CHARM_HELPER_SH_PATH:-/usr/share/charm-helper/sh}/unit.sh"
[16:12] marcoceppi: yeah, that should work. Please make sure there is at least a test that it parses.
[16:12] marcoceppi: as long as we're touching it.. moar tests!
[16:13] a test that parses it?
[16:13] You mean create a unit test?
[16:13] I can do that prior to the merge
[16:14] Yeah, look at the other tests
[16:14] seems easy enough
[16:14] I'll add that now and push it up
[16:14] like, seriously, as long as it does '. ...'
[16:15] so we don't ship a package that has unparsable shell code :p
[16:18] SpamapS: I noticed that the unit tests use $HELPERS_HOME, would that be a better env variable than $CHARM_HELPER_SH_PATH ?
[16:19] inside peer.sh
[16:21] marcoceppi: yeah, that makes sense
[16:21] awesome, just about done
[16:21] marcoceppi: peer.sh is not always the best thing to copy though :)
[16:21] marcoceppi: it's a bit ambitious :)
[16:22] aye it is, I just want to parse unit names/numbers quickly
[16:22] exactly, that stuff belongs in its own tight lib
[16:24] hence unit.sh :)
[16:36] SpamapS: something odd is happening with the unit test, it's only running the first test and not carrying on
[16:38] disregard
[16:38] done
[16:39] Unit tests are good indeed
[16:41] always nice to know it works :)
[16:43] Okay, the unit test is up, passing, and pushed if you want to take a final look
[16:43] * SpamapS pulls and plays
=== carif_ is now known as carif
=== daker is now known as daker__
[17:59] FYI, there is a juju-related session starting at UDS in Room 201
[17:59] http://summit.ubuntu.com/uds-q/meeting/20509/servercloud-q-juju-resource-map/
[18:00] http://icecast.ubuntu.com:8000/room-201.ogg
[20:44] I have two environments in my .juju/environments.yaml file, how do I tell juju which one to use without changing "default" in that file all the time?
[20:44] hm, -e
[20:55] ahasenack: also $JUJU_ENV
[20:56] ahasenack: another useful one, $JUJU_REPOSITORY
[21:00] SpamapS: is there a way to lower the verbosity of the juju command?
[21:00] i.e. mute the WARNINGs
[21:00] a wall of warnings during a deploy demo is always awkward to explain
[21:00] "PAY NO ATTENTION TO THE MAN BEHIND THIS CURTAIN"
[21:08] juju -l ERROR
[21:09] marcoceppi: what are the warnings? Bad charms in your repo?
[21:09] I think we should drop that to INFO actually
[21:09] having a half-written charm in a repo is a normal, but notable, event
[21:09] not a warning IMO
[21:12] SpamapS: http://paste.ubuntu.com/974479/
[21:13] marcoceppi: add 'ssl-hostname-verification: true' to your environment.
[21:13] marcoceppi: that *is* a legitimate warning. :)
[21:13] I know, I know :)
[21:14] marcoceppi: we made it a warning, instead of a fail, so you'd have time to make sure all your SSL endpoints verify :)
[21:14] The 'honolulu' release will make it true by default
[21:24] SpamapS, sir, when can we see you at UDS :)
[21:25] koolhead17: I arrive in the morning
[21:26] cool. :)
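The environment options mentioned above (20:44-21:14) fit together roughly like this. A sketch of ~/.juju/environments.yaml for the juju version under discussion; the environment names "dev" and "prod" and the elided settings are placeholders, while ssl-hostname-verification is the exact key marcoceppi was told to add:

    environments:
      dev:
        type: ec2
        ssl-hostname-verification: true
        # access-key, secret-key, control-bucket, admin-secret, etc.
      prod:
        type: ec2
        ssl-hostname-verification: true
        # ...

Selecting an environment and quieting the warnings then look like:

    # Pick an environment without editing "default" in the file:
    juju -e dev status
    JUJU_ENV=prod juju status
    # Mute the WARNING wall during a demo:
    juju -l ERROR status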
[21:46] marcoceppi is about to present charms at UDS http://video.ubuntu.com/live
[21:58] Hi all
[21:59] why does everybody want to deploy multiple things to one box.. one box in the cloud == fail
=== Sherlock_ is now known as Guest74757
[22:05] SpamapS, it's more reliable than multiple boxes :-)
[22:06] james_w: if by more reliable you mean "fails more reliably", then yes. :)
[22:06] lower system failure rate
[22:06] SpamapS: multiple systems multiply their failure rates
[22:06] SpamapS: not everything is resilient to single-node failure
[22:07] SpamapS: secondly, many things need to exist on the same box to cooperate sanely (e.g. take advantage of local IO, or process local logs) - and making them all tightly bound to another charm doesn't make sense in folks' heads.
[22:08] SpamapS: a good question to ask is why folk don't *want* to do it the way the Juju design thinks is best
[22:11] lifeless: that's exactly what I'm asking. Why are people rejecting the model that juju is selling? I think it's the same reason people default to single-threaded code... multiple threads, multiple servers, is hard.
[22:13] IME it's cost that drives it initially. For a trivial app, paying for three machines is far more than is needed to handle the request load
[22:15] SpamapS: thinking of 'threads are for developers that don't understand state machines' ?
[22:16] SpamapS: FWIW I don't think it's that, because there is a wide range of things where coexisting is much easier to reason about (and pay for)
[22:16] SpamapS: where "reasoning about" includes dealing with network failure modes, avoiding NFS or similar tools
[22:18] lifeless: could it just be force of habit?
[22:19] SpamapS: I don't think so
[22:19] SpamapS: I've had, what, 2 years of Juju exposure, and it's still my second most desired feature
[22:20] SpamapS: (the #1 being security)
[22:20] lifeless: can you explain a deployment that you want to do where m1.small is too big?
[22:21] or is that the real problem? I think it's resource-based, but it's deeper than that?
[22:21] SpamapS: size isn't the issue for me - james_w brought up size.
[22:21] I acked that paying for things does appear
[22:21] but you could use micro, I presume, for really really tiny things
[22:22] Ok, so it's a fail-rate thing. Hm.
[22:23] fail rate + fail mode
[22:23] if you have a log tailer
[22:23] for instance, trivial thing
[22:23] do you want to do that from a different box ?
[22:23] even a different 'virtual box' using LXC ?
[22:23] well, a log tailer is a subordinate
[22:24] which goes inside the same container
[22:24] (which in most cases means inside the same bare VM/machine)
[22:26] lifeless: subordinates solve the case for anything that you always want deployed at a 1:1 ratio together
[22:26] consider the oops-tools UI then
[22:26] let me tell you about oops-tools
[22:27] it has:
[22:27] - a blob store, which is just a directory on disk where oops files are stored. They get there via an AMQP consumer
[22:28] - a postgresql DB, which indexes the blob store; it is populated by the same AMQP consumer
[22:28] - an AMQP consumer, which receives OOPSes in near-realtime and writes them to the blob store and postgresql
[22:28] - a gc process which removes things from both the postgresql DB and the blob store
[22:29] - a wsgi web UI that shows things from the blob store + postgresql
[22:29] - some helper scripts that run out of cron to do reports and the like
[22:29] alright
[22:29] all sounds good
[22:30] now, that's /nearly/ fully distributed
[22:30] if the blob store were e.g. s3
[22:30] but to use juju today, why would I reach for 3 machines vs 1 ?
[22:31] You would not. And in that case, that's all 1:1
[22:31] perhaps I don't understand subordinates then
[22:31] I wouldn't want every postgresql server to have apache+rabbit+oops-tools-ui etc
[22:32] right, you just want one, the one that does oops, right?
[22:32] assuming we convert oops-tools to a subordinate..
[22:32] juju deploy pgsql
[22:32] juju deploy oops-tools
[22:32] for this part of the picture yes, but then I also want postgresql for LP :P
[22:32] juju add-relation oops-tools pgsql
[22:32] that puts oops-tools on the pgsql service
[22:32] lifeless: so you want to make *one* server special?
[22:33] SpamapS: I'm not trying to move the goalposts, honestly.
[22:33] well, is the pgsql service one server, or +1 ?
[22:33] one for oops-tools, a slony cluster for LP
[22:34] It sounds like you have an over-powered pgsql server and want to take advantage of that...
[22:34] probably a slony cluster for other ancillary components of LP once some refactorings get done.
[22:34] are the pgsql servers for oops and lp the same though?
[22:34] nope
[22:34] different
[22:34] Ok, then yeah, subs solve this nicely *IF* you always want oops-tools to be on the pgsql instance.
[22:34] that's where I don't like the rigidity of subordinates
[22:35] don't want to impact production LP behaviour with data-analytics from oops-tools
[22:36] essentially, LP is headed towards a pattern where each cooperating service is a bundle of (pg, amqp-publishers, amqp-consumers, one or more wsgi apps)
[22:36] and LP as a whole is a group of such bundles
[22:36] some bundles will be HA (via slony, haproxy etc), others (like oops-tools) will be best-effort (and the rest of the system handles their absence in some fashion)
[22:37] e.g. gpg key verification - if down, ppa uploads get queued, alerts go off, we bring it back up, but it doesn't have to be up 100% of the time
[22:37] (and bringing it up should be self-healable, in fact)
[22:37] lifeless: we have this problem w/ nova too. Nova can work w/ its own local sqlite, or a remote mysql server.
[22:37] lifeless: for devs, the local sqlite is the simple case for smoke testing.. but no sane deployment uses it.
[22:38] SpamapS: sure, for clarity though, I'm talking about the prod layout; local dev uses test fixtures to bring up transient services
[22:38] like, we want separate pg clusters, to mitigate failures
[22:38] that's a place where we *do* want multiple machines
[22:39] it's very unclear to me atm how you run 5 or 6 different slony clusters in one juju environment
[22:39] lifeless: juju deploy slony slony-1 ; juju deploy slony slony-2 ??
[22:39] the way I'm thinking about juju use today, each environment will be extremely narrowly targeted, and we'll export its public surface and manually inject it into other environments.
[22:40] lifeless: you can even do this on ec2: juju deploy slony slony-a --constraints ec2-zone=a ; juju deploy slony slony-b --constraints ec2-zone=b
[22:40] https://juju.ubuntu.com/docs/user-tutorial.html#deploying-service-units
[22:40] probably wants to mention that optional parameter then ;)
[22:41] Yeah, it's a common problem
[22:46] lifeless: I'm fixing that particular page to be clearer now. Really good suggestion.
[22:46] would that address the subordinates thing?
[22:46] if you do deploy pgsql oops-pg
[22:47] yes!
[22:47] or would oops-tools still land on all pg servers?
[22:47] no, it would only be on oops-pg
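Pulling the commands from this exchange together: a sketch of the deployment lifeless is describing, using the service-name and constraints syntax quoted above. The service names are the ones from the conversation; the exact constraint syntax may vary by juju version.

    # Two independent slony services from one charm, pinned to zones:
    juju deploy slony slony-a --constraints ec2-zone=a
    juju deploy slony slony-b --constraints ec2-zone=b

    # A dedicated pgsql service for oops-tools, so the subordinate
    # attaches only to it rather than to every pgsql unit:
    juju deploy pgsql oops-pg
    juju deploy oops-tools
    juju add-relation oops-tools oops-pg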
[22:47] the only bummer about that is that you now always have to deploy oops as a sub
[22:47] presumably because deploy oops-tools does nothing, and add-relation oops-tools oops-pg triggers the actual install
[22:47] ?
[22:47] I'd really prefer there to be a way to have runtime subordination
[22:48] SpamapS: we have runtime insubordination already :P
[22:48] lifeless: that's exactly how it works
[22:48] you might like to touch up https://juju.ubuntu.com/docs/subordinate-services.html a bit while you are there
[22:48] it reads like an advert for an upcoming feature
[22:48] not something that exists
[22:49] it doesn't explain the 1:1 thing, nor the need to name things to get them to be M:N
[22:50] lifeless: indeed, it's basically the original spec. We need to add a subordinate section to the tutorials
[22:51] SpamapS: I hope this is useful feedback
[22:51] I feel as though I've been a bit curmudgeonly
[22:51] it's unbelievably useful
[22:52] we're all so close to the problem
[22:52] we don't always see how it relates to the real world
[23:00] lifeless: https://code.launchpad.net/~clint-fewbar/juju/docs-clarify-service-name/+merge/104994
[23:00] lifeless: I intend to do more, but that at least will fix the cited page
[23:01] there is a lot in that diff
[23:01] Yeah, I'm not sure why
[23:01] whoa, heh, I think I branched the wrong thing :p
[23:02] rather than database-service, perhaps you could say wordpress-db or something
[23:02] I suspect admins will try to name things so they can remember them
[23:02] lifeless: yeah, I am not super happy with that name now that I read it
[23:02] and that will ring truer
[23:02] ah, the merge target is wrong
[23:03] interesting, I think lp-propose doesn't work right
[23:03] 'bzr lp-propose lp:juju/docs' merged against the subdir of lp:juju called docs
[23:04] not the docs series
[23:04] we really need to delete that dir from lp:juju
[23:05] lifeless: calling it 'wordpress-db' changes the juxtaposition a bit though.. as now we're suggesting that we couldn't relate non-wp things to it.
[23:05] SpamapS: do you do that in the example ?
[23:05] SpamapS: and wearing your paranoid sysadmin hat - I know you have it somewhere - would you ever do that?
[23:06] one db server, one app
[23:06] my paranoid sysadmin hat yells at me from its place in my closet..
[23:06] -much- easier to think about query behaviour, caching patterns, db recovery, failover, etc
[23:06] I mean, you are right, yes, it changes the slant.
[23:07] Yes indeed, though I have made multi-tenancy work fine in the past... it's a whole different ballgame.
[23:07] and rule #1 of devops is KISS :)
[23:07] this is the same argument juju makes about machines
[23:07] I think the difference is a matter of intent
[23:08] sharing a machine with different components of one usecase -> fair call
[23:08] sharing a machine with components for different usecases -> world of confusion
[23:08] you can s/machine/DB-server/ there with no change in semantics
[23:08] and - aha moment - this is what I've been trying to get at with juju and multiple machines
[23:09] when you're delivering one use case, it is often (particularly when you aren't gluing full SOA services together, but are homebrewing or whatever) easier to work in a local environment, not networked.
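For reference, what makes a charm like oops-tools behave as a subordinate is declared in its metadata. A sketch of the relevant metadata.yaml fragment, assuming the mechanism described on the subordinate-services docs page cited above; the relation and interface names are illustrative:

    name: oops-tools
    subordinate: true
    requires:
      db:
        interface: pgsql
        # Container scope is what ties each subordinate unit to the
        # unit of the principal service it is related to (the 1:1
        # deployment discussed above).
        scope: container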
[23:10] alright, pushed s/database-service/wordpress-db/g
[23:10] when you're delivering two or more use cases, at that point you definitely want things to not stomp on each other
[23:10] looks much better
[23:10] sentences read more clearly
[23:10] so if you come to me and talk about LP + juju, I do want multiple machines, for the things that compete for resources, but I also want single machines, for the bits that really are a single unit (like the librarian with its associated helper code)
[23:11] lifeless: right, I think a lot of things are well written as a subordinate
[23:11] it's not that they won't in future want and need horizontal scaling etc; it's a minimum-complexity, maximum-gain kind of thing
[23:11] lifeless: btw, we just pushed a new primary service into the charm store, called 'ubuntu' .. so any subordinate can be deployed alone by itself too :)
[23:11] nice hack
[23:12] it's also useful for pre-allocating machines in ec2/maas
=== andreas__ is now known as ahasenack
[23:24] hi there, trying to install juju on OSX 10.7 via brew, and getting this error: Error: Failed executing: python setup.py install (juju.rb:31)
[23:24] (this is with a clean brew installation)
[23:24] any ideas what I'm missing?
[23:33] Hey jujuers. I'm trying to play with juju using canonistack, but it appears to want to use the hostname of the servers when doing commands (actually, I only tried 'juju status'). Can I make it use the IP? Or can you give me a hint to get the DNS resolving?
[23:38] Laney: on openstack installs, you need to either configure an ssh proxy into the cloud or assign a public address to the bootstrap node
[23:39] m_3_: I can ssh using the IP address, yes
[23:39] Laney: you often also need to add patterns to .ssh/config in order for it to use the proxies
[23:40] but juju tries to use server-nnnn, which I cannot resolve
[23:40] Laney: alias them in .ssh/config
[23:42] let me try that
[23:43] can't you just run juju on the node 0 that gets bootstrapped?
[23:43] just copy your environments.yaml and ssh key
[23:47] sorry, I didn't rtfm, found the osx brew fix here: https://github.com/jujutools/homebrew-juju/issues/5
[23:48] gogodoit: thanks, imbrandon is the author of that; he's at the ubuntu dev summit today and so probably not watching IRC
[23:48] SpamapS: yes, but that is more manual than I like
[23:48] I got it working with a pattern in .ssh/config
[23:48] Laney: yeah, that usually works
[23:48] I think ...
[23:48] ERROR SSH forwarding error: nc: getaddrinfo: Name or service not known
[23:48] * Laney eyes things
[23:57] ah, works
[23:57] does canonistack have a secure transport?
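A sketch of the .ssh/config pattern Laney alludes to, for the record. The Host pattern matches the server-nnnn names juju emits; the bastion address, user, and key path are placeholders, not taken from the log:

    Host server-*
        User ubuntu
        IdentityFile ~/.ssh/canonistack
        # Tunnel through a reachable host inside the cloud that can
        # resolve the internal server-nnnn names; nc relays the
        # connection to the target host and port.
        ProxyCommand ssh ubuntu@BASTION_IP nc %h %p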