=== fenris is now known as Guest24195
[02:53] seems http://afgen.com/juju.html on https://juju.ubuntu.com/docs/faq.html page is broken
=== fenris is now known as Guest18389
[04:11] Hi, im trying to relaunch in a different ec2 region. but error: Environments configuration error: /home/ubuntu/.juju/environments.yaml: environments.mytest.region: expected 'us-east-1', got 'ap-southeast-2'
=== defunctzombie is now known as defunctzombie_zz
=== TheRealMue is now known as TheMue
=== Beret- is now known as Beret
=== huats_ is now known as huats
=== drrk_ is now known as drrk
[12:25] hello!
[12:27] I'm trying to bootstrap against an openstack, and I get ERROR Unexpected 400: '{"badRequest": {"message": "Security group is still in use", "code": 400}}'
[12:27] full log is http://pastebin.ubuntu.com/5649239/
[12:27] I really don't know how to debug this further, or fix it
[12:27] any recommendation is appreciated, thanks!
[12:28] facundobatista, I think that happens when your previous destroy-environment didn't finish or left stuff behind. what does 'nova list' show you?
[12:29] salgado, two items, one ACTIVE and one in ERROR
[12:29] salgado, should I destroy them with "nova delete"?
[12:30] facundobatista, yep
[12:31] salgado, should I pass the "id" to it? I executed the "nova delete " for both items, the commands finished ok, but "nova list" still shows them
[12:32] facundobatista, it takes a little while before they're really gone
[12:32] * facundobatista waits
[12:47] salgado, there they went away!
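salgado's cleanup advice above (list the leftover instances, delete them, then wait for them to disappear) can be sketched in shell. The `extract_ids` helper and the sample table below are illustrative only — real `nova list` output has UUIDs and more columns — and the actual nova calls are left commented out:

```shell
# extract_ids: pull the ID column out of 'nova list'-style table output,
# skipping the +---+ borders and the header row
extract_ids() {
    awk -F'|' '/^\|/ && $2 !~ /ID/ { gsub(/ /, "", $2); print $2 }'
}

# hypothetical sample of what 'nova list' might print after a failed
# destroy-environment: one ACTIVE and one ERROR instance left behind
sample='+----+--------+--------+
| ID | Name   | Status |
+----+--------+--------+
| a1 | juju-0 | ACTIVE |
| b2 | juju-1 | ERROR  |
+----+--------+--------+'

echo "$sample" | extract_ids

# in a real session:
#   for id in $(nova list | extract_ids); do nova delete "$id"; done
# deletion is asynchronous, so re-run 'nova list' until the rows are gone
```

As noted in the conversation, `nova delete` returns before the instance is actually gone, so a short polling loop on `nova list` is the reliable way to confirm the cleanup finished.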
:D now let's try bootstrap
[12:47] yay, bootstrap ended ok
[12:48] let's try status
[12:48] :)
[12:53] salgado, ok, "juju status" got stuck, ctrl-c'ed it after some minutes, and there I see this error:
[12:53] SSLVerificationError: Bad HTTPS certificate, set 'ssl-hostname-verification' to false to permit
[12:53] 2013-03-26 09:52:56,695 ERROR Bad HTTPS certificate, set 'ssl-hostname-verification' to false to permit
[12:54] facundobatista, sidnei tells me that's because something timed out. I also saw that when my network was flaky
[12:54] * facundobatista tries again
[12:55] now I got this pretty fast:
[12:55] 2013-03-26 09:55:10,898 ERROR SSH forwarding error: ssh_exchange_identification: Connection closed by remote host
[12:59] and the fourth time it ended ok
[13:00] salgado, thanks!
[13:00] facundobatista, you're welcome
[13:17] EC2 images are broken!: https://bugs.launchpad.net/ubuntu-on-ec2/+bug/1160358
[13:17] <_mup_> Bug #1160358: Hash Sum mismatch for Precise main and universe EC2 images < https://launchpad.net/bugs/1160358 >
[13:20] Need for fix/workaround is urgent
=== wedgwood_away is now known as wedgwood
=== wedgwood is now known as Guest28578
=== Makyo|out is now known as Makyo
[14:06] hi folks, if I do a "juju resolved instance relation" is it supposed to remove the relation-errors: entry from juju status (pyjuju)?
[14:08] mthaddon: yes.
[14:09] mthaddon: if you want it to retry the hook that caused the issue, add --retry
[14:10] SpamapS: so I'd rather not retry the hook (or at least, not until I have a staging env to test on) but I'm still seeing the relation-errors after having run that command - any debugging that can be done to figure out why it's not marked resolved?
[14:10] mthaddon: got a pastebin of the status output?
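The SSLVerificationError earlier in this session names its own workaround. A minimal environments.yaml fragment might look like the following — the environment name and provider type are taken from earlier messages in this log, and the flag should be treated as a last resort, since it disables HTTPS hostname checking:

```yaml
environments:
  mytest:
    type: openstack
    # the setting named in the error message; disables HTTPS hostname
    # verification, so only use it against a cloud endpoint you trust
    ssl-hostname-verification: false
```

Note that in this session the error turned out to be a transient timeout on a flaky network, so simply retrying (as facundobatista did) may be preferable to turning verification off.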
[14:12] SpamapS: http://paste.ubuntu.com/5649469/
[14:12] question, "juju ssh " is stuck at "INFO Waiting for unit to come up": https://pastebin.canonical.com/87751/
[14:12] if I do "juju status", I see that the instance is ok, however there's a strange error: https://pastebin.canonical.com/87746/
[14:13] facundobatista: no-one outside canonical will be able to see those, fwiw
[14:13] uh, bad pastebin
[14:13] mthaddon, thanks
=== imbrandon_ is now known as imbrandon
[14:16] mthaddon: yeah that is ??? don't know why that's not working
[14:17] facundobatista: Waiting for unit to come up is waiting for the machine agent to start
[14:18] SpamapS: I guess I'll need to try reproducing the error, and then fixing by upgrading the charm and doing a --retry
[14:18] SpamapS, but status says "pending"... doesn't that mean it is already started? http://pastebin.ubuntu.com/5649481/
[14:19] SpamapS, how can I know what went wrong?
[14:27] facundobatista: no, pending is "I have never heard from that agent"
[14:28] facundobatista: check the cloud-init-output.log and console log. Likely skew between your client and what was installed/attempted to install on the instance
=== defunctzombie_zz is now known as defunctzombie
[15:38] hello
=== salgado is now known as salgado-lunch
=== salgado-lunch is now known as salgado
=== imbrandon_ is now known as imbrandon
[16:22] I have what is hopefully a quick question.
[16:22] Are the relationship key-value pairs unique on a per-service basis, or per service unit?
[16:23] context: The postgres charm creates a new user/password for each service unit that joins a service relationship.
[16:25] after this happens, the relation-changed hook is fired and this new key-value pair is sent to several units, overwriting their own user/pass info from previous relation-changed runs
[16:50] hi, would someone have time to help me on getting juju up and running on local?
i have followed the guides at https://juju.ubuntu.com/get-started/ and the related https://juju.ubuntu.com/docs/getting-started.html#configuring-a-local-environment ..but i am getting the following issue (machine_agent.log) http://paste.ubuntu.com/5649810/ "SyntaxError: invalid syntax" ..running on raring 64-bit.. any ideas, hints would help a lot
[16:50] "Py_Initialize: Unable to get the locale encoding"
[16:50] i guess that is the main issue
[16:51] jppiiroi1en: that looks like https://bugs.launchpad.net/juju/+bug/1159020
[16:51] <_mup_> Bug #1159020: SyntaxError: invalid syntax < https://launchpad.net/bugs/1159020 >
[16:52] hum, i've seen that before. seems like a missing locale in the container actually.
[16:52] jppiiroi1en: I'm not entirely sure what that means -- if juju on raring doesn't work at all, or just the local provider on raring doesn't work at all
[16:53] sidnei: hazmat's response: < hazmat> sarnold, i suspect there is an underlying python distro issue
[16:55] sarnold: oh yes, the same
[16:55] yeah, python 2.7.4rc1 was uploaded recently, the webbrowser module is also broken
[16:56] (if you have google-chrome installed)
[16:56] not that it's related
[16:56] ah! I knew someone else had broken but couldn't recall. heh.
[16:56] thank you for confirming this, now i know that it is not just my workstation which is having the issue :)
[16:57] :)
[16:58] sarnold, barry's on vacation this week, he gave me some pointers to try and debug though
[16:59] hazmat: oh man, bad time to lose barry :)
[17:01] ah, cool that other one is fixed bug #1159636
[17:01] <_mup_> Bug #1159636: python2.7 failed to import webbrowser: NameError in register_X_browsers(): global name 'Chrome' is not defined - Regression in 2.7.4~rc1-2ubuntu1 < https://launchpad.net/bugs/1159636 >
[17:02] hi marcoceppi, I've been looking at the WP charm and I'd like to use it to create instances of developer.ubuntu.com for testing purposes.
I believe I can use it as is, apart from one bit: the d.u.c theme requires a copy of the database to be installed. Is this something that I could do with the current charm? Would I need to add anything to it?
[17:11] dpm_: a copy of what database?
[17:11] wedgwood: I have a question about the postgres charm. Perhaps you could help?
[17:12] hi SpamapS, the mysql database Wordpress uses to store its content on d.u.c - the Wordpress theme depends on some content from the database, so for the d.u.c theme to work on new instances, it requires that database content
[17:15] dpm_: ah, sounds like a simple case where you need something that loads that data. Perhaps a subordinate charm.
[17:15] stepheno: possibly. what's up?
[17:16] question... I had a typo in the install hook ('raate' instead of 'rate'), so it crashed... destroyed the service, opened the hook file, fixed it, saved it, then deployed the charm again... and it failed again with the same problem! but the file was fixed... could it be it's getting the file from some cache or something?
[17:17] SpamapS, I'm not sure, I'm not too familiar with charms. jcastro_ mentioned adding an option to the existing charm to let people seed the DB (e.g. juju set mysql seed-db=blah.sql)
[17:17] facundobatista: when iterating development, 'juju deploy -u' is useful :)
[17:17] sarnold, thanks
[17:24] dpm_: I believe it already has one for pulling a snapshot from s3.. so perhaps that can be adapted
[17:26] SpamapS, I'm not sure, I could not find any similar option on http://jujucharms.com/charms/precise/wordpress
=== defunctzombie is now known as defunctzombie_zz
[17:27] wedgwood: I have multiple service units connecting to the postgres:db relationship, and the charm creates a new user/password and pg_hba entry for each unit.
[17:28] I was wondering if the key-value pairs set on the relationship are unique per unit, or global to the service.
[17:29] Basically i don't know how to ignore the relation-changed hook with invalid info for the service units which have already joined the relationship.
[17:29] So all units get the user/password of the last unit to join the relationship to postgres
[17:31] stepheno: I'll have to look at that. when did you deploy that charm? I feel like that behavior has changed.
[17:31] wedgwood: this morning :)
[17:31] i've been trying it out for the last week or so
[17:32] on precise
[17:37] stepheno: can you explain what you mean when you say you "don't know how to ignore the relation-changed hook with invalid info" ?
[17:38] There isn't a way (without changing the charm) to use the same credentials for each relation.
[17:38] wedgwood: sure. When i add another unit to the relation, the postgres charm will create a new user/password for that unit. At that time, the db-relation-changed hook will fire on all units of my service
[17:39] wedgwood: relation-get user will now return the newly created user for the latest unit, but it's only valid for one unit in the service
[17:40] Not sure how to ignore this on the other units.
[17:40] stepheno: I see. that's definitely a bug.
[17:40] I'd prefer to have distinct key-value pairs per unit, but i'm not sure if that's possible
[17:41] it would be awesome if i could just do 'relation-get user service/1', and have it return the key-value pairs for that particular service unit
[17:42] stepheno: It looks like that code was written without understanding how relation-changed worked. That may have been my fault.
[17:42] I was looking through the debug-log when running this, and saw juju output an array of kv pairs. e.g. [ { user: x, pass: 1, unit: 1 }, { user: y, pass: 2, unit: 2 } ]
[17:43] but i believe it merges those dictionaries, unfortunately
[17:47] stepheno: do you have a few minutes to file a bug about that? https://bugs.launchpad.net/charms/precise/+source/postgresql
[17:49] stepheno: now as far as ignoring relation-changed ...
what charm are you using at the other end?
[17:49] wedgwood: sure. I even have a few minutes to work on a fix if you'd like
[17:49] i'm using a modified version of the python-django charm
[17:50] stepheno: certainly. It's not apparent to me what the fix should be since presumably the user and password given to each unit should be unit specific but relation-set doesn't appear to support hat.
[17:50] *that
[17:52] well, i suppose a quick fix would be to set a value from the postgres charm. relation-set unit-id=JUJU_REMOTE_UNIT
[17:54] you'd set that so that the other side could ignore irrelevant changes?
[17:54] wedgwood: yeah
[17:56] that would have to be implemented in every relatable charm though. it's a break from how the relation is expected to work.
[17:56] plus it would make it confusing to implement relation-changed for changes that *should* affect all units.
[17:56] wedgwood: good point.
[17:57] Ideally we'd have the ability to set distinct key-value pairs per service unit
[17:57] and be able to set global kv-pairs per service
[17:58] i.e. relation-set host=POSTGRES_IP
[17:58] I think the fix might really be to use one set of credentials per database, rather than per relation. pg_hba will enforce connection restrictions.
[17:58] relation-set user=USER JUJU_REMOTE_UNIT
[17:59] I'm going to do a few experiments to confirm how things work. stand by
[18:00] wedgwood: sure. Thanks a bunch, i've been slowly losing sanity trying to scour the documentation to find a way to do this
=== hatch_ is now known as hatch
[18:17] stepheno: ok, I've confirmed the behavior you're seeing. I think the fix is to use one set of credentials per *relation* rather than per unit. pg_hba would still contain explicit grants for each unit.
[18:18] sorry, that's per-relation-per-database
[18:24] sidnei, you digging on riemann?
[18:27] hazmat: haven't got my hands dirty yet, but it's high on my list
[18:27] sidnei, it looks pretty nice, i had the same concern you did though..
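The overwrite behavior stepheno and wedgwood are debugging can be reproduced with a tiny stand-in for the relation settings store. The `relation_set`/`relation_get` functions below are local shell stubs written for this sketch, not the real juju hook tools — they just model a flat key space where the last write wins:

```shell
# stand-ins for juju's relation-set/relation-get: one flat dictionary
# per side of the relation, last write wins
settings=$(mktemp)

relation_set() {
    key=${1%%=*}
    grep -v "^${key}=" "$settings" > "${settings}.tmp" || true
    mv "${settings}.tmp" "$settings"
    echo "$1" >> "$settings"
}

relation_get() { sed -n "s/^$1=//p" "$settings"; }

relation_set user=u_django_0   # postgres side: first consumer unit joins
relation_set user=u_django_1   # second unit joins; same key, so it overwrites

relation_get user              # every consumer's next hook run sees the last value
```

Because there is only one `user` key, the second `relation_set` clobbers the first — which is exactly why every unit ends up with the credentials of the last unit to join, and why wedgwood's proposed fix moves to one set of credentials per relation rather than per unit.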
[18:28] sidnei, for automation though.. it seems like it's basically going to need generation of lisp as config
[18:28] unless it's a known config
[18:28] can't remember the concern i had. :)
[18:28] sidnei, ha
[18:28] ah, indeed
[18:29] wedgwood: thanks. Do you have a working fix, or should i file the bug report and take a crack at it?
[18:29] sidnei, in future ostack w/ heat should address some of the same stuff (cloudwatch, autoscale groups, etc)
[18:29] hazmat: so in the context of juju, yes. some lisp generation, sub-riemanns for each set of services, aggregating on one or more central ones.
[18:30] stepheno: I probably won't have a chance to make a fix today. I'd really appreciate a bug, and merge proposals are always welcome if you have the time.
[18:31] stepheno: thanks for finding that problem
[18:33] hazmat: my short-term plans were for replacing some of our statsd usage with riemann events and using the graphite output instead, mostly to give better real-time visualization of core stats.
[18:35] stepheno, that should only be creating a single user/password/db for the entire service fwiw.
[18:37] hazmat: got it. Is there a particular benefit to the current behavior (one user/password per service unit, one db per service)?
[18:39] stepheno, it's helpful when diagnosing query perf issues, as it identifies the remote side.. it can also be useful for compromise mitigation
[18:39] was about to say the same
[18:40] hazmat: cool, that's what i assumed the intention was. I'll go ahead and file this bug report, and then file a feature request for juju-core to be able to set unique kv-pairs per service unit.
[18:41] stepheno, well the latter is already true
[18:41] hazmat: !
[18:41] each unit has its own private data store for k/v in the relation
[18:41] it's just that each change to that data store is broadcast to the world.
[18:42] the world being all the units of the other side of the relation
[18:42] in this case it seems the postgresql charm is basically overwriting its own data and broadcasting in an attempt to communicate private info to an individual remote unit
[18:43] * hazmat really wants to see the tic-tac-toe relation
[18:43] hazmat: i'm imagining the ability to set k/v specifically for the service unit that initiated the request. postgres would have global k/v for all related service units, and individual k/v for each service unit
[18:45] stepheno, you mean each postgres unit i think.. it's kind of against the goals to have that sort of individualized state.. ie. what happens when that pg unit dies..
[18:46] hazmat: not exactly. the postgres charm would set k/v on the relation. The keyspace would look like this:
[18:47] { host: x, db: y, unit_1: { user: 1, pass: 1 }, unit_2: { user: 2, pass: 2 } }
[18:48] there are k/v global to the service, and unique values are possible per service unit
[18:48] gotcha
[18:49] ideally if there's a k/v that's specific to a unit, only that unit should be able to access it
[18:50] sounds reasonable and worth a bug or ml discussion, although i worry about abuse of individualized state
[18:51] hazmat: speaking of ha, facing an interesting problem with that. got a couple services that need to talk to multiple endpoints (eg: multiple squid units), but only support being configured to talk to a single one (eg: http_proxy env variable). thinking that the best way around that might be an haproxy subordinate or even something more lightweight that then proxies/lbs to the multiple endpoints. got any better idea?
[18:51] is this possible with the current infrastructure?
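stepheno's keyspace sketch above (service-global keys plus per-unit entries) can be approximated today by flattening the per-unit values into prefixed keys, which is the namespacing idea discussed in this conversation. The `relation_get` function below is a stub modelling what the flattened settings might look like after two hypothetical django units joined; in a real pyjuju hook, `JUJU_UNIT_NAME` is set by juju and `relation-get` is the actual hook tool:

```shell
# stub modelling flattened relation settings: service-global keys plus
# per-unit credentials prefixed with the consumer unit's name
relation_get() {
    case "$1" in
        host)            echo "10.0.0.5"    ;;   # global to the service
        db)              echo "appdb"       ;;   # global to the service
        django/0-user)   echo "u_django_0"  ;;   # private to django/0
        django/1-user)   echo "u_django_1"  ;;   # private to django/1
        *)               echo ""            ;;
    esac
}

JUJU_UNIT_NAME="django/1"   # set by juju in a real hook environment

# each consumer reads only the key carrying its own prefix, so a broadcast
# meant for a sibling unit no longer clobbers its credentials
user=$(relation_get "${JUJU_UNIT_NAME}-user")
echo "$user"
```

As hazmat notes, a charm can adopt this convention on its own, but juju itself would still broadcast every change and would not enforce any visibility restriction on the namespaced keys.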
it would be nice to set service-global k/v with "relation-set key=value", and unique with "relation-set key=value service_unit"
[18:52] stepheno, it's possible without the ns enforcement by the charm alone, additional visibility restrictions would need to be in juju
[18:52] just via $JUJU_REMOTE_UNIT-key=value
[18:54] sidnei, i'm not sure i parse that.. are the multiple squid units all members of the same service.. and then the issue is that the usage only supports a single host? which would imply a different usage is needed.
[18:54] hazmat: yes, multiple members of the same service, usage only supports a single host.
[18:55] hazmat: yeah, i think i'll stay away from that approach for now. I'll write up a feature request for juju-core in a bit. Don't want to infect the postgres charm with framework workarounds when it might just be nice to have in juju-core
[18:56] hazmat: so the idea of making single host -> localhost:$port -> lb to multiple units
[18:56] sidnei, yeah.. a local proxy managed as an alternative sounds pretty reasonable.. haproxy is pretty lightweight
[18:56] stepheno, sounds good
[18:56] hazmat: wonder if there's a generic pattern that can be extracted from that
[19:05] sidnei, if there is, encapsulation to a sub should do the trick, with a non-generic (juju-info) relation to the primary to encompass config passing, if it's not centrally configured (svc) on the sub.
[19:09] hazmat: let me see if i parse. sub to primary via juju-info, primary controls the sub via passing information in the juju-info relation?
[19:11] sidnei, not using juju-info .. using a custom interface that the primary and sub can communicate on
[19:11] ah, gotcha. missed the 'non' in non-generic above.
[19:12] the sub-ness is dictated by a metadata flag and the sub relation by the 'container' scope.. the juju-info is just an implicitly provided one so generic subs can be defined (monitoring, logging, etc).
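The local-proxy idea sidnei and hazmat settle on above might look roughly like this inside a subordinate's hook: write an haproxy stanza that listens on localhost and balances across the squid units, so the primary service only ever configures one endpoint (`http_proxy`). The addresses and file path below are made up for illustration — a real hook would gather the unit addresses from the relation and write to /etc/haproxy:

```shell
# hypothetical squid unit addresses; a real hook would collect these
# via relation-get across the joined units
SQUID_UNITS="10.0.0.11:3128 10.0.0.12:3128"

cfg=$(mktemp)
{
    echo "listen squid"
    echo "    bind 127.0.0.1:3128"
    echo "    mode tcp"
    i=0
    for addr in $SQUID_UNITS; do
        echo "    server squid$i $addr check"
        i=$((i + 1))
    done
} > "$cfg"

cat "$cfg"
# the primary service then only needs http_proxy=http://127.0.0.1:3128
```

This keeps the single-endpoint assumption of the primary intact while the subordinate absorbs the add-unit/remove-unit churn, which matches hazmat's point that haproxy is light enough to run next to every consumer.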
[19:12] cool
[19:13] <_mup_> Bug #1160538 was filed: juju-origin: bzr branch puts binaries into /usr/local which needs to be explicitly added to hook path < https://launchpad.net/bugs/1160538 >
=== JanC_ is now known as JanC
=== chrischr1s is now known as chrischris
[20:12] SpamapS dpm_ The "pull snapshot from S3" was a hack that's since been removed. The preferred method for loading extra data into a database at the moment would be to create a script that does it for you and have it executed from the browser
[20:35] marcoceppi: s/browser/automated-deploy-workflowy-thing/
=== defunctzombie_zz is now known as defunctzombie
=== OutOfControl is now known as benonsoftware
=== wedgwood is now known as wedgwood_away