[00:01] <hazmat> m_3, all the build links on the #charmtbot-test channel are broken
[00:01] <hazmat> is that to be expected?
[01:17] <hazmat> m_3, it's the cyclical dependency between hadoop-master and hadoop-mapreduce that's throwing it off i think
[01:17] <hazmat> m_3, hadoop-master shouldn't be depending on hadoop-mapreduce
[01:20] <hazmat> yup.. removing that and it works fine
[01:23] <hazmat> the planner is a nice lint on charm metadata semantics ;-)
[03:25] <_mup_> juju/enhanced-relation-support r14 committed by jim.baker@canonical.com
[03:25] <_mup_> Use cases
[03:35] <_mup_> juju/upgrade-sym-link r468 committed by kapil.thangavelu@canonical.com
[03:35] <_mup_> support extracting bundle symlinks in place over an existing charm for upgrades
[03:35] <hazmat> jimbaker, it's probably worthwhile to send an email to the list re the relation support
[03:36] <jimbaker> hazmat, that's what i'm working on :)
[03:36] <hazmat> jimbaker, awesome.. i should have mentioned that earlier
[03:36] <jimbaker> it's taking a while to get the proposed api change in place
[03:36] <hazmat> jimbaker, it's probably better to do the email first, just in case..
[03:36] <jimbaker> given the number of bugs and interlocking concepts
[03:36] <hazmat> yeah.. it's got to hit a few places
[03:36] <jimbaker> hazmat, i have not worked on impl. just the spec
[03:37] <hazmat> jimbaker, ah.. cool
[03:37] <hazmat> jimbaker, i'm probably going to take over on that purge-queue-hook thing then
[03:37] <jimbaker> hazmat, sounds cool
[03:37] <hazmat> great
[03:38] <jimbaker> hazmat, i expect at least 6 branches will be needed for this work
[03:38] <hazmat> jimbaker, sounds about right
[03:39] <hazmat> jimbaker, the most important i think is just getting unambiguous relation identities
[03:39] <jimbaker> hazmat, yes, that's the first branch
[03:39] <hazmat> jimbaker, after that hook cli mods, and status.. what else?
[03:39] <jimbaker> hazmat, out of band hook command execution
[03:40] <hazmat> oh.. yeah.. and the hook context as well
[03:40] <jimbaker> yes, that's the second branch :)
[03:40] <jimbaker> i'm not planning to do any changes for juju status, i think it's possible to do everything with an enhanced relation-list
[03:41] <jimbaker> trying to keep the changes as small as possible
[03:41] <hazmat> jimbaker, there's a problem with status relating to it.. say for example you have mysql with relations to wordpress and mediawiki..
[03:41] <jimbaker> hazmat, yes, that's a reasonable scenario
[03:41] <hazmat> jimbaker, status only shows the one rel because it's using a dict by rel name, instead of identity
[03:43] <jimbaker> hazmat, i see what you mean. well... i was going to delete the juju status changes from the proposed spec because i couldn't show a use case from the bug reports/mailing list, but it looks like you have a reasonable one here
[03:44] <jimbaker> that's at least one more branch then
[03:44] <jimbaker> hazmat, in the same vein: do you see a need for a juju-status (a version of juju status that can be run from a hook)?
[03:45] <jimbaker> right now, i cannot justify it, so it's pending deletion...
[03:45] <jimbaker> but that would be 2 more branches for that support
[03:46] <hazmat> jimbaker, huh.. hooks and status are on remote ends
[03:47] <hazmat> i don't follow
[03:47] <jimbaker> hazmat, indeed they are... might be useful for a brain. so i guess we can delete.
[03:47] <jimbaker> it can always be done in some subsequent api change, just trying to keep this work reasonably limited
[03:49] <hazmat> bcsaller, jimbaker .. for small branches, i'm thinking we should just move to approved on one review, if the original reviewer feels the change is small/simple..
[03:50] <jimbaker> hazmat, +1 on that
[03:50] <bcsaller> hazmat: I'm good with that
[03:50] <hazmat> the review queue has been flooded for a while it feels like
[03:50] <hazmat> cool
[03:50] <hazmat> flacoste, suggested it
[03:51] <hazmat> he mentioned lp did an experiment with no reviews (ie. self-reviews), and then went back after a cycle and looked back over the commits, to see if they would have benefited from a review.. for the most part they didn't.. i'm not quite feeling that one yet ;-)
[03:53] <jimbaker> hazmat, ;)
[03:58] <_mup_> juju/enhanced-relation-support r15 committed by jim.baker@canonical.com
[03:58] <_mup_> Removed juju-status, other cleanup
[03:58] <hazmat> interesting.. first 'real' google go project i've seen http://code.google.com/p/vitess/
[04:01] <jimbaker> hazmat, definitely good to see, especially the pros/cons of go discussed http://code.google.com/p/vitess/wiki/ProjectGoals
[04:02] <jimbaker> i naively thought they were using a better gc, but guess the project just needs work on that
[04:05] <hazmat> jimbaker, gc at scale is dark arts
[04:05] <hazmat> a generational gc would work for a cache
[04:07] <hazmat> i just hope they don't get into the thousand gc knobs of java.. looking at all the gc hints in programs like cassandra or elasticcache needs a book to reference
[04:09] <jimbaker> hazmat, well that's certainly a perverse result of large market share
[04:10] <jimbaker> and of course the fact that java code is not necessarily written by top notch devs...
[04:11] <hazmat> or what scale you're using it at
[04:37] <hazmat> fwereade, g'morning
[16:19] <matsubara> hello there. I'm having problems starting a precise instance on ec2 using juju. I'm trying to deploy the oneiric jenkins charm on a precise instance. The problem is that the ec2 instance doesn't seem to come online
[16:20] <matsubara> as in, I can't ssh into it to debug what's wrong with the deployment
[16:35] <hazmat> matsubara, do the ec2 tools show the instance is running? did you have problems with bootstrapping or just the charm?
[16:35] <matsubara> hazmat, not sure where the problem is
[16:36] <matsubara> hazmat, and yes, ec2 console shows the instance as running and with a public ip assigned
[16:36] <hazmat> matsubara, okay.. why don't we walk through the steps.. you bootstrapped, and then deployed a charm?
[16:36] <matsubara> hazmat, I had a environment bootstrapped previously
[16:36] <hazmat> ah
[16:36] <matsubara> where I deploys a jenkins instance on oneiric
[16:36] <matsubara> I deployed, I mean
[16:37] <hazmat> matsubara, how old is the environment?
[16:37] <matsubara> then, using the same bootstrapped environment, I tried to deploy the precise jenkins instance
[16:37] <matsubara> not older than 2 weeks
[16:37] <matsubara> should I have bootstrapped another environment?
[16:37] <hazmat> matsubara, we had an incompatible change in the ppa version of juju a little over a week ago
[16:38] <hazmat> fwereade__, we probably should have sent an email out about that one
[16:38] <fwereade__> hazmat, damn, yes we should :(
[16:38] <matsubara> hazmat, is there a workaround? what do I need to do?
[16:38] <hazmat> matsubara, are you using the ppa?
[16:38] <matsubara> AFAICT, no
[16:40] <matsubara> hazmat, ii  juju                          0.5+bzr457-0ubuntu1           next generation service orchestration system
[16:45] <hazmat> matsubara, do you have any value specified for juju-origin in environments.yaml
[16:46] <hazmat> matsubara, i think that implies you're using the ppa
[16:46] <hazmat> oh.. maybe not.. sorry
[16:46] <SpamapS> hazmat: is that the version of juju on your bootstrapped machine too?
[16:47] <SpamapS> hazmat: juju-origin defaults to distro if you are using the distro version. :)
[16:47] <hazmat> matsubara, can you pastebin the console output of the instance, it should be available from the ec2 console or and also via the ec2 cli tools
[16:47] <hazmat> SpamapS, just wanted to check that's the latest version in precise.. and it appears to be
[16:48] <hazmat> the restart support started landing right after 457
[16:48] <SpamapS> hazmat: DOH
[16:49] <SpamapS> hazmat: what's really going to break things is when people try to deploy an older version of juju with juju-origin: distro .. as in.. an oneiric instance.. that will be r398
[16:49] <SpamapS> hazmat: I'm convinced now, we can't do it this way anymore. We have to start storing juju in file storage and distributing it to the nodes.
[16:50] <hazmat> SpamapS, agreed, it's insane as it is
[16:50] <hazmat> SpamapS, would you mind bringing it up on the list? if not i can
[16:50] <SpamapS> hazmat: I've been holding off applying for a FFe because of subordinates landing.. would rather not bother the release team with 2
[16:50] <SpamapS> hazmat: I'll email the list, I've been thinking about this for a while.
[16:51] <hazmat> SpamapS, thanks
[17:34] <flaviamissi> hey! I'm writing a charm for django, and I want to export some environment variables in the `db-relation-changed` hook, but when I try to access them from the django app, after relating the service with mysql, the environment variables exported by the hook are not available. I could just write them to a file, but I like the approach of using environment variables for that... does anyone have any tips for this problem?
[17:54] <SpamapS> flaviamissi: you really *should* be writing them to disk, in a place where your django app's startup can read them.
[17:54] <SpamapS> flaviamissi: if you reboot, or have to restart the service manually, you won't have access to the environment that the charm had access to.
[17:58] <flaviamissi> SpamapS: yeah, you're absolutely right. Do you know how Heroku solves that issue? because they use environment variables for database information..
[18:02] <SpamapS> flaviamissi: I don't think Heroku has the same structure as Juju.. perhaps those are stored somewhere and re-executed every time?
[18:05] <flaviamissi> SpamapS: that's what I thought, I think you're right, the best approach with Juju is writing those to a file.
[18:05] <SpamapS> flaviamissi: right, juju is just an event framework with a conduit for sharing configuration.
[18:05] <flaviamissi> SpamapS: but the environment variables are so clean... :~
[18:06] <flaviamissi> SpamapS: thanks for the clarification :)
[18:06] <SpamapS> flaviamissi: sounds to me like Heroku requires you to give up more control though.
[18:06] <flaviamissi> SpamapS: yeah, I have that impression too.
[18:07] <SpamapS> flaviamissi: care to share your charm? I'd be happy to suggest a clean way to do what you're intending.
[18:07] <flaviamissi> SpamapS: Sure! https://github.com/timeredbull/charms
[18:07] <flaviamissi> SpamapS: we are integrating Juju with openstack
[18:09] <SpamapS> flaviamissi: great start, it looks nicely organized
[18:09] <SpamapS> flaviamissi: juju_charm="/var/lib/juju/units/${juju_unit}/charm"
[18:09] <SpamapS> flaviamissi: that's a no-no
[18:09] <SpamapS> flaviamissi: $CHARM_DIR will always be set
[18:10] <flaviamissi> SpamapS: great!
[18:10] <SpamapS> flaviamissi: also the cwd when charms are run will always be the root of the charm
[18:10] <SpamapS> flaviamissi: in some cases, that may not be the right path. :)
[18:10] <SpamapS> rather, the one you've explicitly chosen, may not be the right path
[18:10] <flaviamissi> SpamapS: hmmmmm, changing it now
[18:11] <flaviamissi> SpamapS: thanx :)
[18:11] <SpamapS> flaviamissi: Other than that, you should move *all* of joined into changed.
[18:12] <SpamapS> flaviamissi: and instead of just exporting them, just write them to a file that 'start_gunicorn' sources.
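A minimal sketch of the pattern SpamapS suggests. The relation values are stubbed in here so the snippet is self-contained; in a real `db-relation-changed` hook they would come from `relation-get`, `$CHARM_DIR` is set by juju when hooks run, and all the names and values below are illustrative:

```shell
#!/bin/sh
# In a real db-relation-changed hook these would come from the relation:
#   database=$(relation-get database)
#   user=$(relation-get user)
#   host=$(relation-get host)
# They are stubbed here so the sketch runs standalone.
database="helloworld"
user="app"
host="10.0.0.5"

# Write the values somewhere the start script can source them later,
# so they survive reboots and manual service restarts.
settings="${CHARM_DIR:-.}/settings.sh"
cat > "$settings" <<EOF
export DATABASE_NAME="$database"
export DATABASE_USER="$user"
export DATABASE_HOST="$host"
EOF
```

`start_gunicorn` would then begin with `. "$CHARM_DIR/settings.sh"` before launching the app, so the django process sees the variables even when it wasn't started from a hook.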
[18:12] <flaviamissi> SpamapS: hmmm
[18:13] <SpamapS> flaviamissi: I'd also recommend putting start_gunicorn and stop_gunicorn in /usr/local/bin so you can use them without the charms.
[18:13] <flaviamissi> SpamapS: great! I'll do that.
[18:14] <flaviamissi> SpamapS: but... why should start_gunicorn source the file with the variables?
[18:14] <flaviamissi> SpamapS: sorry if it's a noob question.. :/
[18:14] <SpamapS> flaviamissi: I assume that the django app running inside gunicorn is the bit that needs those variables.
[18:14] <flaviamissi> SpamapS: that's right
[18:15] <flaviamissi> SpamapS: oooh, right, I got it :)
[18:16] <flaviamissi> SpamapS: you've helped a *lot*! thank you ^^
[18:16] <SpamapS> flaviamissi: I'd just put it in the helloworld root.. 'settings.sh' or something.. and then just 'source /home/app/helloworld/settings.sh'
[18:16] <SpamapS> flaviamissi: no, thank you. :)
[18:16] <flaviamissi> SpamapS: xD
[18:16] <SpamapS> we appreciate that juju is new and weird, and that you are willing to play with us. :)
[18:17] <flaviamissi> SpamapS: it has been a great experience
[18:19] <flaviamissi> SpamapS: are you a juju developer?
[18:19] <SpamapS> flaviamissi: heh.. sort of. :)
[18:20] <SpamapS> flaviamissi: I'm not in the core dev team, more of a power user. :)
[18:20] <flaviamissi> SpamapS: great, i've been thinking of sending a patch
[18:20] <frankban> hey SpamapS: do you have a minute? I was looking at juju-jitsu, it seems that a shell alias is used to override the juju command. AFAICT this sure works well for tests using bash, but not so easily for python tests.
[18:20] <frankban> I've seen the proposal to use env vars to set up charms repository and namespace, and IMHO that should solve the problem. In the meantime, what do you suggest? I could just continue using the juju_wrapper script.
[18:20] <flaviamissi> SpamapS: because we use a proxy here, and juju doesn't seem to work properly behind one
[18:21] <matsubara> hazmat, hey there, you were helping me up with that juju issue on precise. it seems I'm using the packaged version rather than the ppa's.
[18:21] <matsubara> would the console output of the instance help debug the problem?
[18:24] <SpamapS> frankban: I have been thinking about that..
[18:25] <SpamapS> frankban: I think we just need to create a temporary PATH override until the env vars are available. So just put 'juju-jitsu-wrapper' in $PATH somewhere as juju
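A sketch of that PATH-override trick. The wrapper body here only echoes its arguments so the example is self-contained; a real one would exec the jitsu wrapper, whose install location is an assumption:

```shell
#!/bin/sh
# Create a directory that will shadow the real juju binary on $PATH.
override_dir=$(mktemp -d)
cat > "$override_dir/juju" <<'EOF'
#!/bin/sh
echo "wrapper invoked with: $*"
# a real wrapper would instead: exec /path/to/juju-jitsu-wrapper "$@"
EOF
chmod +x "$override_dir/juju"

# Prepend it so `juju` now resolves to the wrapper first.
PATH="$override_dir:$PATH"
export PATH
juju status   # prints: wrapper invoked with: status
```

Removing the directory from $PATH (or deleting it) restores the real command, which is what makes this a workable stopgap until env-var support lands.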
[18:25] <SpamapS> flaviamissi: I think there may already be a bug report. It's a common request. :)
[18:26] <flaviamissi> SpamapS: I saw it :)
[18:27] <hazmat> matsubara, yes.. very much so..
[18:28] <frankban> SpamapS: thanks.
[18:29] <SpamapS> frankban: I'd expect the environment variables to land quite soon though. :)
[18:29] <frankban> SpamapS: great!
[18:30] <matsubara> hazmat, https://pastebin.canonical.com/61266/
[18:31] <matsubara> hazmat, machine 6 is the first one I started where I noticed the problem. machine 7 is the second one but that one didn't have any output in the system log
[18:32] <hazmat> matsubara, this appears to be the libc6 upgrade problem when installing juju
[18:32] <hazmat> matsubara, at the moment the workaround is to manually specify a machine image to use
[18:32] <hazmat> ie. picking one of the latest nightlies
[18:33] <hazmat> from http://cloud-images.ubuntu.com/precise/current/
[18:33] <hazmat> and putting it in ~/.juju/environments.yaml for the ec2 environment as default-image-id
[18:34] <matsubara> hazmat, cool. thank you. let me try that
[18:36]  * hazmat hopes it works
[18:45] <matsubara> hazmat, do I need to choose the one ebs root? (or it doesn't matter?)
[18:45] <matsubara> s/one/one with/
[18:50] <matsubara> looks like the daily build images don't even start
[18:58] <jcastro> hazmat: do we handle Amazon spot instances in juju?
[19:05] <m_3> hazmat: re: build links...yes broken until I flick the switch on publishing
[19:05] <m_3> hazmat: thanks, I'll fix the hadoop cycle
[19:16] <hazmat> matsubara, after the environments.yaml change, it takes a moment for the provisioning agent to notice the change
[19:16] <hazmat> jcastro, no
[19:16] <mchenetz> good afternoon…
[19:17] <matsubara> hazmat, does it matter that the bootstrapped environment was bootstrapped with oneiric now that I changed the environments.yaml file for the precise release?
[19:22] <hazmat> matsubara, at the moment it shouldn't.. as those versions should be compatible. otoh you wouldn't be having these problems if you had a fresh environment
[19:22] <hazmat> the next upload to precise will be incompatible with the oneiric version
[19:23] <matsubara> right
[19:23] <hazmat> at the moment the best option to utilize a single version of juju across multiple distro releases is to utilize the ppa
[19:24] <SpamapS> hazmat: or use a branch
[19:24] <hazmat> but even that's not a guarantee as the ppa is a trunk build. SpamapS and i were discussing this earlier right before you left, namely that juju should be distributing itself to all machines
[19:25] <hazmat> SpamapS, yes, using a stable branch would work well (juju-origin: lp:branch_location)
[19:25] <matsubara> hazmat, right. I'm bootstrapping a new env to test.
[19:25] <hazmat> but in terms of moving towards juju upgrading itself, having juju distribute itself would work a bit better
[19:28] <SpamapS> hazmat: indeed
[19:50] <_mup_> juju/enhanced-relation-spec r16 committed by jim.baker@canonical.com
[19:50] <_mup_> Rework juju status output, consistency guarantees, and other finalization
[19:52] <_mup_> juju/enhanced-relation-spec r17 committed by jim.baker@canonical.com
[19:52] <_mup_> Merged docs trunk
[19:59] <matsubara> hazmat, bootstrapping a new environment worked, but I lost access to the old bootstrapped environment. Is there a way to access that one? Even if I change my environments.yaml back to what it was, it still refuses connection to the oneiric bootstrapped env
[20:00] <marcoceppi> o/ hi everyone
[20:01] <hazmat> matsubara, ? if you want to have multiple environments, you should have multiple entries in environments.yaml one per environment.. if you change the details of an environment (things like control bucket or name) then juju won't know how to contact the original environment.. if you need to destroy a previous environment you can make sure it has the same name and do juju destroy-environment.
[20:02] <hazmat> matsubara, ie.. when you say you bootstrapped a new environment, did you destroy the old one or use a new entry in environments.yaml.. if you need to reference a non-default environment pass -e env_name on the cli after the subcommand
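To illustrate the setup hazmat describes, a sketch of an `environments.yaml` with two separate entries. All names, buckets, and secrets here are made up; the key point is that each environment gets its own entry and its own unique control-bucket:

```yaml
environments:
  oneiric-env:
    type: ec2
    control-bucket: juju-oneiric-a1b2c3   # must be unique per environment
    admin-secret: secret-one
    default-series: oneiric
  precise-env:
    type: ec2
    control-bucket: juju-precise-d4e5f6   # a different bucket for the new environment
    admin-secret: secret-two
    default-series: precise
```

Non-default environments are then addressed with the `-e` flag after the subcommand, e.g. `juju status -e precise-env`.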
[20:02] <hazmat> marcoceppi, greetings!
[20:02] <matsubara> hazmat, that's what I did (and I didn't change the control bucket, access-key, secret-key for them) just created a new environment with the same values plus the ami type and series
[20:02] <matsubara> then I bootstrapped this new one and lost access to the old one
[20:03] <matsubara> when I try juju -e $env it says it can't connect to the environment
[20:03] <matsubara> so, I think I kinda broke everything
[20:05] <matsubara> is there any way to find the key used to log into the ec2 instance?
[20:06] <hazmat> matsubara, it sniffs a public key from the ~/.ssh/  if one isn't specified in environments.yaml
[20:13] <hazmat> matsubara, the previous one was working, so it's not clear why access would have been lost
[20:14] <matsubara> hazmat, yeah, it's unclear to me as well. I'm trying to connect to the instance to get the data but the instance is not accepting my ssh key
[20:24] <matsubara> hazmat, managed to connect to the instance. I was using the wrong username.
[20:24] <hazmat> ah
[20:24] <matsubara> (it's still unclear why juju can't see both environments...)
[20:24] <SpamapS> matsubara: do they have the same control-bucket ?
[20:29] <hazmat> that would be bad
[20:39] <SpamapS> like crossing the streams
[20:40] <matsubara> yep
[20:40] <matsubara> SpamapS, where do I get that value from?
[20:43] <SpamapS> matsubara: that has to be unique to every environment
[20:43] <SpamapS> matsubara: it stores the seed information that tells clients where to find the bootstrap node
[20:43] <matsubara> SpamapS, right. that makes sense. how do I generate a new one?
[20:44] <SpamapS> matsubara: lately I attach the date to the environment name when I bootstrap.
[20:49] <matsubara> SpamapS, so it's just a random string? it's not a hashed value from somewhere else?
[20:50] <matsubara> btw, isn't it a bug that juju doesn't tell me I have two environments with the same control-bucket value?
[20:53] <hazmat> matsubara, hmm.. yeah.. it is
[20:53] <hazmat> matsubara, its just a random string
[20:53] <matsubara> cool. I'll file the bug once I'm done with my experiments
[20:53] <matsubara> thanks for all the help hazmat and SpamapS
[20:54] <hazmat> matsubara, incidentally even with the control bucket change, destroy-environment will do the right thing (it works by env name)
[20:55] <matsubara> right. I was being careful to not run destroy-environment before I can get my data out of it
[21:35] <_mup_> juju/enhanced-relation-spec r18 committed by jim.baker@canonical.com
[21:35] <_mup_> Finish TODOs
[21:41] <_mup_> Bug #943610 was filed: Specification for enhanced relation support <juju:In Progress by jimbaker> < https://launchpad.net/bugs/943610 >
[21:43] <jimbaker> with that and the merge proposal, about to generate the corresponding proposed api change email to juju mailing list... hold on please :)
[21:44] <hazmat> jimbaker, cool
[21:44] <jimbaker> hazmat, hopefully it will delight everybody. fingers crossed ;)
[21:45] <ejat> ping jcastro
[21:47] <SpamapS> ejat: it's usually safest to start with somebody's name, like
[21:47] <SpamapS> jcastro: ping ^^
[21:47] <jcastro> hi
[21:47] <jcastro> oh sorry I was burning AWS CPU time with boinc
[21:48] <ejat> SpamapS owh okie
[21:49] <ejat> just want to check .. i'm just looking at the bugs at the charm target list
[21:49] <ejat> line 125
[21:50] <jimbaker> and email sent on the proposed api change. time to walk the dog! :)
[21:52] <jcastro> ejat: that's symfony right?
[21:53] <jcastro> that's in review right now right?
[21:53] <ejat> jcastro : not sure .. but i already tag new-charm
[21:54] <jcastro> ah right
[21:54] <jcastro> SpamapS: marcoceppi: review?
[21:54] <jcastro> who else is around ...
[21:56]  * ejat trying to write another charm … go go go ..  
[21:56] <SpamapS> ejat: what's the status of the new-charm tagged bug?
[21:56] <SpamapS> I only look at New/Confirmed/Triaged
[21:57] <SpamapS> In Progress and Incomplete I ignore since they usually mean it's not ready for review
[21:57] <ejat> so i need to change it to confirmed / triaged?
[21:57] <m_3> hazmat: snapshot restore is hanging about 5-10% of the time ( http://ec2-107-22-3-212.compute-1.amazonaws.com:8080/job/oneiric-local-charm-cloudfoundry-server-dea/1/console )
[21:58] <SpamapS> ejat: bug #?
[21:58] <m_3> hazmat: changing it to use verbose and catch stderr properly... hopefully I'll catch more info next time
[21:58] <ejat> bug #940140
[21:58] <_mup_> Bug #940140: Charm needed: Symfony <new-charm> <Juju Charms Collection:In Progress by fenris> < https://launchpad.net/bugs/940140 >
[21:58] <m_3> hazmat: (just fyi)
[21:58] <hazmat> m_3, i think i see the issue, can you file a bug against charmrunner
[21:58] <m_3> sure
[21:59] <ejat> since m_3 already help me to debug and test
[21:59] <hazmat> m_3, i'm in the middle of a problematic relation bug atm
[21:59] <m_3> np... thanks!
[22:00] <ejat> brb
[22:01] <jcastro> nice he got that one
[22:01] <marcoceppi> jcastro: o/
[22:01] <jcastro> he was stuck on some java thing before that wasn't packaged and it was generally not fun
[22:01] <jcastro> marcoceppi: https://launchpad.net/bugs/940140
[22:01] <_mup_> Bug #940140: Charm needed: Symfony <new-charm> <Juju Charms Collection:Confirmed for fenris> < https://launchpad.net/bugs/940140 >
[22:01] <jcastro> We know you love php dude
[22:03] <marcoceppi> and I <3 Symfony
[22:06] <matsubara> is it possible to pass the instance-type to juju deploy at run time?
[22:06] <ejat> SpamapS: so i need to change it to confirmed / triaged?
[22:07] <marcoceppi> jcastro: looking at it now
[22:08] <ejat> thanks marcoceppi
[22:08] <jcastro> SpamapS: see if we could sort merge proposals we wouldn't have this bug status confusion
[22:08] <jcastro> just sayin'
[22:08] <SpamapS> jcastro: what confusion?
[22:09] <SpamapS> jcastro: In Progress would be like Work In Progress in a merge proposal.. same problem. :)
[22:11] <ejat> is there something in juju school mentioning the status ....
[22:12] <SpamapS> ejat: no, it was not clear, so I have updated https://juju.ubuntu.com/Charms to make it more clear.
[22:12] <ejat> \0/
[22:13] <SpamapS> ejat: also just noted that you need to point reviewers to your charm.. no mention of it in the bug
[22:14] <ejat> noted + thanks
[22:19] <ejat> updated the branch
[22:30] <marcoceppi> Any idea if PEAR uses HTTPS?
[22:30] <marcoceppi> Or does payload verification?
[22:31] <SpamapS> marcoceppi: it does not do either
[22:31] <SpamapS> marcoceppi: I looked into it at one point. :(
[22:31] <SpamapS> neither pecl nor pear do anything to protect the user
[22:31] <marcoceppi> onedayiwillwriteabetterphpdeploymentservice
[22:32] <SpamapS> "pear is a dev tool"
[22:32] <marcoceppi> "pear is a pos"
[22:32] <SpamapS> marcoceppi: better to just dump any unpackaged PEAR modules you need into the charm.
[22:32] <marcoceppi> I agree, considering how tedious the update and release cycle is for module owners
[22:33] <SpamapS> "pear don't care, pear don't give a s***"
[22:35]  * marcoceppi concludes review
[22:39] <marcoceppi> Is there an idea of "optional interfaces"?
[22:39] <marcoceppi> It's been a while :)
[22:40] <marcoceppi> Would that be a peer?
[22:42] <ejat> :)
[22:45] <ejat> pear fruit :)
[22:57] <SpamapS> marcoceppi: all requires are actually optional
[22:57] <marcoceppi> SpamapS: I noticed that actually
[22:58] <SpamapS> marcoceppi: at some point we may provide some mechanism for hardening or softening that
[22:58] <SpamapS> marcoceppi: we're seeing some ordering problems because they're optional... so a discussion needs to happen.
[22:58] <marcoceppi> I think that responsibility should lie within the charm/hooks itself though
[22:59] <marcoceppi> Or, if there are problems, then a conversation should happen
[23:00] <SpamapS> marcoceppi: for instance, there are times where you want to store your data locally in sqlite, and others where you want mysql. That shouldn't be two charms IMO, we should be able to have that choice declared, and a simple way to change the default behavior.
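A sketch of how that choice might be declared in a charm's `config.yaml` today. The option name is hypothetical, and juju gives it no special handling, so the hooks would still have to branch on its value themselves:

```yaml
options:
  storage-backend:
    type: string
    default: sqlite
    description: |
      Data store to use: "sqlite" keeps data locally, while "mysql"
      makes the charm wait for a mysql relation before serving.
```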
[23:00] <hazmat> SpamapS, re ordering around relations, outside of them being established, notional ordering supposes a steady state/goodness, whereas dependency fulfillment is done by existence.
[23:00] <marcoceppi> SpamapS: I agree, and am struggling with that ATM
[23:00] <marcoceppi> But that's something that happens in the config-changed hook, nothing related to interfaces, correct?
[23:02] <SpamapS> hazmat: somehow you always manage to make me go cross eyed
[23:02] <hazmat> SpamapS, sometimes i have that effect. in english: just because a relation is established and thus a dependency satisfied doesn't imply that it's ready to use.
[23:03] <SpamapS> marcoceppi: that's how you deal with it today.. but it would be a better experience if there were a uniform way to say "don't serve X until you have satisfied Y"
[23:03] <marcoceppi> SpamapS: interesting idea
[23:03] <hazmat> yeah.. a nice way for a charm to say.. i don't provide anything till X
[23:03] <SpamapS> hazmat: right, I'm looking for a better answer than the one that I proposed since my proposal requires steady state.
[23:04] <SpamapS> as hazmat pointed out to me yesterday, if we make disk storage a 1st class abstraction in juju, that may handle the whole thing properly
[23:04] <SpamapS> since you could, in theory, declare that you need disk storage | mysql storage..
[23:05] <SpamapS> but, forget what I'm saying, I get off in the weeds sometimes.
[23:14] <jimbaker> SpamapS, i'd appreciate if you'd take a look at the proposal i sent to the mailing list, it should satisfy this need
[23:14] <SpamapS> jimbaker: ooo I didn't know it was something actively being looked at. Will read ASAP :)
[23:14]  * SpamapS is only about 6 hours behind on email ATM.. woot
[23:14] <jimbaker> also, i'm curious if the potential tongue twister of `juju do` will be what we end up implementing ;)
[23:15] <jimbaker> any hook command from anywhere juju runs would seem to be some goodness
[23:16]  * jimbaker understands email backlog, for sure
[23:25] <jimbaker> hazmat, can you respond to the api change email re your observation "juju do is out of scope and dangerous"? i guess the statement that mysql with multiple relations is complex may be overstating these types of scenarios. certainly more complex than 1 instance of a wordpress blog + mysql
[23:26] <jimbaker> just trying to keep the discussion in the appropriate channel, per api changes