Nick | Message | Time |
---|---|---|
SpamapS | interesting.. | 00:00 |
SpamapS | config-changed fires when the unit agent is restarted | 00:00 |
benji | config-changed used to fire twice on deploy, I don't know if that is intentional or not, but I thought it was a little odd | 00:11 |
benji | /ignore moe | 00:11 |
hazmat | SpamapS, its not landed | 00:19 |
hazmat | benji, config-changed is meant to be idempotent.. it's always fired once before the service is started, just so the app can pull its config and put it in place.. and then it will be fired again whenever the config is changed | 00:19 |
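A minimal sketch of the idempotent hook hazmat is describing. The `port` option, the config file path, and the service name are invented for illustration; `config-get` is the juju hook tool for reading charm config.

```bash
#!/bin/bash
# hooks/config-changed -- sketch of an idempotent hook (hypothetical "port"
# option, config path, and service name).
set -e

PORT=$(config-get port)

# Rebuild the config file from scratch on every run instead of editing it in
# place, so a second invocation produces exactly the same result.
cat > /etc/myapp.conf <<EOF
port: ${PORT}
EOF

# The first invocation happens before the service has ever been started,
# so don't treat a failed restart as fatal.
service myapp restart || true
```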
_mup_ | juju/symlink-guard r455 committed by kapil.thangavelu@canonical.com | 00:29 |
_mup_ | merge trunk | 00:29 |
_mup_ | juju/symlink-guard r456 committed by kapil.thangavelu@canonical.com | 00:38 |
_mup_ | address review comments, extract symlink constant to charm base | 00:38 |
_mup_ | juju/symlink-guard r457 committed by kapil.thangavelu@canonical.com | 01:15 |
_mup_ | clean up error message | 01:15 |
_mup_ | juju/trunk r465 committed by kapil.thangavelu@canonical.com | 01:21 |
_mup_ | merge symlink-guard verify that charm symlinks don't extend outside of the charm and verify file types being included, internal symlinks are ok. [r=bcsaller,jimbaker][f=928348] | 01:21 |
_mup_ | juju/enhanced-relation-support r6 committed by jim.baker@canonical.com | 04:59 |
_mup_ | Sketch of impl details | 04:59 |
dazz | Guys, can you scale (Add units) to mysql? | 06:15 |
SpamapS | dazz: right now that just creates two independent mysql servers.. | 06:26 |
SpamapS | dazz: that would be useful in some cases.. such as if you are sharding/partitioning | 06:26 |
dazz | which is fine, is it clustered still? | 06:26 |
dazz | automatically? That's how I thought it would have been in my head. | 06:26 |
SpamapS | dazz: its not "clustered" per se | 06:26 |
SpamapS | dazz: there's probably a way to make them scale out in some kind of ring replication.. but not without some thought. | 06:27 |
SpamapS | dazz: it would be cool to add something like galera replication to the charm and just have it automatically scale out that way | 06:27 |
dazz | np cheers | 06:28 |
SpamapS | dazz: just curious.. how much experience do you have with scaling mysql? | 06:30 |
dazz | little bit | 06:30 |
dazz | nothing huge though | 06:30 |
dazz | One of the "awesome" things I was looking at juju for was the automatic scaling of mysql. | 06:31 |
dazz | automatically cluster them as you scale etc. | 06:31 |
SpamapS | dazz: I think we could put together a few different config options that would allow one to say something like 'deploy mysql --set scaling-mode=ring|galera|standalone' .. | 06:31 |
dazz | I assume these would really have to be custom mysql charms to get this ability. | 06:31 |
dazz | that would be awesome | 06:31 |
SpamapS | dazz: no not custom.. it should be able to be encoded in the single mysql charm | 06:32 |
dazz | yeah | 06:32 |
SpamapS | dazz: juju allows customization very easily, but it encourages making things configurable | 06:32 |
dazz | right but the current mysql charm doesn't do this right? | 06:32 |
SpamapS | dazz: right. It only supports one-way replication | 06:35 |
dazz | yeap | 06:35 |
dazz | so if I were to extend the way the charm currently is, you'd like it ;) | 06:35 |
SpamapS | dazz: part of the reason for that is that your application needs to understand how to use a mysql cluster, and there really aren't any opensource apps that do. | 06:35 |
dazz | hrmm | 06:35 |
SpamapS | mediawiki does know how to use readonly slaves.. | 06:35 |
SpamapS | which is why the mysql charm supports readonly slaves. :) | 06:36 |
dazz | wonder if we can do some sort of load balancer for mysql.... | 06:53 |
SpamapS | dazz: sure, but without a synchronous replication technology like galera, you can't guarantee reads from one will be consistent with writes from another. | 06:56 |
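If the `scaling-mode` idea SpamapS floats above were pursued, the charm side might start with a config option along these lines. This is purely a sketch: the option name and values come from his suggestion, and nothing here reflects what the mysql charm actually ships.

```bash
# Hypothetical fragment for the mysql charm's config.yaml (printed via a
# heredoc; the option and its values are invented).
cat <<'EOF'
options:
  scaling-mode:
    type: string
    default: standalone
    description: |
      How additional units join the service: standalone (independent
      servers), ring (circular replication) or galera (synchronous
      multi-master replication).
EOF

# Deploy-time selection, per SpamapS's example above:
#   juju deploy mysql --set scaling-mode=galera
```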
adam_g | SpamapS: still up? | 07:07 |
SpamapS | adam_g: yeah, just poking at things. Whats up? | 07:10 |
adam_g | SpamapS: you mentioned there was some recent juju breakage in precise? is that still an issue? some issue hit all the CI nodes at once tonight, something ive not seen before | 07:13 |
adam_g | agent startup fails with: juju.errors.JujuError: No session file specified | 07:13 |
SpamapS | adam_g: the issue was that in the cloud-init blocks it was doing 'apt-get -y install', which, if it pulls in something that asks a debconf question, would be a fail. | 07:13 |
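For reference, the usual way to keep an unattended `apt-get install` from stalling on a debconf question is the pattern below. It is generic, not a quote of what juju's cloud-init data actually contains, and `some-package` is a placeholder.

```bash
# Generic non-interactive install pattern: suppress debconf prompts and keep
# existing conffiles on upgrade.
export DEBIAN_FRONTEND=noninteractive
apt-get -y -o Dpkg::Options::="--force-confold" install some-package
```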
SpamapS | adam_g: the problem is that you are trying to spawn juju units with an older version of juju I think | 07:14 |
SpamapS | adam_g: do you tear-down the bootstrap every time? | 07:15 |
adam_g | SpamapS: no.. im waiting on a re-bootstrap now, im sure it'll resolve | 07:15 |
SpamapS | adam_g: its possible the change made today had some unintended consequences... when I saw that error, I had used an older build to bootstrap with the newer stuff in the ppa and it failed | 07:16 |
adam_g | yeah, im running with juju-origin: ppa, i think the bootstrap node was up for a week or something stupid | 07:16 |
adam_g | i need to write a nightly job to teardown and rebootstrap to avoid that | 07:17 |
SpamapS | adam_g: the latest version actually allows you to *reboot* | 07:17 |
SpamapS | adam_g: we might even be able to upgrade juju on a box.. :) | 07:17 |
adam_g | SpamapS: as in, upstart jobs? | 07:17 |
SpamapS | aye | 07:17 |
adam_g | suhweet | 07:17 |
SpamapS | adam_g: apparently there are still some cases where the restart will fail.. something with zookeeper and clocks and sessions | 07:18 |
SpamapS | but this is still super sweet, as we can in theory have a long running bootstrap node that we can upgrade | 07:18 |
SpamapS | adam_g: yeah I think the issue you're seeing is that the newer agents require a session file argument.. | 07:20 |
SpamapS | so we basically just broke *every* running bootstrapped environment | 07:20 |
adam_g | heh | 07:21 |
SpamapS | unless people manually update juju on their machine 0, and then manually restart the provisioning agent | 07:21 |
* SpamapS files a bug | 07:22 | |
SpamapS | I really think juju bootstrap needs to *cache* the version of juju it installs/deploys, and deploy that one rather than whatever is in juju-origin | 07:23 |
adam_g | just run squid alongside zookeeper, and setup repository access thru it via cloud-init on nodes, right? :) | 07:25 |
adam_g | okay been in front of this thing since 8am. startin to feel like Daviey. g'night | 07:25 |
SpamapS | adam_g: me too.. time to sleep. :) | 07:26 |
_mup_ | Bug #938463 was filed: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 > | 07:26 |
adam_g | SpamapS: note, still getting the same error, this time on the bootstrap node: http://paste.ubuntu.com/852331/ | 07:31 |
SpamapS | adam_g: make sure whatever ran 'juju bootstrap' is also the latest version | 07:31 |
SpamapS | Seems like that session file option could have a default to prevent this. | 07:32 |
adam_g | SpamapS: is there something i need to pass as session file? | 07:32 |
SpamapS | adam_g: no | 07:33 |
SpamapS | adam_g: you just need the latest juju from the PPA | 07:33 |
adam_g | ok | 07:33 |
adam_g | upgraded client node as well. | 07:33 |
adam_g | thanks | 07:33 |
SpamapS | adam_g: note that 'distro' would probably be a more stable option. :) | 07:34 |
=== grapz is now known as grapz_afk | ||
=== grapz_afk is now known as grapz | ||
_mup_ | Bug #938521 was filed: constraints spec contains historical information <juju:New> < https://launchpad.net/bugs/938521 > | 09:46 |
=== grapz is now known as grapz_afk | ||
Leseb | Has anyone already set up juju with orchestra in a localhost-only environment (not an EC2 instance)? There are so many bugs in there :( | 11:20 |
niemeyer | Good morning! | 12:21 |
bac | hi niemeyer, yesterday i posted to juju list from 'brad.crittenden@canonical' instead of 'bac@canonical'. could you approve the message and check the settings to allow the alternate address in the future? sorry for the bother. | 12:45 |
niemeyer | bac: Definitely, sorry for the trouble | 12:50 |
niemeyer | bac: Done, also whitelisted it | 12:51 |
bac | niemeyer: thanks! | 12:52 |
niemeyer | bac: np | 12:55 |
=== grapz_afk is now known as grapz | ||
xerxas | Hi all | 13:40 |
xerxas | is juju usable from a mac? | 13:41 |
xerxas | I mean, the machine I type commands on | 13:41 |
xerxas | as juju is python, I think it would easily work | 13:41 |
xerxas | but I haven't looked at the code | 13:41 |
xerxas | I'm just asking in case you already know whether it runs or not | 13:41 |
xerxas | or maybe someone did it | 13:42 |
benji | hi all, I'm having an issue similar to the one I had yesterday: when bootstrapping in EC2 on oneiric "juju status" hangs | 14:08 |
benji | hey! it's working. I must have just needed more patience. | 14:12 |
benji | how does one deploy a service on a large EC2 instance? | 14:40 |
benji | I'm trying to just change default-instance-type and default-image-id without any luck. | 14:41 |
m_3 | benji: I use: 'default-instance-type: m1.large' and 'default-image-id: ami-6fa27506' | 14:46 |
m_3 | benji: that image might not be ebs-backed... I usually prefer ephemeral | 14:46 |
m_3 | benji: that's oneiric btw | 14:47 |
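The two settings m_3 lists go in the ec2 environment entry of `~/.juju/environments.yaml`. A rough fragment is shown below; the environment name is arbitrary and the other required keys (credentials and so on) are omitted.

```bash
# Fragment of ~/.juju/environments.yaml with m_3's values (environment name
# "sample" is arbitrary; other required ec2 keys are left out).
cat <<'EOF'
environments:
  sample:
    type: ec2
    default-series: oneiric
    default-instance-type: m1.large
    default-image-id: ami-6fa27506
EOF
```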
=== grapz is now known as grapz_afk | ||
=== grapz_afk is now known as grapz | ||
hazmat | xerxas, yes, the client is mac compatible; if it's not, it's a bug. we do rely on one main extension, the zookeeper bindings | 14:57 |
hazmat | xerxas, it is packaged in systems like homebrew or darwinports i believe | 14:57 |
xerxas | ok | 14:57 |
xerxas | ahh | 14:57 |
xerxas | didn't see it, sorry | 14:58 |
xerxas | doesn't seem to be in homebrew | 14:58 |
xerxas | hazmat: but I'll look | 14:58 |
xerxas | thanks | 14:58 |
hazmat | xerxas, ? https://github.com/mxcl/homebrew/blob/master/Library/Formula/zookeeper.rb | 14:58 |
xerxas | ahh , you mean zookeeper | 14:59 |
xerxas | but zookeeper is installed on the "controller" host | 14:59 |
hazmat | xerxas, it is but we need the bindings for the client to talk to the controller host | 14:59 |
xerxas | I mean, the juju workflow is: my workstation <=> a server (EC2, let's say) that runs zookeeper and controls the infrastructure <=> servers for charms | 15:00 |
xerxas | ok | 15:00 |
xerxas | then I don't need zookeeper, I just need the python zookeeper client, am I wrong? | 15:00 |
hazmat | xerxas, that's right | 15:00 |
hazmat | xerxas, this is a homebrew recipe attempt for juju ... http://repo.chenetz.net/juju.rb | 15:01 |
hazmat | xerxas, but really since its python, you can just grab a checkout and run it, if you have the zk binding (+ python-yaml) | 15:02 |
xerxas | yeah | 15:03 |
xerxas | sure | 15:03 |
xerxas | thanks | 15:03 |
xerxas | I was just asking here before doing something stupid (if you knew it didn't work, it would have saved me some time ;) ) | 15:03 |
xerxas | hazmat: thanks, I'll try it | 15:03 |
xerxas | then | 15:03 |
jcastro | m_3: hey did you see this wrappz work in lp:charms? | 15:38 |
jcastro | SpamapS: fill this in for me: | 15:47 |
jcastro | SpamapS: we prefer charms be licensed under the ... X ... | 15:47 |
jcastro | solve for X! | 15:47 |
jcastro | or do we care as long as it's OSI? | 15:47 |
SpamapS | jcastro: GPLv3 | 16:17 |
SpamapS | jcastro: but yeah, thats just what we prefer.. anything that is "Free" works. :) | 16:19 |
koolhead17 | SpamapS: the bug is still not fixed after the upgrade of php :( | 16:27 |
SpamapS | koolhead17: durn it. | 16:30 |
SpamapS | koolhead17: there's code in the postinst to fix it .. so that surprises me. | 16:30 |
koolhead17 | :( | 16:31 |
SpamapS | koolhead17: dpkg-maintscript-helper rm_conffile /etc/php5/conf.d/sqlite.ini 5.3.9~ -- "$@" | 16:43 |
SpamapS | koolhead17: that should have resulted in the file being renamed to /etc/php5/conf.d/sqlite.ini.dpkg-bak | 16:45 |
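That `dpkg-maintscript-helper` line only makes sense inside a maintainer script. Roughly, the postinst SpamapS is quoting contains something like the sketch below; the surrounding boilerplate is assumed, and `rm_conffile` also needs matching calls in the package's preinst and postrm to work correctly.

```bash
#!/bin/sh
# Sketch of the postinst fragment quoted above: on upgrade from a version
# older than 5.3.9~, drop the obsolete conffile, renaming it to
# /etc/php5/conf.d/sqlite.ini.dpkg-bak if the admin had modified it.
set -e

dpkg-maintscript-helper rm_conffile /etc/php5/conf.d/sqlite.ini 5.3.9~ -- "$@"
```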
jamespage | I'm guessing there is no nice way to manage rolling restarts across service units within a deployed service? | 16:46 |
jimbaker | jamespage, seems like this could be done with a subordinate service | 16:48 |
jimbaker | (most management issues seem to boil down to, use a subordinate service) | 16:49 |
jamespage | jimbaker, hmm - so how would that work? | 16:49 |
jamespage | basically I need each service unit to | 16:50 |
jamespage | a) unload its regions | 16:50 |
jamespage | b) restart | 16:50 |
jamespage | c) reload its regions | 16:50 |
jamespage | in sequence | 16:50 |
jamespage | I keep breaking my hbase deployment when I do upgrade-charm at the moment :-) | 16:50 |
jimbaker | jamespage, the units of the subordinate services could coordinate with each other to manage this. the question i would ask, is there a lightweight mgmt solution out there that basically does something like this? | 16:52 |
jimbaker | writing one would not be so hard i suppose | 16:52 |
jimbaker | so basically map this new subordinate service, say rollingrestart, to such a solution whether custom or using some package out there | 16:53 |
jamespage | jimbaker, something service agnostic? | 16:55 |
jimbaker | jamespage, that would seem to be ideal - something that knows how to work with standard services | 16:56 |
m_3 | jamespage: I was thinking we'd use multiple groups of slave services to do rolling upgrades... the complication is they'd be separate services | 16:56 |
jamespage | m_3: that thought had crossed my mind | 16:56 |
m_3 | datacluster1, datacluster2, datacluster3 | 16:56 |
jamespage | lol | 16:56 |
m_3 | they'd all be separately related to the master | 16:56 |
jamespage | lemme try that out | 16:56 |
m_3 | it's another reason to avoid peer relations imo... i haven't figured out how to really get that done with multiple peer-groups | 16:57 |
m_3 | master-slave seems doable tho | 16:57 |
mars | Hi, could someone tell me where I might find the juju logs on a new ec2 unit? | 16:57 |
mars | I am trying to debug the system setup - the machine isn't coming up (or is taking a very long time) | 16:57 |
m_3 | mars something like /var/lib/juju/units/<service-name>/charm.log | 16:58 |
mars | m_3, cool, which host? zookeeping (machine 0) or the new unit? (machine 1) | 16:58 |
mars | zookeeper even | 16:58 |
m_3 | typically machine1... lemme look if the bootstrap node keeps them in /var/log/juju or not | 16:59 |
jamespage | m_3: using dotdee is working out quite well | 17:01 |
mars | hmm, /var/lib/juju is empty on machine 1 | 17:01 |
m_3 | mars: yeah, bootstrap node in ec2 has them in /var/log/juju/{machine,provisioning}-agent.log | 17:01 |
jamespage | think I have discovered a bug tho - need to find kirkland | 17:01 |
m_3 | jamespage: cool! | 17:01 |
SpamapS | jamespage: how do hbase users usually keep from breaking their cluster? | 17:02 |
jamespage | SpamapS, there is a special 'formula' for doing it with a couple of helper scripts | 17:02 |
jamespage | SpamapS, http://hbase.apache.org/book/node.management.html | 17:03 |
mars | m_3, ah, darn. Just realized machine 1 wasn't even provisioned (although it is pending in juju status). bootstrap's /var/log/juju is empty. | 17:03 |
jamespage | basically you have to disable load balancing | 17:03 |
jamespage | restart each server in turn using the graceful_stop helper script | 17:03 |
jamespage | and turn it back on afterwards | 17:04 |
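Put together, the upstream procedure jamespage is describing looks roughly like the loop below. Paths, the location of the regionservers list, and flag spellings vary between HBase versions, so treat this as a sketch rather than a drop-in script.

```bash
# a) stop the region balancer
echo 'balance_switch false' | hbase shell

# b)/c) restart each region server in turn; graceful_stop.sh unloads the
# server's regions, restarts it, then reloads the regions it was carrying.
for rs in $(cat conf/regionservers); do
  bin/graceful_stop.sh --restart --reload "$rs"
done

# re-enable balancing afterwards
echo 'balance_switch true' | hbase shell
```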
m_3 | mars: other machines have /var/log/juju/machine-agent.log and then the charm log in /var/lib/juju/units/... | 17:04 |
mars | m_3, both directories are empty on the bootstrap machine. That is a bit odd. | 17:05 |
m_3 | mars: check that `dpkg -l | grep juju` | 17:05 |
mars | Obviously the data for wordpress machine 1 is somewhere in the system | 17:05 |
m_3 | returns something nonempty | 17:05 |
SpamapS | jamespage: hbase is surprisingly similar to CEPH (or perhaps not, they attempt the same thing with different problem spaces).. | 17:05 |
m_3 | mars: note that local (lxc) provider logs to different locations !!! :( | 17:05 |
mars | m_3, yep, running from the PPA, 0.5+bzr464-1juju2~oneiric1 | 17:06 |
SpamapS | wow.. | 17:06 |
SpamapS | there are like, 4 invented versions there | 17:06 |
SpamapS | 0.5, -1, juju2, ~oneiric1 ... | 17:06 |
jamespage | SpamapS, it would work with some sort of service level lock | 17:07 |
SpamapS | jamespage: zookeeper would be a nice way to facilitate that.. and you already have it available. :) | 17:07 |
mars | m_3, yes. I gave up on lxc. It messed with /cgroups/ to the point where I couldn't boot a container any more. I'm trying ec2 now after reading that it was stable | 17:07 |
jamespage | SpamapS, amusingly hbase uses zookeeper for coordination | 17:07 |
SpamapS | jamespage: yeah thats what I mean. | 17:08 |
jamespage | but not on restarts | 17:08 |
jamespage | :-( | 17:08 |
jamespage | but that does get me thinking.... | 17:08 |
SpamapS | jamespage: what does this graceful_stop.sh use to make things happen? ssh? | 17:09 |
jamespage | SpamapS, erm yes | 17:09 |
SpamapS | so its ssh in a for loop.. awesome. | 17:09 |
jamespage | common theme in hadoop stuff | 17:09 |
jamespage | forget init scripts | 17:09 |
jamespage | and service orchestration - run everything from the master using SSH | 17:09 |
SpamapS | jamespage: yeah I've noticed that hadoop seems to hate best operations practices | 17:10 |
SpamapS | jamespage: so, yeah, I think you need to then setup ssh keys to use upstream's preferred method | 17:10 |
mars | m_3, so juju status knows about the request for a new machine - I assume that data should have been logged somewhere, or the command couldn't retrieve it. | 17:10 |
mars | m_3, Perhaps I could start by checking that the deploy request was logged correctly? | 17:10 |
jamespage | nooooooo | 17:11 |
* jamespage head in hands | 17:11 | |
SpamapS | jamespage: and your upgrade-charm needs to include guards to prevent breaking the cluster.. by either ssh'ing into the main node to call graceful_stop .. or by using something a bit smarter like salt or fabric to roll these command executions around the right way | 17:11 |
SpamapS | jamespage: this is also what I've had to do on ceph.. and it kind of sucks. :-P | 17:12 |
SpamapS | jamespage: eventually it would be nice to have a "command bus" on top of juju's "config bus" | 17:12 |
jamespage | SpamapS, having multiple deployments of the hbase-slave charm against the same master works OK | 17:12 |
jamespage | but users can still break it | 17:12 |
SpamapS | jamespage: I think you can only make your charm refuse to break it... but one thing that sucks there is you might upgrade the charm and not realize it avoided breaking the cluster by not actually upgrading the software. ;) | 17:14 |
m_3 | mars: sorry... got a hangout... gimme a few | 17:16 |
mars | np | 17:16 |
_mup_ | juju/enhanced-relation-support r7 committed by jim.baker@canonical.com | 17:17 |
_mup_ | More details | 17:17 |
jamespage | SpamapS, the other challenge is zookeeper | 17:18 |
jamespage | if I want to expand the quorum then I need to start all region servers and the master :-) | 17:18 |
* jamespage goes to look at the zk formulas repository | 17:19 | |
SpamapS | jamespage: yeah, I think you're getting into the most difficult of services to orchestrate | 17:19 |
jamespage | SpamapS, w00t! | 17:20 |
SpamapS | jamespage: swift, I think, also gave adam_g fits because of the fairly simplistic things available to juju for orchestration | 17:20 |
m_3 | mars: the definitive place that info is stored is zookeeper on the bootstrap node (check /var/log/juju and /var/log/zookeeper) | 17:49 |
mars | m_3, checking | 17:52 |
mars | m_3, all I see in the zookeeper log are a lot of java.nio.channels.CancelledKeyException and EndOfStreamException | 17:58 |
mars | in fact, all of zookeeper.log's information is about connection information, and nothing about data | 18:00 |
mars | plug a little bit of information about the zookeeper server startup at the top | 18:01 |
mars | *plus | 18:01 |
m_3 | mars: sorry, let me back up a sec... you're running oneiric on ec2, using 'juju-origin: ppa' which catches juju-v464 on the units and you're running something like 464 on your precise(?) client? | 18:02 |
mars | m_3, oneiric client, 0.5+bzr457-1juju2~oneiric1 | 18:03 |
m_3 | mars: are you specifying default-instance-type and default-image-id? | 18:04 |
mars | m_3, ah, bootstrap machine is 0.5+bzr464-1juju2~oneiric1 | 18:04 |
mars | m_3, no, I am not specifying either | 18:04 |
m_3 | ok | 18:04 |
m_3 | so this sounds like my standard setup... except maybe a little later client | 18:05 |
m_3 | lemme check | 18:05 |
mars | I could start by updating my local juju package to 464 | 18:06 |
m_3 | yeah, running 463oneiric on my laptop, juju-origin: ppa picks up 464 on ec2 oneiric units | 18:06 |
mars | m_3, there also appears to be a problem with cloud-init on the bootstrap machine | 18:06 |
m_3 | mars: please try that | 18:06 |
mars | last line in cloud-init.log on the bootstrap machine is: [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user'] | 18:07 |
mars | but first, a package upgrade | 18:07 |
m_3 | and please destroy-environment then re-bootstrap | 18:08 |
mars | will do | 18:08 |
m_3 | I'll recycle my env to make sure it hasn't broken in the last few hours | 18:08 |
mars | m_3, looking good this time, I have files in bootstrap:/var/log/juju | 18:21 |
m_3 | yay | 18:24 |
SpamapS | m_3: another instance of bug 938463 | 18:33 |
_mup_ | Bug #938463: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 > | 18:33 |
SpamapS | I think we might need to issue a fix of some kind where the session file argument is no longer required and just prints a warning "agent not restartable!" | 18:34 |
m_3 | with a recommended action perhaps? | 18:34 |
SpamapS | Yeah | 18:34 |
SpamapS | "Upgrade your client!" | 18:34 |
m_3 | although I hate to open the gates for 'please recycle your env'... that's just wrong | 18:34 |
SpamapS | Well.. frankly, all old clients "did it wrong" | 18:37 |
SpamapS | I wonder.. | 18:37 |
SpamapS | we can, perhaps, fix this in packaging | 18:37 |
lifeless | rm -rf is not 'fixing' | 18:37 |
* SpamapS slowly lowers hand... n/m | 18:38 | |
SpamapS | ;) | 18:38 |
SpamapS | I think I have a decent answer | 18:38 |
SpamapS | If you try to bootstrap or deploy with juju-origin: ppa .. we should probably check the PPA version and warn you if you are out of sync | 18:39 |
SpamapS | of course, that would require juju to actually know what its version is | 18:39 |
m_3 | SpamapS: charmtester desperately needs 'juju --version' | 18:42 |
m_3 | rather than dpkg | awk | sed | 18:42 |
m_3 | BTW, true for juju-origin distro too | 18:43 |
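Until a real `juju --version` exists (the bug filed just below), the dpkg-based workaround m_3 is alluding to can be a one-liner:

```bash
# Ask dpkg directly for the installed juju version, no awk/sed required.
dpkg-query -W -f='${Version}\n' juju
```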
jimbaker | SpamapS, i suppose for bug 938463, it would be nice if we had updated juju.state.topology.VERSION. arguably this is a valid usage | 18:45 |
_mup_ | Bug #938463: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 > | 18:45 |
_mup_ | Bug #938899 was filed: juju needs a '--version' option <juju:New> < https://launchpad.net/bugs/938899 > | 18:50 |
SpamapS | jimbaker: hm | 18:51 |
SpamapS | jimbaker: would not help for people with an old client.. but it would help save the already-existing bootstrapped environments. | 18:52 |
SpamapS | I can't help but wonder if this parameter can just have a sane default though | 18:52 |
jimbaker | SpamapS, one option worth exploring is to at least inform the user of the juju client when there's a version mismatch between the client and a specific environment | 18:57 |
jimbaker | i believe there's a bug out there on this | 18:57 |
mars | m_3, looks like everything is now working on ec2. Thanks for the help. | 19:00 |
jcastro | SpamapS: m_3: should we explicitly say "no provider specific features" up front or do you want that as part of the review process still? | 19:01 |
SpamapS | jcastro: Perhaps we need a "tips" section that suggests that these will be judged negatively. If somebody does something unbelievably cool that requires SQS or ELB.. they could still win. | 19:03 |
jcastro | OTOH, it could be a good talking point | 19:08 |
jcastro | "man this is a good charm, but it's s3 specific, how can we make it work on other things in a way that doesn't suck?" will be a problem for us anyway | 19:09 |
mars | Another question for the room: I am writing a new charm and forgot to make the hooks executable (chmod +x). I made the change on my local system, but juju deploy appears to be using a cached (and broken) copy of my charm. Can I force juju to take the updated version? | 19:10 |
SpamapS | mars: heh.. you need to bump the revision file number | 19:18 |
SpamapS | mars: there's a feature in development now to add '-u' to deploy so that it automatically updates the revision number | 19:19 |
mars | SpamapS, ok, thanks. I was hoping to avoid that when in 'dev mode' :) | 19:19 |
mars | That sounds like it would work | 19:19 |
SpamapS | mars: yeah, its very close.. probably will land in a few days | 19:19 |
mars | Or even just 'deploy --force-update' | 19:19 |
SpamapS | hazmat: ^^ another person who finds deploy maddening. :) | 19:19 |
mars | hehe | 19:19 |
SpamapS | I think it should tell you "Deploying existing charm." or "Uploading new charm" | 19:20 |
hazmat | SpamapS, the fix for that is in the queue | 19:20 |
hazmat | SpamapS, we're short a reviewer this week (ben's on vacation) | 19:21 |
hazmat | fwiw the signature is 'deploy --upgrade' | 19:22 |
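Until `deploy --upgrade` lands, the manual loop for picking up a local charm change looks roughly like this. The repository path and charm name are examples, and it assumes `upgrade-charm` accepts `--repository` the same way `deploy` does.

```bash
# Hypothetical local-charm dev loop: fix the hooks, bump the revision file so
# juju sees a new charm version, then push it to the running service.
CHARM=~/charms/oneiric/mycharm

chmod +x "$CHARM"/hooks/*                 # the fix mars needed

rev=$(cat "$CHARM/revision")
echo $((rev + 1)) > "$CHARM/revision"     # juju only re-uploads on a new revision

juju upgrade-charm --repository ~/charms mycharm
```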
xerxas | can I halt the controller machine between deployments? | 19:22 |
xerxas | I'm deploying 3 machines, I don't want to run one more that will sit idle | 19:23 |
SpamapS | hazmat: can we have a follow-up that also adds an INFO message to tell whether or not a charm is being uploaded/pulled from the repo? That would be super helpful. | 19:25 |
SpamapS | xerxas: actually.. I think the answer is.. "maybe" | 19:26 |
xerxas | ;) | 19:26 |
SpamapS | xerxas: you might break the juju agents if you do that.. so when you resurrect it, you would need to check on them.. nobody has tested that use case. | 19:26 |
xerxas | works for 3 nodes, but not 2 ? | 19:26 |
xerxas | because of zookeeper ? | 19:26 |
xerxas | SpamapS: ok interesting | 19:26 |
SpamapS | xerxas: the ability to stop/start the agents only landed in trunk yesterday. ;) | 19:27 |
xerxas | ;) | 19:27 |
hazmat | SpamapS, already done | 19:27 |
hazmat | in that branch | 19:27 |
SpamapS | hazmat: .... you are the wind beneath my wings... | 19:32 |
* hazmat has a vision of icarus | 19:33 | |
* SpamapS hopes he lands on something soft when the wax melts | 19:51 | |
=== garyposter is now known as gary_poster | ||
jcastro | SpamapS: m_3: you guys busy? I need a G+ for like, 3 minutes. | 21:36 |
m_3 | jcastro: sure man | 21:41 |
jcastro | SpamapS: around? | 21:49 |
jcastro | m_3: well he's bailed, let's hang out with hazmat | 21:53 |
* jcastro starts the hangout | 21:54 | |
jcastro | m_3: hazmat: http://expertlabs.aaas.org/thinkup-launcher/ | 22:19 |
=== koolhead17 is now known as koolhead17|zzZZ | ||
m_3 | https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm | 22:40 |
=== lifeless_ is now known as lifeless | ||
SpamapS | jcastro: sorry I had a conflicting hangout | 23:05 |
jcastro | it's ok we changed a bunch of charm policy without you | 23:05 |
jcastro | j/k | 23:05 |