/srv/irclogs.ubuntu.com/2012/02/22/#juju.txt

[00:00] <SpamapS> interesting..
[00:00] <SpamapS> config-changed fires when the unit agent is restarted
[00:11] <benji> config-changed used to fire twice on deploy, I don't know if that is intentional or not, but I thought it was a little odd
[00:11] <benji> /ignore moe
[00:19] <hazmat> SpamapS, it's not landed
[00:19] <hazmat> benji, config-changed is meant to be idempotent.. it's fired once, always, before the service is started, just so the app can pull config and put it in place.. and then it will be fired again if the config is changed
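A minimal sketch of a config-changed hook written to be idempotent in that way (the charm layout, service name, and config key are hypothetical; config-get is the real juju hook tool):

    #!/bin/sh
    # hooks/config-changed -- must be safe to run any number of times;
    # paths are relative to the charm directory
    set -e
    PORT=$(config-get port)    # read the current setting from juju
    # regenerate the config from current settings; unconditionally
    # overwriting it is what makes repeated runs harmless
    sed "s/@PORT@/$PORT/" templates/app.conf.tmpl > /etc/myapp/app.conf
    # only bounce the service if it has already been started
    if service myapp status >/dev/null 2>&1; then
        service myapp restart
    fi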
[00:29] <_mup_> juju/symlink-guard r455 committed by kapil.thangavelu@canonical.com
[00:29] <_mup_> merge trunk
[00:38] <_mup_> juju/symlink-guard r456 committed by kapil.thangavelu@canonical.com
[00:38] <_mup_> address review comments, extract symlink constant to charm base
[01:15] <_mup_> juju/symlink-guard r457 committed by kapil.thangavelu@canonical.com
[01:15] <_mup_> clean up error message
[01:21] <_mup_> juju/trunk r465 committed by kapil.thangavelu@canonical.com
[01:21] <_mup_> merge symlink-guard: verify that charm symlinks don't extend outside of the charm and verify file types being included; internal symlinks are ok. [r=bcsaller,jimbaker][f=928348]
[04:59] <_mup_> juju/enhanced-relation-support r6 committed by jim.baker@canonical.com
[04:59] <_mup_> Sketch of impl details
[06:15] <dazz> Guys, can you scale (add units) to mysql?
[06:26] <SpamapS> dazz: right now that just creates two independent mysql servers..
[06:26] <SpamapS> dazz: that would be useful in some cases.. such as if you are sharding/partitioning
[06:26] <dazz> which is fine, is it clustered still?
[06:26] <dazz> automatically? That's how I thought it would work, in my head.
[06:26] <SpamapS> dazz: it's not "clustered" per se
[06:27] <SpamapS> dazz: there's probably a way to make them scale out in some kind of ring replication.. but not without some thought.
[06:27] <SpamapS> dazz: it would be cool to add something like galera replication to the charm and just have it automatically scale out that way
[06:28] <dazz> np, cheers
[06:30] <SpamapS> dazz: just curious.. how much experience do you have with scaling mysql?
[06:30] <dazz> little bit
[06:30] <dazz> nothing huge though
[06:31] <dazz> One of the "awesome" things I was looking at juju for was the automatic scaling of mysql.
[06:31] <dazz> automatically cluster them as you scale, etc.
[06:31] <SpamapS> dazz: I think we could put together a few different config options that would allow one to say something like 'deploy mysql --set scaling-mode=ring|galera|standalone' ..
[06:31] <dazz> I assume, really, these would be custom mysql charms for this ability.
[06:31] <dazz> that would be awesome
[06:32] <SpamapS> dazz: no, not custom.. it should be able to be encoded in the single mysql charm
[06:32] <dazz> yeah
[06:32] <SpamapS> dazz: juju allows customization very easily, but it encourages making things configurable
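A sketch of how such a knob could be encoded in the one charm (the scaling-mode option is SpamapS's hypothetical here, not something the mysql charm actually ships; the setup_* helpers are placeholders):

    # config.yaml (hypothetical option)
    options:
      scaling-mode:
        type: string
        default: standalone
        description: One of standalone, ring, or galera.

    # hooks/config-changed excerpt
    case "$(config-get scaling-mode)" in
        galera)     setup_galera_replication ;;
        ring)       setup_ring_replication ;;
        standalone) ;;    # nothing extra to do
    esac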
[06:32] <dazz> right, but the current mysql charm doesn't do this, right?
[06:35] <SpamapS> dazz: right. It only supports one-way replication
[06:35] <dazz> yeap
[06:35] <dazz> so if I were to extend the way the charm currently is, you'd like it ;)
[06:35] <SpamapS> dazz: part of the reason for that is that your application needs to understand how to use a mysql cluster, and there really aren't any open-source apps that do.
[06:35] <dazz> hrmm
[06:35] <SpamapS> mediawiki does know how to use readonly slaves..
[06:36] <SpamapS> which is why the mysql charm supports readonly slaves. :)
[06:53] <dazz> wonder if we can do some sort of load balancer for mysql....
[06:56] <SpamapS> dazz: sure, but without a synchronous replication technology like galera, you can't guarantee reads from one will be consistent with writes from another.
[07:07] <adam_g> SpamapS: still up?
[07:10] <SpamapS> adam_g: yeah, just poking at things. What's up?
[07:13] <adam_g> SpamapS: you mentioned there was some recent juju breakage in precise? is that still an issue? some issue hit all the CI nodes at once tonight, something I've not seen before
[07:13] <adam_g> agent startup fails with: juju.errors.JujuError: No session file specified
[07:13] <SpamapS> adam_g: the issue was that in the cloud-init blocks it was doing 'apt-get -y install', which, if it pulls in something that asks a debconf question, would be a fail.
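For reference, the standard guard against that failure mode (a sketch of the usual remedy, not necessarily the exact fix that landed in juju; the package name is illustrative):

    # with a noninteractive frontend, debconf takes defaults instead of
    # blocking the install waiting for an answer
    DEBIAN_FRONTEND=noninteractive apt-get -y install some-package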
[07:14] <SpamapS> adam_g: the problem is that you are trying to spawn juju units with an older version of juju, I think
[07:15] <SpamapS> adam_g: do you tear down the bootstrap every time?
[07:15] <adam_g> SpamapS: no.. I'm waiting on a re-bootstrap now, I'm sure it'll resolve
[07:16] <SpamapS> adam_g: it's possible the change made today had some unintended consequences... when I saw that error, I had used an older build to bootstrap with the newer stuff in the ppa and it failed
[07:16] <adam_g> yeah, I'm running with juju-origin: ppa, I think the bootstrap node was up for a week or something stupid
[07:17] <adam_g> I need to write a nightly job to teardown and re-bootstrap to avoid that
[07:17] <SpamapS> adam_g: the latest version actually allows you to *reboot*
[07:17] <SpamapS> adam_g: we might even be able to upgrade juju on a box.. :)
[07:17] <adam_g> SpamapS: as in, upstart jobs?
[07:17] <SpamapS> aye
[07:17] <adam_g> suhweet
[07:18] <SpamapS> adam_g: apparently there are still some cases where the restart will fail.. something with zookeeper and clocks and sessions
[07:18] <SpamapS> but this is still super sweet, as we can in theory have a long-running bootstrap node that we can upgrade
[07:20] <SpamapS> adam_g: yeah, I think the issue you're seeing is that the newer agents require a session file argument..
[07:20] <SpamapS> so we basically just broke *every* running bootstrapped environment
[07:21] <adam_g> heh
[07:21] <SpamapS> unless people manually update juju on their machine 0, and then manually restart the provisioning agent
[07:22] * SpamapS files a bug
[07:23] <SpamapS> I really think juju bootstrap needs to *cache* the version of juju it installs/deploys, and deploy that one rather than whatever is in juju-origin
[07:25] <adam_g> just run squid alongside zookeeper, and set up repository access thru it via cloud-init on nodes, right? :)
[07:25] <adam_g> okay, been in front of this thing since 8am. startin to feel like Daviey. g'night
[07:26] <SpamapS> adam_g: me too.. time to sleep. :)
[07:26] <_mup_> Bug #938463 was filed: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 >
[07:31] <adam_g> SpamapS: note, still getting the same error, this time on the bootstrap node: http://paste.ubuntu.com/852331/
[07:31] <SpamapS> adam_g: make sure whatever ran 'juju bootstrap' is also the latest version
[07:32] <SpamapS> Seems like that session file option could have a default to prevent this.
[07:32] <adam_g> SpamapS: is there something I need to pass as session file?
[07:33] <SpamapS> adam_g: no
[07:33] <SpamapS> adam_g: you just need the latest juju from the PPA
[07:33] <adam_g> ok
[07:33] <adam_g> upgraded client node as well.
[07:33] <adam_g> thanks
[07:34] <SpamapS> adam_g: note that 'distro' would probably be a more stable option. :)
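The juju-origin knob being discussed lives in the client's environments.yaml; a sketch of where it sits (the environment name and other keys are illustrative):

    # ~/.juju/environments.yaml
    environments:
      sample:
        type: ec2
        juju-origin: distro   # the stable archive version; 'ppa' tracks the daily builds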
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
[09:46] <_mup_> Bug #938521 was filed: constraints spec contains historical information <juju:New> < https://launchpad.net/bugs/938521 >
=== grapz is now known as grapz_afk
[11:20] <Leseb> Has anyone already set up juju with orchestra in a localhost-only environment (not EC2 instances)? There are so many bugs in there :(
[12:21] <niemeyer> Good morning!
[12:45] <bac> hi niemeyer, yesterday i posted to the juju list from 'brad.crittenden@canonical' instead of 'bac@canonical'. could you approve the message and check the settings to allow the alternate address in the future? sorry for the bother.
[12:50] <niemeyer> bac: Definitely, sorry for the trouble
[12:51] <niemeyer> bac: Done, also whitelisted it
[12:52] <bac> niemeyer: thanks!
[12:55] <niemeyer> bac: np
=== grapz_afk is now known as grapz
[13:40] <xerxas> Hi all
[13:41] <xerxas> is juju usable from a mac?
[13:41] <xerxas> I mean, the machine I type commands on
[13:41] <xerxas> as juju is python, I think it would easily work
[13:41] <xerxas> but I haven't looked at the code
[13:41] <xerxas> I'm just asking in case you already know it does run or doesn't run
[13:42] <xerxas> or maybe someone did it
[14:08] <benji> hi all, I'm having an issue similar to the one I had yesterday: when bootstrapping in EC2 on oneiric, "juju status" hangs
[14:12] <benji> hey! it's working. I must have just needed more patience.
[14:40] <benji> how does one deploy a service on a large EC2 instance?
[14:41] <benji> I'm trying to just change default-instance-type and default-image-id without any luck.
[14:46] <m_3> benji: I use: 'default-instance-type: m1.large' and 'default-image-id: ami-6fa27506'
[14:46] <m_3> benji: that image might not be ebs-backed... I usually prefer ephemeral
[14:47] <m_3> benji: that's oneiric, btw
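Put together, m_3's settings go in the environment definition; a sketch (only the two instance keys come from the conversation, the rest is illustrative):

    # ~/.juju/environments.yaml
    environments:
      sample:
        type: ec2
        default-instance-type: m1.large
        default-image-id: ami-6fa27506   # oneiric; possibly not EBS-backed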
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
[14:57] <hazmat> xerxas, yes, the client is mac compatible; if it's not, it's a bug. we do rely on one main extension, the zookeeper bindings
[14:57] <hazmat> xerxas, it is packaged in systems like homebrew or darwinports, i believe
[14:57] <xerxas> ok
[14:57] <xerxas> ahh
[14:58] <xerxas> didn't see it, sorry
[14:58] <xerxas> doesn't seem to be in homebrew
[14:58] <xerxas> hazmat: but I'll look
[14:58] <xerxas> thanks
[14:58] <hazmat> xerxas, ? https://github.com/mxcl/homebrew/blob/master/Library/Formula/zookeeper.rb
[14:59] <xerxas> ahh, you mean zookeeper
[14:59] <xerxas> but zookeeper is installed on the "controller" host
[14:59] <hazmat> xerxas, it is, but we need the bindings for the client to talk to the controller host
[15:00] <xerxas> I mean, the juju workflow is: my workstation <=> a server (on EC2, say) that runs zookeeper and controls the infrastructure <=> servers for charms
[15:00] <xerxas> ok
[15:00] <xerxas> then I don't need zookeeper, I just need the python zookeeper client, am I wrong?
[15:00] <hazmat> xerxas, that's right
[15:01] <hazmat> xerxas, this is a homebrew recipe attempt for juju ... http://repo.chenetz.net/juju.rb
[15:02] <hazmat> xerxas, but really, since it's python, you can just grab a checkout and run it, if you have the zk binding (+ python-yaml)
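A rough sketch of that checkout route on a mac (untested; assumes the homebrew zookeeper formula provides the python binding as hazmat suggests, that trunk lives at lp:juju, and that the checkout carries a bin/juju entry point):

    brew install zookeeper          # ZK client library plus bindings
    bzr branch lp:juju juju-trunk   # grab a checkout of trunk
    sudo easy_install PyYAML        # the python-yaml dependency
    # other python deps (e.g. twisted, txzookeeper) as needed
    cd juju-trunk && PYTHONPATH=. ./bin/juju status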
[15:03] <xerxas> yeah
[15:03] <xerxas> sure
[15:03] <xerxas> thanks
[15:03] <xerxas> I was just asking here before doing something stupid (if you knew it didn't work, it would have saved me some time ;) )
[15:03] <xerxas> hazmat: thanks, I'll try it
[15:03] <xerxas> then
[15:38] <jcastro> m_3: hey, did you see this wrappz work in lp:charms?
[15:47] <jcastro> SpamapS: fill this in for me:
[15:47] <jcastro> SpamapS: we prefer charms be licensed under the ... X ...
[15:47] <jcastro> solve for X!
[15:47] <jcastro> or do we care as long as it's OSI?
[16:17] <SpamapS> jcastro: GPLv3
[16:19] <SpamapS> jcastro: but yeah, that's just what we prefer.. anything that is "Free" works. :)
[16:27] <koolhead17> SpamapS: the bug is still not fixed after the upgrade of php :(
[16:30] <SpamapS> koolhead17: durn it.
[16:30] <SpamapS> koolhead17: there's code in the postinst to fix it .. so that surprises me.
[16:31] <koolhead17> :(
[16:43] <SpamapS> koolhead17: dpkg-maintscript-helper rm_conffile /etc/php5/conf.d/sqlite.ini 5.3.9~ -- "$@"
[16:45] <SpamapS> koolhead17: that should have resulted in the file being renamed to /etc/php5/conf.d/sqlite.ini.dpkg-bak
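For context, that helper is normally invoked from the package's maintainer scripts; a minimal sketch of the pattern (which php5 package and script carry it is the assumption here):

    #!/bin/sh
    # postinst of the package shipping the fix
    set -e
    # on upgrade from versions before 5.3.9~, retire the obsolete conffile;
    # a locally modified copy is kept aside as sqlite.ini.dpkg-bak
    dpkg-maintscript-helper rm_conffile \
        /etc/php5/conf.d/sqlite.ini 5.3.9~ -- "$@"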
[16:46] <jamespage> I'm guessing there is no nice way to manage rolling restarts across service units within a deployed service?
[16:48] <jimbaker> jamespage, seems like this could be done with a subordinate service
[16:49] <jimbaker> (most management issues seem to boil down to: use a subordinate service)
[16:49] <jamespage> jimbaker, hmm - so how would that work?
[16:50] <jamespage> basically I need each service unit to
[16:50] <jamespage> a) unload its regions
[16:50] <jamespage> b) restart
[16:50] <jamespage> c) reload its regions
[16:50] <jamespage> in sequence
[16:50] <jamespage> I keep breaking my hbase deployment when I do upgrade-charm at the moment :-)
[16:52] <jimbaker> jamespage, the units of the subordinate services could coordinate with each other to manage this. the question i would ask is: is there a lightweight mgmt solution out there that basically does something like this?
[16:52] <jimbaker> writing one would not be so hard, i suppose
[16:53] <jimbaker> so basically map this new subordinate service, say rollingrestart, to such a solution, whether custom or using some package out there
[16:55] <jamespage> jimbaker, something service agnostic?
[16:56] <jimbaker> jamespage, that would seem to be ideal - something that knows how to work with standard services
[16:56] <m_3> jamespage: I was thinking we'd use multiple groups of slave services to do rolling upgrades... the complication is they'd be separate services
[16:56] <jamespage> m_3: that thought had crossed my mind
[16:56] <m_3> datacluster1, datacluster2, datacluster3
[16:56] <jamespage> lol
[16:56] <m_3> they'd all be separately related to the master
[16:56] <jamespage> lemme try that out
[16:57] <m_3> it's another reason to avoid peer relations imo... i haven't figured out how to really get that done with multiple peer-groups
[16:57] <m_3> master-slave seems doable tho
[16:57] <mars> Hi, could someone tell me where I might find the juju logs on a new ec2 unit?
[16:57] <mars> I am trying to debug the system setup - the machine isn't coming up (or is taking a very long time)
[16:58] <m_3> mars: something like /var/lib/juju/units/<service-name>/charm.log
[16:58] <mars> m_3, cool, which host? zookeeping (machine 0) or the new unit? (machine 1)
[16:58] <mars> zookeeper even
[16:59] <m_3> typically machine 1... lemme look if the bootstrap node keeps them in /var/log/juju or not
[17:01] <jamespage> m_3: using dotdee is working out quite well
[17:01] <mars> hmm, /var/lib/juju is empty on machine 1
[17:01] <m_3> mars: yeah, the bootstrap node in ec2 has them in /var/log/juju/{machine,provisioning}-agent.log
[17:01] <jamespage> think I have discovered a bug tho - need to find kirkland
[17:01] <m_3> jamespage: cool!
[17:02] <SpamapS> jamespage: how do hbase users usually keep from breaking their cluster?
[17:02] <jamespage> SpamapS, there is a special 'formula' for doing it with a couple of helper scripts
[17:03] <jamespage> SpamapS, http://hbase.apache.org/book/node.management.html
[17:03] <mars> m_3, ah, darn. Just realized machine 1 wasn't even provisioned (although it is pending in juju status). bootstrap's /var/log/juju is empty.
[17:03] <jamespage> basically you have to disable load balancing
[17:03] <jamespage> restart each server in turn using the graceful_stop helper script
[17:04] <jamespage> and turn it back on afterwards
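Sketched as a shell loop, following the hbase docs linked above (hostnames are illustrative; assumes passwordless ssh from the master to each region server):

    # run from the hbase master
    echo 'balance_switch false' | hbase shell   # stop the balancer moving regions
    for rs in region1 region2 region3; do
        # graceful_stop.sh drains the server's regions and stops it; with
        # --restart --reload it comes back up and gets its regions returned
        ./bin/graceful_stop.sh --restart --reload "$rs"
    done
    echo 'balance_switch true' | hbase shell    # re-enable balancing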
[17:04] <m_3> mars: other machines have /var/log/juju/machine-agent.log and then the charm log in /var/lib/juju/units/...
[17:05] <mars> m_3, both directories are empty on the bootstrap machine. That is a bit odd.
[17:05] <m_3> mars: check that `dpkg -l | grep juju`
[17:05] <mars> Obviously the data for wordpress machine 1 is somewhere in the system
[17:05] <m_3> returns something nonempty
[17:05] <SpamapS> jamespage: hbase is surprisingly similar to CEPH (or perhaps not, they attempt the same thing with different problem spaces)..
[17:05] <m_3> mars: note that the local (lxc) provider logs to different locations !!! :(
[17:06] <mars> m_3, yep, running from the PPA, 0.5+bzr464-1juju2~oneiric1
[17:06] <SpamapS> wow..
[17:06] <SpamapS> there are, like, 4 invented versions there
[17:06] <SpamapS> 0.5, -1, juju2, ~oneiric1 ...
[17:07] <jamespage> SpamapS, it would work with some sort of service-level lock
[17:07] <SpamapS> jamespage: zookeeper would be a nice way to facilitate that.. and you already have it available. :)
[17:07] <mars> m_3, yes. I gave up on lxc. It messed with /cgroups/ to the point where I couldn't boot a container any more. I'm trying ec2 now after reading that it was stable
[17:07] <jamespage> SpamapS, amusingly, hbase uses zookeeper for coordination
[17:08] <SpamapS> jamespage: yeah, that's what I mean.
[17:08] <jamespage> but not on restarts
[17:08] <jamespage> :-(
[17:08] <jamespage> but that does get me thinking....
[17:09] <SpamapS> jamespage: what does this graceful_stop.sh use to make things happen? ssh?
[17:09] <jamespage> SpamapS, erm, yes
[17:09] <SpamapS> so it's ssh in a for loop.. awesome.
[17:09] <jamespage> common theme in hadoop stuff
[17:09] <jamespage> forget init scripts
[17:09] <jamespage> and service orchestration - run everything from the master using SSH
[17:10] <SpamapS> jamespage: yeah, I've noticed that hadoop seems to hate best operations practices
[17:10] <SpamapS> jamespage: so, yeah, I think you need to then set up ssh keys to use upstream's preferred method
[17:10] <mars> m_3, so juju status knows about the request for a new machine - I assume that data should have been logged somewhere, or the command couldn't retrieve it.
[17:10] <mars> m_3, Perhaps I could start by checking that the deploy request was logged correctly?
[17:11] <jamespage> nooooooo
[17:11] * jamespage head in hands
[17:11] <SpamapS> jamespage: and your upgrade-charm needs to include guards to prevent breaking the cluster.. by either ssh'ing into the main node to call graceful_stop .. or by using something a bit smarter like salt or fabric to roll these command executions around the right way
[17:12] <SpamapS> jamespage: this is also what I've had to do on ceph.. and it kind of sucks. :-P
[17:12] <SpamapS> jamespage: eventually it would be nice to have a "command bus" on top of juju's "config bus"
[17:12] <jamespage> SpamapS, having multiple deployments of the hbase-slave charm against the same master works OK
[17:12] <jamespage> but users can still break it
[17:14] <SpamapS> jamespage: I think you can only make your charm refuse to break it... but one thing that sucks there is you might upgrade the charm and not know it didn't break the cluster in favor of not actually upgrading the software. ;)
[17:16] <m_3> mars: sorry... got a hangout... gimme a few
[17:16] <mars> np
[17:17] <_mup_> juju/enhanced-relation-support r7 committed by jim.baker@canonical.com
[17:17] <_mup_> More details
[17:18] <jamespage> SpamapS, the other challenge is zookeeper
[17:18] <jamespage> if I want to expand the quorum then I need to restart all region servers and the master :-)
[17:19] * jamespage goes to look at the zk formulas repository
[17:19] <SpamapS> jamespage: yeah, I think you're getting into the most difficult of services to orchestrate
[17:20] <jamespage> SpamapS, w00t!
[17:20] <SpamapS> jamespage: swift, I think, also gave adam_g fits because of the fairly simplistic things available to juju for orchestration
[17:49] <m_3> mars: the definitive place that info is stored is zookeeper on the bootstrap node (check /var/log/juju and /var/log/zookeeper)
[17:52] <mars> m_3, checking
[17:58] <mars> m_3, all I see in the zookeeper log are a lot of java.nio.channels.CancelledKeyException and EndOfStreamException
[18:00] <mars> in fact, all of zookeeper.log's information is about connections, and nothing about data
[18:01] <mars> plug a little bit of information about the zookeeper server startup at the top
[18:01] <mars> *plus
[18:02] <m_3> mars: sorry, let me back up a sec... you're running oneiric on ec2, using 'juju-origin: ppa', which catches juju-v464 on the units, and you're running something like 464 on your precise(?) client?
[18:03] <mars> m_3, oneiric client, 0.5+bzr457-1juju2~oneiric1
[18:04] <m_3> mars: are you specifying default-instance-type and default-image-id?
[18:04] <mars> m_3, ah, bootstrap machine is 0.5+bzr464-1juju2~oneiric1
[18:04] <mars> m_3, no, I am not specifying either
[18:04] <m_3> ok
[18:05] <m_3> so this sounds like my standard setup... except maybe a little later client
[18:05] <m_3> lemme check
[18:06] <mars> I could start by updating my local juju package to 464
[18:06] <m_3> yeah, running 463oneiric on my laptop; juju-origin: ppa picks up 464 on ec2 oneiric units
[18:06] <mars> m_3, there also appears to be a problem with cloud-init on the bootstrap machine
[18:06] <m_3> mars: please try that
[18:07] <mars> last line in cloud-init.log on the bootstrap machine is: [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user']
[18:07] <mars> but first, a package upgrade
[18:08] <m_3> and please destroy-environment then re-bootstrap
[18:08] <mars> will do
[18:08] <m_3> I'll recycle my env to make sure it hasn't broken in the last few hours
[18:21] <mars> m_3, looking good this time, I have files in bootstrap:/var/log/juju
[18:24] <m_3> yay
[18:33] <SpamapS> m_3: another instance of bug 938463
[18:33] <_mup_> Bug #938463: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 >
[18:34] <SpamapS> I think we might need to issue a fix of some kind where the session file argument is no longer required and it just prints a warning "agent not restartable!"
[18:34] <m_3> with a recommended action, perhaps?
[18:34] <SpamapS> Yeah
[18:34] <SpamapS> "Upgrade your client!"
[18:34] <m_3> although I hate to open the gates for 'please recycle your env'... that's just wrong
[18:37] <SpamapS> Well.. frankly, all old clients "did it wrong"
[18:37] <SpamapS> I wonder..
[18:37] <SpamapS> we can, perhaps, fix this in packaging
[18:37] <lifeless> rm -rf is not 'fixing'
[18:38] * SpamapS slowly lowers hand... n/m
[18:38] <SpamapS> ;)
[18:38] <SpamapS> I think I have a decent answer
[18:39] <SpamapS> If you try to bootstrap or deploy with juju-origin: ppa .. we should probably check the PPA version and warn you if you are out of sync
[18:39] <SpamapS> of course, that would require juju to actually know what its version is
[18:42] <m_3> SpamapS: charmtester desperately needs 'juju --version'
[18:42] <m_3> rather than dpkg | awk | sed
[18:43] <m_3> BTW, true for juju-origin: distro too
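Until a real 'juju --version' exists, the dpkg pipeline m_3 alludes to can at least be tidied up; a sketch of the usual workaround:

    # ask dpkg directly for the installed version instead of grepping
    # the output of 'dpkg -l'
    dpkg-query -W -f='${Version}\n' juju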
[18:45] <jimbaker> SpamapS, i suppose for bug 938463, it would be nice if we had updated juju.state.topology.VERSION. arguably this is a valid usage
[18:45] <_mup_> Bug #938463: New agent requirement for session file breaks all existing bootstrapped envs using juju-origin PPA <juju:New> < https://launchpad.net/bugs/938463 >
[18:50] <_mup_> Bug #938899 was filed: juju needs a '--version' option <juju:New> < https://launchpad.net/bugs/938899 >
[18:51] <SpamapS> jimbaker: hm
[18:52] <SpamapS> jimbaker: would not help for people with an old client.. but it would help save the already-existing bootstrapped environments.
[18:52] <SpamapS> I can't help but wonder if this parameter can just have a sane default though
[18:57] <jimbaker> SpamapS, one option worth exploring is to at least inform the user of the juju client when there's a version mismatch between client and a specific envir
[18:57] <jimbaker> i believe there's a bug out there on this
[19:00] <mars> m_3, looks like everything is now working on ec2. Thanks for the help.
[19:01] <jcastro> SpamapS: m_3: should we explicitly say "no provider-specific features" up front or do you want that as part of the review process still?
[19:03] <SpamapS> jcastro: Perhaps we need a "tips" section that suggests that these will be judged negatively. If somebody does something unbelievably cool that requires SQS or ELB.. they could still win.
[19:08] <jcastro> OTOH, it could be a good talking point
[19:09] <jcastro> "man this is a good charm, but it's s3-specific, how can we make it work on other things in a way that doesn't suck?" will be a problem for us anyway
[19:10] <mars> Another question for the room: I am writing a new charm and forgot to make the hooks chmod +x. I made the change on my local system, but juju deploy appears to be using a cached (and broken) copy of my charm. Can I force juju to take the updated version?
[19:18] <SpamapS> mars: heh.. you need to bump the revision file number
[19:19] <SpamapS> mars: there's a feature in development now to add '-u' to deploy so that it automatically updates the revision number
[19:19] <mars> SpamapS, ok, thanks. I was hoping to avoid that when in 'dev mode' :)
[19:19] <mars> That sounds like it would work
[19:19] <SpamapS> mars: yeah, it's very close.. probably will land in a few days
[19:19] <mars> Or even just 'deploy --force-update'
[19:19] <SpamapS> hazmat: ^^ another person who finds deploy maddening. :)
[19:19] <mars> hehe
[19:20] <SpamapS> I think it should tell you "Deploying existing charm." or "Uploading new charm"
[19:20] <hazmat> SpamapS, the fix for that is in the queue
[19:21] <hazmat> SpamapS, we're short a reviewer this week (ben's on vacation)
[19:22] <hazmat> fwiw the signature is 'deploy --upgrade'
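The charm-dev loop being described, as a sketch (the charm name, series, and repository path are illustrative; --upgrade is the in-flight branch hazmat mentions, not yet in a release):

    chmod +x hooks/*    # the original problem: hooks must be executable
    # today: bump the charm's revision file by hand so deploy re-uploads it
    echo $(( $(cat revision) + 1 )) > revision
    juju deploy --repository ~/charms local:oneiric/mycharm
    # once the branch lands:
    juju deploy --upgrade --repository ~/charms local:oneiric/mycharm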
[19:22] <xerxas> can I halt the controller machine between deployments?
[19:23] <xerxas> I'm deploying 3 machines, I don't want to run one more that will sit idle
[19:25] <SpamapS> hazmat: can we have a follow-up that also adds an INFO message to tell whether or not a charm is being uploaded/pulled from the repo? That would be super helpful.
[19:26] <SpamapS> xerxas: actually.. I think the answer is.. "maybe"
[19:26] <xerxas> ;)
[19:26] <SpamapS> xerxas: you might break the juju agents if you do that.. so when you resurrect it, you would need to check on them.. nobody has tested that use case.
[19:26] <xerxas> works for 3 nodes, but not 2?
[19:26] <xerxas> because of zookeeper?
[19:26] <xerxas> SpamapS: ok, interesting
[19:27] <SpamapS> xerxas: the ability to stop/start the agents only landed in trunk yesterday. ;)
[19:27] <xerxas> ;)
[19:27] <hazmat> SpamapS, already done
[19:27] <hazmat> in that branch
[19:32] <SpamapS> hazmat: .... you are the wind beneath my wings...
[19:33] * hazmat has a vision of icarus
[19:51] * SpamapS hopes he lands on something soft when the wax melts
=== garyposter is now known as gary_poster
[21:36] <jcastro> SpamapS: m_3: you guys busy? I need a G+ for like, 3 minutes.
[21:41] <m_3> jcastro: sure man
[21:49] <jcastro> SpamapS: around?
[21:53] <jcastro> m_3: well, he's bailed, let's hang out with hazmat
[21:54] * jcastro starts the hangout
[22:19] <jcastro> m_3: hazmat: http://expertlabs.aaas.org/thinkup-launcher/
=== koolhead17 is now known as koolhead17|zzZZ
[22:40] <m_3> https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm
=== lifeless_ is now known as lifeless
[23:05] <SpamapS> jcastro: sorry, I had a conflicting hangout
[23:05] <jcastro> it's ok, we changed a bunch of charm policy without you
[23:05] <jcastro> j/k
